How to Move Data From Staging to Production in MongoDB

In this post, I explain how to move data, whether a subset selected by a query, a particular MongoDB collection, or a whole MongoDB database. As a MongoDB admin you may get the chance to move data from a staging/local/QA MongoDB to a production/LIVE MongoDB, whether it is a replica set or a standalone instance.

I assume that both staging and production run on Linux; in my case both are in a Linux CentOS environment.

mongodump and mongorestore are the two MongoDB utilities that are used most frequently for this kind of task.

  1. mongodump: this utility takes a dump of the data in BSON format (.bson) along with the metadata in JSON format (.json). For each collection, mongodump produces collectionName.bson (the raw BSON data) and collectionName.metadata.json (the metadata, such as index definitions).
  2. mongorestore: this utility restores the BSON data that was previously dumped with the mongodump utility.
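
As a quick reference, the general form of the two commands looks like this (a minimal sketch; the database, collection and path names are placeholders):

  # dump one collection of one database to an output directory
  mongodump --db <dbName> --collection <collectionName> --out <dumpDir>
  # restore that collection from the .bson file produced by mongodump
  mongorestore --db <dbName> --collection <collectionName> <dumpDir>/<dbName>/<collectionName>.bson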

Step #1 Log in to the Staging Server

Log in with your user and type: sudo su - root, or make sure your user has permission to run the commands.

Step 1.1. Make a directory to take a dump

 mkdir -p /opt/mongobackup/30sep/

Step 1.2. Take a dump
Take the dump using the mongodump command-line utility:

  • For a specific query: -q takes a MongoDB query, and only the matching documents are dumped; if you skip it, the whole of the given collection is dumped.
  • For a specific collection: provide the collection name ('plan' here) with -c or --collection; if you do not provide a collection name, mongodump backs up every collection in the database.
  • For a specific database: provide the database name with -d 'NameOfDB'; if you skip it, mongodump dumps all databases.
mongodump -d insurance -c plan -q '{"planId":{$in:[4,5,10,33,2]}}' -o /opt/mongobackup/30sep/

mongodump accepts a number of parameters, some required for your use case and some optional. The output path -o tells it where to write the dump; if you have a query, pass it with -q, otherwise skip it and the entire collection is backed up. Variations for the other cases are shown below.
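
For reference, the same command without the query, or without the collection filter, covers the other two cases from the list above (illustrative, reusing the same insurance database and output path):

  # dump the whole 'plan' collection (no -q filter)
  mongodump -d insurance -c plan -o /opt/mongobackup/30sep/
  # dump the entire 'insurance' database (no -c)
  mongodump -d insurance -o /opt/mongobackup/30sep/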

Console output

[root@mongodb ~]# mongodump -d insurance -c plan -q '{"planId":{$in:[4,5,10,33,2]}}' -o /opt/mongobackup/30sep/
connected to: 127.0.0.1
2016-09-30T19:23:25.711+0530 DATABASE: insurance       to     /opt/mongobackup/30sep/insurance
2016-09-30T19:23:25.713+0530    insurance.plan to /opt/mongobackup/30sep/insurance/plan.bson
2016-09-30T19:23:25.854+0530             13040 documents
2016-09-30T19:23:25.856+0530    Metadata for insurance.plan to /opt/mongobackup/30sep/insurance/plan.metadata.json

Step 1.3. Copy data from Staging to Production Server
Copy the dump from one server to another using the scp command.
Syntax: scp -r source_folder user@target_host:target_folder

  scp -r /opt/mongobackup/30sep  admin@10.10.1.4:/tmp/

Here, scp is a Linux utility used to copy data from one server (Linux node) to another. You can read its manual by typing "man scp".
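
Optionally, you can verify that the files arrived intact by comparing checksums on both ends (a rough sketch, assuming md5sum is available on both servers and reusing the example host from above):

  # checksum on the staging server
  md5sum /opt/mongobackup/30sep/insurance/plan.bson
  # checksum on the production server; the two values should match
  ssh admin@10.10.1.4 'md5sum /tmp/30sep/insurance/plan.bson'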

The console output looks like the following.

[root@mongodb ~]# scp -r /opt/mongobackup/30sep  pbadmin@10.2.17.4:/tmp/
pbadmin@10.2.17.4's password:
plan.metadata.json                                                                                              100%  891     0.9KB/s   00:00
plan.bson                                                                                                       100% 9591KB   9.4MB/s   00:01

Step #2 Log in to the Production Server

2.1 Log in to the production server
Log in to the production server via PuTTY, mRemoteNG, or another such tool. I personally use mRemoteNG, as it is a tab-based PuTTY front end for logging in to multiple servers. Don't forget to run the following, or make sure your logged-in user has the appropriate permissions to run the mongorestore command.

sudo su - root

2.2 Take a backup of the target collection

Take a backup of the collection you are about to restore into, i.e. "plan" here. This step is optional, but it is recommended so that you can restore the original state if anything goes wrong while loading the data. It is purely for safety: production data is critical, and you do not want it left in an inconsistent state.

  mongodump -d insurance -c plan -o /opt/mongobackup/30sep16
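
It also helps to note how many documents the collection holds right now, so you can compare the count after the restore (a minimal check from the mongo shell; illustrative only):

  # current document count of the target collection
  mongo insurance --eval 'db.plan.count()'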
  

2.3 Execute mongorestore on the production server
Run the following mongorestore command on the production MongoDB server where you want to restore the data. In the case of a replica set, execute mongorestore against the primary node, because writes are accepted only on the primary, not on the secondaries; a quick way to confirm which node you are on is shown just below.
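
If you are not sure whether you are connected to the primary, a quick check from the mongo shell (a minimal sketch, assuming the mongo shell is on the path):

  # prints true on the primary, false on a secondary
  mongo --eval 'db.isMaster().ismaster'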

  mongorestore -d insurance -c plan /tmp/30sep/insurance/plan.bson

Here, make sure your data is in the specified path, i.e. both the *.bson and the *.metadata.json file should be there.
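
A quick way to confirm both files are in place (illustrative):

  # expect plan.bson and plan.metadata.json in the listing
  ls -l /tmp/30sep/insurance/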

After executing the above mongorestore, you get something similar to the following.

[root@mongoDBLIVE ~]# mongorestore -d insurance -c plan /tmp/30sep/insurance/plan.bson
2016-09-30T19:33:23.936+0530    checking for collection data in /tmp/30sep/insurance/plan.bson
2016-09-30T19:33:23.942+0530    reading metadata file from /tmp/30sep/insurance/plan.metadata.json
2016-09-30T19:33:23.942+0530    restoring insurance.plan from file /tmp/30sep/insurance/plan.bson
2016-09-30T19:33:25.857+0530    restoring indexes for collection insurance.plan from metadata
2016-09-30T19:33:25.859+0530    Failed: restore error: insurance.plan: error creating indexes for insurance.plan: createIndex error: exception: Index with name: IDXsatartMnthly already exists with different options
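
Note that in the sample output above mongorestore loaded the documents but then failed while rebuilding indexes, because an index named IDXsatartMnthly already exists on the target collection with different options; such a conflict has to be resolved on the production side (for example by dropping and recreating the offending index) before re-running the restore, if you need that index rebuilt. Either way, it is worth verifying the restored data from the mongo shell. A minimal check, assuming the same insurance database and plan collection as above:

  # compare with the count recorded before the restore
  mongo insurance --eval 'db.plan.count()'
  # or count only the documents matched by the original dump query
  mongo insurance --eval 'db.plan.find({planId: {$in: [4,5,10,33,2]}}).count()'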

Write your comments/suggestions to improve this post.

