
Restoring a working MongoDB replica set from an EBS snapshot

Asked: 2020-12-09T02:51:42    Author: mpartan

I am using Bitnami MongoDB together with the MongoDB Helm chart (version 10.6.10, image tag 3.6.17-ol-7-r26) to run a Mongo cluster in AWS under Kubernetes; the cluster was initially created with the Helm chart. I am trying to get backups working with EBS snapshots, so that the whole volume of the primary MongoDB member is copied periodically and, in case something happens, can be restored into a new MongoDB installation (using the same Helm chart).

Currently I'm working on a restore process where the snapshot is turned into a new volume and a new Mongo deployment is created in a separate namespace of the same Kubernetes cluster, with that volume mounted. This works to the point where I can create the Kubernetes volumes manually from the snapshot (PersistentVolume and PersistentVolumeClaim), link them to the MongoDB Helm chart (the PV + PVC carry the names the chart expects), and start the Mongo server.

However, once the pods are running (primary, secondary and arbiter), the previously existing replica set configuration is still in place (from the old local database, I guess) and obviously not working, as it reflects the snapshotted state.

Now, following the MongoDB documentation, I would like to:

  • Destroy the existing replica set
  • Reset the arbiter and the secondary to default settings
  • Create a new replica set from the primary (with the data already in place on the primary)
  • Attach the arbiter and the secondary to the replica set so they sync the data, similar to what's in the docs (roughly the shell-side sequence I have in mind is sketched below).
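
What I have in mind on the mongo shell side is roughly the following sketch, based on the documented "restore from a backup of one member" procedure. The hostnames are only placeholders for whatever the chart's headless services actually resolve to, and the restarts in between would have to happen at the container/chart level:

// 1) With the restored volume mounted and mongod started WITHOUT --replSet (standalone),
//    drop the stale replication metadata that came along in the snapshot:
use local
db.dropDatabase()

// 2) Restart mongod WITH --replSet rs0 and initiate a fresh one-member replica set
//    (the actual data databases are still on the volume):
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "restored-mongodb-0.restored-mongodb-headless:27017" }] })

// 3) Add the empty secondary and the arbiter so they do an initial sync from this primary:
rs.add("restored-mongodb-1.restored-mongodb-headless:27017")
rs.addArb("restored-mongodb-arbiter-0.restored-mongodb-arbiter-headless:27017")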

Checking the state from the primary, I get

{
    "stateStr" : "REMOVED",
    "errmsg" : "Our replica set config is invalid or we are not a member of it",
    "codeName" : "InvalidReplicaSetConfig",
    .. other fields as well
}
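
As far as I can tell, rs.conf() isn't much help in this state, but the stale configuration that came along with the snapshot can still be read straight from the local database, and it presumably still lists the member hostnames of the source cluster rather than the new pods, which would explain the REMOVED state:

// Read the replica set config captured in the snapshot
// (rs.conf() may refuse to answer while the node reports REMOVED):
cfg = db.getSiblingDB("local").system.replset.findOne()
printjson(cfg.members)   // hosts still point at the pods of the original cluster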

So the replica set is degraded. When I try to delete the local database and move this to a standalone setup, I get

rs0:OTHER> rs.slaveOk()
rs0:OTHER> show dbs
admin       0.000GB
local       1.891GB
+ some actual databases for data, so I can see data is in place
rs0:OTHER> use local
switched to db local
rs0:OTHER> db.dropDatabase()
"errmsg" : "not authorized on local to execute command { dropDatabase: 1.0, .. few other fields .., $readPreference: { mode: \"secondaryPreferred\" }, $db: \"local\" }"
rs0:OTHER> db.grantRolesToUser('root', [{ role: 'root', db: 'local' }])
2020-12-08T18:25:31.103+0000 E QUERY    [thread1] Error: Primary stepped down while waiting for replication :
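
The only other idea I've had, to avoid writing to admin/local while there is no primary at all, is to force the old config to match the new pod name instead of dropping the local database. I haven't verified that a forced reconfig is accepted from the REMOVED state, and the hostname is again just a placeholder:

// Rewrite the stale config so this restored node is the only member,
// then try to force it in (force reconfig is intended for recovery scenarios):
cfg = db.getSiblingDB("local").system.replset.findOne()
cfg.members = [{ _id: 0, host: "restored-mongodb-0.restored-mongodb-headless:27017" }]
rs.reconfig(cfg, { force: true })
// If this node then becomes PRIMARY, the secondary and arbiter can be re-added
// with rs.add() / rs.addArb().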

As I'm using the Bitnami Helm chart, it passes some startup parameters to the pods which probably aren't in sync with the existing volume, which already has some configuration in place.

So I'm just wondering whether I'm going about this all wrong and the correct solution would be to start a fresh MongoDB chart and restore the database with mongorestore (so basically not using EBS snapshots at all), or whether there is a way to launch this from an existing snapshot/volume so that I get the benefit of both the Helm chart and EBS snapshots.

Author: mpartan. Reproduced under the CC BY-SA 4.0 license with a link to the original source and this disclaimer.
Link to original article: https://stackoverflow.com/questions/65205052/restoring-a-working-mongodb-replica-set-from-a-ebs-snapshot