I am trying to create a sharded collection. For that, I created a replica set of config servers. I ran into a problem while doing that, so I deleted the replica set document from "local.system.replset", changed the replica set name in the config files for the config servers, and then recreated the replica set with the new name but the old IPs. I am getting the following error:

 "replica set IDs do not match, ours:<new replset name>,remote node‘s: <old replset name>"

So it seems that the old replica set name is still bound somewhere. How can I remove it? Or is changing the IPs of the config servers the only option?
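For reference, the cached configuration can be inspected on each node from the mongo shell (a minimal check, assuming direct shell access to each config server):

    use local
    db.system.replset.find().pretty()   // the stored config document still carries the old set name and replica set ID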


Joe (BEST ANSWER):

One way is to pick one node that has the current data, remove its replica set configuration document, and start it with the new name.
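A minimal sketch of that step, assuming the chosen node listens on port 27019 with its data in /data/cfg1 (both placeholders; for config servers, keep --configsvr on the restarts as well):

    # start the node standalone, i.e. without --replSet, so "local" can be edited
    mongod --port 27019 --dbpath /data/cfg1

    # in a mongo shell connected to that node:
    use local
    db.system.replset.deleteMany({})    // remove the old replica set configuration document

    # shut down, restart with the new name, and initiate the set
    mongod --port 27019 --dbpath /data/cfg1 --replSet <new replset name>
    # then, back in the shell:
    rs.initiate()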

Then make a backup (just in case), delete all of the files in the dbpath of the other nodes, start them with the new name in their configuration files, and rs.add() them to the replica set.
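Roughly, for each of the other nodes (paths, hosts, and ports are placeholders):

    # back up the old data directory, then clear it
    cp -a /data/cfg2 /data/cfg2.bak
    rm -rf /data/cfg2/*

    # restart the node with the new replSet name in its configuration file,
    # then, from a mongo shell on the first node (now the primary):
    rs.add("host2:27019")
    rs.add("host3:27019")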

The nodes that were cleared will then copy all of the data from the first node via initial sync.
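You can watch that initial sync from the primary; the cleared members move through STARTUP2 and end up SECONDARY:

    rs.status().members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr)   // e.g. "host2:27019 -> SECONDARY"
    })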

Даніш Mухаммад:

I had a similar problem: I couldn't add new replica set members for a variety of reasons. For me, the best solution to most of the problems associated with MongoDB replication was the following:

  1. I started the database not with a configuration file but with a console command:

    mongod --port 27017 --dbpath "your/db/path" --logpath "your/db/path/mongod.log" --replSet rs0 --bind_ip 0.0.0.0

  2. Next, I added all members immediately when initiating the replica set, because a member added later with rs.add() would get stuck in the STARTUP state forever:

    rs.initiate({"_id": "rs0", "members": [{"_id": 0, "host": "127.0.0.1:27017"}, {"_id": 1, "host": "127.0.0.1:27018"}, {"_id": 2, "host": "127.0.0.1:27019"}]})

Replace the hosts with the ones you need. Just don't mix local and public IP addresses in one replica set; that will cause an error.

If you instead need to add a member to a production server whose replica set has already been initiated, you will need to overwrite the replica set configuration:

1. cfg=rs.conf()
2. cfg.members.push({_id: 1, host: "XXX.XXX.XX.XXX:27017"})
3. rs.reconfig(cfg)
4. rs.status()

After this, rs.status() should show the new replica set members with the SECONDARY status.