AWS MongoDB clusters: Helpful tips and tricks (part one)

Working with MongoDB clusters

If you use the NoSQL database MongoDB in your work, sooner or later you will find that one mongod instance is not enough and you will need to configure a cluster of several instances with replication and replica sets.
There are plenty of tutorials on the Internet about how to do this, so I won’t repeat them. Instead I want to offer a little advice on MongoDB clusters and draw your attention to some of the less obvious issues I encountered along the way, together with their possible solutions.

Choosing the right disk type for MongoDB. What’s better? What’s cheaper?

Amazon does not offer MongoDB as a managed service, which means that you have to configure everything by hand on separate instances. The type of disk you choose plays an important part in this process; Amazon offers three types of EBS volume:
– General Purpose SSD Volumes
– Provisioned IOPS SSD Volumes
– Magnetic Volumes

You can read a comparison of the advantages and disadvantages of these different types here.

The magnetic disk obviously doesn’t suit us because it’s very slow, so we’ll definitely need one of the SSD options. The ideal choice according to Amazon’s recommendations would be Provisioned IOPS SSD Volumes, but they’re more expensive than General Purpose SSD Volumes. You can compare their prices using this calculator tool.

When we multiply the price of one disk by the number of disks needed (a replica set must contain at least three mongod instances), the difference becomes significant. So I prefer to use General Purpose SSD Volumes, but at a larger size: every additional 100 GB gives us another 300 IOPS. A 200-300 GB disk should therefore cover the input/output load of the average small cluster and will save us 40 to 50 dollars per mongod instance per month. Of course, if you have a huge database and hundreds of thousands of users, you have only one option: Provisioned IOPS SSD Volumes. But for small and medium-sized systems, General Purpose SSD Volumes are a sound, fully operational and economical choice.
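As a reference point, gp2 performance scales linearly with size: the baseline is 3 IOPS per provisioned GB, so 100 GB gives 300 IOPS and a 300 GB disk gives 900 IOPS. Here is a minimal sketch of provisioning such a volume with boto3; the region, availability zone and size are assumptions you would replace with your own values:

```python
# Sketch: create a 300 GB General Purpose SSD (gp2) volume with boto3.
# Region, availability zone and size are placeholders, not real settings.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SIZE_GB = 300  # gp2 baseline IOPS = 3 * size, so 300 GB -> 900 IOPS

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=SIZE_GB,
    VolumeType="gp2",
)
print("Created volume %s with ~%d baseline IOPS"
      % (volume["VolumeId"], 3 * SIZE_GB))
```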

Replicas getting stuck

In Mongo we sometimes have a problem with replicas becoming stuck. It’s a very strange problem that is described in many places, but if you don’t catch it in time you can end up with some very unpleasant consequences: a bigger load on the other replicas, increased query times and more.

There are several tools for tracking the state of replicas, but almost all of them are paid solutions that require additional configuration. A cheap and very easy alternative is to write a monitoring script that sends an email with the current rs.status() of the problem replica set whenever one of the replicas in that set changes state.

In this example I’ve got three replica sets, each containing three replicas (nine replicas across three ports).
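A minimal sketch of such a script using pymongo and smtplib is shown below. The ports, hostnames and email addresses are assumptions for illustration, and the script only alerts on changes by remembering the previous states in a small JSON file:

```python
# Sketch: email an alert when any replica in a set changes state.
# Ports, host, state file and addresses are examples, not real settings.
import json
import smtplib
from email.mime.text import MIMEText
from pymongo import MongoClient

PORTS = [27017, 27018, 27019]      # one port per replica set in this example
STATE_FILE = "/tmp/mongo_rs_states.json"

def load_previous():
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except (IOError, ValueError):
        return {}

def send_alert(subject, body):
    # Assumes a local MTA is listening on localhost:25.
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = "monitor@example.com"
    msg["To"] = "ops@example.com"
    smtplib.SMTP("localhost").sendmail(msg["From"], [msg["To"]], msg.as_string())

previous, current = load_previous(), {}
for port in PORTS:
    client = MongoClient("localhost", port, directConnection=True,
                         serverSelectionTimeoutMS=5000)
    status = client.admin.command("replSetGetStatus")  # same data as rs.status()
    states = {m["name"]: m["stateStr"] for m in status["members"]}
    current[str(port)] = states
    # Alert only if we have a recorded previous state and it differs.
    if previous.get(str(port)) not in (None, states):
        send_alert("Replica state change in set %s" % status["set"],
                   json.dumps(status, indent=2, default=str))

with open(STATE_FILE, "w") as f:
    json.dump(current, f)
```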

Cron
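The script is then scheduled from cron; a crontab entry along these lines runs it every five minutes (the interpreter and script paths are placeholders):

```
*/5 * * * * /usr/bin/python /opt/scripts/check_replicas.py
```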

If a replica gets stuck in the RECOVERING status (when I say ‘stuck’ I mean that the replica has been in this state for a long time), the easiest and fastest way to revive it is the following (a scripted version appears after the list):

– stop this replica
– clear the directory where it stores a copy of the database
– re-launch the replica
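Here is a sketch of that procedure in Python. The service name and data path are assumptions (adjust them to however you run mongod), and you should only do this while the rest of the set is healthy, since the replica will re-copy its entire data set:

```python
# Sketch: revive a replica stuck in RECOVERING by forcing an initial sync.
# The service name and dbpath below are placeholders for your own setup.
import os
import shutil
import subprocess

SERVICE = "mongod"                 # hypothetical systemd unit for this replica
DBPATH = "/data/mongo/rs0-27017"   # this replica's copy of the database

subprocess.check_call(["systemctl", "stop", SERVICE])   # 1. stop the replica
shutil.rmtree(DBPATH)                                   # 2. clear its data directory
os.makedirs(DBPATH)                                     #    (restore ownership if mongod
                                                        #    runs as a dedicated user)
subprocess.check_call(["systemctl", "start", SERVICE])  # 3. re-launch: the replica goes
# through STARTUP2 and syncs a fresh copy from the other members of the set.
```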

When re-launched it enters the STARTUP2 state and, as soon as it has downloaded a copy of the database from the other replicas in the set, it goes back into operation (SECONDARY or PRIMARY).

Hopefully this saves you a few headaches working with MongoDB clusters. Come back soon for the second part of my MongoDB hints…