An intro to sharding (horizontal scalability) & replica sets (durability)

Recommended Reading

Seven Databases in Seven Weeks (pp 165-173)
Scaling MongoDB (pp All!)
MongoDB: The Definitive Guide (Chapters 9-10)
MongoDB In Action (Chapters 8-9)

Intro to Sharding & Replication

What makes document DBs unique is their ability to efficiently handle arbitrarily nested, schemaless ‘documents’. What makes MongoDB special among document DBs is its ability to scale across several servers by replicating (copying data to other servers) or sharding collections (splitting a collection into pieces), and performing queries in parallel. Both promote availability. More background here.

Sharding has a cost: if one part of the collection is lost, the whole thing is compromised – what good is querying a database of world country facts if the southern hemisphere is down? Mongo deals with this inherent sharding weakness via duplication.

Replication improves durability, but it also introduces issues not found in single-source databases. One problem is deciding which node gets promoted when a master node goes down. Mongo deals with this by giving each mongod service a vote, and the one with the freshest data is elected the new master.

The problem with even nodes
The concept of replication is simple enough. You write to one MongoDB server and that data is duplicated to others within the replica set. If one server is unavailable, then one of the others can be promoted to handle requests.
– MongoDB demands an odd number of total nodes in the replica set.

To see why an odd number of nodes is best, consider what might happen with a 4-node replica set. Let’s imagine two of the servers lose connectivity to the other two. One half will have the original master, but since it can’t see a clear majority of the network (2 vs 2), the master steps down. The other half can’t elect a master either, as it too can’t communicate with a clear majority of nodes. Neither half can accept writes, and the system is effectively down.
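To make that concrete, here is a minimal sketch of bringing up a three-member replica set from the mongo shell; the replica set name (rs0) and the hostnames are placeholders, not my actual cluster:

// each member is started with something like:
//   mongod --replSet rs0 --dbpath /data/rs0 --port 27017
// then, from a mongo shell connected to any one member:
rs.initiate({
  _id: "rs0",                         // must match the --replSet name
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
});
rs.status();   // shows which member is PRIMARY and the state of the other voters

With three voters, any two that can still see each other form a majority, so a single failure never leaves the set unable to elect a master.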

Quorum

CouchDB, on the other hand, allows multiple masters. Unlike Riak, MongoDB always knows which node holds the most recent data. Mongo’s concern is strong consistency on writes, and (a) preventing a multi-master scenario and (b) understanding the primacy of data spread across nodes are both strategies to achieve this.

SHARDING facilitates handling very large datasets and extensibility, aka horizontal scaling.

Rather than having a single server holding all the data, sharding splits the data onto other servers. I tend to think of the histogram-style statistics that RDBMS engines keep here: perhaps the first 3 chunks of data on the histogram might go on shard 1, the 4th chunk on shard 2, and chunks 5, 6 & 7 on shard 3.

A shard is one or more servers in a cluster that are responsible for some subset of the data. If we had a cluster containing 1,000,000 documents representing users of a website, one shard might contain information about 200,000 of those users.

To evenly spread data across shards, MongoDB moves subsets of the data from shard to shard. It figures out which subset(s) to move where based on a key that you choose. For example, we might choose to split up a collection of users based on the username field. MongoDB uses range-based splitting: that is, data is split into chunks of given ranges, e.g. [“a”, “d”).
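As a rough sketch of what that looks like in the mongo shell (the database and collection names here are made up for illustration), the shard key is declared when the collection is sharded through a mongos:

sh.enableSharding("mydb");                          // allow collections in mydb to be sharded
sh.shardCollection("mydb.users", { username: 1 });  // range-based shard key on the username field
sh.status();                                        // lists the chunk ranges, e.g. ["a", "d"), and which shard owns each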

A shard can consist of many servers. If there is more than one server in a shard, each server has an identical copy of the subset of data. In production, a shard will usually be a replica set.
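That is why, when a shard is registered with the cluster, you typically point mongos at a replica set rather than a single machine. A hedged sketch, with made-up replica set and host names:

// each member of the shard is started with something like:
//   mongod --shardsvr --replSet usersRS --dbpath /data/usersRS --port 27018
// then, from a mongo shell connected to a mongos router:
sh.addShard("usersRS/mongo2:27018");   // naming one member is enough; the rest are discovered
sh.status();                           // the shard shows up as usersRS with all its members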

Sharding requires some kind of lookup. Imagine we created a table to store city names alphabetically. We need some way to know that, e.g., cities starting A-D go to server mongo2, P-Z to mongo4, and so on. In MongoDB you create a config server to hold that mapping. MongoDB makes this easier with autosharding: a balancer manages the spread of data for you.
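To make the alphabetical lookup idea concrete, here is a hedged sketch of splitting and placing city-name ranges by hand; the collection, split points and shard names are invented, and in practice the balancer migrates chunks like this automatically:

sh.shardCollection("mydb.cities", { name: 1 });        // shard the cities collection on name (mydb was enabled above)
sh.splitAt("mydb.cities", { name: "E" });              // ["A", "E") becomes its own chunk
sh.splitAt("mydb.cities", { name: "P" });              // likewise ["E", "P") and ["P", ...)
sh.moveChunk("mydb.cities", { name: "A" }, "mongo2");  // chunk containing "A" goes to the shard named mongo2
sh.moveChunk("mydb.cities", { name: "P" }, "mongo4");  // chunk containing "P" goes to the shard named mongo4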

(Networking) Success!

With some help from my former colleague and current classmate Pete Griffiths, I managed to log on remotely to MongoDB installed on the ‘clear’ Pi (the one with the 32GB SD card and a successful GitHub installation) via MongoVue installed on the Win7 laptop.

This was a breakthrough, as I’m now confident the devices in my cluster will be able to see each other, which then allows me to begin spreading data across nodes, aka sharding.

Node(s) visible

Here you can see a connection to ‘clear’ has been created. Clear has been given a static IP address of 10.0.0.2, yellow 10.0.0.3, red .4, and so on. MongoVue is running on my laptop, but it is able to connect to the remote devices through the network switch.
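MongoVue hides it behind a GUI, but the same connection can be opened from a plain mongo shell on the laptop. A small sketch using the static IP above, assuming mongod on clear is listening on that interface (the collection name is just for illustration):

// open a connection from the laptop to the mongod running on 'clear'
var cleardb = connect("10.0.0.2:27017/test");
cleardb.pings.insert({ node: "clear", checked: new Date() });   // dummy document to prove the write got through
cleardb.pings.find();                                           // read it back over the network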

You can see the dummy document I entered into a new collection!

Clear is plugged in to the TP-Link switch on port 3 (lit up; laptop in port 1).

I’m just waiting for my other four 32GB cards to come in the post, and then I should be able to install Mongo on the remaining R Pis and hook everything together into the network switch.

We also Linuxed the old Dell using UNetbootin, removing WinXP, which is incompatible with MongoDB. Raspbian is crafted for the Pi’s puny ARM processor, so on the Dell I installed Debian, which Raspbian is derived from, so it should have more or less the same functionality and syntax.

Also created virtual machines on the laptop using VirtualBox. I think I’ll need these to act as the ‘config servers’ which MongoDB requires for coordination. Config servers store all cluster metadata, most importantly the mapping from chunks to shards:
Config servers maintain the shard metadata in a config database. The config database stores the relationship between chunks and where they reside within a sharded cluster. Without a config database, the mongos instances would be unable to route queries or write operations within the cluster.

– see more here
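Once the cluster is running, that chunk-to-shard mapping can be inspected directly. A small sketch, run from a mongo shell connected to a mongos (the namespace is hypothetical):

var config = db.getSiblingDB("config");
config.shards.find();                      // the shards (replica sets) registered with the cluster
config.chunks.find({ ns: "mydb.users" });  // each chunk's min/max key range and the shard that owns it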

Divide and conquer