Tags: nosql, hdfs, distributed-computing, sharding, distributed-filesystem

Sharding vs DFS


As far as I understand, sharding (e.g. in MongoDB) and distributed file systems (e.g. HDFS in HBase or Hypertable) are different mechanisms that databases use to scale out, but I wonder how they compare.


Solution

  • Traditional sharding involves breaking tables into a small number of pieces and running each piece (or "shard") in a separate database on a separate machine. Because of the large shard size, this mechanism can be prone to imbalances due to hot spots and unequal growth, as was evidenced by the Foursquare incident. Also, because each shard runs on a separate machine, these systems can experience availability problems if one of the machines goes down. To mitigate this problem, most sharding systems, including MongoDB, implement replica groups: each machine is replaced by a set of three machines in a master-plus-two-slaves configuration.

    There are a couple of problems with this design. First, if a replica fails and the group is left with only two members, then bringing the replication count back to three requires cloning the data from one of those two machines. Since there are only two machines in the entire cluster that can be used to re-create the replica, re-replication puts an enormous drag on one of them, causing serious performance problems on the shard in question (it takes over two hours to copy 1TB over a gigabit link; see the back-of-the-envelope calculation after this answer). The second problem is that when one of the replicas goes down, it needs to be replaced with a new machine. Even if there is plenty of spare capacity across the cluster, that spare capacity cannot be used to rectify the replication problem; the only way to solve it is to replace the machine. This becomes very challenging from an operational standpoint as cluster sizes grow into the hundreds or thousands of machines.

    The Bigtable+GFS design solves these problems. First, the table data is broken down into much finer-grained "tablets"; a typical machine in a Bigtable cluster will often host 500+ tablets. If an imbalance occurs, resolving it is simply a matter of migrating a small number of tablets from one machine to another. If a TabletServer goes down, then because the data set is broken down and replicated at such fine granularity, hundreds of machines can participate in the recovery process, which distributes the recovery burden and speeds recovery time (the toy model after this answer illustrates the effect). Also, because the data is not tied to a specific machine or machines, the spare capacity on all machines in the cluster can be applied to the failure. There is no operational requirement to replace the failed machine, since any spare capacity throughout the cluster can be used to rectify the replication imbalance.

    • Doug Judd, CEO, Hypertable Inc.
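
To put a number on the re-replication cost mentioned in the answer, here is a back-of-the-envelope check of the "over two hours" figure. The inputs are illustrative assumptions, not from the answer: a 1 TB shard (10^12 bytes) and a fully saturated 1 Gbit/s link with no protocol overhead.

```python
# Back-of-the-envelope check of the "over two hours to copy 1TB" figure.
# Assumptions (illustrative): 1 TB = 10**12 bytes, a fully saturated
# 1 Gbit/s link, and no protocol overhead.

shard_bytes = 10**12              # size of the shard to re-clone
link_bits_per_sec = 10**9         # gigabit link

seconds = shard_bytes * 8 / link_bits_per_sec
print(f"~{seconds / 3600:.1f} hours")   # prints ~2.2 hours
```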
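And here is a toy model, my own sketch rather than part of the original answer, contrasting coarse shards with fine-grained tablets. It assumes recovery is purely network-bound and that the parallel source machines stream without contending with one another; the 200-machine figure is an illustrative assumption.

```python
# Toy model of how recovery time scales with replication granularity.
# Assumes recovery is network-bound and that parallel source machines
# stream without contending with each other.

def recovery_hours(failed_data_tb: float, parallel_sources: int,
                   link_gbit: float = 1.0) -> float:
    """Hours to re-replicate a failed machine's data when the copying is
    split evenly across `parallel_sources` machines, each streaming at
    `link_gbit` Gbit/s."""
    bits = failed_data_tb * 8e12
    return bits / (parallel_sources * link_gbit * 1e9) / 3600

# Coarse shard + 3-way replica group: one surviving replica does all the work.
print(recovery_hours(1.0, parallel_sources=1))    # ~2.2 hours

# Fine-grained tablets: replicas of the failed machine's tablets are
# scattered across (say) 200 machines that can all act as sources.
print(recovery_hours(1.0, parallel_sources=200))  # ~0.01 hours (about 40 s)
```

The point of the model is simply that fine granularity turns recovery into a cluster-wide, parallel operation rather than a burden on a single surviving replica.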