There is something that still doesn't convince me about Docker.io. Suppose I deploy a MongoDB replica set in three different containers installed on a single Linux VM running on AWS. If for any reason the VM goes down, then all the Mongo instances belonging to the replica set go down with it. So where is the huge advantage of having different containers, even 200 of them, running on the same host machine? That way I can never achieve fault tolerance. Maybe there is something I'm not considering. Yes, I understand that fast deployment and fast configuration are two of the main reasons that make Docker.io really fantastic for developers and sysadmins.
In general I would argue that containerisation is not the same thing as virtualisation. Beyond the simple fact that there is no real virtualisation (the host kernel is shared), it is a very different approach: container-based development advocates one service per container and often follows a microservice or 12-factor app style in which state is carefully managed.
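As a minimal illustration of that style (the image name and environment variable here are hypothetical), each container runs exactly one process and gets its configuration from the environment rather than from files baked into the image:

```
# One service per container, configured via environment variables
# (12-factor style); "my-api-image" and MONGO_URL are placeholders.
docker run -d --name api \
    -e MONGO_URL=mongodb://db-host:27017/app \
    my-api-image
```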
There is no reason why containers can't be deployed across separate VMs or servers, although this does require a bit more work with respect to networking. Containers also incur significantly less overhead than VMs, both in terms of disk footprint and CPU. They boot much faster, meaning deployments can scale up and down extremely quickly.
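For the replica set in the question, a rough sketch of a cross-host deployment might look like the following (the hostnames host1 to host3 are placeholders, and it assumes the three hosts can reach each other on port 27017):

```
# On each of the three hosts, start a single mongod container
# that participates in replica set "rs0":
docker run -d --name mongo-rs0 -p 27017:27017 mongo \
    mongod --replSet rs0

# Then, from the mongo shell on any one member, initiate the set,
# substituting your real host addresses:
docker exec -it mongo-rs0 mongo --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "host1:27017" },
      { _id: 1, host: "host2:27017" },
      { _id: 2, host: "host3:27017" }
    ]
  })'
```

With the members on separate hosts (or better, separate availability zones), losing one VM no longer takes down the whole replica set.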