Distributed storage architectures in modern data centers typically keep 2-3 replicas of each piece of data, so that it remains available when a machine fails.
As I understand it, there is still a non-zero probability of all replicas failing, and at that scale of operation there must be cases where this actually happens. How do large data centers protect against this kind of failure, especially for important data like email or images? Adding further redundancy only makes such failures less likely, not impossible.
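For a rough sense of the numbers, here is a back-of-envelope sketch. The per-replica failure probability is made up, and the independence assumption is exactly what real data centers violate (correlated rack, power, and software failures), so this is an illustration of the "unlikely but not impossible" point, not a real estimate:

```python
# Hypothetical annual probability of losing any single replica.
p_loss_per_replica_per_year = 0.02

# Assuming independent failures, all replicas must be lost for the data to be gone.
for replicas in (1, 2, 3, 4):
    p_all_lost = p_loss_per_replica_per_year ** replicas
    print(f"{replicas} replica(s): P(all lost in a year) ~ {p_all_lost:.2e}")

# With billions of objects, even a tiny per-object probability adds up to
# some expected losses, which is why replication alone is not the whole story.
```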
NYC Tech Talk Series: How Google Backs Up the Internet is a good explanation of how Google handles backups and achieves reliability. A text-based explanation is here.
Most importantly, the talk says the following:
Again, as the other answer says, all you can do is cover as many bases as possible, so that the probability of losing every copy is extremely low and the window of data loss (between one backup failing and being rebuilt from the other backups) is extremely short.
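To illustrate why that rebuild window matters, here is a small sketch. The numbers and the independence assumption are again illustrative only; the point is that once one copy fails, the data is only at risk until it has been re-replicated, so shrinking that window shrinks the loss probability:

```python
# Illustrative "window of vulnerability" calculation (assumed numbers).
HOURS_PER_YEAR = 24 * 365
p_replica_fail_per_year = 0.02                     # assumed per-replica failure rate
p_fail_per_hour = p_replica_fail_per_year / HOURS_PER_YEAR

def p_total_loss(replicas: int, rebuild_hours: float) -> float:
    """Probability that, after one replica fails, every remaining replica
    also fails before the rebuild completes (independence assumed)."""
    return (p_fail_per_hour * rebuild_hours) ** (replicas - 1)

for rebuild_hours in (1, 6, 24):
    print(f"3 replicas, rebuild in {rebuild_hours:>2}h: "
          f"P(total loss) ~ {p_total_loss(3, rebuild_hours):.2e}")
```

The faster a failed copy is rebuilt from the surviving ones, the smaller the chance that a second and third failure land inside the same window.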