I've been wondering for a while: how do websites like Facebook write their code so that it can run across multiple servers?
How can the code take into account that several servers will be running the same code, and actually gain from adding more?
Or does the web server perhaps handle this regardless of the code?
By sharing and networking. The code 'should' be the same for one server or several.
You can share data via databases, memory via things like Memcache, the load via a balancer, etc. If you specialise servers the way Google does (some do URL fetching, some hold data, some do number crunching, etc.), the hardware at hand can be better utilised.
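A minimal sketch of the "share data" idea, assuming a Memcache-style store reachable from every web server. The SharedStore class and the handler names here are invented stand-ins for illustration, not a real client API:

```python
# Minimal sketch: session data kept in a shared store instead of local
# process memory, so any web server in the pool can handle any request.
# SharedStore is a stand-in for something like Memcache or a database;
# the names here are illustrative, not a real client library.

class SharedStore:
    """In-process stand-in for a networked cache such as Memcache."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


store = SharedStore()

def handle_login(session_id, user):
    # Any server can write the session...
    store.set("session:" + session_id, {"user": user})

def handle_request(session_id):
    # ...and any other server can read it back later.
    session = store.get("session:" + session_id)
    return "hello, " + session["user"] if session else "please log in"


if __name__ == "__main__":
    handle_login("abc123", "alice")   # request served by server A
    print(handle_request("abc123"))   # request served by server B, same result
```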
The code can use dispatch logic (normally abstracted behind an API) so that it works the same whether there is one server or millions of them.
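A rough sketch of what such an abstraction can look like; ServerPool and pick() are made-up names for illustration, but the point is that the calling code does not change when servers are added:

```python
# Rough sketch of dispatch logic hidden behind a small API: the calling
# code is identical whether the pool holds one server or thousands.
# ServerPool and pick() are invented names for illustration.

import hashlib

class ServerPool:
    def __init__(self, servers):
        self.servers = list(servers)

    def pick(self, key):
        # Hash the key so the same key always lands on the same server.
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return self.servers[int(digest, 16) % len(self.servers)]


def fetch_profile(pool, user_id):
    server = pool.pick(user_id)          # the dispatch decision is hidden here
    return "would fetch %s from %s" % (user_id, server)


if __name__ == "__main__":
    one = ServerPool(["db1"])
    many = ServerPool(["db1", "db2", "db3", "db4"])
    # Same calling code, different amounts of hardware behind it.
    print(fetch_profile(one, "user:42"))
    print(fetch_profile(many, "user:42"))
```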
IPC (Inter-Process Communication) can be network-enabled, allowing a 'tighter' bonding of services. Google even has a Protocol Buffers project to help with this.
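Protocol Buffers itself generates code from .proto files, so this is not its real API; it is only a bare-bones illustration of the underlying framing idea using the standard library: a message is serialised and length-prefixed so one service can send structured data to another over a socket.

```python
# Bare-bones illustration of network IPC framing (not the actual Protocol
# Buffers API): serialise a message and prefix it with its length so the
# receiving service knows how many bytes to read off the socket.

import json
import struct

def encode_message(payload):
    body = json.dumps(payload).encode("utf-8")
    # 4-byte big-endian length prefix, then the serialised body.
    return struct.pack(">I", len(body)) + body

def decode_message(data):
    (length,) = struct.unpack(">I", data[:4])
    body = data[4:4 + length]
    return json.loads(body.decode("utf-8"))


if __name__ == "__main__":
    # In real code these bytes would travel over a TCP socket between services.
    wire_bytes = encode_message({"service": "crawler", "url": "http://example.com"})
    print(decode_message(wire_bytes))
```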
Basically, servers have to share to get any real benefit (beyond failover/backup), and the code needs a level of abstraction to help with that sharing. The actual distribution of work typically uses round-robin or map/reduce logic.
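Toy versions of those two strategies, purely for illustration (the server names and chunks are made up): round-robin spreads requests evenly over a pool, while map/reduce splits a job across servers and then combines the partial results.

```python
# Illustrative only: round-robin request distribution and a tiny
# map/reduce word count across simulated servers.

import itertools
from collections import Counter

# --- Round-robin: each request goes to the next server in the cycle.
servers = itertools.cycle(["web1", "web2", "web3"])
for request in ["req-a", "req-b", "req-c", "req-d"]:
    print(request, "->", next(servers))

# --- Map/reduce: count words as if each chunk lived on a different server.
chunks = [
    "the cat sat on the mat",     # held by server 1
    "the dog sat on the log",     # held by server 2
]

def map_phase(chunk):
    # Each server counts words in its own chunk independently.
    return Counter(chunk.split())

def reduce_phase(partial_counts):
    # One server merges the partial counts into the final answer.
    total = Counter()
    for partial in partial_counts:
        total += partial
    return total

print(reduce_phase(map_phase(c) for c in chunks))
```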