What I'm looking for is a thorough explanation of how large applications handle and create their user databases and network connections. Maybe you have built a large application yourself? Do they use raw sockets or some other technique? GZIP? JSON?
The reason I'm asking is that I'm currently writing an application that will need a user database on the server side and, of course, either a socket connection or HTTP requests from the client side. I know that a lot of people will use the application; I just don't know how to make it scalable.
I think this is a question that'll help more people than just me too =)
Any help is much appreciated!
// Alexander
In addition to using a web server that scales well (Apache does nicely, if configured properly), there are a few more technologies that can help you serve as many clients as possible, as quickly as possible.
Using a separate server for static content (e.g. nginx). This is a server that specializes in serving static content as fast as possible: images, JS files, CSS files, and anything else that doesn't change from one request to the next. A sketch of what that might look like follows below.
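For the static-content server, the configuration can stay very small. Here is a rough sketch of an nginx server block; the domain, paths, and cache lifetime are placeholders for your own setup, not a recommendation of specific values:

```nginx
# Dedicated server block for static files only (sketch).
server {
    listen 80;
    server_name static.example.com;   # placeholder domain

    root /var/www/static;             # images, js, css live here
    expires 30d;                      # let browsers cache aggressively
    access_log off;                   # skip per-request logging for static hits
}
```

The point is simply that static requests never touch your application code or your database; they are answered straight from disk (or the OS page cache) by a server built for exactly that.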
Using memcached to limit database calls. This helps when the database is the bottleneck, which can happen quickly if you hit it several times per request for dynamic content. Memcached is essentially a very fast in-memory (read: temporary) store for data that doesn't change very rapidly. You modify your code to check memcached first: if the data is there (i.e. not expired; you set a stale time after which the entry is removed), you use it instead of querying the database. If it isn't, you query the database and put the result into memcached with a timeout, ready for the next request. A rough sketch of that pattern is shown below.
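Roughly, the read path looks like this. This is a minimal sketch assuming the Python pymemcache client; load_user_from_db(), the key format, and the 5-minute expiry are placeholders for your own code:

```python
import json
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))   # address of your memcached instance

def load_user_from_db(user_id):
    # Placeholder for your real database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)                        # 1. check memcached first
    if cached is not None:
        return json.loads(cached)                  # 2. hit: no database call
    user = load_user_from_db(user_id)              # 3. miss: query the database
    cache.set(key, json.dumps(user), expire=300)   # 4. store for 5 minutes
    return user
```

The same idea applies in any language: the database is only hit on a cache miss, and stale entries simply expire after the timeout you set.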
This blog has good information as well.
Others have mentioned this book as a good overall resource for gaining some insight as well.