When deploying web applications, a common approach is to implement the actual application logic as a series of services and expose them via HTTP, then put some front ends between the services and the users. Typically those front ends take care of things like SSL termination, session data, load balancing, request routing, possibly caching, and so on.
AFAIK, that usually means that every time a new request comes in, one or more new requests must be made to one or more of the backends: each with its own TCP handshake, HTTP overhead, etc.
Don't those additional connections add a measurable latency and/or performance hit? If so, what techniques are commonly used to get the best performance from those kinds of deployments?
Latency on a local connection will be minimal, probably single-digit milliseconds at most. There will be some occupancy overhead for the extra HTTP sessions, but it's spread out among the different apps.
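In practice the handshake cost is also amortized with keep-alive connection pooling between the front end and the backends, so only the first request pays for TCP setup. Here's a minimal Go sketch of that idea (the backend URL and pool sizes are made-up placeholders, not anything from your setup):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// A shared Transport keeps idle TCP connections to each backend open,
	// so repeated requests skip the TCP (and TLS) handshake entirely.
	transport := &http.Transport{
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 10,               // pool size per backend host (illustrative)
		IdleConnTimeout:     90 * time.Second, // how long an idle connection is kept around
	}
	client := &http.Client{Transport: transport, Timeout: 5 * time.Second}

	// The second request reuses the pooled connection opened by the first.
	for i := 0; i < 2; i++ {
		resp, err := client.Get("http://backend.internal:8080/health") // hypothetical backend
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
		resp.Body.Close()
	}
}
```

Most real front ends (nginx, HAProxy, etc.) do the equivalent of this internally when you enable upstream keep-alive.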
The advantage of the approach you describe is that it distributes the load amongst different apps: you can have lots of front-end instances doing heavy lifting like SSL and fewer backend apps that each handle more sessions. And you can pick and mix the apps you need.
A single monolithic app will probably be a bit faster until it runs out of capacity, at which point you have a problem, because it's hard to scale up.
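To make the split concrete, here's a rough sketch of the front-end half in Go (the backend address and certificate file names are placeholders): it terminates TLS and proxies requests to a backend over reused connections, which is the basic shape of what dedicated front ends do at scale.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical backend address; in practice this would come from
	// configuration or service discovery.
	backend, err := url.Parse("http://app1.internal:8080")
	if err != nil {
		log.Fatal(err)
	}

	// The reverse proxy forwards requests to the backend over pooled
	// keep-alive connections (http.DefaultTransport reuses idle conns),
	// so the extra hop doesn't pay a TCP handshake per client request.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// TLS terminates here at the front end; the backend only ever sees
	// plain HTTP. cert.pem and key.pem are placeholder file names.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
}
```

You'd run many cheap instances of something like this in front of a smaller number of heavier backend apps.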