This question may be slightly off topic, but I didn't know where else to ask. I was reading the MessagePack-RPC spec (https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md) and saw that it allows sending out-of-order messages over the same connection.
The only way I have done TCP socket programming before is by sending requests synchronously on a socket. For example, I will open a socket to 127.0.0.1, send a request to that server through that socket, and wait for a response. When I get the response for the request I sent, I close that connection by calling `close()` on the client and `close()` on the server after I have responded to the request.
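Concretely, my current cycle looks something like this (a minimal sketch with error handling omitted; the port and single-shot framing are just placeholders):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// One connection per request: connect, send, wait, close.
std::string send_request(const std::string& request) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                  // placeholder port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    send(fd, request.data(), request.size(), 0);  // one request out...
    char buf[4096];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);    // ...one response back
    close(fd);                                    // then tear the connection down
    return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```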
For background, I am working on a project in C++ with libevent to do something very similar to what RPC systems do, so I wanted to know what request/response connection lifecycle I should use in the underlying transport.
In C++ Thrift there is a client method called `open()` that (presumably) opens a connection and keeps it open until you call `close()`. How does this work in systems where that is abstracted away, such as the MessagePack-RPC spec I linked above? What is the best course of action? Open a connection if there is none, send the request, and close the connection once all outstanding requests have been served (on the server, call `close()` when all pending requests have been responded to)? Or do we have to somehow keep that connection alive for a period that extends beyond the request lifetimes? If so, how will the server and the client know what that period is? For example, should we register a read event handler on the socket and close the connection when `recv()` returns `0`?
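To make the `recv()` returns `0` option concrete, here is roughly what I picture the server side looking like with libevent's bufferevents: keep the connection open across requests, and free it only when the event callback reports EOF or an error (the echo in the read callback is just a stand-in for real request handling):

```cpp
#include <event2/bufferevent.h>
#include <event2/buffer.h>
#include <event2/event.h>

static void read_cb(struct bufferevent* bev, void* /*ctx*/) {
    struct evbuffer* input = bufferevent_get_input(bev);
    // Placeholder: parse as many complete requests as are buffered and
    // respond to each, leaving the connection open afterwards.
    bufferevent_write_buffer(bev, input);  // echo stands in for real handling
}

static void event_cb(struct bufferevent* bev, short events, void* /*ctx*/) {
    if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR)) {
        // The peer closed its end (recv() returned 0 under the hood) or the
        // socket errored, so it is safe to free the connection now.
        bufferevent_free(bev);
    }
}

// Called for each accepted socket, e.g. from an evconnlistener callback.
void on_accept(struct event_base* base, evutil_socket_t fd) {
    struct bufferevent* bev =
        bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
    bufferevent_setcb(bev, read_cb, nullptr, event_cb, nullptr);
    bufferevent_enable(bev, EV_READ | EV_WRITE);
}
```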
If this is a problem that different systems solve differently, can someone direct me to a resource I can use to read up on patterns for keeping connections alive (preferably in event-driven systems)? For example, I have read that HTTP servers often keep connections open across requests. Why is that? Doesn't keeping every single connection open mean that the server will essentially leak memory?
Sorry about the seemingly basic question; I have only ever done really simple TCP socket programming before, so I might not know how things are usually done.
All this is much like how most HTTP implementations already work. Note the timeouts at both ends: each side uses an idle timeout to reclaim connections that have gone quiet, which is what keeps open connections from accumulating without bound.
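With libevent, for example, each end can arm an idle timeout on the bufferevent along these lines (a sketch; the 30-second budget is an arbitrary choice, and real code would keep its read callback and handle EOF/errors as in the question's scenario):

```cpp
#include <event2/bufferevent.h>
#include <sys/time.h>

// Free a connection once nothing has been read from it for 30 seconds.
static void timeout_cb(struct bufferevent* bev, short events, void* /*ctx*/) {
    if (events & BEV_EVENT_TIMEOUT) {
        bufferevent_free(bev);  // reclaim the idle connection's resources
    }
}

void arm_idle_timeout(struct bufferevent* bev) {
    struct timeval idle = {30, 0};                  // arbitrary idle budget
    bufferevent_set_timeouts(bev, &idle, nullptr);  // read timeout only
    bufferevent_setcb(bev, nullptr, nullptr, timeout_cb, nullptr);
}
```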