There is a socket server with multiple clients, and every client sends a message to the server at a fixed interval. On the server side, how can I detect whether each client has or has not sent its message within that interval?
My idea is that the server could record each client's last-contact time locally and start a thread that periodically checks the interval between that recorded time and now. If the interval is longer than the period in which the client should have sent a message, it counts as a timeout.
That seems a little complex and needs a local file to keep the records; is there a more convenient way to do this?
Ignoring the multi-threaded part for now, because that is independent of the main question: keeping track of the timing between messages of each client.
A simple map will do the job. Keep a dictionary of `{socket: last_time}`, where `last_time` is the time at which the last message from that specific socket was received. Initialize `last_time` to whatever fits you best (`-1`, `0`, or the current system time). For the corner case where a client never sends any message, you need separate logic that runs at regular intervals and checks whether the elapsed time for an entry is more than expected. If it is, you know you need to delete the entry, close the connection, or do whatever else you want.
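A minimal sketch of that idea (the names `PERIOD`, `on_message`, and `check_timeouts` are mine, purely for illustration; everything lives in memory, no file needed):

```python
import threading
import time

PERIOD = 30.0      # expected interval between client messages (assumed value)
last_time = {}     # {socket: time.monotonic() of the last received message}

def on_message(sock, data):
    # call this whenever a message arrives from a client
    last_time[sock] = time.monotonic()

def check_timeouts():
    # runs forever in its own thread and evicts clients that went silent
    while True:
        now = time.monotonic()
        for sock, t in list(last_time.items()):
            if now - t > PERIOD:
                last_time.pop(sock, None)
                sock.close()          # or log it, notify someone, etc.
        time.sleep(PERIOD / 2)

threading.Thread(target=check_timeouts, daemon=True).start()
```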
I don't think there is any other way to handle this issue. The best you can do is write a class that handles this logic internally and exposes only the relevant abstractions:
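For example, a small wrapper along these lines (the class and method names are my own invention, just to show the shape of the abstraction):

```python
import time

class ClientTracker:
    """Tracks when each client last sent a message and reports the stale ones."""

    def __init__(self, period):
        self.period = period
        self._last = {}

    def touch(self, sock):
        # call on every received message
        self._last[sock] = time.monotonic()

    def forget(self, sock):
        self._last.pop(sock, None)

    def expired(self):
        # sockets whose last message is older than `period`
        now = time.monotonic()
        return [s for s, t in self._last.items() if now - t > self.period]
```

The checker thread then only needs to call `tracker.expired()` periodically and decide what to do with the sockets it returns.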
Coming to the multi-threading part, you need to take care of concurrency issues in Python (or in any language in general), because two or more threads can update the map simultaneously. Regarding Python, the global interpreter lock means only one thread runs bytecode at a time, and individual dictionary reads and writes are atomic in CPython, so for simple updates like this you can mostly ignore the issue.
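If you prefer to be explicit about it, or you ever do a read-modify-write on the shared map, a plain lock is enough. A short sketch, not something the GIL argument above strictly requires:

```python
import threading
import time

last_time = {}
lock = threading.Lock()

def on_message(sock):
    with lock:                        # serialize access to the shared dict
        last_time[sock] = time.monotonic()

def stale_sockets(period):
    with lock:                        # take a consistent snapshot before iterating
        now = time.monotonic()
        return [s for s, t in last_time.items() if now - t > period]
```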
There are many ways to handle connections with sockets: multi-threading, multi-processing, using the `select`/`poll` system calls, etc. Each has its merits when used in the appropriate place, but which one to use is up to the OP.
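For completeness, here is a single-threaded sketch using the standard `selectors` module, which sidesteps the threading question entirely; the port number and `PERIOD` are assumptions:

```python
import selectors
import socket
import time

PERIOD = 30.0
sel = selectors.DefaultSelector()
last_time = {}

server = socket.socket()
server.bind(("0.0.0.0", 9000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    # wake up at least once a second even if no client sends anything
    for key, _ in sel.select(timeout=1.0):
        sock = key.fileobj
        if sock is server:
            conn, _ = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
            last_time[conn] = time.monotonic()
        else:
            data = sock.recv(4096)
            if data:
                last_time[sock] = time.monotonic()   # message arrived in time
            else:                                    # client closed the connection
                sel.unregister(sock)
                last_time.pop(sock, None)
                sock.close()
    # the timeout check runs in the same loop; no extra thread or file needed
    now = time.monotonic()
    for sock in [s for s, t in last_time.items() if now - t > PERIOD]:
        sel.unregister(sock)
        last_time.pop(sock, None)
        sock.close()
```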