I created a send/receive datagram system for a game I wrote in Java (with LWJGL).
However, these datagrams were often dropped. That was because the server's main loop was blocked waiting for various I/O operations and other processing to finish while new datagrams kept arriving, which it was obviously not listening for.
To combat this, I kept the main thread's while (true) loop that receives datagrams, but instead of doing the processing in the main thread, I branch out into different threads.
Like this:
ArrayList<RecieveThread> threads = new ArrayList<RecieveThread>();

public void run() {
    while (true) {
        //System.out.println("Waiting!");
        byte[] data = new byte[1024];
        DatagramPacket packet = new DatagramPacket(data, data.length);
        try {
            socket.receive(packet);
        } catch (IOException e) {
            e.printStackTrace();
        }
        //System.out.println("Recieved!");
        String str = new String(packet.getData());
        str = str.trim();
        if (threads.size() < 50) {
            RecieveThread thr = new RecieveThread();
            thr.packet = packet;
            thr.str = str;
            threads.add(thr);
            thr.start();
        } else {
            boolean taskProcessed = false;
            for (RecieveThread thr : threads) {
                if (!thr.nextTask) {
                    thr.packet = packet;
                    thr.str = str;
                    thr.nextTask = true;
                    taskProcessed = true;
                    break;
                }
            }
            if (!taskProcessed) {
                System.out.println("[Warning] All threads full! Defaulting to main thread!");
                process(str, packet);
            }
        }
    }
}
That creates a new thread for every incoming datagram until it hits 50 threads, at which point it hands the packet to an existing thread that is waiting for its next task. If all threads are busy processing, it falls back to the main thread.
So my question is this: How many threads is a good amount? I don't want to overload anyone's system (the same code will also run on players' clients), but I also don't want to increase packet loss.
Also, are separate threads even a good idea? Does anybody have a better way of doing this?
Edit: Here is my RecieveThread class (the class is 777 lines long):
String str;
DatagramPacket packet;
boolean nextTask = true;

public void run() {
    while (true) {
        ////System.out.println("CLIENT: " + str);
        //BeforeGame
        while (!nextTask) {
            //Nothing
        }
        <Insert processing code here that you neither know about, nor care to know about, nor is relevant to the issue. Still, I pastebinned it below>
    }
}
First and foremost, any system that uses datagrams (e.g. UDP) for communication has to be able to cope with dropped requests. They will happen. The best you can do is reduce the typical drop rate to something that is acceptable. But you also need to recognize that if your application can't cope with lost datagrams, then it should not be using datagrams. Use regular sockets instead.
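For illustration, here is one minimal way to cope at the application level: the client resends a request a few times if no reply arrives within a timeout. (This is just a sketch; it assumes the server sends back some kind of acknowledgement datagram, and the method name, timeout and retry count are arbitrary.)

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public final class ReliableSend {

    /** Sends a datagram and waits for a reply, retrying a few times on timeout. */
    public static boolean sendWithRetry(DatagramSocket socket, byte[] payload,
                                        InetAddress host, int port) throws IOException {
        DatagramPacket request = new DatagramPacket(payload, payload.length, host, port);
        byte[] buf = new byte[1024];
        DatagramPacket reply = new DatagramPacket(buf, buf.length);
        socket.setSoTimeout(500); // wait at most 500 ms for an acknowledgement
        for (int attempt = 0; attempt < 3; attempt++) {
            socket.send(request);
            try {
                socket.receive(reply);
                return true; // got an acknowledgement
            } catch (SocketTimeoutException e) {
                // No reply in time; resend.
            }
        }
        return false; // give up; the caller has to cope with the loss
    }
}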
Now to the question of how many threads to use. The answer is "it depends".
On the one hand, if you don't have enough threads, there could be unused hardware capacity (cores) that could be used at peak times ... but isn't.
If you have too many threads running (or runnable) at a time, they will be competing for resources at various levels:

- for CPU time, via the OS scheduler (more runnable threads means more context switching);
- for memory, since each thread needs its own stack, and more active threads puts more pressure on the caches;
- for locks on shared data structures, in your application and in the JVM and OS beneath it.
All of these things (and associated 2nd order effects) can reduce throughput ... relative to the optimal ... if you have too many threads.
If your request processing involves talking to databases or servers on other machines, then you need enough threads to allow something else to happen while waiting for responses.
As a rule of thumb, if your requests are independent (minimal contention on shared data) and exclusively in-memory (no databases or external service requests) then one worker thread per core is a good place to start. But you need to be prepared to tune (and maybe re-tune) this.
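For example, rather than hand-rolling a pool of RecieveThread objects, you could let java.util.concurrent manage the workers and size the pool to the core count. This is only a sketch of your receive loop reworked around that idea; Receiver and process are placeholders for your own code:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Receiver implements Runnable {

    private final DatagramSocket socket;

    // One worker per core is a reasonable starting point for in-memory work.
    private final ExecutorService workers =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public Receiver(DatagramSocket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        while (true) {
            byte[] data = new byte[1024];
            DatagramPacket packet = new DatagramPacket(data, data.length);
            try {
                socket.receive(packet);
            } catch (IOException e) {
                e.printStackTrace();
                continue; // nothing was received; go back to waiting
            }
            // Convert only the bytes that actually arrived, so no trim() hack is needed.
            String str = new String(packet.getData(), 0, packet.getLength());
            workers.submit(() -> process(str, packet));
        }
    }

    private void process(String str, DatagramPacket packet) {
        // Game-specific handling goes here.
    }
}

A fixed thread pool queues submitted work internally, so bursts of datagrams get buffered for the workers instead of spinning up ever more threads.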
Finally, there is the problem of dealing with overload. On the one hand, if the overload situation is transient, then queuing is a reasonable strategy ... provided that the queue doesn't get too deep. On the other hand, if you anticipate overload to be common, then the best strategy is to drop requests early.
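With an executor, both strategies are easy to express: a bounded queue gives you limited queuing, and a rejection handler that discards work gives you early dropping. A sketch, with an arbitrary queue depth:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class BoundedWorkers {

    /** A fixed-size pool whose bounded queue silently drops new work when full. */
    public static ExecutorService create(int queueDepth) {
        int cores = Runtime.getRuntime().availableProcessors();
        return new ThreadPoolExecutor(
                cores, cores,                 // fixed number of worker threads
                0L, TimeUnit.MILLISECONDS,    // no keep-alive needed for a fixed pool
                new ArrayBlockingQueue<>(queueDepth),      // limited queuing
                new ThreadPoolExecutor.DiscardPolicy());   // drop requests early
    }
}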
However, there is a secondary problem. A dropped request will probably entail the client noticing that it hasn't gotten a reply within a given time, and resending the request. And that can lead to worse problems; i.e. the client resending a request before the server has actually dropped it ... which can lead to the same request being processed multiple times, and a catastrophic drop in effective throughput.
Note that the same thing can happen if you have too many threads and they get bogged down due to resource contention.
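One common mitigation for the duplicate-processing problem is to make requests idempotent, or to tag each request with an ID so the server can recognize and ignore retransmits. A sketch of the latter (the ID scheme and cache size are invented for the example):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public final class DuplicateFilter {

    // Remember the most recent request IDs; the oldest entries age out automatically.
    private final Set<Long> seen = Collections.newSetFromMap(
            new LinkedHashMap<Long, Boolean>() {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Long, Boolean> eldest) {
                    return size() > 1024;
                }
            });

    /** Returns true the first time an ID is seen, false for a retransmit. */
    public synchronized boolean firstTime(long requestId) {
        return seen.add(requestId);
    }
}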