I am doing some benchmarking of my client code, and I am trying to figure out how to calculate the throughput of my multithreaded code. I am running my program with 20 threads, and each thread runs for 15 minutes, so all 20 threads run for 15 minutes.

Below is my code:
public static void main(String[] args) {
    try {
        // create a thread pool with the given size
        ExecutorService service = Executors.newFixedThreadPool(20);
        // queue some tasks
        long startTime = System.currentTimeMillis();
        long endTime = startTime + (15 * 60 * 1000);
        for (int i = 0; i < 20; i++) {
            service.submit(new CassandraReadTask(endTime, columnFamilyList));
        }
        service.shutdown();
        service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    } catch (Exception e) {
        LOG.warn("Threw an Exception in " + CNAME + e);
    }
}
Below is my class that implements the Runnable interface:
class CassandraReadTask implements Runnable {
    private final long endTime;
    private final List<String> columnFamilyList;
    private final Random random = new Random();

    CassandraReadTask(long endTime, List<String> columnFamilyList) {
        this.endTime = endTime;
        this.columnFamilyList = columnFamilyList;
    }

    public void run() {
        try {
            while (System.currentTimeMillis() <= endTime) {
                double randomNumber = random.nextDouble() * 100.0;
                final String id = generateRandomId(random);
                ICassandraClientDao clientDao = ClientFactory.getInstance().getDao(clientName);
                clientDao.getAttributes(id, columnsList, columnFamily);
            }
        } catch (Exception e) {
            LOG.warn("Read task failed", e); // don't swallow exceptions silently
        }
    }
}
In the above code, I generate a random id and pass it to my getAttributes dao method.

So, from my understanding, total throughput will be:

total number of requests / total duration the program was run

So, in my case, it will be:

total number of ids I have generated / 15 minutes
Am I right?
What you are doing is fine as long as you properly count up (maybe using a shared AtomicInteger?) all of the requests done by the different threads.
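A minimal sketch of that counting, assuming a shared AtomicInteger and a placeholder task body (the class name, thread count, and run duration below are made up for illustration, not taken from your code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThroughputCounter {
    // shared, thread-safe counter incremented once per simulated request
    static final AtomicInteger requestCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        int threads = 4;
        long runMillis = 500; // short run just for illustration
        ExecutorService service = Executors.newFixedThreadPool(threads);
        long start = System.currentTimeMillis();
        long endTime = start + runMillis;
        for (int i = 0; i < threads; i++) {
            service.submit(() -> {
                while (System.currentTimeMillis() <= endTime) {
                    // a real test would do the Cassandra read here
                    requestCount.incrementAndGet();
                }
            });
        }
        service.shutdown();
        service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
        long elapsed = System.currentTimeMillis() - start;
        // throughput = total requests / total duration
        System.out.println(requestCount.get() / (elapsed / 1000.0) + " requests/sec");
    }
}
```

Each thread bumps the same counter, so the final value is the total across all threads, and dividing by the elapsed seconds gives you the requests/sec figure you're after.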
However, I would switch your code around a bit and submit 100,000 (or something) random-ids and then time how long it takes for your threads to handle all of those ids. That's a more realistic test since it will better show your task submission overhead.
Then you just record a startTimeMillis, take the difference from the end to the start, and divide 100,000 (or whatever your number was) by the diff to give you your average iterations/millis.
Something like:
long startTimeMillis = System.currentTimeMillis();
int numIterations = 100000;
for (int i = 0; i < numIterations; i++) {
    final String id = generateRandomId(random);
    service.submit(new CassandraReadTask(id, columnFamilyList));
}
service.shutdown();
service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
long diff = System.currentTimeMillis() - startTimeMillis;
// cast to double so integer division doesn't truncate the result
System.out.println("Average iterations per milli is " + (numIterations / (double) diff));
Then it's easy to play around with the number of threads and the number of iterations to maximize your throughput.
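For instance, the whole timed run could be wrapped in a loop over pool sizes (the pool sizes, the iteration count, and the no-op task body below are placeholders for your real read task):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSizeSweep {
    // submit numIterations tasks to a pool of the given size and
    // return the measured iterations per millisecond
    static double iterationsPerMilli(int poolSize, int numIterations) throws InterruptedException {
        ExecutorService service = Executors.newFixedThreadPool(poolSize);
        long start = System.currentTimeMillis();
        for (int i = 0; i < numIterations; i++) {
            service.submit(() -> { /* the real CassandraReadTask would go here */ });
        }
        service.shutdown();
        service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
        long diff = Math.max(1, System.currentTimeMillis() - start); // guard against 0 ms
        return numIterations / (double) diff;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int poolSize : new int[] {5, 10, 20, 40}) {
            System.out.println(poolSize + " threads: "
                    + iterationsPerMilli(poolSize, 100_000) + " iterations/ms");
        }
    }
}
```

Running this prints one throughput figure per pool size, so you can see where adding threads stops helping.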