I am doing a project on the efficiency of sorting algorithms. Let's say I perform 50 iterations of bubble sort and find the average time taken for n numbers. But I noticed that the first few iterations are always slower than the subsequent ones,
e.g. 1394 ms, 1381 ms, 1001 ms, 1008 ms, 1008 ms, 1011 ms...
long lStartTime = System.currentTimeMillis();
bubbleSort(R); // R is the array of ints to sort
long lEndTime = System.currentTimeMillis();
long difference = lEndTime - lStartTime;
What could be the reason behind this? Is it because the CPU can tell which data to keep in its cache? If so, should I exclude the subsequent (faster) timings from my analysis, since they are not realistic?
Thanks!
Edit: I'm testing other sorting algorithms too; bubble sort is just an example. The thing is, this happens with all of the sorts I tried except insertion sort.
public static int[] bubbleSort(int[] S) {
    int counter = 0;          // completed passes; the last `counter` elements are already in place
    boolean isUnsort = true;  // set whenever a swap occurs, i.e. the array may still be unsorted
    while (isUnsort) {
        isUnsort = false;
        // Bubble the largest remaining element towards the end of the array.
        for (int i = 0; i < S.length - 1 - counter; i++) {
            if (S[i] > S[i + 1]) {
                // Swap adjacent out-of-order elements.
                int temp = S[i];
                S[i] = S[i + 1];
                S[i + 1] = temp;
                isUnsort = true;
            }
        }
        counter++;
    }
    return S;
}
It's probably the just-in-time (JIT) compiler translating your bytecode into a more efficient form during the first few runs. I suggest you run at least one (but preferably a few) warm-up sorts before you start your timer. Also, bubble sort is a horribly inefficient sort.
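For example, here is a minimal sketch of a benchmark loop with warm-up runs before timing, reusing the bubbleSort(int[]) method from your post. The array size, the number of warm-up runs, and the use of System.nanoTime() are just illustrative choices, not requirements:

import java.util.Arrays;
import java.util.Random;

public class SortBenchmark {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(50_000, 0, 1_000_000).toArray();

        // Warm-up runs: give the JIT a chance to compile bubbleSort before timing starts.
        for (int i = 0; i < 5; i++) {
            bubbleSort(Arrays.copyOf(data, data.length));
        }

        // Timed runs: sort a fresh copy each time so every iteration does the same amount of work.
        int runs = 50;
        long totalNanos = 0;
        for (int i = 0; i < runs; i++) {
            int[] copy = Arrays.copyOf(data, data.length);
            long start = System.nanoTime();
            bubbleSort(copy);
            totalNanos += System.nanoTime() - start;
        }
        System.out.printf("average: %.1f ms%n", totalNanos / (runs * 1_000_000.0));
    }

    // bubbleSort(int[]) is the method from your post.
}

As a side note, System.nanoTime() is generally preferred over System.currentTimeMillis() for measuring elapsed intervals, since it is monotonic and has finer resolution.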