I'm using this little piece of code to test what method is faster:
public void test() {
    long start = System.currentTimeMillis();
    MyDate date = new MyDate();
    int max = 5000;
    for (int i = 0; i < max; i++) {
        Calendar.getInstance().getTimeInMillis(); // <--
    }
    System.out.println("Calendar instance delay: " + (System.currentTimeMillis() - start));
    start = System.currentTimeMillis();
    for (int i = 0; i < max; i++) {
        date.getMillis(); // <--
    }
    System.out.println("My date delay: " + (System.currentTimeMillis() - start));
}
So, basically, I'm trying to compare the performance of Calendar.getInstance().getTimeInMillis() and MyDate.getMillis() (MyDate is a class I created).
Well, when I ran the code above, the output was:
Calendar instance delay: 413
My date delay: 2
BUT, when I inverted the order (called MyDate first and Calendar after), I got:
My date delay: 247
Calendar instance delay: 119
I tried using System.nanoTime(), but the same thing occurred: whichever code was tested first took longer.
Does anyone know why this difference happens? Or, is there a way to profile code accurately without an external profiler application (just pure Java code)?
Thanks.
To test the performance of "small" code, you have to call it many times. Otherwise, the effects of JIT, caching, and branch prediction mess up your testing. If I call you on the phone to see how long it takes before you answer, I'll get a very different answer if I just called you a few seconds ago.
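As a rough illustration of that advice, here is a minimal hand-rolled benchmark sketch (the class name `TinyBench` and iteration counts are my own choices, not from the question): it runs several warm-up passes so the JIT compiles the hot path before anything is timed, and it accumulates the results into a field so the compiler cannot eliminate the loop as dead code. For serious measurements, a dedicated harness such as JMH is still the better tool.

```java
// Minimal benchmark sketch: warm up first, then time many iterations.
public class TinyBench {
    static long blackhole; // consume results so the JIT can't remove the loop

    // Times `iterations` calls of the code under test; returns elapsed nanoseconds.
    static long measure(int iterations) {
        long sum = 0;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sum += System.currentTimeMillis(); // the "small" code under test
        }
        long elapsed = System.nanoTime() - start;
        blackhole = sum; // publish the result so the loop has an observable effect
        return elapsed;
    }

    public static void main(String[] args) {
        // Warm-up passes: let the JIT, caches, and branch predictors settle.
        for (int i = 0; i < 10; i++) {
            measure(1_000_000);
        }
        // Measured passes: report average time per call, which should now be stable.
        for (int run = 0; run < 5; run++) {
            long ns = measure(1_000_000);
            System.out.println("avg ns/call: " + (ns / 1_000_000.0));
        }
    }
}
```

Running both candidate methods through the same warmed-up `measure` loop, rather than back to back in one cold method, removes the first-measured-is-slowest effect the question observed.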