I wrote a simple benchmark that tests the performance of multiplying doubles vs. BigDecimal. Is my method correct? I use randomized values because the compiler optimizes away multiplication of constants (e.g. Math.PI * Math.E).
But:
- I don't know whether generating random numbers inside the test distorts the result.
- The same goes for creating new BigDecimal objects inside the test.
I want to measure the time of the multiplication only (not the time spent in constructors). How can that be done?
import java.math.*;
import java.util.*;

public class DoubleVsBigDecimal
{
    public static void main(String[] args)
    {
        Random rnd = new Random();
        long t1, t2, t3;
        double t;

        t1 = System.nanoTime();
        for (int i = 0; i < 1000000; i++)
        {
            double d1 = rnd.nextDouble();
            double d2 = rnd.nextDouble();
            t = d1 * d2;
        }

        t2 = System.nanoTime();
        for (int i = 0; i < 1000000; i++)
        {
            BigDecimal bd1 = BigDecimal.valueOf(rnd.nextDouble());
            BigDecimal bd2 = BigDecimal.valueOf(rnd.nextDouble());
            bd1.multiply(bd2);
        }
        t3 = System.nanoTime();

        System.out.println(String.format("%f", (t2 - t1) / 1e9));
        System.out.println(String.format("%f", (t3 - t2) / 1e9));
        System.out.println(String.format("%f", (double) (t3 - t2) / (double) (t2 - t1)));
    }
}
You are not only timing the multiply operation, you are also timing the random-number generation and, in the BigDecimal case, the object creation. You need to do something like:
long time = 0;
for (int i = 0; i < 1000000; i++) {
    double d1 = rnd.nextDouble();
    double d2 = rnd.nextDouble();
    long start = System.nanoTime();
    t = d1 * d2;
    long end = System.nanoTime();
    time += (end - start);
}
long meantime = time / 1000000;
Then you should probably calculate the standard error too. You will also need to warm up the JVM with some calculations before you start timing, otherwise you will get some abnormally high values at the beginning of the run.
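Putting those pieces together for the BigDecimal case, a minimal sketch could look like the following (the class name, the WARMUP/RUNS constants and the sink accumulator are illustrative, not from your code): operands are created outside the timed region, only the multiply is timed, and the per-iteration timings are used to compute the mean and standard error.

import java.math.BigDecimal;
import java.util.Random;

public class MultiplyBenchmark
{
    static final int WARMUP = 100_000;   // untimed iterations so the JIT compiles the hot code
    static final int RUNS = 1_000_000;   // measured iterations

    public static void main(String[] args)
    {
        Random rnd = new Random();

        // Warm-up phase: run the same work without timing it.
        BigDecimal sink = BigDecimal.ZERO;
        for (int i = 0; i < WARMUP; i++)
        {
            sink = sink.add(BigDecimal.valueOf(rnd.nextDouble())
                    .multiply(BigDecimal.valueOf(rnd.nextDouble())));
        }

        // Measured phase: build the operands outside the timed region,
        // time only the multiply, and accumulate per-iteration timings.
        long total = 0, totalSq = 0;
        for (int i = 0; i < RUNS; i++)
        {
            BigDecimal bd1 = BigDecimal.valueOf(rnd.nextDouble());
            BigDecimal bd2 = BigDecimal.valueOf(rnd.nextDouble());
            long start = System.nanoTime();
            BigDecimal product = bd1.multiply(bd2);
            long elapsed = System.nanoTime() - start;
            total += elapsed;
            totalSq += elapsed * elapsed;
            sink = sink.add(product);   // use the result so the multiply cannot be optimized away
        }

        double mean = (double) total / RUNS;
        double variance = (double) totalSq / RUNS - mean * mean;
        double stdError = Math.sqrt(Math.max(variance, 0) / RUNS);
        System.out.printf("mean = %.1f ns, std error = %.3f ns%n", mean, stdError);
        System.out.println(sink.signum()); // keep 'sink' observable
    }
}

One caveat: System.nanoTime() itself costs tens of nanoseconds per call, which is comparable to a single double multiply, so for very cheap operations it is more reliable to time a batch of operations per sample (or use a dedicated benchmarking harness) rather than a single multiplication.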