I have a question about using Java's BigDecimal: which way is better when I want to multiply a BigDecimal object by 100?
By the way, it will be used for commercial calculations, so I care about precision rather than speed.
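Roughly, the three options I mean are these (a minimal illustrative sketch; the class and variable names are just for illustration, and the numbering matches the answer below):

import java.math.BigDecimal;

public class MultiplyOptions {
    public static void main(String[] args) {
        // illustrative sketch of the three candidate calls
        BigDecimal d = new BigDecimal(99);
        BigDecimal ten = new BigDecimal(10);

        BigDecimal option1 = d.multiply(ten).multiply(ten); // option 1: multiply by ten twice
        BigDecimal option2 = d.movePointRight(2);           // option 2: move the decimal point two places right
        BigDecimal option3 = d.scaleByPowerOfTen(2);        // option 3: scale by a power of ten directly
    }
}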
Option 3 (scaleByPowerOfTen) is the fastest. Options 1 and 2 are roughly the same, option 1 being the two multiplications by ten. Replacing those with a single multiplication by 100 gets close to the speed of option 3.
Here's my test code:
import java.math.BigDecimal;

public class TestingStuff {
    public static void main(String[] args) {
        // Touch BigDecimal once so class loading isn't counted in the first timing.
        BigDecimal d2 = new BigDecimal(0);

        // Option 2: movePointRight
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            BigDecimal d = new BigDecimal(99);
            d2 = d.movePointRight(2);
        }
        long end = System.currentTimeMillis();
        System.out.println("movePointRight: " + (end - start));

        // Option 1: multiply by ten twice
        BigDecimal ten = new BigDecimal(10);
        start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            BigDecimal d = new BigDecimal(99);
            d2 = d.multiply(ten).multiply(ten);
        }
        end = System.currentTimeMillis();
        System.out.println("multiply: " + (end - start));

        // Option 3: scaleByPowerOfTen
        start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            BigDecimal d = new BigDecimal(99);
            d2 = d.scaleByPowerOfTen(2);
        }
        end = System.currentTimeMillis();
        System.out.println("scaleByPowerOfTen: " + (end - start));
    }
}
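Since you care about precision rather than speed, one detail is worth knowing: all three options produce numerically equal results, but scaleByPowerOfTen leaves a negative scale, which affects toString() and equals(). A minimal check (the class name is just for illustration):

import java.math.BigDecimal;

public class ScaleCheck {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal(99); // unscaled value 99, scale 0

        BigDecimal byMultiply  = d.multiply(new BigDecimal(100)); // 9900, scale 0
        BigDecimal byMovePoint = d.movePointRight(2);             // 9900, scale 0 (movePointRight never yields a negative scale)
        BigDecimal byScale     = d.scaleByPowerOfTen(2);          // unscaled 99, scale -2

        System.out.println(byMultiply);  // 9900
        System.out.println(byMovePoint); // 9900
        System.out.println(byScale);     // 9.9E+3

        System.out.println(byMultiply.compareTo(byScale) == 0); // true: numerically equal
        System.out.println(byMultiply.equals(byScale));         // false: equals() also compares scale
    }
}

So if you pick scaleByPowerOfTen, compare values with compareTo() rather than equals(), or normalize with setScale() first.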
Of course you could just try the various options yourself in your own code. If you can't measure the difference, then why are you optimizing?