Tags: java, performance, reflection, instantiation

Performance of new operator versus newInstance() in Java


I was using newInstance() in a sort-of performance-critical area of my code. The method signature is:

<T extends SomethingElse> T create(Class<T> clasz)

I pass Something.class as the argument and get back an instance of SomethingElse, created with newInstance().
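
A minimal sketch of such a factory, assuming hypothetical Something and SomethingElse classes (the names come from the signature above; the method body is an assumption, not the author's actual code):

```java
// Hypothetical classes matching the signature in the question.
class SomethingElse { }

class Something extends SomethingElse { }

class Factory {
    static <T extends SomethingElse> T create(Class<T> clasz) {
        try {
            // Class.newInstance() is the call the question benchmarks;
            // it requires an accessible no-argument constructor.
            return clasz.newInstance();
        } catch (InstantiationException | IllegalAccessException e) {
            throw new IllegalStateException("Cannot instantiate " + clasz, e);
        }
    }
}
```

On modern JDKs, `clasz.getDeclaredConstructor().newInstance()` is the preferred replacement, since `Class.newInstance()` has been deprecated since Java 9.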

Today I got back to clearing this performance TODO from the list, so I ran a couple of tests comparing the new operator with newInstance(). I was very surprised by the performance penalty of newInstance().
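
An illustrative micro-benchmark along these lines (not the author's original code) might look like the following, including the warm-up pass discussed later in the answer:

```java
// Compares plain `new` against reflective instantiation.
// Timings are illustrative only; a proper harness (e.g. JMH) is more reliable.
public class Bench {
    static class Thing { }

    static long timeNew(int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            Thing t = new Thing();
        }
        return System.nanoTime() - start;
    }

    static long timeReflect(int n) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            Thing t = Thing.class.getDeclaredConstructor().newInstance();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        // Warm-up so the JIT compiles both paths before measurement.
        timeNew(100_000);
        timeReflect(100_000);
        System.out.printf("new: %d ns, newInstance(): %d ns%n",
                timeNew(1_000_000), timeReflect(1_000_000));
    }
}
```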

I wrote a little about it, here: http://biasedbit.com/blog/new-vs-newinstance/

(Sorry about the self-promotion... I'd paste the text here, but this question would grow out of proportion.)

What I'd love to know is why the -server flag provides such a performance boost when the number of objects being created grows large, but not for "low" values, say, 100 or 1000.

I did learn my lesson with the whole reflection thing; this is just curiosity about the optimisations the JVM performs at runtime, especially with the -server flag. Also, if I'm doing something wrong in the test, I'd appreciate your feedback!


Edit: I've added a warmup phase and the results are now more stable. Thanks for the input!


Solution

  • I did learn my lesson with the whole reflection thing; this is just curiosity about the optimisations the JVM performs at runtime, especially with the -server flag. Also, if I'm doing something wrong in the test, I'd appreciate your feedback!

    Answering the second part first: your code seems to be making the classic Java micro-benchmark mistake of not "warming up" the JVM before taking measurements. Your application needs to run the method that does the test a few times, ignoring the first few iterations, at least until the numbers stabilize. The reason is that a JVM has to do a lot of work to get an application started; e.g. loading classes and (once they've run a few times) JIT-compiling the methods where significant application time is being spent.
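
The warm-up pattern described above can be sketched as a small harness (an assumed structure, not code from the question) that runs the benchmark body several times with the timings discarded, so class loading and JIT compilation happen before the measured runs:

```java
// Warm-up harness sketch: discard early runs, measure only stabilized ones.
public class WarmupHarness {
    static long[] run(Runnable benchmark, int warmupRuns, int measuredRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            benchmark.run();                 // timings deliberately ignored
        }
        long[] results = new long[measuredRuns];
        for (int i = 0; i < measuredRuns; i++) {
            long start = System.nanoTime();
            benchmark.run();
            results[i] = System.nanoTime() - start;
        }
        return results;                      // only post-warm-up numbers
    }
}
```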

    I think the reason "-server" makes a difference is that (among other things) it changes the rules that determine when to JIT-compile. The assumption is that for a "server" it is better to JIT sooner; this gives slower startup but better throughput. (By contrast, a "client" is tuned to defer JIT compilation so that the user gets a working GUI sooner.)