While watching the Towards a Universal VM presentation, I studied the slide that lists the optimisations the HotSpot JIT performs. In the language-specific techniques section there is "de-reflection". I tried to find information about it across the Internet, but failed. I understand that this optimization somehow eliminates the cost of reflection, but I'm interested in the details. Can someone clarify this, or point me to some useful links?
Yes, there is an optimization to reduce reflection costs, though it is implemented mostly in the class library rather than in the JVM.
Before Java 1.4, Method.invoke worked through a JNI call into the VM runtime. Each invocation required at least two transitions from Java to native code and back. The VM runtime parsed the method signature, verified that the types of the passed arguments were correct, performed boxing/unboxing, and constructed a new Java frame for the called method. All of that was rather slow.
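For example, a reflective call like the one below had to go through that slow native path on every invocation (the class and method names here are purely illustrative):

```java
import java.lang.reflect.Method;

public class ReflectiveCall {
    public static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) throws Exception {
        Method m = ReflectiveCall.class.getMethod("square", int.class);
        // The int argument is boxed to Integer and the int result comes back
        // boxed as well; before Java 1.4 every such call also crossed the
        // Java-to-native boundary twice.
        Object result = m.invoke(null, 42);
        System.out.println(result);  // prints 1764
    }
}
```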
Since Java 1.4, Method.invoke uses dynamic bytecode generation once a method has been called more than 15 times (configurable via the sun.reflect.inflationThreshold system property). A special Java class responsible for calling that particular method is generated at run time. This class implements sun.reflect.MethodAccessor, to which java.lang.reflect.Method delegates its calls.
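For illustration, here is roughly what such a generated accessor could look like if it were written as Java source for a target method String.substring(int). The real class is emitted directly as bytecode into the sun.reflect (later jdk.internal.reflect) package and implements the MethodAccessor interface; this standalone sketch only mirrors the body of its invoke method and uses names of my own choosing:

```java
import java.lang.reflect.InvocationTargetException;

// Rough Java equivalent of a bytecode-generated accessor for String.substring(int).
public class GeneratedAccessorSketch {
    public Object invoke(Object obj, Object[] args) throws InvocationTargetException {
        String target = (String) obj;              // cast the receiver
        int beginIndex = (Integer) args[0];        // unbox the single argument
        try {
            return target.substring(beginIndex);   // a plain, direct call: no JNI
        } catch (Throwable t) {
            throw new InvocationTargetException(t);
        }
    }
}
```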
The approach with dynamic bytecode generation is much faster, since it does not require Java-to-native transitions and does not have to parse the method signature or check argument types on every call: each generated accessor is specialized for one particular method, and it is ordinary bytecode that the JIT compiler can optimize like any other Java code.
Note that this optimization is implemented mostly in Java code, without JVM assistance. The only thing the HotSpot VM does to make it possible is to skip bytecode verification for these generated MethodAccessors. Otherwise the verifier would not allow them, for example, to call private methods.
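To see the inflation threshold in action, you can run a small experiment like the sketch below (the class and method names are mine, not from the JDK). Inside the reflectively invoked method, the stack trace reveals which accessor performed the call: on JDK 8 the first ~15 invocations go through sun.reflect.NativeMethodAccessorImpl, after which a GeneratedMethodAccessorN class appears; on newer JDKs the package is jdk.internal.reflect, and since JDK 18 (JEP 416) core reflection is built on method handles, so the frames look different.

```java
import java.lang.reflect.Method;

public class InflationDemo {
    // Reflectively invoked target; prints the class of its immediate caller,
    // which is the reflection accessor that performed the call.
    public static void target() {
        StackTraceElement caller = new Exception().getStackTrace()[1];
        System.out.println("invoked through " + caller.getClassName());
    }

    public static void main(String[] args) throws Exception {
        Method m = InflationDemo.class.getMethod("target");
        // The first calls go through the native (JNI-based) accessor; once the
        // sun.reflect.inflationThreshold is crossed, a bytecode-generated
        // accessor takes over.
        for (int i = 0; i < 20; i++) {
            System.out.print("call " + i + ": ");
            m.invoke(null);
        }
    }
}
```

Running the same program with -Dsun.reflect.inflationThreshold=5 moves the switchover point accordingly, which makes the effect easy to observe.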