graalvm, graalvm-native-image

How does PE take place for a Truffle interpreter AOT-compiled with native-image?


Using native-image to improve startup times of Truffle interpreters seems to be common.

My understanding is that AOT compilation with native-image produces methods compiled to native code that run on the special-purpose SubstrateVM runtime.

I also understand that the Truffle framework relies on dynamically gathered profiling information to decide which trees of nodes to partially evaluate, and that PE works by taking the JVM bytecode of those nodes and analyzing it with the help of the Graal JIT compiler.

And here's where I'm confused. If we pass a Truffle interpreter through native-image, the code for each node's methods will be native code. How can PE proceed, then? In fact, is Graal even available in SubstrateVM?
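
For concreteness, here is a minimal sketch of the kind of node tree I mean, assuming the standard Truffle Node API; the class names (ExprNode, ConstNode, AddNode) are made up for illustration:

```java
import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.nodes.Node;
import com.oracle.truffle.api.nodes.Node.Child;

// Each node's execute method is plain Java; PE inlines a whole hot tree of
// these methods into a single compilation unit.
abstract class ExprNode extends Node {
    abstract int executeInt(VirtualFrame frame);
}

final class ConstNode extends ExprNode {
    private final int value;

    ConstNode(int value) {
        this.value = value;
    }

    @Override
    int executeInt(VirtualFrame frame) {
        return value;
    }
}

final class AddNode extends ExprNode {
    @Child private ExprNode left;
    @Child private ExprNode right;

    AddNode(ExprNode left, ExprNode right) {
        this.left = left;
        this.right = right;
    }

    @Override
    int executeInt(VirtualFrame frame) {
        // During PE the child fields are treated as constants, so these
        // virtual calls are devirtualized and inlined recursively.
        return left.executeInt(frame) + right.executeInt(frame);
    }
}
```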


Solution

  • Besides the native code of the interpreter, SVM also stores in the image a representation of the interpreter (the group of methods that make up the interpreter) for partial evaluation. This representation is not JVM bytecode, but graphs already parsed into Graal IR form. PE runs on these graphs, producing even smaller, optimized graphs which are then fed to the Graal compiler, so yes, SVM ships the Graal compiler in the native image as well. Why Graal graphs and not bytecode? Bytecode was used in the past, but storing the graphs directly saves the bytecode-to-Graal-IR parsing step. A sketch of one such interpreter method follows below.
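
    For illustration, here is a minimal sketch of a root node whose execute() method is one of those stored interpreter methods. It assumes the standard Truffle RootNode/VirtualFrame API; the class name and the toy loop are made up:

    ```java
    import com.oracle.truffle.api.frame.VirtualFrame;
    import com.oracle.truffle.api.nodes.RootNode;

    // In a native image, the AOT-compiled machine code of execute() handles the
    // initial interpreted runs, while a pre-parsed Graal IR graph of the same
    // method is stored in the image for partial evaluation.
    final class CountDownRootNode extends RootNode {

        private final int start;

        CountDownRootNode(int start) {
            super(null); // no TruffleLanguage needed for this standalone sketch
            this.start = start;
        }

        @Override
        public Object execute(VirtualFrame frame) {
            // Once this call target becomes hot, PE specializes the stored graph
            // (folding `start` as a compilation constant) and hands the resulting
            // optimized graph to the Graal compiler bundled in the image.
            int i = start;
            while (i > 0) {
                i--;
            }
            return i;
        }
    }
    ```

    In recent Truffle versions, repeatedly invoking something like `new CountDownRootNode(1_000_000).getCallTarget().call()` would eventually trigger exactly this PE-plus-compilation path, with no bytecode parsing involved.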