For almost all bytecodes, the types of the operands found on the stack are known statically: the bytecode verifier will have checked that only those types can actually appear at runtime (on every path), so the interpreter can just go ahead popping the values it expects and all should be well.
There are, however, a few instructions that do need to know the types of their operands, e.g. some of the 'dup' bytecodes, which have different forms depending on whether some of the operands are category 2 (longs and doubles). During verification this is easy, because the preceding push instructions use pseudo verification types: a long is pushed as a 'long' followed by a 'top', so when the dup is verified the verifier knows a long is there (because it also finds a 'top').
But how does the runtime determine this?
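To illustrate the 'long' + 'top' modelling I mean, here is a minimal sketch in Java (purely hypothetical names, not the real verifier's code) of a verifier-style type stack:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Pseudo verification types: a category-2 value (long/double) takes two
    // slots, modelled as the value type followed by TOP.
    enum VType { INT, FLOAT, REFERENCE, LONG, DOUBLE, TOP }

    final class TypeStack {
        private final Deque<VType> slots = new ArrayDeque<>();

        void pushInt()  { slots.push(VType.INT); }

        void pushLong() {
            slots.push(VType.LONG); // slot holding the value
            slots.push(VType.TOP);  // dummy upper slot
        }

        // When verifying dup2, the verifier can look at the modelled slots to
        // pick the right form: one category-2 value vs. two category-1 values.
        boolean topTwoSlotsAreOneCategory2Value() {
            return slots.peek() == VType.TOP;
        }
    }
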
Hi all, the original question says "some of the 'dup' bytecodes"; I should have been more specific, i.e. the more complex ones 'dup_x2', 'dup2', 'dup2_x1', and 'dup2_x2', where the different forms depend on what is found on the stack: they do different things if they encounter longs or doubles.
My question is: how can the runtime determine whether a value is a one-slot int or a two-slot long/double and respond accordingly? Thanks.
The dup instructions don't care about type. They just blindly duplicate a fixed number of stack slots and place them appropriately. dup2 will duplicate two ints (or a float and a reference, or whatever) just as well as a single long.
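To make that concrete, here is a minimal sketch (hypothetical names, not any real JVM's code) of an interpreter operand stack built from raw 32-bit slots, where dup2 just copies the top two slots regardless of what they hold:

    // Each entry is one untyped stack slot; a long or double occupies two.
    final class OperandStack {
        private final int[] slots = new int[64];
        private int top = 0; // index of the next free slot

        void pushInt(int v) { slots[top++] = v; }

        void pushLong(long v) {
            slots[top++] = (int) (v >>> 32); // high word
            slots[top++] = (int) v;          // low word
        }

        long popLong() {
            int lo = slots[--top];
            int hi = slots[--top];
            return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
        }

        // dup2: blindly duplicate the top two slots; this works equally well
        // for two ints, an int and a float, or one long/double.
        void dup2() {
            slots[top]     = slots[top - 2];
            slots[top + 1] = slots[top - 1];
            top += 2;
        }
    }

Pushing a long and executing dup2 leaves two copies of the long on the stack, exactly as pushing two ints and executing dup2 leaves four ints.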
The one caveat is that the verifier still has to check that the two words of a long/double are never split apart. But this isn't unique to the dup instructions.
Every stack and local slot at every point in the bytecode has an implicit type defined by the verifier's dataflow analysis. (In later classfile versions there are also explicit type annotations in the classfile, the StackMapTable attribute, not to be confused with Java-level Annotations, which make the verifier more efficient.)
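If you want to see that information, one way (assuming a compiled class, here a hypothetical Foo.class, is on the class path) is to dump it with javap; in class files that carry them, the StackMapTable frames record the verification types of locals and stack slots at branch targets:

    javap -v Foo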