Tags: java, optimization, increment, function-calls

Function call with i + 1, ++i implementation difference?


Just a quickie about Java, and let me say this first: I'm not asking because of performance optimization, just curious about the behind-the-scenes stuff in general. Since I found nothing on this I assume they are equivalent in every respect, but just to be sure:

In a recursive function foo(int i), or I guess in function calls in general, is there a difference between foo(++i) and foo(i + 1)? I know the result is the same here, but I thought the one with ++i might do some extra work in the background because it is an assignment? So maybe it creates an anonymous temporary for i in the background which gets incremented and then referenced? Or does the compiler just optimize ++i away to i + 1?

My understanding is a bit fuzzy as you see, so help would be appreciated.

EDIT to clarify:

At first this was due to a recursive function e.g.

public static int foo(int i) {
    return (i >= 4) ? 0
        : i + foo(++i);
}

The "functions in general" part came up in the middle of writing the question and, as remarked, makes this a bit ambiguous. Hope this clarifies everything.
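For this specific recursive shape the two spellings do return the same value: Java evaluates the left operand `i` before evaluating the argument `++i`, and `i` is a local parameter copy anyway, so the increment is invisible to the caller. A quick sketch to check this (the class and method names are just for illustration):

```java
public class IncrementDemo {
    // Variant using pre-increment in the recursive call.
    // The left operand i is read before ++i mutates it.
    static int fooInc(int i) {
        return (i >= 4) ? 0 : i + fooInc(++i);
    }

    // Variant using i + 1; i itself is never mutated.
    static int fooAdd(int i) {
        return (i >= 4) ? 0 : i + fooAdd(i + 1);
    }

    public static void main(String[] args) {
        // Both variants sum 0 + 1 + 2 + 3 when starting at 0.
        for (int n = 0; n <= 4; n++) {
            System.out.println(fooInc(n) + " == " + fooAdd(n));
        }
    }
}
```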


Solution

  • If the question is not about semantics but about performance at the machine level, after the IR has been optimized and translated into machine instructions, I would say: no difference, unless measured and proven otherwise.

    It's very unlikely that there will be a performance difference between f(++i) and f(i + 1) after all the optimizations are made, assuming your code is such that you can actually consider these as alternatives (i.e., the state of i ceases to be relevant after the function call).
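    Concretely, the two calls only stop being interchangeable when the caller reads i again afterwards, since ++i writes back to the variable and i + 1 does not. A minimal sketch of that caveat (the `twice` helper is made up for illustration):

```java
public class SideEffectDemo {
    static int twice(int x) { return 2 * x; }

    public static void main(String[] args) {
        int i = 1;
        int a = twice(++i);   // argument is 2, and i is now 2
        int j = 1;
        int b = twice(j + 1); // argument is 2, but j is still 1

        // a and b are equal, yet i and j have diverged,
        // so the two forms are only interchangeable when
        // the variable is not read again afterwards.
        System.out.println(a + " " + b + " " + i + " " + j);
    }
}
```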

    This comes down to basic hardware and compiler design: the low cost of instructions on values already held in a register, and how easy it is for even a semi-competent compiler to optimize this code down to the same machine code (and I'd think Java's JIT is at least that). Recognizing unnecessary side effects and eliminating them outright is very basic compiler behavior (it's actually one of the reasons artificial micro-level benchmarks can be misleading unless they're written very carefully, in a way that prevents the optimizer from skipping code outright). Among the easiest side effects to eliminate is a case like this, where we're incrementing a variable i but not depending on the state change afterwards.

    It seems unlikely to have any real impact on performance. Of course the ultimate way to know is to look at the final resulting machine code (not the bytecode IR but the actual final machine code), or to measure and profile it. But I'd be quite shocked, to say the least, if one were faster than the other; it would tend to make me think the compiler is doing a pretty poor job in either instruction selection or register allocation.

    That said, if one actually were (remote chance) faster than the other, I think you have it backwards. ++i would likely require less work, since it can just increment a value in place in a register. ++i is a unary operation on one operand, which maps well onto registers, which are mutable. i + 1 is an expression that requires i to be treated as immutable and would call for a second register, but only in a really horrible toy-compiler scenario that didn't optimize anything. Even then we'd be compiling this expression in the context of a function call and would have to consider things like stack spills, so even a toy compiler might make these roughly equivalent.