
Branching in JavaScript


Consider this code:

if (a) 
    doSomething(a)
else if (b)
    doSomething(b)
else
    doSomething(c)

I can equivalently rewrite this using JavaScript's logical operators as:

(a && doSomething(a)) || (!a && b && doSomething(b)) || (!a && !b && doSomething(c))

I know it's not really readable, but will this somehow optimize the previous version?

Because the && and || operators return the actual value of an operand rather than a boolean, will there be fewer comparisons?


Solution

  • This question has pretty much been answered in the comments, but I wanted to expand on some of the comments and offer some additional insight.

    As Frédéric Hamidi pointed out in the comments, a good JavaScript JIT compiler will probably optimize both samples of code and produce the exact same result. However, let's imagine that there is no JIT compiler providing optimizations, and that you instead have an interpreter executing the commands sequentially.

    In that case, in your first example (using if-elseif-else), you would have the following steps taking place (excuse the rough assembly pseudocode, but I think it illustrates the issue):

     1. CMP a, 0 // compare value of a to zero
     2. JZ 5 // If comparison is zero (a and 0 are equivalent), jump to the address of the else-if (starts on 5) instruction
     3. CALL doSomething(a) // Not how you would pass a parameter in assembly, but we'll skip over that stuff as it is irrelevant
     4. JMP 10 // Jump to end line. We do not need to do other evaluations.
     5. CMP b, 0 // Compare value of b to zero
     6. JZ 9 // If comparison is zero, jump to the else instruction (line 9)
     7. CALL doSomething(b)
     8. JMP 10 // Jump to end line. We do not need to do other evaluations.
     9. CALL doSomething(c) // Else, we do something to c
     10. RET // Return/exit. We are finished.
    
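    The same control flow can be made concrete in JavaScript itself. Here is a rough sketch that wraps each condition in a hypothetical `truthy` helper so we can count how many truthiness checks the if/else version performs (the names `truthy`, `checks`, `calls`, and the sample values are illustrative, not part of the original question):

    ```javascript
    // Instrumentation: every condition check bumps a counter,
    // and every doSomething call records its argument.
    let checks = 0;
    const truthy = (v) => { checks++; return Boolean(v); };
    const calls = [];
    const doSomething = (v) => { calls.push(v); };

    const a = 0, b = 1, c = 2; // sample values: a falsy, b truthy

    if (truthy(a))
        doSomething(a);
    else if (truthy(b))
        doSomething(b);
    else
        doSomething(c);

    console.log(checks); // 2 checks: a (falsy), then b (truthy)
    console.log(calls);  // [1] — only doSomething(b) ran
    ```

    With these inputs, the if/else ladder stops after two checks, exactly mirroring the early `JMP` to the end in the pseudocode above.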

    On the other hand, let's look at the sequence for your second sample of code (the one that uses only boolean operations):

     1. CMP a, 0
     2. JZ 6 // Start of comparison #2
     3. CALL doSomething(a)
     4. CMP EAX, 0 // Let's assume the call to doSomething puts its result in EAX
     5. JNZ 22 // Jump to the end if doSomething returned a "truthy" result. Line 22 is the function's return point
     6. NOT a // Let's say this instruction puts the NOTed a in the EDX register
     7. CMP EDX, 0
     8. JZ 14 // Start of comparison #3
     9. CMP b, 0
     10. JZ 14 // Start of comparison #3
     11. CALL doSomething(b)
     12. CMP EAX, 0
     13. JNZ 22 // Again, jumping to the return point if doSomething returned a "truthy" value
     14. NOT a
     15. CMP EDX, 0
     16. JZ 22
     17. NOT b
     18. CMP EDX, 0
     19. JZ 22
     20. CALL doSomething(c)
     21. CMP EAX, 0 // Result is unused; we fall through to the return either way
     22. RET
    
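    To see those extra evaluations without dropping down to assembly, the same hypothetical counter can be applied to the operator version (again, `truthy`, `checks`, `calls`, and the sample values are illustrative):

    ```javascript
    // Same instrumentation as before: count truthiness checks, record calls.
    let checks = 0;
    const truthy = (v) => { checks++; return Boolean(v); };
    const calls = [];
    const doSomething = (v) => { calls.push(v); }; // returns undefined (falsy)

    const a = 0, b = 1, c = 2;

    // The rewrite, with each truthiness test made explicit. Note that because
    // doSomething returns undefined (falsy), the || chain keeps evaluating
    // into the third group even after doSomething(b) has already run.
    (truthy(a) && doSomething(a)) ||
        (truthy(!a) && truthy(b) && doSomething(b)) ||
        (truthy(!a) && truthy(!b) && doSomething(c));

    console.log(checks); // 5 checks vs. 2 for the if/else version
    console.log(calls);  // still [1] — the same single call happens
    ```

    Both versions end up invoking doSomething exactly once with the same argument, but the operator form re-tests !a and !b, which is the interpreter overhead the pseudocode above illustrates.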

    I'm going to go ahead and say that code sample #1 (using if/else if/else branching) would probably be more efficient. Again, as mentioned in the comments on your question, a good JIT compiler will probably optimize both code samples to equivalent states, but if you are relying solely on an interpreter, without any kind of compiler to optimize the code, then the code in your second example will require more operations, because there are more conditionals and variables to check.

    NOTE: The assembly here is by no means 100% accurate, and it is really pseudocode rather than proper assembly. I have simply included it to point out the difference in the number of operations that must be executed for each code sample.