I have noticed different behavior in Node.js vs. the browser while experimenting with automatic semicolon insertion and type coercion.
Browser:
> {}+{}
NaN
> {}+{};
NaN
> ({}+{})
"[object Object][object Object]"
> ({}+{});
"[object Object][object Object]"
NodeJS:
> {}+{}
'[object Object][object Object]'
> {}+{};
NaN
> ({}+{})
'[object Object][object Object]'
> ({}+{});
'[object Object][object Object]'
A. Why is the coercion interpreted differently with/without a semicolon, or in parentheses?
B. Which behavior is more compliant with the standard? Or is this not addressed by the standard?
UPDATE: I found that this differing behavior only occurs in Node.js. I previously thought it was V8 vs. SpiderMonkey.
I'm going to guess that you're entering this into different debug consoles. One debug console treats that as a Program, and the other as an Expression.
{}+{}
When interpreted as a Program this is equivalent to
{
// empty block
} // No semicolon is inserted because the statement is a block.
+{} // Unary plus coerces its operand to a number: Number({}), which is NaN.
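That Program reading can be checked directly in plain Node.js; a minimal sketch (the variable name is mine):

```javascript
// +{} applies unary plus to an object literal: the object is first
// converted to the primitive string "[object Object]", and
// Number("[object Object]") is NaN.
const programResult = +{}; // same value the Program reading of {}+{} produces
console.log(programResult);               // NaN
console.log(Number.isNaN(programResult)); // true
```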
When interpreted as an Expression this is equivalent to
// Infix + concatenates when either operand coerces to a string
String({}) + String({});
which coerces each object to a string and concatenates those two strings.
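The Expression reading is easy to confirm by forcing expression context with parentheses; a small sketch (variable name is mine):

```javascript
// Parentheses force {}+{} to parse as an expression, so both {} are
// object literals and infix + falls back to string concatenation.
const expressionResult = ({} + {});
console.log(expressionResult); // "[object Object][object Object]"
console.log(expressionResult === String({}) + String({})); // true
```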
Which interpreter is more compliant with the standard? Or is this not addressed by the standard?
If, as I assume, this occurs in a debug console, then no standard addresses what a debug console has to do -- it can bypass the JavaScript interpreter entirely and interpret its input as COBOL.
If you pass it into eval, the first interpretation must win, because the argument to eval is a string that is parsed as a Program.
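For example, in Node.js:

```javascript
// eval parses its argument as a Program, so the leading {} is a block
// statement and what remains is +{}, which is NaN.
console.log(eval("{}+{}")); // NaN
// Wrapping the input in parentheses forces expression parsing instead.
console.log(eval("({}+{})")); // "[object Object][object Object]"
```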