Tags: javascript, c++, performance, floating-point, benchmarking

Why does JavaScript appear to be 4 times faster than C++?


For a long time, I had thought of C++ as being faster than JavaScript. However, today I wrote a benchmark script to compare the speed of floating-point calculations in the two languages, and the result is amazing!

JavaScript appears to be almost 4 times faster than C++!

I had both languages do the same job on my i5-430M laptop: performing a = a + b 100,000,000 times. C++ takes about 410 ms, while JavaScript takes only about 120 ms.

I really do not have any idea why JavaScript runs so fast in this case. Can anyone explain that?

The code I used for JavaScript (run with Node.js) is:

(function() {
    var a = 3.1415926, b = 2.718;
    var i, j, d1, d2;
    for(j=0; j<10; j++) {
        d1 = new Date();
        for(i=0; i<100000000; i++) {
            a = a + b;
        }
        d2 = new Date();
        console.log("Time Cost:" + (d2.getTime() - d1.getTime()) + "ms");
    }
    console.log("a = " + a);
})();

And the code for C++ (compiled by g++) is:

#include <stdio.h>
#include <ctime>

int main() {
    double a = 3.1415926, b = 2.718;
    int i, j;
    clock_t start, end;
    for(j=0; j<10; j++) {
        start = clock();
        for(i=0; i<100000000; i++) {
            a = a + b;
        }
        end = clock();
        printf("Time Cost: %dms\n", (end - start) * 1000 / CLOCKS_PER_SEC);
    }
    printf("a = %lf\n", a);
    return 0;
}

Solution

  • I may have some bad news for you if you're on a Linux system (one that complies with POSIX, at least in this respect). The clock() call returns the number of clock ticks consumed by the program, scaled by CLOCKS_PER_SEC, which is 1,000,000.

    That means, if you're on such a system, you're talking in microseconds for C and milliseconds for JavaScript (as per the JS online docs). So, rather than JS being four times faster, C++ is actually 250 times faster.
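
    As an aside, if you want the C++ side to report in the same unit that the JavaScript side uses, one option is to time the loop in wall-clock milliseconds with std::chrono instead of clock(). This is only a minimal sketch of that idea, reusing the loop from the question:

    #include <cstdio>
    #include <chrono>

    int main() {
        double a = 3.1415926, b = 2.718;

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < 100000000; i++) {
            a = a + b;
        }
        auto end = std::chrono::steady_clock::now();

        // duration_cast to milliseconds reports the same unit that the
        // JavaScript Date arithmetic does.
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
        printf("Time Cost: %lldms\n", (long long) ms);
        printf("a = %lf\n", a);
        return 0;
    }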

    Now it may be that you're on a system where CLOCKS_PER_SEC is something other than a million; in that case, you can run the following program on your system to see whether it's scaled by the same value:

    #include <stdio.h>
    #include <time.h>
    #include <stdlib.h>

    #define MILLION * 1000000

    /* Print an integer with comma separators, followed by the character c. */
    static void commaOut (int n, char c) {
        if (n < 1000) {
            printf ("%d%c", n, c);
            return;
        }

        commaOut (n / 1000, ',');
        printf ("%03d%c", n % 1000, c);
    }

    int main (int argc, char *argv[]) {
        int i;

        /* Print the wall-clock time, spin until clock() has advanced by
           thirty million ticks, then print the wall-clock time again. If
           the two timestamps are about thirty seconds apart, the scaling
           factor is one million. */
        system("date");
        clock_t start = clock();
        clock_t end = start;

        while (end - start < 30 MILLION) {
            for (i = 10 MILLION; i > 0; i--) {};
            end = clock();
        }

        system("date");
        commaOut (end - start, '\n');

        return 0;
    }
    

    The output on my box is:

    Tuesday 17 November  11:53:01 AWST 2015
    Tuesday 17 November  11:53:31 AWST 2015
    30,001,946
    

    showing that the scaling factor is a million. If you run that program, or check CLOCKS_PER_SEC directly, and the scaling factor turns out not to be one million, you need to look at some other things.
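
    If you'd rather check the constant directly than time a loop, something along these lines will do (the cast is there because the exact type of CLOCKS_PER_SEC varies between systems):

    #include <stdio.h>
    #include <time.h>

    int main (void) {
        /* POSIX requires CLOCKS_PER_SEC to be 1,000,000; other systems may differ. */
        printf ("CLOCKS_PER_SEC = %ld\n", (long) CLOCKS_PER_SEC);
        return 0;
    }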


    The first step is to ensure your code is actually being optimised by the compiler. That means, for example, setting -O2 or -O3 for gcc.
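
    For instance, assuming the question's source is saved as bench.cpp (a file name chosen here just for illustration), that's the difference between builds along these lines:

    g++ bench.cpp -o bench        # unoptimised (gcc's default, -O0)
    g++ -O2 bench.cpp -o bench    # optimised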

    On my system with unoptimised code, I see:

    Time Cost: 320ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    a = 2717999973.760710
    

    and it's three times faster with -O2, albeit with a slightly different answer, though only by about one millionth of a percent:

    Time Cost: 140ms
    Time Cost: 110ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    a = 2718000003.159864
    

    That would bring the two situations back on par with each other, something I'd expect, since JavaScript is no longer the interpreted beast of the old days, where each token was re-interpreted every time it was seen.

    Modern JavaScript engines (V8, Rhino, etc.) can compile the code to an intermediate form (or even to machine code), which may allow performance roughly on a par with compiled languages like C.

    But, to be honest, you don't tend to choose JavaScript or C++ for speed; you choose them for their areas of strength. There aren't many C compilers floating around inside browsers, and I've not noticed many operating systems or embedded apps written in JavaScript.