java, floating-point, jit, sse, vectorization

Do any JVM's JIT compilers generate code that uses vectorized floating point instructions?


Let's say the bottleneck of my Java program really is some tight loops to compute a bunch of vector dot products. Yes I've profiled, yes it's the bottleneck, yes it's significant, yes that's just how the algorithm is, yes I've run Proguard to optimize the byte code, etc.

The work is, essentially, dot products. As in, I have two float[50] and I need to compute the sum of pairwise products. I know processor instruction sets exist to perform these kinds of operations quickly and in bulk, like SSE or MMX.
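For concreteness, the loop I'm talking about is essentially this (a minimal sketch; the class and array contents are just illustrative):

```java
public class DotProduct {
    // Plain scalar dot product over two equal-length float arrays,
    // exactly the kind of tight loop described above.
    static float dot(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        float[] a = new float[50], b = new float[50];
        for (int i = 0; i < 50; i++) { a[i] = i; b[i] = 1f; }
        System.out.println(dot(a, b)); // prints 1225.0 (sum of 0..49)
    }
}
```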

Yes, I know I could probably access these by writing some native code through JNI, but the JNI call itself turns out to be pretty expensive.

I know you can't guarantee what a JIT will or will not compile. Has anyone ever heard of a JIT generating code that uses these instructions? And if so, is there anything about the Java code that helps make it compilable this way?

The answer is probably "no", but it seems worth asking.


Solution

  • So, basically, you want your code to run faster. JNI is the answer. I know you said it didn't work for you, but let me show you that you are wrong.

    Here's Dot.java:

    import java.nio.FloatBuffer;
    import org.bytedeco.javacpp.*;
    import org.bytedeco.javacpp.annotation.*;
    
    @Platform(include = "Dot.h", compiler = "fastfpu")
    public class Dot {
        static { Loader.load(); }
    
        static float[] a = new float[50], b = new float[50];
        static float dot() {
            float sum = 0;
            for (int i = 0; i < 50; i++) {
                sum += a[i]*b[i];
            }
            return sum;
        }
        static native @MemberGetter FloatPointer ac();
        static native @MemberGetter FloatPointer bc();
        static native @NoException float dotc();
    
        public static void main(String[] args) {
            FloatBuffer ab = ac().capacity(50).asBuffer();
            FloatBuffer bb = bc().capacity(50).asBuffer();
    
            for (int i = 0; i < 10000000; i++) {
                a[i%50] = b[i%50] = dot();
                float sum = dotc();
                ab.put(i%50, sum);
                bb.put(i%50, sum);
            }
            long t1 = System.nanoTime();
            for (int i = 0; i < 10000000; i++) {
                a[i%50] = b[i%50] = dot();
            }
            long t2 = System.nanoTime();
            for (int i = 0; i < 10000000; i++) {
                float sum = dotc();
                ab.put(i%50, sum);
                bb.put(i%50, sum);
            }
            long t3 = System.nanoTime();
            System.out.println("dot(): " + (t2 - t1)/10000000 + " ns");
            System.out.println("dotc(): "  + (t3 - t2)/10000000 + " ns");
        }
    }
    

    and Dot.h:

    float ac[50], bc[50];
    
    inline float dotc() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            sum += ac[i]*bc[i];
        }
        return sum;
    }
    

    We can compile and run that with JavaCPP using this command:

    $ java -jar javacpp.jar Dot.java -exec
    

    With an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, Fedora 30, GCC 9.1.1, and OpenJDK 8 or 11, I get this kind of output:

    dot(): 39 ns
    dotc(): 16 ns
    

    That is roughly 2.4 times faster. We do need to use direct NIO buffers instead of arrays, but HotSpot can access direct NIO buffers as fast as arrays. On the other hand, manually unrolling the loop does not provide a measurable performance boost in this case.
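For reference, a manually unrolled variant of that loop would look something like this (a sketch with illustrative names; as noted above, it did not measurably help in my benchmark):

```java
public class DotUnrolled {
    // Manually unrolled by 4: four independent accumulators shorten the
    // loop-carried dependency chain on a single sum variable. A scalar
    // tail loop handles the leftover elements (50 = 4*12 + 2).
    static float dotUnrolled(float[] a, float[] b) {
        float s0 = 0f, s1 = 0f, s2 = 0f, s3 = 0f;
        int i = 0;
        for (; i + 3 < a.length; i += 4) {
            s0 += a[i]     * b[i];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        float sum = s0 + s1 + s2 + s3;
        for (; i < a.length; i++) { // tail elements
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        float[] a = new float[50], b = new float[50];
        for (int i = 0; i < 50; i++) { a[i] = i; b[i] = 2f; }
        System.out.println(dotUnrolled(a, b)); // prints 2450.0
    }
}
```

Note that using separate partial sums changes the order of the floating-point additions, so results can differ from the simple loop in the last bits; for many workloads that is acceptable, but it is worth being aware of.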