
TIFFPackBitsCompressor - NPE?


I'm using com.sun.media.imageioimpl.plugins.tiff.TIFFPackBitsCompressor to try to encode an array of TIFF bytes using PackBits. I'm unfamiliar with this class and haven't found many examples of how to use it, but when following the Javadoc, I get an NPE every time I try to encode my data. As far as I can see, none of my values are null. I've tried these tests with multiple values at this point; below is my most recent iteration:

    import java.awt.image.DataBufferUShort;
    import java.util.Arrays;

    import com.sun.media.imageioimpl.plugins.tiff.TIFFPackBitsCompressor;

    TIFFPackBitsCompressor pack = new TIFFPackBitsCompressor();
    // bImageFromConvert is a 16-bit BufferedImage with all desired data.
    short[] bufferHolder = ((DataBufferUShort) bImageFromConvert.getRaster().getDataBuffer()).getData();
    // Since bImageFromConvert is 16-bit, the short array isn't the right length.
    // The conversion below handles this issue.
    byte[] byteBuffer = convertShortToByte(bufferHolder);
    // I'm not entirely sure what this int[] parameter should be.
    // For now, it is a test int[] array containing all 1s.
    int[] testint = new int[byteBuffer.length];
    Arrays.fill(testint, 1);
    // 0 offset. dimWidth = 1760, dimHeight = 2140. Not sure what the last param
    // is supposed to be in layman's terms.
    // The NPE is thrown at this line.
    int testOut = pack.encode(byteBuffer, 0, dimWidth, dimHeight, testint, 1);
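
In case it's relevant: convertShortToByte splits each 16-bit sample into two bytes. Roughly this (assuming big-endian sample order):

    // Rough sketch of my helper: split each short into two bytes, high byte first.
    private static byte[] convertShortToByte(short[] shorts) {
        byte[] bytes = new byte[shorts.length * 2];
        for (int i = 0; i < shorts.length; i++) {
            bytes[2 * i]     = (byte) (shorts[i] >>> 8); // high byte
            bytes[2 * i + 1] = (byte) shorts[i];         // low byte
        }
        return bytes;
    }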

Does anyone have any insight into what's happening? And does anyone know a better way to encode TIFF files using PackBits in a Java program?

Let me know if there's anything I can do to make my question clearer.

Thank you!


Solution

  • As said in the comments, you are not supposed to use TIFFPackBitsCompressor directly; instead, it's used internally by the JAI ImageIO TIFF plugin (the TIFFImageWriter) when you specify "PackBits" as the compression type in the ImageWriteParam. You may also pass a compressor instance in the param, if you cast it to TIFFImageWriteParam first, but this is more useful for custom compressions not known to the plugin.
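
    For illustration, passing a compressor explicitly would look something like this. This is a sketch under the assumption that you're using the JAI ImageIO writer, whose TIFFImageWriteParam exposes setTIFFCompressor; writer is the TIFF ImageWriter from the code below:

    import com.sun.media.imageio.plugins.tiff.TIFFImageWriteParam;
    import com.sun.media.imageioimpl.plugins.tiff.TIFFPackBitsCompressor;

    // Sketch: cast the default param to the TIFF-specific subclass and
    // hand it an explicit compressor (normally unnecessary for PackBits).
    TIFFImageWriteParam param = (TIFFImageWriteParam) writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionType("PackBits");
    param.setTIFFCompressor(new TIFFPackBitsCompressor());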

    Also note that the compressor will only write the PackBits-compressed pixel data; it will not create a full TIFF file.

    The normal way of writing a PackBits compressed TIFF file is:

    BufferedImage image = ...; // Your input image
    
    ImageWriter writer = ImageIO.getImageWritersByFormatName("TIFF").next(); // Assuming a TIFF plugin is installed
    
    try (ImageOutputStream out = ImageIO.createImageOutputStream(...)) { // Your output file or stream
        writer.setOutput(out);
    
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionType("PackBits");
    
        writer.write(null, new IIOImage(image, null, null), param);
    }
    
    writer.dispose();
    

    The above code should work fine using both JAI ImageIO and the TwelveMonkeys ImageIO TIFF plugins.
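
    For completeness, all classes used in the snippet above come from the standard API:

    import java.awt.image.BufferedImage;

    import javax.imageio.IIOImage;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageWriteParam;
    import javax.imageio.ImageWriter;
    import javax.imageio.stream.ImageOutputStream;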


    PS: PackBits is a very simple compression algorithm based on run-length encoding of byte data. As 16-bit data may vary wildly between the high and low bytes of a single sample, PackBits is generally not a good choice for compressing such data.
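
    To see why, here's a minimal sketch of the PackBits scheme itself (my own illustration, not the plugin's code). A count byte n in [0, 127] means "n + 1 literal bytes follow"; n in [-127, -1] means "repeat the next byte 1 - n times":

    static byte[] packBits(byte[] src) {
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        int i = 0;
        while (i < src.length) {
            // Measure the run of identical bytes starting at i (a packet holds at most 128).
            int run = 1;
            while (run < 128 && i + run < src.length && src[i + run] == src[i]) {
                run++;
            }
            if (run > 1) {
                out.write(1 - run);               // replicate packet: negative count...
                out.write(src[i]);                // ...followed by the byte to repeat
                i += run;
            } else {
                // Gather literal bytes until the next 2-byte run (at most 128).
                int start = i++;
                while (i - start < 128 && i < src.length
                        && (i + 1 >= src.length || src[i] != src[i + 1])) {
                    i++;
                }
                out.write(i - start - 1);         // literal packet: count - 1...
                out.write(src, start, i - start); // ...followed by the literal bytes
            }
        }
        return out.toByteArray();
    }

    A run of identical bytes collapses to just two bytes, but the interleaved high and low bytes of noisy 16-bit samples rarely form runs, which is why the PackBits results below barely differ from the uncompressed size.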

    As stated in my comments, using completely random values, I got the following results:

    Compression      | File size
    -----------------|-----------------
    None             |  7 533 680 bytes
    PackBits         |  7 593 551 bytes
    LZW w/predictor  | 10 318 091 bytes
    ZLib w/predictor | 10 318 444 bytes
    

    This is not very surprising, as completely random data generally isn't compressible (without data loss). For a linear gradient, which may be more similar to "photographic" image data, I got completely different results:

    Compression      | File size
    -----------------|-----------------
    None             |  7 533 680 bytes
    PackBits         |  7 588 779 bytes
    LZW w/predictor  |    200 716 bytes
    ZLib w/predictor |    144 136 bytes
    

    As you can see, the LZW and Deflate/ZLib algorithms (with a predictor step) perform MUCH better here. For "real" data, there's likely more noise, so your results will probably fall somewhere between these extremes.
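
    So if file size matters, it's worth trying one of those instead; only the compression type in the earlier code needs to change. A sketch (the exact type names depend on the plugin, so verify with getCompressionTypes()):

    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    // "LZW" and "Deflate"/"ZLib" are the names I'd expect the JAI and
    // TwelveMonkeys TIFF plugins to advertise; check before relying on them:
    // System.out.println(Arrays.toString(param.getCompressionTypes()));
    param.setCompressionType("LZW");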