I have a byte[]
that contains ABGR image data (one byte per component, in A, B, G, R order — note the function below is named accordingly). I am trying to find the most performant way to turn this into a BufferedImage
without unnecessary iterations or copies. Essentially, I'd like to configure the BufferedImage
with the right raster and color model so that it uses this memory area directly.
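For context, here is the obvious one-copy fallback I'm trying to avoid (a minimal sketch; the class and method names are just for illustration). It works because the standard TYPE_4BYTE_ABGR layout stores pixels in the same A, B, G, R byte order as my input:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

class AbgrCopy {
    static BufferedImage copyToBufferedImage(int width, int height, byte[] abgrData) {
        // TYPE_4BYTE_ABGR stores each pixel as A, B, G, R bytes, matching the input
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
        // Copy into the image's backing array (one full copy of the data)
        byte[] dst = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(abgrData, 0, dst, 0, dst.length);
        return image;
    }
}
```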
My current approach is this:
BufferedImage toBufferedImageAbgr(int width, int height, byte[] abgrData) {
    int bitMasks[] = new int[]{0xf};
    DataBuffer dataBuffer = new DataBufferByte(abgrData, width * height * 4, 0);
    int[] masks = new int[]{0xff, 0xff, 0xff, 0xff};
    DirectColorModel byteColorModel = new DirectColorModel(8,
            0xff, // Red
            0xff, // Green
            0xff, // Blue
            0xff  // Alpha
    );
    SampleModel sampleModel = new SinglePixelPackedSampleModel(DataBuffer.TYPE_BYTE, width, height, masks);
    WritableRaster raster = Raster.createWritableRaster(sampleModel, dataBuffer, null);
    BufferedImage image = new BufferedImage(byteColorModel, raster, false, null);
    return image;
}
I kept playing around with the color model, the bands and all that, but couldn't figure out the right configuration for this relatively simple problem. When I inspected the output image, it looked wrong: a grayscale image with stripe-like patterns. (Screenshots of the broken output and the original image are omitted here.)
The cause is that SinglePixelPackedSampleModel packs all samples of a pixel into a single data element — with TYPE_BYTE, that's one byte per pixel, not four — and the four identical, overlapping 0xff masks make every component read that same byte. Hence every channel gets the same value (grayscale), and only a quarter of the data is consumed (the patterns).
DirectColorModel is meant for packed integer pixels; for byte-interleaved data, a ComponentColorModel with an interleaved raster is the right fit. This configuration works:
BufferedImage toBufferedImageAbgr(int width, int height, byte[] abgrData) {
    DataBuffer dataBuffer = new DataBufferByte(abgrData, width * height * 4, 0);
    // hasAlpha is true, so the transparency hint must be TRANSLUCENT, not OPAQUE
    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
            new int[] {8, 8, 8, 8}, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    // Band offsets {3, 2, 1, 0} map R, G, B, A to bytes 3, 2, 1, 0 of each
    // 4-byte pixel, i.e. the in-memory order A, B, G, R
    WritableRaster raster = Raster.createInterleavedRaster(
            dataBuffer, width, height, width * 4, 4, new int[] {3, 2, 1, 0}, null);
    BufferedImage image = new BufferedImage(colorModel, raster, false, null);
    return image;
}
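A quick sanity check of this wrapping (a minimal self-contained sketch; the class name and single-pixel values are made up for illustration). It writes one ABGR pixel, confirms getRGB reports the corresponding ARGB value, and then mutates the source array to confirm the image really shares the memory rather than copying it:

```java
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.*;

class AbgrWrapDemo {
    static BufferedImage toBufferedImageAbgr(int width, int height, byte[] abgrData) {
        DataBuffer dataBuffer = new DataBufferByte(abgrData, width * height * 4, 0);
        ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
                new int[] {8, 8, 8, 8}, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
        WritableRaster raster = Raster.createInterleavedRaster(
                dataBuffer, width, height, width * 4, 4, new int[] {3, 2, 1, 0}, null);
        return new BufferedImage(colorModel, raster, false, null);
    }

    public static void main(String[] args) {
        // One semi-transparent green pixel: A=0x80, B=0x00, G=0xFF, R=0x00
        byte[] abgr = {(byte) 0x80, 0x00, (byte) 0xFF, 0x00};
        BufferedImage image = toBufferedImageAbgr(1, 1, abgr);
        System.out.printf("%08x%n", image.getRGB(0, 0)); // prints 8000ff00

        // The image wraps the array directly: mutating the array changes the image
        abgr[3] = (byte) 0xFF; // raise the red component
        System.out.printf("%08x%n", image.getRGB(0, 0)); // prints 80ffff00
    }
}
```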