I am trying to fetch an int[] of pixels from a BufferedImage using parallel computing. More specifically, I am dividing the image into chunks 64 pixels wide and fetching the pixels of each section separately. The main reason I am using parallel computing is that I am developing a video player and want to use multi-threading to parse the frame pixels.
Normally, you can easily fetch all the pixels using the following code:
(1a)
final int width = image.getWidth();
final int[] pixels = image.getRGB(0, 0, width, image.getHeight(), null, 0, width);
However, in my case, I tried to use IntStream and the parallel() method to parallelize the process:
(1b)
public static int[] getRGBFast(@NotNull final BufferedImage image) {
    final int width = image.getWidth();
    final int height = image.getHeight();
    final int[] rgb = new int[width * height];
    final int num = width / 64;
    final int leftover = width - num * 64;
    IntStream.range(0, num + (width % 64 == 0 ? 0 : 1))
            .parallel()
            .forEach(chunk -> {
                final int pixel = chunk << 6;
                if (chunk == num) {
                    image.getRGB(pixel, 0, leftover, height, rgb, 0, width);
                } else {
                    image.getRGB(pixel, 0, 64, height, rgb, 0, width);
                }
            });
    return rgb;
}
When comparing the results from (1a) and (1b), it seems that almost all the pixels are misplaced. I used the following code to compare the two arrays:
for (int i = 0; i < rgb.length; i++) {
    final int color = rgb[i];
    final int old = original[i];
    if (color != old) {
        System.out.printf("Index: %d mismatch [%d,%d]%n", i, color, old);
    }
}
The last couple thousand lines of output look like this (and many more mismatch lines follow). To get a visual sense of the problem, I ran the code 10 times and displayed the resulting images; strangely, the results varied on each iteration:
For reference, the original image looks like this:
I am not sure where my algorithm is flawed. Could anyone point out the possible issue that I am encountering?
In this code,
if (chunk == num) {
    image.getRGB(pixel, 0, leftover, height, rgb, 0, width);
} else {
    image.getRGB(pixel, 0, 64, height, rgb, 0, width);
}
the offset parameter to getRGB is always zero. Therefore, every chunk is written into the "leftmost" part of the array (the part that is leftmost when the array is interpreted as a 2D width x height image), at zero offset. The different threads write to the same region, and the last one to run "wins"; which one that is varies run by run, row by row, and even pixel by pixel. That explains the different results and the glitches.
To keep each chunk in place, the offset should match the x coordinate (in general, the offset should be the index of the top-left pixel, x + y * width; here y is zero, so only x remains): (not tested)
if (chunk == num) {
    image.getRGB(pixel, 0, leftover, height, rgb, pixel, width);
} else {
    image.getRGB(pixel, 0, 64, height, rgb, pixel, width);
}
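For completeness, here is the whole method from the question with only the offset changed. It is still untested, so treat it as a sketch rather than a verified fix:

public static int[] getRGBFast(@NotNull final BufferedImage image) {
    final int width = image.getWidth();
    final int height = image.getHeight();
    final int[] rgb = new int[width * height];
    final int num = width / 64;
    final int leftover = width - num * 64;
    IntStream.range(0, num + (width % 64 == 0 ? 0 : 1))
            .parallel()
            .forEach(chunk -> {
                // x coordinate of this chunk's first column; since y is 0,
                // it is also the chunk's offset into rgb
                final int pixel = chunk << 6;
                if (chunk == num) {
                    image.getRGB(pixel, 0, leftover, height, rgb, pixel, width);
                } else {
                    image.getRGB(pixel, 0, 64, height, rgb, pixel, width);
                }
            });
    return rgb;
}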
By the way, I recommend dividing the image along its y axis instead of its x axis, so that the region each thread copies into is contiguous. That isn't more correct, but it may be more efficient, and it isn't any more difficult.
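A minimal, untested sketch of that row-based split, assuming the same imports and surrounding class as getRGBFast (the 64-row chunk height and the name getRGBRowChunks are arbitrary choices for the example):

// Assumes the same imports as the question: java.awt.image.BufferedImage, java.util.stream.IntStream
public static int[] getRGBRowChunks(final BufferedImage image) {
    final int width = image.getWidth();
    final int height = image.getHeight();
    final int[] rgb = new int[width * height];
    final int chunkHeight = 64;                                   // rows per chunk; any positive value works
    final int chunks = (height + chunkHeight - 1) / chunkHeight;  // round up for a partial last chunk
    IntStream.range(0, chunks)
            .parallel()
            .forEach(chunk -> {
                final int y = chunk * chunkHeight;
                final int h = Math.min(chunkHeight, height - y);
                // offset = x + y * width; x is 0, so each chunk fills a contiguous
                // block of h * width ints starting at y * width
                image.getRGB(0, y, width, h, rgb, y * width, width);
            });
    return rgb;
}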