Consider 4 segmented images, or numpy arrays, A, B, C, and D (each pixel is either 0 or the classID of the object at that location in the image). These 4 segmented 2-D arrays are segmentations of different objects present in one image, e.g. image A has segmented object #1, image B has segmented object #2, and so on. The goal is to overlay all of these segmentations of different objects into one image instead of keeping 4 separate segmented images.
Naturally, one would just perform A + B + C + D to overlay them. However, the segmented pixels of an object may overlap across images. For example, a segmented pixel of object #1 from image A may overlap with a segmented pixel of object #2 in image B. Where this overlap occurs, the classID of the higher object # takes preference, so the classID of object #2 wins at that pixel location. This means one cannot simply add all the images to combine the segmentations into one coherent segmented image.
Example with two segmented images: object A has a pixel value of 1 and object B has 2. With only these two images in consideration, I'd want to overlay object B onto object A (each object will have a different color when combined). Object B should be visible over object A where they overlap.
I took the approach of successively adding each image to the next: if the value of any resulting pixel equals the sum of the classIDs of object #1 and object #2, that indicates an overlap, and I set those pixels to the classID of object #2. But because a given sum isn't unique to one pair of classIDs, this didn't work out correctly.
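To illustrate the ambiguity, here is a minimal sketch with made-up 1x3 masks (not the asker's actual data): an overlap of classes 1 and 2 sums to 3, which is indistinguishable from an ordinary pixel belonging to class 3.

```python
import numpy as np

# Hypothetical 1x3 masks: background is 0, objects use their classID.
a = np.array([[1, 1, 0]])  # object #1 (classID 1)
b = np.array([[0, 2, 0]])  # object #2 (classID 2)
c = np.array([[0, 0, 3]])  # object #3 (classID 3)

s = a + b + c
print(s)  # [[1 3 3]]
# The middle pixel is a genuine overlap of classes 1 and 2 (1 + 2 = 3),
# but the last pixel is simply class 3 on its own. The sums collide,
# so "sum == classID1 + classID2" cannot reliably detect overlaps.
```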
One way I know how to do it is by iterating through each pixel and comparing pairs of pixels two images at a time. If anyone has a more efficient solution, let me know!
It sounds like you just want the element-wise maximum of all the images. Assuming you are using Numpy, something like this should work:
numpy.maximum.reduce([A, B, C, D])
This applies the element-wise maximum operation pairwise across the list of arrays (i.e. a reduction).
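As a minimal sketch with made-up 2x2 masks, the reduction keeps the largest classID at every pixel, so the higher-numbered object wins wherever masks overlap:

```python
import numpy as np

# Hypothetical 2x2 masks: background is 0, objects use their classID.
A = np.array([[1, 1],
              [0, 0]])  # object #1
B = np.array([[0, 2],
              [2, 0]])  # object #2
C = np.array([[0, 0],
              [3, 3]])  # object #3
D = np.array([[0, 0],
              [0, 4]])  # object #4

combined = np.maximum.reduce([A, B, C, D])
print(combined)
# [[1 2]
#  [3 4]]
# At each overlap the higher classID wins, e.g. B (2) beats A (1)
# at the top-right pixel, and D (4) beats C (3) at the bottom-right.
```

Note this relies on the question's convention that higher object numbers have higher classIDs; if priority and classID were decoupled, a maximum over the raw values would not give the right precedence.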