I'm working on a job where I need to create an image from a model and a signal, and these are the resulting images for signals 1 and 2, respectively:
But ideally I need to get something like this:
I'm also open to other methods or image-processing approaches that get this result.
I tried some basic normalization methods, but the best I could get is this:
which still has some noise.
Thanks in advance.
Assuming all your images have the same top-down gradient in signal strength, with a constant response horizontally, you can normalize the response with something like this (I experimented with a tool other than Python and OpenCV, so the following is the NumPy/SciPy equivalent, which should be easy to adapt):
import numpy as np
import scipy.ndimage as ndi

# img: the input image as a 2D grayscale array
M = np.max(img, axis=1)               # the maximum value of each row
M = ndi.grey_dilation(M, size=15)     # give each maximum a larger reach
img = img.astype(float) / M[:, None]  # normalize each row by its maximum intensity
output = img > 0.5                    # some fixed threshold
After the normalization, the fixed threshold was able to extract each of the local maxima easily.
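If you then need the positions of the dots rather than just a binary mask, a connected-component pass over output gives them. This is a minimal sketch using scipy.ndimage, reusing the variable names from the snippet above:

import scipy.ndimage as ndi

labels, n = ndi.label(output)   # one connected component per thresholded dot
centers = ndi.center_of_mass(output, labels, range(1, n + 1))
print(n, "dots, first centers:", centers[:3])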
The 15 in the dilation (note that M is a 1D array at this point) is the length of the 1D structuring element. It should be large enough to cover the gaps between the rows of dots: 5 should be enough for the first example, while the second may need about 10.
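If you're unsure which length is enough for a given image, one hypothetical quick check is to look at the smallest per-row maximum after the dilation: once the structuring element bridges the gaps between dot rows, that value jumps from background level up to a real dot intensity. A sketch, reusing img from above:

import numpy as np
import scipy.ndimage as ndi

M = img.max(axis=1).astype(float)   # per-row maxima before dilation
for length in (5, 10, 15):          # candidate structuring-element lengths
    Md = ndi.grey_dilation(M, size=length)
    print(length, "lowest row maximum:", Md.min())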
The idea here is that this row-wise normalization works even if you don't know how close together the dots are on each row, and the dilation handles rows that contain no dot at all. The combined operation (per-row maximum followed by the 1D dilation) is equivalent to dilating the image with a structuring element that is 15 pixels high and infinitely wide.
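To make that equivalence concrete, here is a sketch comparing the two computations with scipy.ndimage. "Infinitely wide" becomes a window of 2*w - 1 columns, which reaches every column from any position, and both versions use the default reflecting boundary so no padding values enter the maximum. The 2D version is only for verification; the row-wise one is much cheaper:

import numpy as np
import scipy.ndimage as ndi

h, w = img.shape
M1 = ndi.grey_dilation(img.max(axis=1), size=15)   # per-row max, then 1D dilation
M2 = ndi.grey_dilation(img, size=(15, 2 * w - 1))  # 15 rows high, effectively infinitely wide
assert np.array_equal(M2, np.broadcast_to(M1[:, None], (h, w)))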