I would like to compute the intensity ratio I(x,y)/Io(x,y).
First, I read my images with rawpy because I have .nef files (Nikon raw). Then I use OpenCV to convert the images to grayscale and compute I(x,y)/Io(x,y), where I(x,y) is "brut" and Io(x,y) is "init".
But after dividing the two images (cv2.divide), cv2.meanStdDev(test) returns a NaN value.
And when I plot "test" using matplotlib I get this :
and when I use imshow from cv2 I get what I want :
I don't understand why I get NaN from cv2.meanStdDev(test), nor why the two plots differ.
import numpy as np
import cv2
import rawpy
import rawpy.enhance
import matplotlib.pyplot as plt
####################
# Reading a Nikon RAW (NEF) image
init="/media/alexandre/Transcend/Expérience/Ombroscopie/eau/initialisation/2023-09-19_19-02-33.473.nef"
brut="/media/alexandre/Transcend/Expérience/Ombroscopie/eau/DT0.2/2023-09-20_10-34-27.646.nef"
bruit="/media/alexandre/Transcend/Expérience/Ombroscopie/eau/bruit-electronique/2023-09-18_18-59-34.994.nef"
####################
# This uses rawpy library
print("reading init file using rawpy.")
raw_init = rawpy.imread(init)
image_init = raw_init.postprocess(use_camera_wb=True, output_bps=16)
print("Size of init image read:" + str(image_init.shape))
print("reading brut file using rawpy.")
raw_brut = rawpy.imread(brut)
image_brut = raw_brut.postprocess(use_camera_wb=True, output_bps=16)
print("Size of brut image read:" + str(image_brut.shape))
print("reading bruit file using rawpy.")
raw_bruit = rawpy.imread(bruit)
image_bruit = raw_bruit.postprocess(use_camera_wb=True, output_bps=16)
print("Size of bruit image read:" + str(image_bruit.shape))
####################
# (grayscale) OpenCV
print(image_init.dtype)
init_grayscale = cv2.cvtColor(image_init, cv2.COLOR_RGB2GRAY).astype(float)
brut_grayscale = cv2.cvtColor(image_brut, cv2.COLOR_RGB2GRAY).astype(float)
bruit_grayscale = cv2.cvtColor(image_bruit, cv2.COLOR_RGB2GRAY).astype(float)
print(np.max(brut_grayscale))
print(init_grayscale.dtype)
# test = (brut_grayscale)/(init_grayscale)
init = init_grayscale-bruit_grayscale
test = cv2.divide((brut_grayscale),(init_grayscale))
print(test.shape)
print(test.dtype)
print(type(test))
print(test.max())
print(test.min())
####################
# Irms
mean, std_dev = cv2.meanStdDev(test)
intensite_rms = std_dev[0][0]
print("Intensité RMS de l'image :", intensite_rms)
####################
# Matplotlib
import matplotlib.pyplot as plt
plt.imshow(test, cmap='gray')
plt.show()
# Show using OpenCV
import imutils
image_rawpy = imutils.resize(test, width=1080)
cv2.imshow("image_rawpy read file: ", image_rawpy)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
reading init file using rawpy.
Size of init image read:(5520, 8288, 3)
reading brut file using rawpy.
Size of brut image read:(5520, 8288, 3)
reading bruit file using rawpy.
Size of bruit image read:(5520, 8288, 3)
uint16
37977.0
float64
(5520, 8288)
float64
<class 'numpy.ndarray'>
nan
nan
Intensité RMS de l'image : nan
^CTraceback (most recent call last):
File "ombro.py", line 62, in <module>
plt.show()
File "/home/alexandre/.local/lib/python3.8/site-packages/matplotlib/pyplot.py", line 368, in show
return _backend_mod.show(*args, **kwargs)
File "/home/alexandre/.local/lib/python3.8/site-packages/matplotlib/backend_bases.py", line 3544, in show
cls.mainloop()
File "/home/alexandre/.local/lib/python3.8/site-packages/matplotlib/backends/backend_qt.py", line 1023, in mainloop
qt_compat._exec(qApp)
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/home/alexandre/.local/lib/python3.8/site-packages/matplotlib/backends/qt_compat.py", line 262, in _maybe_allow_interrupt
old_sigint_handler(*handler_args)
I don't understand why I get nan from cv2.meanStdDev(test)
It is very difficult to reproduce your code without the original files, but it looks like you have some NaN pixels in init_grayscale: test.max() returns NaN, while np.max(brut_grayscale) returns a non-NaN value.
I am not sure why this happens; it looks like the rawpy array may contain invalid uint16 values. Please try casting image_init to an np.array with dtype=np.uint16.
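As a quick way to check this, here is a minimal sketch with made-up arrays standing in for your brut_grayscale/init_grayscale: with IEEE floating-point division (which is the behavior your float64 cv2.divide exhibits), x/0 gives inf and 0/0 gives NaN, so counting NaNs and zero denominators localizes the problem, and statistics over the finite pixels only avoid the NaN mean:

```python
import numpy as np

# Hypothetical stand-ins for the question's brut_grayscale / init_grayscale
num = np.array([[10.0, 20.0], [30.0, 0.0]])
den = np.array([[2.0, 0.0], [5.0, 0.0]])

with np.errstate(divide="ignore", invalid="ignore"):
    test = num / den  # IEEE floats: x/0 -> inf, 0/0 -> nan

print("NaN pixels:", int(np.isnan(test).sum()))      # 1 (the 0/0 pixel)
print("zero denominators:", int((den == 0).sum()))   # 2

# Mean/std over finite pixels only, instead of cv2.meanStdDev on the whole array
finite = test[np.isfinite(test)]
print("mean of finite pixels:", finite.mean())       # (5 + 6) / 2 = 5.5
```

Applying the same np.isnan / (den == 0) counts to your real arrays should tell you whether the NaNs come from zero (or 0/0) pixels in the denominator.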
the differences between the two plots
Matplotlib's plt.imshow normalizes its input values (please see the documentation for details), while float values passed to cv2.imshow are multiplied by 255 (please see this post for details).
You can get the same result from plt.imshow by calling it as:
plt.imshow((test * 255).astype(np.uint8), cmap="gray")
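To make the difference concrete, here is a small sketch (with a made-up ratio array) using matplotlib's Normalize, which is what plt.imshow applies internally: by default it stretches the data's own [min, max] to [0, 1], whereas cv2.imshow uses the fixed [0, 1] range for floats. Passing vmin=0, vmax=1 to plt.imshow therefore reproduces the cv2 display:

```python
import numpy as np
from matplotlib.colors import Normalize

# Hypothetical intensity ratios standing in for "test"
ratio = np.array([[0.2, 0.5], [0.8, 1.0]])

# Default plt.imshow behavior: autoscale the data's [min, max] to [0, 1]
auto = Normalize(vmin=ratio.min(), vmax=ratio.max())(ratio)
print(auto)   # 0.2..1.0 stretched to 0.0..1.0

# Fixed [0, 1] range, matching cv2.imshow's x255 mapping for floats
fixed = Normalize(vmin=0.0, vmax=1.0)(ratio)
print(fixed)  # values unchanged
```

So plt.imshow(test, cmap="gray", vmin=0, vmax=1) is another way to match the cv2.imshow rendering without casting to uint8.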