I am trying to denoise an image, and I want to tell whether the denoising worked by looking at the SNR value. How can I estimate the SNR of an image in Python?
SNR is defined in different ways, but I suspect that for your application its exact definition does not matter. In general, it is the ratio of the signal strength to the noise strength. "Strength" is intentionally left ambiguous here; often the powers of the signal and noise are used, but amplitudes are also viable.
Let's assume then that SNR is the power of the image signal divided by the power of the noise signal. The power of the noise is simply its variance. You are comparing an image before and after denoising, so presumably the power of the signal doesn't change. We can ignore it and set it to 1. Now our SNR is simply defined as 1/var(noise).
So, what you need to do is estimate the noise variance, and compare that before and after denoising. You want to see this variance go down (as the SNR then increases).
It is, in general, not easy to estimate the noise variance of an image if one does not know what the image looks like without noise. You can think of the noisy image as
image = noiseless_image + noise.
So, the variance of the image is the variance of the noise + the variance of the noiseless image. If you don't know the latter, you cannot get the former.
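This additivity is easy to verify numerically. Here is a small NumPy sketch with a synthetic (entirely made-up) image, where the noiseless part is a smooth gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noiseless image: a smooth intensity ramp.
clean = np.linspace(0, 100, 256 * 256).reshape(256, 256)
noise = rng.normal(0, 5, size=clean.shape)  # Gaussian noise, sigma = 5
noisy = clean + noise

# For independent signal and noise, the variances add (up to a small
# sample-covariance term):
print(noisy.var())                  # approximately clean.var() + 25
print(clean.var() + noise.var())
```

Measuring `noisy.var()` alone therefore tells you nothing about the noise unless you also know `clean.var()`.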
But there are tricks. The simplest method, if you are willing to do a bit of manual work for each image you compare, is to outline a flat region of background (typically a small region of sky will work). It is very important that there is no natural variation in intensity in this region. Now simply compute the variance of the pixels within this region.
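A minimal sketch of this manual approach, assuming you have already picked the coordinates of a flat patch (the default slice values below are placeholders, not a recommendation):

```python
import numpy as np

def flat_region_variance(img, rows=slice(10, 60), cols=slice(200, 250)):
    """Noise variance estimated from a manually outlined flat region.

    The default coordinates are placeholders; replace them with a patch
    of your image that is truly flat (e.g. a bit of sky).
    """
    return np.asarray(img, dtype=float)[rows, cols].var()

# Quick check on synthetic data: flat background + Gaussian noise, sigma = 3.
rng = np.random.default_rng(0)
fake = np.full((300, 300), 128.0) + rng.normal(0, 3, size=(300, 300))
print(flat_region_variance(fake))   # close to 3**2 = 9
```

Compute this on the same region before and after denoising; the estimate should drop if the denoising did anything useful.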
Several fully automatic methods have been published for this purpose. I know of one by J. Immerkær: "Fast Noise Variance Estimation", Computer Vision and Image Understanding 64(2):300-302, 1996. The DIPlib library (I'm an author) has an implementation. You can use it from Python as follows:
var = dip.EstimateNoiseVariance(img)
(with img either an image object from the library, a NumPy array, or any other object that exposes a buffer). Since there is not yet an official release of the version of the library that has Python bindings, you will have to fetch and compile the library yourself to use this implementation. But considering it is quite a simple method, you could also implement it yourself.
Here is pseudo-code for the method:
mask = gradient magnitude of img
Apply Gaussian smoothing (sigma = 3) to mask
If the image has more than one channel:
    mask = max over the channels
Compute the Otsu threshold value for mask
mask = pixels where mask < threshold
error = discrete Laplace of img: convolve with [1,-2,1; -2,4,-2; 1,-2,1]
MSE = mean square value of the pixels in error that fall within mask, on a per-channel basis
If the image has more than one channel:
    MSE = mean over the per-channel MSE values
variance = MSE / 36
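If you want to stay in pure NumPy/SciPy, the pseudo-code above can be sketched as follows for a single-channel image. The Sobel-based gradient and the hand-rolled Otsu helper are my own choices here, not necessarily what DIPlib does internally:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(values, bins=256):
    """Minimal Otsu threshold: maximize the between-class variance."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w1 = np.cumsum(hist).astype(float)
    w2 = w1[-1] - w1
    cum = np.cumsum(hist * centers)
    m1 = cum / np.maximum(w1, 1e-12)
    m2 = (cum[-1] - cum) / np.maximum(w2, 1e-12)
    between = w1 * w2 * (m1 - m2) ** 2
    return centers[np.argmax(between)]

def estimate_noise_variance(img):
    """Noise variance estimate following the pseudo-code above
    (single-channel case only)."""
    img = np.asarray(img, dtype=float)

    # Mask out edges: smoothed gradient magnitude, thresholded with Otsu.
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    grad = ndimage.gaussian_filter(grad, sigma=3)
    mask = grad < otsu_threshold(grad)

    # Discrete Laplace-based error image.
    kernel = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)
    error = ndimage.convolve(img, kernel)

    # For pure Gaussian noise, E[error**2] = 36 * var(noise),
    # because the kernel's squared coefficients sum to 36.
    return np.mean(error[mask] ** 2) / 36

# Sanity check: flat image plus Gaussian noise with variance 4**2 = 16.
rng = np.random.default_rng(1)
noisy = 50 + rng.normal(0, 4, size=(128, 128))
print(estimate_noise_variance(noisy))   # roughly 16
```

For a multi-channel image you would, per the pseudo-code, take the per-channel maximum of the smoothed gradient magnitude for the mask, and average the per-channel MSE values before dividing by 36.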