
Normalizing an image after (gaussian) filtering


I have implemented a Gaussian filter following the algorithm of Nixon and Aguado. The algorithm (after building the template as described here: gaussian template) is the following.
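For illustration, building such a template might look like this (a Python sketch, not the book's code; the size and sigma below are arbitrary example values). The key property is that the coefficients sum to 1, so convolving with it computes a weighted average and cannot push values outside the input range:

```python
import math

def gaussian_template(size, sigma):
    """Build a size x size Gaussian template, normalised so its
    coefficients sum to 1 (i.e. a weighted average of pixel values)."""
    half = size // 2
    template = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
                 for x in range(-half, half + 1)]
                for y in range(-half, half + 1)]
    total = sum(sum(row) for row in template)
    # Dividing by the total makes the weights sum to 1, so the filtered
    # output of a 0-255 image mathematically stays in 0-255.
    return [[v / total for v in row] for row in template]

t = gaussian_template(5, 1.0)
print(sum(sum(row) for row in t))  # very close to 1.0
```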

The pseudo code is in MATLAB style I believe.

function convolved=convolve(image,template)
%New image point brightness convolution of template with image
%Usage: [new_image]=convolve(image,template of point values)
%Parameters: image - array of points
%            template - array of weighting coefficients
%Author: Mark S. Nixon
%get image dimensions
[irows,icols]=size(image);
%get template dimensions
[trows,tcols]=size(template);
%set a temporary image to black
temp(1:irows,1:icols)=0;

%half of template rows is
trhalf=floor(trows/2);
%half of template cols is
tchalf=floor(tcols/2);
%then convolve the template
for x=tchalf+1:icols-tchalf %address all columns except border
    for y=trhalf+1:irows-trhalf %address all rows except border
        total=0; %avoid shadowing MATLAB's built-in sum
        for iwin=1:tcols %address template columns
            for jwin=1:trows %address template rows
                total=total+image(y+jwin-trhalf-1,x+iwin-tchalf-1)*template(jwin,iwin);
            end
        end
        temp(y,x)=total;
    end
end

%finally, normalise the image
convolved=normalise(temp); 
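For reference, a rough Python translation of the pseudo code above (a sketch, not the book's code: 0-based indexing, plain lists of rows, and the border pixels left at 0, just like the MATLAB version):

```python
def convolve(image, template):
    """Convolve image (list of rows) with template.
    Border pixels that the template cannot fully cover are left at 0,
    mirroring the MATLAB pseudo code (indices here are 0-based)."""
    irows, icols = len(image), len(image[0])
    trows, tcols = len(template), len(template[0])
    trhalf, tchalf = trows // 2, tcols // 2
    out = [[0.0] * icols for _ in range(irows)]
    for y in range(trhalf, irows - trhalf):      # all rows except border
        for x in range(tchalf, icols - tchalf):  # all cols except border
            acc = 0.0
            for j in range(trows):               # template rows
                for i in range(tcols):           # template columns
                    acc += image[y + j - trhalf][x + i - tchalf] * template[j][i]
            out[y][x] = acc
    return out
```

With a template whose weights sum to 1 (e.g. a 3x3 box of 1/9), a constant image stays constant in the interior, which is a quick sanity check for a port.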

Anyway, what worries me is the last step, "normalise". I have tried my algorithm (written in C#) and some pixels got the value 255.00000003 (which is obviously larger than 255). Should I "normalise" the results to rescale them into the 0-255 range? Wouldn't that modify the image beyond the Gaussian itself? I just want this operation to involve a Gaussian filter and nothing else.


EDIT: I have eliminated the "normalization" step and it seems to work well, so I have no idea why the authors of the book recommended it. Still, it worries me that my program will crash if, for some reason, some value > 255 appears and cannot be drawn.


Solution

  • As others have pointed out in the comments, normalizing the image in the sense of rescaling each channel to span the full 0 to 255 range would be bad: that would change the image's brightness and contrast, not just blur it.

    Normalizing the image in the sense that each value is clamped between 0 and 255 should not be necessary with an appropriate filter kernel (one whose coefficients sum to 1). In practice, however, it can be necessary or useful because of the way floating point numbers work. Floating point numbers can't represent every possible real number, and every computation can introduce some inaccuracy. This is most likely the cause of the 255.00000003 you saw.
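A small illustration of the rounding issue and a defensive fix (a Python sketch; `clamp` is a hypothetical helper standing in for whatever your C# code would use). Accumulating many floating point products introduces tiny errors, so a value that is mathematically exactly 255 can come out a hair above it; clamping removes that without visibly altering the image:

```python
# Binary floats cannot represent 0.1 exactly, so repeated addition drifts:
total = sum(0.1 for _ in range(10))
print(total == 1.0)  # False: total is 0.9999999999999999

def clamp(value, lo=0.0, hi=255.0):
    """Clamp one channel value into the displayable range."""
    return max(lo, min(hi, value))

print(clamp(255.00000003))  # 255.0
print(clamp(-0.0000001))    # 0.0
```

For an in-range value the clamp is a no-op, so it only ever corrects the rounding spill-over at the edges of the range.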

    Like many signal processing algorithms, this one assumes discrete time/space but continuous values. It's simply much easier to reason about those kinds of algorithms and to describe them mathematically.

    On a computer you don't have continuous values. Images use discrete values, most often an integer between 0 and 255 for each channel (8 bits per channel); sound is often encoded with 16 bits per sample.

    In the vast majority of cases this is perfectly acceptable, but it is actually yet another filter (although a non-linear one) applied after your Gaussian filter's output. So yes, in a strict sense you do modify the output of the Gaussian filter, either when you save the image or when you display it on a screen.
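That final quantisation step can be made explicit. Clipping and rounding a float channel to an 8-bit integer is itself a small non-linear mapping applied after the Gaussian (a sketch under the assumption of 8-bit output; `to_uint8` is a hypothetical helper name):

```python
def to_uint8(value):
    """Quantise one float channel value to 8 bits: clip to [0, 255],
    then round to the nearest integer. This is the non-linear 'extra
    filter' applied whenever the result is saved or displayed."""
    return int(round(max(0.0, min(255.0, value))))

print(to_uint8(255.00000003))  # 255
print(to_uint8(127.6))         # 128
```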