Matlab's imgaussfilt equivalent in Python

I am trying to replicate Matlab's imgaussfilt behaviour in Python, but I have been unable to reproduce the results exactly. The docs don't help much, since there is no explanation of what exactly is being done (for instance, how that function differs from fspecial, or from Octave's imsmooth with Gaussian as an argument).

Matlab Code


with output

 [[-0.02936392 -0.03168419 -0.0343706  ...  0.03136455  0.02864487
 [-0.03212093 -0.0347433  -0.03775943 ...  0.03507484  0.03209807
 [-0.03512981 -0.03808864 -0.04147075 ...  0.03873834  0.03549163
 [-0.07713804 -0.08337475 -0.08975262 ... -0.04314206 -0.03945251
 [-0.07145457 -0.07714807 -0.08297654 ... -0.03986635 -0.03641579
 [-0.06605107 -0.07122191 -0.07651892 ... -0.03684535 -0.03362917

The closest I have managed to get with Python has been:


skimage.filters.gaussian(image, sigma=s, mode='nearest', truncate=2.0)

with output

[[-0.02936397 -0.03168425 -0.03437067 ...  0.03136461  0.02864492
   0.0258515 ]
 [-0.03212099 -0.03474336 -0.03775951 ...  0.03507491  0.03209814
   0.0290614 ]
 [-0.03512988 -0.03808872 -0.04147083 ...  0.03873842  0.03549169
 [-0.07713819 -0.08337491 -0.08975279 ... -0.04314214 -0.03945259
 [-0.07145471 -0.07714821 -0.0829767  ... -0.03986643 -0.03641586
 [-0.06605119 -0.07122205 -0.07651907 ... -0.03684542 -0.03362923

which is similar but not exactly the same. Is there a method that approximates it more closely? Do I need to change one of the parameters?

EDIT: If you are looking for a close approximation using OpenCV @avisionx has provided a good starting point in the solutions.


  • There are multiple possible causes for this. Since Matlab is not open-source software, it's impossible to know the exact cause, but we can make some educated guesses.

    Regarding what Gaussian filtering does: it replaces every pixel with a weighted sum of its neighbors. The weights are determined by the Gaussian probability density function, so they depend on sigma, but also on the truncate parameter that you can see in the scipy.ndimage.gaussian_filter documentation. This parameter is needed because, technically, the Gaussian function never decays to zero; it only approaches zero as the distance goes to infinity. A "perfect" Gaussian blur would therefore need to sum an infinite number of neighboring pixels, which is impossible, so the function needs to decide at what distance the weights are close enough to zero to stop summing.
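    As a concrete illustration, assuming scipy's documented truncation rule (the kernel is cut off at int(truncate * sigma + 0.5) pixels from the center), truncate=2.0 reproduces Matlab's default FilterSize of 2*ceil(2*sigma)+1, which is presumably why that value appears in your call:

```python
sigma = 2.0
truncate = 2.0  # Matlab's default FilterSize is 2*ceil(2*sigma)+1, i.e. roughly truncate=2

# scipy (and skimage.filters.gaussian, which wraps it) cuts the kernel off at
# radius = int(truncate * sigma + 0.5) pixels from the center, so the kernel
# spans 2*radius + 1 pixels along each axis.
radius = int(truncate * sigma + 0.5)
print(2 * radius + 1)  # 9, matching Matlab's default filter size for sigma=2
```

    Note that scipy's default is truncate=4.0, so leaving it unset gives a wider kernel than Matlab's default, and correspondingly larger differences.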

    An additional complicating factor is that the Gaussian blur is separable: a 2D (or nD) blur can be computed by blurring along each axis in sequence, which reduces computing costs. Separability affects the order of computation, and floating-point arithmetic is not exact, so if Matlab and Python perform the operations in a different order, you would expect slightly different results.
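    A quick sketch of separability using scipy (the image here is just a random stand-in): one 2D call agrees with two successive 1D passes, one per axis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))

# One 2D call...
blurred_2d = gaussian_filter(image, sigma=1.5, mode='nearest')

# ...is equivalent to two 1D passes, one per axis, because the Gaussian
# kernel is separable. The two results agree to floating-point precision.
blurred_sep = gaussian_filter1d(image, sigma=1.5, axis=0, mode='nearest')
blurred_sep = gaussian_filter1d(blurred_sep, sigma=1.5, axis=1, mode='nearest')

print(np.allclose(blurred_2d, blurred_sep))  # True
```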

    Even if you filter the axes in the exact same order (which you might achieve by, e.g., transposing the image in Python, filtering, then transposing it back), the underlying low-level array computation libraries might still sum arrays in a different order, which again would change the exact results.
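    The root issue is that floating-point addition is not associative, so regrouping the very same terms changes the rounding in the last bits:

```python
# Floating-point addition is not associative: the same three terms,
# grouped differently, round to different results.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False
print(a - b)   # a tiny nonzero difference, on the order of 1e-16
```

    Differences of this size in every multiply-add of a filter accumulate into exactly the kind of seventh-digit discrepancies shown in the two outputs above.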

    In short, some possible causes for the differences you're seeing:

    • different truncation
    • different implementations (separating the filter or not)
    • different order of processing of the axes
    • different ordering of the low-level sums

    There may be no way of disentangling these, and in the end my advice mirrors Cris Luengo's comment: your results should probably not rely on values more precise than six significant digits. An exact match between Matlab and Python here would be meaningless, because neither can guarantee perfect accuracy with respect to the theoretical value of a Gaussian filter. Both are approximations, and an exact match would only mean that you are making the same approximation errors in both.
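    In practice, that means comparing the two results with a tolerance rather than demanding bit-exact equality. A minimal sketch with NumPy, using the first few values from the two outputs above as stand-ins for the full arrays:

```python
import numpy as np

# First three values of each output from the question; in a real comparison
# you would load the full array exported from Matlab instead.
matlab_result = np.array([-0.02936392, -0.03168419, -0.03437060])
python_result = np.array([-0.02936397, -0.03168425, -0.03437067])

# rtol=1e-5 corresponds to agreement to roughly six significant digits.
print(np.allclose(python_result, matlab_result, rtol=1e-5, atol=0))  # True
```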