Given a 2D set of points
import numpy as np

pts = np.random.uniform(low=0, high=127, size=32).reshape((-1, 2))
a density image can be computed by drawing peaks and filtering with a Gaussian blur:
import cv2

sigma = 2
peaks_img = np.zeros((128, 128))
peaks_img[np.int_(pts[:, 0]), np.int_(pts[:, 1])] = 1
density_img = cv2.GaussianBlur(peaks_img, (0, 0), sigma)
Is there a nice (and fast) method for doing this without the cast to int, thereby keeping the floating-point precision?
The usual libraries (OpenCV, scipy.ndimage, etc.) need the rounding you did because they work in the context of image processing: both the input and the output are images, so both are defined on a pixel grid. Generalizing this is not straightforward, especially since the usual Gaussian blur has some "conservation laws" wired into it through a sum rule of the Gaussian kernel used (the kernel sums to 1, so blurring preserves the total weight in the image).
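To make that sum rule concrete, here is a minimal NumPy sketch (1D for brevity): a normalized Gaussian kernel sums to 1, so blurring a peak redistributes its weight but conserves the total, as long as the peak is away from the edges.

```python
import numpy as np

sigma = 2
radius = int(4 * sigma)
x = np.arange(-radius, radius + 1)
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()           # normalization enforces the sum rule
print(kernel.sum())              # sums to 1 (up to floating-point error)

# blurring a single peak redistributes but conserves its weight
signal = np.zeros(64)
signal[32] = 1.0                 # peak well inside the domain
blurred = np.convolve(signal, kernel, mode='same')
print(blurred.sum())             # still 1 (up to floating-point error)
```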
I would try generalizing the Gaussian blur by simply adding up proper Gaussian peaks at each (float) position and looking at the resulting function at the relevant pixel coordinates. We can enforce the "conservation of weight" in a similar vein for our continuous approximation, but I'm not entirely sure that this makes perfect sense as a 2D density image. Anyway, this is the best generalization I can think of right now:
import numpy as np

# inputs
imsize = (128, 128)
npeaks = 16
sigma = 2
pts_x, pts_y = np.random.uniform(low=0, high=min(imsize)-1, size=(2, npeaks, 1, 1))
# weird size: prepare for broadcasting
# pts_x.shape == pts_y.shape == (npeaks, 1, 1)

ii, jj = np.ogrid[:imsize[0], :imsize[1]]  # memory-efficient np.indices
peaks = np.exp(-((pts_x - ii)**2 + (pts_y - jj)**2)/(2*sigma**2))  # shape (npeaks, *imsize)
# normalize each peak to preserve weight 1 for each
peaks /= peaks.sum(axis=(1, 2), keepdims=True)
# sum over the peaks to end up with an array of shape imsize
peaks = peaks.sum(axis=0)
# print(peaks.sum()) is now 16.0
# same sum for the blurred OpenCV version: 16.18
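For reuse, the same recipe can be wrapped into a small helper; this is just a sketch, and the function name density_image is my own invention, not a library API:

```python
import numpy as np

def density_image(pts, imsize, sigma):
    """Sum a unit-weight Gaussian at each float (row, col) position in pts."""
    ii, jj = np.ogrid[:imsize[0], :imsize[1]]
    px = pts[:, 0][:, None, None]  # shape (npeaks, 1, 1) for broadcasting
    py = pts[:, 1][:, None, None]
    peaks = np.exp(-((px - ii)**2 + (py - jj)**2) / (2 * sigma**2))
    peaks /= peaks.sum(axis=(1, 2), keepdims=True)  # weight 1 per point
    return peaks.sum(axis=0)

pts = np.random.uniform(low=0, high=127, size=(16, 2))
img = density_image(pts, (128, 128), sigma=2)
print(img.sum())  # 16, up to floating-point error, by construction
```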
Comparing the above (left) with the OpenCV-generated version (right):
Notice the largest difference at the top, where one of the peaks is almost at the edge. The difference arises because OpenCV does a proper convolution with a Gaussian kernel, which is affected by the presence or absence of edges, whereas my approach blindly sums up Gaussian peaks at each coordinate. These are limitations/features you have to be aware of. All in all, the above might be an acceptable replacement depending on your specific needs.
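A minimal NumPy illustration of this edge effect (1D, using np.convolve with zero padding as a stand-in for a border-aware blur): a discretely blurred peak near the border loses the part of its weight that falls off the edge, while the normalized continuous peak keeps weight 1 on the grid by construction.

```python
import numpy as np

sigma = 2
x = np.arange(-8, 9)
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()

signal = np.zeros(32)
signal[1] = 1.0                      # peak almost at the edge
blurred = np.convolve(signal, kernel, mode='same')  # zero padding
print(blurred.sum())                 # < 1: part of the kernel falls off the edge

# the normalized continuous peak keeps weight 1 by construction
peak = np.exp(-(np.arange(32) - 1.2)**2 / (2 * sigma**2))
peak /= peak.sum()
print(peak.sum())                    # 1 (up to floating-point error)
```

Note that cv2.GaussianBlur uses a reflecting border by default rather than zero padding, so its edge behavior differs in detail; the point is only that any pixel-grid convolution must make some choice at the border, while the direct peak summation never sees one.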
For posterity, the above figures were created with the random points
array([[ 71.84229522, 53.87674552],
[ 49.46010032, 25.54820408],
[ 118.08357845, 6.83220309],
[ 97.20503813, 76.96684427],
[ 1.00902832, 51.26414236],
[ 82.38165581, 30.2121626 ],
[ 64.46298579, 58.15192887],
[ 121.90414697, 38.02821749],
[ 101.81869852, 49.64438987],
[ 28.49284669, 41.81582785],
[ 9.48948284, 55.24329885],
[ 63.54365708, 116.05157199],
[ 86.00412101, 46.56952082],
[ 34.27945909, 40.62669415],
[ 34.46355519, 14.85889878],
[ 67.23707619, 76.76113722]])