I am using Ogre3D and its compositors.
Basically, I am wondering what the fastest way is to blur everything around a dynamic point in the viewport. The point is given in 2D screen coordinates. The farther a pixel is from that point, the more blurred it should be. To give an idea of the amount of blur: if the blurring point is exactly in the middle of the screen, the screen edges should be maximally blurred. So the blurriness coefficient of a pixel could be thought of as
min(normalizedDistanceBetween(pixel, blurringPoint) / 0.5, 1.0)
So I am assuming the right way to go is to first render a completely blurred version of the scene using a shader. That part I already have.
Now, how do I mix this blurred image with the original scene? I see two ways:

1. In the final compositing shader, compute the distance from each output pixel to the blurring point and use it directly as the blend factor between the sharp and the blurred scene (sketched below).
2. First render the blend factor into a small (e.g. 16x16) texture in an additional RTT pass, then sample that texture in the compositing shader instead of computing the distance per pixel.
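To make the first method concrete, this is roughly the compositing fragment shader I have in mind (a GLSL sketch; sceneTex, blurTex, mixTex and blurPoint are just the names I am assuming for the bindings):

```glsl
// Sketch of method 1: per-pixel blend factor in the compositing shader.
uniform sampler2D sceneTex;   // original, sharp scene
uniform sampler2D blurTex;    // fully blurred scene
uniform vec2 blurPoint;       // blurring point in normalized screen coords (0..1)
varying vec2 uv;              // normalized screen coords of this pixel

void main()
{
    // 0 at the blurring point, 1 at half a screen away or farther.
    float k = min(distance(uv, blurPoint) / 0.5, 1.0);

    // Method 2 would instead fetch k from a small precomputed mixing texture:
    // float k = texture2D(mixTex, uv).r;

    gl_FragColor = mix(texture2D(sceneTex, uv), texture2D(blurTex, uv), k);
}
```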
Does the second method make any sense? Would it have any noticeable performance gain over the first one (even a couple of fps)? Or does creating an additional RTT pass and texture eat up all the performance gained by not calculating a (rather simple) distance function for every output pixel? I am assuming the screen size is something normal, like 1024x768, i.e. much larger than the 16x16 mixing texture. Or is there some other, simpler method?
Either version could end up with the performance edge. Unless you're doing something particularly inefficient, both approaches should be fairly fast.
If you're really concerned about it, you should implement both, and benchmark them.
If you aren't doing this already, you may be able to speed up your effect by generating your blurred buffer at half resolution. This requires a good resampling filter to avoid aliasing when reading the original scene (I recommend at least a [1,3,3,1] Gaussian, and possibly wider, like [1,5,10,10,5,1]).
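For example, a half-resolution downsample pass with a separable [1,3,3,1] kernel could look roughly like this (a GLSL sketch; sceneTex and texelSize are assumed names, with texelSize being 1.0 / source resolution):

```glsl
// Sketch: 2x downsample with a [1,3,3,1]/8 kernel in each direction,
// applied here as a single 4x4-tap pass (the outer product sums to 64).
uniform sampler2D sceneTex;   // full-resolution source
uniform vec2 texelSize;       // 1.0 / source resolution
varying vec2 uv;              // centre of the 2x2 source block for this output pixel

void main()
{
    float w[4];
    w[0] = 1.0; w[1] = 3.0; w[2] = 3.0; w[3] = 1.0;
    float o[4];
    o[0] = -1.5; o[1] = -0.5; o[2] = 0.5; o[3] = 1.5;

    vec4 sum = vec4(0.0);
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            sum += w[x] * w[y] * texture2D(sceneTex, uv + vec2(o[x], o[y]) * texelSize);

    gl_FragColor = sum / 64.0;
}
```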
In fact, if you want a particularly wide maximum blur, you could repeat the process, generating a multiresolution image pyramid. (Once you've downsampled your original, everything becomes much cheaper, because half the resolution means a quarter of the pixels.)
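For completeness, one common way to consume such a pyramid in the final pass is to blend between adjacent levels based on the blur coefficient, much like trilinear filtering between mipmap levels (GLSL sketch; level0Tex/level1Tex/level2Tex are hypothetical bindings for the pyramid levels):

```glsl
// Sketch: pick a blur amount by blending between pyramid levels.
uniform sampler2D level0Tex;  // full-res sharp scene
uniform sampler2D level1Tex;  // half-res, blurred once
uniform sampler2D level2Tex;  // quarter-res, blurred again
uniform vec2 blurPoint;       // blurring point in normalized screen coords
varying vec2 uv;

void main()
{
    float k = min(distance(uv, blurPoint) / 0.5, 1.0);  // 0 = sharp, 1 = max blur
    float level = k * 2.0;                               // map onto [0, 2]

    vec4 l0 = texture2D(level0Tex, uv);
    vec4 l1 = texture2D(level1Tex, uv);
    vec4 l2 = texture2D(level2Tex, uv);

    vec4 lower   = mix(l0, l1, clamp(level, 0.0, 1.0));
    gl_FragColor = mix(lower, l2, clamp(level - 1.0, 0.0, 1.0));
}
```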
Another possible speedup (again, if you haven't tried it already): since your compositor is doing straight alpha blending with the original image, it doesn't actually need to sample the original at all. Instead, you can output the blend factor in the alpha channel, render over the original buffer, and let the blending stage of the rasterizer do the final compositing.
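Concretely, that could look something like this (GLSL sketch, assuming the blurred image is drawn as a full-screen quad over the already-rendered scene with alpha blending enabled on the pass, e.g. scene_blend alpha_blend in the Ogre material):

```glsl
// Sketch: output the blurred colour with the blend factor in alpha and let the
// hardware blend stage (src_alpha / one_minus_src_alpha) mix it with the sharp
// scene already in the target buffer, so no second texture fetch is needed.
uniform sampler2D blurTex;    // fully blurred scene
uniform vec2 blurPoint;       // blurring point in normalized screen coords
varying vec2 uv;

void main()
{
    float k = min(distance(uv, blurPoint) / 0.5, 1.0);
    gl_FragColor = vec4(texture2D(blurTex, uv).rgb, k);
}
```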