When using my radial gradient algorithm with the Spread property set to Reflect or Repeat, I get severe moiré when the bands are too close together. This is expected.
What I have been trying to work out is whether it is possible to detect where this happens. If I can compute some kind of "distinct <-> muddled" value, I can use it as a weight to blend between the gradient color at a given pixel and a weighted average of all the colors, thereby mitigating the moiré.
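To make that concrete, the blend I have in mind would look roughly like this (just a sketch; blendTowardsAverage is a made-up name, and the pix struct is assumed to hold floating-point channels):
pix blendTowardsAverage( pix gradientColor, pix averageColor, double weight )
{
    // weight = 0 keeps the exact gradient color,
    // weight = 1 replaces it entirely with the average of all gradient colors.
    pix out;
    out.red   = gradientColor.red   * ( 1.0 - weight ) + averageColor.red   * weight;
    out.green = gradientColor.green * ( 1.0 - weight ) + averageColor.green * weight;
    out.blue  = gradientColor.blue  * ( 1.0 - weight ) + averageColor.blue  * weight;
    return out;
}
The open question is how to compute that weight per pixel.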
I know this is quite an obscure topic, and maybe this is more of a math question, but does anybody have any ideas?
For reference, my algorithm is similar to this implementation.
Here is what my code gives now:
Use a 3x3 filter and for each 3x3 pixel patch compute the color variance: for each pair of adjacent pixels in the patch (e.g. pix(0, 0) and pix(0, 1)), take the difference of each color channel, square it, and sum.
So something like:
// Sum of squared per-channel differences between two pixels.
double pixelVariance( pix a, pix b )
{
    double variance = 0;
    variance += ( a.red   - b.red   ) * ( a.red   - b.red   );
    variance += ( a.green - b.green ) * ( a.green - b.green );
    variance += ( a.blue  - b.blue  ) * ( a.blue  - b.blue  );
    return variance;
}
Then the variance for the whole 3x3 patch:
double patchVariance( Patch patch )
{
    double variance = 0;
    // Sum pixelVariance over every horizontally and vertically adjacent
    // pair of pixels in the patch (12 pairs in a 3x3 patch).
    for ( int y = 0; y < 3; ++y )
    {
        for ( int x = 0; x < 3; ++x )
        {
            if ( x < 2 ) variance += pixelVariance( patch( x, y ), patch( x + 1, y ) );
            if ( y < 2 ) variance += pixelVariance( patch( x, y ), patch( x, y + 1 ) );
        }
    }
    return variance;
}
Patches with high variance are not smooth gradients and are almost certainly going to be high-moiré areas.
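One way to turn that patch variance into the "distinct <-> muddled" weight described above would be a simple ramp between two thresholds (again only a sketch; moireWeight and both thresholds are made up and would need tuning against real output):
double moireWeight( double patchVar, double lowThreshold, double highThreshold )
{
    // Map the raw patch variance to [0, 1]: 0 below lowThreshold (clean
    // gradient), 1 above highThreshold (likely moiré), linear in between.
    if ( patchVar <= lowThreshold )  return 0.0;
    if ( patchVar >= highThreshold ) return 1.0;
    return ( patchVar - lowThreshold ) / ( highThreshold - lowThreshold );
}
That weight can then drive the blend toward the averaged color described above.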