I'm working on tracking objects based on color, and I was using the EmguCV library to threshold my color image to a binary black-and-white image. The thresholding itself was quite fast, about 50 ms for a 320x240 image. I'm using the RG Chromaticity color space, so there are some necessary calculations involved.
Now I'm trying to speed it up using pointers, but the result is very similar to what I got with EmguCV (around 50 ms per image).
I'd like to ask someone more experienced what I'm doing wrong. Here is a short snippet of my color thresholding implementation. It's based on this one: https://web.archive.org/web/20140906075741/http://bobpowell.net/onebit.aspx.
public static Bitmap ThresholdRGChroma(Bitmap original, double angleMin,
    double angleMax, double satMin, double satMax)
{
    Bitmap bimg = new Bitmap(original.Width, original.Height, PixelFormat.Format1bppIndexed);
    BitmapData imgData = original.LockBits(new Rectangle(0, 0, original.Width, original.Height), ImageLockMode.ReadOnly, original.PixelFormat);
    BitmapData bimgData = bimg.LockBits(new Rectangle(0, 0, bimg.Width, bimg.Height), ImageLockMode.ReadWrite, bimg.PixelFormat);

    int pixelSize = 3;
    double r, g, angle, sat;

    unsafe
    {
        byte* R, G, B;
        byte* row;
        int RGBSum;

        for (int y = original.Height - 1; y >= 0; y--)
        {
            row = (byte*)imgData.Scan0 + (y * imgData.Stride);
            for (int x = original.Width - 1; x >= 0; x--)
            {
                // get rgb values
                B = &row[x * pixelSize];
                G = &row[x * pixelSize + 1];
                R = &row[x * pixelSize + 2];

                RGBSum = *R + *G + *B;

                if (RGBSum == 0)
                {
                    SetIndexedPixel(x, y, bimgData, false);
                    continue;
                }

                // calculate r and g for rg chroma color space
                r = (double)*R / RGBSum;
                g = (double)*G / RGBSum;

                // and angle and saturation
                angle = GetAngleRad(r, g) * (180.0 / Math.PI);
                sat = Math.Sqrt(Math.Pow(g, 2) + Math.Pow(r, 2));

                // conditions to set pixel black or white
                if ((angle >= angleMin && angle <= angleMax) && (sat >= satMin && sat <= satMax))
                    SetIndexedPixel(x, y, bimgData, true);
                else
                    SetIndexedPixel(x, y, bimgData, false);
            }
        }
    }

    bimg.UnlockBits(bimgData);
    original.UnlockBits(imgData);

    return bimg;
}
private unsafe static void SetIndexedPixel(int x, int y, BitmapData bmd, bool pixel)
{
    int index = y * bmd.Stride + (x >> 3);
    byte* p = (byte*)bmd.Scan0.ToPointer();
    byte mask = (byte)(0x80 >> (x & 0x7));

    if (pixel)
        p[index] |= mask;
    else
        p[index] &= (byte)(mask ^ 0xff);
}
private static double GetAngleRad(double x, double y)
{
    if (x - _rgChromaOriginX == 0)
        return 0.0;

    double angle = Math.Atan((y - _rgChromaOriginY) / (x - _rgChromaOriginX)); // 10ms

    if (x < _rgChromaOriginX && y > _rgChromaOriginY)
        angle = angle + Math.PI;
    else if (x < _rgChromaOriginX && y < _rgChromaOriginY)
        angle = angle + Math.PI;
    else if (x > _rgChromaOriginX && y < _rgChromaOriginY)
        angle = angle + 2 * Math.PI;

    return angle;
}
You're doing a lot of unnecessary math for each pixel, calculating exact values only to check whether they fall inside some limits. You can simplify the comparisons by precomputing some adjustments to the limits instead.
The easiest substitution is the saturation. You're doing a square root that you can avoid by squaring the limits instead; since the square root is monotonic, comparing the squared values gives the same result.
double satMin2 = satMin*satMin;
double satMax2 = satMax*satMax;
// ...
sat2 = g*g + r*r;
//conditions to set pixel black or white
if ((angle >= angleMin && angle <= angleMax) && (sat2 >= satMin2 && sat2 <= satMax2))
A similar trick can be used with the angle. Rather than calculating the angle with Math.Atan, figure out what those limits equate to in your r and g ranges.
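For example, if your angular range is narrower than 180 degrees, one option (a sketch only, reusing the names from the code above and assuming angleMin/angleMax are measured counter-clockwise from the positive r axis around the chroma origin, the same way GetAngleRad measures them) is to precompute unit vectors for the two angle limits once and test each pixel with two cross products instead of calling Math.Atan:
// Precompute once, outside the pixel loop.
// Sketch: assumes angleMax > angleMin and (angleMax - angleMin) < 180 degrees.
double minRad = angleMin * Math.PI / 180.0;
double maxRad = angleMax * Math.PI / 180.0;
double minDirX = Math.Cos(minRad), minDirY = Math.Sin(minRad);
double maxDirX = Math.Cos(maxRad), maxDirY = Math.Sin(maxRad);
// ...
// Inside the loop, replace the GetAngleRad call and the degree comparison.
// The pixel direction must lie counter-clockwise of the min limit and
// clockwise of the max limit (two 2D cross products).
double dx = r - _rgChromaOriginX;
double dy = g - _rgChromaOriginY;
bool angleOk = (minDirX * dy - minDirY * dx) >= 0 && (maxDirX * dy - maxDirY * dx) <= 0;
sat2 = g * g + r * r;
if (angleOk && sat2 >= satMin2 && sat2 <= satMax2)
    SetIndexedPixel(x, y, bimgData, true);
else
    SetIndexedPixel(x, y, bimgData, false);
This moves the trigonometric call, the radian-to-degree conversion and the quadrant branches out of the inner loop; the only Cos/Sin calls happen once, before the loop starts.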