
Speeding up checking pixel color at point


I have this code, which checks pixel colors using mostly C-level Core Graphics calls:

- (NSArray *)colorsForPixelsAtPoints:(NSArray *)pointValues
{
    NSMutableArray *colors = [NSMutableArray array];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Pinpoint individual pixels from the drawn view.
    for (NSValue *pointValue in pointValues)
    {
        // Setup variables
        unsigned char pixelData[4] = {0, 0, 0, 0};

        CGSize imageSize = self.size;
        CGPoint point = [pointValue CGPointValue];

        // Create graphics context
        CGContextRef colorContext = CGBitmapContextCreate(pixelData, 1, 1, 8, 4, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGContextSetBlendMode(colorContext, kCGBlendModeCopy);
        CGContextTranslateCTM(colorContext, -point.x, (point.y - imageSize.height));

        // Draw image
        CGRect colorFrame = CGRectMake(0, 0, imageSize.width, imageSize.height);
        CGContextDrawImage(colorContext, colorFrame, [self CGImage]);

        // Get color information
        UIColor *pixelColor = [UIColor colorWithRed: (pixelData[0] / 255.0)
                                              green: (pixelData[1] / 255.0)
                                               blue: (pixelData[2] / 255.0)
                                              alpha: 1];
        [colors addObject: pixelColor];

        // Clean up
        CGContextRelease(colorContext);
    }

    // Clean up
    CGColorSpaceRelease(colorSpace);
    return colors;
}

I want to optimise the time it takes.

I'm currently wondering about moving the CGBitmapContextCreate call to before the for loop, and how I could make that work.

Any other ideas for speeding this up would be appreciated.


Solution

  • See: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?

    Instead of reading multiple contiguous pixels (as the linked answer does), change the method to read pixels at the requested coordinates. That way the image is decoded into a raw buffer once, rather than redrawn for every point:

    ...
    // Now your rawData contains the image data in the RGBA8888 pixel format.
    for (int ii = 0 ; ii < [pointValues count] ; ++ii)
    {
        NSValue *pointValue = [pointValues objectAtIndex:ii];
        CGPoint point = [pointValue CGPointValue];
        NSUInteger byteIndex = (bytesPerRow * (NSUInteger)point.y) + ((NSUInteger)point.x * bytesPerPixel);
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
    
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }
    ...
    

    Also make sure your points are on integer coordinates or you might run into weird results!