Tags: ios, objective-c, ipad, pixel, color-space

Setting Device-Independent RGB on iOS


I am detecting the RGB value of a tapped pixel. Different iPads return slightly different RGB values for the same image. I maintain a plist of the values returned per device, and when the app opens it determines which device it is running on and uses the appropriate values. This is a terrible solution - but it does work.

I now want to fix this properly, so I dug into color spaces on iOS. It seems I can use CGColorSpaceCreateCalibratedRGB to set a standard color space regardless of device, so the values returned are always the same? Or do I need to convert the returned color to a device-independent space after reading it?

However, I do not know any of the values needed to create a standard color space across devices so that my pixel color return values are always the same - or whether this is even possible.
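For what it's worth, here is a minimal, untested sketch of the change I'm imagining inside createARGBBitmapContextFromImage (below), assuming a named sRGB color space is available - kCGColorSpaceSRGB requires iOS 9 or later:

// Untested sketch: draw into an sRGB bitmap context instead of the device
// RGB one, so every device renders pixels in the same color space.
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB); // iOS 9+
CGContextRef context = CGBitmapContextCreate(bitmapData,
                                             pixelsWide,
                                             pixelsHigh,
                                             8,      // bits per component
                                             bitmapBytesPerRow,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);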

Some current example return values:
iPad 2 r31 g0 b133 a1
iPad Air r30 g0 b132 a1

Can anyone help 'normalize' the pixel return value in a device-independent way?

- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color = nil;
    CGImageRef inImage = self.image.CGImage;
    // Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}}; 

    // Draw the image to the bitmap context. Once we draw, the memory 
    // allocated for the context for rendering will then contain the 
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage); 

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char* data = CGBitmapContextGetData (cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from (x, y):
        // 4 bytes of data per pixel, w pixels per row.
        size_t x = (size_t)round(point.x);
        size_t y = (size_t)round(point.y);
        size_t offset = 4 * ((w * y) + x);
        int alpha = data[offset];
        int red   = data[offset + 1];
        int green = data[offset + 2];
        int blue  = data[offset + 3];
        //NSLog(@"offset: %zu colors: RGBA %i %i %i %i", offset, red, green, blue, alpha);
        // Note: the context uses premultiplied alpha, so for alpha < 255
        // the RGB components have already been scaled by alpha / 255.0.
        color = [UIColor colorWithRed:(red / 255.0f) green:(green / 255.0f) blue:(blue / 255.0f) alpha:(alpha / 255.0f)];
    }


    // When finished, release the context
    CGContextRelease(cgctx); 
    // Free image data memory for the context
    if (data) { free(data); }

    return color;
}
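For reference, a hypothetical call site (illustrative only; it assumes this method lives on a UIImageView subclass, as self.image suggests, and that the image is drawn 1:1 in the view):

// Illustrative only: sample the color under a tap. If the view scales the
// image, map the point into image-pixel coordinates before sampling.
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    CGPoint point = [recognizer locationInView:self];
    UIColor *tapped = [self getPixelColorAtLocation:point];
    NSLog(@"tapped color: %@", tapped);
}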


- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef) inImage {

    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void *          bitmapData;
    size_t          bitmapByteCount;
    size_t          bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow   = (pixelsWide * 4);
    bitmapByteCount     = (bitmapBytesPerRow * pixelsHigh);

    // Use the device RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();

    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc( bitmapByteCount );
    if (bitmapData == NULL) 
    {
        fprintf (stderr, "Memory not allocated!");
        CGColorSpaceRelease( colorSpace );
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits 
    // per component. Regardless of what the source image format is 
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate (bitmapData,
                                     pixelsWide,
                                     pixelsHigh,
                                     8,      // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free (bitmapData);
        fprintf (stderr, "Context not created!");
    }

    // Make sure to release the color space before returning.
    CGColorSpaceRelease( colorSpace );

    return context;
}

Solution

  • As mentioned, CGColorSpaceCreateCalibratedRGB is not supported on iOS. I, too, could really use it!

    However, you might try a different approach. What I would do is convert the color into HSV space (or some other similar perceptual space) and allow a tolerance range in your check; see the sketch at the end of this answer. Checking for an exact color is like checking floating-point numbers for exact equality: it's just not a good idea.

    You can get a UIColor's HSV components with -[UIColor getHue:saturation:brightness:alpha:]. Hue is in the range 0-1, but it represents an angle, so finding "closeness" involves a little math. Saturation and brightness behave normally, though, so an absolute difference is enough to judge how close two colors are to each other.
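    A minimal sketch of that comparison (untested; colorsAreClose and the single shared tolerance are illustrative choices, not an existing API):

    // Compare two colors in HSB space with a per-component tolerance.
    // Hue is an angle in [0, 1), so 0.0 and 1.0 meet, and we take the
    // shorter distance around the hue circle.
    static BOOL colorsAreClose(UIColor *c1, UIColor *c2, CGFloat tolerance) {
        CGFloat h1, s1, v1, a1, h2, s2, v2, a2;
        if (![c1 getHue:&h1 saturation:&s1 brightness:&v1 alpha:&a1]) return NO;
        if (![c2 getHue:&h2 saturation:&s2 brightness:&v2 alpha:&a2]) return NO;

        CGFloat dh = fabs(h1 - h2);
        dh = MIN(dh, 1.0 - dh);   // wrap around the hue circle

        return dh <= tolerance
            && fabs(s1 - s2) <= tolerance
            && fabs(v1 - v2) <= tolerance;
    }

    With something like colorsAreClose(tappedColor, targetColor, 0.05), the iPad 2 and iPad Air samples from the question (r31 g0 b133 vs. r30 g0 b132) would compare as the same color.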