I have been attempting the following for some time now, but it either does as the title says (gives a completely black screen), produces no image whatsoever (possibly because every pixel's alpha ends up at 0), or it crashes with a bad access on the stack near the function CGSConvertBGRX8888toRGBA8888.
Here is the function:
+ (UIImage *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy
{
    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
    NSUInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
    NSUInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    unsigned char *rawData = (unsigned char *)malloc(height * width * 4);
    CGContextRef context = CGBitmapContextCreate(rawData,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextClearRect(context, CGRectMake(0, 0, width, height));
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    // Convert phone screen coords to texture coordinates.
    xx *= width / 320;
    yy *= height / 480;

    int counter = 0;

    // Now rawData contains the image data in the RGBA8888 pixel format.
    // Get the byteIndex of the pixel tapped.
    int byteIndex = (bytesPerRow * yy) + xx * bitsPerPixel;
    CGFloat red   = (rawData[byteIndex] * 1.0) / 255.0;
    CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
    CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
    CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
    byteIndex += 4;

    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            byteIndex = (width * y) + x * bitsPerPixel;
            CGFloat redVal   = (rawData[byteIndex] * 1.0) / 255.0;
            CGFloat greenVal = (rawData[byteIndex + 1] * 1.0) / 255.0;
            CGFloat blueVal  = (rawData[byteIndex + 2] * 1.0) / 255.0;
            CGFloat alphaVal = (rawData[byteIndex + 3] * 1.0) / 255.0;
            byteIndex += 4;
            if (alphaVal != 0)
            {
                if (redVal == red && greenVal == green && blueVal == blue)
                {
                    rawData[byteIndex + 3] = 0;
                    counter++;
                }
            }
        }
    }

    NSLog(@"Pixels amount: %i", counter);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              rawData,
                                                              width * height * 4,
                                                              NULL);
    CGImageRef newCGimage = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bitsPerPixel,
                                          bytesPerRow,
                                          colorSpace,
                                          kCGBitmapByteOrderDefault,
                                          provider,
                                          NULL, NO, kCGRenderingIntentDefault);
    UIImage *newImage = [UIImage imageWithCGImage:newCGimage];

    CGDataProviderRelease(provider);
    CGImageRelease(newCGimage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(rawData);

    return newImage;
}
I am fairly certain I am obtaining the correct pixels; in my tests, outputting the selected colour to a view works fine. The failure seems to happen when I create the image after editing the raw data, or at least that is where I think the problem lies.
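For what it is worth, the data-provider round trip can be avoided entirely: if the bitmap context is created with a NULL buffer, CoreGraphics owns the pixels, they can still be edited in place via CGBitmapContextGetData, and CGBitmapContextCreateImage produces the final image. A minimal sketch, assuming the same imageRef, width and height as above (the names pixels, stride, snapshot and result are just for the sketch):

    // Sketch: let CoreGraphics allocate and own the pixel buffer (data = NULL,
    // bytesPerRow = 0 means "calculate it for me").
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    // Edit pixels in place; the row stride must come from the context itself.
    unsigned char *pixels = (unsigned char *)CGBitmapContextGetData(context);
    size_t stride = CGBitmapContextGetBytesPerRow(context);
    // e.g. pixels[stride * y + 4 * x + 3] = 0; // clear alpha of pixel (x, y)

    // Snapshot the context; no data provider or manual free needed.
    CGImageRef snapshot = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:snapshot];
    CGImageRelease(snapshot);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);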
Cheers for any help in advance.
EDIT:
I have now stripped the code down so that essentially all it does is copy the PNG into a raw data buffer, create a UIImage from that buffer, and set it; I should get the exact same image back and nothing should change. But this time the image just disappears, either because every value is 0 or because the alpha is 0, I am not sure which. The edited code is as follows:
// First get the image into your data buffer
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSUInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSUInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
CGBitmapInfo imageInfo = CGImageGetBitmapInfo(imageRef);

unsigned char *rawData = (unsigned char *)malloc(height * width * 4);
CGContextRef context = CGBitmapContextCreate(rawData,
                                             width,
                                             height,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             imageInfo);
// Draw the source image into the buffer (as in the first listing;
// without this the malloc'd buffer holds uninitialised garbage).
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

// Convert phone screen coords to texture coordinates.
xx *= (width / [[UIScreen mainScreen] bounds].size.width);
yy *= (height / [[UIScreen mainScreen] bounds].size.height);
NSLog(@"converted X pos: %i", xx);
NSLog(@"converted Y pos: %i", yy);

int counter = 0;

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                          rawData,
                                                          width * height * 4,
                                                          NULL);
CGImageRef newCGimage = CGImageCreate(width,
                                      height,
                                      bitsPerComponent,
                                      bitsPerPixel,
                                      bytesPerRow,
                                      colorSpace,
                                      imageInfo,
                                      provider,
                                      NULL, NO, kCGRenderingIntentDefault);
UIImage *newImage = [[UIImage alloc] initWithCGImage:newCGimage];

CGDataProviderRelease(provider);
CGImageRelease(newCGimage);
CGContextRelease(context);
free(rawData);

return newImage;
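As a sanity check on the converted coordinates, here is a sketch of reading back the tapped pixel (assuming the RGBA8888 layout from the first version; note the per-pixel offset is 4 bytes, not bitsPerPixel, which is 32 here):

    // Sketch: fetch the RGBA of the pixel at (xx, yy) after conversion.
    int byteIndex = (int)(bytesPerRow * yy) + xx * 4;   // 4 bytes per RGBA8888 pixel
    CGFloat red   = rawData[byteIndex]     / 255.0;
    CGFloat green = rawData[byteIndex + 1] / 255.0;
    CGFloat blue  = rawData[byteIndex + 2] / 255.0;
    CGFloat alpha = rawData[byteIndex + 3] / 255.0;
    NSLog(@"tapped pixel r=%f g=%f b=%f a=%f", red, green, blue, alpha);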
Well, I have fixed it. Following other people's examples sometimes just ends up with you spending a whole day trying to figure out something you would rather not... Either way, the code above is fine, with the exception of:
for (int y = 0; y < height; y++)
{
    byteIndex = (width * y) + x * bitsPerPixel;
    CGFloat redVal   = (rawData[byteIndex] * 1.0) / 255.0;
    CGFloat greenVal = (rawData[byteIndex + 1] * 1.0) / 255.0;
    CGFloat blueVal  = (rawData[byteIndex + 2] * 1.0) / 255.0;
    CGFloat alphaVal = (rawData[byteIndex + 3] * 1.0) / 255.0;
    byteIndex += 4;
    if (alphaVal != 0)
    {
        if (redVal == red && greenVal == green && blueVal == blue)
        {
            rawData[byteIndex + 3] = 0;
            counter++;
        }
    }
}
I was incrementing the byte index before writing back to it, so "rawData[byteIndex + 3] = 0;" was clearing the alpha of the pixel after the one just tested. Move the increment after the test (or drop it entirely, as in the sketch below) and that is all good.
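For clarity, a sketch of the corrected inner loop. Recomputing the index on each pass removes the need for the increment altogether; note that the index should, strictly, also use bytesPerRow and 4 bytes per pixel rather than width and bitsPerPixel for an RGBA8888 buffer:

    for (int y = 0; y < height; y++)
    {
        // Rows are bytesPerRow bytes apart; each RGBA8888 pixel is 4 bytes.
        int byteIndex = (int)(bytesPerRow * y) + x * 4;
        CGFloat redVal   = rawData[byteIndex]     / 255.0;
        CGFloat greenVal = rawData[byteIndex + 1] / 255.0;
        CGFloat blueVal  = rawData[byteIndex + 2] / 255.0;
        CGFloat alphaVal = rawData[byteIndex + 3] / 255.0;
        if (alphaVal != 0 && redVal == red && greenVal == green && blueVal == blue)
        {
            // Same index as the reads above, so the right pixel's alpha is cleared.
            rawData[byteIndex + 3] = 0;
            counter++;
        }
    }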
As for the picture display issue, remove "free(rawData);". CGDataProviderCreateWithData does not copy the buffer, so the CGImage was still reading from memory that had just been freed, which is what blanked the image. Note that dropping the free outright leaks the buffer; the cleaner fix is to give the provider a release callback, as sketched below.
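A sketch of that callback approach (the function name releaseRawData is my own; any function with this signature works):

    // Invoked by CoreGraphics once nothing references the provider's data any more.
    static void releaseRawData(void *info, const void *data, size_t size)
    {
        free((void *)data);
    }

    // ... then create the provider with the callback instead of NULL:
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              rawData,
                                                              width * height * 4,
                                                              releaseRawData);
    // No free(rawData) afterwards; the callback frees it at the right time.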