Tags: ios, uiimage, masking, quartz-core

Reverse Image Masking


In the iOS SDK I am able to mask an image, but I can't do the reverse. I have a mask image containing a rectangular region, and when I apply it I get the attached result, but I want the inverse of that result.

This is the result I get:

[result image]

while I need this as the result:

[desired image]

Please help me achieve it.

Edit: my code:

UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();

CGContextTranslateCTM(context, 0.0, self.imageView.frame.size.height);
CGContextScaleCTM(context, 1.0, -1.0);

CGImageRef maskImage = [[UIImage imageNamed:@"2.png"] CGImage];
CGContextClipToMask(context, self.imageView.bounds, maskImage);

CGContextTranslateCTM(context, 0.0, self.imageView.frame.size.height);
CGContextScaleCTM(context, 1.0, -1.0);

[[self.imageView image] drawInRect:self.imageView.bounds];

UIImage *image11 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

self.imageView.image = image11;

Thanks


Solution

  • I achieved it in two steps. It may not be the best way to do it, but it works.

    1. Invert your mask image.
    2. Apply the mask.

    Look at the code below.

     - (void)viewDidLoad
    {
        [super viewDidLoad];
    
        // i() appears to be the author's shorthand for [UIImage imageNamed:]
        _imageView = [[UIImageView alloc] initWithImage:i(@"test1.jpg")];
        _imageView.image = [self maskImage:i(@"face.jpg") withMask:[self negativeImage]];
        [self.view addSubview:_imageView];
    }
    
    

    The method below is taken from here

    - (UIImage *)negativeImage
    {
        // get width and height as integers, since we'll be using them as
        // array subscripts, etc, and this'll save a whole lot of casting
        CGSize size = self.imageView.frame.size;
        int width = size.width;
        int height = size.height;
    
        // Create a suitable RGB+alpha bitmap context in BGRA colour space
        CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
        unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
        CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colourSpace);
    
        // draw the current image to the newly created context
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self.imageView.image CGImage]);
    
        // run through every pixel, a scan line at a time...
        for(int y = 0; y < height; y++)
        {
            // get a pointer to the start of this scan line
            unsigned char *linePointer = &memoryPool[y * width * 4];
    
            // step through the pixels one by one...
            for(int x = 0; x < width; x++)
            {
                // get RGB values. We're dealing with premultiplied alpha
                // here, so we need to divide by the alpha channel (if it
                // isn't zero, of course) to get uninflected RGB. We
                // multiply by 255 to keep precision while still using
                // integers
                int r, g, b;
                if(linePointer[3])
                {
                    r = linePointer[0] * 255 / linePointer[3];
                    g = linePointer[1] * 255 / linePointer[3];
                    b = linePointer[2] * 255 / linePointer[3];
                }
                else
                    r = g = b = 0;
    
                // perform the colour inversion
                r = 255 - r;
                g = 255 - g;
                b = 255 - b;
    
                // multiply by alpha again, divide by 255 to undo the
                // scaling before, store the new values and advance
                // the pointer we're reading pixel data from
                linePointer[0] = r * linePointer[3] / 255;
                linePointer[1] = g * linePointer[3] / 255;
                linePointer[2] = b * linePointer[3] / 255;
                linePointer += 4;
            }
        }
    
        // get a CG image from the context, wrap that into a
        // UIImage
        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        UIImage *returnImage = [UIImage imageWithCGImage:cgImage];
    
        // clean up
        CGImageRelease(cgImage);
        CGContextRelease(context);
        free(memoryPool);
    
        // and return
        return returnImage;
    }
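The per-pixel arithmetic above (un-premultiply, invert, re-premultiply) is language-neutral, so it can be sanity-checked on a single pixel. Here is a quick sketch in Python mirroring the integer math in `negativeImage` (the function name is mine, for illustration only):

```python
def invert_premultiplied(r, g, b, a):
    """Invert one RGBA pixel stored with premultiplied alpha,
    using the same integer arithmetic as the loop above."""
    if a:
        # un-premultiply, scaling by 255 to stay in integers
        r, g, b = r * 255 // a, g * 255 // a, b * 255 // a
    else:
        r = g = b = 0
    # invert each colour channel
    r, g, b = 255 - r, 255 - g, 255 - b
    # re-premultiply by alpha
    return r * a // 255, g * a // 255, b * a // 255, a

# fully opaque white inverts to fully opaque black
print(invert_premultiplied(255, 255, 255, 255))  # (0, 0, 0, 255)
```

Note that for a fully transparent pixel (alpha 0) the result stays all-zero, which is what the `else` branch in the Objective-C loop produces as well.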
    

    The method below is taken from here.

    - (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
    {
        CGImageRef maskRef = maskImage.CGImage;
    
        // wrap the grayscale mask data in a Quartz image mask
        CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                            CGImageGetHeight(maskRef),
                                            CGImageGetBitsPerComponent(maskRef),
                                            CGImageGetBitsPerPixel(maskRef),
                                            CGImageGetBytesPerRow(maskRef),
                                            CGImageGetDataProvider(maskRef), NULL, false);
    
        CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
        UIImage *result = [UIImage imageWithCGImage:masked];
    
        // release the CGImageRefs we created to avoid leaking them
        CGImageRelease(mask);
        CGImageRelease(masked);
    
        return result;
    }
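The reason the inversion step is needed: a Quartz image mask paints where the mask sample is *dark* (a sample of 0 lets the image through, a maximum sample blocks it), the opposite of what you might expect. A rough numeric sketch of that rule in Python, with a hypothetical helper name of my own:

```python
def apply_image_mask(pixel_alpha, mask_sample):
    """Approximate the Quartz image-mask rule for 8-bit samples:
    the mask sample acts as an inverse alpha, so dark mask areas
    reveal the image and light areas hide it."""
    return pixel_alpha * (255 - mask_sample) // 255

# black mask sample (0) -> image pixel fully visible
print(apply_image_mask(255, 0))    # 255
# white mask sample (255) -> image pixel fully hidden
print(apply_image_mask(255, 255))  # 0
```

So masking with the inverted mask hides exactly the region the original mask would have shown, which gives the reversed result the question asks for.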