
CVPixelBuffer Writing using UIImage.draw() is TOO SLOW


I'm an undergraduate student using the Core ML framework to build a video human-segmentation app on iPhone, but as the title says, I've run into a huge problem.

I have a UIImage that I need to resize, pad, and draw into a CVPixelBuffer to feed a MobileNet model, but this process is just too slow: about 30 ms per frame, which is unacceptable.

To be specific, the call to UIImage.draw(in: CGRect) is the bottleneck: it alone takes 20+ ms, which is the major issue.

My code is below:

func dealRawImage(image : UIImage, dstshape : [Int], pad : UIImage) -> CVPixelBuffer?
{
    // decide whether to shrink in height or width
    let height = image.size.height
    let width = image.size.width
    let ratio = width / height
    let dst_width = Int(min(CGFloat(dstshape[1]) * ratio, CGFloat(dstshape[0])))
    let dst_height = Int(min(CGFloat(dstshape[0]) / ratio, CGFloat(dstshape[1])))
    let origin = [Int((dstshape[0] - dst_height) / 2), Int((dstshape[1] - dst_width) / 2)]

    // init a pixelBuffer to store the resized & padded image
    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
    CVPixelBufferCreate(kCFAllocatorDefault,
                        dstshape[1],
                        dstshape[0],
                        kCVPixelFormatType_32ARGB,
                        attrs as CFDictionary,
                        &pixelBuffer)

    // get the pointer of this pixelBuffer
    CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

    // init a context that contains this pixelBuffer to draw in
    let context = CGContext(data: pixelData,
                            width: dstshape[1],
                            height: dstshape[0],
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)!

    // push context
    UIGraphicsPushContext(context)
    context.translateBy(x: 0, y: CGFloat(dstshape[0]))
    context.scaleBy(x: 1, y: -1)

    pad.draw(in: CGRect(x: 0, y: 0, width: dstshape[1], height: dstshape[0]))
    // THIS SINGLE FUNCTION COSTS ME 20+ ms AND IS THE MAJOR ISSUE !
    image.draw(in: CGRect(x: origin[1], y: origin[0], width: dst_width, height: dst_height))

    UIGraphicsPopContext()

    // unlock
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

    return pixelBuffer
}

And I just call this function like this:

let input = dealRawImage(image: raw_input_image, dstshape: [224, 224], pad: black_image)

where raw_input_image is the UIImage I read from memory, dstshape is the shape I want to resize the image to, and black_image is a completely black UIImage used for padding.

I've searched this site but couldn't find a similar issue.

Is there any way to make this process faster and save this project? I don't want to abandon two weeks of work.


Solution

  • It's been a while since I've dealt with CVPixelBuffers, and I haven't used CoreML at all yet.

    When I did work with CVPixelBuffers, I found that I got the best performance by creating a single pixel buffer at the target size and keeping it around. I was taking pixels from the camera, passing them to OpenGL as a texture, manipulating them, and mapping the output into the same CVPixelBuffer. I was able to use the same memory structure for all of that. I suggest taking that approach.
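    A minimal sketch of that approach: allocate the pixel buffer and its CGContext once, then reuse them for every frame, so the per-frame work is only the draw itself. The class name `ReusablePixelBuffer` and the `render(_:in:)` method are made up for illustration; the buffer format (32ARGB, device RGB, noneSkipFirst) mirrors the code in the question, and drawing a `CGImage` directly with `CGContext.draw` sidesteps the `UIGraphicsPushContext` / `UIImage.draw` path.

    ```swift
    import UIKit
    import CoreVideo

    /// Hypothetical helper: one CVPixelBuffer and one CGContext, created up
    /// front and reused for every frame instead of re-allocated per call.
    final class ReusablePixelBuffer {
        let buffer: CVPixelBuffer
        private let context: CGContext

        init?(width: Int, height: Int) {
            var pb: CVPixelBuffer?
            let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                         kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
            guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                      kCVPixelFormatType_32ARGB,
                                      attrs as CFDictionary, &pb) == kCVReturnSuccess,
                  let created = pb else { return nil }

            // Lock while we capture the base address for the context.
            CVPixelBufferLockBaseAddress(created, [])
            defer { CVPixelBufferUnlockBaseAddress(created, []) }
            guard let ctx = CGContext(data: CVPixelBufferGetBaseAddress(created),
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(created),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
            else { return nil }
            buffer = created
            context = ctx
        }

        /// Draw into the cached buffer; no allocation happens on this path.
        /// Drawing a CGImage directly avoids UIImage.draw's extra overhead.
        func render(_ cgImage: CGImage, in rect: CGRect) {
            CVPixelBufferLockBaseAddress(buffer, [])
            context.draw(cgImage, in: rect)
            CVPixelBufferUnlockBaseAddress(buffer, [])
        }
    }
    ```

    With this in place, the per-frame call becomes something like `reusable.render(raw_input_image.cgImage!, in: targetRect)` and the same `reusable.buffer` is handed to the model each time. Whether this gets you under your frame budget would need profiling, but it removes the buffer/context setup cost from the hot path.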