ios · objective-c · opencv · video-processing

iOS - OpenCV Video processing optimization


I'm working on an iOS project. I need to detect colored circles on live video. I'm using CvVideoCameraDelegate. Here is my code:

-(void)processImage:(cv::Mat &)image{
    cv::medianBlur(image, image, 3);
    Mat hvs;
    cvtColor(image, hvs, COLOR_BGR2HSV);

    // Red wraps around the hue axis, so threshold both ends of the range
    Mat lower_red;
    Mat upper_red;
    inRange(hvs, Scalar(0, 100, 100), Scalar(10, 255, 255), lower_red);
    inRange(hvs, Scalar(160, 100, 100), Scalar(179, 255, 255), upper_red);

    Mat red_hue;
    addWeighted(lower_red, 1, upper_red, 1, 0, red_hue);
    GaussianBlur(red_hue, red_hue, cv::Size(9, 9), 2, 2);

    std::vector<cv::Vec3f> circles;
    HoughCircles(red_hue, circles, CV_HOUGH_GRADIENT, 1, red_hue.rows / 8, 100, 20, 0, 0);
    for (size_t current = 0; current < circles.size(); ++current) {
        cv::Point center(cvRound(circles[current][0]), cvRound(circles[current][1]));
        int radius = cvRound(circles[current][2]);
        cv::circle(image, center, radius, cv::Scalar(0, 255, 0), 5);
    }
}

It works fine, but it takes a lot of time and the video is laggy. I wanted to move my code to another queue, but then I started getting EXC_BAD_ACCESS on this line: cv::medianBlur(image, image, 3);.

I started using Objective-C just for this project, so it is a bit hard for me to understand what is going on behind the scenes, but I realized that the image parameter is a reference to the camera's Mat (at least that is what my C++ knowledge says), so by the time my block gets to execute the code, the Mat no longer exists. (Am I right?)
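For reference, a minimal sketch of that lifetime problem and the usual way around it, assuming processImage: is called once per frame with a Mat the camera reclaims afterwards (names here are illustrative):

-(void)processImage:(cv::Mat &)image{
    // `image` is only valid while processImage: runs; once the camera
    // releases the frame buffer, any Mat still pointing at it dangles.
    cv::Mat frame = image.clone(); // deep copy: the block keeps this data alive
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        cv::Mat blurred;
        cv::medianBlur(frame, blurred, 3); // safe: operates on our own copy
        // ... rest of the pipeline ...
    });
}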

Then I tried to get around that problem by adding this

Mat m;
image.copyTo(m);

before dispatching to my queue. But this caused a memory leak. (Why isn't it released automatically? Again, I don't have much Objective-C knowledge.)

Then I had one last idea: I added the line Mat m = image; as the first line of the block. This way I started getting EXC_BAD_ACCESS from inside cv::Mat, and it was still lagging. Here is how my code looks now:

-(void)processImage:(cv::Mat &)image{
    // First attempt:
    // Mat m;
    // image.copyTo(m);
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        Mat m = image; // second attempt
        cv::medianBlur(m, m, 3);
        Mat hvs;
        cvtColor(m, hvs, COLOR_BGR2HSV);
        Mat lower_red;
        Mat upper_red;
        inRange(hvs, Scalar(0, 100, 100), Scalar(10, 255, 255), lower_red);
        inRange(hvs, Scalar(160, 100, 100), Scalar(179, 255, 255), upper_red);

        Mat red_hue;
        addWeighted(lower_red, 1, upper_red, 1, 0, red_hue);
        GaussianBlur(red_hue, red_hue, cv::Size(9, 9), 2, 2);

        std::vector<cv::Vec3f> circles;
        HoughCircles(red_hue, circles, CV_HOUGH_GRADIENT, 1, red_hue.rows / 8, 100, 20, 0, 0);
        for (size_t current = 0; current < circles.size(); ++current) {
            cv::Point center(cvRound(circles[current][0]), cvRound(circles[current][1]));
            int radius = cvRound(circles[current][2]);
            cv::circle(m, center, radius, cv::Scalar(0, 255, 0), 5);
        }
    });
}
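As far as I can tell, Mat m = image; cannot help here: cv::Mat's copy constructor is a shallow, reference-counted copy, and a Mat built over an external buffer (as the camera delegate's Mat appears to be) does not own or reference-count that memory at all. A standalone sketch of those semantics, not taken from the project's code:

// A Mat wrapping an external buffer has no refcount on the pixel data
std::vector<uchar> pixels(640 * 480 * 3, 0);
cv::Mat wrapper(480, 640, CV_8UC3, pixels.data());
cv::Mat shallow = wrapper;          // header copy, same pixel data
cv::Mat deep    = wrapper.clone();  // owns its own pixel data
// When the buffer is freed (as the camera frees each frame buffer),
// `shallow` points at freed memory, while `deep` remains valid.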

I would appreciate any help, or maybe a tutorial about video processing on iOS; everything I've found either used other environments or didn't take enough processing time to need optimization.


Solution

  • OK, for those who run into the same problem: I've managed to figure out the solution. My second attempt was very close; the problem (I think) was that I tried to process every frame, so a copy of each one was kept in memory, and because the processing takes much more time than the interval between frames, the copies stacked up and filled the memory. So I modified the code to process one frame at a time and skip (show without processing) all the others. A sketch of that approach is below.
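A minimal sketch of that frame-skipping approach, assuming a BOOL property and a results property on the delegate class (the names isProcessing and lastCircles are hypothetical, not from the actual project, and cross-thread synchronization is omitted for brevity):

-(void)processImage:(cv::Mat &)image{
    if (!self.isProcessing) {
        self.isProcessing = YES;
        cv::Mat frame = image.clone(); // deep copy the block can own
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            cv::Mat work = frame; // mutable header onto the cloned data
            std::vector<cv::Vec3f> found;
            cv::medianBlur(work, work, 3);
            // ... HSV thresholding + HoughCircles exactly as in the question,
            //     writing the detections into `found` ...
            self.lastCircles = found; // hand the result back
            self.isProcessing = NO;   // allow the next frame to be processed
        });
    }
    // Draw the most recent detections on every frame, so skipped frames
    // still show the overlay while a new frame is being processed.
    std::vector<cv::Vec3f> last = self.lastCircles;
    for (size_t i = 0; i < last.size(); ++i) {
        cv::Point center(cvRound(last[i][0]), cvRound(last[i][1]));
        cv::circle(image, center, cvRound(last[i][2]), cv::Scalar(0, 255, 0), 5);
    }
}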