
ROI extraction with cv::Rect in OpenCV


I am trying to extract the first quarter of a 100x100 pixel image. I was thinking of defining a ROI with cv::Rect and indexing the image with it (see the code example below). Furthermore, my original image is part of a vector of cv::Mat (in the code example below I only work with one image; later there will be several, but that is probably irrelevant here).

What I am observing is strange behaviour: the newly created image "Test" looks fine for roughly the first half of its rows; after that it shows completely random pixels (see the example image).

What is the cause?

Image description - left: original 100x100 image / right: the created "Test" image with the errors

std::vector<cv::Mat> Image(1);   // create a vector of one 100x100 pixel image
Image[0] = cv::imread(argv[1]);  // or in iOS:  Image[0] = [in_image CVMat];
cv::Rect roi1 = cv::Rect(0, 0, Image[0].cols/2, Image[0].rows/2);
cv::Mat Test = Image[0](roi1);
cv::imshow("final_result", Test);    // the Test image has error rows - why?

Another example: the second case uses the original 150x150 pixel image shown on the left. The rectangle rect1 is 100x100 pixels. Again the same problem: the cropped image looks fine at the beginning, but from a certain row on the pixels get messed up (in this example they all turned black). Below is the original iOS code. What could be the problem? (I am using the iOS simulator - could this be the issue?)

Code for example 2:

UIImage *in_image = [UIImage imageNamed:@"image013.jpg"];    // 150 x 150 pixel image
cv::Mat cv_in_image = [in_image CVMat];

NSLog(@"cv_in_image cols = %i", cv_in_image.cols);
NSLog(@"cv_in_image rows = %i", cv_in_image.rows);

cv::Rect rect1;
rect1.x = 28;
rect1.y = 27;
rect1.width = 100;
rect1.height = 100;

NSLog(@"rect1 x = %i", rect1.x);
NSLog(@"rect1 y = %i", rect1.y);
NSLog(@"rect1 width = %i", rect1.width);
NSLog(@"rect1 height = %i", rect1.height);

cv::Mat Image1 = cv_in_image(rect1);

NSLog(@"Image1 cols = %i", Image1.cols);
NSLog(@"Image1 rows = %i", Image1.rows);

self.imageView0.image = [[UIImage alloc] UIImageFromCVMat:(Image1)];

The NSLog output says:

cv_in_image cols = 150
cv_in_image rows = 150
rect1 x = 28
rect1 y = 27
rect1 width = 100
rect1 height = 100
Image1 cols = 100
Image1 rows = 100

Example 2: the image on the left is 150x150 pixels; the cropped rectangle (Image1) on the right shows the errors.


Solution

  • I finally found a workaround for this problem:

    --> use the function cvtColor twice in a row (see the code below)

    UIImage *in_image = [UIImage imageNamed:@"image013.jpg"];    // 150 x 150 pixel image
    cv::Mat cv_in_image = [in_image CVMat];
    
    NSLog(@"cv_in_image cols = %i", cv_in_image.cols);
    NSLog(@"cv_in_image rows = %i", cv_in_image.rows);
    
    cv::Rect rect1;
    rect1.x = 28;
    rect1.y = 27;
    rect1.width = 100;
    rect1.height = 100;
    
    NSLog(@"rect1 x = %i", rect1.x);
    NSLog(@"rect1 y = %i", rect1.y);
    NSLog(@"rect1 width = %i", rect1.width);
    NSLog(@"rect1 height = %i", rect1.height);
    
    cv::Mat Image1 = cv_in_image(rect1);
    
    // This is the workaround
    // ----------------------
    // Use the cvtColor function twice in a row...
    cv::cvtColor(Image1, Image1, cv::COLOR_RGB2BGR);  // after this call the error is gone
    cv::cvtColor(Image1, Image1, cv::COLOR_BGR2RGB);  // call it a second time to restore the original colour order
    
    NSLog(@"Image1 cols = %i", Image1.cols);
    NSLog(@"Image1 rows = %i", Image1.rows);
    
    self.imageView0.image = [[UIImage alloc] UIImageFromCVMat:(Image1)];
    

    The ROI is finally displayed correctly, as can be seen in the image below. (I am really not sure why it suddenly works this way with the UIImageView in iOS!)
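
    A possible explanation (an assumption, not something I have verified): the ROI Mat returned by cv_in_image(rect1) is not continuous in memory - its rows keep the full 150-pixel stride of the parent image - and a Mat-to-UIImage conversion that treats the data as one tightly packed block will read the wrong bytes after some rows. The cvtColor round trip apparently leaves Image1 backed by a freshly allocated, continuous buffer, which would explain why the display is correct afterwards. If that is the cause, a simple clone() of the ROI may achieve the same thing; a minimal sketch, assuming the same CVMat / UIImageFromCVMat categories used above:

    UIImage *in_image = [UIImage imageNamed:@"image013.jpg"];    // 150 x 150 pixel image
    cv::Mat cv_in_image = [in_image CVMat];

    cv::Rect rect1(28, 27, 100, 100);

    // clone() copies the ROI into a new, continuous buffer, so the
    // UIImage conversion no longer depends on the parent's row stride
    cv::Mat Image1 = cv_in_image(rect1).clone();

    self.imageView0.image = [[UIImage alloc] UIImageFromCVMat:(Image1)];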