I expect this has more to do with color spaces and their attributes. I was attempting to convert a YUYV image to grayscale. I used the following function call:
cv::cvtColor(imgOrig, imgGray, cv::COLOR_YUV2GRAY_420);
With this function, I lost some of the height of the image. I could obviously use the resize method to make the original image large enough so that the conversion wouldn't cut out any actual data, but I wanted to find out the more appropriate way to go about it.
Looking at the OpenCV source, below is an excerpt from the cvtColor method:
case COLOR_YUV2GRAY_420:
    {
        if (dcn <= 0) dcn = 1;
        CV_Assert( dcn == 1 );
        CV_Assert( sz.width % 2 == 0 && sz.height % 3 == 0 && depth == CV_8U );

        Size dstSz(sz.width, sz.height * 2 / 3);
        _dst.create(dstSz, CV_MAKETYPE(depth, dcn));
        dst = _dst.getMat();
#ifdef HAVE_IPP
#if IPP_VERSION_X100 >= 201700
        if (CV_INSTRUMENT_FUN_IPP(ippiCopy_8u_C1R_L, src.data, (IppSizeL)src.step, dst.data, (IppSizeL)dst.step,
                                  ippiSizeL(dstSz.width, dstSz.height)) >= 0)
            break;
#endif
#endif
        src(Range(0, dstSz.height), Range::all()).copyTo(dst);
    }
    break;
You can see from the line `Size dstSz(sz.width, sz.height * 2 / 3);` that the destination height is set to two-thirds of the source height.
Why is this so? What is the appropriate way to do a color space conversion?
Plainly, this is how YUV420 works. As Wikipedia explains, YUV420 uses 6 bytes to store 4 pixels: the 4 pixels share a single U value and a single V value while each has its own Y value. The picture and the formula presented in the section on converting YUV420 to RGB888 explain it better. They also make clear why the check
CV_Assert( sz.width % 2 == 0 && sz.height % 3 == 0 && depth == CV_8U );
is needed, and why the RGB image's height is only 2/3 of the YUV buffer's height.