I'm trying to convert an HLS file to JPEG. First I used openh264 to decode the HLS stream to YUV, which gives me a two-dimensional array containing the Y, U, and V buffers (*pData[3]). After that, I tried to combine the three arrays into one so I can pass it to CompressYUYV2JPEG. Here is how I do the conversion:
// copy the Y plane
for(i = 0; i < l; i++) {
    inbuf.push_back(yuvData[0][i]);
}
// copy the U plane (a quarter of the luma sample count in 4:2:0)
l = bufferInfo.UsrData.sSystemBuffer.iWidth * bufferInfo.UsrData.sSystemBuffer.iHeight / 4;
for(i = 0; i < l; i++) {
    inbuf.push_back(yuvData[1][i]);
}
// copy the V plane
l = bufferInfo.UsrData.sSystemBuffer.iWidth * bufferInfo.UsrData.sSystemBuffer.iHeight / 4;
for(i = 0; i < l; i++) {
    inbuf.push_back(yuvData[2][i]);
}
Unfortunately, it doesn't produce the expected result. What is the proper way to convert a two-dimensional YUV array into a one-dimensional array?
You need YUV422, i.e. packed YUYV. That means the size of inbuf must be divisible by 4. You can use
for(i = 0; i < l/2; i++) {
    inbuf.push_back(yuvData[0][2*i]);                              // Y0
    inbuf.push_back((yuvData[1][2*i] + yuvData[1][2*i + 1]) / 2);  // Cb, averaged over the pair
    inbuf.push_back(yuvData[0][2*i + 1]);                          // Y1
    inbuf.push_back((yuvData[2][2*i] + yuvData[2][2*i + 1]) / 2);  // Cr, averaged over the pair
}
In this snippet all Y values are used, but only the average of each pair of Cb resp. Cr values. Of course, the number of elements in each yuvData channel must be even; otherwise you have to find a solution for the last element, for example as sketched below.
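One simple option (just a sketch, still assuming full-resolution chroma planes as in the loop above) is to repeat the last luma sample so inbuf stays a multiple of 4:

if (l % 2 != 0) {
    inbuf.push_back(yuvData[0][l - 1]);  // last Y value, which has no partner
    inbuf.push_back(yuvData[1][l - 1]);  // last Cb value, unaveraged
    inbuf.push_back(yuvData[0][l - 1]);  // repeat the last Y as filler
    inbuf.push_back(yuvData[2][l - 1]);  // last Cr value, unaveraged
}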
I just now saw that you use YUV420. Then you can use this snippet
int w = bufferInfo.UsrData.sSystemBuffer.iWidth;
int h = bufferInfo.UsrData.sSystemBuffer.iHeight;
// assumes an even width and tightly packed planes (no row padding)
for(i = 0; i < w*h/2; i++) {
    int r = (2*i) / w;                     // luma row of this pixel pair
    int c = (2*i) % w;                     // column of the first luma sample
    int ci = (r/2)*(w/2) + c/2;            // index of the shared 4:2:0 chroma sample
    inbuf.push_back(yuvData[0][2*i]);      // Y0
    inbuf.push_back(yuvData[1][ci]);       // Cb
    inbuf.push_back(yuvData[0][2*i + 1]);  // Y1
    inbuf.push_back(yuvData[2][ci]);       // Cr
}
In this code all Y values are used once and each Cb resp. Cr value is used twice, once for each of the two luma rows it covers.
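Putting it together, a small helper along these lines may be easier to drop in (just a sketch; packYUV420ToYUYV is a made-up name, and it assumes even dimensions and tightly packed planes):

#include <vector>

// Pack tightly stored planar YUV 4:2:0 data into a single YUYV (4:2:2) buffer.
std::vector<unsigned char> packYUV420ToYUYV(const unsigned char *y,
                                            const unsigned char *u,
                                            const unsigned char *v,
                                            int width, int height)
{
    std::vector<unsigned char> out;
    out.reserve(static_cast<size_t>(width) * height * 2);  // YUYV uses 2 bytes per pixel
    for (int r = 0; r < height; r++) {
        for (int c = 0; c < width; c += 2) {
            int ci = (r / 2) * (width / 2) + c / 2;  // shared 4:2:0 chroma sample
            out.push_back(y[r * width + c]);
            out.push_back(u[ci]);
            out.push_back(y[r * width + c + 1]);
            out.push_back(v[ci]);
        }
    }
    return out;
}

Called with the names from your question it would look like

std::vector<unsigned char> inbuf = packYUV420ToYUYV(
    yuvData[0], yuvData[1], yuvData[2],
    bufferInfo.UsrData.sSystemBuffer.iWidth,
    bufferInfo.UsrData.sSystemBuffer.iHeight);

and inbuf (of size 2 * width * height) can then be passed to CompressYUYV2JPEG as before.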