I am working with MPEG-4 and H.264 streams.
First I tried converting the decoded frames to RGB24 and using ImageMagick (Magick++) to make them grayscale, but it does not work:
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream)
    {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Did we get a complete video frame?
        if (frameFinished)
        {
            f++;
            // if (pFrame->pict_type == AV_PICTURE_TYPE_I) wxMessageBox("I frame");
            // if (pFrame->pict_type != AV_PICTURE_TYPE_I)
            //     printMVMatrix(f, pFrame, pCodecCtx);
            pFrameRGB->linesize[0] = pCodecCtx->width * 3; // RGB24 is a single packed plane
            sws_scale(swsContext, pFrame->data, pFrame->linesize, 0,
                      pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
            Magick::Blob *m_blob = new Magick::Blob(pFrameRGB->data,
                                                    pCodecCtx->width * pCodecCtx->height * 3);
            Magick::Image *image = new Magick::Image(*m_blob); // this does not work
            image->quantizeColorSpace(Magick::GRAYColorspace);
            image->quantizeColors(256);
            image->quantize();
        }
    }
}
But ffmpeg gives me a YUV picture! So I need only the Y component. How do I get it, i.e. Ypicture[x][y]?
I assume you have configured swscale for the YUV420P pixel format. 420P means 4:2:0 planar, and planar means the color channels are stored in separate buffers.
The luminance data (Y) is stored in the buffer pointed to by pFrame->data[0] (Cb and Cr are in pFrame->data[1] and pFrame->data[2] respectively). In YUV420P the Y plane is 1 byte per pixel.
hence:
uint8_t getY(int x, int y, const AVFrame *f)
{
    // linesize[0] is the stride (bytes per row) of the Y plane;
    // it may be larger than the frame width because of padding
    return f->data[0][(y * f->linesize[0]) + x];
}
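Since the decoded frame is already YUV, you can skip the RGB conversion and ImageMagick entirely: the Y plane by itself is a valid grayscale image. As a minimal sketch (the function name and the standalone PGM output are my own choice, not from the question), you can dump the plane as a binary PGM file, which any image viewer opens as grayscale:

```c
#include <stdio.h>
#include <stdint.h>

/* Write one 8-bit luma plane as a binary PGM (grayscale) file.
   `linesize` may be larger than `width` because of row padding,
   so rows are copied one at a time. Returns 0 on success. */
int write_gray_pgm(const char *path, const uint8_t *y_plane,
                   int width, int height, int linesize)
{
    FILE *fp = fopen(path, "wb");
    if (!fp)
        return -1;
    fprintf(fp, "P5\n%d %d\n255\n", width, height);
    for (int row = 0; row < height; row++)
        fwrite(y_plane + (size_t)row * linesize, 1, width, fp);
    fclose(fp);
    return 0;
}
```

In the decode loop above you would call it as something like `write_gray_pgm("frame.pgm", pFrame->data[0], pCodecCtx->width, pCodecCtx->height, pFrame->linesize[0]);` after `frameFinished` is set.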