Tags: c++, c, windows, ffmpeg, libav

Memory leak when using av_frame_get_buffer()


I am making a simple video player with ffmpeg. I have noticed a memory leak originating in libavutil. Because ffmpeg is a mature library, I assume that I am allocating the new frame incorrectly. The documentation is also vague about how to free the buffer that is created when you call av_frame_get_buffer(). Below is the code I am using to decode the video and queue it up for display on the UI thread.

DWORD WINAPI DecoderThread(LPVOID lpParam)
{
    AVFrame *frame = NULL;
    AVPacket pkt;

    SwsContext *swsCtx = NULL;
    UINT8 *buffer = NULL;
    INT iNumBytes = 0;

    INT result = 0;

    frame = av_frame_alloc();

    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;

    // Create scaling context
    swsCtx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt, codecCtx->width, codecCtx->height, AV_PIX_FMT_BGR24, SWS_BICUBIC, NULL, NULL, NULL);

    while (av_read_frame(fmtCtx, &pkt) >= 0) {
        if (pkt.stream_index == videoStream) {
            result = avcodec_send_packet(codecCtx, &pkt);

            while (result >= 0) {
                result = avcodec_receive_frame(codecCtx, frame);
                if (result == AVERROR(EAGAIN) || result == AVERROR_EOF) {
                    break;
                } else if (result < 0) {
                    break; // another decoding error.
                }
                // Create a new frame to store the RGB24 data.
                AVFrame *pFrameRGB = av_frame_alloc();
                // Allocate space for the new RGB image.
                //av_image_alloc(pFrameRGB->data, pFrameRGB->linesize, codecCtx->width, codecCtx->height, AV_PIX_FMT_BGR24, 1);
                // Copy all of the properties from the YUV420P frame.
                av_frame_copy_props(pFrameRGB, frame);
                pFrameRGB->width = frame->width;
                pFrameRGB->height = frame->height;
                pFrameRGB->format = AV_PIX_FMT_BGR24;
                av_frame_get_buffer(pFrameRGB, 0);

                // Convert the frame from YUV420P to BGR24 for display.
                sws_scale(swsCtx, (const UINT8* const *) frame->data, frame->linesize, 0, codecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

                // Queue the BGR frame for drawing by the main thread.
                AddItemToFrameQueue(pFrameRGB);

                av_frame_unref(frame);
            }
        }
        while (GetQueueSize() > 100) {
            Sleep(10);
        }
    }

    CloseFrameQueue();

    av_frame_free(&frame);
    avcodec_close(codecCtx);
    avformat_close_input(&fmtCtx);

    return 0;
}

Is there a better way to allocate a new frame to hold the output of the sws_scale() conversion?

There is a similar stackoverflow question, but it uses mostly deprecated function calls. I can't seem to find any answers in the documentation that apply to the current version of ffmpeg. Any help would be appreciated.


Solution

  • Following the suggestions made in the comments, I added an av_packet_unref() call to my decoding loop, and it stopped the memory leak: each av_read_frame() call allocates a new reference to the packet data, which must be released once the packet has been sent to the decoder.

                sws_scale(swsCtx, (const UINT8* const *) frame->data, frame->linesize, 0, codecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

                // Queue the BGR frame for drawing by the main thread.
                AddItemToFrameQueue(pFrameRGB);

                av_frame_unref(frame);
            }
            av_packet_unref(&pkt);
        }
        while (GetQueueSize() > 100) {
            Sleep(10);
        }