Tags: ios, gstreamer, ios8-extension

In GStreamer on iOS 8, why does the pipeline stop working after the app enters the background and returns to the foreground?


I downloaded the sample GStreamer tutorials from:

http://cgit.freedesktop.org/~slomo/gst-sdk-tutorials/

git://people.freedesktop.org/~slomo/gst-sdk-tutorials

  • I then modified the following code in tutorial 3:

    -(void) app_function
    {
        GstBus *bus;
        GSource *bus_source;

        GST_DEBUG ("Creating pipeline");

        pipeline = gst_pipeline_new ("e-pipeline");

        /* Create our own GLib Main Context and make it the default one */
        context = g_main_context_new ();
        g_main_context_push_thread_default (context);

        /* Build pipeline: udpsrc ! rtph264depay ! h264parse ! vtdec ! glimagesink */
        // pipeline = gst_parse_launch("videotestsrc ! warptv ! videoconvert ! autovideosink", &error);

        source = gst_element_factory_make ("udpsrc", "source");
        g_object_set (G_OBJECT (source), "port", 8001, NULL);

        /* Tell udpsrc what the incoming RTP stream contains */
        GstCaps *caps = gst_caps_new_simple ("application/x-rtp",
                                             "encoding-name", G_TYPE_STRING, "H264",
                                             "payload", G_TYPE_INT, 96,
                                             "clock-rate", G_TYPE_INT, 90000,
                                             NULL);
        g_object_set (source, "caps", caps, NULL);
        gst_caps_unref (caps);

        rtp264depay = gst_element_factory_make ("rtph264depay", "rtph264depay");
        h264parse   = gst_element_factory_make ("h264parse", "h264parse");
        vtdec       = gst_element_factory_make ("vtdec", "vtdec");
        glimagesink = gst_element_factory_make ("glimagesink", "glimagesink");

        if (!pipeline || !source || !rtp264depay || !h264parse || !vtdec || !glimagesink) {
            gchar *message = g_strdup_printf ("Unable to create all pipeline elements");
            [self setUIMessage:message];
            g_free (message);
            return;
        }

        gst_bin_add_many (GST_BIN (pipeline), source, rtp264depay, h264parse, vtdec, glimagesink, NULL);

        /* The elements must also be linked; without this no data flows */
        if (!gst_element_link_many (source, rtp264depay, h264parse, vtdec, glimagesink, NULL)) {
            gchar *message = g_strdup_printf ("Unable to link pipeline elements");
            [self setUIMessage:message];
            g_free (message);
            return;
        }

        /* Set the pipeline to READY, so it can already accept a window handle */
        gst_element_set_state (pipeline, GST_STATE_READY);

        video_sink = gst_bin_get_by_interface (GST_BIN (pipeline), GST_TYPE_VIDEO_OVERLAY);
        if (!video_sink) {
            GST_ERROR ("Could not retrieve video sink");
            return;
        }
        gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (video_sink), (guintptr) (id) ui_video_view);

        /* Instruct the bus to emit signals for each received message, and connect to the interesting signals */
        bus = gst_element_get_bus (pipeline);
        bus_source = gst_bus_create_watch (bus);
        g_source_set_callback (bus_source, (GSourceFunc) gst_bus_async_signal_func, NULL, NULL);
        g_source_attach (bus_source, context);
        g_source_unref (bus_source);
        g_signal_connect (G_OBJECT (bus), "message::error", (GCallback)error_cb, (__bridge void *)self);
        g_signal_connect (G_OBJECT (bus), "message::state-changed", (GCallback)state_changed_cb, (__bridge void *)self);
        gst_object_unref (bus);

        /* Create a GLib Main Loop and set it to run */
        GST_DEBUG ("Entering main loop...");
        main_loop = g_main_loop_new (context, FALSE);
        [self check_initialization_complete];
        g_main_loop_run (main_loop);
        GST_DEBUG ("Exited main loop");
        g_main_loop_unref (main_loop);
        main_loop = NULL;

        /* Free resources */
        g_main_context_pop_thread_default (context);
        g_main_context_unref (context);
        gst_element_set_state (pipeline, GST_STATE_NULL);
        gst_object_unref (pipeline);

        return;
    }

I then run the application on an iPad, and it starts playing.

  • When I send the app to the background and return to the foreground, the GStreamer video no longer updates in the UI, although Xcode's network-usage gauge shows that packets are still being received (see the sketch below).
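
I suspect the pipeline needs to be paused and resumed around backgrounding, along these lines (a sketch only; pipeline is assumed to be reachable from the app delegate, and iOS does not allow OpenGL ES rendering while an app is in the background):

    - (void)applicationDidEnterBackground:(UIApplication *)application
    {
        /* iOS does not allow OpenGL ES rendering in the background,
         * so stop the sink from drawing before the app is suspended. */
        gst_element_set_state (pipeline, GST_STATE_PAUSED);
    }

    - (void)applicationWillEnterForeground:(UIApplication *)application
    {
        /* Resume playback once rendering is allowed again. */
        gst_element_set_state (pipeline, GST_STATE_PLAYING);
    }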

Thanks in advance, iOS geeks.


Solution

  • Update: getting UDP to work.

    After further investigation I got UDP H.264 streaming to work on Linux (x86 PC), and the principle should be the same on iOS; specifically, avdec_h264 (used on the PC) has to be replaced by vtdec, as in the sketch below.
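
    For reference, a minimal sketch of what that client pipeline could look like when built programmatically on iOS, in the style of the question's code (untested here; the port, the caps string, and the use of gst_parse_launch are assumptions, not part of the original answer):

    GError *error = NULL;

    /* Assumed iOS client: the same chain as the PC client below, with
     * vtdec substituted for avdec_h264. Port 5000 and the caps mirror
     * the command-line examples in this answer. */
    GstElement *pipeline = gst_parse_launch (
        "udpsrc port=5000 caps=\"application/x-rtp, media=(string)video, "
        "clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96\" "
        "! rtph264depay ! h264parse ! vtdec ! glimagesink",
        &error);

    if (error) {
        GST_ERROR ("Unable to build pipeline: %s", error->message);
        g_clear_error (&error);
    }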

    Key differences between the TCP and UDP pipelines:

    Server side:

    • IP: the first thing that confused me between the UDP and TCP server sides. On the UDP server, the IP address specified on the udpsink element is the client's IP, e.g. gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000

    On the TCP server side, by contrast, the IP is the server's own address (the host parameter on tcpserversink), e.g. gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000

    • Video stream payload/format: for the client to be able to detect the format and size of the frames, the TCP server side uses gdppay, a payloader element, in its pipeline; the client side uses the opposite element, the gdpdepay de-payloader, to read the received frames, e.g.

    gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000

    The UDP server side does not use the gdppay element; it leaves the client to declare the stream format via caps on its udpsrc (see the client-side differences below).

    Client side:

    • IP: the UDP client does NOT need any IP specified, while the TCP client needs the server's IP (the host parameter on tcpclientsrc), e.g. gst-launch-1.0 -v tcpclientsrc host=$SERVERIP port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false
    • Video stream payload/format: as mentioned in the previous paragraph, the TCP server side uses the gdppay payloader and the client side uses a de-payloader to recognize the format and size of the frames.

    The UDP client instead has to specify the format explicitly, using caps on its udpsrc element (a programmatic version follows below), e.g. CAPS='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96'

    gst-launch-1.0 -v udpsrc port=5000 caps=$CAPS ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false
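
    In application code, the same caps can be built from that string with gst_caps_from_string and attached to udpsrc (a sketch mirroring the gst_caps_new_simple call in the question; source is assumed to be the udpsrc element):

    /* Build the RTP caps from the same string used on the command line
     * and hand them to udpsrc. */
    GstCaps *caps = gst_caps_from_string (
        "application/x-rtp, media=(string)video, "
        "clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96");
    g_object_set (source, "caps", caps, NULL);
    gst_caps_unref (caps);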

    How to find the caps: it is a bit hacky, but it works. Run your UDP server with the verbose option -v, e.g. gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000

    You'll get the following log:

    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    /GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)1280, height=(int)720, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, codec_data=(buffer)01640028ffe1000e27640028ac2b402802dd00f1226a01000428ee1f2c
    /GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"J2QAKKwrQCgC3QDxImo\=\,KO4fLA\=\=", payload=(int)96, ssrc=(uint)3473549335, timestamp-offset=(uint)257034921, seqnum-offset=(uint)12956
    /GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"J2QAKKwrQCgC3QDxImo\=\,KO4fLA\=\=", payload=(int)96, ssrc=(uint)3473549335, timestamp-offset=(uint)257034921, seqnum-offset=(uint)12956
    /GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0.GstPad:sink: caps = video/x-h264, width=(int)1280, height=(int)720, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, codec_data=(buffer)01640028ffe1000e27640028ac2b402802dd00f1226a01000428ee1f2c
    /GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0: timestamp = 257034921
    /GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0: seqnum = 12956
    Pipeline is PREROLLED ...
    Setting pipeline to PLAYING ...

    Now copy the caps line starting with caps = application/x-rtp: this is the one specifying the RTP stream format and, as far as I know, the one that is mandatory for the UDP client to recognize the RTP stream content and start playing.

    To wrap up and avoid confusion, complete command-line examples follow, using raspivid on a Raspberry Pi, if you want to try it on Linux.

    UDP

    • Server: raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000
    • Client: CAPS='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96' gst-launch-1.0 -v udpsrc port=5000 caps=$CAPS ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false

    TCP

    • Server: raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000

    • Client: gst-launch-1.0 -v tcpclientsrc host=$SERVERIP port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false

    Note: raspivid could easily be replaced by a simple H.264 file read with cat, i.e. cat myfile.h264 | gst-launch..., as sketched below.
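
    For example, combining that note with the UDP server line above (an assumed variant, simply substituting cat for raspivid; myfile.h264 is a placeholder for a raw H.264 elementary stream):

    cat myfile.h264 | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000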