Tags: google-chrome, video, getusermedia, google-chrome-os, screencast

How to: Save a screencast to a video file on ChromeOS?


Two Chrome apps/extensions have caught my eye on the Chrome Web Store:

  1. Screencastify
  2. Snagit

I am aware of chrome.desktopCapture and how I can use getUserMedia() to capture a live stream of a user's desktop.

Example:

// desktop_id is a source id obtained from chrome.desktopCapture
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: desktop_id,
            minWidth: 1280,
            maxWidth: 1280,
            minHeight: 720,
            maxHeight: 720
        }
    }
}, successCallback, errorCallback);
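For context, the `desktop_id` in that example would typically come from `chrome.desktopCapture.chooseDesktopMedia`, which shows the source picker. Here's a sketch of how the pieces fit together; `buildDesktopConstraints` is an illustrative helper, not a Chrome API.

```javascript
// Illustrative helper: build the mandatory constraints object for a
// desktop stream from a chosen source id and a fixed resolution.
function buildDesktopConstraints(sourceId, width, height) {
  return {
    audio: false,
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: sourceId,
        minWidth: width,
        maxWidth: width,
        minHeight: height,
        maxHeight: height
      }
    }
  };
}

// Only meaningful inside a Chrome extension page with the
// "desktopCapture" permission.
if (typeof chrome !== 'undefined' && chrome.desktopCapture) {
  chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], function (desktopId) {
    navigator.webkitGetUserMedia(
      buildDesktopConstraints(desktopId, 1280, 720),
      function (stream) { /* wire the stream up to a <video> element */ },
      function (err) { console.error('getUserMedia failed:', err); }
    );
  });
}
```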

I'd love to create my own screencast app that allows audio recording as well as embedding webcam capture in a given corner of the video like Screencastify.

I understand capturing the desktop and the audio and video of the user, but how do you put it all together and make it into a video file?

I'm assuming there is a way to create a video file from a getUserMedia() stream on ChromeOS. Is this something that only ChromeOS has implemented?

How is it done? Thanks in advance for your answers.


Solution

  • The actual encoding and saving of the video file isn't something that's been implemented in Chrome yet. Mozilla has it in a nascent form at the moment; I'm unsure of its state on ChromeOS. I can, however, give you a little information I've gleaned during development with the Chrome browser.
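    For reference, Mozilla's nascent implementation follows the draft MediaRecorder API, which hands you encoded chunks as the recording runs. A sketch of that shape (not usable in Chrome at the time of writing; `partsToBlob` and `startRecording` are illustrative helpers):

```javascript
// Illustrative helper: concatenate recorded chunks into one webm blob.
function partsToBlob(parts) {
  return new Blob(parts, { type: 'video/webm' });
}

// Sketch of draft-spec MediaRecorder usage: collect chunks as they
// arrive, then hand back a single blob when recording stops.
function startRecording(stream, onDone) {
  const recorder = new MediaRecorder(stream);
  const parts = [];
  recorder.ondataavailable = function (e) { parts.push(e.data); };
  recorder.onstop = function () { onDone(partsToBlob(parts)); };
  recorder.start();
  return recorder;
}
```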

    The two ways to encode, save, and distribute a media stream as a video are client-side and server-side.

    Server-side:
    Requires a media server of some kind. The best free/open-source solution I've found so far is Kurento. The media stream is uploaded (in chunks or as a whole) or streamed to the media server, where it is encoded and saved for later use. This also works peer-to-peer, with the server acting as a middleman that records the data as it streams through.
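    The chunked-upload half of that can be sketched roughly as below; the WebSocket endpoint URL and the 64 KB chunk size are assumptions for illustration, and `sliceIntoChunks` is a hypothetical helper.

```javascript
// Illustrative helper: split a recorded ArrayBuffer into fixed-size
// chunks suitable for sending one message at a time.
function sliceIntoChunks(buffer, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < buffer.byteLength; offset += chunkSize) {
    chunks.push(buffer.slice(offset, offset + chunkSize));
  }
  return chunks;
}

// Browser-only: stream the chunks up to a (hypothetical) media server.
if (typeof window !== 'undefined' && 'WebSocket' in window) {
  const ws = new WebSocket('wss://media.example.com/upload'); // hypothetical endpoint
  ws.onopen = function () {
    // recordedBuffer (an ArrayBuffer) would come from the capture pipeline:
    // sliceIntoChunks(recordedBuffer, 64 * 1024).forEach(function (c) { ws.send(c); });
  };
}
```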

    Client-side:
    This is all about browser-based encoding. There are currently two working options that I've tested successfully in Chrome.

    1. Whammy.js:
      This method uses a canvas hack to collect an array of webp images and then encode them into a webm container. While slow, it works well for video. There is no audio support; I'm working on that at the moment.
    2. videoconverter.js (formerly ffmpeg-asm.js):
      This is a straight port of ffmpeg to JavaScript using Emscripten. It works with both audio and video. It's also gigantic, script-wise, at around 25 MB uncompressed. The other reason I'm not using it in production is the shaky licensing ground that ffmpeg is on at the moment. It also hasn't been optimized as much as it could be; making it reliably production-ready would probably be quite a project.
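    The Whammy.js canvas hack looks roughly like this: draw the playing `<video>` onto a canvas at a fixed rate, collect webp data-URL frames, then encode them to webm. `Whammy.fromImageArray` matches Whammy's documented usage, but treat the rest as an illustrative outline, not production code; `isWebpFrame` is a hypothetical helper.

```javascript
// Illustrative helper: Whammy needs webp-encoded frames; any other
// format will fail to mux into the webm container.
function isWebpFrame(dataUrl) {
  return typeof dataUrl === 'string' && dataUrl.indexOf('data:image/webp') === 0;
}

// Browser-only outline of the capture loop.
if (typeof document !== 'undefined' && typeof Whammy !== 'undefined') {
  const video = document.querySelector('video'); // playing the captured stream
  const canvas = document.createElement('canvas');
  canvas.width = 1280;
  canvas.height = 720;
  const ctx = canvas.getContext('2d');
  const frames = [];
  const fps = 15;

  const timer = setInterval(function () {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const frame = canvas.toDataURL('image/webp');
    if (isWebpFrame(frame)) frames.push(frame);
  }, 1000 / fps);

  // To finish the recording:
  // clearInterval(timer);
  // const webmBlob = Whammy.fromImageArray(frames, fps);
}
```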

    Hopefully that at least gives you avenues of research.