javascript, webrtc

In WebRTC, is it possible to get video frames without using a canvas?


My actual project is to develop a video call application using WebRTC and to feed the video and audio frames into the NDI SDK, whose input has to be individual frames. At the moment I do this with a canvas, but the delay is high. Can frames be obtained directly from the camera, or is there any other method to convert the streams into frames?

This is the sample code showing how I get the frames now (var frame = this.ctx1.getImageData(0, 0, this.width, this.height);):

var processor = {
  timerCallback: function() {
    if (this.video.paused || this.video.ended) {
      return;
    }
    this.computeFrame();
    var self = this;
    setTimeout(function () {
      self.timerCallback();
    }, 16); // roughly 60 frames per second
  },

  doLoad: function() {
    this.video = document.getElementById("my-video");
    this.c1 = document.getElementById("my-canvas");
    this.ctx1 = this.c1.getContext("2d");
    var self = this;

    this.video.addEventListener("play", function() {
      self.width = self.video.width;
      self.height = self.video.height;
      self.timerCallback();
    }, false);
  },

  computeFrame: function() {
    this.ctx1.drawImage(this.video, 0, 0, this.width, this.height);
    var frame = this.ctx1.getImageData(0, 0, this.width, this.height);
    var l = frame.data.length / 4;

    for (var i = 0; i < l; i++) {
      var grey = (frame.data[i * 4 + 0] + frame.data[i * 4 + 1] + frame.data[i * 4 + 2]) / 3;

      frame.data[i * 4 + 0] = grey;
      frame.data[i * 4 + 1] = grey;
      frame.data[i * 4 + 2] = grey;
    }
    this.ctx1.putImageData(frame, 0, 0);

    return;
  }
};

Solution

  • I don't think there is another method to access (and edit) live video frames.

    But as suggested in this answer, rather than using the requestAnimationFrame function (or the setTimeout loop above), you can use the requestVideoFrameCallback API (Chrome only); a short sketch of this follows below. As stated here:

    The requestVideoFrameCallback() method allows web authors to register a callback that runs in the rendering steps when a new video frame is sent to the compositor. This is intended to allow developers to perform efficient per-video-frame operations on video, such as video processing and painting to a canvas, video analysis, [...]

    On the other hand, as said in this post, canvas operations are heavy on the CPU. If you only want to change how a video is displayed, you should rather try a CSS3 transform or filter; both ideas are sketched below.
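
A minimal sketch of what the requestVideoFrameCallback approach could look like, reusing the "my-video" and "my-canvas" element ids from the question. The NDI hand-off is only hinted at with a comment (it is an assumption about your setup), and the API is currently available in Chromium-based browsers only:

var video = document.getElementById("my-video");
var canvas = document.getElementById("my-canvas");
var ctx = canvas.getContext("2d");

function onFrame(now, metadata) {
  // metadata describes the frame that was just presented to the compositor
  canvas.width = metadata.width;
  canvas.height = metadata.height;
  ctx.drawImage(video, 0, 0, metadata.width, metadata.height);
  var frame = ctx.getImageData(0, 0, metadata.width, metadata.height);

  // ... hand frame.data (RGBA bytes) to whatever feeds the NDI SDK ...

  // the callback fires only once per registration, so re-register for the next frame
  video.requestVideoFrameCallback(onFrame);
}

if ("requestVideoFrameCallback" in HTMLVideoElement.prototype) {
  video.requestVideoFrameCallback(onFrame);
} else {
  // fall back to the setTimeout loop from the question
}

Compared with the fixed 16 ms setTimeout loop, the callback only runs when a new frame has actually been composited, so you neither miss frames nor process the same frame twice.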
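
And for the grayscale effect in computeFrame specifically, a display-only CSS sketch (assuming the effect is purely visual and the pixel data never needs to be read back) could be as simple as:

// changes only what is rendered on screen; it does not produce frame data you can read back
document.getElementById("my-video").style.filter = "grayscale(1)";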