I am working on a video streaming app (WebRTC, iOS, Swift) that streams a screen recording. After a few seconds the video freezes: encoding fails for every frame.
I have implemented a custom RTCVideoCapturer based on this:
import CoreMedia
import WebRTC

open class RTCCustomFrameCapturer: RTCVideoCapturer {
    private let kNanosecondsPerSecond: Float64 = 1_000_000_000
    private var nanoseconds: Float64 = 0
    private let frameQueue = DispatchQueue(label: "custom.capturer.queue")

    override init(delegate: RTCVideoCapturerDelegate) {
        super.init(delegate: delegate)
    }

    public func capture(_ sampleBuffer: CMSampleBuffer) {
        frameQueue.async { [weak self] in
            guard let self = self,
                  CMSampleBufferIsValid(sampleBuffer),
                  CMSampleBufferDataIsReady(sampleBuffer),
                  let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

            // Wrap the pixel buffer and convert the presentation timestamp to nanoseconds.
            let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
            let timeStampNs =
                CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * Float64(NSEC_PER_SEC)
            let videoFrame = RTCVideoFrame(
                buffer: rtcPixelBuffer,
                rotation: RTCVideoRotation._0,
                timeStampNs: Int64(timeStampNs))

            self.delegate?.capturer(self, didCapture: videoFrame)
        }
    }
}
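The capturer is attached to the WebRTC pipeline roughly like this (simplified sketch; the factory, source and track names are placeholders, not my real code):

import WebRTC

// Illustrative wiring of the custom capturer (names are placeholders).
let factory = RTCPeerConnectionFactory()
let videoSource = factory.videoSource()
let videoCapturer = RTCCustomFrameCapturer(delegate: videoSource)
let localVideoTrack = factory.videoTrack(with: videoSource, trackId: "video0")
// Screen-recording CMSampleBuffers are then forwarded via videoCapturer.capture(_:).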
The iPhone I am testing on produces frames of size 828x1792.
WebRTC log output:
(video_stream_adapter.cc:519): Removing resolution down-scaling setting.
(video_stream_adapter.cc:529): Scaling up resolution, max pixels: 2147483647
(video_stream_encoder_resource_manager.cc:505): Downgrade counts: fps: {quality:0cpu:0}, resolution {quality:0cpu:0}
(video_stream_encoder.cc:1768): Updating sink restrictions from QualityScalerResource to { }
(video_source_sink_controller.cc:68): Pushing SourceSink restrictions: max_fps=60 max_pixel_count=2147483647 target_pixel_count=null
(resource_adaptation_processor.cc:229): Resource "QualityScalerResource" signalled kUnderuse. Adapted up successfully. Unfiltered adaptations: { res=0 fps=0 }
(video_adapter.cc:275): Frame size changed: scaled 336 / out 340 / in 340 Changes: 6 Input: 828x1792 Scale: 1/1 Output: 828x1792 fps: 60/60 alignment: 2
(video_stream_encoder.cc:1133): Video frame parameters changed: dimensions=828x1792, texture=1.
(video_stream_encoder.cc:693): ReconfigureEncoder:
Simulcast streams:
0: 828x1792 fps: 60 min_kbps: 30 target_kbps: 2500 max_kbps: 2500 max_fps: 60 max_qp: 56 num_tl: 1 active: true
(RTCVideoEncoderH264.mm:378): Initial encoder frame rate setting 60 is larger than the maximal allowed frame rate 18.
(RTCVideoEncoderH264.mm:526): Encoder frame rate setting 39 is larger than the maximal allowed frame rate 18.
(connection.cc:1168): Conn[70edc00:0:Net[pdp_ip0:10.204.0.x/32:Cellular:id=5]:PmIWAMUE:1:0:local:udp:10.204.0.x:62001->hxIk2cZH:1:2122129151:local:udp:169.254.131.x:49742|C--I|-|0|0|9114475305677503998|-]: Sent STUN BINDING request, id=6b5a684d5064672f66354576, use_candidate=0, nomination=0
(RTCVideoEncoderH264.mm:765): H264 encode failed with code: -12902
(RTCVideoEncoderH264.mm:512): Failed to encode frame with code: -12902
(video_stream_encoder.cc:1423): Failed to encode frame. Error code: -1
(RTCVideoEncoderH264.mm:526): Encoder frame rate setting 39 is larger than the maximal allowed frame rate 18.
(RTCVideoEncoderH264.mm:765): H264 encode failed with code: -12902
(RTCVideoEncoderH264.mm:765): H264 encode failed with code: -12902
(RTCVideoEncoderH264.mm:765): H264 encode failed with code: -12902
(RTCVideoEncoderH264.mm:526): Encoder frame rate setting 39 is larger than the maximal allowed frame rate 18.
(RTCVideoEncoderH264.mm:765): H264 encode failed with code: -12902
(connection.cc:1168): Conn[6889600:0:Net[pdp_ip0:10.204.0.x/32:Cellular:id=5]:PmIWAMUE:1:0:local:udp:10.204.0.x:62001->edbds2kc:1:1686052607:stun:udp:80.208.65.x:2514|C--I|-|0|0|7241540810644798975|-]: Sent STUN BINDING request, id=73686d55355066376b566d74, use_candidate=0, nomination=0
(RTCVideoEncoderH264.mm:765): H264 encode failed with code: -12902
(RTCVideoEncoderH264.mm:765): H264 encode failed with code: -12902
(RTCVideoEncoderH264.mm:765): H264 encode failed with code: -12902
However, everything seems to work fine when I set the output format to dimensions lower than the actual frame size before capturing the first frame:
if let capturer = self.videoCapturer as? RTCCustomFrameCapturer {
    // On the first frame, constrain the source's output format to slightly
    // smaller dimensions than the incoming frames.
    if !videoSourceSizeInitialized, let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let width = CVPixelBufferGetWidth(pixelBuffer)
        localVideoTrack?.source.adaptOutputFormat(
            toWidth: Int32(width - 1),
            height: Int32(height - 1),
            fps: fps)
        videoSourceSizeInitialized = true
    }
    capturer.capture(sampleBuffer)
}
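For context, the snippet above runs in the screen-capture callback, roughly like this (simplified sketch assuming in-app capture with ReplayKit; a broadcast upload extension would receive the buffers in processSampleBuffer(_:with:) instead):

import ReplayKit

RPScreenRecorder.shared().startCapture(handler: { [weak self] sampleBuffer, bufferType, error in
    // Only forward valid video buffers; the snippet above runs here.
    guard let self = self, error == nil, bufferType == .video else { return }
    if let capturer = self.videoCapturer as? RTCCustomFrameCapturer {
        capturer.capture(sampleBuffer)
    }
}, completionHandler: { error in
    if let error = error {
        print("startCapture failed: \(error)")
    }
})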
Does anyone have a clue why this happens?
After a lot of trial and error I filed this bug report. It turns out there are issues with the H264 codec. The solution: make sure to set up WebRTC with multiple codecs.
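For reference, -12902 corresponds to VideoToolbox's kVTParameterErr, i.e. the hardware H264 encoder rejects the configuration it is given. A simplified sketch of what "multiple codecs" means when building the peer connection factory (the default factories register VP8, and VP9 where available, alongside H264, so the session can negotiate away from a failing H264 encoder):

import WebRTC

// Sketch: use the default encoder/decoder factories instead of an
// H264-only factory, so H264 is not the only codec offered.
let encoderFactory = RTCDefaultVideoEncoderFactory()
let decoderFactory = RTCDefaultVideoDecoderFactory()
let peerConnectionFactory = RTCPeerConnectionFactory(
    encoderFactory: encoderFactory,
    decoderFactory: decoderFactory)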