I am creating a multichannel ASIO audio and video recorder for dance competitions. The video and audio streams must be perfectly synchronized. The main obstacle is that I don't have access to the real device (a Steinberg UR44), so I work with ASIO4ALL while my client, who has the device, just sends me logs.
With ASIO4ALL on my machine I get ideal synchronization, but on the client's device the delay grows over time. Here is how I've detected the buffer loss. The log shows 2482 calls of the AsioOut.AudioAvailable event handler; the recording session duration is 35.133 seconds; the buffer size is 512 samples; the format is 16-bit PCM, 44.1 kHz, 1 channel.
Hence 35.133 x 44100 / 512 ≈ 3026 calls should have happened in theory, but the client had only 2482, so roughly 18-20% of the audio data was lost. That's why the output audio file is only 28.003 seconds long (~20% shorter than the recording session duration), and that's why the delay in audio-video synchronization grows over time.
Question: Are this calculation and these conclusions correct?
It's important that the code handling the buffer callback executes as quickly as possible. If it's doing too much work, or running on a slow computer, there is a chance that some buffers will get dropped.
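The usual remedy is to keep the callback itself trivial: copy the driver's buffer into a queue and let a background thread do the slow work (disk writes, conversion). Below is a minimal sketch of that producer/consumer pattern in Python; the function names are hypothetical, and in the actual NAudio/C# project the equivalent would be an AudioAvailable handler pushing into a ConcurrentQueue drained by a writer task:

```python
import queue
import threading

# Buffers handed off from the audio callback to the writer thread
buffer_queue: "queue.Queue[bytes]" = queue.Queue()

def on_audio_available(raw: bytes) -> None:
    """Runs on the audio driver's thread: do the absolute minimum here."""
    # Copy the driver's buffer and hand it off; no disk I/O, no processing.
    buffer_queue.put(bytes(raw))

def writer_loop(path: str, stop: threading.Event) -> None:
    """Background thread: drains the queue and performs the slow disk writes."""
    with open(path, "ab") as f:
        # Keep draining until asked to stop AND the queue is empty,
        # so buffers queued before shutdown are still written out.
        while not stop.is_set() or not buffer_queue.empty():
            try:
                f.write(buffer_queue.get(timeout=0.1))
            except queue.Empty:
                continue
```

Usage: start the writer thread before recording, then on shutdown set the stop event and join the thread so any queued buffers are flushed to disk.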