Tags: swift, avfoundation, core-audio, avaudioengine

Can I send sound buffers received using AVAudioSinkNode to be rendered in real-time using AVAudioSourceNode?


I am experimenting with the new AVAudioSinkNode and AVAudioSourceNode for use with AVAudioEngine.

In terms of setup, similar to the tests described in this other post, my sink node is attached to the input node (e.g., the microphone) and my source node is attached to the output node (e.g., the speaker). The sink callback is working as expected. Separately, on the source node's side, I generated a sine wave signal; the source node also appears to be working properly.
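For context, my setup looks roughly like the following minimal sketch (the 440 Hz tone, the 0.25 amplitude, and the mono Float32 assumptions are just what I happened to use for testing):

```swift
import AVFoundation

let engine = AVAudioEngine()
let inputFormat = engine.inputNode.inputFormat(forBus: 0)

// Sink node: receives the microphone buffers via its receiver block.
let sinkNode = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
    // Inspect or store the captured Float32 samples here.
    return noErr
}

// Source node: fills its output buffers with a 440 Hz sine wave.
let sampleRate = Float(engine.outputNode.outputFormat(forBus: 0).sampleRate)
var phase: Float = 0
let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let abl = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2 * Float.pi * 440 / sampleRate
    for frame in 0..<Int(frameCount) {
        let value = sin(phase) * 0.25
        phase += increment
        if phase > 2 * .pi { phase -= 2 * .pi }
        // Write the same sample to every output channel.
        for buffer in abl {
            let samples: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            samples[frame] = value
        }
    }
    return noErr
}

engine.attach(sinkNode)
engine.attach(sourceNode)
engine.connect(engine.inputNode, to: sinkNode, format: inputFormat)
engine.connect(sourceNode, to: engine.mainMixerNode, format: nil)
try engine.start()
```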

Question

For testing purposes, I'd like to send the (float) buffers captured at the sink node to the source node, preferably in real time and without saving to a file. This should have the effect of replaying the microphone input to the speaker output. Is there a (simple?) way to do this?

Essentially I'm looking for a way to connect the sink node to the source node even though the nodes might not be meant to be used this way, given that the sink node has no output bus and the source node has no input bus (Source).

I assume I could connect the input node to a mixer connected to the output node in order to channel microphone input to the speaker, but for my purposes I would like to use the new sink and source nodes in the configuration as described.
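For reference, that baseline passthrough would be a single connection, since the main mixer is implicitly wired to the output node:

```swift
// Route the microphone straight to the speaker through the main mixer.
// (Use headphones to avoid a feedback loop.)
engine.connect(engine.inputNode, to: engine.mainMixerNode,
               format: engine.inputNode.inputFormat(forBus: 0))
```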

I was thinking I would need to queue up the buffers captured by my sink node in some way until they can be read by the source node to fill its own buffers. I looked into Audio Queue Services but it doesn't seem appropriate.


Solution

  • The way to do this in real time is to use Audio Unit callbacks (whose buffers can be as small as a few milliseconds). The buffers will almost always be the same size (except perhaps at device power-state changes), so just save each one as it arrives, process it as needed, and have it ready for the next output callback a few ms later; alternatively, use a circular/ring FIFO buffer to decouple the two callbacks. The RemoteIO Audio Unit on iOS has synchronized I/O.
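As a concrete illustration of the ring-buffer approach, here is a sketch that queues the sink node's samples and drains them from the source node's render block. SampleFIFO is a hypothetical helper, not part of any framework; it uses an NSLock for brevity, whereas a production version should be lock-free (e.g., TPCircularBuffer), since blocking on the render thread risks glitches. It also assumes a first-channel-only Float32 stream and that input and output run at the same sample rate:

```swift
import AVFoundation

// Hypothetical fixed-capacity FIFO for Float samples (single producer,
// single consumer). The NSLock keeps the sketch simple; real-time code
// should use a lock-free ring buffer instead.
final class SampleFIFO {
    private var storage: [Float]
    private var head = 0          // next index to read
    private var count = 0         // samples currently buffered
    private let lock = NSLock()

    init(capacity: Int) {
        storage = [Float](repeating: 0, count: capacity)
    }

    func push(_ samples: UnsafeBufferPointer<Float>) {
        lock.lock(); defer { lock.unlock() }
        // Silently drops samples if the buffer is full.
        for sample in samples where count < storage.count {
            storage[(head + count) % storage.count] = sample
            count += 1
        }
    }

    func pop() -> Float {
        lock.lock(); defer { lock.unlock() }
        guard count > 0 else { return 0 }   // underrun: play silence
        let sample = storage[head]
        head = (head + 1) % storage.count
        count -= 1
        return sample
    }
}

let fifo = SampleFIFO(capacity: 16_384)

// Sink: copy the first channel of each captured buffer into the FIFO.
let sinkNode = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
    let abl = UnsafeMutableAudioBufferListPointer(
        UnsafeMutablePointer(mutating: audioBufferList))
    if let data = abl.first?.mData {
        let samples = data.bindMemory(to: Float.self, capacity: Int(frameCount))
        fifo.push(UnsafeBufferPointer(start: samples, count: Int(frameCount)))
    }
    return noErr
}

// Source: drain the FIFO into every output channel.
let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let abl = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let sample = fifo.pop()
        for buffer in abl {
            let channel: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            channel[frame] = sample
        }
    }
    return noErr
}
```

Attaching and connecting these two nodes works the same as in the question's setup; pre-filling the FIFO with a few buffers' worth of silence before starting the engine gives the output side a small safety margin against underruns.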