ios · swift · audio · core-audio · audio-streaming

Real-time audio processing in Swift


Our app continuously records and processes audio from the iPhone mic.

Currently I use AVAudioRecorder from AVFoundation and record the audio input into 8-second ".wav" files.

Instead, I want to record the audio input continuously into a buffer and process it in 8-second chunks.

How can I record the audio input into a buffer, and how can I read 8-second chunks from it?

Thanks!


Solution

  • You can receive the raw PCM in a number of ways: in AVFoundation, with AVCaptureAudioDataOutput from an AVCaptureDevice, or with AVAudioEngine and a tap installed on its input node; in Audio Toolbox, with Audio Queue Services or the RemoteIO audio unit. To write the files, you can use Audio Toolbox's AudioFile or ExtAudioFile APIs, counting up how many frames you've written and deciding when it's time to start a new 8-second file.
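    A minimal sketch of the AVAudioEngine route, using AVAudioFile for output rather than the lower-level Audio Toolbox APIs (the `url` and `nextURL()` names are placeholders you'd supply; error handling and AVAudioSession setup are omitted):

    ```swift
    import AVFoundation

    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    // Count frames and roll to a new file every 8 seconds' worth.
    let framesPerFile = AVAudioFramePosition(8 * format.sampleRate)
    var framesWritten: AVAudioFramePosition = 0
    var file: AVAudioFile? = try? AVAudioFile(forWriting: url, settings: format.settings)

    input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // NOTE: this writes on the tap's callback thread for brevity; as the
        // answer goes on to explain, real code should hand the buffer off to
        // a ring buffer or queue instead of doing file I/O here.
        try? file?.write(from: buffer)
        framesWritten += AVAudioFramePosition(buffer.frameLength)
        if framesWritten >= framesPerFile {
            framesWritten = 0
            file = try? AVAudioFile(forWriting: nextURL(), settings: format.settings)
        }
    }
    try? engine.start()
    ```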

    As Rhythmic Fistman notes above, it would be safer to do something like

    capture callbacks --pushes-to--> ring buffer <--pulls-from-- file-writing code

    Because when you're closing one file and opening another, the capture callbacks are still going to be coming in, and if you block on file I/O you stand a very good chance of dropping some data on the floor.
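    The ring buffer in that diagram could look something like this, a hypothetical sketch rather than any library's API (a production version would need lock-free or otherwise synchronized access, since the push side runs on the audio callback thread and the pull side on the file-writing thread):

    ```swift
    // Minimal single-producer/single-consumer ring buffer for Float samples.
    final class RingBuffer {
        private var storage: [Float]
        private var head = 0   // next write index
        private var tail = 0   // next read index
        private var count = 0  // samples currently stored

        init(capacity: Int) {
            storage = [Float](repeating: 0, count: capacity)
        }

        /// Called from the capture callback: push samples, dropping on overflow.
        func push(_ samples: [Float]) {
            for s in samples where count < storage.count {
                storage[head] = s
                head = (head + 1) % storage.count
                count += 1
            }
        }

        /// Called from the file-writing side: pull up to `n` samples.
        func pull(_ n: Int) -> [Float] {
            var out: [Float] = []
            while out.count < n && count > 0 {
                out.append(storage[tail])
                tail = (tail + 1) % storage.count
                count -= 1
            }
            return out
        }

        var available: Int { count }
    }
    ```

    Dropping samples on overflow (rather than blocking) is the point of the design: the capture callback must never wait on the writer.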

    I suppose another approach would be to fill an 8-second buffer in memory from your callbacks and, when it's full, have another thread write that buffer to a file while you allocate a new one and start recording into it (the file writer would free the old buffer when it's done).
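    The fill-and-swap bookkeeping might be sketched like this (the type and callback names are hypothetical; `onChunkReady` is where you'd dispatch the full buffer to a writer queue):

    ```swift
    // Accumulates samples until an 8 s chunk is full, then hands the chunk
    // off and immediately starts filling a fresh buffer.
    final class ChunkAccumulator {
        private let chunkLength: Int              // samples per chunk
        private var current: [Float] = []
        private let onChunkReady: ([Float]) -> Void

        init(sampleRate: Double, seconds: Double,
             onChunkReady: @escaping ([Float]) -> Void) {
            self.chunkLength = Int(sampleRate * seconds)
            self.onChunkReady = onChunkReady
            current.reserveCapacity(chunkLength)
        }

        /// Called from the capture callback with each incoming block of samples.
        func append(_ samples: [Float]) {
            var rest = samples[...]
            while !rest.isEmpty {
                let room = chunkLength - current.count
                current.append(contentsOf: rest.prefix(room))
                rest = rest.dropFirst(room)
                if current.count == chunkLength {
                    let full = current
                    current = []                  // swap in a fresh buffer
                    current.reserveCapacity(chunkLength)
                    onChunkReady(full)            // e.g. dispatch to a writer queue
                }
            }
        }
    }
    ```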

    Edit: Also, I didn't see anything about Swift in your question, but any of this should work fine from Swift or C/Obj-C.