
Minimal AudioKit microphone configuration (macOS + iOS)


I've been building an app for macOS (and also iOS) using AudioKit.

The app will play sounds with a MIDISampler, and this bit works!

It should also listen to the device microphone and use a PitchTap to provide tuning information. This is the part I have not been able to get working.

My audio graph setup looks like this…

final class AudioGraph {
    let engine = AudioEngine()
    let sampler = MIDISampler(name: "sampler")
    let pitchTracker: PitchTap

    init(file: SamplerFile? = nil) throws {
        // Force-unwrap: engine.input is nil only if no input device exists.
        let mic = engine.input!

        // Mix the sampler with the mic. The mic is faded to silence so it
        // stays connected to the graph without being monitored.
        engine.output = Mixer(
            sampler,
            Fader(mic, gain: 0.0)
        )

        // Report the strongest detected pitch and its amplitude.
        pitchTracker = PitchTap(mic) { f, a in
            guard let f = f.first, let a = a.first else { return }
            print("Frequency \(f) – Amplitude \(a)")
        }

        if let file {
            try setFile(file)
        }
    }

    func setFile(_ file: SamplerFile) throws {
        try sampler.loadInstrument(url: file.url)
    }
}

// MARK: -

extension AudioGraph: NotePlayer {
    func startAudioEngine() throws {
        print("### Start engine")
        try engine.start()
        pitchTracker.start()
    }

    func stopAudioEngine() {
        pitchTracker.stop()
        engine.stop()
    }
}
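
For context, the app drives this class roughly like so (a simplified, hypothetical call site – error handling omitted):

```swift
// Hypothetical usage, simplified from the real app.
let graph = try AudioGraph()
try graph.startAudioEngine()
// ... play notes via the sampler, watch pitch readings on the console ...
graph.stopAudioEngine()
```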

When I run all this on macOS, I can play notes (yay). However, while the PitchTap callback is called with frequency and amplitude information, the frequency is always 100.0 and the amplitude always 2.4266092e-05 (i.e. pretty much zero).

I've done some experiments…

  • I’ve confirmed the mic actually works – it works fine in other apps, and the tuner in the Cookbook app works too.
  • I've attached the PitchTap to the MIDISampler instead, using pitchTracker = PitchTap(sampler) { … }, and this works: when I play notes, their frequency is printed to the console 👍.
  • I've tried adding engine.input to the output by setting the Fader's gain to 1.0: engine.output = Mixer(sampler, Fader(mic, gain: 1.0)). I'd expect to hear horrible feedback squeals when I do this, but I hear nothing. Playback of notes from the sampler still works, though.
  • I've checked AVCaptureDevice.authorizationStatus(for: .audio) as directed by Apple. A permission dialogue is displayed, and when I agree to microphone access, I get back .authorized.
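
For reference, this is roughly how I check (and, on first run, request) microphone permission – a minimal sketch using AVFoundation's AVCaptureDevice API:

```swift
import AVFoundation

// Check the current authorisation state for audio capture.
switch AVCaptureDevice.authorizationStatus(for: .audio) {
case .authorized:
    print("Microphone access already granted")
case .notDetermined:
    // Triggers the system permission dialogue on first run.
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        print("Microphone access \(granted ? "granted" : "denied")")
    }
default:
    print("Microphone access denied or restricted")
}
```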

I've built and run the Cookbook app, and the Tuner in it works great on both macOS and iOS, which is very encouraging!

My speculation is that I have not configured microphone input correctly on macOS, or perhaps have not obtained the necessary permissions, so engine.input is just a stream of silence?

I wondered if there is a super-minimal, "Hello, World!"-level demo application showing just how to configure the microphone for use on macOS?

The key points for microphone input, or just the code, would be a great answer to this question.

I'd also like to get this running on iOS, where the PitchTap callback isn't firing at all. I'll get to that in another question, though, once I've resolved this one.

Many Thanks!


Solution

  • The answer to this turned out to be as simple as it was completely non-obvious.

    It seems to be more general than just the AudioKit library, and will probably affect any use of the microphone on all recent Apple platforms.

    You need to give your Project Target the necessary "Capabilities" to access audio input.

    If you do not do this, your app can still attempt to access audio inputs, but the input devices will just provide silence. There doesn't seem to be any log message, compile warning, or thrown error to tell you why.

    Getting audio input capability…

    I've found two "Capabilities" to enable in the Xcode project. I'm not sure whether both are necessary – perhaps one applies to Catalyst targets and the other to non-Catalyst targets. I haven't investigated this yet, so I suggest just checking both.

    • Open your Xcode project.
    • Go to the project's settings (usually the topmost item in the left-hand navigation bar).
    • Select your target from the "Targets" on the left of the settings.
    • Select the "Signing & Capabilities" tab along the top of the settings page.
    • Scroll down and find the "App Sandbox" section.
      • Find the "Hardware" subsection
      • Find the "Audio Input" checkbox and select it.
    • Scroll down and find the "Hardened Runtime" section.
      • Find the "Resource Access" subsection.
      • Find the "Audio Input" checkbox and select it.

    Here's what those sections look like…

    App Sandbox UI

    Hardened Runtime UI