I'm currently working on a pretty straightforward app that records the user's singing voice using AVAudioRecorder and processes it with an AUGraph (based on the iPhoneMixerEQGraphTest sample), which applies an effect to the voice and eventually mixes the song and the voice together.
The only problem I have now is that I record beforehand and process afterwards, and I don't want the user to have to listen through the whole song plus their singing in real time just to render the result to a file.
My question is: is there a way to render the AUGraph output offline, faster than real time, straight to a file?
Cheers,
M0rph3v5
Eventually I just got an external company to do the audio processing. They supplied a library that routes the microphone input through the speakers, applies some EQ to it, and eventually mixes everything together.
CoreAudio's a bitch :(