I am trying to implement linear convolution using Core Audio. I have the algorithm implemented and working, but I am now trying to write its output into a .wav audio file. Here is the code for the algorithm:
//Create array containing the output of the convolution (length = size1 + size2 - 1)
int sizeOutput = size1 + size2 - 1;
float *COutput = (float *)malloc(sizeOutput * sizeof(float));
//Convolution Algorithm!!!
for (int i = 0; i < sizeOutput; i++) {
    COutput[i] = 0;
    //j runs over CArray2 (length size2); guard keeps i - j inside CArray1
    for (int j = 0; j < size2; j++) {
        if (i - j >= 0 && i - j < size1) {
            COutput[i] += CArray1[i - j] * CArray2[j];
        }
    }
}
I need to write the float values within COutput (a standard array of floats) into an audio file. Am I right in assuming I need to send these float values to an AudioBuffer within an AudioBufferList initially? Or is there a simple way of doing this?
Many thanks for any help or guidance!
The free DiracLE time-stretching library ( http://dirac.dspdimension.com ) ships utility code that converts ABLs (AudioBufferLists) into float arrays and vice versa as part of its example code. Check out its EAFRead and EAFWrite classes; they're exactly what you're looking for.