I have been looking at different methods of detecting the pitch of a tone sung into the microphone.
Since I want to measure how closely the sung tone matches a particular pitch class, I wonder whether I could use some sort of physics-based resonance algorithm.
If you hold down the sustain pedal on a piano and sing a tone into it (and if your tone is close enough to one of the piano's existing pitches), a note will resonate sympathetically.
I would love to be able to model this behaviour. But how would I go about the task? Can anyone help me move this forward?
One interesting solution I found is simply to feed the microphone input into a Karplus-Strong algorithm.
So Karplus-Strong simulates a plucked string by:

- filling a ring buffer with a burst of noise (the "pluck");
- repeatedly replacing the oldest sample with the average of the two oldest samples, scaled by a decay factor just under 1;
- sending each newly computed sample to the output.

The buffer length determines the pitch: the output frequency is the sample rate divided by the buffer length.
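A minimal sketch of that loop (the helper names `ks_pluck` and `ks_tick` are mine, and a 44.1 kHz sample rate is assumed):

```c
#include <stdlib.h>

#define SAMPLE_RATE 44100
#define MAX_RING 2048

/* Fill the ring buffer with white noise: this is the "pluck". */
static void ks_pluck(double *ring, int N) {
    for (int i = 0; i < N; i++)
        ring[i] = (double)rand() / RAND_MAX * 2.0 - 1.0;
}

/* One tick of the loop: average the two oldest samples, apply a slight
   decay, write the result over the oldest slot, and return it as the
   output sample. */
static double ks_tick(double *ring, int N, int *ptr) {
    int next = (*ptr + 1) % N;
    double x = (ring[*ptr] + ring[next]) * 0.5 * 0.998;
    ring[*ptr] = x;
    *ptr = next;
    return x;
}
```

With `N = SAMPLE_RATE / frequency`, A2 (110 Hz) needs a buffer of about 401 samples; calling `ks_tick` once per output sample produces the decaying plucked tone.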
Now, suppose we add the microphone stream into this process:
x = ( ringBuf[prev] + ringBuf[prev2] ) * 0.5 * 0.998;
micOut[moPtr++] = x;
ringBuf[curr] = x + micIn[miPtr++];
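Fleshed out, the loop might look like the sketch below. The wrapping function, the index arithmetic, and the block-based mic buffers are my assumptions; only the three-line core comes from the snippet above, and reading `prev`/`prev2` as the two oldest samples in the ring is one plausible interpretation:

```c
#define DECAY 0.998

/* Drive one block of microphone input through the Karplus-Strong loop.
   ringBuf holds N samples; prev and prev2 are taken to be the two oldest
   samples (N and N-1 ticks back). micOut receives what the "string"
   rings back; micIn continuously excites the string. */
static void ks_process(const double *micIn, double *micOut, int frames,
                       double *ringBuf, int N, int *curr) {
    for (int i = 0; i < frames; i++) {
        int prev  = *curr;             /* oldest sample, N ticks back   */
        int prev2 = (*curr + 1) % N;   /* next-oldest, N-1 ticks back   */
        double x = (ringBuf[prev] + ringBuf[prev2]) * 0.5 * DECAY;
        micOut[i] = x;                 /* resonated output              */
        ringBuf[prev] = x + micIn[i];  /* feed the mic into the string  */
        *curr = prev2;
    }
}
```

Energy near the resonant frequency (sample rate / N) recirculates and builds up, while off-pitch energy is averaged away, which is exactly the sympathetic-resonance behaviour described above.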
It actually simulates singing into a guitar remarkably well. If you get your tone spot on, it truly wails.
But there is a severe problem with this approach: consider the pitch generated by a buffer of 100 elements versus the pitch generated by a buffer of 101 elements. There is no way to generate any pitch in between those two values; we are limited to a discrete set of pitches. While this is pretty accurate for low notes (at a 44.1 kHz sample rate, A2 needs a buffer length of ~400), the error grows the higher we go: A7 would need a buffer length of ~12.5, and the gap between the pitches offered by lengths 12 and 13 is more than a semitone.
I cannot see any way of countering this problem. I think the approach has to be dropped.