Is it possible to use the synthesised speech from the Web Speech API as a source node
inside a Web Audio API audio context?
I actually asked about adding this on the Web Speech mailing list and was essentially told "no". To be fair to the people on that list, when prompted I could only come up with one or two specific use cases.
So unless something has changed in the past month or so, it sounds like this isn't a planned feature.