Tags: javascript, reactjs, next.js, google-speech-to-text-api, mediarecorder-api

Google Speech-to-Text returns an empty transcription for audio created with the MediaRecorder API in React


I'm working on a feature that transcribes speech to text, using the Google Speech-to-Text API with Next.js/React. I record audio with the browser's MediaRecorder API, but when I send that recording to Google Speech-to-Text it returns an empty transcription. If I instead use an audio file recorded in Audacity, the transcription comes back fine.

Here's my client code:

const startRecording = () => {
    navigator.mediaDevices
      // Track constraints (sample rate, mono channel, echo cancellation,
      // noise suppression) belong in getUserMedia, not in MediaRecorder options.
      .getUserMedia({
        audio: {
          sampleRate: 48000,
          channelCount: 1,
          echoCancellation: true,
          noiseSuppression: true,
        },
      })
      .then((stream) => {
        const recorder = new MediaRecorder(stream, {
          mimeType: "audio/webm; codecs=opus",
          bitsPerSecond: 128000,
        });
        const chunks = [];

        // Collect encoded chunks as the recorder produces them.
        recorder.addEventListener("dataavailable", (event) => {
          chunks.push(event.data);
        });

        recorder.addEventListener("stop", () => {
          // Note: the chunks hold WebM/Opus data even though the blob
          // is labeled audio/wav here.
          const blob = new Blob(chunks, { type: "audio/wav" });
          const url = URL.createObjectURL(blob);
          setAudioUrl(url);
          setRecording(false);
          setAudioBlob(blob); // Update the audioBlob state variable
        });

        recorder.start();
        setMediaRecorder(recorder);
        setRecording(true);
      })
      .catch((error) => {
        console.error(error);
      });
  };

And here's my server code:

import fs from "fs";
import speech from "@google-cloud/speech";

const speechClient = new speech.SpeechClient();

async function transcribeContextClasses() {
  // Read the saved recording and base64-encode it for the request payload.
  const file = fs.readFileSync("public/audio/1680169074745_audio.wav");
  const audioBytes = file.toString("base64");

  const audio = {
    content: audioBytes,
  };

  // Bias recognition toward time-of-day phrases via the $TIME class token.
  const speechContext = {
    phrases: ["$TIME"],
  };

  const config = {
    encoding: "LINEAR16",
    sampleRateHertz: 48000,
    languageCode: "en-US",
    speechContexts: [speechContext],
  };

  const request = {
    config: config,
    audio: audio,
  };

  const [response] = await speechClient.recognize(request);
  const transcription = response.results
    .map((result) => result.alternatives[0].transcript)
    .join("\n");
  console.log(`Transcription: ${transcription}`);
}

For now I save the recorded audio as a file and point my server-side code at it manually, so that I can also test audio recorded with other software.
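For completeness, here is a minimal sketch of how the blob could be sent to the server instead of being saved by hand. The /api/upload route name and the timestamp-based file naming are my own assumptions, not part of the original code:

// Client: POST the recorded blob to a (hypothetical) Next.js API route.
const uploadAudio = async (audioBlob) => {
  await fetch("/api/upload", {
    method: "POST",
    headers: { "Content-Type": "audio/wav" },
    body: audioBlob,
  });
};

// pages/api/upload.js: collect the raw body and write it under public/audio/.
import fs from "fs";

export const config = {
  api: { bodyParser: false }, // keep the raw audio bytes intact
};

export default async function handler(req, res) {
  const chunks = [];
  for await (const chunk of req) {
    chunks.push(chunk);
  }
  fs.writeFileSync(`public/audio/${Date.now()}_audio.wav`, Buffer.concat(chunks));
  res.status(200).json({ ok: true });
}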


Solution

  • I was able to fix my issue by changing the encoding from encoding: "LINEAR16" to encoding: "WAV", since I'm sending audio in WAV format.
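Applied to the config object in the server code above, that one-line change looks like this:

const config = {
  encoding: "WAV", // was "LINEAR16"; matches the WAV file being sent
  sampleRateHertz: 48000,
  languageCode: "en-US",
  speechContexts: [speechContext],
};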