Tags: algorithm, audio, signal-processing, text-to-speech, language-theory

Adding Accents to Speech Generation


The first part of this question has been split off into its own question, here: Analyzing Text for Accents

Question: How could accents be added to generated speech?

What I've come up with:

I do not mean just accent marks, or inflection, or anything singular like that. I mean something like a full British accent, or a Scottish accent, or Russian, etc.

I would think that this could be done independently of the language as well. For example, something in Russian could be generated with a British accent, or something in Mandarin could have a Russian accent.

I think the basic process would be this:

  1. Analyze the text
    • Compare with a database (or something like that) to determine what needs an accent, how strong it should be, etc.
  2. Generate the speech in specified language
    • Easy with normal text-to-speech processors.
  3. Determine the specified accent based on the analyzed text.
    • This is the part in question.
    • I think an array of amplitudes and filters would work best for the next step.
  4. Mesh speech and accent.
    • This would be the easy part.
    • It could probably be done by multiplying the speech by the accent, like many other DSP methods do.

This is really more of a general DSP question, but I'd like to come up with a programmatic algorithm to do this rather than just a general idea.
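
Regarding step 4 above: "multiplying the speech by the accent" would in practice be ordinary filtering. Here is a minimal sketch of that idea (the file names are placeholders and the filter settings are arbitrary, chosen only for illustration); the answer below explains why this kind of processing alone cannot produce an accent.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfilt

    # Placeholder input: a waveform produced by some text-to-speech engine.
    rate, speech = wavfile.read("generated_speech.wav")
    speech = speech.astype(np.float64)

    # Design a band-pass filter; the cut-off frequencies are arbitrary and
    # only illustrate the "array of amplitudes and filters" idea.
    sos = butter(4, [300.0, 3400.0], btype="bandpass", fs=rate, output="sos")
    filtered = sosfilt(sos, speech, axis=0)

    wavfile.write("filtered_speech.wav", rate, filtered.astype(np.int16))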


Solution

  • What is an accent?

    An accent is not a sound filter; it's a pattern of acoustic realization of text in a language. You can't take a recording of American English, run it through an "array of amplitudes and filters", and have British English pop out. Where DSP is useful is in implementing prosody, not accent.

    Basically (and simplest to model), an accent consists of rules for phonetic realization of a sequence of phonemes. Perception of accent is further influenced by prosody and by which phonemes a speaker chooses when reading text.

    Speech generation

    The process of speech generation has two basic steps:

    1. Text-to-phonemes: Convert written text to a sequence of phonemes (plus suprasegmentals like stress, and prosodic information like utterance boundaries). This is somewhat accent-dependent (e.g. the output for "laboratory" differs between American and British speakers).

    2. Phoneme-to-speech: Given the sequence of phonemes, generate audio according to the dialect's rules for the phonetic realization of phonemes. (Typically you combine diphones and then acoustically adjust the prosody.) This step is highly accent-dependent, and it is what imparts the main quality of the accent: a particular phoneme, even if shared between two accents, may have strikingly different acoustic realizations.

    Normally these are paired. While you could have a British-accented speech generator that uses American pronunciations, that would sound odd.
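
    To make the division of labour concrete, here is a rough sketch (not taken from any real engine; the function names, accent labels, and ARPAbet-style transcriptions are made up for illustration) of the two-stage structure:

        from typing import List

        def text_to_phonemes(text: str, accent: str) -> List[str]:
            """Stage 1: accent-specific pronunciation lookup (normally a large
            lexicon plus letter-to-sound rules; here a two-entry toy dictionary)."""
            lexicon = {
                ("laboratory", "en-US"): ["L", "AE", "B", "R", "AH", "T", "AO", "R", "IY"],
                ("laboratory", "en-GB"): ["L", "AH", "B", "AO", "R", "AH", "T", "R", "IY"],
            }
            phonemes: List[str] = []
            for word in text.lower().split():
                phonemes.extend(lexicon.get((word, accent), list(word)))  # crude fallback
            return phonemes

        def phonemes_to_speech(phonemes: List[str], accent: str) -> bytes:
            """Stage 2: realize the phonemes acoustically using accent-specific
            rules (diphone concatenation or formant synthesis). Left unimplemented."""
            raise NotImplementedError

        print(text_to_phonemes("laboratory", "en-US"))
        print(text_to_phonemes("laboratory", "en-GB"))
        # phonemes_to_speech(...) would then produce audio in the chosen accent.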

    Generating speech with a given accent

    Writing a text-to-speech program is an enormous amount of work (in particular, to implement one common scheme, you have to record a native speaker speaking each possible diphone in the language), so you'd be better off using an existing one.

    In short, if you want a British accent, use a British English text-to-phoneme engine together with a British English phoneme-to-speech engine.

    For common accents like American and British English, Standard Mandarin, Metropolitan French, etc., there will be several choices, including open-source ones that you will be able to modify (as below). For example, look at FreeTTS and eSpeak. For less common accents, existing engines unfortunately may not exist.
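
    As a concrete example, assuming the eSpeak NG command-line tool is installed and provides British and American English voices under the names below (check with espeak-ng --voices; identifiers vary between installations), you could synthesize the same sentence in both accents like this:

        import subprocess

        sentence = "I parked the car near the laboratory."

        # en-gb and en-us are common voice identifiers, but not guaranteed.
        for voice in ("en-gb", "en-us"):
            subprocess.run(
                ["espeak-ng", "-v", voice, "-w", f"out_{voice}.wav", sentence],
                check=True,
            )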

    Speaking text with a foreign accent

    English-with-a-foreign-accent is socially not very prestigious, so complete systems probably don't exist.

    One strategy would be to combine an off-the-shelf text-to-phoneme engine for a native accent with a phoneme-to-speech engine for the foreign language. For example, a native Russian speaker who learned English in the U.S. would plausibly use American pronunciations of words like laboratory, but would map those phonemes onto his native Russian phonemes and pronounce them as in Russian. (I believe there is a website that does this for English and Japanese, but I don't have the link.)
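
    A rough sketch of that strategy, again assuming eSpeak NG: eSpeak can print its phoneme mnemonics for an utterance (-x, with -q to suppress audio) and also accepts phoneme mnemonics in the input text when they are enclosed in [[...]]. Feeding the English phonemes to the Russian voice gives exactly the "pure foreign extreme" discussed in the next paragraph; the mnemonics are partly voice-specific, so some may be dropped or mangled, and the result will sound crude.

        import subprocess

        text = "It is raining in the laboratory."

        # Stage 1: American English text-to-phonemes (phoneme mnemonics on stdout).
        phonemes = subprocess.run(
            ["espeak-ng", "-v", "en-us", "-q", "-x", text],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

        # Stage 2: speak those phonemes with the Russian voice's realization rules.
        subprocess.run(
            ["espeak-ng", "-v", "ru", "-w", "russian_accented_english.wav",
             f"[[{phonemes}]]"],
            check=True,
        )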

    The problem is that the result is too extreme. A real English learner would attempt to recognize and generate phonemes that do not exist in his native language, and would also alter his realization of his native phonemes to approximate the native pronunciation. How closely the result matches a native speaker of course varies, but using the pure foreign extreme sounds ridiculous (and mostly incomprehensible).

    So to generate plausible American-English-with-a-Russian-accent (for instance), you'd have to write a text-to-phoneme engine yourself; existing American English and Russian text-to-phoneme engines are a good starting point. For the phoneme-to-speech step, the ideal would be to record diphones from a speaker who actually has the accent you want. If you're not willing to find and record such a speaker, you could probably still get a decent approximation by using DSP to combine samples from engines for the two languages. Since eSpeak uses formant synthesis rather than recorded samples, it might be easier there to combine information from multiple languages.

    Another thing to consider is that foreign speakers often modify the sequence of phonemes under the influence of the phonotactics of their native language, typically by simplifying consonant clusters, inserting epenthetic vowels, or diphthongizing or breaking vowel sequences.
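
    As a toy illustration of one such adjustment (the rule and the symbols are invented for the sketch, not taken from any real accent model), here is a crude function that breaks up consonant clusters with an epenthetic vowel:

        VOWELS = {"a", "e", "i", "o", "u"}

        def break_clusters(phonemes, epenthetic="u"):
            """Insert a vowel between adjacent consonants -- a crude stand-in for
            real, language-specific phonotactic rules."""
            out = []
            for i, p in enumerate(phonemes):
                out.append(p)
                nxt = phonemes[i + 1] if i + 1 < len(phonemes) else None
                if nxt is not None and p not in VOWELS and nxt not in VOWELS:
                    out.append(epenthetic)
            return out

        print(break_clusters(list("strit")))  # "street" -> s, u, t, u, r, i, t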

    There is some literature on this topic.