
Why do we usually multiply an audio signal in log scale by 20?


Here is a fairly elementary question about audio processing.

After a signal is converted to a log scale, it is usually multiplied by 20. Conversely, to recover a gain in linear scale, we use the formula

G_linear = 10 ** (0.05 * G_in_dB)

This makes sense because 0.05 is 1/20.
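Just to make the two directions of the conversion concrete, here is a minimal sketch (the helper names `db_to_linear` and `linear_to_db` are my own, not from any particular library):

```python
import math

def db_to_linear(gain_db):
    # 10 ** (0.05 * dB): e.g. 20 dB -> a factor of 10 in amplitude
    return 10 ** (0.05 * gain_db)

def linear_to_db(gain_linear):
    # inverse mapping: 20 * log10(linear gain)
    return 20 * math.log10(gain_linear)

print(db_to_linear(20))    # 10.0
print(linear_to_db(0.5))   # about -6.02 dB
```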

But where does this factor of 20 come from? What is the theory behind it?

I have read some signal-processing courses and code, but have not found any explanation for this.

Thanks for your help!


Solution

  • By definition, a decibel of gain multiplies the signal power by a factor of 10^(1/10).

    Linear gain applies to amplitude, not power, though.

    Since power = amplitude^2, a decibel of gain multiplies the signal amplitude by (10^(1/10))^(1/2) = 10^(1/20). So a gain of N dB multiplies the amplitude by 10^(N/20), which is where the factor of 20 (equivalently, the 0.05 = 1/20 in your formula) comes from.
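
A quick numeric check (my own illustration, not part of the original answer) shows that the 10-with-power and 20-with-amplitude conventions give the same dB value, because the 20 simply absorbs the squaring:

```python
import math

amplitude_ratio = 2.0                     # double the amplitude
power_ratio = amplitude_ratio ** 2        # power scales with amplitude squared

db_from_power = 10 * math.log10(power_ratio)          # 10 * log10(4)  ~ 6.02 dB
db_from_amplitude = 20 * math.log10(amplitude_ratio)  # 20 * log10(2)  ~ 6.02 dB

print(db_from_power, db_from_amplitude)   # both print ~6.0206
```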