I am working on audio data manipulation (from/to 16-bit WAV
files), representing samples with double
values (64 bits).
Since I am doing a lot of amplitude-domain convolution, my resulting samples often have (positive or negative) values that go above the maximum that can be represented in 16 bits, and as a result they get truncated.
So I need to normalize my data before writing it to a WAV
file.
But it isn't clear to me what the maximum (and minimum) double
values are that can be represented in 16 bits.
Note: by "minimum value" I mean the most negative double value that can be represented in 16 bits.
Edit: by a "16-bit double"
I mean data read from a 16-bit WAV file and stored in my code as a double
value. After amplitude convolution, this data can become greater than 1 or less than -1.
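To illustrate the problem being described, here is a small sketch (using NumPy, which the question does not mention, purely as an assumption for demonstration) showing how convolving two signals whose samples each stay within [-1, 1] can still produce output samples outside that range:

```python
import numpy as np

# Two signals whose samples are all within [-1.0, 1.0]
a = np.array([0.8, 0.8, 0.8])
b = np.array([0.9, 0.9])

# Full linear convolution: the middle output samples are sums of
# products (0.8*0.9 + 0.8*0.9), which exceed 1.0
y = np.convolve(a, b)
print(y.max())  # peak is about 1.44, outside [-1, 1]
```

Written back to a 16-bit file without normalization, such samples would be clipped.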
The simple answer is that your denominator (for normalizing 16-bit data) is 2^15, assuming signed PCM.
My solution for normalizing is to divide all incoming 16-bit data by 32767. A case could be made for 32768, since the data ranges from -32768 to 32767, but I've always used 32767.
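A minimal sketch of that round trip, assuming NumPy and the hypothetical helper names `pcm16_to_double` / `double_to_pcm16` (neither is from the answer): divide by 32767 on the way in, peak-normalize before converting back out.

```python
import numpy as np

PCM_FULL_SCALE = 32767.0  # divisor suggested in the answer

def pcm16_to_double(samples_i16):
    """Map 16-bit signed PCM samples into roughly [-1.0, 1.0]."""
    return samples_i16.astype(np.float64) / PCM_FULL_SCALE

def double_to_pcm16(samples_f64):
    """Peak-normalize if needed, then scale back to the 16-bit range."""
    peak = np.max(np.abs(samples_f64))
    if peak > 1.0:  # only shrink; never amplify already-quiet audio
        samples_f64 = samples_f64 / peak
    return np.round(samples_f64 * PCM_FULL_SCALE).astype(np.int16)

raw = np.array([-32768, 0, 16384, 32767], dtype=np.int16)
as_double = pcm16_to_double(raw)

# After convolution the doubles may exceed [-1, 1];
# normalizing before the write keeps the int16 conversion from clipping
processed = as_double * 1.5
back = double_to_pcm16(processed)
```

Note that with 32767 as the divisor, the sample -32768 maps slightly below -1.0; that is the trade-off the answer alludes to when mentioning 32768.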