I am studying an audio conversion algorithm that receives an array of signed shorts.
At one point the algorithm converts the samples from 16 bits to 14 bits, and it does it like this:
int16_t sample = (old_sample + 2) >> 2;
It's clear to me that the shift is needed because we want to discard the two least significant bits, but what is the + 2 for?
Shifting down loses the two least significant bits. If you just shift, the result always rounds down (toward negative infinity with the usual arithmetic shift), even when both of the bits being discarded are set. Adding 2, which is half of the step being removed (4), makes the conversion round to the nearest 14-bit value: whenever the upper of the two discarded bits is set, the carry propagates and the result rounds up instead.
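Here is a small, self-contained sketch (not from your code, and the sample values are made up) contrasting plain truncation with the + 2 rounding. Note that right-shifting a negative signed value is technically implementation-defined in C, although compilers almost universally perform an arithmetic shift:

    #include <stdint.h>
    #include <stdio.h>

    /* Truncation: just drop the two low bits; always rounds toward
       negative infinity (with the usual arithmetic shift). */
    static int16_t to14_truncate(int16_t s)
    {
        return (int16_t)(s >> 2);
    }

    /* Round to nearest: add half of the step being removed (4/2 = 2)
       before shifting, so the result rounds up whenever the upper of
       the two discarded bits is set. */
    static int16_t to14_round(int16_t s)
    {
        return (int16_t)((s + 2) >> 2);
    }

    int main(void)
    {
        int16_t samples[] = { 100, 101, 102, 103, -101 };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
            printf("%6d  truncated=%6d  rounded=%6d\n",
                   samples[i],
                   to14_truncate(samples[i]),
                   to14_round(samples[i]));
        return 0;   /* e.g. 102 -> 25 truncated, 26 rounded */
    }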
(Also worth noting that a better method of reducing the number of bits is to use dithering, i.e. adding a small random amount of noise before the reduction in sample size. This avoids the problem that, because the sounds are periodic, the rounding error can end up going consistently up or consistently down at a particular frequency, producing audible distortion. The Wikipedia article on dither explains it better than I can!)
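For what it's worth, here is a rough sketch of the dithering idea. It is purely illustrative and not part of the algorithm you're studying; rand() is used only to keep it self-contained, and a real implementation would use a better noise source:

    #include <stdint.h>
    #include <stdlib.h>

    /* Add TPDF (triangular) dither of roughly +/- one output step before
       reducing to 14 bits, so the quantisation error is decorrelated
       from the signal instead of repeating at a fixed frequency. */
    static int16_t to14_dithered(int16_t s)
    {
        /* The sum of two uniform values in 0..4 has a triangular
           distribution over 0..8; subtract 4 to centre it on zero. */
        int dither = (rand() % 5) + (rand() % 5) - 4;
        int v = s + 2 + dither;          /* rounding offset plus noise */

        /* Clamp so the shifted result stays within the 14-bit range. */
        if (v > INT16_MAX) v = INT16_MAX;
        if (v < INT16_MIN) v = INT16_MIN;
        return (int16_t)(v >> 2);
    }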