Maybe this is a red herring, but I'm trying to understand what (if any) differences exist between integer math and math on some other fixed-point number system.
Is there additional cost to operations on a fixed-point number where 2^0 isn't bit 0?
What's the difference between multiplying all your numbers by 2^10 and performing "normal" math vs operating in a 22p10 fixed-point number system?
Does this come down to nomenclature or am I missing something? I'm working in the context of microcontrollers.
Integer mathematics is simpler and involves less work, precisely because of how the exponent is handled.
When arithmetic operations are performed on fixed-point numbers, the resultant exponent depends on the operands and the operation. For example, you can only add two fixed-point numbers with the same exponent, and the result is a third number with that same exponent. Meanwhile, when you multiply two fixed-point numbers, the exponent of the result is the sum of the exponents of the operands. (Division is yet more complex.)
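A quick worked example using the question's 2^10 scaling: 1.5 is stored as 1536 and 2.0 as 2048. Adding the stored values gives 3584, which is 3.5 × 2^10, so the sum is already on the same scale as its operands. Multiplying them gives 1536 × 2048 = 3145728, which is 3.0 × 2^20: the exponents have added, and the raw product is a factor of 2^10 too large to be mixed with your other numbers.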
In each of these cases, normalisation is needed before or after the operation if you want consistency of exponents between operations. That is done by scaling the value to achieve the desired exponent.
For base-2 fixed-point arithmetic on binary digital computers, scaling can often be achieved with bit-shifting machine instructions. That makes it a good choice for microcontrollers because bit-shifting is cheap. But it isn't free.
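A minimal sketch of what that looks like in C, assuming the question's 22p10 layout (22 integer bits, 10 fractional bits in a signed 32-bit word); the type and helper names here are my own, not any standard API:

```c
#include <stdint.h>

#define FRAC_BITS 10            /* Q22.10: values are scaled by 2^10 */
typedef int32_t q22_10_t;

/* Addition: both operands already share the same scale, so this is a
 * plain integer add -- no normalisation needed. */
static inline q22_10_t q_add(q22_10_t a, q22_10_t b)
{
    return a + b;
}

/* Multiplication: the raw product is scaled by 2^20 (the exponents add),
 * so shift right by FRAC_BITS to normalise back to Q22.10.  The
 * intermediate is widened to 64 bits to avoid overflow; note that
 * right-shifting a negative value is implementation-defined in C
 * (arithmetic shift on most microcontroller toolchains). */
static inline q22_10_t q_mul(q22_10_t a, q22_10_t b)
{
    return (q22_10_t)(((int64_t)a * (int64_t)b) >> FRAC_BITS);
}
```

That extra shift (and the wider intermediate it implies) is exactly the per-operation cost the question asks about.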
Integer arithmetic is a specialisation of fixed-point arithmetic in which operands have an exponent of zero. This means that the results of all operations also have an exponent of zero. That obviously simplifies the operations themselves, but it also means that the results of those operations can be used in subsequent operations (or recurrently in the same operation) without the need to normalise the exponent, i.e. no bit shifting!
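To see what "no bit shifting" buys you, compare the same expression written with plain integers and with the Q22.10 helpers from the sketch above:

```c
/* Plain integers: every intermediate result already has an exponent of
 * zero, so it feeds straight into the next operation. */
int32_t r_int = (3 * 7) + 5;

/* Q22.10: each product has to be shifted back to the common scale
 * (done inside q_mul) before it can be added to anything else. */
q22_10_t xq = 3 << FRAC_BITS, yq = 7 << FRAC_BITS, zq = 5 << FRAC_BITS;
q22_10_t r_q = q_add(q_mul(xq, yq), zq);
```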
The downside is that the values you can represent are now limited to whole numbers: the resolution is fixed at 1, so fractions cannot be represented at all.