Tags: c++, performance, int, long-integer, short

What does 'Natural Size' really mean in C++?


I understand that the 'natural size' is the width of integer that is processed most efficiently by a particular piece of hardware. When a short is used in an array or in arithmetic operations, the short integer must first be converted into an int.
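
To show what I mean, here is a quick check I put together (C++17); the static_assert passes because both short operands are promoted to int before the addition:

    #include <type_traits>

    int main() {
        short a = 1, b = 2;
        // The usual arithmetic conversions promote both operands to int,
        // so the expression a + b has type int, not short.
        static_assert(std::is_same_v<decltype(a + b), int>,
                      "short + short yields an int");
        return a + b;
    }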

Q: What exactly determines this 'natural size'?

I am not looking for simple answers such as

If it has a 32-bit architecture, its natural size is 32 bits

I want to understand why this is most efficient, and why a short must be converted before doing arithmetic operations on it.

Bonus Q: What happens when arithmetic operations are conducted on a long integer?


Solution

  • the 'natural size' is the width of integer that is processed most efficiently by a particular piece of hardware.

    Not really. Consider the x64 architecture. Arithmetic on any size from 8 to 64 bits will be essentially the same speed. So why have all x64 compilers settled on a 32-bit int? Well, because there was a lot of code out there which was originally written for 32-bit processors, and a lot of it implicitly relied on ints being 32 bits. And given the near-uselessness of a type which can represent values up to nine quintillion, the extra four bytes per integer would have been virtually unused. So we've decided that 32-bit ints are "natural" for this 64-bit platform (the sizeof check at the end of this answer shows the result of that choice).

    Compare the 80286 architecture. Only 16 bits in a register. Performing 32-bit integer addition on such a platform basically requires splitting it into two 16-bit additions; doing virtually anything with a 32-bit value involves splitting it up, with an attendant slowdown (see the carry sketch at the end of this answer). The 80286's "natural integer size" is most definitely not 32 bits.

    So really, "natural" comes down to considerations like processing efficiency, memory usage, and programmer-friendliness. It is not an acid test. It is very much a matter of subjective judgment on the part of the architecture/compiler designer.
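
    To make the first point concrete, here is a trivial check; the exact numbers are implementation-defined, but a mainstream x64 compiler (LP64 Linux/macOS, LLP64 Windows) will report a 4-byte int despite the 64-bit registers:

        #include <iostream>

        int main() {
            // Typical x64 output: short=2, int=4, long=4 or 8, long long=8.
            // int stays 32 bits even though the hardware is 64-bit.
            std::cout << "short:     " << sizeof(short)     << '\n'
                      << "int:       " << sizeof(int)       << '\n'
                      << "long:      " << sizeof(long)      << '\n'
                      << "long long: " << sizeof(long long) << '\n';
        }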
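
    And to illustrate the 16-bit case, here is a sketch in portable C++ (not actual 80286 assembly) of what "splitting it into two 16-bit additions" means: add the low halves, detect the carry, then add the high halves plus the carry, which is the same job the ADD/ADC instruction pair does on 16-bit x86:

        #include <cstdint>

        // Add two 32-bit values using only 16-bit additions, the way a
        // 16-bit CPU has to: low halves first, then high halves plus carry.
        std::uint32_t add32_with_16bit_ops(std::uint32_t a, std::uint32_t b) {
            std::uint16_t a_lo = a & 0xFFFF, a_hi = a >> 16;
            std::uint16_t b_lo = b & 0xFFFF, b_hi = b >> 16;

            std::uint16_t lo    = std::uint16_t(a_lo + b_lo);          // first 16-bit add (wraps)
            std::uint16_t carry = (lo < a_lo) ? 1 : 0;                 // did the low add carry out?
            std::uint16_t hi    = std::uint16_t(a_hi + b_hi + carry);  // second 16-bit add

            return (std::uint32_t(hi) << 16) | lo;
        }

    Two dependent additions instead of one is exactly where the slowdown comes from.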