c++, visual-c++, stdint

Why do fixed width types delegate back to primitives?


In Visual Studio 14 the stdint.h header has definitions for the fixed width integer types, but if you actually look at their definitions they just delegate back to primitives. The definitions are as follows:

typedef signed char        int8_t;
typedef short              int16_t;
typedef int                int32_t;
typedef long long          int64_t;
typedef unsigned char      uint8_t;
typedef unsigned short     uint16_t;
typedef unsigned int       uint32_t;
typedef unsigned long long uint64_t;

So is there any reason to use stdint.h if all it does is fall back to primitives? I also know that Visual Studio does not simply replace these definitions at compile time, because if you try to print an int8_t to the console you will get a character instead of a number, since it really is just a signed char.
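
For what it's worth, here is a minimal sketch of what I mean (65 is just an arbitrary value):

#include <cstdint>
#include <iostream>

int main() {
    std::int8_t value = 65;                       // same bit pattern as 'A'
    std::cout << value << '\n';                   // streams as a char: prints A
    std::cout << static_cast<int>(value) << '\n'; // prints 65
}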

EDIT

Because people are pointing out that there is nothing else these types could logically be defined as, I think my question needs restating.

Why is it that the header, which according to the C++ spec provides integers of fixed widths of 8, 16, 32 and 64 bits, defines these integers as types that by definition can be any size the compiler wants? (To put it the way someone else did in another question: the compiler can decide that an int will be a 71-bit number stored in a 128-bit memory space, where the additional 57 bits are used to store the programmer's girlfriend's birthday.)


Solution

  • I understand from both the original and the restated question that there is a misconception about guaranteed-width integers (and I say guaranteed because not all types in stdint.h are of fixed width) and the actual problems they solve.

    C and C++ define primitive types such as int, long int, long long int, etc. For simplicity, let's focus on the most common of all, int. What the C standard requires is that an int be at least 16 bits wide. However, compilers on all widely used x86 platforms will actually give you a 32-bit integer when you define an int. This happens because an x86 processor can fetch a 32-bit field (the word size of a 32-bit x86 CPU) directly from memory, hand it as-is to the ALU for 32-bit arithmetic and store it back to memory, without any shifting or padding, and that is fast. But that is not the case for every compiler/architecture combination. On an embedded device with, for example, a very small MIPS processor, you will probably get a 16-bit integer from the compiler when you define an int. So the width of the primitives is chosen by the compiler depending solely on the hardware capabilities of the target platform, subject to the minimum widths required by the standard. And yes, on a strange architecture with, say, a 25-bit ALU, you might well be given a 25-bit int (the first sketch at the end of this answer makes the variation concrete).

    In order for a piece of C/C++ code to be portable across many different compiler/hardware combinations, stdint.h provides typedefs that guarantee a certain width (or a minimum width). So when, for example, you want a 16-bit signed integer (say, to save memory, or for modular counters), you don't have to worry about whether you should use an int or a short; you simply use int16_t. The compiler's developers provide a properly constructed stdint.h that typedefs each requested fixed-width integer to the primitive that actually implements it on that platform. That means that on x86 an int16_t will probably be defined as short, while on a small embedded device you may get an int, with all of these mappings maintained by the compiler's developers (the second sketch below shows this in use).
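
    A minimal sketch of the first point, using only the standard library; the widths it reports depend entirely on the compiler/target combination, not on the language:

        #include <climits>
        #include <iostream>

        int main() {
            // CHAR_BIT is the number of bits per byte (8 on mainstream platforms).
            // On 32/64-bit x86 you will typically see 32, 32 or 64, and 64 here,
            // but the standard only guarantees minimums of 16, 32 and 64.
            std::cout << "int:       " << sizeof(int)       * CHAR_BIT << " bits\n";
            std::cout << "long:      " << sizeof(long)      * CHAR_BIT << " bits\n";
            std::cout << "long long: " << sizeof(long long) * CHAR_BIT << " bits\n";
        }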
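
    And a sketch of the second point: the code below only cares that the fixed-width typedefs have exactly the advertised sizes; which primitive each one maps to is the vendor's concern. (The values here are arbitrary, chosen just to show the fixed range.)

        #include <cstdint>
        #include <iostream>

        int main() {
            // Exactly these sizes wherever the typedefs exist, regardless of the
            // underlying primitive (assuming the usual 8-bit bytes).
            static_assert(sizeof(std::int16_t)  == 2, "int16_t must be 2 bytes");
            static_assert(sizeof(std::uint32_t) == 4, "uint32_t must be 4 bytes");

            std::uint16_t counter = 65535;  // UINT16_MAX, identical on every platform
            ++counter;                      // unsigned wrap-around: back to 0
            std::cout << counter << '\n';   // prints 0
        }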