c++ c cpu size-t size-type

What is "size of the largest possible object on the target platform" in terms of size_t


I am reading an article about size_t in C/C++, http://web.archive.org/web/20081006073410/http://www.embedded.com/columns/programmingpointers/200900195 (link found through Stack Overflow).

Quote from the article:

Type size_t is a typedef that's an alias for some unsigned integer type, typically unsigned int or unsigned long, but possibly even unsigned long long. Each Standard C implementation is supposed to choose the unsigned integer that's big enough--but no bigger than needed--to represent the size of the largest possible object on the target platform.

How can I determine the size of the largest possible object on my machine?

What affects the size of the largest object (aside from the processor)?

Links to detailed explanations are welcome.


Solution

  • Edit: I think it's important to consider that this type doesn't strictly mean that you CAN have an object of that size - just that it's an integer LARGE ENOUGH to hold the size of the largest possible object. That doesn't mean you can use SIZE_MAX to allocate memory; it just guarantees that the largest possible object cannot be LARGER than SIZE_MAX.
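    To see this concretely, here is a minimal sketch (assuming a hosted C++ implementation): it prints SIZE_MAX for the current platform and shows that asking for anywhere near that much memory is expected to fail, because SIZE_MAX only caps object size - it doesn't promise the memory exists.

        // Minimal sketch: SIZE_MAX bounds how large a single object may be,
        // but it does not guarantee that an allocation of that size succeeds.
        #include <cstddef>   // std::size_t
        #include <cstdint>   // SIZE_MAX
        #include <cstdio>
        #include <limits>
        #include <new>       // std::bad_alloc

        int main() {
            std::printf("sizeof(std::size_t): %zu bytes\n", sizeof(std::size_t));
            std::printf("SIZE_MAX:            %zu\n",
                        std::numeric_limits<std::size_t>::max());

            try {
                // Essentially guaranteed to throw on real hardware.
                char* p = new char[SIZE_MAX / 2];
                delete[] p;
            } catch (const std::bad_alloc&) {
                std::printf("new char[SIZE_MAX / 2] failed, as expected\n");
            }
            return 0;
        }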

    This is an architectural decision made by the compiler implementation (typically based, in turn, on the OS that the compiler is targeting - though the OS could offer MORE than the compiler supports, or the compiler could support a theoretical amount that is more than the OS allows; the allocation will simply fail when you ask for it).

    In practical terms, it is nearly always the processor that determines this - size_t nearly always matches the bitness of the processor, e.g. it's 32 bits on a 32-bit processor and 64 bits on a 64-bit processor. But it would be possible to design a system where size_t is 32 bits on a 64-bit processor - saying that one "object" can't be bigger than 4GB isn't that big a limitation, really. It just means that a single vector of int can't span more than 4GB, so no more than 1G entries in the vector (or 4G char entries).
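    The arithmetic behind that "1G int entries vs. 4G char entries" claim can be reproduced on any platform from SIZE_MAX; the vector's own max_size() (shown for comparison, and often smaller than the theoretical cap) is what the library will actually let you request:

        // Sketch: theoretical element counts implied by size_t, next to the
        // (usually smaller) limit std::vector reports for itself.
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        int main() {
            std::printf("max char elements in one object: %zu\n", SIZE_MAX / sizeof(char));
            std::printf("max int  elements in one object: %zu\n", SIZE_MAX / sizeof(int));
            std::printf("vector<int>::max_size():         %zu\n",
                        std::vector<int>().max_size());
            return 0;
        }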

    Of course, the other limiting factor is available memory - if you have a very old machine with 256MB of RAM, it's not going to allow you to allocate 4GB, even if size_t allows it. But give the same machine more memory, and you can go to a much larger size.

    On many 32-bit systems, the maximum memory allowed for an application is less than 4GB (the full 32-bit range), because some portion of the address space is "reserved" for other uses. So again, size_t is 32 bits and would allow 4GB, but a single application can't actually use the full amount - on the other hand, a 32-bit machine could have more than 4GB of RAM and dole it out between multiple applications.

    Also, if the system were limited (for some architectural reason), say, to 16MB of memory, size_t is most likely still a 32-bit unsigned integer - because most processors don't do 24-bit integers [some DSPs may, but regular 16- or 32-bit processors don't].
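    If you want to see which width the implementation actually picked, SIZE_MAX can be inspected at compile time; this sketch only covers the common 16-, 32- and 64-bit cases described above:

        // Sketch: detect the width of size_t at compile time via SIZE_MAX.
        // Only the common 16/32/64-bit widths are handled here.
        #include <cstdint>
        #include <cstdio>

        #if SIZE_MAX == 0xFFFFu
        static const char width[] = "16-bit size_t (small embedded targets)";
        #elif SIZE_MAX == 0xFFFFFFFFu
        static const char width[] = "32-bit size_t";
        #elif SIZE_MAX == 0xFFFFFFFFFFFFFFFFu
        static const char width[] = "64-bit size_t";
        #else
        static const char width[] = "some other width (rare, but allowed)";
        #endif

        int main() {
            std::printf("%s\n", width);
            return 0;
        }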