
Size, minimum and maximum values of data types in C


I need to determine the size, the minimum and the maximum values of the following types in C:

  • char
  • unsigned char
  • short
  • int
  • unsigned int
  • unsigned long
  • float

I would like if someone could help me clarify the following:

  1. What is exactly meant by the word "size" in this context?

For example, I wrote the following code:

#include <stdio.h>
#include <limits.h>
#include <float.h>

int main(void)
{
    char c;
    int h = sizeof(c);
    printf("%.6d\n", h);

    int n;
    h = sizeof(n);
    printf("%.6d\n", h);
}

It outputs 1 for char and 4 for int. What do these numbers mean?

  2. How does one determine the minimum and maximum values with some simple beginner C code?

  3. What does the word "unsigned" mean in this context?


Solution

    1. What is exactly meant by the word "size" in this context?

    With h = sizeof(n);, sizeof yields the number of bytes the object takes up in memory. In C, a "byte" is often 8 bits, but may be more. Use CHAR_BIT from <limits.h>.

    number of bits for smallest object that is not a bit-field (byte)
    CHAR_BIT 8 (minimum value)
    C11 §5.2.4.2.1 1

    Values stored in non-bit-field objects of any other object type consist of n × CHAR_BIT bits, where n is the size of an object of that type, in bytes. ...
    §6.2.6.1 4

    To properly compute and print size, use type size_t and "%zu".

    #include <limits.h> // CHAR_BIT
    #include <stddef.h>
    #include <stdio.h>
    
    some_type n; // substitute any object type here, e.g. int
    size_t h = sizeof(n);
    printf("Byte size: %zu, Bits/byte: %d, Bit size: %zu\n", h, CHAR_BIT, h * CHAR_BIT);
    // Octet is the common "outside of C" meaning of a "byte": 8 bits
    printf("Octet size: %g\n", (h * CHAR_BIT) / 8.0);
    

    2. How does one determine the minimum and maximum values with some simple beginner C code?

    C is type-rich: there are many types. Robust code does not try to calculate the min/max of a type, but instead uses the constants defined in various include files.

    Attempting to calculate the min/max of a type (other than unsigned types) often runs into undefined behavior (UB) or implementation-defined behavior. Avoid that.
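    For unsigned types specifically, the maximum can be computed without UB, since conversion to an unsigned type is defined (modulo 2^N). A minimal sketch:

    #include <limits.h>
    #include <stdio.h>
    
    int main(void) {
        // Converting -1 to an unsigned type yields that type's maximum value.
        unsigned int umax = (unsigned int)-1;    // same value as UINT_MAX
        unsigned long ulmax = (unsigned long)-1; // same value as ULONG_MAX
        printf("unsigned int max : %u\n", umax);
        printf("unsigned long max: %lu\n", ulmax);
        return 0;
    }

    Even so, preferring the standard constants keeps code clearer.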

    // FP limits
    #include <float.h>
    // Standard integers 
    #include <limits.h>
    // Fixed width, minimum width, fast integers 
    #include <stdint.h>
    // Extended multibyte/wide characters
    #include <wchar.h>
    

    To print these, be sure to use a format specifier matching each type and value. For details, research fprintf().

    #include <stdio.h>
    
    printf("char            range %d ... %d\n", CHAR_MIN, CHAR_MAX);
    printf("unsigned char   range %u ... %u\n", 0u, UCHAR_MAX);
    printf("short           range %d ... %d\n", SHRT_MIN, SHRT_MAX);
    printf("int             range %d ... %d\n", INT_MIN, INT_MAX);
    printf("unsigned int    range %u ... %u\n", 0u, UINT_MAX);
    printf("unsigned long   range %lu ... %lu\n", 0UL, ULONG_MAX);
    printf("float           finite range %.*g ... %.*g\n", FLT_DECIMAL_DIG, -FLT_MAX,
       FLT_DECIMAL_DIG, FLT_MAX);
    

    Example output - Yours may differ

    char            range -128 ... 127
    unsigned char   range 0 ... 255
    short           range -32768 ... 32767
    int             range -2147483648 ... 2147483647
    unsigned int    range 0 ... 4294967295
    unsigned long   range 0 ... 18446744073709551615
    float           finite range -3.40282347e+38 ... 3.40282347e+38
    

    Many implementations support +/- infinity with floating-point types. With such, the maximum float value is then INFINITY. Research HUGE_VALF for additional ideas of a float maximum.
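    A small sketch of that idea, assuming an implementation with IEEE-style infinities (where HUGE_VALF expands to INFINITY):

    #include <float.h>
    #include <math.h>
    #include <stdio.h>
    
    int main(void) {
        printf("FLT_MAX  : %g\n", (double)FLT_MAX);
        printf("INFINITY : %f\n", INFINITY); // prints "inf" on such implementations
        // INFINITY compares greater than every finite float, including FLT_MAX.
        printf("INFINITY > FLT_MAX: %d\n", INFINITY > FLT_MAX);
        return 0;
    }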


    3. What does the word "unsigned" mean in this context?

    The integer type lacks a sign bit and represents only non-negative values. Its minimum value is 0.
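    A consequence worth seeing in code: unsigned arithmetic is defined to wrap modulo 2^N, so stepping below the minimum (0) lands on the maximum:

    #include <limits.h>
    #include <stdio.h>
    
    int main(void) {
        unsigned int u = 0;
        u = u - 1; // wraps around: well-defined for unsigned types
        printf("0u - 1 == %u (UINT_MAX == %u)\n", u, UINT_MAX);
        return 0;
    }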