What I know is that an unsigned int cannot take negative values. If I take the maximum value of an unsigned int and increment it, I should get zero (i.e. the minimum value), and if I take the minimum value and decrement it, I should get the maximum value.

Then why is this happening?
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void)
{
    unsigned int ui;

    ui = UINT_MAX;
    ui++;
    printf("ui = %d", ui);

    ui = 0;
    ui--;
    printf("\n");
    printf("ui = %d", ui);

    return EXIT_SUCCESS;
}
Output:
ui = 0
ui = -1
You pass the value to a variadic function (printf), so you should expect nothing about signedness there: the %d conversion specifier in the format string controls how the value is displayed. Because you selected %d, printf interprets the argument as a signed int rather than an unsigned int (strictly speaking, the mismatch between the specifier and the argument type is not guaranteed to work, but on typical platforms the bit pattern is simply reinterpreted). That's why you see a signed value: the bit pattern FFFFFFFF¹ read as a signed int is -1.
¹ Assuming a 32-bit width for int.
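
To make the point concrete, here is a minimal sketch of your program with the conversion specifier corrected to %u, which matches an unsigned int argument (the expected output shown in the comments assumes a 32-bit unsigned int, where UINT_MAX is 4294967295):

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void)
{
    unsigned int ui;

    ui = UINT_MAX;
    ui++;                      /* wraps around to 0; well-defined for unsigned types */
    printf("ui = %u\n", ui);   /* prints: ui = 0 */

    ui = 0;
    ui--;                      /* wraps around to UINT_MAX */
    printf("ui = %u\n", ui);   /* prints: ui = 4294967295 (assuming 32-bit unsigned int) */

    return EXIT_SUCCESS;
}

This shows that the wraparound itself behaves exactly as you expected; only the display was misleading.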