In C#, the minimum and maximum values that the int type can hold are -2,147,483,648 and 2,147,483,647.
Why, if we add 2 to the maximum value, does the compiler not show any error?
It just prints -2,147,483,647, which is negative and only one unit greater than the minimum value the int type can store.
It behaves like a circle.
You pick a place to start (the black line here), then you go all the way around until you reach the maximum.
Now, instead of causing a problem, you simply start the same circle again (the blue line marks those two numbers), and that is why you move to negative numbers after reaching the maximum positive value.
This can cause many problems, so it is best to add a check (an if-else, or something similar) to protect your program.
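For instance, here is a minimal sketch of such a check (the variable names are just for illustration); C# also has the checked keyword, which turns the silent wrap-around into an OverflowException:

    using System;

    class SafeAddDemo
    {
        static void Main()
        {
            int value = int.MaxValue;
            int toAdd = 2;

            // Option 1: guard with an if before adding (works for a positive toAdd).
            if (value <= int.MaxValue - toAdd)
            {
                value += toAdd;
            }
            else
            {
                Console.WriteLine("Adding would overflow, handle it here.");
            }

            // Option 2: let the runtime detect it with a checked block.
            try
            {
                int result = checked(value + toAdd);
                Console.WriteLine(result);
            }
            catch (OverflowException)
            {
                Console.WriteLine("Overflow detected at runtime.");
            }
        }
    }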
Here is your example.
When we keep adding, this happens: 2,147,483,647 -> -2,147,483,648 -> -2,147,483,647 -> ...
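A short sketch of that sequence in code:

    using System;

    class WrapAroundDemo
    {
        static void Main()
        {
            int value = int.MaxValue;          // 2,147,483,647

            Console.WriteLine(value);          // 2147483647
            Console.WriteLine(value + 1);      // -2147483648 (wraps to int.MinValue)
            Console.WriteLine(value + 2);      // -2147483647
        }
    }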
If you had used an unsigned int and tried to decrease it (4, 3, 2, 1, 0, ...), then instead of moving into negative numbers you would wrap around to the maximum value, which is 4,294,967,295.
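A small sketch of that, using a plain uint variable:

    using System;

    class UnsignedWrapDemo
    {
        static void Main()
        {
            uint value = 2;

            value--;                   // 1
            value--;                   // 0
            value--;                   // wraps around to 4,294,967,295
            Console.WriteLine(value);  // prints 4294967295
            Console.WriteLine(value == uint.MaxValue); // True
        }
    }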
Edit:
This is runtime behaviour.
Credits to Klaus (see the comments).
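In other words, the compiler only catches overflow in constant expressions; with variables the arithmetic is unchecked by default and only wraps at run time. A small sketch, assuming the overflowing code looked roughly like this:

    using System;

    class RuntimeOverflowDemo
    {
        static void Main()
        {
            // Constant expression: the compiler evaluates this in a checked
            // context and reports a compile-time overflow error, so this
            // line would not even compile.
            // int broken = int.MaxValue + 2;

            // Variable expression: no compile-time error, the value simply
            // wraps around at run time.
            int value = int.MaxValue;
            Console.WriteLine(value + 2);   // -2147483647
        }
    }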