[c] What is the difference between unsigned int and signed int in C?

ISO C states what the differences are.

The int data type is signed and is required to have a range of at least -32767 through 32767 inclusive. The actual values for a given implementation are available in limits.h as INT_MIN and INT_MAX respectively.

An unsigned int has a minimum range of 0 through 65535 inclusive, with the actual maximum value given by UINT_MAX from that same header file.
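
If you're curious what your own implementation uses, a quick way to check is to print the macros directly (nothing here is assumed beyond the standard limits.h header):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* These macros are required by the standard; the values
           printed are whatever your implementation provides. */
        printf("INT_MIN  = %d\n", INT_MIN);
        printf("INT_MAX  = %d\n", INT_MAX);
        printf("UINT_MAX = %u\n", UINT_MAX);
        return 0;
    }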

Beyond that, the standard does not mandate two's complement notation for encoding the values; that's just one of the possibilities. The three allowed representations encode 5 and -5 as follows (using 16-bit data types):

        two's complement  |  ones' complement   |   sign/magnitude
    +---------------------+---------------------+---------------------+
 5  | 0000 0000 0000 0101 | 0000 0000 0000 0101 | 0000 0000 0000 0101 |
-5  | 1111 1111 1111 1011 | 1111 1111 1111 1010 | 1000 0000 0000 0101 |
    +---------------------+---------------------+---------------------+
  • In two's complement, you get the negative of a number by inverting all bits and then adding 1 (see the sketch after this list).
  • In ones' complement, you get the negative of a number by inverting all bits.
  • In sign/magnitude, the top bit is the sign, so you just invert that to get the negative.
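
For example, here is the two's complement rule expressed in code. This is only a sketch: it assumes your machine actually uses two's complement (almost all modern ones do):

    #include <stdio.h>

    int main(void)
    {
        int x = 5;

        /* On a two's complement machine, inverting all bits
           and adding one gives the negative. */
        int neg = ~x + 1;

        printf("%d\n", neg);    /* prints -5 on two's complement */
        return 0;
    }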

Note that positive values have the same encoding in all representations; only the negative values differ.

Note further that, for unsigned values, you do not need to use one of the bits for a sign. That means you get more range on the positive side (at the cost of no negative encodings, of course).
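
To see this in practice, you can reinterpret the bit pattern of a negative signed value as an unsigned one. Again a sketch rather than anything definitive, since it assumes a two's complement machine, and the exact number printed depends on the width of int:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int i = -5;
        unsigned int u;

        /* Reinterpret the raw bits of -5 as an unsigned int. */
        memcpy(&u, &i, sizeof u);

        /* On a 32-bit two's complement int this prints
           4294967291 (0xFFFFFFFB); on a 16-bit int it would
           print 65531 (0xFFFB), matching the table above. */
        printf("%u\n", u);
        return 0;
    }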

And no, 5 and -5 cannot have the same encoding regardless of which representation you use. Otherwise, there'd be no way to tell the difference.


As an aside, there have been moves underway in both the C and C++ standards committees to nominate two's complement as the only encoding for negative integers; C++20 and C23 have since adopted exactly that.