[c] What is the difference between unsigned int and signed int in C?

Consider these definitions:

int x = 5;
int y = -5;
unsigned int z = 5;

How are they stored in memory? Can anybody explain the bit representation of these in memory?

Can int x = 5 and int y = -5 have the same bit representation in memory?


Here is a very nice link that explains how signed and unsigned ints are stored in C:

http://answers.yahoo.com/question/index?qid=20090516032239AAzcX1O

Taken from the above article:

"process called two's complement is used to transform positive numbers into negative numbers. The side effect of this is that the most significant bit is used to tell the computer if the number is positive or negative. If the most significant bit is a 1, then the number is negative. If it's 0, the number is positive."


The C standard specifies that unsigned numbers are stored in binary (with optional padding bits). Signed numbers can be stored in one of three formats: sign and magnitude, two's complement, or ones' complement. Interestingly, that rules out certain other representations like excess-n or base -2.

However, most machines and compilers store signed numbers in two's complement.
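
A small illustration of the two's complement negation rule ("invert all bits, then add one"); this sketch is only valid on two's complement machines, though that is essentially all current hardware:

    #include <stdio.h>

    int main(void)
    {
        int x = 5;

        /* Two's complement negation: invert all bits, add one. */
        int negated = ~x + 1;

        printf("%d\n", negated);         /* -5 */
        printf("%d\n", negated == -x);   /*  1 */
        return 0;
    }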

int is normally 16 or 32 bits. The standard says that int should be whatever size is most efficient for the underlying processor; any size at least as wide as short and no wider than long is allowed.

On some machines and OSs, however, history has caused int not to be the best size for the current generation of hardware.
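
You can check the sizes your own implementation picked with a few lines of C; the exact figures printed are implementation-defined:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* Widths are implementation-defined; the standard only
           guarantees short <= int <= long. */
        printf("short: %zu bits\n", sizeof(short) * CHAR_BIT);
        printf("int:   %zu bits\n", sizeof(int)   * CHAR_BIT);
        printf("long:  %zu bits\n", sizeof(long)  * CHAR_BIT);
        return 0;
    }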


ISO C states what the differences are.

The int data type is signed and has a range of at least -32767 through 32767 inclusive. The actual values are given in limits.h as INT_MIN and INT_MAX respectively.

An unsigned int has a range of at least 0 through 65535 inclusive, with the actual maximum value given by UINT_MAX from that same header file.
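
A quick way to see the actual limits on your implementation (the printed values will vary; the comments assume a typical 32-bit int):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("INT_MIN  = %d\n", INT_MIN);    /* -2147483648 on a 32-bit int */
        printf("INT_MAX  = %d\n", INT_MAX);    /*  2147483647 on a 32-bit int */
        printf("UINT_MAX = %u\n", UINT_MAX);   /*  4294967295 on a 32-bit int */
        return 0;
    }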

Beyond that, the standard does not mandate two's complement notation for encoding the values; that's just one of the possibilities. The three allowed representations encode 5 and -5 as follows (using 16-bit data types):

        two's complement  |  ones' complement   |   sign/magnitude
    +---------------------+---------------------+---------------------+
 5  | 0000 0000 0000 0101 | 0000 0000 0000 0101 | 0000 0000 0000 0101 |
-5  | 1111 1111 1111 1011 | 1111 1111 1111 1010 | 1000 0000 0000 0101 |
    +---------------------+---------------------+---------------------+
  • In two's complement, you get a negative of a number by inverting all bits then adding 1.
  • In ones' complement, you get a negative of a number by inverting all bits.
  • In sign/magnitude, the top bit is the sign, so you just invert that to get the negative.

Note that positive values have the same encoding for all representations, only the negative values are different.

Note further that, for unsigned values, you do not need to use one of the bits for a sign. That means you get more range on the positive side (at the cost of no negative encodings, of course).

And no, 5 and -5 cannot have the same encoding regardless of which representation you use. Otherwise, there'd be no way to tell the difference.
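
If you want to verify the two's complement column of that table on your own machine, here is a small sketch using the exact-width int16_t type (which the standard requires to be two's complement with no padding bits):

    #include <stdio.h>
    #include <stdint.h>

    /* Print the 16 bits of a value, most significant bit first. */
    static void print_bits16(uint16_t v)
    {
        for (int i = 15; i >= 0; i--)
            putchar(((v >> i) & 1) ? '1' : '0');
        putchar('\n');
    }

    int main(void)
    {
        int16_t pos = 5;
        int16_t neg = -5;

        print_bits16((uint16_t)pos);   /* 0000000000000101 */
        print_bits16((uint16_t)neg);   /* 1111111111111011 */
        return 0;
    }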


As an aside, there were moves underway in both the C and C++ standards to nominate two's complement as the only encoding for negative integers, and both C++20 and C23 have since adopted it.


Assuming int is a 16-bit integer (the width depends on the C implementation; most are 32-bit nowadays), the bit representations differ as follows:

 5 = 0000000000000101
-5 = 1111111111111011

If the binary pattern 1111111111111011 were read as an unsigned int, it would be decimal 65531.
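
That reinterpretation is easy to demonstrate; a sketch with the fixed 16-bit types, so the result is 65531 everywhere (signed-to-unsigned conversion is well-defined in C as wrapping modulo 2^16):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int16_t  s = -5;

        /* The conversion wraps modulo 2^16: 65536 - 5 = 65531. */
        uint16_t u = (uint16_t)s;

        printf("%d\n", (int)s);        /* -5    */
        printf("%u\n", (unsigned)u);   /* 65531 */
        return 0;
    }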


In the end it is all just memory, so all numerical values are stored in binary.

A 32-bit unsigned integer can hold values ranging from all binary 0s (which is 0) to all binary 1s (which is 4294967295).

With a 32-bit signed integer, one of its bits (the most significant) acts as a flag that marks the value as positive or negative.
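
For example, with the fixed 32-bit types (a sketch; on a two's complement machine the same all-ones bit pattern reads as the maximum value when unsigned and as -1 when signed):

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint32_t all_zeros = 0x00000000u;   /* lowest unsigned value  */
        uint32_t all_ones  = 0xFFFFFFFFu;   /* highest unsigned value */
        int32_t  as_signed = -1;            /* same all-ones pattern  */

        printf("%" PRIu32 "\n", all_zeros);  /* 0          */
        printf("%" PRIu32 "\n", all_ones);   /* 4294967295 */
        printf("%" PRId32 "\n", as_signed);  /* -1         */
        return 0;
    }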