[c] The real difference between "int" and "unsigned int"

int:

The 32-bit int data type can hold integer values in the range of -2,147,483,648 to 2,147,483,647. You may also refer to this data type as signed int or signed.

unsigned int:

The 32-bit unsigned int data type can hold integer values in the range of 0 to 4,294,967,295. You may also refer to this data type simply as unsigned.

Ok, but in practice:

int x = 0xFFFFFFFF;
unsigned int y = 0xFFFFFFFF;
printf("%d, %d, %u, %u", x, y, x, y);
// -1, -1, 4294967295, 4294967295

no difference, O.o. I'm a bit confused.

This question is related to c

The answer is


Hehe. The printf call is hiding the difference here: the conversion specifier tells printf what type to expect, so it reinterprets the same bits whichever way you ask.

Try this on for size instead:

unsigned int x = 0xFFFFFFFF;
int y = 0xFFFFFFFF;

if (x < 0)
    printf("one\n");
else
    printf("two\n");
if (y < 0)
    printf("three\n");
else
    printf("four\n");
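
Spoiler: on a typical 32-bit, two's-complement platform this prints "two" and then "three" - x < 0 can never be true because x is unsigned, while y ends up holding -1.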

The binary representation is the key. An example, unsigned int in hex:

0xFFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111

Which represents 4,294,967,295 as a positive base-ten number. But we also need a way to represent negative numbers, so the brains decided on two's complement. In short, they took the leftmost bit and made it the sign bit: when it is 1, the number is negative; when it is 0, the number is non-negative. Now let's look at what happens:

0000 0000 0000 0000 0000 0000 0000 0011 = 3

Counting upward, we eventually reach

0111 1111 1111 1111 1111 1111 1111 1111 = 2,147,483,647

the highest positive number a signed integer can hold. Let's add 1 more: binary addition carries the overflow to the left, and since every lower bit is already set, the carry ripples all the way into the sign bit, wrapping around to 1000 0000 0000 0000 0000 0000 0000 0000 = -2,147,483,648, the most negative value. Counting up from there, the very last pattern before we wrap back to zero is

1111 1111 1111 1111 1111 1111 1111 1111 = -1

So in short we could say the difference is that one allows for negative numbers and the other does not, because of the sign bit (the leftmost, or most significant, bit).
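
Here's a minimal sketch of those boundary values (assuming 32-bit int and two's complement, which virtually every current platform uses):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Assumes 32-bit int, two's-complement representation. */
    printf("%d\n", INT_MAX);    /* 2147483647  == 0x7FFFFFFF */
    printf("%d\n", INT_MIN);    /* -2147483648 == 0x80000000 */
    printf("%d\n", -1);         /* -1          == 0xFFFFFFFF */
    printf("%u\n", UINT_MAX);   /* 4294967295  == 0xFFFFFFFF */
    return 0;
}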


The problem is that you invoked Undefined Behaviour.


When you invoke UB anything can happen.

The assignments are ok; there is an implicit conversion in the first line (0xFFFFFFFF doesn't fit in an int, so the result is implementation-defined):

int x = 0xFFFFFFFF;
unsigned int y = 0xFFFFFFFF;

However, the call to printf is not ok:

printf("%d, %d, %u, %u", x, y, x, y);

It is UB to mismatch the % specifier and the type of the argument.
In your case the format string asks for 2 ints followed by 2 unsigned ints, in that order, but you provide 1 int, 1 unsigned int, 1 int, and 1 unsigned int.
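
A sketch of a corrected call, with each specifier matching its argument's type (the printed values assume a typical 32-bit, two's-complement platform):

printf("%d, %u\n", x, y);   /* -1, 4294967295 */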


Don't do UB!


There is no difference between the two in how they are stored in memory and registers. There is no signed or unsigned version of int registers, and no signedness information is stored with the int. The difference only becomes relevant when you perform maths operations: there are signed and unsigned versions of the maths ops built into the CPU (comparison, division, and right shift, for example), and the signedness tells the compiler which version to use.
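
For example, here's a minimal sketch (assuming 32-bit int) where the same bit pattern divided by two gives different answers, because the declared type makes the compiler emit a signed or an unsigned divide:

#include <stdio.h>

int main(void)
{
    int          si = -8;           /* bit pattern 0xFFFFFFF8 */
    unsigned int ui = 0xFFFFFFF8u;  /* the very same bit pattern */

    printf("%d\n", si / 2);         /* -4          (signed divide)   */
    printf("%u\n", ui / 2);         /* 2147483644  (unsigned divide) */
    return 0;
}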


He is asking about the real difference. When you are talking about undefined behavior you are on the level of the guarantees provided by the language specification - it's far from reality. To understand the real difference please check this snippet (strictly speaking, right-shifting a negative value is implementation-defined rather than undefined, but it behaves perfectly consistently on your favorite compiler):

#include <stdio.h>

int main(void)
{
    int i1 = ~0;            /* all bits set: -1 as a signed int       */
    int i2 = i1 >> 1;       /* typically an arithmetic shift:         */
                            /* the sign bit is copied in on the left  */
    unsigned u1 = ~0;       /* all bits set: UINT_MAX                 */
    unsigned u2 = u1 >> 1;  /* logical shift: a zero is shifted in    */
    printf("int         : %X -> %X\n", i1, i2);
    printf("unsigned int: %X -> %X\n", u1, u2);
}
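
On a typical implementation where the signed right shift is arithmetic, this prints FFFFFFFF -> FFFFFFFF for int and FFFFFFFF -> 7FFFFFFF for unsigned: the same starting bit pattern, but the declared type made the compiler pick a different shift instruction.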

Yes, because in your case they use the same representation.

The bit pattern 0xFFFFFFFF happens to look like -1 when interpreted as a 32-bit signed integer and as 4294967295 when interpreted as a 32-bit unsigned integer.

It's the same as char c = 65. If you interpret it as a signed integer, it's 65. If you interpret it as a character, it's 'A'.
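
A quick sketch of that analogy:

#include <stdio.h>

int main(void)
{
    char c = 65;            /* ASCII code for 'A' */
    printf("%d\n", c);      /* 65 */
    printf("%c\n", c);      /* A  */
    return 0;
}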


As R and pmg point out, technically it's undefined behavior to pass arguments that don't match the format specifiers. So the program could do anything (from printing random values to crashing, to printing the "right" thing, etc).

The standard points this out in 7.19.6.1p9:

If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.


The type just tells you what the bit pattern is supposed to represent. The bits are only what you make of them; the same sequence can be interpreted in different ways.


The printf function interprets the value that you pass it according to the format specifier in a matching position. If you tell printf that you pass an int, but pass unsigned instead, printf would re-interpret one as the other, and print the results that you see.


The internal representation of int and unsigned int is the same.

Therefore, when you pass the same format string to printf, the output will be the same.

However, there are differences when you compare them. Consider:

int x = 0x7FFFFFFF;
int y = 0xFFFFFFFF;
x < y                                 // false
x > y                                 // true
(unsigned int) x < (unsigned int) y   // true
(unsigned int) x > (unsigned int) y   // false

This can also be a caveat: when comparing a signed and an unsigned integer, the signed operand is implicitly converted to unsigned so that the types match.
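
Here's a minimal sketch of that pitfall (assuming 32-bit int):

#include <stdio.h>

int main(void)
{
    int s = -1;
    unsigned int u = 1;

    /* s is converted to unsigned int: -1 becomes 4294967295,
       so the comparison below is false. */
    if (s < u)
        printf("expected\n");
    else
        printf("surprise: -1 is not less than 1u here\n");
    return 0;
}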