I've seen this unsigned "typeless" type used a couple of times, but never seen an explanation for it. I suppose there's a corresponding signed type. Here's an example:
static unsigned long next = 1;

/* RAND_MAX assumed to be 32767 */
int myrand(void) {
    next = next * 1103515245 + 12345;
    return (unsigned)(next / 65536) % 32768;
}

void mysrand(unsigned seed) {
    next = seed;
}
What I have gathered so far:
- on my system, sizeof(unsigned) = 4 (which hints at a 32-bit unsigned int)
- it might be used as a shorthand for casting another type to the unsigned version:

signed long int i = -42;
printf("%u\n", (unsigned)i);
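For completeness, here is a runnable version of that cast experiment (a sketch assuming a 32-bit unsigned int, so the conversion wraps modulo 2^32):

#include <stdio.h>

int main(void) {
    signed long int i = -42;
    /* Converting a negative value to an unsigned type wraps
       modulo UINT_MAX + 1, so with a 32-bit unsigned int this
       prints 4294967254, i.e. 2^32 - 42. */
    printf("%u\n", (unsigned)i);
    return 0;
}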
Is this ANSI C, or just a compiler extension?
unsigned means unsigned int. signed means signed int. Using just unsigned is a lazy way of declaring an unsigned int in C. Yes, this is ANSI.
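A minimal sketch of those equivalences (the pointer assignments below compile without casts or warnings precisely because each pair of spellings names the same type):

unsigned a = 1;
unsigned int *pa = &a;   /* unsigned is unsigned int */

signed b = -1;
signed int *pb = &b;     /* signed is signed int */
int *pb2 = &b;           /* and signed int is plain int */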
In C, unsigned is a shortcut for unsigned int. You have the same for long, which is a shortcut for long int. And it is also possible to declare an unsigned long (it will be an unsigned long int). This is in the ANSI standard.
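A small demonstration of the long shortcuts (a sketch using C11 _Generic, which selects a branch by the type of its first operand; each selection matches only because the short spelling and the full spelling are the same type):

#include <stdio.h>

int main(void) {
    long a = 1;
    unsigned long b = 2;
    puts(_Generic(a, long int: "a is long int"));
    puts(_Generic(b, unsigned long int: "b is unsigned long int"));
    return 0;
}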
According to C17 6.7.2 §2:
Each list of type specifiers shall be one of the following multisets (delimited by commas, when there is more than one multiset per item); the type specifiers may occur in any order, possibly intermixed with the other declaration specifiers
— void
— char
— signed char
— unsigned char
— short, signed short, short int, or signed short int
— unsigned short, or unsigned short int
— int, signed, or signed int
— unsigned, or unsigned int
— long, signed long, long int, or signed long int
— unsigned long, or unsigned long int
— long long, signed long long, long long int, or signed long long int
— unsigned long long, or unsigned long long int
— float
— double
— long double
— _Bool
— float _Complex
— double _Complex
— long double _Complex
— atomic type specifier
— struct or union specifier
— enum specifier
— typedef name
So in the case of unsigned int we can either write unsigned or unsigned int, or, if we are feeling crazy, int unsigned. The latter is valid because the standard is stupid enough to allow "...may occur in any order, possibly intermixed". This is a known flaw of the language.
Proper C code uses unsigned int.
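A sketch of what that freedom permits; all of these declarations compile as written:

unsigned int a = 1;       /* the conventional spelling */
unsigned b = 2;           /* same type */
int unsigned c = 3;       /* same type again, specifiers reordered */

/* The reordering applies to every multiset, so even this is
   a valid spelling of unsigned long long int: */
long unsigned long int d = 4;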
Bringing my answer from another question.
From the C specification, section 6.7.2:
— unsigned, or unsigned int
Meaning that unsigned, when the rest of the type is left unspecified, defaults to unsigned int. So writing unsigned a is the same as unsigned int a.
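Another way to see that equivalence (a sketch; the prototype and the definition use the two spellings interchangeably, yet declare the same function):

#include <stdio.h>

void show(unsigned a);        /* prototype uses the shorthand */

void show(unsigned int a) {   /* definition spells it out */
    printf("%u\n", a);
}

int main(void) {
    show(42u);
    return 0;
}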
Historically in C, if you omitted a data type, int was assumed. So unsigned is a shorthand for unsigned int. This has been considered bad practice for a long time, but there is still a fair amount of code out there that uses it.
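A sketch of that old default (the bare implicit-int form was removed in C99; the unsigned form remains valid because unsigned is itself a type specifier):

static x;        /* pre-C99: meant static int x; rejected since C99 */
unsigned y;      /* still valid today: means unsigned int y; */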
In C and C++:
unsigned = unsigned int (an integer type)
signed = signed int (an integer type)
An unsigned integer containing n bits can have a value between 0 and 2^n - 1, which is 2^n different values. An unsigned integer is either positive or zero.
Signed integers are typically stored using two's complement, though the C standard does not mandate that representation until C23.
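A short illustration of that range (a sketch; the concrete values assume a 32-bit unsigned int):

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* With n value bits, unsigned int covers 0 .. 2^n - 1. */
    printf("UINT_MAX = %u\n", UINT_MAX);           /* 4294967295 when n == 32 */
    /* Unsigned arithmetic is modular, so the maximum wraps to 0: */
    printf("UINT_MAX + 1 = %u\n", UINT_MAX + 1u);  /* prints 0 */
    return 0;
}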
Source: Stackoverflow.com