[c] Is the size of C "int" 2 bytes or 4 bytes?

Does an Integer variable in C occupy 2 bytes or 4 bytes? What are the factors that it depends on?

Most of the textbooks say integer variables occupy 2 bytes. But when I run a program printing the successive addresses of an array of integers, it shows a difference of 4.

This question is related to: c, int, byte

The answers are:


The only guarantees are that char must be at least 8 bits wide, short and int must be at least 16 bits wide, long must be at least 32 bits wide, and that sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) (the same is true for the unsigned versions of those types).

int may be anywhere from 16 to 64 bits wide depending on the platform.


#include <stdio.h>

int main(void) {
    /* sizeof yields a size_t, so cast to int to match %d */
    printf("size of int: %d\n", (int)sizeof(int));
    return 0;
}

This prints 4 on my machine, but it is machine dependent.


The answer to this question depends on which platform you are using.
But irrespective of the platform, you can reliably assume at least the following ranges (the sketch after the list prints the actual values on your implementation):

 [8-bit]  signed char: -127 to 127
 [8-bit]  unsigned char: 0 to 255
 [16-bit] signed short: -32767 to 32767
 [16-bit] unsigned short: 0 to 65535
 [32-bit] signed long: -2147483647 to 2147483647
 [32-bit] unsigned long: 0 to 4294967295
 [64-bit] signed long long: -9223372036854775807 to 9223372036854775807
 [64-bit] unsigned long long: 0 to 18446744073709551615
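
To see where a given implementation actually lands relative to those minimums, here is a minimal sketch (assuming a C99 compiler, since it prints long long) that reads the real ranges from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Actual ranges on this implementation; they may exceed, but never
       fall below, the guaranteed minimums listed above. */
    printf("signed char : %d .. %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("short       : %d .. %d\n", SHRT_MIN, SHRT_MAX);
    printf("int         : %d .. %d\n", INT_MIN, INT_MAX);
    printf("long        : %ld .. %ld\n", LONG_MIN, LONG_MAX);
    printf("long long   : %lld .. %lld\n", LLONG_MIN, LLONG_MAX);
    return 0;
}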


But this question is one of those whose answer is always true: "Yes. Both."

It depends on your architecture. On a 16-bit machine, int is typically 2 bytes (16 bits); on a 32-bit or 64-bit machine, it is usually 4 bytes (32 bits).

To find out, have your program print something readable and use the sizeof operator. It yields the size in bytes of the declared data type. But be careful when using it with arrays.

If you declare int t[12];, sizeof(t) yields 12 * sizeof(int) bytes (48 with a 4-byte int). To get the number of elements of this array, just use sizeof(t)/sizeof(t[0]). But if you write a function that is supposed to calculate the size of an array passed to it, remember what happens here:

#include <stdio.h>

typedef int array[12];

int function(array t){              /* here t decays to int*, so sizeof(t) == sizeof(int*) */
    int size_of_t = sizeof(t)/sizeof(t[0]);
    return size_of_t;
}

int main(void){
    array t = {1,1,1};              /* remember: t = [1,1,1,0,...,0] */
    int a = function(t);            /* only a pointer to the first element is passed */
    printf("%d\n", a);              /* prints sizeof(int*)/sizeof(int), e.g. 2 on a typical 64-bit system, not 12 */
    return 0;
}

So inside the function, sizeof won't give you the element count. If you define an array and take its size afterwards in the same scope, sizeof works fine. If you pass an array to a function, only a pointer to its first element is passed, so the size information is lost inside the function. In the scope where the array is defined, however, you always know its size, so the practical solution is to compute it there and pass it along: define function2(array t, int size_of_t), calculate the length where the array is declared, and hand both to function2, which can then work with arrays of any size (see the sketch below).
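
A minimal sketch of that approach, passing the element count alongside the (decayed) pointer. The array typedef and the name function2 are carried over from the example above; the summing loop is just illustrative:

#include <stdio.h>

typedef int array[12];

/* The element count is passed explicitly, because inside the function
   `t` is only a pointer to the first element. */
int function2(array t, int size_of_t){
    int sum = 0;
    for (int i = 0; i < size_of_t; i++)
        sum += t[i];
    return sum;
}

int main(void){
    array t = {1,1,1};                              /* t = [1,1,1,0,...,0] */
    int size_of_t = (int)(sizeof(t)/sizeof(t[0]));  /* works here: t is still an array */
    printf("%d elements, sum = %d\n", size_of_t, function2(t, size_of_t));
    return 0;
}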


All I know for certain is that it's equal to sizeof(int). The size of an int is really compiler dependent. Back in the day, when processors were 16-bit, an int was 2 bytes. Nowadays, it's most often 4 bytes on 32-bit as well as 64-bit systems.

Still, using sizeof(int) is the best way to get the size of an integer for the specific system the program is executed on.

EDIT: Fixed wrong statement that int is 8 bytes on most 64-bit systems. For example, it is 4 bytes on 64-bit GCC.


Mostly it depends on the platform you are using. It varies from compiler to compiler. Nowadays, in most compilers int is 4 bytes. If you want to check what your compiler is using, you can use sizeof(int).

#include <stdio.h>

int main(void)
{
    printf("%zu\n", sizeof(int));
    printf("%zu\n", sizeof(short));
    printf("%zu\n", sizeof(long));
    return 0;
}

The only thing the C standard promises is that the size of short must be less than or equal to the size of int, and the size of long must be greater than or equal to the size of int (along with the minimum widths: short and int at least 16 bits, long at least 32 bits). So if the size of int is 4, the size of short may be 2 or 4, but not larger; the same relationship holds between int and long. The standard does not, however, forbid short and long from having the same size as int.
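
A minimal sketch (assuming a C11 compiler, for _Static_assert) that turns these relationships into compile-time checks:

#include <limits.h>

/* These assertions only restate the guarantees described above;
   a conforming compiler will never trigger them. */
_Static_assert(sizeof(short) <= sizeof(int), "short must not be larger than int");
_Static_assert(sizeof(int) <= sizeof(long), "int must not be larger than long");
_Static_assert(SHRT_MAX >= 32767, "short covers at least 16 bits of range");
_Static_assert(LONG_MAX >= 2147483647L, "long covers at least 32 bits of range");

int main(void) { return 0; }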


C99 N1256 standard draft

http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf

The sizes of int and all other integer types are implementation-defined; C99 only specifies:

  • minimum size guarantees
  • relative sizes between the types

5.2.4.2.1 "Sizes of integer types <limits.h>" gives the minimum sizes:

1 [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown [...]

  • UCHAR_MAX 255 // 2⁸ - 1
  • USHRT_MAX 65535 // 2¹⁶ - 1
  • UINT_MAX 65535 // 2¹⁶ - 1
  • ULONG_MAX 4294967295 // 2³² - 1
  • ULLONG_MAX 18446744073709551615 // 2⁶⁴ - 1

6.2.5 "Types" then says:

8 For any two integer types with the same signedness and different integer conversion rank (see 6.3.1.1), the range of values of the type with smaller integer conversion rank is a subrange of the values of the other type.

and 6.3.1.1 "Boolean, characters, and integers" determines the relative conversion ranks:

1 Every integer type has an integer conversion rank defined as follows:

  • The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int, which shall be greater than the rank of short int, which shall be greater than the rank of signed char.
  • The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type, if any.
  • For all integer types T1, T2, and T3, if T1 has greater rank than T2 and T2 has greater rank than T3, then T1 has greater rank than T3.

There's no specific answer. It depends on the platform. It is implementation-defined. It can be 2, 4 or something else.

The idea behind int was that it was supposed to match the natural "word" size on the given platform: 16 bit on 16-bit platforms, 32 bit on 32-bit platforms, 64 bit on 64-bit platforms, you get the idea. However, for backward compatibility purposes some compilers prefer to stick to 32-bit int even on 64-bit platforms.

The time of 2-byte int is long gone though (16-bit platforms?) unless you are using some embedded platform with 16-bit word size. Your textbooks are probably very old.


This is one of the points in C that can be confusing at first, but the C standard only specifies a minimum range for integer types that is guaranteed to be supported. int is guaranteed to be able to hold -32767 to 32767, which requires 16 bits. In that case, int is 2 bytes. However, implementations are free to go beyond that minimum, and many modern compilers make int 32-bit (which also means 4 bytes pretty ubiquitously).

The reason your book says 2 bytes is most probably because it's old. At one time, this was the norm. In general, you should always use the sizeof operator if you need to find out how many bytes it is on the platform you're using.

To address this, C99 added new types where you can explicitly ask for an integer of a certain size, for example int16_t or int32_t. Prior to that, there was no universal way to get an integer of a specific width (although most platforms provided similar types on a per-platform basis).
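
A minimal sketch (assuming C99's <stdint.h> and <inttypes.h>; note the exact-width types are optional, though virtually universal on modern hosted platforms) of asking for specific widths:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int16_t a = 32767;          /* exactly 16 bits, where the implementation provides it */
    int32_t b = 2147483647;     /* exactly 32 bits, where provided */
    int_least16_t c = 1234;     /* at least 16 bits; always available */

    printf("a = %" PRId16 ", b = %" PRId32 ", c = %" PRIdLEAST16 "\n", a, b, c);
    printf("sizeof(int16_t) = %zu, sizeof(int32_t) = %zu\n", sizeof a, sizeof b);
    return 0;
}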


This depends on implementation, but usually on x86 and other popular architectures like ARM ints take 4 bytes. You can always check at compile time using sizeof(int) or whatever other type you want to check.

If you want to make sure you use a type of a specific size, use the types in <stdint.h>


Does an Integer variable in C occupy 2 bytes or 4 bytes?

That depends on the platform you're using, as well as how your compiler is configured. The only authoritative answer is to use the sizeof operator to see how big an integer is in your specific situation.


What are the factors that it depends on?

Range might be best considered, rather than size. Both will vary in practice, though it's much more fool-proof to choose variable types by range than size as we shall see. It's also important to note that the standard encourages us to consider choosing our integer types based on range rather than size, but for now let's ignore the standard practice, and let our curiosity explore sizeof, bytes and CHAR_BIT, and integer representation... let's burrow down the rabbit hole and see it for ourselves...


sizeof, bytes and CHAR_BIT

The following statement, taken from the C standard (linked to above), describes this in words that I don't think can be improved upon.

The sizeof operator yields the size (in bytes) of its operand, which may be an expression or the parenthesized name of a type. The size is determined from the type of the operand.

Assuming a clear understanding of that, we can move on to a discussion about bytes. It's commonly assumed that a byte is eight bits, when in fact CHAR_BIT tells you how many bits are in a byte. That's just another one of those nuances which isn't considered when talking about the common two (or four) byte integers.

Let's wrap things up so far:

  • sizeof => size in bytes, and
  • CHAR_BIT => number of bits in byte

Thus, depending on your system, sizeof (unsigned int) could be any value greater than zero (not just 2 or 4); if CHAR_BIT is 16, then a single (sixteen-bit) byte has enough bits in it to represent the sixteen-bit integer described by the standard (quoted below). That's not necessarily useful information, is it? Let's delve deeper...
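
A minimal sketch (assuming <limits.h> and a C99 printf for %zu) that reports both quantities for whatever implementation compiles it:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* sizeof counts bytes; CHAR_BIT counts bits per byte. Their product is
       the object's total width in bits, including any padding bits. */
    printf("CHAR_BIT             = %d\n", CHAR_BIT);
    printf("sizeof(unsigned int) = %zu bytes\n", sizeof(unsigned int));
    printf("unsigned int width   = %zu bits\n", sizeof(unsigned int) * CHAR_BIT);
    return 0;
}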


Integer representation

The C standard specifies the minimum precision/range for all standard integer types (and CHAR_BIT, too, fwiw) in §5.2.4.2.1 "Sizes of integer types <limits.h>". From this we can derive a minimum for how many bits are required to store the value, but we may as well just choose our variables based on ranges. Nonetheless, a huge part of the detail required for this answer resides in that section. For example, the following shows that a standard unsigned int requires (at least) sixteen bits of storage:

UINT_MAX                                65535 // 2¹⁶ - 1

Thus we can see that unsigned int requires (at least) 16 bits, which is where you get the two bytes (assuming CHAR_BIT is 8)... and later, as implementations raised that limit to 2³² - 1, people started saying 4 bytes instead. This explains the phenomenon you've observed:

Most of the textbooks say integer variables occupy 2 bytes. But when I run a program printing the successive addresses of an array of integers it shows the difference of 4.

You're using an ancient textbook and compiler which is teaching you non-portable C; the author who wrote your textbook might not even be aware of CHAR_BIT. You should upgrade your textbook (and compiler), and strive to remember that I.T. is an ever-evolving field that you need to stay ahead of to compete... Enough about that, though; let's see what other non-portable secrets those underlying integer bytes store...

Value bits are what the common misconceptions appear to be counting. The above example uses an unsigned integer type which typically contains only value bits, so it's easy to miss the devil in the detail.

Sign bits... In the above example I quoted UINT_MAX as being the upper limit for unsigned int because it's a trivial example to extract the value 16 from the comment. For signed types, in order to distinguish between positive and negative values (that's the sign), we need to also include the sign bit.

INT_MIN                                -32767 // -(2¹⁵ - 1)
INT_MAX                                +32767 // 2¹⁵ - 1

Padding bits... While it's not common to encounter computers that have padding bits in integers, the C standard allows that to happen; some machines implement larger integer types by combining two smaller (signed) integer values together... and when you combine signed integers like that, you get a wasted sign bit. That wasted bit is considered padding in C. Other examples of padding bits might include parity bits and trap bits.


As you can see, the standard seems to encourage considering ranges like INT_MIN..INT_MAX and the other minimum/maximum values it defines when choosing integer types, and to discourage relying upon sizes. There are other subtle factors likely to be forgotten, such as CHAR_BIT and padding bits, which can affect the value of sizeof (int); the common misconceptions about two-byte and four-byte integers neglect exactly these details.
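
As a minimal sketch of what choosing by range (rather than by size) looks like in practice, suppose, purely for illustration, that a counter must be able to reach 100000; <limits.h> lets the choice be made portably at compile time:

#include <limits.h>
#include <stdio.h>

/* int is only guaranteed to reach 32767, so fall back to long (guaranteed
   to reach 2147483647) whenever this implementation's int is too small. */
#if INT_MAX >= 100000
typedef int counter_t;
#else
typedef long counter_t;
#endif

int main(void) {
    counter_t n = 100000;
    printf("counter_t comfortably holds %ld\n", (long)n);
    return 0;
}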


Is the size of C “int” 2 bytes or 4 bytes?

The answer is "yes" / "no" / "maybe" / "maybe not".

The C programming language specifies the following: the smallest addressable unit, known as char and also called a "byte", is exactly CHAR_BIT bits wide, where CHAR_BIT is at least 8.

So, one byte in C is not necessarily an octet, i.e. 8 bits. In the past, one of the first platforms to run C code (and Unix) had a 4-byte int, but that int was 36 bits wide in total, because CHAR_BIT was 9!

int is supposed to be the natural integer size for the platform, with a range of at least -32767 ... 32767. You can get the size of int in platform bytes with sizeof(int); multiplying this value by CHAR_BIT tells you how wide it is in bits.


While 36-bit machines are mostly dead, there are still platforms with non-8-bit bytes. Just yesterday there was a question about a Texas Instruments MCU with 16-bit bytes, which has a C99- and C11-compliant compiler.

On the TMS320C28x it seems that char, short and int are all 16 bits wide, and hence one byte each. long int is 2 bytes and long long int is 4 bytes. The beauty of C is that one can still write an efficient program for a platform like this, and even do it in a portable manner, as the sketch below tries to show.
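
A minimal sketch of that kind of portable code, leaning on the "least-width" types from <stdint.h> (which, unlike the exact-width ones, must exist on every implementation) and on CHAR_BIT instead of an assumed 8-bit byte:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* uint_least16_t is the smallest unsigned type with at least 16 value
       bits; on a TMS320C28x-style target it may simply be one 16-bit byte. */
    uint_least16_t sample = 0xBEEFu;

    printf("bits per byte           : %d\n", CHAR_BIT);
    printf("bytes in uint_least16_t : %zu\n", sizeof sample);
    printf("bits in uint_least16_t  : %zu\n", sizeof sample * CHAR_BIT);
    return 0;
}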


Is the size of C “int” 2 bytes or 4 bytes?

Does an Integer variable in C occupy 2 bytes or 4 bytes?

C allows a "byte" to be something other than 8 bits.

CHAR_BIT number of bits for smallest object that is not a bit-field (byte) C11dr §5.2.4.2.1 1

A value other than 8 is increasingly uncommon. For maximum portability, use CHAR_BIT rather than 8. The size of an int in bits in C is sizeof(int) * CHAR_BIT.

#include <limits.h>
#include <stdio.h>
printf("(int) Bit size %zu\n", sizeof(int) * CHAR_BIT);

What are the factors that it depends on?

The int bit size is commonly 32 or 16. C specifies the following minimum ranges:

minimum value for an object of type int INT_MIN -32767
maximum value for an object of type int INT_MAX +32767
C11dr §5.2.4.2.1 1

The minimum range for int forces the bit size to be at least 16 - even if the processor was "8-bit". A size like 64 bits is seen in specialized processors. Other values like 18, 24, 36, etc. have occurred on historic platforms or are at least theoretically possible. Modern coding rarely worries about non-power-of-2 int bit sizes.

The computer's processor and architecture drive the int bit size selection.

Yet even with 64-bit processors, the compiler's int size may be 32-bit for compatibility reasons, as large code bases depend on int being 32-bit (or 32/16).
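
A minimal sketch (assuming C99 for %zu) that makes the data model visible by comparing int with long and with pointer size, which is where the "32-bit int on 64-bit hardware" choice shows up:

#include <stdio.h>

int main(void) {
    /* Typical results: ILP32 -> 4/4/4, LP64 (64-bit Linux/macOS) -> 4/8/8,
       LLP64 (64-bit Windows) -> 4/4/8. int stays 4 bytes in all three. */
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}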

