[c++] What is the difference between an int and a long in C++?

Correct me if I am wrong,

int is 4 bytes, with a range of values from -2,147,483,648 to 2,147,483,647 (-2^31 to 2^31 - 1)
long is 4 bytes, with a range of values from -2,147,483,648 to 2,147,483,647 (-2^31 to 2^31 - 1)

What is the difference in C++? Can they be used interchangeably?

Tags: c++, variables


The C++ specification itself (old version but good enough for this) leaves this open.

There are four signed integer types: 'signed char', 'short int', 'int', and 'long int'. In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment* ;

[Footnote: that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>. --- end footnote]
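
A quick way to see what a given toolchain actually chose is to print the sizes of the four types the quote lists; the numbers are whatever your platform picked, not fixed values:

#include <iostream>

int main() {
    // Only the ordering signed char <= short <= int <= long is guaranteed;
    // the concrete sizes are implementation-defined.
    std::cout << "signed char: " << sizeof(signed char) << '\n'
              << "short int:   " << sizeof(short int)   << '\n'
              << "int:         " << sizeof(int)         << '\n'
              << "long int:    " << sizeof(long int)    << '\n';
}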


It depends on your compiler. You are guaranteed that a long will be at least as large as an int, but you are not guaranteed that it will be any longer.


As Kevin Haines points out, ints have the natural size suggested by the execution environment, which has to fit within INT_MIN and INT_MAX.

The C89 standard states that UINT_MAX must be at least 2^16-1, USHRT_MAX at least 2^16-1, and ULONG_MAX at least 2^32-1. That makes a bit count of at least 16 for short and int, and 32 for long. For char it states explicitly that it must have at least 8 bits (CHAR_BIT). C++ inherits those rules through the limits.h file, so in C++ we have the same fundamental requirements for those values. You should not, however, conclude from that that int is at least 2 bytes. Theoretically, char, int and long could all be 1 byte, in which case CHAR_BIT would have to be at least 32. Just remember that a "byte" is always the size of a char, so if char is bigger, a byte is no longer only 8 bits.
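
A small sketch to inspect these limits on your own implementation; the values come straight from <climits>, so they reflect whatever the compiler chose:

#include <climits>
#include <iostream>

int main() {
    // CHAR_BIT is the number of bits in a byte; at least 8 is guaranteed.
    std::cout << "CHAR_BIT: " << CHAR_BIT << '\n'
              << "INT_MIN:  " << INT_MIN  << '\n'
              << "INT_MAX:  " << INT_MAX  << '\n'
              << "LONG_MAX: " << LONG_MAX << '\n';
}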


The only guarantees you have are:

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

// FROM @KTC. The C++ standard also has:
sizeof(signed char)   == 1
sizeof(unsigned char) == 1

// NOTE: These sizes are not specified explicitly in the standard.
//       They are implied by the minimum/maximum values that MUST be supported
//       for the type. These limits are defined in limits.h
sizeof(short)     * CHAR_BIT >= 16
sizeof(int)       * CHAR_BIT >= 16
sizeof(long)      * CHAR_BIT >= 32
sizeof(long long) * CHAR_BIT >= 64
CHAR_BIT         >= 8   // Number of bits in a byte
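
If you would rather have the compiler enforce these guarantees than trust them, a minimal sketch (assuming a C++11 compiler for static_assert) could look like this:

#include <climits>

// These mirror the guarantees above; compilation fails on any
// implementation that violates one of them.
static_assert(sizeof(char) == 1, "char is one byte");
static_assert(sizeof(short) <= sizeof(int),  "short <= int");
static_assert(sizeof(int)   <= sizeof(long), "int <= long");
static_assert(sizeof(short)     * CHAR_BIT >= 16, "short has at least 16 bits");
static_assert(sizeof(long)      * CHAR_BIT >= 32, "long has at least 32 bits");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long has at least 64 bits");

int main() {}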

Also see: Is long guaranteed to be at least 32 bits?


The C++ Standard says it like this:

3.9.1, §2:

There are five signed integer types: "signed char", "short int", "int", "long int", and "long long int". In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment (44); the other signed integer types are provided to meet special needs.

(44) that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.

The conclusion: it depends on which architecture you're working on. Any other assumption is false.
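
In C++ the same information is also available programmatically through std::numeric_limits, which avoids hard-coding any assumption about the architecture; a minimal sketch:

#include <iostream>
#include <limits>

int main() {
    // Reports whatever range the implementation chose for each type,
    // so this stays correct on any architecture.
    std::cout << "int:  " << std::numeric_limits<int>::min()  << " to "
                          << std::numeric_limits<int>::max()  << '\n'
              << "long: " << std::numeric_limits<long>::min() << " to "
                          << std::numeric_limits<long>::max() << '\n';
}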


When compiling for x64, the difference between int and long is somewhere between 0 and 4 bytes, depending on what compiler you use.

GCC uses the LP64 model, which means that ints are 32 bits but longs are 64 bits under 64-bit mode.

MSVC, for example, uses the LLP64 model, which means both ints and longs are 32 bits even in 64-bit mode.
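
A direct way to check which model your compiler uses is to compare the two sizes; under LP64 the difference below is 4, under LLP64 it is 0:

#include <iostream>

int main() {
    // LP64  (e.g. GCC on 64-bit Linux):    int = 4, long = 8
    // LLP64 (e.g. MSVC on 64-bit Windows): int = 4, long = 4
    std::cout << "sizeof(int)  = " << sizeof(int)  << '\n'
              << "sizeof(long) = " << sizeof(long) << '\n'
              << "difference   = " << sizeof(long) - sizeof(int) << '\n';
}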


Relying on the compiler vendor's implementation of primitive type sizes WILL come back to haunt you if you ever compile your code on another machine architecture, OS, or another vendor's compiler.

Most compiler vendors provide a header file that defines primitive types with explicit type sizes. These primitive types should be used whenever code may potentially be ported to another compiler (read this as ALWAYS, in EVERY instance). For example, most UNIX compilers have int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t. Microsoft has INT8, UINT8, INT16, UINT16, INT32, UINT32. I prefer Borland/CodeGear's int8, uint8, int16, uint16, int32, uint32. These names also give a little reminder of the size/range of the intended value.

For years I have used Borland's explicit primitive type names and #include the following C/C++ header file (primitive.h), which is intended to define the explicit primitive types with these names for any C/C++ compiler. This header file might not actually cover every compiler, but it covers several compilers I have used on Windows, UNIX and Linux; it also doesn't (yet) define 64-bit types.

#ifndef primitiveH
#define primitiveH
// Header file primitive.h
// Primitive types
// For C and/or C++
// This header file is intended to define a set of primitive types
// that will always be the same number of bytes on any operating system
// and/or for several popular C/C++ compiler vendors.
// Currently the type definitions cover:
// Windows (16 or 32 bit)
// Linux
// UNIX (HP-UX, Solaris)
// And the following compiler vendors
// Microsoft, Borland/Inprise/CodeGear, SunStudio, HP-UX
// (maybe GNU C/C++)
// This does not currently include 64bit primitives.
#define float64 double
#define float32 float
// Some old C++ compilers didn't have bool type
// If your compiler does not have bool then add   emulate_bool
// to your command line -D option or defined macros.
#ifdef emulate_bool
#   ifdef TVISION
#     define bool int
#     define true 1
#     define false 0
#   else
#     ifdef __BCPLUSPLUS__
      //BC++ bool type not available until 5.0
#        define BI_NO_BOOL
#        include <classlib/defs.h>
#     else
#        define bool int
#        define true 1
#        define false 0
#     endif
#  endif
#endif
#ifdef __BCPLUSPLUS__
#  include <systypes.h>
#else
#  ifdef unix
#     ifdef hpux
#        include <sys/_inttypes.h>
#     endif
#     ifdef sun
#        include <sys/int_types.h>
#     endif
#     ifdef linux
#        include <stdint.h>  /* defines int8_t, uint8_t, etc. */
#     endif
#     define int8 int8_t
#     define uint8 uint8_t
#     define int16 int16_t
#     define int32 int32_t
#     define uint16 uint16_t
#     define uint32 uint32_t
#  else
#     ifdef  _MSC_VER
#        include <BaseTSD.h>
#        define int8 INT8
#        define uint8 UINT8
#        define int16 INT16
#        define int32 INT32
#        define uint16 UINT16
#        define uint32 UINT32
#     else
#        ifndef OWL6
//          OWL version 6 already defines these types
#           define int8 char
#           define uint8 unsigned char
#           ifdef __WIN32__
#              define int16 short int
#              define int32 long
#              define uint16 unsigned short int
#              define uint32 unsigned long
#           else
#              define int16 int
#              define int32 long
#              define uint16 unsigned int
#              define uint32 unsigned long
#           endif
#        endif
#      endif
#  endif
#endif
typedef int8   sint8;
typedef int16  sint16;
typedef int32  sint32;
typedef uint8  nat8;
typedef uint16 nat16;
typedef uint32 nat32;
typedef const char * cASCIIz;    // constant null terminated char array
typedef char *       ASCIIz;     // null terminated char array
#endif
//primitive.h
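
On compilers with C99/C++11 support, the fixed-width types this header emulates are provided by the standard <cstdint> header, so a modern sketch of the same idea needs no vendor detection at all:

#include <cstdint>
#include <iostream>

int main() {
    // Exact-width types; the program is ill-formed on a platform
    // that cannot provide them, instead of silently changing sizes.
    std::int8_t   a = -8;
    std::uint16_t b = 16;
    std::int32_t  c = -32;
    std::uint32_t d = 32;
    std::cout << int(a) << ' ' << b << ' ' << c << ' ' << d << '\n';
}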

For the most part, the number of bytes and the range of values are determined by the CPU's architecture, not by C++. However, C++ sets minimum requirements, which litb explained properly and Martin York made only a few mistakes with.

The reason you can't use int and long interchangeably is that they aren't always the same length. C was invented on a PDP-11, where a byte had 8 bits and int was two bytes that could be handled directly by hardware instructions. Since C programmers often needed four-byte arithmetic, long was invented: it was four bytes and handled by library functions. Other machines had different specifications. The C standard imposed some minimum requirements.

