In C and C++ you have these minimum requirements (i.e. actual implementations can have larger magnitudes):

signed char: -2^07+1 to +2^07-1
short:       -2^15+1 to +2^15-1
int:         -2^15+1 to +2^15-1
long:        -2^31+1 to +2^31-1
long long:   -2^63+1 to +2^63-1

Now, on particular implementations, you have a variety of bit ranges. The Wikipedia article describes this nicely.

Answer from Johannes Schaub - litb on Stack Overflow
Top answer
1 of 4
3

As far as C is concerned, it's up to the compiler.


Of the constants in limits.h, the C17 standard says

5.2.4.2.1.1 [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign. [...]

The term "implementation" is used throughout to refer to the C compiler and associated libraries.

5. An implementation translates C source files and executes C programs in two data-processing-system environments, which will be called the translation environment and the execution environment in this International Standard. Their characteristics define and constrain the results of executing conforming C programs constructed according to the syntactic and semantic rules for conforming implementations.

It is not directly based on the hardware, as your question presumes. For example, Microsoft's C compiler for Windows for the x86-64 uses a 32-bit long, but my gcc has a 64-bit long for Linux on the same hardware.

2 of 4
2

The size of integer types is determined, on many platforms, by the target ABI (Application Binary Interface). The ABI is usually defined by the operating system and/or the compiler. Programs compiled for the same ABI can often interoperate, even if they are compiled by different compilers.

For example, on many Intel-based Linux and UNIX machines, the System V ABI determines the sizes of the fundamental C types per processor; there are different definitions for x86 and x86-64, for example, but they will be consistent across all systems that use that ABI. Compilers targeting Intel Linux machines will typically use the System V ABI, so you get the same int no matter which compiler you use. However, Microsoft operating systems will use a different ABI (actually, several, depending on how you look), which defines the fundamental types differently.

Modern desktop systems almost always provide 4-byte ints. However, embedded systems will often provide smaller ints; the Arduino AVR platform, for example, defines int as a 16-bit type. This is again dependent on the compiler and processor (no OS in this case).

So, the short answer is that "it depends". Your compiler (in some specific configuration) will ultimately be responsible for translating int into a machine type, so in some sense your compiler is the ultimate source of truth. But the compiler's decision might be informed by an existing ABI standard, by the standards of the OS, the processor, or just existing convention.

Top answer
1 of 3
2

Over the history of computers, byte and word sizes have varied considerably; you didn't always have a neat system of 8-bit bytes, 16-bit words, 32-bit longwords, etc. When C was being developed in the early 1970s, there were systems with 9-bit bytes and 36-bit words, systems that weren't byte-addressed at all, word sizes in excess of 40 bits, etc. Similarly, some systems had padding or guard bits that didn't contribute to representing the value - you could have an 18-bit type that could still only represent 2^16 values. Making all word sizes powers of 2 is convenient, but it isn't required.

Because the situation was somewhat variable, the C language standard only specifies the minimum range of values that a type must be able to represent. signed char must be able to represent at least the range -127...127, so it must be at least 8 bits wide. A short must be able to represent at least the range -32767...32767, so it must be at least 16 bits wide, etc. Also, representation of signed integers varied as well - two's complement is the most common, but you also had sign-magnitude and ones' complement representations, which encode two values for zero (positive and negative) - that's why the ranges don't go from -2^(N-1) to 2^(N-1)-1. The individual implementations then map those ranges onto the native word sizes provided by the hardware.

Now, it's not an accident that those particular ranges were specified - most hardware was already using 8-bit bytes, 16-bit words, 32-bit longwords, etc. Many of C's abstractions (including type sizes and behavior) are based on what the hardware already provides.

int is somewhat special - it's only required to represent at least the range -32767...32767, but it's also commonly set to be the same as the native word size, which since the late '80s has been 32 bits on most platforms.

To see what the actual ranges are on your platform, you can look at the macros defined in <limits.h>. Here's a little program I womped up to show what some of the size definitions are on my system:

#include <stdio.h>
#include <limits.h>

#define EXP(x) #x
#define STR(x) EXP(x)
#define DISPL(t,m) printf( "%30s = %2zu, %15s = %35s\n", "sizeof(" #t ")", sizeof(t), #m,  STR(m) )
#define DISPL2(t,m1,m2) printf( "%30s = %2zu, %15s = %35s, %15s = %35s\n", "sizeof(" #t ")", sizeof(t), #m1, STR(m1), #m2, STR(m2) )

int main( void )
{
  DISPL(char, CHAR_BIT);
  DISPL2(char, CHAR_MIN, CHAR_MAX);
  DISPL2(signed char, SCHAR_MIN, SCHAR_MAX);
  DISPL(unsigned char, UCHAR_MAX);  
  
  DISPL2(short, SHRT_MIN, SHRT_MAX);
  DISPL(unsigned short, USHRT_MAX);

  DISPL2(int, INT_MIN, INT_MAX);
  DISPL(unsigned int, UINT_MAX );
  
  DISPL2(long, LONG_MIN, LONG_MAX );
  DISPL(unsigned long, ULONG_MAX );

  DISPL2(long long, LLONG_MIN, LLONG_MAX );
  DISPL(unsigned long long, ULLONG_MAX );

  return 0;
}

And here's the result:

$ ./sizes
                  sizeof(char) =  1,        CHAR_BIT =                                   8
                  sizeof(char) =  1,        CHAR_MIN =                            (-127-1),        CHAR_MAX =                                 127
           sizeof(signed char) =  1,       SCHAR_MIN =                            (-127-1),       SCHAR_MAX =                                 127
         sizeof(unsigned char) =  1,       UCHAR_MAX =                          (127*2 +1)
                 sizeof(short) =  2,        SHRT_MIN =                         (-32767 -1),        SHRT_MAX =                               32767
        sizeof(unsigned short) =  2,       USHRT_MAX =                       (32767 *2 +1)
                   sizeof(int) =  4,         INT_MIN =                    (-2147483647 -1),         INT_MAX =                          2147483647
          sizeof(unsigned int) =  4,        UINT_MAX =                (2147483647 *2U +1U)
                  sizeof(long) =  8,        LONG_MIN =         (-9223372036854775807L -1L),        LONG_MAX =                9223372036854775807L
         sizeof(unsigned long) =  8,       ULONG_MAX =     (9223372036854775807L *2UL+1UL)
             sizeof(long long) =  8,       LLONG_MIN =        (-9223372036854775807LL-1LL),       LLONG_MAX =               9223372036854775807LL
    sizeof(unsigned long long) =  8,      ULLONG_MAX =   (9223372036854775807LL*2ULL+1ULL)
2 of 3
1

The size of an int is not necessarily the same on all implementations.

The C standard dictates that the range of an int must be at least -32767 to 32767, but it can be more. On most systems you're likely to come in contact with, an int will have range -2,147,483,648 to 2,147,483,647 i.e. 32-bit two's complement representation.

Top answer
1 of 11
163

The minimum ranges you can rely on are:

  • short int and int: -32,767 to 32,767
  • unsigned short int and unsigned int: 0 to 65,535
  • long int: -2,147,483,647 to 2,147,483,647
  • unsigned long int: 0 to 4,294,967,295

This means that no, long int cannot be relied upon to store any 10-digit number. However, a larger type, long long int, was introduced to C in C99 and C++ in C++11 (this type is also often supported as an extension by compilers built for older standards that did not include it). The minimum range for this type, if your compiler supports it, is:

  • long long int: -9,223,372,036,854,775,807 to 9,223,372,036,854,775,807
  • unsigned long long int: 0 to 18,446,744,073,709,551,615

So that type will be big enough (again, if you have it available).


A note for those who believe I've made a mistake with these lower bounds: the C requirements for the ranges are written to allow for ones' complement or sign-magnitude integer representations, where the lowest representable value and the highest representable value differ only in sign. It is also allowed to have a two's complement representation where the value with sign bit 1 and all value bits 0 is a trap representation rather than a legal value. In other words, int is not required to be able to represent the value -32,768.

2 of 11
36

The size of the numerical types is not defined in the C++ standard, although the minimum sizes are. The way to tell what size they are on your platform is to use std::numeric_limits.

For example, the maximum value for an int can be found by:

std::numeric_limits<int>::max();

Computers don't work in base 10, which means that the maximum value will be of the form 2^n - 1 because of how numbers are represented in memory. Take for example eight bits (1 byte):

  0100 1000

The rightmost bit, when set to 1, represents 2^0, the next bit 2^1, then 2^2, and so on until we get to the leftmost bit, which, if the number is unsigned, represents 2^7.

So the number represents 2^6 + 2^3 = 64 + 8 = 72, because the 4th and 7th bits from the right are set.

If we set all values to 1:

11111111

The number is now (assuming unsigned)
128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255 = 2^8 - 1
And as we can see, that is the largest possible value that can be represented with 8 bits.

On my machine an int and a long are the same, each able to hold between -2^31 and 2^31 - 1. In my experience that is the most common size on modern 32-bit desktop machines.
