In C and C++ you have these least requirements (i.e. actual implementations can have larger magnitudes):
signed char: -2^7+1 to +2^7-1
short: -2^15+1 to +2^15-1
int: -2^15+1 to +2^15-1
long: -2^31+1 to +2^31-1
long long: -2^63+1 to +2^63-1
Now, on particular implementations, you have a variety of bit ranges. The wikipedia article describes this nicely.
Answer from Johannes Schaub - litb on Stack Overflow
No, int in C is not defined to be 32 bits. int and long are not defined to be any specific size at all. The only thing the language guarantees (beyond the minimum ranges) is that sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long).
Theoretically a compiler could make short, char, and long all the same number of bits. I know of some that actually did that for all those types save char.
This is why C now defines types like uint16_t and uint32_t. If you need a specific size, you are supposed to use one of those.
Note: 'int' is only guaranteed to be at least 16 bits. It's even smaller than you thought! If you want to guarantee at least 32 bits, use 'long'. For even larger values, look at things like 'int64_t' or 'long long'.
How does a newbie avoid problems like this? I'm afraid it's the same as for many other programming problems. "think carefully and take care".
Running a test at program startup is a good idea. As is having a good set of unit tests. Take extra care when moving to a new platform.
<limits.h> defines the minimum and maximum values for the integer types in C.
N.B. C++ has its own version: <limits>
If you're really interested in the number of bits a type uses on your platform, you can do something like this (from here):
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* CHAR_BIT * sizeof(...) has type size_t, so use %zu, not %d */
    printf("short is %zu bits\n", CHAR_BIT * sizeof(short));
    printf("int is %zu bits\n", CHAR_BIT * sizeof(int));
    printf("long is %zu bits\n", CHAR_BIT * sizeof(long));
    printf("long long is %zu bits\n", CHAR_BIT * sizeof(long long));
    return 0;
}
As far as C is concerned, it's up to the compiler.
Of the constants in limits.h, the C17 standard says
5.2.4.2.1.1 [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign. [...]
The term "implementation" is used throughout to refer to the C compiler and associated libraries.
5. An implementation translates C source files and executes C programs in two data-processing-system environments, which will be called the translation environment and the execution environment in this International Standard. Their characteristics define and constrain the results of executing conforming C programs constructed according to the syntactic and semantic rules for conforming implementations.
It is not directly based on the hardware, as your question presumes. For example, Microsoft's C compiler for Windows for the x86-64 uses a 32-bit long, but my gcc has a 64-bit long for Linux on the same hardware.
The size of integer types is determined, on many platforms, by the target ABI (Application Binary Interface). The ABI is defined usually by the operating system and/or compiler used. Programs compiled with the same ABI can often interoperate, even if they are compiled by different compilers.
For example, on many Intel-based Linux and UNIX machines, the System V ABI determines the sizes of the fundamental C types per processor; there are different definitions for x86 and x86-64, for example, but they will be consistent across all systems that use that ABI. Compilers targeting Intel Linux machines will typically use the System V ABI, so you get the same int no matter which compiler you use. However, Microsoft operating systems will use a different ABI (actually, several, depending on how you look), which defines the fundamental types differently.
Modern desktop systems almost always provide 4-byte ints. However, embedded systems will often provide smaller ints; the Arduino AVR platform, for example, defines int as a 16-bit type. This is again dependent on the compiler and processor (no OS in this case).
So, the short answer is that "it depends". Your compiler (in some specific configuration) will ultimately be responsible for translating int into a machine type, so in some sense your compiler is the ultimate source of truth. But the compiler's decision might be informed by an existing ABI standard, by the standards of the OS, the processor, or just existing convention.
You can't, C has no such functionality. You can of course typedef an int:
typedef int int_1_100_Type;
but there is no way of restricting its range. In C++, you could create a new type with this functionality, but I think very few people would bother - you just need to put range checks in the function(s) that use the type.
Of course you can. All you need is a little object-based C.
Create a file with a struct and some members:
typedef struct s_foo {
int member;
} Foo;
Foo* newFoo(int input); // ctor
int get(const Foo *f); // accessor
Enforce your condition in the mutator/ctor.
If you do this in its own file, you can hide the implementation of the "class" as well - you can do OO-like C.
Over the history of computers, byte and word sizes have varied considerably; you didn't always have a neat system of 8-bit bytes, 16-bit words, 32-bit longwords, etc. When C was being developed in the early 1970s, there were systems with 9-bit bytes and 36-bit words, systems that weren't byte-addressed at all, word sizes in excess of 40 bits, etc. Similarly, some systems had padding or guard bits that didn't contribute to representing the value - you could have an 18-bit type that could still only represent 2^16 values. Making all word sizes powers of 2 is convenient, but it isn't required.
Because the situation was somewhat variable, the C language standard only specifies the minimum range of values that a type must be able to represent. signed char must be able to represent at least the range -127...127, so it must be at least 8 bits wide. A short must be able to represent at least the range -32767...32767, so it must be at least 16 bits wide, etc. Also, representation of signed integers varied as well - two's complement is the most common, but you also had sign-magnitude and ones' complement representations, which encode two values for zero (positive and negative) - that's why the ranges don't go from -2^(N-1) to 2^(N-1)-1. The individual implementations then map those ranges onto the native word sizes provided by the hardware.
Now, it's not an accident that those particular ranges were specified - most hardware was already using 8-bit bytes, 16-bit words, 32-bit longwords, etc. Many of C's abstractions (including type sizes and behavior) are based on what the hardware already provides.
int is somewhat special - it's only required to represent at least the range -32767...32767, but it's also commonly set to be the same as the native word size, which since the late '80s has been 32 bits on most platforms.
To see what the actual ranges are on your platform, you can look at the macros defined in <limits.h>. Here's a little program I womped up to show what some of the size definitions are on my system:
#include <stdio.h>
#include <limits.h>
#define EXP(x) #x
#define STR(x) EXP(x)
#define DISPL(t,m) printf( "%30s = %2zu, %15s = %35s\n", "sizeof(" #t ")", sizeof(t), #m, STR(m) )
#define DISPL2(t,m1,m2) printf( "%30s = %2zu, %15s = %35s, %15s = %35s\n", "sizeof(" #t ")", sizeof(t), #m1, STR(m1), #m2, STR(m2) )
int main( void )
{
    DISPL(char, CHAR_BIT);
    DISPL2(char, CHAR_MIN, CHAR_MAX);
    DISPL2(signed char, SCHAR_MIN, SCHAR_MAX);
    DISPL(unsigned char, UCHAR_MAX);
    DISPL2(short, SHRT_MIN, SHRT_MAX);
    DISPL(unsigned short, USHRT_MAX);
    DISPL2(int, INT_MIN, INT_MAX);
    DISPL(unsigned int, UINT_MAX);
    DISPL2(long, LONG_MIN, LONG_MAX);
    DISPL(unsigned long, ULONG_MAX);
    DISPL2(long long, LLONG_MIN, LLONG_MAX);
    DISPL(unsigned long long, ULLONG_MAX);
    return 0;
}
And here's the result:
$ ./sizes
sizeof(char) = 1, CHAR_BIT = 8
sizeof(char) = 1, CHAR_MIN = (-127-1), CHAR_MAX = 127
sizeof(signed char) = 1, SCHAR_MIN = (-127-1), SCHAR_MAX = 127
sizeof(unsigned char) = 1, UCHAR_MAX = (127*2 +1)
sizeof(short) = 2, SHRT_MIN = (-32767 -1), SHRT_MAX = 32767
sizeof(unsigned short) = 2, USHRT_MAX = (32767 *2 +1)
sizeof(int) = 4, INT_MIN = (-2147483647 -1), INT_MAX = 2147483647
sizeof(unsigned int) = 4, UINT_MAX = (2147483647 *2U +1U)
sizeof(long) = 8, LONG_MIN = (-9223372036854775807L -1L), LONG_MAX = 9223372036854775807L
sizeof(unsigned long) = 8, ULONG_MAX = (9223372036854775807L *2UL+1UL)
sizeof(long long) = 8, LLONG_MIN = (-9223372036854775807LL-1LL), LLONG_MAX = 9223372036854775807LL
sizeof(unsigned long long) = 8, ULLONG_MAX = (9223372036854775807LL*2ULL+1ULL)
The size of an int is not necessarily the same on all implementations.
The C standard dictates that the range of an int must be at least -32767 to 32767, but it can be more. On most systems you're likely to come in contact with, an int will have range -2,147,483,648 to 2,147,483,647 i.e. 32-bit two's complement representation.
The minimum ranges you can rely on are:
short int and int: -32,767 to 32,767
unsigned short int and unsigned int: 0 to 65,535
long int: -2,147,483,647 to 2,147,483,647
unsigned long int: 0 to 4,294,967,295
This means that no, long int cannot be relied upon to store any 10-digit number. However, a larger type, long long int, was introduced to C in C99 and C++ in C++11 (this type is also often supported as an extension by compilers built for older standards that did not include it). The minimum range for this type, if your compiler supports it, is:
long long int: -9,223,372,036,854,775,807 to 9,223,372,036,854,775,807
unsigned long long int: 0 to 18,446,744,073,709,551,615
So that type will be big enough (again, if you have it available).
A note for those who believe I've made a mistake with these lower bounds: the C requirements for the ranges are written to allow for ones' complement or sign-magnitude integer representations, where the lowest representable value and the highest representable value differ only in sign. It is also allowed to have a two's complement representation where the value with sign bit 1 and all value bits 0 is a trap representation rather than a legal value. In other words, int is not required to be able to represent the value -32,768.
The size of the numeric types is not defined in the C++ standard, although the minimum sizes are. The way to tell what sizes they are on your platform is to use std::numeric_limits from the <limits> header.
For example, the maximum value for an int can be found by:
std::numeric_limits<int>::max();
Computers don't work in base 10, which means that the maximum value will be in the form of 2^n - 1 because of how numbers are represented in memory. Take for example eight bits (1 byte):
0100 1000
The rightmost bit, when set to 1, represents 2^0; the next bit 2^1, then 2^2, and so on, until we get to the leftmost bit, which (if the number is unsigned) represents 2^7.
So the number represents 2^6 + 2^3 = 64 + 8 = 72, because the 4th and 7th bits from the right are set.
If we set all values to 1:
11111111
The number is now (assuming unsigned)
128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255 = 2^8 - 1
And as we can see, that is the largest possible value that can be represented with 8 bits.
On my machine, an int and a long are the same size, each able to hold values between -2^31 and 2^31 - 1. In my experience that's the most common size on a modern 32-bit desktop machine.