Between int32 and int32_t (and likewise between int8 and int8_t), the difference is pretty simple: the C standard defines int8_t and int32_t, but does not define anything named int8 or int32. The latter (if they exist at all) probably come from some other header or library, most likely one that predates the addition of int8_t and int32_t in C99.
Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified size, int can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and for a 64-bit implementation, it should probably be 64 bits).
On the other hand, int is guaranteed to be present in every implementation of C, where int8_t and int32_t are not. It's probably open to question whether this matters to you though. If you use C on small embedded systems and/or older compilers, it may be a problem. If you use it primarily with a modern compiler on desktop/server machines, it probably won't be.
Oops -- missed the part about char. You'd use int8_t instead of char if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you probably want to use char instead. Its size can vary (in terms of number of bits) but it's guaranteed to be exactly one byte. One slight oddity though: there's no guarantee about whether a plain char is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need it to be either signed or unsigned, you need to specify that explicitly (signed char or unsigned char).
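A minimal sketch of making the signedness explicit; CHAR_MIN from limits.h tells you what plain char is on a given implementation (it's 0 exactly when plain char is unsigned):

#include <limits.h>
#include <stdint.h>

signed char   sc = -1;    /* guaranteed signed                 */
unsigned char uc = 255;   /* guaranteed unsigned               */
int8_t        n  = -128;  /* exactly 8 bits, two's complement  */

#if CHAR_MIN == 0
/* plain char is unsigned on this implementation */
#else
/* plain char is signed on this implementation */
#endif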
The _t data types are typedefs in the stdint.h header, while int is a built-in fundamental data type. This makes the _t types available only if stdint.h exists; int, on the other hand, is guaranteed to exist.
Because all identifiers ending with _t are reserved for future additional types.
The int32_t family of types was added in the C99 standard, so they used the reserved names to avoid conflict with already existing software.
You can find a nice overview of reserved names in the glibc documentation.
Note:
Since Microsoft is not the C standards committee, they are right in not using the _t family of names, opting for the unreserved INT32 instead.
At the time the C99 Standard was ratified, there already existed countless C programs that used int32 as an identifier. On platforms where both int and long were 32 bits, some of that pre-existing code would have defined int32 as int and some as long (the latter allows compatibility with platforms where int is 16 bits but long is 32; the former allows compatibility with platforms where int is 32 bits but long is 64). While it might have made sense for compilers to allow int and long to be synonymous on platforms where they're both 32 bits and have matching representations, in which case the "new" int32 type could be compatible with both, the Standard doesn't allow for that.
Code written for a platform where int32 is known to be synonymous with int can use int* and int32* interchangeably. Code written for a platform where int32 is known to be synonymous with long can use int32* and long* interchangeably. Even on platforms where both int and long have identical representations, however, the Standard requires that implementations squawk if an attempt is made to convert an int* to long* or vice versa without a cast.
Further, even on platforms where int and long have the same representation, casting an int* to a long* is not required to yield a pointer that's usable as a long*; the current maintainers of gcc believe that since the Standard doesn't require that such a cast work, their compiler should generate code where such casts sometimes fail.
Thus, if C99 had used int32 rather than int32_t, it would have had to break code that defines that identifier as int and expects it to be synonymous with int, or code that defines that identifier as long and expects it to be synonymous with long.
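A minimal sketch of the incompatibility described above (assuming a platform where int and long are both 32 bits):

int  i  = 42;
int *ip = &i;

/* long *lp = ip;   constraint violation: incompatible pointer types,
   even when int and long have the same size and representation */

long *lp = (long *)ip;  /* the cast silences the compiler, but... */

/* *lp = 7;   undefined behaviour: accessing an int object through a
   long * violates the aliasing rules */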
"int" in C is whatever the default integer the target architecture is, right?
Why is int in C in practice at least a 32 bit?
use of int32_t in c or c++? - Stack Overflow
c - Are types like uint32, int32, uint64, int64 defined in any stdlib header? - Stack Overflow
Like, if you declare an int in C, will it be a 32-bit int when compiling to a 32-bit binary, and 64-bit for a 64-bit binary?
1) int32_t provides an exact 32-bit integer. This is important because you can port your applications to different platforms without rewriting the algorithm (assuming they compile; and yes, int is not always 16 or 32 or 64 bits wide, check the C reference). See the nice self-explanatory page about the stdint.h types.
2) Probably, yes
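A quick way to see what your own implementation uses (a minimal sketch; CHAR_BIT from limits.h is the number of bits per byte):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("int  : %zu bits\n", sizeof(int)  * CHAR_BIT);  /* often 32, even for 64-bit binaries */
    printf("long : %zu bits\n", sizeof(long) * CHAR_BIT);
    return 0;
}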
- Firstly, why do we need int32_t when we already have variations like short int, unsigned int, and so on?
Because short int, unsigned int, etc. aren't portable across architectures.
If you mean exactly 32 bits, just say that explicitly. Otherwise the same unsigned int might end up being 64 bits wide on a different CPU architecture.
- Secondly does the use of this type of fixed size types makes programs portable?
Yes, as mentioned above.
The C99 stdint.h defines these:
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
And, if the architecture supports them:
int64_t
uint64_t
There are various other integer typedefs in stdint.h as well.
If you're stuck without a C99 environment then you should probably supply your own typedefs and use the C99 ones anyway.
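For instance, a sketch of such fallback typedefs, guarded by the C99 version check (the mappings below are assumptions about the target and would need verifying for each platform):

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#include <stdint.h>
#else
typedef signed char    int8_t;   /* assumes an 8-bit char   */
typedef unsigned char  uint8_t;
typedef short          int16_t;  /* assumes a 16-bit short  */
typedef unsigned short uint16_t;
typedef int            int32_t;  /* assumes a 32-bit int    */
typedef unsigned int   uint32_t;
#endif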
The uint32 and uint64 (i.e. without the _t suffix) are probably application specific.
Those integer types are all defined in stdint.h
With Groovy on the path:
groovy -e " println Integer.MAX_VALUE "
(Groovy is extremely useful for quick reference, within a Java context.)
2147483647
Here's what you need to remember:
- It's 2 billion.
- The next three triplets are increasing like so: 100s, 400s, 600s
- The first and the last triplets need 3 added to them so they round up to end in 50 (e.g. 147 + 3 = 150 and 647 + 3 = 650)
- The second triplet needs 3 subtracted from it to round it down to end in 80 (e.g. 483 - 3 = 480)
Hence 2, 147, 483, 647
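The same constant from C, for comparison: INT32_MAX comes from stdint.h, and PRId32 from inttypes.h is the matching printf format:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    printf("%" PRId32 "\n", INT32_MAX);  /* prints 2147483647 */
    return 0;
}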
#include <stdint.h>
int32_t my_32bit_int;
C doesn't concern itself very much with the exact sizes of integer types. C99 introduces the header stdint.h, which is probably your best bet. Include that and you can use e.g. int32_t. Of course, not all platforms support it.
As several people have stated, there are no guarantees that an int will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulations, you should use the 'Standard Integer Types' mandated by the C99 specification.
int8_t
uint8_t
int32_t
uint32_t
etc...
They are generally of the form [u]intN_t, where the 'u' prefix specifies that you want an unsigned quantity and N is the number of bits.
The correct typedefs for these should be available in stdint.h on whichever platform you are compiling for; using these allows you to write nice, portable code :-)
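As an example of the kind of bit manipulation where the exact width matters, here is a sketch of a 32-bit left rotate; with a plain unsigned int of unknown width, the shift amounts below would be wrong:

#include <stdint.h>

/* Rotate x left by n bits; well-defined because uint32_t is exactly
   32 bits and unsigned (no sign bit, no padding). */
uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;  /* keep the shift count in range 0..31 */
    return (x << n) | (x >> ((32 - n) & 31));
}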
"is always 32-bit on most platforms" - what's wrong with that snippet? :-)
The C standard does not mandate the sizes of many of its integral types. It does mandate relative sizes, for example, sizeof(int) >= sizeof(short) and so on. It also mandates minimum ranges but allows for multiple encoding schemes (two's complement, ones' complement, and sign/magnitude).
If you want a variable of a specific size, you need to use one suitable for the platform you're running on, for example with #ifdefs (here LONG_IS_32BITS and INT_IS_32BITS stand for macros your build system would have to define):
#ifdef LONG_IS_32BITS
typedef long int32;
#else
#ifdef INT_IS_32BITS
typedef int int32;
#else
#error No 32-bit data type available
#endif
#endif
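An alternative sketch that detects the width from limits.h instead of relying on build-system macros (this assumes the types have no padding bits):

#include <limits.h>

#if INT_MAX == 2147483647
typedef int int32;
#elif LONG_MAX == 2147483647L
typedef long int32;
#else
#error No 32-bit data type available
#endif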
Alternatively, C99 and above allow for exact-width integer types intN_t and uintN_t:
- The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
- The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
- These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names.
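Since these exact-width types are optional, their presence can be probed via the companion limit macros: the standard requires INT32_MAX to be defined exactly when int32_t is provided. A minimal sketch:

#include <stdint.h>

#ifdef INT32_MAX
/* int32_t exists on this implementation */
#else
/* no exact-width 32-bit signed type here */
#endif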