Answer from Jerry Coffin on Stack Overflow
Top answer
1 of 3
186

Between int32 and int32_t (and likewise between int8 and int8_t), the difference is pretty simple: the C standard defines int8_t and int32_t, but does not define anything named int8 or int32 -- the latter (if they exist at all) probably come from some other header or library, most likely one that predates the addition of int8_t and int32_t in C99.

Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified size, int can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and though for a 64-bit implementation it should arguably be 64 bits, in practice most 64-bit platforms keep int at 32 bits).

On the other hand, int is guaranteed to be present in every implementation of C, where int8_t and int32_t are not. It's probably open to question whether this matters to you though. If you use C on small embedded systems and/or older compilers, it may be a problem. If you use it primarily with a modern compiler on desktop/server machines, it probably won't be.

Oops -- missed the part about char. You'd use int8_t instead of char if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you probably want to use char instead. Its size can vary (in terms of number of bits) but it's guaranteed to be exactly one byte. One slight oddity though: there's no guarantee about whether a plain char is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need it to be signed or unsigned, you need to say signed char or unsigned char explicitly.
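
A minimal sketch of these distinctions, assuming a hosted C99 implementation that provides int32_t (the char signedness result depends on the compiler and flags such as gcc/clang's -funsigned-char):

#include <stdint.h>
#include <stdio.h>
#include <limits.h>

int main(void)
{
    int32_t fixed  = 100000;  /* exactly 32 bits, no padding, two's complement */
    int     plain  = 1000;    /* at least 16 bits; actual width varies by platform */
    char    letter = 'A';     /* always exactly one byte, but CHAR_BIT may exceed 8 */

    printf("sizeof(int32_t) = %zu, sizeof(int) = %zu, CHAR_BIT = %d\n",
           sizeof fixed, sizeof plain, CHAR_BIT);

    /* Whether plain char is signed is implementation-defined. */
    printf("plain char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");
    (void)letter;
    return 0;
}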

2 of 3
35

The _t data types are typedef names defined in the stdint.h header, while int is a built-in fundamental type. This makes the _t types available only if stdint.h exists; int, on the other hand, is guaranteed to exist.
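
For illustration only, on a typical platform where int is 32 bits the typedefs behind those names might look roughly like this (the real definitions are implementation-specific and may differ):

/* Hypothetical excerpt of what an implementation's stdint.h might contain. */
typedef signed char        int8_t;
typedef short              int16_t;
typedef int                int32_t;
typedef long long          int64_t;

typedef unsigned char      uint8_t;
typedef unsigned short     uint16_t;
typedef unsigned int       uint32_t;
typedef unsigned long long uint64_t;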

Top answer
1 of 4
12

Because all identifiers ending with _t are reserved for future additional types.

The int32_t family of types was added in the C99 standard, so they used the reserved names to avoid conflict with already existing software.

You can find a nice overview of reserved names in the glibc documentation.


Note:
Since Microsoft is not the C standards committee, it is right to avoid the reserved _t family of names, opting for the unreserved spelling INT32 instead.
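
To make the avoided conflict concrete, here is the kind of pre-C99 code the committee had to leave working (a hypothetical project header; the configuration macro is made up):

/* legacy_types.h -- hypothetical pre-C99 project header */
#ifdef PLATFORM_HAS_16BIT_INT        /* made-up build-configuration macro */
typedef long int32;                  /* long is 32 bits on such platforms */
#else
typedef int  int32;                  /* int is 32 bits on such platforms  */
#endif
/* Had C99 named its new type "int32", it would have collided with
   typedefs like these; "int32_t" was safe because names ending in _t
   were already reserved.                                               */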

2 of 4
8

At the time the C99 Standard was ratified, there already existed countless C programs that used int32 as an identifier. On platforms where both int and long were 32 bits, some of that pre-existing code would have defined int32 as int and some as long (the latter allows compatibility with platforms where int is 16 bits but long is 32; the former allows compatibility with platforms where int is 32 bits but long is 64). While it might have made sense for compilers to allow "int" and "long" to be synonymous on platforms where they're both 32 bits and have matching representations, in which case the "new" int32 type could have been compatible with both, the Standard doesn't allow for that.

Code written for a platform where int32 is known to be synonymous with int can use int* and int32* interchangeably. Code written for a platform where int32 is known to be synonymous with long can use int32* and long* interchangeably. Even on platforms where both int and long have identical representations, however, the Standard requires implementations to squawk (issue a diagnostic) if an attempt is made to convert an int* to a long* or vice versa without a cast.

Further, even on platforms where int and long have the same representation, casting an int* to a long* is not required to yield a pointer that's usable as a long*; the current maintainers of gcc believe that since the Standard doesn't require that such a cast work, their compiler should generate code where such casts sometimes fail.

Thus, if C99 had used int32 rather than int32_t, it would have had to break either the code that defines that identifier as int and expects it to be synonymous with int, or the code that defines it as long and expects it to be synonymous with long.
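
A small sketch of the behaviour described above, assuming a platform where int and long are both 32 bits:

#include <stdio.h>

int main(void)
{
    long value = 42;

    /* Even when int and long have identical size and representation, the
       following is a constraint violation, and the compiler must issue a
       diagnostic (the "squawk" mentioned above):

           int *p = &value;

       A cast silences the diagnostic, but the Standard still does not
       guarantee the result is usable as an int *, and dereferencing it can
       break under strict-aliasing-based optimization.                      */
    int *q = (int *)&value;
    (void)q;

    printf("sizeof(int) = %zu, sizeof(long) = %zu\n", sizeof(int), sizeof(long));
    return 0;
}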

Discussions

"int" in C is whatever the default integer the target architecture is, right?
No. The sizes of the integral types is up to the implementation (with some minor restrictions) per the C standard. Some common standards are used in practice. This page covers them. See the Data Models section. The page is technically about C++ but the models are shared. More on reddit.com
🌐 r/learnprogramming
4
2
December 4, 2021
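
As a quick way to see which data model a given toolchain uses, a tiny sketch (the values in the comment are typical results for the common models, not guarantees):

#include <stdio.h>

int main(void)
{
    /* Typical results: ILP32 -> 4 4 4, LLP64 (64-bit Windows) -> 4 4 8,
       LP64 (64-bit Linux/macOS/BSD) -> 4 8 8.                           */
    printf("sizeof(int) = %zu, sizeof(long) = %zu, sizeof(void *) = %zu\n",
           sizeof(int), sizeof(long), sizeof(void *));
    return 0;
}
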
Why is int in C in practice at least 32 bits?
The inttypes.h manpage has a rationale section that's important here. Especially in data structures, if you are hitting the disk or network, the data structure alignment can get screwed up once it hits another machine. It's not worth it; stable, reliable C is hard enough already. (news.ycombinator.com, November 14, 2023)

Related links

use of int32_t in c or c++? (stackoverflow.com)
Are types like uint32, int32, uint64, int64 defined in any stdlib header? (stackoverflow.com)
Should I use int or Int32? (quora.com)
What is the difference between int and int32_t? (quora.com)
Why is int in C in practice at least 32 bits? (news.ycombinator.com)
INT32-C. Ensure that operations on signed integers do not result in overflow (wiki.sei.cmu.edu, SEI CERT C Coding Standard)
Should I use int or int32_t? (reddit.com, r/cpp_questions)
What does Int32 mean in C#? Why 32? (quora.com)
C Data Types - Handbook (os.mbed.com)
Fixed width integer types (since C++11) (en.cppreference.com; notes that std::int8_t may be signed char and std::uint8_t may be unsigned char, but neither can be plain char)

Top answer
1 of 8
45

As several people have stated, there are no guarantees that an 'int' will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulation, you should use the 'Standard Integer Types' mandated by the C99 specification.

int8_t
uint8_t
int32_t
uint32_t

etc...

They are generally of the form [u]intN_t, where the 'u' prefix specifies that you want an unsigned quantity and N is the width in bits.

The correct typedefs for these should be available in stdint.h on whichever platform you are compiling for; using them allows you to write nice, portable code :-)
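
A short example of using these types portably, together with the matching printf conversion macros from inttypes.h (which pick the right format specifier for the platform):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t mask  = 0xFF00FF00u;   /* exactly 32 bits, so the bit layout is known */
    int32_t  count = -12345;

    /* PRIX32 / PRId32 expand to the correct conversion specifiers
       for uint32_t / int32_t on the current platform. */
    printf("mask = 0x%08" PRIX32 ", count = %" PRId32 "\n", mask, count);
    return 0;
}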

2 of 8
17

"is always 32-bit on most platforms" - what's wrong with that snippet? :-)

The C standard does not mandate the sizes of many of its integral types. It does mandate relative sizes, for example, sizeof(int) >= sizeof(short) and so on. It also mandates minimum ranges but allows for multiple encoding schemes (two's complement, ones' complement, and sign/magnitude).

If you want a variable of a specific size, you need to use one suitable for the platform you're running on, for example via #ifdefs driven by your build configuration, something like:

/* LONG_IS_32BITS and INT_IS_32BITS are assumed to be set by the project's
   build configuration; they are not standard predefined macros.           */
#ifdef LONG_IS_32BITS
    typedef long int32;
#else
    #ifdef INT_IS_32BITS
        typedef int int32;
    #else
        #error No 32-bit data type available
    #endif
#endif

Alternatively, C99 and later standards allow for exact-width integer types intN_t and uintN_t:


  1. The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
  2. The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
  3. These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names.
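
Because the exact-width types are optional, code that must build on unusual platforms sometimes guards for their absence; a minimal sketch (the function is just an example):

#include <stdint.h>

#if !defined(INT32_MAX)
#error "This code requires an exact-width 32-bit type (int32_t)."
#endif

/* From here on, int32_t is known to exist: exactly 32 bits wide,
   no padding bits, two's complement representation. */
int32_t apply_offset(int32_t base, int32_t offset)
{
    return base + offset;
}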

Further reading

C data types (en.wikipedia.org)
What's the actual difference between c_int and i32? (reddit.com, r/Zig)
What Is UInt32? Unsigned 32-Bit Integer Data Type Explained (lenovo.com)
Int or int32 in for loop? C++ (forums.unrealengine.com)