Each type of integer has a different range of storage capacity:
Type   Capacity
Int16  (-32,768 to +32,767) or (-2^15 to +2^15 - 1)
Int32  (-2,147,483,648 to +2,147,483,647) or (-2^31 to +2^31 - 1)
Int64  (-9,223,372,036,854,775,808 to +9,223,372,036,854,775,807) or (-2^63 to +2^63 - 1)
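To make the table concrete, here is a minimal C# sketch (not from the original answer) that prints the same limits using the MinValue and MaxValue constants each of these types exposes:

    using System;

    class Ranges
    {
        static void Main()
        {
            // short, int and long are the C# keywords for Int16, Int32 and Int64.
            Console.WriteLine($"Int16: {short.MinValue} to {short.MaxValue}");
            Console.WriteLine($"Int32: {int.MinValue} to {int.MaxValue}");
            Console.WriteLine($"Int64: {long.MinValue} to {long.MaxValue}");
        }
    }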
As stated by James Sutherland in his answer:
int and Int32 are indeed synonymous; int will be a little more familiar looking, Int32 makes the 32-bitness more explicit to those reading your code. I would be inclined to use int where I just need 'an integer', Int32 where the size is important (cryptographic code, structures) so future maintainers will know it's safe to enlarge an int if appropriate, but should take care changing Int32 variables in the same way. The resulting code will be identical: the difference is purely one of readability or code appearance.
The only real difference here is the size. All of the int types here are signed integer values of varying sizes:
Int16: 2 bytes
Int32 and int: 4 bytes
Int64: 8 bytes
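A quick way to verify those sizes (a small illustrative C# snippet, not part of the original answer) is the sizeof operator, which is allowed in safe code for these primitive types:

    using System;

    class Sizes
    {
        static void Main()
        {
            Console.WriteLine(sizeof(short)); // 2 bytes (Int16)
            Console.WriteLine(sizeof(int));   // 4 bytes (Int32)
            Console.WriteLine(sizeof(long));  // 8 bytes (Int64)
        }
    }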
There is one small difference between Int64 and the rest: on a 32-bit platform, assignments to an Int64 storage location are not guaranteed to be atomic. Atomicity is guaranteed for all of the other types.
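When that atomicity matters, the usual workaround is to go through System.Threading.Interlocked, whose Read and Exchange overloads for long are atomic even on 32-bit platforms. A minimal sketch (the counter field is just for illustration):

    using System.Threading;

    class AtomicLong
    {
        // A 64-bit field shared between threads; a plain read or write of it
        // could tear on a 32-bit platform.
        static long counter;

        static void Update(long newValue)
        {
            Interlocked.Exchange(ref counter, newValue); // atomic write
        }

        static long ReadCurrent()
        {
            return Interlocked.Read(ref counter);        // atomic read
        }
    }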
ECMA-334:2006 C# Language Specification (p18):
Each of the predefined types is shorthand for a system-provided type. For example, the keyword
int refers to the struct System.Int32. As a matter of style, use of the keyword is favoured over use of the complete system type name.
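In other words, the keyword and the struct name refer to the same type and can be mixed freely; a small sketch to illustrate (not part of the spec text):

    using System;

    class KeywordVsStruct
    {
        static void Main()
        {
            int a = 42;          // keyword form
            System.Int32 b = a;  // full system type name; same type, no conversion
            Console.WriteLine(typeof(int) == typeof(System.Int32)); // True
            Console.WriteLine(b.GetType().FullName);                // System.Int32
        }
    }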
Between int32 and int32_t (and likewise between int8 and int8_t), the difference is pretty simple: the C standard defines int8_t and int32_t, but does not define anything named int8 or int32 -- the latter (if they exist at all) probably come from some other header or library (and most likely predate the addition of int8_t and int32_t in C99).
Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified size, int can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and for a 64-bit implementation, it should probably be 64 bits).
On the other hand, int is guaranteed to be present in every implementation of C, where int8_t and int32_t are not. It's probably open to question whether this matters to you though. If you use C on small embedded systems and/or older compilers, it may be a problem. If you use it primarily with a modern compiler on desktop/server machines, it probably won't be.
Oops -- missed the part about char. You'd use int8_t instead of char if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you probably want to use char instead. Its size can vary (in terms of number of bits) but it's guaranteed to be exactly one byte. One slight oddity though: there's no guarantee about whether a plain char is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need to ensure that it is signed or unsigned, you need to specify that explicitly.
The _t data types are typedefs in the stdint.h header, while int is a built-in fundamental data type. This makes the _t types available only if stdint.h exists; int, on the other hand, is guaranteed to exist.
If int and int32 were the same, why would Go keep both? To be honest, this is confusing if you are coming from Java or C++.
https://golang.org/pkg/builtin/#int
int is a signed integer type that is at least 32 bits in size. It is a distinct type, however, and not an alias for, say, int32.
Surprised no one mentioned that in Go, int is usually 64 bits on 64-bit systems and 32 bits on 32-bit systems, so the distinction from both int32 and int64 is clear.