float stores floating-point values, that is, values that have potential decimal places. int only stores integral values, that is, whole numbers.
So while both are 32 bits wide, their use (and representation) is quite different. You cannot store 3.141 in an integer, but you can in a float.
Dissecting them both a little further:
In an integer, all bits except the leftmost one are used to store the number's value. This is (in Java, and on most hardware) done in so-called two's complement, which supports negative values. Two's complement uses the leftmost bit to store the sign: positive (0) or negative (1). This means that you can represent the values −2^31 to 2^31 − 1.
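In Java those limits are exposed as Integer.MIN_VALUE and Integer.MAX_VALUE, and a side effect of two's complement is that overflow silently wraps around. A minimal sketch:

```java
public class IntRange {
    public static void main(String[] args) {
        // 32-bit two's complement: -2^31 .. 2^31 - 1
        System.out.println(Integer.MIN_VALUE); // -2147483648
        System.out.println(Integer.MAX_VALUE); // 2147483647

        // Arithmetic wraps around silently on overflow:
        System.out.println(Integer.MAX_VALUE + 1); // -2147483648
    }
}
```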
In a float, those 32 bits are divided between three distinct parts: The sign bit, the exponent and the mantissa. They are laid out as follows:
S EEEEEEEE MMMMMMMMMMMMMMMMMMMMMMM
There is a single bit that determines whether the number is negative or non-negative (zero is neither positive nor negative, but has the sign bit set to zero). Then there are eight bits of an exponent and 23 bits of mantissa. To get a useful number from that, (roughly) the following calculation is performed:
M × 2^E
(There is more to it, but this should suffice for the purpose of this discussion)
The mantissa is in essence not much more than a 24-bit integer number. This gets multiplied by 2 to the power of the exponent part, which, roughly, is a number between −128 and 127.
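You can inspect this layout directly with Float.floatToIntBits, which returns the raw 32 bits of a float. A sketch (note the stored exponent carries an IEEE 754 bias of 127, and the mantissa's 24th bit is an implicit leading 1):

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(3.141f);
        int sign     = (bits >>> 31) & 0x1;   // 1 bit
        int exponent = (bits >>> 23) & 0xFF;  // 8 bits, stored with a bias of 127
        int mantissa = bits & 0x7FFFFF;       // 23 bits (implicit leading 1 not stored)
        System.out.printf("sign=%d exponent=%d (unbiased %d) mantissa=0x%06X%n",
                sign, exponent, exponent - 127, mantissa);
        // 3.141 lies in [2, 4), so the unbiased exponent is 1 (stored as 128).
    }
}
```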
Therefore you can accurately represent all numbers that would fit in a 24-bit integer, but the numeric range is also much greater, as larger exponents allow for larger values. For example, the maximum value for a float is around 3.4 × 10^38 whereas int only allows values up to 2.1 × 10^9.
But that also means, since 32 bits only have 4.2 × 10^9 different states (which are all used to represent the values int can store), that at the larger end of float's numeric range the numbers are spaced wider apart (since there cannot be more unique float numbers than there are unique int numbers). You cannot represent some numbers exactly, then. For example, the number 2 × 10^12 has a representation in float of 1,999,999,991,808. That might be close to 2,000,000,000,000 but it's not exact. Likewise, adding 1 to that number does not change it because 1 is too small to make a difference in the larger scales float is using there.
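This spacing is easy to observe in Java, where Math.ulp reports the gap between a float and its nearest neighbor:

```java
public class FloatSpacing {
    public static void main(String[] args) {
        float f = 2e12f;              // nearest representable float to 2 × 10^12
        System.out.println((long) f); // 1999999991808, not 2000000000000

        // Adding 1 is lost entirely: the gap between neighboring floats
        // at this magnitude is 2^17 = 131072.
        System.out.println(f + 1f == f); // true
        System.out.println(Math.ulp(f)); // 131072.0
    }
}
```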
Similarly, you can also represent very small numbers (between 0 and 1) in a float, but regardless of whether the numbers are very large or very small, float only has a precision of around 6 or 7 decimal digits. For large numbers those digits are at the start of the number (e.g. 4.51534 × 10^35, which is nothing more than 451534 followed by 30 zeroes – and float cannot tell anything useful about whether those 30 digits are actually zeroes or something else); for very small numbers (e.g. 3.14159 × 10^−27) they are at the far end of the number, way beyond the starting digits of 0.0000...
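The ~7-digit precision limit shows up as soon as an integer needs more than 24 significant bits:

```java
public class FloatPrecision {
    public static void main(String[] args) {
        // 16,777,216 = 2^24 is the last point where every integer is exact.
        System.out.println(16_777_216f + 1f == 16_777_216f); // true: the +1 is lost

        // 123,456,789 needs 27 bits, so it rounds to the nearest multiple of 8.
        System.out.println((int) 123_456_789f); // 123456792
    }
}
```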
Floats are used to store a wider range of numbers than can fit in an integer. These include decimal numbers and scientific-notation-style numbers whose values are bigger than fit in 32 bits. Here's the deep dive into them: http://en.wikipedia.org/wiki/Floating_point
Why does float have a bigger range than int32?
UInt32 is a 32-bit (4-byte) unsigned integer. This means that it can represent values in the range [0, 2^32 − 1] (= [0, 4294967295]).
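Java has no separate unsigned type, but the same 32 bits can be reinterpreted over [0, 2^32 − 1] using the unsigned helper methods on Integer (a sketch; UInt32 itself comes from other languages):

```java
public class Unsigned32 {
    public static void main(String[] args) {
        int bits = -1; // all 32 bits set
        // Signed view: -1. Unsigned view of the same bits: 2^32 - 1.
        System.out.println(Integer.toUnsignedLong(bits));   // 4294967295
        System.out.println(Integer.toUnsignedString(bits)); // 4294967295
    }
}
```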
Float32 is a 32-bit (aka single-precision [contrast with double-precision]) floating point number.
As other answers have mentioned, the types exist to guarantee the width.
The suffix gives the bit size. This makes them the same if and only if the Standard float and int have the same size on the target machine. They exist to give guaranteed sizes on all platforms.
float (32 bit): −3.4E+38 to +3.4E+38
int (32 bit): −2,147,483,648 to +2,147,483,647
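Both limits are available as constants and easy to check, along with the price float pays for its range: near the top of int's range, distinct integers collapse onto the same float.

```java
public class RangeCheck {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE); // 2147483647 (~2.1E9)
        System.out.println(Float.MAX_VALUE);   // 3.4028235E38

        // The extra range costs precision: these two ints map to the same float.
        System.out.println((float) 2_147_483_647 == (float) 2_147_483_646); // true
    }
}
```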
Why is it like that? Shouldn't float have an even smaller range because it can also hold decimal places?
I've been a hobbyist programmer for long enough that I can recall when it was not only conventional wisdom that integer math was faster than floating point, but it was undeniably true and produced obvious performance improvements. So, whenever practical, I've made a habit of doing my calculations in integers and never thought twice about it.
Recently, though, I've been working on some graphics code that's a bit more number-crunchy (mostly linear transformations and trigonometry), and I've been curious about whether I could improve performance by swapping floating point math with integer, the way it was done in the old days.
But before investing the time in a full refactor, I've been doing some tests to see if there's an appreciable difference, and I've gotten some weird results. On the AMD processor in my ~8 year old HP Envy, which is my main system, there's a slight-to-moderate performance advantage when multiplying by an integer ratio vs. float, but running the same code on my Galaxy S22 via Termux shows the floating point math absolutely blowing away the integer.
Obviously I expected different results on processors with very different architectures separated by a decade of development, but I'm really not sure how much more I should bother looking into this given these results. Despite hearing voices online repeat the conventional wisdom that ints are faster, I've found that, at least for scaling by ratios, floats are the clear winner (which I guess makes sense if FPUs have reached anything near parity with ALUs, considering that scaling by an integer ratio requires a costly division every time you do it).
Anyway, my question: does this bear out for the majority of modern processors and operations? Is there any place on modern systems where integers have a clear advantage over floats, or should I just enter the 21st century and get used to using floats without letting the nagging feeling that I'm incurring a performance penalty get to me?
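A rough way to explore this yourself is to compare the two scaling strategies the question describes: integer fixed-point (widen to long, multiply, divide) versus a single float multiply with the ratio precomputed. This is only a sketch for checking that the two approaches agree; for actual timing, a naive loop is unreliable on the JVM (JIT warm-up, dead-code elimination), so a harness like JMH is the safer tool.

```java
public class ScaleCompare {
    // Fixed-point: scale x by num/den using a widening integer multiply,
    // then the division the question notes is the costly part.
    static int scaleFixed(int x, int num, int den) {
        return (int) ((long) x * num / den);
    }

    // Floating point: precompute the ratio once, then a single multiply.
    // Math.round absorbs the tiny rounding error of the float ratio.
    static int scaleFloat(int x, float ratio) {
        return Math.round(x * ratio);
    }

    public static void main(String[] args) {
        int num = 3, den = 7;
        float ratio = (float) num / den;
        System.out.println(scaleFixed(700, num, den)); // 300
        System.out.println(scaleFloat(700, ratio));    // 300
    }
}
```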