• float stores floating-point values, that is, values that have potential decimal places
  • int only stores integral values, that is, whole numbers

So while both are 32 bits wide, their use (and representation) is quite different. You cannot store 3.141 in an integer, but you can in a float.

Dissecting them both a little further:

In an integer, all bits except the leftmost one are used to store the magnitude of the number. This is (in Java, and on most hardware) done in so-called two's complement, which supports negative values. Two's complement uses the leftmost bit to store the sign: 0 for non-negative, 1 for negative. This basically means that you can represent the values −2³¹ to 2³¹ − 1.
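To make the two's-complement rule concrete, here is a minimal Python sketch (the helper name `as_int32` is made up for illustration) that reads a 32-bit pattern the way an int would:

```python
def as_int32(bits: int) -> int:
    """Interpret the low 32 bits of `bits` as a two's-complement int32."""
    bits &= 0xFFFFFFFF
    # If the leftmost (sign) bit is 1, the value is negative.
    return bits - (1 << 32) if bits & 0x80000000 else bits

print(as_int32(0x00000003))  # 3
print(as_int32(0xFFFFFFFD))  # -3 (two's complement of 3)
print(as_int32(0x7FFFFFFF))  # 2147483647 = 2^31 - 1, the largest int32
print(as_int32(0x80000000))  # -2147483648 = -2^31, the smallest int32
```

Note the asymmetry: because one bit pattern is spent on zero, the range extends one further on the negative side.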

In a float, those 32 bits are divided between three distinct parts: The sign bit, the exponent and the mantissa. They are laid out as follows:

S EEEEEEEE MMMMMMMMMMMMMMMMMMMMMMM

There is a single bit that determines whether the number is negative or non-negative (zero is neither positive nor negative, but has its sign bit set to zero). Then there are eight bits of exponent and 23 bits of mantissa. To get a useful number from those, (roughly) the following calculation is performed:

M × 2ᴱ

(There is more to it, but this should suffice for the purpose of this discussion)

The mantissa is in essence not much more than a 24-bit integer (23 stored bits plus an implicit leading 1). This gets multiplied by 2 to the power of the exponent part which, after removing the bias, is roughly a number between −126 and 127.
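The bit layout above can be inspected directly. A hedged Python sketch, using the standard `struct` module to reach the IEEE-754 single-precision encoding (the helper name `float_fields` is made up for illustration):

```python
import struct

def float_fields(x: float):
    """Split the IEEE-754 single-precision encoding of x into (sign, exponent, mantissa)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31               # 1 bit:  S
    exponent = (bits >> 23) & 0xFF  # 8 bits: EEEEEEEE, stored with a bias of 127
    mantissa = bits & 0x7FFFFF      # 23 bits: M..., with an implicit leading 1
    return sign, exponent, mantissa

s, e, m = float_fields(6.5)
# 6.5 = 1.625 * 2^2, so the stored exponent is 2 + 127 = 129
print(s, e - 127, 1 + m / 2**23)  # 0 2 1.625
```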

Therefore you can accurately represent all numbers that would fit in a 24-bit integer, but the numeric range is also much greater, as larger exponents allow for larger values. For example, the maximum value for a float is around 3.4 × 10³⁸ whereas int only allows values up to 2.1 × 10⁹.

But that also means that, since 32 bits have only 4.2 × 10⁹ distinct states (all of which are used for the values int can store), at the larger end of float's numeric range the numbers are spaced farther apart (there cannot be more unique float numbers than there are unique int numbers). So some numbers cannot be represented exactly. For example, the number 2 × 10¹² has a representation in float of 1,999,999,991,808. That might be close to 2,000,000,000,000 but it's not exact. Likewise, adding 1 to that number does not change it, because 1 is too small to make a difference at the larger scales float is using there.
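Both effects, the inexact 2 × 10¹² and the +1 that disappears, can be reproduced with a short Python sketch that rounds through 32-bit precision via the standard `struct` module (the `to_float32` helper is illustrative, standing in for an actual float variable):

```python
import struct

def to_float32(x: float) -> float:
    """Round x to the nearest 32-bit float by packing and unpacking it."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

big = to_float32(2e12)
# Neighbouring floats at this magnitude are 2**17 = 131072 apart.
print(f"{big:,.0f}")               # 1,999,999,991,808 -- not exactly 2e12
print(to_float32(big + 1) == big)  # True: the +1 is below the float spacing here
```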

Similarly, you can also represent very small numbers (between 0 and 1) in a float, but regardless of whether the numbers are very large or very small, float only has a precision of around 6 or 7 decimal digits. For large numbers those digits are at the start of the number (e.g. 4.51534 × 10³⁵, which is nothing more than 451534 followed by 30 zeroes – and float cannot tell anything useful about whether those 30 digits are actually zeroes or something else); for very small numbers (e.g. 3.14159 × 10⁻²⁷) they are at the far end of the number, way beyond the starting digits of 0.0000...
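The "roughly 7 significant digits" figure follows from the 24-bit mantissa (2⁻²⁴ ≈ 6 × 10⁻⁸ relative error). A hedged Python sketch, again rounding through 32-bit precision with an illustrative `to_float32` helper:

```python
import struct

def to_float32(x: float) -> float:
    """Round x to the nearest 32-bit float."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

eps = 2**-23                       # gap between 1.0 and the next float32, ~1.2e-7
print(to_float32(1 + eps) > 1)     # True: a change around the 7th digit survives
print(to_float32(1 + eps/2) == 1)  # True: anything smaller is rounded away

# The same *relative* precision holds at any magnitude, large or small:
for x in (4.51534e35, 3.14159e-27):
    print(abs(to_float32(x) - x) <= x * 2**-24)  # True: error within half an ulp
```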

Answer from Joey on Stack Overflow
int32 vs float32 performance difference and analysis advice - CUDA Programming and Performance (forums.developer.nvidia.com, July 31, 2017)
Hello, I wrote a simple matmul kernel for float and int and bench-marked on a Quadro K1200 with matrices of sizes 1024x1024x1024, 16384x128x128, 128x16384x128 (NxKxM: C[NxM] = A[NxK] * B[KxM]). Here's the kernel: __global__ void matmul(int N, int K, int M, Type *C, Type *A, Type *B) { int i_idx = blockIdx.x, j_idx = blockIdx.y*BLK_SIZE + threadIdx.x; if( i_idx >= N || j_idx >= M ) return; int k; Type temp = C[i_idx*M+j_idx]; Type *A_ptr = A + i_idx*K + 0, *B_ptr = B + ...

2 of 3

Floats are used to store a wider range of numbers than can fit in an integer. These include decimal numbers and scientific-notation-style values larger than a 32-bit integer can hold. Here's the deep dive into them: http://en.wikipedia.org/wiki/Floating_point

Discussions

variable attributes: float32 vs float64, int32 vs int64 (github.com, May 7, 2019)
When I define my variables as data_type of "f" or "f4", these should be 32-bit floating-point decimals. However, when defining a variable attribute whose value is a floating-point via setncattr, the result is a 64-bit floating-point ("double"). More on github.com

Why does float have a bigger range than int32? (r/AskProgramming, September 13, 2015)
Shouldn't float have an even smaller range because it can also hold decimal places? The way floating-point numbers are implemented allows them to have the larger range, but there is a price for this trade-off. A single-precision (32-bit) IEEE-754 float can only exactly represent integers with an absolute value less than 2²⁴. Beyond that, you end up with gaps in the number line where the integer has to be rounded to a value that the float can represent. Example: 2²⁴ is 16777216 in decimal. Encoded as a float, this has the value 0x4b800000. The next integer value (16777217) encoded as a float is... also 0x4b800000. If you continue, you'll find that 16777218 can be represented exactly (0x4b800001), but 16777219 cannot. As the values grow past 2²⁴, the gaps between exact representations grow as well. More on reddit.com

Float vs Int - Unity Engine (discussions.unity.com, February 21, 2019)
So hi, I am new to game making. Can someone please tell me, what is the difference between float and int? More on discussions.unity.com

Difference between 32-bit fixed integer and 32-bit floating point (community.cantabilesoftware.com, November 30, 2015)
I had some recordings in 24-bit, 48000 sample rate, but now have the setup using 44100 to save CPU load. When I played them back in the media player, those were all distorted unless I changed my audio card back to 48000. While sleuthing that out, as I knew that wasn't supposed to be the case, ... More on community.cantabilesoftware.com
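The 2²⁴ boundary quoted in the Reddit answer above is easy to verify by looking at the actual byte encodings (a Python sketch; `f32_bits` is a made-up helper name):

```python
import struct

def f32_bits(x: float) -> str:
    """Hex dump of x encoded as a 32-bit float."""
    return struct.pack(">f", x).hex()

print(f32_bits(16777216.0))  # 4b800000  (2^24, exactly representable)
print(f32_bits(16777217.0))  # 4b800000  (2^24 + 1 rounds to the same encoding)
print(f32_bits(16777218.0))  # 4b800001  (2^24 + 2 is exact again)
```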
r/AskProgramming on Reddit: Why does float have a bigger range than int32? (September 13, 2015)

float (32-bit): −3.4E+38 to +3.4E+38

int (32-bit): −2,147,483,648 to +2,147,483,647

Why is it like that? Shouldn't float have an even smaller range because it can also hold decimal places?

Data types — NumPy v2.4 Manual (numpy.org)
NumPy numerical types are instances of numpy.dtype (data-type) objects, each having unique characteristics. Once you have imported NumPy using import numpy as np you can create arrays with a specified dtype using the scalar types in the numpy top-level API, e.g. numpy.bool, numpy.float32, etc.

Find elsewhere

Hard disk recorder modes, int32 vs. float32? - buzz forums (forums.jeskola.net)
What's the difference with HD recorder modes like int32 and float32? ---edit--- Well, both work and I don't get any difference with them - at least not anything that I could hear, so I guess it doesn't matter. That's why I ended up asking it now, after pretty long use.

Integers and Floating-Point Numbers — The Julia Language (docs.julialang.org)
These values are 2.0^-23 and 2.0^-52 as Float32 and Float64 values, respectively. The eps function can also take a floating-point value as an argument, and gives the absolute difference between that value and the next representable floating-point value.

The problem with float32: you only get 16 million values (pythonspeed.com, February 1, 2023)
For example, to get results down to 1/16th of your input data's precision, your data range has to be 1 million positive values when using float32. int32 lets you express 2000 million positive values, though it has no concept of precision.

Golang int8, int16, int32, float32 ... (sololearn.com)
Because sometimes int32 or below is enough (and it could make the memory footprint very different for a large dataset).

Single Vs Int32 - General - Xojo Programming Forum (forum.xojo.com, September 5, 2017)
I'm porting some Java code where most of the values are declared a type float. I think Java floats are single-precision 32-bit IEEE 754 floating-point numbers. Up until now I've been using the Xojo Double type in the port but since I am targeting both 32-bit and 64-bit apps, I'm guessing ...

Difference between float and int (sololearn.com, July 9, 2022)

In Golang, why is the integer type written as int, but the float type as float64 or float32? (quora.com)
Go therefore forces the programmer to choose float32 or float64 so the precision and semantics are explicit and unambiguous. ... For integers, many languages provide both a word-width default (C's int) and fixed-width types (Go provides int8/int16/int32/int64 and their unsigned variants).

int vs float? - Unity Engine (forum.unity.com, November 19, 2010)
I have very little programming/scripting knowledge but this has always confused me. If an int value stores whole numbers and a float stores numbers including any decimal places, why does the int type exist? Doesn't havi…

typeof(Int32(1)/Int32(2)) == Float64 - General Usage - Julia Programming Language (discourse.julialang.org, February 4, 2018)
So I don't have to track down every division of 2 low-precision integers and surround them with a low-precision Float wrapper. I think it is more natural to assume that if I input Int32, I don't want Float64 outputs unless explicitly specified. That makes more sense to me, and I hope I ...

Most machine integers are not machine floats (johndcook.com, June 29, 2025)
The int32 data type represents integers −2³¹ through 2³¹ − 1. The float32 data type represents numbers of the form
r/C_Programming on Reddit: Is there any performance benefit to using int vs. float on modern systems? (April 19, 2023)

I've been a hobbyist programmer for long enough that I can recall when it was not only conventional wisdom that integer math was faster than floating point, but it was undeniably true and produced obvious performance improvements. So, whenever practical, I've made a habit of doing my calculations in integers and never thought twice about it.

Recently, though, I've been working on some graphics code that's a bit more number-crunchy (mostly linear transformations and trigonometry), and I've been curious about whether I could improve performance by swapping floating point math with integer, the way it was done in the old days.

But before investing the time in a full refactor, I've been doing some tests to see if there's an appreciable difference, and I've gotten some weird results. On the AMD processor in my ~8 year old HP Envy, which is my main system, there's a slight-to-moderate performance advantage when multiplying by an integer ratio vs. float, but running the same code on my Galaxy S22 via Termux shows the floating point math absolutely blowing away the integer.

Obviously I expected different results on processors with very different architectures separated by a decade of development, but I'm really not sure how much more I should bother looking into this given these results. Despite hearing voices online repeat the conventional wisdom that ints are faster, I've found that, at least for scaling by ratios, floats are the clear winner (which I guess makes sense if FPUs have reached anything near parity with ALUs, considering that scaling by an integer ratio requires a costly division every time you do it).

Anyway, my question: does this bear out for the majority of modern processors and operations? Is there any place on modern systems where integers have a clear advantage over floats, or should I just enter the 21st century and get used to using floats without letting the nagging feeling that I'm incurring a performance penalty get to me?

Top answer
1 of 13
77
To affirm what others have already said here: generally speaking, on modern processors with an FPU, use floating point. From the video game industry to video codecs, they all use floating point because it is faster, not because they find it novel. This has been true for pretty much this entire century, which includes much older CPUs than your 8-year-old computer. For some context, Doom (1993) used fixed point, but Quake (1996) required an FPU.

But the real performance gains are derived from utilizing SIMD. Modern processors are literally designed for doing wide operations as the default: they want to do at least 128-bit-wide operations, if not larger. So instead of doing a single 32-bit float, the CPU wants to do 4 32-bit floats at the same time. If you only provide 1 value at a time, it will literally fill up the 3 remaining slots with garbage, do the wide operation, and then throw away the 3 unused results to give you back the 1 result you asked for. (The real trick to performance is writing your code in such a way that you avoid wasting this capacity; that can range from using programming languages that do it for you, to going more low-level and specifying it directly. In C, you might try turning on compiler flags like autovectorization and also directly using SIMD intrinsics.)

Additionally, if you profile a modern CPU in high-performance code, where you are trying to get all the performance you can, you will find that because the integer and floating-point units are separate hardware, the integer ALU is easy to saturate: it is often busy computing all the mundane things you might take for granted, such as incrementing your loop counter in a for-loop and computing the memory-address offsets to access adjacent slots in an array. If you do all your interesting work in fixed point, you will saturate your ALU while your FPU sits idle.
Handmade Hero, which teaches coding a game from scratch on a live stream, had a bunch of episodes explaining how modern CPUs work, what SIMD is, and how to utilize it. I think this was the first intro episode to SIMD. https://www.youtube.com/watch?v=qin-Eps3U_E
2 of 13
24
It depends on your data and the required operations on it. With 128-bit SIMD you can do 4 32-bit floating-point operations per instruction. The speed difference using 32-bit integer math depends on execution-port usage (see https://www.agner.org/optimize/instruction_tables.pdf); I would not expect it to be appreciably faster. However, if your data elements fit into 16-bit or 8-bit integers, it can be possible to do 8 or 16 operations per instruction. The compiler won't help you here outside of trivial loops; you usually need to use intrinsics to get a reliable speedup.