32-bit computer number format
Single-precision floating-point format (sometimes called FP32, float32, or float) is a computer number format, usually occupying 32 bits in computer memory; it represents a wide range of numeric values by using a … Wikipedia
Wikipedia
en.wikipedia.org › wiki › Single-precision_floating-point_format
Single-precision floating-point format - Wikipedia
3 weeks ago - A floating-point variable can represent ... an IEEE 754 32-bit base-2 floating-point variable has a maximum finite value of (2 − 2^−23) × 2^127 ≈ 3.4028235 × 10^38 ...
Python⇒Speed
pythonspeed.com › articles › float64-float32-precision
The problem with float32: you only get 16 million values
February 1, 2023 - We only have 16 million values at a $1,000,000 precision, so the maximum value we can represent accurately at this level of precision is $16 trillion, or slightly less than the 2022 US GDP of about $25 trillion.
Discussions

variables - What are the actual min/max values for float and double (C++) - Stack Overflow
I have read recommendations to use the "FLT_MIN" and "FLT_MAX" values for float. Whenever I do this, codeblocks tells me its max: 3.40282e+038 min: 1.17549e-038 Not knowing what this meant I t...
🌐 stackoverflow.com
c# - What is the Max value for 'float'? - Stack Overflow
When I check the value of "float.MaxValue" I'm getting: 3.402823E+38 which is: 340,282,300,000,000,000,000,000,000,000,000,000,000 Then why when I'm trying to set a much smaller value in...
🌐 stackoverflow.com
How to calculate min/max values of floating point numbers? - Software Engineering Stack Exchange
I'm trying to calculate the min/max, or the lowest to highest value range of a 48 bit floating point type MIL-STD-1750A (PDF) (WIKI). Ex: How a double range is 1.7E +/- 308. I've looked around for...
🌐 softwareengineering.stackexchange.com
Top answer
1 of 4
26

Alright. Using what I learned from here (thanks everyone) and the other parts of the web I wrote a neat little summary of the two just in case I run into another issue like this.

In C++ there are two ways to represent/store decimal values.

Floats and Doubles

A float can store values from:

  • -340282346638528859811704183484516925440.0000000000000000 Float lowest
  • 340282346638528859811704183484516925440.0000000000000000 Float max

A double can store values from:

  • -179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.0000000000000000 Double lowest

  • 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.0000000000000000 Double max

Float's precision allows it to round-trip up to 9 decimal digits (7 digits guaranteed accurate, plus 2 extra digits needed to survive the decimal-to-binary-to-decimal conversion)

Double, like the name suggests, can store roughly twice as much precision as a float. It can round-trip up to 17 digits (15 digits guaranteed accurate, plus 2 for the conversion).

e.g.

     float x = 1.426;
     double y = 8.739437;

Decimals & Math

Because a float reliably carries about 7 significant decimal digits and a double about 15, a proper method must be used to set the output precision before printing the results of calculations.

e.g.

#include <iostream>
#include <limits>
using namespace std;

typedef std::numeric_limits<double> dbl;
cout.precision(dbl::max_digits10 - 2); // sets the precision to the *proper* number of digits.
cout << dbl::max_digits10 << endl; // prints 17.
double x = 12345678.312;
double a = 12345678.244;
// without setting the precision, the result of this calculation won't be printed correctly.

cout << endl << x + a << endl;

example 2:

typedef std::numeric_limits<float> flt;
cout.precision(flt::max_digits10 - 2);
cout << flt::max_digits10 << endl; // prints 9.
float x = 54.122111f;
float a = 11.323111f;

cout << endl << x + a << endl; /* without setting the precision this outputs a different value; it also makes sure we're *limited* to 7 digits. If we were to add another digit before the decimal point, there would be one fewer digit on the right, as there can only be 7. Doubles work in the same way. */

Roughly how accurate is this description? Can it be used as a standard when confused?

2 of 4
15

The std::numeric_limits class in the <limits> header provides information about the characteristics of numeric types.

For a floating-point type T, here are the greatest and least values representable in the type, in various senses of “greatest” and “least.” I also include the values for the common IEEE 754 64-bit binary type, which is called double in this answer. These are in decreasing order:

  • std::numeric_limits<T>::infinity() is the largest representable value, if T supports infinity. It is, of course, infinity. Whether the type T supports infinity is indicated by std::numeric_limits<T>::has_infinity.

  • std::numeric_limits<T>::max() is the largest finite value. For double, this is 2^1024 − 2^971, approximately 1.79769 × 10^308.

  • std::numeric_limits<T>::min() is the smallest positive normal value. Floating-point formats often have an interval where the exponent cannot get any smaller, but the significand (fraction portion of the number) is allowed to get smaller until it reaches zero. This comes at the expense of precision but has some desirable mathematical-computing properties. min() is the point where this precision loss starts. For double, this is 2^−1022, approximately 2.22507 × 10^−308.

  • std::numeric_limits<T>::denorm_min() is the smallest positive value. In types which have subnormal values, it is subnormal. Otherwise, it equals std::numeric_limits<T>::min(). For double, this is 2^−1074, approximately 4.94066 × 10^−324.

  • std::numeric_limits<T>::lowest() is the least finite value. It is usually a negative number large in magnitude. For double, this is −(2^1024 − 2^971), approximately −1.79769 × 10^308.

  • If std::numeric_limits<T>::has_infinity and std::numeric_limits<T>::is_signed are true, then -std::numeric_limits<T>::infinity() is the least value. It is, of course, negative infinity.

Another characteristic you may be interested in is:

  • std::numeric_limits<T>::digits10 is the greatest number of decimal digits such that converting any decimal number with that many digits to T and then converting back to the same number of decimal digits will yield the original number. For double, this is 15.
Note.nkmk.me
note.nkmk.me › home › python
Maximum and Minimum float Values in Python | note.nkmk.me
August 11, 2023 - Note that in NumPy, you can explicitly specify the type with the number of bits, such as float32 or float64. NumPy: Cast ndarray to a specific dtype with astype() Use sys.float_info to get detailed information about float. sys.float_info — System-specific parameters and functions — Python 3.11.4 documentation · The sys module is included in the standard library, so no additional installation is required. import sys print(sys.float_info) # sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1) print(type(sys.float_info)) # <class 'sys.float_info'> source: sys_float_info.py ·
Find elsewhere
MathWorks
mathworks.com › matlab › mathematics › elementary math › arithmetic operations
Floating-Point Numbers - MATLAB & Simulink
Find the largest and smallest positive values that can be represented with the double data type by using the realmax and realmin functions, respectively.
Julia Language
docs.julialang.org › en › v1 › manual › integers-and-floating-point-numbers
Integers and Floating-Point Numbers · The Julia Language
julia> (typemin(Float16),typemax(Float16)) (-Inf16, Inf16) julia> (typemin(Float32),typemax(Float32)) (-Inf32, Inf32) julia> (typemin(Float64),typemax(Float64)) (-Inf, Inf) Most real numbers cannot be represented exactly with floating-point numbers, and so for many purposes it is important to know the distance between two adjacent representable floating-point numbers, which is often known as machine epsilon. Julia provides eps, which gives the distance between 1.0 and the next larger representable floating-point value:
Reddit
reddit.com › r/askprogramming › why is the max value of a 32 bit floating point number 3.4 x 10^38?
r/AskProgramming on Reddit: Why is the max value of a 32 bit floating point number 3.4 x 10^38?
August 28, 2021 -

I understand 2^x − 1 gives you the max number for x unsigned bits. But say we're using an IEEE-754 32-bit floating point number. Shouldn't the maximum value be 10^256 × 2^23? Since the mantissa is a 23-bit number and the exponent is 10 raised to the power of an 8-bit number.

Wikipedia however says the max is (2 − 2^−23) × 2^127

where does this come from? Why is the exponent negative?

Top answer
1 of 2
3
Quick correction: the exponent is base 2, not base 10. An exponent of 0xFF would indicate a special value like an infinity or NaN, so the largest exponent would indeed be 127, hence 2^127. As for the mantissa, it should be remembered that it's actually 24 bits. It's just that, because we're working in binary, we know the first digit is always 1 (since the leading digit has to be non-zero, and in binary there's only one other digit: 1), so we don't have to explicitly store it. So it's 23 explicit bits plus the one implicit leading 1 (which is 0 when the exponent is 0x00). Either way, the largest mantissa is one where all the bits are set, so its value would be: 1 (implicit bit) + 1/2 (first explicit bit) + 1/4 (second explicit bit) + ... up until the last explicit bit. This would be roughly 1.99999988079071044921875, but it can be more tersely represented as 2 − 2^−23. The 2^−23, or 1/(2^23), is just the gap between 2 and 1.99999988079071044921875. Multiply the two together, and you get Wikipedia's answer. I suggest you play around with this site. It's pretty handy for exploring the format.
2 of 2
3
but say we're using IEEE-754 32 bit floating-point number

This consists of:

  • 1 sign bit
  • 8 exponent bits
  • an implicit leading 1 mantissa bit (not part of the representation itself)
  • 23 mantissa bits

To represent a positive number, we need a 0 sign bit. The largest exponent value, 11111111₂, is reserved for infinities and NaN values. The next largest, 11111110₂, corresponds to the exponent +127. The largest mantissa value is 1.11111111111111111111111₂. I have included the implicit leading 1 here, even though it's not part of the floating-point representation itself. This corresponds to the value 1.99999988079071044921875. Putting this all together, we get the largest representable 32-bit floating-point number to be (−1)⁰ × 2⁺¹²⁷ × 1.99999988079071044921875 = 340282346638528859811704183484516925440.
Microsoft Learn
learn.microsoft.com › en-us › cpp › c-language › type-float
Type float | Microsoft Learn
The following table shows the minimum and maximum values you can store in variables of each floating-point type. The values listed in this table apply only to normalized floating-point numbers; denormalized floating-point numbers have a smaller minimum value.
gosamples
gosamples.dev › tutorials › the maximum and minimum value of the float types in go
📊 The maximum and minimum value of the float types in Go
April 14, 2022 - The maximum value of the float32 type in Go is 3.40282346638528859811704183484516925440e+38 and you can get this value using the math.MaxFloat32 constant.
Spark By {Examples}
sparkbyexamples.com › home › python › find maximum float value in python
Find Maximum Float Value in Python - Spark By {Examples}
May 31, 2024 -

# Quick examples of finding the maximum float value
import sys
import numpy as np
# Example 1: Find maximum float value using sys.float_info
max_float_value = sys.float_info.max
# Example 2: Inspect all float characteristics via sys.float_info
max_float_value = sys.float_info
# Example 3: Find the maximum float value using the numpy module
max_float_value = np.finfo(np.float64).max
# Example 4: Get the maximum value of float32
max_float32_value = np.finfo(np.float32).max
# Example 5: Get the maximum value of float64
max_float64_value = np.finfo(np.float64).max
W3Schools
w3schools.com › go › go_float_data_type.php
Go Float Data Types
The type of float to choose, depends on the value the variable has to store. This example will result in an error because 3.4e+39 is out of range for float32:
Khronos
khronos.org › opengl › wiki › Small_Float_Formats
Small Float Formats - OpenGL Wiki
August 5, 2023 - Small Float Formats, are floating-point values that use less than the standard 32-bits of precision. An example of these are 16-bit half-floats. This article details how these are encoded and used · We start with a quick review on how 32-bit floating-point numbers are encoded; detailed information ...
The blog at the bottom of the sea
blog.demofox.org › 2017 › 11 › 21 › floating-point-precision
Demystifying Floating Point Precision « The blog at the bottom of the sea
November 22, 2017 - Here’s a table showing the amount of precision you get with each data type at various exponent values. N/A is used when an exponent is out of range for the specific data type. A quick note on the maximum number you can store in floating point numbers, by looking at the half float specifically:
Top answer
1 of 2
9

For 32-bit floating point, the maximum value is shown in Table III:

0.9999998 x 2^127 represented in hex as: mantissa=7FFFFF, exponent=7F.

We can decompose the mantissa/exponent into a (close) decimal value as follows:

7FFFFF <base-16> = 8,388,607 <base-10>. 

There are 23 bits of significance, so we divide 8,388,607 by 2^23.

8,388,607 / 2^23 = 0.99999988079071044921875 (see Table III)

as far as the exponent:

7F <base-16> = 127 <base-10>

and now we multiply the mantissa by 2^127 (the exponent)

8,388,607 / 2^23 * 2^127 = 
8,388,607 * 2^104 = 1.7014116317805962808001687976863 * 10^38

This is the largest 32-bit floating point value because the largest mantissa is used and the largest exponent.

The 48-bit floating point adds 16 bits of lesser-significance mantissa but leaves the exponent the same size. Thus, the max value would be represented in hex as

mantissa=7FFFFFFFFF, exponent=7F.

again, we can compute

7FFFFFFFFF <base-16> = 549,755,813,887 <base-10> 

the max exponent is still 127, but we need to divide by [23+16=39, so:] 2^39. 127-39=88, so just multiply by 2^88:

549,755,813,887 * 2^88 =
1.7014118346015974672186595864716 * 10^38

This is the largest 48-bit floating point value because we used the largest possible mantissa and largest possible exponent.

So, the max values are:

1.7014116317805962808001687976863 * 10^38, for 32-bit, and,
1.7014118346015974672186595864716 * 10^38, for 48-bit

The max value for 48-bit is just slightly larger than for 32-bit, which stands to reason since a few bits are added to the end of the mantissa.

(To be exact the maximum number for the 48-bit format can be expressed as a binary number that consists of 39 1's followed by 88 0's.)

(The smallest is just the negative of this value. The closest to zero without being zero can also easily be computed as per above: use the smallest possible (positive) mantissa:0000001 and the smallest possible exponent: 80 in hex, or -128 in decimal)


FYI

Some floating point formats use an unrepresented hidden 1 bit in the mantissa (this allows for one extra bit of precision in the mantissa, as follows: the first binary digit of all numbers (except 0, or denormals, see below) is a 1, therefore we don't have to store that 1, and we have an extra bit of precision). This particular format doesn't seem to do this.

Other floating point formats allow a denormalized mantissa, which allows representing (positive) numbers smaller than the smallest exponent would otherwise permit, by trading bits of precision for additional (negative) powers of 2. This is easy to support if the format doesn't also use the hidden one bit, a bit harder if it does.


8,388,607 / 2^23 is the value you'd get with mantissa=0x7FFFFF and exponent=0x00. It is not the single bit value but rather the value with a full mantissa and a neutral, or more specifically, a zero exponent.

The reason this value is not directly 8388607, and requires division (by 2^23, and hence is less than what you might expect), is that the implied radix point is in front of the mantissa, rather than after it. So, think +/-.11111111111111111111111 (a sign bit, followed by a radix point, followed by twenty-three 1-bits) for the mantissa, and +/-1111111 (no radix point here, just an integer, in this case, 127) for the exponent.

mantissa = 0x7FFFFF with exponent = 0x7F is the largest value which corresponds to 8388607 * 2 ^ 104, where the 104 comes from 127-23: again, subtracting 23 powers of two because the mantissa has the radix point at the beginning. If the radix point were at the end, then the largest value (0x7FFFFF,0x7F) would indeed be 8,388,607 * 2 ^ 127.

There are several possible ways to define a single-bit value for the mantissa. One is mantissa=0x400000, and the other is mantissa=0x000001. Without considering the radix point or the exponent, the former is 4,194,304 and the latter is 1. With a zero exponent, and considering the radix point, the former is 0.5 (decimal) and the latter is 0.00000011920928955078125. With a maximum (or minimum) exponent, we can compute max and min single-bit values.

(Note that the latter format, where the mantissa has leading zeros, would be considered denormalized in some number formats, and its normalized representation would be 0x400000 with an exponent of -22).

2 of 2
1

You can borrow from how IEEE floating point is laid out for fast comparison: sign, exponent, mantissa. However, in that PDF I see the mantissa and exponent are reversed.

This means that to compare, you first check the sign bit; if that doesn't determine a winner yet, you compare the exponents, and then you compare the mantissas.

If one is positive and the other is negative then the positive is the max.

If both are positive and one exponent is larger then it is the max (if both are negative then it is the min)

Similarly for mantissa.

Quora
quora.com › Why-is-the-maximum-float-number-in-C-3-40282-cdot10-38
Why is the maximum float number in C++ 3.40282 × 10^38?
The answer is really that floating point numbers are stored in an underlying binary format. So, the maximum value in the format would be a floating point number which, pulled apart and analyzed, has 1’s in nearly all positions (except the sign bit, in which 1 indicates negative).