How do you convert from IEEE 754 Floating Point to decimal without using a calculator?
We aren't allowed calculators in my upcoming exam, but are also required to convert IEEE 754 Floating Points with long mantissas into decimal.
The lecture slides only show examples where it's easy to work out, but I've gotten to a point where I need to convert 0.0000010011101 into decimal and this is simply impossible without a new approach.
I have figured out the equation to get the exact value out of the binary representation, it is: (-1)^sign * 2^(exp - bias) * mantissa
Edit: To get the right mantissa, take the fraction field of the encoding and put the implicit leading 1 in front of it. So for example, if your fraction field is 011 1111...
Then you would do (1 * 2^0) + (0 * 2^-1) + (1 * 2^-2) + (1 * 2^-3) + ...
Keep doing this for all the fraction bits and you'll get your mantissa.
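The steps above can be sketched in a few lines of Python. The fraction string is just the example from the text, padded out for illustration:

```python
# Sketch: compute the significand of a normalised number from its
# fraction field, using the positional weights described above.
fraction_bits = "01111111"  # example fraction field (truncated to 8 bits)

mantissa = 1.0  # the implicit hidden bit contributes 2^0
for i, bit in enumerate(fraction_bits, start=1):
    if bit == "1":
        mantissa += 2.0 ** -i  # fraction bit i has weight 2^-i

print(mantissa)  # 1.49609375
```

Each fraction bit after the binary point halves in weight, which is why long fraction fields are tedious to sum by hand.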
Instead of calculating all those bits behind the binary point, which is a heck of a job IMO, just scale everything by 2^23 and subtract 23 more from the exponent to compensate.
This is explained in my article about floating point for Delphi.
First decode:
0 - 1000 1101 - 011 1111 1100 0000 0000 0000
Insert hidden bit:
0 - 1000 1101 - 1011 1111 1100 0000 0000 0000
In hex:
0 - 8D - BFC000
0x8D = 141, minus bias of 127, that becomes 14.
I like to scale things, so the calculation is:
sign * full_mantissa * 2^(exp - bias - len)
where full_mantissa is the mantissa, including the hidden bit, as an integer; bias = 127 and len = 23 (the number of mantissa bits).
So then it becomes:
1 * 0xBFC000 * 2^(14-23) = 0xBFC000 / 0x200 = 0x5FE0 = 24544
because 2^(14-23) = 2^-9 = 1 / 2^9 = 1 / 0x200.
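The whole scaled-integer decode can be sketched in Python. The packed constant below is an assumption on my part: it is the example pattern 0 - 1000 1101 - 011 1111 1100 0000 0000 0000 assembled into one 32-bit integer.

```python
# Sketch: decode a binary32 value using the scaled-integer method above.
bits = 0x46BFC000  # 0 | 10001101 | 01111111100000000000000 (assumed packing)

sign = bits >> 31                    # 0 -> positive
biased_exp = (bits >> 23) & 0xFF     # 0x8D = 141
fraction = bits & 0x7FFFFF           # 0x3FC000

full_mantissa = fraction | (1 << 23)  # insert the hidden bit -> 0xBFC000
exp = biased_exp - 127 - 23           # subtract the bias and the 23-bit scaling

value = (-1) ** sign * full_mantissa * 2.0 ** exp
print(value)  # 24544.0
```

Because full_mantissa is an integer and the remaining factor is a power of two, the by-hand arithmetic reduces to one shift or division, exactly as in the 0xBFC000 / 0x200 step above.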
Given how the question is posed, it seems that you need to do this as a one-off. If that's the case, I would simply use the online IEEE-754 calculator: link.
It not only converts the number to the decimal floating-point representation, it also shows all the relevant bit patterns.
In the question you don't state the endianness of your 32-bit int, so you might need to swap the byte order before entering the number into the calculator.
Take a close look at the result of the calculator aix pointed to in his answer:
Binary32: AEF00000
Status Sign [1] Exponent [8] Significand [23]
Normal 1 (-) 01011101 (-34) 1.11100000000000000000000 (1.875)
Write out the full binary pattern for 0xAEF00000: 10101110111100000000000000000000.
Split this according to the pattern the calculator shows: 1 01011101 11100000000000000000000.
You now have the sign bit, the biased exponent value, and the significand without the implicit leading bit. This should be enough to make a start on interpreting the value.
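As a check on the by-hand split, here is a minimal sketch that extracts the same three fields from 0xAEF00000 and finishes the interpretation. The variable names are mine, not from the calculator:

```python
# Sketch: split 0xAEF00000 into sign / exponent / significand fields
# and interpret them, matching the calculator's breakdown.
bits = 0xAEF00000

sign = bits >> 31                     # 1 -> negative
biased_exp = (bits >> 23) & 0xFF      # 0b01011101 = 93
fraction = bits & 0x7FFFFF            # 0b11100000000000000000000

exponent = biased_exp - 127           # 93 - 127 = -34
significand = 1 + fraction / 2 ** 23  # 1.875, with the implicit leading bit

value = (-1) ** sign * significand * 2 ** exponent
print(value)  # a small negative number, -1.875 * 2^-34
```

The printed fields line up with the calculator's row: sign 1 (-), exponent -34, significand 1.875.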