For a given IEEE-754 floating-point number X, if
2^E <= abs(X) < 2^(E+1)
then the distance from X to the next larger representable floating-point number (epsilon) is:
epsilon = 2^(E-52) % For a 64-bit float (double precision)
epsilon = 2^(E-23) % For a 32-bit float (single precision)
epsilon = 2^(E-10) % For a 16-bit float (half precision)
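These epsilon values can be checked directly for doubles in Python (3.9+), where math.ulp returns the spacing from a value to the next larger float. A quick sketch:

```python
import math

# For a double X in [2^E, 2^(E+1)), the spacing to the next float is 2^(E-52).
for E in (0, 10, 52):
    x = 2.0 ** E
    assert math.ulp(x) == 2.0 ** (E - 52)

print(math.ulp(1.0))  # 2^-52 ≈ 2.220446049250313e-16
```

Verifying the single- and half-precision formulas the same way would require 32-bit and 16-bit float types (e.g. numpy's float32/float16), since Python's float is always a double.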
The above equations allow us to compute the following:
For half precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size the number can be is 2^10. Any larger, and the distance between adjacent floating-point numbers is greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size the number can be is 1. Any larger, and the distance between adjacent floating-point numbers is greater than 0.0005.
For single precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size the number can be is 2^23. Any larger, and the distance between adjacent floating-point numbers is greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size the number can be is 2^13. Any larger, and the distance between adjacent floating-point numbers is greater than 0.0005.
For double precision...
If you want an accuracy of +/-0.5 (or 2^-1), the maximum size the number can be is 2^52. Any larger, and the distance between adjacent floating-point numbers is greater than 0.5.
If you want an accuracy of +/-0.0005 (about 2^-11), the maximum size the number can be is 2^42. Any larger, and the distance between adjacent floating-point numbers is greater than 0.0005.
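For the double-precision case, both thresholds can be verified with math.ulp (a stdlib-only sketch; the single- and half-precision thresholds would need numpy's float32/float16 to check the same way):

```python
import math

# Just below 2^52 the spacing between doubles is 0.5 ...
assert math.ulp(2.0 ** 52 - 1) == 0.5
# ... and at 2^52 it grows to 1.0, so +/-0.5 accuracy is lost.
assert math.ulp(2.0 ** 52) == 1.0

# Just below 2^42 the spacing is 2^-11 ≈ 0.000488 < 0.0005 ...
assert math.ulp(2.0 ** 42 - 1) == 2.0 ** -11
# ... and at 2^42 it doubles to 2^-10 ≈ 0.00098 > 0.0005.
assert math.ulp(2.0 ** 42) == 2.0 ** -10
```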
For floating-point integers (I'll give my answer in terms of IEEE double-precision), every integer between 1 and 2^53 is exactly representable. Beyond 2^53, integers that are exactly representable are spaced apart by increasing powers of two. For example:
- Every 2nd integer between 2^53 + 2 and 2^54 can be represented exactly.
- Every 4th integer between 2^54 + 4 and 2^55 can be represented exactly.
- Every 8th integer between 2^55 + 8 and 2^56 can be represented exactly.
- Every 16th integer between 2^56 + 16 and 2^57 can be represented exactly.
- Every 32nd integer between 2^57 + 32 and 2^58 can be represented exactly.
- Every 64th integer between 2^58 + 64 and 2^59 can be represented exactly.
- Every 128th integer between 2^59 + 128 and 2^60 can be represented exactly.
- Every 256th integer between 2^60 + 256 and 2^61 can be represented exactly.
- Every 512th integer between 2^61 + 512 and 2^62 can be represented exactly, and so on.
Integers that are not exactly representable are rounded to the nearest representable integer, so the worst case rounding is 1/2 the spacing between representable integers.
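This spacing pattern can be observed directly in Python, whose float is an IEEE double (note that comparing a Python float against an int is exact, with no implicit rounding of the int):

```python
# Beyond 2^53, only every 2nd integer is representable as a double:
assert float(2 ** 53) + 1.0 == float(2 ** 53)  # 2^53 + 1 rounds back to 2^53
assert float(2 ** 53 + 2) == 2 ** 53 + 2       # 2^53 + 2 is exact

# Beyond 2^54 the spacing doubles again, to 4:
assert float(2 ** 54 + 2) != 2 ** 54 + 2       # rounds to a neighbour
assert float(2 ** 54 + 4) == 2 ** 54 + 4       # exact
```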
It is interesting that in machine learning short floats are used because it is more important to squeeze all the numbers into the RAM of a GPU card than to get 7 significant figures. But I am curious what is feasible with a 7- or 10-bit mantissa. For example, if I want the eigenvectors of a matrix for PCA, I use double precision, since that is an iterative process.
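As a rough rule of thumb, a format with p stored mantissa bits (plus the implicit leading bit) carries about (p + 1) * log10(2) significant decimal digits. A sketch of what the short formats buy, assuming the 10-bit case is IEEE half precision and the 7-bit case is bfloat16:

```python
import math

def decimal_digits(mantissa_bits):
    # p stored mantissa bits plus the implicit leading 1 bit
    return (mantissa_bits + 1) * math.log10(2)

print(round(decimal_digits(52), 1))  # double:   16.0 digits
print(round(decimal_digits(23), 1))  # single:   7.2 digits
print(round(decimal_digits(10), 1))  # half:     3.3 digits
print(round(decimal_digits(7), 1))   # bfloat16: 2.4 digits
```

So an iterative computation like an eigenvector solve, which accumulates rounding error across many steps, has very little headroom in a 3-digit format, which is consistent with reaching for double precision there.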