Python's standard float type is a C double: http://docs.python.org/2/library/stdtypes.html#typesnumeric
NumPy's standard numpy.float is the same, and is also the same as numpy.float64.
Data-type-wise, NumPy floats and built-in Python floats are the same. However, comparison operations on NumPy floats return np.bool_ objects rather than native bool, and a np.bool_ is never the same object as True, so val is True always evaluates to False. Example below:
In [1]: import numpy as np
...: an_np_float = np.float32(0.3)
...: a_normal_float = 0.3
...: print(a_normal_float, an_np_float)
...: print(type(a_normal_float), type(an_np_float))
0.3 0.3
<class 'float'> <class 'numpy.float32'>
NumPy floats can arise as the scalar output of array operations. If you aren't checking the data type, it is easy to mistake a NumPy float for a native float.
In [2]: criterion_fn = lambda x: x <= 0.5
...: criterion_fn(a_normal_float), criterion_fn(an_np_float)
Out[2]: (True, True)
Even the boolean results look correct. However, the result for the NumPy float isn't a native boolean datatype, so an identity comparison with True fails:
In [3]: criterion_fn(a_normal_float) is True, criterion_fn(an_np_float) is True
Out[3]: (True, False)
In [4]: type(criterion_fn(a_normal_float)), type(criterion_fn(an_np_float))
Out[4]: (bool, numpy.bool_)
According to this GitHub thread, criterion_fn(an_np_float) == True will evaluate properly, but explicit comparison to True goes against the PEP 8 style guide.
Instead, extract the native float from the result of NumPy operations. You can call an_np_float.item() to do it explicitly (ref: this SO post) or simply pass the value through float().
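A minimal sketch of both conversions (variable names reuse the session above):

```python
import numpy as np

an_np_float = np.float32(0.3)
criterion_fn = lambda x: x <= 0.5

# Explicit: .item() returns the closest native Python object
native = an_np_float.item()
print(type(native))                              # <class 'float'>
print(criterion_fn(native) is True)              # True

# Implicit: float() works the same way
print(criterion_fn(float(an_np_float)) is True)  # True

# Or coerce the numpy.bool_ result instead of the operand
print(bool(criterion_fn(an_np_float)) is True)   # True
```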
Will numpy.float32 help?
>>> PI = 3.1415926535897
>>> print PI*PI
9.86960440109
>>> PI32=numpy.float32(PI)
>>> print PI32*PI32
9.86961
If you want to do math operations in float32, converting the operands to float32 first may help you.
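A small sketch of that: when every operand is a float32, the result stays float32, so rounding happens at 32-bit precision throughout, and float() widens the result back to a native Python float.

```python
import numpy as np

a = np.float32(3.1415926535897)
b = np.float32(3.1415926535897)

prod = a * b        # float32 * float32 stays float32
print(type(prod))   # <class 'numpy.float32'>
print(float(prod))  # widen back to a native Python float (a C double)
```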
Use numpy.ndarray.astype:
import numpy as np
arr_f64 = np.array([1.0000123456789, 2.0000123456789, 3.0000123456789], dtype=np.float64)
arr_f32 = arr_f64.astype(np.float32)
Pay attention to precision:
np.set_printoptions(precision=16)
print("arr_f64 = ", arr_f64)
print("arr_f32 = ", arr_f32)
gives
arr_f64 = [1.0000123456789 2.0000123456789 3.0000123456789]
arr_f32 = [1.0000124 2.0000124 3.0000124]
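A quick sketch of why roughly 7 significant decimal digits survive the cast: np.finfo reports the machine epsilon and the number of reliable decimal digits for each float type.

```python
import numpy as np

for t in (np.float32, np.float64):
    info = np.finfo(t)
    # eps is the gap between 1.0 and the next representable value;
    # precision is the number of reliable decimal digits
    print(t.__name__, "eps =", info.eps, "precision =", info.precision)
```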
The numbers compare equal because 58682.7578125 can be exactly represented in both 32 and 64 bit floating point. Let's take a close look at the binary representation:
32 bit: 01000111011001010011101011000010
sign : 0
exponent: 10001110
fraction: 11001010011101011000010
64 bit: 0100000011101100101001110101100001000000000000000000000000000000
sign : 0
exponent: 10000001110
fraction: 1100101001110101100001000000000000000000000000000000
They have the same sign, the same exponent, and the same fraction - the extra bits in the 64 bit representation are filled with zeros.
No matter which way they are cast, they will compare equal. If you try a different number such as 58682.7578124 you will see that the representations differ at the binary level; 32-bit loses more precision and they won't compare equal.
(It's also easy to see in the binary representation that a float32 can be upcast to a float64 without any loss of information. That is what numpy is supposed to do before comparing both.)
import numpy as np

a = 58682.7578125

# Reinterpret the bits of each float as an unsigned integer of the same width
u32 = np.array(a, dtype=np.float32).view(dtype=np.uint32)
u64 = np.array(a, dtype=np.float64).view(dtype=np.uint64)

# IEEE 754 single precision: 1 sign bit, 8 exponent bits, 23 fraction bits
b32 = bin(int(u32))[2:].zfill(32)  # pad with leading 0s to 32 bits
print('32 bit: ', b32)
print('sign : ', b32[0])
print('exponent: ', b32[1:9])
print('fraction: ', b32[9:])
print()

# IEEE 754 double precision: 1 sign bit, 11 exponent bits, 52 fraction bits
b64 = bin(int(u64))[2:].zfill(64)  # pad with leading 0s to 64 bits
print('64 bit: ', b64)
print('sign : ', b64[0])
print('exponent: ', b64[1:12])
print('fraction: ', b64[12:])
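The equality claim can also be checked directly; a quick sketch: the value with the terminating binary fraction compares equal across widths, while the slightly different decimal does not.

```python
import numpy as np

exact = 58682.7578125    # .7578125 has a terminating binary expansion
inexact = 58682.7578124  # rounds differently in 32 and 64 bit

print(np.float32(exact) == np.float64(exact))      # True
print(np.float32(inexact) == np.float64(inexact))  # False
```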
The same value is stored internally; it just doesn't show all of its digits with a plain print.
Try:
print("%0.8f" % float_32)
See related Printing numpy.float64 with full precision
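For example (a sketch with an illustrative value): formatting with an explicit digit count reveals more of the stored value, while the default print shows only the shortest form that round-trips.

```python
import numpy as np

float_32 = np.float32(0.1)
print(float_32)             # shortest round-tripping repr
print("%0.8f" % float_32)   # 0.10000000
print("%0.20f" % float_32)  # exposes the float32 rounding error
```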
The NumPy package has a float32 type.
If numpy (the excellent suggestion of other answers) is inapplicable for you (e.g. because you're in an environment that doesn't allow arbitrary third-party extensions), the array module in the Python standard library is fine too: type code 'f' gives you 32-bit floats. Beyond those and the usual double-precision floats, there isn't much in the way of "other floating point formats". What did you have in mind? (For example, gmpy offers GMP's modest support for floats with much longer, arbitrary bit sizes, but it is modest indeed; e.g., no trig functions.)
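As a sketch of the array-module approach: type code 'f' stores each element in 4 bytes, and reading an element back gives the float32-rounded value as a regular Python float.

```python
from array import array

floats = array('f', [0.1, 0.2, 0.3])  # each element stored as a 32-bit float
print(floats.itemsize)                # 4 bytes per element
print(floats[0])                      # the float32-rounded value, e.g. 0.10000000149011612
```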
If you have the raw bytes (e.g. read from memory, from file, over the network, ...) you can use struct for this:
>>> import struct
>>> struct.unpack('>f', b'\x3f\x9a\xec\xb5')[0]
1.2103487253189087
Here, \x3f\x9a\xec\xb5 are your input registers, 16282 (hex 0x3f9a) and 60597 (hex 0xecb5), expressed as a byte string. The > specifies big-endian byte order.
So depending how you get the register values, you may be able to use this method (e.g. by converting your input integers to byte strings). You can use struct for this, too; this is your second example:
>>> raw = struct.pack('>HH', 16282, 1147) # from two unsigned shorts
>>> struct.unpack('>f', raw)[0] # to one float
1.2032617330551147
The way you're converting the two ints makes implicit assumptions about endianness that I believe are wrong.
So, let's back up a step. You know that the first argument is the most significant word, and the second is the least significant word. So, rather than try to figure out how to combine them into a hex string in the appropriate way, let's just do this:
import struct
import sys
first = sys.argv[1]
second = sys.argv[2]
sample = int(first) << 16 | int(second)
Now we can just convert like this:
def convert(i):
    s = struct.pack('=i', i)
    return struct.unpack('=f', s)[0]

print(convert(sample))
And if I try it on your inputs:
$ python floatify.py 16282 60597
1.21034872532
$ python floatify.py 16282 1147
1.20326173306
I was messing around with float32 numbers and got values like 1.4e-44. I'm not good at scientific notation, but I printed the variables with 100 digits shown instead of scientific notation, and it looks something like .000000000000000000000000125321 (not exact, just an example).
But how does that make sense if float32 only allows 6-7 digits? Wouldn't that mean only the 0s are valid?
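The 6-7 digits refer to *significant* digits, not digits after the decimal point, so the leading zeros don't count against the budget. Values around 1.4e-44 are below the smallest normal float32 and fall in the subnormal range, where even fewer digits are reliable. A quick sketch (the specific values are illustrative):

```python
import numpy as np

# 6-7 digits means significant digits: the leading zeros don't count
x = np.float32(1.25321e-25)
print(x)

# Values near 1.4e-44 sit below the smallest *normal* float32 ...
print(np.finfo(np.float32).tiny)  # smallest normal, 2**-126

# ... so they are subnormals; the smallest positive float32 is 2**-149
print(np.float32(2.0 ** -149))
```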