Answer from PM 2Ring on Stack Overflow:

The IEEE 754-2008 16-bit base-2 format, aka binary16, doesn't give you a lot of precision. What do you expect from 16 bits? :) 1 bit is the sign bit, 5 bits are used for the exponent, and that leaves 10 bits to store the normalised 11-bit mantissa, so anything > 2**11 == 2048 has to be quantized.
According to Wikipedia, integers between 4097 and 8192 round to a multiple of 4, and integers between 8193 and 16384 round to a multiple of 8.
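A quick way to see this quantization in action (a minimal sketch using NumPy's float16, which implements binary16):

import numpy as np

# binary16 stores a 10-bit significand (11 bits with the implicit leading 1),
# so above 2048 the representable integers thin out: spacing 2 up to 4096,
# spacing 4 up to 8192, spacing 8 up to 16384.
for n in (2049, 4099, 8193):
    print("%d -> %d" % (n, int(np.float16(n))))
# 2049 -> 2048  (spacing 2; the tie rounds to the even significand)
# 4099 -> 4100  (spacing 4)
# 8193 -> 8192  (spacing 8)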
TensorFlow requires float16 and raises an error for float32. You can use what Reti43 suggested:
np.float16(a)
Out[102]: array([8192.], dtype=float16)
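If a is an existing NumPy array, a.astype(np.float16) performs the same conversion. A minimal sketch, assuming a holds a value that lands on 8192 in binary16:

import numpy as np

a = np.array([8193.0])             # float64 by default
print(np.float16(a))               # array([8192.], dtype=float16)
print(a.astype(np.float16))        # equivalent: array([8192.], dtype=float16)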
I'm surprised that a useless reply has been upvoted so highly. I know that moderators ask that the highest-voted answer be marked as the best one, but the question author is not obliged to do this. There are a number of people here who just collect points and do not care about actually answering the request; they might even upvote themselves under different names.
The problem is that you do not do any type conversion of the numpy array. You calculate a float32 value and put it as an entry into a float64 numpy array; numpy then converts it right back to float64.
Try something like this:
import numpy as np

a = np.zeros(4, dtype="float64")
print a.dtype
print type(a[0])
a = np.float32(a)  # returns a new float32 copy of the whole array
print a.dtype
print type(a[0])
The output (tested with Python 2.7):
float64
<type 'numpy.float64'>
float32
<type 'numpy.float32'>
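The pitfall described above can also be seen directly; a short sketch showing that a float32 value assigned into a float64 array is silently cast back (the float32 rounding error is kept, but the dtype is not):

import numpy as np

a = np.zeros(4, dtype="float64")
a[0] = np.float32(0.1)    # the scalar is upcast to float64 on assignment
print(a[0].dtype)         # float64
print(a[0])               # 0.10000000149011612 -- float32 rounding, float64 storage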
In your case, a is the array tree.tree_.threshold.
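For concreteness, a minimal sketch (assuming a fitted scikit-learn decision tree) showing that np.float32 produces a converted copy of the thresholds rather than changing the tree in place:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier().fit([[0.0], [1.0]], [0, 1])
print(tree.tree_.threshold.dtype)                 # float64
thresholds32 = np.float32(tree.tree_.threshold)   # a float32 copy
print(thresholds32.dtype)                         # float32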
Actually, I tried hard but was not able to do this, as 'sklearn.tree._tree.Tree' objects are not writable.
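For illustration, a sketch of what that failure looks like (the exact wording of the error may vary by scikit-learn version):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier().fit([[0.0], [1.0]], [0, 1])
try:
    # Tree is a Cython extension type; its attributes are read-only
    tree.tree_.threshold = np.float32(tree.tree_.threshold)
except AttributeError as e:
    print(e)  # e.g. attribute 'threshold' of 'sklearn.tree._tree.Tree' objects is not writable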
This causes a precision issue while generating a PMML file, so I raised a bug over there, and they provided an updated solution by not converting the values to float64 internally.
For more info, you can follow this link: Precision Issue