The astype method works well:

>>> numpy.float64('6374.345407799015').astype(str)
'6374.345407799015'
Answer from Rahiel Kasim on Stack Overflow
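A quick check, assuming NumPy 1.14 or later (shortest round-trip scalar printing), that the astype(str) call above and a plain str() produce the same digits:

```python
import numpy as np

x = np.float64('6374.345407799015')

# astype(str) on a scalar returns a NumPy string scalar
s1 = x.astype(str)
# str() produces the same shortest round-trip representation
s2 = str(x)

print(s1)  # 6374.345407799015
print(s2)  # 6374.345407799015
```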
NumPy
numpy.org › doc › stable › user › basics.types.html
Data types — NumPy v2.4 Manual
>>> np.power(100, 100, dtype=np.int64)  # Incorrect even with 64-bit int
0
>>> np.power(100, 100, dtype=np.float64)
1e+200
Many functions in NumPy, especially those in numpy.linalg, involve floating-point arithmetic, which can introduce small inaccuracies due to the way computers represent decimal numbers. For instance, when performing basic arithmetic operations involving floating-point numbers:
>>> 0.3 - 0.2 - 0.1  # This does not equal 0 due to floating-point precision
-2.7755575615628914e-17
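Because of such rounding, floating-point results should be compared with a tolerance rather than with ==; a minimal sketch using np.isclose (the atol value here is an arbitrary choice for illustration):

```python
import numpy as np

result = 0.3 - 0.2 - 0.1
print(result)  # a tiny nonzero value, about -2.8e-17

# Direct equality fails because of the rounding error:
print(result == 0.0)  # False

# A tolerance-based comparison treats it as zero:
print(np.isclose(result, 0.0, atol=1e-12))  # True
```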
Discussions

which data type should I use for most accurate calculations? float decimal or python?
Floats are great but unreliable: they have small imprecisions that can quickly add up.
r/learnpython
March 25, 2023
how set numpy floating point accuracy? - Stack Overflow
float64. Since just printing the number does not show any difference with the Python print function, I added np.set_printoptions(precision=70)
stackoverflow.com
python - How to set the precision on str(numpy.float64)? - Stack Overflow
I need to write a couple of NumPy floats to a CSV file which has additional string content, therefore I don't use savetxt etc. With numpy.set_printoptions() I can only define the print behaviour, bu...
stackoverflow.com
format - How to control the display precision of a NumPy float64 scalar? - Stack Overflow
I'm writing a teaching document that uses lots of examples of Python code and includes the resulting numeric output. I'm working from inside IPython and a lot of the examples use NumPy. I want to ...
stackoverflow.com
Medium
medium.com › @amit25173 › understanding-numpy-float64-a300ac9e096a
Understanding numpy.float64. If you think you need to spend $2,000… | by Amit Yadav | Medium
February 8, 2025 - The main difference lies in precision. float64 uses 64 bits, offering more precision (about 15–17 decimal places) than float32, which uses only 32 bits and is limited to around 7 decimal places.
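The precision gap described above can be observed directly; a small sketch printing 0.1 at both widths along with the decimal precision np.finfo reports for each type:

```python
import numpy as np

# The decimal 0.1 is stored with different accuracy at each width
x32 = np.float32(0.1)
x64 = np.float64(0.1)

print(f"{float(x32):.20f}")  # 0.10000000149011611938
print(f"{float(x64):.20f}")  # 0.10000000000000000555

# finfo reports the approximate number of reliable decimal digits
print(np.finfo(np.float32).precision)  # 6
print(np.finfo(np.float64).precision)  # 15
```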
NumPy
numpy.org › doc › 1.22 › user › basics.types.html
Data types — NumPy v1.22 Manual
Whether this is possible in numpy depends on the hardware and on the development environment: specifically, x86 machines provide hardware floating-point with 80-bit precision, and while most C compilers provide this as their long double type, MSVC (standard for Windows builds) makes long double ...
Top answer
1 of 2

Do you care about the actual precision of the result, or about getting the exact same digits back from your two calculations?

If you just want the same digits, you could use np.around() to round the results to some appropriate number of decimal places. However, by doing this you'll only reduce the precision of the result.

If you actually want to compute the result more precisely, you could try using the np.longdouble type for your input array, which, depending on your architecture and compiler, might give you an 80- or 128-bit floating point representation, rather than the standard 64-bit np.double*.

You can compare the approximate number of decimal places of precision using np.finfo:

print(np.finfo(np.double).precision)
# 15

print(np.finfo(np.longdouble).precision)
# 18

Note that not all numpy functions will support long double - some will down-cast it to double.


*However, some compilers (such as Microsoft Visual C++) will always treat long double as synonymous with double, in which case there would be no difference in precision between np.longdouble and np.double.
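Since the width of np.longdouble varies by platform, a runtime check along the lines of the finfo comparison above can tell whether longdouble actually buys anything on the machine at hand (on MSVC builds both report 15):

```python
import numpy as np

# 15 decimal digits for double everywhere; longdouble reports 18 on
# typical x86 Linux builds but only 15 where it aliases double (MSVC)
double_prec = np.finfo(np.double).precision
long_prec = np.finfo(np.longdouble).precision

extra = long_prec > double_prec
print("longdouble adds precision on this platform:", extra)
```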

2 of 2

In normal NumPy use the numbers are double-precision, which means the accuracy will be less than 16 significant digits. Here is a solved question that deals with the same problem ...

If you need to increase the accuracy, you can use symbolic computation ... The library mpmath ... is quite a good one. The advantage is that you can use limitless precision. However, calculations are slower than what numpy can do.

Here is an example:

# The library mpmath is a good solution
>>> import sympy as smp
>>> import mpmath as mp

>>> mp.mp.dps = 50  # Computation precision is 50 digits
# Notice that the difference between x and y is in the digit before last (47th)
>>> x = mp.mpf("0.910221324013388510820732335560023784637451171875")
>>> y = mp.mpf("0.910221324013388510820732335560023784637451171865")
>>> x - y  # Approximately 1e-47, as the inputs differ in the 47th digit
mpf('1.000014916280995001003481719184726944958705912691304e-47')

You can't do better with numpy alone, but you can calculate exponentials with better accuracy:

>>> smp.exp(x).evalf(20)
2.4848724344693696167

Note that for SymPy versions after 0.7.6, mpmath is no longer bundled with SymPy but is a dependency instead. This means that in newer SymPy versions the functions that used to live under sympy.mpmath are available directly from the mpmath package.

Find elsewhere
Sling Academy
slingacademy.com › article › understanding-numpy-float64-type-5-examples
Understanding numpy.float64 type (5 examples) - Sling Academy
Before diving into complex examples, let’s start with the basics. The numpy.float64 data type represents a double-precision floating-point number, which can store significantly larger (or smaller) numbers than Python’s standard float type, with greater precision.
Numba
numba.pydata.org › numba-doc › 0.30.0 › reference › fpsemantics.html
2.9. Floating-point pitfalls — Numba 0.30.0 documentation
Numpy will most often return a float64 as a result of a computation with mixed integer and floating-point operands (a typical example is the power operator **). Numba by contrast will select the highest precision amongst the floating-point operands, so for example float32 ** int32 will return ...
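NumPy's side of this promotion rule can be verified with np.result_type; a small sketch showing that mixing float32 with int32 promotes to float64 (float32 cannot represent every int32 value exactly):

```python
import numpy as np

# Type promotion for mixed float32/int32 operands
print(np.result_type(np.float32, np.int32))  # float64

a = np.array([2.0], dtype=np.float32)
b = np.array([3], dtype=np.int32)
print((a ** b).dtype)  # float64
```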
NumPy
numpy.org › doc › stable › reference › generated › numpy.finfo.html
numpy.finfo — NumPy v2.4 Manual
January 31, 2021 - For longdouble, the representation varies across platforms. On most platforms it is IEEE 754 binary128 (quad precision) or binary64-extended (80-bit extended precision). On PowerPC systems, it may use the IBM double-double format (a pair of float64 values), which has special characteristics ...
Top answer
1 of 1

IPython has formatters (core/formatters.py) which contain a dict that maps a type to a format method. There seems to be some knowledge of NumPy in the formatters but not for the np.float64 type.

There are a bunch of formatters, for HTML, LaTeX etc. but text/plain is the one for consoles.

We first get the IPython formatter for console text output

plain = get_ipython().display_formatter.formatters['text/plain']

and then set a formatter for the float64 type, we use the same formatter as already exists for float since it already knows about %precision

plain.for_type(np.float64, plain.lookup_by_type(float))

Now

In [26]: a = float(1.23456789)

In [28]: b = np.float64(1.23456789)

In [29]: %precision 3
Out[29]: '%.3f'

In [30]: a
Out[30]: 1.235

In [31]: b
Out[31]: 1.235

In the implementation I also found that %precision calls np.set_printoptions() with a suitable format string. I didn't know it did this, and it is potentially problematic if the user has already set print options themselves. Following the example above

In [32]: c = np.r_[a, a, a]

In [33]: c
Out[33]: array([1.235, 1.235, 1.235])

we see it is doing the right thing for array elements.

I can do this formatter initialisation explicitly in my own code, but a better fix might be to modify IPython's core/formatters.py at line 677

    @default('type_printers')
    def _type_printers_default(self):
        d = pretty._type_pprinters.copy()
        d[float] = lambda obj,p,cycle: p.text(self.float_format%obj)
        # suggested "fix"
        if 'numpy' in sys.modules:
            d[numpy.float64] = lambda obj,p,cycle: p.text(self.float_format%obj)
        # end suggested fix
        return d

to also handle np.float64 here if NumPy is included. Happy for feedback on this; if I feel brave I might submit a PR.

NumPy
numpy.org › doc › 2.3 › reference › arrays.scalars.html
Scalars — NumPy v2.3 Manual
Double-precision floating-point number type, compatible with Python float and C double. ... numpy.float64: 64-bit precision floating-point number type: sign bit, 11 bits exponent, 52 bits mantissa.
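The bit layout quoted above (sign bit, 11 exponent bits, 52 mantissa bits) can be inspected by reinterpreting the raw bytes of a float64 as a 64-bit unsigned integer; a sketch using a one-element array view:

```python
import numpy as np

# Reinterpret the 8 bytes of a float64 as an unsigned 64-bit integer
bits = int(np.array([1.0], dtype=np.float64).view(np.uint64)[0])

sign     = bits >> 63              # 1 sign bit
exponent = (bits >> 52) & 0x7FF    # 11 exponent bits, biased by 1023
mantissa = bits & ((1 << 52) - 1)  # 52 mantissa (fraction) bits

# 1.0 is stored as +1.0 x 2**(1023 - 1023) with an all-zero fraction
print(sign, exponent, mantissa)  # 0 1023 0
```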
Python⇒Speed
pythonspeed.com › articles › float64-float32-precision
The problem with float32: you only get 16 million values
February 1, 2023 - You're still limited to 2²⁴ positive values and the same number of negative values, centered around 0. The number of values at a given level of precision cannot be changed! As a result, the highest value is cut in half to 2²³, to make room for ...
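The 2²⁴ limit is easy to demonstrate: float32's 24-bit significand means consecutive integers stop being representable past 16,777,216. A short sketch:

```python
import numpy as np

# 2**24 is the last point where every integer is exactly representable
print(float(np.float32(2**24)))      # 16777216.0  (exact)
print(float(np.float32(2**24 + 1)))  # 16777216.0  (rounded; the +1 is lost)
print(float(np.float32(2**24 + 2)))  # 16777218.0  (next representable value)
```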
NumPy
numpy.org › doc › stable › reference › generated › numpy.format_float_positional.html
numpy.format_float_positional — NumPy v2.4 Manual
If True, use a digit-generation strategy which gives the shortest representation which uniquely identifies the floating-point number from other values of the same type, by judicious rounding. If precision is given, fewer digits than necessary can be printed, or if min_digits is given, more can ...
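For the CSV-writing question above, np.format_float_positional gives per-value control without touching global print options; a brief sketch of the unique (shortest round-trip) and fixed-precision modes:

```python
import numpy as np

x = np.float64(0.1)

# Default unique=True: shortest string that round-trips to the same float64
print(np.format_float_positional(x))  # 0.1

# unique=False with an explicit precision forces that many fractional digits
print(np.format_float_positional(x, precision=20, unique=False))
# 0.10000000000000000555
```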
GitHub
github.com › numpy › numpy › issues › 5272
Data type precision problems? · Issue #5272 · numpy/numpy
November 11, 2014 - Hi folks, I've just been investigating some type conversion and came across what appears to be unusual behaviour; I would have expected these statements to more faithfully print the values at their respective precisions:
>>> import numpy
>>> numpy.version.version
'1.9.0'
>>> a = numpy.array([3991.86795711963], dtype='float64')
>>> print a
[ 3991.86795712]
>>> print numpy.float32(a)
[ 3991.86791992]
>>> print numpy.float64(a)
3991.86795712  # Why does this print statement look different to the above (float32) - no brackets?
>>> a = numpy.array([3991.86795711963], dtype='float128')
>>> print numpy.float128(a)
[ 3991.868]  # Why is this truncated when it should have double the precision of float64?
Author   durack1
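The truncation the issue reports comes from NumPy's print precision (by default at most 8 fractional digits for arrays), not from the stored value; a sketch, assuming modern (NumPy ≥ 1.14) array printing:

```python
import numpy as np

a = np.array([3991.86795711963], dtype='float64')

# Default array printing caps the fractional digits shown:
print(np.array2string(a, precision=8))   # [3991.86795712]

# Raising the print precision reveals the full stored value:
print(np.array2string(a, precision=15))  # [3991.86795711963]

# The underlying float64 is unchanged either way:
print(float(a[0]) == 3991.86795711963)   # True
```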
Crumb
crumb.sh › 3ERVuUd3iaD
NumPy float types: a demonstration of precision - Python 3 code example - crumb.sh
The larger the number of allowed bits, the more precision our array’s elements will have. E.g., np.float16 will use 16 bits (two bytes), while np.float64 takes up 64 bits (8 bytes).
NumPy
numpy.org › doc › 2.2 › reference › arrays.scalars.html
Scalars — NumPy v2.2 Manual
Double-precision floating-point number type, compatible with Python float and C double. ... numpy.float64: 64-bit precision floating-point number type: sign bit, 11 bits exponent, 52 bits mantissa.