Answer from imbr on Stack Overflow
Top answer
1 of 4
57

1. Use numpy.gradient (best option)

Most people want this. It is NumPy's built-in finite-difference approach (2nd-order accurate), and its output has the same shape as the input array.

It uses second-order accurate central differences at the interior points and first- or second-order accurate one-sided (forward or backward) differences at the boundaries. The returned gradient therefore has the same shape as the input array.

2. Use numpy.diff (you probably don't want this)

If you really want something roughly twice as inaccurate: numpy.diff is only 1st-order accurate, and its output is one element shorter than the input. It is, however, faster than numpy.gradient (based on some small tests I ran).
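To see both drawbacks concretely, here is a minimal sketch using the same y and dx as the constant-spacing example below:

```python
import numpy as np

dx = 0.1
y = [1, 2, 3, 4, 4, 5, 6]

# Forward differences: 1st-order accurate, and the output is one
# element shorter than the input (6 values for 7 inputs).
result = np.diff(y) / dx
# array([10., 10., 10.,  0., 10., 10.])
```

Compare with np.gradient's result of the same length as y shown below.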

For constant spacing between x samples

import numpy as np 
dx = 0.1; y = [1, 2, 3, 4, 4, 5, 6] # dx constant
np.gradient(y, dx) # dy/dx 2nd order accurate
array([10., 10., 10.,  5.,  5., 10., 10.])

For irregular spacing between x samples

(this is the case asked about in the question)

import numpy as np
x = [.1, .2, .5, .6, .7, .8, .9] # dx varies
y = [1, 2, 3, 4, 4, 5, 6]
np.gradient(y, x) # dy/dx 2nd order accurate
array([10., 8.33333333, 8.33333333, 5., 5., 10., 10.])

What are you trying to achieve?

numpy.gradient provides a 2nd-order and numpy.diff a 1st-order finite-difference scheme, and numpy.gradient also handles non-uniform grids. But if you are doing numerical differentiation, a finite-difference formulation tailored to your problem may serve you better: you can reach much higher accuracy (8th order or more, if you need it), well beyond numpy.gradient.
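As a sketch of what a higher-order scheme looks like, here is the standard 4th-order central-difference stencil on a uniform grid (this is not something np.gradient provides, and the helper name is my own):

```python
import numpy as np

def d1_4th_order(f, dx):
    """First derivative via the 4th-order central-difference stencil
    (f[i-2] - 8*f[i-1] + 8*f[i+1] - f[i+2]) / (12*dx).
    Returns values for interior points only (2 dropped at each end)."""
    f = np.asarray(f, dtype=float)
    return (f[:-4] - 8 * f[1:-3] + 8 * f[3:-1] - f[4:]) / (12 * dx)

x = np.linspace(0, 2 * np.pi, 101)
dx = x[1] - x[0]
# Compare against np.gradient on the same interior points
err4 = np.max(np.abs(d1_4th_order(np.sin(x), dx) - np.cos(x[2:-2])))
err2 = np.max(np.abs(np.gradient(np.sin(x), dx)[2:-2] - np.cos(x[2:-2])))
# err4 is orders of magnitude smaller than err2 on the same grid
```

The trade-off is a wider stencil, so the boundary treatment needs its own one-sided formulas.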

2 of 4
25

use numpy.gradient()

Please be aware that there are more advanced ways to calculate the numerical derivative than simply using diff. I would suggest using numpy.gradient, as in this example.

import numpy as np
from matplotlib import pyplot as plt

# we sample a sin(x) function
dx = np.pi / 10
x = np.arange(0, 2 * np.pi, dx)

# we calculate the derivative with np.gradient
plt.plot(x, np.gradient(np.sin(x), dx), '-*', label='approx')

# we compare it with the exact first derivative, i.e. cos(x)
plt.plot(x, np.cos(x), label='exact')
plt.legend()
plt.show()
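If you prefer a numerical check over a plot, the same comparison reduces to a maximum-error figure (a small sketch using the sampling above):

```python
import numpy as np

dx = np.pi / 10
x = np.arange(0, 2 * np.pi, dx)
approx = np.gradient(np.sin(x), dx)
max_err = np.max(np.abs(approx - np.cos(x)))
# The largest error sits at the boundaries, where np.gradient falls
# back to one-sided differences; the interior error is much smaller.
```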
๐ŸŒ
NumPy
numpy.org โ€บ doc โ€บ 2.1 โ€บ reference โ€บ generated โ€บ numpy.gradient.html
numpy.gradient โ€” NumPy v2.1 Manual
With a similar procedure the forward/backward approximations used for boundaries can be derived. ... Quarteroni A., Sacco R., Saleri F. (2007) Numerical Mathematics (Texts in Applied Mathematics). New York: Springer. ... Durran D. R. (1999) Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. New York: Springer. ... Fornberg B. (1988) Generation of Finite Difference Formulas on Arbitrarily Spaced Grids, Mathematics of Computation 51, no. 184 : 699-706. PDF. ... >>> import numpy as np >>> f = np.array([1, 2, 4, 7, 11, 16]) >>> np.gradient(f) array([1.
๐ŸŒ
Medium
medium.com โ€บ @whyamit404 โ€บ understanding-derivatives-with-numpy-e54d65fcbc52
Understanding Derivatives with NumPy | by whyamit404 | Medium
February 8, 2025 - In NumPy, we donโ€™t have a dedicated function for derivatives. Instead, we use np.gradient(). This function calculates the derivative using numerical differentiation. It estimates the rate of change of an array by looking at the differences between adjacent points.
๐ŸŒ
NumPy
numpy.org โ€บ doc โ€บ stable โ€บ reference โ€บ generated โ€บ numpy.gradient.html
numpy.gradient โ€” NumPy v2.4 Manual
Gradient is calculated only along the given axis or axes The default (axis = None) is to calculate the gradient for all the axes of the input array. axis may be negative, in which case it counts from the last to the first axis. ... A tuple of ndarrays (or a single ndarray if there is only one dimension) corresponding to the derivatives of f with respect to each dimension.
๐ŸŒ
GeeksforGeeks
geeksforgeeks.org โ€บ python โ€บ how-to-compute-derivative-using-numpy
How to compute derivative using Numpy? - GeeksforGeeks
July 23, 2025 - At first, we need to define a polynomial function using the numpy.poly1d() function. Then we need to derive the derivative expression using the derive() function. At last, we can give the required value to x to calculate the derivative numerically.
๐ŸŒ
TutorialsPoint
tutorialspoint.com โ€บ how-to-compute-derivative-using-numpy
How to Compute Derivative Using Numpy?
July 20, 2023 - In this case, the gradient function will compute the derivative of f(x) = x^2 at each point in the domain and return an array representing the values of the derivative at each point.
๐ŸŒ
Derivative
derivative.ca โ€บ UserGuide โ€บ NumPy
NumPy - Derivative |
For example a CHOP with all its channels and samples can be converted to a NumPy array: # Returns all of the channels in this CHOP a 2D NumPy array # with a width equal to the channel length (the number of samples) # and a height equal to the number of channels.
Top answer
1 of 9
208

You have four options

  1. Finite Differences
  2. Automatic Derivatives
  3. Symbolic Differentiation
  4. Compute derivatives by hand.

Finite differences require no external tools but are prone to numerical error and, if you're in a multivariate situation, can take a while.

Symbolic differentiation is ideal if your problem is simple enough. Symbolic methods are getting quite robust these days. SymPy is an excellent project for this that integrates well with NumPy. Look at the autowrap or lambdify functions or check out Jensen's blogpost about a similar question.

Automatic derivatives are very cool, aren't prone to numeric error, but do require some additional libraries (google for this; there are a few good options). This is the most robust but also the most sophisticated and difficult-to-set-up choice. If you're fine restricting yourself to NumPy syntax, then Theano might be a good choice (note, though, that Theano itself is no longer actively developed).
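To make the idea concrete without pulling in a library, here is a toy forward-mode automatic differentiator built on dual numbers (a sketch of what such libraries do under the hood; the class and function names are my own):

```python
# Minimal forward-mode automatic differentiation via dual numbers.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule carried along with the value
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).der  # seed with dx/dx = 1

# d/dx (x**2 + 1) at x = 3 -> 6, exact (no truncation error)
dydx = derivative(lambda x: x * x + 1, 3.0)
```

Unlike finite differences, the result is exact up to floating-point rounding, which is why autodiff "isn't prone to numeric errors."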

Here is an example using SymPy

In [1]: from sympy import *
In [2]: import numpy as np
In [3]: x = Symbol('x')
In [4]: y = x**2 + 1
In [5]: yprime = y.diff(x)
In [6]: yprime
Out[6]: 2โ‹…x

In [7]: f = lambdify(x, yprime, 'numpy')
In [8]: f(np.ones(5))
Out[8]: array([2., 2., 2., 2., 2.])
2 of 9
82

The most straightforward way I can think of is to use numpy's gradient function:

import numpy

x = numpy.linspace(0, 10, 1000)
dx = x[1] - x[0]
y = x**2 + 1
dydx = numpy.gradient(y, dx)

This way, dydx will be computed using central differences and will have the same length as y, unlike numpy.diff, which uses forward differences and returns a vector of size n-1.

Top answer
1 of 3
6

This is not a simple problem, but many methods have been devised to handle it. One simple solution is finite-difference methods. numpy.diff() computes finite differences, and its n argument lets you take higher-order differences.

If the numpy function doesn't do what you want, Wikipedia also has a page that lists the finite-difference coefficients needed for derivatives of different orders and accuracies.

Depending on your application, you can also use scipy.fftpack.diff, which uses a completely different (spectral) technique to do the same thing, though your function needs a well-defined Fourier transform.
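A minimal sketch of that spectral approach (scipy.fftpack is a legacy module but the function still works; it assumes the samples cover one full period of a smooth periodic function):

```python
import numpy as np
from scipy.fftpack import diff as spectral_diff

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)  # one full period, endpoint excluded
dydx = spectral_diff(np.sin(x))  # period defaults to 2*pi
max_err = np.max(np.abs(dydx - np.cos(x)))
# accurate to near machine precision for band-limited periodic data
```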

There are many, many variants (e.g. summation by parts, finite-differencing operators, or operators designed to preserve known evolution constants in your system of equations) on both of the ideas above. What you should do depends a great deal on the problem you are trying to solve.

The good news is that a lot of work has been done in this field. The Wikipedia page for numerical differentiation has some resources (though it is focused on finite-differencing techniques).

2 of 3
1

The findiff project is a Python package that can take derivatives of arrays of any dimension with any desired accuracy order (within your hardware's limits, of course). It can handle arrays on uniform as well as non-uniform grids and can also build generalizations of derivatives, i.e. general linear combinations of partial derivatives with constant and variable coefficients.

๐ŸŒ
Python Guides
pythonguides.com โ€บ python-scipy-derivative-of-array
Python SciPy Derivative Of Array: Calculate With Precision
June 23, 2025 - The second derivative helps identify where sales growth is accelerating or decelerating, which is valuable for inventory planning and marketing strategy. When working with large arrays, performance becomes critical. Hereโ€™s a comparison of different methods: import numpy as np import time from scipy import misc, interpolate, ndimage, signal # Generate a large array x = np.linspace(0, 10, 10000) y = np.sin(x) * np.exp(-0.1*x) # Method 1: NumPy gradient start = time.time() d1 = np.gradient(y, x) time1 = time.time() - start print(f"NumPy gradient time: {time1:.5f} seconds") # Method 2: SciPy int
๐ŸŒ
SciPy
docs.scipy.org โ€บ doc โ€บ scipy โ€บ reference โ€บ generated โ€บ scipy.differentiate.derivative.html
derivative โ€” SciPy v1.17.0 Manual
For each element of the output of f, derivative approximates the first derivative of f at the corresponding element of x using finite difference differentiation. This function works elementwise when x, step_direction, and args contain (broadcastable) arrays.
๐ŸŒ
NumPy
numpy.org โ€บ doc โ€บ stable โ€บ reference โ€บ generated โ€บ numpy.polyder.html
numpy.polyder โ€” NumPy v2.3 Manual
January 31, 2021 - Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. ... Polynomial to differentiate. A sequence is interpreted as polynomial coefficients, see poly1d. ... A new polynomial representing the derivative.
๐ŸŒ
Svitla Systems
svitla.com โ€บ home โ€บ articles โ€บ blog โ€บ numerical differentiation methods in python
Python for Numerical Differentiation: Methods & Tools
January 14, 2021 - Please donโ€™t write your own code to calculate the derivative of a function until you know why you need it. Scipy provides fast implementations of numerical methods and it is pre-compiled and tested across many use cases. import numpy import matplotlib.pyplot as plt def f(x): return x*x x = numpy.arange(0,4,0.01) y = f(x) plt.figure(figsize=(10,5)) plt.plot(x, y, 'b') plt.grid(axis = 'both') plt.show() Code language: JavaScript (javascript)
Price ย  $$$
Call ย  +1-415-891-8605
Address ย  100 Meadowcreek Drive, Suite 102, 94925, Corte Madera
๐ŸŒ
NumPy
numpy.org โ€บ doc โ€บ 2.0 โ€บ reference โ€บ generated โ€บ numpy.gradient.html
numpy.gradient โ€” NumPy v2.0 Manual
With a similar procedure the forward/backward approximations used for boundaries can be derived. ... Quarteroni A., Sacco R., Saleri F. (2007) Numerical Mathematics (Texts in Applied Mathematics). New York: Springer. ... Durran D. R. (1999) Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. New York: Springer. ... Fornberg B. (1988) Generation of Finite Difference Formulas on Arbitrarily Spaced Grids, Mathematics of Computation 51, no. 184 : 699-706. PDF. ... >>> f = np.array([1, 2, 4, 7, 11, 16], dtype=float) >>> np.gradient(f) array([1.
๐ŸŒ
arXiv
arxiv.org โ€บ pdf โ€บ 2011.08461 pdf
Deep Learning Framework From Scratch Using Numpy Andrei Nicolae Microsoft Corp.
If ๐‘Ž๐‘–> ๐‘๐‘–, the derivative ๐œ•๐‘™/๐œ•๐‘ฆ๐‘–is passed to ๐‘Ž๐‘–, and vice versa. Thus, ... Absolute Value. The absolute value function ๐‘ฆ= |๐‘ฅ| is not differ- entiable at ๐‘ฅ= 0, however we can compute it for other points ... Concatenate. Concatenating a list of ๐‘›arrays, where each array
๐ŸŒ
Python Like You Mean It
pythonlikeyoumeanit.com โ€บ Module3_IntroducingNumpy โ€บ AutoDiff.html
Automatic Differentiation โ€” Python Like You Mean It
It is important to reiterate that MyGrad never gives us the actual function \(\frac{\mathrm{d}f}{\mathrm{d}x}\); it only computes the derivative evaluated at a specific input \(x=10\). MyGradโ€™s functions are intentionally designed to mirror NumPyโ€™s functions almost exactly. In fact, for all of the NumPy functions that MyGrad mirrors, we can pass a tensor to a NumPy function and it will be โ€œcoercedโ€ into returning a tensor instead of a NumPy array โ€“ thus we can differentiate through NumPy functions!