Answer from imbr on Stack Overflow
Top answer
1 of 4
57

1. Use numpy.gradient (best option)

Most people want this. It is NumPy's built-in finite-difference approach (2nd-order accurate), and the result has the same shape as the input array.

It uses second-order accurate central differences at the interior points and either first- or second-order accurate one-sided (forward or backward) differences at the boundaries. The returned gradient hence has the same shape as the input array.

2. Use numpy.diff (you probably don't want this)

If you really want something roughly twice as inaccurate: this is only 1st-order accurate, and its output is one element shorter than the input. It is, however, faster than the above (per some small tests I ran).
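For completeness, a quick sketch of what numpy.diff gives you (note the output is one element shorter than the input):

```python
import numpy as np

y = np.array([1., 2., 4., 7., 11.])
dx = 0.1

dy = np.diff(y)   # forward differences: y[i+1] - y[i]
print(dy)         # [1. 2. 3. 4.]  -- length 4, input had length 5
print(dy / dx)    # 1st-order accurate derivative estimate
```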

For constant spacing between x samples

import numpy as np 
dx = 0.1; y = [1, 2, 3, 4, 4, 5, 6] # dx constant
np.gradient(y, dx) # dy/dx 2nd order accurate
array([10., 10., 10.,  5.,  5., 10., 10.])

For irregular spacing between x samples (your question)

import numpy as np
x = [.1, .2, .5, .6, .7, .8, .9] # dx varies
y = [1, 2, 3, 4, 4, 5, 6]
np.gradient(y, x) # dy/dx 2nd order accurate
array([10., 8.333..,  8.333.., 5.,  5., 10., 10.])

What are you trying to achieve?

numpy.gradient offers a 2nd-order and numpy.diff a 1st-order finite-difference approximation (with numpy.gradient also handling a non-uniform grid/array). But if you need numerical differentiation, a finite-difference formulation tailored to your case might serve you better: you can reach much higher accuracy, e.g. 8th-order if you need it, far beyond numpy.gradient.
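To illustrate that point, here is a minimal sketch of a 4th-order accurate central difference for the interior points of a uniform grid (coefficients from the standard finite-difference tables; the function name is made up for this example):

```python
import numpy as np

def d1_4th_order(f, h):
    """4th-order accurate first derivative on a uniform grid.

    Returns values only for the interior points f[2:-2]; the
    boundaries would need one-sided formulas.
    """
    return (f[:-4] - 8*f[1:-3] + 8*f[3:-1] - f[4:]) / (12*h)

h = 0.05
x = np.arange(0, 2*np.pi, h)
approx = d1_4th_order(np.sin(x), h)
err = np.max(np.abs(approx - np.cos(x[2:-2])))  # error shrinks as h**4
```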

2 of 4
25

use numpy.gradient()

Please be aware that there are more advanced ways to calculate the numerical derivative than simply using diff. I would suggest using numpy.gradient, as in this example.

import numpy as np
from matplotlib import pyplot as plt

# we sample a sin(x) function
dx = np.pi/10
x = np.arange(0, 2*np.pi, dx)

# we calculate the derivative, with np.gradient
plt.plot(x,np.gradient(np.sin(x), dx), '-*', label='approx')

# we compare it with the exact first derivative, i.e. cos(x)
plt.plot(x,np.cos(x), label='exact')
plt.legend()
🌐
NumPy
numpy.org › doc › 2.1 › reference › generated › numpy.gradient.html
numpy.gradient — NumPy v2.1 Manual
With a similar procedure the forward/backward approximations used for boundaries can be derived. ... Quarteroni A., Sacco R., Saleri F. (2007) Numerical Mathematics (Texts in Applied Mathematics). New York: Springer. ... Durran D. R. (1999) Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. New York: Springer. ... Fornberg B. (1988) Generation of Finite Difference Formulas on Arbitrarily Spaced Grids, Mathematics of Computation 51, no. 184 : 699-706. PDF. ... >>> import numpy as np >>> f = np.array([1, 2, 4, 7, 11, 16]) >>> np.gradient(f) array([1.
🌐
Medium
medium.com › @whyamit404 › understanding-derivatives-with-numpy-e54d65fcbc52
Understanding Derivatives with NumPy | by whyamit404 | Medium
February 8, 2025 - In NumPy, we don’t have a dedicated function for derivatives. Instead, we use np.gradient(). This function calculates the derivative using numerical differentiation. It estimates the rate of change of an array by looking at the differences between adjacent points.
🌐
SciPy
docs.scipy.org › doc › scipy-1.16.1 › reference › generated › scipy.differentiate.derivative.html
derivative — SciPy v1.16.1 Manual
For each element of the output of f, derivative approximates the first derivative of f at the corresponding element of x using finite difference differentiation. This function works elementwise when x, step_direction, and args contain (broadcastable) arrays.
🌐
SciPy
docs.scipy.org › doc › scipy › reference › generated › scipy.differentiate.derivative.html
derivative — SciPy v1.17.0 Manual
For each element of the output of f, derivative approximates the first derivative of f at the corresponding element of x using finite difference differentiation. This function works elementwise when x, step_direction, and args contain (broadcastable) arrays.
Top answer
1 of 2
5

Purely in terms of terminology, it's probably better to talk about taking discrete partial derivatives of variable fields stored in an array, rather than differentiating an array itself.

Regarding the code itself, you appear to have dropped the element assignment inside your loop,

v(i,j) = (u(i+1,j)-u(i-1,j))/(2*h)
w(i,j) = (u(i,j+1)-u(i,j-1))/(2*h)

although that may be a typo. Your version is also buggy and unsafe at the end points of the ranges (i.e. i=-nx, i=nx, j=-ny, j=ny), since at these places you are accessing elements of the u array which do not exist. At these locations you either need to use a non-centred derivative, something like v(-nx,j) = (u(-nx+1,j) - u(-nx,j))/h, or use any knowledge of the boundary conditions.

As a side note, if you are writing Fortran code frequently, then in terms of code optimisations it is useful to remember that the index furthest to the right should be outermost in loops for the code to have the best chance of running fast, which implies swapping your i and j loops. In many simple cases it's possible the compiler will fix this for you, and it's certainly less important than having code which works, but it's probably a good habit to get into.
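In NumPy terms, this centred-in-the-interior, one-sided-at-the-boundary scheme is essentially what numpy.gradient applies along each axis of a 2-D array; a small sketch with a made-up field u:

```python
import numpy as np

h = 0.1
# made-up smooth field u(x, y) = x**2 + 3*y on a uniform grid
X, Y = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h), indexing='ij')
u = X**2 + 3*Y

# v = du/dx (axis 0), w = du/dy (axis 1), like the v and w above
v, w = np.gradient(u, h)
```

Interior values of v match the exact 2*X here, since central differences are exact for quadratics; the default first-order one-sided boundary values carry an O(h) error unless you pass edge_order=2.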

2 of 2
2

It's called column-major due to the Fortran array model. Briefly: let A be a matrix of size m x n, and let a(i,j) be the element of A for 1 <= i <= m and 1 <= j <= n; then a(i,j) is stored at the ((j-1)*m + i)-th position of the one-dimensional storage S, i.e. S((j-1)*m + i) = a(i,j). Writing the loop as in the original post, the storage is processed with stride m. However, swapping the loop order, the storage is processed contiguously. Hence this approach performs faster than the first one.
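The layout difference can be seen from NumPy, which also supports Fortran (column-major) order:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3 matrix

print(A.ravel(order='C'))   # row-major:    [1 2 3 4 5 6]
print(A.ravel(order='F'))   # column-major: [1 4 2 5 3 6]
```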

🌐
GeeksforGeeks
geeksforgeeks.org › python › how-to-compute-derivative-using-numpy
How to compute derivative using Numpy? - GeeksforGeeks
July 23, 2025 - However, NumPy can compute the special cases of one-dimensional polynomials using the functions numpy.poly1d() and deriv().
Top answer
1 of 3
6

This is not a simple problem, but there are a lot of methods that have been devised to handle it. One simple solution is to use finite-difference methods. The function numpy.diff() uses finite differencing, and its n argument lets you take higher-order differences.

Wikipedia also has a page that lists the finite-difference coefficients needed for different derivatives at different accuracies, in case the numpy function doesn't do what you want.

Depending on your application you can also use scipy.fftpack.diff, which uses a completely different (spectral) technique to do the same thing, though your function needs a well-defined Fourier transform.
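The same spectral idea can be sketched with plain NumPy's FFT (assuming the samples are periodic on the interval):

```python
import numpy as np

n = 64
x = np.linspace(0, 2*np.pi, n, endpoint=False)  # periodic grid
f = np.sin(x)

# differentiate in Fourier space: each mode k picks up a factor 1j*k
k = 2*np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
df = np.fft.ifft(1j * k * np.fft.fft(f)).real

# for this band-limited f, df agrees with cos(x) to near machine precision
```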

There are lots and lots and lots of variants (e.g. summation by parts, finite differencing operators, or operators designed to preserve known evolution constants in your system of equations) on both of the two ideas above. What you should do will depend a great deal on what the problem is that you are trying to solve.

The good thing is that a lot of work has been done in this field. The Wikipedia page for Numerical Differentiation has some resources (though it is focused on finite-differencing techniques).

2 of 3
1

The findiff project is a Python package that can do derivatives of arrays of any dimension with any desired accuracy order (of course depending on your hardware restrictions). It can handle arrays on uniform as well as non-uniform grids and also create generalizations of derivatives, i.e. general linear combinations of partial derivatives with constant and variable coefficients.

Find elsewhere
🌐
Svitla Systems
svitla.com › home › articles › blog › numerical differentiation methods in python
Python for Numerical Differentiation: Methods & Tools
January 14, 2021 - It allows you to calculate the first order derivative, second order derivative, and so on. It accepts functions as input and this function can be represented as a Python function. It is also possible to provide the “spacing” dx parameter, which will give you the possibility to set the step of derivative intervals.
🌐
Python Guides
pythonguides.com › python-scipy-derivative-of-array
Python SciPy Derivative Of Array: Calculate With Precision
June 23, 2025 - Learn to calculate derivatives of arrays in Python using SciPy. Master numerical differentiation with examples for data analysis, signal processing, and more.
🌐
GitHub
github.com › jrwalk › deriv
GitHub - jrwalk/deriv: Python code to calculate numerical derivative of an array on a non-uniform grid
From what I've seen, the common methods currently are: (1) numpy.diff -- right-sided single difference of an arbitrary array, on a uniform grid (2) numpy.gradient -- two-sided single difference of an arbitrary array, on a uniform grid (3) scipy.misc.derivative -- two-sided difference of a defined function evaluated on an arbitrary grid This method combines the arbitrary grid of (3) with the arbitrary array input in (1) and (2).
Author   jrwalk
🌐
TutorialsPoint
tutorialspoint.com › how-to-compute-derivative-using-numpy
How to Compute Derivative Using Numpy?
July 20, 2023 - The output of this function will be two arrays representing the values of the partial derivatives with respect to x and y at each point in the domain. While NumPy's gradient function is a convenient and straightforward method to compute derivatives of functions, there are other approaches to calculating derivatives in Python...
🌐
Berkeley
pythonnumericalmethods.berkeley.edu › notebooks › chapter20.05-Summary-and-Problems.html
Summary — Python Numerical Methods
There are issues with finite differences for approximation of derivatives when the data is noisy. Write a function \(my\_der\_calc(f, a, b, N, option)\), with the output as \([df, X]\), where \(f\) is a function object, \(a\) and \(b\) are scalars such that a < b, \(N\) is an integer bigger than 10, and \(option\) is the string \(forward\), \(backward\), or \(central\). Let \(x\) be an array ...
🌐
YouTube
youtube.com › watch
How to: Numerical Derivative in Python - YouTube
Learn how to take a simple numerical derivative of data using a difference formula in Python.Script and resources to download can be found at: https://www.ha...
Published   September 26, 2020
🌐
Turing
turing.com › kb › derivative-functions-in-python
How to Calculate Derivative Functions in Python
This article will look at the methods and techniques for calculating derivatives in Python. It will cover numerical approaches, which approximate derivatives through numerical differentiation, as well as symbolic methods, which use mathematical representations to derive exact solutions. Derivatives enable us to determine the slope of a curve at any specific point.
🌐
NumPy
numpy.org › doc › stable › reference › generated › numpy.gradient.html
numpy.gradient — NumPy v2.4 Manual
Gradient is calculated only along the given axis or axes The default (axis = None) is to calculate the gradient for all the axes of the input array. axis may be negative, in which case it counts from the last to the first axis. ... A tuple of ndarrays (or a single ndarray if there is only one dimension) corresponding to the derivatives of f with respect to each dimension.
🌐
GitHub
github.com › Ryota-Kawamura › Mathematics-for-Machine-Learning-and-Data-Science-Specialization › blob › main › Course-2 › Week-1 › C2_W1_Lab_1_differentiation_in_python.ipynb
Mathematics-for-Machine-Learning-and-Data-Science-Specialization/Course-2/Week-1/C2_W1_Lab_1_differentiation_in_python.ipynb at main · Ryota-Kawamura/Mathematics-for-Machine-Learning-and-Data-Science-Specialization
"This is just a reminder how to define functions in Python. A simple function $f\\left(x\\right) = x^2$, it can be set up as:" ... "You can easily find the derivative of this function analytically. You can set it up as a separate function:" ... "Since you have been working with the `NumPy` arrays, you can apply the function to each element of an array:"
Author   Ryota-Kawamura
🌐
Python.org
discuss.python.org › python help
How to differentiate a matrix of functions - Python Help - Discussions on Python.org
July 30, 2022 - I want a simple elementwise derivative of a matrix. Could not find anything precoded, which was surprising. I tried a few versions, the following is probably the simplest. import numpy as np from sympy import * import sympy as sp t = symbols(‘t’) a = np.array([[t**2 + 1, sp.exp(2*t)], [sin(t), 45]]) for row in a: for element in row: a[row][element] = diff(a[row][element],t) print(a) and the error section that shows in Jupyter: IndexError Traceback (most ...
🌐
Python Like You Mean It
pythonlikeyoumeanit.com › Module3_IntroducingNumpy › AutoDiff.html
Automatic Differentiation — Python Like You Mean It
It is important to reiterate that MyGrad never gives us the actual function \(\frac{\mathrm{d}f}{\mathrm{d}x}\); it only computes the derivative evaluated at a specific input \(x=10\). MyGrad’s functions are intentionally designed to mirror NumPy’s functions almost exactly. In fact, for all of the NumPy functions that MyGrad mirrors, we can pass a tensor to a NumPy function and it will be “coerced” into returning a tensor instead of a NumPy array – thus we can differentiate through NumPy functions!
🌐
Readthedocs
findiff.readthedocs.io › en › latest › source › examples-basic.html
Basic Examples of findiff — findiff 0.11.1 documentation
Let’s do this numerically with findiff. First we set up the grid and the arrays: x = np.linspace(0, 10, 100) dx = x[1] - x[0] f = np.sin(x) g = np.cos(x) Then we construct the derivative object, which represents the differential operator \(\frac{d^2}{dx^2}\):