The key phrase from the docs is:

The gradient is computed using second order accurate central differences in the interior points ...

This means that a parabola (2nd-order polynomial) is fitted through each group of three contiguous points, and its slope at the location of the central point is used as the gradient there.

For evenly spaced data (with a spacing of 1) the formula is very simple:

g(i) = 0.5 f(i+1) - 0.5 f(i-1)
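You can check this against NumPy itself. A minimal sketch (the sample array `f` is arbitrary):

```python
import numpy as np

# arbitrary sample values with unit spacing
f = np.array([1.0, 3.0, 2.0, 7.0, 5.0])

# interior points: 0.5 f(i+1) - 0.5 f(i-1)
interior = 0.5 * f[2:] - 0.5 * f[:-2]

# np.gradient uses the same central differences away from the edges
print(interior)
print(np.gradient(f)[1:-1])
```

Both lines print the same values.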

Then comes the problematic part:

... and either first or second order accurate one-sides (forward or backwards) differences at the boundaries.

There is neither f(i+1) at the right boundary nor f(i-1) at the left boundary.

So you can use a simple 1st-order approximation:

g(0) = f(1) - f(0)
g(n) = f(n) - f(n-1)
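With `edge_order=1` (the default), NumPy's boundary values match these one-sided differences exactly. A quick sketch on an arbitrary array:

```python
import numpy as np

f = np.array([1.0, 3.0, 2.0, 7.0, 5.0])
g = np.gradient(f, edge_order=1)

# first-order one-sided differences at the two ends
g0 = f[1] - f[0]      # forward difference at the left boundary
gn = f[-1] - f[-2]    # backward difference at the right boundary

print(g[0] == g0, g[-1] == gn)   # True True
```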

or a more complex 2nd-order approximation:

g(0) = -1.5 f(0) + 2 f(1) - 0.5 f(2)
g(n) = 0.5 f(n-2) - 2 f(n-1) + 1.5 f(n)
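Likewise, with `edge_order=2` the boundary values agree with these three-point one-sided formulas (again a sketch on an arbitrary array, with unit spacing assumed):

```python
import numpy as np

f = np.array([1.0, 3.0, 2.0, 7.0, 5.0])
g = np.gradient(f, edge_order=2)

# second-order one-sided differences at the two ends
g0 = -1.5 * f[0] + 2 * f[1] - 0.5 * f[2]
gn = 0.5 * f[-3] - 2 * f[-2] + 1.5 * f[-1]

print(np.allclose([g[0], g[-1]], [g0, gn]))   # True
```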

The effect can be seen in this example, copied from the docs:

>>> x = np.array([0, 1, 2, 3, 4])
>>> f = x**2
>>> np.gradient(f, edge_order=1)
array([1., 2., 4., 6., 7.])
>>> np.gradient(f, edge_order=2)
array([0., 2., 4., 6., 8.])

The derivation of the coefficient values can be found here.

Answer from aerobiomat on Stack Overflow