The key phrase from the docs is:
The gradient is computed using second order accurate central differences in the interior points ...
This means that each group of three contiguous points is fitted with a parabola (a 2nd-order polynomial), and the slope of that parabola at the central point is used as the gradient there.
For evenly spaced data (with a spacing of 1) the formula is very simple:
g(i) = 0.5 f(i+1) - 0.5 f(i-1)
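As a quick check (using a hypothetical sample array), the interior values of np.gradient match this formula exactly:

```python
import numpy as np

# hypothetical sample with unit spacing
f = np.array([1.0, 3.0, 8.0, 20.0, 45.0])

# interior points: 0.5*f(i+1) - 0.5*f(i-1)
interior = 0.5 * f[2:] - 0.5 * f[:-2]

g = np.gradient(f)
print(interior)   # same values as g[1:-1]
print(g[1:-1])
```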
Then comes the problematic part:
... and either first or second order accurate one-sides (forward or backwards) differences at the boundaries.
There is neither f(i+1) at the right boundary nor f(i-1) at the left boundary.
So you can use a simple 1st order approximation
g(0) = f(1) - f(0)
g(n) = f(n) - f(n-1)
or a more complex 2nd order approximation
g(0) = -1.5 f(0) + 2 f(1) - 0.5 f(2)
g(n) = 0.5 f(n-2) - 2 f(n-1) + 1.5 f(n)
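Both sets of edge coefficients can be verified directly against np.gradient (again on a hypothetical array with unit spacing):

```python
import numpy as np

f = np.array([1.0, 3.0, 8.0, 20.0, 45.0])
n = len(f) - 1

# 1st-order one-sided edges: simple forward/backward differences
g1 = np.gradient(f, edge_order=1)
print(g1[0], f[1] - f[0])            # left edge matches f(1) - f(0)
print(g1[-1], f[n] - f[n - 1])       # right edge matches f(n) - f(n-1)

# 2nd-order one-sided edges: three-point formulas
g2 = np.gradient(f, edge_order=2)
print(g2[0], -1.5 * f[0] + 2 * f[1] - 0.5 * f[2])
print(g2[-1], 0.5 * f[n - 2] - 2 * f[n - 1] + 1.5 * f[n])
```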
The effect can be seen in this example, copied from the docs:
>>> x = np.array([0, 1, 2, 3, 4])
>>> f = x**2
>>> np.gradient(f, edge_order=1)
array([1., 2., 4., 6., 7.])
>>> np.gradient(f, edge_order=2)
array([0., 2., 4., 6., 8.])
The derivation of the coefficient values can be found here.
Answer from aerobiomat on Stack Overflow to "What is the purpose of np.gradient() edge_order?"
Simple question, really. When calling the gradient function, what does this option do?
Also, is there a Python equivalent of MATLAB's del2 function?
So in MATLAB, one can use (assuming u is an array)
`gradient(u, x, dx)`
where `dx` is some spacing, say `0.01`.
In Python one may use
`np.gradient(u, x)`
which is fine, but I want to set a custom step size. How might one do this?
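np.gradient accepts a scalar spacing directly as its second argument, so (assuming u is sampled on a uniform grid with step dx, an example value here) one sketch would be:

```python
import numpy as np

dx = 0.01                    # uniform step size (example value)
x = np.arange(0.0, 1.0, dx)
u = np.sin(x)

# pass the scalar spacing directly ...
g_scalar = np.gradient(u, dx)

# ... or pass the coordinate array itself; for uniform
# spacing both forms give the same result
g_coords = np.gradient(u, x)

print(np.allclose(g_scalar, g_coords))
```

Passing the coordinate array also handles non-uniform grids, whereas a scalar only covers the evenly spaced case.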