The key phrase from the docs is:
The gradient is computed using second order accurate central differences in the interior points ...
This means that each group of three contiguous points is fitted with a parabola (a 2nd order polynomial), and the slope of that parabola at the central point is used as the gradient there.
For evenly spaced data (with a spacing of 1) the formula is very simple:
g(i) = 0.5 f(i+1) - 0.5 f(i-1)
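This interior formula can be checked directly against np.gradient; the sample values below are my own (f(x) = x² on x = 0..4, unit spacing):

```python
import numpy as np

f = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # f(x) = x**2 on x = 0, 1, 2, 3, 4

# Second-order central difference at the interior points (unit spacing):
# g(i) = 0.5 f(i+1) - 0.5 f(i-1)
interior = 0.5 * f[2:] - 0.5 * f[:-2]

print(interior)               # the three interior slopes
print(np.gradient(f)[1:-1])   # np.gradient agrees at the interior points
```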
Then comes the problematic part:
... and either first or second order accurate one-sides (forward or backwards) differences at the boundaries.
There is neither f(i+1) at the right boundary nor f(i-1) at the left boundary.
So you can use a simple 1st order approximation
g(0) = f(1) - f(0)
g(n) = f(n) - f(n-1)
or a more complex 2nd order approximation
g(0) = -1.5 f(0) + 2 f(1) - 0.5 f(2)
g(n) = 0.5 f(n-2) - 2 f(n-1) + 1.5 f(n)
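Both sets of boundary formulas can be verified against np.gradient's edge_order parameter; the data and variable names below are my own:

```python
import numpy as np

f = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # f(x) = x**2, unit spacing
n = len(f) - 1

# 1st order one-sided differences at the boundaries
g0_first = f[1] - f[0]          # forward difference at the left edge
gn_first = f[n] - f[n - 1]      # backward difference at the right edge

# 2nd order one-sided differences at the boundaries
g0_second = -1.5 * f[0] + 2 * f[1] - 0.5 * f[2]
gn_second = 0.5 * f[n - 2] - 2 * f[n - 1] + 1.5 * f[n]

assert np.isclose(g0_first, np.gradient(f, edge_order=1)[0])
assert np.isclose(gn_first, np.gradient(f, edge_order=1)[-1])
assert np.isclose(g0_second, np.gradient(f, edge_order=2)[0])
assert np.isclose(gn_second, np.gradient(f, edge_order=2)[-1])
```

Note that for this quadratic the 2nd order edges are exact (0 and 8), while the 1st order edges (1 and 7) are off by the truncation error.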
The effect can be seen in this example, copied from the docs:
>>> x = np.array([0, 1, 2, 3, 4])
>>> f = x**2
>>> np.gradient(f, edge_order=1)
array([1., 2., 4., 6., 7.])
>>> np.gradient(f, edge_order=2)
array([0., 2., 4., 6., 8.])
The derivation of the coefficient values can be found here.
Answer from aerobiomat on Stack Overflow.

The original question: Simple question really. When calling the gradient function, what does this option do? Also, is there a Python equivalent of MATLAB's del2 function?
I'll second @jrennie's first sentence - it can all depend. The numpy.gradient function requires that the data be evenly spaced (although it allows different spacings in each direction if multi-dimensional). If your data does not adhere to this, then numpy.gradient isn't going to be much use. Experimental data may have (OK, will have) noise on it, in addition to not necessarily being evenly spaced. In this case it might be better to use one of the scipy.interpolate spline functions (or objects). These can take unevenly spaced data, allow for smoothing, and can return derivatives up to k-1, where k is the order of the spline fit requested. The default value for k is 3, so a second derivative is just fine. Example:
spl = scipy.interpolate.splrep(x, y, k=3)      # no smoothing, 3rd order spline
ddy = scipy.interpolate.splev(x, spl, der=2)   # use those knots to get second derivative
The object oriented splines like scipy.interpolate.UnivariateSpline have methods for the derivatives. Note that the derivative methods are implemented in Scipy 0.13 and are not present in 0.12.
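A minimal sketch of the object-oriented route (method names follow the SciPy API; the sample data is my own):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 4.0, 9)
y = x**2

spl = UnivariateSpline(x, y, k=3, s=0)  # s=0: pure interpolation, no smoothing
d2 = spl.derivative(n=2)                # returns a new spline for the 2nd derivative

print(d2(x))  # ~2 everywhere, since (x**2)'' == 2
```

Passing s=0 disables smoothing; leave s at its default (or set it explicitly) when the data is noisy.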
Note that, as pointed out by @JosephCottham in comments in 2018, this answer (good for Numpy 1.08 at least) is no longer applicable since (at least) Numpy 1.14. Check your version number and the available options for the call.
There's no universal right answer for numerical gradient calculation. Before you can calculate the gradient from sample data, you have to make some assumption about the underlying function that generated that data. You can technically use np.diff for gradient calculation. Using np.gradient is a reasonable approach. I don't see anything fundamentally wrong with what you are doing: it's one particular approximation of the 2nd derivative of a 1-D function.
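The practical difference between the two: np.diff returns n-1 one-sided differences between neighbours, while np.gradient returns n centered estimates aligned with the input samples (sample data my own):

```python
import numpy as np

f = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # f(x) = x**2, unit spacing

print(np.diff(f))      # forward differences, length n-1, offset by half a step
print(np.gradient(f))  # centered estimates, same length and alignment as f
```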