The problem is that NumPy can't give you the derivatives directly, so you have two options:
With NUMPY
What you essentially have to do is define a grid in three dimensions and evaluate the function on that grid. Afterwards, you feed this table of function values to numpy.gradient to get an array with the numerical derivative for each dimension (variable).
Example from here:
import numpy as np

# 3-D grid from -100 to 100 with spacing h = 25 along every axis
x, y, z = np.mgrid[-100:101:25., -100:101:25., -100:101:25.]
V = 2*x**2 + 3*y**2 - 4*z  # just a random function for the potential
# pass the grid spacing (25) so the derivatives are in the grid's units, not index units
Ex, Ey, Ez = np.gradient(V, 25.)
Without NUMPY
You could also calculate the derivative yourself by using the centered difference quotient:

f'(x) ≈ [f(x + h) - f(x - h)] / (2h)

This is essentially what numpy.gradient does at every point of your predefined grid.
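As a minimal sketch (the helper name grad_centered is ours, not NumPy's), the centered difference can be hand-rolled in a few lines:

def grad_centered(f, x, h=1e-5):
    # O(h^2) centered difference quotient for f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def V(x):                       # a 1-D slice of the potential above
    return 2 * x**2

print(grad_centered(V, 3.0))    # ~12.0, since dV/dx = 4x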
NumPy and SciPy are for numerical calculations. Since you want to calculate the gradient of an analytical function, you have to use the SymPy package, which supports symbolic mathematics. Differentiation is explained here (you can actually try it in the web console in the bottom-left corner).
You can install Sympy under Ubuntu with
sudo apt-get install python-sympy
or under any Linux distribution with pip
sudo pip install sympy
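For instance, the potential from the NumPy example above could be differentiated symbolically like this (a minimal sketch using SymPy's symbols and diff):

import sympy as sp

x, y, z = sp.symbols('x y z')
V = 2*x**2 + 3*y**2 - 4*z
grad_V = [sp.diff(V, var) for var in (x, y, z)]  # vector of partial derivatives
print(grad_V)  # [4*x, 6*y, -4]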
Also, in the documentation [1]:
>>> y = np.array([1, 2, 4, 7, 11, 16], dtype=float)
>>> j = np.gradient(y)
>>> j
array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ])
The gradient is defined as (change in y)/(change in x). x, here, is the list index, so the difference between adjacent values is 1.

At the boundaries, the first difference is calculated. This means that at each end of the array, the gradient given is simply the difference between the two end values (divided by 1).

Away from the boundaries, the gradient for a particular index is given by taking the difference between the values on either side and dividing by 2.
So, the gradient of y, above, is calculated thus:
j[0] = (y[1]-y[0])/1 = (2-1)/1 = 1
j[1] = (y[2]-y[0])/2 = (4-1)/2 = 1.5
j[2] = (y[3]-y[1])/2 = (7-2)/2 = 2.5
j[3] = (y[4]-y[2])/2 = (11-4)/2 = 3.5
j[4] = (y[5]-y[3])/2 = (16-7)/2 = 4.5
j[5] = (y[5]-y[4])/1 = (16-11)/1 = 5
You could find the minima of all the absolute values in the resulting array to find the turning points of a curve, for example.
[1] The array is actually called x in the example in the docs; I've changed it to y here to avoid confusion.
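A small sketch of that turning-point idea (the curve y = exp(-x^2) and its sampling are our own assumptions; it has a single turning point at x = 0):

import numpy as np

x = np.linspace(-2, 2, 41)
y = np.exp(-x**2)                  # one turning point, at x = 0
j = np.gradient(y, x[1] - x[0])
# the gradient's absolute value is smallest where the curve turns
print(x[np.argmin(np.abs(j))])     # 0.0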
Here is what is going on. The Taylor series expansion guides us on how to approximate the derivative, given the values at nearby points. The simplest estimate comes from the first-order Taylor expansion of a C^2 function (one with two continuous derivatives):
- f(x+h) = f(x) + f'(x)h + f''(xi)h^2/2.
One can solve for f'(x)...
- f'(x) = [f(x+h) - f(x)]/h + O(h).
Can we do better? Yes indeed. If we assume C^3, then the Taylor expansion is
- f(x+h) = f(x) + f'(x)h + f''(x)h^2/2 + f'''(xi) h^3/6, and
- f(x-h) = f(x) - f'(x)h + f''(x)h^2/2 - f'''(xi) h^3/6.
Subtracting these (both the h^0 and h^2 terms drop out!) and solving for f'(x) gives:
- f'(x) = [f(x+h) - f(x-h)]/(2h) + O(h^2).
So, if we have a discretized function defined on an equidistant partition x = x_0, x_1 = x_0 + h, ..., x_n = x_0 + n*h, then numpy.gradient will yield a "derivative" array using the first-order estimate at the two ends and the better centered estimates in the middle.
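A quick numerical check of those error orders (our own toy example; f(x) = exp(x), so f'(x) = exp(x) and the errors are easy to read off):

import numpy as np

h = 0.1
x = np.arange(0.0, 1.0 + h/2, h)
f = np.exp(x)
err = np.abs(np.gradient(f, h) - np.exp(x))
print(err[1:-1].max())   # interior: O(h^2) centered estimate, small
print(err[0], err[-1])   # ends: O(h) one-sided estimate, noticeably larger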
Example 1. If you don't specify any spacing, the interval is assumed to be 1. So if you call
f = np.array([5, 7, 4, 8])
what you are saying is that f(0) = 5, f(1) = 7, f(2) = 4, and f(3) = 8. Then
np.gradient(f)
will be: f'(0) = (7 - 5)/1 = 2, f'(1) = (4 - 5)/(2*1) = -0.5, f'(2) = (8 - 7)/(2*1) = 0.5, f'(3) = (8 - 4)/1 = 4.
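Running this (with f as a float array) reproduces those numbers:

import numpy as np

f = np.array([5, 7, 4, 8], dtype=float)
print(np.gradient(f))  # [ 2.  -0.5  0.5  4. ]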
Example 2. If you specify a single spacing, the spacing is uniform but not 1.
For example, if you call
np.gradient(f, 0.5)
this is saying that h = 0.5, not 1, i.e., the function is really f(0) = 5, f(0.5) = 7, f(1.0) = 4, f(1.5) = 8. The net effect is to replace h = 1 with h = 0.5, so all the results are doubled.
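Again easy to verify; with h = 0.5 every value is twice the h = 1 result:

import numpy as np

f = np.array([5, 7, 4, 8], dtype=float)
print(np.gradient(f, 0.5))  # [ 4. -1.  1.  8.]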
Example 3. Suppose the discretized function f(x) is not defined on uniformly spaced intervals, for instance f(0) = 5, f(1) = 7, f(3) = 4, f(3.5) = 8. Then there is a messier finite-difference formula for nonuniform spacing that numpy.gradient uses, and you will get the discretized derivatives by calling
np.gradient(f, np.array([0,1,3,3.5]))
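Note that passing a coordinate array like this requires NumPy 1.13 or newer. The endpoint values are easy to check by hand:

import numpy as np

f = np.array([5, 7, 4, 8], dtype=float)
x = np.array([0.0, 1.0, 3.0, 3.5])
print(np.gradient(f, x))
# endpoints use one-sided differences: (7-5)/1 = 2 and (8-4)/0.5 = 8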
Lastly, if your input is a 2-D array, then you are thinking of a function f of x and y defined on a grid. np.gradient will then output the arrays of "discretized" partial derivatives with respect to x and y.
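As a small sketch (our own grid, with f(x, y) = x*y, so the partials are y and x):

import numpy as np

xs = np.arange(4.0)
ys = np.arange(5.0)
f = np.outer(xs, ys)            # f[i, j] = i * j on a unit-spaced grid
dfdx, dfdy = np.gradient(f)     # partials along axis 0 and axis 1
print(dfdx[1, 2], dfdy[1, 2])   # 2.0 and 1.0, matching y and x at that point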
Hi, I'm trying to expand my knowledge of machine learning. I came across the np.gradient function and wanted to understand how it relates to the Taylor series for estimating values. The documentation seemed a bit confusing for a novice.
I am trying to find a simple Python function to get the gradient of a mathematical function. numpy.gradient() doesn't take a function as input, so it doesn't seem like the right tool to use. Is there any other option? Thank you!