You have four options:

  1. Finite Differences
  2. Automatic Derivatives
  3. Symbolic Differentiation
  4. Compute derivatives by hand.

Finite differences require no external tools but are prone to numerical error and, if you're in a multivariate situation, can take a while.
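For instance, a minimal central-difference sketch (my illustration, not part of the original answer; the helper name and the step size h = 1e-5 are arbitrary choices that need tuning per problem):

import numpy as np

def f(x):
    return x**2 + 1

def central_difference(f, x, h=1e-5):
    # approximate f'(x) with (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_difference(f, np.ones(5)))   # roughly [2. 2. 2. 2. 2.]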

Symbolic differentiation is ideal if your problem is simple enough. Symbolic methods are getting quite robust these days. SymPy is an excellent project for this that integrates well with NumPy. Look at the autowrap or lambdify functions, or check out Jensen's blog post about a similar question.

Automatic derivatives are very cool and aren't prone to numeric errors, but they do require some additional libraries (google for this; there are a few good options). This is the most robust choice, but also the most sophisticated and the most difficult to set up. If you're fine restricting yourself to NumPy syntax, then Theano might be a good choice.
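As a hedged sketch of that route (my addition, using the HIPS autograd package that also shows up in the search results further down; grad(f) returns a new function that evaluates f'):

from autograd import grad    # pip install autograd

def f(x):
    return x**2 + 1.0        # same example function as above

fprime = grad(f)             # returns a function computing f'(x), exact up to floating point
print(fprime(3.0))           # 6.0, no step size to choose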

Here is an example using SymPy:

In [1]: from sympy import *
In [2]: import numpy as np
In [3]: x = Symbol('x')
In [4]: y = x**2 + 1
In [5]: yprime = y.diff(x)
In [6]: yprime
Out[6]: 2⋅x

In [7]: f = lambdify(x, yprime, 'numpy')
In [8]: f(np.ones(5))
Out[8]: [ 2.  2.  2.  2.  2.]
Answer from MRocklin on Stack Overflow
Another answer from the same Stack Overflow question:

The most straightforward way I can think of is using NumPy's gradient function:

import numpy

x = numpy.linspace(0, 10, 1000)
dx = x[1] - x[0]               # uniform grid spacing
y = x**2 + 1
dydx = numpy.gradient(y, dx)   # central differences in the interior

This way, dydx will be computed using central differences and will have the same length as y, unlike numpy.diff, which uses forward differences and returns a vector of size n-1.
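A quick illustration of that length difference (my own example, not part of the answer):

import numpy as np

y = np.array([1.0, 4.0, 9.0, 16.0])
print(np.diff(y))       # [3. 5. 7.]      forward differences, length n-1
print(np.gradient(y))   # [3. 4. 6. 7.]   central differences (one-sided at the ends), length n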

🌐
NumPy
numpy.org › doc › 2.1 › reference › generated › numpy.gradient.html
numpy.gradient — NumPy v2.1 Manual
Reference documentation for numpy.gradient. It derives the forward/backward approximations used at the boundaries and cites Quarteroni, Sacco & Saleri (2007), Durran (1999), and Fornberg (1988). Basic example:

>>> import numpy as np
>>> f = np.array([1, 2, 4, 7, 11, 16])
>>> np.gradient(f)
array([1. , 1.5, 2.5, 3.5, 4.5, 5. ])
🌐
Medium
medium.com › @whyamit404 › understanding-derivatives-with-numpy-e54d65fcbc52
Understanding Derivatives with NumPy | by whyamit404 | Medium
February 8, 2025 - Suppose we want to find the derivative of y = x². Here's how you can do it:

import numpy as np
# Define the range for x from 0 to 10, split into 100 points
x = np.linspace(0, 10, 100)
# Define the function y = x^2
y = x**2
# Compute the derivative of y with respect to x using np.gradient
dy_dx = np.gradient(y, x)
# Print the result
print(dy_dx)
🌐
Reddit
reddit.com › r/learnpython › need help in understanding np.gradient for calculating derivatives
r/learnpython on Reddit: Need help in understanding np.gradient for calculating derivatives
June 30, 2023 -

Hi, I'm trying to expand my knowledge in Machine Learning. I came across the np.gradient function and wanted to understand how it relates to the Taylor series for estimating values. The documentation seemed a bit confusing for a novice.

Top answer from the Reddit thread:
One definition of the derivative is f'(x) = (f(x+h) - f(x))/h where h goes to 0. Computers cannot store infinitely small numbers, so they might set h = 1e-6 (that is, 0.000001). It's a tradeoff: while we want h to be as small as possible, at some point the errors due to computer precision begin to dominate. Given any function that the computer can calculate, it can approximate the derivative.

import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return np.sin(x)

h = 1e-6
x = np.arange(-2, 2, 0.01)
y = f(x)
dfdx = (f(x+h) - f(x))/h
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

Assuming that the function is reasonably smooth (i.e. the derivative above exists), another definition of the derivative is f'(x) = (f(x+h) - f(x-h))/(2h) where h goes to 0. Going from x-h to x+h means two steps, which is the reason for the 2h. It works just as well. These methods are called finite differences, in contrast to the usual derivative definition where h is infinitely small. The first one is the forward difference and the second one is the central difference. The backward difference is (f(x) - f(x-h))/h.

Let's assume we want to write a derivative function. It takes a function f and values of x, and gives back f'(x).

def f(x):
    return np.sin(x)

def d(fun, x, h=1e-6):
    return (fun(x+h) - fun(x))/h

x = np.arange(-2, 2, 0.01)
y = f(x)
dfdx = d(f, x)
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

By passing the function into the derivative function, d can call fun wherever it wants/needs to in order to get the derivative.

Now things become a bit more inconvenient. For some reason we do not know f; we only know y, i.e. f(x) for some values of x. Let's say that x is evenly spaced, as usual. Then our best guess for h is not really tiny but identical to the spacing between neighboring x values. With the forward difference we need to take care at the rightmost value, because we cannot just add +h to get a value even further out; instead we use the backward difference there. For the values in the middle we use the central difference instead of the forward difference.

def f(x):
    return np.sin(x)

def d(y, h=1):
    dfdx = [(y[1] - y[0])/h]                    # forward difference at the left edge
    for i in range(1, len(y)-1):
        dfdx.append((y[i+1] - y[i-1])/(2*h))    # central differences in the interior
    dfdx.append((y[-1] - y[-2])/h)              # backward difference at the right edge
    return dfdx

h = 0.01
x = np.arange(-2, 2, h)
y = f(x)
dfdx = d(y, h)
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

The implementation above corresponds to np.gradient in the one-dimensional case when the spacing is a scalar (cases 1 and 2 of varargs in the documentation). When varargs is set to an array of coordinates (cases 3 and 4), x would be used directly in d instead of h. However, at that point the formula is more complicated, as the documentation mentions. Effectively every point has an hd (the forward step size) and an hs (the backward step size), and the formula is not just (f(x+hd) - f(x-hs))/(hd+hs) but the bigger expression given in the documentation, where the values of hd and hs act as a kind of weights. np.gradient is basically the backward, central and forward differences combined. When you have values like f(1), f(2), f(2+h) and want the derivative at 2, the code notices that 2 and 2+h are very close together and puts greater weight on that pair (and mostly ignores f(1)).

The important part so far is that np.gradient, when given a vector with N elements, calculates N one-dimensional derivatives, which is not the typical idea of a gradient. np.gradient does support more dimensions, which might make things clearer. In the 1D case we essentially go through all values from left to right and consider each value and its direct left and right neighbors to quantify the uptrend or downtrend.
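As a quick cross-check (my addition, not part of the Reddit answer), np.gradient on evenly spaced samples reproduces exactly this forward/central/backward scheme:

import numpy as np

h = 0.01
x = np.arange(-2, 2, h)
y = np.sin(x)

dfdx = np.gradient(y, h)   # one-sided differences at the edges, central in the interior
print(np.allclose(dfdx[1:-1], (y[2:] - y[:-2]) / (2*h)))    # True
print(np.isclose(dfdx[0], (y[1] - y[0]) / h),
      np.isclose(dfdx[-1], (y[-1] - y[-2]) / h))            # True True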
In the 2D case, np.gradient still does this, but it additionally also walks from top to bottom and does the same. So in 2D it returns two arrays, one for the left-right direction and one for top-bottom. The finite-difference definition of the gradient in 2D is [(f(x+h,y) - f(x,y))/h, (f(x,y+h) - f(x,y))/h]. These values are indeed returned by np.gradient, one array per axis; which array holds which partial derivative depends on how the grid is laid out, as the example below notes. Say we are in 2D and want the gradient at x=3 and y=0; then we can plug it into np.gradient like this:

import numpy as np

hx = 1e-6
hy = 1e-3
x = [3, 3+hx]
y = [0, 0+hy]
xx, yy = np.meshgrid(x, y)

def f(x, y):
    return x**2 - 2*x*np.sin(y) + 1/x

grad = np.gradient(f(xx, yy), y, x)   # note the order of the spacings
print(grad[1][0,0], grad[0][0,0])     # note the order: this is dfdx, dfdy

But if the function f can be calculated by a computer, it makes more sense to just use automatic differentiation instead of finite differences. Automatic differentiation has no h that needs to be chosen carefully; it's always as accurate as possible.

import torch

x = torch.tensor([3.], requires_grad=True)
y = torch.tensor([0.], requires_grad=True)
z = x**2 - 2*x*torch.sin(y) + 1/x
z.backward()
print(x.grad, y.grad)

So what's the deal with the Taylor series? It's just a minor piece in the derivation of the more general expression used by np.gradient. We start by claiming that we can express the derivative by adding together function values in the direct neighborhood:

f'(x) = a f(x) + b f(x+hd) + c f(x-hs)

Given that finite differences do work out, this approach should work as well and generalizes the idea. Expand f(x+hd) and f(x-hs) with their Taylor series:

f(x+hd) = f(x) + hd f'(x) + hd^2 f''(x)/2 + ...
f(x-hs) = f(x) - hs f'(x) + hs^2 f''(x)/2 + ...

Then plug them in and rearrange:

f'(x) = a f(x) + b f(x) + b hd f'(x) + b hd^2 f''(x)/2 + c f(x) - c hs f'(x) + c hs^2 f''(x)/2
      = (a+b+c) f(x) + (b hd - c hs) f'(x) + (b hd^2 + c hs^2)/2 f''(x)
0 = (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

The = in the middle is really more of an approximately-equal sign. We won't be able to reach 0 for all f(x) as claimed on the left-hand side, but we can get pretty close. We do NOT want to minimize the right-hand side; we want it to reach 0 (it can go below 0 right now). To turn this into a minimization problem, we square it. This way we always get a positive number and it really becomes a matter of minimization. We could also take the absolute value instead of squaring, but that is a pain to work through and the end result is exactly the same parameters anyway. So we minimize E^2 with

E = (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

One requirement for an optimum is that the gradient is 0. In this case we take the derivatives with respect to a, b and c, because we want to find the optimal a, b, c. First a reminder of the chain rule: dE^2/dt = 2E dE/dt for whatever t is. (Using it is optional, but a bit less messy than working everything through individually.) In particular we have

dE^2/da = 2E dE/da = 2E f(x)
dE^2/db = 2E dE/db = 2E (f(x) + hd f'(x) + hd^2 f''(x)/2)
dE^2/dc = 2E dE/dc = 2E (f(x) - hs f'(x) + hs^2 f''(x)/2)

We want ALL three of them to be 0 at the same time. This can only happen if E is 0:

0 = (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

and we want this to be 0 for any f, f', f'' at any value of x. The only way for that to happen is if each coefficient is 0, i.e.
a + b + c = 0
b hd - c hs = 1
b hd^2 + c hs^2 = 0

We would need to check the second derivative to make sure that this is a minimum, not a maximum, but given the problem it is fairly clear. So why did we stop exactly after f'' in the Taylor series? Because this way we get exactly three unknowns and three equations, which is the most convenient to solve.

Multiply the second equation by hd, then subtract the third from it:

(b hd^2 - c hs hd) - (b hd^2 + c hs^2) = hd
-c hs^2 - c hs hd = hd
c hs (hs + hd) = -hd
c = -hd / (hs (hs+hd)) = -hd^2 / (hs hd (hs+hd))

where the last step is just so it looks exactly like in the np.gradient documentation. Insert c into the second equation:

b hd + hd hs / (hs (hs+hd)) = 1
b hd + hd/(hs+hd) = 1
b + 1/(hs+hd) = 1/hd
b = 1/hd - 1/(hs+hd)
b = (hs (hs+hd) - hs hd) / [hs hd (hs+hd)]
b = hs^2 / [hs hd (hs+hd)]

From the first equation we know that a = -b - c = (hd^2 - hs^2) / (hs hd (hs+hd)).

So here's your summary. If you have a function that can be calculated by a computer, use torch or tensorflow or any other framework for automatic differentiation. If you have a function that can be calculated by a computer but such a framework is not available, np.gradient is still a bad idea because it is inefficient: for the 2D gradient we needed three values, f(x,y), f(x+dx,y) and f(x,y+dy), but with np.gradient we would first need to set up arrays where it is almost natural to also include f(x+dx,y+dy), which is not needed for gradient calculations. It's more natural to set up a loop that increments x once, then y once, then z once, and so on; many solvers in scipy.optimize work with finite differences in this way. If you have a function that cannot be calculated by a computer, np.gradient may be useful. In practice this means that you have data from some experiment. Even there, the concept of a Taylor series plays no role UNLESS the data was taken on an unevenly spaced grid.
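As a sanity check of those weights (my own sketch, not part of the Reddit answer), they can be compared against np.gradient on an unevenly spaced grid:

import numpy as np

def f(x):
    return np.sin(x)

x = np.array([0.0, 0.3, 1.0])          # uneven spacing: hs = 0.3, hd = 0.7 around the middle point
hs, hd = x[1] - x[0], x[2] - x[1]

# the weights derived above
a = (hd**2 - hs**2) / (hs * hd * (hs + hd))
b = hs**2 / (hs * hd * (hs + hd))
c = -hd**2 / (hs * hd * (hs + hd))

manual = a * f(x[1]) + b * f(x[2]) + c * f(x[0])
print(manual)                          # weighted-difference estimate of f'(0.3)
print(np.gradient(f(x), x)[1])         # np.gradient's value at the middle point (matches)
print(np.cos(0.3))                     # exact derivative, for comparison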
The second answer from the Reddit thread:
You might enjoy this stackoverflow post on the same question
🌐
SciPy
docs.scipy.org › doc › scipy › reference › generated › scipy.differentiate.derivative.html
derivative — SciPy v1.17.0 Manual
>>> import numpy as np
>>> from scipy.differentiate import derivative
>>> f = np.exp
>>> df = np.exp  # true derivative
>>> x = np.linspace(1, 2, 5)
>>> res = derivative(f, x)
>>> res.df  # approximation of the derivative
array([2.71828183, 3.49034296, 4.48168907, 5.75460268, 7.3890561 ])
>>> res.error  # estimate of the error
array([7.13740178e-12, 9.16600129e-12, 1.17594823e-11, 1.51061386e-11, 1.94262384e-11])
>>> abs(res.df - df(x))  # true error
array([2.53130850e-14, 3.55271368e-14, 5.77315973e-14, 5.59552404e-14, 6.92779167e-14])
🌐
GeeksforGeeks
geeksforgeeks.org › python › how-to-compute-derivative-using-numpy
How to compute derivative using Numpy? - GeeksforGeeks
July 23, 2025 - Below are some examples where we compute the derivative of some expressions using NumPy. Here we are taking the expression in variable 'var' and differentiating it with respect to 'x'.

import numpy as np
# defining polynomial function
var = np.poly1d([1, 0, 1])
print("Polynomial function, f(x):\n", var)
# calculating the derivative
derivative = var.deriv()
print("Derivative, f(x)'=", derivative)
# calculates the derivative at a given value of x
print("When x=5 f(x)'=", derivative(5))
🌐
TutorialsPoint
tutorialspoint.com › how-to-compute-derivative-using-numpy
How to Compute Derivative Using Numpy?
July 20, 2023 - Let's say we want to compute the derivative of f(x) = x^2 over the range x = [0, 1, 2, 3]. We can do that by updating our code as follows: x = np.array([0, 1, 2, 3]) derivative = np.gradient(f(x), x) In this case, the gradient function will compute the derivative of f(x) = x^2 at each point in the domain and return an array representing the values of the derivative at each point.
🌐
Turing
turing.com › kb › derivative-functions-in-python
How to Calculate Derivative Functions in Python
Derivative calculations for different types of functions ... Below are a few code snippets that demonstrate the methods discussed earlier. The functions and input values in these examples can be adapted and modified to specific requirements. 1. Numerical differentiation using central difference:

import numpy as np

def f(x):
    return np.sin(x)

def central_difference(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.5
h = 0.01
f_prime = central_difference(f, x, h)
print(f_prime)
🌐
Derivative
docs.derivative.ca › NumPy
NumPy - Derivative
March 23, 2022 -

# import the NumPy library
import numpy as np

def onCook(scriptOp):
    scriptOp.clear()
    inputs = scriptOp.inputs[0]
    # get input CHOP as NumPy array
    inPos = inputs.numpyArray()
    # this results in a numChannels x numSamples array
    # we need to transpose the array to a numSamples x numChannels array
    # to get positions as pairs for each point
    # this is comparable to the Shuffle CHOP's "Swap Channels and Samples"
    inPos = inPos.T
    # using broadcasting, we can eliminate the for loop
    # and calculate the distance between all points
    # https://numpy.org/doc/stable/user/basics.broadcasting.html
    npDist = np.sqrt(((inPos[:,:,None] - inPos[:,:,None].T) ** 2).sum(1))
    # copy the NumPy array into the CHOP
    scriptOp.copyNumpyArray(npDist)
    return
🌐
Svitla Systems
svitla.com › home › articles › blog › numerical differentiation methods in python
Python for Numerical Differentiation: Methods & Tools
January 14, 2021 - Also, you can use the library numpy to calculate all derivative values in range x = 0..4 with step 0.01 as we set in the input function. Then, you can use the np.gradient method.
🌐
GitHub
github.com › HIPS › autograd
GitHub - HIPS/autograd: Efficiently computes derivatives of NumPy code. · GitHub
>>> import autograd.numpy as np  # Thinly-wrapped numpy
>>> from autograd import grad    # The only autograd function you may ever need
>>> def tanh(x):                 # Define a function
...     return (1.0 - np.exp((-2 * x))) / (1.0 + np.exp(-(2 * x)))
...
🌐
NumPy
numpy.org › doc › stable › reference › generated › numpy.polyder.html
numpy.polyder — NumPy v2.3 Manual
January 31, 2021 - The derivative of the polynomial x^3 + x^2 + x + 1 is:

>>> import numpy as np
>>> p = np.poly1d([1, 1, 1, 1])
>>> p2 = np.polyder(p)
>>> p2
poly1d([3, 2, 1])

which evaluates to:

>>> p2(2.)
17.0

We can verify this, approximating the derivative with (f(x + h) - f(x))/h: >>> (p(2.
🌐
Kitchin Research Group
kitchingroup.cheme.cmu.edu › blog › 2013 › 02 › 27 › Numeric-derivatives-by-differences
Numeric derivatives by differences
They work well for very smooth data. They are surprisingly fast even up to 10000 points in the vector.

x = np.linspace(0.78, 0.79, 100)
y = np.sin(x)
dy_analytical = np.cos(x)

Let's use a forward difference method: that works up until the last point, where there is not a forward difference to use.
🌐
Saturn Cloud
saturncloud.io › blog › numpy-or-scipy-derivative-function-for-nonuniform-spacing
Numpy or SciPy Derivative Function for Non-Uniform Spacing? | Saturn Cloud Blog
October 4, 2023 - In this example, x represents the non-uniformly spaced data points, and y represents the corresponding function values. The np.gradient() function calculates the derivative of y with respect to x and stores the result in dy_dx.