As pointed out by @SevC_10 in his answer, you are missing the dx parameter.

I'd like to showcase the use of sympy for differentiation; I find it much easier in many cases.

import sympy
import numpy as np

x = sympy.Symbol('x')

f = sympy.exp(x)  # my function e^x
df = f.diff(x)    # derivative of the function, also e^x

f_lambda = sympy.lambdify(x, f, 'numpy')
df_lambda = sympy.lambdify(x, df, 'numpy')  # use lambdify on df

print(f_lambda(np.ones(5)))

# array([2.71828183, 2.71828183, 2.71828183, 2.71828183, 2.71828183])

print(df_lambda(np.ones(5)))

# array([2.71828183, 2.71828183, 2.71828183, 2.71828183, 2.71828183])

print(f_lambda(np.zeros(5)))

# array([1., 1., 1., 1., 1.])

print(df_lambda(np.zeros(5)))

# array([1., 1., 1., 1., 1.])


print(f_lambda(np.array([0, 1, 2, 3, 4])))
# array([ 1.        ,  2.71828183,  7.3890561 , 20.08553692, 54.59815003])

print(df_lambda(np.array([0, 1, 2, 3, 4])))
# array([ 1.        ,  2.71828183,  7.3890561 , 20.08553692, 54.59815003])
Answer from Sreeram TP on Stack Overflow
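Once df is in place, the same pattern extends to higher-order derivatives. Below is a small sketch (my own toy function, not from the answer above) using f.diff(x, 2):

```python
import sympy
import numpy as np

x = sympy.Symbol('x')
g = x**3 + 2*x          # a toy polynomial, easy to check by hand
d2g = g.diff(x, 2)      # second derivative: 6*x

d2g_lambda = sympy.lambdify(x, d2g, 'numpy')
print(d2g_lambda(np.array([0.0, 1.0, 2.0])))  # 6*x elementwise: 0, 6, 12
```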

2 of 2
3

The scipy.misc.derivative function has other arguments. From help(derivative):

Parameters
----------
func : function
    Input function.
x0 : float
    The point at which the nth derivative is found.
dx : float, optional
    Spacing.
n : int, optional
    Order of the derivative. Default is 1.
args : tuple, optional
    Arguments
order : int, optional
    Number of points to use, must be odd.

As you can see, you didn't specify the dx parameter, so the approximate derivative is computed over a large interval, which causes a sizeable error. From the documentation, the default value is 1 (https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.derivative.html).

Simply try reducing the spacing interval: for example, using dx=1e-3 I get:

2.718281828459045
2.718282281505724
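The effect of the spacing can be reproduced without SciPy. Below is a minimal central-difference sketch (central_diff is my own helper, equivalent to what derivative computes with its default order=3):

```python
import math

def central_diff(f, x0, dx=1.0):
    # two-point central difference, the formula scipy.misc.derivative
    # uses by default (order=3, i.e. weights -1/2, 0, 1/2)
    return (f(x0 + dx) - f(x0 - dx)) / (2 * dx)

# the default spacing dx=1 is far too coarse for exp at x=1:
print(central_diff(math.exp, 1.0))            # (e^2 - 1)/2, not e
# a small spacing recovers the value quoted above:
print(central_diff(math.exp, 1.0, dx=1e-3))   # ≈ 2.718282281505724
```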
Discussions

Help with scipy derivatives
Here, SciPy computes derivatives numerically, using finite differencing. The default setting (from the docs) is the central difference with step size 1 about your point: f'(1) ≈ (f(2) − f(0))/2, which gives what you got. You can lower the dx setting and/or increase the order to get a better result. Try a = der(f, 1, dx=1e-5) instead.
🌐 r/learnpython · February 3, 2021
🌐 r/learnpython on Reddit: Help with scipy derivatives (February 3, 2021)

from scipy.misc import derivative as der
import sympy as sy

def f(x):
    return sy.sin(x)

a = der(f, 1)
print(a)

I really don't understand why this isn't working properly. I'm trying to differentiate sin(x) (as a simple test) at a certain point, but it gives me the wrong result (0.454648713412841 instead of 0.5403023....). The line a = der(f, 1) means I'm looking for the derivative of the function f at the point 1.

Why is this not working?
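The puzzling value can be reproduced by hand: with derivative's default spacing dx=1, the central difference at x=1 evaluates sin at 0 and 2 (a quick sketch, independent of SciPy):

```python
import math

# central difference with the default spacing dx=1:
coarse = (math.sin(1 + 1) - math.sin(1 - 1)) / 2   # (sin(2) - sin(0)) / 2
# the same formula with a small spacing:
h = 1e-5
fine = (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)

print(coarse)  # 0.45464871341284085 — the value from the question
print(fine)    # ≈ cos(1) ≈ 0.5403023
```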

Top answer
1 of 9
208

You have four options:

  1. Finite Differences
  2. Automatic Derivatives
  3. Symbolic Differentiation
  4. Compute derivatives by hand.

Finite differences require no external tools but are prone to numerical error and, if you're in a multivariate situation, can take a while.

Symbolic differentiation is ideal if your problem is simple enough. Symbolic methods are getting quite robust these days. SymPy is an excellent project for this that integrates well with NumPy. Look at the autowrap or lambdify functions or check out Jensen's blogpost about a similar question.

Automatic derivatives are very cool, aren't prone to numeric errors, but do require some additional libraries (google for this, there are a few good options). This is the most robust but also the most sophisticated/difficult to set up choice. If you're fine restricting yourself to numpy syntax then Theano might be a good choice.
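The idea behind forward-mode automatic differentiation can be sketched in a few lines with dual numbers; this toy class (my own illustration, not any particular library) propagates exact derivatives through + and *:

```python
class Dual:
    """A value a + b*eps with eps**2 == 0; the eps slot carries the derivative."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (a + a'e)(b + b'e) = ab + (ab' + a'b)e
        return Dual(self.val * other.val,
                    self.val * other.eps + self.eps * other.val)
    __rmul__ = __mul__

def deriv(f, x):
    return f(Dual(x, 1.0)).eps   # seed dx/dx = 1, read off the eps slot

print(deriv(lambda x: x * x + 1, 3.0))  # exactly 6.0, no step size involved
```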

Here is an example using SymPy

In [1]: from sympy import *
In [2]: import numpy as np
In [3]: x = Symbol('x')
In [4]: y = x**2 + 1
In [5]: yprime = y.diff(x)
In [6]: yprime
Out[6]: 2⋅x

In [7]: f = lambdify(x, yprime, 'numpy')
In [8]: f(np.ones(5))
Out[8]: [ 2.  2.  2.  2.  2.]
2 of 9
82

The most straightforward way I can think of is using numpy's gradient function:

import numpy
x = numpy.linspace(0, 10, 1000)
dx = x[1] - x[0]
y = x**2 + 1
dydx = numpy.gradient(y, dx)

This way, dydx will be computed using central differences and will have the same length as y, unlike numpy.diff, which uses forward differences and returns a vector of size n−1.
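The shape difference is easy to verify (a quick sketch comparing gradient and diff on the same data):

```python
import numpy as np

x = np.linspace(0, 10, 1000)
dx = x[1] - x[0]
y = x**2 + 1

dydx = np.gradient(y, dx)   # central differences: same length as y
dy = np.diff(y) / dx        # forward differences: one element shorter

print(len(y), len(dydx), len(dy))  # 1000 1000 999
# central differences are exact for a quadratic away from the endpoints:
print(np.allclose(dydx[1:-1], 2 * x[1:-1]))  # True
```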

Top answer
1 of 1
1

If you ignore the mathematical formulae in the tutorial you link to, and just look at the call itself,

res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
               options={'disp': True})

There are two python functions defined. One is the scalar functional, $f(\vec{x})$,

def rosen(x):
    """The Rosenbrock function"""
    return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)

and the other is the vector gradient function, $$\nabla f(\vec{x}) = \left(\left.\frac{\partial f}{\partial x_0}\right|_{\vec{x}},\ldots,\left.\frac{\partial f}{\partial x_{n-1}}\right|_{\vec{x}}\right)^T,$$

which is automatically the same size as $\vec{x}$.

def rosen_der(x):
    xm = x[1:-1]
    xm_m1 = x[:-2]
    xm_p1 = x[2:]
    der = np.zeros_like(x)
    der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
    der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
    der[-1] = 200*(x[-1]-x[-2]**2)
    return der

In terms of the code, this is deliberately written in the same form as $\vec{x}$ so that the iterative minimisation routine can form objects like $$\vec{x}^{(n+1)}=\vec{x}^{(n)}-\alpha \nabla f(\vec{x}^{(n)}),$$ so that the variable can move "downhill" towards a local minimum through successive iterations of the vector $\vec{x}$ (note that just using the last expression is the method of "steepest descent", which nobody would ever choose to use outside of a classroom). Hopefully this isn't too much of a surprise to you. If it is, you might want to read up a bit more on the theory of gradient-based optimisation before getting back to your coding.

In terms of your actual code, it should be enough to use basic arrays with numpy.dot to perform matrix multiplication, rather than writing everything as matrices (which numpy makes 2-d), or, if you really want to keep this style in your functions, use numpy.asarray and numpy.ravel when returning the result. At worst, you need to double-check that your matrix $Q$ is $N\times N$ and that you make $x$ a column vector (which, as I said before, asmatrix doesn't do).
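Assembled into a runnable whole (a sketch of the tutorial's call, using plain 1-d arrays throughout; the starting point x0 is the one from the SciPy tutorial):

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    """The Rosenbrock function."""
    return sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

def rosen_der(x):
    """Its analytic gradient, same shape as x."""
    xm, xm_m1, xm_p1 = x[1:-1], x[:-2], x[2:]
    der = np.zeros_like(x)
    der[1:-1] = 200*(xm - xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1 - xm)
    der[0] = -400*x[0]*(x[1] - x[0]**2) - 2*(1 - x[0])
    der[-1] = 200*(x[-1] - x[-2]**2)
    return der

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='BFGS', jac=rosen_der)
print(res.x)  # converges to the minimum at [1, 1, 1, 1, 1]
```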

Top answer
1 of 3
10

FFT returns a complex array that has the same dimensions as the input array. The output array is ordered as follows:

  1. Element 0 contains the zero frequency component, F0.

  2. The array element F1 contains the smallest, nonzero positive frequency, which is equal to 1/(Ni Ti), where Ni is the number of elements and Ti is the sampling interval.

  3. F2 corresponds to a frequency of 2/(Ni Ti).

  4. Negative frequencies are stored in the reverse order of positive frequencies, ranging from the highest to lowest negative frequencies.

  5. For an even number of points, the frequencies corresponding to the returned complex values are: 0, 1/(NiTi), 2/(NiTi), ..., (Ni/2–1)/(NiTi), 1/(2Ti), –(Ni/2–1)/(NiTi), ..., –1/(NiTi) where 1/(2Ti) is the Nyquist critical frequency.

  6. For an odd number of points, the frequencies corresponding to the returned complex values are: 0, 1/(NiTi), 2/(NiTi), ..., (Ni–1)/2)/(NiTi), –(Ni–1)/2)/(NiTi), ..., –1/(NiTi)

Using this information, we can construct the proper vector of frequencies that should be used for calculating the derivative. Below is a piece of self-explanatory Python code that does it all correctly. Note that the factor $2\pi N$ cancels out due to the normalization of the FFT.

from scipy.fftpack import fft, ifft, dct, idct, dst, idst, fftshift, fftfreq
from numpy import linspace, zeros, array, pi, sin, cos, exp, arange
import matplotlib.pyplot as plt


N = 100
x = 2*pi*arange(0,N,1)/N  # open-periodic domain

dx = x[1]-x[0]
y = sin(2*x)+cos(5*x)
dydx = 2*cos(2*x)-5*sin(5*x)


k2 = zeros(N)

if (N % 2) == 0:
    # even number of points: positive frequencies 1 .. N/2 - 1
    # (the Nyquist mode k2[N/2] is left at zero)
    for i in range(1, N//2):
        k2[i] = i
        k2[N-i] = -i
else:
    # odd number of points: positive frequencies 1 .. (N-1)/2
    for i in range(1, (N-1)//2 + 1):
        k2[i] = i
        k2[N-i] = -i

dydx1 = ifft(1j*k2*fft(y)).real  # imaginary part is round-off only

plt.plot(x,dydx,'b',label='Exact value')
plt.plot(x,dydx1, color='r', linestyle='--', label='Derivative by FFT')
plt.legend()
plt.show()

2 of 3
9

Maxim Umansky’s answer describes the storage convention of the FFT frequency components in detail, but doesn’t necessarily explain why the original code didn’t work. There are three main problems in the code:

  1. x = linspace(0,2*pi,N): By constructing your spatial domain like this, your x values will range from $0$ to $2\pi$, inclusive! This is a problem because your function y = sin(2*x)+cos(5*x) is not exactly periodic on this domain ($0$ and $2\pi$ correspond to the same point, but they’re included twice). This causes spectral leakage and thus a small deviation in the result. You can avoid this by using x = linspace(0,2*pi,N, endpoint=False) (or x = 2*pi*arange(0,N,1)/N, as Maxim Umansky did; this is what he is referring to with “open-periodic domain”).
  2. k = fftshift(k): As Maxim Umansky explained, your k values need to be in a specific order to match the FFT convention. fftshift sorts the values (from small/negative to large/positive), which can be useful e.g. for plotting, but is incorrect for computations.
  3. dydx1 = ifft(-k*1j*fft(y)).real: scipy defines the FFT as y(j) = (x * exp(-2*pi*sqrt(-1)*j*np.arange(n)/n)).sum(), i.e. with a factor of $2\pi$ in the exponential, so you need to include this factor when deriving the formula for the derivative. Also, for scipy's FFT convention, the k values shouldn't get a minus sign.

So, with these three changes, the original code can be corrected as follows:

from scipy.fftpack import fft, ifft, dct, idct, dst, idst, fftshift, fftfreq
from numpy import linspace, zeros, array, pi, sin, cos, exp
import matplotlib.pyplot as plt

N = 100
x = linspace(0,2*pi,N, endpoint=False) # (1.)

dx = x[1]-x[0]
y = sin(2*x)+cos(5*x)
dydx = 2*cos(2*x)-5*sin(5*x)

k = fftfreq(N,dx) # (2.)

dydx1 = ifft(2*pi*k*1j*fft(y)).real # (3.)

plt.plot(x,dydx,'b',label='Exact value')
plt.plot(x,dydx1,'r',label='Derivative by FFT')
plt.legend()
plt.show()
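As a sanity check (a small verification sketch using the same setup, without the plotting), the corrected spectral derivative matches the analytic one down to round-off:

```python
import numpy as np
from scipy.fftpack import fft, ifft, fftfreq

N = 100
x = np.linspace(0, 2*np.pi, N, endpoint=False)
dx = x[1] - x[0]
y = np.sin(2*x) + np.cos(5*x)
dydx_exact = 2*np.cos(2*x) - 5*np.sin(5*x)

k = fftfreq(N, dx)                         # frequencies in FFT order
dydx_fft = ifft(2*np.pi*k*1j*fft(y)).real  # spectral derivative

print(np.max(np.abs(dydx_fft - dydx_exact)))  # round-off level
```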