🌐
Très Facile
tresfacile.net › la-methode-numpy-gradient
The numpy.gradient() method – Très Facile
August 25, 2023 - The numpy.gradient() method is a function in Python's NumPy module that computes the gradients of a multidimensional array (such as a NumPy array) using a discrete approach.
🌐
KooR.fr
koor.fr › Python › API › scientist › numpy › gradient.wp
KooR.fr - Fonction gradient - module numpy - Description de quelques librairies Python
Returns
-------
gradient : ndarray or tuple of ndarray
    A tuple of ndarrays (or a single ndarray if there is only one dimension) corresponding to the derivatives of f with respect to each dimension. Each derivative has the same shape as f.

Examples
--------
>>> import numpy as np
>>> f = np.array([1, 2, 4, 7, 11, 16])
>>> np.gradient(f)
array([1.
🌐
Vdeborto
vdeborto.github.io › teaching › optimization › cours_optim › TP › tp1 › tp1_2017.pdf pdf
Gradient descent methods and Newton algorithms
exponentially with N. Here is the Python code that computes the function to minimize, its ... A exp(−σx) sin(ωx), θ = (A, σ, ω). We now introduce the Levenberg-Marquardt method, which is a quasi-Newton method. ... Visualize the generated data.
🌐
Mrmint
mrmint.fr › accueil › gradient descent algorithm : explications et implémentation en python
Gradient Descent Algorithm: Explanation and Implementation in Python
August 31, 2017 - We have just implemented the Gradient Descent algorithm. At each iteration it tries to reduce the global error cost by minimizing the cost function. We can check this by watching how the cost value evolves over the iterations.
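A minimal sketch of the kind of check the snippet describes (not the article's own code): run plain gradient descent on a simple quadratic cost, invented here for illustration, and record the cost at each iteration to confirm it decreases.

import numpy as np

# Hypothetical example: minimize f(w) = (w - 3)^2 with plain gradient descent
w = 0.0
lr = 0.1
costs = []
for _ in range(50):
    grad = 2 * (w - 3)            # f'(w)
    w -= lr * grad
    costs.append((w - 3) ** 2)

print(costs[0], costs[-1])        # the cost shrinks towards 0 as w approaches 3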
🌐
NumPy
numpy.org › doc › 2.1 › reference › generated › numpy.gradient.html
numpy.gradient — NumPy v2.1 Manual
Gradient is calculated only along the given axis or axes. The default (axis=None) is to calculate the gradient for all the axes of the input array.
🌐
NumPy
numpy.org › doc › stable › reference › generated › numpy.gradient.html
numpy.gradient — NumPy v2.4 Manual
Gradient is calculated only along the given axis or axes. The default (axis=None) is to calculate the gradient for all the axes of the input array.
🌐
Université de Lyon 1
perso.univ-lyon1.fr › marc.buffat › COURS › BOOK_OUTILSNUM_HTML › IntroIA › source › TP1_Introduction_IA › TP1b_minimisation_etu.html
7.3. Lab session: multi-dimensional minimization methods — Advanced Numerical Tools in Mechanics
This cost function J() is implemented in Python as an object (class instance): J is the name of the object; J.dim() is the dimension of X, i.e. the number of variables J depends on; J(X) computes the value of J for a vector X of dimension J.dim(); J.grad(X) computes the gradient of J at X.
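A hypothetical illustration of such an object (not the lab's actual class), assuming a toy cost J(X) = ||X||², just to show the J.dim() / J(X) / J.grad(X) interface described above.

import numpy as np

class Quadratic:
    # Toy cost object exposing the dim() / __call__ / grad() interface described above
    def __init__(self, n):
        self.n = n
    def dim(self):
        return self.n                  # number of variables J depends on
    def __call__(self, X):
        return float(np.dot(X, X))     # J(X) = ||X||^2
    def grad(self, X):
        return 2.0 * np.asarray(X)     # gradient of ||X||^2

J = Quadratic(3)
X = np.array([1.0, -2.0, 0.5])
print(J.dim(), J(X), J.grad(X))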
🌐
CodinGame
codingame.com › playgrounds › 53882 › descente-de-gradient-fonction-de-deux-variables
Gradient descent (function of two variables)
CodinGame is a challenge-based training platform for programmers where you can play with the hottest programming topics. Solve games, code AI bots, learn from your peers, have fun.
🌐
YouTube
youtube.com › orkia derkaoui
Gradient Descent Method with Python: solving a maximization problem f(x)=x², 2/4 - YouTube
Example: for a function of a single variable: min f(x) = x² - x + 1, x in [-10,10]. Initial solution x0: chosen at random or with a ...
Published   August 26, 2021
Views   1K
🌐
CodinGame
codingame.com › playgrounds › 53874 › descente-de-gradient
Gradient descent
CodinGame is a challenge-based training platform for programmers where you can play with the hottest programming topics. Solve games, code AI bots, learn from your peers, have fun.
🌐
Moonbooks
moonbooks.org › Articles › Algorithme-du-gradient-gradient-descent-avec-python
How to implement the gradient descent algorithm in Python to find a local minimum?
March 21, 2017 - Gradient descent algorithm with Python (2D)

from scipy import misc
import matplotlib.pyplot as plt
import numpy as np
import math

# Function
def fonction(x1, x2):
    return -1.0 * math.exp(-x1**2 - x2**2)

def partial_derivative(func, var=0, point=[]):
    args = point[:]
    def wraps(x):
        args[var] = x
        return func(*args)
    return misc.derivative(wraps, point[var], dx=1e-6)

# Plot Function
x1 = np.arange(-2.0,
🌐
GeeksforGeeks
geeksforgeeks.org › how-to-find-gradient-of-a-function-using-python
How to find Gradient of a Function using Python? | GeeksforGeeks
July 28, 2020 - The gradient of a function simply means the rate of change of a function. We will use numdifftools to find the gradient of a function. Examples: Input: x^4+x+1 Output: Gradient of x^4+x+1 at x=1 is 4.99 Input: (1-x)^2+(y-x^2)^2 Output: Gradient ...
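A short sketch of the approach the snippet describes, assuming the standard numdifftools API (nd.Gradient); the function and evaluation point below are chosen only for illustration, matching the snippet's second example.

import numdifftools as nd

# Gradient of (1-x)^2 + (y - x^2)^2, evaluated numerically at (1, 1)
def f(p):
    x, y = p
    return (1 - x) ** 2 + (y - x ** 2) ** 2

grad = nd.Gradient(f)
print(grad([1.0, 1.0]))   # expected close to [0, 0] (a stationary point)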
🌐
Traimaocv
traimaocv.fr › CoursStereoVision › co › imageGradientPy.html
Introduction to image processing and stereo vision - Computing the gradient of an image with Python
We can visualize the gradient vector. The magnitude of the gradient indicates the contrast (change in intensity) and the angle indicates the direction of that change. To obtain this image, use this Python script:
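The page's actual script is not reproduced in the snippet; a hedged sketch of the idea, assuming a plain NumPy approach: np.gradient applied to a grayscale image gives the two partial derivatives, from which the magnitude (contrast) and angle (direction of change) follow.

import numpy as np

img = np.random.rand(64, 64)              # stand-in for a grayscale image
gy, gx = np.gradient(img.astype(float))   # derivatives along rows and columns
magnitude = np.hypot(gx, gy)              # contrast: strength of the intensity change
angle = np.arctan2(gy, gx)                # direction of the intensity change
print(magnitude.shape, angle.shape)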
Top answer
1 of 4
196

Also in the documentation¹:

>>> y = np.array([1, 2, 4, 7, 11, 16], dtype=float)
>>> j = np.gradient(y)
>>> j 
array([ 1. ,  1.5,  2.5,  3.5,  4.5,  5. ])
  • Gradient is defined as (change in y)/(change in x).

  • x, here, is the list index, so the difference between adjacent values is 1.

  • At the boundaries, the first difference is calculated. This means that at each end of the array, the gradient given is simply the difference between the two end values (divided by 1).

  • Away from the boundaries, the gradient for a particular index is given by taking the difference between the values on either side and dividing by 2.

So, the gradient of y, above, is calculated thus:

j[0] = (y[1]-y[0])/1 = (2-1)/1  = 1
j[1] = (y[2]-y[0])/2 = (4-1)/2  = 1.5
j[2] = (y[3]-y[1])/2 = (7-2)/2  = 2.5
j[3] = (y[4]-y[2])/2 = (11-4)/2 = 3.5
j[4] = (y[5]-y[3])/2 = (16-7)/2 = 4.5
j[5] = (y[5]-y[4])/1 = (16-11)/1 = 5

You could find the minima of all the absolute values in the resulting array to find the turning points of a curve, for example.
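For instance, a minimal sketch of that idea (the values below are illustrative, not from the question):

import numpy as np

y = np.array([0.0, 1.2, 1.9, 2.0, 1.7, 1.0, 0.1])
g = np.gradient(y)                # same length as y
turn = np.argmin(np.abs(g))       # index where the slope is closest to zero
print(turn, y[turn])              # 3  2.0  -> near the maximum of the curve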


¹ The array is actually called x in the example in the docs; I've changed it to y to avoid confusion.

2 of 4
32

Here is what is going on. The Taylor series expansion guides us on how to approximate the derivative, given the value at close points. The simplest comes from the first order Taylor series expansion for a C^2 function (two continuous derivatives)...

  • f(x+h) = f(x) + f'(x)h + f''(xi)h^2/2.

One can solve for f'(x)...

  • f'(x) = [f(x+h) - f(x)]/h + O(h).

Can we do better? Yes indeed. If we assume C^3, then the Taylor expansion is

  • f(x+h) = f(x) + f'(x)h + f''(x)h^2/2 + f'''(xi) h^3/6, and
  • f(x-h) = f(x) - f'(x)h + f''(x)h^2/2 - f'''(xi) h^3/6.

Subtracting these (both the h^0 and h^2 terms drop out!) and solving for f'(x):

  • f'(x) = [f(x+h) - f(x-h)]/(2h) + O(h^2).

So, if we have a discretized function defined on equidistant partitions x = x_0, x_1 = x_0 + h, ..., x_n = x_0 + n*h, then numpy gradient will yield a "derivative" array using the first-order estimate at the ends and the better estimates in the middle.
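A quick check of that claim, reusing the array from the answer above (chosen only for illustration):

import numpy as np

f = np.array([1.0, 2.0, 4.0, 7.0, 11.0, 16.0])
print(np.gradient(f))                 # [1.  1.5 2.5 3.5 4.5 5. ]

# one-sided estimates at the two ends, central differences inside
print(f[1] - f[0], f[-1] - f[-2])     # 1.0  5.0
print((f[2:] - f[:-2]) / 2.0)         # [1.5 2.5 3.5 4.5]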

Example 1. If you don't specify any spacing, the interval is assumed to be 1. So if you call

f = np.array([5, 7, 4, 8])

what you are saying is that f(0) = 5, f(1) = 7, f(2) = 4, and f(3) = 8. Then

np.gradient(f) 

will be: f'(0) = (7 - 5)/1 = 2, f'(1) = (4 - 5)/(2*1) = -0.5, f'(2) = (8 - 7)/(2*1) = 0.5, f'(3) = (8 - 4)/1 = 4.

Example 2. If you specify a single spacing, the spacing is uniform but not 1.

For example, if you call

np.gradient(f, 0.5)

this is saying that h = 0.5, not 1, i.e., the function is really f(0) = 5, f(0.5) = 7, f(1.0) = 4, f(1.5) = 8. The net effect is to replace h = 1 with h = 0.5 and all the results will be doubled.
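A quick check of the doubling effect described above:

import numpy as np

f = np.array([5.0, 7.0, 4.0, 8.0])
print(np.gradient(f))        # spacing 1:   [ 2.  -0.5  0.5  4. ]
print(np.gradient(f, 0.5))   # spacing 0.5: [ 4.  -1.   1.   8. ]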

Example 3. Suppose the discretized function f(x) is not defined on uniformly spaced intervals, for instance f(0) = 5, f(1) = 7, f(3) = 4, f(3.5) = 8. In that case numpy uses a messier discretized differentiation formula, and you will get the discretized derivatives by calling

np.gradient(f, np.array([0,1,3,3.5]))

Lastly, if your input is a 2d array, then you are thinking of a function f of x, y defined on a grid. The numpy gradient will output the arrays of "discretized" partial derivatives in x and y.
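A minimal 2-D illustration (values chosen only for the example): np.gradient returns one array per axis, each holding the discretized partial derivative along that axis.

import numpy as np

z = np.arange(12, dtype=float).reshape(3, 4)   # f sampled on a 3x4 grid
d_axis0, d_axis1 = np.gradient(z)              # derivative along rows, then along columns
print(d_axis0)   # all 4.0: consecutive rows differ by 4
print(d_axis1)   # all 1.0: consecutive columns differ by 1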

🌐
Progresser-en-maths
progresser-en-maths.com › accueil › cours › maths pour le ml › la descente de gradient : cours complet, variantes et python
Gradient descent: complete course, variants and Python
March 15, 2026 - We then present the modern variants used in practice (SGD, mini-batch, momentum, Adam), before implementing everything in Python on a concrete linear-regression example. Prerequisites: the gradient, numerical sequences (notions of convergence and geometric sequences). ... We want to minimize a function f : \mathbb{R}^n \to \mathbb{R} (called the cost function in machine learning).
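A minimal, hypothetical sketch of the kind of implementation the course describes: plain batch gradient descent on a least-squares linear regression. The data, step size and iteration count below are invented for the example, not taken from the course.

import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])   # intercept + one feature
y = X @ np.array([1.0, 3.0]) + 0.1 * rng.normal(size=50)

w = np.zeros(2)
lr = 0.5
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of (1/2)*mean squared error
    w -= lr * grad

print(w)   # close to the true coefficients [1, 3]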
🌐
Reddit
reddit.com › r/learnpython › need help in understanding np.gradient for calculating derivatives
r/learnpython on Reddit: Need help in understanding np.gradient for calculating derivatives
June 30, 2023 -

Hi, I'm trying to expand my knowledge in machine learning. I came across the np.gradient function and wanted to understand how it relates to the Taylor series for estimating values. The documentation seemed a bit confusing for a novice.

Top answer
1 of 2
7
One definition of the derivative is f'(x) = (f(x+h)-f(x))/h where h goes to 0. Computers cannot store infinitely small numbers, so they might set h=1e-6 (that is 0.000001). It's a tradeoff: while we want h to be as small as possible, at some point the errors due to computer precision begin to dominate. Given any function that the computer can calculate, it can approximate the derivative.

import numpy as np
import matplotlib.pyplot as plt

h = 1e-6

def f(x):
    return np.sin(x)

x = np.arange(-2, 2, 0.01)
y = f(x)
dfdx = (f(x+h) - f(x)) / h
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

Assuming that the function is reasonably smooth (i.e. the derivative above exists), another definition of the derivative is f'(x) = (f(x+h)-f(x-h))/(2h) where h goes to 0. Going from x-h to x+h means 2 steps; that's the reason for 2h. It works just as well. These methods are named finite differences, to contrast with the usual derivative definition where h is infinitely small. The first one is the forward difference and the second one is called the central difference. The backward difference is (f(x)-f(x-h))/h.

Let's assume we want to write a derivative function. It takes a function f and values of x, and gives back f'(x).

def f(x):
    return np.sin(x)

def d(fun, x):
    return (fun(x+h) - fun(x)) / h

x = np.arange(-2, 2, 0.01)
y = f(x)
dfdx = d(f, x)
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

By passing the function into the function, the derivative function can just call fun wherever it wants/needs to get the derivative.

Now things become a bit more inconvenient. For some reason we do not know f. We only know y, i.e. f(x) for some values of x. Let's say that x is evenly spaced as usual. Then our best guess for h is not really tiny but identical to the spacing between neighboring x values. With the forward difference we need to take care at the rightmost value because we cannot just add +h to get a value even further out. Instead we use the backward difference there. For values in the middle we decide to use the central difference instead of the forward difference.

def f(x):
    return np.sin(x)

def d(y, h=1):
    dfdx = [(y[1] - y[0]) / h]                   # forward difference at the left end
    for i in range(1, len(y) - 1):
        dfdx.append((y[i+1] - y[i-1]) / 2 / h)   # central difference in the middle
    dfdx.append((y[-1] - y[-2]) / h)             # backward difference at the right end
    return dfdx

h = 0.01
x = np.arange(-2, 2, h)
y = f(x)
dfdx = d(y, h)
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

The implementation above corresponds to np.gradient in the one-dimensional case where varargs is set as in case 1 or 2. The case where varargs is set as in case 3 or 4 would use x directly in d instead of h. However, at that point the formula is more complicated, as they mention in the documentation. Effectively any point has a hd (the forward step size) and a hs (the backward step size), and the formula is not just (f(x+hd)-f(x-hs))/(hd+hs) but instead that bigger expression given in the documentation, where the values of hd, hs act as some kind of weights. np.gradient is basically the backward, central and forward differences combined. When you have values like f(1), f(2), f(2+h) and want the derivative at 2, the code notices that 2 and 2+h are very close together and puts greater weight on that (and mostly ignores f(1)).

The important part so far is that np.gradient, when given a vector with N elements, calculates N one-dimensional derivatives, which is not the typical idea of a gradient. np.gradient does support more dimensions, which might make things clearer. So in the 1D case, we essentially go through all values from left to right and then consider that value and its direct left and right neighbor to quantify the uptrend or downtrend.

In the 2D case, np.gradient still does this, but additionally also walks from top to bottom and does the same. So in 2D it returns 2 arrays, one for left-right and one for top-bottom. The actual definition of the gradient by finite differences is [(f(x+h,y)-f(x,y))/h, (f(x,y+h)-f(x,y))/h] in 2D. These values are indeed returned by np.gradient: the left part is in the first array and the right part in the second array. Say we are in 2D and want the gradient at x=3 and y=0, then we can plug it into np.gradient like this:

hx = 1e-6
hy = 1e-3
x = [3, 3 + hx]
y = [0, 0 + hy]
xx, yy = np.meshgrid(x, y)

def f(x, y):
    return x**2 - 2*x*np.sin(y) + 1/x

grad = np.gradient(f(xx, yy), y, x)   # Note the order.
print(grad[1][0, 0], grad[0][0, 0])   # Note the order. This is dfdx, dfdy.

But if the function f can be calculated by a computer, it makes more sense to just use automatic differentiation instead of finite differences. Automatic differentiation has no h that needs to be chosen carefully. It's always as accurate as possible.

import torch
x = torch.tensor([3.], requires_grad=True)
y = torch.tensor([0.], requires_grad=True)
z = x**2 - 2*x*torch.sin(y) + 1/x
z.backward()
print(x.grad, y.grad)

So what's the deal with the Taylor series? It's just a minor piece in the derivation of that more general expression used by np.gradient. We just start by claiming that we can express the derivative by adding together function values in the direct neighborhood:

f'(x) = a f(x) + b f(x+hd) + c f(x-hs)

Given that finite differences do work out, this approach should work as well and generalize the idea. Expand f(x+hd) and f(x-hs) with their series:

f(x+hd) = f(x) + hd f'(x) + hd^2 f''(x)/2 + ...
f(x-hs) = f(x) - hs f'(x) + hs^2 f''(x)/2 - ...

Then plug it in and reshape:

f'(x) = a f(x) + b f(x) + b hd f'(x) + b hd^2 f''(x)/2 + c f(x) - c hs f'(x) + c hs^2 f''(x)/2
      = (a+b+c) f(x) + (b hd - c hs) f'(x) + (b hd^2 + c hs^2)/2 f''(x)
0 = (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

The = in the middle is actually more of an approximately-equal sign. We won't be able to reach 0 for all f(x) as claimed on the left-hand side, but we can get pretty close. We do NOT want to minimize the right-hand side. We want it to reach 0 (it can go below 0 right now). To turn this into a minimization problem, we square it. This way we always get a positive number and it really becomes a matter of minimization. We COULD also take the absolute value instead of squaring, but it's a pain to work this through and the end result is exactly the same parameters anyway. To minimize: E^2 with

E = (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

One requirement for an optimum is that the gradient is 0. In this case we take the derivatives with respect to a, b, c because we want to find the optimal a, b, c. First a reminder of the chain rule: dE^2/dt = 2E dE/dt for whatever t is. It's optional to do this but a bit less messy than working it through individually. In particular we have

dE^2/da = 2E dE/da = 2E f(x)
dE^2/db = 2E dE/db = 2E (f(x) + hd f'(x) + hd^2 f''(x)/2)
dE^2/dc = 2E dE/dc = 2E (f(x) - hs f'(x) + hs^2 f''(x)/2)

We want ALL three of them to be 0 at the same time. This can only happen if E is 0:

0 := (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

and we want this to be 0 for any f, f', f'' for any value of x. The only way for this to happen is if each coefficient is 0, i.e.

a + b + c = 0
b hd - c hs = 1
b hd^2 + c hs^2 = 0

We would need to check the second derivative to make sure that this is a minimum, not a maximum, but given the problem it is fairly clear. So why did we stop exactly after f'' in the Taylor series? It's because this way we get exactly 3 unknowns and 3 equations, which is the most convenient to solve. Multiply the second equation by hd, then subtract the third from it:

(b hd^2 - c hs hd) - (b hd^2 + c hs^2) = hd
-c hs^2 - c hs hd = hd
c hs (hs + hd) = -hd
c = -hd/hs/(hs+hd) = -hd^2 / (hs hd (hs+hd))

where the last step is just so it looks exactly like in np.gradient. Insert c into the second equation:

b hd + hd/hs/(hs+hd) hs = 1
b hd + hd/(hs+hd) = 1
b + 1/(hs+hd) = 1/hd
b = 1/hd - 1/(hs+hd)
b = (hs(hs+hd) - hs hd) / [hs hd (hs+hd)]
b = hs^2 / [hs hd (hs+hd)]

From the first equation we know that a = -b-c = (hd^2 - hs^2)/(hs hd (hs+hd)).

So here's your summary: If you have a function that can be calculated by a computer, use torch or tensorflow or any other framework for automatic differentiation. If you have a function that can be calculated by a computer but such a framework is not available, np.gradient is still a bad idea because it is inefficient. Note that for the 2D gradient we needed three values, f(x,y), f(x+dx,y), f(x,y+dy). But with np.gradient we would first need to set up arrays where it is almost natural to also include f(x+dx,y+dy), which is not needed for gradient calculations. It's more natural to set up some loop that increments x once, then y once, then z once, and so on. Many solvers in scipy.optimize work with finite differences. If you have a function that cannot be calculated by a computer, np.gradient may be useful. In practice this means that you have data from some experiment. Even there, the concept of a Taylor series plays no role UNLESS the data was taken on an unevenly spaced grid.
2 of 2
2
You might enjoy this Stack Overflow post on the same question.
🌐
Telecom ParisTech
perso.telecom-paristech.fr › mozharovskyi › resources › TP_Python_perceptron.pdf pdf
Introduction to Python, Stochastic Gradient and the Perceptron
Currently I am Full Professor at Télécom Paris in the Team Signal, Statistique et Apprentissage (S2A) of the Information Processing and Communication Laboratory (LTCI). After having finished my studies at Kyiv Polytechnic Institute in automation control and informatics, I obtained a PhD degree ...
🌐
Polytechnique
jupyter_map412.gitlab.labos.polytechnique.fr › jupyterbook › content › chapter › syst_lin_iteratif › gradient.html
Gradient methods — Introduction to Numerical Analysis
# initialization
xk = [np.zeros(2)]
rk = b - np.dot(a, xk[0])
pk = rk
rkm1 = rk

# gradient iterations
for k in range(b.size):
    apk = np.dot(a, pk)
    alpha = np.dot(rk, rk) / np.dot(pk, np.dot(a, pk))
    xk.append(xk[k] + alpha * pk)
    rk = rk - alpha * apk
    if (np.linalg.norm(rk) < 1.e-6):
        break
    beta = np.dot(rk, rk) / np.dot(rkm1, rkm1)
    pk = rk + beta * pk
    rkm1 = rk

xk = np.array(xk)
print("Gradient conjugué :")
print(f"-> nb iteration {k+1}")

x1 = np.linspace(-0.5, 1., 200)
x2 = np.linspace(-2.5, 0.5, 200)
x1, x2 = np.meshgrid(x1, x2)
z = f(a, b, x1, x2)
nlevel = 3
level = f(a, b, xk[0:nlevel, 0], xk[0:nlevel, 1])
plt.figure(figsize=(7.5, 15))
plt.contour(x1, x2, z, np.flip(level))
plt.plot(xk[0:nlevel, 0], xk[0:nlevel, 1], '--ro')
plt.scatter(xexa[0], xexa[1])
plt.show()
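The snippet assumes a, b, f and xexa are defined earlier on the page. A minimal, hypothetical setup that would make it self-contained could look like this: a small symmetric positive-definite system and the associated quadratic functional (the values are invented for illustration).

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical setup for the snippet above (not from the original page)
a = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric positive-definite matrix
b = np.array([0.0, -2.5])
xexa = np.linalg.solve(a, b)        # exact solution (0.5, -1.5), used for the scatter point

def f(a, b, x1, x2):
    # quadratic functional 1/2 x.A.x - b.x, evaluated elementwise on a grid or on points
    return 0.5 * (a[0, 0] * x1**2 + 2 * a[0, 1] * x1 * x2 + a[1, 1] * x2**2) \
           - (b[0] * x1 + b[1] * x2)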