The problem is that NumPy can't give you the derivatives directly, so you have two options:

With NUMPY

What you essentially have to do is define a grid in three dimensions and evaluate the function on that grid. Afterwards you feed this table of function values to numpy.gradient, which returns one array of numerical derivatives per dimension (variable).

Example from here:

import numpy as np

# Define a 3-D grid with a step of 25 along each axis
x, y, z = np.mgrid[-100:101:25., -100:101:25., -100:101:25.]

# Just a random function for the potential
V = 2*x**2 + 3*y**2 - 4*z

# Pass the grid spacing (25.), otherwise np.gradient assumes unit spacing
Ex, Ey, Ez = np.gradient(V, 25.)
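To sanity-check the numbers (a sketch not in the original answer), you can compare the numerical gradient with the analytic partial derivatives dV/dx = 4x, dV/dy = 6y, dV/dz = -4. Note that np.gradient assumes unit spacing unless you pass the actual grid step of 25; with edge_order=2 the boundary stencils are exact for a quadratic potential:

```python
import numpy as np

# Same grid and potential as above
x, y, z = np.mgrid[-100:101:25., -100:101:25., -100:101:25.]
V = 2*x**2 + 3*y**2 - 4*z

# edge_order=2 makes the one-sided boundary stencils exact for quadratics
Ex, Ey, Ez = np.gradient(V, 25., edge_order=2)

# Analytic partial derivatives: dV/dx = 4x, dV/dy = 6y, dV/dz = -4
print(np.allclose(Ex, 4*x), np.allclose(Ey, 6*y), np.allclose(Ez, -4.0))
# → True True True
```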

Without NUMPY

You could also calculate the derivative yourself by using the centered difference quotient.

This is essentially what numpy.gradient does at every point of your predefined grid.
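A minimal sketch of that centered difference quotient, df/dx ≈ (f(x+h) - f(x-h)) / (2h), using a hypothetical helper not in the original answer:

```python
import numpy as np

def central_diff(f, x, h=1e-5):
    # Centered difference quotient: second-order accurate in h
    return (f(x + h) - f(x - h)) / (2 * h)

# Check against a known derivative: d/dx sin(x) = cos(x)
x = 1.3
approx = central_diff(np.sin, x)
print(abs(approx - np.cos(x)) < 1e-9)
# → True
```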

Answer from Stefan on Stack Overflow
Top answer

You need to give gradient a matrix (2-D array) of your function's values at your (x, y) points. e.g.

import numpy as np

def f(x, y):
    return np.sin(x + y)

x = y = np.arange(-5, 5, 0.05)
X, Y = np.meshgrid(x, y)
zs = np.array([f(x, y) for x, y in zip(np.ravel(X), np.ravel(Y))])
Z = zs.reshape(X.shape)

# With meshgrid's default indexing, axis 0 runs along y and axis 1 along x,
# so np.gradient returns the y-derivative first; pass the 0.05 grid spacing
gy, gx = np.gradient(Z, 0.05, 0.05)

You can see that plotting Z as a surface gives: (surface plot of sin(x + y) not reproduced here)

Here is how to interpret your gradient:

gx is a matrix that gives the change dz/dx at all points. e.g. gx[0][0] is dz/dx at (x0, y0). Visualizing gx helps in understanding: (plot of gx not reproduced here)

Since my data was generated from f(x, y) = sin(x + y), gy looks the same.

Here is a more obvious example using f(x, y) = sin(x):

(plots of f(x, y) and of the gradients not reproduced here)

Update: let's take a look at the (x, y) pairs.

This is the code I used:

import numpy as np

def f(x, y):
    return np.sin(x)

x = y = np.arange(-3, 3, .05)
X, Y = np.meshgrid(x, y)
zs = np.array([f(x, y) for x, y in zip(np.ravel(X), np.ravel(Y))])
# Keep a parallel array of "x,y" strings so each Z entry can be
# traced back to the grid point it came from
xy_pairs = np.array([str(x) + ',' + str(y) for x, y in zip(np.ravel(X), np.ravel(Y))])
Z = zs.reshape(X.shape)
xy_pairs = xy_pairs.reshape(X.shape)

# Axis 0 runs along y, axis 1 along x, hence the (gy, gx) order
gy, gx = np.gradient(Z, .05, .05)

Now we can look and see exactly what is happening. Say we want to know which point is associated with the value at Z[20][30]. Then...

>>> Z[20][30]
-0.99749498660405478

And the point is

>>> xy_pairs[20][30]
'-1.5,-2.0'

Is that right? Let's check.

>>> np.sin(-1.5)
-0.99749498660405445

Yes.

And what are our gradient components at that point?

>>> gy[20][30]
0.0
>>> gx[20][30]
0.070707731517679617

Do those check out?

dz/dy is always 0: check. dz/dx = cos(x), and...

>>> np.cos(-1.5)
0.070737201667702906

Looks good.

You'll notice they aren't exactly correct. That is because my Z data isn't continuous: it is sampled with a step size of 0.05, so gradient can only approximate the rate of change.
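You can see this discretization error shrink as the step size shrinks. A small sketch (not in the original answer) comparing the worst-case error of np.gradient on sin(x) at two spacings:

```python
import numpy as np

def max_gradient_error(h):
    # Sample sin(x) on a 1-D grid with spacing h and compare
    # np.gradient's estimate with the exact derivative cos(x)
    x = np.arange(-3, 3, h)
    g = np.gradient(np.sin(x), h)
    return np.abs(g - np.cos(x)).max()

coarse = max_gradient_error(0.05)
fine = max_gradient_error(0.005)
print(coarse > fine)
# → True
```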
