Answer from wtayyeb on Stack Overflow Top answer 1 of 2
31
Use sympy:
>>> from sympy import symbols, diff, sin
>>> x, y, z = symbols('x y z', real=True)
>>> f = 4*x*y + x*sin(z) + x**3 + z**8*y
>>> diff(f, x)
4*y + sin(z) + 3*x**2
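The symbolic result above can also be evaluated at a concrete point. A minimal sketch using sympy's standard `subs` method (the function and point are taken from the answer; the point (1, 2, 0) is just an illustrative choice):

```python
from sympy import symbols, diff, sin

x, y, z = symbols('x y z', real=True)
f = 4*x*y + x*sin(z) + x**3 + z**8*y

# Partial derivative with respect to x, then evaluate at (x, y, z) = (1, 2, 0)
dfdx = diff(f, x)
value = dfdx.subs({x: 1, y: 2, z: 0})
print(value)  # 4*2 + sin(0) + 3*1**2 = 11
```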
2 of 2
4
Use sympy
From their Docs:
>>> diff(sin(x)*exp(x), x)
exp(x)*sin(x) + exp(x)*cos(x)
and for your example:
>>> diff(4*x*y + x*sin(z) + x**3 + z**8*y, x)
3*x**2 + 4*y + sin(z)
numpy - Partial derivative in Python - Stack Overflow
I am slowly moving from C to Python. This time I need to calculate partial derivatives numerically from a grid given. I know how to do it in C, so at the moment I just use inline adapter, i.e. def... More on stackoverflow.com
Partial Derivative with sympy Python - Mathematics Stack Exchange
I used Python's sympy to take the partial derivative of the following function with respect to $\rho$: More on math.stackexchange.com
Need help in understanding np.gradient for calculating derivatives
One definition of the derivative is f'(x) = (f(x+h)-f(x))/h where h goes to 0. Computers cannot store infinitely small numbers, so they might set h=1e-6 (that is 0.000001). It's a tradeoff: while we want h to be as small as possible, at some point the errors due to computer precision begin to dominate. Given any function that the computer can calculate, it can approximate the derivative.

import numpy as np
import matplotlib.pyplot as plt

def f(x): return np.sin(x)

h = 1e-6
x = np.arange(-2, 2, 0.01)
y = f(x)
dfdx = (f(x+h) - f(x)) / h
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

Assuming that the function is reasonably smooth (i.e. the derivative above exists), another definition of the derivative is f'(x) = (f(x+h)-f(x-h))/(2h) where h goes to 0. Going from x-h to x+h means 2 steps; that's the reason for the 2h. This works just as well. These methods are named finite differences to contrast with the normal derivative definition where h is infinitely small. The first one is the forward difference and the second one is called the central difference. The backward difference is (f(x)-f(x-h))/h.

Let's assume we want to write a derivative function. It takes a function f and values of x, and gives back f'(x).

def f(x): return np.sin(x)

def d(fun, x, h=1e-6):
    return (fun(x+h) - fun(x)) / h

x = np.arange(-2, 2, 0.01)
y = f(x)
dfdx = d(f, x)
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

By passing the function into the function, the derivative function can just call fun wherever it wants/needs to get the derivative.

Now things become a bit more inconvenient. For some reason we do not know f. We only know y, i.e. f(x) for some values of x. Let's say that x is evenly spaced as usual. Then our best guess for h is not really tiny but identical to the spacing between neighboring x values. With the forward difference we need to take care at the rightmost value, because we cannot just add +h to get a value even further out; instead we use the backward difference there. For values in the middle we decide to use the central difference instead of the forward difference.
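The accuracy difference between the forward and the central difference mentioned above can be checked directly. A small sketch (function and step size are illustrative choices, not from the original answer): the forward difference has error of order h, the central difference of order h^2.

```python
import numpy as np

def f(x):
    return np.sin(x)

h = 1e-3
x0 = 1.0
exact = np.cos(x0)  # true derivative of sin at x0

forward = (f(x0 + h) - f(x0)) / h          # O(h) truncation error
central = (f(x0 + h) - f(x0 - h)) / (2*h)  # O(h^2) truncation error

print(abs(forward - exact))  # noticeably larger error
print(abs(central - exact))  # several orders of magnitude smaller
```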
def f(x): return np.sin(x)

def d(y, h=1):
    dfdx = [(y[1] - y[0]) / h]                 # forward difference at the left edge
    for i in range(1, len(y) - 1):
        dfdx.append((y[i+1] - y[i-1]) / 2 / h)  # central difference in the middle
    dfdx.append((y[-1] - y[-2]) / h)            # backward difference at the right edge
    return dfdx

h = 0.01
x = np.arange(-2, 2, h)
y = f(x)
dfdx = d(y, h)
plt.plot(x, y)
plt.plot(x, dfdx)
plt.show()

The implementation above corresponds to np.gradient in the one-dimensional case where varargs is a scalar spacing (the first two cases in the documentation). The cases where varargs is an array of coordinates would use x directly in d instead of h. However, at that point the formula is more complicated, as mentioned in the documentation. Effectively any point has an hd (the forward step size) and an hs (the backward step size), and the formula is not just (f(x+hd)-f(x-hs))/(hd+hs) but instead the bigger expression given in the documentation, where the values of hd, hs act as a kind of weights. np.gradient is basically backward, central and forward difference combined. When you have values like f(1), f(2), f(2+h) and want the derivative at 2, the code notices that 2 and 2+h are very close together and puts greater weight on that (and mostly ignores f(1)).

The important part so far is that np.gradient, when given a vector with N elements, calculates N one-dimensional derivatives, which is not the typical idea of a gradient. np.gradient does support more dimensions, which might make things clearer. In the 1D case, we essentially go through all values from left to right and consider each value and its direct left and right neighbors to quantify the uptrend or downtrend. In the 2D case, np.gradient still does this, but additionally also walks from top to bottom and does the same. So in 2D it returns 2 arrays, one for left-right and one for top-bottom. The actual definition of the gradient by finite differences is [(f(x+h,y)-f(x,y))/h, (f(x,y+h)-f(x,y))/h] in 2D. These values are indeed returned by np.gradient: the left part is in the first array and the right part in the second array.
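The claim that the hand-rolled scheme (forward at the left edge, central inside, backward at the right edge) matches np.gradient can be verified numerically. A vectorized sketch (np.gradient with a scalar spacing uses exactly these differences at its default edge order):

```python
import numpy as np

h = 0.01
x = np.arange(-2, 2, h)
y = np.sin(x)

# np.gradient: central differences inside, one-sided differences at the edges
g = np.gradient(y, h)

# Hand-rolled equivalent
d = np.empty_like(y)
d[0] = (y[1] - y[0]) / h          # forward difference at the left edge
d[1:-1] = (y[2:] - y[:-2]) / (2*h)  # central differences in the middle
d[-1] = (y[-1] - y[-2]) / h       # backward difference at the right edge

print(np.allclose(g, d))  # True
```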
Say we are in 2D and want the gradient at x=3 and y=0. Then we can plug it into np.gradient like this:

hx = 1e-6
hy = 1e-3
x = [3, 3+hx]
y = [0, 0+hy]
xx, yy = np.meshgrid(x, y)

def f(x, y):
    return x**2 - 2*x*np.sin(y) + 1/x

grad = np.gradient(f(xx, yy), y, x)  # Note the order.
print(grad[1][0,0], grad[0][0,0])    # Note the order. This is dfdx, dfdy.

But if the function f can be calculated by a computer, it makes more sense to just use automatic differentiation instead of finite differences. Automatic differentiation has no h that needs to be chosen carefully. It's always as accurate as possible.

import torch

x = torch.tensor([3.], requires_grad=True)
y = torch.tensor([0.], requires_grad=True)
z = x**2 - 2*x*torch.sin(y) + 1/x
z.backward()
print(x.grad, y.grad)

So what's the deal with the Taylor series? It's just a minor piece in the derivation of that more general expression used by np.gradient. We start by claiming that we can express the derivative by adding together function values in the direct neighborhood:

f'(x) = a f(x) + b f(x+hd) + c f(x-hs)

Given that finite differences do work out, this approach should work as well and generalize the idea. Expand f(x+hd) and f(x-hs) with their Taylor series:

f(x+hd) = f(x) + hd f'(x) + hd^2 f''(x)/2 + ...
f(x-hs) = f(x) - hs f'(x) + hs^2 f''(x)/2 - ...

Then plug them in and reshape:

f'(x) = a f(x) + b f(x) + b hd f'(x) + b hd^2 f''(x)/2 + c f(x) - c hs f'(x) + c hs^2 f''(x)/2
      = (a+b+c) f(x) + (b hd - c hs) f'(x) + (b hd^2 + c hs^2)/2 f''(x)
0 = (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

The = in the middle is actually more of an approximately-equal sign. We won't be able to reach 0 for all f(x) as claimed on the left-hand side, but we can get pretty close. We do NOT want to minimize the right-hand side; we want it to reach 0 (it can go below 0 right now). To turn this into a minimization problem, we square it. This way we always get a non-negative number and it really becomes a matter of minimization.
We COULD also take the absolute value instead of squaring, but it's a pain to work through and the end result is exactly the same parameters anyway. To minimize: E^2 with

E = (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

One requirement for an optimum is that the gradient is 0. In this case we take the derivatives with respect to a, b, c because we want to find the optimal a, b, c. First a reminder of the chain rule: dE^2/dt = 2E dE/dt for whatever t is. It's optional to do this but a bit less messy than working it through individually. In particular we have

dE^2/da = 2E dE/da = 2E f(x)
dE^2/db = 2E dE/db = 2E (f(x) + hd f'(x) + hd^2 f''(x)/2)
dE^2/dc = 2E dE/dc = 2E (f(x) - hs f'(x) + hs^2 f''(x)/2)

We want ALL three of them to be 0 at the same time. This can only happen if E is 0:

0 := (a+b+c) f(x) + (b hd - c hs - 1) f'(x) + (b hd^2 + c hs^2)/2 f''(x)

and we want this to be 0 for any f, f', f'' at any value of x. The only way for this to happen is if each coefficient is 0, i.e.

a + b + c = 0
b hd - c hs = 1
b hd^2 + c hs^2 = 0

We would need to check the second derivative to make sure that this is a minimum, not a maximum, but given the problem it is fairly clear. So why did we stop exactly after f'' in the Taylor series? Because this way we get exactly 3 unknowns and 3 equations, which is the most convenient to solve.

Multiply the second equation by hd, then subtract the third from it:

(b hd^2 - c hs hd) - (b hd^2 + c hs^2) = hd
-c hs^2 - c hs hd = hd
c hs (hs + hd) = -hd
c = -hd/hs/(hs+hd) = -hd^2 / (hs hd (hs+hd))

where the last step is just so it looks exactly like in np.gradient. Insert c into the second equation:

b hd + hd/hs/(hs+hd) hs = 1
b hd + hd/(hs+hd) = 1
b + 1/(hs+hd) = 1/hd
b = 1/hd - 1/(hs+hd)
b = (hs(hs+hd) - hs hd) / [hs hd (hs+hd)]
b = hs^2 / [hs hd (hs+hd)]

From the first equation we know that a = -b-c = (hd^2 - hs^2)/(hs hd (hs+hd)).
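The three linear equations above (a+b+c = 0, b·hd - c·hs = 1, b·hd² + c·hs² = 0) and the closed-form coefficients can be checked mechanically with sympy. A verification sketch (symbol names follow the derivation):

```python
from sympy import symbols, solve, simplify

a, b, c = symbols('a b c')
hd, hs = symbols('hd hs', positive=True)  # forward and backward step sizes

# The three coefficient equations from the Taylor-series derivation
sol = solve([a + b + c,
             b*hd - c*hs - 1,
             b*hd**2 + c*hs**2], [a, b, c])

# Compare against the closed forms used by np.gradient
print(simplify(sol[b] - hs**2 / (hs*hd*(hs + hd))))            # 0
print(simplify(sol[c] + hd**2 / (hs*hd*(hs + hd))))            # 0
print(simplify(sol[a] - (hd**2 - hs**2) / (hs*hd*(hs + hd))))  # 0
```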
So here's your summary: If you have a function that can be calculated by a computer, use torch or tensorflow or any other framework for automatic differentiation.

If you have a function that can be calculated by a computer but such a framework is not available, np.gradient is still a bad idea because it is inefficient. Note that for the 2D gradient we needed three values: f(x,y), f(x+dx,y), f(x,y+dy). But with np.gradient we would first need to set up arrays where it is almost natural to also include f(x+dx,y+dy), which is not needed for gradient calculations. It's more natural to set up some loop that increments x once, then y once, then z once, and so on. Many solvers in scipy.optimize work with finite differences this way.

If you have a function that cannot be calculated by a computer, np.gradient may be useful. In practice this means that you have data from some experiment. Even there, the concept of a Taylor series plays no role UNLESS the data was taken on an unevenly spaced grid. More on reddit.com
How To Take Derivatives In Python: 3 Different Types of Scenarios
In this video I show how to properly take derivatives in Python in 3 different types of scenarios. The first scenario is when you have an explicit form for your function, such as f(x) = x^2 or f(x) = e^x sin(x). In such a scenario, the sympy library can be used to take first, second, up to nth derivatives of a function. This comes in handy for complicated functions, but can later on be EXTREMELY useful for computing Lagrange's equations of motion given strange trajectories. The second scenario is when you collect data and want to compute a derivative. In such a scenario, the data is often noisy, and taking a simple derivative will fail since it will amplify the high-frequency component of the data. In such a case, one needs to smooth the data before taking a derivative. The ideal library for managing this is numpy. The third scenario involves functions of an irregular form. By this, I mean that your function can't be written down as simply as "sin(x)" or "e^x". For example, f(x) = "solve an ode using some complex odesolver with parameter abserr=x and compute the integral of the answer". In this case, derivatives can't be computed symbolically, but one can use scipy's derivative method to get a good estimate of df/dx at certain values of x. More on reddit.com
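The first scenario (an explicit symbolic form) can be sketched with sympy; higher-order derivatives are obtained by passing the order as an extra argument. A minimal example using the e^x sin(x) function mentioned above:

```python
from sympy import symbols, diff, exp, sin, cos

x = symbols('x')
f = exp(x) * sin(x)

first = diff(f, x)      # first derivative
second = diff(f, x, 2)  # second derivative; equivalent to diff(f, x, x)
print(first)            # exp(x)*sin(x) + exp(x)*cos(x)
print(second)           # simplifies to 2*exp(x)*cos(x)
```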
Videos
15:06
METR2021 - Lab 3 - Segment 10: Evaluating Partial Derivatives in ...
29:57
What Partial Derivatives Are (Hands-on Introduction) — Topic 67 ...
05:24
Calculating Partial Derivatives with PyTorch AutoDiff — Topic ...
39:36
Full Calculus Tutorial Using Sympy Python Package with many Examples ...
- YouTube
01:38
9.13) Calculate Partial Derivatives in Python - YouTube
Reddit
reddit.com › r/optimization › numerical partial derivative in python
r/optimization on Reddit: Numerical Partial derivative in python
May 1, 2021 -
Hi! I want to write a function in Python that finds numerical partial derivatives of this function: (Z1_i - Z1_{i-1})^2 + (Z2_i - Z2_{i-1})^2 + l^2
Can someone help me?
Top answer 1 of 4
4
Look up complex step differentiation. Input a complex number to that function, perturb the imaginary part, and you basically get machine precision accurate derivatives as long as your function is complex safe and analytic. This one appears to be.
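The complex-step trick mentioned above can be sketched in a few lines: for a real-analytic f, f'(x) ≈ Im(f(x + ih))/h, with no subtractive cancellation, so h can be made extremely small. The example function below is an illustrative choice, not from the original answer:

```python
import numpy as np

def f(z):
    # Must be complex-safe and analytic for the trick to work
    return z**3 + np.sin(z)

def complex_step(f, x, h=1e-20):
    # f'(x) ~ Im(f(x + i*h)) / h; no subtraction of nearly equal numbers,
    # so h can be tiny without precision loss
    return np.imag(f(x + 1j*h)) / h

x0 = 1.5
exact = 3*x0**2 + np.cos(x0)  # analytic derivative for comparison
print(complex_step(f, x0))    # agrees with exact to machine precision
```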
2 of 4
3
The way that I have it implemented is to use a central finite differences scheme to approximate a partial derivative of a multivariable, scalar-valued function like this:

def fdiff_cm(f, x, dx, n):
    '''Calculate central finite difference of a multivariable, scalar-valued function w.r.t. the nth x'''
    dx2 = dx * 2.0
    x[n] += dx
    fxu = f(x)
    x[n] -= dx2
    fxl = f(x)
    x[n] += dx
    return (fxu - fxl) / dx2

Where f is a function that you pass in, x is a list of state variables, dx is how much to perturb the x variable for the finite difference calculation, and n is an index to which x variable is being perturbed.
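A self-contained sketch of the central-difference helper described above, with a small usage check (the test function f(x, y) = x²y is an illustrative choice; its partial with respect to x is 2xy):

```python
def fdiff_cm(f, x, dx, n):
    """Central finite difference of a multivariable, scalar-valued f w.r.t. x[n]."""
    dx2 = dx * 2.0
    x[n] += dx       # perturb upward
    fxu = f(x)
    x[n] -= dx2      # perturb downward
    fxl = f(x)
    x[n] += dx       # restore x to its original value
    return (fxu - fxl) / dx2

# Example: f(x, y) = x**2 * y, so df/dx at (3, 2) is 2*3*2 = 12
f = lambda v: v[0]**2 * v[1]
print(fdiff_cm(f, [3.0, 2.0], 1e-6, 0))  # close to 12.0
```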
Top answer 1 of 4
8
np.diff might be the most idiomatic numpy way to do this:
y = np.empty_like(x)
y[:-1] = np.diff(x, axis=0) / dx
y[-1] = -x[-1] / dx
You may also be interested in np.gradient, although this function takes the gradient over all dimensions of the input array rather than a single one.
2 of 4
2
If you are using numpy, this should do the same as your code above:
y = np.empty_like(x)
y[:-1] = (x[1:] - x[:-1]) / dx
y[-1] = -x[-1] / dx
To get the same result over the second axis, you would do:
y = np.empty_like(x)
y[:, :-1] = (x[:, 1:] - x[:, :-1]) / dx
y[:, -1] = -x[:, -1] / dx
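The np.diff version and the explicit-slicing version above compute the same thing, since np.diff(x) is exactly x[1:] - x[:-1]. A quick 1D check (the sample data is an illustrative choice):

```python
import numpy as np

dx = 0.1
x = np.sin(np.arange(0, 1, dx))  # any sample data

# np.diff version
y1 = np.empty_like(x)
y1[:-1] = np.diff(x) / dx   # forward differences
y1[-1] = -x[-1] / dx        # same boundary rule as the snippets above

# explicit-slicing version
y2 = np.empty_like(x)
y2[:-1] = (x[1:] - x[:-1]) / dx
y2[-1] = -x[-1] / dx

print(np.allclose(y1, y2))  # True
```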
Learning About Electronics
learningaboutelectronics.com › Articles › How-to-find-the-partial-derivative-of-a-function-in-Python.php
How to Find the Partial Derivative of a Function in Python
Then, type in, pip install sympy. Once you see that the module has been successfully installed, then you are ready to proceed to the code on this page. A partial derivative is the derivative of a function that has more than one variable with respect to only one variable.
Byu
emc2.byu.edu › winter-labs › lab09.html
Lab 9: Symbolic Python and Partial Differentiation — Math 495R EMC2 Python Labs
Plot \(p(x)\) over \(-5 \leq x \leq 5\) and mark each of the minima in one color and the maxima in another color. Calculate the partial derivatives with respect to x and y of the following functions
Delft Stack
delftstack.com › home › howto › python › python partial derivatives
How to Calculate Partial Derivatives in Python Using Sympy | Delft Stack
March 11, 2025 - Learn how to calculate partial derivatives in Python using the Sympy library. This article provides step-by-step guidance on computing partial derivatives, evaluating them at specific points, and exploring higher-order derivatives. Perfect for students and professionals in data science, ...
Towards Data Science
towardsdatascience.com › home › latest › taking derivatives in python
Taking Derivatives in Python | Towards Data Science
January 28, 2025 - To start, let’s take the most basic two-variable function and calculate partial derivatives. The function is simply – x squared multiplied by y, and you would differentiate it as follows: Cool, but how would I do this in Python? Good question. To start, you’ll need to redefine your symbols.
Notebook Community
notebook.community › alexandrnikitin › algorithm-sandbox › courses › DAT256x › Module02 › 02-05-Multivariate Functions and Partial Derivatives
Partial Derivatives
We can take a derivative of the changes in the function with respect to either x or y. We call these derivatives with respect to one variable partial derivatives. Let's give this a try by taking the derivative of $f(x,y)$ with respect to x. We write this partial derivative as follows.
CodeSignal
codesignal.com › learn › courses › advanced-calculus-for-machine-learning › lessons › derivatives-for-multivariable-functions
Derivatives for Multivariable Functions | CodeSignal Learn
To calculate a partial derivative, you take the derivative of the function while treating other variables as constants. In the previous course of this path we have seen examples of functions and their derivative. Though we omitted the calculation rules, it is important to remember that they ...
SymPy
docs.sympy.org › latest › modules › solvers › pde.html
PDE - SymPy 1.14.0 documentation
>>> from sympy import Function, Derivative
>>> from sympy.abc import x, y  # x and y are the independent variables
>>> f = Function("f")(x, y)  # f is a function of x and y
>>> # fx will be the partial derivative of f with respect to x
>>> fx = Derivative(f, x)
>>> # fy will be the partial derivative of f with respect to y
>>> fy = Derivative(f, y)
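A Derivative of an abstract function stays unevaluated; once a concrete expression is substituted, it can be evaluated with doit(). A sketch of this standard sympy pattern (the substituted expression x²·sin(y) is an illustrative choice):

```python
from sympy import Function, Derivative, sin
from sympy.abc import x, y

f = Function("f")(x, y)
fx = Derivative(f, x)  # stays unevaluated while f is abstract
print(fx)

# Substitute a concrete expression for f, then evaluate the derivative
concrete = fx.subs(f, x**2 * sin(y)).doit()
print(concrete)  # 2*x*sin(y)
```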
Codefinity
codefinity.com › courses › v2 › 7a3cf1aa-f919-4535-b4fb-e7352cc2d87f › 939155ba-0bda-43dc-b25e-0f08990a18ee › 14dd6808-63cc-43ca-9b52-40ff0331ac7b
Learn Implementing Partial Derivatives in Python | Mathematical Analysis
In this video, you will learn how to compute partial derivatives of multivariable functions using Python.
Turing
turing.com › kb › derivative-functions-in-python
How to Calculate Derivative Functions in Python
from sympy import symbols, diff

x, y = symbols('x y')
f = x**2 + 3*y**2 + 2*x*y
f_partial_x = diff(f, x)
f_partial_y = diff(f, y)
print(f_partial_x)
print(f_partial_y)

... By following these steps and utilizing SymPy's functionalities, we can easily compute the exact symbolic derivatives of various functions. autograd is a Python library that provides automatic differentiation capabilities.
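The symbolic partials from the snippet above can be turned into fast numeric functions with sympy's lambdify (standard sympy API; the evaluation point is an illustrative choice):

```python
from sympy import symbols, diff, lambdify

x, y = symbols('x y')
f = x**2 + 3*y**2 + 2*x*y

f_partial_x = diff(f, x)          # 2*x + 2*y
fx_num = lambdify((x, y), f_partial_x)  # compile to a plain numeric function

print(fx_num(1.0, 2.0))  # 2*1 + 2*2 = 6.0
```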
LinkedIn
linkedin.com › learning › machine-learning-foundations-calculus › calculating-partial-derivatives
Calculating partial derivatives - Python Video Tutorial | LinkedIn Learning, formerly Lynda.com
Learn how to calculate partial derivatives for functions of two and three variables.
Published March 7, 2023
Medium
medium.com › @mustafaazzurri › neural-network-from-scratch-in-python-pt-7-derivatives-and-gradients-2e0c4f00f78c
Neural Network From Scratch in Python pt-7 (Derivatives and Gradients) | by Mustafa Alahmid | Medium
August 25, 2022 - We need to know these impacts; this means that we have to calculate the derivative with respect to each input separately to learn about each of them. That’s why we call these partial derivatives with respect to given input — we are calculating a partial of the derivative, related to a singular input.
Manning
livebook.manning.com › concept › python › partial-derivative
partial-derivative in python - liveBook · Manning
liveBooks are enhanced books. They add narration, interactive exercises, code execution, and other features to eBooks.
myCompiler
mycompiler.io › view › 7tulAi4tN8w
numerical patial Derivative (Python) - myCompiler
June 20, 2024 - import numpy as np def numerical_partial_derivative(f, x, y, method='central', h=1e-4): """ Compute the numerical partial derivatives of a function f at the point (x, y). Parameters: f (function): The function for which partial derivatives are ...