You have four options:
- Finite Differences
- Automatic Derivatives
- Symbolic Differentiation
- Compute derivatives by hand.
Finite differences require no external tools but are prone to numerical error and, if you're in a multivariate situation, can take a while.
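For a sense of what that looks like in practice, here is a minimal central-difference sketch (the test function and step size are purely illustrative):

def central_diff(f, x, h=1e-6):
    # central difference quotient: (f(x + h) - f(x - h)) / (2 * h)
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_diff(lambda x: x**2 + 1, 3.0))  # ~6.0; the error shrinks with h until round-off takes over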
Symbolic differentiation is ideal if your problem is simple enough. Symbolic methods are getting quite robust these days. SymPy is an excellent project for this that integrates well with NumPy. Look at the autowrap or lambdify functions or check out Jensen's blogpost about a similar question.
Automatic derivatives are very cool, aren't prone to numerical error, but do require some additional libraries (google for this, there are a few good options). This is the most robust choice, but also the most sophisticated and difficult to set up. If you're fine restricting yourself to NumPy syntax then Theano might be a good choice.
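For a taste of the automatic approach, here is a minimal sketch using JAX (one such library, not named in the original answer):

import jax

def f(x):
    return x**2 + 1

# jax.grad builds a new function that evaluates df/dx via automatic differentiation
dfdx = jax.grad(f)
print(dfdx(3.0))   # 6.0 -- exact up to floating point, no step size to choose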
Here is an example using SymPy
In [1]: from sympy import *
In [2]: import numpy as np
In [3]: x = Symbol('x')
In [4]: y = x**2 + 1
In [5]: yprime = y.diff(x)
In [6]: yprime
Out[6]: 2⋅x
In [7]: f = lambdify(x, yprime, 'numpy')
In [8]: f(np.ones(5))
Out[8]: [ 2. 2. 2. 2. 2.]
Answer from MRocklin on Stack Overflow
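A small aside (not part of the original answer): diff also accepts the order of the derivative, so higher derivatives take one call. A sketch using a slightly bigger polynomial:

from sympy import Symbol

x = Symbol('x')
y = x**3 + x**2 + 7
print(y.diff(x))      # 3*x**2 + 2*x  (first derivative)
print(y.diff(x, 2))   # 6*x + 2       (second derivative)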
The most straightforward way I can think of is using numpy's gradient function:
x = numpy.linspace(0,10,1000)
dx = x[1]-x[0]
y = x**2 + 1
dydx = numpy.gradient(y, dx)
This way, dydx will be computed using central differences and will have the same length as y, unlike numpy.diff, which uses forward differences and returns a vector of size n-1.
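A quick sketch of that length difference, using the same x, y, and dx as above:

print(numpy.gradient(y, dx).shape)   # (1000,) -- same length as y (central differences inside, one-sided at the ends)
print((numpy.diff(y) / dx).shape)    # (999,)  -- one element shorter (forward differences)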
The idea behind what you are trying to do is correct, but there are a couple of points to make it work as intended:
- There is a typo in calc_diffY(X), the derivative of X**2 is 2*X, not 2**X:
def calc_diffY(X):
    yval_dash = 3*(X**2) + 2*X
By doing this you don't obtain much better results:
yval_dash = [5, 16, 33, 56, 85]
numpyDiff = [10. 24. 44. 70.]
- To calculate the numerical derivative you should use a difference quotient, which is an approximation of the derivative:
numpyDiff = np.diff(yval) / np.diff(xval)
The approximation gets better as the points become more closely spaced. The spacing between your points on the x axis is 1, so you end up in this situation (analytical derivative in blue, numerical in red):

If you reduce the spacing of your x points to 0.1, you get this, which is much better:

Just to add something to this, have a look at this image, taken from Wikipedia, showing the effect of reducing the spacing of the points at which the derivative is numerically calculated:

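To see the effect numerically rather than graphically, here is a short sketch (a rough check, not from the original answer; it reuses the corrected polynomial and compares the forward difference quotient against the analytical derivative at the left-hand points):

import numpy as np

def f(x):
    return x**3 + x**2 + 7

def f_prime(x):
    return 3*x**2 + 2*x

for n in (5, 41):                      # 5 points -> spacing 1.0, 41 points -> spacing 0.1
    xval = np.linspace(1, 5, n)
    numerical = np.diff(f(xval)) / np.diff(xval)
    error = np.max(np.abs(numerical - f_prime(xval[:-1])))
    print(f'dx = {xval[1] - xval[0]:.1f}: max abs error = {error:.2f}')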
I like @lgsp's answer. I will add that you can directly estimate the derivative without having to worry about the spacing between the values. This just uses the symmetric formula for calculating finite differences, described on this Wikipedia page.
Take note, though, of the way delta is specified. I found that when it is too small, higher-order estimates fail. There's probably not a 100% generic value that will always work well!
Also, I simplified your code by taking advantage of numpy broadcasting over arrays to eliminate for loops.
import numpy as np

# select a polynomial equation
def f(x):
    y = x**3 + x**2 + 7
    return y

# manually differentiate the equation
def f_prime(x):
    return 3*x**2 + 2*x

# numerically estimate the first three derivatives
def d1(f, x, delta=1e-10):
    return (f(x + delta) - f(x - delta)) / (2 * delta)

def d2(f, x, delta=1e-5):
    return (d1(f, x + delta, delta) - d1(f, x - delta, delta)) / (2 * delta)

def d3(f, x, delta=1e-2):
    return (d2(f, x + delta, delta) - d2(f, x - delta, delta)) / (2 * delta)

# demo output
# note that functions operate in parallel on numpy arrays -- no for loops!
xval = np.array([1,2,3,4,5])
print('y = ', f(xval))
print('y\' = ', f_prime(xval))
print('d1 = ', d1(f, xval))
print('d2 = ', d2(f, xval))
print('d3 = ', d3(f, xval))
And the outputs:
y = [ 9 19 43 87 157]
y' = [ 5 16 33 56 85]
d1 = [ 5.00000041 16.00000132 33.00002049 56.00000463 84.99995374]
d2 = [ 8.0000051 14.00000116 20.00000165 25.99996662 32.00000265]
d3 = [6. 6. 6. 6. 5.99999999]
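To make the warning about delta concrete (a sketch appended to the script above, reusing the same f, d2, and xval): forcing the second-derivative step far below its default lets rounding error swamp the true value 6*x + 2.

# appended to the script above -- a step of 1e-10 is far too small for d2,
# so the finite differences cancel catastrophically and the output is noise
print('d2 (delta=1e-10) = ', d2(f, xval, delta=1e-10))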
np.diff might be the most idiomatic numpy way to do this:
y = np.empty_like(x)
y[:-1] = np.diff(x, axis=0) / dx
y[-1] = -x[-1] / dx
You may also be interested in np.gradient, although this function takes the gradient over all dimensions of the input array rather than a single one.
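For instance (a quick sketch): on a 2-D array, np.gradient returns one gradient array per axis.

import numpy as np

a = np.arange(12, dtype=float).reshape(3, 4)
gy, gx = np.gradient(a)        # one array per axis of the input
print(gy.shape, gx.shape)      # (3, 4) (3, 4)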
If you are using numpy, this should do the same as your code above:
y = np.empty_like(x)
y[:-1] = (x[1:] - x[:-1]) / dx
y[-1] = -x[-1] / dx
To get the same result over the second axis, you would do:
y = np.empty_like(x)
y[:, :-1] = (x[:, 1:] - x[:, :-1]) / dx
y[:, -1] = -x[:, -1] / dx
pd.Series.diff() only takes the differences. It doesn't divide by the delta of the index as well.
This gets you the answer:
recv.diff() / recv.index.to_series().diff().dt.total_seconds()
2017-01-20 20:00:00 NaN
2017-01-20 20:05:00 4521.493333
2017-01-20 20:10:00 4533.760000
2017-01-20 20:15:00 4557.493333
2017-01-20 20:20:00 4536.053333
2017-01-20 20:25:00 4567.813333
2017-01-20 20:30:00 4406.160000
2017-01-20 20:35:00 4366.720000
2017-01-20 20:40:00 4407.520000
2017-01-20 20:45:00 4421.173333
Freq: 300S, dtype: float64
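For a self-contained illustration with made-up data (not the original bytes_in series):

import pandas as pd

# synthetic cumulative counter sampled every 5 minutes; the values are invented
idx = pd.date_range('2017-01-20 20:00', periods=4, freq='5min')
recv = pd.Series([0.0, 150_000.0, 330_000.0, 480_000.0], index=idx)

# difference of the values divided by the elapsed seconds gives a per-second rate
rate = recv.diff() / recv.index.to_series().diff().dt.total_seconds()
print(rate)   # the first value is NaN because there is no previous sample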
You could also use numpy.gradient, passing bytes_in and the sample spacing you expect. This does not reduce the length by one; instead it uses one-sided differences at the edges.
np.gradient(bytes_in, 300) * 8
array([ 4521.49333333, 4527.62666667, 4545.62666667, 4546.77333333,
4551.93333333, 4486.98666667, 4386.44 , 4387.12 ,
4414.34666667, 4421.17333333])
As there is no built-in derivative method for Pandas Series / DataFrame, you can use https://github.com/scls19fr/pandas-helper-calc.
It provides a new accessor called calc on Pandas Series and DataFrames to compute derivatives and integrals numerically.
So you can simply do:
recv.calc.derivative()
It's using diff() under the hood.