1. Use numpy.gradient (best option)
Most people want this. numpy.gradient is NumPy's built-in finite-difference approach (2nd-order accurate), and its output has the same shape as the input array.
It uses second-order accurate central differences at the interior points and either first- or second-order accurate one-sided (forward or backward) differences at the boundaries. The returned gradient hence has the same shape as the input array.
2. Use numpy.diff (you probably don't want this)
If you really want something roughly twice as inaccurate: numpy.diff is only 1st-order accurate, and its output is one element shorter than the input. It is, however, faster than numpy.gradient (in some small tests I ran).
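To make the difference concrete, here is a quick sketch of what numpy.diff gives you on the same data used below:

```python
import numpy as np

dx = 0.1
y = np.array([1, 2, 3, 4, 4, 5, 6])

# np.diff returns y[i+1] - y[i]: one element shorter than the input,
# and only a 1st-order (forward-difference) estimate once divided by dx
dydx = np.diff(y) / dx

print(dydx)        # 6 values for a 7-element input
print(dydx.shape)  # (6,)
```

Note the flat step (y goes 4, 4) shows up as a hard 0 here, whereas np.gradient's central differences smooth it to 5.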
For constant spacing between x samples
import numpy as np
dx = 0.1; y = [1, 2, 3, 4, 4, 5, 6] # dx constant
np.gradient(y, dx) # dy/dx 2nd order accurate
array([10., 10., 10., 5., 5., 10., 10.])
For irregular spacing between x samples (the data from your question)
import numpy as np
x = [.1, .2, .5, .6, .7, .8, .9] # dx varies
y = [1, 2, 3, 4, 4, 5, 6]
np.gradient(y, x) # dy/dx 2nd order accurate
array([10., 8.333.., 8.333.., 5., 5., 10., 10.])
What are you trying to achieve?
numpy.gradient offers a 2nd-order and numpy.diff a 1st-order finite-difference scheme, and numpy.gradient also handles a non-uniform grid/array. But if your goal is numerical differentiation, a finite-difference formulation tailored to your case may serve you better: you can reach much higher accuracy (e.g. 8th-order, if you need it), far superior to numpy.gradient.
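As a sketch of what such a higher-order formulation looks like, here is the standard 4th-order central-difference stencil (1, -8, 0, 8, -1)/(12*h) from the usual finite-difference tables, applied at the interior points of a uniform grid (boundary handling is omitted for brevity):

```python
import numpy as np

def d1_4th_order(y, dx):
    """4th-order accurate first derivative at interior points of a
    uniform grid, using the central stencil (1, -8, 0, 8, -1)/(12*dx).
    The result is 4 elements shorter than the input (2 lost per side)."""
    y = np.asarray(y, dtype=float)
    return (y[:-4] - 8*y[1:-3] + 8*y[3:-1] - y[4:]) / (12*dx)

x = np.linspace(0, 2*np.pi, 100)
dx = x[1] - x[0]
approx = d1_4th_order(np.sin(x), dx)
exact = np.cos(x[2:-2])
print(np.max(np.abs(approx - exact)))  # far smaller error than np.gradient's
```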
use numpy.gradient()
Please be aware that there are more advanced ways to calculate the numerical derivative than simply using diff. I would suggest using numpy.gradient, as in this example.
import numpy as np
from matplotlib import pyplot as plt
# we sample a sin(x) function
dx = np.pi/10
x = np.arange(0, 2*np.pi, dx)
# we calculate the derivative, with np.gradient
plt.plot(x,np.gradient(np.sin(x), dx), '-*', label='approx')
# we compare it with the exact first derivative, i.e. cos(x)
plt.plot(x,np.cos(x), label='exact')
plt.legend()
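Beyond the plot, you can quantify how good the approximation is by comparing against the exact derivative directly:

```python
import numpy as np

dx = np.pi/10
x = np.arange(0, 2*np.pi, dx)

approx = np.gradient(np.sin(x), dx)
exact = np.cos(x)

# 2nd-order central differences at the interior points; the boundary
# points use one-sided differences and are typically the least accurate
print(np.max(np.abs(approx - exact)))
```

With this fairly coarse spacing the maximum error is of order 1e-2; it shrinks quadratically as you refine dx.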
Purely in terms of terminology, it's probably better to talk about taking discrete partial derivatives of variable fields stored in an array, rather than differentiating an array itself.
Regarding the code itself, you appear to have dropped the element assignment inside your loop,
v(i,j) = (u(i+1,j)-u(i-1,j))/(2*h)
w(i,j) = (u(i,j+1)-u(i,j-1))/(2*h)
although that may be a typo. Your version is also buggy and unsafe at the end points of the ranges (i.e. i=-nx, i=nx, j=-ny, j=ny), since at these places you are accessing elements of the u array which do not exist. At these locations you either need to use a non-centred derivative, something like
v(-nx,j) = (u(-nx+1,j) - u(-nx,j))/h, or use any knowledge you have of the boundary conditions.
As a side note, if you write Fortran code frequently, then in terms of optimisation it is useful to remember that the index furthest to the right should vary in the outermost loop for the code to have the best chance of running fast, which implies swapping your i and j loops. In many simple cases the compiler may fix this for you, and it is certainly less important than having code which works, but it is a good habit to get into.
It's called column major due to the Fortran array model. Briefly: let A be a matrix of size m x n, and let a(i,j) be the element of A for 1 <= i <= m and 1 <= j <= n. Then a(i,j) is stored at the ((j-1)*m + i)-th position of the one-dimensional storage S, i.e. S((j-1)*m + i) = a(i,j). Writing the loops as in the original post, the storage S is traversed with a stride of m; swapping the loop order, S is traversed contiguously. Hence the swapped order performs faster than the first one.
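The layout can be checked from Python, since numpy exposes Fortran (column-major) ordering; this small sketch verifies the index formula above (0-based here, so the position is i + j*m rather than (j-1)*m + i):

```python
import numpy as np

# a 3x4 matrix and its column-major (Fortran-order) 1-D storage
a = np.arange(12).reshape(3, 4)
flat = a.ravel(order='F')

# element a[i, j] sits at position i + j*m in the flat storage,
# where m is the number of rows
m = a.shape[0]
i, j = 2, 1
print(flat[i + j*m] == a[i, j])  # True
```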
This is not a simple problem, but there are a lot of methods that have been devised to handle it. One simple solution is to use finite-difference methods. The command numpy.diff() computes n-th order finite differences; note that you must divide by the sample spacing (raised to that order) yourself to get an estimate of the derivative.
Wikipedia also has a page that lists the finite-difference coefficients needed for derivatives of different orders and accuracies, in case the numpy function doesn't do what you want.
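For example, the coefficients for a 2nd-order-accurate second derivative are (1, -2, 1)/h**2, which is straightforward to apply by hand at interior points:

```python
import numpy as np

x = np.linspace(0, 2*np.pi, 200)
h = x[1] - x[0]
y = np.sin(x)

# second derivative at the interior points via the (1, -2, 1)/h**2 stencil;
# the result is 2 elements shorter than the input
d2y = (y[:-2] - 2*y[1:-1] + y[2:]) / h**2

# for y = sin(x) the exact second derivative is -sin(x)
print(np.max(np.abs(d2y - (-np.sin(x[1:-1])))))  # small; error is O(h**2)
```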
Depending on your application, you can also use scipy.fftpack.diff, which uses a completely different (spectral) technique to do the same thing, though your function needs a well-defined Fourier transform.
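The idea behind that spectral approach can be sketched with numpy's FFT alone: transform, multiply each Fourier coefficient by i*k, and transform back. This assumes the samples cover exactly one period of a smooth periodic function:

```python
import numpy as np

N = 64
x = 2*np.pi*np.arange(N)/N        # one full period, endpoint excluded
y = np.sin(x)

k = np.fft.fftfreq(N, d=1.0/N)    # integer wavenumbers 0, 1, ..., -1
dy = np.real(np.fft.ifft(1j*k*np.fft.fft(y)))

# for smooth periodic data the result is accurate to near machine precision
print(np.max(np.abs(dy - np.cos(x))))
```

For data that is not periodic over the sampled interval, this method rings badly at the ends, which is why the finite-difference methods above remain the default choice.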
There are many, many variants (e.g. summation-by-parts operators, compact finite-differencing operators, or operators designed to preserve known evolution constants in your system of equations) on both of the two ideas above. What you should do depends a great deal on the problem you are trying to solve.
The good news is that a lot of work has been done in this field. The Wikipedia page on Numerical Differentiation has some resources (though it is focused on finite-differencing techniques).
The findiff project is a Python package that can do derivatives of arrays of any dimension with any desired accuracy order (of course depending on your hardware restrictions). It can handle arrays on uniform as well as non-uniform grids and also create generalizations of derivatives, i.e. general linear combinations of partial derivatives with constant and variable coefficients.