Just a hint for everybody reading this:

the functions above do not compute the divergence of a vector field. They sum the partial derivatives of a single scalar field A:

result = dA/dx + dA/dy

in contrast to the divergence of a vector field, which takes one derivative per component (three-dimensional example):

result = sum_i dAi/dxi = dAx/dx + dAy/dy + dAz/dz
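To make the distinction concrete, here is a minimal sketch (assuming NumPy on a uniform 2-D grid built with indexing="ij", so axis 0 is x and axis 1 is y; the field F = (x², y²) is an illustrative choice, not from the thread):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")  # axis 0 = x, axis 1 = y

# Vector field F = (Fx, Fy) = (x**2, y**2); analytically div F = 2x + 2y
Fx, Fy = X**2, Y**2

# Wrong: both derivatives of one scalar component (what this hint criticizes)
wrong = np.gradient(Fx, dx, axis=0) + np.gradient(Fx, dy, axis=1)

# Right: one derivative per component, d(Fx)/dx + d(Fy)/dy
div = np.gradient(Fx, dx, axis=0) + np.gradient(Fy, dy, axis=1)
```

In the interior the central differences recover 2x + 2y essentially exactly for this quadratic field, while the "scalar" version gives 2x only.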

Vote down for all! It is mathematically simply wrong.

Cheers!

Answer from Polly on Stack Overflow
AskPython
askpython.com › home › how to compute the divergence of a vector field using python?
How to Compute the Divergence of a Vector Field Using Python? - AskPython
May 30, 2023 - We are going to calculate the divergence of a typical vector field – (2y^2+x-4)i+cos(x)j. The divergence of this field is constant, which is 1. We can find the divergence by applying partial derivation on the vector.
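That article's claim is easy to check numerically with np.gradient — a sketch assuming a uniform grid (the grid extent and resolution are arbitrary choices):

```python
import numpy as np

# The article's field: F = (2*y**2 + x - 4, cos(x)); div F = 1 + 0 = 1 everywhere
x = np.linspace(-2.0, 2.0, 201)
y = np.linspace(-2.0, 2.0, 201)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")  # axis 0 = x, axis 1 = y

Fx = 2 * Y**2 + X - 4
Fy = np.cos(X)

div = np.gradient(Fx, dx, axis=0) + np.gradient(Fy, dy, axis=1)
```

Fx is linear in x and Fy does not depend on y, so the finite differences reproduce the constant divergence of 1 exactly.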
Bucknell University
eg.bucknell.edu › ~phys310 › jupyter › numdiff.html
numdiff
# Divergence of vector-valued function f evaluated at x
def div(f, x):
    jac = nd.Jacobian(f)(x)
    return jac[0, 0] + jac[1, 1] + jac[2, 2]
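The Jacobian-trace idea in that snippet can be sketched without the numdifftools dependency; the jacobian helper below is a hand-rolled central-difference stand-in for nd.Jacobian, not a library function:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of a vector-valued f at point x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

def div(f, x):
    """Divergence of f at x: the trace of the Jacobian."""
    return np.trace(jacobian(f, x))

# Example: F(x, y, z) = (x*y, y*z, z*x), so div F = y + z + x
F = lambda p: np.array([p[0] * p[1], p[1] * p[2], p[2] * p[0]])
```

At the point (1, 2, 3) the analytic divergence is 2 + 3 + 1 = 6, which the finite-difference version matches to rounding error.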
GitHub
gist.github.com › swayson › 86c296aa354a555536e6765bbe726ff7
Numpy and scipy ways to calculate KL Divergence. · GitHub
Numpy and scipy ways to calculate KL Divergence. GitHub Gist: instantly share code, notes, and snippets.
Unidata
unidata.github.io › MetPy › latest › api › generated › metpy.calc.divergence.html
divergence — MetPy 1.7
metpy.calc.divergence(u, v, *, dx=None, dy=None, x_dim=-1, y_dim=-2, parallel_scale=None, meridional_scale=None, latitude=None, longitude=None, crs=None)[source]#
Top answer
1 of 3
3

try something like:

dqu_dx, dqu_dy = np.gradient(qu, [dx, dy])
dqv_dx, dqv_dy = np.gradient(qv, [dx, dy])

you cannot assign to the result of an operation in Python; each of these is a syntax error:

a + b = 3
a * b = 7
# or, in your case:
a / b = 9

UPDATE

following Pinetwig's comment: a/b is not a valid identifier name; it is (the return value of) an operator.
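The working pattern is to unpack into ordinary names and use "/" only as an operator afterwards — a minimal sketch (the sample array is arbitrary):

```python
import numpy as np

qu = np.arange(12.0).reshape(3, 4)

# Unpack into valid identifiers (underscores, not a "/" in the name) ...
dqu_d0, dqu_d1 = np.gradient(qu)  # gradient along axis 0, then axis 1

# ... then "/" is only ever used as the division operator
ratio = dqu_d1 / dqu_d0
```

For this array the axis-0 differences are all 4 and the axis-1 differences all 1, so the ratio is 0.25 everywhere.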

2 of 3
1

Try removing the [dx, dy].

   [dqu_dx, dqu_dy] = np.gradient(qu)
   [dqv_dx, dqv_dy] = np.gradient(qv)
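Note that in current NumPy, per-axis spacings are passed as separate scalar arguments rather than a list, so keeping the spacing information would look like this — a sketch assuming uniform spacing (the array and step sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
qu = rng.random((6, 8))
dx, dy = 0.5, 0.25  # grid spacing along axis 0 and axis 1

# One scalar spacing per axis, in axis order -- not wrapped in a list
dqu_d0, dqu_d1 = np.gradient(qu, dx, dy)

# With uniform spacing this equals the unit-spacing gradient divided by the step
g0, g1 = np.gradient(qu)
```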

Also worth pointing out if you are recreating plots: gradient changed in NumPy between 1.8.2 and 1.9. This had an effect when recreating MATLAB plots in Python, as 1.8.2 used the MATLAB method. I am not sure how this relates to GrADS. Here is the wording for both.

1.8.2

"The gradient is computed using central differences in the interior and first differences at the boundaries. The returned gradient hence has the same shape as the input array."

1.9

"The gradient is computed using second order accurate central differences in the interior and either first differences or second order accurate one-sides (forward or backwards) differences at the boundaries. The returned gradient hence has the same shape as the input array."

The gradient function for 1.8.2 is here.

def gradient(f, *varargs):
    """
    Return the gradient of an N-dimensional array.

    The gradient is computed using central differences in the interior
    and first differences at the boundaries. The returned gradient hence has
    the same shape as the input array.

    Parameters
    ----------
    f : array_like
        An N-dimensional array containing samples of a scalar function.
    `*varargs` : scalars
        0, 1, or N scalars specifying the sample distances in each direction,
        that is: `dx`, `dy`, `dz`, ... The default distance is 1.

    Returns
    -------
    gradient : ndarray
        N arrays of the same shape as `f` giving the derivative of `f` with
        respect to each dimension.

    Examples
    --------
    >>> x = np.array([1, 2, 4, 7, 11, 16], dtype=np.float)
    >>> np.gradient(x)
    array([ 1. ,  1.5,  2.5,  3.5,  4.5,  5. ])
    >>> np.gradient(x, 2)
    array([ 0.5 ,  0.75,  1.25,  1.75,  2.25,  2.5 ])

    >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float))
    [array([[ 2.,  2., -1.],
            [ 2.,  2., -1.]]),
     array([[ 1. ,  2.5,  4. ],
            [ 1. ,  1. ,  1. ]])]

    """
    f = np.asanyarray(f)
    N = len(f.shape)  # number of dimensions
    n = len(varargs)
    if n == 0:
        dx = [1.0]*N
    elif n == 1:
        dx = [varargs[0]]*N
    elif n == N:
        dx = list(varargs)
    else:
        raise SyntaxError("invalid number of arguments")

    # use central differences on interior and first differences on endpoints
    outvals = []

    # create slice objects --- initially all are [:, :, ..., :]
    slice1 = [slice(None)]*N
    slice2 = [slice(None)]*N
    slice3 = [slice(None)]*N

    otype = f.dtype.char
    if otype not in ['f', 'd', 'F', 'D', 'm', 'M']:
        otype = 'd'

    # Difference of datetime64 elements results in timedelta64
    if otype == 'M':
        # Need to use the full dtype name because it contains unit information
        otype = f.dtype.name.replace('datetime', 'timedelta')
    elif otype == 'm':
        # Needs to keep the specific units, can't be a general unit
        otype = f.dtype

    for axis in range(N):
        # select out appropriate parts for this dimension
        out = np.empty_like(f, dtype=otype)
        slice1[axis] = slice(1, -1)
        slice2[axis] = slice(2, None)
        slice3[axis] = slice(None, -2)
        # 1D equivalent -- out[1:-1] = (f[2:] - f[:-2])/2.0
        out[slice1] = (f[slice2] - f[slice3])/2.0
        slice1[axis] = 0
        slice2[axis] = 1
        slice3[axis] = 0
        # 1D equivalent -- out[0] = (f[1] - f[0])
        out[slice1] = (f[slice2] - f[slice3])
        slice1[axis] = -1
        slice2[axis] = -1
        slice3[axis] = -2
        # 1D equivalent -- out[-1] = (f[-1] - f[-2])
        out[slice1] = (f[slice2] - f[slice3])

        # divide by step size
        outvals.append(out / dx[axis])

        # reset the slice object in this dimension to ":"
        slice1[axis] = slice(None)
        slice2[axis] = slice(None)
        slice3[axis] = slice(None)

    if N == 1:
        return outvals[0]
    else:
        return outvals
EDUCBA
educba.com › home › software development › software development tutorials › numpy tutorial › numpy.diff()
Examples and Functions of Python numpy.diff()
March 28, 2023 - numpy.diff() is a function of the numpy module which is used for depicting the divergence between the values along with the x-axis. So the divergence among each of the values in the x array will be calculated and placed as a new array.
Top answer
1 of 6
32

First of all, sklearn.metrics.mutual_info_score implements mutual information for evaluating clustering results, not pure Kullback-Leibler divergence!

This is equal to the Kullback-Leibler divergence of the joint distribution with the product distribution of the marginals.

KL divergence (and any other such measure) expects the input data to sum to 1. Otherwise, they are not proper probability distributions. If your data does not sum to 1, it is most likely not proper to use KL divergence! (In some cases, it may be admissible to have a sum of less than 1, e.g. in the case of missing data.)

Also note that it is common to use base 2 logarithms. This only changes the result by a constant scaling factor, but base 2 logarithms are easier to interpret and have a more intuitive scale (0 to 1 instead of 0 to log(2) ≈ 0.69314..., measuring the information in bits instead of nats).

> sklearn.metrics.mutual_info_score([0,1],[1,0])
0.69314718055994529

As we can clearly see, the MI result of sklearn is scaled using natural logarithms instead of log2. This is an unfortunate choice, as explained above.

Kullback-Leibler divergence is fragile, unfortunately. On the above example it is not well-defined: KL([0,1],[1,0]) causes a division by zero and tends to infinity. It is also asymmetric.

2 of 6
25

Scipy's entropy function will calculate KL divergence if fed two vectors p and q, each representing a probability distribution. If the two vectors aren't pdfs, it will normalize them first.

Mutual information is related to, but not the same as KL Divergence.

"This weighted mutual information is a form of weighted KL-Divergence, which is known to take negative values for some inputs, and there are examples where the weighted mutual information also takes negative values"
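A small sketch of that scipy route — scipy.stats.entropy computes KL(p‖q) when given two arguments, and normalizes unnormalized input first (the distributions below are arbitrary examples):

```python
import numpy as np
from scipy.stats import entropy

p = np.array([0.5, 0.5])
q = np.array([0.25, 0.75])

# KL(p || q) in nats (natural log) and in bits (base-2 log)
kl_nats = entropy(p, q)
kl_bits = entropy(p, q, base=2)

# Unnormalized inputs are rescaled to sum to 1 before the computation
kl_scaled = entropy(10 * p, 100 * q)
```

By hand, the base-2 value is 0.5·log2(0.5/0.25) + 0.5·log2(0.5/0.75) ≈ 0.2075 bits, and the nats value is that times ln 2.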

SciPy
docs.scipy.org › doc › scipy › reference › generated › scipy.special.kl_div.html
scipy.special.kl_div — SciPy v1.17.0 Manual
This is why the function contains the extra \(-x + y\) terms over what might be expected from the Kullback-Leibler divergence. For a version of the function without the extra terms, see rel_entr. ... kl_div has experimental support for Python Array API Standard compatible backends in addition to NumPy.
NumPy
numpy.org › doc › 2.1 › reference › generated › numpy.diff.html
numpy.diff — NumPy v2.1 Manual
Calculate the n-th discrete difference along the given axis · The first difference is given by out[i] = a[i+1] - a[i] along the given axis, higher differences are calculated by using diff recursively
NumPy
numpy.org › doc › stable › reference › generated › numpy.gradient.html
numpy.gradient — NumPy v2.4 Manual
Return the gradient of an N-dimensional array · The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sides (forward or backwards) differences at the boundaries. The returned gradient hence has the same ...
SciPy
docs.scipy.org › doc › scipy › reference › generated › scipy.spatial.distance.jensenshannon.html
jensenshannon — SciPy v1.17.0 Manual
>>> from scipy.spatial import distance >>> import numpy as np >>> distance.jensenshannon([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 2.0) 1.0 >>> distance.jensenshannon([1.0, 0.0], [0.5, 0.5]) 0.46450140402245893 >>> distance.jensenshannon([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]) 0.0 >>> a = np.array([[1, 2, ...
GitHub
gist.github.com › zhiyzuo › f80e2b1cfb493a5711330d271a228a3d
Jensen-Shannon Divergence in Python · GitHub
Is there any smart way to do it and avoid so many for loop or distributed processing. anything in numpy that can help? ... @manjeetnagi, i'm not really sure about an "optimal" way. if i were you, i would simply use joblib to parallelize the process. ... Just for those who land here looking for jensen shannon distance (using monte carlo integration) between two distributions: def distributions_js(distribution_p, distribution_q, n_samples=10 ** 5): # jensen shannon divergence.
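For those landing here: Jensen–Shannon divergence is just KL symmetrized against the midpoint distribution, and scipy's jensenshannon returns its square root (a metric). A sketch, where js_divergence is a hand-rolled helper rather than a library function:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import entropy

def js_divergence(p, q, base=2):
    """JSD(p, q) = 0.5*KL(p || m) + 0.5*KL(q || m), m the midpoint distribution."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m, base=base) + 0.5 * entropy(q, m, base=base)

# Disjoint distributions attain the maximum base-2 JSD of 1
p = [1.0, 0.0, 0.0]
q = [0.0, 1.0, 0.0]
```

Unlike raw KL, this is symmetric and always finite, which is why it is often preferred in practice.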
PyPI
pypi.org › project › beta-divergence-metrics
beta-divergence-metrics
PyPI
pypi.org › project › divergence
divergence · PyPI
July 30, 2020 - Divergence is a Python package to compute statistical measures of entropy and divergence from probability distributions and samples.
      » pip install divergence
    
Published   Jul 31, 2020
Version   0.4.2