
scipy.spatial.distance.jensenshannon

scipy.spatial.distance.jensenshannon(p, q, base=None)

Compute the Jensen-Shannon distance (metric) between two 1-D probability arrays. This is the square root of the Jensen-Shannon divergence.

The Jensen-Shannon distance between two probability vectors p and q is defined as,

\[\sqrt{\frac{D(p \parallel m) + D(q \parallel m)}{2}}\]

where \(m\) is the pointwise mean of \(p\) and \(q\) and \(D\) is the Kullback-Leibler divergence.
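The distance can be reproduced directly from this definition. The following is a minimal sketch (not the library's internal implementation; the helper name js_from_definition is invented here for illustration), using scipy.stats.entropy, which returns the Kullback-Leibler divergence when called with two distributions:

>>> import numpy as np
>>> from scipy.stats import entropy
>>> def js_from_definition(p, q, base=None):
...     p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
...     p, q = p / p.sum(), q / q.sum()  # mirror the routine's normalization
...     m = (p + q) / 2.0                # pointwise mean of p and q
...     # entropy(p, m) evaluates the Kullback-Leibler divergence D(p || m)
...     return np.sqrt((entropy(p, m, base=base) + entropy(q, m, base=base)) / 2.0)
...
>>> np.isclose(js_from_definition([1.0, 0.0], [0.5, 0.5]), 0.46450140402245893)
True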

This routine will normalize p and q if they don’t sum to 1.0.
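For example, count vectors are rescaled internally, so they give the same distance as their normalized counterparts:

>>> import numpy as np
>>> from scipy.spatial import distance
>>> # [3, 0] and [2, 2] normalize to [1, 0] and [0.5, 0.5] respectively
>>> np.isclose(distance.jensenshannon([3.0, 0.0], [2.0, 2.0]),
...            distance.jensenshannon([1.0, 0.0], [0.5, 0.5]))
True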

Parameters
p : (N,) array_like
    Left probability vector.
q : (N,) array_like
    Right probability vector.
base : double, optional
    The base of the logarithm used to compute the output. If not given, the routine uses the default base of scipy.stats.entropy.

Returns
js : double
    The Jensen-Shannon distance between p and q.

New in version 1.2.0.
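Changing base rescales the underlying divergence by 1/log(base), so the distance itself rescales by 1/sqrt(log(base)). A quick check of that relationship (an illustration, not part of the documented API):

>>> import numpy as np
>>> from scipy.spatial import distance
>>> p, q = [1.0, 0.0], [0.5, 0.5]
>>> nats = distance.jensenshannon(p, q)            # default base (natural log)
>>> bits = distance.jensenshannon(p, q, base=2.0)  # base-2 logarithm
>>> np.isclose(bits, nats / np.sqrt(np.log(2.0)))
True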

Examples

>>> from scipy.spatial import distance
>>> distance.jensenshannon([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 2.0)
1.0
>>> distance.jensenshannon([1.0, 0.0], [0.5, 0.5])
0.46450140402245893
>>> distance.jensenshannon([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
0.0
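Since the Jensen-Shannon distance is a metric, it is symmetric in its arguments:

>>> import numpy as np
>>> p, q = [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]
>>> np.isclose(distance.jensenshannon(p, q), distance.jensenshannon(q, p))
True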
