scipy.stats.entropy

scipy.stats.entropy(pk, qk=None, base=None, axis=0)

Calculate the entropy of a distribution for given probability values.

If only probabilities pk are given, the entropy is calculated as S = -sum(pk * log(pk), axis=axis).
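
As a quick check, this sum can also be evaluated directly with NumPy; a minimal sketch using a fair coin and the default natural logarithm:

>>> import numpy as np
>>> from scipy.stats import entropy
>>> pk = np.array([1/2, 1/2])
>>> np.allclose(entropy(pk), -np.sum(pk * np.log(pk)))
True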

If qk is not None, then compute the relative entropy S = sum(pk * log(pk / qk), axis=axis). This quantity is also known as the Kullback-Leibler divergence.
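
Likewise, this sum can be checked directly with NumPy for a pair of example distributions (illustrative values only):

>>> import numpy as np
>>> from scipy.stats import entropy
>>> pk, qk = np.array([1/2, 1/2]), np.array([9/10, 1/10])
>>> np.allclose(entropy(pk, qk), np.sum(pk * np.log(pk / qk)))
True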

This routine will normalize pk and qk if they don’t sum to 1.
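
For example, unnormalized counts give the same result as the corresponding normalized distribution (a small illustration):

>>> from scipy.stats import entropy
>>> entropy([2, 2])  # normalized internally to [1/2, 1/2]
0.6931471805599453
>>> entropy([1/2, 1/2])
0.6931471805599453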

Parameters
pk : array_like

Defines the (discrete) distribution. Along each axis-slice of pk, element i is the (possibly unnormalized) probability of event i.

qk : array_like, optional

Sequence against which the relative entropy is computed. Should be in the same format as pk.

base : float, optional

The logarithmic base to use. Defaults to e (natural logarithm).

axis : int, optional

The axis along which the entropy is calculated. Default is 0. A two-dimensional example is shown at the end of the Examples section below.

Returns
S : {float, array_like}

The calculated entropy.

Examples

>>> from scipy.stats import entropy

Bernoulli trials with different success probabilities p. The outcome of a fair coin is the most uncertain:

>>> entropy([1/2, 1/2], base=2)
1.0

The outcome of a biased coin is less uncertain:

>>> entropy([9/10, 1/10], base=2)
0.46899559358928117

Relative entropy:

>>> entropy([1/2, 1/2], qk=[9/10, 1/10])
0.5108256237659907
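
When pk has more than one dimension, the axis argument selects the slices along which each entropy is computed; a brief sketch reusing the coin distributions from above:

>>> import numpy as np
>>> pk = np.array([[1/2, 1/2], [9/10, 1/10]])
>>> entropy(pk, base=2, axis=1)
array([1.        , 0.46899559])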