scipy.stats.multinomial

scipy.stats.multinomial(n, p, seed=None) = <scipy.stats._multivariate.multinomial_gen object>

A multinomial random variable.
Parameters
x : array_like
Quantiles, with the last axis of x denoting the components.
n : int
Number of trials.
p : array_like
Probability of a trial falling into each category; should sum to 1.
random_state : None or int or np.random.RandomState instance, optional
If int or RandomState, use it for drawing the random variates. If None (or np.random), the global np.random state is used. Default is None.
See also
scipy.stats.binom
The binomial distribution.
numpy.random.Generator.multinomial
Sampling from the multinomial distribution.
Notes
n should be a positive integer. Each element of p should be in the interval \([0,1]\) and the elements should sum to 1. If they do not sum to 1, the last element of the p array is not used and is replaced with the remaining probability left over from the earlier elements.
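For instance, the pmf computed with a p that does not sum to 1 should match the pmf computed with the renormalized p:

>>> import numpy as np
>>> from scipy.stats import multinomial
>>> # p = [0.4, 0.9] does not sum to 1, so the last entry is ignored and
>>> # replaced by 1 - 0.4 = 0.6; the result should match p = [0.4, 0.6].
>>> np.isclose(multinomial.pmf([3, 4], n=7, p=[0.4, 0.9]),
...            multinomial.pmf([3, 4], n=7, p=[0.4, 0.6]))
True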
Alternatively, the object may be called (as a function) to fix the n and p parameters, returning a “frozen” multinomial random variable:
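For example, a short sketch of freezing the distribution; the value matches the unfrozen call shown in the Examples below:

>>> from scipy.stats import multinomial
>>> rv = multinomial(n=7, p=[.3, .7])
>>> rv.pmf([3, 4])
0.2268945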
The probability mass function for multinomial is

\[f(x) = \frac{n!}{x_1! \cdots x_k!} p_1^{x_1} \cdots p_k^{x_k},\]

supported on \(x=(x_1, \ldots, x_k)\) where each \(x_i\) is a nonnegative integer and their sum is \(n\).
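As a quick check, the formula reproduces the pmf value used in the Examples section below:

>>> from math import factorial
>>> import numpy as np
>>> from scipy.stats import multinomial
>>> n, x, p = 8, [1, 3, 4], [0.3, 0.2, 0.5]
>>> coef = factorial(n) // (factorial(x[0]) * factorial(x[1]) * factorial(x[2]))
>>> by_hand = coef * p[0]**x[0] * p[1]**x[1] * p[2]**x[2]
>>> np.isclose(by_hand, multinomial.pmf(x, n, p))
True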
New in version 0.19.0.
Examples
>>> from scipy.stats import multinomial
>>> rv = multinomial(8, [0.3, 0.2, 0.5])
>>> rv.pmf([1, 3, 4])
0.042000000000000072
The multinomial distribution for \(k=2\) is identical to the corresponding binomial distribution (tiny numerical differences notwithstanding):
>>> from scipy.stats import binom
>>> multinomial.pmf([3, 4], n=7, p=[0.4, 0.6])
0.29030399999999973
>>> binom.pmf(3, 7, 0.4)
0.29030400000000012
The functions pmf, logpmf, entropy, and cov support broadcasting, under the convention that the vector parameters (x and p) are interpreted as if each row along the last axis is a single object. For instance:

>>> multinomial.pmf([[3, 4], [3, 5]], n=[7, 8], p=[.3, .7])
array([0.2268945, 0.25412184])
Here, x.shape == (2, 2), n.shape == (2,), and p.shape == (2,), but following the rules mentioned above they behave as if the rows [3, 4] and [3, 5] in x and [.3, .7] in p were a single object, and as if we had x.shape = (2,), n.shape = (2,), and p.shape = (). To obtain the individual elements without broadcasting, we would do this:

>>> multinomial.pmf([3, 4], n=7, p=[.3, .7])
0.2268945
>>> multinomial.pmf([3, 5], 8, p=[.3, .7])
0.25412184
This broadcasting also works for cov, where the output objects are square matrices of size p.shape[-1]. For example:

>>> multinomial.cov([4, 5], [[.3, .7], [.4, .6]])
array([[[ 0.84, -0.84],
        [-0.84,  0.84]],
       [[ 1.2 , -1.2 ],
        [-1.2 ,  1.2 ]]])
In this example, n.shape == (2,) and p.shape == (2, 2), and following the rules above, these broadcast as if p.shape == (2,). Thus the result should also be of shape (2,), but since each output is a \(2 \times 2\) matrix, the result in fact has shape (2, 2, 2), where result[0] is equal to multinomial.cov(n=4, p=[.3, .7]) and result[1] is equal to multinomial.cov(n=5, p=[.4, .6]).
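As a quick check, the first matrix above agrees with the standard multinomial covariance identity \(n(\mathrm{diag}(p) - p p^\top)\):

>>> import numpy as np
>>> from scipy.stats import multinomial
>>> n, p = 4, np.array([.3, .7])
>>> np.allclose(multinomial.cov(n, p), n * (np.diag(p) - np.outer(p, p)))
True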
Methods
``pmf(x, n, p)``
Probability mass function.
``logpmf(x, n, p)``
Log of the probability mass function.
``rvs(n, p, size=1, random_state=None)``
Draw random samples from a multinomial distribution (see the usage sketch after this list).
``entropy(n, p)``
Compute the entropy of the multinomial distribution.
``cov(n, p)``
Compute the covariance matrix of the multinomial distribution.
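A brief usage sketch for rvs; the n, p, size, and seed values are chosen only for illustration, and the counts in each draw always sum to n:

>>> from scipy.stats import multinomial
>>> draws = multinomial.rvs(8, [0.3, 0.2, 0.5], size=5, random_state=1234)  # seed picked arbitrarily for reproducibility
>>> draws.shape
(5, 3)
>>> draws.sum(axis=1)  # each draw distributes all 8 trials over the 3 categories
array([8, 8, 8, 8, 8])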