A generalization of the chi distribution. The shape parameter is \(\nu>0\), and the support is \(x\geq0\).
\begin{eqnarray*}
f\left(x;\nu\right) & = & \frac{2\nu^{\nu}}{\Gamma\left(\nu\right)}x^{2\nu-1}\exp\left(-\nu x^{2}\right)\\
F\left(x;\nu\right) & = & \frac{\gamma\left(\nu,\nu x^{2}\right)}{\Gamma\left(\nu\right)}\\
G\left(q;\nu\right) & = & \sqrt{\frac{1}{\nu}\gamma^{-1}\left(\nu,q\,\Gamma\left(\nu\right)\right)}
\end{eqnarray*}
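As a quick sanity check, these formulas can be evaluated with scipy.special and compared against scipy.stats.nakagami. This is a minimal sketch (the helper names and test values are illustrative); note that gammainc and gammaincinv are the *regularized* lower incomplete gamma function and its inverse, so the division by \(\Gamma(\nu)\) is already built in.

    import numpy as np
    from scipy import special, stats

    def pdf(x, nu):
        # f(x; nu) = 2 nu^nu / Gamma(nu) * x^(2 nu - 1) * exp(-nu x^2)
        return 2 * nu**nu / special.gamma(nu) * x**(2*nu - 1) * np.exp(-nu * x**2)

    def cdf(x, nu):
        # gammainc is the regularized lower incomplete gamma function.
        return special.gammainc(nu, nu * x**2)

    def ppf(q, nu):
        # gammaincinv inverts the regularized lower incomplete gamma function.
        return np.sqrt(special.gammaincinv(nu, q) / nu)

    nu = 4.97
    x = np.linspace(0.1, 2.5, 9)
    q = np.linspace(0.05, 0.95, 9)
    assert np.allclose(pdf(x, nu), stats.nakagami.pdf(x, nu))
    assert np.allclose(cdf(x, nu), stats.nakagami.cdf(x, nu))
    assert np.allclose(ppf(q, nu), stats.nakagami.ppf(q, nu))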
\begin{eqnarray*}
\mu & = & \frac{\Gamma\left(\nu+\frac{1}{2}\right)}{\sqrt{\nu}\Gamma\left(\nu\right)}\\
\mu_{2} & = & 1-\mu^{2}\\
\gamma_{1} & = & \frac{\mu\left(1-4\nu\mu_{2}\right)}{2\nu\mu_{2}^{3/2}}\\
\gamma_{2} & = & \frac{-6\mu^{4}\nu+\left(8\nu-2\right)\mu^{2}-2\nu+1}{\nu\mu_{2}^{2}}
\end{eqnarray*}
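These moment formulas can likewise be checked against nakagami.stats; a minimal sketch, with the shape value chosen arbitrarily:

    import numpy as np
    from scipy import special, stats

    nu = 4.97
    mu = special.gamma(nu + 0.5) / (np.sqrt(nu) * special.gamma(nu))
    mu2 = 1 - mu**2
    g1 = mu * (1 - 4*nu*mu2) / (2 * nu * mu2**1.5)
    g2 = (-6*mu**4*nu + (8*nu - 2)*mu**2 - 2*nu + 1) / (nu * mu2**2)

    # mean, variance, skewness, excess kurtosis
    assert np.allclose((mu, mu2, g1, g2), stats.nakagami.stats(nu, moments='mvsk'))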
MLE of the Nakagami Distribution in SciPy (nakagami.fit)
The probability density function of the nakagami distribution in SciPy is
\begin{equation}
f(x; \nu, \mu, \sigma) = 2 \frac{\nu^\nu}{ \sigma \Gamma(\nu)}\left(\frac{x-\mu}{\sigma}\right)^{2\nu - 1} \exp\left(-\nu \left(\frac{x-\mu}{\sigma}\right)^2 \right),\tag{1}
\end{equation}
for \(x\) such that \(\frac{x-\mu}{\sigma} \geq 0\), where \(\nu \geq \frac{1}{2}\) is the shape parameter,
\(\mu\) is the location, and \(\sigma\) is the scale.
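Equation (1) is the standard density above with \(x\) replaced by \(\frac{x-\mu}{\sigma}\) and normalized by \(\sigma\). As a sketch (the parameter values are arbitrary), it can be compared against nakagami.pdf with loc and scale set:

    import numpy as np
    from scipy import special, stats

    def pdf(x, nu, mu, sigma):
        # Equation (1): shift by the location mu, divide by the scale sigma.
        y = (x - mu) / sigma
        return 2 * nu**nu / (sigma * special.gamma(nu)) * y**(2*nu - 1) * np.exp(-nu * y**2)

    nu, mu, sigma = 2.5, 1.0, 3.0
    x = np.linspace(1.1, 8.0, 7)
    assert np.allclose(pdf(x, nu, mu, sigma),
                       stats.nakagami.pdf(x, nu, loc=mu, scale=sigma))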
The log-likelihood function is therefore
\begin{equation}
l(\nu, \mu, \sigma) = \sum_{i = 1}^{N} \log \left( 2 \frac{\nu^\nu}{ \sigma\Gamma(\nu)}\left(\frac{x_i-\mu}{\sigma}\right)^{2\nu - 1} \exp\left(-\nu \left(\frac{x_i-\mu}{\sigma}\right)^2 \right) \right),\tag{2}
\end{equation}
which can be expanded as
\begin{equation}
l(\nu, \mu, \sigma) = N \log(2) + N\nu \log(\nu) - N\log\left(\Gamma(\nu)\right) - 2N \nu \log(\sigma) + \left(2 \nu - 1 \right) \sum_{i=1}^N \log(x_i - \mu) - \nu \sigma^{-2} \sum_{i=1}^N \left(x_i-\mu\right)^2. \tag{3}
\end{equation}
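The expanded form (3) can be verified numerically against a direct sum of nakagami.logpdf; a minimal sketch, with arbitrary parameters and simulated data:

    import numpy as np
    from scipy import special, stats

    def loglike(nu, mu, sigma, x):
        # Expanded log-likelihood, equation (3); gammaln(nu) = log Gamma(nu).
        N = len(x)
        return (N * np.log(2) + N * nu * np.log(nu) - N * special.gammaln(nu)
                - 2 * N * nu * np.log(sigma)
                + (2*nu - 1) * np.sum(np.log(x - mu))
                - nu * np.sum((x - mu)**2) / sigma**2)

    rng = np.random.default_rng(1)
    nu, mu, sigma = 2.5, 1.0, 3.0
    x = stats.nakagami.rvs(nu, loc=mu, scale=sigma, size=100, random_state=rng)
    assert np.isclose(loglike(nu, mu, sigma, x),
                      stats.nakagami.logpdf(x, nu, loc=mu, scale=sigma).sum())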
Leaving the support constraint aside for the moment, setting the partial derivatives of the log-likelihood to zero gives the first-order conditions for optimality:
\begin{align}
\frac{\partial l}{\partial \nu}(\nu, \mu, \sigma) &= N\left(1 + \log(\nu) - \psi^{(0)}(\nu)\right) + 2 \sum_{i=1}^N \log \left( \frac{x_i - \mu}{\sigma} \right) - \sum_{i=1}^N \left( \frac{x_i - \mu}{\sigma} \right)^2 = 0
\text{,} \tag{4}\\
\frac{\partial l}{\partial \mu}(\nu, \mu, \sigma) &= (1 - 2 \nu) \sum_{i=1}^N \frac{1}{x_i-\mu} + \frac{2\nu}{\sigma^2} \sum_{i=1}^N \left(x_i-\mu\right) = 0
\text{, and} \tag{5}\\
\frac{\partial l}{\partial \sigma}(\nu, \mu, \sigma) &= -2 N \nu \frac{1}{\sigma} + 2 \nu \sigma^{-3} \sum_{i=1}^N \left(x_i-\mu\right)^2 = 0
\text{,}\tag{6}
\end{align}
where \(\psi^{(0)}\) is the polygamma function of order \(0\); i.e. \(\psi^{(0)}(\nu) = \frac{d}{d\nu} \log \Gamma(\nu)\).
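The derivatives (4)-(6) can be checked against a finite-difference gradient of the log-likelihood. A sketch using scipy.optimize.approx_fprime, with an arbitrary evaluation point satisfying \(\mu < \min_i{x_i}\):

    import numpy as np
    from scipy import optimize, special, stats

    def loglike(theta, x):
        nu, mu, sigma = theta
        return stats.nakagami.logpdf(x, nu, loc=mu, scale=sigma).sum()

    def grad(theta, x):
        # Equations (4)-(6); digamma is the polygamma function of order 0.
        nu, mu, sigma = theta
        y = (x - mu) / sigma
        dnu = (len(x) * (1 + np.log(nu) - special.digamma(nu))
               + 2 * np.sum(np.log(y)) - np.sum(y**2))
        dmu = (1 - 2*nu) * np.sum(1 / (x - mu)) + 2 * nu / sigma**2 * np.sum(x - mu)
        dsigma = -2 * len(x) * nu / sigma + 2 * nu / sigma**3 * np.sum((x - mu)**2)
        return np.array([dnu, dmu, dsigma])

    rng = np.random.default_rng(2)
    x = stats.nakagami.rvs(2.5, loc=1.0, scale=3.0, size=50, random_state=rng)
    theta = np.array([2.0, 0.5, 2.0])  # nu, mu, sigma; mu < min(x)
    numeric = optimize.approx_fprime(theta, loglike, 1e-7, x)
    assert np.allclose(grad(theta, x), numeric, rtol=1e-4, atol=1e-4)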
However, the support of the distribution is the values of \(x\) for which \(\frac{x-\mu}{\sigma} \geq 0\), and this provides an additional constraint that
\begin{equation}
\mu \leq \min_i{x_i}.\tag{7}
\end{equation}
For \(\nu = \frac{1}{2}\), the partial derivative of the log-likelihood with respect to \(\mu\) reduces to:
\begin{equation}
\frac{\partial l}{\partial \mu}(\nu, \mu, \sigma) = \frac{1}{\sigma^2} \sum_{i=1}^N (x_i-\mu),
\end{equation}
which is positive when the support constraint is satisfied. Because this partial derivative is positive, increasing \(\mu\) increases the log-likelihood, and therefore the constraint is active at the maximum likelihood estimate:
\begin{equation}
\mu = \min_i{x_i}, \quad \nu = \frac{1}{2}. \tag{8}
\end{equation}
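This boundary behavior is easy to see numerically: with \(\nu = \frac{1}{2}\) and \(\sigma\) held fixed, the log-likelihood increases monotonically as \(\mu\) approaches \(\min_i{x_i}\) from below. A sketch (data and parameter values are arbitrary):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = stats.nakagami.rvs(0.5, loc=1.0, scale=2.0, size=100, random_state=rng)

    # With nu = 1/2 and sigma fixed, l is increasing in mu up to mu = min(x).
    mus = np.linspace(-1.0, np.min(x) - 1e-9, 50)
    ll = [stats.nakagami.logpdf(x, 0.5, loc=m, scale=2.0).sum() for m in mus]
    assert np.all(np.diff(ll) > 0)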
For \(\nu\) sufficiently greater than \(\frac{1}{2}\), the likelihood equation \(\frac{\partial l}{\partial \mu}(\nu, \mu, \sigma)=0\) has a solution, and this solution provides the maximum likelihood estimate for \(\mu\). In either case, however, the condition \(\mu = \min_i{x_i}\) provides a reasonable initial guess for numerical optimization.
Furthermore, the likelihood equation for \(\sigma\) can be solved explicitly given \(\mu\), and it provides the maximum likelihood estimate
\begin{equation}
\sigma = \sqrt{ \frac{\sum_{i=1}^N \left(x_i-\mu\right)^2}{N}}. \tag{9}
\end{equation}
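As a numerical check (a sketch with simulated data, taking the location as known), the profile of the log-likelihood over \(\sigma\) peaks at the value given by equation (9):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    nu, mu = 2.5, 1.0
    x = stats.nakagami.rvs(nu, loc=mu, scale=3.0, size=200, random_state=rng)

    # Equation (9): closed-form MLE of sigma for the given mu.
    sigma_hat = np.sqrt(np.mean((x - mu)**2))

    # The log-likelihood, profiled over sigma, peaks at sigma_hat.
    sigmas = np.linspace(0.5, 6.0, 1001)
    ll = [stats.nakagami.logpdf(x, nu, loc=mu, scale=s).sum() for s in sigmas]
    assert abs(sigmas[np.argmax(ll)] - sigma_hat) <= sigmas[1] - sigmas[0]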
Hence, the _fitstart method for nakagami uses
\begin{align}
\mu_0 &= \min_i{x_i} \quad \text{and} \\
\sigma_0 &= \sqrt{ \frac{\sum_{i=1}^N \left(x_i-\mu_0\right)^2}{N}}
\end{align}
as the initial guesses for numerical optimization.
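Putting this together, a minimal end-to-end sketch (sample size, seed, and parameter values are arbitrary) computes these starting values by hand and then lets nakagami.fit refine all three parameters numerically:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    nu, loc, scale = 2.5, 1.0, 3.0
    data = stats.nakagami.rvs(nu, loc=loc, scale=scale, size=1000, random_state=rng)

    # Initial guesses in the spirit of _fitstart:
    loc0 = np.min(data)
    scale0 = np.sqrt(np.mean((data - loc0)**2))

    # Maximum likelihood fit over (nu, loc, scale).
    nu_hat, loc_hat, scale_hat = stats.nakagami.fit(data)
    print(nu_hat, loc_hat, scale_hat)  # expect values near 2.5, 1.0, 3.0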