evaluate_error
- FastGeneratorInversion.evaluate_error(size=100000, random_state=None, x_error=False)
Evaluate the numerical accuracy of the inversion (u- and x-error).
- Parameters:
  - size : int, optional
    The number of random points over which the error is estimated. Default is 100000.
  - random_state : {None, int, numpy.random.Generator, numpy.random.RandomState}, optional
    A NumPy random number generator or seed for the underlying NumPy random number generator used to generate the stream of uniform random numbers. If random_state is None, use self.random_state. If random_state is an int, np.random.default_rng(random_state) is used. If random_state is already a Generator or RandomState instance, then that instance is used.
- Returns:
  - u_error, x_error : tuple of floats
    The maximal u-error and x-error. If x_error is False (the default), the x-error is np.nan.
Notes
The numerical precision of the inverse CDF ppf is controlled by the u-error. It is computed as follows: max |u - CDF(PPF(u))|, where the maximum is taken over size random points in the interval [0,1]. random_state determines the random sample. Note that if ppf were exact, the u-error would be zero.
The x-error measures the direct distance between the exact PPF and ppf. If x_error is set to True, it is computed as the maximum over the pointwise minimum of the relative and absolute x-error: max(min(x_error_abs[i], x_error_rel[i])), where x_error_abs[i] = |PPF(u[i]) - PPF_fast(u[i])| and x_error_rel[i] = |(PPF(u[i]) - PPF_fast(u[i])) / PPF(u[i])|. Note that it is important to consider the relative x-error in the case that PPF(u) is close to zero or very large.
By default, only the u-error is evaluated and the x-error is set to np.nan. Note that the evaluation of the x-error will be very slow if the implementation of the PPF is slow.
Further information about these error measures can be found in [1].
References
[1] Derflinger, Gerhard, Wolfgang Hörmann, and Josef Leydold. “Random variate generation by numerical inversion when only the density is known.” ACM Transactions on Modeling and Computer Simulation (TOMACS) 20.4 (2010): 1-25.
Examples
>>> import numpy as np
>>> from scipy import stats
>>> from scipy.stats.sampling import FastGeneratorInversion
Create an object for the normal distribution:
>>> d_norm_frozen = stats.norm()
>>> d_norm = FastGeneratorInversion(d_norm_frozen)
To confirm that the numerical inversion is accurate, we evaluate the approximation error (u-error and x-error).
>>> u_error, x_error = d_norm.evaluate_error(x_error=True)
The u-error should be below 1e-10:
>>> u_error
8.785783212061915e-11  # may vary
Compare the exact PPF of the frozen distribution against the approximation ppf:

>>> q = [0.001, 0.2, 0.4, 0.6, 0.8, 0.999]
>>> diff = np.abs(d_norm_frozen.ppf(q) - d_norm.ppf(q))
>>> x_error_abs = np.max(diff)
>>> x_error_abs
1.2937954707581412e-08
This is the absolute x-error evaluated at the points q. The relative error is given by
>>> x_error_rel = np.max(diff / np.abs(d_norm_frozen.ppf(q)))
>>> x_error_rel
4.186725600453555e-09
The x_error computed above is derived in a very similar way over a much larger set of random values q. At each value q[i], the minimum of the relative and absolute error is taken. The final value is then derived as the maximum of these values. In our example, we get the following value:
>>> x_error
4.507068014335139e-07  # may vary
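The combination rule used above (the pointwise minimum of absolute and relative error, then the overall maximum) can be illustrated on made-up values; the numbers below are hypothetical and not the library's internals.

```python
import numpy as np

# Hypothetical exact and approximate PPF values at three points:
# a moderate value, a very large value, and a value close to zero.
ppf_exact = np.array([0.5, 100.0, 1e-8])
ppf_fast = np.array([0.5 + 1e-9, 100.0 + 1e-6, 1e-8 + 1e-12])

x_error_abs = np.abs(ppf_exact - ppf_fast)
x_error_rel = x_error_abs / np.abs(ppf_exact)

# At each point take min(absolute, relative), then the maximum overall.
x_error = np.max(np.minimum(x_error_abs, x_error_rel))
```

For the large value 100.0 the relative error (about 1e-8) is the smaller one; for the tiny value 1e-8 the absolute error (about 1e-12) is used, which is exactly why the minimum of the two measures is taken.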