epps_singleton_2samp
- scipy.stats.epps_singleton_2samp(x, y, t=(0.4, 0.8), *, axis=0, nan_policy='propagate', keepdims=False)
Compute the Epps-Singleton (ES) test statistic.
Test the null hypothesis that two samples have the same underlying probability distribution.
- Parameters:
- x, y : array-like
The two samples of observations to be tested. Input must not have more than one dimension. Samples can have different lengths, but both must have at least five observations.
- t : array-like, optional
The points (t1, …, tn) where the empirical characteristic function is to be evaluated. These should be positive, distinct numbers. The default value (0.4, 0.8) is proposed in [1]. Input must not have more than one dimension.
- axis : int or None, default: 0
If an int, the axis of the input along which to compute the statistic. The statistic of each axis-slice (e.g. row) of the input will appear in a corresponding element of the output. If None, the input will be raveled before computing the statistic.
- nan_policy : {‘propagate’, ‘omit’, ‘raise’}
Defines how to handle input NaNs (illustrated in the sketch after the parameter list).
propagate: if a NaN is present in the axis slice (e.g. row) along which the statistic is computed, the corresponding entry of the output will be NaN.
omit: NaNs will be omitted when performing the calculation. If insufficient data remains in the axis slice along which the statistic is computed, the corresponding entry of the output will be NaN.
raise: if a NaN is present, a ValueError will be raised.
- keepdims : bool, default: False
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
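The sketch below illustrates the nan_policy choices above; it assumes a SciPy version that exposes the nan_policy keyword documented here, and the seed and sample sizes are arbitrary.

>>> import numpy as np
>>> from scipy import stats
>>> rng = np.random.default_rng(1234)
>>> x = rng.normal(size=30)
>>> y = rng.normal(size=30)
>>> x[0] = np.nan
>>> res = stats.epps_singleton_2samp(x, y)  # default 'propagate': statistic and pvalue come back as NaN
>>> res_omit = stats.epps_singleton_2samp(x, y, nan_policy='omit')  # the NaN observation is dropped before the test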
- Returns:
- statistic : float
The test statistic.
- pvalue : float
The associated p-value based on the asymptotic chi2-distribution.
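A minimal usage sketch (the seed, sample sizes and location shift below are arbitrary illustrations, not part of the official documentation): draw two samples and read the statistic and p-value off the returned result.

>>> import numpy as np
>>> from scipy import stats
>>> rng = np.random.default_rng(6151)
>>> x = rng.normal(size=100)
>>> y = rng.normal(loc=0.5, size=100)
>>> res = stats.epps_singleton_2samp(x, y)
>>> statistic, pvalue = res.statistic, res.pvalue  # a small p-value is evidence against the null of equal distributions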
Notes
Testing whether two samples are generated by the same underlying distribution is a classical question in statistics. A widely used test is the Kolmogorov-Smirnov (KS) test which relies on the empirical distribution function. Epps and Singleton introduce a test based on the empirical characteristic function in [1].
One advantage of the ES test compared to the KS test is that it does not assume a continuous distribution. In [1], the authors conclude that the test also has a higher power than the KS test in many examples. They recommend the use of the ES test for discrete samples as well as continuous samples with at least 25 observations each, whereas anderson_ksamp is recommended for smaller sample sizes in the continuous case (a sketch with discrete samples follows these notes).
The p-value is computed from the asymptotic distribution of the test statistic, which follows a chi2 distribution. If the sample size of both x and y is below 25, the small sample correction proposed in [1] is applied to the test statistic.
The default values of t are determined in [1] by considering various distributions and finding good values that lead to a high power of the test in general. Table III in [1] gives the optimal values for the distributions tested in that study. The values of t are scaled by the semi-interquartile range in the implementation, see [1].
Beginning in SciPy 1.9, np.matrix inputs (not recommended for new code) are converted to np.ndarray before the calculation is performed. In this case, the output will be a scalar or np.ndarray of appropriate shape rather than a 2D np.matrix. Similarly, while masked elements of masked arrays are ignored, the output will be a scalar or np.ndarray rather than a masked array with mask=False.
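To make these notes concrete, here is a sketch with discrete (Poisson) samples, which the ES test handles directly, and a user-supplied t; the values t=(0.5, 1.0, 2.0) are arbitrary illustrations rather than the optimal choices from Table III of [1].

>>> import numpy as np
>>> from scipy import stats
>>> rng = np.random.default_rng(915)
>>> x = rng.poisson(3, size=40)   # discrete samples: no continuity assumption is needed
>>> y = rng.poisson(3, size=60)   # the two samples may have different lengths
>>> res_default = stats.epps_singleton_2samp(x, y)                    # uses the default t=(0.4, 0.8)
>>> res_custom = stats.epps_singleton_2samp(x, y, t=(0.5, 1.0, 2.0))  # illustrative, non-optimal evaluation points

When both samples have fewer than 25 observations, the small sample correction described above is applied automatically; no extra argument is needed.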
References
[1] T. W. Epps and K. J. Singleton, “An omnibus test for the two-sample problem using the empirical characteristic function”, Journal of Statistical Computation and Simulation 26, p. 177–203, 1986.
[2] S. J. Goerg and J. Kaiser, “Nonparametric testing of distributions - the Epps-Singleton two-sample test using the empirical characteristic function”, The Stata Journal 9(3), p. 454–465, 2009.