scipy.spatial.distance.pdist

scipy.spatial.distance.pdist(X, metric='euclidean', p=None, w=None, V=None, VI=None)

Pairwise distances between observations in n-dimensional space.
See Notes for common calling conventions.
Parameters: X : ndarray
An m by n array of m original observations in an n-dimensional space.
metric : str or function, optional
The distance metric to use. The distance function can be ‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘cityblock’, ‘correlation’, ‘cosine’, ‘dice’, ‘euclidean’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘wminkowski’, ‘yule’.
p : double, optional
The p-norm to apply. Only for Minkowski, weighted and unweighted. Default: 2.
w : ndarray, optional
The weight vector. Only for weighted Minkowski. Mandatory.
V : ndarray, optional
The variance vector. Only for standardized Euclidean. Default: var(X, axis=0, ddof=1).
VI : ndarray, optional
The inverse of the covariance matrix. Only for Mahalanobis. Default: inv(cov(X.T)).T.
Returns: Y : ndarray
Returns a condensed distance matrix Y. For each \(i\) and \(j\) (where \(i<j<m\), and m is the number of original observations), the metric
dist(u=X[i], v=X[j])
is computed and stored in entry ij.

See also

squareform
Converts between condensed distance matrices and square distance matrices.
Notes
See squareform for information on how to calculate the index of this entry or to convert the condensed distance matrix to a redundant square matrix.

The following are common calling conventions.
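For instance, the relationship between the condensed output and the square matrix form can be illustrated as follows (a small sketch; the three points are arbitrary):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Three points in the plane; pdist returns the m*(m-1)/2 pairwise
# distances in condensed (flat) form, ordered (0,1), (0,2), (1,2).
X = np.array([[0.0, 0.0],
              [3.0, 0.0],
              [0.0, 4.0]])
Y = pdist(X, 'euclidean')   # condensed: [3. 4. 5.]
D = squareform(Y)           # redundant m-by-m square matrix

print(Y)        # [3. 4. 5.]
print(D[1, 2])  # 5.0, the distance between X[1] and X[2]
```

squareform is its own inverse here: passing the square matrix D back to it recovers the condensed vector Y.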
Y = pdist(X, 'euclidean')
Computes the distance between m points using Euclidean distance (2-norm) as the distance metric between the points. The points are arranged as m n-dimensional row vectors in the matrix X.
Y = pdist(X, 'minkowski', p)
Computes the distances using the Minkowski distance \({\|u-v\|}_p\) (p-norm) where \(p \geq 1\).
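As a quick sanity check, the Minkowski result for a given p can be compared against the p-norm computed directly with NumPy (a minimal sketch; the coordinates are made up):

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 0.0, 8.0]])
p = 3
Y = pdist(X, 'minkowski', p=p)

# The single pairwise distance equals the p-norm of the difference vector.
manual = np.sum(np.abs(X[0] - X[1]) ** p) ** (1.0 / p)
print(np.allclose(Y[0], manual))  # True
```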
Y = pdist(X, 'cityblock')
Computes the city block or Manhattan distance between the points.
Y = pdist(X, 'seuclidean', V=None)
Computes the standardized Euclidean distance. The standardized Euclidean distance between two n-vectors u and v is

\[\sqrt{\sum {(u_i-v_i)^2 / V[x_i]}}\]

V is the variance vector; V[i] is the variance computed over all the i’th components of the points. If not passed, it is automatically computed.
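The variance weighting can be reproduced by hand; here V is passed explicitly so the result is easy to verify (a small sketch with made-up data):

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.array([[0.0, 0.0],
              [2.0, 4.0]])
V = np.array([1.0, 4.0])  # per-component variances, chosen by hand

Y = pdist(X, 'seuclidean', V=V)

# sqrt(sum((u_i - v_i)^2 / V[i])) = sqrt(4/1 + 16/4) = sqrt(8)
manual = np.sqrt(np.sum((X[0] - X[1]) ** 2 / V))
print(np.allclose(Y[0], manual))  # True
```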
Y = pdist(X, 'sqeuclidean')
Computes the squared Euclidean distance \({\|u-v\|}_2^2\) between the vectors.
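Since ‘sqeuclidean’ is simply the square of ‘euclidean’, the two metrics can be checked against each other (a brief sketch with arbitrary points):

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.array([[1.0, 2.0],
              [4.0, 6.0],
              [0.0, 3.0]])

sq = pdist(X, 'sqeuclidean')
eu = pdist(X, 'euclidean')
print(np.allclose(sq, eu ** 2))  # True
```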
Y = pdist(X, 'cosine')
Computes the cosine distance between vectors u and v,

\[1 - \frac{u \cdot v}{{\|u\|}_2 {\|v\|}_2}\]

where \({\|*\|}_2\) is the 2-norm of its argument *, and \(u \cdot v\) is the dot product of u and v.

Y = pdist(X, 'correlation')
Computes the correlation distance between vectors u and v. This is
\[1 - \frac{(u - \bar{u}) \cdot (v - \bar{v})}{{\|(u - \bar{u})\|}_2 {\|(v - \bar{v})\|}_2}\]

where \(\bar{v}\) is the mean of the elements of vector v, and \(x \cdot y\) is the dot product of \(x\) and \(y\).
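The correlation distance is one minus the Pearson correlation coefficient of the two vectors, which can be confirmed with np.corrcoef (a brief sketch):

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 1.0, 4.0, 3.0]])
Y = pdist(X, 'correlation')

# 1 - Pearson r between the two rows
manual = 1.0 - np.corrcoef(X[0], X[1])[0, 1]
print(np.allclose(Y[0], manual))  # True
```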
Y = pdist(X, 'hamming')
Computes the normalized Hamming distance, or the proportion of those vector elements between two n-vectors u and v which disagree. To save memory, the matrix X can be of type boolean.

Y = pdist(X, 'jaccard')
Computes the Jaccard distance between the points. Given two vectors, u and v, the Jaccard distance is the proportion of those elements u[i] and v[i] that disagree.

Y = pdist(X, 'chebyshev')
Computes the Chebyshev distance between the points. The Chebyshev distance between two n-vectors u and v is the maximum norm-1 distance between their respective elements. More precisely, the distance is given by

\[d(u,v) = \max_i {|u_i-v_i|}\]

Y = pdist(X, 'canberra')
Computes the Canberra distance between the points. The Canberra distance between two points u and v is

\[d(u,v) = \sum_i \frac{|u_i-v_i|}{|u_i|+|v_i|}\]

Y = pdist(X, 'braycurtis')
Computes the Bray-Curtis distance between the points. The Bray-Curtis distance between two points u and v is

\[d(u,v) = \frac{\sum_i {|u_i-v_i|}}{\sum_i {|u_i+v_i|}}\]

Y = pdist(X, 'mahalanobis', VI=None)
Computes the Mahalanobis distance between the points. The Mahalanobis distance between two points u and v is \(\sqrt{(u-v)(1/V)(u-v)^T}\) where \((1/V)\) (the VI variable) is the inverse covariance. If VI is not None, VI will be used as the inverse covariance matrix.

Y = pdist(X, 'yule')
Computes the Yule distance between each pair of boolean vectors. (see yule function documentation)

Y = pdist(X, 'matching')
Synonym for ‘hamming’.

Y = pdist(X, 'dice')
Computes the Dice distance between each pair of boolean vectors. (see dice function documentation)

Y = pdist(X, 'kulsinski')
Computes the Kulsinski distance between each pair of boolean vectors. (see kulsinski function documentation)

Y = pdist(X, 'rogerstanimoto')
Computes the Rogers-Tanimoto distance between each pair of boolean vectors. (see rogerstanimoto function documentation)

Y = pdist(X, 'russellrao')
Computes the Russell-Rao distance between each pair of boolean vectors. (see russellrao function documentation)

Y = pdist(X, 'sokalmichener')
Computes the Sokal-Michener distance between each pair of boolean vectors. (see sokalmichener function documentation)

Y = pdist(X, 'sokalsneath')
Computes the Sokal-Sneath distance between each pair of boolean vectors. (see sokalsneath function documentation)

Y = pdist(X, 'wminkowski')
Computes the weighted Minkowski distance between each pair of vectors. (see wminkowski function documentation)

Y = pdist(X, f)
Computes the distance between all pairs of vectors in X using the user-supplied 2-arity function f. For example, Euclidean distance between the vectors could be computed as follows:

dm = pdist(X, lambda u, v: np.sqrt(((u-v)**2).sum()))
Note that you should avoid passing a reference to one of the distance functions defined in this library. For example:
dm = pdist(X, sokalsneath)
would calculate the pairwise distances between the vectors in X using the Python function sokalsneath. This would result in sokalsneath being called \({n \choose 2}\) times, which is inefficient. Instead, the optimized C version is more efficient, and we call it using the following syntax:
dm = pdist(X, 'sokalsneath')
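To illustrate the point, the Python-level callable and the optimized string form produce numerically identical results; the string form is simply faster because the metric runs in C. A minimal sketch using boolean data, as sokalsneath expects:

```python
import numpy as np
from scipy.spatial.distance import pdist, sokalsneath

# Small fixed boolean observations (no all-False rows, so the
# Sokal-Sneath denominator is never zero).
X = np.array([[True, False, True, False],
              [True, True, False, False],
              [False, True, True, True]])

slow = pdist(X, sokalsneath)    # Python function called n-choose-2 times
fast = pdist(X, 'sokalsneath')  # optimized C implementation

print(np.allclose(slow, fast))  # True
```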