Table 2 Distance definitions for SI

From: Classification complexity in myoelectric pattern recognition

Distance definition

Description

Mahalanobis distance

The Mahalanobis distance was designed to measure the distance between a distribution and a single point [19]. Hereafter, half the Mahalanobis distance is the value referred to as the Mahalanobis distance, because that is how it was originally used in SI [13].

Mahalanobis distance for multivariate normal distributions is defined as:

\( \frac{D_M}{2}=\frac{1}{2}\sqrt{{\left({\mu}_1-{\mu}_2\right)}^T{S}_1^{-1}\left({\mu}_1-{\mu}_2\right)} \)
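A minimal NumPy sketch of this definition, assuming two movement clusters summarized by their means and the covariance of the considered movement; the names mu1, mu2, and S1 are illustrative, not from the paper's implementation:

```python
import numpy as np

def half_mahalanobis(mu1, mu2, S1):
    """D_M / 2 = 0.5 * sqrt((mu1 - mu2)^T S1^{-1} (mu1 - mu2))."""
    d = mu1 - mu2
    # solve(S1, d) computes S1^{-1} d without forming an explicit inverse
    return 0.5 * np.sqrt(d @ np.linalg.solve(S1, d))

# Illustrative 2-D example
mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
S1 = np.array([[1.0, 0.2], [0.2, 1.5]])
print(half_mahalanobis(mu1, mu2, S1))
```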

Bhattacharyya distance

The Bhattacharyya distance is a measure of statistical similarity between two distributions, based on the Bhattacharyya coefficient (BC) [24]. Unlike the Mahalanobis distance, it accounts for both the distance between the distributions and the similarity of their covariances. In this study, the square root of the Bhattacharyya distance was used to match the formulation of the Mahalanobis distance and facilitate comparison.

Bhattacharyya coefficient for the continuous probability distributions p and q is defined as:

\( BC=\int \sqrt{p(x) q(x)} dx \)

The Bhattacharyya distance as a function of the Bhattacharyya coefficient:

\( \sqrt{D_B}=\sqrt{-\frac{1}{2} \ln (BC)} \)
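A hedged sketch of these two steps for one-dimensional normal densities, estimating BC by numerical integration and then applying the table's definition of \( \sqrt{D_B} \); the grid bounds and distribution parameters are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# Two example 1-D normal densities (illustrative parameters)
x = np.linspace(-10.0, 10.0, 10001)
p = norm.pdf(x, loc=0.0, scale=1.0)
q = norm.pdf(x, loc=1.0, scale=1.5)

BC = np.trapz(np.sqrt(p * q), x)        # BC = integral of sqrt(p(x) q(x)) dx
sqrt_DB = np.sqrt(-0.5 * np.log(BC))    # sqrt(D_B) as defined above
print(BC, sqrt_DB)
```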

Bhattacharyya distance for multivariate normal distributions (square root) [25]:

\( \sqrt{D_B}=\sqrt{\frac{1}{8}{\left({\mu}_1-{\mu}_2\right)}^T{S}^{-1}\left({\mu}_1-{\mu}_2\right)+\frac{1}{2}\ln \left(\frac{\det S}{\sqrt{\det {S}_1\det {S}_2}}\right)} \)
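A sketch of this closed form, using the pooled covariance \( S=\frac{S_1+S_2}{2} \) defined at the end of the table; log-determinants are used for numerical stability, and the function name is an assumption:

```python
import numpy as np

def sqrt_bhattacharyya(mu1, mu2, S1, S2):
    S = 0.5 * (S1 + S2)            # pooled covariance, as defined below
    d = mu1 - mu2
    term_mean = 0.125 * d @ np.linalg.solve(S, d)
    # 0.5 * ln(det S / sqrt(det S1 * det S2)), via slogdet for stability
    term_cov = 0.5 * (np.linalg.slogdet(S)[1]
                      - 0.5 * (np.linalg.slogdet(S1)[1]
                               + np.linalg.slogdet(S2)[1]))
    return np.sqrt(term_mean + term_cov)
```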

Kullback–Leibler divergence

Kullback–Leibler divergence is a well-known statistical similarity measure that is typically used to determine whether an observed distribution, Q, is a sample from a true distribution, P [26].

Kullback–Leibler divergence for multivariate normal distributions is defined as [25]:

\( {D}_{KL}=\frac{1}{2}\left(\operatorname{tr}\left({S}_1^{-1}{S}_2\right)+{\left({\mu}_1-{\mu}_2\right)}^T{S}_1^{-1}\left({\mu}_1-{\mu}_2\right)-k+\ln \left(\frac{\det {S}_1}{\det {S}_2}\right)\right) \)
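A sketch of this expression; k is the feature-space dimensionality, and the variable names are illustrative assumptions:

```python
import numpy as np

def kl_divergence(mu1, mu2, S1, S2):
    k = mu1.size                                     # feature-space dimension
    d = mu1 - mu2
    trace_term = np.trace(np.linalg.solve(S1, S2))   # tr(S1^{-1} S2)
    quad_term = d @ np.linalg.solve(S1, d)           # (mu1-mu2)^T S1^{-1} (mu1-mu2)
    logdet_term = np.linalg.slogdet(S1)[1] - np.linalg.slogdet(S2)[1]
    return 0.5 * (trace_term + quad_term - k + logdet_term)
```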

Hellinger distance

The Hellinger distance is related to the Bhattacharyya distance, as it is also based on the Bhattacharyya coefficient [27]. In this study, the square of the Hellinger distance was used to avoid complex numbers where the assumption of normality fails; this value is referred to here as the Hellinger distance.

Hellinger distance as a function of the Bhattacharyya coefficient is defined as:

\( {D}_H^2=1- BC \)

Hellinger distance for multivariate normal distributions:

\( {D}_H^2=1-\frac{{\left(\det {S}_1\right)}^{\frac{1}{4}}{\left(\det {S}_2\right)}^{\frac{1}{4}}}{{\left(\det S\right)}^{\frac{1}{2}}}\exp \left\{-\frac{1}{8}{\left({\mu}_1-{\mu}_2\right)}^T{S}^{-1}\left({\mu}_1-{\mu}_2\right)\right\} \)
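A sketch of the squared Hellinger distance above, again with the pooled covariance \( S=\frac{S_1+S_2}{2} \); the variable names are illustrative:

```python
import numpy as np

def hellinger_squared(mu1, mu2, S1, S2):
    S = 0.5 * (S1 + S2)                # pooled covariance, as defined below
    d = mu1 - mu2
    coeff = ((np.linalg.det(S1) ** 0.25 * np.linalg.det(S2) ** 0.25)
             / np.sqrt(np.linalg.det(S)))
    return 1.0 - coeff * np.exp(-0.125 * d @ np.linalg.solve(S, d))
```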

Modified Mahalanobis

This measure of statistical similarity is equal to the aforementioned Mahalanobis distance, except that it takes into account the covariance matrices of both distributions being compared. The algorithm is related to the Bhattacharyya distance, but focuses only on the distance between the distributions. This CCEA is referred to here as the modified Mahalanobis distance and is defined for multivariate normal distributions as:

\( \frac{D_{MM}}{2}=\frac{1}{2}\sqrt{{\left({\mu}_1-{\mu}_2\right)}^T{S}^{-1}\left({\mu}_1-{\mu}_2\right)} \)
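A sketch highlighting that this is the half-Mahalanobis computation with the pooled covariance S substituted for S1; the function name is an assumption:

```python
import numpy as np

def half_modified_mahalanobis(mu1, mu2, S1, S2):
    S = 0.5 * (S1 + S2)    # pooled covariance replaces S1
    d = mu1 - mu2
    return 0.5 * np.sqrt(d @ np.linalg.solve(S, d))
```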

Explanations

In all equations above, indices 1 and 2 denote the considered movement and the compared movement, respectively, and

\( S=\frac{S_1+{S}_2}{2} \)

  1. Table of distance definitions used to compute SI, including their names, definitions, and how they were implemented in the present study