Covariance Matrix
Definitions
Consider a discrete random variable $X$ with a finite list $x_1, \dots, x_k$ of possible outcomes, occurring with probabilities $p_1, \dots, p_k$ respectively.
The expected value or mean of $X$ is defined as $\mathbb{E}[X] = \mu = \sum_{i=1}^{k} x_i p_i$.
The variance of a random variable $X$ is the expected value of the squared deviation from the mean of $X$, defined as $\operatorname{Var}(X) = \sigma^2 = \mathbb{E}\left[(X - \mu)^2\right]$.
The standard deviation is defined as $\sigma = \sqrt{\operatorname{Var}(X)}$.
Now consider two random variables $X$ and $Y$. The covariance is defined as the expected value of the product of their deviations from their individual expected values: $\operatorname{cov}(X, Y) = \mathbb{E}\left[(X - \mathbb{E}[X])(Y - \mathbb{E}[Y])\right]$.
The correlation is defined as $\operatorname{corr}(X, Y) = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y}$, which ranges from $-1$ to $1$. It is a normalized version of the covariance.
Now consider a column vector $\mathbf{X} = (X_1, X_2, \dots, X_m)^T$, where each entry is a random variable. The covariance matrix $K_{\mathbf{X}\mathbf{X}}$ is the $m \times m$ matrix whose $(i, j)$ entry is the covariance $\operatorname{cov}(X_i, X_j)$.
The correlation matrix is defined the same way, with $(i, j)$ entry $\operatorname{corr}(X_i, X_j)$, and acts as a normalized covariance matrix.
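As a quick illustration, here is a minimal NumPy sketch of these definitions; the outcomes, probabilities, and joint table below are made-up numbers, not from any particular dataset:

```python
import numpy as np

# Made-up discrete distribution: outcomes x_i with probabilities p_i.
x = np.array([1.0, 2.0, 3.0, 4.0])
p = np.array([0.1, 0.2, 0.3, 0.4])   # must sum to 1

mean = np.sum(p * x)                 # E[X] = sum_i x_i * p_i   -> 3.0
var = np.sum(p * (x - mean) ** 2)    # Var(X) = E[(X - mu)^2]   -> 1.0
std = np.sqrt(var)                   # standard deviation       -> 1.0

# Covariance needs a joint distribution P[i, j] = P(X = x_i, Y = y_j).
# This table is made up; its row sums reproduce p above.
y = np.array([0.0, 1.0])
P = np.array([[0.10, 0.00],
              [0.10, 0.10],
              [0.10, 0.20],
              [0.00, 0.40]])
mean_y = np.sum(P.sum(axis=0) * y)                     # E[Y] = 0.7
var_y = np.sum(P.sum(axis=0) * (y - mean_y) ** 2)      # Var(Y) = 0.21
cov_xy = np.sum(P * np.outer(x - mean, y - mean_y))    # cov(X, Y) = 0.3
corr_xy = cov_xy / (std * np.sqrt(var_y))              # approx. 0.655
```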
Estimation from Samples
The above definitions are mathematical descriptions that work well for known distributions.
In practice, when we estimate from samples, the true distribution is unknown. In this case, we can only access the sample mean, sample variance, and sample covariance.
The sample mean $\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ is an unbiased estimator of the true mean.
The sample variance $s_n^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$ is biased: on average it underestimates the true variance.
The unbiased sample variance is $s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$ (Bessel's correction).
$X$ is a random variable, and $X_i$ are observations or samples of this random variable.
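The difference between the two estimators maps directly onto NumPy's ddof ("delta degrees of freedom") argument; the sample values below are arbitrary:

```python
import numpy as np

# Arbitrary sample values for illustration.
samples = np.array([2.1, 2.5, 1.9, 2.8, 2.3])
n = len(samples)

sample_mean = samples.mean()                                    # unbiased

biased_var = ((samples - sample_mean) ** 2).sum() / n           # divides by n
unbiased_var = ((samples - sample_mean) ** 2).sum() / (n - 1)   # Bessel's correction

assert np.isclose(biased_var, np.var(samples, ddof=0))    # NumPy's default
assert np.isclose(unbiased_var, np.var(samples, ddof=1))
```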
The unbiased sample (auto-)covariance matrix is $Q_{\mathbf{X}\mathbf{X}} = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})(X_i - \bar{X})^T$.
The unbiased sample (cross-)covariance matrix is $Q_{\mathbf{X}\mathbf{Y}} = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})^T$.
$\mathbf{X}$ and $\mathbf{Y}$ are random vectors of size $m$; each entry of a vector is a random variable. $X_i$ and $Y_i$ are observations or samples of the random vectors $\mathbf{X}$ and $\mathbf{Y}$, and $\bar{X}$, $\bar{Y}$ are the sample means.
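A small NumPy sketch of both estimators, assuming the samples are stored as the columns of an $m \times n$ matrix (the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 100                  # vector size m, number of samples n

X = rng.normal(size=(m, n))    # columns are samples X_i of the random vector X
Y = rng.normal(size=(m, n))    # columns are samples Y_i of the random vector Y

X_demean = X - X.mean(axis=1, keepdims=True)
Y_demean = Y - Y.mean(axis=1, keepdims=True)

# Unbiased sample auto-covariance: 1/(n-1) * sum_i (X_i - Xbar)(X_i - Xbar)^T
auto_cov = X_demean @ X_demean.T / (n - 1)
assert np.allclose(auto_cov, np.cov(X))   # np.cov also normalizes by n - 1

# Unbiased sample cross-covariance: 1/(n-1) * sum_i (X_i - Xbar)(Y_i - Ybar)^T
cross_cov = X_demean @ Y_demean.T / (n - 1)
```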
If the random variables are known to be normally distributed, the $1/n$ version is also commonly used: it is the maximum likelihood estimator for a Gaussian, although it is still biased. (A special property of the normal distribution in this context is that the sample mean and sample variance are independent.)
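A quick Monte Carlo check makes the bias visible even for Gaussian data; the sample size and trial count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
data = rng.normal(loc=0.0, scale=1.0, size=(trials, n))  # true variance is 1

# Averaging each estimator over many trials approximates its expected value.
print(data.var(axis=1, ddof=0).mean())  # approx (n-1)/n = 0.8, i.e. biased
print(data.var(axis=1, ddof=1).mean())  # approx 1.0, i.e. unbiased
```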
Applications
Covariance matrix of a set of 3D points
In a typical normal estimation algorithm, the normal direction of each point is estimated locally by first computing the covariance matrix of points in the neighborhood (e.g., 20 points), and then taking the eigenvector corresponding to the smallest eigenvalue of the covariance matrix.
Let $X$, $Y$ and $Z$ denote the random variables (of unknown distributions) along the x, y, and z axes respectively. The set of 3D points is a set of samples drawn from their joint distribution.
The covariance matrix in this case is defined as
$$\Sigma = \begin{pmatrix} \operatorname{cov}(X,X) & \operatorname{cov}(X,Y) & \operatorname{cov}(X,Z) \\ \operatorname{cov}(Y,X) & \operatorname{cov}(Y,Y) & \operatorname{cov}(Y,Z) \\ \operatorname{cov}(Z,X) & \operatorname{cov}(Z,Y) & \operatorname{cov}(Z,Z) \end{pmatrix}.$$
This is the auto-covariance matrix, meaning that it computes the covariance between the random vector and itself. This matrix is by definition always positive semi-definite: for any vector $v$, $v^T \Sigma v = \operatorname{Var}(v^T \mathbf{X}) \geq 0$.
Since auto-covariance matrices are always real symmetric, the singular values (from SVD) are the absolute values of the eigenvalues. Also, since the matrix is positive semi-definite, all eigenvalues are non-negative.
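Below is a minimal sketch of such a normal estimation step, assuming the neighborhood is given as a $k \times 3$ NumPy array; the function name and the synthetic near-planar points are illustrative only:

```python
import numpy as np

def estimate_normal(neighborhood):
    """Estimate a surface normal from a (k, 3) array of neighboring 3D points."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)   # 3x3 auto-covariance
    # eigh handles real symmetric matrices and returns eigenvalues in
    # ascending order; they are all non-negative because cov is PSD.
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, 0]   # eigenvector of the smallest eigenvalue

# 20 synthetic points scattered close to the z = 0 plane.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1.0, 1.0, 20),
                       rng.uniform(-1.0, 1.0, 20),
                       rng.normal(0.0, 0.01, 20)])
print(estimate_normal(pts))   # close to (0, 0, +-1)
```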
Covariance matrix of two point clouds
In the standard point-to-point ICP registration algorithm, the closed-form solution is obtained by performing SVD on the cross-covariance matrix estimated from two subsets of corresponding points, one from each point cloud. This is the method proposed by Umeyama in 1991.
Let $(X_1, Y_1, Z_1)$ be the random vector of point cloud A, and let $(X_2, Y_2, Z_2)$ be the random vector of the other point cloud. Each entry of the vector is a random variable corresponding to the x/y/z axis.
The covariance matrix in this case is
$$\Sigma = \begin{pmatrix} \operatorname{cov}(X_1,X_2) & \operatorname{cov}(X_1,Y_2) & \operatorname{cov}(X_1,Z_2) \\ \operatorname{cov}(Y_1,X_2) & \operatorname{cov}(Y_1,Y_2) & \operatorname{cov}(Y_1,Z_2) \\ \operatorname{cov}(Z_1,X_2) & \operatorname{cov}(Z_1,Y_2) & \operatorname{cov}(Z_1,Z_2) \end{pmatrix}.$$
This is the cross-covariance matrix, meaning that it computes the covariance between one random vector and the other. This matrix is not guaranteed to be symmetric or positive semi-definite.
Suppose that $A$ and $B$ are $3 \times n$ matrices where each column is a 3D point. The practical way to compute this cross-covariance matrix is $C = \frac{1}{n} A_{\text{demean}} B_{\text{demean}}^T$, where $^T$ is the transpose and "demean" indicates that we first subtract the centroid from all points in $A$ and $B$. (Whether $1/n$ or $1/(n-1)$ is used does not matter here, since scaling $C$ by a positive constant leaves its singular vectors unchanged.)
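Below is a minimal sketch of this computation together with the SVD step that recovers the rotation, assuming known, noise-free correspondences; the point data and ground-truth rotation are made up, and a full ICP would alternate this closed-form step with correspondence search:

```python
import numpy as np

def cross_covariance(A, B):
    """C = 1/n * A_demean @ B_demean^T for 3-by-n matrices of paired points."""
    n = A.shape[1]
    A_demean = A - A.mean(axis=1, keepdims=True)
    B_demean = B - B.mean(axis=1, keepdims=True)
    return A_demean @ B_demean.T / n

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 50))          # source cloud (columns are points)
R_true = np.array([[0.0, -1.0, 0.0],  # ground-truth rotation: 90 deg about z
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
B = R_true @ A                        # target cloud with known correspondences

# Covariance of target with source, so the recovered R maps A onto B.
C = cross_covariance(B, A)
U, S, Vt = np.linalg.svd(C)
# Reflection guard from Umeyama's solution: force det(R) = +1.
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
R_est = U @ D @ Vt
assert np.allclose(R_est, R_true)
```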
References
S. Umeyama, "Least-squares estimation of transformation parameters between two point patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(4):376-380, 1991.