In probability theory and statistics, covariance is a measure of how much two random variables change together. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the smaller values, i.e., the variables tend to show similar behavior, the covariance is positive.^{[1]} In the opposite case, when the greater values of one variable mainly correspond to the smaller values of the other, i.e., the variables tend to show opposite behavior, the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation.
A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which serves as an estimated value of the parameter.
The covariance between two jointly distributed real-valued random variables x and y with finite second moments is defined^{[2]} as

Cov(x, y) = E[(x − E[x])(y − E[y])],
where E[x] is the expected value of x, also known as the mean of x. By using the linearity property of expectations, this can be simplified to

Cov(x, y) = E[xy] − E[x] E[y].
However, when E[xy] ≈ E[x] E[y], this last equation is prone to catastrophic cancellation when computed with floating-point arithmetic, and thus should be avoided in computer programs when the data have not been centered beforehand.^{[3]}
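As a minimal illustration (a Python/NumPy sketch, not part of the original article; the constants are chosen only to trigger the effect), the two formulas above can give very different answers in floating point when the means are large relative to the spread of the data:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Samples whose mean (~1e8) is huge relative to their spread (~1),
# the regime where E[xy] and E[x]E[y] are nearly equal.
x = 1e8 + rng.standard_normal(100_000)
y = 0.5 * x + rng.standard_normal(100_000)  # true covariance is 0.5

# Naive formula E[xy] - E[x]E[y]: subtracts two numbers of size ~5e15,
# so most significant digits cancel and rounding error can dominate.
naive = np.mean(x * y) - np.mean(x) * np.mean(y)

# Centered formula E[(x - E[x])(y - E[y])]: numerically stable.
stable = np.mean((x - x.mean()) * (y - y.mean()))

print(f"naive:  {naive:.6f}")   # often visibly off from 0.5
print(f"stable: {stable:.6f}")  # close to 0.5
```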
For random vectors x and y (of dimension m and n respectively), the m×n cross-covariance matrix (also known as dispersion matrix or variance–covariance matrix,^{[4]} or simply called covariance matrix) is equal to

Cov(x, y) = E[(x − E[x])(y − E[y])^{T}],
where m^{T} is the transpose of the vector (or matrix) m.
The (i,j)th element of this matrix is equal to the covariance Cov(x_{i}, y_{j}) between the ith scalar component of x and the jth scalar component of y. In particular, Cov(y, x) is the transpose of Cov(x, y).
For a vector x = [x_{1}, ..., x_{m}]^{T} of m jointly distributed random variables with finite second moments, its covariance matrix is defined as

Σ(x) = Cov(x, x) = E[(x − E[x])(x − E[x])^{T}].
Random variables whose covariance is zero are called uncorrelated.
The units of measurement of the covariance Cov(x, y) are those of x times those of y. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
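To make the normalization concrete, here is a Python/NumPy sketch (not from the article; the data and the unit labels are purely illustrative). Dividing the covariance by the product of the standard deviations cancels the units and yields the correlation coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)            # e.g., a quantity in metres
y = 2.0 * x + rng.normal(size=10_000)  # e.g., a quantity in kilograms

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))  # units: metre * kilogram
corr_xy = cov_xy / (x.std() * y.std())             # dimensionless, in [-1, 1]

print(cov_xy)   # about 2.0 (value depends on the units chosen)
print(corr_xy)  # about 0.89, regardless of units
```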
For a sequence x_{1}, ..., x_{n} of random variables, and constants a_{1}, ..., a_{n}, we have

σ^{2}(a_{1}x_{1} + ⋯ + a_{n}x_{n}) = ∑_{i=1}^{n} a_{i}^{2} σ^{2}(x_{i}) + 2 ∑_{i<j} a_{i} a_{j} Cov(x_{i}, x_{j}).
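This identity can be checked numerically for two variables (a Python/NumPy sketch under assumed example values, not part of the article); it holds exactly for the plug-in sample moments, up to floating-point rounding:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50_000)
y = 0.3 * x + rng.normal(size=50_000)  # correlated with x
a, b = 2.0, -1.5

# Left side: variance of the linear combination, estimated directly.
lhs = np.var(a * x + b * y)

# Right side: a^2 var(x) + b^2 var(y) + 2ab cov(x, y),
# using the same (divide-by-N) plug-in estimators throughout.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
rhs = a**2 * np.var(x) + b**2 * np.var(y) + 2 * a * b * cov_xy

print(lhs, rhs)  # agree up to floating-point rounding
```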
Let x be a random vector with covariance matrix Σ(x), and let A be a matrix that can act on x. The covariance matrix of the vector Ax is:

Σ(Ax) = A Σ(x) A^{T}.
This is a direct result of the linearity of expectation and is useful when applying a linear transformation, such as a whitening transformation, to a vector.
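The identity can be verified numerically (a Python/NumPy sketch with assumed example values, not from the article); it holds exactly for sample covariance matrices as well, since they are bilinear in the centered data:

```python
import numpy as np

rng = np.random.default_rng(3)
# 100,000 draws of a 3-dimensional random vector (rows = observations).
X = rng.multivariate_normal(mean=[0, 0, 0],
                            cov=[[2, 1, 0], [1, 3, 1], [0, 1, 1]],
                            size=100_000)
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 1.0]])  # maps R^3 -> R^2

sigma_x = np.cov(X, rowvar=False)         # estimated 3x3 covariance of x
sigma_ax = np.cov(X @ A.T, rowvar=False)  # estimated 2x2 covariance of Ax

# Cov(Ax) computed directly matches A Sigma(x) A^T.
print(np.allclose(A @ sigma_x @ A.T, sigma_ax))  # True
```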
If x and y are independent, then their covariance is zero. This follows because under independence,

E[xy] = E[x] E[y],

and therefore Cov(x, y) = E[xy] − E[x] E[y] = 0.
The converse, however, is not generally true. For example, let x be uniformly distributed in [−1, 1] and let y = x^{2}. Clearly, x and y are dependent, but

Cov(x, y) = Cov(x, x^{2}) = E[x·x^{2}] − E[x]·E[x^{2}] = E[x^{3}] − E[x] E[x^{2}] = 0 − 0·E[x^{2}] = 0.
In this case, the relationship between y and x is nonlinear, while correlation and covariance are measures of linear dependence between two variables. This example shows that if two variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness does imply independence.
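The example can be reproduced by simulation (a Python/NumPy sketch, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2  # y is a deterministic function of x: fully dependent

# The sample covariance is near zero, because E[x^3] = E[x] = 0 by symmetry.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
print(cov_xy)  # approximately 0 (exactly 0 in the infinite-sample limit)
```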
Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:

1. bilinear: for constants a and b and random variables x, y, and z, Cov(ax + by, z) = a Cov(x, z) + b Cov(y, z);
2. symmetric: Cov(x, y) = Cov(y, x);
3. positive semi-definite: σ^{2}(x) = Cov(x, x) ≥ 0 for all random variables x, and Cov(x, x) = 0 implies that x is constant almost surely.
In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semidefiniteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the L^{2} inner product of real-valued functions on the sample space.
As a result, for random variables with finite variance, the inequality

|Cov(x, y)| ≤ √(σ^{2}(x) σ^{2}(y))

holds via the Cauchy–Schwarz inequality.
Proof: If σ^{2}(y) = 0, then the inequality holds trivially. Otherwise, let the random variable

z = x − (Cov(x, y)/σ^{2}(y)) y.
Then we have

0 ≤ σ^{2}(z) = σ^{2}(x) − 2 (Cov(x, y)/σ^{2}(y)) Cov(x, y) + (Cov(x, y)/σ^{2}(y))^{2} σ^{2}(y) = σ^{2}(x) − Cov(x, y)^{2}/σ^{2}(y),

so that Cov(x, y)^{2} ≤ σ^{2}(x) σ^{2}(y), which proves the inequality.
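The bound also holds for sample moments, since they are the moments of the empirical distribution; a quick Python/NumPy check (illustrative values, not from the article):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=100_000)
y = -0.7 * x + rng.normal(size=100_000)

# Same divide-by-N convention for both the covariance and the variances.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
bound = np.sqrt(np.var(x) * np.var(y))
print(abs(cov_xy) <= bound)  # True: |Cov(x, y)| <= sigma(x) sigma(y)
```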
The sample covariance of N observations of K variables is the K-by-K matrix Q = [q_{jk}] whose entries

q_{jk} = (1/(N − 1)) ∑_{i=1}^{N} (x_{ij} − x̄_{j})(x_{ik} − x̄_{k}),

where x̄_{j} = (1/N) ∑_{i=1}^{N} x_{ij} is the sample mean of the j-th variable, give an estimate of the covariance between variable j and variable k.
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector x, a row vector whose j^{th} element (j = 1, ..., K) is one of the random variables. The reason the sample covariance matrix has N − 1 in the denominator rather than N is essentially that the population mean E[x] is not known and is replaced by the sample mean x̄. If the population mean E[x] is known, the analogous unbiased estimate is given by

q_{jk} = (1/N) ∑_{i=1}^{N} (x_{ij} − E[x_{j}])(x_{ik} − E[x_{k}]).
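For reference, a Python/NumPy sketch of the two normalizations (illustrative data, not from the article); np.cov divides by N − 1 by default, matching the unbiased estimator above:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 3))  # N = 100 observations of K = 3 variables

# Default: normalization by N - 1 (the unbiased estimator shown above).
Q_unbiased = np.cov(X, rowvar=False)

# bias=True normalizes by N instead. Note it still centers at the sample
# mean, so it is not the known-population-mean estimator from the text;
# it only shows the alternative normalization.
Q_biased = np.cov(X, rowvar=False, bias=True)

print(np.allclose(Q_unbiased * (100 - 1) / 100, Q_biased))  # True
```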
The covariance is sometimes called a measure of "linear dependence" between the two random variables. That phrase does not mean the same thing as it does in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the correlation coefficient. From it, one can obtain the Pearson coefficient, which gives the goodness of fit for the best possible linear function describing the relation between the variables. In this sense, covariance is a linear gauge of dependence.
Covariance is an important measure in biology. Certain sequences of DNA are conserved more than others among species, and thus, to study the secondary and tertiary structures of proteins or of RNA, we compare sequences in closely related species. If we find sequence changes, or no changes at all, in noncoding RNA (such as microRNA), we can determine which sequences are necessary for common structural motifs, such as an RNA loop.
Covariances play a key role in financial economics, especially in portfolio theory and in the capital asset pricing model. Covariances among various assets' returns are used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.