In statistics, the Pearson product-moment correlation coefficient (/ˈpɪərsɨn/) (sometimes referred to as the PPMCC or PCC or Pearson's r) is a measure of the linear correlation (dependence) between two variables X and Y, giving a value between +1 and −1 inclusive, where 1 is total positive correlation, 0 is no correlation, and −1 is total negative correlation. It is widely used in the sciences as a measure of the degree of linear dependence between two variables. It was developed by Karl Pearson from a related idea introduced by Francis Galton in the 1880s.^{[1]}^{[2]}^{[3]}
Pearson's correlation coefficient between two variables is defined as the covariance of the two variables divided by the product of their standard deviations. The form of the definition involves a "product moment", that is, the mean (the first moment about the origin) of the product of the mean-adjusted random variables; hence the modifier product-moment in the name.
Pearson's correlation coefficient when applied to a population is commonly represented by the Greek letter ρ (rho) and may be referred to as the population correlation coefficient or the population Pearson correlation coefficient. The formula for ρ is:

ρ_{X,Y} = cov(X, Y) / (σ_{X} σ_{Y}) = E[(X − μ_{X})(Y − μ_{Y})] / (σ_{X} σ_{Y})

where cov is the covariance, σ_{X} is the standard deviation of X, μ_{X} is the mean of X, and E is the expectation.
Pearson's correlation coefficient when applied to a sample is commonly represented by the letter r and may be referred to as the sample correlation coefficient or the sample Pearson correlation coefficient. We can obtain a formula for r by substituting estimates of the covariances and variances based on a sample into the formula above. That formula for r is:

r = Σ_{i=1}^{n} (X_{i} − X̄)(Y_{i} − Ȳ) / ( √(Σ_{i=1}^{n} (X_{i} − X̄)^{2}) √(Σ_{i=1}^{n} (Y_{i} − Ȳ)^{2}) )
An equivalent expression gives the correlation coefficient as the mean of the products of the standard scores. Based on a sample of paired data (X_{i}, Y_{i}), the sample Pearson correlation coefficient is

r = (1/(n − 1)) Σ_{i=1}^{n} ((X_{i} − X̄)/s_{X}) ((Y_{i} − Ȳ)/s_{Y})

where

X̄ = (1/n) Σ_{i=1}^{n} X_{i}  and  s_{X} = √( (1/(n − 1)) Σ_{i=1}^{n} (X_{i} − X̄)^{2} )

are the sample mean and sample standard deviation, respectively. Thus, the first parenthesized term in the previous summation is the standard score. (The terms for Y are similar.)
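For concreteness, here is a minimal Python sketch of the standard-score formula above (the helper name pearson_r is ours, not from the article):

```python
# A minimal sketch: the sample Pearson correlation as the mean of the
# products of standard scores, with the n - 1 normalization used above.
import math

def pearson_r(x, y):
    n = len(x)
    assert n == len(y) and n >= 2
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Sample standard deviations (n - 1 in the denominator).
    s_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x) / (n - 1))
    s_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y) / (n - 1))
    # Mean of the products of standard scores, with the same n - 1 factor.
    return sum(((xi - mean_x) / s_x) * ((yi - mean_y) / s_y)
               for xi, yi in zip(x, y)) / (n - 1)
```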
The absolute values of both the sample and population Pearson correlation coefficients are less than or equal to 1. Correlations equal to 1 or −1 correspond to data points lying exactly on a line (in the case of the sample correlation), or to a bivariate distribution entirely supported on a line (in the case of the population correlation). The Pearson correlation coefficient is symmetric: corr(X,Y) = corr(Y,X).
A key mathematical property of the Pearson correlation coefficient is that it is invariant to separate changes in location and scale in the two variables. That is, we may transform X to a + bX and transform Y to c + dY, where a, b, c, and d are constants with b, d > 0, without changing the correlation coefficient. (This fact holds for both the population and sample Pearson correlation coefficients.) Note that more general linear transformations do change the correlation: see a later section for an application of this.
The Pearson correlation can be expressed in terms of uncentered moments. Since μ_{X} = E(X), σ_{X}^{2} = E[(X − E(X))^{2}] = E(X^{2}) − E^{2}(X) and likewise for Y, and since

E[(X − E(X))(Y − E(Y))] = E(XY) − E(X)E(Y),

the correlation can also be written as

ρ_{X,Y} = (E(XY) − E(X)E(Y)) / ( √(E(X^{2}) − E^{2}(X)) √(E(Y^{2}) − E^{2}(Y)) )
Alternative formulae for the sample Pearson correlation coefficient are also available:

r = (Σ X_{i}Y_{i} − n X̄ Ȳ) / ((n − 1) s_{X} s_{Y}) = (n Σ X_{i}Y_{i} − Σ X_{i} Σ Y_{i}) / ( √(n Σ X_{i}^{2} − (Σ X_{i})^{2}) √(n Σ Y_{i}^{2} − (Σ Y_{i})^{2}) )
The above formula suggests a convenient single-pass algorithm for calculating sample correlations, but, depending on the numbers involved, it can sometimes be numerically unstable.
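A sketch of that single-pass algorithm in Python, for illustration:

```python
# Single-pass computation of r from running sums, following the
# alternative formula above. Note the subtraction of large, nearly
# equal sums can suffer catastrophic cancellation -- the numerical
# instability mentioned in the text.
import math

def pearson_r_single_pass(pairs):
    n = sum_x = sum_y = sum_xx = sum_yy = sum_xy = 0.0
    for x, y in pairs:          # one pass over the data
        n += 1
        sum_x += x
        sum_y += y
        sum_xx += x * x
        sum_yy += y * y
        sum_xy += x * y
    num = n * sum_xy - sum_x * sum_y
    den = (math.sqrt(n * sum_xx - sum_x ** 2) *
           math.sqrt(n * sum_yy - sum_y ** 2))
    return num / den
```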
The correlation coefficient ranges from −1 to 1. A value of 1 implies that a linear equation describes the relationship between X and Y perfectly, with all data points lying on a line for which Y increases as X increases. A value of −1 implies that all data points lie on a line for which Y decreases as X increases. A value of 0 implies that there is no linear correlation between the variables.
More generally, note that (X_{i} − X̄)(Y_{i} − Ȳ) is positive if and only if X_{i} and Y_{i} lie on the same side of their respective means. Thus the correlation coefficient is positive if X_{i} and Y_{i} tend to be simultaneously greater than, or simultaneously less than, their respective means. The correlation coefficient is negative if X_{i} and Y_{i} tend to lie on opposite sides of their respective means.
For uncentered data, it is possible to obtain a relation between the correlation coefficient and the angle φ between the two possible regression lines, y = g_{X}(x) and x = g_{Y}(y). One can show^{[4]} that r = sec φ − tan φ.
For centered data (i.e., data which have been shifted by the sample mean so as to have an average of zero), the correlation coefficient can also be viewed as the cosine of the angle between the two vectors of samples drawn from the two random variables (see below).
Both the uncentered (non-Pearson-compliant) and centered correlation coefficients can be determined for a dataset. As an example, suppose five countries are found to have gross national products of 1, 2, 3, 5, and 8 billion dollars, respectively. Suppose these same five countries (in the same order) are found to have 11%, 12%, 13%, 15%, and 18% poverty. Then let x and y be ordered 5-element vectors containing the above data: x = (1, 2, 3, 5, 8) and y = (0.11, 0.12, 0.13, 0.15, 0.18).
By the usual procedure for finding the angle θ between two vectors (see dot product), the uncentered correlation coefficient is:

cos θ = x · y / (‖x‖ ‖y‖) = 2.93 / √(103 × 0.0983) ≈ 0.920814
Note that the above data were deliberately chosen to be perfectly correlated: y = 0.10 + 0.01 x. The Pearson correlation coefficient must therefore be exactly one. Centering the data (shifting x by E(x) = 3.8 and y by E(y) = 0.138) yields x = (−2.8, −1.8, −0.8, 1.2, 4.2) and y = (−0.028, −0.018, −0.008, 0.012, 0.042), from which

cos θ = 0.308 / √(30.8 × 0.00308) = 0.308 / 0.308 = 1 = ρ_{xy},

as expected.
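The example can be checked numerically with a short Python sketch:

```python
# Reproducing the example above: uncentered (cosine) and centered
# (Pearson) correlations for the GNP/poverty data.
import math

x = [1, 2, 3, 5, 8]
y = [0.11, 0.12, 0.13, 0.15, 0.18]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

print(cosine(x, y))                      # uncentered: ~0.920814
mx, my = sum(x) / len(x), sum(y) / len(y)
xc = [a - mx for a in x]
yc = [b - my for b in y]
print(cosine(xc, yc))                    # centered: 1.0 (up to rounding)
```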
Several authors^{[5]}^{[6]} have offered guidelines for the interpretation of a correlation coefficient. However, all such criteria are in some ways arbitrary and should not be observed too strictly.^{[6]} The interpretation of a correlation coefficient depends on the context and purposes. A correlation of 0.8 may be very low if one is verifying a physical law using high-quality instruments, but may be regarded as very high in the social sciences where there may be a greater contribution from complicating factors.
A distance metric for two variables X and Y known as Pearson's distance can be defined from their correlation coefficient as^{[7]}

d_{X,Y} = 1 − ρ_{X,Y}
Since the Pearson correlation coefficient falls in [−1, 1], the Pearson distance lies in [0, 2].
Statistical inference based on Pearson's correlation coefficient often focuses on one of the following two aims: one aim is to test the null hypothesis that the true correlation coefficient ρ is equal to 0, based on the value of the sample correlation coefficient r; the other aim is to construct a confidence interval around r that has a given probability of containing ρ.
We discuss methods of achieving one or both of these aims below.
Permutation tests provide a direct approach to performing hypothesis tests and constructing confidence intervals. A permutation test for Pearson's correlation coefficient involves the following two steps: (1) using the original paired data (x_{i}, y_{i}), randomly redefine the pairs to create a new data set (x_{i}, y_{i′}), where the i′ are a permutation of the set {1, ..., n}, selected randomly with equal probability placed on all n! possible permutations; (2) construct a correlation coefficient r from the randomized data.
To perform the permutation test, repeat steps (1) and (2) a large number of times. The p-value for the permutation test is the proportion of the r values generated in step (2) that are larger than the Pearson correlation coefficient that was calculated from the original data. Here "larger" can mean either that the value is larger in magnitude, or larger in signed value, depending on whether a two-sided or one-sided test is desired.
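A sketch of the two-sided permutation test in Python, assuming a pearson_r(x, y) function such as the one defined earlier:

```python
# Permutation test as described above: shuffle the pairing, recompute r,
# and count how often the permuted |r| reaches the observed |r|.
import random

def permutation_test(x, y, n_perm=10000, seed=0):
    rng = random.Random(seed)
    r_obs = pearson_r(x, y)
    y_perm = list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)              # step (1): randomize the pairing
        r = pearson_r(x, y_perm)         # step (2): recompute r
        if abs(r) >= abs(r_obs):         # two-sided: compare magnitudes
            count += 1
    return count / n_perm                # approximate p-value
```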
The bootstrap can be used to construct confidence intervals for Pearson's correlation coefficient. In the "nonparametric" bootstrap, n pairs (x_{i}, y_{i}) are resampled "with replacement" from the observed set of n pairs, and the correlation coefficient r is calculated based on the resampled data. This process is repeated a large number of times, and the empirical distribution of the resampled r values is used to approximate the sampling distribution of the statistic. A 95% confidence interval for ρ can be defined as the interval spanning from the 2.5^{th} to the 97.5^{th} percentile of the resampled r values.
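A sketch of this nonparametric bootstrap interval, again assuming the pearson_r helper defined earlier:

```python
# Bootstrap confidence interval: resample pairs with replacement,
# recompute r, and take the 2.5th and 97.5th percentiles.
import random

def bootstrap_ci(x, y, n_boot=10000, seed=0):
    rng = random.Random(seed)
    n = len(x)
    rs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample pairs
        rs.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    rs.sort()
    # Empirical 2.5th and 97.5th percentiles of the resampled r values.
    return rs[int(0.025 * n_boot)], rs[int(0.975 * n_boot)]
```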
For pairs from an uncorrelated bivariate normal distribution, the sampling distribution of a certain function of Pearson's correlation coefficient follows Student's t-distribution with degrees of freedom n − 2. Specifically, if the underlying variables have a bivariate normal distribution, the variable

t = r √( (n − 2) / (1 − r^{2}) )

has a Student's t-distribution in the null case (zero correlation).^{[8]} This also holds approximately even if the observed values are non-normal, provided sample sizes are not very small.^{[9]} For determining the critical values for r the inverse of this transformation is also needed:

r = t / √(n − 2 + t^{2})
Alternatively, large sample approaches can be used.
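A sketch of the t-based test and its inverse, using SciPy for the t-distribution (the function names r_to_p and critical_r are ours):

```python
# Testing rho = 0 via the t transformation above; illustrative only.
import math
from scipy import stats

def r_to_p(r, n):
    t = r * math.sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)     # two-sided p-value

def critical_r(alpha, n):
    # Inverse transformation: critical |r| for a two-sided level-alpha test.
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return t / math.sqrt(n - 2 + t ** 2)
```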
Early work on the distribution of the sample correlation coefficient was carried out by R. A. Fisher^{[10]}^{[11]} and A. K. Gayen.^{[12]} Another early paper^{[13]} provides graphs and tables for general values of ρ, for small sample sizes, and discusses computational approaches.
For data that follows a bivariate normal distribution, the exact density function for the sample correlation of a normal bivariate is^{[14]}^{[15]}

f(r) = ( (n − 2) Γ(n − 1) (1 − ρ^{2})^{(n − 1)/2} (1 − r^{2})^{(n − 4)/2} ) / ( √(2π) Γ(n − 1/2) (1 − ρr)^{n − 3/2} ) · ₂F₁(1/2, 1/2; (2n − 1)/2; (ρr + 1)/2)

where Γ is the gamma function and ₂F₁(a, b; c; z) is the Gaussian hypergeometric function. In the special case when ρ = 0, the density can be written as:

f(r) = (1 − r^{2})^{(n − 4)/2} / B(1/2, (n − 2)/2)

where B is the beta function, which is one way of writing the density of a Student's t-distribution, as above.
Note that^{[16]} E(r) = ρ − ρ(1 − ρ^{2})/(2(n − 1)) + ⋯, therefore r is a biased estimator of ρ. The unique minimum variance unbiased estimator is given by^{[17]}

r_{adj} = r · ₂F₁(1/2, 1/2; (n − 1)/2; 1 − r^{2}).

An approximately unbiased estimator can be obtained by truncating the previously mentioned series for E(r) and solving the equation r = E(r) for ρ. However, the solution, r_{adj} ≈ r[1 + (1 − r^{2})/(2(n − 1))],^{[citation needed]} is suboptimal.^{[citation needed]} An approximately unbiased estimator,^{[citation needed]} with minimum variance for large values of n, with a bias of order O(1/(n − 1)), can be obtained by maximizing log f(r | ρ), i.e. r_{adj} = argmax_{ρ} log f(r | ρ).^{[citation needed]}
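The minimum variance unbiased adjustment quoted above can be evaluated with SciPy's Gaussian hypergeometric function; a sketch:

```python
# Minimum variance unbiased (Olkin-Pratt) adjustment of r, as given above.
from scipy.special import hyp2f1

def r_adjusted(r, n):
    return r * hyp2f1(0.5, 0.5, (n - 1) / 2, 1 - r ** 2)
```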
In practice, confidence intervals and hypothesis tests relating to ρ are usually carried out using the Fisher transformation:

F(r) = (1/2) ln( (1 + r)/(1 − r) ) = artanh(r)

If F(r) is the Fisher transformation of r, and n is the sample size, then F(r) approximately follows a normal distribution with

mean F(ρ) = artanh(ρ)  and standard error SE = 1/√(n − 3).
Thus, a z-score is

z = (F(r) − F(ρ₀)) / SE = (artanh(r) − artanh(ρ₀)) √(n − 3)

under the null hypothesis that ρ = ρ₀, given the assumption that the sample pairs are independent and identically distributed and follow a bivariate normal distribution. Thus an approximate p-value can be obtained from a normal probability table. For example, if z = 2.2 is observed and a two-sided p-value is desired to test the null hypothesis that ρ = 0, the p-value is 2·Φ(−2.2) = 0.028, where Φ is the standard normal cumulative distribution function.
To obtain a confidence interval for ρ, we first compute a confidence interval for F(ρ):

artanh(ρ) ∈ [ artanh(r) − z_{α/2}/√(n − 3),  artanh(r) + z_{α/2}/√(n − 3) ]

The inverse Fisher transformation brings the interval back to the correlation scale:

ρ ∈ [ tanh(artanh(r) − z_{α/2}/√(n − 3)),  tanh(artanh(r) + z_{α/2}/√(n − 3)) ]
For example, suppose we observe r = 0.3 with a sample size of n=50, and we wish to obtain a 95% confidence interval for ρ. The transformed value is arctanh(r) = 0.30952, so the confidence interval on the transformed scale is 0.30952 ± 1.96/√47, or (0.023624, 0.595415). Converting back to the correlation scale yields (0.024, 0.534).
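The worked example can be reproduced with a few lines of Python:

```python
# 95% confidence interval for rho via the Fisher transformation,
# reproducing the example above.
import math

def fisher_ci(r, n, z_crit=1.96):
    f = math.atanh(r)                    # Fisher transformation
    se = 1 / math.sqrt(n - 3)            # approximate standard error
    lo, hi = f - z_crit * se, f + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back to the correlation scale

print(fisher_ci(0.3, 50))                # approximately (0.024, 0.534)
```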
The square of the sample correlation coefficient is typically denoted r^{2} and called the coefficient of determination; it estimates the fraction of the variance in Y that is explained by X in a simple linear regression. As a starting point, the total variation in the Y_{i} around their average value can be decomposed as follows

Σ_{i} (Y_{i} − Ȳ)^{2} = Σ_{i} (Y_{i} − Ŷ_{i})^{2} + Σ_{i} (Ŷ_{i} − Ȳ)^{2}

where the Ŷ_{i} are the fitted values from the regression analysis. This can be rearranged to give

1 = Σ_{i} (Y_{i} − Ŷ_{i})^{2} / Σ_{i} (Y_{i} − Ȳ)^{2} + Σ_{i} (Ŷ_{i} − Ȳ)^{2} / Σ_{i} (Y_{i} − Ȳ)^{2}
The two summands above are the fraction of variance in Y that is explained by X (right) and that is unexplained by X (left).
Next, we apply a property of least squares regression models: the sample covariance between the fitted values Ŷ_{i} and the residuals Y_{i} − Ŷ_{i} is zero. Thus, the sample correlation coefficient between the observed and fitted response values in the regression can be written

r(Y, Ŷ) = √( Σ_{i} (Ŷ_{i} − Ȳ)^{2} / Σ_{i} (Y_{i} − Ȳ)^{2} )

Thus

r(Y, Ŷ)^{2} = Σ_{i} (Ŷ_{i} − Ȳ)^{2} / Σ_{i} (Y_{i} − Ȳ)^{2}
is the proportion of variance in Y explained by a linear function of X.
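The decomposition can be verified numerically; a sketch (the helper name r_squared_check is ours):

```python
# Illustrative check that r**2 equals the explained fraction of
# variance in a simple linear regression, using the decomposition above.
def r_squared_check(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b1 = sxy / sxx                       # least-squares slope
    b0 = my - b1 * mx                    # least-squares intercept
    y_hat = [b0 + b1 * a for a in x]     # fitted values
    ss_tot = sum((b - my) ** 2 for b in y)
    ss_reg = sum((f - my) ** 2 for f in y_hat)
    return ss_reg / ss_tot               # equals pearson_r(x, y) ** 2
```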
The population Pearson correlation coefficient is defined in terms of moments, and therefore exists for any bivariate probability distribution for which the population covariance is defined and the marginal population variances are defined and are nonzero. Some probability distributions such as the Cauchy distribution have undefined variance and hence ρ is not defined if X or Y follows such a distribution. In some practical applications, such as those involving data suspected to follow a heavytailed distribution, this is an important consideration. However, the existence of the correlation coefficient is usually not a concern; for instance, if the range of the distribution is bounded, ρ is always defined.
In the case of the bivariate normal distribution, the sample correlation coefficient is the maximum likelihood estimate of the population correlation coefficient, and is asymptotically unbiased and efficient, which roughly means that it is impossible to construct a more accurate estimate than the sample correlation coefficient if the data are normal and the sample size is moderate or large. For nonnormal populations, the sample correlation coefficient remains approximately unbiased, but may not be efficient. The sample correlation coefficient is a consistent estimator of the population correlation coefficient as long as the sample means, variances, and covariance are consistent (which is guaranteed when the law of large numbers can be applied).
Like many commonly used statistics, the sample statistic r is not robust,^{[18]} so its value can be misleading if outliers are present.^{[19]}^{[20]} Specifically, the PMCC is neither distributionally robust,^{[citation needed]} nor outlier resistant^{[18]} (see Robust statistics#Definition). Inspection of the scatterplot between X and Y will typically reveal a situation where lack of robustness might be an issue, and in such cases it may be advisable to use a robust measure of association. Note however that while most robust estimators of association measure statistical dependence in some way, they are generally not interpretable on the same scale as the Pearson correlation coefficient.
Statistical inference for Pearson's correlation coefficient is sensitive to the data distribution. Exact tests, and asymptotic tests based on the Fisher transformation can be applied if the data are approximately normally distributed, but may be misleading otherwise. In some situations, the bootstrap can be applied to construct confidence intervals, and permutation tests can be applied to carry out hypothesis tests. These nonparametric approaches may give more meaningful results in some situations where bivariate normality does not hold. However the standard versions of these approaches rely on exchangeability of the data, meaning that there is no ordering or grouping of the data pairs being analyzed that might affect the behavior of the correlation estimate.
A stratified analysis is one way to either accommodate a lack of bivariate normality, or to isolate the correlation resulting from one factor while controlling for another. If W represents cluster membership or another factor that it is desirable to control for, we can stratify the data based on the value of W, then calculate a correlation coefficient within each stratum. The stratum-level estimates can then be combined to estimate the overall correlation while controlling for W.^{[21]}
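One possible sketch of such an analysis in Python. The pooling rule used here (a weighted average of Fisher-transformed stratum estimates) is a common choice, not one specified by the article; it assumes the pearson_r helper defined earlier:

```python
# Stratified correlation: compute r within each stratum of W, then pool
# the Fisher-transformed estimates weighted by n_k - 3 (one common rule).
import math
from collections import defaultdict

def stratified_corr(x, y, w):
    strata = defaultdict(list)
    for xi, yi, wi in zip(x, y, w):
        strata[wi].append((xi, yi))
    num = den = 0.0
    for pairs in strata.values():
        xs, ys = zip(*pairs)
        k = len(xs) - 3                  # weight for a stratum of size n_k
        num += k * math.atanh(pearson_r(xs, ys))
        den += k
    return math.tanh(num / den)          # pooled estimate, controlling for W
```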
Suppose observations to be correlated have differing degrees of importance that can be expressed with a weight vector w. To calculate the correlation between vectors x and y with the weight vector w (all of length n),^{[22]}^{[23]} first define the weighted mean:

m(x; w) = Σ_{i} w_{i} x_{i} / Σ_{i} w_{i}

the weighted covariance:

cov(x, y; w) = Σ_{i} w_{i} (x_{i} − m(x; w))(y_{i} − m(y; w)) / Σ_{i} w_{i}

and the weighted correlation:

corr(x, y; w) = cov(x, y; w) / √( cov(x, x; w) cov(y, y; w) )
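A direct Python transcription of these definitions, as a sketch:

```python
# Weighted correlation as defined above.
def weighted_corr(x, y, w):
    def w_mean(v):
        return sum(wi * vi for wi, vi in zip(w, v)) / sum(w)
    def w_cov(u, v):
        mu, mv = w_mean(u), w_mean(v)
        return sum(wi * (ui - mu) * (vi - mv)
                   for wi, ui, vi in zip(w, u, v)) / sum(w)
    return w_cov(x, y) / (w_cov(x, x) * w_cov(y, y)) ** 0.5
```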
It is always possible to remove the correlation between random variables with a linear transformation, even if the relationship between the variables is nonlinear. A presentation of this result for population distributions is given by Cox & Hinkley.^{[24]}
A corresponding result exists for sample correlations, in which the sample correlation is reduced to zero. Suppose a vector of n random variables is sampled m times. Let X be a matrix where X_{i,j} is the jth variable of sample i. Let Z_{m,m} be an m by m square matrix with every element 1. Then

D = X − (1/m) Z_{m,m} X

is the data transformed so every random variable has zero mean, and

T = D (D^{T} D)^{−1/2}

is the data transformed so all variables have zero mean and zero correlation with all other variables – the sample covariance matrix of T will be the identity matrix, where an exponent of −1/2 represents the matrix square root of the inverse of a matrix. This has to be further divided by the standard deviation to get unit variance. The transformed variables will be uncorrelated, even though they may not be independent. If a new data sample x is a row vector of n elements, then the same transform can be applied to x to get the transformed vectors d and t:

d = x − (1/m) Z_{1,m} X,  t = d (D^{T} D)^{−1/2}

where Z_{1,m} is a 1 by m row vector of ones.
This decorrelation is related to principal components analysis for multivariate data.
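A NumPy sketch of the decorrelating transform, computing the inverse matrix square root via an eigendecomposition (assumes D has full column rank):

```python
# Decorrelation as above: subtract column means, then right-multiply by
# the inverse matrix square root of D^T D so that T^T T is the identity.
import numpy as np

def decorrelate(X):
    D = X - X.mean(axis=0)               # zero-mean data (X - Z X / m)
    G = D.T @ D                          # Gram matrix of the centered data
    vals, vecs = np.linalg.eigh(G)       # assumes G is positive definite
    G_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return D @ G_inv_sqrt                # T, with T^T T = identity
```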
The reflective correlation is a variant of Pearson's correlation in which the data are not centered around their mean values.^{[citation needed]} The population reflective correlation is

corr_{r}(X, Y) = E[XY] / √( E[X^{2}] E[Y^{2}] )
The reflective correlation is symmetric, but it is not invariant under translation:

corr_{r}(X, Y) = corr_{r}(Y, X) = corr_{r}(X, bY) ≠ corr_{r}(X, a + bY),  a ≠ 0, b > 0
The sample reflective correlation is

rr_{xy} = Σ x_{i} y_{i} / √( Σ x_{i}^{2} Σ y_{i}^{2} )
The weighted version of the sample reflective correlation is

rr_{xy,w} = Σ w_{i} x_{i} y_{i} / √( Σ w_{i} x_{i}^{2} Σ w_{i} y_{i}^{2} )
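Both sample versions transcribe directly into Python; a sketch:

```python
# Sample reflective correlation (uncentered) and its weighted version,
# as defined above.
def reflective_corr(x, y):
    num = sum(a * b for a, b in zip(x, y))
    den = (sum(a * a for a in x) * sum(b * b for b in y)) ** 0.5
    return num / den

def weighted_reflective_corr(x, y, w):
    num = sum(wi * a * b for wi, a, b in zip(w, x, y))
    den = (sum(wi * a * a for wi, a in zip(w, x)) *
           sum(wi * b * b for wi, b in zip(w, y))) ** 0.5
    return num / den
```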
Scaled correlation is a variant of Pearson's correlation in which the range of the data is restricted intentionally and in a controlled manner to reveal correlations between fast components in time series.^{[25]} Scaled correlation is defined as average correlation across short segments of data.
Let K be the number of segments that can fit into the total length T of the signal for a given scale s:

K = round(T / s)
The scaled correlation r̄_{s} across the entire signal is then computed as

r̄_{s} = (1/K) Σ_{k=1}^{K} r_{k}

where r_{k} is Pearson's coefficient of correlation for segment k.
By choosing the parameter s, the range of values is reduced and the correlations on long time scales are filtered out, only the correlations on short time scales being revealed. Thus, the contributions of slow components are removed and those of fast components are retained.
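A sketch of scaled correlation in Python, assuming the pearson_r helper defined earlier:

```python
# Scaled correlation: average Pearson r over non-overlapping segments
# of length s. The article defines K = round(T / s); we use the number
# of full segments so every segment has exactly s points.
def scaled_correlation(x, y, s):
    K = len(x) // s                      # number of full segments
    rs = []
    for k in range(K):
        seg = slice(k * s, (k + 1) * s)
        rs.append(pearson_r(x[seg], y[seg]))
    return sum(rs) / K                   # average over segments
```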
Under heavy noise conditions, extracting the correlation coefficient between two sets of stochastic variables is nontrivial, in particular where Canonical Correlation Analysis reports degraded correlation values due to the heavy noise contributions. A generalization of the approach is given elsewhere.^{[26]}