
In statistics, an effect size is a quantitative measure of the strength of a phenomenon.^{[1]} Effect sizes always compare two variables, but can do so in different ways. First, an effect size can simply indicate the correlation between two variables. Second, an effect size can indicate which of two predictor variables is a stronger predictor of a third, outcome variable; this is similar to comparing the averages of two groups. Third, an effect size can indicate the rate or risk at which something happens, such as how many people survive a heart attack for every one person who does not. For each type of effect size, a larger absolute value always indicates a stronger effect. Effect sizes complement statistical hypothesis testing, and play an important role in statistical power analyses and in meta-analyses.
Especially in meta-analysis, where the purpose is to combine multiple effect sizes, the standard error (S.E.) of the effect size is of critical importance. The S.E. of the effect size is used to weight effect sizes when combining studies, so that large studies count for more than small studies in the analysis. The S.E. of the effect size is calculated differently for each type of effect size, but generally requires only the study's sample size (N) or the number of observations in each group (n's).
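For instance, inverse-variance weighting in a fixed-effect meta-analysis can be sketched in a few lines of Python (the effect sizes and standard errors below are hypothetical):

```python
import numpy as np

# Hypothetical effect sizes (e.g., Cohen's d) from three studies
# and their standard errors; a smaller SE usually reflects a larger N.
effects = np.array([0.30, 0.45, 0.12])
se = np.array([0.10, 0.25, 0.08])

# Fixed-effect meta-analysis: weight each study by the inverse of
# its effect-size variance, so precise (large) studies count more.
weights = 1.0 / se**2
combined = np.sum(weights * effects) / np.sum(weights)
combined_se = np.sqrt(1.0 / np.sum(weights))

print(f"combined effect = {combined:.3f} (SE = {combined_se:.3f})")
```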
Reporting effect sizes is considered good practice when presenting empirical research findings in many fields.^{[2]}^{[3]} The reporting of effect sizes facilitates the interpretation of the substantive, as opposed to the statistical, significance of a research result.^{[4]} Effect sizes are particularly prominent in social and medical research. Relative and absolute measures of effect size convey different information, and can be used complementarily. A prominent task force in the psychology research community expressed the following recommendation:
Always present effect sizes for primary outcomes...If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d).
— L. Wilkinson and APA Task Force on Statistical Inference (1999, p. 599)
The term effect size can refer to the value of a statistic calculated from a sample of data, the value of a parameter of a hypothetical statistical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value.^{[1]} Conventions for distinguishing sample from population effect sizes follow standard statistical practice: one common approach is to use Greek letters like ρ to denote population parameters and Latin letters like r to denote the corresponding statistic; alternatively, a "hat" can be placed over the population parameter to denote the statistic, e.g. with $\hat{\rho}$ being the estimate of the parameter $\rho$.
As in any statistical setting, effect sizes are estimated with sampling error, and may be biased unless the effect size estimator that is used is appropriate for the manner in which the data were sampled and the manner in which the measurements were made. An example of this is publication bias, which occurs when scientists only report results when the estimated effect sizes are large or are statistically significant. As a result, if many researchers are carrying out studies under low statistical power, the reported results are biased to be stronger than true effects, if any.^{[5]} Another example where effect sizes may be distorted is in a multiple trial experiment, where the effect size calculation is based on the averaged or aggregated response across the trials.^{[6]}
Sample-based effect sizes are distinguished from test statistics used in hypothesis testing in that they estimate the strength (magnitude) of, for example, an apparent relationship, rather than assigning a significance level reflecting whether the magnitude of the relationship observed could be due to chance. The effect size does not directly determine the significance level, or vice versa. Given a sufficiently large sample size, a non-null statistical comparison will always show a statistically significant result unless the population effect size is exactly zero (and even then it will show statistical significance at the rate of the Type I error used). For example, a sample Pearson correlation coefficient of 0.01 becomes statistically significant once the sample is very large (roughly 40,000 observations at the 5% level). Reporting only the significant p-value from such an analysis could be misleading if a correlation of 0.01 is too small to be of interest in a particular application.
The term effect size can refer to a standardized measure of effect (such as r, Cohen's d, or the odds ratio), or to an unstandardized measure (e.g., the raw difference between group means or an unstandardized regression coefficient). Standardized effect size measures are typically used when the metrics of the variables being studied have no intrinsic meaning (e.g., a score on a personality test on an arbitrary scale), when results from multiple studies are being combined, when some or all of the studies use different scales, or when it is desired to convey the size of an effect relative to the variability in the population. In meta-analyses, standardized effect sizes are used as a common measure that can be calculated for different studies and then combined into an overall summary.
These effect sizes estimate the amount of the variance within an experiment that is "explained" or "accounted for" by the experiment's model.
Pearson's correlation, often denoted r and introduced by Karl Pearson, is widely used as an effect size when paired quantitative data are available; for instance if one were studying the relationship between birth weight and longevity. The correlation coefficient can also be used when the data are binary. Pearson's r can vary in magnitude from −1 to 1, with −1 indicating a perfect negative linear relation, 1 indicating a perfect positive linear relation, and 0 indicating no linear relation between two variables. Cohen gives the following guidelines for the social sciences:^{[7]}^{[8]}
Effect size   r
Small         0.10
Medium        0.30
Large         0.50
A related effect size is r², the coefficient of determination (also referred to as "r-squared"), calculated as the square of the Pearson correlation r. In the case of paired data, this is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. For example, with an r of 0.21 the coefficient of determination is 0.0441, meaning that 4.4% of the variance of either variable is shared with the other variable. The r² is always positive, so it does not convey the direction of the correlation between the two variables.
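A minimal Python illustration of r and r² on hypothetical paired data:

```python
import numpy as np

# Hypothetical paired data, e.g. birth weight (kg) and longevity (years)
x = np.array([2.9, 3.1, 3.4, 2.7, 3.8, 3.0, 3.5, 3.2])
y = np.array([71.0, 74.0, 78.0, 70.0, 81.0, 73.0, 77.0, 75.0])

r = np.corrcoef(x, y)[0, 1]  # Pearson correlation coefficient
r2 = r**2                    # coefficient of determination (shared variance)

print(f"r = {r:.2f}, r^2 = {r2:.2f}")
```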
Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors, making it analogous to r²; in a one-way design it is simply $\eta^2 = SS_{\text{treatment}} / SS_{\text{total}}$. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample). It shares with r² the weakness that each additional variable automatically increases the value of η². Moreover, because it measures the variance explained in the sample rather than in the population, it always overestimates the population effect size, although the bias grows smaller as the sample grows larger.
A less biased estimator of the variance explained in the population is ω^{2}^{[9]}^{[10]}^{[11]}
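For a one-way, between-subjects design, this estimator is commonly written as:

$$\omega^2 = \frac{SS_{\text{treatment}} - df_{\text{treatment}} \cdot MS_{\text{error}}}{SS_{\text{total}} + MS_{\text{error}}}$$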
This form of the formula is limited to between-subjects analysis with equal sample sizes in all cells.^{[11]} Since it is less biased (although not unbiased), ω² is preferable to η²; however, it can be more inconvenient to calculate for complex analyses. A generalized form of the estimator has been published for between-subjects and within-subjects analysis, repeated measures, mixed designs, and randomized block design experiments.^{[12]} In addition, methods to calculate partial ω² for individual factors and combined factors in designs with up to three independent variables have been published.^{[12]}
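As an illustration, the following Python sketch computes both η² and ω² from raw group data via the standard one-way ANOVA decomposition (the function name is ours):

```python
import numpy as np

def eta_and_omega_squared(groups):
    """Eta-squared and omega-squared for a one-way, between-subjects ANOVA."""
    data = np.concatenate(groups)
    grand_mean = data.mean()
    n_total, k = data.size, len(groups)

    ss_treatment = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = np.sum((data - grand_mean) ** 2)
    ms_error = (ss_total - ss_treatment) / (n_total - k)

    eta2 = ss_treatment / ss_total
    omega2 = (ss_treatment - (k - 1) * ms_error) / (ss_total + ms_error)
    return eta2, omega2

groups = [np.array([4.1, 5.0, 4.7]), np.array([5.9, 6.3, 5.5]),
          np.array([4.9, 5.1, 5.3])]  # equal cell sizes, as the formula requires
print(eta_and_omega_squared(groups))  # omega2 comes out smaller than eta2
```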
Cohen's ƒ^{2} is one of several effect size measures to use in the context of an Ftest for ANOVA or multiple regression. Its amount of bias (overestimation of the effect size for the ANOVA) depends on the bias of its underlying measurement of variance explained (e.g., R^{2}, η^{2}, ω^{2}).
The ƒ² effect size measure for multiple regression is defined as $f^2 = \frac{R^2}{1 - R^2}$, where R² is the squared multiple correlation.
Likewise, ƒ² can be defined in terms of variance explained as $f^2 = \frac{\eta^2}{1 - \eta^2}$.
The effect size measure for hierarchical multiple regression is defined as $f^2 = \frac{R^2_{AB} - R^2_A}{1 - R^2_{AB}}$, where $R^2_A$ is the variance accounted for by a set A of independent variables and $R^2_{AB}$ is the combined variance accounted for by A and another set B.
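These definitions translate directly into code; a minimal Python sketch with hypothetical R² values:

```python
def cohens_f2(r2):
    """Cohen's f^2 from a squared multiple correlation R^2."""
    return r2 / (1.0 - r2)

def cohens_f2_hierarchical(r2_ab, r2_a):
    """f^2 for the increment of variable set B over set A."""
    return (r2_ab - r2_a) / (1.0 - r2_ab)

print(cohens_f2(0.20))                     # f^2 = 0.25
print(cohens_f2_hierarchical(0.30, 0.20))  # f^2 ~ 0.14 for the added set
```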
Cohen's $\hat f$ can also be found for factorial analysis of variance (ANOVA, aka the F-test) working backwards, using $\hat f_{\text{effect}} = \sqrt{F_{\text{effect}} \, df_{\text{effect}} / N}$.
In a balanced design (equivalent sample sizes across groups) of ANOVA, the corresponding population parameter of $f^2$ is $SS(\mu_1, \mu_2, \dots, \mu_K) / (K \cdot \sigma^2)$, wherein μ_{j} denotes the population mean within the j^{th} group of the total K groups, σ the equivalent population standard deviation within each group, and SS the sum of squares in ANOVA.
A (population) effect size θ based on means usually considers the standardized mean difference between two populations:^{[14]}^{:78} $\theta = \frac{\mu_1 - \mu_2}{\sigma},$
where μ_{1} is the mean for one population, μ_{2} is the mean for the other population, and σ is a standard deviation based on either or both populations.
In the practical setting the population values are typically not known and must be estimated from sample statistics. The several versions of effect sizes based on means differ with respect to which statistics are used.
This form for the effect size resembles the computation for a t-test statistic, with the critical difference that the t-test statistic includes a factor of $\sqrt{n}$. This means that for a given effect size, the significance level increases with the sample size. Unlike the t-test statistic, the effect size aims to estimate a population parameter and is not affected by the sample size.
Cohen's d is defined as the difference between two means divided by a standard deviation for the data, i.e. $d = \frac{\bar{x}_1 - \bar{x}_2}{s}.$
Jacob Cohen defined s, the pooled standard deviation, as (for two independent samples):^{[7]}^{:67} $s = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}},$
where the variance for one of the groups is defined as $s_1^2 = \frac{1}{n_1-1} \sum_{i=1}^{n_1} (x_{1,i} - \bar{x}_1)^2,$
and similarly for the other group. Other authors choose a slightly different computation of the standard deviation when referring to "Cohen's d", with denominator $n_1 + n_2$, i.e. without the "−2":^{[15]}^{[16]}^{:14} $s = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2}}.$
This definition of "Cohen's d" is termed the maximum likelihood estimator by Hedges and Olkin,^{[14]} and it is related to Hedges' g by a scaling factor (see below).
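A short Python sketch of Cohen's d with the pooled standard deviation defined above (the function name and data are ours):

```python
import numpy as np

def cohens_d(x1, x2):
    """Cohen's d for two independent samples, pooled SD with n1 + n2 - 2."""
    n1, n2 = len(x1), len(x2)
    s_pooled = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) +
                        (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(x1) - np.mean(x2)) / s_pooled

# Hypothetical heights (cm) for two small samples
men = np.array([178.0, 172.5, 181.3, 175.8, 176.4])
women = np.array([164.2, 160.8, 166.5, 162.0, 163.7])
print(f"d = {cohens_d(men, women):.2f}")
```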
As an example, consider the heights of men and women in the UK, using data from Aaron, Kromrey, & Ferron (1998, November; based on a 2004 UK representative sample of 2436 men and 3311 women). The effect size (using Cohen's d) equals 1.72 (95% confidence interval: 1.66 to 1.78). This is very large, and one should have no problem detecting a consistent average height difference between men and women.
In 1976 Gene V. Glass proposed an estimator of the effect size that uses only the standard deviation of the second group:^{[14]}^{:78} $\Delta = \frac{\bar{x}_1 - \bar{x}_2}{s_2}.$
The second group may be regarded as a control group, and Glass argued that if several treatments were compared to the control group it would be better to use just the standard deviation computed from the control group, so that effect sizes would not differ under equal means and different variances.
Under a correct assumption of equal population variances a pooled estimate for σ is more precise.
Hedges' g, suggested by Larry Hedges in 1981,^{[17]} is like the other measures based on a standardized difference:^{[14]}^{:79} $g = \frac{\bar{x}_1 - \bar{x}_2}{s^*},$
where the pooled standard deviation $s^*$ is computed as $s^* = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}}.$
However, as an estimator for the population effect size θ it is biased. Nevertheless, this bias can be approximately corrected through multiplication by a factor: $g^* = J(n_1+n_2-2)\, g \approx \left(1 - \frac{3}{4(n_1+n_2)-9}\right) g.$
Hedges and Olkin refer to this less-biased estimator as d,^{[14]} but it is not the same as Cohen's d. The exact form for the correction factor J(a) involves the gamma function:^{[14]}^{:104} $J(a) = \frac{\Gamma(a/2)}{\sqrt{a/2}\;\Gamma((a-1)/2)}.$
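The correction is easy to compute numerically; a Python sketch using the log-gamma function for stability (the J(a) form above is assumed):

```python
import math
import numpy as np

def hedges_g(x1, x2, corrected=True):
    """Hedges' g, optionally multiplied by the exact correction factor J(a)."""
    n1, n2 = len(x1), len(x2)
    s_pooled = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) +
                        (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
    g = (np.mean(x1) - np.mean(x2)) / s_pooled
    if corrected:
        a = n1 + n2 - 2
        # J(a) = Gamma(a/2) / (sqrt(a/2) * Gamma((a-1)/2))
        j = math.exp(math.lgamma(a / 2) - math.lgamma((a - 1) / 2)) / math.sqrt(a / 2)
        g *= j
    return g

x1 = [24.0, 25.1, 26.3, 23.7, 25.5]
x2 = [21.9, 22.6, 23.1, 21.0, 22.4]
print(hedges_g(x1, x2))  # slightly smaller than the uncorrected g
```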
A similar effect size estimator for multiple comparisons (e.g., ANOVA) is the Ψ root-mean-square standardized effect.^{[13]} This essentially presents the omnibus difference of the entire model, adjusted by the root mean square, analogous to d or g. The simplest formula for Ψ, suitable for one-way ANOVA, is $\Psi = \sqrt{\frac{1}{k-1} \sum_{j=1}^{k} \left(\frac{\mu_j - \bar{\mu}}{\sigma}\right)^2}.$
In addition, a generalization for multifactorial designs has been provided.^{[13]}
Provided that the data are Gaussian distributed, a scaled Hedges' g, $\sqrt{n_1 n_2/(n_1+n_2)}\;g$, follows a noncentral t-distribution with noncentrality parameter $\sqrt{n_1 n_2/(n_1+n_2)}\;\theta$ and (n_{1} + n_{2} − 2) degrees of freedom. Likewise, the scaled Glass' Δ is distributed with n_{2} − 1 degrees of freedom.
From the distribution it is possible to compute the expectation and variance of the effect sizes.
In some cases large-sample approximations for the variance are used. One suggestion for the variance of Hedges' unbiased estimator is^{[14]}^{:86} $\hat{\sigma}^2(g^*) = \frac{n_1+n_2}{n_1 n_2} + \frac{(g^*)^2}{2(n_1+n_2-2)}.$


Phi (φ): $\varphi = \sqrt{\chi^2 / N}$
Cramér's V (φ_c): $\varphi_c = \sqrt{\chi^2 / (N(k-1))}$

Commonly used measures of association for the chi-squared test are the Phi coefficient and Cramér's V (sometimes referred to as Cramér's phi and denoted as φ_{c}). Phi is related to the point-biserial correlation coefficient and Cohen's d, and estimates the extent of the relationship between two variables in a 2 × 2 table.^{[18]} Cramér's V may be used with variables having more than two levels.
Phi can be computed by finding the square root of the chi-squared statistic divided by the sample size. Similarly, Cramér's V is computed by taking the square root of the chi-squared statistic divided by the product of the sample size and one less than the length of the minimum dimension (k is the smaller of the number of rows r or columns c).
φ_{c} is the intercorrelation of the two discrete variables^{[19]} and may be computed for any value of r or c. However, as chi-squared values tend to increase with the number of cells, the greater the difference between r and c, the more likely it is that V will tend to 1 without strong evidence of a meaningful correlation.
Cramér's V may also be applied to "goodness of fit" chi-squared models (i.e. those where c = 1). In this case it functions as a measure of tendency towards a single outcome (i.e. out of k outcomes). In such a case one must use r for k in order to preserve the 0-to-1 range of V; otherwise, using c would reduce the equation to that for Phi.
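A brief Python sketch, computing both measures from a hypothetical 2 × 2 contingency table with scipy:

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 10],   # hypothetical 2 x 2 contingency table
                  [5, 25]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()

phi = np.sqrt(chi2 / n)               # suitable for 2 x 2 tables
k = min(table.shape)                  # smaller of rows and columns
cramers_v = np.sqrt(chi2 / (n * (k - 1)))

print(f"phi = {phi:.2f}, Cramer's V = {cramers_v:.2f}")
```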
The odds ratio (OR) is another useful effect size. It is appropriate when the research question focuses on the degree of association between two binary variables. For example, consider a study of spelling ability. In a control group, two students pass the class for every one who fails, so the odds of passing are two to one (or 2/1 = 2). In the treatment group, six students pass for every one who fails, so the odds of passing are six to one (or 6/1 = 6). The effect size can be computed by noting that the odds of passing in the treatment group are three times higher than in the control group (because 6 divided by 2 is 3). Therefore, the odds ratio is 3. Odds ratio statistics are on a different scale than Cohen's d, so this '3' is not comparable to a Cohen's d of 3.
The relative risk (RR), also called the risk ratio, is simply the risk (probability) of an event relative to some independent variable. This measure of effect size differs from the odds ratio in that it compares probabilities instead of odds, but it asymptotically approaches the latter for small probabilities. Using the example above, the probabilities of passing for those in the control and treatment groups are 2/3 (or 0.67) and 6/7 (or 0.86), respectively. The effect size can be computed the same way as above, but using the probabilities instead; the relative risk is therefore (6/7) ÷ (2/3) ≈ 1.29. Since rather large probabilities of passing were used, there is a sizable difference between the relative risk and the odds ratio. Had failure (a smaller probability) been used as the event (rather than passing), the difference between the two measures of effect size would not be so great.
While both measures are useful, they have different statistical uses. In medical research, the odds ratio is commonly used for case-control studies, as odds, but not probabilities, can usually be estimated there.^{[20]} Relative risk is commonly used in randomized controlled trials and cohort studies.^{[21]} When the incidence of the outcome is rare in the study population (generally interpreted to mean less than 10%), the odds ratio is considered a good estimate of the risk ratio. However, as the outcome becomes more common, the odds ratio and risk ratio diverge, with the odds ratio overestimating or underestimating the risk ratio when the estimates are greater than or less than 1, respectively. When estimates of the incidence of the outcome are available, methods exist to convert odds ratios to risk ratios.^{[22]}^{[23]}
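The spelling example can be reproduced in a few lines of Python (the raw counts are hypothetical, chosen to give odds of 2 and 6):

```python
# Control: 20 pass, 10 fail -> odds of passing = 2
# Treatment: 60 pass, 10 fail -> odds of passing = 6
control_pass, control_fail = 20, 10
treat_pass, treat_fail = 60, 10

odds_ratio = (treat_pass / treat_fail) / (control_pass / control_fail)

p_control = control_pass / (control_pass + control_fail)  # 2/3
p_treat = treat_pass / (treat_pass + treat_fail)          # 6/7
relative_risk = p_treat / p_control

print(f"OR = {odds_ratio:.1f}, RR = {relative_risk:.2f}")  # OR = 3.0, RR = 1.29
```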
As the name implies, the common language effect size is designed to communicate the meaning of an effect size in plain English, so that those with little statistics background can grasp the meaning. This effect size was proposed and named by Kenneth McGraw and S. P. Wong (1992),^{[24]} and it is used to describe the difference between two groups.
Kerby (2014) notes that the core concept of the common language effect size is the notion of a pair, defined as a score in group one paired with a score in group two.^{[25]} For example, if a study has ten people in a treatment group and ten people in a control group, then there are 100 pairs. The common language effect size ranks all the scores, compares the pairs, and reports the result in the common language of the percentage of pairs that support the hypothesis.
As an example, consider a treatment for a chronic disease such as arthritis, with the outcome a scale that rates mobility and pain; further consider that there are ten people in the treatment group and ten people in the control group, for a total of 100 pairs. The sample results may be reported as follows: "When a patient in the treatment group was compared with a patient in the control group, in 80 of 100 pairs the treated patient showed a better treatment outcome."
This sample value is an unbiased estimator of the population value.^{[26]} The population value for the common language effect size can be reported in terms of pairs randomly chosen from the population. McGraw and Wong ^{[24]} use the example of heights between men and women, and they describe the population value of the common language effect size as follows: "in any random pairing of young adult males and females, the probability of the male being taller than the female is .92, or in simpler terms yet, in 92 out of 100 blind dates among young adults, the male will be taller than the female" (p. 381).
An effect size related to the common language effect size is the rank-biserial correlation. This measure was introduced by Cureton as an effect size for the Mann-Whitney U test.^{[27]} That is, there are two groups, and scores for the groups have been converted to ranks. The Kerby simple difference formula^{[25]} computes the rank-biserial correlation from the common language effect size. Letting f be the proportion of pairs favorable to the hypothesis (the common language effect size), and letting u be the proportion of pairs not favorable, the rank-biserial r is the simple difference between the two proportions: r = f − u. In other words, the correlation is the difference between the common language effect size and its complement. For example, if the common language effect size is 60%, then the rank-biserial r equals 60% minus 40%, or r = .20. The Kerby formula is directional, with positive values indicating that the results support the hypothesis.
A nondirectional formula for the rank-biserial correlation was provided by Wendt, such that the correlation is always positive.^{[28]} The advantage of the Wendt formula is that it can be computed with information that is readily available in published papers: it uses only the test value of U from the Mann-Whitney U test and the sample sizes of the two groups, r = 1 − 2U/(n_{1} n_{2}).
An example can illustrate the use of the two formulas. Consider a health study of twenty older adults, with ten in the treatment group and ten in the control group; hence, there are ten times ten, or 100, pairs. The health program uses diet, exercise, and supplements to improve memory, and memory is measured by a standardized test. A Mann-Whitney U test shows that the adult in the treatment group had the better memory in 70 of the 100 pairs, and the poorer memory in 30 pairs. The Mann-Whitney U is the smaller of 70 and 30, so U = 30. The correlation between memory and treatment performance by the Kerby simple difference formula is r = (70/100) − (30/100) = 0.40. The correlation by the Wendt formula is r = 1 − (2·30)/(10·10) = 0.40.
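Both calculations can be verified with a short Python sketch; the scores below are fabricated so that exactly 70 of the 100 pairs favor the treatment group:

```python
from itertools import product

treatment = [91, 92, 93, 94, 95, 96, 97, 41, 42, 43]  # 7 high, 3 low scorers
control = [50, 51, 52, 53, 54, 55, 56, 57, 58, 59]

favorable = sum(t > c for t, c in product(treatment, control))    # 70
unfavorable = sum(t < c for t, c in product(treatment, control))  # 30
n_pairs = len(treatment) * len(control)                           # 100

f = favorable / n_pairs          # common language effect size = 0.70
u = unfavorable / n_pairs
r_kerby = f - u                  # Kerby simple difference: 0.40

U = min(favorable, unfavorable)  # Mann-Whitney U = 30 (no ties here)
r_wendt = 1 - 2 * U / n_pairs    # Wendt formula: 0.40
print(f, r_kerby, r_wendt)
```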
Confidence intervals of standardized effect sizes, especially Cohen's d and f, rely on the calculation of confidence intervals of noncentrality parameters (ncp). A common approach to constructing the confidence interval of the ncp is to find the critical ncp values that place the observed statistic at the tail quantiles α/2 and (1 − α/2). SAS and the R package MBESS provide functions to find critical values of the ncp.
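The pivoting idea can be sketched in Python with scipy; this illustrates the approach rather than reproducing the MBESS implementation, and the bracketing interval is a rough assumption:

```python
from scipy.stats import nct
from scipy.optimize import brentq

def ncp_confidence_interval(t_obs, df, alpha=0.05):
    """Find ncp values placing the observed t statistic at the
    1 - alpha/2 and alpha/2 tail quantiles of a noncentral t."""
    lower = brentq(lambda nc: nct.cdf(t_obs, df, nc) - (1 - alpha / 2),
                   t_obs - 10, t_obs + 10)  # crude but wide bracket
    upper = brentq(lambda nc: nct.cdf(t_obs, df, nc) - alpha / 2,
                   t_obs - 10, t_obs + 10)
    return lower, upper

print(ncp_confidence_interval(t_obs=2.5, df=28))
```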
For a single group, M denotes the sample mean, μ the population mean, SD the sample's standard deviation, σ the population's standard deviation, and n the sample size of the group. The t value is used to test the hypothesis on the difference between the mean and a baseline μ_{baseline}; usually, μ_{baseline} is zero. In the case of two related groups, the single group is constructed from the differences within each pair of samples, while SD and σ denote the standard deviations of these differences in the sample and the population, respectively, rather than within the original two groups.
Here
$$t = \frac{M - \mu_{\text{baseline}}}{SD/\sqrt{n}},$$
and Cohen's
$$d = \frac{M - \mu_{\text{baseline}}}{SD}$$
is the point estimate of $(\mu - \mu_{\text{baseline}})/\sigma$. So the noncentrality parameter is $\widetilde{ncp} = \sqrt{n}\,\frac{\mu - \mu_{\text{baseline}}}{\sigma}$.
For two independent groups with respective sample sizes $n_1$ and $n_2$,
$$t = \frac{M_1 - M_2}{SD_{\text{within}}\sqrt{1/n_1 + 1/n_2}},$$
wherein $SD_{\text{within}} = \sqrt{SS_{\text{within}}/(n_1+n_2-2)}$, and Cohen's
$$d = \frac{M_1 - M_2}{SD_{\text{within}}}$$
is the point estimate of $(\mu_1 - \mu_2)/\sigma$. So $\widetilde{ncp} = \sqrt{\frac{n_1 n_2}{n_1+n_2}}\,\frac{\mu_1 - \mu_2}{\sigma}$.
The one-way ANOVA test applies the noncentral F distribution, while with a given population standard deviation σ the same test question applies the noncentral chi-squared distribution.
For each jth sample within the ith group X_{i,j}, denote the group sample mean $M_i = \sum_{w=1}^{n_i} X_{i,w}/n_i$ and the group population mean μ_{i}. Then
$$SS_{\text{between}}/\sigma^2 = \sum_{i=1}^{K} n_i (M_i - M)^2/\sigma^2 \sim \chi^2(df = K-1,\; ncp = \lambda),$$
so the noncentrality parameters of both F and χ² equate to
$$\lambda = \sum_{i=1}^{K} n_i \left(\frac{\mu_i - \bar{\mu}}{\sigma}\right)^2.$$
In the case of $\tilde{n} := n_1 = n_2 = \cdots = n_K$ for K independent groups of the same size, the total sample size is $N := \tilde{n} \cdot K$, and $\lambda = \tilde{n} \sum_{i=1}^{K} \left(\frac{\mu_i - \bar{\mu}}{\sigma}\right)^2 = N f^2$.
The t-test for a pair of independent groups is a special case of one-way ANOVA. Note that the noncentrality parameter $ncp_F$ of F is not comparable to the noncentrality parameter $ncp_t$ of the corresponding t. Actually, $ncp_F = ncp_t^2$, and in this case $f = |d|/2$.
With two paired samples, we look at the distribution of the difference scores. In that case, s is the standard deviation of this distribution of difference scores, which creates the following relationship between the t-statistic to test for a difference in the means of the two groups and Cohen's d: $t = \frac{\bar{X}_1 - \bar{X}_2}{s/\sqrt{n}}$, and thus $d = \frac{t}{\sqrt{n}}$.
Cohen's d is frequently used in estimating sample sizes for statistical testing: a lower Cohen's d requires a larger sample size, and vice versa, and the required sample size can then be determined together with the additional parameters of desired significance level and statistical power.^{[29]}
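For instance, with the statsmodels library one can solve for the per-group sample size of an independent-samples t-test (a sketch assuming α = 0.05 and 80% power):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # Cohen's small / medium / large benchmarks
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: about {n:.0f} participants per group")
```

As expected, the smallest d demands by far the largest groups.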
Some fields using effect sizes apply words such as "small", "medium" and "large" to the size of the effect. Whether an effect size should be interpreted as small, medium, or large depends on its substantive context and its operational definition. Cohen's conventional criteria of small, medium, or large^{[7]} are near ubiquitous across many fields. Power analysis or sample size planning requires an assumed population parameter of effect sizes, and many researchers adopt Cohen's standards as default alternative hypotheses. Russell Lenth criticized them as "T-shirt effect sizes":^{[30]}
This is an elaborate way to arrive at the same sample size that has been used in past social science studies of large, medium, and small size (respectively). The method uses a standardized effect size as the goal. Think about it: for a "medium" effect size, you'll choose the same n regardless of the accuracy or reliability of your instrument, or the narrowness or diversity of your subjects. Clearly, important considerations are being ignored here. "Medium" is definitely not the message!
For Cohen's d, an effect size of 0.2 to 0.3 might be a "small" effect, around 0.5 a "medium" effect, and 0.8 or above a "large" effect^{[7]}^{:25} (d can exceed one).
Cohen's text^{[7]} anticipates Lenth's concerns:
"The terms 'small,' 'medium,' and 'large' are relative, not only to each other, but to the area of behavioral science or even more particularly to the specific content and research method being employed in any given investigation....In the face of this relativity, there is a certain risk inherent in offering conventional operational definitions for these terms for use in power analysis in as diverse a field of inquiry as behavioral science. This risk is nevertheless accepted in the belief that more is to be gained than lost by supplying a common conventional frame of reference which is recommended for use only when no better basis for estimating the ES index is available." (p. 25)
In an ideal world, researchers would interpret the substantive significance of their results by grounding them in a meaningful context or by quantifying their contribution to knowledge. Where this is problematic, Cohen's effect size criteria may serve as a last resort.^{[4]}
A recent U.S. Department of Education sponsored report said, "The widespread indiscriminate use of Cohen's generic small, medium, and large effect size values to characterize effect sizes in domains to which his normative values do not apply is thus likewise inappropriate and misleading."^{[31]} The report went on to recommend other ways of interpreting the effect size.