A t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution if the null hypothesis is supported. It can be used to determine whether two sets of data are significantly different from each other, and is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. When the scaling term is unknown and is replaced by an estimate based on the data, the test statistic (under certain conditions) follows a Student's t-distribution.
The t-statistic was introduced in 1908 by William Sealy Gosset, a chemist working for the Guinness brewery in Dublin, Ireland ("Student" was his pen name).^{[1]}^{[2]}^{[3]}^{[4]} Gosset had been hired due to Claude Guinness's policy of recruiting the best graduates from Oxford and Cambridge to apply biochemistry and statistics to Guinness's industrial processes.^{[2]} Gosset devised the t-test as a cheap way to monitor the quality of stout. The Student's t-test work was submitted to and accepted in the journal Biometrika, which Karl Pearson had co-founded and of which he was the editor-in-chief; the article was published in 1908. Company policy at Guinness forbade its chemists from publishing their findings, so Gosset published his mathematical work under the pseudonym "Student". Guinness had a policy of allowing technical staff leave for study (so-called study leave), which Gosset used during the first two terms of the 1906–1907 academic year in Professor Karl Pearson's Biometric Laboratory at University College London.^{[5]} Gosset's identity was then known to fellow statisticians and to the editor-in-chief Karl Pearson.^{[citation needed]} It is not clear how much of the work Gosset performed while he was at Guinness and how much was done when he was on study leave at University College London.^{[citation needed]}
Among the most frequently used t-tests are:
- A one-sample location test of whether the mean of a population has a value specified in a null hypothesis.
- A two-sample location test of the null hypothesis that the means of two populations are equal.
- A paired difference test of the null hypothesis that the difference between two responses measured on the same statistical unit has a mean value of zero.
- A test of whether the slope of a regression line differs significantly from 0.
Most t-test statistics have the form t = Z/s, where Z and s are functions of the data. Typically, Z is designed to be sensitive to the alternative hypothesis (i.e., its magnitude tends to be larger when the alternative hypothesis is true), whereas s is a scaling parameter that allows the distribution of t to be determined.
As an example, in the one-sample t-test t = Z/s = (X̄ − μ)/(σ̂/√n), where X̄ is the sample mean of the data, n is the sample size, σ̂ is the sample estimate of the population standard deviation, and μ is the population mean under the null hypothesis.
The assumptions underlying a t-test are that
- Z follows a standard normal distribution under the null hypothesis;
- s^{2} follows a χ^{2} distribution with the appropriate number of degrees of freedom under the null hypothesis; and
- Z and s are independent.
In a specific type of t-test, these conditions are consequences of the population being studied, and of the way in which the data are sampled. For example, in the t-test comparing the means of two independent samples, the following assumptions should be met:
- Each of the two populations being compared should follow a normal distribution.
- The two populations being compared should have the same variance (when using the original Student's test; Welch's t-test, discussed below, relaxes this assumption).
- The two samples should be independent of each other.
Two-sample t-tests for a difference in mean involve independent samples, paired samples and overlapping samples. Paired t-tests are a form of blocking, and have greater power than unpaired tests when the paired units are similar with respect to "noise factors" that are independent of membership in the two groups being compared.^{[9]} In a different context, paired t-tests can be used to reduce the effects of confounding factors in an observational study.
The independent samples t-test is used when two separate sets of independent and identically distributed samples are obtained, one from each of the two populations being compared. For example, suppose we are evaluating the effect of a medical treatment, and we enroll 100 subjects into our study, then randomly assign 50 subjects to the treatment group and 50 subjects to the control group. In this case, we have two independent samples and would use the unpaired form of the t-test. The randomization is not essential here – if we contacted 100 people by phone and obtained each person's age and gender, and then used a two-sample t-test to see whether the mean ages differ by gender, this would also be an independent samples t-test, even though the data are observational.
Paired samples t-tests typically consist of a sample of matched pairs of similar units, or one group of units that has been tested twice (a "repeated measures" t-test).
A typical example of the repeated measures t-test would be where subjects are tested prior to a treatment, say for high blood pressure, and the same subjects are tested again after treatment with a blood-pressure lowering medication. By comparing the same patient's numbers before and after treatment, we are effectively using each patient as their own control. That way the correct rejection of the null hypothesis (here: of no difference made by the treatment) can become much more likely, with statistical power increasing simply because the random between-patient variation has now been eliminated. Note however that an increase of statistical power comes at a price: more tests are required, each subject having to be tested twice. Because half of the sample now depends on the other half, the paired version of Student's t-test has only n/2 − 1 degrees of freedom (with n being the total number of observations). Pairs become individual test units, and the sample has to be doubled to achieve the same number of degrees of freedom.
A paired samples t-test based on a "matched-pairs sample" results from an unpaired sample that is subsequently used to form a paired sample, by using additional variables that were measured along with the variable of interest.^{[10]} The matching is carried out by identifying pairs of values consisting of one observation from each of the two samples, where the pair is similar in terms of other measured variables. This approach is sometimes used in observational studies to reduce or eliminate the effects of confounding factors.
Paired samples t-tests are often referred to as "dependent samples t-tests" (as are t-tests on overlapping samples).
An overlapping samples t-test is used when there are paired samples with data missing in one or the other samples (e.g., due to selection of "Don't know" options in questionnaires or because respondents are randomly assigned to a subset of questions). These tests are widely used in commercial survey research (e.g., by polling companies) and are available in many standard crosstab software packages.
Explicit expressions that can be used to carry out various t-tests are given below. In each case, the formula for a test statistic that either exactly follows or closely approximates a t-distribution under the null hypothesis is given. Also, the appropriate degrees of freedom are given in each case. Each of these statistics can be used to carry out either a one-tailed test or a two-tailed test.
Once a t value is determined, a p-value can be found using a table of values from Student's t-distribution. If the calculated p-value is below the threshold chosen for statistical significance (usually the 0.10, 0.05, or 0.01 level), then the null hypothesis is rejected in favor of the alternative hypothesis.
In testing the null hypothesis that the population mean is equal to a specified value μ_{0}, one uses the statistic

t = (x̄ − μ_{0}) / (s/√n),

where x̄ is the sample mean, s is the sample standard deviation and n is the sample size. The degrees of freedom used in this test are n − 1. Although the parent population does not need to be normally distributed, the distribution of the population of sample means, x̄, is assumed to be normal. By the central limit theorem, if the sampling of the parent population is random then the sample means will be approximately normal.^{[11]} (The degree of approximation will depend on how close the parent population is to a normal distribution and the sample size, n.)
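As a concrete illustration, the statistic can be computed directly and cross-checked against a library routine. The following is a minimal sketch in Python, assuming NumPy and SciPy are available; the sample values and the null mean mu0 are hypothetical, chosen only for demonstration.

```python
import numpy as np
from scipy import stats

# Hypothetical sample data and null-hypothesis mean (illustration only)
x = np.array([5.1, 4.9, 5.4, 5.0, 5.3, 4.8, 5.2])
mu0 = 5.0

n = len(x)
t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))  # t = (x̄ − μ0)/(s/√n)
p = 2 * stats.t.sf(abs(t), df=n - 1)                 # two-tailed p-value

# Cross-check with SciPy's built-in one-sample t-test
t_ref, p_ref = stats.ttest_1samp(x, mu0)
assert np.isclose(t, t_ref) and np.isclose(p, p_ref)
print(f"t = {t:.4f}, p = {p:.4f}, df = {n - 1}")
```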
Suppose one is fitting the model

Y_{i} = α + βx_{i} + ε_{i},

where x_{i}, i = 1, ..., n are known, α and β are unknown, ε_{i} are independent identically normally distributed random errors with expected value 0 and unknown variance σ^{2}, and Y_{i}, i = 1, ..., n are observed. It is desired to test the null hypothesis that the slope β is equal to some specified value β_{0} (often taken to be 0, in which case the hypothesis is that x and y are unrelated).
Let α̂ and β̂ be the least-squares estimators of α and β, and let SE_{β̂} be the standard error of the least-squares estimator β̂. Then

t = (β̂ − β_{0}) / SE_{β̂}

has a t-distribution with n − 2 degrees of freedom if the null hypothesis is true. The standard error of the slope coefficient,

SE_{β̂} = √( (1/(n − 2)) Σ_{i=1}^{n} ε̂_{i}^{2} ) / √( Σ_{i=1}^{n} (x_{i} − x̄)^{2} ),

can be written in terms of the residuals. Let

ε̂_{i} = Y_{i} − (α̂ + β̂x_{i})

denote the residuals and SSR = Σ_{i=1}^{n} ε̂_{i}^{2} their sum of squares. Then t is given by:

t = (β̂ − β_{0}) √(n − 2) / √( SSR / Σ_{i=1}^{n} (x_{i} − x̄)^{2} ).
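The slope test is easy to compute by hand and to verify against scipy.stats.linregress, which tests the slope against zero. The following is a minimal sketch, assuming NumPy and SciPy; the (x, y) data and the null slope beta0 are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical data (illustration only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1, 5.8])
beta0 = 0.0  # null-hypothesis slope

n = len(x)
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - (alpha_hat + beta_hat * x)                      # residuals ε̂_i
se_beta = np.sqrt(np.sum(resid ** 2) / (n - 2)) / np.sqrt(np.sum((x - x.mean()) ** 2))

t = (beta_hat - beta0) / se_beta
p = 2 * stats.t.sf(abs(t), df=n - 2)

# Cross-check: for beta0 = 0, t equals slope divided by its standard error
res = stats.linregress(x, y)
assert np.isclose(t, res.slope / res.stderr)
print(f"t = {t:.3f}, p = {p:.4f}, df = {n - 2}")
```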
This test is only used when both:
- the two sample sizes (that is, the number n of participants in each group) are equal; and
- it can be assumed that the two distributions have the same variance.
Violations of these assumptions are discussed below.
The t statistic to test whether the means are different can be calculated as follows:

t = (X̄_{1} − X̄_{2}) / (s_{p} · √(2/n)),

where

s_{p} = √( (s_{X1}^{2} + s_{X2}^{2}) / 2 ).

Here s_{p} is the grand standard deviation (or pooled standard deviation), 1 = group one, 2 = group two. s_{X1}^{2} and s_{X2}^{2} are the unbiased estimators of the variances of the two samples. The denominator of t is the standard error of the difference between two means.
For significance testing, the degrees of freedom for this test are 2n − 2, where n is the number of participants in each group.
This test is used only when it can be assumed that the two distributions have the same variance. (When this assumption is violated, see below.) The t statistic to test whether the means are different can be calculated as follows:

t = (X̄_{1} − X̄_{2}) / (s_{p} · √(1/n_{1} + 1/n_{2})),

where

s_{p} = √( ((n_{1} − 1)s_{X1}^{2} + (n_{2} − 1)s_{X2}^{2}) / (n_{1} + n_{2} − 2) ).
Note that the formulae above are generalizations of the case where both samples have equal sizes (substitute n for n_{1} and n_{2}).
s_{p} is an estimator of the common standard deviation of the two samples: it is defined in this way so that its square is an unbiased estimator of the common variance whether or not the population means are the same. In these formulae, n_{i} = number of participants in group i (i = 1 or 2); n_{i} − 1 is the number of degrees of freedom for group i, and the total sample size minus two (that is, n_{1} + n_{2} − 2) is the total number of degrees of freedom, which is used in significance testing.
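A minimal sketch of the pooled-variance computation, assuming NumPy and SciPy; the two samples are hypothetical. The manually computed statistic is cross-checked against scipy.stats.ttest_ind with equal_var=True.

```python
import numpy as np
from scipy import stats

# Hypothetical samples of unequal size (illustration only)
x1 = np.array([20.1, 19.8, 20.5, 20.0, 19.9])
x2 = np.array([21.0, 20.7, 21.3, 20.9, 21.1, 20.8, 21.2])

n1, n2 = len(x1), len(x2)
v1, v2 = x1.var(ddof=1), x2.var(ddof=1)  # unbiased variance estimates

# Pooled standard deviation s_p
sp = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))

t = (x1.mean() - x2.mean()) / (sp * np.sqrt(1 / n1 + 1 / n2))
df = n1 + n2 - 2
p = 2 * stats.t.sf(abs(t), df=df)

t_ref, p_ref = stats.ttest_ind(x1, x2, equal_var=True)
assert np.isclose(t, t_ref) and np.isclose(p, p_ref)
```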
This test, also known as Welch's t-test, is used only when the two population variances are not assumed to be equal (the two sample sizes may or may not be equal) and hence must be estimated separately. The t statistic to test whether the population means are different is calculated as:

t = (X̄_{1} − X̄_{2}) / s_{Δ̄},

where

s_{Δ̄} = √( s_{1}^{2}/n_{1} + s_{2}^{2}/n_{2} ).

Here s_{i}^{2} is the unbiased estimator of the variance of each of the two samples, with n_{i} = number of participants in group i, i = 1 or 2. Note that in this case s_{Δ̄}^{2} is not a pooled variance. For use in significance testing, the distribution of the test statistic is approximated as an ordinary Student's t-distribution with the degrees of freedom calculated using

d.f. = ( s_{1}^{2}/n_{1} + s_{2}^{2}/n_{2} )^{2} / ( (s_{1}^{2}/n_{1})^{2}/(n_{1} − 1) + (s_{2}^{2}/n_{2})^{2}/(n_{2} − 1) ).
This is known as the Welch–Satterthwaite equation. The true distribution of the test statistic actually depends (slightly) on the two unknown population variances (see Behrens–Fisher problem).
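The Welch statistic and the Welch–Satterthwaite degrees of freedom can be computed directly and compared with scipy.stats.ttest_ind with equal_var=False. A minimal sketch, with hypothetical samples chosen to have visibly different spreads:

```python
import numpy as np
from scipy import stats

# Hypothetical samples with different variances (illustration only)
x1 = np.array([10.2, 10.4, 9.9, 10.1, 10.3])
x2 = np.array([11.5, 9.0, 12.2, 8.4, 11.9, 9.8])

n1, n2 = len(x1), len(x2)
v1, v2 = x1.var(ddof=1), x2.var(ddof=1)

t = (x1.mean() - x2.mean()) / np.sqrt(v1 / n1 + v2 / n2)

# Welch–Satterthwaite approximation to the degrees of freedom
df = (v1 / n1 + v2 / n2) ** 2 / (
    (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
)
p = 2 * stats.t.sf(abs(t), df=df)

t_ref, p_ref = stats.ttest_ind(x1, x2, equal_var=False)
assert np.isclose(t, t_ref) and np.isclose(p, p_ref)
```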
This test is used when the samples are dependent; that is, when there is only one sample that has been tested twice (repeated measures) or when there are two samples that have been matched or "paired". This is an example of a paired difference test.
The test statistic is

t = (X̄_{D} − μ_{0}) / (s_{D}/√n).

For this equation, the differences between all pairs must be calculated. The pairs are either one person's pre-test and post-test scores or pairs of persons matched into meaningful groups (for instance, drawn from the same family or age group: see table). The average (X̄_{D}) and standard deviation (s_{D}) of those differences are used in the equation. The constant μ_{0} is non-zero if you want to test whether the average of the difference is significantly different from μ_{0}. The degrees of freedom used are n − 1, where n is the number of pairs; a worked sketch follows the tables below.
Example of repeated measures

Number  Name      Test 1  Test 2
1       Mike      35%     67%
2       Melanie   50%     46%
3       Melissa   90%     86%
4       Mitchell  78%     91%
Example of matched pairs

Pair  Name   Age  Test
1     John   35   250
1     Jane   36   340
2     Jimmy  22   460
2     Jessy  21   200
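As a sketch of the paired computation, the repeated-measures table above can be analyzed in Python, assuming NumPy and SciPy; scipy.stats.ttest_rel performs the same test on the per-subject differences (here with μ_{0} = 0, i.e. no effect).

```python
import numpy as np
from scipy import stats

# Scores from the repeated-measures table above (in %)
test1 = np.array([35, 50, 90, 78])
test2 = np.array([67, 46, 86, 91])

d = test2 - test1  # per-subject differences
n = len(d)
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))  # μ0 = 0: test for no change
p = 2 * stats.t.sf(abs(t), df=n - 1)

# Cross-check with SciPy's paired-samples t-test
t_ref, p_ref = stats.ttest_rel(test2, test1)
assert np.isclose(t, t_ref) and np.isclose(p, p_ref)
print(f"t = {t:.3f}, p = {p:.3f}, df = {n - 1}")
```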
Let A_{1} denote a set obtained by taking 6 random samples out of a larger set:

A_{1} = {30.02, 29.99, 30.11, 29.97, 30.01, 29.99}

and let A_{2} denote a second set obtained similarly:

A_{2} = {29.89, 29.93, 29.72, 29.98, 30.02, 29.98}
These could be, for example, the weights of screws that were chosen out of a bucket.
We will carry out tests of the null hypothesis that the means of the populations from which the two samples were taken are equal.
The difference between the two sample means, each denoted by X̄_{i}, which appears in the numerator for all the two-sample testing approaches discussed above, is

X̄_{1} − X̄_{2} = 30.015 − 29.920 = 0.095.
The sample standard deviations for the two samples are approximately 0.05 and 0.11, respectively. For such small samples, a test of equality between the two population variances would not be very powerful. Since the sample sizes are equal, the two forms of the two-sample t-test will perform similarly in this example.
If the approach for unequal variances (discussed above) is followed, the results are

s_{Δ̄} ≈ 0.0485

and the degrees of freedom

d.f. ≈ 7.031.
The test statistic is approximately 1.959. The two-tailed test p-value is approximately 0.091 and the one-tailed p-value is approximately 0.045.
If the approach for equal variances (discussed above) is followed, the results are

s_{p} ≈ 0.084

and the degrees of freedom

d.f. = 10.
Since the sample sizes are equal (both are 6), the test statistic is again approximately equal to 1.959. Since the degrees of freedom are different from what they are in the unequal variances test, the p-values will differ slightly from what was found above. Here, the two-tailed test p-value is approximately 0.078, and the one-tailed p-value is approximately 0.039. Thus if there is good reason to believe that the population variances are equal, the results become somewhat more suggestive of a difference in the mean weights for the two populations of screws.
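These figures can be reproduced with SciPy, using the two sets as given above; scipy.stats.ttest_ind covers both the unequal-variance (Welch) and equal-variance (pooled) forms. A minimal sketch:

```python
import numpy as np
from scipy import stats

A1 = np.array([30.02, 29.99, 30.11, 29.97, 30.01, 29.99])
A2 = np.array([29.89, 29.93, 29.72, 29.98, 30.02, 29.98])

# Unequal-variance (Welch) form: t ≈ 1.959, two-tailed p ≈ 0.091
t_w, p_w = stats.ttest_ind(A1, A2, equal_var=False)

# Equal-variance (pooled) form: t ≈ 1.959, two-tailed p ≈ 0.078
t_p, p_p = stats.ttest_ind(A1, A2, equal_var=True)

print(f"Welch:  t = {t_w:.3f}, p = {p_w:.3f}")
print(f"Pooled: t = {t_p:.3f}, p = {p_p:.3f}")
```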
The t-test provides an exact test for the equality of the means of two normal populations with unknown, but equal, variances. (Welch's t-test is a nearly exact test for the case where the data are normal but the variances may differ.) For moderately large samples and a one-tailed test, the t-test is relatively robust to moderate violations of the normality assumption.^{[12]}
For exactness, the t-test and Z-test require normality of the sample means, and the t-test additionally requires that the sample variance follows a scaled χ^{2} distribution, and that the sample mean and sample variance be statistically independent. Normality of the individual data values is not required if these conditions are met. By the central limit theorem, sample means of moderately large samples are often well-approximated by a normal distribution even if the data are not normally distributed. For non-normal data, the distribution of the sample variance may deviate substantially from a χ^{2} distribution. However, if the sample size is large, Slutsky's theorem implies that the distribution of the sample variance has little effect on the distribution of the test statistic. If the data are substantially non-normal and the sample size is small, the t-test can give misleading results. See Location test for Gaussian scale mixture distributions for some theory related to one particular family of non-normal distributions.
When the normality assumption does not hold, a nonparametric alternative to the t-test can often have better statistical power. For example, for two independent samples when the data distributions are asymmetric (that is, the distributions are skewed) or the distributions have large tails, then the Wilcoxon rank-sum test (also known as the Mann–Whitney U test) can have three to four times higher power than the t-test.^{[12]}^{[13]}^{[14]} The nonparametric counterpart to the paired samples t-test is the Wilcoxon signed-rank test for paired samples. For a discussion on choosing between the t-test and nonparametric alternatives, see Sawilowsky (2005).^{[15]}
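Both nonparametric alternatives are available in SciPy. A minimal sketch with hypothetical, randomly generated skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two independent, skewed samples (hypothetical data for illustration)
a = rng.exponential(scale=1.0, size=30)
b = rng.exponential(scale=1.5, size=30)

# Wilcoxon rank-sum / Mann–Whitney U test for independent samples
u_stat, p_u = stats.mannwhitneyu(a, b, alternative="two-sided")

# Paired data: the signed-rank test works on within-pair differences
before = rng.exponential(scale=1.0, size=25)
after = before + rng.normal(loc=0.3, scale=0.4, size=25)
w_stat, p_w = stats.wilcoxon(after, before)
```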
One-way analysis of variance generalizes the two-sample t-test when the data belong to more than two groups.
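For example, scipy.stats.f_oneway carries out a one-way ANOVA; with exactly two groups, its F statistic equals the square of the pooled two-sample t statistic. A minimal sketch with hypothetical groups:

```python
from scipy import stats

# Three hypothetical groups (illustration only)
g1 = [6.9, 5.4, 5.8, 4.6, 4.0]
g2 = [8.3, 6.8, 7.8, 9.2, 6.5]
g3 = [8.0, 10.5, 8.1, 6.9, 9.3]

f_stat, p_value = stats.f_oneway(g1, g2, g3)
```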
A generalization of Student's t statistic, called Hotelling's T-squared statistic, allows for the testing of hypotheses on multiple (often correlated) measures within the same sample. For instance, a researcher might submit a number of subjects to a personality test consisting of multiple personality scales (e.g. the Minnesota Multiphasic Personality Inventory). Because measures of this type are usually positively correlated, it is not advisable to conduct separate univariate t-tests to test hypotheses, as these would neglect the covariance among measures and inflate the chance of falsely rejecting at least one hypothesis (Type I error). In this case a single multivariate test is preferable for hypothesis testing. One approach is Fisher's method for combining multiple tests, with alpha reduced for positive correlation among tests. Another is the use of Hotelling's T^{2} statistic, which follows a T^{2} distribution. However, in practice the distribution is rarely used, since tabulated values for T^{2} are hard to find. Usually, T^{2} is converted instead to an F statistic.
For a one-sample multivariate test, the hypothesis is that the mean vector (μ) is equal to a given vector (μ_{0}). The test statistic is Hotelling's T^{2}:

T^{2} = n (x̄ − μ_{0})′ S^{−1} (x̄ − μ_{0}),

where n is the sample size, x̄ is the vector of column means and S is the sample covariance matrix.
For a two-sample multivariate test, the hypothesis is that the mean vectors (μ_{1}, μ_{2}) of two samples are equal. The test statistic is Hotelling's two-sample T^{2}:

T^{2} = ( n_{1}n_{2}/(n_{1} + n_{2}) ) (x̄_{1} − x̄_{2})′ S_{pooled}^{−1} (x̄_{1} − x̄_{2}),

where S_{pooled} is the pooled sample covariance matrix.
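Hotelling's T^{2} can be computed directly; the sketch below shows the one-sample case in Python with NumPy and SciPy, including the usual conversion to an F statistic. The bivariate data are randomly generated and hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical bivariate sample (illustration only)
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2)) + [0.3, 0.1]
mu0 = np.zeros(2)  # hypothesized mean vector

n, p = X.shape                   # sample size and number of variables
xbar = X.mean(axis=0)            # vector of column means
S = np.cov(X, rowvar=False)      # sample covariance matrix

# One-sample Hotelling's T²: n (x̄ − μ0)' S⁻¹ (x̄ − μ0)
T2 = n * (xbar - mu0) @ np.linalg.solve(S, xbar - mu0)

# Convert to an F statistic, as is usual in practice: F ~ F(p, n − p)
F = (n - p) / (p * (n - 1)) * T2
p_value = stats.f.sf(F, p, n - p)
```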
Many spreadsheet programs and statistics packages, such as QtiPlot, LibreOffice Calc, Microsoft Excel, SAS, SPSS, Stata, DAP, gretl, R, Python, PSPP, Matlab and Minitab, include implementations of Student's t-test.
Language/Program                Function
Microsoft Excel pre-2010        TTEST(array1, array2, tails, type)
Microsoft Excel 2010 and later  T.TEST(array1, array2, tails, type)
LibreOffice                     TTEST(Data1; Data2; Mode; Type)
Python                          scipy.stats.ttest_ind(a, b, axis=0, equal_var=True)
Matlab                          ttest(data1, data2)
R                               t.test(data1, data2, var.equal=TRUE)
SAS                             PROC TTEST
Java                            tTest(sample1, sample2)