**Total variation in experimental data is partitioned into components assignable to specific sources by the analysis of variance.** This statistical technique is applicable to data for which (1) effects of sources are additive, (2) uncontrolled or unexplained experimental variations (which are grouped as experimental errors) are independent of other sources of variation, (3) variance of experimental errors is homogeneous, and (4) experimental errors follow a normal distribution. When data depart from these assumptions, one must exercise extreme care in interpreting the results of an analysis of variance. Statistical tests indicate the contribution of the components to the observed variation.

In an illustrative experiment, *t* methods of treatment are under study, and *n* samples are measured for each treatment, for a total of *nt* samples. Measurement $X_{ij}$ of the *i*th sample that received the *j*th treatment records an overall effect μ, an effect $\beta_j$ produced by the *j*th treatment, and an effect $\epsilon_{ij}$ produced by experimental error. The three effects are additive, so that Eq. (1)

$$X_{ij} = \mu + \beta_j + \epsilon_{ij} \tag{1}$$

holds, where $i = 1, \ldots, n$ and $j = 1, \ldots, t$. The statistical problem is to test for the existence of these effects.
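A minimal sketch of this additive model in code; the numeric values of μ, the $\beta_j$, and σ below are hypothetical, chosen only to illustrate how Eq. (1) generates the *nt* measurements:

```python
import random

random.seed(1)

t, n = 3, 4              # t treatments, n samples per treatment
mu = 10.0                # overall effect (hypothetical value)
beta = [-2.0, 0.0, 2.0]  # treatment effects beta_j (hypothetical; sum to zero)
sigma = 1.0              # standard deviation of the experimental error

# Eq. (1): X_ij = mu + beta_j + eps_ij, with the errors eps_ij drawn
# independently from a normal distribution, per the model assumptions.
X = [[mu + beta[j] + random.gauss(0.0, sigma) for _ in range(n)]
     for j in range(t)]

print(len(X), len(X[0]))  # 3 4  -- t rows of n measurements each
```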

The analysis of variance in this example is presented in the **table**. Entries in the sum of squares column represent that part of the total variation that is attributable to each source. The total sum of squares *Q* is the sum over all squared deviations of the observations $X_{ij}$ from the grand mean $\bar{X}$, Eq. (2).

$$Q = \sum_{j=1}^{t}\sum_{i=1}^{n}\left(X_{ij} - \bar{X}\right)^2 \tag{2}$$

Similarly, the within-treatments sum of squares *E* is the sum over all squared deviations of the observations $X_{ij}$ within a treatment from the mean $\bar{X}_j$ of that treatment, Eq. (3).

$$E = \sum_{j=1}^{t}\sum_{i=1}^{n}\left(X_{ij} - \bar{X}_j\right)^2 \tag{3}$$

Also, the between-treatments sum of squares *T* is *n* times the sum over all treatments of the squared deviations of the treatment means $\bar{X}_j$ from the grand mean $\bar{X}$, with $\bar{X}$ and $\bar{X}_j$ as defined by Eqs. (2) and (3). The sums of squares are generally computed more easily from the equivalent formulas (4)–(6).
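The standard computational identities equivalent to the definitions above, obtained by expanding the squared deviations in Eqs. (2) and (3) and simplifying, can be written as follows (the assignment of the numbers (4)–(6) to these particular forms is an assumption of this reconstruction):

$$Q = \sum_{j=1}^{t}\sum_{i=1}^{n} X_{ij}^{2} - \frac{1}{nt}\Bigl(\sum_{j=1}^{t}\sum_{i=1}^{n} X_{ij}\Bigr)^{2} \tag{4}$$

$$T = \frac{1}{n}\sum_{j=1}^{t}\Bigl(\sum_{i=1}^{n} X_{ij}\Bigr)^{2} - \frac{1}{nt}\Bigl(\sum_{j=1}^{t}\sum_{i=1}^{n} X_{ij}\Bigr)^{2} \tag{5}$$

$$E = Q - T \tag{6}$$

The last identity also exhibits the partition of the total variation, $Q = T + E$.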

The entries under degrees of freedom give the number of independent comparisons upon which the sum of squares for each source of variation is based. In every case the linear restriction imposed by the relationship of the particular mean to the observations costs one degree of freedom, so each set of deviations taken about a mean contributes one less degree of freedom than the number of deviations: $t-1$ between treatments, $t(n-1)$ within treatments, and $nt-1$ in total.

The mean squares in the analysis of variance are obtained by dividing each sum of squares by its corresponding degrees of freedom. The within-treatments mean square is an estimate of $\sigma^2$, the variance of the error term $\epsilon_{ij}$ in the additive model; it represents the random or unexplained variation in the data. The between-treatments mean square is an estimate of $\sigma^2 + n\sigma_\beta^2$, where $\sigma_\beta^2$ is the variance of the treatment effects $\beta_j$.
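These quantities can be computed directly for a small hypothetical data set (the values below are illustrative, not drawn from the text), which also verifies the partition $Q = T + E$:

```python
# Hypothetical data: t = 3 treatments, n = 4 samples per treatment.
data = [
    [8, 9, 7, 8],      # treatment 1
    [10, 11, 9, 10],   # treatment 2
    [12, 13, 11, 12],  # treatment 3
]
t, n = len(data), len(data[0])

grand_mean = sum(x for row in data for x in row) / (n * t)
treat_means = [sum(row) / n for row in data]

# Eq. (2): total sum of squares Q
Q = sum((x - grand_mean) ** 2 for row in data for x in row)
# Eq. (3): within-treatments sum of squares E
E = sum((x - m) ** 2 for row, m in zip(data, treat_means) for x in row)
# Between-treatments sum of squares T
T = n * sum((m - grand_mean) ** 2 for m in treat_means)

assert abs(Q - (T + E)) < 1e-9   # partition of the total variation

# Mean squares: each sum of squares over its degrees of freedom
T_prime = T / (t - 1)        # between-treatments mean square
E_prime = E / (t * (n - 1))  # within-treatments mean square, estimates sigma^2

print(Q, T, E)           # 38.0 32.0 6.0
print(T_prime, E_prime)  # mean squares of 16.0 and 2/3
```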

If the treatment means differ substantially, the $\beta_j$ effects estimated by ($\bar{X}_j - \bar{X}$) will differ correspondingly and will have a large variance $\sigma_\beta^2$. If on the other hand the means do not differ, the treatment effects $\beta_j$ would be zero and $\sigma_\beta^2$ would be zero. In this case the treatment mean square would equal the error mean square, and both would be independent estimates of $\sigma^2$. By comparing the ratio $T'/E'$ of the between-treatments mean square $T'$ to the within-treatments mean square $E'$ with unity, the variation due to treatments is compared with the variation due to random or unexplained factors. If this ratio, called the *F* ratio, is close to unity, there is no evidence of a treatment effect. However, if the ratio $T'/E'$ is substantially greater than unity, there may be a significant treatment effect.

To compare the mean squares objectively, one uses the *F* test of significance, in which the statistical hypothesis is that $\sigma_\beta^2 = 0$. Under this hypothesis it can be concluded that the treatment effects are significantly different from zero at the significance level α if the calculated *F* ratio is greater than the value of *F* at the α point on the *F* distribution with $t-1$ and $t(n-1)$ degrees of freedom. *See also:* **Biometrics**; **Quality control**; **Statistics**
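The full test can be sketched for the same kind of hypothetical data set; the critical value below is the standard tabulated 5% point of the *F* distribution with 2 and 9 degrees of freedom (about 4.26), quoted from standard *F* tables rather than from this text:

```python
# Hypothetical data: t = 3 treatments, n = 4 samples per treatment.
data = [[8, 9, 7, 8], [10, 11, 9, 10], [12, 13, 11, 12]]
t, n = len(data), len(data[0])

grand_mean = sum(x for row in data for x in row) / (n * t)
treat_means = [sum(row) / n for row in data]

# Between- and within-treatments sums of squares
T = n * sum((m - grand_mean) ** 2 for m in treat_means)
E = sum((x - m) ** 2 for row, m in zip(data, treat_means) for x in row)

T_prime = T / (t - 1)        # between-treatments mean square
E_prime = E / (t * (n - 1))  # within-treatments mean square
F = T_prime / E_prime        # the F ratio

# Hypothesis: sigma_beta^2 = 0.  Reject at alpha = 0.05 if F exceeds the
# tabulated point of the F distribution with t-1 = 2 and t(n-1) = 9
# degrees of freedom (about 4.26 in standard tables).
F_CRIT_05 = 4.26
print(F > F_CRIT_05)  # True: evidence of a treatment effect
```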