1: The F-ratio test
The F-ratio test is used to test for equality of two variances. The null hypothesis is that the two sample variances (v_{1} and v_{2}) come from independent random samples drawn from normal populations with the same population variance (v). The test statistic (F) is the ratio of the variances where, traditionally, v_{1} is the larger of the two:
Algebraically speaking:
F = v_{1} / v_{2}
where:
F is the F statistic with n_{1}-1 and n_{2}-1 degrees of freedom;
n_{1} and n_{2} are the numbers of observations in each sample;
v_{1} and v_{2} are the two sample variances.

The observed value is then tested against the distribution of F with the appropriate number of degrees of freedom.
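The calculation can be sketched in Python; the function name and the two data samples below are invented purely for illustration, following the textbook convention of putting the larger variance on top:

```python
# Minimal sketch of the F-ratio statistic for two samples (data invented).
from statistics import variance

def f_ratio(sample1, sample2):
    # Follow the traditional convention: the larger variance goes on top,
    # so F >= 1 and can be compared against upper-tail tables.
    v1, v2 = variance(sample1), variance(sample2)
    if v1 >= v2:
        return v1 / v2, len(sample1) - 1, len(sample2) - 1
    return v2 / v1, len(sample2) - 1, len(sample1) - 1

a = [4.1, 5.2, 6.3, 5.8, 4.9, 5.5]
b = [3.0, 7.9, 2.2, 8.4, 6.1, 4.4, 5.6]
F, df1, df2 = f_ratio(a, b)
print(F, df1, df2)  # F is the ratio of the larger to the smaller variance
```

The observed F would then be referred to the F distribution on df1 and df2 degrees of freedom, for example via R's pf() or Python's scipy.stats.f.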
BEWARE:
 Most statistical tables only deal with the F distribution's upper tail. As a result, traditionally, v_{1} is assumed to be the larger variance, and any formulae are arranged accordingly, albeit rather confusingly.
If you are using probability functions, such as those provided by R, those contortions are seldom needed, unless you are using textbook statistics formulae.
When used prior to a t-test, there is usually no reason to anticipate which variance is bigger, so a two-tailed test is carried out with the alternative hypothesis being that the population variances are not equal (v_{1} ≠ v_{2}). However, the test is most commonly used in analysis of variance where, if H_{0} is untrue, one of the variances is presumed in advance to be larger than the other. Hence tables of F are often only given in the one-tailed form. If this is the case, use the value given for P = 0.025 rather than P = 0.05.
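If no tables or probability functions are to hand, the two-tailed logic can be illustrated by simulating the null distribution directly; the observed F value, degrees of freedom, and simulation count below are all invented for this sketch:

```python
# Sketch: a two-tailed P value for an F ratio by simulation (standard
# library only). Doubling the upper-tail probability mirrors the advice
# to read the P = 0.025 column of a one-tailed table for a 0.05 test.
import random
from statistics import variance

random.seed(1)

def simulated_f(df1, df2):
    # Under H0, draw two normal samples with equal variance and
    # return the ratio of their sample variances.
    s1 = [random.gauss(0, 1) for _ in range(df1 + 1)]
    s2 = [random.gauss(0, 1) for _ in range(df2 + 1)]
    return variance(s1) / variance(s2)

f_obs = 9.42  # hypothetical observed ratio, larger variance on top
sims = [simulated_f(6, 5) for _ in range(20000)]
upper = sum(f >= f_obs for f in sims) / len(sims)
p_two_tailed = 2 * upper
print(p_two_tailed)
```

In practice one would use an exact F distribution function rather than simulation; the point here is only that the two-tailed probability is twice the upper-tail area.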
2: Levene's test
There are a number of other tests we can use to test for equality of variances. We will cover these in detail when we compare several means using the technique of analysis of variance. However, you may encounter these tests in software packages as an optional check when you carry out a t-test, so we will briefly introduce one of them here, namely Levene's test.
In Levene's test we first calculate the absolute deviations of the observations from their respective means. All subsequent analysis is then done on the two sets of deviations:
 Measure the variability (using sums of squares) of the mean of each set of deviations about the overall mean.
 Measure the variability (using sums of squares) of the individual deviations about their respective means.
 Convert these sums of squares to 'variances' (usually called mean squares) by dividing by their degrees of freedom.
 Use a standard F-ratio test to compare our two variance estimates with 1 and n-2 degrees of freedom (where n is the total number of observations).

All we are doing here is comparing the amount of variability between the two groups' mean deviations with the amount of variability of the deviations within each group. This is called an analysis of variance of the deviations.
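For two groups, the steps above can be sketched in a few lines of Python; the function name and data are invented for illustration, and with two groups the resulting F statistic has 1 and n-2 degrees of freedom:

```python
# Levene's test for two groups, following the steps listed above
# (data invented for illustration).
from statistics import mean

def levene_two_groups(x, y):
    groups = [x, y]
    # Absolute deviations of each observation from its group mean
    devs = [[abs(v - mean(g)) for v in g] for g in groups]
    n = sum(len(d) for d in devs)
    grand = mean([z for d in devs for z in d])
    # Between-groups sum of squares; with two groups this has 1 df
    ss_between = sum(len(d) * (mean(d) - grand) ** 2 for d in devs)
    # Within-groups sum of squares; n - 2 df
    ss_within = sum((z - mean(d)) ** 2 for d in devs for z in d)
    # Convert to mean squares and take their ratio
    return (ss_between / 1) / (ss_within / (n - 2))

a = [4.1, 5.2, 6.3, 5.8, 4.9, 5.5]
b = [3.0, 7.9, 2.2, 8.4, 6.1, 4.4, 5.6]
print(levene_two_groups(a, b))
```

The resulting statistic would be referred to the F distribution with 1 and n-2 degrees of freedom, exactly as in a standard F-ratio test.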
This test is usually considered more robust (that is, less sensitive to assumptions not being met) than the F-ratio test. But there are still problems if you have unequal numbers of observations in each group. This is because the absolute deviations may be highly skewed, thus violating the assumptions of analysis of variance.
There is a modification of this test called the Brown-Forsythe test. Here an analysis of variance is carried out on the deviations from the group medians rather than the means. However, Glass and Hopkins (1996, p. 436) challenged the robustness of both Levene's test and this modification if sample sizes are unequal and variances are not homogeneous. Some statisticians therefore recommend using the unequal-variance t-test if there is any doubt about the homogeneity of variances, or if sample sizes differ markedly.
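A minimal sketch of the Brown-Forsythe variant: the analysis-of-variance steps are the same as for Levene's test, but the absolute deviations are taken from the group medians. The data below are invented, chosen so that an outlier inflates one group's mean but not its median:

```python
# Brown-Forsythe sketch: an analysis of variance on absolute deviations
# from the group medians (two groups; data invented for illustration).
from statistics import mean, median

def brown_forsythe(x, y):
    devs = [[abs(v - median(g)) for v in g] for g in (x, y)]
    n = sum(len(d) for d in devs)
    grand = mean([z for d in devs for z in d])
    ms_between = sum(len(d) * (mean(d) - grand) ** 2 for d in devs)  # 1 df
    ms_within = sum((z - mean(d)) ** 2 for d in devs for z in d) / (n - 2)
    return ms_between / ms_within

skewed = [1.0, 1.1, 1.2, 1.3, 9.0]  # outlier inflates the mean, not the median
other = [2.0, 2.5, 3.0, 3.5, 4.0]
print(brown_forsythe(skewed, other))
```

In software, scipy.stats.levene with its default center='median' computes this median-based form, while center='mean' gives Levene's original test.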