All three types of \(t\)-tests can be performed in SAS. This tutorial will demonstrate the steps and syntax needed to conduct one sample, two independent samples, and paired samples t-tests.

There are two data files used in this tutorial: the `iq_wide` data can be downloaded here, and the `iq_long` data can be downloaded here. The one-sample and independent samples examples will use the `iq_long` data, and the paired samples example will use `iq_wide`.

## One Sample \(t\)-Test

Say we have data from 200 subjects who have taken an IQ test. We know that in the general population the mean IQ is 100. We want to test the hypothesis that our sample comes from a different population, e.g., one that is more gifted than the general population. We will first look at the distribution of scores to determine if there are any outliers or if the distribution is highly skewed. Then we will test the null hypothesis that our sample comes from a population where \(\mu = 100\), against the alternative that \(\mu \ne 100\).

To get the histogram we will run:

```
proc univariate data=iq_long noprint;
histogram iq;
run;
```

This will give us the following figure:

The observations look like they may be centered above 100, and the distribution looks roughly symmetric.

Is the mean significantly different from 100? We can run a one-sample \(t\)-test using the `ttest` procedure to determine the answer. The syntax is the following:

```
proc ttest data=iq_long h0=100 plots(showh0);
var iq;
run;
```

`h0` specifies the null hypothesis value, and `plots(showh0)` has SAS display the null value on all relevant plots. Note that the test defaults to a two-tailed test with \(\alpha=0.05\). The `var` statement specifies `iq` as the variable to test. We will get the following output:

The first table gives summary statistics for IQ.

- `N` is the sample size
- `Mean` is the mean of the sample
- `Std Dev` is the standard deviation of the sample
- `Std Err` is the standard error of the mean, or \(\frac{SD}{\sqrt{N}}\)
- `Minimum` gives the minimum value
- `Maximum` gives the maximum value
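To make these quantities concrete, here is a minimal sketch in Python of how each statistic in the table is computed; the scores below are made up for illustration, not values from the `iq_long` data:

```python
import math

# Hypothetical IQ scores standing in for the tutorial's iq_long data.
scores = [98, 104, 110, 95, 107, 101, 113, 99, 106, 102]

n = len(scores)                      # N: the sample size
mean = sum(scores) / n               # Mean: the sample mean
# Std Dev: the sample standard deviation (divide by n - 1, as SAS does)
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
se = sd / math.sqrt(n)               # Std Err: SD / sqrt(N)

print(n, round(mean, 2), round(sd, 4), round(se, 4))
print(min(scores), max(scores))      # Minimum and Maximum
```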

What are the mean and standard deviation? This table tells us that the mean for IQ is 105.0351 and the standard deviation is 15.7835.

The second table gives the sample mean and standard deviation, along with the 95% confidence intervals for each.

The third table gives the results of our t-test:

- `DF` gives the degrees of freedom, \(n-1\), for the test
- `t Value` gives the \(t\)-statistic
- `Pr > |t|` gives the \(p\)-value

Since the \(p\)-value that we found (p<.001) is less than 0.05, and the 95% confidence interval does not include 100, we reject the null hypothesis.
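The \(t\)-statistic itself follows directly from the summary numbers in the first table. A quick check in Python, using the mean, standard deviation, and sample size reported above:

```python
import math

# Summary numbers reported in the SAS output:
# mean = 105.0351, SD = 15.7835, N = 200, null value h0 = 100.
mean, sd, n, h0 = 105.0351, 15.7835, 200, 100

se = sd / math.sqrt(n)        # standard error of the mean
t = (mean - h0) / se          # one-sample t statistic

print(round(t, 2))  # roughly 4.51, well past the two-tailed critical value
```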

An important assumption for the one-sample \(t\)-test is that the residuals (each observation's distance from the mean) are normally distributed. SAS also outputs a figure that includes a histogram with a normal and kernel density curve, as well as a box plot of the data, to help assess how well the assumption is met.

The normal density curve is supplied as a point of reference for a true normal distribution. Comparing the density curve for the sample to the normal curve, we see that we are not perfectly normal but pretty close. The boxplot at the bottom of the figure shows that there is only one small outlier (the dot on the right), which is not enough to concern us that outliers are affecting the results.

Lastly, the Q-Q plot provides another check of normality. If the data are normally distributed, the dots will follow closely along the diagonal line. Again, although the observations are not perfectly linear, they are quite close. We conclude that the assumption is met.

## Independent Samples \(t\)-Test

Say we wanted to test whether there is a significant difference in the IQs of males and females in our sample of 200 subjects. We’ll start out by visualizing the differences between groups using boxplots. This will give us an initial sense of whether differences exist and allow us to look for major outliers, skew in the distributions, or dramatically unequal variability between the two groups. We will do this using `sgplot`:

```
proc sgplot data=iq_long;
vbox iq / group=gender;
run;
```

We get the following figure, where male = 1 and female = 2:

The central tendency for females, indicated by the median line in the middle of the box, sits higher than the male median. The distributions are both similar (consistent with the equal variance assumption) and roughly symmetric. We can get the specific descriptive statistics for each gender when we run our t-test:

```
proc ttest data=iq_long;
class gender;
var iq;
run;
```

We will get the following output:

The first table gives the descriptive statistics for each gender, as well as for the difference between the two. The `Pooled` results are estimated assuming equal variance. The `Satterthwaite` results relax the equal variance assumption.

The second table gives the sample mean and standard deviation, along with the 95% confidence intervals for each for both genders, as well as the difference between the two.

The third table gives the results of the t-test, with equal and unequal variances assumed. We saw that our two groups had similar variance, so we might be comfortable reporting the less conservative test assuming equal variances. However, the fourth table provides a formal test of the equal variances assumption. The null hypothesis is that the assumption is met. Since the result is significant (\(p = 0.0298\)), we reject the null hypothesis that the variances are equal and hence focus on the more conservative test using the Satterthwaite adjustment.

When evaluated against a \(t\)-distribution with 198 degrees of freedom (listed in the `DF` column), we get a \(p\)-value of 0.204. This is greater than 0.05, so we fail to reject the null hypothesis of no difference.
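The Satterthwaite row's degrees of freedom come from the Welch–Satterthwaite formula, which down-weights the df when the group variances differ. A sketch in Python (the group SDs and sizes below are hypothetical, not the tutorial's actual values):

```python
def satterthwaite_df(s1, n1, s2, n2):
    """Welch-Satterthwaite degrees of freedom from two sample SDs and sizes."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# With equal variances and equal group sizes, the adjusted df reduces to
# the pooled value n1 + n2 - 2; unequal variances shrink it.
print(satterthwaite_df(15.0, 100, 15.0, 100))   # 198.0, the pooled df
print(satterthwaite_df(10.0, 100, 20.0, 100))   # noticeably smaller
```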

SAS also outputs a figure that includes a histogram with a normal and kernel density curve, as well as a box plot of the data, though now it has one for each gender separately. These figures allow us to assess the normality assumption, which says that the distributions should be normal within each group. Comparing the normal density reference curve to the density curve from our data, we see that the observations are not exactly normal but are relatively close. The boxplots at the bottom show no major outliers.

Lastly, the Q-Q plot provides another check of the normality assumption within each group. If the observations were perfectly normal the dots would fall perfectly along the diagonal line. While not exactly normal, our observations are sufficiently close for us to accept the results.

## Paired Samples \(t\)-Test

Finally, say we have IQ data collected on 100 individuals at two points in time. We want to know if an intervention that occurs between the measurements - say, forming a study group - increases IQ scores. The null hypothesis is that the mean change (\(IQ_{t2} - IQ_{t1}\)) is zero.

To conduct a dependent (or paired) samples \(t\)-test, the data must be in wide format. That is, the \(t_1\) measures are in one column, the \(t_2\) measures are in another, and each row represents one subject. This is where we use our `iq_wide` dataset.

First, we’ll visualize the differences between the two time points. However, this requires the data be in long format (\(t_1\) is stacked on \(t_2\) in a single column). To do this we will use the transpose procedure:

```
proc transpose data=iq_wide out=long;
by id;
run;
```
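If the wide-to-long reshaping is unfamiliar, here is a minimal plain-Python analogue of what the transpose step does (the two example rows are made up): each wide record becomes one long record per time column, with `_name_` recording the source column and `col1` holding its value.

```python
# Hypothetical wide records: one row per subject, one column per time point.
wide = [
    {"id": 1, "Time_1": 100, "Time_2": 104},
    {"id": 2, "Time_1": 96,  "Time_2": 95},
]

# Stack Time_1 and Time_2 into a single value column, keyed by id.
long = [
    {"id": row["id"], "_name_": col, "col1": row[col]}
    for row in wide
    for col in ("Time_1", "Time_2")
]

for rec in long:
    print(rec)
```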

Then, we will create our boxplot:

```
proc sgplot data=long;
vbox col1 / category=_name_;
xaxis label="Time 1 vs. Time 2";
yaxis label="IQ Score";
run;
```

We get this plot:

Looking at the figure, it looks like the \(t_2\) scores are a little higher than the \(t_1\) scores, though not by much. Is this difference statistically significant? We run the paired samples \(t\)-test using the wide format of the data to find out.

```
proc ttest data=iq_wide;
paired Time_1*Time_2;
run;
```

We get the following output:

The first table gives summary statistics for the Time 1 - Time 2 differences. The second table gives the 95% confidence intervals for their mean and standard deviation. The third table gives the result of our \(t\)-test. We find that when evaluated against a \(t\)-distribution with 99 degrees of freedom (listed in the `DF` column), we get a \(p\)-value of 0.123. This is greater than 0.05, so we fail to reject the null hypothesis of no difference.
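Under the hood, the paired test is simply a one-sample \(t\)-test on the per-subject differences, which is why the df is \(n-1=99\) here. A sketch in Python with hypothetical before/after scores for five subjects:

```python
import math

def one_sample_t(xs, h0=0.0):
    """One-sample t statistic for the mean of xs against h0."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return (mean - h0) / (sd / math.sqrt(n))

# Hypothetical scores for five subjects at two time points.
time1 = [100, 96, 104, 99, 101]
time2 = [103, 95, 108, 101, 100]

# The paired test is a one-sample test on the differences, with df = n - 1
# (4 here; 99 in the tutorial's 100-subject data). The sign follows SAS's
# Time_1 - Time_2 convention from the paired statement.
diffs = [a - b for a, b in zip(time1, time2)]
print(round(one_sample_t(diffs), 3))
```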

We also get the following four figures:

The first is a histogram with a normal and kernel density curve, as well as a box plot of the difference between time 1 and time 2. These figures allow us to assess the normality assumption, which says that the distribution of the within-subject differences should be normal. Comparing the normal density reference curve to the density curve from our data, we see that the observations are not exactly normal but are relatively close. There are a few small outliers, but not enough to concern us that they are affecting the results.

The next figure shows how each observation changes from time 1 to time 2.

Lastly, the Q-Q plot provides another check of the normality assumption of the difference between groups. If the observations were perfectly normal the dots would fall perfectly along the diagonal line. While not exactly normal, our observations are sufficiently close for us to accept the results.