# Chapter 7

## Hypothesis Testing Applied to Means

### Typical Questions:

Q1: Is some sample mean significantly different from what would be expected given some population distribution?

Example: say that I sampled 25 undergraduate students at Scarborough and measured their IQ, finding a mean of 115. Does this mean seem especially large given that the population has an average IQ of 100?

a) population variance known
b) population variance unknown

Q2: Is the mean of one group significantly different from the mean of some other group?

Example: recall the "paper airplane" memory experiment where I asked people within three groups to estimate the speed of a car involved in an accident and, across groups, I varied the adjective used to describe the collision (smashed vs. ran into vs. contacted). Did my manipulation affect speed estimates? That is, are the mean speed estimates of the various groups different?

a) matched groups
b) independent groups

Note: these tests are used in conjunction with continuous (i.e., measurement) data, not categorical data.

Q1: Is some sample mean significantly different from what would be expected given some population distribution?

On the face of it, this question should remind you of your previous fun with z-scores

In the case of z-scores, we asked whether some observation was significantly different from some sample mean

In the case of the question above, we are asking whether some sample mean is significantly different from some population mean

Despite this apparent similarity, the questions are different because the sampling distribution of the mean (the t distribution) is different from the sampling distribution of observations (the z distribution).

### The Central Limit Theorem

In order to understand the distinction between the z and t-tests, we need to understand the Central Limit Theorem ...

CLT: Given a population with mean μ and variance σ², the sampling distribution of the mean (the distribution of sample means) will have a mean equal to μ (i.e., the mean of the sample means equals μ), a variance equal to σ²/N, and a standard deviation equal to σ/√N. The distribution will approach the normal distribution as N, the sample size, increases.

Steve will now use a sexy computer demo to illustrate this theorem and its relevance to asking questions about sample means
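In place of the classroom demo, the theorem is easy to verify by simulation. A minimal sketch in Python with NumPy (the choice of population, sample size, and number of repetitions is arbitrary, not from the actual demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Population: exponential (decidedly non-normal), with mean 1 and variance 1
N = 30          # sample size
reps = 100_000  # number of samples drawn

# Draw many samples and record each sample's mean
sample_means = rng.exponential(scale=1.0, size=(reps, N)).mean(axis=1)

# CLT predictions: mean of the sample means = mu, variance = sigma^2 / N
print(sample_means.mean())  # close to 1.0
print(sample_means.var())   # close to 1/30, about 0.033
```

A histogram of `sample_means` would also look close to normal, even though the population itself is heavily skewed.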

### Testing Hypotheses about Single Means when the Population Variance is Known

Although it is seldom the case, sometimes we know the variance (as well as the mean) of the population distribution of interest

In such cases, we can do a revised version of the z-test that takes into account the CLT

Specifically:

z = (x̄ - μ) / (σ/√N)

With this formula, we can answer questions like the following:

Example: say that I sampled 25 undergraduate students at Scarborough and measured their IQ, finding a mean of 105. Is this mean significantly different from the population which has a mean IQ of 100 and a standard deviation of 10?
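Plugging the example's numbers into the formula, a quick Python check of the arithmetic:

```python
import math

x_bar, mu, sigma, N = 105, 100, 10, 25

# Standard error of the mean: sigma / sqrt(N)
sem = sigma / math.sqrt(N)   # 10/5 = 2.0
z = (x_bar - mu) / sem       # 5/2 = 2.5

print(z)  # 2.5 -- exceeds the two-tailed .05 cutoff of 1.96, so reject H0
```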

### Testing Hypotheses about Single Means when the Population Variance is Unknown

Unfortunately, it is very rare that we know the population standard deviation

Instead we must use the sample standard deviation, s, to estimate σ

However, there is a hitch to this. While s² is an unbiased estimator of σ² (i.e., the mean of the sampling distribution of s² equals σ²), the sampling distribution of s² is positively skewed

[Figure: the positively skewed sampling distribution of the sample variance, s²]

This means that any individual s² chosen from the sampling distribution of s² will tend to underestimate σ²

Thus, if we used the formula that we used when σ was known, we would tend to get z values that were larger than they should be, leading to too many significant results

The solution? Use the same formula (modified to use s instead of σ), find its distribution under H0, then use that distribution for doing hypothesis testing

The result:

t = (x̄ - μ) / (s/√N)

When a t-value is calculated in this manner, it is evaluated using the t-table (p. 648 of the text) and the row for N-1 degrees of freedom

So, with all this in hand, we can now answer questions of the following type ...

Example:

Let's say that the average human who has reached maturity is 68" tall. I'm curious whether the average height of our class differs from this population mean. So, I measure the height of the 100 people who come to class one day, and get a mean 70" and a standard deviation of 5". What can I conclude?

Computing the t: t = (70 - 68) / (5/√100) = 2 / 0.5 = 4.0

If we look at the t-table, we find that the critical t-value for alpha = .05 and 99 (N-1) degrees of freedom is 1.984

Since the tobt > tcrit, we reject H0
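As a check on the arithmetic, the same calculation in Python using the numbers from the example:

```python
import math

x_bar, mu, s, N = 70, 68, 5, 100

# One-sample t: sample mean vs. population mean, population variance unknown
t = (x_bar - mu) / (s / math.sqrt(N))
df = N - 1

print(t, df)  # 4.0 with 99 df -- well beyond the critical value of 1.984
```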

### Testing Hypotheses Concerning Pairs of Means: Matched Samples

In many studies, we test the same subject on multiple sessions or in different test conditions

=> sexist profs example

We then wish to compare the means across these sessions or test conditions

This type of situation is referred to as a paired or matched samples (or within subjects) design, and it must be used any time the different data points cannot be assumed to be independent

As you are about to see, the t-test used in this situation is basically identical to the t-test discussed in the previous section, once the data has been transformed to provide difference scores

### Difference Scores

Assume we have some measure of rudeness, and we then measure 10 profs' rudeness twice: once when the offending TA is male, and once when the TA is female

|         | Female TA | Male TA | Difference |
|---------|-----------|---------|------------|
| Prof 1  | 15        | 10      | 5          |
| Prof 2  | 22        | 20      | 2          |
| Prof 3  | 18        | 19      | -1         |
| Prof 4  | 5         | 4       | 1          |
| Prof 5  | 40        | 33      | 7          |
| Prof 6  | 20        | 20      | 0          |
| Prof 7  | 14        | 16      | -2         |
| Prof 8  | 10        | 14      | -4         |
| Prof 9  | 22        | 10      | 12         |
| Prof 10 | 18        | 13      | 5          |
| Mean    | 18.4      | 15.9    | 2.5        |
| SD      |           |         | 4.67       |

Question becomes, is the average difference score significantly different from 0?

So, when we do the math:

t = D̄ / (sD/√N) = 2.5 / (4.67/√10) = 1.69

The critical t with alpha = .05 (two-tailed) and 9 (N-1) degrees of freedom is 2.262

Since tobt is not greater than tcrit, we cannot reject H0

Thus, we have no evidence that the profs' rudeness differs across TAs of different genders
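The same matched-samples test in Python, working from the raw scores in the table. (The SD of the differences is computed from the raw data here, so it may differ slightly in the last decimal from the value printed above; the conclusion is the same either way.)

```python
import math
from statistics import mean, stdev

female = [15, 22, 18, 5, 40, 20, 14, 10, 22, 18]
male   = [10, 20, 19, 4, 33, 20, 16, 14, 10, 13]

# Step 1: reduce the paired data to difference scores
d = [f - m for f, m in zip(female, male)]

# Step 2: one-sample t-test of the mean difference against 0
d_bar = mean(d)                            # 2.5
s_d = stdev(d)                             # sample SD of the differences
t = d_bar / (s_d / math.sqrt(len(d)))

print(t)  # below the critical value of 2.262, so H0 is not rejected
```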

### Testing Hypotheses Concerning Pairs of Means: Independent Samples

Another common situation is one where we have two or more groups composed of independent observations

That is, each subject is in only one group and there is no reason to believe that knowing about one subject's performance in one of the groups would tell you anything about another subject's performance in one of the other groups

In this situation we are said to have independent samples or, as it is sometimes called, a between subjects design

Example: study to examine external biases of memory (i.e., when I threw all the paper airplanes around).

### Data from our "Memory Bias" experiment

Given Steve's accident story, about how fast (in km/h) do you think the grey car was going when it ________ the side of the red car?

| Group             | Mean  | SD     |
|-------------------|-------|--------|
| Smashed into      | 74.0  | 9.06   |
| Ran into          | 67.20 | 10.555 |
| Made Contact with | 66.30 | 11.22  |

There are, in fact, three different t-tests we can perform in this situation, comparing groups 1 & 2, 1 & 3, or 2 & 3.

For demonstration purposes, let's only worry about groups 1 & 2 for now.

So, we could ask, do subjects in Group 1 give different estimates of the grey car's speed than subjects in Group 2?

### The Variance Sum Law

When testing a difference between two independent means, we must once again think about the sampling distribution associated with H0

If we assume the means come from separate populations, we could simultaneously draw samples from each population and calculate the mean of each sample.

If we repeat this process a number of times, we could generate sampling distributions of the mean of each population, and a sampling distribution of the difference of the two means.

If we actually did this, we would find that the sampling distribution of the difference would have a variance equal to the sum of the two population variances.
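This, too, is easy to confirm by simulation. A small sketch in Python with NumPy (the population parameters and sample sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two populations: variances 4 and 9; sample size 25 from each
n1 = n2 = 25
reps = 200_000

# Repeatedly sample from each population and take each sample's mean
m1 = rng.normal(0, 2, size=(reps, n1)).mean(axis=1)  # var of m1: 4/25
m2 = rng.normal(0, 3, size=(reps, n2)).mean(axis=1)  # var of m2: 9/25

# Variance sum law: var(m1 - m2) = 4/25 + 9/25 = 0.52
diff = m1 - m2
print(diff.var())  # close to 0.52
```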

Now recall that when we performed a t-test in the situation where the population standard deviation was unknown, we used the formula:

t = (x̄ - μ) / (s/√N)

Given all of the above, we can now alter this formula in a way that will allow us to use it in the independent means example

Specifically, instead of comparing a single sample mean with some population mean, we want to see if the difference between two sample means equals zero

Thus the numerator (top part) will change to:

x̄1 - x̄2

and, because the variance associated with the difference between two means is the sum of the variance of each mean (by the variance sum law), the denominator of the formula changes to:

√(s1²/N1 + s2²/N2)

Thus, the basic formula for calculating a t-test for independent samples is:

t = (x̄1 - x̄2) / √(s1²/N1 + s2²/N2)

### Pooling Variances & Unequal Ns

The previous formula is fine when sample sizes are equal.

However, when sample sizes are unequal, it treats both of the s² as equally good estimates of the population variance, regardless of how many observations each is based on.

Instead, it would be better to combine the s² in a way that weights them according to their respective sample sizes. This is done using the following pooled variance estimate:

sp² = [(N1 - 1)s1² + (N2 - 1)s2²] / (N1 + N2 - 2)

Given this, the new formula for calculating an independent groups t-test is:

t = (x̄1 - x̄2) / √(sp²/N1 + sp²/N2)
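As an illustration, here is the pooled-variance t computed in Python for groups 1 & 2 of the memory-bias data. Note that the group sizes used here are an assumption, since the notes don't state how many subjects were in each group:

```python
import math

# Group statistics from the memory-bias table;
# N per group is assumed to be 10 (not stated in the notes)
m1, s1, n1 = 74.0, 9.06, 10    # "smashed into"
m2, s2, n2 = 67.2, 10.555, 10  # "ran into"

# Pooled variance: the sample variances weighted by their df
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)

# Independent-samples t and its degrees of freedom
t = (m1 - m2) / math.sqrt(sp2 / n1 + sp2 / n2)
df = n1 + n2 - 2

print(t, df)
```

With equal ns, as here, the pooled formula gives exactly the same t as the separate-variances formula; it only matters when the ns differ.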

### Coupla Notes

Note 1: Using the pooled variances version of the t formula for independent samples is no different from using the separate variances version when sample sizes are equal. It can have a big effect, however, when sample sizes are unequal.

Note 2: As mentioned previously, the degrees of freedom associated with an independent samples t test is N1 + N2 - 2

### Heterogeneity of Variance:

The textbook has a large section on heterogeneity of variance (pp. 185-193), including lots of nasty looking formulae. All I want you to know is the following:

When doing a t-test across two groups, you are assuming that the variances of the two groups are approximately equal.

If the variances look fairly different, there are tests that can be used to see if the difference is so great as to be a problem.

If the variances are different across the groups, there are ways of correcting the t-test to take the heterogeneity into account.

In fact, t-tests are often quite robust to this problem, so you don't have to worry about it too much.

Sometimes, heterogeneity is interesting

You deserve a break today:

## Hypothesis Testing with Means: The Cookbook

One Mean vs. One population mean

Population variance known:

z = (x̄ - μ) / (σ/√N)

evaluate z against the standard normal distribution (the z-table; no degrees of freedom needed)

Population variance unknown:

t = (x̄ - μ) / (s/√N)

df = N-1

Two Means

Matched samples:

first create a difference score, D, for each pair, then ...

t = D̄ / (sD/√N)

df = N-1

Independent samples:

t = (x̄1 - x̄2) / √(sp²/N1 + sp²/N2)

df = N1+N2-2

where:

sp² = [(N1 - 1)s1² + (N2 - 1)s2²] / (N1 + N2 - 2)

Easy as baking a cake, right? Now for some examples of using these recipes to cook up some tasty conclusions ...
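The whole cookbook fits in a few lines of code. A sketch in Python (the function names are my own, not from the text):

```python
import math

def z_single(x_bar, mu, sigma, n):
    """One mean vs. a population mean, population variance known."""
    return (x_bar - mu) / (sigma / math.sqrt(n))

def t_single(x_bar, mu, s, n):
    """One mean vs. a population mean, population variance unknown; df = n - 1."""
    return (x_bar - mu) / (s / math.sqrt(n))

def t_matched(diffs):
    """Matched samples: a one-sample t on the difference scores; df = n - 1."""
    n = len(diffs)
    d_bar = sum(diffs) / n
    s_d = math.sqrt(sum((d - d_bar) ** 2 for d in diffs) / (n - 1))
    return d_bar / (s_d / math.sqrt(n))

def t_independent(m1, s1, n1, m2, s2, n2):
    """Independent samples with pooled variance; df = n1 + n2 - 2."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 / n1 + sp2 / n2)

# Reproduce the chapter's worked examples:
print(z_single(105, 100, 10, 25))  # IQ example: 2.5
print(t_single(70, 68, 5, 100))    # height example: 4.0
```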

Examples

1) The population spends an average of 8 hours per day working, with a standard deviation of 1 hour. A certain researcher believes that profs work fewer hours than average and wants to test whether the average hours per day that profs work is different from the population. This researcher samples 10 professors and asks them how many hours they work per day, leading to the following dataset:

6, 12, 8, 15, 9, 16, 7, 6, 14, 15

Perform the appropriate statistical test and state your conclusions.

2) Now answer the question again except assume the population variance is unknown

3) Does the use of examples improve memory for the concepts being taught? Joe Researcher tested this possibility by teaching 10 subjects 20 concepts each. For each subject, examples were provided to help explain 10 of the new concepts; no examples were provided for the other 10. Joe then tested his subjects' memory for the concepts and recorded how many concepts, out of 10, each subject could remember. Here are his data:

| Subject | No Example | Example |
|---------|------------|---------|
| 1       | 6          | 8       |
| 2       | 8          | 8       |
| 3       | 5          | 6       |
| 4       | 4          | 6       |
| 5       | 7          | 7       |
| 6       | 8          | 7       |
| 7       | 2          | 5       |
| 8       | 5          | 7       |
| 9       | 6          | 7       |
| 10      | 8          | 9       |

4) Circadian rhythms suggest that young adults are at their physical peak in the early afternoon, and are at their physical low point in the early morning. Are cognitive factors affected by these rhythms? To test this question I bring subjects in to run a recognition memory experiment. Half of the subjects are run at 8 am, the other half at 2 pm. I then record their recognition memory accuracy. Here are the results:

| 8 AM | 2 PM |
|------|------|
| .60  | .78  |
| .58  | .85  |
| .68  | .81  |
| .74  | .82  |
| .71  | .76  |
| .62  | .73  |