In order to do this, we need to know the distribution associated with H_{0}, because we use that distribution as the basis for our probability calculation.
Use when we have acquired some dataset and want to ask questions concerning the probability of certain specific data values (e.g., do certain values seem extreme?)
In this case, the distribution associated with H_{0} is described by X̄ and S^{2} because the data points reflect a continuous variable that is normally distributed.
Use when we know the probability that some two-alternative event will occur (assuming H_{0}), and want to ask whether some specific observed outcome seems bizarre, given this probability
In this case, the distribution associated with H_{0} can be derived using the binomial expansion because the data reflect a discrete two-alternative variable.
The chi-square test is a general-purpose test for use with discrete variables
It has a number of uses, including the detection of bizarre outcomes given some a priori probability, both for binomial and for multinomial situations
In addition, it allows us to go beyond questions of bizarreness and move into the question of whether pairs of variables are related (e.g., are gender and opinion concerning the legalization of marijuana related?)
It does so by mapping the discrete variables onto a continuous distribution assuming H_{0}: the chi-square distribution
Let's reconsider a simple binomial problem. Say we have a batter who hits .300 [i.e., P(Hit) = 0.30], and we want to know whether it is abnormal for him to go 6 for 10 (i.e., 6 hits in 10 at bats)
Hopefully, you know how to do this using a binomial test
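As a refresher, the binomial test asks for the probability of an outcome at least as extreme as the one observed. A minimal sketch in Python (the function name is my own):

```python
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    successes in n trials when each succeeds with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A .300 hitter going 6 (or more) for 10: P(X >= 6 | n=10, p=0.30)
p_value = binomial_tail(10, 6, 0.30)
print(round(p_value, 4))  # 0.0473
```

Since 0.0473 is below the usual α of .05, the binomial test already hints that 6 for 10 is a bizarre outcome for a .300 hitter.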
A different way is to put the values into a contingency table as follows,
              Hit    No Hit    Total
  Observed     6        4        10
  Expected     3        7        10
then consider the distribution of the following formula given H_{0}:
χ^{2} = Σ [(O − E)^{2} / E]

where O is the observed frequency and E is the expected frequency (i.e., Np) in each cell
Note that while the observed values are discrete, the derived score is continuous.
If we calculated enough of these derived scores, we could plot a frequency distribution which would be a chi-square distribution with 1 degree of freedom, or χ^{2}(1).
Given this distribution and appropriate tables, we can then find the probability associated with any particular value.
Continuing the Baseball Example:
Expected frequencies given H_{0}: E(Hit) = Np = 10(0.30) = 3 and E(No Hit) = 10(0.70) = 7

χ^{2} = (6 − 3)^{2}/3 + (4 − 7)^{2}/7 = 9/3 + 9/7 = 3.00 + 1.29 = 4.29
So, if the probability of obtaining a χ^{2} of 4.29 or greater is less than α, then the observed outcome can be considered bizarre (i.e., the result of something other than a .300 hitter getting lucky).
There is one hitch to using the chi-square distribution when testing hypotheses ... the chi-square distribution is different for different numbers of degrees of freedom (df)
This means that in order to provide the areas associated with all values of χ^{2} for some number of df, we would need a complete table like the z-table for each level of df
Instead of doing that, the table only shows critical values as Steve will now illustrate using the funky new overhead thingy
Our example question has 1 df. Assuming we are using an α level of .05, the critical value for rejecting the null is 3.84
Thus, since our obtained value of 4.29 is greater than 3.84, we can reject H_{0} and assume that hitting 6 of 10 reflects more than just chance performance
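The arithmetic above can be sketched in a few lines of Python (a minimal illustration; the function name is my own):

```python
def chi_square(observed, expected):
    """Sum of (O - E)^2 / E over all cells of the table."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 6 hits and 4 non-hits observed; 3 and 7 expected for a .300 hitter
chi2 = chi_square([6, 4], [3, 7])
print(round(chi2, 2))  # 4.29
print(chi2 > 3.84)     # True: reject H0 at alpha = .05 with 1 df
```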
Suppose we complicate the previous example by taking walks and hit by pitches into account. That is, suppose the average batter gets a hit with a probability of 0.28, gets walked with a probability of .08, gets hit by a pitch (HBP) with a probability of .02, and gets out the rest of the time
Now we ask, can you reject H_{0} (that this batter is typical of the average batter) given the following outcomes from 50 at bats?
[Observed outcomes table for 50 at bats]

Expected values under H_{0} (Np): Hits = 50(.28) = 14, Walks = 50(.08) = 4, HBP = 50(.02) = 1, Outs = 50(.62) = 31
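The multinomial version works the same way, just with more categories. A sketch with hypothetical observed counts (the original table is not reproduced here, so the observed numbers below are made up for illustration):

```python
def chi_square(observed, expected):
    """Sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n = 50
probs = [0.28, 0.08, 0.02, 0.62]       # P(Hit), P(Walk), P(HBP), P(Out)
expected = [n * p for p in probs]      # 14, 4, 1, 31
observed = [20, 5, 2, 23]              # hypothetical counts, for illustration

chi2 = chi_square(observed, expected)
print(round(chi2, 2))  # 5.89; with df = 4 - 1 = 3, critical value = 7.81
```

With these made-up counts, χ² = 5.89 falls short of the critical value of 7.81 for 3 df, so H_{0} would not be rejected.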
So far, all the tests have been to assess whether some observation or set of observations seems out of line with some expected distribution
However, the logic of the chi-square test can be extended to examine the issue of whether two variables are independent (i.e., not systematically related) or dependent (i.e., systematically related)
Consider the following data set again:
[2 × 2 contingency table: gender (male/female) × opinion on the legalization of marijuana, with marginal totals]
Are the variables of gender and opinion concerning the legalization of marijuana independent?
From the marginal totals we can calculate the probability of falling in each row and in each column
If these two variables are independent, then by the multiplicative law, we expect that the probability of any cell is the product of its row and column probabilities; the expected frequency for a cell is therefore E = (row total × column total) / N
If we do this for all four cells, we get an expected frequency to compare against each observed frequency
Are the observed values different enough from the expected values to reject the notion that the differences are due to chance variation?
The df associated with two-variable contingency tables can be calculated using the formula:
df = (C − 1)(R − 1)
where C is the number of columns and R is the number of rows
This gives the seemingly odd result that a 2×2 table has 1 df, just like the simple binomial version of the chi-square test
However, as Steve will now show, this actually makes sense
Thus, to finish our previous example, the critical χ^{2} with α = .05 and 1 df equals 3.84. Since our obtained χ^{2} is bigger than that (i.e., 6.04), we can reject H_{0} and conclude that opinions concerning the legalization of marijuana appear different across the males and females of our sample
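The expected-frequency logic generalizes to any contingency table. A sketch in Python; the counts below are hypothetical stand-ins (the original table is not reproduced above), so the resulting χ² differs from the 6.04 in the example:

```python
def independence_chi_square(table):
    """Chi-square test of independence for a 2D contingency table.
    Expected frequency per cell = (row total * column total) / N."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical counts: rows = male/female, columns = for/against
table = [[35, 15],
         [20, 30]]
chi2 = independence_chi_square(table)
print(round(chi2, 2))  # 9.09; df = (2-1)(2-1) = 1, critical value 3.84
```

Since 9.09 > 3.84, these hypothetical data would also lead us to reject H_{0} at α = .05.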
Independence of observations
Chi-square analyses are only valid when the actual observations within the cells are independent
This independence of observations is different from the issue of whether the variables are independent, which is what the chi-square is testing
You know your observations are not independent when the grand total is larger than the number of subjects
Example: The activity level of 5 rats was tested over 4 days, with each rat contributing one value per day
Here the grand total is 5 × 4 = 20 observations, but there are only 5 subjects, so the observations are not independent and a chi-square analysis would not be valid
Normality
Use of the chi-square distribution for finding critical values assumes that the expected values (i.e., Np) are normally distributed
This assumption breaks down when the expected values are small (specifically, the distribution of Np becomes more and more positively skewed as Np gets small)
Thus, one should be cautious using the chi-square test when the expected values are small
How small? This is debatable, but if expected values are as low as 5, you should be worried
Inclusion of Non-Occurrences
The chi-square test assumes that all outcomes (occurrences and non-occurrences) are considered in the contingency table
As an example of a failure to include a nonoccurrence, see page 142 of the text
We only reject H_{0} when values of χ^{2} are larger than the critical value
This suggests that the test is always one-tailed and, in terms of the rejection region, it is
In a different sense, however, the test is actually multiple-tailed
Reconsider the following "marking scheme" example:
[Marking scheme data table]
If we do not specify how we expect the results to fall out, then any outcome with a high enough χ^{2} can be used to reject H_{0}
However, if we specify our outcome in advance, we are allowed to increase our α, as in the example where we can increase α to 0.30 if we specified the exact ordering (in advance) that was observed
Measures of Association
The chi-square test only tells us whether two variables are independent; it does not say anything about the magnitude of the dependency if one is found to exist
Stealing from the book, consider the following two cases, both of which produce a significant χ^{2}, but which imply different strengths of relation
Smoking Behaviour
[Smoking behaviour contingency table]
Primary Food Shopper
           Yes    No
  Male     400    100
  Female   100    400
There are a number of ways to quantify the strength of a relation (see sections in the text on the contingency coefficient, Phi, & Odds Ratios), but the two most relevant to psychologists are Cramér's Phi and Kappa
Cramér's Phi can be used with any contingency table and is calculated as: φ_{c} = sqrt[χ^{2} / (N(k − 1))], where k is the smaller of the number of rows and columns
Values of φ_{c} range from 0 to 1. The φ_{c} values for the tables on the previous page are 0.12 and 0.60 respectively, indicating a much stronger relation in the second example
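For the Primary Food Shopper table above, Cramér's Phi can be computed directly from the χ² and the table size. A minimal sketch (function names are my own):

```python
from math import sqrt

def independence_chi_square(table):
    """Chi-square of independence; expected = (row total * col total) / N."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    return sum((obs - row_totals[i] * col_totals[j] / n) ** 2
               / (row_totals[i] * col_totals[j] / n)
               for i, row in enumerate(table) for j, obs in enumerate(row))

def cramers_phi(table):
    """phi_c = sqrt(chi2 / (N * (k - 1))), with k = min(rows, columns)."""
    chi2 = independence_chi_square(table)
    n = sum(sum(row) for row in table)
    k = min(len(table), len(table[0]))
    return sqrt(chi2 / (n * (k - 1)))

# Primary Food Shopper table: rows = male/female, columns = yes/no
print(round(cramers_phi([[400, 100], [100, 400]]), 2))  # 0.6
```

This reproduces the 0.60 cited above: all four expected frequencies are 250, so χ² = 4(150²/250) = 360 and φ_{c} = sqrt(360/1000) = 0.6.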
Often, in psychology, we will ask some "judge" to categorize things into specific categories
For example, imagine a beer brewing competition where we asked a judge to categorize beers as Yucky, OK, or Yummy
Obviously, we are eventually interested in knowing something about the beers after they are categorized
However, one issue that arises is the judges' ability to tell the difference between the beers
One way around this is to get two judges and show that a given beer is reliably rated across the judges (i.e., that both judges tend to categorize things in a similar way)
Such a finding would suggest that the judges are sensitive to some underlying quality of the beers as opposed to just guessing
[3 × 3 agreement table: Judge 1's ratings × Judge 2's ratings (Yucky/OK/Yummy) for 30 beers]
Note that if you just looked at the proportion of decisions that Judge 2 and I agreed on, it looks like we are doing OK:
P(Agree) = 21/30 = 0.70 or 70%
There is a problem here, however, because both judges are biased to judge a beer as OK. Even if they were just guessing, the agreement would seem high, because both would guess OK on a lot of trials and would therefore agree a lot
Solution:
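The standard fix here is Cohen's Kappa, which corrects the observed agreement for the agreement expected by chance alone (computed from each judge's marginal rating proportions). A sketch, with hypothetical marginals since the full table is not reproduced above (only the 0.70 agreement comes from the example):

```python
def cohens_kappa(p_agree, marginals_1, marginals_2):
    """kappa = (P_observed - P_chance) / (1 - P_chance), where P_chance is
    the agreement two independent judges would show given their marginals."""
    p_chance = sum(p1 * p2 for p1, p2 in zip(marginals_1, marginals_2))
    return (p_agree - p_chance) / (1 - p_chance)

# Observed agreement from the example: 21/30 = 0.70
# Hypothetical marginal proportions over (Yucky, OK, Yummy), biased toward OK
judge1 = [0.2, 0.6, 0.2]
judge2 = [0.1, 0.7, 0.2]
print(round(cohens_kappa(0.70, judge1, judge2), 2))  # 0.42
```

With these made-up marginals, chance agreement alone is 0.48, so the impressive-looking 70% raw agreement shrinks to a kappa of about 0.42.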