How to Test Goodness of Fit on TI-89
Copyright © 2001–2024 by Stan Brown, BrownMath.com
In a goodness-of-fit or GOF test (also known as a multinomial experiment), you have three or more possible responses, and you check whether the observed counts in your sample are consistent with the expected counts computed from the proportions in your model.
To compute the test statistic χ², your textbook sets up some columns and goes through a series of calculations that involve lots of writing and copying figures. This page shows you how to do all these calculations in statistics lists on your calculator, which is easier and more accurate.
Use this procedure: enter the model numbers in a list m and the observed values in a list b. Compute the expected values with m×sum(b)/sum(m)→e, and check that they meet the requirements. Then run the χ² goodness-of-fit test: observed is b, expected is e, and degrees of freedom is (number of cells)−1.

See also: A separate TI-83/84 procedure is also available; see MATH200A Program part 6. The test can also be done with native TI-83/84 commands, though it’s harder; see How to Test Goodness of Fit on TI-83/84.
                       Model ratio   Observed
Green-eyed winged           9           120
Green-eyed wingless         3            49
Red-eyed winged             3            36
Red-eyed wingless           1            12
Total                                   217
An example in Dabes & Janik [full citation at https://BrownMath.com/swt/sources.htm#so_Dabes1999] had to do with the offspring of hybrid fruit flies; see the figures at right. The null hypothesis H_{0} is that the 9:3:3:1 model is good, and the alternative H_{1} is that the model is bad. To compute the p-value, as always, you assume the model is good (assume H_{0} is true) and then compute the probability of getting the sample you got, or a sample even further from the model. Use α = 0.05.
The test statistic χ² is a measure of how far the observations differ from the model. You’ve already learned to compute it by hand. Now you’ll learn the TI-89 procedure by working the same example. At each stage you can compare the TI-89 numbers with the ones you did by hand, so that you can be confident you’re doing everything right.
On the home screen, create a list called m
(for model). 
Press [2nd ( makes { ]. Then enter
the model numbers 9, 3, 3, 1. Press [2nd ) makes } ],
then [STO→ ] [ALPHA 5 makes M ] [ENTER ].
(Any previous contents of the list are automatically erased.) 
Create a second list called b (for observed). 
[2nd ( makes { ], then the observed numbers 120, 49, 36,
12, then [2nd ) makes } ] [STO→ ] [ALPHA ( makes B ] [ENTER ].

Now you need to compute the expected numbers. This will be the
total of observed numbers from list b
, redistributed in the
proportions of the model in list m
.
To find each Expected number, multiply each number in the model by the
fraction ∑observed / ∑model.
The formula m×sum(b)/sum(m) creates the list of expected values. 
[ALPHA 5 makes M ] [× ].
To select “sum”, press [ CATALOG ] [T ]. Cursor up to
“sum” (the one without the ∑ sign) and press
[ENTER ]. Press [ALPHA ( makes B ] [) ] [÷ ].
Select “sum” again: press [ CATALOG ], cursor if
necessary, and press [ENTER ].
Finish with [ALPHA 5 makes M ] [) ] [STO→ ] [ALPHA ÷ makes E ]
[ENTER ]. 
Here’s the output I got, with my calculator in Auto mode. If yours is in Approximate mode, you’ll see decimals here instead; either way is fine. 
Check that the expected values meet the requirements: none are <1, and no more than 20% of them are <5. Here, the expected values are all 217/16≈13.6 or greater, so the requirements for a χ² test are met.
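The expected-value step and the requirements check can be mirrored outside the calculator. Here’s a short Python sketch of the same arithmetic (Python and these variable names are my own cross-check, not part of the TI-89 procedure; the list names m, b, e mirror the calculator lists above):

```python
# Cross-check of the expected values (a sketch of my own).
m = [9, 3, 3, 1]        # model ratios
b = [120, 49, 36, 12]   # observed counts

# Each expected count = model ratio * sum(observed)/sum(model),
# i.e. the TI-89 formula m*sum(b)/sum(m) -> e.
e = [mi * sum(b) / sum(m) for mi in m]
print(e)  # → [122.0625, 40.6875, 40.6875, 13.5625]

# Requirements check: no expected count below 1, and no more
# than 20% of the expected counts below 5.
assert min(e) >= 1
assert sum(1 for x in e if x < 5) <= 0.2 * len(e)
```

The smallest expected count, 13.5625, matches the 217/16 figure quoted above.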
Now you’re ready to compute the test statistic and the p-value.
Get to the Stats/List Editor application.  Press [◆ ] [APPS ]. Cursor to
“Stats/List Editor” if necessary. Press [ENTER ]. If
necessary, select the folder “main”. 
Select the χ² goodness-of-fit test.  [2nd F1 makes F6 ] [7 ] selects Chi2 GOF.
The observed list is [ ALPHA ( makes B ]. The expected
list is [ALPHA ÷ makes E ]. Degrees of freedom is 3, which is 1 less
than the number of categories. Select “draw” or “calculate”.

The test statistic is χ² = 2.4531 and the p-value is 0.4838. Since p > α, you fail to reject H_{0} and you can’t reach a conclusion about the model. (Some researchers will say “the model is not inconsistent with the data”.)
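If you want to verify those numbers off the calculator, here’s a Python sketch of the same computation (my own cross-check; the closed-form survival function used for the p-value is specific to df = 3 and is not part of the TI-89 procedure):

```python
import math

# Cross-check of the fruit-fly example (a sketch of my own).
model    = [9, 3, 3, 1]
observed = [120, 49, 36, 12]
expected = [m * sum(observed) / sum(model) for m in model]

# chi-square statistic: sum of (observed - expected)^2 / expected
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 4))  # → 2.4531

# For df = 3 the chi-square survival function has a closed form:
#   P(X > x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2)
p_value = (math.erfc(math.sqrt(chi_sq / 2))
           + math.sqrt(2 * chi_sq / math.pi) * math.exp(-chi_sq / 2))
print(round(p_value, 4))  # → 0.4838
```

Both values agree with the calculator output to four decimal places.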
Sometimes you want to know whether unequal frequencies are in fact significantly unequal. In that case your model is a series of 1’s, indicating equal ratios. Here’s an example adapted from Johnson & Kuby 2003 [full citation at https://BrownMath.com/swt/sources.htm#so_Johnson2003] page 463.
Suppose 119 college students registered for seven sections of a course in these numbers: 18, 12, 25, 23, 8, 19, 14. At the 0.05 level, do the data indicate that the students had a preference for certain sections, or was each section equally likely to be chosen?
H_{0} is that each section was equally likely to be chosen, and H_{1} is that students had a preference. Your model for H_{0} is equal ratios of 1:1:1:1:1:1:1 (one 1 for each of the seven categories). Enter this model in list m, enter the observed numbers in list b, and proceed as above.
You should find χ² = 12.9412. Since there are seven categories, df = 6 and you compute p = 0.0440. This is less than α and you conclude that there was a preference shown.
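This example can be cross-checked the same way (again a sketch of my own, not part of the TI-89 procedure; the finite-sum survival function below applies only to even degrees of freedom):

```python
import math

# Cross-check of the equal-ratios example (a sketch of my own).
observed = [18, 12, 25, 23, 8, 19, 14]
k = len(observed)                   # 7 sections
expected = [sum(observed) / k] * k  # 119/7 = 17 students per section

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 4))  # → 12.9412

# For even df the chi-square survival function is a finite sum:
#   P(X > x) = exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!
df = k - 1          # 6 degrees of freedom
half = chi_sq / 2
p_value = math.exp(-half) * sum(half ** i / math.factorial(i)
                                for i in range(df // 2))
print(round(p_value, 4))  # → 0.044 (i.e., 0.0440)
```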
You can always draw a χ² distribution with the appropriate part shaded and the p-value displayed. Press [F5 ] [1 ] [3 ], then enter:

Lower Value = the χ² statistic you computed
Upper Value = ∞
Degrees of Freedom = one less than the number of categories
Autoscale = Yes
Updates and new info: https://BrownMath.com/ti83/