Updated 19 Nov 2021

MATH200B Program —
Extra Statistics Utilities for TI-83/84

Copyright © 2008–2023 by Stan Brown, BrownMath.com

Summary: This page presents a downloadable TI-83/84 program with easier versions of some calculator procedures plus new capabilities like computing skewness and kurtosis and making statistical inferences about standard deviation, correlation, and regression. See Using the Program below for a full list of features.

Your first course in statistics probably won’t use these features, but they’re offered here for advanced students and those who are studying on their own.


See also: Troubles? See TI-83/84 Troubleshooting.

MATH200B Program Overview

Because this program helps you, please donate at BrownMath.com/donate.

Getting the Program

The program is in two parts, MATH200B and MATH200Z. You need both on your calculator, even though you won’t run MATH200Z directly. It works with all TI-83 Plus calculators and all TI-84 calculators, including the color models.

If you have a “classic” TI-83, not a Plus or Silver, follow the directions below but put M20083B and M20083Z on your calculator, not MATH200B and MATH200Z. (M20083B and M20083Z aren’t being updated after version 4.2, which was released in August 2012, so you will see some differences from the screen shots in this document.)

There are three methods to get the programs into your calculator:

Using the Program

[Screen shots: MATH200B splash screen and menu]

Press the [PRGM] key. If you can see MATH200B in the menu, press its number; otherwise, scroll to it and press [ENTER]. When the program name appears on your home screen, press [ENTER] a second time to run it. Check the splash screen to make sure you have the latest version (v4.4a), then press [ENTER].

The menu at right shows what the program can do:

  1. Skew/kurtosis: compute skewness and kurtosis, which are numerical measures of the shape of a distribution
  2. Time series: plot time-series data
  3. Critical t: find the t value that cuts the distribution with a given probability in the right-hand tail
  4. Critical χ²: find the χ² value that cuts the distribution with a given probability in the right-hand tail
  5. Infer about σ: hypothesis tests and confidence intervals for population standard deviation and variance
  6. Correlatn inf: hypothesis tests and confidence intervals for the linear correlation of a population; the hypothesis test for correlation doubles as a hypothesis test for slope of the regression line
  7. Regression inf: confidence intervals for slope of the regression line, y intercept, and ŷ for a particular x, plus prediction intervals for ŷ for a particular x

If you ever need to break out of the program before finishing the prompts, press [ON] [1].

[Screen shot: MATH200B splash screen on a high-resolution display]

If you run the program on a TI-84 with a higher-resolution screen, some displays will look slightly different, but all keystrokes will be the same.

The program is protected so that you can’t edit it accidentally. If you want to look at the program source code, see MATH200B.PDF and MATH200Z.PDF in the downloadable MATH200B.ZIP file.

Each procedure leaves its results in variables in case you want to use them for further computations. For details, please see the separate document MATH200B Program — Technical Notes.

1. Skewness and Kurtosis

See also: For interpretation of skewness and kurtosis, and technical details of how they are calculated, see Measures of Shape: Skewness and Kurtosis.

If you have a frequency or probability distribution, put the data points or class midpoints (class marks) in one statistics list and the frequencies or probabilities in another. If you have a simple list of numbers, put them in a statistics list.

Then press [PRGM], scroll if necessary and select MATH200B, and in the program menu select 1:Skew/kurtosis. Specify your data arrangement, enter your data list, and if appropriate enter your frequency or probability list. The program will produce a great many statistics.

Here are grouped data for heights of 100 randomly selected male students:

Class boundaries    59.5–62.5  62.5–65.5  65.5–68.5  68.5–71.5  71.5–74.5
Class midpoints, x     61         64         67         70         73
Frequency, f            5         18         42         27          8
Data are adapted from Spiegel 1999 [full citation in “References”, below], page 68.

[Histogram of heights of male students, made with the MATH200A program]

A histogram, prepared with the MATH200A program, shows the data are skewed left, not symmetric. But how highly skewed are they? And how does the central peak compare to the normal distribution in height and sharpness? To answer these questions, you have to compute the skewness and kurtosis.

Enter the x’s in one statistics list and the f’s in another. If you’re not sure how to create statistics lists, please see Sample Statistics on TI-83/84.

Then run the MATH200B program and select 1:Skew/kurtosis. Your data arrangement is 3:Grouped dist. When prompted, enter the list that contains the x’s and then the list that contains the f’s. I’ve used L5 and L6, but you could use any lists.

[Screen shots: statistics lists and program setup for the student heights]

The program gives its results on three screens of data.

[Screen shot, results 1 of 3: n=100, mean M=67.45, std dev 2.9201884, variance V=8.5275]

The first screen shows some basic statistics: the sample size, the mean, the standard deviation, and the variance. As usual, you have to consider whether the data are a sample or the whole population; the program gives you both σ and s, σ² and s².

The program stores key results in variables in case you want to do any further computations with them. See MATH200B Program — Technical Notes for a complete list of variables computed by the program.

[Screen shot, results 2 of 3: 3rd moment −2.69325, skewness S=−.1081544, standard error E=.24137978, statistic S/E=−.45]

The second screen shows results for skewness. The third moment divided by the 1.5 power of the variance is the skewness, which is about −0.11 for this data set. Again, you are given the values to use if this is the whole population and if it is a sample.

If this is the whole population, then you stop with the first skewness figure and can state that the population is negatively skewed (skewed left).

But this is just a sample, so you use the “as sample” figure for your skewness. (This is also the figure that Excel reports.) The sample is negatively skewed (skewed left), but can you say anything about the skew of the population? To answer that question, use the standard error of skewness, which is also shown on the screen. As a rule of thumb, if sample skewness is more than about two standard errors either side of zero, you can say that the population is skewed in that direction. In this example, the standard error of skewness is 0.24, and the statistic of −0.45 tells you that the skewness is only 0.45 standard errors below zero. This is not enough to let you say anything about whether the population is skewed in either direction or symmetric.

[Screen shot, results 3 of 3: 4th moment 199.37593, kurtosis K=2.741759, excess K−3=−.258241, standard error F=.47833113, statistic (K−3)/F=−.54]

The last screen shows results for kurtosis. The fourth moment divided by the square of the variance gives the kurtosis, which is 2.74. Some authors, and Microsoft Excel, prefer to subtract 3 and consider the excess kurtosis: 2.74−3 is −0.26.

A bell curve (normal distribution) has kurtosis of 3 and excess kurtosis of 0. If excess kurtosis is negative, as it is here, then the distribution has a lower peak and higher “shoulders” than a normal distribution, and it is called platykurtic. (An excess kurtosis greater than 0 would mean that the distribution was leptokurtic, with a narrower and higher peak than a bell curve.)

Since this is just a sample, and not the whole population, use the “as sample” excess kurtosis of −0.21. (This is the figure Excel reports.) Can you say anything about the kurtosis of the population from which this sample was taken? Yes, just as you did for skewness. The rule of thumb is that an excess kurtosis of at least two standard errors is significant. For this sample, the standard error of kurtosis is 0.48, and −0.21/0.48 = −0.44, so the excess kurtosis is only 0.44 standard errors below zero. (Or, the kurtosis is only 0.44 standard errors below 3.) Therefore you can’t say whether the population is peaked like a normal distribution, more than normal, or less than normal.
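
If you want to double-check these figures away from the calculator, here is a minimal Python sketch (my own, not the program's TI-BASIC source) that reproduces them from the grouped data, using the standard moment formulas and sample adjustments:

    # Sketch: population skewness/kurtosis for the grouped heights, plus the
    # sample adjustments and standard errors used in the rule-of-thumb tests.
    from math import sqrt

    x = [61, 64, 67, 70, 73]            # class midpoints
    f = [5, 18, 42, 27, 8]              # frequencies
    n = sum(f)                          # 100
    mean = sum(fi * xi for xi, fi in zip(x, f)) / n               # 67.45
    m2 = sum(fi * (xi - mean) ** 2 for xi, fi in zip(x, f)) / n   # variance 8.5275
    m3 = sum(fi * (xi - mean) ** 3 for xi, fi in zip(x, f)) / n   # 3rd moment -2.69325
    m4 = sum(fi * (xi - mean) ** 4 for xi, fi in zip(x, f)) / n   # 4th moment 199.376

    g1 = m3 / m2 ** 1.5                 # population skewness, about -0.108
    g2 = m4 / m2 ** 2 - 3               # population excess kurtosis, about -0.258

    # sample-adjusted values and standard errors
    G1 = g1 * sqrt(n * (n - 1)) / (n - 2)                          # about -0.110
    SES = sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))    # about 0.241
    G2 = ((n + 1) * g2 + 6) * (n - 1) / ((n - 2) * (n - 3))        # about -0.209
    SEK = 2 * SES * sqrt((n * n - 1) / ((n - 3) * (n + 5)))        # about 0.478

    print(g1, G1, G1 / SES)   # skewness, sample skewness, statistic about -0.45
    print(g2, G2, G2 / SEK)   # excess kurtosis, sample excess, statistic about -0.44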

[Screen shot: combined skewness/kurtosis screen on a TI-84 Plus CE]

On high-resolution screens (the TI-84 Plus C and TI-84 Plus CE), there's enough room to show skewness and kurtosis on the same screen, as shown at right.

You can also use this part of the program to compute the shape of a probability distribution. For instance, here’s the probability distribution for the number of spots showing when you throw two dice:

Probability Distribution for Throwing Two Dice
Spots, x             2     3     4     5     6     7     8     9    10    11    12
Probability, P(x)  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

The x’s go in one list and the P’s in another. (Enter the probabilities as fractions, not decimals, to ensure that they add to exactly 1. The calculator displays rounded decimals but keeps full precision internally, and the program will tell you if your probabilities don’t add to 1.) Now run the MATH200B program and select 1:Skew/kurtosis. Your data arrangement is 4:Discrete PD, and you’ll see the following results:

[Screen shots: output screens 1 and 2 for the dice-throwing skewness and kurtosis]

[Histogram of the dice-throwing distribution]

On the first screen, no sample size is shown because a probability distribution is a population.

On the second screen, the skewness is essentially zero. This confirms what you can see in the histogram: the distribution is symmetric. Standard error and test statistic don’t apply because you have a probability distribution (population) rather than a sample.

On the same screen, the kurtosis is 2.37 (not shown for reasons of space), and the excess kurtosis is −0.63; the dice make a platykurtic distribution. Compared to a normal distribution, this distribution of dice throwing has a lower, less distinct peak and shorter tails.
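
If you'd like an independent check, a short Python sketch (again not part of the program) computes the same population skewness and kurtosis directly from the probability distribution:

    # Sketch: shape of the two-dice probability distribution (a population).
    from fractions import Fraction as F

    x = range(2, 13)
    p = [F(k, 36) for k in (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1)]

    mean = sum(pi * xi for xi, pi in zip(x, p))                 # exactly 7
    m2 = sum(pi * (xi - mean) ** 2 for xi, pi in zip(x, p))     # 35/6
    m3 = sum(pi * (xi - mean) ** 3 for xi, pi in zip(x, p))     # 0
    m4 = sum(pi * (xi - mean) ** 4 for xi, pi in zip(x, p))     # 161/2

    skewness = float(m3) / float(m2) ** 1.5    # exactly 0: symmetric
    kurtosis = float(m4) / float(m2) ** 2      # about 2.37
    print(skewness, kurtosis, kurtosis - 3)    # excess kurtosis about -0.63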

[Screen shot: single-screen output from a TI-84 Plus CE]

On high-resolution screens, namely the TI-84 Plus C Silver Edition and TI-84 Plus CE, complete information about a probability distribution fits on one screen.

You may notice that, although the skewness is still essentially zero, it’s a different very small number from the very small number the older TI-84s gave us, on the screen shot above. I can’t account for this in detail, but I think it’s likely that the newer calculator’s chip processes floating point with very slightly different precision than the old one. Don’t obsess about it — for all practical purposes, both numbers are zero.

2. Time Series

Example: Let’s plot the closing prices of Cisco Systems stock over a two-year period. The following table is adapted from Sullivan 2008 [full citation in “References”, below], page 82, which credits NASDAQ as the source.

Month      3/03    4/03    5/03    6/03    7/03    8/03    9/03    10/03
Closing    12.98   15.00   16.41   16.79   19.49   19.14   19.59   20.93

Month      11/03   12/03   1/04    2/04    3/04    4/04    5/04    6/04
Closing    22.70   24.23   25.71   23.16   23.57   20.91   22.37   23.70

Month      7/04    8/04    9/04    10/04   11/04   12/04   1/05    2/05
Closing    20.92   18.76   18.10   19.21   18.75   19.32   18.04   17.42

Enter the closing prices in a statistics list such as L1, ignoring the dates.

Now run the MATH200B program and select 2:Time series. The program prompts you for the data list. (Caution: The program assumes the time intervals are all equal. If they aren’t, the horizontal scale will not be uniform and the graph will not be correct.) The program also asks whether you want to force the x axis (y = 0) into the graph.

Below you see the effect of answering “yes” at left and the effect of answering “no” at right.

[Screen shots: time series graph for the Cisco data with the x axis forced (left) and not forced (right)]

As you can see, the graph that doesn’t include the zero looks a lot more dramatic, with bigger changes. But that can be deceptive. A more accurate picture is shown in the first graph, the one that does include the x axis.

If you wish, you can press the [TRACE] key and display the closing prices, scrolling back and forth with the left and right arrow keys. If you want to jump to a particular month, say June 2004, the 16th month, type 16 and then press [ENTER].
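
If you're curious how the two views compare off the calculator, here's a rough Python/matplotlib equivalent of the two graphs (the styling choices are mine, not the program's):

    # Sketch: the Cisco closing prices two ways, with and without y = 0 in view.
    import matplotlib.pyplot as plt

    closing = [12.98, 15.00, 16.41, 16.79, 19.49, 19.14, 19.59, 20.93,
               22.70, 24.23, 25.71, 23.16, 23.57, 20.91, 22.37, 23.70,
               20.92, 18.76, 18.10, 19.21, 18.75, 19.32, 18.04, 17.42]
    months = range(1, len(closing) + 1)    # 1 = March 2003 ... 24 = February 2005

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    for ax in (ax1, ax2):
        ax.plot(months, closing, marker='.')
        ax.set_xlabel('month number')
        ax.set_ylabel('closing price ($)')
    ax1.set_ylim(bottom=0)                 # like answering "yes": force the x axis into view
    ax1.set_title('x axis included')
    ax2.set_title('x axis not included')   # like answering "no": y axis starts near the data
    plt.tight_layout()
    plt.show()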

3. Critical t (Inverse t)

The TI-83 doesn’t have an invT function as the TI-84 does, but if you need to find critical t or inverse t on either calculator you can use this part of the MATH200B program.

Caution: Our notation of t(df,rtail) matches most books in specifying the area of the right-hand tail for critical t. But the TI calculator’s built-in menus specify the area of the left-hand tail. Make sure you know whether you expect a positive or negative t value.

Some textbooks interchange the arguments: t(rtail,df). Since degrees of freedom must always be a whole number and the tail area must always be less than 1, you’ll always know which argument is which.

Example: find t(27,0.025), the t statistic with 27 degrees of freedom (sample size 28) for a one-tailed significance test with α = 0.025, a two-tailed test with α = 0.05, or a confidence interval with 1−α = 95%.

[Screen shots: input and output screens for the critical t example]

Solution: run the MATH200B program and select 3:Critical t. When prompted, enter 27 for degrees of freedom and 0.025 for the area of the right-hand tail, as shown in the first screen. After a short pause, the calculator gives you the answer: t(27,0.025) = 2.05.

Interpretation: with a sample of 28 items (df=27), a t score of 2.05 cuts the t distribution with 97.5% of the area to the left and 2.5% to the right.
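
Away from the calculator, any inverse-t routine gives the same number. For example, here's a quick check in Python with scipy (remember to convert the right-tail area to a left-tail area, since ppf, like the TI's invT, works from the left):

    # Sketch: critical t for df = 27 and right-tail area 0.025.
    from scipy.stats import t

    df, rtail = 27, 0.025
    crit = t.ppf(1 - rtail, df)    # convert right-tail area to left-tail area
    print(round(crit, 2))          # 2.05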

4. Critical χ² (Inverse χ²)

[Diagram: χ² distribution with the right-hand tail shaded and the critical value marked with an asterisk]

χ²(df,rtail) is the critical value for the χ² distribution with df degrees of freedom and probability rtail. (In the context of a hypothesis test, rtail is α, the significance level of the test.)

In the illustration, rtail is the area of the right-hand tail, and the asterisk * marks the critical value χ²(df,rtail). The critical value or inverse χ² is the χ² value such that a higher value of χ² has only an rtail probability of occurring by chance.

You can compute critical χ² only for the right-hand tail, because the χ² distribution has no left-hand tail.

Caution: Some textbooks write the function the other way, χ²(rtail,df). Since df is a whole number and rtail is a decimal between 0 and 1, you will be able to adapt.

Example: What is the critical χ² for a 0.05 significance test with 13 degrees of freedom?

[Screen shots: input and output screens for the critical χ² example]

Run the MATH200B program and select 4:Critical χ². Enter the number of degrees of freedom and the area of the right-hand tail. Be patient: the computation is slow. But the program gives you the critical χ² value of 22.36, as shown in the second screen.

Interpretation: For a χ² distribution with 13 degrees of freedom, the value χ² = 22.36 divides the distribution such that the area of the right-hand tail is 0.05.
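
The same cross-check works for critical χ²; for example, in Python with scipy (again converting the right-tail area to a left-tail area):

    # Sketch: critical chi-squared for df = 13 and right-tail area 0.05.
    from scipy.stats import chi2

    df, rtail = 13, 0.05
    crit = chi2.ppf(1 - rtail, df)
    print(round(crit, 2))          # 22.36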

5. Inferences about σ (Standard Deviation of a Population)

Summary: This part performs hypothesis tests and computes confidence intervals for the standard deviation of a population. Since variance is the square of standard deviation, it can also do those calculations for the variance of a population.

Cautions:

The tests on standard deviation or variance of a population require that the underlying population be normal. They are not robust, meaning that even moderate departures from normality can invalidate your analysis. See MATH200A Program part 4 for procedures to test whether a population is normal by testing the sample.

Outliers are also unacceptable and must be ruled out. See MATH200A Program part 2 for an easy way to test for outliers.

See also: Inferences about One-Population Standard Deviation gives the statistical concepts with examples of calculation “by hand” and in an Excel workbook.

You already know how to test the mean of a population with a t test, or estimate a population mean using a t interval. Why would you want to do the same for the standard deviation of a population?

The standard deviation measures variability. In many situations the variability is important, not just the average. Another way to look at it is that consistency is important: the variability must not be too great.

For example, suppose you are thinking about investing in one of two mutual funds. Both show an average annual growth of 3.8% in the past 20 years, but one has a standard deviation of 8.6% and the other has a standard deviation of 1.2%. Obviously you prefer the second one, because with the first one there’s quite a good chance that you’d have to take a loss if you need money suddenly.

Industrial processes, too, are monitored not only for average output but for variability within a specified tolerance. If the diameter of ball bearings produced varies too much, many of them won’t fit in their intended application. On the other hand, it costs more money to reduce variability, so you may want to make sure that the variability is not too low either.

[Screen shot: sub-menu for inferences about σ]

To use the program, first check the requirements for your sample; see Cautions above. Then run the MATH200B program and select 5:Infer about σ. When prompted, enter the standard deviation and size of the sample, pressing [ENTER] after each one. If you know the variance of the sample rather than the standard deviation, use the square root operation, since s is the square root of the variance s² (see example below).

The program then presents you with a five-item menu: confidence interval for the population standard deviation σ, confidence interval for the population variance σ², and three hypothesis tests for σ or σ² less than, different from, or greater than a number. Make your selection by pressing the appropriate number.

Confidence Intervals

[Screen shot: inference about σ showing ‘computing…’ status]

If you select one of the confidence intervals, the program will prompt you for the confidence level and then compute the interval. Because this involves a process of successive approximations, it can take some time, so please be patient.

[Screen shot: confidence interval for σ]

The program displays the endpoints of the interval on screen and also leaves them in variables L and H in case you want to use them in further calculations. You can include them in any formula by pressing [ALPHA ) makes L] and [ALPHA ^ makes H].

By the way, confidence intervals about a population standard deviation are not symmetric around the sample standard deviation. That’s different from the simpler cases of means and proportions. In this example, the 95% interval for σ extends 2.7 units below the sample standard deviation, but 4.3 units above it.

Hypothesis Tests

[Screen shot: inference about σ showing the prompt for the standard deviation in H0]

If you select one of the hypothesis tests, the program will prompt you for σ, the population standard deviation in the null hypothesis. If your H0 is about population variance σ² rather than σ, use the square root symbol to convert the hypothetical variance to standard deviation.

The program then displays the χ² test statistic, the degrees of freedom, and the p-value. These are also left in variables X, D, and P in case you wish to use them in further calculations. You can include them in any formula with [x,T,θ,n], [ALPHA x-1 makes D], and [ALPHA 8 makes P].

Examples

Example 1: A machine packs cereal into boxes, and you don’t want too much variation from box to box. You decide that a standard deviation of no more than five grams (about 1/6 ounce) is acceptable. To determine whether the machine is operating within specification, you randomly select 45 boxes. Here are the weights of the boxes, in grams:

386  388  381  395  392  383  389  383  370
379  382  388  390  386  393  374  381  386
391  384  390  374  386  393  384  381  386
386  374  393  385  388  384  385  388  392
400  377  378  392  380  380  395  393  387

Solution: First, use 1-Var Stats to find the sample standard deviation, which is 6.42 g. Obviously this is greater than the target standard deviation of 5 g, but is it enough greater that you can say the machine is not operating correctly, or could it have come from a population with standard deviation no more than 5 g? Your hypotheses are

H0: σ = 5, the machine is within spec (some books would say H0: σ ≤ 5)

H1: σ > 5, the machine is not working right

No α was specified, but for an industrial process with no possibility of human injury α = 0.05 seems appropriate.

Next, check the requirements: is the sample normally distributed and free of outliers?

[Screen shots: box-whisker plot and normality check for Example 1]

Use MATH200A part 2 to make a box-whisker plot to rule out outliers, and MATH200A part 4 to check normality. The outputs are shown at right. You can see that the sample has no outliers and that it is extremely close to normal, so the requirements are met and you can proceed with the hypothesis test.

[Screen shots: input and results screens for Example 1]

Now run the MATH200B program and select 5:Infer about σ. Enter s:6.42 and n:45, and select 5:Test σ>const. Enter 5 for σ in H0.

The results are shown at far right. The test statistic is χ² = 72.54 with 44 degrees of freedom, and the p-value is 0.0043.

Since p<α, you reject H0 and accept H1. At the 0.05 level of significance, the population standard deviation σ is greater than 5, and the machine is not operating within specification.
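
Behind the scenes this is the usual one-sample χ² test for a standard deviation, χ² = (n−1)s²/σ0² with n−1 degrees of freedom. A quick Python sketch (not the program's code) reproduces the Example 1 numbers:

    # Sketch: chi-squared test of H0: sigma = 5 against H1: sigma > 5.
    from scipy.stats import chi2

    n, s, sigma0 = 45, 6.42, 5
    df = n - 1
    stat = df * s ** 2 / sigma0 ** 2     # (n-1)s^2 / sigma0^2 = 72.54
    p = chi2.sf(stat, df)                # right-tail p-value, about 0.0043
    print(round(stat, 2), round(p, 4))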

Example 2: You have a random sample of size 20, with a standard deviation of 125. You have good reason to believe that the underlying population is normal, and you’ve checked the sample and found no outliers. Is the population standard deviation different from 100, at the 0.05 significance level?

Solution: n = 20, s = 125, σ0 = 100, α = 0.05. Your hypotheses are

H0: σ = 100

H1: σ ≠ 100

[Screen shot: results screen for Example 2]

This time in the INFER ABOUT σ menu you select 4:Test σ≠const.

Results are shown at right. χ² = 29.69 with 19 degrees of freedom, and the p-value is 0.1118.

p>α; fail to reject H0. At the 0.05 significance level, you can’t say whether the population standard deviation σ is different from 100 or not.

Example 3: Of several thousand students who took the same exam, 40 papers were selected randomly and statistics were computed. The standard deviation of the sample was 17 points. Estimate the standard deviation of the population, with 95% confidence. (Recall that test scores are normally distributed.)

[Screen shot: results screen for Example 3]

Solution: Check the data and make sure there are no outliers. Run MATH200B and select 5:Infer about σ in the first menu. Enter s and n, and in the second menu select 1:σ interval with a C-Level of 95 or .95. The results screen is shown at right.

Conclusion: You’re 95% confident that the standard deviation of test scores for all students is between 13.9 and 21.8.

Remark: The center of the confidence interval is about 17.9, which is different from the point estimate s=17. This is a feature of confidence intervals for σ or σ²: they are asymmetric because the χ² distribution used to compute them is asymmetric.
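
The interval comes from the χ² distribution: with df = n−1, the limits are √(df·s²/χ²upper) and √(df·s²/χ²lower), where the two χ² values cut off α/2 in each tail. This Python sketch (the program's own successive-approximation code may differ in detail) reproduces Example 3:

    # Sketch: 95% confidence interval for sigma with n = 40, s = 17.
    from math import sqrt
    from scipy.stats import chi2

    n, s, clevel = 40, 17, 0.95
    df = n - 1
    alpha = 1 - clevel
    lo = sqrt(df * s ** 2 / chi2.ppf(1 - alpha / 2, df))   # about 13.9
    hi = sqrt(df * s ** 2 / chi2.ppf(alpha / 2, df))       # about 21.8
    print(round(lo, 1), round(hi, 1))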

Example 4: Heights of US males aged 18–25 are normally distributed. You take a random sample of 100 from that population and find a mean of 65.3 in and a variance of 7.3 in². (Remember that the units of variance are the square of the units of the original measurement.)

Estimate the mean and variance of the height of US males aged 18–25, with 95% confidence.

[Screen shots: TInterval input and output screens for Example 4]

Solution for mean: Computing a confidence interval for the mean is a straightforward TInterval. Just remember that for Sx the calculator wants the sample standard deviation, but you have the sample variance, which is s². Therefore you take the square root of sample variance to get sample standard deviation, as shown in the input screen at near right.

The output screen at far right shows the confidence interval. You’re 95% confident that the mean height of US males aged 18–25 is between 64.8 and 65.8 in.

[Screen shots: input and output screens for the confidence interval about σ² in Example 4]

Solution for variance: Run the MATH200B program and select 5:Infer about σ. Enter s:√7.3 and n:100. Select 2:σ² interval and enter C-Level:.95 (or 95). The program computes the confidence interval for population variance as 5.6 ≤ σ² ≤ 9.9. Notice that the output screen shows the point estimate for variance, s², and that as expected the confidence interval is not symmetric.

You’re 95% confident that the variance in heights of US males aged 18–25 is between 5.6 and 9.9 in².

Complete answer: You’re 95% confident that the heights of US males aged 18–25 have mean 64.8 to 65.8 in and variance 5.6 to 9.9 in².
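
Here's a Python sketch (mine, not the program's) that reproduces both parts of Example 4 starting from the sample variance:

    # Sketch: Example 4, working from the sample variance.
    from math import sqrt
    from scipy.stats import t, chi2

    n, xbar, var = 100, 65.3, 7.3
    s = sqrt(var)                        # the calculator wants Sx, not Sx squared
    df = n - 1

    # 95% t interval for the mean
    moe = t.ppf(0.975, df) * s / sqrt(n)
    print(round(xbar - moe, 1), round(xbar + moe, 1))   # about 64.8 to 65.8

    # 95% chi-squared interval for the variance
    lo = df * var / chi2.ppf(0.975, df)
    hi = df * var / chi2.ppf(0.025, df)
    print(round(lo, 1), round(hi, 1))                   # about 5.6 to 9.9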

6. Inferences about Linear Correlation

Summary: With linear correlation, you compute a sample correlation coefficient r. But what can you say about the correlation in the population, ρ? The MATH200B program computes a confidence interval about ρ or performs a hypothesis test to tell whether there is correlation in the population.

See also: Inferences about Linear Correlation gives the statistical concepts with examples of calculation “by hand” and in an Excel workbook.

To perform inferences about linear correlation, first load your x’s and y’s in any two statistics lists. Then run the MATH200B program and select 6:Correlatn inf.

Example: The following sample of commuting distances and times for fifteen randomly selected co-workers is adapted from Johnson & Kuby 2004 [full citation at https://BrownMath.com/swt/sources.htm#so_Johnson2004], page 623.

Commuting Distances and Times
Person      1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
Miles, x    3   5   7   8  10  11  12  12  13  15  15  16  18  19  20
Minutes, y  7  20  20  15  25  17  20  35  26  25  35  32  44  37  45

[Screen shot: menu selection for confidence interval or hypothesis test]

The TI’s LinReg(ax+b) command can tell you that the correlation of the sample is 0.88. But what can you infer about ρ, the correlation of the population? You can get a confidence interval estimate for ρ, or you can perform a hypothesis test for ρ≠0.

Requirements

Before you can make any inference (hypothesis test or confidence interval) about correlation or regression in the population, check these requirements:

[Scatterplot of residuals against commute distances]

To make a scatterplot of residuals, perform a regression with LinReg(ax+b) L1,L2 (or whichever lists contain your data). This computes the residuals automatically. You can then plot them by following the procedure in Display the Residuals, part of Linked Variables. As you see from the graph at right, the residuals don’t show any problem features.

[Normal probability plot of residuals]

To check normality of the residuals, run MATH200A part 4 and when prompted for the data list press [2nd STAT makes LIST], scroll to RESID if necessary, and press [ENTER] [ENTER]. The graph at right shows that the residuals are approximately normally distributed.

It can be hard to tell whether a normal probability plot is close enough to a straight line. But MATH200A part 4 shows the r and critical values from the Ryan-Joiner test. When r > the critical value, the points are near enough to a normal distribution. Here r=0.9772 > crit=0.9383, so the residuals are close enough to normal.

Confidence Interval about ρ

[Screen shot: input screen for the correlation confidence interval]

Enter your x’s and y’s in two statistics lists, such as L1 and L2. Run the MATH200B program and select 6:Correlatn inf. When prompted, enter your x list and y list, select 1:Conf interval, and enter your desired confidence level, such as .95 or 95 for 95%.

[Screen shot: output screen for the correlation confidence interval]

The output screen is shown at right. For this sample of n = 15 points, the sample correlation coefficient is r = 0.88. For the correlation of the population (distances and times for all commuters at this company), you’re 95% confident that 0.67 ≤ ρ ≤ 0.96.

(Just like confidence intervals about σ, confidence intervals about ρ extend different amounts above and below the sample statistic.)
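
The program's internal method isn't shown here, but the familiar Fisher z transformation gives the same endpoints; this Python sketch is offered only as a cross-check:

    # Sketch: approximate 95% CI for rho via the Fisher z transformation.
    from math import atanh, tanh, sqrt
    from scipy.stats import norm

    r, n, clevel = 0.88, 15, 0.95        # r as shown on the output screen
    z = atanh(r)                         # Fisher transformation
    se = 1 / sqrt(n - 3)
    zcrit = norm.ppf(1 - (1 - clevel) / 2)
    lo, hi = tanh(z - zcrit * se), tanh(z + zcrit * se)
    print(round(lo, 2), round(hi, 2))    # about 0.67 to 0.96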

Hypothesis Test about ρ

[Screen shot: input screen for the correlation hypothesis test]

You can also do a hypothesis test to see whether there is any correlation in the population. The null hypothesis H0 is that there is no correlation in the population, ρ = 0; the alternative H1 is that there is correlation in the population, ρ ≠ 0.

Select your α; 0.05 is a common choice. Run the MATH200B program and select 6:Correlatn inf. Enter your x and y lists and select 2:Test ρ≠0.

[Screen shot: output screen for the correlation hypothesis test]

The output screen is shown at right. Sample size n = 15, and sample correlation is r = 0.88. The t statistic for this hypothesis test is 6.64, and with 13 (n−2) degrees of freedom that yields a p-value of <0.0001.

p<α; reject H0 and accept H1. At the 0.05 level of significance, ρ ≠ 0: there is some correlation in the population. Furthermore, the population correlation is positive. (See p < α in Two-Tailed Test: What Does It Tell You? for interpreting the result of a two-tailed test in a one-tailed manner like this.)

Remark: When p is greater than α, you fail to reject H0. In that case, you conclude that it is impossible to say, at the 0.05 level of significance, whether there is correlation in the population or not.
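
The test statistic is the standard t = r·√(n−2)/√(1−r²) with n−2 degrees of freedom. This Python sketch (not the program's code) reproduces the numbers straight from the raw data:

    # Sketch: t test of H0: rho = 0 for the commute data.
    from math import sqrt
    from scipy.stats import pearsonr, t

    x = [3, 5, 7, 8, 10, 11, 12, 12, 13, 15, 15, 16, 18, 19, 20]
    y = [7, 20, 20, 15, 25, 17, 20, 35, 26, 25, 35, 32, 44, 37, 45]
    n = len(x)

    r = pearsonr(x, y)[0]                         # about 0.88
    tstat = r * sqrt(n - 2) / sqrt(1 - r ** 2)    # about 6.64
    p = 2 * t.sf(abs(tstat), n - 2)               # two-tailed p-value, < 0.0001
    print(round(r, 2), round(tstat, 2), p)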

7. Inferences about Linear Regression

Summary: A linear regression fits an equation of the form ŷ = b1x + b0 to the sample data, but the slope b1 and the y intercept b0 are just sample statistics. If you took a different sample you would likely get a different regression line.

The MATH200B program finds confidence intervals for the slope β1 and intercept β0 of the line that best fits the entire population of points, not just a particular sample. It can also find a confidence interval about the mean ŷ for a particular x and a prediction interval about all ŷ’s for a particular x.

The program doesn’t do any hypothesis tests on the regression line. The standard test is to test whether the regression line has a nonzero slope, β1 ≠ 0. But that test is identical to the test for a nonzero correlation coefficient, ρ ≠ 0, which the MATH200B program performs as part of the 6:Correlatn inf menu selection.

See also: Inferences about Linear Regression explains the principles and calculations behind inferences about linear regression; there’s even an Excel workbook.

The Example

Let’s use the same data on commuting distances and times from Inferences about Linear Correlation. The TI-83/84 command LinReg(ax+b) will show the best-fitting regression line for this particular sample, but what can you say about the regression for all commuters at that company?

Requirements

The requirements for inference about regression are the same as the requirements for inference about correlation, listed above.

Regression Coefficients for the Population

[Screen shots: input and output screens for inferences about the regression line]

Solution: Enter the x’s and y’s in any two statistics lists, such as L1 and L2. Run the MATH200B program and select 7:Regression inf. Specify the two lists and your desired confidence level, such as .95 or 95 for 95%.

Results: Always look first at the sample size (bottom of the screen) to make sure you haven’t left out any points. The slope of the sample regression line is 1.89, meaning that on average each extra mile of commute takes 1.89 minutes (a speed of about 32 mph). But the 95% confidence interval for the slope is 1.28 to 2.51: you’re 95% confident that the slope of commuting time per distance, for all commuters at this company, is between 1.28 and 2.51 minutes per mile.

The second section of the screen shows that the y intercept of the sample is 3.6: this represents the “fixed cost” of the commute, as opposed to the “variable cost” per mile represented by the slope. But the 95% confidence interval is −4.5 to +11.8 minutes.

Interpretation: the line that best fits the sample data is

ŷ = 1.89x + 3.6

and the regression line for the whole population is

ŷ = β1x + β0

where you’re 95% confident that

1.28 ≤ β1 ≤ 2.51   and   −4.5 ≤ β0 ≤ +11.8

Let’s think a bit more about that intercept, with a 95% confidence interval of −4.5 to +11.8 minutes. This is a good illustration that it’s a mistake to use a regression line too far outside your actual data. Here, the x’s run from 3 to 20. The y intercept corresponds to x = 0, and a commute of zero miles is not a commute at all. (Yes, there are people who work from home, but they don’t get in their cars and drive to work.) While the y intercept can be discussed as a mathematical concept, it really has no relevance to this particular problem.
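
These are the standard intervals b1 ± t*·SE(b1) and b0 ± t*·SE(b0) with n−2 degrees of freedom. A Python sketch (mine, not the program's) that reproduces them from the raw data:

    # Sketch: 95% confidence intervals for the population slope and intercept.
    from math import sqrt
    from scipy.stats import t

    x = [3, 5, 7, 8, 10, 11, 12, 12, 13, 15, 15, 16, 18, 19, 20]
    y = [7, 20, 20, 15, 25, 17, 20, 35, 26, 25, 35, 32, 44, 37, 45]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    Sxx = sum((xi - xbar) ** 2 for xi in x)
    Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

    b1 = Sxy / Sxx                     # slope, about 1.89
    b0 = ybar - b1 * xbar              # intercept, about 3.6
    resid = [yi - (b1 * xi + b0) for xi, yi in zip(x, y)]
    se = sqrt(sum(e ** 2 for e in resid) / (n - 2))    # standard error of estimate
    tcrit = t.ppf(0.975, n - 2)

    moe1 = tcrit * se / sqrt(Sxx)                          # slope margin
    moe0 = tcrit * se * sqrt(1 / n + xbar ** 2 / Sxx)      # intercept margin
    print(round(b1 - moe1, 2), round(b1 + moe1, 2))        # about 1.28 to 2.51
    print(round(b0 - moe0, 1), round(b0 + moe0, 1))        # about -4.5 to 11.8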

Inferences about a Particular x Value

The first output screen was about the line as a whole; now the program turns to predictions for a specific x value. First it asks for the x value you’re interested in. This time, let’s make predictions about a commute of 10 miles.

Caution: You should only use x values that are within the domain of x values in your data, or close to it. No matter how good the straight-line relationship of your data, you don’t really know whether that relationship continues for lower or higher x values.

The program arbitrarily limits you to the domain plus or minus 15% of the domain width, but even that may be too much in some problems. In this problem, commuting distances range from 3 mi to 20 mi, a width of 17 mi. The program will let you make predictions about any x value from 3−.15×17 = 0.45 mi to 20+.15×17 = 22.55 mi, but you have to decide how far you’re justified in extrapolating.

[Screen shots: input and output screens for inferences about the value x=10]

The input and output screens are shown at right. ŷ (“y-hat”) is simply the y value on the regression line for the given x value, found by ŷ = (slope)×10+(intercept) = 22.6 (using the unrounded slope and intercept). That is a prediction for μy|x=10, the average time for many 10-mile commutes. The screen shows a 95% confidence interval for that mean: you’re 95% confident that the average commute time for all 10-mile commutes (not just in the sample) is between 19.3 and 25.9 minutes.

But that is an estimate of the mean. Can we say anything about individual commutes? Yes, that is the prediction interval at the bottom of the screen. It says that 95% of all 10-mile commutes take between 10.4 and 34.7 minutes.
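
Both intervals use the same standard error of estimate; the interval for the mean uses √(1/n + (x0−x̄)²/Sxx) under the radical, and the prediction interval adds 1 to that quantity. A Python sketch (again mine, not the program's) reproduces both:

    # Sketch: 95% confidence interval for the mean response and prediction
    # interval for an individual response at x = 10.
    from math import sqrt
    from scipy.stats import t

    x = [3, 5, 7, 8, 10, 11, 12, 12, 13, 15, 15, 16, 18, 19, 20]
    y = [7, 20, 20, 15, 25, 17, 20, 35, 26, 25, 35, 32, 44, 37, 45]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    Sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / Sxx
    b0 = ybar - b1 * xbar
    se = sqrt(sum((yi - (b1 * xi + b0)) ** 2 for xi, yi in zip(x, y)) / (n - 2))
    tcrit = t.ppf(0.975, n - 2)

    x0 = 10
    yhat = b1 * x0 + b0                   # about 22.6
    core = 1 / n + (x0 - xbar) ** 2 / Sxx
    ci = tcrit * se * sqrt(core)          # for the mean of all 10-mile commutes
    pi = tcrit * se * sqrt(1 + core)      # for an individual 10-mile commute
    print(round(yhat - ci, 1), round(yhat + ci, 1))    # about 19.3 to 25.9
    print(round(yhat - pi, 1), round(yhat + pi, 1))    # about 10.4 to 34.7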

References

Spiegel, Murray R., and Larry J. Stephens. 1999.
Theory and Problems of Statistics. 3d ed. McGraw-Hill.
Sullivan, Michael. 2008.
Fundamentals of Statistics. 2d ed. Pearson Prentice Hall.


Updates and new info: https://BrownMath.com/ti83/
