Data rarely fit a straight line exactly. Usually, you must be satisfied with rough predictions. Typically, you have a set of data whose scatter plot appears to **"fit"** a straight line. This is called a Line of Best Fit **or** Least-Squares Line.

### Collaborative Exercise

If you know a person's pinky (smallest) finger length, do you think you could predict that person's height? Collect data from your class (pinky finger length, in inches). The independent variable, *x*, is pinky finger length and the dependent variable, *y*, is height. For each set of data, plot the points on graph paper. Make your graph big enough and **use a ruler**. Then "by eye" draw a line that appears to "fit" the data. For your line, pick two convenient points and use them to find the slope of the line. Find the *y*-intercept of the line by extending your line so it crosses the *y*-axis. Using the slopes and the *y*-intercepts, write your equation of "best fit." Do you think everyone will have the same equation? Why or why not? According to your equation, what is the predicted height for a pinky length of 2.5 inches?

### Example 12.6

A random sample of 11 statistics students produced the following data, where *x* is the third exam score out of 80, and *y* is the final exam score out of 200. Can you predict the final exam score of a random student if you know the third exam score?

| x (third exam score) | y (final exam score) |
|---|---|
| 65 | 175 |
| 67 | 133 |
| 71 | 185 |
| 71 | 163 |
| 66 | 126 |
| 75 | 198 |
| 67 | 153 |
| 70 | 163 |
| 71 | 159 |
| 69 | 151 |
| 69 | 159 |

Table 12.3 Table showing the scores on the final exam based on scores from the third exam.

Figure 12.9 Scatter plot showing the scores on the final exam based on scores from the third exam.

### Try It 12.6

SCUBA divers have maximum dive times they cannot exceed when going to different depths. The data in Table 12.4 show different depths with the maximum dive times in minutes. Use your calculator to find the least squares regression line and predict the maximum dive time for 110 feet.

| X (depth in feet) | Y (maximum dive time, in minutes) |
|---|---|
| 50 | 80 |
| 60 | 55 |
| 70 | 45 |
| 80 | 35 |
| 90 | 25 |
| 100 | 22 |

Table 12.4
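If you prefer to check the calculator's answer, the least-squares line can also be computed by hand or in a few lines of code. Below is a minimal Python sketch (plain arithmetic, no statistics library) that applies the standard least-squares formulas, *b* = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and *a* = ȳ − b x̄, to the dive-time data:

```python
# Least-squares fit for the SCUBA data in Table 12.4.
x = [50, 60, 70, 80, 90, 100]   # depth in feet
y = [80, 55, 45, 35, 25, 22]    # maximum dive time in minutes

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# b = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2), then a = y_bar - b * x_bar
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
    sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar

print(f"y-hat = {a:.2f} + ({b:.2f})x")   # y-hat = 127.24 + (-1.11)x
y_110 = a + b * 110                      # predicted maximum dive time at 110 feet
print(f"{y_110:.2f} minutes")            # about 4.67 minutes
```

Note that 110 feet lies just beyond the observed depths (50 to 100 feet), so this prediction is a modest extrapolation.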

The third exam score, *x*, is the independent variable and the final exam score, *y*, is the dependent variable. We will plot a regression line that best "fits" the data. If each of you were to fit a line "by eye," you would draw different lines. We can use what is called a least-squares regression line to obtain the best fit line.

Consider the following diagram. Each point of data is of the form (*x*, *y*) and each point of the line of best fit using least-squares linear regression has the form (*x*, *ŷ*).

The *ŷ* is read **"y hat"** and is the **estimated value of *y***. It is the value of *y* obtained using the regression line. It is not generally equal to *y* from data.

Figure 12.10

The term *y*_{0} – *ŷ*_{0} = *ε*_{0} is called the **"error"** or **residual**. It is not an error in the sense of a mistake. The absolute value of a residual measures the vertical distance between the actual value of *y* and the estimated value of *y*. In other words, it measures the vertical distance between the actual data point and the predicted point on the line.

If the observed data point lies above the line, the residual is positive, and the line underestimates the actual data value for *y*. If the observed data point lies below the line, the residual is negative, and the line overestimates the actual data value for *y*.

In the diagram in Figure 12.10, *y*_{0} – *ŷ*_{0} = ε_{0} is the residual for the point shown. Here the point lies above the line and the residual is positive.

*ε* = the Greek letter **epsilon**

For each data point, you can calculate the residuals or errors, *y*_{i} - *ŷ*_{i} = *ε*_{i} for *i* = 1, 2, 3, ..., 11.

Each |*ε*| is a vertical distance.

For the example about the third exam scores and the final exam scores for the 11 statistics students, there are 11 data points. Therefore, there are 11 *ε* values. If you square each ε and add, you get

$$({\epsilon}_{1})^{2}+({\epsilon}_{2})^{2}+\dots+({\epsilon}_{11})^{2}=\sum_{i=1}^{11}{\epsilon}_{i}^{2}$$

This is called the Sum of Squared Errors (SSE).

Using calculus, you can determine the values of *a* and *b* that make the **SSE** a minimum. When you make the **SSE** a minimum, you have determined the points that are on the line of best fit. It turns out that the line of best fit has the equation:

$$\widehat{y}=a+bx$$

where $a=\overline{y}-b\overline{x}$ and $b=\frac{\Sigma (x-\overline{x})(y-\overline{y})}{\Sigma {(x-\overline{x})}^{2}}$.

The sample means of the *x* values and the *y* values are $\overline{x}$ and $\overline{y}$, respectively. The best fit line always passes through the point $(\overline{x},\overline{y})$.

The slope *b* can be written as $b=r\left(\frac{{s}_{y}}{{s}_{x}}\right)$ where *s*_{y} = the standard deviation of the *y* values and *s*_{x} = the standard deviation of the *x* values. *r* is the correlation coefficient, which is discussed in the next section.

### Least Squares Criteria for Best Fit

The process of fitting the best-fit line is called **linear regression**. The idea behind finding the best-fit line is based on the assumption that the data are scattered about a straight line. The criterion for the best-fit line is that the sum of the squared errors (SSE) is minimized, that is, made as small as possible. Any other line you might choose would have a higher SSE than the best-fit line. This best-fit line is called the **least-squares regression line**.
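The claim that any other line has a higher SSE can be spot-checked numerically. The sketch below (reusing the third-exam data from Example 12.6) computes the least-squares line, then nudges the intercept or slope in several directions and confirms the SSE only grows:

```python
# Compare the SSE of the least-squares line with nearby competing lines.
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
    sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar

def sse(a, b):
    """Sum of squared errors for the line y-hat = a + b*x."""
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

best = sse(a, b)
# Nudge the intercept or slope either way: the SSE can only increase.
for da, db in [(1, 0), (-1, 0), (0, 0.05), (0, -0.05)]:
    assert sse(a + da, b + db) > best
print(f"minimum SSE = {best:.2f}")
```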

### Note

Computer spreadsheets, statistical software, and many calculators can quickly calculate the best-fit line and create the graphs. The calculations tend to be tedious if done by hand. Instructions to use the TI-83, TI-83+, and TI-84+ calculators to find the best-fit line and create a scatterplot are shown at the end of this section.

THIRD EXAM vs FINAL EXAM EXAMPLE: The graph of the line of best fit for the third-exam/final-exam example is as follows:

Figure 12.11

The least squares regression line (best-fit line) for the third-exam/final-exam example has the equation:

$$\widehat{y}=-173.51+4.83x$$

### Reminder

Remember, it is always important to plot a scatter diagram first. If the scatter plot indicates that there is a linear relationship between the variables, then it is reasonable to use a best fit line to make predictions for *y* given *x* within the domain of *x*-values in the sample data, **but not necessarily for x-values outside that domain.** You could use the line to predict the final exam score for a student who earned a grade of 73 on the third exam. You should NOT use the line to predict the final exam score for a student who earned a grade of 50 on the third exam, because 50 is not within the domain of the *x*-values in the sample data, which are between 65 and 75.
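Using the rounded equation ŷ = –173.51 + 4.83*x*, the in-domain prediction works out as below. This small Python sketch adds a guard that refuses to extrapolate beyond the sample's *x*-values:

```python
# Best-fit line for the third-exam/final-exam example (rounded coefficients).
a, b = -173.51, 4.83
x_min, x_max = 65, 75   # domain of x-values in the sample data

def predict(x):
    """Predict a final exam score, refusing to extrapolate."""
    if not (x_min <= x <= x_max):
        raise ValueError(f"x = {x} is outside the data's domain [{x_min}, {x_max}]")
    return a + b * x

print(round(predict(73), 2))   # about 179.08, a reasonable prediction
try:
    predict(50)                # 50 is outside [65, 75] ...
except ValueError as err:
    print(err)                 # ... so no prediction is made
```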

### UNDERSTANDING SLOPE

The slope of the line, *b*, describes how changes in the variables are related. It is important to interpret the slope of the line in the context of the situation represented by the data. You should be able to write a sentence interpreting the slope in plain English.

**INTERPRETATION OF THE SLOPE:** The slope of the best-fit line tells us how the dependent variable (*y*) changes for every one unit increase in the independent (*x*) variable, on average.

THIRD EXAM vs FINAL EXAM EXAMPLE. Slope: The slope of the line is *b* = 4.83.

Interpretation: For a one-point increase in the score on the third exam, the final exam score increases by 4.83 points, on average.

### Using the TI-83, 83+, 84, 84+ Calculator

Using the Linear Regression T Test: LinRegTTest

- In the STAT list editor, enter the X data in list L1 and the Y data in list L2, paired so that the corresponding (*x*, *y*) values are next to each other in the lists. (If a particular pair of values is repeated, enter it as many times as it appears in the data.)
- On the STAT TESTS menu, scroll down with the cursor to select the LinRegTTest. (Be careful to select LinRegTTest, as some calculators may also have a different item called LinRegTInt.)
- On the LinRegTTest input screen enter: Xlist: L1; Ylist: L2; Freq: 1
- On the next line, at the prompt *β* or *ρ*, highlight "≠ 0" and press ENTER
- Leave the line for "RegEq:" blank
- Highlight Calculate and press ENTER.

Figure 12.12

The output screen contains a lot of information. For now we will focus on a few items from the output, and will return later to the other items.

The second line says *y* = *a* + *bx*. Scroll down to find the values *a* = –173.513, and *b* = 4.8273; the equation of the best fit line is *ŷ* = –173.51 + 4.83*x*

The two items at the bottom are *r*^{2} = 0.43969 and *r* = 0.663. For now, just note where to find these values; we will discuss them in the next two sections.

Graphing the Scatterplot and Regression Line

- We are assuming your X data is already entered in list L1 and your Y data is in list L2.
- Press 2nd STATPLOT ENTER to use Plot 1.
- On the input screen for PLOT 1, highlight **On** and press ENTER.
- For TYPE: highlight the very first icon, which is the scatterplot, and press ENTER.
- Indicate Xlist: L1 and Ylist: L2.
- For Mark: it does not matter which symbol you highlight.
- Press the ZOOM key and then the number 9 (for menu item "ZoomStat"); the calculator will fit the window to the data.
- To graph the best-fit line, press the "Y=" key and type the equation –173.5 + 4.83X into equation Y1. (The X key is immediately left of the STAT key.) Press ZOOM 9 again to graph it.
- Optional: If you want to change the viewing window, press the WINDOW key. Enter your desired window using Xmin, Xmax, Ymin, Ymax.

### NOTE

Another way to graph the line after you create a scatter plot is to use LinRegTTest.

- Make sure you have done the scatter plot. Check it on your screen.
- Go to LinRegTTest and enter the lists.
- At RegEq: press VARS and arrow over to Y-VARS. Press 1 for 1:Function. Press 1 for 1:Y1. Then arrow down to Calculate and do the calculation for the line of best fit.
- Press Y = (you will see the regression equation).
- Press GRAPH. The line will be drawn.

### The Correlation Coefficient *r*

Besides looking at the scatter plot and seeing that a line seems reasonable, how can you tell if the line is a good predictor? Use the correlation coefficient as another indicator (besides the scatterplot) of the strength of the relationship between *x* and *y*.

The **correlation coefficient, *r*,** developed by Karl Pearson in the early 1900s, is numerical and provides a measure of the strength and direction of the linear association between the independent variable *x* and the dependent variable *y*.

The correlation coefficient is calculated as

$$r=\frac{n\Sigma (xy)-(\Sigma x)(\Sigma y)}{\sqrt{\left[n\Sigma {x}^{2}-{(\Sigma x)}^{2}\right]\left[n\Sigma {y}^{2}-{(\Sigma y)}^{2}\right]}}$$

where *n* = the number of data points.
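The formula is easier to follow with a concrete computation. Applying it to the third-exam/final-exam data from Example 12.6 in a short Python sketch:

```python
import math

# Third exam (x) and final exam (y) scores from Table 12.3.
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)
sum_y2 = sum(yi ** 2 for yi in y)

# r = [n*Sum(xy) - Sum(x)*Sum(y)] / sqrt([n*Sum(x^2) - (Sum x)^2][n*Sum(y^2) - (Sum y)^2])
r = (n * sum_xy - sum_x * sum_y) / math.sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
print(round(r, 3))   # 0.663
```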

If you suspect a linear relationship between *x* and *y*, then *r* can measure how strong the linear relationship is.

What the VALUE of *r* tells us:

- The value of *r* is always between –1 and +1: –1 ≤ *r* ≤ 1.
- The size of the correlation *r* indicates the strength of the linear relationship between *x* and *y*. Values of *r* close to –1 or to +1 indicate a stronger linear relationship between *x* and *y*.
- If *r* = 0 there is likely no linear correlation. It is important to view the scatterplot, however, because data that exhibit a curved or horizontal pattern may have a correlation of 0.
- If *r* = 1, there is perfect positive correlation. If *r* = –1, there is perfect negative correlation. In both these cases, all of the original data points lie on a straight line. Of course, in the real world, this will not generally happen.

What the SIGN of *r* tells us

- A positive value of *r* means that when *x* increases, *y* tends to increase, and when *x* decreases, *y* tends to decrease **(positive correlation)**.
- A negative value of *r* means that when *x* increases, *y* tends to decrease, and when *x* decreases, *y* tends to increase **(negative correlation)**.
- The sign of *r* is the same as the sign of the slope, *b*, of the best-fit line.

### Note

Strong correlation does not suggest that *x* causes *y* or *y* causes *x*. We say **"correlation does not imply causation."**

Figure 12.13 (a) A scatter plot showing data with a positive correlation. 0 < *r* < 1 (b) A scatter plot showing data with a negative correlation. –1 < *r* < 0 (c) A scatter plot showing data with zero correlation. *r* = 0

The formula for *r* looks formidable. However, computer spreadsheets, statistical software, and many calculators can quickly calculate *r*. The correlation coefficient *r* is the bottom item in the output screens for the LinRegTTest on the TI-83, TI-83+, or TI-84+ calculator (see previous section for instructions).

### The Coefficient of Determination

The variable *r*^{2} is called the **coefficient of determination** and is the square of the correlation coefficient *r*. It is usually stated as a percent, rather than in decimal form, and has an interpretation in the context of the data:

- *r*^{2}, when expressed as a percent, represents the percent of variation in the dependent (predicted) variable *y* that can be explained by variation in the independent (explanatory) variable *x* using the regression (best-fit) line.
- 1 – *r*^{2}, when expressed as a percentage, represents the percent of variation in *y* that is NOT explained by variation in *x* using the regression line. This can be seen as the scattering of the observed data points about the regression line.

Consider the third exam/final exam example introduced in the previous section.

- The line of best fit is: *ŷ* = –173.51 + 4.83*x*
- The correlation coefficient is *r* = 0.6631
- The coefficient of determination is *r*^{2} = 0.6631^{2} = 0.4397

**Interpretation of *r*^{2} in the context of this example:**

- Approximately 44% of the variation (0.4397 is approximately 0.44) in the final-exam grades can be explained by the variation in the grades on the third exam, using the best-fit regression line.
- Therefore, approximately 56% of the variation (1 – 0.44 = 0.56) in the final exam grades can NOT be explained by the variation in the grades on the third exam, using the best-fit regression line. (This is seen as the scattering of the points about the line.)
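The arithmetic in this interpretation is a one-liner once *r* is known. A sketch, assuming *r* = 0.6631 as given for this example:

```python
r = 0.6631
r_squared = r ** 2
print(round(r_squared, 4))          # 0.4397

explained = round(r_squared * 100)  # percent of variation in y explained by x
print(f"{explained}% explained, {100 - explained}% not explained")
```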

## FAQs

### How do you solve a regression equation in statistics?

The formula for calculation is **Y = a + bX + E**, where Y is the dependent variable, X is the independent variable, a is the intercept, b is the slope, and E is the residual. Regression is a statistical tool to predict the dependent variable with the help of one or more independent variables.

**What is a regression equation?**

The formula is **Y = a + bX**, in which Y is dependent, X is independent, b is the slope, and a is the intercept. Multiple linear regression is a case where there is a linear relation between one dependent variable (Y) and many independent variables (X1, X2, ..., Xn).

**How do you find the regression equation in regression statistics?**

The formula for simple linear regression is **Y = mX + b**, where Y is the response (dependent) variable, X is the predictor (independent) variable, m is the estimated slope, and b is the estimated intercept.

**What is the regression formula for dummies?**

The equation which defines the simplest form of the regression equation, with one dependent and one independent variable, is **y = mx + c**, where y = estimated dependent variable, c = constant, m = regression coefficient, and x = independent variable.

**What is a regression equation example?**

We could use the equation to predict weight if we knew an individual's height. In this example, if an individual was 70 inches tall, we would predict his weight to be: **Weight = 80 + 2 x (70) = 220 lbs**. In this simple linear regression, we are examining the impact of one independent variable on the outcome.

**How do you write a regression equation example?**

A linear regression line has an equation of the form **Y = a + bX**, where X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0).

**What is an example of a regression in statistics?**

Example: we can say that **age and height** can be described using a linear regression model. Since a person's height increases as age increases, they have a linear relationship. Regression models are commonly used as statistical proof of claims regarding everyday facts.

**What is the regression equation quizlet?**

The regression equation is **used to estimate a value of the dependent variable Y based on a selected value of the independent variable X**.

**What question does regression answer?**

There are three major areas of questions that regression analysis answers: (1) **causal analysis**, (2) forecasting an effect, and (3) trend forecasting.

**What is simple regression in statistics?**

Simple linear regression is **a regression model that estimates the relationship between one independent variable and one dependent variable using a straight line**. Both variables should be quantitative.

### What is the regression method in statistics?

A regression is a statistical technique that **relates a dependent variable to one or more independent (explanatory) variables**. A regression model is able to show whether changes observed in the dependent variable are associated with changes in one or more of the explanatory variables.

**What is an example of a regression problem?**

Some real-world examples for regression analysis include **predicting the price of a house given house features**, predicting the impact of SAT/GRE scores on college admissions, predicting the sales based on input parameters, predicting the weather, etc.

**Why do we calculate regression?**

Typically, a regression analysis is done for one of two purposes: In order to predict the value of the dependent variable for individuals for whom some information concerning the explanatory variables is available, or in order to estimate the effect of some explanatory variable on the dependent variable.

**What is a regression equation in one sentence?**

A regression equation can be defined as a statistical model used to determine the specific relationship between the predictor variable and the outcome variable. A model regression equation allows predicting the outcome with a very small error: **Y_{i} = b_{0} + b_{1}x_{i} + e_{i}**.

**What is a good example of regression?**

For example, it can be used **to predict the relationship between reckless driving and the total number of road accidents caused by a driver**, or, to use a business example, the effect on sales and spending a certain amount of money on advertising. Regression is one of the most common models of machine learning.

**What is regression to the mean, with an easy example?**

A toy example

**If you naively took your top performing 10% of students and give them a second test using the same strategy, the mean score would be expected to be close to 50**. Thus your top performing students would “regress” all the way back to the mean of all students who took the original test.

**What are the variables in a regression equation?**

Y = the dependent variable of the regression; M = the slope of the regression; X1 = the first independent variable of the regression; X2 = the second independent variable of the regression.

**What does my regression equation mean?**

A linear regression equation **describes the relationship between the independent variables (IVs) and the dependent variable (DV)**. It can also predict new values of the DV for the IV values you specify.

**What does a represent in the following regression equation?**

The portion of the equation denoted by a + b X_{i} defines a line. The symbol X represents the independent variable. The symbol a represents **the Y intercept**, that is, the value that Y takes when X is zero.

**What is regression to the mean in math?**

Regression to the mean (RTM) is **a statistical phenomenon describing how variables much higher or lower than the mean are often much closer to the mean when measured a second time**. Regression to the mean is due to natural variation or chance.

### What is an example of regression quizlet?

Examples of regression include the use of baby talk or whining in a child (or adult) who has already mastered appropriate speech or a return to thumb sucking , teddy bear cuddling, or watching cartoons in response to something distressing.

**What is regression basics?**

**The regression equation simply describes the relationship between the dependent variable (y) and the independent variable (x)**. The intercept, or "a," is the value of y (dependent variable) if the value of x (independent variable) is zero, and so is sometimes simply referred to as the "constant."

**What are the types of regression?**

The ultimate goal of a regression algorithm is to plot a best-fit line or curve through the data. **Linear regression, logistic regression, ridge regression, Lasso regression, and polynomial regression** are all types of regression.

**What is the sample regression function?**

The sample regression function is **an equation that represents the relationship between the Y variable and X variable(s) that is based only on the information in a sample of the population**.

**How do you explain linear regression results?**

Interpreting Linear Regression Coefficients

**A positive coefficient indicates that as the value of the independent variable increases, the mean of the dependent variable also tends to increase**. A negative coefficient suggests that as the independent variable increases, the dependent variable tends to decrease.

**What are the two lines of regression in statistics?**

Two Lines of Regression

The line of regression of Y on X is given by **Y = a + bX**, where a and b are unknown constants known as the intercept and slope of the equation. This is used to predict the unknown value of variable Y when the value of variable X is known.

**What is a regression problem in statistics?**

The regression problem is **how to model one or several dependent variables/responses, Y, by means of a set of predictor variables, X**. In the PLS method, we divide the variables (columns) into two blocks denoted as X and Y.

**What is the best explanation of regression?**

Regression analysis is a reliable method of identifying which variables have impact on a topic of interest. The process of performing a regression allows you to confidently determine which factors matter most, which factors can be ignored, and how these factors influence each other.

**What are the two key parts of a regression equation?**

Elements of a regression equation (linear, first-order model): **y is the value of the dependent variable, what is being predicted or explained;** a, a constant, equals the value of y when the value of x = 0; b is the coefficient of X, the slope of the regression line, how much Y changes for each change in x.


### How do you solve two regression equations?

The two regression coefficients can be written in several equivalent forms:

- $b_{yx}=\frac{\Sigma (x-\overline{x})(y-\overline{y})}{\Sigma {(x-\overline{x})}^{2}}$ and $b_{xy}=\frac{\Sigma (x-\overline{x})(y-\overline{y})}{\Sigma {(y-\overline{y})}^{2}}$
- $b_{yx}=\frac{n\Sigma xy-(\Sigma x)(\Sigma y)}{n\Sigma {x}^{2}-{(\Sigma x)}^{2}}$ and $b_{xy}=\frac{n\Sigma xy-(\Sigma x)(\Sigma y)}{n\Sigma {y}^{2}-{(\Sigma y)}^{2}}$
- $b_{yx}=\frac{n\Sigma {d}_{x}{d}_{y}-(\Sigma {d}_{x})(\Sigma {d}_{y})}{n\Sigma {d}_{x}^{2}-{(\Sigma {d}_{x})}^{2}}$ and $b_{xy}=\frac{n\Sigma {d}_{x}{d}_{y}-(\Sigma {d}_{x})(\Sigma {d}_{y})}{n\Sigma {d}_{y}^{2}-{(\Sigma {d}_{y})}^{2}}$, where $d_{x}$ and $d_{y}$ are deviations from assumed means
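A useful check on the two regression coefficients is the identity b_{yx} · b_{xy} = r². Here is a Python sketch of that identity, reusing the third-exam/final-exam data from earlier in this chapter:

```python
import math

# The two regression slopes: y on x (b_yx) and x on y (b_xy).
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
s_xx = sum((xi - x_bar) ** 2 for xi in x)
s_yy = sum((yi - y_bar) ** 2 for yi in y)

b_yx = s_xy / s_xx   # slope of the regression of y on x
b_xy = s_xy / s_yy   # slope of the regression of x on y

# Their product equals r^2, so it always lies between 0 and 1.
r = s_xy / math.sqrt(s_xx * s_yy)
print(round(b_yx * b_xy, 4), round(r ** 2, 4))   # the two values agree
```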

**What is regression in math?**

Regression is **a statistical term for describing models that estimate the relationships among variables**. A linear regression model studies the relationship between a single dependent variable Y and one or more independent variables X.

**Where is regression on a calculator?**

To calculate the linear regression (ax+b): **press [STAT] to enter the statistics menu, press the right arrow key to reach the CALC menu, and then press 4: LinReg(ax+b)**.

**Can a TI-84 do regression?**

Yes. **Go to [STAT] "CALC", use the arrow keys to select the desired regression, and press [ENTER]**.