Data rarely fit a straight line exactly. Usually, you must be satisfied with rough predictions. Typically, you have a set of data whose scatter plot appears to "fit" a straight line. This is called a Line of Best Fit or Least Squares Line .
A random sample of 11 statistics students produced the following data where $x$ is the third exam score, out of 80, and $y$ is the final exam score, out of 200. Can you predict the final exam score of a random student if you know the third exam score?
x (third exam score) | y (final exam score) |
---|---|
65 | 175 |
67 | 133 |
71 | 185 |
71 | 163 |
66 | 126 |
75 | 198 |
67 | 153 |
70 | 163 |
71 | 159 |
69 | 151 |
69 | 159 |
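To make the prediction question concrete, here is a minimal pure-Python sketch that fits the least-squares line to the table above and predicts a final exam score. The variable names (`x_bar`, `y_bar`, `a`, `b`) are illustrative, not from the text; the slope and intercept formulas used are the standard least-squares ones.

```python
# Least-squares fit of final exam score (y) on third exam score (x)
# for the 11 students in the table above.
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# Slope b and intercept a of the least-squares line y-hat = a + b*x.
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
    sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar

print(f"y-hat = {a:.2f} + {b:.2f}x")   # y-hat = -173.51 + 4.83x

# Predict the final exam score of a student who scored 73 on the third exam.
y_hat_73 = a + b * 73
print(round(y_hat_73, 1))              # about 178.9
```

Running this gives the line $\hat{y} = -173.51 + 4.83x$, so a student who scores 73 on the third exam is predicted to score about 179 on the final.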
The third exam score, $x$ , is the independent variable and the final exam score, $y$ , is the dependent variable. We will plot a regression line that best "fits" the data. If each of you were to fit a line "by eye", you would draw different lines. We can use what is called a least-squares regression line to obtain the best-fit line.
Consider the following diagram. Each point of data is of the form $(x,y)$ and each point of the line of best fit using least-squares linear regression has the form $(x,\hat{y})$ .
The $\hat{y}$ is read "y hat" and is the estimated value of $y$ . It is the value of $y$ obtained using the regression line. It is not generally equal to the observed $y$ from data.
The term $y-\hat{y}$ is called the residual . It is the observed $y$ value minus the predicted $\hat{y}$ value. It can also be called the "error". It is not an error in the sense of a mistake, but measures the vertical distance between the observed value $y$ and the estimated value $\hat{y}$ . In other words, it measures the vertical distance between the actual data point and the predicted point on the line.
If the observed data point lies above the line, the residual is positive, and the line underestimates the actual data value for $y$ . If the observed data point lies below the line, the residual is negative, and the line overestimates the actual data value for $y$ .
In the Figure 2 diagram above, ${y}_{0}-{\hat{y}}_{0}={\epsilon}_{0}$ is the residual for the point shown. Here the point lies above the line and the residual is positive.
$\epsilon $ = the Greek letter epsilon
For each data point, you can calculate the residuals or errors, ${y}_{i}-{\hat{y}}_{i}={\epsilon}_{i}$ for $i=\text{1, 2, 3, ..., 11}$ .
Each $\epsilon $ is a vertical distance.
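As a sketch of how these vertical distances work out for the exam data, the following computes every residual using the fitted line $\hat{y} = -173.51 + 4.83x$ (coefficients rounded to two decimal places; the variable names are illustrative):

```python
# Residuals y_i - y-hat_i for the exam data, using the fitted line
# y-hat = -173.51 + 4.83x (coefficients rounded to two decimals).
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]
a, b = -173.51, 4.83

residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# The first student scored (65, 175); the line predicts about 140.4,
# so this point lies above the line and its residual is positive.
# The second student scored (67, 133); the line predicts about 150.1,
# so that point lies below the line and its residual is negative.
print([round(r, 1) for r in residuals])
```

Notice that some residuals are positive (points above the line) and some are negative (points below the line), exactly as described above.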
For the example about the third exam scores and the final exam scores for the 11 statistics students, there are 11 data points. Therefore, there are 11 $\epsilon $ values. If you square each $\epsilon $ and add, you get
${\epsilon}_{1}^{2}+{\epsilon}_{2}^{2}+\cdots+{\epsilon}_{11}^{2}=\sum_{i=1}^{11}{\epsilon}_{i}^{2}$
This is called the Sum of Squared Errors (SSE) .
Using calculus, you can determine the values of $a$ and $b$ that make the SSE a minimum. When you make the SSE a minimum, you have determined the points that are on the line of best fit. It turns out that the line of best fit has the equation:

$\hat{y}=a+bx$

where $a=\bar{y}-b\bar{x}$ and $b=\frac{\Sigma (x-\bar{x})(y-\bar{y})}{\Sigma {(x-\bar{x})}^{2}}$ .
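The calculus derivation is not shown here, but the minimization claim can be checked numerically: compute the SSE at the least-squares coefficients and confirm that nearby lines all give a larger SSE. This is a sketch with illustrative names, not the textbook's derivation.

```python
# Numerical check (not the calculus proof) that the least-squares
# coefficients minimize the SSE for the exam data.
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]

def sse(a, b):
    """Sum of squared errors for the line y-hat = a + b*x."""
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

# Least-squares coefficients computed directly from the data.
n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
    sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar

best = sse(a, b)
# Perturbing the slope (and re-centering the intercept through the
# point of means) always increases the SSE.
for db in (-0.5, -0.1, 0.1, 0.5):
    assert sse(y_bar - (b + db) * x_bar, b + db) > best
print(round(best, 1))
```

No matter how the slope is perturbed, the SSE only grows, which is exactly what "line of best fit" means here.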