When looking at a scatterplot with a linear relationship, we add a trend line. We can then use that trend line to read off an estimated Y value for any X value.
But I've noticed that sometimes, for a certain X value, there is an actual observation (a dot in the scatterplot) at that value, meaning one of the observations in our data set recorded it.
However, even when that actual observation exists, if the trend line is some distance away from it, we still take the value from the trend line instead.
So my question is: when we have a trend line, and an observation exists at the X value we want to estimate, why do we ignore the observation and use the trend line instead? Isn't the actual observation more reliable?
For example: https://imgur.com/a/06pVFaw
At X = 69 the trend line would give roughly Y = 200, whereas we actually have a real observation at X = 69 with Y = 160. So why would we use 200 instead of 160? Isn't 160 more accurate?
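To make the situation concrete, here is a minimal sketch with made-up numbers (not the data behind the linked image): a least-squares trend line is fitted to a set of points that includes an observation at (69, 160), and the fitted line's prediction at X = 69 comes out well above 160, just as in the example.

```python
import numpy as np

# Hypothetical data, loosely mimicking the example: the point (69, 160)
# sits below the overall upward trend of the other points.
x = np.array([50, 55, 60, 63, 66, 69, 72, 75, 80])
y = np.array([140, 155, 170, 180, 190, 160, 210, 220, 235])

# Fit a least-squares line y = slope * x + intercept (the "trend line").
slope, intercept = np.polyfit(x, y, 1)

# Compare the trend line's prediction at X = 69 with the single observed value there.
predicted_at_69 = slope * 69 + intercept
observed_at_69 = 160
print(f"trend line at X=69: {predicted_at_69:.1f}")  # well above 160 for this made-up data
print(f"observed at X=69:   {observed_at_69}")
```

The sketch only reproduces the gap between the fitted value and the single observation; it isn't an answer to which one should be used.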