Actually there are some great reasons which have nothing to do with whether this is easy to calculate. The first form is called least squares, and in a probabilistic setting there are several good theoretical justifications to use it. For example, if you assume you are performing this regression on variables with normally distributed error (which is a reasonable assumption in many cases), then the least squares form is the maximum likelihood estimator. There are several other important properties.
You can read some more here.
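A minimal numeric sketch of the maximum-likelihood claim, with made-up data and an assumed no-intercept model $y = bx$ (neither comes from the answer): the Gaussian log-likelihood is highest exactly at the closed-form least-squares slope.

```python
# Sketch: under i.i.d. Gaussian noise, the least-squares fit is the MLE.
# The data and the model y = b*x are illustrative assumptions.
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x plus small "noise"

def log_likelihood(b, sigma=1.0):
    """Gaussian log-likelihood of slope b for the model y = b*x."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (y - b * x) ** 2 / (2 * sigma**2)
               for x, y in zip(xs, ys))

# Closed-form least-squares slope for y = b*x: b = sum(x*y) / sum(x*x)
b_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# The log-likelihood at b_ls beats every other slope we try,
# because maximizing it is the same as minimizing the sum of squares.
for b in [b_ls - 0.1, b_ls + 0.1, 1.5, 2.5]:
    assert log_likelihood(b_ls) >= log_likelihood(b)
```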
(The answer above is from Bitwise on Stack Exchange.)
If the model is linear with respect to the parameters, setting the derivatives of the sum of squares to zero leads to simple, explicit, and direct solutions (immediate if you use matrix calculations).
This is not the case for the second objective function in your post. The problem becomes nonlinear with respect to the parameters and it is much more difficult to solve. But it is doable (I would generate the starting guesses from the first objective function).
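To illustrate the "explicit and direct" part, here is a sketch for the straight-line model $y = a + bx$ with made-up data: the normal equations give $a$ and $b$ in closed form, with no iteration at all.

```python
# Sketch: explicit least-squares solution for y = a + b*x, obtained by
# setting d(SSE)/da = d(SSE)/db = 0 (the "normal equations").
# The data points are made up for illustration.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.2, 2.9, 5.1, 7.2, 8.8]

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Solving the 2x2 normal equations directly -- no iteration needed.
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

def sse(a_, b_):
    """Sum of squared residuals for intercept a_ and slope b_."""
    return sum((y - (a_ + b_ * x)) ** 2 for x, y in zip(xs, ys))

# Any perturbation of (a, b) can only increase the sum of squares.
assert sse(a, b) <= sse(a + 0.05, b)
assert sse(a, b) <= sse(a, b - 0.05)
```

The same closed form generalizes to any model linear in its parameters via the matrix normal equations.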
For illustration purposes, I generated a table of data from the model and perturbed the values with a small random relative error.
Using the first objective function, the solution is immediate and gives the fitted parameters directly.
Starting with these values as initial guesses for the second objective function (which, again, makes the problem nonlinear), it took the solver quite a few iterations to converge. And all these painful iterations reduced the objective function only slightly!
There are many other possible objective functions used in regression, but the traditional sum of squared errors is the only one that leads to explicit solutions.
Added later
A very small problem that you could (should, if I may) exercise by hand: consider four data values $y_1, y_2, y_3, y_4$, suppose your model is simply a constant $y = a$, and search for the best value of $a$ which minimizes either
$$\Phi_1(a)=\sum_{i=1}^4 (y_i-a)^2 \qquad\text{or}\qquad \Phi_2(a)=\sum_{i=1}^4 |y_i-a|.$$
Plot the values of $\Phi_1$ and $\Phi_2$ as a function of $a$. For $\Phi_1$, you will have a nice parabola (the minimum of which is easy to find), but for $\Phi_2$ the plot shows a series of line segments, which leads to discontinuous derivatives at their intersections; this makes the problem much more difficult to solve.
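The suggested exercise can be sketched numerically. With four made-up data values and the constant model $y = a$ (calling the two objectives `phi1` for the sum of squares and `phi2` for the sum of absolute values), a grid scan shows the parabola bottoming out at the mean, while the piecewise-linear objective has a flat minimum between the two middle data values (the median region):

```python
# Sketch of the suggested exercise with four made-up data values.
# Model: y = a (a single constant). phi1 is the sum of squares,
# phi2 the sum of absolute values.
ys = [1.0, 2.0, 4.0, 7.0]

def phi1(a):
    return sum((y - a) ** 2 for y in ys)

def phi2(a):
    return sum(abs(y - a) for y in ys)

# Scan a over a fine grid. The argmin of phi1 is the mean; the argmin
# of phi2 is any a between the two middle points (here, [2, 4]).
grid = [i / 100 for i in range(0, 801)]
a1 = min(grid, key=phi1)
a2 = min(grid, key=phi2)

mean = sum(ys) / len(ys)          # 3.5
assert abs(a1 - mean) < 0.01      # parabola: unique minimum at the mean
assert 2.0 <= a2 <= 4.0           # piecewise-linear: flat minimum on [2, 4]
```

The flat, kinked minimum of `phi2` is exactly why derivative-based solvers struggle with the absolute-value objective.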
Sum of absolute values and the absolute value of the sum of these values? (Mathematics Stack Exchange)
Is the absolute value of the sum of two numbers always equal to the sum of their absolute values? Explain.
Why does taking the sum and difference between two numbers and dividing by 2 find the minimum of the two numbers?
The method of least squares is commonly used to get the best fit in linear regression, in which the sum of squared residuals is minimized. The reason why the sum of absolute residuals $(y - y_\text{pred})$ is not used is that:
- The sum of absolute values is not easy to differentiate
- The sum of absolute values has a lesser value as compared to the sum of squares
- The sum of squares is not a measure of how well the regression line fits the data
- The method of least squares of residuals is not applicable to linear regression
You can try considering $(|a|+|b|)^2$; then, using the property that $|x|^2 = x^2$ for all $x$, you can obtain the desired inequality. Also, your conclusion should be $|a+b| \le |a|+|b|$. Notice that the inequality is not strict. (For example, if $a = b = 1$ then certainly $|a+b| < |a|+|b|$ is false.) Another way is to use the fact that $-|a| \le a \le |a|$ for all numbers $a$ (doing this for both $a$ and $b$, and then manipulating the inequalities, we can achieve what you want); however, this second route assumes that you are at least a little bit familiar with the rules of inequalities, particularly the rules regarding inequalities with absolute values.
As an example of how we would apply the squaring technique, we can do the following:
$$|a+b|^2 = (a+b)^2 = a^2 + 2ab + b^2.$$
Now since $x \le |x|$ is always true, we can say that $ab \le |ab| = |a|\,|b|$. Now we want to try to "force" an inequality. That is, we will replace $ab$ with $|a|\,|b|$. If $a$ and $b$ are both greater than $0$ then nothing would change; however, if they are not both greater than $0$ we would get the following:
$$a^2 + 2ab + b^2 \le a^2 + 2|a|\,|b| + b^2 = |a|^2 + 2|a|\,|b| + |b|^2 = (|a|+|b|)^2;$$
notice that we have broken the chain of equal signs and forced an inequality. (There are still some more steps to do for you. Hint: what is $\sqrt{(|a|+|b|)^2}$?)
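A quick brute-force check of the inequality and of exactly when it becomes an equality (when $a$ and $b$ have the same sign, i.e. $ab \ge 0$), over a small grid of test values:

```python
# Numeric check of the triangle inequality |a+b| <= |a| + |b|,
# and of when it is an equality (exactly when a*b >= 0).
vals = [x / 2 for x in range(-10, 11)]   # -5.0, -4.5, ..., 5.0

for a in vals:
    for b in vals:
        assert abs(a + b) <= abs(a) + abs(b)
        if a * b >= 0:                     # same sign (or one is zero)
            assert abs(a + b) == abs(a) + abs(b)
        else:                              # opposite signs: strict
            assert abs(a + b) < abs(a) + abs(b)
```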
First method: We have
$$|a+b|^2 = a^2 + 2ab + b^2 \le a^2 + 2|ab| + b^2 = (|a|+|b|)^2,$$
which is an equality if and only if $ab = |ab|$, i.e. if and only if $a$ and $b$ are both negative or both positive. This means that if $a$ and $b$ don't have the same sign we have:
$$|a+b|^2 < (|a|+|b|)^2.$$
But $|a+b|^2 < (|a|+|b|)^2$ also gives
$$|a+b| < |a|+|b|,$$
since $t \mapsto \sqrt{t}$ is increasing.
Second method: For every real number $x$ we have $x \le |x|$, and equality holds if and only if $x$ is nonnegative. If $a$ and $b$ don't have the same sign, one of the two numbers is strictly negative and the other strictly positive, so
$$a + b < |a| + |b| \quad\text{and}\quad -(a+b) < |a| + |b|,$$
and since $|a+b|$ is either $a+b$ or $-(a+b)$, we conclude $|a+b| < |a|+|b|$.
Kinda stumbled on this and seems ridiculously simple. I know it works but I can't really "understand" it.
Edit: thank you everyone. I've learned a lot. The links to other branches, from quadratics to computing to Mohr's circle, are mind-boggling!
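The identity being marveled at appears to be $\min(a,b) = \frac{a+b-|a-b|}{2}$ (with $\max(a,b) = \frac{a+b+|a-b|}{2}$). The intuition: $(a+b)/2$ is the midpoint of $a$ and $b$, and $|a-b|/2$ is half the distance between them, so stepping down or up from the midpoint lands on the smaller or larger value. A small check:

```python
# min(a, b) = (a + b - |a - b|)/2 and max(a, b) = (a + b + |a - b|)/2.
# (a+b)/2 is the midpoint; |a-b|/2 is half the gap between a and b.
def min_via_abs(a, b):
    return (a + b - abs(a - b)) / 2

def max_via_abs(a, b):
    return (a + b + abs(a - b)) / 2

for a in range(-5, 6):
    for b in range(-5, 6):
        assert min_via_abs(a, b) == min(a, b)
        assert max_via_abs(a, b) == max(a, b)
```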
The sum of all absolute values of the differences of the numbers $1, 2, 3, \ldots, n$, taken two at a time is
Counting each unordered pair once (the double sum over all $i, j$ counts every pair twice):
$\begin{align} S(n) &=\frac12\sum_{i=1}^n\sum_{j=1}^{n}|i-j|\\ &=\frac12\left(\sum_{i=1}^n\sum_{j=1}^{i-1}(i-j) +\sum_{i=1}^n\sum_{j=i+1}^{n}(j-i)\right)\\ &=\frac12\left(\sum_{i=1}^n\sum_{j=1}^{i-1}j +\sum_{i=1}^n\sum_{j=1}^{n-i}j\right)\\ &=\frac12\left(\sum_{i=1}^n\frac{i(i-1)}{2} +\sum_{i=1}^n\frac{(n-i)(n-i+1)}{2}\right)\\ &=\sum_{i=1}^n\frac{i(i-1)}{2}\\ &=\sum_{i=1}^n\binom{i}{2}\\ &=\binom{n+1}{3}\\ &=\frac{(n-1)n(n+1)}{6} \end{align}$
(the two sums in the fourth line are equal, as the substitution $i\mapsto n-i+1$ turns one into the other, and the last step is the hockey-stick identity).
(Whew!)
There are probably simpler ways, but this is my way.
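The closed form is easy to sanity-check by brute force over the unordered pairs:

```python
# Brute-force check of the closed form: the sum of |i - j| over all
# unordered pairs 1 <= i < j <= n equals (n-1)*n*(n+1)/6 = C(n+1, 3).
from math import comb

def pair_sum(n):
    return sum(j - i for i in range(1, n + 1) for j in range(i + 1, n + 1))

for n in range(1, 30):
    assert pair_sum(n) == (n - 1) * n * (n + 1) // 6
    assert pair_sum(n) == comb(n + 1, 3)
```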
If you fix $j=1$ and let $i$ range, you get: $1+2+\cdots+(n-1)$.
If you fix $j=2$ and let $i$ range, you get: $1+2+\cdots+(n-2)$.
...
If you fix $j=n-1$ and let $i$ range, you get: $1$.
If you put them all together you get a total of $(n-1)$ $1$'s, $(n-2)$ $2$'s, ..., and $1$ $(n-1)$'s. Thus your sum equals: $$\sum_{k=1}^{n-1}k\cdot (n-k)$$ which equals $n\sum_{k=1}^{n-1} k-\sum_{k=1}^{n-1}k^2$, and using the well-known formulas (and some algebra) you get $n^3/6-n/6$, which is equal to $\binom{n+1}{3}$.
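The grouping argument can be checked the same way: the difference $k$ occurs exactly $n-k$ times among the pairs, so summing $k(n-k)$ reproduces the pair sum and the closed form $(n^3-n)/6$:

```python
# Check of the grouping argument: among pairs from {1,...,n}, the
# difference k appears exactly (n - k) times, so the pair sum is
# sum_{k=1}^{n-1} k*(n-k), which equals (n**3 - n)/6.
def grouped_sum(n):
    return sum(k * (n - k) for k in range(1, n))

def pair_sum(n):
    return sum(j - i for i in range(1, n + 1) for j in range(i + 1, n + 1))

for n in range(1, 30):
    assert grouped_sum(n) == pair_sum(n) == (n**3 - n) // 6
```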