Answer from Bitwise on Stack Exchange
Top answer
1 of 7
30

Actually there are some great reasons which have nothing to do with whether this is easy to calculate. The first form is called least squares, and in a probabilistic setting there are several good theoretical justifications to use it. For example, if you assume you are performing this regression on variables with normally distributed error (which is a reasonable assumption in many cases), then the least squares form is the maximum likelihood estimator. There are several other important properties.

You can read some more here.
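The maximum-likelihood claim can be checked numerically. This is a small sketch with made-up data (not anything from the answer): under Gaussian noise the log-likelihood is a constant minus half the sum of squared errors, so maximizing the likelihood and minimizing the squared error pick the same parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model y = a * x with unit Gaussian noise (made-up data).
x = np.linspace(1.0, 10.0, 50)
y = 3.0 * x + rng.normal(0.0, 1.0, size=x.size)

# Evaluate both criteria over a grid of candidate slopes.
a_grid = np.linspace(2.0, 4.0, 2001)
residuals = y[None, :] - a_grid[:, None] * x[None, :]

# Sum of squared errors for each candidate slope.
sse = (residuals ** 2).sum(axis=1)

# Gaussian log-likelihood with sigma fixed at 1: a constant minus SSE / 2,
# so its maximizer is exactly the SSE minimizer.
loglik = -0.5 * sse - 0.5 * x.size * np.log(2.0 * np.pi)

a_ls = a_grid[np.argmin(sse)]
a_ml = a_grid[np.argmax(loglik)]
print(a_ls, a_ml)
```

Because the log-likelihood is an affine function of the SSE, the two grid searches land on the same point by construction.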

2 of 7
12

If the model is linear with respect to the parameters, the derivatives of the sum of squares lead to simple, explicit and direct solutions (immediate if you use matrix calculations).

This is not the case for the second objective function in your post. The problem becomes nonlinear with respect to the parameters and it is much more difficult to solve. But it is doable (I would generate the starting guesses from the first objective function).

For illustration purposes, I generated a table for (), () and changed the values of using a random relative error between and %. The values used were , and .

Using the first objective function, the solution is immediate and leads to , ,.

Starting with these values as initial guesses for the second objective function (which, again, makes the problem nonlinear), it took the solver iterations to get , ,. And all these painful iterations reduced the objective function from down to !

There are many other possible objective functions used in regression but the traditional sum of squared errors is the only one which leads to explicit solutions.
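The explicit solution from matrix calculations can be sketched as follows. The data here is made up (the numbers from the answer's table are not reproduced); setting the derivatives of the sum of squares to zero gives the normal equations, solvable in one step.

```python
import numpy as np

# Made-up data points for an ordinary linear model y = a + b*x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Design matrix: a column of ones for the intercept, then x.
X = np.column_stack([np.ones_like(x), x])

# Zeroing the derivatives of the sum of squares yields the normal
# equations (X^T X) beta = X^T y -- an explicit, direct solution.
beta = np.linalg.solve(X.T @ X, X.T @ y)
a, b = beta
print(a, b)  # intercept = 0.14, slope = 1.96 for this data
```

No iterations are needed; this is exactly the "immediate" solution the answer contrasts with the nonlinear absolute-value objective.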

Added later

A very small problem that you could (should, if I may) exercise by hand: consider four data points, a model with a single parameter, and search for the value of that parameter which minimizes either the sum of squared errors or the sum of absolute errors. Plot both objective functions as a function of the parameter. For the sum of squares you will have a nice parabola (the minimum of which is easy to find), but for the sum of absolute values the plot shows a series of segments which lead to discontinuous derivatives at their intersections; this makes the problem much more difficult to solve.
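The exercise can also be run numerically. This sketch uses made-up data points (the original four values are not given here); it shows the squared objective is a smooth parabola minimized at the mean, while the absolute-value objective is piecewise linear and flat across the median interval.

```python
import numpy as np

# Four made-up data points and a one-parameter model y = a.
y = np.array([1.0, 2.0, 4.0, 10.0])
a_grid = np.linspace(0.0, 11.0, 1101)

# Sum of squares: a smooth parabola in a.
sse = ((y[None, :] - a_grid[:, None]) ** 2).sum(axis=1)
# Sum of absolute values: piecewise-linear segments with kinks at the data.
sad = np.abs(y[None, :] - a_grid[:, None]).sum(axis=1)

a_sse = a_grid[np.argmin(sse)]  # lands at the mean of y
a_sad = a_grid[np.argmin(sad)]  # lands somewhere in the median interval
print(a_sse, a_sad)
```

The mean here is 4.25; the absolute-value objective is constant anywhere between the two middle points (2 and 4), which is exactly the flat-segment behaviour that makes its minimum awkward for derivative-based solvers.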

Discussions

algebra precalculus - Sum of absolute values and the absolute value of the sum of these values? - Mathematics Stack Exchange
I'm working on a proof and I need some help with this: I determined that for some situations ($x$ or $y$ are negative but not both): $|x| + |y| > x + y$ How can I conclude using that statement... More on math.stackexchange.com
๐ŸŒ math.stackexchange.com
November 7, 2013
Is the absolute value of the sum of two numbers always equal to the sum of their absolute values? Explain.
Expert-written โœ…, step-by-step solution to: Is the absolute value of the sum of two numbers always equal to the su... More on quizlet.com
๐ŸŒ quizlet.com
1
Why does taking the sum and difference between two numbers and dividing by 2 find the minimum of the two numbers?
Take numbers a and b. Their sum is a + b and their difference is |a - b| (the vertical lines denote the absolute-value function). The formula you're referring to is: min(a, b) = 1/2 ((a + b) - |a - b|) ... More on reddit.com
๐ŸŒ r/askscience
135
614
June 15, 2016
The method of least squares of residuals is commonly used to get the best fit with linear regression, in which the sum of squared residuals is minimized. The reason why the sum of absolute residuals |y - y_pred| is not used is that:
- The sum of absolute values is not easy to differentiate
- The sum of absolute values has a lesser value as compared to the sum of squares
- The sum of squares is not a measure of how well the regression line fits the data
- The method of least squares of residuals is not applicable to linear regression
The method of least squares of residuals is commonly used to get the best fit with linear regression, in which the sum of squared residuals is minimized. The reason why the sum of absolute residuals (|y- y_pred|) is not used is that - The sum of absolute values is not easy to differentiate. More on studyx.ai
๐ŸŒ studyx.ai
1
November 9, 2024
๐ŸŒ
Quizlet
quizlet.com โ€บ maths โ€บ algebra
Is the absolute value of the sum of two numbers always equal to the sum of their absolute values? Explain. | Quizlet
The statement that the absolute value of the sum of two numbers is always equal to the sum of their absolute values is only true if the signs of both numbers are the same; that is, either both numbers are positive or both numbers are negative.
๐ŸŒ
Reddit
reddit.com โ€บ r/askscience โ€บ why does taking the sum and difference between two numbers and dividing by 2 find the minimum of the two numbers?
r/askscience on Reddit: Why does taking the sum and difference between two numbers and dividing by 2 find the minimum of the two numbers?
June 15, 2016 -

Kinda stumbled on this and seems ridiculously simple. I know it works but I can't really "understand" it.

Edit: thank you everyone. I've learned a lot. The links to other branches from quadratics to computing to Mohr's circle is mind boggling!

Top answer
1 of 4
472
Take numbers a and b. Their sum is a + b and their difference is |a - b| (the vertical lines denote the absolute-value function). The formula you're referring to is:

min(a, b) = 1/2 ((a + b) - |a - b|)

I'll first show a quick proof that this formula is correct and then a short explanation of why it makes sense, with a geometric interpretation.

We can distinguish 2 cases, a >= b and a < b. Let's assume the first case is true (the reasoning is the same in the second case). We find that |a - b| is now equal to just a - b. So the full expression becomes:

1/2 ((a + b) - (a - b)) = 1/2 (a + b - a + b) = 1/2 (2b) = b

And we get the smaller of the two numbers. In the other case, the same reasoning will show that the formula will yield a.

To get a bit of feeling for why this formula works, let's write it slightly differently:

(a + b) / 2 - |a - b| / 2

Visualize a and b as points on a line. (a + b) / 2 represents the point precisely in the middle between a and b. The term |a - b| / 2 represents half of the distance between a and b. So you can see this formula as starting directly in the middle between a and b and then walking back exactly half the distance between the two. And that means you end up at the smaller of the two numbers.

Note that for the maximum of two numbers you can use a very similar formula:

max(a, b) = 1/2 ((a + b) + |a - b|)

The geometric interpretation is essentially the same, but instead of walking backwards from the midpoint, the + sign in front of the distance term means that you're now walking forward. So starting at the point in the middle between a and b you walk half the distance between these two points forward and you naturally end up at the largest of the numbers.
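The two formulas translate directly into code. A minimal sketch (function names are mine):

```python
def min_via_abs(a: float, b: float) -> float:
    # min(a, b) = ((a + b) - |a - b|) / 2: start at the midpoint of a and b,
    # then walk back half the distance between them.
    return ((a + b) - abs(a - b)) / 2

def max_via_abs(a: float, b: float) -> float:
    # max(a, b) = ((a + b) + |a - b|) / 2: same midpoint, walk forward instead.
    return ((a + b) + abs(a - b)) / 2

print(min_via_abs(7, 3), max_via_abs(7, 3))  # 3.0 7.0
```

Either branch of the case analysis above (a >= b or a < b) collapses the expression to the smaller or larger number, so the functions agree with the built-in min and max.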
2 of 4
15
Simple enough. You have any two numbers; you can write them as:

a
a + x

where a is the minimum number, and x is the "positive" difference (the modulo, which is the function to get the positive "version" of any number) between the minimum and maximum number. So for example if your two numbers were 7 and 3 then a = 3 and x = 4.

So you add the two numbers = 2a + x, then you subtract the modulo of the difference, i.e. x, and you get 2a; 2a divided by 2 is just the minimum (a) again.

To explain it in a common-sense sort of way, basically you're taking the minimum, doubling it, adding some arbitrary number and subtracting it again (effectively doing nothing), then halving it again (negating the very first thing you did!), so naturally you get the same thing. It's like one of those party "tricks" people do where they ask you to pick a number and then do a bunch of subtractions and multiplications on it, and then at the end they say "and your number is [XYZ]", but really all they've done is some forward and backward steps and mixed them up.
๐ŸŒ
Expii
expii.com โ€บ t โ€บ absolute-value-equations-with-sums-4163
Absolute Value Equations with Sums - Expii
Treat sums in absolute value equations like operations in parentheses. Add the terms within the absolute value brackets, apply the absolute value, then add the terms outside.
Find elsewhere
๐ŸŒ
Wikipedia
en.wikipedia.org โ€บ wiki โ€บ Absolute_value
Absolute value - Wikipedia
The real absolute value function has a derivative for every x ≠ 0, given by a step function equal to the sign function except at x = 0 where the absolute value function is not differentiable:
๐ŸŒ
StudyX
studyx.ai โ€บ homework โ€บ 109909081-the-method-of-least-squares-of-residuals-is-commonly-used-to-get-the-best-fit-with-linear
The method of least squares of residuals is | StudyX
November 9, 2024 - The method of least squares of residuals is commonly used to get the best fit with linear regression, in which the sum of squared residuals is minimized. The reason why the sum of absolute residuals (|y- y_pred|) is not used is that - The sum of absolute values is not easy to differentiate.
๐ŸŒ
UBC Math
personal.math.ubc.ca โ€บ ~PLP โ€บ book โ€บ sec-abs-triangle.html
5.4 Absolute values and the triangle inequality
Okay, now we can prove our main result; it tells us that the absolute value of the sum of two numbers is smaller
๐ŸŒ
ScienceDirect
sciencedirect.com โ€บ topics โ€บ computer-science โ€บ sum-of-absolute-difference
Sum Of Absolute Difference - an overview | ScienceDirect Topics
Sum of Absolute Difference (SAD) is a metric used in computer science that calculates the total sum of absolute pixel value differences between two images or regions without the complexity of additional division operations.
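The SAD metric described above is just an element-wise absolute difference followed by a sum. A minimal sketch on two small made-up patches (not the arrays from any of the cited pages):

```python
import numpy as np

# Two small "image patches" as integer pixel arrays (made-up values).
template = np.array([[2, 5, 5],
                     [4, 0, 7],
                     [7, 5, 9]])
patch = np.array([[2, 7, 5],
                  [1, 7, 4],
                  [8, 4, 6]])

# SAD: add up the absolute pixel differences -- no squaring, no division.
sad = np.abs(template - patch).sum()
print(sad)  # 20
```

In template matching, this value would be computed for every candidate location in the search image, and the location with the lowest SAD taken as the best match.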
Top answer
1 of 5
2

The sum of all absolute values of the differences of the numbers 1,2,3,โ€ฆ,n, taken two at a time is

$\begin{align} S(n) &=\sum_{1\le i<j\le n}(j-i)\\ &=\sum_{i=1}^{n}\sum_{j=i+1}^{n}(j-i)\\ &=\sum_{i=1}^{n}\sum_{k=1}^{n-i}k\\ &=\sum_{i=1}^{n}\frac{(n-i)(n-i+1)}{2}\\ &=\sum_{i=1}^{n}\binom{n-i+1}{2}\\ &=\sum_{m=1}^{n}\binom{m}{2}\\ &=\binom{n+1}{3}\\ &=\frac{(n-1)n(n+1)}{6} \end{align} $

(Whew!)

There are probably simpler ways, but this is my way.

2 of 5
1

If you fix $j=1$, then let $i$ range, you get: $1+2+\dots+(n-1)$.

If you fix $j=2$, then let $i$ range, you get: $1+2+\dots+(n-2)$.

...

If you fix $j=n-1$, then let $i$ range, you get: $1$.

If you put them all together you get a total of $(n-1)$ copies of $1$, $(n-2)$ copies of $2$, ..., and $1$ copy of $(n-1)$. Thus your sum equals: $$\sum_{k=1}^{n-1}k\cdot (n-k)$$ which equals $n\sum_{k=1}^{n-1} k-\sum_{k=1}^{n-1}k^2$, and using the well-known formulas (and some algebra) you get $n^3/6-n/6$, which is equal to $\binom{n+1}{3}$.
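Both closed forms, $\binom{n+1}{3}$ and $n^3/6-n/6$, agree with a brute-force check over unordered pairs:

```python
from itertools import combinations
from math import comb

# Brute-force the sum of |i - j| over unordered pairs from 1..n and
# compare with the closed forms C(n+1, 3) = (n^3 - n) / 6.
for n in range(2, 12):
    brute = sum(abs(i - j) for i, j in combinations(range(1, n + 1), 2))
    assert brute == comb(n + 1, 3) == (n**3 - n) // 6
print("formula verified for n = 2..11")
```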

๐ŸŒ
Wikipedia
en.wikipedia.org โ€บ wiki โ€บ Sum_of_absolute_differences
Sum of absolute differences - Wikipedia
For each of these three image patches, the 9 absolute differences are added together, giving SAD values of 20, 25, and 17, respectively. From these SAD values, it could be asserted that the right side of the search image is the most similar to the template image, because it has the lowest sum of absolute differences as compared to the other two locations.
๐ŸŒ
GeeksforGeeks
geeksforgeeks.org โ€บ dsa โ€บ sum-absolute-differences-pairs-given-array
Sum of absolute differences of all pairs in a given array - GeeksforGeeks
April 23, 2023 - Since array is sorted and elements are distinct when we take sum of absolute difference of pairs each element in the i'th position is added 'i' times and subtracted 'n-1-i' times.
๐ŸŒ
Statistics By Jim
statisticsbyjim.com โ€บ home โ€บ absolute value
Absolute Value - Statistics By Jim
May 16, 2025 - Subadditivity (Triangle Inequality): |a + b| โ‰ค |a| + |b|: The absolute value of a sum is less than or equal to the sum of the absolute values.
๐ŸŒ
Quora
quora.com โ€บ Why-do-we-square-instead-of-using-the-absolute-value-when-calculating-variance-and-standard-deviation
Why do we square instead of using the absolute value when calculating variance and standard deviation? - Quora
So the question arises: why do we square instead of taking absolute values to find both? The benefits of squaring include: 1. Squaring always gives a positive value, so the sum will not be zero. 2. Squaring emphasises large differences, which makes the algebra much easier to work with and offers properties that the absolute-value method does not.
๐ŸŒ
Stack Exchange
math.stackexchange.com โ€บ questions โ€บ 4250443 โ€บ how-to-prove-the-sum-of-absolute-values-is-greater-than-or-equal-to-absolute-val
summation - How to Prove the Sum of Absolute Values is greater than or equal to Absolute Value of Sum - Mathematics Stack Exchange
September 14, 2021 - You can provide examples for which $\sum_{i=1}^n |x_i|$ is greater than or equal to $\left|\sum_{i=1}^n x_i \right|$, but those examples cannot prove the inequality in general.
๐ŸŒ
HowStuffWorks
science.howstuffworks.com โ€บ physical science โ€บ math concepts
How Absolute Value Works in Equations and Graphs | HowStuffWorks
May 30, 2024 - Now, let's go back to the initial inequality: |a + b| โ‰ค |a| + |b|. No matter what values you plug into a and b, you'll find that the absolute value of the sum (|a + b|) is less than or equal to the sum of the absolute values (|a| + |b|).