Answer from Bitwise on Stack Exchange
Top answer
1 of 7
30

Actually there are some great reasons which have nothing to do with whether this is easy to calculate. The first form is called least squares, and in a probabilistic setting there are several good theoretical justifications to use it. For example, if you assume you are performing this regression on variables with normally distributed error (which is a reasonable assumption in many cases), then the least squares form is the maximum likelihood estimator. There are several other important properties.

You can read some more here.
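
The Gaussian maximum-likelihood claim can be made concrete with a short standard derivation (the notation $y_i = f(x_i;\theta) + \varepsilon_i$ is assumed here, not taken from the answer):

```latex
% Model: y_i = f(x_i;\theta) + \varepsilon_i, with \varepsilon_i \sim \mathcal{N}(0,\sigma^2) i.i.d.
\log L(\theta)
  = \sum_{i=1}^{n} \log\left[
      \frac{1}{\sqrt{2\pi\sigma^2}}
      \exp\!\left(-\frac{\bigl(y_i - f(x_i;\theta)\bigr)^2}{2\sigma^2}\right)
    \right]
  = -\frac{n}{2}\log\bigl(2\pi\sigma^2\bigr)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n} \bigl(y_i - f(x_i;\theta)\bigr)^2 .
```

Since the first term does not depend on $\theta$, maximizing the likelihood is exactly minimizing the sum of squared residuals.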

2 of 7
12

If the model is linear with respect to the parameters, setting the derivatives of the sum of squares to zero leads to simple, explicit and direct solutions (immediate if you use matrix calculations).

This is not the case for the second objective function in your post. The problem becomes nonlinear with respect to the parameters and it is much more difficult to solve. But it is doable (I would generate the starting guesses from the first objective function).

For illustration purposes, I generated a table of data points from the model and then perturbed the values with a small random relative error.

Using the first objective function, the solution is immediate and leads directly to estimates of the parameters.

Starting with these values as initial guesses for the second objective function (which, again, makes the problem nonlinear), the solver needed a number of iterations to converge, and all those painful iterations only reduced the objective function slightly!

There are many other possible objective functions used in regression, but the traditional sum of squared errors is the only one that leads to explicit solutions.

Added later

A very small problem that you could (should, if I may) exercise by hand: consider four data points, let your model be a simple one-parameter one, and search for the value of the parameter which minimizes either the sum of squared errors or the sum of absolute errors. Plot the values of both objective functions as a function of the parameter. For the sum of squares, you will have a nice parabola (the minimum of which is easy to find), but for the sum of absolute values the plot shows a series of segments which lead to discontinuous derivatives at their intersections; this makes the problem much more difficult to solve.
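
The hand exercise above can be sketched in a few lines of code. The data values and the constant model `y = a` here are my own assumptions for illustration (the answer's original numbers are not available); the point is only the shape of the two objective functions.

```python
# Fit the constant model y = a to four points and compare the two objectives.
data = [1.0, 2.0, 4.0, 7.0]  # hypothetical data points

def phi2(a):
    # Sum of squared errors: a smooth parabola in a, minimized at the mean.
    return sum((a - x) ** 2 for x in data)

def phi1(a):
    # Sum of absolute errors: piecewise linear, with kinks at the data points.
    return sum(abs(a - x) for x in data)

grid = [i / 4 for i in range(41)]  # a = 0, 0.25, ..., 10
a2 = min(grid, key=phi2)
a1 = min(grid, key=phi1)

print(a2)            # 3.5, the mean of the data
print(a1, phi1(a1))  # phi1 is flat between the two middle points 2 and 4
```

The parabola has a unique, easily located minimum (the mean), while the absolute-error objective is flat on the whole segment between the two middle data points, with kinks at every data value.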

Discussions

algebra precalculus - Sum of absolute values and the absolute value of the sum of these values? - Mathematics Stack Exchange
I'm working on a proof and I need some help with this: I determined that for some situations ($x$ or $y$ are negative but not both): $|x| + |y| > x + y$ How can I conclude using that statement... More on math.stackexchange.com
🌐 math.stackexchange.com
November 7, 2013
Why does taking the sum and difference between two numbers and dividing by 2 find the minimum of the two numbers?
Take numbers a and b. Their sum is a + b and their difference is |a - b| (the vertical lines denote the absolute-value function). The formula you're referring to is: min(a, b) = 1/2 ((a + b) - |a - b|)... More on reddit.com
🌐 r/askscience
135
614
June 15, 2016
Is the absolute value of the sum of two numbers always equal to the sum of their absolute values? Explain.
Expert-written ✅, step-by-step solution to: Is the absolute value of the sum of two numbers always equal to the su... More on quizlet.com
🌐 quizlet.com
1
summation - Sum of all absolute values of difference of the numbers 1,2,.....n taken two at a time - Mathematics Stack Exchange
Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams ... The sum of all absolute values of the differences of the numbers $1,2,3,\ldots, n$, taken two at a time, i.e. More on math.stackexchange.com
🌐 math.stackexchange.com
August 26, 2013

🌐
Wikipedia
en.wikipedia.org › wiki › Absolute_value
Absolute value - Wikipedia
1 month ago - The real absolute value function has a derivative for every x ≠ 0, given by a step function equal to the sign function except at x = 0 where the absolute value function is not differentiable:
🌐
Reddit
reddit.com › r/askscience › why does taking the sum and difference between two numbers and dividing by 2 find the minimum of the two numbers?
r/askscience on Reddit: Why does taking the sum and difference between two numbers and dividing by 2 find the minimum of the two numbers?
June 15, 2016 -

Kinda stumbled on this and seems ridiculously simple. I know it works but I can't really "understand" it.

Edit: thank you everyone. I've learned a lot. The links to other branches from quadratics to computing to Mohr's circle is mind boggling!

Top answer
1 of 4
472
Take numbers a and b. Their sum is a + b and their difference is |a - b| (the vertical lines denote the absolute-value function). The formula you're referring to is: min(a, b) = 1/2 ((a + b) - |a - b|). I'll first show a quick proof that this formula is correct and then a short explanation of why it makes sense, with a geometric interpretation.

We can distinguish 2 cases, a >= b and a < b. Let's assume the first case is true (the reasoning is the same in the second case). We find that |a - b| is now equal to just a - b. So the full expression becomes: 1/2 ((a + b) - (a - b)) = 1/2 (a + b - a + b) = 1/2 (2b) = b. And we get the smaller of the two numbers. In the other case, the same reasoning will show that the formula will yield a.

To get a bit of feeling for why this formula works, let's write it slightly differently: (a + b) / 2 - |a - b| / 2. Visualize a and b as points on a line. (a + b) / 2 represents the point precisely in the middle between a and b. The term |a - b| / 2 represents half of the distance between a and b. So you can see this formula as starting directly in the middle between a and b and then walking back exactly half the distance between the two. And that means you end up at the smaller of the two numbers.

Note that for the maximum of two numbers you can use a very similar formula: max(a, b) = 1/2 ((a + b) + |a - b|). The geometric interpretation is essentially the same, but instead of walking backwards from the midpoint, the + sign in front of the distance term means that you're now walking forward. So starting at the point in the middle between a and b you walk half the distance between these two points forward and you naturally end up at the largest of the numbers.
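
Both identities are easy to sanity-check in code; a minimal sketch (the function names `min2` and `max2` are my own):

```python
# min/max of two numbers via the midpoint-and-half-distance identities.
def min2(a, b):
    # Start at the midpoint, walk half the distance backwards.
    return ((a + b) - abs(a - b)) / 2

def max2(a, b):
    # Start at the midpoint, walk half the distance forwards.
    return ((a + b) + abs(a - b)) / 2

print(min2(7, 3), max2(7, 3))  # 3.0 7.0
```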
2 of 4
15
Simple enough. Take any two numbers; you can write them as a and a + x, where a is the minimum number and x is the positive difference between the minimum and the maximum (the absolute value being the function that gets the positive "version" of any number). So for example, if your two numbers were 7 and 3, then a = 3 and x = 4.

So you add the two numbers to get 2a + x, then you subtract the absolute value of the difference, i.e. x, and you get 2a; 2a divided by 2 is just the minimum a again.

To explain it in a common-sense sort of way: basically you're taking the minimum, doubling it, adding some arbitrary number and subtracting it again (effectively doing nothing), then halving it (negating the very first thing you did), so naturally you get the same thing back. It's like one of those party "tricks" where someone asks you to pick a number and then has you do a bunch of subtractions and multiplications on it, and at the end says "and your number is [XYZ]", but really all they've done is some forward and backward steps, mixed up.
🌐
Quizlet
quizlet.com › maths › algebra
Is the absolute value of the sum of two numbers always equal to the sum of their absolute values? Explain. | Quizlet
The statement that the absolute value of the sum of two numbers is always equal to the sum of their absolute values is only true if the signs of both numbers are same; that is either both numbers are positive or both numbers are negative.
Find elsewhere
🌐
Expii
expii.com › t › absolute-value-equations-with-sums-4163
Absolute Value Equations with Sums - Expii
Treat sums in absolute value equations like operations in parentheses. Add the terms within the absolute value brackets, apply the absolute value, then add the terms outside.
Top answer
1 of 5
2

The sum of all absolute values of the differences of the numbers 1,2,3,…,n, taken two at a time, is (note the factor $\frac12$: the double sum below counts every unordered pair twice, and the diagonal terms vanish)

$\begin{align} S(n) &=\frac12\sum_{i=1}^n\sum_{j=1}^{n}|i-j|\\ &=\frac12\left(\sum_{i=1}^n\sum_{j=1}^{i-1}(i-j) +\sum_{i=1}^n\sum_{j=i+1}^{n}(j-i)\right)\\ &=\frac12\left(\sum_{i=1}^n\sum_{j=1}^{i-1}j +\sum_{i=1}^n\sum_{j=1}^{n-i}j\right)\\ &=\frac12\left(\sum_{i=1}^n\frac{i(i-1)}{2} +\sum_{i=1}^n\frac{(n-i)(n-i+1)}{2}\right)\\ &=\frac14\sum_{i=1}^n\left(i^2-i+(n-i)^2+(n-i)\right)\\ &=\frac14\sum_{i=1}^n\left(2i^2-2i-2ni+n^2+n\right)\\ &=\frac12\sum_{i=1}^n\left(i^2-i-ni\right)+\frac{n^2(n+1)}{4}\\ &=\frac12\left(\frac{n(n+1)(2n+1)}{6}-(n+1)\frac{n(n+1)}{2}\right)+\frac{n^2(n+1)}{4}\\ &=\frac{n(n+1)}{12}\left((2n+1)-3(n+1)+3n\right)\\ &=\frac{n(n+1)}{12}\left(2n-2\right)\\ &=\frac{(n-1)n(n+1)}{6}\\ &=\binom{n+1}{3}\\ \end{align} $

(Whew!)

There are probably simpler ways, but this is my way.

2 of 5
1

If you fix $j=1$, then let $i$ range you get: $1+2+..+(n-1)$.

If you fix $j=2$, then let $i$ range you get: $1+2+..+(n-2)$.

...

If you fix $j=n-1$, then let $i$ range you get: $1$.

If you put them all together you get a total of $(n-1)$ number of $1$'s, $(n-2)$ number of $2$'s,..., $1$ number of $(n-1)$'s. Thus your sum equals: $$\sum_{k=1}^{n-1}k\cdot (n-k)$$Which equals: $n\sum_{k=1}^{n-1} k-\sum_{k=1}^{n-1}k^2$, and using the well-known formulas (and some algebra) you get $n^3/6-n/6$, which is equal to $\binom{n+1}{3}$
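
The closed form $\binom{n+1}{3}$ derived above can be checked by brute force; a small sketch:

```python
from math import comb
from itertools import combinations

# Sum |i - j| over all unordered pairs from 1..n and compare to C(n+1, 3).
def pair_abs_sum(n):
    return sum(abs(i - j) for i, j in combinations(range(1, n + 1), 2))

for n in range(2, 30):
    assert pair_abs_sum(n) == comb(n + 1, 3)

print(pair_abs_sum(4))  # 1+2+3+1+2+1 = 10 = C(5, 3)
```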

🌐
UBC Math
personal.math.ubc.ca › ~PLP › book › sec-abs-triangle.html
5.4 Absolute values and the triangle inequality
Okay, now we can prove our main result; it tells us that the absolute value of the sum of two numbers is smaller than or equal to the sum of their absolute values.
🌐
ScienceDirect
sciencedirect.com › topics › computer-science › sum-of-absolute-difference
Sum Of Absolute Difference - an overview | ScienceDirect Topics
Sum of Absolute Difference (SAD) is a metric used in computer science that calculates the total sum of absolute pixel value differences between two images or regions without the complexity of additional division operations.
Top answer
1 of 12
158

Introduction: The solution below is essentially the same as the solution given by Brian M. Scott, but it will take a lot longer. You are expected to assume that $S$ is a finite set, with, say, $k$ elements. Line them up in order, as $s_1<s_2<\cdots <s_k$.

The situation is a little different when $k$ is odd than when $k$ is even. In particular, if $k$ is even there are (depending on the exact definition of median) many medians. We tell the story first for $k$ odd.
Recall that $|x-s_i|$ is the distance between $x$ and $s_i$, so we are trying to minimize the sum of the distances. For example, we have $k$ people who live at various points on the $x$-axis. We want to find the point(s) $x$ such that the sum of the travel distances of the $k$ people to $x$ is a minimum.

The story: Imagine that the $s_i$ are points on the $x$-axis. For clarity, take $k=7$. Start from well to the left of all the $s_i$, and take a tiny step, say of length $\epsilon$, to the right. Then you have gotten $\epsilon$ closer to every one of the $s_i$, so the sum of the distances has decreased by $7\epsilon$.

Keep taking tiny steps to the right, each time getting a decrease of $7\epsilon$. This continues until you hit $s_1$. If you now take a tiny step to the right, then your distance from $s_1$ increases by $\epsilon$, and your distance from each of the remaining $s_i$ decreases by $\epsilon$. What has happened to the sum of the distances? There is a decrease of $6\epsilon$, and an increase of $\epsilon$, for a net decrease of $5\epsilon$ in the sum.

This continues until you hit $s_2$. Now, when you take a tiny step to the right, your distance from each of $s_1$ and $s_2$ increases by $\epsilon$, and your distance from each of the five others decreases by $\epsilon$, for a net decrease of $3\epsilon$.

This continues until you hit $s_3$. The next tiny step gives an increase of $3\epsilon$, and a decrease of $4\epsilon$, for a net decrease of $\epsilon$.

This continues until you hit $s_4$. The next little step brings a total increase of $4\epsilon$, and a total decrease of $3\epsilon$, for an increase of $\epsilon$. Things get even worse when you travel further to the right. So the minimum sum of distances is reached at $s_4$, the median.

The situation is quite similar if $k$ is even, say $k=6$. As you travel to the right, there is a net decrease at every step, until you hit $s_3$. When you are between $s_3$ and $s_4$, a tiny step of $\epsilon$ increases your distance from each of $s_1$, $s_2$, and $s_3$ by $\epsilon$. But it decreases your distance from each of the three others, for no net gain. Thus any $x$ in the interval from $s_3$ to $s_4$, including the endpoints, minimizes the sum of the distances. In the even case, I prefer to say that any point between the two "middle" points is a median. So the conclusion is that the points that minimize the sum are the medians. But some people prefer to define the median in the even case to be the average of the two "middle" points. Then the median does minimize the sum of the distances, but some other points also do.
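
The walking argument above is easy to verify numerically; a small sketch (the sample points are my own):

```python
import statistics

# Check that sum(|x - s_i|) is minimized at the median (odd k), and that it is
# constant on the whole middle interval (even k).
def total_distance(x, points):
    return sum(abs(x - s) for s in points)

points = [1, 3, 4, 8, 20, 21, 35]  # k = 7, median is s_4 = 8

# Scan integer candidate positions and keep the one with the smallest total.
best = min(range(40), key=lambda x: total_distance(x, points))
print(best, statistics.median(points))  # both are 8

# Even case: every point between the two middle values ties for the minimum.
even = [1, 3, 8, 20]
print(total_distance(3, even), total_distance(8, even))  # 24 24
```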

2 of 12
128

We're basically after: $$ \arg \min_{x} \sum_{i = 1}^{N} \left| {s}_{i} - x \right| $$

One should notice that $ \frac{\mathrm{d} \left | x \right | }{\mathrm{d} x} = \operatorname{sign} \left( x \right) $ (being more rigorous, one would say it is a subgradient of the non-smooth $ {L}_{1} $ norm function).
Hence, differentiating the sum above yields $ \sum_{i = 1}^{N} \operatorname{sign} \left( {s}_{i} - x \right) $.
This equals zero only when the number of positive terms equals the number of negative terms, which happens when $ x = \operatorname{median} \left\{ {s}_{1}, {s}_{2}, \cdots, {s}_{N} \right\} $.

Remarks

  1. One should notice that the median of a discrete group is not uniquely defined.
  2. The median is not necessarily an item within the group.
  3. Not every set allows the subgradient to vanish exactly. Yet employing the subgradient method is guaranteed to converge to a median.
  4. This is not the optimal way to calculate the median; it is given to provide intuition about what the median is.
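
Remark 3's convergence claim can be sketched as a toy subgradient iteration with a diminishing step size (my own illustration, and, as remark 4 says, not an efficient way to compute a median):

```python
# Minimize sum |s_i - x| by stepping against the sign-sum subgradient.
def sign(t):
    return (t > 0) - (t < 0)

def subgradient_median(s, x0=0.0, iters=2000):
    x = x0
    for k in range(1, iters + 1):
        g = -sum(sign(v - x) for v in s)  # subgradient of sum |v - x| w.r.t. x
        x -= (1.0 / k) * g                # diminishing step size 1/k
    return x

data = [2, 3, 7, 11, 50]
print(round(subgradient_median(data), 2))  # close to the median, 7
```

Once the iterate crosses the median, the sign-sum flips, so the iterate oscillates around the median with ever-smaller steps.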
🌐
ExtendOffice
extendoffice.com › documents › excel › sum absolute values in excel - a complete guide
Sum absolute values in Excel - A complete guide
April 10, 2025 - Learn how to sum absolute values in Excel using formulas (ABS, SUMPRODUCT, SUMIF) or Kutools. Quickly calculate totals while treating negatives as positives.
🌐
Wikipedia
en.wikipedia.org › wiki › Sum_of_absolute_differences
Sum of absolute differences - Wikipedia
October 22, 2023 - For each of these three image patches, the 9 absolute differences are added together, giving SAD values of 20, 25, and 17, respectively. From these SAD values, it could be asserted that the right side of the search image is the most similar to the template image, because it has the lowest sum of absolute differences as compared to the other two locations.
🌐
Statistics By Jim
statisticsbyjim.com › home › absolute value
Absolute Value - Statistics By Jim
May 16, 2025 - Subadditivity (Triangle Inequality): |a + b| ≤ |a| + |b|: The absolute value of a sum is less than or equal to the sum of the absolute values.
🌐
Stack Exchange
math.stackexchange.com › questions › 4250443 › how-to-prove-the-sum-of-absolute-values-is-greater-than-or-equal-to-absolute-val
summation - How to Prove the Sum of Absolute Values is greater than or equal to Absolute Value of Sum - Mathematics Stack Exchange
September 14, 2021 - You can provide examples for which $\sum_{i=1}^n |x_i|$ is greater than or equal to $\left|\sum_{i=1}^n x_i \right|$, but those examples cannot prove the inequality in general.
🌐
Quora
quora.com › Why-do-we-square-instead-of-using-the-absolute-value-when-calculating-variance-and-standard-deviation
Why do we square instead of using the absolute value when calculating variance and standard deviation? - Quora
So the question arises: why do we square instead of taking absolute values to find both? The benefits of squaring include: 1. Squaring always gives a positive value, so the sum will not be zero. 2. Squaring emphasises large differences, makes the algebra much easier to work with, and offers properties that the absolute-value method does not.
🌐
HowStuffWorks
science.howstuffworks.com › physical science › math concepts
How Absolute Value Works in Equations and Graphs | HowStuffWorks
May 30, 2024 - Now, let's go back to the initial inequality: |a + b| ≤ |a| + |b|. No matter what values you plug into a and b, you'll find that the absolute value of the sum (|a + b|) is less than or equal to the sum of the absolute values (|a| + |b|).
🌐
Quora
quora.com › Why-is-absolute-value-not-differentiable
Why is absolute value not differentiable? - Quora
Answer (1 of 4): I’m not disagreeing with either answer. I had to think about about both answers a bit, so I just thought I would write out my stream of consciousness. The primary issues is at zero where ABS transitions. While the limit analysis is the more formal way of proof, it is also easy t...