Least absolute deviations (LAD), also known as least absolute errors (LAE), least absolute residuals (LAR), or least absolute values (LAV), is a statistical optimality criterion and a statistical optimization technique based on … (Wikipedia)
Wikipedia
https://en.wikipedia.org/wiki/Least_absolute_deviations
Least absolute deviations - Wikipedia
November 22, 2024 - Unlike least squares regression, least absolute deviations regression does not have an analytical solving method; therefore, an iterative approach is required. The following is an enumeration of some least absolute deviations solving methods: simplex-based methods (such as the Barrodale-Roberts algorithm), because the problem is a linear program...
Top answer (1 of 2) · score 6

You want an example of solving least absolute deviations regression by linear programming, so I will show you a simple implementation in R. Quantile regression is a generalization of least absolute deviations (which is the special case of quantile 0.5), so I will show a solution for quantile regression; you can then check the results against the R quantreg package:

    rq_LP <- function(x, Y, r = 0.5, intercept = TRUE) {
        require("lpSolve")
        X <- if (intercept) cbind(1, x) else cbind(x)
        n <- nrow(X)
        stopifnot(n == length(Y))
        p <- ncol(X)
        ## Variables: e+ (n), e- (n), beta+ (p), beta- (p), all >= 0
        cost <- c(rep(r, n), rep(1 - r, n), rep(0, 2 * p))  # cost coefficient vector
        A <- cbind(diag(n), -diag(n), X, -X)
        res <- lp("min", cost, A, "=", Y, compute.sens = 1)
        ## Unpack the coefficients: beta = beta+ - beta-
        sol <- res$solution
        coef1 <- sol[(2 * n + 1):(2 * n + 2 * p)]
        coef <- coef1[1:p] - coef1[(p + 1):(2 * p)]
        return(coef)
    }

Then we use it in a simple example:

    library(robustbase)
    data(starsCYG)
    Y  <- starsCYG[, 2]
    x  <- starsCYG[, 1]
    rq_LP(x, Y)
    [1]  8.1492045 -0.6931818

Then you can check the result yourself with quantreg.
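For readers working in Python rather than R, the same linear program can be sketched with SciPy's `linprog`. This is an illustrative translation, not part of the original answer; the function name `rq_lp` and the toy data are made up for the example:

```python
import numpy as np
from scipy.optimize import linprog

def rq_lp(x, y, r=0.5, intercept=True):
    """Quantile regression (LAD at r = 0.5) as an LP, mirroring rq_LP above.
    Decision vector: [e+ (n), e- (n), beta+ (p), beta- (p)], all >= 0."""
    X = np.column_stack([np.ones_like(x), x]) if intercept else np.atleast_2d(x).T
    n, p = X.shape
    cost = np.concatenate([np.full(n, r), np.full(n, 1 - r), np.zeros(2 * p)])
    A_eq = np.hstack([np.eye(n), -np.eye(n), X, -X])   # e+ - e- + X(b+ - b-) = y
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    betas = res.x[2 * n:]
    return betas[:p] - betas[p:]                       # beta = beta+ - beta-

# Points lying exactly on y = 3 + 2x, so the LAD fit must recover them exactly.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3 + 2 * x
print(rq_lp(x, y))   # ≈ [3. 2.]
```

With real data, the coefficients could be compared against, e.g., statsmodels' QuantReg at q = 0.5, which plays the role quantreg plays in R.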

Answer 2 of 2 · score 2

Linear programming can be generalized to convex optimization, where, in addition to the simplex method, many other reliable algorithms are available.

I would suggest checking the book Convex Optimization (Boyd and Vandenberghe) and the accompanying CVX toolbox, with which you can easily formulate least absolute deviation with regularization:

https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf

http://cvxr.com/cvx/

GitHub
https://github.com/seunghwanyoo/lad_reformulation
GitHub - seunghwanyoo/lad_reformulation: Least absolute deviation (LAD) problem with linear programming
The least absolute deviation (LAD) method uses the L1 norm to obtain the solution x for Ax = b instead of the L2 norm. We test a simple linear regression problem with LAD and compare its performance with the least squares (LS) method.
Author   seunghwanyoo
Princeton
https://vanderbei.princeton.edu/542/lectures/lec9.pdf (PDF)
Linear Programming: Chapter 12 Regression Robert J. Vanderbei October 17, 2007
Least Absolute Deviation Regression via Linear Programming: \(\min \sum_i \left| b_i - \sum_j a_{ij} x_j \right|\). Equivalent linear program: \(\min \sum_i t_i\) subject to \(-t_i \le b_i - \sum_j a_{ij} x_j \le t_i\), \(i = 1, 2, \ldots, m\). AMPL model.
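The equivalent LP in the Vanderbei lecture translates directly to an off-the-shelf solver. A minimal Python sketch assuming SciPy; the function name `lad_fit` and the outlier data are illustrative, not from the lecture notes:

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(A, b):
    """Minimize sum_i |b_i - (A x)_i| via the equivalent LP:
    min sum_i t_i  s.t.  -t_i <= b_i - (A x)_i <= t_i.
    Decision vector: [x (p entries, free), t (m entries, t >= 0)]."""
    m, p = A.shape
    c = np.concatenate([np.zeros(p), np.ones(m)])
    A_ub = np.vstack([np.hstack([-A, -np.eye(m)]),   #  b - A x <= t
                      np.hstack([ A, -np.eye(m)])])  # -(b - A x) <= t
    b_ub = np.concatenate([-b, b])
    bounds = [(None, None)] * p + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Four points on y = x plus one gross outlier: the L1 fit ignores the outlier.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.0, 1.0, 2.0, 3.0, 100.0])
A = np.column_stack([np.ones_like(xs), xs])
print(lad_fit(A, ys))   # ≈ [0. 1.]  (intercept, slope)
```

Because x enters only through the two-sided bound on each residual, x can stay free; only the auxiliary t variables need a nonnegativity bound, unlike the split-variable formulation used in the R answer above.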
R-project
https://roi.r-forge.r-project.org/use_case_LAD.html
Least absolute deviation (LAD) problem
\[ \begin{eqnarray*} \underset{\beta_0,\mathbf{\beta},\mathbf{e}^+,\mathbf{e}^-}{\text{minimize}} && \sum_{i=1}^n e_i^+ + e_i^- \\ \text{subject to} && \beta_0 + \mathbf{\beta}^\top \mathbf{x}_i + e_i^+ - e_i^- = 0, \quad i = 1,\ldots{},n \\ && \beta_j = -1 \\ && e_i^+, e_i^- \geq 0, \quad i = 1,\ldots{},n \end{eqnarray*} \] given a set of points \(\mathbf{x}_i \in \mathbb{R}^m\), \(i = 1,\ldots{},n\) …
Readthedocs
https://gurobi-optimods.readthedocs.io/en/stable/mods/lad-regression.html
Least Absolute Deviation Regression - gurobi-optimods documentation v3.0.0
The distinction between this Mod and the Ordinary Least Squares regression algorithm from scikit-learn is the loss function. LADRegression chooses coefficients \(w\) of a linear model \(y = Xw\) so as to minimize the sum of absolute errors on a training dataset \((X, y)\). In other words, it aims to minimize the following loss function: ... The fitting algorithm of the LAD regression Mod is implemented by formulating the loss function as a Linear Program (LP), which is then solved using Gurobi.
Stack Overflow
https://stackoverflow.com/questions/64422417/irls-vs-linear-programming-for-large-scale-for-least-absolute-deviation-lad-r
IRLS vs. Linear Programming for Large Scale for Least Absolute Deviation (LAD) Regression - Stack Overflow
I was trying to run a Least Absolute Deviance regression (L1 regression). I did it by designing a linear program, and solving with CVXOPT (in Python). The number of features (including constant) was 13 and the number of samples was ~100K. It took many hours to run to completion.
ResearchGate
https://www.researchgate.net/publication/229703239_Least_squares_versus_minimum_absolute_deviation_estimation_in_linear_models
Least squares versus minimum absolute deviation estimation in linear models
June 7, 2007 - In this paper simulation techniques ... demonstrate that, in certain cases, minimizing the sum of the absolute values of the deviations (L1 norm) is preferable to the Least Squares criterion....
Blogger
https://yetanothermathprogrammingconsultant.blogspot.com/2017/11/lp-and-lad-regression.html
Yet Another Math Programming Consultant: Linear Programming and LAD Regression
November 9, 2017 - I believe any book on linear programming will mention LAD (Least Absolute Deviation) or \(\ell_1\) regression: minimize the sum of the absolute values of the residuals.
Mobook
https://mobook.github.io/MO-book/notebooks/02/02-lad-regression.html
2.2 Least Absolute Deviation (LAD) Regression — Companion code for the book "Hands-On Mathematical Optimization with Python"
It remains a cornerstone of modern ... This notebook introduces an alternative approach to traditional linear regression, employing linear optimization to fit a model under the Least Absolute Deviation (LAD) metric....
ResearchGate
https://www.researchgate.net/publication/24112751_Least_Absolute_Deviation_Estimation_of_Linear_Econometric_Models_A_Literature_Review
(PDF) Least Absolute Deviation Estimation of Linear Econometric Models: A Literature Review
July 5, 2022 - An intensive research effort has established that in such cases estimation by the Least Absolute Deviation (LAD) method performs well. This paper is an attempt to survey the literature on LAD estimation of single- as well as multi-equation linear econometric models. ... I. Introduction: The Least Squares method of estimation of parameters of linear (regression) models performs well provided that the residuals (disturbances or errors) are
Reddit
https://www.reddit.com/r/learnmath/
r/learnmath on Reddit: Linear Programming: How to find the Least Absolute Deviation?
July 2, 2021 -

I have been writing a program to analyse sets of data. I have implemented linear regression via least squares, but I was hoping to implement least absolute deviation as well. For clarity, LAD seeks to minimise the sum of | y_i - a - b x_i |.

To set the problem up as a set of linear constraints, a new variable e_i is used. Setting e_i = | y_i - a - b x_i |, we can form the inequalities e_i >= y_i - a - b x_i and -e_i <= y_i - a - b x_i. I have implemented the simplex method in my program, so I wished to get the problem into a form it can use. As such, I have converted each constraint to [variables] <= value form, giving -y_i = -a - b x_i - e_i + s_{2i} and y_i = a + b x_i - e_i + s_{2i+1}. The objective function is z = sum_{i=1}^{n} e_i; since we wish to minimise this, I have negated everything to give -z + sum_{i=1}^{n} e_i = 0.

Is this the correct setup for the simplex method? That is to ask: if I were to form a tableau from the above and apply simplex, would I get the desired a and b such that y = b x + a is the LAD line?

Any help would be appreciated.

EDIT: This formulation doesn't seem to function since the only selectable pivot column is the "z" column which is all zeros. How can this be avoided?
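For what it is worth, the e_i formulation in the post runs directly through a general-purpose LP solver once a and b are declared free (sign-unrestricted), which sidesteps the slack-variable and pivot-column bookkeeping entirely. A hypothetical Python sketch with SciPy, not the poster's program:

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 0.5 * x          # exact line, so the LAD fit should be y = 0.5 x + 2

n = len(x)
# Decision vector: [a, b, e_1..e_n]; minimize sum e_i.
c = np.concatenate([[0.0, 0.0], np.ones(n)])
rows, rhs = [], []
for i in range(n):
    row = np.zeros(2 + n)
    row[0], row[1], row[2 + i] = -1.0, -x[i], -1.0   #  y_i - a - b x_i <= e_i
    rows.append(row); rhs.append(-y[i])
    row = np.zeros(2 + n)
    row[0], row[1], row[2 + i] = 1.0, x[i], -1.0     # -(y_i - a - b x_i) <= e_i
    rows.append(row); rhs.append(y[i])
bounds = [(None, None), (None, None)] + [(0, None)] * n   # a, b free; e_i >= 0
res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=bounds, method="highs")
a, b = res.x[0], res.x[1]
print(a, b)   # ≈ 2.0 0.5
```

A hand-rolled tableau would need to split a and b into nonnegative pairs (a = a+ - a-) before the standard simplex method applies, which is one likely source of the all-zero pivot column mentioned in the edit.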

Hong Kong University of Science and Technology
https://math.hkust.edu.hk/~makchen/Paper/LAD.pdf (PDF)
Analysis of least absolute deviation, by Kani Chen
method in Section 4. Use of the linear programming greatly facilitates the computation and makes the method easy to implement. Simulation results show that the method works well for practical ... The rest of the paper is organized as follows. In the next section, the usual linear model and accompanying linear hypotheses are specified and relevant notations introduced. In Section 3, a new least absolute deviation statistic is introduced for testing nested linear hypotheses.
Princeton
https://vanderbei.princeton.edu/307/lectures/lec11_show.pdf (PDF)
ORF 307: Lecture 11 Linear Programming: Chapter 12 Regression
April 2, 2019 - Least Absolute Deviation Regression via LP (first of two methods): \(\min \sum_i \left| b_i - \sum_j a_{ij} x_j \right|\). Equivalent linear program: \(\min \sum_i t_i\) subject to \(-t_i \le b_i - \sum_j a_{ij} x_j \le t_i\), \(i = 1, 2, \ldots, m\). AMPL model.
SAS Support Communities
https://communities.sas.com/t5/SAS-Communities-Library/Automatic-Linearization-Using-the-OPTMODEL-Procedure-Least/ta-p/929444
Automatic Linearization Using the OPTMODEL Procedure: Least Absolute Deviation (LAD) Regression
May 23, 2024 - The objective function for LAD minimizes the sum of the absolute values of the residuals. In a perfect world, we could simply modify the objective function from the OLS formulation, re-run the model, and output the new results. However, LAD regression is more computationally difficult to solve than OLS due to the presence of the absolute value function, which introduces non-smoothness into the optimization problem.
Ampl
https://ampl.com/mo-book/notebooks/02/lad-regression.html
LAD Regression — Hands-On Mathematical Optimization with AMPL in Python
Suppose we have a finite dataset consisting of \(n\) points \(\{({X}^{(i)}, y^{(i)})\}_{i=1,\dots,n}\) with \({X}^{(i)} \in \mathbb{R}^k\) and \(y^{(i)} \in \mathbb{R}\). A linear regression model assumes the relationship between the vector of \(k\) regressors \({X}\) and the dependent variable \(y\) is linear. This relationship is modeled through an error or deviation term \(e_i\), which quantifies how much each of the data points diverge from the model prediction and is defined as follows: \[ \begin{equation}\label{eq:regression} e_i:= y^{(i)} - {m}^\top {X}^{(i)} - b = y^{(i)} - \sum_{j=1}^k X^{(i)}_j m_j - b, \end{equation} \] for some real numbers \(m_1,\dots,m_k\) and \(b\). The Least Absolute Deviation (LAD) is a possible statistical optimality criterion for such a linear regression.
Munich Personal RePEc Archive
https://mpra.ub.uni-muenchen.de/1781/1/MPRA_paper_1781.pdf (PDF)
Least absolute deviation estimation of linear econometric ...
However, models with the disturbances ... that in such cases estimation by the Least Absolute Deviation (LAD) method performs well. This paper is an attempt to survey the literature on LAD estimation of single as well as multi-equation linear econometric models....
GitHub
https://github.com/flatironinstitute/least_absolute_regression
GitHub - flatironinstitute/least_absolute_regression: Least absolute error regression implemented using Linear Programming, primarily to illustrate repository structure conventions.
April 10, 2018 - Least absolute error regression implemented using Linear Programming, primarily to illustrate repository structure conventions. - flatironinstitute/least_absolute_regression
Starred by 6 users
Forked by 5 users
Languages   Jupyter Notebook 96.4% | Python 3.6%
ResearchGate
https://www.researchgate.net/publication/309690752_An_Alternative_Algorithm_and_R_Programming_Implementation_for_Least_Absolute_Deviation_Estimator_of_the_Linear_Regression_Models
(PDF) An Alternative Algorithm and R Programming Implementation for Least Absolute Deviation Estimator of the Linear Regression Models
August 7, 2025 - ... In the traditional regression models based on the least absolute deviations, the optimal parameters of the model are calculated based on a linear optimization problem.