In machine learning, the hinge loss is a loss function used for maximum-margin classification.
Wikipedia
en.wikipedia.org › wiki › Hinge_loss
Hinge loss - Wikipedia
January 26, 2026 - The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it.
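As a concrete illustration of that point, here is a minimal sketch (not from the article) of subgradient descent on the L2-regularized hinge loss of a linear classifier; the toy data, step size, regularization strength, and iteration count are arbitrary choices for the example.

```python
import numpy as np

# Toy linearly separable data with labels in {-1, +1}.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

lam = 0.1        # L2 regularization strength (arbitrary for this sketch)
eta = 0.1        # step size
w = np.zeros(2)

for t in range(200):
    margins = y * (X @ w)
    # A subgradient of max(0, 1 - y w.x) is -y*x where the margin is
    # violated (y w.x < 1) and 0 elsewhere.
    active = margins < 1.0
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(X) + lam * w
    w -= eta * grad

mean_hinge = np.maximum(0.0, 1.0 - y * (X @ w)).mean()
print("w =", w, "mean hinge loss =", mean_hinge)
```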
Videos
22:50 · Hinge Loss, SVMs, and the Loss of Users - YouTube
12:01 · Hinge Loss for Binary Classifiers - YouTube
41:14 · 8. Loss Functions for Regression and Classification - YouTube
What is the Hinge Loss in SVM in Machine Learning | Data ...
10:46 · Gradient Descent for Support Vector Machines and Subgradients - ...
UBC Computer Science
cs.ubc.ca › ~schmidtm › Courses › 340-F17 › L21.pdf pdf
CPSC 340: Machine Learning and Data Mining More Linear Classifiers Fall 2017
• This is called the hinge loss. – It’s convex: max(constant, linear). – It’s not degenerate: w = 0 now gives an error of 1 instead of 0. Hinge Loss: Convex Approximation to 0-1 Loss
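A quick numeric check of the slide's two claims (this snippet is illustrative, not from the lecture notes): at w = 0 every example has margin 0, so the hinge loss is max(0, 1 - 0) = 1 per example, and pointwise the loss is a max of the constant 0 and a function linear in w.

```python
import numpy as np

X = np.array([[1.0, 2.0], [-0.5, 1.5]])
y = np.array([1.0, -1.0])

def hinge(w):
    # max(constant 0, function linear in w) per example -> convex in w.
    return np.maximum(0.0, 1.0 - y * (X @ w))

print(hinge(np.zeros(2)))   # [1. 1.]  -- w = 0 gives an error of 1, not 0
```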
Carnegie Mellon University
cs.cmu.edu › ~yandongl › loss.html
Loss Function
0/1 loss: $\min_\theta\sum_i L_{0/1}(\theta^Tx)$. We define $L_{0/1}(\theta^Tx) = 1$ if $y\cdot f < 0$, and $= 0$ otherwise. Non-convex and very hard to optimize. Hinge loss: approximate 0/1 loss by $\min_\theta\sum_i H(\theta^Tx)$. We define $H(\theta^Tx) = \max(0, 1 - y\cdot f)$. Apparently $H$ ...
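Using the definitions from the snippet (with $f = \theta^T x$), a short table makes the upper-bound relationship between the two losses visible; the margin grid below is an arbitrary choice for the example.

```python
import numpy as np

margins = np.array([-2.0, -1.0, 0.0, 0.5, 1.0, 2.0])  # values of y*f
zero_one = (margins < 0).astype(float)                 # L_{0/1}(theta^T x)
hinge = np.maximum(0.0, 1.0 - margins)                 # H = max(0, 1 - y*f)

for m, z, h in zip(margins, zero_one, hinge):
    print(f"y*f = {m:5.1f}   0/1 = {z:.0f}   hinge = {h:.1f}")
# hinge >= 0/1 everywhere, and both are 0 once y*f >= 1.
```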
Stack Exchange
math.stackexchange.com › questions › 3587895 › showing-regularized-hinge-loss-is-convex-or-concave
Showing regularized Hinge Loss is convex or concave - Mathematics Stack Exchange
March 20, 2020 - From 1 and 2, $L'(w)$ is convex because the max of two convex functions is convex.
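Spelled out, the argument in that answer goes through in three standard steps (my paraphrase, applied to the regularized objective $L'(w) = \max(0, 1 - y\,w^{\top}x) + \lambda \lVert w \rVert^2$ the question concerns):

```latex
\begin{align*}
&\text{1. } w \mapsto 1 - y\,w^{\top}x \text{ is affine, hence convex; the constant } 0 \text{ is convex.} \\
&\text{2. } w \mapsto \max\bigl(0,\, 1 - y\,w^{\top}x\bigr) \text{ is convex, as a pointwise max of convex functions.} \\
&\text{3. } \lambda \lVert w \rVert^{2} \text{ is convex for } \lambda \ge 0, \text{ and a sum of convex functions is convex.}
\end{align*}
```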
arXiv
arxiv.org › pdf › 2103.00233 pdf
Learning with Smooth Hinge Losses Junru Luo ∗, Hong Qiao †, and Bo Zhang ‡
condition for a convex surrogate loss $\ell$ to be classification-calibrated, as stated ... Secondly, $\psi_G(\alpha;\sigma)$ and $\psi_M(\alpha;\sigma)$ are infinitely differentiable. By replacing the Hinge loss with these two smooth Hinge losses, we obtain two smooth support ...
Davidrosenberg
davidrosenberg.github.io › mlcourse › Archive › 2016 › Homework › hw6-multiclass › hw6.pdf pdf
Generalized Hinge Loss and Multiclass SVM
we will eventually need our loss function to be a convex function of some $w \in \mathbb{R}^d$ that parameterizes our hypothesis space. It’ll be clear in what follows what we’re talking about. ... we have a linear hypothesis space. We’ll start with a special case, that the hinge loss is a convex ...
arXiv
arxiv.org › abs › 2103.00233
[2103.00233] Learning with Smooth Hinge Losses
March 15, 2021 - In this paper, we introduce two smooth Hinge losses $\psi_G(\alpha;\sigma)$ and $\psi_M(\alpha;\sigma)$ which are infinitely differentiable and converge to the Hinge loss uniformly in $\alpha$ as $\sigma$ tends to $0$. By replacing the Hinge loss with these two smooth Hinge losses, we obtain two smooth support vector machines (SSVMs), respectively. Solving the SSVMs with the Trust Region Newton method (TRON) leads to two quadratically convergent algorithms. Experiments in text classification tasks show that the proposed SSVMs are effective in real-world applications. We also introduce a general smooth convex loss function to unify several commonly-used convex loss functions in machine learning.
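The abstract does not give closed forms for $\psi_G$ and $\psi_M$, so the sketch below uses a generic softplus-style smoothing, $\sigma \log(1 + e^{(1-\alpha)/\sigma})$, purely to illustrate the idea of an infinitely differentiable surrogate that approaches the hinge loss as $\sigma \to 0$; it is not the paper's construction.

```python
import numpy as np

def hinge(alpha):
    return np.maximum(0.0, 1.0 - alpha)

def smooth_hinge(alpha, sigma):
    # Softplus smoothing of max(0, 1 - alpha): infinitely differentiable,
    # and -> hinge(alpha) as sigma -> 0. Illustrative stand-in for psi_G/psi_M.
    return sigma * np.logaddexp(0.0, (1.0 - alpha) / sigma)

alpha = np.linspace(-2, 3, 6)
for sigma in (1.0, 0.1, 0.01):
    gap = np.max(np.abs(smooth_hinge(alpha, sigma) - hinge(alpha)))
    print(f"sigma = {sigma:4.2f}   max |smooth - hinge| = {gap:.4f}")
```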
Davidrosenberg
davidrosenberg.github.io › mlcourse › Archive › 2017 › Homework › hw5.pdf pdf
Homework 5: Generalized Hinge Loss and Multiclass SVM
New homework on multiclass hinge loss and multiclass SVM · New homework on Bayesian methods, specifically the beta-binomial model, hierarchical models, empirical Bayes ML-II, MAP-II · New short lecture on correlated variables with L1, L2, and Elastic Net regularization · Added some details about subgradient methods, including a one-slide proof that subgradient descent moves us towards a minimizer of a convex ...
JMLR
jmlr.org › papers › v9 › bartlett08a.html
Classification with a Reject Option using a Hinge Loss
Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs).
arXiv
arxiv.org › abs › 1309.6813
[1309.6813] Hinge-loss Markov Random Fields: Convex Inference for Structured Prediction
September 26, 2013 - Graphical models for structured domains are powerful tools, but the computational complexities of combinatorial prediction spaces can force restrictions on models, or require approximate inference in order to be tractable. Instead of working in a combinatorial space, we use hinge-loss Markov random fields (HL-MRFs), an expressive class of graphical models with log-concave density functions over continuous variables, which can represent confidences in discrete predictions.
Gabormelli
gabormelli.com › RKB › Hinge-Loss_Function
Hinge-Loss Function - GM-RKB - Gabor Melli
The hinge loss function is defined as: $V(f(\vec{x}), y) = \max(0, 1 - y f(\vec{x})) = |1 - y f(\vec{x})|_{+}$. The hinge loss provides a relatively tight, convex upper bound on the 0–1 indicator function. Specifically, the hinge loss equals the 0–1 indicator function ...
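The truncated last sentence presumably states where the bound is attained; the casewise check below (mine, not from the page) verifies the upper-bound claim, writing $t = y f(\vec{x})$ so that the 0–1 loss is $\mathbf{1}[t < 0]$:

```latex
\begin{align*}
t \ge 1:     &\quad \max(0, 1 - t) = 0 = \mathbf{1}[t < 0] \quad \text{(bound is tight)} \\
0 \le t < 1: &\quad \max(0, 1 - t) = 1 - t \in (0, 1] \;\ge\; 0 = \mathbf{1}[t < 0] \\
t < 0:       &\quad \max(0, 1 - t) = 1 - t > 1 = \mathbf{1}[t < 0]
\end{align*}
```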
ScienceDirect
sciencedirect.com › topics › engineering › hinge-loss-function
Hinge Loss Function - an overview | ScienceDirect Topics
The hinge loss encourages the network to maximize the margin around the decision boundary separating the two classes, which can lead to better generalization performance than using cross-entropy. Additionally, the hinge loss has sparse gradients, which can be useful for training large models with limited memory (unlike cross-entropy with dense gradients). A frequently used variant of the hinge loss is the squared hinge loss, given by $\max(0, 1 - y f(\vec{x}))^2$.
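To see the sparse-gradients point concretely, here is a small sketch (illustrative, not from ScienceDirect) comparing per-example derivatives of the hinge, squared hinge, and logistic (cross-entropy) losses with respect to the margin: the hinge variants are exactly zero once the margin exceeds 1, while the logistic gradient is nonzero everywhere.

```python
import numpy as np

t = np.array([-1.0, 0.5, 1.0, 2.0, 5.0])  # margins y*f

# d/dt of each loss as a function of the margin t:
g_hinge = np.where(t < 1.0, -1.0, 0.0)                  # max(0, 1 - t)
g_sq_hinge = np.where(t < 1.0, -2.0 * (1.0 - t), 0.0)   # max(0, 1 - t)^2
g_logistic = -1.0 / (1.0 + np.exp(t))                   # log(1 + e^{-t})

for row in zip(t, g_hinge, g_sq_hinge, g_logistic):
    print("t = %5.1f   hinge' = %5.1f   sq-hinge' = %5.1f   logistic' = %8.5f" % row)
# Both hinge derivatives vanish for t >= 1; the logistic derivative never does.
```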
arXiv
arxiv.org › abs › 1512.07797
[1512.07797] The Lovász Hinge: A Novel Convex Surrogate for Submodular Losses
May 15, 2017 - We propose instead a novel surrogate ... loss function to compute a gradient or cutting-plane. We prove that the Lovász hinge is convex and yields an extension. ...
Massachusetts Institute of Technology
mit.edu › ~9.520 › spring07 › Classes › svmwithfenchel.pdf pdf
Several Views of Support Vector Machines Ryan M. Rifkin
Unfortunately, the 0-1 loss is not convex. Therefore, we have little hope of being able to optimize this loss function in practice. (Note that the representer theorem does hold ... This is (basically) an SVM. So what? How will you solve this problem (find the minimizing y)? The hinge ...
Stack Exchange
stats.stackexchange.com › questions › 520792 › hinge-loss-is-the-tightest-convex-upper-bound-on-the-0-1-loss
svm - Hinge loss is the tightest convex upper bound on the 0-1 loss - Cross Validated
April 21, 2021 - I have read many times that the hinge loss is the tightest convex upper bound on the 0-1 loss (e.g. here, here and here). However, I have never seen a formal proof of this statement. How can we for...