Top answer (1 of 3, score 20)

To answer your questions directly:

  • A loss function is a scoring function used to evaluate how well a given boundary separates the training data. Each loss function represents a different set of priorities about what the scoring criteria are. In particular, the hinge loss function doesn't care about correctly classified points as long as they're correct, but imposes a penalty for incorrectly classified points which is directly proportional to how far away they are on the wrong side of the boundary in question.
  • A boundary's loss score is computed by seeing how well it classifies each training point, computing each training point's loss value (which is a function of its distance from the boundary), then adding up the results.

    By plotting how a single training point's loss score would vary based on how well it is classified, you can get a feel for the loss function's priorities. That's what your graphs are showing— the size of the penalty that would hypothetically be assigned to a single point based on how confidently it is classified or misclassified. They're pictures of the scoring rubric, not calculations of an actual score. [See diagram below!]

  • At least conceptually, you minimize the loss for a dataset by considering all possible linear boundaries, computing their loss scores, and picking the boundary whose loss score is smallest. Remember that the plots just show how an individual point would be scored in each case based on how accurately it is classified.

  • Interpret loss plots as follows: The horizontal axis corresponds to $y_i(w \cdot x_i + b)$, which measures how accurately a point is classified. Positive values correspond to increasingly confident correct classifications, while negative values correspond to increasingly confident incorrect classifications. (Or, geometrically, $w \cdot x_i + b$ is the signed distance of the point from the boundary, and we prefer boundaries that separate points as widely as possible.) The vertical axis is the magnitude of the penalty. (They're simplified in the sense that they're showing the scoring rubric for a single point; they're not showing you the computed total loss for various boundaries as a function of which boundary you pick.) A short code sketch just below makes the rubric concrete.
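To make the "scoring rubric" picture concrete, here is a minimal sketch (my own illustration, not code from the question) that tabulates the hinge penalty a single point would receive as a function of how confidently it is classified:

```python
# Hinge "rubric" for a single point: the penalty depends only on the
# class-adjusted score s = y * (planar-distance value of the point).
def hinge_penalty(s):
    return max(0.0, 1.0 - s)

# s > 0: correctly classified; s < 0: misclassified.
for s in [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]:
    print(f"score {s:+.1f} -> penalty {hinge_penalty(s):.1f}")
# prints 3.0, 2.0, 1.0, 0.0, 0.0, 0.0
```

Plotting penalty against s reproduces the hinge curve in your figure: flat at zero on the confidently correct side, rising linearly on the wrong side.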

Details follow.


  1. The linear SVM problem is the problem of finding a line (or plane, etc.) in space that separates points of one class from points of the other class by the widest possible margin. You want to find, out of all possible planes, the one that separates the points best.

  2. If it helps to think geometrically, a plane can be completely defined by two parameters: a vector $w$ perpendicular to the plane (which tells you the plane's orientation) and an offset $b$ (which tells you how far it is from the origin). Each choice of $(w, b)$ is therefore a choice of plane. Another helpful geometric fact for intuition: if $(w, b)$ is some plane and $x$ is a point in space, then $w \cdot x + b$ is the distance between the plane and the point (!).

    [Nitpick: If $w$ is not a unit vector, then this formula actually gives a multiple of a distance, but the constants don't matter here.]

  3. That planar-distance formula is useful because it defines a measurement scheme throughout all space: points lying on the plane have a value of 0; points far away on one side of the boundary have increasingly positive value, and points far away on the other side of the boundary have increasingly negative value.
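    A minimal sketch of that measurement scheme (my own illustration; the plane $(w, b)$ and the sample points are invented for the example):

```python
import numpy as np

def planar_value(w, b, x):
    # Signed planar-distance value w . x + b. It is a true distance only if
    # w is a unit vector; otherwise it is a constant multiple of the
    # distance, which is all we need here.
    return float(np.dot(w, x) + b)

w = np.array([1.0, 1.0])   # normal vector of the plane (not unit length)
b = -1.0                   # offset

for x in [np.array([2.0, 2.0]), np.array([0.5, 0.5]), np.array([0.0, 0.0])]:
    print(x, "->", planar_value(w, b, x))
# prints +3.0 (one side), 0.0 (on the plane), -1.0 (the other side)
```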

  4. We have two classes of points. By convention, we'll call one of the classes positive and the other negative. An effective decision boundary will be one that assigns very positive planar-distance values to positive points and very negative planar-distance values to negative points. In formal terms, if $y_i = \pm 1$ denotes the class of the $i$th training point and $x_i$ denotes its position, then what we want is for $y_i$ and $w \cdot x_i + b$ to have the same sign and for $w \cdot x_i + b$ to be large in magnitude.

  5. A loss function is a way of scoring how well the boundary assigns planar-distance values that match each point's actual class. A loss function is always a function of two arguments: for the first argument we plug in the true class $y_i$ of the point in question, and for the second we plug in the planar-distance value $w \cdot x_i + b$ our plane assigns to it. The total loss for the planar boundary is the sum of the losses for each of the training points.

    Based on our choice of loss function, we might express a preference that points be classified correctly but that we don't care about the magnitude of the planar-distance value if it's beyond, say, 1000; or we might choose a loss function which allows some points to be misclassified as long as the rest are very solidly classified, etc.

    Your graphs show how different loss functions behave on a single point whose class is fixed and whose planar distance varies ($y_i(w \cdot x_i + b)$ runs along the horizontal axis). This can give you an idea of what the loss function is prioritizing. (Under this scheme, by the way, positive values of $y_i(w \cdot x_i + b)$ correspond to increasingly confident correct classification, and negative values correspond to increasingly confident incorrect classification.)

    As a concrete example, the hinge loss function is a mathematical formulation of the following preference:

    Hinge loss preference: When evaluating planar boundaries that separate positive points from negative points, it is irrelevant how far away from the boundary the correctly classified points are. However, misclassified points incur a penalty that is directly proportional to how far they are on the wrong side of the boundary.

    Formally, this preference falls out of the definition of the hinge loss, $\max(0,\, 1 - y_i(w \cdot x_i + b))$: a correctly classified point incurs zero loss once its class-adjusted planar distance $y_i(w \cdot x_i + b)$ is greater than 1. On the other hand, the loss grows linearly, in direct proportion to the planar distance on the wrong side, as the classification becomes more badly incorrect.
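    As a quick numeric check (an illustrative example of the formula, not from the original answer): for a positive point ($y_i = +1$),

    $\quad \max(0,\, 1 - (+1)(2.5)) = 0 \qquad \text{and} \qquad \max(0,\, 1 - (+1)(-1.5)) = 2.5,$

    so a point confidently on the correct side costs nothing, while a misclassified point is charged in proportion to how far it sits on the wrong side.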

  6. Computing the loss value means computing the value of the loss for a particular set of training points and a particular boundary. Minimizing the loss means finding, for a particular set of training data, the boundary for which the loss value is minimal.

    For a dataset as in the 2D picture provided, first draw any linear boundary; call one side the positive (or red square) side, and the other the negative (or blue circle) side. You can compute the loss of the boundary by first measuring the planar-distance value $w \cdot x_i + b$ of each point; here, that is the signed distance between each training point and the boundary. Points on the positive side have positive values and points on the negative side have negative values. Next, each point contributes $\max(0,\, 1 - y_i(w \cdot x_i + b))$ to the total loss. Compute the loss for each of the points, now that you've computed $w \cdot x_i + b$ for each point and you know whether the point is a red square ($y_i = +1$) or a blue circle ($y_i = -1$). Add them all up to compute the overall loss. (A small sketch of this computation follows the next paragraph.)

    The best boundary is the one that has the lowest loss on this dataset out of all possible linear boundaries you could draw. (Time permitting, I'll add illustrations for all of this.)
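    A sketch of that procedure on a made-up toy dataset (my own illustration; the points, labels, and the two candidate boundaries are invented for the example):

```python
import numpy as np

# Toy 2D training set: labels +1 (red squares) and -1 (blue circles).
X = np.array([[2.0, 2.0], [3.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([+1, +1, -1, -1])

def total_hinge_loss(w, b):
    scores = X @ w + b                           # planar-distance value of each point
    losses = np.maximum(0.0, 1.0 - y * scores)   # per-point hinge loss
    return losses.sum()

# Two candidate boundaries; the one with the lower total loss wins.
for w, b in [(np.array([1.0, 1.0]), 0.0),    # roughly separates the classes
             (np.array([1.0, -1.0]), 0.0)]:  # a poor choice
    print(w, b, "-> total hinge loss", total_hinge_loss(w, b))
```

    The first candidate separates the classes with room to spare and scores a total loss of 0; the second cuts through them and is penalized, so the comparison picks the first.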

  7. If the training data can be separated by a linear boundary, then any boundary which does so will have a hinge loss of zero— the lowest achievable value. Based on our preferences as expressed by hinge loss, all such boundaries tie for first place.

    Only if the training data is not linearly separable will the best boundary have a nonzero (positive, worse) hinge loss. In that case, the hinge loss preference will choose the boundary so that whichever points are misclassified are not too far on the wrong side.

Addendum: As you can see from the shape of the curves, the loss functions in your picture express the following scoring rubrics:

  • Zero-one loss, $\mathbf{1}[\,y_i(w \cdot x_i + b) \leq 0\,]$: Being misclassified is uniformly bad; points on the wrong side of the boundary get the same size penalty regardless of how far on the wrong side they are. Similarly, all points on the correct side of the boundary get no penalty and no special bonus, even if they're very far on the correct side.
  • Exponential loss, $e^{-y_i(w \cdot x_i + b)}$: The more correct you are, the better. But once you're on the correct side of the boundary, it gets less and less important that you be far away from the boundary. On the other hand, the further you are on the wrong side of the boundary, the more exponentially urgent a problem it is.
  • The third loss function in your picture: qualitatively the same as the previous one.
  • Hinge loss, $\max(0,\, 1 - y_i(w \cdot x_i + b))$: If you're correctly classified beyond the margin ($y_i(w \cdot x_i + b) \geq 1$), then it's irrelevant just how far on the correct side you are. On the other hand, if you're within the margin or on the incorrect side, you get a penalty directly proportional to how far you are on the wrong side. (The small sketch below evaluates these rubrics at a few sample values.)
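A small numerical sketch of these rubrics (my own illustration, using the standard textbook forms of each loss; the variable m stands for the class-adjusted score $y_i(w \cdot x_i + b)$):

```python
import math

# Each loss is written as a function of the class-adjusted score
# m = y_i * (w . x_i + b): positive = correctly classified, negative = misclassified.
def zero_one(m):
    return 1.0 if m <= 0 else 0.0   # flat penalty for any mistake

def exponential(m):
    return math.exp(-m)             # explodes for confident mistakes

def hinge(m):
    return max(0.0, 1.0 - m)        # zero beyond the margin, linear otherwise

for m in [-2.0, -1.0, 0.0, 0.5, 1.0, 2.0]:
    print(f"m={m:+.1f}  zero-one={zero_one(m):.2f}  "
          f"exponential={exponential(m):.2f}  hinge={hinge(m):.2f}")
```

Reading down the columns reproduces the shapes described above: the zero-one column jumps from 1 to 0 at m = 0, the exponential column decays smoothly, and the hinge column is zero once m ≥ 1 and grows linearly as m becomes more negative.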
Answer 2 of 3 (score 5)

The hinge function is convex and the problem of its minimization can be cast as a quadratic program:

$\quad \min_{w,b,t} \ \tfrac{1}{2}\|w\|_2^2 + C\sum_{i=1}^m t_i$

$\quad t_i \geq 1 - y_i(wx_i + b) , \forall i=1,\ldots,m \quad t_i \geq 0 , \forall i=1,\ldots,m$

or in conic form, introducing an extra variable $z \geq \tfrac{1}{2}\|w\|_2^2$,

$\quad \min_{w,b,t,z} \ z + C\sum_{i=1}^m t_i$

$\quad t_i \geq 1 - y_i(wx_i + b) , \forall i=1,\ldots,m \quad t_i \geq 0 , \forall i=1,\ldots,m \quad (z, 1, w) \in \mathcal{Q}_r$

where $\mathcal{Q}_r$ is the rotated quadratic cone.

You can solve this problem either using a QP/SOCP solver, such as MOSEK, or with one of the specialized algorithms you can find in the literature. Note that the minimum is not necessarily zero, because of the interplay between the norm of $w$ and the hinge-loss term $C\sum_i t_i$ of the objective function.
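For concreteness, here is a minimal sketch of the QP above in CVXPY (my own illustration; CVXPY is just one convenient modelling layer that can hand the problem to a QP/SOCP solver such as MOSEK, and the toy data X, y and the trade-off constant C are made up):

```python
import cvxpy as cp
import numpy as np

# Made-up toy data: rows of X are the points x_i, y holds the labels +/-1.
X = np.array([[2.0, 2.0], [3.0, 1.0], [-1.0, -2.0], [-2.0, -1.0], [0.5, -0.5]])
y = np.array([+1, +1, -1, -1, +1])
m, n = X.shape
C = 1.0  # trade-off between the norm of w and the total hinge loss

w = cp.Variable(n)
b = cp.Variable()
t = cp.Variable(m)  # t_i upper-bounds the hinge loss of point i

constraints = [t >= 1 - cp.multiply(y, X @ w + b),  # t_i >= 1 - y_i (w x_i + b)
               t >= 0]
objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(t))

prob = cp.Problem(objective, constraints)
prob.solve()

print("optimal w:", w.value, " b:", b.value, " total slack:", t.value.sum())
```

The same model can be written directly against a solver's native API; the point is only that the hinge loss enters through the linear constraints on the slack variables t.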

In the second picture, every feasible solution is a line. An optimal one will balance the separation of the two classes, given by the hinge-loss term, against the norm of $w$.

As for references, searching on Google Scholar works quite well, and even following the references from Wikipedia is already a good first step.

You can draw some inspiration from one of my recent blog posts:

http://blog.mosek.com/2014/03/swissquant-math-challenge-on-lasso.html

It deals with a similar problem. Also, for more general concepts on conic optimization and regression, you can check the classic book by Boyd.
