The optimization objective of an SVM is to choose w and b so that the margin around the separating hyperplane is maximized.

Mathematically speaking, it is a constrained convex optimization task (a quadratic program) which is solved via the Karush-Kuhn-Tucker (KKT) conditions, using Lagrange multipliers.

The following video explains this in simple terms for the linearly separable case:

https://www.youtube.com/watch?v=1NxnPkZM9bc

How this is calculated is also explained in more detail here, for both the primal and dual formulations:

https://www.csie.ntu.edu.tw/~cjlin/talks/rome.pdf
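The objective above can be sketched in a few lines of code (assuming scikit-learn is available; the data points here are made up for illustration). A linear-kernel SVC with a very large C approximates the hard-margin solution, and the learned w and b determine the margin width 2/||w||:

```python
# Made-up, linearly separable toy data; a large C approximates a hard margin.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

w = clf.coef_[0]                  # learned normal vector of the hyperplane
b = clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)  # width of the margin
print(w, b, margin)
```

The w, b and margin all come out of the solver together; the point is only that choosing w, b and maximizing the margin are the same problem.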

Answer from codeslord on Stack Overflow

Answer 2 of 2 (score 0)

The margin between the separating hyperplane and the closest points of each class is an essential feature of this algorithm.

See, you have two hyperplanes: (1) w^T x + b = 1 and (2) w^T x + b = -1, together with the constraints w^T x + b >= 1 if y = 1 and w^T x + b <= -1 if y = -1. This says that any vector with label y = 1 must lie either on or beyond hyperplane (1). The same applies to the vectors with label y = -1 and hyperplane (2).

Note: if those requirements can be fulfilled, it implicitly means the dataset is linearly separable. This makes sense, because otherwise no such margin could be constructed.
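The two constraints can be checked numerically. A tiny sketch with made-up w, b and labeled points (all values invented for illustration):

```python
# Check constraints (1) and (2) for made-up w, b and labeled points.
import numpy as np

w = np.array([0.5, 0.5])
b = -2.0

points = np.array([[1.0, 1.0], [0.0, 0.0], [3.0, 3.0], [4.0, 4.0]])
labels = np.array([-1, -1, 1, 1])

scores = points @ w + b    # w^T x + b for each point
ok = labels * scores >= 1  # y=+1 needs score >= 1, y=-1 needs score <= -1
print(scores, ok)          # all constraints hold for this data
```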

So, what an SVM tries to find is a decision boundary which is half-way between (1) and (2). Let's define this boundary as (3) w^T x + b = 0. What you see here is that (1), (2) and (3) are parallel hyperplanes, because they share the same parameters w and b. The vector w determines the orientation of those planes; recall that a vector always has a direction and a magnitude/length.

The question is now: how wide is the margin around (3)? The constraints tell us that any vector with label y = 1 which is closest to (3) lies exactly on hyperplane (1), hence w^T x + b = 1 for such an x. The same applies for the closest vectors with a negative label and (2). The vectors on these planes are called 'support vectors', and the decision boundary (3) depends only on them. Subtracting (2) from (1), with x_+ a support vector on (1) and x_- a support vector on (2), gives:

(w^T x_+ + b) - (w^T x_- + b) = 1 - (-1)  =>  w^T (x_+ - x_-) = 2

Note: the x in the two plane equations are different support vectors, one from each class.

Now, we want the direction of w while ignoring its length, to get the shortest distance between the planes: this distance is measured along a line segment perpendicular to (3), i.e. along the unit normal w/||w||. Dividing the equation above by ||w|| gives 2/||w|| on the right-hand side; the left-hand side becomes the projection of the difference of the two support vectors onto the unit normal, which is exactly the perpendicular distance between the two planes. So the distance between the planes is 2/||w||, and this distance must be maximized.
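The 2/||w|| claim is easy to verify numerically with a made-up w and b (values chosen only so the arithmetic is clean):

```python
# Numeric check of the 2/||w|| margin width with invented w and b.
import numpy as np

w = np.array([3.0, 4.0])  # arbitrary normal vector, ||w|| = 5
b = -2.0

# A point on plane (1): w^T x + b = 1, and one on plane (2): w^T x + b = -1.
x_plus = np.array([1.0, 0.0])       # 3*1 + 4*0 - 2 = 1
x_minus = np.array([1.0 / 3.0, 0.0])  # 3*(1/3) + 4*0 - 2 = -1

# Projection of (x_plus - x_minus) onto the unit normal w/||w||:
dist = w @ (x_plus - x_minus) / np.linalg.norm(w)
print(dist, 2 / np.linalg.norm(w))  # both 0.4
```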

Edit: as others state here, use Lagrange multipliers (or an algorithm such as SMO, which operates on the dual) to minimize (1/2)||w||^2 subject to y_i (w^T x_i + b) >= 1 for all i. This is the convex form of the optimization problem for the primal SVM: maximizing 2/||w|| is equivalent to minimizing ||w||, and (1/2)||w||^2 is a convenient differentiable version of that.
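The primal problem can also be handed directly to a generic constrained solver. A sketch assuming SciPy is available (SLSQP instead of SMO, purely for illustration; the toy data are made up):

```python
# Solve min (1/2)||w||^2  s.t.  y_i (w^T x_i + b) >= 1 with SciPy's SLSQP.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

def objective(theta):  # theta = [w1, w2, b]
    w = theta[:2]
    return 0.5 * (w @ w)

# One inequality constraint per sample: y_i (w^T x_i + b) - 1 >= 0.
constraints = [
    {"type": "ineq",
     "fun": (lambda theta, xi=xi, yi=yi: yi * (theta[:2] @ xi + theta[2]) - 1.0)}
    for xi, yi in zip(X, y)
]

res = minimize(objective, x0=np.zeros(3), method="SLSQP", constraints=constraints)
w, b = res.x[:2], res.x[2]
print(w, b, 2 / np.linalg.norm(w))
```

For this data the optimal separator is x1 + x2 = 4, so the solver should recover w close to (0.5, 0.5) and b close to -2, giving a margin of 2*sqrt(2).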

Top answer (1 of 4, score 42)

"A geometric margin is simply the euclidean distance between a certain x (data point) to the hyperlane. "

I don't think that is a proper definition for the geometric margin, and I believe that is what is confusing you. The geometric margin is just a scaled version of the functional margin.

You can think of the functional margin just as a test that tells you whether a particular point is properly classified or not. The geometric margin is the functional margin divided by ||w||.

If you check the formula for the functional margin, y (w^T x + b), you can notice that independently of the label, the result is positive for properly classified points (e.g. sign(1 * 5) = 1 and sign(-1 * -5) = 1) and negative otherwise. If you divide that by ||w||, you get the geometric margin.

Why does the geometric margin exist?

Well, to maximize the margin you need more than just the sign; you need a notion of magnitude. The functional margin gives you a number, but without a reference you can't tell whether the point is actually far away from or close to the decision plane. The geometric margin tells you not only whether the point is properly classified, but also its actual distance to the decision plane, i.e. the functional margin measured in units of ||w||.
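A short sketch of this distinction, with one made-up point and made-up parameters: scaling (w, b) by a constant changes the functional margin but leaves the geometric margin (and the hyperplane itself) untouched.

```python
# Functional vs geometric margin for one invented point.
import numpy as np

w = np.array([3.0, 4.0])
b = -2.0
x = np.array([2.0, 1.0])
y = 1  # true label

functional = y * (w @ x + b)                # 1 * (6 + 4 - 2) = 8
geometric = functional / np.linalg.norm(w)  # 8 / 5 = 1.6

# Rescale the parameters by 10: the hyperplane w^T x + b = 0 is unchanged.
w10, b10 = 10 * w, 10 * b
functional10 = y * (w10 @ x + b10)                # 80
geometric10 = functional10 / np.linalg.norm(w10)  # still 1.6

print(functional, geometric, functional10, geometric10)
```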

Answer 2 of 4 (score 10)

The functional margin represents the correctness and confidence of the prediction, provided the magnitude of the weight vector w (the normal to the hyperplane) is held constant.

By correctness: the functional margin should always be positive, since for correct predictions y is -1 when w^T x + b is negative and y is 1 when w^T x + b is positive. If the functional margin is negative, the sample is classified into the wrong group.

By confidence: the functional margin can change for two reasons: 1) the sample (y_i and x_i) changes, or 2) the weight vector w is rescaled (by scaling w and b together). As long as w stays the same, no matter how large its magnitude is, the larger the functional margin, the more confident we can be that the point is classified correctly.

But the functional margin as defined does not keep the magnitude of w fixed, so we define the geometric margin as mentioned above: the functional margin of a training example normalized by the magnitude of w. Under this normalization, the value of the geometric margin depends only on the samples and not on the scaling of w.

The geometric margin is invariant to rescaling of the parameters, which is the only difference between the geometric margin and the functional margin.

EDIT:

The introduction of the functional margin plays two roles: 1) it motivates the maximization of the geometric margin, and 2) it transforms the maximization of the geometric margin into the minimization of the magnitude of the weight vector.

Since scaling the parameters w and b changes nothing meaningful, and the functional margin scales in the same way as the parameters, we can either fix ||w|| = 1 and maximize the functional margin (which then equals the geometric margin), or rescale the parameters so that the functional margin is 1 and then minimize ||w||.
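The normalization step described above can be sketched numerically (all values made up): rescale a (w, b) so that the smallest functional margin over the data becomes exactly 1, and observe that the geometric margin is unchanged.

```python
# Rescale (w, b) so the smallest functional margin is 1; the geometric
# margin (and the hyperplane itself) is unchanged by this rescaling.
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

w = np.array([2.0, 2.0])  # arbitrary separating direction, not yet normalized
b = -8.0                  # hyperplane x1 + x2 = 4

functional = y * (X @ w + b)
scale = functional.min()         # smallest functional margin over the data
w_n, b_n = w / scale, b / scale  # rescaled parameters

geo_before = functional.min() / np.linalg.norm(w)
geo_after = (y * (X @ w_n + b_n)).min() / np.linalg.norm(w_n)
print(geo_before, geo_after)  # equal: rescaling leaves the geometric margin alone
```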
