generalization of Euclidean geometry to higher-dimensional vector spaces
Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, in Euclid's Elements, it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean …
Factsheet
Named after Euclid
space
Wolfram MathWorld
L^2-Norm -- from Wolfram MathWorld
July 26, 2003 - The l^2-norm (also written "ℓ^2-norm") |x| is a vector norm defined for a complex vector x = [x_1; x_2; ...; x_n] by |x| = sqrt(sum_(k=1)^n |x_k|^2), where |x_k| on the right denotes the complex modulus.
Byhand
L2 Norm - by Prof. Tom Yeh
February 13, 2026 - It tells us how far a point is from the origin. When we apply the L2 norm to the difference between two vectors, it gives the straight-line (Euclidean) distance between them. For example, the displacement vectors d1 = B − A and d2 = A − ...
Kaggle
Euclidean distance and Normalizing a Vector
Wikipedia
Euclidean distance - Wikipedia
December 3, 2025 - By Dvoretzky's theorem, every finite-dimensional normed vector space has a high-dimensional subspace on which the norm is approximately Euclidean; the Euclidean norm is the only norm with this property. It can be extended to infinite-dimensional vector spaces as the L2 norm or L2 distance.
CodingNomads
What is L2 Norm?
The x's you see in the above equation are vectors of points. Which means the L2-norm generalizes to multi-dimensions. This can feel confusing at first because you are so used to measuring distance in 2-dimensions. You naturally see distance as the space between two points, and you always measure that distance as the straightest possible line between the two points, just like in a triangle.
Medium
Is L2-Norm = Euclidean Distance?. One of the concepts that can be a… | by Saurav Gupta | MLearning.ai | Medium
February 26, 2022 - Euclidean distance is just one technique for calculating the distance between two vectors. For vector norms, when the distance-calculating technique is Euclidean it is called the L2-Norm, and when the technique is Manhattan it is called the L1-Norm.
Wikipedia
Norm (mathematics) - Wikipedia
March 5, 2026 - In particular, the Euclidean distance ... or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself. A seminorm satisfies the first two properties of a norm but may be zero for vectors other ...
Top answer
1 of 2
43

A norm is a function that takes a vector as an input and returns a scalar value that can be interpreted as the "size", "length" or "magnitude" of that vector. More formally, norms are defined as having the following mathematical properties:

  • They scale multiplicatively, i.e. Norm(a·v) = |a|·Norm(v) for any scalar a
  • They satisfy the triangle inequality, i.e. Norm(u + v) ≤ Norm(u) + Norm(v)
  • The norm of a vector is zero if and only if it is the zero vector, i.e. Norm(v) = 0 ⇔ v = 0

The Euclidean norm (also known as the L² norm) is just one of many different norms - there is also the max norm, the Manhattan norm etc. The L² norm of a single vector is equivalent to the Euclidean distance from that point to the origin, and the L² norm of the difference between two vectors is equivalent to the Euclidean distance between the two points.
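The three norm properties listed above, and the norm/distance correspondence, can be checked numerically. A minimal sketch using NumPy; the vectors u, v and the scalar a are arbitrary examples:

```python
import numpy as np

# Example vectors and scalar (arbitrary choices for illustration)
u = np.array([3.0, -4.0, 12.0])
v = np.array([1.0, 2.0, -2.0])
a = -2.5

# Absolute homogeneity: Norm(a*v) = |a| * Norm(v)
assert np.isclose(np.linalg.norm(a * v), abs(a) * np.linalg.norm(v))

# Triangle inequality: Norm(u + v) <= Norm(u) + Norm(v)
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)

# The norm is zero only at the zero vector
assert np.linalg.norm(np.zeros(3)) == 0.0 and np.linalg.norm(v) > 0

# L2 norm of a difference = Euclidean distance between the two points
assert np.isclose(np.linalg.norm(u - v), np.sqrt(((u - v) ** 2).sum()))
```

The same checks pass for any choice of u, v and a, since they are theorems about the norm rather than facts about these particular vectors.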


As @nobar's answer says, np.linalg.norm(x - y, ord=2) (or just np.linalg.norm(x - y)) will give you Euclidean distance between the vectors x and y.

Since you want to compute the Euclidean distance between a[1, :] and every other row in a, you could do this a lot faster by eliminating the for loop and broadcasting over the rows of a:

dist = np.linalg.norm(a[1:2] - a, axis=1)

It's also easy to compute the Euclidean distance yourself using broadcasting:

dist = np.sqrt(((a[1:2] - a) ** 2).sum(1))

The fastest method is probably scipy.spatial.distance.cdist:

from scipy.spatial.distance import cdist

dist = cdist(a[1:2], a)[0]

Some timings for a (1000, 1000) array:

a = np.random.randn(1000, 1000)

%timeit np.linalg.norm(a[1:2] - a, axis=1)
# 100 loops, best of 3: 5.43 ms per loop

%timeit np.sqrt(((a[1:2] - a) ** 2).sum(1))
# 100 loops, best of 3: 5.5 ms per loop

%timeit cdist(a[1:2], a)[0]
# 1000 loops, best of 3: 1.38 ms per loop

# check that all 3 methods return the same result
d1 = np.linalg.norm(a[1:2] - a, axis=1)
d2 = np.sqrt(((a[1:2] - a) ** 2).sum(1))
d3 = cdist(a[1:2], a)[0]

assert np.allclose(d1, d2) and np.allclose(d1, d3)
2 of 2
5

The concept of a "norm" is a generalized idea in mathematics which, when applied to vectors (or vector differences), broadly represents some measure of length. There are various different approaches to computing a norm, but the one that gives Euclidean distance is called the "2-norm": it applies an exponent of 2 (the "square") to each component, sums, and then applies an exponent of 1/2 (the "square root") to the total.


It's a bit cryptic in the docs, but you get Euclidean distance between two vectors by setting the parameter ord=2.

sum(abs(x)**ord)**(1./ord)

becomes sqrt(sum(x**2)).

Note: as pointed out by @Holt, the default value is ord=None, which is documented to compute the "2-norm" for vectors. This is, therefore, equivalent to ord=2 (Euclidean distance).
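The equivalence between the default, ord=2, and the explicit formula can be confirmed in a few lines (a sketch assuming NumPy's documented behavior for 1-D inputs):

```python
import numpy as np

x = np.array([3.0, 4.0])

d_default = np.linalg.norm(x)            # ord=None: the 2-norm for 1-D input
d_ord2 = np.linalg.norm(x, ord=2)        # explicit Euclidean norm
d_manual = np.sum(np.abs(x) ** 2) ** (1.0 / 2)  # sum(abs(x)**ord)**(1./ord) with ord=2

# All three agree: sqrt(3^2 + 4^2) = 5
assert d_default == d_ord2
assert np.isclose(d_manual, d_default)
```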

Quora
What is the L2 norm and why is it used in machine learning? - Quora
Distance measurement: The L2 norm ... two vectors x and y is given by the L2 norm of the difference between them: ||x − y||....
APXML
Vector Norms: L1 and L2 Norms
The L1 norm measures the path along ... different norms have different properties that make them useful for specific tasks. L2 Norm: This is the most common norm for measuring error....
Top answer
1 of 4
17

OK let's see if this helps you. Suppose you have two functions $f,g:[a,b]\to \mathbb{R}$. If someone asks what the distance between $f(x)$ and $g(x)$ is, that is easy: you would say $|f(x)-g(x)|$. But if I ask what the distance between $f$ and $g$ is, the question is kind of absurd. I can, however, ask: what is the distance between $f$ and $g$ on average? Then it is $$ \dfrac{1}{b-a}\int_a^b |f(x)-g(x)|dx=\dfrac{||f-g||_1}{b-a} $$ which gives the $L^1$-norm. But this is just one of many different ways you can do the averaging: another is based on the integral $$ \left[\int_a^b|f(x)-g(x)|^p dx\right]^{1/p}:=||f-g||_{p} $$ which is the $L^p$-norm in general.

Let us investigate the norm of $f(x)=x$ on $[0,1]$ for different $L^p$ norms. I suggest you draw the graphs of $x^{p}$ for a few $p$ to see how higher $p$ makes $x^{p}$ flatter near the origin, and how the integral therefore favors the vicinity of $x=1$ more and more as $p$ becomes bigger. $$ ||x||_p=\left[\int_0^1 x^{p}dx\right]^{1/p}=\frac{1}{(p+1)^{1/p}} $$ The $L^p$ norm is smaller than the $L^m$ norm if $m>p$ because the behavior near most points is downplayed more in the $m$ case than in the $p$ case. So depending on what you want to capture in your averaging and how you want to define 'the distance' between functions, you utilize different $L^p$ norms.

This also motivates why the $L^\infty$ norm is nothing but the essential supremum of $f$; i.e. you filter everything out other than the highest values of $f(x)$ as you let $p\to \infty$.
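The closed form derived above, $\|x\|_p = (p+1)^{-1/p}$, is easy to verify with a numerical integral. A sketch in NumPy; the grid size and the sample values of $p$ are arbitrary:

```python
import numpy as np

def lp_norm_of_identity(p, n=200_000):
    """Approximate (integral_0^1 x^p dx)^(1/p) with a midpoint Riemann sum."""
    xs = (np.arange(n) + 0.5) / n          # midpoints of n equal subintervals
    return np.mean(xs ** p) ** (1.0 / p)

for p in (1, 2, 5, 50):
    closed_form = (p + 1) ** (-1.0 / p)    # the formula derived above
    assert np.isclose(lp_norm_of_identity(p), closed_form, atol=1e-4)

# The norms increase with p and approach sup |f| = 1 as p -> infinity
assert lp_norm_of_identity(1) < lp_norm_of_identity(2) < lp_norm_of_identity(50)
```

The monotone growth toward 1 illustrates the point about the $L^\infty$ norm: as $p$ grows, the average is dominated by the largest values of $|f|$.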

2 of 4
15

There are several good answers here, one accepted. Nevertheless I'm surprised not to see the $L^2$ norm described as the infinite dimensional analogue of Euclidean distance.

In the plane, the length of the vector $(x,y)$ - that is, the distance between $(x,y)$ and the origin - is $\sqrt{x^2 + y^2}$. In $n$-space it's the square root of the sum of the squares of the components.

Now think of a function as a vector with infinitely many components (its value at each point in the domain) and replace summation by integration to get the $L^2$ norm of a function.

Finally, tack on the last sentence of @levap's answer:

... the $L^2$ norm has the advantage that it comes from an inner product and so all the techniques from inner product spaces (orthogonal projections, etc) can be applied when we use the $L^2$ norm.

Built In
Vector Norms: A Quick Guide | Built In
A vector norm is a function that measures the size or magnitude of a vector, essentially quantifying a vector's length from the origin. This guide breaks down the idea behind the LΒΉ, LΒ², L∞ and Lα΅– norms.
Medium
L0 Norm, L1 Norm, L2 Norm & L-Infinity Norm | by Sara Iris Garcia | Medium
December 22, 2020 - In this norm, all the components of the vector are weighted equally. ... As you can see in the graphic, the L1 norm is the distance you have to travel from the origin (0,0) to the destination (3,4), in a way that resembles how a taxicab drives between city blocks to arrive at its destination. The L2 norm is the most popular norm, also known as the Euclidean norm. It is the shortest distance to go from one point to another. Using the same example, the L2 norm is calculated by
ZeroEntropy
2-Norm Vector β€” ZeroEntropy Blog
July 20, 2025 - This represents the straight-line (Euclidean) distance between the two points. ... For complex vectors, the L₂ norm is defined as: ‖x‖₂ = √(xᴴx), where xᴴ is the conjugate transpose of x.
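For a complex vector, the conjugate-transpose definition quoted above and NumPy's built-in norm agree. A small sketch; the example vector is arbitrary:

```python
import numpy as np

x = np.array([3 + 4j, 1 - 2j])

# ||x||_2 = sqrt(x^H x); the inner product x^H x is real and non-negative
norm_from_inner = np.sqrt((x.conj() @ x).real)
norm_builtin = np.linalg.norm(x)

# Both equal sqrt(|3+4i|^2 + |1-2i|^2) = sqrt(25 + 5) = sqrt(30)
assert np.isclose(norm_from_inner, norm_builtin)
assert np.isclose(norm_builtin, np.sqrt(30))
```

Conjugation matters here: without it, x @ x would be a complex number whose square root is not the norm.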
Fabian Kostadinov
Basics of vector algebra Β· Fabian Kostadinov
December 27, 2019 - As you can see from the formula, the Euclidean distance is the square root of the inner product of p − q (and also of q − p). Since we are using the dot product as the inner product, it turns out that the Euclidean distance is the same as the L2 norm (Euclidean norm) ||p − q||. The last two lines ...
Top answer
1 of 3
1

Well, while it is true that under some notation $\nabla_x \|x\|^2_2=2x$, I think it is a bad habit as it doesn't go naturally with matrix calculus. Usually, given $f:\mathbb R^n\rightarrow\mathbb R^m$, its derivative matrix $Df$ is an $m\times n$ matrix. The $ij$ entry is the derivative of $f_i$ w.r.t. $x_j$. So in this case we expect $D(\|x\|^2_2)$ to be a $1\times n$ row vector. Namely, $D(\|x\|^2_2)=2x^T$.

The chain rule only works naturally in this setting, when we are using derivative matrices. It says that (under some simplification) $D(f\circ g) (x)= Df(g(x))\cdot Dg(x)$, where the multiplication is matrix multiplication. Under this setting, using the well-known formula $D(Ax)=A$ (which makes sense: the linear approximation of a linear function is itself), we see the following: $$D_x(\|z-Zx\|_2^2)=2(z-Zx)^T D(z-Zx)=-2(z^T-x^TZ^T)Z$$ This is a row vector, of course. I am guessing your book equates it to zero. In this case, to get a nicer notation you may transpose both sides and get $Z^T(z-Zx)=0$. Another approach would be to use the fact that the squared norm is just $(Zx-z)^T(Zx-z)$, expand, and then there's no need for the chain rule.
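The row-vector derivative $-2(z^T - x^TZ^T)Z$ can be checked against central finite differences. A sketch assuming NumPy; Z, z, x are random examples and the step size h is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 3))
z = rng.standard_normal(5)
x = rng.standard_normal(3)

f = lambda x: np.sum((z - Z @ x) ** 2)   # f(x) = ||z - Zx||_2^2

# Analytic derivative, as a (row) vector: -2 (z - Zx)^T Z
grad_analytic = -2 * (z - Z @ x) @ Z

# Central finite differences, one coordinate direction at a time
h = 1e-6
grad_fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(3)])

assert np.allclose(grad_analytic, grad_fd, atol=1e-4)
```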

2 of 3
1

Let's write the squared norm in this form and use $y$ instead of $z$ to avoid confusion

\begin{equation} \begin{split} f & = \|y - Zx\|^2_2 \\ & = (y - Zx)^T(y - Zx) \\ & = (y^T - x^TZ^T)(y - Zx) \\ & = y^Ty - y^TZx - x^TZ^Ty + x^TZ^TZx \\ df & = d(y^Ty) - d(y^TZx) - d(x^TZ^Ty) + d(x^TZ^TZx) \end{split} \end{equation}

Now we will work out each term separately,

It is clear that $\frac{d(y^Ty)}{dx} =0$, so there is no need to develop it further

For the 2nd term $d(y^TZx)$,

\begin{equation} \begin{split} d(y^TZx) & = (dy^T)Zx + y^T(dZ)x + y^TZ(dx) \\ \frac{d(y^TZx)}{dx} & = y^TZ \\ \end{split} \end{equation}

For the 3rd term $d(x^TZ^Ty)$, we will use this property

$$ d(y^TZx) = d(y^TZx)^T = d(x^TZ^Ty) $$

so

$$ \frac{d(x^TZ^Ty)}{dx} = y^TZ $$

For the last term $d(x^TZ^TZx)$, we apply the product rule, differentiating each occurrence of $x$ in turn, so

\begin{equation} \begin{split} d(x^TZ^TZx) & = (x^TZ^TZ)dx + (x^TZ^TZ)dx \\ \frac{d(x^TZ^TZx)}{dx} &= x^T(Z^TZ + Z^TZ) \\ &= 2x^T(Z^TZ) \\ \end{split} \end{equation}

Note: Depending on your preferred Layout convention, the derivative could be either

$$\frac{d(x^TZ^TZx)}{dx} = x^T(Z^TZ + (Z^TZ)^T) = 2 x^TZ^TZ$$

or

$$\frac{d(x^TZ^TZx)}{dx} = 2Z^TZx$$

Finally, putting all terms together, we obtain

$$ \frac{d(\|y - Zx\|^2_2)}{dx} = 2x^T(Z^TZ) - 2y^TZ = 2(x^TZ^T -y^T)Z$$

using the other convention, we obtain

$$ \frac{d(\|y - Zx\|^2_2)}{dx} = 2Z^TZx - 2Z^Ty = 2Z^T(Zx -y)$$
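Setting the column-layout derivative $2Z^T(Zx - y)$ to zero gives the normal equations $Z^TZx = Z^Ty$, which is exactly what least squares solves. A quick numerical check (a sketch assuming NumPy; the data are random):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((6, 3))
y = rng.standard_normal(6)

# Minimizer of ||y - Zx||_2^2
x_star, *_ = np.linalg.lstsq(Z, y, rcond=None)

# At the minimum, the column-layout gradient 2 Z^T (Z x - y) vanishes
grad = 2 * Z.T @ (Z @ x_star - y)
assert np.allclose(grad, 0, atol=1e-8)

# Equivalently, x_star solves the normal equations Z^T Z x = Z^T y
assert np.allclose(Z.T @ Z @ x_star, Z.T @ y)
```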

AskPython
How to compute L1 and L2 norms in python? - AskPython
February 27, 2023 - It is represented as ||x||₂. The L2 norm is widely used in many fields, such as machine learning, engineering, and physics, for various applications, including optimization, regularization, and normalization. For the vector x = (7, 5), the L2 norm is the direct distance between the origin (0, 0) and the destination (7, 5).
MachineLearningMastery
Gentle Introduction to Vector Norms in Machine Learning - MachineLearningMastery.com
October 17, 2021 - The L2 norm calculates the distance of the vector coordinate from the origin of the vector space. As such, it is also known as the Euclidean norm as it is calculated as the Euclidean distance from the origin.
Quora
What is the L2 norm of a vector? - Quora
Distance measurement: The L2 norm ... two vectors x and y is given by the L2 norm of the difference between them: ||x − y||....