generalization of Euclidean geometry to higher-dimensional vector spaces
Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, in Euclid's Elements, it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean … Wikipedia
Factsheet
Named after Euclid
space
🌐 Wolfram MathWorld
mathworld.wolfram.com › L2-Norm.html
L^2-Norm -- from Wolfram MathWorld
July 26, 2003 - The l^2-norm (also written "ℓ^2-norm") |x| is a vector norm defined for a complex vector x = [x_1; x_2; ...; x_n] by |x| = sqrt(sum_(k=1)^n |x_k|^2), where |x_k| on the right denotes the complex modulus.
🌐 Kaggle
kaggle.com › code › paulrohan2020 › euclidean-distance-and-normalizing-a-vector
Euclidean distance and Normalizing a Vector
🌐 Wikipedia
en.wikipedia.org › wiki › Euclidean_distance
Euclidean distance - Wikipedia
December 3, 2025 - By Dvoretzky's theorem, every finite-dimensional normed vector space has a high-dimensional subspace on which the norm is approximately Euclidean; the Euclidean norm is the only norm with this property. It can be extended to infinite-dimensional vector spaces as the L2 norm or L2 distance.
Top answer
1 of 2
43

A norm is a function that takes a vector as an input and returns a scalar value that can be interpreted as the "size", "length" or "magnitude" of that vector. More formally, norms are defined as having the following mathematical properties:

  • They scale multiplicatively, i.e. Norm(a·v) = |a|·Norm(v) for any scalar a
  • They satisfy the triangle inequality, i.e. Norm(u + v) ≤ Norm(u) + Norm(v)
  • The norm of a vector is zero if and only if it is the zero vector, i.e. Norm(v) = 0 ⇔ v = 0

The Euclidean norm (also known as the L² norm) is just one of many different norms - there is also the max norm, the Manhattan norm, etc. The L² norm of a single vector is equivalent to the Euclidean distance from that point to the origin, and the L² norm of the difference between two vectors is equivalent to the Euclidean distance between the two points.
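A minimal NumPy sketch (not part of the original answer; the vectors are made-up values) checking these properties and the distance interpretation:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 2.0])
c = -2.5

# Multiplicative scaling: Norm(c*v) == |c| * Norm(v)
assert np.isclose(np.linalg.norm(c * v), abs(c) * np.linalg.norm(v))

# Triangle inequality: Norm(u + v) <= Norm(u) + Norm(v)
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)

# Only the zero vector has norm zero
assert np.linalg.norm(np.zeros(2)) == 0.0

# L2 norm of u is the Euclidean distance from (3, 4) to the origin
print(np.linalg.norm(u))      # 5.0

# L2 norm of u - v is the Euclidean distance between the two points
print(np.linalg.norm(u - v))  # sqrt(2^2 + 2^2) ≈ 2.828
```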


As @nobar's answer says, np.linalg.norm(x - y, ord=2) (or just np.linalg.norm(x - y)) will give you Euclidean distance between the vectors x and y.

Since you want to compute the Euclidean distance between a[1, :] and every other row in a, you could do this a lot faster by eliminating the for loop and broadcasting over the rows of a:

import numpy as np

dist = np.linalg.norm(a[1:2] - a, axis=1)

It's also easy to compute the Euclidean distance yourself using broadcasting:

dist = np.sqrt(((a[1:2] - a) ** 2).sum(1))

The fastest method is probably scipy.spatial.distance.cdist:

from scipy.spatial.distance import cdist

dist = cdist(a[1:2], a)[0]

Some timings for a (1000, 1000) array:

a = np.random.randn(1000, 1000)

%timeit np.linalg.norm(a[1:2] - a, axis=1)
# 100 loops, best of 3: 5.43 ms per loop

%timeit np.sqrt(((a[1:2] - a) ** 2).sum(1))
# 100 loops, best of 3: 5.5 ms per loop

%timeit cdist(a[1:2], a)[0]
# 1000 loops, best of 3: 1.38 ms per loop

# check that all 3 methods return the same result
d1 = np.linalg.norm(a[1:2] - a, axis=1)
d2 = np.sqrt(((a[1:2] - a) ** 2).sum(1))
d3 = cdist(a[1:2], a)[0]

assert np.allclose(d1, d2) and np.allclose(d1, d3)
2 of 2
5

The concept of a "norm" is a generalized idea in mathematics which, when applied to vectors (or vector differences), broadly represents some measure of length. There are various approaches to computing a norm, but the one corresponding to Euclidean distance is called the "2-norm": it applies an exponent of 2 (the "square") to each component and, after summing, applies an exponent of 1/2 (the "square root").


It's a bit cryptic in the docs, but you get Euclidean distance between two vectors by setting the parameter ord=2.

sum(abs(x)**ord)**(1./ord)

becomes sqrt(sum(x**2)).

Note: as pointed out by @Holt, the default value is ord=None, which is documented to compute the "2-norm" for vectors. This is, therefore, equivalent to ord=2 (Euclidean distance).
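A small sketch (illustrative values, not from the answer) confirming that the default, the explicit ord=2, and the hand-written formula agree for 1-D input:

```python
import numpy as np

x = np.array([1.0, -2.0, 2.0])

d_default = np.linalg.norm(x)          # ord=None: the 2-norm for 1-D input
d_explicit = np.linalg.norm(x, ord=2)  # explicit Euclidean norm
d_manual = np.sqrt(np.sum(np.abs(x) ** 2))

assert np.isclose(d_default, d_explicit)
assert np.isclose(d_default, d_manual)  # all equal 3.0
```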

🌐 Medium
medium.com › mlearning-ai › is-l2-norm-euclidean-distance-a9c04be0b3ca
Is L2-Norm = Euclidean Distance?. One of the concepts that can be a… | by Saurav Gupta | MLearning.ai | Medium
February 26, 2022 - Euclidean distance is just a technique to calculate the distance between two vectors. For vector norms, when the distance-calculating technique is Euclidean it is called the L2-Norm, and when the technique is Manhattan it is called the L1-Norm.
🌐 Byhand
byhand.ai › p › l2-norm
L2 Norm
February 13, 2026 - It tells us how far a point is from the origin. When we apply the L2 norm to the difference between two vectors, it gives the straight-line (Euclidean) distance between them. For example, the displacement vectors d1 = B − A and d2 = A − ...
🌐 CodingNomads
codingnomads.com › what-is-l2-norm
What is L2 Norm?
The x's you see in the above equation are vectors of points, which means the L2-norm generalizes to multiple dimensions. This can feel confusing at first because you are so used to measuring distance in 2 dimensions: you naturally see distance as the space between two points, and you always measure that distance as the straightest possible line between the two points, just like in a triangle.
🌐 ScienceDirect
sciencedirect.com › topics › mathematics › euclidean-norm
Euclidean Norm - an overview | ScienceDirect Topics
Computing a scalar product needs n multiplications and n − 1 additions, that is 2n − 1 floating point operations in total. From the scalar product, we can define the Euclidean norm (sometimes called the l2 norm) of the vector x. ... For the Euclidean norm, relation (1.1) can be shown using ...
🌐 University of Texas
cs.utexas.edu › ~flame › laff › alaff › chapter01-vector-2-norm.html
ALAFF The vector 2-norm (Euclidean length)
is the Hermitian transpose of \(x \) (or, equivalently, the Hermitian transpose of the vector \(x \) that is viewed as a matrix) and \(x^H y \) can be thought of as the dot product of \(x\) and \(y \) or, equivalently, as the matrix-vector multiplication of the matrix \(x^H \) times the vector \(y \text{.}\) To prove that the 2-norm is a norm (just calling it a norm doesn't mean it is, after all), we need a result known as the Cauchy-Schwarz inequality. This inequality relates the magnitude of the dot product of two vectors to the product of their 2-norms: if \(x, y \in \R^m \text{,}\) then \(\vert x^T y \vert \leq \| x \|_2 \| y \|_2 \text{.}\) To motivate this result before we rigorously prove it, recall from your undergraduate studies that the component of \(x \) in the direction of a vector \(y \) of unit length is given by \((y^T x) y \text{,}\) as illustrated by
Top answer
1 of 2
10

Since I don't have a great amount of experience in math or the concepts of deep learning, I was often confused whether the 2 simply meant, in conjunction with the double bars, "apply the L2 norm to the terms within, i.e. square each of them and then sum the result" or if the 2 was, itself, a squaring of whatever the double brackets meant on their own.

I have never seen an author disambiguate the norm delimiters $\lVert\quad\rVert$ through the use of a superscript. In analysis, such notation would be incredibly confusing, since we frequently need to establish inequalities among norms of vectors raised to some power.

Also, an $L^2$ norm of a vector is the square root of the sum of the absolute squares of its components: $$\lVert x\rVert_2=\sqrt{\sum_{i=1}^n\lvert x_i\rvert^2}\text{;}$$ consequently, $$\lVert x\rVert^2_2=\sum_{i=1}^n\lvert x_i\rvert^2\text{.}$$
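In NumPy terms (an illustrative sketch with a made-up vector):

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])

norm_x = np.linalg.norm(x)   # ||x||_2 = sqrt(9 + 16 + 144) = 13.0
norm_x_sq = norm_x ** 2      # ||x||_2^2 = 169.0, the plain sum of squares

assert np.isclose(norm_x_sq, np.sum(np.abs(x) ** 2))
```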

2 of 2
0

Just to add to this: since OP mentioned "concepts of deep learning", I'm guessing that the expressions that contain these norms appear in loss functions. Usually, regressive loss functions have a square because they have nicer derivatives for gradient descent.

For example, with a regularisation term added, you would see something like $$ L = \frac{1}{2}\sum_{i=1}^N (y_i - f(\vec x_i))^2 + \frac{\lambda}{2}||\vec w||^2 $$ with $f(\vec x) = \vec w \cdot \vec x$. Lots of squares here, but differentiating with respect to any $w_j$, we have $$ \frac{\partial L}{\partial w_j} = \sum_{i=1}^N (f(\vec x_i) - y_i)\, x_{ij} + \lambda w_j $$ (where $x_{ij}$ is the $j$-th component of $\vec x_i$), which would have been a lot less nice without the squares. Other norms are sometimes also used (e.g. $||\vec w||_1$ for L1-regularisation), but the convention in machine learning is that $||\vec w||$ means $||\vec w||_2$ by default.
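As a quick numerical sanity check (a sketch, not from the answer; the arrays X, y, w are random made-up data), the analytic gradient of this loss can be compared against central finite differences with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 3
X = rng.normal(size=(N, D))   # rows are the x_i
y = rng.normal(size=N)
w = rng.normal(size=D)
lam = 0.1

def loss(w):
    # L = 1/2 * sum_i (y_i - w.x_i)^2 + lam/2 * ||w||_2^2
    r = y - X @ w
    return 0.5 * np.sum(r ** 2) + 0.5 * lam * np.sum(w ** 2)

# Analytic gradient: dL/dw_j = sum_i (w.x_i - y_i) * x_ij + lam * w_j
grad = X.T @ (X @ w - y) + lam * w

# Central finite-difference approximation, one coordinate at a time
eps = 1e-6
num = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                for e in np.eye(D)])

assert np.allclose(grad, num, atol=1e-4)
```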

🌐 R Project
search.r-project.org › CRAN › refmans › wavethresh › html › l2norm.html
R: Compute L2 distance between two vectors of numbers.
Compute L2 distance between two vectors of numbers (square root of sum of squares of differences between two vectors).
🌐 Fabian-kostadinov
fabian-kostadinov.github.io › 2019 › 12 › 27 › basics-of-vector-algebra
Basics of vector algebra ยท Fabian Kostadinov
December 27, 2019 - As you can see from the formula, the Euclidean distance is the square root of the inner product of p − q (and also of q − p). Since we are using the dot product as the inner product, it turns out that the Euclidean distance is the same as the L2 norm (Euclidean norm) ||p − q||. The last two lines ...
🌐 Towards Data Science
towardsdatascience.com › a-quick-guide-to-understanding-vectors-norms-84eb802f81f9
September 27, 2021
Top answer
1 of 1
3

You need s = x2 - x1.

norm(s, "2")
#[1] 8.062258

sqrt(sum(s ^ 2))  ## or: sqrt(c(crossprod(s)))
#[1] 8.062258

lpnorm(s, 2)
#[1] 8.062258

If you define s = cbind(x1, x2), none of the options you listed is going to compute the Euclidean distance between x1 and x2, but we can still get them to output the same value. In this case they all compute the L2 norm of the vector c(x1, x2).

norm(s, "F")
#[1] 6.244998

sqrt(sum(s ^ 2))
#[1] 6.244998

lpnorm(s, 2)
#[1] 6.244998

Finally, norm is not a common way to compute distances. It is really for matrix norms. When you do norm(cbind(x1, x2), "2"), it computes the L2 matrix norm, which is the largest singular value of the matrix cbind(x1, x2).


So my problem is with defining s. Ok, what if I have more than three vectors?

In that case you want pairwise Euclidean matrix. See function ?dist.

I have a training set (containing three or more rows) and one test set (one row). So I would like to calculate the Euclidean distance, or maybe other distances. This is the reason why I want to make sure about the distance calculation.

You want the distance between one vector and each of many others, and the result is a vector?

set.seed(0)
X_train <- matrix(runif(10), 5, 2)
x_test <- runif(2)
S <- t(X_train) - x_test

apply(S, 2, norm, "2")  ## don't try other types than "2"
#[1] 0.8349220 0.7217628 0.8012416 0.6841445 0.9462961

apply(S, 2, lpnorm, 2)
#[1] 0.8349220 0.7217628 0.8012416 0.6841445 0.9462961

sqrt(colSums(S ^ 2))  ## only for L2-norm
#[1] 0.8349220 0.7217628 0.8012416 0.6841445 0.9462961

I would stress again that norm would fail on a vector unless type = "2". ?norm clearly says that this function is intended for matrices. What norm does is very different from your self-defined lpnorm function: lpnorm computes a vector norm, norm computes a matrix norm. Even "L2" means different things for a matrix and for a vector.

Top answer
1 of 4
11

The Euclidean distance formula finds the distance between any two points in Euclidean space.

A point in Euclidean space is also called a Euclidean vector.

You can use the Euclidean distance formula to calculate the distance between vectors of two different lengths.

For vectors of different dimension, the same principle applies.

Suppose a vector of lower dimension also exists in the higher dimensional space. You can then set all of the missing components in the lower dimensional vector to 0 so that both vectors have the same dimension. You would then use any of the mentioned distance formulas for computing the distance.

For example, consider a 2-dimensional vector A in R² with components (a1,a2), and a 3-dimensional vector B in R³ with components (b1,b2,b3).

To express A in R³, you would set its components to (a1,a2,0). Then, the Euclidean distance d between A and B can be found using the formula:

d² = (b1 - a1)² + (b2 - a2)² + (b3 - 0)²

d = sqrt((b1 - a1)² + (b2 - a2)² + b3²)

For your particular case, the components will be either 0 or 1, so all differences will be -1, 0, or 1. The squared differences will then only be 0 or 1.

If you're using integers or individual bits to represent the components, you can use simple bitwise operations instead of some arithmetic (^ means XOR or exclusive or):

d = sqrt(b1 ^ a1 + b2 ^ a2 + ... + b(n-1) ^ a(n-1) + b(n) ^ a(n))

And we're assuming the trailing components of A are 0, so the final formula will be:

d = sqrt(b1 ^ a1 + b2 ^ a2 + ... + b(n-1) + b(n))
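The zero-padding recipe and the XOR shortcut above can be sketched in NumPy (illustrative made-up values, not from the question):

```python
import numpy as np

# Pad the lower-dimensional vector with zeros, then use the usual formula.
A = np.array([1.0, 2.0])          # vector in R^2
B = np.array([4.0, 6.0, 12.0])    # vector in R^3

A_padded = np.pad(A, (0, B.size - A.size))   # -> [1., 2., 0.]
d = np.linalg.norm(B - A_padded)
print(d)   # sqrt(3^2 + 4^2 + 12^2) = 13.0

# For 0/1 components, each squared difference equals the XOR of the bits.
a = np.array([1, 0, 1, 0], dtype=np.uint8)
b = np.array([1, 1, 0, 1], dtype=np.uint8)
d_bits = np.sqrt(np.count_nonzero(a ^ b))
assert np.isclose(d_bits, np.linalg.norm(b.astype(float) - a.astype(float)))
```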
2 of 4
3

There is no unique definition of distance if you mix vectors with differing numbers of elements ("length", "dimensionality"); usually, people just map vectors of one space into the other.

There are many ways to do these mappings:

  • Fill up with zeroes. Say, if you have a car (2D coordinate) and need to compute its distance to an airplane (3D coordinate), this effectively places the car at sea level and does a 3D distance.
  • Alternatively, you can drop the z value of the airplane and do a 2D distance.
  • Look up the missing values somewhere. With the car-airplane example, to get a 3D coordinate for the car, you'd fire up your geo database and look up heights from longitude/latitude.

These are just the most common possibilities one could come up with; I'm sure there are more.

Choose the one that makes the most sense for your application; there is no single "right" way to do it.

🌐 Codegrepper
codegrepper.com › code-examples › python › python+l2+norm+between+two+vectors
python l2 norm between two vectors Code Example
August 6, 2020 - The function is used to calculate one of the eight different matrix norms or one of the vector norms in NumPy ... Assume a and b are two (20, 20) numpy arrays. The L2 distance (defined above) between two equal-dimension arrays can be calculated in Python as follows:

def l2_dist(a, b):
    result = ((a - b) * (a - b)).sum()
    result = result ** 0.5
    return result