length in a vector space
In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. — Wikipedia
🌐
Wikipedia
en.wikipedia.org › wiki › Norm_(mathematics)
Norm (mathematics) - Wikipedia
March 5, 2026 - The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any … $p$-norm, it is a superellipse with congruent axes (see the accompanying illustration). Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and …
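A minimal plain-Python sketch of the point made in this snippet (the helper `p_norm` is ours, not from the cited page): the same point can sit on one norm's unit circle while lying strictly inside another's.

```python
import math

def p_norm(x, p):
    """p-norm of a vector; p = math.inf gives the max norm."""
    if p == math.inf:
        return max(abs(c) for c in x)
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

v = (0.5, 0.5)
print(p_norm(v, 1))         # 1.0    -> on the diamond-shaped 1-norm unit circle
print(p_norm(v, 2))         # ~0.707 -> strictly inside the Euclidean unit circle
print(p_norm(v, math.inf))  # 0.5    -> strictly inside the axis-aligned square
```

The ordering of these three values for any fixed vector reflects the nesting of the unit balls described above.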
🌐
Wolfram MathWorld
mathworld.wolfram.com › L2-Norm.html
L^2-Norm -- from Wolfram MathWorld
July 26, 2003 - However, if desired, a more explicit ... just one of several possible types of norms. For real vectors, the absolute value sign indicating that a complex modulus is being taken on the right of equation (2) may be dropped. So, for example, the $L^2$-norm of the vector ... is given ...
🌐
University of Texas
cs.utexas.edu › ~flame › laff › alaff › chapter01-matrix-norms-2-norm.html
ALAFF The matrix 2-norm
We will see that the matrix 2-norm plays an important role in the theory of linear algebra, but less so in practical computation. ... \begin{equation*} \left\| \left( \begin{array}{c c} \delta_0 & 0 \\ 0 & \delta_1 \end{array} \right) \right\|_2 = \max( \vert \delta_0 \vert, \vert \delta_1 \vert ). \end{equation*} ... The proof of the last example builds on a general principle: showing that \(\max_{x \in D} f(x) = \alpha \) for some function \(f: D \rightarrow \mathbb{R} \) can be broken down into showing that both ...
🌐
MathWorks
mathworks.com › matlab › mathematics › linear algebra
norm - Vector and matrix norms - MATLAB
If p = 2, then n is approximately max(svd(X)). This value is equivalent to norm(X).
🌐
Wikipedia
en.wikipedia.org › wiki › Matrix_norm
Matrix norm - Wikipedia
March 10, 2026 - The spectral norm of a matrix ... \( \|A\|_{2}=\sup\{x^{*}Ay : x\in K^{m},\, y\in K^{n} \text{ with } \|x\|_{2}=\|y\|_{2}=1\} \). Proved by the Cauchy–Schwarz inequality.
🌐
University of Texas
cs.utexas.edu › ~flame › laff › alaff › chapter01-vector-2-norm.html
ALAFF The vector 2-norm (Euclidean length)
The vector 2-norm \(\| \cdot \|_2 : \mathbb{C}^m \rightarrow \mathbb{R} \) is defined for \(x \in \mathbb{C}^m \) by ...
🌐
Built In
builtin.com › data-science › vector-norms
Vector Norms: A Quick Guide | Built In
Since the L² norm squares each component, very small components become even smaller. For example, (0.1)² = 0.01.
🌐
ScienceDirect
sciencedirect.com › topics › mathematics › euclidean-norm
Euclidean Norm - an overview | ScienceDirect Topics
The Euclidean norm Norm[v, 2] or ... norm of real vectors. If we let n = 5, for example, the manipulation displays the Euclidean norm of the vector {1, 2, 3, 4, 5}....
🌐
Zeroentropy
zeroentropy.dev › articles › 2-norm-vector
2-Norm Vector — ZeroEntropy Blog
July 20, 2025 - A valid norm must satisfy the following properties: ‖x‖₂ ≥ 0, and equals 0 if and only if x is the zero vector. For any scalar α, ‖αx‖₂ = |α| · ‖x‖₂. ... In 2D, the 2‑norm gives the straight-line distance from the origin ...
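The properties listed in this snippet can be spot-checked numerically. A small plain-Python sketch (our own, not from the cited blog) verifies non-negativity, absolute homogeneity, and the triangle inequality for the 2-norm:

```python
import math

def norm2(x):
    """Euclidean 2-norm of a vector."""
    return math.sqrt(sum(c * c for c in x))

x, y, alpha = (3.0, -4.0), (1.0, 2.0), -2.5
assert norm2(x) >= 0.0 and norm2((0.0, 0.0)) == 0.0              # ||x|| >= 0, ||0|| = 0
assert math.isclose(norm2(tuple(alpha * c for c in x)),
                    abs(alpha) * norm2(x))                       # ||a x|| = |a| ||x||
assert norm2(tuple(a + b for a, b in zip(x, y))) <= norm2(x) + norm2(y)  # triangle
print(norm2(x))  # 5.0: the straight-line distance from the origin in 2D
```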
🌐
San José State University
sjsu.edu › faculty › guangliang.chen › Math253S20 › lec7matrixnorm.pdf pdf
San José State University Math 253: Mathematical Methods for Data Visualization
Example 0.1. Below are three different norms on the Euclidean space Rd: ... When unspecified, it is understood as the Euclidean 2-norm.
Top answer
1 of 5
32

We will show the more general case, i.e.:

$\| \cdot \|_1$, $\| \cdot \|_2$, and $\| \cdot \|_{\infty}$ are all equivalent on $\mathbb{R}^{n}$, and we have $$\| x \|_{\infty} \leq \| x \|_{2} \leq \| x \|_{1} \leq n \| x \|_{\infty} $$

Every $x \in \mathbb{R}^{n}$ has the representation $x = ( x_1 , x_2 , \dots , x_n )$, and we have that $$\| x \|_{\infty} = \max_{1\leq i \leq n} | x_i | = \max_{1\leq i \leq n} \sqrt{ | x_i |^{2} } \leq \sqrt{ \sum_{i=1}^{n} | x_ i |^{2} } = \| x \|_2 $$ Additionally, since $\sqrt{\cdot}$ is subadditive, $$ \| x \|_2 = \sqrt{ \sum_{i=1}^{n} | x_i |^{2} } \leq \sum_{i=1}^{n} \sqrt{ | x_ i |^{2} } = \sum_{i=1}^{n} |x_i| = \| x \|_1$$ Finally, since $|x_i| \leq \max_{1 \leq j \leq n} |x_j|$ for each $i$, $$ \| x \|_1 = \sum_{i=1}^{n} |x_i| \leq \sum_{i=1}^{n} \max_{1 \leq j \leq n} | x_j | = n \max_{1 \leq j \leq n} | x_j | = n \| x \|_{\infty}$$ showing the chain of inequalities as desired. Moreover, these inequalities show the three norms are equivalent: if $\| x_k - x \| \to 0$ as $k \to \infty$ in any one of them, the bounds above force $\| x_k - x \| \to 0$ in the other two as well.
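The chain of inequalities in this answer can be sanity-checked numerically. A plain-Python sketch (ours, not part of the quoted answer):

```python
import math

# Check ||x||_inf <= ||x||_2 <= ||x||_1 <= n * ||x||_inf on sample vectors.
def norms(x):
    """Return (inf-norm, 2-norm, 1-norm) of a vector."""
    inf = max(abs(c) for c in x)
    two = math.sqrt(sum(c * c for c in x))
    one = sum(abs(c) for c in x)
    return inf, two, one

for x in [(1.0, -2.0, 3.0), (-5.0, 1.0), (0.5, 0.5, 0.5, 0.5)]:
    inf, two, one = norms(x)
    assert inf <= two <= one <= len(x) * inf
```

Note that the last vector, with all components equal, attains the outer bound $\| x \|_1 = n \| x \|_\infty$ with equality.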

2 of 5
23

The inequality $ \|x\|_1 \leq \sqrt{n} \,\|x\|_2 $ is a consequence of Cauchy–Schwarz, applied to the vectors $(1,\dots,1)$ and $(|x_1|,\dots,|x_n|)$:

$$\sqrt n\, \|x\|_2 =\sqrt{1+1+\cdots+1}\,\sqrt{\sum_{i} x_i^2 }\geq \sum_i 1\cdot|x_i| = \|x\|_1$$

For the other inequality, $\|x\|_2 \leq \|x\|_1$: the function $f(t)=\sqrt{t}$ is concave and $f(0)=0$, hence $f$ is subadditive.

Therefore $ \|x\|_2 = f\left(\sum_{i} x_i^2 \right)\leq \sum_{i} f(x_i^2) = \sum_i |x_i| = \|x\|_1 $
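A quick numeric spot-check of both bounds (our own sketch, not part of the answer); the constant vector $(1,1,1)$ attains the Cauchy–Schwarz bound with equality:

```python
import math

# Check ||x||_2 <= ||x||_1 <= sqrt(n) * ||x||_2 on sample vectors.
# The small epsilon absorbs floating-point rounding in the equality case.
for x in [(1.0, 1.0, 1.0), (3.0, -4.0), (0.25, 0.0, -0.75, 5.0)]:
    one = sum(abs(c) for c in x)
    two = math.sqrt(sum(c * c for c in x))
    assert two <= one <= math.sqrt(len(x)) * two + 1e-12
```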

Top answer
1 of 4
17

OK, let's see if this helps you. Suppose you have two functions $f,g:[a,b]\to \mathbb{R}$. If someone asks you what the distance between $f(x)$ and $g(x)$ is, the answer is easy: $|f(x)-g(x)|$. But if I ask what the distance between $f$ and $g$ is, that question is not well posed. I can, however, ask: what is the distance between $f$ and $g$ on average? Then it is $$ \dfrac{1}{b-a}\int_a^b |f(x)-g(x)|dx=\dfrac{\|f-g\|_1}{b-a} $$ which gives the $L^1$-norm. But this is just one of the many different ways you can do the averaging: another way is related to the integral $$ \left[\int_a^b|f(x)-g(x)|^p dx\right]^{1/p}:=\|f-g\|_{p} $$ which is the $L^p$-norm in general.

Let us investigate the norm of $f(x)=x$ on $[0,1]$ for different $L^p$ norms. I suggest you draw the graphs of $x^{p}$ for a few $p$ to see how higher $p$ makes $x^{p}$ flatter near the origin and how the integral therefore favors the vicinity of $x=1$ more and more as $p$ becomes bigger. $$ \|f\|_p=\left[\int_0^1 x^{p}dx\right]^{1/p}=\frac{1}{(p+1)^{1/p}} $$ The $L^p$ norm is smaller than the $L^m$ norm if $m>p$ because the behavior near most points is downplayed more in $m$ than in $p$. So depending on what you want to capture in your averaging and how you want to define "the distance" between functions, you utilize different $L^p$ norms.

This also motivates why the $L^\infty$ norm is nothing but the essential supremum of $|f|$; i.e. you filter out everything other than the highest values of $|f(x)|$ as you let $p\to \infty$.
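The worked example above can be reproduced numerically. A plain-Python sketch (our own, not from the answer) approximates the integral by the midpoint rule and checks it against the closed form $1/(p+1)^{1/p}$, which indeed grows toward $\sup |f| = 1$ as $p$ increases:

```python
def lp_norm_of_identity(p, steps=20000):
    """Midpoint-rule approximation of (∫₀¹ x^p dx)^(1/p) for f(x) = x."""
    h = 1.0 / steps
    integral = sum(((i + 0.5) * h) ** p for i in range(steps)) * h
    return integral ** (1.0 / p)

closed_form = lambda p: 1.0 / (p + 1) ** (1.0 / p)

for p in (1, 2, 4, 16):
    assert abs(lp_norm_of_identity(p) - closed_form(p)) < 1e-3
values = [closed_form(p) for p in (1, 2, 4, 16, 64, 256)]
assert all(a < b for a, b in zip(values, values[1:]))  # monotone increasing in p
```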

2 of 4
15

There are several good answers here, one accepted. Nevertheless I'm surprised not to see the $L^2$ norm described as the infinite dimensional analogue of Euclidean distance.

In the plane, the length of the vector $(x,y)$ - that is, the distance between $(x,y)$ and the origin - is $\sqrt{x^2 + y^2}$. In $n$-space it's the square root of the sum of the squares of the components.

Now think of a function as a vector with infinitely many components (its value at each point in the domain) and replace summation by integration to get the $L^2$ norm of a function.

Finally, tacking on to the end of the last sentence of @levap's answer:

... the $L^2$ norm has the advantage that it comes from an inner product and so all the techniques from inner product spaces (orthogonal projections, etc) can be applied when we use the $L^2$ norm.

🌐
GeeksforGeeks
geeksforgeeks.org › mathematics › vector-norms
Vector Norms - GeeksforGeeks
July 23, 2025 - For a vector x = [x₁, x₂, . . ., xₙ], the L∞ norm ‖x‖∞ is defined as: ‖x‖∞ = max |xᵢ| where 1 ≤ i ≤ n · Example: If x = [3, −4, 2], then the L∞ norm is max(|3|, |−4|, |2|) = 4.
🌐
University of Pennsylvania
cis.upenn.edu › ~cis5150 › cis515-11-sl4.pdf pdf
Chapter 4 Vector Norms and Matrix Norms 4.1 Normed Vector Spaces
Example 4.1. 1. Let E = ℝ, and ‖x‖ = |x|, the absolute value of x. ... For every (x1, . . . , xn) ∈ E, we have the 1-norm ... Some work is required to show the triangle inequality for ... Proposition 4.1. If E is a finite-dimensional vector space over ℝ or ℂ, for every real number p ≥ 1, the ... For p = 2, it is the Cauchy–Schwarz inequality.
🌐
Rezaborhani
rezaborhani.github.io › mlr › blog_posts › Linear_Algebra › Vector_and_matrix_norms.html
Part 3: Vector and matrix norms
The norm of $\alpha\mathbf{x}$, that is a scalar multiple of $\mathbf{x}$, can be written in terms of the norm of $\mathbf{x}$, as $\Vert\alpha\mathbf{x}\Vert=\left|\alpha\right|\Vert\mathbf{x}\Vert$. With $\alpha=-1$ for example, we have that $\Vert-\mathbf{x}\Vert=\Vert\mathbf{x}\Vert$. Norms also satisfy the so-called triangle inequality where for any vectors $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z}$ we have $\Vert\mathbf{x}-\mathbf{z}\Vert+\Vert\mathbf{z}-\mathbf{y}\Vert\geq\Vert\mathbf{x}-\mathbf{y}\Vert$. As illustrated in the figure below for the $\ell_{2}$ norm (left), the $\ell_{1}
🌐
Medium
montjoile.medium.com › l0-norm-l1-norm-l2-norm-l-infinity-norm-7a7d18a4f40c
L0 Norm, L1 Norm, L2 Norm & L-Infinity Norm | by Sara Iris Garcia | Medium
December 22, 2020 - A good practical example of the L0 norm is the one that Nishant Shukla gives, involving two vectors (username and password). If the L0 norm of the difference between the entered and stored vectors is equal to 0, then the login is successful. If the L0 norm is 1, it means that either the username or the password is incorrect, but not both. And lastly, if the L0 norm is 2, it means that both username and password are incorrect.
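A toy sketch of that login example (all names and values here are made up for illustration; a real system would compare password hashes, not raw strings):

```python
def l0_of_difference(entered, stored):
    """'L0 norm' of the difference: count of positions where the tuples disagree."""
    return sum(1 for a, b in zip(entered, stored) if a != b)

stored = ("alice", "hunter2")
print(l0_of_difference(("alice", "hunter2"), stored))  # 0 -> login succeeds
print(l0_of_difference(("alice", "oops"), stored))     # 1 -> one field wrong
print(l0_of_difference(("bob", "oops"), stored))       # 2 -> both fields wrong
```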
🌐
Wiktionary
en.wiktionary.org › wiki › two-norm
two-norm - Wiktionary, the free dictionary
$\|{\vec v}\|_{2}={\sqrt {a_{1}^{2}+a_{2}^{2}+\cdots +a_{n}^{2}}}$. The two-norm of an $m\times m$ matrix $A$ is defined by $\max_{\vec v \neq \vec 0} \|A\vec v\|_2 / \|\vec v\|_2$ ...
🌐
MachineLearningMastery
machinelearningmastery.com › home › blog › gentle introduction to vector norms in machine learning
Gentle Introduction to Vector Norms in Machine Learning - MachineLearningMastery.com
October 17, 2021 - Running the example first prints the defined vector and then the vector’s L1 norm. The L1 norm is often used when fitting machine learning algorithms as a regularization method, e.g. a method to keep the coefficients of the model small, and in turn, the model less complex. The length of a vector can be calculated using the L2 norm, where the 2 is a superscript of the L, e.g.
Top answer
1 of 3
15

I think I understand your question - typically $\|A\|_2$ has two definitions $$ \|A\|_2 = \sqrt{\text{largest eigenvalue of } A^{\ast}A} $$ and $$ \|A\|^{\prime}_2 = \sup_{\|x\|_2 = 1}\|Ax\|_2 $$ Note that $B = A^{\ast}A$ is a symmetric positive semidefinite matrix ($\langle Bx,x \rangle \geq 0$), and hence it can be diagonalized, and its eigenvalues are non-negative. Write them as $$ \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n \geq 0 $$ and consider an orthonormal basis $\{u_1, u_2, \ldots, u_n\}$ such that $$ Bu_i = \lambda_i u_i $$ For any $x \in \mathbb{R}^n$, write $x = \sum \alpha_i u_i$; then $$ \|x\|_2 = 1 \Leftrightarrow \sum_{i=1}^n \alpha_i^2 = 1 $$ So consider $$ \|Ax\|^2_2 = \langle Ax,Ax\rangle = \langle A^{\ast}Ax,x\rangle = \sum_{i=1}^n \lambda_i \alpha_i^2 $$ Hence it follows that $$ \|Ax\|^2_2 \leq \lambda_1 $$ and therefore $\|A\|^{\prime}_2 \leq \sqrt{\lambda_1} = \|A\|_2$

The other inequality is obvious - can you see that?
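The agreement of the two definitions can be checked numerically for a small case. A plain-Python sketch (the $2\times 2$ matrix is an arbitrary illustrative choice of ours): it computes $\sqrt{\lambda_1}$ of $A^{\ast}A$ from the characteristic polynomial, then brute-forces $\sup_{\|x\|_2=1}\|Ax\|_2$ over unit vectors $(\cos t, \sin t)$.

```python
import math

A = [[1.0, 2.0],
     [0.0, 1.0]]

# B = AᵀA, written out entrywise for the 2x2 case
b00 = A[0][0]**2 + A[1][0]**2
b01 = A[0][0]*A[0][1] + A[1][0]*A[1][1]
b11 = A[0][1]**2 + A[1][1]**2

# largest eigenvalue of the symmetric matrix B via its characteristic polynomial
tr, det = b00 + b11, b00 * b11 - b01 * b01
lam1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
spectral = math.sqrt(lam1)

# brute-force sup of ||Ax||_2 over unit vectors x = (cos t, sin t)
best = max(
    math.hypot(A[0][0] * math.cos(t) + A[0][1] * math.sin(t),
               A[1][0] * math.cos(t) + A[1][1] * math.sin(t))
    for t in (k * 2 * math.pi / 100000 for k in range(100000))
)
assert abs(best - spectral) < 1e-6  # the two definitions agree
```

For this particular $A$, both come out to $1+\sqrt{2} \approx 2.414$.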

2 of 3
8

The operator norm of a matrix depends on the norm that you are putting on the vector space on which it acts. So for example if the matrices we consider are in $M_n(\mathbb{R})$ then they are acting on $\mathbb{R}^n$. Given a norm $\| \cdot \|_\alpha$ on $\mathbb{R}^n$ we can create an associated norm (operator norm associated to $\alpha$) by defining

$\|A\|_{(\alpha)}=\sup_{v\neq0}\frac{\|Av\|_\alpha}{\|v\|_\alpha}$.

In the case that we are using the usual euclidean 2-norm for $\mathbb{R}^n$ then the associated $\|\cdot\|_{(2)}$ is what you describe above.

However, we also know that $M_n(\mathbb{R})$ is a vector space isomorphic to $\mathbb{R}^{n^2}$ and so we can put a norm on it this way. These are the $\|A\|_q$ norms that you mention.

In particular $\|A\|_2=\left(\sum|a_{ij}|^2\right)^{\frac{1}{2}}\neq \|A\|_{(2)}$ in general.

Take for example $A=\left(\begin{array}{cc} 1 & 0 \\ 0 & 2 \end{array}\right)$

Then $\|A\|_{(2)}=2$ and $\|A\|_2=\sqrt{5}$
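That example is easy to confirm numerically. A plain-Python sketch (ours, not part of the answer) computes the entrywise (Frobenius-style) 2-norm directly and brute-forces the operator 2-norm over unit vectors:

```python
import math

a = [[1.0, 0.0],
     [0.0, 2.0]]

# entrywise 2-norm: sqrt of the sum of squared entries
frobenius = math.sqrt(sum(x * x for row in a for x in row))

# operator 2-norm: sup of ||Av||_2 over unit vectors v = (cos t, sin t)
operator = max(
    math.hypot(a[0][0] * math.cos(t) + a[0][1] * math.sin(t),
               a[1][0] * math.cos(t) + a[1][1] * math.sin(t))
    for t in (k * 2 * math.pi / 10000 for k in range(10000))
)

assert math.isclose(frobenius, math.sqrt(5.0))  # ||A||_2 = sqrt(5)
assert abs(operator - 2.0) < 1e-6               # ||A||_(2) = 2
```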

Finally, just as in any finite-dimensional space, we have $\|A\|_1\geq \|A\|_2$. Further, for finite-dimensional spaces all norms give the same topology. This implies that for any two norms and any $A$ in the space there are constants $k_1, k_2>0$ such that $\|A\|_\alpha\geq k_1\|A\|_\beta\geq k_2\|A\|_\alpha$.

Thus there is some constant $k>0$ such that $\|A\|_2\geq k\,\|A\|_1$, and it depends on $n$: since $M_n(\mathbb{R})$ with these entrywise norms is just $\mathbb{R}^{n^2}$, Cauchy–Schwarz gives $\|A\|_1\leq n\,\|A\|_2$, so $k = 1/n$ works.

🌐
MathWorks
mathworks.com › matlab › mathematics › linear algebra
vecnorm - Vector-wise norm - MATLAB
N = vecnorm(A,p) calculates the generalized vector p-norm. ... N = vecnorm(A,p,dim) operates along dimension dim. The size of this dimension reduces to 1 while the sizes of all other dimensions remain the same. ... Calculate the 2-norm of a vector corresponding to the point (2,2,2) in 3-D space.