The first point is proven as follows. From the SVD $A = U\Sigma V^T$ we can see that the eigenvalues of $A^TA$ are just the squared singular values of $A$. At the same time, the columns of $V$ are the eigenvectors of $A^TA$. So, exploiting orthogonality of the eigenvectors, $\|A\|_2 = \sigma_{\max}(A)$.
The proof is based on the property of the second matrix norm, $\|A\|_2 = \sqrt{\lambda_{\max}(A^TA)}$.
The same reasoning as in the first point says that the eigenvalues of $A^TA$ are just the singular values of $A$ squared.
So $\|A^TA\|_2 = \sigma_{\max}(A)^2 = \|A\|_2^2$.
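The claim above can be checked numerically. The sketch below uses a small randomly generated matrix (an assumed example, not from the original answer) to confirm that the eigenvalues of $A^TA$ are the squared singular values of $A$, and that the columns of $V$ are its eigenvectors:

```python
# Numerical check on an assumed example matrix: eigenvalues of A^T A are the
# squared singular values of A; columns of V are eigenvectors of A^T A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
eigvals = np.linalg.eigvalsh(A.T @ A)        # ascending order

# Squared singular values match the eigenvalues (after sorting).
assert np.allclose(np.sort(s**2), eigvals)

# Each column of V is an eigenvector of A^T A with eigenvalue sigma_i^2.
V = Vt.T
for i in range(3):
    assert np.allclose(A.T @ A @ V[:, i], s[i]**2 * V[:, i])
```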
Edit: the inequality becomes an equality after noting that, for symmetric $A$, $\sigma_i(A) = |\lambda_i(A)|$; the absolute value is needed because eigenvalues of $A$ could be negative.
For the problem $\|A^TA\|_2 = \|A\|_2^2$, we have the following:
By definition of the 2-norm of a matrix, $\|A\|_2 = \underset{\|x\|_2=1}{\max}\|Ax\|_2$, where $x \in \mathbb{R}^n$, $A$ is an $m \times n$ matrix and $\|x\|_2 = \sqrt{\sum_{i=0}^{n-1}x_i^2}$.
Also, by SVD, $A = U\Sigma V^T$, where $U, V$ are $m \times m$ and $n \times n$ unitary (orthonormal) matrices, respectively, and $\Sigma$ is an $m \times n$ diagonal matrix with $\sigma_0 \ge \sigma_1 \ge \dots \ge \sigma_{n-1} \ge 0$, with $\sigma_i$ being the singular values of the matrix $A$.
Now, $A^TA = V\Sigma^T\Sigma V^T = V\Sigma^2V^T$, where $U, V$ are unitary with $U^TU = I$ and $V^TV = I$.
Also, unitary matrices preserve norms, i.e., for a unitary matrix $U$, $\|Ux\|_2 = \|x\|_2$.
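The norm-preservation property can be verified on a small assumed example: build an orthogonal matrix via QR factorization and compare vector norms before and after applying it.

```python
# Check on an assumed example that an orthogonal (real unitary) matrix
# preserves the 2-norm of a vector.
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # Q is orthogonal: Q^T Q = I

x = rng.standard_normal(4)
assert np.allclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```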
Hence, by definition of the 2-norm,
$\begin{array}{rcl}
\|A^TA\|_2 &=& \underset{\|x\|_2=1}{\max}\|A^TAx\|_2 \\
&=& \underset{\|x\|_2=1}{\max}\|V\Sigma^2V^Tx\|_2 \\
&=& \underset{\|x\|_2=1}{\max}\|\Sigma^2V^Tx\|_2 \text{ (since $V$ is unitary, hence norm-preserving)} \\
&=& \underset{\|x\|_2=1}{\max}\|\Sigma^2x\|_2 \text{ (since $V^T$ is unitary, $V^Tx$ ranges over the unit sphere)} \\
&=& \underset{\|x\|_2=1}{\max}\sqrt{\sum_{i=0}^{n-1}\sigma_i^4x_i^2} \\
&\leq& \sqrt{\sum_{i=0}^{n-1}\sigma_0^4x_i^2} \text{ (since $\sigma_0 \ge \sigma_i$ for all $i$)} \\
&=& \sigma_0^2\sqrt{\sum_{i=0}^{n-1}x_i^2} \\
&=& \sigma_0^2 \text{ (since $\|x\|_2=1$)}
\end{array}$
Also, for the specific choice $x=e_0=\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix} \in \mathbb{R}^n$, $\|\Sigma^2e_0\|_2=\sigma_0^2$, so the bound is attained.
Hence, combining the above two, $\|A^TA\|_2 = \sigma_0^2$.
Also, from here, applying the same argument to $A$ itself, we have $\|A\|_2 = \sigma_0$, the largest singular value, and therefore $\|A^TA\|_2 = \|A\|_2^2$.
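The whole derivation can be sanity-checked numerically. The sketch below (an assumed example matrix) confirms $\|A\|_2 = \sigma_0$ and $\|A^TA\|_2 = \sigma_0^2 = \|A\|_2^2$:

```python
# Numerical sanity check of the result on an assumed example matrix:
# ||A||_2 = sigma_0  and  ||A^T A||_2 = sigma_0^2 = ||A||_2^2.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))

sigma0 = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value
norm_A = np.linalg.norm(A, 2)                   # induced 2-norm
norm_AtA = np.linalg.norm(A.T @ A, 2)

assert np.isclose(norm_A, sigma0)
assert np.isclose(norm_AtA, sigma0**2)
assert np.isclose(norm_AtA, norm_A**2)
```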
There are three important types of matrix norms. For some matrix $A$:

- Induced norm, which measures the maximum of $\frac{\|Ax\|}{\|x\|}$ for any $x \neq 0$ (or, equivalently, the maximum of $\|Ax\|$ for $\|x\|=1$).
- Element-wise norm, which is like unwrapping $A$ into a long vector, then calculating its vector norm.
- Schatten norm, which measures the vector norm of the singular values of $A$.
So, to answer your question:
Frobenius norm = Element-wise 2-norm = Schatten 2-norm
Induced 2-norm = Schatten $\infty$-norm. This is also called Spectral norm.
So if by "2-norm" you mean the element-wise or Schatten norm, then it is identical to the Frobenius norm. If you mean the induced 2-norm, you get the spectral norm, which is $\le$ the Frobenius norm.
As far as I can tell, if you don't clarify which type you're talking about, the induced norm is usually implied. For example, in MATLAB, norm(A,2) gives you the induced 2-norm, which they simply call the 2-norm. So in that sense, the answer to your question is that the (induced) matrix 2-norm is $\le$ the Frobenius norm, and the two are only equal when the matrix has at most one nonzero singular value (i.e., rank at most one).
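The correspondences above are easy to demonstrate with NumPy's norm conventions (the matrix below is just an assumed example):

```python
# Illustration on an assumed example: Frobenius = element-wise 2-norm
# = Schatten 2-norm, while the induced 2-norm is the largest singular value.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
s = np.linalg.svd(A, compute_uv=False)   # singular values

fro       = np.linalg.norm(A, 'fro')     # Frobenius norm
elem2     = np.linalg.norm(A.ravel())    # element-wise 2-norm
schatten2 = np.linalg.norm(s)            # Schatten 2-norm
spectral  = np.linalg.norm(A, 2)         # induced 2-norm (spectral norm)

assert np.allclose([elem2, schatten2], fro)
assert np.isclose(spectral, s.max())
assert spectral <= fro
```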
The 2-norm (spectral norm) of a matrix is the greatest distortion of the unit circle/sphere/hyper-sphere. It corresponds to the largest singular value (or the largest |eigenvalue| if the matrix is symmetric/Hermitian).
The Frobenius norm is the "diagonal" across all the singular values.
i.e. $$||A||_2 = s_1 \;\;,\;\;||A||_F = \sqrt{s_1^2 +s_2^2 + ... + s_r^2}$$
(r being the rank of A).
Here's a 2D version of it: $x$ is any vector on the unit circle, and $Ax$ is the deformation of all those vectors. The length of the red line is the 2-norm (biggest singular value), and the length of the green line is the Frobenius norm (diagonal).
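This 2D picture can be reproduced numerically: sample the unit circle, apply $A$, and check that the largest resulting length approaches $s_1$ while the Frobenius norm is the "diagonal" $\sqrt{s_1^2+s_2^2}$. The $2\times 2$ matrix here is just an assumed example.

```python
# 2-D illustration on an assumed example matrix: the maximum stretch of unit
# vectors under A approximates the spectral norm s_1; the Frobenius norm is
# the "diagonal" across the singular values.
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
s = np.linalg.svd(A, compute_uv=False)

theta = np.linspace(0.0, 2.0 * np.pi, 10_000)
circle = np.vstack([np.cos(theta), np.sin(theta)])   # unit circle samples
stretch = np.linalg.norm(A @ circle, axis=0)         # lengths of A x

assert np.isclose(stretch.max(), s[0], atol=1e-3)    # red line: 2-norm
assert np.isclose(np.sqrt((s**2).sum()), np.linalg.norm(A, 'fro'))  # green line
```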
