Random math: 18

Source of most content: Gunhee Cho

Schur’s theorem

\[\text{Let } (V, \langle , \rangle) \text{ be a finite-dimensional inner product space (f.d.i.p.s.) over } F, \text{ and let } T: V\rightarrow V \text{ be a linear operator.} \nonumber\] \[\text{Assume } \varphi_T (t) (:= \det (T-t\, Id)) \text{ splits. Then, there exists an orthonormal basis } \beta \text{ for } V \text{ s.t. } [ T ]_\beta \text{ is upper-triangular.} \nonumber\]

NB1; $\varphi_T (t)$ splits: $\varphi_T (t)$ factors into linear factors, i.e., polynomials of degree 1.

We need a lemma to prove this theorem.

\[\text{If a linear operator } T: V\rightarrow V \text{ has an eigenvalue } \lambda, \text{ then } T^* : V \rightarrow V \text{ has the eigenvalue } \bar{\lambda}. \nonumber\]

Proof of Lemma

Let $\beta$ be an orthonormal basis for $V$. Then, $\det (T^* - \bar{\lambda}\, Id) = \det [ T^* - \bar{\lambda}\, Id ]_\beta = \det ([ T^* ]_\beta - \bar{\lambda} I_n) = \det (([ T ]_\beta)^H - \bar{\lambda} I_n) \stackrel{\det A = \det A^\top}{=} \det (\overline{[ T ]_\beta} - \bar{\lambda} I_n) = \overline{\det ([ T ]_\beta - \lambda I_n)} = \overline{\det (T-\lambda\, Id)} = \bar{0} = 0$. Therefore, $\bar{\lambda}$ is an eigenvalue of $T^*$.
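As a quick numerical sanity check of the lemma (a sketch, assuming NumPy is available): the spectrum of the conjugate transpose $A^H$ is exactly the conjugate of the spectrum of $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
# A random complex matrix plays the role of [T]_beta.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

eigs_A = np.linalg.eigvals(A)
eigs_AH = np.linalg.eigvals(A.conj().T)  # A^H = conjugate transpose

# The two spectra agree up to conjugation (compare as sorted multisets).
match = np.allclose(np.sort_complex(eigs_AH), np.sort_complex(eigs_A.conj()))
```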

Proof of Schur’s thm

We prove it by mathematical induction on $\dim V$.
If $\dim V = 1$, then it is obvious. Assume the statement holds when $\dim V = n-1$.
Let $\dim V = n$ and $T: V \rightarrow V$. By assumption, $\varphi_T (t)$ splits. Let $\lambda$ be an eigenvalue of $T$. Then, by the above lemma, $\bar{\lambda}$ is an eigenvalue of $T^*$. Let $z$ be an eigenvector of $T^*$ with eigenvalue $\bar{\lambda}$ and $\Vert z \Vert = 1$. Consider the subspace generated by $z$, i.e., $\langle z \rangle = span(z) \leqslant V$. Then, $V = \langle z \rangle \oplus \langle z \rangle^\perp$, where $\dim \langle z \rangle = 1$ and $\dim \langle z \rangle^\perp = n-1$. Since $T^*(cz) = c\bar{\lambda} z$, the subspace $\langle z \rangle$ is $T^*$-invariant $(T^*(\langle z \rangle) \subseteq \langle z \rangle)$. Let $x \in \langle z \rangle^\perp$. Then, $\langle T(x), z \rangle = \langle x, T^*(z) \rangle = \langle x, \bar{\lambda} z \rangle = \lambda \langle x, z \rangle = 0$, which implies $T(x) \in \langle z \rangle^\perp$, i.e., $\langle z \rangle^\perp$ is $T$-invariant. Note that $\varphi_{T \mid_{\langle z \rangle^\perp}}(t)$ splits, since it divides $\varphi_T(t)$, which splits. By the induction hypothesis, there exists an orthonormal basis $\hat{\beta} := \{ v_1, \cdots, v_{n-1}\}$ of $\langle z \rangle^\perp$ s.t. $[ T \mid_{\langle z \rangle^\perp} ]_{\hat{\beta}}$ is an upper-triangular matrix. Let $\beta := \hat{\beta} \cup \{ z \} = \{ v_1, \cdots, v_{n-1}, z \}$. Then, $\beta$ is an orthonormal basis for $V$ and
$ [ T ]_\beta = \begin{bmatrix} [ T \mid_{\langle z \rangle^\perp} ]_{\hat{\beta}} & * \\ 0 & \lambda \end{bmatrix} = \begin{bmatrix} \lambda_1 & * & \cdots & * \\ 0 & \ddots & & \vdots \\ \vdots & & \lambda_{n-1} & * \\ 0 & \cdots & 0 & \lambda \end{bmatrix} $, where the last column is $[ T(z) ]_\beta$ and its bottom entry is $\langle T(z), z \rangle = \langle z, T^*(z) \rangle = \lambda$. This is an upper-triangular matrix.


\[\text{Let } F = \mathbb{C}. \text{ Then, } \forall A \in Mat_{n \times n}(\mathbb{C}) \text{ with } L_A : \mathbb{C}^n \rightarrow \mathbb{C}^n \ (x \mapsto Ax), \ \exists \text{ ONB } \beta \text{ s.t. } [ L_A ]_\beta \text{ is upper triangular.} \nonumber\]

Proof

Since $\varphi _{L _A} (t)$ splits by the fundamental theorem of algebra (as $F=\mathbb{C}$), there exists an orthonormal basis $\beta$ s.t. $[ L _A ] _\beta$ is an upper-triangular matrix. $ [ L_A ] _\beta = [ Id ]^\beta _{\epsilon _n} [ L_A ]^{\epsilon _n} _{\epsilon _n} [ Id ]^{\epsilon _n} _\beta = Q^{-1} A Q,$ where $Q = [ Id ]^{\epsilon _n} _\beta$ is unitary since $\beta$ is orthonormal, i.e., every complex matrix is unitarily similar to an upper-triangular matrix.
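This corollary can be checked numerically with SciPy's `schur` routine (assuming SciPy is available): with `output='complex'` it returns a unitary $Q$ and an upper-triangular $T$ with $A = Q T Q^H$.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# Complex Schur form: A = Q T Q^H with Q unitary, T upper triangular.
T, Q = schur(A, output='complex')

is_upper = np.allclose(T, np.triu(T))
is_unitary = np.allclose(Q.conj().T @ Q, np.eye(5))
reconstructs = np.allclose(Q @ T @ Q.conj().T, A)
```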

Normal and self-adjoint

Def. Let $T: V \rightarrow V$ be a linear operator, where $V$ is a finite-dimensional inner product space over $F$. We call $T$ normal if $T\circ T^ * = T^ * \circ T$. For $A \in Mat_{n\times n} (F)$, $A$ is normal if $AA^H = A^H A$.

Def. A linear operator $T: V \rightarrow V$ is called self-adjoint when $T = T^ *$ ($\implies T\circ T^ * = T^ * \circ T$, so self-adjoint operators are normal).

Def. For $A \in Mat_{n \times n}(\mathbb{C})$, $A$ is called a unitary matrix if $AA^H = A^H A = I_n$.
For $A \in Mat_{n \times n}(\mathbb{R})$, $A$ is called an orthogonal matrix if $AA^\top = A^\top A = I_n$.
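The three definitions are direct matrix identities, so they can be checked mechanically; a minimal sketch assuming NumPy, with two hand-picked example matrices:

```python
import numpy as np

def is_normal(A, tol=1e-10):
    """A A^H == A^H A."""
    AH = A.conj().T
    return np.allclose(A @ AH, AH @ A, atol=tol)

def is_self_adjoint(A, tol=1e-10):
    """A == A^H (Hermitian)."""
    return np.allclose(A, A.conj().T, atol=tol)

def is_unitary(A, tol=1e-10):
    """A A^H == I (equivalent to A^H A == I for square A)."""
    return np.allclose(A @ A.conj().T, np.eye(A.shape[0]), atol=tol)

H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])    # Hermitian, hence normal
U = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)  # unitary, hence normal
```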

Propositions

\[\text{Let } T: V \rightarrow V \text{ be normal. Then, the following hold:} \nonumber\]
  1. $\Vert T(x) \Vert = \Vert T^ *(x) \Vert, \forall x \in V.$
  2. If $T(v) = \lambda v$, then $T^ *(v) = \bar{\lambda} v$.
  3. If eigenvalues $\lambda_1 \ne \lambda_2$ and $T(v_1) = \lambda_1 v_1, T(v_2) = \lambda_2 v_2,$ then $ \langle v_1, v_2 \rangle =0 \Leftrightarrow v_1 \perp v_2.$
  4. $\forall c \in F,$ $T-c Id$ is normal.

Proof

  1. $\forall x \in V, \Vert T(x) \Vert^2 = \langle T(x), T(x) \rangle =\langle x, T^ * (T(x)) \rangle \stackrel{normal}{=} \langle x, T(T^ *(x)) \rangle = \langle T^ * (x), T^ * (x) \rangle = \Vert T^ * (x) \Vert^2.$

  2. $\Vert T^ * (v) - \bar{\lambda} v \Vert = \Vert (T^ * - \bar{\lambda}Id)(v) \Vert = \Vert (T - \lambda Id)^ * (v) \Vert \stackrel{1}{=} \Vert (T - \lambda Id) (v) \Vert = 0$, where 1 applies because $T - \lambda Id$ is normal by 4, and the last equality holds since $v$ is an eigenvector for $\lambda$. Therefore, $T^ * (v) = \bar{\lambda} v$.
  3. $\lambda_1 \langle v_1, v_2 \rangle = \langle \lambda_1 v_1, v_2 \rangle = \langle T( v_1), v_2 \rangle \stackrel{adjoint}{=} \langle v_1, T^ * (v_2) \rangle \stackrel{2}{=} \langle v_1, \bar{\lambda}_2 v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle$. Therefore, $(\lambda_1 - \lambda_2) \langle v_1, v_2 \rangle = 0$. Since $\lambda_1 \ne \lambda_2$, $\langle v_1, v_2 \rangle = 0$.
  4. $ (T - c Id) \circ (T - c Id)^ * = (T - c Id) \circ (T^ * - \bar{c}Id) = T\circ T^ * - \bar{c} T - c T^ * + c \bar{c} Id $. And $(T^ * - \bar{c}Id) \circ (T - c Id) = T^ * \circ T - \bar{c} T - c T^ * + c \bar{c} Id$. Since $T$ is normal ($T \circ T^ * = T^ * \circ T$), the two expressions agree, i.e., $T-c Id$ is normal.
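Propositions 1 and 3 can be observed numerically on a synthetic normal matrix; a sketch assuming NumPy, building $A = U D U^H$ from a random unitary $U$ (via QR) and a complex diagonal $D$:

```python
import numpy as np

rng = np.random.default_rng(2)
# A normal matrix A = U D U^H: the columns of U are orthonormal
# eigenvectors and the entries of D are the eigenvalues.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)
D = np.diag(rng.standard_normal(4) + 1j * rng.standard_normal(4))
A = U @ D @ U.conj().T

# Proposition 1: ||A x|| == ||A^H x|| for every x.
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
prop1 = np.isclose(np.linalg.norm(A @ x), np.linalg.norm(A.conj().T @ x))

# Proposition 3: eigenvectors for distinct eigenvalues are orthogonal
# (here visible directly, since the eigenvectors are columns of U).
prop3 = np.isclose(np.vdot(U[:, 0], U[:, 1]), 0)
```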

\[\text{Let } (V, \langle , \rangle) \text{ be a finite-dimensional inner product space over } F=\mathbb{C}, \text{ and let } T: V\rightarrow V \text{ be a linear operator.} \nonumber\] \[\text{Then, } T \text{ is normal iff there exists an orthonormal basis } \beta \text{ consisting of eigenvectors of } T. \nonumber\]

NB2; This theorem actually implies Spectral Theorem.

Proof

  1. $(\Rightarrow)$ Since $F=\mathbb{C}$, $\varphi _T (t)$ splits by the fundamental theorem of algebra. By Schur’s theorem, there exists an orthonormal basis $\beta = \{ v _1, \cdots, v _n \}$ of $V$ s.t. $[ T ] _\beta$ is an upper-triangular matrix. In particular, $v _1$ is an eigenvector of $T$. We show inductively that every $v _k$ is an eigenvector: assume $v _1, \cdots, v _{k-1}$ are eigenvectors, with $T(v _i) = \lambda _i v _i$. Then $ [ T ] _\beta = \begin{bmatrix} \lambda _1 & & & \\ & \ddots & & C \\ & & \lambda _{k-1} & \\ & 0 & & D \end{bmatrix} $, where $D$ is an upper-triangular matrix. Therefore,
    $ [ T^ * ] _\beta = ([ T ] _\beta)^H = \begin{bmatrix} \overline{\lambda} _1 & & & \\ & \ddots & & 0 \\ & & \overline{\lambda} _{k-1} & \\ & C^H & & E \end{bmatrix} $, where $E = D^H$ is a lower-triangular matrix. Since $T$ is normal, $T^ * (v _i) = \overline{\lambda} _i v _i$ for $i \le k-1$ by proposition 2, so $v _1, \cdots, v _{k-1}$ are eigenvectors of $T^ *$ and the first $k-1$ columns of $[ T^ * ] _\beta$ vanish below the diagonal, i.e., $C^H = 0$ and hence $C = 0$. Then the $k$-th column of $[ T ] _\beta$ has its only nonzero entry on the diagonal, i.e., $T(v _k) = c v _k$ for some $c \in F$, so $v _k$ is an eigenvector of $T$.

  2. $(\Leftarrow)$ Let $\beta = \{v _1 , \cdots, v _n \}$ be an orthonormal basis of eigenvectors, $T(v _i) = \lambda _i v _i$. Then $[ T ] _\beta = diag(\lambda _1, \cdots, \lambda _n)$, so $[ T^ * ] _\beta = ([ T ] _\beta)^H = diag(\overline{\lambda} _1, \cdots, \overline{\lambda} _n)$, i.e., $T^ * (v _i) = \overline{\lambda} _i v _i$. Any $v\in V$ can be expressed in $\beta$ as $v = \sum _{i=1}^n a _i v _i$. Then, $T\circ T^ * (v) = T(\sum _{i=1}^n a _i \overline{\lambda } _i v _i) = \sum _{i=1}^n a _i \overline{\lambda } _i \lambda _i v _i$. Similarly, $T^ * \circ T (v) = T^ * (\sum _{i=1}^n a _i \lambda _i v _i) = \sum _{i=1}^n a _i \overline{\lambda } _i \lambda _i v _i$. Therefore, $T\circ T^ * (v) = T^ * \circ T (v), \forall v \in V$, i.e., $T$ is normal.
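The theorem has a concrete numerical face (a sketch assuming NumPy and SciPy): for a normal $A$, the complex Schur triangle $T$ collapses to a diagonal matrix, so the columns of the unitary $Q$ form an orthonormal basis of eigenvectors.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
# Build a normal matrix A = V D V^H.
V, _ = np.linalg.qr(rng.standard_normal((4, 4))
                    + 1j * rng.standard_normal((4, 4)))
D = np.diag(rng.standard_normal(4) + 1j * rng.standard_normal(4))
A = V @ D @ V.conj().T

T, Q = schur(A, output='complex')

# Upper triangular + normal forces T to be (numerically) diagonal.
T_diagonal = np.allclose(T, np.diag(np.diag(T)), atol=1e-8)
Q_orthonormal = np.allclose(Q.conj().T @ Q, np.eye(4))
```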


\[\text{For } A \in Mat_{n \times n}(F),\ AA^H = I \text{ iff the column (or row) vectors of } A \text{ form an orthonormal basis of } F^n. \nonumber\]

Proof

$A^H A = I \Leftrightarrow \langle A^j, A^i \rangle = \delta_{ij}$, where $A^i$ is the $i$-th column vector of $A$ and $\delta$ is the Kronecker delta, since $(A^H A)_{ij} = \sum_k \overline{A_{ki}} A_{kj} = \langle A^j, A^i \rangle$. Here, $\langle A^i, A^j \rangle = \delta_{ij}$ means exactly that the columns of $A$ form an orthonormal basis of $F^n$. Since $A$ is square, $AA^H = I \Leftrightarrow A^H A = I$, and applying the same argument to $A^H$ gives the statement for the rows.
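A minimal illustration, assuming NumPy, using the scaled discrete Fourier matrix (a classical unitary matrix): its Gram matrix of columns is the identity.

```python
import numpy as np

# DFT matrix scaled by 1/sqrt(n) is unitary.
n = 4
F = np.array([[np.exp(-2j * np.pi * i * k / n) for k in range(n)]
              for i in range(n)]) / np.sqrt(n)

gram = F.conj().T @ F            # (i, j) entry = <F^j, F^i>
cols_orthonormal = np.allclose(gram, np.eye(n))
is_unitary = np.allclose(F @ F.conj().T, np.eye(n))
```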