Random math: 15

Source of most content: Gunhee Cho

Eigenvector/value

Def. Let $V$ be a vector space over $F$ with $dim V =n < \infty$. $v \in V \setminus \{0\}$ is called an eigenvector of a linear operator $T: V\rightarrow V$ if $T(v)=\lambda v$ for some scalar $\lambda \in F$, called the eigenvalue corresponding to $v$.

Def. A linear operator $T: V\rightarrow V$ is diagonalizable if there exists a basis $\beta$ for $V$ s.t. $[ T ] _\beta$ is a diagonal matrix, which is equivalent to the existence of a basis consisting of eigenvectors of $T$.

e.g.) Let $A \in Mat_{n\times n}(F)$. Suppose $L_A: F^n \rightarrow F^n \ (X \mapsto AX)$ is diagonalizable. Then, there exists a basis $\beta$ of eigenvectors of $L_A$, i.e., $ [ L_A ] _\beta = diag(d_1, d_2, \cdots, d_n)$. Here, we know that $[ L_A ] _\beta = [ Id ] _{\epsilon_n}^\beta [ L_A ] _{\epsilon_n}^{\epsilon_n} [ Id ] _\beta^{\epsilon_n} = P^{-1}AP$, where $P = [ Id ] _\beta^{\epsilon_n}$. Consequently, $L_A$ is diagonalizable iff there exists an invertible $P$ s.t. $[ L_A ] _\beta = P^{-1} AP,$ i.e., $[ L_A ] _\beta \sim A$.
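e.g.) A minimal numerical sketch of the identity above (my own illustration in numpy, not from the source): build $P$ whose columns are eigenvectors of $A$ and check that $P^{-1}AP$ is diagonal.

```python
import numpy as np

# A real symmetric matrix is diagonalizable; its eigenvectors form a basis beta.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of P are eigenvectors of L_A, i.e., P = [Id]_beta^epsilon.
eigenvalues, P = np.linalg.eig(A)

# [L_A]_beta = P^{-1} A P should equal diag(d_1, ..., d_n).
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))   # diagonal, entries are the eigenvalues (3 and 1 here)
```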

Def. $A \in Mat_{n \times n}(F)$ is diagonalizable ($\Leftrightarrow L_A$ is diagonalizable) if $A \sim D,$ where $D$ is a diagonal matrix, i.e., there exists an invertible $P$ s.t. $D=P^{-1}AP$.

NB1; the former definition concerns the linear operator while the latter concerns the matrix.
NB2; whether $A$ is diagonalizable depends on the choice of the field $F$.
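e.g.) NB2 in action: a rotation by $90°$ has characteristic polynomial $t^2+1$, which has no roots in $\mathbb{R}$ but roots $\pm i$ in $\mathbb{C}$, so the matrix is diagonalizable over $\mathbb{C}$ but not over $\mathbb{R}$. A small numpy check (my own sketch):

```python
import numpy as np

# Rotation by 90 degrees: not diagonalizable over R, diagonalizable over C.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues, P = np.linalg.eig(A)
print(eigenvalues)              # [0.+1.j, 0.-1.j]: the eigenvalues lie in C \ R

D = np.linalg.inv(P) @ A @ P    # diagonal over C
print(np.round(D, 10))
```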

Def. Let $A \in Mat_{n \times n}(F)$. We call $\varphi_A (t) := \det (A-tI_n)$ the characteristic polynomial of $A$.

NB3; One can show by mathematical induction that $\varphi_A(t) = (-1)^n (t^n + a_1t^{n-1} + \cdots + a_{n-1}t+a_n)$, where $(-1)^n a_n = \det A$ and $a_1 = -tr A$.
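e.g.) A numerical sanity check of NB3 (my own sketch; note that numpy's `np.poly` returns the monic coefficients of $\det(tI_n - A) = t^n + a_1t^{n-1} + \cdots + a_n$, which differs from $\varphi_A(t)$ exactly by the factor $(-1)^n$):

```python
import numpy as np

n = 4
A = np.random.rand(n, n)

# Coefficients [1, a_1, ..., a_n] of the monic characteristic polynomial det(tI - A).
a = np.poly(A)

print(np.isclose(a[1], -np.trace(A)))                   # a_1 = -tr A
print(np.isclose((-1)**n * a[-1], np.linalg.det(A)))    # (-1)^n a_n = det A
```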

For a linear operator $T: V \rightarrow V$, the existence of an eigenvector $v$ with eigenvalue $\lambda$ implies that $(T-\lambda Id)(v) =0 \leadsto \ker (T-\lambda Id) \ne \{0\}$. Then, we want to show that this implies $\det (T-\lambda Id) =0$, similarly to the matrix case.
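e.g.) Once $\det$ is defined for operators (next definition), this says each eigenvalue is a root of $\det(T-\lambda Id)$. For matrices this is easy to check numerically (my own sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Each eigenvalue lambda makes A - lambda*I singular, i.e., det(A - lambda*I) = 0.
for lam in np.linalg.eigvals(A):
    print(lam, np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))
```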

Def. Let $V$ be a vector space with $dim V = n < \infty$ and $T: V \rightarrow V$ a linear operator. Then, $\det T := \det [ T ] _\beta$ and $tr T := tr [ T ] _\beta$ for any basis $\beta$ of $V$ (well-definedness is shown below).

Def. Let $V$ be a vector space with $dim V = n < \infty$ and $T: V \rightarrow V$ a linear operator. Define $\varphi_T (t) := \det (T-tId)$, the characteristic polynomial of $T$.

Proposition/Lemma

\[\text{Let } A\sim B, \ A,B \in Mat_{n \times n}(F). \text{ Then, } \det A = \det B, \ tr A = tr B. \nonumber\]

Proof

  1. Since $A \sim B$, $A = P^{-1}BP$ for some invertible $P$. $\because \det AB =\det A \det B \ \forall A,B \in Mat_{n \times n}(F)$, $\det A = \det P^{-1}BP = \det P^{-1} \det B \det P = \det P^{-1}P \det B = \det B$.
  2. $\because tr AB = tr BA$, $tr A = tr P^{-1}BP = tr PP^{-1}B = tr B.$
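e.g.) The invariance is easy to observe numerically (my own sketch): conjugating by a random invertible $P$ changes neither the determinant nor the trace.

```python
import numpy as np

A = np.random.rand(3, 3)
P = np.random.rand(3, 3)             # almost surely invertible
B = np.linalg.inv(P) @ A @ P         # B ~ A

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # det A = det B
print(np.isclose(np.trace(A), np.trace(B)))             # tr A = tr B
```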

\[\text{For a linear operator } T:V\rightarrow V, \text{ the above definitions are well-defined.} \nonumber\]

Proof

Take any other basis $\gamma$ of $V$. Then, $[ T ] _\beta = [ Id ] _{\gamma}^\beta [ T ] _{\gamma}^{\gamma} [ Id ] _\beta^{\gamma} = P^{-1} [ T ] _{\gamma} P$ with $P = [ Id ] _\beta^{\gamma}$, $\leadsto [ T ] _{\beta} \sim [ T ] _{\gamma}$. Then, applying the above lemma concludes the proof.


\[\text{Let } v_1, \cdots, v_k \text{ be eigenvectors with eigenvalues } \lambda_i, 1 \leq i \leq k. \text{ If } \lambda_i \text{'s are mutually distinct,} \nonumber\] \[\text{then } \{v_1, \cdots, v_k \} \text{ is linearly independent.} \nonumber\]

Proof

Suppose, for contradiction, that $v_1, \cdots, v_k$ are linearly dependent, and let $h$ be the smallest positive integer s.t. $v_1, \cdots, v_h$ are linearly dependent, where $2 \leq h \leq k$ (a single eigenvector is nonzero, hence independent). Then, there exist $a_i$'s, not all zero, s.t. $a_1v_1 + \cdots + a_hv_h=0 \leadsto T(a_1v_1 + \cdots + a_hv_h)=$$a_1\lambda_1 v_1 + \cdots + a_h\lambda_h v_h=0$. Multiplying the original relation by $\lambda_h$ gives $a_1\lambda_h v_1 + \cdots + a_h \lambda_h v_h=0$. Subtracting, $a_1(\lambda_1 - \lambda_h) v_1 + \cdots + a_{h-1} (\lambda_{h-1} - \lambda_h) v_{h-1}=0$. Since $v_1, \cdots, v_{h-1}$ are linearly independent by the minimality of $h$, $a_i(\lambda_i - \lambda_h)=0$ for $1 \leq i \leq h-1 \leadsto a_i=0 \ \because \lambda_i \ne \lambda_h$. Then $a_hv_h=0$ with $v_h \ne 0$ implies $a_h =0$, a contradiction. Therefore, $v_1, \cdots, v_k$ are linearly independent.


\[\text{From the above lemma, if } T: V \rightarrow V \text{ has } n \text{ distinct eigenvalues, where } dim V=n, \text{ then } T \text{ is diagonalizable.} \nonumber\]
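e.g.) The corollary holds because the $n$ eigenvectors from the lemma are linearly independent and hence form a basis of eigenvectors. A quick numpy check (my own sketch) on an upper-triangular matrix with distinct diagonal entries:

```python
import numpy as np

# Upper-triangular with distinct diagonal entries, so eigenvalues 1, 2, 3 are distinct.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)
print(np.linalg.matrix_rank(P))                  # 3: eigenvectors are linearly independent
print(np.round(np.linalg.inv(P) @ A @ P, 10))    # diag(1, 2, 3) up to ordering
```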

Eigenspace

Def. Let $\lambda$ be an eigenvalue of a linear operator $T:V\rightarrow V$. Define $E_\lambda := \{v\in V \mid T(v) = \lambda v \}$, which is called the eigenspace of $T$ corresponding to $\lambda$.

NB4; $E_\lambda \leqslant V$ since $\forall v_1, v_2 \in E_\lambda$ and $\forall c \in F$, $T(v_1 - v_2) = T(v_1) - T(v_2) = \lambda (v_1 - v_2)$ and $T(cv_1) = cT(v_1) = \lambda (cv_1)$, i.e., $v_1-v_2, \ cv_1 \in E_\lambda$.
NB5; By definition, we can see that $E_\lambda = \ker (T-\lambda Id)$, i.e., null space of $T-\lambda Id$.
NB6; $dim E_\lambda = dim V - rk(T-\lambda Id)$ by the rank-nullity theorem.
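e.g.) NB5 and NB6 give a practical way to compute $dim E_\lambda$ without finding eigenvectors (my own sketch):

```python
import numpy as np

# Eigenvalue 2 has a two-dimensional eigenspace, eigenvalue 5 a one-dimensional one.
A = np.diag([2.0, 2.0, 5.0])
n = A.shape[0]

for lam in (2.0, 5.0):
    # NB6: dim E_lambda = dim V - rk(A - lambda*I)
    dim_E = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, dim_E)   # 2.0 -> 2, 5.0 -> 1
```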

Def. Given a vector space $V/F$ with $dim V = n < \infty$, let $U, W \leqslant V$. Define $U+W := \{ u + w \mid u \in U, w \in W \}$, the sum of $U$ and $W$. We write $V = U \oplus W$, the direct sum of $U$ and $W$, if $V= U+W$ as a set and $U \cap W = \{0 \}$.

NB7; $U+W \leqslant V$ and $U,W \leqslant U+W$. (easy to check)

Proposition/Theorem

\[V= \oplus_{i=1}^n W_i, \ W_i \leqslant V \Leftrightarrow \text{ every element } v \in V \text{ can be written uniquely as } v = w_1 + \cdots + w_n, w_i \in W_i. \nonumber\]

Proof

Let $\beta_i:=\{ w^i_1, \cdots, w^i_{m_i} \}$ be a basis of $W_i$ with $dim W_i = m_i$, so that $\beta := \cup_{i=1}^n \beta_i$ is a basis of $V$. By definition of the sum, every $v \in V$ can be written as $v = w_1 + \cdots + w_n, \ w_i \in W_i$, where each $w_i = a^i_1 w^i_1 + \cdots + a^i_{m_i} w^i_{m_i}$. Since every vector has a unique expression in terms of the basis $\beta$, the coefficients $a^i_j$, and hence the components $w_i$, are uniquely determined. Conversely, if every $v$ has a unique such expression, then $V = \sum_i W_i$ and the zero vector decomposes only trivially, which gives the directness of the sum.
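e.g.) Uniqueness in practice (my own sketch): with $\mathbb{R}^3 = W_1 \oplus W_2$, the components $w_1, w_2$ of any $v$ are found by solving one invertible linear system in the combined basis.

```python
import numpy as np

# W1 = span(e1, e2), W2 = span((1, 1, 1)); columns of B list both bases.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([4.0, 5.0, 6.0])
a = np.linalg.solve(B, v)           # unique coordinates, since B is invertible

w1 = B[:, :2] @ a[:2]               # component in W1
w2 = B[:, 2:] @ a[2:]               # component in W2
print(w1, w2, np.allclose(w1 + w2, v))   # v = w1 + w2, uniquely
```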


\[\text{Let } v_1, \cdots, v_n\text{ be a basis of } V,\ U=span(\{v_1, \cdots, v_k \}) \text{ and } W=span(\{v _{k+1}, \cdots, v _n\}). \text{ Then, }V= U \oplus W. \nonumber\]

Proof

Let $x \in U\cap W$, then $x = a_1 v_1 + \cdots + a_k v_k = b_1 v_{k+1} + \cdots + b_{n-k} v_n$ $\leadsto a_1 v_1 + \cdots + a_k v_k - b_1 v_{k+1} - \cdots - b_{n-k} v_n = 0$. Since $v_1, \cdots, v_n$ is a basis of $V$, i.e., linearly independent, $a_1=\cdots=a_k=b_1 = \cdots =b_{n-k} =0$. This implies that $x = 0$, i.e., $U \cap W = \{ 0 \}$. For any $v\in V$, $v=a_1 v_1 + \cdots +a_k v_k + a_{k+1} v_{k+1} + \cdots + a_n v_n = u + w$ with $u := a_1 v_1 + \cdots + a_k v_k \in U$ and $w := a_{k+1} v_{k+1} + \cdots + a_n v_n \in W$, i.e., $V = U +W$.
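e.g.) Numerically, the direct-sum condition can be read off from ranks (my own sketch): $dim(U+W) = dim U + dim W$ forces $U \cap W = \{0\}$.

```python
import numpy as np

# Basis of R^3, split as U = span(v1, v2) and W = span(v3).
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 1.0])

U = np.column_stack([v1, v2])
W = np.column_stack([v3])

# rank of the stacked generators = dim(U + W) = 3 = dim U + dim W, so V = U (+) W.
print(np.linalg.matrix_rank(np.hstack([U, W])))   # 3
```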


\[\text{Let } T:V\rightarrow V \text{ be a linear operator, where } V \text{ is a vector space with } dim V = n < \infty. \nonumber\] \[\text{Then, } T \text{ is diagonalizable iff } V = \oplus_{i=1}^k E_{\lambda_i} \text{ for the distinct eigenvalues } \lambda_1, \cdots, \lambda_k, \ k \leq n. \nonumber\]

Proof

  1. $(\Rightarrow)$ Suppose $T$ is diagonalizable, i.e., there exists a basis $\beta$ consisting of eigenvectors. Then, we can write $\beta = \{v^1_1, \cdots, v^1 _{m_1}, v^2_1, \cdots, v^2 _{m_2}, \cdots, v^k_1, \cdots, v^k _{m_k} \},$ where $v^i_1, \cdots, v^i _{m_i}$ correspond to eigenvalue $\lambda_i$ and $\sum _{i=1}^k m_i = n$. We claim $span(\{v^i_1, \cdots, v^i _{m_i} \})=E _{\lambda_i}$: take any $v = \sum _{l,j} a^l_j v^l_j \in E _{\lambda_i}$. Applying $T$ gives $\sum _{l,j} \lambda_l a^l_j v^l_j = \lambda_i \sum _{l,j} a^l_j v^l_j$, so $(\lambda_l - \lambda_i) a^l_j = 0$ for all $l, j$ by the linear independence of $\beta$, forcing $a^l_j = 0$ whenever $l \ne i$. Thus, $E _{\lambda_i} = span(\{v^i_1, \cdots, v^i _{m_i} \})$. Therefore, $V = \oplus _{i=1}^k E _{\lambda_i}$ by the above lemma.
  2. $(\Leftarrow)$ Suppose $V = \oplus_{i=1}^k E _{\lambda_i}$. Let $\beta_i$ be a basis for $E _{\lambda_i}, \ 1 \leq i \leq k$. Then, $\beta = \cup _{i=1}^k \beta_i$ is a basis for $V$ (it spans $V = \sum_i E _{\lambda_i}$ and is linearly independent since the sum is direct), which consists of eigenvectors, i.e., $T$ is diagonalizable.

NB8; For the above $T$, $[ T ] _\beta = diag(\underbrace{\lambda_1, \cdots, \lambda_1} _\text{$m_1$}, \underbrace{\lambda_2, \cdots, \lambda_2} _\text{$m _2$}, \cdots, \underbrace{\lambda_k, \cdots, \lambda_k} _\text{$ m _k $}).$
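e.g.) A worked check of the theorem and NB8 (my own sketch): the matrix below has eigenvalue $\lambda_1 = 1$ with $m_1 = 2$ and $\lambda_2 = 4$ with $m_2 = 1$, so $V = E_1 \oplus E_4$ and $[ T ] _\beta = diag(1, 1, 4)$ up to ordering.

```python
import numpy as np

# A = I + (all-ones matrix): eigenvalues 1, 1, 4.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P: eigenspace bases glued together
D = np.linalg.inv(P) @ A @ P

print(np.round(np.sort(eigenvalues), 10))   # [1. 1. 4.]
print(np.round(D, 10))                      # diag with each lambda_i repeated m_i times
```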