Linear independence of eigenvectors;
a sufficient condition

Branko Ćurgus

In the theorem below we prove that eigenvectors corresponding to distinct eigenvalues are linearly independent.
Theorem. Let $\mathcal{V}$ be a vector space, let $T: \mathcal V\to \mathcal V$ be a linear transformation and let $m$ be a positive integer. Let ${v}_1, \ldots, {v}_m \in \mathcal V$ be eigenvectors of $T$ with the corresponding eigenvalues $\lambda_1,\ldots,\lambda_m$, that is, let \[ T {v}_k = \lambda_k {v}_k \quad \text{for all} \quad k \in \{1,\ldots,m\}. \] If $\lambda_1,\ldots,\lambda_m$ are distinct, then the vectors ${v}_1, \ldots, {v}_m$ are linearly independent.
Proof. In this proof we will assume that $m=3$. It should be clear how to adapt the proof for any positive integer $m$. We assume:
  1. $\lambda_1 \neq \lambda_2, \lambda_1 \neq \lambda_3$ and $\lambda_2 \neq \lambda_3$.
  2. ${v}_1, {v}_2, {v}_3$ are nonzero vectors.
  3. $T{v}_1 = \lambda_1 {v}_1, T{v}_2 = \lambda_2 {v}_2, T{v}_3 = \lambda_3 {v}_3$.
To prove that ${v}_1, {v}_2, {v}_3$ are linearly independent we have to prove the following implication \[ \alpha_1 {v}_1 + \alpha_2 {v}_2+ \alpha_3 {v}_3 = {0} \quad \Rightarrow \quad \alpha_1 = \alpha_2 = \alpha_3 = 0. \] To prove this implication we assume \begin{equation} \label{eq1} \alpha_1 {v}_1 + \alpha_2 {v}_2+ \alpha_3 {v}_3 = {0}. \end{equation} To show that $\alpha_3 = 0$ we apply two linear transformations to both sides of \eqref{eq1}: the linear transformation $T-\lambda_1 I$ and the linear transformation $T-\lambda_2 I$, as follows \begin{equation*} (T-\lambda_1 I )\bigl(\alpha_1 {v}_1 + \alpha_2 {v}_2+ \alpha_3 {v}_3\bigr) = (T-\lambda_1 I) {0}. \end{equation*} Next, we remember that this is a linear algebra class, so we use the linearity of $T-\lambda_1 I$ to get \begin{equation} \label{eq2a} \alpha_1 (T-\lambda_1 I) {v}_1 + \alpha_2 (T-\lambda_1 I){v}_2+ \alpha_3 (T-\lambda_1 I) {v}_3 = {0}. \end{equation} Now we notice that by assumption 3 we have \begin{equation*} (T-\lambda_1 I) {v}_1 = {0}, \quad (T-\lambda_1 I){v}_2 = (\lambda_2 - \lambda_1) {v}_2, \quad (T-\lambda_1 I){v}_3 = (\lambda_3 - \lambda_1) {v}_3. \end{equation*} With these equalities \eqref{eq2a} becomes \begin{equation} \label{eq2b} \alpha_2 (\lambda_2 - \lambda_1) {v}_2+ \alpha_3 (\lambda_3 - \lambda_1) {v}_3 = {0}. \end{equation} Now we apply the linear transformation $T-\lambda_2 I$ to both sides of \eqref{eq2b} and use its linearity to get \begin{equation} \label{eq2c} \alpha_2 (\lambda_2 - \lambda_1) (T-\lambda_2 I) {v}_2+ \alpha_3 (\lambda_3 - \lambda_1) (T-\lambda_2 I) {v}_3 = {0}. \end{equation} Next we notice that by assumption 3 we have \begin{equation*} (T-\lambda_2 I) {v}_2 = {0}, \quad (T-\lambda_2 I){v}_3 = (\lambda_3 - \lambda_2) {v}_3. \end{equation*} With these equalities \eqref{eq2c} becomes \begin{equation*} \alpha_3 (\lambda_3 - \lambda_1) (\lambda_3 - \lambda_2) {v}_3 = {0}.
\end{equation*} Since ${v}_3$ is a nonzero vector, the last equality implies that \[ \alpha_3 (\lambda_3 - \lambda_1) (\lambda_3 - \lambda_2) = 0. \] By assumption 1, the preceding equality yields \[ \alpha_3 = 0. \] To prove that $\alpha_2 = 0$ we would apply the linear transformations $T-\lambda_1 I$ and $T-\lambda_3 I$ to both sides of \eqref{eq1}. To prove that $\alpha_1 = 0$ we would apply the linear transformations $T-\lambda_2 I$ and $T-\lambda_3 I$ to both sides of \eqref{eq1} and use similar reasoning.
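The $m=3$ computation above can be checked numerically. The sketch below is an illustration outside the proof: it assumes a concrete diagonal matrix $T$ with distinct diagonal entries, so that the standard basis vectors serve as the eigenvectors $v_1, v_2, v_3$; the coefficients $\alpha_1, \alpha_2, \alpha_3$ are chosen arbitrarily.

```python
import numpy as np

# A concrete example: T diagonal with distinct eigenvalues 1, 2, 3,
# so v1, v2, v3 are the standard basis vectors of R^3.
lam = np.array([1.0, 2.0, 3.0])
T = np.diag(lam)
I = np.eye(3)
v = [I[:, 0], I[:, 1], I[:, 2]]

# An arbitrary combination alpha1*v1 + alpha2*v2 + alpha3*v3.
alpha = np.array([5.0, -2.0, 4.0])
w = alpha[0] * v[0] + alpha[1] * v[1] + alpha[2] * v[2]

# Applying (T - lam1 I) and then (T - lam2 I) annihilates the v1 and v2
# terms, leaving alpha3 (lam3 - lam1)(lam3 - lam2) v3, as in the proof.
result = (T - lam[1] * I) @ ((T - lam[0] * I) @ w)
expected = alpha[2] * (lam[2] - lam[0]) * (lam[2] - lam[1]) * v[2]
assert np.allclose(result, expected)
```

Since $v_3 \neq 0$ and the scalar factor $(\lambda_3-\lambda_1)(\lambda_3-\lambda_2)$ is nonzero, `result` can only be the zero vector when $\alpha_3 = 0$.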

In general, with an arbitrary positive integer $m$, equation \eqref{eq1} is replaced by \begin{equation} \label{eq3} \alpha_1 {v}_1 + \alpha_2 {v}_2+ \cdots + \alpha_m {v}_m = {0}. \end{equation} Choose $k \in \{1,\ldots,m\}$ arbitrarily. To prove that $\alpha_k = 0$ we apply the transformations \begin{equation} \label{eq4} T-\lambda_j I \qquad \text{with} \qquad j \in \{1,\ldots,m\} \quad \text{and} \quad j \neq k, \end{equation} to both sides of equation \eqref{eq3}. Since \[ (T-\lambda_j I) {v}_j = 0 \ \ \text{and} \ \ (T-\lambda_j I) {v}_i = (\lambda_i -\lambda_j) v_i \ \text{for} \ i \in \{1,\ldots,m\}, \ i \neq j, \] applying the transformations in \eqref{eq4} to \eqref{eq3} we get \[ \alpha_k \Biggl(\, \prod_{\substack{j=1\\ j\neq k}}^{m} (\lambda_k - \lambda_j) \Biggr) {v}_k = {0}. \] Since ${v}_k$ is a nonzero vector, this implies \begin{equation} \label{eq5} \alpha_k \prod_{\substack{j=1\\ j\neq k}}^{m} (\lambda_k - \lambda_j) = 0. \end{equation} As we assume that the eigenvalues are distinct, we have \[ \prod_{\substack{j=1\\ j\neq k}}^{m} (\lambda_k - \lambda_j) \neq 0. \] Therefore \eqref{eq5} implies $\alpha_k = 0$.
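The general step can also be checked numerically. The sketch below is an illustration outside the proof, again assuming a diagonal matrix $T$ (so that the standard basis vectors are the eigenvectors $v_1,\ldots,v_m$): applying every $T-\lambda_j I$ with $j \neq k$ to a combination of the eigenvectors kills all terms except the $k$-th.

```python
import numpy as np

# Assumed example: m = 5, T diagonal with distinct eigenvalues 1, ..., 5,
# so v_1, ..., v_m are the standard basis vectors of R^5.
m = 5
lam = np.arange(1.0, m + 1)
T = np.diag(lam)
I = np.eye(m)

# An arbitrary combination alpha_1 v_1 + ... + alpha_m v_m.
alpha = np.array([3.0, -1.0, 2.5, 0.5, -4.0])
w = sum(alpha[i] * I[:, i] for i in range(m))

# Apply T - lam_j I for every j != k (here k = 2 in 0-based indexing).
k = 2
result = w.copy()
for j in range(m):
    if j != k:
        result = (T - lam[j] * I) @ result

# Only the k-th term survives, scaled by prod_{j != k} (lam_k - lam_j),
# which is the vector form of equation (eq5).
prod = np.prod([lam[k] - lam[j] for j in range(m) if j != k])
assert np.allclose(result, alpha[k] * prod * I[:, k])
```

Because the eigenvalues are distinct, `prod` is nonzero, so forcing `result` to be the zero vector forces the chosen coefficient to vanish, matching the conclusion $\alpha_k = 0$.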

This completes the proof.