Linear independence of eigenvectors:
a sufficient condition

Branko Ćurgus

Theorem. Let $m, n \in\mathbb{N}$ and let $A$ be an $n\times n$ matrix. If $\mathbf{v}_1, \ldots, \mathbf{v}_m$ are eigenvectors of $A$ which correspond to distinct eigenvalues $\lambda_1,\ldots,\lambda_m$, then the set $\bigl\{ \mathbf{v}_1, \ldots, \mathbf{v}_m \bigr\}$ is linearly independent.
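To make the statement concrete, here is a small illustration (the matrix, eigenvalues, and eigenvectors below are a hypothetical choice added to this write-up, not part of the original statement). Take \begin{equation*} A = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix}, \qquad \lambda_1 = 2, \quad \lambda_2 = 3, \quad \lambda_3 = 5, \end{equation*} \begin{equation*} \mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. \end{equation*} A direct check gives $A\mathbf{v}_1 = 2\mathbf{v}_1$, $A\mathbf{v}_2 = \begin{bmatrix} 3 & 3 & 0 \end{bmatrix}^{\top} = 3\mathbf{v}_2$ and $A\mathbf{v}_3 = 5\mathbf{v}_3$. Since the eigenvalues $2, 3, 5$ are distinct, the theorem asserts that $\bigl\{ \mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3 \bigr\}$ is linearly independent. This example will be revisited after each step of the proof below.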
Proof. In this proof we will assume that $m=3$. We assume:
Assumption 1. The vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are nonzero.
Assumption 2. The eigenvalues $\lambda_1, \lambda_2, \lambda_3$ are distinct; that is, $\lambda_1 \neq \lambda_2$, $\lambda_1 \neq \lambda_3$ and $\lambda_2 \neq \lambda_3$.
Assumption 3. We have $A \mathbf{v}_1 = \lambda_1 \mathbf{v}_1$, $A \mathbf{v}_2 = \lambda_2 \mathbf{v}_2$ and $A \mathbf{v}_3 = \lambda_3 \mathbf{v}_3$.
To prove that $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly independent we have to prove the following implication \[ x_1 \mathbf{v}_1 + x_2 \mathbf{v}_2+ x_3 \mathbf{v}_3 = \mathbf{0} \quad \Rightarrow \quad x_1 = x_2 = x_3 = 0. \] To prove the preceding implication we assume \begin{equation} \label{eq1} x_1 \mathbf{v}_1 + x_2 \mathbf{v}_2+ x_3 \mathbf{v}_3 = \mathbf{0}. \end{equation}
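For the hypothetical example introduced after the theorem, equation \eqref{eq1} reads \begin{equation*} x_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} x_1 + x_2 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}. \end{equation*} In this particular example the coordinates already force $x_1 = x_2 = x_3 = 0$, but the argument below reaches the same conclusion without ever writing out the coordinates of the eigenvectors, which is what makes it work in general.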
Step 1. Apply the matrix $A - \lambda_1 I$ to both sides of \eqref{eq1} to get \begin{equation*} (A - \lambda_1 I)\bigl(x_1 \mathbf{v}_1 + x_2 \mathbf{v}_2+ x_3 \mathbf{v}_3\bigr) = (A - \lambda_1 I)\mathbf{0}. \end{equation*} Next, we remember that this is a linear algebra class, so we use the linearity property of $A - \lambda_1 I$ to get \begin{equation} \label{eq1a} x_1 (A - \lambda_1 I)\mathbf{v}_1 + x_2 (A - \lambda_1 I)\mathbf{v}_2+ x_3 (A - \lambda_1 I)\mathbf{v}_3 = \mathbf{0}. \end{equation} From Assumption 3 it follows that \begin{equation*} (A - \lambda_1 I) \mathbf{v}_1 = \mathbf{0}, \quad (A - \lambda_1 I) \mathbf{v}_2 = (\lambda_2 - \lambda_1) \mathbf{v}_2, \quad (A - \lambda_1 I) \mathbf{v}_3 = (\lambda_3 - \lambda_1) \mathbf{v}_3. \end{equation*} Next we use the preceding three equalities in \eqref{eq1a} to obtain \begin{equation} \label{eq2} x_2 (\lambda_2 - \lambda_1) \mathbf{v}_2+ x_3 (\lambda_3 - \lambda_1) \mathbf{v}_3 = \mathbf{0}. \end{equation} Comment: The beauty of \eqref{eq2} is that the vector $\mathbf{v}_1$ has disappeared. In the next step we will make $\mathbf{v}_2$ disappear.
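The equalities deduced from Assumption 3 above are one-line consequences of linearity; spelled out for completeness, \begin{equation*} (A - \lambda_1 I)\mathbf{v}_2 = A\mathbf{v}_2 - \lambda_1 \mathbf{v}_2 = \lambda_2 \mathbf{v}_2 - \lambda_1 \mathbf{v}_2 = (\lambda_2 - \lambda_1)\mathbf{v}_2, \end{equation*} and the equality for $\mathbf{v}_3$ is obtained in the same way. In the hypothetical example, where $\lambda_1 = 2$, $\lambda_2 = 3$, $\lambda_3 = 5$, equation \eqref{eq2} becomes $x_2 \mathbf{v}_2 + 3 x_3 \mathbf{v}_3 = \mathbf{0}$.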
Step 2. Apply the matrix $A - \lambda_2 I$ to both sides of \eqref{eq2} and use linear algebra to get \begin{equation} \label{eq2a} x_2 (\lambda_2 - \lambda_1) (A - \lambda_2 I) \mathbf{v}_2+ x_3 (\lambda_3 - \lambda_1) (A - \lambda_2 I) \mathbf{v}_3 = \mathbf{0}. \end{equation} From Assumption 3 it follows that \begin{equation*} (A - \lambda_2 I) \mathbf{v}_2 = \mathbf{0}, \qquad (A - \lambda_2 I) \mathbf{v}_3= (\lambda_3 - \lambda_2) \mathbf{v}_3. \end{equation*} Next we use the preceding two equalities in \eqref{eq2a} to obtain \begin{equation} \label{eq3} x_3 (\lambda_3 - \lambda_1) (\lambda_3 - \lambda_2) \mathbf{v}_3 = \mathbf{0}. \end{equation}
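Continuing the hypothetical example: applying $A - 3I$ to $x_2 \mathbf{v}_2 + 3 x_3 \mathbf{v}_3 = \mathbf{0}$ and using $(A - 3I)\mathbf{v}_2 = \mathbf{0}$ and $(A - 3I)\mathbf{v}_3 = 2\mathbf{v}_3$ yields \begin{equation*} 6\, x_3 \mathbf{v}_3 = \mathbf{0}, \end{equation*} which is \eqref{eq3} with $(\lambda_3 - \lambda_1)(\lambda_3 - \lambda_2) = (5-2)(5-3) = 6$.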
Step 3.  By Assumption 2 we have \begin{equation*} (\lambda_3 - \lambda_1) (\lambda_3 - \lambda_2) \neq 0. \end{equation*} Dividing \eqref{eq3} by $(\lambda_3 - \lambda_1) (\lambda_3 - \lambda_2)$ we obtain \begin{equation*} x_3 \mathbf{v}_3 = \mathbf{0}. \end{equation*} Since $\mathbf{v}_3 \neq \mathbf{0},$ we deduce that $x_3 = 0.$
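In the hypothetical example this step reads: $(\lambda_3 - \lambda_1)(\lambda_3 - \lambda_2) = 3 \cdot 2 = 6 \neq 0$, so dividing $6\, x_3 \mathbf{v}_3 = \mathbf{0}$ by $6$ gives $x_3 \mathbf{v}_3 = \mathbf{0}$, and since $\mathbf{v}_3 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^{\top} \neq \mathbf{0}$ we get $x_3 = 0$.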
Step 4.  Substituting $x_3 = 0$ in \eqref{eq2} we get \begin{equation} \label{eq4} x_2 (\lambda_2 - \lambda_1) \mathbf{v}_2 = \mathbf{0}. \end{equation} Since by Assumption 2 we have $(\lambda_2 - \lambda_1) \neq 0,$ it follows from \eqref{eq4} that $x_2 \mathbf{v}_2 = \mathbf{0}.$ Since $\mathbf{v}_2 \neq \mathbf{0},$ we deduce that $x_2 = 0.$ Substituting $x_2 = x_3 = 0$ in \eqref{eq1} we get $x_1 \mathbf{v}_1 = \mathbf{0}.$ Since $\mathbf{v}_1 \neq \mathbf{0},$ we deduce that $x_1 = 0.$ Thus we have proved that \[ x_1 = 0, \qquad x_2 = 0, \qquad x_3 = 0. \] This completes the proof.
QED.
The reasoning presented in the above proof is sometimes called recursive reasoning. We identify a process that produces a desirable result (in this case, eliminating a vector from a homogeneous vector equation) and then repeat that process as many times as needed. Sometimes this process is formally presented under the name Mathematical Induction.
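Here is a sketch of how the same elimination handles a general $m$ by induction (this sketch is an added illustration under the same assumptions as the theorem, not part of the original text). The case $m = 1$ holds since an eigenvector is nonzero. Suppose the claim holds for any $m-1$ eigenvectors corresponding to distinct eigenvalues, and assume \begin{equation*} x_1 \mathbf{v}_1 + x_2 \mathbf{v}_2 + \cdots + x_m \mathbf{v}_m = \mathbf{0}. \end{equation*} Applying $A - \lambda_1 I$ and using $A\mathbf{v}_j = \lambda_j \mathbf{v}_j$ eliminates $\mathbf{v}_1$ and gives \begin{equation*} x_2 (\lambda_2 - \lambda_1) \mathbf{v}_2 + \cdots + x_m (\lambda_m - \lambda_1) \mathbf{v}_m = \mathbf{0}. \end{equation*} By the induction hypothesis the vectors $\mathbf{v}_2, \ldots, \mathbf{v}_m$ are linearly independent, so $x_j (\lambda_j - \lambda_1) = 0$ for $j = 2, \ldots, m$; since the eigenvalues are distinct, $x_2 = \cdots = x_m = 0$. The original equation then reduces to $x_1 \mathbf{v}_1 = \mathbf{0}$, and $\mathbf{v}_1 \neq \mathbf{0}$ gives $x_1 = 0$.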