lie-algebras-and-their-representations

Source code for my notes on representations of semisimple Lie algebras and Olivier Mathieu's classification of simple weight modules

Commit
68b7ca86f5a81411b6e73e698d79dbea106e2950
Parent
283e3378be74c27a4ebe0cd495229ba542d0a14e
Author
Pablo <pablo-escobar@riseup.net>
Date

Renamed a file

Diffstats

3 files changed, 2446 insertions, 2446 deletions

Status Name Changes Insertions Deletions
Added sections/complete-reducibility.tex 1 file changed 2445 0
Deleted sections/semisimple-algebras.tex 1 file changed 0 2445
Modified tcc.tex 2 files changed 1 1
diff --git /dev/null b/sections/complete-reducibility.tex
@@ -0,0 +1,2445 @@
+\chapter{Semisimplicity \& Complete Reducibility}
+
+% TODO: Remove this?
+\epigraph{Nobody has ever bet enough on a winning horse.}{Some gambler}
+
+% TODOOO: Point out we are now working with finite-dimensional Lie algebras
+% over an algebraically closed field of characteristic zero
+
+% TODO: Update the 40 pages thing when we're done
+% TODO: Have we seen the fact representations are useful?
+Having hopefully established in the previous chapter that Lie algebras are
+indeed useful, we are now faced with the Herculean task of trying to
+understand them. We have seen that representations are a remarkably effective
+way to derive information about groups -- and therefore algebras -- but the
+question remains: how do we go about classifying the representations of a given
+Lie algebra? This is a question that has sparked an entire field of research,
+and we cannot hope to provide a comprehensive answer in the 40 pages we have left.
+Nevertheless, we can work on particular cases.
+
+Like any sane mathematician would do, we begin by studying a simpler case,
+which is that of \emph{semisimple} Lie algebras. The first question we
+have is thus: why are semisimple algebras simpler -- or perhaps
+\emph{semisimpler} -- to understand than any old Lie algebra? Well, the special
+thing about semisimple algebras is that the relationship between their
+indecomposable representations and their irreducible representations is much
+clearer -- at least in finite dimension. Namely\dots
+
+\begin{proposition}\label{thm:complete-reducibility-equiv}
+  Given a finite-dimensional Lie algebra \(\mathfrak{g}\) over \(K\), the
+  following conditions are equivalent.
+  \begin{enumerate}
+    \item \(\mathfrak{g}\) is semisimple.
+
+    \item Given a finite-dimensional representation \(V\) of \(\mathfrak{g}\)
+      and a subrepresentation \(W \subset V\), \(W\) has a
+      \(\mathfrak{g}\)-invariant complement in \(V\).
+
+    \item Every exact sequence of finite-dimensional representations of
+      \(\mathfrak{g}\) splits.
+
+    \item Every finite-dimensional indecomposable representation of
+      \(\mathfrak{g}\) is irreducible.
+
+    \item Every finite-dimensional representation of \(\mathfrak{g}\) can be
+      uniquely decomposed as a direct sum of irreducible representations.
+  \end{enumerate}
+\end{proposition}
+
+Condition \textbf{(ii)} is known as \emph{complete reducibility}. The
+equivalence of conditions \textbf{(ii)} to \textbf{(iv)} follows at once
+from simple arguments. Furthermore, the equivalence between \textbf{(ii)} and
+\textbf{(v)} is a direct consequence of the Krull-Schmidt theorem. On the other
+hand, the equivalence between \textbf{(i)} and the other items is more subtle.
+We are particularly interested in the proof that \textbf{(i)} implies
+\textbf{(ii)}. In other words, we are interested in the fact that every
+finite-dimensional representation of a semisimple Lie algebra is
+\emph{completely reducible}.
+
+This is because if every finite-dimensional representation of \(\mathfrak{g}\)
+is completely reducible, the equivalence between \textbf{(ii)} and \textbf{(v)}
+implies that a classification of the finite-dimensional irreducible
+representations of \(\mathfrak{g}\) leads to a classification of \emph{all}
+finite-dimensional representations of \(\mathfrak{g}\) -- it suffices to take
+direct sums of the
+already classified irreducible modules. This leads us to the third restriction
+we will impose: for now, we will focus our attention exclusively on
+finite-dimensional representations.
+
+Another interesting characterization of semisimple Lie algebras, which will
+come in handy later on, is the following.
+
+% TODO: Define the Killing form beforehand
+% TODO: Define invariant forms beforehand
+\begin{proposition}
+  Let \(\mathfrak{g}\) be a Lie algebra. The following statements are
+  equivalent.
+  \begin{enumerate}
+    \item \(\mathfrak{g}\) is semisimple.
+    \item For each finite-dimensional representation \(V\) of \(\mathfrak{g}\),
+      the \(\mathfrak{g}\)-invariant bilinear form
+      \begin{align*}
+        B_V : \mathfrak{g} \times \mathfrak{g} & \to K \\
+        (X, Y) &
+        \mapsto \operatorname{Tr}(X\!\restriction_V \circ Y\!\restriction_V)
+      \end{align*}
+      is non-degenerate\footnote{A symmetric bilinear form $B : \mathfrak{g}
+      \times \mathfrak{g} \to K$ is called non-degenerate if $B(X, Y) = 0$ for
+      all $Y \in \mathfrak{g}$ implies $X = 0$.}.
+    \item The Killing form \(B\) is non-degenerate.
+  \end{enumerate}
+\end{proposition}
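+
+To get a feel for this criterion, consider \(\mathfrak{g} = \mathfrak{sl}_2(K)\)
+acting on its natural representation \(V = K^2\), so that \(B_V(X, Y) =
+\operatorname{Tr}(X Y)\). In the usual basis
+\begin{align*}
+  e & = \begin{pmatrix} 0 & 1 \\ 0 &  0 \end{pmatrix} &
+  f & = \begin{pmatrix} 0 & 0 \\ 1 &  0 \end{pmatrix} &
+  h & = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
+\end{align*}
+a direct computation gives \(B_V(e, f) = 1\), \(B_V(h, h) = 2\) and \(B_V = 0\)
+on every other pair of basis elements, so the matrix of \(B_V\) in this basis
+is invertible and \(B_V\) is indeed non-degenerate -- in accordance with the
+semisimplicity of \(\mathfrak{sl}_2(K)\).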
+
+We refer the reader to \cite[ch. 5]{humphreys} for a proof of this last
+result. Without further ado, we may proceed to a proof of\dots
+
+\section{Complete Reducibility}
+
+Historically, complete reducibility was first proved by Hermann Weyl for \(K =
+\mathbb{C}\), using his knowledge of smooth representations of compact Lie
+groups. Namely, Weyl showed that any finite-dimensional semisimple complex Lie
+algebra is (isomorphic to) the complexification of the Lie algebra of a unique
+simply connected compact Lie group, known as its \emph{compact form}. Hence the
+category of the finite-dimensional representations of a given complex
+semisimple algebra is equivalent to that of the finite-dimensional smooth
+representations of its compact form, whose representations are known to be
+completely reducible -- see \cite[ch. 3]{serganova} for instance.
+
+This proof, however, is heavily reliant on the geometric structure of
+\(\mathbb{C}\). In other words, there is no hope of generalizing this to an
+arbitrary \(K\). Fortunately for us, there is a much simpler, completely
+algebraic proof of complete reducibility, which works for algebras over any
+algebraically closed field of characteristic zero. The algebraic proof included
+here is mainly based on that of \cite[ch. 6]{kirillov}, and uses some basic
+homological algebra. Admittedly, much of the homological algebra used here
+could be concealed from the reader, which would make the exposition more
+accessible -- see \cite{humphreys} for an elementary account, for instance.
+
+However, this does not change the fact that the arguments used in this proof
+are essentially homological in nature. Hence we consider it more productive to
+use the full force of the language of homological algebra, instead of burying
+the reader in a pile of unmotivated, yet entirely elementary arguments.
+Furthermore, the homological algebra used here is actually \emph{very
+basic}. In fact, all we need to know is\dots
+
+\begin{theorem}\label{thm:ext-exacts-seqs}
+  There is a sequence of bifunctors \(\operatorname{Ext}^i :
+  \mathfrak{g}\text{-}\mathbf{Mod} \times \mathfrak{g}\text{-}\mathbf{Mod} \to
+  K\text{-}\mathbf{Vect}\), \(i \ge 0\) such that every exact
+  sequence of \(\mathfrak{g}\)-modules
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} & W \arrow{r}{i} & V \arrow{r}{\pi} & U \arrow{r} & 0
+    \end{tikzcd}
+  \end{center}
+  induces long exact sequences
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow[r] &
+      \operatorname{Hom}_{\mathfrak{g}}(S, W)
+      \arrow[r, "i \circ -"', swap]\ar[draw=none]{d}[name=X, anchor=center]{} &
+      \operatorname{Hom}_{\mathfrak{g}}(S, V) \arrow[r, "\pi \circ -"', swap] &
+      \operatorname{Hom}_{\mathfrak{g}}(S, U)
+      \ar[rounded corners,
+                to path={ -- ([xshift=2ex]\tikztostart.east)
+                          |- (X.center) \tikztonodes
+                          -| ([xshift=-2ex]\tikztotarget.west)
+                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
+      \operatorname{Ext}^1(S, W)
+      \arrow[r]\ar[draw=none]{d}[name=Y, anchor=center]{} &
+      \operatorname{Ext}^1(S, V) \arrow[r] &
+      \operatorname{Ext}^1(S, U)
+      \ar[rounded corners,
+                to path={ -- ([xshift=2ex]\tikztostart.east)
+                          |- (Y.center) \tikztonodes
+                          -| ([xshift=-2ex]\tikztotarget.west)
+                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
+      \operatorname{Ext}^2(S, W) \arrow[r] &
+      \operatorname{Ext}^2(S, V) \arrow[r] &
+      \operatorname{Ext}^2(S, U) \arrow[r, dashed] &
+      \cdots
+    \end{tikzcd}
+  \end{center}
+  and
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow[r] &
+      \operatorname{Hom}_{\mathfrak{g}}(U, S)
+      \arrow[r, "- \circ \pi"', swap]\ar[draw=none]{d}[name=X, anchor=center]{} &
+      \operatorname{Hom}_{\mathfrak{g}}(V, S) \arrow[r, "- \circ i"', swap] &
+      \operatorname{Hom}_{\mathfrak{g}}(W, S)
+      \ar[rounded corners,
+                to path={ -- ([xshift=2ex]\tikztostart.east)
+                          |- (X.center) \tikztonodes
+                          -| ([xshift=-2ex]\tikztotarget.west)
+                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
+      \operatorname{Ext}^1(U, S)
+      \arrow[r]\ar[draw=none]{d}[name=Y, anchor=center]{} &
+      \operatorname{Ext}^1(V, S) \arrow[r] &
+      \operatorname{Ext}^1(W, S)
+      \ar[rounded corners,
+                to path={ -- ([xshift=2ex]\tikztostart.east)
+                          |- (Y.center) \tikztonodes
+                          -| ([xshift=-2ex]\tikztotarget.west)
+                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
+      \operatorname{Ext}^2(U, S) \arrow[r] &
+      \operatorname{Ext}^2(V, S) \arrow[r] &
+      \operatorname{Ext}^2(W, S) \arrow[r, dashed] &
+      \cdots
+    \end{tikzcd}
+  \end{center}
+\end{theorem}
+
+\begin{theorem}\label{thm:ext-1-classify-short-seqs}
+  Given \(\mathfrak{g}\)-modules \(W\) and \(U\), there is a one-to-one
+  correspondence between elements of \(\operatorname{Ext}^1(W, U)\) and
+  isomorphism classes of short exact sequences
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} & W \arrow{r} & V \arrow{r} & U \arrow{r} & 0
+    \end{tikzcd}
+  \end{center}
+
+  In particular, \(\operatorname{Ext}^1(W, U) = 0\) if, and only if every short
+  exact sequence of \(\mathfrak{g}\)-modules with \(W\) and \(U\) in the
+  extremes splits.
+\end{theorem}
+
+\begin{note}
+  This is, of course, \emph{far} from a comprehensive account of homological
+  algebra. Nevertheless, this is all we need. We refer the reader to
+  \cite{harder} for a complete exposition, or to part II of \cite{ribeiro} for
+  a more modern account using derived categories.
+\end{note}
+
+We are particularly interested in the case where \(S = K\) is the trivial
+representation of \(\mathfrak{g}\). Namely, we may define\dots
+
+\begin{definition}
+  Given a \(\mathfrak{g}\)-module \(V\), we refer to the Abelian group
+  \(H^i(\mathfrak{g}, V) = \operatorname{Ext}^i(K, V)\) as \emph{the \(i\)-th
+  Lie algebra cohomology group of \(V\)}.
+\end{definition}
+
+Given a \(\mathfrak{g}\)-module \(V\), we call the vector space
+\(V^{\mathfrak{g}} = \{v \in V : X v = 0 \; \forall X \in \mathfrak{g}\}\)
+\emph{the space of invariants of \(V\)}. The Lie algebra cohomology groups are
+very much related to invariants of representations. Namely, the canonical
+isomorphism of functors
+\(\operatorname{Hom}_{\mathfrak{g}}(K, -) \isoto {-}^{\mathfrak{g}}\) given by
+\begin{align*}
+  \operatorname{Hom}_{\mathfrak{g}}(K, V) & \isoto V^{\mathfrak{g}} \\
+                                        T & \mapsto T(1)
+\end{align*}
+implies\dots
+
+\begin{corollary}
+  Every short exact sequence of \(\mathfrak{g}\)-modules
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} & W \arrow{r}{i} & V \arrow{r}{\pi} & U \arrow{r} & 0
+    \end{tikzcd}
+  \end{center}
+  induces a long exact sequence
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow[r] &
+      W^{\mathfrak{g}} \arrow[r, "i"', swap]\ar[draw=none]{d}[name=X, anchor=center]{} &
+      V^{\mathfrak{g}} \arrow[r, "\pi"', swap] &
+      U^{\mathfrak{g}}
+      \ar[rounded corners,
+                to path={ -- ([xshift=2ex]\tikztostart.east)
+                          |- (X.center) \tikztonodes
+                          -| ([xshift=-2ex]\tikztotarget.west)
+                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
+      H^1(\mathfrak{g}, W) \arrow[r]\ar[draw=none]{d}[name=Y, anchor=center]{} &
+      H^1(\mathfrak{g}, V) \arrow[r] &
+      H^1(\mathfrak{g}, U)
+      \ar[rounded corners,
+                to path={ -- ([xshift=2ex]\tikztostart.east)
+                          |- (Y.center) \tikztonodes
+                          -| ([xshift=-2ex]\tikztotarget.west)
+                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
+      H^2(\mathfrak{g}, W) \arrow[r] &
+      H^2(\mathfrak{g}, V) \arrow[r] &
+      H^2(\mathfrak{g}, U) \arrow[r, dashed] &
+      \cdots
+    \end{tikzcd}
+  \end{center}
+\end{corollary}
+
+\begin{proof}
+  We have an isomorphism of sequences
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} &
+      \operatorname{Hom}_{\mathfrak{g}}(K, W)
+        \arrow{r}{i \circ -} \arrow{d} &
+      \operatorname{Hom}_{\mathfrak{g}}(K, V)
+        \arrow{r}{\pi \circ -} \arrow{d} &
+      \operatorname{Hom}_{\mathfrak{g}}(K, U) \arrow{r} \arrow{d} &
+      H^1(\mathfrak{g}, W) \arrow[dashed]{r} \arrow[Rightarrow, no head]{d} &
+      \cdots \\
+      0 \arrow{r} &
+      W^{\mathfrak{g}} \arrow[swap]{r}{i} &
+      V^{\mathfrak{g}} \arrow[swap]{r}{\pi} &
+      U^{\mathfrak{g}} \arrow{r} &
+      H^1(\mathfrak{g}, W) \arrow[dashed]{r} &
+      \cdots
+    \end{tikzcd}
+  \end{center}
+
+  By theorem~\ref{thm:ext-exacts-seqs} the sequence on the top is exact. Hence
+  so is the sequence on the bottom.
+\end{proof}
+
+This is all well and good, but what does any of this have to do with complete
+reducibility? Well, in general cohomology theories really shine when one is
+trying to control obstructions of some kind. In our case, the bifunctor
+\(H^1(\mathfrak{g}, \operatorname{Hom}(-, -)) :
+\mathfrak{g}\text{-}\mathbf{Mod} \times \mathfrak{g}\text{-}\mathbf{Mod} \to
+\mathbf{Ab}\) classifies obstructions to complete reducibility.
+Explicitly\dots
+
+\begin{theorem}
+  Given \(\mathfrak{g}\)-modules \(W\) and \(U\), there is a one-to-one
+  correspondence between elements of \(H^1(\mathfrak{g}, \operatorname{Hom}(W,
+  U))\) and isomorphism classes of short exact sequences
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} & W \arrow{r} & V \arrow{r} & U \arrow{r} & 0
+    \end{tikzcd}
+  \end{center}
+\end{theorem}
+
+For the readers already familiar with homological algebra: this correspondence
+can be computed very concretely by considering a canonical acyclic resolution
+\begin{center}
+  \begin{tikzcd}
+    \cdots \arrow[dashed]{r} &
+    \wedge^3 \mathfrak{g} \rar &
+    \wedge^2 \mathfrak{g} \rar &
+    \mathfrak{g} \rar &
+    K \rar &
+    0
+  \end{tikzcd}
+\end{center}
+of the trivial representation \(K\), which provides an explicit construction of
+the cohomology groups -- see \cite[sec.~9]{lie-groups-serganova-student} or
+\cite[sec.~24]{symplectic-physics} for further details. We will use the
+previous result implicitly in our proof, but we will not prove it in its full
+force. Namely, we will show that \(H^1(\mathfrak{g}, V) = 0\) for all
+finite-dimensional \(V\), and that the fact that \(H^1(\mathfrak{g},
+\operatorname{Hom}(W, U)) = 0\) for all finite-dimensional \(W\) and \(U\)
+implies complete reducibility. To that end, we introduce a distinguished
+element of \(\mathcal{U}(\mathfrak{g})\), known as \emph{the Casimir element of
+a representation}.
+
+\begin{definition}\label{def:casimir-element}
+  Let \(V\) be a finite-dimensional representation of \(\mathfrak{g}\).
+  Let \(\{X_i\}_i\) be a basis for \(\mathfrak{g}\) and denote by \(\{X^i\}_i\)
+  its dual basis with respect to the form \(B_V\) -- i.e. the unique basis for
+  \(\mathfrak{g}\) satisfying \(B_V(X_i, X^j) = \delta_{i j}\). We call
+  \[
+    C_V = X_1 X^1 + \cdots + X_n X^n \in \mathcal{U}(\mathfrak{g})
+  \]
+  the \emph{Casimir element of \(V\)}.
+\end{definition}
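+
+For example, take \(\mathfrak{g} = \mathfrak{sl}_2(K)\) and let \(V = K^2\) be
+its natural representation, so that \(B_V(X, Y) = \operatorname{Tr}(X Y)\). One
+readily checks that \(B_V(e, f) = 1\), \(B_V(h, h) = 2\) and that all other
+pairings of the basis elements \(e\), \(f\) and \(h\) vanish, so the dual basis
+of \(\{e, f, h\}\) with respect to \(B_V\) is \(\{f, e, \sfrac{h}{2}\}\) and
+\[
+  C_V = e f + f e + \frac{h^2}{2} \in \mathcal{U}(\mathfrak{sl}_2(K))
+\]
+Acting on \(K^2\) we find \(e f + f e = \operatorname{Id}\) and \(h^2 =
+\operatorname{Id}\), so \(C_V\) acts on the natural representation as
+\(\sfrac{3}{2} \operatorname{Id}\) -- and, as we are about to see, it is no
+accident that \(\sfrac{3}{2} = \sfrac{\dim \mathfrak{sl}_2(K)}{\dim K^2}\).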
+
+\begin{lemma}
+  The definition of \(C_V\) is independent of the choice of basis
+  \(\{X_i\}_i\).
+\end{lemma}
+
+\begin{proof}
+  Whatever basis \(\{X_i\}_i\) we choose, the image of \(C_V\) under the
+  canonical isomorphism \(\mathfrak{g} \otimes \mathfrak{g} \isoto \mathfrak{g}
+  \otimes \mathfrak{g}^* \isoto \operatorname{End}(\mathfrak{g})\) is the
+  identity operator\footnote{Here the isomorphism $\mathfrak{g} \otimes
+  \mathfrak{g} \isoto \mathfrak{g} \otimes \mathfrak{g}^*$ is given by
+  tensoring the identity $\mathfrak{g} \to \mathfrak{g}$ with the isomorphism
+  $\mathfrak{g} \isoto \mathfrak{g}^*$ induced by the form $B_V$.}.
+\end{proof}
+
+\begin{proposition}
+  The Casimir element \(C_V \in \mathcal{U}(\mathfrak{g})\) is central, so that
+  \(C_V : W \to W\) is an intertwining operator for any \(\mathfrak{g}\)-module
+  \(W\). Furthermore, \(C_V\) acts in \(V\) as a non-zero scalar operator
+  whenever \(V\) is a non-trivial finite-dimensional irreducible representation
+  of \(\mathfrak{g}\).
+\end{proposition}
+
+\begin{proof}
+  To see that \(C_V\) is central fix a basis \(\{X_i\}_i\) for \(\mathfrak{g}\)
+  and denote by \(\{X^i\}_i\) its dual basis as in
+  definition~\ref{def:casimir-element}. Let \(X \in \mathfrak{g}\) and denote
+  by \(\lambda_{i j}, \mu_{i j} \in K\) the coefficients of \(X_j\) and \(X^j\)
+  in \([X, X_i]\) and \([X, X^i]\), respectively.
+
+  % TODO: Comment on the invariance of the Killing form beforehand
+  The invariance of \(B_V\) implies
+  \[
+    \lambda_{i k}
+    = B_V([X, X_i], X^k)
+    = B_V(-[X_i, X], X^k)
+    = B_V(X_i, -[X, X^k])
+    = - \mu_{k i}
+  \]
+
+  Hence
+  \[
+    \begin{split}
+      [X, C_V]
+      & = \sum_i [X, X_i X^i] \\
+      & = \sum_i [X, X_i] X^i + \sum_i X_i [X, X^i] \\
+      & = \sum_{i j} \lambda_{i j} X_j X^i + \sum_{i j} \mu_{i j} X_i X^j \\
+      & = 0
+    \end{split},
+  \]
+  and \(C_V\) is central. This implies that \(C_V : W \to W\) is an intertwiner
+  for all representations \(W\) of \(\mathfrak{g}\): its action commutes with
+  the action of any other element of \(\mathfrak{g}\).
+
+  In particular, it follows from Schur's lemma that if \(V\) is
+  finite-dimensional and irreducible then \(C_V\) acts in \(V\) as a scalar
+  operator. To see that this scalar is nonzero we compute
+  \[
+    \operatorname{Tr}(C_V\!\restriction_V)
+    = \operatorname{Tr}(X_1\!\restriction_V X^1\!\restriction_V)
+    + \cdots
+    + \operatorname{Tr}(X_n\!\restriction_V X^n\!\restriction_V)
+    = \dim \mathfrak{g},
+  \]
+  so that \(C_V\!\restriction_V = \lambda \operatorname{Id}\) for \(\lambda =
+  \frac{\dim \mathfrak{g}}{\dim V} \ne 0\).
+\end{proof}
+
+As promised, the Casimir element of a representation can be used to
+establish\dots
+
+\begin{proposition}\label{thm:first-cohomology-vanishes}
+  Let \(V\) be a finite-dimensional representation of \(\mathfrak{g}\). Then
+  \(H^1(\mathfrak{g}, V) = 0\).
+\end{proposition}
+
+\begin{proof}
+  We begin with the case where \(V\) is irreducible. Due to
+  theorem~\ref{thm:ext-1-classify-short-seqs}, it suffices to show that any
+  exact sequence of the form
+  \begin{equation}\label{eq:exact-seq-h1-vanishes}
+    \begin{tikzcd}
+      0 \arrow{r} & K \arrow{r} & W \arrow{r}{\pi} & V \arrow{r} & 0
+    \end{tikzcd}
+  \end{equation}
+  splits.
+
+   If \(V = K\) is the trivial representation then the exactness of
+  \begin{equation}\label{eq:trivial-extrems-exact-seq}
+    \begin{tikzcd}
+      0 \arrow{r} & K \arrow{r} & W \arrow{r}{\pi} & K \arrow{r} & 0
+    \end{tikzcd}
+  \end{equation}
+  implies \(W\) is 2-dimensional. Take any non-zero \(w \in W\) outside of the
+  image of the inclusion \(K \to W\).
+
+  Since the image of the inclusion \(K \to W\) and the quotient \(\sfrac{W}{K}
+  \cong K\) are both trivial representations, each \(X \in \mathfrak{g}\)
+  annihilates the image of \(K \to W\) and carries \(W\) into it. The
+  composition of any two such operators is thus zero, so that the image of
+  \(\mathfrak{g}\) in \(\operatorname{End}(W)\) is an Abelian Lie subalgebra.
+  But \(\mathfrak{g}\) is semisimple, so that \(\mathfrak{g} = [\mathfrak{g},
+  \mathfrak{g}]\) and this image must vanish. Hence \(\mathfrak{g}\) acts
+  trivially on \(W\) and, in particular, \(X w = 0\) for all \(X \in
+  \mathfrak{g}\). Since \(w\) lies
+  outside the image of the inclusion \(K \to W\), \(\pi(w) \ne 0\) -- which is
+  to say, \(w \notin \ker \pi\). This implies the map \(K \to W\) that takes
+  \(1\) to \(\sfrac{w}{\pi(w)}\) is a splitting of
+  (\ref{eq:trivial-extrems-exact-seq}).
+
+  Now suppose that \(V\) is non-trivial, so that \(C_V\) acts on \(V\) as
+  \(\lambda \operatorname{Id}\) for some \(\lambda \ne 0\). Given an eigenvalue
+  \(\mu \in K\) of the action of \(C_V\) in \(W\), denote by \(W^\mu\) its
+  associated generalized eigenspace. We claim \(W^0\) is the image of the
+  inclusion \(K \to W\). Since \(C_V\) acts as zero in \(K\), this image is
+  clearly contained in \(W^0\). On the other hand, if \(w \in W\) is such that
+  \(C_V^n w = 0\) then
+  \[
+    \lambda^n \pi(w)
+    = C_V^n \pi(w)
+    = \pi(C_V^n w)
+    = 0,
+  \]
+  so that \(w \in \ker \pi\) -- because \(\lambda^n \ne 0\). The exactness of
+  (\ref{eq:exact-seq-h1-vanishes}) then implies the desired conclusion.
+
+  We furthermore claim that the only eigenvalues of \(C_V\) in \(W\) are \(0\)
+  and \(\lambda\). Indeed, if \(\mu \ne 0\) is an eigenvalue and \(w\) is an
+  associated eigenvector, then
+  \[
+    \mu \pi(w) = \pi(C_V w) = C_V \pi(w) = \lambda \pi(w)
+  \]
+
+  Since \(w \notin W^0\), \(\pi(w) \ne 0\) and therefore \(\mu = \lambda\).
+  Hence \(W = W^0 \oplus W^\lambda\) as a vector space. The fact that \(C_V\)
+  is central implies \((C_V - \lambda \operatorname{Id})^n X u = X (C_V -
+  \lambda \operatorname{Id})^n u\) for all \(u \in W\), \(X \in \mathfrak{g}\)
+  and \(n
+  > 0\). In particular, \(W^\lambda\) is stable under the action of
+  \(\mathfrak{g}\) -- i.e. \(W^\lambda\) is a subrepresentation. Since \(W^0\)
+  is precisely the kernel of \(\pi\), we have an isomorphism of representations
+  \(W^\lambda \cong \sfrac{W}{W^0} \isoto V\), which induces a splitting \(W
+  \cong K \oplus V\).
+
+  Finally, we consider the case where \(V\) is not irreducible. Suppose
+  \(H^1(\mathfrak{g}, W) = 0\) for all \(\mathfrak{g}\)-modules \(W\) with \(\dim W <
+  \dim V\) and let \(W \subset V\) be a proper non-zero subrepresentation. Then
+  the exact sequence
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} & W \arrow{r} & V \arrow{r} & \sfrac{V}{W} \arrow{r} & 0
+    \end{tikzcd}
+  \end{center}
+  induces a long exact sequence of the form
+  \begin{center}
+    \begin{tikzcd}
+      \cdots \arrow[dashed]{r} &
+      H^1(\mathfrak{g}, W) \arrow{r} &
+      H^1(\mathfrak{g}, V) \arrow{r} &
+      H^1(\mathfrak{g}, \sfrac{V}{W}) \arrow[dashed]{r} &
+      \cdots
+    \end{tikzcd}
+  \end{center}
+
+  Since \(0 < \dim W, \dim \sfrac{V}{W} < \dim V\) it follows
+  \(H^1(\mathfrak{g}, W) = H^1(\mathfrak{g}, \sfrac{V}{W}) = 0\). The exactness
+  of
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} &
+      H^1(\mathfrak{g}, V) \arrow{r} &
+      0
+    \end{tikzcd}
+  \end{center}
+  then implies \(H^1(\mathfrak{g}, V) = 0\). Hence by induction on \(\dim V\)
+  we find \(H^1(\mathfrak{g}, V) = 0\) for all finite-dimensional \(V\). We are
+  done.
+\end{proof}
+
+We are now finally ready to prove\dots
+
+\begin{theorem}
+  Every finite-dimensional representation of a semisimple Lie algebra is
+  completely reducible.
+\end{theorem}
+
+\begin{proof}
+  Let
+  \begin{equation}\label{eq:generict-exact-sequence}
+    \begin{tikzcd}
+      0 \arrow{r} & W \arrow{r} & V \arrow{r}{\pi} & U \arrow{r} & 0
+    \end{tikzcd}
+  \end{equation}
+  be a short exact sequence of finite-dimensional representations of
+  \(\mathfrak{g}\). We want to establish that
+  (\ref{eq:generict-exact-sequence}) splits.
+
+  We have an exact sequence
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} &
+      \operatorname{Hom}(U, W) \arrow{r} &
+      \operatorname{Hom}(U, V) \arrow{r}{\pi \circ -} &
+      \operatorname{Hom}(U, U) \arrow{r} & 0
+    \end{tikzcd}
+  \end{center}
+  of vector spaces. Since all maps involved are intertwiners, this is an exact
+  sequence of \(\mathfrak{g}\)-modules. This then induces a long exact sequence
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow[r] &
+      \operatorname{Hom}(U, W)^{\mathfrak{g}} \arrow[r]\ar[draw=none]{d}[name=X, anchor=center]{} &
+      \operatorname{Hom}(U, V)^{\mathfrak{g}} \arrow[r, "\pi \circ -"', swap] &
+      \operatorname{Hom}(U, U)^{\mathfrak{g}}
+      \ar[rounded corners,
+                to path={ -- ([xshift=2ex]\tikztostart.east)
+                          |- (X.center) \tikztonodes
+                          -| ([xshift=-2ex]\tikztotarget.west)
+                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
+      H^1(\mathfrak{g}, \operatorname{Hom}(U, W)) \arrow[r] &
+      H^1(\mathfrak{g}, \operatorname{Hom}(U, V)) \arrow[r] &
+      H^1(\mathfrak{g}, \operatorname{Hom}(U, U)) \arrow[r, dashed] &
+      \cdots
+    \end{tikzcd}
+  \end{center}
+  of vector spaces. But \(H^1(\mathfrak{g}, \operatorname{Hom}(U, W))\)
+  vanishes because of proposition~\ref{thm:first-cohomology-vanishes}. Hence we
+  have an exact sequence
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} &
+      \operatorname{Hom}(U, W)^{\mathfrak{g}} \arrow{r} &
+      \operatorname{Hom}(U, V)^{\mathfrak{g}} \arrow{r}{\pi \circ -} &
+      \operatorname{Hom}(U, U)^{\mathfrak{g}} \arrow{r} &
+      0
+    \end{tikzcd}
+  \end{center}
+
+  Now notice \(\operatorname{Hom}(U, -)^{\mathfrak{g}} =
+  \operatorname{Hom}_{\mathfrak{g}}(U, -)\). Indeed, given a
+  \(\mathfrak{g}\)-module \(S\) and a \(K\)-linear map \(T : U \to S\)
+  \[
+    \begin{split}
+      T \in \operatorname{Hom}(U, S)^{\mathfrak{g}}
+      & \iff X T - T X = 0 \quad \forall X \in \mathfrak{g} \\
+      & \iff X T = T X \quad \forall X \in \mathfrak{g} \\
+      & \iff T \in \operatorname{Hom}_{\mathfrak{g}}(U, S)
+    \end{split}
+  \]
+
+  We thus have a short exact sequence
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} &
+      \operatorname{Hom}_{\mathfrak{g}}(U, W) \arrow{r} &
+      \operatorname{Hom}_{\mathfrak{g}}(U, V) \arrow{r}{\pi \circ -} &
+      \operatorname{Hom}_{\mathfrak{g}}(U, U) \arrow{r} &
+      0
+    \end{tikzcd}
+  \end{center}
+
+  In particular, there is some intertwiner \(T : U \to V\) such that \(\pi
+  \circ T : U \to U\) is the identity operator. In other words
+  \begin{center}
+    \begin{tikzcd}
+      0 \arrow{r} &
+      W \arrow{r} &
+      V \arrow{r}{\pi} &
+      U \arrow{r} \arrow[bend left]{l}{T} &
+      0
+    \end{tikzcd}
+  \end{center}
+  is a splitting of (\ref{eq:generict-exact-sequence}).
+\end{proof}
+
+We should point out that these last results are just the beginning of a
+well-developed cohomology theory. For example, a similar argument involving the
+Casimir elements can be used to show that \(H^i(\mathfrak{g}, V) = 0\) for all
+non-trivial finite-dimensional irreducible \(V\), \(i > 0\). For \(K =
+\mathbb{C}\), the Lie algebra cohomology groups of an algebra \(\mathfrak{g} =
+\mathbb{C} \otimes \operatorname{Lie}(G)\) are intimately related to the
+topological cohomologies -- i.e. singular cohomology, de Rham cohomology, etc.
+-- of \(G\) with coefficients in \(\mathbb{C}\). We refer the reader to
+\cite{cohomologies-lie} and \cite[sec.~24]{symplectic-physics} for further
+details.
+
+Complete reducibility can be generalized, to a certain extent, to arbitrary --
+not necessarily semisimple -- \(\mathfrak{g}\) by considering the exact
+sequence
+\begin{center}
+  \begin{tikzcd}
+    0 \arrow{r} &
+    \mathfrak{rad}(\mathfrak{g}) \arrow{r} &
+    \mathfrak{g} \arrow{r} &
+    \mfrac{\mathfrak{g}}{\mathfrak{rad}(\mathfrak{g})} \arrow{r} &
+    0
+  \end{tikzcd}
+\end{center}
+
+This sequence always splits, which implies we can deduce information about the
+representations of \(\mathfrak{g}\) by studying those of its ``semisimple
+part'' \(\mfrac{\mathfrak{g}}{\mathfrak{rad}(\mathfrak{g})}\) -- see
+proposition~\ref{thm:quotients-by-rads}. In practice this translates to\dots
+
+\begin{theorem}\label{thm:semi-simple-part-decomposition}
+  Every irreducible representation of \(\mathfrak{g}\) is the tensor product of
+  an irreducible representation of its semisimple part
+  \(\mfrac{\mathfrak{g}}{\mathfrak{rad}(\mathfrak{g})}\) and a
+  one-dimensional representation of \(\mathfrak{g}\).
+\end{theorem}
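+
+For example, the radical of \(\mathfrak{gl}_2(K)\) is the subalgebra \(K
+\operatorname{Id}\) of scalar matrices, and its semisimple part is
+\(\mfrac{\mathfrak{gl}_2(K)}{K \operatorname{Id}} \cong \mathfrak{sl}_2(K)\).
+Theorem~\ref{thm:semi-simple-part-decomposition} then tells us that every
+finite-dimensional irreducible representation of \(\mathfrak{gl}_2(K)\) has the
+form \(W \otimes K_\lambda\), where \(W\) is an irreducible representation of
+\(\mathfrak{sl}_2(K)\) and \(K_\lambda\) denotes the one-dimensional
+representation of \(\mathfrak{gl}_2(K)\) in which \(X\) acts as the scalar
+\(\lambda \operatorname{Tr}(X)\), \(\lambda \in K\) -- these being precisely
+the one-dimensional representations of \(\mathfrak{gl}_2(K)\), since any such
+representation must annihilate \([\mathfrak{gl}_2(K), \mathfrak{gl}_2(K)] =
+\mathfrak{sl}_2(K)\).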
+
+Having achieved our goal of proving complete reducibility, we can now afford
+the luxury of concerning ourselves exclusively with irreducible
+representations. Still, our efforts towards a classification of the
+finite-dimensional representations of semisimple Lie algebras are far from
+over. In particular, there is so far no indication of how we could go about
+understanding the irreducible \(\mathfrak{g}\)-modules. Once more, we begin by
+investigating a simple case: that of \(\mathfrak{sl}_2(K)\).
+
+\section{Representations of \(\mathfrak{sl}_2(K)\)}\label{sec:sl2}
+
+The primary goal of this section is proving\dots
+
+\begin{theorem}\label{thm:sl2-exist-unique}
+  For each \(n > 0\), there exists precisely one irreducible representation
+  \(V\) of \(\mathfrak{sl}_2(K)\) with \(\dim V = n\).
+\end{theorem}
+
+The general approach we'll take is supposing \(V\) is an irreducible
+representation of \(\mathfrak{sl}_2(K)\) and then deriving some information about
+its structure. We begin our analysis by recalling that the elements
+\begin{align*}
+  e & = \begin{pmatrix} 0 & 1 \\ 0 &  0 \end{pmatrix} &
+  f & = \begin{pmatrix} 0 & 0 \\ 1 &  0 \end{pmatrix} &
+  h & = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
+\end{align*}
+form a basis of \(\mathfrak{sl}_2(K)\) and satisfy
+\begin{align*}
+  [e, f] & = h & [h, f] & = -2 f & [h, e] & = 2 e
+\end{align*}
+
+This is interesting to us because it implies every non-zero subspace of \(V\)
+invariant under the actions of \(e\), \(f\) and \(h\) has to be \(V\) itself.
+Next we
+turn our attention to the action of \(h\) in \(V\), in particular, to the
+eigenspace decomposition
+\[
+  V = \bigoplus_{\lambda} V_\lambda
+\]
+of \(V\) -- where \(\lambda\) ranges over the eigenvalues of \(h\) and
+\(V_\lambda\) is the corresponding eigenspace. At this point, this is nothing
+short of a gamble: why look at the eigenvalues of \(h\)?
+
+The short answer is that, as we shall see, this will pay off -- which
+conveniently justifies the epigraph of this chapter. For now we will postpone
+the discussion of the real reason why we chose \(h\). Let \(\lambda\) be
+any eigenvalue of \(h\). Notice \(V_\lambda\) is in general not a
+subrepresentation of \(V\). Indeed, if \(v \in V_\lambda\) then
+\begin{align*}
+  h e v & =   2e v + e h v = (\lambda + 2) e v \\
+  h f v & = - 2f v + f h v = (\lambda - 2) f v
+\end{align*}
+
+In other words, \(e\) sends an element of \(V_\lambda\) to an element of
+\(V_{\lambda + 2}\), while \(f\) sends it to an element of \(V_{\lambda - 2}\).
+Hence
+\begin{center}
+  \begin{tikzcd}
+    \cdots \arrow[bend left=60]{r}
+    & V_{\lambda - 2} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}
+    & V_{\lambda} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
+    & V_{\lambda + 2} \arrow[bend left=60]{r} \arrow[bend left=60]{l}{f}
+    & \cdots \arrow[bend left=60]{l}
+  \end{tikzcd}
+\end{center}
+and \(\bigoplus_{n \in \ZZ} V_{\lambda + 2 n}\) is an
+\(\mathfrak{sl}_2(K)\)-invariant subspace. This implies
+\[
+  V = \bigoplus_{n \in \ZZ} V_{\lambda + 2 n},
+\]
+so that the eigenvalues of \(h\) all have the form \(\lambda + 2 n\) for some
+\(n\) -- since \(V_\mu = 0\) for all \(\mu \notin \lambda + 2 \ZZ\).
+
+Even more so, if \(a = \min \{ n \in \ZZ : V_{\lambda + 2 n} \ne 0 \}\) and
+\(b = \max \{ n \in \ZZ : V_{\lambda + 2 n} \ne 0 \}\) we can see that
+\[
+  \bigoplus_{\substack{n \in \ZZ \\ a \le n \le b}} V_{\lambda + 2 n}
+\]
+is also an \(\mathfrak{sl}_2(K)\)-invariant subspace, so that the eigenvalues
+of \(h\) form an unbroken string
+\[
+  \ldots, \lambda - 4, \lambda - 2, \lambda, \lambda + 2, \lambda + 4, \ldots
+\]
+around \(\lambda\).
+
+Our main objective is to show \(V\) is determined by this string of
+eigenvalues. To do so, we suppose without any loss in generality that
+\(\lambda\) is the right-most eigenvalue of \(h\), fix some non-zero \(v \in
+V_\lambda\) and consider the set \(\{v, f v, f^2 v, \ldots\}\).
+
+\begin{theorem}\label{thm:basis-of-irr-rep}
+  The set \(\{v, f v, f^2 v, \ldots\}\) is a basis for \(V\).
+\end{theorem}
+
+\begin{proof}
+  First of all, notice \(f^k v\) lies in \(V_{\lambda - 2 k}\), so that \(\{v,
+  f v, f^2 v, \ldots\}\) is a set of linearly independent vectors. Hence it
+  suffices to show \(V = K \langle v, f v, f^2 v, \ldots \rangle\), which in
+  light of the fact that \(V\) is irreducible is the same as showing \(K
+  \langle v, f v, f^2 v, \ldots \rangle\) is invariant under the action of
+  \(\mathfrak{sl}_2(K)\).
+
+  The fact that \(h f^k v \in K \langle v, f v, f^2 v, \ldots \rangle\) follows
+  immediately from our previous assertion that \(f^k v \in V_{\lambda - 2 k}\)
+  -- indeed, \(h f^k v = (\lambda - 2 k) f^k v\). Seeing \(e f^k v \in K
+  \langle v, f v, f^2 v, \ldots \rangle\) is a bit more complex. Clearly,
+  \[
+    \begin{split}
+      e f v
+      & = h v + f e v \\
+      \text{(since \(\lambda\) is the right-most eigenvalue)}
+      & = h v + f 0 \\
+      & = \lambda v
+    \end{split}
+  \]
+
+  Next we compute
+  \[
+    \begin{split}
+      e f^2 v
+      & = (h + fe) f v \\
+      & = h f v + f (\lambda v) \\
+      & = 2 (\lambda - 1) f v
+    \end{split}
+  \]
+
+  The pattern is starting to become clear: \(e\) sends \(f^k v\) to a multiple
+  of \(f^{k - 1} v\). Explicitly, it's not hard to check by induction that
+  \[
+    e f^k v = k (\lambda + 1 - k) f^{k - 1} v
+  \]
+\end{proof}
+
+\begin{note}
+  For this last formula to work we fix the convention that \(f^{-1} v = 0\) --
+  which is to say \(e v = 0\).
+\end{note}
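+
+In fact, the induction step is a computation entirely analogous to the ones
+above: assuming the formula for \(e f^k v\), we find
+\[
+  \begin{split}
+    e f^{k + 1} v
+    & = (h + f e) f^k v \\
+    & = h f^k v + f (k (\lambda + 1 - k) f^{k - 1} v) \\
+    & = (\lambda - 2 k) f^k v + k (\lambda + 1 - k) f^k v \\
+    & = (k + 1) (\lambda - k) f^k v,
+  \end{split}
+\]
+which is precisely the claimed formula for \(e f^{k + 1} v\).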
+
+Theorem~\ref{thm:basis-of-irr-rep} may seem unrelated to our problem at first,
+but its significance lies in the fact that we have just provided a complete
+description of the action of \(\mathfrak{sl}_2(K)\) in \(V\). In other
+words\dots
+
+\begin{corollary}
+  \(V\) is completely determined by the right-most eigenvalue \(\lambda\) of
+  \(h\).
+\end{corollary}
+
+\begin{proof}
+  If \(W\) is an irreducible representation of \(\mathfrak{sl}_2(K)\) whose
+  right-most eigenvalue of \(h\) is \(\lambda\) and \(w \in W_\lambda\) is
+  non-zero, consider the linear isomorphism
+  \begin{align*}
+    T : V     & \to     W      \\
+        f^k v & \mapsto f^k w
+  \end{align*}
+
+  We claim \(T\) is an intertwining operator. Indeed, the explicit calculations
+  of \(e f^k v\) and \(h f^k v\) from the previous proof imply
+  \begin{align*}
+    T e & = e T & T f & = f T & T h & = h T
+  \end{align*}
+\end{proof}
+
+Other important consequences of theorem~\ref{thm:basis-of-irr-rep} are\dots
+
+\begin{corollary}
+  Every \(h\) eigenspace is one-dimensional.
+\end{corollary}
+
+\begin{proof}
+  It suffices to note \(\{v, f v, f^2 v, \ldots \}\) is a basis for \(V\)
+  consisting of eigenvectors of \(h\) and whose only element in \(V_{\lambda - 2
+  k}\) is \(f^k v\).
+\end{proof}
+
+\begin{corollary}
+  The eigenvalues of \(h\) in \(V\) form a symmetric, unbroken string of
+  integers separated by intervals of length \(2\) whose right-most value is
+  \(\dim V - 1\).
+\end{corollary}
+
+\begin{proof}
+  If \(f^m\) is the lowest power of \(f\) that annihilates \(v\), it follows
+  from the formula for \(e f^k v\) obtained in the proof of
+  theorem~\ref{thm:basis-of-irr-rep} that
+  \[
+    0 = e 0 = e f^m v = m (\lambda + 1 - m) f^{m - 1} v
+  \]
+
+  This implies \(\lambda + 1 - m = 0\) -- i.e. \(\lambda = m - 1 \in \ZZ\). Now
+  since \(\{v, f v, f^2 v, \ldots, f^{m - 1} v\}\) is a basis for \(V\), \(m =
+  \dim V\). Hence if \(n = \lambda = \dim V - 1\) then the eigenvalues of \(h\)
+  are
+  \[
+    \ldots, n - 6, n - 4, n - 2, n
+  \]
+
+  To see that this string is symmetric around \(0\), simply note that the
+  left-most eigenvalue of \(h\) is precisely \(n - 2 (m - 1) = -n\).
+\end{proof}
+
+We now know every irreducible representation \(V\) of \(\mathfrak{sl}_2(K)\)
+has the form
+\begin{center}
+  \begin{tikzcd}
+    \cdots \arrow[bend left=60]{r}
+    & V_{n - 6} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}
+    & V_{n - 4} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
+    & V_{n - 2} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
+    & V_n \arrow[bend left=60]{l}{f}
+  \end{tikzcd}
+\end{center}
+where \(V_{n - 2 k}\) is the one-dimensional eigenspace of \(h\) associated to
+\(n - 2 k\) and \(n = \dim V - 1\). Even more so, we explicitly know
+\[
+  V = \bigoplus_{k = 0}^n K f^k v
+\]
+and
+\begin{equation}\label{eq:irr-rep-of-sl2}
+  \begin{aligned}
+      f^k v & \overset{e}{\mapsto} k(n + 1 - k) f^{k - 1} v
+    & f^k v & \overset{f}{\mapsto} f^{k + 1} v
+    & f^k v & \overset{h}{\mapsto} (n - 2 k) f^k v
+  \end{aligned}
+\end{equation}
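+
+For example, for \(n = 2\) we get a \(3\)-dimensional representation with basis
+\(\{v, f v, f^2 v\}\) in which
+\begin{align*}
+  e f v & = 2 v &
+  e f^2 v & = 2 f v &
+  h f^k v & = (2 - 2 k) f^k v
+\end{align*}
+This is none other than the adjoint representation of \(\mathfrak{sl}_2(K)\):
+taking \(v = e\), so that \(f v = [f, e] = - h\) and \(f^2 v = [f, - h] = - 2
+f\), one readily recovers these formulas from the commutator relations of
+\(\mathfrak{sl}_2(K)\).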
+
+To conclude our analysis, all that's left is to show that for each \(n\) such
+\(V\) does indeed exist and is irreducible. In other words\dots
+
+\begin{theorem}\label{thm:irr-rep-of-sl2-exists}
+  For each \(n \ge 0\) there exists a (unique) irreducible representation of
+  \(\mathfrak{sl}_2(K)\) whose right-most eigenvalue of \(h\) is \(n\).
+\end{theorem}
+
+\begin{proof}
+  The fact that the representation \(V\) from the previous discussion exists is
+  clear from the commutator relations of \(\mathfrak{sl}_2(K)\) -- just look at
+  \(f^k v\) as abstract symbols and impose the action given by
+  (\ref{eq:irr-rep-of-sl2}). Alternatively, one can readily check that if
+  \(K^2\) is the natural representation of \(\mathfrak{sl}_2(K)\), then \(V =
+  \operatorname{Sym}^n K^2\) satisfies the relations of
+  (\ref{eq:irr-rep-of-sl2}). To see that \(V\) is irreducible let \(W\) be a
+  non-zero subrepresentation and take some non-zero \(w \in W\). Suppose \(w =
+  \alpha_0 v + \alpha_1 f v + \cdots + \alpha_n f^n v\) and let \(k\) be the
+  lowest index such that \(\alpha_k \ne 0\), so that
+  \[
+    w = \alpha_k f^k v + \cdots + \alpha_n f^n v
+  \]
+
+  Now given that \(f^m = f^{n + 1}\) annihilates \(v\),
+  \[
+    f w = \alpha_k f^{k + 1} v + \cdots + \alpha_{n - 1} f^n v
+  \]
+
+  Proceeding inductively we arrive at \(f^{n - k} w = \alpha_k f^n v\), so
+  that \(f^n v \in W\). Hence \(e^i f^n v = \prod_{k = 1}^i k(n + 1 - k) f^{n -
+  i} v \in W\) for all \(i = 1, 2, \ldots, n\). Since \(k \ne 0 \ne n + 1 - k\)
+  for all \(k\) in this range, we can see that \(f^k v \in W\) for all \(k = 0,
+  1, \ldots, n\). In other words, \(W = V\). We are done.
+\end{proof}
+
+Our initial gamble of studying the eigenvalues of \(h\) may have seemed
+arbitrary at first, but it paid off: we've \emph{completely} described
+\emph{all} irreducible representations of \(\mathfrak{sl}_2(K)\). It is not yet
+clear, however, if any of this can be adapted to a general setting. In the
+following section we shall double down on our gamble by trying to reproduce
+some of the results of this section for \(\mathfrak{sl}_3(K)\), hoping this
+will \emph{somehow} lead us to a general solution. In the process of doing so
+we'll learn a bit more about why \(h\) was a sure bet and the race was fixed all
+along.
+
+\section{Representations of \(\mathfrak{sl}_3(K)\)}\label{sec:sl3-reps}
+
+The study of representations of \(\mathfrak{sl}_2(K)\) reminds me of the
+difference between the derivative of a function \(\RR \to \RR\) and that of a smooth
+map between manifolds: it's a simpler case of something greater, but in some
+sense it's too simple of a case, and the intuition we acquire from it can be a
+bit misleading in regards to the general setting. For instance I distinctly
+remember my Calculus I teacher telling the class ``the derivative of the
+composition of two functions is not the composition of their derivatives'' --
+when, of course, in the context of smooth manifolds the chain rule says
+precisely that the derivative of a composition \emph{is} the composition of the
+derivatives.
+
+The same applies to \(\mathfrak{sl}_2(K)\). It's a simple and beautiful
+example, but unfortunately the general picture -- representations of arbitrary
+semisimple algebras -- lacks its simplicity, and, of course, much of this
+complexity is hidden in the case of \(\mathfrak{sl}_2(K)\).  The general
+purpose of this section is to investigate to what extent the framework used in
+the previous section to classify the representations of \(\mathfrak{sl}_2(K)\)
+can be generalized to other semisimple Lie algebras, and the algebra
+\(\mathfrak{sl}_3(K)\) stands as a natural candidate for potential
+generalizations: \(3 = 2 + 1\) after all.
+
+Our approach is very straightforward: we'll fix some irreducible representation
+\(V\) of \(\mathfrak{sl}_3(K)\) and proceed step by step, at each point asking
+ourselves how we could possibly adapt the framework we laid out for
+\(\mathfrak{sl}_2(K)\). The first obvious question is one we have already asked
+ourselves: why \(h\)?  More specifically, why did we choose to study its
+eigenvalues and is there an analogue of \(h\) in \(\mathfrak{sl}_3(K)\)?
+
+The answer to the former question is one we'll discuss at length in the next
+chapter, but for now we note that perhaps the most fundamental property of
+\(h\) is that \emph{there exists an eigenvector \(v\) of \(h\) that is
+annihilated by \(e\)} -- that being the generator of the right-most eigenspace
+of \(h\). This was instrumental to our explicit description of the irreducible
+representations of \(\mathfrak{sl}_2(K)\) culminating in
+theorem~\ref{thm:irr-rep-of-sl2-exists}.
+
+Our first task is to find some analogue of \(h\) in \(\mathfrak{sl}_3(K)\), but
+it's still unclear what exactly we are looking for. We could say we're looking
+for an element of \(V\) that is annihilated by some analogue of \(e\), but the
+meaning of \emph{some analogue of \(e\)} is again unclear. In fact, as we shall
+see, no such analogue exists and neither does such an element. Instead, the actual
+way to proceed is to consider the subalgebra
+\[
+  \mathfrak{h}
+  = \left\{
+    X \in
+    \begin{pmatrix} K & 0 & 0 \\ 0 & K & 0 \\ 0 & 0 & K \end{pmatrix}
+    : \operatorname{Tr}(X) = 0
+    \right\}
+\]
+
+The choice of \(\mathfrak{h}\) may seem odd at the moment, but
+the point is we'll later show that there exists some \(v \in V\) that is
+simultaneously an eigenvector of each \(H \in \mathfrak{h}\) and annihilated by
+half of the remaining elements of \(\mathfrak{sl}_3(K)\). This is exactly
+analogous to the situation we found in \(\mathfrak{sl}_2(K)\): \(h\)
+corresponds to the subalgebra \(\mathfrak{h}\), and the eigenvalues of \(h\) in
+turn correspond to linear functions \(\lambda : \mathfrak{h} \to K\) such that
+\(H v = \lambda(H) \cdot v\) for each \(H \in \mathfrak{h}\) and some non-zero
+\(v \in V\). We call such functionals \(\lambda\) \emph{eigenvalues of
+\(\mathfrak{h}\)}, and we say \emph{\(v\) is an eigenvector of
+\(\mathfrak{h}\)}.
+
+Once again, we'll pay special attention to the eigenvalue decomposition
+\begin{equation}\label{eq:weight-module}
+  V = \bigoplus_\lambda V_\lambda
+\end{equation}
+where \(\lambda\) ranges over all eigenvalues of \(\mathfrak{h}\) and
+\(V_\lambda = \{ v \in V : H v = \lambda(H) \cdot v, \forall H \in \mathfrak{h}
+\}\). We should note that the fact that (\ref{eq:weight-module}) holds is not
+at all obvious. This is because in general \(V_\lambda\) is not the eigenspace
+associated with an eigenvalue of any particular operator \(H \in
+\mathfrak{h}\), but instead the eigenspace of the action of the entire algebra
+\(\mathfrak{h}\). Fortunately for us, (\ref{eq:weight-module}) always holds,
+but we will postpone its proof to the next section.
+
+Next we turn our attention to the remaining elements of \(\mathfrak{sl}_3(K)\).
+In our analysis of \(\mathfrak{sl}_2(K)\) we saw that the eigenvalues of \(h\)
+differed from one another by multiples of \(2\). A possible way to interpret
+this is to say \emph{the eigenvalues of \(h\) differ from one another by
+integral linear combinations of the eigenvalues of the adjoint action of
+\(h\)}. In English, the eigenvalues of the adjoint action of \(h\) are
+\(\pm 2\) since
+\begin{align*}
+  [h, f] & = -2 f &
+  [h, e] & = 2 e
+\end{align*}
+and the eigenvalues of the action of \(h\) in an irreducible
+\(\mathfrak{sl}_2(K)\)-representation differ from one another by multiples of
+\(\pm 2\).
+
+In the case of \(\mathfrak{sl}_3(K)\), a simple calculation shows that if \([H,
+X]\) is a scalar multiple of \(X\) for all \(H \in \mathfrak{h}\) and \(X
+\notin \mathfrak{h}\), then all but one entry of \(X\) are zero. Hence the
+eigenvectors of the adjoint action of \(\mathfrak{h}\) lying outside of
+\(\mathfrak{h}\) are the \(E_{i j}\), \(i \ne j\), and their eigenvalues are
+\(\alpha_i - \alpha_j\), where
+\[
+  \alpha_i
+  \begin{pmatrix}
+    a_1 &   0 &   0 \\
+      0 & a_2 &   0 \\
+      0 &   0 & a_3
+  \end{pmatrix}
+  = a_i
+\]
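+
+Here \(E_{i j}\) denotes the matrix whose only non-zero entry is a \(1\) in the
+\(i\)-th row and \(j\)-th column. Indeed, if \(H \in \mathfrak{h}\) is the
+diagonal matrix with entries \(a_1\), \(a_2\) and \(a_3\) as above then
+\[
+  [H, E_{i j}]
+  = H E_{i j} - E_{i j} H
+  = (a_i - a_j) E_{i j}
+  = (\alpha_i - \alpha_j)(H) \cdot E_{i j},
+\]
+so each \(E_{i j}\) with \(i \ne j\) is an eigenvector of the adjoint action of
+\(\mathfrak{h}\), with eigenvalue \(\alpha_i - \alpha_j\).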
+
+Visually we may draw
+
+\begin{figure}[h]
+  \centering
+  \begin{tikzpicture}[scale=2.5]
+    \begin{rootSystem}{A}
+      \filldraw[black] \weight{0}{0} circle (.5pt);
+      \node[black, above right] at \weight{0}{0} {\small$0$};
+      \wt[black]{-1}{2}
+      \wt[black]{-2}{1}
+      \wt[black]{1}{1}
+      \wt[black]{-1}{-1}
+      \wt[black]{2}{-1}
+      \wt[black]{1}{-2}
+      \node[above] at \weight{-1}{2}  {$\alpha_2 - \alpha_3$};
+      \node[left]  at \weight{-2}{1}  {$\alpha_2 - \alpha_1$};
+      \node[right] at \weight{1}{1}   {$\alpha_1 - \alpha_3$};
+      \node[left]  at \weight{-1}{-1} {$\alpha_3 - \alpha_1$};
+      \node[right] at \weight{2}{-1}  {$\alpha_1 - \alpha_2$};
+      \node[below] at \weight{1}{-2}  {$\alpha_3 - \alpha_2$};
+      \node[black, above] at \weight{1}{0}  {$\alpha_1$};
+      \node[black, above] at \weight{-1}{1} {$\alpha_2$};
+      \node[black, above] at \weight{0}{-1} {$\alpha_3$};
+      \filldraw[black] \weight{1}{0}  circle (.5pt);
+      \filldraw[black] \weight{-1}{1} circle (.5pt);
+      \filldraw[black] \weight{0}{-1} circle (.5pt);
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{figure}
+
+If we denote the eigenspace of the adjoint action of \(\mathfrak{h}\) in
+\(\mathfrak{sl}_3(K)\) associated to \(\alpha\) by
+\(\mathfrak{sl}_3(K)_\alpha\) and fix some \(X \in \mathfrak{sl}_3(K)_\alpha\),
+\(H \in \mathfrak{h}\) and \(v \in V_\lambda\) then
+\[
+  \begin{split}
+    H (X v)
+    & = X (H v) + [H, X] v \\
+    & = X (\lambda(H) \cdot v) + (\alpha(H) \cdot X) v \\
+    & = (\alpha + \lambda)(H) \cdot X v
+  \end{split}
+\]
+so that \(X\) carries \(v\) to \(V_{\alpha + \lambda}\). In other words,
+\(\mathfrak{sl}_3(K)_\alpha\) \emph{acts on \(V\) by translating vectors
+between eigenspaces}.
+
+For instance \(\mathfrak{sl}_3(K)_{\alpha_1 - \alpha_3}\) will act on the
+adjoint representation of \(\mathfrak{sl}_3(K)\) via
+\begin{figure}[h]
+  \centering
+  \begin{tikzpicture}[scale=2.5]
+    \begin{rootSystem}{A}
+      \wt[black]{0}{0}
+      \wt[black]{-1}{2}
+      \wt[black]{-2}{1}
+      \wt[black]{1}{1}
+      \wt[black]{-1}{-1}
+      \wt[black]{2}{-1}
+      \wt[black]{1}{-2}
+      \draw[-latex, black] \weight{-1.9}{1.1} -- \weight{-1.1}{1.9};
+      \draw[-latex, black] \weight{-.9}{-.9} -- \weight{-.1}{-.1};
+      \draw[-latex, black] \weight{0.1}{0.1} -- \weight{.9}{.9};
+      \draw[-latex, black] \weight{1.1}{-1.9} -- \weight{1.9}{-1.1};
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{figure}
+
+This is again entirely analogous to the situation we observed in
+\(\mathfrak{sl}_2(K)\). In fact, we may once more conclude\dots
+
+\begin{theorem}\label{thm:sl3-weights-congruent-mod-root}
+  The eigenvalues of the action of \(\mathfrak{h}\) in an irreducible
+  \(\mathfrak{sl}_3(K)\)-representation \(V\) differ from one another by
+  integral linear combinations of the eigenvalues \(\alpha_i - \alpha_j\) of
+  the adjoint action of \(\mathfrak{h}\) in \(\mathfrak{sl}_3(K)\).
+\end{theorem}
+
+\begin{proof}
+  This proof goes exactly as that of the analogous statement for
+  \(\mathfrak{sl}_2(K)\): it suffices to note that if we fix some eigenvalue
+  \(\lambda\) of \(\mathfrak{h}\) then the sum of all eigenspaces of the form
+  \[
+    V_{\lambda + k_1 (\alpha_1 - \alpha_2) + k_2 (\alpha_2 - \alpha_3)},
+    \quad k_1, k_2 \in \ZZ,
+  \]
+  is a non-zero invariant subspace of \(V\), and is hence all of \(V\).
+\end{proof}
+
+To avoid confusion we better introduce some notation to differentiate between
+eigenvalues of the action of \(\mathfrak{h}\) in \(V\) and eigenvalues of the
+adjoint action of \(\mathfrak{h}\).
+
+\begin{definition}
+  Given a representation \(V\) of \(\mathfrak{sl}_3(K)\), we'll call the
+  non-zero eigenvalues of the action of \(\mathfrak{h}\) in \(V\) \emph{weights
+  of \(V\)}. As you might have guessed, we'll correspondingly refer to
+  eigenvectors and eigenspaces of a given weight by \emph{weight vectors} and
+  \emph{weight spaces}.
+\end{definition}
+
+It's clear from our previous discussion that the weights of the adjoint
+representation of \(\mathfrak{sl}_3(K)\) deserve some special attention.
+
+\begin{definition}
+  The weights of the adjoint representation of \(\mathfrak{sl}_3(K)\) are
+  called \emph{roots of \(\mathfrak{sl}_3(K)\)}. Once again, the expressions
+  \emph{root vector} and \emph{root space} are self-explanatory.
+\end{definition}
+
+Theorem~\ref{thm:sl3-weights-congruent-mod-root} can thus be restated as\dots
+
+\begin{corollary}
+  The weights of an irreducible representation \(V\) of \(\mathfrak{sl}_3(K)\)
+  are all congruent modulo the lattice \(Q\) generated by the roots \(\alpha_i
+  - \alpha_j\) of \(\mathfrak{sl}_3(K)\).
+\end{corollary}
+
+\begin{definition}
+  The lattice \(Q = \ZZ \langle \alpha_i - \alpha_j : i, j = 1, 2, 3 \rangle\)
+  is called \emph{the root lattice of \(\mathfrak{sl}_3(K)\)}.
+\end{definition}
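+
+Notice that \(Q\) is already generated by \(\alpha_1 - \alpha_2\) and
+\(\alpha_2 - \alpha_3\) alone: indeed,
+\[
+  \alpha_1 - \alpha_3 = (\alpha_1 - \alpha_2) + (\alpha_2 - \alpha_3)
+\]
+and the remaining roots are the negatives of these three, so that \(Q \cong
+\ZZ^2\).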
+
+To proceed we once more refer to the previously established framework: next we
+saw that the eigenvalues of \(h\) formed an unbroken string of integers
+symmetric around \(0\). To prove this we analyzed the right-most eigenvalue of
+\(h\) and its eigenvector, providing an explicit description of the irreducible
+representation of \(\mathfrak{sl}_2(K)\) in terms of this vector. We may
+reproduce these steps in the context of \(\mathfrak{sl}_3(K)\) by fixing a
+direction in the plane and considering the weight lying the furthest in that
+direction. For instance, let's say we fix the direction
+\begin{center}
+  \begin{tikzpicture}[scale=2.5]
+    \begin{rootSystem}{A}
+      \wt[black]{0}{0}
+      \wt[black]{-1}{2}
+      \wt[black]{-2}{1}
+      \wt[black]{1}{1}
+      \wt[black]{-1}{-1}
+      \wt[black]{2}{-1}
+      \wt[black]{1}{-2}
+      \draw[-latex, black, thick] \weight{-1.5}{-.5} -- \weight{1.5}{.5};
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{center}
+and let \(\lambda\) be the weight lying the furthest in this direction.
+
+It's easy to see what we mean intuitively by looking at the previous picture,
+but its precise meaning is still elusive. Formally this means we'll choose a
+linear functional \(f : \mathfrak{h}^* \to \QQ\) and pick the weight that
+maximizes \(f\). To avoid any ambiguity we should choose the direction of a
+line irrational with respect to the root lattice \(Q\). For instance if we
+choose the direction of \(\alpha_1 - \alpha_3\) and let \(f\) be the rational
+projection \(Q \to \QQ \langle \alpha_1 - \alpha_3 \rangle \cong \QQ\) then
+\(\alpha_1 - 2 \alpha_2 + \alpha_3 \in Q\) lies in \(\ker f\), so that if a
+weight \(\lambda\) maximizes \(f\) then the translation of \(\lambda\) by any
+multiple of \(\alpha_1 - 2 \alpha_2 + \alpha_3\) must also do so. In other
+words, if the direction we choose is parallel to a vector lying in \(Q\) then
+there may be multiple choices of the ``weight lying the furthest'' along this
+direction.
+
+\begin{definition}
+  We say that a root \(\alpha\) is positive if \(f(\alpha) > 0\) -- i.e. if it
+  lies to the right of the direction we chose. Otherwise we say \(\alpha\) is
+  negative. Notice that \(f(\alpha) \ne 0\) since by definition \(\alpha \ne
+  0\) and \(f\) is irrational with respect to the lattice \(Q\).
+\end{definition}
+
+The first observation we make is that all other weights of \(V\) must lie in a
+sort of \(\frac{1}{3}\)-plane with corner at \(\lambda\), as shown in
+\begin{center}
+  \begin{tikzpicture}
+    \AutoSizeWeightLatticefalse
+    \begin{rootSystem}{A}
+      \weightLattice{3}
+      \fill[gray!50,opacity=.2] (hex cs:x=5,y=-7) -- (hex cs:x=1,y=1) --
+      (hex cs:x=-7,y=5) arc (150:270:{7*\weightLength});
+      \draw[black, thick] (hex cs:x=5,y=-7) -- (hex cs:x=1,y=1) --
+      (hex cs:x=-7,y=5);
+      \filldraw[black] (hex cs:x=1,y=1) circle (1pt);
+      \node[above right=-2pt] at (hex cs:x=1,y=1) {\small\(\lambda\)};
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{center}
+
+Indeed, if this is not the case then, by definition, \(\lambda\) is not the
+furthest weight along the line we chose. Given our previous assertion that the
+root spaces of \(\mathfrak{sl}_3(K)\) act on the weight spaces of \(V\) via
+translation, this implies that \(E_{1 2}\), \(E_{1 3}\) and \(E_{2 3}\) all
+annihilate \(V_\lambda\), or otherwise one of \(V_{\lambda + \alpha_1 -
+\alpha_2}\), \(V_{\lambda + \alpha_1 - \alpha_3}\) and \(V_{\lambda + \alpha_2
+- \alpha_3}\) would be non-zero -- which contradicts the hypothesis that
+\(\lambda\) lies the furthest along the direction we chose. In other words\dots
+
+\begin{theorem}
+  There is a weight vector \(v \in V\) that is killed by all positive root
+  spaces of \(\mathfrak{sl}_3(K)\).
+\end{theorem}
+
+\begin{proof}
+  It suffices to note that the positive roots of \(\mathfrak{sl}_3(K)\) are
+  precisely \(\alpha_1 - \alpha_2\), \(\alpha_1 - \alpha_3\) and \(\alpha_2 -
+  \alpha_3\).
+\end{proof}
+
+We call \(\lambda\) \emph{the highest weight of \(V\)}, and we call any \(v \in
+V_\lambda\) \emph{a highest weight vector}. Going back to the case of
+\(\mathfrak{sl}_2(K)\), we then constructed an explicit basis of our
+irreducible representations in terms of a highest weight vector, which allowed
+us to provide an explicit description of the action of \(\mathfrak{sl}_2(K)\)
+in terms of its standard basis and finally we concluded that the eigenvalues of
+\(h\) must be symmetrical around \(0\). An analogous procedure could be
+implemented for \(\mathfrak{sl}_3(K)\) -- and indeed that's what we'll do later
+down the line -- but instead we would like to focus on the problem of finding
+the weights of \(V\) for the moment.
+
+We'll start out by trying to understand the weights in the boundary of
+the \(\frac{1}{3}\)-plane previously drawn. Since the root spaces act by
+translation, the action of \(E_{2 1}\) in \(V_\lambda\) will span a subspace
+\[
+  W = \bigoplus_k V_{\lambda + k (\alpha_2 - \alpha_1)},
+\]
+and by the same token \(W\) must be invariant under the action of \(E_{1 2}\).
+
+To draw a familiar picture
+\begin{center}
+  \begin{tikzpicture}
+    \begin{rootSystem}{A}
+      \node at \weight{3}{1} (a) {};
+      \node at \weight{1}{2} (b) {};
+      \node at \weight{-1}{3} (c) {};
+      \node at \weight{-3}{4} (d) {};
+      \node at \weight{-5}{5} (e) {};
+      \draw \weight{3}{1} -- \weight{-4}{4.5};
+      \draw[dotted] \weight{-4}{4.5} -- \weight{-5}{5};
+      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+      \draw[-latex] (a) to[bend left=40] (b);
+      \draw[-latex] (b) to[bend left=40] (c);
+      \draw[-latex] (c) to[bend left=40] (d);
+      \draw[-latex] (d) to[bend left=40] (e);
+      \draw[-latex] (e) to[bend left=40] (d);
+      \draw[-latex] (d) to[bend left=40] (c);
+      \draw[-latex] (c) to[bend left=40] (b);
+      \draw[-latex] (b) to[bend left=40] (a);
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{center}
+
+What's remarkable about all this is the fact that the subalgebra spanned by
+\(E_{1 2}\), \(E_{2 1}\) and \(H = [E_{1 2}, E_{2 1}]\) is isomorphic to
+\(\mathfrak{sl}_2(K)\) via
+\begin{align*}
+  E_{1 2} & \mapsto e &
+  E_{2 1} & \mapsto f &
+        H & \mapsto h
+\end{align*}
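+
+Indeed, \(H = [E_{1 2}, E_{2 1}] = E_{1 1} - E_{2 2}\), and a direct
+computation gives
+\begin{align*}
+  [H, E_{1 2}] & = 2 E_{1 2} &
+  [H, E_{2 1}] & = - 2 E_{2 1} &
+  [E_{1 2}, E_{2 1}] & = H
+\end{align*}
+which are precisely the commutator relations satisfied by \(e\), \(f\) and
+\(h\) in \(\mathfrak{sl}_2(K)\).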
+
+In other words, \(W\) is a representation of \(\mathfrak{sl}_2(K)\). Even more
+so, we claim
+\[
+  V_{\lambda + k (\alpha_2 - \alpha_1)} = W_{\lambda(H) - 2k}
+\]
+
+Indeed, \(V_{\lambda + k (\alpha_2 - \alpha_1)} \subset W_{\lambda(H) - 2k}\)
+since \((\lambda + k (\alpha_2 - \alpha_1))(H) = \lambda(H) + k (-1 - 1) =
+\lambda(H) - 2 k\). On the other hand, if we suppose \(0 < \dim V_{\lambda + k
+(\alpha_2 - \alpha_1)} < \dim W_{\lambda(H) - 2 k}\) for some \(k\) we arrive
+at
+\[
+  \dim W
+  = \sum_k \dim V_{\lambda + k (\alpha_2 - \alpha_1)}
+  < \sum_k \dim W_{\lambda(H) - 2k}
+  = \dim W,
+\]
+a contradiction.
+
+There are a number of important consequences to this, the first being that
+the weights of \(V\) appearing in \(W\) must be symmetric with respect to
+the line \(B(\alpha_1 - \alpha_2, \alpha) = 0\). The picture is
+thus
+\begin{center}
+  \begin{tikzpicture}
+    \AutoSizeWeightLatticefalse
+    \begin{rootSystem}{A}
+      \setlength{\weightRadius}{2pt}
+      \weightLattice{4}
+      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+      \wt[black]{0}{0}
+      \node[above left] at \weight{0}{0} {\small\(0\)};
+      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+      \draw[very thick] \weight{0}{-4} -- \weight{0}{4}
+      node[above]{\small\(B(\alpha_1 - \alpha_2, \alpha) = 0\)};
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{center}
+
+Notice we could apply this same argument to the subspace \(\bigoplus_k
+V_{\lambda + k (\alpha_3 - \alpha_2)}\): this subspace is invariant under the
+action of the subalgebra spanned by \(E_{2 3}\), \(E_{3 2}\) and \([E_{2 3},
+E_{3 2}]\), which is again isomorphic to \(\mathfrak{sl}_2(K)\), so that the
+weights in this subspace must be symmetric with respect to the line
+\(B(\alpha_3 - \alpha_2, \alpha) = 0\). The picture is now
+\begin{center}
+  \begin{tikzpicture}
+    \AutoSizeWeightLatticefalse
+    \begin{rootSystem}{A}
+      \setlength{\weightRadius}{2pt}
+      \weightLattice{4}
+      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+      \draw[thick] \weight{3}{1} -- \weight{4}{-1};
+      \wt[black]{0}{0}
+      \wt[black]{4}{-1}
+      \node[above left] at \weight{0}{0} {\small\(0\)};
+      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+      \draw[very thick] \weight{0}{-4} -- \weight{0}{4}
+      node[above]{\small\(B(\alpha_1 - \alpha_2, \alpha) = 0\)};
+      \draw[very thick] \weight{-4}{0} -- \weight{4}{0}
+      node[right]{\small\(B(\alpha_3 - \alpha_2, \alpha) = 0\)};
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{center}
+
+In general, given a weight \(\mu\), the space
+\[
+  \bigoplus_k V_{\mu + k (\alpha_i - \alpha_j)}
+\]
+is invariant under the action of the subalgebra \(\mathfrak{s}_{\alpha_i -
+\alpha_j} = K \langle E_{i j}, E_{j i}, [E_{i j}, E_{j i}] \rangle\), which is
+once more isomorphic to \(\mathfrak{sl}_2(K)\), and again the weight spaces in
+this string match precisely the eigenspaces of \(h\). Needless to say, we could
+keep applying this method to the weights at the ends of our string, arriving at
+\begin{center}
+  \begin{tikzpicture}
+    \AutoSizeWeightLatticefalse
+    \begin{rootSystem}{A}
+      \setlength{\weightRadius}{2pt}
+      \weightLattice{5}
+      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+      \draw[thick] \weight{3}{1} -- \weight{4}{-1};
+      \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
+      \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
+      \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
+      \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
+      \wt[black]{-4}{3}
+      \wt[black]{-3}{1}
+      \wt[black]{-2}{-1}
+      \wt[black]{-1}{-3}
+      \wt[black]{1}{-4}
+      \wt[black]{2}{-3}
+      \wt[black]{3}{-2}
+      \wt[black]{4}{-1}
+      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+      \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
+      \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
+      \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{center}
+
+We claim all dots \(\mu\) lying inside the hexagon we've drawn must also be
+weights -- i.e. \(V_\mu \ne 0\). Indeed, by applying the same argument to an
+arbitrary weight \(\nu\) in the boundary of the hexagon we get a representation
+of \(\mathfrak{sl}_2(K)\) whose weights correspond to weights of \(V\) lying in
+a string inside the hexagon, and whose right-most weight is precisely the
+weight of \(V\) we started with.
+\begin{center}
+  \begin{tikzpicture}
+    \AutoSizeWeightLatticefalse
+    \begin{rootSystem}{A}
+      \setlength{\weightRadius}{2pt}
+      \weightLattice{5}
+      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+      \draw[thick] \weight{3}{1} -- \weight{4}{-1};
+      \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
+      \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
+      \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
+      \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
+      \wt[black]{-4}{3}
+      \wt[black]{-3}{1}
+      \wt[black]{-2}{-1}
+      \wt[black]{-1}{-3}
+      \wt[black]{1}{-4}
+      \wt[black]{2}{-3}
+      \wt[black]{3}{-2}
+      \wt[black]{4}{-1}
+      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+      \node[above right=-2pt] at \weight{1}{2} {\small\(\nu\)};
+      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+      \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
+      \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
+      \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
+      \draw[gray, thick] \weight{1}{2} -- \weight{-2}{-1};
+      \wt[black]{1}{2}
+      \wt[black]{-2}{-1}
+      \wt{0}{1}
+      \wt{-1}{0}
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{center}
+
+By construction, \(\nu\) corresponds to the right-most weight of the
+representation of \(\mathfrak{sl}_2(K)\), so that all dots lying on the gray
+string must occur in the representation of \(\mathfrak{sl}_2(K)\). Hence they
+must also be weights of \(V\). The final picture is thus
+\begin{center}
+  \begin{tikzpicture}
+    \AutoSizeWeightLatticefalse
+    \begin{rootSystem}{A}
+      \setlength{\weightRadius}{2pt}
+      \weightLattice{5}
+      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+      \draw[thick] \weight{3}{1} -- \weight{4}{-1};
+      \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
+      \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
+      \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
+      \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
+      \wt[black]{-4}{3}
+      \wt[black]{-3}{1}
+      \wt[black]{-2}{-1}
+      \wt[black]{-1}{-3}
+      \wt[black]{1}{-4}
+      \wt[black]{2}{-3}
+      \wt[black]{3}{-2}
+      \wt[black]{4}{-1}
+      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+      \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
+      \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
+      \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
+      \wt[black]{-2}{2}
+      \wt[black]{0}{1}
+      \wt[black]{-1}{0}
+      \wt[black]{0}{-2}
+      \wt[black]{1}{-1}
+      \wt[black]{2}{0}
+    \end{rootSystem}
+  \end{tikzpicture}
+\end{center}
+
+Another important consequence of our analysis is the fact that \(\lambda\) lies
+in the lattice \(P\) generated by \(\alpha_1\), \(\alpha_2\) and \(\alpha_3\).
+Indeed, \(\lambda([E_{i j}, E_{j i}])\) is an eigenvalue of \(h\) in a
+representation of \(\mathfrak{sl}_2(K)\), so it must be an integer. Now since
+\[
+  \lambda
+  \begin{pmatrix}
+    a & 0 & 0     \\
+    0 & b & 0     \\
+    0 & 0 & -a -b
+  \end{pmatrix}
+  =
+  \lambda
+  \begin{pmatrix}
+    a & 0 & 0  \\
+    0 & 0 & 0  \\
+    0 & 0 & -a
+  \end{pmatrix}
+  +
+  \lambda
+  \begin{pmatrix}
+    0 & 0 & 0  \\
+    0 & b & 0  \\
+    0 & 0 & -b
+  \end{pmatrix}
+  =
+  a \lambda([E_{1 3}, E_{3 1}]) + b \lambda([E_{2 3}, E_{3 2}]),
+\]
+which is to say \(\lambda = \lambda([E_{1 3}, E_{3 1}]) \alpha_1 +
+\lambda([E_{2 3}, E_{3 2}]) \alpha_2\), we can see that \(\lambda \in
+P\).
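+
+As a quick sanity check -- purely for illustration -- consider
+\(\lambda = \alpha_1 - \alpha_3\). Since \([E_{1 3}, E_{3 1}] = E_{1 1} -
+E_{3 3}\) and \([E_{2 3}, E_{3 2}] = E_{2 2} - E_{3 3}\), we find
+\[
+  \lambda([E_{1 3}, E_{3 1}]) = 2
+  \quad \text{and} \quad
+  \lambda([E_{2 3}, E_{3 2}]) = 1,
+\]
+so the formula above gives \(\alpha_1 - \alpha_3 = 2 \alpha_1 + \alpha_2\) --
+which is indeed the case, since \(\alpha_1 + \alpha_2 + \alpha_3\) vanishes on
+the traceless diagonal matrices.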
+
+\begin{definition}
+  The lattice \(P = \ZZ \alpha_1 + \ZZ \alpha_2 + \ZZ \alpha_3\) is
+  called \emph{the weight lattice of \(\mathfrak{sl}_3(K)\)}.
+\end{definition}
+
+Finally\dots
+
+\begin{theorem}\label{thm:sl3-irr-weights-class}
+  The weights of \(V\) are precisely the elements of the weight lattice \(P\)
+  congruent to \(\lambda\) modulo the sublattice \(Q\) and lying inside the
+  hexagon with vertices the images of \(\lambda\) under the group generated by
+  reflections across the lines \(B(\alpha_i - \alpha_j, \alpha) = 0\).
+\end{theorem}
+
+Once more there's a clear parallel between the case of \(\mathfrak{sl}_3(K)\)
+and that of \(\mathfrak{sl}_2(K)\), where we observed that the weights all lay
+in the lattice \(P = \ZZ\) and were congruent modulo the lattice \(Q = 2 \ZZ\).
+Having found all of the weights of \(V\), the only thing we're missing is an
+existence and uniqueness theorem analogous to
+theorem~\ref{thm:sl2-exist-unique}. In other words, our next goal is
+establishing\dots
+
+\begin{theorem}\label{thm:sl3-existence-uniqueness}
+  For each pair of positive integers \(n\) and \(m\), there exists precisely
+  one irreducible representation \(V\) of \(\mathfrak{sl}_3(K)\) whose highest
+  weight is \(n \alpha_1 - m \alpha_3\).
+\end{theorem}
+
+To proceed further we once again refer to the approach we employed in the case
+of \(\mathfrak{sl}_2(K)\): there we showed in theorem~\ref{thm:basis-of-irr-rep}
+that any irreducible representation of \(\mathfrak{sl}_2(K)\) is spanned by the
+images of its highest weight vector under \(f\). A more abstract way of putting
+it is to say that an irreducible representation \(V\) of \(\mathfrak{sl}_2(K)\)
+is spanned by the images of its highest weight vector under successive
+applications of half of the root spaces of \(\mathfrak{sl}_2(K)\). The
+advantage of this alternative formulation is, of course, that the same holds
+for \(\mathfrak{sl}_3(K)\). Specifically\dots
+
+\begin{theorem}\label{thm:irr-sl3-span}
+  Given an irreducible \(\mathfrak{sl}_3(K)\)-representation \(V\) and a
+  highest weight vector \(v \in V\), \(V\) is spanned by the images of \(v\)
+  under successive applications of \(E_{2 1}\), \(E_{3 1}\) and \(E_{3 2}\).
+\end{theorem}
+
+The proof of theorem~\ref{thm:irr-sl3-span} is very similar to that of
+theorem~\ref{thm:basis-of-irr-rep}: we use the commutator relations of
+\(\mathfrak{sl}_3(K)\) to inductively show that the subspace spanned by the
+images of a highest weight vector under successive applications of \(E_{2 1}\),
+\(E_{3 1}\) and \(E_{3 2}\) is invariant under the action of
+\(\mathfrak{sl}_3(K)\) -- please refer to \cite{fulton-harris} for further
+details. The same argument also goes to show\dots
+
+\begin{corollary}
+  Given a representation \(V\) of \(\mathfrak{sl}_3(K)\) with highest weight
+  \(\lambda\) and \(v \in V_\lambda\), the subspace spanned by successive
+  applications of \(E_{2 1}\), \(E_{3 1}\) and \(E_{3 2}\) to \(v\) is an
+  irreducible subrepresentation whose highest weight is \(\lambda\).
+\end{corollary}
+
+This is very interesting to us since it implies that finding \emph{any}
+representation whose highest weight is \(n \alpha_1 - m \alpha_3\) is enough
+for establishing the ``existence'' part of
+theorem~\ref{thm:sl3-existence-uniqueness}. Moreover, constructing such a
+representation turns out to be quite simple.
+
+\begin{proof}[Proof of existence]
+  Consider the natural representation \(V = K^3\) of \(\mathfrak{sl}_3(K)\). We
+  claim that the highest weight of \(\operatorname{Sym}^n V \otimes
+  \operatorname{Sym}^m V^*\) is \(n \alpha_1 - m \alpha_3\).
+
+  First of all, notice that the eigenvectors of the action of \(\mathfrak{h}\)
+  on \(V\) are the canonical basis vectors \(e_1\), \(e_2\) and \(e_3\), whose
+  weights are \(\alpha_1\),
+  \(\alpha_2\) and \(\alpha_3\) respectively. Hence the weight diagram of \(V\)
+  is
+  \begin{center}
+    \begin{tikzpicture}[scale=2.5]
+      \AutoSizeWeightLatticefalse
+      \begin{rootSystem}{A}
+        \weightLattice{2}
+        \wt[black]{1}{0}
+        \wt[black]{-1}{1}
+        \wt[black]{0}{-1}
+        \node[right] at \weight{1}{0}  {$\alpha_1$};
+        \node[above left] at \weight{-1}{1} {$\alpha_2$};
+        \node[below left] at \weight{0}{-1} {$\alpha_3$};
+      \end{rootSystem}
+    \end{tikzpicture}
+  \end{center}
+  and \(\alpha_1\) is the highest weight of \(V\).
+
+  On the one hand, if \(\{f_1, f_2, f_3\}\) is the dual basis of \(\{e_1, e_2,
+  e_3\}\) then \(H f_i = - \alpha_i(H) \cdot f_i\) for each \(H \in
+  \mathfrak{h}\), so that the weights of \(V^*\) are precisely the opposites of
+  the weights of \(V\). In other words,
+  \begin{center}
+    \begin{tikzpicture}[scale=2.5]
+      \AutoSizeWeightLatticefalse
+      \begin{rootSystem}{A}
+        \weightLattice{2}
+        \wt[black]{-1}{0}
+        \wt[black]{1}{-1}
+        \wt[black]{0}{1}
+        \node[left]        at \weight{-1}{0} {$-\alpha_1$};
+        \node[below right] at \weight{1}{-1} {$-\alpha_2$};
+        \node[above right] at \weight{0}{1}  {$-\alpha_3$};
+      \end{rootSystem}
+    \end{tikzpicture}
+  \end{center}
+  is the weight diagram of \(V^*\) and \(- \alpha_3\) is the highest weight of
+  \(V^*\).
+
+  On the other hand if we fix two \(\mathfrak{sl}_3(K)\)-representations \(U\)
+  and \(W\), by computing
+  \[
+    \begin{split}
+      H (u \otimes w)
+      & = H u \otimes w + u \otimes H w \\
+      & = \lambda(H) \cdot u \otimes w + u \otimes \mu(H) \cdot w \\
+      & = (\lambda + \mu)(H) \cdot (u \otimes w)
+    \end{split}
+  \]
+  for each \(H \in \mathfrak{h}\), \(u \in U_\lambda\) and \(w \in W_\mu\)
+  we can see that the weights of \(U \otimes W\) are precisely the sums of the
+  weights of \(U\) with the weights of \(W\).
+
+  This implies that the maximal weights of \(\operatorname{Sym}^n V\) and
+  \(\operatorname{Sym}^m V^*\) are \(n \alpha_1\) and \(- m \alpha_3\)
+  respectively -- with maximal weight vectors \(e_1^n\) and \(f_3^m\).
+  Furthermore, by the same token the highest weight of \(\operatorname{Sym}^n V
+  \otimes \operatorname{Sym}^m V^*\) must be \(n \alpha_1 - m \alpha_3\) -- with highest
+  weight vector \(e_1^n \otimes f_3^m\).
+\end{proof}
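+
+To make the construction a little more concrete, it may be worth spelling out
+a small case -- this is only an illustration and is not needed in what
+follows. Taking \(n = m = 1\), one may check that \(V \otimes V^* \cong
+\operatorname{End}(V) \cong \mathfrak{gl}_3(K)\), with \(\mathfrak{sl}_3(K)\)
+acting on \(\operatorname{End}(V)\) by
+\[
+  X \cdot A = X A - A X = [X, A],
+\]
+and that the highest weight vector \(e_1 \otimes f_3\) corresponds to the
+matrix \(E_{1 3}\). The irreducible subrepresentation generated by \(E_{1 3}\)
+is then \(\mathfrak{sl}_3(K)\) itself, so the irreducible representation with
+highest weight \(\alpha_1 - \alpha_3\) is (isomorphic to) the adjoint
+representation of \(\mathfrak{sl}_3(K)\).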
+
+The ``uniqueness'' part of theorem~\ref{thm:sl3-existence-uniqueness} is even
+simpler than that.
+
+\begin{proof}[Proof of uniqueness]
+  Let \(V\) and \(W\) be two irreducible representations of
+  \(\mathfrak{sl}_3(K)\) with highest weight \(\lambda\). By
+  theorem~\ref{thm:sl3-irr-weights-class}, the weights of \(V\) are precisely
+  the same as those of \(W\).
+
+  Now by computing
+  \[
+    H (v + w)
+    = H v + H w
+    = \mu(H) \cdot v + \mu(H) \cdot w
+    = \mu(H) \cdot (v + w)
+  \]
+  for each \(H \in \mathfrak{h}\), \(v \in V_\mu\) and \(w \in W_\mu\), we can
+  see that the weights of \(V \oplus W\) are the same as those of \(V\) and \(W\).
+  Hence the highest weight of \(V \oplus W\) is \(\lambda\) -- with highest
+  weight vectors given by the sum of highest weight vectors of \(V\) and \(W\).
+
+  Fix some \(v \in V_\lambda\) and \(w \in W_\lambda\) and consider the
+  irreducible subrepresentation \(U \subset V \oplus W\) generated by the
+  highest weight vector \(v + w\). The projection maps \(\pi_1 : U \to V\) and
+  \(\pi_2 : U \to W\) are non-zero, since \(\pi_1(v + w) = v\) and
+  \(\pi_2(v + w) = w\). Being non-zero homomorphisms between irreducible
+  representations of \(\mathfrak{sl}_3(K)\), they must be isomorphisms.
+  Finally,
+  \[
+    V \cong U \cong W
+  \]
+\end{proof}
+
+The situation here is analogous to that of the previous section, where we saw
+that the irreducible representations of \(\mathfrak{sl}_2(K)\) are given by
+symmetric powers of the natural representation.
+
+We've been very successful in our pursuit of a classification of the
+irreducible representations of \(\mathfrak{sl}_2(K)\) and
+\(\mathfrak{sl}_3(K)\), but so far we've mostly postponed the discussion of the
+motivation behind our methods. In particular, we did not explain why we chose
+\(h\) and \(\mathfrak{h}\), nor why we chose to look at their eigenvalues.
+Apart from the obvious fact that we already knew it would work a priori, why
+did we do all that? In the following section we will attempt to answer this
+question by looking at what we did in the previous sections through more
+abstract lenses and by studying the representations of an arbitrary
+finite-dimensional semisimple Lie algebra \(\mathfrak{g}\).
+
+\section{Simultaneous Diagonalization \& the General Case}
+
+At the heart of our analysis of \(\mathfrak{sl}_2(K)\) and
+\(\mathfrak{sl}_3(K)\) was the decision to consider the eigenspace
+decomposition
+\begin{equation}\label{sym-diag}
+  V = \bigoplus_\lambda V_\lambda
+\end{equation}
+
+This was simple enough to do in the case of \(\mathfrak{sl}_2(K)\), but the
+reasoning behind it, as well as the mere fact that equation (\ref{sym-diag})
+holds, are harder to explain in the case of \(\mathfrak{sl}_3(K)\). The
+eigenspace decomposition associated with an operator \(V \to V\) is a very
+well-known tool, and this type of argument should be familiar to anyone
+acquainted with the basic concepts of linear algebra. On the other hand, the
+eigenspace
+decomposition of \(V\) with respect to the action of an arbitrary subalgebra
+\(\mathfrak{h} \subset \mathfrak{gl}(V)\) is neither well-known nor does it
+hold in general: as previously stated, it may very well be that
+\[
+  \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda \subsetneq V
+\]
+
+We should note, however, that these two cases are not as different as they may
+seem at first glance. Specifically, we can regard the eigenspace decomposition
+of a representation \(V\) of \(\mathfrak{sl}_2(K)\) with respect to the
+eigenvalues of the action of \(h\) as the eigenspace decomposition of \(V\)
+with respect to the action of the subalgebra \(\mathfrak{h} = K h \subset
+\mathfrak{sl}_2(K)\). Furthermore, in both cases \(\mathfrak{h} \subset
+\mathfrak{sl}_n(K)\) is the subalgebra of traceless diagonal matrices, which is
+Abelian. The fundamental difference between these two cases is thus the fact
+that \(\dim \mathfrak{h} = 1\) for \(\mathfrak{h} \subset \mathfrak{sl}_2(K)\)
+while \(\dim \mathfrak{h} > 1\) for \(\mathfrak{h} \subset \mathfrak{sl}_3(K)\).
+The question then is: why did we choose \(\mathfrak{h}\) with \(\dim
+\mathfrak{h} > 1\) for \(\mathfrak{sl}_3(K)\)?
+
+% TODO: Add a note on how irreducible representations of Abelian algebras are
+% all one dimensional to the previous chapter
+The rationale behind fixing an Abelian subalgebra is a simple one: we have seen
+in the previous chapter that representations of Abelian algebras are generally
+much simpler to understand than those of an arbitrary algebra. Thus it makes
+sense to decompose a given representation \(V\) of \(\mathfrak{g}\) into
+subspaces invariant under the action of \(\mathfrak{h}\), and then analyze how
+the remaining elements of \(\mathfrak{g}\) act on these subspaces. The bigger
+\(\mathfrak{h}\) is, the simpler our problem gets, because there are fewer
+elements outside of \(\mathfrak{h}\) left to analyze.
+
+Hence we are generally interested in maximal Abelian subalgebras \(\mathfrak{h}
+\subset \mathfrak{g}\), which leads us to the following definition.
+
+\begin{definition}
+  A subalgebra \(\mathfrak{h} \subset \mathfrak{g}\) is called \emph{a Cartan
+  subalgebra of \(\mathfrak{g}\)} if it is self-normalizing -- i.e. \([X, H] \in
+  \mathfrak{h}\) for all \(H \in \mathfrak{h}\) if, and only if \(X \in
+  \mathfrak{h}\) -- and nilpotent. Equivalently for reductive \(\mathfrak{g}\),
+  \(\mathfrak{h}\) is called \emph{a Cartan subalgebra of \(\mathfrak{g}\)} if
+  it is Abelian, \(\operatorname{ad}(H)\) is diagonalizable for each \(H \in
+  \mathfrak{h}\) and if \(\mathfrak{h}\) is maximal with respect to the former
+  two properties.
+\end{definition}
+
+\begin{proposition}
+  There exists a Cartan subalgebra \(\mathfrak{h} \subset \mathfrak{g}\).
+\end{proposition}
+
+\begin{proof}
+  Notice that \(0 \subset \mathfrak{g}\) is an Abelian subalgebra whose
+  elements act as diagonal operators via the adjoint representation. Indeed,
+  \(0\) -- the only element of \(0 \subset \mathfrak{g}\) -- is such that
+  \(\operatorname{ad}(0) = 0\). Furthermore, given a chain of Abelian
+  subalgebras
+  \[
+    0 \subset \mathfrak{h}_1 \subset \mathfrak{h}_2 \subset \cdots
+  \]
+  such that \(\operatorname{ad}(H)\) is a diagonal operator for each \(H \in
+  \mathfrak{h}_i\), the subalgebra \(\bigcup_i \mathfrak{h}_i \subset
+  \mathfrak{g}\) is Abelian, and its elements also act diagonally in
+  \(\mathfrak{g}\). It then follows from Zorn's lemma that there exists a
+  subalgebra \(\mathfrak{h}\) which is maximal with respect to both these
+  properties -- i.e. a Cartan subalgebra.
+\end{proof}
+
+We have already seen some concrete examples. For instance, one can readily
+check that every pair of diagonal matrices commutes, so that
+\[
+  \mathfrak{h} =
+  \begin{pmatrix}
+         K &      0 & \cdots &      0 \\
+         0 &      K & \cdots &      0 \\
+    \vdots & \vdots & \ddots & \vdots \\
+         0 &      0 & \cdots &      K
+  \end{pmatrix}
+\]
+is an Abelian -- and hence nilpotent -- subalgebra of \(\mathfrak{gl}_n(K)\). A
+simple calculation also shows that if \(i \ne j\) then the coefficient of
+\(E_{i j}\) in \([E_{i i}, X]\) is the same as the coefficient of \(E_{i j}\)
+in \(X\), for all \(X \in \mathfrak{gl}_n(K)\). In particular, if \([E_{i i},
+X]\) is diagonal for all \(i\), then so is \(X\) -- i.e. \(\mathfrak{h}\) is
+self-normalizing. Hence \(\mathfrak{h}\) is a Cartan subalgebra of
+\(\mathfrak{gl}_n(K)\).
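+
+For the reader who wishes to see this last computation spelled out, here is a
+one-line verification -- nothing beyond the bracket of matrix units is needed.
+Writing \(X = \sum_{k, l} x_{k l} E_{k l}\) and using \([E_{i i}, E_{k l}] =
+\delta_{i k} E_{i l} - \delta_{l i} E_{k i}\), we get
+\[
+  [E_{i i}, X]
+  = \sum_l x_{i l} E_{i l} - \sum_k x_{k i} E_{k i},
+\]
+whose \(E_{i j}\)-coefficient for \(j \ne i\) is precisely \(x_{i j}\), the
+\(E_{i j}\)-coefficient of \(X\).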
+
+The intersection of this subalgebra with \(\mathfrak{sl}_n(K)\) -- i.e. the
+subalgebra of traceless diagonal matrices -- is a Cartan subalgebra of
+\(\mathfrak{sl}_n(K)\). In particular, if \(n = 2\) or \(n = 3\) we recover the
+subalgebras described in the previous two sections. The remaining question then
+is: if \(\mathfrak{h} \subset \mathfrak{g}\) is a Cartan subalgebra and \(V\)
+is a representation of \(\mathfrak{g}\), does the eigenspace decomposition
+\[
+  V = \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda
+\]
+of \(V\) hold? The answer to this question turns out to be yes. This is a
+consequence of something known as \emph{simultaneous diagonalization}, which is
+the primary tool we'll use to generalize the results of the previous section.
+What is simultaneous diagonalization all about then?
+
+\begin{definition}\label{def:sim-diag}
+  Given a \(K\)-vector space \(V\), a set of operators \(\{T_j : V \to V\}_j\)
+  is called \emph{simultaneously diagonalizable} if there is a basis \(\{v_1,
+  \ldots, v_n\}\) for \(V\) such that \(T_j v_i\) is a scalar multiple of
+  \(v_i\), for all \(i, j\).
+\end{definition}
+
+\begin{proposition}
+  Given a \emph{finite-dimensional} vector space \(V\), a set of diagonalizable
+  operators \(V \to V\) is simultaneously diagonalizable if, and only if, all of
+  its elements commute with one another.
+\end{proposition}
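+
+For a minimal concrete illustration of what the proposition is saying --
+chosen here purely for the sake of exposition -- consider the commuting
+diagonalizable operators on \(K^2\) given by the matrices
+\[
+  A =
+  \begin{pmatrix}
+    0 & 1 \\
+    1 & 0
+  \end{pmatrix}
+  \quad \text{and} \quad
+  B =
+  \begin{pmatrix}
+    2 & 1 \\
+    1 & 2
+  \end{pmatrix}
+\]
+Neither is diagonal in the canonical basis, but both are diagonal in the common
+eigenbasis \(\{(1, 1), (1, -1)\}\): \(A\) has eigenvalues \(1\) and \(-1\) on
+these two vectors, while \(B\) has eigenvalues \(3\) and \(1\).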
+
+We should point out that simultaneous diagonalization \emph{only works in the
+finite-dimensional setting}. In fact, simultaneous diagonalization is usually
+framed as an equivalent statement about diagonalizable \(n \times n\) matrices
+-- where \(n\) is, of course, finite.
+
+Simultaneous diagonalization implies that to show \(V = \bigoplus_\lambda
+V_\lambda\) it suffices to show that \(H\!\restriction_V : V \to V\) is a
+diagonalizable operator for each \(H \in \mathfrak{h}\). To that end, we
+introduce \emph{the Jordan decomposition of an operator} and \emph{the abstract
+Jordan decomposition of a semisimple Lie algebra}.
+
+\begin{proposition}[Jordan]
+  Given a finite-dimensional vector space \(V\) and an operator \(T : V \to
+  V\), there are unique commuting operators \(T_s, T_n : V \to V\), with
+  \(T_s\) diagonalizable and \(T_n\) nilpotent, such that \(T = T_s + T_n\).
+  The pair \((T_s, T_n)\) is known as \emph{the Jordan decomposition of \(T\)}.
+\end{proposition}
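+
+As a toy example -- included only to fix ideas -- take \(V = K^2\) and
+\[
+  T =
+  \begin{pmatrix}
+    2 & 1 \\
+    0 & 2
+  \end{pmatrix}
+  =
+  \underbrace{\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}}_{T_s}
+  +
+  \underbrace{\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}}_{T_n}
+\]
+Here \(T_s\) is diagonal, \(T_n\) squares to zero and the two commute -- as
+they must, \(T_s\) being a scalar multiple of the identity.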
+
+\begin{proposition}
+  Given \(\mathfrak{g}\) semisimple and \(X \in \mathfrak{g}\), there are
+  \(X_s, X_n \in \mathfrak{g}\) such that \(X = X_s + X_n\), \([X_s, X_n] =
+  0\), \(\operatorname{ad}(X_s)\) is a diagonalizable operator and
+  \(\operatorname{ad}(X_n)\) is a nilpotent operator. The pair \((X_s, X_n)\)
+  is known as \emph{the Jordan decomposition of \(X\)}.
+\end{proposition}
+
+It should be clear from the uniqueness of \(\operatorname{ad}(X)_s\) and
+\(\operatorname{ad}(X)_n\) that the Jordan decomposition of
+\(\operatorname{ad}(X)\) is \(\operatorname{ad}(X) = \operatorname{ad}(X_s) +
+\operatorname{ad}(X_n)\). What's perhaps more remarkable is the fact this holds
+for \emph{any} finite-dimensional representation of \(\mathfrak{g}\). In other
+words\dots
+
+\begin{proposition}\label{thm:preservation-jordan-form}
+  Let \(V\) be a finite-dimensional representation of \(\mathfrak{g}\) and \(X
+  \in \mathfrak{g}\). Denote by \(X\!\restriction_V\) the action of \(X\) in
+  \(V\). Then \(X_s\!\restriction_V = (X\!\restriction_V)_s\) and
+  \(X_n\!\restriction_V = (X\!\restriction_V)_n\).
+\end{proposition}
+
+This last result is known as \emph{the preservation of the Jordan form}, and a
+proof can be found in appendix C of \cite{fulton-harris}. We should point out
+this fails spectacularly in positive characteristic. Furthermore, the statement
+of proposition~\ref{thm:preservation-jordan-form} only makes sense for
+\emph{semisimple} Lie algebras -- i.e. the algebras \(\mathfrak{g}\) for which
+the abstract Jordan decomposition of \(\mathfrak{g}\) is defined. Nevertheless,
+as promised this implies\dots
+
+\begin{corollary}\label{thm:finite-dim-is-weight-mod}
+  Let \(\mathfrak{g}\) be a semisimple Lie algebra, \(\mathfrak{h} \subset
+  \mathfrak{g}\) be a Cartan subalgebra and \(V\) be any finite-dimensional
+  representation of \(\mathfrak{g}\). Then there is a basis \(\{v_1, \ldots,
+  v_n\}\) of \(V\) so that each \(v_i\) is simultaneously an eigenvector of all
+  elements of \(\mathfrak{h}\) -- i.e. each element of \(\mathfrak{h}\) acts as
+  a diagonal matrix in this basis. In other words, there are linear functionals
+  \(\lambda_i \in \mathfrak{h}^*\) so that
+  \(
+    H v_i = \lambda_i(H) \cdot v_i
+  \)
+  for all \(H \in \mathfrak{h}\). In particular,
+  \[
+    V = \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda
+  \]
+\end{corollary}
+
+\begin{proof}
+  Fix some \(H \in \mathfrak{h}\). It suffices to show that \(H\!\restriction_V
+  : V \to V\) is a diagonalizable operator.
+
+  If we write \(H = H_s + H_n\) for the abstract Jordan decomposition of \(H\),
+  we know \(\operatorname{ad}(H_s) = \operatorname{ad}(H)_s\). But
+  \(\operatorname{ad}(H)\) is a diagonalizable operator, so that
+  \(\operatorname{ad}(H)_s = \operatorname{ad}(H)\). This implies
+  \(\operatorname{ad}(H_n) = \operatorname{ad}(H)_n = 0\), so that \(H_n\) is a
+  central element of \(\mathfrak{g}\). Since \(\mathfrak{g}\) is semisimple,
+  \(H_n = 0\). Proposition~\ref{thm:preservation-jordan-form} then implies
+  \((H\!\restriction_V)_n = (H_n)\!\restriction_V = 0\), so \(H\!\restriction_V
+  = (H\!\restriction_V)_s\) is a diagonalizable operator.
+\end{proof}
+
+We should point out that this last proof only works for semisimple Lie
+algebras. This is because we rely heavily on
+proposition~\ref{thm:preservation-jordan-form}, as well as on the fact that
+semisimple Lie algebras are centerless. In fact,
+corollary~\ref{thm:finite-dim-is-weight-mod} fails even for reductive Lie
+algebras. For a counterexample, consider the algebra \(\mathfrak{g} = K\): the
+Cartan subalgebra of \(\mathfrak{g}\) is \(\mathfrak{g}\) itself, and a
+\(\mathfrak{g}\)-module is simply a vector space \(V\) endowed with an operator
+\(V \to V\) -- which corresponds to the action of \(1 \in \mathfrak{g}\) in
+\(V\). In particular, if we choose an operator \(V \to V\) which is \emph{not}
+diagonalizable we find \(V \ne \bigoplus_{\lambda \in \mathfrak{h}^*}
+V_\lambda\).
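+
+Concretely -- and this is just the previous paragraph written out in
+coordinates -- we may take \(V = K^2\) with \(1 \in \mathfrak{g} = K\) acting
+as the nilpotent matrix
+\[
+  \begin{pmatrix}
+    0 & 1 \\
+    0 & 0
+  \end{pmatrix}
+\]
+Its only eigenvalue is \(0\), with eigenspace \(V_0 = K e_1\), so that
+\(\bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda = K e_1 \subsetneq V\).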
+
+However, corollary~\ref{thm:finite-dim-is-weight-mod} does work for reductive
+\(\mathfrak{g}\) if we assume that the representation in question is
+irreducible, since central elements of \(\mathfrak{g}\) act on irreducible
+representations as scalar operators. The hypothesis of finite-dimensionality is
+also of huge importance. In the next chapter we will encounter
+infinite-dimensional \(\mathfrak{g}\)-modules for which the eigenspace
+decomposition \(V = \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda\) fails.
+As a first consequence of corollary~\ref{thm:finite-dim-is-weight-mod} we have\dots
+
+\begin{corollary}
+  The restriction of \(B\) to \(\mathfrak{h}\) is non-degenerate.
+\end{corollary}
+
+\begin{proof}
+  Consider the eigenspace decomposition \(\mathfrak{g} = \mathfrak{g}_0 \oplus
+  \bigoplus_\alpha \mathfrak{g}_\alpha\) of the adjoint representation, where
+  \(\alpha\) ranges over all nonzero eigenvalues of the adjoint action of
+  \(\mathfrak{h}\). We claim \(\mathfrak{g}_0 = \mathfrak{h}\).
+
+  Indeed, since \(\mathfrak{h}\) is Abelian, \(\operatorname{ad}(\mathfrak{h})
+  \mathfrak{h} = 0\) -- i.e. \(\mathfrak{h} \subset \mathfrak{g}_0\). On the
+  other hand, since \(\mathfrak{h}\) is self-normalizing, if \([X, H] = 0 \in
+  \mathfrak{h}\) for all \(H \in \mathfrak{h}\) then \(X \in \mathfrak{h}\) --
+  i.e. \(\mathfrak{g}_0 \subset \mathfrak{h}\). So the eigenspace decomposition
+  becomes
+  \[
+    \mathfrak{g} = \mathfrak{h} \oplus \bigoplus_\alpha \mathfrak{g}_\alpha
+  \]
+
+  We furthermore claim that \(\mathfrak{h} = \mathfrak{g}_0\) is orthogonal to
+  \(\mathfrak{g}_\alpha\) with respect to \(B\) for any \(\alpha \ne 0\).
+  Indeed, given \(X \in \mathfrak{g}_\alpha\) and \(H_1, H_2 \in \mathfrak{h}\)
+  with \(\alpha(H_1) \ne 0\) we have
+  \[
+    \alpha(H_1) \cdot B(X, H_2)
+    = B([H_1, X], H_2)
+    = - B([X, H_1], H_2)
+    = - B(X, [H_1, H_2])
+    = 0
+  \]
+
+  In other words, if \(H \in \mathfrak{h}\) is such that \(B(H, H') = 0\) for
+  all \(H' \in \mathfrak{h}\), then \(B(H, X) = 0\) for all \(X \in
+  \mathfrak{g}\), and the non-degeneracy of \(B\) forces \(H = 0\). Hence the
+  non-degeneracy of \(B\) implies the non-degeneracy of its restriction.
+\end{proof}
+
+We should point out that the restriction of \(B\) to \(\mathfrak{h}\) is
+\emph{not} the Killing form of \(\mathfrak{h}\). In fact, since
+\(\mathfrak{h}\) is Abelian, its Killing form is identically zero -- which is
+hardly ever a non-degenerate form.
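+
+To see a concrete instance of both remarks at once, take \(\mathfrak{g} =
+\mathfrak{sl}_2(K)\) with \(\mathfrak{h} = K h\) -- a computation the reader
+may readily check. Since \(\operatorname{ad}(h)\) acts on the basis \(\{e, h,
+f\}\) with eigenvalues \(2\), \(0\) and \(-2\),
+\[
+  B(h, h)
+  = \operatorname{Tr}(\operatorname{ad}(h)^2)
+  = 2^2 + 0^2 + (-2)^2
+  = 8,
+\]
+so the restriction of \(B\) to \(\mathfrak{h}\) is non-degenerate, while the
+Killing form of the one-dimensional Abelian algebra \(\mathfrak{h}\) itself is
+identically zero.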
+
+\begin{note}
+  Since \(B\) induces an isomorphism \(\mathfrak{h} \isoto \mathfrak{h}^*\), it
+  induces a bilinear form \((B(X, \cdot), B(Y, \cdot)) \mapsto B(X, Y)\) in
+  \(\mathfrak{h}^*\). We denote this form by \(B\).
+\end{note}
+
+We now have most of the necessary tools to reproduce the results of the
+previous sections in a general setting. Let \(\mathfrak{g}\) be a
+finite-dimensional semisimple algebra with a Cartan subalgebra \(\mathfrak{h}\)
+and let \(V\) be a finite-dimensional irreducible representation of
+\(\mathfrak{g}\). We will proceed, as we did before, by generalizing the
+results of the previous two sections in order. By now the pattern should be
+starting to become clear, so we will mostly omit technical details and proofs
+analogous to the ones in the previous sections. Further details can be found in
+appendix D of \cite{fulton-harris} and in \cite{humphreys}.
+
+We begin our analysis by remarking that in both \(\mathfrak{sl}_2(K)\) and
+\(\mathfrak{sl}_3(K)\), the roots were symmetric about the origin and spanned
+all of \(\mathfrak{h}^*\). This turns out to be a general fact, which is a
+consequence of the non-degeneracy of the restriction of the Killing form to the
+Cartan subalgebra.
+
+\begin{proposition}\label{thm:weights-symmetric-span}
+  The eigenvalues \(\alpha\) of the adjoint action of \(\mathfrak{h}\) in
+  \(\mathfrak{g}\) are symmetrical about the origin -- i.e. \(- \alpha\) is
+  also an eigenvalue -- and they span all of \(\mathfrak{h}^*\).
+\end{proposition}
+
+\begin{proof}
+  We'll start with the first claim. Let \(\alpha\) and \(\beta\) be two
+  eigenvalues of the adjoint action of \(\mathfrak{h}\). Notice
+  \([\mathfrak{g}_\alpha, \mathfrak{g}_\beta] \subset \mathfrak{g}_{\alpha +
+  \beta}\). Indeed, if \(X \in \mathfrak{g}_\alpha\) and \(Y \in
+  \mathfrak{g}_\beta\) then
+  \[
+    [H, [X, Y]]
+    = [X, [H, Y]] - [Y, [H, X]]
+    = (\alpha + \beta)(H) \cdot [X, Y]
+  \]
+  for all \(H \in \mathfrak{h}\).
+
+  This implies that if \(\alpha + \beta \ne 0\) then \(\operatorname{ad}(X)
+  \operatorname{ad}(Y)\) is nilpotent: if \(Z \in \mathfrak{g}_\gamma\) then
+  \[
+    (\operatorname{ad}(X) \operatorname{ad}(Y))^n Z
+    = [X, [Y, \ldots [X, [Y, Z]] \ldots ]]
+    \in \mathfrak{g}_{n \alpha + n \beta + \gamma}
+    = 0
+  \]
+  for \(n\) large enough. In particular, \(B(X, Y) =
+  \operatorname{Tr}(\operatorname{ad}(X) \operatorname{ad}(Y)) = 0\). Now if
+  \(- \alpha\) is not an eigenvalue we find \(B(X, \mathfrak{g}_\beta) = 0\)
+  for all eigenvalues \(\beta\), which contradicts the non-degeneracy of \(B\).
+  Hence \(- \alpha\) must be an eigenvalue of the adjoint action of
+  \(\mathfrak{h}\).
+
+  For the second statement, note that if the eigenvalues of \(\mathfrak{h}\) do
+  not span all of \(\mathfrak{h}^*\) then there is some \(H \in \mathfrak{h}\)
+  non-zero such that \(\alpha(H) = 0\) for all eigenvalues \(\alpha\), which is
+  to say, \(\operatorname{ad}(H) X = [H, X] = 0\) for all \(X \in
+  \mathfrak{g}\). Another way of putting it is to say \(H\) is an element of
+  the center \(\mathfrak{z}\) of \(\mathfrak{g}\), which is zero by the
+  semisimplicity -- a contradiction.
+\end{proof}
+
+Furthermore, as in the case of \(\mathfrak{sl}_2(K)\) and
+\(\mathfrak{sl}_3(K)\) one can show\dots
+
+\begin{proposition}\label{thm:root-space-dim-1}
+  The eigenspaces \(\mathfrak{g}_\alpha\) are all 1-dimensional.
+\end{proposition}
+
+The proof of the first statement of
+proposition~\ref{thm:weights-symmetric-span} highlights something interesting:
+if we fix some eigenvalue \(\alpha\) of the adjoint action of
+\(\mathfrak{h}\) in \(\mathfrak{g}\) and an eigenvector \(X \in
+\mathfrak{g}_\alpha\), then for each \(H \in \mathfrak{h}\) and \(v \in
+V_\lambda\) we find
+\[
+  H (X v)
+  = X (H v) + [H, X] v
+  = (\lambda + \alpha)(H) \cdot X v
+\]
+so that \(X\) carries \(v\) to \(V_{\lambda + \alpha}\). We have encountered
+this formula twice in this chapter: again, we find \(\mathfrak{g}_\alpha\)
+\emph{acts on \(V\) by translating vectors between eigenspaces}. In other
+words, if we denote by \(\Delta\) the set of all roots of \(\mathfrak{g}\)
+then\dots
+
+\begin{theorem}\label{thm:weights-congruent-mod-root}
+  The weights of an irreducible representation \(V\) of \(\mathfrak{g}\) are
+  all congruent modulo the root lattice \(Q = \ZZ \Delta\) of \(\mathfrak{g}\).
+\end{theorem}
+
+% TODOO: Turn this into a proper discussion of basis and give the idea of the
+% proof of existance of basis?
+To proceed further, as in the case of \(\mathfrak{sl}_3(K)\) we have to fix a
+direction in \(\mathfrak{h}^*\) -- i.e. we fix a \(\QQ\)-linear functional on
+the \(\QQ\)-span of \(Q\) whose kernel intersects \(Q\) only at \(0\). This
+choice induces a partition \(\Delta = \Delta^+ \cup \Delta^-\) of the set of
+roots of \(\mathfrak{g}\) and once more we find\dots
+
+\begin{definition}
+  The elements of \(\Delta^+\) and \(\Delta^-\) are called \emph{positive} and
+  \emph{negative roots}, respectively. The subalgebra \(\mathfrak{b} =
+  \mathfrak{h} \oplus \bigoplus_{\alpha \in \Delta^+} \mathfrak{g}_\alpha\) is
+  called \emph{the Borel subalgebra associated with \(\mathfrak{h}\)}.
+\end{definition}
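+
+For instance, for \(\mathfrak{g} = \mathfrak{sl}_3(K)\) with the choices made
+in the previous section -- so that \(\Delta^+\) consists of \(\alpha_1 -
+\alpha_2\), \(\alpha_1 - \alpha_3\) and \(\alpha_2 - \alpha_3\) -- the Borel
+subalgebra is
+\[
+  \mathfrak{b}
+  = \mathfrak{h} \oplus K E_{1 2} \oplus K E_{1 3} \oplus K E_{2 3},
+\]
+the subalgebra of upper triangular matrices in \(\mathfrak{sl}_3(K)\).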
+
+\begin{theorem}
+  There is a weight vector \(v \in V\) that is killed by all positive root
+  spaces of \(\mathfrak{g}\).
+\end{theorem}
+
+% TODO: Here we may take a weight of maximal height, but why is it unique?
+% TODO: We don't really need to talk about height tho, we may simply take a
+% weight that maximizes B(gamma, lambda) in QQ
+% TODOO: Either way, we need to move this to after the discussion on the
+% integrality of weights
+\begin{proof}
+  It suffices to note that if \(\lambda\) is the weight of \(V\) lying
+  furthest along the direction we chose and \(V_{\lambda + \alpha} \ne 0\) for
+  some \(\alpha \in \Delta^+\), then \(\lambda + \alpha\) is a weight lying
+  further along the direction we chose than \(\lambda\), which contradicts the
+  choice of \(\lambda\).
+\end{proof}
+
+Accordingly, we call \(\lambda\) \emph{the highest weight of \(V\)}, and we
+call any \(v \in V_\lambda\) \emph{a highest weight vector}. The strategy then
+is to describe all weight spaces of \(V\) in terms of \(\lambda\) and \(v\), as
+in theorem~\ref{thm:sl3-irr-weights-class}, and unsurprisingly we do so by
+reproducing the proof of the case of \(\mathfrak{sl}_3(K)\). Namely, we
+show\dots
+
+\begin{proposition}\label{thm:distinguished-subalgebra}
+  Given a root \(\alpha\) of \(\mathfrak{g}\) the subspace
+  \(\mathfrak{s}_\alpha = \mathfrak{g}_\alpha \oplus \mathfrak{g}_{- \alpha}
+  \oplus [\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]\) is a subalgebra
+  isomorphic to \(\mathfrak{sl}_2(K)\).
+\end{proposition}
+
+\begin{corollary}\label{thm:distinguished-subalg-rep}
+  For all weights \(\mu\), the subspace
+  \[
+    V_\mu[\alpha] = \bigoplus_k V_{\mu + k \alpha}
+  \]
+  is invariant under the action of the subalgebra \(\mathfrak{s}_\alpha\)
+  and the weight spaces in this string match the eigenspaces of \(h\).
+\end{corollary}
+
+The proof of proposition~\ref{thm:distinguished-subalgebra} is very technical
+in nature and we won't include it here, but the idea behind it is simple:
+recall that \(\mathfrak{g}_\alpha\) and \(\mathfrak{g}_{- \alpha}\) are both
+1-dimensional, so that \(\dim [\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]\)
+is at most 1. We check that \([\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]
+\ne 0\) and that no non-zero element of \([\mathfrak{g}_\alpha,
+\mathfrak{g}_{- \alpha}]\) is annihilated by \(\alpha\), so that by adjusting
+scalars we
+can find \(E_\alpha \in \mathfrak{g}_\alpha\) and \(F_\alpha \in
+\mathfrak{g}_{- \alpha}\) such that \(H_\alpha = [E_\alpha, F_\alpha]\)
+satisfies
+\begin{align*}
+  [H_\alpha, F_\alpha] & = -2 F_\alpha &
+  [H_\alpha, E_\alpha] & =  2 E_\alpha
+\end{align*}
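+
+In the concrete case of \(\mathfrak{g} = \mathfrak{sl}_3(K)\) -- spelled out
+here only to tie the general construction back to the previous section -- for
+the root \(\alpha = \alpha_1 - \alpha_3\) one may take
+\begin{align*}
+  E_\alpha & = E_{1 3} &
+  F_\alpha & = E_{3 1} &
+  H_\alpha & = [E_{1 3}, E_{3 1}] = E_{1 1} - E_{3 3},
+\end{align*}
+and indeed \([H_\alpha, E_\alpha] = 2 E_\alpha\) and
+\([H_\alpha, F_\alpha] = - 2 F_\alpha\), so that \(\mathfrak{s}_\alpha\) is
+just another copy of the subalgebras we encountered in our study of
+\(\mathfrak{sl}_3(K)\).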
+
+The elements \(E_\alpha, F_\alpha \in \mathfrak{g}\) are not uniquely
+determined by this condition, but \(H_\alpha\) is. The second statement of
+corollary~\ref{thm:distinguished-subalg-rep} imposes a restriction on the
+weights of \(V\). Namely, if \(\mu\) is a weight, \(\mu(H_\alpha)\) is an
+eigenvalue of \(h\) in some representation of \(\mathfrak{sl}_2(K)\), so
+that\dots
+
+\begin{proposition}
+  The weights \(\mu\) of an irreducible representation \(V\) of
+  \(\mathfrak{g}\) are such that \(\mu(H_\alpha) \in \ZZ\) for each \(\alpha \in
+  \Delta\).
+\end{proposition}
+
+Once more, the lattice \(P = \{ \lambda \in \mathfrak{h}^* : \lambda(H_\alpha)
+\in \ZZ, \forall \alpha \in \Delta \}\) is called \emph{the weight lattice of
+\(\mathfrak{g}\)}, and we call the elements of \(P\) \emph{integral}. Finally,
+another important consequence of proposition~\ref{thm:distinguished-subalgebra}
+is\dots
+
+\begin{corollary}
+  If \(\mu\) is a weight of \(V\), \(\alpha \in \Delta^+\) and \(T_\alpha :
+  \mathfrak{h}^* \to \mathfrak{h}^*\) is the reflection in the hyperplane
+  perpendicular to \(\alpha\) with respect to the Killing form,
+  corollary~\ref{thm:distinguished-subalg-rep} implies that all \(\nu \in P\)
+  of the form \(\nu = \mu + k \alpha\) lying on the line segment connecting
+  \(\mu\) and \(T_\alpha \mu\) are weights --
+  i.e. \(V_\nu \ne 0\).
+\end{corollary}
+
+\begin{proof}
+  It suffices to note that \(\nu \in V_\mu[\alpha]\) -- see appendix D of
+  \cite{fulton-harris} for further details.
+\end{proof}
+
+\begin{definition}
+  We refer to the group \(\mathcal{W} = \langle T_\alpha : \alpha \in \Delta^+
+  \rangle \subset \operatorname{O}(\mathfrak{h}^*)\) as \emph{the Weyl group of
+  \(\mathfrak{g}\)}.
+\end{definition}
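+
+Before moving on, it may help to note -- as a sanity check more than anything
+else -- what this group looks like in the cases we have already studied. For
+\(\mathfrak{sl}_2(K)\) there is a single positive root \(\alpha\), so that
+\(\mathcal{W} = \{1, T_\alpha\} \cong \ZZ/2\ZZ\) acts on \(\mathfrak{h}^*\) by
+\(\lambda \mapsto \pm \lambda\). For \(\mathfrak{sl}_3(K)\) the three
+reflections \(T_{\alpha_i - \alpha_j}\) permute \(\alpha_1\), \(\alpha_2\) and
+\(\alpha_3\), so that \(\mathcal{W} \cong S_3\) -- which is reflected in the
+symmetries of the hexagons drawn in the previous section.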
+
+This is entirely analogous to the situation of \(\mathfrak{sl}_3(K)\), where we
+found that the weights of the irreducible representations were symmetric with
+respect to the lines \(B(\alpha_i - \alpha_j, \alpha) = 0\).
+Indeed, the same argument leads us to the conclusion\dots
+
+\begin{theorem}\label{thm:irr-weight-class}
+  The weights of an irreducible representation \(V\) of \(\mathfrak{g}\) with
+  highest weight \(\lambda\) are precisely the elements of the weight lattice
+  \(P\) congruent to \(\lambda\) modulo the root lattice \(Q\) lying inside the
+  convex hull of the image of \(\lambda\) under the action of the Weyl group
+  \(\mathcal{W}\).
+\end{theorem}
+
+Now the only thing we are missing for a complete classification is an existence
+and uniqueness theorem analogous to theorem~\ref{thm:sl2-exist-unique} and
+theorem~\ref{thm:sl3-existence-uniqueness}. Lo and behold\dots
+
+\begin{definition}
+  An element \(\lambda\) of \(P\) such that \(\lambda(H_\alpha) \ge 0\) for all
+  \(\alpha \in \Delta^+\) is referred to as a \emph{dominant integral weight
+  of \(\mathfrak{g}\)}.
+\end{definition}
+
+\begin{theorem}\label{thm:dominant-weight-theo}
+  For each dominant integral \(\lambda \in P\) there exists precisely one
+  irreducible finite-dimensional representation \(V\) of \(\mathfrak{g}\) whose
+  highest weight is \(\lambda\).
+\end{theorem}
+
+Fix some dominant integral \(\lambda \in P\). The ``uniqueness'' part of the
+theorem follows at once from the argument used for \(\mathfrak{sl}_3(K)\). The
+``existence'' part is more nuanced. Our first instinct is, of course, to try to
+generalize the proof used for \(\mathfrak{sl}_3(K)\). The issue is that our
+proof relied heavily on our knowledge of the roots of \(\mathfrak{sl}_3(K)\).
+Instead, we need a new strategy for the general setting. To that end, we
+introduce a special class of \(\mathfrak{g}\)-modules, known as \emph{Verma
+modules}.
+
+\begin{definition}\label{def:verma}
+  The \(\mathfrak{g}\)-module \(M(\lambda) =
+  \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}} K v^+\), where the action of
+  \(\mathfrak{b}\) in \(K v^+\) is given by \(H v^+ = \lambda(H) \cdot v^+\)
+  for all \(H \in \mathfrak{h}\) and \(X v^+ = 0\) for \(X \in
+  \mathfrak{g}_{\alpha}\), \(\alpha \in \Delta^+\), is called \emph{the Verma
+  module of weight \(\lambda\)}.
+\end{definition}
+
+We should point out that, unlike most representations we've encountered so far,
+Verma modules are \emph{highly infinite-dimensional}. Indeed, the dimension of
+\(M(\lambda)\) is the same as the codimension of \(\mathcal{U}(\mathfrak{b})\)
+in \(\mathcal{U}(\mathfrak{g})\), which is always infinite. Nevertheless,
+\(M(\lambda)\) turns out to be quite well behaved. For instance, by
+construction \(M(\lambda) = \mathcal{U}(\mathfrak{g}) \cdot v^+\) -- where
+\(v^+ = 1 \otimes v^+ \in M(\lambda)\) is as in definition~\ref{def:verma}.
+Moreover, we find\dots
+
+\begin{proposition}\label{thm:verma-is-weight-mod}
+  The weight spaces decomposition
+  \[
+    M(\lambda) = \bigoplus_{\mu \in \mathfrak{h}^*} M(\lambda)_\mu
+  \]
+  holds. Furthermore, \(\dim M(\lambda)_\mu < \infty\) for all \(\mu \in
+  \mathfrak{h}^*\) and \(\dim M(\lambda)_\lambda = 1\). Finally, \(\lambda\) is the
+  highest weight of \(M(\lambda)\), with highest weight vector given by \(v^+ =
+  1 \otimes v^+ \in M(\lambda)\) as in definition~\ref{def:verma}.
+\end{proposition}
+
+\begin{proof}
+  The Poincaré-Birkhoff-Witt theorem implies that \(M(\lambda)\) is spanned by
+  the vectors \(F_{\alpha_1} F_{\alpha_2} \cdots F_{\alpha_n} v^+\) for
+  \(\alpha_i \in \Delta^-\) and \(F_{\alpha_i} \in \mathfrak{g}_{\alpha_i}\) as
+  in the proof of proposition~\ref{thm:distinguished-subalgebra}. But
+  \[
+    \begin{split}
+      H F_{\alpha_1} F_{\alpha_2} \cdots F_{\alpha_n} v^+
+      & = ([H, F_{\alpha_1}] + F_{\alpha_1} H)
+          F_{\alpha_2} \cdots F_{\alpha_n} v^+ \\
+      & = \alpha_1(H) \cdot F_{\alpha_1} \cdots F_{\alpha_n} v^+
+        + F_{\alpha_1} ([H, F_{\alpha_2}] + F_{\alpha_2} H)
+          F_{\alpha_3} \cdots F_{\alpha_n} v^+ \\
+      & \;\; \vdots \\
+      & = (\alpha_1 + \cdots + \alpha_n)(H) \cdot
+          F_{\alpha_1} \cdots F_{\alpha_n} v^+
+        + F_{\alpha_1} \cdots F_{\alpha_n} H v^+ \\
+      & = (\lambda + \alpha_1 + \cdots + \alpha_n)(H) \cdot
+          F_{\alpha_1} \cdots F_{\alpha_n} v^+ \\
+      & \therefore F_{\alpha_1} \cdots F_{\alpha_n} v^+
+        \in M(\lambda)_{\lambda + \alpha_1 + \cdots + \alpha_n}
+    \end{split}
+  \]
+
+  Hence \(M(\lambda) \subset \bigoplus_{\mu \in \mathfrak{h}^*}
+  M(\lambda)_\mu\), as desired. In fact we have established
+  \[
+    M(\lambda)
+    \subset
+    \bigoplus_{\substack{k_i \in \ZZ \\ k_i \ge 0}}
+    M(\lambda)_{\lambda + k_1 \cdot \alpha_1 + \cdots + k_n \cdot \alpha_n}
+  \]
+  where \(\{\alpha_1, \ldots, \alpha_n\} = \Delta^-\), so that all weights of
+  \(M(\lambda)\) have the form \(\mu = \lambda + k_1 \cdot \alpha_1 + \cdots +
+  k_n \cdot \alpha_n\).
+
+  This already gives us that the weights of \(M(\lambda)\) are bounded by
+  \(\lambda\) -- in the sense that no weight of \(M(\lambda)\) is ``higher''
+  than \(\lambda\). To see that \(\lambda\) is indeed a weight, we show that
+  \(v^+\) is a nonzero weight vector. Clearly \(v^+ \in M(\lambda)_\lambda\). The
+  Poincaré-Birkhoff-Witt theorem implies
+  \[
+    M(\lambda)
+    \cong \left(\bigoplus_i \mathcal{U}(\mathfrak{b}) \right)
+    \otimes_{\mathcal{U}(\mathfrak{b})} K v^+
+    \cong \bigoplus_i \mathcal{U}(\mathfrak{b})
+    \otimes_{\mathcal{U}(\mathfrak{b})} K v^+
+    \cong \bigoplus_i K v^+
+    \ne 0
+  \]
+  as \(\mathcal{U}(\mathfrak{b})\)-modules, so \(v^+ \ne 0\) -- for if this was
+  not the case we would find \(M(\lambda) = \mathcal{U}(\mathfrak{g}) \cdot v^+
+  = 0\). Hence \(M(\lambda)_\lambda \ne 0\) and therefore \(\lambda\) is the highest
+  weight of \(M(\lambda)\), with highest weight vector \(v^+\).
+
+  To see that \(\dim M(\lambda)_\mu < \infty\), simply note that there are only
+  finitely many monomials \(F_{\alpha_1}^{k_1} F_{\alpha_2}^{k_2} \cdots
+  F_{\alpha_n}^{k_n}\) such that \(\mu = \lambda + k_1 \cdot \alpha_1 + \cdots
+  + k_n \cdot \alpha_n\). Since \(M(\lambda)_\mu\) is spanned by the images of
+  \(v^+\) under such monomials, we conclude \(\dim M(\lambda)_\mu < \infty\). In
+  particular, there is a single monomial \(F_{\alpha_1}^{k_1}
+  F_{\alpha_2}^{k_2} \cdots F_{\alpha_n}^{k_n}\) such that \(\lambda = \lambda
+  + k_1 \cdot \alpha_1 + \cdots + k_n \cdot \alpha_n\) -- which is, of course,
+  the monomial where \(k_1 = \cdots = k_n = 0\). Hence \(\dim M(\lambda)_\lambda
+  = 1\).
+\end{proof}
+
+\begin{example}\label{ex:sl2-verma}
+  If \(\mathfrak{g} = \mathfrak{sl}_2(K)\), then we can take \(\mathfrak{h} = K
+  h\) and \(\mathfrak{b} = K e \oplus K h\). If \(\lambda \in
+  \mathfrak{h}^*\) is the map \(h \mapsto 2\) then \(M(\lambda) =
+  \bigoplus_{k \ge 0} K f^k v^+\), and the action of \(\mathfrak{sl}_2(K)\) in
+  \(M(\lambda)\) is given by
+  \begin{align*}
+    f^{k + 1} v^+ & \overset{e}{\mapsto} (2 - k (k - 1)) f^k v^+ &
+    f^{k + 1} v^+ & \overset{f}{\mapsto} f^{k + 2} v^+ &
+    f^{k + 1} v^+ & \overset{h}{\mapsto} - 2 k f^{k + 1} v^+
+  \end{align*}
+
+  In the language of the diagrams used in section~\ref{sec:sl2}, we write
+  % TODO: Add a label to the righ of the diagram indicating that the top arrows
+  % are the action of e and the bottom arrows are the action of f
+  \begin{center}
+    \begin{tikzcd}
+      \cdots \arrow[bend left=60]{r}{-10}
+      & M(\lambda)_{-6} \arrow[bend left=60]{r}{-4} \arrow[bend left=60]{l}{1}
+      & M(\lambda)_{-4} \arrow[bend left=60]{r}{0}  \arrow[bend left=60]{l}{1}
+      & M(\lambda)_{-2} \arrow[bend left=60]{r}{2}  \arrow[bend left=60]{l}{1}
+      & M(\lambda)_0    \arrow[bend left=60]{r}{2}  \arrow[bend left=60]{l}{1}
+      & M(\lambda)_2    \arrow[bend left=60]{l}{1}
+    \end{tikzcd}
+  \end{center}
+  where \(M(\lambda)_{2 - 2 k} = K f^k v^+\). In this case, unlike what we have
+  seen in section~\ref{sec:sl2}, the string of weight spaces to the left of the
+  diagram is infinite.
+\end{example}
+
+What's interesting to us about all this is that we've just constructed a
+\(\mathfrak{g}\)-module whose highest weight is \(\lambda\). This is not a
+proof of theorem~\ref{thm:dominant-weight-theo}, however, since \(M(\lambda)\)
+is neither irreducible nor finite-dimensional. Nevertheless, we can use
+\(M(\lambda)\) to construct an irreducible representation of \(\mathfrak{g}\)
+whose highest weight is \(\lambda\).
+
+\begin{proposition}\label{thm:max-verma-submod-is-weight}
+  Every subrepresentation \(V \subset M(\lambda)\) is the direct sum of its
+  weight spaces. In particular, \(M(\lambda)\) has a unique maximal
+  subrepresentation \(N(\lambda)\) and a unique irreducible quotient
+  \(\sfrac{M(\lambda)}{N(\lambda)}\).
+\end{proposition}
+
+\begin{proof}
+  Let \(V \subset M(\lambda)\) be a subrepresentation and take any nonzero \(v
+  \in V\). Because of proposition~\ref{thm:verma-is-weight-mod}, we know there
+  are distinct \(\mu_1, \ldots, \mu_n \in \mathfrak{h}^*\) and nonzero \(v_i \in
+  M(\lambda)_{\mu_i}\) such that \(v = v_1 + \cdots + v_n\). We want to show
+  \(v_i \in V\) for all \(i\).
+
+  Fix some \(H_2 \in \mathfrak{h}\) such that \(\mu_1(H_2) \ne \mu_2(H_2)\).
+  Then
+  \[
+    v_1
+    - \frac{(\mu_3 - \mu_2)(H_2)}{(\mu_2 - \mu_1)(H_2)} v_3
+    - \cdots
+    - \frac{(\mu_n - \mu_2)(H_2)}{(\mu_2 - \mu_1)(H_2)} v_n
+    = \left( 1 - \frac{H_2 - \mu_1(H_2)}{(\mu_2 - \mu_1)(H_2)} \right) v
+    \in V
+  \]
+
+  Now take \(H_3 \in \mathfrak{h}\) such that \(\mu_1(H_3) \ne \mu_3(H_3)\). By
+  applying the same procedure again we get
+  \begin{multline*}
+    v_1
+    +
+    \frac{(\mu_4 - \mu_3)(H_3) \cdot (\mu_4 - \mu_2)(H_2)}
+         {(\mu_3 - \mu_1)(H_3) \cdot (\mu_2 - \mu_1)(H_2)} v_4
+    + \cdots +
+    \frac{(\mu_n - \mu_3)(H_3) \cdot (\mu_n - \mu_2)(H_2)}
+         {(\mu_3 - \mu_1)(H_3) \cdot (\mu_2 - \mu_1)(H_2)} v_n \\
+    =
+    \left(1 - \frac{H_3 - \mu_1(H_3)}{(\mu_3 - \mu_1)(H_3)} \right)
+    \left(1 - \frac{H_2 - \mu_1(H_2)}{(\mu_2 - \mu_1)(H_2)} \right) v
+    \in V
+  \end{multline*}
+
+  By applying the same procedure over and over again we can see that \(v_1 = X
+  v \in V\) for some \(X \in \mathcal{U}(\mathfrak{g})\). Reproducing the whole
+  argument for \(v_2 + \cdots + v_n = v - v_1 \in V\) we then get \(v_2 \in
+  V\), and by iterating this process we find \(v_1, \ldots, v_n \in V\). Hence
+  \[
+    V = \bigoplus_\mu V_\mu = \bigoplus_\mu M(\lambda)_\mu \cap V
+  \]
+
+  Since \(M(\lambda) = \mathcal{U}(\mathfrak{g}) \cdot v^+\), if \(V\) is a
+  proper subrepresentation then \(v^+ \notin V\). Hence any proper submodule
+  lies in the sum of weight spaces other than \(M(\lambda)_\lambda\), so the sum
+  \(N(\lambda)\) of all such submodules is still proper. In fact, this implies
+  \(N(\lambda)\) is the unique maximal subrepresentation of \(M(\lambda)\) and
+  \(\sfrac{M(\lambda)}{N(\lambda)}\) is its unique irreducible quotient.
+\end{proof}
+
+\begin{example}\label{ex:sl2-verma-quotient}
+  If \(\mathfrak{g} = \mathfrak{sl}_2(K)\) and \(\lambda : h \mapsto 2\), we
+  can see from example~\ref{ex:sl2-verma} that \(N(\lambda) = \bigoplus_{k \ge
+  3} K f^k v^+\), so that \(\sfrac{M(\lambda)}{N(\lambda)}\) is the
+  \(3\)-dimensional irreducible representation of \(\mathfrak{sl}_2(K)\) --
+  i.e. the finite-dimensional irreducible representation with highest weight
+  \(\lambda\) constructed in section~\ref{sec:sl2}.
+\end{example}
+
+This last example is particularly interesting to us, since it indicates that
+the finite-dimensional irreducible representations of \(\mathfrak{sl}_2(K)\)
+arise as quotients of Verma modules. This is because the quotient
+\(\sfrac{M(\lambda)}{N(\lambda)}\) in example~\ref{ex:sl2-verma-quotient}
+happened to be finite-dimensional. As it turns out, for semisimple
+\(\mathfrak{g}\) this is always the case when \(\lambda\) is dominant integral.
+Namely\dots
+
+\begin{proposition}\label{thm:verma-is-finite-dim}
+  If \(\lambda\) is dominant integral then the unique irreducible quotient of
+  \(M(\lambda)\) is finite-dimensional.
+\end{proposition}
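+
+Before sketching the idea behind the proof, we should note that the hypothesis
+that \(\lambda\) is dominant integral cannot simply be dropped -- the following
+computation is not needed in the sequel, but it may help explain why. Take
+\(\mathfrak{g} = \mathfrak{sl}_2(K)\) and \(\lambda : h \mapsto - 2\). A
+computation entirely analogous to that of example~\ref{ex:sl2-verma} gives
+\[
+  e f^k v^+
+  = k (\lambda(h) - k + 1) f^{k - 1} v^+
+  = - k (k + 1) f^{k - 1} v^+,
+\]
+which never vanishes for \(k \ge 1\). By
+proposition~\ref{thm:max-verma-submod-is-weight}, any nonzero subrepresentation
+of \(M(\lambda)\) contains some \(f^k v^+\), and by applying \(e^k\) we see it
+also contains \(v^+\) -- so it must be all of \(M(\lambda)\). In other words,
+\(N(\lambda) = 0\) and the unique irreducible quotient of \(M(\lambda)\) is
+\(M(\lambda)\) itself, which is infinite-dimensional.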
+
+The proof of proposition~\ref{thm:verma-is-finite-dim} is very technical and we
+won't include it here, but the idea behind it is to show that the set of
+weights of \(\sfrac{M(\lambda)}{N(\lambda)}\) is stable under the natural
+action of the Weyl group \(\mathcal{W}\) in \(\mathfrak{h}^*\). One can then
+show that every weight of \(\sfrac{M(\lambda)}{N(\lambda)}\) is conjugate to a
+single dominant integral weight of \(\sfrac{M(\lambda)}{N(\lambda)}\), and that
+the set of dominant integral weights of this irreducible quotient is finite.
+Since \(\mathcal{W}\) is finite, this implies the set of weights of the unique
+irreducible
+quotient of \(M(\lambda)\) is finite. But each weight space is
+finite-dimensional. Hence so is the irreducible quotient.
+
+We refer the reader to \cite[ch. 21]{humphreys} for further details. What we
+are really interested in is\dots
+
+\begin{corollary}
+  There is a finite-dimensional irreducible \(\mathfrak{g}\)-module \(V\) whose
+  highest weight is \(\lambda\).
+\end{corollary}
+
+\begin{proof}
+  Let \(V = \sfrac{M(\lambda)}{N(\lambda)}\). It suffices to show that its
+  highest weight is \(\lambda\). We have already seen that \(v^+ \in
+  M(\lambda)_\lambda\) is a highest weight vector. Now since \(v^+\) lies outside
+  of the maximal subrepresentation of \(M(\lambda)\), the projection \(v^+ +
+  N(\lambda) \in V\) is nonzero.
+
+  % TODO: Why is V_mu = M(lambda)_mu + N(lambda)? Turn this into a proposition?
+  We now claim that \(v^+ + N(\lambda) \in V_\lambda\). Indeed,
+  \[
+    H (v^+ + N(\lambda))
+    = H v^+ + N(\lambda)
+    = \lambda(H) \cdot (v^+ + N(\lambda))
+  \]
+  for all \(H \in \mathfrak{h}\). Hence \(\lambda\) is a weight of \(V\), with
+  weight vector \(v^+ + N(\lambda)\). Finally, we remark that \(\lambda\) is
+  the highest weight of \(V\), for if this were not the case we could find a
+  weight \(\mu\) of \(V\) -- and hence of \(M(\lambda)\) -- higher than
+  \(\lambda\), contradicting proposition~\ref{thm:verma-is-weight-mod}.
+\end{proof}
+
+% TODO: Write a conclusion and move this to the next chapter
diff --git a/sections/semisimple-algebras.tex /dev/null
@@ -1,2445 +0,0 @@
-\chapter{Semisimplicity \& Complete Reducibility}
-
-% TODO: Remove this?
-\epigraph{Nobody has ever bet enough on a winning horse.}{Some gambler}
-
-% TODOOO: Point out we are now working with finite-dimensional Lie algebras
-% over an algebraicly closed field of characteristic zero
-
-% TODO: Update the 40 pages thing when we're done
-% TODO: Have we seen the fact representations are useful?
-Having hopefully established in the previous chapter that Lie algebras are
-indeed useful, we are now faced with the Herculean task of trying to
-understand them. We have seen that representations are a remarkably effective
-way to derive information about groups -- and therefore algebras -- but the
-question remains: how to we go about classifying the representations of a given
-Lie algebra? This is a question that have sparked an entire field of research,
-and we cannot hope to provide a comprehensive answer the 40 pages we have left.
-Nevertheless, we can work on particular cases.
-
-Like any sane mathematician would do, we begin by studying a simpler case,
-which is that of \emph{semisimple} Lie algebras algebras. The first question we
-have is thus: why are semisimple algebras simpler -- or perhaps
-\emph{semisimpler} -- to understand than any old Lie algebra? Well, the special
-thing about semisimple algebras is that the relationship between their
-indecomposable representations and their irreducible representations is much
-clearer -- at least in finite dimension. Namely\dots
-
-\begin{proposition}\label{thm:complete-reducibility-equiv}
-  Given a finite-dimensional Lie algebra \(\mathfrak{g}\) over \(K\), the
-  following conditions are equivalent.
-  \begin{enumerate}
-    \item \(\mathfrak{g}\) is semisimple.
-
-    \item Given a finite-dimensional representation \(V\) of \(\mathfrak{g}\)
-      and a subrepresentation \(W \subset V\), \(W\) has a
-      \(\mathfrak{g}\)-invariant complement in \(V\).
-
-    \item Every exact sequence of finite-dimensional representations of
-      \(\mathfrak{g}\) splits.
-
-    \item Every finite-dimensional indecomposable representation of
-      \(\mathfrak{g}\) is irreducible.
-
-    \item Every finite-dimensional representation of \(\mathfrak{g}\) can be
-      uniquely decomposed as a direct sum of irreducible representations.
-  \end{enumerate}
-\end{proposition}
-
-Condition \textbf{(ii)} is known as \emph{complete reducibility}. The
-equivalence between conditions \textbf{(ii)} to \textbf{(iv)} follows at once
-from simple arguments. Furthermore, the equivalence between \textbf{(ii)} and
-\textbf{(v)} is a direct consequence of the Krull-Schmidt theorem. On the other
-hand, the equivalence between \textbf{(i)} and the other items is more subtle.
-We are particularly interested in the proof that \textbf{(i)} implies
-\textbf{(ii)}. In other words, we are interested in the fact that every
-finite-dimensional representation of a semisimple Lie algebra is
-\emph{completely reducible}.
-
-This is because if every finite-dimensional representation of \(\mathfrak{g}\)
-is completely reducible, the equivalence between \textbf{(ii)} and \textbf{(v)}
-implies that a classification of the finite-dimensional irreducible
-representations of \(\mathfrak{g}\) leads to a classification of \emph{all}
-finite-dimensional representations of \(\mathfrak{g}\) -- it suffices to take
-direct sums of the
-already classified irreducible modules. This leads us to the third restriction
-we will impose: for now, we will focus our attention exclusively on
-finite-dimensional representations.
-
-Another interesting characterization of semisimple Lie algebras, which will
-come in handy later on, is the following.
-
-% TODO: Define the Killing form beforehand
-% TODO: Define invariant forms beforehand
-\begin{proposition}
-  Let \(\mathfrak{g}\) be a Lie algebra. The following statements are
-  equivalent.
-  \begin{enumerate}
-    \item \(\mathfrak{g}\) is semisimple.
-    \item For each faithful finite-dimensional representation \(V\) of \(\mathfrak{g}\),
-      the \(\mathfrak{g}\)-invariant bilinear form
-      \begin{align*}
-        B_V : \mathfrak{g} \times \mathfrak{g} & \to K \\
-        (X, Y) &
-        \mapsto \operatorname{Tr}(X\!\restriction_V \circ Y\!\restriction_V)
-      \end{align*}
-      is non-degenerate\footnote{A symmetric bilinear form $B : \mathfrak{g}
-      \times \mathfrak{g} \to K$ is called non-degenerate if $B(X, Y) = 0$ for
-      all $Y \in \mathfrak{g}$ implies $X = 0$.}.
-    \item The Killing form \(B\) is non-degenerate.
-  \end{enumerate}
-\end{proposition}
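-
-By way of illustration -- a two-line computation the reader may check, and one
-we will not need later -- take \(\mathfrak{g} = \mathfrak{sl}_2(K)\) and \(V =
-K^2\) its natural representation, with \(e\), \(f\) and \(h\) the usual basis
-of \(\mathfrak{sl}_2(K)\) from section~\ref{sec:sl2}. Then \(B_V(e, f) = 1\),
-\(B_V(h, h) = 2\) and all other pairings between basis elements vanish, so in
-the ordered basis \(\{e, h, f\}\) the matrix of \(B_V\) is
-\[
-  \begin{pmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 0 \end{pmatrix},
-\]
-whose determinant is \(-2 \ne 0\), so that \(B_V\) is indeed non-degenerate.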
-
-We refer the reader to \cite[ch. 5]{humphreys} for a proof of this last
-result. Without further ado, we may proceed to a proof of\dots
-
-\section{Complete Reducibility}
-
-Historically, complete reducibility was first proved by Hermann Weyl for \(K =
-\mathbb{C}\), using his knowledge of smooth representations of compact Lie
-groups. Namely, Weyl showed that any finite-dimensional semisimple complex Lie
-algebra is (isomorphic to) the complexification of the Lie algebra of a unique
-simply connected compact Lie group, known as its \emph{compact form}. Hence the
-category of the finite-dimensional representations of a given complex
-semisimple algebra is equivalent to that of the finite-dimensional smooth
-representations of its compact form, whose representations are known to be
-completely reducible -- see \cite[ch. 3]{serganova} for instance.
-
-This proof, however, is heavily reliant on the geometric structure of
-\(\mathbb{C}\). In other words, there is no hope of generalizing this to an
-arbitrary \(K\). Luckily for us, there is a much simpler, completely
-algebraic proof of complete reducibility, which works for algebras over any
-algebraically closed field of characteristic zero. The algebraic proof included
-in here is mainly based on that of \cite[ch. 6]{kirillov}, and uses some basic
-homological algebra. Admittedly, much of the homological algebra used in here
-could be concealed from the reader, which would make the exposition more
-accessible -- see \cite{humphreys} for an elementary account, for instance.
-
-However, this does not change the fact that the arguments used in this proof
-are essentially homological in nature. Hence we consider it more productive to
-use the full force of the language of homological algebra, instead of burying
-the
-reader in a pile of unmotivated, yet entirely elementary arguments.
-Furthermore, the homological algebra used in here is actually \emph{very
-basic}. In fact, all we need to know is\dots
-
-\begin{theorem}\label{thm:ext-exacts-seqs}
-  There is a sequence of bifunctors \(\operatorname{Ext}^i :
-  \mathfrak{g}\text{-}\mathbf{Mod} \times \mathfrak{g}\text{-}\mathbf{Mod} \to
-  K\text{-}\mathbf{Vect}\), \(i \ge 0\), such that every short exact
-  sequence of \(\mathfrak{g}\)-modules
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} & W \arrow{r}{i} & V \arrow{r}{\pi} & U \arrow{r} & 0
-    \end{tikzcd}
-  \end{center}
-  induces long exact sequences
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow[r] &
-      \operatorname{Hom}_{\mathfrak{g}}(S, W)
-      \arrow[r, "i \circ -"', swap]\ar[draw=none]{d}[name=X, anchor=center]{} &
-      \operatorname{Hom}_{\mathfrak{g}}(S, V) \arrow[r, "\pi \circ -"', swap] &
-      \operatorname{Hom}_{\mathfrak{g}}(S, U)
-      \ar[rounded corners,
-                to path={ -- ([xshift=2ex]\tikztostart.east)
-                          |- (X.center) \tikztonodes
-                          -| ([xshift=-2ex]\tikztotarget.west)
-                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
-      \operatorname{Ext}^1(S, W)
-      \arrow[r]\ar[draw=none]{d}[name=Y, anchor=center]{} &
-      \operatorname{Ext}^1(S, V) \arrow[r] &
-      \operatorname{Ext}^1(S, U)
-      \ar[rounded corners,
-                to path={ -- ([xshift=2ex]\tikztostart.east)
-                          |- (Y.center) \tikztonodes
-                          -| ([xshift=-2ex]\tikztotarget.west)
-                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
-      \operatorname{Ext}^2(S, W) \arrow[r] &
-      \operatorname{Ext}^2(S, V) \arrow[r] &
-      \operatorname{Ext}^2(S, U) \arrow[r, dashed] &
-      \cdots
-    \end{tikzcd}
-  \end{center}
-  and
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow[r] &
-      \operatorname{Hom}_{\mathfrak{g}}(U, S)
-      \arrow[r, "- \circ \pi"', swap]\ar[draw=none]{d}[name=X, anchor=center]{} &
-      \operatorname{Hom}_{\mathfrak{g}}(V, S) \arrow[r, "- \circ i"', swap] &
-      \operatorname{Hom}_{\mathfrak{g}}(W, S)
-      \ar[rounded corners,
-                to path={ -- ([xshift=2ex]\tikztostart.east)
-                          |- (X.center) \tikztonodes
-                          -| ([xshift=-2ex]\tikztotarget.west)
-                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
-      \operatorname{Ext}^1(U, S)
-      \arrow[r]\ar[draw=none]{d}[name=Y, anchor=center]{} &
-      \operatorname{Ext}^1(V, S) \arrow[r] &
-      \operatorname{Ext}^1(W, S)
-      \ar[rounded corners,
-                to path={ -- ([xshift=2ex]\tikztostart.east)
-                          |- (Y.center) \tikztonodes
-                          -| ([xshift=-2ex]\tikztotarget.west)
-                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
-      \operatorname{Ext}^2(U, S) \arrow[r] &
-      \operatorname{Ext}^2(V, S) \arrow[r] &
-      \operatorname{Ext}^2(W, S) \arrow[r, dashed] &
-      \cdots
-    \end{tikzcd}
-  \end{center}
-\end{theorem}
-
-\begin{theorem}\label{thm:ext-1-classify-short-seqs}
-  Given \(\mathfrak{g}\)-modules \(W\) and \(U\), there is a one-to-one
-  correspondence between elements of \(\operatorname{Ext}^1(W, U)\) and
-  isomorphism classes of short exact sequences
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} & W \arrow{r} & V \arrow{r} & U \arrow{r} & 0
-    \end{tikzcd}
-  \end{center}
-
-  In particular, \(\operatorname{Ext}^1(W, U) = 0\) if, and only if every short
-  exact sequence of \(\mathfrak{g}\)-modules with \(W\) and \(U\) in the
-  extremes splits.
-\end{theorem}
-
-\begin{note}
-  This is, of course, \emph{far} from a comprehensive account of homological
-  algebra. Nevertheless, this is all we need. We refer the reader to
-  \cite{harder} for a complete exposition, or to part II of \cite{ribeiro} for
-  a more modern account using derived categories.
-\end{note}
-
-We are particularly interested in the case where \(S = K\) is the trivial
-representation of \(\mathfrak{g}\). Namely, we may define\dots
-
-\begin{definition}
-  Given a \(\mathfrak{g}\)-module \(V\), we refer to the Abelian group
-  \(H^i(\mathfrak{g}, V) = \operatorname{Ext}^i(K, V)\) as \emph{the \(i\)-th
-  Lie algebra cohomology group of \(V\)}.
-\end{definition}
-
-Given a \(\mathfrak{g}\)-module \(V\), we call the vector space
-\(V^{\mathfrak{g}} = \{v \in V : X v = 0 \; \forall X \in \mathfrak{g}\}\)
-\emph{the space of invariants of \(V\)}. The Lie algebra cohomology groups are
-very much related to invariants of representations. Namely, the canonical
-isomorphism of functors
-\(\operatorname{Hom}_{\mathfrak{g}}(K, -) \isoto {-}^{\mathfrak{g}}\) given by
-\begin{align*}
-  \operatorname{Hom}_{\mathfrak{g}}(K, V) & \isoto V^{\mathfrak{g}} \\
-                                        T & \mapsto T(1)
-\end{align*}
-implies\dots
-
-\begin{corollary}
-  Every short exact sequence of \(\mathfrak{g}\)-modules
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} & W \arrow{r}{i} & V \arrow{r}{\pi} & U \arrow{r} & 0
-    \end{tikzcd}
-  \end{center}
-  induces a long exact sequence
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow[r] &
-      W^{\mathfrak{g}} \arrow[r, "i"', swap]\ar[draw=none]{d}[name=X, anchor=center]{} &
-      V^{\mathfrak{g}} \arrow[r, "\pi"', swap] &
-      U^{\mathfrak{g}}
-      \ar[rounded corners,
-                to path={ -- ([xshift=2ex]\tikztostart.east)
-                          |- (X.center) \tikztonodes
-                          -| ([xshift=-2ex]\tikztotarget.west)
-                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
-      H^1(\mathfrak{g}, W) \arrow[r]\ar[draw=none]{d}[name=Y, anchor=center]{} &
-      H^1(\mathfrak{g}, V) \arrow[r] &
-      H^1(\mathfrak{g}, U)
-      \ar[rounded corners,
-                to path={ -- ([xshift=2ex]\tikztostart.east)
-                          |- (Y.center) \tikztonodes
-                          -| ([xshift=-2ex]\tikztotarget.west)
-                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
-      H^2(\mathfrak{g}, W) \arrow[r] &
-      H^2(\mathfrak{g}, V) \arrow[r] &
-      H^2(\mathfrak{g}, U) \arrow[r, dashed] &
-      \cdots
-    \end{tikzcd}
-  \end{center}
-\end{corollary}
-
-\begin{proof}
-  We have an isomorphism of sequences
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} &
-      \operatorname{Hom}_{\mathfrak{g}}(K, W)
-        \arrow{r}{i \circ -} \arrow{d} &
-      \operatorname{Hom}_{\mathfrak{g}}(K, V)
-        \arrow{r}{\pi \circ -} \arrow{d} &
-      \operatorname{Hom}_{\mathfrak{g}}(K, U) \arrow{r} \arrow{d} &
-      H^1(\mathfrak{g}, W) \arrow[dashed]{r} \arrow[Rightarrow, no head]{d} &
-      \cdots \\
-      0 \arrow{r} &
-      W^{\mathfrak{g}} \arrow[swap]{r}{i} &
-      V^{\mathfrak{g}} \arrow[swap]{r}{\pi} &
-      U^{\mathfrak{g}} \arrow{r} &
-      H^1(\mathfrak{g}, W) \arrow[dashed]{r} &
-      \cdots
-    \end{tikzcd}
-  \end{center}
-
-  By theorem~\ref{thm:ext-exacts-seqs} the sequence on the top is exact. Hence
-  so is the sequence on the bottom.
-\end{proof}
-
-This is all well and good, but what does any of this have to do with complete
-reducibility? Well, in general cohomology theories really shine when one is
-trying to control obstructions of some kind. In our case, the bifunctor
-\(H^1(\mathfrak{g}, \operatorname{Hom}(-, -)) :
-\mathfrak{g}\text{-}\mathbf{Mod} \times \mathfrak{g}\text{-}\mathbf{Mod} \to
-\mathbf{Ab}\) classifies obstructions to complete reducibility.
-Explicitly\dots
-
-\begin{theorem}
-  Given \(\mathfrak{g}\)-modules \(W\) and \(U\), there is a one-to-one
-  correspondence between elements of \(H^1(\mathfrak{g}, \operatorname{Hom}(W,
-  U))\) and isomorphism classes of short exact sequences
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} & W \arrow{r} & V \arrow{r} & U \arrow{r} & 0
-    \end{tikzcd}
-  \end{center}
-\end{theorem}
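-
-Although we will make no use of it, the construction discussed in the next
-paragraph leads to a very concrete description of the first cohomology group:
-\(H^1(\mathfrak{g}, S)\) gets identified with the space of linear maps \(\phi
-: \mathfrak{g} \to S\) satisfying
-\[
-  \phi([X, Y]) = X \phi(Y) - Y \phi(X),
-\]
-taken modulo the subspace of maps of the form \(X \mapsto X s\) for some fixed
-\(s \in S\).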
-
-For the readers already familiar with homological algebra: this correspondence
-can be computed very concretely by considering a canonical acyclic resolution
-\begin{center}
-  \begin{tikzcd}
-    \cdots \arrow[dashed]{r} &
-    \wedge^3 \mathfrak{g} \rar &
-    \wedge^2 \mathfrak{g} \rar &
-    \mathfrak{g} \rar &
-    K \rar &
-    0
-  \end{tikzcd}
-\end{center}
-of the trivial representation \(K\), which provides an explicit construction of
-the cohomology groups -- see \cite[sec.~9]{lie-groups-serganova-student} or
-\cite[sec.~24]{symplectic-physics} for further details. We will use the
-previous result implicitly in our proof, but we will not prove it in its full
-force. Namely, we will show that \(H^1(\mathfrak{g}, V) = 0\) for all
-finite-dimensional \(V\), and that the fact that \(H^1(\mathfrak{g},
-\operatorname{Hom}(W, U)) = 0\) for all finite-dimensional \(W\) and \(U\)
-implies complete reducibility. To that end, we introduce a distinguished
-element of \(\mathcal{U}(\mathfrak{g})\), known as \emph{the Casimir element of
-a representation}.
-
-\begin{definition}\label{def:casimir-element}
-  Let \(V\) be a finite-dimensional representation of \(\mathfrak{g}\).
-  Let \(\{X_i\}_i\) be a basis for \(\mathfrak{g}\) and denote by \(\{X^i\}_i\)
-  its dual basis with respect to the form \(B_V\) -- i.e. the unique basis for
-  \(\mathfrak{g}\) satisfying \(B_V(X_i, X^j) = \delta_{i j}\). We call
-  \[
-    C_V = X_1 X^1 + \cdots + X_n X^n \in \mathcal{U}(\mathfrak{g})
-  \]
-  the \emph{Casimir element of \(V\)}.
-\end{definition}
-
-\begin{lemma}
-  The definition of \(C_V\) is independent of the choice of basis
-  \(\{X_i\}_i\).
-\end{lemma}
-
-\begin{proof}
-  Whatever basis \(\{X_i\}_i\) we choose, the element \(\sum_i X_i \otimes X^i
-  \in \mathfrak{g} \otimes \mathfrak{g}\) -- whose image in
-  \(\mathcal{U}(\mathfrak{g})\) is precisely \(C_V\) -- corresponds to the
-  identity operator under the canonical isomorphism \(\mathfrak{g} \otimes
-  \mathfrak{g} \isoto \mathfrak{g} \otimes \mathfrak{g}^* \isoto
-  \operatorname{End}(\mathfrak{g})\), which does not depend on the choice of
-  basis\footnote{Here the isomorphism $\mathfrak{g} \otimes
-  \mathfrak{g} \isoto \mathfrak{g} \otimes \mathfrak{g}^*$ is given by
-  tensoring the identity $\mathfrak{g} \to \mathfrak{g}$ with the isomorphism
-  $\mathfrak{g} \isoto \mathfrak{g}^*$ induced by the form $B_V$.}.
-\end{proof}
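-
-For a concrete example -- a computation the reader may wish to verify -- take
-\(\mathfrak{g} = \mathfrak{sl}_2(K)\) and \(V = K^2\) its natural
-representation, with \(e\), \(f\) and \(h\) the usual basis of
-\(\mathfrak{sl}_2(K)\) from section~\ref{sec:sl2}. Then \(B_V(e, f) = 1\),
-\(B_V(h, h) = 2\) and all other pairings between basis elements vanish, so
-that the dual basis of \(\{e, f, h\}\) is \(\{f, e, \frac{h}{2}\}\) and
-\[
-  C_V = e f + f e + \frac{h^2}{2},
-\]
-which acts in \(K^2\) as \(\frac{3}{2} \operatorname{Id} =
-\frac{\dim \mathfrak{sl}_2(K)}{\dim K^2} \operatorname{Id}\) -- in agreement
-with the proposition below.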
-
-\begin{proposition}
-  The Casimir element \(C_V \in \mathcal{U}(\mathfrak{g})\) is central, so that
-  \(C_V : W \to W\) is an intertwining operator for any \(\mathfrak{g}\)-module
-  \(W\). Furthermore, \(C_V\) acts in \(V\) as a non-zero scalar operator
-  whenever \(V\) is a non-trivial finite-dimensional irreducible representation
-  of \(\mathfrak{g}\).
-\end{proposition}
-
-\begin{proof}
-  To see that \(C_V\) is central fix a basis \(\{X_i\}_i\) for \(\mathfrak{g}\)
-  and denote by \(\{X^i\}_i\) its dual basis as in
-  definition~\ref{def:casimir-element}. Let \(X \in \mathfrak{g}\) and denote
-  by \(\lambda_{i j}, \mu_{i j} \in K\) the coefficients of \(X_j\) and \(X^j\)
-  in \([X, X_i]\) and \([X, X^i]\), respectively.
-
-  % TODO: Comment on the invariance of the Killing form beforehand
-  The invariance of \(B_V\) implies
-  \[
-    \lambda_{i k}
-    = B_V([X, X_i], X^k)
-    = B_V(-[X_i, X], X^k)
-    = B_V(X_i, -[X, X^k])
-    = - \mu_{k i}
-  \]
-
-  Hence
-  \[
-    \begin{split}
-      [X, C_V]
-      & = \sum_i [X, X_i X^i] \\
-      & = \sum_i [X, X_i] X^i + \sum_i X_i [X, X^i] \\
-      & = \sum_{i j} \lambda_{i j} X_j X^i + \sum_{i j} \mu_{i j} X_i X^j \\
-      & = 0
-    \end{split},
-  \]
-  and \(C_V\) is central. This implies that \(C_V : W \to W\) is an intertwiner
-  for all representations \(W\) of \(\mathfrak{g}\): its action commutes with
-  the action of any other element of \(\mathfrak{g}\).
-
-  In particular, it follows from Schur's lemma that if \(V\) is
-  finite-dimensional and irreducible then \(C_V\) acts in \(V\) as a scalar
-  operator. To see that this scalar is nonzero we compute
-  \[
-    \operatorname{Tr}(C_V\!\restriction_V)
-    = \operatorname{Tr}(X_1\!\restriction_V X^1\!\restriction_V)
-    + \cdots
-    + \operatorname{Tr}(X_n\!\restriction_V X^n\!\restriction_V)
-    = \dim \mathfrak{g},
-  \]
-  so that \(C_V\!\restriction_V = \lambda \operatorname{Id}\) for \(\lambda =
-  \frac{\dim \mathfrak{g}}{\dim V} \ne 0\).
-\end{proof}
-
-As promised, the Casimir element of a representation can be used to
-establish\dots
-
-\begin{proposition}\label{thm:first-cohomology-vanishes}
-  Let \(V\) be a finite-dimensional representation of \(\mathfrak{g}\). Then
-  \(H^1(\mathfrak{g}, V) = 0\).
-\end{proposition}
-
-\begin{proof}
-  We begin with the case where \(V\) is irreducible. Due to
-  theorem~\ref{thm:ext-1-classify-short-seqs}, it suffices to show that any
-  exact sequence of the form
-  \begin{equation}\label{eq:exact-seq-h1-vanishes}
-    \begin{tikzcd}
-      0 \arrow{r} & K \arrow{r} & W \arrow{r}{\pi} & V \arrow{r} & 0
-    \end{tikzcd}
-  \end{equation}
-  splits.
-
-   If \(V = K\) is the trivial representation then the exactness of
-  \begin{equation}\label{eq:trivial-extrems-exact-seq}
-    \begin{tikzcd}
-      0 \arrow{r} & K \arrow{r} & W \arrow{r}{\pi} & K \arrow{r} & 0
-    \end{tikzcd}
-  \end{equation}
-  implies \(W\) is 2-dimensional. Take any non-zero \(w \in W\) outside of the
-  image of the inclusion \(K \to W\).
-
-  Since the action of \(\mathfrak{g}\) in both ends of
-  (\ref{eq:trivial-extrems-exact-seq}) is trivial, \(\mathfrak{g}\) annihilates
-  the image of the inclusion \(K \to W\) and, given \(u \in W\), \(\pi(X u) = X
-  \pi(u) = 0\) -- so that \(X u\) lies in this image -- for each \(X \in
-  \mathfrak{g}\). Hence \([X, Y] u = X (Y u) - Y (X u) = 0\) for all \(X, Y \in
-  \mathfrak{g}\) and \(u \in W\), which is to say, \([\mathfrak{g},
-  \mathfrak{g}]\) acts as zero in \(W\). But \(\mathfrak{g}\) is semisimple, so
-  \(\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]\) and \(X w = 0\) for all \(X
-  \in \mathfrak{g}\) -- i.e. \(K w\) is invariant under the action of
-  \(\mathfrak{g}\). Since \(w\) lies outside the image of the inclusion \(K \to
-  W\), \(\pi(w) \ne 0\) -- which is to say, \(w \notin \ker \pi\). This implies
-  the map \(K \to W\) that takes \(1\) to \(\sfrac{w}{\pi(w)}\) is a splitting
-  of (\ref{eq:trivial-extrems-exact-seq}).
-
-  Now suppose that \(V\) is non-trivial, so that \(C_V\) acts on \(V\) as
-  \(\lambda \operatorname{Id}\) for some \(\lambda \ne 0\). Given an eigenvalue
-  \(\mu \in K\) of the action of \(C_V\) in \(W\), denote by \(W^\mu\) its
-  associated generalized eigenspace. We claim \(W^0\) is the image of the
-  inclusion \(K \to W\). Since \(C_V\) acts as zero in \(K\), this image is
-  clearly contained in \(W^0\). On the other hand, if \(w \in W\) is such that
-  \(C_V^n w = 0\) then
-  \[
-    \lambda^n \pi(w)
-    = C_V^n \pi(w)
-    = \pi(C_V^n w)
-    = 0,
-  \]
-  so that \(w \in \ker \pi\) -- because \(\lambda^n \ne 0\). The exactness of
-  (\ref{eq:exact-seq-h1-vanishes}) then implies the desired conclusion.
-
-  We furthermore claim that the only eigenvalues of \(C_V\) in \(W\) are \(0\)
-  and \(\lambda\). Indeed, if \(\mu \ne 0\) is an eigenvalue and \(w\) is an
-  associated eigenvector, then
-  \[
-    \mu \pi(w) = \pi(C_V w) = C_V \pi(w) = \lambda \pi(w)
-  \]
-
-  Since \(w \notin W^0\), \(\pi(w) \ne 0\) and therefore \(\mu = \lambda\).
-  Hence \(W = W^0 \oplus W^\lambda\) as a vector space. The fact that \(C_V\)
-  is central implies \((C_V - \lambda \operatorname{Id})^n X w = X (C_V -
-  \lambda \operatorname{Id})^n w\) for all \(w \in W\), \(X \in \mathfrak{g}\) and \(n
-  > 0\). In particular, \(W^\lambda\) is stable under the action of
-  \(\mathfrak{g}\) -- i.e. \(W^\lambda\) is a subrepresentation. Since \(W^0\)
-  is precisely the kernel of \(\pi\), we have an isomorphism of representations
-  \(W^\lambda \cong \sfrac{W}{W^0} \isoto V\), which induces a splitting \(W
-  \cong K \oplus V\).
-
-  Finally, we consider the case where \(V\) is not irreducible. Suppose
-  \(H^1(\mathfrak{g}, W) = 0\) for all \(\mathfrak{g}\)-modules \(W\) with
-  \(\dim W < \dim V\) and let \(W \subset V\) be a proper non-zero
-  subrepresentation. Then
-  the exact sequence
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} & W \arrow{r} & V \arrow{r} & \sfrac{V}{W} \arrow{r} & 0
-    \end{tikzcd}
-  \end{center}
-  induces a long exact sequence of the form
-  \begin{center}
-    \begin{tikzcd}
-      \cdots \arrow[dashed]{r} &
-      H^1(\mathfrak{g}, W) \arrow{r} &
-      H^1(\mathfrak{g}, V) \arrow{r} &
-      H^1(\mathfrak{g}, \sfrac{V}{W}) \arrow[dashed]{r} &
-      \cdots
-    \end{tikzcd}
-  \end{center}
-
-  Since \(0 < \dim W, \dim \sfrac{V}{W} < \dim V\) it follows
-  \(H^1(\mathfrak{g}, W) = H^1(\mathfrak{g}, \sfrac{V}{W}) = 0\). The exactness
-  of
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} &
-      H^1(\mathfrak{g}, V) \arrow{r} &
-      0
-    \end{tikzcd}
-  \end{center}
-  then implies \(H^1(\mathfrak{g}, V) = 0\). Hence by induction on \(\dim V\)
-  we find \(H^1(\mathfrak{g}, V) = 0\) for all finite-dimensional \(V\). We are
-  done.
-\end{proof}
-
-We are now finally ready to prove\dots
-
-\begin{theorem}
-  Every finite-dimensional representation of a semisimple Lie algebra is
-  completely reducible.
-\end{theorem}
-
-\begin{proof}
-  Let
-  \begin{equation}\label{eq:generict-exact-sequence}
-    \begin{tikzcd}
-      0 \arrow{r} & W \arrow{r} & V \arrow{r}{\pi} & U \arrow{r} & 0
-    \end{tikzcd}
-  \end{equation}
-  be a short exact sequence of finite-dimensional representations of
-  \(\mathfrak{g}\). We want to establish that
-  (\ref{eq:generict-exact-sequence}) splits.
-
-  We have an exact sequence
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} &
-      \operatorname{Hom}(U, W) \arrow{r} &
-      \operatorname{Hom}(U, V) \arrow{r}{\pi \circ -} &
-      \operatorname{Hom}(U, U) \arrow{r} & 0
-    \end{tikzcd}
-  \end{center}
-  of vector spaces. Since all maps involved are intertwiners, this is an exact
-  sequence of \(\mathfrak{g}\)-modules. This then induces a long exact sequence
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow[r] &
-      \operatorname{Hom}(U, W)^{\mathfrak{g}} \arrow[r]\ar[draw=none]{d}[name=X, anchor=center]{} &
-      \operatorname{Hom}(U, V)^{\mathfrak{g}} \arrow[r, "\pi \circ -"', swap] &
-      \operatorname{Hom}(U, U)^{\mathfrak{g}}
-      \ar[rounded corners,
-                to path={ -- ([xshift=2ex]\tikztostart.east)
-                          |- (X.center) \tikztonodes
-                          -| ([xshift=-2ex]\tikztotarget.west)
-                          -- (\tikztotarget)}]{dll}[at end]{} \\ &
-      H^1(\mathfrak{g}, \operatorname{Hom}(U, W)) \arrow[r] &
-      H^1(\mathfrak{g}, \operatorname{Hom}(U, V)) \arrow[r] &
-      H^1(\mathfrak{g}, \operatorname{Hom}(U, U)) \arrow[r, dashed] &
-      \cdots
-    \end{tikzcd}
-  \end{center}
-  of vector spaces. But \(H^1(\mathfrak{g}, \operatorname{Hom}(U, W))\)
-  vanishes because of proposition~\ref{thm:first-cohomology-vanishes}. Hence we
-  have an exact sequence
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} &
-      \operatorname{Hom}(U, W)^{\mathfrak{g}} \arrow{r} &
-      \operatorname{Hom}(U, V)^{\mathfrak{g}} \arrow{r}{\pi \circ -} &
-      \operatorname{Hom}(U, U)^{\mathfrak{g}} \arrow{r} &
-      0
-    \end{tikzcd}
-  \end{center}
-
-  Now notice \(\operatorname{Hom}(U, -)^{\mathfrak{g}} =
-  \operatorname{Hom}_{\mathfrak{g}}(U, -)\). Indeed, given a
-  \(\mathfrak{g}\)-module \(S\) and a \(K\)-linear map \(T : U \to S\)
-  \[
-    \begin{split}
-      T \in \operatorname{Hom}(U, S)^{\mathfrak{g}}
-      & \iff X T - T X = 0 \quad \forall X \in \mathfrak{g} \\
-      & \iff X T = T X \quad \forall X \in \mathfrak{g} \\
-      & \iff T \in \operatorname{Hom}_{\mathfrak{g}}(U, S)
-    \end{split}
-  \]
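-  Here \(X T\) and \(T X\) stand for the compositions \(X\!\restriction_S
-  \circ T\) and \(T \circ X\!\restriction_U\), so that \(X T - T X\) is
-  precisely the action of \(X\) on \(T\) in the \(\mathfrak{g}\)-module
-  \(\operatorname{Hom}(U, S)\).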
-
-  We thus have a short exact sequence
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} &
-      \operatorname{Hom}_{\mathfrak{g}}(U, W) \arrow{r} &
-      \operatorname{Hom}_{\mathfrak{g}}(U, V) \arrow{r}{\pi \circ -} &
-      \operatorname{Hom}_{\mathfrak{g}}(U, U) \arrow{r} &
-      0
-    \end{tikzcd}
-  \end{center}
-
-  In particular, there is some intertwiner \(T : U \to V\) such that \(\pi
-  \circ T : U \to U\) is the identity operator. In other words
-  \begin{center}
-    \begin{tikzcd}
-      0 \arrow{r} &
-      W \arrow{r} &
-      V \arrow{r}{\pi} &
-      U \arrow{r} \arrow[bend left]{l}{T} &
-      0
-    \end{tikzcd}
-  \end{center}
-  is a splitting of (\ref{eq:generict-exact-sequence}).
-\end{proof}
-
-We should point out that these last results are just the beginning of a
-well-developed cohomology theory. For example, a similar argument involving the
-Casimir elements can be used to show that \(H^i(\mathfrak{g}, V) = 0\) for all
-non-trivial finite-dimensional irreducible \(V\), \(i > 0\). For \(K =
-\mathbb{C}\), the Lie algebra cohomology groups of an algebra \(\mathfrak{g} =
-\mathbb{C} \otimes \operatorname{Lie}(G)\) are intimately related to the
-topological cohomologies -- i.e. singular cohomology, de Rham cohomology, etc.
--- of \(G\) with coefficients in \(\mathbb{C}\). We refer the reader to
-\cite{cohomologies-lie} and \cite[sec.~24]{symplectic-physics} for further
-details.
-
-Complete reducibility can be generalized for arbitrary -- not necessarily
-semisimple -- \(\mathfrak{g}\), to a certain extent, by considering the exact
-sequence
-\begin{center}
-  \begin{tikzcd}
-    0 \arrow{r} &
-    \mathfrak{rad}(\mathfrak{g}) \arrow{r} &
-    \mathfrak{g} \arrow{r} &
-    \mfrac{\mathfrak{g}}{\mathfrak{rad}(\mathfrak{g})} \arrow{r} &
-    0
-  \end{tikzcd}
-\end{center}
-
-This sequence always splits, which implies we can deduce information about the
-representations of \(\mathfrak{g}\) by studying those of its ``semisimple
-part'' \(\mfrac{\mathfrak{g}}{\mathfrak{rad}(\mathfrak{g})}\) -- see
-proposition~\ref{thm:quotients-by-rads}. In practice this translates to\dots
-
-\begin{theorem}\label{thm:semi-simple-part-decomposition}
-  Every finite-dimensional irreducible representation of \(\mathfrak{g}\) is
-  the tensor product of an irreducible representation of its semisimple part
-  \(\mfrac{\mathfrak{g}}{\mathfrak{rad}(\mathfrak{g})}\) and a
-  one-dimensional representation of \(\mathfrak{g}\).
-\end{theorem}
-
-Having achieved our goal of proving complete reducibility, we can now afford
-the luxury of concerning ourselves exclusively with irreducible
-representations. Still, our efforts towards a classification of the
-finite-dimensional representations of semisimple Lie algebras are far from
-over. In particular, there is so far no indication on how we could go about
-understanding the irreducible \(\mathfrak{g}\)-modules. Once more, we begin by
-investigating a simple case: that of \(\mathfrak{sl}_2(K)\).
-
-\section{Representations of \(\mathfrak{sl}_2(K)\)}\label{sec:sl2}
-
-The primary goal of this section is proving\dots
-
-\begin{theorem}\label{thm:sl2-exist-unique}
-  For each \(n > 0\), there exists precisely one irreducible representation
-  \(V\) of \(\mathfrak{sl}_2(K)\) with \(\dim V = n\).
-\end{theorem}
-
-The general approach we'll take is supposing \(V\) is a finite-dimensional
-irreducible representation of \(\mathfrak{sl}_2(K)\) and then deriving some
-information about
-its structure. We begin our analysis by recalling that the elements
-\begin{align*}
-  e & = \begin{pmatrix} 0 & 1 \\ 0 &  0 \end{pmatrix} &
-  f & = \begin{pmatrix} 0 & 0 \\ 1 &  0 \end{pmatrix} &
-  h & = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
-\end{align*}
-form a basis of \(\mathfrak{sl}_2(K)\) and satisfy
-\begin{align*}
-  [e, f] & = h & [h, f] & = -2 f & [h, e] & = 2 e
-\end{align*}
-
-This is interesting to us because it implies every non-zero subspace of \(V\)
-invariant under the actions of \(e\), \(f\) and \(h\) has to be \(V\) itself.
-Next we
-turn our attention to the action of \(h\) in \(V\), in particular, to the
-eigenspace decomposition
-\[
-  V = \bigoplus_{\lambda} V_\lambda
-\]
-of \(V\) -- where \(\lambda\) ranges over the eigenvalues of \(h\) and
-\(V_\lambda\) is the corresponding eigenspace. At this point, this is nothing
-short of a gamble: why look at the eigenvalues of \(h\)?
-
-The short answer is that, as we shall see, this will pay off -- which
-conveniently justifies the epigraph of this chapter. For now we will postpone
-the discussion about the real reason of why we chose \(h\). Let \(\lambda\) be
-any eigenvalue of \(h\). Notice \(V_\lambda\) is in general not a
-subrepresentation of \(V\). Indeed, if \(v \in V_\lambda\) then
-\begin{align*}
-  h e v & =   2e v + e h v = (\lambda + 2) e v \\
-  h f v & = - 2f v + f h v = (\lambda - 2) f v
-\end{align*}
-
-In other words, \(e\) sends an element of \(V_\lambda\) to an element of
-\(V_{\lambda + 2}\), while \(f\) sends it to an element of \(V_{\lambda - 2}\).
-Hence
-\begin{center}
-  \begin{tikzcd}
-    \cdots \arrow[bend left=60]{r}
-    & V_{\lambda - 2} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}
-    & V_{\lambda} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
-    & V_{\lambda + 2} \arrow[bend left=60]{r} \arrow[bend left=60]{l}{f}
-    & \cdots \arrow[bend left=60]{l}
-  \end{tikzcd}
-\end{center}
-and \(\bigoplus_{n \in \ZZ} V_{\lambda + 2 n}\) is an
-\(\mathfrak{sl}_2(K)\)-invariant subspace. This implies
-\[
-  V = \bigoplus_{n \in \ZZ} V_{\lambda + 2 n},
-\]
-so that the eigenvalues of \(h\) all have the form \(\lambda + 2 n\) for some
-\(n\) -- since \(V_\mu = 0\) for all \(\mu \notin \lambda + 2 \ZZ\).
-
-Even more so, if \(a = \min \{ n \in \ZZ : V_{\lambda + 2 n} \ne 0 \}\) and
-\(b = \max \{ n \in \ZZ : V_{\lambda + 2 n} \ne 0 \}\) we can see that
-\[
-  \bigoplus_{\substack{n \in \ZZ \\ a \le n \le b}} V_{\lambda + 2 n}
-\]
-is also an \(\mathfrak{sl}_2(K)\)-invariant subspace, so that the eigenvalues
-of \(h\) form an unbroken string
-\[
-  \ldots, \lambda - 4, \lambda - 2, \lambda, \lambda + 2, \lambda + 4, \ldots
-\]
-around \(\lambda\).
-
-Our main objective is to show \(V\) is determined by this string of
-eigenvalues. To do so, we suppose without any loss of generality that
-\(\lambda\) is the right-most eigenvalue of \(h\), fix some non-zero \(v \in
-V_\lambda\) and consider the set \(\{v, f v, f^2 v, \ldots\}\).
-
-\begin{theorem}\label{thm:basis-of-irr-rep}
-  The set \(\{v, f v, f^2 v, \ldots\}\) is a basis for \(V\).
-\end{theorem}
-
-\begin{proof}
-  First of all, notice \(f^k v\) lies in \(V_{\lambda - 2 k}\), so that \(\{v,
-  f v, f^2 v, \ldots\}\) is a set of linearly independent vectors. Hence it
-  suffices to show \(V = K \langle v, f v, f^2 v, \ldots \rangle\), which in
-  light of the fact that \(V\) is irreducible is the same as showing \(K
-  \langle v, f v, f^2 v, \ldots \rangle\) is invariant under the action of
-  \(\mathfrak{sl}_2(K)\).
-
-  The fact that \(h f^k v \in K \langle v, f v, f^2 v, \ldots \rangle\) follows
-  immediately from our previous assertion that \(f^k v \in V_{\lambda - 2 k}\)
-  -- indeed, \(h f^k v = (\lambda - 2 k) f^k v\). Seeing \(e f^k v \in K
-  \langle v, f v, f^2 v, \ldots \rangle\) is a bit more complex. Clearly,
-  \[
-    \begin{split}
-      e f v
-      & = h v + f e v \\
-      \text{(since \(\lambda\) is the right-most eigenvalue)}
-      & = h v + f 0 \\
-      & = \lambda v
-    \end{split}
-  \]
-
-  Next we compute
-  \[
-    \begin{split}
-      e f^2 v
-      & = (h + fe) f v \\
-      & = h f v + f (\lambda v) \\
-      & = 2 (\lambda - 1) f v
-    \end{split}
-  \]
-
-  The pattern is starting to become clear: \(e\) sends \(f^k v\) to a multiple
-  of \(f^{k - 1} v\). Explicitly, it's not hard to check by induction that
-  \[
-    e f^k v = k (\lambda + 1 - k) f^{k - 1} v
-  \]
-\end{proof}
-
-\begin{note}
-  For this last formula to work we fix the convention that \(f^{-1} v = 0\) --
-  which is to say \(e v = 0\).
-\end{note}
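-
-For the skeptical reader, the inductive step behind the formula for \(e f^k
-v\) is the computation
-\[
-  \begin{split}
-    e f^{k + 1} v
-    & = [e, f] f^k v + f e f^k v \\
-    & = h f^k v + k (\lambda + 1 - k) f^k v \\
-    & = ((\lambda - 2 k) + k (\lambda + 1 - k)) f^k v \\
-    & = (k + 1)(\lambda - k) f^k v,
-  \end{split}
-\]
-which is precisely \((k + 1)(\lambda + 1 - (k + 1)) f^k v\).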
-
-Theorem~\ref{thm:basis-of-irr-rep} may seem unrelated to our problem at first,
-but its significance lies in the fact that we have just provided a complete
-description of the action of \(\mathfrak{sl}_2(K)\) in \(V\). In other
-words\dots
-
-\begin{corollary}
-  \(V\) is completely determined by the right-most eigenvalue \(\lambda\) of
-  \(h\).
-\end{corollary}
-
-\begin{proof}
-  If \(W\) is an irreducible representation of \(\mathfrak{sl}_2(K)\) whose
-  right-most eigenvalue of \(h\) is \(\lambda\) and \(w \in W_\lambda\) is
-  non-zero, consider the linear isomorphism
-  \begin{align*}
-    T : V     & \to     W      \\
-        f^k v & \mapsto f^k w
-  \end{align*}
-
-  We claim \(T\) is an intertwining operator. Indeed, the explicit calculations
-  of \(e f^k v\) and \(h f^k v\) from the previous proof imply
-  \begin{align*}
-    T e & = e T & T f & = f T & T h & = h T
-  \end{align*}
-\end{proof}
-
-Other important consequences of theorem~\ref{thm:basis-of-irr-rep} are\dots
-
-\begin{corollary}
-  Every \(h\) eigenspace is one-dimensional.
-\end{corollary}
-
-\begin{proof}
-  It suffices to note \(\{v, f v, f^2 v, \ldots \}\) is a basis for \(V\)
-  consisting of eigenvectors of \(h\) and whose only element in \(V_{\lambda - 2
-  k}\) is \(f^k v\).
-\end{proof}
-
-\begin{corollary}
-  The eigenvalues of \(h\) in \(V\) form a symmetric, unbroken string of
-  integers separated by intervals of length \(2\) whose right-most value is
-  \(\dim V - 1\).
-\end{corollary}
-
-\begin{proof}
-  If \(f^m\) is the lowest power of \(f\) that annihilates \(v\), it follows
-  from the formula for \(e f^k v\) obtained in the proof of
-  theorem~\ref{thm:basis-of-irr-rep} that
-  \[
-    0 = e 0 = e f^m v = m (\lambda + 1 - m) f^{m - 1} v
-  \]
-
-  This implies \(\lambda + 1 - m = 0\) -- i.e. \(\lambda = m - 1 \in \ZZ\). Now
-  since \(\{v, f v, f^2 v, \ldots, f^{m - 1} v\}\) is a basis for \(V\), \(m =
-  \dim V\). Hence if \(n = \lambda = \dim V - 1\) then the eigenvalues of \(h\)
-  are
-  \[
-    \ldots, n - 6, n - 4, n - 2, n
-  \]
-
-  To see that this string is symmetric around \(0\), simply note that the
-  left-most eigenvalue of \(h\) is precisely \(n - 2 (m - 1) = -n\).
-\end{proof}
-
-We now know every irreducible representation \(V\) of \(\mathfrak{sl}_2(K)\)
-has the form
-\begin{center}
-  \begin{tikzcd}
-    \cdots \arrow[bend left=60]{r}
-    & V_{n - 6} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}
-    & V_{n - 4} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
-    & V_{n - 2} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
-    & V_n \arrow[bend left=60]{l}{f}
-  \end{tikzcd}
-\end{center}
-where \(V_{n - 2 k}\) is the one-dimensional eigenspace of \(h\) associated to
-\(n - 2 k\) and \(n = \dim V - 1\). Even more so, we explicitly know
-\[
-  V = \bigoplus_{k = 0}^n K f^k v
-\]
-and
-\begin{equation}\label{eq:irr-rep-of-sl2}
-  \begin{aligned}
-      f^k v & \overset{e}{\mapsto} k(n + 1 - k) f^{k - 1} v
-    & f^k v & \overset{f}{\mapsto} f^{k + 1} v
-    & f^k v & \overset{h}{\mapsto} (n - 2 k) f^k v
-  \end{aligned}
-\end{equation}
-
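-For instance, for \(n = 2\) the formulas in (\ref{eq:irr-rep-of-sl2}) produce,
-in the basis \(\{v, f v, f^2 v\}\), the matrices
-\begin{align*}
-  e & \mapsto \begin{pmatrix} 0 & 2 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix} &
-  f & \mapsto \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} &
-  h & \mapsto \begin{pmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{pmatrix}
-\end{align*}
-and one can check directly that these satisfy the commutator relations of
-\(\mathfrak{sl}_2(K)\) -- indeed, this \(V\) is isomorphic to the adjoint
-representation of \(\mathfrak{sl}_2(K)\).
-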
-To conclude our analysis, all that's left is to show that for each \(n\) such
-\(V\) does indeed exist and is irreducible. In other words\dots
-
-\begin{theorem}\label{thm:irr-rep-of-sl2-exists}
-  For each \(n \ge 0\) there exists a (unique) irreducible representation of
-  \(\mathfrak{sl}_2(K)\) whose right-most eigenvalue of \(h\) is \(n\).
-\end{theorem}
-
-\begin{proof}
-  The fact that the representation \(V\) from the previous discussion exists is
-  clear from the commutator relations of \(\mathfrak{sl}_2(K)\) -- just look at
-  \(f^k v\) as abstract symbols and impose the action given by
-  (\ref{eq:irr-rep-of-sl2}). Alternatively, one can readily check that if
-  \(K^2\) is the natural representation of \(\mathfrak{sl}_2(K)\), then \(V =
-  \operatorname{Sym}^n K^2\) satisfies the relations of
-  (\ref{eq:irr-rep-of-sl2}). To see that \(V\) is irreducible let \(W\) be a
-  non-zero subrepresentation and take some non-zero \(w \in W\). Suppose \(w =
-  \alpha_0 v + \alpha_1 f v + \cdots + \alpha_n f^n v\) and let \(k\) be the
-  lowest index such that \(\alpha_k \ne 0\), so that
-  \[
-    w = \alpha_k f^k v + \cdots + \alpha_n f^n v
-  \]
-
-  Now given that \(f^{n + 1}\) annihilates \(v\),
-  \[
-    f w = \alpha_k f^{k + 1} v + \cdots + \alpha_{n - 1} f^n v
-  \]
-
-  Proceeding inductively we arrive at \(f^{n - k} w = \alpha_k f^n v\), so
-  that \(f^n v \in W\). Hence \(e^i f^n v = \prod_{k = 1}^i k(n + 1 - k) f^{n -
-  i} v \in W\) for all \(i = 1, 2, \ldots, n\). Since \(k \ne 0 \ne n + 1 - k\)
-  for all \(k\) in this range, we can see that \(f^k v \in W\) for all \(k = 0,
-  1, \ldots, n\). In other words, \(W = V\). We are done.
-\end{proof}
-
-Our initial gamble of studying the eigenvalues of \(h\) may have seemed
-arbitrary at first, but it paid off: we've \emph{completely} described
-\emph{all} irreducible representations of \(\mathfrak{sl}_2(K)\). It is not yet
-clear, however, if any of this can be adapted to a general setting. In the
-following section we shall double down on our gamble by trying to reproduce
-some of the results of this section for \(\mathfrak{sl}_3(K)\), hoping this
-will \emph{somehow} lead us to a general solution. In the process of doing so
-we'll learn a bit more about why \(h\) was a sure bet and the race was fixed all
-along.
-
-\section{Representations of \(\mathfrak{sl}_3(K)\)}\label{sec:sl3-reps}
-
-The study of representations of \(\mathfrak{sl}_2(K)\) reminds me of the
-difference between the derivative of a function \(\RR \to \RR\) and that of a smooth
-map between manifolds: it's a simpler case of something greater, but in some
-sense it's too simple of a case, and the intuition we acquire from it can be a
-bit misleading in regards to the general setting. For instance I distinctly
-remember my Calculus I teacher telling the class ``the derivative of the
-composition of two functions is not the composition of their derivatives'' --
-which is, of course, the \emph{correct} formulation of the chain rule in the
-context of smooth manifolds.
-
-The same applies to \(\mathfrak{sl}_2(K)\). It's a simple and beautiful
-example, but unfortunately the general picture -- representations of arbitrary
-semisimple algebras -- lacks its simplicity, and, of course, much of this
-complexity is hidden in the case of \(\mathfrak{sl}_2(K)\).  The general
-purpose of this section is to investigate to which extent the framework used in
-the previous section to classify the representations of \(\mathfrak{sl}_2(K)\)
-can be generalized to other semisimple Lie algebras, and the algebra
-\(\mathfrak{sl}_3(K)\) stands as a natural candidate for potential
-generalizations: \(3 = 2 + 1\) after all.
-
-Our approach is very straightforward: we'll fix some irreducible representation
-\(V\) of \(\mathfrak{sl}_3(K)\) and proceed step by step, at each point asking
-ourselves how we could possibly adapt the framework we laid out for
-\(\mathfrak{sl}_2(K)\). The first obvious question is one we have already asked
-ourselves: why \(h\)?  More specifically, why did we choose to study its
-eigenvalues and is there an analogue of \(h\) in \(\mathfrak{sl}_3(K)\)?
-
-The answer to the former question is one we'll discuss at length in the next
-chapter, but for now we note that perhaps the most fundamental property of
-\(h\) is that \emph{there exists an eigenvector \(v\) of \(h\) that is
-annihilated by \(e\)} -- that being the generator of the right-most eigenspace
-of \(h\). This was instrumental to our explicit description of the irreducible
-representations of \(\mathfrak{sl}_2(K)\) culminating in
-theorem~\ref{thm:irr-rep-of-sl2-exists}.
-
-Our first task is to find some analogue of \(h\) in \(\mathfrak{sl}_3(K)\), but
-it's still unclear what exactly we are looking for. We could say we're looking
-for an element of \(V\) that is annihilated by some analogue of \(e\), but the
-meaning of \emph{some analogue of \(e\)} is again unclear. In fact, as we shall
-see, no such analogue exists and neither does such an element. Instead, the actual
-way to proceed is to consider the subalgebra
-\[
-  \mathfrak{h}
-  = \left\{
-    X \in
-    \begin{pmatrix} K & 0 & 0 \\ 0 & K & 0 \\ 0 & 0 & K \end{pmatrix}
-    : \operatorname{Tr}(X) = 0
-    \right\}
-\]
-
-The choice of \(\mathfrak{h}\) may seem odd at the moment, but
-the point is we'll later show that there exists some \(v \in V\) that is
-simultaneously an eigenvector of each \(H \in \mathfrak{h}\) and annihilated by
-half of the remaining elements of \(\mathfrak{sl}_3(K)\). This is exactly
-analogous to the situation we found in \(\mathfrak{sl}_2(K)\): \(h\)
-corresponds to the subalgebra \(\mathfrak{h}\), and the eigenvalues of \(h\) in
-turn correspond to linear functionals \(\lambda : \mathfrak{h} \to K\) such that
-\(H v = \lambda(H) \cdot v\) for each \(H \in \mathfrak{h}\) and some non-zero
-\(v \in V\). We call such functionals \(\lambda\) \emph{eigenvalues of
-\(\mathfrak{h}\)}, and we say \emph{\(v\) is an eigenvector of
-\(\mathfrak{h}\)}.
-
-Once again, we'll pay special attention to the eigenvalue decomposition
-\begin{equation}\label{eq:weight-module}
-  V = \bigoplus_\lambda V_\lambda
-\end{equation}
-where \(\lambda\) ranges over all eigenvalues of \(\mathfrak{h}\) and
-\(V_\lambda = \{ v \in V : H v = \lambda(H) \cdot v, \forall H \in \mathfrak{h}
-\}\). We should note that the fact that (\ref{eq:weight-module}) holds is not
-at all obvious. This is because in general \(V_\lambda\) is not the eigenspace
-associated with an eigenvalue of any particular operator \(H \in
-\mathfrak{h}\), but instead the eigenspace of the action of the entire algebra
-\(\mathfrak{h}\). Fortunately for us, (\ref{eq:weight-module}) always holds,
-but we will postpone its proof to the next section.
-
-Next we turn our attention to the remaining elements of \(\mathfrak{sl}_3(K)\).
-In our analysis of \(\mathfrak{sl}_2(K)\) we saw that the eigenvalues of \(h\)
-differed from one another by multiples of \(2\). A possible way to interpret
-this is to say \emph{the eigenvalues of \(h\) differ from one another by
-integral linear combinations of the eigenvalues of the adjoint action of
-\(h\)}. In English, the eigenvalues of the adjoint action of \(h\) are
-\(\pm 2\) since
-\begin{align*}
-  [h, f] & = -2 f &
-  [h, e] & = 2 e
-\end{align*}
-and the eigenvalues of the action of \(h\) in an irreducible
-\(\mathfrak{sl}_2(K)\)-representation differ from one another by multiples of
-\(\pm 2\).
-
-In the case of \(\mathfrak{sl}_3(K)\), a simple calculation shows that if \(X
-\notin \mathfrak{h}\) and \([H, X]\) is a scalar multiple of \(X\) for all \(H
-\in \mathfrak{h}\) then all but one entry of \(X\) are zero. Hence the
-eigenvectors of the adjoint action of \(\mathfrak{h}\) lying outside of
-\(\mathfrak{h}\) are the \(E_{i j}\) with \(i \ne j\), and the corresponding
-eigenvalues are \(\alpha_i - \alpha_j\), where
-\[
-  \alpha_i
-  \begin{pmatrix}
-    a_1 &   0 &   0 \\
-      0 & a_2 &   0 \\
-      0 &   0 & a_3
-  \end{pmatrix}
-  = a_i
-\]
-
-Visually we may draw
-
-\begin{figure}[h]
-  \centering
-  \begin{tikzpicture}[scale=2.5]
-    \begin{rootSystem}{A}
-      \filldraw[black] \weight{0}{0} circle (.5pt);
-      \node[black, above right] at \weight{0}{0} {\small$0$};
-      \wt[black]{-1}{2}
-      \wt[black]{-2}{1}
-      \wt[black]{1}{1}
-      \wt[black]{-1}{-1}
-      \wt[black]{2}{-1}
-      \wt[black]{1}{-2}
-      \node[above] at \weight{-1}{2}  {$\alpha_2 - \alpha_3$};
-      \node[left]  at \weight{-2}{1}  {$\alpha_2 - \alpha_1$};
-      \node[right] at \weight{1}{1}   {$\alpha_1 - \alpha_3$};
-      \node[left]  at \weight{-1}{-1} {$\alpha_3 - \alpha_1$};
-      \node[right] at \weight{2}{-1}  {$\alpha_1 - \alpha_2$};
-      \node[below] at \weight{1}{-2}  {$\alpha_3 - \alpha_2$};
-      \node[black, above] at \weight{1}{0}  {$\alpha_1$};
-      \node[black, above] at \weight{-1}{1} {$\alpha_2$};
-      \node[black, above] at \weight{0}{-1} {$\alpha_3$};
-      \filldraw[black] \weight{1}{0}  circle (.5pt);
-      \filldraw[black] \weight{-1}{1} circle (.5pt);
-      \filldraw[black] \weight{0}{-1} circle (.5pt);
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{figure}
-
-If we denote the eigenspace of the adjoint action of \(\mathfrak{h}\) in
-\(\mathfrak{sl}_3(K)\) associated to \(\alpha\) by
-\(\mathfrak{sl}_3(K)_\alpha\) and fix some \(X \in \mathfrak{sl}_3(K)_\alpha\),
-\(H \in \mathfrak{h}\) and \(v \in V_\lambda\) then
-\[
-  \begin{split}
-    H (X v)
-    & = X (H v) + [H, X] v \\
-    & = X (\lambda(H) \cdot v) + (\alpha(H) \cdot X) v \\
-    & = (\alpha + \lambda)(H) \cdot X v
-  \end{split}
-\]
-so that \(X\) carries \(v\) to \(V_{\alpha + \lambda}\). In other words,
-\(\mathfrak{sl}_3(K)_\alpha\) \emph{acts on \(V\) by translating vectors
-between eigenspaces}.
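-Concretely, for \(H = \operatorname{diag}(a_1, a_2, a_3) \in \mathfrak{h}\) one
-computes \([H, E_{1 3}] = (a_1 - a_3) E_{1 3} = (\alpha_1 - \alpha_3)(H) \cdot
-E_{1 3}\), so that \(E_{1 3}\) spans \(\mathfrak{sl}_3(K)_{\alpha_1 -
-\alpha_3}\) and carries \(V_\lambda\) into \(V_{\lambda + \alpha_1 -
-\alpha_3}\).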
-
-For instance \(\mathfrak{sl}_3(K)_{\alpha_1 - \alpha_3}\) will act on the
-adjoint representation of \(\mathfrak{sl}_3(K)\) via
-\begin{figure}[h]
-  \centering
-  \begin{tikzpicture}[scale=2.5]
-    \begin{rootSystem}{A}
-      \wt[black]{0}{0}
-      \wt[black]{-1}{2}
-      \wt[black]{-2}{1}
-      \wt[black]{1}{1}
-      \wt[black]{-1}{-1}
-      \wt[black]{2}{-1}
-      \wt[black]{1}{-2}
-      \draw[-latex, black] \weight{-1.9}{1.1} -- \weight{-1.1}{1.9};
-      \draw[-latex, black] \weight{-.9}{-.9} -- \weight{-.1}{-.1};
-      \draw[-latex, black] \weight{0.1}{0.1} -- \weight{.9}{.9};
-      \draw[-latex, black] \weight{1.1}{-1.9} -- \weight{1.9}{-1.1};
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{figure}
-
-This is again entirely analogous to the situation we observed in
-\(\mathfrak{sl}_2(K)\). In fact, we may once more conclude\dots
-
-\begin{theorem}\label{thm:sl3-weights-congruent-mod-root}
-  The eigenvalues of the action of \(\mathfrak{h}\) in an irreducible
-  \(\mathfrak{sl}_3(K)\)-representation \(V\) differ from one another by
-  integral linear combinations of the eigenvalues \(\alpha_i - \alpha_j\) of
-  adjoint action of \(\mathfrak{h}\) in \(\mathfrak{sl}_3(K)\).
-\end{theorem}
-
-\begin{proof}
-  This proof goes exactly as that of the analogous statement for
-  \(\mathfrak{sl}_2(K)\): it suffices to note that if we fix some eigenvalue
-  \(\lambda\) of \(\mathfrak{h}\) then
-  \[
-    \bigoplus_\mu V_\mu,
-  \]
-  where \(\mu\) ranges over the functionals of the form \(\lambda + \sum_{i j}
-  k_{i j} (\alpha_i - \alpha_j)\) with \(k_{i j} \in \ZZ\), is a non-zero
-  invariant subspace of \(V\).
-\end{proof}
-
-To avoid confusion we better introduce some notation to differentiate between
-eigenvalues of the action of \(\mathfrak{h}\) in \(V\) and eigenvalues of the
-adjoint action of \(\mathfrak{h}\).
-
-\begin{definition}
-  Given a representation \(V\) of \(\mathfrak{sl}_3(K)\), we'll call the
-  eigenvalues of the action of \(\mathfrak{h}\) in \(V\) \emph{weights
-  of \(V\)}. As you might have guessed, we'll correspondingly refer to
-  eigenvectors and eigenspaces of a given weight by \emph{weight vectors} and
-  \emph{weight spaces}.
-\end{definition}
-
-It's clear from our previous discussion that the weights of the adjoint
-representation of \(\mathfrak{sl}_3(K)\) deserve some special attention.
-
-\begin{definition}
-  The non-zero weights of the adjoint representation of \(\mathfrak{sl}_3(K)\)
-  are called \emph{roots of \(\mathfrak{sl}_3(K)\)}. Once again, the expressions
-  \emph{root vector} and \emph{root space} are self-explanatory.
-\end{definition}
-
-Theorem~\ref{thm:sl3-weights-congruent-mod-root} can thus be restated as\dots
-
-\begin{corollary}
-  The weights of an irreducible representation \(V\) of \(\mathfrak{sl}_3(K)\)
-  are all congruent module the lattice \(Q\) generated by the roots \(\alpha_i
-  - \alpha_j\) of \(\mathfrak{sl}_3(K)\).
-\end{corollary}
-
-\begin{definition}
-  The lattice \(Q = \ZZ \langle \alpha_i - \alpha_j : i, j = 1, 2, 3 \rangle\)
-  is called \emph{the root lattice of \(\mathfrak{sl}_3(K)\)}.
-\end{definition}
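-
-Notice that \(Q\) is already generated by \(\alpha_1 - \alpha_2\) and
-\(\alpha_2 - \alpha_3\) alone, since \(\alpha_1 - \alpha_3 =
-(\alpha_1 - \alpha_2) + (\alpha_2 - \alpha_3)\) and the remaining roots are
-the negatives of these three.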
-
-To proceed we once more refer to the previously established framework: next we
-saw that the eigenvalues of \(h\) formed an unbroken string of integers
-symmetric around \(0\). To prove this we analyzed the right-most eigenvalue of
-\(h\) and its eigenvector, providing an explicit description of the irreducible
-representation of \(\mathfrak{sl}_2(K)\) in terms of this vector. We may
-reproduce these steps in the context of \(\mathfrak{sl}_3(K)\) by fixing a
-direction in the plane and considering the weight lying the furthest in that
-direction. For instance, let's say we fix the direction
-\begin{center}
-  \begin{tikzpicture}[scale=2.5]
-    \begin{rootSystem}{A}
-      \wt[black]{0}{0}
-      \wt[black]{-1}{2}
-      \wt[black]{-2}{1}
-      \wt[black]{1}{1}
-      \wt[black]{-1}{-1}
-      \wt[black]{2}{-1}
-      \wt[black]{1}{-2}
-      \draw[-latex, black, thick] \weight{-1.5}{-.5} -- \weight{1.5}{.5};
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{center}
-and let \(\lambda\) be the weight lying the furthest in this direction.
-
-It's easy to see what we mean intuitively by looking at the previous picture,
-but its precise meaning is still elusive. Formally this means we'll choose a
-linear functional \(f : \mathfrak{h}^* \to \QQ\) and pick the weight that
-maximizes \(f\). To avoid any ambiguity we should choose the direction of a
-line irrational with respect to the root lattice \(Q\). For instance if we
-choose the direction of \(\alpha_1 - \alpha_3\) and let \(f\) be the rational
-projection \(Q \to \QQ \langle \alpha_1 - \alpha_3 \rangle \cong \QQ\) then
-\(\alpha_1 - 2 \alpha_2 + \alpha_3 \in Q\) lies in \(\ker f\), so that if a
-weight \(\lambda\) maximizes \(f\) then the translation of \(\lambda\) by any
-multiple of \(\alpha_1 - 2 \alpha_2 + \alpha_3\) must also do so. In other
-words, if the direction we choose is parallel to a vector lying in \(Q\) then
-there may be multiple choices of the ``weight lying the furthest'' along this
-direction.
-
-\begin{definition}
-  We say that a root \(\alpha\) is positive if \(f(\alpha) > 0\) -- i.e. if it
-  lies to the right of the direction we chose. Otherwise we say \(\alpha\) is
-  negative. Notice that \(f(\alpha) \ne 0\) since by definition \(\alpha \ne
-  0\) and \(f\) is irrational with respect to the lattice \(Q\).
-\end{definition}
-
-The first observation we make is that all other weights of \(V\) must lie in a
-sort of \(\frac{1}{3}\)-plane with corners at \(\lambda\), as shown in
-\begin{center}
-  \begin{tikzpicture}
-    \AutoSizeWeightLatticefalse
-    \begin{rootSystem}{A}
-      \weightLattice{3}
-      \fill[gray!50,opacity=.2] (hex cs:x=5,y=-7) -- (hex cs:x=1,y=1) --
-      (hex cs:x=-7,y=5) arc (150:270:{7*\weightLength});
-      \draw[black, thick] (hex cs:x=5,y=-7) -- (hex cs:x=1,y=1) --
-      (hex cs:x=-7,y=5);
-      \filldraw[black] (hex cs:x=1,y=1) circle (1pt);
-      \node[above right=-2pt] at (hex cs:x=1,y=1) {\small\(\lambda\)};
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{center}
-
-Indeed, if this is not the case then, by definition, \(\lambda\) is not the
-furthest weight along the line we chose. Given our previous assertion that the
-root spaces of \(\mathfrak{sl}_3(K)\) act on the weight spaces of \(V\) via
-translation, this implies that \(E_{1 2}\), \(E_{1 3}\) and \(E_{2 3}\) all
-annihilate \(V_\lambda\): otherwise one of \(V_{\lambda + \alpha_1 -
-\alpha_2}\), \(V_{\lambda + \alpha_1 - \alpha_3}\) and \(V_{\lambda + \alpha_2
-- \alpha_3}\) would be non-zero, contradicting the hypothesis that
-\(\lambda\) lies the furthest along the direction we chose. In other words\dots
-
-\begin{theorem}
-  There is a weight vector \(v \in V\) that is killed by all positive root
-  spaces of \(\mathfrak{sl}_3(K)\).
-\end{theorem}
-
-\begin{proof}
-  It suffices to note that the positive roots of \(\mathfrak{sl}_3(K)\) are
-  precisely \(\alpha_1 - \alpha_2\), \(\alpha_1 - \alpha_3\) and \(\alpha_2 -
-  \alpha_3\).
-\end{proof}
-
-We call \(\lambda\) \emph{the highest weight of \(V\)}, and we call any \(v \in
-V_\lambda\) \emph{a highest weight vector}. Going back to the case of
-\(\mathfrak{sl}_2(K)\), we then constructed an explicit basis of our
-irreducible representations in terms of a highest weight vector, which allowed
-us to provide an explicit description of the action of \(\mathfrak{sl}_2(K)\)
-in terms of its standard basis and finally we concluded that the eigenvalues of
-\(h\) must be symmetrical around \(0\). An analogous procedure could be
-implemented for \(\mathfrak{sl}_3(K)\) -- and indeed that's what we'll do later
-down the line -- but instead we would like to focus on the problem of finding
-the weights of \(V\) for the moment.
-
-We'll start out by trying to understand the weights in the boundary of the
-\(\frac{1}{3}\)-plane previously drawn. Since the root spaces act by
-translation, successive applications of \(E_{2 1}\) to \(V_\lambda\) all land
-in the subspace
-\[
-  W = \bigoplus_k V_{\lambda + k (\alpha_2 - \alpha_1)},
-\]
-which by the same token is invariant under the action of \(E_{1 2}\).
-
-To draw a familiar picture
-\begin{center}
-  \begin{tikzpicture}
-    \begin{rootSystem}{A}
-      \node at \weight{3}{1} (a) {};
-      \node at \weight{1}{2} (b) {};
-      \node at \weight{-1}{3} (c) {};
-      \node at \weight{-3}{4} (d) {};
-      \node at \weight{-5}{5} (e) {};
-      \draw \weight{3}{1} -- \weight{-4}{4.5};
-      \draw[dotted] \weight{-4}{4.5} -- \weight{-5}{5};
-      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
-      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
-      \draw[-latex] (a) to[bend left=40] (b);
-      \draw[-latex] (b) to[bend left=40] (c);
-      \draw[-latex] (c) to[bend left=40] (d);
-      \draw[-latex] (d) to[bend left=40] (e);
-      \draw[-latex] (e) to[bend left=40] (d);
-      \draw[-latex] (d) to[bend left=40] (c);
-      \draw[-latex] (c) to[bend left=40] (b);
-      \draw[-latex] (b) to[bend left=40] (a);
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{center}
-
-What's remarkable about all this is the fact that the subalgebra spanned by
-\(E_{1 2}\), \(E_{2 1}\) and \(H = [E_{1 2}, E_{2 1}]\) is isomorphic to
-\(\mathfrak{sl}_2(K)\) via
-\begin{align*}
-  E_{1 2} & \mapsto e &
-  E_{2 1} & \mapsto f &
-        H & \mapsto h
-\end{align*}
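-Indeed, a direct computation gives
-\[
-  [E_{1 2}, E_{2 1}] = H, \quad [H, E_{1 2}] = 2 E_{1 2}, \quad
-  [H, E_{2 1}] = -2 E_{2 1},
-\]
-matching the commutator relations of \(e\), \(f\) and \(h\) from the previous
-section.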
-
-In other words, \(W\) is a representation of \(\mathfrak{sl}_2(K)\). Even more
-so, we claim
-\[
-  V_{\lambda + k (\alpha_2 - \alpha_1)} = W_{\lambda(H) - 2k}
-\]
-
-Indeed, \(V_{\lambda + k (\alpha_2 - \alpha_1)} \subset W_{\lambda(H) - 2k}\)
-since \((\lambda + k (\alpha_2 - \alpha_1))(H) = \lambda(H) + k (-1 - 1) =
-\lambda(H) - 2 k\). On the other hand, if we suppose \(\dim V_{\lambda + k
-(\alpha_2 - \alpha_1)} < \dim W_{\lambda(H) - 2 k}\) for some \(k\) we arrive
-at
-\[
-  \dim W
-  = \sum_k \dim V_{\lambda + k (\alpha_2 - \alpha_1)}
-  < \sum_k \dim W_{\lambda(H) - 2k}
-  = \dim W,
-\]
-a contradiction.
-
-There are a number of important consequences to this, the first being that
-the weights of \(V\) appearing in \(W\) must be symmetric with respect to
-the line \(B(\alpha_1 - \alpha_2, \alpha) = 0\). The picture is
-thus
-\begin{center}
-  \begin{tikzpicture}
-    \AutoSizeWeightLatticefalse
-    \begin{rootSystem}{A}
-      \setlength{\weightRadius}{2pt}
-      \weightLattice{4}
-      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
-      \wt[black]{0}{0}
-      \node[above left] at \weight{0}{0} {\small\(0\)};
-      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
-      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
-      \draw[very thick] \weight{0}{-4} -- \weight{0}{4}
-      node[above]{\small\(B(\alpha_1 - \alpha_2, \alpha) = 0\)};
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{center}
-
-Notice we could apply this same argument to the subspace \(\bigoplus_k
-V_{\lambda + k (\alpha_3 - \alpha_2)}\): this subspace is invariant under the
-action of the subalgebra spanned by \(E_{2 3}\), \(E_{3 2}\) and \([E_{2 3},
-E_{3 2}]\), which is again isomorphic to \(\mathfrak{sl}_2(K)\), so that the
-weights in this subspace must be symmetric with respect to the line
-\(B(\alpha_3 - \alpha_2, \alpha) = 0\). The picture is now
-\begin{center}
-  \begin{tikzpicture}
-    \AutoSizeWeightLatticefalse
-    \begin{rootSystem}{A}
-      \setlength{\weightRadius}{2pt}
-      \weightLattice{4}
-      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
-      \draw[thick] \weight{3}{1} -- \weight{4}{-1};
-      \wt[black]{0}{0}
-      \wt[black]{4}{-1}
-      \node[above left] at \weight{0}{0} {\small\(0\)};
-      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
-      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
-      \draw[very thick] \weight{0}{-4} -- \weight{0}{4}
-      node[above]{\small\(B(\alpha_1 - \alpha_2, \alpha) = 0\)};
-      \draw[very thick] \weight{-4}{0} -- \weight{4}{0}
-      node[right]{\small\(B(\alpha_3 - \alpha_2, \alpha) = 0\)};
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{center}
-
-In general, given a weight \(\mu\), the space
-\[
-  \bigoplus_k V_{\mu + k (\alpha_i - \alpha_j)}
-\]
-is invariant under the action of the subalgebra \(\mathfrak{s}_{\alpha_i -
-\alpha_j} = K \langle E_{i j}, E_{j i}, [E_{i j}, E_{j i}] \rangle\), which is
-once more isomorphic to \(\mathfrak{sl}_2(K)\), and again the weight spaces in
-this string match precisely the eigenspaces of \(h\). Needless to say, we could
-keep applying this method to the weights at the ends of our string, arriving at
-\begin{center}
-  \begin{tikzpicture}
-    \AutoSizeWeightLatticefalse
-    \begin{rootSystem}{A}
-      \setlength{\weightRadius}{2pt}
-      \weightLattice{5}
-      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
-      \draw[thick] \weight{3}{1} -- \weight{4}{-1};
-      \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
-      \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
-      \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
-      \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
-      \wt[black]{-4}{3}
-      \wt[black]{-3}{1}
-      \wt[black]{-2}{-1}
-      \wt[black]{-1}{-3}
-      \wt[black]{1}{-4}
-      \wt[black]{2}{-3}
-      \wt[black]{3}{-2}
-      \wt[black]{4}{-1}
-      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
-      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
-      \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
-      \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
-      \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{center}
-
-We claim all dots \(\mu\) lying inside the hexagon we've drawn must also be
-weights -- i.e. \(V_\mu \ne 0\). Indeed, by applying the same argument to an
-arbitrary weight \(\nu\) on the boundary of the hexagon we get a representation
-of \(\mathfrak{sl}_2(K)\) whose weights correspond to weights of \(V\) lying in
-a string inside the hexagon, and whose right-most weight is precisely the
-weight of \(V\) we started with.
-\begin{center}
-  \begin{tikzpicture}
-    \AutoSizeWeightLatticefalse
-    \begin{rootSystem}{A}
-      \setlength{\weightRadius}{2pt}
-      \weightLattice{5}
-      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
-      \draw[thick] \weight{3}{1} -- \weight{4}{-1};
-      \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
-      \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
-      \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
-      \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
-      \wt[black]{-4}{3}
-      \wt[black]{-3}{1}
-      \wt[black]{-2}{-1}
-      \wt[black]{-1}{-3}
-      \wt[black]{1}{-4}
-      \wt[black]{2}{-3}
-      \wt[black]{3}{-2}
-      \wt[black]{4}{-1}
-      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
-      \node[above right=-2pt] at \weight{1}{2} {\small\(\nu\)};
-      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
-      \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
-      \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
-      \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
-      \draw[gray, thick] \weight{1}{2} -- \weight{-2}{-1};
-      \wt[black]{1}{2}
-      \wt[black]{-2}{-1}
-      \wt{0}{1}
-      \wt{-1}{0}
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{center}
-
-By construction, \(\nu\) corresponds to the right-most weight of the
-representation of \(\mathfrak{sl}_2(K)\), so that all dots lying on the gray
-string must occur in the representation of \(\mathfrak{sl}_2(K)\). Hence they
-must also be weights of \(V\). The final picture is thus
-\begin{center}
-  \begin{tikzpicture}
-    \AutoSizeWeightLatticefalse
-    \begin{rootSystem}{A}
-      \setlength{\weightRadius}{2pt}
-      \weightLattice{5}
-      \draw[thick] \weight{3}{1} -- \weight{-3}{4};
-      \draw[thick] \weight{3}{1} -- \weight{4}{-1};
-      \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
-      \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
-      \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
-      \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
-      \wt[black]{-4}{3}
-      \wt[black]{-3}{1}
-      \wt[black]{-2}{-1}
-      \wt[black]{-1}{-3}
-      \wt[black]{1}{-4}
-      \wt[black]{2}{-3}
-      \wt[black]{3}{-2}
-      \wt[black]{4}{-1}
-      \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
-      \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
-      \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
-      \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
-      \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
-      \wt[black]{-2}{2}
-      \wt[black]{0}{1}
-      \wt[black]{-1}{0}
-      \wt[black]{0}{-2}
-      \wt[black]{1}{-1}
-      \wt[black]{2}{0}
-    \end{rootSystem}
-  \end{tikzpicture}
-\end{center}
-
-Another important consequence of our analysis is the fact that \(\lambda\) lies
-in the lattice \(P\) generated by \(\alpha_1\), \(\alpha_2\) and \(\alpha_3\).
-Indeed, \(\lambda([E_{i j}, E_{j i}])\) is an eigenvalue of \(h\) in a
-representation of \(\mathfrak{sl}_2(K)\), so it must be an integer. Now since
-\[
-  \lambda
-  \begin{pmatrix}
-    a & 0 & 0     \\
-    0 & b & 0     \\
-    0 & 0 & -a -b
-  \end{pmatrix}
-  =
-  \lambda
-  \begin{pmatrix}
-    a & 0 & 0  \\
-    0 & 0 & 0  \\
-    0 & 0 & -a
-  \end{pmatrix}
-  +
-  \lambda
-  \begin{pmatrix}
-    0 & 0 & 0  \\
-    0 & b & 0  \\
-    0 & 0 & -b
-  \end{pmatrix}
-  =
-  a \lambda([E_{1 3}, E_{3 1}]) + b \lambda([E_{2 3}, E_{3 2}]),
-\]
-which is to say \(\lambda = \lambda([E_{1 3}, E_{3 1}]) \alpha_1 +
-\lambda([E_{2 3}, E_{3 2}]) \alpha_2\), we can see that \(\lambda \in
-P\).
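-
-For instance, taking \(\lambda = \alpha_1\) we find
-\(\alpha_1([E_{1 3}, E_{3 1}]) = \alpha_1(E_{1 1} - E_{3 3}) = 1\) and
-\(\alpha_1([E_{2 3}, E_{3 2}]) = \alpha_1(E_{2 2} - E_{3 3}) = 0\), so that the
-formula above indeed recovers \(\alpha_1 = 1 \cdot \alpha_1 + 0 \cdot \alpha_2\).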
-
-\begin{definition}
-  The lattice \(P = \ZZ \alpha_1 + \ZZ \alpha_2 + \ZZ \alpha_3\) is
-  called \emph{the weight lattice of \(\mathfrak{sl}_3(K)\)}.
-\end{definition}
-
-Finally\dots
-
-\begin{theorem}\label{thm:sl3-irr-weights-class}
-  The weights of \(V\) are precisely the elements of the weight lattice \(P\)
-  congruent to \(\lambda\) modulo the sublattice \(Q\) and lying inside the
-  hexagon whose vertices are the images of \(\lambda\) under the group generated by
-  reflections across the lines \(B(\alpha_i - \alpha_j, \alpha) = 0\).
-\end{theorem}
-
-Once more there's a clear parallel between the case of \(\mathfrak{sl}_3(K)\)
-and that of \(\mathfrak{sl}_2(K)\), where we observed that the weights all lay
-in the lattice \(P = \ZZ\) and were congruent modulo the lattice \(Q = 2 \ZZ\).
-Having found all of the weights of \(V\), the only thing we're missing is an
-existence and uniqueness theorem analogous to
-theorem~\ref{thm:sl2-exist-unique}. In other words, our next goal is
-establishing\dots
-
-\begin{theorem}\label{thm:sl3-existence-uniqueness}
-  For each pair of non-negative integers \(n\) and \(m\), there exists precisely
-  one irreducible representation \(V\) of \(\mathfrak{sl}_3(K)\) whose highest
-  weight is \(n \alpha_1 - m \alpha_3\).
-\end{theorem}
-
-To proceed further we once again refer to the approach we employed in the case
-of \(\mathfrak{sl}_2(K)\): there we showed in theorem~\ref{thm:basis-of-irr-rep}
-that any irreducible representation of \(\mathfrak{sl}_2(K)\) is spanned by the
-images of its highest weight vector under \(f\). A more abstract way of putting
-it is to say that an irreducible representation \(V\) of \(\mathfrak{sl}_2(K)\)
-is spanned by the images of its highest weight vector under successive
-applications of half of the root spaces of \(\mathfrak{sl}_2(K)\). The
-advantage of this alternative formulation is, of course, that the same holds
-for \(\mathfrak{sl}_3(K)\). Specifically\dots
-
-\begin{theorem}\label{thm:irr-sl3-span}
-  Given an irreducible \(\mathfrak{sl}_3(K)\)-representation \(V\) and a
-  highest weight vector \(v \in V\), \(V\) is spanned by the images of \(v\)
-  under successive applications of \(E_{2 1}\), \(E_{3 1}\) and \(E_{3 2}\).
-\end{theorem}
-
-The proof of theorem~\ref{thm:irr-sl3-span} is very similar to that of
-theorem~\ref{thm:basis-of-irr-rep}: we use the commutator relations of
-\(\mathfrak{sl}_3(K)\) to inductively show that the subspace spanned by the
-images of a highest weight vector under successive applications of \(E_{2 1}\),
-\(E_{3 1}\) and \(E_{3 2}\) is invariant under the action of
-\(\mathfrak{sl}_3(K)\) -- please refer to \cite{fulton-harris} for further
-details. The same argument also goes to show\dots
-
-\begin{corollary}
-  Given a representation \(V\) of \(\mathfrak{sl}_3(K)\) with highest weight
-  \(\lambda\) and a nonzero \(v \in V_\lambda\), the subspace spanned by successive
-  applications of \(E_{2 1}\), \(E_{3 1}\) and \(E_{3 2}\) to \(v\) is an
-  irreducible subrepresentation whose highest weight is \(\lambda\).
-\end{corollary}
-
-This is very interesting to us since it implies that finding \emph{any}
-representation whose highest weight is \(n \alpha_1 - m \alpha_3\) is enough
-for establishing the ``existence'' part of
-theorem~\ref{thm:sl3-existence-uniqueness}. Moreover, constructing such a
-representation turns out to be quite simple.
-
-\begin{proof}[Proof of existence]
-  Consider the natural representation \(V = K^3\) of \(\mathfrak{sl}_3(K)\). We
-  claim that the highest weight of \(\operatorname{Sym}^n V \otimes
-  \operatorname{Sym}^m V^*\) is \(n \alpha_1 - m \alpha_3\).
-
-  First of all, notice that the eigenvectors of the action of \(\mathfrak{h}\)
-  in \(V\) are the canonical basis vectors \(e_1\), \(e_2\) and \(e_3\), whose
-  weights are \(\alpha_1\), \(\alpha_2\) and \(\alpha_3\) respectively. Hence
-  the weight diagram of \(V\)
-  is
-  \begin{center}
-    \begin{tikzpicture}[scale=2.5]
-      \AutoSizeWeightLatticefalse
-      \begin{rootSystem}{A}
-        \weightLattice{2}
-        \wt[black]{1}{0}
-        \wt[black]{-1}{1}
-        \wt[black]{0}{-1}
-        \node[right] at \weight{1}{0}  {$\alpha_1$};
-        \node[above left] at \weight{-1}{1} {$\alpha_2$};
-        \node[below left] at \weight{0}{-1} {$\alpha_3$};
-      \end{rootSystem}
-    \end{tikzpicture}
-  \end{center}
-  and \(\alpha_1\) is the highest weight of \(V\).
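-
-  Explicitly, if \(H \in \mathfrak{h}\) is the diagonal matrix with entries
-  \(a\), \(b\) and \(- a - b\) then \(H e_i\) is the \(i\)-th diagonal entry of
-  \(H\) times \(e_i\), which is precisely \(\alpha_i(H) \cdot e_i\).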
-
-  On the one hand, if \(\{f_1, f_2, f_3\}\) is the dual basis of \(\{e_1, e_2,
-  e_3\}\) then \(H f_i = - \alpha_i(H) \cdot f_i\) for each \(H \in
-  \mathfrak{h}\), so that the weights of \(V^*\) are precisely the opposites of
-  the weights of \(V\). In other words,
-  \begin{center}
-    \begin{tikzpicture}[scale=2.5]
-      \AutoSizeWeightLatticefalse
-      \begin{rootSystem}{A}
-        \weightLattice{2}
-        \wt[black]{-1}{0}
-        \wt[black]{1}{-1}
-        \wt[black]{0}{1}
-        \node[left]        at \weight{-1}{0} {$-\alpha_1$};
-        \node[below right] at \weight{1}{-1} {$-\alpha_2$};
-        \node[above right] at \weight{0}{1}  {$-\alpha_3$};
-      \end{rootSystem}
-    \end{tikzpicture}
-  \end{center}
-  is the weight diagram of \(V^*\) and \(- \alpha_3\) is the highest weight of
-  \(V^*\).
-
-  On the other hand if we fix two \(\mathfrak{sl}_3(K)\)-representations \(U\)
-  and \(W\), by computing
-  \[
-    \begin{split}
-      H (u \otimes w)
-      & = H u \otimes w + u \otimes H w \\
-      & = \lambda(H) \cdot u \otimes w + u \otimes \mu(H) \cdot w \\
-      & = (\lambda + \mu)(H) \cdot (u \otimes w)
-    \end{split}
-  \]
-  for each \(H \in \mathfrak{h}\), \(u \in U_\lambda\) and \(w \in W_\mu\)
-  we can see that the weights of \(U \otimes W\) are precisely the sums of the
-  weights of \(U\) with the weights of \(W\).
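-
-  The same goes for symmetric powers: since the action of \(H\) on
-  \(\operatorname{Sym}^n U\) obeys the Leibniz rule, for \(u \in U_\lambda\) we
-  find
-  \[
-    H u^n
-    = (H u) u^{n - 1} + u (H u) u^{n - 2} + \cdots + u^{n - 1} (H u)
-    = n \lambda(H) \cdot u^n,
-  \]
-  so that \(u^n\) is a weight vector of weight \(n \lambda\).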
-
-  This implies that the maximal weights of \(\operatorname{Sym}^n V\) and
-  \(\operatorname{Sym}^m V^*\) are \(n \alpha_1\) and \(- m \alpha_3\)
-  respectively -- with maximal weight vectors \(e_1^n\) and \(f_3^m\).
-  Furthermore, by the same token the highest weight of \(\operatorname{Sym}^n V
-  \otimes \operatorname{Sym}^m V^*\) must be \(n \alpha_1 - m \alpha_3\) -- with highest
-  weight vector \(e_1^n \otimes f_3^m\).
-\end{proof}
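-
-For instance, for \(n = m = 1\) the representation \(V \otimes V^*\) constructed
-above is \(9\)-dimensional with highest weight \(\alpha_1 - \alpha_3\). One can
-check that the irreducible subrepresentation generated by the highest weight
-vector \(e_1 \otimes f_3\) is a copy of the \(8\)-dimensional adjoint
-representation of \(\mathfrak{sl}_3(K)\), while the element \(e_1 \otimes f_1 +
-e_2 \otimes f_2 + e_3 \otimes f_3\) spans a copy of the trivial representation
-complementing it.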
-
-The ``uniqueness'' part of theorem~\ref{thm:sl3-existence-uniqueness} is even
-simpler than that.
-
-\begin{proof}[Proof of uniqueness]
-  Let \(V\) and \(W\) be two irreducible representations of
-  \(\mathfrak{sl}_3(K)\) with highest weight \(\lambda\). By
-  theorem~\ref{thm:sl3-irr-weights-class}, the weights of \(V\) are precisely
-  the same as those of \(W\).
-
-  Now by computing
-  \[
-    H (v + w)
-    = H v + H w
-    = \mu(H) \cdot v + \mu(H) \cdot w
-    = \mu(H) \cdot (v + w)
-  \]
-  for each \(H \in \mathfrak{h}\), \(v \in V_\mu\) and \(w \in W_\mu\), we can
-  see that the weights of \(V \oplus W\) are the same as those of \(V\) and \(W\).
-  Hence the highest weight of \(V \oplus W\) is \(\lambda\) -- with highest
-  weight vectors given by the sum of highest weight vectors of \(V\) and \(W\).
-
-  Fix some nonzero \(v \in V_\lambda\) and \(w \in W_\lambda\) and consider the
-  irreducible subrepresentation \(U \subset V \oplus W\) generated by \(v + w\).
-  The projection maps \(\pi_1 : U \to V\) and \(\pi_2 : U \to W\) are non-zero,
-  since they take \(v + w\) to \(v\) and \(w\) respectively. Being non-zero
-  homomorphisms between irreducible representations of \(\mathfrak{sl}_3(K)\),
-  they must be isomorphisms. Finally,
-  \[
-    V \cong U \cong W
-  \]
-\end{proof}
-
-The situation here is analogous to that of the previous section, where we saw
-that the irreducible representations of \(\mathfrak{sl}_2(K)\) are given by
-symmetric powers of the natural representation.
-
-We've been very successful in our pursuit of a classification of the
-irreducible representations of \(\mathfrak{sl}_2(K)\) and
-\(\mathfrak{sl}_3(K)\), but so far we've mostly postponed the discussion on the
-motivation behind our methods. In particular, we did not explain why we chose
-\(h\) and \(\mathfrak{h}\), nor why we chose to look at their eigenvalues.
-Apart from the obvious fact that we already knew it would work a priori, why
-did we do all that? In the following section we will attempt to answer this
-question by looking at what we did in the previous sections through more
-abstract lenses and by studying the representations of an arbitrary
-finite-dimensional semisimple Lie algebra \(\mathfrak{g}\).
-
-\section{Simultaneous Diagonalization \& the General Case}
-
-At the heart of our analysis of \(\mathfrak{sl}_2(K)\) and
-\(\mathfrak{sl}_3(K)\) was the decision to consider the eigenspace
-decomposition
-\begin{equation}\label{sym-diag}
-  V = \bigoplus_\lambda V_\lambda
-\end{equation}
-
-This was simple enough to do in the case of \(\mathfrak{sl}_2(K)\), but the
-reasoning behind it, as well as the mere fact that equation (\ref{sym-diag})
-holds, are harder to explain in the case of \(\mathfrak{sl}_3(K)\). The
-eigenspace decomposition associated with an operator \(V \to V\) is a very
-well-known tool, and this type of argument should be familiar to anyone
-acquainted with
-basic concepts of linear algebra. On the other hand, the eigenspace
-decomposition of \(V\) with respect to the action of an arbitrary subalgebra
-\(\mathfrak{h} \subset \mathfrak{gl}(V)\) is neither well-known nor does it
-hold in general: as previously stated, it may very well be that
-\[
-  \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda \subsetneq V
-\]
-
-We should note, however, that these two cases are not as different as they may
-sound at first glance. Specifically, we can regard the eigenspace decomposition
-of a representation \(V\) of \(\mathfrak{sl}_2(K)\) with respect to the
-eigenvalues of the action of \(h\) as the eigenspace decomposition of \(V\)
-with respect to the action of the subalgebra \(\mathfrak{h} = K h \subset
-\mathfrak{sl}_2(K)\). Furthermore, in both cases \(\mathfrak{h} \subset
-\mathfrak{sl}_n(K)\) is the subalgebra of diagonal matrices, which is Abelian.
-The fundamental difference between these two cases is thus the fact that \(\dim
-\mathfrak{h} = 1\) for \(\mathfrak{h} \subset \mathfrak{sl}_2(K)\) while \(\dim
-\mathfrak{h} > 1\) for \(\mathfrak{h} \subset \mathfrak{sl}_3(K)\). The
-question then is: why did we choose \(\mathfrak{h}\) with \(\dim \mathfrak{h} >
-1\) for \(\mathfrak{sl}_3(K)\)?
-
-% TODO: Add a note on how irreducible representations of Abelian algebras are
-% all one dimensional to the previous chapter
-The rationale behind fixing an Abelian subalgebra is a simple one: we have seen
-in the previous chapter that representations of Abelian
-algebras are generally much simpler to understand than the general case.
-Thus it makes sense to decompose a given representation \(V\) of
-\(\mathfrak{g}\) into subspaces invariant under the action of \(\mathfrak{h}\),
-and then analyze how the remaining elements of \(\mathfrak{g}\) act on these
-subspaces. The bigger \(\mathfrak{h}\) is, the simpler our problem gets, because
-there are fewer elements outside of \(\mathfrak{h}\) left to analyze.
-
-Hence we are generally interested in maximal Abelian subalgebras \(\mathfrak{h}
-\subset \mathfrak{g}\), which leads us to the following definition.
-
-\begin{definition}
-  A subalgebra \(\mathfrak{h} \subset \mathfrak{g}\) is called \emph{a Cartan
-  subalgebra of \(\mathfrak{g}\)} if it is self-normalizing -- i.e. \([X, H] \in
-  \mathfrak{h}\) for all \(H \in \mathfrak{h}\) if, and only if \(X \in
-  \mathfrak{h}\) -- and nilpotent. Equivalently for reductive \(\mathfrak{g}\),
-  \(\mathfrak{h}\) is called \emph{a Cartan subalgebra of \(\mathfrak{g}\)} if
-  it is Abelian, \(\operatorname{ad}(H)\) is diagonalizable for each \(H \in
-  \mathfrak{h}\) and if \(\mathfrak{h}\) is maximal with respect to the former
-  two properties.
-\end{definition}
-
-\begin{proposition}
-  There exists a Cartan subalgebra \(\mathfrak{h} \subset \mathfrak{g}\).
-\end{proposition}
-
-\begin{proof}
-  Notice that \(0 \subset \mathfrak{g}\) is an Abelian subalgebra whose
-  elements act as diagonalizable operators via the adjoint representation.
-  Indeed,
-  \(0\) -- the only element of \(0 \subset \mathfrak{g}\) -- is such that
-  \(\operatorname{ad}(0) = 0\). Furthermore, given a chain of Abelian
-  subalgebras
-  \[
-    0 \subset \mathfrak{h}_1 \subset \mathfrak{h}_2 \subset \cdots
-  \]
-  such that \(\operatorname{ad}(H)\) is a diagonalizable operator for each \(H
-  \in \mathfrak{h}_i\), the subalgebra \(\bigcup_i \mathfrak{h}_i \subset
-  \mathfrak{g}\) is Abelian, and its elements also act as diagonalizable
-  operators on
-  \(\mathfrak{g}\). It then follows from Zorn's lemma that there exists a
-  subalgebra \(\mathfrak{h}\) which is maximal with respect to both these
-  properties -- i.e. a Cartan subalgebra.
-\end{proof}
-
-We have already seen some concrete examples. For instance, one can readily
-check that every pair of diagonal matrices commutes, so that
-\[
-  \mathfrak{h} =
-  \begin{pmatrix}
-         K &      0 & \cdots &      0 \\
-         0 &      K & \cdots &      0 \\
-    \vdots & \vdots & \ddots & \vdots \\
-         0 &      0 & \cdots &      K
-  \end{pmatrix}
-\]
-is an Abelian -- and hence nilpotent -- subalgebra of \(\mathfrak{gl}_n(K)\). A
-simple calculation also shows that if \(i \ne j\) then the coefficient of
-\(E_{i j}\) in \([E_{i i}, X]\) is the same as the coefficient of \(E_{i j}\)
-in \(X\), for all \(X \in \mathfrak{gl}_n(K)\). In particular, if \([E_{i i},
-X]\) is diagonal for all \(i\), then so is \(X\) -- i.e. \(\mathfrak{h}\) is
-self-normalizing. Hence \(\mathfrak{h}\) is a Cartan subalgebra of
-\(\mathfrak{gl}_n(K)\).
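-
-For instance, for \(n = 2\) this last calculation reads
-\[
-  \left[
-    E_{1 1},
-    \begin{pmatrix} x_{1 1} & x_{1 2} \\ x_{2 1} & x_{2 2} \end{pmatrix}
-  \right]
-  =
-  \begin{pmatrix} 0 & x_{1 2} \\ - x_{2 1} & 0 \end{pmatrix},
-\]
-so that the coefficient of \(E_{1 2}\) in \([E_{1 1}, X]\) is indeed the
-coefficient \(x_{1 2}\) of \(E_{1 2}\) in \(X\).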
-
-The intersection of this subalgebra with \(\mathfrak{sl}_n(K)\) -- i.e. the
-subalgebra of traceless diagonal matrices -- is a Cartan subalgebra of
-\(\mathfrak{sl}_n(K)\). In particular, if \(n = 2\) or \(n = 3\) we recover the
-subalgebras described in the previous two sections. The remaining question then
-is: if \(\mathfrak{h} \subset \mathfrak{g}\) is a Cartan subalgebra and \(V\)
-is a representation of \(\mathfrak{g}\), does the eigenspace decomposition
-\[
-  V = \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda
-\]
-of \(V\) hold? The answer to this question turns out to be yes. This is a
-consequence of something known as \emph{simultaneous diagonalization}, which is
-the primary tool we'll use to generalize the results of the previous section.
-What is simultaneous diagonalization all about then?
-
-\begin{definition}\label{def:sim-diag}
-  Given a \(K\)-vector space \(V\), a set of operators \(\{T_j : V \to V\}_j\)
-  is called \emph{simultaneously diagonalizable} if there is a basis \(\{v_1,
-  \ldots, v_n\}\) for \(V\) such that \(T_j v_i\) is a scalar multiple of
-  \(v_i\), for all \(i, j\).
-\end{definition}
-
-\begin{proposition}
-  Given a \emph{finite-dimensional} vector space \(V\), a set of diagonalizable
-  operators \(V \to V\) is simultaneously diagonalizable if, and only if all of
-  its elements commute with one another.
-\end{proposition}
-
-We should point out that simultaneous diagonalization \emph{only works in the
-finite-dimensional setting}. In fact, simultaneous diagonalization is usually
-framed as an equivalent statement about diagonalizable \(n \times n\) matrices
--- where \(n\) is, of course, finite.
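-
-For a concrete example, the matrices
-\[
-  A =
-  \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
-  \quad \text{and} \quad
-  B =
-  \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}
-\]
-commute, and both act diagonally in the basis \(\{(1, 1), (1, -1)\}\) of
-\(K^2\): the eigenvalues of \(A\) are \(1\) and \(-1\), while those of \(B\)
-are \(3\) and \(-1\).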
-
-Simultaneous diagonalization implies that to show \(V = \bigoplus_\lambda
-V_\lambda\) it suffices to show that \(H\!\restriction_V : V \to V\) is a
-diagonalizable operator for each \(H \in \mathfrak{h}\). To that end, we
-introduce \emph{the Jordan decomposition of an operator} and \emph{the abstract
-Jordan decomposition of a semisimple Lie algebra}.
-
-\begin{proposition}[Jordan]
-  Given a finite-dimensional vector space \(V\) and an operator \(T : V \to
-  V\), there are unique commuting operators \(T_s, T_n : V \to V\), with
-  \(T_s\) diagonalizable and \(T_n\) nilpotent, such that \(T = T_s + T_n\).
-  The pair \((T_s, T_n)\) is known as \emph{the Jordan decomposition of \(T\)}.
-\end{proposition}
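-
-For instance,
-\[
-  \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}
-  =
-  \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}
-  +
-  \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
-\]
-is the Jordan decomposition of the operator on the left-hand side: the two
-summands commute, the first is diagonalizable and the second is nilpotent.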
-
-\begin{proposition}
-  Given \(\mathfrak{g}\) semisimple and \(X \in \mathfrak{g}\), there are
-  \(X_s, X_n \in \mathfrak{g}\) such that \(X = X_s + X_n\), \([X_s, X_n] =
-  0\), \(\operatorname{ad}(X_s)\) is a diagonalizable operator and
-  \(\operatorname{ad}(X_n)\) is a nilpotent operator. The pair \((X_s, X_n)\)
-  is known as \emph{the Jordan decomposition of \(X\)}.
-\end{proposition}
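-
-For example, in \(\mathfrak{sl}_3(K)\) the element \(X = E_{1 1} + E_{2 2} - 2
-E_{3 3} + E_{1 2}\) decomposes as \(X_s = E_{1 1} + E_{2 2} - 2 E_{3 3}\) and
-\(X_n = E_{1 2}\): one can check that these commute, that
-\(\operatorname{ad}(X_s)\) is diagonalizable -- since \(X_s\) is a diagonal
-matrix -- and that \(\operatorname{ad}(X_n)\) is nilpotent -- since \(X_n\) is
-a nilpotent matrix.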
-
-It should be clear from the uniqueness of \(\operatorname{ad}(X)_s\) and
-\(\operatorname{ad}(X)_n\) that the Jordan decomposition of
-\(\operatorname{ad}(X)\) is \(\operatorname{ad}(X) = \operatorname{ad}(X_s) +
-\operatorname{ad}(X_n)\). What's perhaps more remarkable is the fact this holds
-for \emph{any} finite-dimensional representation of \(\mathfrak{g}\). In other
-words\dots
-
-\begin{proposition}\label{thm:preservation-jordan-form}
-  Let \(V\) be a finite-dimensional representation of \(\mathfrak{g}\) and \(X
-  \in \mathfrak{g}\). Denote by \(X\!\restriction_V\) the action of \(X\) in
-  \(V\). Then \(X_s\!\restriction_V = (X\!\restriction_V)_s\) and
-  \(X_n\!\restriction_V = (X\!\restriction_V)_n\).
-\end{proposition}
-
-This last result is known as \emph{the preservation of the Jordan form}, and a
-proof can be found in appendix C of \cite{fulton-harris}. We should point out
-this fails spectacularly in positive characteristic. Furthermore, the statement
-of proposition~\ref{thm:preservation-jordan-form} only makes sense for
-\emph{semisimple} Lie algebras -- i.e. the algebras \(\mathfrak{g}\) for which
-the abstract Jordan decomposition of \(\mathfrak{g}\) is defined. Nevertheless,
-as promised this implies\dots
-
-\begin{corollary}\label{thm:finite-dim-is-weight-mod}
-  Let \(\mathfrak{g}\) be a semisimple Lie algebra, \(\mathfrak{h} \subset
-  \mathfrak{g}\) be a Cartan subalgebra and \(V\) be any finite-dimensional
-  representation of \(\mathfrak{g}\). Then there is a basis \(\{v_1, \ldots,
-  v_n\}\) of \(V\) so that each \(v_i\) is simultaneously an eigenvector of all
-  elements of \(\mathfrak{h}\) -- i.e. each element of \(\mathfrak{h}\) acts as
-  a diagonal matrix in this basis. In other words, there are linear functionals
-  \(\lambda_i \in \mathfrak{h}^*\) so that
-  \(
-    H v_i = \lambda_i(H) \cdot v_i
-  \)
-  for all \(H \in \mathfrak{h}\). In particular,
-  \[
-    V = \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda
-  \]
-\end{corollary}
-
-\begin{proof}
-  Fix some \(H \in \mathfrak{h}\). It suffices to show that \(H\!\restriction_V
-  : V \to V\) is a diagonalizable operator.
-
-  If we write \(H = H_s + H_n\) for the abstract Jordan decomposition of \(H\),
-  we know \(\operatorname{ad}(H_s) = \operatorname{ad}(H)_s\). But
-  \(\operatorname{ad}(H)\) is a diagonalizable operator, so that
-  \(\operatorname{ad}(H)_s = \operatorname{ad}(H)\). This implies
-  \(\operatorname{ad}(H_n) = \operatorname{ad}(H)_n = 0\), so that \(H_n\) is a
-  central element of \(\mathfrak{g}\). Since \(\mathfrak{g}\) is semisimple,
-  \(H_n = 0\). Proposition~\ref{thm:preservation-jordan-form} then implies
-  \((H\!\restriction_V)_n = (H_n)\!\restriction_V = 0\), so \(H\!\restriction_V
-  = (H\!\restriction_V)_s\) is a diagonalizable operator.
-\end{proof}
-
-We should point out that this last proof only works for semisimple Lie
-algebras. This is because we rely heavily on
-proposition~\ref{thm:preservation-jordan-form}, as well as on the fact that
-semisimple Lie algebras are centerless. In fact,
-corollary~\ref{thm:finite-dim-is-weight-mod} fails even for reductive Lie
-algebras. For a counterexample, consider the algebra \(\mathfrak{g} = K\): the
-Cartan subalgebra of \(\mathfrak{g}\) is \(\mathfrak{g}\) itself, and a
-\(\mathfrak{g}\)-module is simply a vector space \(V\) endowed with an operator
-\(V \to V\) -- which corresponds to the action of \(1 \in \mathfrak{g}\) in
-\(V\). In particular, if we choose an operator \(V \to V\) which is \emph{not}
-diagonalizable we find \(V \ne \bigoplus_{\lambda \in \mathfrak{h}^*}
-V_\lambda\).
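-
-Explicitly, we may take \(V = K^2\) with \(1 \in \mathfrak{g}\) acting as the
-nilpotent operator
-\(\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\):
-the only eigenvalue of this operator is \(0\), with eigenspace \(K e_1\), so
-that \(\bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda = K e_1 \subsetneq V\).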
-
-However, corollary~\ref{thm:finite-dim-is-weight-mod} does work for reductive
-\(\mathfrak{g}\) if we assume that the representation in question is
-irreducible, since central elements of \(\mathfrak{g}\) act on irreducible
-representations as scalar operators. The hypothesis of finite-dimensionality is
-also of huge importance. In the next chapter we will encounter
-infinite-dimensional \(\mathfrak{g}\)-modules for which the eigenspace
-decomposition \(V = \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda\) fails.
-As a first consequence of corollary~\ref{thm:finite-dim-is-weight-mod} we have\dots
-
-\begin{corollary}
-  The restriction of \(B\) to \(\mathfrak{h}\) is non-degenerate.
-\end{corollary}
-
-\begin{proof}
-  Consider the eigenspace decomposition \(\mathfrak{g} = \mathfrak{g}_0 \oplus
-  \bigoplus_\alpha \mathfrak{g}_\alpha\) of the adjoint representation, where
-  \(\alpha\) ranges over all nonzero eigenvalues of the adjoint action of
-  \(\mathfrak{h}\). We claim \(\mathfrak{g}_0 = \mathfrak{h}\).
-
-  Indeed, since \(\mathfrak{h}\) is Abelian, \(\operatorname{ad}(\mathfrak{h})
-  \mathfrak{h} = 0\) -- i.e. \(\mathfrak{h} \subset \mathfrak{g}_0\). On the
-  other hand, since \(\mathfrak{h}\) is self-normalizing, if \([X, H] = 0 \in
-  \mathfrak{h}\) for all \(H \in \mathfrak{h}\) then \(X \in \mathfrak{h}\) --
-  i.e. \(\mathfrak{g}_0 \subset \mathfrak{h}\). So the eigenspace decomposition
-  becomes
-  \[
-    \mathfrak{g} = \mathfrak{h} \oplus \bigoplus_\alpha \mathfrak{g}_\alpha
-  \]
-
-  We furthermore claim that \(\mathfrak{h} = \mathfrak{g}_0\) is orthogonal to
-  \(\mathfrak{g}_\alpha\) with respect to \(B\) for any \(\alpha \ne 0\).
-  Indeed, given \(X \in \mathfrak{g}_\alpha\) and \(H_1, H_2 \in \mathfrak{h}\)
-  with \(\alpha(H_1) \ne 0\) we have
-  \[
-    \alpha(H_1) \cdot B(X, H_2)
-    = B([H_1, X], H_2)
-    = - B([X, H_1], H_2)
-    = - B(X, [H_1, H_2])
-    = 0
-  \]
-
-  Hence \(\mathfrak{h}\) is \(B\)-orthogonal to each \(\mathfrak{g}_\alpha\),
-  so that any \(H \in \mathfrak{h}\) with \(B(H, \mathfrak{h}) = 0\) satisfies
-  \(B(H, \mathfrak{g}) = 0\) and must therefore vanish. In other words, the
-  non-degeneracy of \(B\) implies the non-degeneracy of its restriction.
-\end{proof}
-
-We should point out that the restriction of \(B\) to \(\mathfrak{h}\) is
-\emph{not} the Killing form of \(\mathfrak{h}\). In fact, since
-\(\mathfrak{h}\) is Abelian, its Killing form is identically zero -- which is
-hardly ever a non-degenerate form.
-
-\begin{note}
-  Since \(B\) induces an isomorphism \(\mathfrak{h} \isoto \mathfrak{h}^*\), it
-  induces a bilinear form \((B(X, \cdot), B(Y, \cdot)) \mapsto B(X, Y)\) in
-  \(\mathfrak{h}^*\). We denote this form by \(B\).
-\end{note}
-
-We now have most of the necessary tools to reproduce the results of the
-previous chapter in a general setting. Let \(\mathfrak{g}\) be a
-finite-dimensional semisimple algebra with a Cartan subalgebra \(\mathfrak{h}\)
-and let \(V\) be a finite-dimensional irreducible representation of
-\(\mathfrak{g}\). We will proceed, as we did before, by generalizing the
-results of the previous two sections in order. By now the pattern should
-be starting to become clear, so we will mostly omit technical details and proofs
-analogous to the ones in the previous sections. Further details can be found in
-appendix D of \cite{fulton-harris} and in \cite{humphreys}.
-
-We begin our analysis by remarking that in both \(\mathfrak{sl}_2(K)\) and
-\(\mathfrak{sl}_3(K)\), the roots were symmetric about the origin and spanned
-all of \(\mathfrak{h}^*\). This turns out to be a general fact, which is a
-consequence of the non-degeneracy of the restriction of the Killing form to the
-Cartan subalgebra.
-
-\begin{proposition}\label{thm:weights-symmetric-span}
-  The eigenvalues \(\alpha\) of the adjoint action of \(\mathfrak{h}\) in
-  \(\mathfrak{g}\) are symmetrical about the origin -- i.e. \(- \alpha\) is
-  also an eigenvalue -- and they span all of \(\mathfrak{h}^*\).
-\end{proposition}
-
-\begin{proof}
-  We'll start with the first claim. Let \(\alpha\) and \(\beta\) be two
-  eigenvalues of the adjoint action of \(\mathfrak{h}\). Notice
-  \([\mathfrak{g}_\alpha, \mathfrak{g}_\beta] \subset \mathfrak{g}_{\alpha +
-  \beta}\). Indeed, if \(X \in \mathfrak{g}_\alpha\) and \(Y \in
-  \mathfrak{g}_\beta\) then
-  \[
-    [H, [X, Y]]
-    = [X, [H, Y]] - [Y, [H, X]]
-    = (\alpha + \beta)(H) \cdot [X, Y]
-  \]
-  for all \(H \in \mathfrak{h}\).
-
-  This implies that if \(\alpha + \beta \ne 0\) then \(\operatorname{ad}(X)
-  \operatorname{ad}(Y)\) is nilpotent: if \(Z \in \mathfrak{g}_\gamma\) then
-  \[
-    (\operatorname{ad}(X) \operatorname{ad}(Y))^n Z
-    = [X, [Y, [\ldots [X, [Y, Z]] \ldots ]]]
-    \in \mathfrak{g}_{n \alpha + n \beta + \gamma}
-    = 0
-  \]
-  for \(n\) large enough. In particular, \(B(X, Y) =
-  \operatorname{Tr}(\operatorname{ad}(X) \operatorname{ad}(Y)) = 0\). Now if
-  \(- \alpha\) is not an eigenvalue we find \(B(X, \mathfrak{g}_\beta) = 0\)
-  for all eigenvalues \(\beta\), which contradicts the non-degeneracy of \(B\).
-  Hence \(- \alpha\) must be an eigenvalue of the adjoint action of
-  \(\mathfrak{h}\).
-
-  For the second statement, note that if the eigenvalues of \(\mathfrak{h}\) do
-  not span all of \(\mathfrak{h}^*\) then there is some \(H \in \mathfrak{h}\)
-  non-zero such that \(\alpha(H) = 0\) for all eigenvalues \(\alpha\), which is
-  to say, \(\operatorname{ad}(H) X = [H, X] = 0\) for all \(X \in
-  \mathfrak{g}\). Another way of putting it is to say \(H\) is an element of
-  the center \(\mathfrak{z}\) of \(\mathfrak{g}\), which is zero by
-  semisimplicity -- a contradiction.
-\end{proof}
-
-Furthermore, as in the case of \(\mathfrak{sl}_2(K)\) and
-\(\mathfrak{sl}_3(K)\) one can show\dots
-
-\begin{proposition}\label{thm:root-space-dim-1}
-  The eigenspaces \(\mathfrak{g}_\alpha\) are all 1-dimensional.
-\end{proposition}
-
-The proof of the first statement of
-proposition~\ref{thm:weights-symmetric-span} highlights something interesting:
-if we fix some eigenvalue \(\alpha\) of the adjoint action of
-\(\mathfrak{h}\) in \(\mathfrak{g}\) and an eigenvector \(X \in
-\mathfrak{g}_\alpha\), then for each \(H \in \mathfrak{h}\) and \(v \in
-V_\lambda\) we find
-\[
-  H (X v)
-  = X (H v) + [H, X] v
-  = (\lambda + \alpha)(H) \cdot X v
-\]
-so that \(X\) carries \(v\) to \(V_{\lambda + \alpha}\). We have encountered
-this formula twice in this chapter: again, we find \(\mathfrak{g}_\alpha\)
-\emph{acts on \(V\) by translating vectors between eigenspaces}. In other
-words, if we denote by \(\Delta\) the set of all roots of \(\mathfrak{g}\)
-then\dots
-
-\begin{theorem}\label{thm:weights-congruent-mod-root}
-  The weights of an irreducible representation \(V\) of \(\mathfrak{g}\) are
-  all congruent modulo the root lattice \(Q = \ZZ \Delta\) of \(\mathfrak{g}\).
-\end{theorem}
-
-% TODOO: Turn this into a proper discussion of basis and give the idea of the
-% proof of existance of basis?
-To proceed further, as in the case of \(\mathfrak{sl}_3(K)\) we have to fix a
-direction in \(\mathfrak{h}^*\) -- i.e. we fix a \(\QQ\)-linear functional on
-the \(\QQ\)-span of \(Q\) whose kernel meets \(Q\) only at the origin. This
-choice induces a partition \(\Delta = \Delta^+ \cup \Delta^-\) of the set of
-roots of \(\mathfrak{g}\), according to the sign the functional takes on each
-root, and once more we find\dots
-
-\begin{definition}
-  The elements of \(\Delta^+\) and \(\Delta^-\) are called \emph{positive} and
-  \emph{negative roots}, respectively. The subalgebra \(\mathfrak{b} =
-  \mathfrak{h} \oplus \bigoplus_{\alpha \in \Delta^+} \mathfrak{g}_\alpha\) is
-  called \emph{the Borel subalgebra associated with \(\mathfrak{h}\)}.
-\end{definition}
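-
-For instance, for \(\mathfrak{g} = \mathfrak{sl}_n(K)\) and \(\mathfrak{h}\)
-the subalgebra of traceless diagonal matrices, one possible choice of direction
-leads to the positive roots being precisely those \(\alpha\) with
-\(\mathfrak{g}_\alpha = K E_{i j}\) for \(i < j\), in which case
-\(\mathfrak{b}\) is the subalgebra of upper triangular matrices in
-\(\mathfrak{sl}_n(K)\).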
-
-\begin{theorem}
-  There is a weight vector \(v \in V\) that is killed by all positive root
-  spaces of \(\mathfrak{g}\).
-\end{theorem}
-
-% TODO: Here we may take a weight of maximal height, but why is it unique?
-% TODO: We don't really need to talk about height tho, we may simply take a
-% weight that maximizes B(gamma, lambda) in QQ
-% TODOO: Either way, we need to move this to after the discussion on the
-% integrality of weights
-\begin{proof}
-  It suffices to note that if \(\lambda\) is a weight of \(V\) lying furthest
-  along the direction we chose and \(V_{\lambda + \alpha} \ne 0\) for some
-  \(\alpha \in \Delta^+\) then \(\lambda + \alpha\) is a weight of \(V\) lying
-  further along the chosen direction than \(\lambda\), which contradicts the
-  choice of \(\lambda\).
-\end{proof}
-
-Accordingly, we call \(\lambda\) \emph{the highest weight of \(V\)}, and we
-call any nonzero \(v \in V_\lambda\) \emph{a highest weight vector}. The strategy then
-is to describe all weight spaces of \(V\) in terms of \(\lambda\) and \(v\), as
-in theorem~\ref{thm:sl3-irr-weights-class}, and unsurprisingly we do so by
-reproducing the proof of the case of \(\mathfrak{sl}_3(K)\). Namely, we
-show\dots
-
-\begin{proposition}\label{thm:distinguished-subalgebra}
-  Given a root \(\alpha\) of \(\mathfrak{g}\) the subspace
-  \(\mathfrak{s}_\alpha = \mathfrak{g}_\alpha \oplus \mathfrak{g}_{- \alpha}
-  \oplus [\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]\) is a subalgebra
-  isomorphic to \(\mathfrak{sl}_2(K)\).
-\end{proposition}
-
-\begin{corollary}\label{thm:distinguished-subalg-rep}
-  For all weights \(\mu\), the subspace
-  \[
-    V_\mu[\alpha] = \bigoplus_k V_{\mu + k \alpha}
-  \]
-  is invariant under the action of the subalgebra \(\mathfrak{s}_\alpha\)
-  and the weight spaces in this string match the eigenspaces of \(h\).
-\end{corollary}
-
-The proof of proposition~\ref{thm:distinguished-subalgebra} is very technical
-in nature and we won't include it here, but the idea behind it is simple:
-recall that \(\mathfrak{g}_\alpha\) and \(\mathfrak{g}_{- \alpha}\) are both
-1-dimensional, so that \(\dim [\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]\)
-is at most 1. We check that \([\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]
-\ne 0\) and that no generator of \([\mathfrak{g}_\alpha, \mathfrak{g}_{-
-\alpha}]\) is annihilated by \(\alpha\), so that by adjusting scalars we
-can find \(E_\alpha \in \mathfrak{g}_\alpha\) and \(F_\alpha \in
-\mathfrak{g}_{- \alpha}\) such that \(H_\alpha = [E_\alpha, F_\alpha]\)
-satisfies
-\begin{align*}
-  [H_\alpha, F_\alpha] & = -2 F_\alpha &
-  [H_\alpha, E_\alpha] & =  2 E_\alpha
-\end{align*}
-
-The elements \(E_\alpha, F_\alpha \in \mathfrak{g}\) are not uniquely
-determined by this condition, but \(H_\alpha\) is. The second statement of
-corollary~\ref{thm:distinguished-subalg-rep} imposes a restriction on the
-weights of \(V\). Namely, if \(\mu\) is a weight, \(\mu(H_\alpha)\) is an
-eigenvalue of \(h\) in some representation of \(\mathfrak{sl}_2(K)\), so
-that\dots
-
-\begin{proposition}
-  The weights \(\mu\) of an irreducible representation \(V\) of
-  \(\mathfrak{g}\) are such that \(\mu(H_\alpha) \in \ZZ\) for each \(\alpha \in
-  \Delta\).
-\end{proposition}
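-
-For \(\mathfrak{g} = \mathfrak{sl}_3(K)\) and \(\alpha = \alpha_i - \alpha_j\)
-we may take \(E_\alpha = E_{i j}\), \(F_\alpha = E_{j i}\) and \(H_\alpha =
-[E_{i j}, E_{j i}] = E_{i i} - E_{j j}\), so that the integrality condition
-\(\mu(H_\alpha) \in \ZZ\) is precisely the condition we encountered while
-studying the weights of \(\mathfrak{sl}_3(K)\) in the previous section.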
-
-Once more, the lattice \(P = \{ \lambda \in \mathfrak{h}^* : \lambda(H_\alpha)
-\in \ZZ, \forall \alpha \in \Delta \}\) is called \emph{the weight lattice of
-\(\mathfrak{g}\)}, and we call the elements of \(P\) \emph{integral}. Finally,
-another important consequence of proposition~\ref{thm:distinguished-subalgebra}
-is\dots
-
-\begin{corollary}
-  Given a weight \(\mu\) of \(V\), if \(\alpha \in \Delta^+\) and \(T_\alpha :
-  \mathfrak{h}^* \to \mathfrak{h}^*\) is the reflection across the hyperplane
-  perpendicular to \(\alpha\) with respect to the Killing form, then
-  corollary~\ref{thm:distinguished-subalg-rep} implies that every \(\nu = \mu +
-  k \alpha\) with \(k \in \ZZ\) lying on the line segment connecting \(\mu\)
-  and \(T_\alpha \mu\) is a weight -- i.e. \(V_\nu \ne 0\).
-\end{corollary}
-
-\begin{proof}
-  It suffices to note that \(V_\nu \subset V_\mu[\alpha]\) -- see appendix D of
-  \cite{fulton-harris} for further details.
-\end{proof}
-
-\begin{definition}
-  We refer to the group \(\mathcal{W} = \langle T_\alpha : \alpha \in \Delta^+
-  \rangle \subset \operatorname{O}(\mathfrak{h}^*)\) as \emph{the Weyl group of
-  \(\mathfrak{g}\)}.
-\end{definition}
-
-This is entirely analogous to the situation of \(\mathfrak{sl}_3(K)\), where we
-found that the weights of the irreducible representations were symmetric with
-respect to the lines \(B(\alpha_i - \alpha_j, \alpha) = 0\).
-Indeed, the same argument leads us to the conclusion\dots
-
-\begin{theorem}\label{thm:irr-weight-class}
-  The weights of an irreducible representation \(V\) of \(\mathfrak{g}\) with
-  highest weight \(\lambda\) are precisely the elements of the weight lattice
-  \(P\) congruent to \(\lambda\) modulo the root lattice \(Q\) lying inside the
-  convex hull of the image of \(\lambda\) under the action of the Weyl group
-  \(\mathcal{W}\).
-\end{theorem}
-
-Now the only thing we are missing for a complete classification is an existence
-and uniqueness theorem analogous to theorem~\ref{thm:sl2-exist-unique} and
-theorem~\ref{thm:sl3-existence-uniqueness}. Lo and behold\dots
-
-\begin{definition}
-  An element \(\lambda\) of \(P\) such that \(\lambda(H_\alpha) \ge 0\) for all
-  \(\alpha \in \Delta^+\) is referred to as an \emph{integral dominant weight
-  of \(\mathfrak{g}\)}.
-\end{definition}
-
-\begin{theorem}\label{thm:dominant-weight-theo}
-  For each dominant integral \(\lambda \in P\) there exists precisely one
-  irreducible finite-dimensional representation \(V\) of \(\mathfrak{g}\) whose
-  highest weight is \(\lambda\).
-\end{theorem}
-
-Fix some dominant integral \(\lambda \in P\). The ``uniqueness'' part of the
-theorem follows at once from the argument used for \(\mathfrak{sl}_3(K)\). The
-``existence'' part is more nuanced. Our first instinct is, of course, to try to
-generalize the proof used for \(\mathfrak{sl}_3(K)\). The issue is that our
-proof relied heavily on our knowledge of the roots of \(\mathfrak{sl}_3(K)\).
-Instead, we need a new strategy for the general setting. To that end, we
-introduce a special class of \(\mathfrak{g}\)-modules, known as \emph{Verma
-modules}.
-
-\begin{definition}\label{def:verma}
-  The \(\mathfrak{g}\)-module \(M(\lambda) =
-  \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}} K v^+\), where the action of
-  \(\mathfrak{b}\) in \(K v^+\) is given by \(H v^+ = \lambda(H) \cdot v^+\)
-  for all \(H \in \mathfrak{h}\) and \(X v^+ = 0\) for \(X \in
-  \mathfrak{g}_{\alpha}\), \(\alpha \in \Delta^+\), is called \emph{the Verma
-  module of weight \(\lambda\)}
-\end{definition}
-
-We should point out that, unlike most representations we've encountered so far,
-Verma modules are \emph{highly infinite-dimensional}. Indeed, the dimension of
-\(M(\lambda)\) is the same as the codimension of \(\mathcal{U}(\mathfrak{b})\)
-in \(\mathcal{U}(\mathfrak{g})\), which is always infinite. Nevertheless,
-\(M(\lambda)\) turns out to be quite well behaved. For instance, by
-construction \(M(\lambda) = \mathcal{U}(\mathfrak{g}) \cdot v^+\) -- where
-\(v^+ = 1 \otimes v^+ \in M(\lambda)\) is as in definition~\ref{def:verma}.
-Moreover, we find\dots
-
-\begin{proposition}\label{thm:verma-is-weight-mod}
-  The weight spaces decomposition
-  \[
-    M(\lambda) = \bigoplus_{\mu \in \mathfrak{h}^*} M(\lambda)_\mu
-  \]
-  holds. Furthermore, \(\dim M(\lambda)_\mu < \infty\) for all \(\mu \in
-  \mathfrak{h}^*\) and \(\dim M(\lambda)_\lambda = 1\). Finally, \(\lambda\) is the
-  highest weight of \(M(\lambda)\), with highest weight vector given by \(v^+ =
-  1 \otimes v^+ \in M(\lambda)\) as in definition~\ref{def:verma}.
-\end{proposition}
-
-\begin{proof}
-  The Poincaré-Birkhoff-Witt theorem implies that \(M(\lambda)\) is spanned by
-  the vectors \(F_{\alpha_1} F_{\alpha_2} \cdots F_{\alpha_n} v^+\) for
-  \(\alpha_i \in \Delta^-\) and \(F_{\alpha_i} \in \mathfrak{g}_{\alpha_i}\) as
-  in the proof of proposition~\ref{thm:distinguished-subalgebra}. But
-  \[
-    \begin{split}
-      H F_{\alpha_1} F_{\alpha_2} \cdots F_{\alpha_n} v^+
-      & = ([H, F_{\alpha_1}] + F_{\alpha_1} H)
-          F_{\alpha_2} \cdots F_{\alpha_n} v^+ \\
-      & = \alpha_1(H) \cdot F_{\alpha_1} \cdots F_{\alpha_n} v^+
-        + F_{\alpha_1} ([H, F_{\alpha_2}] + F_{\alpha_2} H)
-          F_{\alpha_3} \cdots F_{\alpha_n} v^+ \\
-      & \;\; \vdots \\
-      & = (\alpha_1 + \cdots + \alpha_n)(H) \cdot
-          F_{\alpha_1} \cdots F_{\alpha_n} v^+
-        + F_{\alpha_1} \cdots F_{\alpha_n} H v^+ \\
-      & = (\lambda + \alpha_1 + \cdots + \alpha_n)(H) \cdot
-          F_{\alpha_1} \cdots F_{\alpha_n} v^+ \\
-      & \therefore F_{\alpha_1} \cdots F_{\alpha_n} v^+
-        \in M(\lambda)_{\lambda + \alpha_1 + \cdots + \alpha_n}
-    \end{split}
-  \]
-
-  Hence \(M(\lambda) \subset \bigoplus_{\mu \in \mathfrak{h}^*}
-  M(\lambda)_\mu\), as desired. In fact we have established
-  \[
-    M(\lambda)
-    \subset
-    \bigoplus_{\substack{k_i \in \ZZ \\ k_i \ge 0}}
-    M(\lambda)_{\lambda + k_1 \cdot \alpha_1 + \cdots + k_n \cdot \alpha_n}
-  \]
-  where \(\{\alpha_1, \ldots, \alpha_n\} = \Delta^-\), so that all weights of
-  \(M(\lambda)\) have the form \(\mu = \lambda + k_1 \cdot \alpha_1 + \cdots +
-  k_n \cdot \alpha_n\).
-
-  This already gives us that the weights of \(M(\lambda)\) are bounded by
-  \(\lambda\) -- in the sense that no weight of \(M(\lambda)\) is ``higher''
-  than \(\lambda\). To see that \(\lambda\) is indeed a weight, we show that
-  \(v^+\) is a nonzero weight vector. Clearly \(v^+ \in M(\lambda)_\lambda\). The
-  Poincaré-Birkhoff-Witt theorem implies
-  \[
-    M(\lambda)
-    \cong \left(\bigoplus_i \mathcal{U}(\mathfrak{b}) \right)
-    \otimes_{\mathcal{U}(\mathfrak{b})} K v^+
-    \cong \bigoplus_i \mathcal{U}(\mathfrak{b})
-    \otimes_{\mathcal{U}(\mathfrak{b})} K v^+
-    \cong \bigoplus_i K v^+
-    \ne 0
-  \]
-  as \(\mathcal{U}(\mathfrak{b})\)-modules, so \(v^+ \ne 0\) -- for if this was
-  not the case we would find \(M(\lambda) = \mathcal{U}(\mathfrak{g}) \cdot v^+
-  = 0\). Hence \(M(\lambda)_\lambda \ne 0\) and therefore \(\lambda\) is the highest
-  weight of \(M(\lambda)\), with highest weight vector \(v^+\).
-
-  To see that \(\dim M(\lambda)_\mu < \infty\), simply note that there are only
-  finitely many monomials \(F_{\alpha_1}^{k_1} F_{\alpha_2}^{k_2} \cdots
-  F_{\alpha_n}^{k_n}\) such that \(\mu = \lambda + k_1 \cdot \alpha_1 + \cdots
-  + k_n \cdot \alpha_n\). Since \(M(\lambda)_\mu\) is spanned by the images of
-  \(v^+\) under such monomials, we conclude \(\dim M(\lambda)_\mu < \infty\). In
-  particular, there is a single monomial \(F_{\alpha_1}^{k_1}
-  F_{\alpha_2}^{k_2} \cdots F_{\alpha_n}^{k_n}\) such that \(\lambda = \lambda
-  + k_1 \cdot \alpha_1 + \cdots + k_n \cdot \alpha_n\) -- which is, of course,
-  the monomial where \(k_1 = \cdots = k_n = 0\). Hence \(\dim M(\lambda)_\lambda = 1\).
-\end{proof}
-
-\begin{example}\label{ex:sl2-verma}
-  If \(\mathfrak{g} = \mathfrak{sl}_2(K)\), then we can take \(\mathfrak{h} = K
-  h\) and \(\mathfrak{b} = K e \oplus K h\). If \(\lambda \in
-  \mathfrak{h}^*\) is the map \(h \mapsto 2\) then \(M(\lambda) =
-  \bigoplus_{k \ge 0} K f^k v^+\), and the action of \(\mathfrak{sl}_2(K)\) in
-  \(M(\lambda)\) is given by
-  \begin{align*}
-    f^{k + 1} v^+ & \overset{e}{\mapsto} (2 - k (k - 1)) f^k v^+ &
-    f^{k + 1} v^+ & \overset{f}{\mapsto} f^{k + 2} v^+ &
-    f^{k + 1} v^+ & \overset{h}{\mapsto} - 2 k f^{k + 1} v^+
-  \end{align*}
-
-  In the language of the diagrams used in section~\ref{sec:sl2}, we write
-  % TODO: Add a label to the righ of the diagram indicating that the top arrows
-  % are the action of e and the bottom arrows are the action of f
-  \begin{center}
-    \begin{tikzcd}
-      \cdots \arrow[bend left=60]{r}{-10}
-      & M(\lambda)_{-6} \arrow[bend left=60]{r}{-4} \arrow[bend left=60]{l}{1}
-      & M(\lambda)_{-4} \arrow[bend left=60]{r}{0}  \arrow[bend left=60]{l}{1}
-      & M(\lambda)_{-2} \arrow[bend left=60]{r}{2}  \arrow[bend left=60]{l}{1}
-      & M(\lambda)_0    \arrow[bend left=60]{r}{2}  \arrow[bend left=60]{l}{1}
-      & M(\lambda)_2    \arrow[bend left=60]{l}{1}
-    \end{tikzcd}
-  \end{center}
-  where \(M(\lambda)_{2 - 2 k} = K f^k v^+\). In this case, unlike what we have
-  seen in section~\ref{sec:sl2}, the string of weight spaces to the left of the
-  diagram is infinite.
-\end{example}
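-
-The formula for the action of \(e\) may be checked directly: the relation
-\([h, f] = - 2 f\) gives \(h f^j = f^j (h - 2 j)\), so that
-\[
-  [e, f^{k + 1}]
-  = \sum_{i = 0}^{k} f^i [e, f] f^{k - i}
-  = \sum_{i = 0}^{k} f^i h f^{k - i}
-  = \sum_{i = 0}^{k} f^k (h - 2 (k - i))
-  = (k + 1) f^k (h - k)
-\]
-and hence, using \(e v^+ = 0\) and \(h v^+ = 2 v^+\), we recover
-\(e f^{k + 1} v^+ = (k + 1) (2 - k) f^k v^+ = (2 - k (k - 1)) f^k v^+\), in
-agreement with the formulas of example~\ref{ex:sl2-verma}.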
-
-What's interesting to us about all this is that we've just constructed a
-\(\mathfrak{g}\)-module whose highest weight is \(\lambda\). This is not a
-proof of theorem~\ref{thm:dominant-weight-theo}, however, since \(M(\lambda)\)
-is neither irreducible nor finite-dimensional. Nevertheless, we can use
-\(M(\lambda)\) to construct an irreducible representation of \(\mathfrak{g}\)
-whose highest weight is \(\lambda\).
-
-\begin{proposition}\label{thm:max-verma-submod-is-weight}
-  Every subrepresentation \(V \subset M(\lambda)\) is the direct sum of its
-  weight spaces. In particular, \(M(\lambda)\) has a unique maximal
-  subrepresentation \(N(\lambda)\) and a unique irreducible quotient
-  \(\sfrac{M(\lambda)}{N(\lambda)}\).
-\end{proposition}
-
-\begin{proof}
-  Let \(V \subset M(\lambda)\) be a subrepresentation and take any nonzero \(v
-  \in V\). Because of proposition~\ref{thm:verma-is-weight-mod}, we know there
-  are distinct \(\mu_1, \ldots, \mu_n \in \mathfrak{h}^*\) and nonzero \(v_i \in
-  M(\lambda)_{\mu_i}\) such that \(v = v_1 + \cdots + v_n\). We want to show
-  \(v_i \in V\) for all \(i\).
-
-  Fix some \(H_2 \in \mathfrak{h}\) such that \(\mu_1(H_2) \ne \mu_2(H_2)\).
-  Then
-  \[
-    v_1
-    + \frac{(\mu_2 - \mu_3)(H_2)}{(\mu_2 - \mu_1)(H_2)} v_3
-    + \cdots
-    + \frac{(\mu_2 - \mu_n)(H_2)}{(\mu_2 - \mu_1)(H_2)} v_n
-    = \left( 1 - \frac{H_2 - \mu_1(H_2)}{(\mu_2 - \mu_1)(H_2)} \right) v
-    \in V
-  \]
-
-  Now take \(H_3 \in \mathfrak{h}\) such that \(\mu_1(H_3) \ne \mu_3(H_3)\). By
-  applying the same procedure again we get
-  \begin{multline*}
-    v_1
-    +
-    \frac{(\mu_3 - \mu_4)(H_3) \cdot (\mu_2 - \mu_4)(H_2)}
-         {(\mu_3 - \mu_1)(H_3) \cdot (\mu_2 - \mu_1)(H_2)} v_4
-    + \cdots +
-    \frac{(\mu_3 - \mu_n)(H_3) \cdot (\mu_2 - \mu_n)(H_2)}
-         {(\mu_3 - \mu_1)(H_3) \cdot (\mu_2 - \mu_1)(H_2)} v_n \\
-    =
-    \left(1 - \frac{H_3 - \mu_1(H_3)}{(\mu_3 - \mu_1)(H_3)} \right)
-    \left(1 - \frac{H_2 - \mu_1(H_2)}{(\mu_2 - \mu_1)(H_2)} \right) v
-    \in V
-  \end{multline*}
-
-  By repeating this procedure we can see that \(v_1 = X v \in V\) for some \(X
-  \in \mathcal{U}(\mathfrak{g})\). Furthermore, if we reproduce all this for
-  \(v_2 + \cdots + v_n = v - v_1 \in V\) we get that \(v_2 \in V\). Proceeding
-  inductively, we find \(v_1, \ldots, v_n \in V\). Hence
-  \[
-    V = \bigoplus_\mu V_\mu = \bigoplus_\mu M(\lambda)_\mu \cap V
-  \]
-
-  Since \(M(\lambda) = \mathcal{U}(\mathfrak{g}) \cdot v^+\), if \(V\) is a
-  proper subrepresentation then \(v^+ \notin V\). Hence any proper submodule
-  lies in
-  the sum of weight spaces other than \(M(\lambda)_\lambda\), so the sum
-  \(N(\lambda)\) of all such submodules is still proper. In fact, this implies
-  \(N(\lambda)\) is the unique maximal subrepresentation of \(M(\lambda)\) and
-  \(\sfrac{M(\lambda)}{N(\lambda)}\) is its unique irreducible quotient.
-\end{proof}
-
-\begin{example}\label{ex:sl2-verma-quotient}
-  If \(\mathfrak{g} = \mathfrak{sl}_2(K)\) and \(\lambda : h \mapsto 2\), we
-  can see from example~\ref{ex:sl2-verma} that \(N(\lambda) = \bigoplus_{k \ge
-  3} K f^k v^+\), so that \(\sfrac{M(\lambda)}{N(\lambda)}\) is the
-  \(3\)-dimensional irreducible representation of \(\mathfrak{sl}_2(K)\) --
-  i.e. the finite-dimensional irreducible representation with highest weight
-  \(\lambda\) constructed in section~\ref{sec:sl2}.
-\end{example}
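-
-To see that \(N(\lambda) = \bigoplus_{k \ge 3} K f^k v^+\) is indeed a
-subrepresentation, note that it is clearly invariant under the action of \(f\)
-and \(h\), while the formulas in example~\ref{ex:sl2-verma} show that \(e\)
-takes \(f^k v^+\) to a multiple of \(f^{k - 1} v^+ \in N(\lambda)\) for \(k \ge
-4\) and that \(e f^3 v^+ = (2 - 2 (2 - 1)) f^2 v^+ = 0\).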
-
-This last example is particularly interesting to us, since it indicates that
-the finite-dimensional irreducible representations of \(\mathfrak{sl}_2(K)\)
-arise as quotients of Verma modules. This is because the quotient
-\(\sfrac{M(\lambda)}{N(\lambda)}\) in example~\ref{ex:sl2-verma-quotient}
-happened to be finite-dimensional. As it turns out, this is always the case for
-semisimple \(\mathfrak{g}\). Namely\dots
-
-\begin{proposition}\label{thm:verma-is-finite-dim}
-  If \(\lambda\) is dominant integral then the unique irreducible quotient of
-  \(M(\lambda)\) is finite-dimensional.
-\end{proposition}
-
-The proof of proposition~\ref{thm:verma-is-finite-dim} is very technical and we
-won't include it here, but the idea behind it is to show that the set of
-weights of \(\sfrac{M(\lambda)}{N(\lambda)}\) is stable under the natural
-action of the Weyl group \(\mathcal{W}\) in \(\mathfrak{h}^*\). One can then
-show that every weight of \(\sfrac{M(\lambda)}{N(\lambda)}\) is conjugate to a
-single dominant integral weight of \(\sfrac{M(\lambda)}{N(\lambda)}\), and that
-the set of dominant integral weights of this irreducible quotient is finite.
-Since \(\mathcal{W}\) is finite, this implies that the set of weights of the
-unique irreducible quotient of \(M(\lambda)\) is finite. But each weight space is
-finite-dimensional. Hence so is the irreducible quotient.
-
-We refer the reader to \cite[ch. 21]{humphreys} for further details. What we
-are really interested in is\dots
-
-\begin{corollary}
-  There is a finite-dimensional irreducible \(\mathfrak{g}\)-module \(V\) whose
-  highest weight is \(\lambda\).
-\end{corollary}
-
-\begin{proof}
-  Let \(V = \sfrac{M(\lambda)}{N(\lambda)}\). It suffices to show that its
-  highest weight is \(\lambda\). We have already seen that \(v^+ \in
-  M(\lambda)_\lambda\) is a highest weight vector. Now since \(v^+\) lies outside
-  of the maximal subrepresentation of \(M(\lambda)\), the projection \(v^+ +
-  N(\lambda) \in V\) is nonzero.
-
-  % TODO: Why is V_mu = M(lambda)_mu + N(lambda)? Turn this into a proposition?
-  We now claim that \(v^+ + N(\lambda) \in V_\lambda\). Indeed,
-  \[
-    H (v^+ + N(\lambda))
-    = H v^+ + N(\lambda)
-    = \lambda(H) \cdot (v^+ + N(\lambda))
-  \]
-  for all \(H \in \mathfrak{h}\). Hence \(\lambda\) is a weight of \(V\), with
-  weight vector \(v^+ + N(\lambda)\). Finally, we remark that \(\lambda\) is
-  the highest weight of \(V\), for if this was not the case we could find a
-  weight \(\mu\) of \(V\) -- and hence of \(M(\lambda)\) -- which is higher than
-  \(\lambda\), contradicting proposition~\ref{thm:verma-is-weight-mod}.
-\end{proof}
-
-% TODO: Write a conclusion and move this to the next chapter
diff --git a/tcc.tex b/tcc.tex
@@ -24,7 +24,7 @@
 
 \input{sections/introduction}
 
-\input{sections/semisimple-algebras}
+\input{sections/complete-reducibility}
 
 \input{sections/mathieu}