diff --git a/sections/lie-algebras.tex b/sections/lie-algebras.tex
@@ -0,0 +1,1877 @@
+\chapter{Semisimple Lie Algebras \& their Representations}\label{ch:lie-algebras}
+
+\epigraph{Nobody has ever bet enough on a winning horse.}{\textit{Some
+gambler}}
+
+We've just established that \(\Rep(G) \cong \Rep(\mathfrak{g}_\CC)\) for each
+simply connected \(G\), but how do we go about classifying the representations
+of \(\mathfrak{g}_\CC\)? We can very quickly see that many of the aspects of
+the representation theory of compact groups that were essential for the
+solution of our classification problem are no longer valid in this context.
+For instance, if we take
+\[
+ \mathfrak g =
+ \begin{pmatrix}
+ \CC & \CC \\
+ 0 & 0
+ \end{pmatrix}
+ \subset \gl_2(\CC)
+\]
+and consider its natural representation \(V = \CC^2\) we can see that \(W =
+\langle (1, 0) \rangle\) is a subrepresentation with no \(\mathfrak
+g\)-invariant complement. The issue we now face is, of course, the fact that
+complete reducibility fails for some complex Lie algebras. In other words,
+understanding the irreducible representations of an algebra isn't enough,
+because there may be representations which cannot be expressed as the direct
+sum of irreducible ones.
+
+% TODO: Update the "two chapter!" thing after those chapters are done?
+This is not going to be an easy ride\dots \ In fact, even after some further
+restrictions we will soon impose, this classification
+problem will keep us busy for the next \emph{two} chapters! The primary goal
+of this chapter is to highlight these restrictions and study some particular
+examples in the hopes of getting some insight into the general case. We should
+point out that the following discussion is \emph{immensely} inspired by the third
+part of \citetitle{fulton-harris} \cite{fulton-harris}: Fulton \& Harris are
+the real authors of this chapter, I am merely a commentator on their work.
+
+The restriction we'll make is, as you might have guessed, that we'll primarily
+focus on \emph{semisimple Lie algebras} -- those for which \emph{complete
+reducibility}, also known as \emph{semisimplicity}, holds.
+This is a bit of an admission of defeat from my end, as we won't ever get to
+classify the smooth representations of \emph{all} simply connected Lie groups,
+but keep in mind that the problem of classifying the finite-dimensional
+representations of an arbitrary finite-dimensional Lie algebra is still an open
+one. In other words, the semisimple Lie algebras are the only algebras whose
+representations we can expect to understand in full generality. This goes to
+show the importance of complete reducibility in representation theory.
+
+I guess we could simply define semisimple Lie algebras as the class of complex
+Lie algebras whose representations are completely reducible, but this is about
+as satisfying as saying ``the semisimple are the ones who won't cause us any
+trouble''. Who are the semisimple Lie algebras? Why does complete reducibility
+hold for them?
+
+\section{Semisimplicity \& Complete Reducibility}
+
+There are multiple equivalent ways to define what a semisimple Lie algebra is,
+the most obvious of which we have already mentioned above. Perhaps the
+most common definition is\dots
+
+\begin{definition}\label{thm:sesimple-algebra}
+ A Lie algebra \(\mathfrak g\) is called \emph{semisimple} if it has no
+ non-zero solvable ideals -- i.e. subalgebras \(\mathfrak h\) with
+ \([\mathfrak h, \mathfrak g] \subset \mathfrak h\) whose derived series
+ \[
+ \mathfrak h
+ \supseteq [\mathfrak h, \mathfrak h]
+ \supseteq [[\mathfrak h, \mathfrak h], [\mathfrak h, \mathfrak h]]
+ \supseteq
+ [
+ [[\mathfrak h, \mathfrak h], [\mathfrak h, \mathfrak h]],
+ [[\mathfrak h, \mathfrak h], [\mathfrak h, \mathfrak h]]
+ ]
+ \supseteq \cdots
+ \]
+ converges to \(0\) in finite time.
+\end{definition}
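+
+For concreteness, let us go back to the algebra \(\mathfrak g \subset
+\gl_2(\CC)\) from the introduction and check -- as a quick aside -- that it is
+solvable, and hence not semisimple. If
+\(X = \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}\) and
+\(Y = \begin{pmatrix} c & d \\ 0 & 0 \end{pmatrix}\) then
+\[
+  [X, Y]
+  = X Y - Y X
+  = \begin{pmatrix} 0 & a d - c b \\ 0 & 0 \end{pmatrix},
+\]
+so that \([\mathfrak g, \mathfrak g] =
+\CC \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\) is Abelian and the derived
+series of \(\mathfrak g\) terminates after two steps -- which is no surprise,
+given that complete reducibility failed for this very algebra.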
+
+\begin{example}
+ The complex Lie algebras \(\sl_n(\CC)\) and \(\mathfrak{sp}_{2 n}(\CC)\) are
+ both semisimple -- see the section of \cite{kirillov} on invariant bilinear
+ forms and the semisimplicity of classical Lie algebras.
+\end{example}
+
+A popular alternative to definition~\ref{thm:sesimple-algebra} is\dots
+
+\begin{definition}\label{def:semisimple-is-direct-sum}
+ A Lie algebra \(\mathfrak g\) is called semisimple if it is the direct sum of
+  simple Lie algebras -- i.e. non-Abelian Lie algebras \(\mathfrak s\) whose
+  only ideals are \(0\) and \(\mathfrak s\).
+\end{definition}
+
+\begin{example}
+  On the other hand, a non-zero Abelian Lie algebra \(\mathfrak{g}\) is never
+  semisimple. Indeed, \([\mathfrak{g}, \mathfrak{g}] = 0\), so that
+  \(\mathfrak{g}\) is itself a non-zero solvable ideal of \(\mathfrak{g}\).
+  This is also the reason why we require simple Lie algebras to be
+  non-Abelian: if \(\{X_i : i \in \Lambda\}\) is a basis of an Abelian algebra
+  \(\mathfrak{g}\) then the fact that \([X_i, X_j] = 0\) for all \(i\) and
+  \(j\) implies \(\mathfrak{g} \cong \bigoplus_i \CC X_i\) as a Lie algebra,
+  and each one-dimensional subalgebra \(\CC X_i\) has no ideals other than
+  \(0\) and \(\CC X_i\) -- so without the non-Abelian requirement
+  definition~\ref{def:semisimple-is-direct-sum} would wrongly count
+  \(\mathfrak{g}\) as semisimple.
+\end{example}
+
+\begin{example}
+  The direct sum of any two semisimple Lie algebras is again semisimple -- for
+  instance, \(\sl_n(\CC) \oplus \sl_m(\CC)\) is semisimple. On the other hand,
+  \(\gl_n(\CC) = \sl_n(\CC) \oplus \CC\) is \emph{not} semisimple: its center
+  \(\CC\) is a non-zero Abelian -- hence solvable -- ideal.
+\end{example}
+
+I suppose this last definition explains the nomenclature, but what does any of
+this have to do with complete reducibility? Well, the special thing about
+semisimple complex Lie algebras is that they are complexifications of
+\emph{compact algebras}. Compact Lie algebras are, as you might have guessed,
+\emph{algebras that come from compact groups}. In other words\dots
+
+\begin{theorem}\label{thm:compact-form}
+ If \(\mathfrak g\) is a complex semisimple Lie algebra, there exists a
+ (unique) semisimple real form of \(\mathfrak g\) whose simply connected form
+ is compact.
+\end{theorem}
+
+The proof of theorem~\ref{thm:compact-form} is quite involved and we will only
+provide a rough outline -- mostly based on the proof by \cite{fegan} -- but the
+interesting thing about it is the fact that we have already classified the
+smooth (complex) representations of compact Lie groups: because of
+chapter~\ref{ch:compacts}, we already know every smooth representation of a
+compact group is completely reducible. We can then use this knowledge to lift
+complete reducibility to our semisimple algebra. For instance, given a complex
+representation \(V\) of \(\sl_n(\CC)\) and a subrepresentation \(W \subset V\),
+\begin{enumerate}
+ \item Since \(\mathfrak{su}_n \otimes \CC \cong \sl_n(\CC)\), \(V\) and \(W\)
+ correspond to complex representations of \(\mathfrak{su}_n\)
+ \item Since \(\SU_n\) is simply connected, \(V\) and \(W\) correspond to
+ smooth representations of \(\SU_n\)
+ \item Since \(\SU_n\) is compact, \(W\) admits a \(\SU_n\)-invariant
+ complement \(U \subset V\)
+ \item \(U\) is invariant under the action of \(\mathfrak{su}_n\), so\dots
+ \item \(U\) is invariant under the action of \(\sl_n(\CC)\) and
+ therefore\dots
+  \item \(U\) is an \(\sl_n(\CC)\)-invariant complement of \(W\) in \(V\)
+\end{enumerate}
+
+If we assume theorem~\ref{thm:compact-form} and replace \(\sl_n(\CC)\) with
+some arbitrary semisimple \(\mathfrak{g}\) in the previous paragraph we arrive
+at a proof of\dots
+
+\begin{theorem}
+  Every finite-dimensional representation of a semisimple Lie algebra is
+  completely reducible.
+\end{theorem}
+
+By the same token, most of the other aspects of the representation theory of
+compact groups must also hold in the context of semisimple algebras. For
+instance, we have\dots
+
+\begin{lemma}[Schur]
+ Let \(V\) and \(W\) be two irreducible representations of a complex
+ semisimple Lie algebra \(\mathfrak{g}\) and \(T : V \to W\) be an
+ intertwining operator. Then either \(T = 0\) or \(T\) is an isomorphism.
+  Furthermore, if \(V = W\) then \(T\) is a scalar multiple of the identity.
+\end{lemma}
+
+\begin{corollary}
+ Every irreducible representation of an Abelian Lie group is 1-dimensional.
+\end{corollary}
+
+Indeed, if we take theorem~\ref{thm:compact-form} at face
+value we can easily see that given a semisimple Lie algebra \(\mathfrak g\),
+\[
+ \Rep(\mathfrak g)
+ \cong \Rep(\mathfrak{g}_\RR)
+ \cong \Rep(G),
+\]
+where \(\mathfrak{g}_\RR\) is some real form of \(\mathfrak g\) with compact
+simply connected form \(G\). This is what's known as \emph{Weyl's unitarization
+trick}, and historically it was the first proof of complete reducibility for
+semisimple Lie algebras.
+
+Alternatively, one could prove the same statement in a purely algebraic manner
+by showing the first Lie algebra cohomology group \(H^1(\mathfrak{g}, V) =
+\Ext^1(\CC, V)\) vanishes for all \(V\), as do \cite{kirillov} and
+\cite{lie-groups-serganova-student} in their proofs. More precisely, one can
+show that there is a natural bijection between \(H^1(\mathfrak{g}, \Hom(V,
+W))\) and isomorphism classes of the representations \(U\) of \(\mathfrak{g}\)
+such that there is an exact sequence
+\begin{center}
+ \begin{tikzcd}
+ 0 \arrow{r} &
+ V \arrow{r} &
+ U \arrow{r} &
+ W \arrow{r} &
+ 0
+ \end{tikzcd}
+\end{center}
+
+This implies every exact sequence of \(\mathfrak{g}\)-representations splits --
+which, if you recall theorem~\ref{thm:complete-reducibility-equiv}, is
+equivalent to complete reducibility -- if, and only if \(H^1(\mathfrak{g},
+\Hom(V, W)) = 0\) for all \(V\) and \(W\). The algebraic approach has the
+advantage of working for Lie algebras over arbitrary fields, but in keeping
+with our principle of preferring geometric arguments over purely algebraic ones
+we'll instead focus on the unitarization trick. What follows is a sketch of its
+proof, whose main ingredient is\dots
+
+\section{The Killing Form}
+
+\begin{definition}
+  Given a Lie algebra \(\mathfrak g\) -- either real or complex -- its Killing
+  form is the symmetric bilinear form
+ \[
+ K(X, Y) = \Tr(\ad(X) \ad(Y))
+ \]
+\end{definition}
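+
+For instance -- as a quick sanity check -- take \(\sl_2(\CC)\) with the basis
+\(e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\),
+\(f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\) and
+\(h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\), which we will fix again
+in the section on \(\sl_2(\CC)\) below. Writing \(\ad(e)\), \(\ad(f)\) and
+\(\ad(h)\) as \(3 \times 3\) matrices in this basis and taking traces of their
+products one finds
+\begin{align*}
+  K(h, h) & = 8 &
+  K(e, f) & = K(f, e) = 4,
+\end{align*}
+while \(K\) vanishes on all other pairs of basis elements. In particular, the
+Killing form of \(\sl_2(\CC)\) is non-degenerate.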
+
+The Killing form certainly deserves much more attention than what we can
+afford at the present moment, but what's relevant to us is the fact that
+theorem~\ref{thm:compact-form} can be deduced from an algebraic condition
+satisfied by the Killing forms of complex semisimple algebras. Explicitly\dots
+
+\begin{theorem}\label{thm:killing-form-is-negative}
+ If \(\mathfrak g\) is semisimple then there exists a semisimple real Lie
+ algebra \(\mathfrak{g}_\RR\) whose complexification is precisely \(\mathfrak
+ g\) and whose Killing form is negative-definite.
+\end{theorem}
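+
+For instance, \(\mathfrak{su}_2 \subset \sl_2(\CC)\) is a real form of
+\(\sl_2(\CC)\) whose simply connected form \(\SU_2\) is compact, and in the
+basis
+\begin{align*}
+  I & = \begin{pmatrix} i & 0 \\ 0 & - i \end{pmatrix} &
+  J & = \begin{pmatrix} 0 & 1 \\ - 1 & 0 \end{pmatrix} &
+  K & = \begin{pmatrix} 0 & - i \\ - i & 0 \end{pmatrix}
+\end{align*}
+-- which we will meet again when computing the characters of \(\SU_2\) -- the
+Killing form of \(\mathfrak{su}_2\) takes the value \(-8\) on each basis
+element paired with itself and vanishes on pairs of distinct basis elements.
+In other words, it is \(-8\) times the identity in this basis, which is indeed
+negative-definite.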
+
+The proof of theorem~\ref{thm:killing-form-is-negative} is combinatorial in
+nature and it can be found in chapter 26 of \cite{fulton-harris}. What we're
+interested in at the moment is showing it implies
+theorem~\ref{thm:compact-form}. We'll start out by showing\dots
+
+\begin{lemma}
+ If \(\mathfrak{g}_\RR\) is a real Lie algebra with negative-definite Killing
+ form and \(G\) is its simply connected form then \(\mfrac{G}{Z(G)}\) is
+ compact.
+\end{lemma}
+
+\begin{proof}
+  Let \(G\) be the simply connected form of \(\mathfrak{g}_\RR\). Consider
+  the adjoint action \(\Ad : G \to \Aut(\mathfrak{g}_\RR)\).
+
+  We'll start by pointing out that given \(g \in G\),
+ \[
+ \begin{split}
+ K(X, Y)
+ & = \Tr(\ad(X) \ad(Y)) \\
+ & = \Tr(\Ad(g) (\ad(X) \ad(Y)) \Ad(g)^{-1}) \\
+ & = \Tr((\Ad(g) \ad(X) \Ad(g)^{-1}) (\Ad(g) \ad(Y) \Ad(g)^{-1})) \\
+ \text{(because \(\Ad(g)\) is a homomorphism)}
+ & = \Tr(\ad(\Ad(g) X) \ad(\Ad(g) Y)) \\
+      & = K(\Ad(g) X, \Ad(g) Y)
+ \end{split}
+ \]
+
+ Now since \(K\) is negative-definite, \(\Ad(g)\) is an orthogonal operator.
+ Hence \(\Ad(G)\) is a closed subgroup of \(\operatorname{O}(n)\) -- where \(n
+  = \dim \mathfrak{g}_\RR\). Notice \(Z(G) = \ker \Ad\). Indeed, if \(\Ad(g) =
+  \Id\) then by corollary~\ref{thm:lie-group-morphism-at-identity}
+  \(h \mapsto g h g^{-1}\) is the identity map -- i.e. \(g \in Z(G)\). It then
+ follows from the fact that \(\operatorname{O}(n)\) is compact that
+ \[
+ \mfrac{G}{Z(G)}
+ = \mfrac{G}{\ker \Ad}
+ \cong \Ad(G)
+ \]
+ is compact.
+\end{proof}
+
+We should point out that this last trick can also be used to prove that
+\(\mathfrak{g}_\RR\) is the direct sum of simple algebras. Indeed, if
+\(\mathfrak{g}_\RR\) is not simple then, by definition, it has a non-zero
+proper ideal \(\mathfrak h\). We can then consider its orthogonal complement
+\(\mathfrak{h}^\perp\) under the Killing form, so that \(\mathfrak{h}^\perp\)
+is also an ideal and \(\mathfrak{g}_\RR = \mathfrak{h} \oplus
+\mathfrak{h}^\perp\). Now by induction on the dimension of \(\mathfrak{g}_\RR\)
+we see that theorem~\ref{thm:killing-form-is-negative} implies the
+characterization of definition~\ref{def:semisimple-is-direct-sum}.
+
+To conclude this dubious attempt at a proof, we refer to a theorem by Hermann
+Weyl, whose proof is beyond the scope of these notes as it requires calculating
+the Ricci curvature of \(G\) \footnote{The Ricci curvature is a tensor related
+to any given connection in a manifold. In this proof we're interested in the
+Ricci curvature of the Riemannian connection of \(\widetilde H\) under the
+metric given by the pullback of the unique bi-invariant metric of \(H\) along
+the covering map \(\widetilde H \to H\).} -- for a proof please refer to
+theorem 3.2.15 of \cite{gorodski}. What's interesting about this theorem is that it
+implies\dots
+
+\begin{theorem}[Weyl]
+ If \(H\) is a compact connected Lie group with discrete center then its
+ universal cover \(\widetilde H\) is also compact.
+\end{theorem}
+
+\begin{proof}[Proof of theorem~\ref{thm:compact-form}]
+  Let \(\mathfrak{g}_\RR\) be a semisimple real form of \(\mathfrak g\) with
+  negative-definite Killing form and let \(G\) be its simply connected form.
+  Because of the previous lemma, we already know \(\mfrac{G}{Z(G)}\) is
+  compact and centerless. Hence by Weyl's theorem
+ it suffices to show \(Z(G) = \ker \Ad\) is discrete -- so that the universal
+ cover of \(\mfrac{G}{Z(G)}\) is \(G\).
+
+ To do so, we consider its Lie algebra \(\mathfrak z = \ker \ad\) -- also
+ known as the center of \(\mathfrak{g}_\RR\). Notice \(\mathfrak z\) is an
+ ideal. In fact, \(\mathfrak z\) is a solvable ideal of \(\mathfrak{g}_\RR\)
+ -- indeed, \([\mathfrak z, \mathfrak z] = 0\). This implies \(\mathfrak z =
+ 0\) and therefore \(Z(G)\) is a 0-dimensional Lie group -- i.e. a discrete
+ group. We are done.
+\end{proof}
+
+These results can be generalized to a certain extent by considering the exact
+sequence
+\begin{center}
+ \begin{tikzcd}
+ 0 \arrow{r} &
+ \Rad(\mathfrak g) \arrow{r} &
+ \mathfrak g \arrow{r} &
+ \mfrac{\mathfrak g}{\Rad(\mathfrak g)} \arrow{r} &
+ 0
+ \end{tikzcd}
+\end{center}
+where \(\Rad(\mathfrak g)\) is the sum of all solvable ideals of \(\mathfrak
+g\) -- i.e. a maximal solvable ideal -- for arbitrary complex \(\mathfrak g\).
+This implies we can deduce information about the representations of \(\mathfrak
+g\) by studying those of its semisimple part \(\mfrac{\mathfrak
+g}{\Rad(\mathfrak g)}\). In practice though, this isn't quite satisfactory
+because the exactness of this last sequence translates to the
+underwhelming\dots
+
+\begin{theorem}\label{thm:semi-simple-part-decomposition}
+ Every irreducible representation of \(\mathfrak g\) is the tensor product of
+ an irreducible representation of its semisimple part \(\mfrac{\mathfrak
+ g}{\Rad(\mathfrak g)}\) and a one-dimensional representation of \(\mathfrak
+ g\).
+\end{theorem}
+
+We say that this isn't satisfactory because
+theorem~\ref{thm:semi-simple-part-decomposition} is a statement about
+\emph{irreducible} representations of \(\mathfrak g\). This may sound a bit
+unfair, as theorem~\ref{thm:semi-simple-part-decomposition} does lead to a
+complete classification of a large class of representations of \(\mathfrak g\)
+-- those that are the direct sum of irreducible representations -- but the
+point is that these may not be all possible representations if \(\mathfrak g\)
+is not semisimple. That said, we can finally get to the classification itself.
+Without further ado, we'll start out by highlighting a concrete example of the
+general paradigm we'll later adopt: that of \(\sl_2(\CC)\).
+
+\section{Representations of \(\sl_2(\CC)\)}
+
+The primary goal of this section is proving\dots
+
+\begin{theorem}\label{thm:sl2-exist-unique}
+ For each \(n > 0\), there exists precisely one irreducible representation
+ \(V\) of \(\sl_2(\CC)\) with \(\dim V = n\).
+\end{theorem}
+
+It's important to note, however, that -- as promised -- we will end up with an
+explicit construction of \(V\). The general approach we'll take is supposing
+\(V\) is an irreducible representation of \(\sl_2(\CC)\) and then deriving some
+information about its structure. We begin our analysis by pointing out that
+the elements
+\begin{align*}
+ e & = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} &
+ f & = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} &
+ h & = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
+\end{align*}
+form a basis of \(\sl_2(\CC)\) and satisfy
+\begin{align*}
+  [e, f] & = h & [h, f] & = -2 f & [h, e] & = 2 e
+\end{align*}
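+
+Indeed, writing these out,
+\[
+  e f - f e
+  = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
+  - \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
+  = h,
+\]
+and the relations \([h, e] = 2 e\) and \([h, f] = -2 f\) follow just as
+quickly.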
+
+This is interesting to us because it implies every non-zero subspace of \(V\)
+invariant under the actions of \(e\), \(f\) and \(h\) has to be \(V\) itself.
+Next we
+turn our attention to the action of \(h\) in \(V\), in particular, to the
+eigenspace decomposition
+\[
+ V = \bigoplus_{\lambda} V_\lambda
+\]
+of \(V\) -- where \(\lambda\) ranges over the eigenvalues of \(h\) and
+\(V_\lambda\) is the corresponding eigenspace. At this point, this is nothing
+short of a gamble: why look at the eigenvalues of \(h\)?
+
+The short answer is that, as we shall see, this will pay off -- which
+conveniently justifies the epigraph of this chapter. For now we will postpone
+the discussion about the real reason why we chose \(h\). Let \(\lambda\) be
+any eigenvalue of \(h\). Notice \(V_\lambda\) is in general not a
+subrepresentation of \(V\). Indeed, if \(v \in V_\lambda\) then
+\begin{align*}
+ h e v & = 2e v + e h v = (\lambda + 2) e v \\
+ h f v & = - 2f v + f h v = (\lambda - 2) f v
+\end{align*}
+
+In other words, \(e\) sends an element of \(V_\lambda\) to an element of
+\(V_{\lambda + 2}\), while \(f\) sends it to an element of \(V_{\lambda - 2}\).
+Hence
+\begin{center}
+ \begin{tikzcd}
+ \cdots \arrow[bend left=60]{r}
+ & V_{\lambda - 2} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}
+ & V_{\lambda} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
+ & V_{\lambda + 2} \arrow[bend left=60]{r} \arrow[bend left=60]{l}{f}
+ & \cdots \arrow[bend left=60]{l}
+ \end{tikzcd}
+\end{center}
+and \(\bigoplus_{n \in \ZZ} V_{\lambda + 2 n}\) is an \(\sl_2(\CC)\)-invariant
+subspace. This implies
+\[
+ V = \bigoplus_{n \in \ZZ} V_{\lambda + 2 n},
+\]
+so that the eigenvalues of \(h\) all have the form \(\lambda + 2 n\) for some
+\(n\) -- since \(V_\mu = 0\) for all \(\mu \notin \lambda + 2 \ZZ\).
+
+Even more so, if \(a = \min \{ n \in \ZZ : V_{\lambda + 2 n} \ne 0 \}\) and
+\(b = \max \{ n \in \ZZ : V_{\lambda + 2 n} \ne 0 \}\) then there can be no
+gaps in between: if \(V_{\lambda + 2 m} = 0\) for some \(a < m < b\) then
+\[
+  \bigoplus_{\substack{n \in \ZZ \\ m < n \le b}} V_{\lambda + 2 n}
+\]
+would be a non-zero proper \(\sl_2(\CC)\)-invariant subspace of \(V\) -- since
+\(f\) carries \(V_{\lambda + 2 (m + 1)}\) into \(V_{\lambda + 2 m} = 0\).
+Hence the eigenvalues of \(h\) form an unbroken string
+\[
+  \ldots, \lambda - 4, \lambda - 2, \lambda, \lambda + 2, \lambda + 4, \ldots
+\]
+around \(\lambda\).
+
+Our main objective is to show \(V\) is determined by this string of
+eigenvalues. To do so, we suppose without any loss in generality that
+\(\lambda\) is the right-most eigenvalue of \(h\), fix some non-zero \(v \in
+V_\lambda\) and consider the set \(\{v, f v, f^2 v, \ldots\}\).
+
+\begin{theorem}\label{thm:basis-of-irr-rep}
+  The set \(\{v, f v, f^2 v, \ldots\}\) is a basis for \(V\).
+\end{theorem}
+
+\begin{proof}
+ First of all, notice \(f^k v\) lies in \(V_{\lambda - 2 k}\), so that \(\{v,
+ f v, f^2 v, \ldots\}\) is a set of linearly independent vectors. Hence it
+ suffices to show \(V = \langle v, f v, f^2 v, \ldots \rangle\), which in
+ light of the fact that \(V\) is irreducible is the same as showing \(\langle
+ v, f v, f^2 v, \ldots \rangle\) is invariant under the action of
+ \(\sl_2(\CC)\).
+
+ The fact that \(h f^k v \in \langle v, f v, f^2 v, \ldots \rangle\) follows
+ immediately from our previous assertion that \(f^k v \in V_{\lambda - 2 k}\)
+ -- indeed, \(h f^k v = (\lambda - 2 k) f^k v\). Seeing \(e f^k v \in \langle
+ v, f v, f^2 v, \ldots \rangle\) is a bit more complex. Clearly,
+ \[
+ \begin{split}
+ e f v
+ & = h v + f e v \\
+ \text{(since \(\lambda\) is the right-most eigenvalue)}
+ & = h v + f 0 \\
+ & = \lambda v
+ \end{split}
+ \]
+
+ Next we compute
+ \[
+ \begin{split}
+ e f^2 v
+ & = (h + fe) f v \\
+ & = h f v + f (\lambda v) \\
+ & = 2 (\lambda - 1) f v
+ \end{split}
+ \]
+
+ The pattern is starting to become clear: \(e\) sends \(f^k v\) to a multiple
+ of \(f^{k - 1} v\). Explicitly, it's not hard to check by induction that
+ \[
+ e f^k v = k (\lambda + 1 - k) f^{k - 1} v
+ \]
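+
+  Indeed, the inductive step is a one-line computation: assuming the formula
+  holds for \(k - 1\) and using \(e f = f e + h\) together with
+  \(h f^{k - 1} v = (\lambda - 2 (k - 1)) f^{k - 1} v\),
+  \[
+    \begin{split}
+      e f^k v
+      & = (h + f e) f^{k - 1} v \\
+      & = (\lambda - 2 (k - 1)) f^{k - 1} v
+        + (k - 1) (\lambda + 2 - k) f^{k - 1} v \\
+      & = k (\lambda + 1 - k) f^{k - 1} v
+    \end{split}
+  \]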
+\end{proof}
+
+\begin{note}
+ For this last formula to work we fix the convention that \(f^{-1} v = 0\) --
+ which is to say \(e v = 0\).
+\end{note}
+
+Theorem~\ref{thm:basis-of-irr-rep} may seem unrelated to our problem at first,
+but its significance lies in the fact that we have just provided a complete
+description of the action of \(\sl_2(\CC)\) in \(V\). In other words\dots
+
+\begin{corollary}
+ \(V\) is completely determined by the right-most eigenvalue \(\lambda\) of
+ \(h\).
+\end{corollary}
+
+\begin{proof}
+ If \(W\) is an irreducible representation of \(\sl_2(\CC)\) whose
+ right-most eigenvalue of \(h\) is \(\lambda\) and \(w \in W_\lambda\) is
+ non-zero, consider the linear isomorphism
+ \begin{align*}
+ T : V & \to W \\
+ f^k v & \mapsto f^k w
+ \end{align*}
+
+ We claim \(T\) is an intertwining operator. Indeed, the explicit calculations
+ of \(e f^k v\) and \(h f^k v\) from the previous proof imply
+ \begin{align*}
+ T e & = e T & T f & = f T & T h & = h T
+ \end{align*}
+\end{proof}
+
+Other important consequences of theorem~\ref{thm:basis-of-irr-rep} are\dots
+
+\begin{corollary}
+ Every \(h\) eigenspace is one-dimensional.
+\end{corollary}
+
+\begin{proof}
+ It suffices to note \(\{v, f v, f^2 v, \ldots \}\) is a basis for \(V\)
+  consisting of eigenvectors of \(h\) and whose only element in \(V_{\lambda - 2
+ k}\) is \(f^k v\).
+\end{proof}
+
+\begin{corollary}
+ The eigenvalues of \(h\) in \(V\) form a symmetric, unbroken string of
+ integers separated by intervals of length \(2\) whose right-most value is
+ \(\dim V - 1\).
+\end{corollary}
+
+\begin{proof}
+ If \(f^m\) is the lowest power of \(f\) that annihilates \(v\), it follows
+ from the formula for \(e f^k v\) obtained in the proof of
+ theorem~\ref{thm:basis-of-irr-rep} that
+ \[
+ 0 = e 0 = e f^m v = m (\lambda + 1 - m) f^{m - 1} v
+ \]
+
+ This implies \(\lambda + 1 - m = 0\) -- i.e. \(\lambda = m - 1 \in \ZZ\). Now
+ since \(\{v, f v, f^2 v, \ldots, f^{m - 1} v\}\) is a basis for \(V\), \(m =
+ \dim V\). Hence if \(n = \lambda = \dim V - 1\) then the eigenvalues of \(h\)
+ are
+ \[
+ \ldots, n - 6, n - 4, n - 2, n
+ \]
+
+ To see that this string is symmetric around \(0\), simply note that the
+ left-most eigenvalue of \(h\) is precisely \(n - 2 (m - 1) = -n\).
+\end{proof}
+
+We now know every irreducible representation \(V\) of \(\sl_2(\CC)\) has the
+form
+\begin{center}
+ \begin{tikzcd}
+ \cdots \arrow[bend left=60]{r}
+ & V_{n - 6} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}
+ & V_{n - 4} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
+ & V_{n - 2} \arrow[bend left=60]{r}{e} \arrow[bend left=60]{l}{f}
+ & V_n \arrow[bend left=60]{l}{f}
+ \end{tikzcd}
+\end{center}
+where \(V_{n - 2 k}\) is the one-dimensional eigenspace of \(h\) associated to
+\(n - 2 k\) and \(n = \dim V - 1\). Even more so, we explicitly know
+\[
+ V = \bigoplus_{k = 0}^n \CC f^k v
+\]
+and
+\begin{equation}\label{eq:irr-rep-of-sl2}
+ \begin{aligned}
+ f^k v & \overset{e}{\mapsto} k(n + 1 - k) f^{k - 1} v
+ & f^k v & \overset{f}{\mapsto} f^{k + 1} v
+ & f^k v & \overset{h}{\mapsto} (n - 2 k) f^k v
+ \end{aligned}
+\end{equation}
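+
+For example, when \(n = 2\) the formulas in (\ref{eq:irr-rep-of-sl2}) say
+that, in the basis \(\{v, f v, f^2 v\}\), the action of \(\sl_2(\CC)\) is
+given by the matrices
+\[
+  e \mapsto \begin{pmatrix} 0 & 2 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}
+  \quad
+  f \mapsto \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
+  \quad
+  h \mapsto \begin{pmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{pmatrix},
+\]
+and one can check by hand that these satisfy \([e, f] = h\), \([h, e] = 2 e\)
+and \([h, f] = -2 f\) -- this is, of course, just the adjoint representation
+of \(\sl_2(\CC)\) in disguise.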
+
+To conclude our analysis all that's left is to show that for each \(n\) such
+\(V\) does indeed exist and is irreducible. In other words\dots
+
+\begin{theorem}\label{thm:irr-rep-of-sl2-exists}
+ For each \(n \ge 0\) there exists a (unique) irreducible representation of
+  \(\sl_2(\CC)\) whose right-most eigenvalue of \(h\) is \(n\).
+\end{theorem}
+
+\begin{proof}
+  The fact that the representation \(V\) from the previous discussion exists is
+ clear from the commutator relations of \(\sl_2(\CC)\) -- just look at \(f^k
+ v\) as abstract symbols and impose the action given by
+ (\ref{eq:irr-rep-of-sl2}). Alternatively, one can readily check that if
+ \(\CC^2\) is the natural representation of \(\sl_2(\CC)\), then \(V = \Sym^n
+ \CC^2\) satisfies the relations of (\ref{eq:irr-rep-of-sl2}). To see that
+ \(V\) is irreducible let \(W\) be a non-zero subrepresentation and take some
+ non-zero \(w \in W\). Suppose \(w = \alpha_0 v + \alpha_1 f v + \cdots +
+ \alpha_n f^n v\) and let \(k\) be the lowest index such that \(\alpha_k \ne
+ 0\), so that
+ \[
+ w = \alpha_k f^k v + \cdots + \alpha_n f^n v
+ \]
+
+  Now given that \(f^{n + 1}\) annihilates \(v\),
+ \[
+ f w = \alpha_k f^{k + 1} v + \cdots + \alpha_{n - 1} f^n v
+ \]
+
+ Proceeding inductively we arrive at \(f^{n - k} w = \alpha_k f^n v\), so
+ that \(f^n v \in W\). Hence \(e^i f^n v = \prod_{k = 1}^i k(n + 1 - k) f^{n -
+ i} v \in W\) for all \(i = 1, 2, \ldots, n\). Since \(k \ne 0 \ne n + 1 - k\)
+ for all \(k\) in this range, we can see that \(f^k v \in W\) for all \(k = 0,
+ 1, \ldots, n\). In other words, \(W = V\). We are done.
+\end{proof}
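+
+To make the alternative construction mentioned in the proof a bit more
+concrete, take \(n = 2\) and write \(x, y\) for the canonical basis vectors of
+the natural representation \(\CC^2\), so that \(e x = 0\), \(e y = x\),
+\(f x = y\) and \(f y = 0\). Acting on \(\Sym^2 \CC^2\) by derivations and
+setting \(v = x^2\) we get \(f v = 2 x y\) and \(f^2 v = 2 y^2\), and a quick
+check gives
+\begin{align*}
+  h \cdot x^2 & = 2 x^2 &
+  h \cdot x y & = 0 &
+  h \cdot y^2 & = -2 y^2 &
+  e \cdot f v & = 2 v,
+\end{align*}
+in accordance with (\ref{eq:irr-rep-of-sl2}).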
+
+A perhaps more elegant way of proving theorem~\ref{thm:irr-rep-of-sl2-exists}
+is to work our way backwards to the corresponding representation \(V\) of
+\(\SU_2\) and then show that its character is irreducible. Computing the
+action of \(\mathfrak{su}_2\) in \(V\) is fairly easy: if we consider the
+standard basis
+\begin{align*}
+ I & = \begin{pmatrix} i & 0 \\ 0 & - i \end{pmatrix} &
+ J & = \begin{pmatrix} 0 & 1 \\ - 1 & 0 \end{pmatrix} &
+ K & = \begin{pmatrix} 0 & - i \\ - i & 0 \end{pmatrix}
+\end{align*}
+of \(\mathfrak{su}_2\) one can readily check
+\begin{align*}
+ f^k v
+ & \overset{I}{\mapsto}
+ i (n - 2 k) f^k v \\
+ f^k v
+ & \overset{J}{\mapsto}
+ k (n + 1 - k) f^{k - 1} v - f^{k + 1} v \\
+ f^k v
+ & \overset{K}{\mapsto}
+ - i k (n + 1 - k) f^{k - 1} v - i f^{k + 1} v
+\end{align*}
+
+If we denote by \(\rho_*(X)\) the matrix corresponding to the action of \(X \in
+\mathfrak{su}_2\) in the basis \(\{v, f v, \ldots, f^n v\}\), and given that
+\(\exp : \mathfrak{su}_2 \to \SU_2\) is surjective, to arrive at the action of
+\(\SU_2\) all we have to do is compute \(\exp(\rho_*(X))\) for arbitrary \(X\).
+For instance, if \(n = 1\) we can very quickly check that \(V\) corresponds to
+the natural representation of \(\SU_2\), which can be shown to be irreducible.
+
+In general, however, computing \(\exp(\rho_*(X))\) is \emph{quite hard}.
+Nevertheless, the exceptional isomorphism \(\mathbb{S}^3 \cong \SU_2\) allows
+us to compute its trace: if \(a, b, c \in \RR\) are such that \(a^2 + b^2 + c^2
+= 1\), then the eigenvalues of \(\rho_*(a I + b J + c K)\) are \(\lambda i\),
+where \(\lambda\) ranges over the eigenvalues of \(h\) in \(V\). Hence the
+eigenvalues of \(\exp(\rho_*(t X))\) are \(e^{\lambda i t}\) for \(X = a I +
+b J + c K\). Now if \(p = a i + b j + c k \in \mathbb{S}^3\) then \(\cos t + p
+\sin t = \exp(t X)\), so that
+\[
+ \chi_V(\cos t + p \sin t)
+ = \sum_\lambda e^{\lambda i t}
+ = \sum_\lambda \cos(\lambda t)
+\]
+
+A simple calculation then shows
+\[
+ \norm{\chi_V}^2
+ = \frac{1}{\mu(\mathbb{S}^3)} \int_{\mathbb{S}^3} \abs{\chi_V(q)}^2 \; \dd q
+ = 1,
+\]
+which establishes that \(V\) is irreducible.
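+
+Here is one way to carry out this calculation. The points of \(\mathbb{S}^3\)
+with real part \(\cos t\) form a \(2\)-sphere of radius \(\sin t\), and
+\(\mu(\mathbb{S}^3) = 2 \pi^2\), so that
+\[
+  \norm{\chi_V}^2
+  = \frac{1}{2 \pi^2}
+    \int_0^\pi \abs{\sum_\lambda e^{\lambda i t}}^2 \cdot 4 \pi \sin^2 t
+    \; \dd t
+  = \frac{2}{\pi} \int_0^\pi \sin^2((n + 1) t) \; \dd t
+  = 1,
+\]
+where we have used the fact that \(\sum_\lambda e^{\lambda i t} =
+\mfrac{\sin((n + 1) t)}{\sin t}\) -- a geometric sum over \(\lambda = n, n -
+2, \ldots, -n\).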
+
+Our initial gamble of studying the eigenvalues of \(h\) may have seemed
+arbitrary at first, but it payed off: we've \emph{completely} described
+\emph{all} irreducible representations of \(\sl_2(\CC)\). It is not yet clear,
+however, if any of this can be adapted to a general setting. In the following
+section we shall double down on our gamble by trying to reproduce some of the
+results of this section for \(\sl_3(\CC)\), hoping this will \emph{somehow}
+lead us to a general solution. In the process of doing so we'll learn a bit
+more about why \(h\) was a sure bet and the race was fixed all along.
+
+\section{Representations of \(\sl_3(\CC)\)}\label{sec:sl3-reps}
+
+The study of representations of \(\sl_2(\CC)\) reminds me of the difference
+between the derivative of a function \(\RR \to \RR\) and that of a smooth map
+between manifolds: it's a simpler case of something greater, but in some sense
+it's too simple of a case, and the intuition we acquire from it can be a bit
+misleading in regards to the general one. For instance, I distinctly remember
+my Calculus I teacher telling the class ``the derivative of the composition of
+two functions is not the composition of their derivatives'' -- while, of
+course, ``the derivative of a composition \emph{is} the composition of the
+derivatives'' is precisely the formulation of the chain rule in the context of
+smooth manifolds.
+
+The same applies to \(\sl_2(\CC)\). It's a simple and beautiful example, but
+unfortunately the general picture -- representations of arbitrary semisimple
+algebras -- lacks its simplicity, and, of course, much of this complexity is
+hidden in the case of \(\sl_2(\CC)\). The general purpose of this section is
+to investigate to which extent the framework used in the previous section to
+classify the representations of \(\sl_2(\CC)\) can be generalized to other
+semisimple Lie algebras, and the algebra \(\sl_3(\CC)\) stands as a natural
+candidate for potential generalizations: \(3 = 2 + 1\) after all.
+
+Our approach is very straightforward: we'll fix some irreducible
+representation \(V\) of \(\sl_3(\CC)\) and proceed step by step, at each point
+asking ourselves how we could possibly adapt the framework we laid out for
+\(\sl_2(\CC)\). The first obvious question is one we have already asked
+ourselves: why \(h\)? More specifically, why did we choose to study its
+eigenvalues and is there an analogue of \(h\) in \(\sl_3(\CC)\)?
+
+The answer to the former question is one we'll discuss at length in the
+next chapter, but for now we note that perhaps the most fundamental
+property of \(h\) is that \emph{there exists an eigenvector \(v\) of
+\(h\) that is annihilated by \(e\)} -- that being the generator of the
+right-most eigenspace of \(h\). This was instrumental to our explicit
+description of the irreducible representations of \(\sl_2(\CC)\) culminating in
+theorem~\ref{thm:irr-rep-of-sl2-exists}.
+
+Our first task is to find some analogue of \(h\) in \(\sl_3(\CC)\), but it's
+still unclear what exactly we are looking for. We could say we're looking for
+an element of \(V\) that is annihilated by some analogue of \(e\), but the
+meaning of \emph{some analogue of \(e\)} is again unclear. In fact, as we shall
+see, no such analogue exists and neither does such an element. Instead, the
+actual way to proceed is to consider the subalgebra
+\[
+ \mathfrak h
+ = \left\{
+ X \in
+ \begin{pmatrix} \CC & 0 & 0 \\ 0 & \CC & 0 \\ 0 & 0 & \CC \end{pmatrix}
+ : \Tr(X) = 0
+ \right\}
+\]
+
+The choice of \(\mathfrak{h}\) may seem like an odd one at the moment, but
+the point is we'll later show that there exists some \(v \in V\) that is
+simultaneously an eigenvector of each \(H \in \mathfrak{h}\) and annihilated by
+half of the remaining elements of \(\sl_3(\CC)\). This is exactly analogous to
+the situation we found in \(\sl_2(\CC)\): \(h\) corresponds to the subalgebra
+\(\mathfrak{h}\), and the eigenvalues of \(h\) in turn correspond to linear
+functions \(\lambda : \mathfrak{h} \to \CC\) such that \(H v = \lambda(H) \cdot
+v\) for each \(H \in \mathfrak{h}\) and some non-zero \(v \in V\). We call such
+functionals \(\lambda\) \emph{eigenvalues of \(\mathfrak{h}\)}, and we say
+\emph{\(v\) is an eigenvector of \(\mathfrak h\)}.
+
+Once again, we'll pay special attention to the eigenvalue decomposition
+\begin{equation}\label{eq:weight-module}
+ V = \bigoplus_\lambda V_\lambda
+\end{equation}
+where \(\lambda\) ranges over all eigenvalues of \(\mathfrak{h}\) and
+\(V_\lambda = \{ v \in V : H v = \lambda(H) \cdot v, \forall H \in \mathfrak{h}
+\}\). We should note that the fact that (\ref{eq:weight-module}) holds is not at all
+obvious. This is because in general \(V_\lambda\) is not the eigenspace
+associated with an eigenvalue of any particular operator \(H \in
+\mathfrak{h}\), but instead the eigenspace of the action of the entire algebra
+\(\mathfrak{h}\). Fortunately for us, (\ref{eq:weight-module}) always holds,
+but we will postpone its proof to the next chapter.
+
+Next we turn our attention to the remaining elements of \(\sl_3(\CC)\). In our
+analysis of \(\sl_2(\CC)\) we saw that the eigenvalues of \(h\) differed from
+one another by multiples of \(2\). A possible way to interpret this is to say
+\emph{the eigenvalues of \(h\) differ from one another by integral linear
+combinations of the eigenvalues of the adjoint action of \(h\)}. In English,
+the eigenvalues of the adjoint action of \(h\) are \(\pm 2\) since
+\begin{align*}
+ [h, f] & = -2 f &
+ [h, e] & = 2 e
+\end{align*}
+and the eigenvalues of the action of \(h\) in an irreducible
+\(\sl_2(\CC)\)-representation differ from one another by multiples of \(\pm
+2\).
+
+In the case of \(\sl_3(\CC)\), a simple calculation shows that if \(X \notin
+\mathfrak{h}\) and \([H, X]\) is a scalar multiple of \(X\) for all \(H \in
+\mathfrak{h}\) then all but one entry of \(X\) are zero. Hence the
+eigenvectors of the adjoint action of \(\mathfrak{h}\) lying outside of
+\(\mathfrak{h}\) are the \(E_{i j}\) with \(i \ne j\), and the corresponding
+eigenvalues are \(\alpha_i - \alpha_j\), where
+\[
+ \alpha_i
+ \begin{pmatrix}
+ a_1 & 0 & 0 \\
+ 0 & a_2 & 0 \\
+ 0 & 0 & a_3
+ \end{pmatrix}
+ = a_i
+\]
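+
+For example, if \(H \in \mathfrak{h}\) is the diagonal matrix with entries
+\(a_1\), \(a_2\) and \(a_3\) then
+\[
+  [H, E_{1 3}]
+  = H E_{1 3} - E_{1 3} H
+  = a_1 E_{1 3} - a_3 E_{1 3}
+  = (\alpha_1 - \alpha_3)(H) \cdot E_{1 3},
+\]
+and the same computation works for any \(E_{i j}\) with \(i \ne j\).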
+
+Visually we may draw
+
+\begin{figure}[h]
+ \centering
+ \begin{tikzpicture}[scale=2.5]
+ \begin{rootSystem}{A}
+ \filldraw[black] \weight{0}{0} circle (.5pt);
+ \node[black, above right] at \weight{0}{0} {\small$0$};
+ \wt[black]{-1}{2}
+ \wt[black]{-2}{1}
+ \wt[black]{1}{1}
+ \wt[black]{-1}{-1}
+ \wt[black]{2}{-1}
+ \wt[black]{1}{-2}
+ \node[above] at \weight{-1}{2} {$\alpha_2 - \alpha_3$};
+ \node[left] at \weight{-2}{1} {$\alpha_2 - \alpha_1$};
+ \node[right] at \weight{1}{1} {$\alpha_1 - \alpha_3$};
+ \node[left] at \weight{-1}{-1} {$\alpha_3 - \alpha_1$};
+ \node[right] at \weight{2}{-1} {$\alpha_1 - \alpha_2$};
+      \node[below] at \weight{1}{-2} {$\alpha_3 - \alpha_2$};
+ \node[black, above] at \weight{1}{0} {$\alpha_1$};
+ \node[black, above] at \weight{-1}{1} {$\alpha_2$};
+ \node[black, above] at \weight{0}{-1} {$\alpha_3$};
+ \filldraw[black] \weight{1}{0} circle (.5pt);
+ \filldraw[black] \weight{-1}{1} circle (.5pt);
+ \filldraw[black] \weight{0}{-1} circle (.5pt);
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{figure}
+
+If we denote the eigenspace of the adjoint action of \(\mathfrak{h}\) in
+\(\sl_3(\CC)\) associated to \(\alpha\) by \(\sl_3(\CC)_\alpha\) and fix some
+\(X \in \sl_3(\CC)_\alpha\), \(H \in \mathfrak{h}\) and \(v \in V_\lambda\)
+then
+\[
+ \begin{split}
+ H (X v)
+ & = X (H v) + [H, X] v \\
+ & = X (\lambda(H) \cdot v) + (\alpha(H) \cdot X) v \\
+ & = (\alpha + \lambda)(H) \cdot X v
+ \end{split}
+\]
+so that \(X\) carries \(v\) to \(V_{\alpha + \lambda}\). In other words,
+\(\sl_3(\CC)_\alpha\) \emph{acts on \(V\) by translating vectors between
+eigenspaces}.
+
+For instance \(\sl_3(\CC)_{\alpha_1 - \alpha_3}\) will act on the adjoint
+representation of \(\sl_3(\CC)\) via
+\begin{figure}[h]
+ \centering
+ \begin{tikzpicture}[scale=2.5]
+ \begin{rootSystem}{A}
+ \wt[black]{0}{0}
+ \wt[black]{-1}{2}
+ \wt[black]{-2}{1}
+ \wt[black]{1}{1}
+ \wt[black]{-1}{-1}
+ \wt[black]{2}{-1}
+ \wt[black]{1}{-2}
+ \draw[-latex, black] \weight{-1.9}{1.1} -- \weight{-1.1}{1.9};
+ \draw[-latex, black] \weight{-.9}{-.9} -- \weight{-.1}{-.1};
+ \draw[-latex, black] \weight{0.1}{0.1} -- \weight{.9}{.9};
+ \draw[-latex, black] \weight{1.1}{-1.9} -- \weight{1.9}{-1.1};
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{figure}
+
+This is again entirely analogous to the situation we observed in
+\(\sl_2(\CC)\). In fact, we may once more conclude\dots
+
+\begin{theorem}\label{thm:sl3-weights-congruent-mod-root}
+ The eigenvalues of the action of \(\mathfrak{h}\) in an irreducible
+ \(\sl_3(\CC)\)-representation \(V\) differ from one another by integral
+  linear combinations of the eigenvalues \(\alpha_i - \alpha_j\) of the
+ adjoint action of \(\mathfrak{h}\) in \(\sl_3(\CC)\).
+\end{theorem}
+
+\begin{proof}
+ This proof goes exactly as that of the analogous statement for
+  \(\sl_2(\CC)\): it suffices to note that if we fix some eigenvalue
+  \(\lambda\) of \(\mathfrak{h}\) then
+  \[
+    \bigoplus_{\mu \in \lambda
+      + \ZZ \langle \alpha_i - \alpha_j : i, j = 1, 2, 3 \rangle} V_\mu
+  \]
+  is a non-zero invariant subspace of \(V\).
+\end{proof}
+
+To avoid confusion we better introduce some notation to differentiate between
+eigenvalues of the action of \(\mathfrak{h}\) in \(V\) and eigenvalues of the
+adjoint action of \(\mathfrak{h}\).
+
+\begin{definition}
+  Given a representation \(V\) of \(\sl_3(\CC)\), we'll call the
+  eigenvalues of the action of \(\mathfrak{h}\) in \(V\) \emph{weights of
+ \(V\)}. As you might have guessed, we'll correspondingly refer to
+ eigenvectors and eigenspaces of a given weight by \emph{weight vectors} and
+ \emph{weight spaces}.
+\end{definition}
+
+It's clear from our previous discussion that the weights of the adjoint
+representation of \(\sl_3(\CC)\) deserve some special attention.
+
+\begin{definition}
+  The non-zero weights of the adjoint representation of \(\sl_3(\CC)\) are called
+ \emph{roots of \(\sl_3(\CC)\)}. Once again, the expressions \emph{root
+ vector} and \emph{root space} are self-explanatory.
+\end{definition}
+
+Theorem~\ref{thm:sl3-weights-congruent-mod-root} can thus be restated as\dots
+
+\begin{corollary}
+ The weights of an irreducible representation \(V\) of \(\sl_3(\CC)\) are all
+  congruent modulo the lattice \(Q\) generated by the roots \(\alpha_i -
+ \alpha_j\) of \(\sl_3(\CC)\).
+\end{corollary}
+
+\begin{definition}
+ The lattice \(Q = \ZZ \langle \alpha_i - \alpha_j : i, j = 1, 2, 3 \rangle\)
+ is called \emph{the root lattice of \(\sl_3(\CC)\)}.
+\end{definition}
+
+To proceed we once more refer to the previously established framework: there we
+saw that the eigenvalues of \(h\) formed an unbroken string of integers
+symmetric around \(0\). To prove this we analyzed the right-most eigenvalue of
+\(h\) and its eigenvector, providing an explicit description of the
+irreducible representation of \(\sl_2(\CC)\) in terms of this vector. We may
+reproduce these steps in the context of \(\sl_3(\CC)\) by fixing a direction in
+the plane and considering the weight lying the furthest in that direction.
+
+In practice this means we'll choose a linear functional \(f : \mathfrak{h}^*
+\to \RR\) and pick the weight that maximizes \(f\). To avoid any ambiguity we
+should choose the direction of a line irrational with respect to the root
+lattice \(Q\). For instance if we choose the direction of \(\alpha_1 -
+\alpha_3\) and let \(f\) be the projection \(Q \to \RR \langle \alpha_1 -
+\alpha_3 \rangle \cong \RR\) then \(\alpha_1 - 2 \alpha_2 + \alpha_3 \in Q\)
+lies in \(\ker f\), so that if a weight \(\lambda\) maximizes \(f\) then the
+translation of \(\lambda\) by any multiple of \(\alpha_1 - 2 \alpha_2 +
+\alpha_3\) must also do so. In others words, if the direction we choose is
+parallel to a vector lying in \(Q\) then there may be multiple choices the
+``weight lying the furthest'' along this direction.
+
+Let's say we fix the direction
+\begin{center}
+ \begin{tikzpicture}[scale=2.5]
+ \begin{rootSystem}{A}
+ \wt[black]{0}{0}
+ \wt[black]{-1}{2}
+ \wt[black]{-2}{1}
+ \wt[black]{1}{1}
+ \wt[black]{-1}{-1}
+ \wt[black]{2}{-1}
+ \wt[black]{1}{-2}
+ \draw[-latex, black, thick] \weight{-1.5}{-.5} -- \weight{1.5}{.5};
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{center}
+and let \(\lambda\) be the weight lying the furthest in this direction.
+
+\begin{definition}
+ We say that a root \(\alpha\) is positive if \(f(\alpha) > 0\) -- i.e. if it
+ lies to the right of the direction we chose. Otherwise we say \(\alpha\) is
+ negative. Notice that \(f(\alpha) \ne 0\) since by definition \(\alpha \ne
+ 0\) and \(f\) is irrational with respect to the lattice \(Q\).
+\end{definition}
+
+The first observation we make is that all other weights of \(V\) must lie in a
+sort of \(\frac{1}{3}\)-plane with corner at \(\lambda\), as shown in
+\begin{center}
+ \begin{tikzpicture}
+ \AutoSizeWeightLatticefalse
+ \begin{rootSystem}{A}
+ \weightLattice{3}
+ \fill[gray!50,opacity=.2] (hex cs:x=5,y=-7) -- (hex cs:x=1,y=1) --
+ (hex cs:x=-7,y=5) arc (150:270:{7*\weightLength});
+ \draw[black, thick] (hex cs:x=5,y=-7) -- (hex cs:x=1,y=1) --
+ (hex cs:x=-7,y=5);
+ \filldraw[black] (hex cs:x=1,y=1) circle (1pt);
+ \node[above right=-2pt] at (hex cs:x=1,y=1) {\small\(\lambda\)};
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{center}
+
+Indeed, if this is not the case then, by definition, \(\lambda\) is not the
+furthest weight along the line we chose. Given our previous assertion that the
+root spaces of \(\sl_3(\CC)\) act on the weight spaces of \(V\) via
+translation, this implies that \(E_{1 2}\), \(E_{1 3}\) and \(E_{2 3}\) all
+annihilate \(V_\lambda\), or otherwise one of \(V_{\lambda + \alpha_1 -
+\alpha_2}\), \(V_{\lambda + \alpha_1 - \alpha_3}\) and \(V_{\lambda + \alpha_2 -
+\alpha_3}\) would be non-zero -- which contradicts the hypothesis that
+\(\lambda\) lies the furthest along the direction we chose. In other words\dots
+
+\begin{theorem}
+ There is a weight vector \(v \in V\) that is killed by all positive root
+ spaces of \(\sl_3(\CC)\).
+\end{theorem}
+
+\begin{proof}
+ It suffices to note that the positive roots of \(\sl_3(\CC)\) are precisely
+ \(\alpha_1 - \alpha_2\), \(\alpha_1 - \alpha_3\) and \(\alpha_2 - \alpha_3\).
+\end{proof}
+
+We call \(\lambda\) \emph{the highest weight of \(V\)}, and we call any \(v \in
+V_\lambda\) \emph{a highest weight vector}. Going back to the case of
+\(\sl_2(\CC)\), we then constructed an explicit basis of our irreducible
+representations in terms of a highest weight vector, which allowed us to
+provide an explicit description of the action of \(\sl_2(\CC)\) in terms of
+its standard basis and finally we concluded that the eigenvalues of \(h\) must
+be symmetrical around \(0\). An analogous procedure could be implemented for
+\(\sl_3(\CC)\) -- and indeed that's what we'll do later down the line -- but
+instead we would like to focus on the problem of finding the weights of \(V\)
+for the moment.
+
+We'll start out by trying to understand the weights in the boundary of
+\(\frac{1}{3}\)-plane previously drawn. Since the root spaces act by
+translation, the action of \(E_{2 1}\) carries \(V_\lambda\) into the subspace
+\[
+ W = \bigoplus_k V_{\lambda + k (\alpha_2 - \alpha_1)},
+\]
+and by the same token \(W\) must be invariant under the action of \(E_{1 2}\).
+
+To draw a familiar picture
+\begin{center}
+ \begin{tikzpicture}
+ \begin{rootSystem}{A}
+ \node at \weight{3}{1} (a) {};
+ \node at \weight{1}{2} (b) {};
+ \node at \weight{-1}{3} (c) {};
+ \node at \weight{-3}{4} (d) {};
+ \node at \weight{-5}{5} (e) {};
+ \draw \weight{3}{1} -- \weight{-4}{4.5};
+ \draw[dotted] \weight{-4}{4.5} -- \weight{-5}{5};
+ \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+ \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+ \draw[-latex] (a) to[bend left=40] (b);
+ \draw[-latex] (b) to[bend left=40] (c);
+ \draw[-latex] (c) to[bend left=40] (d);
+ \draw[-latex] (d) to[bend left=40] (e);
+ \draw[-latex] (e) to[bend left=40] (d);
+ \draw[-latex] (d) to[bend left=40] (c);
+ \draw[-latex] (c) to[bend left=40] (b);
+ \draw[-latex] (b) to[bend left=40] (a);
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{center}
+
+What's remarkable about all this is the fact that the subalgebra spanned by
+\(E_{1 2}\), \(E_{2 1}\) and \(H = [E_{1 2}, E_{2 1}]\) is isomorphic to
+\(\sl_2(\CC)\) via
+\begin{align*}
+  E_{1 2} & \mapsto e &
+  E_{2 1} & \mapsto f &
+ H & \mapsto h
+\end{align*}
+
+In other words, \(W\) is a representation of \(\sl_2(\CC)\). Even more so, we
+claim
+\[
+ V_{\lambda + k (\alpha_2 - \alpha_1)} = W_{\lambda(H) - 2k}
+\]
+
+Indeed, \(V_{\lambda + k (\alpha_2 - \alpha_1)} \subset W_{\lambda(H) - 2k}\)
+since \((\lambda + k (\alpha_2 - \alpha_1))(H) = \lambda(H) + k (-1 - 1) =
+\lambda(H) - 2 k\). On the other hand, if we suppose \(0 < \dim V_{\lambda + k
+(\alpha_2 - \alpha_1)} < \dim W_{\lambda(H) - 2 k}\) for some \(k\) we arrive
+at
+\[
+ \dim W
+ = \sum_k \dim V_{\lambda + k (\alpha_2 - \alpha_1)}
+ < \sum_k \dim W_{\lambda(H) - 2k}
+ = \dim W,
+\]
+a contradiction.
+
+There are a number of important consequences to this, the first being that
+the weights of \(V\) appearing in \(W\) must be symmetric with respect to
+the line \(\langle \alpha_1 - \alpha_2, \alpha \rangle = 0\). The picture is
+thus
+\begin{center}
+ \begin{tikzpicture}
+ \AutoSizeWeightLatticefalse
+ \begin{rootSystem}{A}
+ \setlength{\weightRadius}{2pt}
+ \weightLattice{4}
+ \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+ \wt[black]{0}{0}
+ \node[above left] at \weight{0}{0} {\small\(0\)};
+ \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+ \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+ \draw[very thick] \weight{0}{-4} -- \weight{0}{4}
+ node[above]{\small\(\langle \alpha_1 - \alpha_2, \alpha \rangle=0\)};
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{center}
+
+Notice we could apply this same argument to the subspace \(\bigoplus_k
+V_{\lambda + k (\alpha_3 - \alpha_2)}\): this subspace is invariant under the
+action of the subalgebra spanned by \(E_{2 3}\), \(E_{3 2}\) and \([E_{2 3},
+E_{3 2}]\), which is again isomorphic to \(\sl_2(\CC)\), so that the weights in this
+subspace must be symmetric with respect to the line \(\langle \alpha_3 -
+\alpha_2, \alpha \rangle = 0\). The picture is now
+\begin{center}
+ \begin{tikzpicture}
+ \AutoSizeWeightLatticefalse
+ \begin{rootSystem}{A}
+ \setlength{\weightRadius}{2pt}
+ \weightLattice{4}
+ \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+ \draw[thick] \weight{3}{1} -- \weight{4}{-1};
+ \wt[black]{0}{0}
+ \wt[black]{4}{-1}
+ \node[above left] at \weight{0}{0} {\small\(0\)};
+ \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+ \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+ \draw[very thick] \weight{0}{-4} -- \weight{0}{4}
+ node[above]{\small\(\langle \alpha_1 - \alpha_2, \alpha \rangle=0\)};
+ \draw[very thick] \weight{-4}{0} -- \weight{4}{0}
+ node[right]{\small\(\langle \alpha_3 - \alpha_2, \alpha \rangle=0\)};
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{center}
+
+In general, given a weight \(\mu\), the space
+\[
+ \bigoplus_k V_{\mu + k (\alpha_i - \alpha_j)}
+\]
+is invariant under the action of the subalgebra \(\mathfrak{s}_{\alpha_i -
+\alpha_j} = \CC \langle E_{i j}, E_{j i}, [E_{i j}, E_{j i}] \rangle\), which
+is once more isomorphic to \(\sl_2(\CC)\), and again the weight spaces in this
+string correspond precisely to the eigenspaces of \(h\). Needless to say, we could keep
+applying this method to the weights at the ends of our string, arriving at
+\begin{center}
+ \begin{tikzpicture}
+ \AutoSizeWeightLatticefalse
+ \begin{rootSystem}{A}
+ \setlength{\weightRadius}{2pt}
+ \weightLattice{5}
+ \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+ \draw[thick] \weight{3}{1} -- \weight{4}{-1};
+ \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
+ \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
+ \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
+ \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
+ \wt[black]{-4}{3}
+ \wt[black]{-3}{1}
+ \wt[black]{-2}{-1}
+ \wt[black]{-1}{-3}
+ \wt[black]{1}{-4}
+ \wt[black]{2}{-3}
+ \wt[black]{3}{-2}
+ \wt[black]{4}{-1}
+ \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+ \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+ \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
+ \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
+ \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{center}
+
+We claim all dots \(\mu\) lying inside the hexagon we've drawn must also be
+weights -- i.e. \(V_\mu \ne 0\). Indeed, by applying the same argument to an
+arbitrary weight \(\nu\) in the boundary of the hexagon we get a representation
+of \(\sl_2(\CC)\) whose weights correspond to weights of \(V\) lying in a
+string inside the hexagon, and whose right-most weight is precisely the weight
+of \(V\) we started with.
+\begin{center}
+ \begin{tikzpicture}
+ \AutoSizeWeightLatticefalse
+ \begin{rootSystem}{A}
+ \setlength{\weightRadius}{2pt}
+ \weightLattice{5}
+ \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+ \draw[thick] \weight{3}{1} -- \weight{4}{-1};
+ \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
+ \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
+ \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
+ \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
+ \wt[black]{-4}{3}
+ \wt[black]{-3}{1}
+ \wt[black]{-2}{-1}
+ \wt[black]{-1}{-3}
+ \wt[black]{1}{-4}
+ \wt[black]{2}{-3}
+ \wt[black]{3}{-2}
+ \wt[black]{4}{-1}
+ \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+ \node[above right=-2pt] at \weight{1}{2} {\small\(\nu\)};
+ \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+ \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
+ \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
+ \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
+ \draw[gray, thick] \weight{1}{2} -- \weight{-2}{-1};
+ \wt[black]{1}{2}
+ \wt[black]{-2}{-1}
+ \wt{0}{1}
+ \wt{-1}{0}
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{center}
+
+By construction, \(\nu\) corresponds to the right-most weight of the
+representation of \(\sl_2(\CC)\), so that all dots lying on the gray string
+must occur in the representation of \(\sl_2(\CC)\). Hence they must also be
+weights of \(V\). The final picture is thus
+\begin{center}
+ \begin{tikzpicture}
+ \AutoSizeWeightLatticefalse
+ \begin{rootSystem}{A}
+ \setlength{\weightRadius}{2pt}
+ \weightLattice{5}
+ \draw[thick] \weight{3}{1} -- \weight{-3}{4};
+ \draw[thick] \weight{3}{1} -- \weight{4}{-1};
+ \draw[thick] \weight{-3}{4} -- \weight{-4}{3};
+ \draw[thick] \weight{-4}{3} -- \weight{-1}{-3};
+ \draw[thick] \weight{1}{-4} -- \weight{4}{-1};
+ \draw[thick] \weight{-1}{-3} -- \weight{1}{-4};
+ \wt[black]{-4}{3}
+ \wt[black]{-3}{1}
+ \wt[black]{-2}{-1}
+ \wt[black]{-1}{-3}
+ \wt[black]{1}{-4}
+ \wt[black]{2}{-3}
+ \wt[black]{3}{-2}
+ \wt[black]{4}{-1}
+ \foreach \i in {1,...,4}{\wt[black]{5-2*\i}{\i}}
+ \node[above right=-2pt] at (hex cs:x=3,y=1){\small\(\lambda\)};
+ \draw[very thick] \weight{-5}{5} -- \weight{5}{-5};
+ \draw[very thick] \weight{0}{-5} -- \weight{0}{5};
+ \draw[very thick] \weight{-5}{0} -- \weight{5}{0};
+ \wt[black]{-2}{2}
+ \wt[black]{0}{1}
+ \wt[black]{-1}{0}
+ \wt[black]{0}{-2}
+ \wt[black]{1}{-1}
+ \wt[black]{2}{0}
+ \end{rootSystem}
+ \end{tikzpicture}
+\end{center}
+
+Another important consequence of our analysis is the fact that \(\lambda\) lies
+in the lattice \(P\) generated by \(\alpha_1\), \(\alpha_2\) and \(\alpha_3\).
+Indeed, \(\lambda([E_{i j}, E_{j i}])\) is an eigenvalue of \(h\) in a
+representation of \(\sl_2(\CC)\), so it must be an integer. Now since
+\[
+ \lambda
+ \begin{pmatrix}
+ a & 0 & 0 \\
+ 0 & b & 0 \\
+ 0 & 0 & -a -b
+ \end{pmatrix}
+ =
+ \lambda
+ \begin{pmatrix}
+ a & 0 & 0 \\
+ 0 & 0 & 0 \\
+ 0 & 0 & -a
+ \end{pmatrix}
+ +
+ \lambda
+ \begin{pmatrix}
+ 0 & 0 & 0 \\
+ 0 & b & 0 \\
+ 0 & 0 & -b
+ \end{pmatrix}
+ =
+ a \lambda([E_{1 3}, E_{3 1}]) + b \lambda([E_{2 3}, E_{3 2}]),
+\]
+which is to say \(\lambda = \lambda([E_{1 3}, E_{3 1}]) \alpha_1 +
+\lambda([E_{2 3}, E_{3 2}]) \alpha_2\), we can see that \(\lambda \in
+P\).
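+
+Here we are using the fact that \([E_{1 3}, E_{3 1}] = E_{1 1} - E_{3 3}\) and
+\([E_{2 3}, E_{3 2}] = E_{2 2} - E_{3 3}\), so that
+\[
+  a [E_{1 3}, E_{3 1}] + b [E_{2 3}, E_{3 2}]
+  =
+  \begin{pmatrix}
+    a & 0 & 0 \\
+    0 & b & 0 \\
+    0 & 0 & -a -b
+  \end{pmatrix}.
+\]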
+
+\begin{definition}
+  The lattice \(P = \ZZ \langle \alpha_1, \alpha_2, \alpha_3 \rangle\) is
+ called \emph{the weight lattice of \(\sl_3(\CC)\)}.
+\end{definition}
+
+Finally\dots
+
+\begin{theorem}\label{thm:sl3-irr-weights-class}
+ The weights of \(V\) are precisely the elements of the weight lattice \(P\)
+ congruent to \(\lambda\) module the sublattice \(Q\) and lying inside hexagon
+ with vertices the images of \(\lambda\) under the group generated by
+ reflections across the lines \(\langle \alpha_i - \alpha_j, \alpha \rangle =
+ 0\).
+\end{theorem}
+
+Once more there's a clear parallel between the case of \(\sl_3(\CC)\) and that
+of \(\sl_2(\CC)\), where we observed that the weights all lay in the lattice
+\(P = \ZZ\) and were congruent modulo the lattice \(Q = 2 \ZZ\).
+Having found all of the weights of \(V\), the only thing we're missing is an
+existence and uniqueness theorem analogous to
+theorem~\ref{thm:sl2-exist-unique}. In other words, our next goal is
+establishing\dots
+
+\begin{theorem}\label{thm:sl3-existence-uniqueness}
+  For each pair of non-negative integers \(n\) and \(m\), there exists precisely
+ one irreducible representation \(V\) of \(\sl_3(\CC)\) whose highest weight
+ is \(n \alpha_1 - m \alpha_3\).
+\end{theorem}
+
+To proceed further we once again refer to the approach we employed in the case
+of \(\sl_2(\CC)\): there we showed in theorem~\ref{thm:basis-of-irr-rep} that
+any irreducible representation of \(\sl_2(\CC)\) is spanned by the images of
+its highest weight vector under \(f\). A more abstract way of putting it is to
+say that an irreducible representation \(V\) of \(\sl_2(\CC)\) is spanned by
+the images of its highest weight vector under successive applications of half
+of the root spaces of \(\sl_2(\CC)\). The advantage of this alternative
+formulation is, of course, that the same holds for \(\sl_3(\CC)\).
+Specifically\dots
+
+\begin{theorem}\label{thm:irr-sl3-span}
+ Given an irreducible \(\sl_3(\CC)\)-representation \(V\) and a highest
+ weight vector \(v \in V\), \(V\) is spanned by the images of \(v\) under
+ successive applications of \(E_{2 1}\), \(E_{3 1}\) and \(E_{3 2}\).
+\end{theorem}
+
+The proof of theorem~\ref{thm:irr-sl3-span} is very similar to that of
+theorem~\ref{thm:basis-of-irr-rep}: we use the commutator relations of
+\(\sl_3(\CC)\) to inductively show that the subspace spanned by the images of a
+highest weight vector under successive applications of \(E_{2 1}\), \(E_{3 1}\)
+and \(E_{3 2}\) is invariant under the action of \(\sl_3(\CC)\) -- please refer
+to \cite{fulton-harris} for further details. The same argument also goes to
+show\dots
+
+\begin{corollary}
+ Given a representation \(V\) of \(\sl_3(\CC)\) with highest weight
+ \(\lambda\) and \(v \in V_\lambda\), the subspace spanned by successive
+ applications of \(E_{2 1}\), \(E_{3 1}\) and \(E_{3 2}\) to \(v\) is an
+ irreducible subrepresentation whose highest weight is \(\lambda\).
+\end{corollary}
+
+This is very interesting to us since it implies that finding \emph{any}
+representation whose highest weight is \(n \alpha_1 - m \alpha_3\) is enough
+for establishing the ``existence'' part of
+theorem~\ref{thm:sl3-existence-uniqueness}. Moreover, constructing such
+a representation turns out to be quite simple.
+
+\begin{proof}[Proof of existence]
+ Consider the natural representation \(V = \CC^3\) of \(\sl_3(\CC)\). We
+ claim that the highest weight of \(\Sym^n V \otimes \Sym^m V^*\)
+ is \(n \alpha_1 - m \alpha_3\).
+
+  First of all, notice that the eigenvectors of \(\mathfrak{h}\) in \(V\) are the canonical basis
+ vectors \(e_1\), \(e_2\) and \(e_3\), whose eigenvalues are \(\alpha_1\),
+ \(\alpha_2\) and \(\alpha_3\) respectively. Hence the weight diagram of \(V\)
+ is
+ \begin{center}
+ \begin{tikzpicture}[scale=2.5]
+ \AutoSizeWeightLatticefalse
+ \begin{rootSystem}{A}
+ \weightLattice{2}
+ \wt[black]{1}{0}
+ \wt[black]{-1}{1}
+ \wt[black]{0}{-1}
+ \node[right] at \weight{1}{0} {$\alpha_1$};
+ \node[above left] at \weight{-1}{1} {$\alpha_2$};
+ \node[below left] at \weight{0}{-1} {$\alpha_3$};
+ \end{rootSystem}
+ \end{tikzpicture}
+ \end{center}
+ and \(\alpha_1\) is the highest weight of \(V\).
+
+ On the one hand, if \(\{f_1, f_2, f_3\}\) is the dual basis of \(\{e_1, e_2,
+ e_3\}\) then \(H f_i = - \alpha_i(H) \cdot f_i\) for each \(H \in
+ \mathfrak{h}\), so that the weights of \(V^*\) are precisely the opposites of
+ the weights of \(V\). In other words,
+ \begin{center}
+ \begin{tikzpicture}[scale=2.5]
+ \AutoSizeWeightLatticefalse
+ \begin{rootSystem}{A}
+ \weightLattice{2}
+ \wt[black]{-1}{0}
+ \wt[black]{1}{-1}
+ \wt[black]{0}{1}
+ \node[left] at \weight{-1}{0} {$-\alpha_1$};
+ \node[below right] at \weight{1}{-1} {$-\alpha_2$};
+ \node[above right] at \weight{0}{1} {$-\alpha_3$};
+ \end{rootSystem}
+ \end{tikzpicture}
+ \end{center}
+  is the weight diagram of \(V^*\) and \(- \alpha_3\) is the highest weight of
+ \(V^*\).
+
+ On the other hand if we fix two \(\sl_3(\CC)\)-representations \(U\) and
+ \(W\), by computing
+ \[
+ \begin{split}
+ H (u \otimes w)
+ & = H u \otimes w + u \otimes H w \\
+ & = \lambda(H) \cdot u \otimes w + u \otimes \mu(H) \cdot w \\
+ & = (\lambda + \mu)(H) \cdot (u \otimes w)
+ \end{split}
+ \]
+  for each \(H \in \mathfrak{h}\), \(u \in U_\lambda\) and \(w \in W_\mu\)
+ we can see that the weights of \(U \otimes W\) are precisely the sums of the
+ weights of \(U\) with the weights of \(W\).
+
+ This implies that the maximal weights of \(\Sym^n V\) and \(\Sym^m V^*\) are
+ \(n \alpha_1\) and \(- m \alpha_3\) respectively -- with maximal weight
+ vectors \(e_1^n\) and \(f_3^m\). Furthermore, by the same token the highest
+  weight of \(\Sym^n V \otimes \Sym^m V^*\) must be \(n \alpha_1 - m \alpha_3\) -- with
+ highest weight vector \(e_1^n \otimes f_3^m\).
+\end{proof}
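+
+For instance, taking \(n = m = 1\) we recover a familiar representation: the
+weights of \(V \otimes V^*\) are the differences \(\alpha_i - \alpha_j\), and
+one can check that
+\[
+  V \otimes V^* \cong \CC \oplus \sl_3(\CC),
+\]
+where \(\CC\) denotes the trivial representation and \(\sl_3(\CC)\) denotes
+the adjoint representation -- so that the adjoint representation of
+\(\sl_3(\CC)\) is the irreducible representation with highest weight
+\(\alpha_1 - \alpha_3\).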
+
+The ``uniqueness'' part of theorem~\ref{thm:sl3-existence-uniqueness} is even
+simpler than that.
+
+\begin{proof}[Proof of uniqueness]
+ Let \(V\) and \(W\) be two irreducible representations of \(\sl_3(\CC)\) with
+ highest weight \(\lambda\). By theorem~\ref{thm:sl3-irr-weights-class}, the
+ weights of \(V\) are precisely the same as those of \(W\).
+
+ Now by computing
+ \[
+ H (v + w)
+ = H v + H w
+ = \mu(H) \cdot v + \mu(H) \cdot w
+ = \mu(H) \cdot (v + w)
+ \]
+ for each \(H \in \mathfrak{h}\), \(v \in V_\mu\) and \(w \in W_\mu\), we can
+  see that the weights of \(V \oplus W\) are the same as those of \(V\) and
+  \(W\). Hence the highest weight of \(V \oplus W\) is \(\lambda\) -- with
+  highest weight vectors given by sums of highest weight vectors of \(V\) and
+  \(W\).
+
+  Fix some \(v \in V_\lambda\) and \(w \in W_\lambda\) and consider the
+  subrepresentation \(U \subset V \oplus W\) generated by \(v + w\), which is
+  irreducible by the corollary of theorem~\ref{thm:irr-sl3-span}. The
+  projection maps \(\pi_1 : U \to V\) and \(\pi_2 : U \to W\) satisfy
+  \(\pi_1(v + w) = v \ne 0\) and \(\pi_2(v + w) = w \ne 0\), so that, being
+  non-zero homomorphisms between irreducible representations of
+  \(\sl_3(\CC)\), they must be isomorphisms. Finally,
+ \[
+ V \cong U \cong W
+ \]
+\end{proof}
+
+The situation here is analogous to that of the previous section, where we saw
+that the irreducible representations of \(\sl_2(\CC)\) are given by symmetric
+powers of the natural representation.
+
+We've been very successful in our pursuit of a classification of the
+irreducible representations of \(\sl_2(\CC)\) and \(\sl_3(\CC)\), but so far
+we've mostly postponed the discussion of the motivation behind our methods. In
+particular, we did not explain why we chose \(h\) and \(\mathfrak{h}\), nor
+why we chose to look at their eigenvalues. Apart from the obvious fact that we
+already knew it would work a priori, why did we do all that? In the
+following section we will attempt to answer this question by looking at what we
+did in the previous sections through a more abstract lens and studying the
+representations of an arbitrary finite-dimensional complex semisimple Lie
+algebra \(\mathfrak{g}\).
+
+\section{Simultaneous Diagonalization \& the General Case}
+
+At the heart of our analysis of \(\sl_2(\CC)\) and \(\sl_3(\CC)\) was the
+decision to consider the eigenspace decomposition
+\begin{equation}\label{sym-diag}
+ V = \bigoplus_\lambda V_\lambda
+\end{equation}
+
+This was simple enough to do in the case of \(\sl_2(\CC)\), but the reasoning
+behind it, as well as the mere fact that equation (\ref{sym-diag}) holds, are
+harder to explain in the case of \(\sl_3(\CC)\). The eigenspace decomposition
+associated with a single operator \(V \to V\) is a very well-known tool, and
+this type of argument should be familiar to anyone acquainted with the basic
+concepts of linear algebra. On the other hand, the eigenspace decomposition of
+\(V\) with
+respect to the action of an arbitrary subalgebra \(\mathfrak{h} \subset
+\gl(V)\) is neither well-known nor does it hold in general: as previously
+stated, it may very well be that
+\[
+ \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda \subsetneq V
+\]
+
+We should note, however, that these two cases are not as different as they may
+sound at first glance. Specifically, we can regard the eigenspace decomposition
+of a representation \(V\) of \(\sl_2(\CC)\) with respect to the eigenvalues of
+the action of \(h\) as the eigenspace decomposition of \(V\) with respect to
+the action of the subalgebra \(\mathfrak{h} = \CC h \subset \sl_2(\CC)\).
+Furthermore, in both cases \(\mathfrak{h} \subset \sl_n(\CC)\) is the
+subalgebra of diagonal matrices, which is Abelian. The fundamental difference
+between these two cases is thus the fact that \(\dim \mathfrak{h} = 1\) for
+\(\mathfrak{h} \subset \sl_2(\CC)\) while \(\dim \mathfrak{h} > 1\) for
+\(\mathfrak{h} \subset \sl_3(\CC)\). The question then is: why did we choose
+\(\mathfrak{h}\) with \(\dim \mathfrak{h} > 1\) for \(\sl_3(\CC)\)?
+
+The rationale behind fixing an Abelian subalgebra is one we have already
+encountered when dealing with finite groups: representations of Abelian groups
+and algebras are generally much simpler to understand than the general case.
+Thus it makes sense to decompose a given representation \(V\) of
+\(\mathfrak{g}\) into subspaces invariant under the action of \(\mathfrak{h}\),
+and then analyze how the remaining elements of \(\mathfrak{g}\) act on these
+subspaces. The bigger \(\mathfrak{h}\) is, the simpler our problem gets,
+because there are fewer elements outside of \(\mathfrak{h}\) left to analyze.
+
+Hence we are generally interested in maximal Abelian subalgebras
+\(\mathfrak{h} \subset \mathfrak{g}\) -- or, more precisely, in Abelian
+subalgebras that are maximal among those consisting of semisimple elements,
+i.e. elements \(H\) for which \(\ad(H)\) is diagonalizable. When
+\(\mathfrak{g}\) is semisimple, these coincide with the so-called \emph{Cartan
+subalgebras} of \(\mathfrak{g}\) -- i.e. self-normalizing nilpotent
+subalgebras. A simple argument via Zorn's lemma is enough to establish the
+existence of such subalgebras: it suffices to note that if
+\[
+  \mathfrak{h}_1
+  \subset \mathfrak{h}_2
+  \subset \cdots
+  \subset \mathfrak{h}_n
+  \subset \cdots
+\]
+is a chain of Abelian subalgebras consisting of semisimple elements, then their
+union is again an Abelian subalgebra consisting of semisimple elements.
+Alternatively, one can show that every compact Lie group \(G\) contains a
+maximal torus, whose complexified Lie algebra is a Cartan subalgebra of the
+complexification of the Lie algebra of \(G\).
+
+That said, we can easily compute concrete examples. For instance, one can
+readily check that every pair of diagonal matrices commutes, so that
+\[
+ \mathfrak{h} =
+ \begin{pmatrix}
+ \CC & 0 & \cdots & 0 \\
+ 0 & \CC & \cdots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \cdots & \CC
+ \end{pmatrix}
+\]
+is an Abelian subalgebra of \(\gl_n(\CC)\). A simple calculation then shows
+that if \(X \in \gl_n(\CC)\) commutes with every diagonal matrix \(H \in
+\mathfrak{h}\) then \(X\) is a diagonal matrix, so that \(\mathfrak{h}\) is a
+Cartan subalgebra of \(\gl_n(\CC)\). The intersection of this subalgebra with
+\(\sl_n(\CC)\) -- i.e. the subalgebra of traceless diagonal matrices -- is a
+Cartan subalgebra of \(\sl_n(\CC)\). In particular, if \(n = 2\) or \(n = 3\)
+we recover the subalgebras described in the previous two sections.
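+
+Concretely, for \(n = 3\) this is the subalgebra
+\[
+  \mathfrak{h}
+  = \left\{
+    \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix}
+    : a + b + c = 0
+  \right\}
+  \subset \sl_3(\CC),
+\]
+and in this notation the functionals \(\alpha_1\), \(\alpha_2\) and
+\(\alpha_3\) from the previous section are simply the maps taking such a
+matrix to \(a\), \(b\) and \(c\) respectively.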
+
+The remaining question then is: if \(\mathfrak{h} \subset \mathfrak{g}\) is a
+Cartan subalgebra and \(V\) is a representation of \(\mathfrak{g}\), does the
+eigenspace decomposition
+\[
+ V = \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda
+\]
+of \(V\) hold? The answer to this question turns out to be yes. This is a
+consequence of something known as \emph{simultaneous diagonalization}, which is
+the primary tool we'll use to generalize the results of the previous section.
+What is simultaneous diagonalization all about then?
+
+\begin{proposition}
+  Let \(\mathfrak{g}\) be a Lie algebra, \(\mathfrak{h} \subset \mathfrak{g}\)
+  be an Abelian subalgebra and \(V\) be a finite-dimensional representation of
+  \(\mathfrak{g}\) on which every element of \(\mathfrak{h}\) acts as a
+  diagonalizable operator. Then there is a basis \(\{v_1, \ldots, v_n\}\) of
+  \(V\) so that each \(v_i\) is simultaneously an eigenvector of all elements
+  of \(\mathfrak{h}\) -- i.e. each element of \(\mathfrak{h}\) acts as a
+  diagonal matrix in this basis. In other words, for each \(i\) there is some
+  linear functional \(\lambda_i \in \mathfrak{h}^*\) so that
+  \[
+    H v_i = \lambda_i(H) \cdot v_i
+  \]
+  for all \(H \in \mathfrak{h}\).
+\end{proposition}
+
+\begin{proof}
+  Fix a basis \(\{H_1, \ldots, H_m\}\) of \(\mathfrak{h}\). We proceed by
+  induction on \(m\). For \(m = 1\) the statement is just the hypothesis that
+  \(H_1\) acts as a diagonalizable operator on \(V\). For \(m > 1\), consider
+  the decomposition
+  \[
+    V = \bigoplus_{c \in \CC} V_c
+  \]
+  of \(V\) into the eigenspaces \(V_c = \{ v \in V : H_m v = c \cdot v \}\) of
+  the action of \(H_m\).
+
+  Since \(\mathfrak{h}\) is Abelian, each \(H_i\) commutes with \(H_m\) and
+  therefore preserves each \(V_c\): if \(v \in V_c\) then
+  \[
+    H_m (H_i v) = H_i (H_m v) = c \cdot H_i v
+  \]
+  Moreover, the restriction of a diagonalizable operator to an invariant
+  subspace is again diagonalizable, so every element of the Abelian subalgebra
+  spanned by \(H_1, \ldots, H_{m - 1}\) acts as a diagonalizable operator on
+  each \(V_c\). By the induction hypothesis, each \(V_c\) thus admits a basis
+  of simultaneous eigenvectors of \(H_1, \ldots, H_{m - 1}\), and these vectors
+  are also eigenvectors of \(H_m\). The union of these bases is the desired
+  basis of \(V\).
+\end{proof}
+
+Now when \(\mathfrak{g}\) is semisimple and \(\mathfrak{h} \subset
+\mathfrak{g}\) is a Cartan subalgebra, every \(H \in \mathfrak{h}\) is a
+semisimple element of \(\mathfrak{g}\) -- i.e. \(\ad(H)\) is diagonalizable --
+and one can show, using the preservation of the abstract Jordan decomposition,
+that \(H\) acts as a diagonalizable operator on any finite-dimensional
+representation of \(\mathfrak{g}\) -- see \cite{humphreys} for details. As
+promised, this implies\dots
+
+\begin{corollary}
+ Let \(\mathfrak{g}\) be a finite-dimensional complex semisimple Lie algebra
+ and \(\mathfrak{h}\) be a Cartan subalgebra of \(\mathfrak{g}\). Given a
+ finite-dimensional representation \(V\) of \(\mathfrak{g}\),
+ \[
+ V = \bigoplus_{\lambda \in \mathfrak{h}^*} V_\lambda
+ \]
+\end{corollary}
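+
+In particular, taking \(V = \mathfrak{g}\) to be the adjoint representation we
+obtain the decomposition
+\[
+  \mathfrak{g}
+  = \mathfrak{h} \oplus \bigoplus_{\alpha \ne 0} \mathfrak{g}_\alpha
+\]
+of \(\mathfrak{g}\) into eigenspaces of the adjoint action of \(\mathfrak{h}\):
+here the \(0\)-eigenspace is the centralizer of \(\mathfrak{h}\) in
+\(\mathfrak{g}\), which contains \(\mathfrak{h}\) because \(\mathfrak{h}\) is
+Abelian and is contained in the normalizer of \(\mathfrak{h}\) -- i.e. in
+\(\mathfrak{h}\) itself, since Cartan subalgebras are self-normalizing.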
+
+We now have most of the necessary tools to reproduce the results of the
+previous sections in a general setting. Let \(\mathfrak{g}\) be a
+finite-dimensional complex semisimple Lie algebra with a Cartan subalgebra
+\(\mathfrak{h}\) and let \(V\) be a finite-dimensional irreducible
+representation of \(\mathfrak{g}\). We will proceed, as we did before, by
+generalizing the results of the previous two sections in order. By now the
+pattern should be starting to become clear, so we will mostly omit technical
+details and proofs analogous to the ones in the previous sections. Further
+details can be found in appendix D of \cite{fulton-harris} and in
+\cite{humphreys}.
+
+We begin our analysis by remarking that in both \(\sl_2(\CC)\) and
+\(\sl_3(\CC)\), the roots were symmetric about the origin and spanned all of
+\(\mathfrak{h}^*\). This turns out to be a general fact, which is a consequence
+of the following theorem.
+
+% TODO: Add a proof? The proof of FH turns out to be recursive!!!!
+% TODO: Note that this is where the maximality of the Cartan subalgebra comes
+% into play
+\begin{theorem}
+ If \(\mathfrak g\) is semisimple then its Killing form \(K\) is
+ non-degenerate. Furthermore, the restriction of \(K\) to \(\mathfrak{h}\) is
+ non-degenerate.
+\end{theorem}
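+
+For \(\sl_n(\CC)\), for instance, a direct computation shows that the Killing
+form is given by \(K(X, Y) = 2 n \Tr(X Y)\), and one can check that its
+restriction to the subalgebra of traceless diagonal matrices is
+non-degenerate.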
+
+\begin{proposition}\label{thm:weights-symmetric-span}
+ The eigenvalues \(\alpha\) of the adjoint action of \(\mathfrak{h}\) in
+ \(\mathfrak{g}\) are symmetrical about the origin -- i.e. \(- \alpha\) is
+ also an eigenvalue -- and they span all of \(\mathfrak{h}^*\).
+\end{proposition}
+
+\begin{proof}
+ We'll start with the first claim. Let \(\alpha\) and \(\beta\) be two
+ eigenvalues of the adjoint action of \(\mathfrak{h}\). Notice
+ \([\mathfrak{g}_\alpha, \mathfrak{g}_\beta] \subset \mathfrak{g}_{\alpha +
+ \beta}\). Indeed, if \(X \in \mathfrak{g}_\alpha\) and \(Y \in
+ \mathfrak{g}_\beta\) then
+ \[
+    [H, [X, Y]]
+ = [X, [H, Y]] - [Y, [H, X]]
+ = (\alpha + \beta)(H) \cdot [X, Y]
+ \]
+ for all \(H \in \mathfrak{h}\).
+
+ This implies that if \(\alpha + \beta \ne 0\) then \(\ad(X) \ad(Y)\) is
+ nilpotent: if \(Z \in \mathfrak{g}_\gamma\) then
+ \[
+ (\ad(X) \ad(Y))^n Z
+    = [X, [Y, \ldots [X, [Y, Z]] \ldots ]]
+ \in \mathfrak{g}_{n \alpha + n \beta + \gamma}
+ = 0
+ \]
+ for \(n\) large enough. In particular, \(K(X, Y) = \Tr(\ad(X) \ad(Y)) = 0\).
+  Now if \(- \alpha\) is not an eigenvalue, then \(\alpha + \beta \ne 0\) for
+  every eigenvalue \(\beta\), so that any non-zero \(X \in \mathfrak{g}_\alpha\)
+  satisfies \(K(X, \mathfrak{g}_\beta) = 0\) for all \(\beta\) and hence
+  \(K(X, \mathfrak{g}) = 0\), which contradicts the non-degeneracy of
+  \(K\). Hence \(- \alpha\) must be an eigenvalue of the adjoint action of
+ \(\mathfrak{h}\).
+
+ For the second statement, note that if the eigenvalues of \(\mathfrak{h}\) do
+ not span all of \(\mathfrak{h}^*\) then there is some \(H \in \mathfrak{h}\)
+ non-zero such that \(\alpha(H) = 0\) for all eigenvalues \(\alpha\), which is
+ to say, \(\ad(H) X = [H, X] = 0\) for all \(X \in \mathfrak{g}\). Another way
+ of putting it is to say \(H\) is an element of the center \(\mathfrak{z}\) of
+  \(\mathfrak{g}\), which is zero by the semisimplicity of \(\mathfrak{g}\) --
+  a contradiction.
+\end{proof}
+
+Furthermore, as in the case of \(\sl_2(\CC)\) and \(\sl_3(\CC)\) one can
+show\dots
+
+\begin{proposition}\label{thm:root-space-dim-1}
+  The eigenspaces \(\mathfrak{g}_\alpha\) associated with non-zero eigenvalues
+  \(\alpha\) -- i.e. the root spaces of \(\mathfrak{g}\) -- are all
+  1-dimensional.
+\end{proposition}
+
+The proof of the first statement of
+proposition~\ref{thm:weights-symmetric-span} highlights something interesting:
+if we fix some eigenvalue \(\alpha\) of the adjoint action of
+\(\mathfrak{h}\) in \(\mathfrak{g}\) and an eigenvector \(X \in
+\mathfrak{g}_\alpha\), then for each \(H \in \mathfrak{h}\) and \(v \in
+V_\lambda\) we find
+\[
+ H (X v)
+ = X (H v) + [H, X] v
+ = (\lambda + \alpha)(H) \cdot X v
+\]
+so that \(X\) carries \(v\) to \(V_{\lambda + \alpha}\). We have encountered
+this formula twice in this chapter: again, we find \(\mathfrak{g}_\alpha\)
+\emph{acts on \(V\) by translating vectors between eigenspaces}. In other
+words, if we denote by \(\Delta\) the set of all roots of \(\mathfrak{g}\)
+then\dots
+
+\begin{theorem}\label{thm:weights-congruent-mod-root}
+ The weights of an irreducible representation \(V\) of \(\mathfrak{g}\) are
+  all congruent modulo the root lattice \(Q = \ZZ \Delta\) of \(\mathfrak{g}\).
+\end{theorem}
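+
+For \(\sl_2(\CC)\), for example, the root lattice is generated by the roots
+\(\pm \alpha\), where \(\alpha(h) = 2\), and
+theorem~\ref{thm:weights-congruent-mod-root} recovers the fact that the
+eigenvalues of \(h\) in an irreducible representation all differ from one
+another by even integers -- i.e. they all have the same parity.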
+
+To proceed further, as in the case of \(\sl_3(\CC)\) we have to fix a direction
+in \(\mathfrak{h}^*\) -- i.e. we fix an \(\RR\)-linear functional
+\(\mathfrak{h}^* \to \RR\) such that no non-zero element of \(Q\) lies in its
+kernel. This choice induces a
+partition \(\Delta = \Delta^+ \cup \Delta^-\) of the set of roots of
+\(\mathfrak{g}\) and once more we find\dots
+
+\begin{theorem}
+ There is a weight vector \(v \in V\) that is killed by all positive root
+ spaces of \(\mathfrak{g}\).
+\end{theorem}
+
+\begin{proof}
+  It suffices to note that if \(\lambda\) is the weight of \(V\) lying
+  furthest along the direction we chose and \(V_{\lambda + \alpha} \ne 0\) for
+  some \(\alpha \in \Delta^+\), then \(\lambda + \alpha\) is a weight lying
+  further along that direction than \(\lambda\), which contradicts the
+  choice of \(\lambda\).
+\end{proof}
+
+Accordingly, we call \(\lambda\) \emph{the highest weight of \(V\)}, and we
+call any \(v \in V_\lambda\) \emph{a highest weight vector}. The strategy then
+is to describe all weight spaces of \(V\) in terms of \(\lambda\) and \(v\), as
+in theorem~\ref{thm:sl3-irr-weights-class}, and unsurprisingly we do so by
+reproducing the proof of the case of \(\sl_3(\CC)\). Namely, we show\dots
+
+\begin{proposition}\label{thm:distinguished-subalgebra}
+ Given a root \(\alpha\) of \(\mathfrak{g}\) the subspace
+ \(\mathfrak{s}_\alpha = \mathfrak{g}_\alpha \oplus \mathfrak{g}_{- \alpha}
+ \oplus [\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]\) is a subalgebra
+ isomorphic to \(\sl_2(\CC)\).
+\end{proposition}
+
+\begin{corollary}\label{thm:distinguished-subalg-rep}
+ For all weights \(\mu\), the subspace
+ \[
+ V_\mu[\alpha] = \bigoplus_k V_{\mu + k \alpha}
+ \]
+ is invariant under the action of the subalgebra \(\mathfrak{s}_\alpha\)
+  and, under an isomorphism \(\mathfrak{s}_\alpha \cong \sl_2(\CC)\), the
+  weight spaces \(V_{\mu + k \alpha}\) in this string correspond to the
+  eigenspaces of \(h\).
+\end{corollary}
+
+The proof of proposition~\ref{thm:distinguished-subalgebra} is very technical
+in nature and we won't include it here, but the idea behind it is simple:
+recall that \(\mathfrak{g}_\alpha\) and \(\mathfrak{g}_{- \alpha}\) are both
+1-dimensional, so that \(\dim [\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]\)
+is at most 1. We check that \([\mathfrak{g}_\alpha, \mathfrak{g}_{- \alpha}]
+\ne 0\) and that no non-zero element of \([\mathfrak{g}_\alpha, \mathfrak{g}_{-
+\alpha}]\) is annihilated by \(\alpha\), so that by adjusting scalars we
+can find \(E_\alpha \in \mathfrak{g}_\alpha\) and \(F_\alpha \in
+\mathfrak{g}_{- \alpha}\) such that \(H_\alpha = [E_\alpha, F_\alpha]\)
+satisfies
+\begin{align*}
+ [H_\alpha, F_\alpha] & = -2 F_\alpha &
+ [H_\alpha, E_\alpha] & = 2 E_\alpha
+\end{align*}
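+
+For concreteness, if \(\mathfrak{g} = \sl_3(\CC)\) and \(\alpha = \alpha_1 -
+\alpha_2\) we may take \(E_\alpha = E_{1 2}\), \(F_\alpha = E_{2 1}\) and
+\(H_\alpha = [E_{1 2}, E_{2 1}] = E_{1 1} - E_{2 2}\), and indeed
+\begin{align*}
+  [H_\alpha, E_\alpha] & = 2 E_\alpha &
+  [H_\alpha, F_\alpha] & = - 2 F_\alpha
+\end{align*}
+so that \(\mathfrak{s}_\alpha\) is the copy of \(\sl_2(\CC)\) sitting inside
+\(\sl_3(\CC)\) in the upper left-hand corner.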
+
+The elements \(E_\alpha, F_\alpha \in \mathfrak{g}\) are not uniquely
+determined by this condition, but \(H_\alpha\) is. The second statement of
+corollary~\ref{thm:distinguished-subalg-rep} imposes a restriction on the
+weights of \(V\). Namely, if \(\mu\) is a weight, \(\mu(H_\alpha)\) is an
+eigenvalue of \(h\) in some representation of \(\sl_2(\CC)\), so that\dots
+
+\begin{proposition}
+ The weights \(\mu\) of an irreducible representation \(V\) of
+  \(\mathfrak{g}\) are such that \(\mu(H_\alpha) \in \ZZ\) for each \(\alpha \in
+ \Delta\).
+\end{proposition}
+
+Once more, the lattice \(P = \{ \lambda \in \mathfrak{h}^* : \lambda(H_\alpha)
+\in \ZZ, \forall \alpha \in \Delta \}\) is called \emph{the weight lattice of
+\(\mathfrak{g}\)}, and we call the elements of \(P\) \emph{integral}. Finally,
+another important consequence of proposition~\ref{thm:distinguished-subalgebra}
+is\dots
+
+\begin{corollary}
+ If \(\alpha \in \Delta^+\) and \(T_\alpha : \mathfrak{h}^* \to
+ \mathfrak{h}^*\) is the reflection in the hyperplane perpendicular to
+ \(\alpha\) with respect to the Killing form,
+  corollary~\ref{thm:distinguished-subalg-rep} implies that, for any weight
+  \(\mu\) of \(V\), all \(\nu \in P\) lying on the line segment connecting
+  \(\mu\) and \(T_\alpha \mu\) and differing from \(\mu\) by an integral
+  multiple of \(\alpha\) are weights -- i.e. \(V_\nu \ne 0\).
+\end{corollary}
+
+\begin{proof}
+  It suffices to note that any such \(\nu\) is a weight of the
+  \(\mathfrak{s}_\alpha\)-representation \(V_\mu[\alpha]\), so the claim
+  follows from the representation theory of \(\sl_2(\CC)\) -- see appendix D
+  of \cite{fulton-harris} for further details.
+\end{proof}
+
+\begin{definition}
+ We refer to the group \(W = \langle T_\alpha : \alpha \in \Delta^+ \rangle
+ \subset \operatorname{O}(\mathfrak{h}^*)\) as \emph{the Weyl group of
+ \(\mathfrak{g}\)}.
+\end{definition}
+
+This is entirely analogous to the situation of \(\sl_3(\CC)\), where we found
+that the weights of the irreducible representations were symmetric with respect
+to the lines \(\langle \alpha_i - \alpha_j, \alpha \rangle = 0\). Indeed, the
+same argument leads us to the conclusion\dots
+
+\begin{theorem}\label{thm:irr-weight-class}
+ The weights of an irreducible representation \(V\) of \(\mathfrak{g}\) with
+ highest weight \(\lambda\) are precisely the elements of the weight lattice
+ \(P\) congruent to \(\lambda\) modulo the root lattice \(Q\) lying inside the
+ convex hull of the image of \(\lambda\) under the action of the Weyl group
+ \(W\).
+\end{theorem}
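+
+For \(\sl_2(\CC)\), for instance, the Weyl group is \(\{\pm 1\}\), acting on
+\(\mathfrak{h}^* \cong \CC\) by \(\lambda \mapsto \pm \lambda\), and
+theorem~\ref{thm:irr-weight-class} recovers the fact that the eigenvalues of
+\(h\) in the irreducible representation with highest weight \(n\) are
+precisely \(n, n - 2, \ldots, 2 - n, -n\). Likewise, the Weyl group of
+\(\sl_3(\CC)\) is isomorphic to the symmetric group \(S_3\) -- generated by
+the reflections across the lines perpendicular to the roots \(\alpha_i -
+\alpha_j\) -- and theorem~\ref{thm:irr-weight-class} specializes to
+theorem~\ref{thm:sl3-irr-weights-class}.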
+
+Now the only thing we are missing for a complete classification is an existence
+and uniqueness theorem analogous to theorem~\ref{thm:sl2-exist-unique} and
+theorem~\ref{thm:sl3-existence-uniqueness}. Lo and behold\dots
+
+\begin{theorem}\label{thm:dominant-weight-theo}
+ For each \(\lambda \in P\) such that \(\lambda(H_\alpha) \ge 0\) for
+ all positive roots \(\alpha\) there exists precisely one irreducible
+ representation \(V\) of \(\mathfrak{g}\) whose highest weight is \(\lambda\).
+\end{theorem}
+
+\begin{note}
+ An element \(\lambda\) of \(P\) such that \(\lambda(H_\alpha) \ge 0\) for all
+ \(\alpha \in \Delta^+\) is usually referred to as an \emph{integral
+ dominant weight of \(\mathfrak{g}\)}.
+\end{note}
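+
+For \(\sl_2(\CC)\), for example, the integral dominant weights are precisely
+the weights \(\lambda\) with \(\lambda(h)\) a non-negative integer, so that
+theorem~\ref{thm:dominant-weight-theo} specializes to
+theorem~\ref{thm:sl2-exist-unique}.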
+
+Unsurprisingly, our strategy is to copy what we did in the previous section.
+The ``uniqueness'' part of the theorem follows at once from the argument used
+for \(\sl_3(\CC)\), and the proof of existence can once again be reduced
+to the proof of\dots
+
+\begin{theorem}\label{thm:weak-dominant-weight}
+  Given an integral dominant weight \(\lambda \in P\), there exists
+  \emph{some} -- not necessarily irreducible -- finite-dimensional
+  representation of \(\mathfrak{g}\) whose highest weight is \(\lambda\).
+\end{theorem}
+
+The trouble comes when we try to generalize the proof of
+theorem~\ref{thm:weak-dominant-weight} we used for the case when \(\mathfrak{g}
+= \sl_3(\CC)\). The issue is that our proof relied heavily on our knowledge of
+the roots of \(\sl_3(\CC)\). Specifically, we used the fact that every dominant
+integral weight of \(\sl_3(\CC)\) can be written as \(n \alpha_1 - m \alpha_3\)
+for unique non-negative integers \(n\) and \(m\). We then constructed
+finite-dimensional representations \(V\) and \(W\) of \(\sl_3(\CC)\) whose
+highest weights are \(\alpha_1\) and \(- \alpha_3\), so that the highest weight
+of \(\Sym^n V \otimes \Sym^m W\) is \(n \alpha_1 - m \alpha_3\).
+
+A similar construction can be implemented for \(\sl_n(\CC)\): if \(\mathfrak{h}
+\subset \sl_n(\CC)\) is the subalgebra of diagonal matrices -- which, as you
+may recall, is a Cartan subalgebra -- and \(\alpha_i : \mathfrak{h} \to \CC\)
+is given by \(\alpha_i(E_{j j}) = \delta_{i j}\), one can show that any dominant
+integral weight of \(\sl_n(\CC)\) can be uniquely expressed in the form \(k_1
+\alpha_1 + k_2 (\alpha_1 + \alpha_2) + \cdots + k_{n - 1} (\alpha_1 + \cdots +
+\alpha_{n - 1})\) for non-negative integers \(k_1, k_2, \ldots, k_{n - 1}\). For
+instance, one may visually represent the roots of \(\sl_4(\CC)\) by
+\begin{center}
+ \begin{tikzpicture}[scale=3]
+ \draw (0, 0) -- (1, 0) -- (1, 1) -- (0, 1) -- cycle;
+ \draw (0, 1) -- (.4, 1.4) -- (1.4, 1.4) -- (1, 1);
+ \draw (1, 0) -- (1.4, .4) -- (1.4, 1.4);
+ \draw[dotted] (0, 0) -- (.4, .4) -- (1.4, .4);
+ \draw[dotted] (.4, 1.4) -- (.4, .4);
+
+ \filldraw (.5, 0) circle (.7pt);
+ \filldraw (.5, 1) circle (.7pt);
+ \node[below] at (.5, 0) {$\alpha_2 - \alpha_1$};
+
+ \filldraw ( .4, .9) circle (.7pt);
+ \filldraw (1.4, .9) circle (.7pt);
+ \node[right] at (1.4, .9) {$\alpha_1 - \alpha_4$};
+
+ \filldraw (.9, .4) circle (.7pt);
+ \filldraw (.9, 1.4) circle (.7pt);
+ \node[above] at (.9, 1.4) {$\alpha_1 - \alpha_2$};
+
+ \filldraw (0, .5) circle (.7pt);
+ \filldraw (1, .5) circle (.7pt);
+ \node[left] at (0, .5) {$\alpha_4 - \alpha_1$};
+
+ \filldraw (.2, .2) circle (.7pt);
+ \filldraw (.2, 1.2) circle (.7pt);
+ \node[above left] at (.2, 1.2) {$\alpha_4 - \alpha_2$};
+
+ \filldraw (1.2, .2) circle (.7pt);
+ \filldraw (1.2, 1.2) circle (.7pt);
+ \node[below right] at (1.2, .2) {$\alpha_2 - \alpha_4$};
+ \end{tikzpicture}
+\end{center}
+
+% TODO: Historical citation needed!
+% TODO: Mention at the start of the chapter that we are following Weyl's
+% footsteps in here
+One can then construct representations \(V_i\) of \(\sl_n(\CC)\) whose highest
+weights are \(\alpha_1 + \cdots + \alpha_i\). In fact, whenever we can find
+finitely many generators \(\beta_i\) of the set of dominant integral weights
+and finite-dimensional representations \(V_i\) of \(\mathfrak{g}\) whose
+highest weights are \(\beta_i\), we can construct a finite-dimensional
+representation of \(\mathfrak{g}\) whose highest weight is any given dominant
+integral \(\lambda \in P\) by tensoring symmetric powers of the \(V_i\)'s. This
+is the approach we'll take to prove theorem~\ref{thm:weak-dominant-weight}, as
+historically this was Weyl's first proof of the theorem. As of now, however, we
+don't have the necessary tools to construct a standard set of generators of the
+dominant integral weights of some arbitrary semisimple \(\mathfrak{g}\), let
+alone the representations \(V_i\). Indeed, Weyl's work was based on Cartan's
+classification of finite-dimensional complex simple Lie algebras, which we so
+far have neglected to mention.
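+
+For \(\sl_n(\CC)\) itself such representations are not hard to come by: one
+can check that the exterior power \(\wedge^i \CC^n\) of the natural
+representation has highest weight \(\alpha_1 + \cdots + \alpha_i\), with
+highest weight vector \(e_1 \wedge \cdots \wedge e_i\), so that we may take
+\(V_i = \wedge^i \CC^n\).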
+
+Alternatively, one could construct a potentially infinite-dimensional
+representation of \(\mathfrak{g}\) whose highest weight is some fixed dominant
+integral weight \(\lambda\) by taking the induced representation
+\(\Ind{\mathfrak{g}}{\mathfrak{b}} V_\lambda = \mathcal{U}(\mathfrak{g})
+\otimes_{\mathcal{U}(\mathfrak{b})} V_\lambda\), where \(\mathfrak{b} =
+\mathfrak{h} \oplus \bigoplus_{\alpha \in \Delta^+} \mathfrak{g}_\alpha \subset
+\mathfrak{g}\) is the so-called \emph{Borel subalgebra of \(\mathfrak{g}\)},
+\(\mathcal{U}(\mathfrak{g})\) denotes the \emph{universal enveloping algebra
+of \(\mathfrak{g}\)} and \(\mathfrak{b}\) acts on \(V_\lambda = \CC v\) via \(H
+v = \lambda(H) \cdot v\) and \(X v = 0\) for \(X \in \mathfrak{g}_\alpha\) --
+this is the approach taken by \cite{humphreys} in his proof. The fact that
+\(v\) is annihilated by all positive root spaces guarantees that the maximal
+weight of \(\Ind{\mathfrak{g}}{\mathfrak{b}} V_\lambda\) is at most
+\(\lambda\), while the Poincar\'e-Birkhoff-Witt theorem \cite{humphreys}
+guarantees that \(v = 1 \otimes v\) is a non-zero weight vector of weight
+\(\lambda\) -- so that \(\lambda\) is the highest weight of
+\(\Ind{\mathfrak{g}}{\mathfrak{b}} V_\lambda\). The challenge then is to show
+that the unique irreducible quotient of this representation is
+finite-dimensional -- see chapter 20 of \cite{humphreys} for a proof.
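+
+Concretely, if we write \(\Delta^+ = \{\beta_1, \ldots, \beta_m\}\) and pick
+non-zero elements \(F_{\beta_i} \in \mathfrak{g}_{- \beta_i}\), the
+Poincar\'e-Birkhoff-Witt theorem may be used to check that the vectors
+\[
+  F_{\beta_1}^{k_1} F_{\beta_2}^{k_2} \cdots F_{\beta_m}^{k_m} \otimes v,
+  \quad k_1, \ldots, k_m \ge 0,
+\]
+form a basis of \(\Ind{\mathfrak{g}}{\mathfrak{b}} V_\lambda\), each of them a
+weight vector of weight \(\lambda - (k_1 \beta_1 + \cdots + k_m \beta_m)\) --
+which makes the claims about the weights of
+\(\Ind{\mathfrak{g}}{\mathfrak{b}} V_\lambda\) in the previous paragraph
+transparent.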
+
+This approach has the advantage of working over fields other than \(\CC\), but
+in keeping with our general theme of preferring geometric proofs over purely
+algebraic ones we will instead take this as an opportunity to dive into
+Cartan's classification. In the next chapter we will explore the structure of
+complex semisimple Lie algebras, and in the process of doing so we will reduce
+the proof of theorem~\ref{thm:weak-dominant-weight} to a proof by exhaustion.
+