Mar 8, 2026
Sheldon Axler wrote a book called Linear Algebra Done Right that he has graciously made available under a Creative Commons license. He declared his hatred for determinants in both of his prefaces and left them to the last chapter of his book. I hated them too in my first linear algebra course. I even refused to learn them, and got a grade of C. Much later, I learned about (Grassmann 1844). Hermann Grassmann showed how determinants follow easily from his exterior product of points. His rule is very simple: if P and Q are points in space then PQ = 0 if and only if P = Q.
Exercise. Show PQ = -QP.
Hint: 0 = (P + Q)(P + Q) = PQ + QP.
Exercise. Show (aP + bQ)(cP + dQ) = (ad - bc)PQ.
The coefficient ad - bc is the determinant of the matrix \begin{bmatrix}a & b\\ c&d\end{bmatrix}. This generalizes to any number of dimensions.
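The exercise above can be checked numerically. In the plane the exterior product of two vectors is determined by a single scalar, so the sketch below (the names wedge and lin are mine, not Grassmann's notation) represents PQ by the number p_1 q_2 - p_2 q_1 and verifies that the coefficient ad - bc appears:

```python
# Numerical check of (aP + bQ)(cP + dQ) = (ad - bc)PQ in the plane.
# The exterior product of two plane vectors is represented here by the
# single scalar coefficient wedge(u, v) = u1*v2 - u2*v1.

def wedge(u, v):
    """Exterior product of two 2-d vectors, as a scalar coefficient of PQ."""
    return u[0] * v[1] - u[1] * v[0]

def lin(a, P, b, Q):
    """The linear combination aP + bQ, computed componentwise."""
    return (a * P[0] + b * Q[0], a * P[1] + b * Q[1])

P, Q = (3, 1), (-2, 5)
a, b, c, d = 2, 7, -1, 4

lhs = wedge(lin(a, P, b, Q), lin(c, P, d, Q))
rhs = (a * d - b * c) * wedge(P, Q)
assert lhs == rhs  # the coefficient is the determinant ad - bc
assert wedge(P, P) == 0  # PP = 0
```

Note that wedge(P, P) == 0 and wedge(P, Q) == -wedge(Q, P) hold automatically, matching the two earlier exercises.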
…More stuff…
The usual definition of the vector space \boldsymbol{{R}}^n over the real numbers \boldsymbol{{R}} is the set of all n-tuples of real numbers \{x = (x_1,\ldots,x_n)\mid x_j\in\boldsymbol{{R}}, 1\le j\le n\}. It has a scalar multiplication a(x_1,\ldots,x_n) = (ax_1,\ldots,ax_n), a\in\boldsymbol{{R}} and vector addition (x_1,\ldots,x_n) + (y_1,\ldots,y_n) = (x_1 + y_1,\ldots,x_n + y_n) that satisfies the distributive laws a(x + y) = ax + ay and (a + b)x = ax + bx, a,b\in\boldsymbol{{R}}.
Vector addition makes \boldsymbol{{R}}^n an abelian group. Addition is associative, x + (y + z) = (x + y) + z, and commutative, x + y = y + x. It has an identity element \boldsymbol{{0}} = (0,\ldots,0) and additive inverse -(x_1,\ldots,x_n) = (-x_1,\ldots,-x_n).
Exercise. Show (-1)x = -x for x\in\boldsymbol{{R}}^n.
Exercise. Show x + (-x) = \boldsymbol{{0}}.
We write x - y for x + (-y).
Exercise. Show 1x = x for x\in\boldsymbol{{R}}^n.
Exercise. Show x + x = x implies x = \boldsymbol{{0}}.
Hint: If x + x = x then (x + x) - x = x - x.
There is a different, but equivalent, way to define \boldsymbol{{R}}^n. Let \boldsymbol{{n}} = \{1,\ldots,n\} and define \boldsymbol{{R}}^{\boldsymbol{{n}}} = \{\boldsymbol{{x}}\colon\boldsymbol{{n}}\to\boldsymbol{{R}}\}, the set of all functions from \boldsymbol{{n}} to \boldsymbol{{R}}. Scalar multiplication and vector addition are defined pointwise. (ax)(i) = a(x(i)) and (x + y)(i) = x(i) + y(i).
The first definition of \boldsymbol{{R}}^n is that it is a product of n copies of \boldsymbol{{R}}. The second definition uses the set exponential B^A = \{f\colon A\to B\}, the set of all functions from A to B. They are in one-to-one correspondence by x_i = \boldsymbol{{x}}(i), i\in\boldsymbol{{n}}. Scalar multiplication and vector addition are preserved by this equivalence so we call it an isomorphism (same form).
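Here is a sketch of the two equivalent definitions, with the function form represented by a Python dict keyed by 1,\ldots,n (the helper names are my own):

```python
# R^n as tuples versus R^n as functions n -> R (here: dicts keyed by 1..n).
# The correspondence x_i = x(i) sends one representation to the other.

def to_func(x):
    """Tuple (x_1, ..., x_n) -> function form, a dict i |-> x_i."""
    return {i + 1: xi for i, xi in enumerate(x)}

def scale(a, f):
    """Pointwise scalar multiplication: (af)(i) = a * f(i)."""
    return {i: a * v for i, v in f.items()}

def add(f, g):
    """Pointwise vector addition: (f + g)(i) = f(i) + g(i)."""
    return {i: v + g[i] for i, v in f.items()}

x, y, a = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), 2.0

# The correspondence preserves both operations, i.e. it is an isomorphism:
assert to_func(tuple(a * xi for xi in x)) == scale(a, to_func(x))
assert to_func(tuple(xi + yi for xi, yi in zip(x, y))) == add(to_func(x), to_func(y))
```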
Exercise. Show a(x_1,\ldots,x_n) = (ax_1,\ldots,ax_n) corresponds to (a\boldsymbol{{x}})(i) = a(\boldsymbol{{x}}(i)).
Exercise. Show (x_1,\ldots,x_n) + (y_1,\ldots,y_n) = (x_1 + y_1,\ldots,x_n + y_n) corresponds to {(\boldsymbol{{x}} + \boldsymbol{{y}})(i) = \boldsymbol{{x}}(i) + \boldsymbol{{y}}(i)}.
This shows the function \boldsymbol{{R}}^n\to\boldsymbol{{R}}^{\boldsymbol{{n}}} given by x\mapsto\boldsymbol{{x}} is linear: it preserves scalar multiplication and vector addition.
We will drop the bold face type and define \boldsymbol{{R}}^I = \{I \to \boldsymbol{{R}}\} for any index set I and try to convince you this is a righter way to think about vector spaces.
For i\in I define e_i\in\boldsymbol{{R}}^I by e_i(j) = \delta_{ij} where \delta_{ij} = 1 if i = j and \delta_{ij} = 0 if i\not=j is the Kronecker delta.
Exercise. Show x = \sum_{i\in I} x(i) e_i for x\in\boldsymbol{{R}}^I.
Hint: x(j) = \sum_{i\in I} x(i) e_i(j).
This shows the standard basis spans \boldsymbol{{R}}^I.
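The expansion in the exercise can be computed directly. The following sketch represents vectors in \boldsymbol{{R}}^I as dicts over a small finite index set I and checks x = \sum_{i\in I} x(i) e_i pointwise:

```python
# The standard basis e_i(j) = delta_ij, and the expansion x = sum_i x(i) e_i,
# with vectors in R^I represented as dicts over a finite index set I.

I = {1, 2, 3}

def e(i):
    """Standard basis vector: e_i(j) = Kronecker delta_ij."""
    return {j: (1 if i == j else 0) for j in I}

x = {1: 2.5, 2: -1.0, 3: 4.0}

# x(j) = sum_{i in I} x(i) e_i(j), computed pointwise at each j
expansion = {j: sum(x[i] * e(i)[j] for i in I) for j in I}
assert expansion == x
```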
The span of a set of vectors is the smallest subspace containing the vectors.
Exercise. Show the span of x_i, i\in\boldsymbol{{n}} is the set of linear combinations \{\sum_i a_i x_i\mid a_i\in\boldsymbol{{R}}\}.
Hint: The set contains x_i for i\in\boldsymbol{{n}} and is a vector space.
Exercise. Show \sum_i a_i e_i = \boldsymbol{{0}} implies a_i = 0 for i\in I.
This shows the standard basis is independent.
Vectors x_i, i\in\boldsymbol{{n}}, are independent if \sum_i a_i x_i = \boldsymbol{{0}} implies a_i = 0 for all i\in\boldsymbol{{n}}.
Exercise. Show if x_1,\ldots,x_n are not independent then there exists j with x_j = \sum_{i\not=j} b_i x_i for some b_i\in\boldsymbol{{R}}.
Hint: We have \sum_i a_i x_i = \boldsymbol{{0}} with some a_j\not=0.
The dual of \boldsymbol{{R}}^I is the set of linear functionals, the linear functions from \boldsymbol{{R}}^I to \boldsymbol{{R}}.
Haskell Curry was a logician who extended Alonzo Church's lambda calculus.
Currying and uncurrying provide the connection between set exponential and cartesian product. The set {A\times B\to C} is in one-to-one correspondence with the set \{A\to\{B\to C\}\}. Given {f\in\{A\times B\to C\}} define {\operatorname{curry}f\colon A\to\{B\to C\}} by {((\operatorname{curry}f)(a))(b) = f(a,b)}. The inverse is uncurrying. Given {g\in\{A\to\{B\to C\}\}} define {\operatorname{uncurry}g\colon A\times B\to C} by {(\operatorname{uncurry}g)(a,b) = (g(a))(b) = (ga)b}.
Exercise. Show \operatorname{uncurry}(\operatorname{curry}f) = f and \operatorname{curry}(\operatorname{uncurry}g) = g.
The connection between cartesian product and exponential is C^{A\times B} \cong (C^B)^A where \cong indicates equivalence. If we write B^A as \{A\to B\} this becomes \{A\times B\to C\} \cong \{A\to \{B\to C\}\}.
A subspace of a vector space V is a subset U\subseteq V that is closed under scalar multiplication and vector addition.
Exercise. If U is a subspace of V then \boldsymbol{{R}}U\subseteq U.
Hint: \boldsymbol{{R}}U = \{au\mid a\in\boldsymbol{{R}}, u\in U\}.
Exercise. If U is a subspace of V then U + U\subseteq U.
Hint: U + U = \{u + u'\mid u,u'\in U\}.
A set of independent vectors that spans a vector space is a basis.
If T\colon V\to V is a linear operator then clearly TV\subseteq V. If U is a subspace of V and TU\subseteq U then U is an invariant subspace of the operator T. How can we find all invariant subspaces of an operator?
If Tv = \alpha v for some v\in V and \alpha\in\boldsymbol{{R}} we say v is an eigenvector of T with eigenvalue \alpha.
Exercise. If v is an eigenvector of T then \boldsymbol{{R}}v = \{av\mid a\in\boldsymbol{{R}}\} is an invariant subspace of T.
Exercise. If Tv = \alpha v, Tw = \beta w, and \alpha\not=\beta then v and w are independent.
Hint: If v and w are not independent then v = \gamma w for some \gamma\in\boldsymbol{{R}}.
If the eigenvectors of T form a basis of V then T is diagonalizable.
Exercise. If T is diagonalizable with eigenvalues \alpha_j then \prod_j (T - \alpha_j I) = 0.
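This product identity can be seen on a small example. The sketch below uses plain lists for 2x2 matrices and a diagonal (hence diagonalizable) T with eigenvalues 1 and 2; the helper names are mine:

```python
# Check prod_j (T - alpha_j I) = 0 for a diagonalizable T, here the 2x2
# diagonal matrix with eigenvalues 1 and 2.

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def shift(T, alpha):
    """T - alpha * I."""
    return [[T[i][j] - (alpha if i == j else 0) for j in range(2)]
            for i in range(2)]

T = [[1, 0], [0, 2]]  # eigenvalues alpha_1 = 1, alpha_2 = 2
product = matmul(shift(T, 1), shift(T, 2))
assert product == [[0, 0], [0, 0]]  # (T - I)(T - 2I) = 0
```

Each factor T - \alpha_j I kills the eigenvectors with eigenvalue \alpha_j, and since the eigenvectors form a basis, the product kills everything.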