Vector Space

Keith A. Lewis

May 2, 2023

Abstract
A mathematical sweet spot

Vector Space

A vector space V over a field \boldsymbol{F} is a commutative (abelian) group under addition together with a scalar multiplication satisfying the distributive laws a(u + w) = au + aw, a\in\boldsymbol{F}, u,w\in V, and (a + b)u = au + bu, a,b\in\boldsymbol{F}, u\in V. We also require a(bv) = (ab)v for a,b\in\boldsymbol{F}, v\in V and 1v = v, v\in V, where 1\in\boldsymbol{F} is the field unit.

Note that the fields of real numbers \boldsymbol{R} and complex numbers \boldsymbol{C} are (1-dimensional) vector spaces over themselves.

Exercise. Show v + v = v implies v = 0, v\in V.

Solution \begin{aligned} v + v &= v \\ &\quad\langle a = b\Rightarrow a + c = b + c\rangle\\ (v + v) + (-v) &= v + (-v) \\ &\quad\langle (a + b) + c = a + (b + c)\rangle\\ v + (v + (-v)) &= v + (-v) \\ &\quad\langle v + (-v) = 0\rangle\\ v + 0 &= 0 \\ &\quad\langle v + 0 = v\rangle\\ v &= 0 \end{aligned}

Recall B^A = \{f\colon A\to B\} is the set of all functions from the set A to the set B. Define the n-dimensional vector space \boldsymbol{F}^n = \{v\colon n\to\boldsymbol{F}\}, where n also denotes the set \{1,2,\ldots,n\}. The standard basis e_i\in\boldsymbol{F}^n is defined by e_i(j) = δ_{ij}, where δ_{ij} = 1 if i = j and δ_{ij} = 0 if i\not= j is the Kronecker delta.

Exercise. Show every v\in\boldsymbol{F}^n can be written v=\sum_i v_i e_i for some v_i\in\boldsymbol{F}.

Hint: v_i = v(i).

Solution v(j) = \sum_i v_i e_i(j) = \sum_i v_i \delta_{ij} = v_j.

This shows we can identify \{(v_1,\ldots,v_n)\mid v_j\in\boldsymbol{F}\} with \boldsymbol{F}^n where (v_1,\ldots,v_n) corresponds to \sum_i v_i e_i.
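A minimal sketch, not from the text, of this identification in Python: vectors in \boldsymbol{F}^n as tuples (with \boldsymbol{F} = the floats), the standard basis, and the expansion v = \sum_i v(i) e_i. The helper names `basis`, `scale`, `add`, and `combo` are my own.

```python
def basis(n, i):
    """Standard basis vector e_i in F^n (1-indexed, as in the text)."""
    return tuple(1.0 if j == i else 0.0 for j in range(1, n + 1))

def scale(a, v):
    """Scalar multiplication a*v."""
    return tuple(a * vi for vi in v)

def add(v, w):
    """Vector addition v + w."""
    return tuple(vi + wi for vi, wi in zip(v, w))

def combo(coeffs, vectors):
    """Linear combination sum_i a_i v_i."""
    out = (0.0,) * len(vectors[0])
    for a, v in zip(coeffs, vectors):
        out = add(out, scale(a, v))
    return out

# Every v in F^3 equals sum_i v(i) e_i:
v = (2.0, -1.0, 5.0)
e = [basis(3, i) for i in (1, 2, 3)]
assert combo(v, e) == v
```

The tuple (v_1, v_2, v_3) and the function i\mapsto v(i) carry the same information, which is exactly the identification above.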

Span

A linear combination of vectors v_j\in V is a sum \sum_j a_j v_j where a_j\in\boldsymbol{F}. The span of \{v_j\} is the set of all linear combinations.

Exercise. Show the span is a vector space.

Hint. Show if u is in the span then au is also in the span for a\in\boldsymbol{F} and if v and w are in the span then v + w is also in the span.

Solution If u = \sum_j a_j v_j then {au = \sum_j a(a_j v_j) = \sum_j (aa_j)v_j} is in the span. If {v = \sum_j b_j v_j} and {w = \sum_j c_j v_j} then {v + w = \sum_j (b_j + c_j) v_j} is in the span.

Subspace

A subset U\subseteq V of a vector space V is a subspace if U is also a vector space.

Exercise. Let U be a subset of V. Show if \boldsymbol{F}U\subseteq U and U + U\subseteq U then U is a subspace of V.

Hint. \boldsymbol{F}U = \{au\mid a\in\boldsymbol{F}, u\in U\} and U + U = \{v + w\mid v\in U, w\in U\}. Show if u\in U then au\in U and if v,w\in U then v + w\in U.

Solution If u\in U and a\in\boldsymbol{F} then au\in\boldsymbol{F}U\subseteq U. If v\in U and w\in U then v + w\in U + U\subseteq U.

Exercise. Show the intersection of two subspaces is a subspace.

Hint. Show if v is in the intersection then av is also in the intersection for a\in\boldsymbol{F} and if u and w are in the intersection then u + w is also in the intersection.

Solution Let U and W be the two subspaces. If v\in U\cap W and a\in\boldsymbol{F} then av\in U and av\in W so av\in U\cap W. If u,w\in U\cap W then u + w\in U and u + w\in W so u + w\in U\cap W.

Exercise. Show the sum of two subspaces is a subspace.

Hint. The sum of subspaces U, W\subseteq V is U + W = \{u + w\mid u\in U, w\in W\}.

Independent

A set of vectors \{v_j\} is independent if \sum_j a_j v_j = 0 implies a_j = 0 for all j.

Exercise. If \{v_j\} are not independent then v_i = \sum_{j\not= i} a_j v_j for some i.

Hint: If \sum a_j v_j = 0 and a_i\not= 0 for some i then a_i v_i = -\sum_{j\not=i} a_j v_j.

Solution v_i = -\sum_{j\not=i} (a_j/a_i) v_j.

Basis

A collection of vectors (v_i)_{i\in I}, v_i\in V, is a basis of V if they are independent and span V. Since they span V every vector v\in V can be written v = \sum_{i\in I} a_i v_i.

Exercise. If \sum_i a_i v_i = \sum_i b_i v_i then a_i = b_i for all i\in I.

Hint: 0 = \sum_i (a_i - b_i) v_i.

Solution \sum_i a_i v_i - \sum_i b_i v_i = \sum_i (a_i - b_i)v_i = 0 so a_i - b_i = 0 for i\in I.

This shows how to identify any vector space V with \boldsymbol{F}^I given a basis (v_i)_{i\in I}.

The dimension of a vector space is the number of elements of a basis. A vector space has many collections of vectors that are a basis but every basis has the same number of vectors. This is not trivial to prove. Vector spaces occupy a sweet spot in the menagerie of mathematical objects. They are determined up to isomorphism by their dimension.

Linear Transformation

A linear transformation is a function T\colon V\to W, where V and W are vector spaces over the same field, that satisfies {T(av + w) = aTv + Tw}, a\in\boldsymbol{F}, v,w\in V. Note that the addition {av + w} occurs in V and {aTv + Tw} occurs in W. The space of all such linear transformations is denoted \mathcal{L}(V,W).

Exercise. Show if T is a linear transformation then T0 = 0.

Hint: Consider T(0 + 0) and v + v = v implies v = 0.

Solution T(0 + 0) = T(0) + T(0) and T(0 + 0) = T(0) so T(0) = 0.

Exercise. Show T(av) = aTv, a\in\boldsymbol{F}, v\in V.

Solution Using T(av + w) = aTv + Tw, T(av) = T(av + 0) = aTv + T0 = aTv + 0 = aTv.

Exercise. Show T(av + bw) = aTv + bTw, a,b\in\boldsymbol{F}, v,w\in V.

Solution T(av + bw) = aTv + T(bw) = aTv + bTw.

A linear transformation T\colon V\to W is one-to-one, or injective, if Tu = Tv implies u = v, u,v\in V.

Exercise. Show if Tv = 0 implies v = 0 then T is one-to-one.

Hint. Use linearity.

Solution If Tu = Tv then T(u - v) = 0 so u - v = 0 and u = v.

A linear transformation T\colon V\to W is onto, or surjective, if for every w\in W there exists v\in V with Tv = w.

A linear transformation that is one-to-one and onto, or bijective, is an isomorphism. If T\colon V\to W is an isomorphism then V and W are isomorphic, V\cong W.

Exercise. Show V\cong W is an equivalence relation.

Hint: This means V\cong V, V\cong W implies W\cong V, and V\cong W, W\cong U implies V\cong U.

Solution The identity transformation I\colon V\to V defined by I(v) = v shows V\cong V. If T\colon V\to W is an isomorphism then its inverse T^{-1}\colon W\to V shows W\cong V. If T\colon V\to W and S\colon W\to U are isomorphisms then so is ST and V\cong U.

The space of linear transformations \mathcal{L}(V,W) is also a vector space under pointwise addition {(T + S)v = Tv + Sv} and pointwise scalar multiplication {(aT)v = a(Tv)}, a\in\boldsymbol{F}, v\in V, S,T\in\mathcal{L}(V,W). The space \mathcal{L}(\boldsymbol{F}^n,\boldsymbol{F}^m) can be identified with \boldsymbol{F}^{n\times m}. If {T\colon\boldsymbol{F}^n\to\boldsymbol{F}^m} then {Te_i = \sum_j t_{ij} e_j} for some t_{ij}\in\boldsymbol{F}.

Exercise. If T\colon\boldsymbol{F}^k\to\boldsymbol{F}^n and S\colon\boldsymbol{F}^n\to\boldsymbol{F}^m then the composition R = ST\colon\boldsymbol{F}^k\to\boldsymbol{F}^m is linear. Show r_{ij} = \sum_l t_{il} s_{lj}.

Solution R(e_i) = ST(e_i) = S(\sum_l t_{il} e_l) = \sum_l t_{il} Se_l = \sum_l t_{il} \sum_j s_{lj} e_j = \sum_j (\sum_l t_{il} s_{lj}) e_j = \sum_j r_{ij} e_j.

Matrix multiplication is composition of linear transformations.
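The claim can be spot-checked numerically. This sketch (my own, not from the text) uses plain lists of lists and the usual column-vector convention: applying the product matrix agrees with applying the two maps in succession.

```python
def matmul(t, s):
    """Matrix product: entry ij is the sum over the middle index."""
    return [[sum(t[i][k] * s[k][j] for k in range(len(s)))
             for j in range(len(s[0]))] for i in range(len(t))]

def apply(m, v):
    """Apply the matrix m to the column vector v."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

S = [[1, 2], [3, 4]]
T = [[0, 1], [1, 0]]
v = [5, 7]
# The composition S∘T applied to v equals the product matrix applied to v.
assert apply(matmul(S, T), v) == apply(S, apply(T, v))
```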

Heisenberg

Werner Heisenberg rediscovered matrix multiplication by considering orbital levels of the hydrogen atom. If e_{ij} represents a jump from level i to level j, he posited e_{ij}e_{kl} = e_{il} if j = k and e_{ij}e_{kl} = 0 if j\not= k. [@cite Hei] An electron can jump from i to j, then j to l, but not from i to j and then k to l when k\not= j.

Exercise. If S = \sum_{i,j}s_{ij}e_{ij} and T = \sum_{k,l} t_{kl}e_{kl} show TS = \sum_{i,j} (\sum_k t_{ik} s_{kj}) e_{ij}.

Solution \begin{aligned} TS &= (\sum_{i,k} t_{ik}e_{ik})(\sum_{l,j} s_{lj}e_{lj}) \\ &= \sum_{i,j}\sum_{k,l} t_{ik} s_{lj} e_{ik}e_{lj} \\ &= \sum_{i,j}\sum_{k,l} t_{ik} s_{lj} \delta_{kl} e_{ij} \\ &= \sum_{i,j} (\sum_k t_{ik} s_{kj}) e_{ij} \end{aligned}
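Heisenberg's rule can be sketched directly in code (my own illustration): represent a formal sum \sum s_{ij}e_{ij} as a dict mapping (i, j) to s_{ij}, and multiply units by e_{ij}e_{kl} = e_{il} when j = k and 0 otherwise.

```python
def unit_mul(t, s):
    """Product of sums of matrix units under e_ij e_kl = e_il if j == k else 0."""
    out = {}
    for (i, j), tij in t.items():
        for (k, l), skl in s.items():
            if j == k:  # a jump i->j can be followed by a jump j->l
                out[(i, l)] = out.get((i, l), 0) + tij * skl
    return out

T = {(1, 1): 2, (1, 2): 3}   # T = 2 e_11 + 3 e_12
S = {(2, 1): 5}              # S = 5 e_21
# e_11 e_21 = 0 and e_12 e_21 = e_11, so TS = 15 e_11.
assert unit_mul(T, S) == {(1, 1): 15}
```

The surviving terms are exactly those where the inner indices match, which is matrix multiplication.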

The kernel of a linear transformation T\colon V\to W is \ker T = \{v\in V\mid Tv = 0\}\subseteq V.

Exercise. The kernel of a linear transformation is a subspace.

Hint: T(av + w) = aTv + Tw = 0 for a\in\boldsymbol{F}, v,w\in \ker T.

Exercise. T is one-to-one if and only if \ker T = \{0\}.

Hint: Consider T(v - v').

The range of a linear transformation T\colon V\to W is \operatorname{ran}T = \{Tv\mid v\in V\}\subseteq W.

Exercise. The range of a linear transformation is a subspace.

Hint: aTv + Tw = T(av + w)\in\operatorname{ran}T.

If \operatorname{ran}T = W then T is onto, or surjective.

Every linear transformation T\colon V\to W factors through the quotient space V/\ker T (defined in the Quotient section below). Define π\colon V\to V/\ker T by πv = v + \ker T.

Exercise. Show π is a surjective linear transformation.

Define ν\colon V/\ker T\to\operatorname{ran}T by ν(v + \ker T) = Tv.

Exercise. Show ν is a well-defined injective linear transformation.

Hint: Start by showing it is well-defined; if v + \ker T = v' + \ker T then Tv = Tv', v,v'\in V.

Solution Since v + \ker T = v' + \ker T if and only if v - v'\in\ker T we have T(v - v') = 0 so Tv = Tv' and ν is well-defined. If Tv = Tv' then v - v'\in\ker T so v + \ker T = v' + \ker T showing ν is injective.

Quotient

If U is a subspace of V and v\in V define the coset of U containing v by v + U = \{v + u\mid u\in U\}. Subspaces factor vector spaces into smaller vector spaces.

Exercise. Show v\in v+U for v\in V.

Hint: U is a vector space so 0\in U.

Exercise. Show u + U = U if and only if u\in U.

Solution

If u + U = U then u + u' = u'' for some u',u''\in U so u = u'' - u'\in U.

If u\in U then u + u'\in U for all u'\in U so u + U \subseteq U and if u'\in U then u' = u + (u' - u)\in u + U so U\subseteq u + U.

Exercise. Show v + U = w + U if and only if v - w\in U.

Solution

If v + U = w + U then v + u = w + u' for some u,u'\in U so v - w = u' - u\in U.

If v - w\in U then v - w = u for some u\in U so v + U = w + u + U = w + U.

Exercise. Define v\cong_U w if and only if v + U = w + U. Show \cong_U is an equivalence relation.

Hint: Show v\cong v (reflexive), v\cong w implies w\cong v (symmetric), and v\cong w and w\cong x implies v\cong x (transitive).

The quotient space V/U = \{v + U\mid v\in V\} is a vector space with scalar multiplication a(v + U) = av + U and addition (v + U) + (w + U) = (v + w) + U.

Exercise. Show v + U = w + U implies av + U = aw + U, a\in\boldsymbol{F}, v,w\in V.

Hint: av - aw\in U.

Solution If v + U = w + U then v - w\in U, so a(v - w)\in U and av + U = aw + U.

Exercise. Show v + U = v' + U and w + U = w' + U implies v + w + U = v' + w' + U, v,v',w,w'\in V.

Hint: v - v', w - w'\in U.

The last two exercises show scalar multiplication and addition are well-defined in V/U.

Exercise. Show (u + U) + (v + U) = (v + U) + (u + U) and a((u + U) + (v + U)) = a(u + U) + a(v + U).

This shows addition is commutative and scalar multiplication distributes over addition, hence the quotient space V/U is a vector space where the cosets are the vectors. A subspace U and the quotient space V/U determine V up to isomorphism, but that requires more machinery.
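A concrete sketch of well-definedness, with my own choice of example: V = \boldsymbol{F}^2 and U = \{(a, 0)\}, so a coset v + U is determined by the second coordinate. Adding different representatives of the same cosets lands in the same coset.

```python
# Illustrative example: U = span{(1, 0)} inside V = F^2.
def same_coset(v, w):
    """v + U == w + U iff v - w is in U = {(a, 0)}."""
    return v[1] - w[1] == 0

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

v, vp = (1.0, 3.0), (7.0, 3.0)   # v - vp = (-6, 0) is in U: same coset
w, wp = (0.0, 4.0), (2.0, 4.0)   # same coset
assert same_coset(v, vp) and same_coset(w, wp)

# Coset addition is well-defined: different representatives give the same coset.
assert same_coset(add(v, w), add(vp, wp))
```

Here V/U is isomorphic to \boldsymbol{F} via v + U \mapsto v_2, matching the remark that U and V/U determine V up to isomorphism.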

Invariant Subspace

An invariant subspace of T\colon V\to V is a subspace U\subseteq V with T(U) \subseteq U.

Exercise. If T\colon V\to V show \ker T and \operatorname{ran}T are invariant subspaces.

If U is a 1-dimensional subspace spanned by e\in V then e is an eigenvector and Te = λe for some λ\in\boldsymbol{F}, the eigenvalue corresponding to e.

If the eigenvectors of T form a basis they and their corresponding eigenvalues determine T. Let (e_i), (λ_i) be the eigenvectors and corresponding eigenvalues. Every vector v\in V can be written v = \sum_i a_i e_i so Tv = \sum_i a_i Te_i = \sum_i λ_i a_i e_i. In this case we say T is diagonalizable. Using the eigenvectors as a basis, t_{ij} = λ_i δ_{ij}.

If e is an eigenvector with eigenvalue λ then Te = λe so (T - λI)e = 0 where I\colon V\to V is the identity transformation Iv = v, v\in V.

Exercise. If the eigenvectors of T form a basis then (T-λ_1I)\cdots(T-λ_nI) = 0.

The dimension of \mathcal{L}(\boldsymbol{F}^n,\boldsymbol{F}^n) is n^2 so I, T, T^2, \ldots, T^{n^2} must be linearly dependent, hence there is a polynomial of order at most n^2 with p(T) = 0. If T is diagonalizable the above exercise shows there is a polynomial of order n satisfying this. The Cayley-Hamilton theorem states this is true for any T with p(λ) = \det(T - λI).
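A numeric sketch of Cayley-Hamilton for a 2\times 2 matrix, where \det(T - λI) expands to λ^2 - \operatorname{tr}(T)λ + \det(T): plugging T into this polynomial gives the zero matrix. The example matrix is my own.

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[1, 2], [3, 4]]
tr = T[0][0] + T[1][1]                       # trace = 5
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]  # determinant = -2
T2 = matmul(T, T)

# p(T) = T^2 - tr(T) T + det(T) I, computed entry by entry:
pT = [[T2[i][j] - tr * T[i][j] + (det if i == j else 0) for j in range(2)]
      for i in range(2)]
assert pT == [[0, 0], [0, 0]]
```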

Norm

A norm on a vector space is a function \|\cdot\|\colon V\to[0,\infty) with \|av\| = |a|\|v\|, \|v + w\| \le \|v\| + \|w\|, a\in\boldsymbol{F}, v,w\in V, and \|v\| = 0 implies v = 0.

If V=\boldsymbol{C}^n then \|v\|_\infty = \max_i |v_i| and \|v\|_p = (\sum_i |v_i|^p)^{1/p} are the sup norm and p-norm, p\ge 1.

Exercise. Show \lim_{p\to\infty}\|v\|_p = \|v\|_\infty.
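The limit can be seen numerically. In this sketch (my own example vector) the p-norms decrease toward the sup norm as p grows.

```python
def p_norm(v, p):
    """||v||_p = (sum_i |v_i|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def sup_norm(v):
    """||v||_inf = max_i |v_i|."""
    return max(abs(x) for x in v)

v = [3.0, -4.0, 1.0]
assert sup_norm(v) == 4.0
# ||v||_p approaches ||v||_inf from above as p grows.
assert p_norm(v, 1) >= p_norm(v, 2) >= sup_norm(v)
assert abs(p_norm(v, 200) - sup_norm(v)) < 1e-6
```

Intuitively the largest coordinate dominates the sum \sum_i |v_i|^p for large p, which is the idea behind the exercise.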

If T\colon V\to W is a linear transformation between normed vector spaces then the operator norm is \|T\| = \sup_{\|v\|\le 1}\|Tv\|.

Exercise. Show \|aT\| = |a|\|T\|, \|T + S\|\le \|T\| + \|S\| and \|T\| = 0 implies T = 0, a\in\boldsymbol{F}, T,S\in\mathcal{L}(V,W).

Inner Product

An inner product on a vector space is a bilinear function V\times V\to\boldsymbol{F} sending the pair (v,w) to v\cdot w, v,w\in V. The inner product satisfies v\cdot v \ge 0 and v\cdot v = 0 implies v = 0.

Exercise. Show \|v\| = \sqrt{v\cdot v} is a norm.

Theorem (Cauchy-Schwarz) |u\cdot v| \le \|u\| \|v\| and equality holds if and only if u and v are collinear.

Proof. Since 0\le\|au - v\|^2 = a^2\|u\|^2 - 2au\cdot v + \|v\|^2 for all a, the discriminant satisfies 4|u\cdot v|^2 - 4\|u\|^2 \|v\|^2\le 0. The discriminant is 0 if and only if au - v = 0 for some a.
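A quick numeric check of the inequality in \boldsymbol{R}^3, with equality for collinear vectors. The example vectors are my own.

```python
import math

def dot(u, v):
    """Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    """||v|| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

u, v = [1.0, 2.0, 2.0], [3.0, 0.0, 4.0]
# Cauchy-Schwarz: |u . v| <= ||u|| ||v||.
assert abs(dot(u, v)) <= norm(u) * norm(v)

w = [2.0, 4.0, 4.0]  # w = 2u is collinear with u
# Equality holds for collinear vectors.
assert abs(abs(dot(u, w)) - norm(u) * norm(w)) < 1e-12
```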

Spectrum

If V is a finite-dimensional normed space over \boldsymbol{C} then every operator T\colon V\to V has an eigenvector.

The spectrum, σ(T), of a linear operator T\colon V\to V is the set of all λ\in\boldsymbol{C} such that T - λI is not invertible. The spectral radius is ρ(T) = \max\{|λ|\mid λ\in σ(T)\}.

Exercise. Show if V is finite dimensional then the spectrum is the set of eigenvalues.

Hint: \ker(T - λI)\neq 0 if and only if Te = λe for some e\in V.

Define E_λ = \ker(T - λI).

Exercise. Show E_λ\cap E_μ = 0 if λ\ne μ.

Exercise. Show \sum_{λ\in σ(T)} E_λ = V if T is diagonalizable.

Define the multiplicity of λ\in\boldsymbol{C} by m(λ) = \dim\ker(T - λI).

Exercise. Show there exists e\in V with (T - λI)^ke\neq 0 for 0\le k < m(λ) and (T - λI)^{m(λ)}e = 0.

Dual

The dual of a vector space is V^* = \mathcal{L}(V,\boldsymbol{F}), the space of linear functionals on V. Define the dual pairing by \langle v,v^*\rangle = v^*(v) for v\in V and v^*\in V^*.

If V = \boldsymbol{F}^n we can identify V^* with \boldsymbol{F}^n using the standard basis. Define the dual basis e_j^*\colon\boldsymbol{F}^n\to\boldsymbol{F} by e_j^*(e_k) = δ_{jk}.

Exercise. Show every v\in\boldsymbol{F}^n can be written v = \sum_j e_j^*(v) e_j.

Solution If v = \sum_j v_j e_j then e_i^*(v) = v_i.

Exercise. Show every v^*\in(\boldsymbol{F}^n)^* can be written v^* = \sum_j v^*(e_j) e_j^*.

Solution If v^* = \sum_j v_j e_j^* then v^*(e_i) = v_i.

If V has any basis e_j then every v\in V can be written v = \sum_j v_j e_j for some v_j\in\boldsymbol{F}. Define the dual basis e_j^*\in V^* by e_j^*(v) = v_j. The map V\to V^* given by v = \sum_j v_j e_j\mapsto \sum_j v_j e_j^* = v^* is one-to-one and onto (an isomorphism).
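A small sketch of the dual basis on \boldsymbol{F}^n (my own helper names, 0-indexed for convenience): e_j^* just reads off the j-th coordinate, and a functional is recovered from its values on the basis vectors.

```python
def dual_basis(j):
    """e_j^* : F^n -> F, the functional reading off the j-th coordinate."""
    return lambda v: v[j]

def functional_from_values(values):
    """Build v^* = sum_j v^*(e_j) e_j^* as a Python function."""
    return lambda v: sum(c * v[j] for j, c in enumerate(values))

v = (2.0, -1.0, 5.0)
assert dual_basis(1)(v) == -1.0

vstar = functional_from_values((1.0, 0.0, 3.0))  # v^* = e_1^* + 3 e_3^*
assert vstar(v) == 2.0 + 15.0
```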

Functions are vectors. They can be added and scalar multiplication satisfies the distributive law. Integration is a linear functional on a space of functions. Given a set \Omega let B(\Omega) = \{f\colon\Omega\to\boldsymbol{F}: \|f\| = \sup_{\omega\in\Omega}|f(\omega)| < \infty\}.

If L\colon B(\Omega)\to\boldsymbol{F} is a linear functional define λ(E) = L(1_E) for E\subseteq\Omega.

Exercise. If E,F\subseteq\Omega are disjoint then 1_{E\cup F} = 1_E + 1_F.

This shows λ(E\cup F) = λ(E) + λ(F) if E\cap F=\emptyset. Since 1_\emptyset = 0 we have λ(\emptyset) = 0 so λ is a (finitely additive) measure.

Given a finitely additive measure λ on subsets of \Omega define a linear functional L\colon B(\Omega)\to\boldsymbol{F} by L(\sum_i a_i 1_{E_i}) = \sum_i a_i λ(E_i).

Exercise. Show this is well-defined.

Hint: \sum_i a_i 1_{A_i} = \sum_j b_j 1_{B_j} where (B_j) are pairwise disjoint. Note 1_A + 1_B = 1_{A\setminus B} + 1_{A\cap B} + 1_{B\setminus A} is a sum of pairwise disjoint sets.

Exercise. Given f\in B(Ω) and ε > 0 show there exist a finite number of a_i\in\boldsymbol{F} and A_i\subseteq Ω with \|f - \sum_i a_i 1_{A_i}\| < ε.

This shows the linear functional can be extended to B(\Omega) and B(\Omega)^* is isomorphic to the space of finitely additive measures on \Omega, ba(\Omega).
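The correspondence between functionals and finitely additive measures can be sketched on a finite Ω. Here L is summation over Ω (my choice of functional, so λ is the counting measure); the names are illustrative.

```python
OMEGA = {'a', 'b', 'c'}

def L(f):
    """A linear functional on functions Ω -> F: here, sum over Ω."""
    return sum(f(w) for w in OMEGA)

def indicator(E):
    """1_E(ω) = 1 if ω in E else 0."""
    return lambda w: 1 if w in E else 0

def lam(E):
    """λ(E) = L(1_E), the set function induced by L."""
    return L(indicator(E))

E, F = {'a'}, {'b', 'c'}
# Finite additivity on disjoint sets: λ(E ∪ F) = λ(E) + λ(F).
assert lam(E | F) == lam(E) + lam(F)
assert lam(set()) == 0
```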

If \Omega has a sufficiently rich topology (e.g., compact and Hausdorff) then C(\Omega)^* can be identified with the space of countably additive Borel measures on \Omega, M(\Omega). If \mu\in M(\Omega) define L^p(\mu) = \{f\colon\Omega\to\boldsymbol{F}: \int_\Omega |f|^p\,d\mu < \infty\}. It is true that L^p(\mu)^*\cong L^q(\mu) where 1/p + 1/q = 1 and p > 1. It is not true that L^\infty(\mu)^* \cong L^1(\mu) in general. Proving these claims is non-trivial.

Adjoint

The adjoint of a linear transformation T\colon V\to W is T^*\colon W^*\to V^* defined by \langle v, T^*w^*\rangle = \langle Tv, w^*\rangle, v\in V, w^*\in W^*.

Fréchet Derivative

If F\colon X\to Y is a function between normed vector spaces the Fréchet derivative DF\colon X\to\mathcal{L}(X,Y) is defined by F(x + h) - F(x) = DF(x)h + o(\|h\|).

Recall R(h) = o(\|h\|) means \lim_{\|h\|\to 0} \|R(h)\|/\|h\| = 0.

Exercise. If F(x) = x^2 where x is a square matrix show DF(x) = L_x + R_x where L_xy = xy and R_xy = yx.

Hint: (x + h)^2 = x^2 + xh + hx + h^2 and h^2 = o(\|h\|).

Solution Since (x + h)^2 = xx + xh + hx + hh and h^2 = o(\|h\|) we have D(x^2)h = L_x h + R_x h.

A suggestive way to write this is D(x^2) = x(Dx) + (Dx)x.
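The derivative can be checked numerically for 2\times 2 matrices (my own sketch): the remainder F(x + h) - F(x) - (xh + hx) equals h^2, which shrinks like \|h\|^2, i.e. o(\|h\|).

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def msub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

x = [[1.0, 2.0], [0.0, 1.0]]
h = [[1e-4, 0.0], [0.0, 1e-4]]

lhs = msub(matmul(madd(x, h), madd(x, h)), matmul(x, x))  # F(x+h) - F(x)
dfh = madd(matmul(x, h), matmul(h, x))                    # DF(x)h = xh + hx
rem = msub(lhs, dfh)                                      # the h^2 remainder
# With ||h|| ~ 1e-4 the remainder entries are of size ~1e-8.
assert all(abs(rem[i][j]) <= 1e-7 for i in range(2) for j in range(2))
```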

Exercise. If F(x) = x^n where x is a square matrix and n\in\boldsymbol{N} show DF(x) = \sum_{i=0}^{n-1} L_x^{n-i-1}R_x^{i}.

Hint: What are the terms in (x + h)^n containing exactly one h?

Exercise. If F\colon\boldsymbol{F}^n\to\boldsymbol{F} is F(x) = \|x\|^p show DF(x) = p\|x\|^{p-2}x^*.

Hint. Show D\|x\|^2 = 2x^* and note \|x\|^p = (\|x\|^2)^{p/2}. By the chain rule D\|x\|^p = (p/2)\|x\|^{2(p/2 - 1)}2x^* = p\|x\|^{p - 2}x^*.