2.1 Vector Spaces
Perhaps the most important definition in all of mathematical physics is that of a vector space. The study of vector spaces is called linear algebra, and it is the foundation for almost every area in physics. We begin with finite-dimensional vector spaces, and then move on to infinite-dimensional vector spaces in the next chapter.
Vectors
A vector is an element of a vector space. We denote complex vectors as kets: $|v\rangle$.
Some preliminary definitions:
- A group is a set $G$ with a binary operation that satisfies closure, associativity, identity, and invertibility.
- An abelian group is a group whose operation also satisfies commutativity.
- A field is a set $\mathbb{F}$ with two binary operations, addition and multiplication, such that the set is an abelian group under addition, the nonzero elements form an abelian group under multiplication, and multiplication distributes over addition.
A vector space (or linear space) $V$ over a field $\mathbb{F}$ is a set of vectors equipped with two operations, vector addition and scalar multiplication, satisfying the axioms below. We often denote a vector space simply as $V$ when the underlying field is clear from context.

- Vector Addition: For any two vectors $|u\rangle, |v\rangle \in V$, their sum is denoted by $|u\rangle + |v\rangle$. It satisfies the following axioms, making $(V, +)$ an abelian group:
  - Commutativity: $|u\rangle + |v\rangle = |v\rangle + |u\rangle$ for all $|u\rangle, |v\rangle \in V$.
  - Associativity: $(|u\rangle + |v\rangle) + |w\rangle = |u\rangle + (|v\rangle + |w\rangle)$ for all $|u\rangle, |v\rangle, |w\rangle \in V$.
  - Existence of Zero Vector: There exists a unique vector $|0\rangle \in V$ such that $|v\rangle + |0\rangle = |v\rangle$ for all $|v\rangle \in V$.
  - Existence of Additive Inverse: For every vector $|v\rangle \in V$, there exists a unique vector $-|v\rangle \in V$ such that $|v\rangle + (-|v\rangle) = |0\rangle$.
- Scalar Multiplication: For any scalar $a \in \mathbb{F}$ and vector $|v\rangle \in V$, their product is denoted by $a|v\rangle$. It satisfies the following axioms:
  - Compatibility with Field Multiplication: $a(b|v\rangle) = (ab)|v\rangle$ for all $a, b \in \mathbb{F}$ and $|v\rangle \in V$.
  - Identity Element of Scalar Multiplication: $1|v\rangle = |v\rangle$ for all $|v\rangle \in V$, where $1$ is the multiplicative identity in $\mathbb{F}$.
- Distributivity:
  - Distributivity of Scalar Multiplication with respect to Vector Addition: $a(|u\rangle + |v\rangle) = a|u\rangle + a|v\rangle$ for all $a \in \mathbb{F}$ and $|u\rangle, |v\rangle \in V$.
  - Distributivity of Scalar Multiplication with respect to Field Addition: $(a + b)|v\rangle = a|v\rangle + b|v\rangle$ for all $a, b \in \mathbb{F}$ and $|v\rangle \in V$.
There are many, many examples of vector spaces. Here are a few:

- $\mathbb{R}$ is a vector space over itself.
- $\mathbb{C}$ is a vector space over itself.
- $\mathbb{C}$ is a vector space over $\mathbb{R}$.
- $\mathbb{R}^n$ is a vector space over $\mathbb{R}$.
- $\mathbb{R}$ is not a vector space over $\mathbb{C}$, since scalar multiplication is not closed (multiplying a real number by $i$ takes us out of $\mathbb{R}$).
- The set of oriented line segments in $\mathbb{R}^3$ with the same initial point is a vector space over $\mathbb{R}$.
- The set of all polynomials with real coefficients is a vector space over $\mathbb{R}$.
- The set of all continuous functions from $\mathbb{R}$ to $\mathbb{R}$ is a vector space over $\mathbb{R}$.
- The set of all $m \times n$ matrices with real entries is a vector space over $\mathbb{R}$.
- The set of all $m \times n$ matrices with complex entries is a vector space over $\mathbb{C}$.
- The set of all solutions to a homogeneous linear differential equation is a vector space over $\mathbb{R}$ or $\mathbb{C}$ (this is why you can do things like Fourier analysis and Laplace transforms).
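To make the axioms concrete, here is a minimal numerical spot-check of a few of them for $\mathbb{R}^n$ (a sketch using NumPy; the specific vectors and scalars are arbitrary):

```python
import numpy as np

# Arbitrary test vectors in R^3 and scalars in R
u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0, 0.0, -1.0])
a, b = 2.0, -0.7

# Commutativity of vector addition: u + v = v + u
assert np.allclose(u + v, v + u)

# Compatibility with field multiplication: a(bv) = (ab)v
assert np.allclose(a * (b * v), (a * b) * v)

# Distributivity over vector addition: a(u + v) = au + av
assert np.allclose(a * (u + v), a * u + a * v)

# Distributivity over field addition: (a + b)v = av + bv
assert np.allclose((a + b) * v, a * v + b * v)

print("All sampled axioms hold (up to floating-point tolerance).")
```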
A set of vectors $\{|v_1\rangle, |v_2\rangle, \dots, |v_n\rangle\}$ is linearly independent if the only solution to
$$a_1 |v_1\rangle + a_2 |v_2\rangle + \cdots + a_n |v_n\rangle = |0\rangle$$
is the trivial solution $a_1 = a_2 = \cdots = a_n = 0$. Otherwise, the set is linearly dependent.
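Numerically, linear independence can be tested by stacking the vectors as columns of a matrix and checking its rank (a sketch; the example vectors are arbitrary):

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given list of 1-D arrays is linearly independent.

    The vectors are stacked as columns of a matrix A; they are independent
    exactly when the rank of A equals the number of vectors.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(linearly_independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))   # True
print(linearly_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))   # False: second = 2 * first
```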
A subset $W \subseteq V$ is a subspace of $V$ if $W$ is itself a vector space under the same operations of addition and scalar multiplication.
Some examples of subspaces are:
- The set $\{|0\rangle\}$ is a subspace of any vector space $V$.
- The set $V$ itself is a subspace of $V$.
- The set of all vectors in $\mathbb{R}^3$ that lie on a given plane through the origin is a subspace of $\mathbb{R}^3$.
- The set of all polynomials of degree at most $n$ is a subspace of the set of all polynomials.
- The set of all continuous functions that vanish at a given point is a subspace of the set of all continuous functions.
Let $S = \{|v_1\rangle, |v_2\rangle, \dots\}$ be a non-empty set of vectors in $V$. The span of $S$, denoted $\operatorname{span}(S)$, is the set of all finite linear combinations of vectors in $S$. The span of any such set is a subspace of $V$.

Proof. We need to show that $\operatorname{span}(S)$ is closed under vector addition and scalar multiplication and contains the zero vector.

- Closure under Vector Addition: Let $|u\rangle, |w\rangle \in \operatorname{span}(S)$. Then, by definition, there exist scalars $a_i$ and $b_i$ such that
$$|u\rangle = \sum_i a_i |v_i\rangle \quad \text{and} \quad |w\rangle = \sum_i b_i |v_i\rangle.$$
Adding these two equations, we get
$$|u\rangle + |w\rangle = \sum_i (a_i + b_i) |v_i\rangle.$$
Since $a_i + b_i$ are also scalars in $\mathbb{F}$, it follows that $|u\rangle + |w\rangle \in \operatorname{span}(S)$.
- Closure under Scalar Multiplication: Let $|u\rangle \in \operatorname{span}(S)$ and $c \in \mathbb{F}$. Then, by definition, there exist scalars $a_i$ such that $|u\rangle = \sum_i a_i |v_i\rangle$. Now, consider the vector $c|u\rangle$. We have
$$c|u\rangle = \sum_i (c a_i) |v_i\rangle.$$
Since $c a_i$ are also scalars in $\mathbb{F}$, it follows that $c|u\rangle \in \operatorname{span}(S)$.
- Existence of Zero Vector: Since $S$ is non-empty, let $|v\rangle \in S$. Then the zero vector can be expressed as $|0\rangle = 0|v\rangle$, which is a finite linear combination of elements of $S$. Therefore, $|0\rangle \in \operatorname{span}(S)$.

Since all three conditions hold, $\operatorname{span}(S)$ is a subspace of $V$. $\blacksquare$
Another equivalent definition of the span is the smallest subspace of $V$ containing $S$.
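To check numerically whether a vector lies in the span of a given set, one can solve a least-squares problem and inspect the residual (a sketch; the example vectors are arbitrary):

```python
import numpy as np

def in_span(vectors, u, tol=1e-10):
    """Return True if u is (numerically) a linear combination of the given vectors."""
    A = np.column_stack(vectors)                    # columns span the subspace
    coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
    return np.linalg.norm(A @ coeffs - u) < tol     # zero residual => u in span

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
print(in_span([v1, v2], v1 + 2 * v2))                 # True
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))   # False
```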
Subspaces have some nice properties:
- The intersection of any collection of subspaces is also a subspace.
- The sum of two subspaces $W_1$ and $W_2$, defined as $W_1 + W_2 = \{|w_1\rangle + |w_2\rangle : |w_1\rangle \in W_1,\ |w_2\rangle \in W_2\}$, is also a subspace.
Basis and Dimension
A basis of a vector space $V$ is a set of linearly independent vectors that spans $V$.
For any finite-dimensional vector space $V$, every basis has the same number of vectors.
Proof. Let $\{|e_1\rangle, \dots, |e_n\rangle\}$ and $\{|f_1\rangle, \dots, |f_m\rangle\}$ be two bases of $V$.
This proof requires some concepts from linear algebra. We will delve deeper into these concepts in later sections, but for now, we will use them without proof. First are the concepts of the rank and nullity of a matrix.
If we separate a matrix into its column vectors, the rank of the matrix is the dimension of the span of its column vectors.
The nullity of the matrix is the dimension of its null space, which is the set of all vectors $|x\rangle$ such that $A|x\rangle = |0\rangle$.
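As a quick illustration, the rank and nullity can be computed from a matrix's singular values, and they always satisfy the rank-nullity theorem, $\operatorname{rank}(A) + \operatorname{nullity}(A) = (\text{number of columns})$. Here is a sketch with an arbitrary example matrix:

```python
import numpy as np

# A 3x4 matrix whose third column is the sum of the first two,
# so the columns are linearly dependent.
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 2.0, 1.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank   # by the rank-nullity theorem

# Independent check: the nullity equals the number of columns minus
# the number of nonzero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity_from_svd = A.shape[1] - np.sum(s > 1e-10)

print(rank, nullity)   # 3, 1
assert rank + nullity == A.shape[1] and nullity == nullity_from_svd
```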
Anyway, back to the proof.
Since $\{|e_i\rangle\}$ is a basis, each $|f_j\rangle$ can be written as
$$|f_j\rangle = \sum_{i=1}^{n} A_{ij} |e_i\rangle$$
for some scalars $A_{ij}$.

We can represent this system in matrix form as an $n \times m$ matrix $A$, whose $j$-th column holds the components of $|f_j\rangle$ in the basis $\{|e_i\rangle\}$.

Since $\{|f_j\rangle\}$ is linearly independent, $A$ has nullity zero: if $A|x\rangle = |0\rangle$, then $\sum_j x_j |f_j\rangle = \sum_i \left( \sum_j A_{ij} x_j \right) |e_i\rangle = |0\rangle$, which forces every $x_j = 0$.

By the rank-nullity theorem, we have
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = m,$$
where $m$ is the number of columns of $A$. Hence $\operatorname{rank}(A) = m$.

Since the rank of an $n \times m$ matrix can be at most $n$, we get $m \leq n$. Swapping the roles of the two bases gives $n \leq m$, and therefore $m = n$. $\blacksquare$
If a vector space $V$ has a basis with $n$ vectors, we say that $V$ is $n$-dimensional and write $\dim V = n$.
Suppose we have a vector $|v\rangle \in V$ and a basis $\{|e_1\rangle, \dots, |e_n\rangle\}$. Then we can write
$$|v\rangle = \sum_{i=1}^{n} v_i |e_i\rangle$$
for some scalars $v_i$, called the components of $|v\rangle$ in this basis.

We can also write this expansion in matrix form as
$$|v\rangle = \begin{pmatrix} |e_1\rangle & |e_2\rangle & \cdots & |e_n\rangle \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix},$$
where the first matrix is a $1 \times n$ row of basis vectors and the second is the $n \times 1$ column of components.
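Finding the components of a vector in a given (not necessarily orthogonal) basis amounts to solving a linear system, as in this sketch with an arbitrary basis of $\mathbb{R}^2$:

```python
import numpy as np

# A non-orthogonal basis of R^2, stacked as columns of E
e1 = np.array([1.0, 1.0])
e2 = np.array([1.0, -1.0])
E = np.column_stack([e1, e2])

v = np.array([3.0, 1.0])

# Solve E @ c = v for the components c = (v_1, v_2)
c = np.linalg.solve(E, v)
print(c)                                    # [2. 1.], i.e. v = 2*e1 + 1*e2
assert np.allclose(c[0] * e1 + c[1] * e2, v)
```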
:::example Example 2.1.10 (Examples of Bases)
Some examples of bases are:
- In $\mathbb{R}$, any nonzero number $x$ forms a basis, since any real number can be written as a scalar multiple of $x$. $\mathbb{R}$ is thus 1-dimensional.
- In $\mathbb{C}$ over $\mathbb{R}$ (i.e., considering only real scalars), the set $\{1, i\}$ forms a basis, since any complex number can be written as a linear combination of $1$ and $i$. $\mathbb{C}$ is thus 2-dimensional over $\mathbb{R}$.
- In $\mathbb{C}$ over itself, the set $\{1\}$ forms a basis, since any complex number can be written as a scalar (complex) multiple of $1$. $\mathbb{C}$ is thus 1-dimensional over itself.
- In 3-dimensional Euclidean space, we have the basis $\{\hat{\mathbf{x}}, \hat{\mathbf{y}}, \hat{\mathbf{z}}\}$, commonly used in Newtonian physics.
- In $\mathbb{F}^n$, the standard basis is given by the vectors $|e_i\rangle$, where $|e_i\rangle$ has a $1$ in the $i$-th position and $0$ elsewhere.
- In the matrix space $\mathbb{F}^{m \times n}$, the standard basis is given by the matrices $E_{ij}$, where $E_{ij}$ has a $1$ in the $(i, j)$ position and $0$ elsewhere.
- In the function space $C^{\infty}[a, b]$ of all infinitely differentiable functions on the interval $[a, b]$, the set of monomials $\{1, x, x^2, \dots\}$ forms a basis. By Taylor's theorem, any smooth function can be expressed as a power series in terms of these monomials. Since this basis is infinite, $C^{\infty}[a, b]$ is infinite-dimensional.
- Similarly, functions can be expressed in terms of trigonometric functions (Fourier series) or orthogonal polynomials (Legendre, Hermite, etc.), leading to different bases for function spaces.
:::
Factor Spaces
Consider a vector space $V$ and a subspace $W \subseteq V$. We say two vectors $|u\rangle, |v\rangle \in V$ are equivalent, written $|u\rangle \sim |v\rangle$, if their difference lies in $W$, i.e., $|u\rangle - |v\rangle \in W$.

This equivalence relation partitions $V$ into equivalence classes: the class of $|v\rangle$ is $[|v\rangle] = \{|v\rangle + |w\rangle : |w\rangle \in W\}$. The set of all equivalence classes is called the factor space (or quotient space), denoted $V/W$.

Can we turn $V/W$ into a vector space? We can try to define addition and scalar multiplication of classes through their representatives:
$$a[|u\rangle] + b[|v\rangle] := [a|u\rangle + b|v\rangle]$$
for any scalars $a, b \in \mathbb{F}$.

Does this definition depend on the choice of representatives $|u\rangle$ and $|v\rangle$? Suppose we pick other representatives $|u'\rangle \sim |u\rangle$ and $|v'\rangle \sim |v\rangle$.

But remember that $|u'\rangle - |u\rangle \in W$ and $|v'\rangle - |v\rangle \in W$, so
$$a|u'\rangle + b|v'\rangle = a|u\rangle + b|v\rangle + \underbrace{a(|u'\rangle - |u\rangle) + b(|v'\rangle - |v\rangle)}_{\in\, W}.$$

The right-hand side differs from $a|u\rangle + b|v\rangle$ by a vector within $W$, so $[a|u'\rangle + b|v'\rangle] = [a|u\rangle + b|v\rangle]$, and the operations are well-defined.

Also, since these operations are inherited from those of $V$, they automatically satisfy the vector space axioms, with $[|0\rangle] = W$ playing the role of the zero vector. Thus $V/W$ is a vector space.
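For example, take $V = \mathbb{R}^2$ and let $W = \operatorname{span}\{(1, 0)\}$ be the $x$-axis. Two vectors are equivalent precisely when they have the same $y$-component, so each equivalence class
$$[(x, y)] = \{(x + t,\ y) : t \in \mathbb{R}\}$$
is a horizontal line, labeled entirely by its height $y$. Adding classes adds their heights, so $V/W$ behaves exactly like $\mathbb{R}$: a 2-dimensional space divided by a 1-dimensional subspace leaves one dimension.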
Factor Space Basis and Dimension
Let's now find a basis for the factor space $V/W$.

The idea is to start with a basis for $W$, say $\{|e_1\rangle, \dots, |e_m\rangle\}$, and extend it to a basis for all of $V$ by appending vectors $\{|f_1\rangle, \dots, |f_k\rangle\}$. Then any vector $|v\rangle \in V$ can be written as
$$|v\rangle = \sum_{i=1}^{m} a_i |e_i\rangle + \sum_{j=1}^{k} b_j |f_j\rangle$$
for some scalars $a_i$ and $b_j$.

But since the first sum $\sum_i a_i |e_i\rangle$ is a member of $W$, it does not change the equivalence class of $|v\rangle$.

But wait, recall that two vectors differing by an element of $W$ are equivalent, so $[|v\rangle] = \left[\sum_j b_j |f_j\rangle\right]$.

Finally, recall that we add and scale equivalence classes by adding and scaling their representatives. Thus, we have
$$[|v\rangle] = \sum_{j=1}^{k} b_j [|f_j\rangle].$$

This shows that the set $\{[|f_1\rangle], \dots, [|f_k\rangle]\}$ spans $V/W$.

Now suppose $\sum_j b_j [|f_j\rangle] = [|0\rangle]$. As the sum $\sum_j b_j |f_j\rangle$ is then a member of $W$, it can be written as $\sum_i a_i |e_i\rangle$ for some scalars $a_i$, i.e., $\sum_j b_j |f_j\rangle - \sum_i a_i |e_i\rangle = |0\rangle$.

But wait, the set $\{|e_1\rangle, \dots, |e_m\rangle, |f_1\rangle, \dots, |f_k\rangle\}$ is linearly independent, so all of the coefficients, and in particular all the $b_j$, must vanish. Hence the classes $[|f_j\rangle]$ are linearly independent.

Therefore, we have shown that $\{[|f_1\rangle], \dots, [|f_k\rangle]\}$ is a basis for $V/W$.

Also, as a corollary, we have that
$$\dim(V/W) = \dim V - \dim W.$$

This is because if $\dim V = m + k$ and $\dim W = m$, the basis constructed above has exactly $k = \dim V - \dim W$ vectors.
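The extension step above can be carried out numerically: starting from a basis of $W \subseteq \mathbb{R}^n$, greedily append standard basis vectors that increase the rank (a sketch; the subspace is an arbitrary example):

```python
import numpy as np

def extend_to_basis(w_basis, n):
    """Extend a list of independent vectors in R^n to a full basis of R^n.

    Greedily appends standard basis vectors that strictly increase the rank.
    Returns (original vectors, appended vectors).
    """
    current = list(w_basis)
    appended = []
    for i in range(n):
        candidate = np.eye(n)[i]
        trial = np.column_stack(current + [candidate])
        if np.linalg.matrix_rank(trial) > len(current):
            current.append(candidate)
            appended.append(candidate)
    return w_basis, appended

# W = span{(1,1,0), (0,1,1)}, a plane in R^3
w_basis = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
_, f = extend_to_basis(w_basis, 3)
print(f)   # one appended vector, so dim(V/W) = 3 - 2 = 1
```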
Direct Sums vs Tensor Products
Recall that in set theory, we can define the union and intersection of two sets. In vector spaces, we can define similar operations: the direct sum and the tensor product.
First, the direct sum.
Often we like to decompose a vector space into smaller, more manageable pieces.
For example, in classical mechanics, we often decompose the motion of a particle into its $x$, $y$, and $z$ components.

Now consider two subspaces $W_1, W_2 \subseteq V$ whose sum is all of $V$, i.e., $V = W_1 + W_2$.

If $W_1 \cap W_2 = \{|0\rangle\}$, we call the sum a direct sum and write $V = W_1 \oplus W_2$.

A more concrete way to think about the direct sum is this: suppose we write the vectors of $W_1$ as column vectors with $\dim W_1$ entries, and likewise for $W_2$. Then a vector in $W_1 \oplus W_2$ can be written as the taller column vector obtained by stacking a vector of $W_1$ on top of a vector of $W_2$.
In other words, the direct sum "stacks" the two subspaces together, but keeps them separate.
Evidently, we have $\dim(W_1 \oplus W_2) = \dim W_1 + \dim W_2$; we will prove this shortly.
If $V = W_1 + W_2$, then the decomposition $|v\rangle = |w_1\rangle + |w_2\rangle$ of every vector $|v\rangle \in V$ is unique if and only if $W_1 \cap W_2 = \{|0\rangle\}$.
Proof. ($\Leftarrow$)

Suppose $W_1 \cap W_2 = \{|0\rangle\}$, and suppose some vector has two decompositions:
$$|v\rangle = |w_1\rangle + |w_2\rangle = |w_1'\rangle + |w_2'\rangle$$
for some $|w_1\rangle, |w_1'\rangle \in W_1$ and $|w_2\rangle, |w_2'\rangle \in W_2$. Rearranging gives $|w_1\rangle - |w_1'\rangle = |w_2'\rangle - |w_2\rangle$.

The left-hand side is in $W_1$ and the right-hand side is in $W_2$, so both sides lie in $W_1 \cap W_2 = \{|0\rangle\}$. Hence $|w_1\rangle = |w_1'\rangle$ and $|w_2\rangle = |w_2'\rangle$, and the decomposition is unique.

($\Rightarrow$)

Suppose the decomposition is unique, and suppose for contradiction that there exists a nonzero vector $|u\rangle \in W_1 \cap W_2$.

Since we can write $|0\rangle = |u\rangle + (-|u\rangle)$ with $|u\rangle \in W_1$ and $-|u\rangle \in W_2$, as well as $|0\rangle = |0\rangle + |0\rangle$, the zero vector has two distinct decompositions, a contradiction.

Since we have shown both directions, the proposition is proven. $\blacksquare$
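Uniqueness can also be seen computationally: if we collect bases of $W_1$ and $W_2$ as the columns of one matrix, triviality of the intersection makes that matrix invertible, so the decomposition is found by solving a single linear system. Here is a sketch with an arbitrary choice of subspaces of $\mathbb{R}^3$:

```python
import numpy as np

# W1 = the xy-plane, W2 = the line spanned by (1, 1, 1); W1 ∩ W2 = {0}
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # basis as columns
W2 = np.array([[1.0], [1.0], [1.0]])

B = np.hstack([W1, W2])   # invertible since R^3 = W1 ⊕ W2
v = np.array([2.0, 3.0, 5.0])

coeffs = np.linalg.solve(B, v)   # unique solution by invertibility
w1 = W1 @ coeffs[:2]             # component in W1
w2 = W2 @ coeffs[2:]             # component in W2
print(w1, w2)                    # w1 + w2 == v
assert np.allclose(w1 + w2, v)
```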
If $V = W_1 \oplus W_2$, we say that $W_1$ and $W_2$ are complements of each other.
Proposition 2.1.15 Let $V = W_1 \oplus W_2$, and let $|w_1\rangle \in W_1$ and $|w_2\rangle \in W_2$ both be nonzero. Then the set $\{|w_1\rangle, |w_2\rangle\}$ is linearly independent.

Intuitively, this proposition makes sense: since the subspaces only intersect at the zero vector, no vector in one subspace can be written as a linear combination of vectors from the other subspaces.
For instance, in $\mathbb{R}^3$, a nonzero vector in the $xy$-plane and a nonzero vector along the $z$-axis are always linearly independent.
Proof. Suppose we have a set of nonzero vectors $|w_1\rangle \in W_1$ and $|w_2\rangle \in W_2$, and suppose
$$a_1 |w_1\rangle + a_2 |w_2\rangle = |0\rangle$$
for some scalars $a_1, a_2$. We can always write this as $a_1 |w_1\rangle = -a_2 |w_2\rangle$.

The left-hand side is in $W_1$ and the right-hand side is in $W_2$, so both lie in $W_1 \cap W_2 = \{|0\rangle\}$. Since $|w_1\rangle \neq |0\rangle$, we must have $a_1 = 0$.

We can make the same argument for $a_2$, so only the trivial solution exists and the set is linearly independent. The same reasoning extends to direct sums of more than two subspaces. $\blacksquare$
If $W$ is a subspace of a finite-dimensional vector space $V$, then there exists a complement $W'$ such that $V = W \oplus W'$.

Proof. Let $\{|e_1\rangle, \dots, |e_m\rangle\}$ be a basis for $W$.

If we extend it to a basis $\{|e_1\rangle, \dots, |e_m\rangle, |f_1\rangle, \dots, |f_k\rangle\}$ for all of $V$, then every vector of $V$ is a sum of a vector in $W$ and a vector in $W' := \operatorname{span}\{|f_1\rangle, \dots, |f_k\rangle\}$, so $V = W + W'$.

Let $|v\rangle \in W \cap W'$. Then
$$|v\rangle = \sum_{i=1}^{m} a_i |e_i\rangle = \sum_{j=1}^{k} b_j |f_j\rangle$$
for some scalars $a_i, b_j$, so $\sum_i a_i |e_i\rangle - \sum_j b_j |f_j\rangle = |0\rangle$. By linear independence of the full basis, all coefficients vanish, and $|v\rangle = |0\rangle$.

Thus, we have $W \cap W' = \{|0\rangle\}$, and therefore $V = W \oplus W'$. $\blacksquare$
Consider the vector space $\mathbb{R}^3$: the $xy$-plane and the $z$-axis are complements of each other, since they intersect only at the origin and together span all of $\mathbb{R}^3$.
If $V = W_1 \oplus W_2$, then $\dim V = \dim W_1 + \dim W_2$.
Proof. Let $\{|e_1\rangle, \dots, |e_m\rangle\}$ be a basis for $W_1$ and $\{|f_1\rangle, \dots, |f_k\rangle\}$ be a basis for $W_2$.

By Proposition 2.1.15, the set $\{|e_1\rangle, \dots, |e_m\rangle, |f_1\rangle, \dots, |f_k\rangle\}$ is linearly independent. It also spans $V$, since every $|v\rangle \in V$ decomposes as $|v\rangle = |w_1\rangle + |w_2\rangle$, with each piece a linear combination of the corresponding basis.

Therefore, this set is a basis for $V = W_1 \oplus W_2$.

The number of vectors in this basis is $m + k$, so $\dim V = \dim W_1 + \dim W_2$. $\blacksquare$
One may also find that the direct sum is very similar to the Cartesian product of sets.
We can combine two vector spaces $V$ and $W$ over the same field $\mathbb{F}$ into their Cartesian product $V \times W = \{(|v\rangle, |w\rangle) : |v\rangle \in V,\ |w\rangle \in W\}$, with the operations defined componentwise:
$$a(|v_1\rangle, |w_1\rangle) + b(|v_2\rangle, |w_2\rangle) = (a|v_1\rangle + b|v_2\rangle,\ a|w_1\rangle + b|w_2\rangle)$$
for any $a, b \in \mathbb{F}$.

If we associate a vector in $V$ with the pair $(|v\rangle, |0\rangle)$ and a vector in $W$ with the pair $(|0\rangle, |w\rangle)$, then $V' = \{(|v\rangle, |0\rangle)\}$ and $W' = \{(|0\rangle, |w\rangle)\}$ are subspaces of $V \times W$, and $V \times W = V' \oplus W'$.
Proof. First, let's show that their intersection is only the zero vector.

Suppose there exists a vector in both $V'$ and $W'$. Then it is of the form $(|v\rangle, |0\rangle)$ and also of the form $(|0\rangle, |w\rangle)$, which forces $|v\rangle = |0\rangle$ and $|w\rangle = |0\rangle$.

Thus, the only vector in both $V'$ and $W'$ is the zero vector $(|0\rangle, |0\rangle)$.

Therefore, since every $(|v\rangle, |w\rangle) = (|v\rangle, |0\rangle) + (|0\rangle, |w\rangle)$, we have $V \times W = V' \oplus W'$. $\blacksquare$

The mapping
$$|v\rangle \mapsto (|v\rangle, |0\rangle)$$
is a bijection between $V$ and $V'$, as there is one and only one pair $(|v\rangle, |0\rangle)$ for each $|v\rangle$. In this sense, $V'$ is a copy of $V$ sitting inside $V \times W$, and likewise $W'$ is a copy of $W$.
Let $\{|e_1\rangle, \dots, |e_n\rangle\}$ be a basis for $V$ and $\{|f_1\rangle, \dots, |f_m\rangle\}$ be a basis for $W$.

Then, the set $\{(|e_i\rangle, |0\rangle)\}_{i=1}^{n} \cup \{(|0\rangle, |f_j\rangle)\}_{j=1}^{m}$ is a basis for $V \times W$.
Proof. Before proving it, let's understand what the theorem is saying.

Suppose $V = \mathbb{R}^2$ with basis $\{|e_1\rangle, |e_2\rangle\}$ and $W = \mathbb{R}^2$ with basis $\{|f_1\rangle, |f_2\rangle\}$. Writing pairs as stacked column vectors, the claimed basis of $V \times W$ is
$$\begin{pmatrix} |e_1\rangle \\ |0\rangle \end{pmatrix}, \quad \begin{pmatrix} |e_2\rangle \\ |0\rangle \end{pmatrix}, \quad \begin{pmatrix} |0\rangle \\ |f_1\rangle \end{pmatrix}, \quad \begin{pmatrix} |0\rangle \\ |f_2\rangle \end{pmatrix}.$$

See how the vectors of the two original bases are stacked into taller vectors, with zeros filling the slot belonging to the other space.
Anyway, let's prove the theorem.
First, we need to show that the set is linearly independent.

To do this, we assume a linear combination of the vectors equal to the zero vector:
$$\sum_{i=1}^{n} a_i (|e_i\rangle, |0\rangle) + \sum_{j=1}^{m} b_j (|0\rangle, |f_j\rangle) = (|0\rangle, |0\rangle)$$
for some scalars $a_i, b_j$.

So, performing the sums componentwise,
$$\left( \sum_{i=1}^{n} a_i |e_i\rangle,\ \sum_{j=1}^{m} b_j |f_j\rangle \right) = (|0\rangle, |0\rangle).$$

Since $\{|e_i\rangle\}$ and $\{|f_j\rangle\}$ are each linearly independent, all the $a_i$ and $b_j$ must vanish.

Therefore, the set is linearly independent.

Next, we need to show that the set spans $V \times W$. Take any pair $(|v\rangle, |w\rangle) \in V \times W$. Since $\{|e_i\rangle\}$ is a basis, we can write $|v\rangle = \sum_i v_i |e_i\rangle$ for some scalars $v_i$, and similarly $|w\rangle = \sum_j w_j |f_j\rangle$ for some scalars $w_j$. Then
$$(|v\rangle, |w\rangle) = \sum_{i=1}^{n} v_i (|e_i\rangle, |0\rangle) + \sum_{j=1}^{m} w_j (|0\rangle, |f_j\rangle).$$

Therefore, we have shown that the set is a basis for $V \times W$. $\blacksquare$
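Here is a short numerical illustration of this stacking (a sketch; the bases are chosen arbitrarily):

```python
import numpy as np

# Bases of V = R^2 and W = R^2
V_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
W_basis = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]

# Build the product basis: (e_i, 0) and (0, f_j), as stacked 4-vectors
zero2 = np.zeros(2)
product_basis = ([np.concatenate([e, zero2]) for e in V_basis]
                 + [np.concatenate([zero2, f]) for f in W_basis])

B = np.column_stack(product_basis)
print(np.linalg.matrix_rank(B))   # 4 = dim V + dim W, so it is a basis of R^4
```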
So, given a Cartesian product $V \times W$, we can instead define different laws of addition and scalar multiplication. Writing the pair $(|v\rangle, |w\rangle)$ as $|v\rangle \otimes |w\rangle$, we demand bilinearity:
$$(a|v\rangle) \otimes |w\rangle = |v\rangle \otimes (a|w\rangle) = a\,(|v\rangle \otimes |w\rangle),$$
$$(|v_1\rangle + |v_2\rangle) \otimes |w\rangle = |v_1\rangle \otimes |w\rangle + |v_2\rangle \otimes |w\rangle,$$
$$|v\rangle \otimes (|w_1\rangle + |w_2\rangle) = |v\rangle \otimes |w_1\rangle + |v\rangle \otimes |w_2\rangle.$$

The vector space defined over the Cartesian product using these laws is called the tensor product of $V$ and $W$, denoted $V \otimes W$.

If we expand their bases and apply the above laws, we can see that
$$|v\rangle \otimes |w\rangle = \left( \sum_i v_i |e_i\rangle \right) \otimes \left( \sum_j w_j |f_j\rangle \right) = \sum_{i,j} v_i w_j \,(|e_i\rangle \otimes |f_j\rangle).$$

Thus, the set $\{|e_i\rangle \otimes |f_j\rangle\}$ forms a basis for $V \otimes W$, so $\dim(V \otimes W) = \dim V \cdot \dim W$. The components of $|v\rangle \otimes |w\rangle$ carry two indices, $(v \otimes w)_{ij} = v_i w_j$,

hence the name "tensor product".
Intuitively, the tensor product can be thought of as a way to combine two vector spaces into a larger one, where the basis vectors of the new space are formed by taking all possible combinations of the basis vectors from the original spaces. In matrix form, it looks like replacing each entry of a matrix with another matrix:
$$A \otimes B = \begin{pmatrix} A_{11} B & A_{12} B & \cdots \\ A_{21} B & A_{22} B & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}.$$
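NumPy implements exactly this construction as the Kronecker product, which gives a concrete coordinate sketch of the tensor product (the example components are arbitrary):

```python
import numpy as np

v = np.array([1.0, 2.0])   # components v_i in a basis of V
w = np.array([3.0, 5.0])   # components w_j in a basis of W

# Components of v ⊗ w: all products v_i * w_j, in lexicographic order
print(np.kron(v, w))       # [ 3.  5.  6. 10.]

# The same doubly-indexed array (v ⊗ w)_{ij} = v_i w_j as an outer product
print(np.outer(v, w))      # [[ 3.  5.]
                           #  [ 6. 10.]]

# dim(V ⊗ W) = dim V * dim W
assert np.kron(v, w).size == v.size * w.size
```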
Technically, when we define $V \otimes W$ through bases like this, we should check that the resulting space does not depend on the choice of bases; a basis-free construction exists, but the coordinate description above is sufficient for our purposes.
Summary and Next Steps
In this section, we have introduced the basics of vector spaces, including their definitions, properties, and examples.
Here are the key points to remember:
- A vector space is a set of vectors that can be added together and multiplied by scalars, satisfying certain axioms.
 - Subspaces are subsets of vector spaces that are also vector spaces.
 - The span of a set of vectors is the set of all linear combinations of those vectors.
 - A basis of a vector space is a set of linearly independent vectors that span the entire space.
 - The dimension of a vector space is the number of vectors in its basis.
 - The direct sum of subspaces combines them into a larger space while keeping them separate.
 - The tensor product of vector spaces combines them into a larger space where basis vectors are formed by all possible combinations of the original basis vectors.
 
Vector spaces alone are not enough to describe physical systems. Much like how we needed metrics to define distances in geometry, we need additional structures to define concepts like length and angle in vector spaces. These additional structures lead us to the concept of inner product spaces, which we will explore in the next section.