The mathematical definition: an element of a vector space,
The physical definition: a quantity that has both magnitude and direction,
The computer science definition: a one-dimensional array.
It's important to note that these definitions are not mutually exclusive. In fact, they are all related to each other, and can be thought of as different perspectives on the same concept.
A vector is often represented as an arrow in space, with a starting point and an ending point. The length of the arrow represents the magnitude of the vector, and the direction of the arrow represents the direction of the vector.
Typically the start of the arrow is called the tail, and the end of the arrow is called the tip.
In two dimensions, the real coordinate space is denoted by $\mathbb{R}^2$.
This is the set of all ordered pairs of real numbers, and can be visualized as the plane.
Essentially, $\mathbb{R}^2$ means "all pairs of real numbers", which contains all vectors in two-dimensional space.
You could say that $\vec{v} \in \mathbb{R}^2$, which means that the vector $\vec{v}$ is an element of the real coordinate space and has two dimensions.
In three dimensions, the real coordinate space is instead denoted by $\mathbb{R}^3$.
Remember that vectors can be represented as arrows in space. A vector $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ or $(1, 2)$ means "move 1 unit in the $x$ direction and 2 units in the $y$ direction".
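To make the array view concrete, here is a minimal sketch (in Python with NumPy, a choice of ours rather than anything the text prescribes):

```python
import numpy as np

# A vector in R^2: an ordered pair of real numbers.
v = np.array([1.0, 2.0])   # "move 1 unit in x, 2 units in y"

# A vector in R^3: an ordered triple.
w = np.array([1.0, 2.0, 3.0])

print(v.shape)  # (2,) -> two components, so v is in R^2
print(w.shape)  # (3,) -> three components, so w is in R^3
```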
To add two vectors graphically, you can place the tail of the second vector at the tip of the first vector, and then draw a new vector from the tail of the first vector to the tip of the second vector.
If the first vector starts at the origin, the new vector is an arrow from the origin to the tip of the second vector. This new vector is the sum of the two vectors.
One important property of vector addition is that it is commutative. This means that the order in which you add the vectors does not matter.
This is important because some other vector operations are not commutative.
To show this, let's first add $\vec{a}$ and $\vec{b}$, and then add $\vec{b}$ and $\vec{a}$.
A visual representation of this is shown below:
Notice how adding $\vec{a}$ and $\vec{b}$ gives the same result as adding $\vec{b}$ and $\vec{a}$.
That is, they reach the same point in space.
This can be shown algebraically as well:

$$\vec{a} + \vec{b} = \begin{bmatrix} a_1 + b_1 \\ a_2 + b_2 \end{bmatrix} = \begin{bmatrix} b_1 + a_1 \\ b_2 + a_2 \end{bmatrix} = \vec{b} + \vec{a}$$

Since normal addition is commutative, the two results are equal.
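As a quick numeric sanity check (the vectors here are made up for illustration), componentwise addition in NumPy gives the same point regardless of order:

```python
import numpy as np

a = np.array([2.0, 1.0])
b = np.array([1.0, 3.0])

# Componentwise addition: (2 + 1, 1 + 3) = (3, 4).
print(a + b)                          # [3. 4.]

# Commutativity: a + b and b + a reach the same point.
print(np.array_equal(a + b, b + a))   # True
```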
We've already seen the arrow representation, where vectors are represented as arrows in space, as well as the algebraic representation, where vectors are represented as ordered lists of numbers.
Another way to represent vectors is through the use of unit or basis vectors.
The unit vector for a given vector $\vec{v}$ is typically denoted as $\hat{v}$.
To find the unit vector for a given vector, you simply divide the vector by its magnitude: $\hat{v} = \frac{\vec{v}}{\|\vec{v}\|}$.
Since dividing by a positive scalar doesn't change the direction of the vector, the unit vector will have the same direction as the original vector, while its magnitude will be 1.
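A small sketch of this computation; the helper name `normalize` is ours, not a standard API:

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """Return the unit vector v / ||v||."""
    magnitude = np.linalg.norm(v)
    if magnitude == 0:
        raise ValueError("the zero vector has no direction")
    return v / magnitude

v = np.array([3.0, 4.0])      # magnitude 5
v_hat = normalize(v)
print(v_hat)                  # [0.6 0.8] -> same direction as v
print(np.linalg.norm(v_hat))  # 1.0 -> magnitude is 1
```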
In some cases, it's useful to define special unit vectors that are aligned with the axes of the coordinate system.
For a two-dimensional space, these are typically denoted as $\hat{x}$ and $\hat{y}$.
Imagine a vector $\vec{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$.
As we know, this means "move 3 units in the $x$ direction and 4 units in the $y$ direction".
You could imagine separating this vector into two components: one in the $x$ direction and one in the $y$ direction:

$$\vec{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix} = 3\hat{x} + 4\hat{y}$$

Now we have a representation of the vector as a sum of two scaled unit vectors: one in the $x$ direction and one in the $y$ direction.
We can represent any vector in this coordinate system by scaling these two unit vectors.
It's a bit like how you can represent any color by mixing red, green, and blue.
Since it's so common, these unit vectors are often denoted as $\hat{i}$ and $\hat{j}$.
This also makes it very easy to perform vector operations, as you can simply add the corresponding components of the vectors.
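For example, a short sketch of the decomposition above, assuming NumPy:

```python
import numpy as np

i_hat = np.array([1.0, 0.0])  # unit vector along the x axis
j_hat = np.array([0.0, 1.0])  # unit vector along the y axis

# Any 2D vector is a scaled sum of the two unit vectors.
v = 3 * i_hat + 4 * j_hat
print(v)  # [3. 4.]
```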
A linear combination is a combination of vectors in which each vector is multiplied by a scalar and then added together.
Let us consider a simple example to understand linear combinations better.
Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ be vectors in $\mathbb{R}^m$.
Let $c_1, c_2, \ldots, c_n$ be real scalars. A linear combination is simply the sum of the vectors, each multiplied by its scalar:

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_n \vec{v}_n$$

The reason it's called "linear" is that each vector is only scaled by its scalar, and the results are added together.
We aren't multiplying vectors by vectors, or taking any exponents or anything like that.
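Here's a minimal sketch of computing a linear combination; the vectors and scalars are made up for illustration:

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])
c1, c2 = 2.0, -3.0

# Scale each vector by its scalar, then add the results.
combination = c1 * v1 + c2 * v2
print(combination)  # [-1. -3.]
```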
If a set of vectors is linearly dependent, then it means that one of the vectors can be written as a linear combination of the others.
This means that one of the vectors is redundant, and you can remove it without losing any information.
For example, consider some GPS software that says "Go 3 miles north, then 4 miles south."
This is like saying "Go $\begin{bmatrix} 0 \\ 3 \end{bmatrix}$, then $\begin{bmatrix} 0 \\ -4 \end{bmatrix}$" (measuring miles north as the positive $y$ direction).
The two vectors are linearly dependent because you can linearly combine them to get $\vec{0}$ (for example, $4 \begin{bmatrix} 0 \\ 3 \end{bmatrix} + 3 \begin{bmatrix} 0 \\ -4 \end{bmatrix} = \vec{0}$).
This means we can describe the north vector in terms of the south vector, or vice versa.
Therefore, we can remove one of them without losing any information.
The GPS can instead simply say "Go 1 mile south."
Consider another instruction: "Go 2 miles north, then 2 miles east."
This is like saying "Go $\begin{bmatrix} 0 \\ 2 \end{bmatrix}$, then $\begin{bmatrix} 2 \\ 0 \end{bmatrix}$".
These two vectors are linearly independent because you can't combine them to get $\vec{0}$ (other than by scaling both by zero).
We cannot describe the north vector in terms of the east vector, or vice versa.
This means that both vectors are necessary to describe the movement.
The GPS can also say "Go about 2.8 miles northeast." This would be a linear combination of the two vectors: $\begin{bmatrix} 0 \\ 2 \end{bmatrix} + \begin{bmatrix} 2 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$, whose magnitude is $\sqrt{2^2 + 2^2} = 2\sqrt{2} \approx 2.8$.
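Both GPS examples can be checked numerically; in the sketch below (assuming NumPy, with $x$ miles east and $y$ miles north), `matrix_rank` counts the independent directions, so rank 1 means dependent and rank 2 means independent:

```python
import numpy as np

# (x, y) = (miles east, miles north)
north3 = np.array([0.0, 3.0])   # "3 miles north"
south4 = np.array([0.0, -4.0])  # "4 miles south"
# Rank 1: both vectors lie on one line, so they are linearly dependent.
print(np.linalg.matrix_rank(np.vstack([north3, south4])))  # 1

north2 = np.array([0.0, 2.0])   # "2 miles north"
east2 = np.array([2.0, 0.0])    # "2 miles east"
# Rank 2: the vectors point in genuinely different directions.
print(np.linalg.matrix_rank(np.vstack([north2, east2])))   # 2

# The "northeast" shortcut is a linear combination of the two.
shortcut = north2 + east2
print(np.linalg.norm(shortcut))  # 2.828... -> about 2.8 miles
```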
If you have a set of $n$ vectors in $\mathbb{R}^n$ and you want to determine if they span $\mathbb{R}^n$, you can check if they are linearly independent.
If they are linearly independent, then they span $\mathbb{R}^n$, and if not, they don't.
Proof
Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ be linearly independent vectors in the vector space $\mathbb{R}^n$.
Assume that $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) \neq \mathbb{R}^n$. This means that there exists a vector $\vec{w}$ in $\mathbb{R}^n$ that is not in $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n)$.
Consider the set $S = \{\vec{v}_1, \ldots, \vec{v}_n, \vec{w}\}$.
If $S$ is linearly dependent, then $\vec{w}$ can be written as a linear combination of the other vectors in $S$, which is not possible because $\vec{w}$ is not in $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n)$.
Therefore, $S$ must be linearly independent.
However, $S$ has $n + 1$ vectors, which is more than the dimension of $\mathbb{R}^n$.
This contradicts the fact that $\mathbb{R}^n$ is an $n$-dimensional vector space, in which a linearly independent set can contain at most $n$ vectors.
Therefore, the assumption that $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) \neq \mathbb{R}^n$ leads to a contradiction, so it is false, and hence $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) = \mathbb{R}^n$, proven by contradiction.
The basis vectors of a space are always linearly independent.
Consider the basis vectors for 2-dimensional Cartesian space: $\hat{i} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\hat{j} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$.
$\hat{i}$ and $\hat{j}$ are linearly independent because you can't write one in terms of the other.
Therefore, both are needed to describe all of $\mathbb{R}^2$.
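For $n$ vectors in $\mathbb{R}^n$, one common numeric check (our choice, not something the text prescribes) is the determinant of the matrix whose columns are the vectors: a nonzero determinant means they are linearly independent, and therefore span the space:

```python
import numpy as np

i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

# Put the candidate basis vectors in the columns of a matrix.
M = np.column_stack([i_hat, j_hat])
print(np.linalg.det(M))  # 1.0 -> nonzero, so they are independent and span R^2
```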
Example Problem: Linear Dependence of Two Vectors
Let $\vec{v}_1$ and $\vec{v}_2$ be vectors in $\mathbb{R}^2$.
Determine if the vectors are linearly dependent or independent.
To determine if the vectors are linearly dependent, we need to see if there's a way to combine them to get the zero vector:

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 = \vec{0}$$

Writing this out component by component gives a system of linear equations that can be solved to find $c_1$ and $c_2$.
Recall that for linear dependence, the coefficients must not all be zero.
Since the only solution is $c_1 = 0$ and $c_2 = 0$, the vectors are linearly independent.
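Since the working above is symbolic, here is a numeric version of the same check with two hypothetical vectors (values chosen only for illustration):

```python
import numpy as np

# Hypothetical vectors, for illustration only.
v1 = np.array([2.0, 1.0])
v2 = np.array([1.0, 3.0])

# Independent iff the matrix with v1 and v2 as columns has full rank,
# i.e. the only solution to c1*v1 + c2*v2 = 0 is c1 = c2 = 0.
M = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(M))  # 2 -> linearly independent
```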
Example Problem: Linear Dependence of Three Vectors
Let $\vec{v}_1$, $\vec{v}_2$, and $\vec{v}_3$ be vectors in $\mathbb{R}^2$.
Determine if the vectors are linearly dependent or independent.
To determine if the vectors are linearly dependent, we need to see if there's a way to combine them to get the zero vector:

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 + c_3 \vec{v}_3 = \vec{0}$$

Written out component by component, this is two equations in the three unknowns $c_1$, $c_2$, and $c_3$. Since we want to find a set of scalars that aren't all zero, we can pick a value for one of them, like $c_3 = 1$, and solve the first equation for $c_1$.
Now we can substitute $c_1$ and $c_3$ back into the second equation to find $c_2$.
We found three scalars that aren't all zero yet satisfy the equation.
Therefore, the vectors are linearly dependent.
Alternatively, let's consider a logical approach.
For 3 vectors in $\mathbb{R}^2$, in the best case scenario, two of the vectors are linearly independent.
This means that they span the entire plane, and the third vector, which is in that plane, can be written as a linear combination of the other two, and so, is redundant.
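A numeric sketch of finding such a nontrivial combination, with three hypothetical vectors: the system has two equations and three unknowns, so the null space is nontrivial, and the SVD exposes a vector in it:

```python
import numpy as np

# Three hypothetical vectors in R^2 (chosen only for illustration).
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 1.0])
v3 = np.array([4.0, 3.0])

# Columns of A are the vectors; we want a nonzero c with A @ c = 0.
A = np.column_stack([v1, v2, v3])

# With 2 equations and 3 unknowns, at least one singular value is zero;
# the corresponding right-singular vector lies in the null space.
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]
print(c)      # nonzero scalars c1, c2, c3
print(A @ c)  # ~[0. 0.] -> a nontrivial combination giving the zero vector
```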
Recall that $\mathbb{R}^n$ is the set of all $n$-dimensional vectors.
You could visualize it as an $n$-dimensional space, but we'll use the most abstract definition for now.
$\mathbb{R}^n$ can be defined as:

$$\mathbb{R}^n = \left\{ \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \;\middle|\; x_1, x_2, \ldots, x_n \in \mathbb{R} \right\}$$
Let $V$ be a subset of $\mathbb{R}^n$. In order for $V$ to be a linear subspace of $\mathbb{R}^n$, it must satisfy the following conditions (a sampled numeric check is sketched after this list):
$V$ must contain the zero vector $\vec{0}$.
$V$ must be closed under addition, so if $\vec{a}$ and $\vec{b}$ are in $V$, then $\vec{a} + \vec{b}$ must also be in $V$.
$V$ must be closed under scalar multiplication, so if $\vec{a}$ is in $V$ and $c$ is a scalar, then $c\vec{a}$ must also be in $V$.
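The closure conditions quantify over infinitely many vectors, so they can't be verified by brute force, but a sampled sanity check is easy to sketch. The set below, the line $y = 2x$, is our own example of a subspace:

```python
import numpy as np

def in_V(v: np.ndarray) -> bool:
    """Membership test for V = {(t, 2t) : t in R}, a line through the origin."""
    return abs(v[1] - 2.0 * v[0]) < 1e-9

rng = np.random.default_rng(0)

assert in_V(np.zeros(2))  # condition 1: V contains the zero vector

for _ in range(1000):
    t1, t2, c = rng.uniform(-10, 10, size=3)
    a = np.array([t1, 2 * t1])   # a point of V
    b = np.array([t2, 2 * t2])   # another point of V
    assert in_V(a + b)           # condition 2: closed under addition
    assert in_V(c * a)           # condition 3: closed under scalar multiplication

print("all sampled checks passed")
```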
Example Problem: Determining if a Set of the Zero Vector is a Linear Subspace
Let $V = \{\vec{0}\}$. Is $V$ a linear subspace of $\mathbb{R}^n$?
Let's check the conditions:
$V$ contains the zero vector, so this condition is satisfied.
$V$ is closed under addition. The only possible addition is $\vec{0} + \vec{0} = \vec{0}$, which is in $V$.
$V$ is closed under scalar multiplication. Anything multiplied by the zero vector is the zero vector: $c\vec{0} = \vec{0}$.
Therefore, $V$ is a linear subspace of $\mathbb{R}^n$.
Example Problem: Determining if a Set of Two Quadrants is a Linear Subspace
Let $V = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{R}^2 \;\middle|\; x_1 \geq 0 \right\}$. Is $V$ a linear subspace of $\mathbb{R}^2$?
This set contains all vectors in the first and fourth quadrants.
It visually looks like the right half of the plane.
Once again, let's check the conditions:
$V$ contains the zero vector, so this condition is satisfied.
$V$ is closed under addition:

$$\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} a_1 + b_1 \\ a_2 + b_2 \end{bmatrix}$$

Since $a_1$ and $b_1$ are both non-negative, $a_1 + b_1$ is also non-negative, so the sum is in $V$.
$V$ is not closed under scalar multiplication. You could multiply by a negative scalar and get a vector in the opposite quadrant:

$$-1 \cdot \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -x_1 \\ -x_2 \end{bmatrix}$$

Since $x_1 > 0$ (for any vector in $V$ off the $y$-axis), $-x_1$ is negative, so the scalar multiple is not in $V$. Therefore, $V$ is not a linear subspace of $\mathbb{R}^2$.
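A tiny numeric illustration of that failure (the vector is made up):

```python
import numpy as np

def in_right_half(v: np.ndarray) -> bool:
    return v[0] >= 0  # membership test for the right half of the plane

v = np.array([2.0, 5.0])      # x-component is non-negative, so v is in V
print(in_right_half(v))       # True
print(in_right_half(-1 * v))  # False -> scaling by -1 leaves the set
```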
A linear subspace can also be defined as the span of a set of linearly independent vectors.
Let $S = \{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ be a set of linearly independent vectors in $\mathbb{R}^n$.
The linear subspace defined by $S$ is:

$$V = \operatorname{span}(S) = \{ c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_k \vec{v}_k \mid c_1, c_2, \ldots, c_k \in \mathbb{R} \}$$

Recall that for a set of vectors to be linearly independent, the only solution to the equation $c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_k \vec{v}_k = \vec{0}$ is $c_1 = c_2 = \cdots = c_k = 0$.
If all of this is true, then $S$ is a basis for $V$.
More formally, the basis of a linear subspace is a "minimum" set of vectors that can span the space.
Example Problem: Determining the Span of a Set of Vectors
Let $S = \{\vec{v}_1, \vec{v}_2\}$ be a set of vectors in $\mathbb{R}^2$.
Determine the span of $S$.
To determine the span of the set, we need to find all possible linear combinations of the vectors.
Let $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ be a vector in the span. Then, for some scalars $c_1$ and $c_2$:

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

This gives us a system of linear equations, one for each component.
Substituting the second equation into the first lets us express $c_1$ and $c_2$ in terms of $x_1$ and $x_2$.
So, for any values of $x_1$ and $x_2$, you can find $c_1$ and $c_2$ that satisfy the equation.
Therefore, $x_1$ and $x_2$ can be any real numbers, and the span of the set is all of $\mathbb{R}^2$.
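As a numeric companion (with two hypothetical independent vectors, chosen only for illustration), `np.linalg.solve` finds the scalars for any target point, which is exactly what it means for the span to be all of $\mathbb{R}^2$:

```python
import numpy as np

# Hypothetical spanning set, for illustration only.
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 1.0])
M = np.column_stack([v1, v2])  # columns are the spanning vectors

# Any target (x1, x2) is reachable: solve M @ [c1, c2] = target.
target = np.array([-4.0, 7.0])
c = np.linalg.solve(M, target)
print(c)      # the scalars c1 and c2
print(M @ c)  # [-4.  7.] -> the target is in the span
```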