A vector can be represented by an arrow pointing out from the origin
An arrow vector has two properties
The length of the arrow vector
The orientation of the arrow vector
To indicate that a symbol represents a vector in the form of an arrow, we will place an arrow above the symbol
v⃗
This style of notation is only relevant when we want to emphasize that the vector should be treated as an arrow
As you progress further, you will encounter a broader understanding of vectors that goes beyond the concept of arrows. Consequently, a more generalized notation will be employed.
Vectors are typically considered as one-dimensional entities.
However, it is important to note that vectors can reside in spaces of any dimension.
It is worth mentioning that vectors can only interact or operate with other vectors of the same dimension.
All vectors can undergo two basic arithmetic operations
Vector addition is the operation of adding two or more vectors together
vector + vector = vector sum
Geometrically, we can use the head-to-tail method to determine the vector sum
To add two arbitrary vectors, place the tail of one vector at the head of the other and draw an arrow from the tail of the first vector to the head of the last.
One way to conceptualize vector addition like this is to view each vector as representing the length and direction of a step that needs to be taken.
By adding the vectors together, we obtain a new vector that represents the cumulative effect of following the steps described by each individual vector.
It is not hard to show that the order in which the vectors are chained head-to-tail is irrelevant: vector addition gives the same sum either way
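The head-to-tail picture can be sketched numerically. This is a minimal Python sketch, written in coordinates for concreteness (the helper name add_vectors is illustrative): each vector is a step, the sum is the total displacement, and swapping the order of the steps does not change where we land.

```python
def add_vectors(u, w):
    """Head-to-tail sum: add matching components."""
    return tuple(a + b for a, b in zip(u, w))

u = (2, 1)
w = (1, 3)

# Walking u then w lands at the same point as walking w then u,
# so which tail touches which head is irrelevant.
print(add_vectors(u, w))  # (3, 4)
print(add_vectors(w, u))  # (3, 4)
```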
Scalar multiplication is the operation of multiplying a vector by a number
scalar ⋅ vector = scaled vector
To find the scaled vector, multiply the vector's original length by the scaling factor; the result is a new vector with the same orientation but a different length.
Multiplying a vector by a number scales the vector, so these numbers are called scalars.
There are two aspects to scalar multiplication
The sign of the scalar determines the direction of the new vector: positive scalars keep the direction unchanged, while negative scalars reverse it
The magnitude of the scalar affects the vector's length: a magnitude greater than 1 stretches the vector, a magnitude between 0 and 1 squishes it, and a magnitude of exactly 1 leaves the length unchanged
It should be noted that although scalar multiplication can change the direction and length of the vector, the vector will always remain on the same line
We can reach every point on this line through scalar multiplication alone, but anywhere beyond the line is unreachable simply by scaling the vector
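The effect of scaling can be sketched in coordinates. This is an illustrative Python snippet (the helper name scale is an assumption, not standard notation): every scaled copy keeps the same component ratio, so it stays on the line through the origin and the original vector.

```python
def scale(c, v):
    """Multiply every component of v by the scalar c."""
    return tuple(c * x for x in v)

v = (3, 4)
print(scale(2, v))     # (6, 8): stretched, same direction
print(scale(-0.5, v))  # (-1.5, -2.0): squished, direction reversed
# Both results keep the component ratio 3:4, so they lie
# on the same line through the origin as v.
```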
Suppose we have a set of known vectors and want to express an arbitrary vector in terms of these vectors.
Vector addition and scalar multiplication allow us to express vectors in terms of others, but neither operation alone is usually sufficient to fully represent an arbitrary vector
However, by combining both operations—adding vectors and multiplying them by scalars—we gain the flexibility needed to express the arbitrary vector in terms of the given set.
Formally, a linear combination creates new vectors by simultaneously applying vector addition and scalar multiplication:
λ₁V₁ + λ₂V₂ + λ₃V₃ + ⋯ + λₙVₙ
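The formula above can be computed directly once the vectors are written in coordinates. A minimal sketch (the function name linear_combination is illustrative):

```python
def linear_combination(scalars, vectors):
    """Return lam1*V1 + lam2*V2 + ... + lamn*Vn."""
    n = len(vectors[0])
    total = [0.0] * n
    for lam, v in zip(scalars, vectors):
        for i in range(n):
            total[i] += lam * v[i]
    return tuple(total)

# 2*(1, 0) + 3*(0, 1) = (2, 3)
print(linear_combination([2, 3], [(1, 0), (0, 1)]))  # (2.0, 3.0)
```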
The power of linear combinations lies in their ability to construct new dimensions
Each additional vector in the set can potentially unlock a new dimension, expanding the span of the space.
In an n-dimensional space, a set of n vectors can span that space if the vectors are linearly independent. This means any vector in the space can be expressed as a linear combination of the n vectors.
However, if the vectors are linearly dependent, some vectors can be written as combinations of other members of the set, reducing the number of vectors needed to span the entire space.
This redundancy occurs when the vectors do not contribute to new dimensions. For example, in a set of two vectors, the vectors are linearly dependent if they lie along the same line.
In a set of three vectors, the set is linearly dependent if all three vectors lie in the same plane or if any two of them lie along the same line.
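Linear independence can be checked mechanically: stacking the vectors into a matrix and computing its rank tells us how many dimensions the set actually unlocks. A sketch assuming NumPy is available:

```python
import numpy as np

def is_independent(vectors):
    """Vectors are linearly independent iff the matrix formed by
    stacking them as rows has full rank."""
    m = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(m) == len(vectors)

print(is_independent([(1, 0), (0, 1)]))                   # True: spans the plane
print(is_independent([(1, 2), (2, 4)]))                   # False: same line
print(is_independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False: same plane
```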
This concept of linear combinations directly ties into the idea of basis vectors.
In any n-dimensional space, a set of n linearly independent vectors forms a basis.
The basis serves as a framework for representing any vector in that space.
Once a basis is chosen, any vector can be expressed as a linear combination of the basis vectors:
V = v₁·basis₁ + v₂·basis₂ + v₃·basis₃ + ⋯ + vₙ·basisₙ
V is the vector we want to express in terms of the basis vectors
{basis₁, basis₂, basis₃, ⋯, basisₙ} is the set of known basis vectors
{v₁, v₂, v₃, ⋯, vₙ} are the scalars, known as the components, that determine how much of each basis vector is needed to construct the arbitrary vector
If the choice of basis vectors is known, this simplifies the representation of vectors. For convenience, we often denote the vector V as a column of its components in the chosen basis:
V = ⎡ v₁ ⎤
    ⎢ v₂ ⎥
    ⎢ v₃ ⎥
    ⎢ ⋮  ⎥
    ⎣ vₙ ⎦
This notation is effective only when the choice of basis is clearly defined or implicitly understood.
With the introduction of basis vectors, performing vector operations becomes more streamlined. We can redefine operations like vector addition and scalar multiplication in terms of the components relative to the chosen basis.
Previously, we defined vector addition geometrically using the tip-to-tail method. We can now observe that the result is consistent when each vector is expressed as a linear combination of basis vectors.
Mathematically, this means vector addition can be simplified to adding the corresponding components of the vectors together
Previously, we defined scalar multiplication as changing a vector's length while maintaining its orientation. When using a linear combination of basis vectors, all basis vectors are scaled equally to preserve this orientation.
Mathematically, this means scalar multiplication can be simplified to multiplying each of the components by the scalar
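Both component-wise rules can be shown side by side. A small illustrative snippet, with the vectors and scalar chosen arbitrarily:

```python
u = (1, 2, 3)
w = (4, 5, 6)
c = 2

# Vector addition: add the corresponding components.
vector_sum = tuple(a + b for a, b in zip(u, w))
# Scalar multiplication: multiply every component by the scalar.
scaled = tuple(c * a for a in u)

print(vector_sum)  # (5, 7, 9)
print(scaled)      # (2, 4, 6)
```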
Having introduced the two fundamental vector operations, we now proceed to explore an operation that is not fundamental, but extremely useful
This operation is called the "dot product", and it is a vector operation that takes in two vectors and outputs a scalar
vector•vector=λ
The dot product tells us how much two vectors point in the same direction, and there are two factors that affect its value
The longer the two vectors are, the larger the magnitude of the dot product is
The length of the vector is given by the magnitude or modulus of the vector, ∣∣vector∣∣
The magnitude of the dot product is directly proportional to the product of the length of the vectors
(U•W)∝(∣∣U∣∣×∣∣W∣∣)
The more the two vectors point in the same direction, the larger the value of the dot product is
The dot product is the largest and positive when they are pointing in the same direction
The dot product is the smallest and negative when they are pointing in opposite directions
Between the two extremes, where the vectors are perpendicular to each other, the dot product has a value of 0
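These sign rules can be verified numerically. The sketch below uses the component formula u₁w₁ + ⋯ + uₙwₙ, which this section derives later for an orthonormal basis; here it serves only to illustrate the three regimes:

```python
def dot(u, w):
    # Component formula for the dot product, valid in an
    # orthonormal basis (derived later in this section).
    return sum(a * b for a, b in zip(u, w))

u = (1, 0)
print(dot(u, (2, 0)))   # 2: same direction, positive
print(dot(u, (-2, 0)))  # -2: opposite direction, negative
print(dot(u, (0, 3)))   # 0: perpendicular
```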
To understand how we can apply the dot product in the general cases, we start by looking at three special cases
When the two vectors are scalar multiples of each other and are pointing in the same direction
When the two vectors are scalar multiples of each other and are pointing in the opposite direction
When the two vectors are perpendicular to each other
Having established the three special cases, we can find the dot product of any two vectors by simply relating them back to those three cases
Most of the time, vectors are not completely aligned or perpendicular to each other
In these situations, we can split one of the vectors into a sum of two vectors: one aligned with the vector we are dotting with, and the other perpendicular to it
We then distribute the dot product
This simplifies the situation into the two of the three special cases
Since the dot product between the pairs of perpendicular vectors is zero, the overall dot product is just given by the dot product of the pairs of parallel vectors
This means that the dot product can be found by multiplying the length of one vector by the length of the projection of the other vector on itself
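The split-and-distribute argument can be checked numerically. In this sketch (vectors chosen for convenience, helper names illustrative), w is decomposed into a part parallel to u and a part perpendicular to u; the perpendicular part contributes nothing, and the parallel part carries the whole dot product:

```python
import math

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def norm(v):
    return math.sqrt(dot(v, v))

u = (2, 0)
w = (3, 4)

# Split w into a part parallel to u and a part perpendicular to u.
c = dot(u, w) / dot(u, u)
w_par = tuple(c * a for a in u)                  # projection of w onto u
w_perp = tuple(a - b for a, b in zip(w, w_par))  # remainder, perpendicular to u

print(dot(u, w_perp))              # 0.0: perpendicular part contributes nothing
print(dot(u, w_par) == dot(u, w))  # True: parallel part carries the whole product
print(norm(u) * norm(w_par))       # 6.0: |u| times the projection length
```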
One interesting property arises when we compute the dot product of a vector with itself
vector•vector
Since a vector always points in the same direction as itself, the dot product of a vector with itself amounts to squaring the length of said vector
vector•vector = ∣∣vector∣∣×∣∣vector∣∣ = ∣∣vector∣∣²
In fact, given the dot product, we can find the corresponding length of the vector
∣∣vector∣∣ = √(vector•vector)
In other words, we can define the length of a vector using dot product
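This definition of length via the dot product is easy to sketch (helper names are illustrative):

```python
import math

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def length(v):
    """||v|| = sqrt(v . v): length defined via the dot product."""
    return math.sqrt(dot(v, v))

print(length((3, 4)))  # 5.0
```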
If we expand both vectors as linear combinations of basis vectors and distribute the dot product, we get
U•W = u₁w₁(basis₁•basis₁) + u₁w₂(basis₁•basis₂) + ⋯ + uₙwₙ(basisₙ•basisₙ)
This expansion complicated the situation from determining the dot product of two vectors to determining the dot products between the n basis vectors
The issue with this initial attempt is that we have expressed the dot product in terms of the dot products of the basis vectors, but we have no idea what those dot products should be
We can improve the expansion if we choose a basis where the dot products between the basis vectors can be known without doing any calculation
Any two basis vectors in this chosen basis set must fall into one of the three special cases of dot product
By definition, a basis set must be linearly independent, so no two basis vectors can be pointing in the same or opposite direction
This leaves us with choosing a basis set where all the basis vectors are perpendicular to each other
A basis set where all the vectors are perpendicular to each other is called an orthogonal basis
Despite having the same meaning, we usually use the word "orthogonal" instead of "perpendicular" in mathematics
It should go without saying that the dot product between any two different orthogonal basis vectors must be 0
basisᵢ • basisⱼ = 0 for i ≠ j
Choosing an orthogonal basis to compute the dot product will therefore greatly simplify the expansion: every cross term vanishes, leaving only
u₁w₁∣∣basis₁∣∣² + u₂w₂∣∣basis₂∣∣² + u₃w₃∣∣basis₃∣∣² + ⋯ + uₙwₙ∣∣basisₙ∣∣²
Hence, when the length of each basis vector is known, the dot product between the two vectors will also be known
We can be even more ambitious and choose a basis that simplifies the above expression even further by controlling the length of the basis vectors as well
To make all the ∣∣basisᵢ∣∣² factors disappear, we simply have to choose orthogonal basis vectors with a length of 1
u₁w₁×1 + u₂w₂×1 + u₃w₃×1 + ⋯ + uₙwₙ×1
This choice simplifies the computation to one where we simply have to add and multiply the corresponding components of the two vectors
u₁w₁ + u₂w₂ + u₃w₃ + ⋯ + uₙwₙ
The basis set where all the basis vectors are not only orthogonal to each other but also of length 1 is called an orthonormal basis
An orthonormal basis is far too convenient not to use, so in most situations it is the default choice of basis
Hence, whenever a vector is presented in its column form without specifying the basis, we can be quite sure that the implied basis is an orthonormal one
In fact, we usually express the dot product in an orthonormal basis using the column vectors
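In an orthonormal basis, the whole computation collapses to component arithmetic. A minimal sketch with arbitrarily chosen components:

```python
u = [1, 2, 3, 4]  # components of U in an orthonormal basis
w = [4, 3, 2, 1]  # components of W in the same basis

# Dot product in an orthonormal basis:
# u1*w1 + u2*w2 + ... + un*wn
dot_uw = sum(a * b for a, b in zip(u, w))
print(dot_uw)  # 20
```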