Linear transformations are mathematical operations that change vectors and affect the space in which these vectors exist. To understand how space changes during a linear transformation, we need a parameter that quantifies the distortion or warping of the space.
One commonly used and intuitive parameter is the scaling factor of the n-dimensional space on which the transformation is applied.
This scaling factor represents how much the space expands or contracts in different directions.
To determine the scaling factor of space caused by a linear transformation, we introduce a function called the determinant.
The scaling factor of space encompasses two important aspects: magnitude and sign, providing insights into changes in both the size and orientation of space resulting from a linear transformation.
Magnitude
The magnitude of the scaling factor indicates the change in the size of space.
Specifically, it represents the ratio of the volume of a region after the transformation to its original volume.
Sign
The sign of the scaling factor reflects changes in the orientation of space. Altering the orientation of space is akin to transforming it into its mirror image. Regardless of the dimension of the space, there are only two possible configurations for the orientation:
Positive Orientation: Preserves the original orientation.
Negative Orientation: Results in a mirror image of the original orientation.
Transformations that result in mirror images have a negative scaling factor, while transformations that preserve the orientation have a positive scaling factor.
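As a quick illustration (a NumPy sketch that is not part of the original derivation), a pure scaling matrix has a positive determinant equal to its area scaling factor, while a reflection has a negative determinant of the same magnitude:

```python
import numpy as np

# A pure scaling of the plane: stretches x by 3 and y by 2.
scale = np.array([[3.0, 0.0],
                  [0.0, 2.0]])

# A reflection across the y-axis: produces a mirror image of the plane.
reflect = np.array([[-1.0, 0.0],
                    [ 0.0, 1.0]])

print(np.linalg.det(scale))    # 6.0  -> areas grow by a factor of 6, orientation preserved
print(np.linalg.det(reflect))  # -1.0 -> areas unchanged, orientation reversed
```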
Before deriving an expression for the determinant, it's essential to grasp its defining characteristics, many of which can be intuitively understood through geometry. Three key points to keep in mind are:
Each column in a matrix represents a transformed basis vector.
When you scale one side of a parallelotope by a constant, its volume is scaled by the same amount.
Therefore, when you scale all the entries in a single column of a determinant by a constant factor, the determinant is also scaled by the same factor.
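For example, here is a small NumPy check of the column scaling property (the 2×2 matrix is an arbitrary illustration, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

B = A.copy()
B[:, 0] *= 5.0  # scale the first column (first transformed basis vector) by 5

print(np.linalg.det(A))   # 5.0
print(np.linalg.det(B))   # 25.0 = 5 * det(A)
```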
Shearing refers to a transformation that slants the shape of an n-dimensional parallelotope while preserving its volume.
In this transformation, one or more of the faces of the parallelotope are displaced along directions parallel to other faces, resulting in a skewed but volume-preserving deformation.
Shearing can be achieved by adding a multiple of one of the basis vectors that define the parallelotope to another basis vector (another column of the matrix).
Since the parallelotope and its sheared counterpart have the same volume, their corresponding determinants should also yield the same value.
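A minimal NumPy sketch of this invariance (the matrix and shear factor are arbitrary, chosen only for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

sheared = A.copy()
sheared[:, 1] += 4.0 * A[:, 0]  # shear: add a multiple of column 0 to column 1

print(np.linalg.det(A))        # 5.0
print(np.linalg.det(sheared))  # 5.0 -> the area (volume) is unchanged
```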
When we distort or shift one side of a parallelotope parallel to itself, the resulting volume remains unchanged because the overall "base" area and the height of the parallelotope stay constant.
This distortion can be achieved if the vector defining the side of the parallelotope is broken down into a sum of vectors.
The volume of a parallelotope, where one of its sides is defined by the sum of two vectors, is equal to the sum of the volumes of the parallelotopes formed by each of those individual vectors.
Expressing this in terms of determinants, when one of the columns (transformed basis vectors) can be written as a sum of two vectors, the determinant can also be written as a sum of two determinants.
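A small numerical check of this Column of Sums property (the vectors below are arbitrary examples, not from the text):

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 1.0])
w = np.array([0.0, 4.0])   # the remaining column, shared by all three determinants

# One column written as a sum of two vectors: the determinant is additive in that column.
lhs = np.linalg.det(np.column_stack([u + v, w]))
rhs = np.linalg.det(np.column_stack([u, w])) + np.linalg.det(np.column_stack([v, w]))

print(lhs, rhs)  # 16.0 and 16.0
```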
After understanding the properties that make the determinant unique, we can construct a mathematical expression to compute it. To simplify the determinant's expression, remember that its columns represent the transformed basis vectors.
Using the Column of Sums property of determinants, we can split the determinant into multiple determinants, effectively subdividing the n-dimensional volume.
Using the Column Scaling property, we can extract the coefficients from each determinant.
Using the Invariance Under Shearing property, we can simplify each determinant by turning the other entries in the corresponding rows to zero.
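These steps can be checked numerically. The sketch below (an illustrative brute-force expansion, not the final formula) expands every column into coefficients times standard basis vectors; each surviving term is a product of coefficients multiplied by the determinant of a matrix of basis vectors, which is 0 when a basis vector repeats and ±1 otherwise:

```python
import numpy as np
from itertools import product

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n = A.shape[0]
I = np.eye(n)

total = 0.0
for rows in product(range(n), repeat=n):              # one row index per column
    coeff = np.prod([A[rows[col], col] for col in range(n)])
    basis = np.column_stack([I[:, r] for r in rows])  # matrix of chosen basis vectors
    total += coeff * np.linalg.det(basis)             # 0 if a basis vector repeats, else +/-1

print(total, np.linalg.det(A))  # both 5.0
```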
Although we can express the total volume of a parallelotope as a sum of other parallelotopes, we must consider their orientations.
Half of the fragmented parallelotopes may lie in the positive space, while the other half are in the negative space.
Adding parallelotopes from different spaces directly is not feasible. To perform the addition, we need to ensure all fragmented parallelotopes are aligned within the positive space.
This involves rearranging their orientations by aligning the normalized vectors with the corresponding columns.
The Antisymmetry property dictates that interchanging any two rows changes the sign of the determinant, thus producing an alternating pattern of positive and negative signs.
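A quick NumPy check of the antisymmetry property (again with an arbitrary example matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

swapped = A[[1, 0], :]  # interchange the two rows

print(np.linalg.det(A))        # 5.0
print(np.linalg.det(swapped))  # -5.0 -> the orientation is reversed
```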
The simplification of the determinant not only streamlines the expression but also offers valuable insights into the geometric structure of the problem. Examining the determinant expression reveals that each fragmented determinant can be decomposed into two distinct parts:
The group of vectors without a component in one of the basis directions forms a parallelotope of dimension n−1. The remaining normalized vector, which has a component only in that specific basis direction, is orthogonal to this (n−1)-dimensional parallelotope.
The volume of an n-dimensional parallelotope is found by multiplying the (n−1)-dimensional volume of its cross-section by the height, which is the length of the basis-aligned normalized vector.
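As a sanity check (an illustrative 3×3 example, not from the text), when one column has a component only along a single basis direction, the 3-dimensional volume factors into that height times the 2-dimensional base:

```python
import numpy as np

# The third column has a component only along e3 (the "height" direction).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])

base = np.linalg.det(M[:2, :2])   # (n-1)-dimensional cross-section: 5.0
height = M[2, 2]                  # length of the basis-aligned vector: 4.0

print(np.linalg.det(M), base * height)  # both 20.0
```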
Regardless of the starting point, any determinant can be expressed in terms of 1-dimensional determinants.
A 1-dimensional determinant represents the oriented 1-dimensional volume. For a 1-dimensional determinant, this corresponds to the oriented length of the vector.
$$\det([\Omega_{ij}]) = \Omega_{ij}$$
A 1-dimensional determinant involves only a single vector and has a single entry.
This entry reflects the oriented length of the vector, so the determinant's value is equal to the entry itself.
By defining 1-dimensional determinants in this way, we can build up to higher-dimensional determinants, capturing the oriented volumes of n-dimensional parallelotopes recursively.
In practice, the recursive definition of the determinant can be simplified into the following steps:
Choose a Column: Select any column of the determinant.
Express the Determinant: The n-dimensional determinant is expressed as a sum of (n−1)-dimensional determinants derived from the chosen column. Each term in this sum is the product of an entry from the column, an associated sign, and the corresponding minor:
Indices: i is the index of the chosen column and j is the row index.
Entry: The entries $\Omega_{ji}$ are selected from the chosen column.
Sign: Due to the antisymmetry property of determinants, each term is multiplied by a sign, $(-1)^{i+j}$, which follows an alternating pattern:
$$\begin{bmatrix} + & - & + & \cdots \\ - & + & - & \cdots \\ + & - & + & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
(n−1)-dimensional determinant: Each of these smaller determinants is obtained by removing the j-th row and the i-th column from the original matrix. Each resulting determinant is referred to as a Minor.
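Putting the steps together, here is a minimal Python sketch of the recursive cofactor expansion (the function name `det`, the default column choice, and the test matrix are illustrative assumptions, not part of the original text):

```python
import numpy as np

def det(matrix, col=0):
    """Recursive determinant via cofactor expansion along one chosen column."""
    A = np.asarray(matrix, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]                      # 1-dimensional base case: the oriented length
    total = 0.0
    for row in range(n):
        sign = (-1) ** (row + col)          # alternating sign from the antisymmetry property
        # Minor: remove the chosen row and column before recursing.
        minor = np.delete(np.delete(A, row, axis=0), col, axis=1)
        total += sign * A[row, col] * det(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])
print(det(A), np.linalg.det(A))  # both 20.0
```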