Determinants
The determinant is a scalar value associated with a square matrix that encodes key information about the linear transformation the matrix represents — whether it is invertible, how it scales volume, and whether it preserves or reverses orientation.
Definition
2×2 Determinant
det [a b] = ad - bc
    [c d]
Geometrically: the signed area of the parallelogram formed by the column vectors.
3×3 Determinant (Sarrus' Rule)
det [a b c]
    [d e f] = aei + bfg + cdh - ceg - bdi - afh
    [g h i]
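As a quick sanity check, the 2×2 formula and Sarrus' rule translate directly into code (a minimal Python sketch; the function names det2 and det3 are ad hoc):

```python
def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

def det3(m):
    """Sarrus' rule for a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

print(det2(1, 2, 3, 4))                          # 1*4 - 2*3 = -2
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```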
General Definition (Leibniz Formula)
For an n × n matrix A:
det(A) = Σ_σ sgn(σ) · a₁,σ(1) · a₂,σ(2) · ... · aₙ,σ(n)
Sum over all permutations σ of {1, 2, ..., n}, where sgn(σ) = +1 for even permutations, -1 for odd.
This has n! terms — impractical for computation, but important theoretically.
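The formula maps directly onto code; with its n! terms this is purely illustrative (a sketch using itertools.permutations, computing sgn(σ) via an inversion count):

```python
import math
from itertools import permutations

def sign(perm):
    """sgn(sigma): +1 for an even number of inversions, -1 for odd."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    """Leibniz formula: sum of sgn(sigma) * prod_i a[i][sigma(i)]."""
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_leibniz([[1, 2], [3, 4]]))   # -2
```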
Cofactor Expansion
The determinant can be computed by expanding along any row or column.
Cofactor Cᵢⱼ = (-1)^(i+j) · Mᵢⱼ, where Mᵢⱼ is the (i,j) minor (determinant of the submatrix obtained by deleting row i and column j).
Expansion along row i:
det(A) = Σⱼ₌₁ⁿ aᵢⱼ · Cᵢⱼ = Σⱼ₌₁ⁿ (-1)^(i+j) · aᵢⱼ · Mᵢⱼ
Expansion along column j:
det(A) = Σᵢ₌₁ⁿ aᵢⱼ · Cᵢⱼ
Choose the row or column with the most zeros to minimize computation.
Time: O(n!) naively (recursive cofactor expansion). O(n³) via Gaussian elimination.
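Recursive cofactor expansion along the first row, as a sketch (O(n!) time, so only suitable for small matrices):

```python
def det_cofactor(A):
    """Expand along row 0: det(A) = sum_j (-1)^j * A[0][j] * M_0j."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det_cofactor([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(n))

print(det_cofactor([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```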
Laplace Expansion (Generalized)
Expand along multiple rows or columns simultaneously using complementary minors. This is the Laplace expansion theorem, generalizing cofactor expansion.
Properties
Fundamental Properties
- det(I) = 1
- Row swap changes sign: det(..., Rᵢ, ..., Rⱼ, ...) = -det(..., Rⱼ, ..., Rᵢ, ...)
- Row scaling: det(..., cRᵢ, ...) = c · det(..., Rᵢ, ...)
- Row addition: det(..., Rᵢ + cRⱼ, ...) = det(..., Rᵢ, ...) (doesn't change)
- Multilinearity: det is linear in each row separately
Derived Properties
- det(cA) = cⁿ det(A) for n × n matrix A
- det(Aᵀ) = det(A)
- det(AB) = det(A) · det(B) (multiplicative)
- det(A⁻¹) = 1/det(A)
- det(A) ≠ 0 iff A is invertible
- Triangular matrix: det = product of diagonal entries
- Block triangular: det [A B; 0 D] = det(A) · det(D)
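These properties are easy to spot-check numerically; a sketch assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) * det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# det(A^T) = det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
# det(cA) = c^n * det(A) for an n x n matrix
assert np.isclose(np.linalg.det(2 * A), 2 ** 4 * np.linalg.det(A))
# det(I) = 1
assert np.isclose(np.linalg.det(np.eye(4)), 1.0)
```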
Determinant via Row Reduction
The most efficient way to compute determinants:
- Row reduce A to upper triangular form U using row swaps and row replacements (Rᵢ ← Rᵢ + cRⱼ); replacements leave the determinant unchanged.
- Each row swap flips the sign; if a row is scaled by c during reduction, divide by c at the end to compensate.
- det(A) = (-1)^(# swaps) × (product of diagonal entries of U) / (product of any row-scaling factors used).
Time: O(n³).
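A sketch of the O(n³) procedure, with partial pivoting for numerical stability (row replacements leave the determinant unchanged; each swap flips the sign):

```python
def det_gauss(A):
    """Determinant via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]      # work on a copy
    n = len(A)
    det = 1.0
    for k in range(n):
        # pick the largest pivot in column k for numerical stability
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            return 0.0             # singular matrix
        if p != k:
            A[k], A[p] = A[p], A[k]
            det = -det             # each row swap flips the sign
        det *= A[k][k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return det

print(det_gauss([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]]))
```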
Cramer's Rule
For a system Ax = b where A is n × n and invertible:
xᵢ = det(Aᵢ) / det(A)
where Aᵢ is A with column i replaced by b.
Practical use: O(n · n!) naively, O(n⁴) with efficient det computation. Gaussian elimination (O(n³)) is faster for solving systems. Cramer's rule is mainly of theoretical importance.
Useful for: Small systems (2×2, 3×3), deriving formulas, theoretical proofs.
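A sketch for small systems, using a tiny recursive determinant helper (names are ad hoc; Gaussian elimination is the right tool for anything larger):

```python
def det(A):
    """Cofactor expansion along row 0 (fine for tiny matrices)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A_i) / det(A)."""
    d = det(A)
    if d == 0:
        raise ValueError("singular matrix: no unique solution")
    # A_i is A with column i replaced by b
    return [det([row[:i] + [b[k]] + row[i+1:] for k, row in enumerate(A)]) / d
            for i in range(len(A))]

print(cramer_solve([[2, 1], [1, 3]], [3, 5]))   # [0.8, 1.4]
```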
Adjugate Matrix
The adjugate (or classical adjoint) adj(A) is the transpose of the cofactor matrix:
adj(A)ᵢⱼ = Cⱼᵢ = (-1)^(i+j) Mⱼᵢ
Key identity:
A · adj(A) = adj(A) · A = det(A) · I
Therefore:
A⁻¹ = adj(A) / det(A)
This gives an explicit formula for the inverse but is impractical for large matrices.
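A sketch that builds adj(A) from cofactors (helper names are ad hoc); multiplying out A · adj(A) recovers det(A) · I:

```python
def minor(A, i, j):
    """Submatrix of A with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def adjugate(A):
    """adj(A)[i][j] = C_ji: the transpose of the cofactor matrix."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
print(adjugate(A))   # [[4, -2], [-3, 1]]; A times adj(A) = det(A) * I = -2 * I
```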
Geometric Interpretation
Determinant and Volume
The absolute value |det(A)| equals the volume of the parallelepiped (or parallelogram in 2D) formed by the column vectors of A.
2D: |det([v₁ | v₂])| = area of parallelogram spanned by v₁, v₂.
3D: |det([v₁ | v₂ | v₃])| = volume of parallelepiped spanned by v₁, v₂, v₃.
nD: n-dimensional volume of the parallelotope.
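For example, the parallelepiped spanned by v₁ = (2,0,0), v₂ = (1,3,0), v₃ = (0,1,4) has volume |det| = 24, since the column matrix is triangular (a sketch assuming NumPy):

```python
import numpy as np

# Columns are v1, v2, v3; the matrix is upper triangular.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
volume = abs(np.linalg.det(M))   # |2 * 3 * 4| = 24
```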
Sign and Orientation
- det > 0: The transformation preserves orientation (right-handed → right-handed).
- det < 0: The transformation reverses orientation (reflection).
- det = 0: The transformation collapses dimension (singular, not invertible).
Determinant as Scaling Factor
A linear transformation T with matrix A scales volumes by a factor of |det(A)|.
If R is a region with volume V(R), then V(T(R)) = |det(A)| · V(R).
This is the basis for the change of variables formula in multivariable calculus:
∫∫_T(R) f(u,v) du dv = ∫∫_R f(T(x,y)) |det(J)| dx dy
where J is the Jacobian matrix.
Special Determinant Formulas
Vandermonde determinant:
det [1      1      ...  1     ]
    [x₁     x₂     ...  xₙ    ]
    [x₁²    x₂²    ...  xₙ²   ] = Π_{i<j} (xⱼ - xᵢ)
    [⋮      ⋮      ⋱    ⋮     ]
    [x₁ⁿ⁻¹  x₂ⁿ⁻¹  ...  xₙⁿ⁻¹ ]
Non-zero iff all xᵢ are distinct. Appears in polynomial interpolation.
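A numeric spot check of the product formula (a sketch assuming NumPy; row k of V holds the k-th powers of the nodes):

```python
import math
import numpy as np

x = [1.0, 2.0, 4.0, 7.0]
n = len(x)
V = np.array([[xi ** k for xi in x] for k in range(n)])   # row k holds x_i^k
product = math.prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
assert np.isclose(np.linalg.det(V), product)   # both equal 540 here
```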
Circulant determinant: For a circulant matrix C with first row (c₀, c₁, ..., cₙ₋₁):
det(C) = Π_{k=0}^{n-1} p(ωᵏ)
where p(x) = c₀ + c₁x + ... + cₙ₋₁xⁿ⁻¹ and ω = e^(2πi/n).
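A spot check of the eigenvalue product (a sketch assuming NumPy; row k of the circulant is the first row rotated right k times):

```python
import numpy as np

c = np.array([4.0, 1.0, 2.0, 3.0])               # first row (c0, c1, c2, c3)
n = len(c)
C = np.array([np.roll(c, k) for k in range(n)])  # row k: first row rotated right k times
omega = np.exp(2j * np.pi / n)
# Eigenvalues are p(omega^k) with p(x) = c0 + c1 x + ... + c_{n-1} x^{n-1}
eigs = [sum(c[m] * omega ** (k * m) for m in range(n)) for k in range(n)]
assert np.isclose(np.prod(eigs).real, np.linalg.det(C))
```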
Determinant Identities
Matrix determinant lemma: For invertible A ∈ ℝⁿˣⁿ, vectors u, v ∈ ℝⁿ:
det(A + uvᵀ) = (1 + vᵀA⁻¹u) · det(A)
Rank-1 update of the determinant in O(n²) instead of O(n³).
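A numeric check of the lemma (a sketch assuming NumPy; A is shifted by 5I just to keep it comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # comfortably invertible
u = rng.standard_normal(5)
v = rng.standard_normal(5)

lhs = np.linalg.det(A + np.outer(u, v))
rhs = (1 + v @ np.linalg.solve(A, u)) * np.linalg.det(A)   # solve avoids forming A^-1
assert np.isclose(lhs, rhs)
```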
Sylvester's determinant identity: For A ∈ ℝᵐˣⁿ, B ∈ ℝⁿˣᵐ:
det(Iₘ + AB) = det(Iₙ + BA)
Useful when m ≠ n — compute the determinant in the smaller dimension.
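The identity is easy to verify numerically; a sketch with m = 2, n = 6 (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 6))
B = rng.standard_normal((6, 2))

# 2x2 determinant on the left, 6x6 on the right: same value
assert np.isclose(np.linalg.det(np.eye(2) + A @ B),
                  np.linalg.det(np.eye(6) + B @ A))
```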
Real-World Applications
- Invertibility testing: det(A) ≠ 0 iff the system has a unique solution.
- Volume computation: Determinants compute areas, volumes, and higher-dimensional content.
- Cross product: In 3D, u × v = det [e₁ e₂ e₃; u₁ u₂ u₃; v₁ v₂ v₃] (symbolic expansion).
- Eigenvalue computation: Eigenvalues are roots of det(A - λI) = 0.
- Jacobian in optimization: det(J) measures local volume distortion. Critical for change of variables in integration and probability.
- Linear independence: det ≠ 0 iff columns are linearly independent.
- Orientation testing: In computational geometry, the sign of a 3×3 determinant determines if three points make a left or right turn.
- Kirchhoff's theorem: The number of spanning trees of a graph = any cofactor of the Laplacian matrix (a determinant computation).
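For instance, the 2D orientation test reduces the 3×3 homogeneous determinant to a 2×2 cross product of difference vectors (a minimal sketch; the function name is ad hoc):

```python
def orientation(p, q, r):
    """Sign of det [[1, px, py], [1, qx, qy], [1, rx, ry]]:
    +1 = left turn (counterclockwise), -1 = right turn, 0 = collinear."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

print(orientation((0, 0), (1, 0), (1, 1)))   # 1: left turn
print(orientation((0, 0), (1, 0), (2, 0)))   # 0: collinear
```

With integer coordinates this is exact; with floats, robust geometry libraries use adaptive-precision versions of the same determinant.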