Algebraic Algorithms
Overview
Algebraic algorithms leverage the structure of rings, fields, and polynomials to achieve fast computation. The Fast Fourier Transform (FFT) is the central technique, enabling O(n log n) polynomial multiplication with far-reaching applications across computer science, signal processing, and number theory.
Fast Fourier Transform (FFT)
Polynomial Representation
A degree-(n-1) polynomial P(x) = a_0 + a_1*x + ... + a_{n-1}*x^{n-1} can be represented as:
- Coefficient form: vector (a_0, a_1, ..., a_{n-1})
- Point-value form: n pairs (x_i, P(x_i))
The DFT
The Discrete Fourier Transform evaluates P at the n-th roots of unity w^0, w^1, ..., w^{n-1} where w = e^(2*pi*i/n).
DFT(a) = (P(w^0), P(w^1), ..., P(w^{n-1}))
Cooley-Tukey FFT
Divide-and-conquer on even and odd coefficients:
FFT(a, n):
    if n == 1: return a
    w = e^(2*pi*i/n)
    a_even = (a_0, a_2, ..., a_{n-2})
    a_odd  = (a_1, a_3, ..., a_{n-1})
    y_even = FFT(a_even, n/2)
    y_odd  = FFT(a_odd, n/2)
    for k = 0 to n/2 - 1:
        y[k]       = y_even[k] + w^k * y_odd[k]
        y[k + n/2] = y_even[k] - w^k * y_odd[k]
    return y
- Time: T(n) = 2T(n/2) + O(n) = O(n log n)
- Requires n to be a power of 2 (pad with zeros if needed)
- Inverse FFT: same algorithm with w^{-1} and divide result by n
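The recursion above can be sketched directly in Python with cmath (a minimal version assuming len(a) is a power of 2; production code uses the iterative, bit-reversed form):

```python
import cmath

def fft(a, invert=False):
    # Recursive Cooley-Tukey FFT; len(a) must be a power of 2.
    n = len(a)
    if n == 1:
        return list(a)
    w = cmath.exp((-2j if invert else 2j) * cmath.pi / n)
    y_even = fft(a[0::2], invert)   # even-indexed coefficients
    y_odd = fft(a[1::2], invert)    # odd-indexed coefficients
    y = [0j] * n
    wk = 1
    for k in range(n // 2):
        y[k] = y_even[k] + wk * y_odd[k]
        y[k + n // 2] = y_even[k] - wk * y_odd[k]
        wk *= w
    return y

def ifft(a):
    # Inverse FFT: same recursion with the conjugate root, then divide by n.
    return [v / len(a) for v in fft(a, invert=True)]
```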
Butterfly Diagram
Each stage combines pairs of elements, resembling a butterfly pattern. Iterative (bottom-up) implementation uses bit-reversal permutation and avoids recursion.
Polynomial Multiplication via FFT
To multiply polynomials A(x) and B(x) of degree n-1:
1. Pad A and B to length 2n (coefficients)
2. Compute FFT(A) and FFT(B) -- O(n log n) each
3. Pointwise multiply: C[k] = FFT(A)[k] * FFT(B)[k] -- O(n)
4. Compute IFFT(C) to get coefficients of A*B -- O(n log n)
Total: O(n log n) vs O(n^2) naive multiplication.
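The four steps map directly onto numpy's FFT routines (a sketch; for integer coefficients, round off the floating-point noise at the end):

```python
import numpy as np

def poly_multiply(a, b):
    # Coefficient lists, lowest degree first; product has len(a)+len(b)-1 terms.
    size = len(a) + len(b) - 1
    n = 1 << (size - 1).bit_length()   # step 1: pad to a power of two
    fa = np.fft.fft(a, n)              # step 2: evaluate at roots of unity
    fb = np.fft.fft(b, n)
    fc = fa * fb                       # step 3: pointwise product
    c = np.fft.ifft(fc).real[:size]    # step 4: interpolate back
    return [round(x) for x in c]

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_multiply([1, 2], [3, 4]))   # [3, 10, 8]
```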
Applications of Polynomial Multiplication
- Integer multiplication: Schonhage-Strassen O(n log n log log n); Harvey-van der Hoeven O(n log n)
- String matching with wildcards: encode as polynomial multiplication
- Convolution: signal processing, probability distribution convolution
- Generating functions: combinatorial enumeration
Number Theoretic Transform (NTT)
FFT over finite fields, avoiding floating-point precision issues.
Setup
- Work in Z_p where p is prime and p - 1 is divisible by n (a power of 2)
- Use a primitive n-th root of unity g in Z_p (g^n = 1 mod p and g^k != 1 for 0 < k < n)
- Common choice: p = 998244353 = 119 * 2^23 + 1 (supports n up to 2^23)
Algorithm
Same as FFT but all arithmetic modulo p. No floating-point errors.
NTT(a, n, p):
    // Same structure as FFT, with w = g^((p-1)/n) mod p
    // All multiplications and additions mod p
Advantages
- Exact arithmetic (crucial for competitive programming, cryptography)
- Faster on modern hardware (integer vs floating-point)
- Applications in polynomial rings for lattice-based cryptography
Convolution Theorem
The convolution theorem carries over from the FFT: NTT(A * B) = NTT(A) . NTT(B), where * denotes convolution and . the pointwise product.
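A compact iterative implementation over Z_p with p = 998244353 and primitive root 3 (a sketch in the competitive-programming style; inputs are padded to a power-of-two length):

```python
MOD = 998244353  # 119 * 2^23 + 1; 3 is a primitive root

def ntt(a, invert=False):
    # In-place iterative NTT; len(a) must be a power of 2.
    n = len(a)
    j = 0
    for i in range(1, n):            # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:               # butterfly stages
        w = pow(3, (MOD - 1) // length, MOD)
        if invert:
            w = pow(w, MOD - 2, MOD)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % MOD
                a[k] = (u + v) % MOD
                a[k + length // 2] = (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:                       # divide by n (multiply by n^{-1} mod p)
        n_inv = pow(n, MOD - 2, MOD)
        for i in range(n):
            a[i] = a[i] * n_inv % MOD
    return a

def convolve(f, g):
    # Exact polynomial product mod MOD via the convolution theorem.
    n = 1
    while n < len(f) + len(g) - 1:
        n <<= 1
    fa = ntt(f + [0] * (n - len(f)))
    fb = ntt(g + [0] * (n - len(g)))
    c = [x * y % MOD for x, y in zip(fa, fb)]
    return ntt(c, invert=True)[: len(f) + len(g) - 1]
```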
Fast Matrix Multiplication
Naive: O(n^3)
Standard row-by-column multiplication.
Strassen's Algorithm: O(n^2.807)
Multiply 2x2 block matrices using 7 multiplications instead of 8.
For 2x2 block matrices A, B, C = AB:
M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22) * B11
M3 = A11 * (B12 - B22)
M4 = A22 * (B21 - B11)
M5 = (A11 + A12) * B22
M6 = (A21 - A11)(B11 + B12)
M7 = (A12 - A22)(B21 + B22)
C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 - M2 + M3 + M6
- Recurrence: T(n) = 7T(n/2) + O(n^2) => O(n^{log2 7}) = O(n^{2.807})
- Practical crossover: n ~ 64-256 (depends on architecture)
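The seven products above translate directly to a recursive implementation (a sketch assuming square matrices whose size is a power of 2; the `cutoff` parameter marks the crossover back to the naive product):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    # Strassen's algorithm; A, B square with power-of-two size.
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                 # fall back to the naive product
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```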
Current Best Bounds
| Algorithm                  | Exponent (omega) | Year |
|----------------------------|------------------|------|
| Naive                      | 3                | —    |
| Strassen                   | 2.807            | 1969 |
| Coppersmith-Winograd       | 2.376            | 1990 |
| Williams                   | 2.3729           | 2012 |
| Alman-Vassilevska-Williams | 2.3728           | 2021 |
| Duan-Wu-Zhou               | 2.371866         | 2023 |
- Conjecture: omega = 2 (optimal), but no proof
- Practical algorithms rarely go beyond Strassen due to constant factors
- Rectangular matrix multiplication: separate exponents
Applications
- Solving linear systems: O(n^omega) via LU decomposition
- Matrix inversion: same complexity as multiplication
- All-pairs shortest paths (Seidel's algorithm for unweighted: O(n^omega log n))
- Transitive closure, matching, determinant
Polynomial Evaluation and Interpolation
Multi-Point Evaluation
Given a polynomial P(x) of degree < n and points x_0, ..., x_{n-1}, compute all P(x_i).
Subproduct tree approach:
- Build a binary tree of subproducts: M_S(x) = prod(x - x_i for i in S)
- Evaluate P mod M_S at each node, recurse
- Time: O(n log^2 n) using fast polynomial division
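A sketch of the subproduct-tree recursion using schoolbook O(n^2) polynomial arithmetic; swapping in FFT-based multiplication and division gives the stated O(n log^2 n) bound:

```python
def poly_mul(p, q):
    # Schoolbook product; coefficient lists, lowest degree first.
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_rem(p, m):
    # Remainder of p modulo a monic polynomial m.
    p = list(p)
    dm = len(m) - 1
    for i in range(len(p) - 1, dm - 1, -1):
        c = p[i]
        if c:
            for j in range(dm + 1):
                p[i - dm + j] -= c * m[j]
    return p[:dm] or [0.0]

def multipoint_eval(P, xs):
    # Reduce P modulo each half's subproduct M_S(x), then recurse;
    # the base case is plain Horner evaluation at a single point.
    if len(xs) == 1:
        acc = 0.0
        for c in reversed(P):
            acc = acc * xs[0] + c
        return [acc]
    mid = len(xs) // 2
    results = []
    for part in (xs[:mid], xs[mid:]):
        m = [1.0]
        for x in part:
            m = poly_mul(m, [-x, 1.0])   # multiply in the factor (x - x_i)
        results += multipoint_eval(poly_rem(P, m), part)
    return results
```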
Interpolation
Given n point-value pairs (x_i, y_i), find the unique degree-(n-1) polynomial P.
Lagrange interpolation:
P(x) = sum over i of y_i * prod over j != i of (x - x_j) / (x_i - x_j)
- Naive: O(n^2)
- Fast interpolation via subproduct tree: O(n log^2 n)
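A direct O(n^2) evaluation of the Lagrange formula, using exact rational arithmetic (a sketch; the fast version reuses the subproduct tree from multi-point evaluation):

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    # Evaluate the unique interpolating polynomial at x, exactly.
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Points on x^2 + x + 1: (0,1), (1,3), (2,7); P(3) = 13
print(lagrange_interpolate([(0, 1), (1, 3), (2, 7)], 3))   # 13
```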
Newton's Form
P(x) = c_0 + c_1(x - x_0) + c_2(x - x_0)(x - x_1) + ...
- Divided differences to find coefficients
- O(n^2) naive, O(n log^2 n) fast
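A sketch of the divided-difference computation, done in place over the rationals (Fraction) so the coefficients c_i come out exact:

```python
from fractions import Fraction

def newton_coeffs(xs, ys):
    # Divided-difference table, computed in place in O(n^2).
    c = [Fraction(y) for y in ys]
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(c, xs, x):
    # Horner-style evaluation of the Newton form.
    acc = c[-1]
    for i in range(len(c) - 2, -1, -1):
        acc = acc * (x - xs[i]) + c[i]
    return acc
```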
Chinese Remainder Theorem (CRT)
Statement
If m_1, ..., m_k are pairwise coprime, the system:
- x = a_1 (mod m_1), ..., x = a_k (mod m_k) has a unique solution modulo M = m_1 * ... * m_k.
Algorithm
For each i:
    M_i = M / m_i
    N_i = M_i^(-1) mod m_i    (extended Euclidean algorithm)
x = sum(a_i * M_i * N_i) mod M
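The algorithm above, sketched with Python's built-in modular inverse (`pow(x, -1, m)`, available since 3.8):

```python
from math import prod

def crt(residues, moduli):
    # Moduli assumed pairwise coprime.
    M = prod(moduli)
    x = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        N_i = pow(M_i, -1, m_i)   # modular inverse via extended Euclid
        x += a_i * M_i * N_i
    return x % M

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  =>  x = 23 (mod 105)
print(crt([2, 3, 2], [3, 5, 7]))   # 23
```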
Fast CRT (Subproduct Tree)
- Compute all remainders simultaneously in O(n log^2 n)
- Reconstruction also in O(n log^2 n)
Applications
- Multi-modular arithmetic: compute over small primes, reconstruct via CRT
- Avoids large integer arithmetic
- Used in computer algebra systems
- Secret sharing (Shamir's scheme)
- RSA decryption speedup (CRT for modular exponentiation)
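The last application in miniature, with the textbook toy parameters p = 61, q = 53 (real implementations recombine with Garner's formula exactly as below, just with large primes):

```python
# Toy RSA parameters
p, q = 61, 53
n = p * q                          # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent
c = pow(65, e, n)                  # encrypt m = 65

# Standard decryption: one full-size modular exponentiation
m_std = pow(c, d, n)

# CRT decryption: two half-size exponentiations, then recombine
dp, dq = d % (p - 1), d % (q - 1)
mp, mq = pow(c, dp, p), pow(c, dq, q)
h = (pow(q, -1, p) * (mp - mq)) % p
m_crt = mq + h * q
```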
Grobner Bases
Generalization of GCD and Gaussian elimination to multivariate polynomial systems.
Definition
A Grobner basis G of an ideal I in k[x_1, ..., x_n] is a generating set such that the leading terms of G generate the leading term ideal of I.
Buchberger's Algorithm
Input: set of polynomials F
G = F
Repeat:
    For each pair (f, g) in G:
        Compute S-polynomial S(f, g)
        Reduce S(f, g) modulo G to get remainder r
        If r != 0: G = G union {r}
Until no new elements added
Reduce G to obtain minimal/reduced Grobner basis
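In practice one rarely hand-rolls Buchberger; for illustration, SymPy's `groebner` routine (which implements Buchberger-style algorithms) computes a reduced basis for a small ideal:

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# Ideal generated by the circle x^2 + y^2 - 1 and the line x - y
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(list(G.exprs))   # reduced lex basis, e.g. [x - y, 2*y**2 - 1]
```

Ideal membership can then be tested by reduction: any element of the ideal leaves remainder 0 modulo the basis.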
Properties
- Existence: every ideal has a Grobner basis (finite by Hilbert's basis theorem)
- Uniqueness: reduced Grobner basis is unique for a given monomial ordering
- Membership testing: f is in ideal I iff f reduces to 0 modulo the Grobner basis
- Complexity: doubly exponential in worst case, but often efficient in practice
Monomial Orderings
- Lexicographic (lex): good for elimination, solving systems
- Degree reverse lexicographic (grevlex): fastest computation
- Elimination ordering: variables eliminated in order
Applications
- Solving polynomial systems: triangulate via lex Grobner basis
- Computational algebraic geometry: ideal membership, dimension, degree
- Robotics: inverse kinematics (polynomial constraints)
- Coding theory: decoding algebraic codes
- Automated theorem proving: geometric theorem proving
Additional Algebraic Techniques
Resultants and GCD
- Polynomial GCD: O(n log^2 n) via half-GCD algorithm
- Resultant: determines if two polynomials share a root; O(n log^2 n)
Modular Composition
Given polynomials f, g, h of degree n, compute f(g(x)) mod h(x).
- Naive: O(n^2)
- Brent-Kung: O(n^{(omega+1)/2}) ~ O(n^1.69) using matrix multiplication
Characteristic Polynomial
- O(n^omega) for general matrices
- O(n^2) for Hessenberg matrices
- Applications: eigenvalue computation, minimal polynomial, matrix functions
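For small dense matrices, numpy exposes this directly: `np.poly` returns the coefficients of det(xI - A), highest degree first:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
coeffs = np.poly(A)   # characteristic polynomial det(xI - A)
# x^2 - 4x + 3 = (x - 1)(x - 3), so the eigenvalues are 1 and 3
```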
Berlekamp-Massey Algorithm
Given a prefix of a sequence satisfying a linear recurrence, find the shortest LFSR (linear feedback shift register) that generates it.
- O(n^2) time, or O(n log^2 n) via fast extended Euclidean algorithm
- Applications: BCH/RS decoding, finding recurrences in combinatorial sequences
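A sketch over the rationals using Fraction (library implementations typically work modulo a prime, but the logic is identical); the returned list c satisfies s[n] = c[0]*s[n-1] + c[1]*s[n-2] + ...:

```python
from fractions import Fraction

def berlekamp_massey(s):
    # Shortest linear recurrence for the sequence s, O(n^2).
    s = [Fraction(v) for v in s]
    ls, cur = [], []          # last failing recurrence, current recurrence
    lf, ld = 0, Fraction(0)   # index and discrepancy of the last failure
    for i in range(len(s)):
        t = sum((cur[j] * s[i - 1 - j] for j in range(len(cur))), Fraction(0))
        if t == s[i]:
            continue          # current recurrence still predicts correctly
        if not cur:
            cur = [Fraction(0)] * (i + 1)
            lf, ld = i, s[i] - t
            continue
        # Patch the recurrence using the last failing one, scaled by k.
        k = (s[i] - t) / ld
        c = [Fraction(0)] * (i - lf - 1) + [k] + [-k * v for v in ls]
        if len(c) < len(cur):
            c += [Fraction(0)] * (len(cur) - len(c))
        for j in range(len(cur)):
            c[j] += cur[j]
        if i - lf + len(ls) >= len(cur):
            ls, lf, ld = cur, i, s[i] - t
        cur = c
    return cur
```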
Summary
Algebraic algorithms exploit mathematical structure for efficiency. FFT and NTT are foundational, enabling O(n log n) polynomial multiplication with applications from signal processing to cryptography. Fast matrix multiplication remains a major open problem (omega = 2 conjecture). CRT and Grobner bases extend algebraic computation to modular and multivariate settings. These techniques underpin much of modern computational mathematics and algorithm design.