
Linear Programming

Standard Form

Every linear program (LP) can be written in standard form:

minimize    c^T x
subject to  Ax = b
            x >= 0

where c in R^n is the cost vector, A in R^{m x n} is the constraint matrix, and b in R^m is the right-hand side. Any LP with inequalities or free variables converts to this form through slack variables and variable splitting.

Canonical form (inequality form):

minimize    c^T x
subject to  Ax <= b

A basic feasible solution (BFS) corresponds to selecting m linearly independent columns of A (a basis B), setting x_B = B^{-1} b >= 0 and x_N = 0. The fundamental theorem of LP states that if an optimal solution exists, one exists at a BFS.
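As a concrete illustration, here is a small hypothetical inequality-form LP converted to standard form by appending slack variables, with one BFS read off by solving B x_B = b (numpy assumed):

```python
import numpy as np

# Hypothetical 2-variable LP: min -x1 - 2*x2  s.t.  x1 + x2 <= 4,  x2 <= 2,  x >= 0.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [0.0, 1.0]])
b = np.array([4.0, 2.0])

# Standard form: append one slack per inequality so that A x + s = b, s >= 0.
m, n = A_ub.shape
A = np.hstack([A_ub, np.eye(m)])          # [A_ub | I], shape (2, 4)
c_std = np.concatenate([c, np.zeros(m)])  # slacks carry zero cost

# A basic feasible solution: pick basis columns {0, 1} (the original variables).
B = A[:, [0, 1]]
x_B = np.linalg.solve(B, b)               # x_B = B^{-1} b = (2, 2) >= 0
print(x_B)                                # BFS x = (2, 2, 0, 0), objective -6
```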

The Simplex Method

The simplex method moves along vertices (BFS) of the feasible polyhedron, improving the objective at each step.

Algorithm Outline

  1. Start at a BFS with basis B and nonbasic set N.
  2. Compute reduced costs: c_bar_j = c_j - c_B^T B^{-1} A_j for j in N.
  3. If all c_bar_j >= 0, the current BFS is optimal. Stop.
  4. Select entering variable: pick j with c_bar_j < 0 (pricing).
  5. Compute direction: d = B^{-1} A_j.
  6. Minimum ratio test: theta = min { (B^{-1} b)_i / d_i : d_i > 0 }. If no d_i > 0, the LP is unbounded; otherwise the index attaining the minimum leaves the basis.
  7. Pivot: update basis, return to step 2.
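The outline above can be sketched as a revised simplex in Python (numpy assumed; the small LP at the end is hypothetical, and smallest-index tie-breaking is used throughout):

```python
import numpy as np

def simplex(c, A, b, basis):
    """Revised simplex for min c^T x s.t. Ax = b, x >= 0 (steps 1-7 above).
    `basis` lists m column indices of an initial BFS; smallest-index (Bland)
    choices are used for entering and leaving to rule out cycling."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B_inv = np.linalg.inv(A[:, basis])
        x_B = B_inv @ b                            # current basic values
        y = c[basis] @ B_inv                       # multipliers c_B^T B^{-1}
        c_bar = c - y @ A                          # reduced costs
        entering = next((j for j in range(n)
                         if j not in basis and c_bar[j] < -1e-10), None)
        if entering is None:                       # all c_bar >= 0: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x
        d = B_inv @ A[:, entering]                 # step 5: direction
        ratios = [(x_B[i] / d[i], basis[i], i)
                  for i in range(m) if d[i] > 1e-10]
        if not ratios:                             # no d_i > 0: unbounded
            raise ValueError("LP is unbounded")
        _, _, leave = min(ratios)                  # min ratio, smallest index on ties
        basis[leave] = entering                    # step 7: pivot

# Hypothetical standard-form LP (the two slack columns give a starting basis):
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
x = simplex(c, A, b, basis=[2, 3])
print(x)                                           # optimal x = (2, 2, 0, 0)
```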

Pivoting Rules and Degeneracy

A BFS is degenerate if some basic variable equals zero. Degeneracy can cause cycling, where the simplex method revisits the same basis.

Bland's rule: Always select the entering variable with the smallest index among those with negative reduced cost, and the leaving variable with the smallest index among ties in the ratio test. This guarantees finite termination.

Steepest edge and Devex pricing are practical alternatives that reduce iteration counts.

Phase I / Phase II

When no initial BFS is known, introduce artificial variables and solve a Phase I problem to find a feasible basis, then proceed with Phase II on the original objective.
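A minimal Phase I construction, assuming a hypothetical standard-form LP whose surplus (>=) rows leave no obvious slack basis:

```python
import numpy as np

# Hypothetical standard-form data where columns 2-3 are surplus variables,
# so no identity submatrix is available for a starting basis.
A = np.array([[1.0, 1.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])
b = np.array([4.0, 2.0])   # b >= 0 (negate any row with b_i < 0 first)
m, n = A.shape

# Phase I: append one artificial variable per row and minimize their sum.
A1 = np.hstack([A, np.eye(m)])
c1 = np.concatenate([np.zeros(n), np.ones(m)])
basis = list(range(n, n + m))              # artificials give an immediate BFS
z0 = np.concatenate([np.zeros(n), b])      # starting point: artificials = b
assert np.allclose(A1 @ z0, b) and np.all(z0 >= 0)
# Driving c1^T z to 0 proves feasibility; the final basis then seeds Phase II.
```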

Complexity

The simplex method has exponential worst-case complexity (Klee-Minty cubes) but is extremely fast in practice, typically requiring between m and 3m pivots.

Duality

Every LP (the primal) has an associated dual LP:

Primal:  min c^T x   s.t. Ax = b, x >= 0
Dual:    max b^T y   s.t. A^T y <= c

Weak Duality

For any primal feasible x and dual feasible y: c^T x >= b^T y. The dual objective lower-bounds the primal.
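Weak duality can be checked numerically on a small hypothetical standard-form LP (numpy only; the feasible points are picked by hand):

```python
import numpy as np

# Hypothetical standard-form LP: min c^T x s.t. Ax = b, x >= 0.
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])

# Any primal feasible x and dual feasible y bracket the optimal value.
x_feas = np.array([1.0, 1.0, 2.0, 1.0])    # Ax = b, x >= 0
y_feas = np.array([-2.0, 0.0])             # A^T y <= c componentwise
assert np.allclose(A @ x_feas, b) and np.all(A.T @ y_feas <= c)
print(c @ x_feas, b @ y_feas)              # -3.0 >= -8.0 (weak duality)

# At the optimum the bound is tight (strong duality):
x_opt = np.array([2.0, 2.0, 0.0, 0.0])
y_opt = np.array([-1.0, -1.0])
print(c @ x_opt, b @ y_opt)                # both equal -6.0
```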

Strong Duality

If the primal has an optimal solution, so does the dual, and their optimal values are equal: c^T x* = b^T y*. This follows from the LP strong duality theorem (no constraint qualification needed beyond feasibility).

Complementary Slackness

At optimality, for all j: x_j * (c_j - A_j^T y) = 0. Either a primal variable is zero or its reduced cost is zero (the dual constraint is tight).

Economic Interpretation

Dual variables y_i represent shadow prices: the rate of change of the optimal objective with respect to b_i. Reduced costs measure the marginal cost of forcing a nonbasic variable into the basis.
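A quick numeric check of the shadow-price interpretation on a small hypothetical LP: with optimal basis B, y = B^{-T} c_B, and bumping b_1 by one unit moves the optimal objective at exactly the rate y_1 (assuming the basis stays optimal):

```python
import numpy as np

# Hypothetical LP in standard form with known optimal basis {x1, x2}.
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
B = A[:, [0, 1]]
c_B = c[[0, 1]]

# Shadow prices: y = B^{-T} c_B (the simplex multipliers at optimality).
y = np.linalg.solve(B.T, c_B)              # (-1, -1)

# Increase b_1 by one unit; while the basis stays optimal, the objective
# changes by y_1 per unit.
obj_old = c_B @ np.linalg.solve(B, b)
obj_new = c_B @ np.linalg.solve(B, b + np.array([1.0, 0.0]))
print(obj_new - obj_old, y[0])             # -1.0 and -1.0
```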

Sensitivity Analysis

Sensitivity analysis studies how the optimal solution changes with perturbations to c, b, or A.

  • Ranging on b: The current basis remains optimal for b + delta * e_i as long as B^{-1}(b + delta * e_i) >= 0; solving these componentwise inequalities gives the allowable interval for delta.
  • Ranging on c: For nonbasic c_j, the basis stays optimal while c_bar_j >= 0. For basic c_j, re-derive reduced costs.
  • Adding a variable or constraint: Check if the new variable prices out (reduced cost >= 0) or if the new constraint is satisfied by the current solution.

Sensitivity information is a byproduct of the optimal simplex tableau and is available at essentially no extra computational cost.
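For instance, ranging on b_1 amounts to intersecting the inequalities x_B + delta * B^{-1} e_1 >= 0; a sketch on a small hypothetical LP whose optimal basis is columns {0, 1}:

```python
import numpy as np

# Hypothetical LP data with optimal basis columns {0, 1}.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
B_inv = np.linalg.inv(A[:, [0, 1]])
x_B = B_inv @ b                            # current basic values (2, 2)
w = B_inv @ np.array([1.0, 0.0])           # w = B^{-1} e_1

# Basis stays optimal while x_B + delta * w >= 0 componentwise.
lo = max((-x_B[i] / w[i] for i in range(len(w)) if w[i] > 1e-12), default=-np.inf)
hi = min((-x_B[i] / w[i] for i in range(len(w)) if w[i] < -1e-12), default=np.inf)
print(lo, hi)                              # delta in [-2.0, inf)
```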

Interior Point Methods

Interior point methods (IPMs) approach the optimum through the interior of the feasible region rather than along edges.

Barrier Method

Replace x >= 0 with a logarithmic barrier:

minimize  c^T x - mu * sum_j ln(x_j)
subject to  Ax = b

As mu -> 0, the minimizers of the barrier problem converge to the LP optimum; the path they trace as mu varies is the central path.
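A bare-bones central-path sketch on a small hypothetical LP: for each mu, an equality-constrained damped Newton method minimizes the barrier objective from a strictly feasible start (none of the safeguards of a production barrier solver):

```python
import numpy as np

def barrier_solve(c, A, b, x, mus=(1.0, 0.1, 1e-2, 1e-3, 1e-4), newton_iters=20):
    """For each mu, minimize c^T x - mu * sum(ln x_j) s.t. Ax = b by damped
    Newton, starting strictly feasible. A sketch, not a production solver."""
    m, n = A.shape
    for mu in mus:
        f = lambda z: c @ z - mu * np.log(z).sum()
        for _ in range(newton_iters):
            grad = c - mu / x
            H = np.diag(mu / x**2)
            # KKT system: the Newton step stays in the affine set Ax = b.
            K = np.block([[H, A.T], [A, np.zeros((m, m))]])
            dx = np.linalg.solve(K, np.concatenate([-grad, np.zeros(m)]))[:n]
            t = 1.0   # backtrack to stay positive and to decrease f (Armijo)
            while np.any(x + t * dx <= 0) or f(x + t * dx) > f(x) + 1e-4 * t * (grad @ dx):
                t *= 0.5
                if t < 1e-12:
                    break
            x = x + t * dx
    return x

# Hypothetical standard-form LP; its optimum is the vertex (2, 2, 0, 0).
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
x0 = np.array([1.0, 1.0, 2.0, 1.0])        # strictly feasible: A x0 = b, x0 > 0
x_bar = barrier_solve(c, A, b, x0)
print(np.round(x_bar, 3))                  # approaches the vertex as mu -> 0
```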

Primal-Dual IPM

The most practical variant solves the KKT system of the barrier problem:

Ax = b
A^T y + s = c
X S e = mu e
x, s > 0

where X = diag(x), S = diag(s). A Newton step on this system gives the search direction. The algorithm reduces mu at each iteration.

Complexity and Practice

IPMs solve LPs in O(sqrt(n) * L) iterations (polynomial), where L is the input bit length. Each iteration solves an m x m linear system (the normal equations). For large sparse LPs, IPMs often outperform simplex; for sequences of related LPs (warm-starting), simplex has the advantage.

Applications

| Domain | Example |
|--------|---------|
| Supply chain | Transportation and assignment problems |
| Finance | Portfolio optimization with linear costs |
| Telecommunications | Network flow and routing |
| Agriculture | Diet problem (nutrient optimization) |
| Manufacturing | Production planning, resource allocation |

Key Complexity Results

  • LP is in P (Khachiyan 1979, ellipsoid method; Karmarkar 1984, interior point).
  • Simplex is not polynomial worst-case but dominates in practice.
  • Strongly polynomial LP remains an open problem (Smale's 9th problem).

Connections

  • LP relaxations underpin integer programming (→ 02-integer-programming.md).
  • LP duality is a special case of Lagrangian duality (→ 04-duality-and-kkt.md).
  • LP is a special case of conic programming (→ 05-conic-programming.md).
  • Solvers and modeling tools (→ 09-optimization-solvers.md).