
Convex Hulls

Definition and Properties

The convex hull of a point set $S$ is the smallest convex set containing $S$; equivalently, the intersection of all convex sets containing $S$, or the set of all convex combinations $\sum_i \lambda_i p_i$ with $\lambda_i \geq 0$, $\sum_i \lambda_i = 1$.

For finite point sets in $\mathbb{R}^2$, the convex hull is a convex polygon whose vertices are a subset of $S$ (the extreme points). Points on the hull boundary but not at vertices are non-extreme hull points.

Lower bound: computing the convex hull of $n$ points in $\mathbb{R}^2$ requires $\Omega(n \log n)$ time in the algebraic decision tree model (via reduction from sorting). However, if $h$ is the number of hull vertices, output-sensitive algorithms can achieve $O(n \log h)$.

Graham Scan

Algorithm (Graham, 1972): $O(n \log n)$

  1. Find the lowest point $p_0$ (leftmost if tie)
  2. Sort the remaining points by polar angle around $p_0$
  3. Process points in order, maintaining a stack representing the current hull:
    • For each point $p_i$: while the last two stack points and $p_i$ make a non-left turn, pop the stack
    • Push $p_i$
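The steps above can be sketched in Python (a minimal sketch; the name `graham_scan` and the assumption of distinct integer-coordinate points are illustrative, not a reference implementation):

```python
from math import atan2

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left (CCW) turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """Convex hull in CCW order; collinear boundary points are dropped."""
    pts = list(set(points))
    if len(pts) <= 2:
        return pts
    # Step 1: lowest point, leftmost on ties.
    p0 = min(pts, key=lambda p: (p[1], p[0]))
    # Step 2: sort by polar angle around p0, breaking ties by distance.
    def angle_key(p):
        return (atan2(p[1] - p0[1], p[0] - p0[0]),
                (p[0] - p0[0]) ** 2 + (p[1] - p0[1]) ** 2)
    rest = sorted((p for p in pts if p != p0), key=angle_key)
    # Step 3: scan, popping on non-left turns.
    hull = [p0]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull
```

Because the pop condition uses `<= 0`, collinear points are discarded as they are passed, which sidesteps most of the collinearity pitfalls discussed below.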

Correctness: the stack invariant ensures that the sequence of points on the stack always forms the vertices of a convex polygon in counterclockwise order.

Complexity: $O(n \log n)$ for sorting; the scan itself is $O(n)$ (each point is pushed and popped at most once, amortized).

Degeneracy handling: collinear points during the angle sort must be handled carefully. Points at the same angle should be sorted by increasing distance from $p_0$, with the final angular group reversed (farthest first) so that the scan closes the hull consistently.

Andrew's Monotone Chain

Algorithm (Andrew, 1979): $O(n \log n)$

  1. Sort points lexicographically (by $x$, breaking ties by $y$)
  2. Build upper hull: scan left to right, maintaining convexity (clockwise turns)
  3. Build lower hull: scan left to right, maintaining convexity (counterclockwise turns)
  4. Concatenate upper and lower hulls

Advantages over Graham scan:

  • No angular sorting: only lexicographic comparisons and cross products (no atan2 or other trigonometry)
  • Numerically more stable than angle-based sorting
  • Handles collinear points more naturally
  • Simple to implement; the algorithm of choice in competitive programming
A corrected sketch (the `>= 0` / `<= 0` comparisons discard collinear boundary points; `cross(o, a, b)` is the usual 2D orientation test):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 for a counterclockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

sorted_points = sorted(set(points))  # lexicographic: by x, ties by y
upper = []
for p in sorted_points:
    while len(upper) >= 2 and cross(upper[-2], upper[-1], p) >= 0:
        upper.pop()
    upper.append(p)
lower = []
for p in sorted_points:
    while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
        lower.pop()
    lower.append(p)
# The chains share their endpoints; reverse the upper chain so the
# concatenation traverses the hull counterclockwise.
hull = lower[:-1] + upper[::-1][:-1]
```

Jarvis March (Gift Wrapping)

Algorithm (Jarvis, 1973): $O(nh)$, where $h$ is the hull size

  1. Start from the lowest point (guaranteed on hull)
  2. At each hull vertex, find the point making the smallest counterclockwise angle with the current edge direction
  3. Continue until returning to the start

Each step requires $O(n)$ time to find the next hull vertex (a linear scan of all points, selecting the one that is most clockwise relative to the current edge). Total: $h$ steps.

Output-sensitive: $O(nh)$. Optimal when $h = O(1)$ or $h = O(\log n)$, but poor when $h = \Theta(n)$. Jarvis march is to selection sort as Graham scan is to comparison sorting.
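A sketch of the march under the same orientation-test convention (the names `jarvis_march` and `dist2` are illustrative; assumes at least two distinct points):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 when b is left of ray o -> a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def jarvis_march(points):
    """Gift wrapping; returns the hull CCW starting from the lowest point."""
    pts = list(set(points))
    start = min(pts, key=lambda p: (p[1], p[0]))  # lowest, leftmost on ties
    hull = [start]
    while True:
        p = hull[-1]
        q = pts[0] if pts[0] != p else pts[1]  # arbitrary initial candidate
        for r in pts:
            if r == p:
                continue
            c = cross(p, q, r)
            # Wrapping counterclockwise: the winner q has no point to its
            # right; on ties (collinear), prefer the farther point.
            if c < 0 or (c == 0 and dist2(p, r) > dist2(p, q)):
                q = r
        if q == start:
            break
        hull.append(q)
    return hull
```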

Quickhull

Algorithm (Barber, Dobkin, Huhdanpaa, 1996): divide-and-conquer, expected $O(n \log n)$

  1. Find leftmost and rightmost points; they divide the set into "above" and "below" subsets
  2. For each subset, find the point farthest from the dividing line (this point must be on the hull)
  3. The farthest point divides the subset into two smaller subsets; recurse

Analysis: worst case $O(n^2)$ (as with quicksort); expected $O(n \log n)$ for many input distributions. In practice it is often the fastest algorithm, thanks to excellent cache behavior and early elimination of interior points.
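In 2D the recursion can be sketched as follows (illustrative names; returns the hull clockwise from the leftmost point and assumes exact, e.g. integer, arithmetic):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 when b is left of segment o -> a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def quickhull(points):
    """2D Quickhull sketch; hull vertices clockwise from the leftmost point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    a, b = pts[0], pts[-1]  # leftmost and rightmost (lexicographic)

    def left_of(p, q, group):
        # Points strictly left of the directed segment p -> q.
        return [r for r in group if cross(p, q, r) > 0]

    def extend(p, q, group):
        # group lies left of p -> q; take the farthest point and recurse on
        # the two outer regions (points inside triangle p-far-q drop out).
        if not group:
            return []
        far = max(group, key=lambda r: cross(p, q, r))
        return (extend(p, far, left_of(p, far, group)) + [far]
                + extend(far, q, left_of(far, q, group)))

    return ([a] + extend(a, b, left_of(a, b, pts)) + [b]
            + extend(b, a, left_of(b, a, pts)))
```

The `extend` step is where interior points are eliminated early: anything inside the triangle formed by the segment and its farthest point never reappears in a recursive call.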

Qhull library: the standard implementation, handles arbitrary dimensions. Uses Quickhull with automatic convexity verification and facet merging for numerical robustness.

Chan's Algorithm

Algorithm (Chan, 1996): $O(n \log h)$, which is optimal among output-sensitive algorithms

Combines Jarvis march with a grouping trick:

  1. Guess $h' = 2^{2^t}$ for $t = 1, 2, \ldots$ (repeated squaring)
  2. Partition points into groups of size $h'$
  3. Compute the convex hull of each group using Graham scan: $O(h' \log h')$ per group, $O(n \log h')$ total
  4. Run Jarvis march over the group hulls: at each step, find the tangent to each group's hull in $O(\log h')$ via binary search, so each Jarvis step costs $O(\frac{n}{h'} \log h')$; at most $h'$ steps
  5. If the Jarvis march completes within $h'$ steps, done. Otherwise, increase $t$ (squaring $h'$) and restart

Total cost: $O(n \log h)$. The guessing schedule never overshoots $h$ by more than a squaring, and because $\log h' = 2^t$ doubles each round, the per-round costs $O(n \log h')$ form a geometric series dominated by the last round.
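A sketch of the round-by-round accounting behind this bound:

```latex
% Round t uses h' = 2^{2^t}, so \log h' = 2^t and the round costs
% O(n \log h') = O(n \, 2^t). The algorithm stops at the first t with
% 2^{2^t} \ge h, i.e. t^* = \lceil \log_2 \log_2 h \rceil. Summing:
\sum_{t=1}^{t^*} O\!\left(n \, 2^t\right)
  = O\!\left(n \, 2^{t^*}\right)
  = O\!\left(n \cdot 2 \log_2 h\right)
  = O(n \log h).
```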

3D Convex Hull

Output: a convex polytope with vertices, edges, and faces. By Euler's formula, a polytope with $h$ vertices has $O(h)$ edges and faces.
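A quick derivation of the linear bound, assuming a simplicial (all-triangle) boundary, where every face has three edges and every edge borders two faces:

```latex
V - E + F = 2, \qquad 2E = 3F
\;\Longrightarrow\; F = 2V - 4, \quad E = 3V - 6.
```

Non-triangular faces only reduce $E$ and $F$ further, so $O(h)$ holds in general.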

Algorithms:

  • Incremental: add points one by one, maintaining the hull. For each new point, find the visible facets (orientation tests in 3D), remove them, and create new facets. Expected $O(n \log n)$ for random insertion order; worst case $O(n^2)$
  • Divide and conquer: $O(n \log n)$. Split by median, recursively compute hulls, merge. The merge step finds the "bridge" edges and wraps around
  • Randomized incremental: $O(n \log n)$ expected. Conflict lists maintain which uninserted points can "see" each facet (the Clarkson-Shor framework)
  • Chan's algorithm extends: $O(n \log h)$ in 3D as well

The beneath-beyond algorithm: for each new point, test the visibility of each facet. A facet $(a, b, c)$ is visible from $p$ if $\text{orient3d}(a, b, c, p) < 0$ (assuming outward-facing normals). Delete the visible facets and fill the horizon with new triangles.
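The predicate is a signed $3 \times 3$ determinant. A floating-point sketch follows; a robust implementation would use exact or adaptive arithmetic (e.g. Shewchuk-style predicates), and which sign means "visible" depends on the facet's vertex-ordering convention:

```python
def orient3d(a, b, c, p):
    """Sign of the 3x3 determinant with rows b-a, c-a, p-a.
    Positive when p lies on the side of plane (a, b, c) that the
    right-hand-rule normal (b-a) x (c-a) points toward."""
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    wx, wy, wz = p[0] - a[0], p[1] - a[1], p[2] - a[2]
    det = (ux * (vy * wz - vz * wy)
           - uy * (vx * wz - vz * wx)
           + uz * (vx * wy - vy * wx))
    return (det > 0) - (det < 0)
```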

CGAL, Qhull: standard implementations handle degeneracies via exact arithmetic or symbolic perturbation.

Higher-Dimensional Convex Hulls

In $\mathbb{R}^d$, the complexity of the convex hull of $n$ points can be $\Theta(n^{\lfloor d/2 \rfloor})$ (the upper bound theorem). Algorithms:

  • Beneath-beyond: $O(n^{\lfloor d/2 \rfloor})$, optimal in even dimensions
  • Gift wrapping: generalizes, but runs in $O(n \cdot f)$ where $f$ is the number of facets
  • Reverse search (Avis-Fukuda): polynomial delay per facet, low memory

The connection to Delaunay triangulation persists in every dimension: the Delaunay triangulation in $\mathbb{R}^d$ is the projection of the lower convex hull in $\mathbb{R}^{d+1}$ of the lifted points $(p, \|p\|^2)$, so higher-dimensional hull algorithms directly yield Delaunay triangulations.

Dynamic Convex Hull

Maintain the convex hull under insertions and/or deletions of points.

Semi-online (insertions only):

  • $O(\log n)$ worst-case per insertion using a balanced search tree over the hull (Preparata, 1979)
  • Store upper and lower hulls in separate trees, supporting tangent queries

Fully dynamic (insertions and deletions):

  • $O(\log^2 n)$ per operation (Overmars-van Leeuwen, 1981): maintain the hull in two balanced trees (upper and lower). Deletion requires finding the replacement edges
  • $O(\log^{1+\varepsilon} n)$ amortized per operation (Chan, 2001) via a more sophisticated approach; later improved to $O(\log n)$ amortized (Brodal-Jacob, 2002)

Kinetic convex hull: points move along known trajectories. Maintain a kinetic data structure (KDS) with certificates that detect when the hull topology changes. Processing events maintains the hull over time.

Applications

  • Collision detection: two convex objects intersect iff their Minkowski difference contains the origin; the GJK algorithm searches the difference for the origin
  • Linear programming in 2D: the feasible region is a convex polygon; optimize via hull computation or Seidel's randomized expected-$O(n)$ algorithm
  • Smallest enclosing circle/rectangle: related to the convex hull; optimal solutions touch hull vertices
  • Diameter and width: the diameter (farthest pair) can be computed in $O(n)$ from the hull using rotating calipers (Shamos, 1978); similarly the width (minimum distance between parallel supporting lines)
  • Shape analysis: convexity defects (deviations from hull) characterize shape features

Rotating Calipers

A powerful technique on convex hulls. Maintain two parallel supporting lines (the "calipers") and rotate them around the hull:

  • Diameter: $O(n)$; the farthest pair is always an antipodal pair
  • Width: $O(n)$; the minimum over all directions
  • Minimum enclosing rectangle: $O(n)$; one side is flush with a hull edge
  • Maximum distance between two convex polygons: $O(n + m)$
  • Minkowski sum of two convex polygons: $O(n + m)$
  • Onion peeling (related, though not calipers-based): repeatedly compute and remove the hull. Total $O(n^2)$; the expected number of layers is $\Theta(n^{2/3})$ for uniformly random point sets
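The diameter case can be sketched as follows (illustrative names; the hull must be given in counterclockwise order, and the squared distance is returned to keep integer arithmetic exact):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); proportional to the distance of b
    # from the line through o and a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def diameter_sq(hull):
    """Squared diameter of a convex polygon (CCW vertex list) by walking
    antipodal pairs with one rotating caliper."""
    n = len(hull)
    if n <= 1:
        return 0
    if n == 2:
        return dist2(hull[0], hull[1])
    best, j = 0, 1
    for i in range(n):
        ni = (i + 1) % n
        # Advance the opposite caliper while the distance from edge (i, ni)
        # keeps increasing; the stopping vertex is antipodal to that edge.
        while cross(hull[i], hull[ni], hull[(j + 1) % n]) > cross(hull[i], hull[ni], hull[j]):
            j = (j + 1) % n
        best = max(best, dist2(hull[i], hull[j]), dist2(hull[ni], hull[j]))
    return best
```

Since `j` only ever advances around the polygon, the two nested loops together perform $O(n)$ work.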

Lower Bound and Reductions

Sorting reduces to convex hull: given numbers $a_1, \ldots, a_n$, map them to the points $(a_i, a_i^2)$ on a parabola. Reading the hull vertices in order yields the sorted sequence. Hence the $\Omega(n \log n)$ lower bound in the algebraic decision tree model.
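The reduction can be demonstrated directly: lift the numbers, compute the hull with gift wrapping (which performs no comparison sort), and read the vertices off in order. Names are illustrative; assumes at least two distinct values:

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 when b is left of ray o -> a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_ccw(pts):
    # Gift wrapping; safe here because no three parabola points are collinear.
    start = min(pts, key=lambda p: (p[1], p[0]))
    hull, p = [start], start
    while True:
        q = next(r for r in pts if r != p)
        for r in pts:
            if r != p and cross(p, q, r) < 0:
                q = r
        if q == start:
            break
        hull.append(q)
        p = q
    return hull

def sort_via_hull(nums):
    """Sort distinct numbers by lifting onto y = x^2 and walking the hull."""
    pts = [(a, a * a) for a in set(nums)]
    hull = hull_ccw(pts)       # CCW cycle; every lifted point is extreme
    k = hull.index(min(hull))  # rotate so the leftmost point comes first
    return [x for x, _ in hull[k:] + hull[:k]]
```

Of course, wrapping here costs $O(n^2)$; the point is the reduction itself, not speed: any hull algorithm run on the lifted points must implicitly sort.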

Convex hull reduces to sorting (in 2D): the $O(n \log n)$ algorithms essentially sort by angle or coordinate. This tight connection makes the hull problem $\Theta(n \log n)$ in the comparison model, but $O(n \log h)$ output-sensitive algorithms bypass it by exploiting that $h$ may be much smaller than $n$.

For integer coordinates bounded by $U$, the sorting step can use integer sorting, achieving $O(n \log \log U)$ or even $O(n \sqrt{\log \log n})$ with advanced integer sorting.