
Quantum Error Correction

Quantum Noise

Unlike classical bits, qubits are susceptible to continuous errors and decoherence. Quantum noise is modeled using quantum channels (CPTP maps).

Bit Flip Channel

With probability p, applies X (bit flip): E(rho) = (1-p) rho + p X rho X. Analogous to the classical binary symmetric channel.
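The channel above can be sketched directly in NumPy; the function name `bit_flip` is illustrative, not a library API:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def bit_flip(rho, p):
    """Apply the bit-flip channel E(rho) = (1-p) rho + p X rho X."""
    return (1 - p) * rho + p * (X @ rho @ X)

# |0><0| under a 10% bit-flip channel: population leaks into |1><1|.
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
out = bit_flip(rho0, 0.1)
print(out.real)  # diag(0.9, 0.1)
```

Note that the channel is trace-preserving: the populations redistribute but still sum to 1.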

Phase Flip Channel

With probability p, applies Z (phase flip): E(rho) = (1-p) rho + p Z rho Z. No classical analogue -- it randomizes relative phase.

Depolarizing Channel

With probability p, replaces the state with the maximally mixed state: E(rho) = (1-p) rho + (p/3)(X rho X + Y rho Y + Z rho Z)

Equivalently: E(rho) = (1 - 4p/3) rho + (4p/3)(I/2). This is the most commonly used noise model for benchmarking QEC.
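The equivalence of the two forms follows from the identity X rho X + Y rho Y + Z rho Z = 2I - rho (for trace-1 rho), and is easy to confirm numerically; this sketch checks it on a random density matrix:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """Pauli-twirl form: (1-p) rho + (p/3)(X rho X + Y rho Y + Z rho Z)."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

def depolarize_mixed(rho, p):
    """Equivalent form: shrink toward the maximally mixed state I/2."""
    return (1 - 4 * p / 3) * rho + (4 * p / 3) * (I2 / 2)

# Random density matrix: positive semidefinite, trace 1.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

assert np.allclose(depolarize(rho, 0.3), depolarize_mixed(rho, 0.3))
```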

Amplitude Damping

Models energy relaxation (T1 decay): qubit decays from |1> to |0> with probability gamma. Kraus operators: A_0 = [[1,0],[0,sqrt(1-gamma)]], A_1 = [[0,sqrt(gamma)],[0,0]]. This is non-unital: the fixed point is |0>, not I/2.
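A quick numerical check of the Kraus operators above: they satisfy the completeness relation A_0^dagger A_0 + A_1^dagger A_1 = I, the ground state |0><0| is fixed, and |1><1| decays toward it. (The helper `amp_damp` is illustrative, not a library API.)

```python
import numpy as np

gamma = 0.25  # decay probability per application of the channel
A0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
A1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

def amp_damp(rho):
    """Amplitude-damping channel via its Kraus decomposition."""
    return A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T

# Kraus completeness: A0^† A0 + A1^† A1 = I  (trace preservation)
assert np.allclose(A0.conj().T @ A0 + A1.conj().T @ A1, np.eye(2))

ket0 = np.diag([1.0, 0.0])
ket1 = np.diag([0.0, 1.0])
assert np.allclose(amp_damp(ket0), ket0)  # |0><0| is the fixed point
print(amp_damp(ket1))                     # diag(gamma, 1 - gamma)
```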

Phase Damping (Dephasing)

Models T2 decay (loss of coherence without energy loss): off-diagonal elements of rho decay at rate lambda. The qubit loses phase information while preserving populations.

Principles of Quantum Error Correction

Key challenges distinguishing QEC from classical error correction:

  • No-cloning theorem prevents simple repetition
  • Errors are continuous (arbitrary rotations), not just bit flips
  • Measurement disturbs quantum states
  • Must correct both bit flips (X errors) and phase flips (Z errors)

Fundamental insight: Discretization of errors. If a code can correct the discrete Pauli errors {I, X, Y, Z} on each qubit, it can correct any single-qubit error (arbitrary channel). This follows from linearity of quantum mechanics.

Knill-Laflamme conditions: A code with projector P corrects error set {E_a} iff P E_a^dagger E_b P = alpha_{ab} P for all a, b. The matrix alpha_{ab} must be Hermitian; the errors are either detected or have identical effect on all codewords.
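As a small worked instance, the conditions can be verified for the 3-qubit bit-flip code (span of |000> and |111>) against the error set {I, X1, X2, X3}; here alpha_{ab} turns out to be the identity matrix. This is a sketch in NumPy, not a general-purpose checker:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])

def kron(*ops):
    return reduce(np.kron, ops)

# Projector onto the 3-qubit bit-flip code space span{|000>, |111>}
ket000 = np.zeros(8); ket000[0] = 1
ket111 = np.zeros(8); ket111[7] = 1
P = np.outer(ket000, ket000) + np.outer(ket111, ket111)

# Correctable error set: identity and X on each single qubit
errors = [kron(I2, I2, I2), kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

# Knill-Laflamme: P E_a^† E_b P = alpha_{ab} P for all pairs (a, b).
# Here all operators are real, so the dagger is just a transpose.
for a, Ea in enumerate(errors):
    for b, Eb in enumerate(errors):
        alpha = 1.0 if a == b else 0.0
        assert np.allclose(P @ Ea.T @ Eb @ P, alpha * P)
print("Knill-Laflamme conditions hold for single X errors")
```

Off-diagonal terms vanish because any product E_a^dagger E_b with a != b maps codewords out of the code space, i.e. those errors are perfectly distinguishable.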

QEC Codes

Shor Code [[9,1,3]]

The first quantum error-correcting code. Encodes 1 logical qubit into 9 physical qubits. Corrects any single-qubit error.

Encoding:

  • |0_L> = (|000> + |111>)^{tensor 3} / (2*sqrt(2))
  • |1_L> = (|000> - |111>)^{tensor 3} / (2*sqrt(2))

Structure: Three copies of a 3-qubit phase-flip code, each using a 3-qubit bit-flip code. Demonstrates the concatenation principle.

Steane Code [[7,1,3]]

A CSS code encoding 1 logical qubit into 7 physical qubits. Based on the classical [7,4,3] Hamming code. Corrects any single-qubit error with more efficient encoding than Shor's code. Admits transversal implementation of all Clifford gates.

CSS Codes (Calderbank-Shor-Steane)

Constructed from two classical linear codes C1, C2 with C2 subset of C1. The code corrects X errors using C1 and Z errors using C2^perp, giving an [[n, k1 - k2, d]] code with d >= min(d1, d2), where d1, d2 are the distances of C1 and C2^perp.

CSS structure enables transversal CNOT gates and simplifies syndrome extraction.
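The Steane code is the canonical example of this construction: take C1 to be the [7,4,3] Hamming code and C2 = C1^perp. The containment C2 subset of C1 is equivalent to H H^T = 0 over GF(2), which a few lines of NumPy can confirm:

```python
import numpy as np

# Parity-check matrix of the classical [7,4,3] Hamming code
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

# C1^perp is the row space of H, so C1^perp ⊆ C1 holds iff every row
# of H passes all parity checks, i.e. H @ H^T = 0 over GF(2).
assert np.all((H @ H.T) % 2 == 0)

# CSS parameters: C1 = Hamming (k1 = 4), C2 = C1^perp (k2 = 3)
n, k1, k2 = 7, 4, 3
print(f"[[{n},{k1 - k2},3]] code")  # the Steane code [[7,1,3]]
```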

Stabilizer Codes

The stabilizer formalism (Gottesman) provides the dominant framework for QEC.

A stabilizer code is defined by an abelian subgroup S of the n-qubit Pauli group with -I not in S. The code space is the simultaneous +1 eigenspace of all elements in S: C = {|psi> : g|psi> = |psi> for all g in S}.

An [[n,k,d]] stabilizer code uses n physical qubits to encode k logical qubits with distance d (corrects floor((d-1)/2) errors). S has n-k independent generators.

Syndrome measurement: Measuring each generator g_i yields +1 (no error detected) or -1 (error detected). The syndrome vector identifies the error without disturbing the encoded information.
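For the 3-qubit bit-flip code (stabilizer generators Z1 Z2 and Z2 Z3), the syndrome reduces to binary linear algebra: a generator fires exactly when its Z-support overlaps the error's X-support an odd number of times. A minimal sketch (the `syndrome` helper is illustrative):

```python
import numpy as np

# Z-supports of the stabilizer generators of the 3-qubit bit-flip code
gens = np.array([[1, 1, 0],   # Z Z I
                 [0, 1, 1]])  # I Z Z

def syndrome(x_error):
    """Syndrome bits: parity of overlap between generator and X error."""
    return gens @ np.array(x_error) % 2

print(syndrome([1, 0, 0]))  # X on qubit 1 -> [1 0]
print(syndrome([0, 1, 0]))  # X on qubit 2 -> [1 1]
print(syndrome([0, 0, 1]))  # X on qubit 3 -> [0 1]
```

Each single X error produces a distinct nonzero syndrome, so the decoder can locate and undo it without ever learning the encoded amplitudes.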

Logical operators: Pauli operators that commute with all stabilizers but are not in S. They act on the logical qubits.

Surface Code

The leading candidate for scalable fault-tolerant QC. Defined on a 2D lattice with qubits on edges.

  • Stabilizers: X-type (star/vertex operators) and Z-type (plaquette/face operators)
  • Distance: d x d lattice gives distance d code, encoding 1 logical qubit in ~2d^2 physical qubits
  • Threshold: ~1% per-gate error rate (among the highest of any known code)
  • Error correction: Syndrome decoding via minimum weight perfect matching (MWPM) on a decoding graph
  • Logical gates: Lattice surgery for CNOT, magic state injection for T gates

Advantages: local stabilizer checks (only nearest-neighbor interactions), high threshold, well-suited to superconducting architectures. Disadvantage: large overhead (~1000-10000 physical qubits per logical qubit at practical noise rates).

Color Codes

Defined on trivalent, 3-colorable lattices. Encode k logical qubits with properties:

  • Transversal implementation of all Clifford gates (advantage over surface codes)
  • Support for lattice surgery and code deformation
  • Slightly lower threshold than surface codes (~0.1-0.5%)
  • Exist in 2D and 3D variants; 3D color codes enable transversal T gates

Toric Code

Kitaev's toric code: surface code on a torus (periodic boundary conditions). Encodes 2 logical qubits. Historically important as the first topological code. Anyonic excitations (e and m particles) correspond to syndrome violations; logical operators are non-contractible loops on the torus. Introduced the connection between QEC and topological order.

Fault Tolerance

A computation is fault-tolerant if errors do not propagate uncontrollably. Key principles:

  • Transversal gates: Apply gates qubit-by-qubit across code blocks. A single fault affects at most one qubit per block. No code admits a universal transversal gate set (Eastin-Knill theorem).
  • Flag qubits: Detect dangerous error propagation during syndrome extraction using auxiliary flag qubits.
  • Syndrome extraction: Use ancilla qubits and repeated measurements to extract error syndromes without propagating errors into data.

Threshold Theorem

Theorem (Aharonov-Ben-Or, Knill-Laflamme-Zurek): If the physical error rate per gate p is below a threshold p_th > 0, then arbitrarily long quantum computations can be performed with error rate suppressed to any desired level epsilon, using poly(log(1/epsilon)) overhead per logical gate.

The proof uses concatenated codes: recursively encoding qubits into codes. At level k, the effective error rate is ~ (p/p_th)^{2^k}, doubly exponentially suppressed.
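The double-exponential suppression is easy to see numerically. This sketch uses the conventional form p_k ~ p_th (p/p_th)^{2^k} (the p_th prefactor normalizes level 0 to the physical rate); the specific numbers are illustrative:

```python
# Effective logical error rate after k levels of concatenation,
# assuming p_k ~ p_th * (p / p_th)^(2^k).
p_th = 1e-4
p = 3e-5  # physical error rate, below threshold

for k in range(4):
    p_k = p_th * (p / p_th) ** (2 ** k)
    print(f"level {k}: {p_k:.3e}")
```

With p/p_th = 0.3, each extra level squares the suppression factor, so a handful of levels already reaches astronomically small logical error rates.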

Threshold values depend on the code and noise model:

  • Concatenated codes: p_th ~ 10^{-4} to 10^{-5}
  • Surface codes: p_th ~ 10^{-2}
  • With biased noise or tailored codes: potentially higher

Magic State Distillation

The Clifford+T gate set is universal, but the T gate is not transversal in the surface code and other common 2D codes; more generally, the Eastin-Knill theorem forbids any code from having a fully universal transversal gate set. Solution: magic state distillation.

  1. Prepare noisy copies of the magic state |T> = T|+> = (|0> + e^{i*pi/4}|1>)/sqrt(2)
  2. Apply a distillation protocol (using Clifford gates only) to produce fewer, higher-fidelity magic states
  3. Use magic state injection (gate teleportation) to implement the T gate using one distilled magic state + Clifford operations

15-to-1 protocol: Distill 15 noisy magic states into 1 cleaner one, reducing error from epsilon to ~35*epsilon^3. Iterated, this achieves arbitrary precision with polylogarithmic overhead.
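Iterating the cubic error suppression while tracking the 15x input cost per round makes the overhead trade-off concrete; the starting error rate here is an arbitrary illustration:

```python
# Iterated 15-to-1 distillation: error eps -> ~35 eps^3 per round,
# consuming 15 input states per output state each round.
eps = 1e-2   # illustrative initial magic-state error rate
cost = 1     # noisy input states per final distilled state

for round_ in range(3):
    eps = 35 * eps ** 3
    cost *= 15
    print(f"round {round_ + 1}: error ~{eps:.2e}, inputs per output: {cost}")
```

Because the error is cubed each round while the cost only multiplies by 15, the inputs-per-output count grows polylogarithmically in the target precision.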

Magic state distillation is typically the dominant cost in fault-tolerant quantum computation, motivating research into more efficient protocols and alternative approaches (e.g., non-Clifford transversal gates in exotic codes, code switching).

Quantum LDPC Codes

A frontier area aiming to reduce the overhead of fault-tolerant QC. Quantum LDPC codes have stabilizers of bounded weight and constant rate (k/n -> constant as n -> infinity). Recent breakthroughs include asymptotically good quantum LDPC codes achieving constant rate and linear distance. These could dramatically reduce the physical-to-logical qubit ratio but require non-local connectivity, posing implementation challenges.