
Stochastic Processes

Overview

A stochastic process is a collection of random variables {X(t) : t in T} indexed by time. It models systems where randomness is inherent — arrival patterns, stock prices, component failures, molecular motion.

Key classification axes:

  • State space: discrete (chains) or continuous (diffusion)
  • Time index: discrete (n = 0, 1, 2, ...) or continuous (t >= 0)
  • Memory: Markov (memoryless), semi-Markov, general

Poisson Processes

The Poisson process models random events occurring independently at a constant average rate lambda.

Properties:

  • N(t) ~ Poisson(lambda·t) — count of events in [0, t]
  • Interarrival times ~ Exponential(lambda), independent
  • Memoryless: P(next arrival > t+s | no arrival by t) = P(next arrival > s)
  • Merging: sum of independent Poisson processes is Poisson
  • Thinning: randomly accepting events with probability p gives Poisson(p·lambda)
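The superposition and thinning properties are easy to check empirically. The sketch below (a minimal illustration, not part of any particular library) simulates a homogeneous Poisson process by summing Exponential(lambda) interarrival times, then thins it with acceptance probability p and compares the observed rates against lambda and p·lambda:

```python
import random

def poisson_process(rate, end_time, rng):
    """Event times of a homogeneous Poisson process on [0, end_time]."""
    times = []
    t = 0.0
    while True:
        t += rng.expovariate(rate)  # interarrival ~ Exponential(rate)
        if t > end_time:
            return times
        times.append(t)

def thin(events, p, rng):
    """Keep each event independently with probability p (thinning)."""
    return [t for t in events if rng.random() < p]

rng = random.Random(42)
horizon = 10_000.0
events = poisson_process(rate=2.0, end_time=horizon, rng=rng)
kept = thin(events, p=0.3, rng=rng)
print(len(events) / horizon)  # ~2.0  (the original rate lambda)
print(len(kept) / horizon)    # ~0.6  (the thinned rate p*lambda)
```

The empirical rates converge to lambda and p·lambda as the horizon grows, consistent with the thinning property above.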

Non-Homogeneous Poisson Process (NHPP)

Rate lambda(t) varies with time. Used to model time-varying demand and software reliability growth.

Simulation by thinning:

  1. Find lambda_max >= lambda(t) for all t
  2. Generate events from homogeneous Poisson(lambda_max)
  3. Accept event at time t with probability lambda(t) / lambda_max

// Simulate an NHPP by thinning a homogeneous Poisson(lambda_max) process
PROCEDURE SIMULATE_NHPP(lambda, lambda_max, end_time) → list of event times
    events ← empty list
    t ← 0.0

    LOOP
        // Generate from homogeneous process at rate lambda_max
        u ← RANDOM_UNIFORM(0, 1)
        t ← t + (-LN(u)) / lambda_max
        IF t > end_time THEN BREAK

        // Thin: accept with probability lambda(t) / lambda_max
        IF RANDOM_UNIFORM(0, 1) < lambda(t) / lambda_max THEN
            APPEND t TO events
    RETURN events

Markov Chains

Discrete-Time Markov Chain (DTMC)

State transitions at discrete steps with the Markov property: P(X_{n+1} | X_n, ..., X_0) = P(X_{n+1} | X_n).

Fully specified by:

  • State space S = {s_1, s_2, ...}
  • Transition matrix P where P_ij = P(X_{n+1} = j | X_n = i)
  • Initial distribution pi_0

Stationary distribution pi satisfies pi = pi·P and sum(pi) = 1. For ergodic chains, pi exists, is unique, and the chain converges to it regardless of start state.
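One way to find pi numerically is power iteration: repeatedly multiply any starting distribution by P until it stops changing, which works for ergodic chains. A minimal sketch, using a hypothetical two-state chain chosen only for illustration:

```python
# Hypothetical 2-state chain (e.g. state 0 = "up", state 1 = "down").
P = [[0.9, 0.1],
     [0.5, 0.5]]

def stationary(P, iters=1000):
    """Power iteration: pi <- pi * P until convergence (ergodic chains)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
print(pi)  # ≈ [0.8333, 0.1667], which solves pi = pi·P with sum(pi) = 1
```

Solving pi = pi·P directly for this chain gives pi_0 = 5/6, pi_1 = 1/6, matching the iteration.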

// Simulate a DTMC for n_steps
PROCEDURE SIMULATE_DTMC(transition_matrix, initial_state, n_steps) → path
    path ← [initial_state]
    state ← initial_state

    FOR step ← 1 TO n_steps DO
        row ← transition_matrix[state]
        u ← RANDOM_UNIFORM(0, 1)
        cumsum ← 0.0
        FOR j ← 0 TO LENGTH(row) - 1 DO
            cumsum ← cumsum + row[j]
            IF u < cumsum THEN
                state ← j
                BREAK
        APPEND state TO path
    RETURN path

Classification of States

  • Recurrent: the chain returns to the state with probability 1
  • Transient: positive probability of never returning
  • Absorbing: once entered, never left (P_ii = 1)
  • Periodic: returns only at multiples of some period d > 1
  • Ergodic: aperiodic and positive recurrent (returns to each state in finite expected time)

Continuous-Time Markov Chain (CTMC)

Transitions occur at continuous time points. Holding time in state i is Exponential(q_i) where q_i = -Q_ii is the exit rate from state i.

Generator matrix Q: Q_ij = transition rate from i to j (i != j), Q_ii = -sum_{j!=i} Q_ij.

Steady-state: pi·Q = 0, sum(pi) = 1.

Simulation: alternate between sampling holding time (exponential) and sampling next state (discrete distribution proportional to off-diagonal rates).
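That alternation can be sketched as follows, using a hypothetical two-state repairable-machine model (failure rate 0.1, repair rate 1.0) chosen purely for illustration:

```python
import random

# Hypothetical CTMC: state 0 = up, state 1 = down.
Q = [[-0.1,  0.1],
     [ 1.0, -1.0]]

def simulate_ctmc(Q, state, end_time, rng):
    """Alternate Exp(q_i) holding times with jumps proportional to Q_ij."""
    t, history = 0.0, [(0.0, state)]
    while True:
        exit_rate = -Q[state][state]
        t += rng.expovariate(exit_rate)        # holding time ~ Exp(q_i)
        if t > end_time:
            return history
        u, cum = rng.random() * exit_rate, 0.0
        for j, rate in enumerate(Q[state]):    # next state j != i w.p. Q_ij/q_i
            if j == state:
                continue
            cum += rate
            if u < cum:
                state = j
                break
        history.append((t, state))

def fraction_in_state(history, end_time, target):
    """Fraction of [0, end_time] the path spends in `target`."""
    total = 0.0
    closed = history[1:] + [(end_time, history[-1][1])]
    for (t0, s), (t1, _) in zip(history, closed):
        if s == target:
            total += t1 - t0
    return total / end_time

rng = random.Random(0)
hist = simulate_ctmc(Q, state=0, end_time=10_000.0, rng=rng)
print(fraction_in_state(hist, 10_000.0, 0))  # ≈ 1/1.1 ≈ 0.909
```

The long-run fraction of time in the up state approaches pi_0 = mu/(lambda + mu) = 1/1.1, the solution of pi·Q = 0 for this generator.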

Renewal Theory

A renewal process is a sequence of non-negative i.i.d. interarrival times X_1, X_2, .... The Poisson process is the special case where X_i ~ Exponential.

Key results:

  • Elementary renewal theorem: E[N(t)]/t -> 1/E[X] as t -> infinity (and N(t)/t -> 1/E[X] almost surely)
  • Renewal reward theorem: long-run average reward rate = E[reward per cycle] / E[cycle length]
  • Inspection paradox: a random time point is more likely to fall in a longer inter-renewal interval
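The renewal theorem holds for any interarrival distribution with finite mean, not just the exponential. A quick empirical check (an illustrative sketch with Uniform(0, 2) interarrivals, so E[X] = 1):

```python
import random

def renewal_count(sample_interarrival, horizon, rng):
    """Count renewals N(t) on [0, horizon] for i.i.d. interarrival times."""
    t, n = 0.0, 0
    while True:
        t += sample_interarrival(rng)
        if t > horizon:
            return n
        n += 1

rng = random.Random(1)
horizon = 100_000.0
# Interarrival ~ Uniform(0, 2): E[X] = 1, so N(t)/t should approach 1.
n = renewal_count(lambda r: r.uniform(0.0, 2.0), horizon, rng)
print(n / horizon)  # ≈ 1/E[X] = 1.0
```

The same driver with `r.expovariate(lam)` recovers the Poisson process as the special case noted above.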

Applications: replacement policies (replace on failure vs preventive replacement), warranty analysis.

Birth-Death Processes

A CTMC where transitions only occur between adjacent states: state i can go to i+1 (birth, rate lambda_i) or i-1 (death, rate mu_i).

  λ₀    λ₁    λ₂
0 ──► 1 ──► 2 ──► 3 ...
  ◄──   ◄──   ◄──
  μ₁    μ₂    μ₃

Steady-state (when it exists):

pi_n = pi_0 · (lambda_0 · lambda_1 · ... · lambda_{n-1}) / (mu_1 · mu_2 · ... · mu_n) for n >= 1, with pi_0 fixed by the normalization sum(pi) = 1.

Queueing models as birth-death processes:

  • M/M/1: lambda_i = lambda, mu_i = mu. Stable when rho = lambda/mu < 1.
  • M/M/c: lambda_i = lambda, mu_i = min(i, c)·mu
  • M/M/1/K: finite buffer, lambda_K = 0
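For M/M/1 the product formula collapses to the familiar geometric distribution pi_n = (1 - rho)·rho^n. A sketch that evaluates the general birth-death product on a large truncated state space and recovers that form:

```python
def birth_death_pi(lams, mus, n_states):
    """Steady-state of a birth-death chain via the product formula.
    lams[i] = birth rate out of state i; mus[i] = death rate out of state i+1."""
    w = [1.0]  # unnormalized pi_n / pi_0
    for n in range(1, n_states):
        w.append(w[-1] * lams[n - 1] / mus[n - 1])
    total = sum(w)
    return [x / total for x in w]

# M/M/1 with lambda = 0.5, mu = 1.0 (rho = 0.5), truncated at a large buffer
# so the truncation error is negligible.
N = 60
pi = birth_death_pi([0.5] * N, [1.0] * N, N)
print(pi[0], pi[1])  # ≈ 0.5, 0.25 — i.e. (1 - rho), (1 - rho)·rho
```

Setting `mus[i] = min(i + 1, c) * mu` instead gives the M/M/c case from the list above.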

Brownian Motion

Standard Brownian motion B(t) has:

  • B(0) = 0
  • Independent increments: B(t) - B(s) independent of B(u) for u <= s
  • Normal increments: B(t) - B(s) ~ N(0, t-s)
  • Continuous sample paths (almost surely)
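These properties translate directly into a simulation recipe: accumulate independent N(0, dt) increments. A minimal sketch that also checks Var[B(1)] = 1 across many sampled paths:

```python
import random

def brownian_path(n_steps, dt, rng):
    """Standard Brownian path: cumulative sum of N(0, dt) increments."""
    b, path = 0.0, [0.0]
    for _ in range(n_steps):
        b += rng.gauss(0.0, dt ** 0.5)  # increment ~ N(0, dt)
        path.append(b)
    return path

rng = random.Random(3)
# Sample B(1) many times; the variance of the endpoints should be near t = 1.
endpoints = [brownian_path(100, 0.01, rng)[-1] for _ in range(2000)]
var = sum(x * x for x in endpoints) / len(endpoints)
print(var)  # ≈ 1.0 = Var[B(1)]
```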

Geometric Brownian Motion models stock prices:

dS = mu·S·dt + sigma·S·dB

Solution: S(t) = S(0)·exp((mu - sigma²/2)t + sigma·B(t))
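Because the solution is in closed form, S(t) can be sampled exactly at any horizon without time-stepping: draw B(t) ~ N(0, t) and plug it in. A sketch, with parameter values chosen only for illustration:

```python
import math
import random

def gbm_exact(s0, mu, sigma, t, rng):
    """Exact draw of S(t) from the closed-form GBM solution."""
    b_t = rng.gauss(0.0, math.sqrt(t))  # B(t) ~ N(0, t)
    return s0 * math.exp((mu - 0.5 * sigma ** 2) * t + sigma * b_t)

rng = random.Random(5)
samples = [gbm_exact(100.0, 0.05, 0.2, 1.0, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)
print(mean)  # ≈ 100 · exp(0.05) ≈ 105.13, since E[S(t)] = S(0)·e^(mu·t)
```

This exact sampler is a useful reference when checking the discretization error of the numerical schemes below.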

Stochastic Differential Equations (SDEs)

General form:

dX = a(X, t)dt + b(X, t)dB

where a is the drift and b is the diffusion coefficient.

Euler-Maruyama Method

The simplest numerical scheme for SDEs:

X_{n+1} = X_n + a(X_n, t_n)·dt + b(X_n, t_n)·sqrt(dt)·Z_n

where Z_n ~ N(0, 1) i.i.d.

// Simulate geometric Brownian motion using Euler-Maruyama
PROCEDURE SIMULATE_GBM(s0, mu, sigma, dt, n_steps) → path
    sqrt_dt ← SQRT(dt)
    path ← [s0]
    s ← s0

    FOR step ← 1 TO n_steps DO
        z ← RANDOM_NORMAL(mean ← 0, stddev ← 1)
        s ← s + mu * s * dt + sigma * s * sqrt_dt * z
        APPEND s TO path
    RETURN path

Milstein Method

Higher-order scheme (strong order 1.0 vs 0.5 for Euler-Maruyama) for scalar SDEs:

X_{n+1} = X_n + a·dt + b·sqrt(dt)·Z + 0.5·b·b'·(Z²-1)·dt

where b' = db/dX. The correction term 0.5·b·b'·(Z²-1)·dt comes from retaining the next term of the Ito-Taylor expansion (the iterated stochastic integral of the diffusion), which Euler-Maruyama drops; it is what raises the strong order from 0.5 to 1.0.
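For GBM the diffusion is b(X) = sigma·X, so b' = sigma and the correction term becomes 0.5·sigma²·X·(Z²-1)·dt. A sketch of the Milstein step for this case:

```python
import math
import random

def milstein_gbm(s0, mu, sigma, dt, n_steps, rng):
    """Milstein scheme for GBM: b(x) = sigma*x, so b'(x) = sigma."""
    s, sqrt_dt = s0, math.sqrt(dt)
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        s += (mu * s * dt                              # drift term a·dt
              + sigma * s * sqrt_dt * z                # diffusion b·sqrt(dt)·Z
              + 0.5 * sigma * s * sigma * (z * z - 1.0) * dt)  # Milstein correction
    return s

rng = random.Random(9)
vals = [milstein_gbm(1.0, 0.05, 0.2, 0.01, 100, rng) for _ in range(20_000)]
mean = sum(vals) / len(vals)
print(mean)  # ≈ exp(0.05) ≈ 1.051 = E[S(1)]
```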

Convergence

| Method          | Strong Order | Weak Order |
|-----------------|--------------|------------|
| Euler-Maruyama  | 0.5          | 1.0        |
| Milstein        | 1.0          | 1.0        |
| Stochastic RK   | 1.0-1.5      | 2.0        |

Strong convergence: path-wise accuracy (trajectories close to true solution). Weak convergence: distributional accuracy (moments and expectations match).

Applications

Finance

  • Option pricing: Black-Scholes from GBM, Monte Carlo for exotic options
  • Interest rates: Vasicek, Cox-Ingersoll-Ross (CIR) models
  • Credit risk: default modeled as first passage time
  • Portfolio optimization: stochastic control with SDEs

Queueing Theory

  • M/M/c queues as birth-death processes
  • Jackson networks (open/closed) with product-form solutions
  • Performance modeling of data centers, call centers, hospitals

Reliability Engineering

  • Component lifetimes as renewal processes
  • System availability via Markov models (operational/failed states)
  • Maintenance optimization (age-based vs condition-based policies)
  • Degradation modeling with Wiener processes (Brownian motion with drift)

Biology

  • Gene expression noise modeled by chemical master equation (CTMC)
  • Gillespie algorithm: exact stochastic simulation of chemical reactions
  • Population genetics: Wright-Fisher model, coalescent processes
  • Epidemic models: stochastic SIR as CTMC

Communication Networks

  • Packet arrivals as Poisson or Markov-modulated Poisson
  • Channel fading as Markov chain or Ornstein-Uhlenbeck process
  • Network traffic: long-range dependence modeled by fractional Brownian motion