Concurrent Programming Models
Concurrency is the composition of independently executing processes. Parallelism is the simultaneous execution of computations. This file covers the programming models; low-level implementation details are in the parallel computing topic.
Shared Memory
Threads
The fundamental unit of concurrent execution. Threads within a process share the same address space.
handle ← SPAWN_THREAD(PROCEDURE()
PRINT "Hello from a thread!"
)
JOIN(handle)
Locks (Mutexes)
Mutual exclusion: Only one thread can access the protected data at a time.
counter ← SHARED_MUTEX(0)
handles ← empty list
FOR i ← 1 TO 10
APPEND SPAWN_THREAD(PROCEDURE()
LOCK(counter)
counter.value ← counter.value + 1
UNLOCK(counter)
) TO handles
FOR EACH handle IN handles: JOIN(handle)
PRINT "Result: ", counter.value // 10
Problems: Deadlocks (circular waiting), priority inversion, convoying, lock contention.
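A deadlock needs only two locks acquired in opposite orders by two threads. The sketch below (Python's threading, illustrative names) shows the standard remedy: impose one global acquisition order so a circular wait can never form.

```python
# Classic deadlock scenario: thread 1 takes lock_a then lock_b while
# thread 2 takes lock_b then lock_a -- each can end up holding one lock
# and waiting forever for the other. The fix used here: every thread
# acquires the locks in the same global order (lock_a before lock_b).
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
counter = 0

def transfer():
    global counter
    with lock_a:          # always first
        with lock_b:      # always second -- no circular wait possible
            counter += 1

threads = [threading.Thread(target=transfer) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 10
```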
Semaphores
A counter that controls access to a shared resource. acquire() decrements (blocks if zero). release() increments.
Binary semaphore = mutex. Counting semaphore = controls access to a pool of N resources.
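A counting semaphore as a resource pool, sketched with Python's `threading.Semaphore` (the "connection pool" framing and the names are illustrative):

```python
# 10 workers compete for a pool of 3 "connections"; the semaphore's count
# guarantees at most 3 are ever inside the critical region at once.
import threading
import time

pool = threading.Semaphore(3)     # counting semaphore, initial count 3
state_lock = threading.Lock()
active = 0
peak = 0

def worker():
    global active, peak
    with pool:                    # acquire(): blocks while the count is zero
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)          # simulate using the resource
        with state_lock:
            active -= 1
        # release() runs automatically on leaving the with-block

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 3, active == 0)  # True True
```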
Monitors
A higher-level synchronization construct combining a mutex with condition variables.
SHARED started ← false, protected by mutex
SHARED cvar ← CONDITION_VARIABLE
// Waiting thread
LOCK(started)
WHILE NOT started
WAIT(cvar, started) // atomically release lock and wait; reacquires lock on wake
UNLOCK(started)
// Signaling thread
LOCK(started)
started ← true
NOTIFY_ONE(cvar) // wake one waiting thread
UNLOCK(started)
Condition variables: wait() releases the lock and blocks until signaled. notify_one() / notify_all() wake waiting threads.
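The monitor pattern above as a runnable sketch, using Python's `threading.Condition` (variable names are illustrative):

```python
import threading

cvar = threading.Condition()   # bundles a mutex with a condition variable
started = False
seen = []

def waiter():
    with cvar:                 # enter the monitor (take the lock)
        while not started:     # loop guards against spurious wakeups
            cvar.wait()        # atomically release the lock and block
        seen.append("woke")

def signaler():
    global started
    with cvar:
        started = True
        cvar.notify()          # wake one waiting thread

t = threading.Thread(target=waiter)
t.start()
signaler()
t.join()
print(seen)  # ['woke']
```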
Message Passing
Actor Model
Each actor has:
- Private state (no shared memory)
- A mailbox (message queue)
- Behavior (how to process messages)
Actors communicate exclusively by sending messages. No shared state → no locks.
Actor A: send(B, "hello")
Actor B: receive() → "hello" → process → send(A, "world")
Erlang/Elixir: The canonical actor-model languages. OTP framework provides supervisors, gen_servers, etc.
Akka (Scala/Java): Actor framework on JVM.
Properties: Location transparency (actors can be on different machines). Fault tolerance (supervisor hierarchies). Natural for distributed systems.
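A minimal actor sketch (assumed design, not a real actor framework's API): each actor owns a private mailbox and a thread that processes one message at a time, so its state needs no locks.

```python
import queue
import threading

class Actor:
    def __init__(self, behavior):
        self.mailbox = queue.Queue()          # private message queue
        self.behavior = behavior              # how to process each message
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)                 # unbounded queue: sender never blocks

    def _run(self):
        while True:
            msg = self.mailbox.get()          # one message at a time
            if msg is None:                   # poison pill: stop the actor
                break
            self.behavior(msg)

log = []
done = threading.Event()

def greeter(msg):
    log.append("got " + msg)
    done.set()

a = Actor(greeter)
a.send("hello")
done.wait(timeout=1)
a.send(None)
print(log)  # ['got hello']
```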
Channels (CSP — Communicating Sequential Processes)
Processes communicate by sending values through channels. Channels can be synchronous (unbuffered) or asynchronous (buffered).
(tx, rx) ← NEW_CHANNEL() // multi-producer, single-consumer
SPAWN_THREAD(PROCEDURE()
SEND(tx, "hello")
)
received ← RECEIVE(rx) // blocks until message arrives
PRINT "Got: ", received
Go's motto: "Do not communicate by sharing memory; instead, share memory by communicating."
ch := make(chan int)
go func() { ch <- 42 }()
value := <-ch // 42
CSP (Hoare, 1978): Formal model of concurrent processes communicating via channels. Basis for Go's goroutines and channels, Rust's channels, Clojure's core.async.
Select
Wait on multiple channels simultaneously, proceeding with whichever is ready first.
select {
case msg := <-ch1:
fmt.Println("Received from ch1:", msg)
case msg := <-ch2:
fmt.Println("Received from ch2:", msg)
case <-time.After(5 * time.Second):
fmt.Println("Timeout!")
}
Rust equivalent: crossbeam::select! macro.
Async/Await
Asynchronous programming: Write non-blocking code that looks like blocking code.
ASYNC FUNCTION FETCH_DATA(url)
response ← AWAIT HTTP_GET(url)
body ← AWAIT response.TEXT()
RETURN Ok(body)
ASYNC PROCEDURE MAIN()
data ← AWAIT FETCH_DATA("https://example.com")
PRINT data
How It Works
- async fn returns a Future (state machine) instead of executing immediately.
- await suspends the current task if the Future isn't ready, allowing other tasks to run.
- An executor (runtime) polls Futures and drives them to completion.
Key insight: Tasks are multiplexed onto a smaller number of OS threads. Millions of concurrent tasks on a handful of threads.
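The FETCH_DATA pseudocode above, as a minimal runnable sketch in Python's asyncio; `asyncio.sleep` stands in for real HTTP I/O so the example stays self-contained.

```python
import asyncio

async def fetch_data(url):
    # Suspension point: this task parks here and the event loop is free
    # to run other tasks until the sleep (i.e. the "I/O") completes.
    await asyncio.sleep(0.01)
    return "body of " + url

# asyncio.run starts the event loop (the executor) and drives the
# coroutine's state machine to completion.
data = asyncio.run(fetch_data("https://example.com"))
print(data)  # body of https://example.com
```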
Cooperative vs Preemptive
Cooperative (async/await, green threads): Tasks voluntarily yield at await points. If a task never yields, it blocks the thread.
Preemptive (OS threads): The OS can interrupt a thread at any time to schedule another. Fair but expensive (context switch, stack per thread).
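The cooperative caveat is observable. In the sketch below (asyncio, illustrative names), a task that never awaits monopolizes the thread until it finishes, while a task that awaits yields control at each suspension point.

```python
import asyncio

order = []

async def hog(n):
    for _ in range(n):
        order.append("hog")        # no await in the loop: never yields

async def polite(n):
    for _ in range(n):
        order.append("polite")
        await asyncio.sleep(0)     # voluntary yield back to the event loop

async def main():
    # hog is scheduled first and, never yielding, runs to completion
    # before polite gets a single turn.
    await asyncio.gather(hog(3), polite(3))

asyncio.run(main())
print(order)  # ['hog', 'hog', 'hog', 'polite', 'polite', 'polite']
```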
Runtimes
- Tokio (Rust): Multi-threaded async runtime. Work-stealing scheduler.
- async-std (Rust): Alternative runtime.
- Node.js (JavaScript): Single-threaded event loop with async/await.
- asyncio (Python): Single-threaded event loop.
- Go runtime: M:N scheduling of goroutines on OS threads.
Coroutines
Coroutines are generalizations of functions that can suspend and resume execution.
Stackful coroutines (green threads): Each coroutine has its own stack. Can suspend from nested function calls. Go goroutines, Lua coroutines.
Stackless coroutines (generators/async): Compile to state machines. Can only suspend directly in the coroutine's own body, not from nested ordinary function calls. Rust async/await, Python generators/async.
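Python generators make the stackless case concrete: the function body compiles to a resumable state machine, and `yield` may appear only in that body, not in functions it calls.

```python
def counter(start):
    n = start
    while True:
        step = yield n                 # suspend here; resume via send()/next()
        n += step if step is not None else 1

c = counter(10)
print(next(c))     # 10 -- run the body up to the first yield
print(c.send(5))   # 15 -- resume, binding step = 5
print(next(c))     # 16 -- resume with step = None, so increment by 1
```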
Green Threads
User-space threads managed by the runtime (not the OS). Much lighter than OS threads (smaller stacks, faster context switch).
Go goroutines: Start with ~2 KB stack (grows as needed). Millions of goroutines on a few OS threads. M:N scheduling.
Erlang processes: Even lighter (initial footprint of a few hundred machine words). Millions of processes. Preemptive scheduling at the VM level.
Futures and Promises
A Future (or Promise) represents a value that will be available in the future.
// Future interface
INTERFACE Future
TYPE Output
FUNCTION POLL(context) -> Poll(Output)
TYPE Poll = Ready(value) | Pending
Composing futures: Chain with .then(), combine with join! (parallel), race with select!.
(result_a, result_b) ← AWAIT_ALL(
FETCH_DATA("url_a"),
FETCH_DATA("url_b")
)
// Both fetched concurrently
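The AWAIT_ALL pattern above maps to `asyncio.gather`: the event loop drives both futures concurrently, so total time is roughly the maximum latency, not the sum. A sketch with `asyncio.sleep` standing in for network latency:

```python
import asyncio
import time

async def fetch_data(url):
    await asyncio.sleep(0.1)        # stands in for a 100 ms network call
    return "body of " + url

async def main():
    start = time.monotonic()
    # Both coroutines are scheduled at once; their "I/O" waits overlap.
    a, b = await asyncio.gather(fetch_data("url_a"), fetch_data("url_b"))
    elapsed = time.monotonic() - start
    return a, b, elapsed

a, b, elapsed = asyncio.run(main())
print(a, b)
print(elapsed < 0.2)  # True: ~0.1 s total, not 0.2 s, because the waits overlapped
```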
Reactive Programming
Data flows and propagation of change. When a value changes, all dependent computations automatically update.
Observable streams: Sequences of events over time.
source: --1--2--3--4--5--|
map(x*2): --2--4--6--8--10--|
filter(>5): ---------6--8--10--|
RxJS, RxJava, RxRust: Reactive extensions libraries. Operators: map, filter, merge, flatMap, debounce, throttle.
Applications: UI event handling (React, Angular), real-time data streams, event-driven architectures.
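The marble diagram above can be reproduced with a toy push-based observable (assumed names, not a real Rx API): subscribers receive each event as it is emitted, and map/filter return derived streams.

```python
class Observable:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def emit(self, value):
        for fn in self.subscribers:   # push the event to every subscriber
            fn(value)

    def map(self, f):
        out = Observable()
        self.subscribe(lambda v: out.emit(f(v)))
        return out

    def filter(self, pred):
        out = Observable()
        self.subscribe(lambda v: pred(v) and out.emit(v))
        return out

source = Observable()
results = []
source.map(lambda x: x * 2).filter(lambda x: x > 5).subscribe(results.append)
for x in [1, 2, 3, 4, 5]:
    source.emit(x)
print(results)  # [6, 8, 10] -- matches the marble diagram
```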
Software Transactional Memory (STM)
Manage shared memory like database transactions:
atomically {
read account_a
read account_b
write account_a (balance_a - amount)
write account_b (balance_b + amount)
}
Properties: Atomicity (all or nothing), isolation (no partial visibility). If a conflict is detected, the transaction retries.
Advantages: No deadlocks. Composable (nest transactions freely). Easier reasoning.
Disadvantages: Overhead from tracking reads/writes. May retry many times under contention.
Implementations: Haskell STM (the gold standard), Clojure refs.
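The retry-on-conflict mechanics can be sketched with a toy STM (assumed design, far simpler than real implementations): each ref carries a version, a transaction buffers its writes and records the versions it read, and commit succeeds only if no read version changed in the meantime.

```python
import threading

_commit_lock = threading.Lock()   # a single commit lock keeps the toy simple

class Ref:
    def __init__(self, value):
        self.value = value
        self.version = 0

def atomically(txn):
    while True:                   # retry loop: rerun the transaction on conflict
        reads, writes = {}, {}

        def read(ref):
            if ref in writes:                     # read-your-own-writes
                return writes[ref]
            reads.setdefault(ref, ref.version)    # record first-seen version
            return ref.value

        def write(ref, value):
            reads.setdefault(ref, ref.version)
            writes[ref] = value                   # buffer, don't publish yet

        result = txn(read, write)
        with _commit_lock:        # validate and publish atomically
            if all(ref.version == v for ref, v in reads.items()):
                for ref, value in writes.items():
                    ref.value = value
                    ref.version += 1
                return result
        # a read was stale: fall through and retry the whole transaction

account_a = Ref(100)
account_b = Ref(0)

def transfer(read, write, amount=30):
    write(account_a, read(account_a) - amount)
    write(account_b, read(account_b) + amount)

atomically(lambda read, write: transfer(read, write))
print(account_a.value, account_b.value)  # 70 30
```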
Comparison
| Model | Communication | Strengths | Weaknesses |
|---|---|---|---|
| Shared memory + locks | Shared variables | Direct, efficient | Error-prone (deadlocks, races) |
| Actors | Messages | Fault-tolerant, distributed | Debugging message flows |
| Channels (CSP) | Typed channels | Composable, type-safe | Channel management overhead |
| Async/await | Cooperative tasks | Scalable I/O, ergonomic | Cooperative (can't preempt) |
| STM | Transactions | No deadlocks, composable | Retry overhead |
Applications in CS
- Web servers: Handle thousands of concurrent connections. Async/await (Node.js, Tokio) or lightweight threads (Go goroutines).
- Databases: Connection pools (threads), async query execution, lock-based or MVCC concurrency control.
- Game engines: Main thread + worker threads. Job systems with task graphs.
- Distributed systems: Actor model (Erlang/Elixir for telecom, Akka for microservices). Message-passing between nodes.
- UI frameworks: Event loops (async), reactive state management (React, SwiftUI).
- Scientific computing: MPI (message passing between nodes) + OpenMP (shared memory within node).