Queues
A queue is a First-In, First-Out (FIFO) data structure. The first element added is the first one removed — like a line of people waiting.
Operations
| Operation | Description | Time |
|---|---|---|
| enqueue(x) | Add x to the back | O(1) |
| dequeue() | Remove and return the front element | O(1) |
| front() / peek() | Return the front element without removing | O(1) |
| is_empty() | Check if the queue is empty | O(1) |
| size() | Number of elements | O(1) |
Implementations
Array-Based (Circular Buffer)
Use a fixed-size array with front and rear indices that wrap around.
```
Capacity: 8

[_, _, 30, 40, 50, _, _, _]
       ^front      ^rear

After enqueue(60):

[_, _, 30, 40, 50, 60, _, _]
       ^front          ^rear

After dequeue() → returns 30:

[_, _, _, 40, 50, 60, _, _]
          ^front      ^rear

Wrap-around:

[80, _, _, _, _, 60, 70, _]
     ^rear       ^front
```
```
STRUCTURE CircularQueue
    data     : array of elements
    front    : integer
    rear     : integer
    size     : integer
    capacity : integer

FUNCTION ENQUEUE(queue, val)
    IF queue.size = queue.capacity THEN RETURN error "Full"
    queue.data[queue.rear] ← val
    queue.rear ← (queue.rear + 1) MOD queue.capacity
    queue.size ← queue.size + 1

FUNCTION DEQUEUE(queue)
    IF queue.size = 0 THEN RETURN None
    val ← queue.data[queue.front]
    queue.front ← (queue.front + 1) MOD queue.capacity
    queue.size ← queue.size - 1
    RETURN val
```
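The pseudocode above might look like this in Rust, using a `Vec<Option<T>>` as the fixed backing array (a minimal sketch, not production code):

```rust
// Circular-buffer queue sketch; field and method names mirror the pseudocode.
struct CircularQueue<T> {
    data: Vec<Option<T>>,
    front: usize,
    rear: usize,
    size: usize,
}

impl<T> CircularQueue<T> {
    fn with_capacity(capacity: usize) -> Self {
        let mut data = Vec::with_capacity(capacity);
        data.resize_with(capacity, || None);
        CircularQueue { data, front: 0, rear: 0, size: 0 }
    }

    fn enqueue(&mut self, val: T) -> Result<(), &'static str> {
        if self.size == self.data.len() {
            return Err("Full");
        }
        self.data[self.rear] = Some(val);
        self.rear = (self.rear + 1) % self.data.len(); // wrap around
        self.size += 1;
        Ok(())
    }

    fn dequeue(&mut self) -> Option<T> {
        if self.size == 0 {
            return None;
        }
        let val = self.data[self.front].take(); // remove without cloning
        self.front = (self.front + 1) % self.data.len();
        self.size -= 1;
        val
    }
}

fn main() {
    let mut q = CircularQueue::with_capacity(2);
    assert!(q.enqueue(1).is_ok());
    assert!(q.enqueue(2).is_ok());
    assert!(q.enqueue(3).is_err()); // full
    assert_eq!(q.dequeue(), Some(1)); // FIFO order
    assert!(q.enqueue(3).is_ok());    // rear wrapped around
    assert_eq!(q.dequeue(), Some(2));
    assert_eq!(q.dequeue(), Some(3));
    assert_eq!(q.dequeue(), None);
}
```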
Advantages: Cache-friendly. No allocation per operation. Constant-time operations. Disadvantages: Fixed capacity (or needs resizing logic).
Linked-List-Based
Enqueue at tail, dequeue from head. Maintain both head and tail pointers.
Advantages: Dynamic size. True O(1) worst case. Disadvantages: Allocation per enqueue. Cache-unfriendly.
VecDeque (Rust)
Rust's std::collections::VecDeque is a growable circular buffer — the best of both worlds. Amortized O(1) push/pop at both ends.
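A quick illustration of `VecDeque` used as a FIFO queue (the values are arbitrary):

```rust
use std::collections::VecDeque;

fn main() {
    // FIFO usage: push at the back, pop from the front.
    let mut q: VecDeque<i32> = VecDeque::new();
    q.push_back(10);
    q.push_back(20);
    q.push_back(30);
    assert_eq!(q.pop_front(), Some(10)); // first in, first out
    assert_eq!(q.front(), Some(&20));    // peek without removing
    assert_eq!(q.len(), 2);
}
```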
Double-Ended Queue (Deque)
Supports insertion and removal at both ends.
| Operation | Time |
|---|---|
| push_front(x) | O(1) |
| push_back(x) | O(1) |
| pop_front() | O(1) |
| pop_back() | O(1) |
Implemented as a circular buffer (VecDeque) or doubly linked list.
Can simulate both stack and queue: Use only one end for stack operations, or push at one end and pop at the other for queue.
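For instance, one `VecDeque` can play both roles (a small sketch with made-up values):

```rust
use std::collections::VecDeque;

fn main() {
    let mut d: VecDeque<i32> = VecDeque::new();

    // As a stack: push and pop at the same end (LIFO).
    d.push_back(1);
    d.push_back(2);
    assert_eq!(d.pop_back(), Some(2)); // last in, first out
    d.clear();

    // As a queue: push at one end, pop at the other (FIFO).
    d.push_back(1);
    d.push_back(2);
    assert_eq!(d.pop_front(), Some(1)); // first in, first out
}
```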
Priority Queue
Elements have priorities. The highest-priority element is dequeued first, regardless of insertion order.
| Operation | Time (binary heap) |
|---|---|
| insert(x, priority) | O(log n) |
| extract_max/min() | O(log n) |
| peek() | O(1) |
| change_priority(x, new_p) | O(log n) (with position tracking) |
Implementation: Usually a binary heap (covered in detail in the heaps file).
Rust: std::collections::BinaryHeap (max-heap).
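A small sketch of `BinaryHeap` usage; since it is a max-heap, wrapping values in `std::cmp::Reverse` is the usual trick for min-heap behavior:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // Max-heap: the largest value is dequeued first, regardless of insertion order.
    let mut pq = BinaryHeap::new();
    pq.push(3);
    pq.push(7);
    pq.push(1);
    assert_eq!(pq.peek(), Some(&7)); // O(1) peek at the maximum
    assert_eq!(pq.pop(), Some(7));   // O(log n) extract-max

    // Min-heap via Reverse: the smallest wrapped value compares as largest.
    let mut min_pq = BinaryHeap::new();
    min_pq.push(Reverse(3));
    min_pq.push(Reverse(1));
    assert_eq!(min_pq.pop(), Some(Reverse(1)));
}
```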
Applications: Dijkstra's algorithm, Huffman coding, job scheduling, event-driven simulation, A* search.
Monotonic Queue
A deque where elements are maintained in monotonically increasing (or decreasing) order. Enables O(1) query of the minimum (or maximum) in a sliding window.
Sliding Window Maximum
Given an array and window size k, find the maximum in each window position.
```
FUNCTION SLIDING_WINDOW_MAX(arr, k)
    deque ← empty double-ended queue   // stores indices
    result ← empty list
    FOR i ← 0 TO LENGTH(arr) - 1 DO
        // Remove elements outside the window
        WHILE deque is not empty AND FRONT(deque) + k ≤ i DO
            POP_FRONT(deque)
        // Remove elements smaller than current (maintain decreasing order)
        WHILE deque is not empty AND arr[BACK(deque)] ≤ arr[i] DO
            POP_BACK(deque)
        PUSH_BACK(deque, i)
        IF i ≥ k - 1 THEN
            APPEND arr[FRONT(deque)] TO result
    RETURN result
```
Time: O(n) total (each element enters and exits the deque at most once).
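The pseudocode translates fairly directly to Rust (a sketch using `VecDeque` as the deque; the example input is made up):

```rust
use std::collections::VecDeque;

// Returns the maximum of each length-k window of arr; O(n) total.
fn sliding_window_max(arr: &[i32], k: usize) -> Vec<i32> {
    let mut deque: VecDeque<usize> = VecDeque::new(); // stores indices
    let mut result = Vec::new();
    for i in 0..arr.len() {
        // Drop indices that have left the window.
        while deque.front().map_or(false, |&f| f + k <= i) {
            deque.pop_front();
        }
        // Drop smaller elements so the deque stays decreasing.
        while deque.back().map_or(false, |&b| arr[b] <= arr[i]) {
            deque.pop_back();
        }
        deque.push_back(i);
        if i + 1 >= k {
            result.push(arr[*deque.front().unwrap()]); // front holds the max
        }
    }
    result
}

fn main() {
    assert_eq!(
        sliding_window_max(&[1, 3, -1, -3, 5, 3, 6, 7], 3),
        vec![3, 3, 5, 5, 6, 7]
    );
}
```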
Applications: Sliding window problems, real-time stream processing, range minimum/maximum queries.
Queue Applications
BFS (Breadth-First Search)
The fundamental algorithm using a queue:
```
FUNCTION BFS(graph, start)
    n ← LENGTH(graph)
    dist ← array of n values, all set to -1
    dist[start] ← 0
    queue ← empty queue
    ENQUEUE(queue, start)
    WHILE queue is not empty DO
        u ← DEQUEUE(queue)
        FOR EACH v IN graph[u] DO
            IF dist[v] = -1 THEN
                dist[v] ← dist[u] + 1
                ENQUEUE(queue, v)
    RETURN dist
```
BFS visits nodes in order of increasing distance from the source, which is what guarantees shortest paths in unweighted graphs.
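A direct Rust rendering of the pseudocode, with the graph as adjacency lists (the example graph is made up for illustration):

```rust
use std::collections::VecDeque;

// BFS distances from `start` in an unweighted graph; -1 marks unreachable.
fn bfs(graph: &[Vec<usize>], start: usize) -> Vec<i32> {
    let mut dist = vec![-1; graph.len()];
    dist[start] = 0;
    let mut queue = VecDeque::new();
    queue.push_back(start);
    while let Some(u) = queue.pop_front() {
        for &v in &graph[u] {
            if dist[v] == -1 {
                dist[v] = dist[u] + 1;
                queue.push_back(v);
            }
        }
    }
    dist
}

fn main() {
    // Edges: 0-1, 1-2, 0-3; vertex 4 is unreachable.
    let graph = vec![vec![1, 3], vec![0, 2], vec![1], vec![0], vec![]];
    assert_eq!(bfs(&graph, 0), vec![0, 1, 2, 1, -1]);
}
```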
Task Scheduling
Round-robin scheduler: Tasks in a queue. Each task gets a time quantum. After the quantum, the task goes to the back of the queue.
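A toy round-robin simulation with a quantum of one work unit per turn (task names and costs are made up for illustration):

```rust
use std::collections::VecDeque;

// Each task is (name, units of work remaining). One unit per turn; unfinished
// tasks go to the back of the ready queue. Returns the completion order.
fn round_robin(tasks: Vec<(&str, u32)>) -> Vec<&str> {
    let mut ready: VecDeque<(&str, u32)> = tasks.into();
    let mut finish_order = Vec::new();
    while let Some((name, remaining)) = ready.pop_front() {
        if remaining > 1 {
            ready.push_back((name, remaining - 1)); // quantum spent, requeue
        } else {
            finish_order.push(name); // task finished
        }
    }
    finish_order
}

fn main() {
    // B needs 1 unit, C needs 2, A needs 3 — shortest finishes first here.
    assert_eq!(
        round_robin(vec![("A", 3), ("B", 1), ("C", 2)]),
        vec!["B", "C", "A"]
    );
}
```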
Work queues: Producer-consumer pattern. Producers enqueue tasks, consumers dequeue and execute them. Used in thread pools, web servers, message brokers.
Buffering
I/O buffers: Keyboard buffer, print queue, network packet queue.
Streaming: Data flows through a pipeline of processing stages connected by queues.
Level-Order Tree Traversal
Visit all nodes at depth d before any node at depth d+1:
```
FUNCTION LEVEL_ORDER(root)
    result ← empty list
    IF root = NULL THEN RETURN result
    queue ← empty queue
    ENQUEUE(queue, root)
    WHILE queue is not empty DO
        level_size ← SIZE(queue)
        level ← empty list
        FOR i ← 1 TO level_size DO
            node ← DEQUEUE(queue)
            APPEND node.val TO level
            IF node.left ≠ NULL THEN ENQUEUE(queue, node.left)
            IF node.right ≠ NULL THEN ENQUEUE(queue, node.right)
        APPEND level TO result
    RETURN result
```
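A Rust sketch of the same traversal, using a hypothetical `Node` type with boxed children:

```rust
use std::collections::VecDeque;

struct Node {
    val: i32,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

// Collects node values level by level, left to right.
fn level_order(root: &Node) -> Vec<Vec<i32>> {
    let mut result = Vec::new();
    let mut queue: VecDeque<&Node> = VecDeque::new();
    queue.push_back(root);
    while !queue.is_empty() {
        let level_size = queue.len(); // nodes currently at this depth
        let mut level = Vec::with_capacity(level_size);
        for _ in 0..level_size {
            let node = queue.pop_front().unwrap();
            level.push(node.val);
            if let Some(l) = node.left.as_deref() { queue.push_back(l); }
            if let Some(r) = node.right.as_deref() { queue.push_back(r); }
        }
        result.push(level);
    }
    result
}

fn main() {
    //     1
    //    / \
    //   2   3
    //  /
    // 4
    let tree = Node {
        val: 1,
        left: Some(Box::new(Node {
            val: 2,
            left: Some(Box::new(Node { val: 4, left: None, right: None })),
            right: None,
        })),
        right: Some(Box::new(Node { val: 3, left: None, right: None })),
    };
    assert_eq!(level_order(&tree), vec![vec![1], vec![2, 3], vec![4]]);
}
```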
Concurrent Queues
Lock-Free Queue (Michael-Scott)
A lock-free queue using compare-and-swap (CAS). Multiple threads can enqueue and dequeue concurrently without locks.
Key idea: Separate head and tail pointers. CAS on tail for enqueue, CAS on head for dequeue.
MPSC / MPMC Queues
- MPSC (Multi-Producer, Single-Consumer): Simpler. Rust channels (std::sync::mpsc).
- SPSC (Single-Producer, Single-Consumer): Lock-free with a simple circular buffer. Highest throughput.
- MPMC (Multi-Producer, Multi-Consumer): Most complex. crossbeam-channel in Rust.
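For example, `std::sync::mpsc` in a minimal multi-producer setup (thread count and payloads are arbitrary):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Clone the sender for each producer; the single consumer drains the channel.
    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for id in 0..3 {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            tx.send(id).unwrap();
        }));
    }
    drop(tx); // drop the original sender so the iterator below terminates

    let mut received: Vec<i32> = rx.iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    received.sort(); // arrival order across producers is nondeterministic
    assert_eq!(received, vec![0, 1, 2]);
}
```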
Bounded vs Unbounded
Bounded: Fixed capacity. Producers block (or fail) when full. Provides backpressure. Prevents memory exhaustion.
Unbounded: Grows as needed. Producers never block. Risk of memory exhaustion under sustained overload.
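A bounded channel makes the backpressure concrete; with `mpsc::sync_channel`, `try_send` fails instead of blocking when the buffer is full (capacity 1 here for illustration):

```rust
use std::sync::mpsc;

fn main() {
    // Bounded channel of capacity 1.
    let (tx, rx) = mpsc::sync_channel(1);
    assert!(tx.try_send(1).is_ok());
    assert!(tx.try_send(2).is_err()); // full: backpressure surfaces as an error
    assert_eq!(rx.recv().unwrap(), 1);
    assert!(tx.try_send(2).is_ok());  // consumer freed a slot
}
```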
Applications in CS
- Graph algorithms: BFS, shortest path (0-1 BFS with deque), topological sort (Kahn's algorithm uses a queue).
- Operating systems: Process scheduling (ready queue), I/O request queues, print queues.
- Networking: Packet queues in routers/switches. QoS with priority queues. TCP send/receive buffers.
- Web servers: Request queues. Connection pools. Rate limiting with token bucket (queue of tokens).
- Message systems: Kafka, RabbitMQ, SQS — all fundamentally queues.
- Simulation: Event-driven simulation uses a priority queue of events.
- Cache eviction: FIFO eviction policy is a queue.