Emerging Networking Technologies

Information-Centric Networking (ICN) / Named Data Networking (NDN)

ICN shifts the network paradigm from host-centric (where is it?) to content-centric (what do I want?).

NDN Architecture

NDN (the leading ICN architecture) uses two packet types and three data structures.

Packets:

  • Interest: Consumer requests data by name (e.g., /com/example/video/segment-42).
  • Data: Producer (or cache) returns signed data matching the Interest name.

Per-node data structures:

| Structure | Purpose |
|-----------|---------|
| Content Store (CS) | Cache of recently seen Data packets |
| Pending Interest Table (PIT) | Records outstanding Interests and incoming faces |
| Forwarding Information Base (FIB) | Name-prefix to outgoing face(s) mapping |

NDN Forwarding Process

Interest arrives:
  1. Check CS → if match, return Data (cache hit)
  2. Check PIT → if existing entry for same name, add incoming face (aggregate)
  3. Check FIB → forward Interest toward producer; create PIT entry

Data arrives:
  1. Match against PIT → forward Data to all recorded incoming faces
  2. Optionally cache in CS
  3. Remove PIT entry
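
The lookup order above can be sketched in Python. This is an illustrative toy model (string names, exact-match CS and PIT, longest-prefix FIB); a real forwarder such as NFD is far more involved:

```python
# Toy NDN node: CS/PIT/FIB lookups for Interest and Data packets.
class NdnNode:
    def __init__(self):
        self.cs = {}    # name -> data (Content Store)
        self.pit = {}   # name -> set of incoming faces (Pending Interest Table)
        self.fib = {}   # name prefix -> outgoing face (Forwarding Info Base)

    def on_interest(self, name, in_face):
        if name in self.cs:                      # 1. CS hit: serve from cache
            return ("data", self.cs[name], in_face)
        if name in self.pit:                     # 2. PIT hit: aggregate request
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        prefix = self.longest_prefix(name)       # 3. FIB lookup: forward upstream
        if prefix is None:
            return ("drop", None, None)
        self.pit[name] = {in_face}               # create PIT entry
        return ("forward", None, self.fib[prefix])

    def on_data(self, name, data):
        faces = self.pit.pop(name, set())        # match and remove PIT entry
        self.cs[name] = data                     # optionally cache in CS
        return faces                             # forward to all recorded faces

    def longest_prefix(self, name):
        parts = name.strip("/").split("/")
        for i in range(len(parts), 0, -1):
            prefix = "/" + "/".join(parts[:i])
            if prefix in self.fib:
                return prefix
        return None
```

Note how aggregation falls out naturally: the second Interest for the same name adds a face to the PIT instead of being forwarded, and the returning Data fans out to both faces.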

NDN Properties

| Property | Description |
|----------|-------------|
| In-network caching | Any node can cache and serve content |
| Multicast/aggregation | PIT naturally aggregates duplicate requests |
| Data security | Data packets are signed by the producer; security is per-content, not per-channel |
| Flow balance | One Interest retrieves at most one Data; inherent flow control |
| No addresses | No IP addresses; names replace them |
| Mobility | Consumer mobility is free (re-express Interest); producer mobility requires name-based routing updates |

Challenges

  • Name-based routing does not scale like IP prefix aggregation.
  • PIT state per pending request introduces DoS vulnerability (Interest flooding attack).
  • Cache privacy: timing attacks can reveal cached content.
  • Incremental deployment over existing IP infrastructure.

LEO Satellite Networks

Low Earth Orbit (LEO) satellite constellations provide global broadband connectivity with lower latency than GEO satellites.

Orbital Parameters

| Parameter | LEO | MEO | GEO |
|-----------|-----|-----|-----|
| Altitude | 300-2,000 km | 2,000-35,786 km | 35,786 km |
| One-way latency | 1-15 ms | 50-150 ms | ~270 ms |
| Orbital period | ~90-120 min | ~12 hrs | ~24 hrs (geostationary) |
| Coverage per satellite | Small footprint | Medium | ~1/3 of Earth |
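
The latency rows follow directly from propagation delay at the speed of light. A back-of-envelope check (ignoring processing and queuing, and assuming the satellite is directly overhead):

```python
# Minimum one-way delay (user -> satellite -> ground station) for a
# single bent-pipe hop: up-link plus down-link at the speed of light.
C_KM_PER_MS = 299_792.458 / 1000  # speed of light, km per millisecond

def min_one_way_delay_ms(altitude_km):
    return 2 * altitude_km / C_KM_PER_MS

print(round(min_one_way_delay_ms(550), 1))     # LEO (Starlink-like shell)
print(round(min_one_way_delay_ms(35_786), 1))  # GEO
```

The LEO figure lands near 4 ms and the GEO figure near 240 ms; real paths are longer (slant range, routing), which is why the table quotes ranges.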

Starlink Architecture

User Terminal (phased-array antenna)
    ↕ Ka/Ku-band
Starlink Satellite (LEO, ~550 km)
    ↕ Optical inter-satellite links (ISLs)
Adjacent Satellites
    ↕ Ka/Ku-band
Ground Station (gateway to Internet)
  • Constellation: ~6,000+ satellites in multiple orbital shells (as of 2025).
  • Inter-satellite links (ISLs): Laser links between satellites enable traffic to traverse the constellation without touching the ground. In vacuum, light travels ~1.47x faster than in fiber, making satellite paths potentially faster than terrestrial fiber for long distances.
  • Handover: As satellites move (7.5 km/s), user terminals must switch between satellites frequently (~15-second intervals).
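
The ~1.47x claim can be checked with a crude model. The fiber/vacuum speed ratio is the fiber's refractive index (~1.47); the ISL path length model below (up-link + down-link + hops spanning roughly the ground distance) is a deliberate simplification, since real ISL paths zigzag and are somewhat longer:

```python
# Compare long-haul one-way propagation: terrestrial fiber vs a LEO
# path using laser inter-satellite links (ISLs).
C = 299_792.458  # speed of light in vacuum, km/s

def fiber_delay_ms(distance_km, n=1.47):
    # light in fiber travels at c/n
    return distance_km / (C / n) * 1000

def isl_delay_ms(ground_distance_km, altitude_km=550):
    # crude geometry: up-link + down-link + ISL hops at roughly the
    # ground distance (actual constellation paths are longer)
    path = 2 * altitude_km + ground_distance_km
    return path / C * 1000

d = 10_000  # km, on the order of a London-Singapore great-circle route
print(round(fiber_delay_ms(d), 1))  # fiber one-way
print(round(isl_delay_ms(d), 1))    # satellite one-way (idealized)
```

Even with the altitude penalty, the vacuum path wins at this distance, which is the economic case for ISLs on long routes.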

Networking Challenges

| Challenge | Description |
|-----------|-------------|
| Dynamic topology | Satellite positions change continuously; routing must adapt |
| Handover | Seamless connection transfer between satellites |
| Latency variation | Path length changes as constellation geometry shifts |
| Congestion | Hotspots over populated areas; limited per-satellite capacity |
| Weather | Rain fade affects Ka/Ku-band ground links |
| Routing | Shortest path changes frequently; need predictable routing algorithms |

Routing Approaches

  • Snapshot-based: Precompute routes for discrete time intervals; topology is quasi-periodic.
  • Geographic routing: Forward toward the satellite closest to the destination ground station.
  • Contact graph routing: Model satellite contacts as a time-evolving graph; compute time-dependent shortest paths.
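
The snapshot approach reduces to ordinary shortest-path computation once the topology is frozen per interval. A minimal sketch (the graphs, costs, and interval boundaries are hypothetical):

```python
# Snapshot-based routing: freeze the satellite topology for each time
# slot and run plain Dijkstra on that snapshot.
import heapq

def dijkstra(graph, src):
    """graph: {node: {neighbor: cost}}; returns shortest distances from src."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def snapshot_routes(snapshots, src):
    # snapshots: list of (t_start, graph) for consecutive intervals;
    # precompute one routing table per interval.
    return [(t, dijkstra(g, src)) for t, g in snapshots]
```

Because orbits are quasi-periodic, the set of snapshots (and thus the routing tables) repeats, so the precomputation amortizes across constellation periods.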

Time-Sensitive Networking (TSN)

IEEE 802.1 TSN standards enable deterministic, bounded-latency communication over Ethernet.

Key TSN Standards

| Standard | Name | Function |
|----------|------|----------|
| 802.1AS | gPTP | Precision time synchronization (<1 µs accuracy) |
| 802.1Qbv | TAS (Time-Aware Shaper) | Gate-controlled scheduling; time slots for traffic classes |
| 802.1Qbu/802.3br | Frame Preemption | High-priority frames interrupt low-priority transmission |
| 802.1CB | FRER | Frame Replication and Elimination for Reliability |
| 802.1Qcc | SRP enhancements | Centralized and hybrid stream reservation models |
| 802.1Qci | PSFP | Per-Stream Filtering and Policing |

Time-Aware Shaper (TAS)

Time axis divided into repeating cycles:

Cycle: |  Slot A  |  Slot B  | Slot C | Guard |
       | Critical | Scheduled|  Best  | Band  |
       | traffic  | traffic  | effort |       |

Gate states per queue: OPEN or CLOSED
Only queues with open gates can transmit in each slot.
  • Provides deterministic, bounded latency for critical traffic.
  • Requires precise time synchronization (802.1AS).
  • Schedule computed offline by a Central Network Controller (CNC).
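
The cycle above can be modeled as a gate control list: a repeating sequence of (duration, set-of-open-queues) entries. This sketch only evaluates which gates are open at a given time; the cycle layout and queue numbering are illustrative, not from any specific CNC:

```python
# Toy 802.1Qbv gate control list evaluation: a queue may transmit only
# while its gate is open in the current slot of the repeating cycle.
def open_queues(gcl, cycle_us, t_us):
    """Return the set of queues whose gates are open at time t_us."""
    t = t_us % cycle_us           # position within the repeating cycle
    elapsed = 0
    for duration, queues in gcl:
        if t < elapsed + duration:
            return queues
        elapsed += duration
    return set()

# Hypothetical 100 us cycle: 40 us critical (queue 7), 30 us scheduled
# (queues 5-6), 25 us best effort (queues 0-4), 5 us guard band (all closed).
gcl = [(40, {7}), (30, {5, 6}), (25, {0, 1, 2, 3, 4}), (5, set())]
print(open_queues(gcl, 100, 10))   # inside the critical slot
print(open_queues(gcl, 100, 95))   # inside the guard band
```

The guard band entry (no gates open) is what prevents a best-effort frame started late in its slot from bleeding into the next critical slot; 802.1Qbu frame preemption can shrink this overhead.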

TSN Applications

| Domain | Use Case |
|--------|----------|
| Industrial automation | Motion control, sensor networks (replacing fieldbuses) |
| Automotive | In-vehicle Ethernet backbone (ADAS, infotainment, control) |
| Audio/video | Professional AV over Ethernet (replacing SDI/MADI) |
| 5G fronthaul | Transporting radio signals between RU and DU |

Network Slicing (5G)

Network slicing creates multiple logical networks on shared physical infrastructure, each tailored to specific service requirements.

5G Slice Types (3GPP)

| Slice Type | SST Value | Use Case | Requirements |
|------------|-----------|----------|--------------|
| eMBB | 1 | Enhanced mobile broadband | High throughput, moderate latency |
| URLLC | 2 | Ultra-reliable low-latency | <1 ms latency, 99.999% reliability |
| MIoT | 3 | Massive IoT | Low power, high device density |

Slicing Architecture

                   Shared Physical Infrastructure
                  /            |              \
        +---------+    +---------+    +---------+
        | Slice 1 |    | Slice 2 |    | Slice 3 |
        | eMBB    |    | URLLC   |    | MIoT    |
        | (video) |    | (auto)  |    | (sensor)|
        +---------+    +---------+    +---------+
        Each slice has its own:
          - RAN configuration (scheduling, numerology)
          - Core network functions (SMF, UPF)
          - SLA guarantees (bandwidth, latency, reliability)

Implementation Technologies

| Layer | Slicing Mechanism |
|-------|-------------------|
| RAN | Dynamic spectrum sharing, scheduling policies, numerology |
| Transport | MPLS/SR VPNs, TSN, FlexE |
| Core | NFV-based network functions, container orchestration |
| Management | NSMF (Network Slice Management Function), AI/ML for SLA assurance |

Slice Isolation

  • Resource isolation: Dedicated spectrum, compute, and network resources per slice.
  • Performance isolation: One slice's traffic surge does not degrade another slice's SLA.
  • Security isolation: Separate authentication, encryption, and policy domains.
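
Performance isolation ultimately comes down to admission control against per-slice reservations. A toy sketch (the `Slice` class, names, and numbers are illustrative, not a 3GPP-defined interface):

```python
# Toy per-slice admission check: a new flow is admitted only if the
# slice's reserved capacity still covers it, so one slice's surge
# cannot steal resources committed to another slice.
class Slice:
    def __init__(self, name, reserved_mbps, max_latency_ms):
        self.name = name
        self.reserved_mbps = reserved_mbps    # capacity carved out for this slice
        self.max_latency_ms = max_latency_ms  # SLA bound (not enforced here)
        self.used_mbps = 0.0

    def admit(self, flow_mbps):
        if self.used_mbps + flow_mbps > self.reserved_mbps:
            return False        # would exceed this slice's reservation
        self.used_mbps += flow_mbps
        return True

urllc = Slice("urllc", reserved_mbps=100, max_latency_ms=1)
assert urllc.admit(60)          # fits within the reservation
assert not urllc.admit(50)      # rejected; other slices are unaffected
```

Real systems enforce this at each layer (RAN scheduler, transport QoS, core UPF), but the invariant is the same: rejection happens inside the overloaded slice.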

In-Network Computing

Processing data within the network (at switches, SmartNICs) rather than only at endpoints.

Programmable Switch Applications

Using P4-programmable switches (e.g., Intel Tofino) for computation.

| Application | Description |
|-------------|-------------|
| NetCache | Key-value cache in the switch for hot items |
| SwitchML / ATP | In-network aggregation for ML gradient synchronization |
| NetChain | Consensus (chain replication) in the switch data plane |
| PINT | Probabilistic in-band network telemetry |
| NetSeer | Real-time anomaly detection at line rate |

In-Network Aggregation for ML

Traditional parameter-server aggregation:
  Worker 1 ──→ Parameter Server ──→ Worker 1
  Worker 2 ──→ (aggregates all)  ──→ Worker 2
  Worker 3 ──→                   ──→ Worker 3

In-network aggregation:
  Worker 1 ──→ Switch (aggregates ──→ Worker 1
  Worker 2 ──→ gradients in      ──→ Worker 2
  Worker 3 ──→ data plane)       ──→ Worker 3
  • Reduces network traffic by aggregating at the switch.
  • SwitchML achieves near-ideal speedup for distributed training.
  • Challenges: limited switch memory, fixed-point arithmetic, fault tolerance.
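
The fixed-point challenge can be made concrete. Switch ALUs operate on integers, so SwitchML-style systems quantize gradients before summing in the data plane; this sketch uses an illustrative scaling factor, not SwitchML's actual wire format:

```python
# Sketch of in-network gradient aggregation with fixed-point arithmetic:
# workers quantize float gradients to integers, the switch sums them
# element-wise, and workers convert back and average.
SCALE = 2**16  # illustrative fixed-point scaling factor

def to_fixed(grads):
    # worker side: quantize floats to integers before transmission
    return [int(round(g * SCALE)) for g in grads]

def switch_aggregate(worker_chunks):
    # switch data plane: element-wise integer sum across workers
    return [sum(vals) for vals in zip(*worker_chunks)]

def from_fixed(agg, n_workers):
    # worker side: de-quantize and average the aggregated sum
    return [v / SCALE / n_workers for v in agg]

chunks = [to_fixed([0.5, -1.0]), to_fixed([0.25, 2.0]), to_fixed([0.25, -1.0])]
avg = from_fixed(switch_aggregate(chunks), n_workers=3)
print(avg)
```

Quantization error and integer overflow are exactly the "fixed-point arithmetic" challenge noted above; real systems pick per-chunk scaling factors to bound both.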

Computational Storage and Processing

  • SmartNIC computing: Run application logic (e.g., consensus, encryption, compression) on DPU/IPU processors.
  • Computational storage: Process queries at the storage device (e.g., filtering in NVMe SSDs).
  • In-network databases: Push selection and aggregation to network switches.

Quantum Networking Fundamentals

Quantum networking uses quantum mechanical properties (superposition, entanglement) to enable communication capabilities impossible with classical networks.

Key Concepts

| Concept | Description |
|---------|-------------|
| Qubit | Quantum bit; superposition of \|0> and \|1> states |
| Entanglement | Correlated quantum states; measuring one instantly determines the other |
| No-cloning theorem | Quantum states cannot be copied; fundamental limit on repeaters |
| Teleportation | Transfer quantum state using entanglement + classical communication |
| Decoherence | Loss of quantum state due to environmental interaction |

Quantum Key Distribution (QKD)

The most mature quantum networking application. Uses quantum mechanics to establish provably secure encryption keys.

BB84 Protocol:

1. Alice sends qubits in random bases (rectilinear + or diagonal x)
2. Bob measures in random bases
3. Public comparison of bases (not values); keep only matching-basis bits
4. Error rate check: if too high, eavesdropper detected (discard)
5. Privacy amplification → shared secret key
  • Security guaranteed by physics (measurement disturbs quantum states).
  • Limited by distance (photon loss in fiber ~0.2 dB/km; ~100 km practical limit without repeaters).
  • Commercial QKD systems exist (ID Quantique, Toshiba).
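
Steps 1-3 (the "sifting" phase) can be simulated classically. This toy model assumes no eavesdropper and no channel noise, and only shows why roughly half the transmitted bits survive basis comparison:

```python
# Toy BB84 sifting simulation: Alice's bits survive only at positions
# where Bob happened to measure in the same basis (~50% on average).
import random

def bb84_sift(n, seed=1):
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]  # rectilinear or diagonal
    bob_bases = [rng.choice("+x") for _ in range(n)]
    # Matching basis: Bob reads Alice's bit correctly. Mismatched basis:
    # his outcome is random, so the position is discarded after the
    # public basis comparison.
    sifted = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
              if a == b]
    return sifted

key = bb84_sift(1000)
print(len(key))  # ~500 on average
```

A real run would follow this with the error-rate check (step 4) on a sacrificed subset of the sifted bits, then privacy amplification (step 5).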

Quantum Repeaters

Classical amplifiers cannot be used (no-cloning theorem). Quantum repeaters use entanglement swapping.

Node A ←entangle→ Repeater ←entangle→ Node B
         (segment 1)         (segment 2)

Repeater performs Bell measurement on its two qubits:
  → A and B become entangled (entanglement swapping)
  → Extends entanglement distance
  • First generation: Entanglement swapping + purification; requires quantum memory.
  • Second generation: Adds quantum error correction.
  • Third generation: Full quantum error correction (fault-tolerant); enables arbitrary-distance quantum communication.
  • Current status: experimental demonstrations at lab scale; practical repeaters remain a major research challenge.
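
A back-of-envelope calculation shows why repeaters are unavoidable. Photon survival in fiber decays exponentially with distance; this simplified model ignores memory decoherence and swap failures, so it understates the real difficulty:

```python
# Why direct transmission fails: photon survival probability in fiber
# at ~0.2 dB/km loss drops exponentially with distance.
def survival_prob(distance_km, loss_db_per_km=0.2):
    return 10 ** (-loss_db_per_km * distance_km / 10)

direct = survival_prob(1000)        # one photon over 1000 km end to end
segment = survival_prob(1000 / 10)  # one of 10 repeater segments
print(direct)    # ~1e-20: hopeless even at GHz attempt rates
print(segment)   # ~1e-2 per segment: workable with retries
```

Breaking the link into segments turns an exponentially small end-to-end success probability into independently retryable per-segment attempts, which entanglement swapping then stitches together.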

Quantum Internet Architecture

End nodes (quantum processors)
    ↕ quantum channels (fiber / free-space optical)
Quantum repeaters (extend entanglement)
    ↕
Quantum switches/routers (entanglement routing)
    ↕
Classical control plane (manages quantum resources)

Development Stages (Wehner et al.)

| Stage | Capability | Application |
|-------|------------|-------------|
| 1. Trusted repeater | QKD with trusted nodes | Point-to-point key distribution |
| 2. Prepare and measure | End-to-end QKD | Secure communication |
| 3. Entanglement distribution | Remote entanglement | Quantum sensor networks |
| 4. Quantum memory | Store and forward qubits | Blind quantum computing |
| 5. Fault-tolerant | Full quantum computation | Distributed quantum computing |
| 6. Quantum Internet | Networked quantum computers | Quantum cloud, quantum consensus |

Current State and Challenges

  • QKD networks operational in China (Beijing-Shanghai backbone, 2000 km), EU (EuroQCI), and other regions.
  • Satellite-based QKD demonstrated (Micius satellite, China).
  • Quantum memories with sufficient coherence times remain a primary hardware bottleneck.
  • Entanglement routing and resource management are open research problems.
  • Integration with classical Internet infrastructure requires new protocol stacks.