Tuesday, January 20, 2026 | Toronto, Canada
New: Modern Emulation Testbed

Network Emulation
Reimagined

A new research testbed for network emulation and experimentation, built in Rust. Nextmini combines user-space flexibility with an optional max-performance mode, and runs real workloads directly on emulated topologies.

132.1 Gbps

Two-hop throughput (max mode, TCP splicing)

4.8 Gbps

TUN throughput (multi-queue + TSO enabled)

15 MB

Per-node memory (namespace mode)

1,000

Nodes on a single machine (namespace mode)

Section A | Highlights

Flexibility First, Performance When Needed

User-Space Dataplane

A Rust + tokio dataplane built around the actor model, designed for flexibility when implementing new routing, scheduling, dropping, and shaping policies.
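The actor pattern keeps each emulated node a self-contained task with its own inbox and its own links to neighbours. The minimal sketch below shows the general shape in tokio; the type names, fields, and policy trait are illustrative assumptions, not Nextmini's actual API.

```rust
// Sketch of an actor-style dataplane node (hypothetical types, not Nextmini's API).
use tokio::sync::mpsc;

struct Packet {
    dst: u32,          // destination node id
    payload: Vec<u8>,  // raw packet bytes
}

/// A policy decides where (or whether) to forward a packet.
trait Policy: Send + 'static {
    fn next_hop(&self, pkt: &Packet) -> Option<u32>;
}

struct NodeActor {
    inbox: mpsc::Receiver<Packet>,
    // Outgoing channels to directly connected neighbours, keyed by node id.
    links: std::collections::HashMap<u32, mpsc::Sender<Packet>>,
    policy: Box<dyn Policy>,
}

impl NodeActor {
    async fn run(mut self) {
        while let Some(pkt) = self.inbox.recv().await {
            match self.policy.next_hop(&pkt) {
                Some(hop) => {
                    if let Some(link) = self.links.get(&hop) {
                        // Bounded channel: awaiting here applies back-pressure.
                        let _ = link.send(pkt).await;
                    }
                }
                None => { /* policy decided to drop the packet */ }
            }
        }
    }
}

// Spawning a node onto the executor (given `links` and `policy`):
// let (tx, rx) = mpsc::channel(1024);
// tokio::spawn(NodeActor { inbox: rx, links, policy }.run());
```

Swapping in a new routing, dropping, or shaping behaviour then amounts to implementing another `Policy` rather than touching the forwarding loop.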

Asynchronous Rust at Scale

Stackless coroutines and multi-threaded executors make high-concurrency user-space networking practical, without kernel modules or eBPF.
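For reference, a multi-threaded tokio runtime takes only a few lines to set up; the worker count below is an illustrative choice, not a Nextmini default.

```rust
// Minimal multi-threaded executor setup (illustrative configuration).
fn main() {
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(8) // e.g. one worker per core
        .enable_all()      // timer + I/O drivers
        .build()
        .expect("failed to build tokio runtime");

    runtime.block_on(async {
        // spawn one task per node actor and per overlay link here
    });
}
```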

Normal + Max Modes

Normal mode prioritizes algorithmic flexibility; max mode uses Linux TCP splicing (splice) to approach kernel-level forwarding throughput.

Real Workloads via TUN

Each node exposes a TUN interface so distributed applications (including ML training) can run unmodified on emulated topologies—locally or across regions.
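For readers unfamiliar with TUN devices, the sketch below shows the standard Linux way to create one from user space with an ioctl. It is a generic illustration (Nextmini may well use a helper crate instead) and requires CAP_NET_ADMIN.

```rust
// Generic Linux TUN setup via ioctl (illustrative, not Nextmini's code).
use std::fs::OpenOptions;
use std::os::unix::io::AsRawFd;

const TUNSETIFF: libc::c_ulong = 0x4004_54ca; // _IOW('T', 202, int)
const IFF_TUN: libc::c_short = 0x0001;        // layer-3 (IP) device
const IFF_NO_PI: libc::c_short = 0x1000;      // no packet-info header

fn open_tun(name: &str) -> std::io::Result<std::fs::File> {
    let file = OpenOptions::new().read(true).write(true).open("/dev/net/tun")?;

    // struct ifreq: 16-byte interface name followed by the flags field.
    // `name` must fit in 15 bytes.
    let mut ifr = [0u8; 40];
    ifr[..name.len()].copy_from_slice(name.as_bytes());
    ifr[16..18].copy_from_slice(&(IFF_TUN | IFF_NO_PI).to_ne_bytes());

    let rc = unsafe { libc::ioctl(file.as_raw_fd(), TUNSETIFF as _, ifr.as_mut_ptr()) };
    if rc < 0 {
        return Err(std::io::Error::last_os_error());
    }
    // Reads now yield raw IP packets from the host stack; writes inject packets
    // back, so unmodified applications simply see a normal network interface.
    Ok(file)
}
```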

“A system that is not understood in its entirety, or at least to a significant degree of detail by a single individual, should probably not be built.”
— Niklaus Wirth, “A Plea for Lean Software”
Section B | Workflow

From Topology to Experiments

01

Define Topology

Deploy dataplane nodes in Docker (single machine, cluster, or multi-region) or in namespace mode for single-machine scale.

02

Connect the Data Plane

Nodes form an overlay topology with persistent TCP or QUIC links, forwarding packets in user space under controller-installed policies.
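A persistent overlay link can be as simple as a long-lived TCP connection with a framing layer on top. The sketch below uses a 4-byte length prefix purely for illustration; it is not Nextmini's actual wire format.

```rust
// Illustrative persistent overlay link between two dataplane nodes.
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;

async fn link_task(
    peer: &str,
    mut rx: tokio::sync::mpsc::Receiver<Vec<u8>>,
) -> std::io::Result<()> {
    let mut stream = TcpStream::connect(peer).await?;
    stream.set_nodelay(true)?; // don't batch small emulated packets

    while let Some(pkt) = rx.recv().await {
        // 4-byte big-endian length prefix, then the raw packet bytes.
        stream.write_all(&(pkt.len() as u32).to_be_bytes()).await?;
        stream.write_all(&pkt).await?;
    }
    Ok(())
}
```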

03

Run Workloads

Run arbitrary applications over the virtual network via TUN, or generate synthetic user-space TCP flows inside the dataplane.
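The point of the TUN path is that ordinary sockets work unchanged. A throwaway traffic generator like the one below needs no knowledge of the emulation; it simply connects to an address that happens to live on the emulated subnet (the address and port are made up for this example).

```rust
// Unmodified application traffic over the emulated network (address is illustrative).
use std::io::Write;
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("10.0.2.1:5001")?; // peer on the emulated subnet
    let chunk = vec![0u8; 64 * 1024];
    loop {
        // Every byte traverses the emulated topology via the node's TUN interface.
        stream.write_all(&chunk)?;
    }
}
```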

04

Close the Loop

The controller exchanges WebSocket messages with nodes, stores state in PostgreSQL, and enables Python control scripts via real-time triggers.
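On the node side, the control channel boils down to a WebSocket client that pushes state and receives policy updates. The sketch below shows a one-shot metrics push with tokio-tungstenite; the URL, message schema, and crate choice are assumptions for illustration, not Nextmini's actual protocol.

```rust
// Hypothetical node-side metrics push over the control channel.
use futures_util::SinkExt;
use tokio_tungstenite::{connect_async, tungstenite::Message};

async fn report_metrics(
    node_id: u32,
    tx_bytes: u64,
) -> Result<(), Box<dyn std::error::Error>> {
    // Controller endpoint is illustrative.
    let (mut ws, _resp) = connect_async("ws://controller:8080/nodes").await?;

    // The controller persists such samples in PostgreSQL, where triggers can
    // notify Python control scripts in real time.
    let sample = format!(r#"{{"node":{node_id},"tx_bytes":{tx_bytes}}}"#);
    ws.send(Message::Text(sample.into())).await?;
    Ok(())
}
```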

Fig. 1 | Bird's-eye view comparing Mininet and Nextmini (controller ↔ dataplane via WebSocket + PostgreSQL)
Fig. 2 | Live multi-region deployment (world map UI for a multi-region Nextmini deployment)

Software-Defined,
User-Space

Nextmini follows a software-defined architecture with a centralized controller and user-space dataplane nodes, typically running in Docker containers. Topologies are implemented as persistent TCP or QUIC connections between nodes.

This enables two-way control messaging, real-time metric collection, and Python-scripted control-plane algorithms via PostgreSQL triggers—while preserving an optional path to extreme throughput in max mode.

Section C | Performance

Performance at a Glance

Max mode reaches 132.1 Gbps over two hops on a consumer-grade laptop, slightly outperforming Mininet with kernel packet switches.

Max Mode Throughput | Kernel-assisted forwarding via splice

In namespace mode, Nextmini sustains ~12 Gbps while scaling to 40 hops, and keeps per-node memory around 15 MB (enabling 1,000 nodes on one machine).

Single-Machine Scale | Long paths + high scale without custom kernels

An unmodified PyTorch DDP example runs over a four-datacenter deployment: LeNet-5 averages 0.08s/iter locally vs 2.24s/iter across regions (28× slower due to WAN bandwidth).

Real Workloads | Multi-region experiments via Docker Swarm
Section D | Operating Modes

Two Modes, One Testbed

Normal mode
User-space focus

Packets are processed and forwarded in user space under policies installed by the controller, prioritizing flexibility for experimenting with new dataplane mechanisms.


  • Scheduling disciplines (FCFS, WRR)
  • Packet dropping (Tail Drop, RED)
  • Traffic shaping (Token Bucket; see the sketch below) + rate limiting
  • Application workloads via TUN interfaces
  • Synthetic user-space TCP flows (smoltcp)
See Workflow
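As an example of the kind of policy normal mode is meant to make easy to write, here is a minimal token-bucket shaper in plain Rust; the field names and units are illustrative, not Nextmini's implementation.

```rust
// Minimal token-bucket shaper (illustrative, byte-granular).
use std::time::Instant;

struct TokenBucket {
    capacity: f64,        // burst size in bytes
    tokens: f64,          // currently available bytes
    rate: f64,            // refill rate in bytes per second
    last_refill: Instant,
}

impl TokenBucket {
    fn new(rate_bps: f64, burst_bytes: f64) -> Self {
        Self { capacity: burst_bytes, tokens: burst_bytes, rate: rate_bps, last_refill: Instant::now() }
    }

    /// Returns true if a packet of `len` bytes may be sent now.
    fn allow(&mut self, len: usize) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.last_refill = now;
        // Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = (self.tokens + elapsed * self.rate).min(self.capacity);
        if self.tokens >= len as f64 {
            self.tokens -= len as f64;
            true
        } else {
            false // shape: hold the packet until enough tokens accumulate
        }
    }
}
```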
Featured
Max mode
splice syscall

For extreme throughput, Nextmini can “connect” TCP connections in the kernel using splice, bypassing user-space forwarding and approaching kernel-level performance (sketched below).


  • Kernel-assisted TCP splicing for forwarding
  • Well north of 100 Gbps achievable (SOCKS5 proxy path)
  • Orders-of-magnitude faster than user-space forwarding
  • Trades off some scheduling/shaping flexibility
  • Normal + max flows can co-exist
Read the Paper
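The underlying technique is standard Linux plumbing: because splice(2) requires one end to be a pipe, bytes are moved socket → pipe → socket without ever entering user space. The sketch below illustrates that pattern with raw libc calls; the buffer sizes and flags are illustrative, not Nextmini's exact code.

```rust
// Kernel-assisted forwarding between two connected TCP sockets via splice(2).
use std::io;
use std::os::unix::io::RawFd;

fn splice_loop(sock_in: RawFd, sock_out: RawFd) -> io::Result<()> {
    // splice() can only move data when one side is a pipe, so data flows
    // socket -> pipe -> socket entirely inside the kernel.
    let mut pipe_fds = [0i32; 2];
    if unsafe { libc::pipe(pipe_fds.as_mut_ptr()) } < 0 {
        return Err(io::Error::last_os_error());
    }
    let (pipe_rd, pipe_wr) = (pipe_fds[0], pipe_fds[1]);
    let flags = libc::SPLICE_F_MOVE | libc::SPLICE_F_MORE;

    loop {
        // Pull up to 1 MiB from the inbound socket into the pipe.
        let n = unsafe {
            libc::splice(sock_in, std::ptr::null_mut(), pipe_wr,
                         std::ptr::null_mut(), 1 << 20, flags)
        };
        if n <= 0 { break; } // 0 = peer closed, <0 = error

        // Push everything that landed in the pipe out to the other socket.
        let mut left = n;
        while left > 0 {
            let m = unsafe {
                libc::splice(pipe_rd, std::ptr::null_mut(), sock_out,
                             std::ptr::null_mut(), left as usize, flags)
            };
            if m <= 0 { return Err(io::Error::last_os_error()); }
            left -= m;
        }
    }
    Ok(())
}
```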
Section E | Questions

Frequently Asked

What is Nextmini?
Nextmini is a modern networking research testbed for network emulation and experimentation. It is implemented in Rust, designed around a user-space dataplane, and supports running arbitrary application workloads directly on an emulated network.

What makes Nextmini different?
Nextmini emphasizes a user-space dataplane (for flexibility), first-class Docker deployment (for scale-out), and TUN interfaces (to run real workloads unmodified). It also supports a high-performance max mode using Linux splice for extreme forwarding throughput.

What is the difference between normal mode and max mode?
Normal mode processes and forwards packets in user space to maximize flexibility for scheduling, dropping, shaping, and routing. Max mode “connects” TCP connections in the kernel using splice to approach kernel-level performance, trading off some dataplane flexibility.

Can real applications run on the emulated network?
Yes. Nextmini exposes a virtual network via TUN interfaces inside dataplane nodes, allowing unmodified distributed applications (including distributed ML training) to run on the emulated topology.

Can Nextmini be deployed beyond a single machine?
Yes. With Docker Swarm orchestration and user-space topologies over persistent TCP/QUIC connections, Nextmini can be deployed across clusters and even geographically distributed datacenters.

Powered by
Rust.

A new research testbed for network emulation and experimentation: user-space by design, max-performance when needed, and ready for real workloads and multi-region deployments.

Xindan Zhang, Shengwen Chang, Baochun Li — University of Toronto