A new research testbed for network emulation and experimentation, built in Rust. Nextmini combines user-space flexibility with an optional max-performance mode, and runs real workloads directly on emulated topologies.
Two-hop throughput (max mode, TCP splicing)
TUN throughput (multi-queue + TSO enabled)
Per-node memory (namespace mode)
Nodes on a single machine (namespace mode)
A Rust + tokio dataplane built around the actor model, designed for flexibility when implementing new routing, scheduling, dropping, and shaping policies (a minimal sketch follows below).
Stackless coroutines and multi-threaded executors make high-concurrency user-space networking practical, without kernel modules or eBPF.
Normal mode prioritizes algorithmic flexibility; max mode uses Linux TCP splicing (splice) to approach kernel-level forwarding throughput.
Each node exposes a TUN interface so distributed applications (including ML training) can run unmodified on emulated topologies—locally or across regions.
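To make the actor-model design concrete, here is a minimal sketch (not Nextmini's actual API) of a dataplane node as a tokio task: it owns an mpsc inbox, applies a controller-installed policy such as added delay and random loss, and forwards packets to the next hop. The `Packet` and `Policy` types are illustrative placeholders.

```rust
// Minimal sketch of an actor-style dataplane node on tokio (illustrative only).
use std::time::Duration;
use tokio::sync::mpsc;

struct Packet {
    payload: Vec<u8>,
}

struct Policy {
    delay: Duration,       // emulated link latency added per packet
    drop_probability: f64, // probability of dropping a packet
}

async fn node_actor(mut inbox: mpsc::Receiver<Packet>, next_hop: mpsc::Sender<Packet>, policy: Policy) {
    while let Some(pkt) = inbox.recv().await {
        // Shaping: emulate link latency before forwarding.
        tokio::time::sleep(policy.delay).await;
        // Dropping: randomly discard packets according to the installed policy.
        if rand::random::<f64>() < policy.drop_probability {
            continue;
        }
        if next_hop.send(pkt).await.is_err() {
            break; // downstream node has shut down
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx_a, rx_a) = mpsc::channel::<Packet>(1024);
    let (tx_b, mut rx_b) = mpsc::channel::<Packet>(1024);
    // Node A forwards to node B with a 5 ms emulated delay and no loss.
    let policy = Policy { delay: Duration::from_millis(5), drop_probability: 0.0 };
    tokio::spawn(node_actor(rx_a, tx_b, policy));
    tx_a.send(Packet { payload: b"hello".to_vec() }).await.unwrap();
    println!("received {} bytes", rx_b.recv().await.unwrap().payload.len());
}
```

Because each node is just a task behind a channel, new scheduling or dropping policies can be swapped in without touching the rest of the pipeline.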
“A system that is not understood in its entirety, or at least to a significant degree of detail by a single individual, should probably not be built.” — Niklaus Wirth, “A Plea for Lean Software”
Deploy dataplane nodes in Docker (single machine, cluster, or multi-region) or in namespace mode for single-machine scale.
Nodes form an overlay topology with persistent TCP or QUIC links, forwarding packets in user space under controller-installed policies.
Run arbitrary applications over the virtual network via TUN, or generate synthetic user-space TCP flows inside the dataplane.
The controller exchanges WebSocket messages with nodes, stores state in PostgreSQL, and enables Python control scripts via real-time triggers.
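As an illustration of the controller-to-node messaging, the sketch below defines a hypothetical control-message enum serialized as JSON with serde; frames like these could be carried over the WebSocket channel in either direction. The message names and fields are assumptions, not Nextmini's actual schema.

```rust
// Hypothetical control messages exchanged between controller and nodes.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "type", rename_all = "snake_case")]
enum ControlMessage {
    InstallRoute { dst_prefix: String, next_hop: String },
    SetLink { peer: String, delay_ms: u64, loss_pct: f64 },
    ReportMetrics { node: String, tx_bytes: u64, rx_bytes: u64 },
}

fn main() -> serde_json::Result<()> {
    let msg = ControlMessage::SetLink { peer: "node-2".into(), delay_ms: 40, loss_pct: 0.5 };
    let frame = serde_json::to_string(&msg)?; // e.g. {"type":"set_link","peer":"node-2",...}
    let parsed: ControlMessage = serde_json::from_str(&frame)?;
    println!("{frame} -> {parsed:?}");
    Ok(())
}
```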
Nextmini follows a software-defined architecture with a centralized controller and user-space dataplane nodes, typically running in Docker containers. Topologies are implemented as persistent TCP or QUIC connections between nodes.
This enables two-way control messaging, real-time metric collection, and Python-scripted control-plane algorithms via PostgreSQL triggers—while preserving an optional path to extreme throughput in max mode.
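The trigger-driven control path can be pictured with PostgreSQL's LISTEN/NOTIFY mechanism. The sketch below, written in Rust with tokio-postgres for consistency with the other examples, subscribes to a channel that a trigger on a metrics table might NOTIFY; the connection string, channel name, and schema are placeholders rather than Nextmini's actual setup.

```rust
// Rough sketch of the LISTEN/NOTIFY mechanism behind trigger-driven control logic.
use futures_util::{stream, StreamExt};
use tokio_postgres::{AsyncMessage, NoTls};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (client, mut connection) =
        tokio_postgres::connect("host=localhost user=postgres dbname=nextmini", NoTls).await?;

    // The connection must be polled to surface asynchronous NOTIFY messages.
    let mut messages = stream::poll_fn(move |cx| connection.poll_message(cx));
    tokio::spawn(async move {
        while let Some(msg) = messages.next().await {
            match msg {
                Ok(AsyncMessage::Notification(n)) => {
                    // A trigger on a metrics table could NOTIFY here; a control
                    // script reacts to the payload in real time.
                    println!("channel {}: {}", n.channel(), n.payload());
                }
                Ok(_) => {}
                Err(e) => {
                    eprintln!("connection error: {e}");
                    break;
                }
            }
        }
    });

    client.batch_execute("LISTEN node_metrics;").await?;
    // ... insert metric rows and let triggers fire; notifications arrive in the task above.
    Ok(())
}
```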
“Max mode reaches 132.1 Gbps over two hops on a consumer-grade laptop, slightly outperforming Mininet with kernel packet switches.”
“In namespace mode, Nextmini sustains ~12 Gbps while scaling to 40 hops, and keeps per-node memory around 15 MB (enabling 1,000 nodes on one machine).”
“An unmodified PyTorch DDP example runs over a four-datacenter deployment: LeNet-5 averages 0.08s/iter locally vs 2.24s/iter across regions (28× slower due to WAN bandwidth).”
Packets are processed and forwarded in user space under policies installed by the controller, prioritizing flexibility for experimenting with new dataplane mechanisms.
For extreme throughput, Nextmini can “connect” TCP connections in the kernel using splice, bypassing user-space forwarding and approaching kernel-level performance.
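The splice(2) idea behind max mode can be sketched as follows: a pipe serves as an in-kernel buffer, and bytes move from one TCP socket to the other without ever being copied into user space. This is an illustrative, one-directional loop with minimal error handling, not Nextmini's actual forwarding code.

```rust
// Illustrative one-way splice loop between two connected TCP sockets.
use std::net::TcpStream;
use std::os::fd::AsRawFd;

fn splice_loop(a: &TcpStream, b: &TcpStream) -> std::io::Result<()> {
    // A pipe acts as the in-kernel buffer between the two sockets.
    let mut pipe_fds = [0i32; 2];
    if unsafe { libc::pipe(pipe_fds.as_mut_ptr()) } < 0 {
        return Err(std::io::Error::last_os_error());
    }
    let (pipe_rd, pipe_wr) = (pipe_fds[0], pipe_fds[1]);

    loop {
        // Socket A -> pipe (data stays in kernel space).
        let n = unsafe {
            libc::splice(a.as_raw_fd(), std::ptr::null_mut(), pipe_wr, std::ptr::null_mut(),
                         64 * 1024, libc::SPLICE_F_MOVE)
        };
        if n <= 0 {
            break; // EOF or error on the incoming connection
        }
        // Pipe -> socket B.
        let mut remaining = n;
        while remaining > 0 {
            let m = unsafe {
                libc::splice(pipe_rd, std::ptr::null_mut(), b.as_raw_fd(), std::ptr::null_mut(),
                             remaining as usize, libc::SPLICE_F_MOVE)
            };
            if m <= 0 {
                return Err(std::io::Error::last_os_error());
            }
            remaining -= m;
        }
    }
    unsafe { libc::close(pipe_rd); libc::close(pipe_wr); }
    Ok(())
}
```

A full forwarder would run a second, symmetric loop for the reverse direction; the trade-off is that spliced bytes bypass user space, so per-packet policies can no longer be applied on that path.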
Nextmini is a modern research testbed for network emulation and experimentation. It is implemented in Rust, built around a user-space dataplane, and supports running arbitrary application workloads directly on an emulated network.
Nextmini emphasizes a user-space dataplane (for flexibility), first-class Docker deployment (for scale-out), and TUN interfaces (to run real workloads unmodified). It also supports a high-performance max mode using Linux splice for extreme forwarding throughput.
Normal mode processes and forwards packets in user space to maximize flexibility for scheduling, dropping, shaping, and routing. Max mode “connects” TCP connections in the kernel using splice to approach kernel-level performance, trading off some dataplane flexibility.
Yes. Nextmini exposes a virtual network via TUN interfaces inside dataplane nodes, allowing unmodified distributed applications (including distributed ML training) to run on the emulated topology.
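As a rough illustration of how a TUN-backed node carries unmodified traffic, the sketch below creates a TUN device with the third-party tun-tap crate (an assumption; Nextmini may create and manage its devices differently) and reads raw IP packets that any local application sends into the emulated subnet. Creating the device requires CAP_NET_ADMIN.

```rust
// Sketch of the TUN idea: applications write IP packets into a tun device,
// and the dataplane reads them as plain bytes for forwarding.
use tun_tap::{Iface, Mode};

fn main() -> std::io::Result<()> {
    // Create a TUN device without the 4-byte packet-info header; the kernel
    // replaces %d with the next free index.
    let iface = Iface::without_packet_info("nextmini%d", Mode::Tun)?;
    println!("created {}", iface.name());
    // Assign an address out of band, e.g. `ip addr add 10.0.0.1/24 dev <name>`
    // and `ip link set <name> up`; applications using that subnet are then
    // carried over the emulated topology.

    let mut buf = [0u8; 1504];
    loop {
        let n = iface.recv(&mut buf)?; // one IP packet from the kernel
        // Here the dataplane would apply policies and forward the packet over
        // its persistent TCP/QUIC links to the next emulated hop.
        println!("read {} byte packet", n);
    }
}
```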
Yes. With Docker Swarm orchestration and user-space topologies over persistent TCP/QUIC connections, Nextmini can be deployed across clusters and even geographically distributed datacenters.
A new research testbed for network emulation and experimentation: user-space by design, max-performance when needed, and ready for real workloads and multi-region deployments.
Xindan Zhang, Shengwen Chang, Baochun Li — University of Toronto