When you run code through SmolVM, your commands execute inside a separate virtual machine — not on your host. This page explains the moving parts and why they are designed this way.
## What happens when you run code
When you call `SmolVM()` and run a command, five things happen behind the scenes:
- SmolVM builds or reuses a lightweight Linux image — an Alpine-based filesystem with SSH pre-configured.
- A microVM boots using Firecracker (Linux) or QEMU (macOS).
- A private network is created so the sandbox can reach the internet but is isolated from other sandboxes.
- SmolVM connects over SSH to execute your commands and return the output.
- Everything is torn down when you exit the `with` block.
The whole cycle — boot, run, teardown — takes about 3.5 seconds on typical hardware.
```python
from smolvm import SmolVM

with SmolVM(mem_size_mib=2048) as vm:
    result = vm.run("free -m")
    print(result.stdout)
# Sandbox automatically stopped and cleaned up
```
## Why microVMs instead of containers
AI agents and applications often need to run code that comes from a language model — Python scripts, shell commands, or browser automation. Running that code directly on your host or inside a container can be risky because containers share the host kernel.
SmolVM uses KVM-backed microVMs, which provide hardware-level isolation:
- Stronger boundary — each sandbox runs its own kernel, so a breakout would require a hypervisor exploit, not just a kernel vulnerability
- Fast startup — microVMs boot in under a second, close to container speed
- Low overhead — minimal memory footprint compared to traditional VMs
## Key components
### SDK (`SmolVM` class)
The main interface you import in Python. It handles VM lifecycle (create, start, stop, delete), SSH-based command execution via `vm.run()`, auto-configuration so you can get started with zero config, and reconnection to existing sandboxes via `SmolVM.from_id()`.
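The create/start/stop/delete lifecycle the SDK manages maps naturally onto Python's context-manager protocol. The sketch below is a toy stand-in, not SmolVM's implementation: the `SandboxLifecycle` class and its state strings are invented for illustration, while the real class boots and tears down an actual microVM.

```python
# Illustrative sketch only: a minimal context manager following the same
# create/start/stop/delete pattern described above. The real SmolVM class
# manages microVMs over SSH; this stand-in just tracks state transitions.
class SandboxLifecycle:
    def __init__(self):
        self.state = "created"

    def __enter__(self):
        self.state = "running"  # corresponds to create + start
        return self

    def run(self, command):
        # The real SDK executes `command` inside the VM over SSH;
        # here we just confirm the sandbox is running and echo it back.
        assert self.state == "running"
        return f"ran: {command}"

    def __exit__(self, exc_type, exc, tb):
        self.state = "deleted"  # corresponds to stop + delete
        return False

with SandboxLifecycle() as vm:
    print(vm.run("free -m"))  # ran: free -m
```

The `with` form guarantees teardown runs even if the body raises, which is why the docs' examples never call stop or delete explicitly.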
### CLI (`smolvm` command)
A terminal interface for creating sandboxes, starting browser sessions, running diagnostics, and managing snapshots. Useful for scripting, debugging, and quick exploration.
### Network layer
Each sandbox gets a dedicated TAP device, a private IP address in the 172.16.0.0/16 range, and automatic NAT for outbound connectivity. Sandboxes are isolated from each other by default.
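The addressing scheme can be illustrated with the standard library's `ipaddress` module. Carving the pool into /30 subnets is an assumption made for this sketch, not SmolVM's actual allocation strategy; the point is that each sandbox gets a disjoint slice of 172.16.0.0/16.

```python
import ipaddress

# The sandbox network lives in 172.16.0.0/16. As an illustration, carve it
# into /30 subnets so each sandbox gets its own tiny point-to-point network;
# the real allocator may divide the range differently.
pool = ipaddress.ip_network("172.16.0.0/16")
subnets = pool.subnets(new_prefix=30)

first = next(subnets)
second = next(subnets)
print(first, [str(h) for h in first.hosts()])  # 172.16.0.0/30 and its two usable hosts
print(first.overlaps(second))                  # False: disjoint ranges, isolated sandboxes
```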
### State store
SmolVM tracks sandbox metadata, network assignments, and process state in a local SQLite database at `~/.local/state/smolvm/smolvm.db`. This lets you reconnect to running sandboxes across Python sessions.
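A toy version of such a state store can be sketched with the standard `sqlite3` module. The table name and columns below are invented for illustration and do not reflect SmolVM's actual schema, which is an internal detail.

```python
import sqlite3

# Hypothetical schema for illustration only; the real store lives at
# ~/.local/state/smolvm/smolvm.db and its layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sandboxes (id TEXT PRIMARY KEY, ip TEXT, state TEXT)")
conn.execute("INSERT INTO sandboxes VALUES (?, ?, ?)", ("sb-01", "172.16.0.2", "running"))
conn.commit()

# Reconnecting in a later session would look the record up by id,
# much like SmolVM.from_id() does against its own database.
row = conn.execute(
    "SELECT ip, state FROM sandboxes WHERE id = ?", ("sb-01",)
).fetchone()
print(row)  # ('172.16.0.2', 'running')
```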
### Image builder
Builds Alpine Linux root filesystems with SSH pre-configured. You can also create custom images with your own tools and dependencies baked in.
## Resource defaults
| Setting | Default | Range |
|---|---|---|
| vCPUs | 1 | 1–32 |
| Memory | 512 MiB | 128–16384 MiB |
| Disk | 512 MiB | 64 MiB minimum |
| Disk mode | isolated (per-VM copy) | isolated or shared |
```python
from smolvm import SmolVM

# Customize resources
with SmolVM(mem_size_mib=2048, disk_size_mib=4096) as vm:
    vm.run("echo 'more room to work'")
```
Typical lifecycle timings (p50) on a standard Linux host:
| Phase | Time |
|---|---|
| Create + Start | ~572ms |
| SSH ready | ~2.1s |
| Command execution | ~43ms |
| Stop + Delete | ~751ms |
| Full lifecycle (boot, run, teardown) | ~3.5s |
Measured on AMD Ryzen 7 7800X3D (8C/16T), Ubuntu Linux, KVM/Firecracker backend.
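As a quick sanity check, the per-phase p50 timings in the table do sum to roughly the quoted full-lifecycle figure:

```python
# Per-phase p50 timings from the table above, in seconds.
phases = {
    "create_start": 0.572,
    "ssh_ready": 2.1,
    "command": 0.043,
    "stop_delete": 0.751,
}
total = sum(phases.values())
print(f"{total:.2f}s")  # 3.47s, i.e. ~3.5s for the full lifecycle
```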
## Next steps