Why .cell is 250× Faster Than E2B
E2B boots an entire Linux kernel for every code execution. We don't. Here's the architectural difference — and why it matters for AI agents executing millions of function calls per day.
The Problem: AI Agents Need Fast Code Execution
Modern AI agents — Claude, GPT-4, Gemini — need to execute code as a tool call. When an agent generates Python or JavaScript and needs to run it, the execution substrate matters. A lot.
E2B is the market leader. They provide sandboxed code execution via Firecracker microVMs. It works. But there's a hidden cost: every execution pays the VM tax.
Two Architectures, Wildly Different Results
E2B (Firecracker microVM): API request → Boot microVM (Linux kernel) → Start language runtime → Execute → Capture output ≈ 500ms cold start

.cell (WASI sandbox): API request → Instantiate Wasm module → Execute → Capture output ≈ 2ms cold start
The difference is structural, not incremental. E2B can't optimize their way to our numbers because the architecture itself is the bottleneck. A microVM will always need to boot a kernel. A Wasm sandbox never will.
What .cell does instead
.cell uses WebAssembly System Interface (WASI) sandboxes powered by Wasmtime with Cranelift JIT compilation. Language interpreters — QuickJS for JavaScript, CPython 3.12 for Python — are pre-compiled to Wasm at server startup. When code arrives:
- Instantiate the pre-compiled Wasm module (~0.5ms)
- Configure WASI context: stdout/stderr pipes, preopened /data/ directory
- Execute the code with fuel metering (DoS protection)
- Capture stdout, stderr, exit code
- Hash the code + output for a cryptographic receipt
No kernel. No full filesystem, just the preopened /data/ directory. No networking stack. No init system. Just your code, running in a sandbox.
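Fuel metering is the DoS control in step 3: every Wasm instruction consumes fuel, and execution traps deterministically once the budget is spent. A toy Python sketch of the concept (not Wasmtime's actual mechanism, which is compiled into the generated code by Cranelift):

```python
class OutOfFuel(Exception):
    """Raised when the execution budget is exhausted."""

def run_with_fuel(steps, fuel):
    """Run an iterable of zero-argument step functions, charging 1 fuel each.

    Mirrors the idea behind fuel metering: a runaway or hostile program is
    cut off after a fixed amount of work instead of hanging the host.
    """
    for step in steps:
        if fuel <= 0:
            raise OutOfFuel("execution budget exhausted")
        fuel -= 1
        step()
    return fuel

# A well-behaved program finishes with fuel to spare...
leftover = run_with_fuel([lambda: None] * 10, fuel=100)
print(leftover)  # 90

# ...while an infinite loop is trapped instead of spinning forever.
def forever():
    while True:
        yield lambda: None

try:
    run_with_fuel(forever(), fuel=1000)
except OutOfFuel:
    print("trapped")
```

The key property: the cutoff depends only on the work performed, not on wall-clock time, so it is itself deterministic.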
The Numbers
Measured on Hetzner AX102 (AMD Ryzen 9 7950X3D, 128GB DDR5). Real workloads, real numbers.
| | E2B (Firecracker) | .cell (WASI) |
|---|---|---|
| Cold start | ~500ms | <2ms |
| Architecture | microVM (Linux kernel) | Wasm sandbox |
| JavaScript | ✅ V8/Node | ✅ QuickJS |
| Python | ✅ CPython | ✅ CPython 3.12 (WASI) |
| Execution receipts | ❌ | ✅ SHA-256 |
| Deterministic | ❌ (VM non-determinism) | ✅ (Wasm spec) |
| Self-hosted | Enterprise $$$ | Free (open source) |
Why This Matters for AI Agents
An AI agent making 1,000 tool calls per session (not uncommon for complex tasks) pays the execution tax 1,000 times.
| | E2B | .cell |
|---|---|---|
| 1,000 JS executions | ~500 seconds | <2 seconds |
| Cost per million execs | ~$138* VM time | Same price, 250× faster |
| User experience | Noticeable lag | Instant |
*Based on E2B's published pricing of $0.000138/second of VM time.
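The figures in the table above are easy to reproduce from the cold-start numbers and the published rate. A quick back-of-the-envelope check (the one-billed-second-per-execution assumption is ours, implied by the ~$138 figure):

```python
CALLS = 1_000
E2B_COLD_START_S = 0.500    # ~500 ms per execution
CELL_COLD_START_S = 0.002   # <2 ms per execution
E2B_RATE_PER_S = 0.000138   # published per-second VM pricing

# Latency one agent session pays across 1,000 tool calls
print(CALLS * E2B_COLD_START_S)              # 500.0 seconds on E2B
print(round(CALLS * CELL_COLD_START_S, 3))   # 2.0 seconds on .cell

# VM-time cost of one million executions, assuming ~1 billed second each
print(round(1_000_000 * E2B_RATE_PER_S, 2))  # 138.0 dollars
```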
Cryptographic Receipts: Something E2B Can't Do
Every .cell execution produces a cryptographic receipt — a SHA-256 hash chain linking the code, output, and execution metadata. VM-based sandboxes can't offer the same guarantee: non-deterministic OS behavior means the same code can produce different output, and therefore different hashes, on different runs, so a VM receipt can never be independently re-verified.
```json
{
  "execution_id": "1656c200-...",
  "code_hash": "f2f248f0...",    // SHA-256(code)
  "result_hash": "02787652...",  // SHA-256(stdout:stderr)
  "template": "javascript",
  "timestamp": 1712537284000
}
```
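The receipt fields map directly onto standard SHA-256 calls. A minimal sketch of how the two hashes could be derived (field layout taken from the example above; the exact byte encoding is our assumption):

```python
import hashlib

def make_receipt(code: str, stdout: str, stderr: str,
                 template: str, ts_ms: int) -> dict:
    """Hash an execution's inputs and outputs into an audit receipt.

    code_hash   = SHA-256 over the submitted source
    result_hash = SHA-256 over "stdout:stderr", per the receipt format above
    """
    return {
        "code_hash": hashlib.sha256(code.encode()).hexdigest(),
        "result_hash": hashlib.sha256(f"{stdout}:{stderr}".encode()).hexdigest(),
        "template": template,
        "timestamp": ts_ms,
    }

r1 = make_receipt("console.log(42 * 42)", "1764\n", "", "javascript", 1712537284000)
r2 = make_receipt("console.log(42 * 42)", "1764\n", "", "javascript", 1712537284000)

# Deterministic execution means identical inputs yield identical receipts,
# so any third party can re-run the code and verify the hashes.
print(r1 == r2)                # True
print(len(r1["code_hash"]))    # 64
```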
Why does this matter? For regulated industries (finance, healthcare, defense), you need audit trails. You need to prove that a specific piece of code produced a specific output at a specific time. .cell gives you that on every execution, automatically.
The Technical Stack
```
// Gateway: Rust (zero-dependency HTTP server)
// Runtime: Wasmtime + Cranelift JIT
// JS:      QuickJS (1.3MB Wasm module)
// Python:  CPython 3.12 (26MB Wasm module, VMware WASI build)
// Sandbox: WASI with preopened /data/, fuel metering
// Receipt: SHA-256 hash chain
// API:     MCP-native (Model Context Protocol)
```
The gateway is a single Rust binary (~37MB Docker image). No containers needed on the host. Templates are JIT-compiled once at startup (QuickJS: 111ms, CPython 3.12: 695ms), then instantiated per-execution in microseconds.
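The compile-once/instantiate-many split is the same pattern as caching a compiled artifact in any runtime. As a loose analogy only (Python's built-in compile()/exec, not Wasmtime's API): the expensive step runs once at startup, and every execution reuses the cached result:

```python
import time

# Expensive step, done once at startup: compile the "template".
# (Analogy for JIT-compiling the QuickJS/CPython Wasm modules to native code.)
template = compile("print(6 * 7)", "<template>", "exec")

# Cheap step, done per request: run the pre-compiled object.
start = time.perf_counter()
exec(template)  # prints 42
elapsed = time.perf_counter() - start
print(elapsed < 1.0)  # True: per-execution cost is tiny next to compilation
```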
Try It Right Now
The demo is live. No account needed. Execute JavaScript or Python on our AX102 server and see the latency for yourself.
```bash
# JavaScript (<2ms)
$ curl -X POST http://65.108.120.219:8002/v1/demo/exec \
  -H "Content-Type: application/json" \
  -d '{"code":"console.log(42 * 42)","language":"javascript"}'

# Python (~35ms)
$ curl -X POST http://65.108.120.219:8002/v1/demo/exec \
  -H "Content-Type: application/json" \
  -d '{"code":"print(42 * 42)","language":"python3"}'
```
Stop Paying the VM Tax
Same price. Better performance. Self-hosted for free.