It started in Rust

The first version of the simulation engine was written in Rust with Tokio for async I/O. Rust was the obvious choice for a performance-sensitive event loop: zero-cost abstractions, no GC pauses, and precise control over memory layout. The core tick replay loop worked well and was genuinely fast.

The early architecture used MongoDB for tick data storage and ZeroMQ for passing messages between the feed handlers, the simulation engine, and the strategy process. Both caused problems. The Rust MongoDB driver was immature at the time: the subscription support we needed was not straightforward to implement, and the driver's performance under our workload left a lot to be desired. ZeroMQ was worse: the Rust bindings were unstable under our async Tokio runtime, with sporadic socket hangs that were nearly impossible to reproduce and painful to debug across process boundaries.

The decisive problem, though, was the strategy API. We wanted users to write strategies in Python — the language algo traders already use — which meant embedding a Python runtime and crossing the FFI boundary on every tick callback. In Rust, safe Python interop via pyo3 added significant friction: every value crossing the boundary needed careful lifetime management, and the ergonomics of calling into Python from an async Rust context were rough.

After a few weeks of fighting the borrow checker over PyObject lifetimes, on top of the MongoDB and ZeroMQ issues, we decided to rethink the whole stack.

Why Nim

Nim compiles to C, which gives it near-native performance without a runtime or GC pauses in hot paths (memory management is ARC/ORC — deterministic reference counting). More importantly, it has nimpy — a first-class Python FFI library that lets you embed CPython and call back and forth with minimal boilerplate.

The strategy execution model became straightforward: the Nim simulation engine drives the event loop, and on each tick it calls into the user's Python strategy object via nimpy. The Python side gets a clean on_tick(tick) callback with a typed object, and can place orders by calling back into the Nim OMS layer. No async complexity, no lifetime annotations — just a regular function call across the language boundary.

The entire backend is a single Nim binary. The web server uses mummy (an HTTP server library for Nim). The frontend is compiled from Nim to JavaScript using Karax. Persistent storage is handled by debby. No Node, no separate API service, no Docker compose file with six containers.

Build output: nim c -d:release --threads:on --app:lib --out:pyscripts/ctrl.pyd src/player/ctrl.nim — the Python-callable simulation core compiles to a .pyd native extension in one step.

The simulation engine

The engine is a discrete-event simulator (DES) — roughly two years of work alongside everything else on the project. Time advances tick by tick through the stored exchange feed. On each event the engine:

  1. Updates the current best bid/ask state from the incoming quote
  2. Checks whether any pending orders are now fillable given the new book state
  3. Applies the configured OMS delay (2 ms by default) to model order propagation latency
  4. Fires the strategy's on_tick callback with the current market state
  5. Processes any new orders the strategy placed, adding them to the pending order queue
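The five steps can be sketched as a small event loop. This is a toy model under stated assumptions — the data structures, the dict-shaped orders, and the marketability rule are ours, not the real engine's:

```python
import heapq
from itertools import count

class Engine:
    """Minimal sketch of the tick loop above. Illustrative only."""
    OMS_DELAY = 0.002  # 2 ms default order-propagation latency

    def __init__(self, strategy):
        self.strategy = strategy
        self.pending = []    # orders still in flight to the exchange
        self.live = []       # orders resting in the book
        self.fills = []
        self._seq = count()  # heap tie-breaker

    def on_quote(self, t, bid, ask):
        # 1. update the current best bid/ask state
        self.bid, self.ask = bid, ask
        # 3. orders whose OMS delay has elapsed become live...
        while self.pending and self.pending[0][0] <= t:
            self.live.append(heapq.heappop(self.pending)[2])
        # 2. ...and live orders are checked against the new book state
        still_open = []
        for o in self.live:
            marketable = (o["side"] == "buy" and ask <= o["price"]) or \
                         (o["side"] == "sell" and bid >= o["price"])
            (self.fills if marketable else still_open).append(o)
        self.live = still_open
        # 4. fire the strategy's on_tick callback with the market state
        new = self.strategy.on_tick({"time": t, "bid": bid, "ask": ask}) or []
        # 5. new orders join the pending queue, active after the OMS delay
        for o in new:
            heapq.heappush(self.pending, (t + self.OMS_DELAY, next(self._seq), o))
```

Note how the OMS delay is modeled: an order placed at time t cannot fill against quotes arriving before t + 2 ms, which prevents the strategy from trading on information it could not yet have acted on.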

Order types supported: limit, market, stop-loss, and stop-limit. The engine tracks each limit order's queue position so that passive fills are only granted after the ahead-of-queue volume has traded through.
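The queue-position rule can be expressed in a few lines. A sketch of the accounting — the class and method names are hypothetical, but the invariant matches the text: trades at our price consume the ahead-of-queue volume before any of them fill us:

```python
class PassiveOrder:
    """Queue-position tracking for a resting limit order.
    volume_ahead is the size already resting at our price when we joined."""

    def __init__(self, qty, volume_ahead):
        self.qty = qty
        self.ahead = volume_ahead

    def on_trade_at_our_price(self, traded_qty):
        """Return how much of this trade fills us."""
        eats_queue = min(self.ahead, traded_qty)   # queue ahead absorbs first
        self.ahead -= eats_queue
        fill = min(self.qty, traded_qty - eats_queue)
        self.qty -= fill
        return fill
```

Without this, a backtest grants passive fills the instant price touches the limit — systematically optimistic for any strategy that rests orders in the book.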

Performance on a typical day's data: 62,000 quote events processed in ~1.3 seconds with a Python strategy running on every tick. Without the Python callback overhead the same loop runs in roughly 6 ms. The bottleneck is CPython, not the simulation engine.
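Back-of-envelope arithmetic from those numbers makes the bottleneck concrete:

```python
events = 62_000
with_python = 1.3       # seconds, strategy callback on every tick
engine_only = 0.006     # seconds, same loop without the Python callback

per_tick_engine = engine_only / events                    # ~0.1 µs per event
per_tick_callback = (with_python - engine_only) / events  # ~21 µs per event

print(f"engine:   {per_tick_engine * 1e6:.2f} µs/tick")
print(f"callback: {per_tick_callback * 1e6:.1f} µs/tick")
```

Roughly 21 µs of CPython overhead per tick against about 0.1 µs of simulation work — a factor of ~200, all of it on the Python side of the boundary.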

Feed handlers

Two always-on Nim feed handlers collect market data continuously — one for crypto pairs (BTCUSD, ETHUSD, SOLUSD, XRPUSD and others via Coinbase and Binance), one for FX majors (EURUSD, USDJPY, GBPUSD, AUDUSD and more). Data is partitioned by symbol and date, then made available for replay.
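Symbol-and-date partitioning keeps each replay request to a handful of sequential files. A sketch of what such a layout could look like — the path scheme and extension are hypothetical, as the text does not describe the on-disk format:

```python
from datetime import date

def partition_path(symbol: str, day: date, root: str = "data") -> str:
    """Hypothetical partition layout: one file per symbol per day."""
    return f"{root}/{symbol}/{day.isoformat()}.ticks"
```

A replay of BTCUSD for one day then touches exactly one partition, and a multi-day backtest streams partitions in date order without scanning other symbols' data.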

Strategy sandboxing

User strategies run inside an unprivileged Linux sandbox. The Python process gets a read-only view of the strategy file and the data it needs, no network access, and strict memory and CPU limits. A strategy that calls import requests and tries to phone home simply gets no network on the basic plan.
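One common way to assemble this kind of sandbox is bubblewrap plus POSIX resource limits. A sketch in that spirit — the text does not specify which sandboxing tool the project actually uses, so treat the bwrap invocation and the limit values as assumptions:

```python
import resource

def limit_resources(cpu_seconds=10, mem_bytes=512 * 1024 * 1024):
    """Intended as a subprocess preexec_fn: hard CPU and memory caps
    on the strategy process (example values, not the project's)."""
    resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

def sandbox_cmd(strategy_path, data_dir):
    """Build a bubblewrap command: read-only strategy and data mounts,
    no network namespace. Flags are bubblewrap's own."""
    return ["bwrap",
            "--ro-bind", strategy_path, "/strategy.py",
            "--ro-bind", data_dir, "/data",
            "--unshare-net",        # strategy gets no network at all
            "--die-with-parent",
            "python3", "/strategy.py"]
```

With --unshare-net the strategy's network namespace contains no usable interfaces, so import requests succeeds but every connection attempt fails immediately.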

Controlled network access is on the roadmap for higher tiers — opening the door to strategies that connect to their own infrastructure, external data feeds, or private AI models to generate signals.

Technical indicators are available via talipp — an incremental technical analysis library for Python that computes indicators tick by tick without reprocessing the full history on each event. This keeps the per-tick callback overhead bounded regardless of how many candles of history the strategy tracks.
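The incremental principle is worth seeing in miniature. A hand-rolled O(1)-per-tick moving average — this illustrates the idea behind incremental libraries like talipp, not talipp's actual API:

```python
from collections import deque

class IncrementalSMA:
    """Simple moving average updated in O(1) per tick: maintain a running
    sum instead of re-averaging the whole window on every event."""

    def __init__(self, period):
        self.period = period
        self.window = deque(maxlen=period)
        self.total = 0.0

    def add(self, value):
        if len(self.window) == self.period:
            self.total -= self.window[0]  # value about to be evicted
        self.window.append(value)
        self.total += value
        return self.total / len(self.window)
```

A naive implementation re-sums the window on every tick, so cost grows with the period; the incremental form keeps per-tick work constant no matter how much history the strategy tracks.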

AI — at least for the design

We are backend people. The simulation engine, the feed pipeline, the sandboxing — that is where we feel at home. Producing a UI that is at least not painful to look at has never been our strongest point. This time we leaned on AI heavily for the frontend: layout, color choices, copy, and the overall visual structure of the landing page and app were all done with AI assistance. We would not have shipped something presentable without it.

What is next

The current data feed is Level-1 (best bid/ask + trades). Full Level-2 order book depth has already been implemented and tested — the simulation engine has queue position tracking, and the feed layer can collect the data. What is holding it back is storage volume and server capacity: full order book data is significantly larger, and serving it at replay speed requires better infrastructure. It will roll out as the project grows.

If any of this sounds interesting — or if you spot something we should have done differently — we are reachable via Discord, by email at support@hfts.app, or through the contacts page. You can also follow along on the blog.