NLNet NGI Zero Commons Fund — Grant Proposal Draft
Deadline: April 1st 2026, 12:00 CEST
Fund: NGI Zero Commons Fund
Amount range: €5,000–€50,000
Proposal Name
piker: sovereign, structured-concurrent trading infrastructure for the commons
Website / Wiki
- https://pikers.dev/pikers/piker (self-hosted Gitea)
- https://github.com/pikers/piker (eventual marketing mirror)
- https://github.com/goodboy/tractor (core runtime dependency)
Abstract
piker is a broker-agnostic, fully libre (AGPLv3+) trading toolkit and runtime that enables individuals to participate in financial markets without surrendering their data, autonomy, or compute infrastructure to proprietary platforms or cloud services.
The project delivers a complete, federated trading stack — real-time data feeds, order management, charting, and (financial) signal processing — built entirely on free software and designed from first principles around structured concurrency, distributed actor supervision, and zero-copy IPC. Every component runs on the user’s own hardware; no centralized service or account is required beyond the market venue itself.
What piker provides today (alpha):
- A multi-process actor runtime (via tractor, our structured concurrency framework built on trio) that supervises broker connections, data feeds, order execution, and UI rendering as a distributed process tree, locally or across hosts.
- A broker abstraction layer with working integrations for Binance, Kraken, Interactive Brokers, Deribit, and Kucoin, presenting a unified API for market data, order submission, and position tracking regardless of venue.
- A high-performance real-time charting UI built on PyQt6 and our own extensions to the PyQtGraph graphics library, driven by shared-memory data buffers (such as OHLCV) that multiple processes read without serialization overhead; a keyboard-driven, modal UI/UX targeting tiling window manager users on Linux.
- A financial signal processing (FSP) subsystem that lets traders compose custom indicators, auto-strategies, real-time analysis, and other streaming-data integrations in modern async (trio) Python, and hot-reload them against live data streams.
- A “dark clearing” engine that keeps all triggered/algorithmic orders entirely trader-client-side, never yielding control to a particular broker/venue, thus minimizing the stop/hedged-order losses often incurred when trading against larger, faster market participants.
- A full “paper trading” engine that simulates order execution against live market feeds, optionally with a per-provider traffic profile config, enabling easy strategy development against live market data without capital risk.
- Apache Arrow / Parquet storage for OHLCV history and trade ledgers, with full accounting primitives (positions, P&L, allocation tracking).
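To make the paper-trading idea concrete, here is a minimal fill-simulation sketch. It is illustrative only: the function name, the `slippage_bps` and `fee_rate` parameters, and the per-venue defaults are all hypothetical stand-ins for piker's actual per-provider traffic profile config.

```python
from dataclasses import dataclass

@dataclass
class PaperFill:
    qty: float
    fill_price: float
    fee: float

def simulate_fill(
    side: str,                   # 'buy' or 'sell'
    qty: float,
    last_price: float,
    slippage_bps: float = 5.0,   # hypothetical venue profile value
    fee_rate: float = 0.001,     # hypothetical taker fee
) -> PaperFill:
    # slippage always moves against the trader: buys fill higher, sells lower
    slip = last_price * slippage_bps / 10_000
    px = last_price + slip if side == 'buy' else last_price - slip
    return PaperFill(qty=qty, fill_price=px, fee=abs(qty) * px * fee_rate)
```

A real engine layers venue-specific fee tiers and latency modeling on top, but the core accounting of adverse slippage plus notional-proportional fees is the same.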
What we intend to deliver with this grant:
- Stabilize the core runtime and IPC layer: harden tractor’s supervision protocol, improve error propagation across actor boundaries, and complete the modular transport stack (TCP → UDS → shared-memory ring buffers → TIPC) so that the distributed architecture is production-reliable. Some of this work in tractor core could include:
  - formalizing the typed IPC-msg semantics and API,
  - building out the hot-code-reload and supervision strategy API,
  - adopting and integrating more modern discovery protocol systems as built-ins,
  - experimenting with the TIPC transport (from “team erlang”, long resident in the Linux kernel, yet oddly little known) as an official IPC backend. Its use could cover outstanding features mentioned above, such as:
    - any remaining sophisticated discovery-system requirements,
    - built-in failover/HA features that would otherwise require per-transport augmentation, at least for most protocols in use across the internet stack,
  - refining the localhost-only shared-memory transport by extending the current implementation with:
    - eventfd signalling around our readers-writer-lock abstractions for managing POSIX shm segments,
    - Apache Arrow buffers as a data-buffer backend for the stdlib’s multiprocessing.shared_memory.
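As a sketch of what “typed IPC-msg semantics” means in practice, here is a toy schema-checked message envelope using only the stdlib. The `Started` message name and its fields are hypothetical; tractor's real wire types are msgspec-based and richer, but the round-trip-with-validation shape is the same idea.

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Started:
    # hypothetical control message, not tractor's actual wire type
    cid: str    # inter-actor context/call id
    pld: dict   # first payload value from the callee

def encode(msg: Started) -> bytes:
    # tag the payload with its type name so the receiver can dispatch
    return json.dumps({'msg_type': type(msg).__name__, **asdict(msg)}).encode()

def decode(wire: bytes) -> Started:
    obj = json.loads(wire)
    if obj.pop('msg_type') != 'Started':
        raise ValueError('unexpected msg type')
    # constructing the dataclass enforces the field schema:
    # missing or extra keys raise TypeError
    return Started(**obj)
```

Formalizing this means every message crossing an actor boundary decodes into a known type or fails loudly, so protocol violations surface at the IPC edge rather than deep inside application code.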
- Extend the broker/data-provider integration set: formalize the pluggable provider backend APIs so that community contributors can more easily add new venues (DEXs, traditional brokerages, prediction markets) without deep framework knowledge. Ideally our ongoing agentic AI dev helpers will make such integrations easy to “vibe” into existence and thus simple to get started with. Publish integration guides and reference implementations.
- Mature the order management and execution system (EMS): complete the state machine for multi-leg and conditional orders, add journaling for audit trails, and ensure the paper-trading engine faithfully models slippage and fee structures per venue.
- Ship packaging and onboarding: produce reproducible builds via Nix flakes and uv-based installs, write user-facing documentation, and establish a contributor onboarding path so the project can grow beyond its current core team.
- Security and privacy hardening: integrate VPN-aware connection management so traders can route venue traffic through tunnels, preventing brokers and market makers from correlating order flow with identity. Audit all IPC surfaces for injection/escalation vectors.
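The EMS state-machine work above can be sketched as an explicit transition table with an append-only journal. The state names and transitions here are illustrative, not piker's actual EMS protocol, but they show the two properties the grant work targets: illegal transitions are impossible, and every accepted transition is journaled for audit.

```python
# legal order-lifecycle transitions (illustrative, not piker's real EMS states)
TRANSITIONS: dict[str, set[str]] = {
    'pending':          {'open', 'rejected'},
    'open':             {'partially_filled', 'filled', 'cancelled'},
    'partially_filled': {'filled', 'cancelled'},
    'filled':           set(),
    'rejected':         set(),
    'cancelled':        set(),
}

class Order:
    def __init__(self) -> None:
        self.state = 'pending'
        self.journal: list[str] = []   # append-only audit trail

    def to(self, new_state: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f'illegal transition {self.state} -> {new_state}')
        self.journal.append(f'{self.state}->{new_state}')
        self.state = new_state
```

Multi-leg and conditional orders extend this with composite states per leg, but the journaled-transition discipline stays the same.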
All outcomes are released under the GNU Affero General Public License v3 or later. The AGPL’s network-interaction clause ensures that anyone who deploys piker as a service must share their modifications, preventing the enclosure of community work by proprietary cloud platforms, which is the dominant failure mode in the financial technology available to most self-funded retail investors and traders today.
Relevant Experience / Prior Involvement
@goodboy is primary creator and steward of both piker and tractor. He has been developing structured-concurrency (SC) distributed systems in Python since 2018 and is an active participant in the trio ecosystem and the broader structured concurrency community (which has since influenced Python’s asyncio.TaskGroup, Kotlin coroutines, Swift concurrency, and Java’s Project Loom).
tractor itself is a novel contribution to the field: it extends the structured concurrency supervision model across process and host boundaries via an IPC-contract-enforced, “SC supervision control protocol” — something no other Python runtime provides nor has seemingly any other project formalized. This design directly informs piker’s architecture and is maintained as a standalone library so that other projects can continue to benefit from its distributed runtime principles and primitives.
The piker community operates via Matrix chat and the core code base is primarily developed on a self-hosted Gitea instance at pikers.dev. Contributors include a small but highly dedicated core group of “hacker traders” (what we call actual “pikers”: a new hybrid type of developer who is also a self-funded independent trader) and a burgeoning surrounding community of devs, engineers, data scientists, traders and investors.
Comparison With Existing Efforts
The landscape of trading software falls into three categories, none of which serve the commons:
1. Proprietary platforms (TradingView, MetaTrader, Bloomberg Terminal, Thinkorswim) These are closed-source, SaaS-dependent, and extract rent from users via subscriptions, data fees, and order-flow selling. Users have zero visibility into how their data is handled, cannot audit execution quality, and are locked into vendor-specific ecosystems. They represent the canonical “market failure” that NLNet’s commons fund targets: an essential digital infrastructure captured by extractive incumbents.
2. Open-source trading libraries (Zipline, Backtrader, ccxt, freqtrade) These provide components (backtesting engines, API wrappers, bot frameworks) but not a complete runtime. They are typically single-process, single-venue, and lack real-time supervision, IPC, or UI. Most use permissive licenses (MIT/Apache) which allow proprietary enclosure — indeed, Zipline was developed by Quantopian (now defunct) and its maintenance has fragmented. None provide a coherent distributed architecture or structured concurrency guarantees.
3. Institutional systems (FIX protocol gateways, internal bank platforms) These are inaccessible to individuals and small firms, require expensive connectivity, and are architecturally rooted in 1990s-era message bus designs (TIBCO, 29West) that predate modern concurrency research.
piker is unique in combining all of the following:
A complete, integrated stack (data → compute → orders → UI) rather than isolated components.
Structured concurrent distribution as a first-class architectural property, not an afterthought. Every subsystem is an actor in a supervised tree; failures propagate and cancel cleanly; resources never leak. This is a direct application of formal concurrency research (Dijkstra, Hoare, and the recent structured concurrency lineage from Trio/Nurseries) to a domain that has historically ignored it.
Hard copyleft licensing (AGPLv3+) that prevents the most common form of open-source value extraction in fintech: wrapping a permissively-licensed library in a proprietary cloud service.
Zero-web architecture: native IPC (Unix domain sockets, shared-memory) and native Qt UI instead of HTTP/WebSocket/Electron. This is not aesthetic preference — it is an engineering decision that eliminates entire classes of latency, security, and complexity problems introduced by the browser runtime.
Venue agnosticism as a design principle: the same codebase, the same UI, the same order management primitives regardless of whether the user trades crypto on Binance, equities on Interactive Brokers, or derivatives on Deribit. No other open-source project attempts this across asset classes with a unified real-time architecture.
Technical Challenges
1. Structured concurrency across host boundaries
tractor’s supervision protocol must guarantee that if any actor in the distributed tree fails, the failure is propagated to all dependent actors and resources are cleaned up — exactly as trio nurseries do within a single process. Achieving this over network transports (which can partition, delay, or corrupt messages) while maintaining low latency is an open research problem. We are implementing a cancellation-scope protocol layered on our IPC message spec that handles partial failures gracefully without resorting to the “let it crash” philosophy that abandons resource cleanup.
2. Zero-copy shared-memory data flow
Our ShmArray primitive allows multiple processes (chart renderer, FSP engine, EMS) to read the same OHLCV buffer without serialization. This requires careful lock-free coordination: the sampling daemon appends new bars while consumers read concurrently. Extending this to ring buffers for tick-level data (L2 order books, trade streams) without introducing GC pauses or memory corruption is a systems-level challenge that demands expertise in memory-mapped I/O and cache-line-aware data structures.
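The core mechanism can be illustrated with stdlib primitives alone. This is a deliberately simplified sketch: piker's actual ShmArray wraps numpy structured arrays with reader/writer index coordination, whereas here a single fixed-layout "bar" is packed into a shared segment and read in place by a second attachment, with no serialization in between.

```python
import struct
from multiprocessing import shared_memory

# one OHLCV bar as five little-endian doubles (illustrative layout)
BAR = struct.Struct('<5d')

# "writer" side: allocate a segment sized for 1024 bars and append one
shm = shared_memory.SharedMemory(create=True, size=BAR.size * 1024)
BAR.pack_into(shm.buf, 0, 100.0, 101.5, 99.8, 101.0, 4200.0)

# "reader" side: attach to the same segment by name and read in place --
# in piker this happens from a different process (chart, FSP engine, EMS)
reader = shared_memory.SharedMemory(name=shm.name)
o, h, l, c, v = BAR.unpack_from(reader.buf, 0)

reader.close()
shm.close()
shm.unlink()
```

The hard part named above is what this sketch omits: lock-free coordination of a moving write index against concurrent readers, which is where the eventfd signalling and cache-line-aware layout work comes in.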
3. Broker API heterogeneity
Each venue has a different API surface, authentication model, rate-limit policy, market data format, and order lifecycle model. Abstracting these behind a unified interface without losing venue-specific capabilities (e.g., Binance’s sub-account system, IB’s complex multi-leg orders, Deribit’s options greeks) requires a plugin architecture that is both general enough to be learnable and specific enough to be useful. We must also handle venue-specific failure modes (exchange maintenance windows, API deprecations, WebSocket reconnection) within the structured concurrency supervision tree.
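One plausible shape for such a plugin architecture is a structurally-typed interface that backends satisfy without inheriting from the framework. The method names below are hypothetical, not piker's actual backend API, but they show how a contributor-facing contract can be both checkable and minimal.

```python
import asyncio
from typing import Protocol, runtime_checkable

@runtime_checkable
class BrokerBackend(Protocol):
    """Hypothetical minimal venue surface; the real API is richer."""
    async def submit_order(self, symbol: str, side: str, qty: float) -> str: ...
    async def cancel_order(self, oid: str) -> bool: ...

class EchoBackend:
    # trivial reference implementation a new contributor could start from;
    # note it never imports the framework, it just matches the Protocol
    async def submit_order(self, symbol: str, side: str, qty: float) -> str:
        return f'{symbol}:{side}:{qty}'
    async def cancel_order(self, oid: str) -> bool:
        return True

assert isinstance(EchoBackend(), BrokerBackend)
print(asyncio.run(EchoBackend().submit_order('BTCUSDT', 'buy', 1.0)))
```

Venue-specific extras (sub-accounts, multi-leg orders, greeks) can then live on optional mixin protocols rather than bloating the core contract.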
4. Real-time UI rendering performance
Financial charting demands smooth 60fps rendering of potentially millions of OHLCV bars with dynamic zoom, overlaid indicators, and interactive order annotations. We have extended PyQtGraph with custom batch-rendering paths and GPU-accelerated drawing, but further work is needed on level-of-detail decimation, viewport culling, and efficient incremental updates as new data arrives — all while the UI thread remains responsive to keyboard input.
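The level-of-detail decimation mentioned above can be sketched as bucketed min/max downsampling (in the spirit of the "M4" family of techniques): when more bars exist than pixels, keep only each bucket's extremes so spikes survive visually. This is a simplified stand-in, not piker's actual render path, which operates on shared-memory arrays inside the Qt paint pipeline.

```python
def decimate(closes: list[float], max_points: int) -> list[float]:
    """Downsample a series for display while preserving per-bucket extremes."""
    if len(closes) <= max_points:
        return closes
    n_buckets = max_points // 2          # each bucket emits its min and max
    size = len(closes) / n_buckets
    out: list[float] = []
    for i in range(n_buckets):
        bucket = closes[int(i * size): int((i + 1) * size)]
        lo, hi = min(bucket), max(bucket)
        # emit the two extremes in their original time order within the bucket
        out.extend([lo, hi] if bucket.index(lo) <= bucket.index(hi) else [hi, lo])
    return out
```

Combined with viewport culling, only the visible slice is decimated per frame, keeping redraw cost bounded by screen width rather than history length.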
5. Reproducible packaging across Linux distributions
piker depends on Qt6, system-level libraries (OpenGL, SSL), Python native extensions (numpy, numba, pyarrow), and our own tractor runtime. Ensuring that uv sync and nix develop both produce working, reproducible environments across NixOS, Arch, Debian, and Fedora — without resorting to containerization that undermines the native-performance philosophy — requires continuous integration testing against multiple targets.
Ecosystem Engagement
Upstream contributions:
piker’s development directly drives improvements in its dependency ecosystem. Bugs and feature requests discovered through our use of trio, pyqtgraph, msgspec, polars, and ib_insync/ib_async are reported and often patched upstream. Our extended PyQtGraph fork contains rendering optimizations that are candidates for upstream merge. tractor itself is a standalone project that any Python developer can use for structured-concurrent multiprocessing, independent of piker.
Community building:
- We operate a Matrix-based chat community for real-time collaboration.
- Our self-hosted Gitea instance at pikers.dev provides infrastructure independent of any corporate platform (GitHub is used as a mirror for discoverability).
- We maintain integration guides for AI-assisted development workflows (currently Claude Code) to lower the barrier for new contributors.
- We plan to use grant funds to produce user documentation, video walkthroughs, and contributor onboarding materials.
Standards and interoperability:
- tractor’s IPC message spec is being formalized with the goal of becoming a reusable protocol for structured-concurrent RPC, applicable beyond trading.
- Our broker abstraction layer is designed so that adding a new venue is a matter of implementing a well-documented Python module interface, not forking the project.
- We use open data formats throughout: Apache Arrow/Parquet for time-series, TOML for configuration, and standard Python typing (msgspec.Struct) for all message schemas.
Alignment with NGI Zero Commons Fund goals:
piker addresses a clear market failure: individuals who wish to participate in financial markets are forced to use proprietary, surveillance-laden platforms that extract value from their data and order flow. There is no credible libre alternative today. By building a complete, federated, copyleft-licensed trading stack, we create digital infrastructure that is:
- Honest: open-source, auditable, no hidden order-flow selling.
- Open: AGPLv3+ ensures all modifications remain public goods.
- Inclusive: venue-agnostic design welcomes any market participant regardless of which broker or asset class they prefer.
- Robust: structured concurrency guarantees that the system fails safely rather than silently corrupting state — a property that proprietary platforms routinely lack and that costs retail traders real money.
Financial infrastructure is critical public infrastructure. The fact that it is almost entirely enclosed by proprietary interests is a failure of the commons that this project directly addresses.
Suggested Budget Breakdown
Requested amount: €50,000
| Task | Effort | Rate | Amount |
|---|---|---|---|
| tractor runtime hardening: supervision protocol, transport stack (TCP/UDS/shm), error propagation | 200h | €75/h | €15,000 |
| Broker plugin API formalization + integration guides + 2 new venue integrations | 150h | €75/h | €11,250 |
| EMS completion: multi-leg orders, journaling, paper-engine fidelity | 120h | €75/h | €9,000 |
| Packaging & CI: Nix flakes, uv reproducibility, multi-distro testing | 60h | €75/h | €4,500 |
| Security audit: IPC surfaces, VPN integration, dependency review | 50h | €75/h | €3,750 |
| Documentation & onboarding: user guides, contributor docs, video walkthroughs | 60h | €75/h | €4,500 |
| Project management & community coordination | 27h | €75/h | €2,000 |
| Total | 667h | | €50,000 |
All work is performed by the existing core team (Tyler Goodlet, Guillermo Rodriguez, and community contributors). The hourly rate reflects senior-level systems engineering in a cost-conscious FOSS context.
Other Funding Sources
This project has not received external funding to date. All development has been self-funded by the core developers. There are no other pending grant applications at this time.
Generative AI Disclosure
This proposal draft was composed with assistance from Claude (Anthropic, model: claude-opus-4-6) on 2026-03-30. The AI was used to:
- Fetch and summarize NLNet’s proposal requirements and fund mission statement.
- Explore the piker codebase to compile an accurate technical description of the architecture.
- Draft the proposal text based on the above research and the author’s direction regarding key arguments and positioning.
The author reviewed, edited, and approved all content. Unedited AI outputs and prompts are available on request.
Notes for the Applicant
Before submitting, review and personalize the following:
- Budget: adjust the total amount, hourly rate, and task breakdown to match your actual plan and financial needs. NLNet accepts €5k–€50k; projects with strong potential can scale.
- Community links: add the actual Matrix room URL (e.g., #piker:matrix.org or however it’s addressed).
- Contact details: fill in name, email, phone, org, country on the form itself.
- Attachments: consider attaching the piker README, an architecture diagram, or a short demo video/screencast.
- Generative AI section: update with the actual prompts used and attach unedited outputs if you want full transparency.
- Tone: NLNet reviewers are technical; the current draft leans into engineering substance over marketing. Adjust if desired.
- Timeline: NLNet doesn’t require a fixed timeline in the form, but if asked, a 12-month delivery window is reasonable for this scope.