f1f7241a1e
Build out an interface that makes it super easy to downsample curves using the m4 algorithm while keeping our incremental `QPainterPath` update feature. A lot of hard work and tinkering went into getting this all working correctly in-thread and there are quite a few details.

New interface methods:

- `.x_uppx()` which returns the x-axis "view units per pixel"
- `.px_width()` which returns the total (rounded) x-axis pixels spanned by the curve in view
- `.should_ds_or_redraw()` a predicate which checks internal state to see whether the curve should be downsampled, or should have all downsampling removed and be redrawn from source array data
- `.downsample()` the actual ds processing routine which delegates into the m4 algo impl
- `.maybe_downsample()` a simple update method which can be called by the view box when the user changes the zoom level

Implementation details/changes:

- make `.update_from_array()` check whether downsample (or revert-to-source aka de-downsample) conditions exist and then downsample and re-draw path graphics accordingly
- to further speed up path appends (since our main bottleneck is measured to be `QPainter.drawPath()` calls on large, frequently updated paths), add a secondary path `.fast_path` which is updated in real-time by incremental appends and painted separately for speed in `.paint()`
- drop all the `QPolyLine` stuff since it was tested to be much slower in general and especially so for append-updates
- stop disabling the cache settings on updates since it doesn't seem to be required any more?
- move further toward deprecating and removing all lingering interface requirements from `pg.PlotCurveItem` (like `.xData`/`.yData`)
- adjust `.paint()` and `.boundingRect()` to compensate for the new `.fast_path`
- add a butt-load of profiling B)
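To make the new surface area concrete, here is a minimal, illustrative sketch of how such a downsampling-aware curve item could hang together. This is not the actual implementation: the class name `FastAppendCurve`, the `_last_uppx` attribute and the uppx/px-width math are assumptions layered on top of stock pyqtgraph/Qt APIs, and the real downsample/redraw plumbing is elided.

```python
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui


class FastAppendCurve(pg.GraphicsObject):  # hypothetical stand-in name

    def __init__(self) -> None:
        super().__init__()
        # main path holding the (possibly downsampled) historical curve
        self.path = QtGui.QPainterPath()
        # secondary path receiving the high-frequency incremental appends;
        # painted separately so the large main path does not have to be
        # re-flattened by `QPainter.drawPath()` on every new sample.
        self.fast_path = QtGui.QPainterPath()
        self._last_uppx: float = 0  # uppx at the last (re)draw

    def px_width(self) -> float:
        # total (rounded) x-axis pixels spanned by the curve in view
        # (assuming the view box geometry is in device pixels)
        vb = self.getViewBox()
        return round(vb.width()) if vb else 0

    def x_uppx(self) -> float:
        # x-axis "view units per pixel" at the current zoom level
        px = self.px_width()
        if not px:
            return 0
        l, r = self.getViewBox().viewRange()[0]
        return (r - l) / px

    def should_ds_or_redraw(self) -> bool:
        # downsample once multiple samples land on one pixel and the zoom
        # has changed meaningfully; revert to source data (full redraw)
        # once zoomed back in past the one-sample-per-pixel mark.
        uppx = self.x_uppx()
        zoomed_out = uppx > 1 and abs(uppx - self._last_uppx) >= 1
        zoomed_back_in = uppx <= 1 and self._last_uppx > 1
        return zoomed_out or zoomed_back_in

    def maybe_downsample(self) -> None:
        # hook for the view box to call on wheel-zoom / range changes
        if self.should_ds_or_redraw():
            self.downsample()

    def downsample(self) -> None:
        # delegate into an m4-style reducer (see the second sketch below),
        # rebuild `self.path` from the reduced arrays and clear
        # `self.fast_path`; elided here.
        self._last_uppx = self.x_uppx()
        ...

    def paint(self, p: QtGui.QPainter, opt, widget) -> None:
        # paint both paths: the big (possibly downsampled) history path
        # and the small, frequently-appended fast path.
        p.drawPath(self.path)
        if not self.fast_path.isEmpty():
            p.drawPath(self.fast_path)

    def boundingRect(self) -> QtCore.QRectF:
        # bounds must also cover the fast path or appends won't repaint
        return self.path.boundingRect().united(
            self.fast_path.boundingRect()
        )
```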
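And a rough, illustrative take on the m4 reduction step itself: for every pixel-wide bin along x, keep only the first, min, max and last samples. This plain-numpy function (the name `m4_downsample` is made up here) is a stand-in for the numba-accelerated routine the commit refers to; bin-edge and tie-breaking details will differ.

```python
import numpy as np


def m4_downsample(
    x: np.ndarray,
    y: np.ndarray,
    uppx: float,  # x-units per pixel, e.g. from ``x_uppx()``
) -> tuple[np.ndarray, np.ndarray]:

    if uppx <= 1:
        # less than one sample per pixel: nothing to reduce
        return x, y

    out_x, out_y = [], []
    # assign each sample to a pixel-column bin
    bins = np.floor((x - x[0]) / uppx).astype(int)

    for b in np.unique(bins):
        idxs = np.nonzero(bins == b)[0]
        ys = y[idxs]
        # indices of the (up to) 4 retained points, kept in x order
        keep = sorted({
            idxs[0],               # first
            idxs[np.argmin(ys)],   # min
            idxs[np.argmax(ys)],   # max
            idxs[-1],              # last
        })
        out_x.extend(x[keep])
        out_y.extend(y[keep])

    return np.asarray(out_x), np.asarray(out_y)
```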
README.rst
piker
trading gear for hackers.
piker is a broker agnostic, next-gen FOSS toolset for real-time computational trading targeted at hardcore Linux users.
we use as much bleeding edge tech as possible including (but not limited to):
- latest python for glue
- trio for structured concurrency
- tractor for distributed, multi-core, real-time streaming
- marketstore for historical and real-time tick data persistence and sharing
- techtonicdb for L2 book storage
- Qt for pristine high performance UIs
- pyqtgraph for real-time charting
- numpy and numba for fast numerics
focus and features:
- 100% federated: your code, your hardware, your data feeds, your broker fills.
- zero web: low latency, native software that doesn't try to re-invent the OS
- maximal privacy: prevent brokers and mms from knowing your planz; smack their spreads with dark volume.
- zero clutter: modal, context oriented UIs that eschew minimalism, reduce thought noise and encourage un-emotion.
- first class parallelism: built from the ground up on next-gen structured concurrency primitives.
- traders first: broker/exchange/asset-class agnostic
- systems grounded: real-time financial signal processing that will make any queuing or DSP eng juice their shorts.
- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
- data collaboration: every process and protocol is multi-host scalable.
- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.
fitting with these tenets, we're always open to new framework suggestions and ideas.
building the best looking, most reliable, keyboard friendly trading platform is the dream; join the cause.
install
piker is currently under heavy pre-alpha development and as such should be cloned from this repo and hacked on directly.
for a development install:
git clone git@github.com:pikers/piker.git
cd piker
virtualenv env
source ./env/bin/activate
pip install -r requirements.txt -e .
install for tinas
for windows peeps you can start by getting conda installed and the C++ build toolz on your system.
then, crack a conda shell and run the following commands:
conda create -n piker python=3.9
conda activate piker
conda install pip
pip install --upgrade setuptools
cd dIreCToRieZ\oF\cODez\piker\
pip install -r requirements.txt -e .
in order to look coolio in front of all ur tina friends (and maybe want to help us with testin, hackzing or configgin), install vscode and setup a coolio tiled wm console so you can start living the life of the tech literate.
provider support
for live data feeds the in-progress set of supported brokers is:
- IB via ib_insync
- binance and kraken for crypto over their public websocket API
- questrade (ish) which comes with effectively free L1
coming soon...
- webull via the reverse engineered public API
- yahoo via yliveticker
if you want your broker supported and they have an API let us know.
check out our charts
bet you weren't expecting this from the foss:
piker -l info -b kraken -b binance chart btcusdt.binance --pdb
this runs the main chart (currently with 1m sampled OHLC) in debug mode and you can practice paper trading using the following micro-manual:
order_mode (edge triggered activation by any of the following keys, mouse-click on y-level to submit at that price):

- f / ctl-f to stage buy
- d / ctl-d to stage sell
- a to stage alert

search_mode (ctl-l or ctl-space to open, ctl-c or ctl-space to close):

- begin typing to have symbol search automatically lookup symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ctl-[hjkl] for navigation
you can also configure your position allocation limits from the sidepane.
run in distributed mode
start the service manager and data feed daemon in the background and connect to it:
pikerd -l info --pdb
connect your chart:
piker -l info -b kraken -b binance chart xmrusdt.binance --pdb
enjoy persistent real-time data feeds tied to daemon lifetime. the next time you spawn a chart it will load much faster since the data feed has been cached and is now always running live in the background until you kill pikerd.
if anyone asks you what this project is about
you don't talk about it.
how do i get involved?
enter the matrix.
how come there ain't that many docs
suck it up, learn the code; no one is trying to sell you on anything. also, we need lotsa help so if you want to start somewhere and can't necessarily write serious code, this might be the place for you!