commit 947a514153

Build out an interface that makes it super easy to downsample curves using the m4 algorithm while keeping our incremental `QPainterPath` update feature. A lot of hard work and tinkering went into getting this working all in-thread correctly and there are quite a few details.

New interface methods:
- `.x_uppx()` which returns the x-axis "view units per pixel"
- `.px_width()` which returns the total (rounded) x-axis pixels spanned by the curve in view.
- `.should_ds_or_redraw()` a predicate which checks internal state to see if either downsampling of the curve should take place, or the curve should have all downsampling removed and be redrawn with source array data.
- `.downsample()` the actual ds processing routine which delegates into the m4 algo impl (see the minimal sketch further below).
- `.maybe_downsample()` a simple update method which can be called by the view box when the user changes the zoom level.

Implementation details/changes:
- make `.update_from_array()` check whether downsample (or revert-to-source, aka de-downsample) conditions exist and then downsample and re-draw path graphics accordingly.
- in order to further speed up path appends (since our main bottleneck is measured to be `QPainter.drawPath()` calls with large paths which are frequently updated), add a secondary path `.fast_path` which is updated in real time by incremental appends and which is painted separately for speed in `.paint()`.
- drop all the `QPolyLine` stuff since it was tested to be much slower in general and especially so for append-updates.
- stop disabling the cache settings on updates since it doesn't seem to be required any more?
- move further toward deprecating and removing all lingering interface requirements from `pg.PlotCurveItem` (like `.xData`/`.yData`).
- adjust `.paint()` and `.boundingRect()` to compensate for the new `.fast_path`
- add a butt-load of profiling B)
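for the curious, here is a minimal, unoptimized numpy sketch of what the m4 reduction does; the function name and the per-bin loop are illustrative only, not piker's actual vectorized routine behind `.downsample()`. the idea: for every horizontal pixel bin keep only four samples (the first, the y-min, the y-max and the last), which is enough to redraw the same pixels.

import numpy as np

def m4_downsample(x: np.ndarray, y: np.ndarray, px_width: int):
    # map every sample onto one of `px_width` horizontal pixel bins
    bins = np.minimum(
        ((x - x[0]) / (x[-1] - x[0]) * px_width).astype(int),
        px_width - 1,
    )
    xs, ys = [], []
    for b in range(px_width):
        idx = np.flatnonzero(bins == b)
        if idx.size == 0:
            continue
        # keep first, y-min, y-max and last sample of this pixel column,
        # in x order so the stroked line still covers the same pixels
        keep = sorted({idx[0], idx[y[idx].argmin()], idx[y[idx].argmax()], idx[-1]})
        xs.append(x[keep])
        ys.append(y[keep])
    return np.concatenate(xs), np.concatenate(ys)

# e.g. collapse 100k samples down to at most 4 points per on-screen pixel
x = np.arange(100_000, dtype=float)
y = np.random.randn(100_000).cumsum()
dx, dy = m4_downsample(x, y, px_width=800)

the `.fast_path` split mentioned above is complementary: the heavy, already-downsampled history sits in one big `QPainterPath` that rarely changes, while fresh ticks append to a much smaller secondary path, keeping the frequent `QPainter.drawPath()` calls per frame cheap.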
README.rst
piker
trading gear for hackers.
piker is a broker agnostic, next-gen FOSS toolset for real-time computational trading targeted at hardcore Linux users.
we use as much bleeding edge tech as possible including (but not limited to):
- latest python for glue
- trio for structured concurrency
- tractor for distributed, multi-core, real-time streaming
- marketstore for historical and real-time tick data persistence and sharing
- techtonicdb for L2 book storage
- Qt for pristine high performance UIs
- pyqtgraph for real-time charting
- numpy and numba for fast numerics
focus and features:
- 100% federated: your code, your hardware, your data feeds, your broker fills.
- zero web: low latency, native software that doesn't try to re-invent the OS
- maximal privacy: prevent brokers and mms from knowing your planz; smack their spreads with dark volume.
- zero clutter: modal, context oriented UIs that eschew minimalism, reduce thought noise and encourage un-emotion.
- first class parallelism: built from the ground up on next-gen structured concurrency primitives.
- traders first: broker/exchange/asset-class agnostic
- systems grounded: real-time financial signal processing that will make any queuing or DSP eng juice their shorts.
- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
- data collaboration: every process and protocol is multi-host scalable.
- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.
fitting with these tenets, we're always open to new framework suggestions and ideas.
building the best looking, most reliable, keyboard friendly trading platform is the dream; join the cause.
install
piker is currently under heavy pre-alpha development and as such should be cloned from this repo and hacked on directly.
for a development install:
git clone git@github.com:pikers/piker.git
cd piker
virtualenv env
source ./env/bin/activate
pip install -r requirements.txt -e .
install for tinas
for windows peeps you can start by installing all the prerequisite software:
- install git with all default settings - https://git-scm.com/download/win
- install anaconda all default settings - https://www.anaconda.com/products/individual
- install microsoft build tools (check the box for Desktop development for C++, you might be able to uncheck some optional downloads) - https://visualstudio.microsoft.com/visual-cpp-build-tools/
- install visual studio code default settings - https://code.visualstudio.com/download
then, crack a conda shell and run the following commands:
mkdir code # create code directory
cd code # change directory to code
git clone https://github.com/pikers/piker.git # downloads piker installation package from github
cd piker # change directory to piker
conda create -n pikonda # creates conda environment named pikonda
conda activate pikonda # activates pikonda
conda install -c conda-forge python-levenshtein # in case it is not already installed
conda install pip # may already be installed
pip # will show if pip is installed
pip install -e . -r requirements.txt # install piker in editable mode
test piker to see if it is working:
piker -b binance chart btcusdt.binance # formatting for loading a chart
piker -b kraken -b binance chart xbtusdt.kraken
piker -b kraken -b binance -b ib chart qqq.nasdaq.ib
piker -b ib chart tsla.nasdaq.ib
potential error:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user\\AppData\\Roaming\\piker\\brokers.toml'
solution:
- navigate to the directory above (it may be different on your machine; the location is listed in the error message)
- copy and paste the file from 'C:\Users\user\code\data\brokers.toml' or create a blank file using notepad at the location above (see the small Python snippet below for a command-line alternative)
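if you'd rather not click around in notepad, this throwaway Python snippet creates the same blank file; the path is taken from the error message above and is an assumption about your setup, so adjust it if your location differs:

from pathlib import Path

# location reported by the FileNotFoundError above (adjust for your machine)
cfg = Path.home() / "AppData" / "Roaming" / "piker" / "brokers.toml"
cfg.parent.mkdir(parents=True, exist_ok=True)  # create the piker config dir
cfg.touch(exist_ok=True)                       # a blank file is enough to start
print(f"created {cfg}")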
Visual Studio Code setup:
- now that piker is installed we can set up vscode as the default terminal for running piker and editing the code
- open Visual Studio Code
- file --> Add Folder to Workspace --> C:\Users\user\code\piker (adds piker directory where all piker files are located)
- file --> Save Workspace As --> save it wherever you want and call it whatever you want, this is going to be your default workspace for running and editing piker code
- ctrl + shift + p --> start typing Python: Select Interpreter --> when the option comes up select it --> Select at the workspace level --> select the one that shows ('pikonda')
- change the default terminal to cmd.exe instead of powershell (default)
- now when you create a new terminal VSCode should automatically activate your conda env so that piker can be run as the first command after a new terminal is created
also, try out fancyzones as part of powertoyz for a decent tiling window manager to manage all the cool new software you are going to be running.
provider support
for live data feeds the in-progress set of supported brokers is:
- IB via ib_insync
- binance and kraken for crypto over their public websocket API
- questrade (ish) which comes with effectively free L1
coming soon...
- webull via the reverse engineered public API
- yahoo via yliveticker
if you want your broker supported and they have an API let us know.
check out our charts
bet you weren't expecting this from the foss:
piker -l info -b kraken -b binance chart btcusdt.binance --pdb
this runs the main chart (currently with 1m sampled OHLC) in debug mode and you can practice paper trading using the following micro-manual:
order_mode (edge triggered activation by any of the following keys, mouse-click on y-level to submit at that price):
- f/ctl-f to stage buy
- d/ctl-d to stage sell
- a to stage alert
search_mode (ctl-l or ctl-space to open, ctl-c or ctl-space to close):
- begin typing to have symbol search automatically lookup symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ctl-[hjkl] for navigation
you can also configure your position allocation limits from the sidepane.
run in distributed mode
start the service manager and data feed daemon in the background and connect to it:
pikerd -l info --pdb
connect your chart:
piker -l info -b kraken -b binance chart xmrusdt.binance --pdb
enjoy persistent real-time data feeds tied to daemon lifetime. the next time you spawn a chart it will load much faster since the data feed has been cached and is now always running live in the background until you kill pikerd.
if anyone asks you what this project is about
you don't talk about it.
how do i get involved?
enter the matrix.
how come there ain't that many docs
suck it up, learn the code; no one is trying to sell you on anything. also, we need lotsa help so if you want to start somewhere and can't necessarily write serious code, this might be the place for you!