Compare commits
No commits in common. "main" and "basic_buy_bot" have entirely different histories.
--- main:README.rst
+++ basic_buy_bot:README.rst
@@ -1,199 +1,162 @@
 piker
 -----
-trading gear for hackers
+trading gear for hackers.

 |gh_actions|

 .. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpikers%2Fpiker%2Fbadge&style=popout-square
    :target: https://actions-badge.atrox.dev/piker/pikers/goto

-``piker`` is a broker agnostic, next-gen FOSS toolset and runtime for
-real-time computational trading targeted at `hardcore Linux users
-<comp_trader>`_ .
+``piker`` is a broker agnostic, next-gen FOSS toolset for real-time
+computational trading targeted at `hardcore Linux users <comp_trader>`_ .

-we use much bleeding edge tech including (but not limited to):
+we use as much bleeding edge tech as possible including (but not limited to):

 - latest python for glue_
-- uv_ for packaging and distribution
-- trio_ & tractor_ for our distributed `structured concurrency`_ runtime
-- Qt_ for pristine low latency UIs
-- pyqtgraph_ (which we've extended) for real-time charting and graphics
-- ``polars`` ``numpy`` and ``numba`` for redic `fast numerics`_
-- `apache arrow and parquet`_ for time-series storage
+- trio_ & tractor_ for our distributed, multi-core, real-time streaming
+  `structured concurrency`_ runtime B)
+- Qt_ for pristine high performance UIs
+- pyqtgraph_ for real-time charting
+- ``polars`` ``numpy`` and ``numba`` for `fast numerics`_
+- `apache arrow and parquet`_ for time series history management
+  persistence and sharing
+- (prototyped) techtonicdb_ for L2 book storage

-potential projects we might integrate with soon,
-- (already prototyped in ) techtonicdb_ for L2 book storage
+.. |travis| image:: https://img.shields.io/travis/pikers/piker/master.svg
+   :target: https://travis-ci.org/pikers/piker

-.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
-.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
-.. _uv: https://docs.astral.sh/uv/
 .. _trio: https://github.com/python-trio/trio
 .. _tractor: https://github.com/goodboy/tractor
 .. _structured concurrency: https://trio.discourse.group/
+.. _marketstore: https://github.com/alpacahq/marketstore
+.. _techtonicdb: https://github.com/0b01/tectonicdb
 .. _Qt: https://www.qt.io/
 .. _pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
+.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
 .. _apache arrow and parquet: https://arrow.apache.org/faq/
 .. _fast numerics: https://zerowithdot.com/python-numpy-and-pandas-performance/
-.. _techtonicdb: https://github.com/0b01/tectonicdb
+.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/


-focus and feats:
-****************
-fitting with these tenets, we're always open to new
-framework/lib/service interop suggestions and ideas!
+focus and features:
+*******************
+- 100% federated: your code, your hardware, your data feeds, your broker fills.
+- zero web: low latency, native software that doesn't try to re-invent the OS
+- maximal **privacy**: prevent brokers and mms from knowing your
+  planz; smack their spreads with dark volume.
+- zero clutter: modal, context oriented UIs that echew minimalism, reduce
+  thought noise and encourage un-emotion.
+- first class parallelism: built from the ground up on next-gen structured concurrency
+  primitives.
+- traders first: broker/exchange/asset-class agnostic
+- systems grounded: real-time financial signal processing that will
+  make any queuing or DSP eng juice their shorts.
+- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
+- data collaboration: every process and protocol is multi-host scalable.
+- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.

-- **100% federated**:
-  your code, your hardware, your data feeds, your broker fills.
+fitting with these tenets, we're always open to new framework suggestions and ideas.

-- **zero web**:
-  low latency as a prime objective, native UIs and modern IPC
-  protocols without trying to re-invent the "OS-as-an-app"..
+building the best looking, most reliable, keyboard friendly trading
+platform is the dream; join the cause.

-- **maximal privacy**:
-  prevent brokers and mms from knowing your planz; smack their
-  spreads with dark volume from a VPN tunnel.

-- **zero clutter**:
-  modal, context oriented UIs that echew minimalism, reduce thought
-  noise and encourage un-emotion.

-- **first class parallelism**:
-  built from the ground up on a next-gen structured concurrency
-  supervision sys.

-- **traders first**:
-  broker/exchange/venue/asset-class/money-sys agnostic

-- **systems grounded**:
-  real-time financial signal processing (fsp) that will make any
-  queuing or DSP eng juice their shorts.

-- **non-tina UX**:
-  sleek, powerful keyboard driven interaction with expected use in
-  tiling wms (or maybe even a DDE).

-- **data collab at scale**:
-  every actor-process and protocol is multi-host aware.

-- **fight club ready**:
-  zero interest in adoption by suits; no corporate friendly license,
-  ever.

-building the hottest looking, fastest, most reliable, keyboard
-friendly FOSS trading platform is the dream; join the cause.


-a sane install with `uv`
-************************
-bc why install with `python` when you can faster with `rust` ::
+sane install with `poetry`
+**************************
+TODO!

-    uv sync

-    # ^ astral's docs,
-    # https://docs.astral.sh/uv/concepts/projects/sync/

-include all GUIs (ex. for charting)::

-    uv sync --group uis

-AND with **all** our normal hacking tools::

-    uv sync --dev

-AND if you want to try WIP integrations::

-    uv sync --all-groups

-Ensure you can run the root-daemon::

-    uv run pikerd [-l info --pdb]


-install on nix(os)
-******************
-``NixOS`` is our core devs' distro of choice for which we offer
-a stringently defined development shell envoirment that can currently
-be applied in one of 2 ways::
+rigorous install on ``nixos`` using ``poetry2nix``
+**************************************************
+TODO!

-    # ONLY if running on X11
-    nix-shell default.nix

-Or if you prefer flakes style and a modern DE::

-    # ONLY if also running on Wayland
-    nix develop  # for default bash
-    nix develop -c uv run xonsh  # for @goodboy's preferred sh B)


-start a chart
-*************
-run a realtime OHLCV chart stand-alone::
+hacky install on nixos
+**********************
+`NixOS` is our core devs' distro of choice for which we offer
+a stringently defined development shell envoirment that can be loaded with::

-    [uv run] piker -l info chart btcusdt.spot.binance xmrusdt.spot.kraken
+    nix-shell develop.nix

-    # ^^^ iff you haven't activated the py-env,
-    # - https://docs.astral.sh/uv/concepts/projects/run/
-    #
-    # in order to create an explicit virt-env see,
-    # - https://docs.astral.sh/uv/concepts/projects/layout/#the-project-environment
-    # - https://docs.astral.sh/uv/pip/environments/
-    #
-    # use $UV_PROJECT_ENVIRONMENT to select any non-`.venv/`
-    # as the venv sudir in the repo's root.
-    # - https://docs.astral.sh/uv/reference/environment/#uv_project_environment
+this will setup the required python environment to run piker, make sure to
+run::

-this runs a chart UI (with 1m sampled OHLCV) and shows 2 spot markets from 2 diff cexes
-overlayed on the same graph. Use of `piker` without first starting
-a daemon (`pikerd` - see below) means there is an implicit spawning of the
-multi-actor-runtime (implemented as a `tractor` app).
+    pip install -r requirements.txt -e .

-For additional subsystem feats available through our chart UI see the
-various sub-readmes:
+once after loading the shell

-- order control using a mouse-n-keyboard UX B)
-- cross venue market-pair (what most call "symbol") search, select, overlay Bo
-- financial-signal-processing (`piker.fsp`) write-n-reload to sub-chart BO
-- src-asset derivatives scan for anal, like the infamous "max pain" XO


-spawn a daemon standalone
-*************************
-we call the root actor-process the ``pikerd``. it can be (and is
-recommended normally to be) started separately from the ``piker
-chart`` program::
+install wild-west style via `pip`
+*********************************
+``piker`` is currently under heavy pre-alpha development and as such
+should be cloned from this repo and hacked on directly.
+
+for a development install::

+    git clone git@github.com:pikers/piker.git
+    cd piker
+    virtualenv env
+    source ./env/bin/activate
+    pip install -r requirements.txt -e .


+check out our charts
+********************
+bet you weren't expecting this from the foss::

+    piker -l info -b kraken -b binance chart btcusdt.binance --pdb


+this runs the main chart (currently with 1m sampled OHLC) in in debug
+mode and you can practice paper trading using the following
+micro-manual:

+``order_mode`` (
+    edge triggered activation by any of the following keys,
+    ``mouse-click`` on y-level to submit at that price
+    ):

+- ``f``/ ``ctl-f`` to stage buy
+- ``d``/ ``ctl-d`` to stage sell
+- ``a`` to stage alert


+``search_mode`` (
+    ``ctl-l`` or ``ctl-space`` to open,
+    ``ctl-c`` or ``ctl-space`` to close
+    ) :

+- begin typing to have symbol search automatically lookup
+  symbols from all loaded backend (broker) providers
+- arrow keys and mouse click to navigate selection
+- vi-like ``ctl-[hjkl]`` for navigation


+you can also configure your position allocation limits from the
+sidepane.


+run in distributed mode
+***********************
+start the service manager and data feed daemon in the background and
+connect to it::

     pikerd -l info --pdb

-the daemon does nothing until a ``piker``-client (like ``piker
-chart``) connects and requests some particular sub-system. for
-a connecting chart ``pikerd`` will spawn and manage at least,

-- a data-feed daemon: ``datad`` which does all the work of comms with
-  the backend provider (in this case the ``binance`` cex).
-- a paper-trading engine instance, ``paperboi.binance``, (if no live
-  account has been configured) which allows for auto/manual order
-  control against the live quote stream.
+connect your chart::

-*using* an actor-service (aka micro-daemon) manager which dynamically
-supervises various sub-subsystems-as-services throughout the ``piker``
-runtime-stack.
+    piker -l info -b kraken -b binance chart xmrusdt.binance --pdb

-now you can (implicitly) connect your chart::

-    piker chart btcusdt.spot.binance
-since ``pikerd`` was started separately you can now enjoy a persistent
-real-time data stream tied to the daemon-tree's lifetime. i.e. the next
-time you spawn a chart it will obviously not only load much faster
-(since the underlying ``datad.binance`` is left running with its
-in-memory IPC data structures) but also the data-feed and any order
-mgmt states should be persistent until you finally cancel ``pikerd``.
+enjoy persistent real-time data feeds tied to daemon lifetime. the next
+time you spawn a chart it will load much faster since the data feed has
+been cached and is now always running live in the background until you
+kill ``pikerd``.


 if anyone asks you what this project is about
 *********************************************
-you don't talk about it; just use it.
+you don't talk about it.


 how do i get involved?
@@ -203,15 +166,6 @@ enter the matrix.

 how come there ain't that many docs
 ***********************************
-i mean we want/need them but building the core right has been higher
-prio then marketting (and likely will stay that way Bp).
-soo, suck it up bc,
+suck it up, learn the code; no one is trying to sell you on anything.
+also, we need lotsa help so if you want to start somewhere and can't
+necessarily write serious code, this might be the place for you!

-- no one is trying to sell you on anything
-- learning the code base is prolly way more valuable
-- the UI/UXs are intended to be "intuitive" for any hacker..

-we obviously need tonz help so if you want to start somewhere and
-can't necessarily write "advanced" concurrent python/rust code, this
-helping document literally anything might be the place for you!
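
For quick reference, the minus side of the hunks above (the ``main``
branch README) boils down to the following install-and-run flow; a
minimal sketch only, assuming a checkout of the repo root (where the
``pyproject.toml`` that ``uv`` reads lives)::

    # deps only; add `--group uis` for charting, `--dev` for hacking tools
    uv sync --group uis

    # sanity check the root daemon, then spawn a chart through the project venv
    uv run pikerd -l info
    uv run piker -l info chart btcusdt.spot.binance
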
@@ -1,52 +1,38 @@
+################
 # ---- CEXY ----
+################
 [binance]
-accounts.paper = 'paper'

 accounts.usdtm = 'futes'
-futes.use_testnet = false
+futes.use_testnet = true
 futes.api_key = ''
 futes.api_secret = ''

 accounts.spot = 'spot'
-spot.use_testnet = false
+spot.use_testnet = true
 spot.api_key = ''
 spot.api_secret = ''
-# ------ binance ------


 [deribit]
-# std assets
 key_id = ''
 key_secret = ''
-# options
-accounts.option = 'option'
-option.use_testnet = false
-option.key_id = ''
-option.key_secret = ''
-# aux logging from `cryptofeed`
-option.log.filename = 'cryptofeed.log'
-option.log.level = 'DEBUG'
-option.log.disabled = true
-# ------ deribit ------


 [kraken]
 key_descr = ''
 api_key = ''
 secret = ''
-# ------ kraken ------


 [kucoin]
 key_id = ''
 key_secret = ''
 key_passphrase = ''
-# ------ kucoin ------


+################
 # -- BROKERZ ---
+################
 [questrade]
 refresh_token = ''
 access_token = ''
@@ -54,55 +40,44 @@ api_server = 'https://api06.iq.questrade.com/'
 expires_in = 1800
 token_type = 'Bearer'
 expires_at = 1616095326.355846
-# ------ questrade ------


 [ib]
-# define the (set of) host-port socketaddrs that
-# brokerd.ib will scan to connect to an API endpoint
-# (ib-gw or ib-tws listening instances)
 hosts = [
     '127.0.0.1',
 ]
+# XXX: the order in which ports will be scanned
+# (by the `brokerd` daemon-actor)
+# is determined # by the line order here.
+# TODO: when we eventually spawn gateways in our
+# container, we can just dynamically allocate these
+# using IBC.
 ports = [
     4002,  # gw
     7497,  # tws
 ]

-# When API endpoints are being scanned durin startup, the order
-# of user-defined-account "names" (as defined below) here
-# determines which py-client connection is given priority to be
-# used for data-feed-requests by according to whichever client
-# connected to an API endpoing which reported the equivalent
-# account number for that name.
+# XXX: for a paper account the flex web query service
+# is not supported so you have to manually download
+# and XML report and put it in a location that can be
+# accessed by the ``brokerd.ib`` backend code for parsing.
+flex_token = ''
+flex_trades_query_id = '' # live account

+# when clients are being scanned this determines
+# which clients are preferred to be used for data
+# feeds based on the order of account names, if
+# detected as active on an API client.
 prefer_data_account = [
     'paper',
     'margin',
     'ira',
 ]

-# For long-term trades txn (transaction) history
-# processing (i.e your txn ledger with IB) you can
-# (automatically for live accounts) query the FLEX
-# report system for past history.
-#
-# (For paper accounts the web query service
-# is not supported so you have to manually download
-# an XML report and put it in a location that can be
-# accessed by our `brokerd.ib` backend code for parsing).
-#
-flex_token = ''
-flex_trades_query_id = '' # live account

-# define "aliases" (names) for each account number
-# such that the names can be reffed and logged throughout
-# `piker.accounting` subsys and more easily
-# referred to by the user.
-#
-# These keys will be the set exposed through the order-mode
-# account-selection UI so that numbers are never shown.
 [ib.accounts]
-paper = 'DU0000000'  # <- literal account #
-margin = 'U0000000'
-ira = 'U0000000'
-# ------ ib ------
+# the order in which accounts will be selectable
+# in the order mode UI (if found via clients during
+# API-app scanning)when a new symbol is loaded.
+paper = 'XX0000000'
+margin = 'X0000000'
+ira = 'X0000000'
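
Both sides of the ``[ib]`` hunk above have ``brokerd.ib`` scan the
``hosts`` x ``ports`` combos for a live API endpoint (``4002`` for
ib-gw, ``7497`` for tws). A hedged sketch for confirming a gateway is
actually listening before pointing the daemon at it; plain ``nc``
(netcat) is our assumption here, neither branch ships this check::

    for port in 4002 7497; do
        # -z: scan only, -w1: 1 second timeout
        nc -z -w1 127.0.0.1 "$port" && echo "ib API endpoint up on :$port"
    done
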
@@ -1,9 +1,7 @@
 [network]
-pikerd = [
-    '/ipv4/127.0.0.1/tcp/6116',  # std localhost daemon-actor tree
-    # '/uds/6116',  # TODO std uds socket file
-]
+tsdb.backend = 'marketstore'
+tsdb.host = 'localhost'
+tsdb.grpc_port = 5995


 [ui]
 # set custom font + size which will scale entire UI
--- main:default.nix
+++ /dev/null
@@ -1,135 +0,0 @@
-with (import <nixpkgs> {});
-let
-  glibStorePath = lib.getLib glib;
-  zlibStorePath = lib.getLib zlib;
-  zstdStorePath = lib.getLib zstd;
-  dbusStorePath = lib.getLib dbus;
-  libGLStorePath = lib.getLib libGL;
-  freetypeStorePath = lib.getLib freetype;
-  qt6baseStorePath = lib.getLib qt6.qtbase;
-  fontconfigStorePath = lib.getLib fontconfig;
-  libxkbcommonStorePath = lib.getLib libxkbcommon;
-  xcbutilcursorStorePath = lib.getLib xcb-util-cursor;
-
-  pypkgs = python313Packages;
-  qtpyStorePath = lib.getLib pypkgs.qtpy;
-  pyqt6StorePath = lib.getLib pypkgs.pyqt6;
-  pyqt6SipStorePath = lib.getLib pypkgs.pyqt6-sip;
-  rapidfuzzStorePath = lib.getLib pypkgs.rapidfuzz;
-  qdarkstyleStorePath = lib.getLib pypkgs.qdarkstyle;
-
-  xorgLibX11StorePath = lib.getLib xorg.libX11;
-  xorgLibxcbStorePath = lib.getLib xorg.libxcb;
-  xorgxcbutilwmStorePath = lib.getLib xorg.xcbutilwm;
-  xorgxcbutilimageStorePath = lib.getLib xorg.xcbutilimage;
-  xorgxcbutilerrorsStorePath = lib.getLib xorg.xcbutilerrors;
-  xorgxcbutilkeysymsStorePath = lib.getLib xorg.xcbutilkeysyms;
-  xorgxcbutilrenderutilStorePath = lib.getLib xorg.xcbutilrenderutil;
-in
-stdenv.mkDerivation {
-  name = "piker-qt6-uv";
-  buildInputs = [
-    # System requirements.
-    glib
-    zlib
-    dbus
-    zstd
-    libGL
-    freetype
-    qt6.qtbase
-    libgcc.lib
-    fontconfig
-    libxkbcommon
-
-    # Xorg requirements
-    xcb-util-cursor
-    xorg.libxcb
-    xorg.libX11
-    xorg.xcbutilwm
-    xorg.xcbutilimage
-    xorg.xcbutilerrors
-    xorg.xcbutilkeysyms
-    xorg.xcbutilrenderutil
-
-    # Python requirements.
-    python313
-    uv
-    pypkgs.qdarkstyle
-    pypkgs.rapidfuzz
-    pypkgs.pyqt6
-    pypkgs.qtpy
-  ];
-  src = null;
-  shellHook = ''
-    set -e
-
-    # Set the Qt plugin path
-    # export QT_DEBUG_PLUGINS=1
-
-    QTBASE_PATH="${qt6baseStorePath}/lib"
-    QT_PLUGIN_PATH="$QTBASE_PATH/qt-6/plugins"
-    QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"
-
-    LIB_GCC_PATH="${libgcc.lib}/lib"
-    GLIB_PATH="${glibStorePath}/lib"
-    ZSTD_PATH="${zstdStorePath}/lib"
-    ZLIB_PATH="${zlibStorePath}/lib"
-    DBUS_PATH="${dbusStorePath}/lib"
-    LIBGL_PATH="${libGLStorePath}/lib"
-    FREETYPE_PATH="${freetypeStorePath}/lib"
-    FONTCONFIG_PATH="${fontconfigStorePath}/lib"
-    LIB_XKB_COMMON_PATH="${libxkbcommonStorePath}/lib"
-
-    XCB_UTIL_CURSOR_PATH="${xcbutilcursorStorePath}/lib"
-    XORG_LIB_X11_PATH="${xorgLibX11StorePath}/lib"
-    XORG_LIB_XCB_PATH="${xorgLibxcbStorePath}/lib"
-    XORG_XCB_UTIL_IMAGE_PATH="${xorgxcbutilimageStorePath}/lib"
-    XORG_XCB_UTIL_WM_PATH="${xorgxcbutilwmStorePath}/lib"
-    XORG_XCB_UTIL_RENDER_UTIL_PATH="${xorgxcbutilrenderutilStorePath}/lib"
-    XORG_XCB_UTIL_KEYSYMS_PATH="${xorgxcbutilkeysymsStorePath}/lib"
-    XORG_XCB_UTIL_ERRORS_PATH="${xorgxcbutilerrorsStorePath}/lib"
-
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"
-
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_GCC_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$DBUS_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$GLIB_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZLIB_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZSTD_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBGL_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FONTCONFIG_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FREETYPE_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_XKB_COMMON_PATH"
-
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XCB_UTIL_CURSOR_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_X11_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_XCB_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_IMAGE_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_WM_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_RENDER_UTIL_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_KEYSYMS_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_ERRORS_PATH"
-
-    export LD_LIBRARY_PATH
-
-    RPDFUZZ_PATH="${rapidfuzzStorePath}/lib/python3.13/site-packages"
-    QDRKSTYLE_PATH="${qdarkstyleStorePath}/lib/python3.13/site-packages"
-    QTPY_PATH="${qtpyStorePath}/lib/python3.13/site-packages"
-    PYQT6_PATH="${pyqt6StorePath}/lib/python3.13/site-packages"
-    PYQT6_SIP_PATH="${pyqt6SipStorePath}/lib/python3.13/site-packages"
-
-    PATCH="$PATCH:$RPDFUZZ_PATH"
-    PATCH="$PATCH:$QDRKSTYLE_PATH"
-    PATCH="$PATCH:$QTPY_PATH"
-    PATCH="$PATCH:$PYQT6_PATH"
-    PATCH="$PATCH:$PYQT6_SIP_PATH"
-
-    export PATCH
-
-    # install all dev and extras
-    uv sync --dev --all-extras
-
-  '';
-}
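
``default.nix`` exists only on the ``main`` side (the plus column drops
it entirely); its ``shellHook`` ends by running ``uv sync --dev
--all-extras``, so entering the shell doubles as the install step. A
hedged usage sketch, matching the main README hunks earlier in this
compare (X11 only)::

    nix-shell default.nix

    # then, from inside the shell:
    uv run pikerd -l info
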
--- main:develop.nix
+++ basic_buy_bot:develop.nix
@@ -1,34 +1,28 @@
 with (import <nixpkgs> {});
+with python310Packages;
 stdenv.mkDerivation {
-  name = "poetry-env";
+  name = "pip-env";
   buildInputs = [
     # System requirements.
     readline

     # TODO: hacky non-poetry install stuff we need to get rid of!!
-    poetry
-    # virtualenv
-    # setuptools
-    # pip
+    virtualenv
+    setuptools
+    pip

-    # Python requirements (enough to get a virtualenv going).
-    python311Full

     # obviously, and see below for hacked linking
-    python311Packages.pyqt5
-    python311Packages.pyqt5_sip
-    # python311Packages.qtpy
+    pyqt5
+    # Python requirements (enough to get a virtualenv going).
+    python310Full

     # numerics deps
-    python311Packages.levenshtein
-    python311Packages.fastparquet
-    python311Packages.polars
+    python310Packages.python-Levenshtein
+    python310Packages.fastparquet
+    python310Packages.polars

   ];
-  # environment.sessionVariables = {
-  #   LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib";
-  # };
   src = null;
   shellHook = ''
     # Allow the use of wheels.
@@ -36,12 +30,13 @@ stdenv.mkDerivation {

     # Augment the dynamic linker path
     export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${R}/lib/R/lib:${readline}/lib

     export QT_QPA_PLATFORM_PLUGIN_PATH="${qt5.qtbase.bin}/lib/qt-${qt5.qtbase.version}/plugins";

-    if [ ! -d ".venv" ]; then
-        poetry install --with uis
+    if [ ! -d "venv" ]; then
+        virtualenv venv
     fi

-    poetry shell
+    source venv/bin/activate
   '';
 }
@@ -1,138 +1,30 @@
 running ``ib`` gateway in ``docker``
 ------------------------------------
-We have a config based on a well maintained community
-image from `@gnzsnz`:
+We have a config based on the (now defunct)
+image from "waytrade":

-https://github.com/gnzsnz/ib-gateway-docker
+https://github.com/waytrade/ib-gateway-docker

-To startup this image simply run the command::
+To startup this image with our custom settings
+simply run the command::

     docker compose up

-(For further usage^ see the official `docker-compose`_ docs)
+And you should have the following socket-available services:

+- ``x11vnc1@127.0.0.1:3003``
+- ``ib-gw@127.0.0.1:4002``

-And you should have the following socket-available services by
-default:
+You can attach to the container via a VNC client
+without password auth.

-- ``x11vnc1 @ 127.0.0.1:5900``
-- ``ib-gw @ 127.0.0.1:4002``
-You can now attach to the container via a VNC client with password-auth;
-here is an example using ``vncclient`` on ``linux``::
+SECURITY STUFF!?!?!
+-------------------
+Though "``ib``" claims they host filter connections outside
+localhost (aka ``127.0.0.1``) it's probably better if you filter
+the socket at the OS level using a stateless firewall rule::

-    vncviewer localhost:5900

-now enter the pw (password) you set via an (see second code blob)
-`.env file`_ or pw-file according to the `credentials section`_.

-If you want to change away from their default config see the example
-`docker-compose.yml`-config issue and config-section of the readme,

-- https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#configuration
-- https://github.com/gnzsnz/ib-gateway-docker/discussions/103

-.. _.env file: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#how-to-use-it
-.. _docker-compose: https://docs.docker.com/compose/
-.. _credentials section: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#credentials


-Connecting to the API from `piker`
----------------------------------
-In order to expose the container's API endpoint to the
-`brokerd/datad/ib` actor, we need to add a section to the user's
-`brokers.toml` config (note the below is similar to the repo-shipped
-template file),

-.. code:: toml

-    [ib]
-    # define the (set of) host-port socketaddrs that
-    # brokerd.ib will scan to connect to an API endpoint
-    # (ib-gw or ib-tws listening instances)
-    hosts = [
-        '127.0.0.1',
-    ]
-    ports = [
-        4002,  # gw
-        7497,  # tws
-    ]

-    # When API endpoints are being scanned durin startup, the order
-    # of user-defined-account "names" (as defined below) here
-    # determines which py-client connection is given priority to be
-    # used for data-feed-requests by according to whichever client
-    # connected to an API endpoing which reported the equivalent
-    # account number for that name.
-    prefer_data_account = [
-        'paper',
-        'margin',
-        'ira',
-    ]

-    # define "aliases" (names) for each account number
-    # such that the names can be reffed and logged throughout
-    # `piker.accounting` subsys and more easily
-    # referred to by the user.
-    #
-    # These keys will be the set exposed through the order-mode
-    # account-selection UI so that numbers are never shown.
-    [ib.accounts]
-    paper = 'XX0000000'
-    margin = 'X0000000'
-    ira = 'X0000000'


-the broker daemon can also connect to the container's VNC server for
-added functionalies including,

-- viewing the API endpoint program's GUI for manual interventions,
-- workarounds for historical data throttling using hotkey hacks,

-Add a further section to `brokers.toml` which maps each API-ep's
-port to a table of VNC server connection info like,

-.. code:: toml

-    [ib.vnc_addrs]
-    4002 = {host = 'localhost', port = 5900, pw = 'doggy'}

-The `pw = 'doggy'` here ^ should the same value as the particular
-container instances `.env` file setting (when it was run),

-.. code:: ini

-    VNC_SERVER_PASSWORD='doggy'


-IF you also want to run ``TWS``
--------------------------------
-You can also run it containerized,

-https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#using-tws


-SECURITY stuff (advanced, only if you're paranoid)
---------------------------------------------------
-First and foremost if doing a "distributed" container setup where you
-run the ``ib-gw`` docker container and your connecting API client
-(likely ``ib_async`` from python) on **different hosts** be sure to
-read the `security considerations`_ section!

-And for a further (somewhat paranoid) perspective from
-a long-time-ago serious devops eng..

-Though "``ib``" claims they filter remote host connections outside
-``localhost`` (aka ``127.0.0.1`` on ipv4) it's prolly justified if
-you'd like to filter the socket at the *OS level* using a stateless
-firewall rule::

     ip rule add not unicast iif lo to 0.0.0.0/0 dport 4002

-We will soon have this either baked into our own custom derivative
-image (or patched into the current upstream one after further testin)
-but for now you'll have to do it urself, diggity dawg.
+We will soon have this baked into our own custom image but for
+now you'll have to do it urself dawgy.

-.. _security considerations: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#security-considerations
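
Both sides of this README settle on the same ``ip rule`` filter for the
gateway's API socket (kept as context above). A hedged equivalent of
that policy expressed with ``nftables`` instead; it assumes an existing
``inet filter`` table with an ``input`` hook chain, and appears in
neither branch::

    # drop anything hitting the ib-gw API port that didn't arrive on loopback
    nft add rule inet filter input tcp dport 4002 iifname != "lo" drop
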
@@ -1,15 +1,10 @@
-# a community maintained IB API container!
-#
-# https://github.com/gnzsnz/ib-gateway-docker
-#
-# For piker we (currently) include some minor deviations
-# for some config files in the `volumes` section.
-#
-# See full configuration settings @
-# - https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#configuration
-# - https://github.com/gnzsnz/ib-gateway-docker/discussions/103
+# rework from the original @
+# https://github.com/waytrade/ib-gateway-docker/blob/master/docker-compose.yml
+version: "3.5"

 services:

   ib_gw_paper:

     # apparently java is a mega cukc:
@@ -24,9 +19,8 @@ services:

     # other image tags available:
     # https://github.com/waytrade/ib-gateway-docker#supported-tags
-    # image: waytrade/ib-gateway:1012.2i
-    image: ghcr.io/gnzsnz/ib-gateway:latest
+    # image: waytrade/ib-gateway:981.3j
+    image: waytrade/ib-gateway:1012.2i

     restart: 'no'  # restart on boot whenev there's a crash or user clicsk
     network_mode: 'host'
@@ -55,22 +49,16 @@ services:
         target: /root/scripts/run_x11_vnc.sh
         read_only: true

-    # NOTE: an alt method to fill these out is to
-    # define an `.env` file in the same dir as
-    # this compose file.
+    # NOTE:to fill these out, define an `.env` file in the same dir as
+    # this compose file which looks something like:
+    # TWS_USERID='myuser'
+    # TWS_PASSWORD='guest'
     environment:
       TWS_USERID: ${TWS_USERID}
-      # TWS_USERID: 'myuser'
       TWS_PASSWORD: ${TWS_PASSWORD}
-      # TWS_PASSWORD: 'guest'
-      TRADING_MODE: ${TRADING_MODE}
-      # TRADING_MODE: 'paper'
-      VNC_SERVER_PASSWORD: ${VNC_SERVER_PASSWORD}
-      # VNC_SERVER_PASSWORD: 'doggy'
+      TRADING_MODE: 'paper'
+      VNC_SERVER_PASSWORD: 'doggy'
+      VNC_SERVER_PORT: '3003'

-      # TODO, see if we can get this supported like it
-      # was on the old `waytrade` image?
-      # VNC_SERVER_PORT: '3003'

     # ports:
     #   - target: 4002
@@ -87,9 +75,6 @@ services:
     #   - "127.0.0.1:4002:4002"
     #   - "127.0.0.1:5900:5900"

-# TODO, a masked but working example of dual paper + live
-# ib-gw instances running in a single app run!
-#
 #   ib_gw_live:
 #     image: waytrade/ib-gateway:1012.2i
 #     restart: no
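
Both compose variants pull credentials from the environment, and the
plus side's inline comment says to define an ``.env`` file next to the
compose file. A minimal sketch using only variable names that appear in
this diff (the values are the dummy examples from the diff, not real
credentials)::

    # .env (lives beside docker-compose.yml)
    TWS_USERID='myuser'
    TWS_PASSWORD='guest'
    TRADING_MODE='paper'
    VNC_SERVER_PASSWORD='doggy'
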
@@ -117,57 +117,9 @@ SecondFactorDevice=

 # If you use the IBKR Mobile app for second factor authentication,
 # and you fail to complete the process before the time limit imposed
-# by IBKR, this setting tells IBC whether to automatically restart
-# the login sequence, giving you another opportunity to complete
-# second factor authentication.
-#
-# Permitted values are 'yes' and 'no'.
-#
-# If this setting is not present or has no value, then the value
-# of the deprecated ExitAfterSecondFactorAuthenticationTimeout is
-# used instead. If this also has no value, then this setting defaults
-# to 'no'.
-#
-# NB: you must be using IBC v3.14.0 or later to use this setting:
-# earlier versions ignore it.
-
-ReloginAfterSecondFactorAuthenticationTimeout=
-
-
-# This setting is only relevant if
-# ReloginAfterSecondFactorAuthenticationTimeout is set to 'yes',
-# or if ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
-#
-# It controls how long (in seconds) IBC waits for login to complete
-# after the user acknowledges the second factor authentication
-# alert at the IBKR Mobile app. If login has not completed after
-# this time, IBC terminates.
-# The default value is 60.
-
-SecondFactorAuthenticationExitInterval=
-
-
-# This setting specifies the timeout for second factor authentication
-# imposed by IB. The value is in seconds. You should not change this
-# setting unless you have reason to believe that IB has changed the
-# timeout. The default value is 180.
-
-SecondFactorAuthenticationTimeout=180
-
-
-# DEPRECATED SETTING
-# ------------------
-#
-# ExitAfterSecondFactorAuthenticationTimeout - THIS SETTING WILL BE
-# REMOVED IN A FUTURE RELEASE. For IBC version 3.14.0 and later, see
-# the notes for ReloginAfterSecondFactorAuthenticationTimeout above.
-#
-# For IBC versions earlier than 3.14.0: If you use the IBKR Mobile
-# app for second factor authentication, and you fail to complete the
-# process before the time limit imposed by IBKR, you can use this
-# setting to tell IBC to exit: arrangements can then be made to
-# automatically restart IBC in order to initiate the login sequence
-# afresh. Otherwise, manual intervention at TWS's
+# by IBKR, you can use this setting to tell IBC to exit: arrangements
+# can then be made to automatically restart IBC in order to initiate
+# the login sequence afresh. Otherwise, manual intervention at TWS's
 # Second Factor Authentication dialog is needed to complete the
 # login.
 #
@@ -180,18 +132,29 @@ SecondFactorAuthenticationTimeout=180
 ExitAfterSecondFactorAuthenticationTimeout=no


+# This setting is only relevant if
+# ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
+#
+# It controls how long (in seconds) IBC waits for login to complete
+# after the user acknowledges the second factor authentication
+# alert at the IBKR Mobile app. If login has not completed after
+# this time, IBC terminates.
+# The default value is 40.

+SecondFactorAuthenticationExitInterval=


 # Trading Mode
 # ------------
 #
-# This indicates whether the live account or the paper trading
-# account corresponding to the supplied credentials is to be used.
-# The allowed values are 'live' (the default) and 'paper'.
-#
-# If this is set to 'live', then the credentials for the live
-# account must be supplied. If it is set to 'paper', then either
-# the live or the paper-trading credentials may be supplied.
+# TWS 955 introduced a new Trading Mode combo box on its login
+# dialog. This indicates whether the live account or the paper
+# trading account corresponding to the supplied credentials is
+# to be used. The allowed values are 'live' (the default) and
+# 'paper'. For earlier versions of TWS this setting has no
+# effect.

-TradingMode=paper
+TradingMode=


 # Paper-trading Account Warning
@@ -225,7 +188,7 @@ AcceptNonBrokerageAccountWarning=yes
 #
 # The default value is 60.

-LoginDialogDisplayTimeout=60
+LoginDialogDisplayTimeout=20
@@ -254,15 +217,7 @@ LoginDialogDisplayTimeout=60
 # but they are acceptable.
 #
 # The default is the current working directory when IBC is
-# started, unless the TWS_SETTINGS_PATH setting in the relevant
-# start script is set.
-#
-# If both this setting and TWS_SETTINGS_PATH are set, then this
-# setting takes priority. Note that if they have different values,
-# auto-restart will not work.
-#
-# NB: this setting is now DEPRECATED. You should use the
-# TWS_SETTINGS_PATH setting in the relevant start script.
+# started.

 IbDir=/root/Jts
@@ -331,30 +286,13 @@ ExistingSessionDetectedAction=primary
 #
 # If OverrideTwsApiPort is set to an integer, IBC changes the
 # 'Socket port' in TWS's API configuration to that number shortly
-# after startup (but note that for the FIX Gateway, this setting is
-# actually stored in jts.ini rather than the Gateway's settings
-# file). Leaving the setting blank will make no change to
+# after startup. Leaving the setting blank will make no change to
 # the current setting. This setting is only intended for use in
 # certain specialized situations where the port number needs to
-# be set dynamically at run-time, and for the FIX Gateway: most
-# non-FIX users will never need it, so don't use it unless you know
-# you need it.
-
-OverrideTwsApiPort=4000
-
-
-# Override TWS Master Client ID
-# -----------------------------
-#
-# If OverrideTwsMasterClientID is set to an integer, IBC changes the
-# 'Master Client ID' value in TWS's API configuration to that
-# value shortly after startup. Leaving the setting blank will make
-# no change to the current setting. This setting is only intended
-# for use in certain specialized situations where the value needs to
 # be set dynamically at run-time: most users will never need it,
 # so don't use it unless you know you need it.

-OverrideTwsMasterClientID=
+; OverrideTwsApiPort=4002

 # Read-only Login
@@ -364,13 +302,11 @@ OverrideTwsMasterClientID=
 # account security programme, the user will not be asked to perform
 # the second factor authentication action, and login to TWS will
 # occur automatically in read-only mode: in this mode, placing or
-# managing orders is not allowed.
-#
-# If set to 'no', and the user is enrolled in IB's account security
-# programme, the second factor authentication process is handled
-# according to the Second Factor Authentication Settings described
-# elsewhere in this file.
-#
+# managing orders is not allowed. If set to 'no', and the user is
+# enrolled in IB's account security programme, the user must perform
+# the relevant second factor authentication action to complete the
+# login.
 # If the user is not enrolled in IB's account security programme,
 # this setting is ignored. The default is 'no'.
@@ -390,44 +326,7 @@ ReadOnlyLogin=no
 # set the relevant checkbox (this only needs to be done once) and
 # not provide a value for this setting.

-ReadOnlyApi=
-
-
-# API Precautions
-# ---------------
-#
-# These settings relate to the corresponding 'Precautions' checkboxes in the
-# API section of the Global Configuration dialog.
-#
-# For all of these, the accepted values are:
-# - 'yes' sets the checkbox
-# - 'no' clears the checkbox
-# - if not set, the existing TWS/Gateway configuration is unchanged
-#
-# NB: thess settings are really only supplied for the benefit of new TWS
-# or Gateway instances that are being automatically installed and
-# started without user intervention, or where user settings are not preserved
-# between sessions (eg some Docker containers). Where a user is involved, they
-# should use the Global Configuration to set the relevant checkboxes and not
-# provide values for these settings.
-
-BypassOrderPrecautions=
-
-BypassBondWarning=
-
-BypassNegativeYieldToWorstConfirmation=
-
-BypassCalledBondWarning=
-
-BypassSameActionPairTradeWarning=
-
-BypassPriceBasedVolatilityRiskWarning=
-
-BypassUSStocksMarketDataInSharesWarning=
-
-BypassRedirectOrderWarning=
-
-BypassNoOverfillProtectionPrecaution=
+ReadOnlyApi=no

 # Market data size for US stocks - lots or shares
@ -482,145 +381,54 @@ AcceptBidAskLastSizeDisplayUpdateNotification=accept
|
||||||
SendMarketDataInLotsForUSstocks=
|
SendMarketDataInLotsForUSstocks=
|
||||||
|
|
||||||
|
|
||||||
# Trusted API Client IPs
|
|
||||||
# ----------------------
|
|
||||||
#
|
|
||||||
# NB: THIS SETTING IS ONLY RELEVANT FOR THE GATEWAY, AND ONLY WHEN FIX=yes.
|
|
||||||
# In all other cases it is ignored.
|
|
||||||
#
|
|
||||||
# This is a list of IP addresses separated by commas. API clients with IP
|
|
||||||
# addresses in this list are able to connect to the API without Gateway
|
|
||||||
# generating the 'Incoming connection' popup.
|
|
||||||
#
|
|
||||||
# Note that 127.0.0.1 is always permitted to connect, so do not include it
|
|
||||||
# in this setting.
|
|
||||||
|
|
||||||
TrustedTwsApiClientIPs=
|
|
||||||
|
|
||||||
|
|
||||||
# Reset Order ID Sequence
|
|
||||||
# -----------------------
|
|
||||||
#
|
|
||||||
# The setting resets the order id sequence for orders submitted via the API, so
|
|
||||||
# that the next invocation of the `NextValidId` API callback will return the
|
|
||||||
# value 1. The reset occurs when TWS starts.
|
|
||||||
#
|
|
||||||
# Note that order ids are reset for all API clients, except those that have
|
|
||||||
# outstanding (ie incomplete) orders: their order id sequence carries on as
|
|
||||||
# before.
|
|
||||||
#
|
|
||||||
# Valid values are 'yes', 'true', 'false' and 'no'. The default is 'no'.
|
|
||||||
|
|
||||||
ResetOrderIdsAtStart=
|
|
||||||
|
|
||||||
|
|
||||||
# This setting specifies IBC's action when TWS displays the dialog asking for
|
|
||||||
# confirmation of a request to reset the API order id sequence.
|
|
||||||
#
|
|
||||||
# Note that the Gateway never displays this dialog, so this setting is ignored
|
|
||||||
# for a Gateway session.
|
|
||||||
#
|
|
||||||
# Valid values consist of two strings separated by a solidus '/'. The first
|
|
||||||
# value specifies the action to take when the order id reset request resulted
|
|
||||||
# from setting ResetOrderIdsAtStart=yes. The second specifies the action to
|
|
||||||
# take when the order id reset request is a result of the user clicking the
|
|
||||||
# 'Reset API order ID sequence' button in the API configuration. Each value
|
|
||||||
# must be one of the following:
|
|
||||||
#
|
|
||||||
# 'confirm'
|
|
||||||
# order ids will be reset
|
|
||||||
#
|
|
||||||
# 'reject'
|
|
||||||
# order ids will not be reset
|
|
||||||
#
|
|
||||||
# 'ignore'
|
|
||||||
# IBC will ignore the dialog. The user must take action.
|
|
||||||
#
|
|
||||||
# The default setting is ignore/ignore
|
|
||||||
|
|
||||||
# Examples:
|
|
||||||
#
|
|
||||||
# 'confirm/reject' - confirm order id reset only if ResetOrderIdsAtStart=yes
|
|
||||||
# and reject any user-initiated requests
|
|
||||||
#
|
|
||||||
# 'ignore/confirm' - user must decide what to do if ResetOrderIdsAtStart=yes
|
|
||||||
# and confirm user-initiated requests
|
|
||||||
#
|
|
||||||
# 'reject/ignore' - reject order id reset if ResetOrderIdsAtStart=yes but
|
|
||||||
# allow user to handle user-initiated requests
|
|
||||||
|
|
||||||
ConfirmOrderIdReset=
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
 # =============================================================================
-# 4. TWS Auto-Logoff and Auto-Restart
+# 4. TWS Auto-Closedown
 # =============================================================================
 #
-# TWS and Gateway insist on being restarted every day. Two alternative
-# automatic options are offered:
-#
-#   - Auto-Logoff: at a specified time, TWS shuts down tidily, without
-#     restarting.
-#
-#   - Auto-Restart: at a specified time, TWS shuts down and then restarts
-#     without the user having to re-authenticate.
-#
-# The normal way to configure the time at which this happens is via the Lock
-# and Exit section of the Configuration dialog. Once this time has been
-# configured in this way, the setting persists until the user changes it again.
-#
-# However, there are situations where there is no user available to do this
-# configuration, or where there is no persistent storage (for example some
-# Docker images). In such cases, the auto-restart or auto-logoff time can be
-# set whenever IBC starts with the settings below.
-#
-# The value, if specified, must be a time in HH:MM AM/PM format, for example
-# 08:00 AM or 10:00 PM. Note that there must be a single space between the
-# two parts of this value; also that midnight is "12:00 AM" and midday is
-# "12:00 PM".
-#
-# If no value is specified for either setting, the currently configured
-# settings will apply. If a value is supplied for one setting, the other
-# setting is cleared. If values are supplied for both settings, only the
-# auto-restart time is set, and the auto-logoff time is cleared.
-#
-# Note that for a normal TWS/Gateway installation with persistent storage
-# (for example on a desktop computer) the value will be persisted as if the
-# user had set it via the configuration dialog.
-#
-# If you choose to auto-restart, you should take note of the considerations
-# described at the link below. Note that where this information mentions
-# 'manual authentication', restarting IBC will do the job (IBKR does not
-# recognise the existence of IBC in its documentation).
-#
-# https://www.interactivebrokers.com/en/software/tws/twsguide.htm#usersguidebook/configuretws/auto_restart_info.htm
-#
-# If you use the "RESTART" command via the IBC command server, and IBC is
-# running any version of the Gateway (or a version of TWS earlier than 1018),
-# note that this will set the Auto-Restart time in Gateway/TWS's configuration
-# dialog to the time at which the restart actually happens (which may be up to
-# a minute after the RESTART command is issued). To prevent future auto-
-# restarts at this time, you must make sure you have set AutoLogoffTime or
-# AutoRestartTime to your desired value before running IBC. NB: this does not
-# apply to TWS from version 1018 onwards.
-
-AutoLogoffTime=
-
-AutoRestartTime=
+# IMPORTANT NOTE: Starting with TWS 974, this setting no longer
+# works properly, because IB have changed the way TWS handles its
+# autologoff mechanism.
+#
+# You should now configure the TWS autologoff time to something
+# convenient for you, and restart IBC each day.
+#
+# Alternatively, discontinue use of IBC and use the auto-relogin
+# mechanism within TWS 974 and later versions (note that the
+# auto-relogin mechanism provided by IB is not available if you
+# use IBC).
+#
+# Set to yes or no (lower case).
+#
+#   yes   means allow TWS to shut down automatically at its
+#         specified shutdown time, which is set via the TWS
+#         configuration menu.
+#
+#   no    means TWS never shuts down automatically.
+#
+# NB: IB recommends that you do not keep TWS running
+# continuously. If you set this setting to 'no', you may
+# experience incorrect TWS operation.
+#
+# NB: the default for this setting is 'no'. Since this will
+# only work properly with TWS versions earlier than 974, you
+# should explicitly set this to 'yes' for version 974 and later.
+
+IbAutoClosedown=yes
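The HH:MM AM/PM constraint above (12-hour clock, single space before the AM/PM marker) maps directly onto the strptime pattern '%I:%M %p'; a small sketch for pre-validating a value before writing it into config.ini (the helper name is hypothetical):

from datetime import datetime

def valid_ibc_time(value: str) -> bool:
    # '%I' is the 12-hour hour field and '%p' the AM/PM marker, so
    # midnight must be written '12:00 AM' and midday '12:00 PM'
    try:
        datetime.strptime(value, '%I:%M %p')
    except ValueError:
        return False
    return True

assert valid_ibc_time('12:00 AM')   # midnight
assert not valid_ibc_time('24:00')  # not a 12-hour clock value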
 # =============================================================================
 # 5. TWS Tidy Closedown Time
 # =============================================================================
 #
-# Specifies a time at which TWS will close down tidily, with no restart.
-#
-# There is little reason to use this setting. It is similar to AutoLogoffTime,
-# but can include a day-of-the-week, whereas AutoLogoffTime and AutoRestartTime
-# apply every day. So for example you could use ClosedownAt in conjunction with
-# AutoRestartTime to shut down TWS on Friday evenings after the markets
-# close, without it running on Saturday as well.
+# NB: starting with TWS 974 this is no longer a useful option
+# because both TWS and Gateway now have the same auto-logoff
+# mechanism, and IBC can no longer avoid this.
+#
+# Note that giving this setting a value does not change TWS's
+# auto-logoff in any way: any setting will be additional to the
+# TWS auto-logoff.
 #
 # To tell IBC to tidily close TWS at a specified time every
 # day, set this value to <hh:mm>, for example:
@ -679,7 +487,7 @@ AcceptIncomingConnectionAction=reject

 #   no   means the dialog remains on display and must be
 #        handled by the user.

-AllowBlindTrading=no
+AllowBlindTrading=yes


 # Save Settings on a Schedule

@ -722,26 +530,6 @@ AllowBlindTrading=no

 SaveTwsSettingsAt=

-
-# Confirm Crypto Currency Orders Automatically
-# --------------------------------------------
-#
-# When you place an order for a cryptocurrency contract, a dialog is displayed
-# asking you to confirm that you want to place the order, and notifying you
-# that you are placing an order to trade cryptocurrency with Paxos, a New York
-# limited trust company, and not at Interactive Brokers.
-#
-# transmit means that the order will be placed automatically, and the
-#          dialog will then be closed
-#
-# cancel   means that the order will not be placed, and the dialog will
-#          then be closed
-#
-# manual   means that IBC will take no action and the user must deal
-#          with the dialog
-
-ConfirmCryptoCurrencyOrders=transmit


 # =============================================================================
 # 7. Settings Specific to Indian Versions of TWS
 # =============================================================================

@ -778,17 +566,13 @@ DismissNSEComplianceNotice=yes

 #
 # The port number that IBC listens on for commands
 # such as "STOP". DO NOT set this to the port number
-# used for TWS API connections.
-#
-# The convention is to use 7462 for this port,
-# but it must be set to a different value from any other
-# IBC instance that might run at the same time.
-#
-# The default value is 0, which tells IBC not to start
-# the command server
+# used for TWS API connections. There is no good reason
+# to change this setting unless the port is used by
+# some other application (typically another instance of
+# IBC). The default value is 0, which tells IBC not to
+# start the command server

 #CommandServerPort=7462
-CommandServerPort=0


 # Permitted Command Sources

@ -799,19 +583,19 @@ CommandServerPort=0

 # IBC. Commands can always be sent from the
 # same host as IBC is running on.

-ControlFrom=
+ControlFrom=127.0.0.1


 # Address for Receiving Commands
 # ------------------------------
 #
 # Specifies the IP address on which the Command Server
 # is to listen. For a multi-homed host, this can be used
 # to specify that connection requests are only to be
 # accepted on the specified address. The default is to
 # accept connection requests on all local addresses.

-BindAddress=
+BindAddress=127.0.0.1
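With a non-zero CommandServerPort, commands such as "STOP" are sent to IBC over TCP; a minimal sketch of issuing one from the same host (the exact wire protocol, e.g. line termination and any reply, is an assumption here, and only the command names and port/bind settings come from this file):

import socket

def send_ibc_command(
    cmd: str,
    host: str = '127.0.0.1',
    port: int = 7462,
) -> None:
    # assumes IBC was started with CommandServerPort=7462 and that it
    # accepts a newline-terminated ASCII command; both are assumptions
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(cmd.encode('ascii') + b'\n')

# e.g. trigger a tidy shutdown:
# send_ibc_command('STOP')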
 # Command Prompt

@ -837,7 +621,7 @@ CommandPrompt=

 # information is sent. The default is that such information
 # is not sent.

-SuppressInfoMessages=yes
+SuppressInfoMessages=no


@ -867,10 +651,10 @@ SuppressInfoMessages=yes

 # The LogStructureScope setting indicates which windows are
 # eligible for structure logging:
 #
-# - (default value) if set to 'known', only windows that
-#   IBC recognizes are eligible - these are windows that
-#   IBC has some interest in monitoring, usually to take
-#   some action on the user's behalf;
+# - if set to 'known', only windows that IBC recognizes
+#   are eligible - these are windows that IBC has some
+#   interest in monitoring, usually to take some action
+#   on the user's behalf;
 #
 # - if set to 'unknown', only windows that IBC does not
 #   recognize are eligible. Most windows displayed by

@ -883,8 +667,9 @@ SuppressInfoMessages=yes

 # - if set to 'all', then every window displayed by TWS
 #   is eligible.
 #
+# The default value is 'known'.

-LogStructureScope=known
+LogStructureScope=all


 # When to Log Window Structure

@ -897,15 +682,13 @@ LogStructureScope=known

 #   structure of an eligible window the first time it
 #   is encountered;
 #
-# - if set to 'openclose', the structure is logged every
-#   time an eligible window is opened or closed;
-#
 # - if set to 'activate', the structure is logged every
 #   time an eligible window is made active;
 #
-# - (default value) if set to 'never' or 'no' or 'false',
-#   structure information is never logged.
+# - if set to 'never' or 'no' or 'false', structure
+#   information is never logged.
 #
+# The default value is 'never'.

 LogStructureWhen=never

@ -925,3 +708,4 @@ LogStructureWhen=never

 #LogComponents=
@ -121,7 +121,6 @@ async def bot_main():

         # tick_throttle=10,
     ) as feed,

-    tractor.trionics.collapse_eg(),
     trio.open_nursery() as tn,
 ):
     assert accounts
27 flake.lock

@ -1,27 +0,0 @@
{
  "nodes": {
    "nixpkgs": {
      "locked": {
        "lastModified": 1765779637,
        "narHash": "sha256-KJ2wa/BLSrTqDjbfyNx70ov/HdgNBCBBSQP3BIzKnv4=",
        "owner": "nixos",
        "repo": "nixpkgs",
        "rev": "1306659b587dc277866c7b69eb97e5f07864d8c4",
        "type": "github"
      },
      "original": {
        "owner": "nixos",
        "ref": "nixos-unstable",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "nixpkgs": "nixpkgs"
      }
    }
  },
  "root": "root",
  "version": 7
}
103 flake.nix

@ -1,103 +0,0 @@
# An "impure" template thx to `pyproject.nix`,
# https://pyproject-nix.github.io/pyproject.nix/templates.html#impure
# https://github.com/pyproject-nix/pyproject.nix/blob/master/templates/impure/flake.nix
{
  description = "An impure `piker` overlay using `uv` with Nix(OS)";

  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
  };

  outputs =
    { nixpkgs, ... }:
    let
      inherit (nixpkgs) lib;
      forAllSystems = lib.genAttrs lib.systems.flakeExposed;
    in
    {
      devShells = forAllSystems (
        system:
        let
          pkgs = nixpkgs.legacyPackages.${system};

          # do store-path extractions
          qt6baseStorePath = lib.getLib pkgs.qt6.qtbase;
          # ?TODO? can remove below since manual linking not needed?
          # qt6QtWaylandStorePath = lib.getLib pkgs.qt6.qtwayland;

          # XXX NOTE XXX, for now we overlay specific pkgs via
          # a major-version-pinned-`cpython`
          cpython = "python313";
          pypkgs = pkgs."${cpython}Packages";
        in
        {
          default = pkgs.mkShell {

            packages = with pkgs; [
              # XXX, ensure sh completions active!
              bashInteractive
              bash-completion

              # dev utils
              ruff
              pypkgs.ruff

              qt6.qtwayland
              qt6.qtbase

              uv
              python313  # ?TODO^ how to set from `cpython` above?
              pypkgs.pyqt6
              pypkgs.pyqt6-sip
              pypkgs.qtpy
              pypkgs.qdarkstyle
              pypkgs.rapidfuzz
            ];

            shellHook = ''
              # unmask to debug **this** dev-shell-hook
              # set -e

              # set qt-base/plugin path(s)
              QTBASE_PATH="${qt6baseStorePath}/lib"
              QT_PLUGIN_PATH="${qt6baseStorePath}/lib/qt-6/plugins"
              QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"

              # link in Qt cc lib paths from <nixpkgs>
              LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
              LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
              LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"

              # link-in c++ stdlib for various AOT-ext-pkgs (numpy, etc.)
              LD_LIBRARY_PATH="${pkgs.stdenv.cc.cc.lib}/lib:$LD_LIBRARY_PATH"

              export LD_LIBRARY_PATH

              # RUNTIME-SETTINGS
              #
              # ------ Qt ------
              # XXX, unmask to debug qt .so linking/loading deats
              # export QT_DEBUG_PLUGINS=1
              #
              # ALSO, for *modern linux* DEs,
              # - maybe set wayland-mode (TODO, parametrize this!)
              #   * a chosen wayland-mode shell-integration
              export QT_QPA_PLATFORM="wayland"
              export QT_WAYLAND_SHELL_INTEGRATION="xdg-shell"

              # ------ uv ------
              # - always use the ./py313/ venv-subdir
              export UV_PROJECT_ENVIRONMENT="py313"
              # sync project-env with all extras
              uv sync --dev --all-extras --no-group lint

              # ------ TIPS ------
              # NOTE, to launch the py-venv installed `xonsh` (like @goodboy)
              # run the `nix develop` cmd with,
              # >> nix develop -c uv run xonsh
            '';
          };
        }
      );
    };
}
@ -1,16 +0,0 @@
.accounting
-----------
A subsystem for transaction processing, storage and historical
measurement.


.pnl
----
BEP, the break even price: the price at which liquidating
a remaining position results in a zero PnL since the position was
"opened" in the destination asset.

PPU, the price-per-unit: the "average cost" (in cumulative mean terms)
of the "entry" transactions which "make a position larger"; taking
a profit relative to this price means that you will "make more
profit than was made prior" since the position was opened.
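A tiny worked example of the two measures (numbers invented for illustration): buy 2 units at 100 and 2 more at 110, ignoring fees:

# entries only, so ppu is the size-weighted mean entry price
ppu = (2 * 100 + 2 * 110) / (2 + 2)   # -> 105.0

# sell 2 at 120: a *reducing* clear leaves ppu at 105 but realizes
# pnl of 2 * (120 - 105) = 30; the bep on the remaining 2 units is
# ppu - cum_pnl / cumsize:
bep = ppu - 30 / 2                    # -> 90.0, any exit above this nets positive
assert (ppu, bep) == (105.0, 90.0)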
@ -21,26 +21,24 @@ for tendiez.

 '''
 from ..log import get_logger

-from .calc import (
-    iter_by_dt,
-)
 from ._ledger import (
+    iter_by_dt,
     Transaction,
     TransactionLedger,
     open_trade_ledger,
 )
 from ._pos import (
-    Account,
-    load_account,
-    load_account_from_ledger,
-    open_account,
+    load_pps_from_ledger,
+    open_pps,
     Position,
+    PpTable,
 )
 from ._mktinfo import (
     Asset,
     dec_digits,
     digits_to_dec,
     MktPair,
+    Symbol,
     unpack_fqme,
     _derivs as DerivTypes,
 )

@ -49,24 +47,23 @@ from ._allocate import (

     Allocator,
 )


 log = get_logger(__name__)

 __all__ = [
-    'Account',
     'Allocator',
     'Asset',
     'MktPair',
     'Position',
+    'PpTable',
+    'Symbol',
     'Transaction',
     'TransactionLedger',
     'dec_digits',
     'digits_to_dec',
     'iter_by_dt',
-    'load_account',
-    'load_account_from_ledger',
+    'load_pps_from_ledger',
     'mk_allocator',
-    'open_account',
+    'open_pps',
     'open_trade_ledger',
     'unpack_fqme',
     'DerivTypes',
@ -85,7 +82,7 @@ def get_likely_pair(

     '''
     try:
-        src_name_start: int = bs_mktid.rindex(src)
+        src_name_start = bs_mktid.rindex(src)
     except (
         ValueError,   # substr not found
     ):

@ -96,8 +93,25 @@ def get_likely_pair(

         # log.warning(
         #     f'No src fiat {src} found in {bs_mktid}?'
         # )
-        return None
+        return

-    likely_dst: str = bs_mktid[:src_name_start]
+    likely_dst = bs_mktid[:src_name_start]
     if likely_dst == dst:
         return bs_mktid

+
+if __name__ == '__main__':
+    import sys
+    from pprint import pformat
+
+    args = sys.argv
+    assert len(args) > 1, 'Specify account(s) from `brokers.toml`'
+    args = args[1:]
+    for acctid in args:
+        broker, name = acctid.split('.')
+        trans, updated_pps = load_pps_from_ledger(broker, name)
+        print(
+            f'Processing transactions into pps for {broker}:{acctid}\n'
+            f'{pformat(trans)}\n\n'
+            f'{pformat(updated_pps)}'
+        )
|
||||||
|
|
||||||
from ._pos import Position
|
from ._pos import Position
|
||||||
from . import MktPair
|
from . import MktPair
|
||||||
from piker.types import Struct
|
from ..data.types import Struct
|
||||||
|
|
||||||
|
|
||||||
_size_units = bidict({
|
_size_units = bidict({
|
||||||
|
|
@ -118,9 +118,9 @@ class Allocator(Struct):
|
||||||
ld: int = mkt.size_tick_digits
|
ld: int = mkt.size_tick_digits
|
||||||
|
|
||||||
size_unit = self.size_unit
|
size_unit = self.size_unit
|
||||||
live_size = live_pp.cumsize
|
live_size = live_pp.size
|
||||||
abs_live_size = abs(live_size)
|
abs_live_size = abs(live_size)
|
||||||
abs_startup_size = abs(startup_pp.cumsize)
|
abs_startup_size = abs(startup_pp.size)
|
||||||
|
|
||||||
u_per_slot, currency_per_slot = self.step_sizes()
|
u_per_slot, currency_per_slot = self.step_sizes()
|
||||||
|
|
||||||
|
|
@ -213,6 +213,8 @@ class Allocator(Struct):
|
||||||
slots_used = self.slots_used(
|
slots_used = self.slots_used(
|
||||||
Position(
|
Position(
|
||||||
mkt=mkt,
|
mkt=mkt,
|
||||||
|
size=order_size,
|
||||||
|
ppu=price,
|
||||||
bs_mktid=mkt.bs_mktid,
|
bs_mktid=mkt.bs_mktid,
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
|
|
@ -239,7 +241,7 @@ class Allocator(Struct):
|
||||||
Calc and return the number of slots used by this ``Position``.
|
Calc and return the number of slots used by this ``Position``.
|
||||||
|
|
||||||
'''
|
'''
|
||||||
abs_pp_size = abs(pp.cumsize)
|
abs_pp_size = abs(pp.size)
|
||||||
|
|
||||||
if self.size_unit == 'currency':
|
if self.size_unit == 'currency':
|
||||||
# live_currency_size = size or (abs_pp_size * pp.ppu)
|
# live_currency_size = size or (abs_pp_size * pp.ppu)
|
||||||
|
|
|
||||||
|
|
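The slot accounting above reduces to simple arithmetic: the position's absolute size (in units, or scaled into currency by its ppu) divided by the per-slot step returned from step_sizes(). A standalone sketch of that calculation (the helper name and the lack of rounding are assumptions, not the Allocator's exact code):

def slots_used_sketch(
    abs_pp_size: float,
    ppu: float,
    size_unit: str,           # 'units' or 'currency'
    u_per_slot: float,
    currency_per_slot: float,
) -> float:
    # scale unit-size into currency terms when allocating by currency
    if size_unit == 'currency':
        return (abs_pp_size * ppu) / currency_per_slot
    return abs_pp_size / u_per_slot

# 4 units at ppu=105 against a $500-per-slot allocation -> 0.84 slots
assert round(slots_used_sketch(4, 105.0, 'currency', 1, 500), 2) == 0.84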
@ -21,77 +21,65 @@ Trade and transaction ledger processing.

 from __future__ import annotations
 from collections import UserDict
 from contextlib import contextmanager as cm
-from functools import partial
 from pathlib import Path
-from pprint import pformat
-from types import ModuleType
 from typing import (
     Any,
     Callable,
-    Generator,
-    Literal,
-    TYPE_CHECKING,
+    Iterator,
+    Union,
+    Generator
 )

 from pendulum import (
+    datetime,
     DateTime,
+    from_timestamp,
+    parse,
 )
 import tomli_w  # for fast ledger writing

-from piker.types import Struct
-from piker import config
-from piker.log import get_logger
-from .calc import (
-    iter_by_dt,
+from .. import config
+from ..data.types import Struct
+from ..log import get_logger
+from ._mktinfo import (
+    Symbol,  # legacy
+    MktPair,
+    Asset,
 )

-if TYPE_CHECKING:
-    from ..data._symcache import (
-        SymbologyCache,
-    )

 log = get_logger(__name__)


-TxnType = Literal[
-    'clear',
-    'transfer',
-
-    # TODO: see https://github.com/pikers/piker/issues/510
-    # 'split',
-    # 'rename',
-    # 'resize',
-    # 'removal',
-]


 class Transaction(Struct, frozen=True):

-    # NOTE: this is a unified acronym also used in our `MktPair`
-    # and can stand for any of a
-    # "fully qualified <blank> endpoint":
-    # - "market" in the case of financial trades
-    #   (btcusdt.spot.binance).
-    # - "merkle (tree)" aka a blockchain system "wallet transfers"
-    #   (btc.blockchain)
-    # - "money" for traditional (digital databases)
-    #   *bank accounts* (usd.swift, eur.sepa)
+    # TODO: unify this with the `MktPair`,
+    # once we have that as a required field,
+    # we don't really need the fqme any more..
     fqme: str

-    tid: str | int  # unique transaction id
+    tid: Union[str, int]  # unique transaction id
     size: float
     price: float
     cost: float  # commissions or other additional costs
-    dt: DateTime
+    dt: datetime

-    # the "event type" in terms of "market events" see above and
-    # https://github.com/pikers/piker/issues/510
-    etype: TxnType = 'clear'

     # TODO: we can drop this right since we
     # can instead expect the backend to provide this
     # via the `MktPair`?
-    expiry: DateTime | None = None
+    expiry: datetime | None = None

+    # TODO: drop the Symbol type, construct using
+    # t.sys (the transaction system)
+
+    # the underlying "transaction system", normally one of a ``MktPair``
+    # (a description of a tradable double auction) or a ledger-recorded
+    # ("ledger" in any sense as long as you can record transfers) of any
+    # sort) ``Asset``.
+    sym: MktPair | Asset | Symbol | None = None
+
+    @property
+    def sys(self) -> Symbol:
+        return self.sym

     # (optional) key-id defined by the broker-service backend which
     # ensures the instrument-symbol market key for this record is unique
@ -100,16 +88,15 @@ class Transaction(Struct, frozen=True):

     # service.
     bs_mktid: str | int | None = None

-    def to_dict(
-        self,
-        **kwargs,
-    ) -> dict:
-        dct: dict[str, Any] = super().to_dict(**kwargs)
+    def to_dict(self) -> dict:
+        dct = super().to_dict()
+
+        # TODO: switch to sys!
+        dct.pop('sym')

         # ensure we use a pendulum formatted
         # ISO style str here!@
         dct['dt'] = str(self.dt)

         return dct
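A minimal sketch of the round-trip this enables, using only fields shown above on the main-branch type (the fqme/tid values are invented for illustration):

from pendulum import parse

# a plain 'clear' (fill) record; `etype` defaults to 'clear'
t = Transaction(
    fqme='btcusdt.spot.binance',
    tid='abc123',
    size=0.5,
    price=26_000.0,
    cost=2.6,
    dt=parse('2023-06-01T12:00:00Z'),
)
d = t.to_dict()
assert isinstance(d['dt'], str)  # pendulum dt serialized to an ISO str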
@ -121,45 +108,17 @@ class TransactionLedger(UserDict):

     outside.

     '''
-    # NOTE: see `open_trade_ledger()` for defaults, this should
-    # never be constructed manually!
     def __init__(
         self,
         ledger_dict: dict,
         file_path: Path,
-        account: str,
-        mod: ModuleType,  # broker mod
         tx_sort: Callable,
-        symcache: SymbologyCache,

     ) -> None:
-        self.account: str = account
-        self.file_path: Path = file_path
-        self.mod: ModuleType = mod
-        self.tx_sort: Callable = tx_sort
-
-        self._symcache: SymbologyCache = symcache
-
-        # any added txns we keep in that form for meta-data
-        # gathering purposes
-        self._txns: dict[str, Transaction] = {}
-
+        self.file_path = file_path
+        self.tx_sort = tx_sort
         super().__init__(ledger_dict)

-    def __repr__(self) -> str:
-        return (
-            f'TransactionLedger: {len(self)}\n'
-            f'{pformat(list(self.data))}'
-        )
-
-    @property
-    def symcache(self) -> SymbologyCache:
-        '''
-        Read-only ref to backend's ``SymbologyCache``.
-
-        '''
-        return self._symcache
-
     def update_from_t(
         self,
         t: Transaction,
@ -170,14 +129,14 @@ class TransactionLedger(UserDict):

         '''
         self.data[t.tid] = t.to_dict()
-        self._txns[t.tid] = t

-    def iter_txns(
+    def iter_trans(
         self,
-        symcache: SymbologyCache | None = None,
+        mkt_by_fqme: dict[str, MktPair],
+        broker: str = 'paper',

     ) -> Generator[
-        Transaction,
+        tuple[str, Transaction],
         None,
         None,
     ]:
@ -186,127 +145,129 @@ class TransactionLedger(UserDict):

         form via generator.

         '''
-        symcache = symcache or self._symcache
-
-        if self.account == 'paper':
-            from piker.clearing import _paper_engine
-            norm_trade: Callable = partial(
-                _paper_engine.norm_trade,
-                brokermod=self.mod,
-            )
-        else:
-            norm_trade: Callable = self.mod.norm_trade
-
-        # datetime-sort and pack into txs
-        for tid, txdict in self.tx_sort(self.data.items()):
-            txn: Transaction = norm_trade(
-                tid,
-                txdict,
-                pairs=symcache.pairs,
-                symcache=symcache,
-            )
-            yield txn
-
-    def to_txns(
-        self,
-        symcache: SymbologyCache | None = None,
-    ) -> dict[str, Transaction]:
-        '''
-        Return entire output from ``.iter_txns()`` in a ``dict``.
-
-        '''
-        txns: dict[str, Transaction] = {}
-        for t in self.iter_txns(symcache=symcache):
-            if not t:
-                log.warning(f'{self.mod.name}:{self.account} TXN is -> {t}')
-                continue
-            txns[t.tid] = t
-        return txns
-
-    def write_config(self) -> None:
-        '''
-        Render the self.data ledger dict to its TOML file form.
-
-        ALWAYS order datetime sorted!
-
-        '''
-        is_paper: bool = self.account == 'paper'
-        symcache: SymbologyCache = self._symcache
-        towrite: dict[str, Any] = {}
-        for tid, txdict in self.tx_sort(
-            self.data.copy()
-        ):
-            # write blank-str expiry for non-expiring assets
-            if (
-                'expiry' in txdict
-                and txdict['expiry'] is None
-            ):
-                txdict['expiry'] = ''
-
-            # (maybe) re-write old acro-key
-            if (
-                is_paper
-                # if symcache is empty/not supported (yet), don't
-                # bother xD
-                and symcache.mktmaps
-            ):
-                fqme: str = txdict.pop('fqsn', None) or txdict['fqme']
-                bs_mktid: str | None = txdict.get('bs_mktid')
-
-                if (
-                    fqme not in symcache.mktmaps
-                    or (
-                        # also try to see if this is maybe a paper
-                        # engine ledger in which case the bs_mktid
-                        # should be the fqme as well!
-                        bs_mktid
-                        and fqme != bs_mktid
-                    )
-                ):
-                    # always take any (paper) bs_mktid if defined and
-                    # in the backend's cache key set.
-                    if bs_mktid in symcache.mktmaps:
-                        fqme: str = bs_mktid
-                    else:
-                        best_fqme: str = list(symcache.search(fqme))[0]
-                        log.warning(
-                            f'Could not find FQME: {fqme} in qualified set?\n'
-                            f'Qualifying and expanding {fqme} -> {best_fqme}'
-                        )
-                        fqme = best_fqme
-
-                if (
-                    bs_mktid
-                    and bs_mktid != fqme
-                ):
-                    # in paper account case always make sure both the
-                    # fqme and bs_mktid are fully qualified..
-                    txdict['bs_mktid'] = fqme
-
-                # in paper ledgers always write the latest
-                # symbology key field: an FQME.
-                txdict['fqme'] = fqme
-
-            towrite[tid] = txdict
-
-        with self.file_path.open(mode='wb') as fp:
-            tomli_w.dump(towrite, fp)
+        if broker != 'paper':
+            raise NotImplementedError('Per broker support not dun yet!')
+
+        # TODO: lookup some standard normalizer
+        # func in the backend?
+        # from ..brokers import get_brokermod
+        # mod = get_brokermod(broker)
+        # trans_dict = mod.norm_trade_records(self.data)
+
+        # NOTE: instead i propose the normalizer is
+        # a one shot routine (that can be lru cached)
+        # and instead call it for each entry incrementally:
+        # normer = mod.norm_trade_record(txdict)
+
+        # TODO: use tx_sort here yah?
+        for txdict in self.tx_sort(self.data.values()):
+            # for tid, txdict in self.data.items():
+            # special field handling for datetimes
+            # to ensure pendulum is used!
+            tid: str = txdict['tid']
+            fqme: str = txdict.get('fqme') or txdict['fqsn']
+            dt: DateTime = parse(txdict['dt'])
+            expiry: str | None = txdict.get('expiry')
+
+            if not (mkt := mkt_by_fqme.get(fqme)):
+                # we can't build a trans if we don't have
+                # the ``.sys: MktPair`` info, so skip.
+                continue
+
+            tx = Transaction(
+                fqme=fqme,
+                tid=txdict['tid'],
+                dt=dt,
+                price=txdict['price'],
+                size=txdict['size'],
+                cost=txdict.get('cost', 0),
+                bs_mktid=txdict['bs_mktid'],
+
+                # TODO: change to .sys!
+                sym=mkt,
+                expiry=parse(expiry) if expiry else None,
+            )
+            yield tid, tx
+
+    def to_trans(
+        self,
+        **kwargs,
+    ) -> dict[str, Transaction]:
+        '''
+        Return entire output from ``.iter_trans()`` in a ``dict``.
+
+        '''
+        return dict(self.iter_trans(**kwargs))
+
+    def write_config(
+        self,
+    ) -> None:
+        '''
+        Render the self.data ledger dict to its TOML file form.
+
+        '''
+        cpy = self.data.copy()
+        towrite: dict[str, Any] = {}
+        for tid, trans in cpy.items():
+            # drop key for non-expiring assets
+            txdict = towrite[tid] = self.data[tid]
+            if (
+                'expiry' in txdict
+                and txdict['expiry'] is None
+            ):
+                txdict.pop('expiry')
+
+            # re-write old acro-key
+            fqme = txdict.get('fqsn')
+            if fqme:
+                txdict['fqme'] = fqme
+
+        with self.file_path.open(mode='wb') as fp:
+            tomli_w.dump(towrite, fp)
+def iter_by_dt(
+    records: dict[str, dict[str, Any]] | list[dict],
+
+    # NOTE: parsers are looked up in the insert order
+    # so if you know that the record stats show some field
+    # is more common than others, stick it at the top B)
+    parsers: dict[tuple[str], Callable] = {
+        'dt': None,  # parity case
+        'datetime': parse,  # datetime-str
+        'time': from_timestamp,  # float epoch
+    },
+    key: Callable | None = None,
+
+) -> Iterator[tuple[str, dict]]:
+    '''
+    Iterate entries of a ``records: dict`` table sorted by entry recorded
+    datetime presumably set at the ``'dt'`` field in each entry.
+
+    '''
+    def dyn_parse_to_dt(txdict: dict[str, Any]) -> DateTime:
+        k, v, parser = next(
+            (k, txdict[k], parsers[k]) for k in parsers if k in txdict
+        )
+        return parser(v) if parser else v
+
+    if isinstance(records, dict):
+        records = records.values()
+
+    for entry in sorted(
+        records,
+        key=key or dyn_parse_to_dt,
+    ):
+        yield entry
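Both versions of this helper sort on whichever timestamp field a record happens to carry, so mixing an ISO 'datetime' string with a float 'time' epoch still yields a stable datetime ordering; e.g. (record values invented for illustration):

records = [
    {'tid': 'b', 'time': 1700000100.0},                # float epoch, 22:15:00Z
    {'tid': 'a', 'datetime': '2023-11-14T22:13:00Z'},  # ISO str, two min earlier
]
ordered = list(iter_by_dt(records))
assert [r['tid'] for r in ordered] == ['a', 'b']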
 def load_ledger(
     brokername: str,
     acctid: str,
-
-    # for testing or manual load from file
-    dirpath: Path | None = None,

 ) -> tuple[dict, Path]:
     '''
     Load a ledger (TOML) file from user's config directory:

@ -321,11 +282,7 @@ def load_ledger(

     except ModuleNotFoundError:
         import tomli as tomllib

-    ldir: Path = (
-        dirpath
-        or
-        config._config_dir / 'accounting' / 'ledgers'
-    )
+    ldir: Path = config._config_dir / 'accounting' / 'ledgers'
     if not ldir.is_dir():
         ldir.mkdir()
@ -351,15 +308,8 @@ def open_trade_ledger(

     broker: str,
     account: str,

-    allow_from_sync_code: bool = False,
-    symcache: SymbologyCache | None = None,
-
     # default is to sort by detected datetime-ish field
     tx_sort: Callable = iter_by_dt,
-    rewrite: bool = False,
-
-    # for testing or manual load from file
-    _fp: Path | None = None,

 ) -> Generator[TransactionLedger, None, None]:
     '''

@ -371,58 +321,18 @@ def open_trade_ledger(

     name as defined in the user's ``brokers.toml`` config.

     '''
-    from ..brokers import get_brokermod
-    mod: ModuleType = get_brokermod(broker)
-
-    ledger_dict, fpath = load_ledger(
-        broker,
-        account,
-        dirpath=_fp,
-    )
-    cpy: dict = ledger_dict.copy()
-
-    # XXX NOTE: if not provided presume we are being called from
-    # sync code and need to maybe run `trio` to generate..
-    if symcache is None:
-
-        # XXX: be mega pedantic and ensure the caller knows what
-        # they're doing!
-        if not allow_from_sync_code:
-            raise RuntimeError(
-                'You MUST set `allow_from_sync_code=True` when '
-                'calling `open_trade_ledger()` from sync code! '
-                'If you are calling from async code you MUST '
-                'instead pass a `symcache: SymbologyCache`!'
-            )
-
-        from ..data._symcache import (
-            get_symcache,
-        )
-        symcache: SymbologyCache = get_symcache(broker)
-
-    assert symcache
-
+    ledger_dict, fpath = load_ledger(broker, account)
+    cpy = ledger_dict.copy()
     ledger = TransactionLedger(
         ledger_dict=cpy,
         file_path=fpath,
-        account=account,
-        mod=mod,
-        symcache=symcache,
-
-        # NOTE: allow backends to provide custom ledger sorting
-        tx_sort=getattr(
-            mod,
-            'tx_sort',
-            tx_sort,
-        ),
+        tx_sort=tx_sort,
     )
     try:
         yield ledger
     finally:
-        if (
-            ledger.data != ledger_dict
-            or rewrite
-        ):
+        if ledger.data != ledger_dict:
             # TODO: show diff output?
             # https://stackoverflow.com/questions/12956957/print-diff-of-python-dictionaries
             log.info(f'Updating ledger for {fpath}:\n')
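A minimal sketch of driving the main-branch context manager from a synchronous script, per the guard above (broker/account values invented for illustration):

from piker.accounting import open_trade_ledger

# sync callers must opt in explicitly or pass a pre-built
# `symcache`, otherwise the RuntimeError above is raised
with open_trade_ledger(
    'binance',
    'paper',
    allow_from_sync_code=True,
) as ledger:
    for tid, txdict in ledger.data.items():
        print(tid, txdict.get('fqme'))
# a changed ledger is presumably re-written on exit,
# per the `finally` block above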
@ -36,7 +36,7 @@ from typing import (

     Literal,
 )

-from piker.types import Struct
+from ..data.types import Struct


 # TODO: make these literals..

@ -130,29 +130,8 @@ class Asset(Struct, frozen=True):

     # should not be explicitly required in our generic API.
     info: dict | None = None

-    # `None` is not toml-compat so drop info
-    # if no extra data added..
-    def to_dict(
-        self,
-        **kwargs,
-    ) -> dict:
-        dct = super().to_dict(**kwargs)
-        if (info := dct.pop('info', None)):
-            dct['info'] = info
-
-        assert dct['tx_tick']
-        return dct
-
-    @classmethod
-    def from_msg(
-        cls,
-        msg: dict[str, Any],
-    ) -> Asset:
-        return cls(
-            tx_tick=Decimal(str(msg.pop('tx_tick'))),
-            info=msg.pop('info', None),
-            **msg,
-        )
+    # TODO?
+    # _to_dict_skip = {'info'}

     def __str__(self) -> str:
         return self.name

@ -305,44 +284,16 @@ class MktPair(Struct, frozen=True):

     # config right?
     # src_type: AssetTypeName

-    # for derivs, info describing contract, egs. strike price, call
-    # or put, swap type, exercise model, etc.
+    # for derivs, info describing contract, egs.
+    # strike price, call or put, swap type, exercise model, etc.
     contract_info: list[str] | None = None

-    # TODO: rename to sectype since all of these can
-    # be considered "securities"?
     _atype: str = ''

-    # allow explicit disable of the src part of the market
-    # pair name -> useful for legacy markets like qqq.nasdaq.ib
-    _fqme_without_src: bool = False
-
     # NOTE: when cast to `str` return fqme
     def __str__(self) -> str:
         return self.fqme

-    def to_dict(
-        self,
-        **kwargs,
-    ) -> dict:
-        d = super().to_dict(**kwargs)
-        d['src'] = self.src.to_dict(**kwargs)
-
-        if not isinstance(self.dst, str):
-            d['dst'] = self.dst.to_dict(**kwargs)
-        else:
-            d['dst'] = str(self.dst)
-
-        d['price_tick'] = str(self.price_tick)
-        d['size_tick'] = str(self.size_tick)
-
-        if self.contract_info is None:
-            d.pop('contract_info')
-
-        # d.pop('_fqme_without_src')
-
-        return d
-
     @classmethod
     def from_msg(
         cls,

@ -353,32 +304,36 @@ class MktPair(Struct, frozen=True):

         Constructor for a received msg-dict normally received over IPC.

         '''
-        if not isinstance(
-            dst_asset_msg := msg.pop('dst'),
-            str,
-        ):
-            dst: Asset = Asset.from_msg(dst_asset_msg)  # .copy()
-        else:
-            dst: str = dst_asset_msg
-
-        src_asset_msg: dict = msg.pop('src')
-        src: Asset = Asset.from_msg(src_asset_msg)  # .copy()
-
-        # XXX NOTE: ``msgspec`` can encode `Decimal` but it doesn't
-        # decide to it by default since we aren't spec-cing these
-        # msgs as structs proper to get them to decode implicitly
-        # (yet) as per,
-        # - https://github.com/pikers/piker/pull/354
-        # - https://github.com/goodboy/tractor/pull/311
-        # SO we have to ensure we do a struct type
-        # case (which `.copy()` does) to ensure we get the right
-        # type!
+        dst_asset_msg = msg.pop('dst')
+        src_asset_msg = msg.pop('src')
+
+        if isinstance(dst_asset_msg, str):
+            src: str = str(src_asset_msg)
+            assert isinstance(src, str)
+            return cls.from_fqme(
+                dst_asset_msg,
+                src=src,
+                **msg,
+            )
+
+        else:
+            # NOTE: we call `.copy()` here to ensure
+            # type casting!
+            dst = Asset(**dst_asset_msg).copy()
+            if not isinstance(src_asset_msg, str):
+                src = Asset(**src_asset_msg).copy()
+            else:
+                src = str(src_asset_msg)
+
         return cls(
             dst=dst,
             src=src,
-            price_tick=Decimal(msg.pop('price_tick')),
-            size_tick=Decimal(msg.pop('size_tick')),
             **msg,
+            # XXX NOTE: ``msgspec`` can encode `Decimal`
+            # but it doesn't decide to it by default since
+            # we aren't spec-cing these msgs as structs, SO
+            # we have to ensure we do a struct type case (which `.copy()`
+            # does) to ensure we get the right type!
         ).copy()

     @property
@ -390,8 +345,8 @@ class MktPair(Struct, frozen=True):

         cls,
         fqme: str,

-        price_tick: float|str,
-        size_tick: float|str,
+        price_tick: float | str,
+        size_tick: float | str,
         bs_mktid: str,

         broker: str | None = None,

@ -406,20 +361,7 @@ class MktPair(Struct, frozen=True):

         ):
             _fqme = f'{fqme}.{broker}'

-        broker, mkt_ep_key, venue, expiry = unpack_fqme(_fqme)
-
-        kven: str = kwargs.pop('venue', venue)
-        if venue:
-            assert venue == kven
-        else:
-            venue = kven
-
-        exp: str = kwargs.pop('expiry', expiry)
-        if expiry:
-            assert exp == expiry
-        else:
-            expiry = exp
-
+        broker, mkt_ep_key, venue, suffix = unpack_fqme(_fqme)
         dst: Asset = Asset.guess_from_mkt_ep_key(
             mkt_ep_key,
             atype=kwargs.get('_atype'),

@ -431,15 +373,14 @@ class MktPair(Struct, frozen=True):

         # which we expect to be filled in by some
         # backend client with access to that data-info.
         return cls(
-            dst=dst,
             # XXX: not resolved to ``Asset`` :(
-            #src=src,
+            dst=dst,

             broker=broker,
             venue=venue,
             # XXX NOTE: we presume this token
             # is the expiry for now!
-            expiry=expiry,
+            expiry=suffix,

             price_tick=price_tick,
             size_tick=size_tick,
@ -545,7 +486,7 @@ class MktPair(Struct, frozen=True):

         '''
         key: str = (
             self.pair(delim_char=delim_char)
-            if not (without_src or self._fqme_without_src)
+            if not without_src
             else str(self.dst)
         )

@ -614,7 +555,7 @@ class MktPair(Struct, frozen=True):

         if isinstance(self.dst, Asset):
             return str(self.dst.atype)

-        return 'UNKNOWN'
+        return 'unknown'
@ -677,3 +618,90 @@ def unpack_fqme(

         # '.'.join([mkt_ep, venue]),
         suffix,
     )


+class Symbol(Struct):
+    '''
+    I guess this is some kinda container thing for dealing with
+    all the different meta-data formats from brokers?
+
+    '''
+    key: str
+
+    broker: str = ''
+    venue: str = ''
+
+    # precision descriptors for price and vlm
+    tick_size: Decimal = Decimal('0.01')
+    lot_tick_size: Decimal = Decimal('0.0')
+
+    suffix: str = ''
+    broker_info: dict[str, dict[str, Any]] = {}
+
+    @classmethod
+    def from_fqme(
+        cls,
+        fqsn: str,
+        info: dict[str, Any],
+
+    ) -> Symbol:
+        broker, mktep, venue, suffix = unpack_fqme(fqsn)
+        tick_size = info.get('price_tick_size', 0.01)
+        lot_size = info.get('lot_tick_size', 0.0)
+
+        return Symbol(
+            broker=broker,
+            key=mktep,
+            tick_size=tick_size,
+            lot_tick_size=lot_size,
+            venue=venue,
+            suffix=suffix,
+            broker_info={broker: info},
+        )
+
+    @property
+    def type_key(self) -> str:
+        return list(self.broker_info.values())[0]['asset_type']
+
+    @property
+    def tick_size_digits(self) -> int:
+        return float_digits(self.tick_size)
+
+    @property
+    def lot_size_digits(self) -> int:
+        return float_digits(self.lot_tick_size)
+
+    @property
+    def price_tick(self) -> Decimal:
+        return Decimal(str(self.tick_size))
+
+    @property
+    def size_tick(self) -> Decimal:
+        return Decimal(str(self.lot_tick_size))
+
+    @property
+    def broker(self) -> str:
+        return list(self.broker_info.keys())[0]
+
+    @property
+    def fqme(self) -> str:
+        return maybe_cons_tokens([
+            self.key,     # final "pair name" (eg. qqq[/usd], btcusdt)
+            self.venue,
+            self.suffix,  # includes expiry and other con info
+            self.broker,
+        ])
+
+    def quantize(
+        self,
+        size: float,
+    ) -> Decimal:
+        digits = float_digits(self.lot_tick_size)
+        return Decimal(size).quantize(
+            Decimal(f'1.{"0".ljust(digits, "0")}'),
+            rounding=ROUND_HALF_EVEN
+        )
+
+    # NOTE: when cast to `str` return fqme
+    def __str__(self) -> str:
+        return self.fqme
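For reference, the four unpacked tokens compose back in 'key.venue.suffix.broker' order via the fqme property; e.g. (values invented, and assuming unpack_fqme() returns the (broker, mkt_ep_key, venue, suffix) tuple used above):

info = {
    'price_tick_size': 0.01,
    'lot_tick_size': 1,
    'asset_type': 'future',
}
sym = Symbol.from_fqme('mnq.cme.20230616.ib', info)
# venue and suffix tokens round-trip through the fqme composition
assert (sym.venue, sym.suffix) == ('cme', '20230616')
assert sym.fqme == 'mnq.cme.20230616.ib'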
@ -1,768 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.

'''
Calculation routines for balance and position tracking such that
you know when you're losing money (if possible) XD

'''
from __future__ import annotations
from collections.abc import ValuesView
from contextlib import contextmanager as cm
from functools import partial
from math import copysign
from pprint import pformat
from typing import (
    Any,
    Callable,
    Iterator,
    TYPE_CHECKING,
)

from tractor.devx import maybe_open_crash_handler
import polars as pl
from pendulum import (
    DateTime,
    from_timestamp,
    parse,
)

from ..log import get_logger

if TYPE_CHECKING:
    from ._ledger import (
        Transaction,
        TransactionLedger,
    )

log = get_logger(__name__)
def ppu(
|
|
||||||
clears: Iterator[Transaction],
|
|
||||||
|
|
||||||
# include transaction cost in breakeven price
|
|
||||||
# and presume the worst case of the same cost
|
|
||||||
# to exit this transaction (even though in reality
|
|
||||||
# it will be dynamic based on exit stratetgy).
|
|
||||||
cost_scalar: float = 2,
|
|
||||||
|
|
||||||
# return the ledger of clears as a (now dt sorted) dict with
|
|
||||||
# new position fields inserted alongside each entry.
|
|
||||||
as_ledger: bool = False,
|
|
||||||
|
|
||||||
) -> float | list[(str, dict)]:
|
|
||||||
'''
|
|
||||||
Compute the "price-per-unit" price for the given non-zero sized
|
|
||||||
rolling position.
|
|
||||||
|
|
||||||
The recurrence relation which computes this (exponential) mean
|
|
||||||
per new clear which **increases** the accumulative postiion size
|
|
||||||
is:
|
|
||||||
|
|
||||||
ppu[-1] = (
|
|
||||||
ppu[-2] * accum_size[-2]
|
|
||||||
+
|
|
||||||
ppu[-1] * size
|
|
||||||
) / accum_size[-1]
|
|
||||||
|
|
||||||
where `cost_basis` for the current step is simply the price
|
|
||||||
* size of the most recent clearing transaction.
|
|
||||||
|
|
||||||
-----
|
|
||||||
TODO: get the BEP computed and working similarly!
|
|
||||||
-----
|
|
||||||
the equivalent "break even price" or bep at each new clear
|
|
||||||
event step conversely only changes when an "position exiting
|
|
||||||
clear" which **decreases** the cumulative dst asset size:
|
|
||||||
|
|
||||||
bep[-1] = ppu[-1] - (cum_pnl[-1] / cumsize[-1])
|
|
||||||
|
|
||||||
'''
|
|
||||||
asize_h: list[float] = []  # historical accumulative size
ppu_h: list[float] = []  # historical price-per-unit
# ledger: dict[str, dict] = {}
ledger: list[dict] = []

t: Transaction
for t in clears:
    clear_size: float = t.size
    clear_price: str | float = t.price
    is_clear: bool = not isinstance(clear_price, str)

    last_accum_size = asize_h[-1] if asize_h else 0
    accum_size: float = last_accum_size + clear_size
    accum_sign = copysign(1, accum_size)
    sign_change: bool = False

    # on transfers we normally write some non-valid
    # price since withdrawal to another account/wallet
    # has nothing to do with inter-asset-market prices.
    # TODO: this should be better handled via a `type: 'tx'`
    # field as per existing issue surrounding all this:
    # https://github.com/pikers/piker/issues/510
    if isinstance(clear_price, str):
        # TODO: we can't necessarily have this commit to
        # the overall pos size since we also need to
        # include other positions contributions to this
        # balance or we might end up with a -ve balance for
        # the position..
        continue

    # test if the pp somehow went "past" a net zero size state
    # resulting in a change of the "sign" of the size (+ve for
    # long, -ve for short).
    sign_change = (
        copysign(1, last_accum_size) + accum_sign == 0
        and last_accum_size != 0
    )

    # since we passed the net-zero-size state the new size
    # after sum should be the remaining size in the new
    # "direction" (aka, long vs. short) for this clear.
    if sign_change:
        clear_size: float = accum_size
        abs_diff: float = abs(accum_size)
        asize_h.append(0)
        ppu_h.append(0)

    else:
        # old size minus the new size gives us size diff with
        # +ve -> increase in pp size
        # -ve -> decrease in pp size
        abs_diff = abs(accum_size) - abs(last_accum_size)

    # XXX: LIFO breakeven price update. only an increase in size
    # of the position contributes to the breakeven price,
    # a decrease does not (i.e. the position is being made
    # smaller).
    # abs_clear_size = abs(clear_size)
    abs_new_size: float | int = abs(accum_size)

    if (
        abs_diff > 0
        and is_clear
    ):
        cost_basis = (
            # cost basis for this clear
            clear_price * abs(clear_size)
            +
            # transaction cost
            accum_sign * cost_scalar * t.cost
        )

        if asize_h:
            size_last: float = abs(asize_h[-1])
            cb_last: float = ppu_h[-1] * size_last
            ppu: float = (cost_basis + cb_last) / abs_new_size

        else:
            ppu: float = cost_basis / abs_new_size

    else:
        # TODO: for PPU we should probably handle txs out
        # (aka withdrawals) similarly by simply not having
        # them contrib to the running PPU calc and only
        # when the next entry clear comes in (which will
        # then have a higher weighting on the PPU).

        # on "exit" clears from a given direction,
        # only the size changes, not the price-per-unit,
        # since the ppu remains constant
        # and gets weighted by the new size.
        ppu: float = ppu_h[-1] if ppu_h else 0  # set to previous value

    # extend with new rolling metric for this step
    ppu_h.append(ppu)
    asize_h.append(accum_size)

    # ledger[t.tid] = {
    #     'txn': t,
    # ledger[t.tid] = t.to_dict() | {
    ledger.append((
        t.tid,
        t.to_dict() | {
            'ppu': ppu,
            'cumsize': accum_size,
            'sign_change': sign_change,

            # TODO: cum_pnl, bep
        }
    ))

final_ppu = ppu_h[-1] if ppu_h else 0
# TODO: once we have etypes in all ledger entries..
# handle any split info entered (for now) manually by user
# if self.split_ratio is not None:
#     final_ppu /= self.split_ratio

if as_ledger:
    return ledger

else:
    return final_ppu

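The zero-crossing test above lifts out cleanly; a minimal standalone sketch of the same `copysign` trick (not from the source, names hypothetical):

```python
from math import copysign

def crossed_zero(last_size: float, new_size: float) -> bool:
    # opposite signs sum to 0 via copysign(1, x); a flat (0)
    # starting size is excluded since there is nothing to cross.
    return (
        copysign(1, last_size) + copysign(1, new_size) == 0
        and last_size != 0
    )

assert crossed_zero(10, -5)     # long flipped to short
assert not crossed_zero(10, 4)  # still long, just smaller
assert not crossed_zero(0, 3)   # opening from flat
```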
def iter_by_dt(
    records: (
        dict[str, dict[str, Any]]
        | ValuesView[dict]  # eg. `Position._events.values()`
        | list[dict]
        | list[Transaction]  # XXX preferred!
    ),

    # NOTE: parsers are looked up in the insert order
    # so if you know that the record stats show some field
    # is more common than others, stick it at the top B)
    parsers: dict[str, Callable | None] = {
        'dt': parse,  # parity case
        'datetime': parse,  # datetime-str
        'time': from_timestamp,  # float epoch
    },
    key: Callable | None = None,

) -> Iterator[tuple[str, dict]]:
    '''
    Iterate entries of a transaction table sorted by entry recorded
    datetime presumably set at the ``'dt'`` field in each entry.

    '''
    if isinstance(records, dict):
        records: list[tuple[str, dict]] = list(records.items())

    def dyn_parse_to_dt(
        tx: tuple[str, dict[str, Any]] | Transaction,

        debug: bool = False,
        _invalid: list|None = None,
    ) -> DateTime:

        # handle `.items()` inputs
        if isinstance(tx, tuple):
            tx = tx[1]

        # dict or tx object?
        isdict: bool = isinstance(tx, dict)

        # get best parser for this record..
        for k in parsers:
            if (
                (v := getattr(tx, k, None))
                or
                (
                    isdict
                    and
                    (v := tx.get(k))
                )
            ):
                # only call parser on the value if not None from
                # the `parsers` table above (when NOT using
                # `.get()`), otherwise pass through the value and
                # sort on it directly
                if (
                    not isinstance(v, DateTime)
                    and
                    (parser := parsers.get(k))
                ):
                    ret = parser(v)
                else:
                    ret = v

                return ret

            else:
                log.debug(
                    f'Parser-field not found in txn\n'
                    f'\n'
                    f'parser-field: {k!r}\n'
                    f'txn: {tx!r}\n'
                    f'\n'
                    f'Trying next..\n'
                )
                continue

        # XXX: we should never really get here bc it means some kinda
        # bad txn-record (field) data..
        #
        # -> set `debug_mode = True` if you want to trace such
        #    cases from the REPL ;)
        else:
            # XXX: we should really never get here..
            # only if a ledger record has no expected sort(able)
            # field will we likely hit this.. like with ze IB.
            # if no sortable field just deliver epoch?
            log.warning(
                'No (time) sortable field for TXN:\n'
                f'{tx!r}\n'
            )
            report: str = (
                f'No supported time-field found in txn !?\n'
                f'\n'
                f'supported-time-fields: {parsers!r}\n'
                f'\n'
                f'txn: {tx!r}\n'
            )
            if debug:
                with maybe_open_crash_handler(
                    pdb=debug,
                    raise_on_exit=False,
                ):
                    raise ValueError(report)
            else:
                log.error(report)

            if _invalid is not None:
                _invalid.append(tx)
            return from_timestamp(0.)

    entry: tuple[str, dict]|Transaction
    invalid: list = []
    for entry in sorted(
        records,
        key=key or partial(
            dyn_parse_to_dt,
            _invalid=invalid,
        ),
    ):
        if entry in invalid:
            log.warning(
                f'Ignoring txn w invalid timestamp ??\n'
                f'{pformat(entry)}\n'
            )
            continue

        # NOTE the type sig above; either pairs or txns B)
        yield entry

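A hedged usage sketch of `iter_by_dt` over plain dict records (values hypothetical); note the mixed time fields, one handled by the module's `parse()` and one by `from_timestamp()` per the `parsers` table:

```python
records: list[dict] = [
    {'tid': 'b', 'time': 1690000000.0, 'size': -1.0, 'price': 101.0},
    {'tid': 'a', 'dt': '2023-07-21T00:00:00Z', 'size': 1.0, 'price': 100.0},
]
# yields the records oldest-first: 'a' then 'b'
for entry in iter_by_dt(records):
    print(entry['tid'])
```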
# TODO: probably just move this into the test suite or
# keep it here for use as such?
# def ensure_state(self) -> None:
#     '''
#     Audit the `.cumsize` and `.ppu` local instance vars against
#     the clears table calculations and return the calc-ed values if
#     they differ and log warnings to console.
#
#     '''
#     # clears: list[dict] = self._clears
#
#     # self.first_clear_dt = min(clears, key=lambda e: e['dt'])['dt']
#     last_clear: dict = clears[-1]
#     csize: float = self.calc_size()
#     accum: float = last_clear['accum_size']
#
#     if not self.expired():
#         if (
#             csize != accum
#             and csize != round(accum * (self.split_ratio or 1))
#         ):
#             raise ValueError(f'Size mismatch: {csize}')
#     else:
#         assert csize == 0, 'Contract is expired but non-zero size?'
#
#     if self.cumsize != csize:
#         log.warning(
#             'Position state mismatch:\n'
#             f'{self.cumsize} => {csize}'
#         )
#         self.cumsize = csize
#
#     cppu: float = self.calc_ppu()
#     ppu: float = last_clear['ppu']
#     if (
#         cppu != ppu
#         and self.split_ratio is not None
#
#         # handle any split info entered (for now) manually by user
#         and cppu != (ppu / self.split_ratio)
#     ):
#         raise ValueError(f'PPU mismatch: {cppu}')
#
#     if self.ppu != cppu:
#         log.warning(
#             'Position state mismatch:\n'
#             f'{self.ppu} => {cppu}'
#         )
#         self.ppu = cppu

@cm
def open_ledger_dfs(
    brokername: str,
    acctname: str,

    ledger: TransactionLedger | None = None,
    debug_mode: bool = False,

    **kwargs,

) -> tuple[
    dict[str, pl.DataFrame],
    TransactionLedger,
]:
    '''
    Open a ledger of trade records (presumably from some broker
    backend), normalize the records into `Transactions` via the
    backend's declared endpoint, cast to a `polars.DataFrame` which
    can update the ledger on exit.

    '''
    with maybe_open_crash_handler(
        pdb=debug_mode,
        # raise_on_exit=False,
    ):
        if not ledger:
            import time
            from ._ledger import open_trade_ledger

            now = time.time()

            with open_trade_ledger(
                brokername,
                acctname,
                rewrite=True,
                allow_from_sync_code=True,

                # proxied through from caller
                **kwargs,

            ) as ledger:
                if not ledger:
                    raise ValueError(f'No ledger for {acctname}@{brokername} exists?')

                print(f'LEDGER LOAD TIME: {time.time() - now}')

        yield ledger_to_dfs(ledger), ledger

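A hedged usage sketch (broker and account names are hypothetical); the underlying `open_trade_ledger()` re-writes the ledger file on context exit:

```python
with open_ledger_dfs(
    'binance',  # hypothetical broker backend name
    'usdtm',    # hypothetical account name
) as (dfs, ledger):
    for bs_mktid, df in dfs.items():
        # each frame carries the computed ppu/bep/pnl columns
        print(bs_mktid, df.select(['fqme', 'cumsize', 'pos_ppu']).tail(1))
```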
def ledger_to_dfs(
    ledger: TransactionLedger,

) -> dict[str, pl.DataFrame]:

    txns: dict[str, Transaction] = ledger.to_txns()

    # ldf = pl.DataFrame(
    #     list(txn.to_dict() for txn in txns.values()),
    ldf = pl.from_dicts(
        list(txn.to_dict() for txn in txns.values()),

        # only for ordering the cols
        schema=[
            ('fqme', str),
            ('tid', str),
            ('bs_mktid', str),
            ('expiry', str),
            ('etype', str),
            ('dt', str),
            ('size', pl.Float64),
            ('price', pl.Float64),
            ('cost', pl.Float64),
        ],
    ).sort(  # chronological order
        'dt'
    ).with_columns([
        pl.col('dt').str.to_datetime(),
        # pl.col('expiry').str.to_datetime(),
        # pl.col('expiry').dt.date(),
    ])

    # filter out to the columns matching values filter passed
    # as input.
    # if filter_by_ids:
    #     for col, vals in filter_by_ids.items():
    #         str_vals = set(map(str, vals))
    #         pred: pl.Expr = pl.col(col).eq(str_vals.pop())
    #         for val in str_vals:
    #             pred |= pl.col(col).eq(val)
    #
    #     fdf = df.filter(pred)

    # TODO: originally i had tried just using a plain ol' groupby
    # + agg here but the issue was re-inserting to the src frame.
    # however, learning more about `polars` seems like maybe we can
    # use `.over()`?
    # https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.Expr.over.html#polars.Expr.over
    # => CURRENTLY we break up into a frame per mkt / fqme
    dfs: dict[str, pl.DataFrame] = ldf.partition_by(
        'bs_mktid',
        as_dict=True,
    )

    # TODO: not sure if this is even possible but..
    # - it'd be more ideal to use `ppt = df.groupby('fqme').agg([`
    # - ppu and bep calcs!
    for key in dfs:

        # convert to lazy form (since apparently we might need it
        # eventually ...)
        df: pl.DataFrame = dfs[key]

        ldf: pl.LazyFrame = df.lazy()

        df = dfs[key] = ldf.with_columns([

            pl.cum_sum('size').alias('cumsize'),

            # amount of source asset "sent" (via buy txns in
            # the market) to acquire the dst asset, PER txn.
            # when this value is -ve (i.e. a sell operation) then
            # the amount sent is actually "returned".
            (
                (pl.col('price') * pl.col('size'))
                +
                (pl.col('cost'))  # * pl.col('size').sign())
            ).alias('dst_bot'),

        ]).with_columns([

            # rolling balance in src asset units
            (pl.col('dst_bot').cum_sum() * -1).alias('src_balance'),

            # "position operation type" in terms of increasing the
            # amount in the dst asset (entering) or decreasing the
            # amount in the dst asset (exiting).
            pl.when(
                pl.col('size').sign() == pl.col('cumsize').sign()

            ).then(
                pl.lit('enter')  # see above, but is just price * size per txn

            ).otherwise(
                pl.when(pl.col('cumsize') == 0)
                .then(pl.lit('exit_to_zero'))
                .otherwise(pl.lit('exit'))
            ).alias('descr'),

            (pl.col('cumsize').sign() == pl.col('size').sign())
            .alias('is_enter'),

        ]).with_columns([

            # pl.lit(0, dtype=pl.Utf8).alias('virt_cost'),
            pl.lit(0, dtype=pl.Float64).alias('applied_cost'),
            pl.lit(0, dtype=pl.Float64).alias('pos_ppu'),
            pl.lit(0, dtype=pl.Float64).alias('per_txn_pnl'),
            pl.lit(0, dtype=pl.Float64).alias('cum_pos_pnl'),
            pl.lit(0, dtype=pl.Float64).alias('pos_bep'),
            pl.lit(0, dtype=pl.Float64).alias('cum_ledger_pnl'),
            pl.lit(None, dtype=pl.Float64).alias('ledger_bep'),

            # TODO: instead of the iterative loop below i guess we
            # could try using embedded lists to track which txns
            # are part of which ppu / bep calcs? Not sure this will
            # look any better nor be any more performant though xD
            # pl.lit([[0]], dtype=pl.List(pl.Float64)).alias('list'),

        # choose fields to emit for accounting purposes
        ]).select([
            pl.exclude([
                'tid',
                # 'dt',
                'expiry',
                'bs_mktid',
                'etype',
                # 'is_enter',
            ]),
        ]).collect()

        # compute recurrence relations for ppu and bep
        last_ppu: float = 0
        last_cumsize: float = 0
        last_ledger_pnl: float = 0
        last_pos_pnl: float = 0
        virt_costs: list[float, float] = [0., 0.]

        # imperatively compute the PPU (price per unit) and BEP
        # (break even price) iteratively over the ledger, oriented
        # around each position state: a state of split balances in
        # > 1 asset.
        for i, row in enumerate(df.iter_rows(named=True)):

            cumsize: float = row['cumsize']
            is_enter: bool = row['is_enter']
            price: float = row['price']
            size: float = row['size']

            # the profit is ALWAYS decreased, aka made a "loss"
            # by the constant fee charged by the txn provider!
            # see below in final PnL calculation and row element
            # set.
            txn_cost: float = row['cost']
            pnl: float = 0

            # ALWAYS reset per-position cum PnL
            if last_cumsize == 0:
                last_pos_pnl: float = 0

            # a "position size INCREASING" or ENTER transaction
            # which "makes larger", in src asset unit terms, the
            # trade's side-size of the destination asset:
            # - "buying" (more) units of the dst asset
            # - "selling" (more short) units of the dst asset
            if is_enter:

                # Naively include transaction cost in breakeven
                # price and presume the worst case of the
                # exact-same-cost-to-exit this transaction's worth
                # of size even though in reality it will be dynamic
                # based on exit strategy, price, liquidity, etc..
                virt_cost: float = txn_cost

                # cpu: float = cost / size
                # cummean of the cost-per-unit used for modelling
                # a projected future exit cost which we immediately
                # include in the costs incorporated to BEP on enters
                last_cum_costs_size, last_cpu = virt_costs
                cum_costs_size: float = last_cum_costs_size + abs(size)
                cumcpu = (
                    (last_cpu * last_cum_costs_size)
                    +
                    txn_cost
                ) / cum_costs_size
                virt_costs = [cum_costs_size, cumcpu]

                txn_cost = txn_cost + virt_cost
                # df[i, 'virt_cost'] = f'{-virt_cost} FROM {cumcpu}@{cum_costs_size}'

                # a cumulative mean of the price-per-unit acquired
                # in the destination asset:
                # https://en.wikipedia.org/wiki/Moving_average#Cumulative_average
                # You could also think of this measure more
                # generally as an exponential mean with `alpha
                # = 1/N` where `N` is the current number of txns
                # included in the "position" defining set:
                # https://en.wikipedia.org/wiki/Exponential_smoothing
                ppu: float = (
                    (
                        (last_ppu * last_cumsize)
                        +
                        (price * size)
                    ) /
                    cumsize
                )
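                # Worked numbers for the cumulative mean above
                # (sketch only): entering 10 units @ 100 then 10
                # more @ 110 gives
                #   ppu = ((100 * 10) + (110 * 10)) / 20 = 105
                # i.e. (last_ppu*last_cumsize + price*size) / cumsize
                # with last_ppu=100, last_cumsize=10, price=110,
                # size=10 and cumsize=20.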
# a "position size DECREASING" or EXIT transaction
|
|
||||||
# which "makes smaller" the trade's side-size of the
|
|
||||||
# destination asset:
|
|
||||||
# - selling previously bought units of the dst asset
|
|
||||||
# (aka 'closing' a long position).
|
|
||||||
# - buying previously borrowed and sold (short) units
|
|
||||||
# of the dst asset (aka 'covering'/'closing' a short
|
|
||||||
# position).
|
|
||||||
else:
|
|
||||||
# only changes on position size increasing txns
|
|
||||||
ppu: float = last_ppu
|
|
||||||
|
|
||||||
# UNWIND IMPLIED COSTS FROM ENTRIES
|
|
||||||
# => Reverse the virtual/modelled (2x predicted) txn
|
|
||||||
# cost that was included in the least-recently
|
|
||||||
# entered txn that is still part of the current CSi
|
|
||||||
# set.
|
|
||||||
# => we look up the cost-per-unit cum_sum and apply
|
|
||||||
# if over the current txn size (by multiplication)
|
|
||||||
# and then reverse that previusly applied cost on
|
|
||||||
# the txn_cost for this record.
|
|
||||||
#
|
|
||||||
# NOTE: current "model" is just to previously assumed 2x
|
|
||||||
# the txn cost for a matching enter-txn's
|
|
||||||
# cost-per-unit; we then immediately reverse this
|
|
||||||
# prediction and apply the real cost received here.
|
|
||||||
last_cum_costs_size, last_cpu = virt_costs
|
|
||||||
prev_virt_cost: float = last_cpu * abs(size)
|
|
||||||
txn_cost: float = txn_cost - prev_virt_cost # +ve thus a "reversal"
|
|
||||||
cum_costs_size: float = last_cum_costs_size - abs(size)
|
|
||||||
virt_costs = [cum_costs_size, last_cpu]
|
|
||||||
|
|
||||||
# df[i, 'virt_cost'] = (
|
|
||||||
# f'{-prev_virt_cost} FROM {last_cpu}@{cum_costs_size}'
|
|
||||||
# )
|
|
||||||
|
|
||||||
# the per-txn profit or loss (PnL) given we are
|
|
||||||
# (partially) "closing"/"exiting" the position via
|
|
||||||
# this txn.
|
|
||||||
pnl: float = (last_ppu - price) * size
|
|
||||||
|
|
||||||
# always subtract txn cost from total txn pnl
|
|
||||||
txn_pnl: float = pnl - txn_cost
|
|
||||||
|
|
||||||
# cumulative PnLs per txn
|
|
||||||
last_ledger_pnl = (
|
|
||||||
last_ledger_pnl + txn_pnl
|
|
||||||
)
|
|
||||||
last_pos_pnl = df[i, 'cum_pos_pnl'] = (
|
|
||||||
last_pos_pnl + txn_pnl
|
|
||||||
)
|
|
||||||
|
|
||||||
if cumsize == 0:
|
|
||||||
last_ppu = ppu = 0
|
|
||||||
|
|
||||||
# compute the BEP: "break even price", a value that
|
|
||||||
# determines at what price the remaining cumsize can be
|
|
||||||
# liquidated such that the net-PnL on the current
|
|
||||||
# position will result in ZERO gain or loss from open
|
|
||||||
# to close including all txn costs B)
|
|
||||||
if (
|
|
||||||
abs(cumsize) > 0 # non-exit-to-zero position txn
|
|
||||||
):
|
|
||||||
cumsize_sign: float = copysign(1, cumsize)
|
|
||||||
ledger_bep: float = (
|
|
||||||
(
|
|
||||||
(ppu * cumsize)
|
|
||||||
-
|
|
||||||
(last_ledger_pnl * cumsize_sign)
|
|
||||||
) / cumsize
|
|
||||||
)
|
|
||||||
|
|
||||||
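                # Sketch check of the expression above: a long with
                # ppu=105, cumsize=20 and last_ledger_pnl=-30 (fees
                # booked so far) gives
                #   ledger_bep = (105*20 - (-30)*1) / 20 = 106.5
                # and exiting all 20 units at 106.5 earns
                # (106.5 - 105) * 20 = 30, offsetting the -30
                # (future exit costs are pre-modelled via the
                # virtual-cost scheme above).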
# NOTE: when we "enter more" dst asset units (aka
|
|
||||||
# increase position state) AFTER having exited some
|
|
||||||
# units (aka decreasing the pos size some) the bep
|
|
||||||
# needs to be RECOMPUTED based on new ppu such that
|
|
||||||
# liquidation of the cumsize at the bep price
|
|
||||||
# results in a zero-pnl for the existing position
|
|
||||||
# (since the last one).
|
|
||||||
# for position lifetime BEP we never can have
|
|
||||||
# a valid value once the position is "closed"
|
|
||||||
# / full exitted Bo
|
|
||||||
pos_bep: float = (
|
|
||||||
(
|
|
||||||
(ppu * cumsize)
|
|
||||||
-
|
|
||||||
(last_pos_pnl * cumsize_sign)
|
|
||||||
) / cumsize
|
|
||||||
)
|
|
||||||
|
|
||||||
# inject DF row with all values
|
|
||||||
df[i, 'pos_ppu'] = ppu
|
|
||||||
df[i, 'per_txn_pnl'] = txn_pnl
|
|
||||||
df[i, 'applied_cost'] = -txn_cost
|
|
||||||
df[i, 'cum_pos_pnl'] = last_pos_pnl
|
|
||||||
df[i, 'pos_bep'] = pos_bep
|
|
||||||
df[i, 'cum_ledger_pnl'] = last_ledger_pnl
|
|
||||||
df[i, 'ledger_bep'] = ledger_bep
|
|
||||||
|
|
||||||
# keep backrefs to suffice reccurence relation
|
|
||||||
last_ppu: float = ppu
|
|
||||||
last_cumsize: float = cumsize
|
|
||||||
|
|
||||||
# TODO?: pass back the current `Position` object loaded from
|
|
||||||
# the account as well? Would provide incentive to do all
|
|
||||||
# this ledger loading inside a new async open_account().
|
|
||||||
# bs_mktid: str = df[0]['bs_mktid']
|
|
||||||
# pos: Position = acnt.pps[bs_mktid]
|
|
||||||
|
|
||||||
return dfs
|
|
||||||
|
|
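The frame handling above leans on two `polars` idioms, `partition_by(..., as_dict=True)` and `when/then/otherwise` column flagging; a self-contained toy run (column values hypothetical; depending on the polars version the dict keys may be tuples of the partition values):

```python
import polars as pl

df = pl.DataFrame({
    'bs_mktid': ['btcusdt', 'btcusdt', 'ethusdt'],
    'size': [1.0, -1.0, 2.0],
})
# one sub-frame per market id, keyed by the partition value
dfs: dict = df.partition_by('bs_mktid', as_dict=True)

flagged = df.with_columns(
    pl.cum_sum('size').alias('cumsize'),
).with_columns(
    # same enter/exit flagging scheme as above
    pl.when(pl.col('size').sign() == pl.col('cumsize').sign())
    .then(pl.lit('enter'))
    .otherwise(pl.lit('exit'))
    .alias('descr'),
)
print(flagged)
```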
@@ -19,7 +19,6 @@ CLI front end for trades ledger and position tracking management.
 
 '''
 from __future__ import annotations
-from pprint import pformat
 
 
 from rich.console import Console

@@ -38,11 +37,13 @@ from ..calc import humanize
 from ..brokers._daemon import broker_init
 from ._ledger import (
     load_ledger,
-    TransactionLedger,
     # open_trade_ledger,
+    # TransactionLedger,
 )
-from .calc import (
-    open_ledger_dfs,
+from ._pos import (
+    PpTable,
+    load_pps_from_ledger,
+    # load_account,
 )

@@ -239,74 +240,54 @@ def sync(
 def disect(
     # "fully_qualified_account_name"
     fqan: str,
-    fqme: str,  # for ib
+    bs_mktid: str,  # for ib
 
-    # TODO: in tractor we should really have
-    # a debug_mode ctx for wrapping any kind of code no?
     pdb: bool = False,
-    bs_mktid: str = typer.Option(
-        None,
-        "-bid",
-    ),
     loglevel: str = typer.Option(
         'error',
         "-l",
     ),
 ):
-    from piker.log import get_console_log
-    from piker.toolz import open_crash_handler
-    get_console_log(loglevel)
-
     pair: tuple[str, str]
     if not (pair := unpack_fqan(fqan)):
         raise ValueError(f'{fqan} malformed!?')
 
     brokername, account = pair
 
-    # ledger dfs groupby-partitioned by fqme
-    dfs: dict[str, pl.DataFrame]
-    # actual ledger instance
-    ldgr: TransactionLedger
-
-    pl.Config.set_tbl_cols(-1)
-    pl.Config.set_tbl_rows(-1)
-    with (
-        open_crash_handler(),
-        open_ledger_dfs(
-            brokername,
-            account,
-        ) as (dfs, ldgr),
-    ):
-
-        # look up specific frame for fqme-selected asset
-        if (df := dfs.get(fqme)) is None:
-            mktids2fqmes: dict[str, list[str]] = {}
-            for bs_mktid in dfs:
-                df: pl.DataFrame = dfs[bs_mktid]
-                fqmes: pl.Series[str] = df['fqme']
-                uniques: list[str] = fqmes.unique()
-                mktids2fqmes[bs_mktid] = set(uniques)
-                if fqme in uniques:
-                    break
-            print(
-                f'No specific ledger for fqme={fqme} could be found in\n'
-                f'{pformat(mktids2fqmes)}?\n'
-                f'Maybe the `{brokername}` backend uses something '
-                'else for its `bs_mktid` than the `fqme`?\n'
-                'Scanning for matches in unique fqmes per frame..\n'
-            )
-
-        # :pray:
-        assert not df.is_empty()
-
-        # muck around in pdbp REPL
-        # tractor.devx.mk_pdb().set_trace()
-        # breakpoint()
-
-        # TODO: we REALLY need a better console REPL for this
-        # kinda thing..
-        # - `xonsh` is an obvious option (and it looks amazing) but
-        #   we need to figure out how to embed it better than just:
-        #     from xonsh.main import main
-        #     main(argv=[])
-        #   which will not actually inject the `df` to globals?
+    # ledger: TransactionLedger
+    records: dict[str, dict]
+    table: PpTable
+    records, table = load_pps_from_ledger(
+        brokername,
+        account,
+        filter_by_ids={bs_mktid},
+    )
+    df = pl.DataFrame(
+        list(records.values()),
+        # schema=[
+        #     ('tid', str),
+        #     ('fqme', str),
+        #     ('dt', str),
+        #     ('size', pl.Float64),
+        #     ('price', pl.Float64),
+        #     ('cost', pl.Float64),
+        #     ('expiry', str),
+        #     ('bs_mktid', str),
+        # ],
+    ).select([
+        pl.col('fqme'),
+        pl.col('dt').str.to_datetime(),
+        # pl.col('expiry').dt.datetime(),
+        pl.col('size'),
+        pl.col('price'),
+    ])
 
+    assert not df.is_empty()
+    breakpoint()
+    # tractor.pause_from_sync()
+    # with open_trade_ledger(
+    #     brokername,
+    #     account,
+    # ) as ledger:
+    #     for tid, rec in ledger.items():
+    #         bs_mktid: str = rec['bs_mktid']
@@ -50,7 +50,7 @@ __brokers__: list[str] = [
     'binance',
     'ib',
     'kraken',
-    'kucoin',
+    'kucoin'
 
     # broken but used to work
     # 'questrade',

@@ -71,7 +71,7 @@ def get_brokermod(brokername: str) -> ModuleType:
     Return the imported broker module by name.
 
     '''
-    module: ModuleType = import_module('.' + brokername, 'piker.brokers')
+    module = import_module('.' + brokername, 'piker.brokers')
     # we only allow monkeying because it's for internal keying
     module.name = module.__name__.split('.')[-1]
     return module

@@ -98,15 +98,14 @@ async def open_cached_client(
     If one has not been setup do it and cache it.
 
     '''
-    brokermod: ModuleType = get_brokermod(brokername)
+    brokermod = get_brokermod(brokername)
 
-    # TODO: make abstract or `typing.Protocol`
-    # client: Client
     async with maybe_open_context(
         acm_func=brokermod.get_client,
         kwargs=kwargs,
 
     ) as (cache_hit, client):
 
         if cache_hit:
-            log.runtime(f'Reusing existing {client}')
+            log.info(f'Reusing existing {client}')
 
         yield client

@@ -96,10 +96,7 @@ async def _setup_persistent_brokerd(
     # - `open_symbol_search()`
     # NOTE: see ep invocation details inside `.data.feed`.
     try:
-        async with (
-            tractor.trionics.collapse_eg(),
-            trio.open_nursery() as service_nursery
-        ):
+        async with trio.open_nursery() as service_nursery:
             bus: _FeedsBus = feed.get_feed_bus(
                 brokername,
                 service_nursery,

@@ -182,6 +179,9 @@ def broker_init(
             subpath: str = f'{modpath}.{submodname}'
             enabled.append(subpath)
 
+    # TODO XXX: DO WE NEED THIS?
+    # enabled.append('piker.data.feed')
+
     return (
         brokermod,
         start_actor_kwargs,  # to `ActorNursery.start_actor()`
@@ -18,11 +18,10 @@
 Handy cross-broker utils.
 
 """
-from __future__ import annotations
 from functools import partial
 
 import json
-import httpx
+import asks
 import logging
 
 from ..log import (

@@ -51,7 +50,6 @@ class SymbolNotFound(BrokerError):
     "Symbol not found by broker search"
 
 
-# TODO: these should probably be moved to `.tsp/.data`?
 class NoData(BrokerError):
     '''
     Symbol data not permitted or no data

@@ -61,15 +59,14 @@ class NoData(BrokerError):
     def __init__(
         self,
         *args,
-        info: dict|None = None,
+        frame_size: int = 1000,
 
     ) -> None:
         super().__init__(*args)
-        self.info: dict|None = info
 
         # when raised, machinery can check if the backend
         # set a "frame size" for doing datetime calcs.
-        # self.frame_size: int = 1000
+        self.frame_size: int = 1000
 
 
 class DataUnavailable(BrokerError):

@@ -91,18 +88,16 @@ class DataThrottle(BrokerError):
 
 
 def resproc(
-    resp: httpx.Response,
+    resp: asks.response_objects.Response,
     log: logging.Logger,
     return_json: bool = True,
     log_resp: bool = False,
 
-) -> httpx.Response:
-    '''
-    Process response and return its json content.
+) -> asks.response_objects.Response:
+    """Process response and return its json content.
 
     Raise the appropriate error on non-200 OK responses.
-
-    '''
+    """
     if not resp.status_code == 200:
         raise BrokerError(resp.body)
     try:
@@ -32,19 +32,12 @@ from .feed import (
 )
 from .broker import (
     open_trade_dialog,
-    get_cost,
-)
-from .venues import (
-    SpotPair,
-    FutesPair,
 )
 
 
 __all__ = [
     'get_client',
     'get_mkt_info',
-    'get_cost',
-    'SpotPair',
-    'FutesPair',
     'open_trade_dialog',
     'open_history_client',
     'open_symbol_search',
@@ -1,8 +1,8 @@
 # piker: trading gear for hackers
 # Copyright (C)
 #   Guillermo Rodriguez (aka ze jefe)
 #   Tyler Goodlet
 #   (in stewardship for pikers)
 
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as published by

@@ -25,7 +25,6 @@ from __future__ import annotations
 from collections import ChainMap
 from contextlib import (
     asynccontextmanager as acm,
-    AsyncExitStack,
 )
 from datetime import datetime
 from pprint import pformat

@@ -42,7 +41,8 @@ import trio
 from pendulum import (
     now,
 )
-import httpx
+import asks
+from fuzzywuzzy import process as fuzzy
 import numpy as np
 
 from piker import config

@@ -52,13 +52,9 @@ from piker.clearing._messages import (
 from piker.accounting import (
     Asset,
     digits_to_dec,
-    MktPair,
-)
-from piker.types import Struct
-from piker.data import (
-    def_iohlcv_fields,
-    match_from_pairs,
 )
+from piker.data.types import Struct
+from piker.data import def_iohlcv_fields
 from piker.brokers import (
     resproc,
     SymbolNotFound,

@@ -68,6 +64,7 @@ from .venues import (
     PAIRTYPES,
     Pair,
     MarketType,
 
     _spot_url,
     _futes_url,
     _testnet_futes_url,

@@ -77,18 +74,16 @@ from .venues import (
 log = get_logger('piker.brokers.binance')
 
 
-def get_config() -> dict[str, Any]:
+def get_config() -> dict:
 
     conf: dict
     path: Path
-    conf, path = config.load(
-        conf_name='brokers',
-        touch_if_dne=True,
-    )
-    section: dict = conf.get('binance')
+    conf, path = config.load()
+    section = conf.get('binance')
     if not section:
-        log.warning(
-            f'No config section found for binance in {path}'
-        )
+        log.warning(f'No config section found for binance in {path}')
         return {}
 
     return section
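For reference, `get_config()` reads the `[binance]` section from the piker config; a minimal sketch of such a section (key values are placeholders, mirroring the `init_api_keys()` docstring shown further below):

```toml
[binance]
spot.use_testnet = true
spot.api_key = '<spot_api_key_from_binance_account>'
spot.api_secret = '<spot_api_key_password>'
```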
@@ -144,7 +139,7 @@ def binance_timestamp(
 
 class Client:
     '''
-    Async ReST API client using `trio` + `httpx` B)
+    Async ReST API client using ``trio`` + ``asks`` B)
 
     Supports all of the spot, margin and futures endpoints depending
     on method.

@@ -153,17 +148,10 @@ class Client:
     def __init__(
         self,
 
-        venue_sessions: dict[
-            str,  # venue key
-            tuple[httpx.AsyncClient, str]  # session, eps path
-        ],
-        conf: dict[str, Any],
-
         # TODO: change this to `Client.[mkt_]venue: MarketType`?
         mkt_mode: MarketType = 'spot',
 
     ) -> None:
-        self.conf = conf
-
         # build out pair info tables for each market type
         # and wrap in a chain-map view for search / query.
         self._spot_pairs: dict[str, Pair] = {}  # spot info table

@@ -190,13 +178,44 @@ class Client:
         # market symbols for use by search. See `.exch_info()`.
         self._pairs: ChainMap[str, Pair] = ChainMap()
 
+        # spot EPs sesh
+        self._sesh = asks.Session(connections=4)
+        self._sesh.base_location: str = _spot_url
+        # spot testnet
+        self._test_sesh: asks.Session = asks.Session(connections=4)
+        self._test_sesh.base_location: str = _testnet_spot_url
+
+        # margin and extended spot endpoints session.
+        self._sapi_sesh = asks.Session(connections=4)
+        self._sapi_sesh.base_location: str = _spot_url
+
+        # futes EPs sesh
+        self._fapi_sesh = asks.Session(connections=4)
+        self._fapi_sesh.base_location: str = _futes_url
+        # futes testnet
+        self._test_fapi_sesh: asks.Session = asks.Session(connections=4)
+        self._test_fapi_sesh.base_location: str = _testnet_futes_url
+
         # global client "venue selection" mode.
         # set this when you want to switch venues and not have to
         # specify the venue for the next request.
         self.mkt_mode: MarketType = mkt_mode
 
-        # per-mkt-venue API client table
-        self.venue_sesh = venue_sessions
+        # per 8
+        self.venue_sesh: dict[
+            str,  # venue key
+            tuple[asks.Session, str]  # session, eps path
+        ] = {
+            'spot': (self._sesh, '/api/v3/'),
+            'spot_testnet': (self._test_sesh, '/fapi/v1/'),
+
+            'margin': (self._sapi_sesh, '/sapi/v1/'),
+
+            'usdtm_futes': (self._fapi_sesh, '/fapi/v1/'),
+            'usdtm_futes_testnet': (self._test_fapi_sesh, '/fapi/v1/'),
+
+            # 'futes_coin': self._dapi,  # TODO
+        }
 
         # lookup for going from `.mkt_mode: str` to the config
         # subsection `key: str`

@@ -211,6 +230,40 @@ class Client:
             'futes': ['usdtm_futes'],
         }
 
+        # for creating API keys see,
+        # https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
+        self.conf: dict = get_config()
+
+        for key, subconf in self.conf.items():
+            if api_key := subconf.get('api_key', ''):
+                venue_keys: list[str] = self.confkey2venuekeys[key]
+
+                venue_key: str
+                sesh: asks.Session
+                for venue_key in venue_keys:
+                    sesh, _ = self.venue_sesh[venue_key]
+
+                    api_key_header: dict = {
+                        # taken from official:
+                        # https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
+                        "Content-Type": "application/json;charset=utf-8",
+
+                        # TODO: prolly should just always query and copy
+                        # in the real latest ver?
+                        "User-Agent": "binance-connector/6.1.6smbz6",
+                        "X-MBX-APIKEY": api_key,
+                    }
+                    sesh.headers.update(api_key_header)
+
+                    # if `.use_testnet = true` in the config then
+                    # also add headers for the testnet session which
+                    # will be used for all order control
+                    if subconf.get('use_testnet', False):
+                        testnet_sesh, _ = self.venue_sesh[
+                            venue_key + '_testnet'
+                        ]
+                        testnet_sesh.headers.update(api_key_header)
+
     def _mk_sig(
         self,
         data: dict,

@@ -229,6 +282,7 @@ class Client:
                 'to define the creds for auth-ed endpoints!?'
             )
 
+
         # XXX: Info on security and authentication
         # https://binance-docs.github.io/apidocs/#endpoint-security-type
         if not (api_secret := subconf.get('api_secret')):
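The `_mk_sig()` body is elided by this hunk; the standard binance signing scheme (per the security docs linked above) is an HMAC-SHA256 over the urlencoded request params. A standalone sketch under that assumption, not the repo's actual implementation:

```python
import hmac
import hashlib
from urllib.parse import urlencode

def mk_sig_sketch(data: dict, api_secret: str) -> str:
    # HMAC-SHA256 over the urlencoded query string, hex digest,
    # as described in the binance "endpoint security" docs.
    return hmac.new(
        api_secret.encode('ascii'),
        urlencode(data).encode('ascii'),
        hashlib.sha256,
    ).hexdigest()
```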
@@ -257,7 +311,7 @@ class Client:
         params: dict,
 
         method: str = 'get',
-        venue: str|None = None,  # if None use `.mkt_mode` state
+        venue: str | None = None,  # if None use `.mkt_mode` state
         signed: bool = False,
         allow_testnet: bool = False,

@@ -268,9 +322,8 @@ class Client:
         - /fapi/v3/ USD-M FUTURES, or
         - /api/v3/ SPOT/MARGIN
 
-        account/market endpoint request depending on either passed in
-        `venue: str` or the current setting `.mkt_mode: str` setting,
-        default `'spot'`.
+        account/market endpoint request depending on either passed in `venue: str`
+        or the current setting `.mkt_mode: str` setting, default `'spot'`.
 
 
         Docs per venue API:

@@ -299,6 +352,9 @@ class Client:
                 venue=venue_key,
             )
 
+        sesh: asks.Session
+        path: str
+
         # Check if we're configured to route order requests to the
         # venue equivalent's testnet.
         use_testnet: bool = False

@@ -323,12 +379,11 @@ class Client:
                 # ctl machinery B)
                 venue_key += '_testnet'
 
-        client: httpx.AsyncClient
-        path: str
-        client, path = self.venue_sesh[venue_key]
-        meth: Callable = getattr(client, method)
+        sesh, path = self.venue_sesh[venue_key]
+        meth: Callable = getattr(sesh, method)
         resp = await meth(
-            url=path + endpoint,
+            path=path + endpoint,
            params=params,
            timeout=float('inf'),
         )

@@ -341,6 +396,7 @@ class Client:
     ) -> None:
         # lookup internal mkt-specific pair table to update
         pair_table: dict[str, Pair] = self._venue2pairs[venue]
+        asset_table: dict[str, Asset] = self._venue2assets[venue]
 
         # make API request(s)
         resp = await self._api(

@@ -352,7 +408,6 @@ class Client:
             venue=venue,
             allow_testnet=False,  # XXX: never use testnet for symbol lookups
         )
 
         mkt_pairs = resp['symbols']
         if not mkt_pairs:
             raise SymbolNotFound(f'No market pairs found!?:\n{resp}')

@@ -370,65 +425,28 @@ class Client:
             item['filters'] = filters
 
             pair_type: Type = PAIRTYPES[venue]
-            try:
-                pair: Pair = pair_type(**item)
-            except Exception as e:
-                e.add_note(
-                    f'\n'
-                    f'New or removed field we need to codify!\n'
-                    f'pair-type: {pair_type!r}\n'
-                    f'\n'
-                    f"Don't panic, prolly stupid binance changed their symbology schema again..\n"
-                    f'Check out their API docs here:\n'
-                    f'\n'
-                    f'https://binance-docs.github.io/apidocs/spot/en/#exchange-information\n'
-                )
-                raise
+            pair: Pair = pair_type(**item)
             pair_table[pair.symbol.upper()] = pair
 
             # update an additional top-level-cross-venue-table
             # `._pairs: ChainMap` for search B0
             pairs_view_subtable[pair.bs_fqme] = pair
 
-            # XXX WOW: TURNS OUT THIS ISN'T TRUE !?
-            # > (populate `Asset` table for spot mkts only since it
-            # > should be a superset of any other venues such as
-            # > futes or margin)
             if venue == 'spot':
-                dst_sectype: str = 'crypto_currency'
+                if (name := pair.quoteAsset) not in asset_table:
+                    asset_table[name] = Asset(
+                        name=name,
+                        atype='crypto_currency',
+                        tx_tick=digits_to_dec(pair.quoteAssetPrecision),
+                    )
 
-            elif venue in {'usdtm_futes'}:
-                dst_sectype: str = 'future'
-                if pair.contractType == 'PERPETUAL':
-                    dst_sectype: str = 'perpetual_future'
-
-            spot_asset_table: dict[str, Asset] = self._venue2assets['spot']
-            ven_asset_table: dict[str, Asset] = self._venue2assets[venue]
-
-            if (
-                (name := pair.quoteAsset) not in spot_asset_table
-            ):
-                spot_asset_table[pair.bs_src_asset] = Asset(
-                    name=name,
-                    atype='crypto_currency',
-                    tx_tick=digits_to_dec(pair.quoteAssetPrecision),
-                )
-
-            if (
-                (name := pair.baseAsset) not in ven_asset_table
-            ):
-                if venue != 'spot':
-                    assert dst_sectype != 'crypto_currency'
-
-                ven_asset_table[pair.bs_dst_asset] = Asset(
-                    name=name,
-                    atype=dst_sectype,
-                    tx_tick=digits_to_dec(pair.baseAssetPrecision),
-                )
-
-            # log.warning(
-            #     f'Assets not YET found in spot set: `{pformat(dne)}`!?'
-            # )
+                if (name := pair.baseAsset) not in asset_table:
+                    asset_table[name] = Asset(
+                        name=name,
+                        atype='crypto_currency',
+                        tx_tick=digits_to_dec(pair.baseAssetPrecision),
+                    )
 
         # NOTE: make merged view of all market-type pairs but
         # use market specific `Pair.bs_fqme` for keys!
         # this allows searching for market pairs with different

@@ -440,29 +458,16 @@ class Client:
         if venue == 'spot':
             return
 
-        # TODO: maybe use this assets response for non-spot venues?
-        # -> issue is we do the exch_info queries conc, so we can't
-        #    guarantee order for inter-table lookups..
-        # if venue ep delivers an explicit set of assets copy just
-        # ensure they are also already listed in the spot equivs.
-        # assets: list[dict] = resp.get('assets', ())
-        # for entry in assets:
-        #     name: str = entry['asset']
-        #     spot_asset_table: dict[str, Asset] = self._venue2assets['spot']
-        #     if name not in spot_asset_table:
-        #         log.warning(
-        #             f'COULDNT FIND ASSET {name}\n{entry}\n'
-        #             f'ADDING AS FUTES ONLY!?'
-        #         )
-        #         asset_table: dict[str, Asset] = self._venue2assets[venue]
-        #         asset_table[name] = spot_asset_table.get(name)
+        assets: list[dict] = resp.get('assets', ())
+        for entry in assets:
+            name: str = entry['asset']
+            asset_table[name] = self._venue2assets['spot'].get(name)
 
     async def exch_info(
         self,
         sym: str | None = None,
 
         venue: MarketType | None = None,
-        expiry: str | None = None,
 
     ) -> dict[str, Pair] | Pair:
         '''

@@ -478,20 +483,9 @@ class Client:
 
         '''
         pair_table: dict[str, Pair] = self._venue2pairs[
-            venue
-            or
-            self.mkt_mode
+            venue or self.mkt_mode
         ]
-        if (
-            expiry
-            and 'perp' not in expiry.lower()
-        ):
-            sym: str = f'{sym}_{expiry}'
-
-        if (
-            sym
-            and (cached_pair := pair_table.get(sym))
-        ):
+        if cached_pair := pair_table.get(sym):
             return cached_pair
 
         venues: list[str] = ['spot', 'usdtm_futes']

@@ -499,52 +493,14 @@ class Client:
             venues: list[str] = [venue]
 
         # batch per-venue download of all exchange infos
-        async with trio.open_nursery() as tn:
+        async with trio.open_nursery() as rn:
             for ven in venues:
-                tn.start_soon(
+                rn.start_soon(
                     self._cache_pairs,
                     ven,
                 )
 
-        if sym:
-            return pair_table[sym]
-        else:
-            return self._pairs
-
-    async def get_assets(
-        self,
-        venue: str | None = None,
-
-    ) -> dict[str, Asset]:
-        if (
-            venue
-            and venue != 'spot'
-        ):
-            venues = [venue]
-        else:
-            venues = ['usdtm_futes']
-
-        ass_table: dict[str, Asset] = self._venue2assets['spot']
-
-        # merge in futes contracts with a sectype suffix
-        for venue in venues:
-            ass_table |= self._venue2assets[venue]
-
-        return ass_table
-
-    async def get_mkt_pairs(self) -> dict[str, Pair]:
-        '''
-        Flatten the multi-venue (chain) map of market pairs
-        to a fqme indexed table for data layer caching.
-
-        '''
-        flat: dict[str, Pair] = {}
-        for venmap in self._pairs.maps:
-            for bs_fqme, pair in venmap.items():
-                flat[pair.bs_fqme] = pair
-
-        return flat
+        return pair_table[sym] if sym else self._pairs
 
     # TODO: unused except by `brokers.core.search_symbols()`?
     async def search_symbols(

@@ -554,32 +510,20 @@ class Client:
 
     ) -> dict[str, Any]:
 
-        fq_pairs: dict[str, Pair] = await self.exch_info()
+        fq_pairs: dict = await self.exch_info()
 
-        # TODO: cache this list like we were in
-        # `open_symbol_search()`?
-        # keys: list[str] = list(fq_pairs)
-
-        return match_from_pairs(
-            pairs=fq_pairs,
-            query=pattern.upper(),
+        matches = fuzzy.extractBests(
+            pattern,
+            fq_pairs,
             score_cutoff=50,
         )
+        # repack in dict form
+        return {item[0]['symbol']: item[0]
+                for item in matches}
 
-    def pair2venuekey(
-        self,
-        pair: Pair,
-    ) -> str:
-        return {
-            'USDTM': 'usdtm_futes',
-            'SPOT': 'spot',
-            # 'COINM': 'coin_futes',
-            # ^-TODO-^ bc someone might want it..?
-        }[pair.venue]
-
     async def bars(
         self,
-        mkt: MktPair,
+        symbol: str,
 
         start_dt: datetime | None = None,
         end_dt: datetime | None = None,

@@ -609,20 +553,16 @@ class Client:
         start_time = binance_timestamp(start_dt)
         end_time = binance_timestamp(end_dt)
 
-        bs_pair: Pair = self._pairs[mkt.bs_fqme.upper()]
-
         # https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
         bars = await self._api(
             'klines',
             params={
-                # NOTE: always query using their native symbology!
-                'symbol': mkt.bs_mktid.upper(),
+                'symbol': symbol.upper(),
                 'interval': '1m',
                 'startTime': start_time,
                 'endTime': end_time,
                 'limit': limit
             },
-            venue=self.pair2venuekey(bs_pair),
             allow_testnet=False,
         )
         new_bars: list[tuple] = []
@@ -939,148 +879,17 @@ class Client:
         await self.close_listen_key(key)


-_venue_urls: dict[str, str] = {
-    'spot': (
-        _spot_url,
-        '/api/v3/',
-    ),
-    'spot_testnet': (
-        _testnet_spot_url,
-        '/fapi/v1/'
-    ),
-    # margin and extended spot endpoints session.
-    # TODO: did this ever get implemented fully?
-    # 'margin': (
-    #     _spot_url,
-    #     '/sapi/v1/'
-    # ),
-
-    'usdtm_futes': (
-        _futes_url,
-        '/fapi/v1/',
-    ),
-
-    'usdtm_futes_testnet': (
-        _testnet_futes_url,
-        '/fapi/v1/',
-    ),
-
-    # TODO: for anyone who actually needs it ;P
-    # 'coin_futes': ()
-}
-
-
-def init_api_keys(
-    client: Client,
-    conf: dict[str, Any],
-) -> None:
-    '''
-    Set up per-venue API keys for each http client according to the
-    user's `brokers.conf`.
-
-    For ex, to use spot-testnet and live usdt futures APIs:
-
-    ```toml
-    [binance]
-    # spot test net
-    spot.use_testnet = true
-    spot.api_key = '<spot_api_key_from_binance_account>'
-    spot.api_secret = '<spot_api_key_password>'
-
-    # futes live
-    futes.use_testnet = false
-    accounts.usdtm = 'futes'
-    futes.api_key = '<futes_api_key_from_binance>'
-    futes.api_secret = '<futes_api_key_password>'
-
-    # if uncommented will use the built-in paper engine and not
-    # connect to `binance` API servers for order ctl.
-    # accounts.paper = 'paper'
-    ```
-
-    '''
-    for key, subconf in conf.items():
-        if api_key := subconf.get('api_key', ''):
-            venue_keys: list[str] = client.confkey2venuekeys[key]
-
-            venue_key: str
-            client: httpx.AsyncClient
-            for venue_key in venue_keys:
-                client, _ = client.venue_sesh[venue_key]
-
-                api_key_header: dict = {
-                    # taken from official:
-                    # https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
-                    "Content-Type": "application/json;charset=utf-8",
-
-                    # TODO: prolly should just always query and copy
-                    # in the real latest ver?
-                    "User-Agent": "binance-connector/6.1.6smbz6",
-                    "X-MBX-APIKEY": api_key,
-                }
-                client.headers.update(api_key_header)
-
-                # if `.use_testnet = true` in the config then
-                # also add headers for the testnet session which
-                # will be used for all order control
-                if subconf.get('use_testnet', False):
-                    testnet_sesh, _ = client.venue_sesh[
-                        venue_key + '_testnet'
-                    ]
-                    testnet_sesh.headers.update(api_key_header)
-
-
 @acm
-async def get_client(
-    mkt_mode: MarketType = 'spot',
-) -> Client:
-    '''
-    Construct a single `piker` client which composes multiple underlying venue
-    specific API clients both for live and test networks.
-
-    '''
-    venue_sessions: dict[
-        str,  # venue key
-        tuple[httpx.AsyncClient, str],  # session, eps path
-    ] = {}
-    async with AsyncExitStack() as client_stack:
-        for name, (base_url, path) in _venue_urls.items():
-            api: httpx.AsyncClient = await client_stack.enter_async_context(
-                httpx.AsyncClient(
-                    base_url=base_url,
-                    # headers={},
-
-                    # TODO: is there a way to numerate this?
-                    # https://www.python-httpx.org/advanced/clients/#why-use-a-client
-                    # connections=4
-                )
-            )
-            venue_sessions[name] = (
-                api,
-                path,
-            )
-
-        conf: dict[str, Any] = get_config()
-        # for creating API keys see,
-        # https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
-        client = Client(
-            venue_sessions=venue_sessions,
-            conf=conf,
-            mkt_mode=mkt_mode,
-        )
-        init_api_keys(
-            client=client,
-            conf=conf,
-        )
-        fq_pairs: dict[str, Pair] = await client.exch_info()
-        assert fq_pairs
-        log.info(
-            f'Loaded multi-venue `Client` in mkt_mode={client.mkt_mode!r}\n\n'
-            f'Symbology Summary:\n'
-            f'------ - ------\n'
-            f'spot: {len(client._spot_pairs)}\n'
-            f'usdtm_futes: {len(client._ufutes_pairs)}\n'
-            '------ - ------\n'
-            f'total: {len(client._pairs)}\n'
-        )
-        yield client
+async def get_client() -> Client:
+    client = Client()
+    await client.exch_info()
+    log.info(
+        f'{client} in {client.mkt_mode} mode: caching exchange infos..\n'
+        'Cached multi-market pairs:\n'
+        f'spot: {len(client._spot_pairs)}\n'
+        f'usdtm_futes: {len(client._ufutes_pairs)}\n'
+        f'Total: {len(client._pairs)}\n'
+    )
+
+    yield client
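As a quick orientation (scaffolding assumed, not part of either branch): `get_client()` is `@acm`-decorated on both sides, so a caller drives it as an async context manager; only the `main` variant accepts a `mkt_mode`.

```python
import trio

# module path assumed from the repo layout
from piker.brokers.binance.api import get_client


async def show_symbology() -> None:
    # pre-loads per-venue httpx sessions and the full pair table
    async with get_client(mkt_mode='spot') as client:
        print(f'total cached pairs: {len(client._pairs)}')


trio.run(show_symbology)
```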
@@ -36,6 +36,7 @@ import trio

 from piker.accounting import (
     Asset,
+    # MktPair,
 )
 from piker.brokers._util import (
     get_logger,
@@ -48,9 +49,7 @@ from piker.brokers import (
     open_cached_client,
     BrokerError,
 )
-from piker.clearing import (
-    OrderDialogs,
-)
+from piker.clearing import OrderDialogs
 from piker.clearing._messages import (
     BrokerdOrder,
     BrokerdOrderAck,
@@ -72,33 +71,6 @@ from .api import Client

 log = get_logger('piker.brokers.binance')


-# Fee schedule template, mostly for paper engine fees modelling.
-# https://www.binance.com/en/support/faq/what-are-market-makers-and-takers-360007720071
-def get_cost(
-    price: float,
-    size: float,
-    is_taker: bool = False,
-
-) -> float:
-
-    # https://www.binance.com/en/fee/trading
-    cb: float = price * size
-    match is_taker:
-        case True:
-            return cb * 0.001000
-
-        case False if cb < 1e6:
-            return cb * 0.001000
-
-        case False if 1e6 <= cb < 5e6:
-            return cb * 0.000900
-
-        # NOTE: there's more but are you really going
-        # to have a cb bigger than this per trade?
-        case False if cb >= 5e6:
-            return cb * 0.000800
-
-
 async def handle_order_requests(
     ems_order_stream: tractor.MsgStream,
     client: Client,
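For a concrete read on the removed fee template above, the arithmetic it encodes (numbers illustrative; the middle tier written as corrected, `1e6 <= cb < 5e6`):

```python
cb = 27_000.0 * 50.0       # cost-basis = price * size = 1_350_000.0
taker_fee = cb * 0.001000  # taker: flat 10 bps -> 1_350.0
maker_fee = cb * 0.000900  # maker in the 1e6 <= cb < 5e6 tier -> 1_215.0
```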
@@ -260,24 +232,16 @@ async def open_trade_dialog(
     account_name: str = 'usdtm'
     use_testnet: bool = False

-    # TODO: if/when we add .accounting support we need to
-    # do a open_symcache() call.. though maybe we can hide
-    # this in a new async version of open_account()?
     async with open_cached_client('binance') as client:
-        subconf: dict|None = client.conf.get(venue_name)
+        subconf: dict = client.conf[venue_name]
+        use_testnet = subconf.get('use_testnet', False)

         # XXX: if no futes.api_key or spot.api_key has been set we
         # always fall back to the paper engine!
-        if (
-            not subconf
-            or
-            not subconf.get('api_key')
-        ):
+        if not subconf.get('api_key'):
             await ctx.started('paper')
             return

-        use_testnet: bool = subconf.get('use_testnet', False)
-
     async with (
         open_cached_client('binance') as client,
     ):
@@ -357,7 +321,7 @@ async def open_trade_dialog(

             if balance > 0:
                 balances[spot_asset] = (balance, last_update_t)
-                # await tractor.pause()
+                # await tractor.breakpoint()

     # @position response:
     # {'positions': [{'entryPrice': '0.0',
@@ -436,11 +400,10 @@ async def open_trade_dialog(
     #   and comparison with binance's own position calcs.
     # - load pps and accounts using accounting apis, write
     #   the ledger and account files
-    #   - table: Account
+    #   - table: PpTable
     #   - ledger: TransactionLedger

     async with (
-        tractor.trionics.collapse_eg(),
         trio.open_nursery() as tn,
         ctx.open_stream() as ems_stream,
     ):
@@ -24,11 +24,8 @@ from contextlib import (
     aclosing,
 )
 from datetime import datetime
-from functools import (
-    partial,
-)
+from functools import partial
 import itertools
-from pprint import pformat
 from typing import (
     Any,
     AsyncGenerator,
@@ -42,12 +39,12 @@ from trio_typing import TaskStatus
 from pendulum import (
     from_timestamp,
 )
+from fuzzywuzzy import process as fuzzy
 import numpy as np
 import tractor

 from piker.brokers import (
     open_cached_client,
-    NoData,
 )
 from piker._cacheables import (
     async_lifo_cache,
@@ -57,8 +54,9 @@ from piker.accounting import (
     DerivTypes,
     MktPair,
     unpack_fqme,
+    digits_to_dec,
 )
-from piker.types import Struct
+from piker.data.types import Struct
 from piker.data.validate import FeedInit
 from piker.data._web_bs import (
     open_autorecon_ws,
@@ -94,26 +92,22 @@ class L1(Struct):


 # validation type
-# https://developers.binance.com/docs/derivatives/usds-margined-futures/websocket-market-streams/Aggregate-Trade-Streams#response-example
 class AggTrade(Struct, frozen=True):
     e: str  # Event type
     E: int  # Event time
     s: str  # Symbol
     a: int  # Aggregate trade ID
     p: float  # Price
-    q: float  # Quantity with all the market trades
+    q: float  # Quantity
     f: int  # First trade ID
     l: int  # noqa Last trade ID
     T: int  # Trade time
     m: bool  # Is the buyer the market maker?
-    M: bool|None = None  # Ignore
-    nq: float|None = None  # Normal quantity without the trades involving RPI orders
-    # ^XXX https://developers.binance.com/docs/derivatives/change-log#2025-12-29
+    M: bool | None = None  # Ignore


 async def stream_messages(
     ws: NoBsWs,

 ) -> AsyncGenerator[NoBsWs, dict]:

     # TODO: match syntax here!
@@ -224,8 +218,6 @@ def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
     }


-# TODO, why aren't frame resp `log.info()`s showing in upstream
-# code?!
 @acm
 async def open_history_client(
     mkt: MktPair,
@@ -258,30 +250,24 @@ async def open_history_client(
         else:
             client.mkt_mode = 'spot'

-        array: np.ndarray = await client.bars(
-            mkt=mkt,
+        # NOTE: always query using their native symbology!
+        mktid: str = mkt.bs_mktid
+        array = await client.bars(
+            mktid,
             start_dt=start_dt,
             end_dt=end_dt,
         )
-        if array.size == 0:
-            raise NoData(
-                f'No frame for {start_dt} -> {end_dt}\n'
-            )
-
         times = array['time']
-        if not times.any():
-            raise ValueError(
-                'Bad frame with null-times?\n\n'
-                f'{times}'
-            )
-
-        if end_dt is None:
-            inow: int = round(time.time())
+        if (
+            end_dt is None
+        ):
+            inow = round(time.time())
             if (inow - times[-1]) > 60:
-                await tractor.pause()
+                await tractor.breakpoint()

         start_dt = from_timestamp(times[0])
         end_dt = from_timestamp(times[-1])

         return array, start_dt, end_dt

     yield get_ohlc, {'erlangs': 3, 'rate': 3}
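A sketch of how the yielded `get_ohlc` callable is typically driven by a history consumer; the parameter names here are assumptions, not confirmed by the diff:

```python
async def backfill(get_ohlc, frames: int = 3) -> list:
    out: list = []
    end = None  # `None` requests the most-recent frame per the impl above
    for _ in range(frames):
        array, start_dt, end_dt = await get_ohlc(
            timeframe=60,    # assumed: 1m bars
            start_dt=None,
            end_dt=end,
        )
        out.append(array)
        end = start_dt  # walk the request window backwards in time
    return out
```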
@@ -291,113 +277,69 @@ async def open_history_client(
 async def get_mkt_info(
     fqme: str,

-) -> tuple[MktPair, Pair] | None:
+) -> tuple[MktPair, Pair]:

     # uppercase since kraken bs_mktid is always upper
-    if 'binance' not in fqme.lower():
+    if 'binance' not in fqme:
         fqme += '.binance'

-    mkt_mode: str = ''
+    bs_fqme, _, broker = fqme.rpartition('.')
     broker, mkt_ep, venue, expiry = unpack_fqme(fqme)

-    # NOTE: we always upper case all tokens to be consistent with
-    # binance's symbology style for pairs, like `BTCUSDT`, but in
-    # theory we could also just keep things lower case; as long as
-    # we're consistent and the symcache matches whatever this func
-    # returns, always!
-    expiry: str = expiry.upper()
-    venue: str = venue.upper()
-    venue_lower: str = venue.lower()
-
-    # XXX TODO: we should change the usdtm_futes name to just
-    # usdm_futes (dropping the tether part) since it turns out that
-    # there are indeed USD-tokens OTHER THAN tether being used as
-    # the margin assets.. it's going to require a wholesale
-    # (variable/key) rename as well as file name adjustments to any
-    # existing tsdb set..
-    if 'usd' in venue_lower:
-        mkt_mode: str = 'usdtm_futes'
-
-    # NO IDEA what these contracts (some kinda DEX-ish futes?) are
-    # but we're masking them for now..
-    elif (
-        'defi' in venue_lower
-
-        # TODO: handle coinm futes which have a margin asset that
-        # is some crypto token!
-        # https://binance-docs.github.io/apidocs/delivery/en/#exchange-information
-        or 'btc' in venue_lower
-    ):
-        return None
-
-    else:
-        # NOTE: see the `FutesPair.bs_fqme: str` implementation
-        # to understand the reverse market info lookup below.
-        mkt_mode = venue_lower or 'spot'
+    # NOTE: see the `FutesPair.bs_fqme: str` implementation
+    # to understand the reverse market info lookup below.
+    mkt_mode = venue = venue.lower() or 'spot'
+    _atype: str = ''

     if (
         venue
-        and 'spot' not in venue_lower
+        and 'spot' not in venue.lower()

         # XXX: catch all in case user doesn't know which
         # venue they want (usdtm vs. coinm) and we can choose
         # a default (via config?) once we support coin-m APIs.
-        or 'perp' in venue_lower
+        or 'perp' in bs_fqme.lower()
     ):
-        if not mkt_mode:
-            mkt_mode: str = f'{venue_lower}_futes'
+        mkt_mode: str = f'{venue.lower()}_futes'
+        if 'perp' in expiry:
+            _atype = 'perpetual_future'
+
+        else:
+            _atype = 'future'

     async with open_cached_client(
         'binance',
     ) as client:

-        assets: dict[str, Asset] = await client.get_assets()
-        pair_str: str = mkt_ep.upper()
-
-        # switch venue-mode depending on input pattern parsing
-        # since we want to use a particular endpoint (set) for
-        # pair info lookup!
+        # switch mode depending on input pattern parsing
         client.mkt_mode = mkt_mode

-        pair: Pair = await client.exch_info(
-            pair_str,
-            venue=mkt_mode,  # explicit
-            expiry=expiry,
-        )
+        pair_str: str = mkt_ep.upper()
+        pair: Pair = await client.exch_info(pair_str)

         if 'futes' in mkt_mode:
             assert isinstance(pair, FutesPair)

-        dst: Asset | None = assets.get(pair.bs_dst_asset)
-        if (
-            not dst
-            # TODO: a known asset DNE list?
-            # and pair.baseAsset == 'DEFI'
-        ):
-            log.warning(
-                f'UNKNOWN {venue} asset {pair.baseAsset} from,\n'
-                f'{pformat(pair.to_dict())}'
-            )
-
-            # XXX UNKNOWN missing "asset", though no idea why?
-            # maybe it's only avail in the margin venue(s): /dapi/ ?
-            return None
-
         mkt = MktPair(
-            dst=dst,
-            src=assets[pair.bs_src_asset],
+            dst=Asset(
+                name=pair.baseAsset,
+                atype='crypto',
+                tx_tick=digits_to_dec(pair.baseAssetPrecision),
+            ),
+            src=Asset(
+                name=pair.quoteAsset,
+                atype='crypto',
+                tx_tick=digits_to_dec(pair.quoteAssetPrecision),
+            ),
             price_tick=pair.price_tick,
             size_tick=pair.size_tick,
             bs_mktid=pair.symbol,
             expiry=expiry,
             venue=venue,
             broker='binance',
-            # NOTE: sectype is always taken from dst, see
-            # `MktPair.type_key` and `Client._cache_pairs()`
-            # _atype=sectype,
+            _atype=_atype,
         )
-        return mkt, pair
+        both = mkt, pair
+        return both


 @acm
@@ -451,6 +393,7 @@ async def subscribe(


 async def stream_quotes(

     send_chan: trio.abc.SendChannel,
     symbols: list[str],
     feed_is_live: trio.Event,
@@ -462,14 +405,11 @@ async def stream_quotes(
 ) -> None:

     async with (
-        tractor.trionics.maybe_raise_from_masking_exc(),
         send_chan as send_chan,
         open_cached_client('binance') as client,
     ):
         init_msgs: list[FeedInit] = []
         for sym in symbols:
-            mkt: MktPair
-            pair: Pair
             mkt, pair = await get_mkt_info(sym)

             # build out init msgs according to latest spec
@@ -518,6 +458,7 @@ async def stream_quotes(

         # start streaming
         async for typ, quote in msg_gen:

             # period = time.time() - last
             # hz = 1/period if period else float('inf')
             # if hz > 60:
@@ -531,11 +472,10 @@ async def open_symbol_search(
     ctx: tractor.Context,
 ) -> Client:

-    # NOTE: symbology tables are loaded as part of client
-    # startup in ``.api.get_client()`` and in this case
-    # are stored as `Client._pairs`.
     async with open_cached_client('binance') as client:

+        # load all symbols locally for fast search
+        fqpairs_cache = await client.exch_info()
         # TODO: maybe we should deliver the cache
         # so that clients can always do a local-lookup-first
         # style try and then update async as (new) match results
@@ -546,15 +486,14 @@ async def open_symbol_search(

         pattern: str
         async for pattern in stream:
-            # NOTE: pattern fuzzy-matching is done within
-            # the method impl.
-            pairs: dict[str, Pair] = await client.search_symbols(
+            matches = fuzzy.extractBests(
                 pattern,
+                fqpairs_cache,
+                score_cutoff=50,
             )

-            # repack in fqme-keyed table
-            byfqme: dict[str, Pair] = {}
-            for pair in pairs.values():
-                byfqme[pair.bs_fqme] = pair
-
-            await stream.send(byfqme)
+            # repack in dict form
+            await stream.send({
+                item[0].bs_fqme: item[0]
+                for item in matches
+            })
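Both branches fuzzy-match search patterns, just in different places: `main` hides it inside `client.search_symbols()` while this branch calls `fuzzywuzzy.extractBests()` inline. A minimal standalone sketch of the inline form (the `Pair`-valued cache dict is assumed):

```python
from fuzzywuzzy import process as fuzzy


def search_pairs(pattern: str, fqpairs_cache: dict) -> dict:
    # with a mapping as choices, `extractBests()` returns
    # [(value, score, key), ...] tuples above the cutoff
    matches = fuzzy.extractBests(
        pattern,
        fqpairs_cache,
        score_cutoff=50,
    )
    # repack fqme-keyed, mirroring the `stream.send()` above
    return {item[0].bs_fqme: item[0] for item in matches}
```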
@@ -1,5 +1,8 @@
 # piker: trading gear for hackers
-# Copyright (C) Tyler Goodlet (in stewardship for pikers)
+# Copyright (C)
+#   Guillermo Rodriguez (aka ze jefe)
+#   Tyler Goodlet
+#   (in stewardship for pikers)

 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as published by
@@ -26,7 +29,7 @@ from decimal import Decimal

 from msgspec import field

-from piker.types import Struct
+from piker.data.types import Struct


 # API endpoint paths by venue / sub-API
@@ -62,7 +65,7 @@ MarketType = Literal[
     'spot',
     # 'margin',
     'usdtm_futes',
-    # 'coinm_futes',
+    # 'coin_futes',
 ]
@@ -84,7 +87,6 @@ def get_api_eps(venue: MarketType) -> tuple[str, str]:


 class Pair(Struct, frozen=True, kw_only=True):
-
     symbol: str
     status: str
     orderTypes: list[str]
@@ -97,16 +99,6 @@ class Pair(Struct, frozen=True, kw_only=True):
     baseAsset: str
     baseAssetPrecision: int

-    permissionSets: list[list[str]]
-
-    # https://developers.binance.com/docs/binance-spot-api-docs#2025-08-26
-    # will become non-optional 2025-08-28?
-    # https://developers.binance.com/docs/binance-spot-api-docs#future-changes
-    pegInstructionsAllowed: bool = False
-
-    # https://developers.binance.com/docs/binance-spot-api-docs#2025-12-02
-    opoAllowed: bool = False
-
     filters: dict[
         str,
         str | int | float,
@@ -128,10 +120,6 @@ class Pair(Struct, frozen=True, kw_only=True):
     def bs_fqme(self) -> str:
         return self.symbol

-    @property
-    def bs_mktid(self) -> str:
-        return f'{self.symbol}.{self.venue}'
-

 class SpotPair(Pair, frozen=True):
@@ -147,35 +135,15 @@ class SpotPair(Pair, frozen=True):
     quoteOrderQtyMarketAllowed: bool
     isSpotTradingAllowed: bool
     isMarginTradingAllowed: bool
-    otoAllowed: bool

     defaultSelfTradePreventionMode: str
     allowedSelfTradePreventionModes: list[str]
     permissions: list[str]

-    # can the paint botz create liq gaps even easier on this asset?
-    # Bp
-    # https://developers.binance.com/docs/binance-spot-api-docs/faqs/order_amend_keep_priority
-    amendAllowed: bool
-
-    # NOTE: see `.data._symcache.SymbologyCache.load()` for why
-    ns_path: str = 'piker.brokers.binance:SpotPair'
-
-    @property
-    def venue(self) -> str:
-        return 'SPOT'
-
     @property
     def bs_fqme(self) -> str:
         return f'{self.symbol}.SPOT'

-    @property
-    def bs_src_asset(self) -> str:
-        return f'{self.quoteAsset}'
-
-    @property
-    def bs_dst_asset(self) -> str:
-        return f'{self.baseAsset}'
-

 class FutesPair(Pair):
@@ -195,14 +163,12 @@ class FutesPair(Pair):
     quoteAsset: str  # 'USDT',
     quotePrecision: int  # 8,
     requiredMarginPercent: float  # '5.0000',
+    settlePlan: int  # 0,
     timeInForce: list[str]  # ['GTC', 'IOC', 'FOK', 'GTX'],
     triggerProtect: float  # '0.0500',
     underlyingSubType: list[str]  # ['PoW'],
     underlyingType: str  # 'COIN'

-    # NOTE: see `.data._symcache.SymbologyCache.load()` for why
-    ns_path: str = 'piker.brokers.binance:FutesPair'
-
     # NOTE: for compat with spot pairs and `MktPair.src: Asset`
     # processing..
     @property
@@ -210,107 +176,32 @@ class FutesPair(Pair):
         return self.quotePrecision

     @property
-    def expiry(self) -> str:
-        symbol: str = self.symbol
-        contype: str = self.contractType
-        match contype:
-            case (
-                'CURRENT_QUARTER'
-                | 'CURRENT_QUARTER DELIVERING'
-                | 'NEXT_QUARTER'  # su madre binance..
-            ):
-                pair, _, expiry = symbol.partition('_')
-                assert pair == self.pair  # sanity
-                return f'{expiry}'
-
-            case (
-                'PERPETUAL'
-                | 'TRADIFI_PERPETUAL'
-            ):
-                return 'PERP'
-
-            case '':
-                subtype: list[str] = self.underlyingSubType
-                if not subtype:
-                    if self.status == 'PENDING_TRADING':
-                        return 'PENDING'
-
-                match subtype:
-                    case ['DEFI']:
-                        return 'PERP'
-
-        # wow, just wow you binance guys suck..
-        if self.status == 'PENDING_TRADING':
-            return 'PENDING'
-
-        # XXX: yeah no clue then..
-        raise ValueError(
-            f'Bad .expiry token match: {contype} for {symbol}'
-        )
-
-    @property
-    def venue(self) -> str:
+    def bs_fqme(self) -> str:
         symbol: str = self.symbol
         ctype: str = self.contractType
         margin: str = self.marginAsset

         match ctype:
-            case (
-                'PERPETUAL'
-                | 'TRADIFI_PERPETUAL'
-            ):
-                return f'{margin}M'
-
-            case (
-                'CURRENT_QUARTER'
-                | 'CURRENT_QUARTER DELIVERING'
-                | 'NEXT_QUARTER'  # su madre binance..
-            ):
-                _, _, expiry = symbol.partition('_')
-                return f'{margin}M'
+            case 'PERPETUAL':
+                return f'{symbol}.{margin}M.PERP'
+
+            case 'CURRENT_QUARTER':
+                pair, _, expiry = symbol.partition('_')
+                return f'{pair}.{margin}M.{expiry}'

             case '':
                 subtype: list[str] = self.underlyingSubType
                 if not subtype:
                     if self.status == 'PENDING_TRADING':
-                        return f'{margin}M'
+                        return f'{symbol}.{margin}M.PENDING'

-                match subtype:
-                    case (
-                        ['DEFI']
-                        | ['USDC']
-                    ):
-                        return f'{subtype[0]}'
+                match subtype[0]:
+                    case 'DEFI':
+                        return f'{symbol}.{subtype}.PERP'

         # XXX: yeah no clue then..
-        raise ValueError(
-            f'Bad .venue token match: {ctype}'
-        )
-
-    @property
-    def bs_fqme(self) -> str:
-        symbol: str = self.symbol
-        ctype: str = self.contractType
-        venue: str = self.venue
-        pair: str = self.pair
-
-        match ctype:
-            case (
-                'CURRENT_QUARTER'
-                | 'NEXT_QUARTER'  # su madre binance..
-            ):
-                pair, _, expiry = symbol.partition('_')
-                assert pair == self.pair
-
-        return f'{pair}.{venue}.{self.expiry}'
-
-    @property
-    def bs_src_asset(self) -> str:
-        return f'{self.quoteAsset}'
-
-    @property
-    def bs_dst_asset(self) -> str:
-        return f'{self.baseAsset}.{self.venue}'
+        return f'{symbol}.WTF.PWNED.BBQ'


 PAIRTYPES: dict[MarketType, Pair] = {
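For orientation, illustrative renderings of the `bs_fqme` implementations above (symbols assumed; both versions agree on the common contract types):

```python
# margin asset 'USDT' renders the venue token as 'USDTM'
# PERPETUAL:        symbol 'BTCUSDT'        -> 'BTCUSDT.USDTM.PERP'
# CURRENT_QUARTER:  symbol 'BTCUSDT_230929' -> 'BTCUSDT.USDTM.230929'
```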
@@ -454,54 +454,37 @@ def mkt_info(

 @cli.command()
 @click.argument('pattern', required=True)
-# TODO: move this to top level click/typer context for all subs
-@click.option(
-    '--pdb',
-    is_flag=True,
-    help='Enable tractor debug mode',
-)
 @click.pass_obj
-def search(
-    config: dict,
-    pattern: str,
-    pdb: bool,
-):
+def search(config, pattern):
     '''
     Search for symbols from broker backend(s).

     '''
     # global opts
-    brokermods: list[ModuleType] = list(config['brokermods'].values())
-
-    # TODO: this is coming from the `search --pdb` NOT from
-    # the `piker --pdb` XD ..
-    # -[ ] pull from the parent click ctx's values..dumdum
-    # assert pdb
+    brokermods = list(config['brokermods'].values())

     # define tractor entrypoint
     async def main(func):

         async with maybe_open_pikerd(
             loglevel=config['loglevel'],
-            debug_mode=pdb,
         ):
             return await func()

-    from piker.toolz import open_crash_handler
-    with open_crash_handler():
-        quotes = trio.run(
-            main,
-            partial(
-                core.symbol_search,
-                brokermods,
-                pattern,
-            ),
-        )
+    quotes = trio.run(
+        main,
+        partial(
+            core.symbol_search,
+            brokermods,
+            pattern,
+        ),
+    )

     if not quotes:
         log.error(f"No matches could be found for {pattern}?")
         return

     click.echo(colorize_json(quotes))


 @cli.command()
@@ -510,11 +493,9 @@ def search(
 @click.option('--delete', '-d', flag_value=True, help='Delete section')
 @click.pass_obj
 def brokercfg(config, section, value, delete):
-    '''
-    If invoked with no arguments, open an editor to edit broker
-    configs file or get / update an individual section.
-
-    '''
+    """If invoked with no arguments, open an editor to edit broker configs file
+    or get / update an individual section.
+    """
     from .. import config

     if section:
@@ -22,9 +22,7 @@ routines should be primitive data types where possible.
 """
 import inspect
 from types import ModuleType
-from typing import (
-    Any,
-)
+from typing import List, Dict, Any, Optional

 import trio
@@ -36,10 +34,8 @@ from ..accounting import MktPair


 async def api(brokername: str, methname: str, **kwargs) -> dict:
-    '''
-    Make (proxy through) a broker API call by name and return its result.
-
-    '''
+    """Make (proxy through) a broker API call by name and return its result.
+    """
     brokermod = get_brokermod(brokername)
     async with brokermod.get_client() as client:
         meth = getattr(client, methname, None)
@@ -66,14 +62,10 @@ async def api(brokername: str, methname: str, **kwargs) -> dict:

 async def stocks_quote(
     brokermod: ModuleType,
-    tickers: list[str]
-
-) -> dict[str, dict[str, Any]]:
-    '''
-    Return a `dict` of snapshot quotes for the provided input
-    `tickers`: a `list` of fqmes.
-
-    '''
+    tickers: List[str]
+) -> Dict[str, Dict[str, Any]]:
+    """Return quotes dict for ``tickers``.
+    """
     async with brokermod.get_client() as client:
         return await client.quote(tickers)
@@ -82,15 +74,13 @@ async def stocks_quote(
 async def option_chain(
     brokermod: ModuleType,
     symbol: str,
-    date: str|None = None,
-) -> dict[str, dict[str, dict[str, Any]]]:
-    '''
-    Return option chain for ``symbol`` for ``date``.
+    date: Optional[str] = None,
+) -> Dict[str, Dict[str, Dict[str, Any]]]:
+    """Return option chain for ``symbol`` for ``date``.

     By default all expiries are returned. If ``date`` is provided
     then contract quotes for that single expiry are returned.
-
-    '''
+    """
     async with brokermod.get_client() as client:
         if date:
             id = int((await client.tickers2ids([symbol]))[symbol])
@@ -105,39 +95,45 @@ async def option_chain(
         return await client.option_chains(contracts)


-# async def contracts(
-#     brokermod: ModuleType,
-#     symbol: str,
-# ) -> dict[str, dict[str, dict[str, Any]]]:
-#     """Return option contracts (all expiries) for ``symbol``.
-#     """
-#     async with brokermod.get_client() as client:
-#         # return await client.get_all_contracts([symbol])
-#         return await client.get_all_contracts([symbol])
+async def contracts(
+    brokermod: ModuleType,
+    symbol: str,
+) -> Dict[str, Dict[str, Dict[str, Any]]]:
+    """Return option contracts (all expiries) for ``symbol``.
+    """
+    async with brokermod.get_client() as client:
+        # return await client.get_all_contracts([symbol])
+        return await client.get_all_contracts([symbol])


 async def bars(
     brokermod: ModuleType,
     symbol: str,
     **kwargs,
-) -> dict[str, dict[str, dict[str, Any]]]:
-    '''
-    Return option contracts (all expiries) for ``symbol``.
-
-    '''
+) -> Dict[str, Dict[str, Dict[str, Any]]]:
+    """Return option contracts (all expiries) for ``symbol``.
+    """
     async with brokermod.get_client() as client:
         return await client.bars(symbol, **kwargs)


-async def search_w_brokerd(
-    name: str,
-    pattern: str,
-) -> dict:
-
-    # TODO: WHY NOT WORK!?!
-    # when we `step` through the next block?
-    # import tractor
-    # await tractor.pause()
+async def mkt_info(
+    brokermod: ModuleType,
+    fqme: str,
+    **kwargs,
+
+) -> MktPair:
+    '''
+    Return MktPair info from broker including src and dst assets.
+
+    '''
+    return await brokermod.get_mkt_info(
+        fqme.replace(brokermod.name, '')
+    )
+
+
+async def search_w_brokerd(name: str, pattern: str) -> dict:
+
     async with open_cached_client(name) as client:

         # TODO: support multiple asset type concurrent searches.
@@ -149,12 +145,12 @@ async def symbol_search(
     pattern: str,
     **kwargs,

-) -> dict[str, dict[str, dict[str, Any]]]:
+) -> Dict[str, Dict[str, Dict[str, Any]]]:
     '''
     Return symbol info from broker.

     '''
-    results: list[str] = []
+    results = []

     async def search_backend(
         brokermod: ModuleType
@@ -162,20 +158,9 @@ async def symbol_search(

         brokername: str = mod.name

-        # TODO: figure this the FUCK OUT
-        # -> ok so obvi in the root actor any async task that's
-        # spawned outside the main tractor-root-actor task needs to
-        # call this..
-        # await tractor.devx._debug.maybe_init_greenback()
-        # tractor.pause_from_sync()
-
         async with maybe_spawn_brokerd(
             mod.name,
-            infect_asyncio=getattr(
-                mod,
-                '_infect_asyncio',
-                False,
-            ),
+            infect_asyncio=getattr(mod, '_infect_asyncio', False),
         ) as portal:

             results.append((
@@ -188,26 +173,8 @@ async def symbol_search(
             ))

     async with trio.open_nursery() as n:

         for mod in brokermods:
             n.start_soon(search_backend, mod.name)

     return results
-
-
-async def mkt_info(
-    brokermod: ModuleType,
-    fqme: str,
-
-    **kwargs,
-
-) -> MktPair:
-    '''
-    Return the `piker.accounting.MktPair` info struct from a given
-    backend broker tradable src/dst asset pair.
-
-    '''
-    async with open_cached_client(brokermod.name) as client:
-        assert client
-        return await brokermod.get_mkt_info(
-            fqme.replace(brokermod.name, '')
-        )
@@ -31,15 +31,14 @@ from typing import (
     Callable,
 )

-from pendulum import now
+import pendulum
 import trio
 from trio_typing import TaskStatus
-from rapidfuzz import process as fuzzy
+from fuzzywuzzy import process as fuzzy
 import numpy as np
 from tractor.trionics import (
     broadcast_receiver,
     maybe_open_context
-    collapse_eg,
 )
 from tractor import to_asyncio
 # XXX WOOPS XD
@@ -53,11 +52,8 @@ from cryptofeed.defines import (
 )
 from cryptofeed.symbols import Symbol

-from piker.data import (
-    def_iohlcv_fields,
-    match_from_pairs,
-    Struct,
-)
+from piker.data.types import Struct
+from piker.data import def_iohlcv_fields
 from piker.data._web_bs import (
     open_jsonrpc_session
 )
@@ -83,7 +79,7 @@ _testnet_ws_url = 'wss://test.deribit.com/ws/api/v2'
 class JSONRPCResult(Struct):
     jsonrpc: str = '2.0'
     id: int
-    result: Optional[list[dict]] = None
+    result: Optional[dict] = None
     error: Optional[dict] = None
     usIn: int
     usOut: int
@@ -293,29 +289,24 @@ class Client:
         currency: str = 'btc',  # BTC, ETH, SOL, USDC
         kind: str = 'option',
         expired: bool = False
-
-    ) -> dict[str, dict]:
-        '''
-        Get symbol infos.
-
-        '''
+    ) -> dict[str, Any]:
+        """Get symbol info for the exchange.
+
+        """
         if self._pairs:
             return self._pairs

         # will retrieve all symbols by default
-        params: dict[str, str] = {
+        params = {
             'currency': currency.upper(),
             'kind': kind,
             'expired': str(expired).lower()
         }

-        resp: JSONRPCResult = await self.json_rpc(
-            'public/get_instruments',
-            params,
-        )
-        # convert to symbol-keyed table
-        results: list[dict] | None = resp.result
-        instruments: dict[str, dict] = {
+        resp = await self.json_rpc('public/get_instruments', params)
+        results = resp.result
+
+        instruments = {
             item['instrument_name'].lower(): item
             for item in results
         }
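For context on the `json_rpc('public/get_instruments', params)` call above, the wire payload is a plain JSON-RPC 2.0 request; a sketch with illustrative values:

```python
# assumed request envelope; params mirror the `params` dict above
request = {
    'jsonrpc': '2.0',
    'id': 42,
    'method': 'public/get_instruments',
    'params': {
        'currency': 'BTC',
        'kind': 'option',
        'expired': 'false',
    },
}
```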
@@ -328,7 +319,6 @@ class Client:
     async def cache_symbols(
         self,
     ) -> dict:
-
         if not self._pairs:
             self._pairs = await self.symbol_info()
@@ -339,23 +329,17 @@ class Client:
         pattern: str,
         limit: int = 30,
     ) -> dict[str, Any]:
-        '''
-        Fuzzy search symbology set for pairs matching `pattern`.
-
-        '''
-        pairs: dict[str, Any] = await self.symbol_info()
-        matches: dict[str, Pair] = match_from_pairs(
-            pairs=pairs,
-            query=pattern.upper(),
+        data = await self.symbol_info()
+
+        matches = fuzzy.extractBests(
+            pattern,
+            data,
             score_cutoff=35,
             limit=limit
         )
-
-        # repack in name-keyed table
-        return {
-            pair['instrument_name'].lower(): pair
-            for pair in matches.values()
-        }
+        # repack in dict form
+        return {item[0]['instrument_name'].lower(): item[0]
+                for item in matches}

     async def bars(
         self,
@@ -433,7 +417,6 @@ async def get_client(
 ) -> Client:

     async with (
-        collapse_eg(),
         trio.open_nursery() as n,
         open_jsonrpc_session(
             _testnet_ws_url, dtype=JSONRPCResult) as json_rpc
@@ -26,7 +26,7 @@ import time
 import trio
 from trio_typing import TaskStatus
 import pendulum
-from rapidfuzz import process as fuzzy
+from fuzzywuzzy import process as fuzzy
 import numpy as np
 import tractor
@@ -30,33 +30,23 @@ from .api import (
 )
 from .feed import (
     open_history_client,
+    open_symbol_search,
     stream_quotes,
 )
 from .broker import (
     open_trade_dialog,
 )
 from .ledger import (
-    norm_trade,
     norm_trade_records,
-    tx_sort,
-)
-from .symbols import (
-    get_mkt_info,
-    open_symbol_search,
-    _search_conf,
 )

 __all__ = [
     'get_client',
-    'get_mkt_info',
-    'norm_trade',
     'norm_trade_records',
     'open_trade_dialog',
     'open_history_client',
     'open_symbol_search',
     'stream_quotes',
-    '_search_conf',
-    'tx_sort',
 ]

 _brokerd_mods: list[str] = [
@@ -66,7 +56,6 @@ _brokerd_mods: list[str] = [

 _datad_mods: list[str] = [
     'feed',
-    'symbols',
 ]
@@ -86,8 +75,3 @@ _spawn_kwargs = {
 # know if ``brokerd`` should be spawned with
 # ``tractor``'s aio mode.
 _infect_asyncio: bool = True
-
-# XXX NOTE: for now we disable symcache with this backend since
-# there is no clearly simple nor practical way to download "all
-# symbology info" for all supported venues..
-_no_symcache: bool = True
@@ -159,11 +159,7 @@ def load_flex_trades(
     for acctid in trades_by_account:
         trades_by_id = trades_by_account[acctid]

-        with open_trade_ledger(
-            'ib',
-            acctid,
-            allow_from_sync_code=True,
-        ) as ledger_dict:
+        with open_trade_ledger('ib', acctid) as ledger_dict:
             tid_delta = set(trades_by_id) - set(ledger_dict)
             log.info(
                 'New trades detected\n'
@@ -20,11 +20,6 @@ runnable script-programs.

 '''
 from __future__ import annotations
-from datetime import (  # noqa
-    datetime,
-    date,
-    tzinfo as TzInfo,
-)
 from functools import partial
 from typing import (
     Literal,
@@ -38,7 +33,7 @@ from piker.brokers._util import get_logger

 if TYPE_CHECKING:
     from .api import Client
-    import i3ipc
+    from ib_insync import IB

 log = get_logger('piker.brokers.ib')
@@ -53,39 +48,8 @@ _reset_tech: Literal[
 ] = 'vnc'


-no_setup_msg: str = (
-    'No data reset hack test setup for {vnc_sockaddr}!\n'
-    'See config setup tips @\n'
-    'https://github.com/pikers/piker/tree/master/piker/brokers/ib'
-)
-
-
-def try_xdo_manual(
-    client: Client,
-):
-    '''
-    Do the "manual" `xdo`-based screen switch + click
-    combo since apparently the `asyncvnc` client ain't workin..
-
-    Note this is only meant as a backup method for Xorg users,
-    ideally you can use a real vnc client and the `vnc_click_hack()`
-    impl!
-
-    '''
-    global _reset_tech
-    try:
-        i3ipc_xdotool_manual_click_hack()
-        _reset_tech = 'i3ipc_xdotool'
-        return True
-    except OSError:
-        vnc_sockaddr: str = client.conf.vnc_addrs
-        log.exception(
-            no_setup_msg.format(vnc_sockaddr=vnc_sockaddr)
-        )
-        return False
-
-
 async def data_reset_hack(
+    # vnc_host: str,
     client: Client,
     reset_type: Literal['data', 'connection'],
@@ -117,60 +81,56 @@ async def data_reset_hack(
     that need to be wrangled.

     '''
+    ib_client: IB = client.ib
+
     # look up any user defined vnc socket address mapped from
     # a particular API socket port.
-    vnc_addrs: tuple[str]|None = client.conf.get('vnc_addrs')
-    if not vnc_addrs:
-        log.warning(
-            no_setup_msg.format(vnc_sockaddr=client.conf)
-            +
-            'REQUIRES A `vnc_addrs: array` ENTRY'
-        )
+    api_port: str = str(ib_client.client.port)
+    vnc_host: str
+    vnc_port: int
+    vnc_host, vnc_port = client.conf['vnc_addrs'].get(
+        api_port,
+        ('localhost', 3003)
+    )

+    no_setup_msg: str = (
+        f'No data reset hack test setup for {vnc_host}!\n'
+        'See setup @\n'
+        'https://github.com/pikers/piker/tree/master/piker/brokers/ib'
+    )
     global _reset_tech

     match _reset_tech:
         case 'vnc':
             try:
                 await tractor.to_asyncio.run_task(
                     partial(
                         vnc_click_hack,
-                        client=client,
+                        host=vnc_host,
+                        port=vnc_port,
                     )
                 )
-            except (
-                OSError,  # no VNC server avail..
-                PermissionError,  # asyncvnc pw fail..
-            ):
+            except OSError:
+                if vnc_host != 'localhost':
+                    log.warning(no_setup_msg)
+                    return False

                 try:
                     import i3ipc  # noqa (since a deps dynamic check)
                 except ModuleNotFoundError:
-                    log.warning(
-                        no_setup_msg.format(vnc_sockaddr=client.conf)
-                    )
+                    log.warning(no_setup_msg)
                     return False

-                # XXX, Xorg only workaround..
-                # TODO? remove now that we have `pyvnc`?
-                # if vnc_host not in {
-                #     'localhost',
-                #     '127.0.0.1',
-                # }:
-                #     focussed, matches = i3ipc_fin_wins_titled()
-                #     if not matches:
-                #         log.warning(
-                #             no_setup_msg.format(vnc_sockaddr=vnc_sockaddr)
-                #         )
-                #         return False
-                #     else:
-                #         try_xdo_manual(vnc_sockaddr)
-
-                # localhost but no vnc-client or it borked..
-                else:
-                    try_xdo_manual(client)
+                try:
+                    i3ipc_xdotool_manual_click_hack()
+                    _reset_tech = 'i3ipc_xdotool'
+                    return True
+                except OSError:
+                    log.exception(no_setup_msg)
+                    return False

         case 'i3ipc_xdotool':
-            try_xdo_manual(client)
-            # i3ipc_xdotool_manual_click_hack()
+            i3ipc_xdotool_manual_click_hack()

         case _ as tech:
             raise RuntimeError(f'{tech} is not supported for reset tech!?')
@@ -180,66 +140,21 @@ async def data_reset_hack(


 async def vnc_click_hack(
-    client: Client,
-    reset_type: str = 'data',
-    pw: str|None = None,
+    host: str,
+    port: int,
+    reset_type: str = 'data'

 ) -> None:
     '''
     Reset the data or network connection for the VNC attached
-    ib-gateway using a (magic) keybinding combo.
-
-    A vnc-server password can be set either by an input `pw` param or
-    set in the client's config with the latter loaded from the user's
-    `brokers.toml` in a vnc-addrs-port-mapping section,
-
-    .. code:: toml
-
-        [ib.vnc_addrs]
-        4002 = {host = 'localhost', port = 5900, pw = 'doggy'}
+    ib gateway using magic combos.

     '''
-    api_port: str = str(client.ib.client.port)
-    conf: dict = client.conf
-    vnc_addrs: dict[int, tuple] = conf.get('vnc_addrs')
-    if not vnc_addrs:
-        return None
-
-    addr_entry: dict|tuple = vnc_addrs.get(
-        api_port,
-        ('localhost', 5900)  # a typical default
-    )
-    if pw is None:
-        match addr_entry:
-            case (
-                host,
-                port,
-            ):
-                pass
-
-            case {
-                'host': host,
-                'port': port,
-                'pw': pw
-            }:
-                pass
-
-            case _:
-                raise ValueError(
-                    f'Invalid `ib.vnc_addrs` entry ?\n'
-                    f'{addr_entry!r}\n'
-                )
     try:
-        from pyvnc import (
-            AsyncVNCClient,
-            VNCConfig,
-            Point,
-            MOUSE_BUTTON_LEFT,
-        )
+        import asyncvnc
     except ModuleNotFoundError:
         log.warning(
             "In order to leverage `piker`'s built-in data reset hacks, install "
-            "the `pyvnc` project: https://github.com/regulad/pyvnc.git"
+            "the `asyncvnc` project: https://github.com/barneygale/asyncvnc"
         )
         return
@@ -250,79 +165,24 @@ async def vnc_click_hack(
         'connection': 'r'
     }[reset_type]

-    with tractor.devx.open_crash_handler():
-        client = await AsyncVNCClient.connect(
-            VNCConfig(
-                host=host,
-                port=port,
-                password=pw,
-            )
-        )
-        async with client:
-            # move to middle of screen
-            # 640x1800
-            await client.move(
-                Point(
-                    500,
-                    500,
-                )
-            )
-            # ensure the ib-gw window is active
-            await client.click(MOUSE_BUTTON_LEFT)
-            # send the hotkeys combo B)
-            await client.press('Ctrl', 'Alt', key)  # keys are stacked
-
-
-def i3ipc_fin_wins_titled(
-    titles: list[str] = [
-        'Interactive Brokers',  # tws running in i3
-        'IB Gateway',  # gw running in i3
-        # 'IB',  # gw running in i3 (newer version?)
-
-        # !TODO, remote vnc instance
-        # -[ ] something in title (or other Con-props) that indicates
-        #     this is explicitly for ibrk sw?
-        #    |_[ ] !can use modden spawn eventually!
-        'TigerVNC',
-        # 'vncviewer',  # the terminal..
-    ],
-) -> tuple[
-    i3ipc.Con,  # orig focussed win
-    list[tuple[str, i3ipc.Con]],  # matching wins by title
-]:
-    '''
-    Attempt to find a local-DE window titled with an entry in
-    `titles`.
-
-    If found deliver the current focussed window and all matching
-    `i3ipc.Con`s in a list.
-
-    '''
-    import i3ipc
-    ipc = i3ipc.Connection()
-
-    # TODO: might be worth offering some kinda api for grabbing
-    # the window id from the pid?
-    # https://stackoverflow.com/a/2250879
-    tree = ipc.get_tree()
-    focussed: i3ipc.Con = tree.find_focused()
-
-    matches: list[i3ipc.Con] = []
-    for name in titles:
-        results = tree.find_titled(name)
-        print(f'results for {name}: {results}')
-        if results:
-            con = results[0]
-            matches.append((
-                name,
-                con,
-            ))
-
-    return (
-        focussed,
-        matches,
-    )
+    async with asyncvnc.connect(
+        host,
+        port=port,
+
+        # TODO: doesn't work see:
+        # https://github.com/barneygale/asyncvnc/issues/7
+        # password='ibcansmbz',
+    ) as client:
+
+        # move to middle of screen
+        # 640x1800
+        client.mouse.move(
+            x=500,
+            y=500,
+        )
+        client.mouse.click()
+        client.keyboard.press('Ctrl', 'Alt', key)  # keys are stacked


 def i3ipc_xdotool_manual_click_hack() -> None:
@@ -330,48 +190,67 @@ def i3ipc_xdotool_manual_click_hack() -> None:
     Do the data reset hack but expecting a local X-window using `xdotool`.

     '''
-    focussed, matches = i3ipc_fin_wins_titled()
-    orig_win_id = focussed.window
+    import i3ipc
+    i3 = i3ipc.Connection()
+
+    # TODO: might be worth offering some kinda api for grabbing
+    # the window id from the pid?
+    # https://stackoverflow.com/a/2250879
+    t = i3.get_tree()
+
+    orig_win_id = t.find_focused().window
+
+    # for tws
+    win_names: list[str] = [
+        'Interactive Brokers', # tws running in i3
+        'IB Gateway', # gw running in i3
+        # 'IB', # gw running in i3 (newer version?)
+    ]

     try:
-        for name, con in matches:
-            print(f'Resetting data feed for {name}')
-            win_id = str(con.window)
-            w, h = con.rect.width, con.rect.height
+        for name in win_names:
+            results = t.find_titled(name)
+            print(f'results for {name}: {results}')
+            if results:
+                con = results[0]
+                print(f'Resetting data feed for {name}')
+                win_id = str(con.window)
+                w, h = con.rect.width, con.rect.height

             # TODO: seems to be a few libs for python but not sure
             # if they support all the sub commands we need, order of
             # most recent commit history:
             # https://github.com/rr-/pyxdotool
             # https://github.com/ShaneHutter/pyxdotool
             # https://github.com/cphyc/pyxdotool

             # TODO: only run the reconnect (2nd) kc on a detected
             # disconnect?
             for key_combo, timeout in [
                 # only required if we need a connection reset.
                 # ('ctrl+alt+r', 12),
                 # data feed reset.
                 ('ctrl+alt+f', 6)
             ]:
                 subprocess.call([
                     'xdotool',
                     'windowactivate', '--sync', win_id,

                     # move mouse to bottom left of window (where
                     # there should be nothing to click).
                     'mousemove_relative', '--sync', str(w-4), str(h-4),

                     # NOTE: we may need to stick a `--retry 3` in here..
                     'click', '--window', win_id,
                     '--repeat', '3', '1',

                     # hackzorzes
                     'key', key_combo,
                 ],
                     timeout=timeout,
                 )

         # re-activate and focus original window
         subprocess.call([
             'xdotool',
             'windowactivate', '--sync', str(orig_win_id),

@@ -379,99 +258,3 @@ def i3ipc_xdotool_manual_click_hack() -> None:
         ])
     except subprocess.TimeoutExpired:
         log.exception('xdotool timed out?')
-
-
-def is_current_time_in_range(
-    start_dt: datetime,
-    end_dt: datetime,
-) -> bool:
-    '''
-    Check if current time is within the datetime range.
-
-    Use any/the-same timezone as provided by `start_dt.tzinfo` value
-    in the range.
-
-    '''
-    now: datetime = datetime.now(start_dt.tzinfo)
-    return start_dt <= now <= end_dt
-
-
-# TODO, put this into `._util` and call it from here!
-#
-# NOTE, this was generated by @guille from a gpt5 prompt
-# and was originally thot to be needed before learning about
-# `ib_insync.contract.ContractDetails._parseSessions()` and
-# it's downstream meths..
-#
-# This is still likely useful to keep for now to parse the
-# `.tradingHours: str` value manually if we ever decide
-# to move off `ib_async` and implement our own `trio`/`anyio`
-# based version Bp
-#
-# >attempt to parse the retarted ib "time stampy thing" they
-# >do for "venue hours" with this.. written by
-# >gpt5-"thinking",
-#
-
-
-def parse_trading_hours(
-    spec: str,
-    tz: TzInfo|None = None
-) -> dict[
-    date,
-    tuple[datetime, datetime]
-]|None:
-    '''
-    Parse venue hours like:
-    'YYYYMMDD:HHMM-YYYYMMDD:HHMM;YYYYMMDD:CLOSED;...'
-
-    Returns `dict[date] = (open_dt, close_dt)` or `None` if
-    closed.
-
-    '''
-    if (
-        not isinstance(spec, str)
-        or
-        not spec
-    ):
-        raise ValueError('spec must be a non-empty string')
-
-    out: dict[
-        date,
-        tuple[datetime, datetime]
-    ]|None = {}
-
-    for part in (p.strip() for p in spec.split(';') if p.strip()):
-        if part.endswith(':CLOSED'):
-            day_s, _ = part.split(':', 1)
-            d = datetime.strptime(day_s, '%Y%m%d').date()
-            out[d] = None
-            continue
-
-        try:
-            start_s, end_s = part.split('-', 1)
-            start_dt = datetime.strptime(start_s, '%Y%m%d:%H%M')
-            end_dt = datetime.strptime(end_s, '%Y%m%d:%H%M')
-        except ValueError as exc:
-            raise ValueError(f'invalid segment: {part}') from exc
-
-        if tz is not None:
-            start_dt = start_dt.replace(tzinfo=tz)
-            end_dt = end_dt.replace(tzinfo=tz)
-
-        out[start_dt.date()] = (start_dt, end_dt)
-
-    return out
-
-
-# ORIG desired usage,
-#
-# TODO, for non-drunk tomorrow,
-# - call above fn and check that `output[today] is not None`
-# trading_hrs: dict = parse_trading_hours(
-#     details.tradingHours
-# )
-# liq_hrs: dict = parse_trading_hours(
-#     details.liquidHours
-# )
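A quick usage sketch for the two venue-hours helpers being removed above; it assumes `parse_trading_hours` and `is_current_time_in_range` from this hunk are in scope, and the spec string is a hypothetical sample:

    from datetime import datetime

    spec = '20240102:0930-20240102:1600;20240103:CLOSED'  # sample only
    hours = parse_trading_hours(spec)

    # a CLOSED day maps to `None`, an open day to its (open, close) pair
    rng = hours.get(datetime.now().date())
    if rng and is_current_time_in_range(*rng):
        print('venue is currently open')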
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -18,382 +18,159 @@
 Trade transaction accounting and normalization.

 '''
-from __future__ import annotations
 from bisect import insort
-from dataclasses import asdict
 from decimal import Decimal
-from functools import partial
 from pprint import pformat
 from typing import (
     Any,
-    Callable,
-    TYPE_CHECKING,
 )

 from bidict import bidict
-from pendulum import (
-    DateTime,
-    parse,
-    from_timestamp,
-)
-from ib_insync import (
-    Contract,
-    Commodity,
-    Fill,
-    Execution,
-    CommissionReport,
-)
-
-from piker.types import Struct
-from piker.data import (
-    SymbologyCache,
-)
+import pendulum
 from piker.accounting import (
-    Asset,
     dec_digits,
     digits_to_dec,
     Transaction,
     MktPair,
-    iter_by_dt,
 )
 from ._flex_reports import parse_flex_dt
 from ._util import log

-if TYPE_CHECKING:
-    from .api import (
-        Client,
-        MethodProxy,
-    )
-
-
-tx_sort: Callable = partial(
-    iter_by_dt,
-    parsers={
-        'dateTime': parse_flex_dt,
-        'datetime': parse,
-
-        # XXX: for some some fucking 2022 and
-        # back options records.. f@#$ me..
-        'date': parse,
-    }
-)
-

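The removed `tx_sort` is just `functools.partial` pre-binding a parser table onto a generic datetime-sorter. A self-contained sketch of the same pattern, with a simplified stand-in for piker's `iter_by_dt` (its real signature may differ):

    from datetime import datetime
    from functools import partial

    def iter_by_dt(records: list[dict], parsers: dict) -> list[dict]:
        # sort by whichever timestamp field each record carries,
        # decoded with the matching parser from the table.
        def keyfn(rec: dict) -> datetime:
            for field, parse in parsers.items():
                if field in rec:
                    return parse(rec[field])
            raise KeyError('no known datetime field')
        return sorted(records, key=keyfn)

    tx_sort = partial(
        iter_by_dt,
        parsers={'datetime': datetime.fromisoformat},
    )
    out = tx_sort([
        {'datetime': '2022-06-10T10:00:00'},
        {'datetime': '2022-06-09T09:00:00'},
    ])
    assert out[0]['datetime'] == '2022-06-09T09:00:00'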
-def norm_trade(
-    tid: str,
-    record: dict[str, Any],
-
-    # this is the dict that was returned from
-    # `Client.get_mkt_pairs()` and when running offline ledger
-    # processing from `.accounting`, this will be the table loaded
-    # into `SymbologyCache.pairs`.
-    pairs: dict[str, Struct],
-    symcache: SymbologyCache | None = None,
-
-) -> Transaction | None:
-
-    conid: int = str(record.get('conId') or record['conid'])
-    bs_mktid: str = str(conid)
-
-    # NOTE: sometimes weird records (like BTTX?)
-    # have no field for this?
-    comms: float = -1 * (
-        record.get('commission')
-        or record.get('ibCommission')
-        or 0
-    )
-    if not comms:
-        log.warning(
-            'No commissions found for record?\n'
-            f'{pformat(record)}\n'
-        )
-
-    price: float = (
-        record.get('price')
-        or record.get('tradePrice')
-    )
-    if price is None:
-        log.warning(
-            'No `price` field found in record?\n'
-            'Skipping normalization..\n'
-            f'{pformat(record)}\n'
-        )
-        return None
-
-    # the api doesn't do the -/+ on the quantity for you but flex
-    # records do.. are you fucking serious ib...!?
-    size: float|int = (
-        record.get('quantity')
-        or record['shares']
-    ) * {
-        'BOT': 1,
-        'SLD': -1,
-    }[record['side']]
-
-    symbol: str = record['symbol']
-    exch: str = (
-        record.get('listingExchange')
-        or record.get('primaryExchange')
-        or record['exchange']
-    )
-
-    # NOTE: remove null values since `tomlkit` can't serialize
-    # them to file.
-    if dnc := record.pop('deltaNeutralContract', None):
-        record['deltaNeutralContract'] = dnc
-
-    # likely an opts contract record from a flex report..
-    # TODO: no idea how to parse ^ the strike part from flex..
-    # (00010000 any, or 00007500 tsla, ..)
-    # we probably must do the contract lookup for this?
-    if (
-        ' ' in symbol
-        or '--' in exch
-    ):
-        underlying, _, tail = symbol.partition(' ')
-        exch: str = 'opt'
-        expiry: str = tail[:6]
-        # otype = tail[6]
-        # strike = tail[7:]
-
-        log.warning(
-            f'Skipping option contract -> NO SUPPORT YET!\n'
-            f'{symbol}\n'
-        )
-        return None
-
-    # timestamping is way different in API records
-    dtstr: str = record.get('datetime')
-    date: str = record.get('date')
-    flex_dtstr: str = record.get('dateTime')
-
-    if dtstr or date:
-        dt: DateTime = parse(dtstr or date)
-
-    elif flex_dtstr:
-        # probably a flex record with a wonky non-std timestamp..
-        dt: DateTime = parse_flex_dt(record['dateTime'])
-
-    # special handling of symbol extraction from
-    # flex records using some ad-hoc schema parsing.
-    asset_type: str = (
-        record.get('assetCategory')
-        or record.get('secType')
-        or 'STK'
-    )
-
-    if (expiry := (
-            record.get('lastTradeDateOrContractMonth')
-            or record.get('expiry')
-        )
-    ):
-        expiry: str = str(expiry).strip(' ')
-        # NOTE: we directly use the (simple and usually short)
-        # date-string expiry token when packing the `MktPair`
-        # since we want the fqme to contain *that* token.
-        # It might make sense later to instead parse and then
-        # render different output str format(s) for this same
-        # purpose depending on asset-type-market down the road.
-        # Eg. for derivs we use the short token only for fqme
-        # but use the isoformat('T') for transactions and
-        # account file position entries?
-        # dt_str: str = pendulum.parse(expiry).isoformat('T')

-    # XXX: pretty much all legacy market assets have a fiat
-    # currency (denomination) determined by their venue.
-    currency: str = record['currency']
-    src = Asset(
-        name=currency.lower(),
-        atype='fiat',
-        tx_tick=Decimal('0.01'),
-    )
-
-    match asset_type:
-        case 'FUT':
-            # XXX (flex) ledger entries don't necessarily have any
-            # simple 3-char key.. sometimes the .symbol is some
-            # weird internal key that we probably don't want in the
-            # .fqme => we should probably just wrap `Contract` to
-            # this like we do other crypto$ backends XD
-
-            # NOTE: at least older FLEX records should have
-            # this field.. no idea about API entries..
-            local_symbol: str | None = record.get('localSymbol')
-            underlying_key: str = record.get('underlyingSymbol')
-            descr: str | None = record.get('description')
-
-            if (
-                not (
-                    local_symbol
-                    and symbol in local_symbol
-                )
-                and (
-                    descr
-                    and symbol not in descr
-                )
-            ):
-                con_key, exp_str = descr.split(' ')
-                symbol: str = underlying_key or con_key
-
-            dst = Asset(
-                name=symbol.lower(),
-                atype='future',
-                tx_tick=Decimal('1'),
-            )
-
-        case 'STK':
-            dst = Asset(
-                name=symbol.lower(),
-                atype='stock',
-                tx_tick=Decimal('1'),
-            )
-
-        case 'CASH':
-            if currency not in symbol:
-                # likely a dict-casted `Forex` contract which
-                # has .symbol as the dst and .currency as the
-                # src.
-                name: str = symbol.lower()
-            else:
-                # likely a flex-report record which puts
-                # EUR.USD as the symbol field and just USD in
-                # the currency field.
-                name: str = symbol.lower().replace(f'.{src.name}', '')
-
-            dst = Asset(
-                name=name,
-                atype='fiat',
-                tx_tick=Decimal('0.01'),
-            )
-
-        case 'OPT':
-            dst = Asset(
-                name=symbol.lower(),
-                atype='option',
-                tx_tick=Decimal('1'),
-
-                # TODO: we should probably always cast to the
-                # `Contract` instance then dict-serialize that for
-                # the `.info` field!
-                # info=asdict(Option()),
-            )
-
-        case 'CMDTY':
-            from .symbols import _adhoc_symbol_map
-            con_kwargs, _ = _adhoc_symbol_map[symbol.upper()]
-            dst = Asset(
-                name=symbol.lower(),
-                atype='commodity',
-                tx_tick=Decimal('1'),
-                info=asdict(Commodity(**con_kwargs)),
-            )
-
-    # try to build out piker fqme from record.
-    # src: str = record['currency']
-    price_tick: Decimal = digits_to_dec(dec_digits(price))
-
-    # NOTE: can't serlialize `tomlkit.String` so cast to native
-    atype: str = str(dst.atype)
-
-    # if not (mkt := symcache.mktmaps.get(bs_mktid)):
-    mkt = MktPair(
-        bs_mktid=bs_mktid,
-        dst=dst,
-
-        price_tick=price_tick,
-        # NOTE: for "legacy" assets, volume is normally discreet, not
-        # a float, but we keep a digit in case the suitz decide
-        # to get crazy and change it; we'll be kinda ready
-        # schema-wise..
-        size_tick=Decimal('1'),
-
-        src=src, # XXX: normally always a fiat
-
-        _atype=atype,
-
-        venue=exch,
-        expiry=expiry,
-        broker='ib',
-
-        _fqme_without_src=(atype != 'fiat'),
-    )
-
-    fqme: str = mkt.fqme
-
-    # XXX: if passed in, we fill out the symcache ad-hoc in order
-    # to make downstream accounting work..
-    if symcache is not None:
-        orig_mkt: MktPair | None = symcache.mktmaps.get(bs_mktid)
-        if (
-            orig_mkt
-            and orig_mkt.fqme != mkt.fqme
-        ):
-            log.warning(
-            # print(
-                f'Contracts with common `conId`: {bs_mktid} mismatch..\n'
-                f'{orig_mkt.fqme} -> {mkt.fqme}\n'
-                # 'with DIFF:\n'
-                # f'{mkt - orig_mkt}'
-            )
-
-        symcache.mktmaps[bs_mktid] = mkt
-        symcache.mktmaps[fqme] = mkt
-        symcache.assets[src.name] = src
-        symcache.assets[dst.name] = dst
-
-    # NOTE: for flex records the normal fields for defining an fqme
-    # sometimes won't be available so we rely on two approaches for
-    # the "reverse lookup" of piker style fqme keys:
-    # - when dealing with API trade records received from
-    #   `IB.trades()` we do a contract lookup at he time of processing
-    # - when dealing with flex records, it is assumed the record
-    #   is at least a day old and thus the TWS position reporting system
-    #   should already have entries if the pps are still open, in
-    #   which case, we can pull the fqme from that table (see
-    #   `trades_dialogue()` above).
-    return Transaction(
-        fqme=fqme,
-        tid=tid,
-        size=size,
-        price=price,
-        cost=comms,
-        dt=dt,
-        expiry=expiry,
-        bs_mktid=str(conid),
-    )
-

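The field-fallback and side-sign normalization at the top of `norm_trade()` above, reproduced as a runnable standalone snippet (the record values are hypothetical):

    record: dict = {  # hypothetical API-style fill
        'conId': 12345,
        'ibCommission': 1.02,
        'tradePrice': 101.25,
        'shares': 10,
        'side': 'SLD',
    }
    # commissions and price live under different keys per record source
    comms = -1 * (record.get('commission') or record.get('ibCommission') or 0)
    price = record.get('price') or record.get('tradePrice')
    # only flex records pre-sign the quantity, so apply the side sign
    size = (record.get('quantity') or record['shares']) * {
        'BOT': 1,   # buys count positive..
        'SLD': -1,  # ..sells negative
    }[record['side']]
    assert (comms, price, size) == (-1.02, 101.25, -10)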
 def norm_trade_records(
     ledger: dict[str, Any],
-    symcache: SymbologyCache | None = None,

 ) -> dict[str, Transaction]:
     '''
-    Normalize (xml) flex-report or (recent) API trade records into
-    our ledger format with parsing for `MktPair` and `Asset`
-    extraction to fill in the `Transaction.sys: MktPair` field.
+    Normalize a flex report or API retrieved executions
+    ledger into our standard record format.

     '''
     records: list[Transaction] = []

     for tid, record in ledger.items():
-        txn = norm_trade(
-            tid,
-            record,
-
-            # NOTE: currently no symcache support
-            pairs={},
-            symcache=symcache,
-        )
-
-        if txn is None:
+        conid = record.get('conId') or record['conid']
+        comms = record.get('commission')
+        if comms is None:
+            comms = -1*record['ibCommission']
+
+        price = record.get('price') or record['tradePrice']
+
+        # the api doesn't do the -/+ on the quantity for you but flex
+        # records do.. are you fucking serious ib...!?
+        size = record.get('quantity') or record['shares'] * {
+            'BOT': 1,
+            'SLD': -1,
+        }[record['side']]
+
+        exch = record['exchange']
+        lexch = record.get('listingExchange')
+
+        # NOTE: remove null values since `tomlkit` can't serialize
+        # them to file.
+        dnc = record.pop('deltaNeutralContract', False)
+        if dnc is not None:
+            record['deltaNeutralContract'] = dnc
+
+        suffix = lexch or exch
+        symbol = record['symbol']
+
+        # likely an opts contract record from a flex report..
+        # TODO: no idea how to parse ^ the strike part from flex..
+        # (00010000 any, or 00007500 tsla, ..)
+        # we probably must do the contract lookup for this?
+        if ' ' in symbol or '--' in exch:
+            underlying, _, tail = symbol.partition(' ')
+            suffix = exch = 'opt'
+            expiry = tail[:6]
+            # otype = tail[6]
+            # strike = tail[7:]
+
+            print(f'skipping opts contract {symbol}')
             continue

-        # inject txns sorted by datetime
+        # timestamping is way different in API records
+        dtstr = record.get('datetime')
+        date = record.get('date')
+        flex_dtstr = record.get('dateTime')
+
+        if dtstr or date:
+            dt = pendulum.parse(dtstr or date)
+
+        elif flex_dtstr:
+            # probably a flex record with a wonky non-std timestamp..
+            dt = parse_flex_dt(record['dateTime'])
+
+        # special handling of symbol extraction from
+        # flex records using some ad-hoc schema parsing.
+        asset_type: str = record.get(
+            'assetCategory'
+        ) or record.get('secType', 'STK')
+
+        # TODO: XXX: WOA this is kinda hacky.. probably
+        # should figure out the correct future pair key more
+        # explicitly and consistently?
+        if asset_type == 'FUT':
+            # (flex) ledger entries don't have any simple 3-char key?
+            symbol = record['symbol'][:3]
+            asset_type: str = 'future'
+
+        elif asset_type == 'STK':
+            asset_type: str = 'stock'
+
+        # try to build out piker fqme from record.
+        expiry = (
+            record.get('lastTradeDateOrContractMonth')
+            or record.get('expiry')
+        )
+
+        if expiry:
+            expiry = str(expiry).strip(' ')
+            suffix = f'{exch}.{expiry}'
+            expiry = pendulum.parse(expiry)
+
+        # src: str = record['currency']
+        price_tick: Decimal = digits_to_dec(dec_digits(price))
+
+        pair = MktPair.from_fqme(
+            fqme=f'{symbol}.{suffix}.ib',
+            bs_mktid=str(conid),
+            _atype=str(asset_type), # XXX: can't serlialize `tomlkit.String`
+
+            price_tick=price_tick,
+            # NOTE: for "legacy" assets, volume is normally discreet, not
+            # a float, but we keep a digit in case the suitz decide
+            # to get crazy and change it; we'll be kinda ready
+            # schema-wise..
+            size_tick='1',
+        )
+
+        fqme = pair.fqme
+
+        # NOTE: for flex records the normal fields for defining an fqme
+        # sometimes won't be available so we rely on two approaches for
+        # the "reverse lookup" of piker style fqme keys:
+        # - when dealing with API trade records received from
+        #   `IB.trades()` we do a contract lookup at he time of processing
+        # - when dealing with flex records, it is assumed the record
+        #   is at least a day old and thus the TWS position reporting system
+        #   should already have entries if the pps are still open, in
+        #   which case, we can pull the fqme from that table (see
+        #   `trades_dialogue()` above).
         insort(
             records,
-            txn,
+            Transaction(
+                fqme=fqme,
+                sym=pair,
+                tid=tid,
+                size=size,
+                price=price,
+                cost=comms,
+                dt=dt,
+                expiry=expiry,
+                bs_mktid=str(conid),
+            ),
             key=lambda t: t.dt
         )

@@ -402,49 +179,50 @@ def norm_trade_records(
 def api_trades_to_ledger_entries(
     accounts: bidict[str, str],
-    fills: list[Fill],
-
-) -> dict[str, dict]:
+    # TODO: maybe we should just be passing through the
+    # ``ib_insync.order.Trade`` instance directly here
+    # instead of pre-casting to dicts?
+    trade_entries: list[dict],
+
+) -> dict:
     '''
-    Convert API execution objects entry objects into
-    flattened-``dict`` form, pretty much straight up without
-    modification except add a `pydatetime` field from the parsed
-    timestamp so that on write
+    Convert API execution objects entry objects into ``dict`` form,
+    pretty much straight up without modification except add
+    a `pydatetime` field from the parsed timestamp.

     '''
-    trades_by_account: dict[str, dict] = {}
-    for fill in fills:
-        # NOTE: for the schema, see the defn for `Fill` which is
-        # a `NamedTuple` subtype
-        fdict: dict = fill._asdict()
-
-        # flatten all (sub-)objects and convert to dicts.
-        # with values packed into one top level entry.
-        val: CommissionReport | Execution | Contract
-        txn_dict: dict[str, Any] = {}
-        for attr_name, val in fdict.items():
-            match attr_name:
-                # value is a `@dataclass` subtype
+    trades_by_account = {}
+    for t in trade_entries:
+        # NOTE: example of schema we pull from the API client.
+        # {
+        #     'commissionReport': CommissionReport(...
+        #     'contract': {...
+        #     'execution': Execution(...
+        #     'time': 1654801166.0
+        # }
+
+        # flatten all sub-dicts and values into one top level entry.
+        entry = {}
+        for section, val in t.items():
+            match section:
                 case 'contract' | 'execution' | 'commissionReport':
-                    txn_dict.update(asdict(val))
+                    # sub-dict cases
+                    entry.update(val)

                 case 'time':
                     # ib has wack ns timestamps, or is that us?
                     continue

-                # TODO: we can remove this case right since there's
-                # only 4 fields on a `Fill`?
                 case _:
-                    txn_dict[attr_name] = val
+                    entry[section] = val

-        tid = str(txn_dict['execId'])
-        dt = from_timestamp(txn_dict['time'])
-        txn_dict['datetime'] = str(dt)
-        acctid = accounts[txn_dict['acctNumber']]
-
-        # NOTE: only inserted (then later popped) for sorting below!
-        txn_dict['pydatetime'] = dt
+        tid = str(entry['execId'])
+        dt = pendulum.from_timestamp(entry['time'])
+        # TODO: why isn't this showing seconds in the str?
+        entry['pydatetime'] = dt
+        entry['datetime'] = str(dt)
+        acctid = accounts[entry['acctNumber']]

         if not tid:
             # this is likely some kind of internal adjustment

@@ -455,18 +233,13 @@ def api_trades_to_ledger_entries(
             # the user from the accounts window in TWS where they can
             # manually set the avg price and size:
             # https://api.ibkr.com/lib/cstools/faq/web1/index.html#/tag/DTWS_ADJ_AVG_COST
-            log.warning(
-                'Skipping ID-less ledger txn_dict:\n'
-                f'{pformat(txn_dict)}'
-            )
+            log.warning(f'Skipping ID-less ledger entry:\n{pformat(entry)}')
             continue

         trades_by_account.setdefault(
             acctid, {}
-        )[tid] = txn_dict
+        )[tid] = entry

-    # TODO: maybe we should just bisect.insort() into a list of
-    # tuples and then return a dict of that?
     # sort entries in output by python based datetime
     for acctid in trades_by_account:
         trades_by_account[acctid] = dict(sorted(

@@ -475,55 +248,3 @@ def api_trades_to_ledger_entries(
         ))

     return trades_by_account
-
-
-async def update_ledger_from_api_trades(
-    fills: list[Fill],
-    client: Client | MethodProxy,
-    accounts_def_inv: bidict[str, str],
-
-    # NOTE: provided for ad-hoc insertions "as transactions are
-    # processed" -> see `norm_trade()` signature requirements.
-    symcache: SymbologyCache | None = None,
-
-) -> tuple[
-    dict[str, Transaction],
-    dict[str, dict],
-]:
-    # XXX; ERRGGG..
-    # pack in the "primary/listing exchange" value from a
-    # contract lookup since it seems this isn't available by
-    # default from the `.fills()` method endpoint...
-    fill: Fill
-    for fill in fills:
-        con: Contract = fill.contract
-        conid: str = con.conId
-        pexch: str | None = con.primaryExchange
-
-        if not pexch:
-            cons = await client.get_con(conid=conid)
-            if cons:
-                con = cons[0]
-                pexch = con.primaryExchange or con.exchange
-            else:
-                # for futes it seems like the primary is always empty?
-                pexch: str = con.exchange
-
-        # pack in the ``Contract.secType``
-        # entry['asset_type'] = condict['secType']
-
-    entries: dict[str, dict] = api_trades_to_ledger_entries(
-        accounts_def_inv,
-        fills,
-    )
-    # normalize recent session's trades to the `Transaction` type
-    trans_by_acct: dict[str, dict[str, Transaction]] = {}
-
-    for acctid, trades_by_id in entries.items():
-        # normalize to transaction form
-        trans_by_acct[acctid] = norm_trade_records(
-            trades_by_id,
-            symcache=symcache,
-        )
-
-    return trans_by_acct, entries

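The main-side flattening in `api_trades_to_ledger_entries()` relies on `Fill` being a `NamedTuple` of `@dataclass` sub-objects; a self-contained sketch of that collapse, using hypothetical stand-in types rather than `ib_insync`'s real ones:

    from dataclasses import dataclass, asdict
    from typing import NamedTuple

    @dataclass
    class Execution:
        execId: str
        acctNumber: str

    @dataclass
    class CommissionReport:
        commission: float

    class Fill(NamedTuple):
        execution: Execution
        commissionReport: CommissionReport
        time: float

    fill = Fill(Execution('0001.abc', 'DU123'), CommissionReport(1.02), 1654801166.0)

    flat: dict = {}
    for attr_name, val in fill._asdict().items():
        match attr_name:
            case 'execution' | 'commissionReport':
                flat.update(asdict(val))  # splat dataclass fields inline
            case _:
                flat[attr_name] = val

    assert flat == {
        'execId': '0001.abc',
        'acctNumber': 'DU123',
        'commission': 1.02,
        'time': 1654801166.0,
    }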
@@ -1,615 +0,0 @@
-# piker: trading gear for hackers
-# Copyright (C) Tyler Goodlet (in stewardship for pikers)
-
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU Affero General Public License for more details.
-
-# You should have received a copy of the GNU Affero General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-'''
-Symbology search and normalization.
-
-'''
-from __future__ import annotations
-from contextlib import (
-    nullcontext,
-)
-from decimal import Decimal
-import time
-from typing import (
-    Awaitable,
-    TYPE_CHECKING,
-)
-
-from rapidfuzz import process as fuzzy
-import ib_insync as ibis
-import tractor
-import trio
-
-from piker.accounting import (
-    Asset,
-    MktPair,
-    unpack_fqme,
-)
-from piker._cacheables import (
-    async_lifo_cache,
-)
-from ._util import (
-    log,
-)
-
-if TYPE_CHECKING:
-    from .api import (
-        MethodProxy,
-        Client,
-    )
-
-_futes_venues = (
-    'GLOBEX',
-    'NYMEX',
-    'CME',
-    'CMECRYPTO',
-    'COMEX',
-    # 'CMDTY', # special name case..
-    'CBOT', # (treasury) yield futures
-)
-
-_adhoc_cmdty_set = {
-    # metals
-    # https://misc.interactivebrokers.com/cstools/contract_info/v3.10/index.php?action=Conid%20Info&wlId=IB&conid=69067924
-    'xauusd.cmdty', # london gold spot ^
-    'xagusd.cmdty', # silver spot
-}
-
-# NOTE: if you aren't seeing one of these symbol's futues contracts
-# show up, it's likely the `.<venue>` part is wrong!
-_adhoc_futes_set = {
-
-    # equities
-    'nq.cme',
-    'mnq.cme', # micro
-
-    'es.cme',
-    'mes.cme', # micro
-
-    # cypto$
-    'brr.cme',
-    'mbt.cme', # micro
-    'ethusdrr.cme',
-
-    # agriculture
-    'he.comex', # lean hogs
-    'le.comex', # live cattle (geezers)
-    'gf.comex', # feeder cattle (younguns)
-
-    # raw
-    'lb.comex', # random len lumber
-
-    'gc.comex',
-    'mgc.comex', # micro
-
-    # oil & gas
-    'cl.nymex',
-
-    'ni.comex', # silver futes
-    'qi.comex', # mini-silver futes
-
-    # treasury yields
-    # etfs by duration:
-    # SHY -> IEI -> IEF -> TLT
-    'zt.cbot', # 2y
-    'z3n.cbot', # 3y
-    'zf.cbot', # 5y
-    'zn.cbot', # 10y
-    'zb.cbot', # 30y
-
-    # (micros of above)
-    '2yy.cbot',
-    '5yy.cbot',
-    '10y.cbot',
-    '30y.cbot',
-}
-
-
-# taken from list here:
-# https://www.interactivebrokers.com/en/trading/products-spot-currencies.php
-_adhoc_fiat_set = set((
-    'USD, AED, AUD, CAD,'
-    'CHF, CNH, CZK, DKK,'
-    'EUR, GBP, HKD, HUF,'
-    'ILS, JPY, MXN, NOK,'
-    'NZD, PLN, RUB, SAR,'
-    'SEK, SGD, TRY, ZAR'
-    ).split(' ,')
-)
-
-# manually discovered tick discrepancies,
-# onl god knows how or why they'd cuck these up..
-_adhoc_mkt_infos: dict[int | str, dict] = {
-    'vtgn.nasdaq': {'price_tick': Decimal('0.01')},
-}
-
-
-# map of symbols to contract ids
-_adhoc_symbol_map = {
-    # https://misc.interactivebrokers.com/cstools/contract_info/v3.10/index.php?action=Conid%20Info&wlId=IB&conid=69067924
-
-    # NOTE: some cmdtys/metals don't have trade data like gold/usd:
-    # https://groups.io/g/twsapi/message/44174
-    'XAUUSD': ({'conId': 69067924}, {'whatToShow': 'MIDPOINT'}),
-}
-for qsn in _adhoc_futes_set:
-    sym, venue = qsn.split('.')
-    assert venue.upper() in _futes_venues, f'{venue}'
-    _adhoc_symbol_map[sym.upper()] = (
-        {'exchange': venue},
-        {},
-    )
-
-
-# exchanges we don't support at the moment due to not knowing
-# how to do symbol-contract lookup correctly likely due
-# to not having the data feeds subscribed.
-_exch_skip_list = {
-
-    'ASX', # aussie stocks
-    'MEXI', # mexican stocks
-
-    # no idea
-    'NSE',
-    'VALUE',
-    'FUNDSERV',
-    'SWB2',
-    'PSE',
-    'PHLX',
-}
-
-# optional search config the backend can register for
-# it's symbol search handling (in this case we avoid
-# accepting patterns before the kb has settled more then
-# a quarter second).
-_search_conf = {
-    'pause_period': 6 / 16,
-}
-
-
-@tractor.context
-async def open_symbol_search(ctx: tractor.Context) -> None:
-    '''
-    Symbology search brokerd-endpoint.
-
-    '''
-    from .api import open_client_proxies
-    from .feed import open_data_client
-
-    # TODO: load user defined symbol set locally for fast search?
-    await ctx.started({})
-
-    async with (
-        open_client_proxies() as (proxies, _),
-        open_data_client() as data_proxy,
-    ):
-        async with ctx.open_stream() as stream:
-
-            # select a non-history client for symbol search to lighten
-            # the load in the main data node.
-            proxy = data_proxy
-            for name, proxy in proxies.items():
-                if proxy is data_proxy:
-                    continue
-                break
-
-            ib_client = proxy._aio_ns.ib
-            log.info(
-                f'Using API client for symbol-search\n'
-                f'{ib_client}\n'
-            )
-
-            last = time.time()
-            async for pattern in stream:
-                log.info(f'received {pattern}')
-                now: float = time.time()
-
-                # this causes tractor hang...
-                # assert 0
-
-                assert pattern, 'IB can not accept blank search pattern'
-
-                # throttle search requests to no faster then 1Hz
-                diff = now - last
-                if diff < 1.0:
-                    log.debug('throttle sleeping')
-                    await trio.sleep(diff)
-                    try:
-                        pattern = stream.receive_nowait()
-                    except trio.WouldBlock:
-                        pass
-
-                if (
-                    not pattern
-                    or pattern.isspace()
-
-                    # XXX: not sure if this is a bad assumption but it
-                    # seems to make search snappier?
-                    or len(pattern) < 1
-                ):
-                    log.warning('empty pattern received, skipping..')
-
-                    # TODO: *BUG* if nothing is returned here the client
-                    # side will cache a null set result and not showing
-                    # anything to the use on re-searches when this query
-                    # timed out. We probably need a special "timeout" msg
-                    # or something...
-
-                    # XXX: this unblocks the far end search task which may
-                    # hold up a multi-search nursery block
-                    await stream.send({})
-
-                    continue
-
-                log.info(f'searching for {pattern}')
-
-                last = time.time()
-
-                # async batch search using api stocks endpoint and module
-                # defined adhoc symbol set.
-                stock_results = []
-
-                async def extend_results(
-                    target: Awaitable[list]
-                ) -> None:
-                    try:
-                        results = await target
-                    except tractor.trionics.Lagged:
-                        print("IB SYM-SEARCH OVERRUN?!?")
-                        return
-
-                    stock_results.extend(results)
-
-                for _ in range(10):
-                    with trio.move_on_after(3) as cs:
-                        async with trio.open_nursery() as sn:
-                            sn.start_soon(
-                                extend_results,
-                                proxy.search_symbols(
-                                    pattern=pattern,
-                                    upto=5,
-                                ),
-                            )
-
-                            # trigger async request
-                            await trio.sleep(0)
-
-                    if cs.cancelled_caught:
-                        log.warning(
-                            f'Search timeout? {proxy._aio_ns.ib.client}'
-                        )
-                        continue
-                    elif stock_results:
-                        break
-                    # else:
-                    #     await tractor.pause()
-
-                # # match against our ad-hoc set immediately
-                # adhoc_matches = fuzzy.extract(
-                #     pattern,
-                #     list(_adhoc_futes_set),
-                #     score_cutoff=90,
-                # )
-                # log.info(f'fuzzy matched adhocs: {adhoc_matches}')
-                # adhoc_match_results = {}
-                # if adhoc_matches:
-                #     # TODO: do we need to pull contract details?
-                #     adhoc_match_results = {i[0]: {} for i in
-                #     adhoc_matches}
-
-                log.debug(f'fuzzy matching stocks {stock_results}')
-                stock_matches = fuzzy.extract(
-                    pattern,
-                    stock_results,
-                    score_cutoff=50,
-                )
-
-                # matches = adhoc_match_results | {
-                matches = {
-                    item[0]: {} for item in stock_matches
-                }
-                # TODO: we used to deliver contract details
-                # {item[2]: item[0] for item in stock_matches}
-
-                log.debug(f"sending matches: {matches.keys()}")
-                await stream.send(matches)
-
-
-# re-mapping to piker asset type names
-# https://github.com/erdewit/ib_insync/blob/master/ib_insync/contract.py#L113
-_asset_type_map = {
-    'STK': 'stock',
-    'OPT': 'option',
-    'FUT': 'future',
-    'CONTFUT': 'continuous_future',
-    'CASH': 'fiat',
-    'IND': 'index',
-    'CFD': 'cfd',
-    'BOND': 'bond',
-    'CMDTY': 'commodity',
-    'FOP': 'futures_option',
-    'FUND': 'mutual_fund',
-    'WAR': 'warrant',
-    'IOPT': 'warran',
-    'BAG': 'bag',
-    'CRYPTO': 'crypto', # bc it's diff then fiat?
-    # 'NEWS': 'news',
-}
-
-
-def parse_patt2fqme(
-    # client: Client,
-    pattern: str,
-
-) -> tuple[str, str, str, str]:
-
-    # TODO: we can't use this currently because
-    # ``wrapper.starTicker()`` currently cashes ticker instances
-    # which means getting a singel quote will potentially look up
-    # a quote for a ticker that it already streaming and thus run
-    # into state clobbering (eg. list: Ticker.ticks). It probably
-    # makes sense to try this once we get the pub-sub working on
-    # individual symbols...
-
-    # XXX UPDATE: we can probably do the tick/trades scraping
-    # inside our eventkit handler instead to bypass this entirely?
-
-    currency = ''
-
-    # fqme parsing stage
-    # ------------------
-    if '.ib' in pattern:
-        _, symbol, venue, expiry = unpack_fqme(pattern)
-
-    else:
-        symbol = pattern
-        expiry = ''
-
-    # # another hack for forex pairs lul.
-    # if (
-    #     '.idealpro' in symbol
-    #     # or '/' in symbol
-    # ):
-    #     exch: str = 'IDEALPRO'
-    #     symbol = symbol.removesuffix('.idealpro')
-    #     if '/' in symbol:
-    #         symbol, currency = symbol.split('/')
-
-    # else:
-    # TODO: yes, a cache..
-    # try:
-    #     # give the cache a go
-    #     return client._contracts[symbol]
-    # except KeyError:
-    #     log.debug(f'Looking up contract for {symbol}')
-    expiry: str = ''
-    if symbol.count('.') > 1:
-        symbol, _, expiry = symbol.rpartition('.')
-
-    # use heuristics to figure out contract "type"
-    symbol, venue = symbol.upper().rsplit('.', maxsplit=1)
-
-    return symbol, currency, venue, expiry
-
-
-def con2fqme(
-    con: ibis.Contract,
-    _cache: dict[int, (str, bool)] = {}
-
-) -> tuple[str, bool]:
-    '''
-    Convert contracts to fqme-style strings to be used both in
-    symbol-search matching and as feed tokens passed to the front
-    end data deed layer.
-
-    Previously seen contracts are cached by id.
-
-    '''
-    # should be real volume for this contract by default
-    calc_price: bool = False
-    if con.conId:
-        try:
-            # TODO: LOL so apparently IB just changes the contract
-            # ID (int) on a whim.. so we probably need to use an
-            # FQME style key after all...
-            return _cache[con.conId]
-        except KeyError:
-            pass
-
-    suffix: str = con.primaryExchange or con.exchange
-    symbol: str = con.symbol
-    expiry: str = con.lastTradeDateOrContractMonth or ''
-
-    match con:
-        case ibis.Option():
-            # TODO: option symbol parsing and sane display:
-            symbol = con.localSymbol.replace(' ', '')
-
-        case (
-            ibis.Commodity()
-            # search API endpoint returns std con box..
-            | ibis.Contract(secType='CMDTY')
-        ):
-            # commodities and forex don't have an exchange name and
-            # no real volume so we have to calculate the price
-            suffix = con.secType
-
-            # no real volume on this tract
-            calc_price = True
-
-        case ibis.Forex() | ibis.Contract(secType='CASH'):
-            dst, src = con.localSymbol.split('.')
-            symbol = ''.join([dst, src])
-            suffix = con.exchange or 'idealpro'
-
-            # no real volume on forex feeds..
-            calc_price = True
-
-    if not suffix:
-        entry = _adhoc_symbol_map.get(
-            con.symbol or con.localSymbol
-        )
-        if entry:
-            meta, kwargs = entry
-            cid = meta.get('conId')
-            if cid:
-                assert con.conId == meta['conId']
-            suffix = meta['exchange']
-
-    # append a `.<suffix>` to the returned symbol
-    # key for derivatives that normally is the expiry
-    # date key.
-    if expiry:
-        suffix += f'.{expiry}'
-
-    fqme_key = symbol.lower()
-    if suffix:
-        fqme_key = '.'.join((fqme_key, suffix)).lower()
-
-    _cache[con.conId] = fqme_key, calc_price
-    return fqme_key, calc_price
-
-
-@async_lifo_cache()
-async def get_mkt_info(
-    fqme: str,
-
-    proxy: MethodProxy | None = None,
-
-) -> tuple[MktPair, ibis.ContractDetails]:
-
-    if '.ib' not in fqme:
-        fqme += '.ib'
-    broker, pair, venue, expiry = unpack_fqme(fqme)
-
-    proxy: MethodProxy
-    if proxy is not None:
-        client_ctx = nullcontext(proxy)
-    else:
-        from .feed import (
-            open_data_client,
-        )
-        client_ctx = open_data_client
-
-    async with client_ctx as proxy:
-        try:
-            (
-                con, # Contract
-                details, # ContractDetails
-            ) = await proxy.get_sym_details(fqme=fqme)
-        except ConnectionError:
-            log.exception(f'Proxy is ded {proxy._aio_ns}')
-            raise
-
-    # TODO: more consistent field translation
-    atype = _asset_type_map[con.secType]
-
-    if atype == 'commodity':
-        venue: str = 'cmdty'
-    else:
-        venue = con.primaryExchange or con.exchange
-
-    price_tick: Decimal = Decimal(str(details.minTick))
-    ib_min_tick_gt_2: Decimal = Decimal('0.01')
-    if (
-        price_tick < ib_min_tick_gt_2
-    ):
-        # TODO: we need to add some kinda dynamic rounding sys
-        # to our MktPair i guess?
-        # not sure where the logic should sit, but likely inside
-        # the `.clearing._ems` i suppose...
-        log.warning(
-            'IB seems to disallow a min price tick < 0.01 '
-            'when the price is > 2.0..?\n'
-            f'Decreasing min tick precision for {fqme} to 0.01'
-        )
-        # price_tick = ib_min_tick
-        # await tractor.pause()
-
-    if atype == 'stock':
-        # XXX: GRRRR they don't support fractional share sizes for
-        # stocks from the API?!
-        # if con.secType == 'STK':
-        size_tick = Decimal('1')
-    else:
-        size_tick: Decimal = Decimal(
-            str(details.minSize).rstrip('0')
-        )
-        # |-> TODO: there is also the Contract.sizeIncrement, bt wtf is it?
-
-    # NOTE: this is duplicate from the .broker.norm_trade_records()
-    # routine, we should factor all this parsing somewhere..
-    expiry_str = str(con.lastTradeDateOrContractMonth)
-    # if expiry:
-    #     expiry_str: str = str(pendulum.parse(
-    #         str(expiry).strip(' ')
-    #     ))
-
-    # TODO: currently we can't pass the fiat src asset because
-    # then we'll get a `MNQUSD` request for history data..
-    # we need to figure out how we're going to handle this (later?)
-    # but likely we want all backends to eventually handle
-    # ``dst/src.venue.`` style !?
-    src = Asset(
-        name=str(con.currency).lower(),
-        atype='fiat',
-        tx_tick=Decimal('0.01'), # right?
-    )
-    dst = Asset(
-        name=con.symbol.lower(),
-        atype=atype,
-        tx_tick=size_tick,
-    )
-
-    mkt = MktPair(
-        src=src,
-        dst=dst,
-
-        price_tick=price_tick,
-        size_tick=size_tick,
-
-        bs_mktid=str(con.conId),
-        venue=str(venue),
-        expiry=expiry_str,
-        broker='ib',
-
-        # TODO: options contract info as str?
-        # contract_info=<optionsdetails>
-        _fqme_without_src=(atype != 'fiat'),
-    )
-
-    # just.. wow.
-    if entry := _adhoc_mkt_infos.get(mkt.bs_fqme):
-        log.warning(f'Frickin {mkt.fqme} has an adhoc {entry}..')
-        new = mkt.to_dict()
-        new['price_tick'] = entry['price_tick']
-        new['src'] = src
-        new['dst'] = dst
-        mkt = MktPair(**new)
-
-    # if possible register the bs_mktid to the just-built
-    # mkt so that it can be retreived by order mode tasks later.
-    # TODO NOTE: this is going to be problematic if/when we split
-    # out the datatd vs. brokerd actors since the mktmap lookup
-    # table will now be inaccessible..
-    if proxy is not None:
-        client: Client = proxy._aio_ns
-        client._contracts[mkt.bs_fqme] = con
-        client._cons2mkts[con] = mkt
-
-    return mkt, details

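One detail worth noting in the module deleted above: `con2fqme()` memoizes through a mutable default argument, so `_cache` is created once at function definition and shared across every call. A minimal standalone demo of that pattern:

    def expensive_key(x: int, _cache: dict[int, str] = {}) -> str:
        if x in _cache:
            return _cache[x]         # hit: skip the recompute entirely
        result = f'key-{x * x}'      # stand-in for real work
        _cache[x] = result
        return result

    assert expensive_key(3) == 'key-9'
    assert expensive_key(3) == 'key-9'  # second call served from the cache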
@@ -19,36 +19,23 @@ Kraken backend.
 Sub-modules within break into the core functionalities:

-- .api: for the core API machinery which generally
-  a ``asks``/``trio-websocket`` implemented ``Client``.
-- .broker: part for orders / trading endpoints.
-- .feed: for real-time and historical data query endpoints.
-- .ledger: for transaction processing as it pertains to accounting.
-- .symbols: for market (name) search and symbology meta-defs.
+- ``broker.py`` part for orders / trading endpoints
+- ``feed.py`` for real-time data feed endpoints
+- ``api.py`` for the core API machinery which is ``trio``-ized
+  wrapping around ``ib_insync``.

 '''
-from .symbols import (
-    Pair, # for symcache
-    open_symbol_search,
-    # required by `.accounting`, `.data`
-    get_mkt_info,
-)
-# required by `.brokers`
 from .api import (
     get_client,
 )
 from .feed import (
-    # required by `.data`
-    stream_quotes,
+    get_mkt_info,
     open_history_client,
+    open_symbol_search,
+    stream_quotes,
 )
 from .broker import (
-    # required by `.clearing`
     open_trade_dialog,
-)
-from .ledger import (
-    # required by `.accounting`
-    norm_trade,
     norm_trade_records,
 )

@@ -56,20 +43,17 @@ from .ledger import (
 __all__ = [
     'get_client',
     'get_mkt_info',
-    'Pair',
     'open_trade_dialog',
     'open_history_client',
     'open_symbol_search',
     'stream_quotes',
     'norm_trade_records',
-    'norm_trade',
 ]


 # tractor RPC enable arg
 __enable_modules__: list[str] = [
     'api',
-    'broker',
     'feed',
-    'symbols',
+    'broker',
 ]

@@ -15,11 +15,12 @@
 # along with this program. If not, see <https://www.gnu.org/licenses/>.

 '''
-Core (web) API client
+Kraken web API wrapping.

 '''
 from contextlib import asynccontextmanager as acm
 from datetime import datetime
+from decimal import Decimal
 import itertools
 from typing import (
     Any,

@@ -27,25 +28,23 @@ from typing import (
 )
 import time

-import httpx
+from bidict import bidict
 import pendulum
+import asks
+from fuzzywuzzy import process as fuzzy
 import numpy as np
 import urllib.parse
 import hashlib
 import hmac
 import base64
-import tractor
 import trio

 from piker import config
-from piker.data import (
-    def_iohlcv_fields,
-    match_from_pairs,
-)
+from piker.data.types import Struct
+from piker.data import def_iohlcv_fields
 from piker.accounting._mktinfo import (
     Asset,
     digits_to_dec,
-    dec_digits,
 )
 from piker.brokers._util import (
     resproc,
@@ -55,17 +54,11 @@ from piker.brokers._util import (
 )
 from piker.accounting import Transaction
 from piker.log import get_logger
-from .symbols import Pair

 log = get_logger('piker.brokers.kraken')

 # <uri>/<version>/
 _url = 'https://api.kraken.com/0'

-_headers: dict[str, str] = {
-    'User-Agent': 'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
-}
-
 # TODO: this is the only backend providing this right?
 # in which case we should drop it from the defaults and
 # instead make a custom fields descr in this module!
@@ -76,18 +69,12 @@ _symbol_info_translation: dict[str, str] = {


 def get_config() -> dict[str, Any]:
-    '''
-    Load our section from `piker/brokers.toml`.
-
-    '''
-    conf, path = config.load(
-        conf_name='brokers',
-        touch_if_dne=True,
-    )
-    if (section := conf.get('kraken')) is None:
-        log.warning(
-            f'No config section found for kraken in {path}'
-        )
+    conf, path = config.load()
+    section = conf.get('kraken')
+
+    if section is None:
+        log.warning(f'No config section found for kraken in {path}')
         return {}

     return section
@@ -118,51 +105,96 @@ class InvalidKey(ValueError):
     '''


+# https://www.kraken.com/features/api#get-tradable-pairs
+class Pair(Struct):
+    altname: str  # alternate pair name
+    wsname: str  # WebSocket pair name (if available)
+    aclass_base: str  # asset class of base component
+    base: str  # asset id of base component
+    aclass_quote: str  # asset class of quote component
+    quote: str  # asset id of quote component
+    lot: str  # volume lot size
+
+    cost_decimals: int
+    costmin: float
+    pair_decimals: int  # scaling decimal places for pair
+    lot_decimals: int  # scaling decimal places for volume
+
+    # amount to multiply lot volume by to get currency volume
+    lot_multiplier: float
+
+    # array of leverage amounts available when buying
+    leverage_buy: list[int]
+    # array of leverage amounts available when selling
+    leverage_sell: list[int]
+
+    # fee schedule array in [volume, percent fee] tuples
+    fees: list[tuple[int, float]]
+
+    # maker fee schedule array in [volume, percent fee] tuples (if on
+    # maker/taker)
+    fees_maker: list[tuple[int, float]]
+
+    fee_volume_currency: str  # volume discount currency
+    margin_call: str  # margin call level
+    margin_stop: str  # stop-out/liquidation margin level
+    ordermin: float  # minimum order volume for pair
+    tick_size: float  # min price step size
+    status: str
+
+    short_position_limit: float = 0
+    long_position_limit: float = float('inf')
+
+    @property
+    def price_tick(self) -> Decimal:
+        return digits_to_dec(self.pair_decimals)
+
+    @property
+    def size_tick(self) -> Decimal:
+        return digits_to_dec(self.lot_decimals)
+
+    @property
+    def bs_fqme(self) -> str:
+        return f'{self.symbol}.SPOT'
+
+
 class Client:

-    # assets and mkt pairs are key-ed by kraken's ReST response
-    # symbol-bs_mktids (we call them "X-keys" like fricking
-    # "XXMRZEUR"). these keys used directly since ledger endpoints
-    # return transaction sets keyed with the same set!
-    _Assets: dict[str, Asset] = {}
-    _AssetPairs: dict[str, Pair] = {}
+    # symbol mapping from all names to the altname
+    _ntable: dict[str, str] = {}

-    # offer lookup tables for all .altname and .wsname
-    # to the equivalent .xname so that various symbol-schemas
-    # can be mapped to `Pair`s in the tables above.
-    _altnames: dict[str, str] = {}
-    _wsnames: dict[str, str] = {}
+    # 2-way map of symbol names to their "alt names" ffs XD
+    _altnames: bidict[str, str] = bidict()

-    # key-ed by `Pair.bs_fqme: str`, and thus used for search
-    # allowing for lookup using piker's own FQME symbology sys.
     _pairs: dict[str, Pair] = {}
-    _assets: dict[str, Asset] = {}

     def __init__(
         self,
         config: dict[str, str],
-        httpx_client: httpx.AsyncClient,
-
         name: str = '',
         api_key: str = '',
         secret: str = ''
     ) -> None:
-        self._sesh: httpx.AsyncClient = httpx_client
+        self._sesh = asks.Session(connections=4)
+        self._sesh.base_location = _url
+        self._sesh.headers.update({
+            'User-Agent':
+                'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
+        })
         self._name = name
         self._api_key = api_key
         self._secret = secret

         self.conf: dict[str, str] = config
+        self.assets: dict[str, Asset] = {}

     @property
     def pairs(self) -> dict[str, Pair]:
         if self._pairs is None:
             raise RuntimeError(
-                "Client didn't run `.get_mkt_pairs()` on startup?!"
+                "Make sure to run `cache_symbols()` on startup!"
             )
+            # retrieve and cache all symbols

         return self._pairs
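The `price_tick`/`size_tick` properties above turn kraken's decimal-places counts into actual tick values via `digits_to_dec()`. A minimal sketch of that conversion, assuming `digits_to_dec(n)` simply yields 10**-n as a `Decimal` (the real helper lives in `piker.accounting._mktinfo` and may normalize differently):

    from decimal import Decimal

    def digits_to_dec(ndigits: int) -> Decimal:
        # 5 decimal places -> Decimal('0.00001')
        return Decimal(f'1e-{ndigits}')

    # eg. a pair reporting pair_decimals=1 and lot_decimals=8
    # would imply:
    assert digits_to_dec(1) == Decimal('0.1')   # price tick
    assert digits_to_dec(8) == Decimal('1e-8')  # size (volume) tick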
@@ -171,9 +203,10 @@ class Client:
         method: str,
         data: dict,
     ) -> dict[str, Any]:
-        resp: httpx.Response = await self._sesh.post(
-            url=f'/public/{method}',
+        resp = await self._sesh.post(
+            path=f'/public/{method}',
             json=data,
+            timeout=float('inf')
         )
         return resproc(resp, log)
@@ -184,18 +217,18 @@ class Client:
         uri_path: str
     ) -> dict[str, Any]:
         headers = {
-            'Content-Type': 'application/x-www-form-urlencoded',
-            'API-Key': self._api_key,
-            'API-Sign': get_kraken_signature(
-                uri_path,
-                data,
-                self._secret,
-            ),
+            'Content-Type':
+                'application/x-www-form-urlencoded',
+            'API-Key':
+                self._api_key,
+            'API-Sign':
+                get_kraken_signature(uri_path, data, self._secret)
         }
-        resp: httpx.Response = await self._sesh.post(
-            url=f'/private/{method}',
+        resp = await self._sesh.post(
+            path=f'/private/{method}',
             data=data,
             headers=headers,
+            timeout=float('inf')
         )
         return resproc(resp, log)
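The `API-Sign` header set in both versions above comes from `get_kraken_signature()`, which isn't part of this hunk. Per kraken's documented REST auth scheme it is an HMAC-SHA512 over the URI path concatenated with a SHA256 of the nonce'd POST payload, keyed by the base64-decoded API secret; a sketch along those lines (argument names are assumptions, not necessarily the repo's exact signature):

    import base64
    import hashlib
    import hmac
    import urllib.parse

    def get_kraken_signature(
        uri_path: str,
        data: dict,  # must contain a 'nonce' entry
        secret: str,
    ) -> str:
        postdata: str = urllib.parse.urlencode(data)
        # sha256 digest of nonce + urlencoded payload
        encoded: bytes = (str(data['nonce']) + postdata).encode()
        message: bytes = uri_path.encode() + hashlib.sha256(encoded).digest()
        # hmac-sha512 keyed with the b64-decoded secret
        mac = hmac.new(base64.b64decode(secret), message, hashlib.sha512)
        return base64.b64encode(mac.digest()).decode()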
@@ -221,29 +254,17 @@ class Client:
             'Balance',
             {},
         )
-        by_bsmktid: dict[str, dict] = resp['result']
+        by_bsmktid = resp['result']

-        balances: dict = {}
-        for xname, bal in by_bsmktid.items():
-            asset: Asset = self._Assets[xname]
-
-            # TODO: which KEY should we use? it's used to index
-            # the `Account.pps: dict` ..
-            key: str = asset.name.lower()
-            # TODO: should we just return a `Decimal` here
-            # or is the rounded version ok?
-            balances[key] = round(
-                float(bal),
-                ndigits=dec_digits(asset.tx_tick)
-            )
-
-        return balances
-
-    async def get_assets(
-        self,
-        reload: bool = False,
-    ) -> dict[str, Asset]:
+        # TODO: we need to pull out the "asset" decimals
+        # data and return a `decimal.Decimal` instead here!
+        # using the underlying Asset
+        return {
+            self._altnames[sym].lower(): float(bal)
+            for sym, bal in by_bsmktid.items()
+        }
+
+    async def get_assets(self) -> dict[str, Asset]:
         '''
         Load and cache all asset infos and pack into
         our native ``Asset`` struct.
@@ -261,37 +282,21 @@ class Client:
         }

         '''
-        if (
-            not self._assets
-            or reload
-        ):
-            resp = await self._public('Assets', {})
-            assets: dict[str, dict] = resp['result']
-
-            for bs_mktid, info in assets.items():
-                altname: str = info['altname']
-                aclass: str = info['aclass']
-                asset = Asset(
-                    name=altname,
-                    atype=f'crypto_{aclass}',
-                    tx_tick=digits_to_dec(info['decimals']),
-                    info=info,
-                )
-                # NOTE: yes we keep 2 sets since kraken insists on
-                # keeping 3 frickin sets bc apparently they have
-                # no sane data engineers who all like different
-                # keys for their fricking symbology sets..
-                self._Assets[bs_mktid] = asset
-                self._assets[altname.lower()] = asset
-                self._assets[altname] = asset
+        resp = await self._public('Assets', {})
+        assets = resp['result']

-        # we return the "most native" set merged with our preferred
-        # naming (which i guess is the "altname" one) since that's
-        # what the symcache loader will be storing, and we need the
-        # keys that are easiest to match against in any trade
-        # records.
-        return self._Assets | self._assets
+        for bs_mktid, info in assets.items():
+            altname = self._altnames[bs_mktid] = info['altname']
+            aclass: str = info['aclass']
+
+            self.assets[bs_mktid] = Asset(
+                name=altname.lower(),
+                atype=f'crypto_{aclass}',
+                tx_tick=digits_to_dec(info['decimals']),
+                info=info,
+            )
+
+        return self.assets

     async def get_trades(
         self,
@@ -372,24 +377,23 @@ class Client:
         # 'amount': '0.00300726', 'fee': '0.00001000', 'time':
         # 1658347714, 'status': 'Success'}]}
-        if xfers:
-            await tractor.pause()
-
         trans: dict[str, Transaction] = {}
         for entry in xfers:

             # look up the normalized name and asset info
-            asset_key: str = entry['asset']
-            asset: Asset = self._Assets[asset_key]
-            asset_key: str = asset.name.lower()
+            asset_key = entry['asset']
+            asset = self.assets[asset_key]
+            asset_key = self._altnames[asset_key].lower()

             # XXX: this is in the asset units (likely) so it isn't
             # quite the same as a commissions cost necessarily..)
-            # TODO: also round this based on `Pair` cost precision info?
             cost = float(entry['fee'])
-            # fqme: str = asset_key + '.kraken'
+            fqme = asset_key + '.kraken'

             tx = Transaction(
-                fqme=asset_key,  # this must map to an entry in .assets!
+                fqme=fqme,
+                sym=asset,
                 tid=entry['txid'],
                 dt=pendulum.from_timestamp(entry['time']),
                 bs_mktid=f'{asset_key}{src_asset}',
@@ -404,11 +408,6 @@ class Client:

                 # XXX: see note above
                 cost=cost,
-
-                # not a trade but a withdrawal or deposit on the
-                # asset (chain) system.
-                etype='transfer',
-
             )
             trans[tx.tid] = tx
@@ -459,7 +458,7 @@ class Client:
         # txid is a transaction id given by kraken
         return await self.endpoint('CancelOrder', {"txid": reqid})

-    async def asset_pairs(
+    async def pair_info(
         self,
         pair_patt: str | None = None,
@@ -471,77 +470,64 @@ class Client:
         https://docs.kraken.com/rest/#tag/Market-Data/operation/getTradableAssetPairs

         '''
-        if not self._AssetPairs:
-            # get all pairs by default, or filter
-            # to whatever pattern is provided as input.
-            req_pairs: dict[str, str] | None = None
-            if pair_patt is not None:
-                req_pairs = {'pair': pair_patt}
-
-            resp = await self._public(
-                'AssetPairs',
-                req_pairs,
-            )
-            err = resp['error']
-            if err:
-                raise SymbolNotFound(pair_patt)
-
-            # NOTE: we try to key pairs by our custom defined
-            # `.bs_fqme` field since we want to offer search over
-            # this pattern set, callers should fill out lookup
-            # tables for kraken's bs_mktid keys to map to these
-            # keys!
-            # XXX: FURTHER kraken's data eng team decided to offer
-            # 3 frickin market-pair-symbol key sets depending on
-            # which frickin API is being used.
-            # Example for the trading pair 'LTC<EUR'
-            # - the "X-key" from rest eps 'XLTCZEUR'
-            # - the "websocket key" from ws msgs is 'LTC/EUR'
-            # - the "altname key" also delivered in pair info is 'LTCEUR'
-            for xkey, data in resp['result'].items():
-
-                # NOTE: always cache in pairs tables for faster lookup
-                with tractor.devx.maybe_open_crash_handler():  # as bxerr:
-                    pair = Pair(xname=xkey, **data)
-
-                # register the above `Pair` structs for all
-                # key-sets/monikers: a set of 4 (frickin) tables
-                # acting as a combined surjection of all possible
-                # (and stupid) kraken names to their `Pair` obj.
-                self._AssetPairs[xkey] = pair
-                self._pairs[pair.bs_fqme] = pair
-                self._altnames[pair.altname] = pair
-                self._wsnames[pair.wsname] = pair
+        # get all pairs by default, or filter
+        # to whatever pattern is provided as input.
+        pairs: dict[str, str] | None = None
+        if pair_patt is not None:
+            pairs = {'pair': pair_patt}
+
+        resp = await self._public(
+            'AssetPairs',
+            pairs,
+        )
+        err = resp['error']
+        if err:
+            raise SymbolNotFound(pair_patt)
+
+        pairs: dict[str, Pair] = {
+            key: Pair(**data)
+            for key, data in resp['result'].items()
+        }
+        # always cache so we can possibly do faster lookup
+        self._pairs.update(pairs)

         if pair_patt is not None:
-            return next(iter(self._pairs.items()))[1]
+            return next(iter(pairs.items()))[1]

-        return self._AssetPairs
+        return pairs

-    async def get_mkt_pairs(
-        self,
-        reload: bool = False,
-    ) -> dict:
+    async def cache_symbols(self) -> dict:
         '''
-        Load all market pair info build and cache it for downstream
-        use.
+        Load all market pair info build and cache it for downstream use.

-        Multiple pair info lookup tables (like ``._altnames:
-        dict[str, str]``) are created for looking up the
-        piker-native `Pair`-struct from any input of the three
-        (yes, it's that idiotic..) available symbol/pair-key-sets
-        that kraken frickin offers depending on the API including
-        the .altname, .wsname and the weird ass default set they
-        return in ReST responses .xname..
+        A ``._ntable: dict[str, str]`` is available for mapping the
+        websocket pair name-keys and their http endpoint API (smh)
+        equivalents to the "alternative name" which is generally the one
+        we actually want to use XD

         '''
-        if (
-            not self._pairs
-            or reload
-        ):
-            await self.asset_pairs()
+        if not self._pairs:
+            pairs = await self.pair_info()
+            assert self._pairs == pairs

-        return self._AssetPairs
+            # table of all ws and rest keys to their alt-name values.
+            ntable: dict[str, str] = {}
+
+            for rest_key in list(pairs.keys()):
+                pair: Pair = pairs[rest_key]
+                altname = pair.altname
+                wsname = pair.wsname
+                ntable[altname] = ntable[rest_key] = ntable[wsname] = altname
+
+                # register the pair under all monikers, a giant flat
+                # surjection of all possible names to each info obj.
+                self._pairs[altname] = self._pairs[wsname] = pair
+
+            self._ntable.update(ntable)
+
+        return self._pairs

     async def search_symbols(
         self,
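To make the flat "surjection of all possible names" idea in both versions concrete, here's the moniker flattening for the LTC/EUR example named in the removed comments; the three key formats are real kraken conventions, the table itself is illustrative:

    xkey, wskey, altname = 'XLTCZEUR', 'LTC/EUR', 'LTCEUR'

    # every moniker maps onto the single canonical altname..
    ntable: dict[str, str] = {}
    ntable[xkey] = ntable[wskey] = ntable[altname] = altname

    assert ntable['XLTCZEUR'] == ntable['LTC/EUR'] == 'LTCEUR'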
@@ -557,20 +543,16 @@ class Client:

         '''
         if not len(self._pairs):
-            await self.get_mkt_pairs()
-        assert self._pairs, '`Client.get_mkt_pairs()` was never called!?'
+            await self.cache_symbols()
+        assert self._pairs, '`Client.cache_symbols()` was never called!?'

-        matches: dict[str, Pair] = match_from_pairs(
-            pairs=self._pairs,
-            query=pattern.upper(),
+        matches = fuzzy.extractBests(
+            pattern,
+            self._pairs,
             score_cutoff=50,
         )
-        # repack in .altname-keyed output table
-        return {
-            pair.altname: pair
-            for pair in matches.values()
-        }
+        # repack in dict form
+        return {item[0].altname: item[0] for item in matches}

     async def bars(
         self,
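For reference on the right side's `fuzzy.extractBests()`: when handed a `dict` of choices, `fuzzywuzzy.process.extractBests()` yields `(value, score, key)` tuples, which is why the repack grabs `item[0]`. A toy run with pair objects replaced by strings for brevity:

    from fuzzywuzzy import process as fuzzy

    pairs = {
        'XXBTZUSD': '<Pair XBT/USD>',
        'XETHZUSD': '<Pair ETH/USD>',
    }
    matches = fuzzy.extractBests(
        'xbt',  # user search pattern
        pairs,
        score_cutoff=50,
    )
    # -> [('<Pair XBT/USD>', <score>, 'XXBTZUSD'), ...]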
@@ -650,10 +632,10 @@ class Client:
         raise BrokerError(errmsg)

     @classmethod
-    def to_bs_fqme(
+    def normalize_symbol(
         cls,
-        pair_str: str
-    ) -> str:
+        ticker: str
+    ) -> tuple[str, Pair]:
         '''
         Normalize symbol names to a 3x3 pair from the global
         definition map which we build out from the data retrieved from
@@ -661,7 +643,7 @@ class Client:

         '''
         try:
-            return cls._altnames[pair_str.upper()].bs_fqme
+            return cls._ntable[ticker]
         except KeyError as ke:
             raise SymbolNotFound(f'kraken has no {ke.args[0]}')
@@ -669,36 +651,21 @@ class Client:
 @acm
 async def get_client() -> Client:

-    conf: dict[str, Any] = get_config()
-    async with httpx.AsyncClient(
-        base_url=_url,
-        headers=_headers,
-
-        # TODO: is there a way to numerate this?
-        # https://www.python-httpx.org/advanced/clients/#why-use-a-client
-        # connections=4
-    ) as trio_client:
-        if conf:
-            client = Client(
-                conf,
-                httpx_client=trio_client,
-
-                # TODO: don't break these up and just do internal
-                # conf lookups instead..
-                name=conf['key_descr'],
-                api_key=conf['api_key'],
-                secret=conf['secret']
-            )
-        else:
-            client = Client(
-                conf={},
-                httpx_client=trio_client,
-            )
-
-        # at startup, load all symbols, and asset info in
-        # batch requests.
-        async with trio.open_nursery() as nurse:
-            nurse.start_soon(client.get_assets)
-            await client.get_mkt_pairs()
-
-        yield client
+    conf = get_config()
+    if conf:
+        client = Client(
+            conf,
+            name=conf['key_descr'],
+            api_key=conf['api_key'],
+            secret=conf['secret']
+        )
+    else:
+        client = Client({})
+
+    # at startup, load all symbols, and asset info in
+    # batch requests.
+    async with trio.open_nursery() as nurse:
+        nurse.start_soon(client.get_assets)
+        await client.cache_symbols()
+
+    yield client
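The `conf` mapping consumed by both versions of `get_client()` is the `kraken` section returned from `get_config()` earlier in this file; judging by the keys accessed, the loaded section must look roughly like the following (values are placeholders, the shape is inferred, not confirmed by this diff):

    # hypothetical shape of get_config()'s return value,
    # inferred from the key lookups in get_client() above:
    conf: dict[str, str] = {
        'key_descr': 'main trading key',      # free-form label
        'api_key': '<your-kraken-api-key>',
        'secret': '<your-kraken-api-secret>',
    }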
@@ -24,6 +24,7 @@ from contextlib import (
 )
 from functools import partial
 from itertools import count
+import math
 from pprint import pformat
 import time
 from typing import (
@@ -34,16 +35,21 @@ from typing import (
 )

 from bidict import bidict
+import pendulum
 import trio
 import tractor

 from piker.accounting import (
     Position,
-    Account,
+    PpTable,
     Transaction,
     TransactionLedger,
     open_trade_ledger,
-    open_account,
+    open_pps,
+    get_likely_pair,
+)
+from piker.accounting._mktinfo import (
+    MktPair,
 )
 from piker.clearing import(
     OrderDialogs,
@@ -59,24 +65,18 @@ from piker.clearing._messages import (
     BrokerdPosition,
     BrokerdStatus,
 )
-from piker.brokers import (
-    open_cached_client,
-)
-from piker.data import open_symcache
 from .api import (
     log,
     Client,
     BrokerError,
+    get_client,
 )
 from .feed import (
+    get_mkt_info,
     open_autorecon_ws,
     NoBsWs,
     stream_messages,
 )
-from .ledger import (
-    norm_trade_records,
-    verify_balances,
-)

 MsgUnion = Union[
     BrokerdCancel,
@@ -175,8 +175,9 @@ async def handle_order_requests(

             case {
                 'account': 'kraken.spot' as account,
-                'action': 'buy'|'sell',
-            }:
+                'action': action,
+            } if action in {'buy', 'sell'}:

                 # validate
                 order = BrokerdOrder(**msg)
@@ -261,12 +262,6 @@ async def handle_order_requests(
                 } | extra

                 log.info(f'Submitting WS order request:\n{pformat(req)}')

-                # NOTE HOWTO, debug order requests
-                #
-                # if 'XRP' in pair:
-                #     await tractor.pause()
-
                 await ws.send_msg(req)

                 # placehold for sanity checking in relay loop
@@ -376,8 +371,7 @@ async def subscribe(


 def trades2pps(
-    acnt: Account,
-    ledger: TransactionLedger,
+    table: PpTable,
     acctid: str,
     new_trans: dict[str, Transaction] = {},
@@ -385,14 +379,13 @@ def trades2pps(

 ) -> list[BrokerdPosition]:
     if new_trans:
-        updated = acnt.update_from_ledger(
+        updated = table.update_from_trans(
             new_trans,
-            symcache=ledger.symcache,
         )
         log.info(f'Updated pps:\n{pformat(updated)}')

-    pp_entries, closed_pp_objs = acnt.dump_active()
-    pp_objs: dict[Union[str, int], Position] = acnt.pps
+    pp_entries, closed_pp_objs = table.dump_active()
+    pp_objs: dict[Union[str, int], Position] = table.pps

     pps: dict[int, Position]
     position_msgs: list[dict] = []
@@ -406,13 +399,13 @@ def trades2pps(
             # backend suffix prefixed but when
             # reading accounts from ledgers we
             # don't need it and/or it's prefixed
-            # in the section acnt.. we should
+            # in the section table.. we should
             # just strip this from the message
             # right since `.broker` is already
             # included?
             account='kraken.' + acctid,
             symbol=p.mkt.fqme,
-            size=p.cumsize,
+            size=p.size,
             avg_price=p.ppu,
             currency='',
         )
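Each iteration above packs one `BrokerdPosition` msg per open position; mirroring just the fields set in this call, a hypothetical payload for a small XBT/USD spot position would carry:

    # illustrative values only, field set copied from the call above:
    pp_msg_fields = dict(
        account='kraken.spot',        # 'kraken.' + acctid
        symbol='xbtusd.spot.kraken',  # p.mkt.fqme
        size=0.5,                     # signed position size
        avg_price=26_000.0,           # p.ppu
        currency='',
    )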
@@ -423,7 +416,7 @@ def trades2pps(
     # as little as possible. we need to either do
     # these writes in another actor, or try out `trio`'s
     # async file IO api?
-    acnt.write_config()
+    table.write_config()

     return position_msgs
@@ -434,12 +427,7 @@ async def open_trade_dialog(

 ) -> AsyncIterator[dict[str, Any]]:

-    async with (
-        # TODO: maybe bind these together and deliver
-        # a tuple from `.open_cached_client()`?
-        open_cached_client('kraken') as client,
-        open_symcache('kraken') as symcache,
-    ):
+    async with get_client() as client:
         # make ems flip to paper mode when no creds setup in
         # `brokers.toml` B0
         if not client._api_key:
@@ -469,8 +457,8 @@ async def open_trade_dialog(
     # - delete the *ABSOLUTE LAST* entry from account's corresponding
     #   trade ledgers file (NOTE this MUST be the last record
     #   delivered from the api ledger),
-    # - open your ``account.kraken.spot.toml`` and find that
-    #   same tid and delete it from the pos's clears table,
+    # - open your ``pps.toml`` and find that same tid and delete it
+    #   from the pp's clears table,
     # - set this flag to `True`
     #
     # You should see an update come in after the order mode
@@ -481,85 +469,172 @@ async def open_trade_dialog(
     # update things correctly.
     simulate_pp_update: bool = False

-    acnt: Account
+    table: PpTable
     ledger: TransactionLedger
     with (
-        open_account(
+        open_pps(
             'kraken',
             acctid,
             write_on_exit=True,
-        ) as acnt,
+        ) as table,

         open_trade_ledger(
             'kraken',
             acctid,
-            symcache=symcache,
         ) as ledger,
     ):
-        # TODO: loading ledger entries should all be done
-        # within a newly implemented `async with open_account()
-        # as acnt` where `Account.ledger: TransactionLedger`
-        # can be used to explicitly update and write the
-        # offline TOML files!
-        # ------ - ------
-        # MOL the init sequence is:
-        # - get `Account` (with presumed pre-loaded ledger done
-        #   behind the scenes as part of ctx enter).
-        # - pull new trades from API, update the ledger with
-        #   normalized to `Transaction` entries of those
-        #   records, presumably (and implicitly) update the
-        #   acnt state including expiries, positions,
-        #   transfers..), and finally of course existing
-        #   per-asset balances.
-        # - validate all pos and balances ensuring there's
-        #   no seemingly noticeable discrepancies?
-
-        # LOAD and transaction-ify the EXISTING LEDGER
-        ledger_trans: dict[str, Transaction] = await norm_trade_records(
-            ledger,
-            client,
-            api_name_set='xname',
-        )
+        # transaction-ify the ledger entries
+        ledger_trans = await norm_trade_records(ledger)

-        if not acnt.pps:
-            acnt.update_from_ledger(
-                ledger_trans,
-                symcache=ledger.symcache,
-            )
-            acnt.write_config()
+        if not table.pps:
+            # NOTE: we can't use this since it first needs
+            # broker: str input support!
+            # table.update_from_trans(ledger.to_trans())
+            table.update_from_trans(ledger_trans)
+            table.write_config()

         # TODO: eventually probably only load
         # as far back as it seems is not delivered in the
         # most recent 50 trades and assume that by ordering we
-        # already have those records in the ledger?
-        tids2trades: dict[str, dict] = await client.get_trades()
+        # already have those records in the ledger.
+        tids2trades = await client.get_trades()
         ledger.update(tids2trades)
         if tids2trades:
             ledger.write_config()

-        api_trans: dict[str, Transaction] = await norm_trade_records(
-            tids2trades,
-            client,
-            api_name_set='xname',
-        )
+        api_trans = await norm_trade_records(tids2trades)

         # retrieve kraken reported balances
         # and do diff with ledger to determine
         # what amount of trades-transactions need
         # to be reloaded.
-        balances: dict[str, float] = await client.get_balances()
+        balances = await client.get_balances()

-        await verify_balances(
-            acnt,
-            src_fiat,
-            balances,
-            client,
-            ledger,
-            ledger_trans,
-            api_trans,
-        )
+        for dst, size in balances.items():
+            # we don't care about tracking positions
+            # in the user's source fiat currency.
+            if (
+                dst == src_fiat
+                or not any(
+                    dst in bs_mktid for bs_mktid in table.pps
+                )
+            ):
+                log.warning(
+                    f'Skipping balance `{dst}`:{size} for position calcs!'
+                )
+                continue
+
+            def has_pp(
+                dst: str,
+                size: float,
+            ) -> Position | None:
+
+                src2dst: dict[str, str] = {}
+
+                for bs_mktid in table.pps:
+                    likely_pair = get_likely_pair(
+                        src_fiat,
+                        dst,
+                        bs_mktid,
+                    )
+                    if likely_pair:
+                        src2dst[src_fiat] = dst
+
+                for src, dst in src2dst.items():
+                    pair = f'{dst}{src_fiat}'
+                    pp = table.pps.get(pair)
+                    if (
+                        pp
+                        and math.isclose(pp.size, size)
+                    ):
+                        return pp
+
+                    elif (
+                        size == 0
+                        and pp.size
+                    ):
+                        log.warning(
+                            f'`kraken` account says you have a ZERO '
+                            f'balance for {bs_mktid}:{pair}\n'
+                            f'but piker seems to think `{pp.size}`\n'
+                            'This is likely a discrepancy in piker '
+                            'accounting if the above number is '
+                            "large, though it's likely due to lack "
+                            "of tracking xfer fees.."
+                        )
+                        return pp
+
+                return None  # signal no entry
+
+            pos = has_pp(dst, size)
+            if not pos:
+                # we have a balance for which there is no pp
+                # entry? so we have to likely update from the
+                # ledger.
+                updated = table.update_from_trans(ledger_trans)
+                log.info(f'Updated pps from ledger:\n{pformat(updated)}')
+                pos = has_pp(dst, size)
+
+                if (
+                    not pos
+                    and not simulate_pp_update
+                ):
+                    # try reloading from API
+                    table.update_from_trans(api_trans)
+                    pos = has_pp(dst, size)
+                    if not pos:
+                        # get transfers to make sense of abs balances.
+                        # NOTE: we do this after ledger and API
+                        # loading since we might not have an entry
+                        # in the ``pps.toml`` for the necessary pair
+                        # yet and thus this likely pair grabber will
+                        # likely fail.
+                        for bs_mktid in table.pps:
+                            likely_pair = get_likely_pair(
+                                src_fiat,
+                                dst,
+                                bs_mktid,
+                            )
+                            if likely_pair:
+                                break
+                        else:
+                            raise ValueError(
+                                'Could not find a position pair in '
+                                'ledger for likely withdrawal '
+                                f'candidate: {dst}'
+                            )
+
+                        if likely_pair:
+                            # this was likely pp that had a withdrawal
+                            # from the dst asset out of the account.
+                            xfer_trans = await client.get_xfers(
+                                dst,
+                                # TODO: not all src assets are
+                                # 3 chars long...
+                                src_asset=likely_pair[3:],
+                            )
+                            if xfer_trans:
+                                updated = table.update_from_trans(
+                                    xfer_trans,
+                                    cost_scalar=1,
+                                )
+                                log.info(
+                                    f'Updated {dst} from transfers:\n'
+                                    f'{pformat(updated)}'
+                                )
+
+                if has_pp(dst, size):
+                    raise ValueError(
+                        'Could not reproduce balance:\n'
+                        f'dst: {dst}, {size}\n'
+                    )

-        # XXX NOTE: only for simulate-testing a "new fill" since
+        # only for simulate-testing a "new fill" since
         # otherwise we have to actually conduct a live clear.
         if simulate_pp_update:
             tid = list(tids2trades)[0]
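The `math.isclose()` guard inside `has_pp()` above exists because the exchange-reported float balance and piker's locally accumulated position size rarely match bit-for-bit after fee rounding; a quick illustration:

    import math

    api_balance = 0.30000001    # as reported by the exchange
    local_size = 0.3 + 1e-8     # accumulated from ledger clears

    # the default rel_tol=1e-09 treats these as equal while still
    # flagging genuinely divergent accounting:
    assert math.isclose(api_balance, local_size)
    assert not math.isclose(api_balance, local_size + 0.001)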
@@ -568,27 +643,25 @@ async def open_trade_dialog(
         reqids2txids[0] = last_trade_dict['ordertxid']

         ppmsgs: list[BrokerdPosition] = trades2pps(
-            acnt,
-            ledger,
+            table,
             acctid,
         )
-        # sync with EMS delivering pps and accounts
         await ctx.started((ppmsgs, [acc_name]))

         # TODO: ideally this blocks this task
         # as little as possible. we need to either do
         # these writes in another actor, or try out `trio`'s
         # async file IO api?
-        acnt.write_config()
+        table.write_config()

         # Get websocket token for authenticated data stream
         # Assert that a token was actually received.
         resp = await client.endpoint('GetWebSocketsToken', {})
-        if err := resp.get('error'):
+        err = resp.get('error')
+        if err:
             raise BrokerError(err)

-        # resp token for ws init
-        token: str = resp['result']['token']
+        token = resp['result']['token']

         ws: NoBsWs
         async with (
@@ -617,35 +690,32 @@ async def open_trade_dialog(

             # enter relay loop
             await handle_order_updates(
-                client=client,
-                ws=ws,
-                ws_stream=stream,
-                ems_stream=ems_stream,
-                apiflows=apiflows,
-                ids=ids,
-                reqids2txids=reqids2txids,
-                acnt=acnt,
-                ledger=ledger,
-                acctid=acctid,
-                acc_name=acc_name,
-                token=token,
+                ws,
+                stream,
+                ems_stream,
+                apiflows,
+                ids,
+                reqids2txids,
+                table,
+                api_trans,
+                acctid,
+                acc_name,
+                token,
             )


 async def handle_order_updates(
-    client: Client,  # only for pairs table needed in ledger proc
     ws: NoBsWs,
     ws_stream: AsyncIterator,
     ems_stream: tractor.MsgStream,
     apiflows: OrderDialogs,
     ids: bidict[str, int],
     reqids2txids: bidict[int, str],
-    acnt: Account,
+    table: PpTable,

     # transaction records which will be updated
     # on new trade clearing events (aka order "fills")
-    ledger: TransactionLedger,
-    # ledger_trans: dict[str, Transaction],
+    ledger_trans: dict[str, Transaction],
     acctid: str,
     acc_name: str,
     token: str,
@@ -663,7 +733,7 @@ async def handle_order_updates(

     # TODO: turns out you get the fill events from the
     # `openOrders` before you get this, so it might be better
-    # to do all fill/status/pos updates in that sub and just use
+    # to do all fill/status/pp updates in that sub and just use
     # this one for ledger syncs?

     # For eg. we could take the "last 50 trades" and do a diff
@@ -705,8 +775,7 @@ async def handle_order_updates(
                     # if tid not in ledger_trans
                 }
                 for tid, trade in trades.items():
-                    # assert tid not in ledger_trans
-                    assert tid not in ledger
+                    assert tid not in ledger_trans
                     txid = trade['ordertxid']
                     reqid = trade.get('userref')
@@ -749,22 +818,12 @@ async def handle_order_updates(
                     )
                     await ems_stream.send(status_msg)

-                new_trans = await norm_trade_records(
-                    trades,
-                    client,
-                    api_name_set='wsname',
-                )
-                ppmsgs: list[BrokerdPosition] = trades2pps(
-                    acnt=acnt,
-                    ledger=ledger,
-                    acctid=acctid,
-                    new_trans=new_trans,
-                )
-                # ppmsgs = trades2pps(
-                #     acnt,
-                #     acctid,
-                #     new_trans,
-                # )
+                new_trans = await norm_trade_records(trades)
+                ppmsgs = trades2pps(
+                    table,
+                    acctid,
+                    new_trans,
+                )
                 for pp_msg in ppmsgs:
                     await ems_stream.send(pp_msg)
@@ -1090,8 +1149,6 @@ async def handle_order_updates(
                     f'Failed to {action} order {reqid}:\n'
                     f'{errmsg}'
                 )
-                # if tractor._state.debug_mode():
-                #     await tractor.pause()

                 symbol: str = 'N/A'
                 if chain := apiflows.get(reqid):
@@ -1126,3 +1183,36 @@ async def handle_order_updates(
                 })
             case _:
                 log.warning(f'Unhandled trades update msg: {msg}')
+
+
+async def norm_trade_records(
+    ledger: dict[str, Any],
+) -> dict[str, Transaction]:
+
+    records: dict[str, Transaction] = {}
+
+    for tid, record in ledger.items():
+
+        size = float(record.get('vol')) * {
+            'buy': 1,
+            'sell': -1,
+        }[record['type']]
+
+        # we normalize to kraken's `altname` always..
+        bs_mktid: str = Client.normalize_symbol(record['pair'])
+        fqme = f'{bs_mktid.lower()}.kraken'
+        mkt: MktPair = (await get_mkt_info(fqme))[0]
+
+        records[tid] = Transaction(
+            fqme=fqme,
+            sym=mkt,
+            tid=tid,
+            size=size,
+            price=float(record['price']),
+            cost=float(record['fee']),
+            dt=pendulum.from_timestamp(float(record['time'])),
+            bs_mktid=bs_mktid,
+        )
+
+    return records
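The `{'buy': 1, 'sell': -1}` mapping above folds kraken's trade-type field into a signed size in a single expression; a quick worked example:

    record = {'vol': '0.25', 'type': 'sell'}  # trimmed ledger entry
    size = float(record.get('vol')) * {
        'buy': 1,
        'sell': -1,
    }[record['type']]
    assert size == -0.25  # sells carry negative size in the ledger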
@@ -30,29 +30,38 @@ from typing import (
 )
 import time

+from fuzzywuzzy import process as fuzzy
 import numpy as np
 import pendulum
 from trio_typing import TaskStatus
+import tractor
 import trio

 from piker.accounting._mktinfo import (
+    Asset,
     MktPair,
+    unpack_fqme,
 )
 from piker.brokers import (
     open_cached_client,
+    SymbolNotFound,
+)
+from piker._cacheables import (
+    async_lifo_cache,
 )
 from piker.brokers._util import (
     BrokerError,
     DataThrottle,
     DataUnavailable,
 )
-from piker.types import Struct
+from piker.data.types import Struct
 from piker.data.validate import FeedInit
 from piker.data._web_bs import open_autorecon_ws, NoBsWs
 from .api import (
     log,
+    Client,
+    Pair,
 )
-from .symbols import get_mkt_info


 class OHLC(Struct, frozen=True):
@@ -258,6 +267,62 @@ async def open_history_client(
     yield get_ohlc, {'erlangs': 1, 'rate': 1}


+@async_lifo_cache()
+async def get_mkt_info(
+    fqme: str,
+
+) -> tuple[MktPair, Pair]:
+    '''
+    Query for and return a `MktPair` and backend-native `Pair` (or
+    wtv else) info.
+
+    If more than one fqme is provided return a ``dict`` of native
+    key-strs to `MktPair`s.
+
+    '''
+    venue: str = 'spot'
+    expiry: str = ''
+    if '.kraken' in fqme:
+        broker, pair, venue, expiry = unpack_fqme(fqme)
+        venue: str = venue or 'spot'
+
+    if venue != 'spot':
+        raise SymbolNotFound(
+            'kraken only supports spot markets right now!\n'
+            f'{fqme}\n'
+        )
+
+    async with open_cached_client('kraken') as client:
+
+        # uppercase since kraken bs_mktid is always upper
+        bs_fqme, _, broker = fqme.partition('.')
+        pair_str: str = bs_fqme.upper()
+        bs_mktid: str = Client.normalize_symbol(pair_str)
+        pair: Pair = await client.pair_info(pair_str)
+
+        assets = client.assets
+        dst_asset: Asset = assets[pair.base]
+        src_asset: Asset = assets[pair.quote]
+
+        mkt = MktPair(
+            dst=dst_asset,
+            src=src_asset,
+
+            price_tick=pair.price_tick,
+            size_tick=pair.size_tick,
+            bs_mktid=bs_mktid,
+
+            expiry=expiry,
+            venue=venue or 'spot',
+
+            # TODO: futes
+            # _atype=_atype,
+
+            broker='kraken',
+        )
+        return mkt, pair
+
+
 async def stream_quotes(

     send_chan: trio.abc.SendChannel,
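On the fqme ("fully qualified market endpoint") handling above: `unpack_fqme()` is consumed as a 4-tuple of broker, pair, venue and expiry fields split out of the dotted key. An illustrative sketch only, since the exact split rules live in `piker.accounting._mktinfo`:

    fqme = 'xbtusd.spot.kraken'
    # destructure order matching the call above:
    broker, pair, venue, expiry = 'kraken', 'xbtusd', 'spot', ''
    assert (venue or 'spot') == 'spot'  # spot is the default venue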
@@ -413,3 +478,30 @@ async def stream_quotes(
                     log.warning(f'Unknown WSS message: {typ}, {quote}')

                 await send_chan.send({topic: quote})
+
+
+@tractor.context
+async def open_symbol_search(
+    ctx: tractor.Context,
+
+) -> Client:
+    async with open_cached_client('kraken') as client:
+
+        # load all symbols locally for fast search
+        cache = await client.cache_symbols()
+        await ctx.started(cache)
+
+        async with ctx.open_stream() as stream:
+
+            async for pattern in stream:
+
+                matches = fuzzy.extractBests(
+                    pattern,
+                    cache,
+                    score_cutoff=50,
+                )
+                # repack in dict form
+                await stream.send({
+                    pair[0].altname: pair[0]
+                    for pair in matches
+                })
@ -1,269 +0,0 @@
|
||||||
# piker: trading gear for hackers
|
|
||||||
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
|
|
||||||
|
|
||||||
# This program is free software: you can redistribute it and/or modify
|
|
||||||
# it under the terms of the GNU Affero General Public License as published by
|
|
||||||
# the Free Software Foundation, either version 3 of the License, or
|
|
||||||
# (at your option) any later version.
|
|
||||||
|
|
||||||
# This program is distributed in the hope that it will be useful,
|
|
||||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
|
||||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
|
||||||
# GNU Affero General Public License for more details.
|
|
||||||
|
|
||||||
# You should have received a copy of the GNU Affero General Public License
|
|
||||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
|
||||||
|
|
||||||
'''
|
|
||||||
Trade transaction accounting and normalization.
|
|
||||||
|
|
||||||
'''
|
|
||||||
import math
|
|
||||||
from pprint import pformat
|
|
||||||
from typing import (
|
|
||||||
Any,
|
|
||||||
)
|
|
||||||
|
|
||||||
import pendulum
|
|
||||||
|
|
||||||
from piker.accounting import (
|
|
||||||
Transaction,
|
|
||||||
Position,
|
|
||||||
Account,
|
|
||||||
get_likely_pair,
|
|
||||||
TransactionLedger,
|
|
||||||
# MktPair,
|
|
||||||
)
|
|
||||||
from piker.types import Struct
|
|
||||||
from piker.data import (
|
|
||||||
SymbologyCache,
|
|
||||||
)
|
|
||||||
from .api import (
|
|
||||||
log,
|
|
||||||
Client,
|
|
||||||
Pair,
|
|
||||||
)
|
|
||||||
# from .feed import get_mkt_info
|
|
||||||
|
|
||||||
|
|
||||||
def norm_trade(
|
|
||||||
tid: str,
|
|
||||||
record: dict[str, Any],
|
|
||||||
|
|
||||||
# this is the dict that was returned from
|
|
||||||
# `Client.get_mkt_pairs()` and when running offline ledger
|
|
||||||
# processing from `.accounting`, this will be the table loaded
|
|
||||||
# into `SymbologyCache.pairs`.
|
|
||||||
pairs: dict[str, Struct],
|
|
||||||
symcache: SymbologyCache | None = None,
|
|
||||||
|
|
||||||
) -> Transaction:
|
|
||||||
|
|
||||||
size: float = float(record.get('vol')) * {
|
|
||||||
'buy': 1,
|
|
||||||
'sell': -1,
|
|
||||||
}[record['type']]
|
|
||||||
|
|
||||||
# NOTE: this value may be either the websocket OR the rest schema
|
|
||||||
# so we need to detect the key format and then choose the
|
|
||||||
# correct symbol lookup table to evetually get a ``Pair``..
|
|
||||||
# See internals of `Client.asset_pairs()` for deats!
|
|
||||||
src_pair_key: str = record['pair']
|
|
||||||
|
|
||||||
# XXX: kraken's data engineering is soo bad they require THREE
|
|
||||||
# different pair schemas (more or less seemingly tied to
|
|
||||||
# transport-APIs)..LITERALLY they return different market id
|
|
||||||
# pairs in the ledger endpoints vs. the websocket event subs..
|
|
||||||
# lookup pair using appropriately provided tabled depending
|
|
||||||
# on API-key-schema..
|
|
||||||
pair: Pair = pairs[src_pair_key]
|
|
||||||
fqme: str = pair.bs_fqme.lower() + '.kraken'
|
|
||||||
|
|
||||||
return Transaction(
|
|
||||||
fqme=fqme,
|
|
||||||
tid=tid,
|
|
||||||
size=size,
|
|
||||||
price=float(record['price']),
|
|
||||||
cost=float(record['fee']),
|
|
||||||
dt=pendulum.from_timestamp(float(record['time'])),
|
|
||||||
bs_mktid=pair.bs_mktid,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
async def norm_trade_records(
|
|
||||||
ledger: dict[str, Any],
|
|
||||||
client: Client,
|
|
||||||
api_name_set: str = 'xname',
|
|
||||||
|
|
||||||
) -> dict[str, Transaction]:
|
|
||||||
'''
|
|
||||||
Loop through an input ``dict`` of trade records
|
|
||||||
and convert them to ``Transactions``.
|
|
||||||
|
|
||||||
'''
|
|
||||||
records: dict[str, Transaction] = {}
|
|
||||||
for tid, record in ledger.items():
|
|
||||||
|
|
||||||
# manual_fqme: str = f'{bs_mktid.lower()}.kraken'
|
|
||||||
# mkt: MktPair = (await get_mkt_info(manual_fqme))[0]
|
|
||||||
# fqme: str = mkt.fqme
|
|
||||||
# assert fqme == manual_fqme
|
|
||||||
pairs: dict[str, Pair] = {
|
|
||||||
'xname': client._AssetPairs,
|
|
||||||
'wsname': client._wsnames,
|
|
||||||
'altname': client._altnames,
|
|
||||||
}[api_name_set]
|
|
||||||
|
|
||||||
records[tid] = norm_trade(
|
|
||||||
tid,
|
|
||||||
record,
|
|
||||||
pairs=pairs,
|
|
||||||
)
|
|
||||||
|
|
||||||
return records
|
|
||||||
|
|
||||||
|
|
||||||
def has_pp(
|
|
||||||
acnt: Account,
|
|
||||||
src_fiat: str,
|
|
||||||
dst: str,
|
|
||||||
size: float,
|
|
||||||
|
|
||||||
) -> Position | None:
|
|
||||||
|
|
||||||
src2dst: dict[str, str] = {}
|
|
||||||
for bs_mktid in acnt.pps:
|
|
||||||
likely_pair = get_likely_pair(
|
|
||||||
src_fiat,
|
|
||||||
dst,
|
|
||||||
bs_mktid,
|
|
||||||
)
|
|
||||||
if likely_pair:
|
|
||||||
src2dst[src_fiat] = dst
|
|
||||||
|
|
||||||
for src, dst in src2dst.items():
|
|
||||||
pair: str = f'{dst}{src_fiat}'
|
|
||||||
pos: Position = acnt.pps.get(pair)
|
|
||||||
if (
|
|
||||||
pos
|
|
||||||
and math.isclose(pos.size, size)
|
|
||||||
):
|
|
||||||
return pos
|
|
||||||
|
|
||||||
elif (
|
|
||||||
size == 0
|
|
||||||
and pos.size
|
|
||||||
):
|
|
||||||
log.warning(
|
|
||||||
f'`kraken` account says you have a ZERO '
|
|
||||||
f'balance for {bs_mktid}:{pair}\n'
|
|
||||||
f'but piker seems to think `{pos.size}`\n'
|
|
||||||
'This is likely a discrepancy in piker '
|
|
||||||
'accounting if the above number is'
|
|
||||||
"large,' though it's likely to due lack"
|
|
||||||
"f tracking xfers fees.."
|
|
||||||
)
|
|
||||||
return pos
|
|
||||||
|
|
||||||
return None # indicate no entry found
-
-
-# TODO: factor most of this "account updating from txns" into the
-# `Account` impl so as to provide for hiding the mostly
-# cross-provider updates from txn sets
-async def verify_balances(
-    acnt: Account,
-    src_fiat: str,
-    balances: dict[str, float],
-    client: Client,
-    ledger: TransactionLedger,
-    ledger_trans: dict[str, Transaction],  # from toml
-    api_trans: dict[str, Transaction],  # from API
-
-    simulate_pp_update: bool = False,
-
-) -> None:
-    for dst, size in balances.items():
-
-        # we don't care about tracking positions
-        # in the user's source fiat currency.
-        if (
-            dst == src_fiat
-            or not any(
-                dst in bs_mktid for bs_mktid in acnt.pps
-            )
-        ):
-            log.warning(
-                f'Skipping balance `{dst}`:{size} for position calcs!'
-            )
-            continue
-
-        # we have a balance for which there is no pos entry
-        # - we likely have to update from the ledger?
-        if not has_pp(acnt, src_fiat, dst, size):
-            updated = acnt.update_from_ledger(
-                ledger_trans,
-                symcache=ledger.symcache,
-            )
-            log.info(f'Updated pps from ledger:\n{pformat(updated)}')
-
-        # FIRST try reloading from API records
-        if (
-            not has_pp(acnt, src_fiat, dst, size)
-            and not simulate_pp_update
-        ):
-            acnt.update_from_ledger(
-                api_trans,
-                symcache=ledger.symcache,
-            )
-
-            # get transfers to make sense of abs
-            # balances.
-            # NOTE: we do this after ledger and API
-            # loading since we might not have an
-            # entry in the
-            # ``account.kraken.spot.toml`` for the
-            # necessary pair yet and thus this
-            # likely pair grabber will likely fail.
-            if not has_pp(acnt, src_fiat, dst, size):
-                for bs_mktid in acnt.pps:
-                    likely_pair: str | None = get_likely_pair(
-                        src_fiat,
-                        dst,
-                        bs_mktid,
-                    )
-                    if likely_pair:
-                        break
-                else:
-                    raise ValueError(
-                        'Could not find a position pair in '
-                        'ledger for likely withdrawal '
-                        f'candidate: {dst}'
-                    )
-
-                # this was likely a pos that had a withdrawal
-                # of the dst asset out of the account.
-                if likely_pair:
-                    xfer_trans = await client.get_xfers(
-                        dst,
-
-                        # TODO: not all src assets are
-                        # 3 chars long...
-                        src_asset=likely_pair[3:],
-                    )
-                    if xfer_trans:
-                        updated = acnt.update_from_ledger(
-                            xfer_trans,
-                            cost_scalar=1,
-                            symcache=ledger.symcache,
-                        )
-                        log.info(
-                            f'Updated {dst} from transfers:\n'
-                            f'{pformat(updated)}'
-                        )
-
-        if not has_pp(acnt, src_fiat, dst, size):
-            raise ValueError(
-                'Could not reproduce balance:\n'
-                f'dst: {dst}, {size}\n'
-            )
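
The balance checks above hinge on `math.isclose()` which compares with a purely relative tolerance (1e-09) by default; against a zero balance that check always fails unless an absolute tolerance is supplied, which is exactly why the zero-balance branch in `has_pp()` is special-cased. A quick stdlib illustration::

    import math

    assert math.isclose(100.0, 100.0 + 1e-8)       # within rel tol
    assert not math.isclose(0.0, 1e-12)            # rel tol vs. zero fails
    assert math.isclose(0.0, 1e-12, abs_tol=1e-9)  # needs an abs tol
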

@@ -1,210 +0,0 @@
-# piker: trading gear for hackers
-# Copyright (C) Tyler Goodlet (in stewardship for pikers)
-
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU Affero General Public License for more details.
-
-# You should have received a copy of the GNU Affero General Public License
-# along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-'''
-Symbology defs and search.
-
-'''
-from decimal import Decimal
-
-import tractor
-
-from piker._cacheables import (
-    async_lifo_cache,
-)
-from piker.accounting._mktinfo import (
-    digits_to_dec,
-)
-from piker.brokers import (
-    open_cached_client,
-    SymbolNotFound,
-)
-from piker.types import Struct
-from piker.accounting._mktinfo import (
-    Asset,
-    MktPair,
-    unpack_fqme,
-)
-
-
-class Pair(Struct):
-    '''
-    A tradable asset pair as schema-defined by,
-
-    https://docs.kraken.com/api/docs/rest-api/get-tradable-asset-pairs
-
-    '''
-    xname: str  # idiotic bs_mktid equiv i guess?
-    altname: str  # alternate pair name
-    wsname: str  # WebSocket pair name (if available)
-    aclass_base: str  # asset class of base component
-    base: str  # asset id of base component
-    aclass_quote: str  # asset class of quote component
-    quote: str  # asset id of quote component
-    lot: str  # volume lot size
-
-    cost_decimals: int
-    pair_decimals: int  # scaling decimal places for pair
-    lot_decimals: int  # scaling decimal places for volume
-
-    # amount to multiply lot volume by to get currency volume
-    lot_multiplier: float
-
-    # array of leverage amounts available when buying
-    leverage_buy: list[int]
-    # array of leverage amounts available when selling
-    leverage_sell: list[int]
-
-    # fee schedule array in [volume, percent fee] tuples
-    fees: list[tuple[int, float]]
-
-    # maker fee schedule array in [volume, percent fee] tuples (if on
-    # maker/taker)
-    fees_maker: list[tuple[int, float]]
-
-    fee_volume_currency: str  # volume discount currency
-    margin_call: str  # margin call level
-    margin_stop: str  # stop-out/liquidation margin level
-    ordermin: float  # minimum order volume for pair
-    tick_size: float  # min price step size
-    status: str
-
-    costmin: str|None = None  # XXX, only some mktpairs?
-    short_position_limit: float = 0
-    long_position_limit: float = float('inf')
-
-    # TODO: should we make this a literal NamespacePath ref?
-    ns_path: str = 'piker.brokers.kraken:Pair'
-
-    @property
-    def bs_mktid(self) -> str:
-        '''
-        Kraken seems to index its market symbol sets in
-        transaction ledgers using the key returned from rest
-        queries.. so use that since apparently they can't
-        make up their minds on a better key set XD
-
-        '''
-        return self.xname
-
-    @property
-    def price_tick(self) -> Decimal:
-        return digits_to_dec(self.pair_decimals)
-
-    @property
-    def size_tick(self) -> Decimal:
-        return digits_to_dec(self.lot_decimals)
-
-    @property
-    def bs_dst_asset(self) -> str:
-        dst, _ = self.wsname.split('/')
-        return dst
-
-    @property
-    def bs_src_asset(self) -> str:
-        _, src = self.wsname.split('/')
-        return src
-
-    @property
-    def bs_fqme(self) -> str:
-        '''
-        Basically the `.altname` but with special '.' handling and
-        `.SPOT` suffix appending (for future multi-venue support).
-
-        '''
-        dst, src = self.wsname.split('/')
-        # XXX: omg for stupid shite like ETH2.S/ETH..
-        dst = dst.replace('.', '-')
-        return f'{dst}{src}.SPOT'
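
Both tick props above defer to `digits_to_dec()` to turn a decimal-place count into a step size, while `bs_fqme` shows the dot-mangling in action. A hedged sketch of both behaviors (this `digits_to_dec` body is an assumption; piker's real one lives in `piker.accounting._mktinfo`)::

    from decimal import Decimal

    def digits_to_dec(ndigits: int) -> Decimal:
        # assumed: 5 decimal places -> Decimal('0.00001')
        return Decimal(f'1e-{ndigits}')

    assert digits_to_dec(5) == Decimal('0.00001')

    # the `.bs_fqme` dot-mangling for wonky ws names:
    dst, src = 'ETH2.S/ETH'.split('/')
    assert f"{dst.replace('.', '-')}{src}.SPOT" == 'ETH2-SETH.SPOT'
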
-
-
-@tractor.context
-async def open_symbol_search(ctx: tractor.Context) -> None:
-    async with open_cached_client('kraken') as client:
-
-        # load all symbols locally for fast search
-        cache = await client.get_mkt_pairs()
-        await ctx.started(cache)
-
-        async with ctx.open_stream() as stream:
-            async for pattern in stream:
-                await stream.send(
-                    await client.search_symbols(pattern)
-                )
-
-
-@async_lifo_cache()
-async def get_mkt_info(
-    fqme: str,
-
-) -> tuple[MktPair, Pair]:
-    '''
-    Query for and return a `MktPair` and backend-native `Pair` (or
-    wtv else) info.
-
-    If more than one fqme is provided return a ``dict`` of native
-    key-strs to `MktPair`s.
-
-    '''
-    venue: str = 'spot'
-    expiry: str = ''
-    if '.kraken' not in fqme:
-        fqme += '.kraken'
-
-    broker, pair, venue, expiry = unpack_fqme(fqme)
-    venue: str = venue or 'spot'
-
-    if venue.lower() != 'spot':
-        raise SymbolNotFound(
-            'kraken only supports spot markets right now!\n'
-            f'{fqme}\n'
-        )
-
-    async with open_cached_client('kraken') as client:
-
-        # uppercase since kraken bs_mktid is always upper
-        # bs_fqme, _, broker = fqme.partition('.')
-        # pair_str: str = bs_fqme.upper()
-        pair_str: str = f'{pair}.{venue}'
-
-        pair: Pair | None = client._pairs.get(pair_str.upper())
-        if not pair:
-            bs_fqme: str = client.to_bs_fqme(pair_str)
-            pair: Pair = client._pairs[bs_fqme]
-
-        if not (assets := client._assets):
-            assets: dict[str, Asset] = await client.get_assets()
-
-        dst_asset: Asset = assets[pair.bs_dst_asset]
-        src_asset: Asset = assets[pair.bs_src_asset]
-
-        mkt = MktPair(
-            dst=dst_asset,
-            src=src_asset,
-
-            price_tick=pair.price_tick,
-            size_tick=pair.size_tick,
-            bs_mktid=pair.bs_mktid,
-
-            expiry=expiry,
-            venue=venue or 'spot',
-
-            # TODO: futes
-            # _atype=_atype,
-
-            broker='kraken',
-        )
-        return mkt, pair
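
`get_mkt_info()` leans on the fqme ("fully qualified market endpoint") naming scheme which packs pair, venue, expiry and broker into one dotted string. A rough stand-in for `unpack_fqme()` under that assumed layout (the real parser handles more edge cases)::

    def unpack_fqme(fqme: str) -> tuple[str, str, str, str]:
        # assumed layout: '<pair>[.<venue>][.<expiry>].<broker>'
        pair, *rest, broker = fqme.split('.')
        venue = rest[0] if rest else ''
        expiry = rest[1] if len(rest) > 1 else ''
        return broker, pair, venue, expiry

    assert unpack_fqme('xbteur.spot.kraken') == (
        'kraken', 'xbteur', 'spot', '',
    )
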

@@ -16,9 +16,10 @@
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.
 
 '''
-Kucoin cex API backend.
+Kucoin broker backend
 
 '''
 
 from contextlib import (
     asynccontextmanager as acm,
     aclosing,
@@ -40,8 +41,9 @@ from typing import (
 import wsproto
 from uuid import uuid4
 
+from fuzzywuzzy import process as fuzzy
 from trio_typing import TaskStatus
-import httpx
+import asks
 from bidict import bidict
 import numpy as np
 import pendulum
@@ -62,11 +64,8 @@ from piker._cacheables import (
 )
 from piker.log import get_logger
 from piker.data.validate import FeedInit
-from piker.types import Struct  # NOTE, this is already a `tractor.msg.Struct`
-from piker.data import (
-    def_iohlcv_fields,
-    match_from_pairs,
-)
+from piker.data.types import Struct
+from piker.data import def_iohlcv_fields
 from piker.data._web_bs import (
     open_autorecon_ws,
     NoBsWs,
@@ -75,8 +74,6 @@ from ._util import DataUnavailable
 
 log = get_logger(__name__)
 
-_no_symcache: bool = True
-
 
 class KucoinMktPair(Struct, frozen=True):
     '''
@@ -89,27 +86,18 @@ class KucoinMktPair(Struct, frozen=True):
 
     @property
     def price_tick(self) -> Decimal:
-        return Decimal(str(self.quoteIncrement))
+        return Decimal(str(self.baseIncrement))
 
     baseMaxSize: float
     baseMinSize: float
 
     @property
     def size_tick(self) -> Decimal:
-        return Decimal(str(self.quoteMinSize))
+        return Decimal(str(self.baseMinSize))
 
-    callauctionFirstStageStartTime: None|float
-    callauctionIsEnabled: bool
-    callauctionPriceCeiling: float|None
-    callauctionPriceFloor: float|None
-    callauctionSecondStageStartTime: float|None
-    callauctionThirdStageStartTime: float|None
-
     enableTrading: bool
-    feeCategory: int
     feeCurrency: str
     isMarginEnabled: bool
-    makerFeeCoefficient: float
     market: str
     minFunds: float
     name: str
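
Note the `Decimal(str(...))` round trip on both sides of the tick hunks: the exchange delivers increments as floats and stringifying first avoids baking binary float error into the `Decimal`. A quick demonstration::

    from decimal import Decimal

    # straight from a float drags in binary representation error:
    assert Decimal(0.0001) != Decimal('0.0001')
    # via `str()` the intended tick size stays exact:
    assert Decimal(str(0.0001)) == Decimal('0.0001')
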
@@ -119,10 +107,7 @@ class KucoinMktPair(Struct, frozen=True):
     quoteIncrement: float
     quoteMaxSize: float
     quoteMinSize: float
-    st: bool
     symbol: str  # our bs_mktid, kucoin's internal id
-    takerFeeCoefficient: float
-    tradingStartTime: float|None
 
 
 class AccountTrade(Struct, frozen=True):
@@ -222,13 +207,8 @@ def get_config() -> BrokerConfig | None:
 
 
 class Client:
-    def __init__(
-        self,
-        httpx_client: httpx.AsyncClient,
-    ) -> None:
-        self._http: httpx.AsyncClient = httpx_client
-        self._config: BrokerConfig|None = get_config()
+    def __init__(self) -> None:
+        self._config: BrokerConfig | None = get_config()
         self._pairs: dict[str, KucoinMktPair] = {}
         self._fqmes2mktids: bidict[str, str] = bidict()
         self._bars: list[list[float]] = []
@@ -242,24 +222,18 @@ class Client:
 
     ) -> dict[str, str | bytes]:
         '''
-        Generate authenticated request headers:
+        Generate authenticated request headers
 
         https://docs.kucoin.com/#authentication
-        https://www.kucoin.com/docs/basic-info/connection-method/authentication/creating-a-request
-        https://www.kucoin.com/docs/basic-info/connection-method/authentication/signing-a-message
 
        '''
 
         if not self._config:
            raise ValueError(
-                'No config found when trying to send authenticated request'
-            )
+                'No config found when trying to send authenticated request')
 
         str_to_sign = (
             str(int(time.time() * 1000))
-            +
-            action
-            +
-            f'/api/{api}/{endpoint.lstrip("/")}'
+            + action + f'/api/{api}/{endpoint.lstrip("/")}'
         )
 
         signature = base64.b64encode(
@@ -270,7 +244,6 @@ class Client:
             ).digest()
         )
 
-        # TODO: can we cache this between calls?
         passphrase = base64.b64encode(
             hmac.new(
                 self._config.key_secret.encode('utf-8'),
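
The string being signed here follows KuCoin's documented scheme: millisecond timestamp + HTTP verb + endpoint path (+ body for POSTs), HMAC-SHA256'd with the API secret and base64-encoded. A self-contained sketch of that flow (the header names reflect the docs linked above, but treat the exact set as an assumption)::

    import base64
    import hashlib
    import hmac
    import time

    def sign_kucoin_request(
        secret: str,
        method: str,
        path: str,
        body: str = '',
    ) -> dict[str, str]:
        now_ms: str = str(int(time.time() * 1000))
        str_to_sign: str = now_ms + method.upper() + path + body
        sig: bytes = base64.b64encode(
            hmac.new(
                secret.encode('utf-8'),
                str_to_sign.encode('utf-8'),
                hashlib.sha256,
            ).digest()
        )
        return {
            'KC-API-TIMESTAMP': now_ms,
            'KC-API-SIGN': sig.decode('ascii'),
        }

    headers = sign_kucoin_request('s3cr3t', 'GET', '/api/v2/symbols')
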
@@ -292,10 +265,8 @@ class Client:
         self,
         action: Literal['POST', 'GET'],
         endpoint: str,
-
         api: str = 'v2',
         headers: dict = {},
-
     ) -> Any:
         '''
         Generic request wrapper for Kucoin API
@@ -308,19 +279,14 @@ class Client:
             api,
         )
 
-        req_meth: Callable = getattr(
-            self._http,
-            action.lower(),
-        )
-        res = await req_meth(
-            url=f'/{api}/{endpoint}',
-            headers=headers,
-        )
-        json: dict = res.json()
-        if (data := json.get('data')) is not None:
-            return data
+        api_url = f'https://api.kucoin.com/api/{api}/{endpoint}'
+
+        res = await asks.request(action, api_url, headers=headers)
+
+        json = res.json()
+        if 'data' in json:
+            return json['data']
         else:
-            api_url: str = self._http.base_url
             log.error(
                 f'Error making request to {api_url} ->\n'
                 f'{pformat(res)}'
@@ -340,7 +306,7 @@ class Client:
         '''
         token_type = 'private' if private else 'public'
         try:
-            data: dict[str, Any]|None = await self._request(
+            data: dict[str, Any] | None = await self._request(
                 'POST',
                 endpoint=f'bullet-{token_type}',
                 api='v1'
@@ -378,8 +344,8 @@ class Client:
         currencies: dict[str, Currency] = {}
         entries: list[dict] = await self._request(
             'GET',
-            endpoint='currencies',
             api='v1',
+            endpoint='currencies',
         )
         for entry in entries:
             curr = Currency(**entry).copy()
@@ -395,29 +361,20 @@ class Client:
         dict[str, KucoinMktPair],
         bidict[str, KucoinMktPair],
     ]:
-        entries = await self._request(
-            'GET',
-            endpoint='symbols',
-        )
+        entries = await self._request('GET', 'symbols')
         log.info(f' {len(entries)} Kucoin market pairs fetched')
 
         pairs: dict[str, KucoinMktPair] = {}
         fqmes2mktids: bidict[str, str] = bidict()
         for item in entries:
-            try:
-                pair = pairs[item['name']] = KucoinMktPair(**item)
-            except TypeError as te:
-                raise TypeError(
-                    '`KucoinMktPair` and response fields do not match ??\n'
-                    f'{KucoinMktPair.fields_diff(item)}\n'
-                ) from te
+            pair = pairs[item['name']] = KucoinMktPair(**item)
             fqmes2mktids[
                 item['name'].lower().replace('-', '')
             ] = pair.name
 
         return pairs, fqmes2mktids
 
-    async def get_mkt_pairs(
+    async def cache_pairs(
         self,
         update: bool = False,
@@ -445,27 +402,16 @@ class Client:
 
     ) -> dict[str, KucoinMktPair]:
         '''
-        Use fuzzy search engine to match against pairs, deliver
-        matching ones.
+        Use fuzzy search to match against all market names.
 
         '''
-        if not len(self._pairs):
-            await self.get_mkt_pairs()
-        assert self._pairs, '`Client.get_mkt_pairs()` was never called!?'
-
-        matches: dict[str, KucoinMktPair] = match_from_pairs(
-            pairs=self._pairs,
-            # query=pattern.upper(),
-            query=pattern.upper(),
-            score_cutoff=35,
-            limit=limit,
-        )
+        data = await self.cache_pairs()
+
+        matches = fuzzy.extractBests(
+            pattern, data, score_cutoff=35, limit=limit
+        )
 
         # repack in dict form
-        return {
-            pair.name: pair
-            for pair in matches.values()
-        }
+        return {item[0].name: item[0] for item in matches}
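
Both sides of this hunk implement the same idea: score every known pair name against the user's pattern and keep the best hits. A dependency-free sketch of that scoring loop using stdlib `difflib` (piker's `match_from_pairs` and `fuzzywuzzy.extractBests` are the real engines; this is only an illustration)::

    from difflib import SequenceMatcher

    def search_pairs(
        pattern: str,
        pairs: dict[str, object],
        score_cutoff: float = 0.35,
        limit: int = 5,
    ) -> dict[str, object]:
        scored = sorted(
            (
                (SequenceMatcher(None, pattern.upper(), name).ratio(), name)
                for name in pairs
            ),
            reverse=True,
        )
        return {
            name: pairs[name]
            for score, name in scored[:limit]
            if score >= score_cutoff
        }

    pairs = {'BTC-USDT': ..., 'ETH-USDT': ..., 'XMR-BTC': ...}
    assert 'BTC-USDT' in search_pairs('btcusdt', pairs)
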
 
 
     async def last_trades(self, sym: str) -> list[AccountTrade]:
         trades = await self._request(
@@ -605,21 +551,13 @@ def fqme_to_kucoin_sym(
 
 @acm
 async def get_client() -> AsyncGenerator[Client, None]:
-    '''
-    Load an API `Client` preconfigured from user settings
-
-    '''
-    async with (
-        httpx.AsyncClient(
-            base_url='https://api.kucoin.com/api',
-        ) as trio_client,
-    ):
-        client = Client(httpx_client=trio_client)
-        async with trio.open_nursery() as tn:
-            tn.start_soon(client.get_mkt_pairs)
-            await client.get_currencies()
+    client = Client()
+
+    async with trio.open_nursery() as n:
+        n.start_soon(client.cache_pairs)
+        await client.get_currencies()
 
         yield client
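
The `main` side shows the general shape: own the transport (`httpx.AsyncClient`) in an outer context, then warm caches concurrently in a `trio` nursery before yielding the ready client. A standalone sketch of that pattern (base url and warm-up call are placeholders)::

    from contextlib import asynccontextmanager as acm

    import httpx
    import trio

    class ApiClient:
        def __init__(self, http: httpx.AsyncClient) -> None:
            self._http = http

        async def warm_cache(self) -> None:
            # placeholder warm-up request
            await self._http.get('/get')

    @acm
    async def open_api_client():
        async with (
            httpx.AsyncClient(base_url='https://httpbin.org') as http,
            trio.open_nursery() as tn,
        ):
            client = ApiClient(http)
            # warm-up runs concurrently with caller setup
            tn.start_soon(client.warm_cache)
            yield client

    async def main() -> None:
        async with open_api_client() as client:
            ...  # client usable; warm-up may still be in flight

    trio.run(main)
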
 
 
 @tractor.context
@@ -628,7 +566,7 @@ async def open_symbol_search(
 ) -> None:
     async with open_cached_client('kucoin') as client:
         # load all symbols locally for fast search
-        await client.get_mkt_pairs()
+        await client.cache_pairs()
         await ctx.started()
 
         async with ctx.open_stream() as stream:
@@ -655,7 +593,7 @@ async def open_ping_task(
             await trio.sleep((ping_interval - 1000) / 1000)
             await ws.send_msg({'id': connect_id, 'type': 'ping'})
 
-    log.warning('Starting ping task for kucoin ws connection')
+    log.info('Starting ping task for kucoin ws connection')
     n.start_soon(ping_server)
 
     yield
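
`open_ping_task()` is the standard trio keepalive shape: an async context manager that starts a background pinger in a nursery, yields to the caller, then cancels the pinger on exit. A minimal self-contained version (the `send_ping` callable stands in for the websocket send)::

    from contextlib import asynccontextmanager as acm
    from typing import Awaitable, Callable

    import trio

    @acm
    async def open_keepalive(
        send_ping: Callable[[], Awaitable[None]],
        interval_s: float,
    ):
        async def pinger() -> None:
            while True:
                await send_ping()
                await trio.sleep(interval_s)

        async with trio.open_nursery() as tn:
            tn.start_soon(pinger)
            try:
                yield
            finally:
                # reap the background pinger on exit
                tn.cancel_scope.cancel()

    async def main() -> None:
        async def fake_ping() -> None:
            print('ping')

        async with open_keepalive(fake_ping, 0.1):
            await trio.sleep(0.35)  # ~3-4 pings

    trio.run(main)
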
@@ -667,21 +605,16 @@ async def open_ping_task(
 async def get_mkt_info(
     fqme: str,
 
-) -> tuple[
-    MktPair,
-    KucoinMktPair,
-]:
+) -> tuple[MktPair, KucoinMktPair]:
     '''
-    Query for and return both a `piker.accounting.MktPair` and
-    `KucoinMktPair` from provided `fqme: str`
-    (fully-qualified-market-endpoint).
+    Query for and return a `MktPair` and `KucoinMktPair`.
 
     '''
     async with open_cached_client('kucoin') as client:
         # split off any fqme broker part
         bs_fqme, _, broker = fqme.partition('.')
 
-        pairs: dict[str, KucoinMktPair] = await client.get_mkt_pairs()
+        pairs: dict[str, KucoinMktPair] = await client.cache_pairs()
 
         try:
             # likely search result key which is already in native mkt symbol form
@@ -749,8 +682,6 @@ async def stream_quotes(
 
     log.info(f'Starting up quote stream(s) for {symbols}')
     for sym_str in symbols:
-        mkt: MktPair
-        pair: KucoinMktPair
         mkt, pair = await get_mkt_info(sym_str)
         init_msgs.append(
             FeedInit(mkt_info=mkt)
@@ -758,11 +689,7 @@ async def stream_quotes(
 
     ws: NoBsWs
     token, ping_interval = await client._get_ws_token()
-    log.info(f'API reported ping_interval: {ping_interval}\n')
-
-    connect_id: str = str(uuid4())
-    typ: str
-    quote: dict
+    connect_id = str(uuid4())
     async with (
         open_autorecon_ws(
             (
@@ -776,37 +703,20 @@ async def stream_quotes(
             ),
         ) as ws,
         open_ping_task(ws, ping_interval, connect_id),
-        aclosing(
-            iter_normed_quotes(
-                ws, sym_str
-            )
-        ) as iter_quotes,
+        aclosing(stream_messages(ws, sym_str)) as msg_gen,
     ):
-        typ, quote = await anext(iter_quotes)
-
-        # take care to not unblock here until we get a real
-        # trade quote?
-        # ^TODO, remove this right?
-        # -[ ] what often blocks chart boot/new-feed switching
-        # since we're waiting for a live quote instead of just
-        # loading history afap..
-        # |_ XXX, not sure if we require a bit of rework to core
-        # feed init logic or if backends just gotta be
-        # changed up.. feel like there was some causality
-        # dilemma prolly only seen with IB too..
-        # while typ != 'trade':
-        #     typ, quote = await anext(iter_quotes)
+        typ, quote = await anext(msg_gen)
+
+        while typ != 'trade':
+            # take care to not unblock here until we get a real
+            # trade quote
+            typ, quote = await anext(msg_gen)
 
         task_status.started((init_msgs, quote))
         feed_is_live.set()
 
-        # XXX NOTE, DO NOT include the `.<backend>` suffix!
-        # OW the sampling loop will not broadcast correctly..
-        # since `bus._subscribers.setdefault(bs_fqme, set())`
-        # is used inside `.data.open_feed_bus()` !!!
-        topic: str = mkt.bs_fqme
-        async for typ, quote in iter_quotes:
-            await send_chan.send({topic: quote})
+        async for typ, msg in msg_gen:
+            await send_chan.send({sym_str: msg})
 
 
 @acm
@@ -861,7 +771,7 @@ async def subscribe(
     )
 
 
-async def iter_normed_quotes(
+async def stream_messages(
     ws: NoBsWs,
     sym: str,
@@ -892,9 +802,6 @@ async def iter_normed_quotes(
 
         yield 'trade', {
             'symbol': sym,
-            # TODO, is 'last' even used elsewhere/a-good
-            # semantic? can't we just read the ticks with our
-            # `.data.ticktools.frame_ticks()`?
            'last': trade_data.price,
            'brokerd_ts': last_trade_ts,
            'ticks': [
@@ -987,7 +894,7 @@ async def open_history_client(
         if end_dt is None:
             inow = round(time.time())
 
-            log.debug(
+            print(
                 f'difference in time between load and processing'
                 f'{inow - times[-1]}'
             )
@@ -37,12 +37,6 @@ import tractor
 from async_generator import asynccontextmanager
 import numpy as np
 import wrapt
 
-# TODO, port to `httpx`/`trio-websocket` whenever i get back to
-# writing a proper ws-api streamer for this backend (since the data
-# feeds are free now) as per GH feat-req:
-# https://github.com/pikers/piker/issues/509
-#
 import asks
 
 from ..calc import humanize, percent_change
 
@@ -1,49 +0,0 @@
-piker.clearing
-______________
-trade execution-n-control subsys for both live and paper trading as
-well as algo-trading manual override/interaction across any backend
-broker and data provider.
-
-avail UIs
-*********
-
-order ctl
----------
-the `piker.clearing` subsys is exposed mainly through
-the `piker chart` GUI as a "chart trader" style UX and
-is automatically enabled whenever a chart is opened.
-
-.. ^TODO, more prose here!
-
-the "manual" order control features are exposed via the
-`piker.ui.order_mode` API and can pretty much always be
-used (at least) in simulated-trading mode, aka "paper"-mode, and
-the micro-manual is as follows:
-
-``order_mode`` (
-    edge triggered activation by any of the following keys,
-    ``mouse-click`` on y-level to submit at that price
-):
-
-    - ``f``/ ``ctl-f`` to stage buy
-    - ``d``/ ``ctl-d`` to stage sell
-    - ``a`` to stage alert
-
-
-``search_mode`` (
-    ``ctl-l`` or ``ctl-space`` to open,
-    ``ctl-c`` or ``ctl-space`` to close
-) :
-
-    - begin typing to have symbol search automatically lookup
-      symbols from all loaded backend (broker) providers
-    - arrow keys and mouse click to navigate selection
-    - vi-like ``ctl-[hjkl]`` for navigation
-
-
-position (pp) mgmt
-------------------
-you can also configure your position allocation limits from the
-sidepane.
-
-.. ^TODO, explain and provide tut once more refined!
@@ -27,28 +27,13 @@ from ._ems import (
     open_brokerd_dialog,
 )
 from ._util import OrderDialogs
-from ._messages import(
-    Order,
-    Status,
-    Cancel,
-
-    # TODO: deprecate these and replace end-2-end with
-    # client-side-dialog set above B)
-    # https://github.com/pikers/piker/issues/514
-    BrokerdPosition
-)
 
 
 __all__ = [
-    'FeeModel',
     'open_ems',
     'OrderClient',
     'open_brokerd_dialog',
     'OrderDialogs',
-    'Order',
-    'Status',
-    'Cancel',
-    'BrokerdPosition'
-
 ]
 
@@ -25,15 +25,12 @@ from typing import TYPE_CHECKING
 
 import trio
 import tractor
-from tractor.trionics import (
-    broadcast_receiver,
-    collapse_eg,
-)
+from tractor.trionics import broadcast_receiver
 
 from ._util import (
     log,  # sub-sys logger
 )
-from piker.types import Struct
+from ..data.types import Struct
 from ..service import maybe_open_emsd
 from ._messages import (
     Order,
@@ -171,6 +168,7 @@ class OrderClient(Struct):
 
 
 async def relay_orders_from_sync_code(
+
     client: OrderClient,
     symbol_key: str,
     to_ems_stream: tractor.MsgStream,
@@ -244,11 +242,6 @@ async def open_ems(
 
     async with maybe_open_emsd(
         broker,
-        # XXX NOTE, LOL so this determines the daemon `emsd` loglevel
-        # then FYI.. that's kinda wrong no?
-        # -[ ] shouldn't it be set by `pikerd -l` or no?
-        # -[ ] would make a lot more sense to have a subsys ctl for
-        #     levels.. like `-l emsd.info` or something?
         loglevel=loglevel,
     ) as portal:
 
@@ -288,11 +281,8 @@ async def open_ems(
         client._ems_stream = trades_stream
 
         # start sync code order msg delivery task
-        async with (
-            collapse_eg(),
-            trio.open_nursery() as tn,
-        ):
-            tn.start_soon(
+        async with trio.open_nursery() as n:
+            n.start_soon(
                 relay_orders_from_sync_code,
                 client,
                 fqme,
@@ -308,4 +298,4 @@ async def open_ems(
             )
 
             # stop the sync-msg-relay task on exit.
-            tn.cancel_scope.cancel()
+            n.cancel_scope.cancel()
 
@@ -27,7 +27,7 @@ from contextlib import asynccontextmanager as acm
 from decimal import Decimal
 from math import isnan
 from pprint import pformat
-from time import time_ns
+import time
 from types import ModuleType
 from typing import (
     AsyncIterator,
@@ -42,7 +42,6 @@ from bidict import bidict
 import trio
 from trio_typing import TaskStatus
 import tractor
-from tractor import trionics
 
 from ._util import (
     log,  # sub-sys logger
@@ -52,13 +51,12 @@ from ..accounting._mktinfo import (
     unpack_fqme,
     dec_digits,
 )
-from piker.types import Struct
 from ..ui._notify import notify_from_ems_status_msg
 from ..data import iterticks
+from ..data.types import Struct
 from ._messages import (
     Order,
     Status,
-    Error,
     BrokerdCancel,
     BrokerdOrder,
     # BrokerdOrderAck,
@@ -77,6 +75,7 @@ if TYPE_CHECKING:
 
 # TODO: numba all of this
 def mk_check(
+
     trigger_price: float,
     known_last: float,
     action: str,
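
`mk_check()` builds the predicate which the dark-order clearing loop evaluates against every incoming tick. The body isn't shown in this hunk, so here is only a plausible reconstruction assuming the usual "fire once price crosses the trigger from the side it started on" semantics::

    from typing import Callable

    def mk_check(
        trigger_price: float,
        known_last: float,
        action: str,
    ) -> Callable[[float], float | None]:
        # return the trigger price once crossed, else None
        if trigger_price >= known_last:
            def check(price: float) -> float | None:
                return trigger_price if price >= trigger_price else None
        else:
            def check(price: float) -> float | None:
                return trigger_price if price <= trigger_price else None

        return check

    pred = mk_check(trigger_price=100.0, known_last=98.0, action='buy')
    assert pred(99.0) is None
    assert pred(100.5) == 100.0
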
@@ -162,7 +161,7 @@ async def clear_dark_triggers(
 
     router: Router,
     brokerd_orders_stream: tractor.MsgStream,
-    quote_stream: tractor.MsgStream,
+    quote_stream: tractor.ReceiveMsgStream,  # noqa
     broker: str,
     fqme: str,
 
@@ -178,7 +177,6 @@ async def clear_dark_triggers(
     '''
     # XXX: optimize this for speed!
     # TODO:
-    # - port to the new ringbuf stuff in `tractor.ipc`!
     # - numba all this!
     # - this stream may eventually contain multiple symbols
     quote_stream._raise_on_lag = False
@@ -257,7 +255,7 @@ async def clear_dark_triggers(
                             action=action,
                             oid=oid,
                             account=account,
-                            time_ns=time_ns(),
+                            time_ns=time.time_ns(),
                             symbol=bfqme,
                             price=submit_price,
                             size=size,
@@ -270,7 +268,7 @@ async def clear_dark_triggers(
                     # fallthrough logic
                     status = Status(
                         oid=oid,  # ems dialog id
-                        time_ns=time_ns(),
+                        time_ns=time.time_ns(),
                         resp=resp,
                         req=cmd,
                         brokerd_msg=brokerd_msg,
@@ -388,7 +386,6 @@ async def open_brokerd_dialog(
     for ep_name in [
         'open_trade_dialog',  # probably final name?
         'trades_dialogue',  # legacy
-        # ^!TODO, rm this since all backends ported no ?!?
     ]:
         trades_endpoint = getattr(
             brokermod,
@@ -502,7 +499,7 @@ class Router(Struct):
 
     '''
    # setup at actor spawn time
-    _tn: trio.Nursery
+    nursery: trio.Nursery
 
     # broker to book map
     books: dict[str, DarkBook] = {}
@@ -655,11 +652,7 @@ class Router(Struct):
         flume = feed.flumes[fqme]
         first_quote: dict = flume.first_quote
         book: DarkBook = self.get_dark_book(broker)
-        if not (last := first_quote.get('last')):
-            last: float = flume.rt_shm.array[-1]['close']
-
-        book.lasts[fqme]: float = float(last)
+        book.lasts[fqme]: float = float(first_quote['last'])
 
         async with self.maybe_open_brokerd_dialog(
             brokermod=brokermod,
@@ -672,7 +665,7 @@ class Router(Struct):
             # dark book clearing loop, also lives with parent
             # daemon to allow dark order clearing while no
             # client is connected.
-            self._tn.start_soon(
+            self.nursery.start_soon(
                 clear_dark_triggers,
                 self,
                 relay.brokerd_stream,
@@ -695,7 +688,7 @@ class Router(Struct):
 
             # spawn a ``brokerd`` order control dialog stream
             # that syncs lifetime with the parent `emsd` daemon.
-            self._tn.start_soon(
+            self.nursery.start_soon(
                 translate_and_relay_brokerd_events,
                 broker,
                 relay.brokerd_stream,
@@ -722,7 +715,7 @@ class Router(Struct):
         subs = self.subscribers[sub_key]
 
         sent_some: bool = False
-        for client_stream in subs.copy():
+        for client_stream in subs:
             try:
                 await client_stream.send(msg)
                 sent_some = True
@@ -769,12 +762,10 @@ async def _setup_persistent_emsd(
 
     global _router
 
-    # open a root "service task-nursery" for the `emsd`-actor
-    async with (
-        trionics.collapse_eg(),
-        trio.open_nursery() as tn
-    ):
-        _router = Router(_tn=tn)
+    # open a root "service nursery" for the ``emsd`` actor
+    async with trio.open_nursery() as service_nursery:
+        _router = Router(nursery=service_nursery)
 
         # TODO: send back the full set of persistent
         # orders/execs?
@@ -835,8 +826,8 @@ async def translate_and_relay_brokerd_events(
             # keep pps per account up to date locally in ``emsd`` mem
             # sym, broker = pos_msg.symbol, pos_msg.broker
 
-            # NOTE: translate to a FQME!
             relay.positions.setdefault(
+                # NOTE: translate to a FQSN!
                 (broker, pos_msg.account),
                 {}
             )[pos_msg.symbol] = pos_msg
@@ -892,7 +883,7 @@ async def translate_and_relay_brokerd_events(
                     BrokerdCancel(
                         oid=oid,
                         reqid=reqid,
-                        time_ns=time_ns(),
+                        time_ns=time.time_ns(),
                         account=status_msg.req.account,
                     )
                 )
@@ -907,75 +898,38 @@ async def translate_and_relay_brokerd_events(
             continue
 
         # BrokerdError
-        # TODO: figure out how this will interact with EMS clients
-        # for ex. on an error do we react with a dark orders
-        # management response, like cancelling all dark orders?
-        # This looks like a supervision policy for pending orders on
-        # some unexpected failure - something we need to think more
-        # about. In most default situations, with composed orders
-        # (ex. brackets), most brokers seem to use a oca policy.
         case {
             'name': 'error',
            'oid': oid,  # ems order-dialog id
            'reqid': reqid,  # brokerd generated order-request id
        }:
-            if (
-                not oid
-                # try to lookup any order dialog by
-                # brokerd-side id..
-                and not (
-                    oid := book._ems2brokerd_ids.inverse.get(reqid)
-                )
-            ):
-                log.warning(
-                    f'Rxed unusable error-msg:\n'
-                    f'{brokerd_msg}'
-                )
-                continue
-
+            status_msg = book._active.get(oid)
             msg = BrokerdError(**brokerd_msg)
+            log.error(fmsg)  # XXX make one when it's blank?
 
-            # NOTE: retrieve the last client-side response
-            # OR create an error when we have no last msg /dialog
-            # on record
-            status_msg: Status
-            if not (status_msg := book._active.get(oid)):
-                status_msg = Error(
-                    time_ns=time_ns(),
-                    oid=oid,
-                    reqid=reqid,
-                    brokerd_msg=msg,
-                )
-            else:
-                # only modify last status if we have an active
-                # ongoing dialog..
+            # TODO: figure out how this will interact with EMS clients
+            # for ex. on an error do we react with a dark orders
+            # management response, like cancelling all dark orders?
+            # This looks like a supervision policy for pending orders on
+            # some unexpected failure - something we need to think more
+            # about. In most default situations, with composed orders
+            # (ex. brackets), most brokers seem to use a oca policy.
+
+            # only relay to client side if we have an active
+            # ongoing dialog
+            if status_msg:
                 status_msg.resp = 'error'
                 status_msg.brokerd_msg = msg
+                book._active[oid] = status_msg
 
-            book._active[oid] = status_msg
-
-            log.error(
-                'Translating brokerd error to status:\n'
-                f'{fmsg}'
-                f'{status_msg.to_dict()}'
-            )
-            if req := status_msg.req:
-                fqme: str = req.symbol
-            else:
-                bdmsg: Struct = status_msg.brokerd_msg
-                fqme: str = (
-                    bdmsg.symbol  # might be None
-                    or
-                    bdmsg.broker_details['flow']
-                    # NOTE: what happens in empty case in the
-                    # broadcast below? it's a problem?
-                    .get('symbol', '')
-                )
-
-            await router.client_broadcast(
-                fqme,
-                status_msg,
-            )
+                await router.client_broadcast(
+                    status_msg.req.symbol,
+                    status_msg,
+                )
+            else:
+                log.error(f'Error for unknown order flow:\n{msg}')
+                continue
 
         # BrokerdStatus
         case {
@@ -1018,28 +972,14 @@ async def translate_and_relay_brokerd_events(
             status_msg.brokerd_msg = msg
             status_msg.src = msg.broker_details['name']
 
-            if not status_msg.req:
-                # likely some order change state?
-                await tractor.pause()
-            else:
-                await router.client_broadcast(
-                    status_msg.req.symbol,
-                    status_msg,
-                )
+            await router.client_broadcast(
+                status_msg.req.symbol,
+                status_msg,
+            )
 
             if status == 'closed':
-                log.info(
-                    f'Execution is complete!\n'
-                    f'oid: {oid!r}\n'
-                )
-                status_msg = book._active.pop(oid, None)
-                if status_msg is None:
-                    log.warning(
-                        f'Order was already cleared from book ??\n'
-                        f'oid: {oid!r}\n'
-                        f'\n'
-                        f'Maybe the order cancelled before submitted ??\n'
-                    )
+                log.info(f'Execution for {oid} is complete!')
+                status_msg = book._active.pop(oid)
 
             elif status == 'canceled':
                 log.cancel(f'Cancellation for {oid} is complete!')
@@ -1130,7 +1070,7 @@ async def translate_and_relay_brokerd_events(
             status_msg.req = order
 
             assert status_msg.src  # source tag?
-            oid: str = str(status_msg.reqid)
+            oid = str(status_msg.reqid)
 
             # attempt to avoid collisions
             status_msg.reqid = oid
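
The relay loop leans on Python 3.10+ structural pattern matching to dispatch raw `brokerd` dicts by their `name`/`status` fields before constructing typed msgs. A toy version of that dispatch style::

    def route(brokerd_msg: dict) -> str:
        match brokerd_msg:
            case {'name': 'error', 'oid': oid}:
                return f'error for dialog {oid}'

            case {'name': 'status', 'status': 'closed', 'reqid': reqid}:
                return f'dialog {reqid} complete'

            case {'name': 'status', 'status': status}:
                return f'unhandled status {status!r}'

            case _:
                return 'unknown msg'

    assert route({'name': 'error', 'oid': 'abc'}) == 'error for dialog abc'
    assert route({'name': 'status', 'status': 'open'}) == "unhandled status 'open'"
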
@@ -1147,28 +1087,38 @@ async def translate_and_relay_brokerd_events(
                 status_msg,
             )
 
+            # don't fall through
+            continue
+
+        # brokerd error
+        case {
+            'name': 'status',
+            'status': 'error',
+        }:
+            log.error(f'Broker error:\n{fmsg}')
+            # XXX: we presume the brokerd cancels its own order
+            continue
+
         # TOO FAST ``BrokerdStatus`` that arrives
         # before the ``BrokerdAck``.
-        # NOTE XXX: sometimes there is a race with the backend (like
-        # `ib` where the pending status will be relayed *before*
-        # the ack msg, in which case we just ignore the faster
-        # pending msg and wait for our expected ack to arrive
-        # later (i.e. the first block below should enter).
         case {
+            # XXX: sometimes there is a race with the backend (like
+            # `ib` where the pending status will be relayed before
+            # the ack, in which case we just ignore the faster
+            # pending msg and wait for our expected ack to arrive
+            # later (i.e. the first block below should enter).
             'name': 'status',
             'status': status,
             'reqid': reqid,
         }:
-            msg = (
-                f'Unhandled broker status for dialog {reqid}:\n'
-                f'{pformat(brokerd_msg)}'
-            )
-            if (
-                oid := book._ems2brokerd_ids.inverse.get(reqid)
-            ):
+            oid = book._ems2brokerd_ids.inverse.get(reqid)
+            msg = f'Unhandled broker status for dialog {reqid}:\n'
+            if oid:
+                status_msg = book._active.get(oid)
+                # status msg may not have been set yet or popped?
                 # NOTE: have seen a key error here on kraken
                 # clearable limits..
-                if status_msg := book._active.get(oid):
+                if status_msg:
                     msg += (
                         f'last status msg: {pformat(status_msg)}\n\n'
                         f'this msg:{fmsg}\n'
@@ -1204,16 +1154,12 @@ async def process_client_order_cmds(
     submitting live orders immediately if requested by the client.
 
     '''
-    # TODO, only allow `msgspec.Struct` form!
-    cmd: dict
+    # cmd: dict
     async for cmd in client_order_stream:
-        log.info(
-            f'Received order cmd:\n'
-            f'{pformat(cmd)}\n'
-        )
+        log.info(f'Received order cmd:\n{pformat(cmd)}')
 
         # CAWT DAMN we need struct support!
-        oid: str = str(cmd['oid'])
+        oid = str(cmd['oid'])
 
         # register this stream as an active order dialog (msg flow) for
         # this order id such that translated message from the brokerd
@@ -1268,7 +1214,7 @@ async def process_client_order_cmds(
                     BrokerdCancel(
                         oid=oid,
                         reqid=reqid,
-                        time_ns=time_ns(),
+                        time_ns=time.time_ns(),
                         account=order.account,
                     )
                )
@@ -1319,7 +1265,7 @@ async def process_client_order_cmds(
         case {
             'oid': oid,
             'symbol': fqme,
-            'price': price,
+            'price': trigger_price,
             'size': size,
             'action': ('buy' | 'sell') as action,
             'exec_mode': ('live' | 'paper'),
@@ -1343,7 +1289,7 @@ async def process_client_order_cmds(
 
             msg = BrokerdOrder(
                 oid=oid,  # no ib support for oids...
-                time_ns=time_ns(),
+                time_ns=time.time_ns(),
 
                 # if this is None, creates a new order
                 # otherwise will modify any existing one
@@ -1351,7 +1297,7 @@ async def process_client_order_cmds(
 
                 symbol=sym,
                 action=action,
-                price=price,
+                price=trigger_price,
                 size=size,
                 account=req.account,
             )
@@ -1361,7 +1307,7 @@ async def process_client_order_cmds(
                 oid=oid,
                 reqid=reqid,
                 resp='pending',
-                time_ns=time_ns(),
+                time_ns=time.time_ns(),
                 brokerd_msg=msg,
                 req=req,
             )
@@ -1373,11 +1319,7 @@ async def process_client_order_cmds(
             # (``translate_and_relay_brokerd_events()`` above) will
             # handle relaying the ems side responses back to
             # the client/cmd sender from this request
-            log.info(
-                f'Sending live order to {broker}:\n'
-                f'{pformat(msg)}'
-            )
+            log.info(f'Sending live order to {broker}:\n{pformat(msg)}')
 
             await brokerd_order_stream.send(msg)
 
             # an immediate response should be ``BrokerdOrderAck``
@@ -1393,7 +1335,7 @@ async def process_client_order_cmds(
         case {
             'oid': oid,
             'symbol': fqme,
-            'price': price,
+            'price': trigger_price,
             'size': size,
             'exec_mode': exec_mode,
             'action': action,
@@ -1421,12 +1363,7 @@ async def process_client_order_cmds(
             if isnan(last):
                 last = flume.rt_shm.array[-1]['close']
 
-            trigger_price: float = float(price)
-            pred = mk_check(
-                trigger_price,
-                last,
-                action,
-            )
+            pred = mk_check(trigger_price, last, action)
 
             # NOTE: for dark orders currently we submit
             # the triggered live order at a price 5 ticks
@@ -1487,7 +1424,7 @@ async def process_client_order_cmds(
             status = Status(
                 resp=resp,
                 oid=oid,
-                time_ns=time_ns(),
+                time_ns=time.time_ns(),
                 req=req,
                 src='dark',
             )
@@ -1533,7 +1470,7 @@ async def maybe_open_trade_relays(
     loglevel: str = 'info',
 ):
 
-    fqme, relay, feed, client_ready = await _router._tn.start(
+    fqme, relay, feed, client_ready = await _router.nursery.start(
         _router.open_trade_relays,
         fqme,
         exec_mode,
@@ -1563,18 +1500,19 @@ async def maybe_open_trade_relays(
 
 @tractor.context
 async def _emsd_main(
-    ctx: tractor.Context,  # becomes `ems_ctx` below
+    ctx: tractor.Context,
     fqme: str,
     exec_mode: str,  # ('paper', 'live')
-    loglevel: str|None = None,
+    loglevel: str | None = None,
 
-) -> tuple[  # `ctx.started()` value!
-    dict[  # positions
-        tuple[str, str],  # brokername, acctid
+) -> tuple[
+    dict[
+        # brokername, acctid
+        tuple[str, str],
         list[BrokerdPosition],
     ],
-    list[str],  # accounts
-    dict[str, Status],  # dialogs
+    list[str],
+    dict[str, Status],
 ]:
     '''
     EMS (sub)actor entrypoint providing the execution management
 
@@ -18,15 +18,39 @@
 Clearing sub-system message and protocols.
 
 """
-from __future__ import annotations
-from decimal import Decimal
+# from collections import (
+#     ChainMap,
+#     deque,
+# )
 from typing import (
     Literal,
 )
 
 from msgspec import field
 
-from piker.types import Struct
+from ..data.types import Struct
 
 
+# TODO: a composite for tracking msg flow on 2-legged
+# dialogs.
+# class Dialog(ChainMap):
+#     '''
+#     Msg collection abstraction to easily track the state changes of
+#     a msg flow in one high level, query-able and immutable construct.
+
+#     The main use case is to query data from a (long-running)
+#     msg-transaction-sequence
+
+
+#     '''
+#     def update(
+#         self,
+#         msg,
+#     ) -> None:
+#         self.maps.insert(0, msg.to_dict())

+#     def flatten(self) -> dict:
+#         return dict(self)
+
+
 # TODO: ``msgspec`` stuff worth paying attention to:
@@ -72,15 +96,7 @@ class Order(Struct):
     symbol: str  # | MktPair
     account: str  # should we set a default as '' ?
 
-    # https://docs.python.org/3/library/decimal.html#decimal-objects
-    #
-    # ?TODO? decimal usage throughout?
-    # -[ ] possibly leverage the `Encoder(decimal_format='number')`
-    #    bit?
-    #    |_https://jcristharif.com/msgspec/supported-types.html#decimal
-    # -[ ] should we also use it for .size?
-    #
-    price: Decimal
+    price: float
     size: float  # -ve is "sell", +ve is "buy"
 
     brokers: list[str] = []
|
||||||
|
|
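
For context on the ``Order.price`` hunk above: both sides type against
``msgspec.Struct``, and the main side's TODO points at ``msgspec``'s
native ``decimal.Decimal`` support. A hedged, standalone sketch of the
trade-off, assuming a recent ``msgspec`` with decimal support
(``DemoOrder`` is illustrative, not piker's actual type)::

    from decimal import Decimal
    import msgspec

    class DemoOrder(msgspec.Struct):
        symbol: str
        price: Decimal  # exact tick arithmetic, no float rounding
        size: float     # -ve is "sell", +ve is "buy"

    msg = DemoOrder(symbol='xbtusd.kraken', price=Decimal('29000.5'), size=-0.1)
    wire = msgspec.json.encode(msg)  # Decimal round-trips via a JSON string
    back = msgspec.json.decode(wire, type=DemoOrder)
    assert back.price == Decimal('29000.5')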
@@ -147,18 +163,6 @@ class Status(Struct):
     brokerd_msg: dict = {}


-class Error(Status):
-    resp: str = 'error'
-
-    # TODO: allow re-wrapping from existing (last) status?
-    @classmethod
-    def from_status(
-        cls,
-        msg: Status,
-    ) -> Error:
-        ...
-
-
 # ---------------
 # emsd -> brokerd
 # ---------------
@@ -187,7 +191,7 @@ class BrokerdOrder(Struct):
     time_ns: int

     symbol: str  # fqme
-    price: Decimal
+    price: float
     size: float

     # TODO: if we instead rely on a +ve/-ve size to determine
@@ -222,7 +226,6 @@ class BrokerdOrderAck(Struct):

     # emsd id originally sent in matching request msg
     oid: str
-    # TODO: do we need this?
     account: str = ''
     name: str = 'ack'
@@ -235,14 +238,13 @@ class BrokerdStatus(Struct):
         'open',
         'canceled',
         'pending',
-        # 'error',  # NOTE: use `BrokerdError`
+        'error',
         'closed',
     ]
-    name: str = 'status'

-    oid: str = ''
     # TODO: do we need this?
     account: str | None = None,
+    name: str = 'status'
     filled: float = 0.0
     reason: str = ''
     remaining: float = 0.0
@@ -285,25 +287,20 @@ class BrokerdError(Struct):
     This is still a TODO thing since we're not sure how to employ it yet.

     '''
+    oid: str
     reason: str

     # TODO: drop this right?
     symbol: str | None = None

-    oid: str | None = None
-
     # if no brokerd order request was actually submitted (eg. we errored
     # at the ``pikerd`` layer) then there will be ``reqid`` allocated.
-    reqid: str | None = None
+    reqid: int | str | None = None

     name: str = 'error'
     broker_details: dict = {}


-# TODO: yeah, so we REALLY need to completely deprecate
-# this and use the `.accounting.Position` msg-type instead..
-# -[ ] an alternative might be to add a `Position.summary() ->
-#     `PositionSummary`-msg that we generate since `Position` has a lot
-#     of fields by default we likely don't want to send over the wire?
 class BrokerdPosition(Struct):
     '''
     Position update event from brokerd.
@@ -316,4 +313,3 @@ class BrokerdPosition(Struct):
     avg_price: float
     currency: str = ''
     name: str = 'position'
-    bs_mktid: str|int|None = None
@@ -26,12 +26,10 @@ from contextlib import asynccontextmanager as acm
 from datetime import datetime
 from operator import itemgetter
 import itertools
-from pprint import pformat
 import time
 from typing import (
     Callable,
 )
-from types import ModuleType
 import uuid

 from bidict import bidict
@@ -39,26 +37,22 @@ import pendulum
 import trio
 import tractor

-from piker.brokers import get_brokermod
-from piker.service import find_service
-from piker.accounting import (
-    Account,
+from ..brokers import get_brokermod
+from .. import data
+from ..data.types import Struct
+from ..accounting._mktinfo import (
     MktPair,
+)
+from ..accounting import (
     Position,
+    PpTable,
     Transaction,
     TransactionLedger,
-    open_account,
     open_trade_ledger,
-    unpack_fqme,
+    open_pps,
 )
-from piker.data import (
-    Feed,
-    SymbologyCache,
-    iterticks,
-    open_feed,
-    open_symcache,
-)
-from piker.types import Struct
+from ..data import iterticks
+from ..accounting import unpack_fqme
 from ._util import (
     log,  # sub-sys logger
     get_console_log,
@@ -83,10 +77,11 @@ class PaperBoi(Struct):

     '''
     broker: str

     ems_trades_stream: tractor.MsgStream
-    acnt: Account
+    ppt: PpTable
     ledger: TransactionLedger
-    fees: Callable

     # map of paper "live" orders which be used
     # to simulate fills based on paper engine settings
@@ -268,44 +263,29 @@ class PaperBoi(Struct):
         # we don't actually have any unique backend symbol ourselves
         # other then this thing, our fqme address.
         bs_mktid: str = fqme
-        if fees := self.fees:
-            cost: float = fees(price, size)
-        else:
-            cost: float = 0

         t = Transaction(
             fqme=fqme,
+            sym=self._mkts[fqme],
             tid=oid,
             size=size,
             price=price,
-            cost=cost,
+            cost=0,  # TODO: cost model
             dt=pendulum.from_timestamp(fill_time_s),
             bs_mktid=bs_mktid,
         )

         # update in-mem ledger and pos table
         self.ledger.update_from_t(t)
-        self.acnt.update_from_ledger(
-            {oid: t},
-            symcache=self.ledger._symcache,
-
-            # XXX when a backend has no symcache support yet we can
-            # simply pass in the gmi() retreived table created
-            # during init :o
-            _mktmap_table=self._mkts,
-        )
+        self.ppt.update_from_trans({oid: t})

         # transmit pp msg to ems
-        pp: Position = self.acnt.pps[bs_mktid]
-        # TODO, this will break if `require_only=True` was passed to
-        # `.update_from_ledger()`
+        pp = self.ppt.pps[bs_mktid]

         pp_msg = BrokerdPosition(
             broker=self.broker,
             account='paper',
             symbol=fqme,

-            size=pp.cumsize,
+            size=pp.size,
             avg_price=pp.ppu,

             # TODO: we need to look up the asset currency from
@@ -316,7 +296,7 @@ class PaperBoi(Struct):
         # write all updates to filesys immediately
         # (adds latency but that works for simulation anyway)
         self.ledger.write_config()
-        self.acnt.write_config()
+        self.ppt.write_config()

         await self.ems_trades_stream.send(pp_msg)
@@ -345,7 +325,6 @@ async def simulate_fills(
     # this stream may eventually contain multiple symbols
     async for quotes in quote_stream:
         for sym, quote in quotes.items():
-            # print(sym)
             for tick in iterticks(
                 quote,
                 # dark order price filter(s)
@@ -510,7 +489,7 @@ async def handle_order_requests(
             reqid = await client.submit_limit(
                 oid=order.oid,
                 symbol=f'{order.symbol}.{client.broker}',
-                price=float(order.price),
+                price=order.price,
                 action=order.action,
                 size=order.size,
                 # XXX: by default 0 tells ``ib_insync`` methods that
@@ -561,186 +540,139 @@ async def open_trade_dialog(
     # enable piker.clearing console log for *this* subactor
     get_console_log(loglevel)

-    symcache: SymbologyCache
-    async with open_symcache(get_brokermod(broker)) as symcache:
-
-        acnt: Account
-        ledger: TransactionLedger
-        with (
-            # TODO: probably do the symcache and ledger loading
-            # implicitly behind this? Deliver an account, and ledger
-            # pair or make the ledger an attr of the account?
-            open_account(
-                broker,
-                'paper',
-                write_on_exit=True,
-            ) as acnt,
-
-            open_trade_ledger(
-                broker,
-                'paper',
-                symcache=symcache,
-            ) as ledger
-        ):
-            # NOTE: WE MUST retreive market(pair) info from each
-            # backend broker since ledger entries (in their
-            # provider-native format) often don't contain necessary
-            # market info per trade record entry..
-            # FURTHER, if no fqme was passed in, we presume we're
-            # running in "ledger-sync-only mode" and thus we load
-            # mkt info for each symbol found in the ledger to
-            # an acnt table manually.
-
-            # TODO: how to process ledger info from backends?
-            # - should we be rolling our own actor-cached version of these
-            #   client API refs or using portal IPC to send requests to the
-            #   existing brokerd daemon?
-            # - alternatively we can possibly expect and use
-            #   a `.broker.ledger.norm_trade()` ep?
-            brokermod: ModuleType = get_brokermod(broker)
-            gmi: Callable = getattr(brokermod, 'get_mkt_info', None)
-
-            # update all transactions with mkt info before
-            # loading any pps
-            mkt_by_fqme: dict[str, MktPair] = {}
-            if (
-                fqme
-                and fqme not in symcache.mktmaps
-            ):
-                log.warning(
-                    f'Symcache for {broker} has no `{fqme}` entry?\n'
-                    'Manually requesting mkt map data via `.get_mkt_info()`..'
-                )
-
-                bs_fqme, _, broker = fqme.rpartition('.')
-                mkt, pair = await gmi(bs_fqme)
-                mkt_by_fqme[mkt.fqme] = mkt
-
-            # for each sym in the ledger load its `MktPair` info
-            for tid, txdict in ledger.data.items():
-                l_fqme: str = txdict.get('fqme') or txdict['fqsn']
-
-                if (
-                    gmi
-                    and l_fqme not in symcache.mktmaps
-                    and l_fqme not in mkt_by_fqme
-                ):
-                    log.warning(
-                        f'Symcache for {broker} has no `{l_fqme}` entry?\n'
-                        'Manually requesting mkt map data via `.get_mkt_info()`..'
-                    )
-                    mkt, pair = await gmi(
-                        l_fqme.rstrip(f'.{broker}'),
-                    )
-                    mkt_by_fqme[l_fqme] = mkt
-
-                # if an ``fqme: str`` input was provided we only
-                # need a ``MktPair`` for that one market, since we're
-                # running in real simulated-clearing mode, not just ledger
-                # syncing.
-                if (
-                    fqme is not None
-                    and fqme in mkt_by_fqme
-                ):
-                    break
-
-            # update pos table from ledger history and provide a ``MktPair``
-            # lookup for internal position accounting calcs.
-            acnt.update_from_ledger(
-                ledger,
-
-                # NOTE: if the symcache fails on fqme lookup
-                # (either sycache not yet supported or not filled
-                # in) use manually constructed table from calling
-                # the `.get_mkt_info()` provider EP above.
-                _mktmap_table=mkt_by_fqme,
-                only_require=list(mkt_by_fqme),
-            )
-
-            pp_msgs: list[BrokerdPosition] = []
-            pos: Position
-            token: str  # f'{symbol}.{self.broker}'
-            for token, pos in acnt.pps.items():
-                pp_msgs.append(BrokerdPosition(
-                    broker=broker,
-                    account='paper',
-                    symbol=pos.mkt.fqme,
-                    size=pos.cumsize,
-                    avg_price=pos.ppu,
-                ))
-
-            await ctx.started((
-                pp_msgs,
-                ['paper'],
-            ))
-
-            # write new positions state in case ledger was
-            # newer then that tracked in pps.toml
-            acnt.write_config()
-
-            # exit early since no fqme was passed,
-            # normally this case is just to load
-            # positions "offline".
-            if fqme is None:
-                log.warning(
-                    'Paper engine only running in position delivery mode!\n'
-                    'NO SIMULATED CLEARING LOOP IS ACTIVE!'
-                )
-                await trio.sleep_forever()
-                return
-
-            feed: Feed
-            async with (
-                open_feed(
-                    [fqme],
-                    loglevel=loglevel,
-                ) as feed,
-            ):
-                # sanity check all the mkt infos
-                for fqme, flume in feed.flumes.items():
-                    mkt: MktPair = symcache.mktmaps.get(fqme) or mkt_by_fqme[fqme]
-                    if mkt != flume.mkt:
-                        diff: tuple = mkt - flume.mkt
-                        log.warning(
-                            'MktPair sig mismatch?\n'
-                            f'{pformat(diff)}'
-                        )
-
-                get_cost: Callable = getattr(
-                    brokermod,
-                    'get_cost',
-                    None,
-                )
-
-                async with (
-                    ctx.open_stream() as ems_stream,
-                    trio.open_nursery() as n,
-                ):
-                    client = PaperBoi(
-                        broker=broker,
-                        ems_trades_stream=ems_stream,
-                        acnt=acnt,
-                        ledger=ledger,
-                        fees=get_cost,
-
-                        _buys=_buys,
-                        _sells=_sells,
-                        _reqids=_reqids,
-
-                        _mkts=mkt_by_fqme,
-
-                    )
-
-                    n.start_soon(
-                        handle_order_requests,
-                        client,
-                        ems_stream,
-                    )
-
-                    # paper engine simulator clearing task
-                    await simulate_fills(feed.streams[broker], client)
+    ppt: PpTable
+    ledger: TransactionLedger
+    with (
+        open_pps(
+            broker,
+            'paper',
+            write_on_exit=True,
+        ) as ppt,
+
+        open_trade_ledger(
+            broker,
+            'paper',
+        ) as ledger
+    ):
+        # NOTE: retreive market(pair) info from the backend broker
+        # since ledger entries (in their backend native format) often
+        # don't contain necessary market info per trade record entry..
+        # - if no fqme was passed in, we presume we're running in
+        #   "ledger-sync-only mode" and thus we load mkt info for
+        #   each symbol found in the ledger to a ppt table manually.
+
+        # TODO: how to process ledger info from backends?
+        # - should we be rolling our own actor-cached version of these
+        #   client API refs or using portal IPC to send requests to the
+        #   existing brokerd daemon?
+        # - alternatively we can possibly expect and use
+        #   a `.broker.norm_trade_records()` ep?
+        brokermod = get_brokermod(broker)
+        gmi = getattr(brokermod, 'get_mkt_info', None)
+
+        # update all transactions with mkt info before
+        # loading any pps
+        mkt_by_fqme: dict[str, MktPair] = {}
+        if fqme:
+            bs_fqme, _, broker = fqme.rpartition('.')
+            mkt, _ = await brokermod.get_mkt_info(bs_fqme)
+            mkt_by_fqme[mkt.fqme] = mkt
+
+        # for each sym in the ledger load it's `MktPair` info
+        for tid, txdict in ledger.data.items():
+            l_fqme: str = txdict.get('fqme') or txdict['fqsn']
+
+            if (
+                gmi
+                and l_fqme not in mkt_by_fqme
+            ):
+                mkt, pair = await brokermod.get_mkt_info(
+                    l_fqme.rstrip(f'.{broker}'),
+                )
+                mkt_by_fqme[l_fqme] = mkt
+
+            # if an ``fqme: str`` input was provided we only
+            # need a ``MktPair`` for that one market, since we're
+            # running in real simulated-clearing mode, not just ledger
+            # syncing.
+            if (
+                fqme is not None
+                and fqme in mkt_by_fqme
+            ):
+                break
+
+        # update pos table from ledger history and provide a ``MktPair``
+        # lookup for internal position accounting calcs.
+        ppt.update_from_trans(ledger.to_trans(mkt_by_fqme=mkt_by_fqme))
+
+        pp_msgs: list[BrokerdPosition] = []
+        pos: Position
+        token: str  # f'{symbol}.{self.broker}'
+        for token, pos in ppt.pps.items():
+            pp_msgs.append(BrokerdPosition(
+                broker=broker,
+                account='paper',
+                symbol=pos.mkt.fqme,
+                size=pos.size,
+                avg_price=pos.ppu,
+            ))
+
+        await ctx.started((
+            pp_msgs,
+            ['paper'],
+        ))
+
+        # write new positions state in case ledger was
+        # newer then that tracked in pps.toml
+        ppt.write_config()
+
+        # exit early since no fqme was passed,
+        # normally this case is just to load
+        # positions "offline".
+        if fqme is None:
+            log.warning(
+                'Paper engine only running in position delivery mode!\n'
+                'NO SIMULATED CLEARING LOOP IS ACTIVE!'
+            )
+            await trio.sleep_forever()
+            return
+
+        async with (
+            data.open_feed(
+                [fqme],
+                loglevel=loglevel,
+            ) as feed,
+        ):
+            # sanity check all the mkt infos
+            for fqme, flume in feed.flumes.items():
+                assert mkt_by_fqme[fqme] == flume.mkt
+
+            async with (
+                ctx.open_stream() as ems_stream,
+                trio.open_nursery() as n,
+            ):
+                client = PaperBoi(
+                    broker=broker,
+                    ems_trades_stream=ems_stream,
+                    ppt=ppt,
+                    ledger=ledger,
+
+                    _buys=_buys,
+                    _sells=_sells,
+                    _reqids=_reqids,
+
+                    _mkts=mkt_by_fqme,
+
+                )
+
+                n.start_soon(
+                    handle_order_requests,
+                    client,
+                    ems_stream,
+                )
+
+                # paper engine simulator clearing task
+                await simulate_fills(feed.streams[broker], client)


 @acm
@@ -764,7 +696,7 @@ async def open_paperboi(
     service_name = f'paperboi.{broker}'

     async with (
-        find_service(service_name) as portal,
+        tractor.find_actor(service_name) as portal,
         tractor.open_nursery() as an,
     ):
         # NOTE: only spawn if no paperboi already is up since we likely
@@ -787,59 +719,7 @@ async def open_paperboi(
         ) as (ctx, first):
             yield ctx, first

-            # ALWAYS tear down connection AND any newly spawned
-            # paperboi actor on exit!
+            # tear down connection and any spawned actor on exit
             await ctx.cancel()

             if we_spawned:
                 await portal.cancel_actor()
-
-
-def norm_trade(
-    tid: str,
-    txdict: dict,
-    pairs: dict[str, Struct],
-    symcache: SymbologyCache | None = None,
-
-    brokermod: ModuleType | None = None,
-
-) -> Transaction:
-    from pendulum import (
-        DateTime,
-        parse,
-    )
-
-    # special field handling for datetimes
-    # to ensure pendulum is used!
-    dt: DateTime = parse(txdict['dt'])
-    expiry: str | None = txdict.get('expiry')
-    fqme: str = txdict.get('fqme') or txdict.pop('fqsn')
-
-    price: float = txdict['price']
-    size: float = txdict['size']
-    cost: float = txdict.get('cost', 0)
-    if (
-        brokermod
-        and (get_cost := getattr(
-            brokermod,
-            'get_cost',
-            False,
-        ))
-    ):
-        cost = get_cost(
-            price,
-            size,
-            is_taker=True,
-        )
-
-    return Transaction(
-        fqme=fqme,
-        tid=txdict['tid'],
-        dt=dt,
-        price=price,
-        size=size,
-        cost=cost,
-        bs_mktid=txdict['bs_mktid'],
-        expiry=parse(expiry) if expiry else None,
-        etype='clear',
-    )
@@ -25,18 +25,20 @@ from ..log import (
     get_logger,
     get_console_log,
 )
-from piker.types import Struct
+from piker.data.types import Struct

 subsys: str = 'piker.clearing'

 log = get_logger(subsys)

-# TODO, oof doesn't this ignore the `loglevel` then???
 get_console_log = partial(
     get_console_log,
     name=subsys,
 )


+# TODO: use this in other backends like kraken which currently has
+# a less formalized version more or less:
+# `apiflows[reqid].maps.append(status_msg.to_dict())`
 class OrderDialogs(Struct):
     '''
     Order control dialog (and thus transaction) tracking via
@@ -1,33 +1,30 @@
 # piker: trading gear for hackers
-# Copyright (C) 2018-present Tyler Goodlet
-# (in stewardship for pikers, everywhere.)
+# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)

-# This program is free software: you can redistribute it and/or
-# modify it under the terms of the GNU Affero General Public
-# License as published by the Free Software Foundation, either
-# version 3 of the License, or (at your option) any later version.
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.

 # This program is distributed in the hope that it will be useful,
 # but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Affero General Public License for more details.
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.

-# You should have received a copy of the GNU Affero General Public
-# License along with this program.  If not, see
-# <https://www.gnu.org/licenses/>.
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <https://www.gnu.org/licenses/>.

 '''
 CLI commons.

 '''
 import os
-# from contextlib import AsyncExitStack
+from contextlib import AsyncExitStack
 from types import ModuleType

 import click
 import trio
 import tractor
-from tractor._multiaddr import parse_maddr

 from ..log import (
     get_console_log,
@@ -45,97 +42,35 @@ from .. import config
 log = get_logger('piker.cli')


-def load_trans_eps(
-    network: dict | None = None,
-    maddrs: list[tuple] | None = None,
-
-) -> dict[str, dict[str, dict]]:
-
-    # transport-oriented endpoint multi-addresses
-    eps: dict[
-        str,  # service name, eg. `pikerd`, `emsd`..
-
-        # libp2p style multi-addresses parsed into prot layers
-        list[dict[str, str | int]]
-    ] = {}
-
-    if (
-        network
-        and not maddrs
-    ):
-        # load network section and (attempt to) connect all endpoints
-        # which are reachable B)
-        for key, maddrs in network.items():
-            match key:
-
-                # TODO: resolve table across multiple discov
-                # prots Bo
-                case 'resolv':
-                    pass
-
-                case 'pikerd':
-                    dname: str = key
-                    for maddr in maddrs:
-                        layers: dict = parse_maddr(maddr)
-                        eps.setdefault(
-                            dname,
-                            [],
-                        ).append(layers)
-
-    elif maddrs:
-        # presume user is manually specifying the root actor ep.
-        eps['pikerd'] = [parse_maddr(maddr)]
-
-    return eps
-
-
 @click.command()
+@click.option('--loglevel', '-l', default='warning', help='Logging level')
+@click.option('--tl', is_flag=True, help='Enable tractor logging')
+@click.option('--pdb', is_flag=True, help='Enable tractor debug mode')
+@click.option('--host', '-h', default=None, help='Host addr to bind')
+@click.option('--port', '-p', default=None, help='Port number to bind')
 @click.option(
-    '--loglevel',
-    '-l',
-    default='warning',
-    help='Logging level',
-)
-@click.option(
-    '--tl',
+    '--tsdb',
     is_flag=True,
-    help='Enable tractor-runtime logs',
+    help='Enable local ``marketstore`` instance'
 )
 @click.option(
-    '--pdb',
+    '--es',
     is_flag=True,
-    help='Enable tractor debug mode',
+    help='Enable local ``elasticsearch`` instance'
 )
-@click.option(
-    '--maddr',
-    '-m',
-    default=None,
-    help='Multiaddrs to bind or contact',
-)
-# @click.option(
-#     '--tsdb',
-#     is_flag=True,
-#     help='Enable local ``marketstore`` instance'
-# )
-# @click.option(
-#     '--es',
-#     is_flag=True,
-#     help='Enable local ``elasticsearch`` instance'
-# )
 def pikerd(
-    maddr: list[str] | None,
     loglevel: str,
+    host: str,
+    port: int,
     tl: bool,
     pdb: bool,
-    # tsdb: bool,
-    # es: bool,
+    tsdb: bool,
+    es: bool,
 ):
     '''
     Spawn the piker broker-daemon.

     '''
-    # from tractor.devx import maybe_open_crash_handler
-    # with maybe_open_crash_handler(pdb=False):
     log = get_console_log(loglevel, name='cli')

     if pdb:
@@ -147,49 +82,46 @@ def pikerd(
         "\n"
     ))

-    # service-actor registry endpoint socket-address set
-    regaddrs: list[tuple[str, int]] = []
-
-    conf, _ = config.load(
-        conf_name='conf',
-    )
-    network: dict = conf.get('network')
-    if (
-        network is None
-        and not maddr
-    ):
-        regaddrs = [(
-            _default_registry_host,
-            _default_registry_port,
-        )]
-
-    else:
-        eps: dict = load_trans_eps(
-            network,
-            maddr,
+    reg_addr: None | tuple[str, int] = None
+    if host or port:
+        reg_addr = (
+            host or _default_registry_host,
+            int(port) or _default_registry_port,
         )
-        for layers in eps['pikerd']:
-            regaddrs.append((
-                layers['ipv4']['addr'],
-                layers['tcp']['port'],
-            ))

     from .. import service

     async def main():
         service_mngr: service.Services

         async with (
             service.open_pikerd(
-                registry_addrs=regaddrs,
                 loglevel=loglevel,
                 debug_mode=pdb,
-                # enable_transports=['uds'],
-                enable_transports=['tcp'],
-            ) as service_mngr,
+                registry_addr=reg_addr,
+            ) as service_mngr,  # normally delivers a ``Services`` handle

+            AsyncExitStack() as stack,
         ):
-            assert service_mngr
-            # ?TODO? spawn all other sub-actor daemons according to
-            # multiaddress endpoint spec defined by user config
+            if tsdb:
+                dname, conf = await stack.enter_async_context(
+                    service.marketstore.start_ahab_daemon(
+                        service_mngr,
+                        loglevel=loglevel,
+                    )
+                )
+                log.info(f'TSDB `{dname}` up with conf:\n{conf}')
+
+            if es:
+                dname, conf = await stack.enter_async_context(
+                    service.elastic.start_ahab_daemon(
+                        service_mngr,
+                        loglevel=loglevel,
+                    )
+                )
+                log.info(f'DB `{dname}` up with conf:\n{conf}')

             await trio.sleep_forever()

     trio.run(main)
@@ -205,24 +137,8 @@ def pikerd(
 @click.option('--loglevel', '-l', default='warning', help='Logging level')
 @click.option('--tl', is_flag=True, help='Enable tractor logging')
 @click.option('--configdir', '-c', help='Configuration directory')
-@click.option(
-    '--pdb',
-    is_flag=True,
-    help='Enable runtime debug mode ',
-)
-@click.option(
-    '--maddr',
-    '-m',
-    default=None,
-    multiple=True,
-    help='Multiaddr to bind',
-)
-@click.option(
-    '--regaddr',
-    '-r',
-    default=None,
-    help='Registrar addr to contact',
-)
+@click.option('--host', '-h', default=None, help='Host addr to bind')
+@click.option('--port', '-p', default=None, help='Port number to bind')
 @click.pass_context
 def cli(
     ctx: click.Context,
@@ -230,11 +146,8 @@ def cli(
     loglevel: str,
     tl: bool,
     configdir: str,
-    pdb: bool,
-
-    # TODO: make these list[str] with multiple -m maddr0 -m maddr1
-    maddr: list[str],
-    regaddr: str,
+    host: str,
+    port: int,

 ) -> None:
     if configdir is not None:
@@ -255,20 +168,12 @@ def cli(
     }
     assert brokermods

-    # TODO: load endpoints from `conf::[network].pikerd`
-    # - pikerd vs. regd, separate registry daemon?
-    # - expose datad vs. brokerd?
-    # - bind emsd with certain perms on public iface?
-    regaddrs: list[tuple[str, int]] = regaddr or [(
-        _default_registry_host,
-        _default_registry_port,
-    )]
-
-    # TODO: factor [network] section parsing out from pikerd
-    # above and call it here as well.
-    # if maddr:
-    #     for addr in maddr:
-    #         layers: dict = parse_maddr(addr)
+    reg_addr: None | tuple[str, int] = None
+    if host or port:
+        reg_addr = (
+            host or _default_registry_host,
+            int(port) or _default_registry_port,
+        )

     ctx.obj.update({
         'brokers': brokers,
@@ -278,12 +183,7 @@ def cli(
         'log': get_console_log(loglevel),
         'confdir': config._config_dir,
         'wl_path': config._watchlists_data_path,
-        'registry_addrs': regaddrs,
-        'pdb': pdb,  # debug mode flag
-
-        # TODO: endpoint parsing, pinging and binding
-        # on no existing server.
-        # 'maddrs': maddr,
+        'registry_addr': reg_addr,
     })

     # allow enabling same loglevel in ``tractor`` machinery
@@ -307,10 +207,6 @@ def services(config, tl, ports):
     if not ports:
         ports = [_default_registry_port]

-    addr = tractor._addr.wrap_address(
-        addr=(host, ports[0])
-    )
-
     async def list_services():
         nonlocal host
         async with (
@@ -318,25 +214,24 @@ def services(config, tl, ports):
             name='service_query',
             loglevel=config['loglevel'] if tl else None,
         ),
-        tractor.get_registry(
-            addr=addr,
+        tractor.get_arbiter(
+            host=host,
+            port=ports[0]
         ) as portal
     ):
-        registry = await portal.run_from_ns(
-            'self',
-            'get_registry',
-        )
+        registry = await portal.run_from_ns('self', 'get_registry')
         json_d = {}
         for key, socket in registry.items():
-            json_d[key] = f'{socket}'
+            host, port = socket
+            json_d[key] = f'{host}:{port}'
         click.echo(f"{colorize_json(json_d)}")

     trio.run(list_services)


 def _load_clis() -> None:
-    # from ..service import elastic  # noqa
+    from ..service import marketstore  # noqa
+    from ..service import elastic  # noqa
     from ..brokers import cli  # noqa
     from ..ui import cli  # noqa
     from ..watchlists import cli  # noqa
101 piker/config.py
@@ -41,13 +41,10 @@ from .log import get_logger
 log = get_logger('broker-config')


-# XXX NOTE: taken from `click`
-# |_https://github.com/pallets/click/blob/main/src/click/utils.py#L449
-#
-# (since apparently they have some super weirdness with SIGINT and
-# sudo.. no clue we're probably going to slowly just modify it to our
-# own version over time..)
-#
+# XXX NOTE: taken from ``click`` since apparently they have some
+# super weirdness with sigint and sudo..no clue
+# we're probably going to slowly just modify it to our own version over
+# time..
 def get_app_dir(
     app_name: str,
     roaming: bool = True,
@@ -107,15 +104,14 @@ def get_app_dir(
     # `tractor`) with the testing dir and check for it whenever we
     # detect `pytest` is being used (which it isn't under normal
     # operation).
-    # if "pytest" in sys.modules:
-    #     import tractor
-    #     actor = tractor.current_actor(err_on_no_runtime=False)
-    #     if actor:  # runtime is up
-    #         rvs = tractor._state._runtime_vars
-    #         import pdbp; pdbp.set_trace()
-    #         testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
-    #         assert testdirpath.exists(), 'piker test harness might be borked!?'
-    #         app_name = str(testdirpath)
+    if "pytest" in sys.modules:
+        import tractor
+        actor = tractor.current_actor(err_on_no_runtime=False)
+        if actor:  # runtime is up
+            rvs = tractor._state._runtime_vars
+            testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
+            assert testdirpath.exists(), 'piker test harness might be borked!?'
+            app_name = str(testdirpath)

     if platform.system() == 'Windows':
         key = "APPDATA" if roaming else "LOCALAPPDATA"
@@ -138,19 +134,14 @@ def get_app_dir(

 _click_config_dir: Path = Path(get_app_dir('piker'))
 _config_dir: Path = _click_config_dir
+_parent_user: str = os.environ.get('SUDO_USER')

-# NOTE: when using `sudo` we attempt to determine the non-root user
-# and still use their normal config dir.
-if (
-    (_parent_user := os.environ.get('SUDO_USER'))
-    and
-    _parent_user != 'root'
-):
+if _parent_user:
     non_root_user_dir = Path(
         os.path.expanduser(f'~{_parent_user}')
     )
     root: str = 'root'
-    _ccds: str = str(_click_config_dir)  # click config dir as string
+    _ccds: str = str(_click_config_dir)  # click config dir string
     i_tail: int = int(_ccds.rfind(root) + len(root))
     _config_dir = (
         non_root_user_dir
@@ -255,8 +246,7 @@ def repodir() -> Path:


 def load(
-    # NOTE: always appended with .toml suffix
-    conf_name: str = 'conf',
+    conf_name: str = 'brokers',  # appended with .toml suffix
     path: Path | None = None,

     decode: Callable[
@@ -264,7 +254,7 @@ def load(
         MutableMapping,
     ] = tomllib.loads,

-    touch_if_dne: bool = True,
+    touch_if_dne: bool = False,

     **tomlkws,
@@ -273,7 +263,7 @@ def load(
     Load config file by name.

     If desired config is not in the top level piker-user config path then
-    pass the `path: Path` explicitly.
+    pass the ``path: Path`` explicitly.

     '''
     # create the $HOME/.config/piker dir if dne
@@ -288,8 +278,7 @@ def load(

     if (
         not path.is_file()
-        and
-        touch_if_dne
+        and touch_if_dne
     ):
         # only do a template if no path provided,
         # just touch an empty file with same name.
@@ -368,9 +357,7 @@ def load_accounts(

 ) -> bidict[str, str | None]:

-    conf, path = load(
-        conf_name='brokers',
-    )
+    conf, path = load()
     accounts = bidict()
     for provider_name, section in conf.items():
         accounts_section = section.get('accounts')
@@ -391,3 +378,51 @@ def load_accounts(
         accounts['paper'] = None

     return accounts
+
+
+# XXX: Recursive getting & setting
+
+def get_value(_dict, _section):
+    subs = _section.split('.')
+    if len(subs) > 1:
+        return get_value(
+            _dict[subs[0]],
+            '.'.join(subs[1:]),
+        )
+
+    else:
+        return _dict[_section]
+
+
+def set_value(_dict, _section, val):
+    subs = _section.split('.')
+    if len(subs) > 1:
+        if subs[0] not in _dict:
+            _dict[subs[0]] = {}
+
+        return set_value(
+            _dict[subs[0]],
+            '.'.join(subs[1:]),
+            val
+        )
+
+    else:
+        _dict[_section] = val
+
+
+def del_value(_dict, _section):
+    subs = _section.split('.')
+    if len(subs) > 1:
+        if subs[0] not in _dict:
+            return
+
+        return del_value(
+            _dict[subs[0]],
+            '.'.join(subs[1:])
+        )
+
+    else:
+        if _section not in _dict:
+            return
+
+        del _dict[_section]
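
For context on the ``+`` block above: it adds recursive dotted-path
accessors for nested config sections. Assuming they land exactly as
written there, usage looks like this (data values are made up)::

    conf = {'kraken': {'accounts': {'main': 'key123'}}}

    assert get_value(conf, 'kraken.accounts.main') == 'key123'

    set_value(conf, 'binance.accounts.paper', None)  # creates nested sections
    assert conf['binance'] == {'accounts': {'paper': None}}

    del_value(conf, 'kraken.accounts.main')  # silently no-ops on missing keys
    assert 'main' not in conf['kraken']['accounts']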
@@ -39,33 +39,18 @@ from .feed import (
     open_feed,
 )
 from .flows import Flume
-from ._symcache import (
-    SymbologyCache,
-    open_symcache,
-    get_symcache,
-    match_from_pairs,
-)
-from ._sampling import open_sample_stream
-from ..types import Struct


-__all__: list[str] = [
+__all__ = [
     'Flume',
     'Feed',
     'open_feed',
     'ShmArray',
     'iterticks',
     'maybe_open_shm_array',
-    'match_from_pairs',
     'attach_shm_array',
     'open_shm_array',
     'get_shm_token',
     'def_iohlcv_fields',
     'def_ohlcv_fields',
-    'open_symcache',
-    'open_sample_stream',
-    'get_symcache',
-    'Struct',
-    'SymbologyCache',
-    'types',
 ]
@@ -1,5 +1,5 @@
 # piker: trading gear for hackers
-# Copyright (C) Tyler Goodlet (in stewardship for pikers)
+# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)

 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as published by
@@ -13,10 +13,10 @@

 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.
-'''
+"""
 Pre-(path)-graphics formatted x/y nd/1d rendering subsystem.

-'''
+"""
 from __future__ import annotations
 from typing import (
     Optional,
@@ -39,12 +39,7 @@ if TYPE_CHECKING:
     from ._dataviz import (
         Viz,
     )
-    from piker.toolz import Profiler
+    from .._profile import Profiler

-
-# default gap between bars: "bar gap multiplier"
-# - 0.5 is no overlap between OC arms,
-# - 1.0 is full overlap on each neighbor sample
-BGM: float = 0.16


 class IncrementalFormatter(msgspec.Struct):
@@ -518,7 +513,6 @@ class IncrementalFormatter(msgspec.Struct):


 class OHLCBarsFmtr(IncrementalFormatter):
-
     x_offset: np.ndarray = np.array([
         -0.5,
         0,
@@ -610,9 +604,8 @@ class OHLCBarsFmtr(IncrementalFormatter):
         vr: tuple[int, int],

         start: int = 0,  # XXX: do we need this?
-
         # 0.5 is no overlap between arms, 1.0 is full overlap
-        gap: float = BGM,
+        w: float = 0.16,

     ) -> tuple[
         np.ndarray,
@@ -629,7 +622,7 @@ class OHLCBarsFmtr(IncrementalFormatter):
             array[:-1],
             start,
             bar_w=self.index_step_size,
-            bar_gap=gap * self.index_step_size,
+            bar_gap=w * self.index_step_size,

             # XXX: don't ask, due to a ``numba`` bug..
             use_time_index=(self.index_field == 'time'),
@@ -17,6 +17,11 @@
 Super fast ``QPainterPath`` generation related operator routines.

 """
+from math import (
+    ceil,
+    floor,
+)
+
 import numpy as np
 from numpy.lib import recfunctions as rfn
 from numba import (
@@ -30,6 +35,11 @@ from numba import (
 # TODO: for ``numba`` typing..
 # from ._source import numba_ohlc_dtype
 from ._m4 import ds_m4
+from .._profile import (
+    Profiler,
+    pg_profile_enabled,
+    ms_slower_then,
+)


 def xy_downsample(
@@ -125,7 +135,7 @@ def path_arrays_from_ohlc(
     half_w: float = bar_w/2

     # TODO: report bug for assert @
-    # ../piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
+    # /home/goodboy/repos/piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
     for i, q in enumerate(data[start:], start):

         open = q['open']
@@ -227,20 +237,20 @@ def trace_hl(

     for i in range(hl.size):
         row = hl[i]
-        lo, hi = row['low'], row['high']
+        l, h = row['low'], row['high']

-        up_diff = hi - last_l
-        down_diff = last_h - lo
+        up_diff = h - last_l
+        down_diff = last_h - l

         if up_diff > down_diff:
-            out[2*i + 1] = hi
+            out[2*i + 1] = h
             out[2*i] = last_l
         else:
-            out[2*i + 1] = lo
+            out[2*i + 1] = l
             out[2*i] = last_h

-        last_l = lo
-        last_h = hi
+        last_l = l
+        last_h = h

         x[2*i] = int(i) - margin
         x[2*i + 1] = int(i) + margin
@@ -33,11 +33,6 @@ from typing import (
 )

 import tractor
-from tractor import (
-    Context,
-    MsgStream,
-    Channel,
-)
 from tractor.trionics import (
     maybe_open_nursery,
 )
@@ -58,10 +53,7 @@ if TYPE_CHECKING:
     from ._sharedmem import (
         ShmArray,
     )
-    from .feed import (
-        _FeedsBus,
-        Sub,
-    )
+    from .feed import _FeedsBus


 # highest frequency sample step is 1 second by default, though in
@@ -95,12 +87,6 @@ class Sampler:
     # history loading.
     incr_task_cs: trio.CancelScope | None = None

-    bcast_errors: tuple[Exception] = (
-        trio.BrokenResourceError,
-        trio.ClosedResourceError,
-        trio.EndOfChannel,
-    )
-
     # holds all the ``tractor.Context`` remote subscriptions for
     # a particular sample period increment event: all subscribers are
     # notified on a step.
@@ -108,7 +94,7 @@ class Sampler:
         float,
         list[
             float,
-            set[MsgStream]
+            set[tractor.MsgStream]
         ],
     ] = defaultdict(
         lambda: [
@@ -264,17 +250,16 @@ class Sampler:
         subs: set
         last_ts, subs = pair

-        # NOTE, for debugging pub-sub issues
-        # task = trio.lowlevel.current_task()
-        # log.debug(
-        #     f'AlL-SUBS@{period_s!r}: {self.subscribers}\n'
-        #     f'PAIR: {pair}\n'
-        #     f'TASK: {task}: {id(task)}\n'
-        #     f'broadcasting {period_s} -> {last_ts}\n'
-        #     # f'consumers: {subs}'
-        # )
-        borked: set[MsgStream] = set()
-        sent: set[MsgStream] = set()
+        task = trio.lowlevel.current_task()
+        log.debug(
+            f'SUBS {self.subscribers}\n'
+            f'PAIR {pair}\n'
+            f'TASK: {task}: {id(task)}\n'
+            f'broadcasting {period_s} -> {last_ts}\n'
+            # f'consumers: {subs}'
+        )
+        borked: set[tractor.MsgStream] = set()
+        sent: set[tractor.MsgStream] = set()
         while True:
             try:
                 for stream in (subs - sent):
@@ -289,11 +274,12 @@ class Sampler:
                         await stream.send(msg)
                         sent.add(stream)

-                except self.bcast_errors as err:
+                except (
+                    trio.BrokenResourceError,
+                    trio.ClosedResourceError
+                ):
                     log.error(
-                        f'Connection dropped for IPC ctx\n'
-                        f'{stream._ctx}\n\n'
-                        f'Due to {type(err)}'
+                        f'{stream._ctx.chan.uid} dropped connection'
                     )
                     borked.add(stream)
                 else:
@@ -328,7 +314,7 @@ class Sampler:

 @tractor.context
 async def register_with_sampler(
-    ctx: Context,
+    ctx: tractor.Context,
     period_s: float,
     shms_by_period: dict[float, dict] | None = None,
@@ -400,8 +386,7 @@ async def register_with_sampler(
     finally:
         if (
             sub_for_broadcasts
-            and
-            subs
+            and subs
         ):
             try:
                 subs.remove(stream)
@@ -568,7 +553,8 @@ async def open_sample_stream(


 async def sample_and_broadcast(
-    bus: _FeedsBus,
+
+    bus: _FeedsBus,  # noqa
     rt_shm: ShmArray,
     hist_shm: ShmArray,
     quote_stream: trio.abc.ReceiveChannel,
@@ -588,33 +574,11 @@ async def sample_and_broadcast(

     overruns = Counter()

-    # NOTE, only used for debugging live-data-feed issues, though
-    # this should be resolved more correctly in the future using the
-    # new typed-msgspec feats of `tractor`!
-    #
-    # XXX, a multiline nested `dict` formatter (since rn quote-msgs
-    # are just that).
-    # pfmt: Callable[[str], str] = mk_repr()
-
     # iterate stream delivered by broker
     async for quotes in quote_stream:
         # print(quotes)

-        # XXX WARNING XXX only enable for debugging bc ow can cost
-        # ALOT of perf with HF-feedz!!!
-        #
-        # log.info(
-        #     'Rx live quotes:\n'
-        #     f'{pfmt(quotes)}'
-        # )
-
-        # TODO,
-        # -[ ] `numba` or `cython`-nize this loop possibly?
-        #  |_alternatively could we do it in rust somehow by upacking
-        #    arrow msgs instead of using `msgspec`?
-        # -[ ] use `msgspec.Struct` support in new typed-msging from
-        #    `tractor` to ensure only allowed msgs are transmitted?
-        #
+        # TODO: ``numba`` this!
         for broker_symbol, quote in quotes.items():
             # TODO: in theory you can send the IPC msg *before* writing
             # to the sharedmem array to decrease latency, however, that
@@ -685,22 +649,12 @@ async def sample_and_broadcast(
             # eventually block this producer end of the feed and
             # thus other consumers still attached.
             sub_key: str = broker_symbol.lower()
-            subs: set[Sub] = bus.get_subs(sub_key)
-
-            # TODO, figure out how to make this useful whilst
-            # incoporating feed "pausing" ..
-            #
-            # if not subs:
-            #     all_bs_fqmes: list[str] = list(
-            #         bus._subscribers.keys()
-            #     )
-            #     log.warning(
-            #         f'No subscribers for {brokername!r} live-quote ??\n'
-            #         f'broker_symbol: {broker_symbol}\n\n'
-
-            #         f'Maybe the backend-sys symbol does not match one of,\n'
-            #         f'{pfmt(all_bs_fqmes)}\n'
-            #     )
+            subs: list[
+                tuple[
+                    tractor.MsgStream | trio.MemorySendChannel,
+                    float | None,  # tick throttle in Hz
+                ]
+            ] = bus.get_subs(sub_key)

             # NOTE: by default the broker backend doesn't append
             # it's own "name" into the fqme schema (but maybe it
@@ -709,40 +663,34 @@ async def sample_and_broadcast(
             fqme: str = f'{broker_symbol}.{brokername}'
             lags: int = 0

-            # XXX TODO XXX: speed up this loop in an AOT compiled
-            # lang (like rust or nim or zig)!
-            # AND/OR instead of doing a fan out to TCP sockets
-            # here, we add a shm-style tick queue which readers can
-            # pull from instead of placing the burden of broadcast
-            # on solely on this `brokerd` actor. see issues:
+            # TODO: speed up this loop in an AOT compiled lang (like
+            # rust or nim or zig) and/or instead of doing a fan out to
+            # TCP sockets here, we add a shm-style tick queue which
+            # readers can pull from instead of placing the burden of
+            # broadcast on solely on this `brokerd` actor. see issues:
             # - https://github.com/pikers/piker/issues/98
             # - https://github.com/pikers/piker/issues/107

-            # for (stream, tick_throttle) in subs.copy():
-            for sub in subs.copy():
-                ipc: MsgStream = sub.ipc
-                throttle: float = sub.throttle_rate
+            for (stream, tick_throttle) in subs.copy():
                 try:
                     with trio.move_on_after(0.2) as cs:
-                        if throttle:
-                            send_chan: trio.abc.SendChannel = sub.send_chan
+                        if tick_throttle:

                             # this is a send mem chan that likely
                             # pushes to the ``uniform_rate_send()`` below.
                             try:
-                                send_chan.send_nowait(
+                                stream.send_nowait(
                                     (fqme, quote)
                                 )
                             except trio.WouldBlock:
                                 overruns[sub_key] += 1
-                                ctx: Context = ipc._ctx
-                                chan: Channel = ctx.chan
+                                ctx = stream._ctx
+                                chan = ctx.chan

                                 log.warning(
                                     f'Feed OVERRUN {sub_key}'
-                                    f'@{bus.brokername} -> \n'
+                                    '@{bus.brokername} -> \n'
                                     f'feed @ {chan.uid}\n'
-                                    f'throttle = {throttle} Hz'
+                                    f'throttle = {tick_throttle} Hz'
                                 )

                                 if overruns[sub_key] > 6:

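For reference, a self-contained sketch of the non-blocking fan-out
pattern used above: attempt a ``send_nowait()`` per subscriber and
tally overruns rather than ever blocking the producer (all names here
are invented for illustration, this is not piker's actual code)::

    from collections import Counter

    import trio

    async def fan_out(
        quote_stream,  # any async-iterable of quote msgs
        subs: list[trio.MemorySendChannel],
    ) -> None:
        overruns: Counter = Counter()
        async for quote in quote_stream:
            for i, chan in enumerate(list(subs)):
                try:
                    chan.send_nowait(quote)
                except trio.WouldBlock:
                    # slow consumer: count it instead of stalling
                    # the (high frequency) feed for everyone else.
                    overruns[i] += 1
                    if overruns[i] > 6:
                        subs.remove(chan)  # drop the laggard
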
@@ -759,29 +707,33 @@ async def sample_and_broadcast(
                                         f'{sub_key}:'
                                         f'{ctx.cid}@{chan.uid}'
                                     )
-                                    await ipc.aclose()
+                                    await stream.aclose()
                                     raise trio.BrokenResourceError
                         else:
-                            await ipc.send(
+                            await stream.send(
                                 {fqme: quote}
                             )

                     if cs.cancelled_caught:
                         lags += 1
                         if lags > 10:
-                            await tractor.pause()
+                            await tractor.breakpoint()

-                except Sampler.bcast_errors as ipc_err:
-                    ctx: Context = ipc._ctx
-                    chan: Channel = ctx.chan
+                except (
+                    trio.BrokenResourceError,
+                    trio.ClosedResourceError,
+                    trio.EndOfChannel,
+                ):
+                    ctx = stream._ctx
+                    chan = ctx.chan
                     if ctx:
                         log.warning(
-                            f'Dropped `brokerd`-feed for {broker_symbol!r} due to,\n'
-                            f'x>) {ctx.cid}@{chan.uid}'
-                            f'|_{ipc_err!r}\n\n'
+                            'Dropped `brokerd`-quotes-feed connection:\n'
+                            f'{broker_symbol}:'
+                            f'{ctx.cid}@{chan.uid}'
                         )
-                    if sub.throttle_rate:
-                        assert ipc._closed
+                    if tick_throttle:
+                        assert stream._closed

                     # XXX: do we need to deregister here
                     # if it's done in the fee bus code?

@@ -790,16 +742,17 @@ async def sample_and_broadcast(
                     # since there seems to be some kinda race..
                     bus.remove_subs(
                         sub_key,
-                        {sub},
+                        {(stream, tick_throttle)},
                     )


 async def uniform_rate_send(

     rate: float,
     quote_stream: trio.abc.ReceiveChannel,
-    stream: MsgStream,
+    stream: tractor.MsgStream,

-    task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,
+    task_status: TaskStatus = trio.TASK_STATUS_IGNORED,

 ) -> None:
     '''

@@ -817,16 +770,13 @@ async def uniform_rate_send(
     https://gist.github.com/njsmith/7ea44ec07e901cb78ebe1dd8dd846cb9

     '''
-    # ?TODO? dynamically compute the **actual** approx overhead latency per cycle
-    # instead of this magic # bidinezz?
-    throttle_period: float = 1/rate - 0.000616
-    left_to_sleep: float = throttle_period
+    # TODO: compute the approx overhead latency per cycle
+    left_to_sleep = throttle_period = 1/rate - 0.000616

     # send cycle state
-    first_quote: dict|None
     first_quote = last_quote = None
-    last_send: float = time.time()
-    diff: float = 0
+    last_send = time.time()
+    diff = 0

     task_status.started()
     ticks_by_type: dict[

|
||||||
clear_types = _tick_groups['clears']
|
clear_types = _tick_groups['clears']
|
||||||
|
|
||||||
while True:
|
while True:
|
||||||
|
|
||||||
# compute the remaining time to sleep for this throttled cycle
|
# compute the remaining time to sleep for this throttled cycle
|
||||||
left_to_sleep: float = throttle_period - diff
|
left_to_sleep = throttle_period - diff
|
||||||
|
|
||||||
if left_to_sleep > 0:
|
if left_to_sleep > 0:
|
||||||
cs: trio.CancelScope
|
|
||||||
with trio.move_on_after(left_to_sleep) as cs:
|
with trio.move_on_after(left_to_sleep) as cs:
|
||||||
sym: str
|
|
||||||
last_quote: dict
|
|
||||||
try:
|
try:
|
||||||
sym, last_quote = await quote_stream.receive()
|
sym, last_quote = await quote_stream.receive()
|
||||||
except trio.EndOfChannel:
|
except trio.EndOfChannel:
|
||||||
log.exception(
|
log.exception(f"feed for {stream} ended?")
|
||||||
f'Live stream for feed for ended?\n'
|
|
||||||
f'<=c\n'
|
|
||||||
f' |_[{stream!r}\n'
|
|
||||||
)
|
|
||||||
break
|
break
|
||||||
|
|
||||||
diff: float = time.time() - last_send
|
diff = time.time() - last_send
|
||||||
|
|
||||||
if not first_quote:
|
if not first_quote:
|
||||||
first_quote: float = last_quote
|
first_quote = last_quote
|
||||||
# first_quote['tbt'] = ticks_by_type
|
# first_quote['tbt'] = ticks_by_type
|
||||||
|
|
||||||
if (throttle_period - diff) > 0:
|
if (throttle_period - diff) > 0:
|
||||||
|
|
@@ -919,9 +863,7 @@ async def uniform_rate_send(
             # TODO: now if only we could sync this to the display
             # rate timing exactly lul
             try:
-                await stream.send({
-                    sym: first_quote
-                })
+                await stream.send({sym: first_quote})
             except tractor.RemoteActorError as rme:
                 if rme.type is not tractor._exceptions.StreamOverrun:
                     raise

@@ -932,28 +874,19 @@ async def uniform_rate_send(
                         f'{sym}:{ctx.cid}@{chan.uid}'
                     )

-            # NOTE: any of these can be raised by `tractor`'s IPC
-            # transport-layer and we want to be highly resilient
-            # to consumers which crash or lose network connection.
-            # I.e. we **DO NOT** want to crash and propagate up to
-            # ``pikerd`` these kinds of errors!
             except (
+                # NOTE: any of these can be raised by ``tractor``'s IPC
+                # transport-layer and we want to be highly resilient
+                # to consumers which crash or lose network connection.
+                # I.e. we **DO NOT** want to crash and propagate up to
+                # ``pikerd`` these kinds of errors!
+                trio.ClosedResourceError,
+                trio.BrokenResourceError,
                 ConnectionResetError,
-            ) + Sampler.bcast_errors as ipc_err:
-                match ipc_err:
-                    case trio.EndOfChannel():
-                        log.info(
-                            f'{stream} terminated by peer,\n'
-                            f'{ipc_err!r}'
-                        )
-                    case _:
-                        # if the feed consumer goes down then drop
-                        # out of this rate limiter
-                        log.warning(
-                            f'{stream} closed due to,\n'
-                            f'{ipc_err!r}'
-                        )
+            ):
+                # if the feed consumer goes down then drop
+                # out of this rate limiter
+                log.warning(f'{stream} closed')

                 await stream.aclose()
                 return

@@ -34,7 +34,7 @@ import tractor

 from ._util import log
 from ._source import def_iohlcv_fields
-from piker.types import Struct
+from .types import Struct


 def cuckoff_mantracker():

@@ -1,534 +0,0 @@
-# piker: trading gear for hackers
-# Copyright (C) Tyler Goodlet (in stewardship for pikers)
-
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU Affero General Public License for more details.
-
-# You should have received a copy of the GNU Affero General Public License
-# along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-'''
-Mega-simple symbology cache via TOML files.
-
-Allow backend data providers and/or brokers to stash their
-symbology sets (aka the meta data we normalize into our
-`.accounting.MktPair` type) to the filesystem for faster lookup and
-offline usage.
-
-'''
-from __future__ import annotations
-from contextlib import (
-    asynccontextmanager as acm,
-)
-from pathlib import Path
-from pprint import pformat
-from typing import (
-    Any,
-    Callable,
-    Sequence,
-    Hashable,
-    TYPE_CHECKING,
-)
-from types import ModuleType
-
-from rapidfuzz import process as fuzzy
-import tomli_w  # for fast symbol cache writing
-import tractor
-import trio
-try:
-    import tomllib
-except ModuleNotFoundError:
-    import tomli as tomllib
-from msgspec import field
-
-from piker.log import get_logger
-from piker import config
-from piker.types import Struct
-from piker.brokers import (
-    open_cached_client,
-    get_brokermod,
-)
-
-if TYPE_CHECKING:
-    from piker.accounting import (
-        Asset,
-        MktPair,
-    )
-
-log = get_logger('data.cache')
-
-
-class SymbologyCache(Struct):
-    '''
-    Asset meta-data cache which holds lookup tables for 3 sets of
-    market-symbology related struct-types required by the
-    `.accounting` and `.data` subsystems.
-
-    '''
-    mod: ModuleType
-    fp: Path
-
-    # all asset-money-systems descriptions as minimally defined by
-    # in `.accounting.Asset`
-    assets: dict[str, Asset] = field(default_factory=dict)
-
-    # backend-system pairs loaded in provider (schema) specific
-    # structs.
-    pairs: dict[str, Struct] = field(default_factory=dict)
-    # serialized namespace path to the backend's pair-info-`Struct`
-    # defn B)
-    pair_ns_path: tractor.msg.NamespacePath | None = None
-
-    # TODO: piker-normalized `.accounting.MktPair` table?
-    # loaded from the `.pairs` and a normalizer
-    # provided by the backend pkg.
-    mktmaps: dict[str, MktPair] = field(default_factory=dict)
-
-    def pformat(self) -> str:
-        return (
-            f'<{type(self).__name__}(\n'
-            f' .mod: {self.mod!r}\n'
-            f' .assets: {len(self.assets)!r}\n'
-            f' .pairs: {len(self.pairs)!r}\n'
-            f' .mktmaps: {len(self.mktmaps)!r}\n'
-            f')>'
-        )
-
-    __repr__ = pformat
-
-    def write_config(self) -> None:
-
-        # put the backend's pair-struct type ref at the top
-        # of file if possible.
-        cachedict: dict[str, Any] = {
-            'pair_ns_path': str(self.pair_ns_path) or '',
-        }
-
-        # serialize all tables as dicts for TOML.
-        for key, table in {
-            'assets': self.assets,
-            'pairs': self.pairs,
-            'mktmaps': self.mktmaps,
-        }.items():
-            if not table:
-                log.warning(
-                    f'Asset cache table for `{key}` is empty?'
-                )
-                continue
-
-            dct = cachedict[key] = {}
-            for key, struct in table.items():
-                dct[key] = struct.to_dict(include_non_members=False)
-
-        try:
-            with self.fp.open(mode='wb') as fp:
-                tomli_w.dump(cachedict, fp)
-        except TypeError:
-            self.fp.unlink()
-            raise
-
-    async def load(self) -> None:
-        '''
-        Explicitly load the "symbology set" for this provider by using
-        2 required `Client` methods:
-
-        - `.get_assets()`: returning a table of `Asset`s
-        - `.get_mkt_pairs()`: returning a table of pair-`Struct`
-          types, custom defined by the particular backend.
-
-        AND, the required `.get_mkt_info()` module-level endpoint
-        which maps `fqme: str` -> `MktPair`s.
-
-        These tables are then used to fill out the `.assets`, `.pairs` and
-        `.mktmaps` tables on this cache instance, respectively.
-
-        '''
-        async with open_cached_client(self.mod.name) as client:
-
-            if get_assets := getattr(client, 'get_assets', None):
-                assets: dict[str, Asset] = await get_assets()
-                for bs_mktid, asset in assets.items():
-                    self.assets[bs_mktid] = asset
-            else:
-                log.warning(
-                    'No symbology cache `Asset` support for `{provider}`..\n'
-                    'Implement `Client.get_assets()`!'
-                )
-
-            get_mkt_pairs: Callable|None = getattr(
-                client,
-                'get_mkt_pairs',
-                None,
-            )
-            if not get_mkt_pairs:
-                log.warning(
-                    'No symbology cache `Pair` support for `{provider}`..\n'
-                    'Implement `Client.get_mkt_pairs()`!'
-                )
-                return self
-
-            pairs: dict[str, Struct] = await get_mkt_pairs()
-            if not pairs:
-                log.warning(
-                    'No pairs from intial {provider!r} sym-cache request?\n\n'
-                    '`Client.get_mkt_pairs()` -> {pairs!r} ?'
-                )
-                return self
-
-            for bs_fqme, pair in pairs.items():
-                if not getattr(pair, 'ns_path', None):
-                    # XXX: every backend defined pair must declare
-                    # a `.ns_path: tractor.NamespacePath` to enable
-                    # roundtrip serialization lookup from a local
-                    # cache file.
-                    raise TypeError(
-                        f'Pair-struct for {self.mod.name} MUST define a '
-                        '`.ns_path: str`!\n\n'
-                        f'{pair!r}'
-                    )
-
-                entry = await self.mod.get_mkt_info(pair.bs_fqme)
-                if not entry:
-                    continue
-
-                mkt: MktPair
-                pair: Struct
-                mkt, _pair = entry
-                assert _pair is pair, (
-                    f'`{self.mod.name}` backend probably has a '
-                    'keying-symmetry problem between the pair-`Struct` '
-                    'returned from `Client.get_mkt_pairs()`and the '
-                    'module level endpoint: `.get_mkt_info()`\n\n'
-                    "Here's the struct diff:\n"
-                    f'{_pair - pair}'
-                )
-                # NOTE XXX: this means backends MUST implement
-                # a `Struct.bs_mktid: str` field to provide
-                # a native-keyed map to their own symbol
-                # set(s).
-                self.pairs[pair.bs_mktid] = pair
-
-                # NOTE: `MktPair`s are keyed here using piker's
-                # internal FQME schema so that search,
-                # accounting and feed init can be accomplished
-                # a sane, uniform, normalized basis.
-                self.mktmaps[mkt.fqme] = mkt
-
-            self.pair_ns_path: str = tractor.msg.NamespacePath.from_ref(
-                pair,
-            )
-
-        return self
-
-    @classmethod
-    def from_dict(
-        cls: type,
-        data: dict,
-        **kwargs,
-    ) -> SymbologyCache:
-
-        # normal init inputs
-        cache = cls(**kwargs)
-
-        # XXX WARNING: this may break if backend namespacing
-        # changes (eg. `Pair` class def is moved to another
-        # module) in which case you can manually update the
-        # `pair_ns_path` in the symcache file and try again.
-        # TODO: probably a verbose error about this?
-        Pair: type = tractor.msg.NamespacePath(
-            str(data['pair_ns_path'])
-        ).load_ref()
-
-        pairtable = data.pop('pairs')
-        for key, pairtable in pairtable.items():
-
-            # allow each serialized pair-dict-table to declare its
-            # specific struct type's path in cases where a backend
-            # supports multiples (normally with different
-            # schemas..) and we are storing them in a flat `.pairs`
-            # table.
-            ThisPair = Pair
-            if this_pair_type := pairtable.get('ns_path'):
-                ThisPair: type = tractor.msg.NamespacePath(
-                    str(this_pair_type)
-                ).load_ref()
-
-            pair: Struct = ThisPair(**pairtable)
-            cache.pairs[key] = pair
-
-        from ..accounting import (
-            Asset,
-            MktPair,
-        )
-
-        # load `dict` -> `Asset`
-        assettable = data.pop('assets')
-        for name, asdict in assettable.items():
-            cache.assets[name] = Asset.from_msg(asdict)
-
-        # load `dict` -> `MktPair`
-        dne: list[str] = []
-        mkttable = data.pop('mktmaps')
-        for fqme, mktdict in mkttable.items():
-
-            mkt = MktPair.from_msg(mktdict)
-            assert mkt.fqme == fqme
-
-            # sanity check asset refs from those (presumably)
-            # loaded asset set above.
-            src: Asset = cache.assets[mkt.src.name]
-            assert src == mkt.src
-            dst: Asset
-            if not (dst := cache.assets.get(mkt.dst.name)):
-                dne.append(mkt.dst.name)
-                continue
-            else:
-                assert dst.name == mkt.dst.name
-
-            cache.mktmaps[fqme] = mkt
-
-        log.warning(
-            f'These `MktPair.dst: Asset`s DNE says `{cache.mod.name}`?\n'
-            f'{pformat(dne)}'
-        )
-        return cache
-
-    @staticmethod
-    async def from_scratch(
-        mod: ModuleType,
-        fp: Path,
-        **kwargs,
-
-    ) -> SymbologyCache:
-        '''
-        Generate (a) new symcache (contents) entirely from scratch
-        including all (TOML) serialized data and file.
-
-        '''
-        log.info(f'GENERATING symbology cache for `{mod.name}`')
-        cache = SymbologyCache(
-            mod=mod,
-            fp=fp,
-            **kwargs,
-        )
-        await cache.load()
-        cache.write_config()
-        return cache
-
-    def search(
-        self,
-        pattern: str,
-        table: str = 'mktmaps'
-
-    ) -> dict[str, Struct]:
-        '''
-        (Fuzzy) search this cache's `.mktmaps` table, which is
-        keyed by FQMEs, for `pattern: str` and return the best
-        matches in a `dict` including the `MktPair` values.
-
-        '''
-        matches = fuzzy.extract(
-            pattern,
-            getattr(self, table),
-            score_cutoff=50,
-        )
-
-        # repack in dict[fqme, MktPair] form
-        return {
-            item[0].fqme: item[0]
-            for item in matches
-        }
-
-
-# actor-process-local in-mem-cache of symcaches (by backend).
-_caches: dict[str, SymbologyCache] = {}
-
-
-def mk_cachefile(
-    provider: str,
-) -> Path:
-    cachedir: Path = config.get_conf_dir() / '_cache'
-    if not cachedir.is_dir():
-        log.info(f'Creating `nativedb` director: {cachedir}')
-        cachedir.mkdir()
-
-    cachefile: Path = cachedir / f'{str(provider)}.symcache.toml'
-    cachefile.touch()
-    return cachefile
-
-
-@acm
-async def open_symcache(
-    mod_or_name: ModuleType | str,
-
-    reload: bool = False,
-    only_from_memcache: bool = False,  # no API req
-    _no_symcache: bool = False,  # no backend support
-
-) -> SymbologyCache:
-
-    if isinstance(mod_or_name, str):
-        mod = get_brokermod(mod_or_name)
-    else:
-        mod: ModuleType = mod_or_name
-
-    provider: str = mod.name
-    cachefile: Path = mk_cachefile(provider)
-
-    # NOTE: certain backends might not support a symbology cache
-    # (easily) and thus we allow for an empty instance to be loaded
-    # and manually filled in at the whim of the caller presuming
-    # the backend pkg-module is annotated appropriately.
-    if (
-        getattr(mod, '_no_symcache', False)
-        or _no_symcache
-    ):
-        yield SymbologyCache(
-            mod=mod,
-            fp=cachefile,
-        )
-        # don't do nuttin
-        return
-
-    # actor-level cache-cache XD
-    global _caches
-    if not reload:
-        try:
-            yield _caches[provider]
-        except KeyError:
-            msg: str = (
-                f'No asset info cache exists yet for `{provider}`'
-            )
-            if only_from_memcache:
-                raise RuntimeError(msg)
-            else:
-                log.warning(msg)
-
-    # if no cache exists or an explicit reload is requested, load
-    # the provider API and call appropriate endpoints to populate
-    # the mkt and asset tables.
-    if (
-        reload
-        or not cachefile.is_file()
-    ):
-        cache = await SymbologyCache.from_scratch(
-            mod=mod,
-            fp=cachefile,
-        )
-
-    else:
-        log.info(
-            f'Loading EXISTING `{mod.name}` symbology cache:\n'
-            f'> {cachefile}'
-        )
-        import time
-        now = time.time()
-        with cachefile.open('rb') as existing_fp:
-            data: dict[str, dict] = tomllib.load(existing_fp)
-            log.runtime(f'SYMCACHE TOML LOAD TIME: {time.time() - now}')
-
-            # if there's an empty file for some reason we need
-            # to do a full reload as well!
-            if not data:
-                cache = await SymbologyCache.from_scratch(
-                    mod=mod,
-                    fp=cachefile,
-                )
-            else:
-                cache = SymbologyCache.from_dict(
-                    data,
-                    mod=mod,
-                    fp=cachefile,
-                )
-
-        # TODO: use a real profiling sys..
-        # https://github.com/pikers/piker/issues/337
-        log.info(f'SYMCACHE LOAD TIME: {time.time() - now}')
-
-    yield cache
-
-    # TODO: write only when changes detected? but that should
-    # never happen right except on reload?
-    # cache.write_config()
-
-
-def get_symcache(
-    provider: str,
-    force_reload: bool = False,
-
-) -> SymbologyCache:
-    '''
-    Get any available symbology/assets cache from sync code by
-    (maybe) manually running `trio` to do the work.

-    '''
-    # spawn tractor runtime and generate cache
-    # if not existing.
-    async def sched_gen_symcache():
-        async with (
-            # only for runtime's debug mode
-            tractor.open_nursery(debug_mode=True),
-
-            open_symcache(
-                get_brokermod(provider),
-                reload=force_reload,
-            ) as symcache,
-        ):
-            return symcache
-
-    try:
-        symcache: SymbologyCache = trio.run(sched_gen_symcache)
-        assert symcache
-    except BaseException:
-        import pdbp
-        pdbp.xpm()
-
-    return symcache
-
-
-def match_from_pairs(
-    pairs: dict[str, Struct],
-    query: str,
-    score_cutoff: int = 50,
-    **extract_kwargs,
-
-) -> dict[str, Struct]:
-    '''
-    Fuzzy search over a "pairs table" maintained by most backends
-    as part of their symbology-info caching internals.
-
-    Scan the native symbol key set and return best ranked
-    matches back in a new `dict`.
-
-    '''
-    # TODO: somehow cache this list (per call) like we were in
-    # `open_symbol_search()`?
-    keys: list[str] = list(pairs)
-    matches: list[tuple[
-        Sequence[Hashable],  # matching input key
-        Any,  # scores
-        Any,
-    ]] = fuzzy.extract(
-        # NOTE: most backends provide keys uppercased
-        query=query,
-        choices=keys,
-        score_cutoff=score_cutoff,
-        **extract_kwargs,
-    )
-
-    # pop and repack pairs in output dict
-    matched_pairs: dict[str, Struct] = {}
-    for item in matches:
-        pair_key: str = item[0]
-        matched_pairs[pair_key] = pairs[pair_key]
-
-    return matched_pairs

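For reference, a hedged usage sketch of the (removed) symcache API per
its own docstrings; the module path and the ``'binance'`` provider
name here are assumptions, not taken from this diff::

    # module path assumed; adjust to wherever the symcache lives
    from piker.data._symcache import get_symcache

    # sync-code entry point: (maybe) runs `trio` under the hood to
    # build or load the TOML-backed cache for the given provider.
    symcache = get_symcache('binance')

    # fuzzy-search the FQME-keyed `.mktmaps` table for best matches
    mkts = symcache.search('btcusdt')
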
@@ -0,0 +1,326 @@
+# piker: trading gear for hackers
+# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
+
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+'''
+Financial time series processing utilities usually
+pertaining to OHLCV style sampled data.
+
+Routines are generally implemented in either ``numpy`` or ``polars`` B)
+
+'''
+from __future__ import annotations
+from typing import Literal
+from math import (
+    ceil,
+    floor,
+)
+
+import numpy as np
+import polars as pl
+
+from ._sharedmem import ShmArray
+from .._profile import (
+    Profiler,
+    pg_profile_enabled,
+    ms_slower_then,
+)
+
+
+def slice_from_time(
+    arr: np.ndarray,
+    start_t: float,
+    stop_t: float,
+    step: float,  # sampler period step-diff
+
+) -> slice:
+    '''
+    Calculate array indices mapped from a time range and return them in
+    a slice.
+
+    Given an input array with an epoch `'time'` series entry, calculate
+    the indices which span the time range and return in a slice. Presume
+    each `'time'` step increment is uniform and when the time stamp
+    series contains gaps (the uniform presumption is untrue) use
+    ``np.searchsorted()`` binary search to look up the appropriate
+    index.
+
+    '''
+    profiler = Profiler(
+        msg='slice_from_time()',
+        disabled=not pg_profile_enabled(),
+        ms_threshold=ms_slower_then,
+    )
+
+    times = arr['time']
+    t_first = floor(times[0])
+    t_last = ceil(times[-1])
+
+    # the greatest index we can return which slices to the
+    # end of the input array.
+    read_i_max = arr.shape[0]
+
+    # compute (presumed) uniform-time-step index offsets
+    i_start_t = floor(start_t)
+    read_i_start = floor(((i_start_t - t_first) // step)) - 1
+
+    i_stop_t = ceil(stop_t)
+
+    # XXX: edge case -> always set stop index to last in array whenever
+    # the input stop time is detected to be greater then the equiv time
+    # stamp at that last entry.
+    if i_stop_t >= t_last:
+        read_i_stop = read_i_max
+    else:
+        read_i_stop = ceil((i_stop_t - t_first) // step) + 1
+
+    # always clip outputs to array support
+    # for read start:
+    # - never allow a start < the 0 index
+    # - never allow an end index > the read array len
+    read_i_start = min(
+        max(0, read_i_start),
+        read_i_max - 1,
+    )
+    read_i_stop = max(
+        0,
+        min(read_i_stop, read_i_max),
+    )
+
+    # check for larger-then-latest calculated index for given start
+    # time, in which case we do a binary search for the correct index.
+    # NOTE: this is usually the result of a time series with time gaps
+    # where it is expected that each index step maps to a uniform step
+    # in the time stamp series.
+    t_iv_start = times[read_i_start]
+    if (
+        t_iv_start > i_start_t
+    ):
+        # do a binary search for the best index mapping to ``start_t``
+        # given we measured an overshoot using the uniform-time-step
+        # calculation from above.
+
+        # TODO: once we start caching these per source-array,
+        # we can just overwrite ``read_i_start`` directly.
+        new_read_i_start = np.searchsorted(
+            times,
+            i_start_t,
+            side='left',
+        )
+
+        # TODO: minimize binary search work as much as possible:
+        # - cache these remap values which compensate for gaps in the
+        #   uniform time step basis where we calc a later start
+        #   index for the given input ``start_t``.
+        # - can we shorten the input search sequence by heuristic?
+        #   up_to_arith_start = index[:read_i_start]
+
+        if (
+            new_read_i_start <= read_i_start
+        ):
+            # t_diff = t_iv_start - start_t
+            # print(
+            #     f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
+            #     f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
+            #     f'diff: {t_diff}\n'
+            #     f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
+            # )
+            read_i_start = new_read_i_start
+
+    t_iv_stop = times[read_i_stop - 1]
+    if (
+        t_iv_stop > i_stop_t
+    ):
+        # t_diff = stop_t - t_iv_stop
+        # print(
+        #     f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
+        #     f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
+        #     f'diff: {t_diff}\n'
+        #     # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
+        # )
+        new_read_i_stop = np.searchsorted(
+            times[read_i_start:],
+            # times,
+            i_stop_t,
+            side='right',
+        )
+
+        if (
+            new_read_i_stop <= read_i_stop
+        ):
+            read_i_stop = read_i_start + new_read_i_stop + 1
+
+    # sanity checks for range size
+    # samples = (i_stop_t - i_start_t) // step
+    # index_diff = read_i_stop - read_i_start + 1
+    # if index_diff > (samples + 3):
+    #     breakpoint()
+
+    # read-relative indexes: gives a slice where `shm.array[read_slc]`
+    # will be the data spanning the input time range `start_t` ->
+    # `stop_t`
+    read_slc = slice(
+        int(read_i_start),
+        int(read_i_stop),
+    )
+
+    profiler(
+        'slicing complete'
+        # f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
+        # f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
+    )
+
+    # NOTE: if caller needs absolute buffer indices they can
+    # slice the buffer abs index like so:
+    # index = arr['index']
+    # abs_indx = index[read_slc]
+    # abs_slc = slice(
+    #     int(abs_indx[0]),
+    #     int(abs_indx[-1]),
+    # )
+
+    return read_slc
+
+
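For reference, a small usage sketch of ``slice_from_time()`` on a
uniformly stamped array (the import path for this new module isn't
shown in the diff, so it's elided; the data is made up)::

    import numpy as np

    # a 1s-stamped structured array with a 'time' field
    arr = np.zeros(
        100,
        dtype=[('time', 'f8'), ('close', 'f8')],
    )
    arr['time'] = 1_000_000 + np.arange(100)

    # indices spanning the 60s window [1_000_010, 1_000_070]
    read_slc: slice = slice_from_time(
        arr,
        start_t=1_000_010,
        stop_t=1_000_070,
        step=1,
    )
    window = arr[read_slc]  # rows covering the input time range
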
+def detect_null_time_gap(
+    shm: ShmArray,
+    imargin: int = 1,
+
+) -> tuple[float, float] | None:
+    '''
+    Detect if there are any zero-epoch stamped rows in
+    the presumed 'time' field-column.
+
+    Filter to the gap and return a surrounding index range.
+
+    NOTE: for now presumes only ONE gap XD
+
+    '''
+    zero_pred: np.ndarray = shm.array['time'] == 0
+    zero_t: np.ndarray = shm.array[zero_pred]
+    if zero_t.size:
+        istart, iend = zero_t['index'][[0, -1]]
+        start, end = shm._array['time'][
+            [istart - imargin, iend + imargin]
+        ]
+        return (
+            istart - imargin,
+            start,
+            end,
+            iend + imargin,
+        )
+
+    return None
+
+
+t_unit: Literal[
+    'days',
+    'hours',
+    'minutes',
+    'seconds',
+    'miliseconds',
+    'microseconds',
+    'nanoseconds',
+]
+
+
+def with_dts(
+    df: pl.DataFrame,
+    time_col: str = 'time',
+) -> pl.DataFrame:
+    '''
+    Insert datetime (casted) columns to a (presumably) OHLC sampled
+    time series with an epoch-time column keyed by ``time_col``.
+
+    '''
+    return df.with_columns([
+        pl.col(time_col).shift(1).suffix('_prev'),
+        pl.col(time_col).diff().alias('s_diff'),
+        pl.from_epoch(pl.col(time_col)).alias('dt'),
+    ]).with_columns([
+        pl.from_epoch(pl.col(f'{time_col}_prev')).alias('dt_prev'),
+        pl.col('dt').diff().alias('dt_diff'),
+    ])  # .with_columns(
+    #     pl.col('dt').diff().dt.days().alias('days_dt_diff'),
+    # )
+
+
+def detect_time_gaps(
+    df: pl.DataFrame,
+
+    time_col: str = 'time',
+    # epoch sampling step diff
+    expect_period: float = 60,
+
+    # datetime diff unit and gap value
+    # crypto mkts
+    # gap_dt_unit: t_unit = 'minutes',
+    # gap_thresh: int = 1,
+
+    # legacy stock mkts
+    gap_dt_unit: t_unit = 'days',
+    gap_thresh: int = 2,
+
+) -> pl.DataFrame:
+    '''
+    Filter to OHLC datums which contain sample step gaps.
+
+    For eg. legacy markets which have venue close gaps and/or
+    actual missing data segments.
+
+    '''
+    dt_gap_col: str = f'{gap_dt_unit}_diff'
+    return with_dts(
+        df
+    ).filter(
+        pl.col('s_diff').abs() > expect_period
+    ).with_columns(
+        getattr(
+            pl.col('dt_diff').dt,
+            gap_dt_unit,  # NOTE: must be valid ``Expr.dt.<name>``
+        )().alias(dt_gap_col)
+    ).filter(
+        pl.col(dt_gap_col).abs() > gap_thresh
+    )
+
+
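For reference, a hypothetical call of ``detect_time_gaps()`` on a tiny
made-up frame (assuming a ``polars`` version matching the expressions
above)::

    import polars as pl

    df = pl.DataFrame({
        # 60s sampled epoch stamps with a ~3 day hole at the end
        'time': [0, 60, 120, 120 + 3 * 24 * 3600],
        'close': [1.0, 1.1, 1.2, 1.3],
    })

    # rows whose step-diff exceeds the 60s period AND whose datetime
    # diff exceeds the 2 day threshold (ie. only the last row here)
    gaps: pl.DataFrame = detect_time_gaps(
        df,
        expect_period=60,
        gap_dt_unit='days',
        gap_thresh=2,
    )
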
+def detect_price_gaps(
+    df: pl.DataFrame,
+    gt_multiplier: float = 2.,
+    price_fields: list[str] = ['high', 'low'],
+
+) -> pl.DataFrame:
+    '''
+    Detect gaps in clearing price over an OHLC series.
+
+    2 types of gaps generally exist; up gaps and down gaps:
+
+    - UP gap: when any next sample's lo price is strictly greater
+      then the current sample's hi price.
+
+    - DOWN gap: when any next sample's hi price is strictly
+      less then the current samples lo price.
+
+    '''
+    # return df.filter(
+    #     pl.col('high') - ) > expect_period,
+    # ).select([
+    #     pl.dt.datetime(pl.col(time_col).shift(1)).suffix('_previous'),
+    #     pl.all(),
+    # ]).select([
+    #     pl.all(),
+    #     (pl.col(time_col) - pl.col(f'{time_col}_previous')).alias('diff'),
+    # ])
+    ...

@@ -27,6 +27,7 @@ from functools import partial
 from types import ModuleType
 from typing import (
     Any,
+    Optional,
     Callable,
     AsyncContextManager,
     AsyncGenerator,
@@ -34,7 +35,6 @@ from typing import (
 )
 import json

-import tractor
 import trio
 from trio_typing import TaskStatus
 from trio_websocket import (
@@ -50,8 +50,8 @@ from trio_websocket._impl import (
     ConnectionTimeout,
 )

-from piker.types import Struct
 from ._util import log
+from .types import Struct


 class NoBsWs:

@@ -167,7 +167,7 @@ async def _reconnect_forever(

     async def proxy_msgs(
         ws: WebSocketConnection,
-        rent_cs: trio.CancelScope,  # parent cancel scope
+        pcs: trio.CancelScope,  # parent cancel scope
     ):
         '''
         Receive (under `timeout` deadline) all msgs from from underlying
@@ -192,7 +192,7 @@ async def _reconnect_forever(
                     f'{url} connection bail with:'
                 )
                 await trio.sleep(0.5)
-                rent_cs.cancel()
+                pcs.cancel()

                 # go back to reonnect loop in parent task
                 return
@@ -204,7 +204,7 @@ async def _reconnect_forever(
                     f'{src_mod}\n'
                     'WS feed seems down and slow af.. reconnecting\n'
                 )
-                rent_cs.cancel()
+                pcs.cancel()

                 # go back to reonnect loop in parent task
                 return

@@ -228,25 +228,16 @@ async def _reconnect_forever(
     nobsws._connected = trio.Event()
     task_status.started()

-    mc_state: trio._channel.MemoryChannelState = snd._state
-    while (
-        mc_state.open_receive_channels > 0
-        and
-        mc_state.open_send_channels > 0
-    ):
+    while not snd._closed:
         log.info(
             f'{src_mod}\n'
             f'{url} trying (RE)CONNECT'
         )

-        ws: WebSocketConnection
-        try:
-            async with (
-                open_websocket_url(url) as ws,
-                tractor.trionics.collapse_eg(),
-                trio.open_nursery() as tn,
-            ):
-                cs = nobsws._cs = tn.cancel_scope
+        async with trio.open_nursery() as n:
+            cs = nobsws._cs = n.cancel_scope
+            ws: WebSocketConnection
+            async with open_websocket_url(url) as ws:
                 nobsws._ws = ws
                 log.info(
                     f'{src_mod}\n'
@@ -254,7 +245,7 @@ async def _reconnect_forever(
                 )

                 # begin relay loop to forward msgs
-                tn.start_soon(
+                n.start_soon(
                     proxy_msgs,
                     ws,
                     cs,
@@ -268,7 +259,7 @@ async def _reconnect_forever(

                     # TODO: should we return an explicit sub-cs
                     # from this fixture task?
-                    await tn.start(
+                    await n.start(
                         open_fixture,
                         fixture,
                         nobsws,
@@ -279,22 +270,8 @@ async def _reconnect_forever(
                     nobsws._connected.set()
                     await trio.sleep_forever()
-
-        except (
-            HandshakeError,
-            ConnectionRejected,
-        ):
-            log.exception('Retrying connection')
-            await trio.sleep(0.5)  # throttle
-
-        except BaseException as _berr:
-            berr = _berr
-            log.exception(
-                'Reconnect-attempt failed ??\n'
-            )
-            await trio.sleep(0.2)  # throttle
-            raise berr
-
-        #|_ws & nursery block ends
+            # ws open block end
+        # nursery block end
+
         nobsws._connected = trio.Event()
         if cs.cancelled_caught:
             log.cancel(
@@ -307,8 +284,7 @@ async def _reconnect_forever(
             and not nobsws._connected.is_set()
         )

-        # -> from here, move to next reconnect attempt iteration
-        # in the while loop above Bp
+        # -> from here, move to next reconnect attempt

     else:
         log.exception(

@@ -342,25 +318,21 @@ async def open_autorecon_ws(
     connetivity errors, or some user defined recv timeout.

     You can provide a ``fixture`` async-context-manager which will be
-    entered/exitted around each connection reset; eg. for
-    (re)requesting subscriptions without requiring streaming setup
-    code to rerun.
+    entered/exitted around each connection reset; eg. for (re)requesting
+    subscriptions without requiring streaming setup code to rerun.

     '''
     snd: trio.MemorySendChannel
     rcv: trio.MemoryReceiveChannel
     snd, rcv = trio.open_memory_channel(616)

-    async with (
-        tractor.trionics.collapse_eg(),
-        trio.open_nursery() as tn
-    ):
+    async with trio.open_nursery() as n:
         nobsws = NoBsWs(
             url,
             rcv,
             msg_recv_timeout=msg_recv_timeout,
         )
-        await tn.start(
+        await n.start(
             partial(
                 _reconnect_forever,
                 url,
@@ -373,15 +345,16 @@ async def open_autorecon_ws(
         await nobsws._connected.wait()
         assert nobsws._cs
         assert nobsws.connected()

         try:
             yield nobsws
         finally:
-            tn.cancel_scope.cancel()
+            n.cancel_scope.cancel()

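For reference, a sketch of the fixture-based (re)subscription pattern
the docstring above describes; the url, the msg schema and the
assumption that ``fixture`` is passed as a kwarg are all invented for
illustration::

    from contextlib import asynccontextmanager as acm

    @acm
    async def resub_fixture(ws):
        # (re)request subs around every (re)connect
        await ws.send_msg({'method': 'subscribe', 'params': ['ticker']})
        yield ws

    async def consume(url: str) -> None:
        async with open_autorecon_ws(
            url,
            fixture=resub_fixture,
        ) as ws:
            async for msg in ws:
                print(msg)
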
 '''
-JSONRPC response-request style machinery for transparent multiplexing
-of msgs over a `NoBsWs`.
+JSONRPC response-request style machinery for transparent multiplexing of msgs
+over a NoBsWs.

 '''

@@ -389,8 +362,8 @@ of msgs over a `NoBsWs`.
 class JSONRPCResult(Struct):
     id: int
     jsonrpc: str = '2.0'
-    result: dict|None = None
-    error: dict|None = None
+    result: Optional[dict] = None
+    error: Optional[dict] = None


 @acm

@@ -398,82 +371,43 @@ async def open_jsonrpc_session(
     url: str,
     start_id: int = 0,
     response_type: type = JSONRPCResult,
-    msg_recv_timeout: float = float('inf'),
-    # ^NOTE, since only `deribit` is using this jsonrpc stuff atm
-    # and options mkts are generally "slow moving"..
-    #
-    # FURTHER if we break the underlying ws connection then since we
-    # don't pass a `fixture` to the task that manages `NoBsWs`, i.e.
-    # `_reconnect_forever()`, the jsonrpc "transport pipe" get's
-    # broken and never restored with wtv init sequence is required to
-    # re-establish a working req-resp session.
+    request_type: Optional[type] = None,
+    request_hook: Optional[Callable] = None,
+    error_hook: Optional[Callable] = None,

 ) -> Callable[[str, dict], dict]:
-    '''
-    Init a json-RPC-over-websocket connection to the provided `url`.
-
-    A `json_rpc: Callable[[str, dict], dict` is delivered to the
-    caller for sending requests and a bg-`trio.Task` handles
-    processing of response msgs including error reporting/raising in
-    the parent/caller task.
-
-    '''
-    # NOTE, store all request msgs so we can raise errors on the
-    # caller side!
-    req_msgs: dict[int, dict] = {}

     async with (
-        trio.open_nursery() as tn,
-        open_autorecon_ws(
-            url=url,
-            msg_recv_timeout=msg_recv_timeout,
-        ) as ws
+        trio.open_nursery() as n,
+        open_autorecon_ws(url) as ws
     ):
-        rpc_id: Iterable[int] = count(start_id)
+        rpc_id: Iterable = count(start_id)
         rpc_results: dict[int, dict] = {}

-        async def json_rpc(
-            method: str,
-            params: dict,
-        ) -> dict:
+        async def json_rpc(method: str, params: dict) -> dict:
             '''
             perform a json rpc call and wait for the result, raise exception in
             case of error field present on response
             '''
-            nonlocal req_msgs
-
-            req_id: int = next(rpc_id)
             msg = {
                 'jsonrpc': '2.0',
-                'id': req_id,
+                'id': next(rpc_id),
                 'method': method,
                 'params': params
             }
             _id = msg['id']

-            result = rpc_results[_id] = {
+            rpc_results[_id] = {
                 'result': None,
-                'error': None,
-                'event': trio.Event(),  # signal caller resp arrived
+                'event': trio.Event()
             }
-            req_msgs[_id] = msg

             await ws.send_msg(msg)

-            # wait for reponse before unblocking requester code
             await rpc_results[_id]['event'].wait()

-            if (maybe_result := result['result']):
-                ret = maybe_result
-                del rpc_results[_id]
-
-            else:
-                err = result['error']
-                raise Exception(
-                    f'JSONRPC request failed\n'
-                    f'req: {msg}\n'
-                    f'resp: {err}\n'
-                )
+            ret = rpc_results[_id]['result']
+
+            del rpc_results[_id]

             if ret.error is not None:
                 raise Exception(json.dumps(ret.error, indent=4))

@@ -488,7 +422,6 @@ async def open_jsonrpc_session(
         the server side.

         '''
-        nonlocal req_msgs
         async for msg in ws:
             match msg:
                 case {
@@ -512,28 +445,19 @@ async def open_jsonrpc_session(
                     'params': _,
                 }:
                     log.debug(f'Recieved\n{msg}')
+                    if request_hook:
+                        await request_hook(request_type(**msg))

                 case {
                     'error': error
                 }:
-                    # retreive orig request msg, set error
-                    # response in original "result" msg,
-                    # THEN FINALLY set the event to signal caller
-                    # to raise the error in the parent task.
-                    req_id: int = error['id']
-                    req_msg: dict = req_msgs[req_id]
-                    result: dict = rpc_results[req_id]
-                    result['error'] = error
-                    result['event'].set()
-                    log.error(
-                        f'JSONRPC request failed\n'
-                        f'req: {req_msg}\n'
-                        f'resp: {error}\n'
-                    )
+                    log.warning(f'Recieved\n{error}')
+                    if error_hook:
+                        await error_hook(response_type(**msg))

                 case _:
                     log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')

-        tn.start_soon(recv_task)
+        n.start_soon(recv_task)
         yield json_rpc
-        tn.cancel_scope.cancel()
+        n.cancel_scope.cancel()

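For reference, a minimal caller-side sketch of the req-resp API above
(the endpoint url and method name are invented)::

    import trio

    async def query() -> None:
        async with open_jsonrpc_session(
            'wss://example.com/api/v2',
        ) as json_rpc:
            resp = await json_rpc(
                'public/get_time',
                params={},
            )
            # `resp` is a `JSONRPCResult`-style struct
            print(resp.result)

    trio.run(query)
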
@@ -28,7 +28,6 @@ module.
 from __future__ import annotations
 from collections import (
     defaultdict,
-    abc,
 )
 from contextlib import asynccontextmanager as acm
 from functools import partial
@@ -37,74 +36,49 @@ from types import ModuleType
 from typing import (
     Any,
     AsyncContextManager,
+    Optional,
     Awaitable,
     Sequence,
-    TYPE_CHECKING,
 )

 import trio
 from trio.abc import ReceiveChannel
 from trio_typing import TaskStatus
 import tractor
-from tractor import trionics
+from tractor.trionics import (
+    maybe_open_context,
+    gather_contexts,
+)

-from piker.accounting import (
-    MktPair,
-    unpack_fqme,
-)
-from piker.types import Struct
-from piker.brokers import get_brokermod
-from piker.service import (
-    maybe_spawn_brokerd,
-)
-from piker.calc import humanize
+from ..brokers import get_brokermod
+from ..calc import humanize
 from ._util import (
     log,
     get_console_log,
 )
+from ..service import (
+    maybe_spawn_brokerd,
+)
 from .flows import Flume
 from .validate import (
     FeedInit,
     validate_backend,
 )
-from ..tsp import (
+from .history import (
     manage_history,
 )
 from .ingest import get_ingestormod
+from .types import Struct
+from ..accounting import (
+    MktPair,
+    unpack_fqme,
+)
+from ..ui import _search
 from ._sampling import (
     sample_and_broadcast,
     uniform_rate_send,
 )

-if TYPE_CHECKING:
-    from tractor._addr import Address
-    from tractor.msg.types import Aid
-
-
-class Sub(Struct, frozen=True):
-    '''
-    A live feed subscription entry.
-
-    Contains meta-data on the remote-actor type (in functionality
-    terms) as well as refs to IPC streams and sampler runtime
-    params.
-
-    '''
-    ipc: tractor.MsgStream
-    send_chan: trio.abc.SendChannel | None = None
-
-    # tick throttle rate in Hz; determines how live
-    # quotes/ticks should be downsampled before relay
-    # to the receiving remote consumer (process).
-    throttle_rate: float | None = None
-    _throttle_cs: trio.CancelScope | None = None
-
-    # TODO: actually stash comms info for the far end to allow
-    # `.tsp`, `.fsp` and `.data._sampling` sub-systems to re-render
-    # the data view as needed via msging with the `._remote_ctl`
-    # ipc ctx.
-    rc_ui: bool = False
-
-
 class _FeedsBus(Struct):
     '''

@@ -130,7 +104,13 @@ class _FeedsBus(Struct):

     _subscribers: defaultdict[
         str,
-        set[Sub]
+        set[
+            tuple[
+                tractor.MsgStream | trio.MemorySendChannel,
+                # tractor.Context,
+                float | None,  # tick throttle in Hz
+            ]
+        ]
     ] = defaultdict(set)

     async def start_task(
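For reference, a minimal standalone sketch (assumed stand-in types, not piker code) of the tuple-entry registry shape the bot branch reverts to; on ``main`` the same slots hold frozen (hence hashable) ``Sub`` structs instead::

    from collections import defaultdict

    class FakeStream:
        # stand-in for a `tractor.MsgStream` handle
        ...

    # fqme-keyed sets of (stream, throttle-rate-in-Hz-or-None) entries
    subs: defaultdict[str, set[tuple]] = defaultdict(set)

    stream = FakeStream()
    subs['btcusdt.binance'].add((stream, 4.0))  # throttle @ 4 Hz
    assert (stream, 4.0) in subs['btcusdt.binance']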
@@ -145,8 +125,6 @@ class _FeedsBus(Struct):
         trio.CancelScope] = trio.TASK_STATUS_IGNORED,
     ) -> None:
         with trio.CancelScope() as cs:
-            # TODO: shouldn't this be a direct await to avoid
-            # cancellation contagion to the bus nursery!?!?!
             await self.nursery.start(
                 target,
                 *args,
@@ -164,28 +142,31 @@ class _FeedsBus(Struct):
     def get_subs(
         self,
         key: str,
-    ) -> set[Sub]:
+    ) -> set[
+        tuple[
+            tractor.MsgStream | trio.MemorySendChannel,
+            float | None,  # tick throttle in Hz
+        ]
+    ]:
         '''
         Get the ``set`` of consumer subscription entries for the given key.

         '''
         return self._subscribers[key]

-    def subs_items(self) -> abc.ItemsView[str, set[Sub]]:
-        return self._subscribers.items()
-
     def add_subs(
         self,
         key: str,
-        subs: set[Sub],
-    ) -> set[Sub]:
+        subs: set[tuple[
+            tractor.MsgStream | trio.MemorySendChannel,
+            float | None,  # tick throttle in Hz
+        ]],
+    ) -> set[tuple]:
         '''
         Add a ``set`` of consumer subscription entries for the given key.

         '''
-        _subs: set[Sub] = self._subscribers.setdefault(key, set())
+        _subs: set[tuple] = self._subscribers[key]
         _subs.update(subs)
         return _subs

@@ -350,6 +331,7 @@ async def allocate_persistent_feed(
     ) = await bus.nursery.start(
         manage_history,
         mod,
+        bus,
         mkt,
         some_data_ready,
         feed_is_live,
@@ -357,9 +339,7 @@ async def allocate_persistent_feed(

     # yield back control to starting nursery once we receive either
     # some history or a real-time quote.
-    log.info(
-        f'loading OHLCV history: {fqme!r}\n'
-    )
+    log.info(f'loading OHLCV history: {fqme}')
     await some_data_ready.wait()

     flume = Flume(
@@ -428,13 +408,7 @@ async def allocate_persistent_feed(
         rt_shm.array['time'][1] = ts + 1

     elif hist_shm.array.size == 0:
-        for i in range(100):
-            await trio.sleep(0.1)
-            if hist_shm.array.size > 0:
-                break
-        else:
-            await tractor.pause()
-            raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')
+        raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')

     # wait for the spawning parent task to register its subscriber
     # send-stream entry before we start the sample loop.
@@ -464,9 +438,8 @@ async def open_feed_bus(
     symbols: list[str],  # normally expected to be the broker-specific fqme

     loglevel: str = 'error',
-    tick_throttle: float | None = None,
+    tick_throttle: Optional[float] = None,
     start_stream: bool = True,
-    allow_remote_ctl_ui: bool = False,

 ) -> dict[
     str,  # fqme
@@ -481,12 +454,8 @@ async def open_feed_bus(
     if loglevel is None:
         loglevel = tractor.current_actor().loglevel

-    # XXX: required to propagate ``tractor`` loglevel to piker
-    # logging
-    get_console_log(
-        loglevel
-        or tractor.current_actor().loglevel
-    )
+    # XXX: required to propagate ``tractor`` loglevel to piker logging
+    get_console_log(loglevel or tractor.current_actor().loglevel)

     # local state sanity checks
     # TODO: check for any stale shm entries for this symbol
@@ -496,7 +465,7 @@ async def open_feed_bus(
     assert 'brokerd' in servicename
     assert brokername in servicename

-    bus: _FeedsBus = get_feed_bus(brokername)
+    bus = get_feed_bus(brokername)
     sub_registered = trio.Event()

     flumes: dict[str, Flume] = {}
@@ -543,10 +512,10 @@ async def open_feed_bus(
         # pack for ``.started()`` sync msg
         flumes[fqme] = flume

-        # we use the broker-specific fqme (bs_fqme) for the sampler
-        # subscription since the backend isn't (yet) expected to
-        # append its own name to the fqme, so we filter on keys
-        # which *do not* include that name (e.g. `.ib`).
+        # we use the broker-specific fqme (bs_fqme) for the
+        # sampler subscription since the backend isn't (yet) expected to
+        # append its own name to the fqme, so we filter on keys which
+        # *do not* include that name (e.g. `.ib`).
         bus._subscribers.setdefault(bs_fqme, set())

         # sync feed subscribers with flume handles
@@ -585,60 +554,49 @@ async def open_feed_bus(
         # that the ``sample_and_broadcast()`` task (spawned inside
         # ``allocate_persistent_feed()``) will push real-time quote
         # (ticks) to this new consumer.
-        cs: trio.CancelScope | None = None
-        send: trio.MemorySendChannel | None = None
         if tick_throttle:
             flume.throttle_rate = tick_throttle

-            # open a bg task which receives quotes over a mem
-            # chan and only pushes them to the target
-            # actor-consumer at a max ``tick_throttle``
-            # (instantaneous) rate.
+            # open a bg task which receives quotes over a mem chan
+            # and only pushes them to the target actor-consumer at
+            # a max ``tick_throttle`` instantaneous rate.
             send, recv = trio.open_memory_channel(2**10)

-            # NOTE: the ``.send`` channel here is a swapped-in
-            # trio mem chan which gets `.send()`-ed by the normal
-            # sampler task but instead of being sent directly
-            # over the IPC msg stream it's the throttle task that
-            # does the work of incrementally forwarding to the
-            # IPC stream at the throttle rate.
-            cs: trio.CancelScope = await bus.start_task(
+            cs = await bus.start_task(
                 uniform_rate_send,
                 tick_throttle,
                 recv,
                 stream,
             )
+            # NOTE: so the ``send`` channel here is actually a swapped
+            # in trio mem chan which gets pushed by the normal sampler
+            # task but instead of being sent directly over the IPC msg
+            # stream it's the throttle task that does the work of
+            # incrementally forwarding to the IPC stream at the throttle
+            # rate.
+            send._ctx = ctx  # mock internal ``tractor.MsgStream`` ref
+            sub = (send, tick_throttle)

-        sub = Sub(
-            ipc=stream,
-            send_chan=send,
-            throttle_rate=tick_throttle,
-            _throttle_cs=cs,
-            rc_ui=allow_remote_ctl_ui,
-        )
+        else:
+            sub = (stream, tick_throttle)

         # TODO: add an api for this on the bus?
         # maybe use the current task-id to key the sub list that's
         # added / removed? Or maybe we can add a general
         # pause-resume by sub-key api?
         bs_fqme = fqme.removesuffix(f'.{brokername}')
-        local_subs.setdefault(
-            bs_fqme,
-            set()
-        ).add(sub)
-        bus.add_subs(
-            bs_fqme,
-            {sub}
-        )
+        local_subs.setdefault(bs_fqme, set()).add(sub)
+        bus.add_subs(bs_fqme, {sub})

         # sync caller with all subs registered state
         sub_registered.set()

-        uid: tuple[str, str] = ctx.chan.uid
+        uid = ctx.chan.uid
         try:
-            # ctrl protocol for start/stop of live quote streams
-            # based on UI state (eg. don't need a stream when
-            # a symbol isn't being displayed).
+            # ctrl protocol for start/stop of quote streams based on UI
+            # state (eg. don't need a stream when a symbol isn't being
+            # displayed).
             async for msg in stream:

                 if msg == 'pause':
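The throttle swap-in both sides share can be sketched standalone (assumed names, heavily simplified vs the real ``uniform_rate_send()`` which also coalesces ticks): the sampler ``.send()``s into a trio memory channel and a bg task relays to the IPC stream at no more than ``rate`` Hz::

    import trio

    async def uniform_rate_forward(
        rate: float,
        recv: trio.MemoryReceiveChannel,
        ipc_send,  # async callable standing in for `stream.send()`
    ) -> None:
        # forward each quote then sleep so the instantaneous send
        # rate never exceeds `rate` Hz
        async for quote in recv:
            await ipc_send(quote)
            await trio.sleep(1 / rate)

    async def demo() -> None:
        out: list = []

        async def fake_ipc_send(msg) -> None:
            out.append(msg)

        send, recv = trio.open_memory_channel(2**10)
        async with trio.open_nursery() as n:
            n.start_soon(uniform_rate_forward, 100.0, recv, fake_ipc_send)
            for i in range(5):
                await send.send({'tick': i})
            await trio.sleep(0.1)
            n.cancel_scope.cancel()

    trio.run(demo)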
@@ -730,10 +688,7 @@ class Feed(Struct):
             async for msg in stream:
                 await tx.send(msg)

-        async with (
-            tractor.trionics.collapse_eg(),
-            trio.open_nursery() as nurse
-        ):
+        async with trio.open_nursery() as nurse:
             # spawn a relay task for each stream so that they all
             # multiplex to a common channel.
             for brokername in mods:
@@ -779,7 +734,6 @@ async def install_brokerd_search(
     except trio.EndOfChannel:
         return {}

-    from piker.ui import _search
     async with _search.register_symbol_search(

         provider_name=brokermod.name,
@@ -796,8 +750,9 @@ async def install_brokerd_search(

 @acm
 async def maybe_open_feed(

     fqmes: list[str],
-    loglevel: str | None = None,
+    loglevel: Optional[str] = None,

     **kwargs,

@@ -813,7 +768,7 @@ async def maybe_open_feed(
     '''
     fqme = fqmes[0]

-    async with trionics.maybe_open_context(
+    async with maybe_open_context(
         acm_func=open_feed,
         kwargs={
             'fqmes': fqmes,
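Conceptually ``maybe_open_context()`` provides allocate-once-and-share semantics across tasks: the first caller actually enters the wrapped acm, later callers with the same key get handed the still-live value plus a cache-hit flag. A hedged, non-tractor sketch of just that idea (the real implementation also serializes concurrent first-entrants)::

    from contextlib import asynccontextmanager

    _cache: dict = {}

    @asynccontextmanager
    async def maybe_open(key, acm_factory):
        # later callers w/ the same key reuse the live value
        if key in _cache:
            yield True, _cache[key]
            return
        async with acm_factory() as value:
            _cache[key] = value
            try:
                yield False, value  # first caller allocated it
            finally:
                _cache.pop(key, None)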
@@ -833,7 +788,7 @@ async def maybe_open_feed(
         # add a new broadcast subscription for the quote stream
         # if this feed is likely already in use

-        async with trionics.gather_contexts(
+        async with gather_contexts(
             mngrs=[stream.subscribe() for stream in feed.streams.values()]
         ) as bstreams:
             for bstream, flume in zip(bstreams, feed.flumes.values()):
@@ -849,14 +804,13 @@ async def maybe_open_feed(

 @acm
 async def open_feed(

     fqmes: list[str],

-    loglevel: str|None = None,
+    loglevel: str | None = None,
     allow_overruns: bool = True,
     start_stream: bool = True,
-    tick_throttle: float|None = None,  # Hz
+    tick_throttle: float | None = None,  # Hz

-    allow_remote_ctl_ui: bool = False,

 ) -> Feed:
     '''
@@ -894,7 +848,7 @@ async def open_feed(
     )

     portals: tuple[tractor.Portal]
-    async with trionics.gather_contexts(
+    async with gather_contexts(
         brokerd_ctxs,
     ) as portals:

@@ -907,19 +861,19 @@ async def open_feed(
             feed.portals[brokermod] = portal

             # fill out "status info" that the UI can show
-            chan: tractor.Channel = portal.chan
-            raddr: Address = chan.raddr
-            aid: Aid = chan.aid
-            # TAG_feed_status_update
+            host, port = portal.channel.raddr
+            if host == '127.0.0.1':
+                host = 'localhost'
+
             feed.status.update({
-                'actor_id': aid,
-                'actor_short_id': f'{aid.name}@{aid.pid}',
-                'ipc': chan.raddr.proto_key,
-                'ipc_addr': raddr,
+                'actor_name': portal.channel.uid[0],
+                'host': host,
+                'port': port,
                 'hist_shm': 'NA',
                 'rt_shm': 'NA',
-                'throttle_hz': tick_throttle,
+                'throttle_rate': tick_throttle,
             })
+            # feed.status.update(init_msg.pop('status', {}))

             # (allocate and) connect to any feed bus for this broker
             bus_ctxs.append(
@@ -940,19 +894,13 @@ async def open_feed(
                     # of these stream open sequences sequentially per
                     # backend? .. need some thought!
                     allow_overruns=True,

-                    # NOTE: UI actors (like charts) can allow
-                    # remote control of certain graphics rendering
-                    # capabilities via the
-                    # `.ui._remote_ctl.remote_annotate()` msg loop.
-                    allow_remote_ctl_ui=allow_remote_ctl_ui,
                 )
             )

         assert len(feed.mods) == len(feed.portals)

         async with (
-            trionics.gather_contexts(bus_ctxs) as ctxs,
+            gather_contexts(bus_ctxs) as ctxs,
         ):
             stream_ctxs: list[tractor.MsgStream] = []
             for (

@@ -994,7 +942,7 @@ async def open_feed(
             brokermod: ModuleType
             fqmes: list[str]
             async with (
-                trionics.gather_contexts(stream_ctxs) as streams,
+                gather_contexts(stream_ctxs) as streams,
             ):
                 for (
                     stream,

@@ -1010,12 +958,6 @@ async def open_feed(
                     if brokermod.name == flume.mkt.broker:
                         flume.stream = stream

-            assert (
-                len(feed.mods)
-                ==
-                len(feed.portals)
-                ==
-                len(feed.streams)
-            )
+            assert len(feed.mods) == len(feed.portals) == len(feed.streams)

             yield feed


piker/data/flows.py
@@ -30,27 +30,53 @@ import tractor
 import pendulum
 import numpy as np

-from piker.types import Struct
+from ..accounting import MktPair
+from ._util import log
+from .types import Struct
 from ._sharedmem import (
     attach_shm_array,
     ShmArray,
     _Token,
 )
-from piker.accounting import MktPair
+# from .._profile import (
+#     Profiler,
+#     pg_profile_enabled,
+# )

 if TYPE_CHECKING:
-    from piker.data.feed import Feed
+    # from pyqtgraph import PlotItem
+    from .feed import Feed


+# TODO: ideas for further abstractions as per
+# https://github.com/pikers/piker/issues/216 and
+# https://github.com/pikers/piker/issues/270:
+# - a ``Cascade`` would be the minimal "connection" of 2 ``Flumes``
+#   as per circuit parlance:
+#   https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
+#   - could cover the combination of our `FspAdmin` and the
+#     backend `.fsp._engine` related machinery to "connect" one flume
+#     to another?
+# - a (financial signal) ``Flow`` would be a "collection" of such
+#   minimal cascades. Some engineering based jargon concepts:
+#   - https://en.wikipedia.org/wiki/Signal_chain
+#   - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
+#   - https://en.wikipedia.org/wiki/Audio_signal_flow
+#   - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
+#   - https://en.wikipedia.org/wiki/Dataflow_programming
+#   - https://en.wikipedia.org/wiki/Signal_programming
+#   - https://en.wikipedia.org/wiki/Incremental_computing


 class Flume(Struct):
     '''
-    Composite reference type which points to all the addressing
-    handles and other meta-data necessary for the read, measure and
-    management of a set of real-time updated data flows.
+    Composite reference type which points to all the addressing handles
+    and other meta-data necessary for the read, measure and management
+    of a set of real-time updated data flows.

     Can be thought of as a "flow descriptor" or "flow frame" which
-    describes the high level properties of a set of data flows that
-    can be used seamlessly across process-memory boundaries.
+    describes the high level properties of a set of data flows that can
+    be used seamlessly across process-memory boundaries.

     Each instance's sub-components normally include:
     - a msg oriented quote stream provided via an IPC transport
@@ -73,7 +99,6 @@ class Flume(Struct):
     # private shm refs loaded dynamically from tokens
     _hist_shm: ShmArray | None = None
     _rt_shm: ShmArray | None = None
-    _readonly: bool = True

     stream: tractor.MsgStream | None = None
     izero_hist: int = 0
@@ -82,7 +107,7 @@ class Flume(Struct):

     # TODO: do we need this really if we can pull the `Portal` from
     # ``tractor``'s internals?
-    feed: Feed|None = None
+    feed: Feed | None = None

     @property
     def rt_shm(self) -> ShmArray:
@@ -90,7 +115,7 @@ class Flume(Struct):
         if self._rt_shm is None:
             self._rt_shm = attach_shm_array(
                 token=self._rt_shm_token,
-                readonly=self._readonly,
+                readonly=True,
             )

         return self._rt_shm
@@ -103,10 +128,12 @@ class Flume(Struct):
                 'No shm token has been set for the history buffer?'
             )

-        if self._hist_shm is None:
+        if (
+            self._hist_shm is None
+        ):
             self._hist_shm = attach_shm_array(
                 token=self._hist_shm_token,
-                readonly=self._readonly,
+                readonly=True,
             )

         return self._hist_shm
@@ -125,10 +152,10 @@ class Flume(Struct):
         period and ratio between them.

         '''
-        times: np.ndarray = self.hist_shm.array['time']
-        end: float | int = pendulum.from_timestamp(times[-1])
-        start: float | int = pendulum.from_timestamp(times[times != times[-1]][-1])
-        hist_step_size_s: float = (end - start).seconds
+        times = self.hist_shm.array['time']
+        end = pendulum.from_timestamp(times[-1])
+        start = pendulum.from_timestamp(times[times != times[-1]][-1])
+        hist_step_size_s = (end - start).seconds

         times = self.rt_shm.array['time']
         end = pendulum.from_timestamp(times[-1])

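A worked example of the step-size arithmetic above (assumed 60s history bars; the ``times[times != times[-1]][-1]`` trick grabs the last *distinct* timestamp)::

    import numpy as np
    from pendulum import from_timestamp

    times = np.array([1_700_000_000., 1_700_000_060., 1_700_000_120.])
    end = from_timestamp(times[-1])
    start = from_timestamp(times[times != times[-1]][-1])
    hist_step_size_s = (end - start).seconds
    assert hist_step_size_s == 60   # 1m OHLC history bars

    ratio = hist_step_size_s / 1.0  # vs a 1s rt buffer -> 60x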
@@ -148,25 +175,17 @@ class Flume(Struct):
         msg = self.to_dict()
         msg['mkt'] = self.mkt.to_dict()

-        # NOTE: pop all un-msg-serializable fields:
-        # - `tractor.MsgStream`
-        # - `Feed`
-        # - `Shmarray`
-        # it's expected the `.from_msg()` on the other side
-        # will get instead some kind of msg-compat version
-        # that it can load.
+        # can't serialize the stream or feed objects, it's expected
+        # you'll have a ref to it since this msg should be rxed on
+        # a stream on whatever far end IPC..
         msg.pop('stream')
         msg.pop('feed')
-        msg.pop('_rt_shm')
-        msg.pop('_hist_shm')

         return msg

     @classmethod
     def from_msg(
         cls,
         msg: dict,
-        readonly: bool = True,

     ) -> dict:
         '''

@@ -175,13 +194,8 @@ class Flume(Struct):

         '''
         mkt_msg = msg.pop('mkt')
-        from ..accounting import MktPair  # cycle otherwise..
         mkt = MktPair.from_msg(mkt_msg)
-        msg |= {'_readonly': readonly}
-        return cls(
-            mkt=mkt,
-            **msg,
-        )
+        return cls(mkt=mkt, **msg)

     def get_index(
         self,

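A hedged sketch of the wire round-trip these two methods enable: strip the unserializable refs on ``.to_msg()``, rebuild with ``Flume.from_msg()`` on the far side (field names below are assumed from this diff only, not a full ``Flume``)::

    from msgspec import msgpack

    flume_msg = {
        'mkt': {'fqme': 'btcusdt.spot.binance'},
        '_rt_shm_token': {'shm_name': 'btcusdt.rt'},
        'izero_hist': 0,
    }
    wire: bytes = msgpack.encode(flume_msg)
    assert msgpack.decode(wire) == flume_msg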
@@ -219,3 +233,5 @@ class Flume(Struct):
             np.all(np.isin(vlm, -1))
             or np.all(np.isnan(vlm))
         )
+
+
piker/data/history.py

@@ -0,0 +1,967 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

'''
Historical data business logic for load, backfill and tsdb storage.

'''
from __future__ import annotations
# from collections import (
#     Counter,
# )
from datetime import datetime
from functools import partial
# import time
from types import ModuleType
from typing import (
    Callable,
    TYPE_CHECKING,
)

import trio
from trio_typing import TaskStatus
import tractor
from pendulum import (
    Duration,
    from_timestamp,
)
import numpy as np

from ..accounting import (
    MktPair,
)
from ._util import (
    log,
)
from ._sharedmem import (
    maybe_open_shm_array,
    ShmArray,
)
from ._source import def_iohlcv_fields
from ._sampling import (
    open_sample_stream,
)
from ..brokers._util import (
    DataUnavailable,
)
if TYPE_CHECKING:
    from bidict import bidict
    from ..service.marketstore import StorageClient
    from .feed import _FeedsBus


# `ShmArray` buffer sizing configuration:
_mins_in_day = int(60 * 24)
# how much is probably dependent on lifestyle
# but we reco a buncha times (but only on a
# run-every-other-day kinda week).
_secs_in_day = int(60 * _mins_in_day)
_days_in_week: int = 7

_days_worth: int = 3
_default_hist_size: int = 6 * 365 * _mins_in_day
_hist_buffer_start = int(
    _default_hist_size - round(7 * _mins_in_day)
)

_default_rt_size: int = _days_worth * _secs_in_day
# NOTE: start the append index in rt buffer such that 1 day's worth
# can be appended before overrun.
_rt_buffer_start = int((_days_worth - 1) * _secs_in_day)
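Spelled out, those sizing constants work out to (arithmetic only)::

    _mins_in_day = 60 * 24                       # 1440
    _secs_in_day = 60 * _mins_in_day             # 86400
    _default_hist_size = 6 * 365 * _mins_in_day  # 3_153_600 1m rows
    _default_rt_size = 3 * _secs_in_day          # 259_200 1s rows
    # append index starts 1 day from the end so a full day can be
    # appended before overrun:
    _rt_buffer_start = (3 - 1) * _secs_in_day    # 172_800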
def diff_history(
    array: np.ndarray,
    append_until_dt: datetime | None = None,
    prepend_until_dt: datetime | None = None,

) -> np.ndarray:

    # no diffing with tsdb dt index possible..
    if (
        prepend_until_dt is None
        and append_until_dt is None
    ):
        return array

    times = array['time']

    if append_until_dt:
        return array[times < append_until_dt.timestamp()]
    else:
        return array[times >= prepend_until_dt.timestamp()]
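A usage sketch reproducing ``diff_history()``'s prepend branch on toy data: only rows at or after the cutoff timestamp survive the clip::

    import numpy as np
    from datetime import datetime, timezone

    arr = np.array(
        [(1_700_000_000.,), (1_700_000_060.,), (1_700_000_120.,)],
        dtype=[('time', 'f8')],
    )
    cutoff = datetime.fromtimestamp(1_700_000_060, tz=timezone.utc)
    kept = arr[arr['time'] >= cutoff.timestamp()]
    assert len(kept) == 2  # rows at/after the tsdb's last datum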
async def shm_push_in_between(
    shm: ShmArray,
    to_push: np.ndarray,
    prepend_index: int,

    update_start_on_prepend: bool = False,

) -> int:
    shm.push(
        to_push,
        prepend=True,

        # XXX: only update the ._first index if no tsdb
        # segment was previously prepended by the
        # parent task.
        update_first=update_start_on_prepend,

        # XXX: only prepend from a manually calculated shm
        # index if there was already a tsdb history
        # segment prepended (since then the
        # ._first.value is going to be wayyy in the
        # past!)
        start=(
            prepend_index
            if not update_start_on_prepend
            else None
        ),
    )
    # XXX: extremely important, there can be no checkpoints
    # in the block above to avoid entering new ``frames``
    # values while we're pipelining the current ones to
    # memory...
    array = shm.array
    zeros = array[array['low'] == 0]
    if (
        0 < zeros.size < 1000
    ):
        tractor.breakpoint()
async def start_backfill(
    get_hist,
    mod: ModuleType,
    mkt: MktPair,
    shm: ShmArray,
    timeframe: float,

    backfill_from_shm_index: int,
    backfill_from_dt: datetime,

    sampler_stream: tractor.MsgStream,

    backfill_until_dt: datetime | None = None,
    storage: StorageClient | None = None,

    write_tsdb: bool = True,

    task_status: TaskStatus[tuple] = trio.TASK_STATUS_IGNORED,

) -> int:

    # let caller unblock and deliver latest history frame
    # and use to signal that backfilling the shm gap until
    # the tsdb end is complete!
    bf_done = trio.Event()
    task_status.started(bf_done)

    # based on the sample step size, maybe load a certain amount of history
    update_start_on_prepend: bool = False
    if backfill_until_dt is None:

        # TODO: drop this right and just expose the backfill
        # limits inside a [storage] section in conf.toml?
        # when no tsdb "last datum" is provided, we just load
        # some near-term history.
        # periods = {
        #     1: {'days': 1},
        #     60: {'days': 14},
        # }

        # do a decently sized backfill and load it into storage.
        periods = {
            1: {'days': 6},
            60: {'years': 6},
        }
        period_duration: int = periods[timeframe]

        update_start_on_prepend = True

        # NOTE: manually set the "latest" datetime which we intend to
        # backfill history "until" so as to adhere to the history
        # settings above when the tsdb is detected as being empty.
        backfill_until_dt = backfill_from_dt.subtract(**period_duration)

    # TODO: can we drop this? without conc i don't think this
    # is necessary any more?
    # configure async query throttling
    # rate = config.get('rate', 1)
    # XXX: legacy from ``trimeter`` code but unsupported now.
    # erlangs = config.get('erlangs', 1)
    # avoid duplicate history frames with a set of datetime frame
    # starts and associated counts of how many duplicates we see
    # per time stamp.
    # starts: Counter[datetime] = Counter()

    # conduct "backward history gap filling" where we push to
    # the shm buffer until we have history back until the
    # latest entry loaded from the tsdb's table B)
    last_start_dt: datetime = backfill_from_dt
    next_prepend_index: int = backfill_from_shm_index

    while last_start_dt > backfill_until_dt:

        log.debug(
            f'Requesting {timeframe}s frame ending in {last_start_dt}'
        )

        try:
            (
                array,
                next_start_dt,
                next_end_dt,
            ) = await get_hist(
                timeframe,
                end_dt=last_start_dt,
            )

        # broker says there never was or is no more history to pull
        except DataUnavailable:
            log.warning(
                f'NO-MORE-DATA: backend {mod.name} halted history!?'
            )

            # ugh, what's a better way?
            # TODO: fwiw, we probably want a way to signal a throttle
            # condition (eg. with ib) so that we can halt the
            # request loop until the condition is resolved?
            return

        # TODO: drop this? see todo above..
        # if (
        #     next_start_dt in starts
        #     and starts[next_start_dt] <= 6
        # ):
        #     start_dt = min(starts)
        #     log.warning(
        #         f"{mkt.fqme}: skipping duplicate frame @ {next_start_dt}"
        #     )
        #     starts[start_dt] += 1
        #     await tractor.breakpoint()
        #     continue

        # elif starts[next_start_dt] > 6:
        #     log.warning(
        #         f'NO-MORE-DATA: backend {mod.name} before {next_start_dt}?'
        #     )
        #     return

        # # only update new start point if not-yet-seen
        # starts[next_start_dt] += 1

        assert array['time'][0] == next_start_dt.timestamp()

        diff = last_start_dt - next_start_dt
        frame_time_diff_s = diff.seconds

        # frame's worth of sample-period-steps, in seconds
        frame_size_s = len(array) * timeframe
        expected_frame_size_s = frame_size_s + timeframe
        if frame_time_diff_s > expected_frame_size_s:

            # XXX: query result includes a start point prior to our
            # expected "frame size" and thus is likely some kind of
            # history gap (eg. market closed period, outage, etc.)
            # so just report it to console for now.
            log.warning(
                f'History frame ending @ {last_start_dt} appears to have a gap:\n'
                f'{diff} ~= {frame_time_diff_s} seconds'
            )

        to_push = diff_history(
            array,
            prepend_until_dt=backfill_until_dt,
        )
        ln = len(to_push)
        if ln:
            log.info(f'{ln} bars for {next_start_dt} -> {last_start_dt}')

        else:
            log.warning(
                '0 BARS TO PUSH after diff!?\n'
                f'{next_start_dt} -> {last_start_dt}'
            )

        # bail gracefully on shm allocation overrun/full
        # condition
        try:
            await shm_push_in_between(
                shm,
                to_push,
                prepend_index=next_prepend_index,
                update_start_on_prepend=update_start_on_prepend,
            )
            await sampler_stream.send({
                'broadcast_all': {
                    'backfilling': (mkt.fqme, timeframe),
                },
            })

            # decrement next prepend point
            next_prepend_index = next_prepend_index - ln
            last_start_dt = next_start_dt

        except ValueError as ve:
            _ve = ve
            log.error(
                f'Shm prepend OVERRUN on: {next_start_dt} -> {last_start_dt}?'
            )

            if next_prepend_index < ln:
                log.warning(
                    f'Shm buffer can only hold {next_prepend_index} more rows..\n'
                    f'Appending those from recent {ln}-sized frame, no more!'
                )

                to_push = to_push[-next_prepend_index + 1:]
                await shm_push_in_between(
                    shm,
                    to_push,
                    prepend_index=next_prepend_index,
                    update_start_on_prepend=update_start_on_prepend,
                )
                await sampler_stream.send({
                    'broadcast_all': {
                        'backfilling': (mkt.fqme, timeframe),
                    },
                })

            # can't push the entire frame? so
            # push only the amount that can fit..
            break

        log.info(
            f'Shm pushed {ln} frame:\n'
            f'{next_start_dt} -> {last_start_dt}'
        )

        # FINALLY, maybe write immediately to the tsdb backend for
        # long-term storage.
        if (
            storage is not None
            and write_tsdb
        ):
            log.info(
                f'Writing {ln} frame to storage:\n'
                f'{next_start_dt} -> {last_start_dt}'
            )

            if mkt.dst.atype not in {'crypto', 'crypto_currency'}:
                # for now, our table key schema is not including
                # the dst[/src] source asset token.
                col_sym_key: str = mkt.get_fqme(
                    delim_char='',
                    without_src=True,
                )
            else:
                col_sym_key: str = mkt.get_fqme(delim_char='')

            # TODO: implement parquet append!?
            await storage.write_ohlcv(
                col_sym_key,
                shm.array,
                timeframe,
            )
    else:
        # finally filled gap
        log.info(
            f'Finished filling gap to tsdb start @ {backfill_until_dt}!'
        )
        # conduct tsdb timestamp gap detection and backfill any
        # seemingly missing sequence segments..
        # TODO: ideally these never exist but somehow it seems
        # sometimes we're writing zero-ed segments on certain
        # (teardown) cases?
        from ._timeseries import detect_null_time_gap

        gap_indices: tuple | None = detect_null_time_gap(shm)
        while gap_indices:
            (
                istart,
                start,
                end,
                iend,
            ) = gap_indices

            start_dt = from_timestamp(start)
            end_dt = from_timestamp(end)
            (
                array,
                next_start_dt,
                next_end_dt,
            ) = await get_hist(
                timeframe,
                start_dt=start_dt,
                end_dt=end_dt,
            )

            # XXX TODO: pretty sure if i plot tsla, btcusdt.binance
            # and mnq.cme.ib this causes a Qt crash XXDDD

            # make sure we don't overrun the buffer start
            len_to_push: int = min(iend, array.size)
            to_push: np.ndarray = array[-len_to_push:]
            await shm_push_in_between(
                shm,
                to_push,
                prepend_index=iend,
                update_start_on_prepend=False,
            )

            # TODO: UI side needs IPC event to update..
            # - make sure the UI actually always handles
            #   this update!
            # - remember that in the display side, only refresh this
            #   if the respective history is actually "in view".
            # loop
            await sampler_stream.send({
                'broadcast_all': {
                    'backfilling': (mkt.fqme, timeframe),
                },
            })
            gap_indices: tuple | None = detect_null_time_gap(shm)

    # XXX: extremely important, there can be no checkpoints
    # in the block above to avoid entering new ``frames``
    # values while we're pipelining the current ones to
    # memory...
    # await sampler_stream.send('broadcast_all')

    # short-circuit (for now)
    bf_done.set()
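A worked example of the default backfill window above (pendulum arithmetic, assumed dates): with no tsdb "last datum", a 60s feed walks back 6 years from the first queried frame::

    from pendulum import datetime as pdt

    backfill_from_dt = pdt(2023, 6, 1)
    periods = {1: {'days': 6}, 60: {'years': 6}}
    backfill_until_dt = backfill_from_dt.subtract(**periods[60])
    assert backfill_until_dt == pdt(2017, 6, 1)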
async def back_load_from_tsdb(
    storemod: ModuleType,
    storage: StorageClient,

    fqme: str,

    tsdb_history: np.ndarray,

    last_tsdb_dt: datetime,
    latest_start_dt: datetime,
    latest_end_dt: datetime,

    bf_done: trio.Event,

    timeframe: int,
    shm: ShmArray,
):
    assert len(tsdb_history)

    # sync to backend history task's query/load completion
    # if bf_done:
    #     await bf_done.wait()

    # TODO: eventually it'd be nice to not require a shm array/buffer
    # to accomplish this.. maybe we can do some kind of tsdb direct to
    # graphics format eventually in a child-actor?
    if storemod.name == 'nativedb':
        return

    await tractor.breakpoint()
    assert shm._first.value == 0

    array = shm.array

    # if timeframe == 1:
    #     times = shm.array['time']
    #     assert (times[1] - times[0]) == 1

    if len(array):
        shm_last_dt = from_timestamp(
            shm.array[0]['time']
        )
    else:
        shm_last_dt = None

    if last_tsdb_dt:
        assert shm_last_dt >= last_tsdb_dt

    # do diff against start index of last frame of history and only
    # fill in an amount of datums from the tsdb that allows the most
    # recent to be loaded into mem *before* tsdb data.
    if (
        last_tsdb_dt
        and latest_start_dt
    ):
        backfilled_size_s = (
            latest_start_dt - last_tsdb_dt
        ).seconds
        # if the shm buffer len is not large enough to contain
        # all missing data between the most recent backend-queried frame
        # and the most recent dt-index in the db we warn that we only
        # want to load a portion of the next tsdb query to fill that
        # space.
        log.info(
            f'{backfilled_size_s} seconds worth of {timeframe}s loaded'
        )

    # Load TSDB history into shm buffer (for display) if there is
    # remaining buffer space.

    time_key: str = 'time'
    if getattr(storemod, 'ohlc_key_map', False):
        keymap: bidict = storemod.ohlc_key_map
        time_key: str = keymap.inverse['time']

    # if (
    #     not len(tsdb_history)
    # ):
    #     return

    tsdb_last_frame_start: datetime = last_tsdb_dt
    # load as much from storage into shm possible (depends on
    # user's shm size settings).
    while shm._first.value > 0:

        tsdb_history = await storage.read_ohlcv(
            fqme,
            timeframe=timeframe,
            end=tsdb_last_frame_start,
        )

        # # empty query
        # if not len(tsdb_history):
        #     break

        next_start = tsdb_history[time_key][0]
        if next_start >= tsdb_last_frame_start:
            # no earlier data detected
            break

        else:
            tsdb_last_frame_start = next_start

        # TODO: see if there's faster multi-field reads:
        # https://numpy.org/doc/stable/user/basics.rec.html#accessing-multiple-fields
        # re-index with a `time` and index field
        prepend_start = shm._first.value

        to_push = tsdb_history[-prepend_start:]
        shm.push(
            to_push,

            # insert the history pre a "days worth" of samples
            # to leave some real-time buffer space at the end.
            prepend=True,
            # update_first=False,
            # start=prepend_start,
            field_map=storemod.ohlc_key_map,
        )

        log.info(f'Loaded {to_push.shape} datums from storage')
        tsdb_last_frame_start = tsdb_history[time_key][0]

    # manually trigger step update to update charts/fsps
    # which need an incremental update.
    # NOTE: the way this works is super duper
    # un-intuitive right now:
    # - the broadcaster fires a msg to the fsp subsystem.
    # - fsp subsys then checks for a sample step diff and
    #   possibly recomputes prepended history.
    # - the fsp then sends back to the parent actor
    #   (usually a chart showing graphics for said fsp)
    #   which tells the chart to conduct a manual full
    #   graphics loop cycle.
    # await sampler_stream.send('broadcast_all')
async def tsdb_backfill(
    mod: ModuleType,
    storemod: ModuleType,
    tn: trio.Nursery,

    storage: StorageClient,
    mkt: MktPair,
    shm: ShmArray,
    timeframe: float,

    sampler_stream: tractor.MsgStream,

    task_status: TaskStatus[
        tuple[ShmArray, ShmArray]
    ] = trio.TASK_STATUS_IGNORED,

) -> None:

    get_hist: Callable[
        [int, datetime, datetime],
        tuple[np.ndarray, str]
    ]
    config: dict[str, int]
    async with mod.open_history_client(
        mkt,
    ) as (get_hist, config):
        log.info(f'{mod} history client returned backfill config: {config}')

        # get latest query's worth of history all the way
        # back to what is recorded in the tsdb
        try:
            array, mr_start_dt, mr_end_dt = await get_hist(
                timeframe,
                end_dt=None,
            )

        # XXX: timeframe not supported for backend (since
        # above exception type), terminate immediately since
        # there's no backfilling possible.
        except DataUnavailable:
            task_status.started()
            return

        times: np.ndarray = array['time']

        # sample period step size in seconds
        step_size_s = (
            from_timestamp(times[-1])
            - from_timestamp(times[-2])
        ).seconds

        if step_size_s not in (1, 60):
            log.error(f'Last 2 sample period is off!? -> {step_size_s}')
            step_size_s = (
                from_timestamp(times[-2])
                - from_timestamp(times[-3])
            ).seconds

        # NOTE: on the first history, most recent history
        # frame we PREPEND from the current shm ._last index
        # and thus a gap between the earliest datum loaded here
        # and the latest loaded from the tsdb may exist!
        log.info(f'Pushing {array.size} to shm!')
        shm.push(
            array,
            prepend=True,  # append on first frame
        )
        backfill_gap_from_shm_index: int = shm._first.value + 1

        # tell parent task to continue
        task_status.started()

        # loads a (large) frame of data from the tsdb depending
        # on the db's query size limit; our "nativedb" (using
        # parquet) generally can load the entire history into mem
        # but if not then below the remaining history can be lazy
        # loaded?
        fqme: str = mkt.fqme
        tsdb_entry: tuple | None = await storage.load(
            fqme,
            timeframe=timeframe,
        )

        last_tsdb_dt: datetime | None = None
        if tsdb_entry:
            (
                tsdb_history,
                first_tsdb_dt,
                last_tsdb_dt,
            ) = tsdb_entry

            # calc the index from which the tsdb data should be
            # prepended, presuming there is a gap between the
            # latest frame (loaded/read above) and the latest
            # sample loaded from the tsdb.
            backfill_diff: Duration = mr_start_dt - last_tsdb_dt
            offset_s: float = backfill_diff.in_seconds()
            offset_samples: int = round(offset_s / timeframe)

            # TODO: see if there's faster multi-field reads:
            # https://numpy.org/doc/stable/user/basics.rec.html#accessing-multiple-fields
            # re-index with a `time` and index field
            prepend_start = shm._first.value - offset_samples + 1

            # tsdb history is so far in the past we can't fit it in
            # shm buffer space so simply don't load it!
            if prepend_start > 0:
                to_push = tsdb_history[-prepend_start:]
                shm.push(
                    to_push,

                    # insert the history pre a "days worth" of samples
                    # to leave some real-time buffer space at the end.
                    prepend=True,
                    # update_first=False,
                    start=prepend_start,
                    field_map=storemod.ohlc_key_map,
                )

                log.info(f'Loaded {to_push.shape} datums from storage')

        # TODO: maybe start history anal and load missing "history
        # gaps" via backend..

        if timeframe not in (1, 60):
            raise ValueError(
                '`piker` only needs to support 1m and 1s sampling '
                'but ur api is trying to deliver a longer '
                f'timeframe of {timeframe} seconds..\n'
                'So yuh.. dun do dat brudder.'
            )

        # if there is a gap to backfill from the first
        # history frame until the last datum loaded from the tsdb
        # continue that now in the background
        bf_done = await tn.start(
            partial(
                start_backfill,
                get_hist,
                mod,
                mkt,
                shm,
                timeframe,

                backfill_from_shm_index=backfill_gap_from_shm_index,
                backfill_from_dt=mr_start_dt,

                sampler_stream=sampler_stream,

                backfill_until_dt=last_tsdb_dt,
                storage=storage,
            )
        )

        # if len(hist_shm.array) < 2:
        # TODO: there's an edge case here to solve where if the last
        # frame before market close (at least on ib) was pushed and
        # there was only "1 new" row pushed from the first backfill
        # query-iteration, then the sample step sizing calcs will
        # break upstream from here since you can't diff on at least
        # 2 steps... probably should also add logic to compute from
        # the tsdb series and stash that somewhere as meta data on
        # the shm buffer?.. not sure.

        # backload any further data from tsdb (concurrently per
        # timeframe) if not all data was able to be loaded (in memory)
        # from the ``StorageClient.load()`` call above.
        try:
            await trio.sleep_forever()
        finally:
            return

        # IF we need to continue backloading incrementally from the
        # tsdb client..
        tn.start_soon(
            back_load_from_tsdb,

            storemod,
            storage,
            fqme,

            tsdb_history,
            last_tsdb_dt,
            mr_start_dt,
            mr_end_dt,
            bf_done,

            timeframe,
            shm,
        )
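A worked example of the prepend-index calc above (assumed numbers): if the latest backend frame starts 2 hours after the last tsdb datum on a 60s series, the tsdb history must land 120 samples behind ``shm._first``::

    offset_s = 2 * 60 * 60    # (mr_start_dt - last_tsdb_dt).in_seconds()
    timeframe = 60
    offset_samples = round(offset_s / timeframe)  # 120
    first = 3_000_000         # hypothetical shm._first.value
    prepend_start = first - offset_samples + 1
    assert prepend_start == 2_999_881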
|
async def manage_history(
|
||||||
|
mod: ModuleType,
|
||||||
|
bus: _FeedsBus,
|
||||||
|
mkt: MktPair,
|
||||||
|
some_data_ready: trio.Event,
|
||||||
|
feed_is_live: trio.Event,
|
||||||
|
timeframe: float = 60, # in seconds
|
||||||
|
|
||||||
|
task_status: TaskStatus[
|
||||||
|
tuple[ShmArray, ShmArray]
|
||||||
|
] = trio.TASK_STATUS_IGNORED,
|
||||||
|
|
||||||
|
) -> None:
|
||||||
|
'''
|
||||||
|
Load and manage historical data including the loading of any
|
||||||
|
available series from any connected tsdb as well as conduct
|
||||||
|
real-time update of both that existing db and the allocated
|
||||||
|
shared memory buffer.
|
||||||
|
|
||||||
|
Init sequence:
|
||||||
|
- allocate shm (numpy array) buffers for 60s & 1s sample rates
|
||||||
|
- configure "zero index" for each buffer: the index where
|
||||||
|
history will prepended *to* and new live data will be
|
||||||
|
appened *from*.
|
||||||
|
- open a ``.storage.StorageClient`` and load any existing tsdb
|
||||||
|
history as well as (async) start a backfill task which loads
|
||||||
|
missing (newer) history from the data provider backend:
|
||||||
|
- tsdb history is loaded first and pushed to shm ASAP.
|
||||||
|
- the backfill task loads the most recent history before
|
||||||
|
unblocking its parent task, so that the `ShmArray._last` is
|
||||||
|
up to date to allow the OHLC sampler to begin writing new
|
||||||
|
samples as the correct buffer index once the provider feed
|
||||||
|
engages.
|
||||||
|
|
||||||
|
'''
|
||||||
|
+    # TODO: is there a way to make each shm file key
+    # actor-tree-discovery-addr unique so we avoid collisions
+    # when doing tests which also allocate shms for certain instruments
+    # that may be in use on the system by some other running daemons?
+    # from tractor._state import _runtime_vars
+    # port = _runtime_vars['_root_mailbox'][1]
+
+    uid = tractor.current_actor().uid
+    name, uuid = uid
+    service = name.rstrip(f'.{mod.name}')
+
+    fqme: str = mkt.get_fqme(delim_char='')
+
+    # (maybe) allocate shm array for this broker/symbol which will
+    # be used for fast near-term history capture and processing.
+    hist_shm, opened = maybe_open_shm_array(
+        size=_default_hist_size,
+        append_start_index=_hist_buffer_start,
+        key=f'piker.{service}[{uuid[:16]}].{fqme}.hist',
+
+        # use any broker defined ohlc dtype:
+        dtype=getattr(mod, '_ohlc_dtype', def_iohlcv_fields),
+
+        # we expect the sub-actor to write
+        readonly=False,
+    )
+    hist_zero_index = hist_shm.index - 1
+
+    # TODO: history validation
+    if not opened:
+        raise RuntimeError(
+            "Persistent shm for sym was already open?!"
+        )
+
+    rt_shm, opened = maybe_open_shm_array(
+        size=_default_rt_size,
+        append_start_index=_rt_buffer_start,
+        key=f'piker.{service}[{uuid[:16]}].{fqme}.rt',
+
+        # use any broker defined ohlc dtype:
+        dtype=getattr(mod, '_ohlc_dtype', def_iohlcv_fields),
+
+        # we expect the sub-actor to write
+        readonly=False,
+    )
+
+    # (for now) set the rt (hft) shm array with space to prepend
+    # only a few days worth of 1s history.
+    days = 2
+    start_index = days*_secs_in_day
+    rt_shm._first.value = start_index
+    rt_shm._last.value = start_index
+    rt_zero_index = rt_shm.index - 1
+
+    if not opened:
+        raise RuntimeError(
+            "Persistent shm for sym was already open?!"
+        )
+
+    open_history_client = getattr(
+        mod,
+        'open_history_client',
+        None,
+    )
+    assert open_history_client
+
+    # TODO: maybe it should be a subpkg of `.data`?
+    from piker import storage
+
+    async with (
+        storage.open_storage_client() as (storemod, client),
+        trio.open_nursery() as tn,
+    ):
+        log.info(
+            f'Connecting to storage backend `{storemod.name}`:\n'
+            f'location: {client.address}\n'
+            f'db cardinality: {client.cardinality}\n'
+            # TODO: show backend config, eg:
+            # - network settings
+            # - storage size with compression
+            # - number of loaded time series?
+        )
+
+        # NOTE: this call ONLY UNBLOCKS once the latest-most frame
+        # (i.e. history just before the live feed latest datum) of
+        # history has been loaded and written to the shm buffer:
+        # - the backfiller task can write in reverse chronological
+        #   to the shm and tsdb
+        # - the tsdb data can be loaded immediately and the
+        #   backfiller can do a single append from its end datum and
+        #   then prepends backward to that from the current time
+        #   step.
+        tf2mem: dict = {
+            1: rt_shm,
+            60: hist_shm,
+        }
+        async with open_sample_stream(
+            period_s=1.,
+            shms_by_period={
+                1.: rt_shm.token,
+                60.: hist_shm.token,
+            },
+
+            # NOTE: we want to only open a stream for doing
+            # broadcasts on backfill operations, not receive the
+            # sample index-stream (since there's no code in this
+            # data feed layer that needs to consume it).
+            open_index_stream=True,
+            sub_for_broadcasts=False,
+
+        ) as sample_stream:
+            # register 1s and 1m buffers with the global incrementer task
+            log.info(f'Connected to sampler stream: {sample_stream}')
+
+            for timeframe in [60, 1]:
+                await tn.start(
+                    tsdb_backfill,
+                    mod,
+                    storemod,
+                    tn,
+                    # bus,
+                    client,
+                    mkt,
+                    tf2mem[timeframe],
+                    timeframe,
+
+                    sample_stream,
+                )
+
+            # indicate to caller that feed can be delivered to
+            # remote requesting client since we've loaded history
+            # data that can be used.
+            some_data_ready.set()
+
+            # wait for a live feed before starting the sampler.
+            await feed_is_live.wait()
+
+            # yield back after client connect with filled shm
+            task_status.started((
+                hist_zero_index,
+                hist_shm,
+                rt_zero_index,
+                rt_shm,
+            ))
+
+            # history retrieval loop depending on user interaction
+            # and thus a small RPC-proto for remotely controlling
+            # what data is loaded for viewing.
+            await trio.sleep_forever()
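Aside: the two shm keys built above differ only in their `.hist`/`.rt` suffix; a minimal sketch of how they compose (the `service`, `uuid` and `fqme` literals below are invented for illustration):

service = 'pikerd'
uuid = 'a1b2c3d4e5f60718'   # hypothetical actor uuid prefix
fqme = 'btcusdt.binance'    # hypothetical fully-qualified mkt endpoint

hist_key = f'piker.{service}[{uuid[:16]}].{fqme}.hist'
rt_key = f'piker.{service}[{uuid[:16]}].{fqme}.rt'
assert hist_key == 'piker.pikerd[a1b2c3d4e5f60718].btcusdt.binance.hist'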
@@ -0,0 +1,104 @@
+# piker: trading gear for hackers
+# Copyright (C) (in stewardship for pikers)
+# - Tyler Goodlet
+# - Guillermo Rodriguez
+
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+'''
+Extensions to built-in or (heavily used but 3rd party) friend-lib
+types.
+
+'''
+from pprint import pformat
+
+from msgspec import (
+    msgpack,
+    Struct,
+    structs,
+)
+
+
+class Struct(
+    Struct,
+
+    # https://jcristharif.com/msgspec/structs.html#tagged-unions
+    # tag='pikerstruct',
+    # tag=True,
+):
+    '''
+    A "human friendlier" (aka repl buddy) struct subtype.
+
+    '''
+    def to_dict(self) -> dict:
+        '''
+        Like it sounds.. direct delegation to:
+        https://jcristharif.com/msgspec/api.html#msgspec.structs.asdict
+
+        TODO: probably just drop this method since it's now a built-in method?
+
+        '''
+        return structs.asdict(self)
+
+    def pformat(self) -> str:
+        return f'Struct({pformat(self.to_dict())})'
+
+    def copy(
+        self,
+        update: dict | None = None,
+
+    ) -> Struct:
+        '''
+        Validate-typecast all self defined fields, return a copy of
+        us with all such fields.
+
+        NOTE: This is kinda like the default behaviour in
+        `pydantic.BaseModel` except a copy of the object is
+        returned making it compat with `frozen=True`.
+
+        '''
+        if update:
+            for k, v in update.items():
+                setattr(self, k, v)
+
+        # NOTE: roundtrip serialize to validate
+        # - encode to msgpack binary format,
+        # - decode that back to a struct.
+        return msgpack.Decoder(type=type(self)).decode(
+            msgpack.Encoder().encode(self)
+        )
+
+    def typecast(
+        self,
+
+        # TODO: allow only casting a named subset?
+        # fields: set[str] | None = None,
+
+    ) -> None:
+        '''
+        Cast all fields using their declared type annotations
+        (kinda like what `pydantic` does by default).
+
+        NOTE: this of course won't work on frozen types, use
+        ``.copy()`` above in such cases.
+
+        '''
+        # https://jcristharif.com/msgspec/api.html#msgspec.structs.fields
+        fi: structs.FieldInfo
+        for fi in structs.fields(self):
+            setattr(
+                self,
+                fi.name,
+                fi.type(getattr(self, fi.name)),
+            )
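A quick usage sketch of the repl-buddy `Struct` above (the `Point` type and its fields are invented; assumes the subtype defined above is in scope):

class Point(Struct):
    x: int
    y: int

p = Point(x=1, y=2)
print(p.pformat())             # -> Struct({'x': 1, 'y': 2})

# .copy() roundtrips through msgpack so invalid field values raise
p2 = p.copy(update={'y': 3})
assert p2.y == 3 and p2 is not p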
@@ -18,7 +18,6 @@ Data feed synchronization protocols, init msgs, and general
 data-provider-backend-agnostic schema definitions.

 '''
-from __future__ import annotations
 from decimal import Decimal
 from pprint import pformat
 from types import ModuleType

@@ -29,8 +28,8 @@ from typing import (

 from msgspec import field

-from piker.types import Struct
-from piker.accounting import (
+from .types import Struct
+from ..accounting import (
     Asset,
     MktPair,
 )

@@ -82,8 +81,8 @@ _eps: dict[str, list[str]] = {
     # live order control and trading
     'brokerd': [
         'trades_dialogue',
-        'open_trade_dialog',  # live order ctl
-        'norm_trade',  # ledger normalizer for txns
+        # TODO: ledger normalizer helper?
+        # norm_trades(records: dict[str, Any]) -> TransactionLedger)
     ],
 }

@@ -113,9 +112,9 @@ def validate_backend(
         )
         if ep is None:
             log.warning(
-                f'Provider backend {mod.name!r} is missing '
-                f'{daemon_name!r} support?\n'
-                f'|_module endpoint-func missing: {name!r}\n'
+                f'Provider backend {mod.name} is missing '
+                f'{daemon_name} support :(\n'
+                f'The following endpoint is missing: {name}'
             )

     inits: list[
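The warning above comes from probing a backend module for named endpoint funcs; a minimal sketch of that pattern (the helper and module names are invented):

import types

def missing_eps(mod: types.ModuleType, names: list[str]) -> list[str]:
    # report which expected endpoint funcs a backend module lacks
    return [name for name in names if getattr(mod, name, None) is None]

fake = types.ModuleType('fakebroker')    # hypothetical backend w/o any eps
assert missing_eps(fake, ['open_history_client']) == ['open_history_client']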
@@ -22,40 +22,17 @@ from typing import AsyncIterator

 import numpy as np

-from ._api import (
-    maybe_mk_fsp_shm,
-    Fsp,
-)
-from ._engine import (
-    cascade,
-    Cascade,
-)
-from ._volume import (
-    dolla_vlm,
-    flow_rates,
-    tina_vwap,
-)
+from ._engine import cascade

-__all__: list[str] = [
-    'cascade',
-    'Cascade',
-    'maybe_mk_fsp_shm',
-    'Fsp',
-    'dolla_vlm',
-    'flow_rates',
-    'tina_vwap',
-]
+__all__ = ['cascade']


 async def latency(
     source: 'TickStream[Dict[str, float]]',  # noqa
     ohlcv: np.ndarray

 ) -> AsyncIterator[np.ndarray]:
-    '''
-    Latency measurements, broker to piker.
-
-    '''
+    """Latency measurements, broker to piker.
+    """
     # TODO: do we want to offer yielding this async
     # before the rt data connection comes up?
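For context, fsp routines like `latency` above are async generators which first yield a one-shot history output and then keyed per-sample updates; a bare shape sketch (names invented, not the real engine wiring):

import numpy as np
from typing import AsyncIterator

async def unit_fsp(
    source,              # async quote stream (stub)
    ohlcv: np.ndarray,   # history input buffer

) -> AsyncIterator:
    # one-shot history output first..
    yield np.zeros(len(ohlcv))

    # ..then (key, value) updates per incoming quote
    async for quote in source:
        yield 'unit', 1.0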
@@ -18,12 +18,13 @@
 core task logic for processing chains

 '''
-from __future__ import annotations
-from contextlib import asynccontextmanager as acm
+from dataclasses import dataclass
 from functools import partial
 from typing import (
     AsyncIterator,
     Callable,
+    Optional,
+    Union,
 )

 import numpy as np

@@ -32,9 +33,9 @@ from trio_typing import TaskStatus
 import tractor
 from tractor.msg import NamespacePath

-from piker.types import Struct
 from ..log import get_logger, get_console_log
 from .. import data
+from ..data import attach_shm_array
 from ..data.feed import (
     Flume,
     Feed,

@@ -50,11 +51,17 @@ from ._api import (
     _load_builtins,
     _Token,
 )
-from ..toolz import Profiler
+from .._profile import Profiler

 log = get_logger(__name__)


+@dataclass
+class TaskTracker:
+    complete: trio.Event
+    cs: trio.CancelScope
+
+
 async def filter_quotes_by_sym(

     sym: str,

@@ -75,168 +82,30 @@ async def filter_quotes_by_sym(
     if quote:
         yield quote

-# TODO: unifying the abstractions in this FSP subsys/layer:
-# -[ ] move the `.data.flows.Flume` type into this
-#   module/subsys/pkg?
-# -[ ] ideas for further abstractions as per
-#   - https://github.com/pikers/piker/issues/216,
-#   - https://github.com/pikers/piker/issues/270:
-#   - a (financial signal) ``Flow`` would be a "collection" of such
-#     minimal cascades. Some engineering based jargon concepts:
-#     - https://en.wikipedia.org/wiki/Signal_chain
-#     - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
-#     - https://en.wikipedia.org/wiki/Audio_signal_flow
-#     - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
-#     - https://en.wikipedia.org/wiki/Dataflow_programming
-#     - https://en.wikipedia.org/wiki/Signal_programming
-#     - https://en.wikipedia.org/wiki/Incremental_computing
-#     - https://en.wikipedia.org/wiki/Signal-flow_graph
-#       - https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
-
-# -[ ] we probably want to eval THE BELOW design and unify with the
-#   proto `TaskManager` in the `tractor` dev branch as well as with
-#   our below idea for `Cascade`:
-#   - https://github.com/goodboy/tractor/pull/363
-class Cascade(Struct):
-    '''
-    As per sig-proc engineering parlance, this is a chaining of
-    `Flume`s, which are themselves collections of "Streams"
-    implemented currently via `ShmArray`s.
-
-    A `Cascade` is the minimal "connection" of 2 `Flumes`
-    as per circuit parlance:
-    https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
-
-    TODO:
-    -[ ] could cover the combination of our `FspAdmin` and the
-       backend `.fsp._engine` related machinery to "connect" one flume
-       to another?
-
-    '''
-    # TODO: make these `Flume`s
-    src: Flume
-    dst: Flume
-    tn: trio.Nursery
-    fsp: Fsp  # UI-side middleware ctl API
-
-    # filled during cascade/.bind_func() (fsp_compute) init phases
-    bind_func: Callable | None = None
-    complete: trio.Event | None = None
-    cs: trio.CancelScope | None = None
-    client_stream: tractor.MsgStream | None = None
-
-    async def resync(self) -> int:
-        # TODO: adopt an incremental update engine/approach
-        # where possible here eventually!
-        log.info(f're-syncing fsp {self.fsp.name} to source')
-        self.cs.cancel()
-        await self.complete.wait()
-        index: int = await self.tn.start(self.bind_func)
-
-        # always trigger UI refresh after history update,
-        # see ``piker.ui._fsp.FspAdmin.open_chain()`` and
-        # ``piker.ui._display.trigger_update()``.
-        dst_shm: ShmArray = self.dst.rt_shm
-        await self.client_stream.send({
-            'fsp_update': {
-                'key': dst_shm.token,
-                'first': dst_shm._first.value,
-                'last': dst_shm._last.value,
-            }
-        })
-        return index
-
-    def is_synced(self) -> tuple[bool, int, int]:
-        '''
-        Predicate to determine if a destination FSP
-        output array is aligned to its source array.
-
-        '''
-        src_shm: ShmArray = self.src.rt_shm
-        dst_shm: ShmArray = self.dst.rt_shm
-        step_diff = src_shm.index - dst_shm.index
-        len_diff = abs(len(src_shm.array) - len(dst_shm.array))
-        synced: bool = not (
-            # the source is likely backfilling and we must
-            # sync history calculations
-            len_diff > 2
-
-            # we aren't step synced to the source and may be
-            # leading/lagging by a step
-            or step_diff > 1
-            or step_diff < 0
-        )
-        if not synced:
-            fsp: Fsp = self.fsp
-            log.warning(
-                '***DESYNCED FSP***\n'
-                f'{fsp.ns_path}@{src_shm.token}\n'
-                f'step_diff: {step_diff}\n'
-                f'len_diff: {len_diff}\n'
-            )
-        return (
-            synced,
-            step_diff,
-            len_diff,
-        )
-
-    async def poll_and_sync_to_step(self) -> int:
-        synced, step_diff, _ = self.is_synced()
-        while not synced:
-            await self.resync()
-            synced, step_diff, _ = self.is_synced()
-
-        return step_diff
-
-    @acm
-    async def open_edge(
-        self,
-        bind_func: Callable,
-    ) -> int:
-        self.bind_func = bind_func
-        index = await self.tn.start(bind_func)
-        yield index
-        # TODO: what do we want on teardown/error?
-        # -[ ] dynamic reconnection after update?
-
-
-async def connect_streams(
-    casc: Cascade,
+async def fsp_compute(
     mkt: MktPair,
+    flume: Flume,
     quote_stream: trio.abc.ReceiveChannel,

-    src: Flume,
-    dst: Flume,
-
-    edge_func: Callable,
+    src: ShmArray,
+    dst: ShmArray,
+
+    func: Callable,

     # attach_stream: bool = False,
     task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,

 ) -> None:
-    '''
-    Stream and per-sample compute and write the cascade of
-    2 `Flumes`/streams given some operating `func`.
-
-    https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
-
-    Not literally, but something like:
-
-        edge_func(Flume_in) -> Flume_out
-
-    '''
     profiler = Profiler(
         delayed=False,
         disabled=True
     )

-    # TODO: just pull it from src.mkt.fqme no?
-    # fqme: str = mkt.fqme
-    fqme: str = src.mkt.fqme
-
-    # TODO: dynamic introspection of what the underlying (vertex)
-    # function actually requires from input node (flumes) then
-    # deliver those inputs as part of a graph "compilation" step?
-    out_stream = edge_func(
+    fqme = mkt.fqme
+    out_stream = func(

         # TODO: do we even need this if we do the feed api right?
         # shouldn't a local stream do this before we get a handle
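A numeric sketch of the `.is_synced()` de-sync predicate defined above (all values invented):

step_diff = 5 - 4              # src.index - dst.index
len_diff = abs(1000 - 997)     # |len(src.array) - len(dst.array)|
synced = not (
    len_diff > 2               # source still backfilling
    or step_diff > 1           # dst lagging by more than a step
    or step_diff < 0           # dst leading the source
)
assert not synced              # len_diff == 3 -> must resync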
@@ -244,21 +113,20 @@ async def connect_streams(
         # async itertools style?
         filter_quotes_by_sym(fqme, quote_stream),

-        # XXX: currently the ``ohlcv`` arg, but we should allow
-        # (dynamic) requests for src flume (node) streams?
-        src.rt_shm,
+        # XXX: currently the ``ohlcv`` arg
+        flume.rt_shm,
     )

     # HISTORY COMPUTE PHASE
     # conduct a single iteration of fsp with historical bars input
     # and get historical output.
-    history_output: (
-        dict[str, np.ndarray]  # multi-output case
-        | np.ndarray,  # single output case
-    )
+    history_output: Union[
+        dict[str, np.ndarray],  # multi-output case
+        np.ndarray,  # single output case
+    ]
     history_output = await anext(out_stream)

-    func_name = edge_func.__name__
+    func_name = func.__name__
     profiler(f'{func_name} generated history')

     # build struct array with an 'index' field to push as history

@@ -266,12 +134,10 @@ async def connect_streams(
     # TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no?
     # if the output array is multi-field then push
     # each respective field.
-    dst_shm: ShmArray = dst.rt_shm
-    fields = getattr(dst_shm.array.dtype, 'fields', None).copy()
+    fields = getattr(dst.array.dtype, 'fields', None).copy()
     fields.pop('index')
-    history_by_field: np.ndarray | None = None
-    src_shm: ShmArray = src.rt_shm
-    src_time = src_shm.array['time']
+    history_by_field: Optional[np.ndarray] = None
+    src_time = src.array['time']

     if (
         fields and

@@ -290,7 +156,7 @@ async def connect_streams(
         if history_by_field is None:

             if output is None:
-                length = len(src_shm.array)
+                length = len(src.array)
             else:
                 length = len(output)

@@ -299,7 +165,7 @@ async def connect_streams(
             # will be pushed to shm.
             history_by_field = np.zeros(
                 length,
-                dtype=dst_shm.array.dtype
+                dtype=dst.array.dtype
             )

             if output is None:

@@ -316,13 +182,13 @@ async def connect_streams(
         )
         history_by_field = np.zeros(
             len(history_output),
-            dtype=dst_shm.array.dtype
+            dtype=dst.array.dtype
         )
         history_by_field[func_name] = history_output

     history_by_field['time'] = src_time[-len(history_by_field):]

-    history_output['time'] = src_shm.array['time']
+    history_output['time'] = src.array['time']

     # TODO: XXX:
     # THERE'S A BIG BUG HERE WITH THE `index` field since we're

@@ -335,11 +201,11 @@ async def connect_streams(
     # is `index` aware such that historical data can be indexed
     # relative to the true first datum? Not sure if this is sane
     # for incremental computations.
-    first = dst_shm._first.value = src_shm._first.value
+    first = dst._first.value = src._first.value

     # TODO: can we use this `start` flag instead of the manual
     # setting above?
-    index = dst_shm.push(
+    index = dst.push(
         history_by_field,
         start=first,
     )

@@ -350,9 +216,12 @@ async def connect_streams(
     # setup a respawn handle
     with trio.CancelScope() as cs:

-        casc.cs = cs
-        casc.complete = trio.Event()
-        task_status.started(index)
+        # TODO: might be better to just make a "restart" method where
+        # the target task is spawned implicitly and then the event is
+        # set via some higher level api? At that point we might as well
+        # be writing a one-cancels-one nursery though right?
+        tracker = TaskTracker(trio.Event(), cs)
+        task_status.started((tracker, index))

         profiler(f'{func_name} yield last index')

@@ -366,12 +235,12 @@ async def connect_streams(
             log.debug(f"{func_name}: {processed}")
             key, output = processed
             # dst.array[-1][key] = output
-            dst_shm.array[[key, 'time']][-1] = (
+            dst.array[[key, 'time']][-1] = (
                 output,
                 # TODO: what about pushing ``time.time_ns()``
                 # in which case we'll need to round at the graphics
                 # processing / sampling layer?
-                src_shm.array[-1]['time']
+                src.array[-1]['time']
             )

             # NOTE: for now we aren't streaming this to the consumer

@@ -383,7 +252,7 @@ async def connect_streams(
             # N-consumers who subscribe for the real-time output,
             # which we'll likely want to implement using local-mem
             # chans for the fan out?
-            # index = src_shm.index
+            # index = src.index
             # if attach_stream:
             #     await client_stream.send(index)

@@ -393,7 +262,7 @@ async def connect_streams(
                 # log.info(f'FSP quote too fast: {hz}')
                 # last = time.time()
     finally:
-        casc.complete.set()
+        tracker.complete.set()


 @tractor.context

@@ -404,15 +273,15 @@ async def cascade(
     # data feed key
     fqme: str,

-    # flume pair cascaded using an "edge function"
-    src_flume_addr: dict,
-    dst_flume_addr: dict,
+    src_shm_token: dict,
+    dst_shm_token: tuple[str, np.dtype],

     ns_path: NamespacePath,

     shm_registry: dict[str, _Token],

     zero_on_step: bool = False,
-    loglevel: str | None = None,
+    loglevel: Optional[str] = None,

 ) -> None:
     '''

@@ -428,14 +297,8 @@ async def cascade(
     if loglevel:
         get_console_log(loglevel)

-    src: Flume = Flume.from_msg(src_flume_addr)
-    dst: Flume = Flume.from_msg(
-        dst_flume_addr,
-        readonly=False,
-    )
-
-    # src: ShmArray = attach_shm_array(token=src_shm_token)
-    # dst: ShmArray = attach_shm_array(readonly=False, token=dst_shm_token)
+    src = attach_shm_array(token=src_shm_token)
+    dst = attach_shm_array(readonly=False, token=dst_shm_token)

     reg = _load_builtins()
     lines = '\n'.join([f'{key.rpartition(":")[2]} => {key}' for key in reg])

@@ -443,11 +306,11 @@ async def cascade(
         f'Registered FSP set:\n{lines}'
     )

-    # NOTE XXX: update actorlocal flows table which registers
-    # readonly "instances" of this fsp for symbol/source so that
-    # consumer fsps can look it up by source + fsp.
-    # TODO: ugh i hate this wind/unwind to list over the wire but
-    # not sure how else to do it.
+    # update actorlocal flows table which registers
+    # readonly "instances" of this fsp for symbol/source
+    # so that consumer fsps can look it up by source + fsp.
+    # TODO: ugh i hate this wind/unwind to list over the wire
+    # but not sure how else to do it.
     for (token, fsp_name, dst_token) in shm_registry:
         Fsp._flow_registry[(
             _Token.from_msg(token),

@@ -457,15 +320,12 @@ async def cascade(
     fsp: Fsp = reg.get(
         NamespacePath(ns_path)
     )
-    func: Callable = fsp.func
+    func = fsp.func

     if not func:
         # TODO: assume it's a func target path
         raise ValueError(f'Unknown fsp target: {ns_path}')

-    _fqme: str = src.mkt.fqme
-    assert _fqme == fqme
-
     # open a data feed stream with requested broker
     feed: Feed
     async with data.feed.maybe_open_feed(

@@ -479,143 +339,177 @@ async def cascade(

     ) as feed:

-        flume: Flume = feed.flumes[fqme]
-        # XXX: can't do this since flume.feed will be set XD
-        # assert flume == src
-        assert flume.mkt == src.mkt
-        mkt: MktPair = flume.mkt
-
-        # NOTE: FOR NOW, sanity checks around the feed as being
-        # always the src flume (until we get to fancier/lengthier
-        # chains/graphs.
-        assert src.rt_shm.token == flume.rt_shm.token
-
-        # XXX: won't work bc the _hist_shm_token value will be
-        # list[list] after IPC..
-        # assert flume.to_msg() == src_flume_addr
+        flume = feed.flumes[fqme]
+        mkt = flume.mkt
+        assert src.token == flume.rt_shm.token

         profiler(f'{func}: feed up')

-        func_name: str = func.__name__
+        func_name = func.__name__
         async with (
-            tractor.trionics.collapse_eg(),  # avoid multi-taskc tb in console
-            trio.open_nursery() as tn,
+            trio.open_nursery() as n,
         ):
-            # TODO: might be better to just make a "restart" method where
-            # the target task is spawned implicitly and then the event is
-            # set via some higher level api? At that point we might as well
-            # be writing a one-cancels-one nursery though right?
-            casc = Cascade(
-                src,
-                dst,
-                tn,
-                fsp,
-            )
-
-            # TODO: this seems like it should be wrapped somewhere?
             fsp_target = partial(
-                connect_streams,
-                casc=casc,
+                fsp_compute,
                 mkt=mkt,
+                flume=flume,
                 quote_stream=flume.stream,

-                # flumes and shm passthrough
+                # shm
                 src=src,
                 dst=dst,

-                # chain function which takes src flume input(s)
-                # and renders dst flume output(s)
-                edge_func=func
+                # target
+                func=func
             )
-            async with casc.open_edge(
-                bind_func=fsp_target,
-            ) as index:
-                # casc.bind_func = fsp_target
-                # index = await tn.start(fsp_target)
-                dst_shm: ShmArray = dst.rt_shm
-                src_shm: ShmArray = src.rt_shm
-
-                if zero_on_step:
-                    last = dst.rt_shm.array[-1:]
-                    zeroed = np.zeros(last.shape, dtype=last.dtype)
-
-                profiler(f'{func_name}: fsp up')
-
-                # sync to client-side actor
-                await ctx.started(index)
-
-                # XXX: rt stream with client which we MUST
-                # open here (and keep it open) in order to make
-                # incremental "updates" as history prepends take
-                # place.
-                async with ctx.open_stream() as client_stream:
-                    casc.client_stream: tractor.MsgStream = client_stream
-
-                    s, step, ld = casc.is_synced()
-
-                    # detect sample period step for subscription to increment
-                    # signal
-                    times = src.rt_shm.array['time']
-                    if len(times) > 1:
-                        last_ts = times[-1]
-                        delay_s: float = float(last_ts - times[times != last_ts][-1])
-                    else:
-                        # our default "HFT" sample rate.
-                        delay_s: float = _default_delay_s
-
-                    # sub and increment the underlying shared memory buffer
-                    # on every step msg received from the global `samplerd`
-                    # service.
-                    async with open_sample_stream(
-                        float(delay_s)
-                    ) as istream:
-
-                        profiler(f'{func_name}: sample stream up')
-                        profiler.finish()
-
-                        async for i in istream:
-                            # print(f'FSP incrementing {i}')
-
-                            # respawn the compute task if the source
-                            # array has been updated such that we compute
-                            # new history from the (prepended) source.
-                            synced, step_diff, _ = casc.is_synced()
-                            if not synced:
-                                step_diff: int = await casc.poll_and_sync_to_step()
-
-                                # skip adding a last bar since we should already
-                                # be step aligned
-                                if step_diff == 0:
-                                    continue
-
-                            # read out last shm row, copy and write new row
-                            array = dst_shm.array
-
-                            # some metrics like vlm should be reset
-                            # to zero every step.
-                            if zero_on_step:
-                                last = zeroed
-                            else:
-                                last = array[-1:].copy()
-
-                            dst.rt_shm.push(last)
-
-                            # sync with source buffer's time step
-                            src_l2 = src_shm.array[-2:]
-                            src_li, src_lt = src_l2[-1][['index', 'time']]
-                            src_2li, src_2lt = src_l2[-2][['index', 'time']]
-                            dst_shm._array['time'][src_li] = src_lt
-                            dst_shm._array['time'][src_2li] = src_2lt
-
-                            # last2 = dst.array[-2:]
-                            # if (
-                            #     last2[-1]['index'] != src_li
-                            #     or last2[-2]['index'] != src_2li
-                            # ):
-                            #     dstl2 = list(last2)
-                            #     srcl2 = list(src_l2)
-                            #     print(
-                            #         # f'{dst.token}\n'
-                            #         f'src: {srcl2}\n'
-                            #         f'dst: {dstl2}\n'
-                            #     )
+            tracker, index = await n.start(fsp_target)
+
+            if zero_on_step:
+                last = dst.array[-1:]
+                zeroed = np.zeros(last.shape, dtype=last.dtype)
+
+            profiler(f'{func_name}: fsp up')
+
+            # sync client
+            await ctx.started(index)
+
+            # XXX: rt stream with client which we MUST
+            # open here (and keep it open) in order to make
+            # incremental "updates" as history prepends take
+            # place.
+            async with ctx.open_stream() as client_stream:
+
+                # TODO: these likely should all become
+                # methods of this ``TaskLifetime`` or wtv
+                # abstraction..
+                async def resync(
+                    tracker: TaskTracker,
+
+                ) -> tuple[TaskTracker, int]:
+                    # TODO: adopt an incremental update engine/approach
+                    # where possible here eventually!
+                    log.info(f're-syncing fsp {func_name} to source')
+                    tracker.cs.cancel()
+                    await tracker.complete.wait()
+                    tracker, index = await n.start(fsp_target)
+
+                    # always trigger UI refresh after history update,
+                    # see ``piker.ui._fsp.FspAdmin.open_chain()`` and
+                    # ``piker.ui._display.trigger_update()``.
+                    await client_stream.send({
+                        'fsp_update': {
+                            'key': dst_shm_token,
+                            'first': dst._first.value,
+                            'last': dst._last.value,
+                        }
+                    })
+                    return tracker, index
+
+                def is_synced(
+                    src: ShmArray,
+                    dst: ShmArray
+                ) -> tuple[bool, int, int]:
+                    '''
+                    Predicate to determine if a destination FSP
+                    output array is aligned to its source array.
+
+                    '''
+                    step_diff = src.index - dst.index
+                    len_diff = abs(len(src.array) - len(dst.array))
+                    return not (
+                        # the source is likely backfilling and we must
+                        # sync history calculations
+                        len_diff > 2
+
+                        # we aren't step synced to the source and may be
+                        # leading/lagging by a step
+                        or step_diff > 1
+                        or step_diff < 0
+                    ), step_diff, len_diff
+
+                async def poll_and_sync_to_step(
+                    tracker: TaskTracker,
+                    src: ShmArray,
+                    dst: ShmArray,
+
+                ) -> tuple[TaskTracker, int]:
+
+                    synced, step_diff, _ = is_synced(src, dst)
+                    while not synced:
+                        tracker, index = await resync(tracker)
+                        synced, step_diff, _ = is_synced(src, dst)
+
+                    return tracker, step_diff
+
+                s, step, ld = is_synced(src, dst)
+
+                # detect sample period step for subscription to increment
+                # signal
+                times = src.array['time']
+                if len(times) > 1:
+                    last_ts = times[-1]
+                    delay_s = float(last_ts - times[times != last_ts][-1])
+                else:
+                    # our default "HFT" sample rate.
+                    delay_s = _default_delay_s
+
+                # sub and increment the underlying shared memory buffer
+                # on every step msg received from the global `samplerd`
+                # service.
+                async with open_sample_stream(float(delay_s)) as istream:
+
+                    profiler(f'{func_name}: sample stream up')
+                    profiler.finish()
+
+                    async for i in istream:
+                        # print(f'FSP incrementing {i}')
+
+                        # respawn the compute task if the source
+                        # array has been updated such that we compute
+                        # new history from the (prepended) source.
+                        synced, step_diff, _ = is_synced(src, dst)
+                        if not synced:
+                            tracker, step_diff = await poll_and_sync_to_step(
+                                tracker,
+                                src,
+                                dst,
+                            )
+
+                            # skip adding a last bar since we should already
+                            # be step aligned
+                            if step_diff == 0:
+                                continue
+
+                        # read out last shm row, copy and write new row
+                        array = dst.array
+
+                        # some metrics like vlm should be reset
+                        # to zero every step.
+                        if zero_on_step:
+                            last = zeroed
+                        else:
+                            last = array[-1:].copy()
+
+                        dst.push(last)
+
+                        # sync with source buffer's time step
+                        src_l2 = src.array[-2:]
+                        src_li, src_lt = src_l2[-1][['index', 'time']]
+                        src_2li, src_2lt = src_l2[-2][['index', 'time']]
+                        dst._array['time'][src_li] = src_lt
+                        dst._array['time'][src_2li] = src_2lt
+
+                        # last2 = dst.array[-2:]
+                        # if (
+                        #     last2[-1]['index'] != src_li
+                        #     or last2[-2]['index'] != src_2li
+                        # ):
+                        #     dstl2 = list(last2)
+                        #     srcl2 = list(src_l2)
+                        #     print(
+                        #         # f'{dst.token}\n'
+                        #         f'src: {srcl2}\n'
+                        #         f'dst: {dstl2}\n'
+                        #     )
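Both versions above infer the sample period by masking out repeated trailing stamps before differencing; a runnable sketch of just that step:

import numpy as np

times = np.array([100., 101., 102., 102.])   # invented epoch stamps
last_ts = times[-1]
delay_s = float(last_ts - times[times != last_ts][-1])
assert delay_s == 1.0    # period recovered despite the repeated last stamp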
30  piker/log.py
@@ -19,10 +19,6 @@ Log like a forester!
 """
 import logging
 import json
-import reprlib
-from typing import (
-    Callable,
-)

 import tractor
 from pygments import (

@@ -88,29 +84,3 @@ def colorize_json(
         # likeable styles: algol_nu, tango, monokai
         formatters.TerminalTrueColorFormatter(style=style)
     )
-
-
-# TODO, eventually defer to the version in `modden` once
-# it becomes a dep!
-def mk_repr(
-    **repr_kws,
-) -> Callable[[str], str]:
-    '''
-    Allocate and deliver a `repr.Repr` instance with provided input
-    settings using the std-lib's `reprlib` mod,
-    * https://docs.python.org/3/library/reprlib.html
-
-    ------ Ex. ------
-    An up to 6-layer-nested `dict` as multi-line:
-    - https://stackoverflow.com/a/79102479
-    - https://docs.python.org/3/library/reprlib.html#reprlib.Repr.maxlevel
-
-    '''
-    def_kws: dict[str, int] = dict(
-        indent=2,
-        maxlevel=6,  # recursion levels
-        maxstring=66,  # match editor line-len limit
-    )
-    def_kws |= repr_kws
-    reprr = reprlib.Repr(**def_kws)
-    return reprr.repr
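What `mk_repr()` boils down to is a pre-configured `reprlib.Repr`; a stdlib-only sketch (attribute-style config is used here for portability to pythons preceding the kwargs-style `Repr.__init__`):

import reprlib

reprr = reprlib.Repr()
reprr.maxlevel = 6      # nesting levels before eliding
reprr.maxstring = 66    # elide strings past the editor line-len limit

nested = {'a': {'b': {'c': 'x' * 200}}}
print(reprr.repr(nested))    # deep dict shown, long string elided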
@@ -14,45 +14,49 @@
 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.

-'''
-Actor runtime primitives and (distributed) service APIs for,
-
-- daemon-service mgmt: `_daemon` (i.e. low-level spawn and supervise machinery
-  for sub-actors like `brokerd`, `emsd`, `datad`, etc.)
-
-- service-actor supervision (via `trio` tasks) API: `._mngr`
-
-- discovery interface (via light wrapping around `tractor`'s built-in
-  prot): `._registry`
-
-- `docker` cntr SC supervision for use with `trio`: `_ahab`
-  - wrappers for marketstore and elasticsearch dbs
-    => TODO: maybe to (re)move elsewhere?
-
-'''
-from ._mngr import Services as Services
-from ._registry import (
-    _tractor_kwargs as _tractor_kwargs,
-    _default_reg_addr as _default_reg_addr,
-    _default_registry_host as _default_registry_host,
-    _default_registry_port as _default_registry_port,
-
-    open_registry as open_registry,
-    find_service as find_service,
-    check_for_service as check_for_service,
+"""
+Actor-runtime service orchestration machinery.
+
+"""
+from __future__ import annotations
+
+from ._mngr import Services
+from ._registry import (  # noqa
+    _tractor_kwargs,
+    _default_reg_addr,
+    _default_registry_host,
+    _default_registry_port,
+    open_registry,
+    find_service,
+    check_for_service,
 )
-from ._daemon import (
-    maybe_spawn_daemon as maybe_spawn_daemon,
-    spawn_emsd as spawn_emsd,
-    maybe_open_emsd as maybe_open_emsd,
+from ._daemon import (  # noqa
+    maybe_spawn_daemon,
+    spawn_emsd,
+    maybe_open_emsd,
 )
 from ._actor_runtime import (
-    open_piker_runtime as open_piker_runtime,
-    maybe_open_pikerd as maybe_open_pikerd,
-    open_pikerd as open_pikerd,
-    get_runtime_vars as get_runtime_vars,
+    open_piker_runtime,
+    maybe_open_pikerd,
+    open_pikerd,
+    get_tractor_runtime_kwargs,
 )
 from ..brokers._daemon import (
-    spawn_brokerd as spawn_brokerd,
-    maybe_spawn_brokerd as maybe_spawn_brokerd,
+    spawn_brokerd,
+    maybe_spawn_brokerd,
 )


+__all__ = [
+    'check_for_service',
+    'Services',
+    'maybe_spawn_daemon',
+    'spawn_brokerd',
+    'maybe_spawn_brokerd',
+    'spawn_emsd',
+    'maybe_open_emsd',
+    'open_piker_runtime',
+    'maybe_open_pikerd',
+    'open_pikerd',
+    'get_tractor_runtime_kwargs',
+]
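Note the main-side `import x as x` forms: that's the typing-spec explicit re-export idiom which the branch's `# noqa` + `__all__` combo replaces; a sketch of the difference (module and symbol names invented):

# pkg/__init__.py
from ._impl import thing as thing    # explicit re-export; no # noqa needed

# vs the equivalent older pattern being diffed against:
# from ._impl import thing  # noqa
# __all__ = ['thing']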
@@ -45,7 +45,7 @@ from ._registry import (  # noqa
 )


-def get_runtime_vars() -> dict[str, Any]:
+def get_tractor_runtime_kwargs() -> dict[str, Any]:
     '''
     Deliver ``tractor`` related runtime variables in a `dict`.

@@ -56,8 +56,6 @@ def get_runtime_vars() -> dict[str, Any]:
 @acm
 async def open_piker_runtime(
     name: str,
-    registry_addrs: list[tuple[str, int]] = [],
-
     enable_modules: list[str] = [],
     loglevel: Optional[str] = None,

@@ -65,6 +63,8 @@ async def open_piker_runtime(
     # for data daemons when running in production.
     debug_mode: bool = False,

+    registry_addr: None | tuple[str, int] = None,
+
     # TODO: once we have `rsyscall` support we will read a config
     # and spawn the service tree distributed per that.
     start_method: str = 'trio',

@@ -74,7 +74,7 @@ async def open_piker_runtime(

 ) -> tuple[
     tractor.Actor,
-    list[tuple[str, int]],
+    tuple[str, int],
 ]:
     '''
     Start a piker actor whose runtime will automatically sync with

@@ -84,71 +84,50 @@ async def open_piker_runtime(
     a root actor.

     '''
-    # check for existing runtime, boot it
-    # if not already running.
     try:
-        actor = tractor.current_actor()
+        # check for existing runtime
+        actor = tractor.current_actor().uid

     except tractor._exceptions.NoRuntime:
         tractor._state._runtime_vars[
-            'piker_vars'
-        ] = tractor_runtime_overrides
+            'piker_vars'] = tractor_runtime_overrides

-        # NOTE: if no registrar list passed use the default of just
-        # setting it as the root actor on localhost.
-        registry_addrs = (
-            registry_addrs
-            or [_default_reg_addr]
-        )
-
-        if ems := tractor_kwargs.pop('enable_modules', None):
-            # import pdbp; pdbp.set_trace()
-            enable_modules.extend(ems)
+        registry_addr = registry_addr or _default_reg_addr

         async with (
             tractor.open_root_actor(

-                # passed through to `open_root_actor`
-                registry_addrs=registry_addrs,
+                # passed through to ``open_root_actor``
+                arbiter_addr=registry_addr,
                 name=name,
-                start_method=start_method,
                 loglevel=loglevel,
                 debug_mode=debug_mode,
-
-                # XXX NOTE MEMBER DAT der's a perf hit yo!!
-                # https://greenback.readthedocs.io/en/latest/principle.html#performance
-                maybe_enable_greenback=True,
+                start_method=start_method,

                 # TODO: eventually we should be able to avoid
                 # having the root have more than permissions to
                 # spawn other specialized daemons I think?
                 enable_modules=enable_modules,
-                hide_tb=False,

                 **tractor_kwargs,
-            ) as actor,
+            ) as _,

-            open_registry(
-                registry_addrs,
-                ensure_exists=False,
-            ) as addrs,
+            open_registry(registry_addr, ensure_exists=False) as addr,
         ):
-            assert actor is tractor.current_actor()
             yield (
-                actor,
-                addrs,
+                tractor.current_actor(),
+                addr,
             )
     else:
-        async with open_registry(
-            registry_addrs
-        ) as addrs:
+        async with open_registry(registry_addr) as addr:
             yield (
                 actor,
-                addrs,
+                addr,
             )


-_root_dname: str = 'pikerd'
-_root_modules: list[str] = [
+_root_dname = 'pikerd'
+_root_modules = [
     __name__,
     'piker.service._daemon',
     'piker.brokers._daemon',
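On the main side the registrar fallback reduces to a simple or-default over a socket-address list; a sketch (the default address mirrors the hard-coded one that appears in the `_daemon.py` hunk further below):

_default_reg_addr = ('127.0.0.1', 6116)   # assumed piker default

def resolve_registries(addrs):
    # fall back to the localhost root registrar when none are passed
    return addrs or [_default_reg_addr]

assert resolve_registries(None) == [('127.0.0.1', 6116)]
assert resolve_registries([('10.0.0.2', 6116)]) == [('10.0.0.2', 6116)]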
@@ -162,13 +141,13 @@ _root_modules: list[str] = [

 @acm
 async def open_pikerd(
-    registry_addrs: list[tuple[str, int]],
-
     loglevel: str | None = None,

     # XXX: you should pretty much never want debug mode
     # for data daemons when running in production.
     debug_mode: bool = False,
+    registry_addr: None | tuple[str, int] = None,

     **kwargs,

@@ -180,44 +159,33 @@ async def open_pikerd(
     alive underlying services (see below).

     '''
-    # NOTE: for the root daemon we always enable the root
-    # mod set and we `list.extend()` it into wtv the
-    # caller requested.
-    # TODO: make this mod set more strict?
-    # -[ ] eventually we should be able to avoid
-    #   having the root have more than permissions to spawn other
-    #   specialized daemons I think?
-    ems: list[str] = kwargs.setdefault('enable_modules', [])
-    ems.extend(_root_modules)
-
     async with (
         open_piker_runtime(

             name=_root_dname,
+            # TODO: eventually we should be able to avoid
+            # having the root have more than permissions to
+            # spawn other specialized daemons I think?
+            enable_modules=_root_modules,
             loglevel=loglevel,
             debug_mode=debug_mode,
-            registry_addrs=registry_addrs,
+            registry_addr=registry_addr,

             **kwargs,

-        ) as (
-            root_actor,
-            reg_addrs,
-        ),
+        ) as (root_actor, reg_addr),
         tractor.open_nursery() as actor_nursery,
-        tractor.trionics.collapse_eg(),
-        trio.open_nursery() as service_tn,
+        trio.open_nursery() as service_nursery,
     ):
-        for addr in reg_addrs:
-            if addr not in root_actor.accept_addrs:
-                raise RuntimeError(
-                    f'`pikerd` failed to bind on {addr}!\n'
-                    'Maybe you have another daemon already running?'
-                )
+        if root_actor.accept_addr != reg_addr:
+            raise RuntimeError(
+                f'`pikerd` failed to bind on {reg_addr}!\n'
+                'Maybe you have another daemon already running?'
+            )

         # assign globally for future daemon/task creation
         Services.actor_n = actor_nursery
-        Services.service_n = service_tn
+        Services.service_n = service_nursery
         Services.debug_mode = debug_mode

         try:

@@ -227,7 +195,7 @@ async def open_pikerd(
             # TODO: is this more clever/efficient?
             # if 'samplerd' in Services.service_tasks:
             #     await Services.cancel_service('samplerd')
-            service_tn.cancel_scope.cancel()
+            service_nursery.cancel_scope.cancel()


         # TODO: do we even need this?

@@ -257,15 +225,12 @@ async def open_pikerd(

 @acm
 async def maybe_open_pikerd(
-    registry_addrs: list[tuple[str, int]] | None = None,
-
-    loglevel: str | None = None,
+    loglevel: Optional[str] = None,
+    registry_addr: None | tuple = None,

     **kwargs,

-) -> (
-    tractor._portal.Portal
-    | ClassVar[Services]
-):
+) -> tractor._portal.Portal | ClassVar[Services]:
     '''
     If no ``pikerd`` daemon-root-actor can be found start it and
     yield up (we should probably figure out returning a portal to self

@@ -288,52 +253,32 @@ async def maybe_open_pikerd(
     # async with open_portal(chan) as arb_portal:
     #     yield arb_portal

-    registry_addrs: list[tuple[str, int]] = (
-        registry_addrs
-        or
-        [_default_reg_addr]
-    )
-
-    pikerd_portal: tractor.Portal | None
     async with (
         open_piker_runtime(
             name=query_name,
-            registry_addrs=registry_addrs,
+            registry_addr=registry_addr,
             loglevel=loglevel,
             **kwargs,
-        ) as (actor, addrs),
+        ) as _,
+
+        tractor.find_actor(
+            _root_dname,
+            arbiter_sockaddr=registry_addr,
+        ) as portal
     ):
-        if _root_dname in actor.uid:
-            yield None
+        # connect to any existing daemon presuming
+        # its registry socket was selected.
+        if (
+            portal is not None
+        ):
+            yield portal
             return

-        # NOTE: IFF running in disti mode, try to attach to any
-        # existing (host-local) `pikerd`.
-        else:
-            async with tractor.find_actor(
-                _root_dname,
-                registry_addrs=registry_addrs,
-                only_first=True,
-                # raise_on_none=True,
-            ) as pikerd_portal:
-
-                # connect to any existing remote daemon presuming its
-                # registry socket was selected.
-                if pikerd_portal is not None:
-
-                    # sanity check that we are actually connecting to
-                    # a remote process and not ourselves.
-                    assert actor.uid != pikerd_portal.channel.uid
-                    assert registry_addrs
-
-                    yield pikerd_portal
-                    return
-
     # presume pikerd role since no daemon could be found at
     # configured address
     async with open_pikerd(
         loglevel=loglevel,
-        registry_addrs=registry_addrs,
+        registry_addr=registry_addr,

         # passthrough to ``tractor`` init
         **kwargs,
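Typical call-side shape for `maybe_open_pikerd()` in either version (a sketch; per the return annotations above, a non-`Portal` yield means this process became the root daemon):

import trio

async def main():
    async with maybe_open_pikerd(loglevel='info') as portal:
        if portal is None:    # main-side convention
            ...               # we *are* `pikerd` in-process
        else:
            ...               # attached to an existing daemon tree

trio.run(main)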
@@ -15,8 +15,8 @@
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.

 '''
-Supervisor for ``docker`` with included async and SC wrapping to
-ensure a cancellable container lifetime system.
+Supervisor for ``docker`` with included async and SC wrapping
+to ensure a cancellable container lifetime system.

 '''
 from __future__ import annotations
@ -28,7 +28,6 @@ from contextlib import (
|
||||||
)
|
)
|
||||||
|
|
||||||
import tractor
|
import tractor
|
||||||
from trio.lowlevel import current_task
|
|
||||||
|
|
||||||
from ._util import (
|
from ._util import (
|
||||||
log, # sub-sys logger
|
log, # sub-sys logger
|
||||||
|
|
@@ -71,84 +70,66 @@ async def maybe_spawn_daemon(
     lock = Services.locks[service_name]
     await lock.acquire()
 
-    try:
-        async with find_service(
-            service_name,
-            registry_addrs=[('127.0.0.1', 6116)],
-        ) as portal:
-            if portal is not None:
-                lock.release()
-                yield portal
-                return
-
-        log.warning(
-            f"Couldn't find any existing {service_name}\n"
-            'Attempting to spawn new daemon-service..'
-        )
-
-        # ask root ``pikerd`` daemon to spawn the daemon we need if
-        # pikerd is not live we now become the root of the
-        # process tree
-        async with maybe_open_pikerd(
-            loglevel=loglevel,
-            **pikerd_kwargs,
-
-        ) as pikerd_portal:
-
-            # we are the root and thus are `pikerd`
-            # so spawn the target service directly by calling
-            # the provided target routine.
-            # XXX: this assumes that the target is well formed and will
-            # do the right things to setup both a sub-actor **and** call
-            # the ``_Services`` api from above to start the top level
-            # service task for that actor.
-            started: bool
-            if pikerd_portal is None:
-                started = await service_task_target(
-                    loglevel=loglevel,
-                    **spawn_args,
-                )
-
-            else:
-                # request a remote `pikerd` (service manager) to start the
-                # target daemon-task, the target can't return
-                # a non-serializable value since it is expected that service
-                # starting is non-blocking and the target task will persist
-                # running "under" or "within" the `pikerd` actor tree after
-                # the questing client disconnects. in other words this
-                # spawns a persistent daemon actor that continues to live
-                # for the lifespan of whatever the service manager inside
-                # `pikerd` says it should.
-                started = await pikerd_portal.run(
-                    service_task_target,
-                    loglevel=loglevel,
-                    **spawn_args,
-                )
-
-            if started:
-                log.info(f'Service {service_name} started!')
-
-            # block until we can discover (by IPC connection) to the newly
-            # spawned daemon-actor and then deliver the portal to the
-            # caller.
-            async with tractor.wait_for_actor(service_name) as portal:
-                lock.release()
-                yield portal
-                await portal.cancel_actor()
-
-    except BaseException as _err:
-        err = _err
-        if (
-            lock.locked()
-            and
-            lock.statistics().owner is current_task()
-        ):
-            log.exception(
-                f'Releasing stale lock after crash..?'
-                f'{err!r}\n'
-            )
-            lock.release()
-        raise err
+    async with find_service(service_name) as portal:
+        if portal is not None:
+            lock.release()
+            yield portal
+            return
+
+    log.warning(
+        f"Couldn't find any existing {service_name}\n"
+        'Attempting to spawn new daemon-service..'
+    )
+
+    # ask root ``pikerd`` daemon to spawn the daemon we need if
+    # pikerd is not live we now become the root of the
+    # process tree
+    async with maybe_open_pikerd(
+        loglevel=loglevel,
+        **pikerd_kwargs,
+
+    ) as pikerd_portal:
+
+        # we are the root and thus are `pikerd`
+        # so spawn the target service directly by calling
+        # the provided target routine.
+        # XXX: this assumes that the target is well formed and will
+        # do the right things to setup both a sub-actor **and** call
+        # the ``_Services`` api from above to start the top level
+        # service task for that actor.
+        started: bool
+        if pikerd_portal is None:
+            started = await service_task_target(
+                loglevel=loglevel,
+                **spawn_args,
+            )
+
+        else:
+            # request a remote `pikerd` (service manager) to start the
+            # target daemon-task, the target can't return
+            # a non-serializable value since it is expected that service
+            # starting is non-blocking and the target task will persist
+            # running "under" or "within" the `pikerd` actor tree after
+            # the questing client disconnects. in other words this
+            # spawns a persistent daemon actor that continues to live
+            # for the lifespan of whatever the service manager inside
+            # `pikerd` says it should.
+            started = await pikerd_portal.run(
+                service_task_target,
+                loglevel=loglevel,
+                **spawn_args,
+            )
+
+        if started:
+            log.info(f'Service {service_name} started!')
+
+        # block until we can discover (by IPC connection) to the newly
+        # spawned daemon-actor and then deliver the portal to the
+        # caller.
+        async with tractor.wait_for_actor(service_name) as portal:
+            lock.release()
+            yield portal
+            await portal.cancel_actor()
 
 
 async def spawn_emsd(
 
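For orientation, a minimal caller sketch of the ``-`` side's `maybe_spawn_daemon()` acm; the service name, target and args here are illustrative assumptions, not values from this diff::

    async def attach_or_spawn():
        async with maybe_spawn_daemon(
            'emsd',                          # assumed service name
            service_task_target=spawn_emsd,  # a spawn-capable target
            spawn_args={},
            loglevel='info',
        ) as portal:
            # portal to the (possibly just-spawned) daemon actor
            print(f'connected to {portal.channel.uid}')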
@@ -27,12 +27,6 @@ from typing import (
 )
 
 import trio
 from trio_typing import TaskStatus
 import tractor
-from tractor import (
-    current_actor,
-    ContextCancelled,
-    Context,
-    Portal,
-)
 
 from ._util import (
     log,  # sub-sys logger
@@ -44,8 +38,6 @@ from ._util import (
 # library.
 # - wrap a "remote api" wherein you can get a method proxy
 #   to the pikerd actor for starting services remotely!
-# - prolly rename this to ActorServicesNursery since it spawns
-#   new actors and supervises them to completion?
 class Services:
 
     actor_n: tractor._supervise.ActorNursery
@@ -55,7 +47,7 @@ class Services:
         str,
         tuple[
             trio.CancelScope,
-            Portal,
+            tractor.Portal,
             trio.Event,
         ]
     ] = {}
@@ -65,12 +57,12 @@ class Services:
     async def start_service_task(
         self,
         name: str,
-        portal: Portal,
+        portal: tractor.Portal,
         target: Callable,
         allow_overruns: bool = False,
         **ctx_kwargs,
 
-    ) -> (trio.CancelScope, Context):
+    ) -> (trio.CancelScope, tractor.Context):
         '''
         Open a context in a service sub-actor, add to a stack
         that gets unwound at ``pikerd`` teardown.
@@ -109,30 +101,13 @@ class Services:
             # wait on any context's return value
             # and any final portal result from the
             # sub-actor.
-            ctx_res: Any = await ctx.wait_for_result()
+            ctx_res = await ctx.result()
 
             # NOTE: blocks indefinitely until cancelled
             # either by error from the target context
             # function or by being cancelled here by the
             # surrounding cancel scope.
             return (await portal.result(), ctx_res)
-        except ContextCancelled as ctxe:
-            canceller: tuple[str, str] = ctxe.canceller
-            our_uid: tuple[str, str] = current_actor().uid
-            if (
-                canceller != portal.channel.uid
-                and
-                canceller != our_uid
-            ):
-                log.cancel(
-                    f'Actor-service {name} was remotely cancelled?\n'
-                    f'remote canceller: {canceller}\n'
-                    f'Keeping {our_uid} alive, ignoring sub-actor cancel..\n'
-                )
-            else:
-                raise
 
         finally:
             await portal.cancel_actor()
 
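The `target` passed to `start_service_task()` is expected to be a `tractor` context-endpoint; a minimal illustrative shape for such a service task (not code from this diff)::

    import trio
    import tractor

    @tractor.context
    async def my_service_task(
        ctx: tractor.Context,
    ) -> None:
        # signal the opening side that this task is up,
        # then serve until cancelled by the service mngr.
        await ctx.started()
        await trio.sleep_forever()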
@@ -27,7 +27,6 @@ from typing import (
 )
 
 import tractor
-from tractor import Portal
 
 from ._util import (
     log,  # sub-sys logger
@@ -47,9 +46,7 @@ _registry: Registry | None = None
 
 
 class Registry:
-    # TODO: should this be a set or should we complain
-    # on duplicates?
-    addrs: list[tuple[str, int]] = []
+    addr: None | tuple[str, int] = None
 
     # TODO: table of uids to sockaddrs
     peers: dict[
@@ -63,134 +60,69 @@ _tractor_kwargs: dict[str, Any] = {}
 
 @acm
 async def open_registry(
-    addrs: list[tuple[str, int]],
+    addr: None | tuple[str, int] = None,
     ensure_exists: bool = True,
 
-) -> list[tuple[str, int]]:
-    '''
-    Open the service-actor-discovery registry by returning a set of
-    tranport socket-addrs to registrar actors which may be
-    contacted and queried for similar addresses for other
-    non-registrar actors.
-
-    '''
+) -> tuple[str, int]:
     global _tractor_kwargs
     actor = tractor.current_actor()
     uid = actor.uid
-    preset_reg_addrs: list[tuple[str, int]] = Registry.addrs
     if (
-        preset_reg_addrs
-        and addrs
+        Registry.addr is not None
+        and addr
     ):
-        if preset_reg_addrs != addrs:
-            # if any(addr in preset_reg_addrs for addr in addrs):
-            diff: set[tuple[str, int]] = set(preset_reg_addrs) - set(addrs)
-            if diff:
-                log.warning(
-                    f'`{uid}` requested only subset of registrars: {addrs}\n'
-                    f'However there are more @{diff}'
-                )
-            else:
-                raise RuntimeError(
-                    f'`{uid}` has non-matching registrar addresses?\n'
-                    f'request: {addrs}\n'
-                    f'already set: {preset_reg_addrs}'
-                )
+        raise RuntimeError(
+            f'`{uid}` registry addr already bound @ {_registry.sockaddr}'
+        )
 
     was_set: bool = False
 
     if (
         not tractor.is_root_process()
-        and
-        not Registry.addrs
+        and Registry.addr is None
     ):
-        Registry.addrs.extend(actor.reg_addrs)
+        Registry.addr = actor._arb_addr
 
     if (
         ensure_exists
-        and
-        not Registry.addrs
+        and Registry.addr is None
     ):
         raise RuntimeError(
-            f"`{uid}` registry should already exist but doesn't?"
+            f"`{uid}` registry should already exist bug doesn't?"
         )
 
     if (
-        not Registry.addrs
+        Registry.addr is None
     ):
         was_set = True
-        Registry.addrs = addrs or [_default_reg_addr]
+        Registry.addr = addr or _default_reg_addr
 
-    # NOTE: only spot this seems currently used is inside
-    # `.ui._exec` which is the (eventual qtloops) bootstrapping
-    # with guest mode.
-    _tractor_kwargs['registry_addrs'] = Registry.addrs
+    _tractor_kwargs['arbiter_addr'] = Registry.addr
 
     try:
-        yield Registry.addrs
+        yield Registry.addr
     finally:
         # XXX: always clear the global addr if we set it so that the
         # next (set of) calls will apply whatever new one is passed
         # in.
         if was_set:
-            Registry.addrs = None
+            Registry.addr = None
 
 
 @acm
 async def find_service(
     service_name: str,
-    registry_addrs: list[tuple[str, int]] | None = None,
-
-    first_only: bool = True,
-
-) -> (
-    Portal
-    | list[Portal]
-    | None
-):
-    # try:
-    reg_addrs: list[tuple[str, int]]
-    async with open_registry(
-        addrs=(
-            registry_addrs
-            # NOTE: if no addr set is passed assume the registry has
-            # already been opened and use the previously applied
-            # startup set.
-            or Registry.addrs
-        ),
-    ) as reg_addrs:
-
-        log.info(
-            f'Scanning for service {service_name!r}'
-        )
-
+) -> tractor.Portal | None:
+
+    async with open_registry() as reg_addr:
+        log.info(f'Scanning for service `{service_name}`')
         # attach to existing daemon by name if possible
-        maybe_portals: list[Portal]|Portal|None
         async with tractor.find_actor(
             service_name,
-            registry_addrs=reg_addrs,
-            only_first=first_only,  # if set only returns single ref
-        ) as maybe_portals:
-            if not maybe_portals:
-                # log.info(
-                print(
-                    f'Could NOT find service {service_name!r} -> {maybe_portals!r}'
-                )
-                yield None
-                return
-
-            # log.info(
-            print(
-                f'Found service {service_name!r} -> {maybe_portals}'
-            )
-            yield maybe_portals
-
-    # except BaseException as _berr:
-    #     berr = _berr
-    #     log.exception(
-    #         'tractor.find_actor() failed with,\n'
-    #     )
-    #     raise berr
+            arbiter_sockaddr=reg_addr,
+        ) as maybe_portal:
+            yield maybe_portal
 
 
 async def check_for_service(
 
@@ -201,11 +133,9 @@ async def check_for_service(
     Service daemon "liveness" predicate.
 
     '''
-    async with (
-        open_registry(ensure_exists=False) as reg_addr,
-        tractor.query_actor(
+    async with open_registry(ensure_exists=False) as reg_addr:
+        async with tractor.query_actor(
             service_name,
             arbiter_sockaddr=reg_addr,
-        ) as sockaddr,
-    ):
-        return sockaddr
+        ) as sockaddr:
+            return sockaddr
 
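A liveness-probe sketch for the predicate above; per the `tractor.query_actor()` usage it yields the registrar-known socket address when the daemon is up (the service name here is an illustrative assumption)::

    import trio

    async def probe() -> None:
        sockaddr = await check_for_service('pikerd')  # assumed name
        if sockaddr:
            print(f'pikerd appears live @ {sockaddr}')
        else:
            print('no pikerd found; a local root runtime would be needed')

    trio.run(probe)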
@@ -139,13 +139,6 @@ class StorageClient(
         ...
 
 
-class TimeseriesNotFound(Exception):
-    '''
-    No timeseries entry can be found for this backend.
-
-    '''
-
-
 class StorageConnectionError(ConnectionError):
     '''
     Can't connect to the desired tsdb subsys/service.
@@ -176,13 +169,10 @@ async def open_storage_client(
     tsdb_host: str = 'localhost'
 
     # load root config and any tsdb user defined settings
-    conf, path = config.load(
-        conf_name='conf',
-        touch_if_dne=True,
-    )
+    conf, path = config.load('conf', touch_if_dne=True)
 
     # TODO: maybe not under a "network" section.. since
-    # no more chitty `marketstore`..
+    # no more chitty mkts..
     tsdbconf: dict = {}
     service_section = conf.get('service')
     if (
@@ -193,11 +183,8 @@ async def open_storage_client(
 
     # lookup backend tsdb module by name and load any user service
     # settings for connecting to the tsdb service.
-    backend: str = tsdbconf.pop(
-        'name',
-        def_backend,
-    )
-    tsdb_host: str = tsdbconf.get('maddrs', [])
+    backend: str = tsdbconf.pop('backend')
+    tsdb_host: str = tsdbconf['host']
 
     if backend is None:
         backend: str = def_backend
|
||||||
# * the original data feed arch blurb:
|
# * the original data feed arch blurb:
|
||||||
# - https://github.com/pikers/piker/issues/98
|
# - https://github.com/pikers/piker/issues/98
|
||||||
#
|
#
|
||||||
from ..toolz import Profiler
|
from .._profile import Profiler
|
||||||
profiler = Profiler(
|
profiler = Profiler(
|
||||||
disabled=True, # not pg_profile_enabled(),
|
disabled=True, # not pg_profile_enabled(),
|
||||||
delayed=False,
|
delayed=False,
|
||||||
|
|
|
||||||
|
|
@@ -1,5 +1,5 @@
 # piker: trading gear for hackers
-# Copyright (C) 2018-present  Tyler Goodlet (in stewardship of pikers)
+# Copyright (C) 2018-present  Tyler Goodlet (in stewardship of piker0)
 
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as published by
@@ -19,18 +19,10 @@ Storage middle-ware CLIs.
 
 """
 from __future__ import annotations
-# from datetime import datetime
-# from contextlib import (
-#     AsyncExitStack,
-# )
 from pathlib import Path
-from math import copysign
 import time
-from types import ModuleType
-from typing import (
-    Any,
-    TYPE_CHECKING,
-)
+from typing import Generator
+# from typing import TYPE_CHECKING
 
 import polars as pl
 import numpy as np

@@ -43,21 +35,24 @@ import typer
 
 from piker.service import open_piker_runtime
 from piker.cli import cli
+from piker.config import get_conf_dir
 from piker.data import (
+    maybe_open_shm_array,
+    def_iohlcv_fields,
     ShmArray,
 )
-from piker import tsp
-from piker.data._formatters import BGM
-from . import log
+from piker.data.history import (
+    _default_hist_size,
+    _default_rt_size,
+)
+from . import (
+    log,
+)
 from . import (
     __tsdbs__,
     open_storage_client,
-    StorageClient,
 )
 
-if TYPE_CHECKING:
-    from piker.ui._remote_ctl import AnnotCtl
-
 
 store = typer.Typer()
 
@@ -82,6 +77,7 @@ def ls(
     async with (
         open_piker_runtime(
             'tsdb_storage',
+            enable_modules=['piker.service._ahab'],
         ),
     ):
         for i, backend in enumerate(backends):
@@ -103,18 +99,6 @@ def ls(
     trio.run(query_all)
 
 
-# TODO: like ls but takes in a pattern and matches
-# @store.command()
-# def search(
-#     patt: str,
-#     backends: list[str] = typer.Argument(
-#         default=None,
-#         help='Storage backends to query, default is all.'
-#     ),
-# ):
-#     ...
-
-
 @store.command()
 def delete(
     symbols: list[str],
@@ -137,6 +121,7 @@ def delete(
     async with (
         open_piker_runtime(
             'tsdb_storage',
+            enable_modules=['piker.service._ahab']
         ),
         open_storage_client(backend) as (_, client),
         trio.open_nursery() as n,
@@ -157,33 +142,20 @@ def delete(
 def anal(
     fqme: str,
     period: int = 60,
-    pdb: bool = False,
 
 ) -> np.ndarray:
-    '''
-    Anal-ysis is when you take the data do stuff to it.
-
-    NOTE: This ONLY loads the offline timeseries data (by default
-    from a parquet file) NOT the in-shm version you might be seeing
-    in a chart.
-
-    '''
     async def main():
         async with (
             open_piker_runtime(
-                # are you a bear or boi?
                 'tsdb_polars_anal',
-                debug_mode=pdb,
-            ),
-            open_storage_client() as (
-                mod,
-                client,
+                # enable_modules=['piker.service._ahab']
             ),
+            open_storage_client() as (mod, client),
         ):
             syms: list[str] = await client.list_keys()
-            log.info(f'{len(syms)} FOUND for {mod.name}')
+            print(f'{len(syms)} FOUND for {mod.name}')
 
-            history: ShmArray  # np buffer format
             (
                 history,
                 first_dt,
@@ -194,357 +166,166 @@ def anal(
             )
             assert first_dt < last_dt
 
-            null_segs: tuple = tsp.get_null_segs(
-                frame=history,
-                period=period,
-            )
-            # TODO: do tsp queries to backcend to fill i missing
-            # history and then prolly write it to tsdb!
-
-            shm_df: pl.DataFrame = await client.as_df(
-                fqme,
-                period,
-            )
-
-            df: pl.DataFrame  # with dts
-            deduped: pl.DataFrame  # deduplicated dts
-            (
-                df,
-                deduped,
-                diff,
-            ) = tsp.dedupe(
-                shm_df,
-                period=period,
-            )
-
-            write_edits: bool = True
-            if (
-                write_edits
-                and (
-                    diff
-                    or null_segs
-                )
-            ):
-                await tractor.pause()
-                await client.write_ohlcv(
-                    fqme,
-                    ohlcv=deduped,
-                    timeframe=period,
-                )
-
-            else:
-                # TODO: something better with tab completion..
-                # is there something more minimal but nearly as
-                # functional as ipython?
-                await tractor.pause()
-                assert not null_segs
+            src_df = await client.as_df(fqme, period)
+            from piker.data import _timeseries as tsmod
+            df = tsmod.with_dts(src_df)
+            gaps: pl.DataFrame = tsmod.detect_time_gaps(df)
+
+            if gaps:
+                print(f'Gaps found:\n{gaps}')
+
+            # TODO: something better with tab completion..
+            # is there something more minimal but nearly as
+            # functional as ipython?
+            await tractor.breakpoint()
 
     trio.run(main)
 
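Invocation sketch for the command above, assuming the `store` typer app is mounted under the main `piker` CLI entrypoint::

    # analyze the offline 60s history for one fqme key
    piker store anal btcusdt.binance --period 60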
-async def markup_gaps(
-    fqme: str,
-    timeframe: float,
-    actl: AnnotCtl,
-    wdts: pl.DataFrame,
-    gaps: pl.DataFrame,
-
-) -> dict[int, dict]:
-    '''
-    Remote annotate time-gaps in a dt-fielded ts (normally OHLC)
-    with rectangles.
-
-    '''
-    aids: dict[int] = {}
-    for i in range(gaps.height):
-
-        row: pl.DataFrame = gaps[i]
-
-        # the gap's RIGHT-most bar's OPEN value
-        # at that time (sample) step.
-        iend: int = row['index'][0]
-        # dt: datetime = row['dt'][0]
-        # dt_prev: datetime = row['dt_prev'][0]
-        # dt_end_t: float = dt.timestamp()
-
-        # TODO: can we eventually remove this
-        # once we figure out why the epoch cols
-        # don't match?
-        # TODO: FIX HOW/WHY these aren't matching
-        # and are instead off by 4hours (EST
-        # vs. UTC?!?!)
-        # end_t: float = row['time']
-        # assert (
-        #     dt.timestamp()
-        #     ==
-        #     end_t
-        # )
-
-        # the gap's LEFT-most bar's CLOSE value
-        # at that time (sample) step.
-        prev_r: pl.DataFrame = wdts.filter(
-            pl.col('index') == iend - 1
-        )
-        # XXX: probably a gap in the (newly sorted or de-duplicated)
-        # dt-df, so we might need to re-index first..
-        if prev_r.is_empty():
-            await tractor.pause()
-
-        istart: int = prev_r['index'][0]
-        # dt_start_t: float = dt_prev.timestamp()
-
-        # start_t: float = prev_r['time']
-        # assert (
-        #     dt_start_t
-        #     ==
-        #     start_t
-        # )
-
-        # TODO: implement px-col width measure
-        # and ensure at least as many px-cols
-        # shown per rect as configured by user.
-        # gap_w: float = abs((iend - istart))
-        # if gap_w < 6:
-        #     margin: float = 6
-        #     iend += margin
-        #     istart -= margin
-
-        rect_gap: float = BGM*3/8
-        opn: float = row['open'][0]
-        ro: tuple[float, float] = (
-            # dt_end_t,
-            iend + rect_gap + 1,
-            opn,
-        )
-        cls: float = prev_r['close'][0]
-        lc: tuple[float, float] = (
-            # dt_start_t,
-            istart - rect_gap, # + 1 ,
-            cls,
-        )
-
-        color: str = 'dad_blue'
-        diff: float = cls - opn
-        sgn: float = copysign(1, diff)
-        color: str = {
-            -1: 'buy_green',
-            1: 'sell_red',
-        }[sgn]
-
-        rect_kwargs: dict[str, Any] = dict(
-            fqme=fqme,
-            timeframe=timeframe,
-            start_pos=lc,
-            end_pos=ro,
-            color=color,
-        )
-
-        aid: int = await actl.add_rect(**rect_kwargs)
-        assert aid
-        aids[aid] = rect_kwargs
-
-    # tell chart to redraw all its
-    # graphics view layers Bo
-    await actl.redraw(
-        fqme=fqme,
-        timeframe=timeframe,
-    )
-    return aids
+def iter_dfs_from_shms(fqme: str) -> Generator[
+    tuple[Path, ShmArray, pl.DataFrame],
+    None,
+    None,
+]:
+    # shm buffer size table based on known sample rates
+    sizes: dict[str, int] = {
+        'hist': _default_hist_size,
+        'rt': _default_rt_size,
+    }
+
+    # load all detected shm buffer files which have the
+    # passed FQME pattern in the file name.
+    shmfiles: list[Path] = []
+    shmdir = Path('/dev/shm/')
+
+    for shmfile in shmdir.glob(f'*{fqme}*'):
+        filename: str = shmfile.name
+
+        # skip index files
+        if (
+            '_first' in filename
+            or '_last' in filename
+        ):
+            continue
+
+        assert shmfile.is_file()
+        log.debug(f'Found matching shm buffer file: {filename}')
+        shmfiles.append(shmfile)
+
+    for shmfile in shmfiles:
+
+        # lookup array buffer size based on file suffix
+        # being either .rt or .hist
+        size: int = sizes[shmfile.name.rsplit('.')[-1]]
+
+        # attach to any shm buffer, load array into polars df,
+        # write to local parquet file.
+        shm, opened = maybe_open_shm_array(
+            key=shmfile.name,
+            size=size,
+            dtype=def_iohlcv_fields,
+            readonly=True,
+        )
+        assert not opened
+        ohlcv = shm.array
+
+        start = time.time()
+
+        # XXX: thanks to this SO answer for this conversion tip:
+        # https://stackoverflow.com/a/72054819
+        df = pl.DataFrame({
+            field_name: ohlcv[field_name]
+            for field_name in ohlcv.dtype.fields
+        })
+        delay: float = round(
+            time.time() - start,
+            ndigits=6,
+        )
+        log.info(
+            f'numpy -> polars conversion took {delay} secs\n'
+            f'polars df: {df}'
+        )
+
+        yield (
+            shmfile,
+            shm,
+            df,
+        )
 
 
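The `+` side's generator above is usable standalone; a minimal driver sketch (the fqme value is an illustrative assumption)::

    # iterate every matching /dev/shm OHLCV buffer as a polars df
    for shmfile, shm, df in iter_dfs_from_shms('btcusdt.binance'):
        print(f'{shmfile.name}: {df.height} rows')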
 @store.command()
 def ldshm(
     fqme: str,
-    write_parquet: bool = True,
-    reload_parquet_to_shm: bool = True,
+    write_parquet: bool = False,
 
 ) -> None:
     '''
     Linux ONLY: load any fqme file name matching shm buffer from
     /dev/shm/ into an OHLCV numpy array and polars DataFrame,
-    optionally write to offline storage via `.parquet` file.
+    optionally write to .parquet file.
 
     '''
     async def main():
-        from piker.ui._remote_ctl import (
-            open_annot_ctl,
-        )
-        actl: AnnotCtl
-        mod: ModuleType
-        client: StorageClient
         async with (
             open_piker_runtime(
                 'polars_boi',
                 enable_modules=['piker.data._sharedmem'],
-                debug_mode=True,
             ),
-            open_storage_client() as (
-                mod,
-                client,
-            ),
-            open_annot_ctl() as actl,
         ):
-            shm_df: pl.DataFrame | None = None
-            tf2aids: dict[float, dict] = {}
-
-            for (
-                shmfile,
-                shm,
-                # parquet_path,
-                shm_df,
-            ) in tsp.iter_dfs_from_shms(fqme):
-
+            df: pl.DataFrame | None = None
+            for shmfile, shm, df in iter_dfs_from_shms(fqme):
+
+                # compute ohlc properties for naming
                 times: np.ndarray = shm.array['time']
-                d1: float = float(times[-1] - times[-2])
-                d2: float = float(times[-2] - times[-3])
-                med: float = np.median(np.diff(times))
-                if (
-                    d1 < 1.
-                    and d2 < 1.
-                    and med < 1.
-                ):
+                secs: float = times[-1] - times[-2]
+                if secs < 1.:
+                    breakpoint()
                     raise ValueError(
                         f'Something is wrong with time period for {shm}:\n{times}'
                     )
 
-                period_s: float = float(max(d1, d2, med))
-
-                null_segs: tuple = tsp.get_null_segs(
-                    frame=shm.array,
-                    period=period_s,
-                )
-
-                # TODO: call null-seg fixer somehow?
-                if null_segs:
-                    await tractor.pause()
-                    # async with (
-                    #     trio.open_nursery() as tn,
-                    #     mod.open_history_client(
-                    #         mkt,
-                    #     ) as (get_hist, config),
-                    # ):
-                    #     nulls_detected: trio.Event = await tn.start(partial(
-                    #         tsp.maybe_fill_null_segments,
-
-                    #         shm=shm,
-                    #         timeframe=timeframe,
-                    #         get_hist=get_hist,
-                    #         sampler_stream=sampler_stream,
-                    #         mkt=mkt,
-                    #     ))
-
-                # over-write back to shm?
-                wdts: pl.DataFrame  # with dts
-                deduped: pl.DataFrame  # deduplicated dts
-                (
-                    wdts,
-                    deduped,
-                    diff,
-                ) = tsp.dedupe(
-                    shm_df,
-                    period=period_s,
-                )
-
-                # detect gaps from in expected (uniform OHLC) sample period
-                step_gaps: pl.DataFrame = tsp.detect_time_gaps(
-                    deduped,
-                    expect_period=period_s,
-                )
-
-                # TODO: by default we always want to mark these up
-                # with rects showing up/down gaps Bo
-                venue_gaps: pl.DataFrame = tsp.detect_time_gaps(
-                    deduped,
-                    expect_period=period_s,
-
-                    # TODO: actually pull the exact duration
-                    # expected for each venue operational period?
-                    gap_dt_unit='days',
-                    gap_thresh=1,
-                )
-
-                # TODO: find the disjoint set of step gaps from
-                # venue (closure) set!
-                # -[ ] do a set diff by checking for the unique
-                #      gap set only in the step_gaps?
-                if (
-                    not venue_gaps.is_empty()
-                    or (
-                        period_s < 60
-                        and not step_gaps.is_empty()
-                    )
-                ):
-                    # write repaired ts to parquet-file?
-                    if write_parquet:
-                        start: float = time.time()
-                        path: Path = await client.write_ohlcv(
-                            fqme,
-                            ohlcv=deduped,
-                            timeframe=period_s,
-                        )
-                        write_delay: float = round(
-                            time.time() - start,
-                            ndigits=6,
-                        )
+                # TODO: maybe only optionally enter this depending
+                # on some CLI flags and/or gap detection?
+                await tractor.breakpoint()
+
+                # write to parquet file?
+                if write_parquet:
+                    timeframe: str = f'{secs}s'
+
+                    datadir: Path = get_conf_dir() / 'nativedb'
+                    if not datadir.is_dir():
+                        datadir.mkdir()
+
+                    path: Path = datadir / f'{fqme}.{timeframe}.parquet'
+
+                    # write to fs
+                    start = time.time()
+                    df.write_parquet(path)
+                    delay: float = round(
+                        time.time() - start,
+                        ndigits=6,
+                    )
+                    log.info(
+                        f'parquet write took {delay} secs\n'
+                        f'file path: {path}'
+                    )
 
-                        # read back from fs
-                        start: float = time.time()
-                        read_df: pl.DataFrame = pl.read_parquet(path)
-                        read_delay: float = round(
-                            time.time() - start,
-                            ndigits=6,
-                        )
-                        log.info(
-                            f'parquet write took {write_delay} secs\n'
-                            f'file path: {path}'
-                            f'parquet read took {read_delay} secs\n'
-                            f'polars df: {read_df}'
-                        )
-
-                        if reload_parquet_to_shm:
-                            new = tsp.pl2np(
-                                deduped,
-                                dtype=shm.array.dtype,
-                            )
-                            # since normally readonly
-                            shm._array.setflags(
-                                write=int(1),
-                            )
-                            shm.push(
-                                new,
-                                prepend=True,
-                                start=new['index'][-1],
-                                update_first=False,  # don't update ._first
-                            )
-
-                    do_markup_gaps: bool = True
-                    if do_markup_gaps:
-                        new_df: pl.DataFrame = tsp.np2pl(new)
-                        aids: dict = await markup_gaps(
-                            fqme,
-                            period_s,
-                            actl,
-                            new_df,
-                            step_gaps,
-                        )
-                        # last chance manual overwrites in REPL
-                        # await tractor.pause()
-                        assert aids
-                        tf2aids[period_s] = aids
-
-                else:
-                    # allow interaction even when no ts problems.
-                    assert not diff
-
-            await tractor.pause()
-            log.info('Exiting TSP shm anal-izer!')
-
-            if shm_df is None:
-                log.error(
-                    f'No matching shm buffers for {fqme} ?'
-                )
+                    # read back from fs
+                    start = time.time()
+                    read_df: pl.DataFrame = pl.read_parquet(path)
+                    delay: float = round(
+                        time.time() - start,
+                        ndigits=6,
+                    )
+                    print(
+                        f'parquet read took {delay} secs\n'
+                        f'polars df: {read_df}'
+                    )
+
+            if df is None:
+                log.error(f'No matching shm buffers for {fqme} ?')
 
     trio.run(main)
 
@@ -59,6 +59,7 @@ from anyio_marketstore import (  # noqa
     Params,
 )
 from piker.log import get_logger
+# from .._profile import Profiler
 
 
 log = get_logger(__name__)
@@ -204,7 +205,7 @@ class MktsStorageClient:
                 #     break
                 # except purerpc.grpclib.exceptions.UnknownError as err:
                 #     if 'snappy' in err.args:
-                #         await tractor.pause()
+                #         await tractor.breakpoint()
 
                 #     # indicate there is no history for this timeframe
                 #     log.exception(
@@ -232,7 +233,7 @@ class MktsStorageClient:
                 'YOUR DATABASE LIKELY CONTAINS BAD DATA FROM AN OLD BUG '
                 f'WIPING HISTORY FOR {ts}s'
             )
-            await tractor.pause()
+            await tractor.breakpoint()
             # await self.delete_ts(fqme, timeframe)
 
             # try reading again..
@@ -19,8 +19,7 @@
 call a poor man's tsdb).
 
 AKA a `piker`-native file-system native "time series database"
-without needing an extra process and no standard TSDB features,
-YET!
+without needing an extra process and no standard TSDB features, YET!
 
 '''
 # TODO: like there's soo much..
@@ -56,6 +55,8 @@ from datetime import datetime
 from pathlib import Path
 import time
 
+# from bidict import bidict
+# import tractor
 import numpy as np
 import polars as pl
 from pendulum import (
@@ -63,18 +64,46 @@ from pendulum import (
 )
 
 from piker import config
-from piker import tsp
-from piker.data import (
-    def_iohlcv_fields,
-    ShmArray,
-)
+from piker.data import def_iohlcv_fields
+from piker.data import ShmArray
 from piker.log import get_logger
-from . import TimeseriesNotFound
+# from .._profile import Profiler
 
 
 log = get_logger('storage.nativedb')
 
 
+# NOTE: thanks to this SO answer for the below conversion routines
+# to go from numpy struct-arrays to polars dataframes and back:
+# https://stackoverflow.com/a/72054819
+def np2pl(array: np.ndarray) -> pl.DataFrame:
+    return pl.DataFrame({
+        field_name: array[field_name]
+        for field_name in array.dtype.fields
+    })
+
+
+def pl2np(
+    df: pl.DataFrame,
+    dtype: np.dtype,
+
+) -> np.ndarray:
+
+    # Create numpy struct array of the correct size and dtype
+    # and loop through df columns to fill in array fields.
+    array = np.empty(
+        df.height,
+        dtype,
+    )
+    for field, col in zip(
+        dtype.fields,
+        df.columns,
+    ):
+        array[field] = df.get_column(col).to_numpy()
+
+    return array
+
+
 def detect_period(shm: ShmArray) -> float:
     '''
     Attempt to detect the series time step sampling period
 
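The two converters added on the `+` side are inverses; a self-contained round-trip check under an assumed (illustrative) struct dtype::

    import numpy as np
    import polars as pl

    # minimal OHLCV-ish struct dtype, for demonstration only
    dtype = np.dtype([('time', 'f8'), ('close', 'f8')])
    arr = np.array([(1., 100.), (2., 101.)], dtype=dtype)

    df: pl.DataFrame = np2pl(arr)        # struct-array -> dataframe
    back: np.ndarray = pl2np(df, dtype)  # dataframe -> struct-array
    assert (back == arr).all()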
@@ -95,19 +124,16 @@ def detect_period(shm: ShmArray) -> float:
 
 def mk_ohlcv_shm_keyed_filepath(
     fqme: str,
-    period: float | int,  # ow known as the "timeframe"
+    period: float,  # ow known as the "timeframe"
     datadir: Path,
 
-) -> Path:
+) -> str:
 
     if period < 1.:
         raise ValueError('Sample period should be >= 1.!?')
 
-    path: Path = (
-        datadir
-        /
-        f'{fqme}.ohlcv{int(period)}s.parquet'
-    )
+    period_s: str = f'{period}s'
+    path: Path = datadir / f'{fqme}.ohlcv{period_s}.parquet'
     return path
 
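Worked example of the keyed-path scheme from the `-` side (all values assumed for illustration)::

    from pathlib import Path

    path = mk_ohlcv_shm_keyed_filepath(
        fqme='btcusdt.binance',
        period=60,
        datadir=Path.home() / '.config' / 'piker' / 'nativedb',
    )
    # -> ~/.config/piker/nativedb/btcusdt.binance.ohlcv60s.parquet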
@@ -161,13 +187,7 @@ class NativeStorageClient:
 
     def index_files(self):
         for path in self._datadir.iterdir():
-            if (
-                path.is_dir()
-                or
-                '.parquet' not in str(path)
-                # or
-                # path.name in {'borked', 'expired',}
-            ):
+            if 'borked' in path.name:
                 continue
 
             key: str = path.name.rstrip('.parquet')
@@ -209,21 +229,8 @@ class NativeStorageClient:
                 fqme,
                 timeframe,
             )
-        except FileNotFoundError as fnfe:
-            bs_fqme, _, *_ = fqme.rpartition('.')
-
-            possible_matches: list[str] = []
-            for tskey in self._index:
-                if bs_fqme in tskey:
-                    possible_matches.append(tskey)
-
-            match_str: str = '\n'.join(sorted(possible_matches))
-            raise TimeseriesNotFound(
-                f'No entry for `{fqme}`?\n'
-                f'Maybe you need a more specific fqme-key like:\n\n'
-                f'{match_str}'
-            ) from fnfe
+        except FileNotFoundError:
+            return None
 
         times = array['time']
         return (
|
||||||
self,
|
self,
|
||||||
fqme: str,
|
fqme: str,
|
||||||
period: float,
|
period: float,
|
||||||
|
|
||||||
) -> Path:
|
) -> Path:
|
||||||
return mk_ohlcv_shm_keyed_filepath(
|
return mk_ohlcv_shm_keyed_filepath(
|
||||||
fqme=fqme,
|
fqme=fqme,
|
||||||
|
|
@@ -244,23 +250,6 @@ class NativeStorageClient:
             datadir=self._datadir,
         )
 
-    def _cache_df(
-        self,
-        fqme: str,
-        df: pl.DataFrame,
-        timeframe: float,
-
-    ) -> None:
-        # cache df for later usage since we (currently) need to
-        # convert to np.ndarrays to push to our `ShmArray` rt
-        # buffers subsys but later we may operate entirely on
-        # pyarrow arrays/buffers so keeping the dfs around for
-        # a variety of purposes is handy.
-        self._dfs.setdefault(
-            timeframe,
-            {},
-        )[fqme] = df
-
     async def read_ohlcv(
         self,
         fqme: str,
@@ -269,20 +258,13 @@ class NativeStorageClient:
         # limit: int = int(200e3),
 
     ) -> np.ndarray:
-        path: Path = self.mk_path(
-            fqme,
-            period=int(timeframe),
-        )
+        path: Path = self.mk_path(fqme, period=int(timeframe))
         df: pl.DataFrame = pl.read_parquet(path)
-
-        self._cache_df(
-            fqme=fqme,
-            df=df,
-            timeframe=timeframe,
-        )
+        self._dfs.setdefault(timeframe, {})[fqme] = df
+
         # TODO: filter by end and limit inputs
         # times: pl.Series = df['time']
-        array: np.ndarray = tsp.pl2np(
+        array: np.ndarray = pl2np(
             df,
             dtype=np.dtype(def_iohlcv_fields),
         )
@@ -292,15 +274,11 @@ class NativeStorageClient:
         self,
         fqme: str,
         period: int = 60,
-        load_from_offline: bool = True,
 
     ) -> pl.DataFrame:
         try:
             return self._dfs[period][fqme]
         except KeyError:
-            if not load_from_offline:
-                raise
-
             await self.read_ohlcv(fqme, period)
             return self._dfs[period][fqme]
@@ -322,39 +300,32 @@ class NativeStorageClient:
             datadir=self._datadir,
         )
         if isinstance(ohlcv, np.ndarray):
-            df: pl.DataFrame = tsp.np2pl(ohlcv)
+            df: pl.DataFrame = np2pl(ohlcv)
         else:
             df = ohlcv
 
-        self._cache_df(
-            fqme=fqme,
-            df=df,
-            timeframe=timeframe,
-        )
-
         # TODO: in terms of managing the ultra long term data
-        # -[ ] use a proper profiler to measure all this IO and
+        # - use a proper profiler to measure all this IO and
         #   roundtripping!
-        # -[ ] implement parquet append!? see issue:
-        #   https://github.com/pikers/piker/issues/536
-        # -[ ] try out ``fastparquet``'s append writing:
+        # - try out ``fastparquet``'s append writing:
         #   https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
         start = time.time()
         df.write_parquet(path)
         delay: float = round(
             time.time() - start,
             ndigits=6,
         )
-        log.info(
+        print(
             f'parquet write took {delay} secs\n'
             f'file path: {path}'
         )
         return path
 
 
     async def write_ohlcv(
         self,
         fqme: str,
-        ohlcv: np.ndarray | pl.DataFrame,
+        ohlcv: np.ndarray,
         timeframe: int,
 
     ) -> Path:
@@ -406,8 +377,6 @@ class NativeStorageClient:
         # ...
 
 
-# TODO: does this need to be async on average?
-# I guess for any IPC connected backend yes?
 @acm
 async def get_client(
@@ -425,7 +394,7 @@ async def get_client(
     '''
     datadir: Path = config.get_conf_dir() / 'nativedb'
     if not datadir.is_dir():
-        log.info(f'Creating `nativedb` dir: {datadir}')
+        log.info(f'Creating `nativedb` director: {datadir}')
         datadir.mkdir()
 
     client = NativeStorageClient(datadir)
@@ -1,29 +0,0 @@
-# piker: trading gear for hackers
-# Copyright (C) Tyler Goodlet (in stewardship for pikers)
-
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU Affero General Public License for more details.
-
-# You should have received a copy of the GNU Affero General Public License
-# along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-'''
-Toolz for debug, profile and trace of the distributed runtime :surfer:
-
-'''
-from tractor.devx import (
-    open_crash_handler as open_crash_handler,
-)
-from .profile import (
-    Profiler as Profiler,
-    pg_profile_enabled as pg_profile_enabled,
-    ms_slower_then as ms_slower_then,
-    timeit as timeit,
-)
@@ -0,0 +1,80 @@
+# piker: trading gear for hackers
+# Copyright (C) Tyler Goodlet (in stewardship of piker0)
+
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+'''
+sugarz for trio/tractor conc peeps.
+
+'''
+from typing import AsyncContextManager
+from typing import TypeVar
+from contextlib import asynccontextmanager as acm
+
+import trio
+
+
+# A regular invariant generic type
+T = TypeVar("T")
+
+
+async def _enter_and_sleep(
+
+    mngr: AsyncContextManager[T],
+    to_yield: dict[int, T],
+    all_entered: trio.Event,
+    # task_status: TaskStatus[T] = trio.TASK_STATUS_IGNORED,
+
+) -> T:
+    '''Open the async context manager deliver it's value
+    to this task's spawner and sleep until cancelled.
+
+    '''
+    async with mngr as value:
+        to_yield[id(mngr)] = value
+
+        if all(to_yield.values()):
+            all_entered.set()
+
+        # sleep until cancelled
+        await trio.sleep_forever()
+
+
+@acm
+async def async_enter_all(
+
+    *mngrs: list[AsyncContextManager[T]],
+
+) -> tuple[T]:
+
+    to_yield = {}.fromkeys(id(mngr) for mngr in mngrs)
+
+    all_entered = trio.Event()
+
+    async with trio.open_nursery() as n:
+        for mngr in mngrs:
+            n.start_soon(
+                _enter_and_sleep,
+                mngr,
+                to_yield,
+                all_entered,
+            )
+
+        # deliver control once all managers have started up
+        await all_entered.wait()
+        yield tuple(to_yield.values())
+
+        # tear down all sleeper tasks thus triggering individual
+        # mngr ``__aexit__()``s.
+        n.cancel_scope.cancel()
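Usage sketch for the `+` side's helper: concurrently enter a set of acms and receive their yielded values as an ordered tuple (note the helper's `all()` check assumes every manager yields a truthy value)::

    from contextlib import asynccontextmanager as acm

    import trio

    @acm
    async def numbered(i: int):
        yield i

    async def main():
        async with async_enter_all(
            numbered(1),
            numbered(2),
            numbered(3),
        ) as values:
            assert values == (1, 2, 3)

    trio.run(main)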
(file diff suppressed because it is too large)
@ -1,746 +0,0 @@
|
||||||
# piker: trading gear for hackers
|
|
||||||
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
|
|
||||||
|
|
||||||
# This program is free software: you can redistribute it and/or modify
|
|
||||||
# it under the terms of the GNU Affero General Public License as published by
|
|
||||||
# the Free Software Foundation, either version 3 of the License, or
|
|
||||||
# (at your option) any later version.
|
|
||||||
|
|
||||||
# This program is distributed in the hope that it will be useful,
|
|
||||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
|
||||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
|
||||||
# GNU Affero General Public License for more details.
|
|
||||||
|
|
||||||
# You should have received a copy of the GNU Affero General Public License
|
|
||||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
|
||||||
|
|
||||||
'''
|
|
||||||
Financial time series processing utilities usually
|
|
||||||
pertaining to OHLCV style sampled data.
|
|
||||||
|
|
||||||
Routines are generally implemented in either ``numpy`` or
|
|
||||||
``polars`` B)
|
|
||||||
|
|
||||||
'''
|
|
||||||
from __future__ import annotations
|
|
||||||
from functools import partial
|
|
||||||
from math import (
|
|
||||||
ceil,
|
|
||||||
floor,
|
|
||||||
)
|
|
||||||
import time
|
|
||||||
from typing import (
|
|
||||||
Literal,
|
|
||||||
# AsyncGenerator,
|
|
||||||
Generator,
|
|
||||||
)
|
|
||||||
|
|
||||||
import numpy as np
|
|
||||||
import polars as pl
|
|
||||||
from pendulum import (
|
|
||||||
DateTime,
|
|
||||||
from_timestamp,
|
|
||||||
)
|
|
||||||
|
|
||||||
from ..toolz.profile import (
|
|
||||||
Profiler,
|
|
||||||
pg_profile_enabled,
|
|
||||||
ms_slower_then,
|
|
||||||
)
|
|
||||||
from ..log import (
|
|
||||||
get_logger,
|
|
||||||
get_console_log,
|
|
||||||
)
|
|
||||||
# for "time series processing"
|
|
||||||
subsys: str = 'piker.tsp'
|
|
||||||
|
|
||||||
log = get_logger(subsys)
|
|
||||||
get_console_log = partial(
|
|
||||||
get_console_log,
|
|
||||||
name=subsys,
|
|
||||||
)
|
|
||||||
|
|
||||||
# NOTE: union type-defs to handle generic `numpy` and `polars` types
|
|
||||||
# side-by-side Bo
|
|
||||||
# |_ TODO: schema spec typing?
|
|
||||||
# -[ ] nptyping!
|
|
||||||
# -[ ] wtv we can with polars?
|
|
||||||
Frame = pl.DataFrame | np.ndarray
|
|
||||||
Seq = pl.Series | np.ndarray
|
|
||||||
|
|
||||||
|
|
||||||
def slice_from_time(
    arr: np.ndarray,
    start_t: float,
    stop_t: float,
    step: float,  # sampler period step-diff

) -> slice:
    '''
    Calculate array indices mapped from a time range and return them in
    a slice.

    Given an input array with an epoch `'time'` series entry, calculate
    the indices which span the time range and return them in a slice.
    Presume each `'time'` step increment is uniform; when the time stamp
    series contains gaps (making the uniform presumption untrue) use
    ``np.searchsorted()`` binary search to look up the appropriate
    index.

    '''
    profiler = Profiler(
        msg='slice_from_time()',
        disabled=not pg_profile_enabled(),
        ms_threshold=ms_slower_then,
    )

    times = arr['time']
    t_first = floor(times[0])
    t_last = ceil(times[-1])

    # the greatest index we can return which slices to the
    # end of the input array.
    read_i_max = arr.shape[0]

    # compute (presumed) uniform-time-step index offsets
    i_start_t = floor(start_t)
    read_i_start = floor(((i_start_t - t_first) // step)) - 1

    i_stop_t = ceil(stop_t)

    # XXX: edge case -> always set stop index to last in array whenever
    # the input stop time is detected to be greater than the equiv time
    # stamp at that last entry.
    if i_stop_t >= t_last:
        read_i_stop = read_i_max
    else:
        read_i_stop = ceil((i_stop_t - t_first) // step) + 1

    # always clip outputs to array support
    # for read start:
    # - never allow a start < the 0 index
    # - never allow an end index > the read array len
    read_i_start = min(
        max(0, read_i_start),
        read_i_max - 1,
    )
    read_i_stop = max(
        0,
        min(read_i_stop, read_i_max),
    )

    # check for a larger-than-latest calculated index for given start
    # time, in which case we do a binary search for the correct index.
    # NOTE: this is usually the result of a time series with time gaps
    # where it is expected that each index step maps to a uniform step
    # in the time stamp series.
    t_iv_start = times[read_i_start]
    if (
        t_iv_start > i_start_t
    ):
        # do a binary search for the best index mapping to ``start_t``
        # given we measured an overshoot using the uniform-time-step
        # calculation from above.

        # TODO: once we start caching these per source-array,
        # we can just overwrite ``read_i_start`` directly.
        new_read_i_start = np.searchsorted(
            times,
            i_start_t,
            side='left',
        )

        # TODO: minimize binary search work as much as possible:
        # - cache these remap values which compensate for gaps in the
        #   uniform time step basis where we calc a later start
        #   index for the given input ``start_t``.
        # - can we shorten the input search sequence by heuristic?
        #   up_to_arith_start = index[:read_i_start]

        if (
            new_read_i_start <= read_i_start
        ):
            # t_diff = t_iv_start - start_t
            # print(
            #     f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
            #     f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
            #     f'diff: {t_diff}\n'
            #     f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
            # )
            read_i_start = new_read_i_start

    t_iv_stop = times[read_i_stop - 1]
    if (
        t_iv_stop > i_stop_t
    ):
        # t_diff = stop_t - t_iv_stop
        # print(
        #     f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
        #     f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
        #     f'diff: {t_diff}\n'
        #     # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
        # )
        new_read_i_stop = np.searchsorted(
            times[read_i_start:],
            # times,
            i_stop_t,
            side='right',
        )

        if (
            new_read_i_stop <= read_i_stop
        ):
            read_i_stop = read_i_start + new_read_i_stop + 1

    # sanity checks for range size
    # samples = (i_stop_t - i_start_t) // step
    # index_diff = read_i_stop - read_i_start + 1
    # if index_diff > (samples + 3):
    #     breakpoint()

    # read-relative indexes: gives a slice where `shm.array[read_slc]`
    # will be the data spanning the input time range `start_t` ->
    # `stop_t`
    read_slc = slice(
        int(read_i_start),
        int(read_i_stop),
    )

    profiler(
        'slicing complete'
        # f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
        # f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
    )

    # NOTE: if caller needs absolute buffer indices they can
    # slice the buffer abs index like so:
    # index = arr['index']
    # abs_indx = index[read_slc]
    # abs_slc = slice(
    #     int(abs_indx[0]),
    #     int(abs_indx[-1]),
    # )

    return read_slc


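# NOTE (editor's illustrative sketch, not part of the diff): a
# hypothetical call on a toy 1m-step struct array; the helper name
# and field layout are assumptions mirroring the `'time'`/`'index'`
# schema used above.
def _example_slice_from_time() -> None:
    arr = np.zeros(10, dtype=[('index', 'i8'), ('time', 'f8')])
    arr['index'] = np.arange(10)
    arr['time'] = 1_700_000_000 + np.arange(10) * 60.

    read_slc: slice = slice_from_time(
        arr,
        start_t=arr['time'][2],
        stop_t=arr['time'][5],
        step=60,
    )
    # the returned "read slice" spans the requested time range
    assert arr[read_slc]['time'][-1] >= arr['time'][5]

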
def get_null_segs(
    frame: Frame,
    period: float,  # sampling step in seconds
    imargin: int = 1,
    col: str = 'time',

) -> tuple[
    # Seq,  # TODO: can we make it an array-type instead?
    list[
        list[int, int],
    ],
    Seq,
    Frame
] | None:
    '''
    Detect if there are any zero(-epoch stamped) valued
    rows for the provided `col: str` column; by default
    presume the 'time' field/column.

    Filter to all such zero (time) segments and return
    the corresponding frame's zeroed segments as,

    - gap absolute (in buffer terms) indices-endpoints as
      `absi_zsegs`,
    - abs indices of all rows with zeroed `col` values as `absi_zeros`,
    - the corresponding frame's row-entries (view) which are
      zeroed for the `col` as `zero_t`.

    '''
    times: Seq = frame['time']
    zero_pred: Seq = (times == 0)

    # NOTE: the `.any()` predicate reduction works identically for
    # both `np.ndarray` and `pl.Series` inputs.
    tis_zeros: bool = bool(zero_pred.any())
    if not tis_zeros:
        return None

    # TODO: use ndarray for this?!
    absi_zsegs: list[list[int, int]] = []

    if isinstance(frame, np.ndarray):
        # view of ONLY the zero segments as one continuous chunk
        zero_t: np.ndarray = frame[zero_pred]
        # abs indices of said zeroed rows
        absi_zeros = zero_t['index']
        # diff of abs index steps between each zeroed row
        absi_zdiff: np.ndarray = np.diff(absi_zeros)

        # scan for all frame-indices where the
        # zeroed-row-abs-index-step-diff is greater than the
        # expected increment of 1.
        #
        # data      1st zero seg        data   zeros    data  zeros
        # ----  ------------------      ----  -------   ----  ----
        # ||||..000000000000..||||..00000..||||||..0000
        # ----  ------------------      ----  -------   ----  ----
        #       ^zero_t[0]                      ^zero_t[-1]
        #                     ^fi_zgaps[0]   ^fi_zgaps[1]
        # ^absi_zsegs[0][0]   ^---^ => absi_zsegs[1]: tuple
        #    absi_zsegs[0][1]^
        #
        # NOTE: the first entry in `fi_zgaps` is where
        # the first (absolute) index step diff is > 1,
        # and it is a frame-relative index into `zero_t`.
        fi_zgaps = np.argwhere(
            absi_zdiff > 1
            # NOTE: the +1 here is to ensure we index to the "start"
            # of each segment (if we didn't, the below loop would need
            # to be re-written to expect `fi_end_rows`!)
        ) + 1
        # the rows from the contiguous zeroed segments which have
        # abs-index steps >1 compared to the previous zero row
        # (indicating an end of zeroed segment).
        fi_zseg_start_rows = zero_t[fi_zgaps]

    # TODO: equiv for pl.DataFrame case!
    else:
        izeros: pl.Series = zero_pred.arg_true()
        zero_t: pl.DataFrame = frame[izeros]

        absi_zeros = zero_t['index']
        absi_zdiff: pl.Series = absi_zeros.diff()
        fi_zgaps = (absi_zdiff > 1).arg_true()

    # XXX: our goal (in this func) is to select out slice index
    # pairs (zseg0_start, zseg_end) in abs index units for each
    # null-segment portion detected throughout entire input frame.

    # only up to one null-segment in entire frame?
    num_gaps: int = fi_zgaps.size + 1
    if num_gaps < 1:
        if absi_zeros.size > 1:
            absi_zsegs = [[
                # TODO: maybe mk these max()/min() limits func
                # consts instead of called more than once?
                max(
                    absi_zeros[0] - 1,
                    0,
                ),
                # NOTE: need the + 1 to guarantee we index "up to"
                # the next non-null row-datum.
                min(
                    absi_zeros[-1] + 1,
                    frame['index'][-1],
                ),
            ]]
        else:
            # XXX EDGE CASE: only one null-datum found so
            # mark the start abs index as None to trigger
            # a full frame-len query to the respective backend?
            absi_zsegs = [[
                # see `get_hist()` in backend, should ALWAYS be
                # able to handle a `start_dt=None`!
                # None,
                None,
                absi_zeros[0] + 1,
            ]]

    # XXX NOTE XXX: if >= 2 zeroed segments are found, there should
    # ALWAYS be more than one zero-segment-abs-index-step-diff row
    # in `absi_zdiff`, so loop through all such
    # abs-index-step-diffs >1 (i.e. the entries of `absi_zdiff`)
    # and add them as the "end index" entries for each segment.
    # Then, iff NOT iterating the first such segment end, look back
    # for the prior segment's zero-segment start index by relative
    # indexing the `zero_t` frame by -1 and grabbing the abs index
    # of what should be the prior zero-segment abs start index.
    else:
        # NOTE: since `absi_zdiff` will never have a row
        # corresponding to the first zero-segment's row, we add it
        # manually here.
        absi_zsegs.append([
            max(
                absi_zeros[0] - 1,
                0,
            ),
            None,
        ])

        # TODO: can we do it with vec ops?
        for i, (
            fi,  # frame index of zero-seg start
            zseg_start_row,  # full row for ^
        ) in enumerate(zip(
            fi_zgaps,
            fi_zseg_start_rows,
        )):
            assert (zseg_start_row == zero_t[fi]).all()
            iabs: int = zseg_start_row['index'][0]
            absi_zsegs.append([
                iabs - 1,
                None,  # backfilled on next iter
            ])

            # final iter case, backfill FINAL end iabs!
            if (i + 1) == fi_zgaps.size:
                absi_zsegs[-1][1] = absi_zeros[-1] + 1

            # NOTE: only after the first segment (due to `.diff()`
            # usage above) can we do a lookback to the prior
            # segment's end row and determine its abs index to
            # retroactively insert to the prior
            # `absi_zsegs[i-1][1]` entry Bo
            last_end: int = absi_zsegs[i][1]
            if last_end is None:
                prev_zseg_row = zero_t[fi - 1]
                absi_post_zseg = prev_zseg_row['index'][0] + 1
                # XXX: MUST BACKFILL previous end iabs!
                absi_zsegs[i][1] = absi_post_zseg

        # for-else: only runs once the loop above completes
        else:
            if 0 < num_gaps < 2:
                absi_zsegs[-1][1] = min(
                    absi_zeros[-1] + 1,
                    frame['index'][-1],
                )

    iabs_first: int = frame['index'][0]
    for start, end in absi_zsegs:

        ts_start: float = times[start - iabs_first]
        ts_end: float = times[end - iabs_first]
        if (
            (ts_start == 0 and not start == 0)
            or
            ts_end == 0
        ):
            import pdbp
            pdbp.set_trace()

        assert end
        assert start < end

    log.warning(
        f'Frame has {len(absi_zsegs)} NULL GAPS!?\n'
        f'period: {period}\n'
        f'total null samples: {len(zero_t)}\n'
    )

    return (
        absi_zsegs,  # [start, end] abs slice indices of seg
        absi_zeros,  # all abs indices within all null-segs
        zero_t,  # sliced-view of all null-segment rows-datums
    )


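# NOTE (editor's illustrative sketch, not part of the diff): a
# hypothetical run of `get_null_segs()` on a toy struct array with a
# single zeroed-`time` segment at rows 4-5; helper name is assumed.
def _example_get_null_segs() -> None:
    frame = np.zeros(10, dtype=[('index', 'i8'), ('time', 'f8')])
    frame['index'] = np.arange(10)
    frame['time'] = 1_700_000_000 + np.arange(10) * 60.
    frame['time'][4:6] = 0  # zero-stamped (null) rows 4-5

    absi_zsegs, absi_zeros, zero_t = get_null_segs(frame, period=60)
    assert absi_zsegs == [[3, 6]]      # [start, end] abs endpoints
    assert list(absi_zeros) == [4, 5]  # abs indices of zeroed rows

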
def iter_null_segs(
    timeframe: float,
    frame: Frame | None = None,
    null_segs: tuple | None = None,

) -> Generator[
    tuple[
        int, int,
        int, int,
        float, float,
        float, float,

        # Seq,  # TODO: can we make it an array-type instead?
        # list[
        #     list[int, int],
        # ],
        # Seq,
        # Frame
    ],
    None,
    None,
]:
    if not (
        null_segs := get_null_segs(
            frame,
            period=timeframe,
        )
    ):
        return

    absi_pairs_zsegs: list[list[float, float]]
    izeros: Seq
    zero_t: Frame
    (
        absi_pairs_zsegs,
        izeros,
        zero_t,
    ) = null_segs

    absi_first: int = frame[0]['index']
    for (
        absi_start,
        absi_end,
    ) in absi_pairs_zsegs:

        fi_end: int = absi_end - absi_first
        end_row: Seq = frame[fi_end]
        end_t: float = end_row['time']
        end_dt: DateTime = from_timestamp(end_t)

        fi_start = None
        start_row = None
        start_t = None
        start_dt = None
        if (
            absi_start is not None
            and start_t != 0
        ):
            fi_start: int = absi_start - absi_first
            start_row: Seq = frame[fi_start]
            start_t: float = start_row['time']
            start_dt: DateTime = from_timestamp(start_t)

            if absi_start < 0:
                import pdbp
                pdbp.set_trace()

        yield (
            absi_start, absi_end,  # abs indices
            fi_start, fi_end,  # relative "frame" indices
            start_t, end_t,
            start_dt, end_dt,
        )


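# NOTE (editor's illustrative sketch, not part of the diff):
# hypothetical iteration of gap endpoints as epoch + datetime pairs,
# reusing the toy frame from the `get_null_segs()` example above.
def _example_iter_null_segs() -> None:
    frame = np.zeros(10, dtype=[('index', 'i8'), ('time', 'f8')])
    frame['index'] = np.arange(10)
    frame['time'] = 1_700_000_000 + np.arange(10) * 60.
    frame['time'][4:6] = 0
    for (
        absi_start, absi_end,
        fi_start, fi_end,
        start_t, end_t,
        start_dt, end_dt,
    ) in iter_null_segs(timeframe=60, frame=frame):
        assert (absi_start, absi_end) == (3, 6)

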
def with_dts(
    df: pl.DataFrame,
    time_col: str = 'time',

) -> pl.DataFrame:
    '''
    Insert datetime (casted) columns to a (presumably) OHLC sampled
    time series with an epoch-time column keyed by `time_col: str`.

    '''
    return df.with_columns([
        pl.col(time_col).shift(1).name.suffix('_prev'),
        pl.col(time_col).diff().alias('s_diff'),
        pl.from_epoch(pl.col(time_col)).alias('dt'),
    ]).with_columns([
        pl.from_epoch(
            column=pl.col(f'{time_col}_prev'),
        ).alias('dt_prev'),
        pl.col('dt').diff().alias('dt_diff'),
    ])


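# NOTE (editor's illustrative sketch, not part of the diff): the
# helper name and toy epoch values below are assumptions for demo.
def _example_with_dts() -> None:
    df = pl.DataFrame({'time': [60, 120, 240]})
    wdts: pl.DataFrame = with_dts(df)
    # adds 'time_prev', 's_diff', 'dt', 'dt_prev' and 'dt_diff' cols
    assert wdts['s_diff'].to_list() == [None, 60, 120]

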
t_unit: Literal = Literal[
    'days',
    'hours',
    'minutes',
    'seconds',
    'milliseconds',
    'microseconds',
    'nanoseconds',
]


def detect_time_gaps(
    w_dts: pl.DataFrame,

    time_col: str = 'time',
    # epoch sampling step diff
    expect_period: float = 60,

    # NOTE: legacy stock mkts have venue operating hours
    # and thus gaps normally no more than 1-2 days at
    # a time.
    gap_thresh: float = 1.,

    # TODO: allow passing in a frame of operating hours?
    # -[ ] durations/ranges for faster legit gap checks?
    # XXX -> must be a valid ``polars.Expr.dt.<name>``
    # like 'days' which is a sane default for venue closures,
    # though it will detect weekend gaps which are normal :o
    gap_dt_unit: t_unit | None = None,

) -> pl.DataFrame:
    '''
    Filter to OHLC datums which contain sample step gaps.

    For eg. legacy markets which have venue close gaps and/or
    actual missing data segments.

    '''
    # first select by any sample-period (in seconds unit) step size
    # greater than expected.
    step_gaps: pl.DataFrame = w_dts.filter(
        pl.col('s_diff').abs() > expect_period
    )

    if gap_dt_unit is None:
        return step_gaps

    # NOTE: this flag is to indicate that on this (sampling) time
    # scale we expect to only be filtering against larger venue
    # closures-scale time gaps.
    return step_gaps.filter(
        # second, by an arbitrary dt-unit step size
        getattr(
            pl.col('dt_diff').dt,
            gap_dt_unit,
        )().abs() > gap_thresh
    )


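# NOTE (editor's illustrative sketch, not part of the diff): a
# hypothetical 1m-sampled frame with a ~1 day jump; builds on
# `with_dts()` above.
def _example_detect_time_gaps() -> None:
    df = pl.DataFrame({'time': [0, 60, 120, 86_400]})
    gaps: pl.DataFrame = detect_time_gaps(
        with_dts(df),
        expect_period=60,
    )
    assert gaps.height == 1  # only the day-jump row is selected

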
def detect_price_gaps(
    df: pl.DataFrame,
    gt_multiplier: float = 2.,
    price_fields: list[str] = ['high', 'low'],

) -> pl.DataFrame:
    '''
    Detect gaps in clearing price over an OHLC series.

    2 types of gaps generally exist; up gaps and down gaps:

    - UP gap: when any next sample's lo price is strictly greater
      than the current sample's hi price.

    - DOWN gap: when any next sample's hi price is strictly
      less than the current sample's lo price.

    '''
    # TODO: compute the gap set; prototype scratch below..
    # return df.filter(
    #     pl.col('high') - ) > expect_period,
    # ).select([
    #     pl.dt.datetime(pl.col(time_col).shift(1)).suffix('_previous'),
    #     pl.all(),
    # ]).select([
    #     pl.all(),
    #     (pl.col(time_col) - pl.col(f'{time_col}_previous')).alias('diff'),
    # ])
    ...


# TODO: probably just use the null_segs impl above?
def detect_vlm_gaps(
    df: pl.DataFrame,
    col: str = 'volume',

) -> pl.DataFrame:

    vnull: pl.DataFrame = df.filter(
        pl.col(col) == 0
    )
    return vnull


def dedupe(
    src_df: pl.DataFrame,

    time_gaps: pl.DataFrame | None = None,
    sort: bool = True,
    period: float = 60,

) -> tuple[
    pl.DataFrame,  # with dts
    pl.DataFrame,  # with deduplicated dts (aka gap/repeat removal)
    int,  # len diff between input and deduped
]:
    '''
    Check for time series gaps and, if found,
    de-duplicate any datetime entries, check for
    a frame height diff and return the newly
    dt-deduplicated frame.

    '''
    wdts: pl.DataFrame = with_dts(src_df)

    # remove duplicated datetime samples/sections
    deduped: pl.DataFrame = wdts.unique(
        # subset=['dt'],
        subset=['time'],
        maintain_order=True,
    )

    # maybe sort on any time field
    if sort:
        deduped = deduped.sort(by='time')
        # TODO: detect out-of-order segments which were corrected!
        # -[ ] report in log msg
        # -[ ] possibly return segment sections which were moved?

    diff: int = (
        wdts.height
        -
        deduped.height
    )
    return (
        wdts,
        deduped,
        diff,
    )


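# NOTE (editor's illustrative sketch, not part of the diff):
# duplicate-timestamp removal round trip using `dedupe()`.
def _example_dedupe() -> None:
    df = pl.DataFrame({'time': [60, 60, 120]})
    wdts, deduped, diff = dedupe(df)
    assert deduped['time'].to_list() == [60, 120]
    assert diff == 1  # one duplicated sample row dropped

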
def sort_diff(
    src_df: pl.DataFrame,
    col: str = 'time',

) -> tuple[
    pl.DataFrame,  # with dts
    pl.DataFrame,  # sorted
    list[int],  # indices of segments that are out-of-order
]:
    ser: pl.Series = src_df[col]
    sortd: pl.Series = ser.sort()
    diff: pl.Series = ser.diff()

    sortd_diff: pl.Series = sortd.diff()
    i_step_diff = (diff != sortd_diff).arg_true()
    frame_reorders: int = i_step_diff.len()
    if frame_reorders:
        log.warning(
            f'Resorted frame on col: {col}\n'
            f'{frame_reorders}'
        )
        # import pdbp; pdbp.set_trace()


# NOTE: thanks to this SO answer for the below conversion routines
# to go from numpy struct-arrays to polars dataframes and back:
# https://stackoverflow.com/a/72054819
def np2pl(array: np.ndarray) -> pl.DataFrame:
    start: float = time.time()

    df = pl.DataFrame({
        field_name: array[field_name]
        for field_name in array.dtype.fields
    })
    delay: float = round(
        time.time() - start,
        ndigits=6,
    )
    log.info(
        f'numpy -> polars conversion took {delay} secs\n'
        f'polars df: {df}'
    )
    return df


def pl2np(
    df: pl.DataFrame,
    dtype: np.dtype,

) -> np.ndarray:

    # create a numpy struct-array of the correct size and dtype
    # and loop through df columns to fill in the array fields.
    array = np.empty(
        df.height,
        dtype,
    )
    for field, col in zip(
        dtype.fields,
        df.columns,
    ):
        array[field] = df.get_column(col).to_numpy()

    return array


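# NOTE (editor's illustrative sketch, not part of the diff): a
# hypothetical struct-array <-> dataframe round trip via the two
# converters above.
def _example_np_pl_roundtrip() -> None:
    dtype = np.dtype([('index', 'i8'), ('time', 'f8')])
    arr = np.zeros(3, dtype=dtype)
    arr['index'] = np.arange(3)

    df: pl.DataFrame = np2pl(arr)
    back: np.ndarray = pl2np(df, dtype=dtype)
    assert (back == arr).all()

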
250 piker/types.py
@@ -1,250 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) (in stewardship for pikers)
#  - Tyler Goodlet
#  - Guillermo Rodriguez

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

'''
Extensions to built-in or (heavily used but 3rd party) friend-lib
types.

'''
from __future__ import annotations
from collections import UserList
from pprint import (
    saferepr,
)
from typing import (
    Any,
    Iterator,
)

from msgspec import (
    msgpack,
    Struct as _Struct,
    structs,
)


class DiffDump(UserList):
    '''
    Very simple list delegator that repr() dumps (presumed) tuple
    elements of the form `tuple[str, Any, Any]` in a nice
    multi-line readable form for analyzing `Struct` diffs.

    '''
    def __repr__(self) -> str:
        if not len(self):
            return super().__repr__()

        # format by displaying item pair's ``repr()`` on multiple,
        # indented lines such that they are more easily visually
        # comparable when printed to console.
        repstr: str = '[\n'
        for k, left, right in self:
            repstr += (
                f'({k},\n'
                f'\t{repr(left)},\n'
                f'\t{repr(right)},\n'
                ')\n'
            )
        repstr += ']\n'
        return repstr


class Struct(
    _Struct,

    # https://jcristharif.com/msgspec/structs.html#tagged-unions
    # tag='pikerstruct',
    # tag=True,
):
    '''
    A "human friendlier" (aka repl buddy) struct subtype.

    '''
    def _sin_props(self) -> Iterator[
        tuple[
            structs.FieldInfo,
            str,
            Any,
        ]
    ]:
        '''
        Iterate over all non-@property fields of this struct.

        '''
        fi: structs.FieldInfo
        for fi in structs.fields(self):
            key: str = fi.name
            val: Any = getattr(self, key)
            yield fi, key, val

    def to_dict(
        self,
        include_non_members: bool = True,

    ) -> dict:
        '''
        Like it sounds.. direct delegation to:
        https://jcristharif.com/msgspec/api.html#msgspec.structs.asdict

        BUT, by default we pop all non-member (aka not defined as
        struct fields) fields by default.

        '''
        asdict: dict = structs.asdict(self)
        if include_non_members:
            return asdict

        # only return a dict of the struct members
        # which were provided as input, NOT anything
        # added as type-defined `@property` methods!
        sin_props: dict = {}
        fi: structs.FieldInfo
        for fi, k, v in self._sin_props():
            sin_props[k] = asdict[k]

        return sin_props

    def pformat(
        self,
        field_indent: int = 2,
        indent: int = 0,

    ) -> str:
        '''
        Recursion-safe `pprint.pformat()` style formatting of
        a `msgspec.Struct` for sane reading by a human using a REPL.

        '''
        # global whitespace indent
        ws: str = ' '*indent

        # field whitespace indent
        field_ws: str = ' '*(field_indent + indent)

        # qtn: str = ws + self.__class__.__qualname__
        qtn: str = self.__class__.__qualname__

        obj_str: str = ''  # accumulator
        fi: structs.FieldInfo
        k: str
        v: Any
        for fi, k, v in self._sin_props():

            # TODO: how can we prefer `Literal['option1', 'option2,
            # ..]` over .__name__ == `Literal` but still get only the
            # latter for simple types like `str | int | None` etc..?
            ft: type = fi.type
            typ_name: str = getattr(ft, '__name__', str(ft))

            # recurse to get sub-struct's `.pformat()` output Bo
            if isinstance(v, Struct):
                val_str: str = v.pformat(
                    indent=field_indent + indent,
                    field_indent=indent + field_indent,
                )

            else:  # the `pprint` recursion-safe format:
                # https://docs.python.org/3.11/library/pprint.html#pprint.saferepr
                val_str: str = saferepr(v)

            obj_str += (field_ws + f'{k}: {typ_name} = {val_str},\n')

        return (
            f'{qtn}(\n'
            f'{obj_str}'
            f'{ws})'
        )

    # TODO: use a pprint.PrettyPrinter instance around ONLY rendering
    # inside a known tty?
    # def __repr__(self) -> str:
    #     ...

    # __str__ = __repr__ = pformat
    __repr__ = pformat

    def copy(
        self,
        update: dict | None = None,

    ) -> Struct:
        '''
        Validate-typecast all self defined fields, return a copy of
        us with all such fields.

        NOTE: This is kinda like the default behaviour in
        `pydantic.BaseModel` except a copy of the object is
        returned making it compat with `frozen=True`.

        '''
        if update:
            for k, v in update.items():
                setattr(self, k, v)

        # NOTE: roundtrip serialize to validate
        # - encode to msgpack binary format,
        # - decode that back to a struct.
        return msgpack.Decoder(type=type(self)).decode(
            msgpack.Encoder().encode(self)
        )

    def typecast(
        self,

        # TODO: allow only casting a named subset?
        # fields: set[str] | None = None,

    ) -> None:
        '''
        Cast all fields using their declared type annotations
        (kinda like what `pydantic` does by default).

        NOTE: this of course won't work on frozen types, use
        ``.copy()`` above in such cases.

        '''
        # https://jcristharif.com/msgspec/api.html#msgspec.structs.fields
        fi: structs.FieldInfo
        for fi in structs.fields(self):
            setattr(
                self,
                fi.name,
                fi.type(getattr(self, fi.name)),
            )

    def __sub__(
        self,
        other: Struct,

    ) -> DiffDump[tuple[str, Any, Any]]:
        '''
        Compare fields/items key-wise and return a ``DiffDump``
        for easy visual REPL comparison B)

        '''
        diffs: DiffDump[tuple[str, Any, Any]] = DiffDump()
        for fi in structs.fields(self):
            attr_name: str = fi.name
            ours: Any = getattr(self, attr_name)
            theirs: Any = getattr(other, attr_name)
            if ours != theirs:
                diffs.append((
                    attr_name,
                    ours,
                    theirs,
                ))

        return diffs


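# NOTE (editor's illustrative sketch, not part of the diff): a tiny,
# hypothetical `Struct` subtype demoing the extras defined above.
class _ExamplePoint(Struct):
    x: int
    y: int

p = _ExamplePoint(x=1, y=2)
q = _ExamplePoint(x=1, y=3)
assert (q - p) == [('y', 3, 2)]  # `DiffDump` of changed fields
assert p.copy() == p             # msgpack round-trip validation
print(p.pformat())               # REPL-friendly multi-line repr

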
@@ -14,8 +9,9 @@
 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.

-'''
-UI components built using `Qt` with major versions swapped in via
-the import indirection in the `.qt` sub-mod.
+"""
+Stuff for your eyes, aka super hawt Qt UI components.

-'''
+Currently we only support PyQt5 due to this issue in Pyside2:
+https://bugreports.qt.io/projects/PYSIDE/issues/PYSIDE-1313
+"""

@@ -21,10 +21,8 @@ Anchor funtions for UI placement of annotions.
 from __future__ import annotations
 from typing import Callable, TYPE_CHECKING

-from piker.ui.qt import (
-    QPointF,
-    QGraphicsPathItem,
-)
+from PyQt5.QtCore import QPointF
+from PyQt5.QtWidgets import QGraphicsPathItem

 if TYPE_CHECKING:
     from ._chart import ChartPlotWidget

@@ -20,22 +20,12 @@ Annotations for ur faces.
 """
 from typing import Callable

-from pyqtgraph import (
-    Point,
-    functions as fn,
-    Color,
-)
+from PyQt5 import QtCore, QtGui, QtWidgets
+from PyQt5.QtCore import QPointF, QRectF
+from PyQt5.QtWidgets import QGraphicsPathItem
+from pyqtgraph import Point, functions as fn, Color
 import numpy as np

-from piker.ui.qt import (
-    QtCore,
-    QtGui,
-    QtWidgets,
-    QPointF,
-    QRectF,
-    QGraphicsPathItem,
-)
-

 def mk_marker_path(
@@ -21,17 +21,13 @@ Main app startup and run.
 from functools import partial
 from types import ModuleType

-import tractor
+from PyQt5.QtCore import QEvent
 import trio

-from piker.ui.qt import (
-    QEvent,
-)
 from ..service import maybe_spawn_brokerd
 from . import _event
 from ._exec import run_qtractor
 from ..data.feed import install_brokerd_search
-from ..data._symcache import open_symcache
 from ..accounting import unpack_fqme
 from . import _search
 from ._chart import GodWidget

@@ -60,13 +56,7 @@ async def load_provider_search(
             portal,
             brokermod,
         ),
-        open_symcache(brokermod) as symcache,
     ):
-        if not symcache.mktmaps:
-            log.warning(
-                f'BACKEND DOES NOT (yet) support symcaching: `{brokermod.name}`'
-            )
-
         # keep search engine stream up until cancelled
         await trio.sleep_forever()

@@ -109,15 +99,12 @@ async def _async_main(
     sbar = godwidget.window.status_bar
     starting_done = sbar.open_status('starting ze sexy chartz')

-    # NOTE: by default we load all "builtin" backends for search
-    # and that includes loading their symcaches if possible B)
     needed_brokermods: dict[str, ModuleType] = {}
     for fqme in syms:
         brokername, *_ = unpack_fqme(fqme)
         needed_brokermods[brokername] = brokers[brokername]

     async with (
-        tractor.trionics.collapse_eg(),
         trio.open_nursery() as root_n,
     ):
         # set root nursery and task stack for spawning other charts/feeds
@@ -23,24 +23,16 @@ from functools import lru_cache
 from typing import Callable
 from math import floor

-import polars as pl
+import numpy as np
 import pyqtgraph as pg
+from PyQt5 import QtCore, QtGui, QtWidgets
+from PyQt5.QtCore import QPointF

-from piker.ui.qt import (
-    QtCore,
-    QtGui,
-    QtWidgets,
-    QPointF,
-    txt_flag,
-    align_flag,
-    px_cache_mode,
-)
 from . import _pg_overrides as pgo
 from ..accounting._mktinfo import float_digits
 from ._label import Label
 from ._style import DpiAwareFont, hcolor, _font
 from ._interaction import ChartView
-from ._dataviz import Viz

 _axis_pen = pg.mkPen(hcolor('bracket'))

@@ -295,7 +287,9 @@ class DynamicDateAxis(Axis):
     # time formats mapped by seconds between bars
     tick_tpl = {
         60 * 60 * 24: '%Y-%b-%d',
-        60: '%Y-%b-%d(%H:%M)',
+        60: '%H:%M',
+        30: '%H:%M:%S',
+        5: '%H:%M:%S',
         1: '%H:%M:%S',
     }

@@ -311,10 +305,10 @@ class DynamicDateAxis(Axis):
         # XX: ARGGGGG AG:LKSKDJF:LKJSDFD
         chart = self.pi.chart_widget

-        viz: Viz = chart._vizs[chart.name]
+        viz = chart._vizs[chart.name]
         shm = viz.shm
         array = shm.array
-        ifield: str = viz.index_field
+        ifield = viz.index_field
         index = array[ifield]
         i_0, i_l = index[0], index[-1]

@@ -335,7 +329,7 @@ class DynamicDateAxis(Axis):
         arr_len = index.shape[0]
         first = shm._first.value
         times = array['time']
-        epochs: list[int] = times[
+        epochs = times[
             list(
                 map(
                     int,

@@ -347,30 +341,23 @@ class DynamicDateAxis(Axis):
                 )
             ]
         else:
-            epochs: list[int] = list(map(int, indexes))
+            epochs = list(map(int, indexes))

         # TODO: **don't** have this hard coded shift to EST
-        delay: float = viz.time_step()
-        if delay > 1:
-            # NOTE: use less granular dt-str when using 1M+ OHLC
-            fmtstr: str = self.tick_tpl[delay]
-        else:
-            fmtstr: str = '%Y-%m-%d(%H:%M:%S)'
-
-        # https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.from_epoch.html#polars-from-epoch
-        pl_dts: pl.Series = pl.from_epoch(
+        # delay = times[-1] - times[-2]
+        dts = np.array(
             epochs,
-            time_unit='s',
-        # NOTE: kinda weird we can pass it to `.from_epoch()` no?
-        ).dt.replace_time_zone(
-            time_zone='UTC'
-        ).dt.convert_time_zone(
-            # TODO: pull this from either:
-            # -[ ] the mkt venue tz by default
-            # -[ ] the user's config under `sys.mkt_timezone: str`
-            'EST'
+            dtype='datetime64[s]',
         )
-        return pl_dts.dt.to_string(fmtstr).to_list()
+
+        # see units listing:
+        # https://numpy.org/devdocs/reference/arrays.datetime.html#datetime-units
+        return list(np.datetime_as_string(dts))
+
+        # TODO: per timeframe formatting?
+        # - we probably need this based on zoom now right?
+        # prec = self.np_dt_precision[delay]
+        # return dts.strftime(self.tick_tpl[delay])

     def tickStrings(
         self,

@@ -421,15 +408,11 @@ class AxisLabel(pg.GraphicsObject):
         super().__init__()
         self.setParentItem(parent)

-        self.setFlag(
-            self.GraphicsItemFlag.ItemIgnoresTransformations
-        )
+        self.setFlag(self.ItemIgnoresTransformations)
         self.setZValue(100)

         # XXX: pretty sure this is faster
-        self.setCacheMode(
-            px_cache_mode.DeviceCoordinateCache
-        )
+        self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)

         self._parent = parent

@@ -566,14 +549,21 @@ class AxisLabel(pg.GraphicsObject):

         return (self.rect.width(), self.rect.height())

+# _common_text_flags = (
+#     QtCore.Qt.TextDontClip |
+#     QtCore.Qt.AlignCenter |
+#     QtCore.Qt.AlignTop |
+#     QtCore.Qt.AlignHCenter |
+#     QtCore.Qt.AlignVCenter
+# )

 class XAxisLabel(AxisLabel):
     _x_margin = 8

     text_flags = (
-        align_flag.AlignCenter
-        | txt_flag.TextDontClip
+        QtCore.Qt.TextDontClip
+        | QtCore.Qt.AlignCenter
     )

     def size_hint(self) -> tuple[float, float]:

@@ -630,10 +620,10 @@ class YAxisLabel(AxisLabel):
     _y_margin: int = 4

     text_flags = (
-        align_flag.AlignLeft
-        | align_flag.AlignVCenter
-        # | align_flag.AlignHCenter
-        | txt_flag.TextDontClip
+        QtCore.Qt.AlignLeft
+        # QtCore.Qt.AlignHCenter
+        | QtCore.Qt.AlignVCenter
+        | QtCore.Qt.TextDontClip
     )

     def __init__(
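# NOTE (editor's illustrative sketch, not part of the diff):
# contrasting the two epoch -> tick-label strategies in the hunks
# above; the sample `epochs` values are made up.
import numpy as np
import polars as pl

epochs: list[int] = [1_700_000_000, 1_700_000_060]

# main branch: tz-aware polars conversion + an explicit fmt str
pl_labels: list[str] = pl.from_epoch(
    pl.Series(epochs),
    time_unit='s',
).dt.replace_time_zone(
    time_zone='UTC',
).dt.convert_time_zone(
    'EST',
).dt.to_string('%H:%M').to_list()

# basic_buy_bot branch: plain numpy datetime64 string-ification
np_labels: list[str] = list(
    np.datetime_as_string(np.array(epochs, dtype='datetime64[s]'))
)
assert len(pl_labels) == len(np_labels) == 2
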
@@ -28,19 +28,22 @@ from typing import (
     TYPE_CHECKING,
 )

-import pyqtgraph as pg
-import trio
-
-from piker.ui.qt import (
-    QtCore,
+from PyQt5 import QtCore, QtWidgets
+from PyQt5.QtCore import (
     Qt,
     QLineF,
+    # QPointF,
+)
+from PyQt5.QtWidgets import (
     QFrame,
     QWidget,
     QHBoxLayout,
     QVBoxLayout,
     QSplitter,
 )
+import pyqtgraph as pg
+import trio

 from ._axes import (
     DynamicDateAxis,
     PriceAxis,

@@ -279,7 +282,7 @@ class GodWidget(QWidget):
         # TODO: probably stick this in some kinda `LooknFeel` API?
         for tracker in self.rt_linked.mode.trackers.values():
             pp_nav = tracker.nav
-            if tracker.live_pp.cumsize:
+            if tracker.live_pp.size:
                 pp_nav.show()
                 pp_nav.hide_info()
             else:

@@ -403,7 +406,6 @@ class ChartnPane(QFrame):
         )
         self._sidepane = sidepane

-    @property
     def sidepane(self) -> FieldsForm | SearchWidget:
         return self._sidepane

@@ -493,7 +495,7 @@ class LinkedSplits(QWidget):
         Set the proportion of space allocated for linked subcharts.

         '''
-        ln: int = len(self.subplots) or 1
+        ln = len(self.subplots) or 1

         # proportion allocated to consumer subcharts
         if not prop:

@@ -567,8 +569,8 @@ class LinkedSplits(QWidget):

         # style?
         self.chart.setFrameStyle(
-            QFrame.Shape.StyledPanel |
-            QFrame.Shadow.Plain
+            QFrame.StyledPanel |
+            QFrame.Plain
         )

         return self.chart

@@ -686,8 +688,8 @@ class LinkedSplits(QWidget):

         cpw.plotItem.vb.linked = self
         cpw.setFrameStyle(
-            QFrame.Shape.StyledPanel
-            # | QFrame.Shadow.Plain
+            QtWidgets.QFrame.StyledPanel
+            # | QtWidgets.QFrame.Plain
         )

         # don't show the little "autoscale" A label.

@@ -923,7 +925,6 @@ class ChartPlotWidget(pg.PlotWidget):
         self.useOpenGL(use_open_gl)
         self.name = name
         self.data_key = data_key or name
-        self.qframe: ChartnPane | None = None

         # scene-local placeholder for book graphics
         # sizing to avoid overlap with data contents
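# NOTE (editor's illustrative sketch, not part of the diff): the
# frame-style rows above differ only in Qt enum scoping; under
# PyQt5 both spellings resolve to the same flag values.
from PyQt5.QtWidgets import QFrame

assert QFrame.Shape.StyledPanel == QFrame.StyledPanel
assert QFrame.Shadow.Plain == QFrame.Plain
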
@@ -28,14 +28,9 @@ from typing import (
 import inspect
 import numpy as np
 import pyqtgraph as pg
+from PyQt5 import QtCore, QtWidgets
+from PyQt5.QtCore import QPointF, QRectF

-from piker.ui.qt import (
-    QPointF,
-    QRectF,
-    QtCore,
-    QtWidgets,
-    px_cache_mode,
-)
 from ._style import (
     _xaxis_at,
     hcolor,

@@ -109,9 +104,7 @@ class LineDot(pg.CurvePoint):
         dot.setParentItem(self)

         # keep a static size
-        self.setFlag(
-            self.GraphicsItemFlag.ItemIgnoresTransformations
-        )
+        self.setFlag(self.ItemIgnoresTransformations)

     def event(
         self,

@@ -214,10 +207,9 @@ class ContentsLabel(pg.LabelItem):
         # this being "html" is the dumbest shit :eyeroll:

         self.setText(
-            "<b>i_arr</b>:{index}<br/>"
+            "<b>i</b>:{index}<br/>"
             # NB: these fields must be indexed in the correct order via
             # the slice syntax below.
-            "<b>i_shm</b>:{}<br/>"
             "<b>epoch</b>:{}<br/>"
             "<b>O</b>:{}<br/>"
             "<b>H</b>:{}<br/>"

@@ -227,7 +219,6 @@ class ContentsLabel(pg.LabelItem):
             # "<b>wap</b>:{}".format(
             *array[ix][
                 [
-                    'index',
                     'time',
                     'open',
                     'high',

@@ -279,15 +270,10 @@ class ContentsLabels:
         x_in: int,

     ) -> None:
-        for (
-            chart,
-            name,
-            label,
-            update,
-        ) in self._labels:
+        for chart, name, label, update in self._labels:

             viz = chart.get_viz(name)
-            array: np.ndarray = viz.shm._array
+            array = viz.shm.array
             index = array[viz.index_field]
             start = index[0]
             stop = index[-1]

@@ -298,7 +284,7 @@ class ContentsLabels:
         ):
             # out of range
             print('WTF out of range?')
-            # continue
+            continue

         # call provided update func with data point
         try:

@@ -306,7 +292,6 @@ class ContentsLabels:
             ix = np.searchsorted(index, x_in)
             if ix > len(array):
                 breakpoint()
-
             update(ix, array)

         except IndexError:

@@ -431,10 +416,10 @@ class Cursor(pg.GraphicsObject):
         # vertical and horizonal lines and a y-axis label

         vl = plot.addLine(x=0, pen=self.lines_pen, movable=False)
-        vl.setCacheMode(px_cache_mode.DeviceCoordinateCache)
+        vl.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)

         hl = plot.addLine(y=0, pen=self.lines_pen, movable=False)
-        hl.setCacheMode(px_cache_mode.DeviceCoordinateCache)
+        hl.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
         hl.hide()

         yl = YAxisLabel(

@@ -518,10 +503,7 @@ class Cursor(pg.GraphicsObject):
             plot=chart
         )
         chart.addItem(cursor)
-        self.graphics[chart].setdefault(
-            'cursors',
-            [],
-        ).append(cursor)
+        self.graphics[chart].setdefault('cursors', []).append(cursor)
         return cursor

     def mouseAction(
@@ -19,47 +19,42 @@ Fast, smooth, sexy curves.

 """
 from contextlib import contextmanager as cm
-from enum import EnumType
 from typing import Callable

 import numpy as np
 import pyqtgraph as pg
-
-from piker.ui.qt import (
-    QtWidgets,
-    QGraphicsItem,
+from PyQt5 import QtWidgets
+from PyQt5.QtWidgets import QGraphicsItem
+from PyQt5.QtCore import (
     Qt,
     QLineF,
     QRectF,
+)
+from PyQt5.QtGui import (
     QPainter,
     QPainterPath,
-    px_cache_mode,
 )
+from .._profile import pg_profile_enabled, ms_slower_then
 from ._style import hcolor
 from ..log import get_logger
-from ..toolz.profile import (
-    Profiler,
-    pg_profile_enabled,
-    ms_slower_then,
-)
+from .._profile import Profiler

 log = get_logger(__name__)


-pen_style: EnumType = Qt.PenStyle
-
 _line_styles: dict[str, int] = {
-    'solid': pen_style.SolidLine,
-    'dash': pen_style.DashLine,
-    'dot': pen_style.DotLine,
-    'dashdot': pen_style.DashDotLine,
+    'solid': Qt.PenStyle.SolidLine,
+    'dash': Qt.PenStyle.DashLine,
+    'dot': Qt.PenStyle.DotLine,
+    'dashdot': Qt.PenStyle.DashDotLine,
 }


 class FlowGraphic(pg.GraphicsObject):
     '''
-    Base class with minimal interface for `QPainterPath`
-    implemented, real-time updated "data flow" graphics.
+    Base class with minimal interface for `QPainterPath` implemented,
+    real-time updated "data flow" graphics.

     See subtypes below.

@@ -71,12 +66,12 @@ class FlowGraphic(pg.GraphicsObject):
     # XXX-NOTE-XXX: graphics caching B)
     # see explanation for different caching modes:
     # https://stackoverflow.com/a/39410081
-    cache_mode: int = px_cache_mode.DeviceCoordinateCache
+    cache_mode: int = QGraphicsItem.DeviceCoordinateCache
     # XXX: WARNING item caching seems to only be useful
     # if we don't re-generate the entire QPainterPath every time
     # don't ever use this - it's a colossal nightmare of artefacts
     # and is disastrous for performance.
-    # cache_mode.ItemCoordinateCache
+    # QGraphicsItem.ItemCoordinateCache
     # TODO: still questions todo with coord-cacheing that we should
     # probably talk to a core dev about:
     # - if this makes trasform interactions slower (such as zooming)

@@ -169,16 +164,15 @@ class FlowGraphic(pg.GraphicsObject):
         return None

     # XXX: due to a variety of weird jitter bugs and "smearing"
-    # artifacts when click-drag panning and viewing history time
-    # series, we offer this ctx-mngr interface to allow temporarily
-    # disabling Qt's graphics caching mode; this is now currently
-    # used from ``ChartView.start/signal_ic()`` methods which also
-    # disable the rt-display loop when the user is moving around
-    # a view.
+    # artifacts when click-drag panning and viewing history time series,
+    # we offer this ctx-mngr interface to allow temporarily disabling
+    # Qt's graphics caching mode; this is now currently used from
+    # ``ChartView.start/signal_ic()`` methods which also disable the
+    # rt-display loop when the user is moving around a view.
     @cm
     def reset_cache(self) -> None:
         try:
-            none = px_cache_mode.NoCache
+            none = QGraphicsItem.NoCache
             log.debug(
                 f'{self._name} -> CACHE DISABLE: {none}'
             )
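# NOTE (editor's illustrative sketch, not part of the diff): using
# the `_line_styles` pen-style table from the hunk above with a
# pyqtgraph pen, PyQt5 flavor.
import pyqtgraph as pg
from PyQt5.QtCore import Qt

_line_styles: dict[str, int] = {
    'solid': Qt.PenStyle.SolidLine,
    'dash': Qt.PenStyle.DashLine,
}
pen = pg.mkPen(color='w', style=_line_styles['dash'], width=1)
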
|
|
@ -36,12 +36,9 @@ from msgspec import (
|
||||||
field,
|
field,
|
||||||
)
|
)
|
||||||
import numpy as np
|
import numpy as np
|
||||||
from numpy import (
|
|
||||||
ndarray,
|
|
||||||
)
|
|
||||||
import pyqtgraph as pg
|
import pyqtgraph as pg
|
||||||
|
from PyQt5.QtCore import QLineF
|
||||||
|
|
||||||
from piker.ui.qt import QLineF
|
|
||||||
from ..data._sharedmem import (
|
from ..data._sharedmem import (
|
||||||
ShmArray,
|
ShmArray,
|
||||||
)
|
)
|
||||||
|
|
@ -52,7 +49,7 @@ from ..data._formatters import (
|
||||||
OHLCBarsAsCurveFmtr, # OHLC converted to line
|
OHLCBarsAsCurveFmtr, # OHLC converted to line
|
||||||
StepCurveFmtr, # "step" curve (like for vlm)
|
StepCurveFmtr, # "step" curve (like for vlm)
|
||||||
)
|
)
|
||||||
from ..tsp import (
|
from ..data._timeseries import (
|
||||||
slice_from_time,
|
slice_from_time,
|
||||||
)
|
)
|
||||||
from ._ohlc import (
|
from ._ohlc import (
|
||||||
|
|
@ -65,7 +62,7 @@ from ._curve import (
|
||||||
)
|
)
|
||||||
from ._render import Renderer
|
from ._render import Renderer
|
||||||
from ..log import get_logger
|
from ..log import get_logger
|
||||||
from ..toolz.profile import (
|
from .._profile import (
|
||||||
Profiler,
|
Profiler,
|
||||||
pg_profile_enabled,
|
pg_profile_enabled,
|
||||||
ms_slower_then,
|
ms_slower_then,
|
||||||
|
|
@@ -85,11 +82,10 @@ def render_baritems(
     viz: Viz,
     graphics: BarItems,
     read: tuple[
-        int, int, ndarray,
-        int, int, ndarray,
+        int, int, np.ndarray,
+        int, int, np.ndarray,
     ],
     profiler: Profiler,
-    force_redraw: bool = False,
     **kwargs,

 ) -> None:

@@ -220,11 +216,9 @@ def render_baritems(
     viz._in_ds = should_line

     should_redraw = (
-        force_redraw
-        or changed_to_line
+        changed_to_line
         or not should_line
     )
-    # print(f'should_redraw: {should_redraw}')
     return (
         graphics,
         r,

@@ -256,7 +250,7 @@ class ViewState(Struct):
     ] | None = None

     # last in view ``ShmArray.array[read_slc]`` data
-    in_view: ndarray | None = None
+    in_view: np.ndarray | None = None


 class Viz(Struct):
@@ -319,7 +313,6 @@ class Viz(Struct):
     _last_uppx: float = 0
     _in_ds: bool = False
     _index_step: float | None = None
-    _time_step: float | None = None

     # map from uppx -> (downsampled data, incremental graphics)
     _src_r: Renderer | None = None

@@ -366,8 +359,7 @@ class Viz(Struct):

     def index_step(
         self,
-        index_field: str | None = None,
+        reset: bool = False,

     ) -> float:
         '''
         Return the size between sample steps in the units of the

@@ -375,17 +367,12 @@ class Viz(Struct):
         epoch time in seconds.

         '''
-        # attempt to detect the best step size by scanning a sample
-        # of the source data.
-        if (
-            self._index_step is None
-            or index_field is not None
-        ):
-            index: ndarray = self.shm.array[
-                index_field
-                or self.index_field
-            ]
-            isample: ndarray = index[-16:]
+        # attempt to dectect the best step size by scanning a sample of
+        # the source data.
+        if self._index_step is None:
+            index: np.ndarray = self.shm.array[self.index_field]
+            isample: np.ndarray = index[-16:]

             mxdiff: None | float = None
             for step in np.diff(isample):

@@ -399,15 +386,7 @@ class Viz(Struct):
                 )
                 mxdiff = step

-            step: float = max(mxdiff, 1)
-
-            # only SET the internal index step if an explicit
-            # field name is NOT passed, since in such cases this
-            # is likely just being called from `.time_step()`.
-            if index_field is not None:
-                return step
-
-            self._index_step = step
+            self._index_step = max(mxdiff, 1)
             if (
                 mxdiff < 1
                 or 1 < mxdiff < 60
@@ -418,17 +397,6 @@
         return self._index_step

-    def time_step(self) -> float:
-        '''
-        Attempt to determine the per-sample time-step period by
-        forcing an epoch-index and calling `.index_step()`.
-
-        '''
-        if self._time_step is None:
-            self._time_step: float = self.index_step(index_field='time')
-
-        return self._time_step
-
     def maxmin(
         self,
@@ -436,9 +404,6 @@
         i_read_range: tuple[int, int] | None = None,
         use_caching: bool = True,

-        # XXX: internal debug
-        _do_print: bool = False
-
     ) -> tuple[float, float] | None:
         '''
         Compute the cached max and min y-range values for a given
@@ -458,14 +423,15 @@
         if shm is None:
             return None

-        arr: ndarray = shm.array
+        do_print: bool = False
+        arr = shm.array

         if i_read_range is not None:
             read_slc = slice(*i_read_range)
-            index: float | int = arr[read_slc][self.index_field]
+            index = arr[read_slc][self.index_field]
             if not index.size:
                 return None
-            ixrng: tuple[int, int] = (index[0], index[-1])
+            ixrng = (index[0], index[-1])

         else:
             if x_range is None:
@@ -483,24 +449,15 @@
             # TODO: hash the slice instead maybe?
             # https://stackoverflow.com/a/29980872
-            ixrng = lbar, rbar = (
-                round(x_range[0]),
-                round(x_range[1]),
-            )
+            ixrng = lbar, rbar = round(x_range[0]), round(x_range[1])

         if (
             use_caching
             and self._mxmn_cache_enabled
         ):
-            # TODO: is there a way to ONLY clear ranges containing
-            #   a certain sub-range?
-            # -[ ] currently we have a problem where a previously
-            #   cached mxmn will persist even if the viz is "hard
-            #   re-rendered" (usually bc underlying data was
-            #   corrected)
             cached_result = self._mxmns.get(ixrng)
             if cached_result:
-                if _do_print:
+                if do_print:
                     print(
                         f'{self.name} CACHED maxmin\n'
                         f'{ixrng} -> {cached_result}'
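NB: the caching above keys the computed extrema by the rounded view x-range;
stripped to its essentials (the ``MxMnCache`` class name is illustrative)::

    class MxMnCache:
        '''
        Map a rounded (left, right) x-range to its computed
        (ylow, yhi) so repeat queries skip the array scan.
        '''
        def __init__(self) -> None:
            self._mxmns: dict[
                tuple[int, int],
                tuple[float, float],
            ] = {}

        def get(
            self,
            x_range: tuple[float, float],
        ) -> tuple[float, float] | None:
            ixrng = round(x_range[0]), round(x_range[1])
            return self._mxmns.get(ixrng)

        def set(
            self,
            x_range: tuple[float, float],
            mxmn: tuple[float, float],
        ) -> None:
            ixrng = round(x_range[0]), round(x_range[1])
            self._mxmns[ixrng] = mxmn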
@@ -530,7 +487,7 @@
                 (rbar - ifirst) + 1
             )

-            slice_view: ndarray = arr[read_slc]
+            slice_view = arr[read_slc]

         if not slice_view.size:
             log.warning(
@@ -541,7 +498,7 @@
         elif self.ds_yrange:
             mxmn = self.ds_yrange
-            if _do_print:
+            if do_print:
                 print(
                     f'{self.name} M4 maxmin:\n'
                     f'{ixrng} -> {mxmn}'
@@ -558,7 +515,7 @@
             mxmn = ylow, yhigh
             if (
-                _do_print
+                do_print
             ):
                 s = 3
                 print(
@@ -572,23 +529,14 @@
         # cache result for input range
         ylow, yhi = mxmn
-        diff: float = yhi - ylow
-
-        # order-of-magnitude check
-        # TODO: really we should be checking the hi or low
-        # against the previous sample to catch stuff like,
-        # - rando stock (reverse-)split
-        # - null-segments written by some prior
-        #   crash-during-backfil
-        if diff > 0:
-            omg: float = abs(logf(diff, 10))
-        else:
-            omg: float = 0

         try:
             prolly_anomaly: bool = (
-                # diff == 0
-                (ylow and omg > 10)
+                (
+                    abs(logf(ylow, 10)) > 16
+                    if ylow
+                    else False
+                )
                 or (
                     isnan(ylow) or isnan(yhi)
                 )
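NB: main's side (the ``-`` lines) sanity-checks the order of magnitude of the
hi-lo spread instead of just ``ylow``; a worked standalone version, with
``math.log10`` standing in for the ``logf`` alias::

    from math import isnan, log10

    def prolly_anomaly(ylow: float, yhi: float) -> bool:
        # a huge spread (or NaN endpoints) usually means corrupt
        # history, e.g. null segments from a crashed backfill
        diff = yhi - ylow
        omg = abs(log10(diff)) if diff > 0 else 0
        return bool(
            (ylow and omg > 10)
            or isnan(ylow)
            or isnan(yhi)
        )

    assert not prolly_anomaly(99.5, 102.0)   # ~2.5 spread: fine
    assert prolly_anomaly(100.0, 1e18)       # ~18 orders: flagged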
@@ -615,8 +563,7 @@
     def view_range(self) -> tuple[int, int]:
         '''
-        Return the start and stop x-indexes for the managed
-        ``ViewBox``.
+        Return the start and stop x-indexes for the managed ``ViewBox``.

         '''
         vr = self.plot.viewRect()
@@ -629,7 +576,7 @@
         self,
         view_range: None | tuple[float, float] = None,
         index_field: str | None = None,
-        array: ndarray | None = None,
+        array: np.ndarray | None = None,

     ) -> tuple[
         int, int, int, int, int, int
@@ -700,8 +647,8 @@
         profiler: None | Profiler = None,

     ) -> tuple[
-        int, int, ndarray,
-        int, int, ndarray,
+        int, int, np.ndarray,
+        int, int, np.ndarray,
     ]:
         '''
         Read the underlying shm array buffer and
@@ -871,10 +818,6 @@
                 graphics,
                 read,
                 profiler,
-
-                # NOTE: only set when caller says to
-                force_redraw=should_redraw,
-
                 **kwargs,
             )

@@ -1037,39 +980,6 @@
             graphics,
         )

-    def reset_graphics(
-        self,
-
-        # TODO: allow only resetting within some x-domain range?
-        # ixrng: tuple[int, int] | None = None,
-
-    ) -> None:
-        '''
-        Hard reset all graphics (rendering) layers for this
-        data viz including clearing the mxmn auto-y-range
-        cache.
-
-        Normally called when the underlying data set is modified
-        (probably by some `.tsp` correcting/editing routine) and
-        the (now cached) graphics need to be fully re-rendered from
-        source.
-
-        '''
-        log.warning(
-            f'Forcing hard Viz graphihcs RESET:\n'
-            f'.name: {self.name}\n'
-            f'.index_field: {self.index_field}\n'
-            f'.index_step(): {self.index_step()}\n'
-            f'.time_step(): {self.time_step()}\n'
-        )
-        # XXX: always clear the mxn y-range cache
-        # to avoid old data (anomalies) from being
-        # retained in auto-yrange output.
-        self._mxmn_cache_enabled = False
-        self._mxmns.clear()
-        self.update_graphics(force_redraw=True)
-        self._mxmn_cache_enabled = True
-
     def draw_last(
         self,
         array_key: str | None = None,
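NB: the ``.reset_graphics()`` method added on main boils down to a
disable-clear-redraw-reenable sequence around the y-range cache;
schematically (free-function form for illustration only)::

    def hard_reset(viz) -> None:
        # keep the forced redraw from re-caching stale or
        # anomalous extrema computed off the old data
        viz._mxmn_cache_enabled = False
        viz._mxmns.clear()
        viz.update_graphics(force_redraw=True)
        viz._mxmn_cache_enabled = True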
@@ -1162,7 +1072,7 @@
         '''
         shm: ShmArray = self.shm
-        array: ndarray = shm.array
+        array: np.ndarray = shm.array
         view: ChartView = self.plot.vb
         (
             vl,

piker/ui/_display.py
@@ -36,28 +36,25 @@ import pyqtgraph as pg
 from msgspec import field

 # from .. import brokers
-from piker.accounting import (
+from ..accounting import (
     MktPair,
 )
-from piker.types import Struct
-from piker.data import (
+from ..data import (
     open_feed,
     Feed,
     Flume,
-    open_sample_stream,
-    ShmArray,
 )
-from piker.data.ticktools import (
+from ..data.ticktools import (
     _tick_groups,
     _auction_ticks,
 )
-from piker.toolz import (
-    pg_profile_enabled,
-    ms_slower_then,
-    Profiler,
+from ..data.types import Struct
+from ..data._sharedmem import (
+    ShmArray,
+)
+from ..data._sampling import (
+    open_sample_stream,
 )
-from piker.log import get_logger
-from piker import config
 # from ..data._source import tf_in_1s
 from ._axes import YAxisLabel
 from ._chart import (
@@ -82,6 +79,12 @@ from .order_mode import (
     open_order_mode,
     OrderMode,
 )
+from .._profile import (
+    pg_profile_enabled,
+    ms_slower_then,
+)
+from ..log import get_logger
+from .._profile import Profiler

 if TYPE_CHECKING:
     from ._interaction import ChartView
@@ -211,9 +214,9 @@ async def increment_history_view(
 ):
     hist_chart: ChartPlotWidget = ds.hist_chart
     hist_viz: Viz = ds.hist_viz
-    # viz: Viz = ds.viz
+    viz: Viz = ds.viz
     assert 'hist' in hist_viz.shm.token['shm_name']
-    # name: str = hist_viz.name
+    name: str = hist_viz.name

     # TODO: seems this is more reliable at keeping the slow
     # chart incremented in view more correctly?
@@ -226,8 +229,7 @@ async def increment_history_view(
     # draw everything from scratch on first entry!
     for curve_name, hist_viz in hist_chart._vizs.items():
         log.info(f'Forcing hard redraw -> {curve_name}')
-        hist_viz.reset_graphics()
-        # hist_viz.update_graphics(force_redraw=True)
+        hist_viz.update_graphics(force_redraw=True)

     async with open_sample_stream(1.) as min_istream:
         async for msg in min_istream:
@@ -250,27 +252,17 @@ async def increment_history_view(
             # - samplerd could emit the actual update range via
             #   tuple and then we only enter the below block if that
             #   range is detected as in-view?
-            # match msg:
-            #     case {
-            #         'backfilling': (viz_name, timeframe),
-            #     } if (
-            #         viz_name == name
-            #     ):
-            #         log.warning(
-            #             f'Forcing HARD REDRAW:\n'
-            #             f'name: {name}\n'
-            #             f'timeframe: {timeframe}\n'
-            #         )
-            #         # TODO: only allow this when the data is IN VIEW!
-            #         # also, we probably can do this more efficiently
-            #         # / smarter by only redrawing the portion of the
-            #         # path necessary?
-            #         {
-            #             60: hist_viz,
-            #             1: viz,
-            #         }[timeframe].update_graphics(
-            #             force_redraw=True
-            #         )
+            if (
+                (bf_wut := msg.get('backfilling', False))
+            ):
+                viz_name, timeframe = bf_wut
+                if viz_name == name:
+                    log.info(f'Forcing hard redraw -> {name}@{timeframe}')
+                    match timeframe:
+                        case 60:
+                            hist_viz.update_graphics(force_redraw=True)
+                        case 1:
+                            viz.update_graphics(force_redraw=True)

             # check if slow chart needs an x-domain shift and/or
             # y-range resize.
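NB: main's commented-out variant sketches the same dispatch with structural
pattern matching plus a timeframe-keyed viz table; a runnable shape of that
idea (the stub viz type and sample fqme are illustrative)::

    class StubViz:
        def update_graphics(self, force_redraw: bool = False) -> None:
            print(f'redraw, force={force_redraw}')

    def on_sampler_msg(
        msg: dict,
        name: str,
        vizs: dict[int, StubViz],
    ) -> None:
        match msg:
            case {'backfilling': (viz_name, timeframe)} if viz_name == name:
                # pick the slow (60s) or fast (1s) layer and
                # force a full path redraw
                vizs[timeframe].update_graphics(force_redraw=True)

    on_sampler_msg(
        {'backfilling': ('btcusdt.binance', 60)},
        'btcusdt.binance',
        {60: StubViz(), 1: StubViz()},
    )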
@@ -311,7 +303,6 @@
 async def graphics_update_loop(

-    dss: dict[str, DisplayState],
     nurse: trio.Nursery,
     godwidget: GodWidget,
     feed: Feed,
@@ -353,6 +344,8 @@ async def graphics_update_loop(
         'i_last_slow_t': 0,  # multiview-global slow (1m) step index
     }

+    dss: dict[str, DisplayState] = {}
+
     for fqme, flume in feed.flumes.items():
         ohlcv = flume.rt_shm
         hist_ohlcv = flume.hist_shm
@@ -469,20 +462,12 @@ async def graphics_update_loop(
         await trio.sleep(0)

         if ds.hist_vars['i_last'] < ds.hist_vars['i_last_append']:
-            await tractor.pause()
+            await tractor.breakpoint()

-    # try:
-
-    # XXX TODO: we need to do _dss UPDATE here so that when
-    # a feed-view is switched you can still remote annotate the
-    # prior view..
-    from . import _remote_ctl
-    _remote_ctl._dss.update(dss)
-
     # main real-time quotes update loop
     stream: tractor.MsgStream
     async with feed.open_multi_stream() as stream:
-        # assert stream
+        assert stream
         async for quotes in stream:
             quote_period = time.time() - last_quote_s
             quote_rate = round(
@@ -498,7 +483,7 @@ async def graphics_update_loop(
                     pass
                     # log.warning(f'High quote rate {mkt.fqme}: {quote_rate}')

-            last_quote_s: float = time.time()
+            last_quote_s = time.time()

             for fqme, quote in quotes.items():
                 ds = dss[fqme]
@@ -528,12 +513,6 @@ async def graphics_update_loop(
                     quote,
                 )

-    # finally:
-    #     # XXX: cancel any remote annotation control ctxs
-    #     _remote_ctl._dss = None
-    #     for cid, (ctx, aids) in _remote_ctl._ctxs.items():
-    #         await ctx.cancel()
-

 def graphics_update_cycle(
     ds: DisplayState,
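NB: the rate accounting at the top of the quote loop is just an inverse
inter-arrival period; isolated (the function wrapper is illustrative, the
loop body above does this inline)::

    import time

    last_quote_s: float = time.time()

    def measure_quote_rate() -> float:
        # quotes/sec ~= 1 / seconds since the previous batch
        global last_quote_s
        now = time.time()
        quote_period = now - last_quote_s
        last_quote_s = now
        if quote_period <= 0:
            return float('inf')
        return round(1 / quote_period, 1)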
@@ -1232,8 +1211,6 @@ async def link_views_with_region(
 # region.sigRegionChangeFinished.connect(update_pi_from_region)


-# NOTE: default is set to 60 FPS until the runtime delivers the
-# discoverd hw value below.
 _quote_throttle_rate: int = 60 - 6


@@ -1252,7 +1229,7 @@ async def display_symbol_data(
     fast from a cached watch-list.

     '''
-    # sbar = godwidget.window.status_bar
+    sbar = godwidget.window.status_bar
     # historical data fetch
     # brokermod = brokers.get_brokermod(provider)
@@ -1262,11 +1239,11 @@ async def display_symbol_data(
     #     group_key=loading_sym_key,
     # )

-    # for fqme in fqmes:
-    #     loading_sym_key = sbar.open_status(
-    #         f'loading {fqme} ->',
-    #         group_key=True
-    #     )
+    for fqme in fqmes:
+        loading_sym_key = sbar.open_status(
+            f'loading {fqme} ->',
+            group_key=True
+        )

     # (TODO: make this not so shit XD)
     # close group status once a symbol feed fully loads to view.
@@ -1275,54 +1252,26 @@ async def display_symbol_data(
     # TODO: ctl over update loop's maximum frequency.
     # - load this from a config.toml!
     # - allow dyanmic configuration from chart UI?
-    (
-        conf,
-        path,
-    ) = config.load()
-    ui_conf: dict = conf['ui']
-
     global _quote_throttle_rate
     from ._window import main_window
-
-    display_rate: int = floor(
-        main_window().current_screen().refreshRate()
-    ) - 6
-
-    mx_redraw_rate: int = ui_conf.get(
-        'max_redraw_rate',
-        _quote_throttle_rate,
-    )
-
-    if mx_redraw_rate < display_rate:
-        log.info(
-            'Down-throttling redraw rate to config setting\n'
-            f'display FPS: {display_rate}\n'
-            'max_redraw_rate: {max_redraw_rate}\n'
-        )
-    else:
-        _quote_throttle_rate = display_rate
+    display_rate = main_window().current_screen().refreshRate()
+    _quote_throttle_rate = floor(display_rate) - 6

     # TODO: we should be able to increase this if we use some
     # `mypyc` speedups elsewhere? 22ish seems to be the sweet
     # spot for single-feed chart.
     num_of_feeds = len(fqmes)
-    # if num_of_feeds > 1:
-    # there will be more ctx switches with more than 1 feed so we
-    # max throttle down a bit more.
-    mx_per_feed: int = (
-        ui_conf.get(
-            'per_feed_redraw_rate',
-            mx_redraw_rate,
-        )
-        or 16
-    )
+    mx: int = 22
+    if num_of_feeds > 1:
+        # there will be more ctx switches with more than 1 feed so we
+        # max throttle down a bit more.
+        mx = 16

     # limit to at least display's FPS
     # avoiding needless Qt-in-guest-mode context switches
     cycles_per_feed = min(
         round(_quote_throttle_rate/num_of_feeds),
-        mx_per_feed,
+        mx,
     )

     feed: Feed
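NB: worked numbers for the throttle math on the right side: a 60 Hz screen
gives ``floor(60) - 6 = 54`` as the base rate, and with two feeds the
per-feed budget is ``min(round(54/2), 16) = 16`` update cycles::

    from math import floor

    display_rate = 60.0                    # reported screen refresh (Hz)
    quote_throttle_rate = floor(display_rate) - 6    # -> 54

    num_of_feeds = 2
    mx = 22
    if num_of_feeds > 1:
        # more ctx switches with multiple feeds: throttle harder
        mx = 16

    cycles_per_feed = min(
        round(quote_throttle_rate / num_of_feeds),   # 27
        mx,                                          # 16
    )
    assert cycles_per_feed == 16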
@@ -1445,10 +1394,7 @@ async def display_symbol_data(
     # for pause/resume on mouse interaction
     rt_chart.feed = feed

-    async with (
-        tractor.trionics.collapse_eg(),
-        trio.open_nursery() as ln,
-    ):
+    async with trio.open_nursery() as ln:
         # if available load volume related built-in display(s)
         vlm_charts: dict[
             str,
@@ -1470,7 +1416,7 @@ async def display_symbol_data(
                 start_fsp_displays,
                 rt_linked,
                 flume,
-                # loading_sym_key,
+                loading_sym_key,
                 loglevel,
             )

@@ -1589,10 +1535,8 @@ async def display_symbol_data(
         )

         # start update loop task
-        dss: dict[str, DisplayState] = {}
         ln.start_soon(
             graphics_update_loop,
-            dss,
             ln,
             godwidget,
             feed,
@@ -1606,31 +1550,15 @@ async def display_symbol_data(
         order_ctl_fqme: str = fqmes[0]
         mode: OrderMode
         async with (
-
             open_order_mode(
                 feed,
                 godwidget,
                 order_ctl_fqme,
                 order_mode_started,
                 loglevel=loglevel
-            ) as mode,
-
-            # TODO: maybe have these startup sooner before
-            # order mode fully boots? but we gotta,
-            # -[ ] decouple the order mode bindings until
-            #      the mode has fully booted..
-            # -[ ] maybe do an Event to sync?
-
-            # start input handling for ``ChartView`` input
-            # (i.e. kb + mouse handling loops)
-            rt_chart.view.open_async_input_handler(
-                dss=dss,
-            ),
-            hist_chart.view.open_async_input_handler(
-                dss=dss,
-            ),
-
+            ) as mode
         ):

             rt_linked.mode = mode

             rt_viz = rt_chart.get_viz(order_ctl_fqme)

piker/ui/_editors.py
@@ -21,8 +21,7 @@ Higher level annotation editors.
 from __future__ import annotations
 from collections import defaultdict
 from typing import (
-    Sequence,
-    TYPE_CHECKING,
+    TYPE_CHECKING
 )

 import pyqtgraph as pg
@@ -32,34 +31,24 @@ from pyqtgraph import (
     QtCore,
     QtWidgets,
 )
+from PyQt5.QtGui import (
+    QColor,
+)
+from PyQt5.QtWidgets import (
+    QLabel,
+)

 from pyqtgraph import functions as fn
+from PyQt5.QtCore import QPointF
 import numpy as np

-from piker.types import Struct
-from piker.ui.qt import (
-    Qt,
-    QPointF,
-    QRectF,
-    QGraphicsProxyWidget,
-    QGraphicsScene,
-    QLabel,
-    QColor,
-    QTransform,
-)
-from ._style import (
-    hcolor,
-    _font,
-)
+from ._style import hcolor, _font
 from ._lines import LevelLine
 from ..log import get_logger
+from ..data.types import Struct

 if TYPE_CHECKING:
-    from ._chart import (
-        GodWidget,
-        ChartPlotWidget,
-    )
-    from ._interaction import ChartView
+    from ._chart import GodWidget


 log = get_logger(__name__)
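NB: both sides lean on the ``TYPE_CHECKING`` guard so chart types can be
referenced in annotations without runtime import cycles; the general
pattern::

    from __future__ import annotations
    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        # evaluated by type checkers only, never at runtime,
        # which breaks the ui-module import cycle
        from ._chart import GodWidget

    def hookup(godw: GodWidget) -> None:
        # the annotation stays a lazy string at runtime thanks
        # to `from __future__ import annotations`
        ...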
@@ -76,7 +65,7 @@ class ArrowEditor(Struct):
         uid: str,
         x: float,
         y: float,
-        color: str = 'default',
+        color='default',
         pointing: str | None = None,

     ) -> pg.ArrowItem:
@@ -262,75 +251,43 @@ class LineEditor(Struct):
         return lines


-def as_point(
-    pair: Sequence[float, float] | QPointF,
-) -> list[QPointF, QPointF]:
-    '''
-    Case any input tuple of floats to a a list of `QPoint` objects
-    for use in Qt geometry routines.
-
-    '''
-    if isinstance(pair, QPointF):
-        return pair
-
-    return QPointF(pair[0], pair[1])
-
-
-# TODO: maybe implement better, something something RectItemProxy??
-# -[ ] dig into details of how proxy's work?
-#      https://doc.qt.io/qt-5/qgraphicsscene.html#addWidget
-# -[ ] consider using `.addRect()` maybe?
-
 class SelectRect(QtWidgets.QGraphicsRectItem):
-    '''
-    A data-view "selection rectangle": the most fundamental
-    geometry for annotating data views.
-
-    - https://doc.qt.io/qt-5/qgraphicsrectitem.html
-    - https://doc.qt.io/qt-6/qgraphicsrectitem.html
-
-    '''
     def __init__(
         self,
         viewbox: ViewBox,
-        color: str | None = None,
+        color: str = 'dad_blue',
     ) -> None:
         super().__init__(0, 0, 1, 1)

         # self.rbScaleBox = QGraphicsRectItem(0, 0, 1, 1)
-        self.vb: ViewBox = viewbox
+        self.vb = viewbox
+        self._chart: 'ChartPlotWidget' = None  # noqa

-        self._chart: ChartPlotWidget | None = None  # noqa
-
-        # TODO: maybe allow this to be dynamic via a method?
-        #l override selection box color
-        color: str = color or 'dad_blue'
+        # override selection box color
         color = QColor(hcolor(color))

         self.setPen(fn.mkPen(color, width=1))
         color.setAlpha(66)
         self.setBrush(fn.mkBrush(color))
         self.setZValue(1e9)
+        self.hide()
+        self._label = None

         label = self._label = QLabel()
-        label.setTextFormat(
-            Qt.TextFormat.MarkdownText
-        )
+        label.setTextFormat(0)  # markdown
         label.setFont(_font.font)
         label.setMargin(0)
         label.setAlignment(
             QtCore.Qt.AlignLeft
             # | QtCore.Qt.AlignVCenter
         )
-        label.hide()  # always right after init

         # proxy is created after containing scene is initialized
-        self._label_proxy: QGraphicsProxyWidget | None = None
-        self._abs_top_right: Point | None = None
+        self._label_proxy = None
+        self._abs_top_right = None

-        # TODO: "swing %" might be handy here (data's max/min
-        # # % change)?
-        self._contents: list[str] = [
+        # TODO: "swing %" might be handy here (data's max/min # % change)
+        self._contents = [
             'change: {pchng:.2f} %',
             'range: {rng:.2f}',
             'bars: {nbars}',
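NB: the ``as_point()`` helper added on main is a tiny input normalizer for
Qt geometry calls; equivalently (PyQt5 shown, matching the right side's
imports)::

    from PyQt5.QtCore import QPointF

    def as_point(pair) -> QPointF:
        # pass a QPointF through untouched, otherwise treat
        # the input as an (x, y) float pair
        if isinstance(pair, QPointF):
            return pair
        return QPointF(pair[0], pair[1])

    p = as_point((1.0, 2.0))
    assert (p.x(), p.y()) == (1.0, 2.0)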
@@ -340,31 +297,12 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
             'sigma: {std:.2f}',
         ]

-        self.add_to_view(viewbox)
-        self.hide()
-
-    def add_to_view(
-        self,
-        view: ChartView,
-    ) -> None:
-        '''
-        Self-defined view hookup impl which will
-        also re-assign the internal ref.
-
-        '''
-        view.addItem(
-            self,
-            ignoreBounds=True,
-        )
-        if self.vb is not view:
-            self.vb = view
-
     @property
-    def chart(self) -> ChartPlotWidget:  # noqa
+    def chart(self) -> 'ChartPlotWidget':  # noqa
         return self._chart

     @chart.setter
-    def chart(self, chart: ChartPlotWidget) -> None:  # noqa
+    def chart(self, chart: 'ChartPlotWidget') -> None:  # noqa
         self._chart = chart
         chart.sigRangeChanged.connect(self.update_on_resize)
         palette = self._label.palette()
@@ -377,155 +315,57 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
         )

     def update_on_resize(self, vr, r):
-        '''
-        Re-position measure label on view range change.
+        """Re-position measure label on view range change.

-        '''
+        """
         if self._abs_top_right:
             self._label_proxy.setPos(
                 self.vb.mapFromView(self._abs_top_right)
             )

-    def set_scen_pos(
+    def mouse_drag_released(
         self,
-        scen_p1: QPointF,
-        scen_p2: QPointF,
-
-        update_label: bool = True,
-
+        p1: QPointF,
+        p2: QPointF
     ) -> None:
-        '''
-        Set position from scene coords of selection rect (normally
-        from mouse position) and accompanying label, move label to
-        match.
+        """Called on final button release for mouse drag with start and
+        end positions.

-        '''
-        # NOTE XXX: apparently just setting it doesn't work!?
-        # i have no idea why but it's pretty weird we have to do
-        # this transform thing which was basically pulled verbatim
-        # from the `pg.ViewBox.updateScaleBox()` method.
-        view_rect: QRectF = self.vb.childGroup.mapRectFromScene(
-            QRectF(
-                scen_p1,
-                scen_p2,
-            )
-        )
-        self.setPos(view_rect.topLeft())
-        # XXX: does not work..!?!?
-        # https://doc.qt.io/qt-5/qgraphicsrectitem.html#setRect
-        # self.setRect(view_rect)
-
-        tr = QTransform.fromScale(
-            view_rect.width(),
-            view_rect.height(),
-        )
-        self.setTransform(tr)
-
-        # XXX: never got this working, was always offset
-        # / transformed completely wrong (and off to the far right
-        # from the cursor?)
-        # self.set_view_pos(
-        #     view_rect=view_rect,
-        #     # self.vwqpToView(p1),
-        #     # self.vb.mapToView(p2),
-        #     # start_pos=self.vb.mapToScene(p1),
-        #     # end_pos=self.vb.mapToScene(p2),
-        # )
-        self.show()
-
-        if update_label:
-            self.init_label(view_rect)
-
-    def set_view_pos(
-        self,
-
-        start_pos: QPointF | Sequence[float, float] | None = None,
-        end_pos: QPointF | Sequence[float, float] | None = None,
-        view_rect: QRectF | None = None,
-
-        update_label: bool = True,
+        """
+        self.set_pos(p1, p2)

+    def set_pos(
+        self,
+        p1: QPointF,
+        p2: QPointF
     ) -> None:
-        '''
-        Set position from `ViewBox` coords (i.e. from the actual
-        data domain) of rect (and any accompanying label which is
-        moved to match).
+        """Set position of selection rect and accompanying label, move
+        label to match.

-        '''
-        if self._chart is None:
-            raise RuntimeError(
-                'You MUST assign a `SelectRect.chart: ChartPlotWidget`!'
-            )
-
-        if view_rect is None:
-            # ensure point casting
-            start_pos: QPointF = as_point(start_pos)
-            end_pos: QPointF = as_point(end_pos)
-
-            # map to view coords and update area
-            view_rect = QtCore.QRectF(
-                start_pos,
-                end_pos,
-            )
-
-        self.setPos(view_rect.topLeft())
-
-        # NOTE: SERIOUSLY NO IDEA WHY THIS WORKS...
-        # but it does and all the other commented stuff above
-        # dint, dawg..
-
-        # self.resetTransform()
-        # self.setRect(view_rect)
-
-        tr = QTransform.fromScale(
-            view_rect.width(),
-            view_rect.height(),
-        )
-        self.setTransform(tr)
-
-        if update_label:
-            self.init_label(view_rect)
-
-        print(
-            'SelectRect modify:\n'
-            f'QRectF: {view_rect}\n'
-            f'start_pos: {start_pos}\n'
-            f'end_pos: {end_pos}\n'
-        )
-        self.show()
-
-    def init_label(
-        self,
-        view_rect: QRectF,
-    ) -> QLabel:
-
-        # should be init-ed in `.__init__()`
-        label: QLabel = self._label
-        cv: ChartView = self.vb
-
-        # https://doc.qt.io/qt-5/qgraphicsproxywidget.html
+        """
         if self._label_proxy is None:
-            scen: QGraphicsScene = cv.scene()
-            # NOTE: specifically this is passing a widget
-            # pointer to the scene's `.addWidget()` as per,
-            # https://doc.qt.io/qt-5/qgraphicsproxywidget.html#embedding-a-widget-with-qgraphicsproxywidget
-            self._label_proxy: QGraphicsProxyWidget = scen.addWidget(label)
-
-        # get label startup coords
-        tl: QPointF = view_rect.topLeft()
-        br: QPointF = view_rect.bottomRight()
-
-        x1, y1 = tl.x(), tl.y()
-        x2, y2 = br.x(), br.y()
-
-        # TODO: to remove, previous label corner point unpacking
-        # x1, y1 = start_pos.x(), start_pos.y()
-        # x2, y2 = end_pos.x(), end_pos.y()
-        # y1, y2 = start_pos.y(), end_pos.y()
-        # x1, x2 = start_pos.x(), end_pos.x()
-
-        # TODO: heh, could probably use a max-min streamin algo
-        # here too?
+            # https://doc.qt.io/qt-5/qgraphicsproxywidget.html
+            self._label_proxy = self.vb.scene().addWidget(self._label)
+
+        start_pos = self.vb.mapToView(p1)
+        end_pos = self.vb.mapToView(p2)
+
+        # map to view coords and update area
+        r = QtCore.QRectF(start_pos, end_pos)
+
+        # old way; don't need right?
+        # lr = QtCore.QRectF(p1, p2)
+        # r = self.vb.childGroup.mapRectFromParent(lr)
+
+        self.setPos(r.topLeft())
+        self.resetTransform()
+        self.setRect(r)
+        self.show()
+
+        y1, y2 = start_pos.y(), end_pos.y()
+        x1, x2 = start_pos.x(), end_pos.x()
+
+        # TODO: heh, could probably use a max-min streamin algo here too
         _, xmn = min(y1, y2), min(x1, x2)
         ymx, xmx = max(y1, y2), max(x1, x2)
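NB: the two sides differ mostly in which frame the rect is built: the right
maps both scene points into view (data) coords and calls ``setRect()``,
while main keeps the child-group rect and sizes it via a scale transform.
The mapping step in isolation (standard pyqtgraph ``ViewBox`` API, function
wrapper illustrative)::

    import pyqtgraph as pg
    from PyQt5 import QtCore

    def scene_rect_to_view(
        vb: pg.ViewBox,
        p1: QtCore.QPointF,
        p2: QtCore.QPointF,
    ) -> QtCore.QRectF:
        # scene-space mouse points -> a rect in data coords
        start_pos = vb.mapToView(p1)
        end_pos = vb.mapToView(p2)
        return QtCore.QRectF(start_pos, end_pos)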
@@ -535,35 +375,26 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
         ixmn, ixmx = round(xmn), round(xmx)
         nbars = ixmx - ixmn + 1

-        chart: ChartPlotWidget = self._chart
-        data: np.ndarray = chart.get_viz(
-            chart.name
-        ).shm.array[ixmn:ixmx]
+        chart = self._chart
+        data = chart.get_viz(chart.name).shm.array[ixmn:ixmx]

         if len(data):
-            std: float = data['close'].std()
-            dmx: float = data['high'].max()
-            dmn: float = data['low'].min()
+            std = data['close'].std()
+            dmx = data['high'].max()
+            dmn = data['low'].min()
         else:
             dmn = dmx = std = np.nan

         # update label info
-        label.setText('\n'.join(self._contents).format(
-            pchng=pchng,
-            rng=rng,
-            nbars=nbars,
-            std=std,
-            dmx=dmx,
-            dmn=dmn,
+        self._label.setText('\n'.join(self._contents).format(
+            pchng=pchng, rng=rng, nbars=nbars,
+            std=std, dmx=dmx, dmn=dmn,
         ))

         # print(f'x2, y2: {(x2, y2)}')
         # print(f'xmn, ymn: {(xmn, ymx)}')

-        label_anchor = Point(
-            xmx + 2,
-            ymx,
-        )
+        label_anchor = Point(xmx + 2, ymx)

         # XXX: in the drag bottom-right -> top-left case we don't
         # want the label to overlay the box.
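NB: the label stats are computed straight off the OHLC struct-array slice;
standalone with synthetic data (field names match the diff, values are
made up)::

    import numpy as np

    ohlc = np.array(
        [
            (100.0, 101.5, 99.5),
            (100.5, 102.0, 100.0),
        ],
        dtype=[('close', 'f8'), ('high', 'f8'), ('low', 'f8')],
    )
    data = ohlc[0:2]   # the [ixmn:ixmx] view-range slice
    if len(data):
        std = data['close'].std()   # close-price std-dev
        dmx = data['high'].max()    # slice high
        dmn = data['low'].min()     # slice low
    else:
        dmn = dmx = std = np.nan

    assert (dmx, dmn, std) == (102.0, 99.5, 0.25)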
@@ -572,40 +403,13 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
         # #     label_anchor = Point(x2, y2 + self._label.height())
         # label_anchor = Point(xmn, ymn)

-        self._abs_top_right: Point = label_anchor
-        self._label_proxy.setPos(
-            cv.mapFromView(label_anchor)
-        )
-        label.show()
+        self._abs_top_right = label_anchor
+        self._label_proxy.setPos(self.vb.mapFromView(label_anchor))
+        # self._label.show()

-    def hide(self):
-        '''
-        Clear the selection box from its graphics scene but
-        don't delete it permanently.
+    def clear(self):
+        """Clear the selection box from view.

-        '''
-        super().hide()
+        """
         self._label.hide()
-
-    # TODO: ensure noone else using dis.
-    clear = hide
-
-    def delete(self) -> None:
-        '''
-        De-allocate this rect from its rendering graphics scene.
-
-        Like a permanent hide.
-
-        '''
-        scen: QGraphicsScene = self.scene()
-        if scen is None:
-            return
-
-        scen.removeItem(self)
-        if (
-            self._label
-            and
-            self._label_proxy
-        ):
-            scen.removeItem(self._label_proxy)
+        self.hide()
Some files were not shown because too many files have changed in this diff.