Compare commits


No commits in common. "main" and "pyqt6" have entirely different histories.
main ... pyqt6

81 changed files with 2034 additions and 6133 deletions


@ -1,196 +1,162 @@
piker
-----
trading gear for hackers.
|gh_actions|
.. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpikers%2Fpiker%2Fbadge&style=popout-square
    :target: https://actions-badge.atrox.dev/piker/pikers/goto
``piker`` is a broker agnostic, next-gen FOSS toolset and runtime for
real-time computational trading targeted at `hardcore Linux users
<comp_trader>`_ .

we use as much bleeding edge tech as possible including (but not limited to):
- latest python for glue_ - latest python for glue_
- uv_ for packaging and distribution - trio_ & tractor_ for our distributed, multi-core, real-time streaming
- trio_ & tractor_ for our distributed `structured concurrency`_ runtime `structured concurrency`_ runtime B)
- Qt_ for pristine low latency UIs - Qt_ for pristine high performance UIs
- pyqtgraph_ (which we've extended) for real-time charting and graphics - pyqtgraph_ for real-time charting
- ``polars`` ``numpy`` and ``numba`` for redic `fast numerics`_ - ``polars`` ``numpy`` and ``numba`` for `fast numerics`_
- `apache arrow and parquet`_ for time-series storage - `apache arrow and parquet`_ for time series history management
persistence and sharing
- (prototyped) techtonicdb_ for L2 book storage
potential projects we might integrate with soon, .. |travis| image:: https://img.shields.io/travis/pikers/piker/master.svg
:target: https://travis-ci.org/pikers/piker
- (already prototyped in ) techtonicdb_ for L2 book storage
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
.. _uv: https://docs.astral.sh/uv/
.. _trio: https://github.com/python-trio/trio
.. _tractor: https://github.com/goodboy/tractor
.. _structured concurrency: https://trio.discourse.group/
.. _marketstore: https://github.com/alpacahq/marketstore
.. _techtonicdb: https://github.com/0b01/tectonicdb
.. _Qt: https://www.qt.io/
.. _pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
.. _apache arrow and parquet: https://arrow.apache.org/faq/
.. _fast numerics: https://zerowithdot.com/python-numpy-and-pandas-performance/
focus and feats:
****************
fitting with these tenets, we're always open to new - 100% federated: your code, your hardware, your data feeds, your broker fills.
framework/lib/service interop suggestions and ideas! - zero web: low latency, native software that doesn't try to re-invent the OS
- maximal **privacy**: prevent brokers and mms from knowing your
planz; smack their spreads with dark volume.
- zero clutter: modal, context oriented UIs that eschew minimalism, reduce
thought noise and encourage un-emotion.
- first class parallelism: built from the ground up on next-gen structured concurrency
primitives.
- traders first: broker/exchange/asset-class agnostic
- systems grounded: real-time financial signal processing that will
make any queuing or DSP eng juice their shorts.
- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
- data collaboration: every process and protocol is multi-host scalable.
- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.
- **100% federated**: fitting with these tenets, we're always open to new framework suggestions and ideas.
your code, your hardware, your data feeds, your broker fills.
- **zero web**: building the best looking, most reliable, keyboard friendly trading
low latency as a prime objective, native UIs and modern IPC platform is the dream; join the cause.
protocols without trying to re-invent the "OS-as-an-app"..
- **maximal privacy**:
prevent brokers and mms from knowing your planz; smack their
spreads with dark volume from a VPN tunnel.
- **zero clutter**:
modal, context oriented UIs that eschew minimalism, reduce thought
noise and encourage un-emotion.
- **first class parallelism**:
built from the ground up on a next-gen structured concurrency
supervision sys.
- **traders first**:
broker/exchange/venue/asset-class/money-sys agnostic
- **systems grounded**:
real-time financial signal processing (fsp) that will make any
queuing or DSP eng juice their shorts.
- **non-tina UX**:
sleek, powerful keyboard driven interaction with expected use in
tiling wms (or maybe even a DDE).
- **data collab at scale**:
every actor-process and protocol is multi-host aware.
- **fight club ready**:
zero interest in adoption by suits; no corporate friendly license,
ever.
building the hottest looking, fastest, most reliable, keyboard
friendly FOSS trading platform is the dream; join the cause.
a sane install with `uv` sane install with `poetry`
************************ **************************
bc why install with `python` when you can go faster with `rust` :: TODO!
uv sync
# ^ astral's docs,
# https://docs.astral.sh/uv/concepts/projects/sync/
include all GUIs (ex. for charting)::
uv sync --extra uis
AND with all our hacking tools and WIP integrations::
uv sync --dev --all-extras
Ensure you can run the root-daemon:: rigorous install on ``nixos`` using ``poetry2nix``
**************************************************
uv run pikerd [-l info --pdb] TODO!
install on nix(os)
******************
``NixOS`` is our core devs' distro of choice for which we offer
a stringently defined development shell environment that can currently a stringently defined development shell environment that can be loaded with::
be applied in one of 2 ways::
# ONLY if running on X11 nix-shell develop.nix
nix-shell default.nix
Or if you prefer flakes style and a modern DE:: this will setup the required python environment to run piker, make sure to
run::
# ONLY if also running on Wayland pip install -r requirements.txt -e .
nix develop # for default bash
nix develop -c uv run xonsh # for @goodboy's preferred sh B) once after loading the shell
start a chart install wild-west style via `pip`
************* *********************************
run a realtime OHLCV chart stand-alone:: ``piker`` is currently under heavy pre-alpha development and as such
should be cloned from this repo and hacked on directly.
[uv run] piker -l info chart btcusdt.spot.binance xmrusdt.spot.kraken for a development install::
# ^^^ iff you haven't activated the py-env, git clone git@github.com:pikers/piker.git
# - https://docs.astral.sh/uv/concepts/projects/run/ cd piker
# virtualenv env
# in order to create an explicit virt-env see, source ./env/bin/activate
# - https://docs.astral.sh/uv/concepts/projects/layout/#the-project-environment pip install -r requirements.txt -e .
# - https://docs.astral.sh/uv/pip/environments/
#
# use $UV_PROJECT_ENVIRONMENT to select any non-`.venv/`
# as the venv sudir in the repo's root.
# - https://docs.astral.sh/uv/reference/environment/#uv_project_environment
this runs a chart UI (with 1m sampled OHLCV) and shows 2 spot markets from 2 diff cexes
overlayed on the same graph. Use of `piker` without first starting
a daemon (`pikerd` - see below) means there is an implicit spawning of the
multi-actor-runtime (implemented as a `tractor` app).
For additional subsystem feats available through our chart UI see the
various sub-readmes:
- order control using a mouse-n-keyboard UX B)
- cross venue market-pair (what most call "symbol") search, select, overlay Bo
- financial-signal-processing (`piker.fsp`) write-n-reload to sub-chart BO
- src-asset derivatives scan for anal, like the infamous "max pain" XO
spawn a daemon standalone check out our charts
************************* ********************
we call the root actor-process the ``pikerd``. it can be (and is bet you weren't expecting this from the foss::
recommended normally to be) started separately from the ``piker
chart`` program:: piker -l info -b kraken -b binance chart btcusdt.binance --pdb
this runs the main chart (currently with 1m sampled OHLC) in in debug
mode and you can practice paper trading using the following
micro-manual:
``order_mode`` (
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
``search_mode`` (
``ctl-l`` or ``ctl-space`` to open,
``ctl-c`` or ``ctl-space`` to close
) :
- begin typing to have symbol search automatically lookup
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation
you can also configure your position allocation limits from the
sidepane.
run in distributed mode
***********************
start the service manager and data feed daemon in the background and
connect to it::
pikerd -l info --pdb pikerd -l info --pdb
the daemon does nothing until a ``piker``-client (like ``piker
chart``) connects and requests some particular sub-system. for
a connecting chart ``pikerd`` will spawn and manage at least,
- a data-feed daemon: ``datad`` which does all the work of comms with connect your chart::
the backend provider (in this case the ``binance`` cex).
- a paper-trading engine instance, ``paperboi.binance``, (if no live
account has been configured) which allows for auto/manual order
control against the live quote stream.
*using* an actor-service (aka micro-daemon) manager which dynamically piker -l info -b kraken -b binance chart xmrusdt.binance --pdb
supervises various sub-subsystems-as-services throughout the ``piker``
runtime-stack.
now you can (implicitly) connect your chart::
piker chart btcusdt.spot.binance enjoy persistent real-time data feeds tied to daemon lifetime. the next
time you spawn a chart it will load much faster since the data feed has
since ``pikerd`` was started separately you can now enjoy a persistent been cached and is now always running live in the background until you
real-time data stream tied to the daemon-tree's lifetime. i.e. the next kill ``pikerd``.
time you spawn a chart it will obviously not only load much faster
(since the underlying ``datad.binance`` is left running with its
in-memory IPC data structures) but also the data-feed and any order
mgmt states should be persistent until you finally cancel ``pikerd``.
if anyone asks you what this project is about
*********************************************
you don't talk about it; just use it.

how do i get involved?
@ -200,15 +166,6 @@ enter the matrix.
how come there ain't that many docs
***********************************
i mean we want/need them but building the core right has been higher suck it up, learn the code; no one is trying to sell you on anything.
prio than marketing (and likely will stay that way Bp). also, we need lotsa help so if you want to start somewhere and can't
necessarily write serious code, this might be the place for you!
soo, suck it up bc,
- no one is trying to sell you on anything
- learning the code base is prolly way more valuable
- the UI/UXs are intended to be "intuitive" for any hacker..
we obviously need tonz help so if you want to start somewhere and
can't necessarily write "advanced" concurrent python/rust code, this
helping document literally anything might be the place for you!


@ -1,5 +1,6 @@
################
# ---- CEXY ---- # ---- CEXY ----
################
[binance] [binance]
accounts.paper = 'paper' accounts.paper = 'paper'
@ -12,41 +13,28 @@ accounts.spot = 'spot'
spot.use_testnet = false spot.use_testnet = false
spot.api_key = '' spot.api_key = ''
spot.api_secret = '' spot.api_secret = ''
# ------ binance ------
[deribit] [deribit]
# std assets
key_id = '' key_id = ''
key_secret = '' key_secret = ''
# options
accounts.option = 'option'
option.use_testnet = false
option.key_id = ''
option.key_secret = ''
# aux logging from `cryptofeed`
option.log.filename = 'cryptofeed.log'
option.log.level = 'DEBUG'
option.log.disabled = true
# ------ deribit ------
[kraken] [kraken]
key_descr = '' key_descr = ''
api_key = '' api_key = ''
secret = '' secret = ''
# ------ kraken ------
[kucoin] [kucoin]
key_id = '' key_id = ''
key_secret = '' key_secret = ''
key_passphrase = '' key_passphrase = ''
# ------ kucoin ------
################
# -- BROKERZ --- # -- BROKERZ ---
################
[questrade] [questrade]
refresh_token = '' refresh_token = ''
access_token = '' access_token = ''
@ -54,55 +42,44 @@ api_server = 'https://api06.iq.questrade.com/'
expires_in = 1800 expires_in = 1800
token_type = 'Bearer' token_type = 'Bearer'
expires_at = 1616095326.355846 expires_at = 1616095326.355846
# ------ questrade ------
[ib] [ib]
# define the (set of) host-port socketaddrs that
# brokerd.ib will scan to connect to an API endpoint
# (ib-gw or ib-tws listening instances)
hosts = [ hosts = [
'127.0.0.1', '127.0.0.1',
] ]
# XXX: the order in which ports will be scanned
# (by the `brokerd` daemon-actor)
# is determined # by the line order here.
# TODO: when we eventually spawn gateways in our
# container, we can just dynamically allocate these
# using IBC.
ports = [ ports = [
4002, # gw 4002, # gw
7497, # tws 7497, # tws
] ]
# When API endpoints are being scanned during startup, the order # XXX: for a paper account the flex web query service
# of user-defined-account "names" (as defined below) here # is not supported so you have to manually download
# determines which py-client connection is given priority to be # an XML report and put it in a location that can be
# used for data-feed-requests according to whichever client # accessed by the ``brokerd.ib`` backend code for parsing.
# connected to an API endpoint which reported the equivalent flex_token = ''
# account number for that name. flex_trades_query_id = '' # live account
# when clients are being scanned this determines
# which clients are preferred to be used for data
# feeds based on the order of account names, if
# detected as active on an API client.
prefer_data_account = [ prefer_data_account = [
'paper', 'paper',
'margin', 'margin',
'ira', 'ira',
] ]
# For long-term trades txn (transaction) history
# processing (i.e your txn ledger with IB) you can
# (automatically for live accounts) query the FLEX
# report system for past history.
#
# (For paper accounts the web query service
# is not supported so you have to manually download
# an XML report and put it in a location that can be
# accessed by our `brokerd.ib` backend code for parsing).
#
flex_token = ''
flex_trades_query_id = '' # live account
# define "aliases" (names) for each account number
# such that the names can be reffed and logged throughout
# `piker.accounting` subsys and more easily
# referred to by the user.
#
# These keys will be the set exposed through the order-mode
# account-selection UI so that numbers are never shown.
[ib.accounts] [ib.accounts]
paper = 'DU0000000' # <- literal account # # the order in which accounts will be selectable
margin = 'U0000000' # in the order mode UI (if found via clients during
ira = 'U0000000' # API-app scanning) when a new symbol is loaded.
# ------ ib ------ paper = 'XX0000000'
margin = 'X0000000'
ira = 'X0000000'


@ -1,9 +1,7 @@
[network] [network]
pikerd = [ tsdb.backend = 'marketstore'
'/ipv4/127.0.0.1/tcp/6116', # std localhost daemon-actor tree tsdb.host = 'localhost'
# '/uds/6116', # TODO std uds socket file tsdb.grpc_port = 5995
]
[ui] [ui]
# set custom font + size which will scale entire UI # set custom font + size which will scale entire UI


@ -1,135 +0,0 @@
with (import <nixpkgs> {});
let
glibStorePath = lib.getLib glib;
zlibStorePath = lib.getLib zlib;
zstdStorePath = lib.getLib zstd;
dbusStorePath = lib.getLib dbus;
libGLStorePath = lib.getLib libGL;
freetypeStorePath = lib.getLib freetype;
qt6baseStorePath = lib.getLib qt6.qtbase;
fontconfigStorePath = lib.getLib fontconfig;
libxkbcommonStorePath = lib.getLib libxkbcommon;
xcbutilcursorStorePath = lib.getLib xcb-util-cursor;
pypkgs = python313Packages;
qtpyStorePath = lib.getLib pypkgs.qtpy;
pyqt6StorePath = lib.getLib pypkgs.pyqt6;
pyqt6SipStorePath = lib.getLib pypkgs.pyqt6-sip;
rapidfuzzStorePath = lib.getLib pypkgs.rapidfuzz;
qdarkstyleStorePath = lib.getLib pypkgs.qdarkstyle;
xorgLibX11StorePath = lib.getLib xorg.libX11;
xorgLibxcbStorePath = lib.getLib xorg.libxcb;
xorgxcbutilwmStorePath = lib.getLib xorg.xcbutilwm;
xorgxcbutilimageStorePath = lib.getLib xorg.xcbutilimage;
xorgxcbutilerrorsStorePath = lib.getLib xorg.xcbutilerrors;
xorgxcbutilkeysymsStorePath = lib.getLib xorg.xcbutilkeysyms;
xorgxcbutilrenderutilStorePath = lib.getLib xorg.xcbutilrenderutil;
in
stdenv.mkDerivation {
name = "piker-qt6-uv";
buildInputs = [
# System requirements.
glib
zlib
dbus
zstd
libGL
freetype
qt6.qtbase
libgcc.lib
fontconfig
libxkbcommon
# Xorg requirements
xcb-util-cursor
xorg.libxcb
xorg.libX11
xorg.xcbutilwm
xorg.xcbutilimage
xorg.xcbutilerrors
xorg.xcbutilkeysyms
xorg.xcbutilrenderutil
# Python requirements.
python313
uv
pypkgs.qdarkstyle
pypkgs.rapidfuzz
pypkgs.pyqt6
pypkgs.qtpy
];
src = null;
shellHook = ''
set -e
# Set the Qt plugin path
# export QT_DEBUG_PLUGINS=1
QTBASE_PATH="${qt6baseStorePath}/lib"
QT_PLUGIN_PATH="$QTBASE_PATH/qt-6/plugins"
QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"
LIB_GCC_PATH="${libgcc.lib}/lib"
GLIB_PATH="${glibStorePath}/lib"
ZSTD_PATH="${zstdStorePath}/lib"
ZLIB_PATH="${zlibStorePath}/lib"
DBUS_PATH="${dbusStorePath}/lib"
LIBGL_PATH="${libGLStorePath}/lib"
FREETYPE_PATH="${freetypeStorePath}/lib"
FONTCONFIG_PATH="${fontconfigStorePath}/lib"
LIB_XKB_COMMON_PATH="${libxkbcommonStorePath}/lib"
XCB_UTIL_CURSOR_PATH="${xcbutilcursorStorePath}/lib"
XORG_LIB_X11_PATH="${xorgLibX11StorePath}/lib"
XORG_LIB_XCB_PATH="${xorgLibxcbStorePath}/lib"
XORG_XCB_UTIL_IMAGE_PATH="${xorgxcbutilimageStorePath}/lib"
XORG_XCB_UTIL_WM_PATH="${xorgxcbutilwmStorePath}/lib"
XORG_XCB_UTIL_RENDER_UTIL_PATH="${xorgxcbutilrenderutilStorePath}/lib"
XORG_XCB_UTIL_KEYSYMS_PATH="${xorgxcbutilkeysymsStorePath}/lib"
XORG_XCB_UTIL_ERRORS_PATH="${xorgxcbutilerrorsStorePath}/lib"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_GCC_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$DBUS_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$GLIB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZLIB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZSTD_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBGL_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FONTCONFIG_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FREETYPE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_XKB_COMMON_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XCB_UTIL_CURSOR_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_X11_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_XCB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_IMAGE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_WM_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_RENDER_UTIL_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_KEYSYMS_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_ERRORS_PATH"
export LD_LIBRARY_PATH
RPDFUZZ_PATH="${rapidfuzzStorePath}/lib/python3.13/site-packages"
QDRKSTYLE_PATH="${qdarkstyleStorePath}/lib/python3.13/site-packages"
QTPY_PATH="${qtpyStorePath}/lib/python3.13/site-packages"
PYQT6_PATH="${pyqt6StorePath}/lib/python3.13/site-packages"
PYQT6_SIP_PATH="${pyqt6SipStorePath}/lib/python3.13/site-packages"
PATCH="$PATCH:$RPDFUZZ_PATH"
PATCH="$PATCH:$QDRKSTYLE_PATH"
PATCH="$PATCH:$QTPY_PATH"
PATCH="$PATCH:$PYQT6_PATH"
PATCH="$PATCH:$PYQT6_SIP_PATH"
export PATCH
# install all dev and extras
uv sync --dev --all-extras
'';
}


@ -1,138 +1,30 @@
running ``ib`` gateway in ``docker`` running ``ib`` gateway in ``docker``
------------------------------------ ------------------------------------
We have a config based on a well maintained community We have a config based on the (now defunct)
image from `@gnzsnz`: image from "waytrade":
https://github.com/gnzsnz/ib-gateway-docker https://github.com/waytrade/ib-gateway-docker
To startup this image with our custom settings
To startup this image simply run the command:: simply run the command::
docker compose up docker compose up
(For further usage^ see the official `docker-compose`_ docs) And you should have the following socket-available services:
- ``x11vnc1@127.0.0.1:3003``
- ``ib-gw@127.0.0.1:4002``
And you should have the following socket-available services by You can attach to the container via a VNC client
default: without password auth.
- ``x11vnc1 @ 127.0.0.1:5900`` SECURITY STUFF!?!?!
- ``ib-gw @ 127.0.0.1:4002`` -------------------
Though "``ib``" claims they host filter connections outside
You can now attach to the container via a VNC client with password-auth; localhost (aka ``127.0.0.1``) it's probably better if you filter
here is an example using ``vncclient`` on ``linux``:: the socket at the OS level using a stateless firewall rule::
vncviewer localhost:5900
now enter the pw (password) you set via an (see second code blob)
`.env file`_ or pw-file according to the `credentials section`_.
If you want to change away from their default config see the example
`docker-compose.yml`-config issue and config-section of the readme,
- https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#configuration
- https://github.com/gnzsnz/ib-gateway-docker/discussions/103
.. _.env file: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#how-to-use-it
.. _docker-compose: https://docs.docker.com/compose/
.. _credentials section: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#credentials
Connecting to the API from `piker`
---------------------------------
In order to expose the container's API endpoint to the
`brokerd/datad/ib` actor, we need to add a section to the user's
`brokers.toml` config (note the below is similar to the repo-shipped
template file),
.. code:: toml
[ib]
# define the (set of) host-port socketaddrs that
# brokerd.ib will scan to connect to an API endpoint
# (ib-gw or ib-tws listening instances)
hosts = [
'127.0.0.1',
]
ports = [
4002, # gw
7497, # tws
]
# When API endpoints are being scanned during startup, the order
# of user-defined-account "names" (as defined below) here
# determines which py-client connection is given priority to be
# used for data-feed-requests according to whichever client
# connected to an API endpoint which reported the equivalent
# account number for that name.
prefer_data_account = [
'paper',
'margin',
'ira',
]
# define "aliases" (names) for each account number
# such that the names can be reffed and logged throughout
# `piker.accounting` subsys and more easily
# referred to by the user.
#
# These keys will be the set exposed through the order-mode
# account-selection UI so that numbers are never shown.
[ib.accounts]
paper = 'XX0000000'
margin = 'X0000000'
ira = 'X0000000'
the broker daemon can also connect to the container's VNC server for
added functionalities including,
- viewing the API endpoint program's GUI for manual interventions,
- workarounds for historical data throttling using hotkey hacks,
Add a further section to `brokers.toml` which maps each API-ep's
port to a table of VNC server connection info like,
.. code:: toml
[ib.vnc_addrs]
4002 = {host = 'localhost', port = 5900, pw = 'doggy'}
The `pw = 'doggy'` here ^ should be the same value as the particular
container instance's `.env` file setting (when it was run),
.. code:: ini
VNC_SERVER_PASSWORD='doggy'
IF you also want to run ``TWS``
-------------------------------
You can also run it containerized,
https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#using-tws
SECURITY stuff (advanced, only if you're paranoid)
--------------------------------------------------
First and foremost if doing a "distributed" container setup where you
run the ``ib-gw`` docker container and your connecting API client
(likely ``ib_async`` from python) on **different hosts** be sure to
read the `security considerations`_ section!
And for a further (somewhat paranoid) perspective from
a long-time-ago serious devops eng..
Though "``ib``" claims they filter remote host connections outside
``localhost`` (aka ``127.0.0.1`` on ipv4) it's prolly justified if
you'd like to filter the socket at the *OS level* using a stateless
firewall rule::
ip rule add not unicast iif lo to 0.0.0.0/0 dport 4002 ip rule add not unicast iif lo to 0.0.0.0/0 dport 4002
We will soon have this baked into our own custom image but for
We will soon have this either baked into our own custom derivative now you'll have to do it urself dawgy.
image (or patched into the current upstream one after further testin)
but for now you'll have to do it urself, diggity dawg.
.. _security considerations: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#security-considerations


@ -1,15 +1,10 @@
# a community maintained IB API container! # rework from the original @
# # https://github.com/waytrade/ib-gateway-docker/blob/master/docker-compose.yml
# https://github.com/gnzsnz/ib-gateway-docker version: "3.5"
#
# For piker we (currently) include some minor deviations
# for some config files in the `volumes` section.
#
# See full configuration settings @
# - https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#configuration
# - https://github.com/gnzsnz/ib-gateway-docker/discussions/103
services: services:
ib_gw_paper: ib_gw_paper:
# apparently java is a mega cukc: # apparently java is a mega cukc:
@ -55,22 +50,16 @@ services:
target: /root/scripts/run_x11_vnc.sh target: /root/scripts/run_x11_vnc.sh
read_only: true read_only: true
# NOTE: an alt method to fill these out is to # NOTE:to fill these out, define an `.env` file in the same dir as
# define an `.env` file in the same dir as # this compose file which looks something like:
# this compose file. # TWS_USERID='myuser'
# TWS_PASSWORD='guest'
environment: environment:
TWS_USERID: ${TWS_USERID} TWS_USERID: ${TWS_USERID}
# TWS_USERID: 'myuser'
TWS_PASSWORD: ${TWS_PASSWORD} TWS_PASSWORD: ${TWS_PASSWORD}
# TWS_PASSWORD: 'guest' TRADING_MODE: 'paper'
TRADING_MODE: ${TRADING_MODE} VNC_SERVER_PASSWORD: 'doggy'
# TRADING_MODE: 'paper' VNC_SERVER_PORT: '3003'
VNC_SERVER_PASSWORD: ${VNC_SERVER_PASSWORD}
# VNC_SERVER_PASSWORD: 'doggy'
# TODO, see if we can get this supported like it
# was on the old `waytrade` image?
# VNC_SERVER_PORT: '3003'
# ports: # ports:
# - target: 4002 # - target: 4002
@ -87,9 +76,6 @@ services:
# - "127.0.0.1:4002:4002" # - "127.0.0.1:4002:4002"
# - "127.0.0.1:5900:5900" # - "127.0.0.1:5900:5900"
# TODO, a masked but working example of dual paper + live
# ib-gw instances running in a single app run!
#
# ib_gw_live: # ib_gw_live:
# image: waytrade/ib-gateway:1012.2i # image: waytrade/ib-gateway:1012.2i
# restart: no # restart: no


@ -121,7 +121,6 @@ async def bot_main():
# tick_throttle=10, # tick_throttle=10,
) as feed, ) as feed,
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn, trio.open_nursery() as tn,
): ):
assert accounts assert accounts


@ -1,24 +1,135 @@
{ {
"nodes": { "nodes": {
"nixpkgs": { "flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": { "locked": {
"lastModified": 1765779637, "lastModified": 1689068808,
"narHash": "sha256-KJ2wa/BLSrTqDjbfyNx70ov/HdgNBCBBSQP3BIzKnv4=", "narHash": "sha256-6ixXo3wt24N/melDWjq70UuHQLxGV8jZvooRanIHXw0=",
"owner": "nixos", "owner": "numtide",
"repo": "nixpkgs", "repo": "flake-utils",
"rev": "1306659b587dc277866c7b69eb97e5f07864d8c4", "rev": "919d646de7be200f3bf08cb76ae1f09402b6f9b4",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "nixos", "owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1689068808,
"narHash": "sha256-6ixXo3wt24N/melDWjq70UuHQLxGV8jZvooRanIHXw0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "919d646de7be200f3bf08cb76ae1f09402b6f9b4",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nix-github-actions": {
"inputs": {
"nixpkgs": [
"poetry2nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1688870561,
"narHash": "sha256-4UYkifnPEw1nAzqqPOTL2MvWtm3sNGw1UTYTalkTcGY=",
"owner": "nix-community",
"repo": "nix-github-actions",
"rev": "165b1650b753316aa7f1787f3005a8d2da0f5301",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nix-github-actions",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1692174805,
"narHash": "sha256-xmNPFDi/AUMIxwgOH/IVom55Dks34u1g7sFKKebxUm0=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "caac0eb6bdcad0b32cb2522e03e4002c8975c62e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable", "ref": "nixos-unstable",
"repo": "nixpkgs", "repo": "nixpkgs",
"type": "github" "type": "github"
} }
}, },
"poetry2nix": {
"inputs": {
"flake-utils": "flake-utils_2",
"nix-github-actions": "nix-github-actions",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1692048894,
"narHash": "sha256-cDw03rso2V4CDc3Mll0cHN+ztzysAvdI8pJ7ybbz714=",
"ref": "refs/heads/pyqt6",
"rev": "b059ad4c3051f45d6c912e17747aae37a9ec1544",
"revCount": 2276,
"type": "git",
"url": "file:///home/lord_fomo/repos/poetry2nix"
},
"original": {
"type": "git",
"url": "file:///home/lord_fomo/repos/poetry2nix"
}
},
"root": { "root": {
"inputs": { "inputs": {
"nixpkgs": "nixpkgs" "flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"poetry2nix": "poetry2nix"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
} }
} }
}, },

flake.nix

@ -1,103 +1,180 @@
# An "impure" template thx to `pyproject.nix`, # NOTE: to convert to a poetry2nix env like this here are the
# https://pyproject-nix.github.io/pyproject.nix/templates.html#impure # steps:
# https://github.com/pyproject-nix/pyproject.nix/blob/master/templates/impure/flake.nix # - install poetry in your system nix config
{ # - convert the repo to use poetry using `poetry init`:
description = "An impure `piker` overlay using `uv` with Nix(OS)"; # https://python-poetry.org/docs/basic-usage/#initialising-a-pre-existing-project
# - then manually ensuring all deps are converted over:
# - add this file to the repo and commit it
# -
inputs = { # GROKin tips:
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable"; # - CLI eps are (ostensibly) added via an `entry_points.txt`:
# - https://packaging.python.org/en/latest/specifications/entry-points/#file-format
# - https://github.com/nix-community/poetry2nix/blob/master/editable.nix#L49
{
description = "piker: trading gear for hackers (pkged with poetry2nix)";
inputs.flake-utils.url = "github:numtide/flake-utils";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
# see https://github.com/nix-community/poetry2nix/tree/master#api
inputs.poetry2nix = {
# url = "github:nix-community/poetry2nix";
# url = "github:K900/poetry2nix/qt5-explicit-deps";
url = "/home/lord_fomo/repos/poetry2nix";
inputs.nixpkgs.follows = "nixpkgs";
}; };
outputs = outputs = {
{ nixpkgs, ... }: self,
let nixpkgs,
inherit (nixpkgs) lib; flake-utils,
forAllSystems = lib.genAttrs lib.systems.flakeExposed; poetry2nix,
in }:
{ # TODO: build cross-OS and use the `${system}` var thingy..
devShells = forAllSystems ( flake-utils.lib.eachDefaultSystem (system:
system: let
let # use PWD as sources
pkgs = nixpkgs.legacyPackages.${system}; projectDir = ./.;
pyproject = ./pyproject.toml;
poetrylock = ./poetry.lock;
# do store-path extractions # TODO: port to 3.11 and support both versions?
qt6baseStorePath = lib.getLib pkgs.qt6.qtbase; python = "python3.10";
# ?TODO? can remove below since manual linking not needed?
# qt6QtWaylandStorePath = lib.getLib pkgs.qt6.qtwayland;
# XXX NOTE XXX, for now we overlay specific pkgs via # for more functions and examples.
# a major-version-pinned-`cpython` # inherit
cpython = "python313"; # (poetry2nix.legacyPackages.${system})
pypkgs = pkgs."${cpython}Packages"; # mkPoetryApplication;
in # pkgs = nixpkgs.legacyPackages.${system};
{
default = pkgs.mkShell {
packages = with pkgs; [ pkgs = nixpkgs.legacyPackages.x86_64-linux;
# XXX, ensure sh completions active! lib = pkgs.lib;
bashInteractive p2npkgs = poetry2nix.legacyPackages.x86_64-linux;
bash-completion
# dev utils # define all pkg overrides per dep, see edgecases.md:
ruff # https://github.com/nix-community/poetry2nix/blob/master/docs/edgecases.md
pypkgs.ruff # TODO: add these into the json file:
# https://github.com/nix-community/poetry2nix/blob/master/overrides/build-systems.json
pypkgs-build-requirements = {
asyncvnc = [ "setuptools" ];
eventkit = [ "setuptools" ];
ib-insync = [ "setuptools" "flake8" ];
msgspec = [ "setuptools"];
pdbp = [ "setuptools" ];
pyqt6-sip = [ "setuptools" ];
tabcompleter = [ "setuptools" ];
tractor = [ "setuptools" ];
tricycle = [ "setuptools" ];
trio-typing = [ "setuptools" ];
trio-util = [ "setuptools" ];
xonsh = [ "setuptools" ];
};
qt6.qtwayland # auto-generate override entries
qt6.qtbase p2n-overrides = p2npkgs.defaultPoetryOverrides.extend (self: super:
builtins.mapAttrs (package: build-requirements:
(builtins.getAttr package super).overridePythonAttrs (old: {
buildInputs = (
old.buildInputs or [ ]
) ++ (
builtins.map (
pkg: if builtins.isString pkg then builtins.getAttr pkg super else pkg
) build-requirements
);
})
) pypkgs-build-requirements
);
uv # override some ahead-of-time compiled extensions
python313 # ?TODO^ how to set from `cpython` above? # to be built with their wheels.
pypkgs.pyqt6 ahot_overrides = p2n-overrides.extend(
pypkgs.pyqt6-sip final: prev: {
pypkgs.qtpy
pypkgs.qdarkstyle
pypkgs.rapidfuzz
];
shellHook = '' # llvmlite = prev.llvmlite.override {
# unmask to debug **this** dev-shell-hook # preferWheel = false;
# set -e # };
# set qt-base/plugin path(s) # TODO: get this workin with p2n and nixpkgs..
QTBASE_PATH="${qt6baseStorePath}/lib" # pyqt6 = prev.pyqt6.override {
QT_PLUGIN_PATH="${qt6baseStorePath}/lib/qt-6/plugins" # preferWheel = true;
QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms" # };
# link in Qt cc lib paths from <nixpkgs> # NOTE: this DOESN'T work atm but after a fix
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH" # to poetry2nix, it will and actually this line
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH" # won't be needed - thanks @k900:
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH" # https://github.com/nix-community/poetry2nix/pull/1257
pyqt5 = prev.pyqt5.override {
# withWebkit = false;
preferWheel = true;
};
# link-in c++ stdlib for various AOT-ext-pkgs (numpy, etc.) # see PR from @k900:
LD_LIBRARY_PATH="${pkgs.stdenv.cc.cc.lib}/lib:$LD_LIBRARY_PATH" # https://github.com/nix-community/poetry2nix/pull/1257
# pyqt5-qt5 = prev.pyqt5-qt5.override {
# withWebkit = false;
# preferWheel = true;
# };
export LD_LIBRARY_PATH # TODO: patch in an override for polars to build
# from src! See the details likely needed from
# RUNTIME-SETTINGS # the cryptography entry:
# # https://github.com/nix-community/poetry2nix/blob/master/overrides/default.nix#L426-L435
# ------ Qt ------ polars = prev.polars.override {
# XXX, unmask to debug qt .so linking/loading deats preferWheel = true;
# export QT_DEBUG_PLUGINS=1 };
# }
# ALSO, for *modern linux* DEs,
# - maybe set wayland-mode (TODO, parametrtize this!)
# * a chosen wayland-mode shell-integration
export QT_QPA_PLATFORM="wayland"
export QT_WAYLAND_SHELL_INTEGRATION="xdg-shell"
# ------ uv ------
# - always use the ./py313/ venv-subdir
export UV_PROJECT_ENVIRONMENT="py313"
# sync project-env with all extras
uv sync --dev --all-extras --no-group lint
# ------ TIPS ------
# NOTE, to launch the py-venv installed `xonsh` (like @goodboy)
# run the `nix develop` cmd with,
# >> nix develop -c uv run xonsh
'';
};
}
); );
};
# WHY!? -> output-attrs that `nix develop` scans for:
# https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-develop.html#flake-output-attributes
in
rec {
packages = {
# piker = poetry2nix.legacyPackages.x86_64-linux.mkPoetryEditablePackage {
# editablePackageSources = { piker = ./piker; };
piker = p2npkgs.mkPoetryApplication {
projectDir = projectDir;
# SEE ABOVE for auto-genned input set, override
# buncha deps with extras.. like `setuptools` mostly.
# TODO: maybe propose a patch to p2n to show that you
# can even do this in the edgecases docs?
overrides = ahot_overrides;
# XXX: won't work on llvmlite..
# preferWheels = true;
};
};
# devShells.default = pkgs.mkShell {
# projectDir = projectDir;
# python = "python3.10";
# overrides = ahot_overrides;
# inputsFrom = [ self.packages.x86_64-linux.piker ];
# packages = packages;
# # packages = [ poetry2nix.packages.${system}.poetry ];
# };
# TODO: grok the difference here..
# - avoid re-cloning git repos on every develop entry..
# - ideally allow hacking on the src code of some deps
# (tractor, pyqtgraph, tomlkit, etc.) WITHOUT having to
# re-install them every time a change is made.
# - boot a usable xonsh inside the poetry virtualenv when
# defined via a custom entry point?
devShells.default = p2npkgs.mkPoetryEnv {
# env = p2npkgs.mkPoetryEnv {
projectDir = projectDir;
python = pkgs.python310;
overrides = ahot_overrides;
editablePackageSources = packages;
# piker = "./";
# tractor = "../tractor/";
# }; # wut?
};
}
); # end of .outputs scope
} }


@ -33,6 +33,7 @@ from ._pos import (
Account, Account,
load_account, load_account,
load_account_from_ledger, load_account_from_ledger,
open_pps,
open_account, open_account,
Position, Position,
) )
@ -41,6 +42,7 @@ from ._mktinfo import (
dec_digits, dec_digits,
digits_to_dec, digits_to_dec,
MktPair, MktPair,
Symbol,
unpack_fqme, unpack_fqme,
_derivs as DerivTypes, _derivs as DerivTypes,
) )
@ -58,6 +60,7 @@ __all__ = [
'Asset', 'Asset',
'MktPair', 'MktPair',
'Position', 'Position',
'Symbol',
'Transaction', 'Transaction',
'TransactionLedger', 'TransactionLedger',
'dec_digits', 'dec_digits',
@ -67,6 +70,7 @@ __all__ = [
'load_account_from_ledger', 'load_account_from_ledger',
'mk_allocator', 'mk_allocator',
'open_account', 'open_account',
'open_pps',
'open_trade_ledger', 'open_trade_ledger',
'unpack_fqme', 'unpack_fqme',
'DerivTypes', 'DerivTypes',


@ -40,7 +40,7 @@ import tomli_w # for fast ledger writing
from piker.types import Struct from piker.types import Struct
from piker import config from piker import config
from piker.log import get_logger from ..log import get_logger
from .calc import ( from .calc import (
iter_by_dt, iter_by_dt,
) )
@ -239,9 +239,7 @@ class TransactionLedger(UserDict):
symcache: SymbologyCache = self._symcache symcache: SymbologyCache = self._symcache
towrite: dict[str, Any] = {} towrite: dict[str, Any] = {}
for tid, txdict in self.tx_sort( for tid, txdict in self.tx_sort(self.data.copy()):
self.data.copy()
):
# write blank-str expiry for non-expiring assets # write blank-str expiry for non-expiring assets
if ( if (
'expiry' in txdict 'expiry' in txdict
@ -379,7 +377,7 @@ def open_trade_ledger(
account, account,
dirpath=_fp, dirpath=_fp,
) )
cpy: dict = ledger_dict.copy() cpy = ledger_dict.copy()
# XXX NOTE: if not provided presume we are being called from # XXX NOTE: if not provided presume we are being called from
# sync code and need to maybe run `trio` to generate.. # sync code and need to maybe run `trio` to generate..
@ -408,13 +406,7 @@ def open_trade_ledger(
account=account, account=account,
mod=mod, mod=mod,
symcache=symcache, symcache=symcache,
tx_sort=getattr(mod, 'tx_sort', tx_sort),
# NOTE: allow backends to provide custom ledger sorting
tx_sort=getattr(
mod,
'tx_sort',
tx_sort,
),
) )
try: try:
yield ledger yield ledger


@ -305,8 +305,8 @@ class MktPair(Struct, frozen=True):
# config right? # config right?
# src_type: AssetTypeName # src_type: AssetTypeName
# for derivs, info describing contract, egs. strike price, call # for derivs, info describing contract, egs.
# or put, swap type, exercise model, etc. # strike price, call or put, swap type, exercise model, etc.
contract_info: list[str] | None = None contract_info: list[str] | None = None
# TODO: rename to sectype since all of these can # TODO: rename to sectype since all of these can
@ -390,8 +390,8 @@ class MktPair(Struct, frozen=True):
cls, cls,
fqme: str, fqme: str,
price_tick: float|str, price_tick: float | str,
size_tick: float|str, size_tick: float | str,
bs_mktid: str, bs_mktid: str,
broker: str | None = None, broker: str | None = None,
@ -677,3 +677,90 @@ def unpack_fqme(
# '.'.join([mkt_ep, venue]), # '.'.join([mkt_ep, venue]),
suffix, suffix,
) )
class Symbol(Struct):
'''
I guess this is some kinda container thing for dealing with
all the different meta-data formats from brokers?
'''
key: str
broker: str = ''
venue: str = ''
# precision descriptors for price and vlm
tick_size: Decimal = Decimal('0.01')
lot_tick_size: Decimal = Decimal('0.0')
suffix: str = ''
broker_info: dict[str, dict[str, Any]] = {}
@classmethod
def from_fqme(
cls,
fqsn: str,
info: dict[str, Any],
) -> Symbol:
broker, mktep, venue, suffix = unpack_fqme(fqsn)
tick_size = info.get('price_tick_size', 0.01)
lot_size = info.get('lot_tick_size', 0.0)
return Symbol(
broker=broker,
key=mktep,
tick_size=tick_size,
lot_tick_size=lot_size,
venue=venue,
suffix=suffix,
broker_info={broker: info},
)
@property
def type_key(self) -> str:
return list(self.broker_info.values())[0]['asset_type']
@property
def tick_size_digits(self) -> int:
return float_digits(self.tick_size)
@property
def lot_size_digits(self) -> int:
return float_digits(self.lot_tick_size)
@property
def price_tick(self) -> Decimal:
return Decimal(str(self.tick_size))
@property
def size_tick(self) -> Decimal:
return Decimal(str(self.lot_tick_size))
@property
def broker(self) -> str:
return list(self.broker_info.keys())[0]
@property
def fqme(self) -> str:
return maybe_cons_tokens([
self.key, # final "pair name" (eg. qqq[/usd], btcusdt)
self.venue,
self.suffix, # includes expiry and other con info
self.broker,
])
def quantize(
self,
size: float,
) -> Decimal:
digits = float_digits(self.lot_tick_size)
return Decimal(size).quantize(
Decimal(f'1.{"0".ljust(digits, "0")}'),
rounding=ROUND_HALF_EVEN
)
# NOTE: when cast to `str` return fqme
def __str__(self) -> str:
return self.fqme
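for orientation, here's a small usage sketch of the ``Symbol`` container added above; the fqme and ``info`` values are made up, and the ``piker.accounting`` import path follows the ``__init__`` re-export shown earlier in this diff:

.. code:: python

    from decimal import Decimal

    from piker.accounting import Symbol  # re-exported per the __init__ hunk above

    sym = Symbol.from_fqme(
        'xmrusdt.spot.kraken',      # hypothetical fqme
        info={
            'price_tick_size': 0.01,
            'lot_tick_size': 0.0001,
            'asset_type': 'crypto',
        },
    )
    assert str(sym) == sym.fqme     # casting to str yields the fqme
    assert sym.price_tick == Decimal('0.01')

    # sizes get quantized to the lot tick precision (half-even rounding)
    print(sym.quantize(0.123456789))  # -> Decimal('0.1235')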


@ -30,8 +30,7 @@ from types import ModuleType
from typing import ( from typing import (
Any, Any,
Iterator, Iterator,
Generator, Generator
TYPE_CHECKING,
) )
import pendulum import pendulum
@ -60,10 +59,8 @@ from ..clearing._messages import (
BrokerdPosition, BrokerdPosition,
) )
from piker.types import Struct from piker.types import Struct
from piker.log import get_logger from piker.data._symcache import SymbologyCache
from ..log import get_logger
if TYPE_CHECKING:
from piker.data._symcache import SymbologyCache
log = get_logger(__name__) log = get_logger(__name__)
@ -356,20 +353,17 @@ class Position(Struct):
) -> bool: ) -> bool:
''' '''
Update clearing table by calculating the rolling ppu and Update clearing table by calculating the rolling ppu and
(accumulative) size in both the clears entry and local attrs (accumulative) size in both the clears entry and local
state. attrs state.
Inserts are always done in datetime sorted order. Inserts are always done in datetime sorted order.
''' '''
# added: bool = False
tid: str = t.tid tid: str = t.tid
if tid in self._events: if tid in self._events:
log.debug( log.warning(f'{t} is already added?!')
f'Txn is already added?\n' # return added
f'\n'
f'{t}\n'
)
return False
# TODO: apparently this IS possible with a dict but not # TODO: apparently this IS possible with a dict but not
# common and probably not that beneficial unless we're also # common and probably not that beneficial unless we're also
@ -450,12 +444,6 @@ class Position(Struct):
# def suggest_split(self) -> float: # def suggest_split(self) -> float:
# ... # ...
# ?TODO, for sending rendered state over the wire?
# def summary(self) -> PositionSummary:
# do minimal conversion to a subset of fields
# currently defined in `.clearing._messages.BrokerdPosition`
class Account(Struct): class Account(Struct):
''' '''
@ -499,23 +487,12 @@ class Account(Struct):
def update_from_ledger( def update_from_ledger(
self, self,
ledger: TransactionLedger|dict[str, Transaction], ledger: TransactionLedger | dict[str, Transaction],
cost_scalar: float = 2, cost_scalar: float = 2,
symcache: SymbologyCache|None = None, symcache: SymbologyCache | None = None,
_mktmap_table: dict[str, MktPair] | None = None, _mktmap_table: dict[str, MktPair] | None = None,
only_require: list[str]|True = True,
# ^list of fqmes that are "required" to be processed from
# this ledger pass; we often don't care about others and
# definitely shouldn't always error in such cases.
# (eg. broker backend loaded that doesn't yet support the
# symcache but also, inside the paper engine we don't ad-hoc
# request `get_mkt_info()` for every symbol in the ledger,
# only the one for which we're simulating against).
# TODO, not sure if there's a better soln for this, ideally
# all backends get symcache support afap i guess..
) -> dict[str, Position]: ) -> dict[str, Position]:
''' '''
Update the internal `.pps[str, Position]` table from input Update the internal `.pps[str, Position]` table from input
@ -558,32 +535,11 @@ class Account(Struct):
if _mktmap_table is None: if _mktmap_table is None:
raise raise
required: bool = (
only_require is True
or (
only_require is not True
and
fqme in only_require
)
)
# XXX: caller is allowed to provide a fallback # XXX: caller is allowed to provide a fallback
# mktmap table for the case where a new position is # mktmap table for the case where a new position is
# being added and the preloaded symcache didn't # being added and the preloaded symcache didn't
# have this entry prior (eg. with frickin IB..) # have this entry prior (eg. with frickin IB..)
if ( mkt = _mktmap_table[fqme]
not (mkt := _mktmap_table.get(fqme))
and
required
):
raise
elif not required:
continue
else:
# should be an entry retreived somewhere
assert mkt
if not (pos := pps.get(bs_mktid)): if not (pos := pps.get(bs_mktid)):
@ -700,7 +656,7 @@ class Account(Struct):
def write_config(self) -> None: def write_config(self) -> None:
''' '''
Write the current account state to the user's account TOML file, normally Write the current account state to the user's account TOML file, normally
something like `pps.toml`. something like ``pps.toml``.
''' '''
# TODO: show diff output? # TODO: show diff output?
@ -740,7 +696,7 @@ class Account(Struct):
else: else:
# TODO: we reallly need a diff set of # TODO: we reallly need a diff set of
# loglevels/colors per subsys. # loglevels/colors per subsys.
log.debug( log.warning(
f'Recent position for {fqme} was closed!' f'Recent position for {fqme} was closed!'
) )
@ -754,7 +710,7 @@ class Account(Struct):
# XXX WTF: if we use a tomlkit.Integer here we get this # XXX WTF: if we use a tomlkit.Integer here we get this
# super weird --1 thing going on for cumsize!?1! # super weird --1 thing going on for cumsize!?1!
# NOTE: the fix was to always float() the size value loaded # NOTE: the fix was to always float() the size value loaded
# in open_account() below! # in open_pps() below!
config.write( config.write(
config=self.conf, config=self.conf,
path=self.conf_path, path=self.conf_path,
@ -938,6 +894,7 @@ def open_account(
clears_table['dt'] = dt clears_table['dt'] = dt
trans.append(Transaction( trans.append(Transaction(
fqme=bs_mktid, fqme=bs_mktid,
# sym=mkt,
bs_mktid=bs_mktid, bs_mktid=bs_mktid,
tid=tid, tid=tid,
# XXX: not sure why sometimes these are loaded as # XXX: not sure why sometimes these are loaded as
@ -960,22 +917,11 @@ def open_account(
): ):
expiry: pendulum.DateTime = pendulum.parse(expiry) expiry: pendulum.DateTime = pendulum.parse(expiry)
# !XXX, should never be duplicates over pp = pp_objs[bs_mktid] = Position(
# a backend-(broker)-system's unique market-IDs! mkt,
if pos := pp_objs.get(bs_mktid): split_ratio=split_ratio,
if mkt != pos.mkt: bs_mktid=bs_mktid,
log.warning( )
f'Duplicated position but diff `MktPair.fqme` ??\n'
f'bs_mktid: {bs_mktid!r}\n'
f'pos.mkt: {pos.mkt}\n'
f'mkt: {mkt}\n'
)
else:
pos = pp_objs[bs_mktid] = Position(
mkt,
split_ratio=split_ratio,
bs_mktid=bs_mktid,
)
# XXX: super critical, we need to be sure to include # XXX: super critical, we need to be sure to include
# all pps.toml clears to avoid reusing clears that were # all pps.toml clears to avoid reusing clears that were
@ -983,13 +929,8 @@ def open_account(
# state, since today's records may have already been # state, since today's records may have already been
# processed! # processed!
for t in trans: for t in trans:
added: bool = pos.add_clear(t) pp.add_clear(t)
if not added:
log.warning(
f'Txn already recorded in pp ??\n'
f'\n'
f'{t}\n'
)
try: try:
yield acnt yield acnt
finally: finally:
@ -997,6 +938,20 @@ def open_account(
acnt.write_config() acnt.write_config()
# TODO: drop the old name and THIS!
@cm
def open_pps(
*args,
**kwargs,
) -> Generator[Account, None, None]:
log.warning(
'`open_pps()` is now deprecated!\n'
'Please use `with open_account() as cnt:`'
)
with open_account(*args, **kwargs) as acnt:
yield acnt
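in client code the migration this warning asks for is mechanical; a sketch (the broker/account arguments are illustrative only):

.. code:: python

    from piker.accounting import open_account, open_pps

    # deprecated spelling, still works but logs the warning above
    with open_pps('ib', 'paper') as acnt:
        print(acnt.pps)

    # preferred spelling going forward
    with open_account('ib', 'paper') as acnt:
        print(acnt.pps)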
def load_account_from_ledger( def load_account_from_ledger(
brokername: str, brokername: str,


@ -22,9 +22,7 @@ you know when you're losing money (if possible) XD
from __future__ import annotations from __future__ import annotations
from collections.abc import ValuesView from collections.abc import ValuesView
from contextlib import contextmanager as cm from contextlib import contextmanager as cm
from functools import partial
from math import copysign from math import copysign
from pprint import pformat
from typing import ( from typing import (
Any, Any,
Callable, Callable,
@ -32,7 +30,6 @@ from typing import (
TYPE_CHECKING, TYPE_CHECKING,
) )
from tractor.devx import maybe_open_crash_handler
import polars as pl import polars as pl
from pendulum import ( from pendulum import (
DateTime, DateTime,
@ -40,16 +37,12 @@ from pendulum import (
parse, parse,
) )
from ..log import get_logger
if TYPE_CHECKING: if TYPE_CHECKING:
from ._ledger import ( from ._ledger import (
Transaction, Transaction,
TransactionLedger, TransactionLedger,
) )
log = get_logger(__name__)
def ppu( def ppu(
clears: Iterator[Transaction], clears: Iterator[Transaction],
@ -245,9 +238,6 @@ def iter_by_dt(
def dyn_parse_to_dt( def dyn_parse_to_dt(
tx: tuple[str, dict[str, Any]] | Transaction, tx: tuple[str, dict[str, Any]] | Transaction,
debug: bool = False,
_invalid: list|None = None,
) -> DateTime: ) -> DateTime:
# handle `.items()` inputs # handle `.items()` inputs
@ -260,90 +250,33 @@ def iter_by_dt(
# get best parser for this record.. # get best parser for this record..
for k in parsers: for k in parsers:
if ( if (
(v := getattr(tx, k, None)) isdict and k in tx
or or getattr(tx, k, None)
(
isdict
and
(v := tx.get(k))
)
): ):
v = tx[k] if isdict else tx.dt
assert v is not None, f'No valid value for `{k}`!?'
# only call parser on the value if not None from # only call parser on the value if not None from
# the `parsers` table above (when NOT using # the `parsers` table above (when NOT using
# `.get()`), otherwise pass through the value and # `.get()`), otherwise pass through the value and
# sort on it directly # sort on it directly
if ( if (
not isinstance(v, DateTime) not isinstance(v, DateTime)
and and (parser := parsers.get(k))
(parser := parsers.get(k))
): ):
ret = parser(v) return parser(v)
else: else:
ret = v return v
return ret
else:
log.debug(
f'Parser-field not found in txn\n'
f'\n'
f'parser-field: {k!r}\n'
f'txn: {tx!r}\n'
f'\n'
f'Trying next..\n'
)
continue
# XXX: we should never really get here bc it means some kinda
# bad txn-record (field) data..
#
# -> set the `debug_mode = True` if you want to trace such
# cases from REPL ;)
else: else:
# XXX: we should really never get here.. # XXX: should never get here..
# only if a ledger record has no expected sort(able) breakpoint()
# field will we likely hit this.. like with ze IB.
# if no sortable field just deliver epoch?
log.warning(
'No (time) sortable field for TXN:\n'
f'{tx!r}\n'
)
report: str = (
f'No supported time-field found in txn !?\n'
f'\n'
f'supported-time-fields: {parsers!r}\n'
f'\n'
f'txn: {tx!r}\n'
)
if debug:
with maybe_open_crash_handler(
pdb=debug,
raise_on_exit=False,
):
raise ValueError(report)
else:
log.error(report)
if _invalid is not None: entry: tuple[str, dict] | Transaction
_invalid.append(tx)
return from_timestamp(0.)
entry: tuple[str, dict]|Transaction
invalid: list = []
for entry in sorted( for entry in sorted(
records, records,
key=key or partial( key=key or dyn_parse_to_dt,
dyn_parse_to_dt,
_invalid=invalid,
),
): ):
if entry in invalid:
log.warning(
f'Ignoring txn w invalid timestamp ??\n'
f'{pformat(entry)}\n'
)
continue
# NOTE the type sig above; either pairs or txns B) # NOTE the type sig above; either pairs or txns B)
yield entry yield entry
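a minimal usage sketch for this sort-by-datetime helper; the exact keyword set of ``iter_by_dt()`` (eg. a custom ``parsers`` table) is assumed from the surrounding context rather than verified:

.. code:: python

    from piker.accounting.calc import iter_by_dt

    ledger_entries: dict[str, dict] = {
        'tid-b': {'dt': '2023-01-02T00:00:00+00:00', 'size': 1.0},
        'tid-a': {'dt': '2023-01-01T00:00:00+00:00', 'size': -1.0},
    }

    # passing `.items()` matches the (tid, txdict) tuple handling shown
    # above; entries come back in datetime-sorted order with the 'dt'
    # field parsed on the fly when it isn't already a `DateTime`.
    for tid, txdict in iter_by_dt(ledger_entries.items()):
        print(tid, txdict['dt'])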
@ -406,7 +339,6 @@ def open_ledger_dfs(
acctname: str, acctname: str,
ledger: TransactionLedger | None = None, ledger: TransactionLedger | None = None,
debug_mode: bool = False,
**kwargs, **kwargs,
@ -421,10 +353,8 @@ def open_ledger_dfs(
can update the ledger on exit. can update the ledger on exit.
''' '''
with maybe_open_crash_handler( from piker.toolz import open_crash_handler
pdb=debug_mode, with open_crash_handler():
# raise_on_exit=False,
):
if not ledger: if not ledger:
import time import time
from ._ledger import open_trade_ledger from ._ledger import open_trade_ledger
@ -516,7 +446,7 @@ def ledger_to_dfs(
df = dfs[key] = ldf.with_columns([ df = dfs[key] = ldf.with_columns([
pl.cum_sum('size').alias('cumsize'), pl.cumsum('size').alias('cumsize'),
# amount of source asset "sent" (via buy txns in # amount of source asset "sent" (via buy txns in
# the market) to acquire the dst asset, PER txn. # the market) to acquire the dst asset, PER txn.
@ -531,7 +461,7 @@ def ledger_to_dfs(
]).with_columns([ ]).with_columns([
# rolling balance in src asset units # rolling balance in src asset units
(pl.col('dst_bot').cum_sum() * -1).alias('src_balance'), (pl.col('dst_bot').cumsum() * -1).alias('src_balance'),
# "position operation type" in terms of increasing the # "position operation type" in terms of increasing the
# amount in the dst asset (entering) or decreasing the # amount in the dst asset (entering) or decreasing the
@ -673,7 +603,7 @@ def ledger_to_dfs(
# cost that was included in the least-recently # cost that was included in the least-recently
# entered txn that is still part of the current CSi # entered txn that is still part of the current CSi
# set. # set.
# => we look up the cost-per-unit cum_sum and apply # => we look up the cost-per-unit cumsum and apply
# if over the current txn size (by multiplication) # if over the current txn size (by multiplication)
# and then reverse that previously applied cost on # and then reverse that previously applied cost on
# the txn_cost for this record. # the txn_cost for this record.

View File

@ -300,8 +300,7 @@ def disect(
assert not df.is_empty() assert not df.is_empty()
# muck around in pdbp REPL # muck around in pdbp REPL
# tractor.devx.mk_pdb().set_trace() breakpoint()
# breakpoint()
# TODO: we REALLY need a better console REPL for this # TODO: we REALLY need a better console REPL for this
# kinda thing.. # kinda thing..

View File

@ -50,7 +50,7 @@ __brokers__: list[str] = [
'binance', 'binance',
'ib', 'ib',
'kraken', 'kraken',
'kucoin', 'kucoin'
# broken but used to work # broken but used to work
# 'questrade', # 'questrade',
@ -71,7 +71,7 @@ def get_brokermod(brokername: str) -> ModuleType:
Return the imported broker module by name. Return the imported broker module by name.
''' '''
module: ModuleType = import_module('.' + brokername, 'piker.brokers') module = import_module('.' + brokername, 'piker.brokers')
# we only allow monkeying because it's for internal keying # we only allow monkeying because it's for internal keying
module.name = module.__name__.split('.')[-1] module.name = module.__name__.split('.')[-1]
return module return module
@ -98,14 +98,13 @@ async def open_cached_client(
If one has not been setup do it and cache it. If one has not been setup do it and cache it.
''' '''
brokermod: ModuleType = get_brokermod(brokername) brokermod = get_brokermod(brokername)
# TODO: make abstract or `typing.Protocol`
# client: Client
async with maybe_open_context( async with maybe_open_context(
acm_func=brokermod.get_client, acm_func=brokermod.get_client,
kwargs=kwargs, kwargs=kwargs,
) as (cache_hit, client): ) as (cache_hit, client):
if cache_hit: if cache_hit:
log.runtime(f'Reusing existing {client}') log.runtime(f'Reusing existing {client}')
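For orientation, the cache-hit path above means any task in the same actor can cheaply re-enter the backend client; a minimal usage sketch (assumes it runs inside an already-bootstrapped piker service actor):

.. code:: python

    from piker.brokers import open_cached_client

    async def dump_binance_pairs() -> None:
        # a second entry from the same actor reuses the cached client
        async with open_cached_client('binance') as client:
            pairs = await client.exch_info()
            print(f'{len(pairs)} pairs cached')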

View File

@ -96,10 +96,7 @@ async def _setup_persistent_brokerd(
# - `open_symbol_search()` # - `open_symbol_search()`
# NOTE: see ep invocation details inside `.data.feed`. # NOTE: see ep invocation details inside `.data.feed`.
try: try:
async with ( async with trio.open_nursery() as service_nursery:
tractor.trionics.collapse_eg(),
trio.open_nursery() as service_nursery
):
bus: _FeedsBus = feed.get_feed_bus( bus: _FeedsBus = feed.get_feed_bus(
brokername, brokername,
service_nursery, service_nursery,

View File

@ -18,11 +18,10 @@
Handy cross-broker utils. Handy cross-broker utils.
""" """
from __future__ import annotations
from functools import partial from functools import partial
import json import json
import httpx import asks
import logging import logging
from ..log import ( from ..log import (
@ -61,11 +60,11 @@ class NoData(BrokerError):
def __init__( def __init__(
self, self,
*args, *args,
info: dict|None = None, info: dict,
) -> None: ) -> None:
super().__init__(*args) super().__init__(*args)
self.info: dict|None = info self.info: dict = info
# when raised, machinery can check if the backend # when raised, machinery can check if the backend
# set a "frame size" for doing datetime calcs. # set a "frame size" for doing datetime calcs.
@ -91,18 +90,16 @@ class DataThrottle(BrokerError):
def resproc( def resproc(
resp: httpx.Response, resp: asks.response_objects.Response,
log: logging.Logger, log: logging.Logger,
return_json: bool = True, return_json: bool = True,
log_resp: bool = False, log_resp: bool = False,
) -> httpx.Response: ) -> asks.response_objects.Response:
''' """Process response and return its json content.
Process response and return its json content.
Raise the appropriate error on non-200 OK responses. Raise the appropriate error on non-200 OK responses.
"""
'''
if not resp.status_code == 200: if not resp.status_code == 200:
raise BrokerError(resp.body) raise BrokerError(resp.body)
try: try:
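Since this hunk swaps `asks` for `httpx`, here is a hedged sketch of what the full helper roughly looks like against the `httpx` response API (`httpx` exposes `.text`/`.json()` rather than `.body`, so treat this as a guess at the intended shape, not the in-tree code):

.. code:: python

    import logging
    import httpx

    class BrokerError(Exception):
        'Generic broker-side error.'

    def resproc(
        resp: httpx.Response,
        log: logging.Logger,
        return_json: bool = True,
        log_resp: bool = False,
    ) -> httpx.Response | dict:
        # raise on anything other than a 200 OK
        if resp.status_code != 200:
            raise BrokerError(resp.text)
        try:
            json = resp.json()
        except ValueError:
            log.exception(f'Failed to decode json from {resp!r}')
            raise BrokerError(resp.text)
        if log_resp:
            log.debug(f'Received json contents:\n{json}')
        return json if return_json else resp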

View File

@ -1,8 +1,8 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) # Copyright (C)
# Guillermo Rodriguez (aka ze jefe) # Guillermo Rodriguez (aka ze jefe)
# Tyler Goodlet # Tyler Goodlet
# (in stewardship for pikers) # (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -25,13 +25,14 @@ from __future__ import annotations
from collections import ChainMap from collections import ChainMap
from contextlib import ( from contextlib import (
asynccontextmanager as acm, asynccontextmanager as acm,
AsyncExitStack,
) )
from datetime import datetime from datetime import datetime
from pprint import pformat from pprint import pformat
from typing import ( from typing import (
Any, Any,
Callable, Callable,
Hashable,
Sequence,
Type, Type,
) )
import hmac import hmac
@ -42,7 +43,8 @@ import trio
from pendulum import ( from pendulum import (
now, now,
) )
import httpx import asks
from rapidfuzz import process as fuzzy
import numpy as np import numpy as np
from piker import config from piker import config
@ -52,7 +54,6 @@ from piker.clearing._messages import (
from piker.accounting import ( from piker.accounting import (
Asset, Asset,
digits_to_dec, digits_to_dec,
MktPair,
) )
from piker.types import Struct from piker.types import Struct
from piker.data import ( from piker.data import (
@ -68,6 +69,7 @@ from .venues import (
PAIRTYPES, PAIRTYPES,
Pair, Pair,
MarketType, MarketType,
_spot_url, _spot_url,
_futes_url, _futes_url,
_testnet_futes_url, _testnet_futes_url,
@ -77,18 +79,19 @@ from .venues import (
log = get_logger('piker.brokers.binance') log = get_logger('piker.brokers.binance')
def get_config() -> dict[str, Any]: def get_config() -> dict:
conf: dict conf: dict
path: Path path: Path
conf, path = config.load( conf, path = config.load(
conf_name='brokers', conf_name='brokers',
touch_if_dne=True, touch_if_dne=True,
) )
section: dict = conf.get('binance')
section = conf.get('binance')
if not section: if not section:
log.warning( log.warning(f'No config section found for binance in {path}')
f'No config section found for binance in {path}'
)
return {} return {}
return section return section
@ -144,7 +147,7 @@ def binance_timestamp(
class Client: class Client:
''' '''
Async ReST API client using `trio` + `httpx` B) Async ReST API client using ``trio`` + ``asks`` B)
Supports all of the spot, margin and futures endpoints depending Supports all of the spot, margin and futures endpoints depending
on method. on method.
@ -153,17 +156,10 @@ class Client:
def __init__( def __init__(
self, self,
venue_sessions: dict[
str, # venue key
tuple[httpx.AsyncClient, str] # session, eps path
],
conf: dict[str, Any],
# TODO: change this to `Client.[mkt_]venue: MarketType`? # TODO: change this to `Client.[mkt_]venue: MarketType`?
mkt_mode: MarketType = 'spot', mkt_mode: MarketType = 'spot',
) -> None: ) -> None:
self.conf = conf
# build out pair info tables for each market type # build out pair info tables for each market type
# and wrap in a chain-map view for search / query. # and wrap in a chain-map view for search / query.
self._spot_pairs: dict[str, Pair] = {} # spot info table self._spot_pairs: dict[str, Pair] = {} # spot info table
@ -190,13 +186,44 @@ class Client:
# market symbols for use by search. See `.exch_info()`. # market symbols for use by search. See `.exch_info()`.
self._pairs: ChainMap[str, Pair] = ChainMap() self._pairs: ChainMap[str, Pair] = ChainMap()
# spot EPs sesh
self._sesh = asks.Session(connections=4)
self._sesh.base_location: str = _spot_url
# spot testnet
self._test_sesh: asks.Session = asks.Session(connections=4)
self._test_sesh.base_location: str = _testnet_spot_url
# margin and extended spot endpoints session.
self._sapi_sesh = asks.Session(connections=4)
self._sapi_sesh.base_location: str = _spot_url
# futes EPs sesh
self._fapi_sesh = asks.Session(connections=4)
self._fapi_sesh.base_location: str = _futes_url
# futes testnet
self._test_fapi_sesh: asks.Session = asks.Session(connections=4)
self._test_fapi_sesh.base_location: str = _testnet_futes_url
# global client "venue selection" mode. # global client "venue selection" mode.
# set this when you want to switch venues and not have to # set this when you want to switch venues and not have to
# specify the venue for the next request. # specify the venue for the next request.
self.mkt_mode: MarketType = mkt_mode self.mkt_mode: MarketType = mkt_mode
# per-mkt-venue API client table # per 8
self.venue_sesh = venue_sessions self.venue_sesh: dict[
str, # venue key
tuple[asks.Session, str] # session, eps path
] = {
'spot': (self._sesh, '/api/v3/'),
'spot_testnet': (self._test_sesh, '/fapi/v1/'),
'margin': (self._sapi_sesh, '/sapi/v1/'),
'usdtm_futes': (self._fapi_sesh, '/fapi/v1/'),
'usdtm_futes_testnet': (self._test_fapi_sesh, '/fapi/v1/'),
# 'futes_coin': self._dapi, # TODO
}
# lookup for going from `.mkt_mode: str` to the config # lookup for going from `.mkt_mode: str` to the config
# subsection `key: str` # subsection `key: str`
@ -211,6 +238,40 @@ class Client:
'futes': ['usdtm_futes'], 'futes': ['usdtm_futes'],
} }
# for creating API keys see,
# https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
self.conf: dict = get_config()
for key, subconf in self.conf.items():
if api_key := subconf.get('api_key', ''):
venue_keys: list[str] = self.confkey2venuekeys[key]
venue_key: str
sesh: asks.Session
for venue_key in venue_keys:
sesh, _ = self.venue_sesh[venue_key]
api_key_header: dict = {
# taken from official:
# https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
"Content-Type": "application/json;charset=utf-8",
# TODO: prolly should just always query and copy
# in the real latest ver?
"User-Agent": "binance-connector/6.1.6smbz6",
"X-MBX-APIKEY": api_key,
}
sesh.headers.update(api_key_header)
# if `.use_tesnet = true` in the config then
# also add headers for the testnet session which
# will be used for all order control
if subconf.get('use_testnet', False):
testnet_sesh, _ = self.venue_sesh[
venue_key + '_testnet'
]
testnet_sesh.headers.update(api_key_header)
def _mk_sig( def _mk_sig(
self, self,
data: dict, data: dict,
@ -229,6 +290,7 @@ class Client:
'to define the creds for auth-ed endpoints!?' 'to define the creds for auth-ed endpoints!?'
) )
# XXX: Info on security and authentification # XXX: Info on security and authentification
# https://binance-docs.github.io/apidocs/#endpoint-security-type # https://binance-docs.github.io/apidocs/#endpoint-security-type
if not (api_secret := subconf.get('api_secret')): if not (api_secret := subconf.get('api_secret')):
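The `_mk_sig()` helper referenced here follows binance's standard signed-endpoint scheme: HMAC-SHA256 over the urlencoded query params using the venue's `api_secret`, with the hex digest appended as a `signature` param. A minimal sketch of that recipe (the in-tree method additionally resolves the right `brokers.toml` subsection per venue):

.. code:: python

    import hashlib
    import hmac
    from urllib.parse import urlencode

    def mk_sig(params: dict, api_secret: str) -> str:
        # hex digest of HMAC-SHA256 over the urlencoded query string
        query: str = urlencode(params)
        return hmac.new(
            api_secret.encode('utf-8'),
            query.encode('utf-8'),
            hashlib.sha256,
        ).hexdigest()

    # usage: params['signature'] = mk_sig(params, api_secret)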
@ -257,7 +319,7 @@ class Client:
params: dict, params: dict,
method: str = 'get', method: str = 'get',
venue: str|None = None, # if None use `.mkt_mode` state venue: str | None = None, # if None use `.mkt_mode` state
signed: bool = False, signed: bool = False,
allow_testnet: bool = False, allow_testnet: bool = False,
@ -268,9 +330,8 @@ class Client:
- /fapi/v3/ USD-M FUTURES, or - /fapi/v3/ USD-M FUTURES, or
- /api/v3/ SPOT/MARGIN - /api/v3/ SPOT/MARGIN
account/market endpoint request depending on either passed in account/market endpoint request depending on either passed in `venue: str`
`venue: str` or the current setting `.mkt_mode: str` setting, or the current setting `.mkt_mode: str` setting, default `'spot'`.
default `'spot'`.
Docs per venue API: Docs per venue API:
@ -299,6 +360,9 @@ class Client:
venue=venue_key, venue=venue_key,
) )
sesh: asks.Session
path: str
# Check if we're configured to route order requests to the # Check if we're configured to route order requests to the
# venue equivalent's testnet. # venue equivalent's testnet.
use_testnet: bool = False use_testnet: bool = False
@ -323,12 +387,11 @@ class Client:
# ctl machinery B) # ctl machinery B)
venue_key += '_testnet' venue_key += '_testnet'
client: httpx.AsyncClient sesh, path = self.venue_sesh[venue_key]
path: str
client, path = self.venue_sesh[venue_key] meth: Callable = getattr(sesh, method)
meth: Callable = getattr(client, method)
resp = await meth( resp = await meth(
url=path + endpoint, path=path + endpoint,
params=params, params=params,
timeout=float('inf'), timeout=float('inf'),
) )
@ -370,20 +433,7 @@ class Client:
item['filters'] = filters item['filters'] = filters
pair_type: Type = PAIRTYPES[venue] pair_type: Type = PAIRTYPES[venue]
try: pair: Pair = pair_type(**item)
pair: Pair = pair_type(**item)
except Exception as e:
e.add_note(
f'\n'
f'New or removed field we need to codify!\n'
f'pair-type: {pair_type!r}\n'
f'\n'
f"Don't panic, prolly stupid binance changed their symbology schema again..\n"
f'Check out their API docs here:\n'
f'\n'
f'https://binance-docs.github.io/apidocs/spot/en/#exchange-information\n'
)
raise
pair_table[pair.symbol.upper()] = pair pair_table[pair.symbol.upper()] = pair
# update an additional top-level-cross-venue-table # update an additional top-level-cross-venue-table
@ -478,9 +528,7 @@ class Client:
''' '''
pair_table: dict[str, Pair] = self._venue2pairs[ pair_table: dict[str, Pair] = self._venue2pairs[
venue venue or self.mkt_mode
or
self.mkt_mode
] ]
if ( if (
expiry expiry
@ -499,9 +547,9 @@ class Client:
venues: list[str] = [venue] venues: list[str] = [venue]
# batch per-venue download of all exchange infos # batch per-venue download of all exchange infos
async with trio.open_nursery() as tn: async with trio.open_nursery() as rn:
for ven in venues: for ven in venues:
tn.start_soon( rn.start_soon(
self._cache_pairs, self._cache_pairs,
ven, ven,
) )
@ -554,11 +602,11 @@ class Client:
) -> dict[str, Any]: ) -> dict[str, Any]:
fq_pairs: dict[str, Pair] = await self.exch_info() fq_pairs: dict = await self.exch_info()
# TODO: cache this list like we were in # TODO: cache this list like we were in
# `open_symbol_search()`? # `open_symbol_search()`?
# keys: list[str] = list(fq_pairs) keys: list[str] = list(fq_pairs)
return match_from_pairs( return match_from_pairs(
pairs=fq_pairs, pairs=fq_pairs,
@ -566,20 +614,9 @@ class Client:
score_cutoff=50, score_cutoff=50,
) )
def pair2venuekey(
self,
pair: Pair,
) -> str:
return {
'USDTM': 'usdtm_futes',
'SPOT': 'spot',
# 'COINM': 'coin_futes',
# ^-TODO-^ bc someone might want it..?
}[pair.venue]
async def bars( async def bars(
self, self,
mkt: MktPair, symbol: str,
start_dt: datetime | None = None, start_dt: datetime | None = None,
end_dt: datetime | None = None, end_dt: datetime | None = None,
@ -609,20 +646,16 @@ class Client:
start_time = binance_timestamp(start_dt) start_time = binance_timestamp(start_dt)
end_time = binance_timestamp(end_dt) end_time = binance_timestamp(end_dt)
bs_pair: Pair = self._pairs[mkt.bs_fqme.upper()]
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data # https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
bars = await self._api( bars = await self._api(
'klines', 'klines',
params={ params={
# NOTE: always query using their native symbology! 'symbol': symbol.upper(),
'symbol': mkt.bs_mktid.upper(),
'interval': '1m', 'interval': '1m',
'startTime': start_time, 'startTime': start_time,
'endTime': end_time, 'endTime': end_time,
'limit': limit 'limit': limit
}, },
venue=self.pair2venuekey(bs_pair),
allow_testnet=False, allow_testnet=False,
) )
new_bars: list[tuple] = [] new_bars: list[tuple] = []
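Each raw kline row returned by the endpoint above is a 12-element list starting with the open-time in ms followed by the OHLCV fields as strings; a small, hedged sketch of flattening one row into epoch-seconds floats:

.. code:: python

    def parse_kline(row: list) -> tuple[float, ...]:
        # binance kline rows: [open_time_ms, open, high, low, close,
        # volume, close_time_ms, quote_vol, n_trades, ...]
        open_time_ms, o, h, l, c, v, *_ = row
        return (
            open_time_ms / 1000,  # ms -> s epoch
            float(o),
            float(h),
            float(l),
            float(c),
            float(v),
        )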
@ -939,148 +972,17 @@ class Client:
await self.close_listen_key(key) await self.close_listen_key(key)
_venue_urls: dict[str, str] = {
'spot': (
_spot_url,
'/api/v3/',
),
'spot_testnet': (
_testnet_spot_url,
'/fapi/v1/'
),
# margin and extended spot endpoints session.
# TODO: did this ever get implemented fully?
# 'margin': (
# _spot_url,
# '/sapi/v1/'
# ),
'usdtm_futes': (
_futes_url,
'/fapi/v1/',
),
'usdtm_futes_testnet': (
_testnet_futes_url,
'/fapi/v1/',
),
# TODO: for anyone who actually needs it ;P
# 'coin_futes': ()
}
def init_api_keys(
client: Client,
conf: dict[str, Any],
) -> None:
'''
Set up per-venue API keys for each http client according to the user's
`brokers.conf`.
For ex, to use spot-testnet and live usdt futures APIs:
```toml
[binance]
# spot test net
spot.use_testnet = true
spot.api_key = '<spot_api_key_from_binance_account>'
spot.api_secret = '<spot_api_key_password>'
# futes live
futes.use_testnet = false
accounts.usdtm = 'futes'
futes.api_key = '<futes_api_key_from_binance>'
futes.api_secret = '<futes_api_key_password>'
# if uncommented will use the built-in paper engine and not
# connect to `binance` API servers for order ctl.
# accounts.paper = 'paper'
```
'''
for key, subconf in conf.items():
if api_key := subconf.get('api_key', ''):
venue_keys: list[str] = client.confkey2venuekeys[key]
venue_key: str
client: httpx.AsyncClient
for venue_key in venue_keys:
client, _ = client.venue_sesh[venue_key]
api_key_header: dict = {
# taken from official:
# https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
"Content-Type": "application/json;charset=utf-8",
# TODO: prolly should just always query and copy
# in the real latest ver?
"User-Agent": "binance-connector/6.1.6smbz6",
"X-MBX-APIKEY": api_key,
}
client.headers.update(api_key_header)
# if `.use_tesnet = true` in the config then
# also add headers for the testnet session which
# will be used for all order control
if subconf.get('use_testnet', False):
testnet_sesh, _ = client.venue_sesh[
venue_key + '_testnet'
]
testnet_sesh.headers.update(api_key_header)
@acm @acm
async def get_client( async def get_client() -> Client:
mkt_mode: MarketType = 'spot',
) -> Client:
'''
Construct a single `piker` client which composes multiple underlying venue
specific API clients both for live and test networks.
''' client = Client()
venue_sessions: dict[ await client.exch_info()
str, # venue key log.info(
tuple[httpx.AsyncClient, str] # session, eps path f'{client} in {client.mkt_mode} mode: caching exchange infos..\n'
] = {} 'Cached multi-market pairs:\n'
async with AsyncExitStack() as client_stack: f'spot: {len(client._spot_pairs)}\n'
for name, (base_url, path) in _venue_urls.items(): f'usdtm_futes: {len(client._ufutes_pairs)}\n'
api: httpx.AsyncClient = await client_stack.enter_async_context( f'Total: {len(client._pairs)}\n'
httpx.AsyncClient( )
base_url=base_url,
# headers={},
# TODO: is there a way to numerate this? yield client
# https://www.python-httpx.org/advanced/clients/#why-use-a-client
# connections=4
)
)
venue_sessions[name] = (
api,
path,
)
conf: dict[str, Any] = get_config()
# for creating API keys see,
# https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
client = Client(
venue_sessions=venue_sessions,
conf=conf,
mkt_mode=mkt_mode,
)
init_api_keys(
client=client,
conf=conf,
)
fq_pairs: dict[str, Pair] = await client.exch_info()
assert fq_pairs
log.info(
f'Loaded multi-venue `Client` in mkt_mode={client.mkt_mode!r}\n\n'
f'Symbology Summary:\n'
f'------ - ------\n'
f'spot: {len(client._spot_pairs)}\n'
f'usdtm_futes: {len(client._ufutes_pairs)}\n'
'------ - ------\n'
f'total: {len(client._pairs)}\n'
)
yield client
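A minimal entry sketch for the reworked multi-venue client in the left column (the import path and the bare `trio.run()` wrapper are assumptions for illustration):

.. code:: python

    import trio
    from piker.brokers.binance import get_client  # path assumed

    async def show_pair_counts() -> None:
        async with get_client(mkt_mode='usdtm_futes') as client:
            pairs = await client.exch_info()
            print(f'{len(pairs)} total pairs cached')

    trio.run(show_pair_counts)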

View File

@ -264,20 +264,15 @@ async def open_trade_dialog(
# do a open_symcache() call.. though maybe we can hide # do a open_symcache() call.. though maybe we can hide
# this in a new async version of open_account()? # this in a new async version of open_account()?
async with open_cached_client('binance') as client: async with open_cached_client('binance') as client:
subconf: dict|None = client.conf.get(venue_name) subconf: dict = client.conf[venue_name]
use_testnet = subconf.get('use_testnet', False)
# XXX: if no futes.api_key or spot.api_key has been set we # XXX: if no futes.api_key or spot.api_key has been set we
# always fall back to the paper engine! # always fall back to the paper engine!
if ( if not subconf.get('api_key'):
not subconf
or
not subconf.get('api_key')
):
await ctx.started('paper') await ctx.started('paper')
return return
use_testnet: bool = subconf.get('use_testnet', False)
async with ( async with (
open_cached_client('binance') as client, open_cached_client('binance') as client,
): ):
@ -440,7 +435,6 @@ async def open_trade_dialog(
# - ledger: TransactionLedger # - ledger: TransactionLedger
async with ( async with (
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn, trio.open_nursery() as tn,
ctx.open_stream() as ems_stream, ctx.open_stream() as ems_stream,
): ):

View File

@ -42,12 +42,12 @@ from trio_typing import TaskStatus
from pendulum import ( from pendulum import (
from_timestamp, from_timestamp,
) )
from rapidfuzz import process as fuzzy
import numpy as np import numpy as np
import tractor import tractor
from piker.brokers import ( from piker.brokers import (
open_cached_client, open_cached_client,
NoData,
) )
from piker._cacheables import ( from piker._cacheables import (
async_lifo_cache, async_lifo_cache,
@ -94,15 +94,13 @@ class L1(Struct):
# validation type # validation type
# https://developers.binance.com/docs/derivatives/usds-margined-futures/websocket-market-streams/Aggregate-Trade-Streams#response-example
class AggTrade(Struct, frozen=True): class AggTrade(Struct, frozen=True):
e: str # Event type e: str # Event type
E: int # Event time E: int # Event time
s: str # Symbol s: str # Symbol
a: int # Aggregate trade ID a: int # Aggregate trade ID
p: float # Price p: float # Price
q: float # Quantity with all the market trades q: float # Quantity
nq: float # Normal quantity without the trades involving RPI orders
f: int # First trade ID f: int # First trade ID
l: int # noqa Last trade ID l: int # noqa Last trade ID
T: int # Trade time T: int # Trade time
@ -112,7 +110,6 @@ class AggTrade(Struct, frozen=True):
async def stream_messages( async def stream_messages(
ws: NoBsWs, ws: NoBsWs,
) -> AsyncGenerator[NoBsWs, dict]: ) -> AsyncGenerator[NoBsWs, dict]:
# TODO: match syntax here! # TODO: match syntax here!
@ -223,8 +220,6 @@ def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
} }
# TODO, why aren't frame resp `log.info()`s showing in upstream
# code?!
@acm @acm
async def open_history_client( async def open_history_client(
mkt: MktPair, mkt: MktPair,
@ -257,30 +252,24 @@ async def open_history_client(
else: else:
client.mkt_mode = 'spot' client.mkt_mode = 'spot'
array: np.ndarray = await client.bars( # NOTE: always query using their native symbology!
mkt=mkt, mktid: str = mkt.bs_mktid
array = await client.bars(
mktid,
start_dt=start_dt, start_dt=start_dt,
end_dt=end_dt, end_dt=end_dt,
) )
if array.size == 0:
raise NoData(
f'No frame for {start_dt} -> {end_dt}\n'
)
times = array['time'] times = array['time']
if not times.any(): if (
raise ValueError( end_dt is None
'Bad frame with null-times?\n\n' ):
f'{times}' inow = round(time.time())
)
if end_dt is None:
inow: int = round(time.time())
if (inow - times[-1]) > 60: if (inow - times[-1]) > 60:
await tractor.pause() await tractor.pause()
start_dt = from_timestamp(times[0]) start_dt = from_timestamp(times[0])
end_dt = from_timestamp(times[-1]) end_dt = from_timestamp(times[-1])
return array, start_dt, end_dt return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3} yield get_ohlc, {'erlangs': 3, 'rate': 3}
@ -450,6 +439,7 @@ async def subscribe(
async def stream_quotes( async def stream_quotes(
send_chan: trio.abc.SendChannel, send_chan: trio.abc.SendChannel,
symbols: list[str], symbols: list[str],
feed_is_live: trio.Event, feed_is_live: trio.Event,
@ -461,14 +451,11 @@ async def stream_quotes(
) -> None: ) -> None:
async with ( async with (
tractor.trionics.maybe_raise_from_masking_exc(),
send_chan as send_chan, send_chan as send_chan,
open_cached_client('binance') as client, open_cached_client('binance') as client,
): ):
init_msgs: list[FeedInit] = [] init_msgs: list[FeedInit] = []
for sym in symbols: for sym in symbols:
mkt: MktPair
pair: Pair
mkt, pair = await get_mkt_info(sym) mkt, pair = await get_mkt_info(sym)
# build out init msgs according to latest spec # build out init msgs according to latest spec
@ -517,6 +504,7 @@ async def stream_quotes(
# start streaming # start streaming
async for typ, quote in msg_gen: async for typ, quote in msg_gen:
# period = time.time() - last # period = time.time() - last
# hz = 1/period if period else float('inf') # hz = 1/period if period else float('inf')
# if hz > 60: # if hz > 60:
@ -552,7 +540,7 @@ async def open_symbol_search(
) )
# repack in fqme-keyed table # repack in fqme-keyed table
byfqme: dict[str, Pair] = {} byfqme: dict[start, Pair] = {}
for pair in pairs.values(): for pair in pairs.values():
byfqme[pair.bs_fqme] = pair byfqme[pair.bs_fqme] = pair

View File

@ -97,16 +97,6 @@ class Pair(Struct, frozen=True, kw_only=True):
baseAsset: str baseAsset: str
baseAssetPrecision: int baseAssetPrecision: int
permissionSets: list[list[str]]
# https://developers.binance.com/docs/binance-spot-api-docs#2025-08-26
# will become non-optional 2025-08-28?
# https://developers.binance.com/docs/binance-spot-api-docs#future-changes
pegInstructionsAllowed: bool = False
# https://developers.binance.com/docs/binance-spot-api-docs#2025-12-02
opoAllowed: bool = False
filters: dict[ filters: dict[
str, str,
str | int | float, str | int | float,
@ -147,17 +137,11 @@ class SpotPair(Pair, frozen=True):
quoteOrderQtyMarketAllowed: bool quoteOrderQtyMarketAllowed: bool
isSpotTradingAllowed: bool isSpotTradingAllowed: bool
isMarginTradingAllowed: bool isMarginTradingAllowed: bool
otoAllowed: bool
defaultSelfTradePreventionMode: str defaultSelfTradePreventionMode: str
allowedSelfTradePreventionModes: list[str] allowedSelfTradePreventionModes: list[str]
permissions: list[str] permissions: list[str]
# can the paint botz creat liq gaps even easier on this asset?
# Bp
# https://developers.binance.com/docs/binance-spot-api-docs/faqs/order_amend_keep_priority
amendAllowed: bool
# NOTE: see `.data._symcache.SymbologyCache.load()` for why # NOTE: see `.data._symcache.SymbologyCache.load()` for why
ns_path: str = 'piker.brokers.binance:SpotPair' ns_path: str = 'piker.brokers.binance:SpotPair'
@ -195,6 +179,7 @@ class FutesPair(Pair):
quoteAsset: str # 'USDT', quoteAsset: str # 'USDT',
quotePrecision: int # 8, quotePrecision: int # 8,
requiredMarginPercent: float # '5.0000', requiredMarginPercent: float # '5.0000',
settlePlan: int # 0,
timeInForce: list[str] # ['GTC', 'IOC', 'FOK', 'GTX'], timeInForce: list[str] # ['GTC', 'IOC', 'FOK', 'GTX'],
triggerProtect: float # '0.0500', triggerProtect: float # '0.0500',
underlyingSubType: list[str] # ['PoW'], underlyingSubType: list[str] # ['PoW'],
@ -223,10 +208,7 @@ class FutesPair(Pair):
assert pair == self.pair # sanity assert pair == self.pair # sanity
return f'{expiry}' return f'{expiry}'
case ( case 'PERPETUAL':
'PERPETUAL'
| 'TRADIFI_PERPETUAL'
):
return 'PERP' return 'PERP'
case '': case '':
@ -255,10 +237,7 @@ class FutesPair(Pair):
margin: str = self.marginAsset margin: str = self.marginAsset
match ctype: match ctype:
case ( case 'PERPETUAL':
'PERPETUAL'
| 'TRADIFI_PERPETUAL'
):
return f'{margin}M' return f'{margin}M'
case ( case (

View File

@ -471,15 +471,11 @@ def search(
''' '''
# global opts # global opts
brokermods: list[ModuleType] = list(config['brokermods'].values()) brokermods = list(config['brokermods'].values())
# TODO: this is coming from the `search --pdb` NOT from
# the `piker --pdb` XD ..
# -[ ] pull from the parent click ctx's values..dumdum
# assert pdb
# define tractor entrypoint # define tractor entrypoint
async def main(func): async def main(func):
async with maybe_open_pikerd( async with maybe_open_pikerd(
loglevel=config['loglevel'], loglevel=config['loglevel'],
debug_mode=pdb, debug_mode=pdb,

View File

@ -22,9 +22,7 @@ routines should be primitive data types where possible.
""" """
import inspect import inspect
from types import ModuleType from types import ModuleType
from typing import ( from typing import List, Dict, Any, Optional
Any,
)
import trio import trio
@ -36,10 +34,8 @@ from ..accounting import MktPair
async def api(brokername: str, methname: str, **kwargs) -> dict: async def api(brokername: str, methname: str, **kwargs) -> dict:
''' """Make (proxy through) a broker API call by name and return its result.
Make (proxy through) a broker API call by name and return its result. """
'''
brokermod = get_brokermod(brokername) brokermod = get_brokermod(brokername)
async with brokermod.get_client() as client: async with brokermod.get_client() as client:
meth = getattr(client, methname, None) meth = getattr(client, methname, None)
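A hypothetical call-through using the proxy above; the method name is the `Client.exch_info()` shown earlier in this diff and the module path is an assumption:

.. code:: python

    import trio
    from piker.brokers.core import api  # module path assumed

    async def main() -> None:
        # proxy a single binance client method by name
        pairs = await api('binance', 'exch_info')
        print(f'{len(pairs)} pairs')

    trio.run(main)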
@ -66,14 +62,10 @@ async def api(brokername: str, methname: str, **kwargs) -> dict:
async def stocks_quote( async def stocks_quote(
brokermod: ModuleType, brokermod: ModuleType,
tickers: list[str] tickers: List[str]
) -> Dict[str, Dict[str, Any]]:
) -> dict[str, dict[str, Any]]: """Return quotes dict for ``tickers``.
''' """
Return a `dict` of snapshot quotes for the provided input
`tickers`: a `list` of fqmes.
'''
async with brokermod.get_client() as client: async with brokermod.get_client() as client:
return await client.quote(tickers) return await client.quote(tickers)
@ -82,15 +74,13 @@ async def stocks_quote(
async def option_chain( async def option_chain(
brokermod: ModuleType, brokermod: ModuleType,
symbol: str, symbol: str,
date: str|None = None, date: Optional[str] = None,
) -> dict[str, dict[str, dict[str, Any]]]: ) -> Dict[str, Dict[str, Dict[str, Any]]]:
''' """Return option chain for ``symbol`` for ``date``.
Return option chain for ``symbol`` for ``date``.
By default all expiries are returned. If ``date`` is provided By default all expiries are returned. If ``date`` is provided
then contract quotes for that single expiry are returned. then contract quotes for that single expiry are returned.
"""
'''
async with brokermod.get_client() as client: async with brokermod.get_client() as client:
if date: if date:
id = int((await client.tickers2ids([symbol]))[symbol]) id = int((await client.tickers2ids([symbol]))[symbol])
@ -108,7 +98,7 @@ async def option_chain(
# async def contracts( # async def contracts(
# brokermod: ModuleType, # brokermod: ModuleType,
# symbol: str, # symbol: str,
# ) -> dict[str, dict[str, dict[str, Any]]]: # ) -> Dict[str, Dict[str, Dict[str, Any]]]:
# """Return option contracts (all expiries) for ``symbol``. # """Return option contracts (all expiries) for ``symbol``.
# """ # """
# async with brokermod.get_client() as client: # async with brokermod.get_client() as client:
@ -120,24 +110,15 @@ async def bars(
brokermod: ModuleType, brokermod: ModuleType,
symbol: str, symbol: str,
**kwargs, **kwargs,
) -> dict[str, dict[str, dict[str, Any]]]: ) -> Dict[str, Dict[str, Dict[str, Any]]]:
''' """Return option contracts (all expiries) for ``symbol``.
Return option contracts (all expiries) for ``symbol``. """
'''
async with brokermod.get_client() as client: async with brokermod.get_client() as client:
return await client.bars(symbol, **kwargs) return await client.bars(symbol, **kwargs)
async def search_w_brokerd( async def search_w_brokerd(name: str, pattern: str) -> dict:
name: str,
pattern: str,
) -> dict:
# TODO: WHY NOT WORK!?!
# when we `step` through the next block?
# import tractor
# await tractor.pause()
async with open_cached_client(name) as client: async with open_cached_client(name) as client:
# TODO: support multiple asset type concurrent searches. # TODO: support multiple asset type concurrent searches.
@ -149,12 +130,12 @@ async def symbol_search(
pattern: str, pattern: str,
**kwargs, **kwargs,
) -> dict[str, dict[str, dict[str, Any]]]: ) -> Dict[str, Dict[str, Dict[str, Any]]]:
''' '''
Return symbol info from broker. Return symbol info from broker.
''' '''
results: list[str] = [] results = []
async def search_backend( async def search_backend(
brokermod: ModuleType brokermod: ModuleType
@ -162,13 +143,6 @@ async def symbol_search(
brokername: str = mod.name brokername: str = mod.name
# TODO: figure this the FUCK OUT
# -> ok so obvi in the root actor any async task that's
# spawned outside the main tractor-root-actor task needs to
# call this..
# await tractor.devx._debug.maybe_init_greenback()
# tractor.pause_from_sync()
async with maybe_spawn_brokerd( async with maybe_spawn_brokerd(
mod.name, mod.name,
infect_asyncio=getattr( infect_asyncio=getattr(
@ -188,6 +162,7 @@ async def symbol_search(
)) ))
async with trio.open_nursery() as n: async with trio.open_nursery() as n:
for mod in brokermods: for mod in brokermods:
n.start_soon(search_backend, mod.name) n.start_soon(search_backend, mod.name)
@ -197,13 +172,11 @@ async def symbol_search(
async def mkt_info( async def mkt_info(
brokermod: ModuleType, brokermod: ModuleType,
fqme: str, fqme: str,
**kwargs, **kwargs,
) -> MktPair: ) -> MktPair:
''' '''
Return the `piker.accounting.MktPair` info struct from a given Return MktPair info from broker including src and dst assets.
backend broker tradable src/dst asset pair.
''' '''
async with open_cached_client(brokermod.name) as client: async with open_cached_client(brokermod.name) as client:

View File

@ -31,7 +31,7 @@ from typing import (
Callable, Callable,
) )
from pendulum import now import pendulum
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
from rapidfuzz import process as fuzzy from rapidfuzz import process as fuzzy
@ -39,7 +39,6 @@ import numpy as np
from tractor.trionics import ( from tractor.trionics import (
broadcast_receiver, broadcast_receiver,
maybe_open_context maybe_open_context
collapse_eg,
) )
from tractor import to_asyncio from tractor import to_asyncio
# XXX WOOPS XD # XXX WOOPS XD
@ -433,7 +432,6 @@ async def get_client(
) -> Client: ) -> Client:
async with ( async with (
collapse_eg(),
trio.open_nursery() as n, trio.open_nursery() as n,
open_jsonrpc_session( open_jsonrpc_session(
_testnet_ws_url, dtype=JSONRPCResult) as json_rpc _testnet_ws_url, dtype=JSONRPCResult) as json_rpc

View File

@ -20,11 +20,6 @@ runnable script-programs.
''' '''
from __future__ import annotations from __future__ import annotations
from datetime import ( # noqa
datetime,
date,
tzinfo as TzInfo,
)
from functools import partial from functools import partial
from typing import ( from typing import (
Literal, Literal,
@ -38,7 +33,7 @@ from piker.brokers._util import get_logger
if TYPE_CHECKING: if TYPE_CHECKING:
from .api import Client from .api import Client
import i3ipc from ib_insync import IB
log = get_logger('piker.brokers.ib') log = get_logger('piker.brokers.ib')
@ -53,39 +48,8 @@ _reset_tech: Literal[
] = 'vnc' ] = 'vnc'
no_setup_msg:str = (
'No data reset hack test setup for {vnc_sockaddr}!\n'
'See config setup tips @\n'
'https://github.com/pikers/piker/tree/master/piker/brokers/ib'
)
def try_xdo_manual(
client: Client,
):
'''
Do the "manual" `xdo`-based screen switch + click
combo since apparently the `asyncvnc` client ain't workin..
Note this is only meant as a backup method for Xorg users,
ideally you can use a real vnc client and the `vnc_click_hack()`
impl!
'''
global _reset_tech
try:
i3ipc_xdotool_manual_click_hack()
_reset_tech = 'i3ipc_xdotool'
return True
except OSError:
vnc_sockaddr: str = client.conf.vnc_addrs
log.exception(
no_setup_msg.format(vnc_sockaddr=vnc_sockaddr)
)
return False
async def data_reset_hack( async def data_reset_hack(
# vnc_host: str,
client: Client, client: Client,
reset_type: Literal['data', 'connection'], reset_type: Literal['data', 'connection'],
@ -117,60 +81,65 @@ async def data_reset_hack(
that need to be wrangled. that need to be wrangled.
''' '''
ib_client: IB = client.ib
# look up any user defined vnc socket address mapped from # look up any user defined vnc socket address mapped from
# a particular API socket port. # a particular API socket port.
vnc_addrs: tuple[str]|None = client.conf.get('vnc_addrs') api_port: str = str(ib_client.client.port)
if not vnc_addrs: vnc_host: str
vnc_port: int
vnc_sockaddr: tuple[str] | None = client.conf.get('vnc_addrs')
no_setup_msg:str = (
f'No data reset hack test setup for {vnc_sockaddr}!\n'
'See config setup tips @\n'
'https://github.com/pikers/piker/tree/master/piker/brokers/ib'
)
if not vnc_sockaddr:
log.warning( log.warning(
no_setup_msg.format(vnc_sockaddr=client.conf) no_setup_msg
+ +
'REQUIRES A `vnc_addrs: array` ENTRY' f'REQUIRES A `vnc_addrs: array` ENTRY'
) )
vnc_host, vnc_port = vnc_sockaddr.get(
api_port,
('localhost', 3003)
)
global _reset_tech global _reset_tech
match _reset_tech: match _reset_tech:
case 'vnc': case 'vnc':
try: try:
await tractor.to_asyncio.run_task( await tractor.to_asyncio.run_task(
partial( partial(
vnc_click_hack, vnc_click_hack,
client=client, host=vnc_host,
port=vnc_port,
) )
) )
except ( except OSError:
OSError, # no VNC server avail.. if vnc_host != 'localhost':
PermissionError, # asyncvnc pw fail.. log.warning(no_setup_msg)
): return False
try: try:
import i3ipc # noqa (since a deps dynamic check) import i3ipc # noqa (since a deps dynamic check)
except ModuleNotFoundError: except ModuleNotFoundError:
log.warning( log.warning(no_setup_msg)
no_setup_msg.format(vnc_sockaddr=client.conf)
)
return False return False
# XXX, Xorg only workaround.. try:
# TODO? remove now that we have `pyvnc`? i3ipc_xdotool_manual_click_hack()
# if vnc_host not in { _reset_tech = 'i3ipc_xdotool'
# 'localhost', return True
# '127.0.0.1', except OSError:
# }: log.exception(no_setup_msg)
# focussed, matches = i3ipc_fin_wins_titled() return False
# if not matches:
# log.warning(
# no_setup_msg.format(vnc_sockaddr=vnc_sockaddr)
# )
# return False
# else:
# try_xdo_manual(vnc_sockaddr)
# localhost but no vnc-client or it borked..
else:
try_xdo_manual(client)
case 'i3ipc_xdotool': case 'i3ipc_xdotool':
try_xdo_manual(client) i3ipc_xdotool_manual_click_hack()
# i3ipc_xdotool_manual_click_hack()
case _ as tech: case _ as tech:
raise RuntimeError(f'{tech} is not supported for reset tech!?') raise RuntimeError(f'{tech} is not supported for reset tech!?')
@ -180,66 +149,21 @@ async def data_reset_hack(
async def vnc_click_hack( async def vnc_click_hack(
client: Client, host: str,
reset_type: str = 'data', port: int,
pw: str|None = None, reset_type: str = 'data'
) -> None: ) -> None:
''' '''
Reset the data or network connection for the VNC attached Reset the data or network connection for the VNC attached
ib-gateway using a (magic) keybinding combo. ib gateway using magic combos.
A vnc-server password can be set either by an input `pw` param or
set in the client's config with the latter loaded from the user's
`brokers.toml` in a vnc-addrs-port-mapping section,
.. code:: toml
[ib.vnc_addrs]
4002 = {host = 'localhost', port = 5900, pw = 'doggy'}
''' '''
api_port: str = str(client.ib.client.port)
conf: dict = client.conf
vnc_addrs: dict[int, tuple] = conf.get('vnc_addrs')
if not vnc_addrs:
return None
addr_entry: dict|tuple = vnc_addrs.get(
api_port,
('localhost', 5900) # a typical default
)
if pw is None:
match addr_entry:
case (
host,
port,
):
pass
case {
'host': host,
'port': port,
'pw': pw
}:
pass
case _:
raise ValueError(
f'Invalid `ib.vnc_addrs` entry ?\n'
f'{addr_entry!r}\n'
)
try: try:
from pyvnc import ( import asyncvnc
AsyncVNCClient,
VNCConfig,
Point,
MOUSE_BUTTON_LEFT,
)
except ModuleNotFoundError: except ModuleNotFoundError:
log.warning( log.warning(
"In order to leverage `piker`'s built-in data reset hacks, install " "In order to leverage `piker`'s built-in data reset hacks, install "
"the `pyvnc` project: https://github.com/regulad/pyvnc.git" "the `asyncvnc` project: https://github.com/barneygale/asyncvnc"
) )
return return
@ -250,79 +174,24 @@ async def vnc_click_hack(
'connection': 'r' 'connection': 'r'
}[reset_type] }[reset_type]
with tractor.devx.open_crash_handler(): async with asyncvnc.connect(
client = await AsyncVNCClient.connect( host,
VNCConfig( port=port,
host=host,
port=port, # TODO: doesn't work see:
password=pw, # https://github.com/barneygale/asyncvnc/issues/7
) # password='ibcansmbz',
) as client:
# move to middle of screen
# 640x1800
client.mouse.move(
x=500,
y=500,
) )
async with client: client.mouse.click()
# move to middle of screen client.keyboard.press('Ctrl', 'Alt', key) # keys are stacked
# 640x1800
await client.move(
Point(
500,
500,
)
)
# ensure the ib-gw window is active
await client.click(MOUSE_BUTTON_LEFT)
# send the hotkeys combo B)
await client.press('Ctrl', 'Alt', key) # keys are stacked
def i3ipc_fin_wins_titled(
titles: list[str] = [
'Interactive Brokers', # tws running in i3
'IB Gateway', # gw running in i3
# 'IB', # gw running in i3 (newer version?)
# !TODO, remote vnc instance
# -[ ] something in title (or other Con-props) that indicates
# this is explicitly for ibrk sw?
# |_[ ] !can use modden spawn eventually!
'TigerVNC',
# 'vncviewer', # the terminal..
],
) -> tuple[
i3ipc.Con, # orig focussed win
list[tuple[str, i3ipc.Con]], # matching wins by title
]:
'''
Attempt to find a local-DE window titled with an entry in
`titles`.
If found deliver the current focussed window and all matching
`i3ipc.Con`s in a list.
'''
import i3ipc
ipc = i3ipc.Connection()
# TODO: might be worth offering some kinda api for grabbing
# the window id from the pid?
# https://stackoverflow.com/a/2250879
tree = ipc.get_tree()
focussed: i3ipc.Con = tree.find_focused()
matches: list[i3ipc.Con] = []
for name in titles:
results = tree.find_titled(name)
print(f'results for {name}: {results}')
if results:
con = results[0]
matches.append((
name,
con,
))
return (
focussed,
matches,
)
def i3ipc_xdotool_manual_click_hack() -> None: def i3ipc_xdotool_manual_click_hack() -> None:
@ -330,48 +199,67 @@ def i3ipc_xdotool_manual_click_hack() -> None:
Do the data reset hack but expecting a local X-window using `xdotool`. Do the data reset hack but expecting a local X-window using `xdotool`.
''' '''
focussed, matches = i3ipc_fin_wins_titled() import i3ipc
orig_win_id = focussed.window i3 = i3ipc.Connection()
# TODO: might be worth offering some kinda api for grabbing
# the window id from the pid?
# https://stackoverflow.com/a/2250879
t = i3.get_tree()
orig_win_id = t.find_focused().window
# for tws
win_names: list[str] = [
'Interactive Brokers', # tws running in i3
'IB Gateway', # gw running in i3
# 'IB', # gw running in i3 (newer version?)
]
try: try:
for name, con in matches: for name in win_names:
print(f'Resetting data feed for {name}') results = t.find_titled(name)
win_id = str(con.window) print(f'results for {name}: {results}')
w, h = con.rect.width, con.rect.height if results:
con = results[0]
print(f'Resetting data feed for {name}')
win_id = str(con.window)
w, h = con.rect.width, con.rect.height
# TODO: seems to be a few libs for python but not sure # TODO: seems to be a few libs for python but not sure
# if they support all the sub commands we need, order of # if they support all the sub commands we need, order of
# most recent commit history: # most recent commit history:
# https://github.com/rr-/pyxdotool # https://github.com/rr-/pyxdotool
# https://github.com/ShaneHutter/pyxdotool # https://github.com/ShaneHutter/pyxdotool
# https://github.com/cphyc/pyxdotool # https://github.com/cphyc/pyxdotool
# TODO: only run the reconnect (2nd) kc on a detected # TODO: only run the reconnect (2nd) kc on a detected
# disconnect? # disconnect?
for key_combo, timeout in [ for key_combo, timeout in [
# only required if we need a connection reset. # only required if we need a connection reset.
# ('ctrl+alt+r', 12), # ('ctrl+alt+r', 12),
# data feed reset. # data feed reset.
('ctrl+alt+f', 6) ('ctrl+alt+f', 6)
]: ]:
subprocess.call([ subprocess.call([
'xdotool', 'xdotool',
'windowactivate', '--sync', win_id, 'windowactivate', '--sync', win_id,
# move mouse to bottom left of window (where # move mouse to bottom left of window (where
# there should be nothing to click). # there should be nothing to click).
'mousemove_relative', '--sync', str(w-4), str(h-4), 'mousemove_relative', '--sync', str(w-4), str(h-4),
# NOTE: we may need to stick a `--retry 3` in here.. # NOTE: we may need to stick a `--retry 3` in here..
'click', '--window', win_id, 'click', '--window', win_id,
'--repeat', '3', '1', '--repeat', '3', '1',
# hackzorzes # hackzorzes
'key', key_combo, 'key', key_combo,
], ],
timeout=timeout, timeout=timeout,
) )
# re-activate and focus original window # re-activate and focus original window
subprocess.call([ subprocess.call([
'xdotool', 'xdotool',
'windowactivate', '--sync', str(orig_win_id), 'windowactivate', '--sync', str(orig_win_id),
@ -379,99 +267,3 @@ def i3ipc_xdotool_manual_click_hack() -> None:
]) ])
except subprocess.TimeoutExpired: except subprocess.TimeoutExpired:
log.exception('xdotool timed out?') log.exception('xdotool timed out?')
def is_current_time_in_range(
start_dt: datetime,
end_dt: datetime,
) -> bool:
'''
Check if current time is within the datetime range.
Use any/the-same timezone as provided by `start_dt.tzinfo` value
in the range.
'''
now: datetime = datetime.now(start_dt.tzinfo)
return start_dt <= now <= end_dt
# TODO, put this into `._util` and call it from here!
#
# NOTE, this was generated by @guille from a gpt5 prompt
# and was originally thot to be needed before learning about
# `ib_insync.contract.ContractDetails._parseSessions()` and
# it's downstream meths..
#
# This is still likely useful to keep for now to parse the
# `.tradingHours: str` value manually if we ever decide
# to move off `ib_async` and implement our own `trio`/`anyio`
# based version Bp
#
# >attempt to parse the retarted ib "time stampy thing" they
# >do for "venue hours" with this.. written by
# >gpt5-"thinking",
#
def parse_trading_hours(
spec: str,
tz: TzInfo|None = None
) -> dict[
date,
tuple[datetime, datetime]
]|None:
'''
Parse venue hours like:
'YYYYMMDD:HHMM-YYYYMMDD:HHMM;YYYYMMDD:CLOSED;...'
Returns `dict[date] = (open_dt, close_dt)` or `None` if
closed.
'''
if (
not isinstance(spec, str)
or
not spec
):
raise ValueError('spec must be a non-empty string')
out: dict[
date,
tuple[datetime, datetime]
]|None = {}
for part in (p.strip() for p in spec.split(';') if p.strip()):
if part.endswith(':CLOSED'):
day_s, _ = part.split(':', 1)
d = datetime.strptime(day_s, '%Y%m%d').date()
out[d] = None
continue
try:
start_s, end_s = part.split('-', 1)
start_dt = datetime.strptime(start_s, '%Y%m%d:%H%M')
end_dt = datetime.strptime(end_s, '%Y%m%d:%H%M')
except ValueError as exc:
raise ValueError(f'invalid segment: {part}') from exc
if tz is not None:
start_dt = start_dt.replace(tzinfo=tz)
end_dt = end_dt.replace(tzinfo=tz)
out[start_dt.date()] = (start_dt, end_dt)
return out
# ORIG desired usage,
#
# TODO, for non-drunk tomorrow,
# - call above fn and check that `output[today] is not None`
# trading_hrs: dict = parse_trading_hours(
# details.tradingHours
# )
# liq_hrs: dict = parse_trading_hours(
# details.liquidHours
# )
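A quick worked example of `parse_trading_hours()` from above, against a made-up spec string in the documented `YYYYMMDD:HHMM-YYYYMMDD:HHMM;YYYYMMDD:CLOSED` format:

.. code:: python

    from datetime import date, datetime

    spec: str = (
        '20240102:0930-20240102:1600;'
        '20240101:CLOSED'
    )
    hours = parse_trading_hours(spec)
    assert hours[date(2024, 1, 1)] is None  # marked closed
    assert hours[date(2024, 1, 2)] == (
        datetime(2024, 1, 2, 9, 30),
        datetime(2024, 1, 2, 16, 0),
    )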

View File

@ -48,7 +48,6 @@ from bidict import bidict
import trio import trio
import tractor import tractor
from tractor import to_asyncio from tractor import to_asyncio
from tractor import trionics
from pendulum import ( from pendulum import (
from_timestamp, from_timestamp,
DateTime, DateTime,
@ -288,31 +287,9 @@ class Client:
self.conf = config self.conf = config
# NOTE: the ib.client here is "throttled" to 45 rps by default # NOTE: the ib.client here is "throttled" to 45 rps by default
self.ib: IB = ib self.ib = ib
self.ib.RaiseRequestErrors: bool = True self.ib.RaiseRequestErrors: bool = True
# self._acnt_names: set[str] = {}
self._acnt_names: list[str] = []
@property
def acnts(self) -> list[str]:
# return list(self._acnt_names)
return self._acnt_names
def __repr__(self) -> str:
return (
f'<{type(self).__name__}('
f'ib={self.ib} '
f'acnts={self.acnts}'
# TODO: we need to mask out acnt-#s and other private
# infos if we're going to console this!
# f' |_.conf:\n'
# f' {pformat(self.conf)}\n'
')>'
)
async def get_fills(self) -> list[Fill]: async def get_fills(self) -> list[Fill]:
''' '''
Return list of recent `Fills` from the trading session. Return list of recent `Fills` from the trading session.
@ -334,15 +311,15 @@ class Client:
fqme: str, fqme: str,
# EST in ISO 8601 format is required... below is EPOCH # EST in ISO 8601 format is required... below is EPOCH
start_dt: datetime|str = "1970-01-01T00:00:00.000000-05:00", start_dt: datetime | str = "1970-01-01T00:00:00.000000-05:00",
end_dt: datetime|str = "", end_dt: datetime | str = "",
# ohlc sample period in seconds # ohlc sample period in seconds
sample_period_s: int = 1, sample_period_s: int = 1,
# optional "duration of time" equal to the # optional "duration of time" equal to the
# length of the returned history frame. # length of the returned history frame.
duration: str|None = None, duration: str | None = None,
**kwargs, **kwargs,
@ -399,63 +376,55 @@ class Client:
# whatToShow='MIDPOINT', # whatToShow='MIDPOINT',
# whatToShow='TRADES', # whatToShow='TRADES',
) )
log.info(
f'REQUESTING {ib_duration_str} worth of {bar_size} BARS\n'
f'fqme: {fqme}\n'
f'global _enters: {_enters}\n'
f'kwargs: {pformat(kwargs)}\n'
)
bars = await self.ib.reqHistoricalDataAsync( bars = await self.ib.reqHistoricalDataAsync(
**kwargs, **kwargs,
) )
query_info: str = (
f'REQUESTING IB history BARS\n'
f' ------ - ------\n'
f'dt_duration: {dt_duration}\n'
f'ib_duration_str: {ib_duration_str}\n'
f'bar_size: {bar_size}\n'
f'fqme: {fqme}\n'
f'actor-global _enters: {_enters}\n'
f'kwargs: {pformat(kwargs)}\n'
)
# tail case if no history for range or none prior. # tail case if no history for range or none prior.
# NOTE: there's actually 3 cases here to handle (and
# this should be read alongside the implementation of
# `.reqHistoricalDataAsync()`):
# - a timeout occurred in which case insync internals return
# an empty list thing with bars.clear()...
# - no data exists for the period likely due to
# a weekend, holiday or other non-trading period prior to
# ``end_dt`` which exceeds the ``duration``,
# - LITERALLY this is the start of the mkt's history!
if not bars: if not bars:
# TODO: figure out wut's going on here. # NOTE: there's actually 3 cases here to handle (and
# this should be read alongside the implementation of
# `.reqHistoricalDataAsync()`):
# - a timeout occurred in which case insync internals return
# an empty list thing with bars.clear()...
# - no data exists for the period likely due to
# a weekend, holiday or other non-trading period prior to
# ``end_dt`` which exceeds the ``duration``,
# - LITERALLY this is the start of the mkt's history!
# TODO: is this handy, a sync requester for tinkering
# with empty frame cases?
# def get_hist():
# return self.ib.reqHistoricalData(**kwargs)
# import pdbp
# pdbp.set_trace()
log.critical( # sync requester for debugging empty frame cases
'STUPID IB SAYS NO HISTORY\n\n' def get_hist():
+ query_info return self.ib.reqHistoricalData(**kwargs)
)
assert get_hist
import pdbp
pdbp.set_trace()
# TODO: we could maybe raise ``NoData`` instead if we
# rewrite the method in the first case?
# right now there's no way to detect a timeout..
return [], np.empty(0), dt_duration return [], np.empty(0), dt_duration
# TODO: we could maybe raise ``NoData`` instead if we
# rewrite the method in the first case? right now there's no
# way to detect a timeout.
log.info(query_info) # NOTE XXX: ensure minimum duration in bars B)
# NOTE XXX: ensure minimum duration in bars? # => we recursively call this method until we get at least
# => recursively call this method until we get at least as # as many bars such that they sum in aggregate to the the
# many bars such that they sum in aggregate to the the # desired total time (duration) at most.
# desired total time (duration) at most. # XXX XXX XXX
# - if you query over a gap and get no data # WHY DID WE EVEN NEED THIS ORIGINALLY!?
# that may short circuit the history # XXX XXX XXX
# - if you query over a gap and get no data
# that may short circuit the history
if ( if (
# XXX XXX XXX end_dt
# => WHY DID WE EVEN NEED THIS ORIGINALLY!? <= and False
# XXX XXX XXX
False
and end_dt
): ):
nparr: np.ndarray = bars_to_np(bars) nparr: np.ndarray = bars_to_np(bars)
times: np.ndarray = nparr['time'] times: np.ndarray = nparr['time']
@ -716,8 +685,8 @@ class Client:
async def find_contracts( async def find_contracts(
self, self,
pattern: str|None = None, pattern: str | None = None,
contract: Contract|None = None, contract: Contract | None = None,
qualify: bool = True, qualify: bool = True,
err_on_qualify: bool = True, err_on_qualify: bool = True,
@ -862,7 +831,7 @@ class Client:
self, self,
fqme: str, fqme: str,
) -> datetime|None: ) -> datetime | None:
''' '''
Return the first datetime stamp for `fqme` or `None` Return the first datetime stamp for `fqme` or `None`
on request failure. on request failure.
@ -918,7 +887,7 @@ class Client:
tries: int = 100, tries: int = 100,
raise_on_timeout: bool = False, raise_on_timeout: bool = False,
) -> Ticker|None: ) -> Ticker | None:
''' '''
Return a single (snap) quote for symbol. Return a single (snap) quote for symbol.
@ -930,7 +899,7 @@ class Client:
ready: ticker.TickerUpdateEvent = ticker.updateEvent ready: ticker.TickerUpdateEvent = ticker.updateEvent
# ensure a last price gets filled in before we deliver quote # ensure a last price gets filled in before we deliver quote
timeouterr: Exception|None = None timeouterr: Exception | None = None
warnset: bool = False warnset: bool = False
for _ in range(tries): for _ in range(tries):
@ -944,7 +913,6 @@ class Client:
) )
if tkr: if tkr:
break break
except TimeoutError as err: except TimeoutError as err:
timeouterr = err timeouterr = err
await asyncio.sleep(0.01) await asyncio.sleep(0.01)
@ -953,25 +921,18 @@ class Client:
else: else:
if not warnset: if not warnset:
log.warning( log.warning(
f'Quote req timed out..\n' f'Quote req timed out..maybe venue is closed?\n'
f'Maybe the venue is closed?\n'
f'\n'
f'{asdict(contract)}' f'{asdict(contract)}'
) )
warnset = True warnset = True
else: else:
log.info( log.info(f'Got first quote for {contract}')
'Got first quote for contract\n'
f'{contract}\n'
)
break break
else: else:
if ( if timeouterr and raise_on_timeout:
timeouterr import pdbp
and pdbp.set_trace()
raise_on_timeout
):
raise timeouterr raise timeouterr
if not warnset: if not warnset:
@ -1030,12 +991,8 @@ class Client:
outsideRth=True, outsideRth=True,
optOutSmartRouting=True, optOutSmartRouting=True,
# TODO: need to understand this setting better as
# it pertains to shit ass mms..
routeMarketableToBbo=True, routeMarketableToBbo=True,
designatedLocation='SMART', designatedLocation='SMART',
# TODO: make all orders GTC? # TODO: make all orders GTC?
# https://interactivebrokers.github.io/tws-api/classIBApi_1_1Order.html#a95539081751afb9980f4c6bd1655a6ba # https://interactivebrokers.github.io/tws-api/classIBApi_1_1Order.html#a95539081751afb9980f4c6bd1655a6ba
# goodTillDate=f"yyyyMMdd-HH:mm:ss", # goodTillDate=f"yyyyMMdd-HH:mm:ss",
@ -1163,8 +1120,8 @@ def get_config() -> dict[str, Any]:
names = list(accounts.keys()) names = list(accounts.keys())
accts = section['accounts'] = bidict(accounts) accts = section['accounts'] = bidict(accounts)
log.info( log.info(
f'{path} defines {len(accts)} account aliases:\n' f'brokers.toml defines {len(accts)} accounts: '
f'{pformat(names)}\n' f'{pformat(names)}'
) )
if section is None: if section is None:
@ -1231,7 +1188,7 @@ async def load_aio_clients(
try_ports = list(try_ports.values()) try_ports = list(try_ports.values())
_err = None _err = None
accounts_def: dict[str, str] = config.load_accounts(['ib']) accounts_def = config.load_accounts(['ib'])
ports = try_ports if port is None else [port] ports = try_ports if port is None else [port]
combos = list(itertools.product(hosts, ports)) combos = list(itertools.product(hosts, ports))
accounts_found: dict[str, Client] = {} accounts_found: dict[str, Client] = {}
@ -1256,12 +1213,6 @@ async def load_aio_clients(
for i in range(connect_retries): for i in range(connect_retries):
try: try:
log.info(
'Trying `ib_async` connect\n'
f'{host}: {port}\n'
f'clientId: {client_id}\n'
f'timeout: {connect_timeout}\n'
)
await ib.connectAsync( await ib.connectAsync(
host, host,
port, port,
@ -1276,9 +1227,7 @@ async def load_aio_clients(
client = Client(ib=ib, config=conf) client = Client(ib=ib, config=conf)
# update all actor-global caches # update all actor-global caches
log.runtime( log.info(f"Caching client for {sockaddr}")
f'Connected and caching `Client` @ {sockaddr!r}'
)
_client_cache[sockaddr] = client _client_cache[sockaddr] = client
break break
@ -1293,59 +1242,37 @@ async def load_aio_clients(
OSError, OSError,
) as ce: ) as ce:
_err = ce _err = ce
message: str = ( log.warning(
f'Failed to connect on {host}:{port} after {i} tries with\n' f'Failed to connect on {host}:{port} for {i} time with,\n'
f'{ib.client.apiError.value()!r}\n\n' f'{ib.client.apiError.value()}\n'
'Retrying with a new client id..\n' 'retrying with a new client id..')
)
log.runtime(message)
else:
# XXX report loudly if we never established after all
# re-tries
log.warning(message)
# Pre-collect all accounts available for this # Pre-collect all accounts available for this
# connection and map account names to this client # connection and map account names to this client
# instance. # instance.
for value in ib.accountValues(): for value in ib.accountValues():
acct_number: str = value.account acct_number = value.account
acnt_alias: str = accounts_def.inverse.get(acct_number) entry = accounts_def.inverse.get(acct_number)
if not acnt_alias: if not entry:
                    # TODO: should we construct the below reco-ex from
# the existing config content?
_, path = config.load(
conf_name='brokers',
)
raise ValueError( raise ValueError(
'No alias in account section for account!\n' 'No section in brokers.toml for account:'
f'Please add an acnt alias entry to your {path}\n' f' {acct_number}\n'
'For example,\n\n' f'Please add entry to continue using this API client'
'[ib.accounts]\n'
'margin = {accnt_number!r}\n'
'^^^^^^ <- you need this part!\n\n'
'This ensures `piker` will not leak private acnt info '
'to console output by default!\n'
) )
# surjection of account names to operating clients. # surjection of account names to operating clients.
if acnt_alias not in accounts_found: if acct_number not in accounts_found:
accounts_found[acnt_alias] = client accounts_found[entry] = client
# client._acnt_names.add(acnt_alias)
client._acnt_names.append(acnt_alias)
if accounts_found: log.info(
log.info( f'Loaded accounts for client @ {host}:{port}\n'
f'Loaded accounts for api client\n\n' f'{pformat(accounts_found)}'
f'{pformat(accounts_found)}\n' )
)
# XXX: why aren't we just updating this directy above # XXX: why aren't we just updating this directy above
# instead of using the intermediary `accounts_found`? # instead of using the intermediary `accounts_found`?
_accounts2clients.update(accounts_found) _accounts2clients.update(accounts_found)
# if we have no clients after the scan loop then error out. # if we have no clients after the scan loop then error out.
if not _client_cache: if not _client_cache:
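The account-alias handling above hinges on the two-way lookup a ``bidict`` gives you: the ``[ib.accounts]`` table maps an alias to an account number, while ``.inverse`` maps an account number (as reported by ``ib.accountValues()``) back to its alias. A minimal sketch with made-up account numbers::

    from bidict import bidict

    # alias -> account number, as would be parsed from the
    # `[ib.accounts]` section of brokers.toml (numbers invented)
    accounts_def = bidict({
        'margin': 'DU1234567',
        'ira': 'DU7654321',
    })

    assert accounts_def['margin'] == 'DU1234567'

    # the per-`accountValues()` lookup done above; a `None` here is
    # exactly the "no alias entry" case that raises the config error.
    assert accounts_def.inverse.get('DU1234567') == 'margin'
    assert accounts_def.inverse.get('U0000000') is None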
@ -1368,20 +1295,21 @@ async def load_aio_clients(
async def load_clients_for_trio( async def load_clients_for_trio(
chan: tractor.to_asyncio.LinkedTaskChannel, from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
) -> None: ) -> None:
''' '''
Pure async mngr proxy to ``load_aio_clients()``. Pure async mngr proxy to ``load_aio_clients()``.
This is a bootstrap entrypoint to call from This is a bootstrap entrypoing to call from
a `tractor.to_asyncio.open_channel_from()`. a ``tractor.to_asyncio.open_channel_from()``.
''' '''
async with load_aio_clients( async with load_aio_clients() as accts2clients:
disconnect_on_exit=False,
) as accts2clients: to_trio.send_nowait(accts2clients)
chan.started_nowait(accts2clients)
# TODO: maybe a sync event to wait on instead? # TODO: maybe a sync event to wait on instead?
await asyncio.sleep(float('inf')) await asyncio.sleep(float('inf'))
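For context, the bootstrap pattern referenced here pairs an ``asyncio``-side entrypoint with a ``trio``-side consumer via ``tractor.to_asyncio.open_channel_from()``. A rough sketch using the older ``(from_trio, to_trio)`` target signature shown on the right-hand side, with purely illustrative names, and assuming it runs inside a ``tractor`` actor spawned in infected-``asyncio`` mode::

    import asyncio
    import trio
    import tractor

    async def load_things(
        from_trio: asyncio.Queue,
        to_trio: trio.abc.SendChannel,
    ) -> None:
        # asyncio-side: do the setup work then hand the result back
        # to the parent trio task as the "started" value.
        to_trio.send_nowait({'paper': 'client'})
        # park until the trio side tears us down.
        await asyncio.sleep(float('inf'))

    async def trio_side() -> None:
        async with tractor.to_asyncio.open_channel_from(
            load_things,
        ) as (first, chan):
            # `first` is whatever the asyncio task sent on startup.
            assert first == {'paper': 'client'}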
@ -1394,10 +1322,7 @@ async def open_client_proxies() -> tuple[
async with ( async with (
tractor.trionics.maybe_open_context( tractor.trionics.maybe_open_context(
acm_func=tractor.to_asyncio.open_channel_from, acm_func=tractor.to_asyncio.open_channel_from,
kwargs={ kwargs={'target': load_clients_for_trio},
'target': load_clients_for_trio,
# ^XXX, kwarg to `open_channel_from()`
},
# lock around current actor task access # lock around current actor task access
# TODO: maybe this should be the default in tractor? # TODO: maybe this should be the default in tractor?
@ -1507,7 +1432,7 @@ class MethodProxy:
self, self,
pattern: str, pattern: str,
) -> dict[str, Any]|trio.Event: ) -> dict[str, Any] | trio.Event:
ev = self.event_table.get(pattern) ev = self.event_table.get(pattern)
@ -1528,25 +1453,26 @@ class MethodProxy:
async def open_aio_client_method_relay( async def open_aio_client_method_relay(
chan: tractor.to_asyncio.LinkedTaskChannel, from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
client: Client, client: Client,
event_consumers: dict[str, trio.Event], event_consumers: dict[str, trio.Event],
) -> None: ) -> None:
# sync with `open_client_proxy()` caller # sync with `open_client_proxy()` caller
chan.started_nowait(client) to_trio.send_nowait(client)
# TODO: separate channel for error handling? # TODO: separate channel for error handling?
client.inline_errors(chan) client.inline_errors(to_trio)
# relay all method requests to ``asyncio``-side client and deliver # relay all method requests to ``asyncio``-side client and deliver
# back results # back results
while not chan._to_trio._closed: # <- TODO, better check like `._web_bs`? while not to_trio._closed:
msg: tuple[str, dict]|dict|None = await chan.get() msg: tuple[str, dict] | dict | None = await from_trio.get()
match msg: match msg:
case None: # termination sentinel case None: # termination sentinel
log.info('asyncio `Client` method-proxy SHUTDOWN!') print('asyncio PROXY-RELAY SHUTDOWN')
break break
case (meth_name, kwargs): case (meth_name, kwargs):
@ -1556,7 +1482,7 @@ async def open_aio_client_method_relay(
try: try:
resp = await meth(**kwargs) resp = await meth(**kwargs)
# echo the msg back # echo the msg back
chan.send_nowait({'result': resp}) to_trio.send_nowait({'result': resp})
except ( except (
RequestError, RequestError,
@ -1564,10 +1490,10 @@ async def open_aio_client_method_relay(
# TODO: relay all errors to trio? # TODO: relay all errors to trio?
# BaseException, # BaseException,
) as err: ) as err:
chan.send_nowait({'exception': err}) to_trio.send_nowait({'exception': err})
case {'error': content}: case {'error': content}:
chan.send_nowait({'exception': content}) to_trio.send_nowait({'exception': content})
case _: case _:
raise ValueError(f'Unhandled msg {msg}') raise ValueError(f'Unhandled msg {msg}')
@ -1589,8 +1515,7 @@ async def open_client_proxy(
event_consumers=event_table, event_consumers=event_table,
) as (first, chan), ) as (first, chan),
trionics.collapse_eg(), # loose-ify trio.open_nursery() as relay_n,
trio.open_nursery() as relay_tn,
): ):
assert isinstance(first, Client) assert isinstance(first, Client)
@ -1630,7 +1555,7 @@ async def open_client_proxy(
continue continue
relay_tn.start_soon(relay_events) relay_n.start_soon(relay_events)
yield proxy yield proxy
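The relay loop above implies a simple request/response msg protocol between the ``trio``-side proxy and the ``asyncio``-side client: send a ``(method-name, kwargs)`` pair, then wait for a single ``{'result': ..}`` or ``{'exception': ..}`` reply. A hedged sketch of the calling side, with ``chan`` standing in for the linked channel and names illustrative::

    async def call_via_proxy(
        chan,  # the trio-side linked channel
        meth_name: str,
        **kwargs,
    ):
        await chan.send((meth_name, kwargs))
        resp: dict = await chan.receive()
        match resp:
            case {'result': value}:
                return value
            case {'exception': err}:
                # the asyncio side ships either an exception instance
                # or an error payload; re-raise accordingly.
                if isinstance(err, BaseException):
                    raise err
                raise RuntimeError(str(err))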

View File

@ -34,7 +34,6 @@ import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor import tractor
from tractor.to_asyncio import LinkedTaskChannel from tractor.to_asyncio import LinkedTaskChannel
from tractor import trionics
from ib_insync.contract import ( from ib_insync.contract import (
Contract, Contract,
) )
@ -117,11 +116,7 @@ def pack_position(
symbol=fqme, symbol=fqme,
currency=con.currency, currency=con.currency,
size=float(pos.position), size=float(pos.position),
avg_price=( avg_price=float(pos.avgCost) / float(con.multiplier or 1.0),
float(pos.avgCost)
/
float(con.multiplier or 1.0)
),
), ),
) )
@ -362,10 +357,6 @@ async def update_and_audit_pos_msg(
size=ibpos.position, size=ibpos.position,
avg_price=pikerpos.ppu, avg_price=pikerpos.ppu,
# XXX ensures matching even if multiple venue-names
# in `.bs_fqme`, likely from txn records..
bs_mktid=mkt.bs_mktid,
) )
ibfmtmsg: str = pformat(ibpos._asdict()) ibfmtmsg: str = pformat(ibpos._asdict())
@ -416,7 +407,7 @@ async def update_and_audit_pos_msg(
# TODO: make this a "propaganda" log level? # TODO: make this a "propaganda" log level?
if ibpos.avgCost != msg.avg_price: if ibpos.avgCost != msg.avg_price:
log.debug( log.warning(
f'IB "FIFO" avg price for {msg.symbol} is DIFF:\n' f'IB "FIFO" avg price for {msg.symbol} is DIFF:\n'
f'ib: {ibfmtmsg}\n' f'ib: {ibfmtmsg}\n'
'---------------------------\n' '---------------------------\n'
@ -434,8 +425,7 @@ async def aggr_open_orders(
) -> None: ) -> None:
''' '''
Collect all open orders from client and fill in `order_msgs: Collect all open orders from client and fill in `order_msgs: list`.
list`.
''' '''
trades: list[Trade] = client.ib.openTrades() trades: list[Trade] = client.ib.openTrades()
@ -556,10 +546,7 @@ async def open_trade_dialog(
), ),
# TODO: do this as part of `open_account()`!? # TODO: do this as part of `open_account()`!?
open_symcache( open_symcache('ib', only_from_memcache=True) as symcache,
'ib',
only_from_memcache=True,
) as symcache,
): ):
# Open a trade ledgers stack for appending trade records over # Open a trade ledgers stack for appending trade records over
# multiple accounts. # multiple accounts.
@ -567,10 +554,8 @@ async def open_trade_dialog(
ledgers: dict[str, TransactionLedger] = {} ledgers: dict[str, TransactionLedger] = {}
tables: dict[str, Account] = {} tables: dict[str, Account] = {}
order_msgs: list[Status] = [] order_msgs: list[Status] = []
conf: dict = get_config() conf = get_config()
accounts_def_inv: bidict[str, str] = bidict( accounts_def_inv: bidict[str, str] = bidict(conf['accounts']).inverse
conf['accounts']
).inverse
with ( with (
ExitStack() as lstack, ExitStack() as lstack,
@ -720,11 +705,7 @@ async def open_trade_dialog(
# client-account and build out position msgs to deliver to # client-account and build out position msgs to deliver to
# EMS. # EMS.
for acctid, acnt in tables.items(): for acctid, acnt in tables.items():
active_pps: dict[str, Position] active_pps, closed_pps = acnt.dump_active()
(
active_pps,
closed_pps,
) = acnt.dump_active()
for pps in [active_pps, closed_pps]: for pps in [active_pps, closed_pps]:
piker_pps: list[Position] = list(pps.values()) piker_pps: list[Position] = list(pps.values())
@ -740,7 +721,6 @@ async def open_trade_dialog(
) )
if ibpos: if ibpos:
bs_mktid: str = str(ibpos.contract.conId) bs_mktid: str = str(ibpos.contract.conId)
msg = await update_and_audit_pos_msg( msg = await update_and_audit_pos_msg(
acctid, acctid,
pikerpos, pikerpos,
@ -758,7 +738,7 @@ async def open_trade_dialog(
f'UNEXPECTED POSITION says IB => {msg.symbol}\n' f'UNEXPECTED POSITION says IB => {msg.symbol}\n'
'Maybe they LIQUIDATED YOU or your ledger is wrong?\n' 'Maybe they LIQUIDATED YOU or your ledger is wrong?\n'
) )
log.debug(logmsg) log.error(logmsg)
await ctx.started(( await ctx.started((
all_positions, all_positions,
@ -767,22 +747,21 @@ async def open_trade_dialog(
async with ( async with (
ctx.open_stream() as ems_stream, ctx.open_stream() as ems_stream,
trionics.collapse_eg(), trio.open_nursery() as n,
trio.open_nursery() as tn,
): ):
# relay existing open orders to ems # relay existing open orders to ems
for msg in order_msgs: for msg in order_msgs:
await ems_stream.send(msg) await ems_stream.send(msg)
for client in set(aioclients.values()): for client in set(aioclients.values()):
trade_event_stream: LinkedTaskChannel = await tn.start( trade_event_stream: LinkedTaskChannel = await n.start(
open_trade_event_stream, open_trade_event_stream,
client, client,
) )
# start order request handler **before** local trades # start order request handler **before** local trades
# event loop # event loop
tn.start_soon( n.start_soon(
handle_order_requests, handle_order_requests,
ems_stream, ems_stream,
accounts_def, accounts_def,
@ -790,7 +769,7 @@ async def open_trade_dialog(
) )
# allocate event relay tasks for each client connection # allocate event relay tasks for each client connection
tn.start_soon( n.start_soon(
deliver_trade_events, deliver_trade_events,
trade_event_stream, trade_event_stream,
@ -1204,14 +1183,7 @@ async def deliver_trade_events(
pos pos
and fill and fill
): ):
now_cr: CommissionReport = fill.commissionReport assert fill.commissionReport == cr
if (now_cr != cr):
log.warning(
'UhhHh ib updated the commission report mid-fill..?\n'
f'was: {pformat(cr)}\n'
f'now: {pformat(now_cr)}\n'
)
await emit_pp_update( await emit_pp_update(
ems_stream, ems_stream,
accounts_def, accounts_def,
@ -1262,47 +1234,32 @@ async def deliver_trade_events(
# never relay errors for non-broker related issues # never relay errors for non-broker related issues
# https://interactivebrokers.github.io/tws-api/message_codes.html # https://interactivebrokers.github.io/tws-api/message_codes.html
code: int = err['error_code'] code: int = err['error_code']
reason: str = err['reason'] if code in {
reqid: str = str(err['reqid']) 200, # uhh
# "Warning:" msg codes,
# https://interactivebrokers.github.io/tws-api/message_codes.html#warning_codes
# - 2109: 'Outside Regular Trading Hours'
if 'Warning:' in reason:
log.warning(
f'Order-API-warning: {code!r}\n'
f'reqid: {reqid!r}\n'
f'\n'
f'{pformat(err)}\n'
# ^TODO? should we just print the `reason`
# not the full `err`-dict?
)
continue
# XXX known special (ignore) cases
elif code in {
                    200,  # uhh.. no idea # hist pacing / connectivity
# hist pacing / connectivity # hist pacing / connectivity
162, 162,
165, 165,
# WARNING codes:
# https://interactivebrokers.github.io/tws-api/message_codes.html#warning_codes
# Attribute 'Outside Regular Trading Hours' is
# " 'ignored based on the order type and
# destination. PlaceOrder is now ' 'being
# processed.',
2109,
# XXX: lol this isn't even documented.. # XXX: lol this isn't even documented..
# 'No market data during competing live session' # 'No market data during competing live session'
1669, 1669,
}: }:
log.error(
f'Order-API-error which is non-cancel-causing ?!\n'
f'\n'
f'{pformat(err)}\n'
)
continue continue
reqid: str = str(err['reqid'])
reason: str = err['reason']
if err['reqid'] == -1: if err['reqid'] == -1:
log.error( log.error(f'TWS external order error:\n{pformat(err)}')
f'TWS external order error ??\n'
f'{pformat(err)}\n'
)
flow: dict = dict( flow: dict = dict(
flows.get(reqid) flows.get(reqid)

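To summarize the branching above: API error msgs are split into warnings (never cancel-causing), a known set of ignorable codes, and everything else which is treated as a real order error. A condensed sketch of that classification, with code meanings per the comments above (this is not a helper that exists in the module)::

    IGNORE_CODES: set[int] = {
        200,        # see note above
        162, 165,   # hist pacing / connectivity
        2109,       # 'Outside Regular Trading Hours' attribute note
        1669,       # undocumented: competing live session
    }

    def classify_ib_error(code: int, reason: str) -> str:
        if 'Warning:' in reason:
            return 'warn'    # log it, never cancel the order
        if code in IGNORE_CODES:
            return 'ignore'  # known non-cancel-causing codes
        return 'error'       # relay to the EMS as a real failure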
View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) 2018-forever Tyler Goodlet (in stewardship for pikers) # Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -13,12 +13,10 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Data feed endpoints pre-wrapped and ready for use with ``tractor``/``trio``.
''' """
Data feed endpoints pre-wrapped and ready for use with `tractor`/`trio`
via "infected-asyncio-mode".
'''
from __future__ import annotations from __future__ import annotations
import asyncio import asyncio
from contextlib import ( from contextlib import (
@ -28,6 +26,7 @@ from dataclasses import asdict
from datetime import datetime from datetime import datetime
from functools import partial from functools import partial
from pprint import pformat from pprint import pformat
from math import isnan
import time import time
from typing import ( from typing import (
Any, Any,
@ -41,6 +40,7 @@ import numpy as np
from pendulum import ( from pendulum import (
now, now,
from_timestamp, from_timestamp,
# DateTime,
Duration, Duration,
duration as mk_duration, duration as mk_duration,
) )
@ -69,10 +69,7 @@ from .api import (
Contract, Contract,
RequestError, RequestError,
) )
from ._util import ( from ._util import data_reset_hack
data_reset_hack,
is_current_time_in_range,
)
from .symbols import get_mkt_info from .symbols import get_mkt_info
if TYPE_CHECKING: if TYPE_CHECKING:
@ -187,8 +184,7 @@ async def open_history_client(
if ( if (
start_dt start_dt
and and start_dt.timestamp() == 0
start_dt.timestamp() == 0
): ):
await tractor.pause() await tractor.pause()
@ -207,16 +203,14 @@ async def open_history_client(
): ):
count += 1 count += 1
mean += latency / count mean += latency / count
log.debug( print(
f'HISTORY FRAME QUERY LATENCY: {latency}\n' f'HISTORY FRAME QUERY LATENCY: {latency}\n'
f'mean: {mean}' f'mean: {mean}'
) )
            # could be trying to retrieve bars over weekend # could be trying to retrieve bars over weekend
if out is None: if out is None:
log.error( log.error(f"Can't grab bars starting at {end_dt}!?!?")
f"No bars starting at {end_dt!r} !?!?"
)
if ( if (
end_dt end_dt
and head_dt and head_dt
@ -291,9 +285,8 @@ _pacing: str = (
async def wait_on_data_reset( async def wait_on_data_reset(
proxy: MethodProxy, proxy: MethodProxy,
reset_type: str = 'data', reset_type: str = 'data',
timeout: float = 16, timeout: float = 16, # float('inf'),
task_status: TaskStatus[ task_status: TaskStatus[
tuple[ tuple[
@ -302,47 +295,29 @@ async def wait_on_data_reset(
] ]
] = trio.TASK_STATUS_IGNORED, ] = trio.TASK_STATUS_IGNORED,
) -> bool: ) -> bool:
'''
Wait on a (global-ish) "data-farm" event to be emitted
by the IB api server.
Allows syncing to reconnect event-messages emitted on the API # TODO: we might have to put a task lock around this
console, such as: # method..
hist_ev = proxy.status_event(
- 'HMDS data farm connection is OK:ushmds'
- 'Market data farm is connecting:usfuture'
- 'Market data farm connection is OK:usfuture'
Deliver a `(cs, done: Event)` pair to the caller to support it
waiting or cancelling the associated "data-reset-request";
normally a manual data-reset-req is expected to be the cause and
thus trigger such events (such as our click-hack-magic from
`.ib._util`).
'''
# ?TODO, do we need a task-lock around this method?
#
# register for an API "status event" wrapped for `trio`-sync.
hist_ev: trio.Event = proxy.status_event(
'HMDS data farm connection is OK:ushmds' 'HMDS data farm connection is OK:ushmds'
) )
#
# ^TODO: other event-messages we might want to support waiting-for # TODO: other event messages we might want to try and
# but i wasn't able to get reliable.. # wait for but i wasn't able to get any of this
# # reliable..
# reconnect_start = proxy.status_event( # reconnect_start = proxy.status_event(
# 'Market data farm is connecting:usfuture' # 'Market data farm is connecting:usfuture'
# ) # )
# live_ev = proxy.status_event( # live_ev = proxy.status_event(
# 'Market data farm connection is OK:usfuture' # 'Market data farm connection is OK:usfuture'
# ) # )
# try to wait on the reset event(s) to arrive, a timeout # try to wait on the reset event(s) to arrive, a timeout
# will trigger a retry up to 6 times (for now). # will trigger a retry up to 6 times (for now).
client: Client = proxy._aio_ns client: Client = proxy._aio_ns
done = trio.Event() done = trio.Event()
with trio.move_on_after(timeout) as cs: with trio.move_on_after(timeout) as cs:
task_status.started((cs, done)) task_status.started((cs, done))
log.warning( log.warning(
@ -421,9 +396,8 @@ async def get_bars(
bool, # timed out hint bool, # timed out hint
]: ]:
''' '''
Request-n-retrieve historical data frames from a `trio.Task` Retrieve historical data from a ``trio``-side task using
        using a `MethodProxy` to query the `asyncio`-side's a ``MethodProxy``.
`.ib.api.Client` methods.
''' '''
global _data_resetter_task, _failed_resets global _data_resetter_task, _failed_resets
@ -613,7 +587,7 @@ async def get_bars(
data_cs.cancel() data_cs.cancel()
# spawn new data reset task # spawn new data reset task
data_cs, reset_done = await tn.start( data_cs, reset_done = await nurse.start(
partial( partial(
wait_on_data_reset, wait_on_data_reset,
proxy, proxy,
@ -633,14 +607,11 @@ async def get_bars(
    # such that simultaneous symbol queries don't try data resetting # such that simultaneous symbol queries don't try data resetting
# too fast.. # too fast..
unset_resetter: bool = False unset_resetter: bool = False
async with ( async with trio.open_nursery() as nurse:
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn
):
# start history request that we allow # start history request that we allow
# to run indefinitely until a result is acquired # to run indefinitely until a result is acquired
tn.start_soon(query) nurse.start_soon(query)
# start history reset loop which waits up to the timeout # start history reset loop which waits up to the timeout
# for a result before triggering a data feed reset. # for a result before triggering a data feed reset.
@ -660,7 +631,7 @@ async def get_bars(
unset_resetter: bool = True unset_resetter: bool = True
# spawn new data reset task # spawn new data reset task
data_cs, reset_done = await tn.start( data_cs, reset_done = await nurse.start(
partial( partial(
wait_on_data_reset, wait_on_data_reset,
proxy, proxy,
@ -682,12 +653,14 @@ async def get_bars(
) )
# per-actor cache of inter-eventloop-chans
_quote_streams: dict[str, trio.abc.ReceiveStream] = {} _quote_streams: dict[str, trio.abc.ReceiveStream] = {}
async def _setup_quote_stream( async def _setup_quote_stream(
chan: tractor.to_asyncio.LinkedTaskChannel,
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
symbol: str, symbol: str,
opts: tuple[int] = ( opts: tuple[int] = (
'375', # RT trade volume (excludes utrades) '375', # RT trade volume (excludes utrades)
@ -698,20 +671,17 @@ async def _setup_quote_stream(
# making them mostly useless and explains why the scanner # making them mostly useless and explains why the scanner
# is always slow XD # is always slow XD
# '293', # Trade count for day # '293', # Trade count for day
# '294', # Trade rate / minute '294', # Trade rate / minute
# '295', # Vlm rate / minute '295', # Vlm rate / minute
), ),
contract: Contract | None = None, contract: Contract | None = None,
) -> trio.abc.ReceiveChannel: ) -> trio.abc.ReceiveChannel:
''' '''
Stream L1 quotes via the `Ticker.updateEvent.connect(push)` Stream a ticker using the std L1 api.
callback API by registering a `push` callback which simply
`chan.send_nowait()`s quote msgs back to the calling
parent-`trio.Task`-side.
NOTE, that this task-fn is run on the `asyncio.Task`-side ONLY This task is ``asyncio``-side and must be called from
and is thus run via `tractor.to_asyncio.open_channel_from()`. ``tractor.to_asyncio.open_channel_from()``.
''' '''
global _quote_streams global _quote_streams
@ -719,79 +689,37 @@ async def _setup_quote_stream(
async with load_aio_clients( async with load_aio_clients(
disconnect_on_exit=False, disconnect_on_exit=False,
) as accts2clients: ) as accts2clients:
# XXX since this is an `asyncio.Task`, we must use
# tractor.pause_from_sync()
caccount_name, client = get_preferred_data_client(accts2clients) caccount_name, client = get_preferred_data_client(accts2clients)
contract = ( contract = contract or (await client.find_contract(symbol))
contract to_trio.send_nowait(contract) # cuz why not
or ticker: Ticker = client.ib.reqMktData(contract, ','.join(opts))
(await client.find_contract(symbol))
)
chan.started_nowait(contract) # cuz why not
ticker: Ticker = client.ib.reqMktData(
contract,
','.join(opts),
)
maybe_exc: BaseException|None = None
handler_tries: int = 0
aio_task: asyncio.Task = asyncio.current_task()
# ?TODO? this API is batch-wise and quite slow-af but, # NOTE: it's batch-wise and slow af but I guess could
# - seems to be 5s updates? # be good for backchecking? Seems to be every 5s maybe?
# - maybe we could use it for backchecking?
#
# ticker: Ticker = client.ib.reqTickByTickData( # ticker: Ticker = client.ib.reqTickByTickData(
# contract, 'Last', # contract, 'Last',
# ) # )
# define a very naive queue-pushing callback that relays # # define a simple queue push routine that streams quote packets
# quote-packets directly the calling (parent) `trio.Task`. # # to trio over the ``to_trio`` memory channel.
# Ensure on teardown we cancel the feed via their cancel API. # to_trio, from_aio = trio.open_memory_channel(2**8) # type: ignore
#
def teardown(): def teardown():
'''
Disconnect our `push`-er callback and cancel the data-feed
for `contract`.
'''
nonlocal maybe_exc
ticker.updateEvent.disconnect(push) ticker.updateEvent.disconnect(push)
report: str = f'Disconnected mkt-data for {symbol!r} due to ' log.error(f"Disconnected stream for `{symbol}`")
if maybe_exc is not None:
report += (
'error,\n'
f'{maybe_exc!r}\n'
)
log.error(report)
else:
report += (
'cancellation.\n'
)
log.cancel(report)
client.ib.cancelMktData(contract) client.ib.cancelMktData(contract)
# decouple broadcast mem chan # decouple broadcast mem chan
_quote_streams.pop(symbol, None) _quote_streams.pop(symbol, None)
def push( def push(t: Ticker) -> None:
t: Ticker, """
tries_before_raise: int = 6, Push quotes to trio task.
) -> None:
'''
Push quotes verbatim to parent-side `trio.Task`.
''' """
nonlocal maybe_exc, handler_tries # log.debug(t)
# log.debug(f'new IB quote: {t}\n')
try: try:
chan.send_nowait(t) to_trio.send_nowait(t)
# XXX TODO XXX replicate in `tractor` tests
# as per `CancelledError`-handler notes below!
# assert 0
except ( except (
trio.BrokenResourceError, trio.BrokenResourceError,
@ -806,107 +734,35 @@ async def _setup_quote_stream(
# resulting in tracebacks spammed to console.. # resulting in tracebacks spammed to console..
# Manually do the dereg ourselves. # Manually do the dereg ourselves.
teardown() teardown()
# for slow debugging purposes to avoid clobbering prompt
# with log msgs
except trio.WouldBlock: except trio.WouldBlock:
log.exception( # log.warning(
f'Asyncio->Trio `chan.send_nowait()` blocked !?\n' # f'channel is blocking symbol feed for {symbol}?'
f'\n' # f'\n{to_trio.statistics}'
f'{chan._to_trio.statistics()}\n' # )
) pass
# ?TODO, handle re-connection attempts?
except BaseException as _berr:
berr = _berr
if handler_tries >= tries_before_raise:
# breakpoint()
maybe_exc = _berr
# task.set_exception(berr)
aio_task.cancel(msg=berr.args)
raise berr
else:
handler_tries += 1
log.exception(
f'Failed to push ticker quote !?\n'
f'handler_tries={handler_tries!r}\n'
f'ticker: {t!r}\n'
f'\n'
f'{chan._to_trio.statistics()}\n'
f'\n'
f'CAUSE: {berr}\n'
)
# except trio.WouldBlock:
# # for slow debugging purposes to avoid clobbering prompt
# # with log msgs
# pass
ticker.updateEvent.connect(push) ticker.updateEvent.connect(push)
try: try:
await asyncio.sleep(float('inf')) await asyncio.sleep(float('inf'))
# XXX, for debug.. TODO? can we rm again?
#
# tractor.pause_from_sync()
# while True:
# await asyncio.sleep(1.6)
# if ticker.ticks:
# log.debug(
# f'ticker.ticks = \n'
# f'{ticker.ticks}\n'
# )
# else:
# log.warning(
# 'UHH no ticker.ticks ??'
# )
# XXX TODO XXX !?!?
# apparently **without this handler** and the subsequent
# re-raising of `maybe_exc from _taskc` cancelling the
# `aio_task` from the `push()`-callback will cause a very
        # strange chain of exc raising that breaks all sorts of
# downstream callers, tasks and remote-actor tasks!?
#
        # -[ ] we need some lowlevel reproducing tests to replicate
# those worst-case scenarios in `tractor` core!!
# -[ ] likely we should factor-out the `tractor.to_asyncio`
# attempts at workarounds in `.translate_aio_errors()`
# for failed `asyncio.Task.set_exception()` to either
# call `aio_task.cancel()` and/or
# `aio_task._fut_waiter.set_exception()` to a re-useable
# toolset in something like a `.to_asyncio._utils`??
#
except asyncio.CancelledError as _taskc:
if maybe_exc is not None:
raise maybe_exc from _taskc
raise _taskc
except BaseException as _berr:
# stash any crash cause for reporting in `teardown()`
maybe_exc = _berr
raise _berr
finally: finally:
# always disconnect our `push()` and cancel the
# ib-"mkt-data-feed".
teardown() teardown()
# return from_aio
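The core of the pattern above is just a non-blocking callback hooked onto ``Ticker.updateEvent`` which forwards each tick over the inter-loop channel. A condensed sketch, with ``chan`` standing in for that channel and wiring/teardown per the surrounding code::

    import trio

    def make_push(chan):
        def push(ticker) -> None:
            # relay the quote to the parent trio task without ever
            # blocking the asyncio event loop.
            try:
                chan.send_nowait(ticker)
            except trio.WouldBlock:
                # consumer is lagging; drop this tick.
                pass
        return push

    # wiring/teardown, as done above:
    # push = make_push(chan)
    # ticker = client.ib.reqMktData(contract, ','.join(opts))
    # ticker.updateEvent.connect(push)
    # ...
    # ticker.updateEvent.disconnect(push)
    # client.ib.cancelMktData(contract)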
@acm @acm
async def open_aio_quote_stream( async def open_aio_quote_stream(
symbol: str, symbol: str,
contract: Contract|None = None, contract: Contract | None = None,
) -> ( ) -> trio.abc.ReceiveStream:
trio.abc.Channel| # iface
tractor.to_asyncio.LinkedTaskChannel # actually
):
'''
Open a real-time `Ticker` quote stream from an `asyncio.Task`
spawned via `tractor.to_asyncio.open_channel_from()`, deliver the
inter-event-loop channel to the `trio.Task` caller and cache it
globally for re-use.
'''
from tractor.trionics import broadcast_receiver from tractor.trionics import broadcast_receiver
global _quote_streams global _quote_streams
@ -922,7 +778,6 @@ async def open_aio_quote_stream(
yield from_aio yield from_aio
return return
from_aio: tractor.to_asyncio.LinkedTaskChannel
async with tractor.to_asyncio.open_channel_from( async with tractor.to_asyncio.open_channel_from(
_setup_quote_stream, _setup_quote_stream,
symbol=symbol, symbol=symbol,
@ -932,10 +787,6 @@ async def open_aio_quote_stream(
assert contract assert contract
# TODO? de-reg on teardown of last consumer task?
# -> why aren't we using `.trionics.maybe_open_context()`
# here again?? (we are in `open_client_proxies()` tho?)
#
# cache feed for later consumers # cache feed for later consumers
_quote_streams[symbol] = from_aio _quote_streams[symbol] = from_aio
@ -950,12 +801,7 @@ def normalize(
calc_price: bool = False calc_price: bool = False
) -> dict: ) -> dict:
'''
Translate `ib_async`'s `Ticker.ticks` values to a `piker`
normalized `dict` form for transmit to downstream `.data` layer
consumers.
'''
# check for special contract types # check for special contract types
con = ticker.contract con = ticker.contract
fqme, calc_price = con2fqme(con) fqme, calc_price = con2fqme(con)
@ -974,7 +820,7 @@ def normalize(
tbt = ticker.tickByTicks tbt = ticker.tickByTicks
if tbt: if tbt:
log.info(f'tickbyticks:\n {ticker.tickByTicks}') print(f'tickbyticks:\n {ticker.tickByTicks}')
ticker.ticks = new_ticks ticker.ticks = new_ticks
@ -1010,39 +856,27 @@ def normalize(
return data return data
# ?TODO? feels like this task-fn could be factored to reduce some
# indentation levels?
# -[ ] the reconnect while loop on ib-gw "data farm connection.."s
# -[ ] everything embedded under the `async with aclosing(stream):`
# as the "meat" of the quote delivery once the connection is
# stable.
#
async def stream_quotes( async def stream_quotes(
send_chan: trio.abc.SendChannel, send_chan: trio.abc.SendChannel,
symbols: list[str], symbols: list[str],
feed_is_live: trio.Event, feed_is_live: trio.Event,
loglevel: str = None,
# TODO? we need to hook into the `ib_async` logger like
# we can with i3ipc from modden!
# loglevel: str|None = None,
# startup sync # startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED, task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None: ) -> None:
''' '''
Stream `symbols[0]` quotes back via `send_chan`. Stream symbol quotes.
The `feed_is_live: Event` is set to signal the caller that it can This is a ``trio`` callable routine meant to be invoked
begin processing msgs from the mem-chan. once the brokerd is up.
''' '''
# TODO: support multiple subscriptions # TODO: support multiple subscriptions
sym: str = symbols[0] sym = symbols[0]
log.info( log.info(f'request for real-time quotes: {sym}')
f'request for real-time quotes\n'
f'sym: {sym!r}\n'
)
init_msgs: list[FeedInit] = [] init_msgs: list[FeedInit] = []
@ -1051,84 +885,63 @@ async def stream_quotes(
details: ibis.ContractDetails details: ibis.ContractDetails
async with ( async with (
open_data_client() as proxy, open_data_client() as proxy,
# trio.open_nursery() as tn,
): ):
mkt, details = await get_mkt_info( mkt, details = await get_mkt_info(
sym, sym,
proxy=proxy, # passed to avoid implicit client load proxy=proxy, # passed to avoid implicit client load
) )
# is venue active rn?
venue_is_open: bool = any(
is_current_time_in_range(
start_dt=sesh.start,
end_dt=sesh.end,
)
for sesh in details.tradingSessions()
)
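``is_current_time_in_range()`` is imported from ``._util`` above and isn't shown in this diff; a naive stand-in for what such a session-window check might do, illustrative only and assuming tz-aware datetimes::

    from datetime import datetime, timezone

    def in_session_window(
        start_dt: datetime,
        end_dt: datetime,
    ) -> bool:
        now = datetime.now(timezone.utc)
        return start_dt <= now < end_dt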
init_msg = FeedInit(mkt_info=mkt) init_msg = FeedInit(mkt_info=mkt)
# NOTE, tell sampler (via config) to skip vlm summing for dst
# assets which provide no vlm data..
if mkt.dst.atype in { if mkt.dst.atype in {
'fiat', 'fiat',
'index', 'index',
'commodity', 'commodity',
}: }:
# tell sampler config that it shouldn't do vlm summing.
init_msg.shm_write_opts['sum_tick_vlm'] = False init_msg.shm_write_opts['sum_tick_vlm'] = False
init_msg.shm_write_opts['has_vlm'] = False init_msg.shm_write_opts['has_vlm'] = False
init_msgs.append(init_msg) init_msgs.append(init_msg)
con: Contract = details.contract con: Contract = details.contract
first_ticker: Ticker|None = None first_ticker: Ticker | None = None
with trio.move_on_after(1):
timeout: float = 1.6
with trio.move_on_after(timeout) as quote_cs:
first_ticker: Ticker = await proxy.get_quote( first_ticker: Ticker = await proxy.get_quote(
contract=con, contract=con,
raise_on_timeout=False, raise_on_timeout=False,
) )
# XXX should never happen with this ep right?
# but if so then, more then likely mkt is closed?
if quote_cs.cancelled_caught:
log.warning(
f'First quote req timed out after {timeout!r}s'
)
if first_ticker: if first_ticker:
first_quote: dict = normalize(first_ticker) first_quote: dict = normalize(first_ticker)
log.info(
# TODO: we need a stack-oriented log levels filters for 'Rxed init quote:\n'
# this! f'{pformat(first_quote)}'
# log.info(message, filter={'stack': 'live_feed'}) ?
log.runtime(
'Rxed init quote:\n\n'
f'{pformat(first_quote)}\n'
) )
# XXX NOTE: whenever we're "outside regular trading hours" # NOTE: it might be outside regular trading hours for
# (only relevant for assets coming from the "legacy markets" # assets with "standard venue operating hours" so we
# space) so we basically (from an API/runtime-operational # only "pretend the feed is live" when the dst asset
# perspective) "pretend the feed is live" even if it's # type is NOT within the NON-NORMAL-venue set: aka not
# actually closed. # commodities, forex or crypto currencies which CAN
# # always return a NaN on a snap quote request during
# IOW, we signal to the effective caller (task) that the live # normal venue hours. In the case of a closed venue
# feed is "already up" but really we're just indicating that # (equitiies, futes, bonds etc.) we at least try to
# the OHLCV history can start being loaded immediately by the # grab the OHLC history.
# `piker.data`/`.tsp` layers. if (
# first_ticker
# XXX, deats: the "pretend we're live" is just done by and
# a `feed_is_live.set()` even though nothing is actually live isnan(first_ticker.last)
# Bp # SO, if the last quote price value is NaN we ONLY
if not venue_is_open: # "pretend to do" `feed_is_live.set()` if it's a known
log.warning( # dst asset venue with a lot of closed operating hours.
f'Venue is closed, unable to establish real-time feed.\n' and mkt.dst.atype not in {
f'mkt: {mkt!r}\n' 'commodity',
f'\n' 'fiat',
f'first_ticker: {first_ticker}\n' 'crypto',
) }
):
task_status.started(( task_status.started((
init_msgs, init_msgs,
first_quote, first_quote,
@ -1139,12 +952,10 @@ async def stream_quotes(
feed_is_live.set() feed_is_live.set()
# block and let data history backfill code run. # block and let data history backfill code run.
# XXX obvi given the venue is closed, we never expect feed
# to come up; a taskc should be the only way to
# terminate this task.
await trio.sleep_forever() await trio.sleep_forever()
return # we never expect feed to come up?
# ?TODO, we could instead spawn a task that waits on a feed # TODO: we should instead spawn a task that waits on a feed
# to start and let it wait indefinitely..instead of this # to start and let it wait indefinitely..instead of this
# hard coded stuff. # hard coded stuff.
# async def wait_for_first_quote(): # async def wait_for_first_quote():
@ -1158,35 +969,27 @@ async def stream_quotes(
raise_on_timeout=True, raise_on_timeout=True,
) )
first_quote: dict = normalize(first_ticker) first_quote: dict = normalize(first_ticker)
log.info(
# TODO: we need a stack-oriented log levels filters for
# this!
# log.info(message, filter={'stack': 'live_feed'}) ?
log.runtime(
'Rxed init quote:\n' 'Rxed init quote:\n'
f'{pformat(first_quote)}' f'{pformat(first_quote)}'
) )
cs: trio.CancelScope|None = None cs: trio.CancelScope | None = None
startup: bool = True startup: bool = True
iter_quotes: trio.abc.Channel
while ( while (
startup startup
or or cs.cancel_called
cs.cancel_called
): ):
with trio.CancelScope() as cs: with trio.CancelScope() as cs:
async with ( async with (
tractor.trionics.collapse_eg(), trio.open_nursery() as nurse,
trio.open_nursery() as tn,
open_aio_quote_stream( open_aio_quote_stream(
symbol=sym, symbol=sym,
contract=con, contract=con,
) as iter_quotes, ) as stream,
): ):
# ?TODO? can we rm this - particularly for `ib_async`?
# ugh, clear ticks since we've consumed them # ugh, clear ticks since we've consumed them
# (ahem, ib_insync is stateful trash) # (ahem, ib_insync is stateful trash)
# first_ticker.ticks = [] first_ticker.ticks = []
# only on first entry at feed boot up # only on first entry at feed boot up
if startup: if startup:
@ -1200,8 +1003,8 @@ async def stream_quotes(
# data feed event. # data feed event.
async def reset_on_feed(): async def reset_on_feed():
                    # ??TODO? this seems to be suppressed from the # TODO: this seems to be suppressed from the
# traceback in `tractor`? # traceback in ``tractor``?
# assert 0 # assert 0
rt_ev = proxy.status_event( rt_ev = proxy.status_event(
@ -1210,9 +1013,9 @@ async def stream_quotes(
await rt_ev.wait() await rt_ev.wait()
cs.cancel() # cancel called should now be set cs.cancel() # cancel called should now be set
tn.start_soon(reset_on_feed) nurse.start_soon(reset_on_feed)
async with aclosing(iter_quotes): async with aclosing(stream):
# if syminfo.get('no_vlm', False): # if syminfo.get('no_vlm', False):
if not init_msg.shm_write_opts['has_vlm']: if not init_msg.shm_write_opts['has_vlm']:
@ -1227,27 +1030,25 @@ async def stream_quotes(
# wait for real volume on feed (trading might be # wait for real volume on feed (trading might be
# closed) # closed)
while True: while True:
ticker = await iter_quotes.receive() ticker = await stream.receive()
                                # for a real volume contract we wait for # for a real volume contract we wait for
# the first "real" trade to take place # the first "real" trade to take place
if ( if (
# not calc_price # not calc_price
# and not ticker.rtTime # and not ticker.rtTime
False not ticker.rtTime
# not ticker.rtTime
): ):
# spin consuming tickers until we # spin consuming tickers until we
# get a real market datum # get a real market datum
log.debug(f"New unsent ticker: {ticker}") log.debug(f"New unsent ticker: {ticker}")
continue continue
else: else:
log.debug("Received first volume tick") log.debug("Received first volume tick")
# ugh, clear ticks since we've # ugh, clear ticks since we've
# consumed them (ahem, ib_insync is # consumed them (ahem, ib_insync is
# truly stateful trash) # truly stateful trash)
# ticker.ticks = [] ticker.ticks = []
# XXX: this works because we don't use # XXX: this works because we don't use
# ``aclosing()`` above? # ``aclosing()`` above?
@ -1257,24 +1058,15 @@ async def stream_quotes(
log.debug(f"First ticker received {quote}") log.debug(f"First ticker received {quote}")
# tell data-layer spawner-caller that live # tell data-layer spawner-caller that live
                            # quotes are now active despite not having # quotes are now streaming.
# necessarily received a first vlm/clearing
# tick.
ticker = await iter_quotes.receive()
feed_is_live.set() feed_is_live.set()
fqme: str = quote['fqme']
await send_chan.send({fqme: quote})
# last = time.time() # last = time.time()
async for ticker in iter_quotes: async for ticker in stream:
quote = normalize(ticker) quote = normalize(ticker)
fqme: str = quote['fqme'] fqme = quote['fqme']
log.debug(
f'Sending quote\n'
f'{quote}'
)
await send_chan.send({fqme: quote}) await send_chan.send({fqme: quote})
# ugh, clear ticks since we've consumed them # ugh, clear ticks since we've consumed them
# ticker.ticks = [] ticker.ticks = []
# last = time.time() # last = time.time()

View File

@ -31,11 +31,7 @@ from typing import (
) )
from bidict import bidict from bidict import bidict
from pendulum import ( import pendulum
DateTime,
parse,
from_timestamp,
)
from ib_insync import ( from ib_insync import (
Contract, Contract,
Commodity, Commodity,
@ -70,11 +66,10 @@ tx_sort: Callable = partial(
iter_by_dt, iter_by_dt,
parsers={ parsers={
'dateTime': parse_flex_dt, 'dateTime': parse_flex_dt,
'datetime': parse, 'datetime': pendulum.parse,
        # for some fucking 2022 and
        # XXX: for some fucking 2022 and # back options records...fuck me.
# back options records.. f@#$ me.. 'date': pendulum.parse,
'date': parse,
} }
) )
@ -94,38 +89,15 @@ def norm_trade(
conid: int = str(record.get('conId') or record['conid']) conid: int = str(record.get('conId') or record['conid'])
bs_mktid: str = str(conid) bs_mktid: str = str(conid)
comms = record.get('commission')
if comms is None:
comms = -1*record['ibCommission']
# NOTE: sometimes weird records (like BTTX?) price = record.get('price') or record['tradePrice']
# have no field for this?
comms: float = -1 * (
record.get('commission')
or record.get('ibCommission')
or 0
)
if not comms:
log.warning(
'No commissions found for record?\n'
f'{pformat(record)}\n'
)
price: float = (
record.get('price')
or record.get('tradePrice')
)
if price is None:
log.warning(
'No `price` field found in record?\n'
'Skipping normalization..\n'
f'{pformat(record)}\n'
)
return None
# the api doesn't do the -/+ on the quantity for you but flex # the api doesn't do the -/+ on the quantity for you but flex
# records do.. are you fucking serious ib...!? # records do.. are you fucking serious ib...!?
size: float|int = ( size = record.get('quantity') or record['shares'] * {
record.get('quantity')
or record['shares']
) * {
'BOT': 1, 'BOT': 1,
'SLD': -1, 'SLD': -1,
}[record['side']] }[record['side']]
@ -156,31 +128,26 @@ def norm_trade(
# otype = tail[6] # otype = tail[6]
# strike = tail[7:] # strike = tail[7:]
log.warning( print(f'skipping opts contract {symbol}')
f'Skipping option contract -> NO SUPPORT YET!\n'
f'{symbol}\n'
)
return None return None
# timestamping is way different in API records # timestamping is way different in API records
dtstr: str = record.get('datetime') dtstr = record.get('datetime')
date: str = record.get('date') date = record.get('date')
flex_dtstr: str = record.get('dateTime') flex_dtstr = record.get('dateTime')
if dtstr or date: if dtstr or date:
dt: DateTime = parse(dtstr or date) dt = pendulum.parse(dtstr or date)
elif flex_dtstr: elif flex_dtstr:
# probably a flex record with a wonky non-std timestamp.. # probably a flex record with a wonky non-std timestamp..
dt: DateTime = parse_flex_dt(record['dateTime']) dt = parse_flex_dt(record['dateTime'])
# special handling of symbol extraction from # special handling of symbol extraction from
# flex records using some ad-hoc schema parsing. # flex records using some ad-hoc schema parsing.
asset_type: str = ( asset_type: str = record.get(
record.get('assetCategory') 'assetCategory'
or record.get('secType') ) or record.get('secType', 'STK')
or 'STK'
)
if (expiry := ( if (expiry := (
record.get('lastTradeDateOrContractMonth') record.get('lastTradeDateOrContractMonth')
@ -390,7 +357,6 @@ def norm_trade_records(
if txn is None: if txn is None:
continue continue
# inject txns sorted by datetime
insort( insort(
records, records,
txn, txn,
@ -439,7 +405,7 @@ def api_trades_to_ledger_entries(
txn_dict[attr_name] = val txn_dict[attr_name] = val
tid = str(txn_dict['execId']) tid = str(txn_dict['execId'])
dt = from_timestamp(txn_dict['time']) dt = pendulum.from_timestamp(txn_dict['time'])
txn_dict['datetime'] = str(dt) txn_dict['datetime'] = str(dt)
acctid = accounts[txn_dict['acctNumber']] acctid = accounts[txn_dict['acctNumber']]

View File

@ -209,10 +209,7 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
break break
ib_client = proxy._aio_ns.ib ib_client = proxy._aio_ns.ib
log.info( log.info(f'Using {ib_client} for symbol search')
f'Using API client for symbol-search\n'
f'{ib_client}\n'
)
last = time.time() last = time.time()
async for pattern in stream: async for pattern in stream:
@ -297,7 +294,7 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
elif stock_results: elif stock_results:
break break
# else: # else:
# await tractor.pause() await tractor.pause()
# # match against our ad-hoc set immediately # # match against our ad-hoc set immediately
# adhoc_matches = fuzzy.extract( # adhoc_matches = fuzzy.extract(
@ -525,21 +522,7 @@ async def get_mkt_info(
venue = con.primaryExchange or con.exchange venue = con.primaryExchange or con.exchange
price_tick: Decimal = Decimal(str(details.minTick)) price_tick: Decimal = Decimal(str(details.minTick))
ib_min_tick_gt_2: Decimal = Decimal('0.01') # price_tick: Decimal = Decimal('0.01')
if (
price_tick < ib_min_tick_gt_2
):
# TODO: we need to add some kinda dynamic rounding sys
# to our MktPair i guess?
# not sure where the logic should sit, but likely inside
# the `.clearing._ems` i suppose...
log.warning(
'IB seems to disallow a min price tick < 0.01 '
'when the price is > 2.0..?\n'
f'Decreasing min tick precision for {fqme} to 0.01'
)
# price_tick = ib_min_tick
# await tractor.pause()
if atype == 'stock': if atype == 'stock':
# XXX: GRRRR they don't support fractional share sizes for # XXX: GRRRR they don't support fractional share sizes for

View File

@ -27,14 +27,13 @@ from typing import (
) )
import time import time
import httpx
import pendulum import pendulum
import asks
import numpy as np import numpy as np
import urllib.parse import urllib.parse
import hashlib import hashlib
import hmac import hmac
import base64 import base64
import tractor
import trio import trio
from piker import config from piker import config
@ -61,11 +60,6 @@ log = get_logger('piker.brokers.kraken')
# <uri>/<version>/ # <uri>/<version>/
_url = 'https://api.kraken.com/0' _url = 'https://api.kraken.com/0'
_headers: dict[str, str] = {
'User-Agent': 'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
}
# TODO: this is the only backend providing this right? # TODO: this is the only backend providing this right?
# in which case we should drop it from the defaults and # in which case we should drop it from the defaults and
# instead make a custom fields descr in this module! # instead make a custom fields descr in this module!
@ -141,15 +135,16 @@ class Client:
def __init__( def __init__(
self, self,
config: dict[str, str], config: dict[str, str],
httpx_client: httpx.AsyncClient,
name: str = '', name: str = '',
api_key: str = '', api_key: str = '',
secret: str = '' secret: str = ''
) -> None: ) -> None:
self._sesh = asks.Session(connections=4)
self._sesh: httpx.AsyncClient = httpx_client self._sesh.base_location = _url
self._sesh.headers.update({
'User-Agent':
'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
})
self._name = name self._name = name
self._api_key = api_key self._api_key = api_key
self._secret = secret self._secret = secret
@ -171,9 +166,10 @@ class Client:
method: str, method: str,
data: dict, data: dict,
) -> dict[str, Any]: ) -> dict[str, Any]:
resp: httpx.Response = await self._sesh.post( resp = await self._sesh.post(
url=f'/public/{method}', path=f'/public/{method}',
json=data, json=data,
timeout=float('inf')
) )
return resproc(resp, log) return resproc(resp, log)
@ -184,18 +180,18 @@ class Client:
uri_path: str uri_path: str
) -> dict[str, Any]: ) -> dict[str, Any]:
headers = { headers = {
'Content-Type': 'application/x-www-form-urlencoded', 'Content-Type':
'API-Key': self._api_key, 'application/x-www-form-urlencoded',
'API-Sign': get_kraken_signature( 'API-Key':
uri_path, self._api_key,
data, 'API-Sign':
self._secret, get_kraken_signature(uri_path, data, self._secret)
),
} }
resp: httpx.Response = await self._sesh.post( resp = await self._sesh.post(
url=f'/private/{method}', path=f'/private/{method}',
data=data, data=data,
headers=headers, headers=headers,
timeout=float('inf')
) )
return resproc(resp, log) return resproc(resp, log)
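``get_kraken_signature()`` isn't shown in this hunk; per Kraken's documented REST auth scheme the ``API-Sign`` header is an HMAC-SHA512 over the URI path plus a SHA256 digest of the nonce'd POST data, keyed with the base64-decoded secret. A sketch of that convention, assuming ``data`` already carries a ``nonce`` field::

    import base64
    import hashlib
    import hmac
    import urllib.parse

    def get_kraken_signature(
        uri_path: str,
        data: dict,
        secret: str,
    ) -> str:
        postdata: str = urllib.parse.urlencode(data)
        encoded: bytes = (str(data['nonce']) + postdata).encode()
        message: bytes = (
            uri_path.encode()
            + hashlib.sha256(encoded).digest()
        )
        mac = hmac.new(
            base64.b64decode(secret),
            message,
            hashlib.sha512,
        )
        return base64.b64encode(mac.digest()).decode()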
@ -373,7 +369,8 @@ class Client:
# 1658347714, 'status': 'Success'}]} # 1658347714, 'status': 'Success'}]}
if xfers: if xfers:
await tractor.pause() import tractor
await tractor.pp()
trans: dict[str, Transaction] = {} trans: dict[str, Transaction] = {}
for entry in xfers: for entry in xfers:
@ -501,8 +498,7 @@ class Client:
for xkey, data in resp['result'].items(): for xkey, data in resp['result'].items():
# NOTE: always cache in pairs tables for faster lookup # NOTE: always cache in pairs tables for faster lookup
with tractor.devx.maybe_open_crash_handler(): # as bxerr: pair = Pair(xname=xkey, **data)
pair = Pair(xname=xkey, **data)
# register the above `Pair` structs for all # register the above `Pair` structs for all
# key-sets/monikers: a set of 4 (frickin) tables # key-sets/monikers: a set of 4 (frickin) tables
@ -669,36 +665,24 @@ class Client:
@acm @acm
async def get_client() -> Client: async def get_client() -> Client:
conf: dict[str, Any] = get_config() conf = get_config()
async with httpx.AsyncClient( if conf:
base_url=_url, client = Client(
headers=_headers, conf,
# TODO: is there a way to numerate this? # TODO: don't break these up and just do internal
# https://www.python-httpx.org/advanced/clients/#why-use-a-client # conf lookups instead..
# connections=4 name=conf['key_descr'],
) as trio_client: api_key=conf['api_key'],
if conf: secret=conf['secret']
client = Client( )
conf, else:
httpx_client=trio_client, client = Client({})
# TODO: don't break these up and just do internal # at startup, load all symbols, and asset info in
# conf lookups instead.. # batch requests.
name=conf['key_descr'], async with trio.open_nursery() as nurse:
api_key=conf['api_key'], nurse.start_soon(client.get_assets)
secret=conf['secret'] await client.get_mkt_pairs()
)
else:
client = Client(
conf={},
httpx_client=trio_client,
)
# at startup, load all symbols, and asset info in yield client
# batch requests.
async with trio.open_nursery() as nurse:
nurse.start_soon(client.get_assets)
await client.get_mkt_pairs()
yield client
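The switch to a shared ``httpx.AsyncClient`` above leans on its ``base_url`` handling: request paths are relative and get joined onto the session's base. A minimal usage sketch against the public (unauthenticated) endpoint::

    import httpx

    async def kraken_server_time() -> dict:
        async with httpx.AsyncClient(
            base_url='https://api.kraken.com/0',
        ) as client:
            # resolves to https://api.kraken.com/0/public/Time
            resp = await client.post('/public/Time')
            return resp.json()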

View File

@ -175,8 +175,9 @@ async def handle_order_requests(
case { case {
'account': 'kraken.spot' as account, 'account': 'kraken.spot' as account,
'action': 'buy'|'sell', 'action': action,
}: } if action in {'buy', 'sell'}:
# validate # validate
order = BrokerdOrder(**msg) order = BrokerdOrder(**msg)
@ -261,12 +262,6 @@ async def handle_order_requests(
} | extra } | extra
log.info(f'Submitting WS order request:\n{pformat(req)}') log.info(f'Submitting WS order request:\n{pformat(req)}')
# NOTE HOWTO, debug order requests
#
# if 'XRP' in pair:
# await tractor.pause()
await ws.send_msg(req) await ws.send_msg(req)
# placehold for sanity checking in relay loop # placehold for sanity checking in relay loop
@ -549,7 +544,7 @@ async def open_trade_dialog(
# to be reloaded. # to be reloaded.
balances: dict[str, float] = await client.get_balances() balances: dict[str, float] = await client.get_balances()
await verify_balances( verify_balances(
acnt, acnt,
src_fiat, src_fiat,
balances, balances,
@ -617,18 +612,18 @@ async def open_trade_dialog(
# enter relay loop # enter relay loop
await handle_order_updates( await handle_order_updates(
client=client, client,
ws=ws, ws,
ws_stream=stream, stream,
ems_stream=ems_stream, ems_stream,
apiflows=apiflows, apiflows,
ids=ids, ids,
reqids2txids=reqids2txids, reqids2txids,
acnt=acnt, acnt,
ledger=ledger, api_trans,
acctid=acctid, acctid,
acc_name=acc_name, acc_name,
token=token, token,
) )
@ -644,8 +639,7 @@ async def handle_order_updates(
# transaction records which will be updated # transaction records which will be updated
# on new trade clearing events (aka order "fills") # on new trade clearing events (aka order "fills")
ledger: TransactionLedger, ledger_trans: dict[str, Transaction],
# ledger_trans: dict[str, Transaction],
acctid: str, acctid: str,
acc_name: str, acc_name: str,
token: str, token: str,
@ -705,8 +699,7 @@ async def handle_order_updates(
# if tid not in ledger_trans # if tid not in ledger_trans
} }
for tid, trade in trades.items(): for tid, trade in trades.items():
# assert tid not in ledger_trans assert tid not in ledger_trans
assert tid not in ledger
txid = trade['ordertxid'] txid = trade['ordertxid']
reqid = trade.get('userref') reqid = trade.get('userref')
@ -754,17 +747,11 @@ async def handle_order_updates(
client, client,
api_name_set='wsname', api_name_set='wsname',
) )
ppmsgs: list[BrokerdPosition] = trades2pps( ppmsgs = trades2pps(
acnt=acnt, acnt,
ledger=ledger, acctid,
acctid=acctid, new_trans,
new_trans=new_trans,
) )
# ppmsgs = trades2pps(
# acnt,
# acctid,
# new_trans,
# )
for pp_msg in ppmsgs: for pp_msg in ppmsgs:
await ems_stream.send(pp_msg) await ems_stream.send(pp_msg)
@ -1090,8 +1077,6 @@ async def handle_order_updates(
f'Failed to {action} order {reqid}:\n' f'Failed to {action} order {reqid}:\n'
f'{errmsg}' f'{errmsg}'
) )
# if tractor._state.debug_mode():
# await tractor.pause()
symbol: str = 'N/A' symbol: str = 'N/A'
if chain := apiflows.get(reqid): if chain := apiflows.get(reqid):

View File

@ -21,6 +21,7 @@ Symbology defs and search.
from decimal import Decimal from decimal import Decimal
import tractor import tractor
from rapidfuzz import process as fuzzy
from piker._cacheables import ( from piker._cacheables import (
async_lifo_cache, async_lifo_cache,
@ -40,13 +41,8 @@ from piker.accounting._mktinfo import (
) )
# https://www.kraken.com/features/api#get-tradable-pairs
class Pair(Struct): class Pair(Struct):
'''
A tradable asset pair as schema-defined by,
https://docs.kraken.com/api/docs/rest-api/get-tradable-asset-pairs
'''
xname: str # idiotic bs_mktid equiv i guess? xname: str # idiotic bs_mktid equiv i guess?
altname: str # alternate pair name altname: str # alternate pair name
wsname: str # WebSocket pair name (if available) wsname: str # WebSocket pair name (if available)
@ -57,6 +53,7 @@ class Pair(Struct):
lot: str # volume lot size lot: str # volume lot size
cost_decimals: int cost_decimals: int
costmin: float
pair_decimals: int # scaling decimal places for pair pair_decimals: int # scaling decimal places for pair
lot_decimals: int # scaling decimal places for volume lot_decimals: int # scaling decimal places for volume
@ -82,7 +79,6 @@ class Pair(Struct):
tick_size: float # min price step size tick_size: float # min price step size
status: str status: str
costmin: str|None = None # XXX, only some mktpairs?
short_position_limit: float = 0 short_position_limit: float = 0
long_position_limit: float = float('inf') long_position_limit: float = float('inf')

View File

@ -16,9 +16,10 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
Kucoin cex API backend. Kucoin broker backend
''' '''
from contextlib import ( from contextlib import (
asynccontextmanager as acm, asynccontextmanager as acm,
aclosing, aclosing,
@ -41,7 +42,7 @@ import wsproto
from uuid import uuid4 from uuid import uuid4
from trio_typing import TaskStatus from trio_typing import TaskStatus
import httpx import asks
from bidict import bidict from bidict import bidict
import numpy as np import numpy as np
import pendulum import pendulum
@ -62,7 +63,7 @@ from piker._cacheables import (
) )
from piker.log import get_logger from piker.log import get_logger
from piker.data.validate import FeedInit from piker.data.validate import FeedInit
from piker.types import Struct # NOTE, this is already a `tractor.msg.Struct` from piker.types import Struct
from piker.data import ( from piker.data import (
def_iohlcv_fields, def_iohlcv_fields,
match_from_pairs, match_from_pairs,
@ -98,18 +99,9 @@ class KucoinMktPair(Struct, frozen=True):
def size_tick(self) -> Decimal: def size_tick(self) -> Decimal:
return Decimal(str(self.quoteMinSize)) return Decimal(str(self.quoteMinSize))
callauctionFirstStageStartTime: None|float
callauctionIsEnabled: bool
callauctionPriceCeiling: float|None
callauctionPriceFloor: float|None
callauctionSecondStageStartTime: float|None
callauctionThirdStageStartTime: float|None
enableTrading: bool enableTrading: bool
feeCategory: int
feeCurrency: str feeCurrency: str
isMarginEnabled: bool isMarginEnabled: bool
makerFeeCoefficient: float
market: str market: str
minFunds: float minFunds: float
name: str name: str
@ -119,10 +111,7 @@ class KucoinMktPair(Struct, frozen=True):
quoteIncrement: float quoteIncrement: float
quoteMaxSize: float quoteMaxSize: float
quoteMinSize: float quoteMinSize: float
st: bool
symbol: str # our bs_mktid, kucoin's internal id symbol: str # our bs_mktid, kucoin's internal id
takerFeeCoefficient: float
tradingStartTime: float|None
class AccountTrade(Struct, frozen=True): class AccountTrade(Struct, frozen=True):
@ -223,12 +212,8 @@ def get_config() -> BrokerConfig | None:
class Client: class Client:
def __init__( def __init__(self) -> None:
self, self._config: BrokerConfig | None = get_config()
httpx_client: httpx.AsyncClient,
) -> None:
self._http: httpx.AsyncClient = httpx_client
self._config: BrokerConfig|None = get_config()
self._pairs: dict[str, KucoinMktPair] = {} self._pairs: dict[str, KucoinMktPair] = {}
self._fqmes2mktids: bidict[str, str] = bidict() self._fqmes2mktids: bidict[str, str] = bidict()
self._bars: list[list[float]] = [] self._bars: list[list[float]] = []
@ -242,24 +227,18 @@ class Client:
) -> dict[str, str | bytes]: ) -> dict[str, str | bytes]:
''' '''
Generate authenticated request headers: Generate authenticated request headers
https://docs.kucoin.com/#authentication https://docs.kucoin.com/#authentication
https://www.kucoin.com/docs/basic-info/connection-method/authentication/creating-a-request
https://www.kucoin.com/docs/basic-info/connection-method/authentication/signing-a-message
''' '''
if not self._config: if not self._config:
raise ValueError( raise ValueError(
'No config found when trying to send authenticated request' 'No config found when trying to send authenticated request')
)
str_to_sign = ( str_to_sign = (
str(int(time.time() * 1000)) str(int(time.time() * 1000))
+ + action + f'/api/{api}/{endpoint.lstrip("/")}'
action
+
f'/api/{api}/{endpoint.lstrip("/")}'
) )
signature = base64.b64encode( signature = base64.b64encode(
@ -270,7 +249,6 @@ class Client:
).digest() ).digest()
) )
# TODO: can we cache this between calls?
passphrase = base64.b64encode( passphrase = base64.b64encode(
hmac.new( hmac.new(
self._config.key_secret.encode('utf-8'), self._config.key_secret.encode('utf-8'),
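For reference, a standalone sketch of how these headers are composed, assuming the standard KuCoin v2 key scheme documented at the links above (the key/secret/passphrase names are placeholders)::

    import base64
    import hashlib
    import hmac
    import time

    def sign_kucoin_headers(
        key: str,
        secret: str,
        passphrase: str,
        action: str,      # 'GET' | 'POST'
        path: str,        # e.g. '/api/v2/symbols'
    ) -> dict[str, str]:
        now_ms: str = str(int(time.time() * 1000))
        str_to_sign: str = now_ms + action + path
        signature = base64.b64encode(
            hmac.new(
                secret.encode('utf-8'),
                str_to_sign.encode('utf-8'),
                hashlib.sha256,
            ).digest()
        )
        # v2+ API keys also require the passphrase itself to be HMAC-signed
        signed_passphrase = base64.b64encode(
            hmac.new(
                secret.encode('utf-8'),
                passphrase.encode('utf-8'),
                hashlib.sha256,
            ).digest()
        )
        return {
            'KC-API-SIGN': signature.decode('utf-8'),
            'KC-API-TIMESTAMP': now_ms,
            'KC-API-KEY': key,
            'KC-API-PASSPHRASE': signed_passphrase.decode('utf-8'),
            'KC-API-KEY-VERSION': '2',
        }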
@ -292,10 +270,8 @@ class Client:
self, self,
action: Literal['POST', 'GET'], action: Literal['POST', 'GET'],
endpoint: str, endpoint: str,
api: str = 'v2', api: str = 'v2',
headers: dict = {}, headers: dict = {},
) -> Any: ) -> Any:
''' '''
Generic request wrapper for Kucoin API Generic request wrapper for Kucoin API
@ -308,19 +284,14 @@ class Client:
api, api,
) )
req_meth: Callable = getattr( api_url = f'https://api.kucoin.com/api/{api}/{endpoint}'
self._http,
action.lower(), res = await asks.request(action, api_url, headers=headers)
)
res = await req_meth( json = res.json()
url=f'/{api}/{endpoint}', if 'data' in json:
headers=headers, return json['data']
)
json: dict = res.json()
if (data := json.get('data')) is not None:
return data
else: else:
api_url: str = self._http.base_url
log.error( log.error(
f'Error making request to {api_url} ->\n' f'Error making request to {api_url} ->\n'
f'{pformat(res)}' f'{pformat(res)}'
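A minimal sketch of the same request-wrapper pattern, assuming an `httpx.AsyncClient` constructed with `base_url='https://api.kucoin.com/api'` and the usual KuCoin `{'code': .., 'data': ..}` response envelope::

    from typing import Any, Callable
    import httpx

    async def kucoin_request(
        http: httpx.AsyncClient,
        action: str,               # 'GET' | 'POST'
        endpoint: str,             # e.g. 'symbols'
        api: str = 'v2',
        headers: dict | None = None,
    ) -> Any:
        # resolve the bound method by HTTP verb: .get()/.post()
        req_meth: Callable = getattr(http, action.lower())
        res = await req_meth(
            url=f'/{api}/{endpoint}',
            headers=headers or {},
        )
        json: dict = res.json()
        if (data := json.get('data')) is not None:
            return data
        # no 'data' key usually means an API-level error payload
        raise RuntimeError(f'kucoin request failed:\n{json}')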
@ -340,7 +311,7 @@ class Client:
''' '''
token_type = 'private' if private else 'public' token_type = 'private' if private else 'public'
try: try:
data: dict[str, Any]|None = await self._request( data: dict[str, Any] | None = await self._request(
'POST', 'POST',
endpoint=f'bullet-{token_type}', endpoint=f'bullet-{token_type}',
api='v1' api='v1'
@ -378,8 +349,8 @@ class Client:
currencies: dict[str, Currency] = {} currencies: dict[str, Currency] = {}
entries: list[dict] = await self._request( entries: list[dict] = await self._request(
'GET', 'GET',
endpoint='currencies',
api='v1', api='v1',
endpoint='currencies',
) )
for entry in entries: for entry in entries:
curr = Currency(**entry).copy() curr = Currency(**entry).copy()
@ -395,22 +366,13 @@ class Client:
dict[str, KucoinMktPair], dict[str, KucoinMktPair],
bidict[str, KucoinMktPair], bidict[str, KucoinMktPair],
]: ]:
entries = await self._request( entries = await self._request('GET', 'symbols')
'GET',
endpoint='symbols',
)
log.info(f' {len(entries)} Kucoin market pairs fetched') log.info(f' {len(entries)} Kucoin market pairs fetched')
pairs: dict[str, KucoinMktPair] = {} pairs: dict[str, KucoinMktPair] = {}
fqmes2mktids: bidict[str, str] = bidict() fqmes2mktids: bidict[str, str] = bidict()
for item in entries: for item in entries:
try: pair = pairs[item['name']] = KucoinMktPair(**item)
pair = pairs[item['name']] = KucoinMktPair(**item)
except TypeError as te:
raise TypeError(
'`KucoinMktPair` and response fields do not match ??\n'
f'{KucoinMktPair.fields_diff(item)}\n'
) from te
fqmes2mktids[ fqmes2mktids[
item['name'].lower().replace('-', '') item['name'].lower().replace('-', '')
] = pair.name ] = pair.name
@ -605,21 +567,13 @@ def fqme_to_kucoin_sym(
@acm @acm
async def get_client() -> AsyncGenerator[Client, None]: async def get_client() -> AsyncGenerator[Client, None]:
''' client = Client()
Load an API `Client` preconfigured from user settings
''' async with trio.open_nursery() as n:
async with ( n.start_soon(client.get_mkt_pairs)
httpx.AsyncClient( await client.get_currencies()
base_url='https://api.kucoin.com/api',
) as trio_client,
):
client = Client(httpx_client=trio_client)
async with trio.open_nursery() as tn:
tn.start_soon(client.get_mkt_pairs)
await client.get_currencies()
yield client yield client
@tractor.context @tractor.context
@ -655,7 +609,7 @@ async def open_ping_task(
await trio.sleep((ping_interval - 1000) / 1000) await trio.sleep((ping_interval - 1000) / 1000)
await ws.send_msg({'id': connect_id, 'type': 'ping'}) await ws.send_msg({'id': connect_id, 'type': 'ping'})
log.warning('Starting ping task for kucoin ws connection') log.info('Starting ping task for kucoin ws connection')
n.start_soon(ping_server) n.start_soon(ping_server)
yield yield
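The keepalive itself reduces to a tiny loop; a sketch mirroring the hunk above (`ws.send_msg()` and the msg shape are taken from it, the rest is generic)::

    import trio

    async def ping_server(ws, ping_interval_ms: float, connect_id: str):
        while True:
            # wake a second before the server-advertised interval lapses
            await trio.sleep((ping_interval_ms - 1000) / 1000)
            await ws.send_msg({'id': connect_id, 'type': 'ping'})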
@ -667,14 +621,9 @@ async def open_ping_task(
async def get_mkt_info( async def get_mkt_info(
fqme: str, fqme: str,
) -> tuple[ ) -> tuple[MktPair, KucoinMktPair]:
MktPair,
KucoinMktPair,
]:
''' '''
Query for and return both a `piker.accounting.MktPair` and Query for and return a `MktPair` and `KucoinMktPair`.
`KucoinMktPair` from provided `fqme: str`
(fully-qualified-market-endpoint).
''' '''
async with open_cached_client('kucoin') as client: async with open_cached_client('kucoin') as client:
@ -749,8 +698,6 @@ async def stream_quotes(
log.info(f'Starting up quote stream(s) for {symbols}') log.info(f'Starting up quote stream(s) for {symbols}')
for sym_str in symbols: for sym_str in symbols:
mkt: MktPair
pair: KucoinMktPair
mkt, pair = await get_mkt_info(sym_str) mkt, pair = await get_mkt_info(sym_str)
init_msgs.append( init_msgs.append(
FeedInit(mkt_info=mkt) FeedInit(mkt_info=mkt)
@ -758,11 +705,7 @@ async def stream_quotes(
ws: NoBsWs ws: NoBsWs
token, ping_interval = await client._get_ws_token() token, ping_interval = await client._get_ws_token()
log.info(f'API reported ping_interval: {ping_interval}\n') connect_id = str(uuid4())
connect_id: str = str(uuid4())
typ: str
quote: dict
async with ( async with (
open_autorecon_ws( open_autorecon_ws(
( (
@ -776,37 +719,20 @@ async def stream_quotes(
), ),
) as ws, ) as ws,
open_ping_task(ws, ping_interval, connect_id), open_ping_task(ws, ping_interval, connect_id),
aclosing( aclosing(stream_messages(ws, sym_str)) as msg_gen,
iter_normed_quotes(
ws, sym_str
)
) as iter_quotes,
): ):
typ, quote = await anext(iter_quotes) typ, quote = await anext(msg_gen)
# take care to not unblock here until we get a real while typ != 'trade':
# trade quote? # take care to not unblock here until we get a real
# ^TODO, remove this right? # trade quote
# -[ ] what often blocks chart boot/new-feed switching typ, quote = await anext(msg_gen)
# since we're waiting for a live quote instead of just
# loading history afap..
# |_ XXX, not sure if we require a bit of rework to core
# feed init logic or if backends just gotta be
# changed up.. feel like there was some causality
# dilemma prolly only seen with IB too..
# while typ != 'trade':
# typ, quote = await anext(iter_quotes)
task_status.started((init_msgs, quote)) task_status.started((init_msgs, quote))
feed_is_live.set() feed_is_live.set()
# XXX NOTE, DO NOT include the `.<backend>` suffix! async for typ, msg in msg_gen:
# OW the sampling loop will not broadcast correctly.. await send_chan.send({sym_str: msg})
# since `bus._subscribers.setdefault(bs_fqme, set())`
# is used inside `.data.open_feed_bus()` !!!
topic: str = mkt.bs_fqme
async for typ, quote in iter_quotes:
await send_chan.send({topic: quote})
@acm @acm
@ -861,7 +787,7 @@ async def subscribe(
) )
async def iter_normed_quotes( async def stream_messages(
ws: NoBsWs, ws: NoBsWs,
sym: str, sym: str,
@ -892,9 +818,6 @@ async def iter_normed_quotes(
yield 'trade', { yield 'trade', {
'symbol': sym, 'symbol': sym,
# TODO, is 'last' even used elsewhere/a-good
# semantic? can't we just read the ticks with our
# .data.ticktools.frame_ticks()`/
'last': trade_data.price, 'last': trade_data.price,
'brokerd_ts': last_trade_ts, 'brokerd_ts': last_trade_ts,
'ticks': [ 'ticks': [
@ -987,7 +910,7 @@ async def open_history_client(
if end_dt is None: if end_dt is None:
inow = round(time.time()) inow = round(time.time())
log.debug( print(
f'difference in time between load and processing' f'difference in time between load and processing'
f'{inow - times[-1]}' f'{inow - times[-1]}'
) )

View File

@ -37,12 +37,6 @@ import tractor
from async_generator import asynccontextmanager from async_generator import asynccontextmanager
import numpy as np import numpy as np
import wrapt import wrapt
# TODO, port to `httpx`/`trio-websocket` whenver i get back to
# writing a proper ws-api streamer for this backend (since the data
# feeds are free now) as per GH feat-req:
# https://github.com/pikers/piker/issues/509
#
import asks import asks
from ..calc import humanize, percent_change from ..calc import humanize, percent_change

View File

@ -1,49 +0,0 @@
piker.clearing
______________
trade execution-n-control subsys for both live and paper trading as
well as algo-trading manual override/interaction across any backend
broker and data provider.
avail UIs
*********
order ctl
---------
the `piker.clearing` subsys is exposed mainly through
the `piker chart` GUI as a "chart trader" style UX and
is automatically enabled whenever a chart is opened.
.. ^TODO, more prose here!
the "manual" order control features are exposed via the
`piker.ui.order_mode` API and can pretty much always be
used (at least) in simulated-trading mode, aka "paper"-mode, and
the micro-manual is as follows:
``order_mode`` (
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
``search_mode`` (
``ctl-l`` or ``ctl-space`` to open,
``ctl-c`` or ``ctl-space`` to close
) :
- begin typing to have symbol search automatically lookup
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation
position (pp) mgmt
------------------
you can also configure your position allocation limits from the
sidepane.
.. ^TODO, explain and provide tut once more refined!

View File

@ -25,10 +25,7 @@ from typing import TYPE_CHECKING
import trio import trio
import tractor import tractor
from tractor.trionics import ( from tractor.trionics import broadcast_receiver
broadcast_receiver,
collapse_eg,
)
from ._util import ( from ._util import (
log, # sub-sys logger log, # sub-sys logger
@ -171,6 +168,7 @@ class OrderClient(Struct):
async def relay_orders_from_sync_code( async def relay_orders_from_sync_code(
client: OrderClient, client: OrderClient,
symbol_key: str, symbol_key: str,
to_ems_stream: tractor.MsgStream, to_ems_stream: tractor.MsgStream,
@ -244,11 +242,6 @@ async def open_ems(
async with maybe_open_emsd( async with maybe_open_emsd(
broker, broker,
# XXX NOTE, LOL so this determines the daemon `emsd` loglevel
# then FYI.. that's kinda wrong no?
# -[ ] shouldn't it be set by `pikerd -l` or no?
# -[ ] would make a lot more sense to have a subsys ctl for
# levels.. like `-l emsd.info` or something?
loglevel=loglevel, loglevel=loglevel,
) as portal: ) as portal:
@ -288,11 +281,8 @@ async def open_ems(
client._ems_stream = trades_stream client._ems_stream = trades_stream
# start sync code order msg delivery task # start sync code order msg delivery task
async with ( async with trio.open_nursery() as n:
collapse_eg(), n.start_soon(
trio.open_nursery() as tn,
):
tn.start_soon(
relay_orders_from_sync_code, relay_orders_from_sync_code,
client, client,
fqme, fqme,
@ -308,4 +298,4 @@ async def open_ems(
) )
# stop the sync-msg-relay task on exit. # stop the sync-msg-relay task on exit.
tn.cancel_scope.cancel() n.cancel_scope.cancel()

View File

@ -42,7 +42,6 @@ from bidict import bidict
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor import tractor
from tractor import trionics
from ._util import ( from ._util import (
log, # sub-sys logger log, # sub-sys logger
@ -77,6 +76,7 @@ if TYPE_CHECKING:
# TODO: numba all of this # TODO: numba all of this
def mk_check( def mk_check(
trigger_price: float, trigger_price: float,
known_last: float, known_last: float,
action: str, action: str,
@ -162,7 +162,7 @@ async def clear_dark_triggers(
router: Router, router: Router,
brokerd_orders_stream: tractor.MsgStream, brokerd_orders_stream: tractor.MsgStream,
quote_stream: tractor.MsgStream, quote_stream: tractor.ReceiveMsgStream, # noqa
broker: str, broker: str,
fqme: str, fqme: str,
@ -178,7 +178,6 @@ async def clear_dark_triggers(
''' '''
# XXX: optimize this for speed! # XXX: optimize this for speed!
# TODO: # TODO:
# - port to the new ringbuf stuff in `tractor.ipc`!
# - numba all this! # - numba all this!
# - this stream may eventually contain multiple symbols # - this stream may eventually contain multiple symbols
quote_stream._raise_on_lag = False quote_stream._raise_on_lag = False
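As a rough illustration (not piker's exact `mk_check()` rules), the dark-trigger predicate boils down to choosing a >=/<= comparison based on which side of the last known price the trigger sits::

    from typing import Callable

    def mk_cross_check(
        trigger_price: float,
        known_last: float,
    ) -> Callable[[float], float | None]:

        if trigger_price >= known_last:
            # waiting for the market to trade *up* into the trigger
            return lambda price: price if price >= trigger_price else None

        # otherwise waiting for it to trade *down* into the trigger
        return lambda price: price if price <= trigger_price else None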
@ -388,7 +387,6 @@ async def open_brokerd_dialog(
for ep_name in [ for ep_name in [
'open_trade_dialog', # probably final name? 'open_trade_dialog', # probably final name?
'trades_dialogue', # legacy 'trades_dialogue', # legacy
# ^!TODO, rm this since all backends ported no ?!?
]: ]:
trades_endpoint = getattr( trades_endpoint = getattr(
brokermod, brokermod,
@ -502,7 +500,7 @@ class Router(Struct):
''' '''
# setup at actor spawn time # setup at actor spawn time
_tn: trio.Nursery nursery: trio.Nursery
# broker to book map # broker to book map
books: dict[str, DarkBook] = {} books: dict[str, DarkBook] = {}
@ -655,11 +653,7 @@ class Router(Struct):
flume = feed.flumes[fqme] flume = feed.flumes[fqme]
first_quote: dict = flume.first_quote first_quote: dict = flume.first_quote
book: DarkBook = self.get_dark_book(broker) book: DarkBook = self.get_dark_book(broker)
book.lasts[fqme]: float = float(first_quote['last'])
if not (last := first_quote.get('last')):
last: float = flume.rt_shm.array[-1]['close']
book.lasts[fqme]: float = float(last)
async with self.maybe_open_brokerd_dialog( async with self.maybe_open_brokerd_dialog(
brokermod=brokermod, brokermod=brokermod,
@ -672,7 +666,7 @@ class Router(Struct):
# dark book clearing loop, also lives with parent # dark book clearing loop, also lives with parent
# daemon to allow dark order clearing while no # daemon to allow dark order clearing while no
# client is connected. # client is connected.
self._tn.start_soon( self.nursery.start_soon(
clear_dark_triggers, clear_dark_triggers,
self, self,
relay.brokerd_stream, relay.brokerd_stream,
@ -695,7 +689,7 @@ class Router(Struct):
# spawn a ``brokerd`` order control dialog stream # spawn a ``brokerd`` order control dialog stream
# that syncs lifetime with the parent `emsd` daemon. # that syncs lifetime with the parent `emsd` daemon.
self._tn.start_soon( self.nursery.start_soon(
translate_and_relay_brokerd_events, translate_and_relay_brokerd_events,
broker, broker,
relay.brokerd_stream, relay.brokerd_stream,
@ -722,7 +716,7 @@ class Router(Struct):
subs = self.subscribers[sub_key] subs = self.subscribers[sub_key]
sent_some: bool = False sent_some: bool = False
for client_stream in subs.copy(): for client_stream in subs:
try: try:
await client_stream.send(msg) await client_stream.send(msg)
sent_some = True sent_some = True
@ -769,12 +763,10 @@ async def _setup_persistent_emsd(
global _router global _router
# open a root "service task-nursery" for the `emsd`-actor # open a root "service nursery" for the ``emsd`` actor
async with ( async with trio.open_nursery() as service_nursery:
trionics.collapse_eg(),
trio.open_nursery() as tn _router = Router(nursery=service_nursery)
):
_router = Router(_tn=tn)
# TODO: send back the full set of persistent # TODO: send back the full set of persistent
# orders/execs? # orders/execs?
@ -1018,28 +1010,14 @@ async def translate_and_relay_brokerd_events(
status_msg.brokerd_msg = msg status_msg.brokerd_msg = msg
status_msg.src = msg.broker_details['name'] status_msg.src = msg.broker_details['name']
if not status_msg.req: await router.client_broadcast(
# likely some order change state? status_msg.req.symbol,
await tractor.pause() status_msg,
else: )
await router.client_broadcast(
status_msg.req.symbol,
status_msg,
)
if status == 'closed': if status == 'closed':
log.info( log.info(f'Execution for {oid} is complete!')
f'Execution is complete!\n' status_msg = book._active.pop(oid)
f'oid: {oid!r}\n'
)
status_msg = book._active.pop(oid, None)
if status_msg is None:
log.warning(
f'Order was already cleared from book ??\n'
f'oid: {oid!r}\n'
f'\n'
f'Maybe the order was cancelled before it was submitted ??\n'
)
elif status == 'canceled': elif status == 'canceled':
log.cancel(f'Cancellation for {oid} is complete!') log.cancel(f'Cancellation for {oid} is complete!')
@ -1204,16 +1182,12 @@ async def process_client_order_cmds(
submitting live orders immediately if requested by the client. submitting live orders immediately if requested by the client.
''' '''
# TODO, only allow `msgspec.Struct` form! # cmd: dict
cmd: dict
async for cmd in client_order_stream: async for cmd in client_order_stream:
log.info( log.info(f'Received order cmd:\n{pformat(cmd)}')
f'Received order cmd:\n'
f'{pformat(cmd)}\n'
)
# CAWT DAMN we need struct support! # CAWT DAMN we need struct support!
oid: str = str(cmd['oid']) oid = str(cmd['oid'])
# register this stream as an active order dialog (msg flow) for # register this stream as an active order dialog (msg flow) for
# this order id such that translated message from the brokerd # this order id such that translated message from the brokerd
@ -1319,7 +1293,7 @@ async def process_client_order_cmds(
case { case {
'oid': oid, 'oid': oid,
'symbol': fqme, 'symbol': fqme,
'price': price, 'price': trigger_price,
'size': size, 'size': size,
'action': ('buy' | 'sell') as action, 'action': ('buy' | 'sell') as action,
'exec_mode': ('live' | 'paper'), 'exec_mode': ('live' | 'paper'),
@ -1351,7 +1325,7 @@ async def process_client_order_cmds(
symbol=sym, symbol=sym,
action=action, action=action,
price=price, price=trigger_price,
size=size, size=size,
account=req.account, account=req.account,
) )
@ -1373,11 +1347,7 @@ async def process_client_order_cmds(
# (``translate_and_relay_brokerd_events()`` above) will # (``translate_and_relay_brokerd_events()`` above) will
# handle relaying the ems side responses back to # handle relaying the ems side responses back to
# the client/cmd sender from this request # the client/cmd sender from this request
log.info( log.info(f'Sending live order to {broker}:\n{pformat(msg)}')
f'Sending live order to {broker}:\n'
f'{pformat(msg)}'
)
await brokerd_order_stream.send(msg) await brokerd_order_stream.send(msg)
# an immediate response should be ``BrokerdOrderAck`` # an immediate response should be ``BrokerdOrderAck``
@ -1393,7 +1363,7 @@ async def process_client_order_cmds(
case { case {
'oid': oid, 'oid': oid,
'symbol': fqme, 'symbol': fqme,
'price': price, 'price': trigger_price,
'size': size, 'size': size,
'exec_mode': exec_mode, 'exec_mode': exec_mode,
'action': action, 'action': action,
@ -1421,12 +1391,7 @@ async def process_client_order_cmds(
if isnan(last): if isnan(last):
last = flume.rt_shm.array[-1]['close'] last = flume.rt_shm.array[-1]['close']
trigger_price: float = float(price) pred = mk_check(trigger_price, last, action)
pred = mk_check(
trigger_price,
last,
action,
)
# NOTE: for dark orders currently we submit # NOTE: for dark orders currently we submit
# the triggered live order at a price 5 ticks # the triggered live order at a price 5 ticks
@ -1533,7 +1498,7 @@ async def maybe_open_trade_relays(
loglevel: str = 'info', loglevel: str = 'info',
): ):
fqme, relay, feed, client_ready = await _router._tn.start( fqme, relay, feed, client_ready = await _router.nursery.start(
_router.open_trade_relays, _router.open_trade_relays,
fqme, fqme,
exec_mode, exec_mode,
@ -1563,18 +1528,19 @@ async def maybe_open_trade_relays(
@tractor.context @tractor.context
async def _emsd_main( async def _emsd_main(
ctx: tractor.Context, # becomes `ems_ctx` below ctx: tractor.Context,
fqme: str, fqme: str,
exec_mode: str, # ('paper', 'live') exec_mode: str, # ('paper', 'live')
loglevel: str|None = None, loglevel: str | None = None,
) -> tuple[ # `ctx.started()` value! ) -> tuple[
dict[ # positions dict[
tuple[str, str], # brokername, acctid # brokername, acctid
tuple[str, str],
list[BrokerdPosition], list[BrokerdPosition],
], ],
list[str], # accounts list[str],
dict[str, Status], # dialogs dict[str, Status],
]: ]:
''' '''
EMS (sub)actor entrypoint providing the execution management EMS (sub)actor entrypoint providing the execution management

View File

@ -19,7 +19,6 @@ Clearing sub-system message and protocols.
""" """
from __future__ import annotations from __future__ import annotations
from decimal import Decimal
from typing import ( from typing import (
Literal, Literal,
) )
@ -72,15 +71,7 @@ class Order(Struct):
symbol: str # | MktPair symbol: str # | MktPair
account: str # should we set a default as '' ? account: str # should we set a default as '' ?
# https://docs.python.org/3/library/decimal.html#decimal-objects price: float
#
# ?TODO? decimal usage throughout?
# -[ ] possibly leverage the `Encoder(decimal_format='number')`
# bit?
# |_https://jcristharif.com/msgspec/supported-types.html#decimal
# -[ ] should we also use it for .size?
#
price: Decimal
size: float # -ve is "sell", +ve is "buy" size: float # -ve is "sell", +ve is "buy"
brokers: list[str] = [] brokers: list[str] = []
@ -187,7 +178,7 @@ class BrokerdOrder(Struct):
time_ns: int time_ns: int
symbol: str # fqme symbol: str # fqme
price: Decimal price: float
size: float size: float
# TODO: if we instead rely on a +ve/-ve size to determine # TODO: if we instead rely on a +ve/-ve size to determine
@ -301,9 +292,6 @@ class BrokerdError(Struct):
# TODO: yeah, so we REALLY need to completely deprecate # TODO: yeah, so we REALLY need to completely deprecate
# this and use the `.accounting.Position` msg-type instead.. # this and use the `.accounting.Position` msg-type instead..
# -[ ] an alternative might be to add a `Position.summary() ->
# `PositionSummary`-msg that we generate since `Position` has a lot
# of fields by default we likely don't want to send over the wire?
class BrokerdPosition(Struct): class BrokerdPosition(Struct):
''' '''
Position update event from brokerd. Position update event from brokerd.
@ -316,4 +304,3 @@ class BrokerdPosition(Struct):
avg_price: float avg_price: float
currency: str = '' currency: str = ''
name: str = 'position' name: str = 'position'
bs_mktid: str|int|None = None

View File

@ -297,8 +297,6 @@ class PaperBoi(Struct):
# transmit pp msg to ems # transmit pp msg to ems
pp: Position = self.acnt.pps[bs_mktid] pp: Position = self.acnt.pps[bs_mktid]
# TODO, this will break if `require_only=True` was passed to
# `.update_from_ledger()`
pp_msg = BrokerdPosition( pp_msg = BrokerdPosition(
broker=self.broker, broker=self.broker,
@ -510,7 +508,7 @@ async def handle_order_requests(
reqid = await client.submit_limit( reqid = await client.submit_limit(
oid=order.oid, oid=order.oid,
symbol=f'{order.symbol}.{client.broker}', symbol=f'{order.symbol}.{client.broker}',
price=float(order.price), price=order.price,
action=order.action, action=order.action,
size=order.size, size=order.size,
# XXX: by default 0 tells ``ib_insync`` methods that # XXX: by default 0 tells ``ib_insync`` methods that
@ -655,7 +653,6 @@ async def open_trade_dialog(
# in) use manually constructed table from calling # in) use manually constructed table from calling
# the `.get_mkt_info()` provider EP above. # the `.get_mkt_info()` provider EP above.
_mktmap_table=mkt_by_fqme, _mktmap_table=mkt_by_fqme,
only_require=list(mkt_by_fqme),
) )
pp_msgs: list[BrokerdPosition] = [] pp_msgs: list[BrokerdPosition] = []

View File

@ -30,7 +30,6 @@ subsys: str = 'piker.clearing'
log = get_logger(subsys) log = get_logger(subsys)
# TODO, oof doesn't this ignore the `loglevel` then???
get_console_log = partial( get_console_log = partial(
get_console_log, get_console_log,
name=subsys, name=subsys,

View File

@ -134,65 +134,86 @@ def pikerd(
Spawn the piker broker-daemon. Spawn the piker broker-daemon.
''' '''
# from tractor.devx import maybe_open_crash_handler from tractor.devx import maybe_open_crash_handler
# with maybe_open_crash_handler(pdb=False): with maybe_open_crash_handler(pdb=pdb):
log = get_console_log(loglevel, name='cli') log = get_console_log(loglevel, name='cli')
if pdb: if pdb:
log.warning(( log.warning((
"\n" "\n"
"!!! YOU HAVE ENABLED DAEMON DEBUG MODE !!!\n" "!!! YOU HAVE ENABLED DAEMON DEBUG MODE !!!\n"
"When a `piker` daemon crashes it will block the " "When a `piker` daemon crashes it will block the "
"task-thread until resumed from console!\n" "task-thread until resumed from console!\n"
"\n" "\n"
))
# service-actor registry endpoint socket-address set
regaddrs: list[tuple[str, int]] = []
conf, _ = config.load(
conf_name='conf',
)
network: dict = conf.get('network')
if (
network is None
and not maddr
):
regaddrs = [(
_default_registry_host,
_default_registry_port,
)]
else:
eps: dict = load_trans_eps(
network,
maddr,
)
for layers in eps['pikerd']:
regaddrs.append((
layers['ipv4']['addr'],
layers['tcp']['port'],
)) ))
from .. import service # service-actor registry endpoint socket-address set
regaddrs: list[tuple[str, int]] = []
async def main(): conf, _ = config.load(
service_mngr: service.Services conf_name='conf',
async with ( )
service.open_pikerd( network: dict = conf.get('network')
registry_addrs=regaddrs, if (
loglevel=loglevel, network is None
debug_mode=pdb, and not maddr
# enable_transports=['uds'],
enable_transports=['tcp'],
) as service_mngr,
): ):
assert service_mngr regaddrs = [(
# ?TODO? spawn all other sub-actor daemons according to _default_registry_host,
# multiaddress endpoint spec defined by user config _default_registry_port,
await trio.sleep_forever() )]
trio.run(main) else:
eps: dict = load_trans_eps(
network,
maddr,
)
for layers in eps['pikerd']:
regaddrs.append((
layers['ipv4']['addr'],
layers['tcp']['port'],
))
from .. import service
async def main():
service_mngr: service.Services
async with (
service.open_pikerd(
registry_addrs=regaddrs,
loglevel=loglevel,
debug_mode=pdb,
) as service_mngr, # normally delivers a ``Services`` handle
# AsyncExitStack() as stack,
):
# TODO: spawn all other sub-actor daemons according to
# multiaddress endpoint spec defined by user config
assert service_mngr
# if tsdb:
# dname, conf = await stack.enter_async_context(
# service.marketstore.start_ahab_daemon(
# service_mngr,
# loglevel=loglevel,
# )
# )
# log.info(f'TSDB `{dname}` up with conf:\n{conf}')
# if es:
# dname, conf = await stack.enter_async_context(
# service.elastic.start_ahab_daemon(
# service_mngr,
# loglevel=loglevel,
# )
# )
# log.info(f'DB `{dname}` up with conf:\n{conf}')
await trio.sleep_forever()
trio.run(main)
@click.group(context_settings=config._context_defaults) @click.group(context_settings=config._context_defaults)
@ -307,10 +328,6 @@ def services(config, tl, ports):
if not ports: if not ports:
ports = [_default_registry_port] ports = [_default_registry_port]
addr = tractor._addr.wrap_address(
addr=(host, ports[0])
)
async def list_services(): async def list_services():
nonlocal host nonlocal host
async with ( async with (
@ -318,18 +335,16 @@ def services(config, tl, ports):
name='service_query', name='service_query',
loglevel=config['loglevel'] if tl else None, loglevel=config['loglevel'] if tl else None,
), ),
tractor.get_registry( tractor.get_arbiter(
addr=addr, host=host,
port=ports[0]
) as portal ) as portal
): ):
registry = await portal.run_from_ns( registry = await portal.run_from_ns('self', 'get_registry')
'self',
'get_registry',
)
json_d = {} json_d = {}
for key, socket in registry.items(): for key, socket in registry.items():
json_d[key] = f'{socket}' host, port = socket
json_d[key] = f'{host}:{port}'
click.echo(f"{colorize_json(json_d)}") click.echo(f"{colorize_json(json_d)}")
trio.run(list_services) trio.run(list_services)

View File

@ -41,13 +41,10 @@ from .log import get_logger
log = get_logger('broker-config') log = get_logger('broker-config')
# XXX NOTE: taken from `click` # XXX NOTE: taken from ``click`` since apparently they have some
# |_https://github.com/pallets/click/blob/main/src/click/utils.py#L449 # super weirdness with sigint and sudo..no clue
# # we're probably going to slowly just modify it to our own version over
# (since apparently they have some super weirdness with SIGINT and # time..
# sudo.. no clue we're probably going to slowly just modify it to our
# own version over time..)
#
def get_app_dir( def get_app_dir(
app_name: str, app_name: str,
roaming: bool = True, roaming: bool = True,
@ -107,15 +104,14 @@ def get_app_dir(
# `tractor`) with the testing dir and check for it whenever we # `tractor`) with the testing dir and check for it whenever we
# detect `pytest` is being used (which it isn't under normal # detect `pytest` is being used (which it isn't under normal
# operation). # operation).
# if "pytest" in sys.modules: if "pytest" in sys.modules:
# import tractor import tractor
# actor = tractor.current_actor(err_on_no_runtime=False) actor = tractor.current_actor(err_on_no_runtime=False)
# if actor: # runtime is up if actor: # runtime is up
# rvs = tractor._state._runtime_vars rvs = tractor._state._runtime_vars
# import pdbp; pdbp.set_trace() testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
# testdirpath = Path(rvs['piker_vars']['piker_test_dir']) assert testdirpath.exists(), 'piker test harness might be borked!?'
# assert testdirpath.exists(), 'piker test harness might be borked!?' app_name = str(testdirpath)
# app_name = str(testdirpath)
if platform.system() == 'Windows': if platform.system() == 'Windows':
key = "APPDATA" if roaming else "LOCALAPPDATA" key = "APPDATA" if roaming else "LOCALAPPDATA"
@ -264,7 +260,7 @@ def load(
MutableMapping, MutableMapping,
] = tomllib.loads, ] = tomllib.loads,
touch_if_dne: bool = True, touch_if_dne: bool = False,
**tomlkws, **tomlkws,
@ -273,7 +269,7 @@ def load(
Load config file by name. Load config file by name.
If desired config is not in the top level piker-user config path then If desired config is not in the top level piker-user config path then
pass the `path: Path` explicitly. pass the ``path: Path`` explicitly.
''' '''
# create the $HOME/.config/piker dir if dne # create the $HOME/.config/piker dir if dne
@ -288,8 +284,7 @@ def load(
if ( if (
not path.is_file() not path.is_file()
and and touch_if_dne
touch_if_dne
): ):
# only do a template if no path provided, # only do a template if no path provided,
# just touch an empty file with same name. # just touch an empty file with same name.
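Hypothetical usage matching the `load()` signature above (the second return value is assumed here to be the on-disk config path)::

    conf, path = load(
        conf_name='conf',
        touch_if_dne=True,  # create an empty file if it doesn't exist yet
    )
    print(conf.get('network'))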

View File

@ -95,12 +95,6 @@ class Sampler:
# history loading. # history loading.
incr_task_cs: trio.CancelScope | None = None incr_task_cs: trio.CancelScope | None = None
bcast_errors: tuple[Exception] = (
trio.BrokenResourceError,
trio.ClosedResourceError,
trio.EndOfChannel,
)
# holds all the ``tractor.Context`` remote subscriptions for # holds all the ``tractor.Context`` remote subscriptions for
# a particular sample period increment event: all subscribers are # a particular sample period increment event: all subscribers are
# notified on a step. # notified on a step.
@ -264,15 +258,14 @@ class Sampler:
subs: set subs: set
last_ts, subs = pair last_ts, subs = pair
# NOTE, for debugging pub-sub issues task = trio.lowlevel.current_task()
# task = trio.lowlevel.current_task() log.debug(
# log.debug( f'SUBS {self.subscribers}\n'
# f'AlL-SUBS@{period_s!r}: {self.subscribers}\n' f'PAIR {pair}\n'
# f'PAIR: {pair}\n' f'TASK: {task}: {id(task)}\n'
# f'TASK: {task}: {id(task)}\n' f'broadcasting {period_s} -> {last_ts}\n'
# f'broadcasting {period_s} -> {last_ts}\n' # f'consumers: {subs}'
# f'consumers: {subs}' )
# )
borked: set[MsgStream] = set() borked: set[MsgStream] = set()
sent: set[MsgStream] = set() sent: set[MsgStream] = set()
while True: while True:
@ -289,11 +282,12 @@ class Sampler:
await stream.send(msg) await stream.send(msg)
sent.add(stream) sent.add(stream)
except self.bcast_errors as err: except (
trio.BrokenResourceError,
trio.ClosedResourceError
):
log.error( log.error(
f'Connection dropped for IPC ctx\n' f'{stream._ctx.chan.uid} dropped connection'
f'{stream._ctx}\n\n'
f'Due to {type(err)}'
) )
borked.add(stream) borked.add(stream)
else: else:
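The send loop above follows a broadcast-and-prune pattern; a hedged standalone sketch::

    import trio

    async def broadcast_to_subs(subs: set, msg: dict) -> None:
        borked: set = set()
        for stream in subs.copy():
            try:
                await stream.send(msg)
            except (
                trio.BrokenResourceError,
                trio.ClosedResourceError,
                trio.EndOfChannel,
            ):
                # the consumer's IPC transport died; mark it for removal
                borked.add(stream)

        for stream in borked:
            subs.discard(stream)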
@ -400,8 +394,7 @@ async def register_with_sampler(
finally: finally:
if ( if (
sub_for_broadcasts sub_for_broadcasts
and and subs
subs
): ):
try: try:
subs.remove(stream) subs.remove(stream)
@ -568,7 +561,8 @@ async def open_sample_stream(
async def sample_and_broadcast( async def sample_and_broadcast(
bus: _FeedsBus,
bus: _FeedsBus, # noqa
rt_shm: ShmArray, rt_shm: ShmArray,
hist_shm: ShmArray, hist_shm: ShmArray,
quote_stream: trio.abc.ReceiveChannel, quote_stream: trio.abc.ReceiveChannel,
@ -588,33 +582,11 @@ async def sample_and_broadcast(
overruns = Counter() overruns = Counter()
# NOTE, only used for debugging live-data-feed issues, though
# this should be resolved more correctly in the future using the
# new typed-msgspec feats of `tractor`!
#
# XXX, a multiline nested `dict` formatter (since rn quote-msgs
# are just that).
# pfmt: Callable[[str], str] = mk_repr()
# iterate stream delivered by broker # iterate stream delivered by broker
async for quotes in quote_stream: async for quotes in quote_stream:
# print(quotes) # print(quotes)
# XXX WARNING XXX only enable for debugging bc ow can cost # TODO: ``numba`` this!
# ALOT of perf with HF-feedz!!!
#
# log.info(
# 'Rx live quotes:\n'
# f'{pfmt(quotes)}'
# )
# TODO,
# -[ ] `numba` or `cython`-nize this loop possibly?
# |_alternatively could we do it in rust somehow by unpacking
# arrow msgs instead of using `msgspec`?
# -[ ] use `msgspec.Struct` support in new typed-msging from
# `tractor` to ensure only allowed msgs are transmitted?
#
for broker_symbol, quote in quotes.items(): for broker_symbol, quote in quotes.items():
# TODO: in theory you can send the IPC msg *before* writing # TODO: in theory you can send the IPC msg *before* writing
# to the sharedmem array to decrease latency, however, that # to the sharedmem array to decrease latency, however, that
@ -687,21 +659,6 @@ async def sample_and_broadcast(
sub_key: str = broker_symbol.lower() sub_key: str = broker_symbol.lower()
subs: set[Sub] = bus.get_subs(sub_key) subs: set[Sub] = bus.get_subs(sub_key)
# TODO, figure out how to make this useful whilst
# incorporating feed "pausing" ..
#
# if not subs:
# all_bs_fqmes: list[str] = list(
# bus._subscribers.keys()
# )
# log.warning(
# f'No subscribers for {brokername!r} live-quote ??\n'
# f'broker_symbol: {broker_symbol}\n\n'
# f'Maybe the backend-sys symbol does not match one of,\n'
# f'{pfmt(all_bs_fqmes)}\n'
# )
# NOTE: by default the broker backend doesn't append # NOTE: by default the broker backend doesn't append
# it's own "name" into the fqme schema (but maybe it # it's own "name" into the fqme schema (but maybe it
# should?) so we have to manually generate the correct # should?) so we have to manually generate the correct
@ -740,7 +697,7 @@ async def sample_and_broadcast(
log.warning( log.warning(
f'Feed OVERRUN {sub_key}' f'Feed OVERRUN {sub_key}'
f'@{bus.brokername} -> \n' '@{bus.brokername} -> \n'
f'feed @ {chan.uid}\n' f'feed @ {chan.uid}\n'
f'throttle = {throttle} Hz' f'throttle = {throttle} Hz'
) )
@ -771,14 +728,18 @@ async def sample_and_broadcast(
if lags > 10: if lags > 10:
await tractor.pause() await tractor.pause()
except Sampler.bcast_errors as ipc_err: except (
trio.BrokenResourceError,
trio.ClosedResourceError,
trio.EndOfChannel,
):
ctx: Context = ipc._ctx ctx: Context = ipc._ctx
chan: Channel = ctx.chan chan: Channel = ctx.chan
if ctx: if ctx:
log.warning( log.warning(
f'Dropped `brokerd`-feed for {broker_symbol!r} due to,\n' 'Dropped `brokerd`-quotes-feed connection:\n'
f'x>) {ctx.cid}@{chan.uid}' f'{broker_symbol}:'
f'|_{ipc_err!r}\n\n' f'{ctx.cid}@{chan.uid}'
) )
if sub.throttle_rate: if sub.throttle_rate:
assert ipc._closed assert ipc._closed
@ -795,11 +756,12 @@ async def sample_and_broadcast(
async def uniform_rate_send( async def uniform_rate_send(
rate: float, rate: float,
quote_stream: trio.abc.ReceiveChannel, quote_stream: trio.abc.ReceiveChannel,
stream: MsgStream, stream: MsgStream,
task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED, task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
) -> None: ) -> None:
''' '''
@ -817,16 +779,13 @@ async def uniform_rate_send(
https://gist.github.com/njsmith/7ea44ec07e901cb78ebe1dd8dd846cb9 https://gist.github.com/njsmith/7ea44ec07e901cb78ebe1dd8dd846cb9
''' '''
# ?TODO? dynamically compute the **actual** approx overhead latency per cycle # TODO: compute the approx overhead latency per cycle
# instead of this magic # bidinezz? left_to_sleep = throttle_period = 1/rate - 0.000616
throttle_period: float = 1/rate - 0.000616
left_to_sleep: float = throttle_period
# send cycle state # send cycle state
first_quote: dict|None
first_quote = last_quote = None first_quote = last_quote = None
last_send: float = time.time() last_send = time.time()
diff: float = 0 diff = 0
task_status.started() task_status.started()
ticks_by_type: dict[ ticks_by_type: dict[
@ -837,28 +796,22 @@ async def uniform_rate_send(
clear_types = _tick_groups['clears'] clear_types = _tick_groups['clears']
while True: while True:
# compute the remaining time to sleep for this throttled cycle # compute the remaining time to sleep for this throttled cycle
left_to_sleep: float = throttle_period - diff left_to_sleep = throttle_period - diff
if left_to_sleep > 0: if left_to_sleep > 0:
cs: trio.CancelScope
with trio.move_on_after(left_to_sleep) as cs: with trio.move_on_after(left_to_sleep) as cs:
sym: str
last_quote: dict
try: try:
sym, last_quote = await quote_stream.receive() sym, last_quote = await quote_stream.receive()
except trio.EndOfChannel: except trio.EndOfChannel:
log.exception( log.exception(f"feed for {stream} ended?")
f'Live stream for feed ended?\n'
f'<=c\n'
f' |_[{stream!r}\n'
)
break break
diff: float = time.time() - last_send diff = time.time() - last_send
if not first_quote: if not first_quote:
first_quote: float = last_quote first_quote = last_quote
# first_quote['tbt'] = ticks_by_type # first_quote['tbt'] = ticks_by_type
if (throttle_period - diff) > 0: if (throttle_period - diff) > 0:
@ -919,9 +872,7 @@ async def uniform_rate_send(
# TODO: now if only we could sync this to the display # TODO: now if only we could sync this to the display
# rate timing exactly lul # rate timing exactly lul
try: try:
await stream.send({ await stream.send({sym: first_quote})
sym: first_quote
})
except tractor.RemoteActorError as rme: except tractor.RemoteActorError as rme:
if rme.type is not tractor._exceptions.StreamOverrun: if rme.type is not tractor._exceptions.StreamOverrun:
raise raise
@ -932,28 +883,19 @@ async def uniform_rate_send(
f'{sym}:{ctx.cid}@{chan.uid}' f'{sym}:{ctx.cid}@{chan.uid}'
) )
# NOTE: any of these can be raised by `tractor`'s IPC
# transport-layer and we want to be highly resilient
# to consumers which crash or lose network connection.
# I.e. we **DO NOT** want to crash and propagate up to
# ``pikerd`` these kinds of errors!
except ( except (
# NOTE: any of these can be raised by ``tractor``'s IPC
# transport-layer and we want to be highly resilient
# to consumers which crash or lose network connection.
# I.e. we **DO NOT** want to crash and propagate up to
# ``pikerd`` these kinds of errors!
trio.ClosedResourceError,
trio.BrokenResourceError,
ConnectionResetError, ConnectionResetError,
) + Sampler.bcast_errors as ipc_err: ):
match ipc_err: # if the feed consumer goes down then drop
case trio.EndOfChannel(): # out of this rate limiter
log.info( log.warning(f'{stream} closed')
f'{stream} terminated by peer,\n'
f'{ipc_err!r}'
)
case _:
# if the feed consumer goes down then drop
# out of this rate limiter
log.warning(
f'{stream} closed due to,\n'
f'{ipc_err!r}'
)
await stream.aclose() await stream.aclose()
return return
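A compact sketch of the fixed-rate relay loop above: drain the upstream channel for at most one throttle period, then forward the most recent quote (names are generic placeholders, not the real piker API)::

    import time
    import trio

    async def throttled_relay(rate_hz: float, quotes_rx, send) -> None:
        period: float = 1 / rate_hz
        last_quote: dict | None = None
        last_send: float = time.time()
        while True:
            left: float = period - (time.time() - last_send)
            if left > 0:
                with trio.move_on_after(left):
                    try:
                        _sym, last_quote = await quotes_rx.receive()
                    except trio.EndOfChannel:
                        return
                    # keep draining until the period elapses
                    continue
            if last_quote is not None:
                await send(last_quote)
                last_quote = None
            last_send = time.time()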

View File

@ -31,7 +31,6 @@ from pathlib import Path
from pprint import pformat from pprint import pformat
from typing import ( from typing import (
Any, Any,
Callable,
Sequence, Sequence,
Hashable, Hashable,
TYPE_CHECKING, TYPE_CHECKING,
@ -57,7 +56,7 @@ from piker.brokers import (
) )
if TYPE_CHECKING: if TYPE_CHECKING:
from piker.accounting import ( from ..accounting import (
Asset, Asset,
MktPair, MktPair,
) )
@ -91,18 +90,6 @@ class SymbologyCache(Struct):
# provided by the backend pkg. # provided by the backend pkg.
mktmaps: dict[str, MktPair] = field(default_factory=dict) mktmaps: dict[str, MktPair] = field(default_factory=dict)
def pformat(self) -> str:
return (
f'<{type(self).__name__}(\n'
f' .mod: {self.mod!r}\n'
f' .assets: {len(self.assets)!r}\n'
f' .pairs: {len(self.pairs)!r}\n'
f' .mktmaps: {len(self.mktmaps)!r}\n'
f')>'
)
__repr__ = pformat
def write_config(self) -> None: def write_config(self) -> None:
# put the backend's pair-struct type ref at the top # put the backend's pair-struct type ref at the top
@ -162,68 +149,57 @@ class SymbologyCache(Struct):
'Implement `Client.get_assets()`!' 'Implement `Client.get_assets()`!'
) )
get_mkt_pairs: Callable|None = getattr( if get_mkt_pairs := getattr(client, 'get_mkt_pairs', None):
client,
'get_mkt_pairs', pairs: dict[str, Struct] = await get_mkt_pairs()
None, for bs_fqme, pair in pairs.items():
)
if not get_mkt_pairs: # NOTE: every backend defined pair should
# declare it's ns path for roundtrip
# serialization lookup.
if not getattr(pair, 'ns_path', None):
raise TypeError(
f'Pair-struct for {self.mod.name} MUST define a '
'`.ns_path: str`!\n'
f'{pair}'
)
entry = await self.mod.get_mkt_info(pair.bs_fqme)
if not entry:
continue
mkt: MktPair
pair: Struct
mkt, _pair = entry
assert _pair is pair, (
f'`{self.mod.name}` backend probably has a '
'keying-symmetry problem between the pair-`Struct` '
'returned from `Client.get_mkt_pairs()`and the '
'module level endpoint: `.get_mkt_info()`\n\n'
"Here's the struct diff:\n"
f'{_pair - pair}'
)
# NOTE XXX: this means backends MUST implement
# a `Struct.bs_mktid: str` field to provide
# a native-keyed map to their own symbol
# set(s).
self.pairs[pair.bs_mktid] = pair
# NOTE: `MktPair`s are keyed here using piker's
# internal FQME schema so that search,
# accounting and feed init can be accomplished
# a sane, uniform, normalized basis.
self.mktmaps[mkt.fqme] = mkt
self.pair_ns_path: str = tractor.msg.NamespacePath.from_ref(
pair,
)
else:
log.warning( log.warning(
'No symbology cache `Pair` support for `{provider}`..\n' 'No symbology cache `Pair` support for `{provider}`..\n'
'Implement `Client.get_mkt_pairs()`!' 'Implement `Client.get_mkt_pairs()`!'
) )
return self
pairs: dict[str, Struct] = await get_mkt_pairs()
if not pairs:
log.warning(
'No pairs from initial {provider!r} sym-cache request?\n\n'
'`Client.get_mkt_pairs()` -> {pairs!r} ?'
)
return self
for bs_fqme, pair in pairs.items():
if not getattr(pair, 'ns_path', None):
# XXX: every backend defined pair must declare
# a `.ns_path: tractor.NamespacePath` to enable
# roundtrip serialization lookup from a local
# cache file.
raise TypeError(
f'Pair-struct for {self.mod.name} MUST define a '
'`.ns_path: str`!\n\n'
f'{pair!r}'
)
entry = await self.mod.get_mkt_info(pair.bs_fqme)
if not entry:
continue
mkt: MktPair
pair: Struct
mkt, _pair = entry
assert _pair is pair, (
f'`{self.mod.name}` backend probably has a '
'keying-symmetry problem between the pair-`Struct` '
'returned from `Client.get_mkt_pairs()`and the '
'module level endpoint: `.get_mkt_info()`\n\n'
"Here's the struct diff:\n"
f'{_pair - pair}'
)
# NOTE XXX: this means backends MUST implement
# a `Struct.bs_mktid: str` field to provide
# a native-keyed map to their own symbol
# set(s).
self.pairs[pair.bs_mktid] = pair
# NOTE: `MktPair`s are keyed here using piker's
# internal FQME schema so that search,
# accounting and feed init can be accomplished
# a sane, uniform, normalized basis.
self.mktmaps[mkt.fqme] = mkt
self.pair_ns_path: str = tractor.msg.NamespacePath.from_ref(
pair,
)
return self return self

View File

@ -27,6 +27,7 @@ from functools import partial
from types import ModuleType from types import ModuleType
from typing import ( from typing import (
Any, Any,
Optional,
Callable, Callable,
AsyncContextManager, AsyncContextManager,
AsyncGenerator, AsyncGenerator,
@ -34,7 +35,6 @@ from typing import (
) )
import json import json
import tractor
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
from trio_websocket import ( from trio_websocket import (
@ -167,7 +167,7 @@ async def _reconnect_forever(
async def proxy_msgs( async def proxy_msgs(
ws: WebSocketConnection, ws: WebSocketConnection,
rent_cs: trio.CancelScope, # parent cancel scope pcs: trio.CancelScope, # parent cancel scope
): ):
''' '''
Receive (under `timeout` deadline) all msgs from from underlying Receive (under `timeout` deadline) all msgs from from underlying
@ -192,7 +192,7 @@ async def _reconnect_forever(
f'{url} connection bail with:' f'{url} connection bail with:'
) )
await trio.sleep(0.5) await trio.sleep(0.5)
rent_cs.cancel() pcs.cancel()
# go back to reconnect loop in parent task # go back to reconnect loop in parent task
return return
@ -204,7 +204,7 @@ async def _reconnect_forever(
f'{src_mod}\n' f'{src_mod}\n'
'WS feed seems down and slow af.. reconnecting\n' 'WS feed seems down and slow af.. reconnecting\n'
) )
rent_cs.cancel() pcs.cancel()
# go back to reconnect loop in parent task # go back to reconnect loop in parent task
return return
@ -228,12 +228,7 @@ async def _reconnect_forever(
nobsws._connected = trio.Event() nobsws._connected = trio.Event()
task_status.started() task_status.started()
mc_state: trio._channel.MemoryChannelState = snd._state while not snd._closed:
while (
mc_state.open_receive_channels > 0
and
mc_state.open_send_channels > 0
):
log.info( log.info(
f'{src_mod}\n' f'{src_mod}\n'
f'{url} trying (RE)CONNECT' f'{url} trying (RE)CONNECT'
@ -242,11 +237,10 @@ async def _reconnect_forever(
ws: WebSocketConnection ws: WebSocketConnection
try: try:
async with ( async with (
trio.open_nursery() as n,
open_websocket_url(url) as ws, open_websocket_url(url) as ws,
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn,
): ):
cs = nobsws._cs = tn.cancel_scope cs = nobsws._cs = n.cancel_scope
nobsws._ws = ws nobsws._ws = ws
log.info( log.info(
f'{src_mod}\n' f'{src_mod}\n'
@ -254,7 +248,7 @@ async def _reconnect_forever(
) )
# begin relay loop to forward msgs # begin relay loop to forward msgs
tn.start_soon( n.start_soon(
proxy_msgs, proxy_msgs,
ws, ws,
cs, cs,
@ -268,7 +262,7 @@ async def _reconnect_forever(
# TODO: should we return an explicit sub-cs # TODO: should we return an explicit sub-cs
# from this fixture task? # from this fixture task?
await tn.start( await n.start(
open_fixture, open_fixture,
fixture, fixture,
nobsws, nobsws,
@ -278,23 +272,11 @@ async def _reconnect_forever(
# to let tasks run **inside** the ws open block above. # to let tasks run **inside** the ws open block above.
nobsws._connected.set() nobsws._connected.set()
await trio.sleep_forever() await trio.sleep_forever()
except HandshakeError:
log.exception(f'Retrying connection')
except ( # ws & nursery block ends
HandshakeError,
ConnectionRejected,
):
log.exception('Retrying connection')
await trio.sleep(0.5) # throttle
except BaseException as _berr:
berr = _berr
log.exception(
'Reconnect-attempt failed ??\n'
)
await trio.sleep(0.2) # throttle
raise berr
#|_ws & nursery block ends
nobsws._connected = trio.Event() nobsws._connected = trio.Event()
if cs.cancelled_caught: if cs.cancelled_caught:
log.cancel( log.cancel(
@ -342,25 +324,21 @@ async def open_autorecon_ws(
connectivity errors, or some user defined recv timeout. connectivity errors, or some user defined recv timeout.
You can provide a ``fixture`` async-context-manager which will be You can provide a ``fixture`` async-context-manager which will be
entered/exited around each connection reset; eg. for entered/exited around each connection reset; eg. for (re)requesting
(re)requesting subscriptions without requiring streaming setup subscriptions without requiring streaming setup code to rerun.
code to rerun.
''' '''
snd: trio.MemorySendChannel snd: trio.MemorySendChannel
rcv: trio.MemoryReceiveChannel rcv: trio.MemoryReceiveChannel
snd, rcv = trio.open_memory_channel(616) snd, rcv = trio.open_memory_channel(616)
async with ( async with trio.open_nursery() as n:
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn
):
nobsws = NoBsWs( nobsws = NoBsWs(
url, url,
rcv, rcv,
msg_recv_timeout=msg_recv_timeout, msg_recv_timeout=msg_recv_timeout,
) )
await tn.start( await n.start(
partial( partial(
_reconnect_forever, _reconnect_forever,
url, url,
@ -373,15 +351,16 @@ async def open_autorecon_ws(
await nobsws._connected.wait() await nobsws._connected.wait()
assert nobsws._cs assert nobsws._cs
assert nobsws.connected() assert nobsws.connected()
try: try:
yield nobsws yield nobsws
finally: finally:
tn.cancel_scope.cancel() n.cancel_scope.cancel()
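Hypothetical usage of `open_autorecon_ws()` with a re-subscription `fixture` (the url and topic are placeholders; the fixture is assumed to be an acm taking the ws and yielding once setup is done)::

    from contextlib import asynccontextmanager as acm

    @acm
    async def resub(ws):
        # re-sent on every (re)connect
        await ws.send_msg({'type': 'subscribe', 'topic': '/market/ticker:BTC-USDT'})
        yield

    async def consume_feed():
        async with open_autorecon_ws(
            'wss://example.com/stream',
            fixture=resub,
        ) as ws:
            async for msg in ws:
                print(msg)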
''' '''
JSONRPC response-request style machinery for transparent multiplexing JSONRPC response-request style machinery for transparent multiplexing of msgs
of msgs over a `NoBsWs`. over a NoBsWs.
''' '''
@ -389,8 +368,8 @@ of msgs over a `NoBsWs`.
class JSONRPCResult(Struct): class JSONRPCResult(Struct):
id: int id: int
jsonrpc: str = '2.0' jsonrpc: str = '2.0'
result: dict|None = None result: Optional[dict] = None
error: dict|None = None error: Optional[dict] = None
@acm @acm
@ -398,82 +377,43 @@ async def open_jsonrpc_session(
url: str, url: str,
start_id: int = 0, start_id: int = 0,
response_type: type = JSONRPCResult, response_type: type = JSONRPCResult,
msg_recv_timeout: float = float('inf'), request_type: Optional[type] = None,
# ^NOTE, since only `deribit` is using this jsonrpc stuff atm request_hook: Optional[Callable] = None,
# and options mkts are generally "slow moving".. error_hook: Optional[Callable] = None,
#
# FURTHER if we break the underlying ws connection then since we
# don't pass a `fixture` to the task that manages `NoBsWs`, i.e.
# `_reconnect_forever()`, the jsonrpc "transport pipe" get's
# broken and never restored with wtv init sequence is required to
# re-establish a working req-resp session.
) -> Callable[[str, dict], dict]: ) -> Callable[[str, dict], dict]:
'''
Init a json-RPC-over-websocket connection to the provided `url`.
A `json_rpc: Callable[[str, dict], dict]` is delivered to the
caller for sending requests and a bg-`trio.Task` handles
processing of response msgs including error reporting/raising in
the parent/caller task.
'''
# NOTE, store all request msgs so we can raise errors on the
# caller side!
req_msgs: dict[int, dict] = {}
async with ( async with (
trio.open_nursery() as tn, trio.open_nursery() as n,
open_autorecon_ws( open_autorecon_ws(url) as ws
url=url,
msg_recv_timeout=msg_recv_timeout,
) as ws
): ):
rpc_id: Iterable[int] = count(start_id) rpc_id: Iterable = count(start_id)
rpc_results: dict[int, dict] = {} rpc_results: dict[int, dict] = {}
async def json_rpc( async def json_rpc(method: str, params: dict) -> dict:
method: str,
params: dict,
) -> dict:
''' '''
perform a json rpc call and wait for the result, raise exception in perform a json rpc call and wait for the result, raise exception in
case of error field present on response case of error field present on response
''' '''
nonlocal req_msgs
req_id: int = next(rpc_id)
msg = { msg = {
'jsonrpc': '2.0', 'jsonrpc': '2.0',
'id': req_id, 'id': next(rpc_id),
'method': method, 'method': method,
'params': params 'params': params
} }
_id = msg['id'] _id = msg['id']
result = rpc_results[_id] = { rpc_results[_id] = {
'result': None, 'result': None,
'error': None, 'event': trio.Event()
'event': trio.Event(), # signal caller resp arrived
} }
req_msgs[_id] = msg
await ws.send_msg(msg) await ws.send_msg(msg)
# wait for reponse before unblocking requester code
await rpc_results[_id]['event'].wait() await rpc_results[_id]['event'].wait()
if (maybe_result := result['result']): ret = rpc_results[_id]['result']
ret = maybe_result
del rpc_results[_id]
else: del rpc_results[_id]
err = result['error']
raise Exception(
f'JSONRPC request failed\n'
f'req: {msg}\n'
f'resp: {err}\n'
)
if ret.error is not None: if ret.error is not None:
raise Exception(json.dumps(ret.error, indent=4)) raise Exception(json.dumps(ret.error, indent=4))
@ -488,7 +428,6 @@ async def open_jsonrpc_session(
the server side. the server side.
''' '''
nonlocal req_msgs
async for msg in ws: async for msg in ws:
match msg: match msg:
case { case {
@ -512,28 +451,19 @@ async def open_jsonrpc_session(
'params': _, 'params': _,
}: }:
log.debug(f'Received\n{msg}') log.debug(f'Received\n{msg}')
if request_hook:
await request_hook(request_type(**msg))
case { case {
'error': error 'error': error
}: }:
# retrieve orig request msg, set error log.warning(f'Received\n{error}')
# response in original "result" msg, if error_hook:
# THEN FINALLY set the event to signal caller await error_hook(response_type(**msg))
# to raise the error in the parent task.
req_id: int = error['id']
req_msg: dict = req_msgs[req_id]
result: dict = rpc_results[req_id]
result['error'] = error
result['event'].set()
log.error(
f'JSONRPC request failed\n'
f'req: {req_msg}\n'
f'resp: {error}\n'
)
case _: case _:
log.warning(f'Unhandled JSON-RPC msg!?\n{msg}') log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')
tn.start_soon(recv_task) n.start_soon(recv_task)
yield json_rpc yield json_rpc
tn.cancel_scope.cancel() n.cancel_scope.cancel()
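The multiplexing core reduces to an id-keyed table of `trio.Event`s; a minimal sketch independent of `NoBsWs` (`send_msg`/`recv_msgs` are stand-ins)::

    from itertools import count
    import trio

    class JsonRpcMux:
        def __init__(self, send_msg, start_id: int = 0):
            self._send_msg = send_msg
            self._ids = count(start_id)
            self._pending: dict[int, dict] = {}

        async def request(self, method: str, params: dict) -> dict:
            req_id: int = next(self._ids)
            entry = self._pending[req_id] = {
                'msg': None,
                'event': trio.Event(),
            }
            await self._send_msg({
                'jsonrpc': '2.0',
                'id': req_id,
                'method': method,
                'params': params,
            })
            # block until the recv task matches our id below
            await entry['event'].wait()
            resp: dict = self._pending.pop(req_id)['msg']
            if resp.get('error') is not None:
                raise RuntimeError(f'JSONRPC request failed:\n{resp["error"]}')
            return resp['result']

        async def recv_task(self, recv_msgs) -> None:
            # run in a background task: route responses by id
            async for msg in recv_msgs:
                entry = self._pending.get(msg.get('id'))
                if entry is not None:
                    entry['msg'] = msg
                    entry['event'].set()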

View File

@ -39,7 +39,6 @@ from typing import (
AsyncContextManager, AsyncContextManager,
Awaitable, Awaitable,
Sequence, Sequence,
TYPE_CHECKING,
) )
import trio import trio
@ -76,10 +75,6 @@ from ._sampling import (
uniform_rate_send, uniform_rate_send,
) )
if TYPE_CHECKING:
from tractor._addr import Address
from tractor.msg.types import Aid
class Sub(Struct, frozen=True): class Sub(Struct, frozen=True):
''' '''
@ -357,9 +352,7 @@ async def allocate_persistent_feed(
# yield back control to starting nursery once we receive either # yield back control to starting nursery once we receive either
# some history or a real-time quote. # some history or a real-time quote.
log.info( log.info(f'loading OHLCV history: {fqme}')
f'loading OHLCV history: {fqme!r}\n'
)
await some_data_ready.wait() await some_data_ready.wait()
flume = Flume( flume = Flume(
@ -730,10 +723,7 @@ class Feed(Struct):
async for msg in stream: async for msg in stream:
await tx.send(msg) await tx.send(msg)
async with ( async with trio.open_nursery() as nurse:
tractor.trionics.collapse_eg(),
trio.open_nursery() as nurse
):
# spawn a relay task for each stream so that they all # spawn a relay task for each stream so that they all
# multiplex to a common channel. # multiplex to a common channel.
for brokername in mods: for brokername in mods:
@ -796,6 +786,7 @@ async def install_brokerd_search(
@acm @acm
async def maybe_open_feed( async def maybe_open_feed(
fqmes: list[str], fqmes: list[str],
loglevel: str | None = None, loglevel: str | None = None,
@ -849,12 +840,13 @@ async def maybe_open_feed(
@acm @acm
async def open_feed( async def open_feed(
fqmes: list[str], fqmes: list[str],
loglevel: str|None = None, loglevel: str | None = None,
allow_overruns: bool = True, allow_overruns: bool = True,
start_stream: bool = True, start_stream: bool = True,
tick_throttle: float|None = None, # Hz tick_throttle: float | None = None, # Hz
allow_remote_ctl_ui: bool = False, allow_remote_ctl_ui: bool = False,
@ -907,19 +899,19 @@ async def open_feed(
feed.portals[brokermod] = portal feed.portals[brokermod] = portal
# fill out "status info" that the UI can show # fill out "status info" that the UI can show
chan: tractor.Channel = portal.chan host, port = portal.channel.raddr
raddr: Address = chan.raddr if host == '127.0.0.1':
aid: Aid = chan.aid host = 'localhost'
# TAG_feed_status_update
feed.status.update({ feed.status.update({
'actor_id': aid, 'actor_name': portal.channel.uid[0],
'actor_short_id': f'{aid.name}@{aid.pid}', 'host': host,
'ipc': chan.raddr.proto_key, 'port': port,
'ipc_addr': raddr,
'hist_shm': 'NA', 'hist_shm': 'NA',
'rt_shm': 'NA', 'rt_shm': 'NA',
'throttle_hz': tick_throttle, 'throttle_rate': tick_throttle,
}) })
# feed.status.update(init_msg.pop('status', {}))
# (allocate and) connect to any feed bus for this broker # (allocate and) connect to any feed bus for this broker
bus_ctxs.append( bus_ctxs.append(

View File

@ -36,10 +36,10 @@ from ._sharedmem import (
ShmArray, ShmArray,
_Token, _Token,
) )
from piker.accounting import MktPair
if TYPE_CHECKING: if TYPE_CHECKING:
from piker.data.feed import Feed from ..accounting import MktPair
from .feed import Feed
class Flume(Struct): class Flume(Struct):
@ -82,7 +82,7 @@ class Flume(Struct):
# TODO: do we need this really if we can pull the `Portal` from # TODO: do we need this really if we can pull the `Portal` from
# ``tractor``'s internals? # ``tractor``'s internals?
feed: Feed|None = None feed: Feed | None = None
@property @property
def rt_shm(self) -> ShmArray: def rt_shm(self) -> ShmArray:

View File

@ -113,9 +113,9 @@ def validate_backend(
) )
if ep is None: if ep is None:
log.warning( log.warning(
f'Provider backend {mod.name!r} is missing ' f'Provider backend {mod.name} is missing '
f'{daemon_name!r} support?\n' f'{daemon_name} support :(\n'
f'|_module endpoint-func missing: {name!r}\n' f'The following endpoint is missing: {name}'
) )
inits: list[ inits: list[

View File

@ -498,7 +498,6 @@ async def cascade(
func_name: str = func.__name__ func_name: str = func.__name__
async with ( async with (
tractor.trionics.collapse_eg(), # avoid multi-taskc tb in console
trio.open_nursery() as tn, trio.open_nursery() as tn,
): ):
# TODO: might be better to just make a "restart" method where # TODO: might be better to just make a "restart" method where

View File

@ -19,10 +19,6 @@ Log like a forester!
""" """
import logging import logging
import json import json
import reprlib
from typing import (
Callable,
)
import tractor import tractor
from pygments import ( from pygments import (
@ -88,29 +84,3 @@ def colorize_json(
# likeable styles: algol_nu, tango, monokai # likeable styles: algol_nu, tango, monokai
formatters.TerminalTrueColorFormatter(style=style) formatters.TerminalTrueColorFormatter(style=style)
) )
# TODO, eventually defer to the version in `modden` once
# it becomes a dep!
def mk_repr(
**repr_kws,
) -> Callable[[str], str]:
'''
Allocate and deliver a `repr.Repr` instance with provided input
settings using the std-lib's `reprlib` mod,
* https://docs.python.org/3/library/reprlib.html
------ Ex. ------
An up to 6-layer-nested `dict` as multi-line:
- https://stackoverflow.com/a/79102479
- https://docs.python.org/3/library/reprlib.html#reprlib.Repr.maxlevel
'''
def_kws: dict[str, int] = dict(
indent=2,
maxlevel=6, # recursion levels
maxstring=66, # match editor line-len limit
)
def_kws |= repr_kws
reprr = reprlib.Repr(**def_kws)
return reprr.repr
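
For a quick feel of the `mk_repr()` factory added above, a hedged usage sketch (assumes py3.12+, where `reprlib.Repr` accepts constructor kwargs and supports the `indent` setting; the factory is re-declared so the snippet runs standalone):

    import reprlib

    def mk_repr(**repr_kws):
        def_kws: dict[str, int] = dict(indent=2, maxlevel=6, maxstring=66)
        def_kws |= repr_kws
        return reprlib.Repr(**def_kws).repr

    nested = {'a': {'b': {'c': {'d': {'e': {'f': 'deep enough'}}}}}}
    multi_line_repr = mk_repr(maxlevel=4)
    print(multi_line_repr(nested))  # levels past `maxlevel` collapse to '...'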

View File

@ -107,22 +107,17 @@ async def open_piker_runtime(
async with ( async with (
tractor.open_root_actor( tractor.open_root_actor(
# passed through to `open_root_actor` # passed through to ``open_root_actor``
registry_addrs=registry_addrs, registry_addrs=registry_addrs,
name=name, name=name,
start_method=start_method,
loglevel=loglevel, loglevel=loglevel,
debug_mode=debug_mode, debug_mode=debug_mode,
start_method=start_method,
# XXX NOTE MEMBER DAT der's a perf hit yo!!
# https://greenback.readthedocs.io/en/latest/principle.html#performance
maybe_enable_greenback=True,
# TODO: eventually we should be able to avoid # TODO: eventually we should be able to avoid
# having the root have more then permissions to # having the root have more then permissions to
# spawn other specialized daemons I think? # spawn other specialized daemons I think?
enable_modules=enable_modules, enable_modules=enable_modules,
hide_tb=False,
**tractor_kwargs, **tractor_kwargs,
) as actor, ) as actor,
@ -205,8 +200,7 @@ async def open_pikerd(
reg_addrs, reg_addrs,
), ),
tractor.open_nursery() as actor_nursery, tractor.open_nursery() as actor_nursery,
tractor.trionics.collapse_eg(), trio.open_nursery() as service_nursery,
trio.open_nursery() as service_tn,
): ):
for addr in reg_addrs: for addr in reg_addrs:
if addr not in root_actor.accept_addrs: if addr not in root_actor.accept_addrs:
@ -217,7 +211,7 @@ async def open_pikerd(
# assign globally for future daemon/task creation # assign globally for future daemon/task creation
Services.actor_n = actor_nursery Services.actor_n = actor_nursery
Services.service_n = service_tn Services.service_n = service_nursery
Services.debug_mode = debug_mode Services.debug_mode = debug_mode
try: try:
@ -227,7 +221,7 @@ async def open_pikerd(
# TODO: is this more clever/efficient? # TODO: is this more clever/efficient?
# if 'samplerd' in Services.service_tasks: # if 'samplerd' in Services.service_tasks:
# await Services.cancel_service('samplerd') # await Services.cancel_service('samplerd')
service_tn.cancel_scope.cancel() service_nursery.cancel_scope.cancel()
# TODO: do we even need this? # TODO: do we even need this?
@ -262,10 +256,7 @@ async def maybe_open_pikerd(
loglevel: str | None = None, loglevel: str | None = None,
**kwargs, **kwargs,
) -> ( ) -> tractor._portal.Portal | ClassVar[Services]:
tractor._portal.Portal
|ClassVar[Services]
):
''' '''
If no ``pikerd`` daemon-root-actor can be found start it and If no ``pikerd`` daemon-root-actor can be found start it and
yield up (we should probably figure out returning a portal to self yield up (we should probably figure out returning a portal to self
@ -290,11 +281,10 @@ async def maybe_open_pikerd(
registry_addrs: list[tuple[str, int]] = ( registry_addrs: list[tuple[str, int]] = (
registry_addrs registry_addrs
or or [_default_reg_addr]
[_default_reg_addr]
) )
pikerd_portal: tractor.Portal|None pikerd_portal: tractor.Portal | None
async with ( async with (
open_piker_runtime( open_piker_runtime(
name=query_name, name=query_name,

View File

@ -28,7 +28,6 @@ from contextlib import (
) )
import tractor import tractor
from trio.lowlevel import current_task
from ._util import ( from ._util import (
log, # sub-sys logger log, # sub-sys logger
@ -71,84 +70,69 @@ async def maybe_spawn_daemon(
lock = Services.locks[service_name] lock = Services.locks[service_name]
await lock.acquire() await lock.acquire()
try: async with find_service(
async with find_service( service_name,
service_name, registry_addrs=[('127.0.0.1', 6116)],
registry_addrs=[('127.0.0.1', 6116)], ) as portal:
) as portal: if portal is not None:
if portal is not None:
lock.release()
yield portal
return
log.warning(
f"Couldn't find any existing {service_name}\n"
'Attempting to spawn new daemon-service..'
)
# ask root ``pikerd`` daemon to spawn the daemon we need if
# pikerd is not live we now become the root of the
# process tree
async with maybe_open_pikerd(
loglevel=loglevel,
**pikerd_kwargs,
) as pikerd_portal:
# we are the root and thus are `pikerd`
# so spawn the target service directly by calling
# the provided target routine.
# XXX: this assumes that the target is well formed and will
# do the right things to setup both a sub-actor **and** call
# the ``_Services`` api from above to start the top level
# service task for that actor.
started: bool
if pikerd_portal is None:
started = await service_task_target(
loglevel=loglevel,
**spawn_args,
)
else:
# request a remote `pikerd` (service manager) to start the
# target daemon-task, the target can't return
# a non-serializable value since it is expected that service
# starting is non-blocking and the target task will persist
# running "under" or "within" the `pikerd` actor tree after
# the questing client disconnects. in other words this
# spawns a persistent daemon actor that continues to live
# for the lifespan of whatever the service manager inside
# `pikerd` says it should.
started = await pikerd_portal.run(
service_task_target,
loglevel=loglevel,
**spawn_args,
)
if started:
log.info(f'Service {service_name} started!')
# block until we can discover (by IPC connection) to the newly
# spawned daemon-actor and then deliver the portal to the
# caller.
async with tractor.wait_for_actor(service_name) as portal:
lock.release()
yield portal
await portal.cancel_actor()
except BaseException as _err:
err = _err
if (
lock.locked()
and
lock.statistics().owner is current_task()
):
log.exception(
f'Releasing stale lock after crash..?'
f'{err!r}\n'
)
lock.release() lock.release()
raise err yield portal
return
log.warning(
f"Couldn't find any existing {service_name}\n"
'Attempting to spawn new daemon-service..'
)
# ask root ``pikerd`` daemon to spawn the daemon we need if
# pikerd is not live we now become the root of the
# process tree
async with maybe_open_pikerd(
loglevel=loglevel,
**pikerd_kwargs,
) as pikerd_portal:
# we are the root and thus are `pikerd`
# so spawn the target service directly by calling
# the provided target routine.
# XXX: this assumes that the target is well formed and will
# do the right things to setup both a sub-actor **and** call
# the ``_Services`` api from above to start the top level
# service task for that actor.
started: bool
if pikerd_portal is None:
started = await service_task_target(
loglevel=loglevel,
**spawn_args,
)
else:
# request a remote `pikerd` (service manager) to start the
# target daemon-task, the target can't return
# a non-serializable value since it is expected that service
# starting is non-blocking and the target task will persist
# running "under" or "within" the `pikerd` actor tree after
# the requesting client disconnects. in other words this
# spawns a persistent daemon actor that continues to live
# for the lifespan of whatever the service manager inside
# `pikerd` says it should.
started = await pikerd_portal.run(
service_task_target,
loglevel=loglevel,
**spawn_args,
)
if started:
log.info(f'Service {service_name} started!')
# block until we can discover (by IPC connection) to the newly
# spawned daemon-actor and then deliver the portal to the
# caller.
async with tractor.wait_for_actor(service_name) as portal:
lock.release()
yield portal
await portal.cancel_actor()
async def spawn_emsd( async def spawn_emsd(
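
At its core the `maybe_spawn_daemon()` flow above is a per-service-name lock around "discover an existing daemon, else spawn it (locally or via the root `pikerd`) and wait until it's discoverable". A toy trio-only sketch of that shape, with a plain dict standing in for the actor registry and none of the portal/teardown handling (all names here are made up):

    from collections import defaultdict
    import trio

    registry: dict[str, str] = {}                  # service name -> "portal"
    locks: defaultdict[str, trio.Lock] = defaultdict(trio.Lock)

    async def maybe_spawn(service_name: str) -> str:
        async with locks[service_name]:
            # 1. try to "discover" an already running daemon
            if (portal := registry.get(service_name)):
                return portal
            # 2. otherwise spawn it (irl this asks the root `pikerd` to do it)
            await trio.sleep(0.1)  # pretend startup latency
            registry[service_name] = portal = f'<portal-to-{service_name}>'
            return portal

    async def main():
        async with trio.open_nursery() as tn:
            # concurrent requests for the same service only spawn it once
            for _ in range(3):
                tn.start_soon(maybe_spawn, 'emsd')
        print(registry)

    trio.run(main)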

View File

@ -109,7 +109,7 @@ class Services:
# wait on any context's return value # wait on any context's return value
# and any final portal result from the # and any final portal result from the
# sub-actor. # sub-actor.
ctx_res: Any = await ctx.wait_for_result() ctx_res: Any = await ctx.result()
# NOTE: blocks indefinitely until cancelled # NOTE: blocks indefinitely until cancelled
# either by error from the target context # either by error from the target context

View File

@ -101,15 +101,13 @@ async def open_registry(
if ( if (
not tractor.is_root_process() not tractor.is_root_process()
and and not Registry.addrs
not Registry.addrs
): ):
Registry.addrs.extend(actor.reg_addrs) Registry.addrs.extend(actor.reg_addrs)
if ( if (
ensure_exists ensure_exists
and and not Registry.addrs
not Registry.addrs
): ):
raise RuntimeError( raise RuntimeError(
f"`{uid}` registry should already exist but doesn't?" f"`{uid}` registry should already exist but doesn't?"
@ -148,7 +146,7 @@ async def find_service(
| list[Portal] | list[Portal]
| None | None
): ):
# try:
reg_addrs: list[tuple[str, int]] reg_addrs: list[tuple[str, int]]
async with open_registry( async with open_registry(
addrs=( addrs=(
@ -159,39 +157,22 @@ async def find_service(
or Registry.addrs or Registry.addrs
), ),
) as reg_addrs: ) as reg_addrs:
log.info(f'Scanning for service `{service_name}`')
log.info( maybe_portals: list[Portal] | Portal | None
f'Scanning for service {service_name!r}'
)
# attach to existing daemon by name if possible # attach to existing daemon by name if possible
maybe_portals: list[Portal]|Portal|None
async with tractor.find_actor( async with tractor.find_actor(
service_name, service_name,
registry_addrs=reg_addrs, registry_addrs=reg_addrs,
only_first=first_only, # if set only returns single ref only_first=first_only, # if set only returns single ref
) as maybe_portals: ) as maybe_portals:
if not maybe_portals: if not maybe_portals:
# log.info(
print(
f'Could NOT find service {service_name!r} -> {maybe_portals!r}'
)
yield None yield None
return return
# log.info(
print(
f'Found service {service_name!r} -> {maybe_portals}'
)
yield maybe_portals yield maybe_portals
# except BaseException as _berr:
# berr = _berr
# log.exception(
# 'tractor.find_actor() failed with,\n'
# )
# raise berr
async def check_for_service( async def check_for_service(
service_name: str, service_name: str,

View File

@ -386,8 +386,6 @@ def ldshm(
open_annot_ctl() as actl, open_annot_ctl() as actl,
): ):
shm_df: pl.DataFrame | None = None shm_df: pl.DataFrame | None = None
tf2aids: dict[float, dict] = {}
for ( for (
shmfile, shmfile,
shm, shm,
@ -528,17 +526,16 @@ def ldshm(
new_df, new_df,
step_gaps, step_gaps,
) )
# last chance manual overwrites in REPL # last chance manual overwrites in REPL
# await tractor.pause() await tractor.pause()
assert aids assert aids
tf2aids[period_s] = aids
else: else:
# allow interaction even when no ts problems. # allow interaction even when no ts problems.
assert not diff await tractor.pause()
# assert not diff
await tractor.pause()
log.info('Exiting TSP shm anal-izer!')
if shm_df is None: if shm_df is None:
log.error( log.error(

View File

@ -161,13 +161,7 @@ class NativeStorageClient:
def index_files(self): def index_files(self):
for path in self._datadir.iterdir(): for path in self._datadir.iterdir():
if ( if path.name in {'borked', 'expired',}:
path.is_dir()
or
'.parquet' not in str(path)
# or
# path.name in {'borked', 'expired',}
):
continue continue
key: str = path.name.rstrip('.parquet') key: str = path.name.rstrip('.parquet')
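
One gotcha worth illustrating for the `index_files()` hunk above: `str.rstrip('.parquet')` treats its argument as a character *set*, not a literal suffix, so for some keys it strips more than the extension; `str.removesuffix()` or `pathlib.Path.stem` are the safer ways to derive the key (the sample name below is made up):

    from pathlib import Path

    name: str = 'btcusdt.binance.parquet'
    print(name.rstrip('.parquet'))          # -> 'btcusdt.binanc'  (over-strips!)
    print(name.removesuffix('.parquet'))    # -> 'btcusdt.binance'
    print(Path(name).stem)                  # -> 'btcusdt.binance'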

View File

@ -44,10 +44,8 @@ import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor import tractor
from pendulum import ( from pendulum import (
Interval,
DateTime, DateTime,
Duration, Duration,
duration as mk_duration,
from_timestamp, from_timestamp,
) )
import numpy as np import numpy as np
@ -216,8 +214,7 @@ async def maybe_fill_null_segments(
# pair, immediately stop backfilling? # pair, immediately stop backfilling?
if ( if (
start_dt start_dt
and and end_dt < start_dt
end_dt < start_dt
): ):
await tractor.pause() await tractor.pause()
break break
@ -265,7 +262,6 @@ async def maybe_fill_null_segments(
except tractor.ContextCancelled: except tractor.ContextCancelled:
# log.exception # log.exception
await tractor.pause() await tractor.pause()
raise
null_segs_detected.set() null_segs_detected.set()
# RECHECK for more null-gaps # RECHECK for more null-gaps
@ -353,7 +349,7 @@ async def maybe_fill_null_segments(
async def start_backfill( async def start_backfill(
get_hist, get_hist,
def_frame_duration: Duration, frame_types: dict[str, Duration] | None,
mod: ModuleType, mod: ModuleType,
mkt: MktPair, mkt: MktPair,
shm: ShmArray, shm: ShmArray,
@ -383,23 +379,22 @@ async def start_backfill(
update_start_on_prepend: bool = False update_start_on_prepend: bool = False
if backfill_until_dt is None: if backfill_until_dt is None:
# TODO: per-provider default history-durations? # TODO: drop this right and just expose the backfill
# -[ ] inside the `open_history_client()` config allow # limits inside a [storage] section in conf.toml?
# declaring the history duration limits instead of # when no tsdb "last datum" is provided, we just load
# guessing and/or applying the same limits to all? # some near-term history.
# # periods = {
# -[ ] allow declaring (default) per-provider backfill # 1: {'days': 1},
# limits inside a [storage] sub-section in conf.toml? # 60: {'days': 14},
# # }
# NOTE, when no tsdb "last datum" is provided, we just
# load some near-term history by presuming a "decently # do a decently sized backfill and load it into storage.
# large" 60s duration limit and a much shorter 1s range.
periods = { periods = {
1: {'days': 2}, 1: {'days': 2},
60: {'years': 6}, 60: {'years': 6},
} }
period_duration: int = periods[timeframe] period_duration: int = periods[timeframe]
update_start_on_prepend: bool = True update_start_on_prepend = True
# NOTE: manually set the "latest" datetime which we intend to # NOTE: manually set the "latest" datetime which we intend to
# backfill history "until" so as to adhere to the history # backfill history "until" so as to adhere to the history
@ -421,6 +416,7 @@ async def start_backfill(
f'backfill_until_dt: {backfill_until_dt}\n' f'backfill_until_dt: {backfill_until_dt}\n'
f'last_start_dt: {last_start_dt}\n' f'last_start_dt: {last_start_dt}\n'
) )
try: try:
( (
array, array,
@ -430,114 +426,71 @@ async def start_backfill(
timeframe, timeframe,
end_dt=last_start_dt, end_dt=last_start_dt,
) )
except NoData as _daterr: except NoData as _daterr:
orig_last_start_dt: datetime = last_start_dt # 3 cases:
gap_report: str = ( # - frame in the middle of a legit venue gap
f'EMPTY FRAME for `end_dt: {last_start_dt}`?\n' # - history actually began at the `last_start_dt`
f'{mod.name} -> tf@fqme: {timeframe}@{mkt.fqme}\n' # - some other unknown error (ib blocking the
f'last_start_dt: {orig_last_start_dt}\n\n' # history bc they don't want you seeing how they
f'bf_until: {backfill_until_dt}\n' # cucked all the tinas..)
) if dur := frame_types.get(timeframe):
# EMPTY FRAME signal with 3 (likely) causes: # decrement by a frame's worth of duration and
# # retry a few times.
# 1. range contains legit gap in venue history last_start_dt.subtract(
# 2. history actually (edge case) **began** at the seconds=dur.total_seconds()
# value `last_start_dt`
# 3. some other unknown error (ib blocking the
# history-query bc they don't want you seeing how
# they cucked all the tinas.. like with options
# hist)
#
if def_frame_duration:
# decrement by a duration's (frame) worth of time
# as maybe indicated by the backend to see if we
# can get older data before this possible
# "history gap".
last_start_dt: datetime = last_start_dt.subtract(
seconds=def_frame_duration.total_seconds()
) )
gap_report += ( log.warning(
f'Decrementing `end_dt` and retrying with,\n' f'{mod.name} -> EMPTY FRAME for end_dt?\n'
f'def_frame_duration: {def_frame_duration}\n' f'tf@fqme: {timeframe}@{mkt.fqme}\n'
f'(new) last_start_dt: {last_start_dt}\n' 'bf_until <- last_start_dt:\n'
f'{backfill_until_dt} <- {last_start_dt}\n'
f'Decrementing `end_dt` by {dur} and retry..\n'
) )
log.warning(gap_report)
# skip writing to shm/tsdb and try the next
# duration's worth of prior history.
continue continue
else:
# await tractor.pause()
raise DataUnavailable(gap_report)
# broker says there never was or is no more history to pull # broker says there never was or is no more history to pull
except DataUnavailable as due: except DataUnavailable:
message: str = due.args[0]
log.warning( log.warning(
f'Provider {mod.name!r} halted backfill due to,\n\n' f'NO-MORE-DATA in range?\n'
f'`{mod.name}` halted history:\n'
f'{message}\n' f'tf@fqme: {timeframe}@{mkt.fqme}\n'
'bf_until <- last_start_dt:\n'
f'fqme: {mkt.fqme}\n' f'{backfill_until_dt} <- {last_start_dt}\n'
f'timeframe: {timeframe}\n'
f'last_start_dt: {last_start_dt}\n'
f'bf_until: {backfill_until_dt}\n'
) )
# UGH: what's a better way?
# TODO: backends are responsible for being correct on # ugh, what's a better way?
# this right!? # TODO: fwiw, we probably want a way to signal a throttle
# -[ ] in the `ib` case we could maybe offer some way # condition (eg. with ib) so that we can halt the
# to halt the request loop until the condition is # request loop until the condition is resolved?
# resolved or should the backend be entirely in if timeframe > 1:
# charge of solving such faults? yes, right? await tractor.pause()
return return
time: np.ndarray = array['time']
assert ( assert (
time[0] array['time'][0]
== ==
next_start_dt.timestamp() next_start_dt.timestamp()
) )
assert time[-1] == next_end_dt.timestamp() diff = last_start_dt - next_start_dt
frame_time_diff_s = diff.seconds
expected_dur: Interval = last_start_dt - next_start_dt
# frame's worth of sample-period-steps, in seconds # frame's worth of sample-period-steps, in seconds
frame_size_s: float = len(array) * timeframe frame_size_s: float = len(array) * timeframe
recv_frame_dur: Duration = ( expected_frame_size_s: float = frame_size_s + timeframe
from_timestamp(array[-1]['time']) if frame_time_diff_s > expected_frame_size_s:
-
from_timestamp(array[0]['time'])
)
if (
(lt_frame := (recv_frame_dur < expected_dur))
or
(null_frame := (frame_size_s == 0))
# ^XXX, should NEVER hit now!
):
# XXX: query result includes a start point prior to our # XXX: query result includes a start point prior to our
# expected "frame size" and thus is likely some kind of # expected "frame size" and thus is likely some kind of
# history gap (eg. market closed period, outage, etc.) # history gap (eg. market closed period, outage, etc.)
# so just report it to console for now. # so just report it to console for now.
if lt_frame:
reason = 'Possible GAP (or first-datum)'
else:
assert null_frame
reason = 'NULL-FRAME'
missing_dur: Interval = expected_dur.end - recv_frame_dur.end
log.warning( log.warning(
f'{timeframe}s-series {reason} detected!\n' 'GAP DETECTED:\n'
f'fqme: {mkt.fqme}\n' f'last_start_dt: {last_start_dt}\n'
f'last_start_dt: {last_start_dt}\n\n' f'diff: {diff}\n'
f'recv interval: {recv_frame_dur}\n' f'frame_time_diff_s: {frame_time_diff_s}\n'
f'expected interval: {expected_dur}\n\n'
f'Missing duration of history of {missing_dur.in_words()!r}\n'
f'{missing_dur}\n'
) )
# await tractor.pause()
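
The "possible gap" check above reduces to comparing the received frame's actual epoch-span against the span we asked for; a rough standalone sketch using pendulum 3.x with made-up timestamps:

    import pendulum

    timeframe: int = 60  # seconds per sample

    # pretend the provider returned a frame whose first/last sample times
    # span less than the requested range (a 2-day hole in a 7-day request):
    next_start_dt = pendulum.datetime(2024, 1, 1)
    last_start_dt = pendulum.datetime(2024, 1, 8)
    frame_first_ts: float = pendulum.datetime(2024, 1, 3).timestamp()
    frame_last_ts: float = last_start_dt.timestamp()

    expected_dur = last_start_dt - next_start_dt
    recv_frame_dur = (
        pendulum.from_timestamp(frame_last_ts)
        - pendulum.from_timestamp(frame_first_ts)
    )

    if recv_frame_dur.total_seconds() < expected_dur.total_seconds():
        missing_s = expected_dur.total_seconds() - recv_frame_dur.total_seconds()
        print(
            f'possible gap (or first datum): '
            f'missing ~{missing_s / 86400:.1f} days of {timeframe}s history'
        )
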
to_push = diff_history( to_push = diff_history(
array, array,
@ -612,27 +565,22 @@ async def start_backfill(
# long-term storage. # long-term storage.
if ( if (
storage is not None storage is not None
and and write_tsdb
write_tsdb
): ):
log.info( log.info(
f'Writing {ln} frame to storage:\n' f'Writing {ln} frame to storage:\n'
f'{next_start_dt} -> {last_start_dt}' f'{next_start_dt} -> {last_start_dt}'
) )
# NOTE, always drop the src asset token for # always drop the src asset token for
# non-currency-pair like market types (for now) # non-currency-pair like market types (for now)
#
# THAT IS, for now our table key schema is NOT
# including the dst[/src] source asset token. SO,
# 'tsla.nasdaq.ib' over 'tsla/usd.nasdaq.ib' for
# historical reasons ONLY.
if mkt.dst.atype not in { if mkt.dst.atype not in {
'crypto', 'crypto',
'crypto_currency', 'crypto_currency',
'fiat', # a "forex pair" 'fiat', # a "forex pair"
'perpetual_future', # stupid "perps" from cex land
}: }:
# for now, our table key schema is not including
# the dst[/src] source asset token.
col_sym_key: str = mkt.get_fqme( col_sym_key: str = mkt.get_fqme(
delim_char='', delim_char='',
without_src=True, without_src=True,
@ -737,7 +685,7 @@ async def back_load_from_tsdb(
last_tsdb_dt last_tsdb_dt
and latest_start_dt and latest_start_dt
): ):
backfilled_size_s: Duration = ( backfilled_size_s = (
latest_start_dt - last_tsdb_dt latest_start_dt - last_tsdb_dt
).seconds ).seconds
# if the shm buffer len is not large enough to contain # if the shm buffer len is not large enough to contain
@ -960,13 +908,8 @@ async def tsdb_backfill(
f'{pformat(config)}\n' f'{pformat(config)}\n'
) )
# concurrently load the provider's most-recent-frame AND any
# pre-existing tsdb history already saved in `piker` storage.
dt_eps: list[DateTime, DateTime] = [] dt_eps: list[DateTime, DateTime] = []
async with ( async with trio.open_nursery() as tn:
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn
):
tn.start_soon( tn.start_soon(
push_latest_frame, push_latest_frame,
dt_eps, dt_eps,
@ -975,6 +918,7 @@ async def tsdb_backfill(
timeframe, timeframe,
config, config,
) )
tsdb_entry: tuple = await load_tsdb_hist( tsdb_entry: tuple = await load_tsdb_hist(
storage, storage,
mkt, mkt,
@ -1003,32 +947,6 @@ async def tsdb_backfill(
mr_end_dt, mr_end_dt,
) = dt_eps ) = dt_eps
first_frame_dur_s: Duration = (mr_end_dt - mr_start_dt).seconds
calced_frame_size: Duration = mk_duration(
seconds=first_frame_dur_s,
)
# NOTE, attempt to use the backend declared default frame
# sizing (as allowed by their time-series query APIs) and
# if not provided try to construct a default from the
# first frame received above.
def_frame_durs: dict[
int,
Duration,
]|None = config.get('frame_types', None)
if def_frame_durs:
def_frame_size: Duration = def_frame_durs[timeframe]
if def_frame_size != calced_frame_size:
log.warning(
f'Expected frame size {def_frame_size}\n'
f'Rxed frame {calced_frame_size}\n'
)
# await tractor.pause()
else:
# use what we calced from first frame above.
def_frame_size = calced_frame_size
# NOTE: when there's no offline data, there's 2 cases: # NOTE: when there's no offline data, there's 2 cases:
# - data backend doesn't support timeframe/sample # - data backend doesn't support timeframe/sample
# period (in which case `dt_eps` should be `None` and # period (in which case `dt_eps` should be `None` and
@ -1053,15 +971,13 @@ async def tsdb_backfill(
# if there is a gap to backfill from the first # if there is a gap to backfill from the first
# history frame until the last datum loaded from the tsdb # history frame until the last datum loaded from the tsdb
# continue that now in the background # continue that now in the background
async with trio.open_nursery( async with trio.open_nursery() as tn:
strict_exception_groups=False,
) as tn:
bf_done = await tn.start( bf_done = await tn.start(
partial( partial(
start_backfill, start_backfill,
get_hist=get_hist, get_hist=get_hist,
def_frame_duration=def_frame_size, frame_types=config.get('frame_types', None),
mod=mod, mod=mod,
mkt=mkt, mkt=mkt,
shm=shm, shm=shm,
@ -1320,7 +1236,6 @@ async def manage_history(
# sampling period) data set since normally differently # sampling period) data set since normally differently
# sampled timeseries can be loaded / process independently # sampled timeseries can be loaded / process independently
# ;) # ;)
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn, trio.open_nursery() as tn,
): ):
log.info( log.info(

View File

@ -517,7 +517,7 @@ def with_dts(
''' '''
return df.with_columns([ return df.with_columns([
pl.col(time_col).shift(1).name.suffix('_prev'), pl.col(time_col).shift(1).suffix('_prev'),
pl.col(time_col).diff().alias('s_diff'), pl.col(time_col).diff().alias('s_diff'),
pl.from_epoch(pl.col(time_col)).alias('dt'), pl.from_epoch(pl.col(time_col)).alias('dt'),
]).with_columns([ ]).with_columns([
@ -616,18 +616,6 @@ def detect_price_gaps(
# ]) # ])
... ...
# TODO: probably just use the null_segs impl above?
def detect_vlm_gaps(
df: pl.DataFrame,
col: str = 'volume',
) -> pl.DataFrame:
vnull: pl.DataFrame = df.filter(
pl.col(col) == 0
)
return vnull
def dedupe( def dedupe(
src_df: pl.DataFrame, src_df: pl.DataFrame,
@ -638,6 +626,7 @@ def dedupe(
) -> tuple[ ) -> tuple[
pl.DataFrame, # with dts pl.DataFrame, # with dts
pl.DataFrame, # gaps
pl.DataFrame, # with deduplicated dts (aka gap/repeat removal) pl.DataFrame, # with deduplicated dts (aka gap/repeat removal)
int, # len diff between input and deduped int, # len diff between input and deduped
]: ]:
@ -650,22 +639,19 @@ def dedupe(
''' '''
wdts: pl.DataFrame = with_dts(src_df) wdts: pl.DataFrame = with_dts(src_df)
deduped = wdts
# remove duplicated datetime samples/sections
deduped: pl.DataFrame = wdts.unique(
# subset=['dt'],
subset=['time'],
maintain_order=True,
)
# maybe sort on any time field # maybe sort on any time field
if sort: if sort:
deduped = deduped.sort(by='time') wdts = wdts.sort(by='time')
# TODO: detect out-of-order segments which were corrected! # TODO: detect out-of-order segments which were corrected!
# -[ ] report in log msg # -[ ] report in log msg
# -[ ] possibly return segment sections which were moved? # -[ ] possibly return segment sections which were moved?
# remove duplicated datetime samples/sections
deduped: pl.DataFrame = wdts.unique(
subset=['dt'],
maintain_order=True,
)
diff: int = ( diff: int = (
wdts.height wdts.height
- -
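
For reference, the `with_dts()` + `dedupe()` combo above amounts to: add datetime/diff helper columns, then drop rows repeating the same epoch. A standalone polars sketch (sample epochs are made up; assumes a recent-ish polars with the `Expr.name` namespace and `pl.from_epoch()`):

    import polars as pl

    src_df = pl.DataFrame({
        'time': [1_700_000_000, 1_700_000_060, 1_700_000_060, 1_700_000_120],
        'close': [1.0, 1.1, 1.1, 1.2],
    })

    wdts = src_df.with_columns([
        pl.col('time').shift(1).name.suffix('_prev'),
        pl.col('time').diff().alias('s_diff'),
        pl.from_epoch(pl.col('time')).alias('dt'),
    ])

    # drop repeated sample rows keyed on the epoch column
    deduped = wdts.unique(subset=['time'], maintain_order=True).sort(by='time')
    print(wdts.height - deduped.height)  # -> 1 duplicate removed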

View File

@ -21,7 +21,6 @@ Main app startup and run.
from functools import partial from functools import partial
from types import ModuleType from types import ModuleType
import tractor
import trio import trio
from piker.ui.qt import ( from piker.ui.qt import (
@ -117,7 +116,6 @@ async def _async_main(
needed_brokermods[brokername] = brokers[brokername] needed_brokermods[brokername] = brokers[brokername]
async with ( async with (
tractor.trionics.collapse_eg(),
trio.open_nursery() as root_n, trio.open_nursery() as root_n,
): ):
# set root nursery and task stack for spawning other charts/feeds # set root nursery and task stack for spawning other charts/feeds

View File

@ -33,6 +33,7 @@ import trio
from piker.ui.qt import ( from piker.ui.qt import (
QtCore, QtCore,
QtWidgets,
Qt, Qt,
QLineF, QLineF,
QFrame, QFrame,

View File

@ -1445,10 +1445,7 @@ async def display_symbol_data(
# for pause/resume on mouse interaction # for pause/resume on mouse interaction
rt_chart.feed = feed rt_chart.feed = feed
async with ( async with trio.open_nursery() as ln:
tractor.trionics.collapse_eg(),
trio.open_nursery() as ln,
):
# if available load volume related built-in display(s) # if available load volume related built-in display(s)
vlm_charts: dict[ vlm_charts: dict[
str, str,

View File

@ -22,10 +22,7 @@ from contextlib import asynccontextmanager as acm
from typing import Callable from typing import Callable
import trio import trio
from tractor.trionics import ( from tractor.trionics import gather_contexts
gather_contexts,
collapse_eg,
)
from piker.ui.qt import ( from piker.ui.qt import (
QtCore, QtCore,
@ -210,10 +207,7 @@ async def open_signal_handler(
async for args in recv: async for args in recv:
await async_handler(*args) await async_handler(*args)
async with ( async with trio.open_nursery() as tn:
collapse_eg(),
trio.open_nursery() as tn
):
tn.start_soon(proxy_to_handler) tn.start_soon(proxy_to_handler)
async with send: async with send:
yield yield
@ -248,7 +242,6 @@ async def open_handlers(
widget: QWidget widget: QWidget
streams: list[trio.abc.ReceiveChannel] streams: list[trio.abc.ReceiveChannel]
async with ( async with (
collapse_eg(),
trio.open_nursery() as tn, trio.open_nursery() as tn,
gather_contexts([ gather_contexts([
open_event_stream( open_event_stream(

View File

@ -18,11 +18,10 @@
Feed status and controls widget(s) for embedding in a UI-pane. Feed status and controls widget(s) for embedding in a UI-pane.
""" """
from __future__ import annotations from __future__ import annotations
from typing import ( from textwrap import dedent
Any, from typing import TYPE_CHECKING
TYPE_CHECKING,
)
# from PyQt5.QtCore import Qt # from PyQt5.QtCore import Qt
@ -50,55 +49,35 @@ def mk_feed_label(
a feed control protocol. a feed control protocol.
''' '''
status: dict[str, Any] = feed.status status = feed.status
assert status assert status
# SO tips on ws/nls, msg = dedent("""
# https://stackoverflow.com/a/15721400 actor: **{actor_name}**\n
ws: str = '&nbsp;' |_ @**{host}:{port}**\n
# nl: str = '<br>' # dun work? """)
actor_info_repr: str = (
f')> **{status["actor_short_id"]}**\n'
'\n' # bc md?
)
# fields to select *IN* for display for key, val in status.items():
# (see `.data.feed.open_feed()` status if key in ('host', 'port', 'actor_name'):
# update -> TAG_feed_status_update) continue
for key in [ msg += f'\n|_ {key}: **{{{key}}}**\n'
'ipc',
'hist_shm',
'rt_shm',
'throttle_hz',
]:
# NOTE, the 2nd key is filled via `.format()` updates.
actor_info_repr += (
f'\n' # bc md?
f'{ws}|_{key}: **{{{key}}}**\n'
)
# ^TODO? formatting and content..
# -[ ] showing which fqme is "forward" on the
# chart/fsp/order-mode?
# '|_ flows: **{symbols}**\n'
#
# -[x] why isn't the indent working?
# => markdown, now solved..
feed_label = FormatLabel( feed_label = FormatLabel(
fmt_str=actor_info_repr, fmt_str=msg,
# |_ streams: **{symbols}**\n
font=_font.font, font=_font.font,
font_size=_font_small.px_size, font_size=_font_small.px_size,
font_color='default_lightest', font_color='default_lightest',
) )
# ?TODO, remove this?
# form.vbox.setAlignment(feed_label, Qt.AlignBottom) # form.vbox.setAlignment(feed_label, Qt.AlignBottom)
# form.vbox.setAlignment(Qt.AlignBottom) # form.vbox.setAlignment(Qt.AlignBottom)
# _ = chart.height() - ( _ = chart.height() - (
# form.height() + form.height() +
# form.fill_bar.height() form.fill_bar.height()
# # feed_label.height() # feed_label.height()
# ) )
feed_label.format(**feed.status) feed_label.format(**feed.status)
return feed_label return feed_label
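
The label above is just a two-stage `str.format()` template: build a fmt-string with `{key}` placeholders for the selected status fields, then fill it from `feed.status`. A toy version with made-up status values:

    ws: str = '&nbsp;'
    status: dict = {
        'actor_short_id': 'brokerd.binance@12345',
        'ipc': 'tcp',
        'hist_shm': 'NA',
        'rt_shm': 'NA',
        'throttle_hz': 60,
    }

    fmt_str: str = ')> **{actor_short_id}**\n\n'
    for key in ('ipc', 'hist_shm', 'rt_shm', 'throttle_hz'):
        fmt_str += f'\n{ws}|_{key}: **{{{key}}}**\n'

    print(fmt_str.format(**status))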

View File

@ -600,7 +600,6 @@ async def open_fsp_admin(
kwargs=kwargs, kwargs=kwargs,
) as (cache_hit, cluster_map), ) as (cache_hit, cluster_map),
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn, trio.open_nursery() as tn,
): ):
if cache_hit: if cache_hit:
@ -614,8 +613,6 @@ async def open_fsp_admin(
) )
try: try:
yield admin yield admin
# ??TODO, does this *need* to be inside a finally?
finally: finally:
# terminate all tasks via signals # terminate all tasks via signals
for key, entry in admin._registry.items(): for key, entry in admin._registry.items():

View File

@ -285,20 +285,18 @@ class FormatLabel(QLabel):
font_size: int, font_size: int,
font_color: str, font_color: str,
use_md: bool = True,
parent=None, parent=None,
) -> None: ) -> None:
super().__init__(parent) super().__init__(parent)
# by default set the format string verbatim and expect user # by default set the format string verbatim and expect user to
# to call ``.format()`` later (presumably they'll notice the # call ``.format()`` later (presumably they'll notice the
# unformatted content if ``fmt_str`` isn't meant to be # unformatted content if ``fmt_str`` isn't meant to be
# unformatted). # unformatted).
self.fmt_str = fmt_str self.fmt_str = fmt_str
# self.setText(fmt_str) # ?TODO, why here? self.setText(fmt_str)
self.setStyleSheet( self.setStyleSheet(
f"""QLabel {{ f"""QLabel {{
@ -308,10 +306,9 @@ class FormatLabel(QLabel):
""" """
) )
self.setFont(_font.font) self.setFont(_font.font)
if use_md: self.setTextFormat(
self.setTextFormat( Qt.TextFormat.MarkdownText
Qt.TextFormat.MarkdownText )
)
self.setMargin(0) self.setMargin(0)
self.setSizePolicy( self.setSizePolicy(
@ -319,10 +316,7 @@ class FormatLabel(QLabel):
size_policy.Expanding, size_policy.Expanding,
) )
self.setAlignment( self.setAlignment(
Qt.AlignLeft Qt.AlignVCenter | Qt.AlignLeft
|
Qt.AlignBottom
# Qt.AlignVCenter
) )
self.setText(self.fmt_str) self.setText(self.fmt_str)

View File

@ -15,8 +15,8 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
Remote control tasks for sending annotations (and maybe more cmds) to Remote control tasks for sending annotations (and maybe more cmds)
a chart from some other actor. to a chart from some other actor.
''' '''
from __future__ import annotations from __future__ import annotations
@ -32,7 +32,6 @@ from typing import (
) )
import tractor import tractor
import trio
from tractor import trionics from tractor import trionics
from tractor import ( from tractor import (
Portal, Portal,
@ -317,9 +316,7 @@ class AnnotCtl(Struct):
) )
yield aid yield aid
finally: finally:
# async ipc send op await self.remove(aid)
with trio.CancelScope(shield=True):
await self.remove(aid)
async def redraw( async def redraw(
self, self,

View File

@ -15,8 +15,7 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
""" """
qompleterz: embeddable search and complete using trio, Qt and qompleterz: embeddable search and complete using trio, Qt and rapidfuzz.
rapidfuzz.
""" """
@ -47,7 +46,6 @@ import time
from pprint import pformat from pprint import pformat
from rapidfuzz import process as fuzzy from rapidfuzz import process as fuzzy
import tractor
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
@ -55,7 +53,7 @@ from piker.ui.qt import (
size_policy, size_policy,
align_flag, align_flag,
Qt, Qt,
# QtCore, QtCore,
QtWidgets, QtWidgets,
QModelIndex, QModelIndex,
QItemSelectionModel, QItemSelectionModel,
@ -922,10 +920,7 @@ async def fill_results(
# issue multi-provider fan-out search request and place # issue multi-provider fan-out search request and place
# "searching.." statuses on outstanding results providers # "searching.." statuses on outstanding results providers
async with ( async with trio.open_nursery() as n:
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn
):
for provider, (search, pause) in ( for provider, (search, pause) in (
_searcher_cache.copy().items() _searcher_cache.copy().items()
@ -949,7 +944,7 @@ async def fill_results(
status_field='-> searchin..', status_field='-> searchin..',
) )
await tn.start( await n.start(
pack_matches, pack_matches,
view, view,
has_results, has_results,
@ -1009,14 +1004,12 @@ async def handle_keyboard_input(
view.set_font_size(searchbar.dpi_font.px_size) view.set_font_size(searchbar.dpi_font.px_size)
send, recv = trio.open_memory_channel(616) send, recv = trio.open_memory_channel(616)
async with ( async with trio.open_nursery() as n:
tractor.trionics.collapse_eg(), # needed?
trio.open_nursery() as tn
):
# start a background multi-searcher task which receives # start a background multi-searcher task which receives
# patterns relayed from this keyboard input handler and # patterns relayed from this keyboard input handler and
# async updates the completer view's results. # async updates the completer view's results.
tn.start_soon( n.start_soon(
partial( partial(
fill_results, fill_results,
searchw, searchw,

View File

@ -269,8 +269,6 @@ def hcolor(name: str) -> str:
# default ohlc-bars/curve gray # default ohlc-bars/curve gray
'bracket': '#666666', # like the logo 'bracket': '#666666', # like the logo
'pikers': '#616161', # a trader shade of..
'beast': '#161616', # in the dark alone.
# bluish # bluish
'charcoal': '#36454F', 'charcoal': '#36454F',

View File

@ -21,7 +21,6 @@ Chart trading, the only way to scalp.
from __future__ import annotations from __future__ import annotations
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from dataclasses import dataclass, field from dataclasses import dataclass, field
from decimal import Decimal
from functools import partial from functools import partial
from pprint import pformat from pprint import pformat
import time import time
@ -42,6 +41,7 @@ from piker.accounting import (
Position, Position,
mk_allocator, mk_allocator,
MktPair, MktPair,
Symbol,
) )
from piker.clearing import ( from piker.clearing import (
open_ems, open_ems,
@ -143,15 +143,6 @@ class OrderMode:
} }
_staged_order: Order | None = None _staged_order: Order | None = None
@property
def curr_mkt(self) -> MktPair:
'''
Deliver the currently selected `MktPair` according
chart state.
'''
return self.chart.linked.mkt
def on_level_change_update_next_order_info( def on_level_change_update_next_order_info(
self, self,
level: float, level: float,
@ -181,11 +172,7 @@ class OrderMode:
line.update_labels(order_info) line.update_labels(order_info)
# update bound-in staged order # update bound-in staged order
mkt: MktPair = self.curr_mkt order.price = level
order.price: Decimal = mkt.quantize(
size=level,
quantity_type='price',
)
order.size = order_info['size'] order.size = order_info['size']
# when an order is changed we flip the settings side-pane to # when an order is changed we flip the settings side-pane to
@ -200,9 +187,7 @@ class OrderMode:
) -> LevelLine: ) -> LevelLine:
# TODO, if we instead just always decimalize at the ems layer level = order.price
# we can avoid this back-n-forth casting?
level = float(order.price)
line = order_line( line = order_line(
chart or self.chart, chart or self.chart,
@ -239,11 +224,7 @@ class OrderMode:
# the order mode allocator but we still need to update the # the order mode allocator but we still need to update the
# "staged" order message we'll send to the ems # "staged" order message we'll send to the ems
def update_order_price(y: float) -> None: def update_order_price(y: float) -> None:
mkt: MktPair = self.curr_mkt order.price = y
order.price: Decimal = mkt.quantize(
size=y,
quantity_type='price',
)
line._on_level_change = update_order_price line._on_level_change = update_order_price
@ -294,31 +275,34 @@ class OrderMode:
chart = cursor.linked.chart chart = cursor.linked.chart
if ( if (
not chart not chart
and and cursor
cursor and cursor.active_plot
and
cursor.active_plot
): ):
return return
chart = cursor.active_plot chart = cursor.active_plot
price: float = cursor._datum_xy[1] price = cursor._datum_xy[1]
if not price: if not price:
# zero prices are not supported by any means # zero prices are not supported by any means
# since that's illogical / a no-op. # since that's illogical / a no-op.
return return
mkt: MktPair = self.chart.linked.mkt
# NOTE : we could also use instead,
# mkt.quantize(price, quantity_type='price')
# but it returns a Decimal and it's probably gonna
# be slower?
# TODO: should we be enforcing this precision # TODO: should we be enforcing this precision
# at a different layer in the stack? # at a different layer in the stack? right now
# |_ might require `MktPair` tracking in the EMS? # any precision error will literally be relayed
# |_ right now any precision error will be relayed # all the way back from the backend.
# all the way back from the backend and vice-versa..
# price = round(
mkt: MktPair = self.curr_mkt price,
price: Decimal = mkt.quantize( ndigits=mkt.price_tick_digits,
size=price,
quantity_type='price',
) )
order = self._staged_order = Order( order = self._staged_order = Order(
action=action, action=action,
price=price, price=price,
@ -394,7 +378,7 @@ class OrderMode:
'oid': oid, 'oid': oid,
}) })
if float(order.price) <= 0: if order.price <= 0:
log.error( log.error(
'*!? Invalid `Order.price <= 0` ?!*\n' '*!? Invalid `Order.price <= 0` ?!*\n'
# TODO: make this present multi-line in object form # TODO: make this present multi-line in object form
@ -531,15 +515,14 @@ class OrderMode:
# if an order msg is provided update the line # if an order msg is provided update the line
# **from** that msg. # **from** that msg.
if order: if order:
price: float = float(order.price) if order.price <= 0:
if price <= 0:
log.error(f'Order has 0 price, cancelling..\n{order}') log.error(f'Order has 0 price, cancelling..\n{order}')
self.cancel_orders([order.oid]) self.cancel_orders([order.oid])
return None return None
line.set_level(price) line.set_level(order.price)
self.on_level_change_update_next_order_info( self.on_level_change_update_next_order_info(
level=price, level=order.price,
line=line, line=line,
order=order, order=order,
# use the corresponding position tracker for the # use the corresponding position tracker for the
@ -555,13 +538,14 @@ class OrderMode:
def on_fill( def on_fill(
self, self,
uuid: str, uuid: str,
price: float, price: float,
time_s: float, time_s: float,
pointing: str | None = None, pointing: str | None = None,
) -> bool: ) -> None:
''' '''
Fill msg handler. Fill msg handler.
@ -574,83 +558,60 @@ class OrderMode:
- update fill bar size - update fill bar size
''' '''
# XXX WARNING XXX dialog = self.dialogs[uuid]
# if a `Status(resp='error')` arrives *before* this
# fill-status, the `.dialogs` entry may have already been
# popped and thus the below will skipped.
#
# NOTE, to avoid this confusing scenario ensure that any
# errors delivered thru from the broker-backend are not just
# "noisy reporting" (like is very common from IB..) and are
# instead ONLY errors-causing-order-dialog-cancellation!
if not (dialog := self.dialogs.get(uuid)):
log.warning(
f'Order was already cleared from `.dialogs` ??\n'
f'uuid: {uuid!r}\n'
)
return False
lines = dialog.lines lines = dialog.lines
chart = self.chart chart = self.chart
if not lines: # XXX: seems to fail on certain types of races?
log.warn("No line(s) for order {uuid}!?")
return False
# update line state(s)
#
# ?XXX this fails on certain types of races?
# assert len(lines) == 2 # assert len(lines) == 2
flume: Flume = self.feed.flumes[chart.linked.mkt.fqme] if lines:
_, _, ratio = flume.get_ds_info() flume: Flume = self.feed.flumes[chart.linked.mkt.fqme]
_, _, ratio = flume.get_ds_info()
for chart, shm in [ for chart, shm in [
(self.chart, flume.rt_shm), (self.chart, flume.rt_shm),
(self.hist_chart, flume.hist_shm), (self.hist_chart, flume.hist_shm),
]: ]:
viz = chart.get_viz(chart.name) viz = chart.get_viz(chart.name)
index_field = viz.index_field index_field = viz.index_field
arr = shm.array arr = shm.array
# TODO: borked for int index based.. # TODO: borked for int index based..
index = flume.get_index(time_s, arr) index = flume.get_index(time_s, arr)
# get absolute index for arrow placement # get absolute index for arrow placement
arrow_index = arr[index_field][index] arrow_index = arr[index_field][index]
self.arrows.add( self.arrows.add(
chart.plotItem, chart.plotItem,
uuid, uuid,
arrow_index, arrow_index,
price, price,
pointing=pointing, pointing=pointing,
color=lines[0].color color=lines[0].color
) )
else:
log.warn("No line(s) for order {uuid}!?")
def on_cancel( def on_cancel(
self, self,
uuid: str, uuid: str
) -> bool: ) -> None:
msg: Order|None = self.client._sent_orders.pop(uuid, None) msg: Order = self.client._sent_orders.pop(uuid, None)
if msg is None:
if msg is not None:
self.lines.remove_line(uuid=uuid)
self.chart.linked.cursor.show_xhair()
dialog = self.dialogs.pop(uuid, None)
if dialog:
dialog.last_status_close()
else:
log.warning( log.warning(
f'Received cancel for unsubmitted order {pformat(msg)}' f'Received cancel for unsubmitted order {pformat(msg)}'
) )
return False
# remove GUI line, show cursor.
self.lines.remove_line(uuid=uuid)
self.chart.linked.cursor.show_xhair()
# remove msg dialog (history)
dialog: Dialog|None = self.dialogs.pop(uuid, None)
if dialog:
dialog.last_status_close()
return True
def cancel_orders_under_cursor(self) -> list[str]: def cancel_orders_under_cursor(self) -> list[str]:
return self.cancel_orders( return self.cancel_orders(
@ -720,9 +681,9 @@ class OrderMode:
) -> Dialog | None: ) -> Dialog | None:
# NOTE: the `.order` attr **must** be set with the # NOTE: the `.order` attr **must** be set with the
# equivalent order msg in order to be loaded. # equivalent order msg in order to be loaded.
order: Order = msg.req order = msg.req
oid = str(msg.oid) oid = str(msg.oid)
symbol: str = order.symbol symbol = order.symbol
# TODO: MEGA UGGG ZONEEEE! # TODO: MEGA UGGG ZONEEEE!
src = msg.src src = msg.src
@ -741,22 +702,13 @@ class OrderMode:
order.oid = str(order.oid) order.oid = str(order.oid)
order.brokers = [brokername] order.brokers = [brokername]
# ?TODO? change this over to `MktPair`, but it's gonna be # TODO: change this over to `MktPair`, but it's
# tough since we don't have any such data really in our # gonna be tough since we don't have any such data
# clearing msg schema.. # really in our clearing msg schema..
# BUT WAIT! WHY do we even want/need this!? order.symbol = Symbol.from_fqme(
# fqsn=fqme,
# order.symbol = self.curr_mkt info={},
# )
# XXX, the old approach.. which i don't quire member why..
# -[ ] verify we for sure don't require this any more!
# |_https://github.com/pikers/piker/issues/517
#
# order.symbol = Symbol.from_fqme(
# fqsn=fqme,
# info={},
# )
maybe_dialog: Dialog | None = self.submit_order( maybe_dialog: Dialog | None = self.submit_order(
send_msg=False, send_msg=False,
order=order, order=order,
@ -814,7 +766,6 @@ async def open_order_mode(
brokerd_accounts, brokerd_accounts,
ems_dialog_msgs, ems_dialog_msgs,
), ),
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn, trio.open_nursery() as tn,
): ):
@ -1079,23 +1030,13 @@ async def process_trade_msg(
if name in ( if name in (
'position', 'position',
): ):
mkt: MktPair = mode.chart.linked.mkt sym: MktPair = mode.chart.linked.mkt
pp_msg_symbol = msg['symbol'].lower() pp_msg_symbol = msg['symbol'].lower()
pp_msg_bsmktid = msg['bs_mktid'] fqme = sym.fqme
fqme = mkt.fqme broker = sym.broker
broker = mkt.broker
if ( if (
# match on any backed-specific(-unique)-ID first!
(
pp_msg_bsmktid
and
mkt.bs_mktid == pp_msg_bsmktid
)
or
# OW try against what's provided as an FQME..
pp_msg_symbol == fqme pp_msg_symbol == fqme
or or pp_msg_symbol == fqme.removesuffix(f'.{broker}')
pp_msg_symbol == fqme.removesuffix(f'.{broker}')
): ):
log.info( log.info(
f'Loading position for `{fqme}`:\n' f'Loading position for `{fqme}`:\n'
@ -1118,7 +1059,7 @@ async def process_trade_msg(
return return
msg = Status(**msg) msg = Status(**msg)
# resp: str = msg.resp resp = msg.resp
oid = msg.oid oid = msg.oid
dialog: Dialog = mode.dialogs.get(oid) dialog: Dialog = mode.dialogs.get(oid)
@ -1160,7 +1101,7 @@ async def process_trade_msg(
) )
) )
): ):
msg.req: Order = order msg.req = order
dialog: ( dialog: (
Dialog Dialog
# NOTE: on an invalid order submission (eg. # NOTE: on an invalid order submission (eg.
@ -1182,33 +1123,20 @@ async def process_trade_msg(
mode.on_submit(oid) mode.on_submit(oid)
case Status(resp='error'): case Status(resp='error'):
# TODO: parse into broker-side msg, or should we
# expect it to just be **that** msg verbatim (since
# we'd presumably have only 1 `Error` msg-struct)
broker_msg: dict = msg.brokerd_msg
# XXX NOTE, this presumes the rxed "error" is
# order-dialog-cancel-causing, THUS backends much ONLY
# relay errors of this "severity"!!
log.error(
f'Order errored ??\n'
f'oid: {oid!r}\n'
f'\n'
f'{pformat(broker_msg)}\n'
f'\n'
f'=> CANCELLING ORDER DIALOG <=\n'
# from tractor.devx.pformat import ppfmt
# !TODO LOL, wtf the msg is causing
# a recursion bug!
# -[ ] get this shit on msgspec stat!
# f'{ppfmt(broker_msg)}'
)
# do all the things for a cancel: # do all the things for a cancel:
# - drop order-msg dialog from client table # - drop order-msg dialog from client table
# - delete level line from view # - delete level line from view
mode.on_cancel(oid) mode.on_cancel(oid)
# TODO: parse into broker-side msg, or should we
# expect it to just be **that** msg verbatim (since
# we'd presumably have only 1 `Error` msg-struct)
broker_msg: dict = msg.brokerd_msg
log.error(
f'Order {oid}->{resp} with:\n{pformat(broker_msg)}'
)
case Status(resp='canceled'): case Status(resp='canceled'):
# delete level line from view # delete level line from view
mode.on_cancel(oid) mode.on_cancel(oid)
@ -1223,10 +1151,10 @@ async def process_trade_msg(
# TODO: UX for a "pending" clear/live order # TODO: UX for a "pending" clear/live order
log.info(f'Dark order triggered for {fmtmsg}') log.info(f'Dark order triggered for {fmtmsg}')
# TODO: do the struct-msg version, blah blah..
# req=Order(exec_mode='live', action='alert') as req,
case Status( case Status(
resp='triggered', resp='triggered',
# TODO: do the struct-msg version, blah blah..
# req=Order(exec_mode='live', action='alert') as req,
req={ req={
'exec_mode': 'live', 'exec_mode': 'live',
'action': 'alert', 'action': 'alert',
@ -1238,7 +1166,7 @@ async def process_trade_msg(
tm = time.time() tm = time.time()
mode.on_fill( mode.on_fill(
oid, oid,
price=float(req.price), price=req.price,
time_s=tm, time_s=tm,
) )
mode.lines.remove_line(uuid=oid) mode.lines.remove_line(uuid=oid)
@ -1293,7 +1221,7 @@ async def process_trade_msg(
tm = details['broker_time'] tm = details['broker_time']
mode.on_fill( mode.on_fill(
oid, oid,
price=float(details['price']), price=details['price'],
time_s=tm, time_s=tm,
pointing='up' if action == 'buy' else 'down', pointing='up' if action == 'buy' else 'down',
) )
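
The `mkt.quantize(..., quantity_type='price')` calls sprinkled through the order-mode hunks above boil down to snapping a raw float level to the market's price-tick precision with `Decimal` before it ever hits the EMS; a rough sketch (the tick size is made up and piker's actual `MktPair.quantize()` may differ in detail):

    from decimal import Decimal, ROUND_HALF_EVEN

    price_tick = Decimal('0.01')      # hypothetical 1-cent tick
    level: float = 4321.13371337      # raw y-value from the chart cursor

    price: Decimal = Decimal(str(level)).quantize(
        price_tick,
        rounding=ROUND_HALF_EVEN,
    )
    print(price)  # -> 4321.13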

View File

@ -15,186 +15,140 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
[build-system] [build-system]
requires = ["hatchling"] requires = ["poetry-core"]
build-backend = "hatchling.build" build-backend = "poetry.core.masonry.api"
[project] # ------ - ------
[tool.ruff.lint]
# https://docs.astral.sh/ruff/settings/#lint_ignore
ignore = []
# https://docs.astral.sh/ruff/settings/#lint_per-file-ignores
"piker/ui/qt.py" = [
"E402",
'F401', # unused imports (without __all__ or blah as blah)
# "F841", # unused variable rules
]
# ignore-init-module-imports = false
# ------ - ------
[tool.poetry]
name = "piker" name = "piker"
version = "0.1.0a0dev0" version = "0.1.0.alpha0.dev0"
description = "trading gear for hackers" description = "trading gear for hackers"
authors = [{ name = "Tyler Goodlet", email = "goodboy_foss@protonmail.com" }] authors = ["Tyler Goodlet <goodboy_foss@protonmail.com>"]
requires-python = ">=3.12" license = "AGPLv3"
license = "AGPL-3.0-or-later"
readme = "README.rst" readme = "README.rst"
keywords = [
"async",
"trading",
"finance",
"quant",
"charting",
]
classifiers = [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Intended Audience :: Education",
]
dependencies = [
"async-generator >=1.10, <2.0.0",
"attrs >=23.1.0, <24.0.0",
"bidict >=0.23.1",
"colorama >=0.4.6, <0.5.0",
"colorlog >=6.7.0, <7.0.0",
"ib-insync >=0.9.86, <0.10.0",
"numpy>=2.0",
"polars >=0.20.6",
"polars-fuzzy-match>=0.1.5",
"pygments >=2.16.1, <3.0.0",
"rich >=13.5.2, <14.0.0",
"tomli >=2.0.1, <3.0.0",
"tomli-w >=1.0.0, <2.0.0",
"trio-util >=0.7.0, <0.8.0",
"trio-websocket >=0.10.3, <0.11.0",
"typer >=0.9.0, <1.0.0",
"trio >=0.27",
"pendulum",
"httpx >=0.27.0, <0.28.0",
"cryptofeed >=2.4.0, <3.0.0",
"pyarrow>=18.0.0",
"websockets ==12.0",
"msgspec>=0.19.0,<0.20",
"tractor",
"tomlkit",
"trio-typing>=0.10.0",
"numba>=0.61.0",
"pyvnc",
]
# ------ dependencies ------
# ------ - ------
# TODO: add an `--only daemon` group for running non-ui / pikerd [tool.poetry.dependencies]
# service tree in distributed mode B) async-generator = "^1.10"
# https://docs.astral.sh/uv/concepts/projects/dependencies/#optional-dependencies attrs = "^23.1.0"
bidict = "^0.22.1"
colorama = "^0.4.6"
colorlog = "^6.7.0"
cython = "^3.0.0"
greenback = "^1.1.1"
ib-insync = "^0.9.86"
msgspec = "^0.18.0"
numba = "^0.59.0"
numpy = "^1.25"
polars = "^0.18.13"
pygments = "^2.16.1"
python = ">=3.11, <3.13"
rich = "^13.5.2"
# setuptools = "^68.0.0"
tomli = "^2.0.1"
tomli-w = "^1.0.0"
trio-util = "^0.7.0"
trio-websocket = "^0.10.3"
typer = "^0.9.0"
rapidfuzz = "^3.5.2"
pdbp = "^1.5.0"
trio = "^0.24"
pendulum = "^3.0.0"
httpx = "^0.27.0"
[dependency-groups] [tool.poetry.dependencies.tractor]
uis = [ develop = true
# https://docs.astral.sh/uv/concepts/projects/dependencies/#optional-dependencies git = 'https://github.com/goodboy/tractor.git'
# TODO: make sure the levenshtein shit compiles on nix.. branch = 'asyncio_debugger_support'
# rapidfuzz = {extras = ["speedup"], version = "^0.18.0"} # path = "../tractor"
"rapidfuzz >=3.2.0, <4.0.0",
"qdarkstyle >=3.0.2, <4.0.0",
"pyqt6 >=6.7.0, <7.0.0",
"pyqtgraph",
# for consideration, [tool.poetry.dependencies.asyncvnc]
# - 'visidata' git = 'https://github.com/pikers/asyncvnc.git'
branch = 'main'
"qdarkstyle >=3.0.2, <4.0.0", [tool.poetry.dependencies.tomlkit]
"pyqt6 >=6.7.0, <7.0.0", develop = true
"pyqtgraph", git = 'https://github.com/pikers/tomlkit.git'
] branch = 'piker_pin'
# path = "../tomlkit/"
# TODO: a toolset that makes debugging a `pikerd` service (tree) easy [tool.poetry.group.uis]
# to hack on directly using more or less the local env: optional = true
[tool.poetry.group.uis.dependencies]
# https://python-poetry.org/docs/managing-dependencies/#dependency-groups
# TODO: make sure the levenshtein shit compiles on nix..
# rapidfuzz = {extras = ["speedup"], version = "^0.18.0"}
rapidfuzz = "^3.2.0"
qdarkstyle = ">=3.0.2"
pyqtgraph = { git = 'https://github.com/pikers/pyqtgraph.git' }
# ------ - ------
pyqt6 = "^6.7.0"
[tool.poetry.group.dev]
optional = true
[tool.poetry.group.dev.dependencies]
# testing / CI
pytest = "^6.0.0"
elasticsearch = "^8.9.0"
xonsh = "^0.14.2"
prompt-toolkit = "3.0.40"
# console enhancements and eventually remote debugging
# extras/helpers.
# TODO: add a toolset that makes debugging a `pikerd` service
# (tree) easy to hack on directly using more or less the local env:
# - xonsh + xxh # - xonsh + xxh
# - rsyscall + pdbp # - rsyscall + pdbp
# - actor runtime control console like BEAM/OTP # - actor runtime control console like BEAM/OTP
#
# console enhancements and eventually remote debugging extras/helpers.
# use `uv --dev` to enable
repl = [
# debug
"pdbp >=1.5.0, <2.0.0",
"greenback >=1.1.1, <2.0.0",
"xonsh",
"prompt-toolkit ==3.0.40",
"pyperclip>=1.9.0",
# ------ - ------
# TODO: add an `--only daemon` group for running non-ui / pikerd
# service tree in distributed mode B)
# https://python-poetry.org/docs/managing-dependencies/#installing-group-dependencies
# [tool.poetry.group.daemon.dependencies]
[tool.poetry.scripts]
piker = 'piker.cli:cli'
pikerd = 'piker.cli:pikerd'
ledger = 'piker.accounting.cli:ledger'
[project]
keywords=[
"async",
"trading",
"finance",
"quant",
"charting",
] ]
testing = [ classifiers=[
"pytest", 'Development Status :: 3 - Alpha',
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
'Operating System :: POSIX :: Linux',
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
'Intended Audience :: Financial and Insurance Industry',
'Intended Audience :: Science/Research',
'Intended Audience :: Developers',
'Intended Audience :: Education',
] ]
de = [
# DE-specific
"i3ipc>=2.2.1",
]
dev = [
# https://docs.astral.sh/uv/concepts/projects/dependencies/#development-dependencies
"cython >=3.0.0, <4.0.0",
# nested deps-groups
# https://docs.astral.sh/uv/concepts/projects/dependencies/#nesting-groups
{include-group = 'uis'},
{include-group = 'repl'},
{include-group = 'testing'},
{include-group = 'de'},
]
lint = [
# XXX, with flake.nix needs to be from nixpkgs
"ruff>=0.9.6"
#
# ^TODO? these markers don't work; use deps-flags for now?
# ; os_name != 'nixos' and platform_system != 'NixOS'",
# ; defined('IN_NIX_SHELL')",
]
dbs = [
"elasticsearch >=8.9.0, <9.0.0",
]
# ------ dependency-groups ------
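# ------ illustrative sketch, not part of the project config ------
# the `--only daemon` TODOs repeated above describe a headless group for
# running just the non-ui `pikerd` service tree; a minimal, hypothetical
# version (group name and membership are assumptions, entries copied from
# the main dependency list) which `uv sync --only-group daemon` could then
# install on a headless host:
#
# daemon = [
#     "tractor",
#     "trio >=0.27",
#     "msgspec>=0.19.0,<0.20",
#     "tomlkit",
# ]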
[tool.pytest.ini_options]
# https://docs.pytest.org/en/stable/reference/reference.html#configuration-options
testpaths = [
"tests",
]
# https://docs.pytest.org/en/stable/reference/reference.html#confval-console_output_style
console_output_style = 'progress'
# https://docs.pytest.org/en/stable/how-to/plugins.html#disabling-plugins-from-autoloading
# https://docs.pytest.org/en/stable/how-to/plugins.html#deactivating-unregistering-a-plugin-by-name
addopts = '-p no:xonsh'
# ------ tool.pytest ------
[project.scripts]
piker = "piker.cli:cli"
pikerd = "piker.cli:pikerd"
ledger = "piker.accounting.cli:ledger"
# ------ project.scripts ------
[tool.hatch.build.targets.sdist]
include = ["piker"]
[tool.hatch.build.targets.wheel]
include = ["piker"]
# ------ tool.hatch ------
# TODO? move to a `uv.toml`?
[tool.uv]
python-preference = 'system'
python-downloads = 'manual'
# https://docs.astral.sh/uv/concepts/projects/dependencies/#default-groups
default-groups = ['uis', 'dev']
# ------ tool.uv ------
[tool.uv.sources]
pyqtgraph = { git = "https://github.com/pikers/pyqtgraph.git" }
tomlkit = { git = "https://github.com/pikers/tomlkit.git", branch ="piker_pin" }
pyvnc = { git = "https://github.com/regulad/pyvnc.git" }
# XXX since, we're like, always hacking new shite all-the-time. Bp
tractor = { git = "https://github.com/goodboy/tractor.git", branch ="piker_pin" }
# tractor = { git = "https://pikers.dev/goodboy/tractor", branch = "piker_pin" }
# tractor = { git = "https://pikers.dev/goodboy/tractor", branch = "main" }
# ------ goodboy ------
# hackin dev-envs, usually there's something new he's hackin in..
# tractor = { path = "../tractor", editable = true }

View File

@ -1,94 +0,0 @@
# from default `ruff.toml` @
# https://docs.astral.sh/ruff/configuration/
# Exclude a variety of commonly ignored directories.
exclude = [
".bzr",
".direnv",
".eggs",
".git",
".git-rewrite",
".hg",
".ipynb_checkpoints",
".mypy_cache",
".nox",
".pants.d",
".pyenv",
".pytest_cache",
".pytype",
".ruff_cache",
".svn",
".tox",
".venv",
".vscode",
"__pypackages__",
"_build",
"buck-out",
"build",
"dist",
"node_modules",
"site-packages",
"venv",
]
# Same as Black.
line-length = 88
indent-width = 4
# Assume Python 3.12
target-version = "py312"
# ------ - ------
# TODO, stop warnings around `anext()` builtin use?
# tool.ruff.target-version = "py310"
[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or
# McCabe complexity (`C901`) by default.
select = ["E4", "E7", "E9", "F"]
ignore = []
ignore-init-module-imports = false
[lint.per-file-ignores]
"piker/ui/qt.py" = [
"E402",
'F401', # unused imports (without __all__ or blah as blah)
# "F841", # unused variable rules
]
# Allow fix for all enabled rules (when `--fix`) is provided.
fixable = ["ALL"]
unfixable = []
# TODO? uhh why no work!?
# Allow unused variables when underscore-prefixed.
# dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
[format]
# Use single quotes in `ruff format`.
quote-style = "single"
# Like Black, indent with spaces, rather than tabs.
indent-style = "space"
# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false
# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"
# Enable auto-formatting of code examples in docstrings. Markdown,
# reStructuredText code/literal blocks and doctests are all supported.
#
# This is currently disabled by default, but it is planned for this
# to be opt-out in the future.
docstring-code-format = false
# Set the line length limit used when formatting code snippets in
# docstrings.
#
# This only has an effect when the `docstring-code-format` setting is
# enabled.
docstring-code-line-length = "dynamic"

View File

@ -1,22 +1,4 @@
# piker: trading gear for hackers """
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
A per-display, DPI (scaling) info dumper.
Resource list for mucking with DPIs on multiple screens:
- https://stackoverflow.com/questions/42141354/convert-pixel-size-to-point-size-for-fonts-on-multiple-platforms
@ -30,86 +12,89 @@ Resource list for mucking with DPIs on multiple screens:
- https://stackoverflow.com/questions/16561879/what-is-the-difference-between-logicaldpix-and-physicaldpix-in-qt
- https://doc.qt.io/qt-5/qguiapplication.html#screenAt
'''
from pyqtgraph import QtGui
from PyQt6 import (
    QtCore,
    QtWidgets,
)
from PyQt6.QtCore import (
    Qt,
    QCoreApplication,
    QSize,
    QRect,
)

# Proper high DPI scaling is available in Qt >= 5.6.0. This attribute
# must be set before creating the application
if hasattr(Qt, 'AA_EnableHighDpiScaling'):
    QCoreApplication.setAttribute(
        Qt.AA_EnableHighDpiScaling,
        True,
    )

if hasattr(Qt, 'AA_UseHighDpiPixmaps'):
    QCoreApplication.setAttribute(
        Qt.AA_UseHighDpiPixmaps,
        True,
    )

app = QtWidgets.QApplication([])
window = QtWidgets.QMainWindow()
main_widget = QtWidgets.QWidget()
window.setCentralWidget(main_widget)
window.show()

pxr: float = main_widget.devicePixelRatioF()

# explicitly get main widget and primary displays
current_screen: QtGui.QScreen = app.screenAt(
    main_widget.geometry().center()
)
primary_screen: QtGui.QScreen = app.primaryScreen()

screen: QtGui.QScreen
for screen in app.screens():
    name: str = screen.name()
    model: str = screen.model().rstrip()
    size: QSize = screen.size()
    geo: QRect = screen.availableGeometry()
    phydpi: float = screen.physicalDotsPerInch()
    logdpi: float = screen.logicalDotsPerInch()
    is_primary: bool = screen is primary_screen
    is_current: bool = screen is current_screen

    print(
        f'------ screen name: {name} ------\n'
        f'|_primary: {is_primary}\n'
        f' _current: {is_current}\n'
        f' _model: {model}\n'
        f' _screen size: {size}\n'
        f' _screen geometry: {geo}\n'
        f' _devicePixelRatioF(): {pxr}\n'
        f' _physical dpi: {phydpi}\n'
        f' _logical dpi: {logdpi}\n'
    )

# app-wide font info
font = QtGui.QFont("Hack")
# use pixel size to be cross-resolution compatible?
font.setPixelSize(6)

fm = QtGui.QFontMetrics(font)
fontdpi: float = fm.fontDpi()
font_h: int = fm.height()

string: str = '10000'
str_br: QtCore.QRect = fm.boundingRect(string)
str_w: int = str_br.width()

print(
    f'------ global font settings ------\n'
    f'font dpi: {fontdpi}\n'
    f'font height: {font_h}\n'
    f'string bounding rect: {str_br}\n'

"""
from pyqtgraph import QtGui
from PyQt5.QtCore import (
    Qt, QCoreApplication
)

# Proper high DPI scaling is available in Qt >= 5.6.0. This attribute
# must be set before creating the application
if hasattr(Qt, 'AA_EnableHighDpiScaling'):
    QCoreApplication.setAttribute(Qt.AA_EnableHighDpiScaling, True)

if hasattr(Qt, 'AA_UseHighDpiPixmaps'):
    QCoreApplication.setAttribute(Qt.AA_UseHighDpiPixmaps, True)

app = QtGui.QApplication([])
window = QtGui.QMainWindow()
main_widget = QtGui.QWidget()
window.setCentralWidget(main_widget)
window.show()

pxr = main_widget.devicePixelRatioF()

# screen_num = app.desktop().screenNumber()
# screen = app.screens()[screen_num]
screen = app.screenAt(main_widget.geometry().center())

name = screen.name()
size = screen.size()
geo = screen.availableGeometry()
phydpi = screen.physicalDotsPerInch()
logdpi = screen.logicalDotsPerInch()

print(
    # f'screen number: {screen_num}\n',
    f'screen name: {name}\n'
    f'screen size: {size}\n'
    f'screen geometry: {geo}\n\n'
    f'devicePixelRatioF(): {pxr}\n'
    f'physical dpi: {phydpi}\n'
    f'logical dpi: {logdpi}\n'
)

print('-'*50)

screen = app.primaryScreen()

name = screen.name()
size = screen.size()
geo = screen.availableGeometry()
phydpi = screen.physicalDotsPerInch()
logdpi = screen.logicalDotsPerInch()

print(
    # f'screen number: {screen_num}\n',
    f'screen name: {name}\n'
    f'screen size: {size}\n'
    f'screen geometry: {geo}\n\n'
    f'devicePixelRatioF(): {pxr}\n'
    f'physical dpi: {phydpi}\n'
    f'logical dpi: {logdpi}\n'
)

# app-wide font
font = QtGui.QFont("Hack")
# use pixel size to be cross-resolution compatible?
font.setPixelSize(6)

fm = QtGui.QFontMetrics(font)
fontdpi = fm.fontDpi()
font_h = fm.height()
string = '10000'
str_br = fm.boundingRect(string)
str_w = str_br.width()

print(
    # f'screen number: {screen_num}\n',
    f'font dpi: {fontdpi}\n'
    f'font height: {font_h}\n'
    f'string bounding rect: {str_br}\n'
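# ---- illustrative addendum (not part of the snippet above) ----
# why dump per-screen DPI at all: font point sizes are defined as
# 1pt = 1/72 inch, so the same point size maps to a different pixel height
# on every display; a small, assumed sketch of that conversion using the
# logical DPI values the script prints:

def pt_to_px(pt: float, logical_dpi: float) -> float:
    # 72 points per inch is the standard typographic definition
    return pt * logical_dpi / 72.0


def px_to_pt(px: float, logical_dpi: float) -> float:
    return px * 72.0 / logical_dpi


# e.g. the fixed 6px glyph size above is ~4.5pt on a 96 dpi screen but only
# ~2.9pt at 150 dpi, which is why pixel sizing is floated as the
# "cross-resolution compatible" option in the comments.
assert round(px_to_pt(6, 96.0), 1) == 4.5
assert round(px_to_pt(6, 150.0), 1) == 2.9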

tags
View File

@ -1 +0,0 @@
TAG_feed_status_update ./piker/data/feed.py /TAG_feed_status_update/

View File

@ -15,12 +15,6 @@ from piker.service import (
from piker.log import get_console_log

# include `tractor`'s built-in fixtures!
pytest_plugins: tuple[str] = (
    "tractor._testing.pytest",
)


def pytest_addoption(parser):
    parser.addoption("--ll", action="store", dest='loglevel',
                     default=None, help="logging level to set when testing")
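# ---- illustrative sketch (not part of the shown conftest hunk) ----
# an option registered via `pytest_addoption` is normally surfaced to tests
# through a fixture; the real conftest may differ, but a minimal version
# for the `--ll` flag above could look like:
import pytest


@pytest.fixture
def loglevel(request) -> str | None:
    # `dest='loglevel'` above means either name resolves via `getoption()`
    return request.config.getoption('--ll')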

View File

@ -12,14 +12,12 @@ from piker import config
from piker.accounting import (
    Account,
    calc,
    open_account,
    load_account,
    load_account_from_ledger,
    open_trade_ledger,
    Position,
    TransactionLedger,
    open_trade_ledger,
    load_account,
    load_account_from_ledger,
)
import tractor


def test_root_conf_networking_section(
@ -55,17 +53,12 @@ def test_account_file_default_empty(
)
def test_paper_ledger_position_calcs(
    fq_acnt: tuple[str, str],
    debug_mode: bool,
):
    broker: str
    acnt_name: str
    broker, acnt_name = fq_acnt

    accounts_path: Path = (
        config.repodir()
        / 'tests'
        / '_inputs'  # tests-local-subdir
    )
    accounts_path: Path = config.repodir() / 'tests' / '_inputs'

    ldr: TransactionLedger
    with (
@ -84,7 +77,6 @@ def test_paper_ledger_position_calcs(
        ledger=ldr,
        _fp=accounts_path,
        debug_mode=debug_mode,
    ) as (dfs, ledger),
@ -110,87 +102,3 @@ def test_paper_ledger_position_calcs(
    df = dfs[xrp]
    assert df['cumsize'][-1] == 0
    assert pos.cumsize == 0
@pytest.mark.parametrize(
    'fq_acnt',
    [
        ('ib', 'algopaper'),
    ],
)
def test_ib_account_with_duplicated_mktids(
    fq_acnt: tuple[str, str],
    debug_mode: bool,
):
    # ?TODO, once we start symcache-incremental-update-support?
    # from piker.data import (
    #     open_symcache,
    # )
    #
    # async def main():
    #     async with (
    #         # TODO: do this as part of `open_account()`!?
    #         open_symcache(
    #             'ib',
    #             only_from_memcache=True,
    #         ) as symcache,
    #     ):
    from piker.brokers.ib.ledger import (
        tx_sort,
        # ?TODO, once we want to pull lowlevel txns and process them?
        # norm_trade_records,
        # update_ledger_from_api_trades,
    )
    broker: str
    acnt_id: str = 'algopaper'
    broker, acnt_id = fq_acnt

    accounts_def = config.load_accounts([broker])
    assert accounts_def[f'{broker}.{acnt_id}']

    ledger: TransactionLedger
    acnt: Account
    with (
        tractor.devx.maybe_open_crash_handler(pdb=debug_mode),
        open_trade_ledger(
            'ib',
            acnt_id,
            tx_sort=tx_sort,
            # TODO, eventually incrementally updated for IB..
            # symcache=symcache,
            symcache=None,
            allow_from_sync_code=True,
        ) as ledger,
        open_account(
            'ib',
            acnt_id,
            write_on_exit=True,
        ) as acnt,
    ):
        # per input params
        symcache = ledger.symcache
        assert not (
            symcache.pairs
            or
            symcache.pairs
            or
            symcache.mktmaps
        )

        # re-compute all positions that have changed state.
        # TODO: likely we should change the API to return the
        # position updates from `.update_from_ledger()`?
        active, closed = acnt.dump_active()
        # breakpoint()

        # TODO, (see above imports as well) incremental update from
        # (updated) ledger?
        # -[ ] pull some code from `.ib.broker` content.

View File

@ -42,7 +42,7 @@ from piker.accounting import (
    unpack_fqme,
)
from piker.accounting import (
    open_account,
    open_pps,
    Position,
)
@ -136,7 +136,7 @@ def load_and_check_pos(
) -> None:

    with open_account(ppmsg.broker, ppmsg.account) as table:
    with open_pps(ppmsg.broker, ppmsg.account) as table:
        if ppmsg.size == 0:
            assert ppmsg.symbol not in table.pps
@ -179,7 +179,7 @@ def test_ems_err_on_bad_broker(
        # NOTE: emsd should error on the actor's enabled modules
        # import phase, when looking for a backend named `doggy`.
        except tractor.RemoteActorError as re:
            assert re.type is ModuleNotFoundError
            assert re.type == ModuleNotFoundError

    run_and_tollerate_cancels(load_bad_fqme)

View File

@ -142,12 +142,7 @@ async def test_concurrent_tokens_refresh(us_symbols, loglevel):
    # async with tractor.open_nursery() as n:
    #     await n.run_in_actor('other', intermittently_refresh_tokens)

    async with (
        tractor.trionics.collapse_eg(),
        trio.open_nursery(
            # strict_exception_groups=False,
        ) as n
    ):
    async with trio.open_nursery() as n:

        quoter = await qt.stock_quoter(client, us_symbols)
@ -388,9 +383,7 @@ async def test_quote_streaming(tmx_symbols, loglevel, stream_what):
    else:
        symbols = [tmx_symbols]

    async with trio.open_nursery(
        strict_exception_groups=False,
    ) as n:
    async with trio.open_nursery() as n:
        for syms, func in zip(symbols, stream_what):
            n.start_soon(func, feed, syms)
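# ---- illustrative sketch (not from the test file above) ----
# one side of the diff wraps the nursery in `tractor.trionics.collapse_eg()`
# (an exception-group collapsing helper, assumed to exist per the exact
# usage shown above) rather than passing `strict_exception_groups=False`;
# the bare pattern, runnable on its own under that assumption:
import trio
import tractor


async def _nursery_sketch() -> None:
    async with (
        tractor.trionics.collapse_eg(),
        trio.open_nursery() as n,
    ):
        n.start_soon(trio.sleep, 0)


if __name__ == '__main__':
    trio.run(_nursery_sketch)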

uv.lock

File diff suppressed because it is too large