Compare commits
No commits in common. "gitea_feats" and "ib_py311_fixes" have entirely different histories.
gitea_feats ... ib_py311_fixes

README.rst (234 changed lines)
|
@ -1,161 +1,162 @@
|
|||
piker
|
||||
-----
|
||||
trading gear for hackers
|
||||
trading gear for hackers.
|
||||
|
||||
|gh_actions|
|
||||
|
||||
.. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpikers%2Fpiker%2Fbadge&style=popout-square
|
||||
:target: https://actions-badge.atrox.dev/piker/pikers/goto
|
||||
|
||||
``piker`` is a broker agnostic, next-gen FOSS toolset and runtime for
|
||||
real-time computational trading targeted at `hardcore Linux users
|
||||
<comp_trader>`_ .
|
||||
``piker`` is a broker agnostic, next-gen FOSS toolset for real-time
|
||||
computational trading targeted at `hardcore Linux users <comp_trader>`_ .
|
||||
|
||||
we use much bleeding edge tech including (but not limited to):
|
||||
we use as much bleeding edge tech as possible including (but not limited to):
|
||||
|
||||
- latest python for glue_
|
||||
- uv_ for packaging and distribution
|
||||
- trio_ & tractor_ for our distributed `structured concurrency`_ runtime
|
||||
- Qt_ for pristine low latency UIs
|
||||
- pyqtgraph_ (which we've extended) for real-time charting and graphics
|
||||
- ``polars`` ``numpy`` and ``numba`` for redic `fast numerics`_
|
||||
- `apache arrow and parquet`_ for time-series storage
|
||||
- trio_ & tractor_ for our distributed, multi-core, real-time streaming
|
||||
`structured concurrency`_ runtime B)
|
||||
- Qt_ for pristine high performance UIs
|
||||
- pyqtgraph_ for real-time charting
|
||||
- ``polars`` ``numpy`` and ``numba`` for `fast numerics`_
|
||||
- `apache arrow and parquet`_ for time series history management
|
||||
persistence and sharing
|
||||
- (prototyped) techtonicdb_ for L2 book storage
|
||||
|
||||
potential projects we might integrate with soon,
|
||||
|
||||
- (already prototyped in ) techtonicdb_ for L2 book storage
|
||||
|
||||
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
|
||||
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
|
||||
.. _uv: https://docs.astral.sh/uv/
|
||||
.. |travis| image:: https://img.shields.io/travis/pikers/piker/master.svg
|
||||
:target: https://travis-ci.org/pikers/piker
|
||||
.. _trio: https://github.com/python-trio/trio
|
||||
.. _tractor: https://github.com/goodboy/tractor
|
||||
.. _structured concurrency: https://trio.discourse.group/
|
||||
.. _marketstore: https://github.com/alpacahq/marketstore
|
||||
.. _techtonicdb: https://github.com/0b01/tectonicdb
|
||||
.. _Qt: https://www.qt.io/
|
||||
.. _pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
|
||||
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
|
||||
.. _apache arrow and parquet: https://arrow.apache.org/faq/
|
||||
.. _fast numerics: https://zerowithdot.com/python-numpy-and-pandas-performance/
|
||||
.. _techtonicdb: https://github.com/0b01/tectonicdb
|
||||
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
|
||||
|
||||
|
||||
focus and feats:
|
||||
****************
|
||||
fitting with these tenets, we're always open to new
|
||||
framework/lib/service interop suggestions and ideas!
|
||||
focus and features:
|
||||
*******************
|
||||
- 100% federated: your code, your hardware, your data feeds, your broker fills.
|
||||
- zero web: low latency, native software that doesn't try to re-invent the OS
|
||||
- maximal **privacy**: prevent brokers and mms from knowing your
|
||||
planz; smack their spreads with dark volume.
|
||||
- zero clutter: modal, context oriented UIs that eschew minimalism, reduce
|
||||
thought noise and encourage un-emotion.
|
||||
- first class parallelism: built from the ground up on next-gen structured concurrency
|
||||
primitives.
|
||||
- traders first: broker/exchange/asset-class agnostic
|
||||
- systems grounded: real-time financial signal processing that will
|
||||
make any queuing or DSP eng juice their shorts.
|
||||
- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
|
||||
- data collaboration: every process and protocol is multi-host scalable.
|
||||
- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.
|
||||
|
||||
- **100% federated**:
|
||||
your code, your hardware, your data feeds, your broker fills.
|
||||
fitting with these tenets, we're always open to new framework suggestions and ideas.
|
||||
|
||||
- **zero web**:
|
||||
low latency as a prime objective, native UIs and modern IPC
|
||||
protocols without trying to re-invent the "OS-as-an-app"..
|
||||
|
||||
- **maximal privacy**:
|
||||
prevent brokers and mms from knowing your planz; smack their
|
||||
spreads with dark volume from a VPN tunnel.
|
||||
|
||||
- **zero clutter**:
|
||||
modal, context oriented UIs that eschew minimalism, reduce thought
|
||||
noise and encourage un-emotion.
|
||||
|
||||
- **first class parallelism**:
|
||||
built from the ground up on a next-gen structured concurrency
|
||||
supervision sys.
|
||||
|
||||
- **traders first**:
|
||||
broker/exchange/venue/asset-class/money-sys agnostic
|
||||
|
||||
- **systems grounded**:
|
||||
real-time financial signal processing (fsp) that will make any
|
||||
queuing or DSP eng juice their shorts.
|
||||
|
||||
- **non-tina UX**:
|
||||
sleek, powerful keyboard driven interaction with expected use in
|
||||
tiling wms (or maybe even a DDE).
|
||||
|
||||
- **data collab at scale**:
|
||||
every actor-process and protocol is multi-host aware.
|
||||
|
||||
- **fight club ready**:
|
||||
zero interest in adoption by suits; no corporate friendly license,
|
||||
ever.
|
||||
|
||||
building the hottest looking, fastest, most reliable, keyboard
|
||||
friendly FOSS trading platform is the dream; join the cause.
|
||||
building the best looking, most reliable, keyboard friendly trading
|
||||
platform is the dream; join the cause.
|
||||
|
||||
|
||||
a sane install with `uv`
|
||||
************************
|
||||
bc why install with `python` when you can go faster with `rust` ::
|
||||
sane install with `poetry`
|
||||
**************************
|
||||
TODO!
|
||||
|
||||
uv lock
|
||||
|
||||
rigorous install on ``nixos`` using ``poetry2nix``
|
||||
**************************************************
|
||||
TODO!
|
||||
|
||||
|
||||
hacky install on nixos
|
||||
**********************
|
||||
``NixOS`` is our core devs' distro of choice for which we offer
|
||||
`NixOS` is our core devs' distro of choice for which we offer
|
||||
a stringently defined development shell environment that can be loaded with::
|
||||
|
||||
nix-shell default.nix
|
||||
nix-shell develop.nix
|
||||
|
||||
this will set up the required python environment to run piker; make sure to
|
||||
run::
|
||||
|
||||
pip install -r requirements.txt -e .
|
||||
|
||||
once after loading the shell
|
||||
|
||||
|
||||
start a chart
|
||||
*************
|
||||
run a realtime OHLCV chart stand-alone::
|
||||
install wild-west style via `pip`
|
||||
*********************************
|
||||
``piker`` is currently under heavy pre-alpha development and as such
|
||||
should be cloned from this repo and hacked on directly.
|
||||
|
||||
piker -l info chart btcusdt.spot.binance xmrusdt.spot.kraken
|
||||
for a development install::
|
||||
|
||||
this runs a chart UI (with 1m sampled OHLCV) and shows 2 spot markets from 2 diff cexes
|
||||
overlaid on the same graph. Use of `piker` without first starting
|
||||
a daemon (`pikerd` - see below) means there is an implicit spawning of the
|
||||
multi-actor-runtime (implemented as a `tractor` app).
|
||||
|
||||
For additional subsystem feats available through our chart UI see the
|
||||
various sub-readmes:
|
||||
|
||||
- order control using a mouse-n-keyboard UX B)
|
||||
- cross venue market-pair (what most call "symbol") search, select, overlay Bo
|
||||
- financial-signal-processing (`piker.fsp`) write-n-reload to sub-chart BO
|
||||
- src-asset derivatives scan for anal, like the infamous "max pain" XO
|
||||
git clone git@github.com:pikers/piker.git
|
||||
cd piker
|
||||
virtualenv env
|
||||
source ./env/bin/activate
|
||||
pip install -r requirements.txt -e .
|
||||
|
||||
|
||||
spawn a daemon standalone
|
||||
*************************
|
||||
we call the root actor-process the ``pikerd``. it can be (and is
|
||||
recommended normally to be) started separately from the ``piker
|
||||
chart`` program::
|
||||
check out our charts
|
||||
********************
|
||||
bet you weren't expecting this from the foss::
|
||||
|
||||
piker -l info -b kraken -b binance chart btcusdt.binance --pdb
|
||||
|
||||
|
||||
this runs the main chart (currently with 1m sampled OHLC) in debug
|
||||
mode and you can practice paper trading using the following
|
||||
micro-manual:
|
||||
|
||||
``order_mode`` (
|
||||
edge triggered activation by any of the following keys,
|
||||
``mouse-click`` on y-level to submit at that price
|
||||
):
|
||||
|
||||
- ``f``/ ``ctl-f`` to stage buy
|
||||
- ``d``/ ``ctl-d`` to stage sell
|
||||
- ``a`` to stage alert
|
||||
|
||||
|
||||
``search_mode`` (
|
||||
``ctl-l`` or ``ctl-space`` to open,
|
||||
``ctl-c`` or ``ctl-space`` to close
|
||||
) :
|
||||
|
||||
- begin typing to have symbol search automatically look up
|
||||
symbols from all loaded backend (broker) providers
|
||||
- arrow keys and mouse click to navigate selection
|
||||
- vi-like ``ctl-[hjkl]`` for navigation
|
||||
|
||||
|
||||
you can also configure your position allocation limits from the
|
||||
sidepane.
|
||||
|
||||
|
||||
run in distributed mode
|
||||
***********************
|
||||
start the service manager and data feed daemon in the background and
|
||||
connect to it::
|
||||
|
||||
pikerd -l info --pdb
|
||||
|
||||
the daemon does nothing until a ``piker``-client (like ``piker
|
||||
chart``) connects and requests some particular sub-system. for
|
||||
a connecting chart ``pikerd`` will spawn and manage at least,
|
||||
|
||||
- a data-feed daemon: ``datad`` which does all the work of comms with
|
||||
the backend provider (in this case the ``binance`` cex).
|
||||
- a paper-trading engine instance, ``paperboi.binance``, (if no live
|
||||
account has been configured) which allows for auto/manual order
|
||||
control against the live quote stream.
|
||||
connect your chart::
|
||||
|
||||
*using* an actor-service (aka micro-daemon) manager which dynamically
|
||||
supervises various sub-subsystems-as-services throughout the ``piker``
|
||||
runtime-stack.
|
||||
piker -l info -b kraken -b binance chart xmrusdt.binance --pdb
|
||||
|
||||
now you can (implicitly) connect your chart::
|
||||
|
||||
piker chart btcusdt.spot.binance
|
||||
|
||||
since ``pikerd`` was started separately you can now enjoy a persistent
|
||||
real-time data stream tied to the daemon-tree's lifetime. i.e. the next
|
||||
time you spawn a chart it will obviously not only load much faster
|
||||
(since the underlying ``datad.binance`` is left running with its
|
||||
in-memory IPC data structures) but also the data-feed and any order
|
||||
mgmt states should be persistent until you finally cancel ``pikerd``.
|
||||
enjoy persistent real-time data feeds tied to daemon lifetime. the next
|
||||
time you spawn a chart it will load much faster since the data feed has
|
||||
been cached and is now always running live in the background until you
|
||||
kill ``pikerd``.
|
||||
|
||||
|
||||
if anyone asks you what this project is about
|
||||
*********************************************
|
||||
you don't talk about it; just use it.
|
||||
you don't talk about it.
|
||||
|
||||
|
||||
how do i get involved?
|
||||
|
@ -165,15 +166,6 @@ enter the matrix.
|
|||
|
||||
how come there ain't that many docs
|
||||
***********************************
|
||||
i mean we want/need them but building the core right has been higher
|
||||
prio than marketing (and likely will stay that way Bp).
|
||||
|
||||
soo, suck it up bc,
|
||||
|
||||
- no one is trying to sell you on anything
|
||||
- learning the code base is prolly way more valuable
|
||||
- the UI/UXs are intended to be "intuitive" for any hacker..
|
||||
|
||||
we obviously need tonz of help so if you want to start somewhere and
|
||||
can't necessarily write "advanced" concurrent python/rust code, then
|
||||
helping to document literally anything might be the place for you!
|
||||
suck it up, learn the code; no one is trying to sell you on anything.
|
||||
also, we need lotsa help so if you want to start somewhere and can't
|
||||
necessarily write serious code, this might be the place for you!
|
||||
|
|
default.nix (134 changed lines)
|
@ -1,134 +0,0 @@
|
|||
with (import <nixpkgs> {});
|
||||
let
|
||||
glibStorePath = lib.getLib glib;
|
||||
zlibStorePath = lib.getLib zlib;
|
||||
zstdStorePath = lib.getLib zstd;
|
||||
dbusStorePath = lib.getLib dbus;
|
||||
libGLStorePath = lib.getLib libGL;
|
||||
freetypeStorePath = lib.getLib freetype;
|
||||
qt6baseStorePath = lib.getLib qt6.qtbase;
|
||||
fontconfigStorePath = lib.getLib fontconfig;
|
||||
libxkbcommonStorePath = lib.getLib libxkbcommon;
|
||||
xcbutilcursorStorePath = lib.getLib xcb-util-cursor;
|
||||
|
||||
qtpyStorePath = lib.getLib python312Packages.qtpy;
|
||||
pyqt6StorePath = lib.getLib python312Packages.pyqt6;
|
||||
pyqt6SipStorePath = lib.getLib python312Packages.pyqt6-sip;
|
||||
rapidfuzzStorePath = lib.getLib python312Packages.rapidfuzz;
|
||||
qdarkstyleStorePath = lib.getLib python312Packages.qdarkstyle;
|
||||
|
||||
xorgLibX11StorePath = lib.getLib xorg.libX11;
|
||||
xorgLibxcbStorePath = lib.getLib xorg.libxcb;
|
||||
xorgxcbutilwmStorePath = lib.getLib xorg.xcbutilwm;
|
||||
xorgxcbutilimageStorePath = lib.getLib xorg.xcbutilimage;
|
||||
xorgxcbutilerrorsStorePath = lib.getLib xorg.xcbutilerrors;
|
||||
xorgxcbutilkeysymsStorePath = lib.getLib xorg.xcbutilkeysyms;
|
||||
xorgxcbutilrenderutilStorePath = lib.getLib xorg.xcbutilrenderutil;
|
||||
in
|
||||
stdenv.mkDerivation {
|
||||
name = "piker-qt6-uv";
|
||||
buildInputs = [
|
||||
# System requirements.
|
||||
glib
|
||||
zlib
|
||||
dbus
|
||||
zstd
|
||||
libGL
|
||||
freetype
|
||||
qt6.qtbase
|
||||
libgcc.lib
|
||||
fontconfig
|
||||
libxkbcommon
|
||||
|
||||
# Xorg requirements
|
||||
xcb-util-cursor
|
||||
xorg.libxcb
|
||||
xorg.libX11
|
||||
xorg.xcbutilwm
|
||||
xorg.xcbutilimage
|
||||
xorg.xcbutilerrors
|
||||
xorg.xcbutilkeysyms
|
||||
xorg.xcbutilrenderutil
|
||||
|
||||
# Python requirements.
|
||||
python312Full
|
||||
python312Packages.uv
|
||||
python312Packages.qdarkstyle
|
||||
python312Packages.rapidfuzz
|
||||
python312Packages.pyqt6
|
||||
python312Packages.qtpy
|
||||
];
|
||||
src = null;
|
||||
shellHook = ''
|
||||
set -e
|
||||
|
||||
# Set the Qt plugin path
|
||||
# export QT_DEBUG_PLUGINS=1
|
||||
|
||||
QTBASE_PATH="${qt6baseStorePath}/lib"
|
||||
QT_PLUGIN_PATH="$QTBASE_PATH/qt-6/plugins"
|
||||
QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"
|
||||
|
||||
LIB_GCC_PATH="${libgcc.lib}/lib"
|
||||
GLIB_PATH="${glibStorePath}/lib"
|
||||
ZSTD_PATH="${zstdStorePath}/lib"
|
||||
ZLIB_PATH="${zlibStorePath}/lib"
|
||||
DBUS_PATH="${dbusStorePath}/lib"
|
||||
LIBGL_PATH="${libGLStorePath}/lib"
|
||||
FREETYPE_PATH="${freetypeStorePath}/lib"
|
||||
FONTCONFIG_PATH="${fontconfigStorePath}/lib"
|
||||
LIB_XKB_COMMON_PATH="${libxkbcommonStorePath}/lib"
|
||||
|
||||
XCB_UTIL_CURSOR_PATH="${xcbutilcursorStorePath}/lib"
|
||||
XORG_LIB_X11_PATH="${xorgLibX11StorePath}/lib"
|
||||
XORG_LIB_XCB_PATH="${xorgLibxcbStorePath}/lib"
|
||||
XORG_XCB_UTIL_IMAGE_PATH="${xorgxcbutilimageStorePath}/lib"
|
||||
XORG_XCB_UTIL_WM_PATH="${xorgxcbutilwmStorePath}/lib"
|
||||
XORG_XCB_UTIL_RENDER_UTIL_PATH="${xorgxcbutilrenderutilStorePath}/lib"
|
||||
XORG_XCB_UTIL_KEYSYMS_PATH="${xorgxcbutilkeysymsStorePath}/lib"
|
||||
XORG_XCB_UTIL_ERRORS_PATH="${xorgxcbutilerrorsStorePath}/lib"
|
||||
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"
|
||||
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_GCC_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$DBUS_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$GLIB_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZLIB_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZSTD_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBGL_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FONTCONFIG_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FREETYPE_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_XKB_COMMON_PATH"
|
||||
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XCB_UTIL_CURSOR_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_X11_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_XCB_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_IMAGE_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_WM_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_RENDER_UTIL_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_KEYSYMS_PATH"
|
||||
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_ERRORS_PATH"
|
||||
|
||||
export LD_LIBRARY_PATH
|
||||
|
||||
RPDFUZZ_PATH="${rapidfuzzStorePath}/lib/python3.12/site-packages"
|
||||
QDRKSTYLE_PATH="${qdarkstyleStorePath}/lib/python3.12/site-packages"
|
||||
QTPY_PATH="${qtpyStorePath}/lib/python3.12/site-packages"
|
||||
PYQT6_PATH="${pyqt6StorePath}/lib/python3.12/site-packages"
|
||||
PYQT6_SIP_PATH="${pyqt6SipStorePath}/lib/python3.12/site-packages"
|
||||
|
||||
PATCH="$PATCH:$RPDFUZZ_PATH"
|
||||
PATCH="$PATCH:$QDRKSTYLE_PATH"
|
||||
PATCH="$PATCH:$QTPY_PATH"
|
||||
PATCH="$PATCH:$PYQT6_PATH"
|
||||
PATCH="$PATCH:$PYQT6_SIP_PATH"
|
||||
|
||||
export PATCH
|
||||
|
||||
# Install deps
|
||||
uv lock
|
||||
|
||||
'';
|
||||
}
|
develop.nix (37 changed lines)
|
@ -1,34 +1,28 @@
|
|||
with (import <nixpkgs> {});
|
||||
|
||||
with python310Packages;
|
||||
stdenv.mkDerivation {
|
||||
name = "poetry-env";
|
||||
name = "pip-env";
|
||||
buildInputs = [
|
||||
# System requirements.
|
||||
readline
|
||||
|
||||
# TODO: hacky non-poetry install stuff we need to get rid of!!
|
||||
poetry
|
||||
# virtualenv
|
||||
# setuptools
|
||||
# pip
|
||||
|
||||
# Python requirements (enough to get a virtualenv going).
|
||||
python311Full
|
||||
virtualenv
|
||||
setuptools
|
||||
pip
|
||||
|
||||
# obviously, and see below for hacked linking
|
||||
python311Packages.pyqt5
|
||||
python311Packages.pyqt5_sip
|
||||
# python311Packages.qtpy
|
||||
pyqt5
|
||||
|
||||
# Python requirements (enough to get a virtualenv going).
|
||||
python310Full
|
||||
|
||||
# numerics deps
|
||||
python311Packages.levenshtein
|
||||
python311Packages.fastparquet
|
||||
python311Packages.polars
|
||||
python310Packages.python-Levenshtein
|
||||
python310Packages.fastparquet
|
||||
python310Packages.polars
|
||||
|
||||
];
|
||||
# environment.sessionVariables = {
|
||||
# LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib";
|
||||
# };
|
||||
src = null;
|
||||
shellHook = ''
|
||||
# Allow the use of wheels.
|
||||
|
@ -36,12 +30,13 @@ stdenv.mkDerivation {
|
|||
|
||||
# Augment the dynamic linker path
|
||||
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${R}/lib/R/lib:${readline}/lib
|
||||
|
||||
export QT_QPA_PLATFORM_PLUGIN_PATH="${qt5.qtbase.bin}/lib/qt-${qt5.qtbase.version}/plugins";
|
||||
|
||||
if [ ! -d ".venv" ]; then
|
||||
poetry install --with uis
|
||||
if [ ! -d "venv" ]; then
|
||||
virtualenv venv
|
||||
fi
|
||||
|
||||
poetry shell
|
||||
source venv/bin/activate
|
||||
'';
|
||||
}
|
||||
|
|
|
@ -19,9 +19,8 @@ services:
|
|||
|
||||
# other image tags available:
|
||||
# https://github.com/waytrade/ib-gateway-docker#supported-tags
|
||||
# image: waytrade/ib-gateway:1012.2i
|
||||
image: ghcr.io/gnzsnz/ib-gateway:latest
|
||||
|
||||
# image: waytrade/ib-gateway:981.3j
|
||||
image: waytrade/ib-gateway:1012.2i
|
||||
restart: 'no' # restart on boot whenever there's a crash or user clicks
|
||||
network_mode: 'host'
|
||||
|
||||
|
|
|
@ -117,57 +117,9 @@ SecondFactorDevice=
|
|||
|
||||
# If you use the IBKR Mobile app for second factor authentication,
|
||||
# and you fail to complete the process before the time limit imposed
|
||||
# by IBKR, this setting tells IBC whether to automatically restart
|
||||
# the login sequence, giving you another opportunity to complete
|
||||
# second factor authentication.
|
||||
#
|
||||
# Permitted values are 'yes' and 'no'.
|
||||
#
|
||||
# If this setting is not present or has no value, then the value
|
||||
# of the deprecated ExitAfterSecondFactorAuthenticationTimeout is
|
||||
# used instead. If this also has no value, then this setting defaults
|
||||
# to 'no'.
|
||||
#
|
||||
# NB: you must be using IBC v3.14.0 or later to use this setting:
|
||||
# earlier versions ignore it.
|
||||
|
||||
ReloginAfterSecondFactorAuthenticationTimeout=
|
||||
|
||||
|
||||
# This setting is only relevant if
|
||||
# ReloginAfterSecondFactorAuthenticationTimeout is set to 'yes',
|
||||
# or if ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
|
||||
#
|
||||
# It controls how long (in seconds) IBC waits for login to complete
|
||||
# after the user acknowledges the second factor authentication
|
||||
# alert at the IBKR Mobile app. If login has not completed after
|
||||
# this time, IBC terminates.
|
||||
# The default value is 60.
|
||||
|
||||
SecondFactorAuthenticationExitInterval=
|
||||
|
||||
|
||||
# This setting specifies the timeout for second factor authentication
|
||||
# imposed by IB. The value is in seconds. You should not change this
|
||||
# setting unless you have reason to believe that IB has changed the
|
||||
# timeout. The default value is 180.
|
||||
|
||||
SecondFactorAuthenticationTimeout=180
|
||||
|
||||
|
||||
# DEPRECATED SETTING
|
||||
# ------------------
|
||||
#
|
||||
# ExitAfterSecondFactorAuthenticationTimeout - THIS SETTING WILL BE
|
||||
# REMOVED IN A FUTURE RELEASE. For IBC version 3.14.0 and later, see
|
||||
# the notes for ReloginAfterSecondFactorAuthenticationTimeout above.
|
||||
#
|
||||
# For IBC versions earlier than 3.14.0: If you use the IBKR Mobile
|
||||
# app for second factor authentication, and you fail to complete the
|
||||
# process before the time limit imposed by IBKR, you can use this
|
||||
# setting to tell IBC to exit: arrangements can then be made to
|
||||
# automatically restart IBC in order to initiate the login sequence
|
||||
# afresh. Otherwise, manual intervention at TWS's
|
||||
# by IBKR, you can use this setting to tell IBC to exit: arrangements
|
||||
# can then be made to automatically restart IBC in order to initiate
|
||||
# the login sequence afresh. Otherwise, manual intervention at TWS's
|
||||
# Second Factor Authentication dialog is needed to complete the
|
||||
# login.
|
||||
#
|
||||
|
@ -180,18 +132,29 @@ SecondFactorAuthenticationTimeout=180
|
|||
ExitAfterSecondFactorAuthenticationTimeout=no
|
||||
|
||||
|
||||
# This setting is only relevant if
|
||||
# ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
|
||||
#
|
||||
# It controls how long (in seconds) IBC waits for login to complete
|
||||
# after the user acknowledges the second factor authentication
|
||||
# alert at the IBKR Mobile app. If login has not completed after
|
||||
# this time, IBC terminates.
|
||||
# The default value is 40.
|
||||
|
||||
SecondFactorAuthenticationExitInterval=
|
||||
|
||||
|
||||
# Trading Mode
|
||||
# ------------
|
||||
#
|
||||
# This indicates whether the live account or the paper trading
|
||||
# account corresponding to the supplied credentials is to be used.
|
||||
# The allowed values are 'live' (the default) and 'paper'.
|
||||
#
|
||||
# If this is set to 'live', then the credentials for the live
|
||||
# account must be supplied. If it is set to 'paper', then either
|
||||
# the live or the paper-trading credentials may be supplied.
|
||||
# TWS 955 introduced a new Trading Mode combo box on its login
|
||||
# dialog. This indicates whether the live account or the paper
|
||||
# trading account corresponding to the supplied credentials is
|
||||
# to be used. The allowed values are 'live' (the default) and
|
||||
# 'paper'. For earlier versions of TWS this setting has no
|
||||
# effect.
|
||||
|
||||
TradingMode=paper
|
||||
TradingMode=
|
||||
|
||||
|
||||
# Paper-trading Account Warning
|
||||
|
@ -225,7 +188,7 @@ AcceptNonBrokerageAccountWarning=yes
|
|||
#
|
||||
# The default value is 60.
|
||||
|
||||
LoginDialogDisplayTimeout=60
|
||||
LoginDialogDisplayTimeout=20
|
||||
|
||||
|
||||
|
||||
|
@ -254,15 +217,7 @@ LoginDialogDisplayTimeout=60
|
|||
# but they are acceptable.
|
||||
#
|
||||
# The default is the current working directory when IBC is
|
||||
# started, unless the TWS_SETTINGS_PATH setting in the relevant
|
||||
# start script is set.
|
||||
#
|
||||
# If both this setting and TWS_SETTINGS_PATH are set, then this
|
||||
# setting takes priority. Note that if they have different values,
|
||||
# auto-restart will not work.
|
||||
#
|
||||
# NB: this setting is now DEPRECATED. You should use the
|
||||
# TWS_SETTINGS_PATH setting in the relevant start script.
|
||||
# started.
|
||||
|
||||
IbDir=/root/Jts
|
||||
|
||||
|
@ -331,30 +286,13 @@ ExistingSessionDetectedAction=primary
|
|||
#
|
||||
# If OverrideTwsApiPort is set to an integer, IBC changes the
|
||||
# 'Socket port' in TWS's API configuration to that number shortly
|
||||
# after startup (but note that for the FIX Gateway, this setting is
|
||||
# actually stored in jts.ini rather than the Gateway's settings
|
||||
# file). Leaving the setting blank will make no change to
|
||||
# after startup. Leaving the setting blank will make no change to
|
||||
# the current setting. This setting is only intended for use in
|
||||
# certain specialized situations where the port number needs to
|
||||
# be set dynamically at run-time, and for the FIX Gateway: most
|
||||
# non-FIX users will never need it, so don't use it unless you know
|
||||
# you need it.
|
||||
|
||||
OverrideTwsApiPort=4000
|
||||
|
||||
|
||||
# Override TWS Master Client ID
|
||||
# -----------------------------
|
||||
#
|
||||
# If OverrideTwsMasterClientID is set to an integer, IBC changes the
|
||||
# 'Master Client ID' value in TWS's API configuration to that
|
||||
# value shortly after startup. Leaving the setting blank will make
|
||||
# no change to the current setting. This setting is only intended
|
||||
# for use in certain specialized situations where the value needs to
|
||||
# be set dynamically at run-time: most users will never need it,
|
||||
# so don't use it unless you know you need it.
|
||||
|
||||
OverrideTwsMasterClientID=
|
||||
; OverrideTwsApiPort=4002
|
||||
|
||||
|
||||
# Read-only Login
|
||||
|
@ -364,13 +302,11 @@ OverrideTwsMasterClientID=
|
|||
# account security programme, the user will not be asked to perform
|
||||
# the second factor authentication action, and login to TWS will
|
||||
# occur automatically in read-only mode: in this mode, placing or
|
||||
# managing orders is not allowed.
|
||||
#
|
||||
# If set to 'no', and the user is enrolled in IB's account security
|
||||
# programme, the second factor authentication process is handled
|
||||
# according to the Second Factor Authentication Settings described
|
||||
# elsewhere in this file.
|
||||
#
|
||||
# managing orders is not allowed. If set to 'no', and the user is
|
||||
# enrolled in IB's account security programme, the user must perform
|
||||
# the relevant second factor authentication action to complete the
|
||||
# login.
|
||||
|
||||
# If the user is not enrolled in IB's account security programme,
|
||||
# this setting is ignored. The default is 'no'.
|
||||
|
||||
|
@ -390,44 +326,7 @@ ReadOnlyLogin=no
|
|||
# set the relevant checkbox (this only needs to be done once) and
|
||||
# not provide a value for this setting.
|
||||
|
||||
ReadOnlyApi=
|
||||
|
||||
|
||||
# API Precautions
|
||||
# ---------------
|
||||
#
|
||||
# These settings relate to the corresponding 'Precautions' checkboxes in the
|
||||
# API section of the Global Configuration dialog.
|
||||
#
|
||||
# For all of these, the accepted values are:
|
||||
# - 'yes' sets the checkbox
|
||||
# - 'no' clears the checkbox
|
||||
# - if not set, the existing TWS/Gateway configuration is unchanged
|
||||
#
|
||||
# NB: these settings are really only supplied for the benefit of new TWS
|
||||
# or Gateway instances that are being automatically installed and
|
||||
# started without user intervention, or where user settings are not preserved
|
||||
# between sessions (eg some Docker containers). Where a user is involved, they
|
||||
# should use the Global Configuration to set the relevant checkboxes and not
|
||||
# provide values for these settings.
|
||||
|
||||
BypassOrderPrecautions=
|
||||
|
||||
BypassBondWarning=
|
||||
|
||||
BypassNegativeYieldToWorstConfirmation=
|
||||
|
||||
BypassCalledBondWarning=
|
||||
|
||||
BypassSameActionPairTradeWarning=
|
||||
|
||||
BypassPriceBasedVolatilityRiskWarning=
|
||||
|
||||
BypassUSStocksMarketDataInSharesWarning=
|
||||
|
||||
BypassRedirectOrderWarning=
|
||||
|
||||
BypassNoOverfillProtectionPrecaution=
|
||||
ReadOnlyApi=no
|
||||
|
||||
|
||||
# Market data size for US stocks - lots or shares
|
||||
|
@ -482,145 +381,54 @@ AcceptBidAskLastSizeDisplayUpdateNotification=accept
|
|||
SendMarketDataInLotsForUSstocks=
|
||||
|
||||
|
||||
# Trusted API Client IPs
|
||||
# ----------------------
|
||||
#
|
||||
# NB: THIS SETTING IS ONLY RELEVANT FOR THE GATEWAY, AND ONLY WHEN FIX=yes.
|
||||
# In all other cases it is ignored.
|
||||
#
|
||||
# This is a list of IP addresses separated by commas. API clients with IP
|
||||
# addresses in this list are able to connect to the API without Gateway
|
||||
# generating the 'Incoming connection' popup.
|
||||
#
|
||||
# Note that 127.0.0.1 is always permitted to connect, so do not include it
|
||||
# in this setting.
|
||||
|
||||
TrustedTwsApiClientIPs=
|
||||
|
||||
|
||||
# Reset Order ID Sequence
|
||||
# -----------------------
|
||||
#
|
||||
# The setting resets the order id sequence for orders submitted via the API, so
|
||||
# that the next invocation of the `NextValidId` API callback will return the
|
||||
# value 1. The reset occurs when TWS starts.
|
||||
#
|
||||
# Note that order ids are reset for all API clients, except those that have
|
||||
# outstanding (ie incomplete) orders: their order id sequence carries on as
|
||||
# before.
|
||||
#
|
||||
# Valid values are 'yes', 'true', 'false' and 'no'. The default is 'no'.
|
||||
|
||||
ResetOrderIdsAtStart=
|
||||
|
||||
|
||||
# This setting specifies IBC's action when TWS displays the dialog asking for
|
||||
# confirmation of a request to reset the API order id sequence.
|
||||
#
|
||||
# Note that the Gateway never displays this dialog, so this setting is ignored
|
||||
# for a Gateway session.
|
||||
#
|
||||
# Valid values consist of two strings separated by a solidus '/'. The first
|
||||
# value specifies the action to take when the order id reset request resulted
|
||||
# from setting ResetOrderIdsAtStart=yes. The second specifies the action to
|
||||
# take when the order id reset request is a result of the user clicking the
|
||||
# 'Reset API order ID sequence' button in the API configuration. Each value
|
||||
# must be one of the following:
|
||||
#
|
||||
# 'confirm'
|
||||
# order ids will be reset
|
||||
#
|
||||
# 'reject'
|
||||
# order ids will not be reset
|
||||
#
|
||||
# 'ignore'
|
||||
# IBC will ignore the dialog. The user must take action.
|
||||
#
|
||||
# The default setting is ignore/ignore
|
||||
|
||||
# Examples:
|
||||
#
|
||||
# 'confirm/reject' - confirm order id reset only if ResetOrderIdsAtStart=yes
|
||||
# and reject any user-initiated requests
|
||||
#
|
||||
# 'ignore/confirm' - user must decide what to do if ResetOrderIdsAtStart=yes
|
||||
# and confirm user-initiated requests
|
||||
#
|
||||
# 'reject/ignore' - reject order id reset if ResetOrderIdsAtStart=yes but
|
||||
# allow user to handle user-initiated requests
|
||||
|
||||
ConfirmOrderIdReset=
|
||||
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# 4. TWS Auto-Logoff and Auto-Restart
|
||||
# 4. TWS Auto-Closedown
|
||||
# =============================================================================
|
||||
#
|
||||
# TWS and Gateway insist on being restarted every day. Two alternative
|
||||
# automatic options are offered:
|
||||
# IMPORTANT NOTE: Starting with TWS 974, this setting no longer
|
||||
# works properly, because IB have changed the way TWS handles its
|
||||
# autologoff mechanism.
|
||||
#
|
||||
# - Auto-Logoff: at a specified time, TWS shuts down tidily, without
|
||||
# restarting.
|
||||
# You should now configure the TWS autologoff time to something
|
||||
# convenient for you, and restart IBC each day.
|
||||
#
|
||||
# - Auto-Restart: at a specified time, TWS shuts down and then restarts
|
||||
# without the user having to re-authenticate.
|
||||
#
|
||||
# The normal way to configure the time at which this happens is via the Lock
|
||||
# and Exit section of the Configuration dialog. Once this time has been
|
||||
# configured in this way, the setting persists until the user changes it again.
|
||||
#
|
||||
# However, there are situations where there is no user available to do this
|
||||
# configuration, or where there is no persistent storage (for example some
|
||||
# Docker images). In such cases, the auto-restart or auto-logoff time can be
|
||||
# set whenever IBC starts with the settings below.
|
||||
#
|
||||
# The value, if specified, must be a time in HH:MM AM/PM format, for example
|
||||
# 08:00 AM or 10:00 PM. Note that there must be a single space between the
|
||||
# two parts of this value; also that midnight is "12:00 AM" and midday is
|
||||
# "12:00 PM".
|
||||
#
|
||||
# If no value is specified for either setting, the currently configured
|
||||
# settings will apply. If a value is supplied for one setting, the other
|
||||
# setting is cleared. If values are supplied for both settings, only the
|
||||
# auto-restart time is set, and the auto-logoff time is cleared.
|
||||
#
|
||||
# Note that for a normal TWS/Gateway installation with persistent storage
|
||||
# (for example on a desktop computer) the value will be persisted as if the
|
||||
# user had set it via the configuration dialog.
|
||||
#
|
||||
# If you choose to auto-restart, you should take note of the considerations
|
||||
# described at the link below. Note that where this information mentions
|
||||
# 'manual authentication', restarting IBC will do the job (IBKR does not
|
||||
# recognise the existence of IBC in its documentation).
|
||||
#
|
||||
# https://www.interactivebrokers.com/en/software/tws/twsguide.htm#usersguidebook/configuretws/auto_restart_info.htm
|
||||
#
|
||||
# If you use the "RESTART" command via the IBC command server, and IBC is
|
||||
# running any version of the Gateway (or a version of TWS earlier than 1018),
|
||||
# note that this will set the Auto-Restart time in Gateway/TWS's configuration
|
||||
# dialog to the time at which the restart actually happens (which may be up to
|
||||
# a minute after the RESTART command is issued). To prevent future auto-
|
||||
# restarts at this time, you must make sure you have set AutoLogoffTime or
|
||||
# AutoRestartTime to your desired value before running IBC. NB: this does not
|
||||
# apply to TWS from version 1018 onwards.
|
||||
# Alternatively, discontinue use of IBC and use the auto-relogin
|
||||
# mechanism within TWS 974 and later versions (note that the
|
||||
# auto-relogin mechanism provided by IB is not available if you
|
||||
# use IBC).
|
||||
|
||||
AutoLogoffTime=
|
||||
# Set to yes or no (lower case).
|
||||
#
|
||||
# yes means allow TWS to shut down automatically at its
|
||||
# specified shutdown time, which is set via the TWS
|
||||
# configuration menu.
|
||||
#
|
||||
# no means TWS never shuts down automatically.
|
||||
#
|
||||
# NB: IB recommends that you do not keep TWS running
|
||||
# continuously. If you set this setting to 'no', you may
|
||||
# experience incorrect TWS operation.
|
||||
#
|
||||
# NB: the default for this setting is 'no'. Since this will
|
||||
# only work properly with TWS versions earlier than 974, you
|
||||
# should explicitly set this to 'yes' for version 974 and later.
|
||||
|
||||
IbAutoClosedown=yes
|
||||
|
||||
AutoRestartTime=
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# 5. TWS Tidy Closedown Time
|
||||
# =============================================================================
|
||||
#
|
||||
# Specifies a time at which TWS will close down tidily, with no restart.
|
||||
# NB: starting with TWS 974 this is no longer a useful option
|
||||
# because both TWS and Gateway now have the same auto-logoff
|
||||
# mechanism, and IBC can no longer avoid this.
|
||||
#
|
||||
# There is little reason to use this setting. It is similar to AutoLogoffTime,
|
||||
# but can include a day-of-the-week, whereas AutoLogoffTime and AutoRestartTime
|
||||
# apply every day. So for example you could use ClosedownAt in conjunction with
|
||||
# AutoRestartTime to shut down TWS on Friday evenings after the markets
|
||||
# close, without it running on Saturday as well.
|
||||
# Note that giving this setting a value does not change TWS's
|
||||
# auto-logoff in any way: any setting will be additional to the
|
||||
# TWS auto-logoff.
|
||||
#
|
||||
# To tell IBC to tidily close TWS at a specified time every
|
||||
# day, set this value to <hh:mm>, for example:
|
||||
|
@ -679,7 +487,7 @@ AcceptIncomingConnectionAction=reject
|
|||
# no means the dialog remains on display and must be
|
||||
# handled by the user.
|
||||
|
||||
AllowBlindTrading=no
|
||||
AllowBlindTrading=yes
|
||||
|
||||
|
||||
# Save Settings on a Schedule
|
||||
|
@ -722,26 +530,6 @@ AllowBlindTrading=no
|
|||
SaveTwsSettingsAt=
|
||||
|
||||
|
||||
# Confirm Crypto Currency Orders Automatically
|
||||
# --------------------------------------------
|
||||
#
|
||||
# When you place an order for a cryptocurrency contract, a dialog is displayed
|
||||
# asking you to confirm that you want to place the order, and notifying you
|
||||
# that you are placing an order to trade cryptocurrency with Paxos, a New York
|
||||
# limited trust company, and not at Interactive Brokers.
|
||||
#
|
||||
# transmit means that the order will be placed automatically, and the
|
||||
# dialog will then be closed
|
||||
#
|
||||
# cancel means that the order will not be placed, and the dialog will
|
||||
# then be closed
|
||||
#
|
||||
# manual means that IBC will take no action and the user must deal
|
||||
# with the dialog
|
||||
|
||||
ConfirmCryptoCurrencyOrders=transmit
|
||||
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# 7. Settings Specific to Indian Versions of TWS
|
||||
|
@ -778,17 +566,13 @@ DismissNSEComplianceNotice=yes
|
|||
#
|
||||
# The port number that IBC listens on for commands
|
||||
# such as "STOP". DO NOT set this to the port number
|
||||
# used for TWS API connections.
|
||||
#
|
||||
# The convention is to use 7462 for this port,
|
||||
# but it must be set to a different value from any other
|
||||
# IBC instance that might run at the same time.
|
||||
#
|
||||
# The default value is 0, which tells IBC not to start
|
||||
# the command server
|
||||
# used for TWS API connections. There is no good reason
|
||||
# to change this setting unless the port is used by
|
||||
# some other application (typically another instance of
|
||||
# IBC). The default value is 0, which tells IBC not to
|
||||
# start the command server
|
||||
|
||||
#CommandServerPort=7462
|
||||
CommandServerPort=0
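For reference, the command server described above accepts short plain-text commands (such as "STOP") over a TCP connection to the configured port. A minimal sketch of sending one such command, assuming the server has been enabled by setting CommandServerPort=7462 and that commands are newline-terminated; the host/port names here are illustrative and consistent with the ControlFrom/BindAddress values shown further below:

    import socket

    IBC_HOST = '127.0.0.1'   # matches the ControlFrom/BindAddress settings below
    IBC_PORT = 7462          # conventional CommandServerPort; 0 keeps the server disabled

    def send_ibc_command(cmd: str = 'STOP') -> None:
        # open a short-lived TCP connection and send one newline-terminated command
        with socket.create_connection((IBC_HOST, IBC_PORT), timeout=5) as sock:
            sock.sendall((cmd + '\n').encode('ascii'))

    # send_ibc_command('STOP')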
|
||||
|
||||
|
||||
# Permitted Command Sources
|
||||
|
@ -799,19 +583,19 @@ CommandServerPort=0
|
|||
# IBC. Commands can always be sent from the
|
||||
# same host as IBC is running on.
|
||||
|
||||
ControlFrom=
|
||||
ControlFrom=127.0.0.1
|
||||
|
||||
|
||||
# Address for Receiving Commands
|
||||
# ------------------------------
|
||||
#
|
||||
# Specifies the IP address on which the Command Server
|
||||
# is to listen. For a multi-homed host, this can be used
|
||||
# is to listen. For a multi-homed host, this can be used
|
||||
# to specify that connection requests are only to be
|
||||
# accepted on the specified address. The default is to
|
||||
# accept connection requests on all local addresses.
|
||||
|
||||
BindAddress=
|
||||
BindAddress=127.0.0.1
|
||||
|
||||
|
||||
# Command Prompt
|
||||
|
@ -837,7 +621,7 @@ CommandPrompt=
|
|||
# information is sent. The default is that such information
|
||||
# is not sent.
|
||||
|
||||
SuppressInfoMessages=yes
|
||||
SuppressInfoMessages=no
|
||||
|
||||
|
||||
|
||||
|
@ -867,10 +651,10 @@ SuppressInfoMessages=yes
|
|||
# The LogStructureScope setting indicates which windows are
|
||||
# eligible for structure logging:
|
||||
#
|
||||
# - (default value) if set to 'known', only windows that
|
||||
# IBC recognizes are eligible - these are windows that
|
||||
# IBC has some interest in monitoring, usually to take
|
||||
# some action on the user's behalf;
|
||||
# - if set to 'known', only windows that IBC recognizes
|
||||
# are eligible - these are windows that IBC has some
|
||||
# interest in monitoring, usually to take some action
|
||||
# on the user's behalf;
|
||||
#
|
||||
# - if set to 'unknown', only windows that IBC does not
|
||||
# recognize are eligible. Most windows displayed by
|
||||
|
@ -883,8 +667,9 @@ SuppressInfoMessages=yes
|
|||
# - if set to 'all', then every window displayed by TWS
|
||||
# is eligible.
|
||||
#
|
||||
# The default value is 'known'.
|
||||
|
||||
LogStructureScope=known
|
||||
LogStructureScope=all
|
||||
|
||||
|
||||
# When to Log Window Structure
|
||||
|
@ -897,15 +682,13 @@ LogStructureScope=known
|
|||
# structure of an eligible window the first time it
|
||||
# is encountered;
|
||||
#
|
||||
# - if set to 'openclose', the structure is logged every
|
||||
# time an eligible window is opened or closed;
|
||||
#
|
||||
# - if set to 'activate', the structure is logged every
|
||||
# time an eligible window is made active;
|
||||
#
|
||||
# - (default value) if set to 'never' or 'no' or 'false',
|
||||
# structure information is never logged.
|
||||
# - if set to 'never' or 'no' or 'false', structure
|
||||
# information is never logged.
|
||||
#
|
||||
# The default value is 'never'.
|
||||
|
||||
LogStructureWhen=never
|
||||
|
||||
|
@ -925,3 +708,4 @@ LogStructureWhen=never
|
|||
#LogComponents=
|
||||
|
||||
|
||||
|
||||
|
|
flake.nix (54 changed lines)
|
@ -6,11 +6,6 @@
|
|||
# - then manually ensuring all deps are converted over:
|
||||
# - add this file to the repo and commit it
|
||||
# -
|
||||
|
||||
# GROKin tips:
|
||||
# - CLI eps are (ostensibly) added via an `entry_points.txt`:
|
||||
# - https://packaging.python.org/en/latest/specifications/entry-points/#file-format
|
||||
# - https://github.com/nix-community/poetry2nix/blob/master/editable.nix#L49
|
||||
{
|
||||
description = "piker: trading gear for hackers (pkged with poetry2nix)";
|
||||
|
||||
|
@ -106,7 +101,7 @@
|
|||
# won't be needed - thanks @k900:
|
||||
# https://github.com/nix-community/poetry2nix/pull/1257
|
||||
pyqt5 = prev.pyqt5.override {
|
||||
# withWebkit = false;
|
||||
withWebkit = false;
|
||||
preferWheel = true;
|
||||
};
|
||||
|
||||
|
@ -129,8 +124,7 @@
|
|||
|
||||
# WHY!? -> output-attrs that `nix develop` scans for:
|
||||
# https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-develop.html#flake-output-attributes
|
||||
in
|
||||
rec {
|
||||
in {
|
||||
packages = {
|
||||
# piker = poetry2nix.legacyPackages.x86_64-linux.mkPoetryEditablePackage {
|
||||
# editablePackageSources = { piker = ./piker; };
|
||||
|
@ -149,32 +143,36 @@
|
|||
};
|
||||
};
|
||||
|
||||
# devShells.default = pkgs.mkShell {
|
||||
# projectDir = projectDir;
|
||||
# python = "python3.10";
|
||||
# overrides = ahot_overrides;
|
||||
# inputsFrom = [ self.packages.x86_64-linux.piker ];
|
||||
# packages = packages;
|
||||
# # packages = [ poetry2nix.packages.${system}.poetry ];
|
||||
# };
|
||||
devShells.default = pkgs.mkShell {
|
||||
# packages = [ poetry2nix.packages.${system}.poetry ];
|
||||
packages = [ poetry2nix.packages.x86_64-linux.poetry ];
|
||||
inputsFrom = [ self.packages.x86_64-linux.piker ];
|
||||
|
||||
# TODO: boot xonsh inside the poetry virtualenv when
|
||||
# defined via a custom entry point?
|
||||
# NOTE XXX: apparently DON'T do these..?
|
||||
# shellHook = "poetry run xonsh";
|
||||
# shellHook = "poetry shell";
|
||||
};
|
||||
|
||||
|
||||
# TODO: grok the difference here..
|
||||
# - avoid re-cloning git repos on every develop entry..
|
||||
# - ideally allow hacking on the src code of some deps
|
||||
# (tractor, pyqtgraph, tomlkit, etc.) WITHOUT having to
|
||||
# re-install them every time a change is made.
|
||||
# - boot a usable xonsh inside the poetry virtualenv when
|
||||
# defined via a custom entry point?
|
||||
devShells.default = p2npkgs.mkPoetryEnv {
|
||||
# env = p2npkgs.mkPoetryEnv {
|
||||
projectDir = projectDir;
|
||||
python = pkgs.python310;
|
||||
overrides = ahot_overrides;
|
||||
editablePackageSources = packages;
|
||||
# piker = "./";
|
||||
# tractor = "../tractor/";
|
||||
# }; # wut?
|
||||
};
|
||||
|
||||
# devShells.default = (p2npkgs.mkPoetryEnv {
|
||||
# # let {
|
||||
# # devEnv = p2npkgs.mkPoetryEnv {
|
||||
# projectDir = projectDir;
|
||||
# overrides = ahot_overrides;
|
||||
# inputsFrom = [ self.packages.x86_64-linux.piker ];
|
||||
# }).env.overrideAttrs (old: {
|
||||
# buildInputs = [ packages.piker ];
|
||||
# }
|
||||
# );
|
||||
|
||||
}
|
||||
); # end of .outputs scope
|
||||
}
|
||||
|
|
|
@ -327,11 +327,7 @@ class MktPair(Struct, frozen=True):
|
|||
) -> dict:
|
||||
d = super().to_dict(**kwargs)
|
||||
d['src'] = self.src.to_dict(**kwargs)
|
||||
|
||||
if not isinstance(self.dst, str):
|
||||
d['dst'] = self.dst.to_dict(**kwargs)
|
||||
else:
|
||||
d['dst'] = str(self.dst)
|
||||
|
||||
d['price_tick'] = str(self.price_tick)
|
||||
d['size_tick'] = str(self.size_tick)
|
||||
|
@ -353,16 +349,11 @@ class MktPair(Struct, frozen=True):
|
|||
Constructor for a received msg-dict normally received over IPC.
|
||||
|
||||
'''
|
||||
if not isinstance(
|
||||
dst_asset_msg := msg.pop('dst'),
|
||||
str,
|
||||
):
|
||||
dst: Asset = Asset.from_msg(dst_asset_msg) # .copy()
|
||||
else:
|
||||
dst: str = dst_asset_msg
|
||||
dst_asset_msg = msg.pop('dst')
|
||||
dst = Asset.from_msg(dst_asset_msg) # .copy()
|
||||
|
||||
src_asset_msg: dict = msg.pop('src')
|
||||
src: Asset = Asset.from_msg(src_asset_msg) # .copy()
|
||||
src_asset_msg = msg.pop('src')
|
||||
src = Asset.from_msg(src_asset_msg) # .copy()
|
||||
|
||||
# XXX NOTE: ``msgspec`` can encode `Decimal` but it doesn't
|
||||
# decode it by default since we aren't spec-cing these
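To make the round-trip above concrete: the pattern is to stringify ``Decimal`` fields before encoding the msg-dict and rebuild them on receipt, so no tick precision is lost over IPC. A minimal sketch of that idea using only the stdlib (field names are illustrative, not the exact ``MktPair`` schema):

    import json
    from decimal import Decimal

    def pack(price_tick: Decimal, size_tick: Decimal) -> bytes:
        # send ticks as strings so the exact decimal value survives transit
        return json.dumps({
            'price_tick': str(price_tick),
            'size_tick': str(size_tick),
        }).encode()

    def unpack(wire: bytes) -> tuple[Decimal, Decimal]:
        msg = json.loads(wire)
        # rebuild exact decimal values on the receiving side
        return Decimal(msg['price_tick']), Decimal(msg['size_tick'])

    assert unpack(pack(Decimal('0.01'), Decimal('0.001'))) == (
        Decimal('0.01'), Decimal('0.001'))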
|
||||
|
|
|
@ -50,7 +50,7 @@ __brokers__: list[str] = [
|
|||
'binance',
|
||||
'ib',
|
||||
'kraken',
|
||||
'kucoin',
|
||||
'kucoin'
|
||||
|
||||
# broken but used to work
|
||||
# 'questrade',
|
||||
|
@ -71,7 +71,7 @@ def get_brokermod(brokername: str) -> ModuleType:
|
|||
Return the imported broker module by name.
|
||||
|
||||
'''
|
||||
module: ModuleType = import_module('.' + brokername, 'piker.brokers')
|
||||
module = import_module('.' + brokername, 'piker.brokers')
|
||||
# we only allow monkeying because it's for internal keying
|
||||
module.name = module.__name__.split('.')[-1]
|
||||
return module
|
||||
|
|
|
@ -18,11 +18,10 @@
|
|||
Handy cross-broker utils.
|
||||
|
||||
"""
|
||||
from __future__ import annotations
|
||||
from functools import partial
|
||||
|
||||
import json
|
||||
import httpx
|
||||
import asks
|
||||
import logging
|
||||
|
||||
from ..log import (
|
||||
|
@ -51,7 +50,6 @@ class SymbolNotFound(BrokerError):
|
|||
"Symbol not found by broker search"
|
||||
|
||||
|
||||
# TODO: these should probably be moved to `.tsp/.data`?
|
||||
class NoData(BrokerError):
|
||||
'''
|
||||
Symbol data not permitted or no data
|
||||
|
@ -61,15 +59,14 @@ class NoData(BrokerError):
|
|||
def __init__(
|
||||
self,
|
||||
*args,
|
||||
info: dict|None = None,
|
||||
frame_size: int = 1000,
|
||||
|
||||
) -> None:
|
||||
super().__init__(*args)
|
||||
self.info: dict|None = info
|
||||
|
||||
# when raised, machinery can check if the backend
|
||||
# set a "frame size" for doing datetime calcs.
|
||||
# self.frame_size: int = 1000
|
||||
self.frame_size: int = 1000
|
||||
|
||||
|
||||
class DataUnavailable(BrokerError):
|
||||
|
@ -91,18 +88,16 @@ class DataThrottle(BrokerError):
|
|||
|
||||
|
||||
def resproc(
|
||||
resp: httpx.Response,
|
||||
resp: asks.response_objects.Response,
|
||||
log: logging.Logger,
|
||||
return_json: bool = True,
|
||||
log_resp: bool = False,
|
||||
|
||||
) -> httpx.Response:
|
||||
'''
|
||||
Process response and return its json content.
|
||||
) -> asks.response_objects.Response:
|
||||
"""Process response and return its json content.
|
||||
|
||||
Raise the appropriate error on non-200 OK responses.
|
||||
|
||||
'''
|
||||
"""
|
||||
if not resp.status_code == 200:
|
||||
raise BrokerError(resp.body)
|
||||
try:
|
||||
|
|
|
@ -25,13 +25,14 @@ from __future__ import annotations
|
|||
from collections import ChainMap
|
||||
from contextlib import (
|
||||
asynccontextmanager as acm,
|
||||
AsyncExitStack,
|
||||
)
|
||||
from datetime import datetime
|
||||
from pprint import pformat
|
||||
from typing import (
|
||||
Any,
|
||||
Callable,
|
||||
Hashable,
|
||||
Sequence,
|
||||
Type,
|
||||
)
|
||||
import hmac
|
||||
|
@ -42,7 +43,8 @@ import trio
|
|||
from pendulum import (
|
||||
now,
|
||||
)
|
||||
import httpx
|
||||
import asks
|
||||
from rapidfuzz import process as fuzzy
|
||||
import numpy as np
|
||||
|
||||
from piker import config
|
||||
|
@ -52,7 +54,6 @@ from piker.clearing._messages import (
|
|||
from piker.accounting import (
|
||||
Asset,
|
||||
digits_to_dec,
|
||||
MktPair,
|
||||
)
|
||||
from piker.types import Struct
|
||||
from piker.data import (
|
||||
|
@ -68,6 +69,7 @@ from .venues import (
|
|||
PAIRTYPES,
|
||||
Pair,
|
||||
MarketType,
|
||||
|
||||
_spot_url,
|
||||
_futes_url,
|
||||
_testnet_futes_url,
|
||||
|
@ -77,18 +79,16 @@ from .venues import (
|
|||
log = get_logger('piker.brokers.binance')
|
||||
|
||||
|
||||
def get_config() -> dict[str, Any]:
|
||||
def get_config() -> dict:
|
||||
|
||||
conf: dict
|
||||
path: Path
|
||||
conf, path = config.load(
|
||||
conf_name='brokers',
|
||||
touch_if_dne=True,
|
||||
)
|
||||
section: dict = conf.get('binance')
|
||||
conf, path = config.load(touch_if_dne=True)
|
||||
|
||||
section = conf.get('binance')
|
||||
|
||||
if not section:
|
||||
log.warning(
|
||||
f'No config section found for binance in {path}'
|
||||
)
|
||||
log.warning(f'No config section found for binance in {path}')
|
||||
return {}
|
||||
|
||||
return section
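For context, the section being looked up here normally lives in the user's broker config file. A rough sketch of what loading such a section might look like, assuming a TOML file with a ``[binance]`` table holding the ``api_key`` / ``api_secret`` / ``use_testnet`` keys referenced later in this diff; the path and loader below are illustrative, not piker's actual ``config.load()``:

    import tomllib
    from pathlib import Path

    def load_binance_section(path: Path) -> dict:
        # parse the whole brokers config and pull out just the binance table
        with path.open('rb') as fp:
            conf: dict = tomllib.load(fp)
        section: dict = conf.get('binance', {})
        if not section:
            print(f'No config section found for binance in {path}')
        return section

    # section = load_binance_section(
    #     Path('~/.config/piker/brokers.toml').expanduser())  # hypothetical path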
|
||||
|
@ -144,7 +144,7 @@ def binance_timestamp(
|
|||
|
||||
class Client:
|
||||
'''
|
||||
Async ReST API client using `trio` + `httpx` B)
|
||||
Async ReST API client using ``trio`` + ``asks`` B)
|
||||
|
||||
Supports all of the spot, margin and futures endpoints depending
|
||||
on method.
|
||||
|
@ -153,17 +153,10 @@ class Client:
|
|||
def __init__(
|
||||
self,
|
||||
|
||||
venue_sessions: dict[
|
||||
str, # venue key
|
||||
tuple[httpx.AsyncClient, str] # session, eps path
|
||||
],
|
||||
conf: dict[str, Any],
|
||||
# TODO: change this to `Client.[mkt_]venue: MarketType`?
|
||||
mkt_mode: MarketType = 'spot',
|
||||
|
||||
) -> None:
|
||||
self.conf = conf
|
||||
|
||||
# build out pair info tables for each market type
|
||||
# and wrap in a chain-map view for search / query.
|
||||
self._spot_pairs: dict[str, Pair] = {} # spot info table
|
||||
|
@ -190,13 +183,44 @@ class Client:
|
|||
# market symbols for use by search. See `.exch_info()`.
|
||||
self._pairs: ChainMap[str, Pair] = ChainMap()
|
||||
|
||||
# spot EPs sesh
|
||||
self._sesh = asks.Session(connections=4)
|
||||
self._sesh.base_location: str = _spot_url
|
||||
# spot testnet
|
||||
self._test_sesh: asks.Session = asks.Session(connections=4)
|
||||
self._test_sesh.base_location: str = _testnet_spot_url
|
||||
|
||||
# margin and extended spot endpoints session.
|
||||
self._sapi_sesh = asks.Session(connections=4)
|
||||
self._sapi_sesh.base_location: str = _spot_url
|
||||
|
||||
# futes EPs sesh
|
||||
self._fapi_sesh = asks.Session(connections=4)
|
||||
self._fapi_sesh.base_location: str = _futes_url
|
||||
# futes testnet
|
||||
self._test_fapi_sesh: asks.Session = asks.Session(connections=4)
|
||||
self._test_fapi_sesh.base_location: str = _testnet_futes_url
|
||||
|
||||
# global client "venue selection" mode.
|
||||
# set this when you want to switch venues and not have to
|
||||
# specify the venue for the next request.
|
||||
self.mkt_mode: MarketType = mkt_mode
|
||||
|
||||
# per-mkt-venue API client table
|
||||
self.venue_sesh = venue_sessions
|
||||
# per 8
|
||||
self.venue_sesh: dict[
|
||||
str, # venue key
|
||||
tuple[asks.Session, str] # session, eps path
|
||||
] = {
|
||||
'spot': (self._sesh, '/api/v3/'),
|
||||
'spot_testnet': (self._test_sesh, '/fapi/v1/'),
|
||||
|
||||
'margin': (self._sapi_sesh, '/sapi/v1/'),
|
||||
|
||||
'usdtm_futes': (self._fapi_sesh, '/fapi/v1/'),
|
||||
'usdtm_futes_testnet': (self._test_fapi_sesh, '/fapi/v1/'),
|
||||
|
||||
# 'futes_coin': self._dapi, # TODO
|
||||
}
|
||||
|
||||
# lookup for going from `.mkt_mode: str` to the config
|
||||
# subsection `key: str`
|
||||
|
@ -211,6 +235,40 @@ class Client:
|
|||
'futes': ['usdtm_futes'],
|
||||
}
|
||||
|
||||
# for creating API keys see,
|
||||
# https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
|
||||
self.conf: dict = get_config()
|
||||
|
||||
for key, subconf in self.conf.items():
|
||||
if api_key := subconf.get('api_key', ''):
|
||||
venue_keys: list[str] = self.confkey2venuekeys[key]
|
||||
|
||||
venue_key: str
|
||||
sesh: asks.Session
|
||||
for venue_key in venue_keys:
|
||||
sesh, _ = self.venue_sesh[venue_key]
|
||||
|
||||
api_key_header: dict = {
|
||||
# taken from official:
|
||||
# https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
|
||||
"Content-Type": "application/json;charset=utf-8",
|
||||
|
||||
# TODO: prolly should just always query and copy
|
||||
# in the real latest ver?
|
||||
"User-Agent": "binance-connector/6.1.6smbz6",
|
||||
"X-MBX-APIKEY": api_key,
|
||||
}
|
||||
sesh.headers.update(api_key_header)
|
||||
|
||||
# if `.use_testnet = true` in the config then
|
||||
# also add headers for the testnet session which
|
||||
# will be used for all order control
|
||||
if subconf.get('use_testnet', False):
|
||||
testnet_sesh, _ = self.venue_sesh[
|
||||
venue_key + '_testnet'
|
||||
]
|
||||
testnet_sesh.headers.update(api_key_header)
|
||||
|
||||
def _mk_sig(
|
||||
self,
|
||||
data: dict,
|
||||
|
@ -229,6 +287,7 @@ class Client:
|
|||
'to define the creds for auth-ed endpoints!?'
|
||||
)
|
||||
|
||||
|
||||
# XXX: Info on security and authentication
|
||||
# https://binance-docs.github.io/apidocs/#endpoint-security-type
|
||||
if not (api_secret := subconf.get('api_secret')):
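Per the Binance docs linked above, signed endpoints expect an HMAC-SHA256 signature of the url-encoded query string, computed with the account's ``api_secret`` and appended as a ``signature`` param. A minimal sketch of that scheme, not necessarily the exact body of ``_mk_sig()``:

    import hashlib
    import hmac
    from urllib.parse import urlencode

    def mk_sig(params: dict, api_secret: str) -> str:
        # binance signs the exact url-encoded query string with HMAC-SHA256
        query: str = urlencode(params)
        return hmac.new(
            api_secret.encode('ascii'),
            query.encode('ascii'),
            hashlib.sha256,
        ).hexdigest()

    # params['signature'] = mk_sig(params, api_secret)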
|
||||
|
@ -268,9 +327,8 @@ class Client:
|
|||
- /fapi/v3/ USD-M FUTURES, or
|
||||
- /api/v3/ SPOT/MARGIN
|
||||
|
||||
account/market endpoint request depending on either passed in
|
||||
`venue: str` or the current setting `.mkt_mode: str` setting,
|
||||
default `'spot'`.
|
||||
account/market endpoint request depending on either passed in `venue: str`
|
||||
or the current setting `.mkt_mode: str` setting, default `'spot'`.
|
||||
|
||||
|
||||
Docs per venue API:
|
||||
|
@ -299,6 +357,9 @@ class Client:
|
|||
venue=venue_key,
|
||||
)
|
||||
|
||||
sesh: asks.Session
|
||||
path: str
|
||||
|
||||
# Check if we're configured to route order requests to the
|
||||
# venue equivalent's testnet.
|
||||
use_testnet: bool = False
|
||||
|
@@ -323,12 +384,11 @@ class Client:
# ctl machinery B)
venue_key += '_testnet'

client: httpx.AsyncClient
path: str
client, path = self.venue_sesh[venue_key]
meth: Callable = getattr(client, method)
sesh, path = self.venue_sesh[venue_key]

meth: Callable = getattr(sesh, method)
resp = await meth(
url=path + endpoint,
path=path + endpoint,
params=params,
timeout=float('inf'),
)
@@ -370,15 +430,7 @@ class Client:
item['filters'] = filters

pair_type: Type = PAIRTYPES[venue]
try:
pair: Pair = pair_type(**item)
except Exception as e:
e.add_note(
"\nDon't panic, prolly stupid binance changed their symbology schema again..\n"
'Check out their API docs here:\n\n'
'https://binance-docs.github.io/apidocs/spot/en/#exchange-information'
)
raise
pair_table[pair.symbol.upper()] = pair

# update an additional top-level-cross-venue-table
@@ -473,9 +525,7 @@ class Client:

'''
pair_table: dict[str, Pair] = self._venue2pairs[
venue
or
self.mkt_mode
venue or self.mkt_mode
]
if (
expiry
@@ -494,9 +544,9 @@ class Client:
venues: list[str] = [venue]

# batch per-venue download of all exchange infos
async with trio.open_nursery() as tn:
async with trio.open_nursery() as rn:
for ven in venues:
tn.start_soon(
rn.start_soon(
self._cache_pairs,
ven,
)
@@ -549,11 +599,11 @@ class Client:

) -> dict[str, Any]:

fq_pairs: dict[str, Pair] = await self.exch_info()
fq_pairs: dict = await self.exch_info()

# TODO: cache this list like we were in
# `open_symbol_search()`?
# keys: list[str] = list(fq_pairs)
keys: list[str] = list(fq_pairs)

return match_from_pairs(
pairs=fq_pairs,
@@ -561,20 +611,9 @@ class Client:
score_cutoff=50,
)

def pair2venuekey(
self,
pair: Pair,
) -> str:
return {
'USDTM': 'usdtm_futes',
'SPOT': 'spot',
# 'COINM': 'coin_futes',
# ^-TODO-^ bc someone might want it..?
}[pair.venue]

async def bars(
self,
mkt: MktPair,
symbol: str,

start_dt: datetime | None = None,
end_dt: datetime | None = None,
@@ -604,20 +643,16 @@ class Client:
start_time = binance_timestamp(start_dt)
end_time = binance_timestamp(end_dt)

bs_pair: Pair = self._pairs[mkt.bs_fqme.upper()]

# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
bars = await self._api(
'klines',
params={
# NOTE: always query using their native symbology!
'symbol': mkt.bs_mktid.upper(),
'symbol': symbol.upper(),
'interval': '1m',
'startTime': start_time,
'endTime': end_time,
'limit': limit
},
venue=self.pair2venuekey(bs_pair),
allow_testnet=False,
)
new_bars: list[tuple] = []
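For reference, the `klines` endpoint linked in the hunk above returns each bar as a JSON array of fields rather than an object, which is what the `new_bars: list[tuple]` accumulation then repacks. The sketch below shows the row-to-OHLCV mapping per Binance's public kline docs; it is only illustrative and may differ in detail from the repacking code elided from this hunk.

```python
from typing import Any


def kline_to_ohlcv(row: list[Any]) -> tuple[float, ...]:
    # row layout per the Binance kline docs:
    # [open_time_ms, open, high, low, close, volume, close_time_ms, ...]
    epoch_s: float = float(row[0]) / 1000.0  # ms -> s
    o, h, l, c, v = (float(x) for x in row[1:6])
    return (epoch_s, o, h, l, c, v)
```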
@@ -934,148 +969,17 @@ class Client:
await self.close_listen_key(key)


_venue_urls: dict[str, str] = {
'spot': (
_spot_url,
'/api/v3/',
),
'spot_testnet': (
_testnet_spot_url,
'/fapi/v1/'
),
# margin and extended spot endpoints session.
# TODO: did this ever get implemented fully?
# 'margin': (
# _spot_url,
# '/sapi/v1/'
# ),

'usdtm_futes': (
_futes_url,
'/fapi/v1/',
),

'usdtm_futes_testnet': (
_testnet_futes_url,
'/fapi/v1/',
),

# TODO: for anyone who actually needs it ;P
# 'coin_futes': ()
}


def init_api_keys(
client: Client,
conf: dict[str, Any],
) -> None:
'''
Set up per-venue API keys each http client according to the user's
`brokers.conf`.

For ex, to use spot-testnet and live usdt futures APIs:

```toml
[binance]
# spot test net
spot.use_testnet = true
spot.api_key = '<spot_api_key_from_binance_account>'
spot.api_secret = '<spot_api_key_password>'

# futes live
futes.use_testnet = false
accounts.usdtm = 'futes'
futes.api_key = '<futes_api_key_from_binance>'
futes.api_secret = '<futes_api_key_password>''

# if uncommented will use the built-in paper engine and not
# connect to `binance` API servers for order ctl.
# accounts.paper = 'paper'
```

'''
for key, subconf in conf.items():
if api_key := subconf.get('api_key', ''):
venue_keys: list[str] = client.confkey2venuekeys[key]

venue_key: str
client: httpx.AsyncClient
for venue_key in venue_keys:
client, _ = client.venue_sesh[venue_key]

api_key_header: dict = {
# taken from official:
# https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
"Content-Type": "application/json;charset=utf-8",

# TODO: prolly should just always query and copy
# in the real latest ver?
"User-Agent": "binance-connector/6.1.6smbz6",
"X-MBX-APIKEY": api_key,
}
client.headers.update(api_key_header)

# if `.use_tesnet = true` in the config then
# also add headers for the testnet session which
# will be used for all order control
if subconf.get('use_testnet', False):
testnet_sesh, _ = client.venue_sesh[
venue_key + '_testnet'
]
testnet_sesh.headers.update(api_key_header)
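As context for the TOML example in the docstring above: dotted keys like `spot.api_key` parse into nested tables, which is exactly the `subconf: dict` shape the loop iterates over. A minimal sketch of reading such a section with stdlib `tomllib` (Python 3.11+) follows; the config path is hypothetical since `piker` resolves its own config location through the `get_config()`/`config.load()` helpers shown elsewhere in this diff.

```python
import tomllib
from pathlib import Path

# hypothetical location, only for illustration
conf_path = Path('~/.config/piker/brokers.toml').expanduser()

with conf_path.open('rb') as fp:
    binance_conf: dict = tomllib.load(fp)['binance']

# e.g. binance_conf['futes'] -> {'use_testnet': False, 'api_key': ..., 'api_secret': ...}
for key, subconf in binance_conf.items():
    if isinstance(subconf, dict) and subconf.get('api_key'):
        print(f'found API creds under config section {key!r}')
```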

@acm
async def get_client(
mkt_mode: MarketType = 'spot',
) -> Client:
'''
Construct an single `piker` client which composes multiple underlying venue
specific API clients both for live and test networks.
async def get_client() -> Client:

'''
venue_sessions: dict[
str,  # venue key
tuple[httpx.AsyncClient, str]  # session, eps path
] = {}
async with AsyncExitStack() as client_stack:
for name, (base_url, path) in _venue_urls.items():
api: httpx.AsyncClient = await client_stack.enter_async_context(
httpx.AsyncClient(
base_url=base_url,
# headers={},

# TODO: is there a way to numerate this?
# https://www.python-httpx.org/advanced/clients/#why-use-a-client
# connections=4
)
)
venue_sessions[name] = (
api,
path,
)

conf: dict[str, Any] = get_config()
# for creating API keys see,
# https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
client = Client(
venue_sessions=venue_sessions,
conf=conf,
mkt_mode=mkt_mode,
)
init_api_keys(
client=client,
conf=conf,
)
fq_pairs: dict[str, Pair] = await client.exch_info()
assert fq_pairs
client = Client()
await client.exch_info()
log.info(
f'Loaded multi-venue `Client` in mkt_mode={client.mkt_mode!r}\n\n'
f'Symbology Summary:\n'
f'------ - ------\n'
f'{client} in {client.mkt_mode} mode: caching exchange infos..\n'
'Cached multi-market pairs:\n'
f'spot: {len(client._spot_pairs)}\n'
f'usdtm_futes: {len(client._ufutes_pairs)}\n'
'------ - ------\n'
f'total: {len(client._pairs)}\n'
f'Total: {len(client._pairs)}\n'
)

yield client
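Since `get_client()` above is an `@acm`-decorated constructor that yields the composed multi-venue client, a direct usage sketch looks like the following. In practice other modules in this diff go through `open_cached_client('binance')` instead, so this is only an illustration of the context-manager shape.

```python
import trio


async def main() -> None:
    # open the composite multi-venue client and pull the cached
    # symbology table, both shown in the `get_client()` body above.
    async with get_client(mkt_mode='usdtm_futes') as client:
        pairs = await client.exch_info()
        print(f'{len(pairs)} pairs cached in {client.mkt_mode!r} mode')


trio.run(main)
```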

@@ -264,20 +264,15 @@ async def open_trade_dialog(
# do a open_symcache() call.. though maybe we can hide
# this in a new async version of open_account()?
async with open_cached_client('binance') as client:
subconf: dict|None = client.conf.get(venue_name)
subconf: dict = client.conf[venue_name]
use_testnet = subconf.get('use_testnet', False)

# XXX: if no futes.api_key or spot.api_key has been set we
# always fall back to the paper engine!
if (
not subconf
or
not subconf.get('api_key')
):
if not subconf.get('api_key'):
await ctx.started('paper')
return

use_testnet: bool = subconf.get('use_testnet', False)

async with (
open_cached_client('binance') as client,
):
@ -42,12 +42,12 @@ from trio_typing import TaskStatus
|
|||
from pendulum import (
|
||||
from_timestamp,
|
||||
)
|
||||
from rapidfuzz import process as fuzzy
|
||||
import numpy as np
|
||||
import tractor
|
||||
|
||||
from piker.brokers import (
|
||||
open_cached_client,
|
||||
NoData,
|
||||
)
|
||||
from piker._cacheables import (
|
||||
async_lifo_cache,
|
||||
|
@ -110,7 +110,6 @@ class AggTrade(Struct, frozen=True):
|
|||
|
||||
async def stream_messages(
|
||||
ws: NoBsWs,
|
||||
|
||||
) -> AsyncGenerator[NoBsWs, dict]:
|
||||
|
||||
# TODO: match syntax here!
|
||||
|
@ -221,8 +220,6 @@ def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
|
|||
}
|
||||
|
||||
|
||||
# TODO, why aren't frame resp `log.info()`s showing in upstream
|
||||
# code?!
|
||||
@acm
|
||||
async def open_history_client(
|
||||
mkt: MktPair,
|
||||
|
@ -255,30 +252,24 @@ async def open_history_client(
|
|||
else:
|
||||
client.mkt_mode = 'spot'
|
||||
|
||||
array: np.ndarray = await client.bars(
|
||||
mkt=mkt,
|
||||
# NOTE: always query using their native symbology!
|
||||
mktid: str = mkt.bs_mktid
|
||||
array = await client.bars(
|
||||
mktid,
|
||||
start_dt=start_dt,
|
||||
end_dt=end_dt,
|
||||
)
|
||||
if array.size == 0:
|
||||
raise NoData(
|
||||
f'No frame for {start_dt} -> {end_dt}\n'
|
||||
)
|
||||
|
||||
times = array['time']
|
||||
if not times.any():
|
||||
raise ValueError(
|
||||
'Bad frame with null-times?\n\n'
|
||||
f'{times}'
|
||||
)
|
||||
|
||||
if end_dt is None:
|
||||
inow: int = round(time.time())
|
||||
if (
|
||||
end_dt is None
|
||||
):
|
||||
inow = round(time.time())
|
||||
if (inow - times[-1]) > 60:
|
||||
await tractor.pause()
|
||||
|
||||
start_dt = from_timestamp(times[0])
|
||||
end_dt = from_timestamp(times[-1])
|
||||
|
||||
return array, start_dt, end_dt
|
||||
|
||||
yield get_ohlc, {'erlangs': 3, 'rate': 3}
|
||||
|
@ -465,8 +456,6 @@ async def stream_quotes(
|
|||
):
|
||||
init_msgs: list[FeedInit] = []
|
||||
for sym in symbols:
|
||||
mkt: MktPair
|
||||
pair: Pair
|
||||
mkt, pair = await get_mkt_info(sym)
|
||||
|
||||
# build out init msgs according to latest spec
|
||||
|
@ -515,6 +504,7 @@ async def stream_quotes(
|
|||
|
||||
# start streaming
|
||||
async for typ, quote in msg_gen:
|
||||
|
||||
# period = time.time() - last
|
||||
# hz = 1/period if period else float('inf')
|
||||
# if hz > 60:
|
||||
|
@ -550,7 +540,7 @@ async def open_symbol_search(
|
|||
)
|
||||
|
||||
# repack in fqme-keyed table
|
||||
byfqme: dict[str, Pair] = {}
|
||||
byfqme: dict[start, Pair] = {}
|
||||
for pair in pairs.values():
|
||||
byfqme[pair.bs_fqme] = pair
|
||||
|
||||
|
|
|
@ -137,12 +137,10 @@ class SpotPair(Pair, frozen=True):
|
|||
quoteOrderQtyMarketAllowed: bool
|
||||
isSpotTradingAllowed: bool
|
||||
isMarginTradingAllowed: bool
|
||||
otoAllowed: bool
|
||||
|
||||
defaultSelfTradePreventionMode: str
|
||||
allowedSelfTradePreventionModes: list[str]
|
||||
permissions: list[str]
|
||||
permissionSets: list[list[str]]
|
||||
|
||||
# NOTE: see `.data._symcache.SymbologyCache.load()` for why
|
||||
ns_path: str = 'piker.brokers.binance:SpotPair'
|
||||
|
@ -181,6 +179,7 @@ class FutesPair(Pair):
|
|||
quoteAsset: str # 'USDT',
|
||||
quotePrecision: int # 8,
|
||||
requiredMarginPercent: float # '5.0000',
|
||||
settlePlan: int # 0,
|
||||
timeInForce: list[str] # ['GTC', 'IOC', 'FOK', 'GTX'],
|
||||
triggerProtect: float # '0.0500',
|
||||
underlyingSubType: list[str] # ['PoW'],
|
||||
|
@ -202,7 +201,6 @@ class FutesPair(Pair):
|
|||
match contype:
|
||||
case (
|
||||
'CURRENT_QUARTER'
|
||||
| 'CURRENT_QUARTER DELIVERING'
|
||||
| 'NEXT_QUARTER' # su madre binance..
|
||||
):
|
||||
pair, _, expiry = symbol.partition('_')
|
||||
|
@ -222,10 +220,6 @@ class FutesPair(Pair):
|
|||
case ['DEFI']:
|
||||
return 'PERP'
|
||||
|
||||
# wow, just wow you binance guys suck..
|
||||
if self.status == 'PENDING_TRADING':
|
||||
return 'PENDING'
|
||||
|
||||
# XXX: yeah no clue then..
|
||||
raise ValueError(
|
||||
f'Bad .expiry token match: {contype} for {symbol}'
|
||||
|
@ -243,7 +237,6 @@ class FutesPair(Pair):
|
|||
|
||||
case (
|
||||
'CURRENT_QUARTER'
|
||||
| 'CURRENT_QUARTER DELIVERING'
|
||||
| 'NEXT_QUARTER' # su madre binance..
|
||||
):
|
||||
_, _, expiry = symbol.partition('_')
|
||||
|
@ -256,10 +249,7 @@ class FutesPair(Pair):
|
|||
return f'{margin}M'
|
||||
|
||||
match subtype:
|
||||
case (
|
||||
['DEFI']
|
||||
| ['USDC']
|
||||
):
|
||||
case ['DEFI']:
|
||||
return f'{subtype[0]}'
|
||||
|
||||
# XXX: yeah no clue then..
|
||||
|
|
|
@ -482,8 +482,6 @@ def search(
|
|||
):
|
||||
return await func()
|
||||
|
||||
from piker.toolz import open_crash_handler
|
||||
with open_crash_handler():
|
||||
quotes = trio.run(
|
||||
main,
|
||||
partial(
|
||||
|
@ -506,11 +504,9 @@ def search(
|
|||
@click.option('--delete', '-d', flag_value=True, help='Delete section')
|
||||
@click.pass_obj
|
||||
def brokercfg(config, section, value, delete):
|
||||
'''
|
||||
If invoked with no arguments, open an editor to edit broker
|
||||
configs file or get / update an individual section.
|
||||
|
||||
'''
|
||||
"""If invoked with no arguments, open an editor to edit broker configs file
|
||||
or get / update an individual section.
|
||||
"""
|
||||
from .. import config
|
||||
|
||||
if section:
|
||||
|
|
|
@ -145,11 +145,7 @@ async def symbol_search(
|
|||
|
||||
async with maybe_spawn_brokerd(
|
||||
mod.name,
|
||||
infect_asyncio=getattr(
|
||||
mod,
|
||||
'_infect_asyncio',
|
||||
False,
|
||||
),
|
||||
infect_asyncio=getattr(mod, '_infect_asyncio', False),
|
||||
) as portal:
|
||||
|
||||
results.append((
|
||||
|
|
|
@ -100,7 +100,7 @@ async def data_reset_hack(
|
|||
log.warning(
|
||||
no_setup_msg
|
||||
+
|
||||
'REQUIRES A `vnc_addrs: array` ENTRY'
|
||||
f'REQUIRES A `vnc_addrs: array` ENTRY'
|
||||
)
|
||||
|
||||
vnc_host, vnc_port = vnc_sockaddr.get(
|
||||
|
|
|
@ -41,6 +41,7 @@ import time
|
|||
from typing import (
|
||||
Any,
|
||||
Callable,
|
||||
Union,
|
||||
)
|
||||
from types import SimpleNamespace
|
||||
|
||||
|
@ -48,12 +49,7 @@ from bidict import bidict
|
|||
import trio
|
||||
import tractor
|
||||
from tractor import to_asyncio
|
||||
from pendulum import (
|
||||
from_timestamp,
|
||||
DateTime,
|
||||
Duration,
|
||||
duration as mk_duration,
|
||||
)
|
||||
import pendulum
|
||||
from eventkit import Event
|
||||
from ib_insync import (
|
||||
client as ib_client,
|
||||
|
@ -225,20 +221,16 @@ def bars_to_np(bars: list) -> np.ndarray:
|
|||
# https://interactivebrokers.github.io/tws-api/historical_limitations.html#non-available_hd
|
||||
_samplings: dict[int, tuple[str, str]] = {
|
||||
1: (
|
||||
# ib strs
|
||||
'1 secs',
|
||||
f'{int(2e3)} S',
|
||||
|
||||
mk_duration(seconds=2e3),
|
||||
pendulum.duration(seconds=2e3),
|
||||
),
|
||||
# TODO: benchmark >1 D duration on query to see if
|
||||
# throughput can be made faster during backfilling.
|
||||
60: (
|
||||
# ib strs
|
||||
'1 min',
|
||||
'2 D',
|
||||
|
||||
mk_duration(days=2),
|
||||
'1 D',
|
||||
pendulum.duration(days=1),
|
||||
),
|
||||
}
|
||||
|
||||
|
@ -287,31 +279,9 @@ class Client:
|
|||
self.conf = config
|
||||
|
||||
# NOTE: the ib.client here is "throttled" to 45 rps by default
|
||||
self.ib: IB = ib
|
||||
self.ib = ib
|
||||
self.ib.RaiseRequestErrors: bool = True
|
||||
|
||||
# self._acnt_names: set[str] = {}
|
||||
self._acnt_names: list[str] = []
|
||||
|
||||
@property
|
||||
def acnts(self) -> list[str]:
|
||||
# return list(self._acnt_names)
|
||||
return self._acnt_names
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return (
|
||||
f'<{type(self).__name__}('
|
||||
f'ib={self.ib} '
|
||||
f'acnts={self.acnts}'
|
||||
|
||||
# TODO: we need to mask out acnt-#s and other private
|
||||
# infos if we're going to console this!
|
||||
# f' |_.conf:\n'
|
||||
# f' {pformat(self.conf)}\n'
|
||||
|
||||
')>'
|
||||
)
|
||||
|
||||
async def get_fills(self) -> list[Fill]:
|
||||
'''
|
||||
Return list of rents `Fills` from trading session.
|
||||
|
@ -333,8 +303,8 @@ class Client:
|
|||
fqme: str,
|
||||
|
||||
# EST in ISO 8601 format is required... below is EPOCH
|
||||
start_dt: datetime | str = "1970-01-01T00:00:00.000000-05:00",
|
||||
end_dt: datetime | str = "",
|
||||
start_dt: Union[datetime, str] = "1970-01-01T00:00:00.000000-05:00",
|
||||
end_dt: Union[datetime, str] = "",
|
||||
|
||||
# ohlc sample period in seconds
|
||||
sample_period_s: int = 1,
|
||||
|
@ -345,7 +315,7 @@ class Client:
|
|||
|
||||
**kwargs,
|
||||
|
||||
) -> tuple[BarDataList, np.ndarray, Duration]:
|
||||
) -> tuple[BarDataList, np.ndarray, pendulum.Duration]:
|
||||
'''
|
||||
Retreive OHLCV bars for a fqme over a range to the present.
|
||||
|
||||
|
@ -354,19 +324,14 @@ class Client:
|
|||
# https://interactivebrokers.github.io/tws-api/historical_data.html
|
||||
bars_kwargs = {'whatToShow': 'TRADES'}
|
||||
bars_kwargs.update(kwargs)
|
||||
(
|
||||
bar_size,
|
||||
ib_duration_str,
|
||||
default_dt_duration,
|
||||
) = _samplings[sample_period_s]
|
||||
bar_size, duration, dt_duration = _samplings[sample_period_s]
|
||||
|
||||
dt_duration: Duration = (
|
||||
duration
|
||||
or default_dt_duration
|
||||
global _enters
|
||||
log.info(
|
||||
f"REQUESTING {duration}'s worth {bar_size} BARS\n"
|
||||
f'{_enters} @ end={end_dt}"'
|
||||
)
|
||||
|
||||
# TODO: maybe remove all this?
|
||||
global _enters
|
||||
if not end_dt:
|
||||
end_dt = ''
|
||||
|
||||
|
@ -375,8 +340,8 @@ class Client:
|
|||
contract: Contract = (await self.find_contracts(fqme))[0]
|
||||
bars_kwargs.update(getattr(contract, 'bars_kwargs', {}))
|
||||
|
||||
kwargs: dict[str, Any] = dict(
|
||||
contract=contract,
|
||||
bars = await self.ib.reqHistoricalDataAsync(
|
||||
contract,
|
||||
endDateTime=end_dt,
|
||||
formatDate=2,
|
||||
|
||||
|
@ -388,7 +353,7 @@ class Client:
|
|||
|
||||
# time history length values format:
|
||||
# ``durationStr=integer{SPACE}unit (S|D|W|M|Y)``
|
||||
durationStr=ib_duration_str,
|
||||
durationStr=duration,
|
||||
|
||||
# always use extended hours
|
||||
useRTH=False,
|
||||
|
@ -398,81 +363,36 @@ class Client:
|
|||
# whatToShow='MIDPOINT',
|
||||
# whatToShow='TRADES',
|
||||
)
|
||||
bars = await self.ib.reqHistoricalDataAsync(
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
query_info: str = (
|
||||
f'REQUESTING IB history BARS\n'
|
||||
f' ------ - ------\n'
|
||||
f'dt_duration: {dt_duration}\n'
|
||||
f'ib_duration_str: {ib_duration_str}\n'
|
||||
f'bar_size: {bar_size}\n'
|
||||
f'fqme: {fqme}\n'
|
||||
f'actor-global _enters: {_enters}\n'
|
||||
f'kwargs: {pformat(kwargs)}\n'
|
||||
)
|
||||
# tail case if no history for range or none prior.
|
||||
# NOTE: there's actually 3 cases here to handle (and
|
||||
# this should be read alongside the implementation of
|
||||
# `.reqHistoricalDataAsync()`):
|
||||
# - a timeout occurred in which case insync internals return
|
||||
# an empty list thing with bars.clear()...
|
||||
# - no data exists for the period likely due to
|
||||
if not bars:
|
||||
# NOTE: there's 2 cases here to handle (and this should be
|
||||
# read alongside the implementation of
|
||||
# ``.reqHistoricalDataAsync()``):
|
||||
# - no data is returned for the period likely due to
|
||||
# a weekend, holiday or other non-trading period prior to
|
||||
# ``end_dt`` which exceeds the ``duration``,
|
||||
# - LITERALLY this is the start of the mkt's history!
|
||||
if not bars:
|
||||
# TODO: figure out wut's going on here.
|
||||
|
||||
# TODO: is this handy, a sync requester for tinkering
|
||||
# with empty frame cases?
|
||||
# def get_hist():
|
||||
# return self.ib.reqHistoricalData(**kwargs)
|
||||
# import pdbp
|
||||
# pdbp.set_trace()
|
||||
|
||||
log.critical(
|
||||
'STUPID IB SAYS NO HISTORY\n\n'
|
||||
+ query_info
|
||||
)
|
||||
|
||||
# TODO: we could maybe raise ``NoData`` instead if we
|
||||
# rewrite the method in the first case?
|
||||
# right now there's no way to detect a timeout..
|
||||
# - a timeout occurred in which case insync internals return
|
||||
# an empty list thing with bars.clear()...
|
||||
return [], np.empty(0), dt_duration
|
||||
# TODO: we could maybe raise ``NoData`` instead if we
|
||||
# rewrite the method in the first case? right now there's no
|
||||
# way to detect a timeout.
|
||||
|
||||
log.info(query_info)
|
||||
# NOTE XXX: ensure minimum duration in bars?
|
||||
# => recursively call this method until we get at least as
|
||||
# many bars such that they sum in aggregate to the the
|
||||
# NOTE XXX: ensure minimum duration in bars B)
|
||||
# => we recursively call this method until we get at least
|
||||
# as many bars such that they sum in aggregate to the the
|
||||
# desired total time (duration) at most.
|
||||
# - if you query over a gap and get no data
|
||||
# that may short circuit the history
|
||||
if (
|
||||
# XXX XXX XXX
|
||||
# => WHY DID WE EVEN NEED THIS ORIGINALLY!? <=
|
||||
# XXX XXX XXX
|
||||
False
|
||||
and end_dt
|
||||
):
|
||||
nparr: np.ndarray = bars_to_np(bars)
|
||||
times: np.ndarray = nparr['time']
|
||||
first: float = times[0]
|
||||
tdiff: float = times[-1] - first
|
||||
|
||||
if (
|
||||
# len(bars) * sample_period_s) < dt_duration.in_seconds()
|
||||
tdiff < dt_duration.in_seconds()
|
||||
# and False
|
||||
):
|
||||
end_dt: DateTime = from_timestamp(first)
|
||||
log.warning(
|
||||
f'Frame result was shorter then {dt_duration}!?\n'
|
||||
'Recursing for more bars:\n'
|
||||
f'end_dt: {end_dt}\n'
|
||||
f'dt_duration: {dt_duration}\n'
|
||||
elif (
|
||||
end_dt
|
||||
and (
|
||||
(len(bars) * sample_period_s) < dt_duration.in_seconds()
|
||||
)
|
||||
):
|
||||
log.warning(
|
||||
f'Recursing to get more bars from {end_dt} for {dt_duration}'
|
||||
)
|
||||
end_dt -= dt_duration
|
||||
(
|
||||
r_bars,
|
||||
r_arr,
|
||||
|
@ -481,39 +401,12 @@ class Client:
|
|||
fqme,
|
||||
start_dt=start_dt,
|
||||
end_dt=end_dt,
|
||||
sample_period_s=sample_period_s,
|
||||
|
||||
# TODO: make a table for Duration to
|
||||
# the ib str values in order to use this?
|
||||
# duration=duration,
|
||||
)
|
||||
r_bars.extend(bars)
|
||||
bars = r_bars
|
||||
|
||||
nparr: np.ndarray = bars_to_np(bars)
|
||||
|
||||
# timestep should always be at least as large as the
|
||||
# period step.
|
||||
tdiff: np.ndarray = np.diff(nparr['time'])
|
||||
to_short: np.ndarray = tdiff < sample_period_s
|
||||
if (to_short).any():
|
||||
# raise ValueError(
|
||||
log.error(
|
||||
f'OHLC frame for {sample_period_s} has {to_short.size} '
|
||||
'time steps which are shorter then expected?!"'
|
||||
)
|
||||
# OOF: this will break teardown?
|
||||
# -[ ] check if it's greenback
|
||||
# -[ ] why tf are we leaking shm entries..
|
||||
# -[ ] make a test on the debugging asyncio testing
|
||||
# branch..
|
||||
# breakpoint()
|
||||
|
||||
return (
|
||||
bars,
|
||||
nparr,
|
||||
dt_duration,
|
||||
)
|
||||
nparr = bars_to_np(bars)
|
||||
return bars, nparr, dt_duration
|
||||
|
||||
async def con_deats(
|
||||
self,
|
||||
|
@ -857,23 +750,6 @@ class Client:
|
|||
|
||||
return contracts
|
||||
|
||||
async def maybe_get_head_time(
|
||||
self,
|
||||
fqme: str,
|
||||
|
||||
) -> datetime | None:
|
||||
'''
|
||||
Return the first datetime stamp for `fqme` or `None`
|
||||
on request failure.
|
||||
|
||||
'''
|
||||
try:
|
||||
head_dt: datetime = await self.get_head_time(fqme=fqme)
|
||||
return head_dt
|
||||
except RequestError:
|
||||
log.warning(f'Unable to get head time: {fqme} ?')
|
||||
return None
|
||||
|
||||
async def get_head_time(
|
||||
self,
|
||||
fqme: str,
|
||||
|
@ -914,7 +790,6 @@ class Client:
|
|||
self,
|
||||
contract: Contract,
|
||||
timeout: float = 1,
|
||||
tries: int = 100,
|
||||
raise_on_timeout: bool = False,
|
||||
|
||||
) -> Ticker | None:
|
||||
|
@ -929,45 +804,34 @@ class Client:
|
|||
ready: ticker.TickerUpdateEvent = ticker.updateEvent
|
||||
|
||||
# ensure a last price gets filled in before we deliver quote
|
||||
timeouterr: Exception | None = None
|
||||
warnset: bool = False
|
||||
for _ in range(tries):
|
||||
|
||||
# wait for a first update(Event) indicatingn a
|
||||
# live quote feed.
|
||||
for _ in range(100):
|
||||
if isnan(ticker.last):
|
||||
|
||||
# wait for a first update(Event)
|
||||
try:
|
||||
tkr = await asyncio.wait_for(
|
||||
ready,
|
||||
timeout=timeout,
|
||||
)
|
||||
except TimeoutError:
|
||||
if raise_on_timeout:
|
||||
raise
|
||||
return None
|
||||
|
||||
if tkr:
|
||||
break
|
||||
except TimeoutError as err:
|
||||
timeouterr = err
|
||||
await asyncio.sleep(0.01)
|
||||
continue
|
||||
|
||||
else:
|
||||
if not warnset:
|
||||
log.warning(
|
||||
f'Quote req timed out..maybe venue is closed?\n'
|
||||
f'{asdict(contract)}'
|
||||
f'Quote for {contract} timed out: market is closed?'
|
||||
)
|
||||
warnset = True
|
||||
|
||||
else:
|
||||
log.info(
|
||||
'Got first quote for contract\n'
|
||||
f'{contract}\n'
|
||||
)
|
||||
log.info(f'Got first quote for {contract}')
|
||||
break
|
||||
else:
|
||||
if timeouterr and raise_on_timeout:
|
||||
import pdbp
|
||||
pdbp.set_trace()
|
||||
raise timeouterr
|
||||
|
||||
if not warnset:
|
||||
log.warning(
|
||||
f'Contract {contract} is not returning a quote '
|
||||
|
@ -975,8 +839,6 @@ class Client:
|
|||
)
|
||||
warnset = True
|
||||
|
||||
return None
|
||||
|
||||
return ticker
|
||||
|
||||
# async to be consistent for the client proxy, and cuz why not.
|
||||
|
@ -1024,12 +886,8 @@ class Client:
|
|||
outsideRth=True,
|
||||
|
||||
optOutSmartRouting=True,
|
||||
# TODO: need to understand this setting better as
|
||||
# it pertains to shit ass mms..
|
||||
routeMarketableToBbo=True,
|
||||
|
||||
designatedLocation='SMART',
|
||||
|
||||
# TODO: make all orders GTC?
|
||||
# https://interactivebrokers.github.io/tws-api/classIBApi_1_1Order.html#a95539081751afb9980f4c6bd1655a6ba
|
||||
# goodTillDate=f"yyyyMMdd-HH:mm:ss",
|
||||
|
@ -1142,9 +1000,7 @@ _scan_ignore: set[tuple[str, int]] = set()
|
|||
|
||||
def get_config() -> dict[str, Any]:
|
||||
|
||||
conf, path = config.load(
|
||||
conf_name='brokers',
|
||||
)
|
||||
conf, path = config.load('brokers')
|
||||
section = conf.get('ib')
|
||||
|
||||
accounts = section.get('accounts')
|
||||
|
@ -1157,8 +1013,8 @@ def get_config() -> dict[str, Any]:
|
|||
names = list(accounts.keys())
|
||||
accts = section['accounts'] = bidict(accounts)
|
||||
log.info(
|
||||
f'{path} defines {len(accts)} account aliases:\n'
|
||||
f'{pformat(names)}\n'
|
||||
f'brokers.toml defines {len(accts)} accounts: '
|
||||
f'{pformat(names)}'
|
||||
)
|
||||
|
||||
if section is None:
|
||||
|
@ -1225,7 +1081,7 @@ async def load_aio_clients(
|
|||
try_ports = list(try_ports.values())
|
||||
|
||||
_err = None
|
||||
accounts_def: dict[str, str] = config.load_accounts(['ib'])
|
||||
accounts_def = config.load_accounts(['ib'])
|
||||
ports = try_ports if port is None else [port]
|
||||
combos = list(itertools.product(hosts, ports))
|
||||
accounts_found: dict[str, Client] = {}
|
||||
|
@ -1250,12 +1106,6 @@ async def load_aio_clients(
|
|||
|
||||
for i in range(connect_retries):
|
||||
try:
|
||||
log.info(
|
||||
'Trying `ib_async` connect\n'
|
||||
f'{host}: {port}\n'
|
||||
f'clientId: {client_id}\n'
|
||||
f'timeout: {connect_timeout}\n'
|
||||
)
|
||||
await ib.connectAsync(
|
||||
host,
|
||||
port,
|
||||
|
@ -1270,9 +1120,7 @@ async def load_aio_clients(
|
|||
client = Client(ib=ib, config=conf)
|
||||
|
||||
# update all actor-global caches
|
||||
log.runtime(
|
||||
f'Connected and caching `Client` @ {sockaddr!r}'
|
||||
)
|
||||
log.info(f"Caching client for {sockaddr}")
|
||||
_client_cache[sockaddr] = client
|
||||
break
|
||||
|
||||
|
@ -1287,54 +1135,32 @@ async def load_aio_clients(
|
|||
OSError,
|
||||
) as ce:
|
||||
_err = ce
|
||||
message: str = (
|
||||
f'Failed to connect on {host}:{port} after {i} tries with\n'
|
||||
f'{ib.client.apiError.value()!r}\n\n'
|
||||
'Retrying with a new client id..\n'
|
||||
)
|
||||
log.runtime(message)
|
||||
else:
|
||||
# XXX report loudly if we never established after all
|
||||
# re-tries
|
||||
log.warning(message)
|
||||
log.warning(
|
||||
f'Failed to connect on {host}:{port} for {i} time with,\n'
|
||||
f'{ib.client.apiError.value()}\n'
|
||||
'retrying with a new client id..')
|
||||
|
||||
# Pre-collect all accounts available for this
|
||||
# connection and map account names to this client
|
||||
# instance.
|
||||
for value in ib.accountValues():
|
||||
acct_number: str = value.account
|
||||
acct_number = value.account
|
||||
|
||||
acnt_alias: str = accounts_def.inverse.get(acct_number)
|
||||
if not acnt_alias:
|
||||
|
||||
# TODO: should we constuct the below reco-ex from
|
||||
# the existing config content?
|
||||
_, path = config.load(
|
||||
conf_name='brokers',
|
||||
)
|
||||
entry = accounts_def.inverse.get(acct_number)
|
||||
if not entry:
|
||||
raise ValueError(
|
||||
'No alias in account section for account!\n'
|
||||
f'Please add an acnt alias entry to your {path}\n'
|
||||
'For example,\n\n'
|
||||
|
||||
'[ib.accounts]\n'
|
||||
'margin = {accnt_number!r}\n'
|
||||
'^^^^^^ <- you need this part!\n\n'
|
||||
|
||||
'This ensures `piker` will not leak private acnt info '
|
||||
'to console output by default!\n'
|
||||
'No section in brokers.toml for account:'
|
||||
f' {acct_number}\n'
|
||||
f'Please add entry to continue using this API client'
|
||||
)
|
||||
|
||||
# surjection of account names to operating clients.
|
||||
if acnt_alias not in accounts_found:
|
||||
accounts_found[acnt_alias] = client
|
||||
# client._acnt_names.add(acnt_alias)
|
||||
client._acnt_names.append(acnt_alias)
|
||||
if acct_number not in accounts_found:
|
||||
accounts_found[entry] = client
|
||||
|
||||
if accounts_found:
|
||||
log.info(
|
||||
f'Loaded accounts for api client\n\n'
|
||||
f'{pformat(accounts_found)}\n'
|
||||
f'Loaded accounts for client @ {host}:{port}\n'
|
||||
f'{pformat(accounts_found)}'
|
||||
)
|
||||
|
||||
# XXX: why aren't we just updating this directy above
|
||||
|
@ -1373,9 +1199,7 @@ async def load_clients_for_trio(
|
|||
a ``tractor.to_asyncio.open_channel_from()``.
|
||||
|
||||
'''
|
||||
async with load_aio_clients(
|
||||
disconnect_on_exit=False,
|
||||
) as accts2clients:
|
||||
async with load_aio_clients() as accts2clients:
|
||||
|
||||
to_trio.send_nowait(accts2clients)
|
||||
|
||||
|
@ -1501,7 +1325,7 @@ class MethodProxy:
|
|||
self,
|
||||
pattern: str,
|
||||
|
||||
) -> dict[str, Any] | trio.Event:
|
||||
) -> Union[dict[str, Any], trio.Event]:
|
||||
|
||||
ev = self.event_table.get(pattern)
|
||||
|
||||
|
@ -1541,7 +1365,7 @@ async def open_aio_client_method_relay(
|
|||
msg: tuple[str, dict] | dict | None = await from_trio.get()
|
||||
match msg:
|
||||
case None: # termination sentinel
|
||||
log.info('asyncio `Client` method-proxy SHUTDOWN!')
|
||||
print('asyncio PROXY-RELAY SHUTDOWN')
|
||||
break
|
||||
|
||||
case (meth_name, kwargs):
|
||||
|
|
|
@ -20,7 +20,7 @@ Order and trades endpoints for use with ``piker``'s EMS.
|
|||
"""
|
||||
from __future__ import annotations
|
||||
from contextlib import ExitStack
|
||||
# from collections import ChainMap
|
||||
from collections import ChainMap
|
||||
from functools import partial
|
||||
from pprint import pformat
|
||||
import time
|
||||
|
@ -1183,14 +1183,7 @@ async def deliver_trade_events(
|
|||
pos
|
||||
and fill
|
||||
):
|
||||
now_cr: CommissionReport = fill.commissionReport
|
||||
if (now_cr != cr):
|
||||
log.warning(
|
||||
'UhhHh ib updated the commission report mid-fill..?\n'
|
||||
f'was: {pformat(cr)}\n'
|
||||
f'now: {pformat(now_cr)}\n'
|
||||
)
|
||||
|
||||
assert fill.commissionReport == cr
|
||||
await emit_pp_update(
|
||||
ems_stream,
|
||||
accounts_def,
|
||||
|
|
|
@ -25,7 +25,6 @@ from contextlib import (
|
|||
from dataclasses import asdict
|
||||
from datetime import datetime
|
||||
from functools import partial
|
||||
from pprint import pformat
|
||||
from math import isnan
|
||||
import time
|
||||
from typing import (
|
||||
|
@ -37,13 +36,7 @@ from typing import (
|
|||
from async_generator import aclosing
|
||||
import ib_insync as ibis
|
||||
import numpy as np
|
||||
from pendulum import (
|
||||
now,
|
||||
from_timestamp,
|
||||
# DateTime,
|
||||
Duration,
|
||||
duration as mk_duration,
|
||||
)
|
||||
import pendulum
|
||||
import tractor
|
||||
import trio
|
||||
from trio_typing import TaskStatus
|
||||
|
@ -52,9 +45,10 @@ from piker.accounting import (
|
|||
MktPair,
|
||||
)
|
||||
from piker.data.validate import FeedInit
|
||||
from piker.brokers._util import (
|
||||
from .._util import (
|
||||
NoData,
|
||||
DataUnavailable,
|
||||
SymbolNotFound,
|
||||
)
|
||||
from .api import (
|
||||
# _adhoc_futes_set,
|
||||
|
@ -165,13 +159,13 @@ async def open_history_client(
|
|||
head_dt: None | datetime = None
|
||||
if (
|
||||
# fx cons seem to not provide this endpoint?
|
||||
# TODO: guard against all contract types which don't
|
||||
# support it?
|
||||
'idealpro' not in fqme
|
||||
):
|
||||
head_dt: datetime | None = await proxy.maybe_get_head_time(
|
||||
fqme=fqme
|
||||
)
|
||||
try:
|
||||
head_dt = await proxy.get_head_time(fqme=fqme)
|
||||
except RequestError:
|
||||
log.warning(f'Unable to get head time: {fqme} ?')
|
||||
pass
|
||||
|
||||
async def get_hist(
|
||||
timeframe: float,
|
||||
|
@ -179,15 +173,8 @@ async def open_history_client(
|
|||
start_dt: datetime | None = None,
|
||||
|
||||
) -> tuple[np.ndarray, str]:
|
||||
|
||||
nonlocal max_timeout, mean, count
|
||||
|
||||
if (
|
||||
start_dt
|
||||
and start_dt.timestamp() == 0
|
||||
):
|
||||
await tractor.pause()
|
||||
|
||||
query_start = time.time()
|
||||
out, timedout = await get_bars(
|
||||
proxy,
|
||||
|
@ -208,48 +195,24 @@ async def open_history_client(
|
|||
f'mean: {mean}'
|
||||
)
|
||||
|
||||
if (
|
||||
out is None
|
||||
):
|
||||
# could be trying to retreive bars over weekend
|
||||
if out is None:
|
||||
log.error(f"Can't grab bars starting at {end_dt}!?!?")
|
||||
raise NoData(
|
||||
f'{end_dt}',
|
||||
# frame_size=2000,
|
||||
)
|
||||
|
||||
if (
|
||||
end_dt
|
||||
and head_dt
|
||||
and end_dt <= head_dt
|
||||
):
|
||||
raise DataUnavailable(
|
||||
f'First timestamp is {head_dt}\n'
|
||||
f'But {end_dt} was requested..'
|
||||
)
|
||||
raise DataUnavailable(f'First timestamp is {head_dt}')
|
||||
|
||||
else:
|
||||
raise NoData(
|
||||
info={
|
||||
'fqme': fqme,
|
||||
'head_dt': head_dt,
|
||||
'start_dt': start_dt,
|
||||
'end_dt': end_dt,
|
||||
'timedout': timedout,
|
||||
},
|
||||
)
|
||||
|
||||
# also see return type for `get_bars()`
|
||||
bars: ibis.objects.BarDataList
|
||||
bars_array: np.ndarray
|
||||
first_dt: datetime
|
||||
last_dt: datetime
|
||||
(
|
||||
bars,
|
||||
bars_array,
|
||||
first_dt,
|
||||
last_dt,
|
||||
) = out
|
||||
|
||||
# TODO: audit the sampling period here as well?
|
||||
# timestep should always be at least as large as the
|
||||
# period step.
|
||||
# tdiff: np.ndarray = np.diff(bars_array['time'])
|
||||
# if (tdiff < timeframe).any():
|
||||
# await tractor.pause()
|
||||
bars, bars_array, first_dt, last_dt = out
|
||||
|
||||
# volume cleaning since there's -ve entries,
|
||||
# wood luv to know what crookery that is..
|
||||
|
@ -263,18 +226,7 @@ async def open_history_client(
|
|||
# quite sure why.. needs some tinkering and probably
|
||||
# a lookthrough of the ``ib_insync`` machinery, for eg. maybe
|
||||
# we have to do the batch queries on the `asyncio` side?
|
||||
yield (
|
||||
get_hist,
|
||||
{
|
||||
'erlangs': 1, # max conc reqs
|
||||
'rate': 3, # max req rate
|
||||
'frame_types': { # expected frame sizes
|
||||
1: mk_duration(seconds=2e3),
|
||||
60: mk_duration(days=2),
|
||||
}
|
||||
|
||||
},
|
||||
)
|
||||
yield get_hist, {'erlangs': 1, 'rate': 3}
|
||||
|
||||
|
||||
_pacing: str = (
|
||||
|
@ -419,11 +371,7 @@ async def get_bars(
|
|||
|
||||
while _failed_resets < max_failed_resets:
|
||||
try:
|
||||
(
|
||||
bars,
|
||||
bars_array,
|
||||
dt_duration,
|
||||
) = await proxy.bars(
|
||||
out = await proxy.bars(
|
||||
fqme=fqme,
|
||||
end_dt=end_dt,
|
||||
sample_period_s=timeframe,
|
||||
|
@ -434,58 +382,44 @@ async def get_bars(
|
|||
# current impl) to detect a cancel case.
|
||||
# timeout=timeout,
|
||||
)
|
||||
# usually either a request during a venue closure
|
||||
# or into a large (weekend) closure gap.
|
||||
if not bars:
|
||||
# no data returned?
|
||||
log.warning(
|
||||
'History frame is blank?\n'
|
||||
f'start_dt: {start_dt}\n'
|
||||
f'end_dt: {end_dt}\n'
|
||||
f'duration: {dt_duration}\n'
|
||||
)
|
||||
# NOTE: REQUIRED to pass back value..
|
||||
result = None
|
||||
return None
|
||||
if out is None:
|
||||
raise NoData(f'{end_dt}')
|
||||
|
||||
bars, bars_array, dt_duration = out
|
||||
|
||||
# not enough bars signal, likely due to venue
|
||||
# operational gaps.
|
||||
if end_dt:
|
||||
dur_s: float = len(bars) * timeframe
|
||||
bars_dur = Duration(seconds=dur_s)
|
||||
dt_dur_s: float = dt_duration.in_seconds()
|
||||
if dur_s < dt_dur_s:
|
||||
log.warning(
|
||||
'History frame is shorter then expected?\n'
|
||||
f'start_dt: {start_dt}\n'
|
||||
f'end_dt: {end_dt}\n'
|
||||
f'duration: {dt_dur_s}\n'
|
||||
f'frame duration seconds: {dur_s}\n'
|
||||
f'dur diff: {dt_duration - bars_dur}\n'
|
||||
too_little: bool = False
|
||||
if (
|
||||
end_dt
|
||||
and (
|
||||
not bars
|
||||
or (too_little :=
|
||||
start_dt
|
||||
and (len(bars) * timeframe)
|
||||
< dt_duration.in_seconds()
|
||||
)
|
||||
# NOTE: we used to try to get a minimal
|
||||
# set of bars by recursing but this ran
|
||||
# into possible infinite query loops
|
||||
# when logic in the `Client.bars()` dt
|
||||
# diffing went bad. So instead for now
|
||||
# we just return the
|
||||
# shorter-then-expected history with
|
||||
# a warning.
|
||||
# TODO: in the future it prolly makes
|
||||
# the most send to do venue operating
|
||||
# hours lookup and
|
||||
# timestamp-in-operating-range set
|
||||
# checking to know for sure if we can
|
||||
# safely and quickly ignore non-uniform history
|
||||
# frame timestamp gaps..
|
||||
# end_dt -= dt_duration
|
||||
# continue
|
||||
# await tractor.pause()
|
||||
)
|
||||
):
|
||||
if (
|
||||
end_dt
|
||||
or too_little
|
||||
):
|
||||
log.warning(
|
||||
f'History is blank for {dt_duration} from {end_dt}'
|
||||
)
|
||||
end_dt -= dt_duration
|
||||
continue
|
||||
|
||||
first_dt = from_timestamp(
|
||||
raise NoData(f'{end_dt}')
|
||||
|
||||
if bars_array is None:
|
||||
raise SymbolNotFound(fqme)
|
||||
|
||||
first_dt = pendulum.from_timestamp(
|
||||
bars[0].date.timestamp())
|
||||
|
||||
last_dt = from_timestamp(
|
||||
last_dt = pendulum.from_timestamp(
|
||||
bars[-1].date.timestamp())
|
||||
|
||||
time = bars_array['time']
|
||||
|
@ -498,7 +432,6 @@ async def get_bars(
|
|||
if data_cs:
|
||||
data_cs.cancel()
|
||||
|
||||
# NOTE: setting this is critical!
|
||||
result = (
|
||||
bars, # ib native
|
||||
bars_array, # numpy
|
||||
|
@ -509,7 +442,6 @@ async def get_bars(
|
|||
# signal data reset loop parent task
|
||||
result_ready.set()
|
||||
|
||||
# NOTE: this isn't getting collected anywhere!
|
||||
return result
|
||||
|
||||
except RequestError as err:
|
||||
|
@ -535,7 +467,7 @@ async def get_bars(
|
|||
if end_dt is not None:
|
||||
end_dt = end_dt.subtract(days=1)
|
||||
elif end_dt is None:
|
||||
end_dt = now().subtract(days=1)
|
||||
end_dt = pendulum.now().subtract(days=1)
|
||||
|
||||
log.warning(
|
||||
f'NO DATA found ending @ {end_dt}\n'
|
||||
|
@ -671,8 +603,8 @@ async def _setup_quote_stream(
|
|||
# making them mostly useless and explains why the scanner
|
||||
# is always slow XD
|
||||
# '293', # Trade count for day
|
||||
# '294', # Trade rate / minute
|
||||
# '295', # Vlm rate / minute
|
||||
'294', # Trade rate / minute
|
||||
'295', # Vlm rate / minute
|
||||
),
|
||||
contract: Contract | None = None,
|
||||
|
||||
|
@ -883,10 +815,7 @@ async def stream_quotes(
|
|||
proxy: MethodProxy
|
||||
mkt: MktPair
|
||||
details: ibis.ContractDetails
|
||||
async with (
|
||||
open_data_client() as proxy,
|
||||
# trio.open_nursery() as tn,
|
||||
):
|
||||
async with open_data_client() as proxy:
|
||||
mkt, details = await get_mkt_info(
|
||||
sym,
|
||||
proxy=proxy, # passed to avoid implicit client load
|
||||
|
@ -906,50 +835,30 @@ async def stream_quotes(
|
|||
init_msgs.append(init_msg)
|
||||
|
||||
con: Contract = details.contract
|
||||
first_ticker: Ticker | None = None
|
||||
with trio.move_on_after(1):
|
||||
first_ticker: Ticker = await proxy.get_quote(
|
||||
contract=con,
|
||||
raise_on_timeout=False,
|
||||
)
|
||||
|
||||
if first_ticker:
|
||||
first_ticker: Ticker = await proxy.get_quote(contract=con)
|
||||
first_quote: dict = normalize(first_ticker)
|
||||
|
||||
# TODO: we need a stack-oriented log levels filters for
|
||||
# this!
|
||||
# log.info(message, filter={'stack': 'live_feed'}) ?
|
||||
log.runtime(
|
||||
'Rxed init quote:\n\n'
|
||||
f'{pformat(first_quote)}\n'
|
||||
log.warning(f'FIRST QUOTE: {first_quote}')
|
||||
|
||||
# TODO: we should instead spawn a task that waits on a feed to start
|
||||
# and let it wait indefinitely..instead of this hard coded stuff.
|
||||
with trio.move_on_after(1):
|
||||
first_ticker = await proxy.get_quote(
|
||||
contract=con,
|
||||
raise_on_timeout=True,
|
||||
)
|
||||
|
||||
# NOTE: it might be outside regular trading hours for
|
||||
# assets with "standard venue operating hours" so we
|
||||
# only "pretend the feed is live" when the dst asset
|
||||
# type is NOT within the NON-NORMAL-venue set: aka not
|
||||
# commodities, forex or crypto currencies which CAN
|
||||
# always return a NaN on a snap quote request during
|
||||
# normal venue hours. In the case of a closed venue
|
||||
# (equitiies, futes, bonds etc.) we at least try to
|
||||
# grab the OHLC history.
|
||||
# it might be outside regular trading hours so see if we can at
|
||||
# least grab history.
|
||||
if (
|
||||
first_ticker
|
||||
and
|
||||
isnan(first_ticker.last)
|
||||
# SO, if the last quote price value is NaN we ONLY
|
||||
# "pretend to do" `feed_is_live.set()` if it's a known
|
||||
# dst asset venue with a lot of closed operating hours.
|
||||
isnan(first_ticker.last) # last quote price value is nan
|
||||
and mkt.dst.atype not in {
|
||||
'commodity',
|
||||
'fiat',
|
||||
'crypto',
|
||||
}
|
||||
):
|
||||
task_status.started((
|
||||
init_msgs,
|
||||
first_quote,
|
||||
))
|
||||
task_status.started((init_msgs, first_quote))
|
||||
|
||||
# it's not really live but this will unblock
|
||||
# the brokerd feed task to tell the ui to update?
|
||||
|
@ -959,28 +868,6 @@ async def stream_quotes(
|
|||
await trio.sleep_forever()
|
||||
return # we never expect feed to come up?
|
||||
|
||||
# TODO: we should instead spawn a task that waits on a feed
|
||||
# to start and let it wait indefinitely..instead of this
|
||||
# hard coded stuff.
|
||||
# async def wait_for_first_quote():
|
||||
# with trio.CancelScope() as cs:
|
||||
|
||||
# XXX: MUST acquire a ticker + first quote before starting
|
||||
# the live quotes loop!
|
||||
# with trio.move_on_after(1):
|
||||
first_ticker = await proxy.get_quote(
|
||||
contract=con,
|
||||
raise_on_timeout=True,
|
||||
)
|
||||
first_quote: dict = normalize(first_ticker)
|
||||
|
||||
# TODO: we need a stack-oriented log levels filters for
|
||||
# this!
|
||||
# log.info(message, filter={'stack': 'live_feed'}) ?
|
||||
log.runtime(
|
||||
'Rxed init quote:\n'
|
||||
f'{pformat(first_quote)}'
|
||||
)
|
||||
cs: trio.CancelScope | None = None
|
||||
startup: bool = True
|
||||
while (
|
||||
|
@ -1001,11 +888,8 @@ async def stream_quotes(
|
|||
|
||||
# only on first entry at feed boot up
|
||||
if startup:
|
||||
startup: bool = False
|
||||
task_status.started((
|
||||
init_msgs,
|
||||
first_quote,
|
||||
))
|
||||
startup = False
|
||||
task_status.started((init_msgs, first_quote))
|
||||
|
||||
# start a stream restarter task which monitors the
|
||||
# data feed event.
|
||||
|
@ -1029,7 +913,7 @@ async def stream_quotes(
|
|||
|
||||
# generally speaking these feeds don't
|
||||
# include vlm data.
|
||||
atype: str = mkt.dst.atype
|
||||
atype = mkt.dst.atype
|
||||
log.info(
|
||||
f'No-vlm {mkt.fqme}@{atype}, skipping quote poll'
|
||||
)
|
||||
|
@ -1065,8 +949,7 @@ async def stream_quotes(
|
|||
quote = normalize(ticker)
|
||||
log.debug(f"First ticker received {quote}")
|
||||
|
||||
# tell data-layer spawner-caller that live
|
||||
# quotes are now streaming.
|
||||
# tell caller quotes are now coming in live
|
||||
feed_is_live.set()
|
||||
|
||||
# last = time.time()
|
||||
|
|
|
@ -31,11 +31,7 @@ from typing import (
|
|||
)
|
||||
|
||||
from bidict import bidict
|
||||
from pendulum import (
|
||||
DateTime,
|
||||
parse,
|
||||
from_timestamp,
|
||||
)
|
||||
import pendulum
|
||||
from ib_insync import (
|
||||
Contract,
|
||||
Commodity,
|
||||
|
@ -70,11 +66,10 @@ tx_sort: Callable = partial(
|
|||
iter_by_dt,
|
||||
parsers={
|
||||
'dateTime': parse_flex_dt,
|
||||
'datetime': parse,
|
||||
|
||||
# XXX: for some some fucking 2022 and
|
||||
# back options records.. f@#$ me..
|
||||
'date': parse,
|
||||
'datetime': pendulum.parse,
|
||||
# for some some fucking 2022 and
|
||||
# back options records...fuck me.
|
||||
'date': pendulum.parse,
|
||||
}
|
||||
)
|
||||
|
||||
|
@ -94,38 +89,15 @@ def norm_trade(
|
|||
|
||||
conid: int = str(record.get('conId') or record['conid'])
|
||||
bs_mktid: str = str(conid)
|
||||
comms = record.get('commission')
|
||||
if comms is None:
|
||||
comms = -1*record['ibCommission']
|
||||
|
||||
# NOTE: sometimes weird records (like BTTX?)
|
||||
# have no field for this?
|
||||
comms: float = -1 * (
|
||||
record.get('commission')
|
||||
or record.get('ibCommission')
|
||||
or 0
|
||||
)
|
||||
if not comms:
|
||||
log.warning(
|
||||
'No commissions found for record?\n'
|
||||
f'{pformat(record)}\n'
|
||||
)
|
||||
|
||||
price: float = (
|
||||
record.get('price')
|
||||
or record.get('tradePrice')
|
||||
)
|
||||
if price is None:
|
||||
log.warning(
|
||||
'No `price` field found in record?\n'
|
||||
'Skipping normalization..\n'
|
||||
f'{pformat(record)}\n'
|
||||
)
|
||||
return None
|
||||
price = record.get('price') or record['tradePrice']
|
||||
|
||||
# the api doesn't do the -/+ on the quantity for you but flex
|
||||
# records do.. are you fucking serious ib...!?
|
||||
size: float|int = (
|
||||
record.get('quantity')
|
||||
or record['shares']
|
||||
) * {
|
||||
size = record.get('quantity') or record['shares'] * {
|
||||
'BOT': 1,
|
||||
'SLD': -1,
|
||||
}[record['side']]
|
||||
|
@ -156,31 +128,26 @@ def norm_trade(
|
|||
# otype = tail[6]
|
||||
# strike = tail[7:]
|
||||
|
||||
log.warning(
|
||||
f'Skipping option contract -> NO SUPPORT YET!\n'
|
||||
f'{symbol}\n'
|
||||
)
|
||||
print(f'skipping opts contract {symbol}')
|
||||
return None
|
||||
|
||||
# timestamping is way different in API records
|
||||
dtstr: str = record.get('datetime')
|
||||
date: str = record.get('date')
|
||||
flex_dtstr: str = record.get('dateTime')
|
||||
dtstr = record.get('datetime')
|
||||
date = record.get('date')
|
||||
flex_dtstr = record.get('dateTime')
|
||||
|
||||
if dtstr or date:
|
||||
dt: DateTime = parse(dtstr or date)
|
||||
dt = pendulum.parse(dtstr or date)
|
||||
|
||||
elif flex_dtstr:
|
||||
# probably a flex record with a wonky non-std timestamp..
|
||||
dt: DateTime = parse_flex_dt(record['dateTime'])
|
||||
dt = parse_flex_dt(record['dateTime'])
|
||||
|
||||
# special handling of symbol extraction from
|
||||
# flex records using some ad-hoc schema parsing.
|
||||
asset_type: str = (
|
||||
record.get('assetCategory')
|
||||
or record.get('secType')
|
||||
or 'STK'
|
||||
)
|
||||
asset_type: str = record.get(
|
||||
'assetCategory'
|
||||
) or record.get('secType', 'STK')
|
||||
|
||||
if (expiry := (
|
||||
record.get('lastTradeDateOrContractMonth')
|
||||
|
@ -390,7 +357,6 @@ def norm_trade_records(
|
|||
if txn is None:
|
||||
continue
|
||||
|
||||
# inject txns sorted by datetime
|
||||
insort(
|
||||
records,
|
||||
txn,
|
||||
|
@ -439,7 +405,7 @@ def api_trades_to_ledger_entries(
|
|||
txn_dict[attr_name] = val
|
||||
|
||||
tid = str(txn_dict['execId'])
|
||||
dt = from_timestamp(txn_dict['time'])
|
||||
dt = pendulum.from_timestamp(txn_dict['time'])
|
||||
txn_dict['datetime'] = str(dt)
|
||||
acctid = accounts[txn_dict['acctNumber']]
|
||||
|
||||
|
|
|
@ -209,15 +209,12 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
|
|||
break
|
||||
|
||||
ib_client = proxy._aio_ns.ib
|
||||
log.info(
|
||||
f'Using API client for symbol-search\n'
|
||||
f'{ib_client}\n'
|
||||
)
|
||||
log.info(f'Using {ib_client} for symbol search')
|
||||
|
||||
last = time.time()
|
||||
async for pattern in stream:
|
||||
log.info(f'received {pattern}')
|
||||
now: float = time.time()
|
||||
now = time.time()
|
||||
|
||||
# this causes tractor hang...
|
||||
# assert 0
|
||||
|
@ -264,9 +261,7 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
|
|||
# defined adhoc symbol set.
|
||||
stock_results = []
|
||||
|
||||
async def extend_results(
|
||||
target: Awaitable[list]
|
||||
) -> None:
|
||||
async def stash_results(target: Awaitable[list]):
|
||||
try:
|
||||
results = await target
|
||||
except tractor.trionics.Lagged:
|
||||
|
@ -279,7 +274,7 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
|
|||
with trio.move_on_after(3) as cs:
|
||||
async with trio.open_nursery() as sn:
|
||||
sn.start_soon(
|
||||
extend_results,
|
||||
stash_results,
|
||||
proxy.search_symbols(
|
||||
pattern=pattern,
|
||||
upto=5,
|
||||
|
@ -294,10 +289,8 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
|
|||
f'Search timeout? {proxy._aio_ns.ib.client}'
|
||||
)
|
||||
continue
|
||||
elif stock_results:
|
||||
else:
|
||||
break
|
||||
# else:
|
||||
# await tractor.pause()
|
||||
|
||||
# # match against our ad-hoc set immediately
|
||||
# adhoc_matches = fuzzy.extract(
|
||||
|
@ -525,21 +518,7 @@ async def get_mkt_info(
|
|||
venue = con.primaryExchange or con.exchange
|
||||
|
||||
price_tick: Decimal = Decimal(str(details.minTick))
|
||||
ib_min_tick_gt_2: Decimal = Decimal('0.01')
|
||||
if (
|
||||
price_tick < ib_min_tick_gt_2
|
||||
):
|
||||
# TODO: we need to add some kinda dynamic rounding sys
|
||||
# to our MktPair i guess?
|
||||
# not sure where the logic should sit, but likely inside
|
||||
# the `.clearing._ems` i suppose...
|
||||
log.warning(
|
||||
'IB seems to disallow a min price tick < 0.01 '
|
||||
'when the price is > 2.0..?\n'
|
||||
f'Decreasing min tick precision for {fqme} to 0.01'
|
||||
)
|
||||
# price_tick = ib_min_tick
|
||||
# await tractor.pause()
|
||||
# price_tick: Decimal = Decimal('0.01')
|
||||
|
||||
if atype == 'stock':
|
||||
# XXX: GRRRR they don't support fractional share sizes for
|
||||
|
|
|
@ -27,8 +27,8 @@ from typing import (
|
|||
)
|
||||
import time
|
||||
|
||||
import httpx
|
||||
import pendulum
|
||||
import asks
|
||||
import numpy as np
|
||||
import urllib.parse
|
||||
import hashlib
|
||||
|
@ -60,11 +60,6 @@ log = get_logger('piker.brokers.kraken')
|
|||
|
||||
# <uri>/<version>/
|
||||
_url = 'https://api.kraken.com/0'
|
||||
|
||||
_headers: dict[str, str] = {
|
||||
'User-Agent': 'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
|
||||
}
|
||||
|
||||
# TODO: this is the only backend providing this right?
|
||||
# in which case we should drop it from the defaults and
|
||||
# instead make a custom fields descr in this module!
|
||||
|
@ -75,18 +70,12 @@ _symbol_info_translation: dict[str, str] = {
|
|||
|
||||
|
||||
def get_config() -> dict[str, Any]:
|
||||
'''
|
||||
Load our section from `piker/brokers.toml`.
|
||||
|
||||
'''
|
||||
conf, path = config.load(
|
||||
conf_name='brokers',
|
||||
touch_if_dne=True,
|
||||
)
|
||||
if (section := conf.get('kraken')) is None:
|
||||
log.warning(
|
||||
f'No config section found for kraken in {path}'
|
||||
)
|
||||
conf, path = config.load()
|
||||
section = conf.get('kraken')
|
||||
|
||||
if section is None:
|
||||
log.warning(f'No config section found for kraken in {path}')
|
||||
return {}
|
||||
|
||||
return section
|
||||
|
@ -140,15 +129,16 @@ class Client:
|
|||
def __init__(
|
||||
self,
|
||||
config: dict[str, str],
|
||||
httpx_client: httpx.AsyncClient,
|
||||
|
||||
name: str = '',
|
||||
api_key: str = '',
|
||||
secret: str = ''
|
||||
) -> None:
|
||||
|
||||
self._sesh: httpx.AsyncClient = httpx_client
|
||||
|
||||
self._sesh = asks.Session(connections=4)
|
||||
self._sesh.base_location = _url
|
||||
self._sesh.headers.update({
|
||||
'User-Agent':
|
||||
'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
|
||||
})
|
||||
self._name = name
|
||||
self._api_key = api_key
|
||||
self._secret = secret
|
||||
|
@ -170,9 +160,10 @@ class Client:
|
|||
method: str,
|
||||
data: dict,
|
||||
) -> dict[str, Any]:
|
||||
resp: httpx.Response = await self._sesh.post(
|
||||
url=f'/public/{method}',
|
||||
resp = await self._sesh.post(
|
||||
path=f'/public/{method}',
|
||||
json=data,
|
||||
timeout=float('inf')
|
||||
)
|
||||
return resproc(resp, log)
|
||||
|
||||
|
@ -183,18 +174,18 @@ class Client:
|
|||
uri_path: str
|
||||
) -> dict[str, Any]:
|
||||
headers = {
|
||||
'Content-Type': 'application/x-www-form-urlencoded',
|
||||
'API-Key': self._api_key,
|
||||
'API-Sign': get_kraken_signature(
|
||||
uri_path,
|
||||
data,
|
||||
self._secret,
|
||||
),
|
||||
'Content-Type':
|
||||
'application/x-www-form-urlencoded',
|
||||
'API-Key':
|
||||
self._api_key,
|
||||
'API-Sign':
|
||||
get_kraken_signature(uri_path, data, self._secret)
|
||||
}
|
||||
resp: httpx.Response = await self._sesh.post(
|
||||
url=f'/private/{method}',
|
||||
resp = await self._sesh.post(
|
||||
path=f'/private/{method}',
|
||||
data=data,
|
||||
headers=headers,
|
||||
timeout=float('inf')
|
||||
)
|
||||
return resproc(resp, log)
|
||||
|
||||
|
@ -668,19 +659,10 @@ class Client:
|
|||
@acm
|
||||
async def get_client() -> Client:
|
||||
|
||||
conf: dict[str, Any] = get_config()
|
||||
async with httpx.AsyncClient(
|
||||
base_url=_url,
|
||||
headers=_headers,
|
||||
|
||||
# TODO: is there a way to numerate this?
|
||||
# https://www.python-httpx.org/advanced/clients/#why-use-a-client
|
||||
# connections=4
|
||||
) as trio_client:
|
||||
conf = get_config()
|
||||
if conf:
|
||||
client = Client(
|
||||
conf,
|
||||
httpx_client=trio_client,
|
||||
|
||||
# TODO: don't break these up and just do internal
|
||||
# conf lookups instead..
|
||||
|
@ -689,10 +671,7 @@ async def get_client() -> Client:
|
|||
secret=conf['secret']
|
||||
)
|
||||
else:
|
||||
client = Client(
|
||||
conf={},
|
||||
httpx_client=trio_client,
|
||||
)
|
||||
client = Client({})
|
||||
|
||||
# at startup, load all symbols, and asset info in
|
||||
# batch requests.
|
||||
|
|
|
@ -612,18 +612,18 @@ async def open_trade_dialog(
|
|||
|
||||
# enter relay loop
|
||||
await handle_order_updates(
|
||||
client=client,
|
||||
ws=ws,
|
||||
ws_stream=stream,
|
||||
ems_stream=ems_stream,
|
||||
apiflows=apiflows,
|
||||
ids=ids,
|
||||
reqids2txids=reqids2txids,
|
||||
acnt=acnt,
|
||||
ledger=ledger,
|
||||
acctid=acctid,
|
||||
acc_name=acc_name,
|
||||
token=token,
|
||||
client,
|
||||
ws,
|
||||
stream,
|
||||
ems_stream,
|
||||
apiflows,
|
||||
ids,
|
||||
reqids2txids,
|
||||
acnt,
|
||||
api_trans,
|
||||
acctid,
|
||||
acc_name,
|
||||
token,
|
||||
)
|
||||
|
||||
|
||||
|
@ -639,8 +639,7 @@ async def handle_order_updates(
|
|||
|
||||
# transaction records which will be updated
|
||||
# on new trade clearing events (aka order "fills")
|
||||
ledger: TransactionLedger,
|
||||
# ledger_trans: dict[str, Transaction],
|
||||
ledger_trans: dict[str, Transaction],
|
||||
acctid: str,
|
||||
acc_name: str,
|
||||
token: str,
|
||||
|
@ -700,8 +699,7 @@ async def handle_order_updates(
|
|||
# if tid not in ledger_trans
|
||||
}
|
||||
for tid, trade in trades.items():
|
||||
# assert tid not in ledger_trans
|
||||
assert tid not in ledger
|
||||
assert tid not in ledger_trans
|
||||
txid = trade['ordertxid']
|
||||
reqid = trade.get('userref')
|
||||
|
||||
|
@ -749,17 +747,11 @@ async def handle_order_updates(
|
|||
client,
|
||||
api_name_set='wsname',
|
||||
)
|
||||
ppmsgs: list[BrokerdPosition] = trades2pps(
|
||||
acnt=acnt,
|
||||
ledger=ledger,
|
||||
acctid=acctid,
|
||||
new_trans=new_trans,
|
||||
ppmsgs = trades2pps(
|
||||
acnt,
|
||||
acctid,
|
||||
new_trans,
|
||||
)
|
||||
# ppmsgs = trades2pps(
|
||||
# acnt,
|
||||
# acctid,
|
||||
# new_trans,
|
||||
# )
|
||||
for pp_msg in ppmsgs:
|
||||
await ems_stream.send(pp_msg)
|
||||
|
||||
|
|
|
@ -16,9 +16,10 @@
|
|||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
Kucoin cex API backend.
|
||||
Kucoin broker backend
|
||||
|
||||
'''
|
||||
|
||||
from contextlib import (
|
||||
asynccontextmanager as acm,
|
||||
aclosing,
|
||||
|
@ -40,8 +41,9 @@ from typing import (
|
|||
import wsproto
|
||||
from uuid import uuid4
|
||||
|
||||
from rapidfuzz import process as fuzzy
|
||||
from trio_typing import TaskStatus
|
||||
import httpx
|
||||
import asks
|
||||
from bidict import bidict
|
||||
import numpy as np
|
||||
import pendulum
|
||||
|
@ -62,7 +64,7 @@ from piker._cacheables import (
|
|||
)
|
||||
from piker.log import get_logger
|
||||
from piker.data.validate import FeedInit
|
||||
from piker.types import Struct # NOTE, this is already a `tractor.msg.Struct`
|
||||
from piker.types import Struct
|
||||
from piker.data import (
|
||||
def_iohlcv_fields,
|
||||
match_from_pairs,
|
||||
|
@ -98,18 +100,9 @@ class KucoinMktPair(Struct, frozen=True):
|
|||
def size_tick(self) -> Decimal:
|
||||
return Decimal(str(self.quoteMinSize))
|
||||
|
||||
callauctionFirstStageStartTime: None|float
|
||||
callauctionIsEnabled: bool
|
||||
callauctionPriceCeiling: float|None
|
||||
callauctionPriceFloor: float|None
|
||||
callauctionSecondStageStartTime: float|None
|
||||
callauctionThirdStageStartTime: float|None
|
||||
|
||||
enableTrading: bool
|
||||
feeCategory: int
|
||||
feeCurrency: str
|
||||
isMarginEnabled: bool
|
||||
makerFeeCoefficient: float
|
||||
market: str
|
||||
minFunds: float
|
||||
name: str
|
||||
|
@ -119,10 +112,7 @@ class KucoinMktPair(Struct, frozen=True):
|
|||
quoteIncrement: float
|
||||
quoteMaxSize: float
|
||||
quoteMinSize: float
|
||||
st: bool
|
||||
symbol: str # our bs_mktid, kucoin's internal id
|
||||
takerFeeCoefficient: float
|
||||
tradingStartTime: float|None
|
||||
|
||||
|
||||
class AccountTrade(Struct, frozen=True):
|
||||
|
@ -223,11 +213,7 @@ def get_config() -> BrokerConfig | None:
|
|||
|
||||
class Client:
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
httpx_client: httpx.AsyncClient,
|
||||
) -> None:
|
||||
self._http: httpx.AsyncClient = httpx_client
|
||||
def __init__(self) -> None:
|
||||
self._config: BrokerConfig | None = get_config()
|
||||
self._pairs: dict[str, KucoinMktPair] = {}
|
||||
self._fqmes2mktids: bidict[str, str] = bidict()
|
||||
|
@ -242,24 +228,18 @@ class Client:
|
|||
|
||||
) -> dict[str, str | bytes]:
|
||||
'''
|
||||
Generate authenticated request headers:
|
||||
|
||||
Generate authenticated request headers
|
||||
https://docs.kucoin.com/#authentication
|
||||
https://www.kucoin.com/docs/basic-info/connection-method/authentication/creating-a-request
|
||||
https://www.kucoin.com/docs/basic-info/connection-method/authentication/signing-a-message
|
||||
|
||||
'''
|
||||
|
||||
if not self._config:
|
||||
raise ValueError(
|
||||
'No config found when trying to send authenticated request'
|
||||
)
|
||||
'No config found when trying to send authenticated request')
|
||||
|
||||
str_to_sign = (
|
||||
str(int(time.time() * 1000))
|
||||
+
|
||||
action
|
||||
+
|
||||
f'/api/{api}/{endpoint.lstrip("/")}'
|
||||
+ action + f'/api/{api}/{endpoint.lstrip("/")}'
|
||||
)
|
||||
|
||||
signature = base64.b64encode(
|
||||
|
@ -270,7 +250,6 @@ class Client:
|
|||
).digest()
|
||||
)
|
||||
|
||||
# TODO: can we cache this between calls?
|
||||
passphrase = base64.b64encode(
|
||||
hmac.new(
|
||||
self._config.key_secret.encode('utf-8'),
|
||||
|
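The `_gen_auth_req_headers()` change above keeps Kucoin's documented v2 request signing: an HMAC-SHA256 over `timestamp + METHOD + path` plus an HMAC'd passphrase. A self-contained sketch is below; header names follow Kucoin's docs and the key/secret layout is an assumption.

```python
import base64
import hashlib
import hmac
import time


def kucoin_auth_headers(
    key_id: str,
    key_secret: str,
    key_passphrase: str,
    action: str,   # 'GET' | 'POST'
    path: str,     # eg. '/api/v2/symbols'
) -> dict[str, str]:
    now_ms = str(int(time.time() * 1000))
    str_to_sign = now_ms + action + path

    sign = base64.b64encode(
        hmac.new(
            key_secret.encode('utf-8'),
            str_to_sign.encode('utf-8'),
            hashlib.sha256,
        ).digest()
    )
    # v2 keys require the passphrase to be HMAC'd as well
    passphrase = base64.b64encode(
        hmac.new(
            key_secret.encode('utf-8'),
            key_passphrase.encode('utf-8'),
            hashlib.sha256,
        ).digest()
    )
    return {
        'KC-API-SIGN': sign.decode(),
        'KC-API-TIMESTAMP': now_ms,
        'KC-API-KEY': key_id,
        'KC-API-PASSPHRASE': passphrase.decode(),
        'KC-API-KEY-VERSION': '2',
    }
```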
@ -292,10 +271,8 @@ class Client:
|
|||
self,
|
||||
action: Literal['POST', 'GET'],
|
||||
endpoint: str,
|
||||
|
||||
api: str = 'v2',
|
||||
headers: dict = {},
|
||||
|
||||
) -> Any:
|
||||
'''
|
||||
Generic request wrapper for Kucoin API
|
||||
|
@ -308,19 +285,14 @@ class Client:
|
|||
api,
|
||||
)
|
||||
|
||||
req_meth: Callable = getattr(
|
||||
self._http,
|
||||
action.lower(),
|
||||
)
|
||||
res = await req_meth(
|
||||
url=f'/{api}/{endpoint}',
|
||||
headers=headers,
|
||||
)
|
||||
json: dict = res.json()
|
||||
if (data := json.get('data')) is not None:
|
||||
return data
|
||||
api_url = f'https://api.kucoin.com/api/{api}/{endpoint}'
|
||||
|
||||
res = await asks.request(action, api_url, headers=headers)
|
||||
|
||||
json = res.json()
|
||||
if 'data' in json:
|
||||
return json['data']
|
||||
else:
|
||||
api_url: str = self._http.base_url
|
||||
log.error(
|
||||
f'Error making request to {api_url} ->\n'
|
||||
f'{pformat(res)}'
|
||||
|
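The new `_request()` above dispatches the HTTP verb via `getattr()` on the shared `httpx` client and unwraps Kucoin's `{'code': ..., 'data': ...}` envelope. A hedged sketch of the same pattern; the error path here is illustrative.

```python
from typing import Any, Callable

import httpx


async def request(
    http: httpx.AsyncClient,  # eg. base_url='https://api.kucoin.com/api'
    action: str,              # 'GET' | 'POST'
    endpoint: str,
    api: str = 'v2',
    headers: dict | None = None,
) -> Any:
    # pick `.get()`/`.post()` by verb name
    req_meth: Callable = getattr(http, action.lower())
    res: httpx.Response = await req_meth(
        url=f'/{api}/{endpoint.lstrip("/")}',
        headers=headers or {},
    )
    json: dict = res.json()
    if (data := json.get('data')) is not None:
        return data

    raise RuntimeError(
        f'Error making request to {http.base_url}: {json!r}'
    )
```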
@ -378,8 +350,8 @@ class Client:
|
|||
currencies: dict[str, Currency] = {}
|
||||
entries: list[dict] = await self._request(
|
||||
'GET',
|
||||
endpoint='currencies',
|
||||
api='v1',
|
||||
endpoint='currencies',
|
||||
)
|
||||
for entry in entries:
|
||||
curr = Currency(**entry).copy()
|
||||
|
@ -395,22 +367,13 @@ class Client:
|
|||
dict[str, KucoinMktPair],
|
||||
bidict[str, KucoinMktPair],
|
||||
]:
|
||||
entries = await self._request(
|
||||
'GET',
|
||||
endpoint='symbols',
|
||||
)
|
||||
entries = await self._request('GET', 'symbols')
|
||||
log.info(f' {len(entries)} Kucoin market pairs fetched')
|
||||
|
||||
pairs: dict[str, KucoinMktPair] = {}
|
||||
fqmes2mktids: bidict[str, str] = bidict()
|
||||
for item in entries:
|
||||
try:
|
||||
pair = pairs[item['name']] = KucoinMktPair(**item)
|
||||
except TypeError as te:
|
||||
raise TypeError(
|
||||
'`KucoinMktPair` and response fields do not match ??\n'
|
||||
f'{KucoinMktPair.fields_diff(item)}\n'
|
||||
) from te
|
||||
fqmes2mktids[
|
||||
item['name'].lower().replace('-', '')
|
||||
] = pair.name
|
||||
|
@ -453,7 +416,8 @@ class Client:
|
|||
await self.get_mkt_pairs()
|
||||
assert self._pairs, '`Client.get_mkt_pairs()` was never called!?'
|
||||
|
||||
matches: dict[str, KucoinMktPair] = match_from_pairs(
|
||||
|
||||
matches: dict[str, Pair] = match_from_pairs(
|
||||
pairs=self._pairs,
|
||||
# query=pattern.upper(),
|
||||
query=pattern.upper(),
|
||||
|
@ -605,18 +569,10 @@ def fqme_to_kucoin_sym(
|
|||
|
||||
@acm
|
||||
async def get_client() -> AsyncGenerator[Client, None]:
|
||||
'''
|
||||
Load an API `Client` preconfigured from user settings
|
||||
client = Client()
|
||||
|
||||
'''
|
||||
async with (
|
||||
httpx.AsyncClient(
|
||||
base_url='https://api.kucoin.com/api',
|
||||
) as trio_client,
|
||||
):
|
||||
client = Client(httpx_client=trio_client)
|
||||
async with trio.open_nursery() as tn:
|
||||
tn.start_soon(client.get_mkt_pairs)
|
||||
async with trio.open_nursery() as n:
|
||||
n.start_soon(client.get_mkt_pairs)
|
||||
await client.get_currencies()
|
||||
|
||||
yield client
|
||||
|
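The kucoin `get_client()` above does the same httpx-lifetime handoff plus a startup preload of market pairs and currencies under a `trio` nursery. A sketch with a stubbed `Client`:

```python
from contextlib import asynccontextmanager as acm
from typing import AsyncIterator

import httpx
import trio


class Client:  # stub standing in for the kucoin backend class
    def __init__(self, httpx_client: httpx.AsyncClient) -> None:
        self._http = httpx_client

    async def get_mkt_pairs(self) -> None:
        ...  # fetch and cache /v2/symbols

    async def get_currencies(self) -> None:
        ...  # fetch and cache /v1/currencies


@acm
async def get_client() -> AsyncIterator[Client]:
    async with httpx.AsyncClient(
        base_url='https://api.kucoin.com/api',
    ) as trio_client:
        client = Client(httpx_client=trio_client)

        # batch-load both caches concurrently at startup
        async with trio.open_nursery() as tn:
            tn.start_soon(client.get_mkt_pairs)
            await client.get_currencies()

        yield client
```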
@ -655,7 +611,7 @@ async def open_ping_task(
|
|||
await trio.sleep((ping_interval - 1000) / 1000)
|
||||
await ws.send_msg({'id': connect_id, 'type': 'ping'})
|
||||
|
||||
log.warning('Starting ping task for kucoin ws connection')
|
||||
log.info('Starting ping task for kucoin ws connection')
|
||||
n.start_soon(ping_server)
|
||||
|
||||
yield
|
||||
|
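`open_ping_task()` above keeps the ws alive by pinging just inside the server-reported interval. A standalone sketch of that keep-alive; the `ws.send_msg()` API is taken from the surrounding diff.

```python
from contextlib import asynccontextmanager as acm

import trio


@acm
async def open_ping_task(
    ws,                       # NoBsWs-like object exposing `.send_msg()`
    ping_interval_ms: float,  # as reported by the exchange
    connect_id: str,
):
    async with trio.open_nursery() as n:

        async def ping_server():
            while True:
                # wake up ~1s before the server would time us out
                await trio.sleep((ping_interval_ms - 1000) / 1000)
                await ws.send_msg({'id': connect_id, 'type': 'ping'})

        n.start_soon(ping_server)
        yield
        n.cancel_scope.cancel()
```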
@ -667,14 +623,9 @@ async def open_ping_task(
|
|||
async def get_mkt_info(
|
||||
fqme: str,
|
||||
|
||||
) -> tuple[
|
||||
MktPair,
|
||||
KucoinMktPair,
|
||||
]:
|
||||
) -> tuple[MktPair, KucoinMktPair]:
|
||||
'''
|
||||
Query for and return both a `piker.accounting.MktPair` and
|
||||
`KucoinMktPair` from provided `fqme: str`
|
||||
(fully-qualified-market-endpoint).
|
||||
Query for and return a `MktPair` and `KucoinMktPair`.
|
||||
|
||||
'''
|
||||
async with open_cached_client('kucoin') as client:
|
||||
|
@ -749,8 +700,6 @@ async def stream_quotes(
|
|||
|
||||
log.info(f'Starting up quote stream(s) for {symbols}')
|
||||
for sym_str in symbols:
|
||||
mkt: MktPair
|
||||
pair: KucoinMktPair
|
||||
mkt, pair = await get_mkt_info(sym_str)
|
||||
init_msgs.append(
|
||||
FeedInit(mkt_info=mkt)
|
||||
|
@ -758,11 +707,7 @@ async def stream_quotes(
|
|||
|
||||
ws: NoBsWs
|
||||
token, ping_interval = await client._get_ws_token()
|
||||
log.info(f'API reported ping_interval: {ping_interval}\n')
|
||||
|
||||
connect_id: str = str(uuid4())
|
||||
typ: str
|
||||
quote: dict
|
||||
connect_id = str(uuid4())
|
||||
async with (
|
||||
open_autorecon_ws(
|
||||
(
|
||||
|
@ -776,37 +721,20 @@ async def stream_quotes(
|
|||
),
|
||||
) as ws,
|
||||
open_ping_task(ws, ping_interval, connect_id),
|
||||
aclosing(
|
||||
iter_normed_quotes(
|
||||
ws, sym_str
|
||||
)
|
||||
) as iter_quotes,
|
||||
aclosing(stream_messages(ws, sym_str)) as msg_gen,
|
||||
):
|
||||
typ, quote = await anext(iter_quotes)
|
||||
typ, quote = await anext(msg_gen)
|
||||
|
||||
while typ != 'trade':
|
||||
# take care to not unblock here until we get a real
|
||||
# trade quote?
|
||||
# ^TODO, remove this right?
|
||||
# -[ ] what often blocks chart boot/new-feed switching
|
||||
# since we're waiting for a live quote instead of just
|
||||
# loading history afap..
|
||||
# |_ XXX, not sure if we require a bit of rework to core
|
||||
# feed init logic or if backends just gotta be
|
||||
# changed up.. feel like there was some causality
|
||||
# dilema prolly only seen with IB too..
|
||||
# while typ != 'trade':
|
||||
# typ, quote = await anext(iter_quotes)
|
||||
# trade quote
|
||||
typ, quote = await anext(msg_gen)
|
||||
|
||||
task_status.started((init_msgs, quote))
|
||||
feed_is_live.set()
|
||||
|
||||
# XXX NOTE, DO NOT include the `.<backend>` suffix!
|
||||
# OW the sampling loop will not broadcast correctly..
|
||||
# since `bus._subscribers.setdefault(bs_fqme, set())`
|
||||
# is used inside `.data.open_feed_bus()` !!!
|
||||
topic: str = mkt.bs_fqme
|
||||
async for typ, quote in iter_quotes:
|
||||
await send_chan.send({topic: quote})
|
||||
async for typ, msg in msg_gen:
|
||||
await send_chan.send({sym_str: msg})
|
||||
|
||||
|
||||
@acm
|
||||
|
@ -861,7 +789,7 @@ async def subscribe(
|
|||
)
|
||||
|
||||
|
||||
async def iter_normed_quotes(
|
||||
async def stream_messages(
|
||||
ws: NoBsWs,
|
||||
sym: str,
|
||||
|
||||
|
@ -892,9 +820,6 @@ async def iter_normed_quotes(
|
|||
|
||||
yield 'trade', {
|
||||
'symbol': sym,
|
||||
# TODO, is 'last' even used elsewhere/a-good
|
||||
# semantic? can't we just read the ticks with our
|
||||
# .data.ticktools.frame_ticks()`/
|
||||
'last': trade_data.price,
|
||||
'brokerd_ts': last_trade_ts,
|
||||
'ticks': [
|
||||
|
@ -987,7 +912,7 @@ async def open_history_client(
|
|||
if end_dt is None:
|
||||
inow = round(time.time())
|
||||
|
||||
log.debug(
|
||||
print(
|
||||
f'difference in time between load and processing'
|
||||
f'{inow - times[-1]}'
|
||||
)
|
||||
|
|
|
@ -1,49 +0,0 @@
|
|||
piker.clearing
______________
trade execution-n-control subsys for both live and paper trading as
well as algo-trading manual override/interaction across any backend
broker and data provider.

avail UIs
*********

order ctl
---------
the `piker.clearing` subsys is exposed mainly through
the `piker chart` GUI as a "chart trader" style UX and
is automatically enabled whenever a chart is opened.

.. ^TODO, more prose here!

the "manual" order control features are exposed via the
`piker.ui.order_mode` API and can pretty much always be
used (at least) in simulated-trading mode, aka "paper"-mode, and
the micro-manual is as follows:

``order_mode`` (
    edge triggered activation by any of the following keys,
    ``mouse-click`` on y-level to submit at that price
):

- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert


``search_mode`` (
    ``ctl-l`` or ``ctl-space`` to open,
    ``ctl-c`` or ``ctl-space`` to close
) :

- begin typing to have symbol search automatically lookup
  symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation


position (pp) mgmt
------------------
you can also configure your position allocation limits from the
sidepane.

.. ^TODO, explain and provide tut once more refined!
|
|
@ -913,17 +913,8 @@ async def translate_and_relay_brokerd_events(
|
|||
}:
|
||||
if (
|
||||
not oid
|
||||
# try to lookup any order dialog by
|
||||
# brokerd-side id..
|
||||
and not (
|
||||
oid := book._ems2brokerd_ids.inverse.get(reqid)
|
||||
)
|
||||
):
|
||||
log.warning(
|
||||
f'Rxed unusable error-msg:\n'
|
||||
f'{brokerd_msg}'
|
||||
)
|
||||
continue
|
||||
oid: str = book._ems2brokerd_ids.inverse[reqid]
|
||||
|
||||
msg = BrokerdError(**brokerd_msg)
|
||||
|
||||
|
|
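The EMS error-relay change above falls back to the inverse side of the `bidict` mapping EMS order ids to brokerd request ids when a broker error only carries `reqid`. A tiny sketch of that lookup:

```python
from bidict import bidict

# ems oid -> brokerd reqid, as kept on the order book
_ems2brokerd_ids: bidict[str, int] = bidict({'oid-abc': 42})


def lookup_oid(oid: str | None, reqid: int) -> str | None:
    if (
        not oid
        # try to lookup any order dialog by brokerd-side id..
        and not (oid := _ems2brokerd_ids.inverse.get(reqid))
    ):
        # unusable error-msg: no dialog known on either side
        return None
    return oid


assert lookup_oid(None, 42) == 'oid-abc'
assert lookup_oid(None, 99) is None
```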
|
@ -26,7 +26,6 @@ from contextlib import asynccontextmanager as acm
|
|||
from datetime import datetime
|
||||
from operator import itemgetter
|
||||
import itertools
|
||||
from pprint import pformat
|
||||
import time
|
||||
from typing import (
|
||||
Callable,
|
||||
|
@ -40,7 +39,6 @@ import trio
|
|||
import tractor
|
||||
|
||||
from piker.brokers import get_brokermod
|
||||
from piker.service import find_service
|
||||
from piker.accounting import (
|
||||
Account,
|
||||
MktPair,
|
||||
|
@ -698,12 +696,7 @@ async def open_trade_dialog(
|
|||
# sanity check all the mkt infos
|
||||
for fqme, flume in feed.flumes.items():
|
||||
mkt: MktPair = symcache.mktmaps.get(fqme) or mkt_by_fqme[fqme]
|
||||
if mkt != flume.mkt:
|
||||
diff: tuple = mkt - flume.mkt
|
||||
log.warning(
|
||||
'MktPair sig mismatch?\n'
|
||||
f'{pformat(diff)}'
|
||||
)
|
||||
assert mkt == flume.mkt
|
||||
|
||||
get_cost: Callable = getattr(
|
||||
brokermod,
|
||||
|
@ -761,7 +754,7 @@ async def open_paperboi(
|
|||
service_name = f'paperboi.{broker}'
|
||||
|
||||
async with (
|
||||
find_service(service_name) as portal,
|
||||
tractor.find_actor(service_name) as portal,
|
||||
tractor.open_nursery() as an,
|
||||
):
|
||||
# NOTE: only spawn if no paperboi already is up since we likely
|
||||
|
@ -784,10 +777,8 @@ async def open_paperboi(
|
|||
) as (ctx, first):
|
||||
yield ctx, first
|
||||
|
||||
# ALWAYS tear down connection AND any newly spawned
|
||||
# paperboi actor on exit!
|
||||
# tear down connection and any spawned actor on exit
|
||||
await ctx.cancel()
|
||||
|
||||
if we_spawned:
|
||||
await portal.cancel_actor()
|
||||
|
||||
|
|
|
@ -1,33 +1,30 @@
|
|||
# piker: trading gear for hackers
|
||||
# Copyright (C) 2018-present Tyler Goodlet
|
||||
# (in stewardship for pikers, everywhere.)
|
||||
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
|
||||
|
||||
# This program is free software: you can redistribute it and/or
|
||||
# modify it under the terms of the GNU Affero General Public
|
||||
# License as published by the Free Software Foundation, either
|
||||
# version 3 of the License, or (at your option) any later version.
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
# Affero General Public License for more details.
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public
|
||||
# License along with this program. If not, see
|
||||
# <https://www.gnu.org/licenses/>.
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
CLI commons.
|
||||
|
||||
'''
|
||||
import os
|
||||
# from contextlib import AsyncExitStack
|
||||
from contextlib import AsyncExitStack
|
||||
from types import ModuleType
|
||||
|
||||
import click
|
||||
import trio
|
||||
import tractor
|
||||
from tractor._multiaddr import parse_maddr
|
||||
|
||||
from ..log import (
|
||||
get_console_log,
|
||||
|
@ -45,97 +42,35 @@ from .. import config
|
|||
log = get_logger('piker.cli')
|
||||
|
||||
|
||||
def load_trans_eps(
|
||||
network: dict | None = None,
|
||||
maddrs: list[tuple] | None = None,
|
||||
|
||||
) -> dict[str, dict[str, dict]]:
|
||||
|
||||
# transport-oriented endpoint multi-addresses
|
||||
eps: dict[
|
||||
str, # service name, eg. `pikerd`, `emsd`..
|
||||
|
||||
# libp2p style multi-addresses parsed into prot layers
|
||||
list[dict[str, str | int]]
|
||||
] = {}
|
||||
|
||||
if (
|
||||
network
|
||||
and not maddrs
|
||||
):
|
||||
# load network section and (attempt to) connect all endpoints
|
||||
# which are reachable B)
|
||||
for key, maddrs in network.items():
|
||||
match key:
|
||||
|
||||
# TODO: resolve table across multiple discov
|
||||
# prots Bo
|
||||
case 'resolv':
|
||||
pass
|
||||
|
||||
case 'pikerd':
|
||||
dname: str = key
|
||||
for maddr in maddrs:
|
||||
layers: dict = parse_maddr(maddr)
|
||||
eps.setdefault(
|
||||
dname,
|
||||
[],
|
||||
).append(layers)
|
||||
|
||||
elif maddrs:
|
||||
# presume user is manually specifying the root actor ep.
|
||||
eps['pikerd'] = [parse_maddr(maddr)]
|
||||
|
||||
return eps
|
||||
|
||||
|
||||
@click.command()
|
||||
@click.option('--loglevel', '-l', default='warning', help='Logging level')
|
||||
@click.option('--tl', is_flag=True, help='Enable tractor logging')
|
||||
@click.option('--pdb', is_flag=True, help='Enable tractor debug mode')
|
||||
@click.option('--host', '-h', default=None, help='Host addr to bind')
|
||||
@click.option('--port', '-p', default=None, help='Port number to bind')
|
||||
@click.option(
|
||||
'--loglevel',
|
||||
'-l',
|
||||
default='warning',
|
||||
help='Logging level',
|
||||
)
|
||||
@click.option(
|
||||
'--tl',
|
||||
'--tsdb',
|
||||
is_flag=True,
|
||||
help='Enable tractor-runtime logs',
|
||||
help='Enable local ``marketstore`` instance'
|
||||
)
|
||||
@click.option(
|
||||
'--pdb',
|
||||
'--es',
|
||||
is_flag=True,
|
||||
help='Enable tractor debug mode',
|
||||
help='Enable local ``elasticsearch`` instance'
|
||||
)
|
||||
@click.option(
|
||||
'--maddr',
|
||||
'-m',
|
||||
default=None,
|
||||
help='Multiaddrs to bind or contact',
|
||||
)
|
||||
# @click.option(
|
||||
# '--tsdb',
|
||||
# is_flag=True,
|
||||
# help='Enable local ``marketstore`` instance'
|
||||
# )
|
||||
# @click.option(
|
||||
# '--es',
|
||||
# is_flag=True,
|
||||
# help='Enable local ``elasticsearch`` instance'
|
||||
# )
|
||||
def pikerd(
|
||||
maddr: list[str] | None,
|
||||
loglevel: str,
|
||||
host: str,
|
||||
port: int,
|
||||
tl: bool,
|
||||
pdb: bool,
|
||||
# tsdb: bool,
|
||||
# es: bool,
|
||||
tsdb: bool,
|
||||
es: bool,
|
||||
):
|
||||
'''
|
||||
Spawn the piker broker-daemon.
|
||||
|
||||
'''
|
||||
from tractor.devx import maybe_open_crash_handler
|
||||
with maybe_open_crash_handler(pdb=pdb):
|
||||
log = get_console_log(loglevel, name='cli')
|
||||
|
||||
if pdb:
|
||||
|
@ -147,32 +82,12 @@ def pikerd(
|
|||
"\n"
|
||||
))
|
||||
|
||||
# service-actor registry endpoint socket-address set
|
||||
regaddrs: list[tuple[str, int]] = []
|
||||
|
||||
conf, _ = config.load(
|
||||
conf_name='conf',
|
||||
reg_addr: None | tuple[str, int] = None
|
||||
if host or port:
|
||||
reg_addr = (
|
||||
host or _default_registry_host,
|
||||
int(port) or _default_registry_port,
|
||||
)
|
||||
network: dict = conf.get('network')
|
||||
if (
|
||||
network is None
|
||||
and not maddr
|
||||
):
|
||||
regaddrs = [(
|
||||
_default_registry_host,
|
||||
_default_registry_port,
|
||||
)]
|
||||
|
||||
else:
|
||||
eps: dict = load_trans_eps(
|
||||
network,
|
||||
maddr,
|
||||
)
|
||||
for layers in eps['pikerd']:
|
||||
regaddrs.append((
|
||||
layers['ipv4']['addr'],
|
||||
layers['tcp']['port'],
|
||||
))
|
||||
|
||||
from .. import service
|
||||
|
||||
|
@ -181,35 +96,31 @@ def pikerd(
|
|||
|
||||
async with (
|
||||
service.open_pikerd(
|
||||
registry_addrs=regaddrs,
|
||||
loglevel=loglevel,
|
||||
debug_mode=pdb,
|
||||
registry_addr=reg_addr,
|
||||
|
||||
) as service_mngr, # normally delivers a ``Services`` handle
|
||||
|
||||
# AsyncExitStack() as stack,
|
||||
AsyncExitStack() as stack,
|
||||
):
|
||||
# TODO: spawn all other sub-actor daemons according to
|
||||
# multiaddress endpoint spec defined by user config
|
||||
assert service_mngr
|
||||
if tsdb:
|
||||
dname, conf = await stack.enter_async_context(
|
||||
service.marketstore.start_ahab_daemon(
|
||||
service_mngr,
|
||||
loglevel=loglevel,
|
||||
)
|
||||
)
|
||||
log.info(f'TSDB `{dname}` up with conf:\n{conf}')
|
||||
|
||||
# if tsdb:
|
||||
# dname, conf = await stack.enter_async_context(
|
||||
# service.marketstore.start_ahab_daemon(
|
||||
# service_mngr,
|
||||
# loglevel=loglevel,
|
||||
# )
|
||||
# )
|
||||
# log.info(f'TSDB `{dname}` up with conf:\n{conf}')
|
||||
|
||||
# if es:
|
||||
# dname, conf = await stack.enter_async_context(
|
||||
# service.elastic.start_ahab_daemon(
|
||||
# service_mngr,
|
||||
# loglevel=loglevel,
|
||||
# )
|
||||
# )
|
||||
# log.info(f'DB `{dname}` up with conf:\n{conf}')
|
||||
if es:
|
||||
dname, conf = await stack.enter_async_context(
|
||||
service.elastic.start_ahab_daemon(
|
||||
service_mngr,
|
||||
loglevel=loglevel,
|
||||
)
|
||||
)
|
||||
log.info(f'DB `{dname}` up with conf:\n{conf}')
|
||||
|
||||
await trio.sleep_forever()
|
||||
|
||||
|
@ -226,24 +137,8 @@ def pikerd(
|
|||
@click.option('--loglevel', '-l', default='warning', help='Logging level')
|
||||
@click.option('--tl', is_flag=True, help='Enable tractor logging')
|
||||
@click.option('--configdir', '-c', help='Configuration directory')
|
||||
@click.option(
|
||||
'--pdb',
|
||||
is_flag=True,
|
||||
help='Enable runtime debug mode ',
|
||||
)
|
||||
@click.option(
|
||||
'--maddr',
|
||||
'-m',
|
||||
default=None,
|
||||
multiple=True,
|
||||
help='Multiaddr to bind',
|
||||
)
|
||||
@click.option(
|
||||
'--regaddr',
|
||||
'-r',
|
||||
default=None,
|
||||
help='Registrar addr to contact',
|
||||
)
|
||||
@click.option('--host', '-h', default=None, help='Host addr to bind')
|
||||
@click.option('--port', '-p', default=None, help='Port number to bind')
|
||||
@click.pass_context
|
||||
def cli(
|
||||
ctx: click.Context,
|
||||
|
@ -251,11 +146,8 @@ def cli(
|
|||
loglevel: str,
|
||||
tl: bool,
|
||||
configdir: str,
|
||||
pdb: bool,
|
||||
|
||||
# TODO: make these list[str] with multiple -m maddr0 -m maddr1
|
||||
maddr: list[str],
|
||||
regaddr: str,
|
||||
host: str,
|
||||
port: int,
|
||||
|
||||
) -> None:
|
||||
if configdir is not None:
|
||||
|
@ -276,20 +168,12 @@ def cli(
|
|||
}
|
||||
assert brokermods
|
||||
|
||||
# TODO: load endpoints from `conf::[network].pikerd`
|
||||
# - pikerd vs. regd, separate registry daemon?
|
||||
# - expose datad vs. brokerd?
|
||||
# - bind emsd with certain perms on public iface?
|
||||
regaddrs: list[tuple[str, int]] = regaddr or [(
|
||||
_default_registry_host,
|
||||
_default_registry_port,
|
||||
)]
|
||||
|
||||
# TODO: factor [network] section parsing out from pikerd
|
||||
# above and call it here as well.
|
||||
# if maddr:
|
||||
# for addr in maddr:
|
||||
# layers: dict = parse_maddr(addr)
|
||||
reg_addr: None | tuple[str, int] = None
|
||||
if host or port:
|
||||
reg_addr = (
|
||||
host or _default_registry_host,
|
||||
int(port) or _default_registry_port,
|
||||
)
|
||||
|
||||
ctx.obj.update({
|
||||
'brokers': brokers,
|
||||
|
@ -299,12 +183,7 @@ def cli(
|
|||
'log': get_console_log(loglevel),
|
||||
'confdir': config._config_dir,
|
||||
'wl_path': config._watchlists_data_path,
|
||||
'registry_addrs': regaddrs,
|
||||
'pdb': pdb, # debug mode flag
|
||||
|
||||
# TODO: endpoint parsing, pinging and binding
|
||||
# on no existing server.
|
||||
# 'maddrs': maddr,
|
||||
'registry_addr': reg_addr,
|
||||
})
|
||||
|
||||
# allow enabling same loglevel in ``tractor`` machinery
|
||||
|
@ -351,7 +230,7 @@ def services(config, tl, ports):
|
|||
|
||||
|
||||
def _load_clis() -> None:
|
||||
# from ..service import elastic # noqa
|
||||
from ..service import elastic # noqa
|
||||
from ..brokers import cli # noqa
|
||||
from ..ui import cli # noqa
|
||||
from ..watchlists import cli # noqa
|
||||
|
|
|
@ -104,15 +104,14 @@ def get_app_dir(
|
|||
# `tractor`) with the testing dir and check for it whenever we
|
||||
# detect `pytest` is being used (which it isn't under normal
|
||||
# operation).
|
||||
# if "pytest" in sys.modules:
|
||||
# import tractor
|
||||
# actor = tractor.current_actor(err_on_no_runtime=False)
|
||||
# if actor: # runtime is up
|
||||
# rvs = tractor._state._runtime_vars
|
||||
# import pdbp; pdbp.set_trace()
|
||||
# testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
|
||||
# assert testdirpath.exists(), 'piker test harness might be borked!?'
|
||||
# app_name = str(testdirpath)
|
||||
if "pytest" in sys.modules:
|
||||
import tractor
|
||||
actor = tractor.current_actor(err_on_no_runtime=False)
|
||||
if actor: # runtime is up
|
||||
rvs = tractor._state._runtime_vars
|
||||
testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
|
||||
assert testdirpath.exists(), 'piker test harness might be borked!?'
|
||||
app_name = str(testdirpath)
|
||||
|
||||
if platform.system() == 'Windows':
|
||||
key = "APPDATA" if roaming else "LOCALAPPDATA"
|
||||
|
@ -135,19 +134,14 @@ def get_app_dir(
|
|||
|
||||
_click_config_dir: Path = Path(get_app_dir('piker'))
|
||||
_config_dir: Path = _click_config_dir
|
||||
_parent_user: str = os.environ.get('SUDO_USER')
|
||||
|
||||
# NOTE: when using `sudo` we attempt to determine the non-root user
|
||||
# and still use their normal config dir.
|
||||
if (
|
||||
(_parent_user := os.environ.get('SUDO_USER'))
|
||||
and
|
||||
_parent_user != 'root'
|
||||
):
|
||||
if _parent_user:
|
||||
non_root_user_dir = Path(
|
||||
os.path.expanduser(f'~{_parent_user}')
|
||||
)
|
||||
root: str = 'root'
|
||||
_ccds: str = str(_click_config_dir) # click config dir as string
|
||||
_ccds: str = str(_click_config_dir) # click config dir string
|
||||
i_tail: int = int(_ccds.rfind(root) + len(root))
|
||||
_config_dir = (
|
||||
non_root_user_dir
|
||||
|
@ -252,8 +246,7 @@ def repodir() -> Path:
|
|||
|
||||
|
||||
def load(
|
||||
# NOTE: always appended with .toml suffix
|
||||
conf_name: str = 'conf',
|
||||
conf_name: str = 'brokers', # appended with .toml suffix
|
||||
path: Path | None = None,
|
||||
|
||||
decode: Callable[
|
||||
|
@ -364,9 +357,7 @@ def load_accounts(
|
|||
|
||||
) -> bidict[str, str | None]:
|
||||
|
||||
conf, path = load(
|
||||
conf_name='brokers',
|
||||
)
|
||||
conf, path = load()
|
||||
accounts = bidict()
|
||||
for provider_name, section in conf.items():
|
||||
accounts_section = section.get('accounts')
|
||||
|
|
|
@ -56,7 +56,6 @@ __all__: list[str] = [
|
|||
'ShmArray',
|
||||
'iterticks',
|
||||
'maybe_open_shm_array',
|
||||
'match_from_pairs',
|
||||
'attach_shm_array',
|
||||
'open_shm_array',
|
||||
'get_shm_token',
|
||||
|
|
|
@ -41,11 +41,6 @@ if TYPE_CHECKING:
|
|||
)
|
||||
from piker.toolz import Profiler
|
||||
|
||||
# default gap between bars: "bar gap multiplier"
|
||||
# - 0.5 is no overlap between OC arms,
|
||||
# - 1.0 is full overlap on each neighbor sample
|
||||
BGM: float = 0.16
|
||||
|
||||
|
||||
class IncrementalFormatter(msgspec.Struct):
|
||||
'''
|
||||
|
@ -518,7 +513,6 @@ class IncrementalFormatter(msgspec.Struct):
|
|||
|
||||
|
||||
class OHLCBarsFmtr(IncrementalFormatter):
|
||||
|
||||
x_offset: np.ndarray = np.array([
|
||||
-0.5,
|
||||
0,
|
||||
|
@ -610,9 +604,8 @@ class OHLCBarsFmtr(IncrementalFormatter):
|
|||
vr: tuple[int, int],
|
||||
|
||||
start: int = 0, # XXX: do we need this?
|
||||
|
||||
# 0.5 is no overlap between arms, 1.0 is full overlap
|
||||
gap: float = BGM,
|
||||
w: float = 0.16,
|
||||
|
||||
) -> tuple[
|
||||
np.ndarray,
|
||||
|
@ -629,7 +622,7 @@ class OHLCBarsFmtr(IncrementalFormatter):
|
|||
array[:-1],
|
||||
start,
|
||||
bar_w=self.index_step_size,
|
||||
bar_gap=gap * self.index_step_size,
|
||||
bar_gap=w * self.index_step_size,
|
||||
|
||||
# XXX: don't ask, due to a ``numba`` bug..
|
||||
use_time_index=(self.index_field == 'time'),
|
||||
|
|
|
@ -33,11 +33,6 @@ from typing import (
|
|||
)
|
||||
|
||||
import tractor
|
||||
from tractor import (
|
||||
Context,
|
||||
MsgStream,
|
||||
Channel,
|
||||
)
|
||||
from tractor.trionics import (
|
||||
maybe_open_nursery,
|
||||
)
|
||||
|
@ -58,10 +53,7 @@ if TYPE_CHECKING:
|
|||
from ._sharedmem import (
|
||||
ShmArray,
|
||||
)
|
||||
from .feed import (
|
||||
_FeedsBus,
|
||||
Sub,
|
||||
)
|
||||
from .feed import _FeedsBus
|
||||
|
||||
|
||||
# highest frequency sample step is 1 second by default, though in
|
||||
|
@ -102,7 +94,7 @@ class Sampler:
|
|||
float,
|
||||
list[
|
||||
float,
|
||||
set[MsgStream]
|
||||
set[tractor.MsgStream]
|
||||
],
|
||||
] = defaultdict(
|
||||
lambda: [
|
||||
|
@ -266,8 +258,8 @@ class Sampler:
|
|||
f'broadcasting {period_s} -> {last_ts}\n'
|
||||
# f'consumers: {subs}'
|
||||
)
|
||||
borked: set[MsgStream] = set()
|
||||
sent: set[MsgStream] = set()
|
||||
borked: set[tractor.MsgStream] = set()
|
||||
sent: set[tractor.MsgStream] = set()
|
||||
while True:
|
||||
try:
|
||||
for stream in (subs - sent):
|
||||
|
@ -322,7 +314,7 @@ class Sampler:
|
|||
|
||||
@tractor.context
|
||||
async def register_with_sampler(
|
||||
ctx: Context,
|
||||
ctx: tractor.Context,
|
||||
period_s: float,
|
||||
shms_by_period: dict[float, dict] | None = None,
|
||||
|
||||
|
@ -657,7 +649,12 @@ async def sample_and_broadcast(
|
|||
# eventually block this producer end of the feed and
|
||||
# thus other consumers still attached.
|
||||
sub_key: str = broker_symbol.lower()
|
||||
subs: set[Sub] = bus.get_subs(sub_key)
|
||||
subs: list[
|
||||
tuple[
|
||||
tractor.MsgStream | trio.MemorySendChannel,
|
||||
float | None, # tick throttle in Hz
|
||||
]
|
||||
] = bus.get_subs(sub_key)
|
||||
|
||||
# NOTE: by default the broker backend doesn't append
|
||||
# it's own "name" into the fqme schema (but maybe it
|
||||
|
@ -666,40 +663,34 @@ async def sample_and_broadcast(
|
|||
fqme: str = f'{broker_symbol}.{brokername}'
|
||||
lags: int = 0
|
||||
|
||||
# XXX TODO XXX: speed up this loop in an AOT compiled
|
||||
# lang (like rust or nim or zig)!
|
||||
# AND/OR instead of doing a fan out to TCP sockets
|
||||
# here, we add a shm-style tick queue which readers can
|
||||
# pull from instead of placing the burden of broadcast
|
||||
# on solely on this `brokerd` actor. see issues:
|
||||
# TODO: speed up this loop in an AOT compiled lang (like
|
||||
# rust or nim or zig) and/or instead of doing a fan out to
|
||||
# TCP sockets here, we add a shm-style tick queue which
|
||||
# readers can pull from instead of placing the burden of
|
||||
# broadcast on solely on this `brokerd` actor. see issues:
|
||||
# - https://github.com/pikers/piker/issues/98
|
||||
# - https://github.com/pikers/piker/issues/107
|
||||
|
||||
# for (stream, tick_throttle) in subs.copy():
|
||||
for sub in subs.copy():
|
||||
ipc: MsgStream = sub.ipc
|
||||
throttle: float = sub.throttle_rate
|
||||
for (stream, tick_throttle) in subs.copy():
|
||||
try:
|
||||
with trio.move_on_after(0.2) as cs:
|
||||
if throttle:
|
||||
send_chan: trio.abc.SendChannel = sub.send_chan
|
||||
|
||||
if tick_throttle:
|
||||
# this is a send mem chan that likely
|
||||
# pushes to the ``uniform_rate_send()`` below.
|
||||
try:
|
||||
send_chan.send_nowait(
|
||||
stream.send_nowait(
|
||||
(fqme, quote)
|
||||
)
|
||||
except trio.WouldBlock:
|
||||
overruns[sub_key] += 1
|
||||
ctx: Context = ipc._ctx
|
||||
chan: Channel = ctx.chan
|
||||
ctx = stream._ctx
|
||||
chan = ctx.chan
|
||||
|
||||
log.warning(
|
||||
f'Feed OVERRUN {sub_key}'
|
||||
'@{bus.brokername} -> \n'
|
||||
f'feed @ {chan.uid}\n'
|
||||
f'throttle = {throttle} Hz'
|
||||
f'throttle = {tick_throttle} Hz'
|
||||
)
|
||||
|
||||
if overruns[sub_key] > 6:
|
||||
|
@ -716,10 +707,10 @@ async def sample_and_broadcast(
|
|||
f'{sub_key}:'
|
||||
f'{ctx.cid}@{chan.uid}'
|
||||
)
|
||||
await ipc.aclose()
|
||||
await stream.aclose()
|
||||
raise trio.BrokenResourceError
|
||||
else:
|
||||
await ipc.send(
|
||||
await stream.send(
|
||||
{fqme: quote}
|
||||
)
|
||||
|
||||
|
@ -733,16 +724,16 @@ async def sample_and_broadcast(
|
|||
trio.ClosedResourceError,
|
||||
trio.EndOfChannel,
|
||||
):
|
||||
ctx: Context = ipc._ctx
|
||||
chan: Channel = ctx.chan
|
||||
ctx = stream._ctx
|
||||
chan = ctx.chan
|
||||
if ctx:
|
||||
log.warning(
|
||||
'Dropped `brokerd`-quotes-feed connection:\n'
|
||||
f'{broker_symbol}:'
|
||||
f'{ctx.cid}@{chan.uid}'
|
||||
)
|
||||
if sub.throttle_rate:
|
||||
assert ipc._closed
|
||||
if tick_throttle:
|
||||
assert stream._closed
|
||||
|
||||
# XXX: do we need to deregister here
|
||||
# if it's done in the feed bus code?
|
||||
|
@ -751,7 +742,7 @@ async def sample_and_broadcast(
|
|||
# since there seems to be some kinda race..
|
||||
bus.remove_subs(
|
||||
sub_key,
|
||||
{sub},
|
||||
{(stream, tick_throttle)},
|
||||
)
|
||||
|
||||
|
||||
|
@ -759,7 +750,7 @@ async def uniform_rate_send(
|
|||
|
||||
rate: float,
|
||||
quote_stream: trio.abc.ReceiveChannel,
|
||||
stream: MsgStream,
|
||||
stream: tractor.MsgStream,
|
||||
|
||||
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
|
||||
|
||||
|
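`sample_and_broadcast()` and `uniform_rate_send()` above split quote relay into a direct IPC send for un-throttled subscribers and a bounded memory channel (drained at the throttle rate) for the rest, dropping a subscriber after repeated overruns. A condensed sketch of that per-subscriber decision; the `Sub` attribute names come from the diff and the drop threshold mirrors the `> 6` check above.

```python
from collections import Counter

import trio

overruns: Counter[str] = Counter()


async def relay_quote(
    sub,            # Sub-like: .ipc, .send_chan, .throttle_rate
    sub_key: str,
    fqme: str,
    quote: dict,
) -> bool:
    '''
    Relay one quote to one subscriber; return `False` if the
    subscriber should be dropped.

    '''
    if sub.throttle_rate:
        try:
            # push into the throttle task's mem chan which feeds
            # `uniform_rate_send()`
            sub.send_chan.send_nowait((fqme, quote))
        except trio.WouldBlock:
            overruns[sub_key] += 1
            if overruns[sub_key] > 6:
                await sub.ipc.aclose()
                return False
    else:
        # un-throttled: send straight over the IPC msg stream
        await sub.ipc.send({fqme: quote})

    return True
```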
|
|
@ -31,8 +31,6 @@ from pathlib import Path
|
|||
from pprint import pformat
|
||||
from typing import (
|
||||
Any,
|
||||
Sequence,
|
||||
Hashable,
|
||||
TYPE_CHECKING,
|
||||
)
|
||||
from types import ModuleType
|
||||
|
@ -130,8 +128,8 @@ class SymbologyCache(Struct):
|
|||
- `.get_mkt_pairs()`: returning a table of pair-`Struct`
|
||||
types, custom defined by the particular backend.
|
||||
|
||||
AND, the required `.get_mkt_info()` module-level endpoint
|
||||
which maps `fqme: str` -> `MktPair`s.
|
||||
AND, the required `.get_mkt_info()` module-level endpoint which
|
||||
maps `fqme: str` -> `MktPair`s.
|
||||
|
||||
These tables are then used to fill out the `.assets`, `.pairs` and
|
||||
`.mktmaps` tables on this cache instance, respectively.
|
||||
|
@ -502,7 +500,7 @@ def match_from_pairs(
|
|||
)
|
||||
|
||||
# pop and repack pairs in output dict
|
||||
matched_pairs: dict[str, Struct] = {}
|
||||
matched_pairs: dict[str, Pair] = {}
|
||||
for item in matches:
|
||||
pair_key: str = item[0]
|
||||
matched_pairs[pair_key] = pairs[pair_key]
|
||||
|
|
|
@ -0,0 +1,336 @@
|
|||
# piker: trading gear for hackers
|
||||
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
|
||||
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
Financial time series processing utilities usually
|
||||
pertaining to OHLCV style sampled data.
|
||||
|
||||
Routines are generally implemented in either ``numpy`` or
|
||||
``polars`` B)
|
||||
|
||||
'''
|
||||
from __future__ import annotations
|
||||
from typing import Literal
|
||||
from math import (
|
||||
ceil,
|
||||
floor,
|
||||
)
|
||||
|
||||
import numpy as np
|
||||
import polars as pl
|
||||
|
||||
from ._sharedmem import ShmArray
|
||||
from ..toolz.profile import (
|
||||
Profiler,
|
||||
pg_profile_enabled,
|
||||
ms_slower_then,
|
||||
)
|
||||
|
||||
|
||||
def slice_from_time(
|
||||
arr: np.ndarray,
|
||||
start_t: float,
|
||||
stop_t: float,
|
||||
step: float, # sampler period step-diff
|
||||
|
||||
) -> slice:
|
||||
'''
|
||||
Calculate array indices mapped from a time range and return them in
|
||||
a slice.
|
||||
|
||||
Given an input array with an epoch `'time'` series entry, calculate
|
||||
the indices which span the time range and return in a slice. Presume
|
||||
each `'time'` step increment is uniform and when the time stamp
|
||||
series contains gaps (the uniform presumption is untrue) use
|
||||
``np.searchsorted()`` binary search to look up the appropriate
|
||||
index.
|
||||
|
||||
'''
|
||||
profiler = Profiler(
|
||||
msg='slice_from_time()',
|
||||
disabled=not pg_profile_enabled(),
|
||||
ms_threshold=ms_slower_then,
|
||||
)
|
||||
|
||||
times = arr['time']
|
||||
t_first = floor(times[0])
|
||||
t_last = ceil(times[-1])
|
||||
|
||||
# the greatest index we can return which slices to the
|
||||
# end of the input array.
|
||||
read_i_max = arr.shape[0]
|
||||
|
||||
# compute (presumed) uniform-time-step index offsets
|
||||
i_start_t = floor(start_t)
|
||||
read_i_start = floor(((i_start_t - t_first) // step)) - 1
|
||||
|
||||
i_stop_t = ceil(stop_t)
|
||||
|
||||
# XXX: edge case -> always set stop index to last in array whenever
|
||||
# the input stop time is detected to be greater then the equiv time
|
||||
# stamp at that last entry.
|
||||
if i_stop_t >= t_last:
|
||||
read_i_stop = read_i_max
|
||||
else:
|
||||
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
|
||||
|
||||
# always clip outputs to array support
|
||||
# for read start:
|
||||
# - never allow a start < the 0 index
|
||||
# - never allow an end index > the read array len
|
||||
read_i_start = min(
|
||||
max(0, read_i_start),
|
||||
read_i_max - 1,
|
||||
)
|
||||
read_i_stop = max(
|
||||
0,
|
||||
min(read_i_stop, read_i_max),
|
||||
)
|
||||
|
||||
# check for larger-then-latest calculated index for given start
|
||||
# time, in which case we do a binary search for the correct index.
|
||||
# NOTE: this is usually the result of a time series with time gaps
|
||||
# where it is expected that each index step maps to a uniform step
|
||||
# in the time stamp series.
|
||||
t_iv_start = times[read_i_start]
|
||||
if (
|
||||
t_iv_start > i_start_t
|
||||
):
|
||||
# do a binary search for the best index mapping to ``start_t``
|
||||
# given we measured an overshoot using the uniform-time-step
|
||||
# calculation from above.
|
||||
|
||||
# TODO: once we start caching these per source-array,
|
||||
# we can just overwrite ``read_i_start`` directly.
|
||||
new_read_i_start = np.searchsorted(
|
||||
times,
|
||||
i_start_t,
|
||||
side='left',
|
||||
)
|
||||
|
||||
# TODO: minimize binary search work as much as possible:
|
||||
# - cache these remap values which compensate for gaps in the
|
||||
# uniform time step basis where we calc a later start
|
||||
# index for the given input ``start_t``.
|
||||
# - can we shorten the input search sequence by heuristic?
|
||||
# up_to_arith_start = index[:read_i_start]
|
||||
|
||||
if (
|
||||
new_read_i_start <= read_i_start
|
||||
):
|
||||
# t_diff = t_iv_start - start_t
|
||||
# print(
|
||||
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
|
||||
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
|
||||
# f'diff: {t_diff}\n'
|
||||
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
|
||||
# )
|
||||
read_i_start = new_read_i_start
|
||||
|
||||
t_iv_stop = times[read_i_stop - 1]
|
||||
if (
|
||||
t_iv_stop > i_stop_t
|
||||
):
|
||||
# t_diff = stop_t - t_iv_stop
|
||||
# print(
|
||||
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
|
||||
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
|
||||
# f'diff: {t_diff}\n'
|
||||
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
|
||||
# )
|
||||
new_read_i_stop = np.searchsorted(
|
||||
times[read_i_start:],
|
||||
# times,
|
||||
i_stop_t,
|
||||
side='right',
|
||||
)
|
||||
|
||||
if (
|
||||
new_read_i_stop <= read_i_stop
|
||||
):
|
||||
read_i_stop = read_i_start + new_read_i_stop + 1
|
||||
|
||||
# sanity checks for range size
|
||||
# samples = (i_stop_t - i_start_t) // step
|
||||
# index_diff = read_i_stop - read_i_start + 1
|
||||
# if index_diff > (samples + 3):
|
||||
# breakpoint()
|
||||
|
||||
# read-relative indexes: gives a slice where `shm.array[read_slc]`
|
||||
# will be the data spanning the input time range `start_t` ->
|
||||
# `stop_t`
|
||||
read_slc = slice(
|
||||
int(read_i_start),
|
||||
int(read_i_stop),
|
||||
)
|
||||
|
||||
profiler(
|
||||
'slicing complete'
|
||||
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
|
||||
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
|
||||
)
|
||||
|
||||
# NOTE: if caller needs absolute buffer indices they can
|
||||
# slice the buffer abs index like so:
|
||||
# index = arr['index']
|
||||
# abs_indx = index[read_slc]
|
||||
# abs_slc = slice(
|
||||
# int(abs_indx[0]),
|
||||
# int(abs_indx[-1]),
|
||||
# )
|
||||
|
||||
return read_slc
|
||||
|
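A quick usage sketch for `slice_from_time()` above, assuming the function (and its `piker.toolz` profiler deps) is importable from this module; the sample data is made up.

```python
import numpy as np

# a uniform 60s series with an epoch 'time' field
dtype = [('time', 'f8'), ('close', 'f8')]
arr = np.zeros(100, dtype=dtype)
arr['time'] = 1_700_000_000 + 60 * np.arange(100)
arr['close'] = np.random.default_rng(7).random(100)

# slice out rows 10..20 purely by time stamps
read_slc = slice_from_time(
    arr,
    start_t=arr['time'][10],
    stop_t=arr['time'][20],
    step=60,
)
print(arr[read_slc]['time'][[0, -1]])
```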
||||
|
||||
def detect_null_time_gap(
|
||||
shm: ShmArray,
|
||||
imargin: int = 1,
|
||||
|
||||
) -> tuple[float, float] | None:
|
||||
'''
|
||||
Detect if there are any zero-epoch stamped rows in
|
||||
the presumed 'time' field-column.
|
||||
|
||||
Filter to the gap and return a surrounding index range.
|
||||
|
||||
NOTE: for now presumes only ONE gap XD
|
||||
|
||||
'''
|
||||
# ensure we read buffer state only once so that ShmArray rt
|
||||
# circular-buffer updates don't cause an indexing/size mismatch.
|
||||
array: np.ndarray = shm.array
|
||||
|
||||
zero_pred: np.ndarray = array['time'] == 0
|
||||
zero_t: np.ndarray = array[zero_pred]
|
||||
|
||||
if zero_t.size:
|
||||
istart, iend = zero_t['index'][[0, -1]]
|
||||
start, end = shm._array['time'][
|
||||
[istart - imargin, iend + imargin]
|
||||
]
|
||||
return (
|
||||
istart - imargin,
|
||||
start,
|
||||
end,
|
||||
iend + imargin,
|
||||
)
|
||||
|
||||
return None
|
||||
|
||||
|
||||
t_unit: Literal = Literal[
|
||||
'days',
|
||||
'hours',
|
||||
'minutes',
|
||||
'seconds',
|
||||
'miliseconds',
|
||||
'microseconds',
|
||||
'nanoseconds',
|
||||
]
|
||||
|
||||
|
||||
def with_dts(
|
||||
df: pl.DataFrame,
|
||||
time_col: str = 'time',
|
||||
) -> pl.DataFrame:
|
||||
'''
|
||||
Insert datetime (casted) columns to a (presumably) OHLC sampled
|
||||
time series with an epoch-time column keyed by ``time_col``.
|
||||
|
||||
'''
|
||||
return df.with_columns([
|
||||
pl.col(time_col).shift(1).suffix('_prev'),
|
||||
pl.col(time_col).diff().alias('s_diff'),
|
||||
pl.from_epoch(pl.col(time_col)).alias('dt'),
|
||||
]).with_columns([
|
||||
pl.from_epoch(pl.col(f'{time_col}_prev')).alias('dt_prev'),
|
||||
pl.col('dt').diff().alias('dt_diff'),
|
||||
]) #.with_columns(
|
||||
# pl.col('dt').diff().dt.days().alias('days_dt_diff'),
|
||||
# )
|
||||
|
||||
|
||||
def detect_time_gaps(
|
||||
df: pl.DataFrame,
|
||||
|
||||
time_col: str = 'time',
|
||||
# epoch sampling step diff
|
||||
expect_period: float = 60,
|
||||
|
||||
# datetime diff unit and gap value
|
||||
# crypto mkts
|
||||
# gap_dt_unit: t_unit = 'minutes',
|
||||
# gap_thresh: int = 1,
|
||||
|
||||
# NOTE: legacy stock mkts have venue operating hours
|
||||
# and thus gaps normally no more then 1-2 days at
|
||||
# a time.
|
||||
# XXX -> must be valid ``polars.Expr.dt.<name>``
|
||||
# TODO: allow passing in a frame of operating hours
|
||||
# durations/ranges for faster legit gap checks.
|
||||
gap_dt_unit: t_unit = 'days',
|
||||
gap_thresh: int = 1,
|
||||
|
||||
) -> pl.DataFrame:
|
||||
'''
|
||||
Filter to OHLC datums which contain sample step gaps.
|
||||
|
||||
For eg. legacy markets which have venue close gaps and/or
|
||||
actual missing data segments.
|
||||
|
||||
'''
|
||||
return (
|
||||
with_dts(df)
|
||||
.filter(
|
||||
pl.col('s_diff').abs() > expect_period
|
||||
)
|
||||
.filter(
|
||||
getattr(
|
||||
pl.col('dt_diff').dt,
|
||||
gap_dt_unit,
|
||||
)().abs() > gap_thresh
|
||||
)
|
||||
)
|
||||
|
||||
|
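A usage sketch for `with_dts()`/`detect_time_gaps()` above on a toy 1-minute series containing one multi-day hole; this assumes a `polars` version where `Expr.suffix()` and the `.dt.days()` duration accessor (as used above) are available.

```python
import polars as pl

# 1m series with a 3-day hole between the 3rd and 4th samples
times = [0, 60, 120, 120 + 3 * 86_400, 120 + 3 * 86_400 + 60]
df = pl.DataFrame({
    'time': [float(t) for t in times],
    'close': [1.0, 1.1, 1.2, 1.3, 1.4],
})

gaps = detect_time_gaps(
    df,
    expect_period=60,
    gap_dt_unit='days',
    gap_thresh=1,
)
assert gaps.height == 1  # only the 3-day hole qualifies
print(gaps.select(['dt_prev', 'dt', 's_diff']))
```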
||||
def detect_price_gaps(
|
||||
df: pl.DataFrame,
|
||||
gt_multiplier: float = 2.,
|
||||
price_fields: list[str] = ['high', 'low'],
|
||||
|
||||
) -> pl.DataFrame:
|
||||
'''
|
||||
Detect gaps in clearing price over an OHLC series.
|
||||
|
||||
2 types of gaps generally exist; up gaps and down gaps:
|
||||
|
||||
- UP gap: when any next sample's lo price is strictly greater
|
||||
then the current sample's hi price.
|
||||
|
||||
- DOWN gap: when any next sample's hi price is strictly
|
||||
less then the current samples lo price.
|
||||
|
||||
'''
|
||||
# return df.filter(
|
||||
# pl.col('high') - ) > expect_period,
|
||||
# ).select([
|
||||
# pl.dt.datetime(pl.col(time_col).shift(1)).suffix('_previous'),
|
||||
# pl.all(),
|
||||
# ]).select([
|
||||
# pl.all(),
|
||||
# (pl.col(time_col) - pl.col(f'{time_col}_previous')).alias('diff'),
|
||||
# ])
|
||||
...
|
|
@ -273,7 +273,7 @@ async def _reconnect_forever(
|
|||
nobsws._connected.set()
|
||||
await trio.sleep_forever()
|
||||
except HandshakeError:
|
||||
log.exception('Retrying connection')
|
||||
log.exception(f'Retrying connection')
|
||||
|
||||
# ws & nursery block ends
|
||||
|
||||
|
@ -359,8 +359,8 @@ async def open_autorecon_ws(
|
|||
|
||||
|
||||
'''
|
||||
JSONRPC response-request style machinery for transparent multiplexing
|
||||
of msgs over a `NoBsWs`.
|
||||
JSONRPC response-request style machinery for transparent multiplexing of msgs
|
||||
over a NoBsWs.
|
||||
|
||||
'''
|
||||
|
||||
|
@ -377,82 +377,43 @@ async def open_jsonrpc_session(
|
|||
url: str,
|
||||
start_id: int = 0,
|
||||
response_type: type = JSONRPCResult,
|
||||
msg_recv_timeout: float = float('inf'),
|
||||
# ^NOTE, since only `deribit` is using this jsonrpc stuff atm
|
||||
# and options mkts are generally "slow moving"..
|
||||
#
|
||||
# FURTHER if we break the underlying ws connection then since we
|
||||
# don't pass a `fixture` to the task that manages `NoBsWs`, i.e.
|
||||
# `_reconnect_forever()`, the jsonrpc "transport pipe" get's
|
||||
# broken and never restored with wtv init sequence is required to
|
||||
# re-establish a working req-resp session.
|
||||
|
||||
request_type: Optional[type] = None,
|
||||
request_hook: Optional[Callable] = None,
|
||||
error_hook: Optional[Callable] = None,
|
||||
) -> Callable[[str, dict], dict]:
|
||||
'''
|
||||
Init a json-RPC-over-websocket connection to the provided `url`.
|
||||
|
||||
A `json_rpc: Callable[[str, dict], dict` is delivered to the
|
||||
caller for sending requests and a bg-`trio.Task` handles
|
||||
processing of response msgs including error reporting/raising in
|
||||
the parent/caller task.
|
||||
|
||||
'''
|
||||
# NOTE, store all request msgs so we can raise errors on the
|
||||
# caller side!
|
||||
req_msgs: dict[int, dict] = {}
|
||||
|
||||
async with (
|
||||
trio.open_nursery() as tn,
|
||||
open_autorecon_ws(
|
||||
url=url,
|
||||
msg_recv_timeout=msg_recv_timeout,
|
||||
) as ws
|
||||
trio.open_nursery() as n,
|
||||
open_autorecon_ws(url) as ws
|
||||
):
|
||||
rpc_id: Iterable[int] = count(start_id)
|
||||
rpc_id: Iterable = count(start_id)
|
||||
rpc_results: dict[int, dict] = {}
|
||||
|
||||
async def json_rpc(
|
||||
method: str,
|
||||
params: dict,
|
||||
) -> dict:
|
||||
async def json_rpc(method: str, params: dict) -> dict:
|
||||
'''
|
||||
perform a json rpc call and wait for the result, raise exception in
|
||||
case of error field present on response
|
||||
'''
|
||||
nonlocal req_msgs
|
||||
|
||||
req_id: int = next(rpc_id)
|
||||
msg = {
|
||||
'jsonrpc': '2.0',
|
||||
'id': req_id,
|
||||
'id': next(rpc_id),
|
||||
'method': method,
|
||||
'params': params
|
||||
}
|
||||
_id = msg['id']
|
||||
|
||||
result = rpc_results[_id] = {
|
||||
rpc_results[_id] = {
|
||||
'result': None,
|
||||
'error': None,
|
||||
'event': trio.Event(), # signal caller resp arrived
|
||||
'event': trio.Event()
|
||||
}
|
||||
req_msgs[_id] = msg
|
||||
|
||||
await ws.send_msg(msg)
|
||||
|
||||
# wait for response before unblocking requester code
|
||||
await rpc_results[_id]['event'].wait()
|
||||
|
||||
if (maybe_result := result['result']):
|
||||
ret = maybe_result
|
||||
del rpc_results[_id]
|
||||
ret = rpc_results[_id]['result']
|
||||
|
||||
else:
|
||||
err = result['error']
|
||||
raise Exception(
|
||||
f'JSONRPC request failed\n'
|
||||
f'req: {msg}\n'
|
||||
f'resp: {err}\n'
|
||||
)
|
||||
del rpc_results[_id]
|
||||
|
||||
if ret.error is not None:
|
||||
raise Exception(json.dumps(ret.error, indent=4))
|
||||
|
@ -467,7 +428,6 @@ async def open_jsonrpc_session(
|
|||
the server side.
|
||||
|
||||
'''
|
||||
nonlocal req_msgs
|
||||
async for msg in ws:
|
||||
match msg:
|
||||
case {
|
||||
|
@ -491,28 +451,19 @@ async def open_jsonrpc_session(
|
|||
'params': _,
|
||||
}:
|
||||
log.debug(f'Received\n{msg}')
|
||||
if request_hook:
|
||||
await request_hook(request_type(**msg))
|
||||
|
||||
case {
|
||||
'error': error
|
||||
}:
|
||||
# retreive orig request msg, set error
|
||||
# response in original "result" msg,
|
||||
# THEN FINALLY set the event to signal caller
|
||||
# to raise the error in the parent task.
|
||||
req_id: int = error['id']
|
||||
req_msg: dict = req_msgs[req_id]
|
||||
result: dict = rpc_results[req_id]
|
||||
result['error'] = error
|
||||
result['event'].set()
|
||||
log.error(
|
||||
f'JSONRPC request failed\n'
|
||||
f'req: {req_msg}\n'
|
||||
f'resp: {error}\n'
|
||||
)
|
||||
log.warning(f'Received\n{error}')
|
||||
if error_hook:
|
||||
await error_hook(response_type(**msg))
|
||||
|
||||
case _:
|
||||
log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')
|
||||
|
||||
tn.start_soon(recv_task)
|
||||
n.start_soon(recv_task)
|
||||
yield json_rpc
|
||||
tn.cancel_scope.cancel()
|
||||
n.cancel_scope.cancel()
|
||||
|
|
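The reworked `open_jsonrpc_session()` above correlates responses to requests by id: every outbound msg is stored, the recv task fills a per-id result slot and sets a `trio.Event`, and the requesting task then returns the result or raises. A reduced sketch of that bookkeeping; the ws object and msg shapes are assumptions.

```python
import trio

req_msgs: dict[int, dict] = {}      # outbound msgs kept for error reports
rpc_results: dict[int, dict] = {}   # id -> {'result', 'error', 'event'}


async def json_rpc(ws, rpc_id: int, method: str, params: dict) -> dict:
    msg = {
        'jsonrpc': '2.0',
        'id': rpc_id,
        'method': method,
        'params': params,
    }
    result = rpc_results[rpc_id] = {
        'result': None,
        'error': None,
        'event': trio.Event(),  # signal caller when a resp arrives
    }
    req_msgs[rpc_id] = msg

    await ws.send_msg(msg)
    await result['event'].wait()

    if (err := result['error']) is not None:
        raise Exception(
            f'JSONRPC request failed\n'
            f'req: {msg}\n'
            f'resp: {err}\n'
        )

    ret = result['result']
    del rpc_results[rpc_id]
    return ret


def on_response(msg: dict) -> None:
    # called from the ws recv task for every inbound msg
    entry = rpc_results.get(msg['id'])
    if entry is None:
        return
    if 'error' in msg:
        entry['error'] = msg['error']
    else:
        entry['result'] = msg.get('result')
    entry['event'].set()
```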
|
@ -28,7 +28,6 @@ module.
|
|||
from __future__ import annotations
|
||||
from collections import (
|
||||
defaultdict,
|
||||
abc,
|
||||
)
|
||||
from contextlib import asynccontextmanager as acm
|
||||
from functools import partial
|
||||
|
@ -37,6 +36,7 @@ from types import ModuleType
|
|||
from typing import (
|
||||
Any,
|
||||
AsyncContextManager,
|
||||
Optional,
|
||||
Awaitable,
|
||||
Sequence,
|
||||
)
|
||||
|
@ -45,7 +45,10 @@ import trio
|
|||
from trio.abc import ReceiveChannel
|
||||
from trio_typing import TaskStatus
|
||||
import tractor
|
||||
from tractor import trionics
|
||||
from tractor.trionics import (
|
||||
maybe_open_context,
|
||||
gather_contexts,
|
||||
)
|
||||
|
||||
from piker.accounting import (
|
||||
MktPair,
|
||||
|
@ -56,6 +59,7 @@ from piker.brokers import get_brokermod
|
|||
from piker.service import (
|
||||
maybe_spawn_brokerd,
|
||||
)
|
||||
from piker.ui import _search
|
||||
from piker.calc import humanize
|
||||
from ._util import (
|
||||
log,
|
||||
|
@ -66,7 +70,7 @@ from .validate import (
|
|||
FeedInit,
|
||||
validate_backend,
|
||||
)
|
||||
from ..tsp import (
|
||||
from .history import (
|
||||
manage_history,
|
||||
)
|
||||
from .ingest import get_ingestormod
|
||||
|
@ -76,31 +80,6 @@ from ._sampling import (
|
|||
)
|
||||
|
||||
|
||||
class Sub(Struct, frozen=True):
|
||||
'''
|
||||
A live feed subscription entry.
|
||||
|
||||
Contains meta-data on the remote-actor type (in functionality
|
||||
terms) as well as refs to IPC streams and sampler runtime
|
||||
params.
|
||||
|
||||
'''
|
||||
ipc: tractor.MsgStream
|
||||
send_chan: trio.abc.SendChannel | None = None
|
||||
|
||||
# tick throttle rate in Hz; determines how live
|
||||
# quotes/ticks should be downsampled before relay
|
||||
# to the receiving remote consumer (process).
|
||||
throttle_rate: float | None = None
|
||||
_throttle_cs: trio.CancelScope | None = None
|
||||
|
||||
# TODO: actually stash comms info for the far end to allow
|
||||
# `.tsp`, `.fsp` and `.data._sampling` sub-systems to re-render
|
||||
# the data view as needed via msging with the `._remote_ctl`
|
||||
# ipc ctx.
|
||||
rc_ui: bool = False
|
||||
|
||||
|
||||
class _FeedsBus(Struct):
|
||||
'''
|
||||
Data feeds broadcaster and persistence management.
|
||||
|
@ -125,7 +104,13 @@ class _FeedsBus(Struct):
|
|||
|
||||
_subscribers: defaultdict[
|
||||
str,
|
||||
set[Sub]
|
||||
set[
|
||||
tuple[
|
||||
tractor.MsgStream | trio.MemorySendChannel,
|
||||
# tractor.Context,
|
||||
float | None, # tick throttle in Hz
|
||||
]
|
||||
]
|
||||
] = defaultdict(set)
|
||||
|
||||
async def start_task(
|
||||
|
@ -140,8 +125,6 @@ class _FeedsBus(Struct):
|
|||
trio.CancelScope] = trio.TASK_STATUS_IGNORED,
|
||||
) -> None:
|
||||
with trio.CancelScope() as cs:
|
||||
# TODO: shouldn't this be a direct await to avoid
|
||||
# cancellation contagion to the bus nursery!?!?!
|
||||
await self.nursery.start(
|
||||
target,
|
||||
*args,
|
||||
|
@ -159,28 +142,31 @@ class _FeedsBus(Struct):
|
|||
def get_subs(
|
||||
self,
|
||||
key: str,
|
||||
|
||||
) -> set[Sub]:
|
||||
) -> set[
|
||||
tuple[
|
||||
tractor.MsgStream | trio.MemorySendChannel,
|
||||
float | None, # tick throttle in Hz
|
||||
]
|
||||
]:
|
||||
'''
|
||||
Get the ``set`` of consumer subscription entries for the given key.
|
||||
|
||||
'''
|
||||
return self._subscribers[key]
|
||||
|
||||
def subs_items(self) -> abc.ItemsView[str, set[Sub]]:
|
||||
return self._subscribers.items()
|
||||
|
||||
def add_subs(
|
||||
self,
|
||||
key: str,
|
||||
subs: set[Sub],
|
||||
|
||||
) -> set[Sub]:
|
||||
subs: set[tuple[
|
||||
tractor.MsgStream | trio.MemorySendChannel,
|
||||
float | None, # tick throttle in Hz
|
||||
]],
|
||||
) -> set[tuple]:
|
||||
'''
|
||||
Add a ``set`` of consumer subscription entries for the given key.
|
||||
|
||||
'''
|
||||
_subs: set[Sub] = self._subscribers.setdefault(key, set())
|
||||
_subs: set[tuple] = self._subscribers[key]
|
||||
_subs.update(subs)
|
||||
return _subs
|
||||
|
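The new `Sub` entry type and the simplified `_subscribers` registry above replace the old `(stream, tick_throttle)` tuples. A sketch using plain `msgspec.Struct` (piker's own `Struct` wrapper, noted earlier in the diff as a `tractor.msg.Struct`, differs slightly):

```python
from collections import defaultdict

import tractor
import trio
from msgspec import Struct


class Sub(Struct, frozen=True):
    '''
    A live feed subscription entry.

    '''
    ipc: tractor.MsgStream
    send_chan: trio.abc.SendChannel | None = None
    # tick throttle rate in Hz for downsampled relay to the consumer
    throttle_rate: float | None = None
    _throttle_cs: trio.CancelScope | None = None
    rc_ui: bool = False


_subscribers: defaultdict[str, set[Sub]] = defaultdict(set)


def add_subs(key: str, subs: set[Sub]) -> set[Sub]:
    # add consumer subscription entries for the given broker-symbol key
    _subs: set[Sub] = _subscribers.setdefault(key, set())
    _subs.update(subs)
    return _subs
```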
||||
|
@ -345,6 +331,7 @@ async def allocate_persistent_feed(
|
|||
) = await bus.nursery.start(
|
||||
manage_history,
|
||||
mod,
|
||||
bus,
|
||||
mkt,
|
||||
some_data_ready,
|
||||
feed_is_live,
|
||||
|
@ -421,12 +408,6 @@ async def allocate_persistent_feed(
|
|||
rt_shm.array['time'][1] = ts + 1
|
||||
|
||||
elif hist_shm.array.size == 0:
|
||||
for i in range(100):
|
||||
await trio.sleep(0.1)
|
||||
if hist_shm.array.size > 0:
|
||||
break
|
||||
else:
|
||||
await tractor.pause()
|
||||
raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')
|
||||
|
||||
# wait the spawning parent task to register its subscriber
|
||||
|
@ -457,9 +438,8 @@ async def open_feed_bus(
|
|||
symbols: list[str], # normally expected to the broker-specific fqme
|
||||
|
||||
loglevel: str = 'error',
|
||||
tick_throttle: float | None = None,
|
||||
tick_throttle: Optional[float] = None,
|
||||
start_stream: bool = True,
|
||||
allow_remote_ctl_ui: bool = False,
|
||||
|
||||
) -> dict[
|
||||
str, # fqme
|
||||
|
@ -474,12 +454,8 @@ async def open_feed_bus(
|
|||
if loglevel is None:
|
||||
loglevel = tractor.current_actor().loglevel
|
||||
|
||||
# XXX: required to propagate ``tractor`` loglevel to piker
|
||||
# logging
|
||||
get_console_log(
|
||||
loglevel
|
||||
or tractor.current_actor().loglevel
|
||||
)
|
||||
# XXX: required to propagate ``tractor`` loglevel to piker logging
|
||||
get_console_log(loglevel or tractor.current_actor().loglevel)
|
||||
|
||||
# local state sanity checks
|
||||
# TODO: check for any stale shm entries for this symbol
|
||||
|
@ -489,7 +465,7 @@ async def open_feed_bus(
|
|||
assert 'brokerd' in servicename
|
||||
assert brokername in servicename
|
||||
|
||||
bus: _FeedsBus = get_feed_bus(brokername)
|
||||
bus = get_feed_bus(brokername)
|
||||
sub_registered = trio.Event()
|
||||
|
||||
flumes: dict[str, Flume] = {}
|
||||
|
@ -536,10 +512,10 @@ async def open_feed_bus(
|
|||
# pack for ``.started()`` sync msg
|
||||
flumes[fqme] = flume
|
||||
|
||||
# we use the broker-specific fqme (bs_fqme) for the sampler
|
||||
# subscription since the backend isn't (yet) expected to
|
||||
# append its own name to the fqme, so we filter on keys
|
||||
# which *do not* include that name (e.g. .ib).
|
||||
# we use the broker-specific fqme (bs_fqme) for the
|
||||
# sampler subscription since the backend isn't (yet) expected to
|
||||
# append its own name to the fqme, so we filter on keys which
|
||||
# *do not* include that name (e.g. .ib).
|
||||
bus._subscribers.setdefault(bs_fqme, set())
|
||||
|
||||
# sync feed subscribers with flume handles
|
||||
|
@ -578,60 +554,49 @@ async def open_feed_bus(
|
|||
# that the ``sample_and_broadcast()`` task (spawned inside
|
||||
# ``allocate_persistent_feed()``) will push real-time quotes
|
||||
# (ticks) to this new consumer.
|
||||
cs: trio.CancelScope | None = None
|
||||
send: trio.MemorySendChannel | None = None
|
||||
|
||||
if tick_throttle:
|
||||
flume.throttle_rate = tick_throttle
|
||||
|
||||
# open a bg task which receives quotes over a mem
|
||||
# chan and only pushes them to the target
|
||||
# actor-consumer at a max ``tick_throttle``
|
||||
# (instantaneous) rate.
|
||||
# open a bg task which receives quotes over a mem chan
|
||||
# and only pushes them to the target actor-consumer at
|
||||
# a max ``tick_throttle`` instantaneous rate.
|
||||
send, recv = trio.open_memory_channel(2**10)
|
||||
|
||||
# NOTE: the ``.send`` channel here is a swapped-in
|
||||
# trio mem chan which gets `.send()`-ed by the normal
|
||||
# sampler task but instead of being sent directly
|
||||
# over the IPC msg stream, it's the throttle task that
|
||||
# does the work of incrementally forwarding to the
|
||||
# IPC stream at the throttle rate.
|
||||
cs: trio.CancelScope = await bus.start_task(
|
||||
cs = await bus.start_task(
|
||||
uniform_rate_send,
|
||||
tick_throttle,
|
||||
recv,
|
||||
stream,
|
||||
)
|
||||
# NOTE: so the ``send`` channel here is actually a swapped
|
||||
# in trio mem chan which gets pushed by the normal sampler
|
||||
# task but instead of being sent directly over the IPC msg
|
||||
# stream, it's the throttle task that does the work of
|
||||
# incrementally forwarding to the IPC stream at the throttle
|
||||
# rate.
|
||||
send._ctx = ctx # mock internal ``tractor.MsgStream`` ref
|
||||
sub = (send, tick_throttle)
|
||||
|
||||
sub = Sub(
|
||||
ipc=stream,
|
||||
send_chan=send,
|
||||
throttle_rate=tick_throttle,
|
||||
_throttle_cs=cs,
|
||||
rc_ui=allow_remote_ctl_ui,
|
||||
)
|
||||
else:
|
||||
sub = (stream, tick_throttle)
|
||||
|
||||
# TODO: add an api for this on the bus?
|
||||
# maybe use the current task-id to key the sub list that's
|
||||
# added / removed? Or maybe we can add a general
|
||||
# pause-resume by sub-key api?
|
||||
bs_fqme = fqme.removesuffix(f'.{brokername}')
|
||||
local_subs.setdefault(
|
||||
bs_fqme,
|
||||
set()
|
||||
).add(sub)
|
||||
bus.add_subs(
|
||||
bs_fqme,
|
||||
{sub}
|
||||
)
|
||||
local_subs.setdefault(bs_fqme, set()).add(sub)
|
||||
bus.add_subs(bs_fqme, {sub})
|
||||
|
||||
# sync caller with all subs registered state
|
||||
sub_registered.set()
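To make the throttling comments above concrete, here is a rough, trio-only sketch of the pattern: the sampler writes into a memory channel and a background task forwards to the consumer at a bounded rate. The ``forward_at_rate()`` helper and the 10 Hz rate are illustrative assumptions, not piker's actual ``uniform_rate_send()``:

import trio

async def forward_at_rate(rate: float, recv, consumer) -> None:
    period = 1 / rate
    async for item in recv:
        await consumer(item)      # push downstream (eg. an IPC stream.send())
        await trio.sleep(period)  # crude instantaneous rate limit

async def main() -> None:
    send, recv = trio.open_memory_channel(2**10)
    out: list[int] = []

    async def consumer(item) -> None:
        out.append(item)

    async with trio.open_nursery() as n:
        n.start_soon(forward_at_rate, 10.0, recv, consumer)
        for i in range(5):
            await send.send(i)
        await trio.sleep(1)
        n.cancel_scope.cancel()

    assert out == [0, 1, 2, 3, 4]

trio.run(main)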
|
||||
|
||||
uid: tuple[str, str] = ctx.chan.uid
|
||||
uid = ctx.chan.uid
|
||||
try:
|
||||
# ctrl protocol for start/stop of live quote streams
|
||||
# based on UI state (eg. don't need a stream when
|
||||
# a symbol isn't being displayed).
|
||||
# ctrl protocol for start/stop of quote streams based on UI
|
||||
# state (eg. don't need a stream when a symbol isn't being
|
||||
# displayed).
|
||||
async for msg in stream:
|
||||
|
||||
if msg == 'pause':
|
||||
|
@ -769,7 +734,6 @@ async def install_brokerd_search(
|
|||
except trio.EndOfChannel:
|
||||
return {}
|
||||
|
||||
from piker.ui import _search
|
||||
async with _search.register_symbol_search(
|
||||
|
||||
provider_name=brokermod.name,
|
||||
|
@ -788,7 +752,7 @@ async def install_brokerd_search(
|
|||
async def maybe_open_feed(
|
||||
|
||||
fqmes: list[str],
|
||||
loglevel: str | None = None,
|
||||
loglevel: Optional[str] = None,
|
||||
|
||||
**kwargs,
|
||||
|
||||
|
@ -804,7 +768,7 @@ async def maybe_open_feed(
|
|||
'''
|
||||
fqme = fqmes[0]
|
||||
|
||||
async with trionics.maybe_open_context(
|
||||
async with maybe_open_context(
|
||||
acm_func=open_feed,
|
||||
kwargs={
|
||||
'fqmes': fqmes,
|
||||
|
@ -824,7 +788,7 @@ async def maybe_open_feed(
|
|||
# add a new broadcast subscription for the quote stream
|
||||
# if this feed is likely already in use
|
||||
|
||||
async with trionics.gather_contexts(
|
||||
async with gather_contexts(
|
||||
mngrs=[stream.subscribe() for stream in feed.streams.values()]
|
||||
) as bstreams:
|
||||
for bstream, flume in zip(bstreams, feed.flumes.values()):
|
||||
|
@ -848,8 +812,6 @@ async def open_feed(
|
|||
start_stream: bool = True,
|
||||
tick_throttle: float | None = None, # Hz
|
||||
|
||||
allow_remote_ctl_ui: bool = False,
|
||||
|
||||
) -> Feed:
|
||||
'''
|
||||
Open a "data feed" which provides streamed real-time quotes.
|
||||
|
@ -886,7 +848,7 @@ async def open_feed(
|
|||
)
|
||||
|
||||
portals: tuple[tractor.Portal]
|
||||
async with trionics.gather_contexts(
|
||||
async with gather_contexts(
|
||||
brokerd_ctxs,
|
||||
) as portals:
|
||||
|
||||
|
@ -932,19 +894,13 @@ async def open_feed(
|
|||
# of these stream open sequences sequentially per
|
||||
# backend? .. need some thought!
|
||||
allow_overruns=True,
|
||||
|
||||
# NOTE: UI actors (like charts) can allow
|
||||
# remote control of certain graphics rendering
|
||||
# capabilities via the
|
||||
# `.ui._remote_ctl.remote_annotate()` msg loop.
|
||||
allow_remote_ctl_ui=allow_remote_ctl_ui,
|
||||
)
|
||||
)
|
||||
|
||||
assert len(feed.mods) == len(feed.portals)
|
||||
|
||||
async with (
|
||||
trionics.gather_contexts(bus_ctxs) as ctxs,
|
||||
gather_contexts(bus_ctxs) as ctxs,
|
||||
):
|
||||
stream_ctxs: list[tractor.MsgStream] = []
|
||||
for (
|
||||
|
@ -986,7 +942,7 @@ async def open_feed(
|
|||
brokermod: ModuleType
|
||||
fqmes: list[str]
|
||||
async with (
|
||||
trionics.gather_contexts(stream_ctxs) as streams,
|
||||
gather_contexts(stream_ctxs) as streams,
|
||||
):
|
||||
for (
|
||||
stream,
|
||||
|
@ -1002,12 +958,6 @@ async def open_feed(
|
|||
if brokermod.name == flume.mkt.broker:
|
||||
flume.stream = stream
|
||||
|
||||
assert (
|
||||
len(feed.mods)
|
||||
==
|
||||
len(feed.portals)
|
||||
==
|
||||
len(feed.streams)
|
||||
)
|
||||
assert len(feed.mods) == len(feed.portals) == len(feed.streams)
|
||||
|
||||
yield feed
|
||||
|
|
|
@ -42,15 +42,35 @@ if TYPE_CHECKING:
|
|||
from .feed import Feed
|
||||
|
||||
|
||||
# TODO: ideas for further abstractions as per
|
||||
# https://github.com/pikers/piker/issues/216 and
|
||||
# https://github.com/pikers/piker/issues/270:
|
||||
# - a ``Cascade`` would be the minimal "connection" of 2 ``Flumes``
|
||||
# as per circuit parlance:
|
||||
# https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
|
||||
# - could cover the combination of our `FspAdmin` and the
|
||||
# backend `.fsp._engine` related machinery to "connect" one flume
|
||||
# to another?
|
||||
# - a (financial signal) ``Flow`` would be the a "collection" of such
|
||||
# minimal cascades. Some engineering-based jargon concepts:
|
||||
# - https://en.wikipedia.org/wiki/Signal_chain
|
||||
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
|
||||
# - https://en.wikipedia.org/wiki/Audio_signal_flow
|
||||
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
|
||||
# - https://en.wikipedia.org/wiki/Dataflow_programming
|
||||
# - https://en.wikipedia.org/wiki/Signal_programming
|
||||
# - https://en.wikipedia.org/wiki/Incremental_computing
|
||||
|
||||
|
||||
class Flume(Struct):
|
||||
'''
|
||||
Composite reference type which points to all the addressing
|
||||
handles and other meta-data necessary for the read, measure and
|
||||
management of a set of real-time updated data flows.
|
||||
Composite reference type which points to all the addressing handles
|
||||
and other meta-data necessary for the read, measure and management
|
||||
of a set of real-time updated data flows.
|
||||
|
||||
Can be thought of as a "flow descriptor" or "flow frame" which
|
||||
describes the high level properties of a set of data flows that
|
||||
can be used seamlessly across process-memory boundaries.
|
||||
describes the high level properties of a set of data flows that can
|
||||
be used seamlessly across process-memory boundaries.
|
||||
|
||||
Each instance's sub-components normally includes:
|
||||
- a msg oriented quote stream provided via an IPC transport
|
||||
|
@ -73,7 +93,6 @@ class Flume(Struct):
|
|||
# private shm refs loaded dynamically from tokens
|
||||
_hist_shm: ShmArray | None = None
|
||||
_rt_shm: ShmArray | None = None
|
||||
_readonly: bool = True
|
||||
|
||||
stream: tractor.MsgStream | None = None
|
||||
izero_hist: int = 0
|
||||
|
@ -90,7 +109,7 @@ class Flume(Struct):
|
|||
if self._rt_shm is None:
|
||||
self._rt_shm = attach_shm_array(
|
||||
token=self._rt_shm_token,
|
||||
readonly=self._readonly,
|
||||
readonly=True,
|
||||
)
|
||||
|
||||
return self._rt_shm
|
||||
|
@ -103,10 +122,12 @@ class Flume(Struct):
|
|||
'No shm token has been set for the history buffer?'
|
||||
)
|
||||
|
||||
if self._hist_shm is None:
|
||||
if (
|
||||
self._hist_shm is None
|
||||
):
|
||||
self._hist_shm = attach_shm_array(
|
||||
token=self._hist_shm_token,
|
||||
readonly=self._readonly,
|
||||
readonly=True,
|
||||
)
|
||||
|
||||
return self._hist_shm
|
||||
|
@ -125,10 +146,10 @@ class Flume(Struct):
|
|||
period and ratio between them.
|
||||
|
||||
'''
|
||||
times: np.ndarray = self.hist_shm.array['time']
|
||||
end: float | int = pendulum.from_timestamp(times[-1])
|
||||
start: float | int = pendulum.from_timestamp(times[times != times[-1]][-1])
|
||||
hist_step_size_s: float = (end - start).seconds
|
||||
times = self.hist_shm.array['time']
|
||||
end = pendulum.from_timestamp(times[-1])
|
||||
start = pendulum.from_timestamp(times[times != times[-1]][-1])
|
||||
hist_step_size_s = (end - start).seconds
|
||||
|
||||
times = self.rt_shm.array['time']
|
||||
end = pendulum.from_timestamp(times[-1])
|
||||
|
@ -148,25 +169,17 @@ class Flume(Struct):
|
|||
msg = self.to_dict()
|
||||
msg['mkt'] = self.mkt.to_dict()
|
||||
|
||||
# NOTE: pop all un-msg-serializable fields:
|
||||
# - `tractor.MsgStream`
|
||||
# - `Feed`
|
||||
# - `Shmarray`
|
||||
# it's expected the `.from_msg()` on the other side
|
||||
# will get instead some kind of msg-compat version
|
||||
# that it can load.
|
||||
# can't serialize the stream or feed objects, it's expected
|
||||
# you'll have a ref to it since this msg should be rxed on
|
||||
# a stream on whatever far end IPC..
|
||||
msg.pop('stream')
|
||||
msg.pop('feed')
|
||||
msg.pop('_rt_shm')
|
||||
msg.pop('_hist_shm')
|
||||
|
||||
return msg
|
||||
|
||||
@classmethod
|
||||
def from_msg(
|
||||
cls,
|
||||
msg: dict,
|
||||
readonly: bool = True,
|
||||
|
||||
) -> dict:
|
||||
'''
|
||||
|
@ -177,11 +190,7 @@ class Flume(Struct):
|
|||
mkt_msg = msg.pop('mkt')
|
||||
from ..accounting import MktPair # cycle otherwise..
|
||||
mkt = MktPair.from_msg(mkt_msg)
|
||||
msg |= {'_readonly': readonly}
|
||||
return cls(
|
||||
mkt=mkt,
|
||||
**msg,
|
||||
)
|
||||
return cls(mkt=mkt, **msg)
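Illustrative sketch of the ``to_msg()``/``from_msg()`` round trip described above: runtime-only handles are popped before the dict crosses IPC and the far side rebuilds from what's left. ``FlumeLike`` is a made-up stand-in, not the real ``Flume``/``MktPair`` types:

from dataclasses import dataclass, asdict

@dataclass
class FlumeLike:
    fqme: str
    throttle_rate: float | None = None
    stream: object | None = None   # not msg-serializable
    feed: object | None = None     # not msg-serializable

    def to_msg(self) -> dict:
        msg = asdict(self)
        # pop un-serializable (runtime-only) handles
        msg.pop('stream')
        msg.pop('feed')
        return msg

    @classmethod
    def from_msg(cls, msg: dict) -> 'FlumeLike':
        return cls(**msg)

msg = FlumeLike('xbtusdt.kraken', throttle_rate=4.0).to_msg()
flume = FlumeLike.from_msg(msg)
assert flume.stream is None and flume.fqme == 'xbtusdt.kraken'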
|
||||
|
||||
def get_index(
|
||||
self,
|
||||
|
|
File diff suppressed because it is too large
|
@ -26,10 +26,7 @@ from ._api import (
|
|||
maybe_mk_fsp_shm,
|
||||
Fsp,
|
||||
)
|
||||
from ._engine import (
|
||||
cascade,
|
||||
Cascade,
|
||||
)
|
||||
from ._engine import cascade
|
||||
from ._volume import (
|
||||
dolla_vlm,
|
||||
flow_rates,
|
||||
|
@ -38,7 +35,6 @@ from ._volume import (
|
|||
|
||||
__all__: list[str] = [
|
||||
'cascade',
|
||||
'Cascade',
|
||||
'maybe_mk_fsp_shm',
|
||||
'Fsp',
|
||||
'dolla_vlm',
|
||||
|
@ -50,12 +46,9 @@ __all__: list[str] = [
|
|||
async def latency(
|
||||
source: 'TickStream[Dict[str, float]]', # noqa
|
||||
ohlcv: np.ndarray
|
||||
|
||||
) -> AsyncIterator[np.ndarray]:
|
||||
'''
|
||||
Latency measurements, broker to piker.
|
||||
|
||||
'''
|
||||
"""Latency measurements, broker to piker.
|
||||
"""
|
||||
# TODO: do we want to offer yielding this async
|
||||
# before the rt data connection comes up?
|
||||
|
||||
|
|
|
@ -18,12 +18,13 @@
|
|||
core task logic for processing chains
|
||||
|
||||
'''
|
||||
from __future__ import annotations
|
||||
from contextlib import asynccontextmanager as acm
|
||||
from dataclasses import dataclass
|
||||
from functools import partial
|
||||
from typing import (
|
||||
AsyncIterator,
|
||||
Callable,
|
||||
Optional,
|
||||
Union,
|
||||
)
|
||||
|
||||
import numpy as np
|
||||
|
@ -32,9 +33,9 @@ from trio_typing import TaskStatus
|
|||
import tractor
|
||||
from tractor.msg import NamespacePath
|
||||
|
||||
from piker.types import Struct
|
||||
from ..log import get_logger, get_console_log
|
||||
from .. import data
|
||||
from ..data import attach_shm_array
|
||||
from ..data.feed import (
|
||||
Flume,
|
||||
Feed,
|
||||
|
@ -55,6 +56,12 @@ from ..toolz import Profiler
|
|||
log = get_logger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class TaskTracker:
|
||||
complete: trio.Event
|
||||
cs: trio.CancelScope
|
||||
|
||||
|
||||
async def filter_quotes_by_sym(
|
||||
|
||||
sym: str,
|
||||
|
@ -75,168 +82,30 @@ async def filter_quotes_by_sym(
|
|||
if quote:
|
||||
yield quote
|
||||
|
||||
# TODO: unifying the abstractions in this FSP subsys/layer:
|
||||
# -[ ] move the `.data.flows.Flume` type into this
|
||||
# module/subsys/pkg?
|
||||
# -[ ] ideas for further abstractions as per
|
||||
# - https://github.com/pikers/piker/issues/216,
|
||||
# - https://github.com/pikers/piker/issues/270:
|
||||
# - a (financial signal) ``Flow`` would be a "collection" of such
|
||||
# minimal cascades. Some engineering-based jargon concepts:
|
||||
# - https://en.wikipedia.org/wiki/Signal_chain
|
||||
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
|
||||
# - https://en.wikipedia.org/wiki/Audio_signal_flow
|
||||
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
|
||||
# - https://en.wikipedia.org/wiki/Dataflow_programming
|
||||
# - https://en.wikipedia.org/wiki/Signal_programming
|
||||
# - https://en.wikipedia.org/wiki/Incremental_computing
|
||||
# - https://en.wikipedia.org/wiki/Signal-flow_graph
|
||||
# - https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
|
||||
|
||||
# -[ ] we probably want to eval THE BELOW design and unify with the
|
||||
# proto `TaskManager` in the `tractor` dev branch as well as with
|
||||
# our below idea for `Cascade`:
|
||||
# - https://github.com/goodboy/tractor/pull/363
|
||||
class Cascade(Struct):
|
||||
'''
|
||||
As per sig-proc engineering parlance, this is a chaining of
|
||||
`Flume`s, which are themselves collections of "Streams"
|
||||
implemented currently via `ShmArray`s.
|
||||
async def fsp_compute(
|
||||
|
||||
A `Cascade` is the minimal "connection" of 2 `Flumes`
|
||||
as per circuit parlance:
|
||||
https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
|
||||
|
||||
TODO:
|
||||
-[ ] could cover the combination of our `FspAdmin` and the
|
||||
backend `.fsp._engine` related machinery to "connect" one flume
|
||||
to another?
|
||||
|
||||
'''
|
||||
# TODO: make these `Flume`s
|
||||
src: Flume
|
||||
dst: Flume
|
||||
tn: trio.Nursery
|
||||
fsp: Fsp # UI-side middleware ctl API
|
||||
|
||||
# filled during cascade/.bind_func() (fsp_compute) init phases
|
||||
bind_func: Callable | None = None
|
||||
complete: trio.Event | None = None
|
||||
cs: trio.CancelScope | None = None
|
||||
client_stream: tractor.MsgStream | None = None
|
||||
|
||||
async def resync(self) -> int:
|
||||
# TODO: adopt an incremental update engine/approach
|
||||
# where possible here eventually!
|
||||
log.info(f're-syncing fsp {self.fsp.name} to source')
|
||||
self.cs.cancel()
|
||||
await self.complete.wait()
|
||||
index: int = await self.tn.start(self.bind_func)
|
||||
|
||||
# always trigger UI refresh after history update,
|
||||
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
|
||||
# ``piker.ui._display.trigger_update()``.
|
||||
dst_shm: ShmArray = self.dst.rt_shm
|
||||
await self.client_stream.send({
|
||||
'fsp_update': {
|
||||
'key': dst_shm.token,
|
||||
'first': dst_shm._first.value,
|
||||
'last': dst_shm._last.value,
|
||||
}
|
||||
})
|
||||
return index
|
||||
|
||||
def is_synced(self) -> tuple[bool, int, int]:
|
||||
'''
|
||||
Predicate to determine if a destination FSP
|
||||
output array is aligned to its source array.
|
||||
|
||||
'''
|
||||
src_shm: ShmArray = self.src.rt_shm
|
||||
dst_shm: ShmArray = self.dst.rt_shm
|
||||
step_diff = src_shm.index - dst_shm.index
|
||||
len_diff = abs(len(src_shm.array) - len(dst_shm.array))
|
||||
synced: bool = not (
|
||||
# the source is likely backfilling and we must
|
||||
# sync history calculations
|
||||
len_diff > 2
|
||||
|
||||
# we aren't step synced to the source and may be
|
||||
# leading/lagging by a step
|
||||
or step_diff > 1
|
||||
or step_diff < 0
|
||||
)
|
||||
if not synced:
|
||||
fsp: Fsp = self.fsp
|
||||
log.warning(
|
||||
'***DESYNCED FSP***\n'
|
||||
f'{fsp.ns_path}@{src_shm.token}\n'
|
||||
f'step_diff: {step_diff}\n'
|
||||
f'len_diff: {len_diff}\n'
|
||||
)
|
||||
return (
|
||||
synced,
|
||||
step_diff,
|
||||
len_diff,
|
||||
)
|
||||
|
||||
async def poll_and_sync_to_step(self) -> int:
|
||||
synced, step_diff, _ = self.is_synced()
|
||||
while not synced:
|
||||
await self.resync()
|
||||
synced, step_diff, _ = self.is_synced()
|
||||
|
||||
return step_diff
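A stripped-down sketch of the sync predicate above, using plain integers in place of the ``ShmArray`` index/length reads (thresholds copied from the logic above, names illustrative):

def is_synced(
    src_index: int,
    dst_index: int,
    src_len: int,
    dst_len: int,
) -> tuple[bool, int, int]:
    step_diff = src_index - dst_index
    len_diff = abs(src_len - dst_len)
    synced = not (
        len_diff > 2         # source is (likely) backfilling history
        or step_diff > 1     # dst is lagging the source by more than a step
        or step_diff < 0     # dst has somehow run ahead of the source
    )
    return synced, step_diff, len_diff

assert is_synced(100, 100, 1000, 1000)[0]
assert not is_synced(105, 100, 1000, 995)[0]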
|
||||
|
||||
@acm
|
||||
async def open_edge(
|
||||
self,
|
||||
bind_func: Callable,
|
||||
) -> int:
|
||||
self.bind_func = bind_func
|
||||
index = await self.tn.start(bind_func)
|
||||
yield index
|
||||
# TODO: what do we want on teardown/error?
|
||||
# -[ ] dynamic reconnection after update?
|
||||
|
||||
|
||||
async def connect_streams(
|
||||
casc: Cascade,
|
||||
mkt: MktPair,
|
||||
flume: Flume,
|
||||
quote_stream: trio.abc.ReceiveChannel,
|
||||
src: Flume,
|
||||
dst: Flume,
|
||||
|
||||
edge_func: Callable,
|
||||
src: ShmArray,
|
||||
dst: ShmArray,
|
||||
|
||||
func: Callable,
|
||||
|
||||
# attach_stream: bool = False,
|
||||
task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,
|
||||
|
||||
) -> None:
|
||||
'''
|
||||
Stream, per-sample compute, and write the cascade of
|
||||
2 `Flumes`/streams given some operating `func`.
|
||||
|
||||
https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
|
||||
|
||||
Not literally, but something like:
|
||||
|
||||
edge_func(Flume_in) -> Flume_out
|
||||
|
||||
'''
|
||||
profiler = Profiler(
|
||||
delayed=False,
|
||||
disabled=True
|
||||
)
|
||||
|
||||
# TODO: just pull it from src.mkt.fqme no?
|
||||
# fqme: str = mkt.fqme
|
||||
fqme: str = src.mkt.fqme
|
||||
|
||||
# TODO: dynamic introspection of what the underlying (vertex)
|
||||
# function actually requires from input node (flumes) then
|
||||
# deliver those inputs as part of a graph "compilation" step?
|
||||
out_stream = edge_func(
|
||||
fqme = mkt.fqme
|
||||
out_stream = func(
|
||||
|
||||
# TODO: do we even need this if we do the feed api right?
|
||||
# shouldn't a local stream do this before we get a handle
|
||||
|
@ -244,21 +113,20 @@ async def connect_streams(
|
|||
# async itertools style?
|
||||
filter_quotes_by_sym(fqme, quote_stream),
|
||||
|
||||
# XXX: currently the ``ohlcv`` arg, but we should allow
|
||||
# (dynamic) requests for src flume (node) streams?
|
||||
src.rt_shm,
|
||||
# XXX: currently the ``ohlcv`` arg
|
||||
flume.rt_shm,
|
||||
)
|
||||
|
||||
# HISTORY COMPUTE PHASE
|
||||
# conduct a single iteration of fsp with historical bars input
|
||||
# and get historical output.
|
||||
history_output: (
|
||||
dict[str, np.ndarray] # multi-output case
|
||||
| np.ndarray # single output case
|
||||
)
|
||||
history_output: Union[
|
||||
dict[str, np.ndarray], # multi-output case
|
||||
np.ndarray, # single output case
|
||||
]
|
||||
history_output = await anext(out_stream)
|
||||
|
||||
func_name = edge_func.__name__
|
||||
func_name = func.__name__
|
||||
profiler(f'{func_name} generated history')
|
||||
|
||||
# build struct array with an 'index' field to push as history
|
||||
|
@ -266,12 +134,10 @@ async def connect_streams(
|
|||
# TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no?
|
||||
# if the output array is multi-field then push
|
||||
# each respective field.
|
||||
dst_shm: ShmArray = dst.rt_shm
|
||||
fields = getattr(dst_shm.array.dtype, 'fields', None).copy()
|
||||
fields = getattr(dst.array.dtype, 'fields', None).copy()
|
||||
fields.pop('index')
|
||||
history_by_field: np.ndarray | None = None
|
||||
src_shm: ShmArray = src.rt_shm
|
||||
src_time = src_shm.array['time']
|
||||
history_by_field: Optional[np.ndarray] = None
|
||||
src_time = src.array['time']
|
||||
|
||||
if (
|
||||
fields and
|
||||
|
@ -290,7 +156,7 @@ async def connect_streams(
|
|||
if history_by_field is None:
|
||||
|
||||
if output is None:
|
||||
length = len(src_shm.array)
|
||||
length = len(src.array)
|
||||
else:
|
||||
length = len(output)
|
||||
|
||||
|
@ -299,7 +165,7 @@ async def connect_streams(
|
|||
# will be pushed to shm.
|
||||
history_by_field = np.zeros(
|
||||
length,
|
||||
dtype=dst_shm.array.dtype
|
||||
dtype=dst.array.dtype
|
||||
)
|
||||
|
||||
if output is None:
|
||||
|
@ -316,13 +182,13 @@ async def connect_streams(
|
|||
)
|
||||
history_by_field = np.zeros(
|
||||
len(history_output),
|
||||
dtype=dst_shm.array.dtype
|
||||
dtype=dst.array.dtype
|
||||
)
|
||||
history_by_field[func_name] = history_output
|
||||
|
||||
history_by_field['time'] = src_time[-len(history_by_field):]
|
||||
|
||||
history_output['time'] = src_shm.array['time']
|
||||
history_output['time'] = src.array['time']
|
||||
|
||||
# TODO: XXX:
|
||||
# THERE'S A BIG BUG HERE WITH THE `index` field since we're
|
||||
|
@ -335,11 +201,11 @@ async def connect_streams(
|
|||
# is `index` aware such that historical data can be indexed
|
||||
# relative to the true first datum? Not sure if this is sane
|
||||
# for incremental computations.
|
||||
first = dst_shm._first.value = src_shm._first.value
|
||||
first = dst._first.value = src._first.value
|
||||
|
||||
# TODO: can we use this `start` flag instead of the manual
|
||||
# setting above?
|
||||
index = dst_shm.push(
|
||||
index = dst.push(
|
||||
history_by_field,
|
||||
start=first,
|
||||
)
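Sketch of the structured-array staging step above: allocate a zeroed array with the destination dtype, fill the fsp's output field plus the ``time`` column, and it's ready to be pushed into shm. The dtype and ``dolla_vlm`` field name here are made up for illustration:

import numpy as np

dst_dtype = np.dtype([
    ('index', 'i8'),
    ('time', 'f8'),
    ('dolla_vlm', 'f8'),
])
history_output = np.arange(5, dtype='f8') * 100.0   # fake fsp output
src_time = np.arange(5, dtype='f8') + 1_700_000_000

history_by_field = np.zeros(len(history_output), dtype=dst_dtype)
history_by_field['dolla_vlm'] = history_output
history_by_field['time'] = src_time[-len(history_by_field):]

assert history_by_field['dolla_vlm'][-1] == 400.0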
|
||||
|
@ -350,9 +216,12 @@ async def connect_streams(
|
|||
# setup a respawn handle
|
||||
with trio.CancelScope() as cs:
|
||||
|
||||
casc.cs = cs
|
||||
casc.complete = trio.Event()
|
||||
task_status.started(index)
|
||||
# TODO: might be better to just make a "restart" method where
|
||||
# the target task is spawned implicitly and then the event is
|
||||
# set via some higher level api? At that point we might as well
|
||||
# be writing a one-cancels-one nursery though right?
|
||||
tracker = TaskTracker(trio.Event(), cs)
|
||||
task_status.started((tracker, index))
|
||||
|
||||
profiler(f'{func_name} yield last index')
|
||||
|
||||
|
@ -366,12 +235,12 @@ async def connect_streams(
|
|||
log.debug(f"{func_name}: {processed}")
|
||||
key, output = processed
|
||||
# dst.array[-1][key] = output
|
||||
dst_shm.array[[key, 'time']][-1] = (
|
||||
dst.array[[key, 'time']][-1] = (
|
||||
output,
|
||||
# TODO: what about pushing ``time.time_ns()``
|
||||
# in which case we'll need to round at the graphics
|
||||
# processing / sampling layer?
|
||||
src_shm.array[-1]['time']
|
||||
src.array[-1]['time']
|
||||
)
|
||||
|
||||
# NOTE: for now we aren't streaming this to the consumer
|
||||
|
@ -383,7 +252,7 @@ async def connect_streams(
|
|||
# N-consumers who subscribe for the real-time output,
|
||||
# which we'll likely want to implement using local-mem
|
||||
# chans for the fan out?
|
||||
# index = src_shm.index
|
||||
# index = src.index
|
||||
# if attach_stream:
|
||||
# await client_stream.send(index)
|
||||
|
||||
|
@ -393,7 +262,7 @@ async def connect_streams(
|
|||
# log.info(f'FSP quote too fast: {hz}')
|
||||
# last = time.time()
|
||||
finally:
|
||||
casc.complete.set()
|
||||
tracker.complete.set()
|
||||
|
||||
|
||||
@tractor.context
|
||||
|
@ -404,15 +273,15 @@ async def cascade(
|
|||
# data feed key
|
||||
fqme: str,
|
||||
|
||||
# flume pair cascaded using an "edge function"
|
||||
src_flume_addr: dict,
|
||||
dst_flume_addr: dict,
|
||||
src_shm_token: dict,
|
||||
dst_shm_token: tuple[str, np.dtype],
|
||||
|
||||
ns_path: NamespacePath,
|
||||
|
||||
shm_registry: dict[str, _Token],
|
||||
|
||||
zero_on_step: bool = False,
|
||||
loglevel: str | None = None,
|
||||
loglevel: Optional[str] = None,
|
||||
|
||||
) -> None:
|
||||
'''
|
||||
|
@ -428,14 +297,8 @@ async def cascade(
|
|||
if loglevel:
|
||||
get_console_log(loglevel)
|
||||
|
||||
src: Flume = Flume.from_msg(src_flume_addr)
|
||||
dst: Flume = Flume.from_msg(
|
||||
dst_flume_addr,
|
||||
readonly=False,
|
||||
)
|
||||
|
||||
# src: ShmArray = attach_shm_array(token=src_shm_token)
|
||||
# dst: ShmArray = attach_shm_array(readonly=False, token=dst_shm_token)
|
||||
src = attach_shm_array(token=src_shm_token)
|
||||
dst = attach_shm_array(readonly=False, token=dst_shm_token)
|
||||
|
||||
reg = _load_builtins()
|
||||
lines = '\n'.join([f'{key.rpartition(":")[2]} => {key}' for key in reg])
|
||||
|
@ -443,11 +306,11 @@ async def cascade(
|
|||
f'Registered FSP set:\n{lines}'
|
||||
)
|
||||
|
||||
# NOTE XXX: update actorlocal flows table which registers
|
||||
# readonly "instances" of this fsp for symbol/source so that
|
||||
# consumer fsps can look it up by source + fsp.
|
||||
# TODO: ugh i hate this wind/unwind to list over the wire but
|
||||
# not sure how else to do it.
|
||||
# update actorlocal flows table which registers
|
||||
# readonly "instances" of this fsp for symbol/source
|
||||
# so that consumer fsps can look it up by source + fsp.
|
||||
# TODO: ugh i hate this wind/unwind to list over the wire
|
||||
# but not sure how else to do it.
|
||||
for (token, fsp_name, dst_token) in shm_registry:
|
||||
Fsp._flow_registry[(
|
||||
_Token.from_msg(token),
|
||||
|
@ -457,15 +320,12 @@ async def cascade(
|
|||
fsp: Fsp = reg.get(
|
||||
NamespacePath(ns_path)
|
||||
)
|
||||
func: Callable = fsp.func
|
||||
func = fsp.func
|
||||
|
||||
if not func:
|
||||
# TODO: assume it's a func target path
|
||||
raise ValueError(f'Unknown fsp target: {ns_path}')
|
||||
|
||||
_fqme: str = src.mkt.fqme
|
||||
assert _fqme == fqme
|
||||
|
||||
# open a data feed stream with requested broker
|
||||
feed: Feed
|
||||
async with data.feed.maybe_open_feed(
|
||||
|
@ -479,68 +339,40 @@ async def cascade(
|
|||
|
||||
) as feed:
|
||||
|
||||
flume: Flume = feed.flumes[fqme]
|
||||
# XXX: can't do this since flume.feed will be set XD
|
||||
# assert flume == src
|
||||
assert flume.mkt == src.mkt
|
||||
mkt: MktPair = flume.mkt
|
||||
|
||||
# NOTE: FOR NOW, sanity checks around the feed as being
|
||||
# always the src flume (until we get to fancier/lengthier
|
||||
# chains/graphs.
|
||||
assert src.rt_shm.token == flume.rt_shm.token
|
||||
|
||||
# XXX: won't work bc the _hist_shm_token value will be
|
||||
# list[list] after IPC..
|
||||
# assert flume.to_msg() == src_flume_addr
|
||||
|
||||
flume = feed.flumes[fqme]
|
||||
mkt = flume.mkt
|
||||
assert src.token == flume.rt_shm.token
|
||||
profiler(f'{func}: feed up')
|
||||
|
||||
func_name: str = func.__name__
|
||||
func_name = func.__name__
|
||||
async with (
|
||||
trio.open_nursery() as tn,
|
||||
trio.open_nursery() as n,
|
||||
):
|
||||
# TODO: might be better to just make a "restart" method where
|
||||
# the target task is spawned implicitly and then the event is
|
||||
# set via some higher level api? At that point we might as well
|
||||
# be writing a one-cancels-one nursery though right?
|
||||
casc = Cascade(
|
||||
src,
|
||||
dst,
|
||||
tn,
|
||||
fsp,
|
||||
)
|
||||
|
||||
# TODO: this seems like it should be wrapped somewhere?
|
||||
fsp_target = partial(
|
||||
connect_streams,
|
||||
casc=casc,
|
||||
|
||||
fsp_compute,
|
||||
mkt=mkt,
|
||||
flume=flume,
|
||||
quote_stream=flume.stream,
|
||||
|
||||
# flumes and shm passthrough
|
||||
# shm
|
||||
src=src,
|
||||
dst=dst,
|
||||
|
||||
# chain function which takes src flume input(s)
|
||||
# and renders dst flume output(s)
|
||||
edge_func=func
|
||||
# target
|
||||
func=func
|
||||
)
|
||||
async with casc.open_edge(
|
||||
bind_func=fsp_target,
|
||||
) as index:
|
||||
# casc.bind_func = fsp_target
|
||||
# index = await tn.start(fsp_target)
|
||||
dst_shm: ShmArray = dst.rt_shm
|
||||
src_shm: ShmArray = src.rt_shm
|
||||
|
||||
tracker, index = await n.start(fsp_target)
|
||||
|
||||
if zero_on_step:
|
||||
last = dst.rt_shm.array[-1:]
|
||||
last = dst.array[-1:]
|
||||
zeroed = np.zeros(last.shape, dtype=last.dtype)
|
||||
|
||||
profiler(f'{func_name}: fsp up')
|
||||
|
||||
# sync to client-side actor
|
||||
# sync client
|
||||
await ctx.started(index)
|
||||
|
||||
# XXX: rt stream with client which we MUST
|
||||
|
@ -548,26 +380,85 @@ async def cascade(
|
|||
# incremental "updates" as history prepends take
|
||||
# place.
|
||||
async with ctx.open_stream() as client_stream:
|
||||
casc.client_stream: tractor.MsgStream = client_stream
|
||||
|
||||
s, step, ld = casc.is_synced()
|
||||
# TODO: these likely should all become
|
||||
# methods of this ``TaskLifetime`` or wtv
|
||||
# abstraction..
|
||||
async def resync(
|
||||
tracker: TaskTracker,
|
||||
|
||||
) -> tuple[TaskTracker, int]:
|
||||
# TODO: adopt an incremental update engine/approach
|
||||
# where possible here eventually!
|
||||
log.info(f're-syncing fsp {func_name} to source')
|
||||
tracker.cs.cancel()
|
||||
await tracker.complete.wait()
|
||||
tracker, index = await n.start(fsp_target)
|
||||
|
||||
# always trigger UI refresh after history update,
|
||||
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
|
||||
# ``piker.ui._display.trigger_update()``.
|
||||
await client_stream.send({
|
||||
'fsp_update': {
|
||||
'key': dst_shm_token,
|
||||
'first': dst._first.value,
|
||||
'last': dst._last.value,
|
||||
}
|
||||
})
|
||||
return tracker, index
|
||||
|
||||
def is_synced(
|
||||
src: ShmArray,
|
||||
dst: ShmArray
|
||||
) -> tuple[bool, int, int]:
|
||||
'''
|
||||
Predicate to determine if a destination FSP
|
||||
output array is aligned to its source array.
|
||||
|
||||
'''
|
||||
step_diff = src.index - dst.index
|
||||
len_diff = abs(len(src.array) - len(dst.array))
|
||||
return not (
|
||||
# the source is likely backfilling and we must
|
||||
# sync history calculations
|
||||
len_diff > 2
|
||||
|
||||
# we aren't step synced to the source and may be
|
||||
# leading/lagging by a step
|
||||
or step_diff > 1
|
||||
or step_diff < 0
|
||||
), step_diff, len_diff
|
||||
|
||||
async def poll_and_sync_to_step(
|
||||
tracker: TaskTracker,
|
||||
src: ShmArray,
|
||||
dst: ShmArray,
|
||||
|
||||
) -> tuple[TaskTracker, int]:
|
||||
|
||||
synced, step_diff, _ = is_synced(src, dst)
|
||||
while not synced:
|
||||
tracker, index = await resync(tracker)
|
||||
synced, step_diff, _ = is_synced(src, dst)
|
||||
|
||||
return tracker, step_diff
|
||||
|
||||
s, step, ld = is_synced(src, dst)
|
||||
|
||||
# detect sample period step for subscription to increment
|
||||
# signal
|
||||
times = src.rt_shm.array['time']
|
||||
times = src.array['time']
|
||||
if len(times) > 1:
|
||||
last_ts = times[-1]
|
||||
delay_s: float = float(last_ts - times[times != last_ts][-1])
|
||||
delay_s = float(last_ts - times[times != last_ts][-1])
|
||||
else:
|
||||
# our default "HFT" sample rate.
|
||||
delay_s: float = _default_delay_s
|
||||
delay_s = _default_delay_s
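Minimal numpy sketch of the sample-period detection above: the step is the distance between the last timestamp and the most recent *different* timestamp, which guards against a repeated final value:

import numpy as np

times = np.array([100.0, 160.0, 220.0, 280.0, 280.0])
last_ts = times[-1]
delay_s = float(last_ts - times[times != last_ts][-1])
assert delay_s == 60.0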
|
||||
|
||||
# sub and increment the underlying shared memory buffer
|
||||
# on every step msg received from the global `samplerd`
|
||||
# service.
|
||||
async with open_sample_stream(
|
||||
float(delay_s)
|
||||
) as istream:
|
||||
async with open_sample_stream(float(delay_s)) as istream:
|
||||
|
||||
profiler(f'{func_name}: sample stream up')
|
||||
profiler.finish()
|
||||
|
@ -578,9 +469,13 @@ async def cascade(
|
|||
# respawn the compute task if the source
|
||||
# array has been updated such that we compute
|
||||
# new history from the (prepended) source.
|
||||
synced, step_diff, _ = casc.is_synced()
|
||||
synced, step_diff, _ = is_synced(src, dst)
|
||||
if not synced:
|
||||
step_diff: int = await casc.poll_and_sync_to_step()
|
||||
tracker, step_diff = await poll_and_sync_to_step(
|
||||
tracker,
|
||||
src,
|
||||
dst,
|
||||
)
|
||||
|
||||
# skip adding a last bar since we should already
|
||||
# be step aligned
|
||||
|
@ -588,7 +483,7 @@ async def cascade(
|
|||
continue
|
||||
|
||||
# read out last shm row, copy and write new row
|
||||
array = dst_shm.array
|
||||
array = dst.array
|
||||
|
||||
# some metrics like vlm should be reset
|
||||
# to zero every step.
|
||||
|
@ -597,14 +492,14 @@ async def cascade(
|
|||
else:
|
||||
last = array[-1:].copy()
|
||||
|
||||
dst.rt_shm.push(last)
|
||||
dst.push(last)
|
||||
|
||||
# sync with source buffer's time step
|
||||
src_l2 = src_shm.array[-2:]
|
||||
src_l2 = src.array[-2:]
|
||||
src_li, src_lt = src_l2[-1][['index', 'time']]
|
||||
src_2li, src_2lt = src_l2[-2][['index', 'time']]
|
||||
dst_shm._array['time'][src_li] = src_lt
|
||||
dst_shm._array['time'][src_2li] = src_2lt
|
||||
dst._array['time'][src_li] = src_lt
|
||||
dst._array['time'][src_2li] = src_2lt
|
||||
|
||||
# last2 = dst.array[-2:]
|
||||
# if (
|
||||
|
|
|
@ -14,45 +14,49 @@
|
|||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
Actor runtime primitives and (distributed) service APIs for,
|
||||
"""
|
||||
Actor-runtime service orchestration machinery.
|
||||
|
||||
- daemon-service mgmt: `_daemon` (i.e. low-level spawn and supervise machinery
|
||||
for sub-actors like `brokerd`, `emsd`, datad`, etc.)
|
||||
"""
|
||||
from __future__ import annotations
|
||||
|
||||
- service-actor supervision (via `trio` tasks) API: `._mngr`
|
||||
|
||||
- discovery interface (via light wrapping around `tractor`'s built-in
|
||||
prot): `._registry`
|
||||
|
||||
- `docker` cntr SC supervision for use with `trio`: `_ahab`
|
||||
- wrappers for marketstore and elasticsearch dbs
|
||||
=> TODO: maybe to (re)move elsewhere?
|
||||
|
||||
'''
|
||||
from ._mngr import Services as Services
|
||||
from ._registry import (
|
||||
_tractor_kwargs as _tractor_kwargs,
|
||||
_default_reg_addr as _default_reg_addr,
|
||||
_default_registry_host as _default_registry_host,
|
||||
_default_registry_port as _default_registry_port,
|
||||
|
||||
open_registry as open_registry,
|
||||
find_service as find_service,
|
||||
check_for_service as check_for_service,
|
||||
from ._mngr import Services
|
||||
from ._registry import ( # noqa
|
||||
_tractor_kwargs,
|
||||
_default_reg_addr,
|
||||
_default_registry_host,
|
||||
_default_registry_port,
|
||||
open_registry,
|
||||
find_service,
|
||||
check_for_service,
|
||||
)
|
||||
from ._daemon import (
|
||||
maybe_spawn_daemon as maybe_spawn_daemon,
|
||||
spawn_emsd as spawn_emsd,
|
||||
maybe_open_emsd as maybe_open_emsd,
|
||||
from ._daemon import ( # noqa
|
||||
maybe_spawn_daemon,
|
||||
spawn_emsd,
|
||||
maybe_open_emsd,
|
||||
)
|
||||
from ._actor_runtime import (
|
||||
open_piker_runtime as open_piker_runtime,
|
||||
maybe_open_pikerd as maybe_open_pikerd,
|
||||
open_pikerd as open_pikerd,
|
||||
get_runtime_vars as get_runtime_vars,
|
||||
open_piker_runtime,
|
||||
maybe_open_pikerd,
|
||||
open_pikerd,
|
||||
get_tractor_runtime_kwargs,
|
||||
)
|
||||
from ..brokers._daemon import (
|
||||
spawn_brokerd as spawn_brokerd,
|
||||
maybe_spawn_brokerd as maybe_spawn_brokerd,
|
||||
spawn_brokerd,
|
||||
maybe_spawn_brokerd,
|
||||
)
|
||||
|
||||
|
||||
__all__ = [
|
||||
'check_for_service',
|
||||
'Services',
|
||||
'maybe_spawn_daemon',
|
||||
'spawn_brokerd',
|
||||
'maybe_spawn_brokerd',
|
||||
'spawn_emsd',
|
||||
'maybe_open_emsd',
|
||||
'open_piker_runtime',
|
||||
'maybe_open_pikerd',
|
||||
'open_pikerd',
|
||||
'get_tractor_runtime_kwargs',
|
||||
]
|
||||
|
|
|
@ -45,7 +45,7 @@ from ._registry import ( # noqa
|
|||
)
|
||||
|
||||
|
||||
def get_runtime_vars() -> dict[str, Any]:
|
||||
def get_tractor_runtime_kwargs() -> dict[str, Any]:
|
||||
'''
|
||||
Deliver ``tractor`` related runtime variables in a `dict`.
|
||||
|
||||
|
@ -56,8 +56,6 @@ def get_runtime_vars() -> dict[str, Any]:
|
|||
@acm
|
||||
async def open_piker_runtime(
|
||||
name: str,
|
||||
registry_addrs: list[tuple[str, int]] = [],
|
||||
|
||||
enable_modules: list[str] = [],
|
||||
loglevel: Optional[str] = None,
|
||||
|
||||
|
@ -65,6 +63,8 @@ async def open_piker_runtime(
|
|||
# for data daemons when running in production.
|
||||
debug_mode: bool = False,
|
||||
|
||||
registry_addr: None | tuple[str, int] = None,
|
||||
|
||||
# TODO: once we have `rsyscall` support we will read a config
|
||||
# and spawn the service tree distributed per that.
|
||||
start_method: str = 'trio',
|
||||
|
@ -74,7 +74,7 @@ async def open_piker_runtime(
|
|||
|
||||
) -> tuple[
|
||||
tractor.Actor,
|
||||
list[tuple[str, int]],
|
||||
tuple[str, int],
|
||||
]:
|
||||
'''
|
||||
Start a piker actor whose runtime will automatically sync with
|
||||
|
@ -84,31 +84,21 @@ async def open_piker_runtime(
|
|||
a root actor.
|
||||
|
||||
'''
|
||||
# check for existing runtime, boot it
|
||||
# if not already running.
|
||||
try:
|
||||
actor = tractor.current_actor()
|
||||
# check for existing runtime
|
||||
actor = tractor.current_actor().uid
|
||||
|
||||
except tractor._exceptions.NoRuntime:
|
||||
tractor._state._runtime_vars[
|
||||
'piker_vars'
|
||||
] = tractor_runtime_overrides
|
||||
'piker_vars'] = tractor_runtime_overrides
|
||||
|
||||
# NOTE: if no registrar list is passed, use the default of just
|
||||
# setting it as the root actor on localhost.
|
||||
registry_addrs = (
|
||||
registry_addrs
|
||||
or [_default_reg_addr]
|
||||
)
|
||||
|
||||
if ems := tractor_kwargs.pop('enable_modules', None):
|
||||
# import pdbp; pdbp.set_trace()
|
||||
enable_modules.extend(ems)
|
||||
registry_addr = registry_addr or _default_reg_addr
|
||||
|
||||
async with (
|
||||
tractor.open_root_actor(
|
||||
|
||||
# passed through to ``open_root_actor``
|
||||
registry_addrs=registry_addrs,
|
||||
arbiter_addr=registry_addr,
|
||||
name=name,
|
||||
loglevel=loglevel,
|
||||
debug_mode=debug_mode,
|
||||
|
@ -120,30 +110,24 @@ async def open_piker_runtime(
|
|||
enable_modules=enable_modules,
|
||||
|
||||
**tractor_kwargs,
|
||||
) as actor,
|
||||
) as _,
|
||||
|
||||
open_registry(
|
||||
registry_addrs,
|
||||
ensure_exists=False,
|
||||
) as addrs,
|
||||
open_registry(registry_addr, ensure_exists=False) as addr,
|
||||
):
|
||||
assert actor is tractor.current_actor()
|
||||
yield (
|
||||
actor,
|
||||
addrs,
|
||||
tractor.current_actor(),
|
||||
addr,
|
||||
)
|
||||
else:
|
||||
async with open_registry(
|
||||
registry_addrs
|
||||
) as addrs:
|
||||
async with open_registry(registry_addr) as addr:
|
||||
yield (
|
||||
actor,
|
||||
addrs,
|
||||
addr,
|
||||
)
|
||||
|
||||
|
||||
_root_dname: str = 'pikerd'
|
||||
_root_modules: list[str] = [
|
||||
_root_dname = 'pikerd'
|
||||
_root_modules = [
|
||||
__name__,
|
||||
'piker.service._daemon',
|
||||
'piker.brokers._daemon',
|
||||
|
@ -157,13 +141,13 @@ _root_modules: list[str] = [
|
|||
|
||||
@acm
|
||||
async def open_pikerd(
|
||||
registry_addrs: list[tuple[str, int]],
|
||||
|
||||
loglevel: str | None = None,
|
||||
|
||||
# XXX: you should pretty much never want debug mode
|
||||
# for data daemons when running in production.
|
||||
debug_mode: bool = False,
|
||||
registry_addr: None | tuple[str, int] = None,
|
||||
|
||||
**kwargs,
|
||||
|
||||
|
@ -175,37 +159,27 @@ async def open_pikerd(
|
|||
alive underling services (see below).
|
||||
|
||||
'''
|
||||
# NOTE: for the root daemon we always enable the root
|
||||
# mod set and we `list.extend()` it into wtv the
|
||||
# caller requested.
|
||||
# TODO: make this mod set more strict?
|
||||
# -[ ] eventually we should be able to avoid
|
||||
# having the root have more then permissions to spawn other
|
||||
# specialized daemons I think?
|
||||
ems: list[str] = kwargs.setdefault('enable_modules', [])
|
||||
ems.extend(_root_modules)
|
||||
|
||||
async with (
|
||||
open_piker_runtime(
|
||||
|
||||
name=_root_dname,
|
||||
# TODO: eventually we should be able to avoid
|
||||
# having the root have more than permissions to
|
||||
# spawn other specialized daemons I think?
|
||||
enable_modules=_root_modules,
|
||||
loglevel=loglevel,
|
||||
debug_mode=debug_mode,
|
||||
registry_addrs=registry_addrs,
|
||||
registry_addr=registry_addr,
|
||||
|
||||
**kwargs,
|
||||
|
||||
) as (
|
||||
root_actor,
|
||||
reg_addrs,
|
||||
),
|
||||
) as (root_actor, reg_addr),
|
||||
tractor.open_nursery() as actor_nursery,
|
||||
trio.open_nursery() as service_nursery,
|
||||
):
|
||||
for addr in reg_addrs:
|
||||
if addr not in root_actor.accept_addrs:
|
||||
if root_actor.accept_addr != reg_addr:
|
||||
raise RuntimeError(
|
||||
f'`pikerd` failed to bind on {addr}!\n'
|
||||
f'`pikerd` failed to bind on {reg_addr}!\n'
|
||||
'Maybe you have another daemon already running?'
|
||||
)
|
||||
|
||||
|
@ -251,9 +225,9 @@ async def open_pikerd(
|
|||
|
||||
@acm
|
||||
async def maybe_open_pikerd(
|
||||
registry_addrs: list[tuple[str, int]] | None = None,
|
||||
loglevel: Optional[str] = None,
|
||||
registry_addr: None | tuple = None,
|
||||
|
||||
loglevel: str | None = None,
|
||||
**kwargs,
|
||||
|
||||
) -> tractor._portal.Portal | ClassVar[Services]:
|
||||
|
@ -279,51 +253,32 @@ async def maybe_open_pikerd(
|
|||
# async with open_portal(chan) as arb_portal:
|
||||
# yield arb_portal
|
||||
|
||||
registry_addrs: list[tuple[str, int]] = (
|
||||
registry_addrs
|
||||
or [_default_reg_addr]
|
||||
)
|
||||
|
||||
pikerd_portal: tractor.Portal | None
|
||||
async with (
|
||||
open_piker_runtime(
|
||||
name=query_name,
|
||||
registry_addrs=registry_addrs,
|
||||
registry_addr=registry_addr,
|
||||
loglevel=loglevel,
|
||||
**kwargs,
|
||||
) as (actor, addrs),
|
||||
):
|
||||
if _root_dname in actor.uid:
|
||||
yield None
|
||||
return
|
||||
) as _,
|
||||
|
||||
# NOTE: IFF running in disti mode, try to attach to any
|
||||
# existing (host-local) `pikerd`.
|
||||
else:
|
||||
async with tractor.find_actor(
|
||||
tractor.find_actor(
|
||||
_root_dname,
|
||||
registry_addrs=registry_addrs,
|
||||
only_first=True,
|
||||
# raise_on_none=True,
|
||||
) as pikerd_portal:
|
||||
|
||||
# connect to any existing remote daemon presuming its
|
||||
# registry socket was selected.
|
||||
if pikerd_portal is not None:
|
||||
|
||||
# sanity check that we are actually connecting to
|
||||
# a remote process and not ourselves.
|
||||
assert actor.uid != pikerd_portal.channel.uid
|
||||
assert registry_addrs
|
||||
|
||||
yield pikerd_portal
|
||||
arbiter_sockaddr=registry_addr,
|
||||
) as portal
|
||||
):
|
||||
# connect to any existing daemon presuming
|
||||
# its registry socket was selected.
|
||||
if (
|
||||
portal is not None
|
||||
):
|
||||
yield portal
|
||||
return
|
||||
|
||||
# presume pikerd role since no daemon could be found at
|
||||
# configured address
|
||||
async with open_pikerd(
|
||||
loglevel=loglevel,
|
||||
registry_addrs=registry_addrs,
|
||||
registry_addr=registry_addr,
|
||||
|
||||
# passthrough to ``tractor`` init
|
||||
**kwargs,
|
||||
|
|
|
@ -15,8 +15,8 @@
|
|||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
Supervisor for ``docker`` with included async and SC wrapping to
|
||||
ensure a cancellable container lifetime system.
|
||||
Supervisor for ``docker`` with included async and SC wrapping
|
||||
to ensure a cancellable container lifetime system.
|
||||
|
||||
'''
|
||||
from __future__ import annotations
|
||||
|
|
|
@ -70,10 +70,7 @@ async def maybe_spawn_daemon(
|
|||
lock = Services.locks[service_name]
|
||||
await lock.acquire()
|
||||
|
||||
async with find_service(
|
||||
service_name,
|
||||
registry_addrs=[('127.0.0.1', 6116)],
|
||||
) as portal:
|
||||
async with find_service(service_name) as portal:
|
||||
if portal is not None:
|
||||
lock.release()
|
||||
yield portal
|
||||
|
|
|
@ -27,12 +27,6 @@ from typing import (
|
|||
import trio
|
||||
from trio_typing import TaskStatus
|
||||
import tractor
|
||||
from tractor import (
|
||||
current_actor,
|
||||
ContextCancelled,
|
||||
Context,
|
||||
Portal,
|
||||
)
|
||||
|
||||
from ._util import (
|
||||
log, # sub-sys logger
|
||||
|
@ -44,8 +38,6 @@ from ._util import (
|
|||
# library.
|
||||
# - wrap a "remote api" wherein you can get a method proxy
|
||||
# to the pikerd actor for starting services remotely!
|
||||
# - prolly rename this to ActorServicesNursery since it spawns
|
||||
# new actors and supervises them to completion?
|
||||
class Services:
|
||||
|
||||
actor_n: tractor._supervise.ActorNursery
|
||||
|
@ -55,7 +47,7 @@ class Services:
|
|||
str,
|
||||
tuple[
|
||||
trio.CancelScope,
|
||||
Portal,
|
||||
tractor.Portal,
|
||||
trio.Event,
|
||||
]
|
||||
] = {}
|
||||
|
@ -65,12 +57,12 @@ class Services:
|
|||
async def start_service_task(
|
||||
self,
|
||||
name: str,
|
||||
portal: Portal,
|
||||
portal: tractor.Portal,
|
||||
target: Callable,
|
||||
allow_overruns: bool = False,
|
||||
**ctx_kwargs,
|
||||
|
||||
) -> (trio.CancelScope, Context):
|
||||
) -> (trio.CancelScope, tractor.Context):
|
||||
'''
|
||||
Open a context in a service sub-actor, add to a stack
|
||||
that gets unwound at ``pikerd`` teardown.
|
||||
|
@ -109,30 +101,13 @@ class Services:
|
|||
# wait on any context's return value
|
||||
# and any final portal result from the
|
||||
# sub-actor.
|
||||
ctx_res: Any = await ctx.result()
|
||||
ctx_res = await ctx.result()
|
||||
|
||||
# NOTE: blocks indefinitely until cancelled
|
||||
# either by error from the target context
|
||||
# function or by being cancelled here by the
|
||||
# surrounding cancel scope.
|
||||
return (await portal.result(), ctx_res)
|
||||
except ContextCancelled as ctxe:
|
||||
canceller: tuple[str, str] = ctxe.canceller
|
||||
our_uid: tuple[str, str] = current_actor().uid
|
||||
if (
|
||||
canceller != portal.channel.uid
|
||||
and
|
||||
canceller != our_uid
|
||||
):
|
||||
log.cancel(
|
||||
f'Actor-service {name} was remotely cancelled?\n'
|
||||
f'remote canceller: {canceller}\n'
|
||||
f'Keeping {our_uid} alive, ignoring sub-actor cancel..\n'
|
||||
)
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
|
||||
finally:
|
||||
await portal.cancel_actor()
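Rough sketch of the cancellation-source check above: a remote ``ContextCancelled`` is only swallowed when it was requested by a third party, i.e. neither this actor nor the service sub-actor itself (the uids below are made up):

def should_swallow_cancel(
    canceller: tuple[str, str],
    service_uid: tuple[str, str],
    our_uid: tuple[str, str],
) -> bool:
    # ignore the cancel (keep ourselves alive) only when some
    # *other* actor requested it
    return (
        canceller != service_uid
        and canceller != our_uid
    )

assert should_swallow_cancel(('gui', 'abc'), ('brokerd.ib', 'x'), ('pikerd', 'y'))
assert not should_swallow_cancel(('pikerd', 'y'), ('brokerd.ib', 'x'), ('pikerd', 'y'))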
|
||||
|
|
|
@ -27,7 +27,6 @@ from typing import (
|
|||
)
|
||||
|
||||
import tractor
|
||||
from tractor import Portal
|
||||
|
||||
from ._util import (
|
||||
log, # sub-sys logger
|
||||
|
@ -47,9 +46,7 @@ _registry: Registry | None = None
|
|||
|
||||
|
||||
class Registry:
|
||||
# TODO: should this be a set or should we complain
|
||||
# on duplicates?
|
||||
addrs: list[tuple[str, int]] = []
|
||||
addr: None | tuple[str, int] = None
|
||||
|
||||
# TODO: table of uids to sockaddrs
|
||||
peers: dict[
|
||||
|
@ -63,115 +60,69 @@ _tractor_kwargs: dict[str, Any] = {}
|
|||
|
||||
@acm
|
||||
async def open_registry(
|
||||
addrs: list[tuple[str, int]],
|
||||
addr: None | tuple[str, int] = None,
|
||||
ensure_exists: bool = True,
|
||||
|
||||
) -> list[tuple[str, int]]:
|
||||
'''
|
||||
Open the service-actor-discovery registry by returning a set of
|
||||
transport socket-addrs to registrar actors which may be
|
||||
contacted and queried for similar addresses for other
|
||||
non-registrar actors.
|
||||
) -> tuple[str, int]:
|
||||
|
||||
'''
|
||||
global _tractor_kwargs
|
||||
actor = tractor.current_actor()
|
||||
uid = actor.uid
|
||||
preset_reg_addrs: list[tuple[str, int]] = Registry.addrs
|
||||
if (
|
||||
preset_reg_addrs
|
||||
and addrs
|
||||
Registry.addr is not None
|
||||
and addr
|
||||
):
|
||||
if preset_reg_addrs != addrs:
|
||||
# if any(addr in preset_reg_addrs for addr in addrs):
|
||||
diff: set[tuple[str, int]] = set(preset_reg_addrs) - set(addrs)
|
||||
if diff:
|
||||
log.warning(
|
||||
f'`{uid}` requested only subset of registrars: {addrs}\n'
|
||||
f'However there are more @{diff}'
|
||||
)
|
||||
else:
|
||||
raise RuntimeError(
|
||||
f'`{uid}` has non-matching registrar addresses?\n'
|
||||
f'request: {addrs}\n'
|
||||
f'already set: {preset_reg_addrs}'
|
||||
f'`{uid}` registry addr already bound @ {_registry.sockaddr}'
|
||||
)
|
||||
|
||||
was_set: bool = False
|
||||
|
||||
if (
|
||||
not tractor.is_root_process()
|
||||
and not Registry.addrs
|
||||
and Registry.addr is None
|
||||
):
|
||||
Registry.addrs.extend(actor.reg_addrs)
|
||||
Registry.addr = actor._arb_addr
|
||||
|
||||
if (
|
||||
ensure_exists
|
||||
and not Registry.addrs
|
||||
and Registry.addr is None
|
||||
):
|
||||
raise RuntimeError(
|
||||
f"`{uid}` registry should already exist but doesn't?"
|
||||
f"`{uid}` registry should already exist bug doesn't?"
|
||||
)
|
||||
|
||||
if (
|
||||
not Registry.addrs
|
||||
Registry.addr is None
|
||||
):
|
||||
was_set = True
|
||||
Registry.addrs = addrs or [_default_reg_addr]
|
||||
Registry.addr = addr or _default_reg_addr
|
||||
|
||||
# NOTE: the only spot this seems currently used is inside
|
||||
# `.ui._exec` which is the (eventual qtloops) bootstrapping
|
||||
# with guest mode.
|
||||
_tractor_kwargs['registry_addrs'] = Registry.addrs
|
||||
_tractor_kwargs['arbiter_addr'] = Registry.addr
|
||||
|
||||
try:
|
||||
yield Registry.addrs
|
||||
yield Registry.addr
|
||||
finally:
|
||||
# XXX: always clear the global addr if we set it so that the
|
||||
# next (set of) calls will apply whatever new one is passed
|
||||
# in.
|
||||
if was_set:
|
||||
Registry.addrs = None
|
||||
Registry.addr = None
|
||||
|
||||
|
||||
@acm
|
||||
async def find_service(
|
||||
service_name: str,
|
||||
registry_addrs: list[tuple[str, int]] | None = None,
|
||||
) -> tractor.Portal | None:
|
||||
|
||||
first_only: bool = True,
|
||||
|
||||
) -> (
|
||||
Portal
|
||||
| list[Portal]
|
||||
| None
|
||||
):
|
||||
|
||||
reg_addrs: list[tuple[str, int]]
|
||||
async with open_registry(
|
||||
addrs=(
|
||||
registry_addrs
|
||||
# NOTE: if no addr set is passed assume the registry has
|
||||
# already been opened and use the previously applied
|
||||
# startup set.
|
||||
or Registry.addrs
|
||||
),
|
||||
) as reg_addrs:
|
||||
async with open_registry() as reg_addr:
|
||||
log.info(f'Scanning for service `{service_name}`')
|
||||
|
||||
maybe_portals: list[Portal] | Portal | None
|
||||
|
||||
# attach to existing daemon by name if possible
|
||||
async with tractor.find_actor(
|
||||
service_name,
|
||||
registry_addrs=reg_addrs,
|
||||
only_first=first_only, # if set only returns single ref
|
||||
) as maybe_portals:
|
||||
if not maybe_portals:
|
||||
yield None
|
||||
return
|
||||
|
||||
yield maybe_portals
|
||||
arbiter_sockaddr=reg_addr,
|
||||
) as maybe_portal:
|
||||
yield maybe_portal
|
||||
|
||||
|
||||
async def check_for_service(
|
||||
|
@ -182,11 +133,9 @@ async def check_for_service(
|
|||
Service daemon "liveness" predicate.
|
||||
|
||||
'''
|
||||
async with (
|
||||
open_registry(ensure_exists=False) as reg_addr,
|
||||
tractor.query_actor(
|
||||
async with open_registry(ensure_exists=False) as reg_addr:
|
||||
async with tractor.query_actor(
|
||||
service_name,
|
||||
arbiter_sockaddr=reg_addr,
|
||||
) as sockaddr,
|
||||
):
|
||||
) as sockaddr:
|
||||
return sockaddr
|
||||
|
|
|
@ -139,13 +139,6 @@ class StorageClient(
|
|||
...
|
||||
|
||||
|
||||
class TimeseriesNotFound(Exception):
|
||||
'''
|
||||
No timeseries entry can be found for this backend.
|
||||
|
||||
'''
|
||||
|
||||
|
||||
class StorageConnectionError(ConnectionError):
|
||||
'''
|
||||
Can't connect to the desired tsdb subsys/service.
|
||||
|
@ -176,13 +169,10 @@ async def open_storage_client(
|
|||
tsdb_host: str = 'localhost'
|
||||
|
||||
# load root config and any tsdb user defined settings
|
||||
conf, path = config.load(
|
||||
conf_name='conf',
|
||||
touch_if_dne=True,
|
||||
)
|
||||
conf, path = config.load('conf', touch_if_dne=True)
|
||||
|
||||
# TODO: maybe not under a "network" section.. since
|
||||
# no more chitty `marketstore`..
|
||||
# no more chitty mkts..
|
||||
tsdbconf: dict = {}
|
||||
service_section = conf.get('service')
|
||||
if (
|
||||
|
@ -193,11 +183,8 @@ async def open_storage_client(
|
|||
|
||||
# lookup backend tsdb module by name and load any user service
|
||||
# settings for connecting to the tsdb service.
|
||||
backend: str = tsdbconf.pop(
|
||||
'name',
|
||||
def_backend,
|
||||
)
|
||||
tsdb_host: str = tsdbconf.get('maddrs', [])
|
||||
backend: str = tsdbconf.pop('backend')
|
||||
tsdb_host: str = tsdbconf['host']
|
||||
|
||||
if backend is None:
|
||||
backend: str = def_backend
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
# piker: trading gear for hackers
|
||||
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
|
||||
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
|
||||
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as published by
|
||||
|
@ -19,18 +19,10 @@ Storage middle-ware CLIs.
|
|||
|
||||
"""
|
||||
from __future__ import annotations
|
||||
# from datetime import datetime
|
||||
# from contextlib import (
|
||||
# AsyncExitStack,
|
||||
# )
|
||||
from pathlib import Path
|
||||
from math import copysign
|
||||
import time
|
||||
from types import ModuleType
|
||||
from typing import (
|
||||
Any,
|
||||
TYPE_CHECKING,
|
||||
)
|
||||
from typing import Generator
|
||||
# from typing import TYPE_CHECKING
|
||||
|
||||
import polars as pl
|
||||
import numpy as np
|
||||
|
@ -43,21 +35,24 @@ import typer
|
|||
|
||||
from piker.service import open_piker_runtime
|
||||
from piker.cli import cli
|
||||
from piker.config import get_conf_dir
|
||||
from piker.data import (
|
||||
maybe_open_shm_array,
|
||||
def_iohlcv_fields,
|
||||
ShmArray,
|
||||
)
|
||||
from piker import tsp
|
||||
from piker.data._formatters import BGM
|
||||
from . import log
|
||||
from piker.data.history import (
|
||||
_default_hist_size,
|
||||
_default_rt_size,
|
||||
)
|
||||
from . import (
|
||||
log,
|
||||
)
|
||||
from . import (
|
||||
__tsdbs__,
|
||||
open_storage_client,
|
||||
StorageClient,
|
||||
)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from piker.ui._remote_ctl import AnnotCtl
|
||||
|
||||
|
||||
store = typer.Typer()
|
||||
|
||||
|
@ -103,18 +98,6 @@ def ls(
|
|||
trio.run(query_all)
|
||||
|
||||
|
||||
# TODO: like ls but takes in a pattern and matches
|
||||
# @store.command()
|
||||
# def search(
|
||||
# patt: str,
|
||||
# backends: list[str] = typer.Argument(
|
||||
# default=None,
|
||||
# help='Storage backends to query, default is all.'
|
||||
# ),
|
||||
# ):
|
||||
# ...
|
||||
|
||||
|
||||
@store.command()
|
||||
def delete(
|
||||
symbols: list[str],
|
||||
|
@ -157,33 +140,20 @@ def delete(
|
|||
def anal(
|
||||
fqme: str,
|
||||
period: int = 60,
|
||||
pdb: bool = False,
|
||||
|
||||
) -> np.ndarray:
|
||||
'''
|
||||
Anal-ysis is when you take the data do stuff to it.
|
||||
|
||||
NOTE: This ONLY loads the offline timeseries data (by default
|
||||
from a parquet file) NOT the in-shm version you might be seeing
|
||||
in a chart.
|
||||
|
||||
'''
|
||||
async def main():
|
||||
async with (
|
||||
open_piker_runtime(
|
||||
# are you a bear or boi?
|
||||
'tsdb_polars_anal',
|
||||
debug_mode=pdb,
|
||||
),
|
||||
open_storage_client() as (
|
||||
mod,
|
||||
client,
|
||||
debug_mode=True,
|
||||
),
|
||||
open_storage_client() as (mod, client),
|
||||
):
|
||||
syms: list[str] = await client.list_keys()
|
||||
log.info(f'{len(syms)} FOUND for {mod.name}')
|
||||
print(f'{len(syms)} FOUND for {mod.name}')
|
||||
|
||||
history: ShmArray # np buffer format
|
||||
(
|
||||
history,
|
||||
first_dt,
|
||||
|
@ -194,357 +164,179 @@ def anal(
|
|||
)
|
||||
assert first_dt < last_dt
|
||||
|
||||
null_segs: tuple = tsp.get_null_segs(
|
||||
frame=history,
|
||||
period=period,
|
||||
)
|
||||
# TODO: do tsp queries to the backend to fill in missing
|
||||
# history and then prolly write it to tsdb!
|
||||
src_df = await client.as_df(fqme, period)
|
||||
from piker.data import _timeseries as tsmod
|
||||
df: pl.DataFrame = tsmod.with_dts(src_df)
|
||||
gaps: pl.DataFrame = tsmod.detect_time_gaps(df)
|
||||
|
||||
shm_df: pl.DataFrame = await client.as_df(
|
||||
fqme,
|
||||
period,
|
||||
)
|
||||
if not gaps.is_empty():
|
||||
print(f'Gaps found:\n{gaps}')
|
||||
|
||||
df: pl.DataFrame # with dts
|
||||
deduped: pl.DataFrame # deduplicated dts
|
||||
(
|
||||
df,
|
||||
deduped,
|
||||
diff,
|
||||
) = tsp.dedupe(
|
||||
shm_df,
|
||||
period=period,
|
||||
)
|
||||
|
||||
write_edits: bool = True
|
||||
if (
|
||||
write_edits
|
||||
and (
|
||||
diff
|
||||
or null_segs
|
||||
)
|
||||
):
|
||||
await tractor.pause()
|
||||
await client.write_ohlcv(
|
||||
fqme,
|
||||
ohlcv=deduped,
|
||||
timeframe=period,
|
||||
)
|
||||
|
||||
else:
|
||||
# TODO: something better with tab completion..
|
||||
# is there something more minimal but nearly as
|
||||
# functional as ipython?
|
||||
await tractor.pause()
|
||||
assert not null_segs
|
||||
|
||||
trio.run(main)
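# NOTE: a tiny, self-contained sketch of the same dedupe +
# step-gap detection flow driven above via `tsp.dedupe()` and
# `tsp.detect_time_gaps()`; the toy OHLC frame and the 60s
# period are purely illustrative assumptions.
import polars as pl

period_s: float = 60.
src = pl.DataFrame({
    # one duplicated stamp (60.) and one missing bar before 240.
    'time': [0., 60., 60., 240.],
    'open': [1., 2., 2., 3.],
    'close': [1.5, 2.5, 2.5, 3.5],
})

# drop duplicate sample stamps, keeping input order, then sort
deduped: pl.DataFrame = src.unique(
    subset=['time'],
    maintain_order=True,
).sort(by='time')
diff: int = src.height - deduped.height  # -> 1 dupe removed

# flag any step larger than the expected sample period
gaps: pl.DataFrame = deduped.with_columns(
    pl.col('time').diff().alias('s_diff'),
).filter(
    pl.col('s_diff').abs() > period_s
)
print(f'{diff} dupes dropped, gap rows:\n{gaps}')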
|
||||
|
||||
|
||||
async def markup_gaps(
|
||||
fqme: str,
|
||||
timeframe: float,
|
||||
actl: AnnotCtl,
|
||||
wdts: pl.DataFrame,
|
||||
gaps: pl.DataFrame,
|
||||
def iter_dfs_from_shms(fqme: str) -> Generator[
|
||||
tuple[Path, ShmArray, pl.DataFrame],
|
||||
None,
|
||||
None,
|
||||
]:
|
||||
# shm buffer size table based on known sample rates
|
||||
sizes: dict[str, int] = {
|
||||
'hist': _default_hist_size,
|
||||
'rt': _default_rt_size,
|
||||
}
|
||||
|
||||
) -> dict[int, dict]:
|
||||
'''
|
||||
Remote annotate time-gaps in a dt-fielded ts (normally OHLC)
|
||||
with rectangles.
|
||||
# load all detected shm buffer files which have the
|
||||
# passed FQME pattern in the file name.
|
||||
shmfiles: list[Path] = []
|
||||
shmdir = Path('/dev/shm/')
|
||||
|
||||
'''
|
||||
aids: dict[int] = {}
|
||||
for i in range(gaps.height):
|
||||
for shmfile in shmdir.glob(f'*{fqme}*'):
|
||||
filename: str = shmfile.name
|
||||
|
||||
row: pl.DataFrame = gaps[i]
|
||||
# skip index files
|
||||
if (
|
||||
'_first' in filename
|
||||
or '_last' in filename
|
||||
):
|
||||
continue
|
||||
|
||||
# the gap's RIGHT-most bar's OPEN value
|
||||
# at that time (sample) step.
|
||||
iend: int = row['index'][0]
|
||||
# dt: datetime = row['dt'][0]
|
||||
# dt_prev: datetime = row['dt_prev'][0]
|
||||
# dt_end_t: float = dt.timestamp()
|
||||
assert shmfile.is_file()
|
||||
log.debug(f'Found matching shm buffer file: {filename}')
|
||||
shmfiles.append(shmfile)
|
||||
|
||||
for shmfile in shmfiles:
|
||||
|
||||
# TODO: can we eventually remove this
|
||||
# once we figure out why the epoch cols
|
||||
# don't match?
|
||||
# TODO: FIX HOW/WHY these aren't matching
|
||||
# and are instead off by 4hours (EST
|
||||
# vs. UTC?!?!)
|
||||
# end_t: float = row['time']
|
||||
# assert (
|
||||
# dt.timestamp()
|
||||
# ==
|
||||
# end_t
|
||||
# )
|
||||
# lookup array buffer size based on file suffix
|
||||
# being either .rt or .hist
|
||||
key: str = shmfile.name.rsplit('.')[-1]
|
||||
|
||||
# the gap's LEFT-most bar's CLOSE value
|
||||
# at that time (sample) step.
|
||||
prev_r: pl.DataFrame = wdts.filter(
|
||||
pl.col('index') == iend - 1
|
||||
# skip FSP buffers for now..
|
||||
if key not in sizes:
|
||||
continue
|
||||
|
||||
size: int = sizes[key]
|
||||
|
||||
# attach to any shm buffer, load array into polars df,
|
||||
# write to local parquet file.
|
||||
shm, opened = maybe_open_shm_array(
|
||||
key=shmfile.name,
|
||||
size=size,
|
||||
dtype=def_iohlcv_fields,
|
||||
readonly=True,
|
||||
)
|
||||
# XXX: probably a gap in the (newly sorted or de-duplicated)
|
||||
# dt-df, so we might need to re-index first..
|
||||
if prev_r.is_empty():
|
||||
await tractor.pause()
|
||||
assert not opened
|
||||
ohlcv = shm.array
|
||||
|
||||
istart: int = prev_r['index'][0]
|
||||
# dt_start_t: float = dt_prev.timestamp()
|
||||
start = time.time()
|
||||
|
||||
# start_t: float = prev_r['time']
|
||||
# assert (
|
||||
# dt_start_t
|
||||
# ==
|
||||
# start_t
|
||||
# )
|
||||
|
||||
# TODO: implement px-col width measure
|
||||
# and ensure at least as many px-cols
|
||||
# shown per rect as configured by user.
|
||||
# gap_w: float = abs((iend - istart))
|
||||
# if gap_w < 6:
|
||||
# margin: float = 6
|
||||
# iend += margin
|
||||
# istart -= margin
|
||||
|
||||
rect_gap: float = BGM*3/8
|
||||
opn: float = row['open'][0]
|
||||
ro: tuple[float, float] = (
|
||||
# dt_end_t,
|
||||
iend + rect_gap + 1,
|
||||
opn,
|
||||
# XXX: thanks to this SO answer for this conversion tip:
|
||||
# https://stackoverflow.com/a/72054819
|
||||
df = pl.DataFrame({
|
||||
field_name: ohlcv[field_name]
|
||||
for field_name in ohlcv.dtype.fields
|
||||
})
|
||||
delay: float = round(
|
||||
time.time() - start,
|
||||
ndigits=6,
|
||||
)
|
||||
cls: float = prev_r['close'][0]
|
||||
lc: tuple[float, float] = (
|
||||
# dt_start_t,
|
||||
istart - rect_gap, # + 1 ,
|
||||
cls,
|
||||
log.info(
|
||||
f'numpy -> polars conversion took {delay} secs\n'
|
||||
f'polars df: {df}'
|
||||
)
|
||||
|
||||
color: str = 'dad_blue'
|
||||
diff: float = cls - opn
|
||||
sgn: float = copysign(1, diff)
|
||||
color: str = {
|
||||
-1: 'buy_green',
|
||||
1: 'sell_red',
|
||||
}[sgn]
|
||||
|
||||
rect_kwargs: dict[str, Any] = dict(
|
||||
fqme=fqme,
|
||||
timeframe=timeframe,
|
||||
start_pos=lc,
|
||||
end_pos=ro,
|
||||
color=color,
|
||||
yield (
|
||||
shmfile,
|
||||
shm,
|
||||
df,
|
||||
)
|
||||
|
||||
aid: int = await actl.add_rect(**rect_kwargs)
|
||||
assert aid
|
||||
aids[aid] = rect_kwargs
|
||||
|
||||
# tell chart to redraw all its
|
||||
# graphics view layers Bo
|
||||
await actl.redraw(
|
||||
fqme=fqme,
|
||||
timeframe=timeframe,
|
||||
)
|
||||
return aids
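# NOTE: standalone sketch of how each gap-rect's anchor points
# and color are computed in the loop above; the index/price
# values and the `BGM` placeholder are made-up assumptions (the
# real const ships in `piker.data._formatters`).
from math import copysign

BGM: float = 0.432  # hypothetical bar-graphics-margin value
rect_gap: float = BGM*3/8

istart, iend = 98, 100        # gap's LEFT/RIGHT bar indices
cls, opn = 99.75, 101.25      # left bar's close, right bar's open

# left/close and right/open rect anchor points
lc: tuple[float, float] = (istart - rect_gap, cls)
ro: tuple[float, float] = (iend + rect_gap + 1, opn)

# color by gap direction (sign of the close -> open move)
sgn: float = copysign(1, cls - opn)
color: str = {-1: 'buy_green', 1: 'sell_red'}[sgn]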
|
||||
|
||||
|
||||
@store.command()
|
||||
def ldshm(
|
||||
fqme: str,
|
||||
write_parquet: bool = True,
|
||||
reload_parquet_to_shm: bool = True,
|
||||
|
||||
write_parquet: bool = False,
|
||||
|
||||
) -> None:
|
||||
'''
|
||||
Linux ONLY: load any fqme file name matching shm buffer from
|
||||
/dev/shm/ into an OHLCV numpy array and polars DataFrame,
|
||||
optionally write to offline storage via `.parquet` file.
|
||||
optionally write to .parquet file.
|
||||
|
||||
'''
|
||||
async def main():
|
||||
from piker.ui._remote_ctl import (
|
||||
open_annot_ctl,
|
||||
)
|
||||
actl: AnnotCtl
|
||||
mod: ModuleType
|
||||
client: StorageClient
|
||||
async with (
|
||||
open_piker_runtime(
|
||||
'polars_boi',
|
||||
enable_modules=['piker.data._sharedmem'],
|
||||
debug_mode=True,
|
||||
),
|
||||
open_storage_client() as (
|
||||
mod,
|
||||
client,
|
||||
),
|
||||
open_annot_ctl() as actl,
|
||||
):
|
||||
shm_df: pl.DataFrame | None = None
|
||||
tf2aids: dict[float, dict] = {}
|
||||
|
||||
for (
|
||||
shmfile,
|
||||
shm,
|
||||
# parquet_path,
|
||||
shm_df,
|
||||
) in tsp.iter_dfs_from_shms(fqme):
|
||||
df: pl.DataFrame | None = None
|
||||
for shmfile, shm, src_df in iter_dfs_from_shms(fqme):
|
||||
|
||||
# compute ohlc properties for naming
|
||||
times: np.ndarray = shm.array['time']
|
||||
d1: float = float(times[-1] - times[-2])
|
||||
d2: float = float(times[-2] - times[-3])
|
||||
med: float = np.median(np.diff(times))
|
||||
if (
|
||||
d1 < 1.
|
||||
and d2 < 1.
|
||||
and med < 1.
|
||||
):
|
||||
secs: float = times[-1] - times[-2]
|
||||
if secs < 1.:
|
||||
raise ValueError(
|
||||
f'Something is wrong with time period for {shm}:\n{times}'
|
||||
)
|
||||
|
||||
period_s: float = float(max(d1, d2, med))
|
||||
from piker.data import _timeseries as tsmod
|
||||
df: pl.DataFrame = tsmod.with_dts(src_df)
|
||||
gaps: pl.DataFrame = tsmod.detect_time_gaps(df)
|
||||
|
||||
null_segs: tuple = tsp.get_null_segs(
|
||||
frame=shm.array,
|
||||
period=period_s,
|
||||
)
|
||||
|
||||
# TODO: call null-seg fixer somehow?
|
||||
if null_segs:
|
||||
await tractor.pause()
|
||||
# async with (
|
||||
# trio.open_nursery() as tn,
|
||||
# mod.open_history_client(
|
||||
# mkt,
|
||||
# ) as (get_hist, config),
|
||||
# ):
|
||||
# nulls_detected: trio.Event = await tn.start(partial(
|
||||
# tsp.maybe_fill_null_segments,
|
||||
|
||||
# shm=shm,
|
||||
# timeframe=timeframe,
|
||||
# get_hist=get_hist,
|
||||
# sampler_stream=sampler_stream,
|
||||
# mkt=mkt,
|
||||
# ))
|
||||
|
||||
# over-write back to shm?
|
||||
wdts: pl.DataFrame # with dts
|
||||
deduped: pl.DataFrame # deduplicated dts
|
||||
(
|
||||
wdts,
|
||||
deduped,
|
||||
diff,
|
||||
) = tsp.dedupe(
|
||||
shm_df,
|
||||
period=period_s,
|
||||
)
|
||||
|
||||
# detect gaps in the expected (uniform OHLC) sample period
|
||||
step_gaps: pl.DataFrame = tsp.detect_time_gaps(
|
||||
deduped,
|
||||
expect_period=period_s,
|
||||
)
|
||||
|
||||
# TODO: by default we always want to mark these up
|
||||
# with rects showing up/down gaps Bo
|
||||
venue_gaps: pl.DataFrame = tsp.detect_time_gaps(
|
||||
deduped,
|
||||
expect_period=period_s,
|
||||
|
||||
# TODO: actually pull the exact duration
|
||||
# expected for each venue operational period?
|
||||
gap_dt_unit='days',
|
||||
gap_thresh=1,
|
||||
)
|
||||
|
||||
# TODO: find the disjoint set of step gaps from
|
||||
# venue (closure) set!
|
||||
# -[ ] do a set diff by checking for the unique
|
||||
# gap set only in the step_gaps?
|
||||
# TODO: maybe only optionally enter this depending
|
||||
# on some CLI flags and/or gap detection?
|
||||
if (
|
||||
not venue_gaps.is_empty()
|
||||
or (
|
||||
period_s < 60
|
||||
and not step_gaps.is_empty()
|
||||
)
|
||||
not gaps.is_empty()
|
||||
or secs > 2
|
||||
):
|
||||
# write repaired ts to parquet-file?
|
||||
if write_parquet:
|
||||
start: float = time.time()
|
||||
path: Path = await client.write_ohlcv(
|
||||
fqme,
|
||||
ohlcv=deduped,
|
||||
timeframe=period_s,
|
||||
)
|
||||
write_delay: float = round(
|
||||
time.time() - start,
|
||||
ndigits=6,
|
||||
)
|
||||
await tractor.pause()
|
||||
|
||||
# read back from fs
|
||||
start: float = time.time()
|
||||
read_df: pl.DataFrame = pl.read_parquet(path)
|
||||
read_delay: float = round(
|
||||
# write to parquet file?
|
||||
if write_parquet:
|
||||
timeframe: str = f'{secs}s'
|
||||
|
||||
datadir: Path = get_conf_dir() / 'nativedb'
|
||||
if not datadir.is_dir():
|
||||
datadir.mkdir()
|
||||
|
||||
path: Path = datadir / f'{fqme}.{timeframe}.parquet'
|
||||
|
||||
# write to fs
|
||||
start = time.time()
|
||||
df.write_parquet(path)
|
||||
delay: float = round(
|
||||
time.time() - start,
|
||||
ndigits=6,
|
||||
)
|
||||
log.info(
|
||||
f'parquet write took {write_delay} secs\n'
|
||||
f'parquet write took {delay} secs\n'
|
||||
f'file path: {path}'
|
||||
f'parquet read took {read_delay} secs\n'
|
||||
)
|
||||
|
||||
# read back from fs
|
||||
start = time.time()
|
||||
read_df: pl.DataFrame = pl.read_parquet(path)
|
||||
delay: float = round(
|
||||
time.time() - start,
|
||||
ndigits=6,
|
||||
)
|
||||
print(
|
||||
f'parquet read took {delay} secs\n'
|
||||
f'polars df: {read_df}'
|
||||
)
|
||||
|
||||
if reload_parquet_to_shm:
|
||||
new = tsp.pl2np(
|
||||
deduped,
|
||||
dtype=shm.array.dtype,
|
||||
)
|
||||
# since normally readonly
|
||||
shm._array.setflags(
|
||||
write=int(1),
|
||||
)
|
||||
shm.push(
|
||||
new,
|
||||
prepend=True,
|
||||
start=new['index'][-1],
|
||||
update_first=False, # don't update ._first
|
||||
)
|
||||
|
||||
do_markup_gaps: bool = True
|
||||
if do_markup_gaps:
|
||||
new_df: pl.DataFrame = tsp.np2pl(new)
|
||||
aids: dict = await markup_gaps(
|
||||
fqme,
|
||||
period_s,
|
||||
actl,
|
||||
new_df,
|
||||
step_gaps,
|
||||
)
|
||||
# last chance manual overwrites in REPL
|
||||
# await tractor.pause()
|
||||
assert aids
|
||||
tf2aids[period_s] = aids
|
||||
|
||||
else:
|
||||
# allow interaction even when no ts problems.
|
||||
assert not diff
|
||||
|
||||
await tractor.pause()
|
||||
log.info('Exiting TSP shm anal-izer!')
|
||||
|
||||
if shm_df is None:
|
||||
log.error(
|
||||
f'No matching shm buffers for {fqme} ?'
|
||||
|
||||
)
|
||||
if df is None:
|
||||
log.error(f'No matching shm buffers for {fqme} ?')
|
||||
|
||||
trio.run(main)
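# NOTE: standalone sketch of the sample-period heuristic used in
# `ldshm()` above: compare the last two epoch steps and the
# median step, bail if they're all sub-second, then take the max
# as the period. The toy `times` array is illustrative only.
import numpy as np

times = np.array([0., 60., 120., 180., 240.])
d1: float = float(times[-1] - times[-2])
d2: float = float(times[-2] - times[-3])
med: float = float(np.median(np.diff(times)))

if (
    d1 < 1.
    and d2 < 1.
    and med < 1.
):
    raise ValueError(f'Something is wrong with time period:\n{times}')

period_s: float = float(max(d1, d2, med))  # -> 60.0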
|
||||
|
||||
|
|
|
@ -19,8 +19,7 @@
|
|||
call a poor man's tsdb).
|
||||
|
||||
AKA a `piker`-native file-system native "time series database"
|
||||
without needing an extra process and no standard TSDB features,
|
||||
YET!
|
||||
without needing an extra process and no standard TSDB features, YET!
|
||||
|
||||
'''
|
||||
# TODO: like there's soo much..
|
||||
|
@ -56,6 +55,8 @@ from datetime import datetime
|
|||
from pathlib import Path
|
||||
import time
|
||||
|
||||
# from bidict import bidict
|
||||
# import tractor
|
||||
import numpy as np
|
||||
import polars as pl
|
||||
from pendulum import (
|
||||
|
@ -63,18 +64,45 @@ from pendulum import (
|
|||
)
|
||||
|
||||
from piker import config
|
||||
from piker import tsp
|
||||
from piker.data import (
|
||||
def_iohlcv_fields,
|
||||
ShmArray,
|
||||
)
|
||||
from piker.data import def_iohlcv_fields
|
||||
from piker.data import ShmArray
|
||||
from piker.log import get_logger
|
||||
from . import TimeseriesNotFound
|
||||
|
||||
|
||||
log = get_logger('storage.nativedb')
|
||||
|
||||
|
||||
# NOTE: thanks to this SO answer for the below conversion routines
|
||||
# to go from numpy struct-arrays to polars dataframes and back:
|
||||
# https://stackoverflow.com/a/72054819
|
||||
def np2pl(array: np.ndarray) -> pl.DataFrame:
|
||||
return pl.DataFrame({
|
||||
field_name: array[field_name]
|
||||
for field_name in array.dtype.fields
|
||||
})
|
||||
|
||||
|
||||
def pl2np(
|
||||
df: pl.DataFrame,
|
||||
dtype: np.dtype,
|
||||
|
||||
) -> np.ndarray:
|
||||
|
||||
# Create numpy struct array of the correct size and dtype
|
||||
# and loop through df columns to fill in array fields.
|
||||
array = np.empty(
|
||||
df.height,
|
||||
dtype,
|
||||
)
|
||||
for field, col in zip(
|
||||
dtype.fields,
|
||||
df.columns,
|
||||
):
|
||||
array[field] = df.get_column(col).to_numpy()
|
||||
|
||||
return array
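# NOTE: a quick round-trip sketch for the two converters above;
# the 2-row struct array and its minimal dtype are illustrative
# assumptions (the real layout comes from `def_iohlcv_fields`).
import numpy as np
import polars as pl

dtype = np.dtype([('time', 'f8'), ('open', 'f8'), ('close', 'f8')])
ohlcv = np.array(
    [(0., 1., 1.5), (60., 1.5, 2.)],
    dtype=dtype,
)

df: pl.DataFrame = np2pl(ohlcv)      # struct-array -> DataFrame
back: np.ndarray = pl2np(df, dtype)  # DataFrame -> struct-array
assert (back == ohlcv).all()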
|
||||
|
||||
|
||||
def detect_period(shm: ShmArray) -> float:
|
||||
'''
|
||||
Attempt to detect the series time step sampling period
|
||||
|
@ -95,19 +123,16 @@ def detect_period(shm: ShmArray) -> float:
|
|||
|
||||
def mk_ohlcv_shm_keyed_filepath(
|
||||
fqme: str,
|
||||
period: float | int, # ow known as the "timeframe"
|
||||
period: float, # ow known as the "timeframe"
|
||||
datadir: Path,
|
||||
|
||||
) -> Path:
|
||||
) -> str:
|
||||
|
||||
if period < 1.:
|
||||
raise ValueError('Sample period should be >= 1.!?')
|
||||
|
||||
path: Path = (
|
||||
datadir
|
||||
/
|
||||
f'{fqme}.ohlcv{int(period)}s.parquet'
|
||||
)
|
||||
period_s: str = f'{period}s'
|
||||
path: Path = datadir / f'{fqme}.ohlcv{period_s}.parquet'
|
||||
return path
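# NOTE: quick usage sketch for the keyed-filepath helper above;
# the fqme and conf-dir path are made-up examples.
from pathlib import Path

path: Path = mk_ohlcv_shm_keyed_filepath(
    fqme='btcusdt.spot.binance',
    period=60,
    datadir=Path('~/.config/piker/nativedb').expanduser(),
)
# -> ~/.config/piker/nativedb/btcusdt.spot.binance.ohlcv60s.parquet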
|
||||
|
||||
|
||||
|
@ -161,13 +186,7 @@ class NativeStorageClient:
|
|||
|
||||
def index_files(self):
|
||||
for path in self._datadir.iterdir():
|
||||
if (
|
||||
path.is_dir()
|
||||
or
|
||||
'.parquet' not in str(path)
|
||||
# or
|
||||
# path.name in {'borked', 'expired',}
|
||||
):
|
||||
if path.name in {'borked', 'expired',}:
|
||||
continue
|
||||
|
||||
key: str = path.name.rstrip('.parquet')
|
||||
|
@ -209,21 +228,8 @@ class NativeStorageClient:
|
|||
fqme,
|
||||
timeframe,
|
||||
)
|
||||
except FileNotFoundError as fnfe:
|
||||
|
||||
bs_fqme, _, *_ = fqme.rpartition('.')
|
||||
|
||||
possible_matches: list[str] = []
|
||||
for tskey in self._index:
|
||||
if bs_fqme in tskey:
|
||||
possible_matches.append(tskey)
|
||||
|
||||
match_str: str = '\n'.join(sorted(possible_matches))
|
||||
raise TimeseriesNotFound(
|
||||
f'No entry for `{fqme}`?\n'
|
||||
f'Maybe you need a more specific fqme-key like:\n\n'
|
||||
f'{match_str}'
|
||||
) from fnfe
|
||||
except FileNotFoundError:
|
||||
return None
|
||||
|
||||
times = array['time']
|
||||
return (
|
||||
|
@ -236,7 +242,6 @@ class NativeStorageClient:
|
|||
self,
|
||||
fqme: str,
|
||||
period: float,
|
||||
|
||||
) -> Path:
|
||||
return mk_ohlcv_shm_keyed_filepath(
|
||||
fqme=fqme,
|
||||
|
@ -244,23 +249,6 @@ class NativeStorageClient:
|
|||
datadir=self._datadir,
|
||||
)
|
||||
|
||||
def _cache_df(
|
||||
self,
|
||||
fqme: str,
|
||||
df: pl.DataFrame,
|
||||
timeframe: float,
|
||||
|
||||
) -> None:
|
||||
# cache df for later usage since we (currently) need to
|
||||
# convert to np.ndarrays to push to our `ShmArray` rt
|
||||
# buffers subsys but later we may operate entirely on
|
||||
# pyarrow arrays/buffers so keeping the dfs around for
|
||||
# a variety of purposes is handy.
|
||||
self._dfs.setdefault(
|
||||
timeframe,
|
||||
{},
|
||||
)[fqme] = df
|
||||
|
||||
async def read_ohlcv(
|
||||
self,
|
||||
fqme: str,
|
||||
|
@ -269,20 +257,13 @@ class NativeStorageClient:
|
|||
# limit: int = int(200e3),
|
||||
|
||||
) -> np.ndarray:
|
||||
path: Path = self.mk_path(
|
||||
fqme,
|
||||
period=int(timeframe),
|
||||
)
|
||||
path: Path = self.mk_path(fqme, period=int(timeframe))
|
||||
df: pl.DataFrame = pl.read_parquet(path)
|
||||
self._dfs.setdefault(timeframe, {})[fqme] = df
|
||||
|
||||
self._cache_df(
|
||||
fqme=fqme,
|
||||
df=df,
|
||||
timeframe=timeframe,
|
||||
)
|
||||
# TODO: filter by end and limit inputs
|
||||
# times: pl.Series = df['time']
|
||||
array: np.ndarray = tsp.pl2np(
|
||||
array: np.ndarray = pl2np(
|
||||
df,
|
||||
dtype=np.dtype(def_iohlcv_fields),
|
||||
)
|
||||
|
@ -292,15 +273,11 @@ class NativeStorageClient:
|
|||
self,
|
||||
fqme: str,
|
||||
period: int = 60,
|
||||
load_from_offline: bool = True,
|
||||
|
||||
) -> pl.DataFrame:
|
||||
try:
|
||||
return self._dfs[period][fqme]
|
||||
except KeyError:
|
||||
if not load_from_offline:
|
||||
raise
|
||||
|
||||
await self.read_ohlcv(fqme, period)
|
||||
return self._dfs[period][fqme]
|
||||
|
||||
|
@ -322,22 +299,14 @@ class NativeStorageClient:
|
|||
datadir=self._datadir,
|
||||
)
|
||||
if isinstance(ohlcv, np.ndarray):
|
||||
df: pl.DataFrame = tsp.np2pl(ohlcv)
|
||||
df: pl.DataFrame = np2pl(ohlcv)
|
||||
else:
|
||||
df = ohlcv
|
||||
|
||||
self._cache_df(
|
||||
fqme=fqme,
|
||||
df=df,
|
||||
timeframe=timeframe,
|
||||
)
|
||||
|
||||
# TODO: in terms of managing the ultra long term data
|
||||
# -[ ] use a proper profiler to measure all this IO and
|
||||
# - use a proper profiler to measure all this IO and
|
||||
# roundtripping!
|
||||
# -[ ] implement parquet append!? see issue:
|
||||
# https://github.com/pikers/piker/issues/536
|
||||
# -[ ] try out ``fastparquet``'s append writing:
|
||||
# - try out ``fastparquet``'s append writing:
|
||||
# https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
|
||||
start = time.time()
|
||||
df.write_parquet(path)
|
||||
|
@ -345,16 +314,17 @@ class NativeStorageClient:
|
|||
time.time() - start,
|
||||
ndigits=6,
|
||||
)
|
||||
log.info(
|
||||
print(
|
||||
f'parquet write took {delay} secs\n'
|
||||
f'file path: {path}'
|
||||
)
|
||||
return path
|
||||
|
||||
|
||||
async def write_ohlcv(
|
||||
self,
|
||||
fqme: str,
|
||||
ohlcv: np.ndarray | pl.DataFrame,
|
||||
ohlcv: np.ndarray,
|
||||
timeframe: int,
|
||||
|
||||
) -> Path:
|
||||
|
@ -406,8 +376,6 @@ class NativeStorageClient:
|
|||
# ...
|
||||
|
||||
|
||||
# TODO: does this need to be async on average?
|
||||
# I guess for any IPC connected backend yes?
|
||||
@acm
|
||||
async def get_client(
|
||||
|
||||
|
@ -425,7 +393,7 @@ async def get_client(
|
|||
'''
|
||||
datadir: Path = config.get_conf_dir() / 'nativedb'
|
||||
if not datadir.is_dir():
|
||||
log.info(f'Creating `nativedb` dir: {datadir}')
|
||||
log.info(f'Creating `nativedb` directory: {datadir}')
|
||||
datadir.mkdir()
|
||||
|
||||
client = NativeStorageClient(datadir)
|
||||
|
|
|
@ -18,12 +18,24 @@
|
|||
Toolz for debug, profile and trace of the distributed runtime :surfer:
|
||||
|
||||
'''
|
||||
from tractor.devx import (
|
||||
open_crash_handler as open_crash_handler,
|
||||
from .debug import (
|
||||
open_crash_handler,
|
||||
)
|
||||
from .profile import (
|
||||
Profiler as Profiler,
|
||||
pg_profile_enabled as pg_profile_enabled,
|
||||
ms_slower_then as ms_slower_then,
|
||||
timeit as timeit,
|
||||
Profiler,
|
||||
pg_profile_enabled,
|
||||
ms_slower_then,
|
||||
timeit,
|
||||
)
|
||||
|
||||
# TODO: other mods to include?
|
||||
# - DROP .trionics, already moved into tractor
|
||||
# - move in `piker.calc`
|
||||
|
||||
__all__: list[str] = [
|
||||
'open_crash_handler',
|
||||
'pg_profile_enabled',
|
||||
'ms_slower_then',
|
||||
'Profiler',
|
||||
'timeit',
|
||||
]
|
||||
|
|
|
@ -0,0 +1,40 @@
|
|||
# piker: trading gear for hackers
|
||||
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
|
||||
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
Debugger wrappers for `pdbp` as used by `tractor`.
|
||||
|
||||
'''
|
||||
from contextlib import contextmanager as cm
|
||||
|
||||
import pdbp
|
||||
|
||||
|
||||
# TODO: better naming and what additionals?
|
||||
# - optional runtime plugging?
|
||||
# - detection for sync vs. async code?
|
||||
# - specialized REPL entry when in distributed mode?
|
||||
@cm
|
||||
def open_crash_handler():
|
||||
'''
|
||||
Super basic crash handler using `pdbp` debugger.
|
||||
|
||||
'''
|
||||
try:
|
||||
yield
|
||||
except BaseException:
|
||||
pdbp.xpm()
|
||||
raise
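# NOTE: minimal usage sketch; any uncaught error inside the
# block drops into a `pdbp` post-mortem REPL before the
# exception is re-raised.
if __name__ == '__main__':
    with open_crash_handler():
        1 / 0  # boom -> post-mortem debugger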
|
File diff suppressed because it is too large
|
@ -1,746 +0,0 @@
|
|||
# piker: trading gear for hackers
|
||||
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
|
||||
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
Financial time series processing utilities usually
|
||||
pertaining to OHLCV style sampled data.
|
||||
|
||||
Routines are generally implemented in either ``numpy`` or
|
||||
``polars`` B)
|
||||
|
||||
'''
|
||||
from __future__ import annotations
|
||||
from functools import partial
|
||||
from math import (
|
||||
ceil,
|
||||
floor,
|
||||
)
|
||||
import time
|
||||
from typing import (
|
||||
Literal,
|
||||
# AsyncGenerator,
|
||||
Generator,
|
||||
)
|
||||
|
||||
import numpy as np
|
||||
import polars as pl
|
||||
from pendulum import (
|
||||
DateTime,
|
||||
from_timestamp,
|
||||
)
|
||||
|
||||
from ..toolz.profile import (
|
||||
Profiler,
|
||||
pg_profile_enabled,
|
||||
ms_slower_then,
|
||||
)
|
||||
from ..log import (
|
||||
get_logger,
|
||||
get_console_log,
|
||||
)
|
||||
# for "time series processing"
|
||||
subsys: str = 'piker.tsp'
|
||||
|
||||
log = get_logger(subsys)
|
||||
get_console_log = partial(
|
||||
get_console_log,
|
||||
name=subsys,
|
||||
)
|
||||
|
||||
# NOTE: union type-defs to handle generic `numpy` and `polars` types
|
||||
# side-by-side Bo
|
||||
# |_ TODO: schema spec typing?
|
||||
# -[ ] nptyping!
|
||||
# -[ ] wtv we can with polars?
|
||||
Frame = pl.DataFrame | np.ndarray
|
||||
Seq = pl.Series | np.ndarray
|
||||
|
||||
|
||||
def slice_from_time(
|
||||
arr: np.ndarray,
|
||||
start_t: float,
|
||||
stop_t: float,
|
||||
step: float, # sampler period step-diff
|
||||
|
||||
) -> slice:
|
||||
'''
|
||||
Calculate array indices mapped from a time range and return them in
|
||||
a slice.
|
||||
|
||||
Given an input array with an epoch `'time'` series entry, calculate
|
||||
the indices which span the time range and return in a slice. Presume
|
||||
each `'time'` step increment is uniform and when the time stamp
|
||||
series contains gaps (the uniform presumption is untrue) use
|
||||
``np.searchsorted()`` binary search to look up the appropriate
|
||||
index.
|
||||
|
||||
'''
|
||||
profiler = Profiler(
|
||||
msg='slice_from_time()',
|
||||
disabled=not pg_profile_enabled(),
|
||||
ms_threshold=ms_slower_then,
|
||||
)
|
||||
|
||||
times = arr['time']
|
||||
t_first = floor(times[0])
|
||||
t_last = ceil(times[-1])
|
||||
|
||||
# the greatest index we can return which slices to the
|
||||
# end of the input array.
|
||||
read_i_max = arr.shape[0]
|
||||
|
||||
# compute (presumed) uniform-time-step index offsets
|
||||
i_start_t = floor(start_t)
|
||||
read_i_start = floor(((i_start_t - t_first) // step)) - 1
|
||||
|
||||
i_stop_t = ceil(stop_t)
|
||||
|
||||
# XXX: edge case -> always set stop index to last in array whenever
|
||||
# the input stop time is detected to be greater then the equiv time
|
||||
# stamp at that last entry.
|
||||
if i_stop_t >= t_last:
|
||||
read_i_stop = read_i_max
|
||||
else:
|
||||
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
|
||||
|
||||
# always clip outputs to array support
|
||||
# for read start:
|
||||
# - never allow a start < the 0 index
|
||||
# - never allow an end index > the read array len
|
||||
read_i_start = min(
|
||||
max(0, read_i_start),
|
||||
read_i_max - 1,
|
||||
)
|
||||
read_i_stop = max(
|
||||
0,
|
||||
min(read_i_stop, read_i_max),
|
||||
)
|
||||
|
||||
# check for larger-then-latest calculated index for given start
|
||||
# time, in which case we do a binary search for the correct index.
|
||||
# NOTE: this is usually the result of a time series with time gaps
|
||||
# where it is expected that each index step maps to a uniform step
|
||||
# in the time stamp series.
|
||||
t_iv_start = times[read_i_start]
|
||||
if (
|
||||
t_iv_start > i_start_t
|
||||
):
|
||||
# do a binary search for the best index mapping to ``start_t``
|
||||
# given we measured an overshoot using the uniform-time-step
|
||||
# calculation from above.
|
||||
|
||||
# TODO: once we start caching these per source-array,
|
||||
# we can just overwrite ``read_i_start`` directly.
|
||||
new_read_i_start = np.searchsorted(
|
||||
times,
|
||||
i_start_t,
|
||||
side='left',
|
||||
)
|
||||
|
||||
# TODO: minimize binary search work as much as possible:
|
||||
# - cache these remap values which compensate for gaps in the
|
||||
# uniform time step basis where we calc a later start
|
||||
# index for the given input ``start_t``.
|
||||
# - can we shorten the input search sequence by heuristic?
|
||||
# up_to_arith_start = index[:read_i_start]
|
||||
|
||||
if (
|
||||
new_read_i_start <= read_i_start
|
||||
):
|
||||
# t_diff = t_iv_start - start_t
|
||||
# print(
|
||||
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
|
||||
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
|
||||
# f'diff: {t_diff}\n'
|
||||
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
|
||||
# )
|
||||
read_i_start = new_read_i_start
|
||||
|
||||
t_iv_stop = times[read_i_stop - 1]
|
||||
if (
|
||||
t_iv_stop > i_stop_t
|
||||
):
|
||||
# t_diff = stop_t - t_iv_stop
|
||||
# print(
|
||||
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
|
||||
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
|
||||
# f'diff: {t_diff}\n'
|
||||
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
|
||||
# )
|
||||
new_read_i_stop = np.searchsorted(
|
||||
times[read_i_start:],
|
||||
# times,
|
||||
i_stop_t,
|
||||
side='right',
|
||||
)
|
||||
|
||||
if (
|
||||
new_read_i_stop <= read_i_stop
|
||||
):
|
||||
read_i_stop = read_i_start + new_read_i_stop + 1
|
||||
|
||||
# sanity checks for range size
|
||||
# samples = (i_stop_t - i_start_t) // step
|
||||
# index_diff = read_i_stop - read_i_start + 1
|
||||
# if index_diff > (samples + 3):
|
||||
# breakpoint()
|
||||
|
||||
# read-relative indexes: gives a slice where `shm.array[read_slc]`
|
||||
# will be the data spanning the input time range `start_t` ->
|
||||
# `stop_t`
|
||||
read_slc = slice(
|
||||
int(read_i_start),
|
||||
int(read_i_stop),
|
||||
)
|
||||
|
||||
profiler(
|
||||
'slicing complete'
|
||||
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
|
||||
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
|
||||
)
|
||||
|
||||
# NOTE: if caller needs absolute buffer indices they can
|
||||
# slice the buffer abs index like so:
|
||||
# index = arr['index']
|
||||
# abs_indx = index[read_slc]
|
||||
# abs_slc = slice(
|
||||
# int(abs_indx[0]),
|
||||
# int(abs_indx[-1]),
|
||||
# )
|
||||
|
||||
return read_slc
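# NOTE: a standalone usage sketch for `slice_from_time()`; the
# tiny 1s-sampled array with a hole between t=3 and t=9 is an
# illustrative assumption.
import numpy as np

arr = np.array(
    [(i, t) for i, t in enumerate([0., 1., 2., 3., 9., 10., 11.])],
    dtype=np.dtype([('index', 'i8'), ('time', 'f8')]),
)
read_slc: slice = slice_from_time(
    arr,
    start_t=9.,
    stop_t=11.,
    step=1.,
)
# the searchsorted fallback corrects the uniform-step estimate
# (which overshoots thanks to the hole) back to the right rows.
assert (arr[read_slc]['time'] >= 9.).all()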
|
||||
|
||||
|
||||
def get_null_segs(
|
||||
frame: Frame,
|
||||
period: float, # sampling step in seconds
|
||||
imargin: int = 1,
|
||||
col: str = 'time',
|
||||
|
||||
) -> tuple[
|
||||
# Seq, # TODO: can we make it an array-type instead?
|
||||
list[
|
||||
list[int, int],
|
||||
],
|
||||
Seq,
|
||||
Frame
|
||||
] | None:
|
||||
'''
|
||||
Detect if there are any zero(-epoch stamped) valued
|
||||
rows in for the provided `col: str` column; by default
|
||||
presume the 'time' field/column.
|
||||
|
||||
Filter to all such zero (time) segments and return
|
||||
the corresponding frame zeroed segment's,
|
||||
|
||||
- gap absolute (in buffer terms) indices-endpoints as
|
||||
`absi_zsegs`
|
||||
- abs indices of all rows with zeroed `col` values as `absi_zeros`
|
||||
- the corresponding frame's row-entries (view) which are
|
||||
zeroed for the `col` as `zero_t`
|
||||
|
||||
'''
|
||||
times: Seq = frame['time']
|
||||
zero_pred: Seq = (times == 0)
|
||||
|
||||
if isinstance(frame, np.ndarray):
|
||||
tis_zeros: int = zero_pred.any()
|
||||
else:
|
||||
tis_zeros: int = zero_pred.any()
|
||||
|
||||
if not tis_zeros:
|
||||
return None
|
||||
|
||||
# TODO: use ndarray for this?!
|
||||
absi_zsegs: list[list[int, int]] = []
|
||||
|
||||
if isinstance(frame, np.ndarray):
|
||||
# view of ONLY the zero segments as one continuous chunk
|
||||
zero_t: np.ndarray = frame[zero_pred]
|
||||
# abs indices of said zeroed rows
|
||||
absi_zeros = zero_t['index']
|
||||
# diff of abs index steps between each zeroed row
|
||||
absi_zdiff: np.ndarray = np.diff(absi_zeros)
|
||||
|
||||
# scan for all frame-indices where the
|
||||
# zeroed-row-abs-index-step-diff is greater then the
|
||||
# expected increment of 1.
|
||||
# data 1st zero seg data zeros
|
||||
# ---- ------------ ---- ----- ------ ----
|
||||
# ||||..000000000000..||||..00000..||||||..0000
|
||||
# ---- ------------ ---- ----- ------ ----
|
||||
# ^zero_t[0] ^zero_t[-1]
|
||||
# ^fi_zgaps[0] ^fi_zgaps[1]
|
||||
# ^absi_zsegs[0][0] ^---^ => absi_zsegs[1]: tuple
|
||||
# absi_zsegs[0][1]^
|
||||
#
|
||||
# NOTE: the first entry in `fi_zgaps` is where
|
||||
# the first (absolute) index step diff is > 1.
|
||||
# and it is a frame-relative index into `zero_t`.
|
||||
fi_zgaps = np.argwhere(
|
||||
absi_zdiff > 1
|
||||
# NOTE: +1 here is ensure we index to the "start" of each
|
||||
# segment (if we didn't the below loop needs to be
|
||||
# re-written to expect `fi_end_rows`!
|
||||
) + 1
|
||||
# the rows from the contiguous zeroed segments which have
|
||||
# abs-index steps >1 compared to the previous zero row
|
||||
# (indicating an end of zeroed segment).
|
||||
fi_zseg_start_rows = zero_t[fi_zgaps]
|
||||
|
||||
# TODO: equiv for pl.DataFrame case!
|
||||
else:
|
||||
izeros: pl.Series = zero_pred.arg_true()
|
||||
zero_t: pl.DataFrame = frame[izeros]
|
||||
|
||||
absi_zeros = zero_t['index']
|
||||
absi_zdiff: pl.Series = absi_zeros.diff()
|
||||
fi_zgaps = (absi_zdiff > 1).arg_true()
|
||||
|
||||
# XXX: our goal (in this func) is to select out slice index
|
||||
# pairs (zseg0_start, zseg_end) in abs index units for each
|
||||
# null-segment portion detected throughout entire input frame.
|
||||
|
||||
# only up to one null-segment in entire frame?
|
||||
num_gaps: int = fi_zgaps.size + 1
|
||||
if num_gaps < 1:
|
||||
if absi_zeros.size > 1:
|
||||
absi_zsegs = [[
|
||||
# TODO: maybe mk these max()/min() limits func
|
||||
# consts instead of called more then once?
|
||||
max(
|
||||
absi_zeros[0] - 1,
|
||||
0,
|
||||
),
|
||||
# NOTE: need the + 1 to guarantee we index "up to"
|
||||
# the next non-null row-datum.
|
||||
min(
|
||||
absi_zeros[-1] + 1,
|
||||
frame['index'][-1],
|
||||
),
|
||||
]]
|
||||
else:
|
||||
# XXX EDGE CASE: only one null-datum found so
|
||||
# mark the start abs index as None to trigger
|
||||
# a full frame-len query to the respective backend?
|
||||
absi_zsegs = [[
|
||||
# see `get_hist()` in backend, should ALWAYS be
|
||||
# able to handle a `start_dt=None`!
|
||||
# None,
|
||||
None,
|
||||
absi_zeros[0] + 1,
|
||||
]]
|
||||
|
||||
# XXX NOTE XXX: if >= 2 zeroed segments are found, there should
|
||||
# ALWAYS be more then one zero-segment-abs-index-step-diff row
|
||||
# in `absi_zdiff`, so loop through all such
|
||||
# abs-index-step-diffs >1 (i.e. the entries of `absi_zdiff`)
|
||||
# and add them as the "end index" entries for each segment.
|
||||
# Then, iff NOT iterating the first such segment end, look back
|
||||
# for the prior segment's zero-segment start index by relative
|
||||
# indexing the `zero_t` frame by -1 and grabbing the abs index
|
||||
# of what should be the prior zero-segment abs start index.
|
||||
else:
|
||||
# NOTE: since `absi_zdiff` will never have a row
|
||||
# corresponding to the first zero-segment's row, we add it
|
||||
# manually here.
|
||||
absi_zsegs.append([
|
||||
max(
|
||||
absi_zeros[0] - 1,
|
||||
0,
|
||||
),
|
||||
None,
|
||||
])
|
||||
|
||||
# TODO: can we do it with vec ops?
|
||||
for i, (
|
||||
fi, # frame index of zero-seg start
|
||||
zseg_start_row, # full row for ^
|
||||
) in enumerate(zip(
|
||||
fi_zgaps,
|
||||
fi_zseg_start_rows,
|
||||
)):
|
||||
assert (zseg_start_row == zero_t[fi]).all()
|
||||
iabs: int = zseg_start_row['index'][0]
|
||||
absi_zsegs.append([
|
||||
iabs - 1,
|
||||
None, # backfilled on next iter
|
||||
])
|
||||
|
||||
# final iter case, backfill FINAL end iabs!
|
||||
if (i + 1) == fi_zgaps.size:
|
||||
absi_zsegs[-1][1] = absi_zeros[-1] + 1
|
||||
|
||||
# NOTE: only after the first segment (due to `.diff()`
|
||||
# usage above) can we do a lookback to the prior
|
||||
# segment's end row and determine it's abs index to
|
||||
# retroactively insert to the prior
|
||||
# `absi_zsegs[i-1][1]` entry Bo
|
||||
last_end: int = absi_zsegs[i][1]
|
||||
if last_end is None:
|
||||
prev_zseg_row = zero_t[fi - 1]
|
||||
absi_post_zseg = prev_zseg_row['index'][0] + 1
|
||||
# XXX: MUST BACKFILL previous end iabs!
|
||||
absi_zsegs[i][1] = absi_post_zseg
|
||||
|
||||
else:
|
||||
if 0 < num_gaps < 2:
|
||||
absi_zsegs[-1][1] = min(
|
||||
absi_zeros[-1] + 1,
|
||||
frame['index'][-1],
|
||||
)
|
||||
|
||||
iabs_first: int = frame['index'][0]
|
||||
for start, end in absi_zsegs:
|
||||
|
||||
ts_start: float = times[start - iabs_first]
|
||||
ts_end: float = times[end - iabs_first]
|
||||
if (
|
||||
(ts_start == 0 and not start == 0)
|
||||
or
|
||||
ts_end == 0
|
||||
):
|
||||
import pdbp
|
||||
pdbp.set_trace()
|
||||
|
||||
assert end
|
||||
assert start < end
|
||||
|
||||
log.warning(
|
||||
f'Frame has {len(absi_zsegs)} NULL GAPS!?\n'
|
||||
f'period: {period}\n'
|
||||
f'total null samples: {len(zero_t)}\n'
|
||||
)
|
||||
|
||||
return (
|
||||
absi_zsegs, # [start, end] abs slice indices of seg
|
||||
absi_zeros, # all abs indices within all null-segs
|
||||
zero_t, # sliced-view of all null-segment rows-datums
|
||||
)
|
||||
|
||||
|
||||
def iter_null_segs(
|
||||
timeframe: float,
|
||||
frame: Frame | None = None,
|
||||
null_segs: tuple | None = None,
|
||||
|
||||
) -> Generator[
|
||||
tuple[
|
||||
int, int,
|
||||
int, int,
|
||||
float, float,
|
||||
float, float,
|
||||
|
||||
# Seq, # TODO: can we make it an array-type instead?
|
||||
# list[
|
||||
# list[int, int],
|
||||
# ],
|
||||
# Seq,
|
||||
# Frame
|
||||
],
|
||||
None,
|
||||
]:
|
||||
if not (
|
||||
null_segs := get_null_segs(
|
||||
frame,
|
||||
period=timeframe,
|
||||
)
|
||||
):
|
||||
return
|
||||
|
||||
absi_pairs_zsegs: list[list[float, float]]
|
||||
izeros: Seq
|
||||
zero_t: Frame
|
||||
(
|
||||
absi_pairs_zsegs,
|
||||
izeros,
|
||||
zero_t,
|
||||
) = null_segs
|
||||
|
||||
absi_first: int = frame[0]['index']
|
||||
for (
|
||||
absi_start,
|
||||
absi_end,
|
||||
) in absi_pairs_zsegs:
|
||||
|
||||
fi_end: int = absi_end - absi_first
|
||||
end_row: Seq = frame[fi_end]
|
||||
end_t: float = end_row['time']
|
||||
end_dt: DateTime = from_timestamp(end_t)
|
||||
|
||||
fi_start = None
|
||||
start_row = None
|
||||
start_t = None
|
||||
start_dt = None
|
||||
if (
|
||||
absi_start is not None
|
||||
and start_t != 0
|
||||
):
|
||||
fi_start: int = absi_start - absi_first
|
||||
start_row: Seq = frame[fi_start]
|
||||
start_t: float = start_row['time']
|
||||
start_dt: DateTime = from_timestamp(start_t)
|
||||
|
||||
if absi_start < 0:
|
||||
import pdbp
|
||||
pdbp.set_trace()
|
||||
|
||||
yield (
|
||||
absi_start, absi_end, # abs indices
|
||||
fi_start, fi_end, # relative "frame" indices
|
||||
start_t, end_t,
|
||||
start_dt, end_dt,
|
||||
)
|
||||
|
||||
|
||||
def with_dts(
|
||||
df: pl.DataFrame,
|
||||
time_col: str = 'time',
|
||||
|
||||
) -> pl.DataFrame:
|
||||
'''
|
||||
Insert datetime (casted) columns to a (presumably) OHLC sampled
|
||||
time series with an epoch-time column keyed by `time_col: str`.
|
||||
|
||||
'''
|
||||
return df.with_columns([
|
||||
pl.col(time_col).shift(1).suffix('_prev'),
|
||||
pl.col(time_col).diff().alias('s_diff'),
|
||||
pl.from_epoch(pl.col(time_col)).alias('dt'),
|
||||
]).with_columns([
|
||||
pl.from_epoch(
|
||||
column=pl.col(f'{time_col}_prev'),
|
||||
).alias('dt_prev'),
|
||||
pl.col('dt').diff().alias('dt_diff'),
|
||||
])
|
||||
|
||||
|
||||
t_unit: Literal = Literal[
|
||||
'days',
|
||||
'hours',
|
||||
'minutes',
|
||||
'seconds',
|
||||
'milliseconds',
|
||||
'microseconds',
|
||||
'nanoseconds',
|
||||
]
|
||||
|
||||
|
||||
def detect_time_gaps(
|
||||
w_dts: pl.DataFrame,
|
||||
|
||||
time_col: str = 'time',
|
||||
# epoch sampling step diff
|
||||
expect_period: float = 60,
|
||||
|
||||
# NOTE: legacy stock mkts have venue operating hours
|
||||
# and thus gaps normally no more then 1-2 days at
|
||||
# a time.
|
||||
gap_thresh: float = 1.,
|
||||
|
||||
# TODO: allow passing in a frame of operating hours?
|
||||
# -[ ] durations/ranges for faster legit gap checks?
|
||||
# XXX -> must be valid ``polars.Expr.dt.<name>``
|
||||
# like 'days' which a sane default for venue closures
|
||||
# though will detect weekend gaps which are normal :o
|
||||
gap_dt_unit: t_unit | None = None,
|
||||
|
||||
) -> pl.DataFrame:
|
||||
'''
|
||||
Filter to OHLC datums which contain sample step gaps.
|
||||
|
||||
For eg. legacy markets which have venue close gaps and/or
|
||||
actual missing data segments.
|
||||
|
||||
'''
|
||||
# first select by any sample-period (in seconds unit) step size
|
||||
# greater then expected.
|
||||
step_gaps: pl.DataFrame = w_dts.filter(
|
||||
pl.col('s_diff').abs() > expect_period
|
||||
)
|
||||
|
||||
if gap_dt_unit is None:
|
||||
return step_gaps
|
||||
|
||||
# NOTE: this flag is to indicate that on this (sampling) time
|
||||
# scale we expect to only be filtering against larger venue
|
||||
# closures-scale time gaps.
|
||||
return step_gaps.filter(
|
||||
# Second by an arbitrary dt-unit step size
|
||||
getattr(
|
||||
pl.col('dt_diff').dt,
|
||||
gap_dt_unit,
|
||||
)().abs() > gap_thresh
|
||||
)
|
||||
|
||||
|
||||
def detect_price_gaps(
|
||||
df: pl.DataFrame,
|
||||
gt_multiplier: float = 2.,
|
||||
price_fields: list[str] = ['high', 'low'],
|
||||
|
||||
) -> pl.DataFrame:
|
||||
'''
|
||||
Detect gaps in clearing price over an OHLC series.
|
||||
|
||||
2 types of gaps generally exist; up gaps and down gaps:
|
||||
|
||||
- UP gap: when any next sample's lo price is strictly greater
|
||||
then the current sample's hi price.
|
||||
|
||||
- DOWN gap: when any next sample's hi price is strictly
|
||||
less then the current samples lo price.
|
||||
|
||||
'''
|
||||
# return df.filter(
|
||||
# pl.col('high') - ) > expect_period,
|
||||
# ).select([
|
||||
# pl.dt.datetime(pl.col(time_col).shift(1)).suffix('_previous'),
|
||||
# pl.all(),
|
||||
# ]).select([
|
||||
# pl.all(),
|
||||
# (pl.col(time_col) - pl.col(f'{time_col}_previous')).alias('diff'),
|
||||
# ])
|
||||
...
|
||||
|
||||
# TODO: probably just use the null_segs impl above?
|
||||
def detect_vlm_gaps(
|
||||
df: pl.DataFrame,
|
||||
col: str = 'volume',
|
||||
|
||||
) -> pl.DataFrame:
|
||||
|
||||
vnull: pl.DataFrame = df.filter(
|
||||
pl.col(col) == 0
|
||||
)
|
||||
return vnull
|
||||
|
||||
|
||||
def dedupe(
|
||||
src_df: pl.DataFrame,
|
||||
|
||||
time_gaps: pl.DataFrame | None = None,
|
||||
sort: bool = True,
|
||||
period: float = 60,
|
||||
|
||||
) -> tuple[
|
||||
pl.DataFrame, # with dts
|
||||
pl.DataFrame, # with deduplicated dts (aka gap/repeat removal)
|
||||
int, # len diff between input and deduped
|
||||
]:
|
||||
'''
|
||||
Check for time series gaps and if found
|
||||
de-duplicate any datetime entries, check for
|
||||
a frame height diff and return the newly
|
||||
dt-deduplicated frame.
|
||||
|
||||
'''
|
||||
wdts: pl.DataFrame = with_dts(src_df)
|
||||
|
||||
deduped = wdts
|
||||
|
||||
# remove duplicated datetime samples/sections
|
||||
deduped: pl.DataFrame = wdts.unique(
|
||||
# subset=['dt'],
|
||||
subset=['time'],
|
||||
maintain_order=True,
|
||||
)
|
||||
|
||||
# maybe sort on any time field
|
||||
if sort:
|
||||
deduped = deduped.sort(by='time')
|
||||
# TODO: detect out-of-order segments which were corrected!
|
||||
# -[ ] report in log msg
|
||||
# -[ ] possibly return segment sections which were moved?
|
||||
|
||||
diff: int = (
|
||||
wdts.height
|
||||
-
|
||||
deduped.height
|
||||
)
|
||||
return (
|
||||
wdts,
|
||||
deduped,
|
||||
diff,
|
||||
)
|
||||
|
||||
|
||||
def sort_diff(
|
||||
src_df: pl.DataFrame,
|
||||
col: str = 'time',
|
||||
|
||||
) -> tuple[
|
||||
pl.DataFrame, # with dts
|
||||
pl.DataFrame, # sorted
|
||||
list[int], # indices of segments that are out-of-order
|
||||
]:
|
||||
ser: pl.Series = src_df[col]
|
||||
sortd: pl.DataFrame = ser.sort()
|
||||
diff: pl.Series = ser.diff()
|
||||
|
||||
sortd_diff: pl.Series = sortd.diff()
|
||||
i_step_diff = (diff != sortd_diff).arg_true()
|
||||
frame_reorders: int = i_step_diff.len()
|
||||
if frame_reorders:
|
||||
log.warn(
|
||||
f'Resorted frame on col: {col}\n'
|
||||
f'{frame_reorders}'
|
||||
|
||||
)
|
||||
# import pdbp; pdbp.set_trace()
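# NOTE: standalone sketch of the out-of-order detection trick in
# `sort_diff()` above: compare the raw column's step-diffs with
# the diffs of its sorted version; mismatching indices mark rows
# delivered out of order. The toy series is illustrative only.
import polars as pl

ser = pl.Series('time', [0., 60., 180., 120., 240.])
i_bad = (ser.diff() != ser.sort().diff()).arg_true()
print(f'out-of-order steps at: {i_bad.to_list()}')  # -> [2, 3, 4]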
|
||||
|
||||
# NOTE: thanks to this SO answer for the below conversion routines
|
||||
# to go from numpy struct-arrays to polars dataframes and back:
|
||||
# https://stackoverflow.com/a/72054819
|
||||
def np2pl(array: np.ndarray) -> pl.DataFrame:
|
||||
start: float = time.time()
|
||||
|
||||
# XXX: thanks to this SO answer for this conversion tip:
|
||||
# https://stackoverflow.com/a/72054819
|
||||
df = pl.DataFrame({
|
||||
field_name: array[field_name]
|
||||
for field_name in array.dtype.fields
|
||||
})
|
||||
delay: float = round(
|
||||
time.time() - start,
|
||||
ndigits=6,
|
||||
)
|
||||
log.info(
|
||||
f'numpy -> polars conversion took {delay} secs\n'
|
||||
f'polars df: {df}'
|
||||
)
|
||||
return df
|
||||
|
||||
|
||||
def pl2np(
|
||||
df: pl.DataFrame,
|
||||
dtype: np.dtype,
|
||||
|
||||
) -> np.ndarray:
|
||||
|
||||
# Create numpy struct array of the correct size and dtype
|
||||
# and loop through df columns to fill in array fields.
|
||||
array = np.empty(
|
||||
df.height,
|
||||
dtype,
|
||||
)
|
||||
for field, col in zip(
|
||||
dtype.fields,
|
||||
df.columns,
|
||||
):
|
||||
array[field] = df.get_column(col).to_numpy()
|
||||
|
||||
return array
|
|
@ -21,16 +21,15 @@ Extensions to built-in or (heavily used but 3rd party) friend-lib
|
|||
types.
|
||||
|
||||
'''
|
||||
from __future__ import annotations
|
||||
from collections import UserList
|
||||
from pprint import (
|
||||
saferepr,
|
||||
pformat,
|
||||
)
|
||||
from typing import Any
|
||||
|
||||
from msgspec import (
|
||||
msgpack,
|
||||
Struct as _Struct,
|
||||
Struct,
|
||||
structs,
|
||||
)
|
||||
|
||||
|
@ -63,7 +62,7 @@ class DiffDump(UserList):
|
|||
|
||||
|
||||
class Struct(
|
||||
_Struct,
|
||||
Struct,
|
||||
|
||||
# https://jcristharif.com/msgspec/structs.html#tagged-unions
|
||||
# tag='pikerstruct',
|
||||
|
@ -73,27 +72,9 @@ class Struct(
|
|||
A "human friendlier" (aka repl buddy) struct subtype.
|
||||
|
||||
'''
|
||||
def _sin_props(self) -> Iterator[
|
||||
tuple[
|
||||
structs.FieldIinfo,
|
||||
str,
|
||||
Any,
|
||||
]
|
||||
]:
|
||||
'''
|
||||
Iterate over all non-@property fields of this struct.
|
||||
|
||||
'''
|
||||
fi: structs.FieldInfo
|
||||
for fi in structs.fields(self):
|
||||
key: str = fi.name
|
||||
val: Any = getattr(self, key)
|
||||
yield fi, key, val
|
||||
|
||||
def to_dict(
|
||||
self,
|
||||
include_non_members: bool = True,
|
||||
|
||||
) -> dict:
|
||||
'''
|
||||
Like it sounds.. direct delegation to:
|
||||
|
@ -109,72 +90,16 @@ class Struct(
|
|||
|
||||
# only return a dict of the struct members
|
||||
# which were provided as input, NOT anything
|
||||
# added as type-defined `@property` methods!
|
||||
# added as `@properties`!
|
||||
sin_props: dict = {}
|
||||
fi: structs.FieldInfo
|
||||
for fi, k, v in self._sin_props():
|
||||
sin_props[k] = asdict[k]
|
||||
for fi in structs.fields(self):
|
||||
key: str = fi.name
|
||||
sin_props[key] = asdict[key]
|
||||
|
||||
return sin_props
|
||||
|
||||
def pformat(
|
||||
self,
|
||||
field_indent: int = 2,
|
||||
indent: int = 0,
|
||||
|
||||
) -> str:
|
||||
'''
|
||||
Recursion-safe `pprint.pformat()` style formatting of
|
||||
a `msgspec.Struct` for sane reading by a human using a REPL.
|
||||
|
||||
'''
|
||||
# global whitespace indent
|
||||
ws: str = ' '*indent
|
||||
|
||||
# field whitespace indent
|
||||
field_ws: str = ' '*(field_indent + indent)
|
||||
|
||||
# qtn: str = ws + self.__class__.__qualname__
|
||||
qtn: str = self.__class__.__qualname__
|
||||
|
||||
obj_str: str = '' # accumulator
|
||||
fi: structs.FieldInfo
|
||||
k: str
|
||||
v: Any
|
||||
for fi, k, v in self._sin_props():
|
||||
|
||||
# TODO: how can we prefer `Literal['option1', 'option2,
|
||||
# ..]` over .__name__ == `Literal` but still get only the
|
||||
# latter for simple types like `str | int | None` etc..?
|
||||
ft: type = fi.type
|
||||
typ_name: str = getattr(ft, '__name__', str(ft))
|
||||
|
||||
# recurse to get sub-struct's `.pformat()` output Bo
|
||||
if isinstance(v, Struct):
|
||||
val_str: str = v.pformat(
|
||||
indent=field_indent + indent,
|
||||
field_indent=indent + field_indent,
|
||||
)
|
||||
|
||||
else: # the `pprint` recursion-safe format:
|
||||
# https://docs.python.org/3.11/library/pprint.html#pprint.saferepr
|
||||
val_str: str = saferepr(v)
|
||||
|
||||
obj_str += (field_ws + f'{k}: {typ_name} = {val_str},\n')
|
||||
|
||||
return (
|
||||
f'{qtn}(\n'
|
||||
f'{obj_str}'
|
||||
f'{ws})'
|
||||
)
|
||||
|
||||
# TODO: use a pprint.PrettyPrinter instance around ONLY rendering
|
||||
# inside a known tty?
|
||||
# def __repr__(self) -> str:
|
||||
# ...
|
||||
|
||||
# __str__ = __repr__ = pformat
|
||||
__repr__ = pformat
|
||||
def pformat(self) -> str:
|
||||
return f'Struct({pformat(self.to_dict())})'
|
||||
|
||||
def copy(
|
||||
self,
|
||||
|
|
|
@ -14,8 +14,9 @@
|
|||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
UI components built using `Qt` with major versions swapped in via
|
||||
the import indirection in the `.qt` sub-mod.
|
||||
"""
|
||||
Stuff for your eyes, aka super hawt Qt UI components.
|
||||
|
||||
'''
|
||||
Currently we only support PyQt5 due to this issue in Pyside2:
|
||||
https://bugreports.qt.io/projects/PYSIDE/issues/PYSIDE-1313
|
||||
"""
|
||||
|
|
|
@ -21,10 +21,8 @@ Anchor functions for UI placement of annotations.
|
|||
from __future__ import annotations
|
||||
from typing import Callable, TYPE_CHECKING
|
||||
|
||||
from piker.ui.qt import (
|
||||
QPointF,
|
||||
QGraphicsPathItem,
|
||||
)
|
||||
from PyQt5.QtCore import QPointF
|
||||
from PyQt5.QtWidgets import QGraphicsPathItem
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from ._chart import ChartPlotWidget
|
||||
|
|
|
@ -20,22 +20,12 @@ Annotations for ur faces.
|
|||
"""
|
||||
from typing import Callable
|
||||
|
||||
from pyqtgraph import (
|
||||
Point,
|
||||
functions as fn,
|
||||
Color,
|
||||
)
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5.QtCore import QPointF, QRectF
|
||||
from PyQt5.QtWidgets import QGraphicsPathItem
|
||||
from pyqtgraph import Point, functions as fn, Color
|
||||
import numpy as np
|
||||
|
||||
from piker.ui.qt import (
|
||||
QtCore,
|
||||
QtGui,
|
||||
QtWidgets,
|
||||
QPointF,
|
||||
QRectF,
|
||||
QGraphicsPathItem,
|
||||
)
|
||||
|
||||
|
||||
def mk_marker_path(
|
||||
|
||||
|
|
|
@ -21,11 +21,9 @@ Main app startup and run.
|
|||
from functools import partial
|
||||
from types import ModuleType
|
||||
|
||||
from PyQt5.QtCore import QEvent
|
||||
import trio
|
||||
|
||||
from piker.ui.qt import (
|
||||
QEvent,
|
||||
)
|
||||
from ..service import maybe_spawn_brokerd
|
||||
from . import _event
|
||||
from ._exec import run_qtractor
|
||||
|
|
|
@ -23,24 +23,16 @@ from functools import lru_cache
|
|||
from typing import Callable
|
||||
from math import floor
|
||||
|
||||
import polars as pl
|
||||
import numpy as np
|
||||
import pyqtgraph as pg
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5.QtCore import QPointF
|
||||
|
||||
from piker.ui.qt import (
|
||||
QtCore,
|
||||
QtGui,
|
||||
QtWidgets,
|
||||
QPointF,
|
||||
txt_flag,
|
||||
align_flag,
|
||||
px_cache_mode,
|
||||
)
|
||||
from . import _pg_overrides as pgo
|
||||
from ..accounting._mktinfo import float_digits
|
||||
from ._label import Label
|
||||
from ._style import DpiAwareFont, hcolor, _font
|
||||
from ._interaction import ChartView
|
||||
from ._dataviz import Viz
|
||||
|
||||
_axis_pen = pg.mkPen(hcolor('bracket'))
|
||||
|
||||
|
@ -295,7 +287,9 @@ class DynamicDateAxis(Axis):
|
|||
# time formats mapped by seconds between bars
|
||||
tick_tpl = {
|
||||
60 * 60 * 24: '%Y-%b-%d',
|
||||
60: '%Y-%b-%d(%H:%M)',
|
||||
60: '%H:%M',
|
||||
30: '%H:%M:%S',
|
||||
5: '%H:%M:%S',
|
||||
1: '%H:%M:%S',
|
||||
}
|
||||
|
||||
|
@ -311,10 +305,10 @@ class DynamicDateAxis(Axis):
|
|||
# XX: ARGGGGG AG:LKSKDJF:LKJSDFD
|
||||
chart = self.pi.chart_widget
|
||||
|
||||
viz: Viz = chart._vizs[chart.name]
|
||||
viz = chart._vizs[chart.name]
|
||||
shm = viz.shm
|
||||
array = shm.array
|
||||
ifield: str = viz.index_field
|
||||
ifield = viz.index_field
|
||||
index = array[ifield]
|
||||
i_0, i_l = index[0], index[-1]
|
||||
|
||||
|
@ -335,7 +329,7 @@ class DynamicDateAxis(Axis):
|
|||
arr_len = index.shape[0]
|
||||
first = shm._first.value
|
||||
times = array['time']
|
||||
epochs: list[int] = times[
|
||||
epochs = times[
|
||||
list(
|
||||
map(
|
||||
int,
|
||||
|
@ -347,30 +341,23 @@ class DynamicDateAxis(Axis):
|
|||
)
|
||||
]
|
||||
else:
|
||||
epochs: list[int] = list(map(int, indexes))
|
||||
epochs = list(map(int, indexes))
|
||||
|
||||
# TODO: **don't** have this hard coded shift to EST
|
||||
delay: float = viz.time_step()
|
||||
if delay > 1:
|
||||
# NOTE: use less granular dt-str when using 1M+ OHLC
|
||||
fmtstr: str = self.tick_tpl[delay]
|
||||
else:
|
||||
fmtstr: str = '%Y-%m-%d(%H:%M:%S)'
|
||||
|
||||
# https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.from_epoch.html#polars-from-epoch
|
||||
pl_dts: pl.Series = pl.from_epoch(
|
||||
# delay = times[-1] - times[-2]
|
||||
dts = np.array(
|
||||
epochs,
|
||||
time_unit='s',
|
||||
# NOTE: kinda weird we can pass it to `.from_epoch()` no?
|
||||
).dt.replace_time_zone(
|
||||
time_zone='UTC'
|
||||
).dt.convert_time_zone(
|
||||
# TODO: pull this from either:
|
||||
# -[ ] the mkt venue tz by default
|
||||
# -[ ] the user's config under `sys.mkt_timezone: str`
|
||||
'EST'
|
||||
dtype='datetime64[s]',
|
||||
)
|
||||
return pl_dts.dt.to_string(fmtstr).to_list()
|
||||
|
||||
# see units listing:
|
||||
# https://numpy.org/devdocs/reference/arrays.datetime.html#datetime-units
|
||||
return list(np.datetime_as_string(dts))
|
||||
|
||||
# TODO: per timeframe formatting?
|
||||
# - we probably need this based on zoom now right?
|
||||
# prec = self.np_dt_precision[delay]
|
||||
# return dts.strftime(self.tick_tpl[delay])
|
||||
|
||||
def tickStrings(
|
||||
self,
|
||||
|
@ -421,15 +408,11 @@ class AxisLabel(pg.GraphicsObject):
|
|||
super().__init__()
|
||||
self.setParentItem(parent)
|
||||
|
||||
self.setFlag(
|
||||
self.GraphicsItemFlag.ItemIgnoresTransformations
|
||||
)
|
||||
self.setFlag(self.ItemIgnoresTransformations)
|
||||
self.setZValue(100)
|
||||
|
||||
# XXX: pretty sure this is faster
|
||||
self.setCacheMode(
|
||||
px_cache_mode.DeviceCoordinateCache
|
||||
)
|
||||
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
|
||||
|
||||
self._parent = parent
|
||||
|
||||
|
@ -566,14 +549,21 @@ class AxisLabel(pg.GraphicsObject):
|
|||
|
||||
return (self.rect.width(), self.rect.height())
|
||||
|
||||
# _common_text_flags = (
|
||||
# QtCore.Qt.TextDontClip |
|
||||
# QtCore.Qt.AlignCenter |
|
||||
# QtCore.Qt.AlignTop |
|
||||
# QtCore.Qt.AlignHCenter |
|
||||
# QtCore.Qt.AlignVCenter
|
||||
# )
|
||||
|
||||
|
||||
class XAxisLabel(AxisLabel):
|
||||
_x_margin = 8
|
||||
|
||||
text_flags = (
|
||||
align_flag.AlignCenter
|
||||
| txt_flag.TextDontClip
|
||||
QtCore.Qt.TextDontClip
|
||||
| QtCore.Qt.AlignCenter
|
||||
)
|
||||
|
||||
def size_hint(self) -> tuple[float, float]:
|
||||
|
@ -630,10 +620,10 @@ class YAxisLabel(AxisLabel):
|
|||
_y_margin: int = 4
|
||||
|
||||
text_flags = (
|
||||
align_flag.AlignLeft
|
||||
| align_flag.AlignVCenter
|
||||
# | align_flag.AlignHCenter
|
||||
| txt_flag.TextDontClip
|
||||
QtCore.Qt.AlignLeft
|
||||
# QtCore.Qt.AlignHCenter
|
||||
| QtCore.Qt.AlignVCenter
|
||||
| QtCore.Qt.TextDontClip
|
||||
)
|
||||
|
||||
def __init__(
|
||||
|
|
|
@ -28,20 +28,22 @@ from typing import (
|
|||
TYPE_CHECKING,
|
||||
)
|
||||
|
||||
import pyqtgraph as pg
|
||||
import trio
|
||||
|
||||
from piker.ui.qt import (
|
||||
QtCore,
|
||||
QtWidgets,
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
from PyQt5.QtCore import (
|
||||
Qt,
|
||||
QLineF,
|
||||
# QPointF,
|
||||
)
|
||||
from PyQt5.QtWidgets import (
|
||||
QFrame,
|
||||
QWidget,
|
||||
QHBoxLayout,
|
||||
QVBoxLayout,
|
||||
QSplitter,
|
||||
)
|
||||
import pyqtgraph as pg
|
||||
import trio
|
||||
|
||||
from ._axes import (
|
||||
DynamicDateAxis,
|
||||
PriceAxis,
|
||||
|
@ -568,8 +570,8 @@ class LinkedSplits(QWidget):
|
|||
|
||||
# style?
|
||||
self.chart.setFrameStyle(
|
||||
QFrame.Shape.StyledPanel |
|
||||
QFrame.Shadow.Plain
|
||||
QFrame.StyledPanel |
|
||||
QFrame.Plain
|
||||
)
|
||||
|
||||
return self.chart
|
||||
|
@ -687,8 +689,8 @@ class LinkedSplits(QWidget):
|
|||
|
||||
cpw.plotItem.vb.linked = self
|
||||
cpw.setFrameStyle(
|
||||
QFrame.Shape.StyledPanel
|
||||
# | QFrame.Shadow.Plain
|
||||
QtWidgets.QFrame.StyledPanel
|
||||
# | QtWidgets.QFrame.Plain
|
||||
)
|
||||
|
||||
# don't show the little "autoscale" A label.
|
||||
|
|
|
@ -28,14 +28,9 @@ from typing import (
|
|||
import inspect
|
||||
import numpy as np
|
||||
import pyqtgraph as pg
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
from PyQt5.QtCore import QPointF, QRectF
|
||||
|
||||
from piker.ui.qt import (
|
||||
QPointF,
|
||||
QRectF,
|
||||
QtCore,
|
||||
QtWidgets,
|
||||
px_cache_mode,
|
||||
)
|
||||
from ._style import (
|
||||
_xaxis_at,
|
||||
hcolor,
|
||||
|
@ -109,9 +104,7 @@ class LineDot(pg.CurvePoint):
|
|||
dot.setParentItem(self)
|
||||
|
||||
# keep a static size
|
||||
self.setFlag(
|
||||
self.GraphicsItemFlag.ItemIgnoresTransformations
|
||||
)
|
||||
self.setFlag(self.ItemIgnoresTransformations)
|
||||
|
||||
def event(
|
||||
self,
|
||||
|
@ -214,10 +207,9 @@ class ContentsLabel(pg.LabelItem):
|
|||
# this being "html" is the dumbest shit :eyeroll:
|
||||
|
||||
self.setText(
|
||||
"<b>i_arr</b>:{index}<br/>"
|
||||
"<b>i</b>:{index}<br/>"
|
||||
# NB: these fields must be indexed in the correct order via
|
||||
# the slice syntax below.
|
||||
"<b>i_shm</b>:{}<br/>"
|
||||
"<b>epoch</b>:{}<br/>"
|
||||
"<b>O</b>:{}<br/>"
|
||||
"<b>H</b>:{}<br/>"
|
||||
|
@ -227,7 +219,6 @@ class ContentsLabel(pg.LabelItem):
|
|||
# "<b>wap</b>:{}".format(
|
||||
*array[ix][
|
||||
[
|
||||
'index',
|
||||
'time',
|
||||
'open',
|
||||
'high',
|
||||
|
@ -279,15 +270,10 @@ class ContentsLabels:
|
|||
x_in: int,
|
||||
|
||||
) -> None:
|
||||
for (
|
||||
chart,
|
||||
name,
|
||||
label,
|
||||
update,
|
||||
)in self._labels:
|
||||
for chart, name, label, update in self._labels:
|
||||
|
||||
viz = chart.get_viz(name)
|
||||
array: np.ndarray = viz.shm._array
|
||||
array = viz.shm.array
|
||||
index = array[viz.index_field]
|
||||
start = index[0]
|
||||
stop = index[-1]
|
||||
|
@ -298,7 +284,7 @@ class ContentsLabels:
|
|||
):
|
||||
# out of range
|
||||
print('WTF out of range?')
|
||||
# continue
|
||||
continue
|
||||
|
||||
# call provided update func with data point
|
||||
try:
|
||||
|
@ -306,7 +292,6 @@ class ContentsLabels:
|
|||
ix = np.searchsorted(index, x_in)
|
||||
if ix > len(array):
|
||||
breakpoint()
|
||||
|
||||
update(ix, array)
|
||||
|
||||
except IndexError:
|
||||
|
@ -431,10 +416,10 @@ class Cursor(pg.GraphicsObject):
|
|||
# vertical and horizontal lines and a y-axis label
|
||||
|
||||
vl = plot.addLine(x=0, pen=self.lines_pen, movable=False)
|
||||
vl.setCacheMode(px_cache_mode.DeviceCoordinateCache)
|
||||
vl.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
|
||||
|
||||
hl = plot.addLine(y=0, pen=self.lines_pen, movable=False)
|
||||
hl.setCacheMode(px_cache_mode.DeviceCoordinateCache)
|
||||
hl.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
|
||||
hl.hide()
|
||||
|
||||
yl = YAxisLabel(
|
||||
|
@ -518,10 +503,7 @@ class Cursor(pg.GraphicsObject):
|
|||
plot=chart
|
||||
)
|
||||
chart.addItem(cursor)
|
||||
self.graphics[chart].setdefault(
|
||||
'cursors',
|
||||
[],
|
||||
).append(cursor)
|
||||
self.graphics[chart].setdefault('cursors', []).append(cursor)
|
||||
return cursor
|
||||
|
||||
def mouseAction(
|
||||
|
|
|
@ -19,21 +19,20 @@ Fast, smooth, sexy curves.
|
|||
|
||||
"""
|
||||
from contextlib import contextmanager as cm
|
||||
from enum import EnumType
|
||||
from typing import Callable
|
||||
|
||||
import numpy as np
|
||||
import pyqtgraph as pg
|
||||
|
||||
from piker.ui.qt import (
|
||||
QtWidgets,
|
||||
QGraphicsItem,
|
||||
from PyQt5 import QtWidgets
|
||||
from PyQt5.QtWidgets import QGraphicsItem
|
||||
from PyQt5.QtCore import (
|
||||
Qt,
|
||||
QLineF,
|
||||
QRectF,
|
||||
)
|
||||
from PyQt5.QtGui import (
|
||||
QPainter,
|
||||
QPainterPath,
|
||||
px_cache_mode,
|
||||
)
|
||||
from ._style import hcolor
|
||||
from ..log import get_logger
|
||||
|
@ -43,23 +42,22 @@ from ..toolz.profile import (
|
|||
ms_slower_then,
|
||||
)
|
||||
|
||||
|
||||
log = get_logger(__name__)
|
||||
|
||||
|
||||
pen_style: EnumType = Qt.PenStyle
|
||||
|
||||
_line_styles: dict[str, int] = {
|
||||
'solid': pen_style.SolidLine,
|
||||
'dash': pen_style.DashLine,
|
||||
'dot': pen_style.DotLine,
|
||||
'dashdot': pen_style.DashDotLine,
|
||||
'solid': Qt.PenStyle.SolidLine,
|
||||
'dash': Qt.PenStyle.DashLine,
|
||||
'dot': Qt.PenStyle.DotLine,
|
||||
'dashdot': Qt.PenStyle.DashDotLine,
|
||||
}
|
||||
|
||||
|
||||
class FlowGraphic(pg.GraphicsObject):
|
||||
'''
|
||||
Base class with minimal interface for `QPainterPath`
|
||||
implemented, real-time updated "data flow" graphics.
|
||||
Base class with minimal interface for `QPainterPath` implemented,
|
||||
real-time updated "data flow" graphics.
|
||||
|
||||
See subtypes below.
|
||||
|
||||
|
@ -71,12 +69,12 @@ class FlowGraphic(pg.GraphicsObject):
|
|||
# XXX-NOTE-XXX: graphics caching B)
|
||||
# see explanation for different caching modes:
|
||||
# https://stackoverflow.com/a/39410081
|
||||
cache_mode: int = px_cache_mode.DeviceCoordinateCache
|
||||
cache_mode: int = QGraphicsItem.DeviceCoordinateCache
|
||||
# XXX: WARNING item caching seems to only be useful
|
||||
# if we don't re-generate the entire QPainterPath every time
|
||||
# don't ever use this - it's a colossal nightmare of artefacts
|
||||
# and is disastrous for performance.
|
||||
# cache_mode.ItemCoordinateCache
|
||||
# QGraphicsItem.ItemCoordinateCache
|
||||
# TODO: still questions todo with coord-cacheing that we should
|
||||
# probably talk to a core dev about:
|
||||
# - if this makes transform interactions slower (such as zooming)
|
||||
|
@ -169,16 +167,15 @@ class FlowGraphic(pg.GraphicsObject):
|
|||
return None
|
||||
|
||||
# XXX: due to a variety of weird jitter bugs and "smearing"
|
||||
# artifacts when click-drag panning and viewing history time
|
||||
# series, we offer this ctx-mngr interface to allow temporarily
|
||||
# disabling Qt's graphics caching mode; this is now currently
|
||||
# used from ``ChartView.start/signal_ic()`` methods which also
|
||||
# disable the rt-display loop when the user is moving around
|
||||
# a view.
|
||||
# artifacts when click-drag panning and viewing history time series,
|
||||
# we offer this ctx-mngr interface to allow temporarily disabling
|
||||
# Qt's graphics caching mode; this is now currently used from
|
||||
# ``ChartView.start/signal_ic()`` methods which also disable the
|
||||
# rt-display loop when the user is moving around a view.
|
||||
@cm
|
||||
def reset_cache(self) -> None:
|
||||
try:
|
||||
none = px_cache_mode.NoCache
|
||||
none = QGraphicsItem.NoCache
|
||||
log.debug(
|
||||
f'{self._name} -> CACHE DISABLE: {none}'
|
||||
)
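# a hedged sketch (assumed helper, not part of this changeset) of the
# cache-toggling pattern that `reset_cache()` above implements:
# temporarily drop a graphics item's paint cache while panning, then
# restore whatever mode it had before.
from contextlib import contextmanager
from PyQt5.QtWidgets import QGraphicsItem

@contextmanager
def no_paint_cache(item: QGraphicsItem):
    prev = item.cacheMode()
    # disable device-coordinate caching to avoid "smearing" artifacts
    item.setCacheMode(QGraphicsItem.NoCache)
    try:
        yield item
    finally:
        # re-enable the previous caching mode on exit
        item.setCacheMode(prev)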
|
||||
|
|
|
@ -36,12 +36,9 @@ from msgspec import (
|
|||
field,
|
||||
)
|
||||
import numpy as np
|
||||
from numpy import (
|
||||
ndarray,
|
||||
)
|
||||
import pyqtgraph as pg
|
||||
from PyQt5.QtCore import QLineF
|
||||
|
||||
from piker.ui.qt import QLineF
|
||||
from ..data._sharedmem import (
|
||||
ShmArray,
|
||||
)
|
||||
|
@ -52,7 +49,7 @@ from ..data._formatters import (
|
|||
OHLCBarsAsCurveFmtr, # OHLC converted to line
|
||||
StepCurveFmtr, # "step" curve (like for vlm)
|
||||
)
|
||||
from ..tsp import (
|
||||
from ..data._timeseries import (
|
||||
slice_from_time,
|
||||
)
|
||||
from ._ohlc import (
|
||||
|
@ -85,11 +82,10 @@ def render_baritems(
|
|||
viz: Viz,
|
||||
graphics: BarItems,
|
||||
read: tuple[
|
||||
int, int, ndarray,
|
||||
int, int, ndarray,
|
||||
int, int, np.ndarray,
|
||||
int, int, np.ndarray,
|
||||
],
|
||||
profiler: Profiler,
|
||||
force_redraw: bool = False,
|
||||
**kwargs,
|
||||
|
||||
) -> None:
|
||||
|
@ -220,11 +216,9 @@ def render_baritems(
|
|||
viz._in_ds = should_line
|
||||
|
||||
should_redraw = (
|
||||
force_redraw
|
||||
or changed_to_line
|
||||
changed_to_line
|
||||
or not should_line
|
||||
)
|
||||
# print(f'should_redraw: {should_redraw}')
|
||||
return (
|
||||
graphics,
|
||||
r,
|
||||
|
@ -256,7 +250,7 @@ class ViewState(Struct):
|
|||
] | None = None
|
||||
|
||||
# last in view ``ShmArray.array[read_slc]`` data
|
||||
in_view: ndarray | None = None
|
||||
in_view: np.ndarray | None = None
|
||||
|
||||
|
||||
class Viz(Struct):
|
||||
|
@ -319,7 +313,6 @@ class Viz(Struct):
|
|||
_last_uppx: float = 0
|
||||
_in_ds: bool = False
|
||||
_index_step: float | None = None
|
||||
_time_step: float | None = None
|
||||
|
||||
# map from uppx -> (downsampled data, incremental graphics)
|
||||
_src_r: Renderer | None = None
|
||||
|
@ -366,8 +359,7 @@ class Viz(Struct):
|
|||
|
||||
def index_step(
|
||||
self,
|
||||
index_field: str | None = None,
|
||||
|
||||
reset: bool = False,
|
||||
) -> float:
|
||||
'''
|
||||
Return the size between sample steps in the units of the
|
||||
|
@ -375,17 +367,12 @@ class Viz(Struct):
|
|||
epoch time in seconds.
|
||||
|
||||
'''
|
||||
# attempt to detect the best step size by scanning a sample
|
||||
# of the source data.
|
||||
if (
|
||||
self._index_step is None
|
||||
or index_field is not None
|
||||
):
|
||||
index: ndarray = self.shm.array[
|
||||
index_field
|
||||
or self.index_field
|
||||
]
|
||||
isample: ndarray = index[-16:]
|
||||
# attempt to detect the best step size by scanning a sample of
|
||||
# the source data.
|
||||
if self._index_step is None:
|
||||
|
||||
index: np.ndarray = self.shm.array[self.index_field]
|
||||
isample: np.ndarray = index[-16:]
|
||||
|
||||
mxdiff: None | float = None
|
||||
for step in np.diff(isample):
|
||||
|
@ -399,15 +386,7 @@ class Viz(Struct):
|
|||
)
|
||||
mxdiff = step
|
||||
|
||||
step: float = max(mxdiff, 1)
|
||||
|
||||
# only SET the internal index step if an explicit
|
||||
# field name is NOT passed, since in such cases this
|
||||
# is likely just being called from `.time_step()`.
|
||||
if index_field is not None:
|
||||
return step
|
||||
|
||||
self._index_step = step
|
||||
self._index_step = max(mxdiff, 1)
|
||||
if (
|
||||
mxdiff < 1
|
||||
or 1 < mxdiff < 60
|
||||
|
@ -418,17 +397,6 @@ class Viz(Struct):
|
|||
|
||||
return self._index_step
|
||||
|
||||
def time_step(self) -> float:
|
||||
'''
|
||||
Attempt to determine the per-sample time-step period by
|
||||
forcing an epoch-index and calling `.index_step()`.
|
||||
|
||||
'''
|
||||
if self._time_step is None:
|
||||
self._time_step: float = self.index_step(index_field='time')
|
||||
|
||||
return self._time_step
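# a short, hedged sketch of the tail-scan step detection used by
# `.index_step()` above (simplified; names and the numeric-index
# assumption are illustrative only): take the largest sample-to-sample
# diff over the last few index values and floor it at 1.
import numpy as np

def detect_index_step(index: np.ndarray, nsamples: int = 16) -> float:
    isample = index[-nsamples:]
    diffs = np.diff(isample)
    if not diffs.size:
        return 1.0
    return float(max(diffs.max(), 1))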
|
||||
|
||||
def maxmin(
|
||||
self,
|
||||
|
||||
|
@ -436,9 +404,6 @@ class Viz(Struct):
|
|||
i_read_range: tuple[int, int] | None = None,
|
||||
use_caching: bool = True,
|
||||
|
||||
# XXX: internal debug
|
||||
_do_print: bool = False
|
||||
|
||||
) -> tuple[float, float] | None:
|
||||
'''
|
||||
Compute the cached max and min y-range values for a given
|
||||
|
@ -458,14 +423,15 @@ class Viz(Struct):
|
|||
if shm is None:
|
||||
return None
|
||||
|
||||
arr: ndarray = shm.array
|
||||
do_print: bool = False
|
||||
arr = shm.array
|
||||
|
||||
if i_read_range is not None:
|
||||
read_slc = slice(*i_read_range)
|
||||
index: float | int = arr[read_slc][self.index_field]
|
||||
index = arr[read_slc][self.index_field]
|
||||
if not index.size:
|
||||
return None
|
||||
ixrng: tuple[int, int] = (index[0], index[-1])
|
||||
ixrng = (index[0], index[-1])
|
||||
|
||||
else:
|
||||
if x_range is None:
|
||||
|
@ -483,24 +449,15 @@ class Viz(Struct):
|
|||
|
||||
# TODO: hash the slice instead maybe?
|
||||
# https://stackoverflow.com/a/29980872
|
||||
ixrng = lbar, rbar = (
|
||||
round(x_range[0]),
|
||||
round(x_range[1]),
|
||||
)
|
||||
ixrng = lbar, rbar = round(x_range[0]), round(x_range[1])
|
||||
|
||||
if (
|
||||
use_caching
|
||||
and self._mxmn_cache_enabled
|
||||
):
|
||||
# TODO: is there a way to ONLY clear ranges containing
|
||||
# a certain sub-range?
|
||||
# -[ ] currently we have a problem where a previously
|
||||
# cached mxmn will persist even if the viz is "hard
|
||||
# re-rendered" (usually bc underlying data was
|
||||
# corrected)
|
||||
cached_result = self._mxmns.get(ixrng)
|
||||
if cached_result:
|
||||
if _do_print:
|
||||
if do_print:
|
||||
print(
|
||||
f'{self.name} CACHED maxmin\n'
|
||||
f'{ixrng} -> {cached_result}'
|
||||
|
@ -530,7 +487,7 @@ class Viz(Struct):
|
|||
(rbar - ifirst) + 1
|
||||
)
|
||||
|
||||
slice_view: ndarray = arr[read_slc]
|
||||
slice_view = arr[read_slc]
|
||||
|
||||
if not slice_view.size:
|
||||
log.warning(
|
||||
|
@ -541,7 +498,7 @@ class Viz(Struct):
|
|||
|
||||
elif self.ds_yrange:
|
||||
mxmn = self.ds_yrange
|
||||
if _do_print:
|
||||
if do_print:
|
||||
print(
|
||||
f'{self.name} M4 maxmin:\n'
|
||||
f'{ixrng} -> {mxmn}'
|
||||
|
@ -558,7 +515,7 @@ class Viz(Struct):
|
|||
|
||||
mxmn = ylow, yhigh
|
||||
if (
|
||||
_do_print
|
||||
do_print
|
||||
):
|
||||
s = 3
|
||||
print(
|
||||
|
@ -572,23 +529,14 @@ class Viz(Struct):
|
|||
|
||||
# cache result for input range
|
||||
ylow, yhi = mxmn
|
||||
diff: float = yhi - ylow
|
||||
|
||||
# order-of-magnitude check
|
||||
# TODO: really we should be checking the hi or low
|
||||
# against the previous sample to catch stuff like,
|
||||
# - rando stock (reverse-)split
|
||||
# - null-segments written by some prior
|
||||
# crash-during-backfill
|
||||
if diff > 0:
|
||||
omg: float = abs(logf(diff, 10))
|
||||
else:
|
||||
omg: float = 0
|
||||
|
||||
try:
|
||||
prolly_anomaly: bool = (
|
||||
# diff == 0
|
||||
(ylow and omg > 10)
|
||||
(
|
||||
abs(logf(ylow, 10)) > 16
|
||||
if ylow
|
||||
else False
|
||||
)
|
||||
or (
|
||||
isnan(ylow) or isnan(yhi)
|
||||
)
|
||||
|
@ -615,8 +563,7 @@ class Viz(Struct):
|
|||
|
||||
def view_range(self) -> tuple[int, int]:
|
||||
'''
|
||||
Return the start and stop x-indexes for the managed
|
||||
``ViewBox``.
|
||||
Return the start and stop x-indexes for the managed ``ViewBox``.
|
||||
|
||||
'''
|
||||
vr = self.plot.viewRect()
|
||||
|
@ -629,7 +576,7 @@ class Viz(Struct):
|
|||
self,
|
||||
view_range: None | tuple[float, float] = None,
|
||||
index_field: str | None = None,
|
||||
array: ndarray | None = None,
|
||||
array: np.ndarray | None = None,
|
||||
|
||||
) -> tuple[
|
||||
int, int, int, int, int, int
|
||||
|
@ -700,8 +647,8 @@ class Viz(Struct):
|
|||
profiler: None | Profiler = None,
|
||||
|
||||
) -> tuple[
|
||||
int, int, ndarray,
|
||||
int, int, ndarray,
|
||||
int, int, np.ndarray,
|
||||
int, int, np.ndarray,
|
||||
]:
|
||||
'''
|
||||
Read the underlying shm array buffer and
|
||||
|
@ -871,10 +818,6 @@ class Viz(Struct):
|
|||
graphics,
|
||||
read,
|
||||
profiler,
|
||||
|
||||
# NOTE: only set when caller says to
|
||||
force_redraw=should_redraw,
|
||||
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
|
@ -1037,39 +980,6 @@ class Viz(Struct):
|
|||
graphics,
|
||||
)
|
||||
|
||||
def reset_graphics(
|
||||
self,
|
||||
|
||||
# TODO: allow only resetting within some x-domain range?
|
||||
# ixrng: tuple[int, int] | None = None,
|
||||
|
||||
) -> None:
|
||||
'''
|
||||
Hard reset all graphics (rendering) layers for this
|
||||
data viz including clearing the mxmn auto-y-range
|
||||
cache.
|
||||
|
||||
Normally called when the underlying data set is modified
|
||||
(probably by some `.tsp` correcting/editing routine) and
|
||||
the (now cached) graphics need to be fully re-rendered from
|
||||
source.
|
||||
|
||||
'''
|
||||
log.warning(
|
||||
f'Forcing hard Viz graphics RESET:\n'
|
||||
f'.name: {self.name}\n'
|
||||
f'.index_field: {self.index_field}\n'
|
||||
f'.index_step(): {self.index_step()}\n'
|
||||
f'.time_step(): {self.time_step()}\n'
|
||||
)
|
||||
# XXX: always clear the mxn y-range cache
|
||||
# to avoid old data (anomalies) from being
|
||||
# retained in auto-yrange output.
|
||||
self._mxmn_cache_enabled = False
|
||||
self._mxmns.clear()
|
||||
self.update_graphics(force_redraw=True)
|
||||
self._mxmn_cache_enabled = True
|
||||
|
||||
def draw_last(
|
||||
self,
|
||||
array_key: str | None = None,
|
||||
|
@ -1162,7 +1072,7 @@ class Viz(Struct):
|
|||
|
||||
'''
|
||||
shm: ShmArray = self.shm
|
||||
array: ndarray = shm.array
|
||||
array: np.ndarray = shm.array
|
||||
view: ChartView = self.plot.vb
|
||||
(
|
||||
vl,
|
||||
|
|
|
@ -57,7 +57,6 @@ from piker.toolz import (
|
|||
Profiler,
|
||||
)
|
||||
from piker.log import get_logger
|
||||
from piker import config
|
||||
# from ..data._source import tf_in_1s
|
||||
from ._axes import YAxisLabel
|
||||
from ._chart import (
|
||||
|
@ -211,9 +210,9 @@ async def increment_history_view(
|
|||
):
|
||||
hist_chart: ChartPlotWidget = ds.hist_chart
|
||||
hist_viz: Viz = ds.hist_viz
|
||||
# viz: Viz = ds.viz
|
||||
viz: Viz = ds.viz
|
||||
assert 'hist' in hist_viz.shm.token['shm_name']
|
||||
# name: str = hist_viz.name
|
||||
name: str = hist_viz.name
|
||||
|
||||
# TODO: seems this is more reliable at keeping the slow
|
||||
# chart incremented in view more correctly?
|
||||
|
@ -226,8 +225,7 @@ async def increment_history_view(
|
|||
# draw everything from scratch on first entry!
|
||||
for curve_name, hist_viz in hist_chart._vizs.items():
|
||||
log.info(f'Forcing hard redraw -> {curve_name}')
|
||||
hist_viz.reset_graphics()
|
||||
# hist_viz.update_graphics(force_redraw=True)
|
||||
hist_viz.update_graphics(force_redraw=True)
|
||||
|
||||
async with open_sample_stream(1.) as min_istream:
|
||||
async for msg in min_istream:
|
||||
|
@ -250,27 +248,25 @@ async def increment_history_view(
|
|||
# - samplerd could emit the actual update range via
|
||||
# tuple and then we only enter the below block if that
|
||||
# range is detected as in-view?
|
||||
# match msg:
|
||||
# case {
|
||||
# 'backfilling': (viz_name, timeframe),
|
||||
# } if (
|
||||
# viz_name == name
|
||||
# ):
|
||||
# log.warning(
|
||||
# f'Forcing HARD REDRAW:\n'
|
||||
# f'name: {name}\n'
|
||||
# f'timeframe: {timeframe}\n'
|
||||
# )
|
||||
# # TODO: only allow this when the data is IN VIEW!
|
||||
# # also, we probably can do this more efficiently
|
||||
# # / smarter by only redrawing the portion of the
|
||||
# # path necessary?
|
||||
# {
|
||||
# 60: hist_viz,
|
||||
# 1: viz,
|
||||
# }[timeframe].update_graphics(
|
||||
# force_redraw=True
|
||||
# )
|
||||
if (
|
||||
(bf_wut := msg.get('backfilling', False))
|
||||
):
|
||||
viz_name, timeframe = bf_wut
|
||||
if (
|
||||
viz_name == name
|
||||
|
||||
# TODO: only allow this when the data is IN VIEW!
|
||||
# also, we probably can do this more efficiently
|
||||
# / smarter by only redrawing the portion of the
|
||||
# path necessary?
|
||||
and False
|
||||
):
|
||||
log.info(f'Forcing hard redraw -> {name}@{timeframe}')
|
||||
match timeframe:
|
||||
case 60:
|
||||
hist_viz.update_graphics(force_redraw=True)
|
||||
case 1:
|
||||
viz.update_graphics(force_redraw=True)
|
||||
|
||||
# check if slow chart needs an x-domain shift and/or
|
||||
# y-range resize.
|
||||
|
@ -311,7 +307,6 @@ async def increment_history_view(
|
|||
|
||||
async def graphics_update_loop(
|
||||
|
||||
dss: dict[str, DisplayState],
|
||||
nurse: trio.Nursery,
|
||||
godwidget: GodWidget,
|
||||
feed: Feed,
|
||||
|
@ -353,6 +348,8 @@ async def graphics_update_loop(
|
|||
'i_last_slow_t': 0, # multiview-global slow (1m) step index
|
||||
}
|
||||
|
||||
dss: dict[str, DisplayState] = {}
|
||||
|
||||
for fqme, flume in feed.flumes.items():
|
||||
ohlcv = flume.rt_shm
|
||||
hist_ohlcv = flume.hist_shm
|
||||
|
@ -471,18 +468,10 @@ async def graphics_update_loop(
|
|||
if ds.hist_vars['i_last'] < ds.hist_vars['i_last_append']:
|
||||
await tractor.pause()
|
||||
|
||||
# try:
|
||||
|
||||
# XXX TODO: we need to do _dss UPDATE here so that when
|
||||
# a feed-view is switched you can still remote annotate the
|
||||
# prior view..
|
||||
from . import _remote_ctl
|
||||
_remote_ctl._dss.update(dss)
|
||||
|
||||
# main real-time quotes update loop
|
||||
stream: tractor.MsgStream
|
||||
async with feed.open_multi_stream() as stream:
|
||||
# assert stream
|
||||
assert stream
|
||||
async for quotes in stream:
|
||||
quote_period = time.time() - last_quote_s
|
||||
quote_rate = round(
|
||||
|
@ -498,7 +487,7 @@ async def graphics_update_loop(
|
|||
pass
|
||||
# log.warning(f'High quote rate {mkt.fqme}: {quote_rate}')
|
||||
|
||||
last_quote_s: float = time.time()
|
||||
last_quote_s = time.time()
|
||||
|
||||
for fqme, quote in quotes.items():
|
||||
ds = dss[fqme]
|
||||
|
@ -528,12 +517,6 @@ async def graphics_update_loop(
|
|||
quote,
|
||||
)
|
||||
|
||||
# finally:
|
||||
# # XXX: cancel any remote annotation control ctxs
|
||||
# _remote_ctl._dss = None
|
||||
# for cid, (ctx, aids) in _remote_ctl._ctxs.items():
|
||||
# await ctx.cancel()
|
||||
|
||||
|
||||
def graphics_update_cycle(
|
||||
ds: DisplayState,
|
||||
|
@ -1232,8 +1215,6 @@ async def link_views_with_region(
|
|||
# region.sigRegionChangeFinished.connect(update_pi_from_region)
|
||||
|
||||
|
||||
# NOTE: default is set to 60 FPS until the runtime delivers the
|
||||
# discovered hw value below.
|
||||
_quote_throttle_rate: int = 60 - 6
|
||||
|
||||
|
||||
|
@ -1252,7 +1233,7 @@ async def display_symbol_data(
|
|||
fast from a cached watch-list.
|
||||
|
||||
'''
|
||||
# sbar = godwidget.window.status_bar
|
||||
sbar = godwidget.window.status_bar
|
||||
# historical data fetch
|
||||
# brokermod = brokers.get_brokermod(provider)
|
||||
|
||||
|
@ -1262,11 +1243,11 @@ async def display_symbol_data(
|
|||
# group_key=loading_sym_key,
|
||||
# )
|
||||
|
||||
# for fqme in fqmes:
|
||||
# loading_sym_key = sbar.open_status(
|
||||
# f'loading {fqme} ->',
|
||||
# group_key=True
|
||||
# )
|
||||
for fqme in fqmes:
|
||||
loading_sym_key = sbar.open_status(
|
||||
f'loading {fqme} ->',
|
||||
group_key=True
|
||||
)
|
||||
|
||||
# (TODO: make this not so shit XD)
|
||||
# close group status once a symbol feed fully loads to view.
|
||||
|
@ -1275,54 +1256,26 @@ async def display_symbol_data(
|
|||
# TODO: ctl over update loop's maximum frequency.
|
||||
# - load this from a config.toml!
|
||||
# - allow dynamic configuration from chart UI?
|
||||
(
|
||||
conf,
|
||||
path,
|
||||
) = config.load()
|
||||
ui_conf: dict = conf['ui']
|
||||
|
||||
global _quote_throttle_rate
|
||||
from ._window import main_window
|
||||
|
||||
display_rate: int = floor(
|
||||
main_window().current_screen().refreshRate()
|
||||
) - 6
|
||||
|
||||
mx_redraw_rate: int = ui_conf.get(
|
||||
'max_redraw_rate',
|
||||
_quote_throttle_rate,
|
||||
)
|
||||
|
||||
if mx_redraw_rate < display_rate:
|
||||
log.info(
|
||||
'Down-throttling redraw rate to config setting\n'
|
||||
f'display FPS: {display_rate}\n'
|
||||
f'max_redraw_rate: {mx_redraw_rate}\n'
|
||||
)
|
||||
else:
|
||||
_quote_throttle_rate = display_rate
|
||||
display_rate = main_window().current_screen().refreshRate()
|
||||
_quote_throttle_rate = floor(display_rate) - 6
|
||||
|
||||
# TODO: we should be able to increase this if we use some
|
||||
# `mypyc` speedups elsewhere? 22ish seems to be the sweet
|
||||
# spot for single-feed chart.
|
||||
num_of_feeds = len(fqmes)
|
||||
# if num_of_feeds > 1:
|
||||
|
||||
mx: int = 22
|
||||
if num_of_feeds > 1:
|
||||
# there will be more ctx switches with more than 1 feed so we
|
||||
# max throttle down a bit more.
|
||||
mx_per_feed: int = (
|
||||
ui_conf.get(
|
||||
'per_feed_redraw_rate',
|
||||
mx_redraw_rate,
|
||||
)
|
||||
or 16
|
||||
)
|
||||
mx = 16
|
||||
|
||||
# limit to at least display's FPS
|
||||
# avoiding needless Qt-in-guest-mode context switches
|
||||
cycles_per_feed = min(
|
||||
round(_quote_throttle_rate/num_of_feeds),
|
||||
mx_per_feed,
|
||||
mx,
|
||||
)
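# hedged arithmetic example of the per-feed throttle split above,
# assuming a 60Hz screen and two feeds (numbers are illustrative only):
from math import floor

display_rate = floor(59.95) - 6             # -> 53
num_of_feeds = 2
mx_per_feed = 16
cycles_per_feed = min(
    round(display_rate / num_of_feeds),     # -> 26
    mx_per_feed,
)                                           # -> 16 update cycles per feed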
|
||||
|
||||
feed: Feed
|
||||
|
@ -1467,7 +1420,7 @@ async def display_symbol_data(
|
|||
start_fsp_displays,
|
||||
rt_linked,
|
||||
flume,
|
||||
# loading_sym_key,
|
||||
loading_sym_key,
|
||||
loglevel,
|
||||
)
|
||||
|
||||
|
@ -1586,10 +1539,8 @@ async def display_symbol_data(
|
|||
)
|
||||
|
||||
# start update loop task
|
||||
dss: dict[str, DisplayState] = {}
|
||||
ln.start_soon(
|
||||
graphics_update_loop,
|
||||
dss,
|
||||
ln,
|
||||
godwidget,
|
||||
feed,
|
||||
|
@ -1603,31 +1554,15 @@ async def display_symbol_data(
|
|||
order_ctl_fqme: str = fqmes[0]
|
||||
mode: OrderMode
|
||||
async with (
|
||||
|
||||
open_order_mode(
|
||||
feed,
|
||||
godwidget,
|
||||
order_ctl_fqme,
|
||||
order_mode_started,
|
||||
loglevel=loglevel
|
||||
) as mode,
|
||||
|
||||
# TODO: maybe have these startup sooner before
|
||||
# order mode fully boots? but we gotta,
|
||||
# -[ ] decouple the order mode bindings until
|
||||
# the mode has fully booted..
|
||||
# -[ ] maybe do an Event to sync?
|
||||
|
||||
# start input handling for ``ChartView`` input
|
||||
# (i.e. kb + mouse handling loops)
|
||||
rt_chart.view.open_async_input_handler(
|
||||
dss=dss,
|
||||
),
|
||||
hist_chart.view.open_async_input_handler(
|
||||
dss=dss,
|
||||
),
|
||||
|
||||
) as mode
|
||||
):
|
||||
|
||||
rt_linked.mode = mode
|
||||
|
||||
rt_viz = rt_chart.get_viz(order_ctl_fqme)
|
||||
|
|
|
@ -21,8 +21,7 @@ Higher level annotation editors.
|
|||
from __future__ import annotations
|
||||
from collections import defaultdict
|
||||
from typing import (
|
||||
Sequence,
|
||||
TYPE_CHECKING,
|
||||
TYPE_CHECKING
|
||||
)
|
||||
|
||||
import pyqtgraph as pg
|
||||
|
@ -32,34 +31,24 @@ from pyqtgraph import (
|
|||
QtCore,
|
||||
QtWidgets,
|
||||
)
|
||||
from PyQt5.QtGui import (
|
||||
QColor,
|
||||
)
|
||||
from PyQt5.QtWidgets import (
|
||||
QLabel,
|
||||
)
|
||||
|
||||
from pyqtgraph import functions as fn
|
||||
from PyQt5.QtCore import QPointF
|
||||
import numpy as np
|
||||
|
||||
from piker.types import Struct
|
||||
from piker.ui.qt import (
|
||||
Qt,
|
||||
QPointF,
|
||||
QRectF,
|
||||
QGraphicsProxyWidget,
|
||||
QGraphicsScene,
|
||||
QLabel,
|
||||
QColor,
|
||||
QTransform,
|
||||
)
|
||||
from ._style import (
|
||||
hcolor,
|
||||
_font,
|
||||
)
|
||||
from ._style import hcolor, _font
|
||||
from ._lines import LevelLine
|
||||
from ..log import get_logger
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from ._chart import (
|
||||
GodWidget,
|
||||
ChartPlotWidget,
|
||||
)
|
||||
from ._interaction import ChartView
|
||||
from ._chart import GodWidget
|
||||
|
||||
|
||||
log = get_logger(__name__)
|
||||
|
@ -76,7 +65,7 @@ class ArrowEditor(Struct):
|
|||
uid: str,
|
||||
x: float,
|
||||
y: float,
|
||||
color: str = 'default',
|
||||
color='default',
|
||||
pointing: str | None = None,
|
||||
|
||||
) -> pg.ArrowItem:
|
||||
|
@ -262,75 +251,43 @@ class LineEditor(Struct):
|
|||
return lines
|
||||
|
||||
|
||||
def as_point(
|
||||
pair: Sequence[float, float] | QPointF,
|
||||
) -> list[QPointF, QPointF]:
|
||||
'''
|
||||
Cast any input tuple of floats to a list of `QPointF` objects
|
||||
for use in Qt geometry routines.
|
||||
|
||||
'''
|
||||
if isinstance(pair, QPointF):
|
||||
return pair
|
||||
|
||||
return QPointF(pair[0], pair[1])
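# tiny usage sketch of `as_point()` above (the values are made up):
# mixed tuple / QPointF inputs get normalized before building a QRectF.
from PyQt5.QtCore import QPointF, QRectF

p1 = as_point((0.0, 1.0))
p2 = as_point(QPointF(5.0, 3.0))
rect = QRectF(p1, p2)  # selection geometry in view coords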
|
||||
|
||||
|
||||
# TODO: maybe implement better, something something RectItemProxy??
|
||||
# -[ ] dig into details of how proxy's work?
|
||||
# https://doc.qt.io/qt-5/qgraphicsscene.html#addWidget
|
||||
# -[ ] consider using `.addRect()` maybe?
|
||||
|
||||
class SelectRect(QtWidgets.QGraphicsRectItem):
|
||||
'''
|
||||
A data-view "selection rectangle": the most fundamental
|
||||
geometry for annotating data views.
|
||||
|
||||
- https://doc.qt.io/qt-5/qgraphicsrectitem.html
|
||||
- https://doc.qt.io/qt-6/qgraphicsrectitem.html
|
||||
|
||||
'''
|
||||
def __init__(
|
||||
self,
|
||||
viewbox: ViewBox,
|
||||
color: str | None = None,
|
||||
color: str = 'dad_blue',
|
||||
) -> None:
|
||||
super().__init__(0, 0, 1, 1)
|
||||
|
||||
# self.rbScaleBox = QGraphicsRectItem(0, 0, 1, 1)
|
||||
self.vb: ViewBox = viewbox
|
||||
self.vb = viewbox
|
||||
self._chart: 'ChartPlotWidget' = None # noqa
|
||||
|
||||
self._chart: ChartPlotWidget | None = None # noqa
|
||||
|
||||
# TODO: maybe allow this to be dynamic via a method?
|
||||
# override selection box color
|
||||
color: str = color or 'dad_blue'
|
||||
# override selection box color
|
||||
color = QColor(hcolor(color))
|
||||
|
||||
self.setPen(fn.mkPen(color, width=1))
|
||||
color.setAlpha(66)
|
||||
self.setBrush(fn.mkBrush(color))
|
||||
self.setZValue(1e9)
|
||||
self.hide()
|
||||
self._label = None
|
||||
|
||||
label = self._label = QLabel()
|
||||
label.setTextFormat(
|
||||
Qt.TextFormat.MarkdownText
|
||||
)
|
||||
label.setTextFormat(0) # markdown
|
||||
label.setFont(_font.font)
|
||||
label.setMargin(0)
|
||||
label.setAlignment(
|
||||
QtCore.Qt.AlignLeft
|
||||
# | QtCore.Qt.AlignVCenter
|
||||
)
|
||||
label.hide() # always right after init
|
||||
|
||||
# proxy is created after containing scene is initialized
|
||||
self._label_proxy: QGraphicsProxyWidget | None = None
|
||||
self._abs_top_right: Point | None = None
|
||||
self._label_proxy = None
|
||||
self._abs_top_right = None
|
||||
|
||||
# TODO: "swing %" might be handy here (data's max/min
|
||||
# # % change)?
|
||||
self._contents: list[str] = [
|
||||
# TODO: "swing %" might be handy here (data's max/min # % change)
|
||||
self._contents = [
|
||||
'change: {pchng:.2f} %',
|
||||
'range: {rng:.2f}',
|
||||
'bars: {nbars}',
|
||||
|
@ -340,31 +297,12 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
|
|||
'sigma: {std:.2f}',
|
||||
]
|
||||
|
||||
self.add_to_view(viewbox)
|
||||
self.hide()
|
||||
|
||||
def add_to_view(
|
||||
self,
|
||||
view: ChartView,
|
||||
) -> None:
|
||||
'''
|
||||
Self-defined view hookup impl which will
|
||||
also re-assign the internal ref.
|
||||
|
||||
'''
|
||||
view.addItem(
|
||||
self,
|
||||
ignoreBounds=True,
|
||||
)
|
||||
if self.vb is not view:
|
||||
self.vb = view
|
||||
|
||||
@property
|
||||
def chart(self) -> ChartPlotWidget: # noqa
|
||||
def chart(self) -> 'ChartPlotWidget': # noqa
|
||||
return self._chart
|
||||
|
||||
@chart.setter
|
||||
def chart(self, chart: ChartPlotWidget) -> None: # noqa
|
||||
def chart(self, chart: 'ChartPlotWidget') -> None: # noqa
|
||||
self._chart = chart
|
||||
chart.sigRangeChanged.connect(self.update_on_resize)
|
||||
palette = self._label.palette()
|
||||
|
@ -377,155 +315,57 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
|
|||
)
|
||||
|
||||
def update_on_resize(self, vr, r):
|
||||
'''
|
||||
Re-position measure label on view range change.
|
||||
"""Re-position measure label on view range change.
|
||||
|
||||
'''
|
||||
"""
|
||||
if self._abs_top_right:
|
||||
self._label_proxy.setPos(
|
||||
self.vb.mapFromView(self._abs_top_right)
|
||||
)
|
||||
|
||||
def set_scen_pos(
|
||||
def mouse_drag_released(
|
||||
self,
|
||||
scen_p1: QPointF,
|
||||
scen_p2: QPointF,
|
||||
|
||||
update_label: bool = True,
|
||||
|
||||
p1: QPointF,
|
||||
p2: QPointF
|
||||
) -> None:
|
||||
'''
|
||||
Set position from scene coords of selection rect (normally
|
||||
from mouse position) and accompanying label, move label to
|
||||
match.
|
||||
"""Called on final button release for mouse drag with start and
|
||||
end positions.
|
||||
|
||||
'''
|
||||
# NOTE XXX: apparently just setting it doesn't work!?
|
||||
# i have no idea why but it's pretty weird we have to do
|
||||
# this transform thing which was basically pulled verbatim
|
||||
# from the `pg.ViewBox.updateScaleBox()` method.
|
||||
view_rect: QRectF = self.vb.childGroup.mapRectFromScene(
|
||||
QRectF(
|
||||
scen_p1,
|
||||
scen_p2,
|
||||
)
|
||||
)
|
||||
self.setPos(view_rect.topLeft())
|
||||
# XXX: does not work..!?!?
|
||||
# https://doc.qt.io/qt-5/qgraphicsrectitem.html#setRect
|
||||
# self.setRect(view_rect)
|
||||
"""
|
||||
self.set_pos(p1, p2)
|
||||
|
||||
tr = QTransform.fromScale(
|
||||
view_rect.width(),
|
||||
view_rect.height(),
|
||||
)
|
||||
self.setTransform(tr)
|
||||
|
||||
# XXX: never got this working, was always offset
|
||||
# / transformed completely wrong (and off to the far right
|
||||
# from the cursor?)
|
||||
# self.set_view_pos(
|
||||
# view_rect=view_rect,
|
||||
# # self.vwqpToView(p1),
|
||||
# # self.vb.mapToView(p2),
|
||||
# # start_pos=self.vb.mapToScene(p1),
|
||||
# # end_pos=self.vb.mapToScene(p2),
|
||||
# )
|
||||
self.show()
|
||||
|
||||
if update_label:
|
||||
self.init_label(view_rect)
|
||||
|
||||
def set_view_pos(
|
||||
def set_pos(
|
||||
self,
|
||||
|
||||
start_pos: QPointF | Sequence[float, float] | None = None,
|
||||
end_pos: QPointF | Sequence[float, float] | None = None,
|
||||
view_rect: QRectF | None = None,
|
||||
|
||||
update_label: bool = True,
|
||||
|
||||
p1: QPointF,
|
||||
p2: QPointF
|
||||
) -> None:
|
||||
'''
|
||||
Set position from `ViewBox` coords (i.e. from the actual
|
||||
data domain) of rect (and any accompanying label which is
|
||||
moved to match).
|
||||
"""Set position of selection rect and accompanying label, move
|
||||
label to match.
|
||||
|
||||
'''
|
||||
if self._chart is None:
|
||||
raise RuntimeError(
|
||||
'You MUST assign a `SelectRect.chart: ChartPlotWidget`!'
|
||||
)
|
||||
"""
|
||||
if self._label_proxy is None:
|
||||
# https://doc.qt.io/qt-5/qgraphicsproxywidget.html
|
||||
self._label_proxy = self.vb.scene().addWidget(self._label)
|
||||
|
||||
if view_rect is None:
|
||||
# ensure point casting
|
||||
start_pos: QPointF = as_point(start_pos)
|
||||
end_pos: QPointF = as_point(end_pos)
|
||||
start_pos = self.vb.mapToView(p1)
|
||||
end_pos = self.vb.mapToView(p2)
|
||||
|
||||
# map to view coords and update area
|
||||
view_rect = QtCore.QRectF(
|
||||
start_pos,
|
||||
end_pos,
|
||||
)
|
||||
r = QtCore.QRectF(start_pos, end_pos)
|
||||
|
||||
self.setPos(view_rect.topLeft())
|
||||
# old way; don't need right?
|
||||
# lr = QtCore.QRectF(p1, p2)
|
||||
# r = self.vb.childGroup.mapRectFromParent(lr)
|
||||
|
||||
# NOTE: SERIOUSLY NO IDEA WHY THIS WORKS...
|
||||
# but it does and all the other commented stuff above
|
||||
# dint, dawg..
|
||||
|
||||
# self.resetTransform()
|
||||
# self.setRect(view_rect)
|
||||
|
||||
tr = QTransform.fromScale(
|
||||
view_rect.width(),
|
||||
view_rect.height(),
|
||||
)
|
||||
self.setTransform(tr)
|
||||
|
||||
if update_label:
|
||||
self.init_label(view_rect)
|
||||
|
||||
print(
|
||||
'SelectRect modify:\n'
|
||||
f'QRectF: {view_rect}\n'
|
||||
f'start_pos: {start_pos}\n'
|
||||
f'end_pos: {end_pos}\n'
|
||||
)
|
||||
self.setPos(r.topLeft())
|
||||
self.resetTransform()
|
||||
self.setRect(r)
|
||||
self.show()
|
||||
|
||||
def init_label(
|
||||
self,
|
||||
view_rect: QRectF,
|
||||
) -> QLabel:
|
||||
y1, y2 = start_pos.y(), end_pos.y()
|
||||
x1, x2 = start_pos.x(), end_pos.x()
|
||||
|
||||
# should be init-ed in `.__init__()`
|
||||
label: QLabel = self._label
|
||||
cv: ChartView = self.vb
|
||||
|
||||
# https://doc.qt.io/qt-5/qgraphicsproxywidget.html
|
||||
if self._label_proxy is None:
|
||||
scen: QGraphicsScene = cv.scene()
|
||||
# NOTE: specifically this is passing a widget
|
||||
# pointer to the scene's `.addWidget()` as per,
|
||||
# https://doc.qt.io/qt-5/qgraphicsproxywidget.html#embedding-a-widget-with-qgraphicsproxywidget
|
||||
self._label_proxy: QGraphicsProxyWidget = scen.addWidget(label)
|
||||
|
||||
# get label startup coords
|
||||
tl: QPointF = view_rect.topLeft()
|
||||
br: QPointF = view_rect.bottomRight()
|
||||
|
||||
x1, y1 = tl.x(), tl.y()
|
||||
x2, y2 = br.x(), br.y()
|
||||
|
||||
# TODO: to remove, previous label corner point unpacking
|
||||
# x1, y1 = start_pos.x(), start_pos.y()
|
||||
# x2, y2 = end_pos.x(), end_pos.y()
|
||||
# y1, y2 = start_pos.y(), end_pos.y()
|
||||
# x1, x2 = start_pos.x(), end_pos.x()
|
||||
|
||||
# TODO: heh, could probably use a max-min streaming algo
|
||||
# here too?
|
||||
# TODO: heh, could probably use a max-min streaming algo here too
|
||||
_, xmn = min(y1, y2), min(x1, x2)
|
||||
ymx, xmx = max(y1, y2), max(x1, x2)
|
||||
|
||||
|
@ -535,35 +375,26 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
|
|||
ixmn, ixmx = round(xmn), round(xmx)
|
||||
nbars = ixmx - ixmn + 1
|
||||
|
||||
chart: ChartPlotWidget = self._chart
|
||||
data: np.ndarray = chart.get_viz(
|
||||
chart.name
|
||||
).shm.array[ixmn:ixmx]
|
||||
chart = self._chart
|
||||
data = chart.get_viz(chart.name).shm.array[ixmn:ixmx]
|
||||
|
||||
if len(data):
|
||||
std: float = data['close'].std()
|
||||
dmx: float = data['high'].max()
|
||||
dmn: float = data['low'].min()
|
||||
std = data['close'].std()
|
||||
dmx = data['high'].max()
|
||||
dmn = data['low'].min()
|
||||
else:
|
||||
dmn = dmx = std = np.nan
|
||||
|
||||
# update label info
|
||||
label.setText('\n'.join(self._contents).format(
|
||||
pchng=pchng,
|
||||
rng=rng,
|
||||
nbars=nbars,
|
||||
std=std,
|
||||
dmx=dmx,
|
||||
dmn=dmn,
|
||||
self._label.setText('\n'.join(self._contents).format(
|
||||
pchng=pchng, rng=rng, nbars=nbars,
|
||||
std=std, dmx=dmx, dmn=dmn,
|
||||
))
|
||||
|
||||
# print(f'x2, y2: {(x2, y2)}')
|
||||
# print(f'xmn, ymn: {(xmn, ymx)}')
|
||||
|
||||
label_anchor = Point(
|
||||
xmx + 2,
|
||||
ymx,
|
||||
)
|
||||
label_anchor = Point(xmx + 2, ymx)
|
||||
|
||||
# XXX: in the drag bottom-right -> top-left case we don't
|
||||
# want the label to overlay the box.
|
||||
|
@ -572,40 +403,13 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
|
|||
# # label_anchor = Point(x2, y2 + self._label.height())
|
||||
# label_anchor = Point(xmn, ymn)
|
||||
|
||||
self._abs_top_right: Point = label_anchor
|
||||
self._label_proxy.setPos(
|
||||
cv.mapFromView(label_anchor)
|
||||
)
|
||||
label.show()
|
||||
self._abs_top_right = label_anchor
|
||||
self._label_proxy.setPos(self.vb.mapFromView(label_anchor))
|
||||
# self._label.show()
|
||||
|
||||
def hide(self):
|
||||
'''
|
||||
Clear the selection box from its graphics scene but
|
||||
don't delete it permanently.
|
||||
def clear(self):
|
||||
"""Clear the selection box from view.
|
||||
|
||||
'''
|
||||
super().hide()
|
||||
"""
|
||||
self._label.hide()
|
||||
|
||||
# TODO: ensure noone else using dis.
|
||||
clear = hide
|
||||
|
||||
def delete(self) -> None:
|
||||
'''
|
||||
De-allocate this rect from its rendering graphics scene.
|
||||
|
||||
Like a permanent hide.
|
||||
|
||||
'''
|
||||
scen: QGraphicsScene = self.scene()
|
||||
if scen is None:
|
||||
return
|
||||
|
||||
scen.removeItem(self)
|
||||
if (
|
||||
self._label
|
||||
and
|
||||
self._label_proxy
|
||||
|
||||
):
|
||||
scen.removeItem(self._label_proxy)
|
||||
self.hide()
|
||||
|
|
|
@ -23,29 +23,28 @@ from typing import Callable
|
|||
|
||||
import trio
|
||||
from tractor.trionics import gather_contexts
|
||||
|
||||
from piker.ui.qt import (
|
||||
QtCore,
|
||||
QWidget,
|
||||
QEvent,
|
||||
keys,
|
||||
gs_keys,
|
||||
pyqtBoundSignal,
|
||||
from PyQt5 import QtCore
|
||||
from PyQt5.QtCore import QEvent, pyqtBoundSignal
|
||||
from PyQt5.QtWidgets import QWidget
|
||||
from PyQt5.QtWidgets import (
|
||||
QGraphicsSceneMouseEvent as gs_mouse,
|
||||
)
|
||||
|
||||
from piker.types import Struct
|
||||
|
||||
|
||||
MOUSE_EVENTS = {
|
||||
gs_keys.GraphicsSceneMousePress,
|
||||
gs_keys.GraphicsSceneMouseRelease,
|
||||
keys.MouseButtonPress,
|
||||
keys.MouseButtonRelease,
|
||||
gs_mouse.GraphicsSceneMousePress,
|
||||
gs_mouse.GraphicsSceneMouseRelease,
|
||||
QEvent.MouseButtonPress,
|
||||
QEvent.MouseButtonRelease,
|
||||
# QtGui.QMouseEvent,
|
||||
}
|
||||
|
||||
|
||||
# TODO: maybe consider some constrained ints down the road?
|
||||
# https://pydantic-docs.helpmanual.io/usage/types/#constrained-types
|
||||
|
||||
class KeyboardMsg(Struct):
|
||||
'''Unpacked Qt keyboard event data.
|
||||
|
||||
|
@ -115,10 +114,7 @@ class EventRelay(QtCore.QObject):
|
|||
# something to do with Qt internals and calling the
|
||||
# parent handler?
|
||||
|
||||
if etype in {
|
||||
QEvent.Type.KeyPress,
|
||||
QEvent.Type.KeyRelease,
|
||||
}:
|
||||
if etype in {QEvent.KeyPress, QEvent.KeyRelease}:
|
||||
|
||||
msg = KeyboardMsg(
|
||||
event=ev,
|
||||
|
@ -164,9 +160,7 @@ class EventRelay(QtCore.QObject):
|
|||
async def open_event_stream(
|
||||
|
||||
source_widget: QWidget,
|
||||
event_types: set[QEvent] = {
|
||||
QEvent.Type.KeyPress,
|
||||
},
|
||||
event_types: set[QEvent] = {QEvent.KeyPress},
|
||||
filter_auto_repeats: bool = True,
|
||||
|
||||
) -> trio.abc.ReceiveChannel:
|
||||
|
@ -207,8 +201,8 @@ async def open_signal_handler(
|
|||
async for args in recv:
|
||||
await async_handler(*args)
|
||||
|
||||
async with trio.open_nursery() as tn:
|
||||
tn.start_soon(proxy_to_handler)
|
||||
async with trio.open_nursery() as n:
|
||||
n.start_soon(proxy_to_handler)
|
||||
async with send:
|
||||
yield
|
||||
|
||||
|
@ -218,48 +212,18 @@ async def open_handlers(
|
|||
|
||||
source_widgets: list[QWidget],
|
||||
event_types: set[QEvent],
|
||||
|
||||
# NOTE: if you want to bind in additional kwargs to the handler
|
||||
# pass in a `partial()` instead!
|
||||
async_handler: Callable[
|
||||
[QWidget, trio.abc.ReceiveChannel], # required handler args
|
||||
None
|
||||
],
|
||||
|
||||
# XXX: these are ONLY inputs available to the
|
||||
# `open_event_stream()` event-relay to mem-chan factory above!
|
||||
**open_ev_stream_kwargs,
|
||||
async_handler: Callable[[QWidget, trio.abc.ReceiveChannel], None],
|
||||
**kwargs,
|
||||
|
||||
) -> None:
|
||||
'''
|
||||
Connect and schedule an async handler function to receive an
|
||||
arbitrary `QWidget`'s events with kb/mouse msgs repacked into
|
||||
structs (see above) and shuttled over a mem-chan to the input
|
||||
`async_handler` to allow interaction-IO processing from
|
||||
a `trio` func-as-task.
|
||||
|
||||
'''
|
||||
widget: QWidget
|
||||
streams: list[trio.abc.ReceiveChannel]
|
||||
async with (
|
||||
trio.open_nursery() as tn,
|
||||
trio.open_nursery() as n,
|
||||
gather_contexts([
|
||||
open_event_stream(
|
||||
widget,
|
||||
event_types,
|
||||
**open_ev_stream_kwargs,
|
||||
)
|
||||
open_event_stream(widget, event_types, **kwargs)
|
||||
for widget in source_widgets
|
||||
]) as streams,
|
||||
):
|
||||
for widget, event_recv_stream in zip(
|
||||
source_widgets,
|
||||
streams,
|
||||
):
|
||||
tn.start_soon(
|
||||
async_handler,
|
||||
widget,
|
||||
event_recv_stream,
|
||||
)
|
||||
for widget, event_recv_stream in zip(source_widgets, streams):
|
||||
n.start_soon(async_handler, widget, event_recv_stream)
|
||||
|
||||
yield
|
||||
|
|
|
@ -30,35 +30,34 @@ from typing import (
|
|||
import platform
|
||||
import traceback
|
||||
|
||||
# Qt specific
|
||||
import PyQt5 # noqa
|
||||
from PyQt5.QtWidgets import (
|
||||
QWidget,
|
||||
QMainWindow,
|
||||
QApplication,
|
||||
)
|
||||
from PyQt5 import QtCore
|
||||
from PyQt5.QtCore import (
|
||||
pyqtRemoveInputHook,
|
||||
Qt,
|
||||
QCoreApplication,
|
||||
)
|
||||
import qdarkstyle
|
||||
from qdarkstyle import DarkPalette
|
||||
# import qdarkgraystyle # TODO: play with it
|
||||
import trio
|
||||
from outcome import Error
|
||||
|
||||
# Qt version-agnostic
|
||||
from .qt import (
|
||||
QWidget,
|
||||
QMainWindow,
|
||||
QApplication,
|
||||
QtCore,
|
||||
pyqtRemoveInputHook,
|
||||
Qt,
|
||||
QCoreApplication,
|
||||
)
|
||||
from ..service import (
|
||||
maybe_open_pikerd,
|
||||
get_runtime_vars,
|
||||
get_tractor_runtime_kwargs,
|
||||
)
|
||||
from ..log import get_logger
|
||||
from ._pg_overrides import _do_overrides
|
||||
from . import _style
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from ._chart import GodWidget
|
||||
|
||||
|
||||
log = get_logger(__name__)
|
||||
|
||||
# pyqtgraph global config
|
||||
|
@ -147,7 +146,7 @@ def run_qtractor(
|
|||
|
||||
# load dark theme
|
||||
stylesheet = qdarkstyle.load_stylesheet(
|
||||
qt_api='pyqt6',
|
||||
qt_api='pyqt5',
|
||||
palette=DarkPalette,
|
||||
)
|
||||
app.setStyleSheet(stylesheet)
|
||||
|
@ -174,9 +173,7 @@ def run_qtractor(
|
|||
instance.window = window
|
||||
|
||||
# override tractor's defaults
|
||||
tractor_kwargs.update(
|
||||
get_runtime_vars()
|
||||
)
|
||||
tractor_kwargs.update(get_tractor_runtime_kwargs())
|
||||
|
||||
# define tractor entrypoint
|
||||
async def main():
|
||||
|
|
|
@ -28,15 +28,9 @@ from typing import (
|
|||
)
|
||||
|
||||
import trio
|
||||
|
||||
from piker.ui.qt import (
|
||||
keys,
|
||||
size_policy,
|
||||
QtGui,
|
||||
QSize,
|
||||
QModelIndex,
|
||||
Qt,
|
||||
QEvent,
|
||||
from PyQt5 import QtGui
|
||||
from PyQt5.QtCore import QSize, QModelIndex, Qt, QEvent
|
||||
from PyQt5.QtWidgets import (
|
||||
QWidget,
|
||||
QLabel,
|
||||
QComboBox,
|
||||
|
@ -45,6 +39,7 @@ from piker.ui.qt import (
|
|||
QVBoxLayout,
|
||||
QFormLayout,
|
||||
QProgressBar,
|
||||
QSizePolicy,
|
||||
QStyledItemDelegate,
|
||||
QStyleOptionViewItem,
|
||||
)
|
||||
|
@ -76,14 +71,14 @@ class Edit(QLineEdit):
|
|||
|
||||
if width_in_chars:
|
||||
self._chars = int(width_in_chars)
|
||||
x_size_policy = size_policy.Fixed
|
||||
x_size_policy = QSizePolicy.Fixed
|
||||
|
||||
else:
|
||||
# chart count which will be used to calculate
|
||||
# width of input field.
|
||||
self._chars: int = 6
|
||||
# fit to surrounding frame width
|
||||
x_size_policy = size_policy.Expanding
|
||||
x_size_policy = QSizePolicy.Expanding
|
||||
|
||||
super().__init__(parent)
|
||||
|
||||
|
@ -91,7 +86,7 @@ class Edit(QLineEdit):
|
|||
# https://doc.qt.io/qt-5/qsizepolicy.html#Policy-enum
|
||||
self.setSizePolicy(
|
||||
x_size_policy,
|
||||
size_policy.Fixed,
|
||||
QSizePolicy.Fixed,
|
||||
)
|
||||
self.setFont(font.font)
|
||||
|
||||
|
@ -185,13 +180,11 @@ class Selection(QComboBox):
|
|||
|
||||
self._items: dict[str, int] = {}
|
||||
super().__init__(parent=parent)
|
||||
self.setSizeAdjustPolicy(
|
||||
QComboBox.SizeAdjustPolicy.AdjustToContents,
|
||||
)
|
||||
self.setSizeAdjustPolicy(QComboBox.AdjustToContents)
|
||||
# make line edit expand to surrounding frame
|
||||
self.setSizePolicy(
|
||||
size_policy.Expanding,
|
||||
size_policy.Fixed,
|
||||
QSizePolicy.Expanding,
|
||||
QSizePolicy.Fixed,
|
||||
)
|
||||
view = self.view()
|
||||
view.setUniformItemSizes(True)
|
||||
|
@ -315,8 +308,8 @@ class FieldsForm(QWidget):
|
|||
|
||||
# size it as we specify
|
||||
self.setSizePolicy(
|
||||
size_policy.Expanding,
|
||||
size_policy.Expanding,
|
||||
QSizePolicy.Expanding,
|
||||
QSizePolicy.Expanding,
|
||||
)
|
||||
|
||||
# XXX: not sure why we have to create this here exactly
|
||||
|
@ -423,8 +416,8 @@ class FieldsForm(QWidget):
|
|||
select.set_items(values)
|
||||
|
||||
self.setSizePolicy(
|
||||
size_policy.Fixed,
|
||||
size_policy.Fixed,
|
||||
QSizePolicy.Fixed,
|
||||
QSizePolicy.Fixed,
|
||||
)
|
||||
select.show()
|
||||
self.form.addRow(label, select)
|
||||
|
@ -444,10 +437,7 @@ async def handle_field_input(
|
|||
|
||||
async for kbmsg in recv_chan:
|
||||
|
||||
if kbmsg.etype in {
|
||||
keys.KeyPress,
|
||||
keys.KeyRelease,
|
||||
}:
|
||||
if kbmsg.etype in {QEvent.KeyPress, QEvent.KeyRelease}:
|
||||
event, etype, key, mods, txt = kbmsg.to_tuple()
|
||||
print(f'key: {kbmsg.key}, mods: {kbmsg.mods}, txt: {kbmsg.txt}')
|
||||
|
||||
|
@ -713,8 +703,7 @@ def mk_fill_status_bar(
|
|||
)
|
||||
|
||||
bottom_label = form.add_field_label(
|
||||
# 'x: {step_size}',
|
||||
'{unit_prefix}: {step_size}',
|
||||
'x: {step_size}',
|
||||
font_size=bar_label_font_size,
|
||||
font_color='gunmetal',
|
||||
)
|
||||
|
|
|
@ -181,10 +181,7 @@ async def open_fsp_sidepane(
|
|||
async def open_fsp_actor_cluster(
|
||||
names: list[str] = ['fsp_0', 'fsp_1'],
|
||||
|
||||
) -> AsyncGenerator[
|
||||
int,
|
||||
dict[str, tractor.Portal]
|
||||
]:
|
||||
) -> AsyncGenerator[int, dict[str, tractor.Portal]]:
|
||||
|
||||
from tractor._clustering import open_actor_cluster
|
||||
|
||||
|
@ -393,7 +390,7 @@ class FspAdmin:
|
|||
complete: trio.Event,
|
||||
started: trio.Event,
|
||||
fqme: str,
|
||||
dst_flume: Flume,
|
||||
dst_fsp_flume: Flume,
|
||||
conf: dict,
|
||||
target: Fsp,
|
||||
loglevel: str,
|
||||
|
@ -411,14 +408,16 @@ class FspAdmin:
|
|||
# chaining entrypoint
|
||||
cascade,
|
||||
|
||||
# TODO: can't we just drop this and expect
|
||||
# far end to read the src flume's .mkt.fqme?
|
||||
# data feed key
|
||||
fqme=fqme,
|
||||
|
||||
src_flume_addr=self.flume.to_msg(),
|
||||
dst_flume_addr=dst_flume.to_msg(),
|
||||
ns_path=ns_path, # edge-bind-func
|
||||
# TODO: pass `Flume.to_msg()`s here?
|
||||
# mems
|
||||
src_shm_token=self.flume.rt_shm.token,
|
||||
dst_shm_token=dst_fsp_flume.rt_shm.token,
|
||||
|
||||
# target
|
||||
ns_path=ns_path,
|
||||
|
||||
loglevel=loglevel,
|
||||
zero_on_step=conf.get('zero_on_step', False),
|
||||
|
@ -432,14 +431,14 @@ class FspAdmin:
|
|||
ctx.open_stream() as stream,
|
||||
):
|
||||
|
||||
dst_flume.stream: tractor.MsgStream = stream
|
||||
dst_fsp_flume.stream: tractor.MsgStream = stream
|
||||
|
||||
# register output data
|
||||
self._registry[
|
||||
(fqme, ns_path)
|
||||
] = (
|
||||
stream,
|
||||
dst_flume.rt_shm,
|
||||
dst_fsp_flume.rt_shm,
|
||||
complete
|
||||
)
|
||||
|
||||
|
@ -516,7 +515,7 @@ class FspAdmin:
|
|||
broker='piker',
|
||||
_atype='fsp',
|
||||
)
|
||||
dst_flume = Flume(
|
||||
dst_fsp_flume = Flume(
|
||||
mkt=mkt,
|
||||
_rt_shm_token=dst_shm.token,
|
||||
first_quote={},
|
||||
|
@ -544,13 +543,13 @@ class FspAdmin:
|
|||
complete,
|
||||
started,
|
||||
fqme,
|
||||
dst_flume,
|
||||
dst_fsp_flume,
|
||||
conf,
|
||||
target,
|
||||
loglevel,
|
||||
)
|
||||
|
||||
return dst_flume, started
|
||||
return dst_fsp_flume, started
|
||||
|
||||
async def open_fsp_chart(
|
||||
self,
|
||||
|
@ -560,7 +559,7 @@ class FspAdmin:
|
|||
conf: dict, # yeah probably dumb..
|
||||
loglevel: str = 'error',
|
||||
|
||||
) -> trio.Event:
|
||||
) -> (trio.Event, ChartPlotWidget):
|
||||
|
||||
flume, started = await self.start_engine_task(
|
||||
target,
|
||||
|
@ -927,7 +926,7 @@ async def start_fsp_displays(
|
|||
|
||||
linked: LinkedSplits,
|
||||
flume: Flume,
|
||||
# group_status_key: str,
|
||||
group_status_key: str,
|
||||
loglevel: str,
|
||||
|
||||
) -> None:
|
||||
|
@ -974,23 +973,21 @@ async def start_fsp_displays(
|
|||
flume,
|
||||
) as admin,
|
||||
):
|
||||
statuses: list[trio.Event] = []
|
||||
statuses = []
|
||||
for target, conf in fsp_conf.items():
|
||||
started: trio.Event = await admin.open_fsp_chart(
|
||||
started = await admin.open_fsp_chart(
|
||||
target,
|
||||
conf,
|
||||
)
|
||||
# done = linked.window().status_bar.open_status(
|
||||
# f'loading fsp, {target}..',
|
||||
# group_key=group_status_key,
|
||||
# )
|
||||
# statuses.append((started, done))
|
||||
statuses.append(started)
|
||||
done = linked.window().status_bar.open_status(
|
||||
f'loading fsp, {target}..',
|
||||
group_key=group_status_key,
|
||||
)
|
||||
statuses.append((started, done))
|
||||
|
||||
# for fsp_loaded, status_cb in statuses:
|
||||
for fsp_loaded in statuses:
|
||||
for fsp_loaded, status_cb in statuses:
|
||||
await fsp_loaded.wait()
|
||||
profiler(f'attached to fsp portal: {target}')
|
||||
# status_cb()
|
||||
status_cb()
|
||||
|
||||
# blocks on nursery until all fsp actors complete
|
||||
|
|
|
@ -15,18 +15,15 @@
|
|||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
`QIcon` hackery.
|
||||
|
||||
Mostly dynamically loading pixmaps for use with `QGraphicsScene`.
|
||||
``QIcon`` hackery.
|
||||
|
||||
'''
|
||||
from piker.ui.qt import (
|
||||
QSize,
|
||||
QStyle,
|
||||
QIcon,
|
||||
QPixmap,
|
||||
QColor,
|
||||
from PyQt5.QtWidgets import QStyle
|
||||
from PyQt5.QtGui import (
|
||||
QIcon, QPixmap, QColor
|
||||
)
|
||||
from PyQt5.QtCore import QSize
|
||||
|
||||
from ._style import hcolor
|
||||
|
||||
# https://www.pythonguis.com/faq/built-in-qicons-pyqt/
|
||||
|
@ -47,8 +44,7 @@ def mk_icons(
|
|||
size: QSize,
|
||||
|
||||
) -> dict[str, QIcon]:
|
||||
'''
|
||||
This helper is idempotent.
|
||||
'''This helper is idempotent.
|
||||
|
||||
'''
|
||||
global _icons, _icon_names
|
||||
|
@ -60,11 +56,7 @@ def mk_icons(
|
|||
# load account selection using current style
|
||||
for name, icon_name in _icon_names.items():
|
||||
|
||||
stdpixmap = getattr(
|
||||
# https://www.pythonguis.com/faq/built-in-qicons-pyqt/
|
||||
QStyle.StandardPixmap, # pyqt/pyside6
|
||||
icon_name,
|
||||
)
|
||||
stdpixmap = getattr(QStyle, icon_name)
|
||||
stdicon = style.standardIcon(stdpixmap)
|
||||
pixmap = stdicon.pixmap(size)
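# hedged usage sketch (icon name and size are assumptions) of the
# built-in pixmap lookup performed by `mk_icons()` above under PyQt5.
from PyQt5.QtWidgets import QApplication, QStyle
from PyQt5.QtCore import QSize

app = QApplication([])
style = app.style()
stdpixmap = getattr(QStyle, 'SP_MessageBoxInformation')
icon = style.standardIcon(stdpixmap)
pixmap = icon.pixmap(QSize(16, 16))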
|
||||
|
||||
|
|
|
@ -23,7 +23,6 @@ from contextlib import (
|
|||
asynccontextmanager,
|
||||
ExitStack,
|
||||
)
|
||||
from functools import partial
|
||||
import time
|
||||
from typing import (
|
||||
Callable,
|
||||
|
@ -31,26 +30,24 @@ from typing import (
|
|||
)
|
||||
|
||||
import pyqtgraph as pg
|
||||
# NOTE XXX: pg is super annoying and re-implements its own mouse
|
||||
# event subsystem.. we should really look into re-working/writing
|
||||
# this down the road.. Bo
|
||||
from pyqtgraph.GraphicsScene import mouseEvents as mevs
|
||||
# from pyqtgraph.GraphicsScene.mouseEvents import MouseDragEvent
|
||||
# from pyqtgraph.GraphicsScene import mouseEvents
|
||||
from PyQt5.QtWidgets import QGraphicsSceneMouseEvent as gs_mouse
|
||||
from PyQt5.QtGui import (
|
||||
QWheelEvent,
|
||||
)
|
||||
from PyQt5.QtCore import (
|
||||
Qt,
|
||||
QEvent,
|
||||
)
|
||||
from pyqtgraph import (
|
||||
ViewBox,
|
||||
Point,
|
||||
QtCore,
|
||||
functions as fn,
|
||||
)
|
||||
from pyqtgraph import functions as fn
|
||||
import numpy as np
|
||||
import trio
|
||||
|
||||
from piker.ui.qt import (
|
||||
QWheelEvent,
|
||||
QGraphicsSceneMouseEvent as gs_mouse,
|
||||
Qt,
|
||||
QEvent,
|
||||
)
|
||||
from ..log import get_logger
|
||||
from ..toolz import (
|
||||
Profiler,
|
||||
|
@ -73,28 +70,27 @@ if TYPE_CHECKING:
|
|||
)
|
||||
from ._dataviz import Viz
|
||||
from .order_mode import OrderMode
|
||||
from ._display import DisplayState
|
||||
|
||||
|
||||
log = get_logger(__name__)
|
||||
|
||||
NUMBER_LINE = {
|
||||
Qt.Key.Key_1,
|
||||
Qt.Key.Key_2,
|
||||
Qt.Key.Key_3,
|
||||
Qt.Key.Key_4,
|
||||
Qt.Key.Key_5,
|
||||
Qt.Key.Key_6,
|
||||
Qt.Key.Key_7,
|
||||
Qt.Key.Key_8,
|
||||
Qt.Key.Key_9,
|
||||
Qt.Key.Key_0,
|
||||
Qt.Key_1,
|
||||
Qt.Key_2,
|
||||
Qt.Key_3,
|
||||
Qt.Key_4,
|
||||
Qt.Key_5,
|
||||
Qt.Key_6,
|
||||
Qt.Key_7,
|
||||
Qt.Key_8,
|
||||
Qt.Key_9,
|
||||
Qt.Key_0,
|
||||
}
|
||||
|
||||
ORDER_MODE = {
|
||||
Qt.Key.Key_A,
|
||||
Qt.Key.Key_F,
|
||||
Qt.Key.Key_D,
|
||||
Qt.Key_A,
|
||||
Qt.Key_F,
|
||||
Qt.Key_D,
|
||||
}
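# a minimal, assumed sketch (not the actual `piker.ui.qt` shim) of how
# the flat PyQt5 key names and the scoped PyQt6 names above can be
# bridged so the same lookup sets work under either binding.
try:
    from PyQt6.QtCore import Qt
    keys = Qt.Key          # scoped access: Qt.Key.Key_1
except ImportError:
    from PyQt5.QtCore import Qt
    keys = Qt              # flat access: Qt.Key_1

NUMBER_LINE = {getattr(keys, f'Key_{i}') for i in range(10)}
ORDER_MODE = {keys.Key_A, keys.Key_F, keys.Key_D}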
|
||||
|
||||
|
||||
|
@ -102,7 +98,6 @@ async def handle_viewmode_kb_inputs(
|
|||
|
||||
view: ChartView,
|
||||
recv_chan: trio.abc.ReceiveChannel,
|
||||
dss: dict[str, DisplayState],
|
||||
|
||||
) -> None:
|
||||
|
||||
|
@ -178,42 +173,17 @@ async def handle_viewmode_kb_inputs(
|
|||
Qt.Key_P,
|
||||
}
|
||||
):
|
||||
import tractor
|
||||
feed = order_mode.feed # noqa
|
||||
chart = order_mode.chart # noqa
|
||||
viz = chart.main_viz # noqa
|
||||
vlm_chart = chart.linked.subplots['volume'] # noqa
|
||||
vlm_viz = vlm_chart.main_viz # noqa
|
||||
dvlm_pi = vlm_chart._vizs['dolla_vlm'].plot # noqa
|
||||
import tractor
|
||||
await tractor.pause()
|
||||
view.interact_graphics_cycle()
|
||||
|
||||
# FORCE graphics reset-and-render of all currently
|
||||
# shown data `Viz`s for the current chart app.
|
||||
if (
|
||||
ctrl
|
||||
and key in {
|
||||
Qt.Key_R,
|
||||
}
|
||||
):
|
||||
fqme: str
|
||||
ds: DisplayState
|
||||
for fqme, ds in dss.items():
|
||||
|
||||
viz: Viz
|
||||
for tf, viz in {
|
||||
60: ds.hist_viz,
|
||||
1: ds.viz,
|
||||
}.items():
|
||||
# TODO: only allow this when the data is IN VIEW!
|
||||
# also, we probably can do this more efficiently
|
||||
# / smarter by only redrawing the portion of the
|
||||
# path necessary?
|
||||
viz.reset_graphics()
|
||||
|
||||
# ------ - ------
|
||||
# SEARCH MODE
|
||||
# ------ - ------
|
||||
# SEARCH MODE #
|
||||
# ctlr-<space>/<l> for "lookup", "search" -> open search tree
|
||||
if (
|
||||
ctrl
|
||||
|
@ -273,10 +243,8 @@ async def handle_viewmode_kb_inputs(
|
|||
delta=-view.def_delta,
|
||||
)
|
||||
|
||||
elif (
|
||||
not ctrl
|
||||
and key == Qt.Key_R
|
||||
):
|
||||
elif key == Qt.Key_R:
|
||||
|
||||
# NOTE: seems that if we don't yield a Qt render
|
||||
# cycle then the m4 downsampled curves will show here
|
||||
# without another reset..
|
||||
|
@ -459,7 +427,6 @@ async def handle_viewmode_mouse(
|
|||
|
||||
view: ChartView,
|
||||
recv_chan: trio.abc.ReceiveChannel,
|
||||
dss: dict[str, DisplayState],
|
||||
|
||||
) -> None:
|
||||
|
||||
|
@ -499,7 +466,6 @@ class ChartView(ViewBox):
|
|||
mode_name: str = 'view'
|
||||
def_delta: float = 616 * 6
|
||||
def_scale_factor: float = 1.016 ** (def_delta * -1 / 20)
|
||||
# annots: dict[int, GraphicsObject] = {}
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
|
@ -520,7 +486,6 @@ class ChartView(ViewBox):
|
|||
# defaultPadding=0.,
|
||||
**kwargs
|
||||
)
|
||||
|
||||
# for "known y-range style"
|
||||
self._static_yrange = static_yrange
|
||||
|
||||
|
@ -535,11 +500,7 @@ class ChartView(ViewBox):
|
|||
|
||||
# add our selection box annotator
|
||||
self.select_box = SelectRect(self)
|
||||
# self.select_box.add_to_view(self)
|
||||
# self.addItem(
|
||||
# self.select_box,
|
||||
# ignoreBounds=True,
|
||||
# )
|
||||
self.addItem(self.select_box, ignoreBounds=True)
|
||||
|
||||
self.mode = None
|
||||
self.order_mode: bool = False
|
||||
|
@ -596,7 +557,6 @@ class ChartView(ViewBox):
|
|||
@asynccontextmanager
|
||||
async def open_async_input_handler(
|
||||
self,
|
||||
**handler_kwargs,
|
||||
|
||||
) -> ChartView:
|
||||
|
||||
|
@ -607,20 +567,14 @@ class ChartView(ViewBox):
|
|||
QEvent.KeyPress,
|
||||
QEvent.KeyRelease,
|
||||
},
|
||||
async_handler=partial(
|
||||
handle_viewmode_kb_inputs,
|
||||
**handler_kwargs,
|
||||
),
|
||||
async_handler=handle_viewmode_kb_inputs,
|
||||
),
|
||||
_event.open_handlers(
|
||||
[self],
|
||||
event_types={
|
||||
gs_mouse.GraphicsSceneMousePress,
|
||||
},
|
||||
async_handler=partial(
|
||||
handle_viewmode_mouse,
|
||||
**handler_kwargs,
|
||||
),
|
||||
async_handler=handle_viewmode_mouse,
|
||||
),
|
||||
):
|
||||
yield self
|
||||
|
@ -757,18 +711,17 @@ class ChartView(ViewBox):
|
|||
|
||||
def mouseDragEvent(
|
||||
self,
|
||||
ev: mevs.MouseDragEvent,
|
||||
ev,
|
||||
axis: int | None = None,
|
||||
|
||||
) -> None:
|
||||
pos: Point = ev.pos()
|
||||
lastPos: Point = ev.lastPos()
|
||||
dif: Point = (pos - lastPos) * -1
|
||||
# dif: Point = pos - lastPos
|
||||
# dif: Point = dif * -1
|
||||
pos = ev.pos()
|
||||
lastPos = ev.lastPos()
|
||||
dif = pos - lastPos
|
||||
dif = dif * -1
|
||||
|
||||
# NOTE: if axis is specified, event will only affect that axis.
|
||||
btn = ev.button()
|
||||
button = ev.button()
|
||||
|
||||
# Ignore axes if mouse is disabled
|
||||
mouseEnabled = np.array(
|
||||
|
@ -780,7 +733,7 @@ class ChartView(ViewBox):
|
|||
mask[1-axis] = 0.0
|
||||
|
||||
# Scale or translate based on mouse button
|
||||
if btn & (
|
||||
if button & (
|
||||
QtCore.Qt.LeftButton | QtCore.Qt.MidButton
|
||||
):
|
||||
# zoom y-axis ONLY when click-n-drag on it
|
||||
|
@ -803,55 +756,34 @@ class ChartView(ViewBox):
|
|||
# XXX: WHY
|
||||
ev.accept()
|
||||
|
||||
down_pos: Point = ev.buttonDownPos(
|
||||
btn=btn,
|
||||
)
|
||||
scen_pos: Point = ev.scenePos()
|
||||
scen_down_pos: Point = ev.buttonDownScenePos(
|
||||
btn=btn,
|
||||
)
|
||||
down_pos = ev.buttonDownPos()
|
||||
|
||||
# This is the final position in the drag
|
||||
if ev.isFinish():
|
||||
|
||||
# import pdbp; pdbp.set_trace()
|
||||
self.select_box.mouse_drag_released(down_pos, pos)
|
||||
|
||||
# NOTE: think of this as a `.mouse_drag_release()`
|
||||
# (bc HINT that's what i called the shit ass
|
||||
# method that wrapped this call [yes, as a single
|
||||
# fucking call] originally.. you bish, guille)
|
||||
# Bo.. oraleeee
|
||||
self.select_box.set_scen_pos(
|
||||
# down_pos,
|
||||
# pos,
|
||||
scen_down_pos,
|
||||
scen_pos,
|
||||
)
|
||||
|
||||
# this is the zoom transform cmd
|
||||
ax = QtCore.QRectF(down_pos, pos)
|
||||
ax = self.childGroup.mapRectFromParent(ax)
|
||||
# self.showAxRect(ax)
|
||||
|
||||
# this is the zoom transform cmd
|
||||
self.showAxRect(ax)
|
||||
|
||||
# axis history tracking
|
||||
self.axHistoryPointer += 1
|
||||
self.axHistory = self.axHistory[
|
||||
:self.axHistoryPointer] + [ax]
|
||||
|
||||
else:
|
||||
self.select_box.set_scen_pos(
|
||||
# down_pos,
|
||||
# pos,
|
||||
scen_down_pos,
|
||||
scen_pos,
|
||||
)
|
||||
print('drag finish?')
|
||||
self.select_box.set_pos(down_pos, pos)
|
||||
|
||||
# update shape of scale box
|
||||
# self.updateScaleBox(ev.buttonDownPos(), ev.pos())
|
||||
# breakpoint()
|
||||
# self.updateScaleBox(
|
||||
# down_pos,
|
||||
# ev.pos(),
|
||||
# )
|
||||
self.updateScaleBox(
|
||||
down_pos,
|
||||
ev.pos(),
|
||||
)
|
||||
|
||||
# PANNING MODE
|
||||
else:
|
||||
|
@ -890,7 +822,7 @@ class ChartView(ViewBox):
|
|||
# ev.accept()
|
||||
|
||||
# WEIRD "RIGHT-CLICK CENTER ZOOM" MODE
|
||||
elif btn & QtCore.Qt.RightButton:
|
||||
elif button & QtCore.Qt.RightButton:
|
||||
|
||||
if self.state['aspectLocked'] is not False:
|
||||
mask[0] = 0
|
||||
|
|
|
@ -21,12 +21,9 @@ Double auction top-of-book (L1) graphics.
|
|||
from typing import Tuple
|
||||
|
||||
import pyqtgraph as pg
|
||||
from PyQt5 import QtCore, QtGui
|
||||
from PyQt5.QtCore import QPointF
|
||||
|
||||
from piker.ui.qt import (
|
||||
QPointF,
|
||||
QtCore,
|
||||
QtGui,
|
||||
)
|
||||
from ._axes import YAxisLabel
|
||||
from ._style import hcolor
|
||||
from ._pg_overrides import PlotItem
|
||||
|
|
|
@ -25,17 +25,10 @@ from typing import (
|
|||
)
|
||||
|
||||
import pyqtgraph as pg
|
||||
from PyQt5 import QtGui, QtWidgets
|
||||
from PyQt5.QtWidgets import QLabel, QSizePolicy
|
||||
from PyQt5.QtCore import QPointF, QRectF, Qt
|
||||
|
||||
from piker.ui.qt import (
|
||||
px_cache_mode,
|
||||
QtGui,
|
||||
QtWidgets,
|
||||
QLabel,
|
||||
size_policy,
|
||||
QPointF,
|
||||
QRectF,
|
||||
Qt,
|
||||
)
|
||||
from ._style import (
|
||||
DpiAwareFont,
|
||||
hcolor,
|
||||
|
@ -85,7 +78,7 @@ class Label:
|
|||
self._x_offset = x_offset
|
||||
|
||||
txt = self.txt = QtWidgets.QGraphicsTextItem(parent=parent)
|
||||
txt.setCacheMode(px_cache_mode.DeviceCoordinateCache)
|
||||
txt.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
|
||||
|
||||
vb.scene().addItem(txt)
|
||||
|
||||
|
@ -110,7 +103,7 @@ class Label:
|
|||
self._anchor_func = self.txt.pos().x
|
||||
|
||||
# not sure if this makes a diff
|
||||
self.txt.setCacheMode(px_cache_mode.DeviceCoordinateCache)
|
||||
self.txt.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
|
||||
|
||||
# TODO: edit and selection support
|
||||
# https://doc.qt.io/qt-5/qt.html#TextInteractionFlag-enum
|
||||
|
@ -306,14 +299,12 @@ class FormatLabel(QLabel):
|
|||
"""
|
||||
)
|
||||
self.setFont(_font.font)
|
||||
self.setTextFormat(
|
||||
Qt.TextFormat.MarkdownText
|
||||
)
|
||||
self.setTextFormat(Qt.MarkdownText) # markdown
|
||||
self.setMargin(0)
|
||||
|
||||
self.setSizePolicy(
|
||||
size_policy.Expanding,
|
||||
size_policy.Expanding,
|
||||
QSizePolicy.Expanding,
|
||||
QSizePolicy.Expanding,
|
||||
)
|
||||
self.setAlignment(
|
||||
Qt.AlignVCenter | Qt.AlignLeft
|
||||
|
|
|
@ -27,22 +27,10 @@ from typing import (
|
|||
)
|
||||
|
||||
import pyqtgraph as pg
|
||||
from pyqtgraph import (
|
||||
Point,
|
||||
functions as fn,
|
||||
)
|
||||
from pyqtgraph import Point, functions as fn
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5.QtCore import QPointF
|
||||
|
||||
from piker.ui.qt import (
|
||||
px_cache_mode,
|
||||
QtCore,
|
||||
QtGui,
|
||||
QGraphicsPathItem,
|
||||
QStyleOptionGraphicsItem,
|
||||
QGraphicsItem,
|
||||
QGraphicsScene,
|
||||
QWidget,
|
||||
QPointF,
|
||||
)
|
||||
from ._annotate import LevelMarker
|
||||
from ._anchors import (
|
||||
vbr_left,
|
||||
|
@ -142,9 +130,7 @@ class LevelLine(pg.InfiniteLine):
|
|||
self._right_end_sc: float = 0
|
||||
|
||||
# use px caching
|
||||
self.setCacheMode(
|
||||
px_cache_mode.DeviceCoordinateCache
|
||||
)
|
||||
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
|
||||
|
||||
def txt_offsets(self) -> tuple[int, int]:
|
||||
return 0, 0
|
||||
|
@ -215,7 +201,7 @@ class LevelLine(pg.InfiniteLine):
|
|||
) -> None:
|
||||
|
||||
if not called_from_on_pos_change:
|
||||
last: float = self.value()
|
||||
last = self.value()
|
||||
|
||||
# if the position hasn't changed then ``.update_labels()``
|
||||
# will not be called by a non-triggered `.on_pos_change()`,
|
||||
|
@ -322,7 +308,7 @@ class LevelLine(pg.InfiniteLine):
|
|||
Remove this line from containing chart/view/scene.
|
||||
|
||||
'''
|
||||
scene: QGraphicsScene = self.scene()
|
||||
scene = self.scene()
|
||||
if scene:
|
||||
for label in self._labels:
|
||||
label.delete()
|
||||
|
@ -353,8 +339,8 @@ class LevelLine(pg.InfiniteLine):
|
|||
self,
|
||||
|
||||
p: QtGui.QPainter,
|
||||
opt: QStyleOptionGraphicsItem,
|
||||
w: QWidget
|
||||
opt: QtWidgets.QStyleOptionGraphicsItem,
|
||||
w: QtWidgets.QWidget
|
||||
|
||||
) -> None:
|
||||
'''
|
||||
|
@ -431,9 +417,9 @@ class LevelLine(pg.InfiniteLine):
|
|||
|
||||
def add_marker(
|
||||
self,
|
||||
path: QGraphicsPathItem,
|
||||
path: QtWidgets.QGraphicsPathItem,
|
||||
|
||||
) -> QGraphicsPathItem:
|
||||
) -> QtWidgets.QGraphicsPathItem:
|
||||
|
||||
self._marker = path
|
||||
self._marker.setPen(self.currentPen)
|
||||
|
|
|
@ -20,14 +20,16 @@ Super fast OHLC sampling graphics types.
|
|||
from __future__ import annotations
|
||||
|
||||
import numpy as np
|
||||
|
||||
from piker.ui.qt import (
|
||||
from PyQt5 import (
|
||||
QtGui,
|
||||
QtWidgets,
|
||||
QPainterPath,
|
||||
)
|
||||
from PyQt5.QtCore import (
|
||||
QLineF,
|
||||
QRectF,
|
||||
)
|
||||
from PyQt5.QtGui import QPainterPath
|
||||
|
||||
from ._curve import FlowGraphic
|
||||
from ..toolz import (
|
||||
Profiler,
|
||||
|
|
|
@ -24,6 +24,8 @@ view transforms.
|
|||
"""
|
||||
import pyqtgraph as pg
|
||||
|
||||
from ._axes import Axis
|
||||
|
||||
|
||||
def invertQTransform(tr):
|
||||
"""Return a QTransform that is the inverse of *tr*.
|
||||
|
@ -51,9 +53,6 @@ def _do_overrides() -> None:
|
|||
pg.functions.invertQTransform = invertQTransform
|
||||
pg.PlotItem = PlotItem
|
||||
|
||||
from ._axes import Axis
|
||||
pg.Axis = Axis
|
||||
|
||||
# enable "QPainterPathPrivate for faster arrayToQPath" from
|
||||
# https://github.com/pyqtgraph/pyqtgraph/pull/2324
|
||||
pg.setConfigOption('enableExperimental', True)
|
||||
|
@ -235,7 +234,7 @@ class PlotItem(pg.PlotItem):
|
|||
# ``ViewBox`` geometry bug.. where a gap for the
|
||||
# 'bottom' axis is somehow left in?
|
||||
# axis = pg.AxisItem(orientation=name, parent=self)
|
||||
axis = pg.Axis(
|
||||
axis = Axis(
|
||||
self,
|
||||
orientation=name,
|
||||
parent=self,
|
||||
|
|
|
@ -344,10 +344,7 @@ class SettingsPane:
|
|||
dsize = tracker.live_pp.dsize
|
||||
|
||||
# READ out settings and update the status UI / settings widgets
|
||||
unit_char: str = {
|
||||
'currency': '$',
|
||||
'units': 'u',
|
||||
}[alloc.size_unit]
|
||||
suffix = {'currency': ' $', 'units': ' u'}[alloc.size_unit]
|
||||
size_unit, limit = alloc.limit_info()
|
||||
|
||||
step_size, currency_per_slot = alloc.step_sizes()
|
||||
|
@ -361,11 +358,10 @@ class SettingsPane:
|
|||
self.apply_setting('limit', limit)
|
||||
|
||||
self.step_label.format(
|
||||
unit_prefix=unit_char,
|
||||
step_size=str(humanize(step_size))
|
||||
step_size=str(humanize(step_size)) + suffix
|
||||
)
|
||||
self.limit_label.format(
|
||||
limit=f'{unit_char}: {str(humanize(limit))}'
|
||||
limit=str(humanize(limit)) + suffix
|
||||
)
|
||||
|
||||
# update size unit in UI
|
||||
|
|
|
@ -1,426 +0,0 @@
|
|||
# piker: trading gear for hackers
|
||||
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
|
||||
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
Remote control tasks for sending annotations (and maybe more cmds)
|
||||
to a chart from some other actor.
|
||||
|
||||
'''
|
||||
from __future__ import annotations
|
||||
from contextlib import (
|
||||
asynccontextmanager as acm,
|
||||
AsyncExitStack,
|
||||
)
|
||||
from functools import partial
|
||||
from pprint import pformat
|
||||
from typing import (
|
||||
# Any,
|
||||
AsyncContextManager,
|
||||
)
|
||||
|
||||
import tractor
|
||||
from tractor import trionics
|
||||
from tractor import (
|
||||
Portal,
|
||||
Context,
|
||||
MsgStream,
|
||||
)
|
||||
|
||||
from piker.log import get_logger
|
||||
from piker.types import Struct
|
||||
from piker.service import find_service
|
||||
from piker.brokers import SymbolNotFound
|
||||
from piker.ui.qt import (
|
||||
QGraphicsItem,
|
||||
)
|
||||
from ._display import DisplayState
|
||||
from ._interaction import ChartView
|
||||
from ._editors import SelectRect
|
||||
from ._chart import ChartPlotWidget
|
||||
from ._dataviz import Viz
|
||||
|
||||
|
||||
log = get_logger(__name__)
|
||||
|
||||
# NOTE: this is UPDATED by the `._display.graphics_update_loop()`
|
||||
# once all chart widgets / Viz per flume have been initialized
|
||||
# allowing for remote annotation (control) of any chart-actor's mkt
|
||||
# feed by fqme lookup Bo
|
||||
_dss: dict[str, DisplayState] = {}
|
||||
|
||||
# stash each and every client connection so that they can all
|
||||
# be cancelled on shutdown/error.
|
||||
# TODO: make `tractor.Context` hashable via is `.cid: str`?
|
||||
# _ctxs: set[Context] = set()
|
||||
# TODO: use type statements from 3.12+
|
||||
IpcCtxTable = dict[
|
||||
str, # each `Context.cid`
|
||||
tuple[
|
||||
Context, # handle for ctx-cancellation
|
||||
set[int] # set of annotation (instance) ids
|
||||
]
|
||||
]
|
||||
|
||||
_ctxs: IpcCtxTable = {}
|
||||
|
||||
# XXX: global map of all uniquely created annotation-graphics so
|
||||
# that they can be mutated (eventually) by a client.
|
||||
# NOTE: this map is only populated on the `chart` actor side (aka
|
||||
# the "annotations server" which actually renders to a Qt canvas).
|
||||
# type AnnotsTable = dict[int, QGraphicsItem]
|
||||
AnnotsTable = dict[int, QGraphicsItem]
|
||||
|
||||
_annots: AnnotsTable = {}
|
||||
|
||||
|
||||
async def serve_rc_annots(
|
||||
ipc_key: str,
|
||||
annot_req_stream: MsgStream,
|
||||
dss: dict[str, DisplayState],
|
||||
ctxs: IpcCtxTable,
|
||||
annots: AnnotsTable,
|
||||
|
||||
) -> None:
|
||||
async for msg in annot_req_stream:
|
||||
match msg:
|
||||
case {
|
||||
'cmd': 'SelectRect',
|
||||
'fqme': fqme,
|
||||
'timeframe': timeframe,
|
||||
'meth': str(meth),
|
||||
'kwargs': dict(kwargs),
|
||||
}:
|
||||
|
||||
ds: DisplayState = _dss[fqme]
|
||||
chart: ChartPlotWidget = {
|
||||
60: ds.hist_chart,
|
||||
1: ds.chart,
|
||||
}[timeframe]
|
||||
cv: ChartView = chart.cv
|
||||
|
||||
# annot type lookup from cmd
|
||||
rect = SelectRect(
|
||||
viewbox=cv,
|
||||
|
||||
# TODO: make this more dynamic?
|
||||
# -[ ] pull from conf.toml?
|
||||
# -[ ] add `.set_color()` method to type?
|
||||
# -[ ] make a green/red based on direction
|
||||
# instead of default static color?
|
||||
color=kwargs.pop('color', None),
|
||||
)
|
||||
# XXX NOTE: this is REQUIRED to set the rect
|
||||
# resize callback!
|
||||
rect.chart: ChartPlotWidget = chart
|
||||
|
||||
# delegate generically to the requested method
|
||||
getattr(rect, meth)(**kwargs)
|
||||
rect.show()
|
||||
aid: int = id(rect)
|
||||
annots[aid] = rect
|
||||
aids: set[int] = ctxs[ipc_key][1]
|
||||
aids.add(aid)
|
||||
await annot_req_stream.send(aid)
|
||||
|
||||
case {
|
||||
'cmd': 'remove',
|
||||
'aid': int(aid),
|
||||
}:
|
||||
# NOTE: this is normally entered on
|
||||
# a client's annotation de-alloc normally
|
||||
# prior to detach or modify.
|
||||
annot: QGraphicsItem = annots[aid]
|
||||
annot.delete()
|
||||
|
||||
# respond to client indicating annot
|
||||
# was indeed deleted.
|
||||
await annot_req_stream.send(aid)
|
||||
|
||||
case {
|
||||
'cmd': 'redraw',
|
||||
'fqme': fqme,
|
||||
'timeframe': timeframe,
|
||||
|
||||
# TODO: maybe more fields?
|
||||
# 'render': int(aid),
|
||||
# 'viz_name': str(viz_name),
|
||||
}:
|
||||
# NOTE: old match from the 60s display loop task
|
||||
# | {
|
||||
# 'backfilling': (str(viz_name), timeframe),
|
||||
# }:
|
||||
ds: DisplayState = _dss[fqme]
|
||||
viz: Viz = {
|
||||
60: ds.hist_viz,
|
||||
1: ds.viz,
|
||||
}[timeframe]
|
||||
log.warning(
|
||||
f'Forcing VIZ REDRAW:\n'
|
||||
f'fqme: {fqme}\n'
|
||||
f'timeframe: {timeframe}\n'
|
||||
)
|
||||
viz.reset_graphics()
|
||||
|
||||
case _:
|
||||
log.error(
|
||||
'Unknown remote annotation cmd:\n'
|
||||
f'{pformat(msg)}'
|
||||
)
|
||||
|
||||
|
||||
@tractor.context
|
||||
async def remote_annotate(
|
||||
ctx: Context,
|
||||
) -> None:
|
||||
|
||||
global _dss, _ctxs
|
||||
assert _dss
|
||||
|
||||
_ctxs[ctx.cid] = (ctx, set())
|
||||
|
||||
# send back full fqme symbology to caller
|
||||
await ctx.started(list(_dss))
|
||||
|
||||
# open annot request handler stream
|
||||
async with ctx.open_stream() as annot_req_stream:
|
||||
try:
|
||||
await serve_rc_annots(
|
||||
ipc_key=ctx.cid,
|
||||
annot_req_stream=annot_req_stream,
|
||||
dss=_dss,
|
||||
ctxs=_ctxs,
|
||||
annots=_annots,
|
||||
)
|
||||
finally:
|
||||
# ensure all annots for this connection are deleted
|
||||
# on any final teardown
|
||||
(_ctx, aids) = _ctxs[ctx.cid]
|
||||
assert _ctx is ctx
|
||||
for aid in aids:
|
||||
annot: QGraphicsItem = _annots[aid]
|
||||
annot.delete()
|
||||
|
||||
|
||||
class AnnotCtl(Struct):
|
||||
'''
|
||||
A control for remote "data annotations".
|
||||
|
||||
You know those "squares they always show in machine vision
|
||||
UIs.." this API allows you to remotely control stuff like that
|
||||
in some other graphics actor.
|
||||
|
||||
'''
|
||||
ctx2fqmes: dict[str, str]
|
||||
fqme2ipc: dict[str, MsgStream]
|
||||
_annot_stack: AsyncExitStack
|
||||
|
||||
# runtime-populated mapping of all annotation
|
||||
# ids to their equivalent IPC msg-streams.
|
||||
_ipcs: dict[int, MsgStream] = {}
|
||||
|
||||
def _get_ipc(
|
||||
self,
|
||||
fqme: str,
|
||||
) -> MsgStream:
|
||||
ipc: MsgStream = self.fqme2ipc.get(fqme)
|
||||
if ipc is None:
|
||||
raise SymbolNotFound(
|
||||
'No chart (actor) seems to have mkt feed loaded?\n'
|
||||
f'{fqme}'
|
||||
)
|
||||
return ipc
|
||||
|
||||
async def add_rect(
|
||||
self,
|
||||
fqme: str,
|
||||
timeframe: float,
|
||||
start_pos: tuple[float, float],
|
||||
end_pos: tuple[float, float],
|
||||
|
||||
# TODO: a `Literal['view', 'scene']` for this?
|
||||
domain: str = 'view', # or 'scene'
|
||||
color: str = 'dad_blue',
|
||||
|
||||
from_acm: bool = False,
|
||||
|
||||
) -> int:
|
||||
'''
|
||||
Add a `SelectRect` annotation to the target view, return
|
||||
the instance's `id(obj)` from the remote UI actor.
|
||||
|
||||
'''
|
||||
ipc: MsgStream = self._get_ipc(fqme)
|
||||
await ipc.send({
|
||||
'fqme': fqme,
|
||||
'cmd': 'SelectRect',
|
||||
'timeframe': timeframe,
|
||||
# 'meth': str(meth),
|
||||
'meth': 'set_view_pos' if domain == 'view' else 'set_scene_pos',
|
||||
'kwargs': {
|
||||
'start_pos': tuple(start_pos),
|
||||
'end_pos': tuple(end_pos),
|
||||
'color': color,
|
||||
'update_label': False,
|
||||
},
|
||||
})
|
||||
aid: int = await ipc.receive()
|
||||
self._ipcs[aid] = ipc
|
||||
if not from_acm:
|
||||
self._annot_stack.push_async_callback(
|
||||
partial(
|
||||
self.remove,
|
||||
aid,
|
||||
)
|
||||
)
|
||||
return aid
|
||||
|
||||
async def remove(
|
||||
self,
|
||||
aid: int,
|
||||
|
||||
) -> bool:
|
||||
'''
|
||||
Remove an existing annotation by instance id.
|
||||
|
||||
'''
|
||||
ipc: MsgStream = self._ipcs[aid]
|
||||
await ipc.send({
|
||||
'cmd': 'remove',
|
||||
'aid': aid,
|
||||
})
|
||||
removed: bool = await ipc.receive()
|
||||
return removed
|
||||
|
||||
@acm
|
||||
async def open_rect(
|
||||
self,
|
||||
**kwargs,
|
||||
) -> int:
|
||||
try:
|
||||
aid: int = await self.add_rect(
|
||||
from_acm=True,
|
||||
**kwargs,
|
||||
)
|
||||
yield aid
|
||||
finally:
|
||||
await self.remove(aid)
|
||||
|
||||
async def redraw(
|
||||
self,
|
||||
fqme: str,
|
||||
timeframe: float,
|
||||
) -> None:
|
||||
await self._get_ipc(fqme).send({
|
||||
'cmd': 'redraw',
|
||||
'fqme': fqme,
|
||||
# 'render': int(aid),
|
||||
# 'viz_name': str(viz_name),
|
||||
'timeframe': timeframe,
|
||||
})
|
||||
|
||||
# TODO: do we even need this?
|
||||
# async def modify(
|
||||
# self,
|
||||
# aid: int, # annotation id
|
||||
# meth: str, # far end graphics object method to invoke
|
||||
# params: dict[str, Any], # far end `meth(**kwargs)`
|
||||
# ) -> bool:
|
||||
# '''
|
||||
# Modify an existing (remote) annotation's graphics
|
||||
# parameters, thus changing its appearance / state in real
|
||||
# time.
|
||||
|
||||
# '''
|
||||
# raise NotImplementedError
|
||||
|
||||
|
||||
@acm
|
||||
async def open_annot_ctl(
|
||||
uid: tuple[str, str] | None = None,
|
||||
|
||||
) -> AnnotCtl:
|
||||
# TODO: load connection to a specific chart actor
|
||||
# -[ ] pull from either service scan or config
|
||||
# -[ ] return some kinda client/proxy thinger?
|
||||
# -[ ] maybe we should finally just provide this as
|
||||
# a `tractor.hilevel.CallableProxy` or wtv?
|
||||
# -[ ] use this from the storage.cli stuff to mark up gaps!
|
||||
|
||||
maybe_portals: list[Portal] | None
|
||||
fqmes: list[str]
|
||||
async with find_service(
|
||||
service_name='chart',
|
||||
first_only=False,
|
||||
) as maybe_portals:
|
||||
|
||||
ctx_mngrs: list[AsyncContextManager] = []
|
||||
|
||||
# TODO: print the current discoverable actor UID set
|
||||
# here as well?
|
||||
if not maybe_portals:
|
||||
raise RuntimeError('No chart UI actors found in service domain?')
|
||||
|
||||
for portal in maybe_portals:
|
||||
ctx_mngrs.append(
|
||||
portal.open_context(remote_annotate)
|
||||
)
|
||||
|
||||
ctx2fqmes: dict[str, set[str]] = {}
|
||||
fqme2ipc: dict[str, MsgStream] = {}
|
||||
stream_ctxs: list[AsyncContextManager] = []
|
||||
|
||||
async with (
|
||||
trionics.gather_contexts(ctx_mngrs) as ctxs,
|
||||
):
|
||||
for (ctx, fqmes) in ctxs:
|
||||
stream_ctxs.append(ctx.open_stream())
|
||||
|
||||
# fill lookup table of mkt addrs to IPC ctxs
|
||||
for fqme in fqmes:
|
||||
if other := fqme2ipc.get(fqme):
|
||||
raise ValueError(
|
||||
f'More than one chart displays {fqme}!?\n'
|
||||
'Other UI actor info:\n'
|
||||
f'channel: {other._ctx.chan}]\n'
|
||||
f'actor uid: {other._ctx.chan.uid}]\n'
|
||||
f'ctx id: {other._ctx.cid}]\n'
|
||||
)
|
||||
|
||||
ctx2fqmes.setdefault(
|
||||
ctx.cid,
|
||||
set(),
|
||||
).add(fqme)
|
||||
|
||||
async with trionics.gather_contexts(stream_ctxs) as streams:
|
||||
for stream in streams:
|
||||
fqmes: set[str] = ctx2fqmes[stream._ctx.cid]
|
||||
for fqme in fqmes:
|
||||
fqme2ipc[fqme] = stream
|
||||
|
||||
# NOTE: on graceful teardown we always attempt to
|
||||
# remove all annots that were created by the
|
||||
# entering client.
|
||||
# TODO: should we maybe instead/also do this on the
|
||||
# server-actor side so that when a client
|
||||
# disconnects we always delete all annotations by
|
||||
# default instead of expecting the client to?
|
||||
async with AsyncExitStack() as annots_stack:
|
||||
client = AnnotCtl(
|
||||
ctx2fqmes=ctx2fqmes,
|
||||
fqme2ipc=fqme2ipc,
|
||||
_annot_stack=annots_stack,
|
||||
)
|
||||
yield client
|
|
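For orientation, a minimal client-side usage sketch of the `open_annot_ctl()` / `AnnotCtl` API shown above (only present on the branch that ships `piker/ui/_remote_ctl.py`). The fqme, coordinates and sleep are hypothetical illustration values, and a `chart` UI actor must already have that market feed loaded:

# hedged sketch, not part of the diff; fqme/coords below are made up
import trio
from piker.ui._remote_ctl import (
    open_annot_ctl,
    AnnotCtl,
)


async def mark_range() -> None:
    actl: AnnotCtl
    async with open_annot_ctl() as actl:
        # draw a rect over a (made-up) index/price range on the
        # 60s-timeframe (history) chart for this market
        aid: int = await actl.add_rect(
            fqme='btcusdt.usdt.binance',  # hypothetical market fqme
            timeframe=60,
            start_pos=(1_000, 25_000.0),
            end_pos=(1_050, 26_500.0),
        )
        await trio.sleep(3)  # leave it on screen briefly
        await actl.remove(aid)  # explicit delete; teardown also cleans up


trio.run(mark_range)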
@ -30,8 +30,8 @@ from typing import (
|
|||
import msgspec
|
||||
import numpy as np
|
||||
import pyqtgraph as pg
|
||||
from PyQt5.QtGui import QPainterPath
|
||||
|
||||
from piker.ui.qt import QPainterPath
|
||||
from ..data._formatters import (
|
||||
IncrementalFormatter,
|
||||
)
|
||||
|
|
|
@ -43,29 +43,32 @@ from typing import (
|
|||
Iterator,
|
||||
)
|
||||
import time
|
||||
from pprint import pformat
|
||||
# from pprint import pformat
|
||||
|
||||
from rapidfuzz import process as fuzzy
|
||||
import trio
|
||||
from trio_typing import TaskStatus
|
||||
|
||||
from piker.ui.qt import (
|
||||
size_policy,
|
||||
align_flag,
|
||||
from PyQt5 import QtCore
|
||||
from PyQt5 import QtWidgets
|
||||
from PyQt5.QtCore import (
|
||||
Qt,
|
||||
QtCore,
|
||||
QtWidgets,
|
||||
QModelIndex,
|
||||
QItemSelectionModel,
|
||||
)
|
||||
from PyQt5.QtGui import (
|
||||
# QLayout,
|
||||
QStandardItem,
|
||||
QStandardItemModel,
|
||||
)
|
||||
from PyQt5.QtWidgets import (
|
||||
QWidget,
|
||||
QTreeView,
|
||||
# QListWidgetItem,
|
||||
# QAbstractScrollArea,
|
||||
# QStyledItemDelegate,
|
||||
)
|
||||
|
||||
|
||||
from ..log import get_logger
|
||||
from ._style import (
|
||||
_font,
|
||||
|
@ -126,8 +129,8 @@ class CompleterView(QTreeView):
|
|||
|
||||
# ux settings
|
||||
self.setSizePolicy(
|
||||
size_policy.Expanding,
|
||||
size_policy.Expanding,
|
||||
QtWidgets.QSizePolicy.Expanding,
|
||||
QtWidgets.QSizePolicy.Expanding,
|
||||
)
|
||||
self.setItemsExpandable(True)
|
||||
self.setExpandsOnDoubleClick(False)
|
||||
|
@ -564,8 +567,8 @@ class SearchWidget(QtWidgets.QWidget):
|
|||
|
||||
# size it as we specify
|
||||
self.setSizePolicy(
|
||||
size_policy.Fixed,
|
||||
size_policy.Fixed,
|
||||
QtWidgets.QSizePolicy.Fixed,
|
||||
QtWidgets.QSizePolicy.Fixed,
|
||||
)
|
||||
|
||||
self.godwidget = godwidget
|
||||
|
@ -589,16 +592,14 @@ class SearchWidget(QtWidgets.QWidget):
|
|||
}}
|
||||
"""
|
||||
)
|
||||
label.setTextFormat(
|
||||
Qt.TextFormat.MarkdownText
|
||||
)
|
||||
label.setTextFormat(3) # markdown
|
||||
label.setFont(_font.font)
|
||||
label.setMargin(4)
|
||||
label.setText("search:")
|
||||
label.show()
|
||||
label.setAlignment(
|
||||
align_flag.AlignVCenter
|
||||
| align_flag.AlignLeft
|
||||
QtCore.Qt.AlignVCenter
|
||||
| QtCore.Qt.AlignLeft
|
||||
)
|
||||
|
||||
self.bar_hbox.addWidget(label)
|
||||
|
@ -616,17 +617,9 @@ class SearchWidget(QtWidgets.QWidget):
|
|||
|
||||
self.vbox.addLayout(self.bar_hbox)
|
||||
|
||||
self.vbox.setAlignment(
|
||||
self.bar,
|
||||
align_flag.AlignTop
|
||||
| align_flag.AlignRight,
|
||||
)
|
||||
self.vbox.setAlignment(self.bar, Qt.AlignTop | Qt.AlignRight)
|
||||
self.vbox.addWidget(self.bar.view)
|
||||
self.vbox.setAlignment(
|
||||
self.view,
|
||||
align_flag.AlignTop
|
||||
| align_flag.AlignLeft,
|
||||
)
|
||||
self.vbox.setAlignment(self.view, Qt.AlignTop | Qt.AlignLeft)
|
||||
|
||||
def focus(self) -> None:
|
||||
self.show()
|
||||
|
@ -1146,25 +1139,21 @@ async def search_simple_dict(
|
|||
|
||||
) -> dict[str, Any]:
|
||||
|
||||
tokens: list[str] = []
|
||||
tokens = []
|
||||
for key in source:
|
||||
match key:
|
||||
case str():
|
||||
tokens.append(key)
|
||||
case []:
|
||||
if not isinstance(key, str):
|
||||
tokens.extend(key)
|
||||
else:
|
||||
tokens.append(key)
|
||||
|
||||
# search routine can be specified as a function such
|
||||
# as in the case of the current app's local symbol cache
|
||||
matches = fuzzy.extract(
|
||||
matches = fuzzy.extractBests(
|
||||
text,
|
||||
tokens,
|
||||
score_cutoff=90,
|
||||
)
|
||||
log.info(
|
||||
'cache search results:\n'
|
||||
f'{pformat(matches)}'
|
||||
)
|
||||
|
||||
return [item[0] for item in matches]
|
||||
|
||||
|
||||
|
|
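The change above from `extractBests()` to `rapidfuzz.process.extract()` with a `score_cutoff` can be tried standalone; here is a small sketch with a made-up token list, relaxing the 90-point cutoff used in the UI code so the demo returns a hit:

# hedged standalone sketch of the fuzzy lookup used above; tokens are made up
from rapidfuzz import process as fuzzy

tokens: list[str] = [
    'btcusdt.binance',
    'ethusdt.binance',
    'xmrusdt.kraken',
]
matches = fuzzy.extract(
    'btc',
    tokens,
    score_cutoff=50,  # the search code above uses 90
)
# each match is a (choice, score, index) tuple; keep just the strings
print([item[0] for item in matches])  # e.g. ['btcusdt.binance']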
|
@ -22,14 +22,10 @@ from typing import Dict
|
|||
import math
|
||||
|
||||
import pyqtgraph as pg
|
||||
from PyQt5 import QtCore, QtGui
|
||||
from PyQt5.QtCore import Qt, QCoreApplication
|
||||
from qdarkstyle import DarkPalette
|
||||
|
||||
from .qt import (
|
||||
QtCore,
|
||||
QtGui,
|
||||
Qt,
|
||||
QCoreApplication,
|
||||
)
|
||||
from ..log import get_logger
|
||||
|
||||
from .. import config
|
||||
|
|
|
@ -27,14 +27,16 @@ from typing import (
|
|||
)
|
||||
import uuid
|
||||
|
||||
from piker.ui.qt import (
|
||||
Qt,
|
||||
QtCore,
|
||||
from PyQt5 import QtCore
|
||||
from PyQt5.QtWidgets import (
|
||||
QWidget,
|
||||
QMainWindow,
|
||||
QApplication,
|
||||
QLabel,
|
||||
QStatusBar,
|
||||
)
|
||||
|
||||
from PyQt5.QtGui import (
|
||||
QScreen,
|
||||
QCloseEvent,
|
||||
)
|
||||
|
@ -195,9 +197,7 @@ class MainWindow(QMainWindow):
|
|||
"""
|
||||
# font-size : {font_size}px;
|
||||
)
|
||||
label.setTextFormat(
|
||||
Qt.TextFormat.MarkdownText
|
||||
)
|
||||
label.setTextFormat(3) # markdown
|
||||
label.setFont(_font_small.font)
|
||||
label.setMargin(2)
|
||||
label.setAlignment(
|
||||
|
|
|
@ -96,17 +96,9 @@ def monitor(config, rate, name, dhost, test, tl):
|
|||
@click.option('--rate', '-r', default=1, help='Logging level')
|
||||
@click.argument('symbol', required=True)
|
||||
@click.pass_obj
|
||||
def optschain(
|
||||
config,
|
||||
symbol,
|
||||
date,
|
||||
rate,
|
||||
test,
|
||||
):
|
||||
'''
|
||||
Start an option chain UI
|
||||
|
||||
'''
|
||||
def optschain(config, symbol, date, rate, test):
|
||||
"""Start an option chain UI
|
||||
"""
|
||||
# global opts
|
||||
loglevel = config['loglevel']
|
||||
brokername = config['broker']
|
||||
|
@ -140,23 +132,21 @@ def optschain(
|
|||
default=None,
|
||||
help='Enable pyqtgraph profiling'
|
||||
)
|
||||
# @click.option(
|
||||
# '--pdb',
|
||||
# is_flag=True,
|
||||
# help='Enable tractor debug mode'
|
||||
# )
|
||||
@click.option(
|
||||
'--pdb',
|
||||
is_flag=True,
|
||||
help='Enable tractor debug mode'
|
||||
)
|
||||
@click.argument('symbols', nargs=-1, required=True)
|
||||
# @click.pass_context
|
||||
@click.pass_obj
|
||||
def chart(
|
||||
config,
|
||||
# ctx: click.Context,
|
||||
symbols: list[str],
|
||||
profile,
|
||||
pdb: bool,
|
||||
):
|
||||
'''
|
||||
Run chart UI app, spawning service daemons dynamically as
|
||||
needed if not discovered via [network] config.
|
||||
Start a real-time charting UI
|
||||
|
||||
'''
|
||||
# eg. ``--profile 3`` reports profiling for anything slower than 3 ms.
|
||||
|
@ -183,42 +173,6 @@ def chart(
|
|||
tractorloglevel = config['tractorloglevel']
|
||||
pikerloglevel = config['loglevel']
|
||||
|
||||
maddrs: list[tuple[str, int]] = config.get(
|
||||
'maddrs',
|
||||
[],
|
||||
)
|
||||
|
||||
# if maddrs:
|
||||
# from tractor._multiaddr import parse_maddr
|
||||
# for addr in maddrs:
|
||||
# breakpoint()
|
||||
# layers: dict = parse_maddr(addr)
|
||||
|
||||
regaddrs: list[tuple[str, int]] = config.get(
|
||||
'registry_addrs',
|
||||
[],
|
||||
)
|
||||
|
||||
from ..config import load
|
||||
conf, _ = load(
|
||||
conf_name='conf',
|
||||
)
|
||||
network: dict = conf.get('network')
|
||||
if network:
|
||||
from ..cli import load_trans_eps
|
||||
eps: dict = load_trans_eps(
|
||||
network,
|
||||
maddrs,
|
||||
)
|
||||
for layers in eps['pikerd']:
|
||||
regaddrs.append((
|
||||
layers['ipv4']['addr'],
|
||||
layers['tcp']['port'],
|
||||
))
|
||||
|
||||
from tractor.devx import maybe_open_crash_handler
|
||||
pdb: bool = config['pdb']
|
||||
with maybe_open_crash_handler(pdb=pdb):
|
||||
_main(
|
||||
syms=symbols,
|
||||
brokermods=brokermods,
|
||||
|
@ -227,11 +181,6 @@ def chart(
|
|||
'debug_mode': pdb,
|
||||
'loglevel': tractorloglevel,
|
||||
'name': 'chart',
|
||||
'registry_addrs': list(set(regaddrs)),
|
||||
'enable_modules': [
|
||||
|
||||
# remote data-view annotations Bo
|
||||
'piker.ui._remote_ctl',
|
||||
],
|
||||
'registry_addr': config.get('registry_addr'),
|
||||
},
|
||||
)
|
||||
|
|
|
@ -34,6 +34,7 @@ import uuid
|
|||
from bidict import bidict
|
||||
import tractor
|
||||
import trio
|
||||
from PyQt5.QtCore import Qt
|
||||
|
||||
from piker import config
|
||||
from piker.accounting import (
|
||||
|
@ -58,7 +59,6 @@ from piker.data import (
|
|||
)
|
||||
from piker.types import Struct
|
||||
from piker.log import get_logger
|
||||
from piker.ui.qt import Qt
|
||||
from ._editors import LineEditor, ArrowEditor
|
||||
from ._lines import order_line, LevelLine
|
||||
from ._position import (
|
||||
|
@ -358,7 +358,7 @@ class OrderMode:
|
|||
send_msg: bool = True,
|
||||
order: Order | None = None,
|
||||
|
||||
) -> Dialog|None:
|
||||
) -> Dialog:
|
||||
'''
|
||||
Send execution order to EMS return a level line to
|
||||
represent the order on a chart.
|
||||
|
@ -378,16 +378,6 @@ class OrderMode:
|
|||
'oid': oid,
|
||||
})
|
||||
|
||||
if order.price <= 0:
|
||||
log.error(
|
||||
'*!? Invalid `Order.price <= 0` ?!*\n'
|
||||
# TODO: make this present multi-line in object form
|
||||
# like `ib_insync.contracts.Contract.__repr__()`
|
||||
f'{order}\n'
|
||||
)
|
||||
self.cancel_orders([order.oid])
|
||||
return None
|
||||
|
||||
lines = self.lines_from_order(
|
||||
order,
|
||||
show_markers=True,
|
||||
|
@ -494,7 +484,7 @@ class OrderMode:
|
|||
uuid: str,
|
||||
order: Order | None = None,
|
||||
|
||||
) -> Dialog | None:
|
||||
) -> Dialog:
|
||||
'''
|
||||
Order submitted status event handler.
|
||||
|
||||
|
@ -515,11 +505,6 @@ class OrderMode:
|
|||
# if an order msg is provided update the line
|
||||
# **from** that msg.
|
||||
if order:
|
||||
if order.price <= 0:
|
||||
log.error(f'Order has 0 price, cancelling..\n{order}')
|
||||
self.cancel_orders([order.oid])
|
||||
return None
|
||||
|
||||
line.set_level(order.price)
|
||||
self.on_level_change_update_next_order_info(
|
||||
level=order.price,
|
||||
|
@ -628,7 +613,7 @@ class OrderMode:
|
|||
|
||||
oids: set[str] = set()
|
||||
for line in lines:
|
||||
if dialog := getattr(line, 'dialog', None):
|
||||
dialog: Dialog = getattr(line, 'dialog', None)
|
||||
oid: str = dialog.uuid
|
||||
if (
|
||||
dialog
|
||||
|
@ -678,7 +663,7 @@ class OrderMode:
|
|||
self,
|
||||
msg: Status,
|
||||
|
||||
) -> Dialog | None:
|
||||
) -> Dialog:
|
||||
# NOTE: the `.order` attr **must** be set with the
|
||||
# equivalent order msg in order to be loaded.
|
||||
order = msg.req
|
||||
|
@ -709,15 +694,12 @@ class OrderMode:
|
|||
fqsn=fqme,
|
||||
info={},
|
||||
)
|
||||
maybe_dialog: Dialog | None = self.submit_order(
|
||||
dialog = self.submit_order(
|
||||
send_msg=False,
|
||||
order=order,
|
||||
)
|
||||
if maybe_dialog is None:
|
||||
return None
|
||||
|
||||
assert self.dialogs[oid] == maybe_dialog
|
||||
return maybe_dialog
|
||||
assert self.dialogs[oid] == dialog
|
||||
return dialog
|
||||
|
||||
|
||||
@asynccontextmanager
|
||||
|
@ -948,8 +930,13 @@ async def open_order_mode(
|
|||
msg,
|
||||
)
|
||||
|
||||
# start async input handling for chart's view
|
||||
async with (
|
||||
|
||||
# ``ChartView`` input async handler startup
|
||||
chart.view.open_async_input_handler(),
|
||||
hist_chart.view.open_async_input_handler(),
|
||||
|
||||
# pp pane kb inputs
|
||||
open_form_input_handling(
|
||||
form,
|
||||
|
@ -1018,13 +1005,8 @@ async def process_trade_msg(
|
|||
|
||||
) -> tuple[Dialog, Status]:
|
||||
|
||||
# TODO: obvi once we're parsing to native struct instances we can
|
||||
# drop the `pformat()` call Bo
|
||||
fmtmsg: Struct | dict = msg
|
||||
if not isinstance(msg, Struct):
|
||||
fmtmsg: str = pformat(msg)
|
||||
|
||||
log.debug(f'Received order msg:\n{fmtmsg}')
|
||||
fmsg = pformat(msg)
|
||||
log.debug(f'Received order msg:\n{fmsg}')
|
||||
name = msg['name']
|
||||
|
||||
if name in (
|
||||
|
@ -1040,7 +1022,7 @@ async def process_trade_msg(
|
|||
):
|
||||
log.info(
|
||||
f'Loading position for `{fqme}`:\n'
|
||||
f'{fmtmsg}'
|
||||
f'{fmsg}'
|
||||
)
|
||||
tracker = mode.trackers[msg['account']]
|
||||
tracker.live_pp.update_from_msg(msg)
|
||||
|
@ -1082,7 +1064,7 @@ async def process_trade_msg(
|
|||
|
||||
elif order.action != 'cancel':
|
||||
log.warning(
|
||||
f'received msg for untracked dialog:\n{fmtmsg}'
|
||||
f'received msg for untracked dialog:\n{fmsg}'
|
||||
)
|
||||
assert msg.resp in ('open', 'dark_open'), f'Unknown msg: {msg}'
|
||||
|
||||
|
@ -1102,24 +1084,7 @@ async def process_trade_msg(
|
|||
)
|
||||
):
|
||||
msg.req = order
|
||||
dialog: (
|
||||
Dialog
|
||||
# NOTE: on an invalid order submission (eg.
|
||||
# price <=0) the downstream APIs may return
|
||||
# a null.
|
||||
| None
|
||||
) = mode.load_unknown_dialog_from_msg(msg)
|
||||
|
||||
# cancel any invalid pre-existing order!
|
||||
if dialog is None:
|
||||
log.warning(
|
||||
'Order was ignored/invalid?\n'
|
||||
f'{order}'
|
||||
)
|
||||
|
||||
# if valid, display the order line the same as if
|
||||
# it was submitted during this UI session.
|
||||
else:
|
||||
dialog = mode.load_unknown_dialog_from_msg(msg)
|
||||
mode.on_submit(oid)
|
||||
|
||||
case Status(resp='error'):
|
||||
|
@ -1149,7 +1114,7 @@ async def process_trade_msg(
|
|||
req={'exec_mode': 'dark'},
|
||||
):
|
||||
# TODO: UX for a "pending" clear/live order
|
||||
log.info(f'Dark order triggered for {fmtmsg}')
|
||||
log.info(f'Dark order triggered for {fmsg}')
|
||||
|
||||
case Status(
|
||||
resp='triggered',
|
||||
|
|
piker/ui/qt.py (104 lines)
|
@ -1,104 +0,0 @@
|
|||
# piker: trading gear for hackers
|
||||
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
|
||||
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Affero General Public License for more details.
|
||||
|
||||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
'''
|
||||
Qt UI framework version shimming.
|
||||
|
||||
Allow importing sub-pkgs from this module instead of worrying about
|
||||
major version specifics, any enum moves or component renames.
|
||||
|
||||
Code in `piker.ui.*` should always explicitly import directly from
|
||||
this module like `from piker.ui.qt import ( ..`
|
||||
|
||||
'''
|
||||
from enum import EnumType
|
||||
|
||||
from PyQt6 import (
|
||||
QtCore,
|
||||
QtGui,
|
||||
QtWidgets,
|
||||
)
|
||||
from PyQt6.QtCore import (
|
||||
Qt,
|
||||
QCoreApplication,
|
||||
QLineF,
|
||||
QRectF,
|
||||
# NOTE: for enums use the `.Type` subattr-space
|
||||
QEvent,
|
||||
QPointF,
|
||||
QSize,
|
||||
QModelIndex,
|
||||
QItemSelectionModel,
|
||||
pyqtBoundSignal,
|
||||
pyqtRemoveInputHook,
|
||||
)
|
||||
|
||||
align_flag: EnumType = Qt.AlignmentFlag
|
||||
txt_flag: EnumType = Qt.TextFlag
|
||||
keys: EnumType = QEvent.Type
|
||||
scrollbar_policy: EnumType = Qt.ScrollBarPolicy
|
||||
|
||||
# ^-NOTE-^: handy snippet to discover enums:
|
||||
# import enum
|
||||
# [attr for attr_name in dir(QFrame)
|
||||
# if (attr := getattr(QFrame, attr_name))
|
||||
# and isinstance(attr, enum.EnumType)]
|
||||
|
||||
from PyQt6.QtGui import (
|
||||
QPainter,
|
||||
QPainterPath,
|
||||
QIcon,
|
||||
QPixmap,
|
||||
QColor,
|
||||
QTransform,
|
||||
QStandardItem,
|
||||
QStandardItemModel,
|
||||
QWheelEvent,
|
||||
QScreen,
|
||||
QCloseEvent,
|
||||
)
|
||||
|
||||
from PyQt6.QtWidgets import (
|
||||
QMainWindow,
|
||||
QApplication,
|
||||
QLabel,
|
||||
QStatusBar,
|
||||
QLineEdit,
|
||||
QHBoxLayout,
|
||||
QVBoxLayout,
|
||||
QFormLayout,
|
||||
QProgressBar,
|
||||
QSizePolicy,
|
||||
QStyledItemDelegate,
|
||||
QStyleOptionViewItem,
|
||||
QComboBox,
|
||||
QWidget,
|
||||
QFrame,
|
||||
QSplitter,
|
||||
QTreeView,
|
||||
QStyle,
|
||||
QGraphicsItem,
|
||||
QGraphicsPathItem,
|
||||
# QGraphicsView,
|
||||
QStyleOptionGraphicsItem,
|
||||
QGraphicsScene,
|
||||
QGraphicsSceneMouseEvent,
|
||||
QGraphicsProxyWidget,
|
||||
)
|
||||
|
||||
gs_keys: EnumType = QGraphicsSceneMouseEvent.Type
|
||||
size_policy: EnumType = QtWidgets.QSizePolicy.Policy
|
||||
px_cache_mode: EnumType = QGraphicsItem.CacheMode
|
|
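To illustrate the shim's intent (a sketch, not part of the diff): downstream UI code imports Qt types and the enum aliases from `piker.ui.qt` instead of from `PyQt6` directly, mirroring the alignment / size-policy call-sites changed elsewhere in this compare:

# hedged example using only names defined in the shim above
import sys

from piker.ui.qt import (
    QtWidgets,
    align_flag,   # Qt.AlignmentFlag
    size_policy,  # QtWidgets.QSizePolicy.Policy
)

app = QtWidgets.QApplication(sys.argv)

label = QtWidgets.QLabel('search:')
label.setAlignment(
    align_flag.AlignVCenter
    | align_flag.AlignLeft
)
label.setSizePolicy(
    size_policy.Fixed,
    size_policy.Fixed,
)
label.show()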
@ -31,7 +31,7 @@ import pendulum
|
|||
import pyqtgraph as pg
|
||||
|
||||
from piker.types import Struct
|
||||
from ..tsp import slice_from_time
|
||||
from ..data._timeseries import slice_from_time
|
||||
from ..log import get_logger
|
||||
from ..toolz import Profiler
|
||||
|
||||
|
|
pyproject.toml (209 lines)
|
@ -15,119 +15,128 @@
|
|||
# You should have received a copy of the GNU Affero General Public License
|
||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
[build-system]
|
||||
requires = ["hatchling"]
|
||||
build-backend = "hatchling.build"
|
||||
requires = ["poetry-core"]
|
||||
build-backend = "poetry.core.masonry.api"
|
||||
|
||||
[project]
|
||||
# ------ - ------
|
||||
|
||||
[tool.poetry]
|
||||
name = "piker"
|
||||
version = "0.1.0a0dev0"
|
||||
version = "0.1.0.alpha0.dev0"
|
||||
description = "trading gear for hackers"
|
||||
authors = [{ name = "Tyler Goodlet", email = "goodboy_foss@protonmail.com" }]
|
||||
requires-python = ">=3.12, <3.13"
|
||||
license = "AGPL-3.0-or-later"
|
||||
authors = ["Tyler Goodlet <jgbt@protonmail.com>"]
|
||||
license = "AGPLv3"
|
||||
readme = "README.rst"
|
||||
keywords = [
|
||||
"async",
|
||||
"trading",
|
||||
"finance",
|
||||
"quant",
|
||||
"charting",
|
||||
]
|
||||
classifiers = [
|
||||
"Development Status :: 3 - Alpha",
|
||||
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
|
||||
"Operating System :: POSIX :: Linux",
|
||||
"Programming Language :: Python :: Implementation :: CPython",
|
||||
"Programming Language :: Python :: 3 :: Only",
|
||||
"Programming Language :: Python :: 3.11",
|
||||
"Programming Language :: Python :: 3.12",
|
||||
"Intended Audience :: Financial and Insurance Industry",
|
||||
"Intended Audience :: Science/Research",
|
||||
"Intended Audience :: Developers",
|
||||
"Intended Audience :: Education",
|
||||
]
|
||||
dependencies = [
|
||||
"async-generator >=1.10, <2.0.0",
|
||||
"attrs >=23.1.0, <24.0.0",
|
||||
"bidict >=0.22.1, <0.23.0",
|
||||
"colorama >=0.4.6, <0.5.0",
|
||||
"colorlog >=6.7.0, <7.0.0",
|
||||
"ib-insync >=0.9.86, <0.10.0",
|
||||
"numba >=0.59.0, <0.60.0",
|
||||
"numpy >=1.25, <2.0",
|
||||
"polars >=0.18.13, <0.19.0",
|
||||
"pygments >=2.16.1, <3.0.0",
|
||||
"rich >=13.5.2, <14.0.0",
|
||||
"tomli >=2.0.1, <3.0.0",
|
||||
"tomli-w >=1.0.0, <2.0.0",
|
||||
"trio-util >=0.7.0, <0.8.0",
|
||||
"trio-websocket >=0.10.3, <0.11.0",
|
||||
"typer >=0.9.0, <1.0.0",
|
||||
"rapidfuzz >=3.5.2, <4.0.0",
|
||||
"pdbp >=1.5.0, <2.0.0",
|
||||
"trio >=0.24, <0.25",
|
||||
"pendulum >=3.0.0, <4.0.0",
|
||||
"httpx >=0.27.0, <0.28.0",
|
||||
"cryptofeed >=2.4.0, <3.0.0",
|
||||
"pyarrow >=17.0.0, <18.0.0",
|
||||
"websockets ==12.0",
|
||||
"msgspec",
|
||||
"tractor",
|
||||
"asyncvnc",
|
||||
"tomlkit",
|
||||
]
|
||||
|
||||
[project.optional-dependencies]
|
||||
uis = [
|
||||
# https://docs.astral.sh/uv/concepts/projects/dependencies/#optional-dependencies
|
||||
# TODO: add meta-data from setup.py
|
||||
# keywords=[
|
||||
# "async",
|
||||
# "trading",
|
||||
# "finance",
|
||||
# "quant",
|
||||
# "charting",
|
||||
# ],
|
||||
# classifiers=[
|
||||
# 'Development Status :: 3 - Alpha',
|
||||
# 'License :: OSI Approved :: ',
|
||||
# 'Operating System :: POSIX :: Linux',
|
||||
# "Programming Language :: Python :: Implementation :: CPython",
|
||||
# "Programming Language :: Python :: 3 :: Only",
|
||||
# "Programming Language :: Python :: 3.10",
|
||||
# "Programming Language :: Python :: 3.11",
|
||||
# 'Intended Audience :: Financial and Insurance Industry',
|
||||
# 'Intended Audience :: Science/Research',
|
||||
# 'Intended Audience :: Developers',
|
||||
# 'Intended Audience :: Education',
|
||||
# ],
|
||||
|
||||
# ------ - ------
|
||||
|
||||
[tool.poetry.dependencies]
|
||||
asks = "^3.0.0"
|
||||
async-generator = "^1.10"
|
||||
attrs = "^23.1.0"
|
||||
bidict = "^0.22.1"
|
||||
colorama = "^0.4.6"
|
||||
colorlog = "^6.7.0"
|
||||
cython = "^3.0.0"
|
||||
greenback = "^1.1.1"
|
||||
ib-insync = "^0.9.86"
|
||||
msgspec = "^0.18.0"
|
||||
numba = "^0.57.1"
|
||||
numpy = "1.24"
|
||||
pendulum = "^2.1.2"
|
||||
polars = "^0.18.13"
|
||||
pygments = "^2.16.1"
|
||||
python = "^3.10"
|
||||
rich = "^13.5.2"
|
||||
# setuptools = "^68.0.0"
|
||||
tomli = "^2.0.1"
|
||||
tomli-w = "^1.0.0"
|
||||
trio = "^0.22.2"
|
||||
trio-util = "^0.7.0"
|
||||
trio-websocket = "^0.10.3"
|
||||
typer = "^0.9.0"
|
||||
|
||||
|
||||
[tool.poetry.dependencies.asyncvnc]
|
||||
git = 'https://github.com/pikers/asyncvnc.git'
|
||||
branch = 'main'
|
||||
|
||||
[tool.poetry.dependencies.tomlkit]
|
||||
git = 'https://github.com/pikers/tomlkit.git'
|
||||
branch = 'piker_pin'
|
||||
develop = true
|
||||
# path = "../tomlkit/"
|
||||
|
||||
[tool.poetry.dependencies.tractor]
|
||||
git = 'https://github.com/goodboy/tractor.git'
|
||||
branch = 'asyncio_debugger_support'
|
||||
# branch = 'piker_pin'
|
||||
develop = true
|
||||
# path = '../tractor/'
|
||||
|
||||
# ------ - ------
|
||||
|
||||
[tool.poetry.group.uis]
|
||||
optional = true
|
||||
[tool.poetry.group.uis.dependencies]
|
||||
# https://python-poetry.org/docs/managing-dependencies/#dependency-groups
|
||||
# TODO: make sure the levenshtein shit compiles on nix..
|
||||
# rapidfuzz = {extras = ["speedup"], version = "^0.18.0"}
|
||||
"rapidfuzz >=3.2.0, <4.0.0",
|
||||
"qdarkstyle >=3.0.2, <4.0.0",
|
||||
"pyqt6 >=6.7.0, <7.0.0",
|
||||
"pyqtgraph",
|
||||
rapidfuzz = "^3.2.0"
|
||||
qdarkstyle = ">=3.0.2"
|
||||
pyqt5 = "^5.15.9"
|
||||
pyqtgraph = { git = 'https://github.com/pikers/pyqtgraph.git' }
|
||||
pyqt6 = "^6.5.2"
|
||||
|
||||
# for consideration,
|
||||
# - 'visidata'
|
||||
# ------ - ------
|
||||
|
||||
# TODO: add an `--only daemon` group for running non-ui / pikerd
|
||||
# service tree in distributed mode B)
|
||||
# https://docs.astral.sh/uv/concepts/projects/dependencies/#optional-dependencies
|
||||
]
|
||||
[tool.poetry.group.dev]
|
||||
optional = true
|
||||
[tool.poetry.group.dev.dependencies]
|
||||
# testing / CI
|
||||
pytest = "^6.0.0"
|
||||
elasticsearch = "^8.9.0"
|
||||
|
||||
[dependency-groups]
|
||||
# TODO: a toolset that makes debugging a `pikerd` service (tree) easy
|
||||
# to hack on directly using more or less the local env:
|
||||
# console enhancements and eventually remote debugging
|
||||
# extras/helpers.
|
||||
# TODO: add a toolset that makes debugging a `pikerd` service
|
||||
# (tree) easy to hack on directly using more or less the local env:
|
||||
# - xonsh + xxh
|
||||
# - rsyscall + pdbp
|
||||
# - actor runtime control console like BEAM/OTP
|
||||
#
|
||||
# console enhancements and eventually remote debugging extras/helpers.
|
||||
# use `uv --dev` to enable
|
||||
dev = [
|
||||
"pytest >=6.0.0, <7.0.0",
|
||||
"elasticsearch >=8.9.0, <9.0.0",
|
||||
"xonsh >=0.14.2, <0.15.0",
|
||||
"prompt-toolkit ==3.0.40",
|
||||
"cython >=3.0.0, <4.0.0",
|
||||
"greenback >=1.1.1, <2.0.0",
|
||||
"ruff>=0.9.6",
|
||||
]
|
||||
xonsh = "^0.14.0" # XXX: explicit env install for shell use w nix
|
||||
prompt-toolkit = "^3.0.39"
|
||||
|
||||
[project.scripts]
|
||||
piker = "piker.cli:cli"
|
||||
pikerd = "piker.cli:pikerd"
|
||||
ledger = "piker.accounting.cli:ledger"
|
||||
# ------ - ------
|
||||
|
||||
[tool.hatch.build.targets.sdist]
|
||||
include = ["piker"]
|
||||
# TODO: add an `--only daemon` group for running non-ui / pikerd
|
||||
# service tree in distributed mode B)
|
||||
# https://python-poetry.org/docs/managing-dependencies/#installing-group-dependencies
|
||||
# [tool.poetry.group.daemon.dependencies]
|
||||
|
||||
[tool.hatch.build.targets.wheel]
|
||||
include = ["piker"]
|
||||
|
||||
[tool.uv.sources]
|
||||
pyqtgraph = { git = "https://github.com/pikers/pyqtgraph.git" }
|
||||
asyncvnc = { git = "https://github.com/pikers/asyncvnc.git", branch = "main" }
|
||||
tomlkit = { git = "https://github.com/pikers/tomlkit.git", branch ="piker_pin" }
|
||||
msgspec = { git = "https://github.com/jcrist/msgspec.git" }
|
||||
tractor = { path = "../tractor", editable = true }
|
||||
[tool.poetry.scripts]
|
||||
piker = 'piker.cli:cli'
|
||||
pikerd = 'piker.cli:pikerd'
|
||||
ledger = 'piker.accounting.cli:ledger'
|
||||
|
|
ruff.toml (93 lines)
|
@ -1,93 +0,0 @@
|
|||
# from default `ruff.toml` @
|
||||
# https://docs.astral.sh/ruff/configuration/
|
||||
|
||||
# Exclude a variety of commonly ignored directories.
|
||||
exclude = [
|
||||
".bzr",
|
||||
".direnv",
|
||||
".eggs",
|
||||
".git",
|
||||
".git-rewrite",
|
||||
".hg",
|
||||
".ipynb_checkpoints",
|
||||
".mypy_cache",
|
||||
".nox",
|
||||
".pants.d",
|
||||
".pyenv",
|
||||
".pytest_cache",
|
||||
".pytype",
|
||||
".ruff_cache",
|
||||
".svn",
|
||||
".tox",
|
||||
".venv",
|
||||
".vscode",
|
||||
"__pypackages__",
|
||||
"_build",
|
||||
"buck-out",
|
||||
"build",
|
||||
"dist",
|
||||
"node_modules",
|
||||
"site-packages",
|
||||
"venv",
|
||||
]
|
||||
|
||||
# Same as Black.
|
||||
line-length = 88
|
||||
indent-width = 4
|
||||
|
||||
# Assume Python 3.12
|
||||
target-version = "py312"
|
||||
|
||||
# ------ - ------
|
||||
# TODO, stop warnings around `anext()` builtin use?
|
||||
# tool.ruff.target-version = "py310"
|
||||
|
||||
|
||||
[lint]
|
||||
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
|
||||
# Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or
|
||||
# McCabe complexity (`C901`) by default.
|
||||
select = ["E4", "E7", "E9", "F"]
|
||||
ignore = []
|
||||
ignore-init-module-imports = false
|
||||
|
||||
[lint.per-file-ignores]
|
||||
"piker/ui/qt.py" = [
|
||||
"E402",
|
||||
'F401', # unused imports (without __all__ or blah as blah)
|
||||
# "F841", # unused variable rules
|
||||
]
|
||||
|
||||
# Allow fix for all enabled rules (when `--fix`) is provided.
|
||||
fixable = ["ALL"]
|
||||
unfixable = []
|
||||
|
||||
# Allow unused variables when underscore-prefixed.
|
||||
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
|
||||
|
||||
[format]
|
||||
# Use single quotes in `ruff format`.
|
||||
quote-style = "single"
|
||||
|
||||
# Like Black, indent with spaces, rather than tabs.
|
||||
indent-style = "space"
|
||||
|
||||
# Like Black, respect magic trailing commas.
|
||||
skip-magic-trailing-comma = false
|
||||
|
||||
# Like Black, automatically detect the appropriate line ending.
|
||||
line-ending = "auto"
|
||||
|
||||
# Enable auto-formatting of code examples in docstrings. Markdown,
|
||||
# reStructuredText code/literal blocks and doctests are all supported.
|
||||
#
|
||||
# This is currently disabled by default, but it is planned for this
|
||||
# to be opt-out in the future.
|
||||
docstring-code-format = false
|
||||
|
||||
# Set the line length limit used when formatting code snippets in
|
||||
# docstrings.
|
||||
#
|
||||
# This only has an effect when the `docstring-code-format` setting is
|
||||
# enabled.
|
||||
docstring-code-line-length = "dynamic"
|