Compare commits


No commits in common. "gitea_feats" and "ib_py311_fixes" have entirely different histories.

89 changed files with 3504 additions and 8908 deletions

View File

@ -1,161 +1,162 @@
 piker
 -----
-trading gear for hackers
+trading gear for hackers.

 |gh_actions|

 .. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpikers%2Fpiker%2Fbadge&style=popout-square
     :target: https://actions-badge.atrox.dev/piker/pikers/goto

-``piker`` is a broker agnostic, next-gen FOSS toolset and runtime for
-real-time computational trading targeted at `hardcore Linux users
-<comp_trader>`_ .
+``piker`` is a broker agnostic, next-gen FOSS toolset for real-time
+computational trading targeted at `hardcore Linux users <comp_trader>`_ .

-we use much bleeding edge tech including (but not limited to):
+we use as much bleeding edge tech as possible including (but not limited to):

 - latest python for glue_
-- uv_ for packaging and distribution
-- trio_ & tractor_ for our distributed `structured concurrency`_ runtime
-- Qt_ for pristine low latency UIs
-- pyqtgraph_ (which we've extended) for real-time charting and graphics
-- ``polars`` ``numpy`` and ``numba`` for redic `fast numerics`_
-- `apache arrow and parquet`_ for time-series storage
-
-potential projects we might integrate with soon,
-
-- (already prototyped in ) techtonicdb_ for L2 book storage
-
-.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
-.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
-.. _uv: https://docs.astral.sh/uv/
+- trio_ & tractor_ for our distributed, multi-core, real-time streaming
+  `structured concurrency`_ runtime B)
+- Qt_ for pristine high performance UIs
+- pyqtgraph_ for real-time charting
+- ``polars`` ``numpy`` and ``numba`` for `fast numerics`_
+- `apache arrow and parquet`_ for time series history management
+  persistence and sharing
+- (prototyped) techtonicdb_ for L2 book storage
+
+.. |travis| image:: https://img.shields.io/travis/pikers/piker/master.svg
+    :target: https://travis-ci.org/pikers/piker
 .. _trio: https://github.com/python-trio/trio
 .. _tractor: https://github.com/goodboy/tractor
 .. _structured concurrency: https://trio.discourse.group/
+.. _marketstore: https://github.com/alpacahq/marketstore
+.. _techtonicdb: https://github.com/0b01/tectonicdb
 .. _Qt: https://www.qt.io/
 .. _pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
+.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
 .. _apache arrow and parquet: https://arrow.apache.org/faq/
 .. _fast numerics: https://zerowithdot.com/python-numpy-and-pandas-performance/
-.. _techtonicdb: https://github.com/0b01/tectonicdb
+.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/

-focus and feats:
-****************
-fitting with these tenets, we're always open to new
-framework/lib/service interop suggestions and ideas!
-
-- **100% federated**:
-  your code, your hardware, your data feeds, your broker fills.
-
-- **zero web**:
-  low latency as a prime objective, native UIs and modern IPC
-  protocols without trying to re-invent the "OS-as-an-app"..
-
-- **maximal privacy**:
-  prevent brokers and mms from knowing your planz; smack their
-  spreads with dark volume from a VPN tunnel.
-
-- **zero clutter**:
-  modal, context oriented UIs that echew minimalism, reduce thought
-  noise and encourage un-emotion.
-
-- **first class parallelism**:
-  built from the ground up on a next-gen structured concurrency
-  supervision sys.
-
-- **traders first**:
-  broker/exchange/venue/asset-class/money-sys agnostic
-
-- **systems grounded**:
-  real-time financial signal processing (fsp) that will make any
-  queuing or DSP eng juice their shorts.
-
-- **non-tina UX**:
-  sleek, powerful keyboard driven interaction with expected use in
-  tiling wms (or maybe even a DDE).
-
-- **data collab at scale**:
-  every actor-process and protocol is multi-host aware.
-
-- **fight club ready**:
-  zero interest in adoption by suits; no corporate friendly license,
-  ever.
-
-building the hottest looking, fastest, most reliable, keyboard
-friendly FOSS trading platform is the dream; join the cause.
+focus and features:
+*******************
+- 100% federated: your code, your hardware, your data feeds, your broker fills.
+- zero web: low latency, native software that doesn't try to re-invent the OS
+- maximal **privacy**: prevent brokers and mms from knowing your
+  planz; smack their spreads with dark volume.
+- zero clutter: modal, context oriented UIs that echew minimalism, reduce
+  thought noise and encourage un-emotion.
+- first class parallelism: built from the ground up on next-gen structured concurrency
+  primitives.
+- traders first: broker/exchange/asset-class agnostic
+- systems grounded: real-time financial signal processing that will
+  make any queuing or DSP eng juice their shorts.
+- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
+- data collaboration: every process and protocol is multi-host scalable.
+- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.
+
+fitting with these tenets, we're always open to new framework suggestions and ideas.
+
+building the best looking, most reliable, keyboard friendly trading
+platform is the dream; join the cause.

-a sane install with `uv`
-************************
-bc why install with `python` when you can faster with `rust` ::
-
-    uv lock
+sane install with `poetry`
+**************************
+TODO!
+
+rigorous install on ``nixos`` using ``poetry2nix``
+**************************************************
+TODO!

 hacky install on nixos
 **********************
-``NixOS`` is our core devs' distro of choice for which we offer
+`NixOS` is our core devs' distro of choice for which we offer
 a stringently defined development shell envoirment that can be loaded with::

-    nix-shell default.nix
+    nix-shell develop.nix
+
+this will setup the required python environment to run piker, make sure to
+run::
+
+    pip install -r requirements.txt -e .
+
+once after loading the shell

-start a chart
-*************
-run a realtime OHLCV chart stand-alone::
-
-    piker -l info chart btcusdt.spot.binance xmrusdt.spot.kraken
-
-this runs a chart UI (with 1m sampled OHLCV) and shows 2 spot markets from 2 diff cexes
-overlayed on the same graph. Use of `piker` without first starting
-a daemon (`pikerd` - see below) means there is an implicit spawning of the
-multi-actor-runtime (implemented as a `tractor` app).
-
-For additional subsystem feats available through our chart UI see the
-various sub-readmes:
-
-- order control using a mouse-n-keyboard UX B)
-- cross venue market-pair (what most call "symbol") search, select, overlay Bo
-- financial-signal-processing (`piker.fsp`) write-n-reload to sub-chart BO
-- src-asset derivatives scan for anal, like the infamous "max pain" XO
-
-spawn a daemon standalone
-*************************
-we call the root actor-process the ``pikerd``. it can be (and is
-recommended normally to be) started separately from the ``piker
-chart`` program::
+install wild-west style via `pip`
+*********************************
+``piker`` is currently under heavy pre-alpha development and as such
+should be cloned from this repo and hacked on directly.
+
+for a development install::
+
+    git clone git@github.com:pikers/piker.git
+    cd piker
+    virtualenv env
+    source ./env/bin/activate
+    pip install -r requirements.txt -e .
+
+check out our charts
+********************
+bet you weren't expecting this from the foss::
+
+    piker -l info -b kraken -b binance chart btcusdt.binance --pdb
+
+this runs the main chart (currently with 1m sampled OHLC) in in debug
+mode and you can practice paper trading using the following
+micro-manual:
+
+``order_mode`` (
+    edge triggered activation by any of the following keys,
+    ``mouse-click`` on y-level to submit at that price
+    ):
+
+- ``f``/ ``ctl-f`` to stage buy
+- ``d``/ ``ctl-d`` to stage sell
+- ``a`` to stage alert
+
+``search_mode`` (
+    ``ctl-l`` or ``ctl-space`` to open,
+    ``ctl-c`` or ``ctl-space`` to close
+    ) :
+
+- begin typing to have symbol search automatically lookup
+  symbols from all loaded backend (broker) providers
+- arrow keys and mouse click to navigate selection
+- vi-like ``ctl-[hjkl]`` for navigation
+
+you can also configure your position allocation limits from the
+sidepane.
+
+run in distributed mode
+***********************
+start the service manager and data feed daemon in the background and
+connect to it::

     pikerd -l info --pdb

-the daemon does nothing until a ``piker``-client (like ``piker
-chart``) connects and requests some particular sub-system. for
-a connecting chart ``pikerd`` will spawn and manage at least,
-
-- a data-feed daemon: ``datad`` which does all the work of comms with
-  the backend provider (in this case the ``binance`` cex).
-- a paper-trading engine instance, ``paperboi.binance``, (if no live
-  account has been configured) which allows for auto/manual order
-  control against the live quote stream.
-
-*using* an actor-service (aka micro-daemon) manager which dynamically
-supervises various sub-subsystems-as-services throughout the ``piker``
-runtime-stack.
-
-now you can (implicitly) connect your chart::
-
-    piker chart btcusdt.spot.binance
-
-since ``pikerd`` was started separately you can now enjoy a persistent
-real-time data stream tied to the daemon-tree's lifetime. i.e. the next
-time you spawn a chart it will obviously not only load much faster
-(since the underlying ``datad.binance`` is left running with its
-in-memory IPC data structures) but also the data-feed and any order
-mgmt states should be persistent until you finally cancel ``pikerd``.
+connect your chart::
+
+    piker -l info -b kraken -b binance chart xmrusdt.binance --pdb
+
+enjoy persistent real-time data feeds tied to daemon lifetime. the next
+time you spawn a chart it will load much faster since the data feed has
+been cached and is now always running live in the background until you
+kill ``pikerd``.

 if anyone asks you what this project is about
 *********************************************
-you don't talk about it; just use it.
+you don't talk about it.

 how do i get involved?
@ -165,15 +166,6 @@ enter the matrix.

 how come there ain't that many docs
 ***********************************
-i mean we want/need them but building the core right has been higher
-prio then marketting (and likely will stay that way Bp).
-
-soo, suck it up bc,
-
-- no one is trying to sell you on anything
-- learning the code base is prolly way more valuable
-- the UI/UXs are intended to be "intuitive" for any hacker..
-
-we obviously need tonz help so if you want to start somewhere and
-can't necessarily write "advanced" concurrent python/rust code, this
-helping document literally anything might be the place for you!
+suck it up, learn the code; no one is trying to sell you on anything.
+also, we need lotsa help so if you want to start somewhere and can't
+necessarily write serious code, this might be the place for you!

View File

@ -1,134 +0,0 @@
-with (import <nixpkgs> {});
-let
-  glibStorePath = lib.getLib glib;
-  zlibStorePath = lib.getLib zlib;
-  zstdStorePath = lib.getLib zstd;
-  dbusStorePath = lib.getLib dbus;
-  libGLStorePath = lib.getLib libGL;
-  freetypeStorePath = lib.getLib freetype;
-  qt6baseStorePath = lib.getLib qt6.qtbase;
-  fontconfigStorePath = lib.getLib fontconfig;
-  libxkbcommonStorePath = lib.getLib libxkbcommon;
-  xcbutilcursorStorePath = lib.getLib xcb-util-cursor;
-
-  qtpyStorePath = lib.getLib python312Packages.qtpy;
-  pyqt6StorePath = lib.getLib python312Packages.pyqt6;
-  pyqt6SipStorePath = lib.getLib python312Packages.pyqt6-sip;
-  rapidfuzzStorePath = lib.getLib python312Packages.rapidfuzz;
-  qdarkstyleStorePath = lib.getLib python312Packages.qdarkstyle;
-
-  xorgLibX11StorePath = lib.getLib xorg.libX11;
-  xorgLibxcbStorePath = lib.getLib xorg.libxcb;
-  xorgxcbutilwmStorePath = lib.getLib xorg.xcbutilwm;
-  xorgxcbutilimageStorePath = lib.getLib xorg.xcbutilimage;
-  xorgxcbutilerrorsStorePath = lib.getLib xorg.xcbutilerrors;
-  xorgxcbutilkeysymsStorePath = lib.getLib xorg.xcbutilkeysyms;
-  xorgxcbutilrenderutilStorePath = lib.getLib xorg.xcbutilrenderutil;
-in
-stdenv.mkDerivation {
-  name = "piker-qt6-uv";
-  buildInputs = [
-    # System requirements.
-    glib
-    zlib
-    dbus
-    zstd
-    libGL
-    freetype
-    qt6.qtbase
-    libgcc.lib
-    fontconfig
-    libxkbcommon
-
-    # Xorg requirements
-    xcb-util-cursor
-    xorg.libxcb
-    xorg.libX11
-    xorg.xcbutilwm
-    xorg.xcbutilimage
-    xorg.xcbutilerrors
-    xorg.xcbutilkeysyms
-    xorg.xcbutilrenderutil
-
-    # Python requirements.
-    python312Full
-    python312Packages.uv
-    python312Packages.qdarkstyle
-    python312Packages.rapidfuzz
-    python312Packages.pyqt6
-    python312Packages.qtpy
-  ];
-  src = null;
-  shellHook = ''
-    set -e
-
-    # Set the Qt plugin path
-    # export QT_DEBUG_PLUGINS=1
-    QTBASE_PATH="${qt6baseStorePath}/lib"
-    QT_PLUGIN_PATH="$QTBASE_PATH/qt-6/plugins"
-    QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"
-
-    LIB_GCC_PATH="${libgcc.lib}/lib"
-    GLIB_PATH="${glibStorePath}/lib"
-    ZSTD_PATH="${zstdStorePath}/lib"
-    ZLIB_PATH="${zlibStorePath}/lib"
-    DBUS_PATH="${dbusStorePath}/lib"
-    LIBGL_PATH="${libGLStorePath}/lib"
-    FREETYPE_PATH="${freetypeStorePath}/lib"
-    FONTCONFIG_PATH="${fontconfigStorePath}/lib"
-    LIB_XKB_COMMON_PATH="${libxkbcommonStorePath}/lib"
-
-    XCB_UTIL_CURSOR_PATH="${xcbutilcursorStorePath}/lib"
-    XORG_LIB_X11_PATH="${xorgLibX11StorePath}/lib"
-    XORG_LIB_XCB_PATH="${xorgLibxcbStorePath}/lib"
-    XORG_XCB_UTIL_IMAGE_PATH="${xorgxcbutilimageStorePath}/lib"
-    XORG_XCB_UTIL_WM_PATH="${xorgxcbutilwmStorePath}/lib"
-    XORG_XCB_UTIL_RENDER_UTIL_PATH="${xorgxcbutilrenderutilStorePath}/lib"
-    XORG_XCB_UTIL_KEYSYMS_PATH="${xorgxcbutilkeysymsStorePath}/lib"
-    XORG_XCB_UTIL_ERRORS_PATH="${xorgxcbutilerrorsStorePath}/lib"
-
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_GCC_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$DBUS_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$GLIB_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZLIB_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZSTD_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBGL_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FONTCONFIG_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FREETYPE_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_XKB_COMMON_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XCB_UTIL_CURSOR_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_X11_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_XCB_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_IMAGE_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_WM_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_RENDER_UTIL_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_KEYSYMS_PATH"
-    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_ERRORS_PATH"
-
-    export LD_LIBRARY_PATH
-
-    RPDFUZZ_PATH="${rapidfuzzStorePath}/lib/python3.12/site-packages"
-    QDRKSTYLE_PATH="${qdarkstyleStorePath}/lib/python3.12/site-packages"
-    QTPY_PATH="${qtpyStorePath}/lib/python3.12/site-packages"
-    PYQT6_PATH="${pyqt6StorePath}/lib/python3.12/site-packages"
-    PYQT6_SIP_PATH="${pyqt6SipStorePath}/lib/python3.12/site-packages"
-
-    PATCH="$PATCH:$RPDFUZZ_PATH"
-    PATCH="$PATCH:$QDRKSTYLE_PATH"
-    PATCH="$PATCH:$QTPY_PATH"
-    PATCH="$PATCH:$PYQT6_PATH"
-    PATCH="$PATCH:$PYQT6_SIP_PATH"
-
-    export PATCH
-
-    # Install deps
-    uv lock
-  '';
-}

View File

@ -1,34 +1,28 @@
 with (import <nixpkgs> {});
+with python310Packages;
 stdenv.mkDerivation {
-  name = "poetry-env";
+  name = "pip-env";
   buildInputs = [
     # System requirements.
     readline

     # TODO: hacky non-poetry install stuff we need to get rid of!!
-    poetry
-    # virtualenv
-    # setuptools
-    # pip
-
-    # Python requirements (enough to get a virtualenv going).
-    python311Full
+    virtualenv
+    setuptools
+    pip

     # obviously, and see below for hacked linking
-    python311Packages.pyqt5
-    python311Packages.pyqt5_sip
-    # python311Packages.qtpy
+    pyqt5
+
+    # Python requirements (enough to get a virtualenv going).
+    python310Full

     # numerics deps
-    python311Packages.levenshtein
-    python311Packages.fastparquet
-    python311Packages.polars
+    python310Packages.python-Levenshtein
+    python310Packages.fastparquet
+    python310Packages.polars
   ];
-  # environment.sessionVariables = {
-  #   LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib";
-  # };
   src = null;
   shellHook = ''
     # Allow the use of wheels.
@ -36,12 +30,13 @@ stdenv.mkDerivation {
     # Augment the dynamic linker path
     export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${R}/lib/R/lib:${readline}/lib
     export QT_QPA_PLATFORM_PLUGIN_PATH="${qt5.qtbase.bin}/lib/qt-${qt5.qtbase.version}/plugins";

-    if [ ! -d ".venv" ]; then
-      poetry install --with uis
+    if [ ! -d "venv" ]; then
+      virtualenv venv
     fi
-    poetry shell
+    source venv/bin/activate
   '';
 }

View File

@ -19,9 +19,8 @@ services:
     # other image tags available:
     # https://github.com/waytrade/ib-gateway-docker#supported-tags
-    # image: waytrade/ib-gateway:1012.2i
-    image: ghcr.io/gnzsnz/ib-gateway:latest
+    # image: waytrade/ib-gateway:981.3j
+    image: waytrade/ib-gateway:1012.2i
     restart: 'no'  # restart on boot whenev there's a crash or user clicsk
     network_mode: 'host'

View File

@ -117,57 +117,9 @@ SecondFactorDevice=
 # If you use the IBKR Mobile app for second factor authentication,
 # and you fail to complete the process before the time limit imposed
-# by IBKR, this setting tells IBC whether to automatically restart
-# the login sequence, giving you another opportunity to complete
-# second factor authentication.
-#
-# Permitted values are 'yes' and 'no'.
-#
-# If this setting is not present or has no value, then the value
-# of the deprecated ExitAfterSecondFactorAuthenticationTimeout is
-# used instead. If this also has no value, then this setting defaults
-# to 'no'.
-#
-# NB: you must be using IBC v3.14.0 or later to use this setting:
-# earlier versions ignore it.
-
-ReloginAfterSecondFactorAuthenticationTimeout=
-
-# This setting is only relevant if
-# ReloginAfterSecondFactorAuthenticationTimeout is set to 'yes',
-# or if ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
-#
-# It controls how long (in seconds) IBC waits for login to complete
-# after the user acknowledges the second factor authentication
-# alert at the IBKR Mobile app. If login has not completed after
-# this time, IBC terminates.
-# The default value is 60.
-
-SecondFactorAuthenticationExitInterval=
-
-# This setting specifies the timeout for second factor authentication
-# imposed by IB. The value is in seconds. You should not change this
-# setting unless you have reason to believe that IB has changed the
-# timeout. The default value is 180.
-
-SecondFactorAuthenticationTimeout=180
-
-# DEPRECATED SETTING
-# ------------------
-#
-# ExitAfterSecondFactorAuthenticationTimeout - THIS SETTING WILL BE
-# REMOVED IN A FUTURE RELEASE. For IBC version 3.14.0 and later, see
-# the notes for ReloginAfterSecondFactorAuthenticationTimeout above.
-#
-# For IBC versions earlier than 3.14.0: If you use the IBKR Mobile
-# app for second factor authentication, and you fail to complete the
-# process before the time limit imposed by IBKR, you can use this
-# setting to tell IBC to exit: arrangements can then be made to
-# automatically restart IBC in order to initiate the login sequence
-# afresh. Otherwise, manual intervention at TWS's
+# by IBKR, you can use this setting to tell IBC to exit: arrangements
+# can then be made to automatically restart IBC in order to initiate
+# the login sequence afresh. Otherwise, manual intervention at TWS's
 # Second Factor Authentication dialog is needed to complete the
 # login.
 #
@ -180,18 +132,29 @@ SecondFactorAuthenticationTimeout=180
 ExitAfterSecondFactorAuthenticationTimeout=no

+# This setting is only relevant if
+# ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
+#
+# It controls how long (in seconds) IBC waits for login to complete
+# after the user acknowledges the second factor authentication
+# alert at the IBKR Mobile app. If login has not completed after
+# this time, IBC terminates.
+# The default value is 40.
+
+SecondFactorAuthenticationExitInterval=

 # Trading Mode
 # ------------
 #
-# This indicates whether the live account or the paper trading
-# account corresponding to the supplied credentials is to be used.
-# The allowed values are 'live' (the default) and 'paper'.
-#
-# If this is set to 'live', then the credentials for the live
-# account must be supplied. If it is set to 'paper', then either
-# the live or the paper-trading credentials may be supplied.
-
-TradingMode=paper
+# TWS 955 introduced a new Trading Mode combo box on its login
+# dialog. This indicates whether the live account or the paper
+# trading account corresponding to the supplied credentials is
+# to be used. The allowed values are 'live' (the default) and
+# 'paper'. For earlier versions of TWS this setting has no
+# effect.
+
+TradingMode=

 # Paper-trading Account Warning
@ -225,7 +188,7 @@ AcceptNonBrokerageAccountWarning=yes
 #
 # The default value is 60.

-LoginDialogDisplayTimeout=60
+LoginDialogDisplayTimeout=20
@ -254,15 +217,7 @@ LoginDialogDisplayTimeout=60
 # but they are acceptable.
 #
 # The default is the current working directory when IBC is
-# started, unless the TWS_SETTINGS_PATH setting in the relevant
-# start script is set.
-#
-# If both this setting and TWS_SETTINGS_PATH are set, then this
-# setting takes priority. Note that if they have different values,
-# auto-restart will not work.
-#
-# NB: this setting is now DEPRECATED. You should use the
-# TWS_SETTINGS_PATH setting in the relevant start script.
+# started.

 IbDir=/root/Jts
@ -331,30 +286,13 @@ ExistingSessionDetectedAction=primary
 #
 # If OverrideTwsApiPort is set to an integer, IBC changes the
 # 'Socket port' in TWS's API configuration to that number shortly
-# after startup (but note that for the FIX Gateway, this setting is
-# actually stored in jts.ini rather than the Gateway's settings
-# file). Leaving the setting blank will make no change to
+# after startup. Leaving the setting blank will make no change to
 # the current setting. This setting is only intended for use in
 # certain specialized situations where the port number needs to
-# be set dynamically at run-time, and for the FIX Gateway: most
-# non-FIX users will never need it, so don't use it unless you know
-# you need it.
-
-OverrideTwsApiPort=4000
-
-# Override TWS Master Client ID
-# -----------------------------
-#
-# If OverrideTwsMasterClientID is set to an integer, IBC changes the
-# 'Master Client ID' value in TWS's API configuration to that
-# value shortly after startup. Leaving the setting blank will make
-# no change to the current setting. This setting is only intended
-# for use in certain specialized situations where the value needs to
 # be set dynamically at run-time: most users will never need it,
 # so don't use it unless you know you need it.

-OverrideTwsMasterClientID=
+; OverrideTwsApiPort=4002

 # Read-only Login
@ -364,13 +302,11 @@ OverrideTwsMasterClientID=
 # account security programme, the user will not be asked to perform
 # the second factor authentication action, and login to TWS will
 # occur automatically in read-only mode: in this mode, placing or
-# managing orders is not allowed.
-#
-# If set to 'no', and the user is enrolled in IB's account security
-# programme, the second factor authentication process is handled
-# according to the Second Factor Authentication Settings described
-# elsewhere in this file.
-#
+# managing orders is not allowed. If set to 'no', and the user is
+# enrolled in IB's account security programme, the user must perform
+# the relevant second factor authentication action to complete the
+# login.
+
 # If the user is not enrolled in IB's account security programme,
 # this setting is ignored. The default is 'no'.
@ -390,44 +326,7 @@ ReadOnlyLogin=no
 # set the relevant checkbox (this only needs to be done once) and
 # not provide a value for this setting.

-ReadOnlyApi=
+ReadOnlyApi=no
-# API Precautions
-# ---------------
-#
-# These settings relate to the corresponding 'Precautions' checkboxes in the
-# API section of the Global Configuration dialog.
-#
-# For all of these, the accepted values are:
-# - 'yes' sets the checkbox
-# - 'no' clears the checkbox
-# - if not set, the existing TWS/Gateway configuration is unchanged
-#
-# NB: thess settings are really only supplied for the benefit of new TWS
-# or Gateway instances that are being automatically installed and
-# started without user intervention, or where user settings are not preserved
-# between sessions (eg some Docker containers). Where a user is involved, they
-# should use the Global Configuration to set the relevant checkboxes and not
-# provide values for these settings.
-
-BypassOrderPrecautions=
-BypassBondWarning=
-BypassNegativeYieldToWorstConfirmation=
-BypassCalledBondWarning=
-BypassSameActionPairTradeWarning=
-BypassPriceBasedVolatilityRiskWarning=
-BypassUSStocksMarketDataInSharesWarning=
-BypassRedirectOrderWarning=
-BypassNoOverfillProtectionPrecaution=

 # Market data size for US stocks - lots or shares
@ -482,145 +381,54 @@ AcceptBidAskLastSizeDisplayUpdateNotification=accept
 SendMarketDataInLotsForUSstocks=
-# Trusted API Client IPs
-# ----------------------
-#
-# NB: THIS SETTING IS ONLY RELEVANT FOR THE GATEWAY, AND ONLY WHEN FIX=yes.
-# In all other cases it is ignored.
-#
-# This is a list of IP addresses separated by commas. API clients with IP
-# addresses in this list are able to connect to the API without Gateway
-# generating the 'Incoming connection' popup.
-#
-# Note that 127.0.0.1 is always permitted to connect, so do not include it
-# in this setting.
-
-TrustedTwsApiClientIPs=
-
-# Reset Order ID Sequence
-# -----------------------
-#
-# The setting resets the order id sequence for orders submitted via the API, so
-# that the next invocation of the `NextValidId` API callback will return the
-# value 1. The reset occurs when TWS starts.
-#
-# Note that order ids are reset for all API clients, except those that have
-# outstanding (ie incomplete) orders: their order id sequence carries on as
-# before.
-#
-# Valid values are 'yes', 'true', 'false' and 'no'. The default is 'no'.
-
-ResetOrderIdsAtStart=
-
-# This setting specifies IBC's action when TWS displays the dialog asking for
-# confirmation of a request to reset the API order id sequence.
-#
-# Note that the Gateway never displays this dialog, so this setting is ignored
-# for a Gateway session.
-#
-# Valid values consist of two strings separated by a solidus '/'. The first
-# value specifies the action to take when the order id reset request resulted
-# from setting ResetOrderIdsAtStart=yes. The second specifies the action to
-# take when the order id reset request is a result of the user clicking the
-# 'Reset API order ID sequence' button in the API configuration. Each value
-# must be one of the following:
-#
-# 'confirm'
-#     order ids will be reset
-#
-# 'reject'
-#     order ids will not be reset
-#
-# 'ignore'
-#     IBC will ignore the dialog. The user must take action.
-#
-# The default setting is ignore/ignore
-# Examples:
-#
-# 'confirm/reject' - confirm order id reset only if ResetOrderIdsAtStart=yes
-#                    and reject any user-initiated requests
-#
-# 'ignore/confirm' - user must decide what to do if ResetOrderIdsAtStart=yes
-#                    and confirm user-initiated requests
-#
-# 'reject/ignore'  - reject order id reset if ResetOrderIdsAtStart=yes but
-#                    allow user to handle user-initiated requests
-
-ConfirmOrderIdReset=

 # =============================================================================
-# 4. TWS Auto-Logoff and Auto-Restart
+# 4. TWS Auto-Closedown
 # =============================================================================
 #
-# TWS and Gateway insist on being restarted every day. Two alternative
-# automatic options are offered:
-#
-# - Auto-Logoff: at a specified time, TWS shuts down tidily, without
-#   restarting.
-#
-# - Auto-Restart: at a specified time, TWS shuts down and then restarts
-#   without the user having to re-autheticate.
-#
-# The normal way to configure the time at which this happens is via the Lock
-# and Exit section of the Configuration dialog. Once this time has been
-# configured in this way, the setting persists until the user changes it again.
-#
-# However, there are situations where there is no user available to do this
-# configuration, or where there is no persistent storage (for example some
-# Docker images). In such cases, the auto-restart or auto-logoff time can be
-# set whenever IBC starts with the settings below.
-#
-# The value, if specified, must be a time in HH:MM AM/PM format, for example
-# 08:00 AM or 10:00 PM. Note that there must be a single space between the
-# two parts of this value; also that midnight is "12:00 AM" and midday is
-# "12:00 PM".
-#
-# If no value is specified for either setting, the currently configured
-# settings will apply. If a value is supplied for one setting, the other
-# setting is cleared. If values are supplied for both settings, only the
-# auto-restart time is set, and the auto-logoff time is cleared.
-#
-# Note that for a normal TWS/Gateway installation with persistent storage
-# (for example on a desktop computer) the value will be persisted as if the
-# user had set it via the configuration dialog.
-#
-# If you choose to auto-restart, you should take note of the considerations
-# described at the link below. Note that where this information mentions
-# 'manual authentication', restarting IBC will do the job (IBKR does not
-# recognise the existence of IBC in its docuemntation).
-#
-# https://www.interactivebrokers.com/en/software/tws/twsguide.htm#usersguidebook/configuretws/auto_restart_info.htm
-#
-# If you use the "RESTART" command via the IBC command server, and IBC is
-# running any version of the Gateway (or a version of TWS earlier than 1018),
-# note that this will set the Auto-Restart time in Gateway/TWS's configuration
-# dialog to the time at which the restart actually happens (which may be up to
-# a minute after the RESTART command is issued). To prevent future auto-
-# restarts at this time, you must make sure you have set AutoLogoffTime or
-# AutoRestartTime to your desired value before running IBC. NB: this does not
-# apply to TWS from version 1018 onwards.
-
-AutoLogoffTime=
-
-AutoRestartTime=
+# IMPORTANT NOTE: Starting with TWS 974, this setting no longer
+# works properly, because IB have changed the way TWS handles its
+# autologoff mechanism.
+#
+# You should now configure the TWS autologoff time to something
+# convenient for you, and restart IBC each day.
+#
+# Alternatively, discontinue use of IBC and use the auto-relogin
+# mechanism within TWS 974 and later versions (note that the
+# auto-relogin mechanism provided by IB is not available if you
+# use IBC).
+#
+# Set to yes or no (lower case).
+#
+# yes means allow TWS to shut down automatically at its
+# specified shutdown time, which is set via the TWS
+# configuration menu.
+#
+# no means TWS never shuts down automatically.
+#
+# NB: IB recommends that you do not keep TWS running
+# continuously. If you set this setting to 'no', you may
+# experience incorrect TWS operation.
+#
+# NB: the default for this setting is 'no'. Since this will
+# only work properly with TWS versions earlier than 974, you
+# should explicitly set this to 'yes' for version 974 and later.
+
+IbAutoClosedown=yes

 # =============================================================================
 # 5. TWS Tidy Closedown Time
 # =============================================================================
 #
-# Specifies a time at which TWS will close down tidily, with no restart.
-#
-# There is little reason to use this setting. It is similar to AutoLogoffTime,
-# but can include a day-of-the-week, whereas AutoLogoffTime and AutoRestartTime
-# apply every day. So for example you could use ClosedownAt in conjunction with
-# AutoRestartTime to shut down TWS on Friday evenings after the markets
-# close, without it running on Saturday as well.
+# NB: starting with TWS 974 this is no longer a useful option
+# because both TWS and Gateway now have the same auto-logoff
+# mechanism, and IBC can no longer avoid this.
+#
+# Note that giving this setting a value does not change TWS's
+# auto-logoff in any way: any setting will be additional to the
+# TWS auto-logoff.
 #
 # To tell IBC to tidily close TWS at a specified time every
 # day, set this value to <hh:mm>, for example:
@ -679,7 +487,7 @@ AcceptIncomingConnectionAction=reject
 # no means the dialog remains on display and must be
 # handled by the user.

-AllowBlindTrading=no
+AllowBlindTrading=yes

 # Save Settings on a Schedule
@ -722,26 +530,6 @@ AllowBlindTrading=no
 SaveTwsSettingsAt=

-# Confirm Crypto Currency Orders Automatically
-# --------------------------------------------
-#
-# When you place an order for a cryptocurrency contract, a dialog is displayed
-# asking you to confirm that you want to place the order, and notifying you
-# that you are placing an order to trade cryptocurrency with Paxos, a New York
-# limited trust company, and not at Interactive Brokers.
-#
-# transmit means that the order will be placed automatically, and the
-# dialog will then be closed
-#
-# cancel means that the order will not be placed, and the dialog will
-# then be closed
-#
-# manual means that IBC will take no action and the user must deal
-# with the dialog
-
-ConfirmCryptoCurrencyOrders=transmit

 # =============================================================================
 # 7. Settings Specific to Indian Versions of TWS
@ -778,17 +566,13 @@ DismissNSEComplianceNotice=yes
 #
 # The port number that IBC listens on for commands
 # such as "STOP". DO NOT set this to the port number
-# used for TWS API connections.
-#
-# The convention is to use 7462 for this port,
-# but it must be set to a different value from any other
-# IBC instance that might run at the same time.
-#
-# The default value is 0, which tells IBC not to start
-# the command server
+# used for TWS API connections. There is no good reason
+# to change this setting unless the port is used by
+# some other application (typically another instance of
+# IBC). The default value is 0, which tells IBC not to
+# start the command server

 #CommandServerPort=7462
-CommandServerPort=0

 # Permitted Command Sources
@ -799,19 +583,19 @@ CommandServerPort=0
 # IBC. Commands can always be sent from the
 # same host as IBC is running on.

-ControlFrom=
+ControlFrom=127.0.0.1

 # Address for Receiving Commands
 # ------------------------------
 #
 # Specifies the IP address on which the Command Server
-# is to listen. For a multi-homed host, this can be used
+# is so listen. For a multi-homed host, this can be used
 # to specify that connection requests are only to be
 # accepted on the specified address. The default is to
 # accept connection requests on all local addresses.

-BindAddress=
+BindAddress=127.0.0.1

 # Command Prompt
@ -837,7 +621,7 @@ CommandPrompt=
 # information is sent. The default is that such information
 # is not sent.

-SuppressInfoMessages=yes
+SuppressInfoMessages=no
@ -867,10 +651,10 @@ SuppressInfoMessages=yes
 # The LogStructureScope setting indicates which windows are
 # eligible for structure logging:
 #
-# - (default value) if set to 'known', only windows that
-#   IBC recognizes are eligible - these are windows that
-#   IBC has some interest in monitoring, usually to take
-#   some action on the user's behalf;
+# - if set to 'known', only windows that IBC recognizes
+#   are eligible - these are windows that IBC has some
+#   interest in monitoring, usually to take some action
+#   on the user's behalf;
 #
 # - if set to 'unknown', only windows that IBC does not
 #   recognize are eligible. Most windows displayed by
@ -883,8 +667,9 @@ SuppressInfoMessages=yes
 # - if set to 'all', then every window displayed by TWS
 #   is eligible.
 #
+# The default value is 'known'.

-LogStructureScope=known
+LogStructureScope=all

 # When to Log Window Structure
@ -897,15 +682,13 @@ LogStructureScope=known
 #   structure of an eligible window the first time it
 #   is encountered;
 #
-# - if set to 'openclose', the structure is logged every
-#   time an eligible window is opened or closed;
-#
 # - if set to 'activate', the structure is logged every
 #   time an eligible window is made active;
 #
-# - (default value) if set to 'never' or 'no' or 'false',
-#   structure information is never logged.
-#
+# - if set to 'never' or 'no' or 'false', structure
+#   information is never logged.
+#
+# The default value is 'never'.

 LogStructureWhen=never
@ -925,3 +708,4 @@ LogStructureWhen=never
 #LogComponents=

View File

@ -6,11 +6,6 @@
 #  - then manually ensuring all deps are converted over:
 #    - add this file to the repo and commit it
 #  -
-# GROKin tips:
-#  - CLI eps are (ostensibly) added via an `entry_points.txt`:
-#    - https://packaging.python.org/en/latest/specifications/entry-points/#file-format
-#    - https://github.com/nix-community/poetry2nix/blob/master/editable.nix#L49
 {
   description = "piker: trading gear for hackers (pkged with poetry2nix)";
@ -106,7 +101,7 @@
       # won't be needed - thanks @k900:
       # https://github.com/nix-community/poetry2nix/pull/1257
       pyqt5 = prev.pyqt5.override {
-        # withWebkit = false;
+        withWebkit = false;
         preferWheel = true;
       };
@ -129,52 +124,55 @@
     # WHY!? -> output-attrs that `nix develop` scans for:
     # https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-develop.html#flake-output-attributes
-    in
-    rec {
+    in {
       packages = {
         # piker = poetry2nix.legacyPackages.x86_64-linux.mkPoetryEditablePackage {
         #   editablePackageSources = { piker = ./piker; };
         piker = p2npkgs.mkPoetryApplication {
           projectDir = projectDir;

           # SEE ABOVE for auto-genned input set, override
           # buncha deps with extras.. like `setuptools` mostly.
           # TODO: maybe propose a patch to p2n to show that you
           # can even do this in the edgecases docs?
           overrides = ahot_overrides;

           # XXX: won't work on llvmlite..
           # preferWheels = true;
         };
       };

-      # devShells.default = pkgs.mkShell {
-      #   projectDir = projectDir;
-      #   python = "python3.10";
-      #   overrides = ahot_overrides;
-      #   inputsFrom = [ self.packages.x86_64-linux.piker ];
-      #   packages = packages;
-      #   # packages = [ poetry2nix.packages.${system}.poetry ];
-      # };
-
-      # TODO: grok the difference here..
-      # - avoid re-cloning git repos on every develop entry..
-      # - ideally allow hacking on the src code of some deps
-      #   (tractor, pyqtgraph, tomlkit, etc.) WITHOUT having to
-      #   re-install them every time a change is made.
-      # - boot a usable xonsh inside the poetry virtualenv when
-      #   defined via a custom entry point?
-      devShells.default = p2npkgs.mkPoetryEnv {
-      # env = p2npkgs.mkPoetryEnv {
-        projectDir = projectDir;
-        python = pkgs.python310;
-        overrides = ahot_overrides;
-        editablePackageSources = packages;
-        # piker = "./";
-        # tractor = "../tractor/";
-      # };  # wut?
-      };
-    }
+      devShells.default = pkgs.mkShell {
+        # packages = [ poetry2nix.packages.${system}.poetry ];
+        packages = [ poetry2nix.packages.x86_64-linux.poetry ];
+        inputsFrom = [ self.packages.x86_64-linux.piker ];
+
+        # TODO: boot xonsh inside the poetry virtualenv when
+        # defined via a custom entry point?
+        # NOTE XXX: apparently DON'T do these..?
+        # shellHook = "poetry run xonsh";
+        # shellHook = "poetry shell";
+      };
+
+      # TODO: grok the difference here..
+      # - avoid re-cloning git repos on every develop entry..
+      # - ideally allow hacking on the src code of some deps
+      #   (tractor, pyqtgraph, tomlkit, etc.) WITHOUT having to
+      #   re-install them every time a change is made.
+
+      # devShells.default = (p2npkgs.mkPoetryEnv {
+      # # let {
+      # # devEnv = p2npkgs.mkPoetryEnv {
+      #   projectDir = projectDir;
+      #   overrides = ahot_overrides;
+      #   inputsFrom = [ self.packages.x86_64-linux.piker ];
+      # }).env.overrideAttrs (old: {
+      #   buildInputs = [ packages.piker ];
+      #   }
+      # );
+    }
   ); # end of .outputs scope
 }

View File

@ -327,11 +327,7 @@ class MktPair(Struct, frozen=True):
     ) -> dict:
         d = super().to_dict(**kwargs)
         d['src'] = self.src.to_dict(**kwargs)
-        if not isinstance(self.dst, str):
-            d['dst'] = self.dst.to_dict(**kwargs)
-        else:
-            d['dst'] = str(self.dst)
+        d['dst'] = self.dst.to_dict(**kwargs)
         d['price_tick'] = str(self.price_tick)
         d['size_tick'] = str(self.size_tick)
@ -353,16 +349,11 @@ class MktPair(Struct, frozen=True):
         Constructor for a received msg-dict normally received over IPC.

         '''
-        if not isinstance(
-            dst_asset_msg := msg.pop('dst'),
-            str,
-        ):
-            dst: Asset = Asset.from_msg(dst_asset_msg)  # .copy()
-        else:
-            dst: str = dst_asset_msg
+        dst_asset_msg = msg.pop('dst')
+        dst = Asset.from_msg(dst_asset_msg)  # .copy()

-        src_asset_msg: dict = msg.pop('src')
-        src: Asset = Asset.from_msg(src_asset_msg)  # .copy()
+        src_asset_msg = msg.pop('src')
+        src = Asset.from_msg(src_asset_msg)  # .copy()

         # XXX NOTE: ``msgspec`` can encode `Decimal` but it doesn't
         # decide to it by default since we aren't spec-cing these
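a quick aside on the ``msgspec``/``Decimal`` note above: since plain
decoding hands back strings, the round-trip has to be done by hand on
the receiving end. a minimal sketch, assuming only stdlib plus
``msgspec`` (this is illustrative, not piker's exact wire code)::

    from decimal import Decimal
    import msgspec

    # tick sizes are shipped over IPC as strings (see `to_dict()` above)
    wire: bytes = msgspec.json.encode({'price_tick': str(Decimal('0.01'))})

    msg: dict = msgspec.json.decode(wire)    # -> {'price_tick': '0.01'}
    price_tick = Decimal(msg['price_tick'])  # receiver re-hydrates explicitly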

View File

@ -50,7 +50,7 @@ __brokers__: list[str] = [
     'binance',
     'ib',
     'kraken',
-    'kucoin',
+    'kucoin'

     # broken but used to work
     # 'questrade',
@ -71,7 +71,7 @@ def get_brokermod(brokername: str) -> ModuleType:
     Return the imported broker module by name.

     '''
-    module: ModuleType = import_module('.' + brokername, 'piker.brokers')
+    module = import_module('.' + brokername, 'piker.brokers')
     # we only allow monkeying because it's for internal keying
     module.name = module.__name__.split('.')[-1]
     return module
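for context, a tiny usage sketch of the loader above; valid names come
from the ``__brokers__`` list in this same hunk::

    from piker.brokers import get_brokermod

    kraken = get_brokermod('kraken')
    # the module gets a short `name` attr monkey-patched on, per the
    # "internal keying" comment above
    assert kraken.name == 'kraken'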

View File

@ -18,11 +18,10 @@
 Handy cross-broker utils.

 """
-from __future__ import annotations
 from functools import partial
 import json
-import httpx
+import asks
 import logging

 from ..log import (
@ -51,7 +50,6 @@ class SymbolNotFound(BrokerError):
"Symbol not found by broker search" "Symbol not found by broker search"
# TODO: these should probably be moved to `.tsp/.data`?
class NoData(BrokerError): class NoData(BrokerError):
''' '''
Symbol data not permitted or no data Symbol data not permitted or no data
@ -61,15 +59,14 @@ class NoData(BrokerError):
     def __init__(
         self,
         *args,
-        info: dict|None = None,
+        frame_size: int = 1000,

     ) -> None:
         super().__init__(*args)
-        self.info: dict|None = info

         # when raised, machinery can check if the backend
         # set a "frame size" for doing datetime calcs.
-        # self.frame_size: int = 1000
+        self.frame_size: int = 1000


 class DataUnavailable(BrokerError):
@ -91,18 +88,16 @@ class DataThrottle(BrokerError):
 def resproc(
-    resp: httpx.Response,
+    resp: asks.response_objects.Response,
     log: logging.Logger,
     return_json: bool = True,
     log_resp: bool = False,

-) -> httpx.Response:
-    '''
-    Process response and return its json content.
+) -> asks.response_objects.Response:
+    """Process response and return its json content.

     Raise the appropriate error on non-200 OK responses.
-
-    '''
+    """
    if not resp.status_code == 200:
        raise BrokerError(resp.body)
    try:
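a short usage sketch of the ``httpx`` variant of ``resproc()`` shown on
the ``-`` side above; the endpoint URL is only an illustrative
assumption (``httpx`` runs fine under ``trio`` via anyio)::

    import logging

    import httpx
    import trio

    log = logging.getLogger('piker.brokers')

    async def main() -> None:
        async with httpx.AsyncClient(
            base_url='https://api.binance.com',
        ) as client:
            resp = await client.get('/api/v3/time')
            # raises `BrokerError` on any non-200 response
            data = resproc(resp, log)
            print(data)

    trio.run(main)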

View File

@ -1,8 +1,8 @@
 # piker: trading gear for hackers
 # Copyright (C)
 #   Guillermo Rodriguez (aka ze jefe)
 #   Tyler Goodlet
 #   (in stewardship for pikers)

 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as published by
@ -25,13 +25,14 @@ from __future__ import annotations
 from collections import ChainMap
 from contextlib import (
     asynccontextmanager as acm,
-    AsyncExitStack,
 )
 from datetime import datetime
 from pprint import pformat
 from typing import (
     Any,
     Callable,
+    Hashable,
+    Sequence,
     Type,
 )
 import hmac
@ -42,7 +43,8 @@ import trio
 from pendulum import (
     now,
 )
-import httpx
+import asks
+from rapidfuzz import process as fuzzy
 import numpy as np

 from piker import config
@ -52,7 +54,6 @@ from piker.clearing._messages import (
 from piker.accounting import (
     Asset,
     digits_to_dec,
-    MktPair,
 )
 from piker.types import Struct
 from piker.data import (
@ -68,6 +69,7 @@ from .venues import (
     PAIRTYPES,
     Pair,
     MarketType,

     _spot_url,
+    _testnet_spot_url,
     _futes_url,
     _testnet_futes_url,
@ -77,18 +79,16 @@ from .venues import (
 log = get_logger('piker.brokers.binance')


-def get_config() -> dict[str, Any]:
+def get_config() -> dict:

     conf: dict
     path: Path
-    conf, path = config.load(
-        conf_name='brokers',
-        touch_if_dne=True,
-    )
-    section: dict = conf.get('binance')
+    conf, path = config.load(touch_if_dne=True)
+
+    section = conf.get('binance')

     if not section:
-        log.warning(
-            f'No config section found for binance in {path}'
-        )
+        log.warning(f'No config section found for binance in {path}')
         return {}

     return section
@ -144,7 +144,7 @@ def binance_timestamp(
 class Client:
     '''
-    Async ReST API client using `trio` + `httpx` B)
+    Async ReST API client using ``trio`` + ``asks`` B)

     Supports all of the spot, margin and futures endpoints depending
     on method.
@ -153,17 +153,10 @@ class Client:
     def __init__(
         self,

-        venue_sessions: dict[
-            str,  # venue key
-            tuple[httpx.AsyncClient, str]  # session, eps path
-        ],
-        conf: dict[str, Any],
-
         # TODO: change this to `Client.[mkt_]venue: MarketType`?
         mkt_mode: MarketType = 'spot',

     ) -> None:
-        self.conf = conf
-
         # build out pair info tables for each market type
         # and wrap in a chain-map view for search / query.
         self._spot_pairs: dict[str, Pair] = {}  # spot info table
@ -190,13 +183,44 @@ class Client:
         # market symbols for use by search. See `.exch_info()`.
         self._pairs: ChainMap[str, Pair] = ChainMap()

+        # spot EPs sesh
+        self._sesh = asks.Session(connections=4)
+        self._sesh.base_location: str = _spot_url
+        # spot testnet
+        self._test_sesh: asks.Session = asks.Session(connections=4)
+        self._test_sesh.base_location: str = _testnet_spot_url
+
+        # margin and extended spot endpoints session.
+        self._sapi_sesh = asks.Session(connections=4)
+        self._sapi_sesh.base_location: str = _spot_url
+
+        # futes EPs sesh
+        self._fapi_sesh = asks.Session(connections=4)
+        self._fapi_sesh.base_location: str = _futes_url
+
+        # futes testnet
+        self._test_fapi_sesh: asks.Session = asks.Session(connections=4)
+        self._test_fapi_sesh.base_location: str = _testnet_futes_url
+
         # global client "venue selection" mode.
         # set this when you want to switch venues and not have to
         # specify the venue for the next request.
         self.mkt_mode: MarketType = mkt_mode

-        # per-mkt-venue API client table
-        self.venue_sesh = venue_sessions
+        # per 8
+        self.venue_sesh: dict[
+            str,  # venue key
+            tuple[asks.Session, str]  # session, eps path
+        ] = {
+            'spot': (self._sesh, '/api/v3/'),
+            'spot_testnet': (self._test_sesh, '/fapi/v1/'),
+            'margin': (self._sapi_sesh, '/sapi/v1/'),
+            'usdtm_futes': (self._fapi_sesh, '/fapi/v1/'),
+            'usdtm_futes_testnet': (self._test_fapi_sesh, '/fapi/v1/'),
+            # 'futes_coin': self._dapi,  # TODO
+        }

         # lookup for going from `.mkt_mode: str` to the config
         # subsection `key: str`
@ -211,6 +235,40 @@ class Client:
             'futes': ['usdtm_futes'],
         }

+        # for creating API keys see,
+        # https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
+        self.conf: dict = get_config()
+
+        for key, subconf in self.conf.items():
+            if api_key := subconf.get('api_key', ''):
+                venue_keys: list[str] = self.confkey2venuekeys[key]
+
+                venue_key: str
+                sesh: asks.Session
+                for venue_key in venue_keys:
+                    sesh, _ = self.venue_sesh[venue_key]
+
+                    api_key_header: dict = {
+                        # taken from official:
+                        # https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
+                        "Content-Type": "application/json;charset=utf-8",
+
+                        # TODO: prolly should just always query and copy
+                        # in the real latest ver?
+                        "User-Agent": "binance-connector/6.1.6smbz6",
+                        "X-MBX-APIKEY": api_key,
+                    }
+                    sesh.headers.update(api_key_header)
+
+                    # if `.use_tesnet = true` in the config then
+                    # also add headers for the testnet session which
+                    # will be used for all order control
+                    if subconf.get('use_testnet', False):
+                        testnet_sesh, _ = self.venue_sesh[
+                            venue_key + '_testnet'
+                        ]
+                        testnet_sesh.headers.update(api_key_header)

     def _mk_sig(
         self,
         data: dict,
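to make the key-loading loop above concrete, here's a hypothetical
``brokers.toml`` section shape it implies. the key names ``api_key``,
``api_secret`` and ``use_testnet`` appear verbatim in this diff, while
the exact layout is an assumption; parsed here with stdlib ``tomllib``::

    import tomllib

    conf: dict = tomllib.loads('''
    [binance.spot]
    api_key = "<your key>"
    api_secret = "<your secret>"

    [binance.futes]
    api_key = "<your key>"
    api_secret = "<your secret>"
    use_testnet = true
    ''')

    # mirrors `self.conf.items()` + `subconf.get('api_key', '')` above
    for key, subconf in conf['binance'].items():
        assert subconf.get('api_key')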
@ -229,6 +287,7 @@ class Client:
             'to define the creds for auth-ed endpoints!?'
         )

         # XXX: Info on security and authentification
         # https://binance-docs.github.io/apidocs/#endpoint-security-type
         if not (api_secret := subconf.get('api_secret')):
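for reference, a minimal sketch of the standard binance HMAC-SHA256
request signing that ``_mk_sig()`` implements (per the security docs
linked above; this exact helper body is an assumption, not piker's
code)::

    import hashlib
    import hmac
    from urllib.parse import urlencode

    def mk_sig(data: dict, api_secret: str) -> str:
        # sign the urlencoded query string with the secret key; the
        # hexdigest is then sent as the `signature` query param
        query: str = urlencode(data)
        return hmac.new(
            api_secret.encode('utf-8'),
            query.encode('utf-8'),
            hashlib.sha256,
        ).hexdigest()

    sig: str = mk_sig({'timestamp': 1700000000000}, 'top-secret')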
@ -257,7 +316,7 @@ class Client:
         params: dict,
         method: str = 'get',
-        venue: str|None = None,  # if None use `.mkt_mode` state
+        venue: str | None = None,  # if None use `.mkt_mode` state
         signed: bool = False,
         allow_testnet: bool = False,
@ -268,9 +327,8 @@ class Client:
         - /fapi/v3/ USD-M FUTURES, or
         - /api/v3/ SPOT/MARGIN

-        account/market endpoint request depending on either passed in
-        `venue: str` or the current setting `.mkt_mode: str` setting,
-        default `'spot'`.
+        account/market endpoint request depending on either passed in `venue: str`
+        or the current setting `.mkt_mode: str` setting, default `'spot'`.

         Docs per venue API:
@ -299,6 +357,9 @@ class Client:
                 venue=venue_key,
             )

+        sesh: asks.Session
+        path: str
+
         # Check if we're configured to route order requests to the
         # venue equivalent's testnet.
         use_testnet: bool = False
@ -323,12 +384,11 @@ class Client:
                 # ctl machinery B)
                 venue_key += '_testnet'

-        client: httpx.AsyncClient
-        path: str
-        client, path = self.venue_sesh[venue_key]
-
-        meth: Callable = getattr(client, method)
+        sesh, path = self.venue_sesh[venue_key]
+
+        meth: Callable = getattr(sesh, method)
         resp = await meth(
-            url=path + endpoint,
+            path=path + endpoint,
             params=params,
             timeout=float('inf'),
         )
@ -370,15 +430,7 @@ class Client:
item['filters'] = filters item['filters'] = filters
pair_type: Type = PAIRTYPES[venue] pair_type: Type = PAIRTYPES[venue]
try: pair: Pair = pair_type(**item)
pair: Pair = pair_type(**item)
except Exception as e:
e.add_note(
"\nDon't panic, prolly stupid binance changed their symbology schema again..\n"
'Check out their API docs here:\n\n'
'https://binance-docs.github.io/apidocs/spot/en/#exchange-information'
)
raise
pair_table[pair.symbol.upper()] = pair pair_table[pair.symbol.upper()] = pair
# update an additional top-level-cross-venue-table # update an additional top-level-cross-venue-table
@ -473,9 +525,7 @@ class Client:
''' '''
pair_table: dict[str, Pair] = self._venue2pairs[ pair_table: dict[str, Pair] = self._venue2pairs[
venue venue or self.mkt_mode
or
self.mkt_mode
] ]
if ( if (
expiry expiry
@ -494,9 +544,9 @@ class Client:
venues: list[str] = [venue] venues: list[str] = [venue]
# batch per-venue download of all exchange infos # batch per-venue download of all exchange infos
async with trio.open_nursery() as tn: async with trio.open_nursery() as rn:
for ven in venues: for ven in venues:
tn.start_soon( rn.start_soon(
self._cache_pairs, self._cache_pairs,
ven, ven,
) )
@ -549,11 +599,11 @@ class Client:
) -> dict[str, Any]: ) -> dict[str, Any]:
fq_pairs: dict[str, Pair] = await self.exch_info() fq_pairs: dict = await self.exch_info()
# TODO: cache this list like we were in # TODO: cache this list like we were in
# `open_symbol_search()`? # `open_symbol_search()`?
# keys: list[str] = list(fq_pairs) keys: list[str] = list(fq_pairs)
return match_from_pairs( return match_from_pairs(
pairs=fq_pairs, pairs=fq_pairs,
@ -561,20 +611,9 @@ class Client:
score_cutoff=50, score_cutoff=50,
) )
def pair2venuekey(
self,
pair: Pair,
) -> str:
return {
'USDTM': 'usdtm_futes',
'SPOT': 'spot',
# 'COINM': 'coin_futes',
# ^-TODO-^ bc someone might want it..?
}[pair.venue]
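A (sketched) call-site composing this lookup with the venue session table from `__init__()`:

```python
# hypothetical usage: route a pair to its venue-specific session,
# e.g. pair.venue == 'SPOT' -> the 'spot' session entry.
venue_key: str = client.pair2venuekey(pair)
sesh, path = client.venue_sesh[venue_key]
```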
async def bars( async def bars(
self, self,
mkt: MktPair, symbol: str,
start_dt: datetime | None = None, start_dt: datetime | None = None,
end_dt: datetime | None = None, end_dt: datetime | None = None,
@ -604,20 +643,16 @@ class Client:
start_time = binance_timestamp(start_dt) start_time = binance_timestamp(start_dt)
end_time = binance_timestamp(end_dt) end_time = binance_timestamp(end_dt)
bs_pair: Pair = self._pairs[mkt.bs_fqme.upper()]
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data # https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
bars = await self._api( bars = await self._api(
'klines', 'klines',
params={ params={
# NOTE: always query using their native symbology! 'symbol': symbol.upper(),
'symbol': mkt.bs_mktid.upper(),
'interval': '1m', 'interval': '1m',
'startTime': start_time, 'startTime': start_time,
'endTime': end_time, 'endTime': end_time,
'limit': limit 'limit': limit
}, },
venue=self.pair2venuekey(bs_pair),
allow_testnet=False, allow_testnet=False,
) )
new_bars: list[tuple] = [] new_bars: list[tuple] = []
@ -934,148 +969,17 @@ class Client:
await self.close_listen_key(key) await self.close_listen_key(key)
_venue_urls: dict[str, str] = {
'spot': (
_spot_url,
'/api/v3/',
),
'spot_testnet': (
_testnet_spot_url,
'/fapi/v1/'
),
# margin and extended spot endpoints session.
# TODO: did this ever get implemented fully?
# 'margin': (
# _spot_url,
# '/sapi/v1/'
# ),
'usdtm_futes': (
_futes_url,
'/fapi/v1/',
),
'usdtm_futes_testnet': (
_testnet_futes_url,
'/fapi/v1/',
),
# TODO: for anyone who actually needs it ;P
# 'coin_futes': ()
}
def init_api_keys(
client: Client,
conf: dict[str, Any],
) -> None:
'''
Set up per-venue API keys on each http client according to the user's
`brokers.conf`.
For ex, to use spot-testnet and live usdt futures APIs:
```toml
[binance]
# spot test net
spot.use_testnet = true
spot.api_key = '<spot_api_key_from_binance_account>'
spot.api_secret = '<spot_api_key_password>'
# futes live
futes.use_testnet = false
accounts.usdtm = 'futes'
futes.api_key = '<futes_api_key_from_binance>'
futes.api_secret = '<futes_api_key_password>'
# if uncommented will use the built-in paper engine and not
# connect to `binance` API servers for order ctl.
# accounts.paper = 'paper'
```
'''
for key, subconf in conf.items():
if api_key := subconf.get('api_key', ''):
venue_keys: list[str] = client.confkey2venuekeys[key]
venue_key: str
client: httpx.AsyncClient
for venue_key in venue_keys:
client, _ = client.venue_sesh[venue_key]
api_key_header: dict = {
# taken from official:
# https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
"Content-Type": "application/json;charset=utf-8",
# TODO: prolly should just always query and copy
# in the real latest ver?
"User-Agent": "binance-connector/6.1.6smbz6",
"X-MBX-APIKEY": api_key,
}
client.headers.update(api_key_header)
# if `.use_testnet = true` in the config then
# also add headers for the testnet session which
# will be used for all order control
if subconf.get('use_testnet', False):
testnet_sesh, _ = client.venue_sesh[
venue_key + '_testnet'
]
testnet_sesh.headers.update(api_key_header)
@acm @acm
async def get_client( async def get_client() -> Client:
mkt_mode: MarketType = 'spot',
) -> Client:
'''
Construct a single `piker` client which composes multiple underlying
venue-specific API clients for both live and test networks.
''' client = Client()
venue_sessions: dict[ await client.exch_info()
str, # venue key log.info(
tuple[httpx.AsyncClient, str] # session, eps path f'{client} in {client.mkt_mode} mode: caching exchange infos..\n'
] = {} 'Cached multi-market pairs:\n'
async with AsyncExitStack() as client_stack: f'spot: {len(client._spot_pairs)}\n'
for name, (base_url, path) in _venue_urls.items(): f'usdtm_futes: {len(client._ufutes_pairs)}\n'
api: httpx.AsyncClient = await client_stack.enter_async_context( f'Total: {len(client._pairs)}\n'
httpx.AsyncClient( )
base_url=base_url,
# headers={},
# TODO: is there a way to numerate this? yield client
# https://www.python-httpx.org/advanced/clients/#why-use-a-client
# connections=4
)
)
venue_sessions[name] = (
api,
path,
)
conf: dict[str, Any] = get_config()
# for creating API keys see,
# https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
client = Client(
venue_sessions=venue_sessions,
conf=conf,
mkt_mode=mkt_mode,
)
init_api_keys(
client=client,
conf=conf,
)
fq_pairs: dict[str, Pair] = await client.exch_info()
assert fq_pairs
log.info(
f'Loaded multi-venue `Client` in mkt_mode={client.mkt_mode!r}\n\n'
f'Symbology Summary:\n'
f'------ - ------\n'
f'spot: {len(client._spot_pairs)}\n'
f'usdtm_futes: {len(client._ufutes_pairs)}\n'
'------ - ------\n'
f'total: {len(client._pairs)}\n'
)
yield client
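A minimal (assumed) consumer of this acm from `trio` task code:

```python
import trio

async def main() -> None:
    # `get_client()` is the @acm above; a single instance
    # multiplexes all venue sessions, live and testnet.
    async with get_client(mkt_mode='spot') as client:
        fq_pairs = await client.exch_info()  # cached per-venue pairs
        print(f'loaded {len(fq_pairs)} pairs')

trio.run(main)
```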
@ -264,20 +264,15 @@ async def open_trade_dialog(
# do a open_symcache() call.. though maybe we can hide # do a open_symcache() call.. though maybe we can hide
# this in a new async version of open_account()? # this in a new async version of open_account()?
async with open_cached_client('binance') as client: async with open_cached_client('binance') as client:
subconf: dict|None = client.conf.get(venue_name) subconf: dict = client.conf[venue_name]
use_testnet = subconf.get('use_testnet', False)
# XXX: if no futes.api_key or spot.api_key has been set we # XXX: if no futes.api_key or spot.api_key has been set we
# always fall back to the paper engine! # always fall back to the paper engine!
if ( if not subconf.get('api_key'):
not subconf
or
not subconf.get('api_key')
):
await ctx.started('paper') await ctx.started('paper')
return return
use_testnet: bool = subconf.get('use_testnet', False)
async with ( async with (
open_cached_client('binance') as client, open_cached_client('binance') as client,
): ):
@ -42,12 +42,12 @@ from trio_typing import TaskStatus
from pendulum import ( from pendulum import (
from_timestamp, from_timestamp,
) )
from rapidfuzz import process as fuzzy
import numpy as np import numpy as np
import tractor import tractor
from piker.brokers import ( from piker.brokers import (
open_cached_client, open_cached_client,
NoData,
) )
from piker._cacheables import ( from piker._cacheables import (
async_lifo_cache, async_lifo_cache,
@ -110,7 +110,6 @@ class AggTrade(Struct, frozen=True):
async def stream_messages( async def stream_messages(
ws: NoBsWs, ws: NoBsWs,
) -> AsyncGenerator[NoBsWs, dict]: ) -> AsyncGenerator[NoBsWs, dict]:
# TODO: match syntax here! # TODO: match syntax here!
@ -221,8 +220,6 @@ def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
} }
# TODO, why aren't frame resp `log.info()`s showing in upstream
# code?!
@acm @acm
async def open_history_client( async def open_history_client(
mkt: MktPair, mkt: MktPair,
@ -255,30 +252,24 @@ async def open_history_client(
else: else:
client.mkt_mode = 'spot' client.mkt_mode = 'spot'
array: np.ndarray = await client.bars( # NOTE: always query using their native symbology!
mkt=mkt, mktid: str = mkt.bs_mktid
array = await client.bars(
mktid,
start_dt=start_dt, start_dt=start_dt,
end_dt=end_dt, end_dt=end_dt,
) )
if array.size == 0:
raise NoData(
f'No frame for {start_dt} -> {end_dt}\n'
)
times = array['time'] times = array['time']
if not times.any(): if (
raise ValueError( end_dt is None
'Bad frame with null-times?\n\n' ):
f'{times}' inow = round(time.time())
)
if end_dt is None:
inow: int = round(time.time())
if (inow - times[-1]) > 60: if (inow - times[-1]) > 60:
await tractor.pause() await tractor.pause()
start_dt = from_timestamp(times[0]) start_dt = from_timestamp(times[0])
end_dt = from_timestamp(times[-1]) end_dt = from_timestamp(times[-1])
return array, start_dt, end_dt return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3} yield get_ohlc, {'erlangs': 3, 'rate': 3}
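A (sketched) consumer unpacking the yielded pair; the kwarg names here are assumptions based on the `get_hist()` signature shown in the ib backend further below:

```python
# hypothetical consumer of the (callable, config) pair yielded above;
# per the ib backend's comments: 'erlangs' ~ max concurrent requests,
# 'rate' ~ max requests/sec.
async with open_history_client(mkt) as (get_ohlc, config):
    array, start_dt, end_dt = await get_ohlc(
        timeframe=60,  # seconds-per-bar
        end_dt=None,   # None -> most recent frame
    )
```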
@ -465,8 +456,6 @@ async def stream_quotes(
): ):
init_msgs: list[FeedInit] = [] init_msgs: list[FeedInit] = []
for sym in symbols: for sym in symbols:
mkt: MktPair
pair: Pair
mkt, pair = await get_mkt_info(sym) mkt, pair = await get_mkt_info(sym)
# build out init msgs according to latest spec # build out init msgs according to latest spec
@ -515,6 +504,7 @@ async def stream_quotes(
# start streaming # start streaming
async for typ, quote in msg_gen: async for typ, quote in msg_gen:
# period = time.time() - last # period = time.time() - last
# hz = 1/period if period else float('inf') # hz = 1/period if period else float('inf')
# if hz > 60: # if hz > 60:
@ -550,7 +540,7 @@ async def open_symbol_search(
) )
# repack in fqme-keyed table # repack in fqme-keyed table
byfqme: dict[str, Pair] = {} byfqme: dict[start, Pair] = {}
for pair in pairs.values(): for pair in pairs.values():
byfqme[pair.bs_fqme] = pair byfqme[pair.bs_fqme] = pair
@ -137,12 +137,10 @@ class SpotPair(Pair, frozen=True):
quoteOrderQtyMarketAllowed: bool quoteOrderQtyMarketAllowed: bool
isSpotTradingAllowed: bool isSpotTradingAllowed: bool
isMarginTradingAllowed: bool isMarginTradingAllowed: bool
otoAllowed: bool
defaultSelfTradePreventionMode: str defaultSelfTradePreventionMode: str
allowedSelfTradePreventionModes: list[str] allowedSelfTradePreventionModes: list[str]
permissions: list[str] permissions: list[str]
permissionSets: list[list[str]]
# NOTE: see `.data._symcache.SymbologyCache.load()` for why # NOTE: see `.data._symcache.SymbologyCache.load()` for why
ns_path: str = 'piker.brokers.binance:SpotPair' ns_path: str = 'piker.brokers.binance:SpotPair'
@ -181,6 +179,7 @@ class FutesPair(Pair):
quoteAsset: str # 'USDT', quoteAsset: str # 'USDT',
quotePrecision: int # 8, quotePrecision: int # 8,
requiredMarginPercent: float # '5.0000', requiredMarginPercent: float # '5.0000',
settlePlan: int # 0,
timeInForce: list[str] # ['GTC', 'IOC', 'FOK', 'GTX'], timeInForce: list[str] # ['GTC', 'IOC', 'FOK', 'GTX'],
triggerProtect: float # '0.0500', triggerProtect: float # '0.0500',
underlyingSubType: list[str] # ['PoW'], underlyingSubType: list[str] # ['PoW'],
@ -202,7 +201,6 @@ class FutesPair(Pair):
match contype: match contype:
case ( case (
'CURRENT_QUARTER' 'CURRENT_QUARTER'
| 'CURRENT_QUARTER DELIVERING'
| 'NEXT_QUARTER' # su madre binance.. | 'NEXT_QUARTER' # su madre binance..
): ):
pair, _, expiry = symbol.partition('_') pair, _, expiry = symbol.partition('_')
@ -222,10 +220,6 @@ class FutesPair(Pair):
case ['DEFI']: case ['DEFI']:
return 'PERP' return 'PERP'
# wow, just wow you binance guys suck..
if self.status == 'PENDING_TRADING':
return 'PENDING'
# XXX: yeah no clue then.. # XXX: yeah no clue then..
raise ValueError( raise ValueError(
f'Bad .expiry token match: {contype} for {symbol}' f'Bad .expiry token match: {contype} for {symbol}'
@ -243,7 +237,6 @@ class FutesPair(Pair):
case ( case (
'CURRENT_QUARTER' 'CURRENT_QUARTER'
| 'CURRENT_QUARTER DELIVERING'
| 'NEXT_QUARTER' # su madre binance.. | 'NEXT_QUARTER' # su madre binance..
): ):
_, _, expiry = symbol.partition('_') _, _, expiry = symbol.partition('_')
@ -256,10 +249,7 @@ class FutesPair(Pair):
return f'{margin}M' return f'{margin}M'
match subtype: match subtype:
case ( case ['DEFI']:
['DEFI']
| ['USDC']
):
return f'{subtype[0]}' return f'{subtype[0]}'
# XXX: yeah no clue then.. # XXX: yeah no clue then..
@ -482,22 +482,20 @@ def search(
): ):
return await func() return await func()
from piker.toolz import open_crash_handler quotes = trio.run(
with open_crash_handler(): main,
quotes = trio.run( partial(
main, core.symbol_search,
partial( brokermods,
core.symbol_search, pattern,
brokermods, ),
pattern, )
),
)
if not quotes: if not quotes:
log.error(f"No matches could be found for {pattern}?") log.error(f"No matches could be found for {pattern}?")
return return
click.echo(colorize_json(quotes)) click.echo(colorize_json(quotes))
@cli.command() @cli.command()
@ -506,11 +504,9 @@ def search(
@click.option('--delete', '-d', flag_value=True, help='Delete section') @click.option('--delete', '-d', flag_value=True, help='Delete section')
@click.pass_obj @click.pass_obj
def brokercfg(config, section, value, delete): def brokercfg(config, section, value, delete):
''' """If invoked with no arguments, open an editor to edit broker configs file
If invoked with no arguments, open an editor to edit broker or get / update an individual section.
configs file or get / update an individual section. """
'''
from .. import config from .. import config
if section: if section:
@ -145,11 +145,7 @@ async def symbol_search(
async with maybe_spawn_brokerd( async with maybe_spawn_brokerd(
mod.name, mod.name,
infect_asyncio=getattr( infect_asyncio=getattr(mod, '_infect_asyncio', False),
mod,
'_infect_asyncio',
False,
),
) as portal: ) as portal:
results.append(( results.append((
@ -100,7 +100,7 @@ async def data_reset_hack(
log.warning( log.warning(
no_setup_msg no_setup_msg
+ +
'REQUIRES A `vnc_addrs: array` ENTRY' f'REQUIRES A `vnc_addrs: array` ENTRY'
) )
vnc_host, vnc_port = vnc_sockaddr.get( vnc_host, vnc_port = vnc_sockaddr.get(
@ -259,7 +259,7 @@ def i3ipc_xdotool_manual_click_hack() -> None:
timeout=timeout, timeout=timeout,
) )
# re-activate and focus original window # re-activate and focus original window
subprocess.call([ subprocess.call([
'xdotool', 'xdotool',
'windowactivate', '--sync', str(orig_win_id), 'windowactivate', '--sync', str(orig_win_id),
@ -41,6 +41,7 @@ import time
from typing import ( from typing import (
Any, Any,
Callable, Callable,
Union,
) )
from types import SimpleNamespace from types import SimpleNamespace
@ -48,12 +49,7 @@ from bidict import bidict
import trio import trio
import tractor import tractor
from tractor import to_asyncio from tractor import to_asyncio
from pendulum import ( import pendulum
from_timestamp,
DateTime,
Duration,
duration as mk_duration,
)
from eventkit import Event from eventkit import Event
from ib_insync import ( from ib_insync import (
client as ib_client, client as ib_client,
@ -225,20 +221,16 @@ def bars_to_np(bars: list) -> np.ndarray:
# https://interactivebrokers.github.io/tws-api/historical_limitations.html#non-available_hd # https://interactivebrokers.github.io/tws-api/historical_limitations.html#non-available_hd
_samplings: dict[int, tuple[str, str]] = { _samplings: dict[int, tuple[str, str]] = {
1: ( 1: (
# ib strs
'1 secs', '1 secs',
f'{int(2e3)} S', f'{int(2e3)} S',
pendulum.duration(seconds=2e3),
mk_duration(seconds=2e3),
), ),
# TODO: benchmark >1 D duration on query to see if # TODO: benchmark >1 D duration on query to see if
# throughput can be made faster during backfilling. # throughput can be made faster during backfilling.
60: ( 60: (
# ib strs
'1 min', '1 min',
'2 D', '1 D',
pendulum.duration(days=1),
mk_duration(days=2),
), ),
} }
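i.e. a (sketched) lookup resolving ib query params from a piker sampling period in seconds:

```python
# hypothetical lookup against the (left column) table above:
bar_size, ib_duration_str, default_dt_duration = _samplings[60]
# -> ('1 min', '2 D', Duration(days=2))
```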
@ -287,31 +279,9 @@ class Client:
self.conf = config self.conf = config
# NOTE: the ib.client here is "throttled" to 45 rps by default # NOTE: the ib.client here is "throttled" to 45 rps by default
self.ib: IB = ib self.ib = ib
self.ib.RaiseRequestErrors: bool = True self.ib.RaiseRequestErrors: bool = True
# self._acnt_names: set[str] = {}
self._acnt_names: list[str] = []
@property
def acnts(self) -> list[str]:
# return list(self._acnt_names)
return self._acnt_names
def __repr__(self) -> str:
return (
f'<{type(self).__name__}('
f'ib={self.ib} '
f'acnts={self.acnts}'
# TODO: we need to mask out acnt-#s and other private
# infos if we're going to console this!
# f' |_.conf:\n'
# f' {pformat(self.conf)}\n'
')>'
)
async def get_fills(self) -> list[Fill]: async def get_fills(self) -> list[Fill]:
''' '''
Return list of recent `Fills` from trading session. Return list of recent `Fills` from trading session.
@ -333,8 +303,8 @@ class Client:
fqme: str, fqme: str,
# EST in ISO 8601 format is required... below is EPOCH # EST in ISO 8601 format is required... below is EPOCH
start_dt: datetime | str = "1970-01-01T00:00:00.000000-05:00", start_dt: Union[datetime, str] = "1970-01-01T00:00:00.000000-05:00",
end_dt: datetime | str = "", end_dt: Union[datetime, str] = "",
# ohlc sample period in seconds # ohlc sample period in seconds
sample_period_s: int = 1, sample_period_s: int = 1,
@ -345,7 +315,7 @@ class Client:
**kwargs, **kwargs,
) -> tuple[BarDataList, np.ndarray, Duration]: ) -> tuple[BarDataList, np.ndarray, pendulum.Duration]:
''' '''
Retrieve OHLCV bars for a fqme over a range to the present. Retrieve OHLCV bars for a fqme over a range to the present.
@ -354,19 +324,14 @@ class Client:
# https://interactivebrokers.github.io/tws-api/historical_data.html # https://interactivebrokers.github.io/tws-api/historical_data.html
bars_kwargs = {'whatToShow': 'TRADES'} bars_kwargs = {'whatToShow': 'TRADES'}
bars_kwargs.update(kwargs) bars_kwargs.update(kwargs)
( bar_size, duration, dt_duration = _samplings[sample_period_s]
bar_size,
ib_duration_str,
default_dt_duration,
) = _samplings[sample_period_s]
dt_duration: Duration = ( global _enters
duration log.info(
or default_dt_duration f"REQUESTING {duration}'s worth {bar_size} BARS\n"
f'{_enters} @ end={end_dt}"'
) )
# TODO: maybe remove all this?
global _enters
if not end_dt: if not end_dt:
end_dt = '' end_dt = ''
@ -375,8 +340,8 @@ class Client:
contract: Contract = (await self.find_contracts(fqme))[0] contract: Contract = (await self.find_contracts(fqme))[0]
bars_kwargs.update(getattr(contract, 'bars_kwargs', {})) bars_kwargs.update(getattr(contract, 'bars_kwargs', {}))
kwargs: dict[str, Any] = dict( bars = await self.ib.reqHistoricalDataAsync(
contract=contract, contract,
endDateTime=end_dt, endDateTime=end_dt,
formatDate=2, formatDate=2,
@ -388,7 +353,7 @@ class Client:
# time history length values format: # time history length values format:
# ``durationStr=integer{SPACE}unit (S|D|W|M|Y)`` # ``durationStr=integer{SPACE}unit (S|D|W|M|Y)``
durationStr=ib_duration_str, durationStr=duration,
# always use extended hours # always use extended hours
useRTH=False, useRTH=False,
@ -398,122 +363,50 @@ class Client:
# whatToShow='MIDPOINT', # whatToShow='MIDPOINT',
# whatToShow='TRADES', # whatToShow='TRADES',
) )
bars = await self.ib.reqHistoricalDataAsync(
**kwargs,
)
query_info: str = (
f'REQUESTING IB history BARS\n'
f' ------ - ------\n'
f'dt_duration: {dt_duration}\n'
f'ib_duration_str: {ib_duration_str}\n'
f'bar_size: {bar_size}\n'
f'fqme: {fqme}\n'
f'actor-global _enters: {_enters}\n'
f'kwargs: {pformat(kwargs)}\n'
)
# tail case if no history for range or none prior. # tail case if no history for range or none prior.
# NOTE: there's actually 3 cases here to handle (and
# this should be read alongside the implementation of
# `.reqHistoricalDataAsync()`):
# - a timeout occurred in which case insync internals return
# an empty list thing with bars.clear()...
# - no data exists for the period likely due to
# a weekend, holiday or other non-trading period prior to
# ``end_dt`` which exceeds the ``duration``,
# - LITERALLY this is the start of the mkt's history!
if not bars: if not bars:
# TODO: figure out wut's going on here. # NOTE: there's 2 cases here to handle (and this should be
# read alongside the implementation of
# TODO: is this handy, a sync requester for tinkering # ``.reqHistoricalDataAsync()``):
# with empty frame cases? # - no data is returned for the period likely due to
# def get_hist(): # a weekend, holiday or other non-trading period prior to
# return self.ib.reqHistoricalData(**kwargs) # ``end_dt`` which exceeds the ``duration``,
# import pdbp # - a timeout occurred in which case insync internals return
# pdbp.set_trace() # an empty list thing with bars.clear()...
log.critical(
'STUPID IB SAYS NO HISTORY\n\n'
+ query_info
)
# TODO: we could maybe raise ``NoData`` instead if we
# rewrite the method in the first case?
# right now there's no way to detect a timeout..
return [], np.empty(0), dt_duration return [], np.empty(0), dt_duration
# TODO: we could maybe raise ``NoData`` instead if we
# rewrite the method in the first case? right now there's no
# way to detect a timeout.
log.info(query_info) # NOTE XXX: ensure minimum duration in bars B)
# NOTE XXX: ensure minimum duration in bars? # => we recursively call this method until we get at least
# => recursively call this method until we get at least as # as many bars such that they sum in aggregate to the the
# many bars such that they sum in aggregate to the the # desired total time (duration) at most.
# desired total time (duration) at most. elif (
# - if you query over a gap and get no data end_dt
# that may short circuit the history and (
if ( (len(bars) * sample_period_s) < dt_duration.in_seconds()
# XXX XXX XXX
# => WHY DID WE EVEN NEED THIS ORIGINALLY!? <=
# XXX XXX XXX
False
and end_dt
):
nparr: np.ndarray = bars_to_np(bars)
times: np.ndarray = nparr['time']
first: float = times[0]
tdiff: float = times[-1] - first
if (
# len(bars) * sample_period_s) < dt_duration.in_seconds()
tdiff < dt_duration.in_seconds()
# and False
):
end_dt: DateTime = from_timestamp(first)
log.warning(
f'Frame result was shorter than {dt_duration}!?\n'
'Recursing for more bars:\n'
f'end_dt: {end_dt}\n'
f'dt_duration: {dt_duration}\n'
)
(
r_bars,
r_arr,
r_duration,
) = await self.bars(
fqme,
start_dt=start_dt,
end_dt=end_dt,
sample_period_s=sample_period_s,
# TODO: make a table for Duration to
# the ib str values in order to use this?
# duration=duration,
)
r_bars.extend(bars)
bars = r_bars
nparr: np.ndarray = bars_to_np(bars)
# timestep should always be at least as large as the
# period step.
tdiff: np.ndarray = np.diff(nparr['time'])
to_short: np.ndarray = tdiff < sample_period_s
if (to_short).any():
# raise ValueError(
log.error(
f'OHLC frame for {sample_period_s} has {to_short.size} '
'time steps which are shorter than expected?!"'
) )
# OOF: this will break teardown? ):
# -[ ] check if it's greenback log.warning(
# -[ ] why tf are we leaking shm entries.. f'Recursing to get more bars from {end_dt} for {dt_duration}'
# -[ ] make a test on the debugging asyncio testing )
# branch.. end_dt -= dt_duration
# breakpoint() (
r_bars,
r_arr,
r_duration,
) = await self.bars(
fqme,
start_dt=start_dt,
end_dt=end_dt,
)
r_bars.extend(bars)
bars = r_bars
return ( nparr = bars_to_np(bars)
bars, return bars, nparr, dt_duration
nparr,
dt_duration,
)
async def con_deats( async def con_deats(
self, self,
@ -857,23 +750,6 @@ class Client:
return contracts return contracts
async def maybe_get_head_time(
self,
fqme: str,
) -> datetime | None:
'''
Return the first datetime stamp for `fqme` or `None`
on request failure.
'''
try:
head_dt: datetime = await self.get_head_time(fqme=fqme)
return head_dt
except RequestError:
log.warning(f'Unable to get head time: {fqme} ?')
return None
async def get_head_time( async def get_head_time(
self, self,
fqme: str, fqme: str,
@ -914,7 +790,6 @@ class Client:
self, self,
contract: Contract, contract: Contract,
timeout: float = 1, timeout: float = 1,
tries: int = 100,
raise_on_timeout: bool = False, raise_on_timeout: bool = False,
) -> Ticker | None: ) -> Ticker | None:
@ -929,45 +804,34 @@ class Client:
ready: ticker.TickerUpdateEvent = ticker.updateEvent ready: ticker.TickerUpdateEvent = ticker.updateEvent
# ensure a last price gets filled in before we deliver quote # ensure a last price gets filled in before we deliver quote
timeouterr: Exception | None = None
warnset: bool = False warnset: bool = False
for _ in range(tries): for _ in range(100):
# wait for a first update(Event) indicatingn a
# live quote feed.
if isnan(ticker.last): if isnan(ticker.last):
# wait for a first update(Event)
try: try:
tkr = await asyncio.wait_for( tkr = await asyncio.wait_for(
ready, ready,
timeout=timeout, timeout=timeout,
) )
if tkr: except TimeoutError:
break if raise_on_timeout:
except TimeoutError as err: raise
timeouterr = err return None
await asyncio.sleep(0.01)
continue
if tkr:
break
else: else:
if not warnset: if not warnset:
log.warning( log.warning(
f'Quote req timed out..maybe venue is closed?\n' f'Quote for {contract} timed out: market is closed?'
f'{asdict(contract)}'
) )
warnset = True warnset = True
else: else:
log.info( log.info(f'Got first quote for {contract}')
'Got first quote for contract\n'
f'{contract}\n'
)
break break
else: else:
if timeouterr and raise_on_timeout:
import pdbp
pdbp.set_trace()
raise timeouterr
if not warnset: if not warnset:
log.warning( log.warning(
f'Contract {contract} is not returning a quote ' f'Contract {contract} is not returning a quote '
@ -975,8 +839,6 @@ class Client:
) )
warnset = True warnset = True
return None
return ticker return ticker
# async to be consistent for the client proxy, and cuz why not. # async to be consistent for the client proxy, and cuz why not.
@ -1024,12 +886,8 @@ class Client:
outsideRth=True, outsideRth=True,
optOutSmartRouting=True, optOutSmartRouting=True,
# TODO: need to understand this setting better as
# it pertains to shit ass mms..
routeMarketableToBbo=True, routeMarketableToBbo=True,
designatedLocation='SMART', designatedLocation='SMART',
# TODO: make all orders GTC? # TODO: make all orders GTC?
# https://interactivebrokers.github.io/tws-api/classIBApi_1_1Order.html#a95539081751afb9980f4c6bd1655a6ba # https://interactivebrokers.github.io/tws-api/classIBApi_1_1Order.html#a95539081751afb9980f4c6bd1655a6ba
# goodTillDate=f"yyyyMMdd-HH:mm:ss", # goodTillDate=f"yyyyMMdd-HH:mm:ss",
@ -1142,9 +1000,7 @@ _scan_ignore: set[tuple[str, int]] = set()
def get_config() -> dict[str, Any]: def get_config() -> dict[str, Any]:
conf, path = config.load( conf, path = config.load('brokers')
conf_name='brokers',
)
section = conf.get('ib') section = conf.get('ib')
accounts = section.get('accounts') accounts = section.get('accounts')
@ -1157,8 +1013,8 @@ def get_config() -> dict[str, Any]:
names = list(accounts.keys()) names = list(accounts.keys())
accts = section['accounts'] = bidict(accounts) accts = section['accounts'] = bidict(accounts)
log.info( log.info(
f'{path} defines {len(accts)} account aliases:\n' f'brokers.toml defines {len(accts)} accounts: '
f'{pformat(names)}\n' f'{pformat(names)}'
) )
if section is None: if section is None:
@ -1225,7 +1081,7 @@ async def load_aio_clients(
try_ports = list(try_ports.values()) try_ports = list(try_ports.values())
_err = None _err = None
accounts_def: dict[str, str] = config.load_accounts(['ib']) accounts_def = config.load_accounts(['ib'])
ports = try_ports if port is None else [port] ports = try_ports if port is None else [port]
combos = list(itertools.product(hosts, ports)) combos = list(itertools.product(hosts, ports))
accounts_found: dict[str, Client] = {} accounts_found: dict[str, Client] = {}
@ -1250,12 +1106,6 @@ async def load_aio_clients(
for i in range(connect_retries): for i in range(connect_retries):
try: try:
log.info(
'Trying `ib_async` connect\n'
f'{host}: {port}\n'
f'clientId: {client_id}\n'
f'timeout: {connect_timeout}\n'
)
await ib.connectAsync( await ib.connectAsync(
host, host,
port, port,
@ -1270,9 +1120,7 @@ async def load_aio_clients(
client = Client(ib=ib, config=conf) client = Client(ib=ib, config=conf)
# update all actor-global caches # update all actor-global caches
log.runtime( log.info(f"Caching client for {sockaddr}")
f'Connected and caching `Client` @ {sockaddr!r}'
)
_client_cache[sockaddr] = client _client_cache[sockaddr] = client
break break
@ -1287,59 +1135,37 @@ async def load_aio_clients(
OSError, OSError,
) as ce: ) as ce:
_err = ce _err = ce
message: str = ( log.warning(
f'Failed to connect on {host}:{port} after {i} tries with\n' f'Failed to connect on {host}:{port} for {i} time with,\n'
f'{ib.client.apiError.value()!r}\n\n' f'{ib.client.apiError.value()}\n'
'Retrying with a new client id..\n' 'retrying with a new client id..')
)
log.runtime(message)
else:
# XXX report loudly if we never established after all
# re-tries
log.warning(message)
# Pre-collect all accounts available for this # Pre-collect all accounts available for this
# connection and map account names to this client # connection and map account names to this client
# instance. # instance.
for value in ib.accountValues(): for value in ib.accountValues():
acct_number: str = value.account acct_number = value.account
acnt_alias: str = accounts_def.inverse.get(acct_number) entry = accounts_def.inverse.get(acct_number)
if not acnt_alias: if not entry:
# TODO: should we constuct the below reco-ex from
# the existing config content?
_, path = config.load(
conf_name='brokers',
)
raise ValueError( raise ValueError(
'No alias in account section for account!\n' 'No section in brokers.toml for account:'
f'Please add an acnt alias entry to your {path}\n' f' {acct_number}\n'
'For example,\n\n' f'Please add entry to continue using this API client'
'[ib.accounts]\n'
'margin = {accnt_number!r}\n'
'^^^^^^ <- you need this part!\n\n'
'This ensures `piker` will not leak private acnt info '
'to console output by default!\n'
) )
# surjection of account names to operating clients. # surjection of account names to operating clients.
if acnt_alias not in accounts_found: if acct_number not in accounts_found:
accounts_found[acnt_alias] = client accounts_found[entry] = client
# client._acnt_names.add(acnt_alias)
client._acnt_names.append(acnt_alias)
if accounts_found: log.info(
log.info( f'Loaded accounts for client @ {host}:{port}\n'
f'Loaded accounts for api client\n\n' f'{pformat(accounts_found)}'
f'{pformat(accounts_found)}\n' )
)
# XXX: why aren't we just updating this directy above # XXX: why aren't we just updating this directy above
# instead of using the intermediary `accounts_found`? # instead of using the intermediary `accounts_found`?
_accounts2clients.update(accounts_found) _accounts2clients.update(accounts_found)
# if we have no clients after the scan loop then error out. # if we have no clients after the scan loop then error out.
if not _client_cache: if not _client_cache:
@ -1373,9 +1199,7 @@ async def load_clients_for_trio(
a ``tractor.to_asyncio.open_channel_from()``. a ``tractor.to_asyncio.open_channel_from()``.
''' '''
async with load_aio_clients( async with load_aio_clients() as accts2clients:
disconnect_on_exit=False,
) as accts2clients:
to_trio.send_nowait(accts2clients) to_trio.send_nowait(accts2clients)
@ -1501,7 +1325,7 @@ class MethodProxy:
self, self,
pattern: str, pattern: str,
) -> dict[str, Any] | trio.Event: ) -> Union[dict[str, Any], trio.Event]:
ev = self.event_table.get(pattern) ev = self.event_table.get(pattern)
@ -1541,7 +1365,7 @@ async def open_aio_client_method_relay(
msg: tuple[str, dict] | dict | None = await from_trio.get() msg: tuple[str, dict] | dict | None = await from_trio.get()
match msg: match msg:
case None: # termination sentinel case None: # termination sentinel
log.info('asyncio `Client` method-proxy SHUTDOWN!') print('asyncio PROXY-RELAY SHUTDOWN')
break break
case (meth_name, kwargs): case (meth_name, kwargs):
@ -20,7 +20,7 @@ Order and trades endpoints for use with ``piker``'s EMS.
""" """
from __future__ import annotations from __future__ import annotations
from contextlib import ExitStack from contextlib import ExitStack
# from collections import ChainMap from collections import ChainMap
from functools import partial from functools import partial
from pprint import pformat from pprint import pformat
import time import time
@ -1183,14 +1183,7 @@ async def deliver_trade_events(
pos pos
and fill and fill
): ):
now_cr: CommissionReport = fill.commissionReport assert fill.commissionReport == cr
if (now_cr != cr):
log.warning(
'UhhHh ib updated the commission report mid-fill..?\n'
f'was: {pformat(cr)}\n'
f'now: {pformat(now_cr)}\n'
)
await emit_pp_update( await emit_pp_update(
ems_stream, ems_stream,
accounts_def, accounts_def,
@ -25,7 +25,6 @@ from contextlib import (
from dataclasses import asdict from dataclasses import asdict
from datetime import datetime from datetime import datetime
from functools import partial from functools import partial
from pprint import pformat
from math import isnan from math import isnan
import time import time
from typing import ( from typing import (
@ -37,13 +36,7 @@ from typing import (
from async_generator import aclosing from async_generator import aclosing
import ib_insync as ibis import ib_insync as ibis
import numpy as np import numpy as np
from pendulum import ( import pendulum
now,
from_timestamp,
# DateTime,
Duration,
duration as mk_duration,
)
import tractor import tractor
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
@ -52,9 +45,10 @@ from piker.accounting import (
MktPair, MktPair,
) )
from piker.data.validate import FeedInit from piker.data.validate import FeedInit
from piker.brokers._util import ( from .._util import (
NoData, NoData,
DataUnavailable, DataUnavailable,
SymbolNotFound,
) )
from .api import ( from .api import (
# _adhoc_futes_set, # _adhoc_futes_set,
@ -165,13 +159,13 @@ async def open_history_client(
head_dt: None | datetime = None head_dt: None | datetime = None
if ( if (
# fx cons seem to not provide this endpoint? # fx cons seem to not provide this endpoint?
# TODO: guard against all contract types which don't
# support it?
'idealpro' not in fqme 'idealpro' not in fqme
): ):
head_dt: datetime | None = await proxy.maybe_get_head_time( try:
fqme=fqme head_dt = await proxy.get_head_time(fqme=fqme)
) except RequestError:
log.warning(f'Unable to get head time: {fqme} ?')
pass
async def get_hist( async def get_hist(
timeframe: float, timeframe: float,
@ -179,15 +173,8 @@ async def open_history_client(
start_dt: datetime | None = None, start_dt: datetime | None = None,
) -> tuple[np.ndarray, str]: ) -> tuple[np.ndarray, str]:
nonlocal max_timeout, mean, count nonlocal max_timeout, mean, count
if (
start_dt
and start_dt.timestamp() == 0
):
await tractor.pause()
query_start = time.time() query_start = time.time()
out, timedout = await get_bars( out, timedout = await get_bars(
proxy, proxy,
@ -208,48 +195,24 @@ async def open_history_client(
f'mean: {mean}' f'mean: {mean}'
) )
# could be trying to retrieve bars over weekend if (
if out is None: out is None
):
# could be trying to retrieve bars over weekend
log.error(f"Can't grab bars starting at {end_dt}!?!?") log.error(f"Can't grab bars starting at {end_dt}!?!?")
if ( raise NoData(
end_dt f'{end_dt}',
and head_dt # frame_size=2000,
and end_dt <= head_dt )
):
raise DataUnavailable(
f'First timestamp is {head_dt}\n'
f'But {end_dt} was requested..'
)
else: if (
raise NoData( end_dt
info={ and head_dt
'fqme': fqme, and end_dt <= head_dt
'head_dt': head_dt, ):
'start_dt': start_dt, raise DataUnavailable(f'First timestamp is {head_dt}')
'end_dt': end_dt,
'timedout': timedout,
},
)
# also see return type for `get_bars()` bars, bars_array, first_dt, last_dt = out
bars: ibis.objects.BarDataList
bars_array: np.ndarray
first_dt: datetime
last_dt: datetime
(
bars,
bars_array,
first_dt,
last_dt,
) = out
# TODO: audit the sampling period here as well?
# timestep should always be at least as large as the
# period step.
# tdiff: np.ndarray = np.diff(bars_array['time'])
# if (tdiff < timeframe).any():
# await tractor.pause()
# volume cleaning since there's -ve entries, # volume cleaning since there's -ve entries,
# wood luv to know what crookery that is.. # wood luv to know what crookery that is..
@ -263,18 +226,7 @@ async def open_history_client(
# quite sure why.. needs some tinkering and probably # quite sure why.. needs some tinkering and probably
# a lookthrough of the ``ib_insync`` machinery, for eg. maybe # a lookthrough of the ``ib_insync`` machinery, for eg. maybe
# we have to do the batch queries on the `asyncio` side? # we have to do the batch queries on the `asyncio` side?
yield ( yield get_hist, {'erlangs': 1, 'rate': 3}
get_hist,
{
'erlangs': 1, # max conc reqs
'rate': 3, # max req rate
'frame_types': { # expected frame sizes
1: mk_duration(seconds=2e3),
60: mk_duration(days=2),
}
},
)
_pacing: str = ( _pacing: str = (
@ -419,11 +371,7 @@ async def get_bars(
while _failed_resets < max_failed_resets: while _failed_resets < max_failed_resets:
try: try:
( out = await proxy.bars(
bars,
bars_array,
dt_duration,
) = await proxy.bars(
fqme=fqme, fqme=fqme,
end_dt=end_dt, end_dt=end_dt,
sample_period_s=timeframe, sample_period_s=timeframe,
@ -434,58 +382,44 @@ async def get_bars(
# current impl) to detect a cancel case. # current impl) to detect a cancel case.
# timeout=timeout, # timeout=timeout,
) )
# usually either a request during a venue closure if out is None:
# or into a large (weekend) closure gap. raise NoData(f'{end_dt}')
if not bars:
# no data returned? bars, bars_array, dt_duration = out
log.warning(
'History frame is blank?\n'
f'start_dt: {start_dt}\n'
f'end_dt: {end_dt}\n'
f'duration: {dt_duration}\n'
)
# NOTE: REQUIRED to pass back value..
result = None
return None
# not enough bars signal, likely due to venue # not enough bars signal, likely due to venue
# operational gaps. # operational gaps.
if end_dt: too_little: bool = False
dur_s: float = len(bars) * timeframe if (
bars_dur = Duration(seconds=dur_s) end_dt
dt_dur_s: float = dt_duration.in_seconds() and (
if dur_s < dt_dur_s: not bars
log.warning( or (too_little :=
'History frame is shorter than expected?\n' start_dt
f'start_dt: {start_dt}\n' and (len(bars) * timeframe)
f'end_dt: {end_dt}\n' < dt_duration.in_seconds()
f'duration: {dt_dur_s}\n'
f'frame duration seconds: {dur_s}\n'
f'dur diff: {dt_duration - bars_dur}\n'
) )
# NOTE: we used to try to get a minimal )
# set of bars by recursing but this ran ):
# into possible infinite query loops if (
# when logic in the `Client.bars()` dt end_dt
# diffing went bad. So instead for now or too_little
# we just return the ):
# shorter-then-expected history with log.warning(
# a warning. f'History is blank for {dt_duration} from {end_dt}'
# TODO: in the future it prolly makes )
# the most sense to do venue operating )
# hours lookup and continue
# timestamp-in-operating-range set
# checking to know for sure if we can
# safely and quickly ignore non-uniform history
# frame timestamp gaps..
# end_dt -= dt_duration
# continue
# await tractor.pause()
first_dt = from_timestamp( raise NoData(f'{end_dt}')
if bars_array is None:
raise SymbolNotFound(fqme)
first_dt = pendulum.from_timestamp(
bars[0].date.timestamp()) bars[0].date.timestamp())
last_dt = from_timestamp( last_dt = pendulum.from_timestamp(
bars[-1].date.timestamp()) bars[-1].date.timestamp())
time = bars_array['time'] time = bars_array['time']
@ -498,7 +432,6 @@ async def get_bars(
if data_cs: if data_cs:
data_cs.cancel() data_cs.cancel()
# NOTE: setting this is critical!
result = ( result = (
bars, # ib native bars, # ib native
bars_array, # numpy bars_array, # numpy
@ -509,7 +442,6 @@ async def get_bars(
# signal data reset loop parent task # signal data reset loop parent task
result_ready.set() result_ready.set()
# NOTE: this isn't getting collected anywhere!
return result return result
except RequestError as err: except RequestError as err:
@ -535,7 +467,7 @@ async def get_bars(
if end_dt is not None: if end_dt is not None:
end_dt = end_dt.subtract(days=1) end_dt = end_dt.subtract(days=1)
elif end_dt is None: elif end_dt is None:
end_dt = now().subtract(days=1) end_dt = pendulum.now().subtract(days=1)
log.warning( log.warning(
f'NO DATA found ending @ {end_dt}\n' f'NO DATA found ending @ {end_dt}\n'
@ -671,8 +603,8 @@ async def _setup_quote_stream(
# making them mostly useless and explains why the scanner # making them mostly useless and explains why the scanner
# is always slow XD # is always slow XD
# '293', # Trade count for day # '293', # Trade count for day
# '294', # Trade rate / minute '294', # Trade rate / minute
# '295', # Vlm rate / minute '295', # Vlm rate / minute
), ),
contract: Contract | None = None, contract: Contract | None = None,
@ -883,10 +815,7 @@ async def stream_quotes(
proxy: MethodProxy proxy: MethodProxy
mkt: MktPair mkt: MktPair
details: ibis.ContractDetails details: ibis.ContractDetails
async with ( async with open_data_client() as proxy:
open_data_client() as proxy,
# trio.open_nursery() as tn,
):
mkt, details = await get_mkt_info( mkt, details = await get_mkt_info(
sym, sym,
proxy=proxy, # passed to avoid implicit client load proxy=proxy, # passed to avoid implicit client load
@ -906,50 +835,30 @@ async def stream_quotes(
init_msgs.append(init_msg) init_msgs.append(init_msg)
con: Contract = details.contract con: Contract = details.contract
first_ticker: Ticker | None = None first_ticker: Ticker = await proxy.get_quote(contract=con)
first_quote: dict = normalize(first_ticker)
log.warning(f'FIRST QUOTE: {first_quote}')
# TODO: we should instead spawn a task that waits on a feed to start
# and let it wait indefinitely..instead of this hard coded stuff.
with trio.move_on_after(1): with trio.move_on_after(1):
first_ticker: Ticker = await proxy.get_quote( first_ticker = await proxy.get_quote(
contract=con, contract=con,
raise_on_timeout=False, raise_on_timeout=True,
) )
if first_ticker: # it might be outside regular trading hours so see if we can at
first_quote: dict = normalize(first_ticker) # least grab history.
# TODO: we need a stack-oriented log levels filters for
# this!
# log.info(message, filter={'stack': 'live_feed'}) ?
log.runtime(
'Rxed init quote:\n\n'
f'{pformat(first_quote)}\n'
)
# NOTE: it might be outside regular trading hours for
# assets with "standard venue operating hours" so we
# only "pretend the feed is live" when the dst asset
# type is NOT within the NON-NORMAL-venue set: aka not
# commodities, forex or crypto currencies which CAN
# always return a NaN on a snap quote request during
# normal venue hours. In the case of a closed venue
# (equitiies, futes, bonds etc.) we at least try to
# grab the OHLC history.
if ( if (
first_ticker isnan(first_ticker.last) # last quote price value is nan
and
isnan(first_ticker.last)
# SO, if the last quote price value is NaN we ONLY
# "pretend to do" `feed_is_live.set()` if it's a known
# dst asset venue with a lot of closed operating hours.
and mkt.dst.atype not in { and mkt.dst.atype not in {
'commodity', 'commodity',
'fiat', 'fiat',
'crypto', 'crypto',
} }
): ):
task_status.started(( task_status.started((init_msgs, first_quote))
init_msgs,
first_quote,
))
# it's not really live but this will unblock # it's not really live but this will unblock
# the brokerd feed task to tell the ui to update? # the brokerd feed task to tell the ui to update?
@ -959,28 +868,6 @@ async def stream_quotes(
await trio.sleep_forever() await trio.sleep_forever()
return # we never expect feed to come up? return # we never expect feed to come up?
# TODO: we should instead spawn a task that waits on a feed
# to start and let it wait indefinitely..instead of this
# hard coded stuff.
# async def wait_for_first_quote():
# with trio.CancelScope() as cs:
# XXX: MUST acquire a ticker + first quote before starting
# the live quotes loop!
# with trio.move_on_after(1):
first_ticker = await proxy.get_quote(
contract=con,
raise_on_timeout=True,
)
first_quote: dict = normalize(first_ticker)
# TODO: we need a stack-oriented log levels filters for
# this!
# log.info(message, filter={'stack': 'live_feed'}) ?
log.runtime(
'Rxed init quote:\n'
f'{pformat(first_quote)}'
)
cs: trio.CancelScope | None = None cs: trio.CancelScope | None = None
startup: bool = True startup: bool = True
while ( while (
@ -1001,11 +888,8 @@ async def stream_quotes(
# only on first entry at feed boot up # only on first entry at feed boot up
if startup: if startup:
startup: bool = False startup = False
task_status.started(( task_status.started((init_msgs, first_quote))
init_msgs,
first_quote,
))
# start a stream restarter task which monitors the # start a stream restarter task which monitors the
# data feed event. # data feed event.
@ -1029,7 +913,7 @@ async def stream_quotes(
# generally speaking these feeds don't # generally speaking these feeds don't
# include vlm data. # include vlm data.
atype: str = mkt.dst.atype atype = mkt.dst.atype
log.info( log.info(
f'No-vlm {mkt.fqme}@{atype}, skipping quote poll' f'No-vlm {mkt.fqme}@{atype}, skipping quote poll'
) )
@ -1065,8 +949,7 @@ async def stream_quotes(
quote = normalize(ticker) quote = normalize(ticker)
log.debug(f"First ticker received {quote}") log.debug(f"First ticker received {quote}")
# tell data-layer spawner-caller that live # tell caller quotes are now coming in live
# quotes are now streaming.
feed_is_live.set() feed_is_live.set()
# last = time.time() # last = time.time()
@ -31,11 +31,7 @@ from typing import (
) )
from bidict import bidict from bidict import bidict
from pendulum import ( import pendulum
DateTime,
parse,
from_timestamp,
)
from ib_insync import ( from ib_insync import (
Contract, Contract,
Commodity, Commodity,
@ -70,11 +66,10 @@ tx_sort: Callable = partial(
iter_by_dt, iter_by_dt,
parsers={ parsers={
'dateTime': parse_flex_dt, 'dateTime': parse_flex_dt,
'datetime': parse, 'datetime': pendulum.parse,
# for some fucking 2022 and
# XXX: for some fucking 2022 and # back options records...fuck me.
# back options records.. f@#$ me.. 'date': pendulum.parse,
'date': parse,
} }
) )
@ -94,38 +89,15 @@ def norm_trade(
conid: int = str(record.get('conId') or record['conid']) conid: int = str(record.get('conId') or record['conid'])
bs_mktid: str = str(conid) bs_mktid: str = str(conid)
comms = record.get('commission')
if comms is None:
comms = -1*record['ibCommission']
# NOTE: sometimes weird records (like BTTX?) price = record.get('price') or record['tradePrice']
# have no field for this?
comms: float = -1 * (
record.get('commission')
or record.get('ibCommission')
or 0
)
if not comms:
log.warning(
'No commissions found for record?\n'
f'{pformat(record)}\n'
)
price: float = (
record.get('price')
or record.get('tradePrice')
)
if price is None:
log.warning(
'No `price` field found in record?\n'
'Skipping normalization..\n'
f'{pformat(record)}\n'
)
return None
# the api doesn't do the -/+ on the quantity for you but flex # the api doesn't do the -/+ on the quantity for you but flex
# records do.. are you fucking serious ib...!? # records do.. are you fucking serious ib...!?
size: float|int = ( size = record.get('quantity') or record['shares'] * {
record.get('quantity')
or record['shares']
) * {
'BOT': 1, 'BOT': 1,
'SLD': -1, 'SLD': -1,
}[record['side']] }[record['side']]
@ -156,31 +128,26 @@ def norm_trade(
# otype = tail[6] # otype = tail[6]
# strike = tail[7:] # strike = tail[7:]
log.warning( print(f'skipping opts contract {symbol}')
f'Skipping option contract -> NO SUPPORT YET!\n'
f'{symbol}\n'
)
return None return None
# timestamping is way different in API records # timestamping is way different in API records
dtstr: str = record.get('datetime') dtstr = record.get('datetime')
date: str = record.get('date') date = record.get('date')
flex_dtstr: str = record.get('dateTime') flex_dtstr = record.get('dateTime')
if dtstr or date: if dtstr or date:
dt: DateTime = parse(dtstr or date) dt = pendulum.parse(dtstr or date)
elif flex_dtstr: elif flex_dtstr:
# probably a flex record with a wonky non-std timestamp.. # probably a flex record with a wonky non-std timestamp..
dt: DateTime = parse_flex_dt(record['dateTime']) dt = parse_flex_dt(record['dateTime'])
# special handling of symbol extraction from # special handling of symbol extraction from
# flex records using some ad-hoc schema parsing. # flex records using some ad-hoc schema parsing.
asset_type: str = ( asset_type: str = record.get(
record.get('assetCategory') 'assetCategory'
or record.get('secType') ) or record.get('secType', 'STK')
or 'STK'
)
if (expiry := ( if (expiry := (
record.get('lastTradeDateOrContractMonth') record.get('lastTradeDateOrContractMonth')
@ -390,7 +357,6 @@ def norm_trade_records(
if txn is None: if txn is None:
continue continue
# inject txns sorted by datetime
insort( insort(
records, records,
txn, txn,
@ -439,7 +405,7 @@ def api_trades_to_ledger_entries(
txn_dict[attr_name] = val txn_dict[attr_name] = val
tid = str(txn_dict['execId']) tid = str(txn_dict['execId'])
dt = from_timestamp(txn_dict['time']) dt = pendulum.from_timestamp(txn_dict['time'])
txn_dict['datetime'] = str(dt) txn_dict['datetime'] = str(dt)
acctid = accounts[txn_dict['acctNumber']] acctid = accounts[txn_dict['acctNumber']]
@ -209,15 +209,12 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
break break
ib_client = proxy._aio_ns.ib ib_client = proxy._aio_ns.ib
log.info( log.info(f'Using {ib_client} for symbol search')
f'Using API client for symbol-search\n'
f'{ib_client}\n'
)
last = time.time() last = time.time()
async for pattern in stream: async for pattern in stream:
log.info(f'received {pattern}') log.info(f'received {pattern}')
now: float = time.time() now = time.time()
# this causes tractor hang... # this causes tractor hang...
# assert 0 # assert 0
@ -264,9 +261,7 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
# defined adhoc symbol set. # defined adhoc symbol set.
stock_results = [] stock_results = []
async def extend_results( async def stash_results(target: Awaitable[list]):
target: Awaitable[list]
) -> None:
try: try:
results = await target results = await target
except tractor.trionics.Lagged: except tractor.trionics.Lagged:
@ -279,7 +274,7 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
with trio.move_on_after(3) as cs: with trio.move_on_after(3) as cs:
async with trio.open_nursery() as sn: async with trio.open_nursery() as sn:
sn.start_soon( sn.start_soon(
extend_results, stash_results,
proxy.search_symbols( proxy.search_symbols(
pattern=pattern, pattern=pattern,
upto=5, upto=5,
@ -294,10 +289,8 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
f'Search timeout? {proxy._aio_ns.ib.client}' f'Search timeout? {proxy._aio_ns.ib.client}'
) )
continue continue
elif stock_results: else:
break break
# else:
# await tractor.pause()
# # match against our ad-hoc set immediately # # match against our ad-hoc set immediately
# adhoc_matches = fuzzy.extract( # adhoc_matches = fuzzy.extract(
@ -525,21 +518,7 @@ async def get_mkt_info(
venue = con.primaryExchange or con.exchange venue = con.primaryExchange or con.exchange
price_tick: Decimal = Decimal(str(details.minTick)) price_tick: Decimal = Decimal(str(details.minTick))
ib_min_tick_gt_2: Decimal = Decimal('0.01') # price_tick: Decimal = Decimal('0.01')
if (
price_tick < ib_min_tick_gt_2
):
# TODO: we need to add some kinda dynamic rounding sys
# to our MktPair i guess?
# not sure where the logic should sit, but likely inside
# the `.clearing._ems` i suppose...
log.warning(
'IB seems to disallow a min price tick < 0.01 '
'when the price is > 2.0..?\n'
f'Decreasing min tick precision for {fqme} to 0.01'
)
# price_tick = ib_min_tick
# await tractor.pause()
if atype == 'stock': if atype == 'stock':
# XXX: GRRRR they don't support fractional share sizes for # XXX: GRRRR they don't support fractional share sizes for
@ -27,8 +27,8 @@ from typing import (
) )
import time import time
import httpx
import pendulum import pendulum
import asks
import numpy as np import numpy as np
import urllib.parse import urllib.parse
import hashlib import hashlib
@ -60,11 +60,6 @@ log = get_logger('piker.brokers.kraken')
# <uri>/<version>/ # <uri>/<version>/
_url = 'https://api.kraken.com/0' _url = 'https://api.kraken.com/0'
_headers: dict[str, str] = {
'User-Agent': 'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
}
# TODO: this is the only backend providing this right? # TODO: this is the only backend providing this right?
# in which case we should drop it from the defaults and # in which case we should drop it from the defaults and
# instead make a custom fields descr in this module! # instead make a custom fields descr in this module!
@ -75,18 +70,12 @@ _symbol_info_translation: dict[str, str] = {
def get_config() -> dict[str, Any]: def get_config() -> dict[str, Any]:
'''
Load our section from `piker/brokers.toml`.
''' conf, path = config.load()
conf, path = config.load( section = conf.get('kraken')
conf_name='brokers',
touch_if_dne=True, if section is None:
) log.warning(f'No config section found for kraken in {path}')
if (section := conf.get('kraken')) is None:
log.warning(
f'No config section found for kraken in {path}'
)
return {} return {}
return section return section
@ -140,15 +129,16 @@ class Client:
def __init__( def __init__(
self, self,
config: dict[str, str], config: dict[str, str],
httpx_client: httpx.AsyncClient,
name: str = '', name: str = '',
api_key: str = '', api_key: str = '',
secret: str = '' secret: str = ''
) -> None: ) -> None:
self._sesh = asks.Session(connections=4)
self._sesh: httpx.AsyncClient = httpx_client self._sesh.base_location = _url
self._sesh.headers.update({
'User-Agent':
'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
})
self._name = name self._name = name
self._api_key = api_key self._api_key = api_key
self._secret = secret self._secret = secret
@ -170,9 +160,10 @@ class Client:
method: str, method: str,
data: dict, data: dict,
) -> dict[str, Any]: ) -> dict[str, Any]:
resp: httpx.Response = await self._sesh.post( resp = await self._sesh.post(
url=f'/public/{method}', path=f'/public/{method}',
json=data, json=data,
timeout=float('inf')
) )
return resproc(resp, log) return resproc(resp, log)
@ -183,18 +174,18 @@ class Client:
uri_path: str uri_path: str
) -> dict[str, Any]: ) -> dict[str, Any]:
headers = { headers = {
'Content-Type': 'application/x-www-form-urlencoded', 'Content-Type':
'API-Key': self._api_key, 'application/x-www-form-urlencoded',
'API-Sign': get_kraken_signature( 'API-Key':
uri_path, self._api_key,
data, 'API-Sign':
self._secret, get_kraken_signature(uri_path, data, self._secret)
),
} }
resp: httpx.Response = await self._sesh.post( resp = await self._sesh.post(
url=f'/private/{method}', path=f'/private/{method}',
data=data, data=data,
headers=headers, headers=headers,
timeout=float('inf')
) )
return resproc(resp, log) return resproc(resp, log)
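For reference, Kraken's documented `API-Sign` scheme, which the `get_kraken_signature()` helper used above presumably implements (a sketch per their public API docs, not this repo's exact code):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def get_kraken_signature(
    uri_path: str,
    data: dict[str, str],
    secret: str,
) -> str:
    # HMAC-SHA512 over (path + SHA256(nonce + postdata)) keyed by
    # the base64-decoded API secret, re-encoded as base64.
    postdata: str = urlencode(data)
    encoded: bytes = (str(data['nonce']) + postdata).encode()
    message: bytes = uri_path.encode() + hashlib.sha256(encoded).digest()
    sig = hmac.new(
        base64.b64decode(secret),
        message,
        hashlib.sha512,
    )
    return base64.b64encode(sig.digest()).decode()
```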
@ -668,36 +659,24 @@ class Client:
@acm @acm
async def get_client() -> Client: async def get_client() -> Client:
conf: dict[str, Any] = get_config() conf = get_config()
async with httpx.AsyncClient( if conf:
base_url=_url, client = Client(
headers=_headers, conf,
# TODO: is there a way to numerate this? # TODO: don't break these up and just do internal
# https://www.python-httpx.org/advanced/clients/#why-use-a-client # conf lookups instead..
# connections=4 name=conf['key_descr'],
) as trio_client: api_key=conf['api_key'],
if conf: secret=conf['secret']
client = Client( )
conf, else:
httpx_client=trio_client, client = Client({})
# TODO: don't break these up and just do internal # at startup, load all symbols, and asset info in
# conf lookups instead.. # batch requests.
name=conf['key_descr'], async with trio.open_nursery() as nurse:
api_key=conf['api_key'], nurse.start_soon(client.get_assets)
secret=conf['secret'] await client.get_mkt_pairs()
)
else:
client = Client(
conf={},
httpx_client=trio_client,
)
# at startup, load all symbols, and asset info in yield client
# batch requests.
async with trio.open_nursery() as nurse:
nurse.start_soon(client.get_assets)
await client.get_mkt_pairs()
yield client
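
The pattern above, in isolation: the transport session is constructed once by the caller and injected, instead of every `Client.__init__()` spinning up its own `asks.Session`. A minimal sketch of that injection style, assuming a hypothetical `DemoClient` (httpx runs under `trio` via `anyio`):

from contextlib import asynccontextmanager

import httpx
import trio


class DemoClient:
    def __init__(self, httpx_client: httpx.AsyncClient) -> None:
        # injected transport session; base-url, headers and lifetime
        # are all owned by the caller's `async with` block.
        self._sesh = httpx_client

    async def server_time(self) -> dict:
        # kraken's public server-time endpoint, relative to base_url
        resp: httpx.Response = await self._sesh.post(url='/public/Time')
        resp.raise_for_status()
        return resp.json()


@asynccontextmanager
async def get_demo_client():
    async with httpx.AsyncClient(
        base_url='https://api.kraken.com/0',
        headers={'User-Agent': 'demo/0.1'},
    ) as trio_client:
        yield DemoClient(httpx_client=trio_client)


async def main():
    async with get_demo_client() as client:
        print(await client.server_time())


trio.run(main)

Keeping the session's lifetime in the acm means connection pooling and teardown happen in exactly one place.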
View File

@ -612,18 +612,18 @@ async def open_trade_dialog(
# enter relay loop # enter relay loop
await handle_order_updates( await handle_order_updates(
client=client, client,
ws=ws, ws,
ws_stream=stream, stream,
ems_stream=ems_stream, ems_stream,
apiflows=apiflows, apiflows,
ids=ids, ids,
reqids2txids=reqids2txids, reqids2txids,
acnt=acnt, acnt,
ledger=ledger, api_trans,
acctid=acctid, acctid,
acc_name=acc_name, acc_name,
token=token, token,
) )
@ -639,8 +639,7 @@ async def handle_order_updates(
# transaction records which will be updated # transaction records which will be updated
# on new trade clearing events (aka order "fills") # on new trade clearing events (aka order "fills")
ledger: TransactionLedger, ledger_trans: dict[str, Transaction],
# ledger_trans: dict[str, Transaction],
acctid: str, acctid: str,
acc_name: str, acc_name: str,
token: str, token: str,
@ -700,8 +699,7 @@ async def handle_order_updates(
# if tid not in ledger_trans # if tid not in ledger_trans
} }
for tid, trade in trades.items(): for tid, trade in trades.items():
# assert tid not in ledger_trans assert tid not in ledger_trans
assert tid not in ledger
txid = trade['ordertxid'] txid = trade['ordertxid']
reqid = trade.get('userref') reqid = trade.get('userref')
@ -749,17 +747,11 @@ async def handle_order_updates(
client, client,
api_name_set='wsname', api_name_set='wsname',
) )
ppmsgs: list[BrokerdPosition] = trades2pps( ppmsgs = trades2pps(
acnt=acnt, acnt,
ledger=ledger, acctid,
acctid=acctid, new_trans,
new_trans=new_trans,
) )
# ppmsgs = trades2pps(
# acnt,
# acctid,
# new_trans,
# )
for pp_msg in ppmsgs: for pp_msg in ppmsgs:
await ems_stream.send(pp_msg) await ems_stream.send(pp_msg)
View File

@ -16,9 +16,10 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
Kucoin cex API backend. Kucoin broker backend
''' '''
from contextlib import ( from contextlib import (
asynccontextmanager as acm, asynccontextmanager as acm,
aclosing, aclosing,
@ -40,8 +41,9 @@ from typing import (
import wsproto import wsproto
from uuid import uuid4 from uuid import uuid4
from rapidfuzz import process as fuzzy
from trio_typing import TaskStatus from trio_typing import TaskStatus
import httpx import asks
from bidict import bidict from bidict import bidict
import numpy as np import numpy as np
import pendulum import pendulum
@ -62,7 +64,7 @@ from piker._cacheables import (
) )
from piker.log import get_logger from piker.log import get_logger
from piker.data.validate import FeedInit from piker.data.validate import FeedInit
from piker.types import Struct # NOTE, this is already a `tractor.msg.Struct` from piker.types import Struct
from piker.data import ( from piker.data import (
def_iohlcv_fields, def_iohlcv_fields,
match_from_pairs, match_from_pairs,
@ -98,18 +100,9 @@ class KucoinMktPair(Struct, frozen=True):
def size_tick(self) -> Decimal: def size_tick(self) -> Decimal:
return Decimal(str(self.quoteMinSize)) return Decimal(str(self.quoteMinSize))
callauctionFirstStageStartTime: float|None
callauctionIsEnabled: bool
callauctionPriceCeiling: float|None
callauctionPriceFloor: float|None
callauctionSecondStageStartTime: float|None
callauctionThirdStageStartTime: float|None
enableTrading: bool enableTrading: bool
feeCategory: int
feeCurrency: str feeCurrency: str
isMarginEnabled: bool isMarginEnabled: bool
makerFeeCoefficient: float
market: str market: str
minFunds: float minFunds: float
name: str name: str
@ -119,10 +112,7 @@ class KucoinMktPair(Struct, frozen=True):
quoteIncrement: float quoteIncrement: float
quoteMaxSize: float quoteMaxSize: float
quoteMinSize: float quoteMinSize: float
st: bool
symbol: str # our bs_mktid, kucoin's internal id symbol: str # our bs_mktid, kucoin's internal id
takerFeeCoefficient: float
tradingStartTime: float|None
class AccountTrade(Struct, frozen=True): class AccountTrade(Struct, frozen=True):
@ -223,12 +213,8 @@ def get_config() -> BrokerConfig | None:
class Client: class Client:
def __init__( def __init__(self) -> None:
self, self._config: BrokerConfig | None = get_config()
httpx_client: httpx.AsyncClient,
) -> None:
self._http: httpx.AsyncClient = httpx_client
self._config: BrokerConfig|None = get_config()
self._pairs: dict[str, KucoinMktPair] = {} self._pairs: dict[str, KucoinMktPair] = {}
self._fqmes2mktids: bidict[str, str] = bidict() self._fqmes2mktids: bidict[str, str] = bidict()
self._bars: list[list[float]] = [] self._bars: list[list[float]] = []
@ -242,24 +228,18 @@ class Client:
) -> dict[str, str | bytes]: ) -> dict[str, str | bytes]:
''' '''
Generate authenticated request headers: Generate authenticated request headers
https://docs.kucoin.com/#authentication https://docs.kucoin.com/#authentication
https://www.kucoin.com/docs/basic-info/connection-method/authentication/creating-a-request
https://www.kucoin.com/docs/basic-info/connection-method/authentication/signing-a-message
''' '''
if not self._config: if not self._config:
raise ValueError( raise ValueError(
'No config found when trying to send authenticated request' 'No config found when trying to send authenticated request')
)
str_to_sign = ( str_to_sign = (
str(int(time.time() * 1000)) str(int(time.time() * 1000))
+ + action + f'/api/{api}/{endpoint.lstrip("/")}'
action
+
f'/api/{api}/{endpoint.lstrip("/")}'
) )
signature = base64.b64encode( signature = base64.b64encode(
@ -270,7 +250,6 @@ class Client:
).digest() ).digest()
) )
# TODO: can we cache this between calls?
passphrase = base64.b64encode( passphrase = base64.b64encode(
hmac.new( hmac.new(
self._config.key_secret.encode('utf-8'), self._config.key_secret.encode('utf-8'),
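
For reference, the signing recipe from the hunks above boiled down to a self-contained sketch; the `KC-API-*` header names follow the linked kucoin docs and the key material here is obviously a placeholder:

import base64
import hashlib
import hmac
import time


def sign_request(
    secret: str,
    action: str,   # 'GET' | 'POST'
    path: str,     # e.g. '/api/v1/currencies'
    body: str = '',
) -> dict[str, str]:
    now_ms: str = str(int(time.time() * 1000))
    # timestamp + verb + full api path (+ body), HMAC-SHA256'd
    str_to_sign: str = now_ms + action + path + body
    signature: bytes = base64.b64encode(
        hmac.new(
            secret.encode('utf-8'),
            str_to_sign.encode('utf-8'),
            hashlib.sha256,
        ).digest()
    )
    return {
        'KC-API-TIMESTAMP': now_ms,
        'KC-API-SIGN': signature.decode('ascii'),
    }


print(sign_request('s3cr3t', 'GET', '/api/v1/currencies'))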
@ -292,10 +271,8 @@ class Client:
self, self,
action: Literal['POST', 'GET'], action: Literal['POST', 'GET'],
endpoint: str, endpoint: str,
api: str = 'v2', api: str = 'v2',
headers: dict = {}, headers: dict = {},
) -> Any: ) -> Any:
''' '''
Generic request wrapper for Kucoin API Generic request wrapper for Kucoin API
@ -308,19 +285,14 @@ class Client:
api, api,
) )
req_meth: Callable = getattr( api_url = f'https://api.kucoin.com/api/{api}/{endpoint}'
self._http,
action.lower(), res = await asks.request(action, api_url, headers=headers)
)
res = await req_meth( json = res.json()
url=f'/{api}/{endpoint}', if 'data' in json:
headers=headers, return json['data']
)
json: dict = res.json()
if (data := json.get('data')) is not None:
return data
else: else:
api_url: str = self._http.base_url
log.error( log.error(
f'Error making request to {api_url} ->\n' f'Error making request to {api_url} ->\n'
f'{pformat(res)}' f'{pformat(res)}'
@ -340,7 +312,7 @@ class Client:
''' '''
token_type = 'private' if private else 'public' token_type = 'private' if private else 'public'
try: try:
data: dict[str, Any]|None = await self._request( data: dict[str, Any] | None = await self._request(
'POST', 'POST',
endpoint=f'bullet-{token_type}', endpoint=f'bullet-{token_type}',
api='v1' api='v1'
@ -378,8 +350,8 @@ class Client:
currencies: dict[str, Currency] = {} currencies: dict[str, Currency] = {}
entries: list[dict] = await self._request( entries: list[dict] = await self._request(
'GET', 'GET',
endpoint='currencies',
api='v1', api='v1',
endpoint='currencies',
) )
for entry in entries: for entry in entries:
curr = Currency(**entry).copy() curr = Currency(**entry).copy()
@ -395,22 +367,13 @@ class Client:
dict[str, KucoinMktPair], dict[str, KucoinMktPair],
bidict[str, KucoinMktPair], bidict[str, KucoinMktPair],
]: ]:
entries = await self._request( entries = await self._request('GET', 'symbols')
'GET',
endpoint='symbols',
)
log.info(f' {len(entries)} Kucoin market pairs fetched') log.info(f' {len(entries)} Kucoin market pairs fetched')
pairs: dict[str, KucoinMktPair] = {} pairs: dict[str, KucoinMktPair] = {}
fqmes2mktids: bidict[str, str] = bidict() fqmes2mktids: bidict[str, str] = bidict()
for item in entries: for item in entries:
try: pair = pairs[item['name']] = KucoinMktPair(**item)
pair = pairs[item['name']] = KucoinMktPair(**item)
except TypeError as te:
raise TypeError(
'`KucoinMktPair` and response fields do not match ??\n'
f'{KucoinMktPair.fields_diff(item)}\n'
) from te
fqmes2mktids[ fqmes2mktids[
item['name'].lower().replace('-', '') item['name'].lower().replace('-', '')
] = pair.name ] = pair.name
@ -453,7 +416,8 @@ class Client:
await self.get_mkt_pairs() await self.get_mkt_pairs()
assert self._pairs, '`Client.get_mkt_pairs()` was never called!?' assert self._pairs, '`Client.get_mkt_pairs()` was never called!?'
matches: dict[str, KucoinMktPair] = match_from_pairs(
matches: dict[str, Pair] = match_from_pairs(
pairs=self._pairs, pairs=self._pairs,
# query=pattern.upper(), # query=pattern.upper(),
query=pattern.upper(), query=pattern.upper(),
@ -605,21 +569,13 @@ def fqme_to_kucoin_sym(
@acm @acm
async def get_client() -> AsyncGenerator[Client, None]: async def get_client() -> AsyncGenerator[Client, None]:
''' client = Client()
Load an API `Client` preconfigured from user settings
''' async with trio.open_nursery() as n:
async with ( n.start_soon(client.get_mkt_pairs)
httpx.AsyncClient( await client.get_currencies()
base_url='https://api.kucoin.com/api',
) as trio_client,
):
client = Client(httpx_client=trio_client)
async with trio.open_nursery() as tn:
tn.start_soon(client.get_mkt_pairs)
await client.get_currencies()
yield client yield client
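
The startup concurrency above, minus the HTTP bits: one batch request is kicked off in the nursery while another is awaited inline, so both finish before the client is yielded (the `fetch()` stand-in is hypothetical):

import trio


async def fetch(name: str) -> None:
    await trio.sleep(0.1)  # stand-in for an http batch request
    print(f'{name} loaded')


async def main():
    async with trio.open_nursery() as tn:
        # symbols load in the background..
        tn.start_soon(fetch, 'mkt_pairs')
        # ..while asset info is awaited inline; the nursery only
        # exits once both are done.
        await fetch('currencies')


trio.run(main)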
@tractor.context @tractor.context
@ -655,7 +611,7 @@ async def open_ping_task(
await trio.sleep((ping_interval - 1000) / 1000) await trio.sleep((ping_interval - 1000) / 1000)
await ws.send_msg({'id': connect_id, 'type': 'ping'}) await ws.send_msg({'id': connect_id, 'type': 'ping'})
log.warning('Starting ping task for kucoin ws connection') log.info('Starting ping task for kucoin ws connection')
n.start_soon(ping_server) n.start_soon(ping_server)
yield yield
@ -667,14 +623,9 @@ async def open_ping_task(
async def get_mkt_info( async def get_mkt_info(
fqme: str, fqme: str,
) -> tuple[ ) -> tuple[MktPair, KucoinMktPair]:
MktPair,
KucoinMktPair,
]:
''' '''
Query for and return both a `piker.accounting.MktPair` and Query for and return a `MktPair` and `KucoinMktPair`.
`KucoinMktPair` from provided `fqme: str`
(fully-qualified-market-endpoint).
''' '''
async with open_cached_client('kucoin') as client: async with open_cached_client('kucoin') as client:
@ -749,8 +700,6 @@ async def stream_quotes(
log.info(f'Starting up quote stream(s) for {symbols}') log.info(f'Starting up quote stream(s) for {symbols}')
for sym_str in symbols: for sym_str in symbols:
mkt: MktPair
pair: KucoinMktPair
mkt, pair = await get_mkt_info(sym_str) mkt, pair = await get_mkt_info(sym_str)
init_msgs.append( init_msgs.append(
FeedInit(mkt_info=mkt) FeedInit(mkt_info=mkt)
@ -758,11 +707,7 @@ async def stream_quotes(
ws: NoBsWs ws: NoBsWs
token, ping_interval = await client._get_ws_token() token, ping_interval = await client._get_ws_token()
log.info(f'API reported ping_interval: {ping_interval}\n') connect_id = str(uuid4())
connect_id: str = str(uuid4())
typ: str
quote: dict
async with ( async with (
open_autorecon_ws( open_autorecon_ws(
( (
@ -776,37 +721,20 @@ async def stream_quotes(
), ),
) as ws, ) as ws,
open_ping_task(ws, ping_interval, connect_id), open_ping_task(ws, ping_interval, connect_id),
aclosing( aclosing(stream_messages(ws, sym_str)) as msg_gen,
iter_normed_quotes(
ws, sym_str
)
) as iter_quotes,
): ):
typ, quote = await anext(iter_quotes) typ, quote = await anext(msg_gen)
# take care to not unblock here until we get a real while typ != 'trade':
# trade quote? # take care to not unblock here until we get a real
# ^TODO, remove this right? # trade quote
# -[ ] what often blocks chart boot/new-feed switching typ, quote = await anext(msg_gen)
# since we're waiting for a live quote instead of just
# loading history afap..
# |_ XXX, not sure if we require a bit of rework to core
# feed init logic or if backends just gotta be
# changed up.. feel like there was some causality
# dilemma prolly only seen with IB too..
# while typ != 'trade':
# typ, quote = await anext(iter_quotes)
task_status.started((init_msgs, quote)) task_status.started((init_msgs, quote))
feed_is_live.set() feed_is_live.set()
# XXX NOTE, DO NOT include the `.<backend>` suffix! async for typ, msg in msg_gen:
# OW the sampling loop will not broadcast correctly.. await send_chan.send({sym_str: msg})
# since `bus._subscribers.setdefault(bs_fqme, set())`
# is used inside `.data.open_feed_bus()` !!!
topic: str = mkt.bs_fqme
async for typ, quote in iter_quotes:
await send_chan.send({topic: quote})
@acm @acm
@ -861,7 +789,7 @@ async def subscribe(
) )
async def iter_normed_quotes( async def stream_messages(
ws: NoBsWs, ws: NoBsWs,
sym: str, sym: str,
@ -892,9 +820,6 @@ async def iter_normed_quotes(
yield 'trade', { yield 'trade', {
'symbol': sym, 'symbol': sym,
# TODO, is 'last' even used elsewhere/a-good
# semantic? can't we just read the ticks with our
# .data.ticktools.frame_ticks()`/
'last': trade_data.price, 'last': trade_data.price,
'brokerd_ts': last_trade_ts, 'brokerd_ts': last_trade_ts,
'ticks': [ 'ticks': [
@ -987,7 +912,7 @@ async def open_history_client(
if end_dt is None: if end_dt is None:
inow = round(time.time()) inow = round(time.time())
log.debug( print(
f'difference in time between load and processing: ' f'difference in time between load and processing: '
f'{inow - times[-1]}' f'{inow - times[-1]}'
) )
View File

@ -1,49 +0,0 @@
piker.clearing
______________
trade execution-n-control subsys for both live and paper trading as
well as algo-trading manual override/interaction across any backend
broker and data provider.
avail UIs
*********
order ctl
---------
the `piker.clearing` subsys is exposed mainly through
the `piker chart` GUI as a "chart trader" style UX and
is automatically enabled whenever a chart is opened.
.. ^TODO, more prose here!
the "manual" order control features are exposed via the
`piker.ui.order_mode` API and can pretty much always be
used (at least) in simulated-trading mode, aka "paper"-mode, and
the micro-manual is as follows:
``order_mode`` (
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
``search_mode`` (
``ctl-l`` or ``ctl-space`` to open,
``ctl-c`` or ``ctl-space`` to close
) :
- begin typing to have symbol search automatically look up
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation
position (pp) mgmt
------------------
you can also configure your position allocation limits from the
sidepane.
.. ^TODO, explain and provide tut once more refined!
View File

@ -913,17 +913,8 @@ async def translate_and_relay_brokerd_events(
}: }:
if ( if (
not oid not oid
# try to lookup any order dialog by
# brokerd-side id..
and not (
oid := book._ems2brokerd_ids.inverse.get(reqid)
)
): ):
log.warning( oid: str = book._ems2brokerd_ids.inverse[reqid]
f'Rxed unusable error-msg:\n'
f'{brokerd_msg}'
)
continue
msg = BrokerdError(**brokerd_msg) msg = BrokerdError(**brokerd_msg)
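
A toy of the reverse lookup now guarding that error path: `bidict.inverse.get()` maps a broker-side `reqid` back to an ems-side `oid` without raising (the table below is made up):

from bidict import bidict

# ems oid -> brokerd reqid; `.inverse` gives the reverse mapping
ems2brokerd: bidict[str, int] = bidict({'oid-abc': 42})

reqid: int = 42
if (oid := ems2brokerd.inverse.get(reqid)) is None:
    print(f'Rxed unusable error-msg for reqid={reqid}')
else:
    print(f'reqid={reqid} -> oid={oid!r}')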
View File

@ -26,7 +26,6 @@ from contextlib import asynccontextmanager as acm
from datetime import datetime from datetime import datetime
from operator import itemgetter from operator import itemgetter
import itertools import itertools
from pprint import pformat
import time import time
from typing import ( from typing import (
Callable, Callable,
@ -40,7 +39,6 @@ import trio
import tractor import tractor
from piker.brokers import get_brokermod from piker.brokers import get_brokermod
from piker.service import find_service
from piker.accounting import ( from piker.accounting import (
Account, Account,
MktPair, MktPair,
@ -698,12 +696,7 @@ async def open_trade_dialog(
# sanity check all the mkt infos # sanity check all the mkt infos
for fqme, flume in feed.flumes.items(): for fqme, flume in feed.flumes.items():
mkt: MktPair = symcache.mktmaps.get(fqme) or mkt_by_fqme[fqme] mkt: MktPair = symcache.mktmaps.get(fqme) or mkt_by_fqme[fqme]
if mkt != flume.mkt: assert mkt == flume.mkt
diff: tuple = mkt - flume.mkt
log.warning(
'MktPair sig mismatch?\n'
f'{pformat(diff)}'
)
get_cost: Callable = getattr( get_cost: Callable = getattr(
brokermod, brokermod,
@ -761,7 +754,7 @@ async def open_paperboi(
service_name = f'paperboi.{broker}' service_name = f'paperboi.{broker}'
async with ( async with (
find_service(service_name) as portal, tractor.find_actor(service_name) as portal,
tractor.open_nursery() as an, tractor.open_nursery() as an,
): ):
# NOTE: only spawn if no paperboi already is up since we likely # NOTE: only spawn if no paperboi already is up since we likely
@ -784,10 +777,8 @@ async def open_paperboi(
) as (ctx, first): ) as (ctx, first):
yield ctx, first yield ctx, first
# ALWAYS tear down connection AND any newly spawned # tear down connection and any spawned actor on exit
# paperboi actor on exit!
await ctx.cancel() await ctx.cancel()
if we_spawned: if we_spawned:
await portal.cancel_actor() await portal.cancel_actor()
View File

@ -1,33 +1,30 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet # Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# (in stewardship for pikers, everywhere.)
# This program is free software: you can redistribute it and/or # This program is free software: you can redistribute it and/or modify
# modify it under the terms of the GNU Affero General Public # it under the terms of the GNU Affero General Public License as published by
# License as published by the Free Software Foundation, either # the Free Software Foundation, either version 3 of the License, or
# version 3 of the License, or (at your option) any later version. # (at your option) any later version.
# This program is distributed in the hope that it will be useful, # This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of # but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# Affero General Public License for more details. # GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public # You should have received a copy of the GNU Affero General Public License
# License along with this program. If not, see # along with this program. If not, see <https://www.gnu.org/licenses/>.
# <https://www.gnu.org/licenses/>.
''' '''
CLI commons. CLI commons.
''' '''
import os import os
# from contextlib import AsyncExitStack from contextlib import AsyncExitStack
from types import ModuleType from types import ModuleType
import click import click
import trio import trio
import tractor import tractor
from tractor._multiaddr import parse_maddr
from ..log import ( from ..log import (
get_console_log, get_console_log,
@ -45,175 +42,89 @@ from .. import config
log = get_logger('piker.cli') log = get_logger('piker.cli')
def load_trans_eps(
network: dict | None = None,
maddrs: list[tuple] | None = None,
) -> dict[str, dict[str, dict]]:
# transport-oriented endpoint multi-addresses
eps: dict[
str, # service name, eg. `pikerd`, `emsd`..
# libp2p style multi-addresses parsed into prot layers
list[dict[str, str | int]]
] = {}
if (
network
and not maddrs
):
# load network section and (attempt to) connect all endpoints
# which are reachable B)
for key, maddrs in network.items():
match key:
# TODO: resolve table across multiple discov
# prots Bo
case 'resolv':
pass
case 'pikerd':
dname: str = key
for maddr in maddrs:
layers: dict = parse_maddr(maddr)
eps.setdefault(
dname,
[],
).append(layers)
elif maddrs:
# presume user is manually specifying the root actor ep.
eps['pikerd'] = [parse_maddr(maddr)]
return eps
@click.command() @click.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--pdb', is_flag=True, help='Enable tractor debug mode')
@click.option('--host', '-h', default=None, help='Host addr to bind')
@click.option('--port', '-p', default=None, help='Port number to bind')
@click.option( @click.option(
'--loglevel', '--tsdb',
'-l',
default='warning',
help='Logging level',
)
@click.option(
'--tl',
is_flag=True, is_flag=True,
help='Enable tractor-runtime logs', help='Enable local ``marketstore`` instance'
) )
@click.option( @click.option(
'--pdb', '--es',
is_flag=True, is_flag=True,
help='Enable tractor debug mode', help='Enable local ``elasticsearch`` instance'
) )
@click.option(
'--maddr',
'-m',
default=None,
help='Multiaddrs to bind or contact',
)
# @click.option(
# '--tsdb',
# is_flag=True,
# help='Enable local ``marketstore`` instance'
# )
# @click.option(
# '--es',
# is_flag=True,
# help='Enable local ``elasticsearch`` instance'
# )
def pikerd( def pikerd(
maddr: list[str] | None,
loglevel: str, loglevel: str,
host: str,
port: int,
tl: bool, tl: bool,
pdb: bool, pdb: bool,
# tsdb: bool, tsdb: bool,
# es: bool, es: bool,
): ):
''' '''
Spawn the piker broker-daemon. Spawn the piker broker-daemon.
''' '''
from tractor.devx import maybe_open_crash_handler log = get_console_log(loglevel, name='cli')
with maybe_open_crash_handler(pdb=pdb):
log = get_console_log(loglevel, name='cli')
if pdb: if pdb:
log.warning(( log.warning((
"\n" "\n"
"!!! YOU HAVE ENABLED DAEMON DEBUG MODE !!!\n" "!!! YOU HAVE ENABLED DAEMON DEBUG MODE !!!\n"
"When a `piker` daemon crashes it will block the " "When a `piker` daemon crashes it will block the "
"task-thread until resumed from console!\n" "task-thread until resumed from console!\n"
"\n" "\n"
)) ))
# service-actor registry endpoint socket-address set reg_addr: None | tuple[str, int] = None
regaddrs: list[tuple[str, int]] = [] if host or port:
reg_addr = (
conf, _ = config.load( host or _default_registry_host,
conf_name='conf', int(port) or _default_registry_port,
) )
network: dict = conf.get('network')
if ( from .. import service
network is None
and not maddr async def main():
service_mngr: service.Services
async with (
service.open_pikerd(
loglevel=loglevel,
debug_mode=pdb,
registry_addr=reg_addr,
) as service_mngr, # normally delivers a ``Services`` handle
AsyncExitStack() as stack,
): ):
regaddrs = [( if tsdb:
_default_registry_host, dname, conf = await stack.enter_async_context(
_default_registry_port, service.marketstore.start_ahab_daemon(
)] service_mngr,
loglevel=loglevel,
)
)
log.info(f'TSDB `{dname}` up with conf:\n{conf}')
else: if es:
eps: dict = load_trans_eps( dname, conf = await stack.enter_async_context(
network, service.elastic.start_ahab_daemon(
maddr, service_mngr,
) loglevel=loglevel,
for layers in eps['pikerd']: )
regaddrs.append(( )
layers['ipv4']['addr'], log.info(f'DB `{dname}` up with conf:\n{conf}')
layers['tcp']['port'],
))
from .. import service await trio.sleep_forever()
async def main(): trio.run(main)
service_mngr: service.Services
async with (
service.open_pikerd(
registry_addrs=regaddrs,
loglevel=loglevel,
debug_mode=pdb,
) as service_mngr, # normally delivers a ``Services`` handle
# AsyncExitStack() as stack,
):
# TODO: spawn all other sub-actor daemons according to
# multiaddress endpoint spec defined by user config
assert service_mngr
# if tsdb:
# dname, conf = await stack.enter_async_context(
# service.marketstore.start_ahab_daemon(
# service_mngr,
# loglevel=loglevel,
# )
# )
# log.info(f'TSDB `{dname}` up with conf:\n{conf}')
# if es:
# dname, conf = await stack.enter_async_context(
# service.elastic.start_ahab_daemon(
# service_mngr,
# loglevel=loglevel,
# )
# )
# log.info(f'DB `{dname}` up with conf:\n{conf}')
await trio.sleep_forever()
trio.run(main)
@click.group(context_settings=config._context_defaults) @click.group(context_settings=config._context_defaults)
@ -226,24 +137,8 @@ def pikerd(
@click.option('--loglevel', '-l', default='warning', help='Logging level') @click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging') @click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--configdir', '-c', help='Configuration directory') @click.option('--configdir', '-c', help='Configuration directory')
@click.option( @click.option('--host', '-h', default=None, help='Host addr to bind')
'--pdb', @click.option('--port', '-p', default=None, help='Port number to bind')
is_flag=True,
help='Enable runtime debug mode ',
)
@click.option(
'--maddr',
'-m',
default=None,
multiple=True,
help='Multiaddr to bind',
)
@click.option(
'--regaddr',
'-r',
default=None,
help='Registrar addr to contact',
)
@click.pass_context @click.pass_context
def cli( def cli(
ctx: click.Context, ctx: click.Context,
@ -251,11 +146,8 @@ def cli(
loglevel: str, loglevel: str,
tl: bool, tl: bool,
configdir: str, configdir: str,
pdb: bool, host: str,
port: int,
# TODO: make these list[str] with multiple -m maddr0 -m maddr1
maddr: list[str],
regaddr: str,
) -> None: ) -> None:
if configdir is not None: if configdir is not None:
@ -276,20 +168,12 @@ def cli(
} }
assert brokermods assert brokermods
# TODO: load endpoints from `conf::[network].pikerd` reg_addr: None | tuple[str, int] = None
# - pikerd vs. regd, separate registry daemon? if host or port:
# - expose datad vs. brokerd? reg_addr = (
# - bind emsd with certain perms on public iface? host or _default_registry_host,
regaddrs: list[tuple[str, int]] = regaddr or [( int(port) or _default_registry_port,
_default_registry_host, )
_default_registry_port,
)]
# TODO: factor [network] section parsing out from pikerd
# above and call it here as well.
# if maddr:
# for addr in maddr:
# layers: dict = parse_maddr(addr)
ctx.obj.update({ ctx.obj.update({
'brokers': brokers, 'brokers': brokers,
@ -299,12 +183,7 @@ def cli(
'log': get_console_log(loglevel), 'log': get_console_log(loglevel),
'confdir': config._config_dir, 'confdir': config._config_dir,
'wl_path': config._watchlists_data_path, 'wl_path': config._watchlists_data_path,
'registry_addrs': regaddrs, 'registry_addr': reg_addr,
'pdb': pdb, # debug mode flag
# TODO: endpoint parsing, pinging and binding
# on no existing server.
# 'maddrs': maddr,
}) })
# allow enabling same loglevel in ``tractor`` machinery # allow enabling same loglevel in ``tractor`` machinery
@ -351,7 +230,7 @@ def services(config, tl, ports):
def _load_clis() -> None: def _load_clis() -> None:
# from ..service import elastic # noqa from ..service import elastic # noqa
from ..brokers import cli # noqa from ..brokers import cli # noqa
from ..ui import cli # noqa from ..ui import cli # noqa
from ..watchlists import cli # noqa from ..watchlists import cli # noqa


@ -104,15 +104,14 @@ def get_app_dir(
# `tractor`) with the testing dir and check for it whenever we # `tractor`) with the testing dir and check for it whenever we
# detect `pytest` is being used (which it isn't under normal # detect `pytest` is being used (which it isn't under normal
# operation). # operation).
# if "pytest" in sys.modules: if "pytest" in sys.modules:
# import tractor import tractor
# actor = tractor.current_actor(err_on_no_runtime=False) actor = tractor.current_actor(err_on_no_runtime=False)
# if actor: # runtime is up if actor: # runtime is up
# rvs = tractor._state._runtime_vars rvs = tractor._state._runtime_vars
# import pdbp; pdbp.set_trace() testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
# testdirpath = Path(rvs['piker_vars']['piker_test_dir']) assert testdirpath.exists(), 'piker test harness might be borked!?'
# assert testdirpath.exists(), 'piker test harness might be borked!?' app_name = str(testdirpath)
# app_name = str(testdirpath)
if platform.system() == 'Windows': if platform.system() == 'Windows':
key = "APPDATA" if roaming else "LOCALAPPDATA" key = "APPDATA" if roaming else "LOCALAPPDATA"
@ -135,19 +134,14 @@ def get_app_dir(
_click_config_dir: Path = Path(get_app_dir('piker')) _click_config_dir: Path = Path(get_app_dir('piker'))
_config_dir: Path = _click_config_dir _config_dir: Path = _click_config_dir
_parent_user: str = os.environ.get('SUDO_USER')
# NOTE: when using `sudo` we attempt to determine the non-root user if _parent_user:
# and still use their normal config dir.
if (
(_parent_user := os.environ.get('SUDO_USER'))
and
_parent_user != 'root'
):
non_root_user_dir = Path( non_root_user_dir = Path(
os.path.expanduser(f'~{_parent_user}') os.path.expanduser(f'~{_parent_user}')
) )
root: str = 'root' root: str = 'root'
_ccds: str = str(_click_config_dir) # click config dir as string _ccds: str = str(_click_config_dir) # click config dir string
i_tail: int = int(_ccds.rfind(root) + len(root)) i_tail: int = int(_ccds.rfind(root) + len(root))
_config_dir = ( _config_dir = (
non_root_user_dir non_root_user_dir
@ -252,8 +246,7 @@ def repodir() -> Path:
def load( def load(
# NOTE: always appended with .toml suffix conf_name: str = 'brokers', # appended with .toml suffix
conf_name: str = 'conf',
path: Path | None = None, path: Path | None = None,
decode: Callable[ decode: Callable[
@ -364,9 +357,7 @@ def load_accounts(
) -> bidict[str, str | None]: ) -> bidict[str, str | None]:
conf, path = load( conf, path = load()
conf_name='brokers',
)
accounts = bidict() accounts = bidict()
for provider_name, section in conf.items(): for provider_name, section in conf.items():
accounts_section = section.get('accounts') accounts_section = section.get('accounts')
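
The `SUDO_USER` remap from the hunk above, as a standalone sketch (the `effective_config_dir()` name is invented for illustration):

import os
from pathlib import Path


def effective_config_dir(app_dir: Path) -> Path:
    # when invoked via `sudo`, remap the config dir from root's
    # home to the invoking (non-root) user's home.
    parent_user: str | None = os.environ.get('SUDO_USER')
    if (
        parent_user
        and parent_user != 'root'
    ):
        # keep everything after the root-home segment, eg.
        # /root/.config/piker -> ~<user>/.config/piker
        non_root_home = Path(os.path.expanduser(f'~{parent_user}'))
        tail: str = str(app_dir).split('root', 1)[-1].lstrip(os.sep)
        return non_root_home / tail
    return app_dir


print(effective_config_dir(Path('/root/.config/piker')))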
View File

@ -56,7 +56,6 @@ __all__: list[str] = [
'ShmArray', 'ShmArray',
'iterticks', 'iterticks',
'maybe_open_shm_array', 'maybe_open_shm_array',
'match_from_pairs',
'attach_shm_array', 'attach_shm_array',
'open_shm_array', 'open_shm_array',
'get_shm_token', 'get_shm_token',
View File

@ -41,11 +41,6 @@ if TYPE_CHECKING:
) )
from piker.toolz import Profiler from piker.toolz import Profiler
# default gap between bars: "bar gap multiplier"
# - 0.5 is no overlap between OC arms,
# - 1.0 is full overlap on each neighbor sample
BGM: float = 0.16
class IncrementalFormatter(msgspec.Struct): class IncrementalFormatter(msgspec.Struct):
''' '''
@ -518,7 +513,6 @@ class IncrementalFormatter(msgspec.Struct):
class OHLCBarsFmtr(IncrementalFormatter): class OHLCBarsFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([ x_offset: np.ndarray = np.array([
-0.5, -0.5,
0, 0,
@ -610,9 +604,8 @@ class OHLCBarsFmtr(IncrementalFormatter):
vr: tuple[int, int], vr: tuple[int, int],
start: int = 0, # XXX: do we need this? start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap # 0.5 is no overlap between arms, 1.0 is full overlap
gap: float = BGM, w: float = 0.16,
) -> tuple[ ) -> tuple[
np.ndarray, np.ndarray,
@ -629,7 +622,7 @@ class OHLCBarsFmtr(IncrementalFormatter):
array[:-1], array[:-1],
start, start,
bar_w=self.index_step_size, bar_w=self.index_step_size,
bar_gap=gap * self.index_step_size, bar_gap=w * self.index_step_size,
# XXX: don't ask, due to a ``numba`` bug.. # XXX: don't ask, due to a ``numba`` bug..
use_time_index=(self.index_field == 'time'), use_time_index=(self.index_field == 'time'),
View File

@ -33,11 +33,6 @@ from typing import (
) )
import tractor import tractor
from tractor import (
Context,
MsgStream,
Channel,
)
from tractor.trionics import ( from tractor.trionics import (
maybe_open_nursery, maybe_open_nursery,
) )
@ -58,10 +53,7 @@ if TYPE_CHECKING:
from ._sharedmem import ( from ._sharedmem import (
ShmArray, ShmArray,
) )
from .feed import ( from .feed import _FeedsBus
_FeedsBus,
Sub,
)
# highest frequency sample step is 1 second by default, though in # highest frequency sample step is 1 second by default, though in
@ -102,7 +94,7 @@ class Sampler:
float, float,
list[ list[
float, float,
set[MsgStream] set[tractor.MsgStream]
], ],
] = defaultdict( ] = defaultdict(
lambda: [ lambda: [
@ -266,8 +258,8 @@ class Sampler:
f'broadcasting {period_s} -> {last_ts}\n' f'broadcasting {period_s} -> {last_ts}\n'
# f'consumers: {subs}' # f'consumers: {subs}'
) )
borked: set[MsgStream] = set() borked: set[tractor.MsgStream] = set()
sent: set[MsgStream] = set() sent: set[tractor.MsgStream] = set()
while True: while True:
try: try:
for stream in (subs - sent): for stream in (subs - sent):
@ -322,7 +314,7 @@ class Sampler:
@tractor.context @tractor.context
async def register_with_sampler( async def register_with_sampler(
ctx: Context, ctx: tractor.Context,
period_s: float, period_s: float,
shms_by_period: dict[float, dict] | None = None, shms_by_period: dict[float, dict] | None = None,
@ -657,7 +649,12 @@ async def sample_and_broadcast(
# eventually block this producer end of the feed and # eventually block this producer end of the feed and
# thus other consumers still attached. # thus other consumers still attached.
sub_key: str = broker_symbol.lower() sub_key: str = broker_symbol.lower()
subs: set[Sub] = bus.get_subs(sub_key) subs: list[
tuple[
tractor.MsgStream | trio.MemorySendChannel,
float | None, # tick throttle in Hz
]
] = bus.get_subs(sub_key)
# NOTE: by default the broker backend doesn't append # NOTE: by default the broker backend doesn't append
# it's own "name" into the fqme schema (but maybe it # it's own "name" into the fqme schema (but maybe it
@ -666,40 +663,34 @@ async def sample_and_broadcast(
fqme: str = f'{broker_symbol}.{brokername}' fqme: str = f'{broker_symbol}.{brokername}'
lags: int = 0 lags: int = 0
# XXX TODO XXX: speed up this loop in an AOT compiled # TODO: speed up this loop in an AOT compiled lang (like
# lang (like rust or nim or zig)! # rust or nim or zig) and/or instead of doing a fan out to
# AND/OR instead of doing a fan out to TCP sockets # TCP sockets here, we add a shm-style tick queue which
# here, we add a shm-style tick queue which readers can # readers can pull from instead of placing the burden of
# pull from instead of placing the burden of broadcast # broadcast on solely on this `brokerd` actor. see issues:
# solely on this `brokerd` actor. see issues: # broadcast solely on this `brokerd` actor. see issues:
# - https://github.com/pikers/piker/issues/98 # - https://github.com/pikers/piker/issues/98
# - https://github.com/pikers/piker/issues/107 # - https://github.com/pikers/piker/issues/107
# for (stream, tick_throttle) in subs.copy(): for (stream, tick_throttle) in subs.copy():
for sub in subs.copy():
ipc: MsgStream = sub.ipc
throttle: float = sub.throttle_rate
try: try:
with trio.move_on_after(0.2) as cs: with trio.move_on_after(0.2) as cs:
if throttle: if tick_throttle:
send_chan: trio.abc.SendChannel = sub.send_chan
# this is a send mem chan that likely # this is a send mem chan that likely
# pushes to the ``uniform_rate_send()`` below. # pushes to the ``uniform_rate_send()`` below.
try: try:
send_chan.send_nowait( stream.send_nowait(
(fqme, quote) (fqme, quote)
) )
except trio.WouldBlock: except trio.WouldBlock:
overruns[sub_key] += 1 overruns[sub_key] += 1
ctx: Context = ipc._ctx ctx = stream._ctx
chan: Channel = ctx.chan chan = ctx.chan
log.warning( log.warning(
f'Feed OVERRUN {sub_key}' f'Feed OVERRUN {sub_key}'
'@{bus.brokername} -> \n' '@{bus.brokername} -> \n'
f'feed @ {chan.uid}\n' f'feed @ {chan.uid}\n'
f'throttle = {throttle} Hz' f'throttle = {tick_throttle} Hz'
) )
if overruns[sub_key] > 6: if overruns[sub_key] > 6:
@ -716,10 +707,10 @@ async def sample_and_broadcast(
f'{sub_key}:' f'{sub_key}:'
f'{ctx.cid}@{chan.uid}' f'{ctx.cid}@{chan.uid}'
) )
await ipc.aclose() await stream.aclose()
raise trio.BrokenResourceError raise trio.BrokenResourceError
else: else:
await ipc.send( await stream.send(
{fqme: quote} {fqme: quote}
) )
@ -733,16 +724,16 @@ async def sample_and_broadcast(
trio.ClosedResourceError, trio.ClosedResourceError,
trio.EndOfChannel, trio.EndOfChannel,
): ):
ctx: Context = ipc._ctx ctx = stream._ctx
chan: Channel = ctx.chan chan = ctx.chan
if ctx: if ctx:
log.warning( log.warning(
'Dropped `brokerd`-quotes-feed connection:\n' 'Dropped `brokerd`-quotes-feed connection:\n'
f'{broker_symbol}:' f'{broker_symbol}:'
f'{ctx.cid}@{chan.uid}' f'{ctx.cid}@{chan.uid}'
) )
if sub.throttle_rate: if tick_throttle:
assert ipc._closed assert stream._closed
# XXX: do we need to deregister here # XXX: do we need to deregister here
# if it's done in the feed bus code? # if it's done in the feed bus code?
@ -751,7 +742,7 @@ async def sample_and_broadcast(
# since there seems to be some kinda race.. # since there seems to be some kinda race..
bus.remove_subs( bus.remove_subs(
sub_key, sub_key,
{sub}, {(stream, tick_throttle)},
) )
@ -759,7 +750,7 @@ async def uniform_rate_send(
rate: float, rate: float,
quote_stream: trio.abc.ReceiveChannel, quote_stream: trio.abc.ReceiveChannel,
stream: MsgStream, stream: tractor.MsgStream,
task_status: TaskStatus = trio.TASK_STATUS_IGNORED, task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
View File

@ -31,8 +31,6 @@ from pathlib import Path
from pprint import pformat from pprint import pformat
from typing import ( from typing import (
Any, Any,
Sequence,
Hashable,
TYPE_CHECKING, TYPE_CHECKING,
) )
from types import ModuleType from types import ModuleType
@ -130,8 +128,8 @@ class SymbologyCache(Struct):
- `.get_mkt_pairs()`: returning a table of pair-`Struct` - `.get_mkt_pairs()`: returning a table of pair-`Struct`
types, custom defined by the particular backend. types, custom defined by the particular backend.
AND, the required `.get_mkt_info()` module-level endpoint AND, the required `.get_mkt_info()` module-level endpoint which
which maps `fqme: str` -> `MktPair`s. maps `fqme: str` -> `MktPair`s.
These tables are then used to fill out the `.assets`, `.pairs` and These tables are then used to fill out the `.assets`, `.pairs` and
`.mktmaps` tables on this cache instance, respectively. `.mktmaps` tables on this cache instance, respectively.
@ -502,7 +500,7 @@ def match_from_pairs(
) )
# pop and repack pairs in output dict # pop and repack pairs in output dict
matched_pairs: dict[str, Struct] = {} matched_pairs: dict[str, Pair] = {}
for item in matches: for item in matches:
pair_key: str = item[0] pair_key: str = item[0]
matched_pairs[pair_key] = pairs[pair_key] matched_pairs[pair_key] = pairs[pair_key]
View File

@ -0,0 +1,336 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Financial time series processing utilities usually
pertaining to OHLCV style sampled data.
Routines are generally implemented in either ``numpy`` or
``polars`` B)
'''
from __future__ import annotations
from typing import Literal
from math import (
ceil,
floor,
)
import numpy as np
import polars as pl
from ._sharedmem import ShmArray
from ..toolz.profile import (
Profiler,
pg_profile_enabled,
ms_slower_then,
)
def slice_from_time(
arr: np.ndarray,
start_t: float,
stop_t: float,
step: float, # sampler period step-diff
) -> slice:
'''
Calculate array indices mapped from a time range and return them in
a slice.
Given an input array with an epoch `'time'` series entry, calculate
the indices which span the time range and return them in a slice. Presume
each `'time'` step increment is uniform; when the time stamp
series contains gaps (i.e. the uniform presumption is untrue) use
``np.searchsorted()`` binary search to look up the appropriate
index.
'''
profiler = Profiler(
msg='slice_from_time()',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
)
times = arr['time']
t_first = floor(times[0])
t_last = ceil(times[-1])
# the greatest index we can return which slices to the
# end of the input array.
read_i_max = arr.shape[0]
# compute (presumed) uniform-time-step index offsets
i_start_t = floor(start_t)
read_i_start = floor(((i_start_t - t_first) // step)) - 1
i_stop_t = ceil(stop_t)
# XXX: edge case -> always set stop index to last in array whenever
# the input stop time is detected to be greater than the equiv time
# stamp at that last entry.
if i_stop_t >= t_last:
read_i_stop = read_i_max
else:
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
# always clip outputs to array support
# for read start:
# - never allow a start < the 0 index
# - never allow an end index > the read array len
read_i_start = min(
max(0, read_i_start),
read_i_max - 1,
)
read_i_stop = max(
0,
min(read_i_stop, read_i_max),
)
# check for larger-than-latest calculated index for given start
# time, in which case we do a binary search for the correct index.
# NOTE: this is usually the result of a time series with time gaps
# where it is expected that each index step maps to a uniform step
# in the time stamp series.
t_iv_start = times[read_i_start]
if (
t_iv_start > i_start_t
):
# do a binary search for the best index mapping to ``start_t``
# given we measured an overshoot using the uniform-time-step
# calculation from above.
# TODO: once we start caching these per source-array,
# we can just overwrite ``read_i_start`` directly.
new_read_i_start = np.searchsorted(
times,
i_start_t,
side='left',
)
# TODO: minimize binary search work as much as possible:
# - cache these remap values which compensate for gaps in the
# uniform time step basis where we calc a later start
# index for the given input ``start_t``.
# - can we shorten the input search sequence by heuristic?
# up_to_arith_start = index[:read_i_start]
if (
new_read_i_start <= read_i_start
):
# t_diff = t_iv_start - start_t
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
# f'diff: {t_diff}\n'
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
# )
read_i_start = new_read_i_start
t_iv_stop = times[read_i_stop - 1]
if (
t_iv_stop > i_stop_t
):
# t_diff = stop_t - t_iv_stop
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
# f'diff: {t_diff}\n'
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
# )
new_read_i_stop = np.searchsorted(
times[read_i_start:],
# times,
i_stop_t,
side='right',
)
if (
new_read_i_stop <= read_i_stop
):
read_i_stop = read_i_start + new_read_i_stop + 1
# sanity checks for range size
# samples = (i_stop_t - i_start_t) // step
# index_diff = read_i_stop - read_i_start + 1
# if index_diff > (samples + 3):
# breakpoint()
# read-relative indexes: gives a slice where `shm.array[read_slc]`
# will be the data spanning the input time range `start_t` ->
# `stop_t`
read_slc = slice(
int(read_i_start),
int(read_i_stop),
)
profiler(
'slicing complete'
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
)
# NOTE: if caller needs absolute buffer indices they can
# slice the buffer abs index like so:
# index = arr['index']
# abs_indx = index[read_slc]
# abs_slc = slice(
# int(abs_indx[0]),
# int(abs_indx[-1]),
# )
return read_slc
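
The gap-compensating remap described in the comments above, shrunk to a few lines of plain `numpy` (toy series with a gap between 120 and 420):

import numpy as np

times = np.array([0., 60., 120., 420., 480.])  # gap between 120 & 420
start_t: float = 420.
step: int = 60

# uniform-step estimate, wrong across the gap:
est: int = int((start_t - times[0]) // step)  # -> 7, past the end
est = min(max(0, est), len(times) - 1)  # clip to array support -> 4

if times[est] > start_t:
    # overshoot detected -> binary search for the true index
    est = int(np.searchsorted(times, start_t, side='left'))

print(est, times[est])  # -> 3 420.0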
def detect_null_time_gap(
shm: ShmArray,
imargin: int = 1,
) -> tuple[int, float, float, int] | None:
'''
Detect if there are any zero-epoch stamped rows in
the presumed 'time' field-column.
Filter to the gap and return a surrounding index range.
NOTE: for now presumes only ONE gap XD
'''
# ensure we read buffer state only once so that ShmArray rt
# circular-buffer updates don't cause an indexing/size mismatch.
array: np.ndarray = shm.array
zero_pred: np.ndarray = array['time'] == 0
zero_t: np.ndarray = array[zero_pred]
if zero_t.size:
istart, iend = zero_t['index'][[0, -1]]
start, end = shm._array['time'][
[istart - imargin, iend + imargin]
]
return (
istart - imargin,
start,
end,
iend + imargin,
)
return None
t_unit = Literal[
'days',
'hours',
'minutes',
'seconds',
'milliseconds',
'microseconds',
'nanoseconds',
]
def with_dts(
df: pl.DataFrame,
time_col: str = 'time',
) -> pl.DataFrame:
'''
Insert datetime (casted) columns to a (presumably) OHLC sampled
time series with an epoch-time column keyed by ``time_col``.
'''
return df.with_columns([
pl.col(time_col).shift(1).suffix('_prev'),
pl.col(time_col).diff().alias('s_diff'),
pl.from_epoch(pl.col(time_col)).alias('dt'),
]).with_columns([
pl.from_epoch(pl.col(f'{time_col}_prev')).alias('dt_prev'),
pl.col('dt').diff().alias('dt_diff'),
]) #.with_columns(
# pl.col('dt').diff().dt.days().alias('days_dt_diff'),
# )
def detect_time_gaps(
df: pl.DataFrame,
time_col: str = 'time',
# epoch sampling step diff
expect_period: float = 60,
# datetime diff unit and gap value
# crypto mkts
# gap_dt_unit: t_unit = 'minutes',
# gap_thresh: int = 1,
# NOTE: legacy stock mkts have venue operating hours
# and thus gaps normally no more than 1-2 days at
# a time.
# XXX -> must be valid ``polars.Expr.dt.<name>``
# TODO: allow passing in a frame of operating hours
# durations/ranges for faster legit gap checks.
gap_dt_unit: t_unit = 'days',
gap_thresh: int = 1,
) -> pl.DataFrame:
'''
Filter to OHLC datums which contain sample step gaps.
E.g. legacy markets which have venue close gaps and/or
actual missing data segments.
'''
return (
with_dts(df)
.filter(
pl.col('s_diff').abs() > expect_period
)
.filter(
getattr(
pl.col('dt_diff').dt,
gap_dt_unit,
)().abs() > gap_thresh
)
)
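
Roughly the same filter inlined against a toy frame, assuming an expected 60s sampling period (only the ~24h jump row should survive):

import polars as pl

df = pl.DataFrame({'time': [0, 60, 120, 86_580, 86_640]})
gaps: pl.DataFrame = (
    df.with_columns([
        pl.col('time').diff().alias('s_diff'),
        pl.from_epoch(pl.col('time')).alias('dt'),
    ])
    .filter(pl.col('s_diff').abs() > 60)
)
print(gaps)  # just the 120 -> 86_580 jump row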
def detect_price_gaps(
df: pl.DataFrame,
gt_multiplier: float = 2.,
price_fields: list[str] = ['high', 'low'],
) -> pl.DataFrame:
'''
Detect gaps in clearing price over an OHLC series.
2 types of gaps generally exist; up gaps and down gaps:
- UP gap: when any next sample's lo price is strictly greater
than the current sample's hi price.
- DOWN gap: when any next sample's hi price is strictly
less than the current sample's lo price.
'''
# return df.filter(
# pl.col('high') - ) > expect_period,
# ).select([
# pl.dt.datetime(pl.col(time_col).shift(1)).suffix('_previous'),
# pl.all(),
# ]).select([
# pl.all(),
# (pl.col(time_col) - pl.col(f'{time_col}_previous')).alias('diff'),
# ])
...
View File

@ -273,7 +273,7 @@ async def _reconnect_forever(
nobsws._connected.set() nobsws._connected.set()
await trio.sleep_forever() await trio.sleep_forever()
except HandshakeError: except HandshakeError:
log.exception('Retrying connection') log.exception(f'Retrying connection')
# ws & nursery block ends # ws & nursery block ends
@ -359,8 +359,8 @@ async def open_autorecon_ws(
''' '''
JSONRPC response-request style machinery for transparent multiplexing JSONRPC response-request style machinery for transparent multiplexing of msgs
of msgs over a `NoBsWs`. over a NoBsWs.
''' '''
@ -377,82 +377,43 @@ async def open_jsonrpc_session(
url: str, url: str,
start_id: int = 0, start_id: int = 0,
response_type: type = JSONRPCResult, response_type: type = JSONRPCResult,
msg_recv_timeout: float = float('inf'), request_type: Optional[type] = None,
# ^NOTE, since only `deribit` is using this jsonrpc stuff atm request_hook: Optional[Callable] = None,
# and options mkts are generally "slow moving".. error_hook: Optional[Callable] = None,
#
# FURTHER if we break the underlying ws connection then since we
# don't pass a `fixture` to the task that manages `NoBsWs`, i.e.
# `_reconnect_forever()`, the jsonrpc "transport pipe" get's
# broken and never restored with wtv init sequence is required to
# re-establish a working req-resp session.
) -> Callable[[str, dict], dict]: ) -> Callable[[str, dict], dict]:
'''
Init a json-RPC-over-websocket connection to the provided `url`.
A `json_rpc: Callable[[str, dict], dict]` is delivered to the
caller for sending requests and a bg-`trio.Task` handles
processing of response msgs including error reporting/raising in
the parent/caller task.
'''
# NOTE, store all request msgs so we can raise errors on the
# caller side!
req_msgs: dict[int, dict] = {}
async with ( async with (
trio.open_nursery() as tn, trio.open_nursery() as n,
open_autorecon_ws( open_autorecon_ws(url) as ws
url=url,
msg_recv_timeout=msg_recv_timeout,
) as ws
): ):
rpc_id: Iterable[int] = count(start_id) rpc_id: Iterable = count(start_id)
rpc_results: dict[int, dict] = {} rpc_results: dict[int, dict] = {}
async def json_rpc( async def json_rpc(method: str, params: dict) -> dict:
method: str,
params: dict,
) -> dict:
''' '''
perform a json rpc call and wait for the result, raising an perform a json rpc call and wait for the result, raising an
exception if an error field is present in the response exception if an error field is present in the response
''' '''
nonlocal req_msgs
req_id: int = next(rpc_id)
msg = { msg = {
'jsonrpc': '2.0', 'jsonrpc': '2.0',
'id': req_id, 'id': next(rpc_id),
'method': method, 'method': method,
'params': params 'params': params
} }
_id = msg['id'] _id = msg['id']
result = rpc_results[_id] = { rpc_results[_id] = {
'result': None, 'result': None,
'error': None, 'event': trio.Event()
'event': trio.Event(), # signal caller resp arrived
} }
req_msgs[_id] = msg
await ws.send_msg(msg) await ws.send_msg(msg)
# wait for response before unblocking requester code
await rpc_results[_id]['event'].wait() await rpc_results[_id]['event'].wait()
if (maybe_result := result['result']): ret = rpc_results[_id]['result']
ret = maybe_result
del rpc_results[_id]
else: del rpc_results[_id]
err = result['error']
raise Exception(
f'JSONRPC request failed\n'
f'req: {msg}\n'
f'resp: {err}\n'
)
if ret.error is not None: if ret.error is not None:
raise Exception(json.dumps(ret.error, indent=4)) raise Exception(json.dumps(ret.error, indent=4))
@ -467,7 +428,6 @@ async def open_jsonrpc_session(
the server side. the server side.
''' '''
nonlocal req_msgs
async for msg in ws: async for msg in ws:
match msg: match msg:
case { case {
@ -491,28 +451,19 @@ async def open_jsonrpc_session(
'params': _, 'params': _,
}: }:
log.debug(f'Received\n{msg}') log.debug(f'Received\n{msg}')
if request_hook:
await request_hook(request_type(**msg))
case { case {
'error': error 'error': error
}: }:
# retrieve orig request msg, set error log.warning(f'Received\n{error}')
# response in original "result" msg, if error_hook:
# THEN FINALLY set the event to signal caller await error_hook(response_type(**msg))
# to raise the error in the parent task.
req_id: int = error['id']
req_msg: dict = req_msgs[req_id]
result: dict = rpc_results[req_id]
result['error'] = error
result['event'].set()
log.error(
f'JSONRPC request failed\n'
f'req: {req_msg}\n'
f'resp: {error}\n'
)
case _: case _:
log.warning(f'Unhandled JSON-RPC msg!?\n{msg}') log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')
tn.start_soon(recv_task) n.start_soon(recv_task)
yield json_rpc yield json_rpc
tn.cancel_scope.cancel() n.cancel_scope.cancel()
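
The request/response bookkeeping at the heart of the new error propagation, reduced to a runnable toy: each request id gets a slot plus a `trio.Event` which the recv side fills and sets (no real websocket here):

import trio


async def main():
    results: dict[int, dict] = {}

    async def requester(req_id: int):
        # register the response slot *before* hitting any checkpoint
        results[req_id] = {
            'result': None,
            'error': None,
            'event': trio.Event(),
        }
        # (a real impl would `await ws.send_msg(...)` here)
        await results[req_id]['event'].wait()
        print('got:', results.pop(req_id)['result'])

    async def recv_task():
        await trio.sleep(0.1)  # pretend a response for id=0 arrived
        entry = results[0]
        entry['result'] = {'ok': True}
        entry['event'].set()

    async with trio.open_nursery() as tn:
        tn.start_soon(recv_task)
        await requester(0)


trio.run(main)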
View File

@ -28,7 +28,6 @@ module.
from __future__ import annotations from __future__ import annotations
from collections import ( from collections import (
defaultdict, defaultdict,
abc,
) )
from contextlib import asynccontextmanager as acm from contextlib import asynccontextmanager as acm
from functools import partial from functools import partial
@ -37,6 +36,7 @@ from types import ModuleType
from typing import ( from typing import (
Any, Any,
AsyncContextManager, AsyncContextManager,
Optional,
Awaitable, Awaitable,
Sequence, Sequence,
) )
@ -45,7 +45,10 @@ import trio
from trio.abc import ReceiveChannel from trio.abc import ReceiveChannel
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor import tractor
from tractor import trionics from tractor.trionics import (
maybe_open_context,
gather_contexts,
)
from piker.accounting import ( from piker.accounting import (
MktPair, MktPair,
@ -56,6 +59,7 @@ from piker.brokers import get_brokermod
from piker.service import ( from piker.service import (
maybe_spawn_brokerd, maybe_spawn_brokerd,
) )
from piker.ui import _search
from piker.calc import humanize from piker.calc import humanize
from ._util import ( from ._util import (
log, log,
@ -66,7 +70,7 @@ from .validate import (
FeedInit, FeedInit,
validate_backend, validate_backend,
) )
from ..tsp import ( from .history import (
manage_history, manage_history,
) )
from .ingest import get_ingestormod from .ingest import get_ingestormod
@ -76,31 +80,6 @@ from ._sampling import (
) )
class Sub(Struct, frozen=True):
'''
A live feed subscription entry.
Contains meta-data on the remote-actor type (in functionality
terms) as well as refs to IPC streams and sampler runtime
params.
'''
ipc: tractor.MsgStream
send_chan: trio.abc.SendChannel | None = None
# tick throttle rate in Hz; determines how live
# quotes/ticks should be downsampled before relay
# to the receiving remote consumer (process).
throttle_rate: float | None = None
_throttle_cs: trio.CancelScope | None = None
# TODO: actually stash comms info for the far end to allow
# `.tsp`, `.fsp` and `.data._sampling` sub-systems to re-render
# the data view as needed via msging with the `._remote_ctl`
# ipc ctx.
rc_ui: bool = False
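
Why `frozen=True` matters for `Sub`: frozen structs hash by field value, so entries can be stored and deduped in the per-symbol `set()` tables. A sketch using `msgspec` directly (piker's `Struct` already wraps it):

import msgspec


class DemoSub(msgspec.Struct, frozen=True):
    # frozen => immutable & hashable, so entries can live in the
    # per-symbol `set()` tables used by the feed bus.
    ipc_id: str
    throttle_rate: float | None = None


subs: set[DemoSub] = set()
subs.add(DemoSub(ipc_id='stream-1', throttle_rate=4.))
subs.add(DemoSub(ipc_id='stream-1', throttle_rate=4.))  # dupe, dropped
print(subs)  # a single entry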
class _FeedsBus(Struct): class _FeedsBus(Struct):
''' '''
Data feeds broadcaster and persistence management. Data feeds broadcaster and persistence management.
@ -125,7 +104,13 @@ class _FeedsBus(Struct):
_subscribers: defaultdict[ _subscribers: defaultdict[
str, str,
set[Sub] set[
tuple[
tractor.MsgStream | trio.MemorySendChannel,
# tractor.Context,
float | None, # tick throttle in Hz
]
]
] = defaultdict(set) ] = defaultdict(set)
async def start_task( async def start_task(
@ -140,8 +125,6 @@ class _FeedsBus(Struct):
trio.CancelScope] = trio.TASK_STATUS_IGNORED, trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> None: ) -> None:
with trio.CancelScope() as cs: with trio.CancelScope() as cs:
# TODO: shouldn't this be a direct await to avoid
# cancellation contagion to the bus nursery!?!?!
await self.nursery.start( await self.nursery.start(
target, target,
*args, *args,
@ -159,28 +142,31 @@ class _FeedsBus(Struct):
def get_subs( def get_subs(
self, self,
key: str, key: str,
) -> set[
) -> set[Sub]: tuple[
tractor.MsgStream | trio.MemorySendChannel,
float | None, # tick throttle in Hz
]
]:
''' '''
Get the ``set`` of consumer subscription entries for the given key. Get the ``set`` of consumer subscription entries for the given key.
''' '''
return self._subscribers[key] return self._subscribers[key]
def subs_items(self) -> abc.ItemsView[str, set[Sub]]:
return self._subscribers.items()
def add_subs( def add_subs(
self, self,
key: str, key: str,
subs: set[Sub], subs: set[tuple[
tractor.MsgStream | trio.MemorySendChannel,
) -> set[Sub]: float | None, # tick throttle in Hz
]],
) -> set[tuple]:
''' '''
Add a ``set`` of consumer subscription entries for the given key. Add a ``set`` of consumer subscription entries for the given key.
''' '''
_subs: set[Sub] = self._subscribers.setdefault(key, set()) _subs: set[tuple] = self._subscribers[key]
_subs.update(subs) _subs.update(subs)
return _subs return _subs
@ -345,6 +331,7 @@ async def allocate_persistent_feed(
) = await bus.nursery.start( ) = await bus.nursery.start(
manage_history, manage_history,
mod, mod,
bus,
mkt, mkt,
some_data_ready, some_data_ready,
feed_is_live, feed_is_live,
@ -421,13 +408,7 @@ async def allocate_persistent_feed(
rt_shm.array['time'][1] = ts + 1 rt_shm.array['time'][1] = ts + 1
elif hist_shm.array.size == 0: elif hist_shm.array.size == 0:
for i in range(100): raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')
await trio.sleep(0.1)
if hist_shm.array.size > 0:
break
else:
await tractor.pause()
raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')
# wait the spawning parent task to register its subscriber # wait the spawning parent task to register its subscriber
# send-stream entry before we start the sample loop. # send-stream entry before we start the sample loop.
@ -457,9 +438,8 @@ async def open_feed_bus(
symbols: list[str], # normally expected to the broker-specific fqme symbols: list[str], # normally expected to the broker-specific fqme
loglevel: str = 'error', loglevel: str = 'error',
tick_throttle: float | None = None, tick_throttle: Optional[float] = None,
start_stream: bool = True, start_stream: bool = True,
allow_remote_ctl_ui: bool = False,
) -> dict[ ) -> dict[
str, # fqme str, # fqme
@ -474,12 +454,8 @@ async def open_feed_bus(
if loglevel is None: if loglevel is None:
loglevel = tractor.current_actor().loglevel loglevel = tractor.current_actor().loglevel
# XXX: required to propagate ``tractor`` loglevel to piker # XXX: required to propagate ``tractor`` loglevel to piker logging
# logging get_console_log(loglevel or tractor.current_actor().loglevel)
get_console_log(
loglevel
or tractor.current_actor().loglevel
)
# local state sanity checks # local state sanity checks
# TODO: check for any stale shm entries for this symbol # TODO: check for any stale shm entries for this symbol
@ -489,7 +465,7 @@ async def open_feed_bus(
assert 'brokerd' in servicename assert 'brokerd' in servicename
assert brokername in servicename assert brokername in servicename
bus: _FeedsBus = get_feed_bus(brokername) bus = get_feed_bus(brokername)
sub_registered = trio.Event() sub_registered = trio.Event()
flumes: dict[str, Flume] = {} flumes: dict[str, Flume] = {}
@ -536,10 +512,10 @@ async def open_feed_bus(
# pack for ``.started()`` sync msg # pack for ``.started()`` sync msg
flumes[fqme] = flume flumes[fqme] = flume
# we use the broker-specific fqme (bs_fqme) for the sampler # we use the broker-specific fqme (bs_fqme) for the
# subscription since the backend isn't (yet) expected to # sampler subscription since the backend isn't (yet) expected to
# append it's own name to the fqme, so we filter on keys # append it's own name to the fqme, so we filter on keys which
# which *do not* include that name (e.g .ib) . # *do not* include that name (e.g .ib) .
bus._subscribers.setdefault(bs_fqme, set()) bus._subscribers.setdefault(bs_fqme, set())
# sync feed subscribers with flume handles # sync feed subscribers with flume handles
@ -578,60 +554,49 @@ async def open_feed_bus(
# that the ``sample_and_broadcast()`` task (spawned inside # that the ``sample_and_broadcast()`` task (spawned inside
# ``allocate_persistent_feed()``) will push real-time quote # ``allocate_persistent_feed()``) will push real-time quote
# (ticks) to this new consumer. # (ticks) to this new consumer.
cs: trio.CancelScope | None = None
send: trio.MemorySendChannel | None = None
if tick_throttle: if tick_throttle:
flume.throttle_rate = tick_throttle flume.throttle_rate = tick_throttle
# open a bg task which receives quotes over a mem # open a bg task which receives quotes over a mem chan
# chan and only pushes them to the target # and only pushes them to the target actor-consumer at
# actor-consumer at a max ``tick_throttle`` # a max ``tick_throttle`` instantaneous rate.
# (instantaneous) rate.
send, recv = trio.open_memory_channel(2**10) send, recv = trio.open_memory_channel(2**10)
# NOTE: the ``.send`` channel here is a swapped-in cs = await bus.start_task(
# trio mem chan which gets `.send()`-ed by the normal
# sampler task but instead of being sent directly
# over the IPC msg stream it's the throttle task
# does the work of incrementally forwarding to the
# IPC stream at the throttle rate.
cs: trio.CancelScope = await bus.start_task(
uniform_rate_send, uniform_rate_send,
tick_throttle, tick_throttle,
recv, recv,
stream, stream,
) )
# NOTE: so the ``send`` channel here is actually a swapped
# in trio mem chan which gets pushed by the normal sampler
# task but instead of being sent directly over the IPC msg
# stream it's the throttle task does the work of
# incrementally forwarding to the IPC stream at the throttle
# rate.
send._ctx = ctx # mock internal ``tractor.MsgStream`` ref
sub = (send, tick_throttle)
sub = Sub( else:
ipc=stream, sub = (stream, tick_throttle)
send_chan=send,
throttle_rate=tick_throttle,
_throttle_cs=cs,
rc_ui=allow_remote_ctl_ui,
)
# TODO: add an api for this on the bus? # TODO: add an api for this on the bus?
# maybe use the current task-id to key the sub list that's # maybe use the current task-id to key the sub list that's
# added / removed? Or maybe we can add a general # added / removed? Or maybe we can add a general
# pause-resume by sub-key api? # pause-resume by sub-key api?
bs_fqme = fqme.removesuffix(f'.{brokername}') bs_fqme = fqme.removesuffix(f'.{brokername}')
local_subs.setdefault( local_subs.setdefault(bs_fqme, set()).add(sub)
bs_fqme, bus.add_subs(bs_fqme, {sub})
set()
).add(sub)
bus.add_subs(
bs_fqme,
{sub}
)
# sync caller with all subs registered state # sync caller with all subs registered state
sub_registered.set() sub_registered.set()
uid: tuple[str, str] = ctx.chan.uid uid = ctx.chan.uid
try: try:
# ctrl protocol for start/stop of live quote streams # ctrl protocol for start/stop of quote streams based on UI
# based on UI state (eg. don't need a stream when # state (eg. don't need a stream when a symbol isn't being
# a symbol isn't being displayed). # displayed).
async for msg in stream: async for msg in stream:
if msg == 'pause': if msg == 'pause':
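
From the consumer side that ctrl protocol is just plain string msgs sent back
up the feed stream; a hedged sketch of how a UI might toggle its subscription
(``stream`` being the msg stream handed out by ``open_feed()``, the helper
name is hypothetical)::

    async def set_feed_running(stream, visible: bool) -> None:
        # the bus-side loop above reacts to these literal msgs
        await stream.send('resume' if visible else 'pause')
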
@ -769,7 +734,6 @@ async def install_brokerd_search(
except trio.EndOfChannel: except trio.EndOfChannel:
return {} return {}
from piker.ui import _search
async with _search.register_symbol_search( async with _search.register_symbol_search(
provider_name=brokermod.name, provider_name=brokermod.name,
@ -788,7 +752,7 @@ async def install_brokerd_search(
async def maybe_open_feed( async def maybe_open_feed(
fqmes: list[str], fqmes: list[str],
loglevel: str | None = None, loglevel: Optional[str] = None,
**kwargs, **kwargs,
@ -804,7 +768,7 @@ async def maybe_open_feed(
''' '''
fqme = fqmes[0] fqme = fqmes[0]
async with trionics.maybe_open_context( async with maybe_open_context(
acm_func=open_feed, acm_func=open_feed,
kwargs={ kwargs={
'fqmes': fqmes, 'fqmes': fqmes,
@ -824,7 +788,7 @@ async def maybe_open_feed(
# add a new broadcast subscription for the quote stream # add a new broadcast subscription for the quote stream
# if this feed is likely already in use # if this feed is likely already in use
async with trionics.gather_contexts( async with gather_contexts(
mngrs=[stream.subscribe() for stream in feed.streams.values()] mngrs=[stream.subscribe() for stream in feed.streams.values()]
) as bstreams: ) as bstreams:
for bstream, flume in zip(bstreams, feed.flumes.values()): for bstream, flume in zip(bstreams, feed.flumes.values()):
@ -848,8 +812,6 @@ async def open_feed(
start_stream: bool = True, start_stream: bool = True,
tick_throttle: float | None = None, # Hz tick_throttle: float | None = None, # Hz
allow_remote_ctl_ui: bool = False,
) -> Feed: ) -> Feed:
''' '''
Open a "data feed" which provides streamed real-time quotes. Open a "data feed" which provides streamed real-time quotes.
@ -886,7 +848,7 @@ async def open_feed(
) )
portals: tuple[tractor.Portal] portals: tuple[tractor.Portal]
async with trionics.gather_contexts( async with gather_contexts(
brokerd_ctxs, brokerd_ctxs,
) as portals: ) as portals:
@ -932,19 +894,13 @@ async def open_feed(
# of these stream open sequences sequentially per # of these stream open sequences sequentially per
# backend? .. need some thot! # backend? .. need some thot!
allow_overruns=True, allow_overruns=True,
# NOTE: UI actors (like charts) can allow
# remote control of certain graphics rendering
# capabilities via the
# `.ui._remote_ctl.remote_annotate()` msg loop.
allow_remote_ctl_ui=allow_remote_ctl_ui,
) )
) )
assert len(feed.mods) == len(feed.portals) assert len(feed.mods) == len(feed.portals)
async with ( async with (
trionics.gather_contexts(bus_ctxs) as ctxs, gather_contexts(bus_ctxs) as ctxs,
): ):
stream_ctxs: list[tractor.MsgStream] = [] stream_ctxs: list[tractor.MsgStream] = []
for ( for (
@ -986,7 +942,7 @@ async def open_feed(
brokermod: ModuleType brokermod: ModuleType
fqmes: list[str] fqmes: list[str]
async with ( async with (
trionics.gather_contexts(stream_ctxs) as streams, gather_contexts(stream_ctxs) as streams,
): ):
for ( for (
stream, stream,
@ -1002,12 +958,6 @@ async def open_feed(
if brokermod.name == flume.mkt.broker: if brokermod.name == flume.mkt.broker:
flume.stream = stream flume.stream = stream
assert ( assert len(feed.mods) == len(feed.portals) == len(feed.streams)
len(feed.mods)
==
len(feed.portals)
==
len(feed.streams)
)
yield feed yield feed


@ -42,15 +42,35 @@ if TYPE_CHECKING:
from .feed import Feed from .feed import Feed
# TODO: ideas for further abstractions as per
# https://github.com/pikers/piker/issues/216 and
# https://github.com/pikers/piker/issues/270:
# - a ``Cascade`` would be the minimal "connection" of 2 ``Flumes``
# as per circuit parlance:
# https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
# - could cover the combination of our `FspAdmin` and the
# backend `.fsp._engine` related machinery to "connect" one flume
# to another?
# - a (financial signal) ``Flow`` would be a "collection" of such
# minimal cascades. Some engineering based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
class Flume(Struct): class Flume(Struct):
''' '''
Composite reference type which points to all the addressing Composite reference type which points to all the addressing handles
handles and other meta-data necessary for the read, measure and and other meta-data necessary for the read, measure and management
management of a set of real-time updated data flows. of a set of real-time updated data flows.
Can be thought of as a "flow descriptor" or "flow frame" which Can be thought of as a "flow descriptor" or "flow frame" which
describes the high level properties of a set of data flows that describes the high level properties of a set of data flows that can
can be used seamlessly across process-memory boundaries. be used seamlessly across process-memory boundaries.
Each instance's sub-components normally includes: Each instance's sub-components normally includes:
- a msg oriented quote stream provided via an IPC transport - a msg oriented quote stream provided via an IPC transport
@ -73,7 +93,6 @@ class Flume(Struct):
# private shm refs loaded dynamically from tokens # private shm refs loaded dynamically from tokens
_hist_shm: ShmArray | None = None _hist_shm: ShmArray | None = None
_rt_shm: ShmArray | None = None _rt_shm: ShmArray | None = None
_readonly: bool = True
stream: tractor.MsgStream | None = None stream: tractor.MsgStream | None = None
izero_hist: int = 0 izero_hist: int = 0
@ -90,7 +109,7 @@ class Flume(Struct):
if self._rt_shm is None: if self._rt_shm is None:
self._rt_shm = attach_shm_array( self._rt_shm = attach_shm_array(
token=self._rt_shm_token, token=self._rt_shm_token,
readonly=self._readonly, readonly=True,
) )
return self._rt_shm return self._rt_shm
@ -103,10 +122,12 @@ class Flume(Struct):
'No shm token has been set for the history buffer?' 'No shm token has been set for the history buffer?'
) )
if self._hist_shm is None: if (
self._hist_shm is None
):
self._hist_shm = attach_shm_array( self._hist_shm = attach_shm_array(
token=self._hist_shm_token, token=self._hist_shm_token,
readonly=self._readonly, readonly=True,
) )
return self._hist_shm return self._hist_shm
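
The lazy-attach pattern behind both properties, reduced to a self-contained
toy: only the (msg-serializable) token travels across process boundaries and
the real shm handle is attached on first access (``attach_shm_array`` is
stubbed here, it stands in for ``piker.data._sharedmem.attach_shm_array()``)::

    def attach_shm_array(token: dict, readonly: bool = True) -> dict:
        # stand-in for the real shm attach call
        return {'token': token, 'readonly': readonly}

    class LazyShm:
        def __init__(self, token: dict, readonly: bool = True) -> None:
            self._token = token
            self._readonly = readonly
            self._shm: dict | None = None  # attached on demand

        @property
        def shm(self) -> dict:
            if self._shm is None:
                self._shm = attach_shm_array(
                    token=self._token,
                    readonly=self._readonly,
                )
            return self._shm

    flume_side = LazyShm({'shm_name': 'rt_ohlcv'}, readonly=False)
    assert flume_side.shm['readonly'] is False
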
@ -125,10 +146,10 @@ class Flume(Struct):
period and ratio between them. period and ratio between them.
''' '''
times: np.ndarray = self.hist_shm.array['time'] times = self.hist_shm.array['time']
end: float | int = pendulum.from_timestamp(times[-1]) end = pendulum.from_timestamp(times[-1])
start: float | int = pendulum.from_timestamp(times[times != times[-1]][-1]) start = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s: float = (end - start).seconds hist_step_size_s = (end - start).seconds
times = self.rt_shm.array['time'] times = self.rt_shm.array['time']
end = pendulum.from_timestamp(times[-1]) end = pendulum.from_timestamp(times[-1])
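
The ``times[times != times[-1]][-1]`` idiom above infers the sample period:
take the newest timestamp, then the newest one that differs from it, and diff
the two. In plain numpy terms::

    import numpy as np

    times = np.array([60.0, 120.0, 180.0, 180.0])  # last row still filling
    last = times[-1]
    prev = times[times != last][-1]  # newest *distinct* timestamp
    assert last - prev == 60.0  # => 1m bars
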
@ -148,25 +169,17 @@ class Flume(Struct):
msg = self.to_dict() msg = self.to_dict()
msg['mkt'] = self.mkt.to_dict() msg['mkt'] = self.mkt.to_dict()
# NOTE: pop all un-msg-serializable fields: # can't serialize the stream or feed objects, it's expected
# - `tractor.MsgStream` # you'll have a ref to it since this msg should be rxed on
# - `Feed` # a stream on whatever far end IPC..
# - `Shmarray`
# it's expected the `.from_msg()` on the other side
# will get instead some kind of msg-compat version
# that it can load.
msg.pop('stream') msg.pop('stream')
msg.pop('feed') msg.pop('feed')
msg.pop('_rt_shm')
msg.pop('_hist_shm')
return msg return msg
@classmethod @classmethod
def from_msg( def from_msg(
cls, cls,
msg: dict, msg: dict,
readonly: bool = True,
) -> dict: ) -> dict:
''' '''
@ -177,11 +190,7 @@ class Flume(Struct):
mkt_msg = msg.pop('mkt') mkt_msg = msg.pop('mkt')
from ..accounting import MktPair # cycle otherwise.. from ..accounting import MktPair # cycle otherwise..
mkt = MktPair.from_msg(mkt_msg) mkt = MktPair.from_msg(mkt_msg)
msg |= {'_readonly': readonly} return cls(mkt=mkt, **msg)
return cls(
mkt=mkt,
**msg,
)
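
The round-trip contract as a self-contained toy mirroring
``.to_msg()``/``.from_msg()``: live handles are popped before send and the
receiver re-hydrates from the retained tokens, optionally requesting write
access::

    def to_msg(flume: dict) -> dict:
        msg = dict(flume)
        # un-msg-serializable fields stay process-local
        for live in ('stream', 'feed', '_rt_shm', '_hist_shm'):
            msg.pop(live, None)
        return msg

    def from_msg(msg: dict, readonly: bool = True) -> dict:
        return msg | {'_readonly': readonly}

    flume = {
        'mkt': 'xbtusd.kraken',
        '_rt_shm_token': 'tok',
        'stream': object(),
        'feed': object(),
    }
    msg = to_msg(flume)
    assert 'stream' not in msg and msg['_rt_shm_token'] == 'tok'
    assert from_msg(msg, readonly=False)['_readonly'] is False
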
def get_index( def get_index(
self, self,

piker/data/history.py (1003 changed lines): file diff suppressed because it is too large.


@ -26,10 +26,7 @@ from ._api import (
maybe_mk_fsp_shm, maybe_mk_fsp_shm,
Fsp, Fsp,
) )
from ._engine import ( from ._engine import cascade
cascade,
Cascade,
)
from ._volume import ( from ._volume import (
dolla_vlm, dolla_vlm,
flow_rates, flow_rates,
@ -38,7 +35,6 @@ from ._volume import (
__all__: list[str] = [ __all__: list[str] = [
'cascade', 'cascade',
'Cascade',
'maybe_mk_fsp_shm', 'maybe_mk_fsp_shm',
'Fsp', 'Fsp',
'dolla_vlm', 'dolla_vlm',
@ -50,12 +46,9 @@ __all__: list[str] = [
async def latency( async def latency(
source: 'TickStream[Dict[str, float]]', # noqa source: 'TickStream[Dict[str, float]]', # noqa
ohlcv: np.ndarray ohlcv: np.ndarray
) -> AsyncIterator[np.ndarray]: ) -> AsyncIterator[np.ndarray]:
''' """Latency measurements, broker to piker.
Latency measurements, broker to piker. """
'''
# TODO: do we want to offer yielding this async # TODO: do we want to offer yielding this async
# before the rt data connection comes up? # before the rt data connection comes up?


@ -18,12 +18,13 @@
core task logic for processing chains core task logic for processing chains
''' '''
from __future__ import annotations from dataclasses import dataclass
from contextlib import asynccontextmanager as acm
from functools import partial from functools import partial
from typing import ( from typing import (
AsyncIterator, AsyncIterator,
Callable, Callable,
Optional,
Union,
) )
import numpy as np import numpy as np
@ -32,9 +33,9 @@ from trio_typing import TaskStatus
import tractor import tractor
from tractor.msg import NamespacePath from tractor.msg import NamespacePath
from piker.types import Struct
from ..log import get_logger, get_console_log from ..log import get_logger, get_console_log
from .. import data from .. import data
from ..data import attach_shm_array
from ..data.feed import ( from ..data.feed import (
Flume, Flume,
Feed, Feed,
@ -55,6 +56,12 @@ from ..toolz import Profiler
log = get_logger(__name__) log = get_logger(__name__)
@dataclass
class TaskTracker:
complete: trio.Event
cs: trio.CancelScope
async def filter_quotes_by_sym( async def filter_quotes_by_sym(
sym: str, sym: str,
@ -75,168 +82,30 @@ async def filter_quotes_by_sym(
if quote: if quote:
yield quote yield quote
# TODO: unifying the abstractions in this FSP subsys/layer:
# -[ ] move the `.data.flows.Flume` type into this
# module/subsys/pkg?
# -[ ] ideas for further abstractions as per
# - https://github.com/pikers/piker/issues/216,
# - https://github.com/pikers/piker/issues/270:
# - a (financial signal) ``Flow`` would be a "collection" of such
# minimal cascades. Some engineering based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
# - https://en.wikipedia.org/wiki/Signal-flow_graph
# - https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
# -[ ] we probably want to eval THE BELOW design and unify with the async def fsp_compute(
# proto `TaskManager` in the `tractor` dev branch as well as with
# our below idea for `Cascade`:
# - https://github.com/goodboy/tractor/pull/363
class Cascade(Struct):
'''
As per sig-proc engineering parlance, this is a chaining of
`Flume`s, which are themselves collections of "Streams"
implemented currently via `ShmArray`s.
A `Cascade` is the minimal "connection" of 2 `Flumes`
as per circuit parlance:
https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
TODO:
-[ ] could cover the combination of our `FspAdmin` and the
backend `.fsp._engine` related machinery to "connect" one flume
to another?
'''
# TODO: make these `Flume`s
src: Flume
dst: Flume
tn: trio.Nursery
fsp: Fsp # UI-side middleware ctl API
# filled during cascade/.bind_func() (fsp_compute) init phases
bind_func: Callable | None = None
complete: trio.Event | None = None
cs: trio.CancelScope | None = None
client_stream: tractor.MsgStream | None = None
async def resync(self) -> int:
# TODO: adopt an incremental update engine/approach
# where possible here eventually!
log.info(f're-syncing fsp {self.fsp.name} to source')
self.cs.cancel()
await self.complete.wait()
index: int = await self.tn.start(self.bind_func)
# always trigger UI refresh after history update,
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
# ``piker.ui._display.trigger_update()``.
dst_shm: ShmArray = self.dst.rt_shm
await self.client_stream.send({
'fsp_update': {
'key': dst_shm.token,
'first': dst_shm._first.value,
'last': dst_shm._last.value,
}
})
return index
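
``.resync()`` leans on a generic cancel-and-respawn pattern: cancel the
compute task's scope, wait on its completion event, then ``nursery.start()``
the same bound target again. A runnable toy of just that dance::

    import trio

    async def restartable(task_status=trio.TASK_STATUS_IGNORED):
        with trio.CancelScope() as cs:
            done = trio.Event()
            task_status.started((cs, done))
            try:
                await trio.sleep_forever()  # stand-in for the fsp loop
            finally:
                done.set()

    async def main():
        async with trio.open_nursery() as tn:
            cs, done = await tn.start(restartable)
            cs.cancel()        # trigger a "resync"
            await done.wait()  # old run fully torn down
            cs, done = await tn.start(restartable)  # respawn
            cs.cancel()
            await done.wait()

    trio.run(main)
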
def is_synced(self) -> tuple[bool, int, int]:
'''
Predicate to determine if a destination FSP
output array is aligned to its source array.
'''
src_shm: ShmArray = self.src.rt_shm
dst_shm: ShmArray = self.dst.rt_shm
step_diff = src_shm.index - dst_shm.index
len_diff = abs(len(src_shm.array) - len(dst_shm.array))
synced: bool = not (
# the source is likely backfilling and we must
# sync history calculations
len_diff > 2
# we aren't step synced to the source and may be
# leading/lagging by a step
or step_diff > 1
or step_diff < 0
)
if not synced:
fsp: Fsp = self.fsp
log.warning(
'***DESYNCED FSP***\n'
f'{fsp.ns_path}@{src_shm.token}\n'
f'step_diff: {step_diff}\n'
f'len_diff: {len_diff}\n'
)
return (
synced,
step_diff,
len_diff,
)
async def poll_and_sync_to_step(self) -> int:
synced, step_diff, _ = self.is_synced()
while not synced:
await self.resync()
synced, step_diff, _ = self.is_synced()
return step_diff
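
The de-sync predicate in miniature, with concrete numbers: a dst fsp buffer
is "synced" to its src when their lengths differ by at most 2 rows and the
dst index neither leads nor lags the src by more than one step::

    def is_synced(src_index: int, dst_index: int,
                  src_len: int, dst_len: int) -> bool:
        step_diff = src_index - dst_index
        len_diff = abs(src_len - dst_len)
        return not (
            len_diff > 2        # src likely backfilling history
            or step_diff > 1    # dst lagging
            or step_diff < 0    # dst leading
        )

    assert is_synced(1000, 1000, 500, 500)
    assert not is_synced(1000, 998, 500, 500)   # lagging 2 steps
    assert not is_synced(1000, 1000, 800, 500)  # backfill in progress
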
@acm
async def open_edge(
self,
bind_func: Callable,
) -> int:
self.bind_func = bind_func
index = await self.tn.start(bind_func)
yield index
# TODO: what do we want on teardown/error?
# -[ ] dynamic reconnection after update?
async def connect_streams(
casc: Cascade,
mkt: MktPair, mkt: MktPair,
flume: Flume,
quote_stream: trio.abc.ReceiveChannel, quote_stream: trio.abc.ReceiveChannel,
src: Flume,
dst: Flume,
edge_func: Callable, src: ShmArray,
dst: ShmArray,
func: Callable,
# attach_stream: bool = False, # attach_stream: bool = False,
task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED, task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,
) -> None: ) -> None:
'''
Stream and per-sample compute and write the cascade of
2 `Flumes`/streams given some operating `func`.
https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
Not literally, but something like:
edge_func(Flume_in) -> Flume_out
'''
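
Concretely, an "edge function" here is an async generator: its first yield is
the bulk history output (one array, or a dict of arrays for multi-output
fsps), and every later yield is a ``(key, value)`` pair per processed quote.
A hedged sketch of that shape (``demo_fsp`` is hypothetical)::

    import numpy as np

    async def demo_fsp(quotes, ohlcv: np.ndarray):
        # HISTORY COMPUTE PHASE: one shot over the whole src buffer
        yield np.cumsum(ohlcv['volume'])
        # REAL-TIME PHASE: one incremental output per quote
        async for quote in quotes:
            yield 'demo_fsp', float(quote['last'])
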
profiler = Profiler( profiler = Profiler(
delayed=False, delayed=False,
disabled=True disabled=True
) )
# TODO: just pull it from src.mkt.fqme no? fqme = mkt.fqme
# fqme: str = mkt.fqme out_stream = func(
fqme: str = src.mkt.fqme
# TODO: dynamic introspection of what the underlying (vertex)
# function actually requires from input node (flumes) then
# deliver those inputs as part of a graph "compilation" step?
out_stream = edge_func(
# TODO: do we even need this if we do the feed api right? # TODO: do we even need this if we do the feed api right?
# shouldn't a local stream do this before we get a handle # shouldn't a local stream do this before we get a handle
@ -244,21 +113,20 @@ async def connect_streams(
# async itertools style? # async itertools style?
filter_quotes_by_sym(fqme, quote_stream), filter_quotes_by_sym(fqme, quote_stream),
# XXX: currently the ``ohlcv`` arg, but we should allow # XXX: currently the ``ohlcv`` arg
# (dynamic) requests for src flume (node) streams? flume.rt_shm,
src.rt_shm,
) )
# HISTORY COMPUTE PHASE # HISTORY COMPUTE PHASE
# conduct a single iteration of fsp with historical bars input # conduct a single iteration of fsp with historical bars input
# and get historical output. # and get historical output.
history_output: ( history_output: Union[
dict[str, np.ndarray] # multi-output case dict[str, np.ndarray], # multi-output case
| np.ndarray, # single output case np.ndarray, # single output case
) ]
history_output = await anext(out_stream) history_output = await anext(out_stream)
func_name = edge_func.__name__ func_name = func.__name__
profiler(f'{func_name} generated history') profiler(f'{func_name} generated history')
# build struct array with an 'index' field to push as history # build struct array with an 'index' field to push as history
@ -266,12 +134,10 @@ async def connect_streams(
# TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no? # TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no?
# if the output array is multi-field then push # if the output array is multi-field then push
# each respective field. # each respective field.
dst_shm: ShmArray = dst.rt_shm fields = getattr(dst.array.dtype, 'fields', None).copy()
fields = getattr(dst_shm.array.dtype, 'fields', None).copy()
fields.pop('index') fields.pop('index')
history_by_field: np.ndarray | None = None history_by_field: Optional[np.ndarray] = None
src_shm: ShmArray = src.rt_shm src_time = src.array['time']
src_time = src_shm.array['time']
if ( if (
fields and fields and
@ -290,7 +156,7 @@ async def connect_streams(
if history_by_field is None: if history_by_field is None:
if output is None: if output is None:
length = len(src_shm.array) length = len(src.array)
else: else:
length = len(output) length = len(output)
@ -299,7 +165,7 @@ async def connect_streams(
# will be pushed to shm. # will be pushed to shm.
history_by_field = np.zeros( history_by_field = np.zeros(
length, length,
dtype=dst_shm.array.dtype dtype=dst.array.dtype
) )
if output is None: if output is None:
@ -316,13 +182,13 @@ async def connect_streams(
) )
history_by_field = np.zeros( history_by_field = np.zeros(
len(history_output), len(history_output),
dtype=dst_shm.array.dtype dtype=dst.array.dtype
) )
history_by_field[func_name] = history_output history_by_field[func_name] = history_output
history_by_field['time'] = src_time[-len(history_by_field):] history_by_field['time'] = src_time[-len(history_by_field):]
history_output['time'] = src_shm.array['time'] history_output['time'] = src.array['time']
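
The struct-array history push in miniature: allocate a zeroed frame with the
dst dtype, fill the fsp's named output column, then copy the src buffer's
trailing timestamps so both arrays stay time-aligned (toy dtype and values)::

    import numpy as np

    dst_dtype = np.dtype([('index', 'i8'), ('time', 'f8'), ('demo', 'f8')])
    src_time = np.arange(5, dtype='f8') * 60.0
    history_output = np.array([1., 2., 3.])

    hist = np.zeros(len(history_output), dtype=dst_dtype)
    hist['demo'] = history_output
    hist['time'] = src_time[-len(hist):]
    print(hist)
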
# TODO: XXX: # TODO: XXX:
# THERE'S A BIG BUG HERE WITH THE `index` field since we're # THERE'S A BIG BUG HERE WITH THE `index` field since we're
@ -335,11 +201,11 @@ async def connect_streams(
# is `index` aware such that historical data can be indexed # is `index` aware such that historical data can be indexed
# relative to the true first datum? Not sure if this is sane # for incremental computations.
# for incremental computations.
first = dst_shm._first.value = src_shm._first.value first = dst._first.value = src._first.value
# TODO: can we use this `start` flag instead of the manual # TODO: can we use this `start` flag instead of the manual
# setting above? # setting above?
index = dst_shm.push( index = dst.push(
history_by_field, history_by_field,
start=first, start=first,
) )
@ -350,9 +216,12 @@ async def connect_streams(
# setup a respawn handle # setup a respawn handle
with trio.CancelScope() as cs: with trio.CancelScope() as cs:
casc.cs = cs # TODO: might be better to just make a "restart" method where
casc.complete = trio.Event() # the target task is spawned implicitly and then the event is
task_status.started(index) # set via some higher level api? At that point we might as well
# be writing a one-cancels-one nursery though right?
tracker = TaskTracker(trio.Event(), cs)
task_status.started((tracker, index))
profiler(f'{func_name} yield last index') profiler(f'{func_name} yield last index')
@ -366,12 +235,12 @@ async def connect_streams(
log.debug(f"{func_name}: {processed}") log.debug(f"{func_name}: {processed}")
key, output = processed key, output = processed
# dst.array[-1][key] = output # dst.array[-1][key] = output
dst_shm.array[[key, 'time']][-1] = ( dst.array[[key, 'time']][-1] = (
output, output,
# TODO: what about pushing ``time.time_ns()`` # TODO: what about pushing ``time.time_ns()``
# in which case we'll need to round at the graphics # in which case we'll need to round at the graphics
# processing / sampling layer? # processing / sampling layer?
src_shm.array[-1]['time'] src.array[-1]['time']
) )
# NOTE: for now we aren't streaming this to the consumer # NOTE: for now we aren't streaming this to the consumer
@ -383,7 +252,7 @@ async def connect_streams(
# N-consumers who subscribe for the real-time output, # N-consumers who subscribe for the real-time output,
# which we'll likely want to implement using local-mem # which we'll likely want to implement using local-mem
# chans for the fan out? # chans for the fan out?
# index = src_shm.index # index = src.index
# if attach_stream: # if attach_stream:
# await client_stream.send(index) # await client_stream.send(index)
@ -393,7 +262,7 @@ async def connect_streams(
# log.info(f'FSP quote too fast: {hz}') # log.info(f'FSP quote too fast: {hz}')
# last = time.time() # last = time.time()
finally: finally:
casc.complete.set() tracker.complete.set()
@tractor.context @tractor.context
@ -404,15 +273,15 @@ async def cascade(
# data feed key # data feed key
fqme: str, fqme: str,
# flume pair cascaded using an "edge function" src_shm_token: dict,
src_flume_addr: dict, dst_shm_token: tuple[str, np.dtype],
dst_flume_addr: dict,
ns_path: NamespacePath, ns_path: NamespacePath,
shm_registry: dict[str, _Token], shm_registry: dict[str, _Token],
zero_on_step: bool = False, zero_on_step: bool = False,
loglevel: str | None = None, loglevel: Optional[str] = None,
) -> None: ) -> None:
''' '''
@ -428,14 +297,8 @@ async def cascade(
if loglevel: if loglevel:
get_console_log(loglevel) get_console_log(loglevel)
src: Flume = Flume.from_msg(src_flume_addr) src = attach_shm_array(token=src_shm_token)
dst: Flume = Flume.from_msg( dst = attach_shm_array(readonly=False, token=dst_shm_token)
dst_flume_addr,
readonly=False,
)
# src: ShmArray = attach_shm_array(token=src_shm_token)
# dst: ShmArray = attach_shm_array(readonly=False, token=dst_shm_token)
reg = _load_builtins() reg = _load_builtins()
lines = '\n'.join([f'{key.rpartition(":")[2]} => {key}' for key in reg]) lines = '\n'.join([f'{key.rpartition(":")[2]} => {key}' for key in reg])
@ -443,11 +306,11 @@ async def cascade(
f'Registered FSP set:\n{lines}' f'Registered FSP set:\n{lines}'
) )
# NOTE XXX: update actorlocal flows table which registers # update actorlocal flows table which registers
# readonly "instances" of this fsp for symbol/source so that # readonly "instances" of this fsp for symbol/source
# consumer fsps can look it up by source + fsp. # so that consumer fsps can look it up by source + fsp.
# TODO: ugh i hate this wind/unwind to list over the wire but # TODO: ugh i hate this wind/unwind to list over the wire
# not sure how else to do it. # but not sure how else to do it.
for (token, fsp_name, dst_token) in shm_registry: for (token, fsp_name, dst_token) in shm_registry:
Fsp._flow_registry[( Fsp._flow_registry[(
_Token.from_msg(token), _Token.from_msg(token),
@ -457,15 +320,12 @@ async def cascade(
fsp: Fsp = reg.get( fsp: Fsp = reg.get(
NamespacePath(ns_path) NamespacePath(ns_path)
) )
func: Callable = fsp.func func = fsp.func
if not func: if not func:
# TODO: assume it's a func target path # TODO: assume it's a func target path
raise ValueError(f'Unknown fsp target: {ns_path}') raise ValueError(f'Unknown fsp target: {ns_path}')
_fqme: str = src.mkt.fqme
assert _fqme == fqme
# open a data feed stream with requested broker # open a data feed stream with requested broker
feed: Feed feed: Feed
async with data.feed.maybe_open_feed( async with data.feed.maybe_open_feed(
@ -479,142 +339,177 @@ async def cascade(
) as feed: ) as feed:
flume: Flume = feed.flumes[fqme] flume = feed.flumes[fqme]
# XXX: can't do this since flume.feed will be set XD mkt = flume.mkt
# assert flume == src assert src.token == flume.rt_shm.token
assert flume.mkt == src.mkt
mkt: MktPair = flume.mkt
# NOTE: FOR NOW, sanity checks around the feed as being
# always the src flume (until we get to fancier/lengthier
# chains/graphs.
assert src.rt_shm.token == flume.rt_shm.token
# XXX: won't work bc the _hist_shm_token value will be
# list[list] after IPC..
# assert flume.to_msg() == src_flume_addr
profiler(f'{func}: feed up') profiler(f'{func}: feed up')
func_name: str = func.__name__ func_name = func.__name__
async with ( async with (
trio.open_nursery() as tn, trio.open_nursery() as n,
): ):
# TODO: might be better to just make a "restart" method where
# the target task is spawned implicitly and then the event is
# set via some higher level api? At that point we might as well
# be writing a one-cancels-one nursery though right?
casc = Cascade(
src,
dst,
tn,
fsp,
)
# TODO: this seems like it should be wrapped somewhere?
fsp_target = partial( fsp_target = partial(
connect_streams,
casc=casc, fsp_compute,
mkt=mkt, mkt=mkt,
flume=flume,
quote_stream=flume.stream, quote_stream=flume.stream,
# flumes and shm passthrough # shm
src=src, src=src,
dst=dst, dst=dst,
# chain function which takes src flume input(s) # target
# and renders dst flume output(s) func=func
edge_func=func
) )
async with casc.open_edge(
bind_func=fsp_target,
) as index:
# casc.bind_func = fsp_target
# index = await tn.start(fsp_target)
dst_shm: ShmArray = dst.rt_shm
src_shm: ShmArray = src.rt_shm
if zero_on_step: tracker, index = await n.start(fsp_target)
last = dst.rt_shm.array[-1:]
zeroed = np.zeros(last.shape, dtype=last.dtype)
profiler(f'{func_name}: fsp up') if zero_on_step:
last = dst.array[-1:]
zeroed = np.zeros(last.shape, dtype=last.dtype)
# sync to client-side actor profiler(f'{func_name}: fsp up')
await ctx.started(index)
# XXX: rt stream with client which we MUST # sync client
# open here (and keep it open) in order to make await ctx.started(index)
# incremental "updates" as history prepends take
# place.
async with ctx.open_stream() as client_stream:
casc.client_stream: tractor.MsgStream = client_stream
s, step, ld = casc.is_synced() # XXX: rt stream with client which we MUST
# open here (and keep it open) in order to make
# incremental "updates" as history prepends take
# place.
async with ctx.open_stream() as client_stream:
# detect sample period step for subscription to increment # TODO: these likely should all become
# signal # methods of this ``TaskLifetime`` or wtv
times = src.rt_shm.array['time'] # abstraction..
if len(times) > 1: async def resync(
last_ts = times[-1] tracker: TaskTracker,
delay_s: float = float(last_ts - times[times != last_ts][-1])
else:
# our default "HFT" sample rate.
delay_s: float = _default_delay_s
# sub and increment the underlying shared memory buffer ) -> tuple[TaskTracker, int]:
# on every step msg received from the global `samplerd` # TODO: adopt an incremental update engine/approach
# service. # where possible here eventually!
async with open_sample_stream( log.info(f're-syncing fsp {func_name} to source')
float(delay_s) tracker.cs.cancel()
) as istream: await tracker.complete.wait()
tracker, index = await n.start(fsp_target)
profiler(f'{func_name}: sample stream up') # always trigger UI refresh after history update,
profiler.finish() # see ``piker.ui._fsp.FspAdmin.open_chain()`` and
# ``piker.ui._display.trigger_update()``.
await client_stream.send({
'fsp_update': {
'key': dst_shm_token,
'first': dst._first.value,
'last': dst._last.value,
}
})
return tracker, index
async for i in istream: def is_synced(
# print(f'FSP incrementing {i}') src: ShmArray,
dst: ShmArray
) -> tuple[bool, int, int]:
'''
Predicate to determine if a destination FSP
output array is aligned to its source array.
# respawn the compute task if the source '''
# array has been updated such that we compute step_diff = src.index - dst.index
# new history from the (prepended) source. len_diff = abs(len(src.array) - len(dst.array))
synced, step_diff, _ = casc.is_synced() return not (
if not synced: # the source is likely backfilling and we must
step_diff: int = await casc.poll_and_sync_to_step() # sync history calculations
len_diff > 2
# skip adding a last bar since we should already # we aren't step synced to the source and may be
# be step aligned # leading/lagging by a step
if step_diff == 0: or step_diff > 1
continue or step_diff < 0
), step_diff, len_diff
# read out last shm row, copy and write new row async def poll_and_sync_to_step(
array = dst_shm.array tracker: TaskTracker,
src: ShmArray,
dst: ShmArray,
# some metrics like vlm should be reset ) -> tuple[TaskTracker, int]:
# to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
dst.rt_shm.push(last) synced, step_diff, _ = is_synced(src, dst)
while not synced:
tracker, index = await resync(tracker)
synced, step_diff, _ = is_synced(src, dst)
# sync with source buffer's time step return tracker, step_diff
src_l2 = src_shm.array[-2:]
src_li, src_lt = src_l2[-1][['index', 'time']]
src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst_shm._array['time'][src_li] = src_lt
dst_shm._array['time'][src_2li] = src_2lt
# last2 = dst.array[-2:] s, step, ld = is_synced(src, dst)
# if (
# last2[-1]['index'] != src_li # detect sample period step for subscription to increment
# or last2[-2]['index'] != src_2li # signal
# ): times = src.array['time']
# dstl2 = list(last2) if len(times) > 1:
# srcl2 = list(src_l2) last_ts = times[-1]
# print( delay_s = float(last_ts - times[times != last_ts][-1])
# # f'{dst.token}\n' else:
# f'src: {srcl2}\n' # our default "HFT" sample rate.
# f'dst: {dstl2}\n' delay_s = _default_delay_s
# )
# sub and increment the underlying shared memory buffer
# on every step msg received from the global `samplerd`
# service.
async with open_sample_stream(float(delay_s)) as istream:
profiler(f'{func_name}: sample stream up')
profiler.finish()
async for i in istream:
# print(f'FSP incrementing {i}')
# respawn the compute task if the source
# array has been updated such that we compute
# new history from the (prepended) source.
synced, step_diff, _ = is_synced(src, dst)
if not synced:
tracker, step_diff = await poll_and_sync_to_step(
tracker,
src,
dst,
)
# skip adding a last bar since we should already
# be step aligned
if step_diff == 0:
continue
# read out last shm row, copy and write new row
array = dst.array
# some metrics like vlm should be reset
# to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
dst.push(last)
# sync with source buffer's time step
src_l2 = src.array[-2:]
src_li, src_lt = src_l2[-1][['index', 'time']]
src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst._array['time'][src_li] = src_lt
dst._array['time'][src_2li] = src_2lt
# last2 = dst.array[-2:]
# if (
# last2[-1]['index'] != src_li
# or last2[-2]['index'] != src_2li
# ):
# dstl2 = list(last2)
# srcl2 = list(src_l2)
# print(
# # f'{dst.token}\n'
# f'src: {srcl2}\n'
# f'dst: {dstl2}\n'
# )


@ -14,45 +14,49 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' """
Actor runtime primitives and (distributed) service APIs for, Actor-runtime service orchestration machinery.
- daemon-service mgmt: `_daemon` (i.e. low-level spawn and supervise machinery """
for sub-actors like `brokerd`, `emsd`, `datad`, etc.) from __future__ import annotations
- service-actor supervision (via `trio` tasks) API: `._mngr` from ._mngr import Services
from ._registry import ( # noqa
- discovery interface (via light wrapping around `tractor`'s built-in _tractor_kwargs,
prot): `._registry` _default_reg_addr,
_default_registry_host,
- `docker` cntr SC supervision for use with `trio`: `_ahab` _default_registry_port,
- wrappers for marketstore and elasticsearch dbs open_registry,
=> TODO: maybe to (re)move elsewhere? find_service,
check_for_service,
'''
from ._mngr import Services as Services
from ._registry import (
_tractor_kwargs as _tractor_kwargs,
_default_reg_addr as _default_reg_addr,
_default_registry_host as _default_registry_host,
_default_registry_port as _default_registry_port,
open_registry as open_registry,
find_service as find_service,
check_for_service as check_for_service,
) )
from ._daemon import ( from ._daemon import ( # noqa
maybe_spawn_daemon as maybe_spawn_daemon, maybe_spawn_daemon,
spawn_emsd as spawn_emsd, spawn_emsd,
maybe_open_emsd as maybe_open_emsd, maybe_open_emsd,
) )
from ._actor_runtime import ( from ._actor_runtime import (
open_piker_runtime as open_piker_runtime, open_piker_runtime,
maybe_open_pikerd as maybe_open_pikerd, maybe_open_pikerd,
open_pikerd as open_pikerd, open_pikerd,
get_runtime_vars as get_runtime_vars, get_tractor_runtime_kwargs,
) )
from ..brokers._daemon import ( from ..brokers._daemon import (
spawn_brokerd as spawn_brokerd, spawn_brokerd,
maybe_spawn_brokerd as maybe_spawn_brokerd, maybe_spawn_brokerd,
) )
__all__ = [
'check_for_service',
'Services',
'maybe_spawn_daemon',
'spawn_brokerd',
'maybe_spawn_brokerd',
'spawn_emsd',
'maybe_open_emsd',
'open_piker_runtime',
'maybe_open_pikerd',
'open_pikerd',
'get_tractor_runtime_kwargs',
]


@ -45,7 +45,7 @@ from ._registry import ( # noqa
) )
def get_runtime_vars() -> dict[str, Any]: def get_tractor_runtime_kwargs() -> dict[str, Any]:
''' '''
Deliver ``tractor`` related runtime variables in a `dict`. Deliver ``tractor`` related runtime variables in a `dict`.
@ -56,8 +56,6 @@ def get_runtime_vars() -> dict[str, Any]:
@acm @acm
async def open_piker_runtime( async def open_piker_runtime(
name: str, name: str,
registry_addrs: list[tuple[str, int]] = [],
enable_modules: list[str] = [], enable_modules: list[str] = [],
loglevel: Optional[str] = None, loglevel: Optional[str] = None,
@ -65,6 +63,8 @@ async def open_piker_runtime(
# for data daemons when running in production. # for data daemons when running in production.
debug_mode: bool = False, debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
# TODO: once we have `rsyscall` support we will read a config # TODO: once we have `rsyscall` support we will read a config
# and spawn the service tree distributed per that. # and spawn the service tree distributed per that.
start_method: str = 'trio', start_method: str = 'trio',
@ -74,7 +74,7 @@ async def open_piker_runtime(
) -> tuple[ ) -> tuple[
tractor.Actor, tractor.Actor,
list[tuple[str, int]], tuple[str, int],
]: ]:
''' '''
Start a piker actor who's runtime will automatically sync with Start a piker actor who's runtime will automatically sync with
@ -84,31 +84,21 @@ async def open_piker_runtime(
a root actor. a root actor.
''' '''
# check for existing runtime, boot it
# if not already running.
try: try:
actor = tractor.current_actor() # check for existing runtime
actor = tractor.current_actor().uid
except tractor._exceptions.NoRuntime: except tractor._exceptions.NoRuntime:
tractor._state._runtime_vars[ tractor._state._runtime_vars[
'piker_vars' 'piker_vars'] = tractor_runtime_overrides
] = tractor_runtime_overrides
# NOTE: if no registrar list passed used the default of just registry_addr = registry_addr or _default_reg_addr
# setting it as the root actor on localhost.
registry_addrs = (
registry_addrs
or [_default_reg_addr]
)
if ems := tractor_kwargs.pop('enable_modules', None):
# import pdbp; pdbp.set_trace()
enable_modules.extend(ems)
async with ( async with (
tractor.open_root_actor( tractor.open_root_actor(
# passed through to ``open_root_actor`` # passed through to ``open_root_actor``
registry_addrs=registry_addrs, arbiter_addr=registry_addr,
name=name, name=name,
loglevel=loglevel, loglevel=loglevel,
debug_mode=debug_mode, debug_mode=debug_mode,
@ -120,30 +110,24 @@ async def open_piker_runtime(
enable_modules=enable_modules, enable_modules=enable_modules,
**tractor_kwargs, **tractor_kwargs,
) as actor, ) as _,
open_registry( open_registry(registry_addr, ensure_exists=False) as addr,
registry_addrs,
ensure_exists=False,
) as addrs,
): ):
assert actor is tractor.current_actor()
yield ( yield (
actor, tractor.current_actor(),
addrs, addr,
) )
else: else:
async with open_registry( async with open_registry(registry_addr) as addr:
registry_addrs
) as addrs:
yield ( yield (
actor, actor,
addrs, addr,
) )
_root_dname: str = 'pikerd' _root_dname = 'pikerd'
_root_modules: list[str] = [ _root_modules = [
__name__, __name__,
'piker.service._daemon', 'piker.service._daemon',
'piker.brokers._daemon', 'piker.brokers._daemon',
@ -157,13 +141,13 @@ _root_modules: list[str] = [
@acm @acm
async def open_pikerd( async def open_pikerd(
registry_addrs: list[tuple[str, int]],
loglevel: str | None = None, loglevel: str | None = None,
# XXX: you should pretty much never want debug mode # XXX: you should pretty much never want debug mode
# for data daemons when running in production. # for data daemons when running in production.
debug_mode: bool = False, debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
**kwargs, **kwargs,
@ -175,39 +159,29 @@ async def open_pikerd(
alive underlying services (see below). alive underlying services (see below).
''' '''
# NOTE: for the root daemon we always enable the root
# mod set and we `list.extend()` it into wtv the
# caller requested.
# TODO: make this mod set more strict?
# -[ ] eventually we should be able to avoid
# having the root have more then permissions to spawn other
# specialized daemons I think?
ems: list[str] = kwargs.setdefault('enable_modules', [])
ems.extend(_root_modules)
async with ( async with (
open_piker_runtime( open_piker_runtime(
name=_root_dname, name=_root_dname,
# TODO: eventually we should be able to avoid
# having the root have more then permissions to
# spawn other specialized daemons I think?
enable_modules=_root_modules,
loglevel=loglevel, loglevel=loglevel,
debug_mode=debug_mode, debug_mode=debug_mode,
registry_addrs=registry_addrs, registry_addr=registry_addr,
**kwargs, **kwargs,
) as ( ) as (root_actor, reg_addr),
root_actor,
reg_addrs,
),
tractor.open_nursery() as actor_nursery, tractor.open_nursery() as actor_nursery,
trio.open_nursery() as service_nursery, trio.open_nursery() as service_nursery,
): ):
for addr in reg_addrs: if root_actor.accept_addr != reg_addr:
if addr not in root_actor.accept_addrs: raise RuntimeError(
raise RuntimeError( f'`pikerd` failed to bind on {reg_addr}!\n'
f'`pikerd` failed to bind on {addr}!\n' 'Maybe you have another daemon already running?'
'Maybe you have another daemon already running?' )
)
# assign globally for future daemon/task creation # assign globally for future daemon/task creation
Services.actor_n = actor_nursery Services.actor_n = actor_nursery
@ -251,9 +225,9 @@ async def open_pikerd(
@acm @acm
async def maybe_open_pikerd( async def maybe_open_pikerd(
registry_addrs: list[tuple[str, int]] | None = None, loglevel: Optional[str] = None,
registry_addr: None | tuple = None,
loglevel: str | None = None,
**kwargs, **kwargs,
) -> tractor._portal.Portal | ClassVar[Services]: ) -> tractor._portal.Portal | ClassVar[Services]:
@ -279,51 +253,32 @@ async def maybe_open_pikerd(
# async with open_portal(chan) as arb_portal: # async with open_portal(chan) as arb_portal:
# yield arb_portal # yield arb_portal
registry_addrs: list[tuple[str, int]] = (
registry_addrs
or [_default_reg_addr]
)
pikerd_portal: tractor.Portal | None
async with ( async with (
open_piker_runtime( open_piker_runtime(
name=query_name, name=query_name,
registry_addrs=registry_addrs, registry_addr=registry_addr,
loglevel=loglevel, loglevel=loglevel,
**kwargs, **kwargs,
) as (actor, addrs), ) as _,
tractor.find_actor(
_root_dname,
arbiter_sockaddr=registry_addr,
) as portal
): ):
if _root_dname in actor.uid: # connect to any existing daemon presuming
yield None # its registry socket was selected.
if (
portal is not None
):
yield portal
return return
# NOTE: IFF running in disti mode, try to attach to any
# existing (host-local) `pikerd`.
else:
async with tractor.find_actor(
_root_dname,
registry_addrs=registry_addrs,
only_first=True,
# raise_on_none=True,
) as pikerd_portal:
# connect to any existing remote daemon presuming its
# registry socket was selected.
if pikerd_portal is not None:
# sanity check that we are actually connecting to
# a remote process and not ourselves.
assert actor.uid != pikerd_portal.channel.uid
assert registry_addrs
yield pikerd_portal
return
# presume pikerd role since no daemon could be found at # presume pikerd role since no daemon could be found at
# configured address # configured address
async with open_pikerd( async with open_pikerd(
loglevel=loglevel, loglevel=loglevel,
registry_addrs=registry_addrs, registry_addr=registry_addr,
# passthrough to ``tractor`` init # passthrough to ``tractor`` init
**kwargs, **kwargs,


@ -15,8 +15,8 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
Supervisor for ``docker`` with included async and SC wrapping to Supervisor for ``docker`` with included async and SC wrapping
ensure a cancellable container lifetime system. to ensure a cancellable container lifetime system.
''' '''
from __future__ import annotations from __future__ import annotations


@ -70,10 +70,7 @@ async def maybe_spawn_daemon(
lock = Services.locks[service_name] lock = Services.locks[service_name]
await lock.acquire() await lock.acquire()
async with find_service( async with find_service(service_name) as portal:
service_name,
registry_addrs=[('127.0.0.1', 6116)],
) as portal:
if portal is not None: if portal is not None:
lock.release() lock.release()
yield portal yield portal


@ -27,12 +27,6 @@ from typing import (
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor import tractor
from tractor import (
current_actor,
ContextCancelled,
Context,
Portal,
)
from ._util import ( from ._util import (
log, # sub-sys logger log, # sub-sys logger
@ -44,8 +38,6 @@ from ._util import (
# library. # library.
# - wrap a "remote api" wherein you can get a method proxy # - wrap a "remote api" wherein you can get a method proxy
# to the pikerd actor for starting services remotely! # to the pikerd actor for starting services remotely!
# - prolly rename this to ActorServicesNursery since it spawns
# new actors and supervises them to completion?
class Services: class Services:
actor_n: tractor._supervise.ActorNursery actor_n: tractor._supervise.ActorNursery
@ -55,7 +47,7 @@ class Services:
str, str,
tuple[ tuple[
trio.CancelScope, trio.CancelScope,
Portal, tractor.Portal,
trio.Event, trio.Event,
] ]
] = {} ] = {}
@ -65,12 +57,12 @@ class Services:
async def start_service_task( async def start_service_task(
self, self,
name: str, name: str,
portal: Portal, portal: tractor.Portal,
target: Callable, target: Callable,
allow_overruns: bool = False, allow_overruns: bool = False,
**ctx_kwargs, **ctx_kwargs,
) -> (trio.CancelScope, Context): ) -> (trio.CancelScope, tractor.Context):
''' '''
Open a context in a service sub-actor, add to a stack Open a context in a service sub-actor, add to a stack
that gets unwound at ``pikerd`` teardown. that gets unwound at ``pikerd`` teardown.
@ -109,30 +101,13 @@ class Services:
# wait on any context's return value # wait on any context's return value
# and any final portal result from the # and any final portal result from the
# sub-actor. # sub-actor.
ctx_res: Any = await ctx.result() ctx_res = await ctx.result()
# NOTE: blocks indefinitely until cancelled # NOTE: blocks indefinitely until cancelled
# either by error from the target context # either by error from the target context
# function or by being cancelled here by the # function or by being cancelled here by the
# surrounding cancel scope. # surrounding cancel scope.
return (await portal.result(), ctx_res) return (await portal.result(), ctx_res)
except ContextCancelled as ctxe:
canceller: tuple[str, str] = ctxe.canceller
our_uid: tuple[str, str] = current_actor().uid
if (
canceller != portal.channel.uid
and
canceller != our_uid
):
log.cancel(
f'Actor-service {name} was remotely cancelled?\n'
f'remote canceller: {canceller}\n'
f'Keeping {our_uid} alive, ignoring sub-actor cancel..\n'
)
else:
raise
finally: finally:
await portal.cancel_actor() await portal.cancel_actor()


@ -27,7 +27,6 @@ from typing import (
) )
import tractor import tractor
from tractor import Portal
from ._util import ( from ._util import (
log, # sub-sys logger log, # sub-sys logger
@ -47,9 +46,7 @@ _registry: Registry | None = None
class Registry: class Registry:
# TODO: should this be a set or should we complain addr: None | tuple[str, int] = None
# on duplicates?
addrs: list[tuple[str, int]] = []
# TODO: table of uids to sockaddrs # TODO: table of uids to sockaddrs
peers: dict[ peers: dict[
@ -63,115 +60,69 @@ _tractor_kwargs: dict[str, Any] = {}
@acm @acm
async def open_registry( async def open_registry(
addrs: list[tuple[str, int]], addr: None | tuple[str, int] = None,
ensure_exists: bool = True, ensure_exists: bool = True,
) -> list[tuple[str, int]]: ) -> tuple[str, int]:
'''
Open the service-actor-discovery registry by returning a set of
tranport socket-addrs to registrar actors which may be
contacted and queried for similar addresses for other
non-registrar actors.
'''
global _tractor_kwargs global _tractor_kwargs
actor = tractor.current_actor() actor = tractor.current_actor()
uid = actor.uid uid = actor.uid
preset_reg_addrs: list[tuple[str, int]] = Registry.addrs
if ( if (
preset_reg_addrs Registry.addr is not None
and addrs and addr
): ):
if preset_reg_addrs != addrs: raise RuntimeError(
# if any(addr in preset_reg_addrs for addr in addrs): f'`{uid}` registry addr already bound @ {_registry.sockaddr}'
diff: set[tuple[str, int]] = set(preset_reg_addrs) - set(addrs) )
if diff:
log.warning(
f'`{uid}` requested only subset of registrars: {addrs}\n'
f'However there are more @{diff}'
)
else:
raise RuntimeError(
f'`{uid}` has non-matching registrar addresses?\n'
f'request: {addrs}\n'
f'already set: {preset_reg_addrs}'
)
was_set: bool = False was_set: bool = False
if ( if (
not tractor.is_root_process() not tractor.is_root_process()
and not Registry.addrs and Registry.addr is None
): ):
Registry.addrs.extend(actor.reg_addrs) Registry.addr = actor._arb_addr
if ( if (
ensure_exists ensure_exists
and not Registry.addrs and Registry.addr is None
): ):
raise RuntimeError( raise RuntimeError(
f"`{uid}` registry should already exist but doesn't?" f"`{uid}` registry should already exist bug doesn't?"
) )
if ( if (
not Registry.addrs Registry.addr is None
): ):
was_set = True was_set = True
Registry.addrs = addrs or [_default_reg_addr] Registry.addr = addr or _default_reg_addr
# NOTE: only spot this seems currently used is inside _tractor_kwargs['arbiter_addr'] = Registry.addr
# `.ui._exec` which is the (eventual qtloops) bootstrapping
# with guest mode.
_tractor_kwargs['registry_addrs'] = Registry.addrs
try: try:
yield Registry.addrs yield Registry.addr
finally: finally:
# XXX: always clear the global addr if we set it so that the # XXX: always clear the global addr if we set it so that the
# next (set of) calls will apply whatever new one is passed # next (set of) calls will apply whatever new one is passed
# in. # in.
if was_set: if was_set:
Registry.addrs = None Registry.addr = None
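
The set-once semantics being added on the left, as a toy (simplified: the
real code only warns when the request is a subset of the pinned addrs): the
first caller pins the registrar addrs actor-wide, later callers read them
back or error on a conflicting set, and only the original setter clears them
on exit::

    class Registry:
        addrs: list[tuple[str, int]] = []

    def pin(addrs: list[tuple[str, int]]) -> bool:
        if Registry.addrs and addrs and Registry.addrs != addrs:
            raise RuntimeError('non-matching registrar addresses?')
        if not Registry.addrs:
            Registry.addrs = addrs
            return True  # we set it, so we must clear it on exit
        return False

    assert pin([('127.0.0.1', 6116)]) is True
    assert pin([]) is False  # read-back; already pinned
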
@acm @acm
async def find_service( async def find_service(
service_name: str, service_name: str,
registry_addrs: list[tuple[str, int]] | None = None, ) -> tractor.Portal | None:
first_only: bool = True, async with open_registry() as reg_addr:
) -> (
Portal
| list[Portal]
| None
):
reg_addrs: list[tuple[str, int]]
async with open_registry(
addrs=(
registry_addrs
# NOTE: if no addr set is passed assume the registry has
# already been opened and use the previously applied
# startup set.
or Registry.addrs
),
) as reg_addrs:
log.info(f'Scanning for service `{service_name}`') log.info(f'Scanning for service `{service_name}`')
maybe_portals: list[Portal] | Portal | None
# attach to existing daemon by name if possible # attach to existing daemon by name if possible
async with tractor.find_actor( async with tractor.find_actor(
service_name, service_name,
registry_addrs=reg_addrs, arbiter_sockaddr=reg_addr,
only_first=first_only, # if set only returns single ref ) as maybe_portal:
) as maybe_portals: yield maybe_portal
if not maybe_portals:
yield None
return
yield maybe_portals
async def check_for_service( async def check_for_service(
@ -182,11 +133,9 @@ async def check_for_service(
Service daemon "liveness" predicate. Service daemon "liveness" predicate.
''' '''
async with ( async with open_registry(ensure_exists=False) as reg_addr:
open_registry(ensure_exists=False) as reg_addr, async with tractor.query_actor(
tractor.query_actor(
service_name, service_name,
arbiter_sockaddr=reg_addr, arbiter_sockaddr=reg_addr,
) as sockaddr, ) as sockaddr:
): return sockaddr
return sockaddr
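
Typical client usage of these discovery helpers, sketched (mirrors the
attach-or-spawn flow in ``maybe_spawn_daemon()`` shown earlier; the helper
name is hypothetical)::

    async def attach_or_report(service_name: str) -> None:
        # `find_service()` yields a portal when a matching actor is
        # registered, else `None` (in which case a spawn would follow).
        async with find_service(service_name) as portal:
            if portal is None:
                print(f'no `{service_name}` up; would spawn it here..')
            else:
                print(f'attached to {portal.channel.uid}')
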


@ -139,13 +139,6 @@ class StorageClient(
... ...
class TimeseriesNotFound(Exception):
'''
No timeseries entry can be found for this backend.
'''
class StorageConnectionError(ConnectionError): class StorageConnectionError(ConnectionError):
''' '''
Can't connect to the desired tsdb subsys/service. Can't connect to the desired tsdb subsys/service.
@ -176,13 +169,10 @@ async def open_storage_client(
tsdb_host: str = 'localhost' tsdb_host: str = 'localhost'
# load root config and any tsdb user defined settings # load root config and any tsdb user defined settings
conf, path = config.load( conf, path = config.load('conf', touch_if_dne=True)
conf_name='conf',
touch_if_dne=True,
)
# TODO: maybe not under a "network" section.. since # TODO: maybe not under a "network" section.. since
# no more chitty `marketstore`.. # no more chitty mkts..
tsdbconf: dict = {} tsdbconf: dict = {}
service_section = conf.get('service') service_section = conf.get('service')
if ( if (
@ -193,11 +183,8 @@ async def open_storage_client(
# lookup backend tsdb module by name and load any user service # lookup backend tsdb module by name and load any user service
# settings for connecting to the tsdb service. # settings for connecting to the tsdb service.
backend: str = tsdbconf.pop( backend: str = tsdbconf.pop('backend')
'name', tsdb_host: str = tsdbconf['host']
def_backend,
)
tsdb_host: str = tsdbconf.get('maddrs', [])
if backend is None: if backend is None:
backend: str = def_backend backend: str = def_backend


@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers) # Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -19,18 +19,10 @@ Storage middle-ware CLIs.
""" """
from __future__ import annotations from __future__ import annotations
# from datetime import datetime
# from contextlib import (
# AsyncExitStack,
# )
from pathlib import Path from pathlib import Path
from math import copysign
import time import time
from types import ModuleType from typing import Generator
from typing import ( # from typing import TYPE_CHECKING
Any,
TYPE_CHECKING,
)
import polars as pl import polars as pl
import numpy as np import numpy as np
@ -43,21 +35,24 @@ import typer
from piker.service import open_piker_runtime from piker.service import open_piker_runtime
from piker.cli import cli from piker.cli import cli
from piker.config import get_conf_dir
from piker.data import ( from piker.data import (
maybe_open_shm_array,
def_iohlcv_fields,
ShmArray, ShmArray,
) )
from piker import tsp from piker.data.history import (
from piker.data._formatters import BGM _default_hist_size,
from . import log _default_rt_size,
)
from . import (
log,
)
from . import ( from . import (
__tsdbs__, __tsdbs__,
open_storage_client, open_storage_client,
StorageClient,
) )
if TYPE_CHECKING:
from piker.ui._remote_ctl import AnnotCtl
store = typer.Typer() store = typer.Typer()
@ -103,18 +98,6 @@ def ls(
trio.run(query_all) trio.run(query_all)
# TODO: like ls but takes in a pattern and matches
# @store.command()
# def search(
# patt: str,
# backends: list[str] = typer.Argument(
# default=None,
# help='Storage backends to query, default is all.'
# ),
# ):
# ...
@store.command() @store.command()
def delete( def delete(
symbols: list[str], symbols: list[str],
@ -157,33 +140,20 @@ def delete(
def anal( def anal(
fqme: str, fqme: str,
period: int = 60, period: int = 60,
pdb: bool = False,
) -> np.ndarray: ) -> np.ndarray:
'''
Anal-ysis is when you take the data and do stuff to it.
NOTE: This ONLY loads the offline timeseries data (by default
from a parquet file) NOT the in-shm version you might be seeing
in a chart.
'''
async def main(): async def main():
async with ( async with (
open_piker_runtime( open_piker_runtime(
# are you a bear or boi?
'tsdb_polars_anal', 'tsdb_polars_anal',
debug_mode=pdb, debug_mode=True,
),
open_storage_client() as (
mod,
client,
), ),
open_storage_client() as (mod, client),
): ):
syms: list[str] = await client.list_keys() syms: list[str] = await client.list_keys()
log.info(f'{len(syms)} FOUND for {mod.name}') print(f'{len(syms)} FOUND for {mod.name}')
history: ShmArray # np buffer format
( (
history, history,
first_dt, first_dt,
@ -194,357 +164,179 @@ def anal(
) )
assert first_dt < last_dt assert first_dt < last_dt
null_segs: tuple = tsp.get_null_segs( src_df = await client.as_df(fqme, period)
frame=history, from piker.data import _timeseries as tsmod
period=period, df: pl.DataFrame = tsmod.with_dts(src_df)
) gaps: pl.DataFrame = tsmod.detect_time_gaps(df)
# TODO: do tsp queries to backend to fill in missing
# history and then prolly write it to tsdb!
shm_df: pl.DataFrame = await client.as_df( if not gaps.is_empty():
fqme, print(f'Gaps found:\n{gaps}')
period,
)
df: pl.DataFrame # with dts # TODO: something better with tab completion..
deduped: pl.DataFrame # deduplicated dts # is there something more minimal but nearly as
( # functional as ipython?
df, await tractor.pause()
deduped,
diff,
) = tsp.dedupe(
shm_df,
period=period,
)
write_edits: bool = True
if (
write_edits
and (
diff
or null_segs
)
):
await tractor.pause()
await client.write_ohlcv(
fqme,
ohlcv=deduped,
timeframe=period,
)
else:
# TODO: something better with tab completion..
# is there something more minimal but nearly as
# functional as ipython?
await tractor.pause()
assert not null_segs
trio.run(main) trio.run(main)
async def markup_gaps( def iter_dfs_from_shms(fqme: str) -> Generator[
fqme: str, tuple[Path, ShmArray, pl.DataFrame],
timeframe: float, None,
actl: AnnotCtl, None,
wdts: pl.DataFrame, ]:
gaps: pl.DataFrame, # shm buffer size table based on known sample rates
sizes: dict[str, int] = {
'hist': _default_hist_size,
'rt': _default_rt_size,
}
) -> dict[int, dict]: # load all detected shm buffer files which have the
''' # passed FQME pattern in the file name.
Remote annotate time-gaps in a dt-fielded ts (normally OHLC) shmfiles: list[Path] = []
with rectangles. shmdir = Path('/dev/shm/')
''' for shmfile in shmdir.glob(f'*{fqme}*'):
aids: dict[int] = {} filename: str = shmfile.name
for i in range(gaps.height):
row: pl.DataFrame = gaps[i] # skip index files
if (
'_first' in filename
or '_last' in filename
):
continue
# the gap's RIGHT-most bar's OPEN value assert shmfile.is_file()
# at that time (sample) step. log.debug(f'Found matching shm buffer file: {filename}')
iend: int = row['index'][0] shmfiles.append(shmfile)
# dt: datetime = row['dt'][0]
# dt_prev: datetime = row['dt_prev'][0]
# dt_end_t: float = dt.timestamp()
for shmfile in shmfiles:
# TODO: can we eventually remove this # lookup array buffer size based on file suffix
# once we figure out why the epoch cols # being either .rt or .hist
# don't match? key: str = shmfile.name.rsplit('.')[-1]
# TODO: FIX HOW/WHY these aren't matching
# and are instead off by 4hours (EST
# vs. UTC?!?!)
# end_t: float = row['time']
# assert (
# dt.timestamp()
# ==
# end_t
# )
# the gap's LEFT-most bar's CLOSE value # skip FSP buffers for now..
# at that time (sample) step. if key not in sizes:
prev_r: pl.DataFrame = wdts.filter( continue
pl.col('index') == iend - 1
size: int = sizes[key]
# attach to any shm buffer, load array into polars df,
# write to local parquet file.
shm, opened = maybe_open_shm_array(
key=shmfile.name,
size=size,
dtype=def_iohlcv_fields,
readonly=True,
) )
# XXX: probably a gap in the (newly sorted or de-duplicated) assert not opened
# dt-df, so we might need to re-index first.. ohlcv = shm.array
if prev_r.is_empty():
await tractor.pause()
istart: int = prev_r['index'][0] start = time.time()
# dt_start_t: float = dt_prev.timestamp()
# start_t: float = prev_r['time'] # XXX: thanks to this SO answer for this conversion tip:
# assert ( # https://stackoverflow.com/a/72054819
# dt_start_t df = pl.DataFrame({
# == field_name: ohlcv[field_name]
# start_t for field_name in ohlcv.dtype.fields
# ) })
delay: float = round(
# TODO: implement px-col width measure time.time() - start,
# and ensure at least as many px-cols ndigits=6,
# shown per rect as configured by user.
# gap_w: float = abs((iend - istart))
# if gap_w < 6:
# margin: float = 6
# iend += margin
# istart -= margin
rect_gap: float = BGM*3/8
opn: float = row['open'][0]
ro: tuple[float, float] = (
# dt_end_t,
iend + rect_gap + 1,
opn,
) )
cls: float = prev_r['close'][0] log.info(
lc: tuple[float, float] = ( f'numpy -> polars conversion took {delay} secs\n'
# dt_start_t, f'polars df: {df}'
istart - rect_gap, # + 1 ,
cls,
) )
color: str = 'dad_blue' yield (
diff: float = cls - opn shmfile,
sgn: float = copysign(1, diff) shm,
color: str = { df,
-1: 'buy_green',
1: 'sell_red',
}[sgn]
rect_kwargs: dict[str, Any] = dict(
fqme=fqme,
timeframe=timeframe,
start_pos=lc,
end_pos=ro,
color=color,
) )
aid: int = await actl.add_rect(**rect_kwargs)
assert aid
aids[aid] = rect_kwargs
# tell chart to redraw all its
# graphics view layers Bo
await actl.redraw(
fqme=fqme,
timeframe=timeframe,
)
return aids
@store.command() @store.command()
def ldshm( def ldshm(
fqme: str, fqme: str,
write_parquet: bool = True,
reload_parquet_to_shm: bool = True, write_parquet: bool = False,
) -> None: ) -> None:
''' '''
Linux ONLY: load any fqme file name matching shm buffer from Linux ONLY: load any fqme file name matching shm buffer from
/dev/shm/ into an OHLCV numpy array and polars DataFrame, /dev/shm/ into an OHLCV numpy array and polars DataFrame,
optionally write to offline storage via `.parquet` file. optionally write to .parquet file.
''' '''
async def main(): async def main():
from piker.ui._remote_ctl import (
open_annot_ctl,
)
actl: AnnotCtl
mod: ModuleType
client: StorageClient
async with ( async with (
open_piker_runtime( open_piker_runtime(
'polars_boi', 'polars_boi',
enable_modules=['piker.data._sharedmem'], enable_modules=['piker.data._sharedmem'],
debug_mode=True, debug_mode=True,
), ),
open_storage_client() as (
mod,
client,
),
open_annot_ctl() as actl,
): ):
shm_df: pl.DataFrame | None = None df: pl.DataFrame | None = None
tf2aids: dict[float, dict] = {} for shmfile, shm, src_df in iter_dfs_from_shms(fqme):
for (
shmfile,
shm,
# parquet_path,
shm_df,
) in tsp.iter_dfs_from_shms(fqme):
# compute ohlc properties for naming
times: np.ndarray = shm.array['time'] times: np.ndarray = shm.array['time']
d1: float = float(times[-1] - times[-2]) secs: float = times[-1] - times[-2]
d2: float = float(times[-2] - times[-3]) if secs < 1.:
med: float = np.median(np.diff(times))
if (
d1 < 1.
and d2 < 1.
and med < 1.
):
raise ValueError( raise ValueError(
f'Something is wrong with time period for {shm}:\n{times}' f'Something is wrong with time period for {shm}:\n{times}'
) )
period_s: float = float(max(d1, d2, med)) from piker.data import _timeseries as tsmod
df: pl.DataFrame = tsmod.with_dts(src_df)
gaps: pl.DataFrame = tsmod.detect_time_gaps(df)
null_segs: tuple = tsp.get_null_segs( # TODO: maybe only optionally enter this depending
frame=shm.array, # on some CLI flags and/or gap detection?
period=period_s,
)
# TODO: call null-seg fixer somehow?
if null_segs:
await tractor.pause()
# async with (
# trio.open_nursery() as tn,
# mod.open_history_client(
# mkt,
# ) as (get_hist, config),
# ):
# nulls_detected: trio.Event = await tn.start(partial(
# tsp.maybe_fill_null_segments,
# shm=shm,
# timeframe=timeframe,
# get_hist=get_hist,
# sampler_stream=sampler_stream,
# mkt=mkt,
# ))
# over-write back to shm?
wdts: pl.DataFrame # with dts
deduped: pl.DataFrame # deduplicated dts
(
wdts,
deduped,
diff,
) = tsp.dedupe(
shm_df,
period=period_s,
)
# detect gaps from in expected (uniform OHLC) sample period
step_gaps: pl.DataFrame = tsp.detect_time_gaps(
deduped,
expect_period=period_s,
)
# TODO: by default we always want to mark these up
# with rects showing up/down gaps Bo
venue_gaps: pl.DataFrame = tsp.detect_time_gaps(
deduped,
expect_period=period_s,
# TODO: actually pull the exact duration
# expected for each venue operational period?
gap_dt_unit='days',
gap_thresh=1,
)
# TODO: find the disjoint set of step gaps from
# venue (closure) set!
# -[ ] do a set diff by checking for the unique
# gap set only in the step_gaps?
if ( if (
not venue_gaps.is_empty() not gaps.is_empty()
or ( or secs > 2
period_s < 60
and not step_gaps.is_empty()
)
): ):
# write repaired ts to parquet-file? await tractor.pause()
if write_parquet:
start: float = time.time()
path: Path = await client.write_ohlcv(
fqme,
ohlcv=deduped,
timeframe=period_s,
)
write_delay: float = round(
time.time() - start,
ndigits=6,
)
# read back from fs # write to parquet file?
start: float = time.time() if write_parquet:
read_df: pl.DataFrame = pl.read_parquet(path) timeframe: str = f'{secs}s'
read_delay: float = round(
time.time() - start,
ndigits=6,
)
log.info(
f'parquet write took {write_delay} secs\n'
f'file path: {path}'
f'parquet read took {read_delay} secs\n'
f'polars df: {read_df}'
)
if reload_parquet_to_shm: datadir: Path = get_conf_dir() / 'nativedb'
new = tsp.pl2np( if not datadir.is_dir():
deduped, datadir.mkdir()
dtype=shm.array.dtype,
)
# since normally readonly
shm._array.setflags(
write=int(1),
)
shm.push(
new,
prepend=True,
start=new['index'][-1],
update_first=False, # don't update ._first
)
do_markup_gaps: bool = True path: Path = datadir / f'{fqme}.{timeframe}.parquet'
if do_markup_gaps:
new_df: pl.DataFrame = tsp.np2pl(new)
aids: dict = await markup_gaps(
fqme,
period_s,
actl,
new_df,
step_gaps,
)
# last chance manual overwrites in REPL
# await tractor.pause()
assert aids
tf2aids[period_s] = aids
else: # write to fs
# allow interaction even when no ts problems. start = time.time()
assert not diff df.write_parquet(path)
delay: float = round(
time.time() - start,
ndigits=6,
)
log.info(
f'parquet write took {delay} secs\n'
f'file path: {path}'
)
await tractor.pause() # read back from fs
log.info('Exiting TSP shm anal-izer!') start = time.time()
read_df: pl.DataFrame = pl.read_parquet(path)
delay: float = round(
time.time() - start,
ndigits=6,
)
print(
f'parquet read took {delay} secs\n'
f'polars df: {read_df}'
)
if shm_df is None: if df is None:
log.error( log.error(f'No matching shm buffers for {fqme} ?')
f'No matching shm buffers for {fqme} ?'
)
trio.run(main) trio.run(main)
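# NOTE (editor's sketch): assuming this typer app is mounted as
# the `store` sub-command group, invocation looks roughly like:
#
#   $ piker store ldshm btcusdt.spot.binance --write-parquet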


@ -19,8 +19,7 @@
call a poor man's tsdb). call a poor man's tsdb).
AKA a `piker`-native file-system native "time series database" AKA a `piker`-native file-system native "time series database"
without needing an extra process and no standard TSDB features, without needing an extra process and no standard TSDB features, YET!
YET!
''' '''
# TODO: like there's soo much.. # TODO: like there's soo much..
@ -56,6 +55,8 @@ from datetime import datetime
from pathlib import Path from pathlib import Path
import time import time
# from bidict import bidict
# import tractor
import numpy as np import numpy as np
import polars as pl import polars as pl
from pendulum import ( from pendulum import (
@ -63,18 +64,45 @@ from pendulum import (
) )
from piker import config from piker import config
from piker import tsp from piker.data import def_iohlcv_fields
from piker.data import ( from piker.data import ShmArray
def_iohlcv_fields,
ShmArray,
)
from piker.log import get_logger from piker.log import get_logger
from . import TimeseriesNotFound
log = get_logger('storage.nativedb') log = get_logger('storage.nativedb')
# NOTE: thanks to this SO answer for the below conversion routines
# to go from numpy struct-arrays to polars dataframes and back:
# https://stackoverflow.com/a/72054819
def np2pl(array: np.ndarray) -> pl.DataFrame:
return pl.DataFrame({
field_name: array[field_name]
for field_name in array.dtype.fields
})
def pl2np(
df: pl.DataFrame,
dtype: np.dtype,
) -> np.ndarray:
# Create numpy struct array of the correct size and dtype
# and loop through df columns to fill in array fields.
array = np.empty(
df.height,
dtype,
)
for field, col in zip(
dtype.fields,
df.columns,
):
array[field] = df.get_column(col).to_numpy()
return array
def detect_period(shm: ShmArray) -> float: def detect_period(shm: ShmArray) -> float:
''' '''
Attempt to detect the series time step sampling period Attempt to detect the series time step sampling period
@ -95,19 +123,16 @@ def detect_period(shm: ShmArray) -> float:
def mk_ohlcv_shm_keyed_filepath( def mk_ohlcv_shm_keyed_filepath(
fqme: str, fqme: str,
period: float | int, # ow known as the "timeframe" period: float, # ow known as the "timeframe"
datadir: Path, datadir: Path,
) -> Path: ) -> str:
if period < 1.: if period < 1.:
raise ValueError('Sample period should be >= 1.!?') raise ValueError('Sample period should be >= 1.!?')
path: Path = ( period_s: str = f'{period}s'
datadir path: Path = datadir / f'{fqme}.ohlcv{period_s}.parquet'
/
f'{fqme}.ohlcv{int(period)}s.parquet'
)
return path return path
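# NOTE (editor's sketch): the resulting naming scheme shown with
# a hypothetical fqme (left-column `int(period)` variant):
#
#   mk_ohlcv_shm_keyed_filepath(
#       fqme='btcusdt.spot.binance',
#       period=60,
#       datadir=Path('/tmp/nativedb'),
#   )
#   # -> Path('/tmp/nativedb/btcusdt.spot.binance.ohlcv60s.parquet')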
@ -161,13 +186,7 @@ class NativeStorageClient:
def index_files(self): def index_files(self):
for path in self._datadir.iterdir(): for path in self._datadir.iterdir():
if ( if path.name in {'borked', 'expired',}:
path.is_dir()
or
'.parquet' not in str(path)
# or
# path.name in {'borked', 'expired',}
):
continue continue
key: str = path.name.rstrip('.parquet') key: str = path.name.rstrip('.parquet')
@ -209,21 +228,8 @@ class NativeStorageClient:
fqme, fqme,
timeframe, timeframe,
) )
except FileNotFoundError as fnfe: except FileNotFoundError:
return None
bs_fqme, _, *_ = fqme.rpartition('.')
possible_matches: list[str] = []
for tskey in self._index:
if bs_fqme in tskey:
possible_matches.append(tskey)
match_str: str = '\n'.join(sorted(possible_matches))
raise TimeseriesNotFound(
f'No entry for `{fqme}`?\n'
f'Maybe you need a more specific fqme-key like:\n\n'
f'{match_str}'
) from fnfe
times = array['time'] times = array['time']
return ( return (
@ -236,7 +242,6 @@ class NativeStorageClient:
self, self,
fqme: str, fqme: str,
period: float, period: float,
) -> Path: ) -> Path:
return mk_ohlcv_shm_keyed_filepath( return mk_ohlcv_shm_keyed_filepath(
fqme=fqme, fqme=fqme,
@ -244,23 +249,6 @@ class NativeStorageClient:
datadir=self._datadir, datadir=self._datadir,
) )
def _cache_df(
self,
fqme: str,
df: pl.DataFrame,
timeframe: float,
) -> None:
# cache df for later usage since we (currently) need to
# convert to np.ndarrays to push to our `ShmArray` rt
# buffers subsys but later we may operate entirely on
# pyarrow arrays/buffers so keeping the dfs around for
# a variety of purposes is handy.
self._dfs.setdefault(
timeframe,
{},
)[fqme] = df
async def read_ohlcv( async def read_ohlcv(
self, self,
fqme: str, fqme: str,
@ -269,20 +257,13 @@ class NativeStorageClient:
# limit: int = int(200e3), # limit: int = int(200e3),
) -> np.ndarray: ) -> np.ndarray:
path: Path = self.mk_path( path: Path = self.mk_path(fqme, period=int(timeframe))
fqme,
period=int(timeframe),
)
df: pl.DataFrame = pl.read_parquet(path) df: pl.DataFrame = pl.read_parquet(path)
self._dfs.setdefault(timeframe, {})[fqme] = df
self._cache_df(
fqme=fqme,
df=df,
timeframe=timeframe,
)
# TODO: filter by end and limit inputs # TODO: filter by end and limit inputs
# times: pl.Series = df['time'] # times: pl.Series = df['time']
array: np.ndarray = tsp.pl2np( array: np.ndarray = pl2np(
df, df,
dtype=np.dtype(def_iohlcv_fields), dtype=np.dtype(def_iohlcv_fields),
) )
@ -292,15 +273,11 @@ class NativeStorageClient:
self, self,
fqme: str, fqme: str,
period: int = 60, period: int = 60,
load_from_offline: bool = True,
) -> pl.DataFrame: ) -> pl.DataFrame:
try: try:
return self._dfs[period][fqme] return self._dfs[period][fqme]
except KeyError: except KeyError:
if not load_from_offline:
raise
await self.read_ohlcv(fqme, period) await self.read_ohlcv(fqme, period)
return self._dfs[period][fqme] return self._dfs[period][fqme]
@ -322,39 +299,32 @@ class NativeStorageClient:
datadir=self._datadir, datadir=self._datadir,
) )
if isinstance(ohlcv, np.ndarray): if isinstance(ohlcv, np.ndarray):
df: pl.DataFrame = tsp.np2pl(ohlcv) df: pl.DataFrame = np2pl(ohlcv)
else: else:
df = ohlcv df = ohlcv
self._cache_df(
fqme=fqme,
df=df,
timeframe=timeframe,
)
# TODO: in terms of managing the ultra long term data # TODO: in terms of managing the ultra long term data
# -[ ] use a proper profiler to measure all this IO and # - use a proper profiler to measure all this IO and
# roundtripping! # roundtripping!
# -[ ] implement parquet append!? see issue: # - try out ``fastparquet``'s append writing:
# https://github.com/pikers/piker/issues/536 # https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
# -[ ] try out ``fastparquet``'s append writing:
# https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
start = time.time() start = time.time()
df.write_parquet(path) df.write_parquet(path)
delay: float = round( delay: float = round(
time.time() - start, time.time() - start,
ndigits=6, ndigits=6,
) )
log.info( print(
f'parquet write took {delay} secs\n' f'parquet write took {delay} secs\n'
f'file path: {path}' f'file path: {path}'
) )
return path return path
async def write_ohlcv( async def write_ohlcv(
self, self,
fqme: str, fqme: str,
ohlcv: np.ndarray | pl.DataFrame, ohlcv: np.ndarray,
timeframe: int, timeframe: int,
) -> Path: ) -> Path:
@ -406,8 +376,6 @@ class NativeStorageClient:
# ... # ...
# TODO: does this need to be async on average?
# I guess for any IPC connected backend yes?
@acm @acm
async def get_client( async def get_client(
@ -425,7 +393,7 @@ async def get_client(
''' '''
datadir: Path = config.get_conf_dir() / 'nativedb' datadir: Path = config.get_conf_dir() / 'nativedb'
if not datadir.is_dir(): if not datadir.is_dir():
log.info(f'Creating `nativedb` dir: {datadir}') log.info(f'Creating `nativedb` director: {datadir}')
datadir.mkdir() datadir.mkdir()
client = NativeStorageClient(datadir) client = NativeStorageClient(datadir)


@ -18,12 +18,24 @@
Toolz for debug, profile and trace of the distributed runtime :surfer: Toolz for debug, profile and trace of the distributed runtime :surfer:
''' '''
from tractor.devx import ( from .debug import (
open_crash_handler as open_crash_handler, open_crash_handler,
) )
from .profile import ( from .profile import (
Profiler as Profiler, Profiler,
pg_profile_enabled as pg_profile_enabled, pg_profile_enabled,
ms_slower_then as ms_slower_then, ms_slower_then,
timeit as timeit, timeit,
) )
# TODO: other mods to include?
# - DROP .trionics, already moved into tractor
# - move in `piker.calc`
__all__: list[str] = [
'open_crash_handler',
'pg_profile_enabled',
'ms_slower_then',
'Profiler',
'timeit',
]


@ -0,0 +1,40 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Debugger wrappers for `pdbp` as used by `tractor`.
'''
from contextlib import contextmanager as cm
import pdbp
# TODO: better naming and what additionals?
# - optional runtime plugging?
# - detection for sync vs. async code?
# - specialized REPL entry when in distributed mode?
@cm
def open_crash_handler():
'''
Super basic crash handler using `pdbp` debugger.
'''
try:
yield
except BaseException:
pdbp.xpm()
raise
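# NOTE (editor's sketch): wrap any entrypoint so that an uncaught
# exception drops into a `pdbp` post-mortem REPL before re-raising:
#
#   with open_crash_handler():
#       main()  # hypothetical entrypoint that might raise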

File diff suppressed because it is too large


@ -1,746 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Financial time series processing utilities usually
pertaining to OHLCV style sampled data.
Routines are generally implemented in either ``numpy`` or
``polars`` B)
'''
from __future__ import annotations
from functools import partial
from math import (
ceil,
floor,
)
import time
from typing import (
Literal,
# AsyncGenerator,
Generator,
)
import numpy as np
import polars as pl
from pendulum import (
DateTime,
from_timestamp,
)
from ..toolz.profile import (
Profiler,
pg_profile_enabled,
ms_slower_then,
)
from ..log import (
get_logger,
get_console_log,
)
# for "time series processing"
subsys: str = 'piker.tsp'
log = get_logger(subsys)
get_console_log = partial(
get_console_log,
name=subsys,
)
# NOTE: union type-defs to handle generic `numpy` and `polars` types
# side-by-side Bo
# |_ TODO: schema spec typing?
# -[ ] nptyping!
# -[ ] wtv we can with polars?
Frame = pl.DataFrame | np.ndarray
Seq = pl.Series | np.ndarray
def slice_from_time(
arr: np.ndarray,
start_t: float,
stop_t: float,
step: float, # sampler period step-diff
) -> slice:
'''
Calculate array indices mapped from a time range and return them in
a slice.
Given an input array with an epoch `'time'` series entry, calculate
the indices which span the time range and return them in a slice. Presume
each `'time'` step increment is uniform and, when the time stamp
series contains gaps (i.e. the uniform presumption is untrue), use
``np.searchsorted()`` binary search to look up the appropriate
index.
'''
profiler = Profiler(
msg='slice_from_time()',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
)
times = arr['time']
t_first = floor(times[0])
t_last = ceil(times[-1])
# the greatest index we can return which slices to the
# end of the input array.
read_i_max = arr.shape[0]
# compute (presumed) uniform-time-step index offsets
i_start_t = floor(start_t)
read_i_start = floor(((i_start_t - t_first) // step)) - 1
i_stop_t = ceil(stop_t)
# XXX: edge case -> always set stop index to last in array whenever
# the input stop time is detected to be greater than the equiv time
# stamp at that last entry.
if i_stop_t >= t_last:
read_i_stop = read_i_max
else:
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
# always clip outputs to array support
# for read start:
# - never allow a start < the 0 index
# - never allow an end index > the read array len
read_i_start = min(
max(0, read_i_start),
read_i_max - 1,
)
read_i_stop = max(
0,
min(read_i_stop, read_i_max),
)
# check for larger-then-latest calculated index for given start
# time, in which case we do a binary search for the correct index.
# NOTE: this is usually the result of a time series with time gaps
# where it is expected that each index step maps to a uniform step
# in the time stamp series.
t_iv_start = times[read_i_start]
if (
t_iv_start > i_start_t
):
# do a binary search for the best index mapping to ``start_t``
# given we measured an overshoot using the uniform-time-step
# calculation from above.
# TODO: once we start caching these per source-array,
# we can just overwrite ``read_i_start`` directly.
new_read_i_start = np.searchsorted(
times,
i_start_t,
side='left',
)
# TODO: minimize binary search work as much as possible:
# - cache these remap values which compensate for gaps in the
# uniform time step basis where we calc a later start
# index for the given input ``start_t``.
# - can we shorten the input search sequence by heuristic?
# up_to_arith_start = index[:read_i_start]
if (
new_read_i_start <= read_i_start
):
# t_diff = t_iv_start - start_t
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
# f'diff: {t_diff}\n'
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
# )
read_i_start = new_read_i_start
t_iv_stop = times[read_i_stop - 1]
if (
t_iv_stop > i_stop_t
):
# t_diff = stop_t - t_iv_stop
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
# f'diff: {t_diff}\n'
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
# )
new_read_i_stop = np.searchsorted(
times[read_i_start:],
# times,
i_stop_t,
side='right',
)
if (
new_read_i_stop <= read_i_stop
):
read_i_stop = read_i_start + new_read_i_stop + 1
# sanity checks for range size
# samples = (i_stop_t - i_start_t) // step
# index_diff = read_i_stop - read_i_start + 1
# if index_diff > (samples + 3):
# breakpoint()
# read-relative indexes: gives a slice where `shm.array[read_slc]`
# will be the data spanning the input time range `start_t` ->
# `stop_t`
read_slc = slice(
int(read_i_start),
int(read_i_stop),
)
profiler(
'slicing complete'
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
)
# NOTE: if caller needs absolute buffer indices they can
# slice the buffer abs index like so:
# index = arr['index']
# abs_indx = index[read_slc]
# abs_slc = slice(
# int(abs_indx[0]),
# int(abs_indx[-1]),
# )
return read_slc
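# NOTE (editor's sketch): usage on a uniform 1s-step struct-array;
# the dtype below is illustrative but iohlcv-style:
#
#   arr = np.zeros(100, dtype=[('index', 'i8'), ('time', 'f8')])
#   arr['time'] = 1_700_000_000 + np.arange(100)
#   read_slc = slice_from_time(
#       arr,
#       start_t=arr['time'][10],
#       stop_t=arr['time'][20],
#       step=1,
#   )
#   # -> slice(9, 21): spans the requested range with a 1-step margin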
def get_null_segs(
frame: Frame,
period: float, # sampling step in seconds
imargin: int = 1,
col: str = 'time',
) -> tuple[
# Seq, # TODO: can we make it an array-type instead?
list[
list[int, int],
],
Seq,
Frame
] | None:
'''
Detect if there are any zero(-epoch stamped) valued
rows in for the provided `col: str` column; by default
presume the 'time' field/column.
Filter to all such zero (time) segments and return,
for the zeroed segments of the frame:
- the gaps' absolute (in buffer terms) index-endpoints as
`absi_zsegs`,
- abs indices of all rows with zeroed `col` values as `absi_zeros`,
- the corresponding frame's row-entries (view) which are
zeroed for the `col` as `zero_t`.
'''
times: Seq = frame['time']
zero_pred: Seq = (times == 0)
tis_zeros: bool = zero_pred.any()
if not tis_zeros:
return None
# TODO: use ndarray for this?!
absi_zsegs: list[list[int, int]] = []
if isinstance(frame, np.ndarray):
# view of ONLY the zero segments as one continuous chunk
zero_t: np.ndarray = frame[zero_pred]
# abs indices of said zeroed rows
absi_zeros = zero_t['index']
# diff of abs index steps between each zeroed row
absi_zdiff: np.ndarray = np.diff(absi_zeros)
# scan for all frame-indices where the
# zeroed-row-abs-index-step-diff is greater than the
# expected increment of 1.
# data 1st zero seg data zeros
# ---- ------------ ---- ----- ------ ----
# ||||..000000000000..||||..00000..||||||..0000
# ---- ------------ ---- ----- ------ ----
# ^zero_t[0] ^zero_t[-1]
# ^fi_zgaps[0] ^fi_zgaps[1]
# ^absi_zsegs[0][0] ^---^ => absi_zsegs[1]: tuple
# absi_zsegs[0][1]^
#
# NOTE: the first entry in `fi_zgaps` is where
# the first (absolute) index step diff is > 1.
# and it is a frame-relative index into `zero_t`.
fi_zgaps = np.argwhere(
absi_zdiff > 1
# NOTE: the +1 here is to ensure we index to the "start" of
# each segment (if we didn't, the below loop would need to be
# re-written to expect `fi_end_rows`!)
) + 1
# the rows from the contiguous zeroed segments which have
# abs-index steps >1 compared to the previous zero row
# (indicating an end of zeroed segment).
fi_zseg_start_rows = zero_t[fi_zgaps]
# TODO: equiv for pl.DataFrame case!
else:
izeros: pl.Series = zero_pred.arg_true()
zero_t: pl.DataFrame = frame[izeros]
absi_zeros = zero_t['index']
absi_zdiff: pl.Series = absi_zeros.diff()
fi_zgaps = (absi_zdiff > 1).arg_true()
# XXX: our goal (in this func) is to select out slice index
# pairs (zseg0_start, zseg_end) in abs index units for each
# null-segment portion detected throughout the entire input frame.
# only up to one null-segment in entire frame?
num_gaps: int = fi_zgaps.size + 1
if num_gaps < 1:
if absi_zeros.size > 1:
absi_zsegs = [[
# TODO: maybe mk these max()/min() limits func
# consts instead of called more then once?
max(
absi_zeros[0] - 1,
0,
),
# NOTE: need the + 1 to guarantee we index "up to"
# the next non-null row-datum.
min(
absi_zeros[-1] + 1,
frame['index'][-1],
),
]]
else:
# XXX EDGE CASE: only one null-datum found so
# mark the start abs index as None to trigger
# a full frame-len query to the respective backend?
absi_zsegs = [[
# see `get_hist()` in backend, should ALWAYS be
# able to handle a `start_dt=None`!
# None,
None,
absi_zeros[0] + 1,
]]
# XXX NOTE XXX: if >= 2 zeroed segments are found, there should
# ALWAYS be more than one zero-segment-abs-index-step-diff row
# in `absi_zdiff`, so loop through all such
# abs-index-step-diffs >1 (i.e. the entries of `absi_zdiff`)
# and add them as the "end index" entries for each segment.
# Then, if NOT iterating the first such segment end, look back
# for the prior segment's zero-segment start index by relative
# indexing the `zero_t` frame by -1 and grabbing the abs index
# of what should be the prior zero-segment abs start index.
else:
# NOTE: since `absi_zdiff` will never have a row
# corresponding to the first zero-segment's row, we add it
# manually here.
absi_zsegs.append([
max(
absi_zeros[0] - 1,
0,
),
None,
])
# TODO: can we do it with vec ops?
for i, (
fi, # frame index of zero-seg start
zseg_start_row, # full row for ^
) in enumerate(zip(
fi_zgaps,
fi_zseg_start_rows,
)):
assert (zseg_start_row == zero_t[fi]).all()
iabs: int = zseg_start_row['index'][0]
absi_zsegs.append([
iabs - 1,
None, # backfilled on next iter
])
# final iter case, backfill FINAL end iabs!
if (i + 1) == fi_zgaps.size:
absi_zsegs[-1][1] = absi_zeros[-1] + 1
# NOTE: only after the first segment (due to `.diff()`
# usage above) can we do a lookback to the prior
# segment's end row and determine its abs index to
# retroactively insert to the prior
# `absi_zsegs[i-1][1]` entry Bo
last_end: int = absi_zsegs[i][1]
if last_end is None:
prev_zseg_row = zero_t[fi - 1]
absi_post_zseg = prev_zseg_row['index'][0] + 1
# XXX: MUST BACKFILL previous end iabs!
absi_zsegs[i][1] = absi_post_zseg
else:
if 0 < num_gaps < 2:
absi_zsegs[-1][1] = min(
absi_zeros[-1] + 1,
frame['index'][-1],
)
iabs_first: int = frame['index'][0]
for start, end in absi_zsegs:
ts_start: float = times[start - iabs_first]
ts_end: float = times[end - iabs_first]
if (
(ts_start == 0 and not start == 0)
or
ts_end == 0
):
import pdbp
pdbp.set_trace()
assert end
assert start < end
log.warning(
f'Frame has {len(absi_zsegs)} NULL GAPS!?\n'
f'period: {period}\n'
f'total null samples: {len(zero_t)}\n'
)
return (
absi_zsegs, # [start, end] abs slice indices of seg
absi_zeros, # all abs indices within all null-segs
zero_t, # sliced-view of all null-segment rows-datums
)
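# NOTE (editor's sketch): e.g. detecting an un-backfilled
# (zero-stamped) middle segment in a minimal 2-field frame:
#
#   frame = np.zeros(10, dtype=[('index', 'i8'), ('time', 'f8')])
#   frame['index'] = np.arange(10)
#   frame['time'] = 1_700_000_000 + 60.*np.arange(10)
#   frame['time'][3:6] = 0  # simulate never-filled rows
#   absi_zsegs, absi_zeros, zero_t = get_null_segs(frame, period=60)
#   # absi_zeros -> abs indices [3, 4, 5] of the zeroed rows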
def iter_null_segs(
timeframe: float,
frame: Frame | None = None,
null_segs: tuple | None = None,
) -> Generator[
tuple[
int, int,
int, int,
float, float,
float, float,
# Seq, # TODO: can we make it an array-type instead?
# list[
# list[int, int],
# ],
# Seq,
# Frame
],
None,
]:
if not (
null_segs := get_null_segs(
frame,
period=timeframe,
)
):
return
absi_pairs_zsegs: list[list[float, float]]
izeros: Seq
zero_t: Frame
(
absi_pairs_zsegs,
izeros,
zero_t,
) = null_segs
absi_first: int = frame[0]['index']
for (
absi_start,
absi_end,
) in absi_pairs_zsegs:
fi_end: int = absi_end - absi_first
end_row: Seq = frame[fi_end]
end_t: float = end_row['time']
end_dt: DateTime = from_timestamp(end_t)
fi_start = None
start_row = None
start_t = None
start_dt = None
if (
absi_start is not None
and start_t != 0
):
fi_start: int = absi_start - absi_first
start_row: Seq = frame[fi_start]
start_t: float = start_row['time']
start_dt: DateTime = from_timestamp(start_t)
if absi_start < 0:
import pdbp
pdbp.set_trace()
yield (
absi_start, absi_end, # abs indices
fi_start, fi_end, # relative "frame" indices
start_t, end_t,
start_dt, end_dt,
)
def with_dts(
df: pl.DataFrame,
time_col: str = 'time',
) -> pl.DataFrame:
'''
Insert datetime (casted) columns to a (presumably) OHLC sampled
time series with an epoch-time column keyed by `time_col: str`.
'''
return df.with_columns([
pl.col(time_col).shift(1).suffix('_prev'),
pl.col(time_col).diff().alias('s_diff'),
pl.from_epoch(pl.col(time_col)).alias('dt'),
]).with_columns([
pl.from_epoch(
column=pl.col(f'{time_col}_prev'),
).alias('dt_prev'),
pl.col('dt').diff().alias('dt_diff'),
])
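# NOTE (editor's sketch): on a tiny epoch-time frame,
#
#   with_dts(pl.DataFrame({'time': [60, 120, 180, 360]}))
#
# adds the 'time_prev', 's_diff', 'dt', 'dt_prev' and 'dt_diff'
# columns; the 180 -> 360 row shows s_diff == 180.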
t_unit: Literal = Literal[
'days',
'hours',
'minutes',
'seconds',
'milliseconds',
'microseconds',
'nanoseconds',
]
def detect_time_gaps(
w_dts: pl.DataFrame,
time_col: str = 'time',
# epoch sampling step diff
expect_period: float = 60,
# NOTE: legacy stock mkts have venue operating hours
# and thus gaps normally no more then 1-2 days at
# a time.
gap_thresh: float = 1.,
# TODO: allow passing in a frame of operating hours?
# -[ ] durations/ranges for faster legit gap checks?
# XXX -> must be valid ``polars.Expr.dt.<name>``
# like 'days' which is a sane default for venue closures
# though it will detect weekend gaps which are normal :o
gap_dt_unit: t_unit | None = None,
) -> pl.DataFrame:
'''
Filter to OHLC datums which contain sample step gaps.
E.g. legacy markets which have venue close gaps and/or
actual missing data segments.
'''
# first select by any sample-period (in seconds unit) step size
# greater than expected.
step_gaps: pl.DataFrame = w_dts.filter(
pl.col('s_diff').abs() > expect_period
)
if gap_dt_unit is None:
return step_gaps
# NOTE: this flag is to indicate that on this (sampling) time
# scale we expect to only be filtering against larger venue
# closures-scale time gaps.
return step_gaps.filter(
# second, filter by an arbitrary dt-unit step size
getattr(
pl.col('dt_diff').dt,
gap_dt_unit,
)().abs() > gap_thresh
)
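# NOTE (editor's sketch): pairing with `with_dts()` above to flag
# missing minute-bars:
#
#   w_dts = with_dts(pl.DataFrame({'time': [60, 120, 180, 360]}))
#   gaps = detect_time_gaps(w_dts, expect_period=60)
#   # -> only the row where s_diff == 180 (two 60s bars missing)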
def detect_price_gaps(
df: pl.DataFrame,
gt_multiplier: float = 2.,
price_fields: list[str] = ['high', 'low'],
) -> pl.DataFrame:
'''
Detect gaps in clearing price over an OHLC series.
2 types of gaps generally exist; up gaps and down gaps:
- UP gap: when any next sample's lo price is strictly greater
than the current sample's hi price.
- DOWN gap: when any next sample's hi price is strictly
less than the current sample's lo price.
'''
# return df.filter(
# pl.col('high') - ) > expect_period,
# ).select([
# pl.dt.datetime(pl.col(time_col).shift(1)).suffix('_previous'),
# pl.all(),
# ]).select([
# pl.all(),
# (pl.col(time_col) - pl.col(f'{time_col}_previous')).alias('diff'),
# ])
...
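# NOTE (editor's sketch): one minimal (untested) expression of the
# UP/DOWN gap predicate described in the docstring above, using
# shifted prior-row columns:
def _price_gaps_sketch(
    df: pl.DataFrame,
) -> pl.DataFrame:
    wdf: pl.DataFrame = df.with_columns([
        pl.col('high').shift(1).alias('high_prev'),
        pl.col('low').shift(1).alias('low_prev'),
    ])
    return wdf.filter(
        # UP gap: lo breaks above the prior hi
        (pl.col('low') > pl.col('high_prev'))
        # DOWN gap: hi breaks below the prior lo
        | (pl.col('high') < pl.col('low_prev'))
    )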
# TODO: probably just use the null_segs impl above?
def detect_vlm_gaps(
df: pl.DataFrame,
col: str = 'volume',
) -> pl.DataFrame:
vnull: pl.DataFrame = df.filter(
pl.col(col) == 0
)
return vnull
def dedupe(
src_df: pl.DataFrame,
time_gaps: pl.DataFrame | None = None,
sort: bool = True,
period: float = 60,
) -> tuple[
pl.DataFrame, # with dts
pl.DataFrame, # with deduplicated dts (aka gap/repeat removal)
int, # len diff between input and deduped
]:
'''
Check for time series gaps and, if found,
de-duplicate any datetime entries, compute the
frame height diff and return the newly
dt-deduplicated frame.
'''
wdts: pl.DataFrame = with_dts(src_df)
deduped = wdts
# remove duplicated datetime samples/sections
deduped: pl.DataFrame = wdts.unique(
# subset=['dt'],
subset=['time'],
maintain_order=True,
)
# maybe sort on any time field
if sort:
deduped = deduped.sort(by='time')
# TODO: detect out-of-order segments which were corrected!
# -[ ] report in log msg
# -[ ] possibly return segment sections which were moved?
diff: int = (
wdts.height
-
deduped.height
)
return (
wdts,
deduped,
diff,
)
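# NOTE (editor's sketch): a repeated time-stamp row is dropped and
# reported via the height diff:
#
#   src = pl.DataFrame({'time': [60, 120, 120, 180]})
#   wdts, deduped, diff = dedupe(src)
#   assert (diff, deduped.height) == (1, 3)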
def sort_diff(
src_df: pl.DataFrame,
col: str = 'time',
) -> tuple[
pl.DataFrame, # with dts
pl.DataFrame, # sorted
list[int], # indices of segments that are out-of-order
]:
ser: pl.Series = src_df[col]
sortd: pl.DataFrame = ser.sort()
diff: pl.Series = ser.diff()
sortd_diff: pl.Series = sortd.diff()
i_step_diff = (diff != sortd_diff).arg_true()
frame_reorders: int = i_step_diff.len()
if frame_reorders:
log.warning(
f'Resorted frame on col: {col}\n'
f'{frame_reorders}'
)
# import pdbp; pdbp.set_trace()
# NOTE: thanks to this SO answer for the below conversion routines
# to go from numpy struct-arrays to polars dataframes and back:
# https://stackoverflow.com/a/72054819
def np2pl(array: np.ndarray) -> pl.DataFrame:
start: float = time.time()
# XXX: thanks to this SO answer for this conversion tip:
# https://stackoverflow.com/a/72054819
df = pl.DataFrame({
field_name: array[field_name]
for field_name in array.dtype.fields
})
delay: float = round(
time.time() - start,
ndigits=6,
)
log.info(
f'numpy -> polars conversion took {delay} secs\n'
f'polars df: {df}'
)
return df
def pl2np(
df: pl.DataFrame,
dtype: np.dtype,
) -> np.ndarray:
# Create numpy struct array of the correct size and dtype
# and loop through df columns to fill in array fields.
array = np.empty(
df.height,
dtype,
)
for field, col in zip(
dtype.fields,
df.columns,
):
array[field] = df.get_column(col).to_numpy()
return array
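# NOTE (editor's sketch): the two helpers above round-trip a simple
# struct-array:
#
#   dtype = np.dtype([('time', 'i8'), ('close', 'f8')])
#   arr = np.zeros(3, dtype=dtype)
#   arr['time'] = [60, 120, 180]
#   assert (pl2np(np2pl(arr), dtype=dtype) == arr).all()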


@ -21,16 +21,15 @@ Extensions to built-in or (heavily used but 3rd party) friend-lib
types. types.
''' '''
from __future__ import annotations
from collections import UserList from collections import UserList
from pprint import ( from pprint import (
saferepr, pformat,
) )
from typing import Any from typing import Any
from msgspec import ( from msgspec import (
msgpack, msgpack,
Struct as _Struct, Struct,
structs, structs,
) )
@ -63,7 +62,7 @@ class DiffDump(UserList):
class Struct( class Struct(
_Struct, Struct,
# https://jcristharif.com/msgspec/structs.html#tagged-unions # https://jcristharif.com/msgspec/structs.html#tagged-unions
# tag='pikerstruct', # tag='pikerstruct',
@ -73,27 +72,9 @@ class Struct(
A "human friendlier" (aka repl buddy) struct subtype. A "human friendlier" (aka repl buddy) struct subtype.
''' '''
def _sin_props(self) -> Iterator[
tuple[
structs.FieldInfo,
str,
Any,
]
]:
'''
Iterate over all non-@property fields of this struct.
'''
fi: structs.FieldInfo
for fi in structs.fields(self):
key: str = fi.name
val: Any = getattr(self, key)
yield fi, key, val
def to_dict( def to_dict(
self, self,
include_non_members: bool = True, include_non_members: bool = True,
) -> dict: ) -> dict:
''' '''
Like it sounds.. direct delegation to: Like it sounds.. direct delegation to:
@ -109,72 +90,16 @@ class Struct(
# only return a dict of the struct members # only return a dict of the struct members
# which were provided as input, NOT anything # which were provided as input, NOT anything
# added as type-defined `@property` methods! # added as `@properties`!
sin_props: dict = {} sin_props: dict = {}
fi: structs.FieldInfo for fi in structs.fields(self):
for fi, k, v in self._sin_props(): key: str = fi.name
sin_props[k] = asdict[k] sin_props[key] = asdict[key]
return sin_props return sin_props
def pformat( def pformat(self) -> str:
self, return f'Struct({pformat(self.to_dict())})'
field_indent: int = 2,
indent: int = 0,
) -> str:
'''
Recursion-safe `pprint.pformat()` style formatting of
a `msgspec.Struct` for sane reading by a human using a REPL.
'''
# global whitespace indent
ws: str = ' '*indent
# field whitespace indent
field_ws: str = ' '*(field_indent + indent)
# qtn: str = ws + self.__class__.__qualname__
qtn: str = self.__class__.__qualname__
obj_str: str = '' # accumulator
fi: structs.FieldInfo
k: str
v: Any
for fi, k, v in self._sin_props():
# TODO: how can we prefer `Literal['option1', 'option2',
# ..]` over .__name__ == `Literal` but still get only the
# latter for simple types like `str | int | None` etc..?
ft: type = fi.type
typ_name: str = getattr(ft, '__name__', str(ft))
# recurse to get sub-struct's `.pformat()` output Bo
if isinstance(v, Struct):
val_str: str = v.pformat(
indent=field_indent + indent,
field_indent=indent + field_indent,
)
else: # the `pprint` recursion-safe format:
# https://docs.python.org/3.11/library/pprint.html#pprint.saferepr
val_str: str = saferepr(v)
obj_str += (field_ws + f'{k}: {typ_name} = {val_str},\n')
return (
f'{qtn}(\n'
f'{obj_str}'
f'{ws})'
)
# TODO: use a pprint.PrettyPrinter instance around ONLY rendering
# inside a known tty?
# def __repr__(self) -> str:
# ...
# __str__ = __repr__ = pformat
__repr__ = pformat
def copy( def copy(
self, self,


@ -14,8 +14,9 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' """
UI components built using `Qt` with major versions swapped in via Stuff for your eyes, aka super hawt Qt UI components.
the import indirection in the `.qt` sub-mod.
''' Currently we only support PyQt5 due to this issue in Pyside2:
https://bugreports.qt.io/projects/PYSIDE/issues/PYSIDE-1313
"""


@ -21,10 +21,8 @@ Anchor functions for UI placement of annotations.
from __future__ import annotations from __future__ import annotations
from typing import Callable, TYPE_CHECKING from typing import Callable, TYPE_CHECKING
from piker.ui.qt import ( from PyQt5.QtCore import QPointF
QPointF, from PyQt5.QtWidgets import QGraphicsPathItem
QGraphicsPathItem,
)
if TYPE_CHECKING: if TYPE_CHECKING:
from ._chart import ChartPlotWidget from ._chart import ChartPlotWidget


@ -20,22 +20,12 @@ Annotations for ur faces.
""" """
from typing import Callable from typing import Callable
from pyqtgraph import ( from PyQt5 import QtCore, QtGui, QtWidgets
Point, from PyQt5.QtCore import QPointF, QRectF
functions as fn, from PyQt5.QtWidgets import QGraphicsPathItem
Color, from pyqtgraph import Point, functions as fn, Color
)
import numpy as np import numpy as np
from piker.ui.qt import (
QtCore,
QtGui,
QtWidgets,
QPointF,
QRectF,
QGraphicsPathItem,
)
def mk_marker_path( def mk_marker_path(


@ -21,11 +21,9 @@ Main app startup and run.
from functools import partial from functools import partial
from types import ModuleType from types import ModuleType
from PyQt5.QtCore import QEvent
import trio import trio
from piker.ui.qt import (
QEvent,
)
from ..service import maybe_spawn_brokerd from ..service import maybe_spawn_brokerd
from . import _event from . import _event
from ._exec import run_qtractor from ._exec import run_qtractor


@ -23,24 +23,16 @@ from functools import lru_cache
from typing import Callable from typing import Callable
from math import floor from math import floor
import polars as pl import numpy as np
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF
from piker.ui.qt import (
QtCore,
QtGui,
QtWidgets,
QPointF,
txt_flag,
align_flag,
px_cache_mode,
)
from . import _pg_overrides as pgo from . import _pg_overrides as pgo
from ..accounting._mktinfo import float_digits from ..accounting._mktinfo import float_digits
from ._label import Label from ._label import Label
from ._style import DpiAwareFont, hcolor, _font from ._style import DpiAwareFont, hcolor, _font
from ._interaction import ChartView from ._interaction import ChartView
from ._dataviz import Viz
_axis_pen = pg.mkPen(hcolor('bracket')) _axis_pen = pg.mkPen(hcolor('bracket'))
@ -295,7 +287,9 @@ class DynamicDateAxis(Axis):
# time formats mapped by seconds between bars # time formats mapped by seconds between bars
tick_tpl = { tick_tpl = {
60 * 60 * 24: '%Y-%b-%d', 60 * 60 * 24: '%Y-%b-%d',
60: '%Y-%b-%d(%H:%M)', 60: '%H:%M',
30: '%H:%M:%S',
5: '%H:%M:%S',
1: '%H:%M:%S', 1: '%H:%M:%S',
} }
@ -311,10 +305,10 @@ class DynamicDateAxis(Axis):
# XX: ARGGGGG AG:LKSKDJF:LKJSDFD # XX: ARGGGGG AG:LKSKDJF:LKJSDFD
chart = self.pi.chart_widget chart = self.pi.chart_widget
viz: Viz = chart._vizs[chart.name] viz = chart._vizs[chart.name]
shm = viz.shm shm = viz.shm
array = shm.array array = shm.array
ifield: str = viz.index_field ifield = viz.index_field
index = array[ifield] index = array[ifield]
i_0, i_l = index[0], index[-1] i_0, i_l = index[0], index[-1]
@ -335,7 +329,7 @@ class DynamicDateAxis(Axis):
arr_len = index.shape[0] arr_len = index.shape[0]
first = shm._first.value first = shm._first.value
times = array['time'] times = array['time']
epochs: list[int] = times[ epochs = times[
list( list(
map( map(
int, int,
@ -347,30 +341,23 @@ class DynamicDateAxis(Axis):
) )
] ]
else: else:
epochs: list[int] = list(map(int, indexes)) epochs = list(map(int, indexes))
# TODO: **don't** have this hard coded shift to EST # TODO: **don't** have this hard coded shift to EST
delay: float = viz.time_step() # delay = times[-1] - times[-2]
if delay > 1: dts = np.array(
# NOTE: use less granular dt-str when using 1M+ OHLC
fmtstr: str = self.tick_tpl[delay]
else:
fmtstr: str = '%Y-%m-%d(%H:%M:%S)'
# https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.from_epoch.html#polars-from-epoch
pl_dts: pl.Series = pl.from_epoch(
epochs, epochs,
time_unit='s', dtype='datetime64[s]',
# NOTE: kinda weird we can pass it to `.from_epoch()` no?
).dt.replace_time_zone(
time_zone='UTC'
).dt.convert_time_zone(
# TODO: pull this from either:
# -[ ] the mkt venue tz by default
# -[ ] the user's config under `sys.mkt_timezone: str`
'EST'
) )
return pl_dts.dt.to_string(fmtstr).to_list()
# see units listing:
# https://numpy.org/devdocs/reference/arrays.datetime.html#datetime-units
return list(np.datetime_as_string(dts))
# TODO: per timeframe formatting?
# - we probably need this based on zoom now right?
# prec = self.np_dt_precision[delay]
# return dts.strftime(self.tick_tpl[delay])
def tickStrings( def tickStrings(
self, self,
@ -421,15 +408,11 @@ class AxisLabel(pg.GraphicsObject):
super().__init__() super().__init__()
self.setParentItem(parent) self.setParentItem(parent)
self.setFlag( self.setFlag(self.ItemIgnoresTransformations)
self.GraphicsItemFlag.ItemIgnoresTransformations
)
self.setZValue(100) self.setZValue(100)
# XXX: pretty sure this is faster # XXX: pretty sure this is faster
self.setCacheMode( self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
px_cache_mode.DeviceCoordinateCache
)
self._parent = parent self._parent = parent
@ -566,14 +549,21 @@ class AxisLabel(pg.GraphicsObject):
return (self.rect.width(), self.rect.height()) return (self.rect.width(), self.rect.height())
# _common_text_flags = (
# QtCore.Qt.TextDontClip |
# QtCore.Qt.AlignCenter |
# QtCore.Qt.AlignTop |
# QtCore.Qt.AlignHCenter |
# QtCore.Qt.AlignVCenter
# )
class XAxisLabel(AxisLabel): class XAxisLabel(AxisLabel):
_x_margin = 8 _x_margin = 8
text_flags = ( text_flags = (
align_flag.AlignCenter QtCore.Qt.TextDontClip
| txt_flag.TextDontClip | QtCore.Qt.AlignCenter
) )
def size_hint(self) -> tuple[float, float]: def size_hint(self) -> tuple[float, float]:
@ -630,10 +620,10 @@ class YAxisLabel(AxisLabel):
_y_margin: int = 4 _y_margin: int = 4
text_flags = ( text_flags = (
align_flag.AlignLeft QtCore.Qt.AlignLeft
| align_flag.AlignVCenter # QtCore.Qt.AlignHCenter
# | align_flag.AlignHCenter | QtCore.Qt.AlignVCenter
| txt_flag.TextDontClip | QtCore.Qt.TextDontClip
) )
def __init__( def __init__(


@ -28,20 +28,22 @@ from typing import (
TYPE_CHECKING, TYPE_CHECKING,
) )
import pyqtgraph as pg from PyQt5 import QtCore, QtWidgets
import trio from PyQt5.QtCore import (
from piker.ui.qt import (
QtCore,
QtWidgets,
Qt, Qt,
QLineF, QLineF,
# QPointF,
)
from PyQt5.QtWidgets import (
QFrame, QFrame,
QWidget, QWidget,
QHBoxLayout, QHBoxLayout,
QVBoxLayout, QVBoxLayout,
QSplitter, QSplitter,
) )
import pyqtgraph as pg
import trio
from ._axes import ( from ._axes import (
DynamicDateAxis, DynamicDateAxis,
PriceAxis, PriceAxis,
@ -568,8 +570,8 @@ class LinkedSplits(QWidget):
# style? # style?
self.chart.setFrameStyle( self.chart.setFrameStyle(
QFrame.Shape.StyledPanel | QFrame.StyledPanel |
QFrame.Shadow.Plain QFrame.Plain
) )
return self.chart return self.chart
@ -687,8 +689,8 @@ class LinkedSplits(QWidget):
cpw.plotItem.vb.linked = self cpw.plotItem.vb.linked = self
cpw.setFrameStyle( cpw.setFrameStyle(
QFrame.Shape.StyledPanel QtWidgets.QFrame.StyledPanel
# | QFrame.Shadow.Plain # | QtWidgets.QFrame.Plain
) )
# don't show the little "autoscale" A label. # don't show the little "autoscale" A label.


@ -28,14 +28,9 @@ from typing import (
import inspect import inspect
import numpy as np import numpy as np
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtCore, QtWidgets
from PyQt5.QtCore import QPointF, QRectF
from piker.ui.qt import (
QPointF,
QRectF,
QtCore,
QtWidgets,
px_cache_mode,
)
from ._style import ( from ._style import (
_xaxis_at, _xaxis_at,
hcolor, hcolor,
@ -109,9 +104,7 @@ class LineDot(pg.CurvePoint):
dot.setParentItem(self) dot.setParentItem(self)
# keep a static size # keep a static size
self.setFlag( self.setFlag(self.ItemIgnoresTransformations)
self.GraphicsItemFlag.ItemIgnoresTransformations
)
def event( def event(
self, self,
@ -214,10 +207,9 @@ class ContentsLabel(pg.LabelItem):
# this being "html" is the dumbest shit :eyeroll: # this being "html" is the dumbest shit :eyeroll:
self.setText( self.setText(
"<b>i_arr</b>:{index}<br/>" "<b>i</b>:{index}<br/>"
# NB: these fields must be indexed in the correct order via # NB: these fields must be indexed in the correct order via
# the slice syntax below. # the slice syntax below.
"<b>i_shm</b>:{}<br/>"
"<b>epoch</b>:{}<br/>" "<b>epoch</b>:{}<br/>"
"<b>O</b>:{}<br/>" "<b>O</b>:{}<br/>"
"<b>H</b>:{}<br/>" "<b>H</b>:{}<br/>"
@ -227,7 +219,6 @@ class ContentsLabel(pg.LabelItem):
# "<b>wap</b>:{}".format( # "<b>wap</b>:{}".format(
*array[ix][ *array[ix][
[ [
'index',
'time', 'time',
'open', 'open',
'high', 'high',
@ -279,15 +270,10 @@ class ContentsLabels:
x_in: int, x_in: int,
) -> None: ) -> None:
for ( for chart, name, label, update in self._labels:
chart,
name,
label,
update,
)in self._labels:
viz = chart.get_viz(name) viz = chart.get_viz(name)
array: np.ndarray = viz.shm._array array = viz.shm.array
index = array[viz.index_field] index = array[viz.index_field]
start = index[0] start = index[0]
stop = index[-1] stop = index[-1]
@ -298,7 +284,7 @@ class ContentsLabels:
): ):
# out of range # out of range
print('WTF out of range?') print('WTF out of range?')
# continue continue
# call provided update func with data point # call provided update func with data point
try: try:
@ -306,7 +292,6 @@ class ContentsLabels:
ix = np.searchsorted(index, x_in) ix = np.searchsorted(index, x_in)
if ix > len(array): if ix > len(array):
breakpoint() breakpoint()
update(ix, array) update(ix, array)
except IndexError: except IndexError:
@ -431,10 +416,10 @@ class Cursor(pg.GraphicsObject):
# vertical and horizonal lines and a y-axis label # vertical and horizonal lines and a y-axis label
vl = plot.addLine(x=0, pen=self.lines_pen, movable=False) vl = plot.addLine(x=0, pen=self.lines_pen, movable=False)
vl.setCacheMode(px_cache_mode.DeviceCoordinateCache) vl.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
hl = plot.addLine(y=0, pen=self.lines_pen, movable=False) hl = plot.addLine(y=0, pen=self.lines_pen, movable=False)
hl.setCacheMode(px_cache_mode.DeviceCoordinateCache) hl.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
hl.hide() hl.hide()
yl = YAxisLabel( yl = YAxisLabel(
@ -518,10 +503,7 @@ class Cursor(pg.GraphicsObject):
plot=chart plot=chart
) )
chart.addItem(cursor) chart.addItem(cursor)
self.graphics[chart].setdefault( self.graphics[chart].setdefault('cursors', []).append(cursor)
'cursors',
[],
).append(cursor)
return cursor return cursor
def mouseAction( def mouseAction(


@ -19,21 +19,20 @@ Fast, smooth, sexy curves.
""" """
from contextlib import contextmanager as cm from contextlib import contextmanager as cm
from enum import EnumType
from typing import Callable from typing import Callable
import numpy as np import numpy as np
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtWidgets
from piker.ui.qt import ( from PyQt5.QtWidgets import QGraphicsItem
QtWidgets, from PyQt5.QtCore import (
QGraphicsItem,
Qt, Qt,
QLineF, QLineF,
QRectF, QRectF,
)
from PyQt5.QtGui import (
QPainter, QPainter,
QPainterPath, QPainterPath,
px_cache_mode,
) )
from ._style import hcolor from ._style import hcolor
from ..log import get_logger from ..log import get_logger
@ -43,23 +42,22 @@ from ..toolz.profile import (
ms_slower_then, ms_slower_then,
) )
log = get_logger(__name__) log = get_logger(__name__)
pen_style: EnumType = Qt.PenStyle
_line_styles: dict[str, int] = { _line_styles: dict[str, int] = {
'solid': pen_style.SolidLine, 'solid': Qt.PenStyle.SolidLine,
'dash': pen_style.DashLine, 'dash': Qt.PenStyle.DashLine,
'dot': pen_style.DotLine, 'dot': Qt.PenStyle.DotLine,
'dashdot': pen_style.DashDotLine, 'dashdot': Qt.PenStyle.DashDotLine,
} }
class FlowGraphic(pg.GraphicsObject): class FlowGraphic(pg.GraphicsObject):
''' '''
Base class with minimal interface for `QPainterPath` Base class with minimal interface for `QPainterPath` implemented,
implemented, real-time updated "data flow" graphics. real-time updated "data flow" graphics.
See subtypes below. See subtypes below.
@ -71,12 +69,12 @@ class FlowGraphic(pg.GraphicsObject):
# XXX-NOTE-XXX: graphics caching B) # XXX-NOTE-XXX: graphics caching B)
# see explanation for different caching modes: # see explanation for different caching modes:
# https://stackoverflow.com/a/39410081 # https://stackoverflow.com/a/39410081
cache_mode: int = px_cache_mode.DeviceCoordinateCache cache_mode: int = QGraphicsItem.DeviceCoordinateCache
# XXX: WARNING item caching seems to only be useful # XXX: WARNING item caching seems to only be useful
# if we don't re-generate the entire QPainterPath every time # if we don't re-generate the entire QPainterPath every time
# don't ever use this - it's a colossal nightmare of artefacts # don't ever use this - it's a colossal nightmare of artefacts
# and is disastrous for performance. # and is disastrous for performance.
# cache_mode.ItemCoordinateCache # QGraphicsItem.ItemCoordinateCache
# TODO: still questions todo with coord-cacheing that we should # TODO: still questions todo with coord-cacheing that we should
# probably talk to a core dev about: # probably talk to a core dev about:
# - if this makes trasform interactions slower (such as zooming) # - if this makes trasform interactions slower (such as zooming)
@ -169,16 +167,15 @@ class FlowGraphic(pg.GraphicsObject):
return None return None
# XXX: due to a variety of weird jitter bugs and "smearing" # XXX: due to a variety of weird jitter bugs and "smearing"
# artifacts when click-drag panning and viewing history time # artifacts when click-drag panning and viewing history time series,
# series, we offer this ctx-mngr interface to allow temporarily # we offer this ctx-mngr interface to allow temporarily disabling
# disabling Qt's graphics caching mode; this is now currently # Qt's graphics caching mode; this is now currently used from
# used from ``ChartView.start/signal_ic()`` methods which also # ``ChartView.start/signal_ic()`` methods which also disable the
# disable the rt-display loop when the user is moving around # rt-display loop when the user is moving around a view.
# a view.
@cm @cm
def reset_cache(self) -> None: def reset_cache(self) -> None:
try: try:
none = px_cache_mode.NoCache none = QGraphicsItem.NoCache
log.debug( log.debug(
f'{self._name} -> CACHE DISABLE: {none}' f'{self._name} -> CACHE DISABLE: {none}'
) )
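The same enable/disable dance can be factored as a standalone ctx-mngr; a hedged sketch assuming any `QGraphicsItem` subtype that normally runs with device-coordinate caching on::

    from contextlib import contextmanager
    from PyQt5.QtWidgets import QGraphicsItem

    @contextmanager
    def no_px_cache(item: QGraphicsItem):
        # drop to `NoCache` so each repaint re-strokes the path
        # (avoiding stale-pixmap smearing), then restore
        # device-coord caching on exit.
        item.setCacheMode(QGraphicsItem.NoCache)
        try:
            yield item
        finally:
            item.setCacheMode(QGraphicsItem.DeviceCoordinateCache)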

View File

@ -36,12 +36,9 @@ from msgspec import (
field, field,
) )
import numpy as np import numpy as np
from numpy import (
ndarray,
)
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5.QtCore import QLineF
from piker.ui.qt import QLineF
from ..data._sharedmem import ( from ..data._sharedmem import (
ShmArray, ShmArray,
) )
@ -52,7 +49,7 @@ from ..data._formatters import (
OHLCBarsAsCurveFmtr, # OHLC converted to line OHLCBarsAsCurveFmtr, # OHLC converted to line
StepCurveFmtr, # "step" curve (like for vlm) StepCurveFmtr, # "step" curve (like for vlm)
) )
from ..tsp import ( from ..data._timeseries import (
slice_from_time, slice_from_time,
) )
from ._ohlc import ( from ._ohlc import (
@ -85,11 +82,10 @@ def render_baritems(
viz: Viz, viz: Viz,
graphics: BarItems, graphics: BarItems,
read: tuple[ read: tuple[
int, int, ndarray, int, int, np.ndarray,
int, int, ndarray, int, int, np.ndarray,
], ],
profiler: Profiler, profiler: Profiler,
force_redraw: bool = False,
**kwargs, **kwargs,
) -> None: ) -> None:
@ -220,11 +216,9 @@ def render_baritems(
viz._in_ds = should_line viz._in_ds = should_line
should_redraw = ( should_redraw = (
force_redraw changed_to_line
or changed_to_line
or not should_line or not should_line
) )
# print(f'should_redraw: {should_redraw}')
return ( return (
graphics, graphics,
r, r,
@ -256,7 +250,7 @@ class ViewState(Struct):
] | None = None ] | None = None
# last in view ``ShmArray.array[read_slc]`` data # last in view ``ShmArray.array[read_slc]`` data
in_view: ndarray | None = None in_view: np.ndarray | None = None
class Viz(Struct): class Viz(Struct):
@ -319,7 +313,6 @@ class Viz(Struct):
_last_uppx: float = 0 _last_uppx: float = 0
_in_ds: bool = False _in_ds: bool = False
_index_step: float | None = None _index_step: float | None = None
_time_step: float | None = None
# map from uppx -> (downsampled data, incremental graphics) # map from uppx -> (downsampled data, incremental graphics)
_src_r: Renderer | None = None _src_r: Renderer | None = None
@ -366,8 +359,7 @@ class Viz(Struct):
def index_step( def index_step(
self, self,
index_field: str | None = None, reset: bool = False,
) -> float: ) -> float:
''' '''
Return the size between sample steps in the units of the Return the size between sample steps in the units of the
@ -375,17 +367,12 @@ class Viz(Struct):
epoch time in seconds. epoch time in seconds.
''' '''
# attempt to detect the best step size by scanning a sample # attempt to detect the best step size by scanning a sample of
# of the source data. # the source data.
if ( if self._index_step is None:
self._index_step is None
or index_field is not None index: np.ndarray = self.shm.array[self.index_field]
): isample: np.ndarray = index[-16:]
index: ndarray = self.shm.array[
index_field
or self.index_field
]
isample: ndarray = index[-16:]
mxdiff: None | float = None mxdiff: None | float = None
for step in np.diff(isample): for step in np.diff(isample):
@ -399,15 +386,7 @@ class Viz(Struct):
) )
mxdiff = step mxdiff = step
step: float = max(mxdiff, 1) self._index_step = max(mxdiff, 1)
# only SET the internal index step if an explicit
# field name is NOT passed, since in such cases this
# is likely just being called from `.time_step()`.
if index_field is not None:
return step
self._index_step = step
if ( if (
mxdiff < 1 mxdiff < 1
or 1 < mxdiff < 60 or 1 < mxdiff < 60
@ -418,17 +397,6 @@ class Viz(Struct):
return self._index_step return self._index_step
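Stripped of the caching and field-override details, the detection above boils down to "largest positive diff over a trailing sample"; a condensed sketch (`estimate_index_step` is an illustrative name)::

    import numpy as np

    def estimate_index_step(
        index: np.ndarray,
        nsamples: int = 16,
    ) -> float:
        # scan only a trailing sample so this stays O(1) on big buffers
        isample: np.ndarray = index[-nsamples:]
        mxdiff = max(
            (step for step in np.diff(isample) if step > 0),
            default=1,
        )
        # clamp to >= 1 for integer-indexed sources
        return float(max(mxdiff, 1))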
def time_step(self) -> float:
'''
Attempt to determine the per-sample time-step period by
forcing an epoch-index and calling `.index_step()`.
'''
if self._time_step is None:
self._time_step: float = self.index_step(index_field='time')
return self._time_step
def maxmin( def maxmin(
self, self,
@ -436,9 +404,6 @@ class Viz(Struct):
i_read_range: tuple[int, int] | None = None, i_read_range: tuple[int, int] | None = None,
use_caching: bool = True, use_caching: bool = True,
# XXX: internal debug
_do_print: bool = False
) -> tuple[float, float] | None: ) -> tuple[float, float] | None:
''' '''
Compute the cached max and min y-range values for a given Compute the cached max and min y-range values for a given
@ -458,14 +423,15 @@ class Viz(Struct):
if shm is None: if shm is None:
return None return None
arr: ndarray = shm.array do_print: bool = False
arr = shm.array
if i_read_range is not None: if i_read_range is not None:
read_slc = slice(*i_read_range) read_slc = slice(*i_read_range)
index: float | int = arr[read_slc][self.index_field] index = arr[read_slc][self.index_field]
if not index.size: if not index.size:
return None return None
ixrng: tuple[int, int] = (index[0], index[-1]) ixrng = (index[0], index[-1])
else: else:
if x_range is None: if x_range is None:
@ -483,24 +449,15 @@ class Viz(Struct):
# TODO: hash the slice instead maybe? # TODO: hash the slice instead maybe?
# https://stackoverflow.com/a/29980872 # https://stackoverflow.com/a/29980872
ixrng = lbar, rbar = ( ixrng = lbar, rbar = round(x_range[0]), round(x_range[1])
round(x_range[0]),
round(x_range[1]),
)
if ( if (
use_caching use_caching
and self._mxmn_cache_enabled and self._mxmn_cache_enabled
): ):
# TODO: is there a way to ONLY clear ranges containing
# a certain sub-range?
# -[ ] currently we have a problem where a previously
# cached mxmn will persist even if the viz is "hard
# re-rendered" (usually bc underlying data was
# corrected)
cached_result = self._mxmns.get(ixrng) cached_result = self._mxmns.get(ixrng)
if cached_result: if cached_result:
if _do_print: if do_print:
print( print(
f'{self.name} CACHED maxmin\n' f'{self.name} CACHED maxmin\n'
f'{ixrng} -> {cached_result}' f'{ixrng} -> {cached_result}'
@ -530,7 +487,7 @@ class Viz(Struct):
(rbar - ifirst) + 1 (rbar - ifirst) + 1
) )
slice_view: ndarray = arr[read_slc] slice_view = arr[read_slc]
if not slice_view.size: if not slice_view.size:
log.warning( log.warning(
@ -541,7 +498,7 @@ class Viz(Struct):
elif self.ds_yrange: elif self.ds_yrange:
mxmn = self.ds_yrange mxmn = self.ds_yrange
if _do_print: if do_print:
print( print(
f'{self.name} M4 maxmin:\n' f'{self.name} M4 maxmin:\n'
f'{ixrng} -> {mxmn}' f'{ixrng} -> {mxmn}'
@ -558,7 +515,7 @@ class Viz(Struct):
mxmn = ylow, yhigh mxmn = ylow, yhigh
if ( if (
_do_print do_print
): ):
s = 3 s = 3
print( print(
@ -572,23 +529,14 @@ class Viz(Struct):
# cache result for input range # cache result for input range
ylow, yhi = mxmn ylow, yhi = mxmn
diff: float = yhi - ylow
# order-of-magnitude check
# TODO: really we should be checking the hi or low
# against the previous sample to catch stuff like,
# - rando stock (reverse-)split
# - null-segments written by some prior
# crash-during-backfill
if diff > 0:
omg: float = abs(logf(diff, 10))
else:
omg: float = 0
try: try:
prolly_anomaly: bool = ( prolly_anomaly: bool = (
# diff == 0 (
(ylow and omg > 10) abs(logf(ylow, 10)) > 16
if ylow
else False
)
or ( or (
isnan(ylow) or isnan(yhi) isnan(ylow) or isnan(yhi)
) )
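In isolation the new side's heuristic is an order-of-magnitude check on the y-range spread plus a NaN guard; a sketch of that predicate under the same assumptions::

    from math import log as logf, isnan

    def prolly_anomaly(ylow: float, yhi: float) -> bool:
        # a spread spanning > 10 orders of magnitude (or NaNs)
        # likely means null segments or corrupt backfill data,
        # not real price action.
        diff: float = yhi - ylow
        omg: float = abs(logf(diff, 10)) if diff > 0 else 0
        return (
            (omg > 10 if ylow else False)
            or isnan(ylow)
            or isnan(yhi)
        )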
@ -615,8 +563,7 @@ class Viz(Struct):
def view_range(self) -> tuple[int, int]: def view_range(self) -> tuple[int, int]:
''' '''
Return the start and stop x-indexes for the managed Return the start and stop x-indexes for the managed ``ViewBox``.
``ViewBox``.
''' '''
vr = self.plot.viewRect() vr = self.plot.viewRect()
@ -629,7 +576,7 @@ class Viz(Struct):
self, self,
view_range: None | tuple[float, float] = None, view_range: None | tuple[float, float] = None,
index_field: str | None = None, index_field: str | None = None,
array: ndarray | None = None, array: np.ndarray | None = None,
) -> tuple[ ) -> tuple[
int, int, int, int, int, int int, int, int, int, int, int
@ -700,8 +647,8 @@ class Viz(Struct):
profiler: None | Profiler = None, profiler: None | Profiler = None,
) -> tuple[ ) -> tuple[
int, int, ndarray, int, int, np.ndarray,
int, int, ndarray, int, int, np.ndarray,
]: ]:
''' '''
Read the underlying shm array buffer and Read the underlying shm array buffer and
@ -871,10 +818,6 @@ class Viz(Struct):
graphics, graphics,
read, read,
profiler, profiler,
# NOTE: only set when caller says to
force_redraw=should_redraw,
**kwargs, **kwargs,
) )
@ -1037,39 +980,6 @@ class Viz(Struct):
graphics, graphics,
) )
def reset_graphics(
self,
# TODO: allow only resetting within some x-domain range?
# ixrng: tuple[int, int] | None = None,
) -> None:
'''
Hard reset all graphics (rendering) layers for this
data viz including clearing the mxmn auto-y-range
cache.
Normally called when the underlying data set is modified
(probably by some `.tsp` correcting/editing routine) and
the (now cached) graphics need to be fully re-rendered from
source.
'''
log.warning(
f'Forcing hard Viz graphics RESET:\n'
f'.name: {self.name}\n'
f'.index_field: {self.index_field}\n'
f'.index_step(): {self.index_step()}\n'
f'.time_step(): {self.time_step()}\n'
)
# XXX: always clear the mxn y-range cache
# to avoid old data (anomalies) from being
# retained in auto-yrange output.
self._mxmn_cache_enabled = False
self._mxmns.clear()
self.update_graphics(force_redraw=True)
self._mxmn_cache_enabled = True
def draw_last( def draw_last(
self, self,
array_key: str | None = None, array_key: str | None = None,
@ -1162,7 +1072,7 @@ class Viz(Struct):
''' '''
shm: ShmArray = self.shm shm: ShmArray = self.shm
array: ndarray = shm.array array: np.ndarray = shm.array
view: ChartView = self.plot.vb view: ChartView = self.plot.vb
( (
vl, vl,

View File

@ -57,7 +57,6 @@ from piker.toolz import (
Profiler, Profiler,
) )
from piker.log import get_logger from piker.log import get_logger
from piker import config
# from ..data._source import tf_in_1s # from ..data._source import tf_in_1s
from ._axes import YAxisLabel from ._axes import YAxisLabel
from ._chart import ( from ._chart import (
@ -211,9 +210,9 @@ async def increment_history_view(
): ):
hist_chart: ChartPlotWidget = ds.hist_chart hist_chart: ChartPlotWidget = ds.hist_chart
hist_viz: Viz = ds.hist_viz hist_viz: Viz = ds.hist_viz
# viz: Viz = ds.viz viz: Viz = ds.viz
assert 'hist' in hist_viz.shm.token['shm_name'] assert 'hist' in hist_viz.shm.token['shm_name']
# name: str = hist_viz.name name: str = hist_viz.name
# TODO: seems this is more reliable at keeping the slow # TODO: seems this is more reliable at keeping the slow
# chart incremented in view more correctly? # chart incremented in view more correctly?
@ -226,8 +225,7 @@ async def increment_history_view(
# draw everything from scratch on first entry! # draw everything from scratch on first entry!
for curve_name, hist_viz in hist_chart._vizs.items(): for curve_name, hist_viz in hist_chart._vizs.items():
log.info(f'Forcing hard redraw -> {curve_name}') log.info(f'Forcing hard redraw -> {curve_name}')
hist_viz.reset_graphics() hist_viz.update_graphics(force_redraw=True)
# hist_viz.update_graphics(force_redraw=True)
async with open_sample_stream(1.) as min_istream: async with open_sample_stream(1.) as min_istream:
async for msg in min_istream: async for msg in min_istream:
@ -250,27 +248,25 @@ async def increment_history_view(
# - samplerd could emit the actual update range via # - samplerd could emit the actual update range via
# tuple and then we only enter the below block if that # tuple and then we only enter the below block if that
# range is detected as in-view? # range is detected as in-view?
# match msg: if (
# case { (bf_wut := msg.get('backfilling', False))
# 'backfilling': (viz_name, timeframe), ):
# } if ( viz_name, timeframe = bf_wut
# viz_name == name if (
# ): viz_name == name
# log.warning(
# f'Forcing HARD REDRAW:\n' # TODO: only allow this when the data is IN VIEW!
# f'name: {name}\n' # also, we probably can do this more efficiently
# f'timeframe: {timeframe}\n' # / smarter by only redrawing the portion of the
# ) # path necessary?
# # TODO: only allow this when the data is IN VIEW! and False
# # also, we probably can do this more efficiently ):
# # / smarter by only redrawing the portion of the log.info(f'Forcing hard redraw -> {name}@{timeframe}')
# # path necessary? match timeframe:
# { case 60:
# 60: hist_viz, hist_viz.update_graphics(force_redraw=True)
# 1: viz, case 1:
# }[timeframe].update_graphics( viz.update_graphics(force_redraw=True)
# force_redraw=True
# )
# check if slow chart needs an x-domain shift and/or # check if slow chart needs an x-domain shift and/or
# y-range resize. # y-range resize.
@ -311,7 +307,6 @@ async def increment_history_view(
async def graphics_update_loop( async def graphics_update_loop(
dss: dict[str, DisplayState],
nurse: trio.Nursery, nurse: trio.Nursery,
godwidget: GodWidget, godwidget: GodWidget,
feed: Feed, feed: Feed,
@ -353,6 +348,8 @@ async def graphics_update_loop(
'i_last_slow_t': 0, # multiview-global slow (1m) step index 'i_last_slow_t': 0, # multiview-global slow (1m) step index
} }
dss: dict[str, DisplayState] = {}
for fqme, flume in feed.flumes.items(): for fqme, flume in feed.flumes.items():
ohlcv = flume.rt_shm ohlcv = flume.rt_shm
hist_ohlcv = flume.hist_shm hist_ohlcv = flume.hist_shm
@ -471,18 +468,10 @@ async def graphics_update_loop(
if ds.hist_vars['i_last'] < ds.hist_vars['i_last_append']: if ds.hist_vars['i_last'] < ds.hist_vars['i_last_append']:
await tractor.pause() await tractor.pause()
# try:
# XXX TODO: we need to do _dss UPDATE here so that when
# a feed-view is switched you can still remote annotate the
# prior view..
from . import _remote_ctl
_remote_ctl._dss.update(dss)
# main real-time quotes update loop # main real-time quotes update loop
stream: tractor.MsgStream stream: tractor.MsgStream
async with feed.open_multi_stream() as stream: async with feed.open_multi_stream() as stream:
# assert stream assert stream
async for quotes in stream: async for quotes in stream:
quote_period = time.time() - last_quote_s quote_period = time.time() - last_quote_s
quote_rate = round( quote_rate = round(
@ -498,7 +487,7 @@ async def graphics_update_loop(
pass pass
# log.warning(f'High quote rate {mkt.fqme}: {quote_rate}') # log.warning(f'High quote rate {mkt.fqme}: {quote_rate}')
last_quote_s: float = time.time() last_quote_s = time.time()
for fqme, quote in quotes.items(): for fqme, quote in quotes.items():
ds = dss[fqme] ds = dss[fqme]
@ -528,12 +517,6 @@ async def graphics_update_loop(
quote, quote,
) )
# finally:
# # XXX: cancel any remote annotation control ctxs
# _remote_ctl._dss = None
# for cid, (ctx, aids) in _remote_ctl._ctxs.items():
# await ctx.cancel()
def graphics_update_cycle( def graphics_update_cycle(
ds: DisplayState, ds: DisplayState,
@ -1232,8 +1215,6 @@ async def link_views_with_region(
# region.sigRegionChangeFinished.connect(update_pi_from_region) # region.sigRegionChangeFinished.connect(update_pi_from_region)
# NOTE: default is set to 60 FPS until the runtime delivers the
# discovered hw value below.
_quote_throttle_rate: int = 60 - 6 _quote_throttle_rate: int = 60 - 6
@ -1252,7 +1233,7 @@ async def display_symbol_data(
fast from a cached watch-list. fast from a cached watch-list.
''' '''
# sbar = godwidget.window.status_bar sbar = godwidget.window.status_bar
# historical data fetch # historical data fetch
# brokermod = brokers.get_brokermod(provider) # brokermod = brokers.get_brokermod(provider)
@ -1262,11 +1243,11 @@ async def display_symbol_data(
# group_key=loading_sym_key, # group_key=loading_sym_key,
# ) # )
# for fqme in fqmes: for fqme in fqmes:
# loading_sym_key = sbar.open_status( loading_sym_key = sbar.open_status(
# f'loading {fqme} ->', f'loading {fqme} ->',
# group_key=True group_key=True
# ) )
# (TODO: make this not so shit XD) # (TODO: make this not so shit XD)
# close group status once a symbol feed fully loads to view. # close group status once a symbol feed fully loads to view.
@ -1275,54 +1256,26 @@ async def display_symbol_data(
# TODO: ctl over update loop's maximum frequency. # TODO: ctl over update loop's maximum frequency.
# - load this from a config.toml! # - load this from a config.toml!
# - allow dynamic configuration from chart UI? # - allow dynamic configuration from chart UI?
(
conf,
path,
) = config.load()
ui_conf: dict = conf['ui']
global _quote_throttle_rate global _quote_throttle_rate
from ._window import main_window from ._window import main_window
display_rate = main_window().current_screen().refreshRate()
display_rate: int = floor( _quote_throttle_rate = floor(display_rate) - 6
main_window().current_screen().refreshRate()
) - 6
mx_redraw_rate: int = ui_conf.get(
'max_redraw_rate',
_quote_throttle_rate,
)
if mx_redraw_rate < display_rate:
log.info(
'Down-throttling redraw rate to config setting\n'
f'display FPS: {display_rate}\n'
f'max_redraw_rate: {mx_redraw_rate}\n'
)
else:
_quote_throttle_rate = display_rate
# TODO: we should be able to increase this if we use some # TODO: we should be able to increase this if we use some
# `mypyc` speedups elsewhere? 22ish seems to be the sweet # `mypyc` speedups elsewhere? 22ish seems to be the sweet
# spot for single-feed chart. # spot for single-feed chart.
num_of_feeds = len(fqmes) num_of_feeds = len(fqmes)
# if num_of_feeds > 1: mx: int = 22
if num_of_feeds > 1:
# there will be more ctx switches with more than 1 feed so we # there will be more ctx switches with more than 1 feed so we
# max throttle down a bit more. # max throttle down a bit more.
mx_per_feed: int = ( mx = 16
ui_conf.get(
'per_feed_redraw_rate',
mx_redraw_rate,
)
or 16
)
# limit to at least display's FPS # limit to at least display's FPS
# avoiding needless Qt-in-guest-mode context switches # avoiding needless Qt-in-guest-mode context switches
cycles_per_feed = min( cycles_per_feed = min(
round(_quote_throttle_rate/num_of_feeds), round(_quote_throttle_rate/num_of_feeds),
mx_per_feed, mx,
) )
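Both sides of this hunk amount to the same throttle math: derive a global rate from the display FPS (minus a fudge factor) and split it across feeds with a per-feed cap; a sketch assuming the hardcoded 6 FPS headroom::

    from math import floor

    def per_feed_throttle(
        display_rate: float,
        num_of_feeds: int,
        mx_per_feed: int = 16,
    ) -> int:
        # leave ~6 FPS headroom for Qt-in-guest-mode ctx switches,
        # then split evenly across feeds up to the per-feed cap.
        quote_throttle_rate: int = floor(display_rate) - 6
        return min(
            round(quote_throttle_rate / num_of_feeds),
            mx_per_feed,
        )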
feed: Feed feed: Feed
@ -1467,7 +1420,7 @@ async def display_symbol_data(
start_fsp_displays, start_fsp_displays,
rt_linked, rt_linked,
flume, flume,
# loading_sym_key, loading_sym_key,
loglevel, loglevel,
) )
@ -1586,10 +1539,8 @@ async def display_symbol_data(
) )
# start update loop task # start update loop task
dss: dict[str, DisplayState] = {}
ln.start_soon( ln.start_soon(
graphics_update_loop, graphics_update_loop,
dss,
ln, ln,
godwidget, godwidget,
feed, feed,
@ -1603,31 +1554,15 @@ async def display_symbol_data(
order_ctl_fqme: str = fqmes[0] order_ctl_fqme: str = fqmes[0]
mode: OrderMode mode: OrderMode
async with ( async with (
open_order_mode( open_order_mode(
feed, feed,
godwidget, godwidget,
order_ctl_fqme, order_ctl_fqme,
order_mode_started, order_mode_started,
loglevel=loglevel loglevel=loglevel
) as mode, ) as mode
# TODO: maybe have these start up sooner before
# order mode fully boots? but we gotta,
# -[ ] decouple the order mode bindings until
# the mode has fully booted..
# -[ ] maybe do an Event to sync?
# start input handling for ``ChartView`` input
# (i.e. kb + mouse handling loops)
rt_chart.view.open_async_input_handler(
dss=dss,
),
hist_chart.view.open_async_input_handler(
dss=dss,
),
): ):
rt_linked.mode = mode rt_linked.mode = mode
rt_viz = rt_chart.get_viz(order_ctl_fqme) rt_viz = rt_chart.get_viz(order_ctl_fqme)

View File

@ -21,8 +21,7 @@ Higher level annotation editors.
from __future__ import annotations from __future__ import annotations
from collections import defaultdict from collections import defaultdict
from typing import ( from typing import (
Sequence, TYPE_CHECKING
TYPE_CHECKING,
) )
import pyqtgraph as pg import pyqtgraph as pg
@ -32,34 +31,24 @@ from pyqtgraph import (
QtCore, QtCore,
QtWidgets, QtWidgets,
) )
from PyQt5.QtGui import (
QColor,
)
from PyQt5.QtWidgets import (
QLabel,
)
from pyqtgraph import functions as fn from pyqtgraph import functions as fn
from PyQt5.QtCore import QPointF
import numpy as np import numpy as np
from piker.types import Struct from piker.types import Struct
from piker.ui.qt import ( from ._style import hcolor, _font
Qt,
QPointF,
QRectF,
QGraphicsProxyWidget,
QGraphicsScene,
QLabel,
QColor,
QTransform,
)
from ._style import (
hcolor,
_font,
)
from ._lines import LevelLine from ._lines import LevelLine
from ..log import get_logger from ..log import get_logger
if TYPE_CHECKING: if TYPE_CHECKING:
from ._chart import ( from ._chart import GodWidget
GodWidget,
ChartPlotWidget,
)
from ._interaction import ChartView
log = get_logger(__name__) log = get_logger(__name__)
@ -76,7 +65,7 @@ class ArrowEditor(Struct):
uid: str, uid: str,
x: float, x: float,
y: float, y: float,
color: str = 'default', color='default',
pointing: str | None = None, pointing: str | None = None,
) -> pg.ArrowItem: ) -> pg.ArrowItem:
@ -262,75 +251,43 @@ class LineEditor(Struct):
return lines return lines
def as_point(
pair: Sequence[float, float] | QPointF,
) -> QPointF:
'''
Cast any input pair of floats to a `QPointF` object
for use in Qt geometry routines.
'''
if isinstance(pair, QPointF):
return pair
return QPointF(pair[0], pair[1])
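Typical usage pairs two casted points into a `QRectF`; a self-contained sketch (helper re-declared for clarity)::

    from PyQt5.QtCore import QPointF, QRectF

    def as_point(pair) -> QPointF:
        # pass existing points through, cast (x, y) floats otherwise
        if isinstance(pair, QPointF):
            return pair
        return QPointF(pair[0], pair[1])

    rect = QRectF(
        as_point((0.0, 100.0)),  # top-left
        as_point((10.0, 50.0)),  # bottom-right
    )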
# TODO: maybe implement better, something something RectItemProxy??
# -[ ] dig into details of how proxy's work?
# https://doc.qt.io/qt-5/qgraphicsscene.html#addWidget
# -[ ] consider using `.addRect()` maybe?
class SelectRect(QtWidgets.QGraphicsRectItem): class SelectRect(QtWidgets.QGraphicsRectItem):
'''
A data-view "selection rectangle": the most fundamental
geometry for annotating data views.
- https://doc.qt.io/qt-5/qgraphicsrectitem.html
- https://doc.qt.io/qt-6/qgraphicsrectitem.html
'''
def __init__( def __init__(
self, self,
viewbox: ViewBox, viewbox: ViewBox,
color: str | None = None, color: str = 'dad_blue',
) -> None: ) -> None:
super().__init__(0, 0, 1, 1) super().__init__(0, 0, 1, 1)
# self.rbScaleBox = QGraphicsRectItem(0, 0, 1, 1) # self.rbScaleBox = QGraphicsRectItem(0, 0, 1, 1)
self.vb: ViewBox = viewbox self.vb = viewbox
self._chart: 'ChartPlotWidget' = None # noqa
self._chart: ChartPlotWidget | None = None # noqa # override selection box color
# TODO: maybe allow this to be dynamic via a method?
# override selection box color
color: str = color or 'dad_blue'
color = QColor(hcolor(color)) color = QColor(hcolor(color))
self.setPen(fn.mkPen(color, width=1)) self.setPen(fn.mkPen(color, width=1))
color.setAlpha(66) color.setAlpha(66)
self.setBrush(fn.mkBrush(color)) self.setBrush(fn.mkBrush(color))
self.setZValue(1e9) self.setZValue(1e9)
self.hide()
self._label = None
label = self._label = QLabel() label = self._label = QLabel()
label.setTextFormat( label.setTextFormat(0) # markdown
Qt.TextFormat.MarkdownText
)
label.setFont(_font.font) label.setFont(_font.font)
label.setMargin(0) label.setMargin(0)
label.setAlignment( label.setAlignment(
QtCore.Qt.AlignLeft QtCore.Qt.AlignLeft
# | QtCore.Qt.AlignVCenter # | QtCore.Qt.AlignVCenter
) )
label.hide() # always right after init
# proxy is created after containing scene is initialized # proxy is created after containing scene is initialized
self._label_proxy: QGraphicsProxyWidget | None = None self._label_proxy = None
self._abs_top_right: Point | None = None self._abs_top_right = None
# TODO: "swing %" might be handy here (data's max/min # TODO: "swing %" might be handy here (data's max/min # % change)
# # % change)? self._contents = [
self._contents: list[str] = [
'change: {pchng:.2f} %', 'change: {pchng:.2f} %',
'range: {rng:.2f}', 'range: {rng:.2f}',
'bars: {nbars}', 'bars: {nbars}',
@ -340,31 +297,12 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
'sigma: {std:.2f}', 'sigma: {std:.2f}',
] ]
self.add_to_view(viewbox)
self.hide()
def add_to_view(
self,
view: ChartView,
) -> None:
'''
Self-defined view hookup impl which will
also re-assign the internal ref.
'''
view.addItem(
self,
ignoreBounds=True,
)
if self.vb is not view:
self.vb = view
@property @property
def chart(self) -> ChartPlotWidget: # noqa def chart(self) -> 'ChartPlotWidget': # noqa
return self._chart return self._chart
@chart.setter @chart.setter
def chart(self, chart: ChartPlotWidget) -> None: # noqa def chart(self, chart: 'ChartPlotWidget') -> None: # noqa
self._chart = chart self._chart = chart
chart.sigRangeChanged.connect(self.update_on_resize) chart.sigRangeChanged.connect(self.update_on_resize)
palette = self._label.palette() palette = self._label.palette()
@ -377,155 +315,57 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
) )
def update_on_resize(self, vr, r): def update_on_resize(self, vr, r):
''' """Re-position measure label on view range change.
Re-position measure label on view range change.
''' """
if self._abs_top_right: if self._abs_top_right:
self._label_proxy.setPos( self._label_proxy.setPos(
self.vb.mapFromView(self._abs_top_right) self.vb.mapFromView(self._abs_top_right)
) )
def set_scen_pos( def mouse_drag_released(
self, self,
scen_p1: QPointF, p1: QPointF,
scen_p2: QPointF, p2: QPointF
update_label: bool = True,
) -> None: ) -> None:
''' """Called on final button release for mouse drag with start and
Set position from scene coords of selection rect (normally end positions.
from mouse position) and accompanying label, move label to
match.
''' """
# NOTE XXX: apparently just setting it doesn't work!? self.set_pos(p1, p2)
# i have no idea why but it's pretty weird we have to do
# this transform thing which was basically pulled verbatim
# from the `pg.ViewBox.updateScaleBox()` method.
view_rect: QRectF = self.vb.childGroup.mapRectFromScene(
QRectF(
scen_p1,
scen_p2,
)
)
self.setPos(view_rect.topLeft())
# XXX: does not work..!?!?
# https://doc.qt.io/qt-5/qgraphicsrectitem.html#setRect
# self.setRect(view_rect)
tr = QTransform.fromScale( def set_pos(
view_rect.width(),
view_rect.height(),
)
self.setTransform(tr)
# XXX: never got this working, was always offset
# / transformed completely wrong (and off to the far right
# from the cursor?)
# self.set_view_pos(
# view_rect=view_rect,
# # self.vwqpToView(p1),
# # self.vb.mapToView(p2),
# # start_pos=self.vb.mapToScene(p1),
# # end_pos=self.vb.mapToScene(p2),
# )
self.show()
if update_label:
self.init_label(view_rect)
def set_view_pos(
self, self,
p1: QPointF,
start_pos: QPointF | Sequence[float, float] | None = None, p2: QPointF
end_pos: QPointF | Sequence[float, float] | None = None,
view_rect: QRectF | None = None,
update_label: bool = True,
) -> None: ) -> None:
''' """Set position of selection rect and accompanying label, move
Set position from `ViewBox` coords (i.e. from the actual label to match.
data domain) of rect (and any accompanying label which is
moved to match).
''' """
if self._chart is None:
raise RuntimeError(
'You MUST assign a `SelectRect.chart: ChartPlotWidget`!'
)
if view_rect is None:
# ensure point casting
start_pos: QPointF = as_point(start_pos)
end_pos: QPointF = as_point(end_pos)
# map to view coords and update area
view_rect = QtCore.QRectF(
start_pos,
end_pos,
)
self.setPos(view_rect.topLeft())
# NOTE: SERIOUSLY NO IDEA WHY THIS WORKS...
# but it does and all the other commented stuff above
# dint, dawg..
# self.resetTransform()
# self.setRect(view_rect)
tr = QTransform.fromScale(
view_rect.width(),
view_rect.height(),
)
self.setTransform(tr)
if update_label:
self.init_label(view_rect)
print(
'SelectRect modify:\n'
f'QRectF: {view_rect}\n'
f'start_pos: {start_pos}\n'
f'end_pos: {end_pos}\n'
)
self.show()
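Stripped of label handling, the transform trick that actually works here is "park the unit rect at the top-left, then stretch it"; a sketch assuming `item` is a 1x1 `QGraphicsRectItem` like `SelectRect`::

    from PyQt5.QtCore import QRectF
    from PyQt5.QtGui import QTransform
    from PyQt5.QtWidgets import QGraphicsRectItem

    def scale_unit_rect_to(
        item: QGraphicsRectItem,
        view_rect: QRectF,
    ) -> None:
        # position the unit-sized rect item, then stretch it to the
        # target geometry, mirroring `pg.ViewBox.updateScaleBox()`.
        item.setPos(view_rect.topLeft())
        item.setTransform(
            QTransform.fromScale(
                view_rect.width(),
                view_rect.height(),
            )
        )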
def init_label(
self,
view_rect: QRectF,
) -> QLabel:
# should be init-ed in `.__init__()`
label: QLabel = self._label
cv: ChartView = self.vb
# https://doc.qt.io/qt-5/qgraphicsproxywidget.html
if self._label_proxy is None: if self._label_proxy is None:
scen: QGraphicsScene = cv.scene() # https://doc.qt.io/qt-5/qgraphicsproxywidget.html
# NOTE: specifically this is passing a widget self._label_proxy = self.vb.scene().addWidget(self._label)
# pointer to the scene's `.addWidget()` as per,
# https://doc.qt.io/qt-5/qgraphicsproxywidget.html#embedding-a-widget-with-qgraphicsproxywidget
self._label_proxy: QGraphicsProxyWidget = scen.addWidget(label)
# get label startup coords start_pos = self.vb.mapToView(p1)
tl: QPointF = view_rect.topLeft() end_pos = self.vb.mapToView(p2)
br: QPointF = view_rect.bottomRight()
x1, y1 = tl.x(), tl.y() # map to view coords and update area
x2, y2 = br.x(), br.y() r = QtCore.QRectF(start_pos, end_pos)
# TODO: to remove, previous label corner point unpacking # old way; don't need right?
# x1, y1 = start_pos.x(), start_pos.y() # lr = QtCore.QRectF(p1, p2)
# x2, y2 = end_pos.x(), end_pos.y() # r = self.vb.childGroup.mapRectFromParent(lr)
# y1, y2 = start_pos.y(), end_pos.y()
# x1, x2 = start_pos.x(), end_pos.x()
# TODO: heh, could probably use a max-min streaming algo self.setPos(r.topLeft())
# here too? self.resetTransform()
self.setRect(r)
self.show()
y1, y2 = start_pos.y(), end_pos.y()
x1, x2 = start_pos.x(), end_pos.x()
# TODO: heh, could probably use a max-min streaming algo here too
_, xmn = min(y1, y2), min(x1, x2) _, xmn = min(y1, y2), min(x1, x2)
ymx, xmx = max(y1, y2), max(x1, x2) ymx, xmx = max(y1, y2), max(x1, x2)
@ -535,35 +375,26 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
ixmn, ixmx = round(xmn), round(xmx) ixmn, ixmx = round(xmn), round(xmx)
nbars = ixmx - ixmn + 1 nbars = ixmx - ixmn + 1
chart: ChartPlotWidget = self._chart chart = self._chart
data: np.ndarray = chart.get_viz( data = chart.get_viz(chart.name).shm.array[ixmn:ixmx]
chart.name
).shm.array[ixmn:ixmx]
if len(data): if len(data):
std: float = data['close'].std() std = data['close'].std()
dmx: float = data['high'].max() dmx = data['high'].max()
dmn: float = data['low'].min() dmn = data['low'].min()
else: else:
dmn = dmx = std = np.nan dmn = dmx = std = np.nan
# update label info # update label info
label.setText('\n'.join(self._contents).format( self._label.setText('\n'.join(self._contents).format(
pchng=pchng, pchng=pchng, rng=rng, nbars=nbars,
rng=rng, std=std, dmx=dmx, dmn=dmn,
nbars=nbars,
std=std,
dmx=dmx,
dmn=dmn,
)) ))
# print(f'x2, y2: {(x2, y2)}') # print(f'x2, y2: {(x2, y2)}')
# print(f'xmn, ymn: {(xmn, ymx)}') # print(f'xmn, ymn: {(xmn, ymx)}')
label_anchor = Point( label_anchor = Point(xmx + 2, ymx)
xmx + 2,
ymx,
)
# XXX: in the drag bottom-right -> top-left case we don't # XXX: in the drag bottom-right -> top-left case we don't
# want the label to overlay the box. # want the label to overlay the box.
@ -572,40 +403,13 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
# # label_anchor = Point(x2, y2 + self._label.height()) # # label_anchor = Point(x2, y2 + self._label.height())
# label_anchor = Point(xmn, ymn) # label_anchor = Point(xmn, ymn)
self._abs_top_right: Point = label_anchor self._abs_top_right = label_anchor
self._label_proxy.setPos( self._label_proxy.setPos(self.vb.mapFromView(label_anchor))
cv.mapFromView(label_anchor) # self._label.show()
)
label.show()
def hide(self): def clear(self):
''' """Clear the selection box from view.
Clear the selection box from its graphics scene but
don't delete it permanently.
''' """
super().hide()
self._label.hide() self._label.hide()
self.hide()
# TODO: ensure no one else is using dis.
clear = hide
def delete(self) -> None:
'''
De-allocate this rect from its rendering graphics scene.
Like a permanent hide.
'''
scen: QGraphicsScene = self.scene()
if scen is None:
return
scen.removeItem(self)
if (
self._label
and
self._label_proxy
):
scen.removeItem(self._label_proxy)

View File

@ -23,29 +23,28 @@ from typing import Callable
import trio import trio
from tractor.trionics import gather_contexts from tractor.trionics import gather_contexts
from PyQt5 import QtCore
from piker.ui.qt import ( from PyQt5.QtCore import QEvent, pyqtBoundSignal
QtCore, from PyQt5.QtWidgets import QWidget
QWidget, from PyQt5.QtWidgets import (
QEvent, QGraphicsSceneMouseEvent as gs_mouse,
keys,
gs_keys,
pyqtBoundSignal,
) )
from piker.types import Struct from piker.types import Struct
MOUSE_EVENTS = { MOUSE_EVENTS = {
gs_keys.GraphicsSceneMousePress, gs_mouse.GraphicsSceneMousePress,
gs_keys.GraphicsSceneMouseRelease, gs_mouse.GraphicsSceneMouseRelease,
keys.MouseButtonPress, QEvent.MouseButtonPress,
keys.MouseButtonRelease, QEvent.MouseButtonRelease,
# QtGui.QMouseEvent, # QtGui.QMouseEvent,
} }
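For reference, the standard Qt pattern these sets feed into is an `eventFilter()` that whitelists event types; a minimal sketch (`MouseRelay` is illustrative, not the module's actual `EventRelay`)::

    from PyQt5.QtCore import QEvent, QObject

    MOUSE_EVENTS = {
        QEvent.MouseButtonPress,
        QEvent.MouseButtonRelease,
    }

    class MouseRelay(QObject):
        def eventFilter(self, src, ev) -> bool:
            # only repack whitelisted event types; returning False
            # lets Qt keep routing the event as usual.
            if ev.type() in MOUSE_EVENTS:
                ...  # repack + ship over a mem-chan here
            return False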
# TODO: maybe consider some constrained ints down the road? # TODO: maybe consider some constrained ints down the road?
# https://pydantic-docs.helpmanual.io/usage/types/#constrained-types # https://pydantic-docs.helpmanual.io/usage/types/#constrained-types
class KeyboardMsg(Struct): class KeyboardMsg(Struct):
'''Unpacked Qt keyboard event data. '''Unpacked Qt keyboard event data.
@ -115,10 +114,7 @@ class EventRelay(QtCore.QObject):
# something to do with Qt internals and calling the # something to do with Qt internals and calling the
# parent handler? # parent handler?
if etype in { if etype in {QEvent.KeyPress, QEvent.KeyRelease}:
QEvent.Type.KeyPress,
QEvent.Type.KeyRelease,
}:
msg = KeyboardMsg( msg = KeyboardMsg(
event=ev, event=ev,
@ -164,9 +160,7 @@ class EventRelay(QtCore.QObject):
async def open_event_stream( async def open_event_stream(
source_widget: QWidget, source_widget: QWidget,
event_types: set[QEvent] = { event_types: set[QEvent] = {QEvent.KeyPress},
QEvent.Type.KeyPress,
},
filter_auto_repeats: bool = True, filter_auto_repeats: bool = True,
) -> trio.abc.ReceiveChannel: ) -> trio.abc.ReceiveChannel:
@ -207,8 +201,8 @@ async def open_signal_handler(
async for args in recv: async for args in recv:
await async_handler(*args) await async_handler(*args)
async with trio.open_nursery() as tn: async with trio.open_nursery() as n:
tn.start_soon(proxy_to_handler) n.start_soon(proxy_to_handler)
async with send: async with send:
yield yield
@ -218,48 +212,18 @@ async def open_handlers(
source_widgets: list[QWidget], source_widgets: list[QWidget],
event_types: set[QEvent], event_types: set[QEvent],
async_handler: Callable[[QWidget, trio.abc.ReceiveChannel], None],
# NOTE: if you want to bind in additional kwargs to the handler **kwargs,
# pass in a `partial()` instead!
async_handler: Callable[
[QWidget, trio.abc.ReceiveChannel], # required handler args
None
],
# XXX: these are ONLY inputs available to the
# `open_event_stream()` event-relay to mem-chan factory above!
**open_ev_stream_kwargs,
) -> None: ) -> None:
'''
Connect and schedule an async handler function to receive an
arbitrary `QWidget`'s events with kb/mouse msgs repacked into
structs (see above) and shuttled over a mem-chan to the input
`async_handler` to allow interaction-IO processing from
a `trio` func-as-task.
'''
widget: QWidget
streams: list[trio.abc.ReceiveChannel]
async with ( async with (
trio.open_nursery() as tn, trio.open_nursery() as n,
gather_contexts([ gather_contexts([
open_event_stream( open_event_stream(widget, event_types, **kwargs)
widget,
event_types,
**open_ev_stream_kwargs,
)
for widget in source_widgets for widget in source_widgets
]) as streams, ]) as streams,
): ):
for widget, event_recv_stream in zip( for widget, event_recv_stream in zip(source_widgets, streams):
source_widgets, n.start_soon(async_handler, widget, event_recv_stream)
streams,
):
tn.start_soon(
async_handler,
widget,
event_recv_stream,
)
yield yield
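Since the relay only passes `(widget, recv_chan)` positionally, any extra handler state must be pre-bound; a sketch of the `functools.partial` pattern the note above prescribes (names illustrative)::

    from functools import partial

    async def handle_inputs(view, recv_chan, dss: dict):
        # `dss` arrives pre-bound via `partial()`, not from
        # the event relay itself.
        async for msg in recv_chan:
            ...  # dispatch using the bound `dss` table

    # async with _event.open_handlers(
    #     [view],
    #     event_types={...},
    #     async_handler=partial(handle_inputs, dss=my_dss),
    # ):
    #     ...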

View File

@ -30,35 +30,34 @@ from typing import (
import platform import platform
import traceback import traceback
# Qt specific
import PyQt5 # noqa
from PyQt5.QtWidgets import (
QWidget,
QMainWindow,
QApplication,
)
from PyQt5 import QtCore
from PyQt5.QtCore import (
pyqtRemoveInputHook,
Qt,
QCoreApplication,
)
import qdarkstyle import qdarkstyle
from qdarkstyle import DarkPalette from qdarkstyle import DarkPalette
# import qdarkgraystyle # TODO: play with it # import qdarkgraystyle # TODO: play with it
import trio import trio
from outcome import Error from outcome import Error
# Qt version-agnostic
from .qt import (
QWidget,
QMainWindow,
QApplication,
QtCore,
pyqtRemoveInputHook,
Qt,
QCoreApplication,
)
from ..service import ( from ..service import (
maybe_open_pikerd, maybe_open_pikerd,
get_runtime_vars, get_tractor_runtime_kwargs,
) )
from ..log import get_logger from ..log import get_logger
from ._pg_overrides import _do_overrides from ._pg_overrides import _do_overrides
from . import _style from . import _style
if TYPE_CHECKING:
from ._chart import GodWidget
log = get_logger(__name__) log = get_logger(__name__)
# pyqtgraph global config # pyqtgraph global config
@ -147,7 +146,7 @@ def run_qtractor(
# load dark theme # load dark theme
stylesheet = qdarkstyle.load_stylesheet( stylesheet = qdarkstyle.load_stylesheet(
qt_api='pyqt6', qt_api='pyqt5',
palette=DarkPalette, palette=DarkPalette,
) )
app.setStyleSheet(stylesheet) app.setStyleSheet(stylesheet)
@ -174,9 +173,7 @@ def run_qtractor(
instance.window = window instance.window = window
# override tractor's defaults # override tractor's defaults
tractor_kwargs.update( tractor_kwargs.update(get_tractor_runtime_kwargs())
get_runtime_vars()
)
# define tractor entrypoint # define tractor entrypoint
async def main(): async def main():

View File

@ -28,15 +28,9 @@ from typing import (
) )
import trio import trio
from PyQt5 import QtGui
from piker.ui.qt import ( from PyQt5.QtCore import QSize, QModelIndex, Qt, QEvent
keys, from PyQt5.QtWidgets import (
size_policy,
QtGui,
QSize,
QModelIndex,
Qt,
QEvent,
QWidget, QWidget,
QLabel, QLabel,
QComboBox, QComboBox,
@ -45,6 +39,7 @@ from piker.ui.qt import (
QVBoxLayout, QVBoxLayout,
QFormLayout, QFormLayout,
QProgressBar, QProgressBar,
QSizePolicy,
QStyledItemDelegate, QStyledItemDelegate,
QStyleOptionViewItem, QStyleOptionViewItem,
) )
@ -76,14 +71,14 @@ class Edit(QLineEdit):
if width_in_chars: if width_in_chars:
self._chars = int(width_in_chars) self._chars = int(width_in_chars)
x_size_policy = size_policy.Fixed x_size_policy = QSizePolicy.Fixed
else: else:
# chart count which will be used to calculate # chart count which will be used to calculate
# width of input field. # width of input field.
self._chars: int = 6 self._chars: int = 6
# fit to surrounding frame width # fit to surrounding frame width
x_size_policy = size_policy.Expanding x_size_policy = QSizePolicy.Expanding
super().__init__(parent) super().__init__(parent)
@ -91,7 +86,7 @@ class Edit(QLineEdit):
# https://doc.qt.io/qt-5/qsizepolicy.html#Policy-enum # https://doc.qt.io/qt-5/qsizepolicy.html#Policy-enum
self.setSizePolicy( self.setSizePolicy(
x_size_policy, x_size_policy,
size_policy.Fixed, QSizePolicy.Fixed,
) )
self.setFont(font.font) self.setFont(font.font)
@ -185,13 +180,11 @@ class Selection(QComboBox):
self._items: dict[str, int] = {} self._items: dict[str, int] = {}
super().__init__(parent=parent) super().__init__(parent=parent)
self.setSizeAdjustPolicy( self.setSizeAdjustPolicy(QComboBox.AdjustToContents)
QComboBox.SizeAdjustPolicy.AdjustToContents,
)
# make line edit expand to surrounding frame # make line edit expand to surrounding frame
self.setSizePolicy( self.setSizePolicy(
size_policy.Expanding, QSizePolicy.Expanding,
size_policy.Fixed, QSizePolicy.Fixed,
) )
view = self.view() view = self.view()
view.setUniformItemSizes(True) view.setUniformItemSizes(True)
@ -315,8 +308,8 @@ class FieldsForm(QWidget):
# size it as we specify # size it as we specify
self.setSizePolicy( self.setSizePolicy(
size_policy.Expanding, QSizePolicy.Expanding,
size_policy.Expanding, QSizePolicy.Expanding,
) )
# XXX: not sure why we have to create this here exactly # XXX: not sure why we have to create this here exactly
@ -423,8 +416,8 @@ class FieldsForm(QWidget):
select.set_items(values) select.set_items(values)
self.setSizePolicy( self.setSizePolicy(
size_policy.Fixed, QSizePolicy.Fixed,
size_policy.Fixed, QSizePolicy.Fixed,
) )
select.show() select.show()
self.form.addRow(label, select) self.form.addRow(label, select)
@ -444,10 +437,7 @@ async def handle_field_input(
async for kbmsg in recv_chan: async for kbmsg in recv_chan:
if kbmsg.etype in { if kbmsg.etype in {QEvent.KeyPress, QEvent.KeyRelease}:
keys.KeyPress,
keys.KeyRelease,
}:
event, etype, key, mods, txt = kbmsg.to_tuple() event, etype, key, mods, txt = kbmsg.to_tuple()
print(f'key: {kbmsg.key}, mods: {kbmsg.mods}, txt: {kbmsg.txt}') print(f'key: {kbmsg.key}, mods: {kbmsg.mods}, txt: {kbmsg.txt}')
@ -713,8 +703,7 @@ def mk_fill_status_bar(
) )
bottom_label = form.add_field_label( bottom_label = form.add_field_label(
# 'x: {step_size}', 'x: {step_size}',
'{unit_prefix}: {step_size}',
font_size=bar_label_font_size, font_size=bar_label_font_size,
font_color='gunmetal', font_color='gunmetal',
) )

View File

@ -181,10 +181,7 @@ async def open_fsp_sidepane(
async def open_fsp_actor_cluster( async def open_fsp_actor_cluster(
names: list[str] = ['fsp_0', 'fsp_1'], names: list[str] = ['fsp_0', 'fsp_1'],
) -> AsyncGenerator[ ) -> AsyncGenerator[int, dict[str, tractor.Portal]]:
int,
dict[str, tractor.Portal]
]:
from tractor._clustering import open_actor_cluster from tractor._clustering import open_actor_cluster
@ -393,7 +390,7 @@ class FspAdmin:
complete: trio.Event, complete: trio.Event,
started: trio.Event, started: trio.Event,
fqme: str, fqme: str,
dst_flume: Flume, dst_fsp_flume: Flume,
conf: dict, conf: dict,
target: Fsp, target: Fsp,
loglevel: str, loglevel: str,
@ -411,14 +408,16 @@ class FspAdmin:
# chaining entrypoint # chaining entrypoint
cascade, cascade,
# TODO: can't we just drop this and expect
# far end to read the src flume's .mkt.fqme?
# data feed key # data feed key
fqme=fqme, fqme=fqme,
src_flume_addr=self.flume.to_msg(), # TODO: pass `Flume.to_msg()`s here?
dst_flume_addr=dst_flume.to_msg(), # mems
ns_path=ns_path, # edge-bind-func src_shm_token=self.flume.rt_shm.token,
dst_shm_token=dst_fsp_flume.rt_shm.token,
# target
ns_path=ns_path,
loglevel=loglevel, loglevel=loglevel,
zero_on_step=conf.get('zero_on_step', False), zero_on_step=conf.get('zero_on_step', False),
@ -432,14 +431,14 @@ class FspAdmin:
ctx.open_stream() as stream, ctx.open_stream() as stream,
): ):
dst_flume.stream: tractor.MsgStream = stream dst_fsp_flume.stream: tractor.MsgStream = stream
# register output data # register output data
self._registry[ self._registry[
(fqme, ns_path) (fqme, ns_path)
] = ( ] = (
stream, stream,
dst_flume.rt_shm, dst_fsp_flume.rt_shm,
complete complete
) )
@ -516,7 +515,7 @@ class FspAdmin:
broker='piker', broker='piker',
_atype='fsp', _atype='fsp',
) )
dst_flume = Flume( dst_fsp_flume = Flume(
mkt=mkt, mkt=mkt,
_rt_shm_token=dst_shm.token, _rt_shm_token=dst_shm.token,
first_quote={}, first_quote={},
@ -544,13 +543,13 @@ class FspAdmin:
complete, complete,
started, started,
fqme, fqme,
dst_flume, dst_fsp_flume,
conf, conf,
target, target,
loglevel, loglevel,
) )
return dst_flume, started return dst_fsp_flume, started
async def open_fsp_chart( async def open_fsp_chart(
self, self,
@ -560,7 +559,7 @@ class FspAdmin:
conf: dict, # yeah probably dumb.. conf: dict, # yeah probably dumb..
loglevel: str = 'error', loglevel: str = 'error',
) -> trio.Event: ) -> (trio.Event, ChartPlotWidget):
flume, started = await self.start_engine_task( flume, started = await self.start_engine_task(
target, target,
@ -927,7 +926,7 @@ async def start_fsp_displays(
linked: LinkedSplits, linked: LinkedSplits,
flume: Flume, flume: Flume,
# group_status_key: str, group_status_key: str,
loglevel: str, loglevel: str,
) -> None: ) -> None:
@ -974,23 +973,21 @@ async def start_fsp_displays(
flume, flume,
) as admin, ) as admin,
): ):
statuses: list[trio.Event] = [] statuses = []
for target, conf in fsp_conf.items(): for target, conf in fsp_conf.items():
started: trio.Event = await admin.open_fsp_chart( started = await admin.open_fsp_chart(
target, target,
conf, conf,
) )
# done = linked.window().status_bar.open_status( done = linked.window().status_bar.open_status(
# f'loading fsp, {target}..', f'loading fsp, {target}..',
# group_key=group_status_key, group_key=group_status_key,
# ) )
# statuses.append((started, done)) statuses.append((started, done))
statuses.append(started)
# for fsp_loaded, status_cb in statuses: for fsp_loaded, status_cb in statuses:
for fsp_loaded in statuses:
await fsp_loaded.wait() await fsp_loaded.wait()
profiler(f'attached to fsp portal: {target}') profiler(f'attached to fsp portal: {target}')
# status_cb() status_cb()
# blocks on nursery until all fsp actors complete # blocks on nursery until all fsp actors complete

View File

@ -15,18 +15,15 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
`QIcon` hackery. ``QIcon`` hackery.
Mostly dynamically loading pixmaps for use with `QGraphicsScene`.
''' '''
from piker.ui.qt import ( from PyQt5.QtWidgets import QStyle
QSize, from PyQt5.QtGui import (
QStyle, QIcon, QPixmap, QColor
QIcon,
QPixmap,
QColor,
) )
from PyQt5.QtCore import QSize
from ._style import hcolor from ._style import hcolor
# https://www.pythonguis.com/faq/built-in-qicons-pyqt/ # https://www.pythonguis.com/faq/built-in-qicons-pyqt/
@ -47,8 +44,7 @@ def mk_icons(
size: QSize, size: QSize,
) -> dict[str, QIcon]: ) -> dict[str, QIcon]:
''' '''This helper is idempotent.
This helper is idempotent.
''' '''
global _icons, _icon_names global _icons, _icon_names
@ -60,11 +56,7 @@ def mk_icons(
# load account selection using current style # load account selection using current style
for name, icon_name in _icon_names.items(): for name, icon_name in _icon_names.items():
stdpixmap = getattr( stdpixmap = getattr(QStyle, icon_name)
# https://www.pythonguis.com/faq/built-in-qicons-pyqt/
QStyle.StandardPixmap, # pyqt/pyside6
icon_name,
)
stdicon = style.standardIcon(stdpixmap) stdicon = style.standardIcon(stdpixmap)
pixmap = stdicon.pixmap(size) pixmap = stdicon.pixmap(size)
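The lookup chain on either side is: enum name -> standard pixmap -> style-rendered icon -> sized pixmap; a sketch assuming a live `QApplication` (required for `.style()` to return anything)::

    from PyQt5.QtCore import QSize
    from PyQt5.QtGui import QIcon
    from PyQt5.QtWidgets import QApplication, QStyle

    def std_icon(icon_name: str, size: QSize) -> QIcon:
        # e.g. icon_name = 'SP_MessageBoxWarning'
        style = QApplication.style()
        stdpixmap = getattr(QStyle, icon_name)  # pyqt5 enum lookup
        stdicon = style.standardIcon(stdpixmap)
        return QIcon(stdicon.pixmap(size))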

View File

@ -23,7 +23,6 @@ from contextlib import (
asynccontextmanager, asynccontextmanager,
ExitStack, ExitStack,
) )
from functools import partial
import time import time
from typing import ( from typing import (
Callable, Callable,
@ -31,26 +30,24 @@ from typing import (
) )
import pyqtgraph as pg import pyqtgraph as pg
# NOTE XXX: pg is super annoying and re-implements its own mouse # from pyqtgraph.GraphicsScene import mouseEvents
# event subsystem.. we should really look into re-working/writing from PyQt5.QtWidgets import QGraphicsSceneMouseEvent as gs_mouse
# this down the road.. Bo from PyQt5.QtGui import (
from pyqtgraph.GraphicsScene import mouseEvents as mevs QWheelEvent,
# from pyqtgraph.GraphicsScene.mouseEvents import MouseDragEvent )
from PyQt5.QtCore import (
Qt,
QEvent,
)
from pyqtgraph import ( from pyqtgraph import (
ViewBox, ViewBox,
Point, Point,
QtCore, QtCore,
functions as fn,
) )
from pyqtgraph import functions as fn
import numpy as np import numpy as np
import trio import trio
from piker.ui.qt import (
QWheelEvent,
QGraphicsSceneMouseEvent as gs_mouse,
Qt,
QEvent,
)
from ..log import get_logger from ..log import get_logger
from ..toolz import ( from ..toolz import (
Profiler, Profiler,
@ -73,28 +70,27 @@ if TYPE_CHECKING:
) )
from ._dataviz import Viz from ._dataviz import Viz
from .order_mode import OrderMode from .order_mode import OrderMode
from ._display import DisplayState
log = get_logger(__name__) log = get_logger(__name__)
NUMBER_LINE = { NUMBER_LINE = {
Qt.Key.Key_1, Qt.Key_1,
Qt.Key.Key_2, Qt.Key_2,
Qt.Key.Key_3, Qt.Key_3,
Qt.Key.Key_4, Qt.Key_4,
Qt.Key.Key_5, Qt.Key_5,
Qt.Key.Key_6, Qt.Key_6,
Qt.Key.Key_7, Qt.Key_7,
Qt.Key.Key_8, Qt.Key_8,
Qt.Key.Key_9, Qt.Key_9,
Qt.Key.Key_0, Qt.Key_0,
} }
ORDER_MODE = { ORDER_MODE = {
Qt.Key.Key_A, Qt.Key_A,
Qt.Key.Key_F, Qt.Key_F,
Qt.Key.Key_D, Qt.Key_D,
} }
@ -102,7 +98,6 @@ async def handle_viewmode_kb_inputs(
view: ChartView, view: ChartView,
recv_chan: trio.abc.ReceiveChannel, recv_chan: trio.abc.ReceiveChannel,
dss: dict[str, DisplayState],
) -> None: ) -> None:
@ -178,42 +173,17 @@ async def handle_viewmode_kb_inputs(
Qt.Key_P, Qt.Key_P,
} }
): ):
import tractor
feed = order_mode.feed # noqa feed = order_mode.feed # noqa
chart = order_mode.chart # noqa chart = order_mode.chart # noqa
viz = chart.main_viz # noqa viz = chart.main_viz # noqa
vlm_chart = chart.linked.subplots['volume'] # noqa vlm_chart = chart.linked.subplots['volume'] # noqa
vlm_viz = vlm_chart.main_viz # noqa vlm_viz = vlm_chart.main_viz # noqa
dvlm_pi = vlm_chart._vizs['dolla_vlm'].plot # noqa dvlm_pi = vlm_chart._vizs['dolla_vlm'].plot # noqa
import tractor
await tractor.pause() await tractor.pause()
view.interact_graphics_cycle() view.interact_graphics_cycle()
# FORCE graphics reset-and-render of all currently # SEARCH MODE #
# shown data `Viz`s for the current chart app.
if (
ctrl
and key in {
Qt.Key_R,
}
):
fqme: str
ds: DisplayState
for fqme, ds in dss.items():
viz: Viz
for tf, viz in {
60: ds.hist_viz,
1: ds.viz,
}.items():
# TODO: only allow this when the data is IN VIEW!
# also, we probably can do this more efficiently
# / smarter by only redrawing the portion of the
# path necessary?
viz.reset_graphics()
# ------ - ------
# SEARCH MODE
# ------ - ------
# ctlr-<space>/<l> for "lookup", "search" -> open search tree # ctlr-<space>/<l> for "lookup", "search" -> open search tree
if ( if (
ctrl ctrl
@ -273,10 +243,8 @@ async def handle_viewmode_kb_inputs(
delta=-view.def_delta, delta=-view.def_delta,
) )
elif ( elif key == Qt.Key_R:
not ctrl
and key == Qt.Key_R
):
# NOTE: seems that if we don't yield a Qt render # NOTE: seems that if we don't yield a Qt render
# cycle then the m4 downsampled curves will show here # cycle then the m4 downsampled curves will show here
# without another reset.. # without another reset..
@ -459,7 +427,6 @@ async def handle_viewmode_mouse(
view: ChartView, view: ChartView,
recv_chan: trio.abc.ReceiveChannel, recv_chan: trio.abc.ReceiveChannel,
dss: dict[str, DisplayState],
) -> None: ) -> None:
@ -499,7 +466,6 @@ class ChartView(ViewBox):
mode_name: str = 'view' mode_name: str = 'view'
def_delta: float = 616 * 6 def_delta: float = 616 * 6
def_scale_factor: float = 1.016 ** (def_delta * -1 / 20) def_scale_factor: float = 1.016 ** (def_delta * -1 / 20)
# annots: dict[int, GraphicsObject] = {}
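The scale-factor expression reads as "one ~1.6% zoom step per 20 units of wheel delta"; the same math applied per-event might look like::

    def_delta: float = 616 * 6
    def_scale_factor: float = 1.016 ** (def_delta * -1 / 20)

    def wheel_scale(delta: float) -> float:
        # Qt wheel deltas arrive in multiples of 120 (15 deg * 8);
        # exponentiating makes repeated ticks compose multiplicatively.
        return 1.016 ** (-delta / 20)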
def __init__( def __init__(
self, self,
@ -520,7 +486,6 @@ class ChartView(ViewBox):
# defaultPadding=0., # defaultPadding=0.,
**kwargs **kwargs
) )
# for "known y-range style" # for "known y-range style"
self._static_yrange = static_yrange self._static_yrange = static_yrange
@ -535,11 +500,7 @@ class ChartView(ViewBox):
# add our selection box annotator # add our selection box annotator
self.select_box = SelectRect(self) self.select_box = SelectRect(self)
# self.select_box.add_to_view(self) self.addItem(self.select_box, ignoreBounds=True)
# self.addItem(
# self.select_box,
# ignoreBounds=True,
# )
self.mode = None self.mode = None
self.order_mode: bool = False self.order_mode: bool = False
@ -596,7 +557,6 @@ class ChartView(ViewBox):
@asynccontextmanager @asynccontextmanager
async def open_async_input_handler( async def open_async_input_handler(
self, self,
**handler_kwargs,
) -> ChartView: ) -> ChartView:
@ -607,20 +567,14 @@ class ChartView(ViewBox):
QEvent.KeyPress, QEvent.KeyPress,
QEvent.KeyRelease, QEvent.KeyRelease,
}, },
async_handler=partial( async_handler=handle_viewmode_kb_inputs,
handle_viewmode_kb_inputs,
**handler_kwargs,
),
), ),
_event.open_handlers( _event.open_handlers(
[self], [self],
event_types={ event_types={
gs_mouse.GraphicsSceneMousePress, gs_mouse.GraphicsSceneMousePress,
}, },
async_handler=partial( async_handler=handle_viewmode_mouse,
handle_viewmode_mouse,
**handler_kwargs,
),
), ),
): ):
yield self yield self
@ -757,18 +711,17 @@ class ChartView(ViewBox):
def mouseDragEvent( def mouseDragEvent(
self, self,
ev: mevs.MouseDragEvent, ev,
axis: int | None = None, axis: int | None = None,
) -> None: ) -> None:
pos: Point = ev.pos() pos = ev.pos()
lastPos: Point = ev.lastPos() lastPos = ev.lastPos()
dif: Point = (pos - lastPos) * -1 dif = pos - lastPos
# dif: Point = pos - lastPos dif = dif * -1
# dif: Point = dif * -1
# NOTE: if axis is specified, event will only affect that axis. # NOTE: if axis is specified, event will only affect that axis.
btn = ev.button() button = ev.button()
# Ignore axes if mouse is disabled # Ignore axes if mouse is disabled
mouseEnabled = np.array( mouseEnabled = np.array(
@ -780,7 +733,7 @@ class ChartView(ViewBox):
mask[1-axis] = 0.0 mask[1-axis] = 0.0
# Scale or translate based on mouse button # Scale or translate based on mouse button
if btn & ( if button & (
QtCore.Qt.LeftButton | QtCore.Qt.MidButton QtCore.Qt.LeftButton | QtCore.Qt.MidButton
): ):
# zoom y-axis ONLY when click-n-drag on it # zoom y-axis ONLY when click-n-drag on it
@ -803,55 +756,34 @@ class ChartView(ViewBox):
# XXX: WHY # XXX: WHY
ev.accept() ev.accept()
down_pos: Point = ev.buttonDownPos( down_pos = ev.buttonDownPos()
btn=btn,
)
scen_pos: Point = ev.scenePos()
scen_down_pos: Point = ev.buttonDownScenePos(
btn=btn,
)
# This is the final position in the drag # This is the final position in the drag
if ev.isFinish(): if ev.isFinish():
# import pdbp; pdbp.set_trace() self.select_box.mouse_drag_released(down_pos, pos)
# NOTE: think of this as a `.mouse_drag_release()`
# (that's what the method which originally wrapped
# this single call was named..)
self.select_box.set_scen_pos(
# down_pos,
# pos,
scen_down_pos,
scen_pos,
)
# this is the zoom transform cmd
ax = QtCore.QRectF(down_pos, pos) ax = QtCore.QRectF(down_pos, pos)
ax = self.childGroup.mapRectFromParent(ax) ax = self.childGroup.mapRectFromParent(ax)
# self.showAxRect(ax)
# this is the zoom transform cmd
self.showAxRect(ax)
# axis history tracking # axis history tracking
self.axHistoryPointer += 1 self.axHistoryPointer += 1
self.axHistory = self.axHistory[ self.axHistory = self.axHistory[
:self.axHistoryPointer] + [ax] :self.axHistoryPointer] + [ax]
else: else:
self.select_box.set_scen_pos( print('drag finish?')
# down_pos, self.select_box.set_pos(down_pos, pos)
# pos,
scen_down_pos,
scen_pos,
)
# update shape of scale box # update shape of scale box
# self.updateScaleBox(ev.buttonDownPos(), ev.pos()) # self.updateScaleBox(ev.buttonDownPos(), ev.pos())
# breakpoint() self.updateScaleBox(
# self.updateScaleBox( down_pos,
# down_pos, ev.pos(),
# ev.pos(), )
# )
# PANNING MODE # PANNING MODE
else: else:
@ -890,7 +822,7 @@ class ChartView(ViewBox):
# ev.accept() # ev.accept()
# WEIRD "RIGHT-CLICK CENTER ZOOM" MODE # WEIRD "RIGHT-CLICK CENTER ZOOM" MODE
elif btn & QtCore.Qt.RightButton: elif button & QtCore.Qt.RightButton:
if self.state['aspectLocked'] is not False: if self.state['aspectLocked'] is not False:
mask[0] = 0 mask[0] = 0

View File

@ -21,12 +21,9 @@ Double auction top-of-book (L1) graphics.
from typing import Tuple from typing import Tuple
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtCore, QtGui
from PyQt5.QtCore import QPointF
from piker.ui.qt import (
QPointF,
QtCore,
QtGui,
)
from ._axes import YAxisLabel from ._axes import YAxisLabel
from ._style import hcolor from ._style import hcolor
from ._pg_overrides import PlotItem from ._pg_overrides import PlotItem

View File

@ -25,17 +25,10 @@ from typing import (
) )
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtGui, QtWidgets
from PyQt5.QtWidgets import QLabel, QSizePolicy
from PyQt5.QtCore import QPointF, QRectF, Qt
from piker.ui.qt import (
px_cache_mode,
QtGui,
QtWidgets,
QLabel,
size_policy,
QPointF,
QRectF,
Qt,
)
from ._style import ( from ._style import (
DpiAwareFont, DpiAwareFont,
hcolor, hcolor,
@ -85,7 +78,7 @@ class Label:
self._x_offset = x_offset self._x_offset = x_offset
txt = self.txt = QtWidgets.QGraphicsTextItem(parent=parent) txt = self.txt = QtWidgets.QGraphicsTextItem(parent=parent)
txt.setCacheMode(px_cache_mode.DeviceCoordinateCache) txt.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
vb.scene().addItem(txt) vb.scene().addItem(txt)
@ -110,7 +103,7 @@ class Label:
self._anchor_func = self.txt.pos().x self._anchor_func = self.txt.pos().x
# not sure if this makes a diff # not sure if this makes a diff
self.txt.setCacheMode(px_cache_mode.DeviceCoordinateCache) self.txt.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
# TODO: edit and selection support # TODO: edit and selection support
# https://doc.qt.io/qt-5/qt.html#TextInteractionFlag-enum # https://doc.qt.io/qt-5/qt.html#TextInteractionFlag-enum
@ -306,14 +299,12 @@ class FormatLabel(QLabel):
""" """
) )
self.setFont(_font.font) self.setFont(_font.font)
self.setTextFormat( self.setTextFormat(Qt.MarkdownText) # markdown
Qt.TextFormat.MarkdownText
)
self.setMargin(0) self.setMargin(0)
self.setSizePolicy( self.setSizePolicy(
size_policy.Expanding, QSizePolicy.Expanding,
size_policy.Expanding, QSizePolicy.Expanding,
) )
self.setAlignment( self.setAlignment(
Qt.AlignVCenter | Qt.AlignLeft Qt.AlignVCenter | Qt.AlignLeft

View File

@ -27,22 +27,10 @@ from typing import (
) )
import pyqtgraph as pg import pyqtgraph as pg
from pyqtgraph import ( from pyqtgraph import Point, functions as fn
Point, from PyQt5 import QtCore, QtGui, QtWidgets
functions as fn, from PyQt5.QtCore import QPointF
)
from piker.ui.qt import (
px_cache_mode,
QtCore,
QtGui,
QGraphicsPathItem,
QStyleOptionGraphicsItem,
QGraphicsItem,
QGraphicsScene,
QWidget,
QPointF,
)
from ._annotate import LevelMarker from ._annotate import LevelMarker
from ._anchors import ( from ._anchors import (
vbr_left, vbr_left,
@ -142,9 +130,7 @@ class LevelLine(pg.InfiniteLine):
self._right_end_sc: float = 0 self._right_end_sc: float = 0
# use px caching # use px caching
self.setCacheMode( self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
px_cache_mode.DeviceCoordinateCache
)
def txt_offsets(self) -> tuple[int, int]: def txt_offsets(self) -> tuple[int, int]:
return 0, 0 return 0, 0
@ -215,7 +201,7 @@ class LevelLine(pg.InfiniteLine):
) -> None: ) -> None:
if not called_from_on_pos_change: if not called_from_on_pos_change:
last: float = self.value() last = self.value()
# if the position hasn't changed then ``.update_labels()`` # if the position hasn't changed then ``.update_labels()``
# will not be called by a non-triggered `.on_pos_change()`, # will not be called by a non-triggered `.on_pos_change()`,
@ -322,7 +308,7 @@ class LevelLine(pg.InfiniteLine):
Remove this line from containing chart/view/scene. Remove this line from containing chart/view/scene.
''' '''
scene: QGraphicsScene = self.scene() scene = self.scene()
if scene: if scene:
for label in self._labels: for label in self._labels:
label.delete() label.delete()
@ -353,8 +339,8 @@ class LevelLine(pg.InfiniteLine):
self, self,
p: QtGui.QPainter, p: QtGui.QPainter,
opt: QStyleOptionGraphicsItem, opt: QtWidgets.QStyleOptionGraphicsItem,
w: QWidget w: QtWidgets.QWidget
) -> None: ) -> None:
''' '''
@ -431,9 +417,9 @@ class LevelLine(pg.InfiniteLine):
def add_marker( def add_marker(
self, self,
path: QGraphicsPathItem, path: QtWidgets.QGraphicsPathItem,
) -> QGraphicsPathItem: ) -> QtWidgets.QGraphicsPathItem:
self._marker = path self._marker = path
self._marker.setPen(self.currentPen) self._marker.setPen(self.currentPen)

View File

@ -20,14 +20,16 @@ Super fast OHLC sampling graphics types.
from __future__ import annotations from __future__ import annotations
import numpy as np import numpy as np
from PyQt5 import (
from piker.ui.qt import (
QtGui, QtGui,
QtWidgets, QtWidgets,
QPainterPath, )
from PyQt5.QtCore import (
QLineF, QLineF,
QRectF, QRectF,
) )
from PyQt5.QtGui import QPainterPath
from ._curve import FlowGraphic from ._curve import FlowGraphic
from ..toolz import ( from ..toolz import (
Profiler, Profiler,

View File

@ -24,6 +24,8 @@ view transforms.
""" """
import pyqtgraph as pg import pyqtgraph as pg
from ._axes import Axis
def invertQTransform(tr): def invertQTransform(tr):
"""Return a QTransform that is the inverse of *tr*. """Return a QTransform that is the inverse of *tr*.
@ -51,9 +53,6 @@ def _do_overrides() -> None:
pg.functions.invertQTransform = invertQTransform pg.functions.invertQTransform = invertQTransform
pg.PlotItem = PlotItem pg.PlotItem = PlotItem
from ._axes import Axis
pg.Axis = Axis
# enable "QPainterPathPrivate for faster arrayToQPath" from # enable "QPainterPathPrivate for faster arrayToQPath" from
# https://github.com/pyqtgraph/pyqtgraph/pull/2324 # https://github.com/pyqtgraph/pyqtgraph/pull/2324
pg.setConfigOption('enableExperimental', True) pg.setConfigOption('enableExperimental', True)
@ -235,7 +234,7 @@ class PlotItem(pg.PlotItem):
# ``ViewBox`` geometry bug.. where a gap for the # ``ViewBox`` geometry bug.. where a gap for the
# 'bottom' axis is somehow left in? # 'bottom' axis is somehow left in?
# axis = pg.AxisItem(orientation=name, parent=self) # axis = pg.AxisItem(orientation=name, parent=self)
axis = pg.Axis( axis = Axis(
self, self,
orientation=name, orientation=name,
parent=self, parent=self,

View File

@ -344,10 +344,7 @@ class SettingsPane:
dsize = tracker.live_pp.dsize dsize = tracker.live_pp.dsize
# READ out settings and update the status UI / settings widgets # READ out settings and update the status UI / settings widgets
unit_char: str = { suffix = {'currency': ' $', 'units': ' u'}[alloc.size_unit]
'currency': '$',
'units': 'u',
}[alloc.size_unit]
size_unit, limit = alloc.limit_info() size_unit, limit = alloc.limit_info()
step_size, currency_per_slot = alloc.step_sizes() step_size, currency_per_slot = alloc.step_sizes()
@ -361,11 +358,10 @@ class SettingsPane:
self.apply_setting('limit', limit) self.apply_setting('limit', limit)
self.step_label.format( self.step_label.format(
unit_prefix=unit_char, step_size=str(humanize(step_size)) + suffix
step_size=str(humanize(step_size))
) )
self.limit_label.format( self.limit_label.format(
limit=f'{unit_char}: {str(humanize(limit))}' limit=str(humanize(limit)) + suffix
) )
# update size unit in UI # update size unit in UI

View File

@ -1,426 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Remote control tasks for sending annotations (and maybe more cmds)
to a chart from some other actor.
'''
from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
AsyncExitStack,
)
from functools import partial
from pprint import pformat
from typing import (
# Any,
AsyncContextManager,
)
import tractor
from tractor import trionics
from tractor import (
Portal,
Context,
MsgStream,
)
from piker.log import get_logger
from piker.types import Struct
from piker.service import find_service
from piker.brokers import SymbolNotFound
from piker.ui.qt import (
QGraphicsItem,
)
from ._display import DisplayState
from ._interaction import ChartView
from ._editors import SelectRect
from ._chart import ChartPlotWidget
from ._dataviz import Viz
log = get_logger(__name__)
# NOTE: this is UPDATED by the `._display.graphics_update_loop()`
# once all chart widgets / Viz per flume have been initialized
# allowing for remote annotation (control) of any chart-actor's mkt
# feed by fqme lookup Bo
_dss: dict[str, DisplayState] = {}
# stash each and every client connection so that they can all
# be cancelled on shutdown/error.
# TODO: make `tractor.Context` hashable via is `.cid: str`?
# _ctxs: set[Context] = set()
# TODO: use type statements from 3.12+
IpcCtxTable = dict[
str, # each `Context.cid`
tuple[
Context, # handle for ctx-cancellation
set[int] # set of annotation (instance) ids
]
]
_ctxs: IpcCtxTable = {}
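# NOTE (sketch): the 3.12+ `type` statement referred to by the
# TODO above would spell the same alias as:
# type IpcCtxTable = dict[str, tuple[Context, set[int]]]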
# XXX: global map of all uniquely created annotation-graphics so
# that they can be mutated (eventually) by a client.
# NOTE: this map is only populated on the `chart` actor side (aka
# the "annotations server" which actually renders to a Qt canvas).
# type AnnotsTable = dict[int, QGraphicsItem]
AnnotsTable = dict[int, QGraphicsItem]
_annots: AnnotsTable = {}
async def serve_rc_annots(
ipc_key: str,
annot_req_stream: MsgStream,
dss: dict[str, DisplayState],
ctxs: IpcCtxTable,
annots: AnnotsTable,
) -> None:
async for msg in annot_req_stream:
match msg:
case {
'cmd': 'SelectRect',
'fqme': fqme,
'timeframe': timeframe,
'meth': str(meth),
'kwargs': dict(kwargs),
}:
ds: DisplayState = _dss[fqme]
chart: ChartPlotWidget = {
60: ds.hist_chart,
1: ds.chart,
}[timeframe]
cv: ChartView = chart.cv
# annot type lookup from cmd
rect = SelectRect(
viewbox=cv,
# TODO: make this more dynamic?
# -[ ] pull from conf.toml?
# -[ ] add `.set_color()` method to type?
# -[ ] make a green/red based on direction
# instead of default static color?
color=kwargs.pop('color', None),
)
# XXX NOTE: this is REQUIRED to set the rect
# resize callback!
rect.chart: ChartPlotWidget = chart
# delegate generically to the requested method
getattr(rect, meth)(**kwargs)
rect.show()
aid: int = id(rect)
annots[aid] = rect
aids: set[int] = ctxs[ipc_key][1]
aids.add(aid)
await annot_req_stream.send(aid)
case {
'cmd': 'remove',
'aid': int(aid),
}:
# NOTE: this is normally entered on
# a client's annotation de-alloc,
# prior to detach or modify.
annot: QGraphicsItem = annots[aid]
annot.delete()
# respond to client indicating annot
# was indeed deleted.
await annot_req_stream.send(aid)
case {
'cmd': 'redraw',
'fqme': fqme,
'timeframe': timeframe,
# TODO: maybe more fields?
# 'render': int(aid),
# 'viz_name': str(viz_name),
}:
# NOTE: old match from the 60s display loop task
# | {
# 'backfilling': (str(viz_name), timeframe),
# }:
ds: DisplayState = _dss[fqme]
viz: Viz = {
60: ds.hist_viz,
1: ds.viz,
}[timeframe]
log.warning(
f'Forcing VIZ REDRAW:\n'
f'fqme: {fqme}\n'
f'timeframe: {timeframe}\n'
)
viz.reset_graphics()
case _:
log.error(
'Unknown remote annotation cmd:\n'
f'{pformat(msg)}'
)
@tractor.context
async def remote_annotate(
ctx: Context,
) -> None:
global _dss, _ctxs
assert _dss
_ctxs[ctx.cid] = (ctx, set())
# send back full fqme symbology to caller
await ctx.started(list(_dss))
# open annot request handler stream
async with ctx.open_stream() as annot_req_stream:
try:
await serve_rc_annots(
ipc_key=ctx.cid,
annot_req_stream=annot_req_stream,
dss=_dss,
ctxs=_ctxs,
annots=_annots,
)
finally:
# ensure all annots for this connection are deleted
# on any final teardown
(_ctx, aids) = _ctxs[ctx.cid]
assert _ctx is ctx
for aid in aids:
annot: QGraphicsItem = _annots[aid]
annot.delete()
class AnnotCtl(Struct):
'''
A control for remote "data annotations".
You know those "squares they always show in machine vision
UIs.." this API allows you to remotely control stuff like that
in some other graphics actor.
'''
ctx2fqmes: dict[str, str]
fqme2ipc: dict[str, MsgStream]
_annot_stack: AsyncExitStack
# runtime-populated mapping of all annotation
# ids to their equivalent IPC msg-streams.
_ipcs: dict[int, MsgStream] = {}
def _get_ipc(
self,
fqme: str,
) -> MsgStream:
ipc: MsgStream = self.fqme2ipc.get(fqme)
if ipc is None:
raise SymbolNotFound(
'No chart (actor) seems to have mkt feed loaded?\n'
f'{fqme}'
)
return ipc
async def add_rect(
self,
fqme: str,
timeframe: float,
start_pos: tuple[float, float],
end_pos: tuple[float, float],
# TODO: a `Literal['view', 'scene']` for this?
domain: str = 'view', # or 'scene'
color: str = 'dad_blue',
from_acm: bool = False,
) -> int:
'''
Add a `SelectRect` annotation to the target view, return
the instance's `id(obj)` from the remote UI actor.
'''
ipc: MsgStream = self._get_ipc(fqme)
await ipc.send({
'fqme': fqme,
'cmd': 'SelectRect',
'timeframe': timeframe,
# 'meth': str(meth),
'meth': 'set_view_pos' if domain == 'view' else 'set_scene_pos',
'kwargs': {
'start_pos': tuple(start_pos),
'end_pos': tuple(end_pos),
'color': color,
'update_label': False,
},
})
aid: int = await ipc.receive()
self._ipcs[aid] = ipc
if not from_acm:
self._annot_stack.push_async_callback(
partial(
self.remove,
aid,
)
)
return aid
async def remove(
self,
aid: int,
) -> bool:
'''
Remove an existing annotation by instance id.
'''
ipc: MsgStream = self._ipcs[aid]
await ipc.send({
'cmd': 'remove',
'aid': aid,
})
removed: bool = await ipc.receive()
return removed
@acm
async def open_rect(
self,
**kwargs,
) -> int:
try:
aid: int = await self.add_rect(
from_acm=True,
**kwargs,
)
yield aid
finally:
await self.remove(aid)
async def redraw(
self,
fqme: str,
timeframe: float,
) -> None:
await self._get_ipc(fqme).send({
'cmd': 'redraw',
'fqme': fqme,
# 'render': int(aid),
# 'viz_name': str(viz_name),
'timeframe': timeframe,
})
# TODO: do we even need this?
# async def modify(
# self,
# aid: int, # annotation id
# meth: str, # far end graphics object method to invoke
# params: dict[str, Any], # far end `meth(**kwargs)`
# ) -> bool:
# '''
# Modify an existing (remote) annotation's graphics
# parameters, thus changing its appearance / state in real
# time.
# '''
# raise NotImplementedError
@acm
async def open_annot_ctl(
uid: tuple[str, str] | None = None,
) -> AnnotCtl:
# TODO: load connection to a specific chart actor
# -[ ] pull from either service scan or config
# -[ ] return some kinda client/proxy thinger?
# -[ ] maybe we should finally just provide this as
# a `tractor.hilevel.CallableProxy` or wtv?
# -[ ] use this from the storage.cli stuff to mark up gaps!
maybe_portals: list[Portal] | None
fqmes: list[str]
async with find_service(
service_name='chart',
first_only=False,
) as maybe_portals:
ctx_mngrs: list[AsyncContextManager] = []
# TODO: print the current discoverable actor UID set
# here as well?
if not maybe_portals:
raise RuntimeError('No chart UI actors found in service domain?')
for portal in maybe_portals:
ctx_mngrs.append(
portal.open_context(remote_annotate)
)
ctx2fqmes: dict[str, set[str]] = {}
fqme2ipc: dict[str, MsgStream] = {}
stream_ctxs: list[AsyncContextManager] = []
async with (
trionics.gather_contexts(ctx_mngrs) as ctxs,
):
for (ctx, fqmes) in ctxs:
stream_ctxs.append(ctx.open_stream())
# fill lookup table of mkt addrs to IPC ctxs
for fqme in fqmes:
if other := fqme2ipc.get(fqme):
raise ValueError(
f'More than one chart displays {fqme}!?\n'
'Other UI actor info:\n'
f'channel: {other._ctx.chan}\n'
f'actor uid: {other._ctx.chan.uid}\n'
f'ctx id: {other._ctx.cid}\n'
)
ctx2fqmes.setdefault(
ctx.cid,
set(),
).add(fqme)
async with trionics.gather_contexts(stream_ctxs) as streams:
for stream in streams:
fqmes: set[str] = ctx2fqmes[stream._ctx.cid]
for fqme in fqmes:
fqme2ipc[fqme] = stream
# NOTE: on graceful teardown we always attempt to
# remove all annots that were created by the
# entering client.
# TODO: should we maybe instead/also do this on the
# server-actor side so that when a client
# disconnects we always delete all annotations by
# default instead of expecting the client to?
async with AsyncExitStack() as annots_stack:
client = AnnotCtl(
ctx2fqmes=ctx2fqmes,
fqme2ipc=fqme2ipc,
_annot_stack=annots_stack,
)
yield client
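
For reference, a minimal client-side sketch of the `AnnotCtl` API above; the fqme and coordinates are purely illustrative, and this assumes a `chart` UI actor is already up (with that market's feed loaded) and discoverable via a reachable `pikerd` registry:

import trio
from piker.ui._remote_ctl import open_annot_ctl

async def main():
    async with open_annot_ctl() as actl:
        # draw a rect over a (time x price) region on the 60s
        # (history) chart, auto-removing it on exit via the acm
        # variant of `.add_rect()`:
        async with actl.open_rect(
            fqme='xbtusdt.kraken',  # hypothetical market symbol
            timeframe=60,
            start_pos=(1_700_000_000, 26_000.0),
            end_pos=(1_700_003_600, 27_500.0),
        ) as aid:
            print(f'annotation id: {aid}')
            await trio.sleep(3)

trio.run(main)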

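And the matching wire msg which `AnnotCtl.add_rect()` sends and `serve_rc_annots()` dispatches on (values illustrative, mirroring the sketch above):

annot_request_msg = {
    'cmd': 'SelectRect',
    'fqme': 'xbtusdt.kraken',
    'timeframe': 60,
    'meth': 'set_view_pos',  # or 'set_scene_pos' for scene coords
    'kwargs': {
        'start_pos': (1_700_000_000, 26_000.0),
        'end_pos': (1_700_003_600, 27_500.0),
        'color': 'dad_blue',
        'update_label': False,
    },
}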
View File

@ -30,8 +30,8 @@ from typing import (
import msgspec import msgspec
import numpy as np import numpy as np
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5.QtGui import QPainterPath
from piker.ui.qt import QPainterPath
from ..data._formatters import ( from ..data._formatters import (
IncrementalFormatter, IncrementalFormatter,
) )

View File

@ -43,29 +43,32 @@ from typing import (
Iterator, Iterator,
) )
import time import time
from pprint import pformat # from pprint import pformat
from rapidfuzz import process as fuzzy from rapidfuzz import process as fuzzy
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
from PyQt5 import QtCore
from piker.ui.qt import ( from PyQt5 import QtWidgets
size_policy, from PyQt5.QtCore import (
align_flag,
Qt, Qt,
QtCore,
QtWidgets,
QModelIndex, QModelIndex,
QItemSelectionModel, QItemSelectionModel,
)
from PyQt5.QtGui import (
# QLayout, # QLayout,
QStandardItem, QStandardItem,
QStandardItemModel, QStandardItemModel,
)
from PyQt5.QtWidgets import (
QWidget, QWidget,
QTreeView, QTreeView,
# QListWidgetItem, # QListWidgetItem,
# QAbstractScrollArea, # QAbstractScrollArea,
# QStyledItemDelegate, # QStyledItemDelegate,
) )
from ..log import get_logger from ..log import get_logger
from ._style import ( from ._style import (
_font, _font,
@ -126,8 +129,8 @@ class CompleterView(QTreeView):
# ux settings # ux settings
self.setSizePolicy( self.setSizePolicy(
size_policy.Expanding, QtWidgets.QSizePolicy.Expanding,
size_policy.Expanding, QtWidgets.QSizePolicy.Expanding,
) )
self.setItemsExpandable(True) self.setItemsExpandable(True)
self.setExpandsOnDoubleClick(False) self.setExpandsOnDoubleClick(False)
@ -564,8 +567,8 @@ class SearchWidget(QtWidgets.QWidget):
# size it as we specify # size it as we specify
self.setSizePolicy( self.setSizePolicy(
size_policy.Fixed, QtWidgets.QSizePolicy.Fixed,
size_policy.Fixed, QtWidgets.QSizePolicy.Fixed,
) )
self.godwidget = godwidget self.godwidget = godwidget
@ -589,16 +592,14 @@ class SearchWidget(QtWidgets.QWidget):
}} }}
""" """
) )
label.setTextFormat( label.setTextFormat(3) # markdown
Qt.TextFormat.MarkdownText
)
label.setFont(_font.font) label.setFont(_font.font)
label.setMargin(4) label.setMargin(4)
label.setText("search:") label.setText("search:")
label.show() label.show()
label.setAlignment( label.setAlignment(
align_flag.AlignVCenter QtCore.Qt.AlignVCenter
| align_flag.AlignLeft | QtCore.Qt.AlignLeft
) )
self.bar_hbox.addWidget(label) self.bar_hbox.addWidget(label)
@ -616,17 +617,9 @@ class SearchWidget(QtWidgets.QWidget):
self.vbox.addLayout(self.bar_hbox) self.vbox.addLayout(self.bar_hbox)
self.vbox.setAlignment( self.vbox.setAlignment(self.bar, Qt.AlignTop | Qt.AlignRight)
self.bar,
align_flag.AlignTop
| align_flag.AlignRight,
)
self.vbox.addWidget(self.bar.view) self.vbox.addWidget(self.bar.view)
self.vbox.setAlignment( self.vbox.setAlignment(self.view, Qt.AlignTop | Qt.AlignLeft)
self.view,
align_flag.AlignTop
| align_flag.AlignLeft,
)
def focus(self) -> None: def focus(self) -> None:
self.show() self.show()
@ -1146,25 +1139,21 @@ async def search_simple_dict(
) -> dict[str, Any]: ) -> dict[str, Any]:
tokens: list[str] = [] tokens = []
for key in source: for key in source:
match key: if not isinstance(key, str):
case str(): tokens.extend(key)
tokens.append(key) else:
case [*_]: tokens.append(key)
tokens.extend(key)
# search routine can be specified as a function such # search routine can be specified as a function such
# as in the case of the current app's local symbol cache # as in the case of the current app's local symbol cache
matches = fuzzy.extract( matches = fuzzy.extractBests(
text, text,
tokens, tokens,
score_cutoff=90, score_cutoff=90,
) )
log.info(
'cache search results:\n'
f'{pformat(matches)}'
)
return [item[0] for item in matches] return [item[0] for item in matches]
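
A self-contained sketch of the new-side tokenize-then-fuzzy-match flow with made-up sample data (note the `[*_]` sequence pattern: a bare `[]` would only match empty sequences, silently dropping tuple keys); assumes `rapidfuzz >= 3`:

from rapidfuzz import process as fuzzy

source = {
    'xbtusdt.kraken': {},
    ('eth', 'usdt', 'kraken'): {},  # hypothetical tuple key
}
tokens: list[str] = []
for key in source:
    match key:
        case str():
            tokens.append(key)
        case [*_]:  # any non-str sequence
            tokens.extend(key)

# a lower cutoff than the cache-search's 90, just for demo data
matches = fuzzy.extract('xbt', tokens, score_cutoff=50)
print([m[0] for m in matches])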

View File

@ -22,14 +22,10 @@ from typing import Dict
import math import math
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtCore, QtGui
from PyQt5.QtCore import Qt, QCoreApplication
from qdarkstyle import DarkPalette from qdarkstyle import DarkPalette
from .qt import (
QtCore,
QtGui,
Qt,
QCoreApplication,
)
from ..log import get_logger from ..log import get_logger
from .. import config from .. import config

View File

@ -27,14 +27,16 @@ from typing import (
) )
import uuid import uuid
from piker.ui.qt import ( from PyQt5 import QtCore
Qt, from PyQt5.QtWidgets import (
QtCore,
QWidget, QWidget,
QMainWindow, QMainWindow,
QApplication, QApplication,
QLabel, QLabel,
QStatusBar, QStatusBar,
)
from PyQt5.QtGui import (
QScreen, QScreen,
QCloseEvent, QCloseEvent,
) )
@ -195,9 +197,7 @@ class MainWindow(QMainWindow):
""" """
# font-size : {font_size}px; # font-size : {font_size}px;
) )
label.setTextFormat( label.setTextFormat(3) # markdown
Qt.TextFormat.MarkdownText
)
label.setFont(_font_small.font) label.setFont(_font_small.font)
label.setMargin(2) label.setMargin(2)
label.setAlignment( label.setAlignment(

View File

@ -96,17 +96,9 @@ def monitor(config, rate, name, dhost, test, tl):
@click.option('--rate', '-r', default=1, help='Logging level') @click.option('--rate', '-r', default=1, help='Logging level')
@click.argument('symbol', required=True) @click.argument('symbol', required=True)
@click.pass_obj @click.pass_obj
def optschain( def optschain(config, symbol, date, rate, test):
config, """Start an option chain UI
symbol, """
date,
rate,
test,
):
'''
Start an option chain UI
'''
# global opts # global opts
loglevel = config['loglevel'] loglevel = config['loglevel']
brokername = config['broker'] brokername = config['broker']
@ -140,23 +132,21 @@ def optschain(
default=None, default=None,
help='Enable pyqtgraph profiling' help='Enable pyqtgraph profiling'
) )
# @click.option( @click.option(
# '--pdb', '--pdb',
# is_flag=True, is_flag=True,
# help='Enable tractor debug mode' help='Enable tractor debug mode'
# ) )
@click.argument('symbols', nargs=-1, required=True) @click.argument('symbols', nargs=-1, required=True)
# @click.pass_context
@click.pass_obj @click.pass_obj
def chart( def chart(
config, config,
# ctx: click.Context,
symbols: list[str], symbols: list[str],
profile, profile,
pdb: bool,
): ):
''' '''
Run chart UI app, spawning service daemons dynamically as Start a real-time charting UI
needed if not discovered via [network] config.
''' '''
# eg. ``--profile 3`` reports profiling for anything slower than 3 ms. # eg. ``--profile 3`` reports profiling for anything slower than 3 ms.
@ -183,55 +173,14 @@ def chart(
tractorloglevel = config['tractorloglevel'] tractorloglevel = config['tractorloglevel']
pikerloglevel = config['loglevel'] pikerloglevel = config['loglevel']
maddrs: list[tuple[str, int]] = config.get( _main(
'maddrs', syms=symbols,
[], brokermods=brokermods,
piker_loglevel=pikerloglevel,
tractor_kwargs={
'debug_mode': pdb,
'loglevel': tractorloglevel,
'name': 'chart',
'registry_addr': config.get('registry_addr'),
},
) )
# if maddrs:
# from tractor._multiaddr import parse_maddr
# for addr in maddrs:
# breakpoint()
# layers: dict = parse_maddr(addr)
regaddrs: list[tuple[str, int]] = config.get(
'registry_addrs',
[],
)
from ..config import load
conf, _ = load(
conf_name='conf',
)
network: dict = conf.get('network')
if network:
from ..cli import load_trans_eps
eps: dict = load_trans_eps(
network,
maddrs,
)
for layers in eps['pikerd']:
regaddrs.append((
layers['ipv4']['addr'],
layers['tcp']['port'],
))
from tractor.devx import maybe_open_crash_handler
pdb: bool = config['pdb']
with maybe_open_crash_handler(pdb=pdb):
_main(
syms=symbols,
brokermods=brokermods,
piker_loglevel=pikerloglevel,
tractor_kwargs={
'debug_mode': pdb,
'loglevel': tractorloglevel,
'name': 'chart',
'registry_addrs': list(set(regaddrs)),
'enable_modules': [
# remote data-view annotations Bo
'piker.ui._remote_ctl',
],
},
)

View File

@ -34,6 +34,7 @@ import uuid
from bidict import bidict from bidict import bidict
import tractor import tractor
import trio import trio
from PyQt5.QtCore import Qt
from piker import config from piker import config
from piker.accounting import ( from piker.accounting import (
@ -58,7 +59,6 @@ from piker.data import (
) )
from piker.types import Struct from piker.types import Struct
from piker.log import get_logger from piker.log import get_logger
from piker.ui.qt import Qt
from ._editors import LineEditor, ArrowEditor from ._editors import LineEditor, ArrowEditor
from ._lines import order_line, LevelLine from ._lines import order_line, LevelLine
from ._position import ( from ._position import (
@ -358,7 +358,7 @@ class OrderMode:
send_msg: bool = True, send_msg: bool = True,
order: Order | None = None, order: Order | None = None,
) -> Dialog|None: ) -> Dialog:
''' '''
Send execution order to EMS return a level line to Send execution order to EMS return a level line to
represent the order on a chart. represent the order on a chart.
@ -378,16 +378,6 @@ class OrderMode:
'oid': oid, 'oid': oid,
}) })
if order.price <= 0:
log.error(
'*!? Invalid `Order.price <= 0` ?!*\n'
# TODO: make this present multi-line in object form
# like `ib_insync.contracts.Contract.__repr__()`
f'{order}\n'
)
self.cancel_orders([order.oid])
return None
lines = self.lines_from_order( lines = self.lines_from_order(
order, order,
show_markers=True, show_markers=True,
@ -494,7 +484,7 @@ class OrderMode:
uuid: str, uuid: str,
order: Order | None = None, order: Order | None = None,
) -> Dialog | None: ) -> Dialog:
''' '''
Order submitted status event handler. Order submitted status event handler.
@ -515,11 +505,6 @@ class OrderMode:
# if an order msg is provided update the line # if an order msg is provided update the line
# **from** that msg. # **from** that msg.
if order: if order:
if order.price <= 0:
log.error(f'Order has 0 price, cancelling..\n{order}')
self.cancel_orders([order.oid])
return None
line.set_level(order.price) line.set_level(order.price)
self.on_level_change_update_next_order_info( self.on_level_change_update_next_order_info(
level=order.price, level=order.price,
@ -628,13 +613,13 @@ class OrderMode:
oids: set[str] = set() oids: set[str] = set()
for line in lines: for line in lines:
if dialog := getattr(line, 'dialog', None): dialog: Dialog = getattr(line, 'dialog', None)
oid: str = dialog.uuid oid: str = dialog.uuid
if ( if (
dialog dialog
and oid not in oids and oid not in oids
): ):
oids.add(oid) oids.add(oid)
return oids return oids
@ -678,7 +663,7 @@ class OrderMode:
self, self,
msg: Status, msg: Status,
) -> Dialog | None: ) -> Dialog:
# NOTE: the `.order` attr **must** be set with the # NOTE: the `.order` attr **must** be set with the
# equivalent order msg in order to be loaded. # equivalent order msg in order to be loaded.
order = msg.req order = msg.req
@ -709,15 +694,12 @@ class OrderMode:
fqsn=fqme, fqsn=fqme,
info={}, info={},
) )
maybe_dialog: Dialog | None = self.submit_order( dialog = self.submit_order(
send_msg=False, send_msg=False,
order=order, order=order,
) )
if maybe_dialog is None: assert self.dialogs[oid] == dialog
return None return dialog
assert self.dialogs[oid] == maybe_dialog
return maybe_dialog
@asynccontextmanager @asynccontextmanager
@ -948,8 +930,13 @@ async def open_order_mode(
msg, msg,
) )
# start async input handling for chart's view
async with ( async with (
# ``ChartView`` input async handler startup
chart.view.open_async_input_handler(),
hist_chart.view.open_async_input_handler(),
# pp pane kb inputs # pp pane kb inputs
open_form_input_handling( open_form_input_handling(
form, form,
@ -1018,13 +1005,8 @@ async def process_trade_msg(
) -> tuple[Dialog, Status]: ) -> tuple[Dialog, Status]:
# TODO: obvi once we're parsing to native struct instances we can fmsg = pformat(msg)
# drop the `pformat()` call Bo log.debug(f'Received order msg:\n{fmsg}')
fmtmsg: Struct | dict = msg
if not isinstance(msg, Struct):
fmtmsg: str = pformat(msg)
log.debug(f'Received order msg:\n{fmtmsg}')
name = msg['name'] name = msg['name']
if name in ( if name in (
@ -1040,7 +1022,7 @@ async def process_trade_msg(
): ):
log.info( log.info(
f'Loading position for `{fqme}`:\n' f'Loading position for `{fqme}`:\n'
f'{fmtmsg}' f'{fmsg}'
) )
tracker = mode.trackers[msg['account']] tracker = mode.trackers[msg['account']]
tracker.live_pp.update_from_msg(msg) tracker.live_pp.update_from_msg(msg)
@ -1082,7 +1064,7 @@ async def process_trade_msg(
elif order.action != 'cancel': elif order.action != 'cancel':
log.warning( log.warning(
f'received msg for untracked dialog:\n{fmtmsg}' f'received msg for untracked dialog:\n{fmsg}'
) )
assert msg.resp in ('open', 'dark_open'), f'Unknown msg: {msg}' assert msg.resp in ('open', 'dark_open'), f'Unknown msg: {msg}'
@ -1102,25 +1084,8 @@ async def process_trade_msg(
) )
): ):
msg.req = order msg.req = order
dialog: ( dialog = mode.load_unknown_dialog_from_msg(msg)
Dialog mode.on_submit(oid)
# NOTE: on an invalid order submission (eg.
# price <=0) the downstream APIs may return
# a null.
| None
) = mode.load_unknown_dialog_from_msg(msg)
# cancel any invalid pre-existing order!
if dialog is None:
log.warning(
'Order was ignored/invalid?\n'
f'{order}'
)
# if valid, display the order line the same as if
# it was submitted during this UI session.
else:
mode.on_submit(oid)
case Status(resp='error'): case Status(resp='error'):
@ -1149,7 +1114,7 @@ async def process_trade_msg(
req={'exec_mode': 'dark'}, req={'exec_mode': 'dark'},
): ):
# TODO: UX for a "pending" clear/live order # TODO: UX for a "pending" clear/live order
log.info(f'Dark order triggered for {fmtmsg}') log.info(f'Dark order triggered for {fmsg}')
case Status( case Status(
resp='triggered', resp='triggered',

View File

@ -1,104 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Qt UI framework version shimming.
Allow importing sub-pkgs from this module instead of worrying about
major version specifics, enum moves or component renames.
Code in `piker.ui.*` should always explicitly import directly from
this module like `from piker.ui.qt import ( ..`
'''
from enum import EnumType
from PyQt6 import (
QtCore,
QtGui,
QtWidgets,
)
from PyQt6.QtCore import (
Qt,
QCoreApplication,
QLineF,
QRectF,
# NOTE: for enums use the `.Type` subattr-space
QEvent,
QPointF,
QSize,
QModelIndex,
QItemSelectionModel,
pyqtBoundSignal,
pyqtRemoveInputHook,
)
align_flag: EnumType = Qt.AlignmentFlag
txt_flag: EnumType = Qt.TextFlag
keys: EnumType = QEvent.Type
scrollbar_policy: EnumType = Qt.ScrollBarPolicy
# ^-NOTE-^: handy snippet to discover enums:
# import enum
# [attr for attr_name in dir(QFrame)
# if (attr := getattr(QFrame, attr_name))
# and isinstance(attr, enum.EnumType)]
from PyQt6.QtGui import (
QPainter,
QPainterPath,
QIcon,
QPixmap,
QColor,
QTransform,
QStandardItem,
QStandardItemModel,
QWheelEvent,
QScreen,
QCloseEvent,
)
from PyQt6.QtWidgets import (
QMainWindow,
QApplication,
QLabel,
QStatusBar,
QLineEdit,
QHBoxLayout,
QVBoxLayout,
QFormLayout,
QProgressBar,
QSizePolicy,
QStyledItemDelegate,
QStyleOptionViewItem,
QComboBox,
QWidget,
QFrame,
QSplitter,
QTreeView,
QStyle,
QGraphicsItem,
QGraphicsPathItem,
# QGraphicsView,
QStyleOptionGraphicsItem,
QGraphicsScene,
QGraphicsSceneMouseEvent,
QGraphicsProxyWidget,
)
gs_keys: EnumType = QGraphicsSceneMouseEvent.Type
size_policy: EnumType = QtWidgets.QSizePolicy.Policy
px_cache_mode: EnumType = QGraphicsItem.CacheMode
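
A short sketch of the import style this shim is meant to enforce, so UI call-sites never touch `PyQt6.*` (or any per-version enum namespace) directly:

import sys
from piker.ui.qt import (
    QApplication,
    QLabel,
    align_flag,
    size_policy,
)

app = QApplication(sys.argv)
label = QLabel('piker')
# enum access goes through the shim's flat aliases instead of
# Qt6's `.AlignmentFlag`/`.Policy` subattr namespaces:
label.setAlignment(align_flag.AlignVCenter | align_flag.AlignLeft)
label.setSizePolicy(size_policy.Fixed, size_policy.Fixed)
label.show()
# app.exec()  # start the event loop when running interactively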

View File

@ -31,7 +31,7 @@ import pendulum
import pyqtgraph as pg import pyqtgraph as pg
from piker.types import Struct from piker.types import Struct
from ..tsp import slice_from_time from ..data._timeseries import slice_from_time
from ..log import get_logger from ..log import get_logger
from ..toolz import Profiler from ..toolz import Profiler

View File

@ -15,119 +15,128 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
[build-system] [build-system]
requires = ["hatchling"] requires = ["poetry-core"]
build-backend = "hatchling.build" build-backend = "poetry.core.masonry.api"
[project] # ------ - ------
[tool.poetry]
name = "piker" name = "piker"
version = "0.1.0a0dev0" version = "0.1.0.alpha0.dev0"
description = "trading gear for hackers" description = "trading gear for hackers"
authors = [{ name = "Tyler Goodlet", email = "goodboy_foss@protonmail.com" }] authors = ["Tyler Goodlet <jgbt@protonmail.com>"]
requires-python = ">=3.12, <3.13" license = "AGPLv3"
license = "AGPL-3.0-or-later"
readme = "README.rst" readme = "README.rst"
keywords = [
"async",
"trading",
"finance",
"quant",
"charting",
]
classifiers = [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Intended Audience :: Education",
]
dependencies = [
"async-generator >=1.10, <2.0.0",
"attrs >=23.1.0, <24.0.0",
"bidict >=0.22.1, <0.23.0",
"colorama >=0.4.6, <0.5.0",
"colorlog >=6.7.0, <7.0.0",
"ib-insync >=0.9.86, <0.10.0",
"numba >=0.59.0, <0.60.0",
"numpy >=1.25, <2.0",
"polars >=0.18.13, <0.19.0",
"pygments >=2.16.1, <3.0.0",
"rich >=13.5.2, <14.0.0",
"tomli >=2.0.1, <3.0.0",
"tomli-w >=1.0.0, <2.0.0",
"trio-util >=0.7.0, <0.8.0",
"trio-websocket >=0.10.3, <0.11.0",
"typer >=0.9.0, <1.0.0",
"rapidfuzz >=3.5.2, <4.0.0",
"pdbp >=1.5.0, <2.0.0",
"trio >=0.24, <0.25",
"pendulum >=3.0.0, <4.0.0",
"httpx >=0.27.0, <0.28.0",
"cryptofeed >=2.4.0, <3.0.0",
"pyarrow >=17.0.0, <18.0.0",
"websockets ==12.0",
"msgspec",
"tractor",
"asyncvnc",
"tomlkit",
]
[project.optional-dependencies] # TODO: add meta-data from setup.py
uis = [ # keywords=[
# https://docs.astral.sh/uv/concepts/projects/dependencies/#optional-dependencies # "async",
# TODO: make sure the levenshtein shit compiles on nix.. # "trading",
# rapidfuzz = {extras = ["speedup"], version = "^0.18.0"} # "finance",
"rapidfuzz >=3.2.0, <4.0.0", # "quant",
"qdarkstyle >=3.0.2, <4.0.0", # "charting",
"pyqt6 >=6.7.0, <7.0.0", # ],
"pyqtgraph", # classifiers=[
# 'Development Status :: 3 - Alpha',
# 'License :: OSI Approved :: ',
# 'Operating System :: POSIX :: Linux',
# "Programming Language :: Python :: Implementation :: CPython",
# "Programming Language :: Python :: 3 :: Only",
# "Programming Language :: Python :: 3.10",
# "Programming Language :: Python :: 3.11",
# 'Intended Audience :: Financial and Insurance Industry',
# 'Intended Audience :: Science/Research',
# 'Intended Audience :: Developers',
# 'Intended Audience :: Education',
# ],
# for consideration, # ------ - ------
# - 'visidata'
# TODO: add an `--only daemon` group for running non-ui / pikerd [tool.poetry.dependencies]
# service tree in distributed mode B) asks = "^3.0.0"
# https://docs.astral.sh/uv/concepts/projects/dependencies/#optional-dependencies async-generator = "^1.10"
] attrs = "^23.1.0"
bidict = "^0.22.1"
colorama = "^0.4.6"
colorlog = "^6.7.0"
cython = "^3.0.0"
greenback = "^1.1.1"
ib-insync = "^0.9.86"
msgspec = "^0.18.0"
numba = "^0.57.1"
numpy = "1.24"
pendulum = "^2.1.2"
polars = "^0.18.13"
pygments = "^2.16.1"
python = "^3.10"
rich = "^13.5.2"
# setuptools = "^68.0.0"
tomli = "^2.0.1"
tomli-w = "^1.0.0"
trio = "^0.22.2"
trio-util = "^0.7.0"
trio-websocket = "^0.10.3"
typer = "^0.9.0"
[dependency-groups]
# TODO: a toolset that makes debugging a `pikerd` service (tree) easy [tool.poetry.dependencies.asyncvnc]
# to hack on directly using more or less the local env: git = 'https://github.com/pikers/asyncvnc.git'
branch = 'main'
[tool.poetry.dependencies.tomlkit]
git = 'https://github.com/pikers/tomlkit.git'
branch = 'piker_pin'
develop = true
# path = "../tomlkit/"
[tool.poetry.dependencies.tractor]
git = 'https://github.com/goodboy/tractor.git'
branch = 'asyncio_debugger_support'
# branch = 'piker_pin'
develop = true
# path = '../tractor/'
# ------ - ------
[tool.poetry.group.uis]
optional = true
[tool.poetry.group.uis.dependencies]
# https://python-poetry.org/docs/managing-dependencies/#dependency-groups
# TODO: make sure the levenshtein shit compiles on nix..
# rapidfuzz = {extras = ["speedup"], version = "^0.18.0"}
rapidfuzz = "^3.2.0"
qdarkstyle = ">=3.0.2"
pyqt5 = "^5.15.9"
pyqtgraph = { git = 'https://github.com/pikers/pyqtgraph.git' }
pyqt6 = "^6.5.2"
# ------ - ------
[tool.poetry.group.dev]
optional = true
[tool.poetry.group.dev.dependencies]
# testing / CI
pytest = "^6.0.0"
elasticsearch = "^8.9.0"
# console enhancements and eventually remote debugging
# extras/helpers.
# TODO: add a toolset that makes debugging a `pikerd` service
# (tree) easy to hack on directly using more or less the local env:
# - xonsh + xxh # - xonsh + xxh
# - rsyscall + pdbp # - rsyscall + pdbp
# - actor runtime control console like BEAM/OTP # - actor runtime control console like BEAM/OTP
# xonsh = "^0.14.0" # XXX: explicit env install for shell use w nix
# console enhancements and eventually remote debugging extras/helpers. prompt-toolkit = "^3.0.39"
# use `uv --dev` to enable
dev = [
"pytest >=6.0.0, <7.0.0",
"elasticsearch >=8.9.0, <9.0.0",
"xonsh >=0.14.2, <0.15.0",
"prompt-toolkit ==3.0.40",
"cython >=3.0.0, <4.0.0",
"greenback >=1.1.1, <2.0.0",
"ruff>=0.9.6",
]
[project.scripts] # ------ - ------
piker = "piker.cli:cli"
pikerd = "piker.cli:pikerd"
ledger = "piker.accounting.cli:ledger"
[tool.hatch.build.targets.sdist] # TODO: add an `--only daemon` group for running non-ui / pikerd
include = ["piker"] # service tree in distributed mode B)
# https://python-poetry.org/docs/managing-dependencies/#installing-group-dependencies
# [tool.poetry.group.daemon.dependencies]
[tool.hatch.build.targets.wheel] [tool.poetry.scripts]
include = ["piker"] piker = 'piker.cli:cli'
pikerd = 'piker.cli:pikerd'
[tool.uv.sources] ledger = 'piker.accounting.cli:ledger'
pyqtgraph = { git = "https://github.com/pikers/pyqtgraph.git" }
asyncvnc = { git = "https://github.com/pikers/asyncvnc.git", branch = "main" }
tomlkit = { git = "https://github.com/pikers/tomlkit.git", branch ="piker_pin" }
msgspec = { git = "https://github.com/jcrist/msgspec.git" }
tractor = { path = "../tractor", editable = true }

View File

@ -1,93 +0,0 @@
# from default `ruff.toml` @
# https://docs.astral.sh/ruff/configuration/
# Exclude a variety of commonly ignored directories.
exclude = [
".bzr",
".direnv",
".eggs",
".git",
".git-rewrite",
".hg",
".ipynb_checkpoints",
".mypy_cache",
".nox",
".pants.d",
".pyenv",
".pytest_cache",
".pytype",
".ruff_cache",
".svn",
".tox",
".venv",
".vscode",
"__pypackages__",
"_build",
"buck-out",
"build",
"dist",
"node_modules",
"site-packages",
"venv",
]
# Same as Black.
line-length = 88
indent-width = 4
# Assume Python 3.12
target-version = "py312"
# ------ - ------
# TODO, stop warnings around `anext()` builtin use?
# tool.ruff.target-version = "py310"
[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or
# McCabe complexity (`C901`) by default.
select = ["E4", "E7", "E9", "F"]
ignore = []
ignore-init-module-imports = false
[lint.per-file-ignores]
"piker/ui/qt.py" = [
"E402",
'F401', # unused imports (without __all__ or blah as blah)
# "F841", # unused variable rules
]
# Allow fix for all enabled rules (when `--fix`) is provided.
fixable = ["ALL"]
unfixable = []
# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
[format]
# Use single quotes in `ruff format`.
quote-style = "single"
# Like Black, indent with spaces, rather than tabs.
indent-style = "space"
# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false
# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"
# Enable auto-formatting of code examples in docstrings. Markdown,
# reStructuredText code/literal blocks and doctests are all supported.
#
# This is currently disabled by default, but it is planned for this
# to be opt-out in the future.
docstring-code-format = false
# Set the line length limit used when formatting code snippets in
# docstrings.
#
# This only has an effect when the `docstring-code-format` setting is
# enabled.
docstring-code-line-length = "dynamic"

uv.lock
1500 changed lines (file diff suppressed because it is too large)