Compare commits


No commits in common. "310_plus" and "dark_vlm" have entirely different histories.

74 changed files with 4588 additions and 12618 deletions


@@ -1,6 +1,5 @@
 name: CI
-
 on:
   # Triggers the workflow on push or pull request events but only for the master branch
   push:
@@ -11,21 +10,41 @@ on:
   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:

 jobs:

-  testing:
-    name: 'install + test-suite'
+  basic_install:
+    name: 'pip install'
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
-        with:
-          ref: master
+        uses: actions/checkout@v2
       - name: Setup python
-        uses: actions/setup-python@v3
+        uses: actions/setup-python@v2
         with:
-          python-version: '3.10'
+          python-version: '3.9'
+      - name: Install dependencies
+        run: pip install -e . --upgrade-strategy eager -r requirements.txt
+      - name: Run piker cli
+        run: piker
+
+  testing:
+    name: 'test suite'
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+      - name: Setup python
+        uses: actions/setup-python@v2
+        with:
+          python-version: '3.9'
       - name: Install dependencies
         run: pip install -U . -r requirements-test.txt -r requirements.txt --upgrade-strategy eager

.gitignore

@@ -97,9 +97,6 @@ ENV/
 # mkdocs documentation
 /site

-# extra scripts dir
-/snippets
-
 # mypy
 .mypy_cache/
 .vscode/settings.json


@@ -74,57 +74,23 @@ for a development install::

 install for tinas
 *****************

-for windows peeps you can start by installing all the prerequisite software:
-
-- install git with all default settings - https://git-scm.com/download/win
-- install anaconda all default settings - https://www.anaconda.com/products/individual
-- install microsoft build tools (check the box for Desktop development for C++, you might be able to uncheck some optional downloads) - https://visualstudio.microsoft.com/visual-cpp-build-tools/
-- install visual studio code default settings - https://code.visualstudio.com/download
+for windows peeps you can start by getting `conda installed`_
+and the `C++ build toolz`_ on your system.

 then, `crack a conda shell`_ and run the following commands::

-    mkdir code # create code directory
-    cd code # change directory to code
-    git clone https://github.com/pikers/piker.git # downloads piker installation package from github
-    cd piker # change directory to piker
-    conda create -n pikonda # creates conda environment named pikonda
-    conda activate pikonda # activates pikonda
-    conda install -c conda-forge python-levenshtein # in case it is not already installed
-    conda install pip # may already be installed
-    pip # will show if pip is installed
-    pip install -e . -r requirements.txt # install piker in editable mode
+    conda create piker --python=3.9
+    conda activate piker
+    conda install pip
+    pip install --upgrade setuptools
+    cd dIreCToRieZ\oF\cODez\piker\
+    pip install -r requirements -e .

-test Piker to see if it is working::
-
-    piker -b binance chart btcusdt.binance # formatting for loading a chart
-    piker -b kraken -b binance chart xbtusdt.kraken
-    piker -b kraken -b binance -b ib chart qqq.nasdaq.ib
-    piker -b ib chart tsla.nasdaq.ib
-
-potential error::
-
-    FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user\\AppData\\Roaming\\piker\\brokers.toml'
-
-solution:
-
-- navigate to file directory above (may be different on your machine, location should be listed in the error code)
-- copy and paste file from 'C:\\Users\\user\\code\\data/brokers.toml' or create a blank file using notepad at the location above
-
-Visual Studio Code setup:
-
-- now that piker is installed we can set up vscode as the default terminal for running piker and editing the code
-- open Visual Studio Code
-- file --> Add Folder to Workspace --> C:\Users\user\code\piker (adds piker directory where all piker files are located)
-- file --> Save Workspace As --> save it wherever you want and call it whatever you want, this is going to be your default workspace for running and editing piker code
-- ctrl + shift + p --> start typing Python: Select Interpreter --> when the option comes up select it --> Select at the workspace level --> select the one that shows ('pikonda')
-- change the default terminal to cmd.exe instead of powershell (default)
-- now when you create a new terminal VScode should automatically activate your conda env so that piker can be run as the first command after a new terminal is created
-
-also, try out fancyzones as part of powertoyz for a decent tiling windows manager to manage all the cool new software you are going to be running.
+in order to look coolio in front of all ur tina friends (and maybe
+want to help us with testin, hackzing or configgin), install
+`vscode`_ and `setup a coolio tiled wm console`_ so you can start
+living the life of the tech literate..

 .. _conda installed: https://
 .. _C++ build toolz: https://

@@ -138,7 +104,7 @@ provider support
 ****************

 for live data feeds the in-progress set of supported brokers is:

-- IB_ via ``ib_insync``, also see our `container docs`_
+- IB_ via ``ib_insync``
 - binance_ and kraken_ for crypto over their public websocket API
 - questrade_ (ish) which comes with effectively free L1

@@ -150,7 +116,6 @@ coming soon...
 if you want your broker supported and they have an API let us know.

 .. _IB: https://interactivebrokers.github.io/tws-api/index.html
-.. _container docs: https://github.com/pikers/piker/tree/master/dockering/ib
 .. _questrade: https://www.questrade.com/api/documentation
 .. _kraken: https://www.kraken.com/features/api#public-market-data
 .. _binance: https://github.com/pikers/piker/pull/182


@@ -8,45 +8,20 @@ expires_at = 1616095326.355846

 [kraken]
 key_descr = "api_0"
-api_key = ""
-secret = ""
+public_key = ""
+private_key = ""

 [ib]
-hosts = [
-    "127.0.0.1",
-]
-
-# XXX: the order in which ports will be scanned
-# (by the `brokerd` daemon-actor)
-# is determined # by the line order here.
-# TODO: when we eventually spawn gateways in our
-# container, we can just dynamically allocate these
-# using IBC.
-ports = [
-    4002,  # gw
-    7497,  # tws
-]
+host = "127.0.0.1"

-# XXX: for a paper account the flex web query service
-# is not supported so you have to manually download
-# and XML report and put it in a location that can be
-# accessed by the ``brokerd.ib`` backend code for parsing.
-flex_token = '666666666666666666666666'
-flex_trades_query_id = '666666'  # live account
+ports.gw = 4002
+ports.tws = 7497
+ports.order = ["gw", "tws",]

-# when clients are being scanned this determines
-# which clients are preferred to be used for data
-# feeds based on the order of account names, if
-# detected as active on an API client.
-prefer_data_account = [
-    'paper',
-    'margin',
-    'ira',
-]
+accounts.margin = "X0000000"
+accounts.ira = "X0000000"
+accounts.paper = "XX0000000"

-[ib.accounts]
-# the order in which accounts will be selectable
-# in the order mode UI (if found via clients during
-# API-app scanning) when a new symbol is loaded.
-paper = "XX0000000"
-margin = "X0000000"
-ira = "X0000000"
+# the order in which accounts will be selected (if found through
+# `brokerd`) when a new symbol is loaded
+accounts_order = ['paper', 'margin', 'ira']
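
note that the dotted keys in the right-hand layout parse into nested
tables; a quick sanity check of that shape (a sketch using stdlib
``tomllib`` from python>=3.11, the ``tomli`` package reads identically)::

    import tomllib

    conf = tomllib.loads('''
    [ib]
    host = "127.0.0.1"
    ports.gw = 4002
    ports.tws = 7497
    ports.order = ["gw", "tws",]
    accounts.paper = "XX0000000"
    accounts_order = ["paper", "margin", "ira"]
    ''')

    ib = conf['ib']
    # dotted keys become nested dicts on parse
    assert ib['ports']['order'] == ['gw', 'tws']
    assert ib['accounts']['paper'] == 'XX0000000'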


@@ -1,30 +0,0 @@
running ``ib`` gateway in ``docker``
------------------------------------
We have a config based on the (now defunct)
image from "waytrade":
https://github.com/waytrade/ib-gateway-docker
To startup this image with our custom settings
simply run the command::
docker compose up
And you should have the following socket-available services:
- ``x11vnc1@127.0.0.1:3003``
- ``ib-gw@127.0.0.1:4002``
You can attach to the container via a VNC client
without password auth.
SECURITY STUFF!?!?!
-------------------
Though "``ib``" claims they host-filter connections outside
localhost (aka ``127.0.0.1``), it's probably better if you filter
the socket at the OS level using a stateless firewall rule::
ip rule add not unicast iif lo to 0.0.0.0/0 dport 4002
We will soon have this baked into our own custom image but for
now you'll have to do it urself dawgy.
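
for a quick liveness check of the two sockets listed above from
Python (a sketch; assumes the compose stack is running on the same
host)::

    # probe the two host-mode sockets the container should expose
    import socket

    for name, port in [('x11vnc1', 3003), ('ib-gw', 4002)]:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            up = s.connect_ex(('127.0.0.1', port)) == 0
            print(f'{name}@127.0.0.1:{port}:', 'up' if up else 'down')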


@@ -1,64 +0,0 @@
# rework from the original @
# https://github.com/waytrade/ib-gateway-docker/blob/master/docker-compose.yml
version: "3.5"
services:
ib-gateway:
# other image tags available:
# https://github.com/waytrade/ib-gateway-docker#supported-tags
image: waytrade/ib-gateway:981.3j
restart: always
network_mode: 'host'
volumes:
- type: bind
source: ./jts.ini
target: /root/Jts/jts.ini
# don't let IBC clobber this file for
# the main reason of not having a stupid
# timezone set..
read_only: true
# force our own IBC config
- type: bind
source: ./ibc.ini
target: /root/ibc/config.ini
# force our noop script - socat isn't needed in host mode.
- type: bind
source: ./fork_ports_delayed.sh
target: /root/scripts/fork_ports_delayed.sh
# force our noop script - socat isn't needed in host mode.
- type: bind
source: ./run_x11_vnc.sh
target: /root/scripts/run_x11_vnc.sh
read_only: true
# NOTE:to fill these out, define an `.env` file in the same dir as
# this compose file which looks something like:
# TWS_USERID='myuser'
# TWS_PASSWORD='guest'
# TRADING_MODE=paper (or live)
# VNC_SERVER_PASSWORD='diggity'
environment:
TWS_USERID: ${TWS_USERID}
TWS_PASSWORD: ${TWS_PASSWORD}
TRADING_MODE: ${TRADING_MODE:-paper}
VNC_SERVER_PASSWORD: ${VNC_SERVER_PASSWORD:-}
# ports:
# - target: 4002
# host_ip: 127.0.0.1
# published: 4002
# protocol: tcp
# original mappings for use in non-host-mode
# which we won't really need going forward since
# ideally we just pick the port to have ib-gw listen
# on **when** we spawn the container - i.e. everything
# will be driven by a ``brokers.toml`` def.
# - "127.0.0.1:4001:4001"
# - "127.0.0.1:4002:4002"
# - "127.0.0.1:5900:5900"


@@ -1,6 +0,0 @@
#!/bin/sh
# we now just set this to a noop script
# since we can just run the container in
# `network_mode: 'host'` and get literally
# the exact same behaviour XD


@@ -1,711 +0,0 @@
# Note that in the comments in this file, TWS refers to both the Trader
# Workstation and the IB Gateway, unless explicitly stated otherwise.
#
# When referred to below, the default value for a setting is the value
# assumed if either the setting is included but no value is specified, or
# the setting is not included at all.
#
# IBC may also be used to start the FIX CTCI Gateway. All settings
# relating to this have names prefixed with FIX.
#
# The IB API Gateway and the FIX CTCI Gateway share the same code. Which
# gateway actually runs is governed by an option on the initial gateway
# login screen. The FIX setting described under IBC Startup
# Settings below controls this.
# =============================================================================
# 1. IBC Startup Settings
# =============================================================================
# IBC may be used to start the IB Gateway for the FIX CTCI. This
# setting must be set to 'yes' if you want to run the FIX CTCI gateway. The
# default is 'no'.
FIX=no
# =============================================================================
# 2. Authentication Settings
# =============================================================================
# TWS and the IB API gateway require a single username and password.
# You may specify the username and password using the following settings:
#
# IbLoginId
# IbPassword
#
# Alternatively, you can specify the username and password in the command
# files used to start TWS or the Gateway, but this is not recommended for
# security reasons.
#
# If you don't specify them, you will be prompted for them in the usual
# login dialog when TWS starts (but whatever you have specified will be
# included in the dialog automatically: for example you may specify the
# username but not the password, and then you will be prompted for the
# password via the login dialog). Note that if you specify either
# the username or the password (or both) in the command file, then
# IbLoginId and IbPassword settings defined in this file are ignored.
#
#
# The FIX CTCI gateway requires one username and password for FIX order
# routing, and optionally a separate username and password for market
# data connections. You may specify the usernames and passwords using
# the following settings:
#
# FIXLoginId
# FIXPassword
# IbLoginId (optional - for market data connections)
# IbPassword (optional - for market data connections)
#
# Alternatively you can specify the FIX username and password in the
# command file used to start the FIX CTCI Gateway, but this is not
# recommended for security reasons.
#
# If you don't specify them, you will be prompted for them in the usual
# login dialog when FIX CTCI gateway starts (but whatever you have
# specified will be included in the dialog automatically: for example
# you may specify the usernames but not the passwords, and then you will
# be prompted for the passwords via the login dialog). Note that if you
# specify either the FIX username or the FIX password (or both) on the
# command line, then FIXLoginId and FIXPassword settings defined in this
# file are ignored; the same applies to the market data username and
# password.
# IB API Authentication Settings
# ------------------------------
# Your TWS username:
IbLoginId=
# Your TWS password:
IbPassword=
# FIX CTCI Authentication Settings
# --------------------------------
# Your FIX CTCI username:
FIXLoginId=
# Your FIX CTCI password:
FIXPassword=
# Second Factor Authentication Settings
# -------------------------------------
# If you have enabled more than one second factor authentication
# device, TWS presents a list from which you must select the device
# you want to use for this login. You can use this setting to
# instruct IBC to select a particular item in the list on your
# behalf. Note that you must spell this value exactly as it appears
# in the list. If no value is set, you must manually select the
# relevant list entry.
SecondFactorDevice=
# If you use the IBKR Mobile app for second factor authentication,
# and you fail to complete the process before the time limit imposed
# by IBKR, you can use this setting to tell IBC to exit: arrangements
# can then be made to automatically restart IBC in order to initiate
# the login sequence afresh. Otherwise, manual intervention at TWS's
# Second Factor Authentication dialog is needed to complete the
# login.
#
# Permitted values are 'yes' and 'no'. The default is 'no'.
#
# Note that the scripts provided with the IBC zips for Windows and
# Linux provide options to automatically restart in these
# circumstances, but only if this setting is also set to 'yes'.
ExitAfterSecondFactorAuthenticationTimeout=no
# This setting is only relevant if
# ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
#
# It controls how long (in seconds) IBC waits for login to complete
# after the user acknowledges the second factor authentication
# alert at the IBKR Mobile app. If login has not completed after
# this time, IBC terminates.
# The default value is 40.
SecondFactorAuthenticationExitInterval=
# Trading Mode
# ------------
#
# TWS 955 introduced a new Trading Mode combo box on its login
# dialog. This indicates whether the live account or the paper
# trading account corresponding to the supplied credentials is
# to be used. The allowed values are 'live' (the default) and
# 'paper'. For earlier versions of TWS this setting has no
# effect.
TradingMode=
# Paper-trading Account Warning
# -----------------------------
#
# Logging in to a paper-trading account results in TWS displaying
# a dialog asking the user to confirm that they are aware that this
# is not a brokerage account. Until this dialog has been accepted,
# TWS will not allow API connections to succeed. Setting this
# to 'yes' (the default) will cause IBC to automatically
# confirm acceptance. Setting it to 'no' will leave the dialog
# on display, and the user will have to deal with it manually.
AcceptNonBrokerageAccountWarning=yes
# Login Dialog Display Timeout
#-----------------------------
#
# In some circumstances, starting TWS may result in failure to display
# the login dialog. Restarting TWS may help to resolve this situation,
# and IBC does this automatically.
#
# This setting controls how long (in seconds) IBC waits for the login
# dialog to appear before restarting TWS.
#
# Note that in normal circumstances with a reasonably specified
# computer the time to displaying the login dialog is typically less
# than 20 seconds, and frequently much less. However many factors can
# influence this, and it is unwise to set this value too low.
#
# The default value is 60.
LoginDialogDisplayTimeout = 60
# =============================================================================
# 3. TWS Startup Settings
# =============================================================================
# Path to settings store
# ----------------------
#
# Path to the directory where TWS should store its settings. This is
# normally the folder in which TWS is installed. However you may set
# it to some other location if you wish (for example if you want to
# run multiple instances of TWS with different settings).
#
# It is recommended for clarity that you use an absolute path. The
# effect of using a relative path is undefined.
#
# Linux and macOS users should use the appropriate path syntax.
#
# Note that, for Windows users, you MUST use double separator
# characters to separate the elements of the folder path: for
# example, IbDir=C:\\IBLiveSettings is valid, but
# IbDir=C:\IBLiveSettings is NOT valid and will give unexpected
# results. Linux and macOS users need not use double separators,
# but they are acceptable.
#
# The default is the current working directory when IBC is
# started.
IbDir=/root/Jts
# Store settings on server
# ------------------------
#
# If you wish to store a copy of your TWS settings on IB's
# servers as well as locally on your computer, set this to
# 'yes': this enables you to run TWS on different computers
# with the same configuration, market data lines, etc. If set
# to 'no', running TWS on different computers will not share the
# same settings. If no value is specified, TWS will obtain its
# settings from the same place as the last time this user logged
# in (whether manually or using IBC).
StoreSettingsOnServer=
# Minimize TWS on startup
# -----------------------
#
# Set to 'yes' to minimize TWS when it starts:
MinimizeMainWindow=no
# Existing Session Detected Action
# --------------------------------
#
# When a user logs on to an IBKR account for trading purposes by any means, the
# IBKR account server checks to see whether the account is already logged in
# elsewhere. If so, a dialog is displayed to both the users that enables them
# to determine what happens next. The 'ExistingSessionDetectedAction' setting
# instructs TWS how to proceed when it displays this dialog:
#
# * If the new TWS session is set to 'secondary', the existing session continues
# and the new session terminates. Thus a secondary TWS session can never
# override any other session.
#
# * If the existing TWS session is set to 'primary', the existing session
# continues and the new session terminates (even if the new session is also
# set to primary). Thus a primary TWS session can never be overridden by
# any new session).
#
# * If both the existing and the new TWS sessions are set to 'primaryoverride',
# the existing session terminates and the new session proceeds.
#
# * If the existing TWS session is set to 'manual', the user must handle the
# dialog.
#
# The difference between 'primary' and 'primaryoverride' is that a
# 'primaryoverride' session can be overridden by a new 'primary' session,
# but a 'primary' session cannot be overridden by any other session.
#
# When set to 'primary', if another TWS session is started and manually told to
# end the 'primary' session, the 'primary' session is automatically reconnected.
#
# The default is 'manual'.
ExistingSessionDetectedAction=primary
# Override TWS API Port Number
# ----------------------------
#
# If OverrideTwsApiPort is set to an integer, IBC changes the
# 'Socket port' in TWS's API configuration to that number shortly
# after startup. Leaving the setting blank will make no change to
# the current setting. This setting is only intended for use in
# certain specialized situations where the port number needs to
# be set dynamically at run-time: most users will never need it,
# so don't use it unless you know you need it.
OverrideTwsApiPort=4002
# Read-only Login
# ---------------
#
# If ReadOnlyLogin is set to 'yes', and the user is enrolled in IB's
# account security programme, the user will not be asked to perform
# the second factor authentication action, and login to TWS will
# occur automatically in read-only mode: in this mode, placing or
# managing orders is not allowed. If set to 'no', and the user is
# enrolled in IB's account security programme, the user must perform
# the relevant second factor authentication action to complete the
# login.
# If the user is not enrolled in IB's account security programme,
# this setting is ignored. The default is 'no'.
ReadOnlyLogin=no
# Read-only API
# -------------
#
# If ReadOnlyApi is set to 'yes', API programs cannot submit, modify
# or cancel orders. If set to 'no', API programs can do these things.
# If not set, the existing TWS/Gateway configuration is unchanged.
# NB: this setting is really only supplied for the benefit of new TWS
# or Gateway instances that are being automatically installed and
# started without user intervention (eg Docker containers). Where
# a user is involved, they should use the Global Configuration to
# set the relevant checkbox (this only needs to be done once) and
# not provide a value for this setting.
ReadOnlyApi=no
# Market data size for US stocks - lots or shares
# -----------------------------------------------
#
# Since IB introduced the option of market data for US stocks showing
# bid, ask and last sizes in shares rather than lots, TWS and Gateway
# display a dialog immediately after login notifying the user about
# this and requiring user input before allowing market data to be
# accessed. The user can request that the dialog not be shown again.
#
# It is recommended that the user should handle this dialog manually
# rather than using these settings, which are provided for situations
# where the user interface is not easily accessible, or where user
# settings are not preserved between sessions (eg some Docker images).
#
# - If this setting is set to 'accept', the dialog will be handled
# automatically and the option to not show it again will be
# selected.
#
# Note that in this case, the only way to allow the dialog to be
# displayed again is to manually enable the 'Bid, Ask and Last
# Size Display Update' message in the 'Messages' section of the TWS
# configuration dialog. So you should only use 'Accept' if you are
# sure you really don't want the dialog to be displayed again, or
# you have easy access to the user interface.
#
# - If set to 'defer', the dialog will be handled automatically (so
# that market data will start), but the option to not show it again
# will not be selected, and it will be shown again after the next
# login.
#
# - If set to 'ignore', the user has to deal with the dialog manually.
#
# The default value is 'ignore'.
#
# Note if set to 'accept' or 'defer', TWS also automatically sets
# the API settings checkbox labelled 'Send market data in lots for
# US stocks for dual-mode API clients'. IBC cannot prevent this.
# However you can change this immediately by setting
# SendMarketDataInLotsForUSstocks (see below) to 'no' .
AcceptBidAskLastSizeDisplayUpdateNotification=accept
# This setting determines whether the API settings checkbox labelled
# 'Send market data in lots for US stocks for dual-mode API clients'
# is set or cleared. If set to 'yes', the checkbox is set. If set to
# 'no' the checkbox is cleared. If defaulted, the checkbox is
# unchanged.
SendMarketDataInLotsForUSstocks=
# =============================================================================
# 4. TWS Auto-Closedown
# =============================================================================
#
# IMPORTANT NOTE: Starting with TWS 974, this setting no longer
# works properly, because IB have changed the way TWS handles its
# autologoff mechanism.
#
# You should now configure the TWS autologoff time to something
# convenient for you, and restart IBC each day.
#
# Alternatively, discontinue use of IBC and use the auto-relogin
# mechanism within TWS 974 and later versions (note that the
# auto-relogin mechanism provided by IB is not available if you
# use IBC).
# Set to yes or no (lower case).
#
# yes means allow TWS to shut down automatically at its
# specified shutdown time, which is set via the TWS
# configuration menu.
#
# no means TWS never shuts down automatically.
#
# NB: IB recommends that you do not keep TWS running
# continuously. If you set this setting to 'no', you may
# experience incorrect TWS operation.
#
# NB: the default for this setting is 'no'. Since this will
# only work properly with TWS versions earlier than 974, you
# should explicitly set this to 'yes' for version 974 and later.
IbAutoClosedown=yes
# =============================================================================
# 5. TWS Tidy Closedown Time
# =============================================================================
#
# NB: starting with TWS 974 this is no longer a useful option
# because both TWS and Gateway now have the same auto-logoff
# mechanism, and IBC can no longer avoid this.
#
# Note that giving this setting a value does not change TWS's
# auto-logoff in any way: any setting will be additional to the
# TWS auto-logoff.
#
# To tell IBC to tidily close TWS at a specified time every
# day, set this value to <hh:mm>, for example:
# ClosedownAt=22:00
#
# To tell IBC to tidily close TWS at a specified day and time
# each week, set this value to <dayOfWeek hh:mm>, for example:
# ClosedownAt=Friday 22:00
#
# Note that the day of the week must be specified using your
# default locale. Also note that Java will only accept
# characters encoded to ISO 8859-1 (Latin-1). This means that
# if the day name in your default locale uses any non-Latin-1
# characters you need to encode them using Unicode escapes
# (see http://java.sun.com/docs/books/jls/third_edition/html/lexical.html#3.3
# for details). For example, to tidily close TWS at 12:00 on
# Saturday where the default locale is Simplified Chinese,
# use the following:
# #ClosedownAt=\u661F\u671F\u516D 12:00
ClosedownAt=
# =============================================================================
# 6. Other TWS Settings
# =============================================================================
# Accept Incoming Connection
# --------------------------
#
# If set to 'accept', IBC automatically accepts incoming
# API connection dialogs. If set to 'reject', IBC
# automatically rejects incoming API connection dialogs. If
# set to 'manual', the user must decide whether to accept or reject
# incoming API connection dialogs. The default is 'manual'.
# NB: it is recommended to set this to 'reject', and to explicitly
# configure which IP addresses can connect to the API in TWS's API
# configuration page, as this is much more secure (in this case, no
# incoming API connection dialogs will occur for those IP addresses).
AcceptIncomingConnectionAction=reject
# Allow Blind Trading
# -------------------
#
# If you attempt to place an order for a contract for which
# you have no market data subscription, TWS displays a dialog
# to warn you against such blind trading.
#
# yes means the dialog is dismissed as though the user had
# clicked the 'Ok' button: this means that you accept
# the risk and want the order to be submitted.
#
# no means the dialog remains on display and must be
# handled by the user.
AllowBlindTrading=yes
# Save Settings on a Schedule
# ---------------------------
#
# You can tell TWS to automatically save its settings on a schedule
# of your choosing. You can specify one or more specific times,
# like this:
#
# SaveTwsSettingsAt=HH:MM [ HH:MM]...
#
# for example:
# SaveTwsSettingsAt=08:00 12:30 17:30
#
# Or you can specify an interval at which settings are to be saved,
# optionally starting at a specific time and continuing until another
# time, like this:
#
#SaveTwsSettingsAt=Every n [{mins | hours}] [hh:mm] [hh:mm]
#
# where the first hh:mm is the start time and the second is the end
# time. If you don't specify the end time, settings are saved regularly
# from the start time till midnight. If you don't specify the start time.
# settings are saved regularly all day, beginning at 00:00. Note that
# settings will always be saved at the end time, even if that is not
# exactly one interval later than the previous time. If neither 'mins'
# nor 'hours' is specified, 'mins' is assumed. Examples:
#
# To save every 30 minutes all day starting at 00:00
#SaveTwsSettingsAt=Every 30
#SaveTwsSettingsAt=Every 30 mins
#
# To save every hour starting at 08:00 and ending at midnight
#SaveTwsSettingsAt=Every 1 hours 08:00
#SaveTwsSettingsAt=Every 1 hours 08:00 00:00
#
# To save every 90 minutes starting at 08:00 up to and including 17:43
#SaveTwsSettingsAt=Every 90 08:00 17:43
SaveTwsSettingsAt=
# =============================================================================
# 7. Settings Specific to Indian Versions of TWS
# =============================================================================
# Indian versions of TWS may display a password expiry
# notification dialog and a NSE Compliance dialog. These can be
# dismissed by setting the following to yes. By default the
# password expiry notice is not dismissed, but the NSE Compliance
# notice is dismissed.
# Warning: setting DismissPasswordExpiryWarning=yes will mean
# you will not be notified when your password is about to expire.
# You must then take other measures to ensure that your password
# is changed within the expiry period, otherwise IBC will
# not be able to login successfully.
DismissPasswordExpiryWarning=no
DismissNSEComplianceNotice=yes
# =============================================================================
# 8. IBC Command Server Settings
# =============================================================================
# Do NOT CHANGE THE FOLLOWING SETTINGS unless you
# intend to issue commands to IBC (for example
# using telnet). Note that these settings have nothing to
# do with running programs that use the TWS API.
# Command Server Port Number
# --------------------------
#
# The port number that IBC listens on for commands
# such as "STOP". DO NOT set this to the port number
# used for TWS API connections. There is no good reason
# to change this setting unless the port is used by
# some other application (typically another instance of
# IBC). The default value is 0, which tells IBC not to
# start the command server
#CommandServerPort=7462
# Permitted Command Sources
# -------------------------
#
# A comma separated list of IP addresses, or host names,
# which are allowed addresses for sending commands to
# IBC. Commands can always be sent from the
# same host as IBC is running on.
ControlFrom=127.0.0.1
# Address for Receiving Commands
# ------------------------------
#
# Specifies the IP address on which the Command Server
# is to listen. For a multi-homed host, this can be used
# to specify that connection requests are only to be
# accepted on the specified address. The default is to
# accept connection requests on all local addresses.
BindAddress=127.0.0.1
# Command Prompt
# --------------
#
# The specified string is output by the server when
# the connection is first opened and after the completion
# of each command. This can be useful if sending commands
# using an interactive program such as telnet. The default
# is that no prompt is output.
# For example:
#
# CommandPrompt=>
CommandPrompt=
# Suppress Command Server Info Messages
# -------------------------------------
#
# Some commands can return intermediate information about
# their progress. This setting controls whether such
# information is sent. The default is that such information
# is not sent.
SuppressInfoMessages=no
# =============================================================================
# 9. Diagnostic Settings
# =============================================================================
#
# IBC can log information about the structure of windows
# displayed by TWS. This information is useful when adding
# new features to IBC or when behaviour is not as expected.
#
# The logged information shows the hierarchical organisation
# of all the components of the window, and includes the
# current values of text boxes and labels.
#
# Note that this structure logging has a small performance
# impact, and depending on the settings can cause the logfile
# size to be significantly increased. It is therefore
# recommended that the LogStructureWhen setting be set to
# 'never' (the default) unless there is a specific reason
# that this information is needed.
# Scope of Structure Logging
# --------------------------
#
# The LogStructureScope setting indicates which windows are
# eligible for structure logging:
#
# - if set to 'known', only windows that IBC recognizes
# are eligible - these are windows that IBC has some
# interest in monitoring, usually to take some action
# on the user's behalf;
#
# - if set to 'unknown', only windows that IBC does not
# recognize are eligible. Most windows displayed by
# TWS fall into this category;
#
# - if set to 'untitled', only windows that IBC does not
# recognize and that have no title are eligible. These
# are usually message boxes or similar small windows,
#
# - if set to 'all', then every window displayed by TWS
# is eligible.
#
# The default value is 'known'.
LogStructureScope=all
# When to Log Window Structure
# ----------------------------
#
# The LogStructureWhen setting specifies the circumstances
# when eligible TWS windows have their structure logged:
#
# - if set to 'open' or 'yes' or 'true', IBC logs the
# structure of an eligible window the first time it
# is encountered;
#
# - if set to 'activate', the structure is logged every
# time an eligible window is made active;
#
# - if set to 'never' or 'no' or 'false', structure
# information is never logged.
#
# The default value is 'never'.
LogStructureWhen=never
# DEPRECATED SETTING
# ------------------
#
# LogComponents - THIS SETTING WILL BE REMOVED IN A FUTURE
# RELEASE
#
# If LogComponents is set to any value, this is equivalent
# to setting LogStructureWhen to that same value and
# LogStructureScope to 'all': the actual values of those
# settings are ignored. The default is that the values
# of LogStructureScope and LogStructureWhen are honoured.
#LogComponents=


@@ -1,33 +0,0 @@
[IBGateway]
ApiOnly=true
LocalServerPort=4002
# NOTE: must be set if using IBC's "reject" mode
TrustedIPs=127.0.0.1
; RemoteHostOrderRouting=ndc1.ibllc.com
; WriteDebug=true
; RemotePortOrderRouting=4001
; useRemoteSettings=false
; tradingMode=p
; Steps=8
; colorPalletName=dark
# window geo, this may be useful for sending `xdotool` commands?
; MainWindow.Width=1986
; screenHeight=3960
[Logon]
Locale=en
# most markets are oriented around this zone
# so might as well hard code it.
TimeZone=America/New_York
UseSSL=true
displayedproxymsg=1
os_titlebar=true
s3store=true
useRemoteSettings=false
[Communication]
ctciAutoEncrypt=true
Region=usr
; Peer=cdc1.ibllc.com:4001


@@ -1,16 +0,0 @@
#!/bin/sh
# start VNC server
x11vnc \
-ncache_cr \
-listen localhost \
-display :1 \
-forever \
-shared \
-logappend /var/log/x11vnc.log \
-bg \
-noipv6 \
-autoport 3003 \
# can't use this because of ``asyncvnc`` issue:
# https://github.com/barneygale/asyncvnc/issues/1
# -passwd 'ibcansmbz'
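
until that ``asyncvnc`` issue is sorted the server stays password-less;
a rough client-side sketch against this display (assuming the
``asyncvnc`` package and the autoport chosen above)::

    import asyncio
    import asyncvnc  # https://github.com/barneygale/asyncvnc

    async def peek():
        # no password args: see the commented-out `-passwd` above
        async with asyncvnc.connect('127.0.0.1', 3003) as client:
            pixels = await client.screenshot()  # RGBA numpy array
            print(pixels.shape)

    asyncio.run(peek())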


@@ -18,3 +18,10 @@
 piker: trading gear for hackers.

 """
+import msgpack  # noqa
+
+# TODO: remove this now right?
+import msgpack_numpy
+
+# patch msgpack for numpy arrays
+msgpack_numpy.patch()
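
the patch shim being (re)added here is what lets ``msgpack`` round-trip
numpy arrays; a minimal sketch of what it enables (assuming ``msgpack``
and ``msgpack-numpy`` are installed)::

    import msgpack
    import msgpack_numpy
    import numpy as np

    # monkey-patch msgpack's (un)packers to handle ndarrays
    msgpack_numpy.patch()

    arr = np.arange(4, dtype='float64')
    packed = msgpack.packb(arr)  # plain bytes on the wire
    assert (msgpack.unpackb(packed) == arr).all()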


@@ -19,7 +19,7 @@ Structured, daemon tree service management.

 """
 from typing import Optional, Union, Callable, Any
-from contextlib import asynccontextmanager as acm
+from contextlib import asynccontextmanager
 from collections import defaultdict

 from pydantic import BaseModel

@@ -34,11 +34,9 @@ from .brokers import get_brokermod
 log = get_logger(__name__)

 _root_dname = 'pikerd'
-_registry_addr = ('127.0.0.1', 6116)
 _tractor_kwargs: dict[str, Any] = {
     # use a different registry addr then tractor's default
-    'arbiter_addr': _registry_addr
+    'arbiter_addr': ('127.0.0.1', 6116),
 }
 _root_modules = [
     __name__,

@@ -80,6 +78,7 @@ class Services(BaseModel):
     ) -> Any:

         with trio.CancelScope() as cs:
+
             async with portal.open_context(
                 target,
                 **kwargs,

@@ -88,21 +87,19 @@ class Services(BaseModel):
                 # unblock once the remote context has started
                 task_status.started((cs, first))

+                # wait on any context's return value
+                ctx_res = await ctx.result()
                 log.info(
-                    f'`pikerd` service {name} started with value {first}'
+                    f'`pikerd` service {name} started with value {ctx_res}'
                 )
-                try:
-                    # wait on any context's return value
-                    ctx_res = await ctx.result()
-                except tractor.ContextCancelled:
-                    return await self.cancel_service(name)
-                else:
-                    # wait on any error from the sub-actor
-                    # NOTE: this will block indefinitely until
-                    # cancelled either by error from the target
-                    # context function or by being cancelled here by
-                    # the surrounding cancel scope
-                    return (await portal.result(), ctx_res)
+
+                # wait on any error from the sub-actor
+                # NOTE: this will block indefinitely until cancelled
+                # either by error from the target context function or
+                # by being cancelled here by the surroundingn cancel
+                # scope
+                return await (portal.result(), ctx_res)

         cs, first = await self.service_n.start(open_context_in_task)

@@ -112,17 +109,14 @@ class Services(BaseModel):

         return cs, first

-    # TODO: per service cancellation by scope, we aren't using this
-    # anywhere right?
     async def cancel_service(
         self,
         name: str,
     ) -> Any:
         log.info(f'Cancelling `pikerd` service {name}')
         cs, portal = self.service_tasks[name]
-        # XXX: not entirely sure why this is required,
-        # and should probably be better fine tuned in
-        # ``tractor``?
         cs.cancel()
         return await portal.cancel_actor()

@@ -130,7 +124,7 @@ class Services(BaseModel):
 _services: Optional[Services] = None


-@acm
+@asynccontextmanager
 async def open_pikerd(
     start_method: str = 'trio',
     loglevel: Optional[str] = None,

@@ -156,7 +150,7 @@ async def open_pikerd(
         tractor.open_root_actor(

             # passed through to ``open_root_actor``
-            arbiter_addr=_registry_addr,
+            arbiter_addr=_tractor_kwargs['arbiter_addr'],
             name=_root_dname,
             loglevel=loglevel,
             debug_mode=debug_mode,

@@ -185,48 +179,7 @@ async def open_pikerd(
         yield _services


-@acm
-async def open_piker_runtime(
-    name: str,
-    enable_modules: list[str] = [],
-    start_method: str = 'trio',
-    loglevel: Optional[str] = None,
-
-    # XXX: you should pretty much never want debug mode
-    # for data daemons when running in production.
-    debug_mode: bool = False,
-
-) -> Optional[tractor._portal.Portal]:
-    '''
-    Start a piker actor who's runtime will automatically
-    sync with existing piker actors in local network
-    based on configuration.
-
-    '''
-    global _services
-    assert _services is None
-
-    # XXX: this may open a root actor as well
-    async with (
-        tractor.open_root_actor(
-
-            # passed through to ``open_root_actor``
-            arbiter_addr=_registry_addr,
-            name=name,
-            loglevel=loglevel,
-            debug_mode=debug_mode,
-            start_method=start_method,
-
-            # TODO: eventually we should be able to avoid
-            # having the root have more then permissions to
-            # spawn other specialized daemons I think?
-            enable_modules=_root_modules,
-        ) as _,
-    ):
-        yield tractor.current_actor()
-
-
-@acm
+@asynccontextmanager
 async def maybe_open_runtime(
     loglevel: Optional[str] = None,
     **kwargs,

@@ -249,7 +202,7 @@ async def maybe_open_runtime(
         yield


-@acm
+@asynccontextmanager
 async def maybe_open_pikerd(
     loglevel: Optional[str] = None,
     **kwargs,

@@ -300,36 +253,7 @@ class Brokerd:
     locks = defaultdict(trio.Lock)


-@acm
-async def find_service(
-    service_name: str,
-) -> Optional[tractor.Portal]:
-
-    log.info(f'Scanning for service `{service_name}`')
-    # attach to existing daemon by name if possible
-    async with tractor.find_actor(
-        service_name,
-        arbiter_sockaddr=_registry_addr,
-    ) as maybe_portal:
-        yield maybe_portal
-
-
-async def check_for_service(
-    service_name: str,
-) -> bool:
-    '''
-    Service daemon "liveness" predicate.
-
-    '''
-    async with tractor.query_actor(
-        service_name,
-        arbiter_sockaddr=_registry_addr,
-    ) as sockaddr:
-        return sockaddr
-
-
-@acm
+@asynccontextmanager
 async def maybe_spawn_daemon(
     service_name: str,

@@ -339,7 +263,7 @@ async def maybe_spawn_daemon(
     **kwargs,

 ) -> tractor.Portal:
-    '''
+    """
     If no ``service_name`` daemon-actor can be found,
     spawn one in a local subactor and return a portal to it.

@@ -350,7 +274,7 @@ async def maybe_spawn_daemon(
     This can be seen as a service starting api for remote-actor
     clients.

-    '''
+    """
     if loglevel:
         get_console_log(loglevel)

@@ -359,14 +283,13 @@ async def maybe_spawn_daemon(
     lock = Brokerd.locks[service_name]
     await lock.acquire()

-    async with find_service(service_name) as portal:
+    # attach to existing daemon by name if possible
+    async with tractor.find_actor(service_name) as portal:
         if portal is not None:
             lock.release()
             yield portal
             return

-    log.warning(f"Couldn't find any existing {service_name}")
-
     # ask root ``pikerd`` daemon to spawn the daemon we need if
     # pikerd is not live we now become the root of the
     # process tree

@@ -402,7 +325,6 @@ async def maybe_spawn_daemon(
         async with tractor.wait_for_actor(service_name) as portal:
             lock.release()
             yield portal
-            await portal.cancel_actor()


 async def spawn_brokerd(

@@ -426,19 +348,9 @@ async def spawn_brokerd(

     # ask `pikerd` to spawn a new sub-actor and manage it under its
     # actor nursery
-    modpath = brokermod.__name__
-    broker_enable = [modpath]
-    for submodname in getattr(
-        brokermod,
-        '__enable_modules__',
-        [],
-    ):
-        subpath = f'{modpath}.{submodname}'
-        broker_enable.append(subpath)
-
     portal = await _services.actor_n.start_actor(
         dname,
-        enable_modules=_data_mods + broker_enable,
+        enable_modules=_data_mods + [brokermod.__name__],
         loglevel=loglevel,
         debug_mode=_services.debug_mode,
         **tractor_kwargs

@@ -456,7 +368,7 @@ async def spawn_brokerd(
     return True


-@acm
+@asynccontextmanager
 async def maybe_spawn_brokerd(
     brokername: str,

@@ -464,9 +376,7 @@ async def maybe_spawn_brokerd(
     **kwargs,

 ) -> tractor.Portal:
-    '''
-    Helper to spawn a brokerd service *from* a client
-    who wishes to use the sub-actor-daemon.
+    '''Helper to spawn a brokerd service.

     '''
     async with maybe_spawn_daemon(

@@ -518,7 +428,7 @@ async def spawn_emsd(
     return True


-@acm
+@asynccontextmanager
 async def maybe_open_emsd(
     brokername: str,

@@ -537,25 +447,3 @@ async def maybe_open_emsd(
     ) as portal:
         yield portal

-
-# TODO: ideally we can start the tsdb "on demand" but it's
-# probably going to require "rootless" docker, at least if we don't
-# want to expect the user to start ``pikerd`` with root perms all the
-# time.
-# async def maybe_open_marketstored(
-#     loglevel: Optional[str] = None,
-#     **kwargs,
-#
-# ) -> tractor._portal.Portal:  # noqa
-#
-#     async with maybe_spawn_daemon(
-#         'marketstored',
-#         service_task_target=spawn_emsd,
-#         spawn_args={'loglevel': loglevel},
-#         loglevel=loglevel,
-#         **kwargs,
-#
-#     ) as portal:
-#         yield portal
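
the client-side entrypoint that survives on the right-hand side is
``maybe_spawn_brokerd()``; a hedged usage sketch (module path and
behaviour assume a piker checkout matching that side of the diff)::

    import trio
    from piker._daemon import maybe_spawn_brokerd

    async def main():
        # attach to (or spawn) a `brokerd.binance` daemon-actor
        async with maybe_spawn_brokerd('binance', loglevel='info') as portal:
            print(f'brokerd up: {portal.channel.uid}')

    trio.run(main)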


@@ -21,10 +21,7 @@ Profiling wrappers for internal libs.
 import time
 from functools import wraps

-# NOTE: you can pass a flag to enable this:
-# ``piker chart <args> --profile``.
-_pg_profile: bool = False
-ms_slower_then: float = 0
+_pg_profile: bool = True


 def pg_profile_enabled() -> bool:


@@ -33,49 +33,13 @@ class SymbolNotFound(BrokerError):

 class NoData(BrokerError):
-    '''
-    Symbol data not permitted or no data
-    for time range found.
-
-    '''
-    def __init__(
-        self,
-        *args,
-        frame_size: int = 1000,
-
-    ) -> None:
-        super().__init__(*args)
-
-        # when raised, machinery can check if the backend
-        # set a "frame size" for doing datetime calcs.
-        self.frame_size: int = 1000
-
-
-class DataUnavailable(BrokerError):
-    '''
-    Signal storage requests to terminate.
-
-    '''
-    # TODO: add in a reason that can be displayed in the
-    # UI (for eg. `kraken` is bs and you should complain
-    # to them that you can't pull more OHLC data..)
-
-
-class DataThrottle(BrokerError):
-    '''
-    Broker throttled request rate for data.
-
-    '''
-    # TODO: add in throttle metrics/feedback
+    "Symbol data not permitted"


 def resproc(
     resp: asks.response_objects.Response,
     log: logging.Logger,
-    return_json: bool = True,
-    log_resp: bool = False,
+    return_json: bool = True

 ) -> asks.response_objects.Response:
     """Process response and return its json content.

@@ -84,12 +48,11 @@ def resproc(
     if not resp.status_code == 200:
         raise BrokerError(resp.body)
     try:
-        msg = resp.json()
+        json = resp.json()
     except json.decoder.JSONDecodeError:
         log.exception(f"Failed to process {resp}:\n{resp.text}")
         raise BrokerError(resp.text)
-
-    if log_resp:
-        log.debug(f"Received json contents:\n{colorize_json(msg)}")
+    else:
+        log.debug(f"Received json contents:\n{colorize_json(json)}")

-    return msg if return_json else resp
+    return json if return_json else resp


@@ -18,17 +18,13 @@
 Binance backend

 """
-from contextlib import asynccontextmanager as acm
-from datetime import datetime
-from typing import (
-    Any, Union, Optional,
-    AsyncGenerator, Callable,
-)
+from contextlib import asynccontextmanager
+from typing import List, Dict, Any, Tuple, Union, Optional, AsyncGenerator
 import time

 import trio
 from trio_typing import TaskStatus
-import pendulum
+import arrow
 import asks
 from fuzzywuzzy import process as fuzzy
 import numpy as np

@@ -92,7 +88,7 @@ class Pair(BaseModel):
     baseCommissionPrecision: int
     quoteCommissionPrecision: int

-    orderTypes: list[str]
+    orderTypes: List[str]
     icebergAllowed: bool
     ocoAllowed: bool

@@ -100,8 +96,8 @@ class Pair(BaseModel):
     isSpotTradingAllowed: bool
     isMarginTradingAllowed: bool

-    filters: list[dict[str, Union[str, int, float]]]
-    permissions: list[str]
+    filters: List[Dict[str, Union[str, int, float]]]
+    permissions: List[str]


 @dataclass
@@ -133,7 +129,7 @@ class OHLC:
     bar_wap: float = 0.0


-# convert datetime obj timestamp to unixtime in milliseconds
+# convert arrow timestamp to unixtime in miliseconds
 def binance_timestamp(when):
     return int((when.timestamp() * 1000) + (when.microsecond / 1000))
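
the helper's math is the same on both sides, only the input type
changes; e.g. with the right-hand side's ``arrow`` (a sketch assuming
``arrow>=1.0`` where ``.timestamp()`` is a method)::

    import arrow

    when = arrow.utcnow().floor('minute')
    # ms since epoch, as binance's kline API expects
    ms = int((when.timestamp() * 1000) + (when.microsecond / 1000))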
@@ -149,7 +145,7 @@ class Client:
         self,
         method: str,
         params: dict,
-    ) -> dict[str, Any]:
+    ) -> Dict[str, Any]:
         resp = await self._sesh.get(
             path=f'/api/v3/{method}',
             params=params,

@@ -204,7 +200,7 @@ class Client:
         self,
         pattern: str,
         limit: int = None,
-    ) -> dict[str, Any]:
+    ) -> Dict[str, Any]:
         if self._pairs is not None:
             data = self._pairs
         else:

@@ -222,22 +218,20 @@ class Client:
     async def bars(
         self,
         symbol: str,
-        start_dt: Optional[datetime] = None,
-        end_dt: Optional[datetime] = None,
+        start_time: int = None,
+        end_time: int = None,
         limit: int = 1000,  # <- max allowed per query
         as_np: bool = True,
     ) -> dict:

-        if end_dt is None:
-            end_dt = pendulum.now('UTC')
-
-        if start_dt is None:
-            start_dt = end_dt.start_of(
-                'minute').subtract(minutes=limit)
-
-        start_time = binance_timestamp(start_dt)
-        end_time = binance_timestamp(end_dt)
+        if start_time is None:
+            start_time = binance_timestamp(
+                arrow.utcnow().floor('minute').shift(minutes=-limit)
+            )
+
+        if end_time is None:
+            end_time = binance_timestamp(arrow.utcnow())

         # https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
         bars = await self._api(

@@ -279,7 +273,7 @@ class Client:
         return array


-@acm
+@asynccontextmanager
 async def get_client() -> Client:
     client = Client()
     await client.cache_symbols()
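
a smoke-test of that client factory (a sketch against the right-hand
side of this diff; run under ``trio``)::

    import trio
    from piker.brokers.binance import get_client

    async def main():
        async with get_client() as client:
            bars = await client.bars('BTCUSDT', limit=10)
            # structured numpy array with OHLC fields
            print(bars[['time', 'open', 'close']][:3])

    trio.run(main)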
@@ -359,7 +353,7 @@ async def stream_messages(ws: NoBsWs) -> AsyncGenerator[NoBsWs, dict]:
     }


-def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
+def make_sub(pairs: List[str], sub_name: str, uid: int) -> Dict[str, str]:
     """Create a request subscription packet dict.

     https://binance-docs.github.io/apidocs/spot/en/#live-subscribing-unsubscribing-to-streams

@@ -374,37 +368,6 @@ def make_sub(pairs: List[str], sub_name: str, uid: int) -> Dict[str, str]:
     }


-@acm
-async def open_history_client(
-    symbol: str,
-
-) -> tuple[Callable, int]:
-
-    # TODO implement history getter for the new storage layer.
-    async with open_cached_client('binance') as client:
-
-        async def get_ohlc(
-            end_dt: Optional[datetime] = None,
-            start_dt: Optional[datetime] = None,
-
-        ) -> tuple[
-            np.ndarray,
-            datetime,  # start
-            datetime,  # end
-        ]:
-
-            array = await client.bars(
-                symbol,
-                start_dt=start_dt,
-                end_dt=end_dt,
-            )
-            start_dt = pendulum.from_timestamp(array[0]['time'])
-            end_dt = pendulum.from_timestamp(array[-1]['time'])
-            return array, start_dt, end_dt
-
-        yield get_ohlc, {'erlangs': 3, 'rate': 3}
-
-
 async def backfill_bars(
     sym: str,
     shm: ShmArray,  # type: ignore # noqa

@@ -422,12 +385,13 @@ async def backfill_bars(

 async def stream_quotes(
     send_chan: trio.abc.SendChannel,
-    symbols: list[str],
+    symbols: List[str],
+    shm: ShmArray,
     feed_is_live: trio.Event,
     loglevel: str = None,

     # startup sync
-    task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
+    task_status: TaskStatus[Tuple[Dict, Dict]] = trio.TASK_STATUS_IGNORED,

 ) -> None:
     # XXX: required to propagate ``tractor`` loglevel to piker logging

@@ -452,8 +416,8 @@ async def stream_quotes(
         # XXX: after manually inspecting the response format we
         # just directly pick out the info we need
-        si['price_tick_size'] = float(syminfo.filters[0]['tickSize'])
-        si['lot_tick_size'] = float(syminfo.filters[2]['stepSize'])
+        si['price_tick_size'] = syminfo.filters[0]['tickSize']
+        si['lot_tick_size'] = syminfo.filters[2]['stepSize']
         si['asset_type'] = 'crypto'

     symbol = symbols[0]

@@ -464,11 +428,10 @@ async def stream_quotes(
         symbol: {
             'symbol_info': sym_infos[sym],
             'shm_write_opts': {'sum_tick_vml': False},
-            'fqsn': sym,
         },
     }


-@acm
+@asynccontextmanager
 async def subscribe(ws: wsproto.WSConnection):
     # setup subs

@@ -518,7 +481,8 @@ async def stream_quotes(
             # TODO: use ``anext()`` when it lands in 3.10!
             typ, quote = await msg_gen.__anext__()

-            task_status.started((init_msgs, quote))
+            first_quote = {quote['symbol'].lower(): quote}
+            task_status.started((init_msgs, first_quote))

             # signal to caller feed is ready for consumption
             feed_is_live.set()


@@ -23,6 +23,7 @@ from operator import attrgetter
 from operator import itemgetter

 import click
+import pandas as pd
 import trio
 import tractor

@@ -46,10 +47,8 @@ _watchlists_data_path = os.path.join(_config_dir, 'watchlists.json')
 @click.argument('kwargs', nargs=-1)
 @click.pass_obj
 def api(config, meth, kwargs, keys):
-    '''
-    Make a broker-client API method call
-
-    '''
+    """Make a broker-client API method call
+    """
     # global opts
     broker = config['brokers'][0]

@@ -80,13 +79,13 @@ def api(config, meth, kwargs, keys):

 @cli.command()
+@click.option('--df-output', '-df', flag_value=True,
+              help='Output in `pandas.DataFrame` format')
 @click.argument('tickers', nargs=-1, required=True)
 @click.pass_obj
-def quote(config, tickers):
-    '''
-    Print symbol quotes to the console
-
-    '''
+def quote(config, tickers, df_output):
+    """Print symbol quotes to the console
+    """
     # global opts
     brokermod = config['brokermods'][0]

@@ -101,19 +100,28 @@ def quote(config, tickers, df_output):
         if ticker not in syms:
             brokermod.log.warn(f"Could not find symbol {ticker}?")

-    click.echo(colorize_json(quotes))
+    if df_output:
+        cols = next(filter(bool, quotes)).copy()
+        cols.pop('symbol')
+        df = pd.DataFrame(
+            (quote or {} for quote in quotes),
+            columns=cols,
+        )
+        click.echo(df)
+    else:
+        click.echo(colorize_json(quotes))


 @cli.command()
+@click.option('--df-output', '-df', flag_value=True,
+              help='Output in `pandas.DataFrame` format')
 @click.option('--count', '-c', default=1000,
               help='Number of bars to retrieve')
 @click.argument('symbol', required=True)
 @click.pass_obj
-def bars(config, symbol, count):
-    '''
-    Retreive 1m bars for symbol and print on the console
-
-    '''
+def bars(config, symbol, count, df_output):
+    """Retreive 1m bars for symbol and print on the console
+    """
     # global opts
     brokermod = config['brokermods'][0]

@@ -125,7 +133,7 @@ def bars(config, symbol, count, df_output):
             brokermod,
             symbol,
             count=count,
-            as_np=False,
+            as_np=df_output
         )
     )

@@ -133,7 +141,10 @@ def bars(config, symbol, count, df_output):
         log.error(f"No quotes could be found for {symbol}?")
         return

-    click.echo(colorize_json(bars))
+    if df_output:
+        click.echo(pd.DataFrame(bars))
+    else:
+        click.echo(colorize_json(bars))


 @cli.command()
@@ -145,10 +156,8 @@ def bars(config, symbol, count, df_output):
 @click.argument('name', nargs=1, required=True)
 @click.pass_obj
 def record(config, rate, name, dhost, filename):
-    '''
-    Record client side quotes to a file on disk
-
-    '''
+    """Record client side quotes to a file on disk
+    """
     # global opts
     brokermod = config['brokermods'][0]
     loglevel = config['loglevel']

@@ -186,10 +195,8 @@ def record(config, rate, name, dhost, filename):
 @click.argument('symbol', required=True)
 @click.pass_context
 def contracts(ctx, loglevel, broker, symbol, ids):
-    '''
-    Get list of all option contracts for symbol
-
-    '''
+    """Get list of all option contracts for symbol
+    """
     brokermod = get_brokermod(broker)
     get_console_log(loglevel)

@@ -206,14 +213,14 @@ def contracts(ctx, loglevel, broker, symbol, ids):

 @cli.command()
+@click.option('--df-output', '-df', flag_value=True,
+              help='Output in `pandas.DataFrame` format')
 @click.option('--date', '-d', help='Contracts expiry date')
 @click.argument('symbol', required=True)
 @click.pass_obj
-def optsquote(config, symbol, date):
-    '''
-    Retreive symbol option quotes on the console
-
-    '''
+def optsquote(config, symbol, df_output, date):
+    """Retreive symbol option quotes on the console
+    """
     # global opts
     brokermod = config['brokermods'][0]

@@ -226,17 +233,22 @@ def optsquote(config, symbol, df_output, date):
         log.error(f"No option quotes could be found for {symbol}?")
         return

-    click.echo(colorize_json(quotes))
+    if df_output:
+        df = pd.DataFrame(
+            (quote.values() for quote in quotes),
+            columns=quotes[0].keys(),
+        )
+        click.echo(df)
+    else:
+        click.echo(colorize_json(quotes))
@cli.command() @cli.command()
@click.argument('tickers', nargs=-1, required=True) @click.argument('tickers', nargs=-1, required=True)
@click.pass_obj @click.pass_obj
def symbol_info(config, tickers): def symbol_info(config, tickers):
''' """Print symbol quotes to the console
Print symbol quotes to the console """
'''
# global opts # global opts
brokermod = config['brokermods'][0] brokermod = config['brokermods'][0]
@ -258,10 +270,8 @@ def symbol_info(config, tickers):
@click.argument('pattern', required=True) @click.argument('pattern', required=True)
@click.pass_obj @click.pass_obj
def search(config, pattern): def search(config, pattern):
''' """Search for symbols from broker backend(s).
Search for symbols from broker backend(s). """
'''
# global opts # global opts
brokermods = config['brokermods'] brokermods = config['brokermods']

View File

@ -142,23 +142,15 @@ async def symbol_search(
brokermods: list[ModuleType], brokermods: list[ModuleType],
pattern: str, pattern: str,
**kwargs, **kwargs,
) -> Dict[str, Dict[str, Dict[str, Any]]]: ) -> Dict[str, Dict[str, Dict[str, Any]]]:
''' """Return symbol info from broker.
Return symbol info from broker. """
'''
results = [] results = []
async def search_backend( async def search_backend(brokername: str) -> None:
brokermod: ModuleType
) -> None:
brokername: str = mod.name
async with maybe_spawn_brokerd( async with maybe_spawn_brokerd(
mod.name, brokername,
infect_asyncio=getattr(mod, '_infect_asyncio', False),
) as portal: ) as portal:
results.append(( results.append((

1968
piker/brokers/ib.py 100644

File diff suppressed because it is too large

View File

@ -1,67 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Interactive Brokers API backend.
Sub-modules within break into the core functionalities:
- ``broker.py`` part for orders / trading endpoints
- ``data.py`` for real-time data feed endpoints
- ``client.py`` for the core API machinery which is ``trio``-ized
wrapping around ``ib_insync``.
- ``report.py`` for the hackery to build manual pp calcs
to avoid ib's absolute bullshit FIFO style position
tracking..
"""
from .api import (
get_client,
)
from .feed import (
open_history_client,
open_symbol_search,
stream_quotes,
)
from .broker import trades_dialogue
__all__ = [
'get_client',
'trades_dialogue',
'open_history_client',
'open_symbol_search',
'stream_quotes',
]
# tractor RPC enable arg
__enable_modules__: list[str] = [
'api',
'feed',
'broker',
]
# passed to ``tractor.ActorNursery.start_actor()``
_spawn_kwargs = {
'infect_asyncio': True,
}
# annotation to let backend agnostic code
# know if ``brokerd`` should be spawned with
# ``tractor``'s aio mode.
_infect_asyncio: bool = True

File diff suppressed because it is too large

View File

@ -1,590 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Order and trades endpoints for use with ``piker``'s EMS.
"""
from __future__ import annotations
from dataclasses import asdict
from functools import partial
from pprint import pformat
import time
from typing import (
Any,
Optional,
AsyncIterator,
)
import trio
from trio_typing import TaskStatus
import tractor
from ib_insync.contract import (
Contract,
Option,
)
from ib_insync.order import (
Trade,
OrderStatus,
)
from ib_insync.objects import (
Fill,
Execution,
)
from ib_insync.objects import Position
from piker import config
from piker.log import get_console_log
from piker.clearing._messages import (
BrokerdOrder,
BrokerdOrderAck,
BrokerdStatus,
BrokerdPosition,
BrokerdCancel,
BrokerdFill,
BrokerdError,
)
from .api import (
_accounts2clients,
_adhoc_futes_set,
log,
get_config,
open_client_proxies,
Client,
)
def pack_position(
pos: Position
) -> dict[str, Any]:
con = pos.contract
if isinstance(con, Option):
# TODO: option symbol parsing and sane display:
symbol = con.localSymbol.replace(' ', '')
else:
# TODO: lookup fqsn even for derivs.
symbol = con.symbol.lower()
exch = (con.primaryExchange or con.exchange).lower()
symkey = '.'.join((symbol, exch))
if not exch:
# attempt to lookup the symbol from our
# hacked set..
for sym in _adhoc_futes_set:
if symbol in sym:
symkey = sym
break
expiry = con.lastTradeDateOrContractMonth
if expiry:
symkey += f'.{expiry}'
# TODO: options contracts into a sane format..
return BrokerdPosition(
broker='ib',
account=pos.account,
symbol=symkey,
currency=con.currency,
size=float(pos.position),
avg_price=float(pos.avgCost) / float(con.multiplier or 1.0),
)
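A worked sketch of the position key assembled above; the symbol, exchange and expiry values are hypothetical::

    symbol, exch, expiry = 'mnq', 'globex', '20220318'
    symkey = '.'.join((symbol, exch))
    if expiry:
        symkey += f'.{expiry}'
    assert symkey == 'mnq.globex.20220318'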
async def handle_order_requests(
ems_order_stream: tractor.MsgStream,
accounts_def: dict[str, str],
) -> None:
request_msg: dict
async for request_msg in ems_order_stream:
log.info(f'Received order request {request_msg}')
action = request_msg['action']
account = request_msg['account']
acct_number = accounts_def.get(account)
if not acct_number:
log.error(
f'An IB account number for name {account} is not found?\n'
'Make sure you have all TWS and GW instances running.'
)
await ems_order_stream.send(BrokerdError(
oid=request_msg['oid'],
symbol=request_msg['symbol'],
reason=f'No account found: `{account}` ?',
).dict())
continue
client = _accounts2clients.get(account)
if not client:
log.error(
f'An IB client for account name {account} is not found.\n'
'Make sure you have all TWS and GW instances running.'
)
await ems_order_stream.send(BrokerdError(
oid=request_msg['oid'],
symbol=request_msg['symbol'],
reason=f'No api client loaded for account: `{account}` ?',
).dict())
continue
if action in {'buy', 'sell'}:
# validate
order = BrokerdOrder(**request_msg)
# call our client api to submit the order
reqid = client.submit_limit(
oid=order.oid,
symbol=order.symbol,
price=order.price,
action=order.action,
size=order.size,
account=acct_number,
# XXX: by default 0 tells ``ib_insync`` methods that
# there is no existing order so ask the client to create
# a new one (which it seems to do by allocating an int
# counter - collision prone..)
reqid=order.reqid,
)
if reqid is None:
await ems_order_stream.send(BrokerdError(
oid=request_msg['oid'],
symbol=request_msg['symbol'],
reason='Order already active?',
).dict())
# deliver ack that order has been submitted to broker routing
await ems_order_stream.send(
BrokerdOrderAck(
# ems order request id
oid=order.oid,
# broker specific request id
reqid=reqid,
time_ns=time.time_ns(),
account=account,
).dict()
)
elif action == 'cancel':
msg = BrokerdCancel(**request_msg)
client.submit_cancel(reqid=msg.reqid)
else:
log.error(f'Unknown order command: {request_msg}')
async def recv_trade_updates(
client: Client,
to_trio: trio.abc.SendChannel,
) -> None:
"""Stream a ticker using the std L1 api.
"""
client.inline_errors(to_trio)
# sync with trio task
to_trio.send_nowait(None)
def push_tradesies(eventkit_obj, obj, fill=None):
"""Push events to trio task.
"""
if fill is not None:
# execution details event
item = ('fill', (obj, fill))
elif eventkit_obj.name() == 'positionEvent':
item = ('position', obj)
else:
item = ('status', obj)
log.info(f'eventkit event ->\n{pformat(item)}')
try:
to_trio.send_nowait(item)
except trio.BrokenResourceError:
log.exception(f'Disconnected from {eventkit_obj} updates')
eventkit_obj.disconnect(push_tradesies)
# hook up to the weird eventkit object - event stream api
for ev_name in [
'orderStatusEvent', # all order updates
'execDetailsEvent', # all "fill" updates
'positionEvent', # avg price updates per symbol per account
# 'commissionReportEvent',
# XXX: ugh, it is a separate event from IB and it's
# emitted as follows:
# self.ib.commissionReportEvent.emit(trade, fill, report)
# XXX: not sure yet if we need these
# 'updatePortfolioEvent',
# XXX: these all seem to be weird ib_insync internal
# events that we probably don't care that much about
# given the internal design is wonky af..
# 'newOrderEvent',
# 'orderModifyEvent',
# 'cancelOrderEvent',
# 'openOrderEvent',
]:
eventkit_obj = getattr(client.ib, ev_name)
handler = partial(push_tradesies, eventkit_obj)
eventkit_obj.connect(handler)
# let the engine run and stream
await client.ib.disconnectedEvent
@tractor.context
async def trades_dialogue(
ctx: tractor.Context,
loglevel: str = None,
) -> AsyncIterator[dict[str, Any]]:
# XXX: required to propagate ``tractor`` loglevel to piker logging
get_console_log(loglevel or tractor.current_actor().loglevel)
accounts_def = config.load_accounts(['ib'])
global _client_cache
# deliver positions to subscriber before anything else
all_positions = []
accounts = set()
clients: list[tuple[Client, trio.MemoryReceiveChannel]] = []
async with (
trio.open_nursery() as nurse,
open_client_proxies() as (proxies, aioclients),
):
for account, proxy in proxies.items():
client = aioclients[account]
async def open_stream(
task_status: TaskStatus[
trio.abc.ReceiveChannel
] = trio.TASK_STATUS_IGNORED,
):
# each api client has a unique event stream
async with tractor.to_asyncio.open_channel_from(
recv_trade_updates,
client=client,
) as (first, trade_event_stream):
task_status.started(trade_event_stream)
await trio.sleep_forever()
trade_event_stream = await nurse.start(open_stream)
clients.append((client, trade_event_stream))
assert account in accounts_def
accounts.add(account)
for client in aioclients.values():
for pos in client.positions():
msg = pack_position(pos)
msg.account = accounts_def.inverse[msg.account]
assert msg.account in accounts, (
f'Position for unknown account: {msg.account}')
all_positions.append(msg.dict())
trades: list[dict] = []
for proxy in proxies.values():
trades.append(await proxy.trades())
log.info(f'Loaded {len(trades)} from this session')
# TODO: write trades to local ``trades.toml``
# - use above per-session trades data and write to local file
# - get the "flex reports" working and pull historical data and
# also save locally.
await ctx.started((
all_positions,
tuple(name for name in accounts_def if name in accounts),
))
async with (
ctx.open_stream() as ems_stream,
trio.open_nursery() as n,
):
# start order request handler **before** local trades event loop
n.start_soon(handle_order_requests, ems_stream, accounts_def)
# allocate event relay tasks for each client connection
for client, stream in clients:
n.start_soon(
deliver_trade_events,
stream,
ems_stream,
accounts_def
)
# block until cancelled
await trio.sleep_forever()
async def deliver_trade_events(
trade_event_stream: trio.MemoryReceiveChannel,
ems_stream: tractor.MsgStream,
accounts_def: dict[str, str],
) -> None:
'''Format and relay all trade events for a given client to the EMS.
'''
action_map = {'BOT': 'buy', 'SLD': 'sell'}
# TODO: for some reason we can receive a ``None`` here when the
# ib-gw goes down? Not sure exactly how that's happening looking
# at the eventkit code above but we should probably handle it...
async for event_name, item in trade_event_stream:
log.info(f'ib sending {event_name}:\n{pformat(item)}')
# TODO: templating the ib statuses in comparison with other
# brokers is likely the way to go:
# https://interactivebrokers.github.io/tws-api/interfaceIBApi_1_1EWrapper.html#a17f2a02d6449710b6394d0266a353313
# short list:
# - PendingSubmit
# - PendingCancel
# - PreSubmitted (simulated orders)
# - ApiCancelled (cancelled by client before submission
# to routing)
# - Cancelled
# - Filled
# - Inactive (reject or cancelled but not by trader)
# XXX: here's some other sucky cases from the api
# - short-sale but securities haven't been located, in this
# case we should probably keep the order in some kind of
# weird state or cancel it outright?
# status='PendingSubmit', message=''),
# status='Cancelled', message='Error 404,
# reqId 1550: Order held while securities are located.'),
# status='PreSubmitted', message='')],
if event_name == 'status':
# XXX: begin normalization of nonsense ib_insync internal
# object-state tracking representations...
# unwrap needed data from ib_insync internal types
trade: Trade = item
status: OrderStatus = trade.orderStatus
# skip duplicate filled updates - we get the deats
# from the execution details event
msg = BrokerdStatus(
reqid=trade.order.orderId,
time_ns=time.time_ns(), # cuz why not
account=accounts_def.inverse[trade.order.account],
# everyone doin camel case..
status=status.status.lower(), # force lower case
filled=status.filled,
reason=status.whyHeld,
# this seems to not be necessarily up to date in the
# execDetails event.. so we have to send it here I guess?
remaining=status.remaining,
broker_details={'name': 'ib'},
)
elif event_name == 'fill':
# for wtv reason this is a separate event type
# from IB, not sure why it's needed other than for extra
# complexity and over-engineering :eyeroll:.
# we may just end up dropping these events (or
# translating them to ``Status`` msgs) if we can
# show the equivalent status events are no more latent.
# unpack ib_insync types
# pep-0526 style:
# https://www.python.org/dev/peps/pep-0526/#global-and-local-variable-annotations
trade: Trade
fill: Fill
trade, fill = item
execu: Execution = fill.execution
# TODO: normalize out commissions details?
details = {
'contract': asdict(fill.contract),
'execution': asdict(fill.execution),
'commissions': asdict(fill.commissionReport),
'broker_time': execu.time, # supposedly server fill time
'name': 'ib',
}
msg = BrokerdFill(
# should match the value returned from `.submit_limit()`
reqid=execu.orderId,
time_ns=time.time_ns(), # cuz why not
action=action_map[execu.side],
size=execu.shares,
price=execu.price,
broker_details=details,
# XXX: required by order mode currently
broker_time=details['broker_time'],
)
elif event_name == 'error':
err: dict = item
# f$#$% gawd dammit insync..
con = err['contract']
if isinstance(con, Contract):
err['contract'] = asdict(con)
if err['reqid'] == -1:
log.error(f'TWS external order error:\n{pformat(err)}')
# TODO: what schema for this msg if we're going to make it
# portable across all backends?
# msg = BrokerdError(**err)
continue
elif event_name == 'position':
msg = pack_position(item)
msg.account = accounts_def.inverse[msg.account]
elif event_name == 'event':
# it's either a general system status event or an external
# trade event?
log.info(f"TWS system status: \n{pformat(item)}")
# TODO: support this again but needs parsing at the callback
# level...
# reqid = item.get('reqid', 0)
# if getattr(msg, 'reqid', 0) < -1:
# log.info(f"TWS triggered trade\n{pformat(msg.dict())}")
continue
# msg.reqid = 'tws-' + str(-1 * reqid)
# mark msg as from "external system"
# TODO: probably something better then this.. and start
# considering multiplayer/group trades tracking
# msg.broker_details['external_src'] = 'tws'
# XXX: we always serialize to a dict for msgpack
# translations, ideally we can move to an msgspec (or other)
# encoder # that can be enabled in ``tractor`` ahead of
# time so we can pass through the message types directly.
await ems_stream.send(msg.dict())
def load_flex_trades(
path: Optional[str] = None,
) -> dict[str, str]:
from pprint import pprint
from ib_insync import flexreport, util
conf = get_config()
if not path:
# load ``brokers.toml`` and try to get the flex
# token and query id that must be previously defined
# by the user.
token = conf.get('flex_token')
if not token:
raise ValueError(
'You must specify a ``flex_token`` field in your '
'`brokers.toml` in order to load your trade log, see our '
'instructions for how to set this up here:\n'
'PUT LINK HERE!'
)
qid = conf['flex_trades_query_id']
# TODO: hack this into our logging
# system like we do with the API client..
util.logToConsole()
# TODO: rewrite the query part of this with async..httpx?
report = flexreport.FlexReport(
token=token,
queryId=qid,
)
else:
# XXX: another project we could potentially look at,
# https://pypi.org/project/ibflex/
report = flexreport.FlexReport(path=path)
trade_entries = report.extract('Trade')
trades = {
# XXX: LOL apparently ``toml`` has a bug
# where a section key error will show up in the write
# if you leave this as an ``int``?
str(t.__dict__['tradeID']): t.__dict__
for t in trade_entries
}
ln = len(trades)
log.info(f'Loaded {ln} trades from flex query')
trades_by_account = {}
for tid, trade in trades.items():
trades_by_account.setdefault(
# oddly for some so-called "BookTrade" entries
# this field seems to be blank, no cuckin clue.
# trade['ibExecID']
str(trade['accountId']), {}
)[tid] = trade
section = {'ib': trades_by_account}
pprint(section)
# TODO: load the config first and append in
# the new trades loaded here..
try:
config.write(section, 'trades')
except KeyError:
import pdbpp; pdbpp.set_trace() # noqa
if __name__ == '__main__':
load_flex_trades()

View File

@ -1,938 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Data feed endpoints pre-wrapped and ready for use with ``tractor``/``trio``.
"""
from __future__ import annotations
import asyncio
from contextlib import asynccontextmanager as acm
from dataclasses import asdict
from datetime import datetime
from math import isnan
import time
from typing import (
Callable,
Optional,
Awaitable,
)
from async_generator import aclosing
from fuzzywuzzy import process as fuzzy
import numpy as np
import pendulum
import tractor
import trio
from trio_typing import TaskStatus
from piker.data._sharedmem import ShmArray
from .._util import SymbolNotFound, NoData
from .api import (
_adhoc_futes_set,
log,
load_aio_clients,
ibis,
MethodProxy,
open_client_proxies,
get_preferred_data_client,
Ticker,
RequestError,
Contract,
)
# https://interactivebrokers.github.io/tws-api/tick_types.html
tick_types = {
77: 'trade',
# a "utrade" aka an off exchange "unreportable" (dark) vlm:
# https://interactivebrokers.github.io/tws-api/tick_types.html#rt_volume
48: 'dark_trade',
# standard L1 ticks
0: 'bsize',
1: 'bid',
2: 'ask',
3: 'asize',
4: 'last',
5: 'size',
8: 'volume',
# ``ib_insync`` already packs these into
# quotes under the following fields.
# 55: 'trades_per_min', # `'tradeRate'`
# 56: 'vlm_per_min', # `'volumeRate'`
# 89: 'shortable', # `'shortableShares'`
}
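A minimal sketch of how this table gets used: resolve a raw ib ``tickType`` code to piker's tick name, defaulting to 'n/a' for unmapped codes just as ``normalize()`` below does (the tick dict is hypothetical)::

    td = {'tickType': 48, 'price': 101.25, 'size': 50}
    td['type'] = tick_types.get(td['tickType'], 'n/a')
    assert td['type'] == 'dark_trade'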
@acm
async def open_data_client() -> MethodProxy:
'''
Open the first found preferred "data client" as defined in the
user's ``brokers.toml`` in the ``ib.prefer_data_account`` variable
and deliver that client wrapped in a ``MethodProxy``.
'''
async with (
open_client_proxies() as (proxies, clients),
):
account_name, client = get_preferred_data_client(clients)
proxy = proxies.get(f'ib.{account_name}')
if not proxy:
raise ValueError(
f'No preferred data client could be found for {account_name}!'
)
yield proxy
@acm
async def open_history_client(
symbol: str,
) -> tuple[Callable, int]:
'''
History retrieval endpoint - delivers a historical frame callable
that takes in ``pendulum.datetime`` and returns ``numpy`` arrays.
'''
async with open_data_client() as proxy:
async def get_hist(
end_dt: Optional[datetime] = None,
start_dt: Optional[datetime] = None,
) -> tuple[np.ndarray, str]:
out, fails = await get_bars(proxy, symbol, end_dt=end_dt)
# TODO: add logic here to handle tradable hours and only grab
# valid bars in the range
if out is None:
# could be trying to retrieve bars over the weekend
log.error(f"Can't grab bars starting at {end_dt}!?!?")
raise NoData(
f'{end_dt}',
frame_size=2000,
)
bars, bars_array, first_dt, last_dt = out
# volume cleaning since there's -ve entries,
# wood luv to know what crookery that is..
vlm = bars_array['volume']
vlm[vlm < 0] = 0
return bars_array, first_dt, last_dt
# TODO: it seems like we can do async queries for ohlc
# but getting the order right still isn't working and I'm not
# quite sure why.. needs some tinkering and probably
# a look through the ``ib_insync`` machinery, e.g. maybe
# we have to do the batch queries on the `asyncio` side?
yield get_hist, {'erlangs': 1, 'rate': 6}
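A hypothetical consumer sketch for the callable delivered above (assumes a live data client and an fqsn like 'mnq.globex')::

    import trio

    async def main() -> None:
        async with open_history_client('mnq.globex') as (get_hist, pacing):
            bars, first_dt, last_dt = await get_hist()
            print(f'{len(bars)} bars ending {last_dt}, pacing: {pacing}')

    trio.run(main)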
_pacing: str = (
'Historical Market Data Service error '
'message:Historical data request pacing violation'
)
async def get_bars(
proxy: MethodProxy,
fqsn: str,
# blank to start which tells ib to look up the latest datum
end_dt: str = '',
) -> (dict, np.ndarray):
'''
Retrieve historical data from a ``trio``-side task using
a ``MethodProxy``.
'''
fails = 0
bars: Optional[list] = None
first_dt: datetime = None
last_dt: datetime = None
if end_dt:
last_dt = pendulum.from_timestamp(end_dt.timestamp())
for _ in range(10):
try:
out = await proxy.bars(
fqsn=fqsn,
end_dt=end_dt,
)
if out:
bars, bars_array = out
else:
await tractor.breakpoint()
if bars_array is None:
raise SymbolNotFound(fqsn)
first_dt = pendulum.from_timestamp(
bars[0].date.timestamp())
last_dt = pendulum.from_timestamp(
bars[-1].date.timestamp())
time = bars_array['time']
assert time[-1] == last_dt.timestamp()
assert time[0] == first_dt.timestamp()
log.info(
f'{len(bars)} bars retrieved for {first_dt} -> {last_dt}'
)
return (bars, bars_array, first_dt, last_dt), fails
except RequestError as err:
msg = err.message
# why do we always need to rebind this?
# _err = err
if 'No market data permissions for' in msg:
# TODO: signalling for no permissions searches
raise NoData(
f'Symbol: {fqsn}',
)
elif (
err.code == 162
and 'HMDS query returned no data' in err.message
):
# XXX: this is now done in the storage mgmt layer
# and we shouldn't implicitly decrement the frame dt
# index since the upper layer may be doing so
# concurrently and we don't want to be delivering frames
# that weren't asked for.
log.warning(
f'NO DATA found ending @ {end_dt}\n'
)
# try to decrement start point and look further back
# end_dt = last_dt = last_dt.subtract(seconds=2000)
raise NoData(
f'Symbol: {fqsn}',
frame_size=2000,
)
elif _pacing in msg:
log.warning(
'History throttle rate reached!\n'
'Resetting farms with `ctrl-alt-f` hack\n'
)
# TODO: we might have to put a task lock around this
# method..
hist_ev = proxy.status_event(
'HMDS data farm connection is OK:ushmds'
)
# XXX: other event messages we might want to try and
# wait for but i wasn't able to get any of this
# reliable..
# reconnect_start = proxy.status_event(
# 'Market data farm is connecting:usfuture'
# )
# live_ev = proxy.status_event(
# 'Market data farm connection is OK:usfuture'
# )
# try to wait on the reset event(s) to arrive, a timeout
# will trigger a retry up to 6 times (for now).
tries: int = 2
timeout: float = 10
# try a few times with a data reset then fail over to
# a connection reset.
for i in range(1, tries):
log.warning('Sending DATA RESET request')
await data_reset_hack(reset_type='data')
with trio.move_on_after(timeout) as cs:
for name, ev in [
# TODO: not sure if waiting on other events
# is all that useful here or not. in theory
# you could wait on one of the ones above
# first to verify the reset request was
# sent?
('history', hist_ev),
]:
await ev.wait()
log.info(f"{name} DATA RESET")
break
if cs.cancelled_caught:
fails += 1
log.warning(
f'Data reset {name} timeout, retrying {i}.'
)
continue
else:
log.warning('Sending CONNECTION RESET')
await data_reset_hack(reset_type='connection')
with trio.move_on_after(timeout) as cs:
for name, ev in [
# TODO: not sure if waiting on other events
# is all that useful here or not. in theory
# you could wait on one of the ones above
# first to verify the reset request was
# sent?
('history', hist_ev),
]:
await ev.wait()
log.info(f"{name} DATA RESET")
if cs.cancelled_caught:
fails += 1
log.warning('Data CONNECTION RESET timeout!?')
else:
raise
return None, None
# else: # throttle wasn't fixed so error out immediately
# raise _err
async def backfill_bars(
fqsn: str,
shm: ShmArray, # type: ignore # noqa
# TODO: we want to avoid overrunning the underlying shm array buffer
# and we should probably calc the number of calls to make depending
# on that until we have the `marketstore` daemon in place in which
# case the shm size will be driven by user config and available sys
# memory.
count: int = 16,
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Fill historical bars into shared mem / storage afap.
TODO: avoid pacing constraints:
https://github.com/pikers/piker/issues/128
'''
# last_dt1 = None
last_dt = None
with trio.CancelScope() as cs:
async with open_data_client() as proxy:
out, fails = await get_bars(proxy, fqsn)
if out is None:
raise RuntimeError("Could not pull current history?!")
(first_bars, bars_array, first_dt, last_dt) = out
vlm = bars_array['volume']
vlm[vlm < 0] = 0
last_dt = first_dt
# write historical data to buffer
shm.push(bars_array)
task_status.started(cs)
i = 0
while i < count:
out, fails = await get_bars(proxy, fqsn, end_dt=first_dt)
if out is None:
# could be trying to retrieve bars over the weekend
# TODO: add logic here to handle tradable hours and
# only grab valid bars in the range
log.error(f"Can't grab bars starting at {first_dt}!?!?")
# XXX: get_bars() should internally decrement dt by
# 2k seconds and try again.
continue
(first_bars, bars_array, first_dt, last_dt) = out
# last_dt1 = last_dt
# last_dt = first_dt
# volume cleaning since there's -ve entries,
# wood luv to know what crookery that is..
vlm = bars_array['volume']
vlm[vlm < 0] = 0
# TODO we should probably dig into forums to see what peeps
# think this data "means" and then use it as an indicator of
# sorts? dinkus has mentioned that $vlms for the day don't
# match other platforms nor the summary stat tws shows in
# the monitor - it's probably worth investigating.
shm.push(bars_array, prepend=True)
i += 1
asset_type_map = {
'STK': 'stock',
'OPT': 'option',
'FUT': 'future',
'CONTFUT': 'continuous_future',
'CASH': 'forex',
'IND': 'index',
'CFD': 'cfd',
'BOND': 'bond',
'CMDTY': 'commodity',
'FOP': 'futures_option',
'FUND': 'mutual_fund',
'WAR': 'warrant',
'IOPT': 'warrant',
'BAG': 'bag',
# 'NEWS': 'news',
}
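Sketch of the intended lookup, mapping ib's ``secType`` codes to piker asset-type tags::

    for sectype, expect in [('STK', 'stock'), ('CONTFUT', 'continuous_future')]:
        assert asset_type_map[sectype] == expect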
_quote_streams: dict[str, trio.abc.ReceiveStream] = {}
async def _setup_quote_stream(
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
symbol: str,
opts: tuple[int] = (
'375', # RT trade volume (excludes utrades)
'233', # RT trade volume (includes utrades)
'236', # Shortable shares
# these all appear to only be updated every 25s thus
# making them mostly useless and explains why the scanner
# is always slow XD
# '293', # Trade count for day
'294', # Trade rate / minute
'295', # Vlm rate / minute
),
contract: Optional[Contract] = None,
) -> trio.abc.ReceiveChannel:
'''
Stream a ticker using the std L1 api.
This task is ``asyncio``-side and must be called from
``tractor.to_asyncio.open_channel_from()``.
'''
global _quote_streams
to_trio.send_nowait(None)
async with load_aio_clients() as accts2clients:
caccount_name, client = get_preferred_data_client(accts2clients)
contract = contract or (await client.find_contract(symbol))
ticker: Ticker = client.ib.reqMktData(contract, ','.join(opts))
# NOTE: it's batch-wise and slow af but I guess could
# be good for backchecking? Seems to be every 5s maybe?
# ticker: Ticker = client.ib.reqTickByTickData(
# contract, 'Last',
# )
# # define a simple queue push routine that streams quote packets
# # to trio over the ``to_trio`` memory channel.
# to_trio, from_aio = trio.open_memory_channel(2**8) # type: ignore
def teardown():
ticker.updateEvent.disconnect(push)
log.error(f"Disconnected stream for `{symbol}`")
client.ib.cancelMktData(contract)
# decouple broadcast mem chan
_quote_streams.pop(symbol, None)
def push(t: Ticker) -> None:
"""
Push quotes to trio task.
"""
# log.debug(t)
try:
to_trio.send_nowait(t)
except (
trio.BrokenResourceError,
# XXX: HACK, not sure why this gets left stale (probably
# due to our terrible ``tractor.to_asyncio``
# implementation for streams.. but if the mem chan
# gets left here and starts blocking just kill the feed?
# trio.WouldBlock,
):
# XXX: eventkit's ``Event.emit()`` for whatever redic
# reason will catch and ignore regular exceptions
# resulting in tracebacks spammed to console..
# Manually do the dereg ourselves.
teardown()
except trio.WouldBlock:
log.warning(
f'channel is blocking symbol feed for {symbol}?'
f'\n{to_trio.statistics}'
)
# except trio.WouldBlock:
# # for slow debugging purposes to avoid clobbering prompt
# # with log msgs
# pass
ticker.updateEvent.connect(push)
try:
await asyncio.sleep(float('inf'))
finally:
teardown()
# return from_aio
@acm
async def open_aio_quote_stream(
symbol: str,
contract: Optional[Contract] = None,
) -> trio.abc.ReceiveStream:
from tractor.trionics import broadcast_receiver
global _quote_streams
from_aio = _quote_streams.get(symbol)
if from_aio:
# if we already have a cached feed deliver a rx side clone to consumer
async with broadcast_receiver(
from_aio,
2**6,
) as from_aio:
yield from_aio
return
async with tractor.to_asyncio.open_channel_from(
_setup_quote_stream,
symbol=symbol,
contract=contract,
) as (first, from_aio):
# cache feed for later consumers
_quote_streams[symbol] = from_aio
yield from_aio
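Hypothetical usage sketch: a second task entering the stream for an already-subscribed symbol gets a ``broadcast_receiver`` clone instead of a fresh ib subscription (the fqsn is assumed)::

    async def second_subscriber() -> None:
        async with open_aio_quote_stream(symbol='mnq.globex') as stream:
            ticker = await stream.receive()
            print(ticker)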
# TODO: cython/mypyc/numba this!
def normalize(
ticker: Ticker,
calc_price: bool = False
) -> dict:
# should be real volume for this contract by default
calc_price = False
# check for special contract types
con = ticker.contract
if type(con) in (
ibis.Commodity,
ibis.Forex,
):
# commodities and forex don't have an exchange name and
# no real volume so we have to calculate the price
suffix = con.secType
# no real volume on this tract
calc_price = True
else:
suffix = con.primaryExchange
if not suffix:
suffix = con.exchange
# append a `.<suffix>` to the returned symbol
# key for derivatives that normally is the expiry
# date key.
expiry = con.lastTradeDateOrContractMonth
if expiry:
suffix += f'.{expiry}'
# convert named tuples to dicts so we send usable keys
new_ticks = []
for tick in ticker.ticks:
if tick and not isinstance(tick, dict):
td = tick._asdict()
td['type'] = tick_types.get(
td['tickType'],
'n/a',
)
new_ticks.append(td)
tbt = ticker.tickByTicks
if tbt:
print(f'tickbyticks:\n {ticker.tickByTicks}')
ticker.ticks = new_ticks
# some contracts don't have volume so we may want to calculate
# a midpoint price based on data we can acquire (such as bid / ask)
if calc_price:
ticker.ticks.append(
{'type': 'trade', 'price': ticker.marketPrice()}
)
# serialize for transport
data = asdict(ticker)
# generate fqsn with possible specialized suffix
# for derivatives, note the lowercase.
data['symbol'] = data['fqsn'] = '.'.join(
(con.symbol, suffix)
).lower()
# convert named tuples to dicts for transport
tbts = data.get('tickByTicks')
if tbts:
data['tickByTicks'] = [tbt._asdict() for tbt in tbts]
# add time stamps for downstream latency measurements
data['brokerd_ts'] = time.time()
# stupid stupid shit...don't even care any more..
# leave it until we do a proper latency study
# if ticker.rtTime is not None:
# data['broker_ts'] = data['rtTime_s'] = float(
# ticker.rtTime.timestamp) / 1000.
data.pop('rtTime')
return data
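A worked sketch of the fqsn suffixing logic above for a derivative contract; the symbol, exchange and expiry values are hypothetical::

    symbol, suffix, expiry = 'CL', 'NYMEX', '20220520'
    if expiry:
        suffix += f'.{expiry}'
    fqsn = '.'.join((symbol, suffix)).lower()
    assert fqsn == 'cl.nymex.20220520'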
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str = None,
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Stream symbol quotes.
This is a ``trio`` callable routine meant to be invoked
once the brokerd is up.
'''
# TODO: support multiple subscriptions
sym = symbols[0]
log.info(f'request for real-time quotes: {sym}')
async with open_data_client() as proxy:
con, first_ticker, details = await proxy.get_sym_details(symbol=sym)
first_quote = normalize(first_ticker)
# print(f'first quote: {first_quote}')
def mk_init_msgs() -> dict[str, dict]:
'''
Collect a bunch of meta-data useful for feed startup and
pack in a `dict`-msg.
'''
# pass back some symbol info like min_tick, trading_hours, etc.
syminfo = asdict(details)
syminfo.update(syminfo['contract'])
# nested dataclass we probably don't need and that won't IPC
# serialize
syminfo.pop('secIdList')
# TODO: more consistent field translation
atype = syminfo['asset_type'] = asset_type_map[syminfo['secType']]
# for stocks it seems TWS reports too small a tick size
# such that you can't submit orders with that granularity?
min_tick = 0.01 if atype == 'stock' else 0
syminfo['price_tick_size'] = max(syminfo['minTick'], min_tick)
# for "traditional" assets, volume is normally discreet, not
# a float
syminfo['lot_tick_size'] = 0.0
ibclient = proxy._aio_ns.ib.client
host, port = ibclient.host, ibclient.port
# TODO: for loop through all symbols passed in
init_msgs = {
# pass back token, and bool, signalling if we're the writer
# and that history has been written
sym: {
'symbol_info': syminfo,
'fqsn': first_quote['fqsn'],
},
'status': {
'data_ep': f'{host}:{port}',
},
}
return init_msgs
init_msgs = mk_init_msgs()
# TODO: we should instead spawn a task that waits on a feed to start
# and let it wait indefinitely..instead of this hard coded stuff.
with trio.move_on_after(1):
contract, first_ticker, details = await proxy.get_quote(symbol=sym)
# it might be outside regular trading hours so see if we can at
# least grab history.
if isnan(first_ticker.last):
task_status.started((init_msgs, first_quote))
# it's not really live but this will unblock
# the brokerd feed task to tell the ui to update?
feed_is_live.set()
# block and let data history backfill code run.
await trio.sleep_forever()
return # we never expect feed to come up?
async with open_aio_quote_stream(
symbol=sym,
contract=con,
) as stream:
# ugh, clear ticks since we've consumed them
# (ahem, ib_insync is stateful trash)
first_ticker.ticks = []
task_status.started((init_msgs, first_quote))
async with aclosing(stream):
if type(first_ticker.contract) not in (
ibis.Commodity,
ibis.Forex
):
# wait for real volume on feed (trading might be closed)
while True:
ticker = await stream.receive()
# for a real volume contract we wait for the first
# "real" trade to take place
if (
# not calc_price
# and not ticker.rtTime
not ticker.rtTime
):
# spin consuming tickers until we get a real
# market datum
log.debug(f"New unsent ticker: {ticker}")
continue
else:
log.debug("Received first real volume tick")
# ugh, clear ticks since we've consumed them
# (ahem, ib_insync is truly stateful trash)
ticker.ticks = []
# XXX: this works because we don't use
# ``aclosing()`` above?
break
quote = normalize(ticker)
log.debug(f"First ticker received {quote}")
# tell caller quotes are now coming in live
feed_is_live.set()
# last = time.time()
async for ticker in stream:
quote = normalize(ticker)
await send_chan.send({quote['fqsn']: quote})
# ugh, clear ticks since we've consumed them
ticker.ticks = []
# last = time.time()
async def data_reset_hack(
reset_type: str = 'data',
) -> None:
'''
Run key combos for resetting data feeds and yield back to caller
when complete.
This is a linux-only hack around:
https://interactivebrokers.github.io/tws-api/historical_limitations.html#pacing_violations
TODOs:
- a return type that hopefully determines if the hack was
successful.
- other OS support?
- integration with ``ib-gw`` run in docker + Xorg?
'''
async def vnc_click_hack(
reset_type: str = 'data'
) -> None:
'''
Reset the data or network connection for the VNC attached
ib gateway using magic combos.
'''
key = {'data': 'f', 'connection': 'r'}[reset_type]
import asyncvnc
async with asyncvnc.connect(
'localhost',
port=3003,
# password='ibcansmbz',
) as client:
# move to middle of screen
# 640x1800
client.mouse.move(
x=500,
y=500,
)
client.mouse.click()
client.keyboard.press('Ctrl', 'Alt', key) # keys are stacked
await tractor.to_asyncio.run_task(vnc_click_hack)
# we don't really need the ``xdotool`` approach any more B)
return True
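A usage sketch mirroring the retry logic in ``get_bars()``: try the lighter 'data' reset first, then fall back to a full connection reset (the return-value handling here is an assumption)::

    async def reset_feeds() -> None:
        if not await data_reset_hack(reset_type='data'):
            await data_reset_hack(reset_type='connection')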
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> None:
# TODO: load user defined symbol set locally for fast search?
await ctx.started({})
async with open_data_client() as proxy:
async with ctx.open_stream() as stream:
last = time.time()
async for pattern in stream:
log.debug(f'received {pattern}')
now = time.time()
assert pattern, 'IB can not accept blank search pattern'
# throttle search requests to no faster than 1Hz
diff = now - last
if diff < 1.0:
log.debug('throttle sleeping')
await trio.sleep(diff)
try:
pattern = stream.receive_nowait()
except trio.WouldBlock:
pass
if not pattern or pattern.isspace():
log.warning('empty pattern received, skipping..')
# TODO: *BUG* if nothing is returned here the client
# side will cache a null set result and not show
# anything to the user on re-searches when this query
# timed out. We probably need a special "timeout" msg
# or something...
# XXX: this unblocks the far end search task which may
# hold up a multi-search nursery block
await stream.send({})
continue
log.debug(f'searching for {pattern}')
last = time.time()
# async batch search using api stocks endpoint and module
# defined adhoc symbol set.
stock_results = []
async def stash_results(target: Awaitable[list]):
stock_results.extend(await target)
async with trio.open_nursery() as sn:
sn.start_soon(
stash_results,
proxy.search_symbols(
pattern=pattern,
upto=5,
),
)
# trigger async request
await trio.sleep(0)
# match against our ad-hoc set immediately
adhoc_matches = fuzzy.extractBests(
pattern,
list(_adhoc_futes_set),
score_cutoff=90,
)
log.info(f'fuzzy matched adhocs: {adhoc_matches}')
adhoc_match_results = {}
if adhoc_matches:
# TODO: do we need to pull contract details?
adhoc_match_results = {i[0]: {} for i in adhoc_matches}
log.debug(f'fuzzy matching stocks {stock_results}')
stock_matches = fuzzy.extractBests(
pattern,
stock_results,
score_cutoff=50,
)
matches = adhoc_match_results | {
item[0]: {} for item in stock_matches
}
# TODO: we used to deliver contract details
# {item[2]: item[0] for item in stock_matches}
log.debug(f"sending matches: {matches.keys()}")
await stream.send(matches)

File diff suppressed because it is too large

View File

@ -19,6 +19,7 @@ Questrade API backend.
""" """
from __future__ import annotations from __future__ import annotations
import inspect import inspect
import contextlib
import time import time
from datetime import datetime from datetime import datetime
from functools import partial from functools import partial
@ -31,10 +32,11 @@ from typing import (
Callable, Callable,
) )
import pendulum import arrow
import trio import trio
import tractor import tractor
from async_generator import asynccontextmanager from async_generator import asynccontextmanager
import pandas as pd
import numpy as np import numpy as np
import wrapt import wrapt
import asks import asks
@ -44,6 +46,7 @@ from .._cacheables import open_cached_client, async_lifo_cache
from .. import config from .. import config
from ._util import resproc, BrokerError, SymbolNotFound from ._util import resproc, BrokerError, SymbolNotFound
from ..log import get_logger, colorize_json, get_console_log from ..log import get_logger, colorize_json, get_console_log
from . import get_brokermod
log = get_logger(__name__) log = get_logger(__name__)
@ -598,16 +601,12 @@ class Client:
sid = sids[symbol] sid = sids[symbol]
# get last market open end time # get last market open end time
est_end = now = pendulum.now('UTC').in_timezone( est_end = now = arrow.utcnow().to('US/Eastern').floor('minute')
'America/New_York').start_of('minute')
# on non-paid feeds we can't retrieve the first 15 mins # on non-paid feeds we can't retrieve the first 15 mins
wd = now.isoweekday() wd = now.isoweekday()
if wd > 5: if wd > 5:
quotes = await self.quote([symbol]) quotes = await self.quote([symbol])
est_end = pendulum.parse( est_end = arrow.get(quotes[0]['lastTradeTime'])
quotes[0]['lastTradeTime']
)
if est_end.hour == 0: if est_end.hour == 0:
# XXX don't bother figuring out extended hours for now # XXX don't bother figuring out extended hours for now
est_end = est_end.replace(hour=17) est_end = est_end.replace(hour=17)
@ -668,7 +667,7 @@ def get_OHLCV(
""" """
del bar['end'] del bar['end']
del bar['VWAP'] del bar['VWAP']
bar['start'] = pendulum.from_timestamp(bar['start'] / 10**9) bar['start'] = pd.Timestamp(bar['start']).value/10**9
return tuple(bar.values()) return tuple(bar.values())

View File

@ -29,8 +29,7 @@ from ._messages import BrokerdPosition, Status
class Position(BaseModel): class Position(BaseModel):
''' '''Basic pp (personal position) model with attached fills history.
Basic pp (personal position) model with attached fills history.
This type should be IPC wire ready? This type should be IPC wire ready?
@ -62,15 +61,6 @@ class Position(BaseModel):
self.avg_price = avg_price self.avg_price = avg_price
self.size = size self.size = size
@property
def dsize(self) -> float:
'''
The "dollar" size of the pp, normally in trading (fiat) unit
terms.
'''
return self.avg_price * self.size
_size_units = bidict({ _size_units = bidict({
'currency': '$ size', 'currency': '$ size',
@ -124,8 +114,7 @@ class Allocator(BaseModel):
def step_sizes( def step_sizes(
self, self,
) -> (float, float): ) -> (float, float):
''' '''Return the units size for each unit type as a tuple.
Return the units size for each unit type as a tuple.
''' '''
slots = self.slots slots = self.slots
@ -153,8 +142,7 @@ class Allocator(BaseModel):
action: str, action: str,
) -> dict: ) -> dict:
''' '''Generate order request info for the "next" submittable order
Generate order request info for the "next" submittable order
depending on position / order entry config. depending on position / order entry config.
''' '''
@ -178,9 +166,7 @@ class Allocator(BaseModel):
l_sub_pp = (self.currency_limit - live_cost_basis) / price l_sub_pp = (self.currency_limit - live_cost_basis) / price
else: else:
raise ValueError( raise ValueError(f"Not valid size unit '{size}'")
f"Not valid size unit '{size_unit}'"
)
# an entry (adding-to or starting a pp) # an entry (adding-to or starting a pp)
if ( if (
@ -264,8 +250,7 @@ class Allocator(BaseModel):
pp: Position, pp: Position,
) -> float: ) -> float:
''' '''Calc and return the number of slots used by this ``Position``.
Calc and return the number of slots used by this ``Position``.
''' '''
abs_pp_size = abs(pp.size) abs_pp_size = abs(pp.size)
@ -284,14 +269,6 @@ class Allocator(BaseModel):
return round(prop * self.slots) return round(prop * self.slots)
_derivs = (
'future',
'continuous_future',
'option',
'futures_option',
)
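A worked sketch of the ``Allocator.slots_used()`` math shown in the hunk above, for a '# units' sized allocator (numbers are hypothetical)::

    units_limit, slots, pp_size = 400, 4, 100
    prop = abs(pp_size) / units_limit
    assert round(prop * slots) == 1  # one of four slots in use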
def mk_allocator( def mk_allocator(
symbol: Symbol, symbol: Symbol,
@ -300,7 +277,7 @@ def mk_allocator(
# default allocation settings # default allocation settings
defaults: dict[str, float] = { defaults: dict[str, float] = {
'account': None, # select paper by default 'account': None, # select paper by default
'size_unit': 'currency', 'size_unit': 'currency', #_size_units['currency'],
'units_limit': 400, 'units_limit': 400,
'currency_limit': 5e3, 'currency_limit': 5e3,
'slots': 4, 'slots': 4,
@ -328,9 +305,11 @@ def mk_allocator(
asset_type = symbol.type_key asset_type = symbol.type_key
# specific configs by asset class / type # specific configs by asset class / type
if asset_type in _derivs: if asset_type in ('future', 'option', 'futures_option'):
# since it's harder to know how currency "applies" in this case # since it's harder to know how currency "applies" in this case
# given leverage properties # given leverage properties
alloc.size_unit = '# units' alloc.size_unit = '# units'
@ -353,7 +332,7 @@ def mk_allocator(
if startup_size > alloc.units_limit: if startup_size > alloc.units_limit:
alloc.units_limit = startup_size alloc.units_limit = startup_size
if asset_type in _derivs: if asset_type in ('future', 'option', 'futures_option'):
alloc.slots = alloc.units_limit alloc.slots = alloc.units_limit
return alloc return alloc

View File

@ -18,7 +18,7 @@
Orders and execution client API. Orders and execution client API.
""" """
from contextlib import asynccontextmanager as acm from contextlib import asynccontextmanager
from typing import Dict from typing import Dict
from pprint import pformat from pprint import pformat
from dataclasses import dataclass, field from dataclasses import dataclass, field
@ -27,6 +27,7 @@ import trio
import tractor import tractor
from tractor.trionics import broadcast_receiver from tractor.trionics import broadcast_receiver
from ..data._source import Symbol
from ..log import get_logger from ..log import get_logger
from ._ems import _emsd_main from ._ems import _emsd_main
from .._daemon import maybe_open_emsd from .._daemon import maybe_open_emsd
@ -155,19 +156,16 @@ async def relay_order_cmds_from_sync_code(
await to_ems_stream.send(cmd) await to_ems_stream.send(cmd)
@acm @asynccontextmanager
async def open_ems( async def open_ems(
fqsn: str, broker: str,
symbol: Symbol,
) -> ( ) -> (OrderBook, tractor.MsgStream, dict):
OrderBook, """Spawn an EMS daemon and begin sending orders and receiving
tractor.MsgStream,
dict,
):
'''
Spawn an EMS daemon and begin sending orders and receiving
alerts. alerts.
This EMS tries to reduce most broker's terrible order entry apis to This EMS tries to reduce most broker's terrible order entry apis to
a very simple protocol built on a few easy to grok and/or a very simple protocol built on a few easy to grok and/or
"rantsy" premises: "rantsy" premises:
@ -196,22 +194,21 @@ async def open_ems(
- 'dark_executed', 'broker_executed' - 'dark_executed', 'broker_executed'
- 'broker_filled' - 'broker_filled'
''' """
# wait for service to connect back to us signalling # wait for service to connect back to us signalling
# ready for order commands # ready for order commands
book = get_orders() book = get_orders()
from ..data._source import unpack_fqsn
broker, symbol, suffix = unpack_fqsn(fqsn)
async with maybe_open_emsd(broker) as portal: async with maybe_open_emsd(broker) as portal:
async with ( async with (
# connect to emsd # connect to emsd
portal.open_context( portal.open_context(
_emsd_main, _emsd_main,
fqsn=fqsn, broker=broker,
symbol=symbol.key,
) as (ctx, (positions, accounts)), ) as (ctx, (positions, accounts)),
@ -221,7 +218,7 @@ async def open_ems(
async with trio.open_nursery() as n: async with trio.open_nursery() as n:
n.start_soon( n.start_soon(
relay_order_cmds_from_sync_code, relay_order_cmds_from_sync_code,
fqsn, symbol.key,
trades_stream trades_stream
) )

View File

@ -20,6 +20,7 @@ In da suit parlances: "Execution management systems"
""" """
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from dataclasses import dataclass, field from dataclasses import dataclass, field
from math import isnan
from pprint import pformat from pprint import pformat
import time import time
from typing import AsyncIterator, Callable from typing import AsyncIterator, Callable
@ -53,8 +54,7 @@ def mk_check(
action: str, action: str,
) -> Callable[[float, float], bool]: ) -> Callable[[float, float], bool]:
''' """Create a predicate for given ``exec_price`` based on last known
Create a predicate for given ``exec_price`` based on last known
price, ``known_last``. price, ``known_last``.
This is an automatic alert level thunk generator based on where the This is an automatic alert level thunk generator based on where the
@ -62,7 +62,7 @@ def mk_check(
interest is; pick an appropriate comparison operator based on interest is; pick an appropriate comparison operator based on
avoiding the case where a predicate returns true immediately. avoiding the case where a predicate returns true immediately.
''' """
# str compares: # str compares:
# https://stackoverflow.com/questions/46708708/compare-strings-in-numba-compiled-function # https://stackoverflow.com/questions/46708708/compare-strings-in-numba-compiled-function
@ -80,9 +80,7 @@ def mk_check(
return check_lt return check_lt
raise ValueError( raise ValueError('trigger: {trigger_price}, last: {known_last}')
f'trigger: {trigger_price}, last: {known_last}'
)
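A sketch of the predicate style ``mk_check()`` generates: a buy level above the last known price arms a >= trigger (prices are hypothetical)::

    trigger_price, known_last = 105.0, 100.0

    def check_gt(price: float) -> bool:
        return price >= trigger_price

    assert not check_gt(known_last)
    assert check_gt(105.5)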
@dataclass @dataclass
@ -114,8 +112,8 @@ class _DarkBook:
# tracks most recent values per symbol each from data feed # tracks most recent values per symbol each from data feed
lasts: dict[ lasts: dict[
str, tuple[str, str],
float, float
] = field(default_factory=dict) ] = field(default_factory=dict)
# mapping of piker ems order ids to current brokerd order flow message # mapping of piker ems order ids to current brokerd order flow message
@ -136,42 +134,40 @@ async def clear_dark_triggers(
ems_client_order_stream: tractor.MsgStream, ems_client_order_stream: tractor.MsgStream,
quote_stream: tractor.ReceiveMsgStream, # noqa quote_stream: tractor.ReceiveMsgStream, # noqa
broker: str, broker: str,
fqsn: str, symbol: str,
book: _DarkBook, book: _DarkBook,
) -> None: ) -> None:
''' """Core dark order trigger loop.
Core dark order trigger loop.
Scan the (price) data feed and submit triggered orders Scan the (price) data feed and submit triggered orders
to broker. to broker.
''' """
# this stream may eventually contain multiple symbols
# XXX: optimize this for speed! # XXX: optimize this for speed!
# TODO:
# - numba all this!
# - this stream may eventually contain multiple symbols
async for quotes in quote_stream: async for quotes in quote_stream:
# TODO: numba all this!
# start = time.time() # start = time.time()
for sym, quote in quotes.items(): for sym, quote in quotes.items():
execs = book.orders.get(sym, {})
execs = book.orders.get(sym, None)
if execs is None:
continue
for tick in iterticks( for tick in iterticks(
quote, quote,
# dark order price filter(s) # dark order price filter(s)
types=( types=('ask', 'bid', 'trade', 'last')
'ask',
'bid',
'trade',
'last',
# 'dark_trade', # TODO: should allow via config?
)
): ):
price = tick.get('price') price = tick.get('price')
ttype = tick['type'] ttype = tick['type']
# update to keep new cmds informed # update to keep new cmds informed
book.lasts[sym] = price book.lasts[(broker, symbol)] = price
for oid, ( for oid, (
pred, pred,
@ -182,6 +178,7 @@ async def clear_dark_triggers(
) in ( ) in (
tuple(execs.items()) tuple(execs.items())
): ):
if ( if (
not pred or not pred or
ttype not in tf or ttype not in tf or
@ -196,7 +193,6 @@ async def clear_dark_triggers(
action: str = cmd['action'] action: str = cmd['action']
symbol: str = cmd['symbol'] symbol: str = cmd['symbol']
bfqsn: str = symbol.replace(f'.{broker}', '')
if action == 'alert': if action == 'alert':
# nothing to do but relay a status # nothing to do but relay a status
@ -226,7 +222,7 @@ async def clear_dark_triggers(
# order-request and instead create a new one. # order-request and instead create a new one.
reqid=None, reqid=None,
symbol=bfqsn, symbol=sym,
price=submit_price, price=submit_price,
size=cmd['size'], size=cmd['size'],
) )
@ -248,9 +244,12 @@ async def clear_dark_triggers(
oid=oid, # ems order id oid=oid, # ems order id
resp=resp, resp=resp,
time_ns=time.time_ns(), time_ns=time.time_ns(),
symbol=fqsn,
symbol=symbol,
trigger_price=price, trigger_price=price,
broker_details={'name': broker}, broker_details={'name': broker},
cmd=cmd, # original request message cmd=cmd, # original request message
).dict() ).dict()
@ -263,20 +262,12 @@ async def clear_dark_triggers(
f'pred for {oid} was already removed!?' f'pred for {oid} was already removed!?'
) )
try: await ems_client_order_stream.send(msg)
await ems_client_order_stream.send(msg)
except (
trio.ClosedResourceError,
):
log.warning(
f'client {ems_client_order_stream} stream is broke'
)
break
else: # condition scan loop complete else: # condition scan loop complete
log.debug(f'execs are {execs}') log.debug(f'execs are {execs}')
if execs: if execs:
book.orders[fqsn] = execs book.orders[symbol] = execs
# print(f'execs scan took: {time.time() - start}') # print(f'execs scan took: {time.time() - start}')
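A sketch of the tick filtering done in the loop above; ``iterticks`` is assumed to yield only ticks whose 'type' is in the given ``types``::

    quote = {'ticks': [
        {'type': 'bid', 'price': 99.5},
        {'type': 'dark_trade', 'price': 99.7},
        {'type': 'trade', 'price': 100.0},
    ]}
    wanted = ('ask', 'bid', 'trade', 'last')
    prices = [t['price'] for t in quote['ticks'] if t['type'] in wanted]
    assert prices == [99.5, 100.0]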
@ -299,8 +290,7 @@ class TradesRelay:
class Router(BaseModel): class Router(BaseModel):
''' '''Order router which manages and tracks per-broker dark book,
Order router which manages and tracks per-broker dark book,
alerts, clearing and related data feed management. alerts, clearing and related data feed management.
A singleton per ``emsd`` actor. A singleton per ``emsd`` actor.
@ -388,8 +378,7 @@ async def open_brokerd_trades_dialogue(
task_status: TaskStatus[TradesRelay] = trio.TASK_STATUS_IGNORED, task_status: TaskStatus[TradesRelay] = trio.TASK_STATUS_IGNORED,
) -> tuple[dict, tractor.MsgStream]: ) -> tuple[dict, tractor.MsgStream]:
''' '''Open and yield ``brokerd`` trades dialogue context-stream if none
Open and yield ``brokerd`` trades dialogue context-stream if none
already exists. already exists.
''' '''
@ -426,7 +415,8 @@ async def open_brokerd_trades_dialogue(
# actor to simulate the real IPC load it'll have when also # actor to simulate the real IPC load it'll have when also
# pulling data from feeds # pulling data from feeds
open_trades_endpoint = paper.open_paperboi( open_trades_endpoint = paper.open_paperboi(
fqsn='.'.join([symbol, broker]), broker=broker,
symbol=symbol,
loglevel=loglevel, loglevel=loglevel,
) )
@ -464,13 +454,12 @@ async def open_brokerd_trades_dialogue(
# locally cache and track positions per account. # locally cache and track positions per account.
pps = {} pps = {}
for msg in positions: for msg in positions:
log.info(f'loading pp: {msg}')
account = msg['account'] account = msg['account']
assert account in accounts assert account in accounts
pps.setdefault( pps.setdefault(
f'{msg["symbol"]}.{broker}', msg['symbol'],
{} {}
)[account] = msg )[account] = msg
@ -500,9 +489,7 @@ async def open_brokerd_trades_dialogue(
finally: finally:
# parent context must have been closed # parent context must have been closed
# remove from cache so next client will respawn if needed # remove from cache so next client will respawn if needed
relay = _router.relays.pop(broker, None) _router.relays.pop(broker)
if not relay:
log.warning(f'Relay for {broker} was already removed!?')
@tractor.context @tractor.context
@ -534,8 +521,7 @@ async def translate_and_relay_brokerd_events(
router: Router, router: Router,
) -> AsyncIterator[dict]: ) -> AsyncIterator[dict]:
''' '''Trades update loop - receive updates from ``brokerd`` trades
Trades update loop - receive updates from ``brokerd`` trades
endpoint, convert to EMS response msgs, transmit **only** to endpoint, convert to EMS response msgs, transmit **only** to
ordering client(s). ordering client(s).
@ -563,10 +549,7 @@ async def translate_and_relay_brokerd_events(
name = brokerd_msg['name'] name = brokerd_msg['name']
log.info( log.info(f'Received broker trade event:\n{pformat(brokerd_msg)}')
f'Received broker trade event:\n'
f'{pformat(brokerd_msg)}'
)
if name == 'position': if name == 'position':
@ -574,28 +557,14 @@ async def translate_and_relay_brokerd_events(
# XXX: this will be useful for automatic strats yah? # XXX: this will be useful for automatic strats yah?
# keep pps per account up to date locally in ``emsd`` mem # keep pps per account up to date locally in ``emsd`` mem
sym, broker = pos_msg['symbol'], pos_msg['broker'] relay.positions.setdefault(pos_msg['symbol'], {}).setdefault(
relay.positions.setdefault(
# NOTE: translate to a FQSN!
f'{sym}.{broker}',
{}
).setdefault(
pos_msg['account'], {} pos_msg['account'], {}
).update(pos_msg) ).update(pos_msg)
# fan-out-relay position msgs immediately by # fan-out-relay position msgs immediately by
# broadcasting updates on all client streams # broadcasting updates on all client streams
for client_stream in router.clients.copy(): for client_stream in router.clients:
try: await client_stream.send(pos_msg)
await client_stream.send(pos_msg)
except(
trio.ClosedResourceError,
trio.BrokenResourceError,
):
router.clients.remove(client_stream)
log.warning(
f'client for {client_stream} was already closed?')
continue continue
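The nested ``setdefault`` chain above maintains a two-level cache: fqsn first, account second, with the latest position msg merged in. The same update pattern in miniature (toy msg fields, not the full position schema)::

    positions: dict[str, dict[str, dict]] = {}

    def update_pp(pos_msg: dict) -> None:
        # key by fully qualified symbol name, then account
        fqsn = f"{pos_msg['symbol']}.{pos_msg['broker']}"
        positions.setdefault(fqsn, {}).setdefault(
            pos_msg['account'], {}
        ).update(pos_msg)

    update_pp({
        'symbol': 'xbtusdt', 'broker': 'kraken',
        'account': 'paper', 'size': 1.0,
    })
    assert 'paper' in positions['xbtusdt.kraken']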
@ -618,28 +587,19 @@ async def translate_and_relay_brokerd_events(
# packed at submission since we already know it ahead of # packed at submission since we already know it ahead of
# time # time
paper = brokerd_msg['broker_details'].get('paper_info') paper = brokerd_msg['broker_details'].get('paper_info')
ext = brokerd_msg['broker_details'].get('external')
if paper: if paper:
# paperboi keeps the ems id up front # paperboi keeps the ems id up front
oid = paper['oid'] oid = paper['oid']
elif ext: else:
# may be an order msg specified as "external" to the # may be an order msg specified as "external" to the
# piker ems flow (i.e. generated by some other # piker ems flow (i.e. generated by some other
# external broker backend client, like tws for ib) # external broker backend client, like tws for ib)
log.error(f"External trade event {ext}") ext = brokerd_msg['broker_details'].get('external')
if ext:
log.error(f"External trade event {ext}")
continue continue
else:
# something is out of order, we don't have an oid for
# this broker-side message.
log.error(
f'Unknown oid:{oid} for msg:\n'
f'{pformat(brokerd_msg)}'
'Unable to relay message to client side!?'
)
else: else:
# check for existing live flow entry # check for existing live flow entry
entry = book._ems_entries.get(oid) entry = book._ems_entries.get(oid)
@ -837,9 +797,7 @@ async def process_client_order_cmds(
if reqid: if reqid:
# send cancel to brokerd immediately! # send cancel to brokerd immediately!
log.info( log.info(f"Submitting cancel for live order {reqid}")
f'Submitting cancel for live order {reqid}'
)
await brokerd_order_stream.send(msg.dict()) await brokerd_order_stream.send(msg.dict())
@ -876,15 +834,11 @@ async def process_client_order_cmds(
msg = Order(**cmd) msg = Order(**cmd)
fqsn = msg.symbol sym = msg.symbol
trigger_price = msg.price trigger_price = msg.price
size = msg.size size = msg.size
exec_mode = msg.exec_mode exec_mode = msg.exec_mode
broker = msg.brokers[0] broker = msg.brokers[0]
# remove the broker part before creating a message
# to send to the specific broker since they probably
# aren't expectig their own name, but should they?
sym = fqsn.replace(f'.{broker}', '')
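Given an fqsn of the form ``<name>.<venue>.<suffix>.<broker>``, stripping the trailing broker token is a plain suffix removal, assuming the broker name never collides with an inner token::

    def strip_broker(fqsn: str, broker: str) -> str:
        # 'xbtusdt.kraken' -> 'xbtusdt'
        return fqsn.replace(f'.{broker}', '')

    assert strip_broker('xbtusdt.kraken', 'kraken') == 'xbtusdt'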
if exec_mode == 'live' and action in ('buy', 'sell',): if exec_mode == 'live' and action in ('buy', 'sell',):
@ -942,7 +896,7 @@ async def process_client_order_cmds(
# price received from the feed, instead of being # price received from the feed, instead of being
# like every other shitty tina platform that makes # like every other shitty tina platform that makes
# the user choose the predicate operator. # the user choose the predicate operator.
last = dark_book.lasts[fqsn] last = dark_book.lasts[(broker, sym)]
pred = mk_check(trigger_price, last, action) pred = mk_check(trigger_price, last, action)
spread_slap: float = 5 spread_slap: float = 5
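``mk_check()`` isn't shown in this hunk, but from the call site it must return a one-sided price predicate whose direction is inferred from where the trigger sits relative to the last quote. A plausible sketch, not the actual clearing-engine implementation::

    from typing import Callable

    def mk_check(
        trigger: float,
        last: float,
        action: str,  # unused in this toy version
    ) -> Callable[[float], bool]:
        # infer >= vs <= from the trigger's side of the last
        # price so the user never picks the operator themselves
        if trigger >= last:
            return lambda price: price >= trigger
        return lambda price: price <= trigger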
@ -973,7 +927,7 @@ async def process_client_order_cmds(
# dark book entry if the order id already exists # dark book entry if the order id already exists
dark_book.orders.setdefault( dark_book.orders.setdefault(
fqsn, {} sym, {}
)[oid] = ( )[oid] = (
pred, pred,
tickfilter, tickfilter,
@ -1000,8 +954,8 @@ async def process_client_order_cmds(
async def _emsd_main( async def _emsd_main(
ctx: tractor.Context, ctx: tractor.Context,
fqsn: str, broker: str,
symbol: str,
_exec_mode: str = 'dark', # ('paper', 'dark', 'live') _exec_mode: str = 'dark', # ('paper', 'dark', 'live')
loglevel: str = 'info', loglevel: str = 'info',
@ -1043,8 +997,6 @@ async def _emsd_main(
global _router global _router
assert _router assert _router
from ..data._source import unpack_fqsn
broker, symbol, suffix = unpack_fqsn(fqsn)
dark_book = _router.get_dark_book(broker) dark_book = _router.get_dark_book(broker)
# TODO: would be nice if in tractor we can require either a ctx arg, # TODO: would be nice if in tractor we can require either a ctx arg,
@ -1057,16 +1009,18 @@ async def _emsd_main(
# spawn one task per broker feed # spawn one task per broker feed
async with ( async with (
maybe_open_feed( maybe_open_feed(
[fqsn], broker,
[symbol],
loglevel=loglevel, loglevel=loglevel,
) as (feed, quote_stream), ) as (feed, stream),
): ):
# XXX: this should be initial price quote from target provider # XXX: this should be initial price quote from target provider
first_quote = feed.first_quotes[fqsn] first_quote = feed.first_quotes[symbol]
book = _router.get_dark_book(broker) book = _router.get_dark_book(broker)
book.lasts[fqsn] = first_quote['last'] last = book.lasts[(broker, symbol)] = first_quote['last']
assert not isnan(last) # ib is a cucker but we've fixed it in the backend
# open a stream with the brokerd backend for order # open a stream with the brokerd backend for order
# flow dialogue # flow dialogue
@ -1090,13 +1044,13 @@ async def _emsd_main(
# flatten out collected pps from brokerd for delivery # flatten out collected pps from brokerd for delivery
pp_msgs = { pp_msgs = {
fqsn: list(pps.values()) sym: list(pps.values())
for fqsn, pps in relay.positions.items() for sym, pps in relay.positions.items()
} }
# signal to client that we're started and deliver # signal to client that we're started and deliver
# all known pps and accounts for this ``brokerd``. # all known pps and accounts for this ``brokerd``.
await ems_ctx.started((pp_msgs, list(relay.accounts))) await ems_ctx.started((pp_msgs, relay.accounts))
# establish 2-way stream with requesting order-client and # establish 2-way stream with requesting order-client and
# begin handling inbound order requests and updates # begin handling inbound order requests and updates
@ -1108,9 +1062,9 @@ async def _emsd_main(
brokerd_stream, brokerd_stream,
ems_client_order_stream, ems_client_order_stream,
quote_stream, stream,
broker, broker,
fqsn, # form: <name>.<venue>.<suffix>.<broker> symbol,
book book
) )
@ -1126,7 +1080,7 @@ async def _emsd_main(
# relay.brokerd_dialogue, # relay.brokerd_dialogue,
brokerd_stream, brokerd_stream,
fqsn, symbol,
feed, feed,
dark_book, dark_book,
_router, _router,

View File

@ -155,11 +155,8 @@ class BrokerdOrder(BaseModel):
class BrokerdOrderAck(BaseModel): class BrokerdOrderAck(BaseModel):
''' '''Immediate response to a brokerd order request providing
Immediate response to a brokerd order request providing the broker the broker specific unique order id.
specific unique order id so that the EMS can associate this
(presumably differently formatted broker side ID) with our own
``.oid`` (which is a uuid4).
''' '''
name: str = 'ack' name: str = 'ack'
@ -184,7 +181,7 @@ class BrokerdStatus(BaseModel):
# { # {
# 'submitted', # 'submitted',
# 'cancelled', # 'cancelled',
# 'filled', # 'executed',
# } # }
status: str status: str
@ -206,8 +203,7 @@ class BrokerdStatus(BaseModel):
class BrokerdFill(BaseModel): class BrokerdFill(BaseModel):
''' '''A single message indicating a "fill-details" event from the broker
A single message indicating a "fill-details" event from the broker
if available. if available.
''' '''
@ -231,18 +227,16 @@ class BrokerdFill(BaseModel):
class BrokerdError(BaseModel): class BrokerdError(BaseModel):
''' '''Optional error type that can be relayed to emsd for error handling.
Optional error type that can be relayed to emsd for error handling.
This is still a TODO thing since we're not sure how to employ it yet. This is still a TODO thing since we're not sure how to employ it yet.
''' '''
name: str = 'error' name: str = 'error'
oid: str oid: str
# if no brokerd order request was actually submitted (eg. we errored # if no brokerd order request was actually submitted (eg. we errored
# at the ``pikerd`` layer) then there will be no ``reqid`` allocated. # at the ``pikerd`` layer) then there will be no ``reqid`` allocated.
reqid: Optional[Union[int, str]] = None reqid: Union[int, str] = ''
symbol: str symbol: str
reason: str reason: str
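For reference, constructing and relaying one of these over the order stream looks roughly like the following (field values made up; runs inside an async task)::

    err = BrokerdError(
        oid='<ems-side uuid4>',
        reqid=None,  # no brokerd request was ever submitted
        symbol='xbtusdt.kraken',
        reason='only a `paper` account selection is valid',
    )
    # models are shipped as plain dicts over the IPC stream
    await ems_order_stream.send(err.dict())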

View File

@ -32,7 +32,6 @@ from dataclasses import dataclass
from .. import data from .. import data
from ..data._normalize import iterticks from ..data._normalize import iterticks
from ..data._source import unpack_fqsn
from ..log import get_logger from ..log import get_logger
from ._messages import ( from ._messages import (
BrokerdCancel, BrokerdOrder, BrokerdOrderAck, BrokerdStatus, BrokerdCancel, BrokerdOrder, BrokerdOrderAck, BrokerdStatus,
@ -390,7 +389,7 @@ async def handle_order_requests(
account = request_msg['account'] account = request_msg['account']
if account != 'paper': if account != 'paper':
log.error( log.error(
'This is a paper account, only a `paper` selection is valid' 'On a paper account, only a `paper` selection is valid'
) )
await ems_order_stream.send(BrokerdError( await ems_order_stream.send(BrokerdError(
oid=request_msg['oid'], oid=request_msg['oid'],
@ -447,7 +446,7 @@ async def trades_dialogue(
ctx: tractor.Context, ctx: tractor.Context,
broker: str, broker: str,
fqsn: str, symbol: str,
loglevel: str = None, loglevel: str = None,
) -> None: ) -> None:
@ -456,15 +455,15 @@ async def trades_dialogue(
async with ( async with (
data.open_feed( data.open_feed(
[fqsn], broker,
[symbol],
loglevel=loglevel, loglevel=loglevel,
) as feed, ) as feed,
): ):
# TODO: load paper positions per broker from .toml config file # TODO: load paper positions per broker from .toml config file
# and pass as symbol to position data mapping: ``dict[str, dict]`` # and pass as symbol to position data mapping: ``dict[str, dict]``
# await ctx.started(all_positions) await ctx.started(({}, ['paper']))
await ctx.started(({}, {'paper',}))
async with ( async with (
ctx.open_stream() as ems_stream, ctx.open_stream() as ems_stream,
@ -491,16 +490,15 @@ async def trades_dialogue(
@asynccontextmanager @asynccontextmanager
async def open_paperboi( async def open_paperboi(
fqsn: str, broker: str,
symbol: str,
loglevel: str, loglevel: str,
) -> Callable: ) -> Callable:
''' '''Spawn a paper engine actor and yield through access to
Spawn a paper engine actor and yield through access to
its context. its context.
''' '''
broker, symbol, expiry = unpack_fqsn(fqsn)
service_name = f'paperboi.{broker}' service_name = f'paperboi.{broker}'
async with ( async with (
@ -519,7 +517,7 @@ async def open_paperboi(
async with portal.open_context( async with portal.open_context(
trades_dialogue, trades_dialogue,
broker=broker, broker=broker,
fqsn=fqsn, symbol=symbol,
loglevel=loglevel, loglevel=loglevel,
) as (ctx, first): ) as (ctx, first):

View File

@ -1,25 +1,7 @@
# piker: trading gear for hackers """
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
CLI commons. CLI commons.
"""
'''
import os import os
from pprint import pformat
import click import click
import trio import trio
@ -34,22 +16,29 @@ from .. import config
log = get_logger('cli') log = get_logger('cli')
DEFAULT_BROKER = 'questrade' DEFAULT_BROKER = 'questrade'
_config_dir = click.get_app_dir('piker')
_watchlists_data_path = os.path.join(_config_dir, 'watchlists.json')
_context_defaults = dict(
default_map={
# Questrade specific quote poll rates
'monitor': {
'rate': 3,
},
'optschain': {
'rate': 1,
},
}
)
@click.command() @click.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level') @click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging') @click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--pdb', is_flag=True, help='Enable tractor debug mode') @click.option('--pdb', is_flag=True, help='Enable tractor debug mode')
@click.option('--host', '-h', default='127.0.0.1', help='Host address to bind') @click.option('--host', '-h', default='127.0.0.1', help='Host address to bind')
@click.option( def pikerd(loglevel, host, tl, pdb):
'--tsdb', """Spawn the piker broker-daemon.
is_flag=True, """
help='Enable local ``marketstore`` instance'
)
def pikerd(loglevel, host, tl, pdb, tsdb):
'''
Spawn the piker broker-daemon.
'''
from .._daemon import open_pikerd from .._daemon import open_pikerd
log = get_console_log(loglevel) log = get_console_log(loglevel)
@ -63,38 +52,13 @@ def pikerd(loglevel, host, tl, pdb, tsdb):
)) ))
async def main(): async def main():
async with open_pikerd(loglevel=loglevel, debug_mode=pdb):
async with (
open_pikerd(
loglevel=loglevel,
debug_mode=pdb,
), # normally delivers a ``Services`` handle
trio.open_nursery() as n,
):
if tsdb:
from piker.data._ahab import start_ahab
from piker.data.marketstore import start_marketstore
log.info('Spawning `marketstore` supervisor')
ctn_ready, config, (cid, pid) = await n.start(
start_ahab,
'marketstored',
start_marketstore,
)
log.info(
f'`marketstore` up!\n'
f'`marketstored` pid: {pid}\n'
f'docker container id: {cid}\n'
f'config: {pformat(config)}'
)
await trio.sleep_forever() await trio.sleep_forever()
trio.run(main) trio.run(main)
@click.group(context_settings=config._context_defaults) @click.group(context_settings=_context_defaults)
@click.option( @click.option(
'--brokers', '-b', '--brokers', '-b',
default=[DEFAULT_BROKER], default=[DEFAULT_BROKER],
@ -123,8 +87,8 @@ def cli(ctx, brokers, loglevel, tl, configdir):
'loglevel': loglevel, 'loglevel': loglevel,
'tractorloglevel': None, 'tractorloglevel': None,
'log': get_console_log(loglevel), 'log': get_console_log(loglevel),
'confdir': config._config_dir, 'confdir': _config_dir,
'wl_path': config._watchlists_data_path, 'wl_path': _watchlists_data_path,
}) })
# allow enabling same loglevel in ``tractor`` machinery # allow enabling same loglevel in ``tractor`` machinery
@ -145,11 +109,13 @@ def services(config, tl, names):
) as portal: ) as portal:
registry = await portal.run_from_ns('self', 'get_registry') registry = await portal.run_from_ns('self', 'get_registry')
json_d = {} json_d = {}
for key, socket in registry.items(): for uid, socket in registry.items():
# name, uuid = uid name, uuid = uid
host, port = socket host, port = socket
json_d[key] = f'{host}:{port}' json_d[f'{name}.{uuid}'] = f'{host}:{port}'
click.echo(f"{colorize_json(json_d)}") click.echo(
f"Available `piker` services:\n{colorize_json(json_d)}"
)
tractor.run( tractor.run(
list_services, list_services,

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship for pikers) # Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -16,10 +16,7 @@
""" """
Broker configuration mgmt. Broker configuration mgmt.
""" """
import platform
import sys
import os import os
from os.path import dirname from os.path import dirname
import shutil import shutil
@ -27,106 +24,14 @@ from typing import Optional
from bidict import bidict from bidict import bidict
import toml import toml
import click
from .log import get_logger from .log import get_logger
log = get_logger('broker-config') log = get_logger('broker-config')
_config_dir = click.get_app_dir('piker')
# taken from ``click`` since apparently they have some _file_name = 'brokers.toml'
# super weirdness with sigint and sudo..no clue
def get_app_dir(app_name, roaming=True, force_posix=False):
r"""Returns the config folder for the application. The default behavior
is to return whatever is most appropriate for the operating system.
To give you an idea, for an app called ``"Foo Bar"``, something like
the following folders could be returned:
Mac OS X:
``~/Library/Application Support/Foo Bar``
Mac OS X (POSIX):
``~/.foo-bar``
Unix:
``~/.config/foo-bar``
Unix (POSIX):
``~/.foo-bar``
Win XP (roaming):
``C:\Documents and Settings\<user>\Local Settings\Application Data\Foo``
Win XP (not roaming):
``C:\Documents and Settings\<user>\Application Data\Foo Bar``
Win 7 (roaming):
``C:\Users\<user>\AppData\Roaming\Foo Bar``
Win 7 (not roaming):
``C:\Users\<user>\AppData\Local\Foo Bar``
.. versionadded:: 2.0
:param app_name: the application name. This should be properly capitalized
and can contain whitespace.
:param roaming: controls if the folder should be roaming or not on Windows.
Has no effect otherwise.
:param force_posix: if this is set to `True` then on any POSIX system the
folder will be stored in the home folder with a leading
dot instead of the XDG config home or darwin's
application support folder.
"""
def _posixify(name):
return "-".join(name.split()).lower()
# if WIN:
if platform.system() == 'Windows':
key = "APPDATA" if roaming else "LOCALAPPDATA"
folder = os.environ.get(key)
if folder is None:
folder = os.path.expanduser("~")
return os.path.join(folder, app_name)
if force_posix:
return os.path.join(
os.path.expanduser("~/.{}".format(_posixify(app_name))))
if sys.platform == "darwin":
return os.path.join(
os.path.expanduser("~/Library/Application Support"), app_name
)
return os.path.join(
os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")),
_posixify(app_name),
)
_config_dir = _click_config_dir = get_app_dir('piker')
_parent_user = os.environ.get('SUDO_USER')
if _parent_user:
non_root_user_dir = os.path.expanduser(
f'~{_parent_user}'
)
root = 'root'
_config_dir = (
non_root_user_dir +
_click_config_dir[
_click_config_dir.rfind(root) + len(root):
]
)
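That branch rewrites the ``click``-derived path so a ``sudo`` invocation still targets the invoking user's config dir: everything after the literal ``root`` segment is spliced onto the real user's home. Worked through with made-up paths::

    # click resolved '/root/.config/piker' but SUDO_USER is set
    click_dir = '/root/.config/piker'
    user_home = '/home/pikerdude'  # os.path.expanduser(f'~{user}')
    root = 'root'
    fixed = user_home + click_dir[click_dir.rfind(root) + len(root):]
    assert fixed == '/home/pikerdude/.config/piker'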
_conf_names: set[str] = {
'brokers',
'trades',
'watchlists',
}
_watchlists_data_path = os.path.join(_config_dir, 'watchlists.json')
_context_defaults = dict(
default_map={
# Questrade specific quote poll rates
'monitor': {
'rate': 3,
},
'optschain': {
'rate': 1,
},
}
)
def _override_config_dir( def _override_config_dir(
@ -136,72 +41,41 @@ def _override_config_dir(
_config_dir = path _config_dir = path
def _conf_fn_w_ext( def get_broker_conf_path():
name: str,
) -> str:
# change this if we ever change the config file format.
return f'{name}.toml'
def get_conf_path(
conf_name: str = 'brokers',
) -> str:
"""Return the default config path normally under """Return the default config path normally under
``~/.config/piker`` on linux. ``~/.config/piker`` on linux.
Contains files such as: Contains files such as:
- brokers.toml - brokers.toml
- watchlists.toml - watchlists.toml
- trades.toml
# maybe coming soon ;)
- signals.toml - signals.toml
- strats.toml - strats.toml
""" """
assert conf_name in _conf_names return os.path.join(_config_dir, _file_name)
fn = _conf_fn_w_ext(conf_name)
return os.path.join(
_config_dir,
fn,
)
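So every well-known config name maps to a ``.toml`` file under the config dir and unknown names fail fast on the assert, e.g. (illustrative home dir)::

    get_conf_path('watchlists')  # -> ~/.config/piker/watchlists.toml
    get_conf_path('trades')      # -> ~/.config/piker/trades.toml
    get_conf_path('bogus')       # AssertionError: not in _conf_names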
def repodir(): def repodir():
''' """Return the abspath to the repo directory.
Return the abspath to the repo directory. """
'''
dirpath = os.path.abspath( dirpath = os.path.abspath(
# we're 3 levels down in **this** module file # we're 3 levels down in **this** module file
dirname(dirname(os.path.realpath(__file__))) dirname(dirname(dirname(os.path.realpath(__file__))))
) )
return dirpath return dirpath
def load( def load(
conf_name: str = 'brokers',
path: str = None path: str = None
) -> (dict, str): ) -> (dict, str):
''' """Load broker config.
Load config file by name. """
path = path or get_broker_conf_path()
'''
path = path or get_conf_path(conf_name)
if not os.path.isfile(path): if not os.path.isfile(path):
fn = _conf_fn_w_ext(conf_name) shutil.copyfile(
os.path.join(repodir(), 'data/brokers.toml'),
template = os.path.join( path,
repodir(),
'config',
fn
) )
# try to copy in a template config to the user's directory
# if one exists.
if os.path.isfile(template):
shutil.copyfile(template, path)
config = toml.load(path) config = toml.load(path)
log.debug(f"Read config file {path}") log.debug(f"Read config file {path}")
@ -210,17 +84,13 @@ def load(
def write( def write(
config: dict, # toml config as dict config: dict, # toml config as dict
name: str = 'brokers',
path: str = None, path: str = None,
) -> None: ) -> None:
''' """Write broker config to disk.
Write broker config to disk.
Create a ``brokers.toml`` file if one does not exist. Create a ``brokers.toml`` file if one does not exist.
"""
''' path = path or get_broker_conf_path()
path = path or get_conf_path(name)
dirname = os.path.dirname(path) dirname = os.path.dirname(path)
if not os.path.isdir(dirname): if not os.path.isdir(dirname):
log.debug(f"Creating config dir {_config_dir}") log.debug(f"Creating config dir {_config_dir}")
@ -230,10 +100,7 @@ def write(
raise ValueError( raise ValueError(
"Watch out you're trying to write a blank config!") "Watch out you're trying to write a blank config!")
log.debug( log.debug(f"Writing config file {path}")
f"Writing config `{name}` file to:\n"
f"{path}"
)
with open(path, 'w') as cf: with open(path, 'w') as cf:
return toml.dump(config, cf) return toml.dump(config, cf)
@ -263,5 +130,4 @@ def load_accounts(
# our default paper engine entry # our default paper engine entry
accounts['paper'] = None accounts['paper'] = None
return accounts return accounts

View File

@ -1,385 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Supervisor for docker with included specific-image service helpers.
'''
import os
import time
from typing import (
Optional,
Callable,
Any,
)
from contextlib import asynccontextmanager as acm
import trio
from trio_typing import TaskStatus
import tractor
from tractor.msg import NamespacePath
import docker
import json
from docker.models.containers import Container as DockerContainer
from docker.errors import (
DockerException,
APIError,
)
from requests.exceptions import ConnectionError, ReadTimeout
from ..log import get_logger, get_console_log
from .. import config
log = get_logger(__name__)
class DockerNotStarted(Exception):
'Prolly you dint start da daemon bruh'
class ContainerError(RuntimeError):
'Error reported via app-container logging level'
@acm
async def open_docker(
url: Optional[str] = None,
**kwargs,
) -> docker.DockerClient:
client: Optional[docker.DockerClient] = None
try:
client = docker.DockerClient(
base_url=url,
**kwargs
) if url else docker.from_env(**kwargs)
yield client
except (
DockerException,
APIError,
) as err:
def unpack_msg(err: Exception) -> str:
args = getattr(err, 'args', None)
if args:
return args
else:
return str(err)
# could be more specific so let's check if it's just perms.
if err.args:
errs = err.args
for err in errs:
msg = unpack_msg(err)
if 'PermissionError' in msg:
raise DockerException('You dint run as root yo!')
elif 'FileNotFoundError' in msg:
raise DockerNotStarted('Did you start da service sister?')
# not perms?
raise
finally:
if client:
client.close()
class Container:
'''
Wrapper around a ``docker.models.containers.Container`` to include
log capture and relay through our native logging system and helper
method(s) for cancellation/teardown.
'''
def __init__(
self,
cntr: DockerContainer,
) -> None:
self.cntr = cntr
# log msg de-duplication
self.seen_so_far = set()
async def process_logs_until(
self,
patt: str,
bp_on_msg: bool = False,
) -> bool:
'''
Attempt to capture container log messages and relay through our
native logging system.
'''
seen_so_far = self.seen_so_far
while True:
logs = self.cntr.logs()
entries = logs.decode().split('\n')
for entry in entries:
# ignore null lines
if not entry:
continue
try:
record = json.loads(entry.strip())
except json.JSONDecodeError:
if 'Error' in entry:
raise RuntimeError(entry)
raise
msg = record['msg']
level = record['level']
if msg and entry not in seen_so_far:
seen_so_far.add(entry)
if bp_on_msg:
await tractor.breakpoint()
getattr(log, level, log.error)(f'{msg}')
# print(f'level: {level}')
if level in ('error', 'fatal'):
raise ContainerError(msg)
if patt in msg:
return True
# do a checkpoint so we don't block if cancelled B)
await trio.sleep(0.01)
return False
def try_signal(
self,
signal: str = 'SIGINT',
) -> bool:
try:
# XXX: market store doesn't seem to shutdown nicely all the
# time with this (maybe because there are still open grpc
# connections?) noticeably after client connections have been
# made or are in use/teardown. It works just fine if you
# just start and stop the container tho?..
log.cancel(f'SENDING {signal} to {self.cntr.id}')
self.cntr.kill(signal)
return True
except docker.errors.APIError as err:
if 'is not running' in err.explanation:
return False
async def cancel(
self,
stop_msg: str,
) -> None:
cid = self.cntr.id
# first try a graceful cancel
log.cancel(
f'SIGINT cancelling container: {cid}\n'
f'waiting on stop msg: "{stop_msg}"'
)
self.try_signal('SIGINT')
start = time.time()
for _ in range(30):
with trio.move_on_after(0.5) as cs:
cs.shield = True
await self.process_logs_until(stop_msg)
# if we aren't cancelled on above checkpoint then we
# assume we read the expected stop msg and terminated.
break
try:
log.info(f'Polling for container shutdown:\n{cid}')
if self.cntr.status not in {'exited', 'not-running'}:
self.cntr.wait(
timeout=0.1,
condition='not-running',
)
break
except (
ReadTimeout,
):
log.info(f'Still waiting on container:\n{cid}')
continue
except (
docker.errors.APIError,
ConnectionError,
):
log.exception('Docker connection failure')
break
else:
delay = time.time() - start
log.error(
f'Failed to kill container {cid} after {delay}s\n'
'sending SIGKILL..'
)
# get out the big guns, bc apparently marketstore
# doesn't actually know how to terminate gracefully
# :eyeroll:...
self.try_signal('SIGKILL')
self.cntr.wait(
timeout=3,
condition='not-running',
)
log.cancel(f'Container stopped: {cid}')
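The ``cancel()`` flow above is the usual graceful-then-forceful shutdown ladder: SIGINT, bounded polling for exit, then SIGKILL as the last resort. Distilled into a generic sketch (hypothetical ``proc`` handle, not the docker API)::

    import time

    def stop(proc, deadline: float = 15.0) -> None:
        proc.signal('SIGINT')  # ask nicely first
        start = time.time()
        while time.time() - start < deadline:
            if not proc.running():
                return  # graceful exit observed
            time.sleep(0.5)  # bounded poll, don't spin
        proc.signal('SIGKILL')  # the big guns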
@tractor.context
async def open_ahabd(
ctx: tractor.Context,
endpoint: str, # ns-pointer str-msg-type
**kwargs,
) -> None:
get_console_log('info', name=__name__)
async with open_docker() as client:
# TODO: eventually offer a config-oriented API to do the mounts,
# params, etc. passing to ``Container.run()``?
# call into endpoint for container config/init
ep_func = NamespacePath(endpoint).load_ref()
(
dcntr,
cntr_config,
start_msg,
stop_msg,
) = ep_func(client)
cntr = Container(dcntr)
with trio.move_on_after(1):
found = await cntr.process_logs_until(start_msg)
if not found and cntr not in client.containers.list():
raise RuntimeError(
'Failed to start `marketstore`, check logs for deats'
)
await ctx.started((
cntr.cntr.id,
os.getpid(),
cntr_config,
))
try:
# TODO: we might eventually want a proxy-style msg-prot here
# to allow remote control of containers without needing
# callers to have root perms?
await trio.sleep_forever()
finally:
with trio.CancelScope(shield=True):
await cntr.cancel(stop_msg)
async def start_ahab(
service_name: str,
endpoint: Callable[docker.DockerClient, DockerContainer],
task_status: TaskStatus[
tuple[
trio.Event,
dict[str, Any],
],
] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Start a ``docker`` container supervisor with given service name.
Currently the actor calling this task should normally be started
with root permissions (until we decide to use something that doesn't
require this, like docker's rootless mode or some wrapper project) but
te root perms are de-escalated after the docker supervisor sub-actor
is started.
'''
cn_ready = trio.Event()
try:
async with tractor.open_nursery(
loglevel='runtime',
) as tn:
portal = await tn.start_actor(
service_name,
enable_modules=[__name__]
)
# TODO: we have issues with this on teardown
# where ``tractor`` tries to issue ``os.kill()``
# and hits perms errors since the root process
# no longer has root perms..
# de-escalate root perms to the original user
# after the docker supervisor actor is spawned.
if config._parent_user:
import pwd
os.setuid(
pwd.getpwnam(
config._parent_user
)[2] # named user's uid
)
async with portal.open_context(
open_ahabd,
endpoint=str(NamespacePath.from_ref(endpoint)),
) as (ctx, first):
cid, pid, cntr_config = first
task_status.started((
cn_ready,
cntr_config,
(cid, pid),
))
await trio.sleep_forever()
# since we demoted root perms in this parent
# we'll get a perms error on proc cleanup in
# ``tractor`` nursery exit. just make sure
# the child is terminated and don't raise the
# error if so.
# TODO: we could also consider adding
# a ``tractor.ZombieDetected`` or something that we could raise
# if we find the child didn't terminate.
except PermissionError:
log.warning('Failed to cancel root permsed container')
except (
trio.MultiError,
) as err:
for subexc in err.exceptions:
if isinstance(subexc, PermissionError):
log.warning('Failed to cancel root perms-ed container')
return
else:
raise

View File

@ -14,67 +14,27 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' """
Stream format enforcement. Stream format enforcement.
"""
''' from typing import AsyncIterator, Optional, Tuple
from itertools import chain
from typing import AsyncIterator import numpy as np
def iterticks( def iterticks(
quote: dict, quote: dict,
types: tuple[str] = ( types: Tuple[str] = ('trade', 'dark_trade'),
'trade',
'dark_trade',
),
deduplicate_darks: bool = False,
) -> AsyncIterator: ) -> AsyncIterator:
''' '''
Iterate through ticks delivered per quote cycle. Iterate through ticks delivered per quote cycle.
''' '''
if deduplicate_darks:
assert 'dark_trade' in types
# print(f"{quote}\n\n") # print(f"{quote}\n\n")
ticks = quote.get('ticks', ()) ticks = quote.get('ticks', ())
trades = {}
darks = {}
if ticks: if ticks:
# do a first pass and attempt to remove duplicate dark
# trades with the same tick signature.
if deduplicate_darks:
for tick in ticks:
ttype = tick.get('type')
time = tick.get('time', None)
if time:
sig = (
time,
tick['price'],
tick['size']
)
if ttype == 'dark_trade':
darks[sig] = tick
elif ttype == 'trade':
trades[sig] = tick
# filter duplicates
for sig, tick in trades.items():
tick = darks.pop(sig, None)
if tick:
ticks.remove(tick)
# print(f'DUPLICATE {tick}')
# re-insert ticks
ticks.extend(list(chain(trades.values(), darks.values())))
for tick in ticks: for tick in ticks:
# print(f"{quote['symbol']}: {tick}") # print(f"{quote['symbol']}: {tick}")
ttype = tick.get('type') ttype = tick.get('type')
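The de-duplication pass keys each trade by a ``(time, price, size)`` signature and drops any ``dark_trade`` that has a lit ``trade`` twin. In miniature::

    ticks = [
        {'type': 'trade', 'time': 1, 'price': 10.0, 'size': 2},
        {'type': 'dark_trade', 'time': 1, 'price': 10.0, 'size': 2},
    ]

    def sig(t: dict) -> tuple:
        return (t['time'], t['price'], t['size'])

    lit = {sig(t) for t in ticks if t['type'] == 'trade'}
    deduped = [
        t for t in ticks
        if not (t['type'] == 'dark_trade' and sig(t) in lit)
    ]
    assert [t['type'] for t in deduped] == ['trade']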

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers) # Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -15,57 +15,40 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
""" """
Sampling and broadcast machinery for (soft) real-time delivery of Data buffers for fast shared humpy.
financial data flows.
""" """
from __future__ import annotations
from collections import Counter
import time import time
from typing import TYPE_CHECKING, Optional, Union from typing import Dict, List
import tractor import tractor
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
from ._sharedmem import ShmArray
from ..log import get_logger from ..log import get_logger
if TYPE_CHECKING:
from ._sharedmem import ShmArray
from .feed import _FeedsBus
log = get_logger(__name__) log = get_logger(__name__)
class sampler: # TODO: we could stick these in a composed type to avoid
''' # angering the "i hate module scoped variables crowd" (yawn).
Global sampling engine registry. _shms: Dict[int, List[ShmArray]] = {}
_start_increment: Dict[str, trio.Event] = {}
_incrementers: Dict[int, trio.CancelScope] = {}
_subscribers: Dict[str, tractor.Context] = {}
Manages state for sampling events, shm incrementing and
sample period logic.
''' def shm_incrementing(shm_token_name: str) -> trio.Event:
# TODO: we could stick these in a composed type to avoid global _start_increment
# angering the "i hate module scoped variables crowd" (yawn). return _start_increment.setdefault(shm_token_name, trio.Event())
ohlcv_shms: dict[int, list[ShmArray]] = {}
# holds one-task-per-sample-period tasks which are spawned as-needed by
# data feed requests with a given detected time step usually from
# history loading.
incrementers: dict[int, trio.CancelScope] = {}
# holds all the ``tractor.Context`` remote subscriptions for
# a particular sample period increment event: all subscribers are
# notified on a step.
subscribers: dict[int, tractor.Context] = {}
async def increment_ohlc_buffer( async def increment_ohlc_buffer(
delay_s: int, delay_s: int,
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED, task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED,
): ):
''' """Task which inserts new bars into the provided shared memory array
Task which inserts new bars into the provided shared memory array
every ``delay_s`` seconds. every ``delay_s`` seconds.
This task fulfills 2 purposes: This task fulfills 2 purposes:
@ -76,8 +59,8 @@ async def increment_ohlc_buffer(
Note that if **no** actor has initiated this task then **none** of Note that if **no** actor has initiated this task then **none** of
the underlying buffers will actually be incremented. the underlying buffers will actually be incremented.
"""
'''
# # wait for brokerd to signal we should start sampling # # wait for brokerd to signal we should start sampling
# await shm_incrementing(shm_token['shm_name']).wait() # await shm_incrementing(shm_token['shm_name']).wait()
@ -86,18 +69,19 @@ async def increment_ohlc_buffer(
# to solve this is to make this task aware of the instrument's # to solve this is to make this task aware of the instrument's
# tradable hours? # tradable hours?
global _incrementers
# adjust delay to compensate for trio processing time # adjust delay to compensate for trio processing time
ad = min(sampler.ohlcv_shms.keys()) - 0.001 ad = min(_shms.keys()) - 0.001
total_s = 0 # total seconds counted total_s = 0 # total seconds counted
lowest = min(sampler.ohlcv_shms.keys()) lowest = min(_shms.keys())
lowest_shm = sampler.ohlcv_shms[lowest][0]
ad = lowest - 0.001 ad = lowest - 0.001
with trio.CancelScope() as cs: with trio.CancelScope() as cs:
# register this time period step as active # register this time period step as active
sampler.incrementers[delay_s] = cs _incrementers[delay_s] = cs
task_status.started(cs) task_status.started(cs)
while True: while True:
@ -107,10 +91,8 @@ async def increment_ohlc_buffer(
total_s += lowest total_s += lowest
# increment all subscribed shm arrays # increment all subscribed shm arrays
# TODO: # TODO: this in ``numba``
# - this in ``numba`` for delay_s, shms in _shms.items():
# - just lookup shms for this step instead of iterating?
for delay_s, shms in sampler.ohlcv_shms.items():
if total_s % delay_s != 0: if total_s % delay_s != 0:
continue continue
@ -135,107 +117,65 @@ async def increment_ohlc_buffer(
# write to the buffer # write to the buffer
shm.push(last) shm.push(last)
await broadcast(delay_s, shm=lowest_shm) # broadcast the buffer index step
subs = _subscribers.get(delay_s, ())
for ctx in subs:
try:
await ctx.send_yield({'index': shm._last.value})
except (
trio.BrokenResourceError,
trio.ClosedResourceError
):
log.error(f'{ctx.chan.uid} dropped connection')
subs.remove(ctx)
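The scheduling arithmetic: sleep the smallest registered period each cycle, accumulate ``total_s``, and step a given period's buffers only when ``total_s`` divides evenly by it. With 1s and 5s buffers registered::

    periods = [1, 5]
    lowest = min(periods)  # the sleep quantum
    total_s = 0
    fired: list[tuple[int, int]] = []
    for _ in range(10):  # ten sleep cycles
        total_s += lowest
        for delay_s in periods:
            if total_s % delay_s == 0:
                fired.append((total_s, delay_s))
    # the 5s buffer only steps at t=5 and t=10
    assert [t for t, d in fired if d == 5] == [5, 10]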
async def broadcast( @tractor.stream
delay_s: int,
shm: Optional[ShmArray] = None,
) -> None:
'''
Broadcast the given ``shm: ShmArray``'s buffer index step to any
subscribers for a given sample period.
The sent msg will include the first and last index which slice into
the buffer's non-empty data.
'''
subs = sampler.subscribers.get(delay_s, ())
first = last = -1
if shm is None:
periods = sampler.ohlcv_shms.keys()
# if this is an update triggered by a history update there
# might not actually be any sampling bus setup since there's
# no "live feed" active yet.
if periods:
lowest = min(periods)
shm = sampler.ohlcv_shms[lowest][0]
first = shm._first.value
last = shm._last.value
for stream in subs:
try:
await stream.send({
'first': first,
'last': last,
'index': last,
})
except (
trio.BrokenResourceError,
trio.ClosedResourceError
):
log.error(
f'{stream._ctx.chan.uid} dropped connection'
)
try:
subs.remove(stream)
except ValueError:
log.warning(
f'{stream._ctx.chan.uid} sub already removed!?'
)
@tractor.context
async def iter_ohlc_periods( async def iter_ohlc_periods(
ctx: tractor.Context, ctx: tractor.Context,
delay_s: int, delay_s: int,
) -> None: ) -> None:
''' """
Subscribe to OHLC sampling "step" events: when the time Subscribe to OHLC sampling "step" events: when the time
aggregation period increments, this event stream emits an index aggregation period increments, this event stream emits an index
event. event.
''' """
# add our subscription # add our subscription
subs = sampler.subscribers.setdefault(delay_s, []) global _subscribers
await ctx.started() subs = _subscribers.setdefault(delay_s, [])
async with ctx.open_stream() as stream: subs.append(ctx)
subs.append(stream)
try:
# stream and block until cancelled
await trio.sleep_forever()
finally:
try: try:
# stream and block until cancelled subs.remove(ctx)
await trio.sleep_forever() except ValueError:
finally: log.error(
try: f'iOHLC step stream was already dropped for {ctx.chan.uid}?'
subs.remove(stream) )
except ValueError:
log.error(
f'iOHLC step stream was already dropped for {ctx.chan.uid}?'
)
async def sample_and_broadcast( async def sample_and_broadcast(
bus: _FeedsBus, # noqa bus: '_FeedsBus', # noqa
shm: ShmArray, shm: ShmArray,
quote_stream: trio.abc.ReceiveChannel, quote_stream: trio.abc.ReceiveChannel,
brokername: str,
sum_tick_vlm: bool = True, sum_tick_vlm: bool = True,
) -> None: ) -> None:
log.info("Started shared mem bar writer") log.info("Started shared mem bar writer")
overruns = Counter()
# iterate stream delivered by broker # iterate stream delivered by broker
async for quotes in quote_stream: async for quotes in quote_stream:
# TODO: ``numba`` this! # TODO: ``numba`` this!
for broker_symbol, quote in quotes.items(): for sym, quote in quotes.items():
# TODO: in theory you can send the IPC msg *before* writing # TODO: in theory you can send the IPC msg *before* writing
# to the sharedmem array to decrease latency, however, that # to the sharedmem array to decrease latency, however, that
# will require at least some way to prevent task switching # will require at least some way to prevent task switching
@ -299,22 +239,10 @@ async def sample_and_broadcast(
# end up triggering backpressure which which will # end up triggering backpressure which which will
# eventually block this producer end of the feed and # eventually block this producer end of the feed and
# thus other consumers still attached. # thus other consumers still attached.
subs: list[ subs = bus._subscribers[sym.lower()]
tuple[
Union[tractor.MsgStream, trio.MemorySendChannel],
tractor.Context,
Optional[float], # tick throttle in Hz
]
] = bus._subscribers[broker_symbol.lower()]
# NOTE: by default the broker backend doesn't append lags = 0
# its own "name" into the fqsn schema (but maybe it for (stream, tick_throttle) in subs:
# should?) so we have to manually generate the correct
# key here.
bsym = f'{broker_symbol}.{brokername}'
lags: int = 0
for (stream, ctx, tick_throttle) in subs:
try: try:
with trio.move_on_after(0.2) as cs: with trio.move_on_after(0.2) as cs:
@ -322,49 +250,21 @@ async def sample_and_broadcast(
# this is a send mem chan that likely # this is a send mem chan that likely
# pushes to the ``uniform_rate_send()`` below. # pushes to the ``uniform_rate_send()`` below.
try: try:
stream.send_nowait( stream.send_nowait((sym, quote))
(bsym, quote)
)
except trio.WouldBlock: except trio.WouldBlock:
chan = ctx.chan ctx = getattr(sream, '_ctx', None)
if ctx: if ctx:
log.warning( log.warning(
f'Feed overrun {bus.brokername} ->' f'Feed overrun {bus.brokername} ->'
f'{chan.uid} !!!' f'{ctx.channel.uid} !!!'
) )
else: else:
key = id(stream)
overruns[key] += 1
log.warning( log.warning(
f'Feed overrun {broker_symbol}' f'Feed overrun {bus.brokername} -> '
f'@{bus.brokername} -> '
f'feed @ {tick_throttle} Hz' f'feed @ {tick_throttle} Hz'
) )
if overruns[key] > 6:
# TODO: should we check for the
# context being cancelled? this
# could happen but the
# channel-ipc-pipe is still up.
if not chan.connected():
log.warning(
'Dropping broken consumer:\n'
f'{broker_symbol}:'
f'{ctx.cid}@{chan.uid}'
)
await stream.aclose()
raise trio.BrokenResourceError
else:
log.warning(
'Feed getting overrun bro!\n'
f'{broker_symbol}:'
f'{ctx.cid}@{chan.uid}'
)
continue
else: else:
await stream.send( await stream.send({sym: quote})
{bsym: quote}
)
if cs.cancelled_caught: if cs.cancelled_caught:
lags += 1 lags += 1
@ -376,29 +276,21 @@ async def sample_and_broadcast(
trio.ClosedResourceError, trio.ClosedResourceError,
trio.EndOfChannel, trio.EndOfChannel,
): ):
chan = ctx.chan ctx = getattr(stream, '_ctx', None)
if ctx: if ctx:
log.warning( log.warning(
'Dropped `brokerd`-quotes-feed connection:\n' f'{ctx.chan.uid} dropped '
f'{broker_symbol}:' '`brokerd`-quotes-feed connection'
f'{ctx.cid}@{chan.uid}'
) )
if tick_throttle: if tick_throttle:
assert stream._closed assert stream.closed()
# XXX: do we need to deregister here # XXX: do we need to deregister here
# if it's done in the fee bus code? # if it's done in the fee bus code?
# so far seems like no since this should all # so far seems like no since this should all
# be single-threaded. Doing it anyway though # be single-threaded. Doing it anyway though
# since there seems to be some kinda race.. # since there seems to be some kinda race..
try: subs.remove((stream, tick_throttle))
subs.remove((stream, tick_throttle))
except ValueError:
log.error(
f'Stream was already removed from subs!?\n'
f'{broker_symbol}:'
f'{ctx.cid}@{chan.uid}'
)
# TODO: a less naive throttler, here's some snippets: # TODO: a less naive throttler, here's some snippets:
@ -411,8 +303,6 @@ async def uniform_rate_send(
quote_stream: trio.abc.ReceiveChannel, quote_stream: trio.abc.ReceiveChannel,
stream: tractor.MsgStream, stream: tractor.MsgStream,
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
) -> None: ) -> None:
# TODO: compute the approx overhead latency per cycle # TODO: compute the approx overhead latency per cycle
@ -423,8 +313,6 @@ async def uniform_rate_send(
last_send = time.time() last_send = time.time()
diff = 0 diff = 0
task_status.started()
while True: while True:
# compute the remaining time to sleep for this throttled cycle # compute the remaining time to sleep for this throttled cycle
@ -432,12 +320,7 @@ async def uniform_rate_send(
if left_to_sleep > 0: if left_to_sleep > 0:
with trio.move_on_after(left_to_sleep) as cs: with trio.move_on_after(left_to_sleep) as cs:
try: sym, last_quote = await quote_stream.receive()
sym, last_quote = await quote_stream.receive()
except trio.EndOfChannel:
log.exception(f"feed for {stream} ended?")
break
diff = time.time() - last_send diff = time.time() - last_send
if not first_quote: if not first_quote:
@ -488,7 +371,7 @@ async def uniform_rate_send(
# we have a quote already so send it now. # we have a quote already so send it now.
# measured_rate = 1 / (time.time() - last_send) measured_rate = 1 / (time.time() - last_send)
# log.info( # log.info(
# f'`{sym}` throttled send hz: {round(measured_rate, ndigits=1)}' # f'`{sym}` throttled send hz: {round(measured_rate, ndigits=1)}'
# ) # )
@ -497,20 +380,10 @@ async def uniform_rate_send(
# rate timing exactly lul # rate timing exactly lul
try: try:
await stream.send({sym: first_quote}) await stream.send({sym: first_quote})
except ( except trio.ClosedResourceError:
# NOTE: any of these can be raised by ``tractor``'s IPC
# transport-layer and we want to be highly resilient
# to consumers which crash or lose network connection.
# I.e. we **DO NOT** want to crash and propagate up to
# ``pikerd`` these kinds of errors!
trio.ClosedResourceError,
trio.BrokenResourceError,
ConnectionResetError,
):
# if the feed consumer goes down then drop # if the feed consumer goes down then drop
# out of this rate limiter # out of this rate limiter
log.warning(f'{stream} closed') log.warning(f'{stream} closed')
await stream.aclose()
return return
# reset send cycle state # reset send cycle state
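The throttle math: a target rate ``r`` Hz gives a send period of ``1/r`` seconds, and each cycle only sleeps the remainder after subtracting time already spent receiving, so receive latency doesn't stretch the cadence. Numerically::

    rate = 6.0                 # target sends per second
    period = 1 / rate          # ~0.1667s between sends
    diff = 0.05                # seconds since the last send
    left_to_sleep = period - diff   # ~0.1167s still to wait
    assert left_to_sleep > 0   # otherwise send immediately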

View File

@ -18,19 +18,17 @@
NumPy compatible shared memory buffers for real-time IPC streaming. NumPy compatible shared memory buffers for real-time IPC streaming.
""" """
from __future__ import annotations from dataclasses import dataclass, asdict
from sys import byteorder from sys import byteorder
import time from typing import List, Tuple, Optional
from typing import Optional
from multiprocessing.shared_memory import SharedMemory, _USE_POSIX from multiprocessing.shared_memory import SharedMemory, _USE_POSIX
from multiprocessing import resource_tracker as mantracker
if _USE_POSIX: if _USE_POSIX:
from _posixshmem import shm_unlink from _posixshmem import shm_unlink
import tractor import tractor
import numpy as np import numpy as np
from pydantic import BaseModel
from numpy.lib import recfunctions as rfn
from ..log import get_logger from ..log import get_logger
from ._source import base_iohlc_dtype from ._source import base_iohlc_dtype
@ -39,41 +37,26 @@ from ._source import base_iohlc_dtype
log = get_logger(__name__) log = get_logger(__name__)
# how much is probably dependent on lifestyle # Tell the "resource tracker" thing to fuck off.
_secs_in_day = int(60 * 60 * 24) class ManTracker(mantracker.ResourceTracker):
# we try for a buncha times, but only on a run-every-other-day kinda week. def register(self, name, rtype):
_days_worth = 16 pass
_default_size = _days_worth * _secs_in_day
# where to start the new data append index def unregister(self, name, rtype):
_rt_buffer_start = int((_days_worth - 1) * _secs_in_day) pass
def ensure_running(self):
pass
def cuckoff_mantracker(): # "know your land and know your prey"
# https://www.dailymotion.com/video/x6ozzco
from multiprocessing import resource_tracker as mantracker mantracker._resource_tracker = ManTracker()
mantracker.register = mantracker._resource_tracker.register
# Tell the "resource tracker" thing to fuck off. mantracker.ensure_running = mantracker._resource_tracker.ensure_running
class ManTracker(mantracker.ResourceTracker): ensure_running = mantracker._resource_tracker.ensure_running
def register(self, name, rtype): mantracker.unregister = mantracker._resource_tracker.unregister
pass mantracker.getfd = mantracker._resource_tracker.getfd
def unregister(self, name, rtype):
pass
def ensure_running(self):
pass
# "know your land and know your prey"
# https://www.dailymotion.com/video/x6ozzco
mantracker._resource_tracker = ManTracker()
mantracker.register = mantracker._resource_tracker.register
mantracker.ensure_running = mantracker._resource_tracker.ensure_running
# ensure_running = mantracker._resource_tracker.ensure_running
mantracker.unregister = mantracker._resource_tracker.unregister
mantracker.getfd = mantracker._resource_tracker.getfd
cuckoff_mantracker()
class SharedInt: class SharedInt:
@ -99,42 +82,29 @@ class SharedInt:
if _USE_POSIX: if _USE_POSIX:
# We manually unlink to bypass all the "resource tracker" # We manually unlink to bypass all the "resource tracker"
# nonsense meant for non-SC systems. # nonsense meant for non-SC systems.
name = self._shm.name shm_unlink(self._shm.name)
try:
shm_unlink(name)
except FileNotFoundError:
# might be a teardown race here?
log.warning(f'Shm for {name} already unlinked?')
class _Token(BaseModel): @dataclass
''' class _Token:
Internal representation of a shared memory "token" """Internal representation of a shared memory "token"
which can be used to key a system wide post shm entry. which can be used to key a system wide post shm entry.
"""
'''
class Config:
frozen = True
shm_name: str # this serves as a "key" value shm_name: str # this serves as a "key" value
shm_first_index_name: str shm_first_index_name: str
shm_last_index_name: str shm_last_index_name: str
dtype_descr: tuple dtype_descr: List[Tuple[str]]
@property def __post_init__(self):
def dtype(self) -> np.dtype: # np.array requires a list for dtype
return np.dtype(list(map(tuple, self.dtype_descr))).descr self.dtype_descr = np.dtype(list(map(tuple, self.dtype_descr))).descr
def as_msg(self): def as_msg(self):
return self.dict() return asdict(self)
@classmethod @classmethod
def from_msg(cls, msg: dict) -> _Token: def from_msg(cls, msg: dict) -> '_Token':
if isinstance(msg, _Token): return msg if isinstance(msg, _Token) else _Token(**msg)
return msg
msg['dtype_descr'] = tuple(map(tuple, msg['dtype_descr']))
return _Token(**msg)
# TODO: this api? # TODO: this api?
@ -157,23 +127,20 @@ def _make_token(
key: str, key: str,
dtype: Optional[np.dtype] = None, dtype: Optional[np.dtype] = None,
) -> _Token: ) -> _Token:
''' """Create a serializable token that can be used
Create a serializable token that can be used
to access a shared array. to access a shared array.
"""
'''
dtype = base_iohlc_dtype if dtype is None else dtype dtype = base_iohlc_dtype if dtype is None else dtype
return _Token( return _Token(
shm_name=key, key,
shm_first_index_name=key + "_first", key + "_first",
shm_last_index_name=key + "_last", key + "_last",
dtype_descr=np.dtype(dtype).descr np.dtype(dtype).descr
) )
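Usage-wise a token is just the three shm key names plus a serializable dtype description, and it has to survive an IPC round-trip as a plain msg. A sketch (exact equality semantics depend on which model variant, pydantic or dataclass, is in play)::

    token = _make_token('btcusdt.binance.ohlcv')  # default OHLC dtype
    msg = token.as_msg()  # plain dict, safe to ship over IPC
    assert _Token.from_msg(msg) == token  # lossless round-trip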
class ShmArray: class ShmArray:
''' """A shared memory ``numpy`` (compatible) array API.
A shared memory ``numpy`` (compatible) array API.
An underlying shared memory buffer is allocated based on An underlying shared memory buffer is allocated based on
a user specified ``numpy.ndarray``. This fixed size array a user specified ``numpy.ndarray``. This fixed size array
@ -183,7 +150,7 @@ class ShmArray:
``SharedInt`` interfaces) values such that multiple processes can ``SharedInt`` interfaces) values such that multiple processes can
interact with the same array using a synchronized-index. interact with the same array using a synchronized-index.
''' """
def __init__( def __init__(
self, self,
shmarr: np.ndarray, shmarr: np.ndarray,
@ -204,21 +171,17 @@ class ShmArray:
self._post_init: bool = False self._post_init: bool = False
# pushing data does not write the index (aka primary key) # pushing data does not write the index (aka primary key)
dtype = shmarr.dtype self._write_fields = list(shmarr.dtype.fields.keys())[1:]
if dtype.fields:
self._write_fields = list(shmarr.dtype.fields.keys())[1:]
else:
self._write_fields = None
# TODO: ringbuf api? # TODO: ringbuf api?
@property @property
def _token(self) -> _Token: def _token(self) -> _Token:
return _Token( return _Token(
shm_name=self._shm.name, self._shm.name,
shm_first_index_name=self._first._shm.name, self._first._shm.name,
shm_last_index_name=self._last._shm.name, self._last._shm.name,
dtype_descr=tuple(self._array.dtype.descr), self._array.dtype.descr,
) )
@property @property
@ -234,8 +197,7 @@ class ShmArray:
@property @property
def array(self) -> np.ndarray: def array(self) -> np.ndarray:
''' '''Return an up-to-date ``np.ndarray`` view of the
Return an up-to-date ``np.ndarray`` view of the
so-far-written data to the underlying shm buffer. so-far-written data to the underlying shm buffer.
''' '''
@ -254,129 +216,62 @@ class ShmArray:
return a return a
def ustruct(
self,
fields: Optional[list[str]] = None,
# type that all field values will be cast to
# in the returned view.
common_dtype: np.dtype = np.float,
) -> np.ndarray:
array = self._array
if fields:
selection = array[fields]
# fcount = len(fields)
else:
selection = array
# fcount = len(array.dtype.fields)
# XXX: manual ``.view()`` attempt that also doesn't work.
# uview = selection.view(
# dtype='<f16',
# ).reshape(-1, 4, order='A')
# assert len(selection) == len(uview)
u = rfn.structured_to_unstructured(
selection,
# dtype=float,
copy=True,
)
# unstruct = np.ndarray(u.shape, dtype=a.dtype, buffer=shm.buf)
# array[:] = a[:]
return u
# return ShmArray(
# shmarr=u,
# first=self._first,
# last=self._last,
# shm=self._shm
# )
     def last(
         self,
         length: int = 1,

     ) -> np.ndarray:
-        '''
-        Return the last ``length``'s worth of ("row") entries from the
-        array.
-
-        '''
         return self.array[-length:]
     def push(
         self,
         data: np.ndarray,

-        field_map: Optional[dict[str, str]] = None,
         prepend: bool = False,
-        update_first: bool = True,
         start: Optional[int] = None,

     ) -> int:
-        '''
-        Ring buffer like "push" to append data
+        '''Ring buffer like "push" to append data
         into the buffer and return updated "last" index.

         NB: no actual ring logic yet to give a "loop around" on overflow
         condition, lel.

         '''
-        self._post_init = True
         length = len(data)
+        index = start or self._last.value

         if prepend:
-            index = (start or self._first.value) - length
+            index = self._first.value - length

             if index < 0:
                 raise ValueError(
                     f'Array size of {self._len} was overrun during prepend.\n'
-                    f'You have passed {abs(index)} too many datums.'
+                    'You have passed {abs(index)} too many datums.'
                 )

-        else:
-            index = start if start is not None else self._last.value

         end = index + length

-        if field_map:
-            src_names, dst_names = zip(*field_map.items())
-        else:
-            dst_names = src_names = self._write_fields
+        fields = self._write_fields

         try:
-            self._array[
-                list(dst_names)
-            ][index:end] = data[list(src_names)][:]
+            self._array[fields][index:end] = data[fields][:]

             # NOTE: there was a race here between updating
             # the first and last indices and when the next reader
             # tries to access ``.array`` (which due to the index
             # overlap will be empty). Pretty sure we've fixed it now
             # but leaving this here as a reminder.
-            if prepend and update_first and length:
-                assert index < self._first.value
+            if prepend:
+                assert index < self._first.value

-            if (
-                index < self._first.value
-                and update_first
-            ):
-                assert prepend, 'prepend=True not passed but index decreased?'
+            if index < self._first.value:
                 self._first.value = index

-            else:
+            elif not prepend:
                 self._last.value = end

-            self._post_init = True
             return end

         except ValueError as err:
-            if field_map:
-                raise
-
             # should raise if diff detected
             self.diff_err_fields(data)
             raise err
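A minimal sketch of the append/prepend flow above (hypothetical key; assumes the default OHLC dtype and that earlier prepend head-room exists):

    shm, opened = maybe_open_shm_array('xbtusdt.kraken.hist')
    new_bars = np.zeros(2, dtype=shm.array.dtype)

    last_index = shm.push(new_bars)    # append at the ._last index
    shm.push(new_bars, prepend=True)   # backfill history below ._first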
@ -403,7 +298,6 @@ class ShmArray:
f"Input array has unknown field(s): {only_in_theirs}" f"Input array has unknown field(s): {only_in_theirs}"
) )
# TODO: support "silent" prepends that don't update ._first.value?
def prepend( def prepend(
self, self,
data: np.ndarray, data: np.ndarray,
@ -430,6 +324,12 @@ class ShmArray:
        ...
# how much is probably dependent on lifestyle
_secs_in_day = int(60 * 60 * 24)
# we try for 3 times but only on a run-every-other-day kinda week.
_default_size = 3 * _secs_in_day
 def open_shm_array(
     key: Optional[str] = None,
@ -454,11 +354,7 @@ def open_shm_array(
         create=True,
         size=a.nbytes
     )
-    array = np.ndarray(
-        a.shape,
-        dtype=a.dtype,
-        buffer=shm.buf
-    )
+    array = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf)
     array[:] = a[:]
     array.setflags(write=int(not readonly))
@ -484,24 +380,7 @@ def open_shm_array(
        )
    )
# start the "real-time" updated section after 3-days worth of 1s last.value = first.value = int(_secs_in_day)
# sampled OHLC. this allows appending up to a days worth from
# tick/quote feeds before having to flush to a (tsdb) storage
# backend, and looks something like,
# -------------------------
# | | i
# _________________________
# <-------------> <------->
# history real-time
#
# Once fully "prepended", the history section will leave the
# ``ShmArray._start.value: int = 0`` and the yet-to-be written
# real-time section will start at ``ShmArray.index: int``.
# this sets the index to 3/4 of the length of the buffer
# leaving a "days worth of second samples" for the real-time
# section.
last.value = first.value = _rt_buffer_start
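For concreteness, the index arithmetic roughly works out as below; ``_rt_buffer_start`` is not shown in this hunk, so its definition here is an assumption based on the "3/4 of the buffer" comment:

    _secs_in_day = 60 * 60 * 24        # 86400 1s samples per day
    _default_size = 3 * _secs_in_day   # 259200 rows total
    _rt_buffer_start = int((_default_size * 3) / 4)  # assumed: 194400

    # history occupies [0, _rt_buffer_start); the real-time section
    # appends from _rt_buffer_start onward (~a day of 1s bars of room)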
    shmarr = ShmArray(
        array,
@ -523,48 +402,27 @@ def open_shm_array(
 def attach_shm_array(
-    token: tuple[str, str, tuple[str, str]],
+    token: Tuple[str, str, Tuple[str, str]],
     size: int = _default_size,
     readonly: bool = True,
 ) -> ShmArray:
-    '''
-    Attach to an existing shared memory array previously
+    """Attach to an existing shared memory array previously
     created by another process using ``open_shared_array``.

     No new shared mem is allocated but wrapper types for read/write
     access are constructed.
-
-    '''
+    """
     token = _Token.from_msg(token)
     key = token.shm_name

     if key in _known_tokens:
         assert _Token.from_msg(_known_tokens[key]) == token, "WTF"
# XXX: ugh, looks like due to the ``shm_open()`` C api we can't
# actually place files in a subdir, see discussion here:
# https://stackoverflow.com/a/11103289
     # attach to array buffer and view as per dtype
-    _err: Optional[Exception] = None
-    for _ in range(3):
-        try:
-            shm = SharedMemory(
-                name=key,
-                create=False,
-            )
-            break
-        except OSError as oserr:
-            _err = oserr
-            time.sleep(0.1)
-    else:
-        if _err:
-            raise _err
+    shm = SharedMemory(name=key)
     shmarr = np.ndarray(
         (size,),
-        dtype=token.dtype,
+        dtype=token.dtype_descr,
         buffer=shm.buf
     )
     shmarr.setflags(write=int(not readonly))
@ -612,10 +470,8 @@ def maybe_open_shm_array(
     key: str,
     dtype: Optional[np.dtype] = None,
     **kwargs,

-) -> tuple[ShmArray, bool]:
-    '''
-    Attempt to attach to a shared memory block using a "key" lookup
+) -> Tuple[ShmArray, bool]:
+    """Attempt to attach to a shared memory block using a "key" lookup
     to registered blocks in the users overall "system" registry
     (presumes you don't have the block's explicit token).
@ -629,8 +485,7 @@ def maybe_open_shm_array(
     If you know the explicit ``_Token`` for your memory segment instead
     use ``attach_shm_array``.
-
-    '''
+    """
     try:
         # see if we already know this key
         token = _known_tokens[key]

View File

@ -17,13 +17,12 @@
""" """
numpy data source coversion helpers. numpy data source coversion helpers.
""" """
from __future__ import annotations from typing import Dict, Any, List
from typing import Any
import decimal import decimal
from bidict import bidict
import numpy as np import numpy as np
from pydantic import BaseModel import pandas as pd
from pydantic import BaseModel, validate_arguments
# from numba import from_dtype # from numba import from_dtype
@ -33,7 +32,7 @@ ohlc_fields = [
     ('high', float),
     ('low', float),
     ('close', float),
-    ('volume', float),
+    ('volume', int),
     ('bar_wap', float),
 ]
@ -48,29 +47,16 @@ base_ohlc_dtype = np.dtype(ohlc_fields)
 # https://github.com/numba/numba/issues/4511
 # numba_ohlc_dtype = from_dtype(base_ohlc_dtype)

-# map time frame "keys" to seconds values
-tf_in_1s = bidict({
-    1: '1s',
-    60: '1m',
-    60*5: '5m',
-    60*15: '15m',
-    60*30: '30m',
-    60*60: '1h',
-    60*60*24: '1d',
-})
+# map time frame "keys" to minutes values
+tf_in_1m = {
+    '1m': 1,
+    '5m': 5,
+    '15m': 15,
+    '30m': 30,
+    '1h': 60,
+    '4h': 240,
+    '1d': 1440,
+}
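Both branches keep a timeframe map; the removed side's ``bidict`` is handy because it is queried in both directions, a quick sketch:

    from bidict import bidict
    tf_in_1s = bidict({1: '1s', 60: '1m'})

    tf_in_1s[60]            # -> '1m'
    tf_in_1s.inverse['1m']  # -> 60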
def mk_fqsn(
provider: str,
symbol: str,
) -> str:
'''
Generate a "fully qualified symbol name" which is
a reverse-hierarchical cross broker/provider symbol
'''
return '.'.join([symbol, provider]).lower()
def float_digits( def float_digits(
@ -92,113 +78,31 @@ def ohlc_zeros(length: int) -> np.ndarray:
    return np.zeros(length, dtype=base_ohlc_dtype)
def unpack_fqsn(fqsn: str) -> tuple[str, str, str]:
'''
Unpack a fully-qualified-symbol-name to ``tuple``.
'''
venue = ''
suffix = ''
# TODO: probably reverse the order of all this XD
tokens = fqsn.split('.')
if len(tokens) < 3:
# probably crypto
symbol, broker = tokens
return (
broker,
symbol,
'',
)
elif len(tokens) > 3:
symbol, venue, suffix, broker = tokens
else:
symbol, venue, broker = tokens
suffix = ''
# head, _, broker = fqsn.rpartition('.')
# symbol, _, suffix = head.rpartition('.')
return (
broker,
'.'.join([symbol, venue]),
suffix,
)
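A round-trip sketch of the fqsn helpers retained above (values follow directly from the code):

    mk_fqsn('kraken', 'XBTUSDT')           # -> 'xbtusdt.kraken'
    unpack_fqsn('mnq.globex.20220617.ib')  # -> ('ib', 'mnq.globex', '20220617')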
 class Symbol(BaseModel):
-    '''
-    I guess this is some kinda container thing for dealing with
+    """I guess this is some kinda container thing for dealing with
     all the different meta-data formats from brokers?

-    '''
+    Yah, i guess dats what it izz.
+    """
     key: str
-    tick_size: float = 0.01
-    lot_tick_size: float = 0.0  # "volume" precision as min step value
-    tick_size_digits: int = 2
-    lot_size_digits: int = 0
-    suffix: str = ''
-    broker_info: dict[str, dict[str, Any]] = {}
+    type_key: str  # {'stock', 'forex', 'future', ... etc.}
+    tick_size: float
+    lot_tick_size: float  # "volume" precision as min step value
+    tick_size_digits: int
+    lot_size_digits: int
+    broker_info: Dict[str, Dict[str, Any]] = {}

     # specifies a "class" of financial instrument
     # ex. stock, futer, option, bond etc.
# @validate_arguments
@classmethod
def from_broker_info(
cls,
broker: str,
symbol: str,
info: dict[str, Any],
suffix: str = '',
# XXX: like wtf..
# ) -> 'Symbol':
) -> None:
tick_size = info.get('price_tick_size', 0.01)
lot_tick_size = info.get('lot_tick_size', 0.0)
return Symbol(
key=symbol,
tick_size=tick_size,
lot_tick_size=lot_tick_size,
tick_size_digits=float_digits(tick_size),
lot_size_digits=float_digits(lot_tick_size),
suffix=suffix,
broker_info={broker: info},
)
@classmethod
def from_fqsn(
cls,
fqsn: str,
info: dict[str, Any],
# XXX: like wtf..
# ) -> 'Symbol':
) -> None:
broker, key, suffix = unpack_fqsn(fqsn)
return cls.from_broker_info(
broker,
key,
info=info,
suffix=suffix,
)
     @property
-    def type_key(self) -> str:
-        return list(self.broker_info.values())[0]['asset_type']
-
-    @property
-    def brokers(self) -> list[str]:
+    def brokers(self) -> List[str]:
         return list(self.broker_info.keys())
     def nearest_tick(self, value: float) -> float:
-        '''
-        Return the nearest tick value based on minimum increment.
+        """Return the nearest tick value based on minimum increment.

-        '''
+        """
         mult = 1 / self.tick_size
         return round(value * mult) / mult
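A worked pass through the rounding above, using the retained branch's defaults (the symbol key is hypothetical):

    sym = Symbol(key='mnq.globex', tick_size=0.25)
    sym.nearest_tick(100.13)  # mult = 4 -> round(400.52) / 4 == 100.25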
@ -214,44 +118,85 @@ class Symbol(BaseModel):
            self.key,
        )
-    def tokens(self) -> tuple[str]:
-        broker, key = self.front_feed()
-        if self.suffix:
-            return (key, self.suffix, broker)
-        else:
-            return (key, broker)
-
-    def front_fqsn(self) -> str:
-        '''
-        fqsn = "fully qualified symbol name"
-
-        Basically the idea here is for all client-ish code (aka programs/actors
-        that ask the provider agnostic layers in the stack for data) should be
-        able to tell which backend / venue / derivative each data feed/flow is
-        from by an explicit string key of the current form:
-
-        <instrumentname>.<venue>.<suffixwithmetadata>.<brokerbackendname>
-
-        TODO: I have thoughts that we should actually change this to be
-        more like an "attr lookup" (like how the web should have done
-        urls, but marketting peeps ruined it etc. etc.):
-
-        <broker>.<venue>.<instrumentname>.<suffixwithmetadata>
-
-        '''
-        tokens = self.tokens()
-        fqsn = '.'.join(tokens)
-        return fqsn
-
-    def iterfqsns(self) -> list[str]:
-        keys = []
-        for broker in self.broker_info.keys():
-            fqsn = mk_fqsn(self.key, broker)
-            if self.suffix:
-                fqsn += f'.{self.suffix}'
-            keys.append(fqsn)
-
-        return keys
+@validate_arguments
+def mk_symbol(
+    key: str,
+    type_key: str,
+    tick_size: float = 0.01,
+    lot_tick_size: float = 0,
+    broker_info: dict[str, Any] = {},
+) -> Symbol:
+    '''Create and return an instrument description for the
+    "symbol" named as ``key``.
+    '''
+    return Symbol(
+        key=key,
+        type_key=type_key,
+        tick_size=tick_size,
+        lot_tick_size=lot_tick_size,
+        tick_size_digits=float_digits(tick_size),
+        lot_size_digits=float_digits(lot_tick_size),
+        broker_info=broker_info,
+    )
+
+
+def from_df(
+    df: pd.DataFrame,
+    source=None,
+    default_tf=None
+
+) -> np.recarray:
"""Convert OHLC formatted ``pandas.DataFrame`` to ``numpy.recarray``.
"""
df.reset_index(inplace=True)
# hackery to convert field names
date = 'Date'
if 'date' in df.columns:
date = 'date'
# convert to POSIX time
df[date] = [d.timestamp() for d in df[date]]
# try to rename from some camel case
columns = {
'Date': 'time',
'date': 'time',
'Open': 'open',
'High': 'high',
'Low': 'low',
'Close': 'close',
'Volume': 'volume',
# most feeds are providing this over sesssion anchored
'vwap': 'bar_wap',
# XXX: ib_insync calls this the "wap of the bar"
# but no clue what is actually is...
# https://github.com/pikers/piker/issues/119#issuecomment-729120988
'average': 'bar_wap',
}
df = df.rename(columns=columns)
for name in df.columns:
# if name not in base_ohlc_dtype.names[1:]:
if name not in base_ohlc_dtype.names:
del df[name]
# TODO: it turns out column access on recarrays is actually slower:
# https://jakevdp.github.io/PythonDataScienceHandbook/02.09-structured-data-numpy.html#RecordArrays:-Structured-Arrays-with-a-Twist
# it might make sense to make these structured arrays?
array = df.to_records(index=False)
_nan_to_closest_num(array)
return array
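A small usage sketch of ``from_df`` (hypothetical data; relies on the camel-case renaming above):

    df = pd.DataFrame({
        'Date': pd.to_datetime(['2021-01-04', '2021-01-05']),
        'Open': [100., 101.], 'High': [102., 103.],
        'Low': [99., 100.], 'Close': [101., 102.],
        'Volume': [5000, 6000],
    })
    bars = from_df(df)  # recarray with 'time' as POSIX seconds
    bars['close']       # -> array([101., 102.])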
def _nan_to_closest_num(array: np.ndarray):

View File

@ -53,13 +53,11 @@ class NoBsWs:
     def __init__(
         self,
         url: str,
-        token: str,
         stack: AsyncExitStack,
         fixture: Callable,
         serializer: ModuleType = json,
     ):
         self.url = url
-        self.token = token
         self.fixture = fixture
         self._stack = stack
         self._ws: 'WebSocketConnection' = None  # noqa
@ -83,15 +81,9 @@ class NoBsWs:
             trio_websocket.open_websocket_url(self.url)
         )
         # rerun user code fixture
-        if self.token == '':
-            ret = await self._stack.enter_async_context(
-                self.fixture(self)
-            )
-        else:
-            ret = await self._stack.enter_async_context(
-                self.fixture(self, self.token)
-            )
+        ret = await self._stack.enter_async_context(
+            self.fixture(self)
+        )

         assert ret is None

         log.info(f'Connection success: {self.url}')
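The ``fixture`` is any async context manager that re-subscribes on every (re)connect; a minimal sketch, assuming the surrounding class exposes a ``send_msg()`` helper and using a made-up subscription payload:

    from contextlib import asynccontextmanager

    @asynccontextmanager
    async def subscribe(ws):  # re-entered after each reconnect
        await ws.send_msg({'event': 'subscribe', 'pair': ['XBT/USD']})
        yield  # must yield None, per the assert above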
@ -135,14 +127,12 @@ async def open_autorecon_ws(
     # TODO: proper type annot smh
     fixture: Callable,
-    # used for authenticated websockets
-    token: str = '',
 ) -> AsyncGenerator[tuple[...], NoBsWs]:
     """Apparently we can QoS for all sorts of reasons..so catch em.

     """
     async with AsyncExitStack() as stack:
-        ws = NoBsWs(url, token, stack, fixture=fixture)
+        ws = NoBsWs(url, stack, fixture=fixture)
         await ws._connect()

         try:

View File

@ -16,34 +16,26 @@
""" """
marketstore cli. marketstore cli.
""" """
from typing import List
from functools import partial from functools import partial
from pprint import pformat from pprint import pformat
from anyio_marketstore import open_marketstore_client
import trio import trio
import tractor import tractor
import click import click
import numpy as np
from .marketstore import ( from .marketstore import (
get_client, get_client,
# stream_quotes, stream_quotes,
ingest_quote_stream, ingest_quote_stream,
# _url, _url,
_tick_tbk_ids, _tick_tbk_ids,
mk_tbk, mk_tbk,
) )
from ..cli import cli from ..cli import cli
from .. import watchlists as wl from .. import watchlists as wl
from ..log import get_logger from ..log import get_logger
from ._sharedmem import (
maybe_open_shm_array,
)
from ._source import (
base_iohlc_dtype,
)
log = get_logger(__name__) log = get_logger(__name__)
@ -57,58 +49,51 @@ log = get_logger(__name__)
 )
 @click.argument('names', nargs=-1)
 @click.pass_obj
-def ms_stream(
-    config: dict,
-    names: list[str],
-    url: str,
-) -> None:
-    '''
-    Connect to a marketstore time bucket stream for (a set of) symbols(s)
+def ms_stream(config: dict, names: List[str], url: str):
+    """Connect to a marketstore time bucket stream for (a set of) symbols(s)
     and print to console.
-    '''
+    """
     async def main():
-        # async for quote in stream_quotes(symbols=names):
-        #     log.info(f"Received quote:\n{quote}")
-        ...
+        async for quote in stream_quotes(symbols=names):
+            log.info(f"Received quote:\n{quote}")

     trio.run(main)
-# @cli.command()
-# @click.option(
-#     '--url',
-#     default=_url,
-#     help='HTTP URL of marketstore instance'
-# )
-# @click.argument('names', nargs=-1)
-# @click.pass_obj
-# def ms_destroy(config: dict, names: list[str], url: str) -> None:
-#     """Destroy symbol entries in the local marketstore instance.
-#     """
-#     async def main():
-#         nonlocal names
-#         async with get_client(url) as client:
-#
-#             if not names:
-#                 names = await client.list_symbols()
-#
-#             # default is to wipe db entirely.
-#             answer = input(
-#                 "This will entirely wipe you local marketstore db @ "
-#                 f"{url} of the following symbols:\n {pformat(names)}"
-#                 "\n\nDelete [N/y]?\n")
-#
-#             if answer == 'y':
-#                 for sym in names:
-#                     # tbk = _tick_tbk.format(sym)
-#                     tbk = tuple(sym, *_tick_tbk_ids)
-#                     print(f"Destroying {tbk}..")
-#                     await client.destroy(mk_tbk(tbk))
-#             else:
-#                 print("Nothing deleted.")
-#
-#     tractor.run(main)
+@cli.command()
+@click.option(
+    '--url',
+    default=_url,
+    help='HTTP URL of marketstore instance'
+)
+@click.argument('names', nargs=-1)
+@click.pass_obj
+def ms_destroy(config: dict, names: List[str], url: str) -> None:
+    """Destroy symbol entries in the local marketstore instance.
+    """
+    async def main():
+        nonlocal names
+        async with get_client(url) as client:
+
+            if not names:
+                names = await client.list_symbols()
+
+            # default is to wipe db entirely.
+            answer = input(
+                "This will entirely wipe you local marketstore db @ "
+                f"{url} of the following symbols:\n {pformat(names)}"
+                "\n\nDelete [N/y]?\n")
+
+            if answer == 'y':
+                for sym in names:
+                    # tbk = _tick_tbk.format(sym)
+                    tbk = tuple(sym, *_tick_tbk_ids)
+                    print(f"Destroying {tbk}..")
+                    await client.destroy(mk_tbk(tbk))
+            else:
+                print("Nothing deleted.")
+
+    tractor.run(main)


 @cli.command()
@ -117,53 +102,41 @@ def ms_stream(
     is_flag=True,
     help='Enable tractor logging')
 @click.option(
-    '--host',
-    default='localhost'
+    '--url',
+    default=_url,
+    help='HTTP URL of marketstore instance'
 )
-@click.option(
-    '--port',
-    default=5993
-)
-@click.argument('symbols', nargs=-1)
+@click.argument('name', nargs=1, required=True)
 @click.pass_obj
-def storesh(
-    config,
-    tl,
-    host,
-    port,
-    symbols: list[str],
-):
-    '''
-    Start an IPython shell ready to query the local marketstore db.
-
-    '''
-    from piker.data.marketstore import tsdb_history_update
-    from piker._daemon import open_piker_runtime
-
-    async def main():
-        nonlocal symbols
-
-        async with open_piker_runtime(
-            'storesh',
-            enable_modules=['piker.data._ahab'],
-        ):
-            symbol = symbols[0]
-            await tsdb_history_update(symbol)
-
-    trio.run(main)
+def ms_shell(config, name, tl, url):
+    """Start an IPython shell ready to query the local marketstore db.
+    """
+    async def main():
+        async with get_client(url) as client:
+            query = client.query  # noqa
+            # TODO: write magics to query marketstore
+            from IPython import embed
+            embed()
+
+    tractor.run(main)
 @cli.command()
 @click.option('--test-file', '-t', help='Test quote stream file')
 @click.option('--tl', is_flag=True, help='Enable tractor logging')
+@click.option(
+    '--url',
+    default=_url,
+    help='HTTP URL of marketstore instance'
+)
 @click.argument('name', nargs=1, required=True)
 @click.pass_obj
-def ingest(config, name, test_file, tl):
-    '''
-    Ingest real-time broker quotes and ticks to a marketstore instance.
-
-    '''
+def ingest(config, name, test_file, tl, url):
+    """Ingest real-time broker quotes and ticks to a marketstore instance.
+    """
     # global opts
+    brokermod = config['brokermod']
     loglevel = config['loglevel']
     tractorloglevel = config['tractorloglevel']
     # log = config['log']
@ -172,25 +145,15 @@ def ingest(config, name, test_file, tl):
     watchlists = wl.merge_watchlist(watchlist_from_file, wl._builtins)
     symbols = watchlists[name]

-    grouped_syms = {}
-    for sym in symbols:
-        symbol, _, provider = sym.rpartition('.')
-        if provider not in grouped_syms:
-            grouped_syms[provider] = []
-
-        grouped_syms[provider].append(symbol)
-
-    async def entry_point():
-        async with tractor.open_nursery() as n:
-            for provider, symbols in grouped_syms.items():
-                await n.run_in_actor(
-                    ingest_quote_stream,
-                    name='ingest_marketstore',
-                    symbols=symbols,
-                    brokername=provider,
-                    tries=1,
-                    actorloglevel=loglevel,
-                    loglevel=tractorloglevel
-                )
-
-    tractor.run(entry_point)
+    tractor.run(
+        partial(
+            ingest_quote_stream,
+            symbols,
+            brokermod.name,
+            tries=1,
+            loglevel=loglevel,
+        ),
+        name='ingest_marketstore',
+        loglevel=tractorloglevel,
+        debug_mode=True,
+    )

File diff suppressed because it is too large

View File

@ -14,200 +14,36 @@
 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.

-'''
+"""
 ``marketstore`` integration.

 - client management routines
 - ticK data ingest routines
 - websocket client for subscribing to write triggers
 - todo: tick sequence stream-cloning for testing
-- todo: docker container management automation

-'''
-from __future__ import annotations
-from contextlib import asynccontextmanager as acm
-from datetime import datetime
-from pprint import pformat
-from typing import (
-    Any,
-    Optional,
-    Union,
-    TYPE_CHECKING,
-)
+"""
+from contextlib import asynccontextmanager
+from typing import Dict, Any, List, Callable, Tuple
 import time
 from math import isnan

-from bidict import bidict
 import msgpack
-import pyqtgraph as pg
 import numpy as np
+import pandas as pd
+import pymarketstore as pymkts
 import tractor
 from trio_websocket import open_websocket_url
-from anyio_marketstore import (
-    open_marketstore_client,
-    MarketstoreClient,
-    Params,
-)
-import pendulum
-import purerpc
-
-if TYPE_CHECKING:
-    import docker
-    from ._ahab import DockerContainer
-
-from .feed import maybe_open_feed
 from ..log import get_logger, get_console_log
+from ..data import open_feed

 log = get_logger(__name__)
_tick_tbk_ids: Tuple[str, str] = ('1Sec', 'TICK')
# container level config
_config = {
'grpc_listen_port': 5995,
'ws_listen_port': 5993,
'log_level': 'debug',
}
_yaml_config = '''
# piker's ``marketstore`` config.
# mount this config using:
# sudo docker run --mount \
# type=bind,source="$HOME/.config/piker/",target="/etc" -i -p \
# 5993:5993 alpacamarkets/marketstore:latest
root_directory: data
listen_port: {ws_listen_port}
grpc_listen_port: {grpc_listen_port}
log_level: {log_level}
queryable: true
stop_grace_period: 0
wal_rotate_interval: 5
stale_threshold: 5
enable_add: true
enable_remove: false
triggers:
- module: ondiskagg.so
on: "*/1Sec/OHLCV"
config:
# filter: "nasdaq"
destinations:
- 1Min
- 5Min
- 15Min
- 1H
- 1D
- module: stream.so
on: '*/*/*'
# config:
# filter: "nasdaq"
'''.format(**_config)
def start_marketstore(
client: docker.DockerClient,
**kwargs,
) -> tuple[DockerContainer, dict[str, Any]]:
'''
Start and supervise a marketstore instance with its config bind-mounted
in from the piker config directory on the system.
The equivalent cli cmd to this code is:
sudo docker run --mount \
type=bind,source="$HOME/.config/piker/",target="/etc" -i -p \
5993:5993 alpacamarkets/marketstore:latest
'''
import os
import docker
from .. import config
get_console_log('info', name=__name__)
mktsdir = os.path.join(config._config_dir, 'marketstore')
# create when dne
if not os.path.isdir(mktsdir):
os.mkdir(mktsdir)
yml_file = os.path.join(mktsdir, 'mkts.yml')
if not os.path.isfile(yml_file):
log.warning(
f'No `marketstore` config exists?: {yml_file}\n'
'Generating new file from template:\n'
f'{_yaml_config}\n'
)
with open(yml_file, 'w') as yf:
yf.write(_yaml_config)
# create a mount from user's local piker config dir into container
config_dir_mnt = docker.types.Mount(
target='/etc',
source=mktsdir,
type='bind',
)
# create a user config subdir where the marketstore
# backing filesystem database can be persisted.
persistent_data_dir = os.path.join(
mktsdir, 'data',
)
if not os.path.isdir(persistent_data_dir):
os.mkdir(persistent_data_dir)
data_dir_mnt = docker.types.Mount(
target='/data',
source=persistent_data_dir,
type='bind',
)
dcntr: DockerContainer = client.containers.run(
'alpacamarkets/marketstore:latest',
# do we need this for cmds?
# '-i',
# '-p 5993:5993',
ports={
'5993/tcp': 5993, # jsonrpc / ws?
'5995/tcp': 5995, # grpc
},
mounts=[
config_dir_mnt,
data_dir_mnt,
],
detach=True,
# stop_signal='SIGINT',
init=True,
# remove=True,
)
return (
dcntr,
_config,
# expected startup and stop msgs
"launching tcp listener for all services...",
"exiting...",
)
_tick_tbk_ids: tuple[str, str] = ('1Sec', 'TICK')
_tick_tbk: str = '{}/' + '/'.join(_tick_tbk_ids)
_url: str = 'http://localhost:5993/rpc'
_tick_dt = [
# these two are required for as a "primary key"
('Epoch', 'i8'),
('Nanoseconds', 'i4'),
('IsTrade', 'i1'),
('IsBid', 'i1'),
('Price', 'f4'),
('Size', 'f4')
]
_quote_dt = [
    # these two are required for as a "primary key"
    ('Epoch', 'i8'),
@ -225,7 +61,6 @@ _quote_dt = [
    # ('brokerd_ts', 'i64'),
    # ('VWAP', 'f4')
]

_quote_tmp = {}.fromkeys(dict(_quote_dt).keys(), np.nan)

_tick_map = {
    'Up': 1,
@ -234,52 +69,31 @@ _tick_map = {
    None: np.nan,
}
-_ohlcv_dt = [
-    # these two are required for as a "primary key"
-    ('Epoch', 'i8'),
-    # ('Nanoseconds', 'i4'),
-
-    # ohlcv sampling
-    ('Open', 'f4'),
-    ('High', 'f4'),
-    ('Low', 'f4'),
-    ('Close', 'f4'),
-    ('Volume', 'f4'),
-]
-
-ohlc_key_map = bidict({
-    'Epoch': 'time',
-    'Open': 'open',
-    'High': 'high',
-    'Low': 'low',
-    'Close': 'close',
-    'Volume': 'volume',
-})
-
-
-def mk_tbk(keys: tuple[str, str, str]) -> str:
-    '''
-    Generate a marketstore table key from a tuple.
-    Converts,
-        ``('SPY', '1Sec', 'TICK')`` -> ``"SPY/1Sec/TICK"```
-
-    '''
-    return '/'.join(keys)
+class MarketStoreError(Exception):
+    "Generic marketstore client error"
+
+
+def err_on_resp(response: dict) -> None:
+    """Raise any errors found in responses from client request.
+    """
+    responses = response['responses']
+    if responses is not None:
+        for r in responses:
+            err = r['error']
+            if err:
+                raise MarketStoreError(err)
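A quick check of the removed branch's table-key helper above (the other branch's variant below prefixes a ``'{}/'`` placeholder instead):

    assert mk_tbk(('SPY', '1Sec', 'TICK')) == 'SPY/1Sec/TICK'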
 def quote_to_marketstore_structarray(
-    quote: dict[str, Any],
-    last_fill: Optional[float]
+    quote: Dict[str, Any],
+    last_fill: str,

 ) -> np.array:
-    '''
-    Return marketstore writeable structarray from quote ``dict``.
-
-    '''
+    """Return marketstore writeable structarray from quote ``dict``.
+    """
     if last_fill:
         # new fill bby
-        now = int(pendulum.parse(last_fill).timestamp)
+        now = timestamp(last_fill)
     else:
         # this should get inserted upstream by the broker-client to
         # subtract from IPC latency
@ -287,7 +101,7 @@ def quote_to_marketstore_structarray(
     secs, ns = now / 10**9, now % 10**9

-    # pack into list[tuple[str, Any]]
+    # pack into List[Tuple[str, Any]]
     array_input = []

     # insert 'Epoch' entry first and then 'Nanoseconds'.
@ -309,467 +123,146 @@ def quote_to_marketstore_structarray(
    return np.array([tuple(array_input)], dtype=_quote_dt)
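The Epoch/Nanoseconds split above is plain integer arithmetic on a ns-precision timestamp, e.g.:

    now = 1609459200_123456789       # ns since the unix epoch
    secs, ns = now / 10**9, now % 10**9
    # secs -> ~1609459200.123, ns -> 123456789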
+def timestamp(datestr: str) -> int:
+    """Return marketstore compatible 'Epoch' integer in nanoseconds
+    from a date formatted str.
+    """
+    return int(pd.Timestamp(datestr).value)
+
+
+def mk_tbk(keys: Tuple[str, str, str]) -> str:
+    """Generate a marketstore table key from a tuple.
+    Converts,
+        ``('SPY', '1Sec', 'TICK')`` -> ``"SPY/1Sec/TICK"```
+    """
+    return '{}/' + '/'.join(keys)
class Client:
"""Async wrapper around the alpaca ``pymarketstore`` sync client.
This will server as the shell for building out a proper async client
that isn't horribly documented and un-tested..
"""
def __init__(self, url: str):
self._client = pymkts.Client(url)
async def _invoke(
self,
meth: Callable,
*args,
**kwargs,
) -> Any:
return err_on_resp(meth(*args, **kwargs))
async def destroy(
self,
tbk: Tuple[str, str, str],
) -> None:
return await self._invoke(self._client.destroy, mk_tbk(tbk))
async def list_symbols(
self,
tbk: str,
) -> List[str]:
return await self._invoke(self._client.list_symbols, mk_tbk(tbk))
async def write(
self,
symbol: str,
array: np.ndarray,
) -> None:
start = time.time()
await self._invoke(
self._client.write,
array,
_tick_tbk.format(symbol),
isvariablelength=True
)
log.debug(f"{symbol} write time (s): {time.time() - start}")
def query(
self,
symbol,
tbk: Tuple[str, str] = _tick_tbk_ids,
) -> pd.DataFrame:
# XXX: causes crash
# client.query(pymkts.Params(symbol, '*', 'OHCLV'
result = self._client.query(
pymkts.Params(symbol, *tbk),
)
return result.first().df()
-@acm
-async def get_client(
-    host: str = 'localhost',
-    port: int = 5995
-
-) -> MarketstoreClient:
-    '''
-    Load a ``anyio_marketstore`` grpc client connected
-    to an existing ``marketstore`` server.
-
-    '''
-    async with open_marketstore_client(
-        host,
-        port
-    ) as client:
-        yield client
+@asynccontextmanager
+async def get_client(
+    url: str = _url,
+) -> Client:
+    yield Client(url)
class MarketStoreError(Exception):
"Generic marketstore client error"
# def err_on_resp(response: dict) -> None:
# """Raise any errors found in responses from client request.
# """
# responses = response['responses']
# if responses is not None:
# for r in responses:
# err = r['error']
# if err:
# raise MarketStoreError(err)
# map of seconds ints to "time frame" accepted keys
tf_in_1s = bidict({
1: '1Sec',
60: '1Min',
60*5: '5Min',
60*15: '15Min',
60*30: '30Min',
60*60: '1H',
60*60*24: '1D',
})
class Storage:
'''
High level storage api for both real-time and historical ingest.
'''
def __init__(
self,
client: MarketstoreClient,
) -> None:
# TODO: eventually this should be an api/interface type that
# ensures we can support multiple tsdb backends.
self.client = client
# series' cache from tsdb reads
self._arrays: dict[str, np.ndarray] = {}
async def list_keys(self) -> list[str]:
return await self.client.list_symbols()
async def search_keys(self, pattern: str) -> list[str]:
'''
Search for time series key in the storage backend.
'''
...
async def write_ticks(self, ticks: list) -> None:
...
async def load(
self,
fqsn: str,
) -> tuple[
dict[int, np.ndarray], # timeframe (in secs) to series
Optional[datetime], # first dt
Optional[datetime], # last dt
]:
first_tsdb_dt, last_tsdb_dt = None, None
tsdb_arrays = await self.read_ohlcv(
fqsn,
# on first load we don't need to pull the max
# history per request size worth.
limit=3000,
)
log.info(f'Loaded tsdb history {tsdb_arrays}')
if tsdb_arrays:
fastest = list(tsdb_arrays.values())[0]
times = fastest['Epoch']
first, last = times[0], times[-1]
first_tsdb_dt, last_tsdb_dt = map(
pendulum.from_timestamp, [first, last]
)
return tsdb_arrays, first_tsdb_dt, last_tsdb_dt
async def read_ohlcv(
self,
fqsn: str,
timeframe: Optional[Union[int, str]] = None,
end: Optional[int] = None,
limit: int = int(800e3),
) -> tuple[
MarketstoreClient,
Union[dict, np.ndarray]
]:
client = self.client
syms = await client.list_symbols()
if fqsn not in syms:
return {}
tfstr = tf_in_1s[1]
params = Params(
symbols=fqsn,
timeframe=tfstr,
attrgroup='OHLCV',
end=end,
# limit_from_start=True,
# TODO: figure the max limit here given the
# ``purepc`` msg size limit of purerpc: 33554432
limit=limit,
)
if timeframe is None:
log.info(f'starting {fqsn} tsdb granularity scan..')
# loop through and try to find highest granularity
for tfstr in tf_in_1s.values():
try:
log.info(f'querying for {tfstr}@{fqsn}')
params.set('timeframe', tfstr)
result = await client.query(params)
break
except purerpc.grpclib.exceptions.UnknownError:
# XXX: this is already logged by the container and
# thus shows up through `marketstored` logs relay.
# log.warning(f'{tfstr}@{fqsn} not found')
continue
else:
return {}
else:
result = await client.query(params)
# TODO: it turns out column access on recarrays is actually slower:
# https://jakevdp.github.io/PythonDataScienceHandbook/02.09-structured-data-numpy.html#RecordArrays:-Structured-Arrays-with-a-Twist
# it might make sense to make these structured arrays?
# Fill out a `numpy` array-results map
arrays = {}
for fqsn, data_set in result.by_symbols().items():
arrays.setdefault(fqsn, {})[
tf_in_1s.inverse[data_set.timeframe]
] = data_set.array
return arrays[fqsn][timeframe] if timeframe else arrays[fqsn]
async def delete_ts(
self,
key: str,
timeframe: Optional[Union[int, str]] = None,
) -> bool:
client = self.client
syms = await client.list_symbols()
print(syms)
# if key not in syms:
# raise KeyError(f'`{fqsn}` table key not found?')
return await client.destroy(tbk=key)
async def write_ohlcv(
self,
fqsn: str,
ohlcv: np.ndarray,
append_and_duplicate: bool = True,
limit: int = int(800e3),
) -> None:
# build mkts schema compat array for writing
mkts_dt = np.dtype(_ohlcv_dt)
mkts_array = np.zeros(
len(ohlcv),
dtype=mkts_dt,
)
# copy from shm array (yes it's this easy):
# https://numpy.org/doc/stable/user/basics.rec.html#assignment-from-other-structured-arrays
mkts_array[:] = ohlcv[[
'time',
'open',
'high',
'low',
'close',
'volume',
]]
m, r = divmod(len(mkts_array), limit)
for i in range(m, 1):
to_push = mkts_array[i-1:i*limit]
# write to db
resp = await self.client.write(
to_push,
tbk=f'{fqsn}/1Sec/OHLCV',
# NOTE: will will append duplicates
# for the same timestamp-index.
# TODO: pre deduplicate?
isvariablelength=append_and_duplicate,
)
log.info(
f'Wrote {mkts_array.size} datums to tsdb\n'
)
for resp in resp.responses:
err = resp.error
if err:
raise MarketStoreError(err)
if r:
to_push = mkts_array[m*limit:]
# write to db
resp = await self.client.write(
to_push,
tbk=f'{fqsn}/1Sec/OHLCV',
# NOTE: will will append duplicates
# for the same timestamp-index.
# TODO: pre deduplicate?
isvariablelength=append_and_duplicate,
)
log.info(
f'Wrote {mkts_array.size} datums to tsdb\n'
)
for resp in resp.responses:
err = resp.error
if err:
raise MarketStoreError(err)
# XXX: currently the only way to do this is through the CLI:
# sudo ./marketstore connect --dir ~/.config/piker/data
# >> \show mnq.globex.20220617.ib/1Sec/OHLCV 2022-05-15
# and this seems to block and use up mem..
# >> \trim mnq.globex.20220617.ib/1Sec/OHLCV 2022-05-15
# relevant source code for this is here:
# https://github.com/alpacahq/marketstore/blob/master/cmd/connect/session/trim.go#L14
# def delete_range(self, start_dt, end_dt) -> None:
# ...
@acm
async def open_storage_client(
fqsn: str,
period: Optional[Union[int, str]] = None, # in seconds
) -> tuple[Storage, dict[str, np.ndarray]]:
'''
Load a series by key and deliver in ``numpy`` struct array format.
'''
async with (
# eventually a storage backend endpoint
get_client() as client,
):
# slap on our wrapper api
yield Storage(client)
async def tsdb_history_update(
fqsn: Optional[str] = None,
) -> list[str]:
# TODO: real-time dedicated task for ensuring
# history consistency between the tsdb, shm and real-time feed..
# update sequence design notes:
# - load existing highest frequency data from mkts
# * how do we want to offer this to the UI?
# - lazy loading?
# - try to load it all and expect graphics caching/diffing
# to hide extra bits that aren't in view?
# - compute the diff between latest data from broker and shm
# * use sql api in mkts to determine where the backend should
# start querying for data?
# * append any diff with new shm length
# * determine missing (gapped) history by scanning
# * how far back do we look?
# - begin rt update ingest and aggregation
# * could start by always writing ticks to mkts instead of
# worrying about a shm queue for now.
# * we have a short list of shm queues worth groking:
# - https://github.com/pikers/piker/issues/107
# * the original data feed arch blurb:
# - https://github.com/pikers/piker/issues/98
#
profiler = pg.debug.Profiler(
disabled=False, # not pg_profile_enabled(),
delayed=False,
)
async with (
open_storage_client(fqsn) as storage,
maybe_open_feed(
[fqsn],
start_stream=False,
) as (feed, stream),
):
profiler(f'opened feed for {fqsn}')
to_append = feed.shm.array
to_prepend = None
if fqsn:
symbol = feed.symbols.get(fqsn)
if symbol:
fqsn = symbol.front_fqsn()
# diff db history with shm and only write the missing portions
ohlcv = feed.shm.array
# TODO: use pg profiler
tsdb_arrays = await storage.read_ohlcv(fqsn)
# hist diffing
if tsdb_arrays:
for secs in (1, 60):
ts = tsdb_arrays.get(secs)
if ts is not None and len(ts):
# these aren't currently used but can be referenced from
# within the embedded ipython shell below.
to_append = ohlcv[ohlcv['time'] > ts['Epoch'][-1]]
to_prepend = ohlcv[ohlcv['time'] < ts['Epoch'][0]]
profiler('Finished db arrays diffs')
syms = await storage.client.list_symbols()
log.info(f'Existing tsdb symbol set:\n{pformat(syms)}')
profiler(f'listed symbols {syms}')
# TODO: ask if user wants to write history for detected
# available shm buffers?
from tractor.trionics import ipython_embed
await ipython_embed()
# for array in [to_append, to_prepend]:
# if array is None:
# continue
# log.info(
# f'Writing datums {array.size} -> to tsdb from shm\n'
# )
# await storage.write_ohlcv(fqsn, array)
# profiler('Finished db writes')
 async def ingest_quote_stream(
-    symbols: list[str],
+    symbols: List[str],
     brokername: str,
     tries: int = 1,
     loglevel: str = None,

 ) -> None:
-    '''
-    Ingest a broker quote stream into a ``marketstore`` tsdb.
-
-    '''
-    async with (
-        maybe_open_feed(brokername, symbols, loglevel=loglevel) as feed,
-        get_client() as ms_client,
-    ):
-        async for quotes in feed.stream:
-            log.info(quotes)
-            for symbol, quote in quotes.items():
-                for tick in quote.get('ticks', ()):
-                    ticktype = tick.get('type', 'n/a')
-
-                    # techtonic tick write
-                    array = quote_to_marketstore_structarray({
-                        'IsTrade': 1 if ticktype == 'trade' else 0,
-                        'IsBid': 1 if ticktype in ('bid', 'bsize') else 0,
-                        'Price': tick.get('price'),
-                        'Size': tick.get('size')
-                    }, last_fill=quote.get('broker_ts', None))
-
-                    await ms_client.write(array, _tick_tbk)
-
-    # LEGACY WRITE LOOP (using old tick dt)
-    # quote_cache = {
-    #     'size': 0,
-    #     'tick': 0
-    # }
-
-    # async for quotes in qstream:
-    #     log.info(quotes)
-    #     for symbol, quote in quotes.items():
-    #
-    #         # remap tick strs to ints
-    #         quote['tick'] = _tick_map[quote.get('tick', 'Equal')]
-    #
-    #         # check for volume update (i.e. did trades happen
-    #         # since last quote)
-    #         new_vol = quote.get('volume', None)
-    #         if new_vol is None:
-    #             log.debug(f"No fills for {symbol}")
-    #         if new_vol == quote_cache.get('volume'):
-    #             # should never happen due to field diffing
-    #             # on sender side
-    #             log.error(
-    #                 f"{symbol}: got same volume as last quote?")
-    #
-    #         quote_cache.update(quote)
-    #
-    #         a = quote_to_marketstore_structarray(
-    #             quote,
-    #             # TODO: check this closer to the broker query api
-    #             last_fill=quote.get('fill_time', '')
-    #         )
-    #         await ms_client.write(symbol, a)
+    """Ingest a broker quote stream into marketstore in (sampled) tick format.
+    """
+    async with open_feed(
+        brokername,
+        symbols,
+        loglevel=loglevel,
+    ) as (first_quotes, qstream):
+
+        quote_cache = first_quotes.copy()
+
+        async with get_client() as ms_client:
+
+            # start ingest to marketstore
+            async for quotes in qstream:
+                log.info(quotes)
+                for symbol, quote in quotes.items():
+
+                    # remap tick strs to ints
+                    quote['tick'] = _tick_map[quote.get('tick', 'Equal')]
+
+                    # check for volume update (i.e. did trades happen
+                    # since last quote)
+                    new_vol = quote.get('volume', None)
+                    if new_vol is None:
+                        log.debug(f"No fills for {symbol}")
+                    if new_vol == quote_cache.get('volume'):
+                        # should never happen due to field diffing
+                        # on sender side
+                        log.error(
+                            f"{symbol}: got same volume as last quote?")
+
+                    quote_cache.update(quote)
+
+                    a = quote_to_marketstore_structarray(
+                        quote,
+                        # TODO: check this closer to the broker query api
+                        last_fill=quote.get('fill_time', '')
+                    )
+                    await ms_client.write(symbol, a)
 async def stream_quotes(
-    symbols: list[str],
+    symbols: List[str],
     host: str = 'localhost',
     port: int = 5993,
     diff_cached: bool = True,
     loglevel: str = None,

 ) -> None:
-    '''
-    Open a symbol stream from a running instance of marketstore and
+    """Open a symbol stream from a running instance of marketstore and
     log to console.
-
-    '''
+    """
     # XXX: required to propagate ``tractor`` loglevel to piker logging
     get_console_log(loglevel or tractor.current_actor().loglevel)

-    tbks: dict[str, str] = {sym: f"{sym}/*/*" for sym in symbols}
+    tbks: Dict[str, str] = {sym: f"{sym}/*/*" for sym in symbols}

     async with open_websocket_url(f'ws://{host}:{port}/ws') as ws:
         # send subs topics to server
@ -778,7 +271,7 @@ async def stream_quotes(
         )
         log.info(resp)

-        async def recv() -> dict[str, Any]:
+        async def recv() -> Dict[str, Any]:
             return msgpack.loads((await ws.get_message()), encoding='utf-8')

         streams = (await recv())['streams']

View File

@ -40,8 +40,6 @@ from tractor.msg import NamespacePath
 from ..data._sharedmem import (
     ShmArray,
     maybe_open_shm_array,
-    attach_shm_array,
-    _Token,
 )
from ..log import get_logger
@ -74,13 +72,6 @@ class Fsp:
    # - custom function wrappers,
    # https://wrapt.readthedocs.io/en/latest/wrappers.html#custom-function-wrappers
# actor-local map of source flow shm tokens
# + the consuming fsp *to* the consumers output
# shm flow.
_flow_registry: dict[
tuple[_Token, str], _Token,
] = {}
    def __init__(
        self,
        func: Callable[..., Awaitable],
@ -102,7 +93,7 @@ class Fsp:
         self.config: dict[str, Any] = config

         # register with declared set.
-        _fsp_registry[self.ns_path] = self
+        _fsp_registry[self.ns_path] = func
    @property
    def name(self) -> str:
@ -120,24 +111,6 @@ class Fsp:
    ):
        return self.func(*args, **kwargs)
# TODO: lru_cache this? prettty sure it'll work?
def get_shm(
self,
src_shm: ShmArray,
) -> ShmArray:
'''
Provide access to allocated shared mem array
for this "instance" of a signal processor for
the given ``key``.
'''
dst_token = self._flow_registry[
(src_shm._token, self.name)
]
shm = attach_shm_array(dst_token)
return shm
def fsp(
    wrapped=None,
@ -159,27 +132,18 @@ def fsp(
        return Fsp(wrapped, outputs=(wrapped.__name__,))
def mk_fsp_shm_key(
sym: str,
target: Fsp
) -> str:
uid = tractor.current_actor().uid
return f'{sym}.fsp.{target.name}.{".".join(uid)}'
 def maybe_mk_fsp_shm(
     sym: str,
-    target: Fsp,
+    target: fsp,
     readonly: bool = True,

-) -> (str, ShmArray, bool):
+) -> (ShmArray, bool):
     '''
     Allocate a single row shm array for a symbol-fsp pair if none
     exists, otherwise load the shm already existing for that token.

     '''
-    assert isinstance(sym, str), '`sym` should be file-name-friendly `str`'
+    uid = tractor.current_actor().uid
    # TODO: load output types from `Fsp`
    # - should `index` be a required internal field?
@ -188,7 +152,7 @@ def maybe_mk_fsp_shm(
         [(field_name, float) for field_name in target.outputs]
     )

-    key = mk_fsp_shm_key(sym, target)
+    key = f'{sym}.fsp.{target.name}.{".".join(uid)}'

     shm, opened = maybe_open_shm_array(
         key,
@ -196,4 +160,4 @@ def maybe_mk_fsp_shm(
         dtype=fsp_dtype,
         readonly=True,
     )
-    return key, shm, opened
+    return shm, opened
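For a two-output fsp the allocation above works out to something like the sketch below (output names are hypothetical; the ``('index', int)`` lead field is an assumption based on the surrounding hunk context):

    outputs = ('macd', 'signal')  # hypothetical Fsp outputs
    fsp_dtype = np.dtype(
        [('index', int)] +
        [(field_name, float) for field_name in outputs]
    )
    # key -> e.g. 'xbtusdt.kraken.fsp.macd.<actor-name>.<uuid>'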

View File

@ -20,10 +20,7 @@ core task logic for processing chains
 '''
 from dataclasses import dataclass
 from functools import partial
-from typing import (
-    AsyncIterator, Callable, Optional,
-    Union,
-)
+from typing import AsyncIterator, Callable, Optional

 import numpy as np
 import pyqtgraph as pg
@ -37,12 +34,8 @@ from .. import data
 from ..data import attach_shm_array
 from ..data.feed import Feed
 from ..data._sharedmem import ShmArray
-from ..data._source import Symbol
-from ._api import (
-    Fsp,
-    _load_builtins,
-    _Token,
-)
+from ._api import Fsp
+from ._api import _load_builtins

 log = get_logger(__name__)
@ -76,7 +69,8 @@ async def filter_quotes_by_sym(
 async def fsp_compute(
-    symbol: Symbol,
+    ctx: tractor.Context,
+    symbol: str,
     feed: Feed,
     quote_stream: trio.abc.ReceiveChannel,
@ -85,7 +79,7 @@ async def fsp_compute(
     func: Callable,

-    # attach_stream: bool = False,
+    attach_stream: bool = False,
     task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,

 ) -> None:
@ -95,81 +89,40 @@ async def fsp_compute(
         disabled=True
     )

-    fqsn = symbol.front_fqsn()
     out_stream = func(

         # TODO: do we even need this if we do the feed api right?
         # shouldn't a local stream do this before we get a handle
         # to the async iterable? it's that or we do some kinda
         # async itertools style?
-        filter_quotes_by_sym(fqsn, quote_stream),
+        filter_quotes_by_sym(symbol, quote_stream),

-        # XXX: currently the ``ohlcv`` arg
         feed.shm,
     )
     # Conduct a single iteration of fsp with historical bars input
     # and get historical output
-    history_output: Union[
-        dict[str, np.ndarray],  # multi-output case
-        np.ndarray,             # single output case
-    ]
     history_output = await out_stream.__anext__()

     func_name = func.__name__
     profiler(f'{func_name} generated history')
     # build struct array with an 'index' field to push as history
+    history = np.zeros(
+        len(history_output),
+        dtype=dst.array.dtype
+    )

     # TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no?
     # if the output array is multi-field then push
     # each respective field.
-    fields = getattr(dst.array.dtype, 'fields', None).copy()
-    fields.pop('index')
-    history: Optional[np.ndarray] = None
-
-    if fields and len(fields) > 1 and fields:
-        if not isinstance(history_output, dict):
-            raise ValueError(
-                f'`{func_name}` is a multi-output FSP and should yield a '
-                '`dict[str, np.ndarray]` for history'
-            )
-
+    fields = getattr(history.dtype, 'fields', None)
+    if fields:
+        # TODO: nptyping here!
         for key in fields.keys():
-            if key in history_output:
-                output = history_output[key]
-
-                if history is None:
-
-                    if output is None:
-                        length = len(src.array)
-                    else:
-                        length = len(output)
-
-                    # using the first output, determine
-                    # the length of the struct-array that
-                    # will be pushed to shm.
-                    history = np.zeros(
-                        length,
-                        dtype=dst.array.dtype
-                    )
-
-                if output is None:
-                    continue
-
-                history[key] = output
+            if key in history.dtype.fields:
+                history[func_name] = history_output

     # single-key output stream
     else:
-        if not isinstance(history_output, np.ndarray):
-            raise ValueError(
-                f'`{func_name}` is a single output FSP and should yield an '
-                '`np.ndarray` for history'
-            )
-        history = np.zeros(
-            len(history_output),
-            dtype=dst.array.dtype
-        )
         history[func_name] = history_output

     # TODO: XXX:
@ -192,47 +145,40 @@ async def fsp_compute(
     profiler(f'{func_name} pushed history')
     profiler.finish()

+    # TODO: UGH, what is the right way to do something like this?
+    if not ctx._started_called:
+        await ctx.started(index)
+
     # setup a respawn handle
     with trio.CancelScope() as cs:
-        # TODO: might be better to just make a "restart" method where
-        # the target task is spawned implicitly and then the event is
-        # set via some higher level api? At that point we might as well
-        # be writing a one-cancels-one nursery though right?
         tracker = TaskTracker(trio.Event(), cs)
         task_status.started((tracker, index))

         profiler(f'{func_name} yield last index')

         # import time
         # last = time.time()

         try:
-            # rt stream
-            async with ctx.open_stream() as stream:
-                async for processed in out_stream:
-
-                    log.debug(f"{func_name}: {processed}")
-                    key, output = processed
-                    index = src.index
-                    dst.array[-1][key] = output
-
-                    # NOTE: for now we aren't streaming this to the consumer
-                    # stream latest array index entry which basically just acts
-                    # as trigger msg to tell the consumer to read from shm
-                    # TODO: further this should likely be implemented much
-                    # like our `Feed` api where there is one background
-                    # "service" task which computes output and then sends to
-                    # N-consumers who subscribe for the real-time output,
-                    # which we'll likely want to implement using local-mem
-                    # chans for the fan out?
-                    # if attach_stream:
-                    #     await client_stream.send(index)
-
-                    # period = time.time() - last
-                    # hz = 1/period if period else float('nan')
-                    # if hz > 60:
-                    #     log.info(f'FSP quote too fast: {hz}')
-                    # last = time.time()
+            async for processed in out_stream:
+
+                log.debug(f"{func_name}: {processed}")
+                key, output = processed
+                index = src.index
+                dst.array[-1][key] = output
+
+                # NOTE: for now we aren't streaming this to the consumer
+                # stream latest array index entry which basically just acts
+                # as trigger msg to tell the consumer to read from shm
+                if attach_stream:
+                    await stream.send(index)
+
+                # period = time.time() - last
+                # hz = 1/period if period else float('nan')
+                # if hz > 60:
+                #     log.info(f'FSP quote too fast: {hz}')
+                # last = time.time()
         finally:
             tracker.complete.set()
@ -243,15 +189,14 @@ async def cascade(
     ctx: tractor.Context,

     # data feed key
-    fqsn: str,
+    brokername: str,
+    symbol: str,

     src_shm_token: dict,
     dst_shm_token: tuple[str, np.dtype],

     ns_path: NamespacePath,
-
-    shm_registry: dict[str, _Token],

     zero_on_step: bool = False,
     loglevel: Optional[str] = None,
@ -261,10 +206,7 @@ async def cascade(
     destination shm array buffer.

     '''
-    profiler = pg.debug.Profiler(
-        delayed=False,
-        disabled=False
-    )
+    profiler = pg.debug.Profiler(delayed=False, disabled=False)

     if loglevel:
         get_console_log(loglevel)
@ -277,21 +219,9 @@ async def cascade(
     log.info(
         f'Registered FSP set:\n{lines}'
     )

-    # update actorlocal flows table which registers
-    # readonly "instances" of this fsp for symbol/source
-    # so that consumer fsps can look it up by source + fsp.
-    # TODO: ugh i hate this wind/unwind to list over the wire
-    # but not sure how else to do it.
-    for (token, fsp_name, dst_token) in shm_registry:
-        Fsp._flow_registry[
-            (_Token.from_msg(token), fsp_name)
-        ] = _Token.from_msg(dst_token)
-
-    fsp: Fsp = reg.get(
+    func: Fsp = reg.get(
         NamespacePath(ns_path)
     )
-    func = fsp.func
    if not func:
        # TODO: assume it's a func target path
@ -299,7 +229,8 @@ async def cascade(
     # open a data feed stream with requested broker
     async with data.feed.maybe_open_feed(
-        [fqsn],
+        brokername,
+        [symbol],

         # TODO throttle tick outputs from *this* daemon since
         # it'll emit tons of ticks due to the throttle only
@ -308,7 +239,6 @@ async def cascade(
         # tick_throttle=60,

     ) as (feed, quote_stream):

-        symbol = feed.symbols[fqsn]
-
         profiler(f'{func}: feed up')
@ -323,6 +253,7 @@ async def cascade(
         fsp_target = partial(

             fsp_compute,
+            ctx=ctx,
             symbol=symbol,
             feed=feed,
             quote_stream=quote_stream,
@ -331,7 +262,7 @@ async def cascade(
             src=src,
             dst=dst,

-            # target
+            # func_name=func_name,
             func=func
         )
@ -343,118 +274,85 @@ async def cascade(
profiler(f'{func_name}: fsp up') profiler(f'{func_name}: fsp up')
# sync client async def resync(tracker: TaskTracker) -> tuple[TaskTracker, int]:
await ctx.started(index) # TODO: adopt an incremental update engine/approach
# where possible here eventually!
log.warning(f're-syncing fsp {func_name} to source')
tracker.cs.cancel()
await tracker.complete.wait()
return await n.start(fsp_target)
# XXX: rt stream with client which we MUST def is_synced(
# open here (and keep it open) in order to make src: ShmArray,
# incremental "updates" as history prepends take dst: ShmArray
# place. ) -> tuple[bool, int, int]:
async with ctx.open_stream() as client_stream: '''Predicate to determine if a destination FSP
output array is aligned to its source array.
# TODO: these likely should all become '''
# methods of this ``TaskLifetime`` or wtv step_diff = src.index - dst.index
# abstraction.. len_diff = abs(len(src.array) - len(dst.array))
async def resync( return not (
tracker: TaskTracker, # the source is likely backfilling and we must
# sync history calculations
len_diff > 2 or
) -> tuple[TaskTracker, int]: # we aren't step synced to the source and may be
# TODO: adopt an incremental update engine/approach # leading/lagging by a step
# where possible here eventually! step_diff > 1 or
log.debug(f're-syncing fsp {func_name} to source') step_diff < 0
tracker.cs.cancel() ), step_diff, len_diff
await tracker.complete.wait()
tracker, index = await n.start(fsp_target)
# always trigger UI refresh after history update, async def poll_and_sync_to_step(
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
# ``piker.ui._display.trigger_update()``.
await client_stream.send({
'fsp_update': {
'key': dst_shm_token,
'first': dst._first.value,
'last': dst._last.value,
}})
return tracker, index
def is_synced( tracker: TaskTracker,
src: ShmArray, src: ShmArray,
dst: ShmArray dst: ShmArray,
) -> tuple[bool, int, int]:
'''Predicate to determine if a destination FSP
output array is aligned to its source array.
''' ) -> tuple[TaskTracker, int]:
step_diff = src.index - dst.index
len_diff = abs(len(src.array) - len(dst.array))
return not (
# the source is likely backfilling and we must
# sync history calculations
len_diff > 2 or
# we aren't step synced to the source and may be
# leading/lagging by a step
step_diff > 1 or
step_diff < 0
), step_diff, len_diff
async def poll_and_sync_to_step(
tracker: TaskTracker,
src: ShmArray,
dst: ShmArray,
) -> tuple[TaskTracker, int]:
synced, step_diff, _ = is_synced(src, dst)
while not synced:
tracker, index = await resync(tracker)
synced, step_diff, _ = is_synced(src, dst) synced, step_diff, _ = is_synced(src, dst)
while not synced:
tracker, index = await resync(tracker)
synced, step_diff, _ = is_synced(src, dst)
return tracker, step_diff return tracker, step_diff
s, step, ld = is_synced(src, dst) s, step, ld = is_synced(src, dst)
# detect sample period step for subscription to increment # Increment the underlying shared memory buffer on every
# signal # "increment" msg received from the underlying data feed.
times = src.array['time'] async with feed.index_stream() as stream:
delay_s = times[-1] - times[times != times[-1]][-1]
# Increment the underlying shared memory buffer on every profiler(f'{func_name}: sample stream up')
# "increment" msg received from the underlying data feed. profiler.finish()
async with feed.index_stream(
int(delay_s)
) as istream:
profiler(f'{func_name}: sample stream up') async for msg in stream:
profiler.finish()
async for _ in istream: # respawn the compute task if the source
# array has been updated such that we compute
# new history from the (prepended) source.
synced, step_diff, _ = is_synced(src, dst)
if not synced:
tracker, step_diff = await poll_and_sync_to_step(
tracker,
src,
dst,
)
# respawn the compute task if the source # skip adding a last bar since we should already
# array has been updated such that we compute # be step aligned
# new history from the (prepended) source. if step_diff == 0:
synced, step_diff, _ = is_synced(src, dst) continue
if not synced:
tracker, step_diff = await poll_and_sync_to_step(
tracker,
src,
dst,
)
# skip adding a last bar since we should already # read out last shm row, copy and write new row
# be step aligned array = dst.array
if step_diff == 0:
continue
# read out last shm row, copy and write new row # some metrics like vlm should be reset
array = dst.array # to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
# some metrics like vlm should be reset dst.push(last)
# to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
dst.push(last)
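
as a standalone illustration, the ``is_synced`` predicate wired
through the loop above boils down to the following (a sketch using
plain ints in place of real ``ShmArray`` state)::

    def is_synced_sketch(src_index, dst_index, src_len, dst_len):
        # out of sync when the source is backfilling (lengths
        # diverge by more than 2) or when the dst leads/lags the
        # source by more than one step.
        step_diff = src_index - dst_index
        len_diff = abs(src_len - dst_len)
        synced = not (len_diff > 2 or step_diff > 1 or step_diff < 0)
        return synced, step_diff, len_diff

    assert is_synced_sketch(100, 100, 50, 50)[0]      # aligned
    assert not is_synced_sketch(100, 98, 50, 50)[0]   # dst lags 2 steps
    assert not is_synced_sketch(100, 100, 60, 50)[0]  # src backfilling
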

View File

@ -167,36 +167,9 @@ def _wma(
assert length == len(weights) assert length == len(weights)
# lol, for long sequences this is nutso slow and expensive..
return np.convolve(signal, weights, 'valid') return np.convolve(signal, weights, 'valid')
@fsp
async def wma(
source, #: AsyncStream[np.ndarray],
length: int,
ohlcv: np.ndarray, # price time-frame "aware"
) -> AsyncIterator[np.ndarray]: # maybe something like FspStream?
'''
Streaming weighted moving average.
``weights`` is a sequence of already scaled values. As an example
for the WMA often found in "technical analysis":
``weights = np.arange(1, N) / (N*(N-1)/2)``.
'''
# deliver historical output as "first yield"
yield _wma(ohlcv.array['close'], length)
# begin real-time section
async for quote in source:
for tick in iterticks(quote, type='trade'):
yield _wma(ohlcv.last(length))
@fsp @fsp
async def rsi( async def rsi(
@ -251,3 +224,29 @@ async def rsi(
down_ema_last=last_down_ema_close, down_ema_last=last_down_ema_close,
) )
yield rsi_out[-1:] yield rsi_out[-1:]
@fsp
async def wma(
source, #: AsyncStream[np.ndarray],
length: int,
ohlcv: np.ndarray, # price time-frame "aware"
) -> AsyncIterator[np.ndarray]: # maybe something like FspStream?
'''
Streaming weighted moving average.
``weights`` is a sequence of already scaled values. As an example
for the WMA often found in "technical analysis":
``weights = np.arange(1, N) / (N*(N-1)/2)``.
'''
# deliver historical output as "first yield"
yield _wma(ohlcv.array['close'], length)
# begin real-time section
async for quote in source:
for tick in iterticks(quote, type='trade'):
yield _wma(ohlcv.last(length))
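
as a quick check of the "already scaled" weights mentioned in the
docstring above (a sketch, not part of this diff; note the division
by the weight sum)::

    import numpy as np

    N = 5
    # 1..N-1 linear weights, normalized by their sum N*(N-1)/2
    weights = np.arange(1, N) / (N * (N - 1) / 2)
    assert np.isclose(weights.sum(), 1.0)

    signal = np.arange(10, dtype=float)
    # one weighted mean per full window, most recent window last
    wma = np.convolve(signal, weights, 'valid')
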

View File

@ -22,25 +22,17 @@ from tractor.trionics._broadcast import AsyncReceiver
from ._api import fsp from ._api import fsp
from ..data._normalize import iterticks from ..data._normalize import iterticks
from ..data._sharedmem import ShmArray from ..data._sharedmem import ShmArray
from ._momo import _wma
from ..log import get_logger
log = get_logger(__name__)
# NOTE: is the same as our `wma` fsp, and if so which one is faster?
# Ohhh, this is an IIR style i think? So it has an anchor point
# effectively instead of a moving window/FIR style?
def wap( def wap(
signal: np.ndarray, signal: np.ndarray,
weights: np.ndarray, weights: np.ndarray,
) -> np.ndarray: ) -> np.ndarray:
''' """Weighted average price from signal and weights.
Weighted average price from signal and weights.
''' """
cum_weights = np.cumsum(weights) cum_weights = np.cumsum(weights)
cum_weighted_input = np.cumsum(signal * weights) cum_weighted_input = np.cumsum(signal * weights)
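
the ``wap()`` body shown here is truncated by the diff; presumably
the final step divides the two cumulative sums. a self-contained
sketch under that assumption::

    import numpy as np

    def wap_sketch(signal: np.ndarray, weights: np.ndarray) -> np.ndarray:
        # running weighted average: cumulative weighted sum over
        # cumulative weight (eg. price weighted by volume -> a vwap)
        cum_weights = np.cumsum(weights)
        cum_weighted_input = np.cumsum(signal * weights)
        return cum_weighted_input / cum_weights

    prices = np.array([10., 11., 12.])
    volumes = np.array([100., 50., 25.])
    vwap = wap_sketch(prices, volumes)  # -> [10.0, 10.33.., 10.57..]
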
@ -97,10 +89,7 @@ async def tina_vwap(
# vwap_tot = h_vwap[-1] # vwap_tot = h_vwap[-1]
async for quote in source: async for quote in source:
for tick in iterticks( for tick in iterticks(quote, types=['trade']):
quote,
types=['trade'],
):
# c, h, l, v = ohlcv.array[-1][ # c, h, l, v = ohlcv.array[-1][
# ['closes', 'high', 'low', 'volume'] # ['closes', 'high', 'low', 'volume']
@ -118,12 +107,8 @@ async def tina_vwap(
@fsp( @fsp(
outputs=( outputs=('dolla_vlm', 'dark_vlm'),
'dolla_vlm', ohlc=False,
'dark_vlm',
'trade_count',
'dark_trade_count',
),
curve_style='step', curve_style='step',
) )
async def dolla_vlm( async def dolla_vlm(
@ -147,24 +132,14 @@ async def dolla_vlm(
v = a['volume'] v = a['volume']
# on first iteration yield history # on first iteration yield history
yield { yield chl3 * v
'dolla_vlm': chl3 * v,
'dark_vlm': None,
}
i = ohlcv.index i = ohlcv.index
dvlm = vlm = 0 output = vlm = 0
dark_trade_count = trade_count = 0 dvlm = 0
async for quote in source: async for quote in source:
for tick in iterticks( for tick in iterticks(quote):
quote,
types=(
'trade',
'dark_trade',
),
deduplicate_darks=True,
):
# this computes tick-by-tick weightings from here forward # this computes tick-by-tick weightings from here forward
size = tick['size'] size = tick['size']
@ -173,30 +148,24 @@ async def dolla_vlm(
li = ohlcv.index li = ohlcv.index
if li > i: if li > i:
i = li i = li
trade_count = dark_trade_count = dvlm = vlm = 0 vlm = 0
dvlm = 0
# TODO: for marginned instruments (futes, etfs?) we need to # TODO: for marginned instruments (futes, etfs?) we need to
# show the margin $vlm by multiplying by whatever multiplier # show the margin $vlm by multiplying by whatever multiplier
# is reported in the sym info. # is reported in the sym info.
ttype = tick.get('type') ttype = tick.get('type')
if ttype == 'dark_trade': if ttype == 'dark_trade':
print(f'dark_trade: {tick}')
key = 'dark_vlm'
dvlm += price * size dvlm += price * size
yield 'dark_vlm', dvlm output = dvlm
dark_trade_count += 1
yield 'dark_trade_count', dark_trade_count
# print(f'{dark_trade_count}th dark_trade: {tick}')
else: else:
# print(f'vlm: {tick}') key = 'dolla_vlm'
vlm += price * size vlm += price * size
yield 'dolla_vlm', vlm output = vlm
trade_count += 1
yield 'trade_count', trade_count
# TODO: plot both to compare? # TODO: plot both to compare?
# c, h, l, v = ohlcv.last()[ # c, h, l, v = ohlcv.last()[
@ -205,154 +174,4 @@ async def dolla_vlm(
# tina_lvlm = c+h+l/3 * v # tina_lvlm = c+h+l/3 * v
# print(f' tinal vlm: {tina_lvlm}') # print(f' tinal vlm: {tina_lvlm}')
yield key, output
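
stripped of the async plumbing, the lit/dark accumulation above
reduces to this synchronous sketch (the ``ticks`` iterable of
``(bar_index, tick)`` pairs is hypothetical)::

    def accumulate_dollar_volume(ticks, start_index: int):
        # tally lit and dark $-volume (and trade counts) separately,
        # resetting everything whenever the bar index steps forward.
        vlm = dvlm = trade_count = dark_trade_count = 0
        i = start_index
        for li, tick in ticks:
            if li > i:
                i = li
                vlm = dvlm = trade_count = dark_trade_count = 0

            dollars = tick['price'] * tick['size']
            if tick.get('type') == 'dark_trade':
                dvlm += dollars
                dark_trade_count += 1
            else:
                vlm += dollars
                trade_count += 1

        return vlm, dvlm, trade_count, dark_trade_count
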
@fsp(
# TODO: eventually I guess we should support some kinda declarative
# graphics config syntax per output yah? That seems like a clean way
# to let users configure things? Not sure how exactly to offer that
# api as well as how to expose such a thing *inside* the body?
outputs=(
# pulled verbatim from `ib` for now
'1m_trade_rate',
'1m_vlm_rate',
# our own instantaneous rate calcs which are all
# parameterized by a samples count (bars) period
'trade_rate',
'dark_trade_rate',
'dvlm_rate',
'dark_dvlm_rate',
),
curve_style='line',
)
async def flow_rates(
source: AsyncReceiver[dict],
ohlcv: ShmArray, # OHLC sampled history
# TODO (idea): a dynamic generic / boxing type that can be updated by other
# FSPs, user input, and possibly any general event stream in
# real-time. Hint: ideally implemented with caching until mutated
# ;)
period: 'Param[int]' = 6, # noqa
# TODO: support other means by providing a map
# to weights `partial()`-ed with `wma()`?
mean_type: str = 'arithmetic',
# TODO (idea): a generic for declaring boxed fsps much like ``pytest``
# fixtures? This probably needs a lot of thought if we want to offer
# a higher level composition syntax eventually (oh right gotta make
# an issue for that).
# ideas for how to allow composition / intercalling:
# - offer a `Fsp.get_history()` to do the first yield output?
# * err wait can we just have shm access directly?
# - how would it work if some consumer fsp wanted to dynamically
# change params which are input to the callee fsp? i guess we could
# lazy copy in that case?
# dvlm: 'Fsp[dolla_vlm]'
) -> AsyncIterator[
tuple[str, Union[np.ndarray, float]],
]:
# generally no history available prior to real-time calcs
yield {
# from ib
'1m_trade_rate': None,
'1m_vlm_rate': None,
'trade_rate': None,
'dark_trade_rate': None,
'dvlm_rate': None,
'dark_dvlm_rate': None,
}
# TODO: 3.10 do ``anext()``
quote = await source.__anext__()
# ltr = 0
# lvr = 0
tr = quote.get('tradeRate')
yield '1m_trade_rate', tr or 0
vr = quote.get('volumeRate')
yield '1m_vlm_rate', vr or 0
yield 'trade_rate', 0
yield 'dark_trade_rate', 0
yield 'dvlm_rate', 0
yield 'dark_dvlm_rate', 0
# NOTE: in theory we could dynamically allocate a cascade based on
# this call but not sure if that's too "dynamic" in terms of
# validating cascade flows from message typing perspective.
# attach to ``dolla_vlm`` fsp running
# on this same source flow.
dvlm_shm = dolla_vlm.get_shm(ohlcv)
# precompute arithmetic mean weights (all ones)
seq = np.full((period,), 1)
weights = seq / seq.sum()
async for quote in source:
if not quote:
log.error("OH WTF NO QUOTE IN FSP")
continue
# dvlm_wma = _wma(
# dvlm_shm.array['dolla_vlm'],
# period,
# weights=weights,
# )
# yield 'dvlm_rate', dvlm_wma[-1]
if period > 1:
trade_rate_wma = _wma(
dvlm_shm.array['trade_count'][-period:],
period,
weights=weights,
)
trade_rate = trade_rate_wma[-1]
# print(trade_rate)
yield 'trade_rate', trade_rate
else:
# instantaneous rate per sample step
count = dvlm_shm.array['trade_count'][-1]
yield 'trade_rate', count
# TODO: skip this if no dark vlm is declared
# by symbol info (eg. in crypto$)
# dark_dvlm_wma = _wma(
# dvlm_shm.array['dark_vlm'],
# period,
# weights=weights,
# )
# yield 'dark_dvlm_rate', dark_dvlm_wma[-1]
if period > 1:
dark_trade_rate_wma = _wma(
dvlm_shm.array['dark_trade_count'][-period:],
period,
weights=weights,
)
yield 'dark_trade_rate', dark_trade_rate_wma[-1]
else:
# instantaneous rate per sample step
dark_count = dvlm_shm.array['dark_trade_count'][-1]
yield 'dark_trade_rate', dark_count
# XXX: ib specific schema we should
# probably pre-pack ourselves.
# tr = quote.get('tradeRate')
# if tr is not None and tr != ltr:
# # print(f'trade rate: {tr}')
# yield '1m_trade_rate', tr
# ltr = tr
# vr = quote.get('volumeRate')
# if vr is not None and vr != lvr:
# # print(f'vlm rate: {vr}')
# yield '1m_vlm_rate', vr
# lvr = vr
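
the windowed rate calc above reduces to a uniform-weight WMA over
the trailing ``period`` counts; a plain numpy sketch (the
``trade_counts`` array stands in for the shm column)::

    import numpy as np

    def trade_rate_sketch(trade_counts: np.ndarray, period: int = 6) -> float:
        if period > 1:
            # arithmetic mean over the last `period` samples
            weights = np.full((period,), 1) / period
            return np.convolve(trade_counts[-period:], weights, 'valid')[-1]
        # instantaneous rate per sample step
        return float(trade_counts[-1])

    counts = np.array([3, 5, 2, 8, 4, 6], dtype=float)
    assert abs(trade_rate_sketch(counts) - 28 / 6) < 1e-9
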

View File

@ -25,13 +25,10 @@ from pygments import highlight, lexers, formatters
# Makes it so we only see the full module name when using ``__name__`` # Makes it so we only see the full module name when using ``__name__``
# without the extra "piker." prefix. # without the extra "piker." prefix.
_proj_name: str = 'piker' _proj_name = 'piker'
def get_logger( def get_logger(name: str = None) -> logging.Logger:
name: str = None,
) -> logging.Logger:
'''Return the package log or a sub-log for `name` if provided. '''Return the package log or a sub-log for `name` if provided.
''' '''
return tractor.log.get_logger(name=name, _root_name=_proj_name) return tractor.log.get_logger(name=name, _root_name=_proj_name)

View File

@ -25,10 +25,39 @@ from PyQt5.QtCore import QPointF
from PyQt5.QtWidgets import QGraphicsPathItem from PyQt5.QtWidgets import QGraphicsPathItem
if TYPE_CHECKING: if TYPE_CHECKING:
from ._axes import PriceAxis
from ._chart import ChartPlotWidget from ._chart import ChartPlotWidget
from ._label import Label from ._label import Label
def marker_right_points(
chart: ChartPlotWidget, # noqa
marker_size: int = 20,
) -> (float, float, float):
'''
Return x-dimension, y-axis-aware, level-line marker oriented scene
values.
X values correspond to the end of a level line, the end of
a paired level-line marker, and the rightmost side of the "right"
axis, respectively.
'''
# TODO: compute some sensible maximum value here
# and use a humanized scheme to limit to that length.
l1_len = chart._max_l1_line_len
ryaxis = chart.getAxis('right')
r_axis_x = ryaxis.pos().x()
up_to_l1_sc = r_axis_x - l1_len - 10
marker_right = up_to_l1_sc - (1.375 * 2 * marker_size)
line_end = marker_right - (6/16 * marker_size)
return line_end, marker_right, r_axis_x
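
plugging in some assumed inputs (a 20px marker, a 100px max L1
label, right axis at scene x = 800) shows how the three returned
x-coords step leftward from the axis::

    marker_size = 20
    l1_len = 100       # assumed chart._max_l1_line_len
    r_axis_x = 800.0   # assumed right y-axis scene position

    up_to_l1_sc = r_axis_x - l1_len - 10                    # 690.0
    marker_right = up_to_l1_sc - (1.375 * 2 * marker_size)  # 635.0
    line_end = marker_right - (6 / 16 * marker_size)        # 627.5
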
def vbr_left( def vbr_left(
label: Label, label: Label,

View File

@ -26,6 +26,8 @@ from PyQt5.QtWidgets import QGraphicsPathItem
from pyqtgraph import Point, functions as fn, Color from pyqtgraph import Point, functions as fn, Color
import numpy as np import numpy as np
from ._anchors import marker_right_points
def mk_marker_path( def mk_marker_path(
@ -114,7 +116,7 @@ class LevelMarker(QGraphicsPathItem):
self.get_level = get_level self.get_level = get_level
self._on_paint = on_paint self._on_paint = on_paint
self.scene_x = lambda: chart.marker_right_points()[1] self.scene_x = lambda: marker_right_points(chart)[1]
self.level: float = 0 self.level: float = 0
self.keep_in_view = keep_in_view self.keep_in_view = keep_in_view
@ -167,7 +169,7 @@ class LevelMarker(QGraphicsPathItem):
vr = view.state['viewRange'] vr = view.state['viewRange']
ymn, ymx = vr[1] ymn, ymx = vr[1]
# _, marker_right, _ = line._chart.marker_right_points() # _, marker_right, _ = marker_right_points(line._chart)
x = self.scene_x() x = self.scene_x()
if self.style == '>|': # short style, points "down-to" line if self.style == '>|': # short style, points "down-to" line

View File

@ -19,10 +19,10 @@ Chart axes graphics and behavior.
""" """
from functools import lru_cache from functools import lru_cache
from typing import Optional, Callable from typing import List, Tuple, Optional, Callable
from math import floor from math import floor
import numpy as np import pandas as pd
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF from PyQt5.QtCore import QPointF
@ -44,14 +44,10 @@ class Axis(pg.AxisItem):
self, self,
linkedsplits, linkedsplits,
typical_max_str: str = '100 000.000', typical_max_str: str = '100 000.000',
text_color: str = 'bracket',
**kwargs **kwargs
) -> None: ) -> None:
super().__init__( super().__init__(**kwargs)
# textPen=textPen,
**kwargs
)
# XXX: pretty sure this makes things slower # XXX: pretty sure this makes things slower
# self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache) # self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
@ -78,32 +74,19 @@ class Axis(pg.AxisItem):
}) })
self.setTickFont(_font.font) self.setTickFont(_font.font)
# NOTE: this is for surrounding "border" # NOTE: this is for surrounding "border"
self.setPen(_axis_pen) self.setPen(_axis_pen)
# this is the text color # this is the text color
# self.setTextPen(pg.mkPen(hcolor(text_color))) self.setTextPen(_axis_pen)
self.text_color = text_color
self.typical_br = _font._qfm.boundingRect(typical_max_str) self.typical_br = _font._qfm.boundingRect(typical_max_str)
# size the pertinent axis dimension to a "typical value" # size the pertinent axis dimension to a "typical value"
self.size_to_values() self.size_to_values()
@property
def text_color(self) -> str:
return self._text_color
@text_color.setter
def text_color(self, text_color: str) -> None:
self.setTextPen(pg.mkPen(hcolor(text_color)))
self._text_color = text_color
def size_to_values(self) -> None: def size_to_values(self) -> None:
pass pass
def txt_offsets(self) -> tuple[int, int]: def txt_offsets(self) -> Tuple[int, int]:
return tuple(self.style['tickTextOffset']) return tuple(self.style['tickTextOffset'])
@ -126,8 +109,7 @@ class PriceAxis(Axis):
def set_title( def set_title(
self, self,
title: str, title: str,
view: Optional[ChartView] = None, view: Optional[ChartView] = None
color: Optional[str] = None,
) -> Label: ) -> Label:
''' '''
@ -141,7 +123,7 @@ class PriceAxis(Axis):
label = self.title = Label( label = self.title = Label(
view=view or self.linkedView(), view=view or self.linkedView(),
fmt_str=title, fmt_str=title,
color=color or self.text_color, color='bracket',
parent=self, parent=self,
# update_on_range_change=False, # update_on_range_change=False,
) )
@ -218,14 +200,13 @@ class DynamicDateAxis(Axis):
def _indexes_to_timestrs( def _indexes_to_timestrs(
self, self,
indexes: list[int], indexes: List[int],
) -> list[str]: ) -> List[str]:
chart = self.linkedsplits.chart chart = self.linkedsplits.chart
flow = chart._flows[chart.name] bars = chart._arrays[chart.name]
shm = flow.shm shm = self.linkedsplits.chart._shm
bars = shm.array
first = shm._first.value first = shm._first.value
bars_len = len(bars) bars_len = len(bars)
@ -242,17 +223,10 @@ class DynamicDateAxis(Axis):
)] )]
# TODO: **don't** have this hard coded shift to EST # TODO: **don't** have this hard coded shift to EST
# delay = times[-1] - times[-2] dts = pd.to_datetime(epochs, unit='s') # - 4*pd.offsets.Hour()
dts = np.array(epochs, dtype='datetime64[s]')
# see units listing: delay = times[-1] - times[-2]
# https://numpy.org/devdocs/reference/arrays.datetime.html#datetime-units return dts.strftime(self.tick_tpl[delay])
return list(np.datetime_as_string(dts))
# TODO: per timeframe formatting?
# - we probably need this based on zoom now right?
# prec = self.np_dt_precision[delay]
# return dts.strftime(self.tick_tpl[delay])
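
the numpy-flavored replacement can be sanity checked standalone;
a minimal sketch assuming second-resolution epoch stamps::

    import numpy as np

    epochs = [1641034800, 1641034860]  # example unix timestamps
    dts = np.array(epochs, dtype='datetime64[s]')
    # -> ['2022-01-01T11:00:00', '2022-01-01T11:01:00']
    strs = list(np.datetime_as_string(dts))
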
def tickStrings( def tickStrings(
self, self,
@ -438,7 +412,7 @@ class XAxisLabel(AxisLabel):
| QtCore.Qt.AlignCenter | QtCore.Qt.AlignCenter
) )
def size_hint(self) -> tuple[float, float]: def size_hint(self) -> Tuple[float, float]:
# size to parent axis height # size to parent axis height
return self._parent.height(), None return self._parent.height(), None
@ -452,11 +426,11 @@ class XAxisLabel(AxisLabel):
timestrs = self._parent._indexes_to_timestrs([int(value)]) timestrs = self._parent._indexes_to_timestrs([int(value)])
if not len(timestrs): if not timestrs.any():
return return
pad = 1*' ' pad = 1*' '
self.label_str = pad + str(timestrs[0]) + pad self.label_str = pad + timestrs[0] + pad
_, y_offset = self._parent.txt_offsets() _, y_offset = self._parent.txt_offsets()
@ -517,7 +491,7 @@ class YAxisLabel(AxisLabel):
if getattr(self._parent, 'txt_offsets', False): if getattr(self._parent, 'txt_offsets', False):
self.x_offset, y_offset = self._parent.txt_offsets() self.x_offset, y_offset = self._parent.txt_offsets()
def size_hint(self) -> tuple[float, float]: def size_hint(self) -> Tuple[float, float]:
# size to parent axis width(-ish) # size to parent axis width(-ish)
wsh = self._dpifont.boundingRect(' ').height() / 2 wsh = self._dpifont.boundingRect(' ').height() / 2
return ( return (

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers) # Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -19,14 +19,10 @@ High level chart-widget apis.
''' '''
from __future__ import annotations from __future__ import annotations
from typing import Optional, TYPE_CHECKING from typing import Optional
from PyQt5 import QtCore, QtWidgets from PyQt5 import QtCore, QtWidgets
from PyQt5.QtCore import ( from PyQt5.QtCore import Qt
Qt,
QLineF,
# QPointF,
)
from PyQt5.QtWidgets import ( from PyQt5.QtWidgets import (
QFrame, QFrame,
QWidget, QWidget,
@ -47,30 +43,25 @@ from ._cursor import (
Cursor, Cursor,
ContentsLabel, ContentsLabel,
) )
from ..data._sharedmem import ShmArray
from ._l1 import L1Labels from ._l1 import L1Labels
from ._ohlc import BarItems from ._ohlc import BarItems
from ._curve import ( from ._curve import FastAppendCurve
Curve,
StepCurve,
)
from ._style import ( from ._style import (
hcolor, hcolor,
CHART_MARGINS, CHART_MARGINS,
_xaxis_at, _xaxis_at,
_min_points_to_show, _min_points_to_show,
_bars_from_right_in_follow_mode,
_bars_to_left_in_follow_mode,
) )
from ..data.feed import Feed from ..data.feed import Feed
from ..data._source import Symbol from ..data._source import Symbol
from ..data._sharedmem import ShmArray
from ..log import get_logger from ..log import get_logger
from ._interaction import ChartView from ._interaction import ChartView
from ._forms import FieldsForm from ._forms import FieldsForm
from .._profile import pg_profile_enabled, ms_slower_then
from ._overlay import PlotItemOverlay from ._overlay import PlotItemOverlay
from ._flows import Flow
if TYPE_CHECKING:
from ._display import DisplayState
log = get_logger(__name__) log = get_logger(__name__)
@ -120,9 +111,6 @@ class GodWidget(QWidget):
# assigned in the startup func `_async_main()` # assigned in the startup func `_async_main()`
self._root_n: trio.Nursery = None self._root_n: trio.Nursery = None
self._widgets: dict[str, QWidget] = {}
self._resizing: bool = False
# def init_timeframes_ui(self): # def init_timeframes_ui(self):
# self.tf_layout = QHBoxLayout() # self.tf_layout = QHBoxLayout()
# self.tf_layout.setSpacing(0) # self.tf_layout.setSpacing(0)
@ -238,23 +226,16 @@ class GodWidget(QWidget):
# chart is already in memory so just focus it # chart is already in memory so just focus it
linkedsplits.show() linkedsplits.show()
linkedsplits.focus() linkedsplits.focus()
linkedsplits.graphics_cycle()
await trio.sleep(0) await trio.sleep(0)
# resume feeds *after* rendering chart view asap # resume feeds *after* rendering chart view asap
chart.resume_all_feeds() chart.resume_all_feeds()
# TODO: we need a check to see if the chart
# last had the xlast in view, if so then shift so it's
# still in view, if the user was viewing history then
# do nothing yah?
chart.default_view()
self.linkedsplits = linkedsplits self.linkedsplits = linkedsplits
symbol = linkedsplits.symbol symbol = linkedsplits.symbol
if symbol is not None: if symbol is not None:
self.window.setWindowTitle( self.window.setWindowTitle(
f'{symbol.front_fqsn()} ' f'{symbol.key}@{symbol.brokers} '
f'tick:{symbol.tick_size}' f'tick:{symbol.tick_size}'
) )
@ -277,16 +258,7 @@ class GodWidget(QWidget):
Where we do UX magic to make things not suck B) Where we do UX magic to make things not suck B)
''' '''
if self._resizing: log.debug('god widget resize')
return
self._resizing = True
log.info('God widget resize')
for name, widget in self._widgets.items():
widget.on_resize()
self._resizing = False
class ChartnPane(QFrame): class ChartnPane(QFrame):
@ -361,19 +333,8 @@ class LinkedSplits(QWidget):
self.layout.setContentsMargins(0, 0, 0, 0) self.layout.setContentsMargins(0, 0, 0, 0)
self.layout.addWidget(self.splitter) self.layout.addWidget(self.splitter)
# chart-local graphics state that can be passed to
# a ``graphic_update_cycle()`` call by any task wishing to
# update the UI for a given "chart instance".
self.display_state: Optional[DisplayState] = None
self._symbol: Symbol = None self._symbol: Symbol = None
def graphics_cycle(self, **kwargs) -> None:
from . import _display
ds = self.display_state
if ds:
return _display.graphics_update_cycle(ds, **kwargs)
@property @property
def symbol(self) -> Symbol: def symbol(self) -> Symbol:
return self._symbol return self._symbol
@ -388,15 +349,12 @@ class LinkedSplits(QWidget):
''' '''
ln = len(self.subplots) ln = len(self.subplots)
# proportion allocated to consumer subcharts
if not prop: if not prop:
prop = 3/8*5/8 # proportion allocated to consumer subcharts
if ln < 2:
# if ln < 2: prop = 1/3
# prop = 3/8*5/8 elif ln >= 2:
prop = 3/8
# elif ln >= 2:
# prop = 3/8
major = 1 - prop major = 1 - prop
min_h_ind = int((self.height() * prop) / ln) min_h_ind = int((self.height() * prop) / ln)
@ -418,7 +376,7 @@ class LinkedSplits(QWidget):
self, self,
symbol: Symbol, symbol: Symbol,
shm: ShmArray, array: np.ndarray,
sidepane: FieldsForm, sidepane: FieldsForm,
style: str = 'bar', style: str = 'bar',
@ -443,7 +401,7 @@ class LinkedSplits(QWidget):
self.chart = self.add_plot( self.chart = self.add_plot(
name=symbol.key, name=symbol.key,
shm=shm, array=array,
style=style, style=style,
_is_main=True, _is_main=True,
@ -471,7 +429,7 @@ class LinkedSplits(QWidget):
self, self,
name: str, name: str,
shm: ShmArray, array: np.ndarray,
array_key: Optional[str] = None, array_key: Optional[str] = None,
style: str = 'line', style: str = 'line',
@ -515,6 +473,7 @@ class LinkedSplits(QWidget):
name=name, name=name,
data_key=array_key or name, data_key=array_key or name,
array=array,
parent=qframe, parent=qframe,
linkedsplits=self, linkedsplits=self,
axisItems=axes, axisItems=axes,
@ -578,7 +537,7 @@ class LinkedSplits(QWidget):
graphics, data_key = cpw.draw_ohlc( graphics, data_key = cpw.draw_ohlc(
name, name,
shm, array,
array_key=array_key array_key=array_key
) )
self.cursor.contents_labels.add_label( self.cursor.contents_labels.add_label(
@ -592,7 +551,7 @@ class LinkedSplits(QWidget):
add_label = True add_label = True
graphics, data_key = cpw.draw_curve( graphics, data_key = cpw.draw_curve(
name, name,
shm, array,
array_key=array_key, array_key=array_key,
color='default_light', color='default_light',
) )
@ -601,7 +560,7 @@ class LinkedSplits(QWidget):
add_label = True add_label = True
graphics, data_key = cpw.draw_curve( graphics, data_key = cpw.draw_curve(
name, name,
shm, array,
array_key=array_key, array_key=array_key,
step_mode=True, step_mode=True,
color='davies', color='davies',
@ -655,6 +614,18 @@ class LinkedSplits(QWidget):
cpw.sidepane.setMinimumWidth(sp_w) cpw.sidepane.setMinimumWidth(sp_w)
cpw.sidepane.setMaximumWidth(sp_w) cpw.sidepane.setMaximumWidth(sp_w)
# import pydantic
# class Graphics(pydantic.BaseModel):
# '''
# Data-AGGRegate: high level API onto multiple (categorized)
# ``ShmArray``s with high level processing routines for
# graphics computations and display.
# '''
# arrays: dict[str, np.ndarray] = {}
# graphics: dict[str, pg.GraphicsObject] = {}
class ChartPlotWidget(pg.PlotWidget): class ChartPlotWidget(pg.PlotWidget):
''' '''
@ -689,6 +660,7 @@ class ChartPlotWidget(pg.PlotWidget):
# the "data view" we generate graphics from # the "data view" we generate graphics from
name: str, name: str,
array: np.ndarray,
data_key: str, data_key: str,
linkedsplits: LinkedSplits, linkedsplits: LinkedSplits,
@ -741,9 +713,16 @@ class ChartPlotWidget(pg.PlotWidget):
self._max_l1_line_len: float = 0 self._max_l1_line_len: float = 0
# self.setViewportMargins(0, 0, 0, 0) # self.setViewportMargins(0, 0, 0, 0)
# self._ohlc = array # readonly view of ohlc data
# TODO: move to Aggr above XD
# readonly view of data arrays
self._arrays = {
self.data_key: array,
}
self._graphics = {} # registry of underlying graphics
# registry of overlay curve names # registry of overlay curve names
self._flows: dict[str, Flow] = {} self._overlays: dict[str, ShmArray] = {}
self._feeds: dict[Symbol, Feed] = {} self._feeds: dict[Symbol, Feed] = {}
@ -756,6 +735,7 @@ class ChartPlotWidget(pg.PlotWidget):
# show background grid # show background grid
self.showGrid(x=False, y=True, alpha=0.3) self.showGrid(x=False, y=True, alpha=0.3)
self.default_view()
self.cv.enable_auto_yrange() self.cv.enable_auto_yrange()
self.pi_overlay: PlotItemOverlay = PlotItemOverlay(self.plotItem) self.pi_overlay: PlotItemOverlay = PlotItemOverlay(self.plotItem)
@ -800,138 +780,39 @@ class ChartPlotWidget(pg.PlotWidget):
return int(vr.left()), int(vr.right()) return int(vr.left()), int(vr.right())
def bars_range(self) -> tuple[int, int, int, int]: def bars_range(self) -> tuple[int, int, int, int]:
''' """Return a range tuple for the bars present in view.
Return a range tuple for the bars present in view. """
l, r = self.view_range()
''' array = self._arrays[self.name]
main_flow = self._flows[self.name] lbar = max(l, array[0]['index'])
ifirst, l, lbar, rbar, r, ilast = main_flow.datums_range() rbar = min(r, array[-1]['index'])
return l, lbar, rbar, r return l, lbar, rbar, r
def curve_width_pxs(
self,
) -> float:
_, lbar, rbar, _ = self.bars_range()
return self.view.mapViewToDevice(
QLineF(lbar, 0, rbar, 0)
).length()
def pre_l1_xs(self) -> tuple[float, float]:
'''
Return the view x-coord for the value just before
the L1 labels on the y-axis as well as the length
of that L1 label from the y-axis.
'''
line_end, marker_right, yaxis_x = self.marker_right_points()
view = self.view
line = view.mapToView(
QLineF(line_end, 0, yaxis_x, 0)
)
return line.x1(), line.length()
def marker_right_points(
self,
marker_size: int = 20,
) -> (float, float, float):
'''
Return x-dimension, y-axis-aware, level-line marker oriented scene
values.
X values correspond to the end of a level line, the end of
a paired level-line marker, and the rightmost side of the "right"
axis, respectively.
'''
# TODO: compute some sensible maximum value here
# and use a humanized scheme to limit to that length.
l1_len = self._max_l1_line_len
ryaxis = self.getAxis('right')
r_axis_x = ryaxis.pos().x()
up_to_l1_sc = r_axis_x - l1_len - 10
marker_right = up_to_l1_sc - (1.375 * 2 * marker_size)
line_end = marker_right - (6/16 * marker_size)
return line_end, marker_right, r_axis_x
def default_view( def default_view(
self, self,
bars_from_y: int = 3000, index: int = -1,
) -> None: ) -> None:
''' """Set the view box to the "default" startup view of the scene.
Set the view box to the "default" startup view of the scene.
''' """
flow = self._flows.get(self.name) xlast = self._arrays[self.name][index]['index']
if not flow: begin = xlast - _bars_to_left_in_follow_mode
log.warning(f'`Flow` for {self.name} not loaded yet?') end = xlast + _bars_from_right_in_follow_mode
return
index = flow.shm.array['index']
xfirst, xlast = index[0], index[-1]
l, lbar, rbar, r = self.bars_range()
view = self.view
if (
rbar < 0
or l < xfirst
or l < 0
or (rbar - lbar) < 6
):
# TODO: set fixed bars count on screen that approx includes as
# many bars as possible before a downsample line is shown.
begin = xlast - bars_from_y
view.setXRange(
min=begin,
max=xlast,
padding=0,
)
# re-get range
l, lbar, rbar, r = self.bars_range()
# we get the L1 spread label "length" in view coords
# terms now that we've scaled either by user control
# or to the default set of bars as per the immediate block
# above.
marker_pos, l1_len = self.pre_l1_xs()
end = xlast + l1_len + 1
begin = end - (r - l)
# for debugging
# print(
# # f'bars range: {brange}\n'
# f'xlast: {xlast}\n'
# f'marker pos: {marker_pos}\n'
# f'l1 len: {l1_len}\n'
# f'begin: {begin}\n'
# f'end: {end}\n'
# )
# remove any custom user yrange settings # remove any custom user yrange settings
if self._static_yrange == 'axis': if self._static_yrange == 'axis':
self._static_yrange = None self._static_yrange = None
view = self.view
view.setXRange( view.setXRange(
min=begin, min=begin,
max=end, max=end,
padding=0, padding=0,
) )
self.view.maybe_downsample_graphics()
view._set_yrange() view._set_yrange()
try:
self.linked.graphics_cycle()
except IndexError:
pass
def increment_view( def increment_view(
self, self,
steps: int = 1,
vb: Optional[ChartView] = None,
) -> None: ) -> None:
""" """
Increment the data view one step to the right thus "following" Increment the data view one step to the right thus "following"
@ -939,10 +820,9 @@ class ChartPlotWidget(pg.PlotWidget):
""" """
l, r = self.view_range() l, r = self.view_range()
view = vb or self.view self.view.setXRange(
view.setXRange( min=l + 1,
min=l + steps, max=r + 1,
max=r + steps,
# TODO: holy shit, wtf dude... why tf would this not be 0 by # TODO: holy shit, wtf dude... why tf would this not be 0 by
# default... speechless. # default... speechless.
@ -951,8 +831,9 @@ class ChartPlotWidget(pg.PlotWidget):
def draw_ohlc( def draw_ohlc(
self, self,
name: str, name: str,
shm: ShmArray, data: np.ndarray,
array_key: Optional[str] = None, array_key: Optional[str] = None,
@ -962,26 +843,19 @@ class ChartPlotWidget(pg.PlotWidget):
''' '''
graphics = BarItems( graphics = BarItems(
self.linked,
self.plotItem, self.plotItem,
pen_color=self.pen_color, pen_color=self.pen_color
name=name,
) )
# adds all bar/candle graphics objects for each data point in # adds all bar/candle graphics objects for each data point in
# the np array buffer to be drawn on next render cycle # the np array buffer to be drawn on next render cycle
self.plotItem.addItem(graphics) self.plotItem.addItem(graphics)
# draw after to allow self.scene() to work...
graphics.draw_from_data(data)
data_key = array_key or name data_key = array_key or name
self._graphics[data_key] = graphics
self._flows[data_key] = Flow(
name=name,
plot=self.plotItem,
_shm=shm,
is_ohlc=True,
graphics=graphics,
)
self._add_sticky(name, bg_color='davies') self._add_sticky(name, bg_color='davies')
return graphics, data_key return graphics, data_key
@ -1022,7 +896,6 @@ class ChartPlotWidget(pg.PlotWidget):
) )
pi.hideButtons() pi.hideButtons()
# cv.enable_auto_yrange(self.view)
cv.enable_auto_yrange() cv.enable_auto_yrange()
# compose this new plot's graphics with the current chart's # compose this new plot's graphics with the current chart's
@ -1047,21 +920,19 @@ class ChartPlotWidget(pg.PlotWidget):
self, self,
name: str, name: str,
shm: ShmArray, data: np.ndarray,
array_key: Optional[str] = None, array_key: Optional[str] = None,
overlay: bool = False, overlay: bool = False,
color: Optional[str] = None, color: Optional[str] = None,
add_label: bool = True, add_label: bool = True,
pi: Optional[pg.PlotItem] = None,
step_mode: bool = False,
**pdi_kwargs, **pdi_kwargs,
) -> (pg.PlotDataItem, str): ) -> (pg.PlotDataItem, str):
''' '''
Draw a "curve" (line plot graphics) for the provided data in Draw a "curve" (line plot graphics) for the provided data in
the input shm array ``shm``. the input array ``data``.
''' '''
color = color or self.pen_color or 'default_light' color = color or self.pen_color or 'default_light'
@ -1071,37 +942,54 @@ class ChartPlotWidget(pg.PlotWidget):
data_key = array_key or name data_key = array_key or name
curve_type = { # yah, we wrote our own B)
None: Curve, curve = FastAppendCurve(
'step': StepCurve, y=data[data_key],
# TODO: x=data['index'],
# 'bars': BarsItems # antialias=True,
}['step' if step_mode else None]
curve = curve_type(
name=name, name=name,
# XXX: pretty sure this is just more overhead
# on data reads and makes graphics rendering no faster
# clipToView=True,
# TODO: see how this handles with custom ohlcv bars graphics
# and/or if we can implement something similar for OHLC graphics
# autoDownsample=True,
# downsample=60,
# downsampleMethod='subsample',
**pdi_kwargs, **pdi_kwargs,
) )
pi = pi or self.plotItem # XXX: see explanation for different caching modes:
# https://stackoverflow.com/a/39410081
# seems to only be useful if we don't re-generate the entire
# QPainterPath every time
# curve.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self._flows[data_key] = Flow( # don't ever use this - it's a colossal nightmare of artefacts
name=name, # and is disastrous for performance.
plot=pi, # curve.setCacheMode(QtWidgets.QGraphicsItem.ItemCoordinateCache)
_shm=shm,
is_ohlc=False, # register curve graphics and backing array for name
# register curve graphics with this flow self._graphics[name] = curve
graphics=curve, self._arrays[data_key] = data
)
pi = self.plotItem
# TODO: this probably needs its own method? # TODO: this probably needs its own method?
if overlay: if overlay:
# anchor_at = ('bottom', 'left')
self._overlays[name] = None
if isinstance(overlay, pg.PlotItem): if isinstance(overlay, pg.PlotItem):
if overlay not in self.pi_overlay.overlays: if overlay not in self.pi_overlay.overlays:
raise RuntimeError( raise RuntimeError(
f'{overlay} must be from `.plotitem_overlay()`' f'{overlay} must be from `.plotitem_overlay()`'
) )
pi = overlay pi = overlay
else: else:
# anchor_at = ('top', 'left') # anchor_at = ('top', 'left')
@ -1109,17 +997,7 @@ class ChartPlotWidget(pg.PlotWidget):
# (we need something that avoids clutter on x-axis). # (we need something that avoids clutter on x-axis).
self._add_sticky(name, bg_color=color) self._add_sticky(name, bg_color=color)
# NOTE: this is more or less the RENDER call that tells Qt to
# start showing the generated graphics-curves. This is kind of
# of edge-triggered call where once added any
# ``QGraphicsItem.update()`` calls are automatically displayed.
# Our internal graphics objects have their own "update from
# data" style method API that allows for real-time updates on
# the next render cycle; just note a lot of the real-time
# updates are implicit and require a bit of digging to
# understand.
pi.addItem(curve) pi.addItem(curve)
return curve, data_key return curve, data_key
# TODO: make this a ctx mngr # TODO: make this a ctx mngr
@ -1151,11 +1029,11 @@ class ChartPlotWidget(pg.PlotWidget):
) )
return last return last
def update_graphics_from_flow( def update_ohlc_from_array(
self, self,
graphics_name: str,
array_key: Optional[str] = None,
graphics_name: str,
array: np.ndarray,
**kwargs, **kwargs,
) -> pg.GraphicsObject: ) -> pg.GraphicsObject:
@ -1163,12 +1041,50 @@ class ChartPlotWidget(pg.PlotWidget):
Update the named internal graphics from ``array``. Update the named internal graphics from ``array``.
''' '''
flow = self._flows[array_key or graphics_name] self._arrays[self.name] = array
return flow.update_graphics( graphics = self._graphics[graphics_name]
array_key=array_key, graphics.update_from_array(array, **kwargs)
**kwargs, return graphics
def update_curve_from_array(
self,
graphics_name: str,
array: np.ndarray,
array_key: Optional[str] = None,
**kwargs,
) -> pg.GraphicsObject:
'''
Update the named internal graphics from ``array``.
'''
assert len(array)
data_key = array_key or graphics_name
if graphics_name not in self._overlays:
self._arrays[self.name] = array
else:
self._arrays[data_key] = array
curve = self._graphics[graphics_name]
# NOTE: back when we weren't implementing the curve graphics
# ourselves you'd have updates using this method:
# curve.setData(y=array[graphics_name], x=array['index'], **kwargs)
# NOTE: graphics **must** implement a diff based update
# operation where an internal ``FastUpdateCurve._xrange`` is
# used to determine if the underlying path needs to be
# pre/ap-pended.
curve.update_from_array(
x=array['index'],
y=array[data_key],
**kwargs
) )
return curve
# def _label_h(self, yhigh: float, ylow: float) -> float: # def _label_h(self, yhigh: float, ylow: float) -> float:
# # compute contents label "height" in view terms # # compute contents label "height" in view terms
# # to avoid having data "contents" overlap with them # # to avoid having data "contents" overlap with them
@ -1198,9 +1114,6 @@ class ChartPlotWidget(pg.PlotWidget):
# print(f"bounds (ylow, yhigh): {(ylow, yhigh)}") # print(f"bounds (ylow, yhigh): {(ylow, yhigh)}")
# TODO: pretty sure we can just call the cursor
# directly not? i don't see why we need special "signal proxies"
# for this lul..
def enterEvent(self, ev): # noqa def enterEvent(self, ev): # noqa
# pg.PlotWidget.enterEvent(self, ev) # pg.PlotWidget.enterEvent(self, ev)
self.sig_mouse_enter.emit(self) self.sig_mouse_enter.emit(self)
@ -1214,7 +1127,7 @@ class ChartPlotWidget(pg.PlotWidget):
# TODO: this should go onto some sort of # TODO: this should go onto some sort of
# data-view thinger..right? # data-view thinger..right?
ohlc = self._flows[self.name].shm.array ohlc = self._shm.array
# XXX: not sure why the time is so off here # XXX: not sure why the time is so off here
# looks like we're gonna have to do some fixing.. # looks like we're gonna have to do some fixing..
@ -1225,28 +1138,10 @@ class ChartPlotWidget(pg.PlotWidget):
else: else:
return ohlc['index'][-1] return ohlc['index'][-1]
def in_view(
self,
array: np.ndarray,
) -> np.ndarray:
'''
Slice an input struct array providing only datums
"in view" of this chart.
'''
l, lbar, rbar, r = self.bars_range()
ifirst = array[0]['index']
# slice data by offset from the first index
# available in the passed datum set.
return array[lbar - ifirst:(rbar - ifirst) + 1]
def maxmin( def maxmin(
self, self,
name: Optional[str] = None, name: Optional[str] = None,
bars_range: Optional[tuple[ bars_range: Optional[tuple[int, int, int, int]] = None,
int, int, int, int, int, int
]] = None,
) -> tuple[float, float]: ) -> tuple[float, float]:
''' '''
@ -1255,43 +1150,40 @@ class ChartPlotWidget(pg.PlotWidget):
If ``bars_range`` is provided use that range. If ``bars_range`` is provided use that range.
''' '''
# print(f'Chart[{self.name}].maxmin()') l, lbar, rbar, r = bars_range or self.bars_range()
profiler = pg.debug.Profiler( # TODO: logic to check if end of bars in view
msg=f'`{str(self)}.maxmin(name={name})`: `{self.name}`', # extra = view_len - _min_points_to_show
disabled=not pg_profile_enabled(), # begin = self._arrays['ohlc'][0]['index'] - extra
ms_threshold=ms_slower_then, # # end = len(self._arrays['ohlc']) - 1 + extra
delayed=True, # end = self._arrays['ohlc'][-1]['index'] - 1 + extra
)
# bars_len = rbar - lbar
# log.debug(
# f"\nl: {l}, lbar: {lbar}, rbar: {rbar}, r: {r}\n"
# f"view_len: {view_len}, bars_len: {bars_len}\n"
# f"begin: {begin}, end: {end}, extra: {extra}"
# )
a = self._arrays[name or self.name]
ifirst = a[0]['index']
bars = a[lbar - ifirst:rbar - ifirst + 1]
if not len(bars):
# likely no data loaded yet or extreme scrolling?
log.error(f"WTF bars_range = {lbar}:{rbar}")
return
# TODO: here we should instead look up the ``Flow.shm.array``
# and read directly from shm to avoid copying to memory first
# and then reading it again here.
flow_key = name or self.name
flow = self._flows.get(flow_key)
if ( if (
flow is None self.data_key == self.linked.symbol.key
): ):
log.error(f"flow {flow_key} doesn't exist in chart {self.name} !?") # ohlc sampled bars hi/lo lookup
key = res = 0, 0 ylow = np.nanmin(bars['low'])
yhigh = np.nanmax(bars['high'])
else: else:
( view = bars[name or self.data_key]
first, ylow = np.nanmin(view)
l, yhigh = np.nanmax(view)
lbar,
rbar,
r,
last,
) = bars_range or flow.datums_range()
profiler(f'{self.name} got bars range')
key = round(lbar), round(rbar) # print(f'{(ylow, yhigh)}')
res = flow.maxmin(*key) return ylow, yhigh
if res == (None, None):
log.error(
f"{flow_key} no mxmn for bars_range => {key} !?"
)
res = 0, 0
profiler(f'yrange mxmn: {key} -> {res}')
return res

View File

@ -1,318 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Graphics related downsampling routines for compressing to pixel
limits on the display device.
'''
import math
from typing import Optional
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import (
jit,
# float64, optional, int64,
)
from ..log import get_logger
log = get_logger(__name__)
def hl2mxmn(ohlc: np.ndarray) -> np.ndarray:
'''
Convert an OHLC struct-array containing 'high'/'low' columns
to a "joined" max/min 1-d array.
'''
index = ohlc['index']
hls = ohlc[[
'low',
'high',
]]
mxmn = np.empty(2*hls.size, dtype=np.float64)
x = np.empty(2*hls.size, dtype=np.float64)
trace_hl(hls, mxmn, x, index[0])
x = x + index[0]
return mxmn, x
@jit(
# TODO: the type annots..
# float64[:](float64[:],),
nopython=True,
)
def trace_hl(
hl: 'np.ndarray',
out: np.ndarray,
x: np.ndarray,
start: int,
# the "offset" values in the x-domain which
# place the 2 output points around each ``int``
# master index.
margin: float = 0.43,
) -> None:
'''
"Trace" the outline of the high-low values of an ohlc sequence
as a line such that the maximum deviation (aka dispersion) between
bars is preserved.
This routine is expected to modify input arrays in-place.
'''
last_l = hl['low'][0]
last_h = hl['high'][0]
for i in range(hl.size):
row = hl[i]
l, h = row['low'], row['high']
up_diff = h - last_l
down_diff = last_h - l
if up_diff > down_diff:
out[2*i + 1] = h
out[2*i] = last_l
else:
out[2*i + 1] = l
out[2*i] = last_h
last_l = l
last_h = h
x[2*i] = int(i) - margin
x[2*i + 1] = int(i) + margin
return out
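
for reference, a usage sketch of the removed helpers (assumes
``numba`` is installed; the struct dtype is illustrative)::

    import numpy as np

    ohlc = np.array(
        [(10, 1.0, 2.0), (11, 0.5, 1.5)],
        dtype=[('index', 'i8'), ('low', 'f8'), ('high', 'f8')],
    )
    mxmn, x = hl2mxmn(ohlc)
    # 2 output points per bar, x-offsets +/- 0.43 around each index
    assert mxmn.size == 2 * len(ohlc)
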
def ohlc_flatten(
ohlc: np.ndarray,
use_mxmn: bool = True,
) -> tuple[np.ndarray, np.ndarray]:
'''
Convert an OHLCV struct-array into a flat ready-for-line-plotting
1-d array that is 2x (high/low trace) or 4x (full OHLC) the input
size, with x-domain values distributed evenly over each index.
'''
index = ohlc['index']
if use_mxmn:
# traces a line optimally over highs to lows
# using numba. NOTE: pretty sure this is faster
# and looks about the same as the below output.
flat, x = hl2mxmn(ohlc)
else:
flat = rfn.structured_to_unstructured(
ohlc[['open', 'high', 'low', 'close']]
).flatten()
x = np.linspace(
start=index[0] - 0.5,
stop=index[-1] + 0.5,
num=len(flat),
)
return x, flat
def ds_m4(
x: np.ndarray,
y: np.ndarray,
# units-per-pixel-x(dimension)
uppx: float,
# XXX: troll zone / easter egg..
# want to mess with ur pal, pass in the actual
# pixel width here instead of uppx-proper (i.e. pass
# in our ``pg.GraphicsObject`` derivative's ``.px_width()``
# to mega-trip-out ur bud). Hint, it used to be implemented
# (wrongly) using "pixel width", so check the git history ;)
xrange: Optional[float] = None,
) -> tuple[int, np.ndarray, np.ndarray]:
'''
Downsample using the M4 algorithm.
This is more or less an OHLC style sampling of a line-style series.
'''
# NOTE: this method is a so called "visualization driven data
# aggregation" approach. It gives error-free line chart
# downsampling, see
# further scientific paper resources:
# - http://www.vldb.org/pvldb/vol7/p797-jugel.pdf
# - http://www.vldb.org/2014/program/papers/demo/p997-jugel.pdf
# Details on implementation of this algo are based in,
# https://github.com/pikers/piker/issues/109
# XXX: from infinite on downsampling viewable graphics:
# "one thing i remembered about the binning - if you are
# picking a range within your timeseries the start and end bin
# should be one more bin size outside the visual range, then
# you get better visual fidelity at the edges of the graph"
# "i didn't show it in the sample code, but it's accounted for
# in the start and end indices and number of bins"
# should never get called unless actually needed
assert uppx > 1
# NOTE: if we didn't pre-slice the data to downsample
# you could in theory pass these as the slicing params,
# do we care though since we can always just pre-slice the
# input?
x_start = x[0] # x value start/lowest in domain
if xrange is None:
x_end = x[-1] # x end value/highest in domain
xrange = (x_end - x_start)
# XXX: always round up on the input pixels
# lnx = len(x)
# uppx *= max(4 / (1 + math.log(uppx, 2)), 1)
pxw = math.ceil(xrange / uppx)
# scale up the frame "width" directly with uppx
w = uppx
# ensure we make more than enough
# frames (windows) for the output pixel
frames = pxw
# if we have more than an exact integer's
# (uniform quotient output) worth of datum-domain points
# per windows-frame, add one more window to ensure
# we have room for all output down-samples.
pts_per_pixel, r = divmod(xrange, frames)
if r:
# while r:
frames += 1
pts_per_pixel, r = divmod(xrange, frames)
# print(
# f'uppx: {uppx}\n'
# f'xrange: {xrange}\n'
# f'pxw: {pxw}\n'
# f'frames: {frames}\n'
# )
assert frames >= (xrange / uppx)
# call into ``numba``
nb, i_win, y_out = _m4(
x,
y,
frames,
# TODO: see func below..
# i_win,
# y_out,
# first index in x data to start at
x_start,
# window size for each "frame" of data to downsample (normally
# scaled by the ratio of pixels on screen to data in x-range).
w,
)
# filter out any overshoot in the input allocation arrays by
# removing zero-ed tail entries which should start at a certain
# index.
i_win = i_win[i_win != 0]
y_out = y_out[:i_win.size]
return nb, i_win, y_out
@jit(
nopython=True,
nogil=True,
)
def _m4(
xs: np.ndarray,
ys: np.ndarray,
frames: int,
# TODO: using this approach by having the ``.zeros()`` alloc lines
# below, in pure python was causing seg faults and alloc crashes..
# we might need to see how it behaves with shm arrays and consider
# allocating them once at startup?
# pre-alloc array of x indices mapping to the start
# of each window used for downsampling in y.
# i_win: np.ndarray,
# pre-alloc array of output downsampled y values
# y_out: np.ndarray,
x_start: int,
step: float,
) -> int:
# nbins = len(i_win)
# count = len(xs)
# these are pre-allocated and mutated by ``numba``
# code in-place.
y_out = np.zeros((frames, 4), ys.dtype)
i_win = np.zeros(frames, xs.dtype)
bincount = 0
x_left = x_start
# Find the first window's starting value which *includes* the
# first value in the x-domain array, i.e. the first
# "left-side-of-window" **plus** the downsampling step,
# creates a window which includes the first x **value**.
while xs[0] >= x_left + step:
x_left += step
# set all bins in the left-most entry to the starting left-most x value
# (aka a row broadcast).
i_win[bincount] = x_left
# set all y-values to the first value passed in.
y_out[bincount] = ys[0]
for i in range(len(xs)):
x = xs[i]
y = ys[i]
if x < x_left + step: # the current window "step" is [bin, bin+1)
y_out[bincount, 1] = min(y, y_out[bincount, 1])
y_out[bincount, 2] = max(y, y_out[bincount, 2])
y_out[bincount, 3] = y
else:
# Find the next bin
while x >= x_left + step:
x_left += step
bincount += 1
i_win[bincount] = x_left
y_out[bincount] = y
return bincount, i_win, y_out
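
and a quick usage sketch of the removed ``ds_m4()`` entrypoint
(input values are illustrative)::

    import numpy as np

    # ~10k points drawn into ~100 horizontal pixels
    x = np.arange(1, 10_001, dtype=float)
    y = np.sin(x / 50)
    uppx = len(x) / 100  # x-units per pixel, must be > 1

    nbins, i_win, y_out = ds_m4(x, y, uppx=uppx)
    # each output row holds (first, min, max, last) for one window,
    # ie. an OHLC-style summary per horizontal pixel
    assert y_out.shape[1] == 4
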

View File

@ -43,8 +43,8 @@ log = get_logger(__name__)
# latency (in terms of perceived lag in cross hair) so really be sure # latency (in terms of perceived lag in cross hair) so really be sure
# there's an improvement if you want to change it! # there's an improvement if you want to change it!
_mouse_rate_limit = 60 # TODO; should we calc current screen refresh rate? _mouse_rate_limit = 120 # TODO; should we calc current screen refresh rate?
_debounce_delay = 0 _debounce_delay = 1 / 40
_ch_label_opac = 1 _ch_label_opac = 1
@ -95,33 +95,25 @@ class LineDot(pg.CurvePoint):
def event( def event(
self, self,
ev: QtCore.QEvent, ev: QtCore.QEvent,
) -> bool: ) -> None:
if not isinstance(
if ( ev, QtCore.QDynamicPropertyChangeEvent
not isinstance(ev, QtCore.QDynamicPropertyChangeEvent) ) or self.curve() is None:
or self.curve() is None
):
return False return False
# TODO: get rid of this ``.getData()`` and (x, y) = self.curve().getData()
# make a more pythonic api to retrieve backing index = self.property('index')
# numpy arrays... # first = self._plot._arrays['ohlc'][0]['index']
# (x, y) = self.curve().getData() # first = x[0]
# index = self.property('index') # i = index - first
# # first = self._plot._arrays['ohlc'][0]['index'] i = index - x[0]
# # first = x[0] if i > 0 and i < len(y):
# # i = index - first newPos = (index, y[i])
# if index: QtWidgets.QGraphicsItem.setPos(self, *newPos)
# i = round(index - x[0]) return True
# if i > 0 and i < len(y):
# newPos = (index, y[i])
# QtWidgets.QGraphicsItem.setPos(
# self,
# *newPos,
# )
# return True
return False return False
@ -196,9 +188,6 @@ class ContentsLabel(pg.LabelItem):
self.setText( self.setText(
"<b>i</b>:{index}<br/>" "<b>i</b>:{index}<br/>"
# NB: these fields must be indexed in the correct order via
# the slice syntax below.
"<b>epoch</b>:{}<br/>"
"<b>O</b>:{}<br/>" "<b>O</b>:{}<br/>"
"<b>H</b>:{}<br/>" "<b>H</b>:{}<br/>"
"<b>L</b>:{}<br/>" "<b>L</b>:{}<br/>"
@ -206,15 +195,7 @@ class ContentsLabel(pg.LabelItem):
"<b>V</b>:{}<br/>" "<b>V</b>:{}<br/>"
"<b>wap</b>:{}".format( "<b>wap</b>:{}".format(
*array[index - first][ *array[index - first][
[ ['open', 'high', 'low', 'close', 'volume', 'bar_wap']
'time',
'open',
'high',
'low',
'close',
'volume',
'bar_wap',
]
], ],
name=name, name=name,
index=index, index=index,
@ -259,21 +240,23 @@ class ContentsLabels:
def update_labels( def update_labels(
self, self,
index: int, index: int,
# array_name: str,
) -> None: ) -> None:
# for name, (label, update) in self._labels.items():
for chart, name, label, update in self._labels: for chart, name, label, update in self._labels:
flow = chart._flows[name] array = chart._arrays[name]
array = flow.shm.array
if not ( if not (
index >= 0 index >= 0
and index < array[-1]['index'] and index < array[-1]['index']
): ):
# out of range # out of range
print('WTF out of range?') print('out of range?')
continue continue
# array = chart._arrays[name]
# call provided update func with data point # call provided update func with data point
try: try:
label.show() label.show()
@ -309,8 +292,7 @@ class ContentsLabels:
class Cursor(pg.GraphicsObject): class Cursor(pg.GraphicsObject):
''' '''Multi-plot cursor for use on a ``LinkedSplits`` chart (set).
Multi-plot cursor for use on a ``LinkedSplits`` chart (set).
''' '''
def __init__( def __init__(
@ -325,7 +307,7 @@ class Cursor(pg.GraphicsObject):
self.linked = linkedsplits self.linked = linkedsplits
self.graphics: dict[str, pg.GraphicsObject] = {} self.graphics: dict[str, pg.GraphicsObject] = {}
self.plots: list['PlotChartWidget'] = [] # type: ignore # noqa self.plots: List['PlotChartWidget'] = [] # type: ignore # noqa
self.active_plot = None self.active_plot = None
self.digits: int = digits self.digits: int = digits
self._datum_xy: tuple[int, float] = (0, 0) self._datum_xy: tuple[int, float] = (0, 0)
@ -422,7 +404,6 @@ class Cursor(pg.GraphicsObject):
slot=self.mouseMoved, slot=self.mouseMoved,
delay=_debounce_delay, delay=_debounce_delay,
) )
px_enter = pg.SignalProxy( px_enter = pg.SignalProxy(
plot.sig_mouse_enter, plot.sig_mouse_enter,
rateLimit=_mouse_rate_limit, rateLimit=_mouse_rate_limit,
@ -454,10 +435,7 @@ class Cursor(pg.GraphicsObject):
if plot.linked.xaxis_chart is plot: if plot.linked.xaxis_chart is plot:
xlabel = self.xaxis_label = XAxisLabel( xlabel = self.xaxis_label = XAxisLabel(
parent=self.plots[plot_index].getAxis('bottom'), parent=self.plots[plot_index].getAxis('bottom'),
# parent=self.plots[plot_index].pi_overlay.get_axis( # parent=self.plots[plot_index].pi_overlay.get_axis(plot.plotItem, 'bottom'),
# plot.plotItem, 'bottom'
# ),
opacity=_ch_label_opac, opacity=_ch_label_opac,
bg_color=self.label_color, bg_color=self.label_color,
) )
@ -475,12 +453,9 @@ class Cursor(pg.GraphicsObject):
) -> LineDot: ) -> LineDot:
# if this plot contains curves add line dot "cursors" to denote # if this plot contains curves add line dot "cursors" to denote
# the current sample under the mouse # the current sample under the mouse
main_flow = plot._flows[plot.name]
# read out last index
i = main_flow.shm.array[-1]['index']
cursor = LineDot( cursor = LineDot(
curve, curve,
index=i, index=plot._arrays[plot.name][-1]['index'],
plot=plot plot=plot
) )
plot.addItem(cursor) plot.addItem(cursor)
@ -574,20 +549,17 @@ class Cursor(pg.GraphicsObject):
for cursor in opts.get('cursors', ()): for cursor in opts.get('cursors', ()):
cursor.setIndex(ix) cursor.setIndex(ix)
# Update the label on the bottom of the crosshair. # update the label on the bottom of the crosshair
# TODO: make this an up-front calc that we update axes = plot.plotItem.axes
# on axis-widget resize events instead of on every mouse
# update cycle.
# TODO: make this an up-front calc that we update
# on axis-widget resize events.
# left axis offset width for calculating # left axis offset width for calculating
# absolute x-axis label placement. # absolute x-axis label placement.
left_axis_width = 0 left_axis_width = 0
if len(plot.pi_overlay.overlays): left = axes.get('left')
# breakpoint() if left:
lefts = plot.pi_overlay.get_axes('left') left_axis_width = left['item'].width()
if lefts:
for left in lefts:
left_axis_width += left.width()
# map back to abs (label-local) coordinates # map back to abs (label-local) coordinates
self.xaxis_label.update_label( self.xaxis_label.update_label(
@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers) # Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -18,306 +18,301 @@
Fast, smooth, sexy curves. Fast, smooth, sexy curves.
""" """
from contextlib import contextmanager as cm from typing import Optional
from typing import Optional, Callable
import numpy as np import numpy as np
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtWidgets from PyQt5 import QtGui, QtWidgets
from PyQt5.QtWidgets import QGraphicsItem
from PyQt5.QtCore import ( from PyQt5.QtCore import (
Qt,
QLineF, QLineF,
QSizeF, QSizeF,
QRectF, QRectF,
# QRect,
QPointF, QPointF,
) )
from PyQt5.QtGui import (
QPainter, from .._profile import pg_profile_enabled
QPainterPath,
)
from .._profile import pg_profile_enabled, ms_slower_then
from ._style import hcolor from ._style import hcolor
# from ._compression import (
# # ohlc_to_m4_line,
# ds_m4,
# )
from ..log import get_logger
log = get_logger(__name__) def step_path_arrays_from_1d(
x: np.ndarray,
y: np.ndarray,
include_endpoints: bool = False,
) -> (np.ndarray, np.ndarray):
_line_styles: dict[str, int] = {
'solid': Qt.PenStyle.SolidLine,
'dash': Qt.PenStyle.DashLine,
'dot': Qt.PenStyle.DotLine,
'dashdot': Qt.PenStyle.DashDotLine,
}
class Curve(pg.GraphicsObject):
''' '''
A faster, simpler, append friendly version of Generate a "step mode" curve aligned with OHLC style bars
``pyqtgraph.PlotCurveItem`` built for highly customizable real-time such that each segment spans each bar (aka "centered" style).
updates.
This type is a much stripped down version of a ``pyqtgraph`` style
"graphics object" in the sense that the internal lower level
graphics which are drawn in the ``.paint()`` method are actually
rendered outside of this class entirely and instead are assigned as
state (instance vars) here and then drawn during a Qt graphics
cycle.
The main motivation for this more modular, composed design is that
lower level graphics data can be rendered in different threads and
then read and drawn in this main thread without having to worry
about dealing with Qt's concurrency primitives. See
``piker.ui._flows.Renderer`` for details and logic related to lower
level path generation and incremental update. The main differences in
the path generation code include:
- avoiding regeneration of the entire historical path where possible
and instead only updating the "new" segment(s) via a ``numpy``
array diff calc.
- here, the "last" graphics datum-segment is drawn independently
such that near-term (high frequency) discrete-time-sampled style
updates don't trigger a full path redraw.
''' '''
y_out = y.copy()
x_out = x.copy()
x2 = np.empty(
# the data + 2 endpoints on either end for
# "termination of the path".
(len(x) + 1, 2),
# we want to align with OHLC or other sampling style
# bars likely so we need fractional values
dtype=float,
)
x2[0] = x[0] - 0.5
x2[1] = x[0] + 0.5
x2[1:] = x[:, np.newaxis] + 0.5
# sub-type customization methods # flatten to 1-d
sub_br: Optional[Callable] = None x_out = x2.reshape(x2.size)
sub_paint: Optional[Callable] = None
declare_paintables: Optional[Callable] = None # we create a 1d with 2 extra indexes to
# hold the start and (current) end value for the steps
# on either end
y2 = np.empty((len(y), 2), dtype=y.dtype)
y2[:] = y[:, np.newaxis]
y_out = np.empty(
2*len(y) + 2,
dtype=y.dtype
)
# flatten and set 0 endpoints
y_out[1:-1] = y2.reshape(y2.size)
y_out[0] = 0
y_out[-1] = 0
if not include_endpoints:
return x_out[:-1], y_out[:-1]
else:
return x_out, y_out
# TODO: got a feeling that dropping this inheritance gets us even more speedups
class FastAppendCurve(pg.PlotCurveItem):
'''
A faster, append friendly version of ``pyqtgraph.PlotCurveItem``
built for real-time data updates.
The main difference is avoiding regeneration of the entire
historical path where possible and instead only updating the "new"
segment(s) via a ``numpy`` array diff calc. Further the "last"
graphic segment is drawn independently such that near-term (high
frequency) discrete-time-sampled style updates don't trigger a full
path redraw.
'''
def __init__( def __init__(
self, self,
*args, *args,
step_mode: bool = False, step_mode: bool = False,
color: str = 'default_lightest', color: str = 'default_lightest',
fill_color: Optional[str] = None, fill_color: Optional[str] = None,
style: str = 'solid',
name: Optional[str] = None,
use_fpath: bool = True,
**kwargs **kwargs
) -> None: ) -> None:
self._name = name
# brutaaalll, see comments within..
self.yData = None
self.xData = None
# self._last_cap: int = 0
self.path: Optional[QPainterPath] = None
# additional path used for appends which tries to avoid
# triggering an update/redraw of the presumably larger
# historical ``.path`` above.
self.use_fpath = use_fpath
self.fast_path: Optional[QPainterPath] = None
# TODO: we can probably just dispense with the parent since # TODO: we can probably just dispense with the parent since
# we're basically only using the pen setting now... # we're basically only using the pen setting now...
super().__init__(*args, **kwargs) super().__init__(*args, **kwargs)
self._xrange: tuple[int, int] = self.dataBounds(ax=0)
# all history of curve is drawn in single px thickness # all history of curve is drawn in single px thickness
pen = pg.mkPen(hcolor(color)) self.setPen(hcolor(color))
pen.setStyle(_line_styles[style])
if 'dash' in style:
pen.setDashPattern([8, 3])
self._pen = pen
# last segment is drawn in 2px thickness for emphasis # last segment is drawn in 2px thickness for emphasis
# self.last_step_pen = pg.mkPen(hcolor(color), width=2) self.last_step_pen = pg.mkPen(hcolor(color), width=2)
self.last_step_pen = pg.mkPen(pen, width=2) self._last_line: QLineF = None
self._last_step_rect: QRectF = None
# self._last_line: Optional[QLineF] = None
self._last_line = QLineF()
self._last_w: float = 1
# flat-top style histogram-like discrete curve # flat-top style histogram-like discrete curve
# self._step_mode: bool = step_mode self._step_mode: bool = step_mode
# self._fill = True # self._fill = True
self._brush = pg.functions.mkBrush(hcolor(fill_color or color)) self.setBrush(hcolor(fill_color or color))
# NOTE: this setting seems to mostly prevent redraws on mouse
# interaction which is a huge boon for avg interaction latency.
# TODO: one question still remaining is if this makes transform # TODO: one question still remaining is if this makes transform
# interactions slower (such as zooming) and if so maybe if/when # interactions slower (such as zooming) and if so maybe if/when
# we implement a "history" mode for the view we disable this in # we implement a "history" mode for the view we disable this in
# that mode? # that mode?
# don't enable caching by default for the case where the self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
# only thing drawn is the "last" line segment which can
# have a weird artifact where it won't be fully drawn to its
# endpoint (something we saw on trade rate curves)
self.setCacheMode(QGraphicsItem.DeviceCoordinateCache)
# XXX: see explanation for different caching modes: def update_from_array(
# https://stackoverflow.com/a/39410081 self,
# seems to only be useful if we don't re-generate the entire x: np.ndarray,
# QPainterPath every time y: np.ndarray,
# curve.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
# don't ever use this - it's a colossal nightmare of artefacts ) -> QtGui.QPainterPath:
# and is disastrous for performance.
# curve.setCacheMode(QtWidgets.QGraphicsItem.ItemCoordinateCache)
# allow sub-type customization profiler = pg.debug.Profiler(disabled=not pg_profile_enabled())
declare = self.declare_paintables flip_cache = False
if declare:
declare()
# TODO: probably stick this in a new parent istart, istop = self._xrange
# type which will contain our own version of # print(f"xrange: {self._xrange}")
# what ``PlotCurveItem`` had in terms of base
# functionality? A `FlowGraphic` maybe? # compute the length diffs between the first/last index entry in
def x_uppx(self) -> int: # the input data and the last indexes we have on record from the
# last time we updated the curve index.
prepend_length = istart - x[0]
append_length = x[-1] - istop
# step mode: draw flat top discrete "step"
# over the index space for each datum.
if self._step_mode:
x_out, y_out = step_path_arrays_from_1d(x[:-1], y[:-1])
px_vecs = self.pixelVectors()[0]
if px_vecs:
xs_in_px = px_vecs.x()
return round(xs_in_px)
else: else:
return 0 # by default we only pull data up to the last (current) index
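`pixelVectors()[0]` is the view-space vector covered by one device pixel, so its x-component is the "data units per pixel" (uppx) count. Downsampling decisions key off that value; a hedged sketch of the usual gate (the threshold is an assumption, not a value from the source):

def should_downsample(uppx: float, threshold: float = 1.0) -> bool:
    # more than one x-step packed into a single pixel means the
    # samples can't all be resolved on screen anyway: decimate
    # before drawing instead of painting invisible segments
    return uppx > threshold

# e.g. at uppx == 4 an M4-style downsampler keeps only the
# min/max/first/last samples per pixel column (~4x fewer points)
# while preserving the visual extrema.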
x_out, y_out = x[:-1], y[:-1]
def px_width(self) -> float: if self.path is None or prepend_length > 0:
self.path = pg.functions.arrayToQPath(
x_out,
y_out,
connect='all',
finiteCheck=False,
)
profiler('generate fresh path')
vb = self.getViewBox() # if self._step_mode:
if not vb: # self.path.closeSubpath()
return 0
vr = self.viewRect() # TODO: get this piecewise prepend working - right now it's
l, r = int(vr.left()), int(vr.right()) # giving heck on vwap...
# if prepend_length:
# breakpoint()
start, stop = self._xrange # prepend_path = pg.functions.arrayToQPath(
lbar = max(l, start) # x[0:prepend_length],
rbar = min(r, stop) # y[0:prepend_length],
# connect='all'
# )
return vb.mapViewToDevice( # # swap prepend path in "front"
QLineF(lbar, 0, rbar, 0) # old_path = self.path
).length() # self.path = prepend_path
# # self.path.moveTo(new_x[0], new_y[0])
# self.path.connectPath(old_path)
# XXX: lol brutal, the internals of `CurvePoint` (inherited by elif append_length > 0:
# our `LineDot`) required ``.getData()`` to work.. if self._step_mode:
def getData(self): new_x, new_y = step_path_arrays_from_1d(
return self.xData, self.yData x[-append_length - 2:-1],
y[-append_length - 2:-1],
)
# [1:] since we don't need the vertical line normally at
# the beginning of the step curve taking the first (x,
# y) point down to the x-axis **because** this is an
# appended path graphic.
new_x = new_x[1:]
new_y = new_y[1:]
def clear(self): else:
''' # print(f"append_length: {append_length}")
Clear internal graphics making object ready for full re-draw. new_x = x[-append_length - 2:-1]
new_y = y[-append_length - 2:-1]
# print((new_x, new_y))
''' append_path = pg.functions.arrayToQPath(
# NOTE: original code from ``pg.PlotCurveItem`` new_x,
self.xData = None new_y,
self.yData = None connect='all',
# finiteCheck=False,
)
# XXX: previously, if not trying to leverage `.reserve()` allocs path = self.path
# then you might as well create a new one..
# self.path = None
# path reservation aware non-mem de-alloc cleaning # other merging ideas:
if self.path: # https://stackoverflow.com/questions/8936225/how-to-merge-qpainterpaths
self.path.clear() if self._step_mode:
# path.addPath(append_path)
self.path.connectPath(append_path)
if self.fast_path: # TODO: try out new work from `pyqtgraph` main which
# self.fast_path.clear() # should repair horrid perf:
self.fast_path = None # https://github.com/pyqtgraph/pyqtgraph/pull/2032
# ok, nope still horrible XD
# if self._fill:
# # XXX: super slow set "union" op
# self.path = self.path.united(append_path).simplified()
@cm # # path.addPath(append_path)
def reset_cache(self) -> None: # # path.closeSubpath()
self.setCacheMode(QtWidgets.QGraphicsItem.NoCache)
yield else:
self.setCacheMode(QGraphicsItem.DeviceCoordinateCache) # print(f"append_path br: {append_path.boundingRect()}")
# self.path.moveTo(new_x[0], new_y[0])
# self.path.connectPath(append_path)
path.connectPath(append_path)
# XXX: pretty annoying but, without this there's little
# artefacts on the append updates to the curve...
self.setCacheMode(QtWidgets.QGraphicsItem.NoCache)
self.prepareGeometryChange()
flip_cache = True
# print(f"update br: {self.path.boundingRect()}")
# XXX: lol brutal, the internals of `CurvePoint` (inherited by
# our `LineDot`) required ``.getData()`` to work..
self.xData = x
self.yData = y
x0, x_last = self._xrange = x[0], x[-1]
y_last = y[-1]
# draw the "current" step graphic segment so it lines up with
# the "middle" of the current (OHLC) sample.
if self._step_mode:
self._last_line = QLineF(
x_last - 0.5, 0,
x_last + 0.5, 0,
)
self._last_step_rect = QRectF(
x_last - 0.5, 0,
x_last + 0.5, y_last
)
else:
self._last_line = QLineF(
x[-2], y[-2],
x[-1], y_last
)
# trigger redraw of path
# do update before reverting to cache mode
self.prepareGeometryChange()
self.update()
if flip_cache:
# XXX: seems to be needed to avoid artifacts (see above).
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
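The core of `update_from_array()` above, stripped of the step-mode and caching details, is: convert only the new tail of the arrays to a `QPainterPath` and splice it onto the existing history path. A minimal sketch using the same pyqtgraph helper (not the exact renderer):

import numpy as np
import pyqtgraph as pg

def extend_path(
    path,          # QPainterPath built so far, or None on first call
    x: np.ndarray,
    y: np.ndarray,
    last_x: int,   # highest x-index already rendered into `path`
):
    if path is None:
        # first call: render all history in one shot
        return pg.functions.arrayToQPath(x, y, connect='all')
    append_length = int(x[-1] - last_x)
    if append_length > 0:
        # overlap by one sample so the new segment connects cleanly
        new = pg.functions.arrayToQPath(
            x[-append_length - 1:],
            y[-append_length - 1:],
            connect='all',
        )
        path.connectPath(new)  # in-place splice, no full redraw
    return path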
def boundingRect(self): def boundingRect(self):
'''
Compute and then cache our rect.
'''
if self.path is None: if self.path is None:
return QPainterPath().boundingRect() return QtGui.QPainterPath().boundingRect()
else: else:
# dynamically override this method after initial # dynamically override this method after initial
# path is created to avoid requiring the above None check # path is created to avoid requiring the above None check
self.boundingRect = self._path_br self.boundingRect = self._br
return self._path_br() return self._br()
def _path_br(self): def _br(self):
''' """Post init ``.boundingRect()```.
Post init ``.boundingRect()```.
''' """
# hb = self.path.boundingRect()
hb = self.path.controlPointRect() hb = self.path.controlPointRect()
hb_size = hb.size() hb_size = hb.size()
fp = self.fast_path
if fp:
fhb = fp.controlPointRect()
hb_size = fhb.size() + hb_size
# print(f'hb_size: {hb_size}') # print(f'hb_size: {hb_size}')
# if self._last_step_rect: w = hb_size.width() + 1
# hb_size += self._last_step_rect.size() h = hb_size.height() + 1
# if self._line:
# br = self._last_step_rect.bottomRight()
# tl = QPointF(
# # self._vr[0],
# # hb.topLeft().y(),
# # 0,
# # hb_size.height() + 1
# )
# br = self._last_step_rect.bottomRight()
w = hb_size.width()
h = hb_size.height()
sbr = self.sub_br
if sbr:
w, h = self.sub_br(w, h)
else:
# assume plain line graphic and use
# default unit step in each direction.
# only on a plane line do we include
# and extra index step's worth of width
# since in the step case the end of the curve
# actually terminates earlier so we don't need
# this for the last step.
w += self._last_w
# ll = self._last_line
h += 1 # ll.y2() - ll.y1()
# br = QPointF(
# self._vr[-1],
# # tl.x() + w,
# tl.y() + h,
# )
br = QRectF( br = QRectF(
# top left # top left
# hb.topLeft()
# tl,
QPointF(hb.topLeft()), QPointF(hb.topLeft()),
# br,
# total size # total size
# QSizeF(hb_size)
# hb_size,
QSizeF(w, h) QSizeF(w, h)
) )
# print(f'bounding rect: {br}') # print(f'bounding rect: {br}')
@ -325,160 +320,38 @@ class Curve(pg.GraphicsObject):
def paint( def paint(
self, self,
p: QPainter, p: QtGui.QPainter,
opt: QtWidgets.QStyleOptionGraphicsItem, opt: QtWidgets.QStyleOptionGraphicsItem,
w: QtWidgets.QWidget w: QtWidgets.QWidget
) -> None: ) -> None:
profiler = pg.debug.Profiler( profiler = pg.debug.Profiler(disabled=not pg_profile_enabled())
msg=f'Curve.paint(): `{self._name}`', # p.setRenderHint(p.Antialiasing, True)
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
)
sub_paint = self.sub_paint if (
if sub_paint: self._step_mode
sub_paint(p, profiler) and self._last_step_rect
):
brush = self.opts['brush']
# p.drawLines(*tuple(filter(bool, self._last_step_lines)))
# p.drawRect(self._last_step_rect)
p.fillRect(self._last_step_rect, brush)
# p.drawPath(self.path)
# profiler('.drawPath()')
# else:
p.setPen(self.last_step_pen) p.setPen(self.last_step_pen)
p.drawLine(self._last_line) p.drawLine(self._last_line)
profiler('.drawLine()') profiler('.drawLine()')
p.setPen(self._pen)
path = self.path p.setPen(self.opts['pen'])
# cap = path.capacity() p.drawPath(self.path)
# if cap != self._last_cap: profiler('.drawPath()')
# print(f'NEW CAPACITY: {self._last_cap} -> {cap}')
# self._last_cap = cap
if path: # TODO: try out new work from `pyqtgraph` main which
p.drawPath(path) # should repair horrid perf:
profiler(f'.drawPath(path): {path.capacity()}')
fp = self.fast_path
if fp:
p.drawPath(fp)
profiler('.drawPath(fast_path)')
# TODO: try out new work from `pyqtgraph` main which should
# repair horrid perf (pretty sure i did and it was still
# horrible?):
# https://github.com/pyqtgraph/pyqtgraph/pull/2032 # https://github.com/pyqtgraph/pyqtgraph/pull/2032
# if self._fill: # if self._fill:
# brush = self.opts['brush'] # brush = self.opts['brush']
# p.fillPath(self.path, brush) # p.fillPath(self.path, brush)
def draw_last_datum(
self,
path: QPainterPath,
src_data: np.ndarray,
render_data: np.ndarray,
reset: bool,
array_key: str,
) -> None:
# default line draw last call
# with self.reset_cache():
x = render_data['index']
y = render_data[array_key]
# draw the "current" step graphic segment so it
# lines up with the "middle" of the current
# (OHLC) sample.
self._last_line = QLineF(
x[-2], y[-2],
x[-1], y[-1],
)
return x, y
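This last-datum split is what keeps per-tick updates cheap: each new sample rebuilds only a two-point `QLineF` while the long history `QPainterPath` is left untouched. The essence:

from PyQt5.QtCore import QLineF

def last_segment(x, y) -> QLineF:
    # O(1) per tick, independent of how long the history path grows
    return QLineF(
        float(x[-2]), float(y[-2]),
        float(x[-1]), float(y[-1]),
    )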
# TODO: this should probably be a "downsampled" curve type
# that draws a bar-style (but for the px column) last graphics
# element such that the current datum in view can be shown
# (via it's max / min) even when highly zoomed out.
class FlattenedOHLC(Curve):
def draw_last_datum(
self,
path: QPainterPath,
src_data: np.ndarray,
render_data: np.ndarray,
reset: bool,
array_key: str,
) -> None:
lasts = src_data[-2:]
x = lasts['index']
y = lasts['close']
# draw the "current" step graphic segment so it
# lines up with the "middle" of the current
# (OHLC) sample.
self._last_line = QLineF(
x[-2], y[-2],
x[-1], y[-1]
)
return x, y
class StepCurve(Curve):
def declare_paintables(
self,
) -> None:
self._last_step_rect = QRectF()
def draw_last_datum(
self,
path: QPainterPath,
src_data: np.ndarray,
render_data: np.ndarray,
reset: bool,
array_key: str,
w: float = 0.5,
) -> None:
# TODO: remove this and instead place all step curve
# updating into pre-path data render callbacks.
# full input data
x = src_data['index']
y = src_data[array_key]
x_last = x[-1]
y_last = y[-1]
# lol, commenting this makes step curves
# all "black" for me :eyeroll:..
self._last_line = QLineF(
x_last - w, 0,
x_last + w, 0,
)
self._last_step_rect = QRectF(
x_last - w, 0,
x_last + w, y_last,
)
return x, y
def sub_paint(
self,
p: QPainter,
profiler: pg.debug.Profiler,
) -> None:
# p.drawLines(*tuple(filter(bool, self._last_step_lines)))
# p.drawRect(self._last_step_rect)
p.fillRect(self._last_step_rect, self._brush)
profiler('.fillRect()')
def sub_br(
self,
path_w: float,
path_h: float,
) -> (float, float):
# passthrough
return path_w, path_h
File diff suppressed because it is too large
@ -343,7 +343,7 @@ class SelectRect(QtGui.QGraphicsRectItem):
nbars = ixmx - ixmn + 1 nbars = ixmx - ixmn + 1
chart = self._chart chart = self._chart
data = chart._flows[chart.name].shm.array[ixmn:ixmx] data = chart._arrays[chart.name][ixmn:ixmx]
if len(data): if len(data):
std = data['close'].std() std = data['close'].std()
@ -49,6 +49,10 @@ from . import _style
log = get_logger(__name__) log = get_logger(__name__)
# pyqtgraph global config # pyqtgraph global config
# might as well enable this for now?
pg.useOpenGL = True
pg.enableExperimental = True
# engage core tweaks that give us better response # engage core tweaks that give us better response
# latency then the average pg user # latency then the average pg user
_do_overrides() _do_overrides()
@ -57,9 +61,7 @@ _do_overrides()
# XXX: pretty sure none of this shit works on linux as per: # XXX: pretty sure none of this shit works on linux as per:
# https://bugreports.qt.io/browse/QTBUG-53022 # https://bugreports.qt.io/browse/QTBUG-53022
# it seems to work on windows.. no idea wtf is up. # it seems to work on windows.. no idea wtf is up.
is_windows = False
if platform.system() == "Windows": if platform.system() == "Windows":
is_windows = True
# Proper high DPI scaling is available in Qt >= 5.6.0. This attribute # Proper high DPI scaling is available in Qt >= 5.6.0. This attribute
# must be set before creating the application # must be set before creating the application
@ -180,8 +182,6 @@ def run_qtractor(
window.main_widget = main_widget window.main_widget = main_widget
window.setCentralWidget(instance) window.setCentralWidget(instance)
if is_windows:
window.configure_to_desktop()
# actually render to screen # actually render to screen
window.show() window.show()
@ -1,83 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Feed status and controls widget(s) for embedding in a UI-pane.
"""
from __future__ import annotations
from textwrap import dedent
from typing import TYPE_CHECKING
# from PyQt5.QtCore import Qt
from ._style import _font, _font_small
# from ..calc import humanize
from ._label import FormatLabel
if TYPE_CHECKING:
from ._chart import ChartPlotWidget
from ..data.feed import Feed
from ._forms import FieldsForm
def mk_feed_label(
form: FieldsForm,
feed: Feed,
chart: ChartPlotWidget,
) -> FormatLabel:
'''
Generate a label from feed meta-data to be displayed
in a UI sidepane.
TODO: eventually buttons for changing settings over
a feed control protocol.
'''
status = feed.status
assert status
msg = dedent("""
actor: **{actor_name}**\n
|_ @**{host}:{port}**\n
""")
for key, val in status.items():
if key in ('host', 'port', 'actor_name'):
continue
msg += f'\n|_ {key}: **{{{key}}}**\n'
feed_label = FormatLabel(
fmt_str=msg,
# |_ streams: **{symbols}**\n
font=_font.font,
font_size=_font_small.px_size,
font_color='default_lightest',
)
# form.vbox.setAlignment(feed_label, Qt.AlignBottom)
# form.vbox.setAlignment(Qt.AlignBottom)
_ = chart.height() - (
form.height() +
form.fill_bar.height()
# feed_label.height()
)
feed_label.format(**feed.status)
return feed_label
File diff suppressed because it is too large
@ -21,7 +21,6 @@ Text entry "forms" widgets (mostly for configuration and UI user input).
from __future__ import annotations from __future__ import annotations
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from functools import partial from functools import partial
from math import floor
from typing import ( from typing import (
Optional, Any, Callable, Awaitable Optional, Any, Callable, Awaitable
) )
@ -106,7 +105,7 @@ class Edit(QLineEdit):
# TODO: somehow this math ain't right? # TODO: somehow this math ain't right?
chars_w_pxs = dpi_font.boundingRect('0'*self._chars).width() chars_w_pxs = dpi_font.boundingRect('0'*self._chars).width()
scale = round(dpi_font.scale()) scale = round(dpi_font.scale())
psh.setWidth(int(chars_w_pxs * scale)) psh.setWidth(chars_w_pxs * scale)
return psh return psh
def set_width_in_chars( def set_width_in_chars(
@ -219,7 +218,7 @@ class Selection(QComboBox):
) -> None: ) -> None:
br = _font.boundingRect(str(char)) br = _font.boundingRect(str(char))
_, h = br.width(), int(br.height()) _, h = br.width(), br.height()
# TODO: something better than this monkey patch # TODO: something better than this monkey patch
view = self.view() view = self.view()
@ -331,7 +330,7 @@ class FieldsForm(QWidget):
self.form.setSpacing(0) self.form.setSpacing(0)
self.form.setHorizontalSpacing(0) self.form.setHorizontalSpacing(0)
self.vbox.addLayout(self.form, stretch=3) self.vbox.addLayout(self.form, stretch=1/3)
self.labels: dict[str, QLabel] = {} self.labels: dict[str, QLabel] = {}
self.fields: dict[str, QWidget] = {} self.fields: dict[str, QWidget] = {}
@ -543,7 +542,7 @@ class FillStatusBar(QProgressBar):
''' '''
border_px: int = 2 border_px: int = 2
slot_margin_px: int = 1 slot_margin_px: int = 2
def __init__( def __init__(
self, self,
@ -554,16 +553,12 @@ class FillStatusBar(QProgressBar):
) -> None: ) -> None:
super().__init__(parent=parent) super().__init__(parent=parent)
self.approx_h = approx_height_px
self.approx_h = int(round(approx_height_px))
self.setMinimumHeight(self.approx_h)
self.setMaximumHeight(self.approx_h)
self.font_size = font_size self.font_size = font_size
self.setFormat('') # label format self.setFormat('') # label format
self.setMinimumWidth(int(width_px)) self.setMinimumWidth(width_px)
self.setMaximumWidth(int(width_px)) self.setMaximumWidth(width_px)
def set_slots( def set_slots(
self, self,
@ -572,12 +567,17 @@ class FillStatusBar(QProgressBar):
) -> None: ) -> None:
self.setOrientation(Qt.Vertical) approx_h = self.approx_h
h = self.height()
# TODO: compute "used height" thus far and mostly fill the rest # TODO: compute "used height" thus far and mostly fill the rest
tot_slot_h, r = divmod(h, slots) tot_slot_h, r = divmod(
approx_h,
slots,
)
clipped = slots * tot_slot_h + 2*self.border_px
self.setMaximumHeight(clipped)
slot_height_px = tot_slot_h - 2*self.slot_margin_px
self.setOrientation(Qt.Vertical)
self.setStyleSheet( self.setStyleSheet(
f""" f"""
QProgressBar {{ QProgressBar {{
@ -592,28 +592,21 @@ class FillStatusBar(QProgressBar):
border: {self.border_px}px solid {hcolor('default_light')}; border: {self.border_px}px solid {hcolor('default_light')};
border-radius: 2px; border-radius: 2px;
}} }}
QProgressBar::chunk {{ QProgressBar::chunk {{
background-color: {hcolor('default_spotlight')}; background-color: {hcolor('default_spotlight')};
color: {hcolor('bracket')}; color: {hcolor('bracket')};
border-radius: 2px; border-radius: 2px;
margin: {self.slot_margin_px}px;
height: {slot_height_px}px;
}} }}
""" """
) )
# to set a discrete "block" per slot...
# XXX: couldn't get the discrete math to work here such
# that it was always correctly showing a discretized value
# up to the limit; not sure if it's the ``.setRange()``
# / ``.setValue()`` api or not but i was able to get something
# close screwing with the divmod above but after so large
# a value it would always be fewer chunks than the correct
# value..
# margin: {self.slot_margin_px}px;
# height: {slot_height_px}px;
# margin-bottom: {slot_margin_px*2}px; # margin-bottom: {slot_margin_px*2}px;
# margin-top: {slot_margin_px*2}px; # margin-top: {slot_margin_px*2}px;
# color: #19232D; # color: #19232D;
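For reference, the old branch's slot sizing (the `divmod` block above) is plain integer partitioning: split the bar's pixel height evenly into `slots`, clip off the remainder, and shave the margins from each chunk. Worked through with assumed numbers (not values from the source):

approx_h = 150   # requested bar height in px (assumed)
slots = 8
border_px, slot_margin_px = 2, 2

tot_slot_h, r = divmod(approx_h, slots)           # 18 px per slot, 6 px remainder
clipped = slots * tot_slot_h + 2 * border_px      # 8*18 + 4 = 148 px total height
slot_height_px = tot_slot_h - 2 * slot_margin_px  # 18 - 4 = 14 px visible chunk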
@ -664,14 +657,14 @@ def mk_fill_status_bar(
) )
# size according to dpi scaled fonted contents to avoid # size according to dpi scaled fonted contents to avoid
# resizes on magnitude changes (eg. 9 -> 10 %) # resizes on magnitude changes (eg. 9 -> 10 %)
min_w = int(_font.boundingRect('1000.0M% pnl').width()) min_w = _font.boundingRect('1000.0M% pnl').width()
left_label.setMinimumWidth(min_w) left_label.setMinimumWidth(min_w)
left_label.resize( left_label.resize(
min_w, min_w,
left_label.size().height(), left_label.size().height(),
) )
bar_labels_lhs.addSpacing(int(5/8 * bar_h)) bar_labels_lhs.addSpacing(5/8 * bar_h)
bar_labels_lhs.addWidget( bar_labels_lhs.addWidget(
left_label, left_label,
@ -750,12 +743,12 @@ def mk_order_pane_layout(
parent=parent, parent=parent,
fields_schema={ fields_schema={
'account': { 'account': {
'label': '**accnt**:', 'label': '**account**:',
'type': 'select', 'type': 'select',
'default_value': ['paper'], 'default_value': ['paper'],
}, },
'size_unit': { 'size_unit': {
'label': '**alloc**:', 'label': '**allocate**:',
'type': 'select', 'type': 'select',
'default_value': [ 'default_value': [
'$ size', '$ size',
@ -803,7 +796,7 @@ def mk_order_pane_layout(
form.top_label = top_label form.top_label = top_label
# add pp fill bar + spacing # add pp fill bar + spacing
vbox.addLayout(hbox, stretch=3) vbox.addLayout(hbox, stretch=1/3)
# TODO: handle resize events and appropriately scale this # TODO: handle resize events and appropriately scale this
# to the sidepane height? # to the sidepane height?
@ -22,7 +22,6 @@ Financial signal processing cluster and real-time graphics management.
''' '''
from contextlib import asynccontextmanager as acm from contextlib import asynccontextmanager as acm
from functools import partial from functools import partial
import inspect
from itertools import cycle from itertools import cycle
from typing import Optional, AsyncGenerator, Any from typing import Optional, AsyncGenerator, Any
@ -38,7 +37,6 @@ from .._cacheables import maybe_open_context
from ..calc import humanize from ..calc import humanize
from ..data._sharedmem import ( from ..data._sharedmem import (
ShmArray, ShmArray,
_Token,
try_read, try_read,
) )
from ._chart import ( from ._chart import (
@ -52,11 +50,7 @@ from ._forms import (
) )
from ..fsp._api import maybe_mk_fsp_shm, Fsp from ..fsp._api import maybe_mk_fsp_shm, Fsp
from ..fsp import cascade from ..fsp import cascade
from ..fsp._volume import ( from ..fsp._volume import tina_vwap, dolla_vlm
tina_vwap,
dolla_vlm,
flow_rates,
)
from ..log import get_logger from ..log import get_logger
log = get_logger(__name__) log = get_logger(__name__)
@ -72,17 +66,12 @@ def has_vlm(ohlcv: ShmArray) -> bool:
def update_fsp_chart( def update_fsp_chart(
chart: ChartPlotWidget, chart: ChartPlotWidget,
flow, shm: ShmArray,
graphics_name: str, graphics_name: str,
array_key: Optional[str], array_key: Optional[str],
**kwargs,
) -> None: ) -> None:
shm = flow.shm
if not shm:
return
array = shm.array array = shm.array
last_row = try_read(array) last_row = try_read(array)
@ -94,10 +83,10 @@ def update_fsp_chart(
# update graphics # update graphics
# NOTE: this does a length check internally which allows it # NOTE: this does a length check internally which allows it
# staying above the last row check below.. # staying above the last row check below..
chart.update_graphics_from_flow( chart.update_curve_from_array(
graphics_name, graphics_name,
array,
array_key=array_key or graphics_name, array_key=array_key or graphics_name,
**kwargs,
) )
# XXX: re: ``array_key``: fsp func names must be unique meaning we # XXX: re: ``array_key``: fsp func names must be unique meaning we
@ -107,6 +96,9 @@ def update_fsp_chart(
# read from last calculated value and update any label # read from last calculated value and update any label
last_val_sticky = chart._ysticks.get(graphics_name) last_val_sticky = chart._ysticks.get(graphics_name)
if last_val_sticky: if last_val_sticky:
# array = shm.array[array_key]
# if len(array):
# value = array[-1]
last = last_row[array_key] last = last_row[array_key]
last_val_sticky.update_from_data(-1, last) last_val_sticky.update_from_data(-1, last)
@ -161,11 +153,7 @@ async def open_fsp_sidepane(
sidepane.model = FspConfig() sidepane.model = FspConfig()
# just a logger for now until we get fsp configs up and running. # just a logger for now until we get fsp configs up and running.
async def settings_change( async def settings_change(key: str, value: str) -> bool:
key: str,
value: str
) -> bool:
print(f'{key}: {value}') print(f'{key}: {value}')
return True return True
@ -244,18 +232,21 @@ async def run_fsp_ui(
chart.draw_curve( chart.draw_curve(
name=name, name=name,
shm=shm, data=shm.array,
overlay=True, overlay=True,
color='default_light', color='default_light',
array_key=name, array_key=name,
separate_axes=conf.get('separate_axes', False),
**conf.get('chart_kwargs', {}) **conf.get('chart_kwargs', {})
) )
# specially store ref to shm for lookup in display loop
chart._overlays[name] = shm
else: else:
# create a new sub-chart widget for this fsp # create a new sub-chart widget for this fsp
chart = linkedsplits.add_plot( chart = linkedsplits.add_plot(
name=name, name=name,
shm=shm, array=shm.array,
array_key=name, array_key=name,
sidepane=sidepane, sidepane=sidepane,
@ -267,6 +258,11 @@ async def run_fsp_ui(
**conf.get('chart_kwargs', {}) **conf.get('chart_kwargs', {})
) )
# XXX: ONLY for sub-chart fsps, overlays have their
# data looked up from the chart's internal array set.
# TODO: we must get a data view api going STAT!!
chart._shm = shm
# should **not** be the same sub-chart widget # should **not** be the same sub-chart widget
assert chart.name != linkedsplits.chart.name assert chart.name != linkedsplits.chart.name
@ -277,7 +273,7 @@ async def run_fsp_ui(
# first UI update, usually from shm pushed history # first UI update, usually from shm pushed history
update_fsp_chart( update_fsp_chart(
chart, chart,
chart._flows[array_key], shm,
name, name,
array_key=array_key, array_key=array_key,
) )
@ -367,7 +363,6 @@ class FspAdmin:
tuple, tuple,
tuple[tractor.MsgStream, ShmArray] tuple[tractor.MsgStream, ShmArray]
] = {} ] = {}
self._flow_registry: dict[_Token, str] = {}
self.src_shm = src_shm self.src_shm = src_shm
def rr_next_portal(self) -> tractor.Portal: def rr_next_portal(self) -> tractor.Portal:
@ -380,7 +375,6 @@ class FspAdmin:
portal: tractor.Portal, portal: tractor.Portal,
complete: trio.Event, complete: trio.Event,
started: trio.Event, started: trio.Event,
fqsn: str,
dst_shm: ShmArray, dst_shm: ShmArray,
conf: dict, conf: dict,
target: Fsp, target: Fsp,
@ -392,6 +386,7 @@ class FspAdmin:
cluster and sleeps until signalled to exit. cluster and sleeps until signalled to exit.
''' '''
brokername, sym = self.linked.symbol.front_feed()
ns_path = str(target.ns_path) ns_path = str(target.ns_path)
async with ( async with (
portal.open_context( portal.open_context(
@ -400,7 +395,8 @@ class FspAdmin:
cascade, cascade,
# data feed key # data feed key
fqsn=fqsn, brokername=brokername,
symbol=sym,
# mems # mems
src_shm_token=self.src_shm.token, src_shm_token=self.src_shm.token,
@ -411,19 +407,13 @@ class FspAdmin:
loglevel=loglevel, loglevel=loglevel,
zero_on_step=conf.get('zero_on_step', False), zero_on_step=conf.get('zero_on_step', False),
shm_registry=[
(token.as_msg(), fsp_name, dst_token.as_msg())
for (token, fsp_name), dst_token
in self._flow_registry.items()
],
) as (ctx, last_index), ) as (ctx, last_index),
ctx.open_stream() as stream, ctx.open_stream() as stream,
): ):
# register output data # register output data
self._registry[ self._registry[
(fqsn, ns_path) (brokername, sym, ns_path)
] = ( ] = (
stream, stream,
dst_shm, dst_shm,
@ -433,21 +423,6 @@ class FspAdmin:
started.set() started.set()
# wait for graceful shutdown signal # wait for graceful shutdown signal
async with stream.subscribe() as stream:
async for msg in stream:
info = msg.get('fsp_update')
if info:
# if the chart isn't hidden try to update
# the data on screen.
if not self.linked.isHidden():
log.debug(f'Re-syncing graphics for fsp: {ns_path}')
self.linked.graphics_cycle(
trigger_all=True,
prepend_update_index=info['first'],
)
else:
log.info(f'recved unexpected fsp engine msg: {msg}')
await complete.wait() await complete.wait()
async def start_engine_task( async def start_engine_task(
@ -461,18 +436,14 @@ class FspAdmin:
) -> (ShmArray, trio.Event): ) -> (ShmArray, trio.Event):
fqsn = self.linked.symbol.front_fqsn() fqsn = self.linked.symbol.front_feed()
# allocate an output shm array # allocate an output shm array
key, dst_shm, opened = maybe_mk_fsp_shm( dst_shm, opened = maybe_mk_fsp_shm(
fqsn, fqsn,
target=target, target=target,
readonly=True, readonly=True,
) )
self._flow_registry[
(self.src_shm._token, target.name)
] = dst_shm._token
# if not opened: # if not opened:
# raise RuntimeError( # raise RuntimeError(
# f'Already started FSP `{fqsn}:{func_name}`' # f'Already started FSP `{fqsn}:{func_name}`'
@ -486,7 +457,6 @@ class FspAdmin:
portal, portal,
complete, complete,
started, started,
fqsn,
dst_shm, dst_shm,
conf, conf,
target, target,
@ -585,22 +555,15 @@ async def open_vlm_displays(
be spawned here. be spawned here.
''' '''
sig = inspect.signature(flow_rates.func)
params = sig.parameters
async with ( async with (
open_fsp_sidepane( open_fsp_sidepane(
linked, { linked, {
'flows': { 'vlm': {
# TODO: add support for dynamically changing these
'params': { 'params': {
u'\u03BC' + '_type': { 'price_func': {
'default_value': str(params['mean_type'].default), 'default_value': 'chl3',
}, # tell target ``Edit`` widget to not allow
'period': { # edits for now.
'default_value': str(params['period'].default),
# make widget un-editable for now.
'widget_kwargs': {'readonly': True}, 'widget_kwargs': {'readonly': True},
}, },
}, },
@ -609,18 +572,12 @@ async def open_vlm_displays(
) as sidepane, ) as sidepane,
open_fsp_admin(linked, ohlcv) as admin, open_fsp_admin(linked, ohlcv) as admin,
): ):
# TODO: support updates
# period_field = sidepane.fields['period']
# period_field.setText(
# str(period_param.default)
# )
# built-in vlm which we plot ASAP since it's # built-in vlm which we plot ASAP since it's
# usually data provided directly with OHLC history. # usually data provided directly with OHLC history.
shm = ohlcv shm = ohlcv
chart = linked.add_plot( chart = linked.add_plot(
name='volume', name='volume',
shm=shm, array=shm.array,
array_key='volume', array_key='volume',
sidepane=sidepane, sidepane=sidepane,
@ -635,23 +592,22 @@ async def open_vlm_displays(
) )
# force 0 to always be in view # force 0 to always be in view
def multi_maxmin( def maxmin(
names: list[str], names: list[str],
) -> tuple[float, float]: ) -> tuple[float, float]:
mx = 0 mx = 0
for name in names: for name in names:
mxmn = chart.maxmin(name=name) mxmn = chart.maxmin(name=name)
if mxmn: if mxmn:
ymax = mxmn[1] mx = max(mxmn[1], mx)
if ymax > mx:
mx = ymax # if mx:
# return 0, mxmn[1]
return 0, mx return 0, mx
chart.view.maxmin = partial(multi_maxmin, names=['volume']) chart.view._maxmin = partial(maxmin, names=['volume'])
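Both branches implement the same idea above: pin the y-floor at zero, take the running max over each named curve, then bind the name list with `functools.partial` so the view can invoke it argument-free. Restructured as a free function for clarity (the closure over `chart` in the source becomes an explicit parameter):

from functools import partial

def multi_maxmin(chart, names: list[str]) -> tuple[float, float]:
    # volume-style ranges always include 0 so bars sit on the x-axis
    mx = 0.0
    for name in names:
        mxmn = chart.maxmin(name=name)  # per-curve (ymin, ymax) or None
        if mxmn:
            mx = max(mxmn[1], mx)
    return 0.0, mx

# bound once, then callable with no args by the view box:
# view.maxmin = partial(multi_maxmin, chart, names=['dolla_vlm', 'dark_vlm'])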
# TODO: fix the x-axis label issue where if you put # TODO: fix the x-axis label issue where if you put
# the axis on the left it's totally not lined up... # the axis on the left it's totally not lined up...
@ -659,6 +615,11 @@ async def open_vlm_displays(
# chart.hideAxis('right') # chart.hideAxis('right')
# chart.showAxis('left') # chart.showAxis('left')
# XXX: ONLY for sub-chart fsps, overlays have their
# data looked up from the chart's internal array set.
# TODO: we must get a data view api going STAT!!
chart._shm = shm
# send back new chart to caller # send back new chart to caller
task_status.started(chart) task_status.started(chart)
@ -673,9 +634,9 @@ async def open_vlm_displays(
last_val_sticky.update_from_data(-1, value) last_val_sticky.update_from_data(-1, value)
vlm_curve = chart.update_graphics_from_flow( vlm_curve = chart.update_curve_from_array(
'volume', 'volume',
# shm.array, shm.array,
) )
# size view to data once at outset # size view to data once at outset
@ -687,9 +648,8 @@ async def open_vlm_displays(
if dvlm: if dvlm:
tasks_ready = []
# spawn and overlay $ vlm on the same subchart # spawn and overlay $ vlm on the same subchart
dvlm_shm, started = await admin.start_engine_task( shm, started = await admin.start_engine_task(
dolla_vlm, dolla_vlm,
{ # fsp engine conf { # fsp engine conf
@ -703,26 +663,11 @@ async def open_vlm_displays(
}, },
# loglevel, # loglevel,
) )
tasks_ready.append(started)
# FIXME: we should error on starting the same fsp right
# since it might collide with existing shm.. or wait we
# had this before??
# dolla_vlm,
tasks_ready.append(started)
# profiler(f'created shm for fsp actor: {display_name}') # profiler(f'created shm for fsp actor: {display_name}')
# wait for all engine tasks to startup await started.wait()
async with trio.open_nursery() as n:
for event in tasks_ready:
n.start_soon(event.wait)
# dolla vlm overlay pi = chart.overlay_plotitem(
# XXX: the main chart already contains a vlm "units" axis
# so here we add an overlay wth a y-range in
# $ liquidity-value units (normally a fiat like USD).
dvlm_pi = chart.overlay_plotitem(
'dolla_vlm', 'dolla_vlm',
index=0, # place axis on inside (nearest to chart) index=0, # place axis on inside (nearest to chart)
axis_title=' $vlm', axis_title=' $vlm',
@ -734,167 +679,80 @@ async def open_vlm_displays(
digits=2, digits=2,
), ),
}, },
)
# all to be overlayed curve names
fields = [
'dolla_vlm',
'dark_vlm',
]
# dvlm_rate_fields = [
# 'dvlm_rate',
# 'dark_dvlm_rate',
# ]
trade_rate_fields = [
'trade_rate',
'dark_trade_rate',
]
group_mxmn = partial(
multi_maxmin,
# keep both regular and dark vlm in view
names=fields,
# names=fields + dvlm_rate_fields,
) )
# add custom auto range handler # add custom auto range handler
dvlm_pi.vb._maxmin = group_mxmn pi.vb._maxmin = partial(
maxmin,
# keep both regular and dark vlm in view
names=['dolla_vlm', 'dark_vlm'],
)
# use slightly less light (then bracket) gray curve, _ = chart.draw_curve(
# for volume from "main exchange" and a more "bluey" name='dolla_vlm',
# gray for "dark" vlm. data=shm.array,
vlm_color = 'i3' array_key='dolla_vlm',
dark_vlm_color = 'charcoal' overlay=pi,
# add dvlm (step) curves to common view
def chart_curves(
names: list[str],
pi: pg.PlotItem,
shm: ShmArray,
step_mode: bool = False,
style: str = 'solid',
) -> None:
for name in names:
if 'dark' in name:
color = dark_vlm_color
elif 'rate' in name:
color = vlm_color
else:
color = 'bracket'
curve, _ = chart.draw_curve(
name=name,
shm=shm,
array_key=name,
overlay=pi,
color=color,
step_mode=step_mode,
style=style,
pi=pi,
)
# TODO: we need a better API to do this..
# specially store ref to shm for lookup in display loop
# since only a placeholder of `None` is entered in
# ``.draw_curve()``.
flow = chart._flows[name]
assert flow.plot is pi
chart_curves(
fields,
dvlm_pi,
dvlm_shm,
step_mode=True, step_mode=True,
# **conf.get('chart_kwargs', {})
) )
# spawn flow rates fsp **ONLY AFTER** the 'dolla_vlm' fsp is
# up since this one depends on it.
fr_shm, started = await admin.start_engine_task(
flow_rates,
{ # fsp engine conf
'func_name': 'flow_rates',
'zero_on_step': False,
},
# loglevel,
)
await started.wait()
# chart_curves(
# dvlm_rate_fields,
# dvlm_pi,
# fr_shm,
# )
# TODO: is there a way to "sync" the dual axes such that only # TODO: is there a way to "sync" the dual axes such that only
# one curve is needed? # one curve is needed?
# hide the original vlm curve since the $vlm one is now # hide the original vlm curve since the $vlm one is now
# displayed and the curves are effectively the same minus # displayed and the curves are effectively the same minus
# liquidity events (well at least on low OHLC periods - 1s). # liquidity events (well at least on low OHLC periods - 1s).
vlm_curve.hide() vlm_curve.hide()
chart.removeItem(vlm_curve)
vflow = chart._flows['volume']
vflow.render = False
# avoid range sorting on volume once disabled # TODO: we need a better API to do this..
chart.view.disable_auto_yrange() # specially store ref to shm for lookup in display loop
# since only a placeholder of `None` is entered in
# ``.draw_curve()``.
chart._overlays['dolla_vlm'] = shm
# Trade rate overlay curve, _ = chart.draw_curve(
# XXX: requires an additional overlay for
# a trades-per-period (time) y-range.
tr_pi = chart.overlay_plotitem(
'trade_rates',
# TODO: dynamically update period (and thus this axis?)
# title from user input.
axis_title='clears',
axis_side='left',
axis_kwargs={
'typical_max_str': ' 10.0 M ',
'formatter': partial(
humanize,
digits=2,
),
'text_color': vlm_color,
},
name='dark_vlm',
data=shm.array,
array_key='dark_vlm',
overlay=pi,
color='charcoal', # darker theme hue
step_mode=True,
# **conf.get('chart_kwargs', {})
) )
# add custom auto range handler chart._overlays['dark_vlm'] = shm
tr_pi.vb.maxmin = partial( # XXX: old dict-style config before it was moved into the
multi_maxmin, # helper task
# keep both regular and dark vlm in view # 'dolla_vlm': {
names=trade_rate_fields, # 'func_name': 'dolla_vlm',
) # 'zero_on_step': True,
# 'overlay': 'volume',
# 'separate_axes': True,
# 'params': {
# 'price_func': {
# 'default_value': 'chl3',
# # tell target ``Edit`` widget to not allow
# # edits for now.
# 'widget_kwargs': {'readonly': True},
# },
# },
# 'chart_kwargs': {'step_mode': True}
# },
chart_curves( # }
trade_rate_fields,
tr_pi,
fr_shm,
# step_mode=True,
# dashed line to represent "individual trades" being for name, axis_info in pi.axes.items():
# more "granular" B) # lol this sux XD
style='dash', axis = axis_info['item']
) if isinstance(axis, PriceAxis):
axis.size_to_values()
for pi in (
dvlm_pi,
tr_pi,
):
for name, axis_info in pi.axes.items():
# lol this sux XD
axis = axis_info['item']
if isinstance(axis, PriceAxis):
axis.size_to_values()
# built-in vlm fsps # built-in vlm fsps
for target, conf in { for target, conf in {
# tina_vwap: { tina_vwap: {
# 'overlay': 'ohlc', # overlays with OHLCV (main) chart 'overlay': 'ohlc', # overlays with OHLCV (main) chart
# 'anchor': 'session', 'anchor': 'session',
# }, },
}.items(): }.items():
started = await admin.open_fsp_chart( started = await admin.open_fsp_chart(
target, target,
@ -33,8 +33,7 @@ import numpy as np
import trio import trio
from ..log import get_logger from ..log import get_logger
from .._profile import pg_profile_enabled, ms_slower_then from ._style import _min_points_to_show
# from ._style import _min_points_to_show
from ._editors import SelectRect from ._editors import SelectRect
from . import _event from . import _event
@ -319,7 +318,6 @@ async def handle_viewmode_mouse(
): ):
# when in order mode, submit execution # when in order mode, submit execution
# msg.event.accept() # msg.event.accept()
# breakpoint()
view.order_mode.submit_order() view.order_mode.submit_order()
@ -344,7 +342,7 @@ class ChartView(ViewBox):
wheelEventRelay = QtCore.Signal(object, object, object) wheelEventRelay = QtCore.Signal(object, object, object)
event_relay_source: 'Optional[ViewBox]' = None event_relay_source: 'Optional[ViewBox]' = None
relays: dict[str, QtCore.Signal] = {} relays: dict[str, Signal] = {}
def __init__( def __init__(
self, self,
@ -358,13 +356,13 @@ class ChartView(ViewBox):
): ):
super().__init__( super().__init__(
parent=parent, parent=parent,
name=name,
# TODO: look into the default view padding # TODO: look into the default view padding
# support that might replace some of our # support that might replace some of our
# ``ChartPlotWidget._set_yrange()` # ``ChartPlotWidget._set_yrange()`
# defaultPadding=0., # defaultPadding=0.,
**kwargs **kwargs
) )
# for "known y-range style" # for "known y-range style"
self._static_yrange = static_yrange self._static_yrange = static_yrange
self._maxmin = None self._maxmin = None
@ -386,34 +384,6 @@ class ChartView(ViewBox):
self.order_mode: bool = False self.order_mode: bool = False
self.setFocusPolicy(QtCore.Qt.StrongFocus) self.setFocusPolicy(QtCore.Qt.StrongFocus)
self._ic = None
def start_ic(
self,
) -> None:
'''
Signal the beginning of a click-drag interaction
to any interested task waiters.
'''
if self._ic is None:
self.chart.pause_all_feeds()
self._ic = trio.Event()
def signal_ic(
self,
*args,
) -> None:
'''
Signal the end of a click-drag interaction
to any waiters.
'''
if self._ic:
self._ic.set()
self._ic = None
self.chart.resume_all_feeds()
@asynccontextmanager @asynccontextmanager
async def open_async_input_handler( async def open_async_input_handler(
@ -451,22 +421,13 @@ class ChartView(ViewBox):
if self._maxmin is None: if self._maxmin is None:
self._maxmin = chart.maxmin self._maxmin = chart.maxmin
@property
def maxmin(self) -> Callable:
return self._maxmin
@maxmin.setter
def maxmin(self, callback: Callable) -> None:
self._maxmin = callback
def wheelEvent( def wheelEvent(
self, self,
ev, ev,
axis=None, axis=None,
relayed_from: ChartView = None, relayed_from: ChartView = None,
): ):
''' '''Override "center-point" location for scrolling.
Override "center-point" location for scrolling.
This is an override of the ``ViewBox`` method simply changing This is an override of the ``ViewBox`` method simply changing
the center of the zoom to be the y-axis. the center of the zoom to be the y-axis.
@ -484,18 +445,15 @@ class ChartView(ViewBox):
# don't zoom more than the min points setting # don't zoom more than the min points setting
l, lbar, rbar, r = chart.bars_range() l, lbar, rbar, r = chart.bars_range()
# vl = r - l vl = r - l
# if ev.delta() > 0 and vl <= _min_points_to_show: if ev.delta() > 0 and vl <= _min_points_to_show:
# log.debug("Max zoom bruh...") log.debug("Max zoom bruh...")
# return return
# if ( if ev.delta() < 0 and vl >= len(chart._arrays[chart.name]) + 666:
# ev.delta() < 0 log.debug("Min zoom bruh...")
# and vl >= len(chart._flows[chart.name].shm.array) + 666 return
# ):
# log.debug("Min zoom bruh...")
# return
# actual scaling factor # actual scaling factor
s = 1.015 ** (ev.delta() * -1 / 20) # self.state['wheelScaleFactor']) s = 1.015 ** (ev.delta() * -1 / 20) # self.state['wheelScaleFactor'])
@ -516,11 +474,7 @@ class ChartView(ViewBox):
# lastPos = ev.lastPos() # lastPos = ev.lastPos()
# dif = pos - lastPos # dif = pos - lastPos
# dif = dif * -1 # dif = dif * -1
center = Point( center = Point(fn.invertQTransform(self.childGroup.transform()).map(ev.pos()))
fn.invertQTransform(
self.childGroup.transform()
).map(ev.pos())
)
# scale_y = 1.3 ** (center.y() * -1 / 20) # scale_y = 1.3 ** (center.y() * -1 / 20)
self.scaleBy(s, center) self.scaleBy(s, center)
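The exponential zoom factor above gives scroll steps a constant ratio: one wheel detent always scales the view range by the same percentage, regardless of the current zoom level. Worked numbers:

# one detent on a standard wheel reports ev.delta() == 120
s_in = 1.015 ** (120 * -1 / 20)    # == 1.015**-6 ~= 0.915
s_out = 1.015 ** (-120 * -1 / 20)  # == 1.015**6  ~= 1.093

# scaleBy() multiplies the visible range: s < 1 shrinks the range
# (zoom in ~8.5%), s > 1 grows it (zoom out ~9.3%), and every
# notch applies the same ratio no matter how far you've zoomed.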
@ -569,24 +523,7 @@ class ChartView(ViewBox):
self._resetTarget() self._resetTarget()
self.scaleBy(s, focal) self.scaleBy(s, focal)
# XXX: the order of the next 2 lines i'm pretty sure
# matters, we want the resize to trigger before the graphics
# update, but i gotta feelin that because this one is signal
# based (and thus not necessarily sync invoked right away)
# that calling the resize method manually might work better.
self.sigRangeChangedManually.emit(mask) self.sigRangeChangedManually.emit(mask)
# XXX: without this is seems as though sometimes
# when zooming in from far out (and maybe vice versa?)
# the signal isn't being fired enough since if you pan
# just after you'll see further downsampling code run
# (pretty noticeable on the OHLC ds curve) but with this
# that never seems to happen? Only question is how much this
# "double work" is causing latency when these missing event
# fires don't happen?
self.maybe_downsample_graphics()
ev.accept() ev.accept()
def mouseDragEvent( def mouseDragEvent(
@ -669,11 +606,6 @@ class ChartView(ViewBox):
# XXX: WHY # XXX: WHY
ev.accept() ev.accept()
self.start_ic()
# if self._ic is None:
# self.chart.pause_all_feeds()
# self._ic = trio.Event()
if axis == 1: if axis == 1:
self.chart._static_yrange = 'axis' self.chart._static_yrange = 'axis'
@ -691,12 +623,6 @@ class ChartView(ViewBox):
self.sigRangeChangedManually.emit(self.state['mouseEnabled']) self.sigRangeChangedManually.emit(self.state['mouseEnabled'])
if ev.isFinish():
self.signal_ic()
# self._ic.set()
# self._ic = None
# self.chart.resume_all_feeds()
# WEIRD "RIGHT-CLICK CENTER ZOOM" MODE # WEIRD "RIGHT-CLICK CENTER ZOOM" MODE
elif button & QtCore.Qt.RightButton: elif button & QtCore.Qt.RightButton:
@ -747,8 +673,8 @@ class ChartView(ViewBox):
# flag to prevent triggering sibling charts from the same linked # flag to prevent triggering sibling charts from the same linked
# set from recursion errors. # set from recursion errors.
autoscale_linked_plots: bool = False, autoscale_linked_plots: bool = True,
name: Optional[str] = None, autoscale_overlays: bool = False,
) -> None: ) -> None:
''' '''
@ -759,14 +685,6 @@ class ChartView(ViewBox):
data set. data set.
''' '''
name = self.name
# print(f'YRANGE ON {name}')
profiler = pg.debug.Profiler(
msg=f'`ChartView._set_yrange()`: `{name}`',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
delayed=True,
)
set_range = True set_range = True
chart = self._chart chart = self._chart
@ -790,22 +708,30 @@ class ChartView(ViewBox):
elif yrange is not None: elif yrange is not None:
ylow, yhigh = yrange ylow, yhigh = yrange
# calculate max, min y values in viewable x-range from data.
# Make sure min bars/datums on screen is adhered.
else:
br = bars_range or chart.bars_range()
# TODO: maybe should be a method on the
# chart widget/item?
if autoscale_linked_plots:
# avoid recursion by sibling plots
linked = self.linkedsplits
plots = list(linked.subplots.copy().values())
main = linked.chart
if main:
plots.append(main)
for chart in plots:
if chart and not chart._static_yrange:
chart.cv._set_yrange(
bars_range=br,
autoscale_linked_plots=False,
)
if set_range: if set_range:
ylow, yhigh = self._maxmin()
# XXX: only compute the mxmn range
# if none is provided as input!
if not yrange:
# flow = chart._flows[name]
yrange = self._maxmin()
if yrange is None:
log.warning(f'No yrange provided for {name}!?')
print(f"WTF NO YRANGE {name}")
return
ylow, yhigh = yrange
profiler(f'callback ._maxmin(): {yrange}')
# view margins: stay within a % of the "true range" # view margins: stay within a % of the "true range"
diff = yhigh - ylow diff = yhigh - ylow
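The hunk is truncated right after `diff` is computed, but the standard pattern (and what the `setLimits`/`setYRange` calls below consume) is to pad the raw [ylow, yhigh] by a fraction of the true range so extrema don't sit on the plot border. A hedged sketch; the 10% figure is an assumption, not read from the source:

def padded_yrange(
    ylow: float,
    yhigh: float,
    margin: float = 0.1,  # assumed padding fraction
) -> tuple[float, float]:
    diff = yhigh - ylow
    if diff == 0:
        diff = 1.0  # degenerate flat range: give it breathing room
    return ylow - margin * diff, yhigh + margin * diff

# e.g. padded_yrange(98.0, 102.0) -> (97.6, 102.4)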
@ -820,13 +746,9 @@ class ChartView(ViewBox):
yMax=yhigh, yMax=yhigh,
) )
self.setYRange(ylow, yhigh) self.setYRange(ylow, yhigh)
profiler(f'set limits: {(ylow, yhigh)}')
profiler.finish()
def enable_auto_yrange( def enable_auto_yrange(
self, vb: ChartView,
src_vb: Optional[ChartView] = None,
) -> None: ) -> None:
''' '''
@ -834,114 +756,13 @@ class ChartView(ViewBox):
based on data contents and ``ViewBox`` state. based on data contents and ``ViewBox`` state.
''' '''
if src_vb is None: vb.sigXRangeChanged.connect(vb._set_yrange)
src_vb = self
# splitter(s) resizing
src_vb.sigResized.connect(self._set_yrange)
# TODO: a smarter way to avoid calling this needlessly?
# 2 things i can think of:
# - register downsample-able graphics specially and only
# iterate those.
# - only register this when certain downsampleable graphics are
# "added to scene".
src_vb.sigRangeChangedManually.connect(
self.maybe_downsample_graphics
)
# mouse wheel doesn't emit XRangeChanged # mouse wheel doesn't emit XRangeChanged
src_vb.sigRangeChangedManually.connect(self._set_yrange) vb.sigRangeChangedManually.connect(vb._set_yrange)
vb.sigResized.connect(vb._set_yrange) # splitter(s) resizing
# src_vb.sigXRangeChanged.connect(self._set_yrange) def disable_auto_yrange(
# src_vb.sigXRangeChanged.connect(
# self.maybe_downsample_graphics
# )
def disable_auto_yrange(self) -> None:
self.sigResized.disconnect(
self._set_yrange,
)
self.sigRangeChangedManually.disconnect(
self.maybe_downsample_graphics
)
self.sigRangeChangedManually.disconnect(
self._set_yrange,
)
# self.sigXRangeChanged.disconnect(self._set_yrange)
# self.sigXRangeChanged.disconnect(
# self.maybe_downsample_graphics
# )
def x_uppx(self) -> float:
'''
Return the "number of x units" within a single
pixel currently being displayed for relevant
graphics items which are our children.
'''
graphics = [f.graphics for f in self._chart._flows.values()]
if not graphics:
return 0
for graphic in graphics:
xvec = graphic.pixelVectors()[0]
if xvec:
return xvec.x()
else:
return 0
+def disable_auto_yrange(
+self,
+) -> None:
+self._chart._static_yrange = 'axis'
-def maybe_downsample_graphics(
-self,
-autoscale_overlays: bool = True,
-):
-profiler = pg.debug.Profiler(
-msg=f'ChartView.maybe_downsample_graphics() for {self.name}',
disabled=not pg_profile_enabled(),
# XXX: important to avoid not seeing underlying
# ``.update_graphics_from_flow()`` nested profiling likely
# due to the way delaying works and garbage collection of
# the profiler in the delegated method calls.
ms_threshold=6,
# ms_threshold=ms_slower_then,
)
# TODO: a faster single-loop-iterator way of doing this XD
chart = self._chart
linked = self.linkedsplits
plots = linked.subplots | {chart.name: chart}
for chart_name, chart in plots.items():
for name, flow in chart._flows.items():
if (
not flow.render
# XXX: super important to be aware of this.
# or not flow.graphics.isVisible()
):
continue
# pass in no array which will read and render from the last
# passed array (normally provided by the display loop.)
chart.update_graphics_from_flow(
name,
use_vr=True,
)
# for each overlay on this chart auto-scale the
# y-range to max-min values.
if autoscale_overlays:
overlay = chart.pi_overlay
if overlay:
for pi in overlay.overlays:
pi.vb._set_yrange(
# TODO: get the range once up front...
# bars_range=br,
)
profiler('autoscaled linked plots')
profiler(f'<{chart_name}>.update_graphics_from_flow({name})')
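
For reference, the margin logic both branches share in ``_set_yrange()`` pads the computed extrema by a percentage of the "true range"; a standalone sketch of that math (the pad ratio is an assumption for illustration)::

    def pad_yrange(
        ylow: float,
        yhigh: float,
        range_margin: float = 0.06,  # assumed pad ratio
    ) -> tuple[float, float]:
        # pad by a % of the "true range" so extrema don't sit
        # flush against the viewbox edges.
        diff = yhigh - ylow
        return ylow - diff * range_margin, yhigh + diff * range_margin

    assert pad_yrange(0, 100, range_margin=0.25) == (-25.0, 125.0)
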

View File

@ -199,14 +199,7 @@ class LevelLabel(YAxisLabel):
elif self._orient_v == 'top':
lp, rp = rect.bottomLeft(), rect.bottomRight()
-p.drawLine(
-*map(int, [
-lp.x(),
-lp.y(),
-rp.x(),
-rp.y(),
-])
-)
+p.drawLine(lp.x(), lp.y(), rp.x(), rp.y())
def highlight(self, pen) -> None:
self._pen = pen

View File

@ -20,7 +20,7 @@ Lines for orders, alerts, L2.
""" """
from functools import partial from functools import partial
from math import floor from math import floor
from typing import Optional, Callable from typing import Tuple, Optional, List, Callable
import pyqtgraph as pg import pyqtgraph as pg
from pyqtgraph import Point, functions as fn from pyqtgraph import Point, functions as fn
@ -29,8 +29,10 @@ from PyQt5.QtCore import QPointF
from ._annotate import qgo_draw_markers, LevelMarker
from ._anchors import (
+marker_right_points,
vbr_left,
right_axis,
-# pp_tight_and_right, # wanna keep it straight in the long run
gpath_pin,
)
from ..calc import humanize
@ -102,8 +104,8 @@ class LevelLine(pg.InfiniteLine):
# list of labels anchored at one of the 2 line endpoints
# inside the viewbox
-self._labels: list[Label] = []
+self._labels: List[Label] = []
-self._markers: list[(int, Label)] = []
+self._markers: List[(int, Label)] = []
# whenever this line is moved trigger label updates
self.sigPositionChanged.connect(self.on_pos_change)
@ -122,7 +124,7 @@ class LevelLine(pg.InfiniteLine):
self._y_incr_mult = 1 / chart.linked.symbol.tick_size
self._right_end_sc: float = 0
-def txt_offsets(self) -> tuple[int, int]:
+def txt_offsets(self) -> Tuple[int, int]:
return 0, 0
@property
@ -313,6 +315,17 @@ class LevelLine(pg.InfiniteLine):
# TODO: enter labels edit mode
print(f'double click {ev}')
def right_point(
self,
) -> float:
chart = self._chart
l1_len = chart._max_l1_line_len
ryaxis = chart.getAxis('right')
up_to_l1_sc = ryaxis.pos().x() - l1_len
return up_to_l1_sc
def paint(
self,
@ -332,7 +345,7 @@ class LevelLine(pg.InfiniteLine):
vb_left, vb_right = self._endPoints
vb = self.getViewBox()
-line_end, marker_right, r_axis_x = self._chart.marker_right_points()
+line_end, marker_right, r_axis_x = marker_right_points(self._chart)
if self.show_markers and self.markers:
@ -398,7 +411,7 @@ class LevelLine(pg.InfiniteLine):
def scene_endpoint(self) -> QPointF:
if not self._right_end_sc:
-line_end, _, _ = self._chart.marker_right_points()
+line_end, _, _ = marker_right_points(self._chart)
self._right_end_sc = line_end - 10
return QPointF(self._right_end_sc, self.scene_y())
@ -409,23 +422,23 @@ class LevelLine(pg.InfiniteLine):
) -> QtWidgets.QGraphicsPathItem:
-self._marker = path
-self._marker.setPen(self.currentPen)
-self._marker.setBrush(fn.mkBrush(self.currentPen.color()))
# add path to scene
self.getViewBox().scene().addItem(path)
-# place to just-left of L1 labels
-rsc = self._chart.pre_l1_xs()[0]
+self._marker = path
+rsc = self.right_point()
+self._marker.setPen(self.currentPen)
+self._marker.setBrush(fn.mkBrush(self.currentPen.color()))
path.setPos(QPointF(rsc, self.scene_y()))
return path
def hoverEvent(self, ev):
-'''
-Mouse hover callback.
-'''
+"""Mouse hover callback.
+"""
cur = self._chart.linked.cursor
# hovered
@ -601,8 +614,7 @@ def order_line(
**line_kwargs,
) -> LevelLine:
-'''
-Convenience routine to add a line graphic representing an order
-execution submitted to the EMS via the chart's "order mode".
-'''
+'''Convenience routine to add a line graphic representing an order
+execution submitted to the EMS via the chart's "order mode".
+'''
@ -677,6 +689,7 @@ def order_line(
return f'{account}: '
label.fields = {
'size': size,
'size_digits': 0,
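
Both variants (``pre_l1_xs()`` on the chart vs. the local ``right_point()``) pin order markers just left of the L1 bid/ask labels with simple scene-coordinate arithmetic; a sketch using illustrative names rather than the real ``ChartPlotWidget`` attributes::

    def up_to_l1_scene_x(
        raxis_x: float,           # scene x of the right price axis
        max_l1_line_len: float,   # widest L1 label drawn so far
    ) -> float:
        # markers pin just left of the L1 labels which
        # themselves hug the right price axis.
        return raxis_x - max_l1_line_len

    assert up_to_l1_scene_x(800.0, 60.0) == 740.0
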

View File

@ -17,40 +17,40 @@
Super fast OHLC sampling graphics types.
"""
-from __future__ import annotations
-from typing import (
-Optional,
-TYPE_CHECKING,
-)
+from typing import List, Optional, Tuple
import numpy as np
import pyqtgraph as pg
+from numba import njit, float64, int64  # , optional
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QLineF, QPointF
-from PyQt5.QtGui import QPainterPath
+# from numba import types as ntypes
+# from ..data._source import numba_ohlc_dtype
-from .._profile import pg_profile_enabled, ms_slower_then
+from .._profile import pg_profile_enabled
from ._style import hcolor
-from ..log import get_logger
-if TYPE_CHECKING:
-from ._chart import LinkedSplits
-log = get_logger(__name__)
+def _mk_lines_array(
+data: List,
+size: int,
+elements_step: int = 6,
+) -> np.ndarray:
+"""Create an ndarray to hold lines graphics info.
+"""
+return np.zeros_like(
+data,
+shape=(int(size), elements_step),
+dtype=object,
+)
-def bar_from_ohlc_row(
+def lines_from_ohlc(
row: np.ndarray,
-# 0.5 is no overlap between arms, 1.0 is full overlap
-w: float = 0.43
+w: float
-) -> tuple[QLineF]:
+) -> Tuple[QLineF]:
-'''
-Generate the minimal ``QLineF`` lines to construct a single
-OHLC "bar" for use in the "last datum" of a series.
-'''
open, high, low, close, index = row[
['open', 'high', 'low', 'close', 'index']]
@ -81,37 +81,280 @@ def bar_from_ohlc_row(
return [hl, o, c]
-class BarItems(pg.GraphicsObject):
-'''
-"Price range" bars graphics rendered from a OHLC sampled sequence.
+@njit(
+# TODO: for now need to construct this manually for readonly arrays, see
+# https://github.com/numba/numba/issues/4511
# ntypes.Tuple((float64[:], float64[:], float64[:]))(
# numba_ohlc_dtype[::1], # contiguous
# int64,
# optional(float64),
# ),
nogil=True
)
def path_arrays_from_ohlc(
data: np.ndarray,
start: int64,
bar_gap: float64 = 0.43,
) -> np.ndarray:
"""Generate an array of lines objects from input ohlc data.
"""
size = int(data.shape[0] * 6)
x = np.zeros(
# data,
shape=size,
dtype=float64,
)
y, c = x.copy(), x.copy()
# TODO: report bug for assert @
# /home/goodboy/repos/piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
for i, q in enumerate(data[start:], start):
# TODO: ask numba why this doesn't work..
# open, high, low, close, index = q[
# ['open', 'high', 'low', 'close', 'index']]
open = q['open']
high = q['high']
low = q['low']
close = q['close']
index = float64(q['index'])
istart = i * 6
istop = istart + 6
# x,y detail the 6 points which connect all vertexes of a ohlc bar
x[istart:istop] = (
index - bar_gap,
index,
index,
index,
index,
index + bar_gap,
)
y[istart:istop] = (
open,
open,
low,
high,
close,
close,
)
# specifies that the first edge is never connected to the
# prior bars last edge thus providing a small "gap"/"space"
# between bars determined by ``bar_gap``.
c[istart:istop] = (1, 1, 1, 1, 1, 0)
return x, y, c
def gen_qpath(
data,
start, # XXX: do we need this?
w,
) -> QtGui.QPainterPath:
profiler = pg.debug.Profiler(disabled=not pg_profile_enabled())
x, y, c = path_arrays_from_ohlc(data, start, bar_gap=w)
profiler("generate stream with numba")
# TODO: numba the internals of this!
path = pg.functions.arrayToQPath(x, y, connect=c)
profiler("generate path with arrayToQPath")
return path
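
A quick usage sketch for the generator pair above, assuming the module is importable and a Qt runtime exists for the final ``QPainterPath``::

    import numpy as np

    ohlc_dtype = np.dtype([
        ('index', 'f8'),
        ('open', 'f8'),
        ('high', 'f8'),
        ('low', 'f8'),
        ('close', 'f8'),
    ])
    bars = np.array(
        [
            (0, 10.0, 12.0, 9.5, 11.0),
            (1, 11.0, 11.5, 10.0, 10.5),
        ],
        dtype=ohlc_dtype,
    )
    # 6 (x, y) vertices per bar; each bar's last connect flag is 0
    # so consecutive bars aren't joined by a stroke.
    x, y, c = path_arrays_from_ohlc(bars, 0, bar_gap=0.43)
    assert x.shape == y.shape == c.shape == (12,)
    path = gen_qpath(bars, 0, 0.43)  # wraps arrayToQPath(x, y, connect=c)
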
+class BarItems(pg.GraphicsObject):
+"""Price range bars graphics rendered from a OHLC sequence.
+"""
+sigPlotChanged = QtCore.pyqtSignal(object)
+# 0.5 is no overlap between arms, 1.0 is full overlap
+w: float = 0.43
-'''
def __init__(
self,
-linked: LinkedSplits,
+# scene: 'QGraphicsScene', # noqa
plotitem: 'pg.PlotItem', # noqa
pen_color: str = 'bracket',
last_bar_color: str = 'bracket',
-name: Optional[str] = None,
) -> None:
super().__init__() super().__init__()
self.linked = linked
# XXX: for the mega-lulz increasing width here increases draw # XXX: for the mega-lulz increasing width here increases draw
# latency... so probably don't do it until we figure that out. # latency... so probably don't do it until we figure that out.
self._color = pen_color
self.bars_pen = pg.mkPen(hcolor(pen_color), width=1) self.bars_pen = pg.mkPen(hcolor(pen_color), width=1)
self.last_bar_pen = pg.mkPen(hcolor(last_bar_color), width=2) self.last_bar_pen = pg.mkPen(hcolor(last_bar_color), width=2)
self._name = name
# NOTE: this prevents redraws on mouse interaction which is
# a huge boon for avg interaction latency.
# TODO: one question still remaining is if this makes transform
# interactions slower (such as zooming) and if so maybe if/when
# we implement a "history" mode for the view we disable this in
# that mode?
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache) self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self.path = QPainterPath()
self._last_bar_lines: Optional[tuple[QLineF, ...]] = None
-def x_uppx(self) -> int:
-# we expect the downsample curve report this.
-return 0
+# not sure if this is actually improving anything but figured it
+# was worth a shot:
+# self.path.reserve(int(100e3 * 6))
self.path = QtGui.QPainterPath()
self._pi = plotitem
self._xrange: Tuple[int, int]
self._yrange: Tuple[float, float]
# TODO: don't render the full backing array each time
# self._path_data = None
self._last_bar_lines: Optional[Tuple[QLineF, ...]] = None
# track the current length of drawable lines within the larger array
self.start_index: int = 0
self.stop_index: int = 0
def draw_from_data(
self,
data: np.ndarray,
start: int = 0,
) -> QtGui.QPainterPath:
"""Draw OHLC datum graphics from a ``np.ndarray``.
This routine is usually only called to draw the initial history.
"""
hist, last = data[:-1], data[-1]
self.path = gen_qpath(hist, start, self.w)
# save graphics for later reference and keep track
# of current internal "last index"
# self.start_index = len(data)
index = data['index']
self._xrange = (index[0], index[-1])
self._yrange = (
np.nanmax(data['high']),
np.nanmin(data['low']),
)
# up to last to avoid double draw of last bar
self._last_bar_lines = lines_from_ohlc(last, self.w)
# trigger render
# https://doc.qt.io/qt-5/qgraphicsitem.html#update
self.update()
return self.path
def update_from_array(
self,
array: np.ndarray,
just_history=False,
) -> None:
"""Update the last datum's bar graphic from input data array.
This routine should be interface compatible with
``pg.PlotCurveItem.setData()``. Normally this method in
``pyqtgraph`` seems to update all the data passed to the
graphics object, and then update/rerender, but here we're
assuming the prior graphics haven't changed (OHLC history rarely
does) so this "should" be simpler and faster.
This routine should be made (transitively) as fast as possible.
"""
# index = self.start_index
istart, istop = self._xrange
index = array['index']
first_index, last_index = index[0], index[-1]
# length = len(array)
prepend_length = istart - first_index
append_length = last_index - istop
flip_cache = False
# TODO: allow mapping only a range of lines thus
# only drawing as many bars as exactly specified.
if prepend_length:
# new history was added and we need to render a new path
new_bars = array[:prepend_length]
prepend_path = gen_qpath(new_bars, 0, self.w)
# XXX: SOMETHING IS MAYBE FISHY HERE what with the old_path
# y value not matching the first value from
# array[prepend_length + 1] ???
# update path
old_path = self.path
self.path = prepend_path
self.path.addPath(old_path)
# trigger redraw despite caching
self.prepareGeometryChange()
if append_length:
# generate new lines objects for updatable "current bar"
self._last_bar_lines = lines_from_ohlc(array[-1], self.w)
# generate new graphics to match provided array
# path appending logic:
# we need to get the previous "current bar(s)" for the time step
# and convert it to a sub-path to append to the historical set
# new_bars = array[istop - 1:istop + append_length - 1]
new_bars = array[-append_length - 1:-1]
append_path = gen_qpath(new_bars, 0, self.w)
self.path.moveTo(float(istop - self.w), float(new_bars[0]['open']))
self.path.addPath(append_path)
# trigger redraw despite caching
self.prepareGeometryChange()
self.setCacheMode(QtWidgets.QGraphicsItem.NoCache)
flip_cache = True
self._xrange = first_index, last_index
# last bar update
i, o, h, l, last, v = array[-1][
['index', 'open', 'high', 'low', 'close', 'volume']
]
# assert i == self.start_index - 1
# assert i == last_index
body, larm, rarm = self._last_bar_lines
# XXX: is there a faster way to modify this?
rarm.setLine(rarm.x1(), last, rarm.x2(), last)
# writer is responsible for changing open on "first" volume of bar
larm.setLine(larm.x1(), o, larm.x2(), o)
if l != h: # noqa
if body is None:
body = self._last_bar_lines[0] = QLineF(i, l, i, h)
else:
# update body
body.setLine(i, l, i, h)
# XXX: pretty sure this is causing an issue where the bar has
# a large upward move right before the next sample and the body
# is getting set to None since the next bar is flat but the shm
# array index update wasn't read by the time this code runs. Iow
# we're doing this removal of the body for a bar index that is
# now out of date / from some previous sample. It's weird
# though because i've seen it do this to bars i - 3 back?
self.update()
if flip_cache:
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
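
The incremental redraw above reduces to diffing the currently rendered index range against the new array's ``index`` column; that arithmetic in isolation::

    import numpy as np

    def diff_lengths(
        xrange: tuple[int, int],  # (first, last) index already rendered
        index: np.ndarray,        # the new array's 'index' column
    ) -> tuple[int, int]:
        istart, istop = xrange
        prepend_length = istart - int(index[0])   # new history on the left
        append_length = int(index[-1]) - istop    # new live bars on the right
        return prepend_length, append_length

    # rendered 100..200 so far; new array spans 90..203:
    assert diff_lengths((100, 200), np.arange(90, 204)) == (10, 3)
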
def boundingRect(self): def boundingRect(self):
# Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect # Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect
@ -130,21 +373,16 @@ class BarItems(pg.GraphicsObject):
# apparently this is a lot faster says the docs?
# https://doc.qt.io/qt-5/qpainterpath.html#controlPointRect
hb = self.path.controlPointRect()
-hb_tl, hb_br = (
-hb.topLeft(),
-hb.bottomRight(),
-)
+hb_tl, hb_br = hb.topLeft(), hb.bottomRight()
# need to include last bar height or BR will be off
mx_y = hb_br.y()
mn_y = hb_tl.y()
-last_lines = self._last_bar_lines
-if last_lines:
-body_line = self._last_bar_lines[0]
-if body_line:
-mx_y = max(mx_y, max(body_line.y1(), body_line.y2()))
-mn_y = min(mn_y, min(body_line.y1(), body_line.y2()))
+body_line = self._last_bar_lines[0]
+if body_line:
+mx_y = max(mx_y, max(body_line.y1(), body_line.y2()))
+mn_y = min(mn_y, min(body_line.y1(), body_line.y2()))
return QtCore.QRectF(
@ -167,13 +405,9 @@ class BarItems(pg.GraphicsObject):
p: QtGui.QPainter,
opt: QtWidgets.QStyleOptionGraphicsItem,
w: QtWidgets.QWidget
) -> None:
-profiler = pg.debug.Profiler(
-disabled=not pg_profile_enabled(),
-ms_threshold=ms_slower_then,
-)
+profiler = pg.debug.Profiler(disabled=not pg_profile_enabled())
# p.setCompositionMode(0)
@ -184,67 +418,9 @@ class BarItems(pg.GraphicsObject):
# lead to any perf gains other than when zoomed in to less bars
# in view.
p.setPen(self.last_bar_pen)
-if self._last_bar_lines:
-p.drawLines(*tuple(filter(bool, self._last_bar_lines)))
-profiler('draw last bar')
+p.drawLines(*tuple(filter(bool, self._last_bar_lines)))
+profiler('draw last bar')
p.setPen(self.bars_pen)
p.drawPath(self.path)
-profiler(f'draw history path: {self.path.capacity()}')
+profiler('draw history path')
def draw_last_datum(
self,
path: QPainterPath,
src_data: np.ndarray,
render_data: np.ndarray,
reset: bool,
array_key: str,
fields: list[str] = [
'index',
'open',
'high',
'low',
'close',
],
) -> None:
# relevant fields
ohlc = src_data[fields]
last_row = ohlc[-1:]
# individual values
last_row = i, o, h, l, last = ohlc[-1]
# generate new lines objects for updatable "current bar"
self._last_bar_lines = bar_from_ohlc_row(last_row)
# assert i == graphics.start_index - 1
# assert i == last_index
body, larm, rarm = self._last_bar_lines
# XXX: is there a faster way to modify this?
rarm.setLine(rarm.x1(), last, rarm.x2(), last)
# writer is responsible for changing open on "first" volume of bar
larm.setLine(larm.x1(), o, larm.x2(), o)
if l != h: # noqa
if body is None:
body = self._last_bar_lines[0] = QLineF(i, l, i, h)
else:
# update body
body.setLine(i, l, i, h)
# XXX: pretty sure this is causing an issue where the
# bar has a large upward move right before the next
# sample and the body is getting set to None since the
# next bar is flat but the shm array index update wasn't
# read by the time this code runs. Iow we're doing this
# removal of the body for a bar index that is now out of
# date / from some previous sample. It's weird though
# because i've seen it do this to bars i - 3 back?
return ohlc['index'], ohlc['close']

View File

@ -103,6 +103,11 @@ class ComposedGridLayout:
dict[str, AxisItem],
] = {}
+self._axes2pi: dict[
+AxisItem,
+dict[str, PlotItem],
+] = {}
# TODO: better name?
# construct surrounding layouts for placing outer axes and
# their legends and title labels.
@ -153,8 +158,8 @@ class ComposedGridLayout:
for name, axis_info in plotitem.axes.items():
axis = axis_info['item']
# register this plot's (maybe re-placed) axes for lookup.
-# print(f'inserting {name}:{axis} to index {index}')
-self._pi2axes.setdefault(name, {})[index] = axis
+self._pi2axes.setdefault(index, {})[name] = axis
+self._axes2pi.setdefault(index, {})[name] = plotitem
# enter plot into list for index tracking
self.items.insert(index, plotitem)
@ -208,12 +213,11 @@ class ComposedGridLayout:
# invert insert index for layouts which are
# not-left-to-right, top-to-bottom insert oriented
-insert_index = index
if name in ('top', 'left'):
-insert_index = min(len(axes) - index, 0)
+index = min(len(axes) - index, 0)
-assert insert_index >= 0
+assert index >= 0
-linlayout.insertItem(insert_index, axis)
+linlayout.insertItem(index, axis)
axes.insert(index, axis)
self._register_item(index, plotitem)
@ -239,15 +243,13 @@ class ComposedGridLayout:
plot: PlotItem,
name: str,
-) -> Optional[AxisItem]:
+) -> AxisItem:
'''
-Retrieve the named axis for overlayed ``plot`` or ``None``
-if axis for that name is not shown.
+Retrieve the named axis for overlayed ``plot``.
'''
index = self.items.index(plot)
-named = self._pi2axes[name]
-return named.get(index)
+return self._pi2axes[index][name]
def pop(
self,
@ -339,7 +341,7 @@ def mk_relay_method(
# halt/short circuit the graphicscene loop). Further the
# surrounding handler for this signal must be allowed to execute
# and get processed by **this consumer**.
-# print(f'{vb.name} rx relayed from {relayed_from.name}')
+print(f'{vb.name} rx relayed from {relayed_from.name}')
ev.ignore()
return slot(
@ -349,7 +351,7 @@ def mk_relay_method(
)
if axis is not None:
-# print(f'{vb.name} handling axis event:\n{str(ev)}')
+print(f'{vb.name} handling axis event:\n{str(ev)}')
ev.accept()
return slot(
vb,
@ -488,6 +490,7 @@ class PlotItemOverlay:
vb.setZValue(1000)  # XXX: critical for scene layering/relaying
self.overlays: list[PlotItem] = []
+from piker.ui._overlay import ComposedGridLayout
self.layout = ComposedGridLayout(
root_plotitem,
root_plotitem.layout,
@ -508,7 +511,7 @@ class PlotItemOverlay:
) -> None:
-index = index or len(self.overlays)
+index = index or 0
root = self.root_plotitem
# layout: QGraphicsGridLayout = root.layout
self.overlays.insert(index, plotitem)
@ -610,26 +613,6 @@ class PlotItemOverlay:
'''
return self.layout.get_axis(plot, name)
def get_axes(
self,
name: str,
) -> list[AxisItem]:
'''
Retrieve all axes for all plots with ``name: str``.
If a particular overlay doesn't have a displayed named axis
then it is not delivered in the returned ``list``.
'''
axes = []
for plot in self.overlays:
axis = self.layout.get_axis(plot, name)
if axis:
axes.append(axis)
return axes
# TODO: i guess we need this if you want to detach existing plots
# dynamically? XXX: untested as of now.
def _disconnect_all(

View File

@ -1,236 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Super fast ``QPainterPath`` generation related operator routines.
"""
from __future__ import annotations
from typing import (
# Optional,
TYPE_CHECKING,
)
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import njit, float64, int64 # , optional
# import pyqtgraph as pg
from PyQt5 import QtGui
# from PyQt5.QtCore import QLineF, QPointF
from ..data._sharedmem import (
ShmArray,
)
# from .._profile import pg_profile_enabled, ms_slower_then
from ._compression import (
ds_m4,
)
if TYPE_CHECKING:
from ._flows import Renderer
def xy_downsample(
x,
y,
uppx,
x_spacer: float = 0.5,
) -> tuple[np.ndarray, np.ndarray]:
# downsample whenever more then 1 pixels per datum can be shown.
# always refresh data bounds until we get diffing
# working properly, see above..
bins, x, y = ds_m4(
x,
y,
uppx,
)
# flatten output to 1d arrays suitable for path-graphics generation.
x = np.broadcast_to(x[:, None], y.shape)
x = (x + np.array(
[-x_spacer, 0, 0, x_spacer]
)).flatten()
y = y.flatten()
return x, y
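
``ds_m4`` is M4 downsampling: each horizontal pixel bin keeps only its first, min, max and last samples so the rendered extrema are preserved exactly. A minimal pure-numpy sketch of the idea assuming uniform bins (the real ``._compression.ds_m4`` signature and binning differ)::

    import numpy as np

    def m4_sketch(
        x: np.ndarray,
        y: np.ndarray,
        uppx: float,  # x-units (samples) per pixel
    ) -> tuple[np.ndarray, np.ndarray]:
        n_bins = max(int(len(x) // max(uppx, 1)), 1)
        xs = np.empty(n_bins)
        ys = np.empty((n_bins, 4))  # first, max, min, last per bin
        for i, chunk in enumerate(
            np.array_split(np.arange(len(x)), n_bins)
        ):
            seg = y[chunk]
            xs[i] = x[chunk[0]]
            ys[i] = seg[0], seg.max(), seg.min(), seg[-1]
        return xs, ys

    xs, ys = m4_sketch(np.arange(1000.), np.sin(np.arange(1000.) / 50), 10)
    assert ys.shape == (100, 4)

The 4-column output is the shape the ``[-x_spacer, 0, 0, x_spacer]`` offset broadcast above expects.
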
@njit(
# TODO: for now need to construct this manually for readonly arrays, see
# https://github.com/numba/numba/issues/4511
# ntypes.tuple((float64[:], float64[:], float64[:]))(
# numba_ohlc_dtype[::1], # contiguous
# int64,
# optional(float64),
# ),
nogil=True
)
def path_arrays_from_ohlc(
data: np.ndarray,
start: int64,
bar_gap: float64 = 0.43,
) -> np.ndarray:
'''
Generate an array of lines objects from input ohlc data.
'''
size = int(data.shape[0] * 6)
x = np.zeros(
# data,
shape=size,
dtype=float64,
)
y, c = x.copy(), x.copy()
# TODO: report bug for assert @
# /home/goodboy/repos/piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
for i, q in enumerate(data[start:], start):
# TODO: ask numba why this doesn't work..
# open, high, low, close, index = q[
# ['open', 'high', 'low', 'close', 'index']]
open = q['open']
high = q['high']
low = q['low']
close = q['close']
index = float64(q['index'])
istart = i * 6
istop = istart + 6
# x,y detail the 6 points which connect all vertexes of a ohlc bar
x[istart:istop] = (
index - bar_gap,
index,
index,
index,
index,
index + bar_gap,
)
y[istart:istop] = (
open,
open,
low,
high,
close,
close,
)
# specifies that the first edge is never connected to the
# prior bars last edge thus providing a small "gap"/"space"
# between bars determined by ``bar_gap``.
c[istart:istop] = (1, 1, 1, 1, 1, 0)
return x, y, c
def gen_ohlc_qpath(
r: Renderer,
data: np.ndarray,
array_key: str, # we ignore this
vr: tuple[int, int],
start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.43,
) -> QtGui.QPainterPath:
'''
More or less direct proxy to ``path_arrays_from_ohlc()``
but with closed in kwargs for line spacing.
'''
x, y, c = path_arrays_from_ohlc(
data,
start,
bar_gap=w,
)
return x, y, c
def ohlc_to_line(
ohlc_shm: ShmArray,
data_field: str,
fields: list[str] = ['open', 'high', 'low', 'close']
) -> tuple[
np.ndarray,
np.ndarray,
]:
'''
Convert an input struct-array holding OHLC samples into a pair of
flattened x, y arrays with the same size (datums wise) as the source
data.
'''
y_out = ohlc_shm.ustruct(fields)
first = ohlc_shm._first.value
last = ohlc_shm._last.value
# write pushed data to flattened copy
y_out[first:last] = rfn.structured_to_unstructured(
ohlc_shm.array[fields]
)
# generate a flat-interpolated x-domain
x_out = (
np.broadcast_to(
ohlc_shm._array['index'][:, None],
(
ohlc_shm._array.size,
# 4, # only ohlc
y_out.shape[1],
),
) + np.array([-0.5, 0, 0, 0.5])
)
assert y_out.any()
return (
x_out,
y_out,
)
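
The flattening contract relied on above, demonstrated standalone::

    import numpy as np
    from numpy.lib import recfunctions as rfn

    ohlc = np.array(
        [(10., 12., 9.5, 11.), (11., 11.5, 10., 10.5)],
        dtype=[('open', 'f8'), ('high', 'f8'), ('low', 'f8'), ('close', 'f8')],
    )
    flat = rfn.structured_to_unstructured(ohlc)
    # one row of 4 plain floats per bar, ready for path generation
    assert flat.shape == (2, 4)
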
def to_step_format(
shm: ShmArray,
data_field: str,
index_field: str = 'index',
) -> tuple[int, np.ndarray, np.ndarray]:
'''
Convert an input 1d shm array to a "step array" format
for use by path graphics generation.
'''
i = shm._array['index'].copy()
out = shm._array[data_field].copy()
x_out = np.broadcast_to(
i[:, None],
(i.size, 2),
) + np.array([-0.5, 0.5])
y_out = np.empty((len(out), 2), dtype=out.dtype)
y_out[:] = out[:, np.newaxis]
# start y at origin level
y_out[0, 0] = 0
return x_out, y_out
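
The same step transform inlined on toy data: each sample becomes a flat two-point segment spanning ``[i - 0.5, i + 0.5]``::

    import numpy as np

    i = np.arange(3)                  # indices 0, 1, 2
    out = np.array([5., 7., 6.])      # sampled values
    x_out = np.broadcast_to(i[:, None], (i.size, 2)) + np.array([-0.5, 0.5])
    y_out = np.empty((len(out), 2), dtype=out.dtype)
    y_out[:] = out[:, np.newaxis]
    y_out[0, 0] = 0                   # start the step curve at origin level
    assert x_out.shape == y_out.shape == (3, 2)
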

View File

@ -71,10 +71,10 @@ async def update_pnl_from_feed(
log.info(f'Starting pnl display for {pp.alloc.account}')
if live.size < 0:
-types = ('ask', 'last', 'last', 'dark_trade')
+types = ('ask', 'last', 'last', 'utrade')
elif live.size > 0:
-types = ('bid', 'last', 'last', 'dark_trade')
+types = ('bid', 'last', 'last', 'utrade')
else:
log.info(f'No position (yet) for {tracker.alloc.account}@{key}')
@ -119,8 +119,7 @@ async def update_pnl_from_feed(
@dataclass
class SettingsPane:
-'''
-Composite set of widgets plus an allocator model for configuring
+'''Composite set of widgets plus an allocator model for configuring
order entry sizes and position limits per tradable instrument.
'''
@ -152,8 +151,7 @@ class SettingsPane:
key: str,
) -> None:
-'''
-Called on any order pane drop down selection change.
+'''Called on any order pane drop down selection change.
'''
log.info(f'selection input {key}:{text}')
@ -166,8 +164,7 @@ class SettingsPane:
value: str,
) -> bool:
-'''
-Called on any order pane edit field value change.
+'''Called on any order pane edit field value change.
'''
mode = self.order_mode
@ -222,36 +219,16 @@ class SettingsPane:
else:
value = puterize(value)
if key == 'limit':
-pp = mode.current_pp.live_pp
if size_unit == 'currency':
-dsize = pp.dsize
-if dsize > value:
-log.error(
-f'limit must > then current pp: {dsize}'
-)
-raise ValueError
alloc.currency_limit = value
else:
-size = pp.size
-if size > value:
-log.error(
-f'limit must > then current pp: {size}'
-)
-raise ValueError
alloc.units_limit = value
elif key == 'slots':
-if value <= 0:
-raise ValueError('slots must be > 0')
alloc.slots = int(value)
else:
-log.error(f'Unknown setting {key}')
-raise ValueError
+raise ValueError(f'Unknown setting {key}')
log.info(f'settings change: {key}: {value}')
@ -386,8 +363,7 @@ def position_line(
marker: Optional[LevelMarker] = None,
) -> LevelLine:
-'''
-Convenience routine to create a line graphic representing a "pp"
+'''Convenience routine to create a line graphic representing a "pp"
aka the acro for a,
"{piker, private, personal, puny, <place your p-word here>} position".
@ -441,8 +417,7 @@ def position_line(
class PositionTracker:
-'''
-Track and display real-time positions for a single symbol
+'''Track and display real-time positions for a single symbol
over multiple accounts on a single chart.
Graphically composed of a level line and marker as well as labels
@property
def pane(self) -> FieldsForm:
-'''
-Return handle to pp side pane form.
+'''Return handle to pp side pane form.
'''
return self.chart.linked.godwidget.pp_pane
@ -533,8 +507,7 @@ class PositionTracker:
marker: LevelMarker
) -> None:
-'''
-Update all labels.
+'''Update all labels.
Meant to be called from the marker ``.paint()``
for immediate, lag free label draws.
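
For context on the tick-type filtering in ``update_pnl_from_feed()`` earlier in this file: a short position marks against ask-side quotes, a long against bids, and the running pnl is the standard signed difference from the position's average entry; a sketch with illustrative names::

    def mark_types(size: float) -> tuple[str, ...]:
        # short: mark-to-market against asks; long: against bids
        if size < 0:
            return ('ask', 'last', 'last', 'dark_trade')
        elif size > 0:
            return ('bid', 'last', 'last', 'dark_trade')
        return ()

    def pnl(avg_price: float, live_price: float, size: float) -> float:
        # signed: a price rise loses money on a short (negative size)
        return (live_price - avg_price) * size

    assert pnl(100.0, 110.0, -5) == -50.0
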

View File

@ -49,6 +49,7 @@ from PyQt5 import QtCore
from PyQt5 import QtWidgets
from PyQt5.QtCore import (
Qt,
+# QSize,
QModelIndex,
QItemSelectionModel,
)
@ -125,10 +126,6 @@ class CompleterView(QTreeView):
# self.setSizeAdjustPolicy(QAbstractScrollArea.AdjustIgnored)
# ux settings
-self.setSizePolicy(
-QtWidgets.QSizePolicy.Expanding,
-QtWidgets.QSizePolicy.Expanding,
-)
self.setItemsExpandable(True)
self.setExpandsOnDoubleClick(False)
self.setAnimated(False)
@ -156,58 +153,23 @@ class CompleterView(QTreeView):
self.setStyleSheet(f"font: {size}px")
-# def resizeEvent(self, event: 'QEvent') -> None:
-# event.accept()
-# super().resizeEvent(event)
-def on_resize(self) -> None:
-'''
-Resize relay event from god.
-'''
-self.resize_to_results()
-def resize_to_results(self):
+def resize(self):
model = self.model()
cols = model.columnCount()
-# rows = model.rowCount()
-col_w_tot = 0
for i in range(cols):
self.resizeColumnToContents(i)
-col_w_tot += self.columnWidth(i)
-win = self.window()
-win_h = win.height()
-edit_h = self.parent().bar.height()
-sb_h = win.statusBar().height()
-# print(f'font_h: {font_h}\n px_height: {px_height}')
+# inclusive of search bar and header "rows" in pixel terms
+rows = 100
+# max_rows = 8 # 6 + search and headers
+row_px = self.rowHeight(self.currentIndex())
# TODO: probably make this more general / less hacky
+self.setMinimumSize(self.width(), rows * row_px)
+self.setMaximumSize(self.width() + 10, rows * row_px)
+self.setFixedWidth(333)
-# we should figure out the exact number of rows to allow
-# inclusive of search bar and header "rows", in pixel terms.
-# Eventually when we have an "info" widget below the results we
# will want space for it and likely terminating the results-view
# space **exactly on a row** would be ideal.
# if row_px > 0:
# rows = ceil(window_h / row_px) - 4
# else:
# rows = 16
# self.setFixedHeight(rows * row_px)
# self.resize(self.width(), rows * row_px)
# NOTE: if the height set here is **too large** then the resize
# event will perpetually trigger as the window causes some kind
# of recompute of callbacks.. so we have to ensure it's limited.
h = win_h - (edit_h + 1.666*sb_h)
assert h > 0
self.setFixedHeight(round(h))
# size to width of longest result seen thus far
# TODO: should we always dynamically scale to longest result?
if self.width() < col_w_tot:
self.setFixedWidth(col_w_tot)
self.update()
def is_selecting_d1(self) -> bool:
cidx = self.selectionModel().currentIndex()
@ -256,8 +218,7 @@ class CompleterView(QTreeView):
idx: QModelIndex,
) -> QStandardItem:
-'''
-Select and return the item at index ``idx``.
+'''Select and return the item at index ``idx``.
'''
sel = self.selectionModel()
@ -272,8 +233,7 @@ class CompleterView(QTreeView):
return model.itemFromIndex(idx)
def select_first(self) -> QStandardItem:
-'''
-Select the first depth >= 2 entry from the completer tree and
+'''Select the first depth >= 2 entry from the completer tree and
return its item.
'''
@ -336,8 +296,7 @@ class CompleterView(QTreeView):
section: str,
) -> Optional[QModelIndex]:
-'''
-Find the *first* depth = 1 section matching ``section`` in
+'''Find the *first* depth = 1 section matching ``section`` in
the tree and return its index.
'''
@ -375,7 +334,7 @@ class CompleterView(QTreeView):
else:
model.setItem(idx.row(), 1, QStandardItem())
-self.resize_to_results()
+self.resize()
return idx
else:
@ -446,7 +405,7 @@ class CompleterView(QTreeView):
def show_matches(self) -> None:
self.show()
-self.resize_to_results()
+self.resize()
class SearchBar(Edit):
@ -466,7 +425,6 @@ class SearchBar(Edit):
self.godwidget = godwidget
super().__init__(parent, **kwargs)
self.view: CompleterView = view
-godwidget._widgets[view.mode_name] = view
def show(self) -> None:
super().show()
@ -501,7 +459,7 @@ class SearchWidget(QtWidgets.QWidget):
# size it as we specify
self.setSizePolicy(
QtWidgets.QSizePolicy.Fixed,
-QtWidgets.QSizePolicy.Expanding,
+QtWidgets.QSizePolicy.Fixed,
)
self.godwidget = godwidget

View File

@ -14,16 +14,14 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
-'''
+"""
Qt UI styling.
-'''
+"""
from typing import Optional, Dict
import math
import pyqtgraph as pg
from PyQt5 import QtCore, QtGui
-from PyQt5.QtCore import Qt, QCoreApplication
from qdarkstyle import DarkPalette
from ..log import get_logger
@ -122,10 +120,6 @@ class DpiAwareFont:
dpi = mn_dpi
-mult = 1.0
-# No implicit DPI scaling was done by the DE so let's engage
-# some hackery ad-hoc scaling shiat.
# dpi is likely somewhat scaled down so use slightly larger font size
if scale >= 1.1 and self._font_size:
@ -136,19 +130,9 @@ class DpiAwareFont:
if scale >= 1.5:
mult = 1.375
# TODO: this multiplier should probably be determined from
# relative aspect ratios or something?
inches *= mult
-# XXX: if additionally we detect a known DE scaling factor we
-# also scale *up* our font size on top of the existing
-# heuristical (aka no clue why it works) scaling from the block
-# above XD
-if (
-hasattr(Qt, 'AA_EnableHighDpiScaling')
-and QCoreApplication.testAttribute(Qt.AA_EnableHighDpiScaling)
-):
-inches *= round(scale)
# TODO: we might want to fiddle with incrementing font size by
# +1 for the edge cases above. it seems doing it via scaling is
@ -157,8 +141,8 @@ class DpiAwareFont:
self._font_inches = inches
font_size = math.floor(inches * dpi)
-log.debug(
+log.info(
-f"screen:{screen.name()}\n"
+f"screen:{screen.name()}]\n"
f"pDPI: {pdpi}, lDPI: {ldpi}, scale: {scale}\n"
f"\nOur best guess font size is {font_size}\n"
)
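
The sizing heuristic above in one place: a physical glyph height in inches gets a DE-dependent multiplier then converts to pixels via the logical dpi; a sketch (only the ``scale >= 1.5`` breakpoint is mirrored from the snippet, the rest is illustrative)::

    import math

    def font_px_size(
        inches: float,  # target physical glyph height
        dpi: float,     # logical dots-per-inch of the screen
        scale: float,   # pdpi / ldpi ratio reported by Qt
    ) -> int:
        mult = 1.0
        if scale >= 1.5:
            mult = 1.375  # breakpoint mirrored from the snippet
        return math.floor(inches * mult * dpi)

    assert font_px_size(0.0666, 96, 1.0) == 6
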
@ -203,6 +187,8 @@ _xaxis_at = 'bottom'
# charting config
CHART_MARGINS = (0, 0, 2, 2)
_min_points_to_show = 6
+_bars_to_left_in_follow_mode = int(61*6)
+_bars_from_right_in_follow_mode = round(0.16 * _bars_to_left_in_follow_mode)
_tina_mode = False

View File

@ -153,7 +153,8 @@ class MainWindow(QtGui.QMainWindow):
# XXX: for tiling wms this should scale
# with the allotted window size.
# TODO: detect for tiling and if untrue set some size?
-size = (300, 500)
+# size = (300, 500)
+size = (0, 0)
title = 'piker chart (ur symbol is loading bby)'
@ -164,7 +165,6 @@ class MainWindow(QtGui.QMainWindow):
self._status_bar: QStatusBar = None
self._status_label: QLabel = None
-self._size: Optional[tuple[int, int]] = None
@property
def mode_label(self) -> QtGui.QLabel:
@ -269,29 +269,6 @@ class MainWindow(QtGui.QMainWindow):
assert screen, "Wow Qt is dumb as shit and has no screen..."
return screen
def configure_to_desktop(
self,
size: Optional[tuple[int, int]] = None,
) -> None:
'''
Explicitly size the window dimensions (for stacked window
managers).
For tina systems (like windoze) try to do a sane window size on
startup.
'''
# https://stackoverflow.com/a/18975846
if not size and not self._size:
app = QtGui.QApplication.instance()
geo = self.current_screen().geometry()
h, w = geo.height(), geo.width()
# use approx 1/3 of the area of the screen by default
self._size = round(w * .666), round(h * .666)
self.resize(*size or self._size)
# singleton app per actor
_qt_win: QtGui.QMainWindow = None

View File

@ -122,8 +122,7 @@ def optschain(config, symbol, date, rate, test):
@cli.command()
@click.option(
'--profile',
-'-p',
-default=None,
+is_flag=True,
help='Enable pyqtgraph profiling'
)
@click.option(
@ -134,16 +133,9 @@ def optschain(config, symbol, date, rate, test):
@click.argument('symbol', required=True)
@click.pass_obj
def chart(config, symbol, profile, pdb):
-'''
-Start a real-time charting UI
-'''
+"""Start a real-time charting UI
+"""
+from .. import _profile
-# eg. ``--profile 3`` reports profiling for anything slower than 3 ms.
-if profile is not None:
-from .. import _profile
-_profile._pg_profile = True
-_profile.ms_slower_then = float(profile)
from ._app import _main
if '.' not in symbol:
@ -153,6 +145,8 @@ def chart(config, symbol, profile, pdb):
))
return
+# toggle to enable profiling
+_profile._pg_profile = profile
# global opts
brokernames = config['brokers']
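
How the ``--profile`` flag feeds through in the 310_plus variant: the cli sets two module globals on ``piker._profile`` which every profiler call site reads; roughly (kwargs per the pinned pyqtgraph fork used in this repo)::

    import pyqtgraph as pg

    # module globals toggled by the cli (names per piker._profile)
    _pg_profile: bool = True
    ms_slower_then: float = 3.0   # eg. ``--profile 3``

    profiler = pg.debug.Profiler(
        msg='some hot draw call',
        disabled=not _pg_profile,
        ms_threshold=ms_slower_then,  # only report calls slower than this
    )
    profiler('checkpoint')
    profiler.finish()
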

View File

@ -22,7 +22,6 @@ from contextlib import asynccontextmanager
from dataclasses import dataclass, field
from functools import partial
from pprint import pformat
-import platform
import time
from typing import Optional, Dict, Callable, Any
import uuid
@ -30,7 +29,6 @@ import uuid
from pydantic import BaseModel
import tractor
import trio
-from PyQt5.QtCore import Qt
from .. import config
from ..clearing._client import open_ems, OrderBook
@ -38,7 +36,6 @@ from ..clearing._allocate import (
mk_allocator,
Position,
)
-from ._style import _font
from ..data._source import Symbol
from ..data.feed import Feed
from ..log import get_logger
@ -48,8 +45,7 @@ from ._position import (
PositionTracker,
SettingsPane,
)
-from ._forms import FieldsForm
-# from ._label import FormatLabel
+from ._label import FormatLabel
from ._window import MultiStatus
from ..clearing._messages import Order, BrokerdPosition
from ._forms import open_form_input_handling
@ -110,8 +106,7 @@ def on_level_change_update_next_order_info(
@dataclass
class OrderMode:
-'''
-Major UX mode for placing orders on a chart view providing so
+'''Major UX mode for placing orders on a chart view providing so
called, "chart trading".
This is the other "main" mode that pairs with "view mode" (when
@ -271,14 +266,13 @@ class OrderMode:
'''
staged = self._staged_order
-symbol: Symbol = staged.symbol
+symbol = staged.symbol
oid = str(uuid.uuid4())
# format order data for ems
-fqsn = symbol.front_fqsn()
order = staged.copy(
update={
-'symbol': fqsn,
+'symbol': symbol.key,
'oid': oid,
}
)
@ -435,19 +429,13 @@ class OrderMode:
# TODO: make this not trash.
# XXX: linux only for now
-if platform.system() == "Windows":
-return
result = await trio.run_process(
[
'notify-send',
'-u', 'normal',
-'-t', '1616',
+'-t', '10000',
'piker',
+f'alert: {msg}',
-# TODO: add in standard fill/exec info that maybe we
-# pack in a broker independent way?
-f'{msg["resp"]}: {msg["trigger_price"]}',
],
)
log.runtime(result)
@ -523,7 +511,8 @@ async def open_order_mode(
feed: Feed,
chart: 'ChartPlotWidget', # noqa
-fqsn: str,
+symbol: Symbol,
+brokername: str,
started: trio.Event,
) -> None:
@ -549,7 +538,8 @@ async def open_order_mode(
# spawn EMS actor-service
async with (
-open_ems(fqsn) as (
+open_ems(brokername, symbol) as (
book,
trades_stream,
position_msgs,
@ -558,7 +548,8 @@ async def open_order_mode(
trio.open_nursery() as tn,
):
-log.info(f'Opening order mode for {fqsn}')
+log.info(f'Opening order mode for {brokername}.{symbol.key}')
view = chart.view
# annotations editors
@ -567,7 +558,7 @@ async def open_order_mode(
# symbol id
symbol = chart.linked.symbol
-symkey = symbol.front_fqsn()
+symkey = symbol.key
# map of per-provider account keys to position tracker instances
trackers: dict[str, PositionTracker] = {}
@ -611,7 +602,7 @@ async def open_order_mode(
log.info(f'Loading pp for {symkey}:\n{pformat(msg)}')
startup_pp.update_from_msg(msg)
-# allocator config
+# allocator
alloc = mk_allocator(
symbol=symbol,
account=account_name,
@ -642,21 +633,61 @@ async def open_order_mode(
pp_tracker.hide_info()
# setup order mode sidepane widgets
-form: FieldsForm = chart.sidepane
+form = chart.sidepane
-form.vbox.setSpacing(
-int((1 + 5/8)*_font.px_size)
+vbox = form.vbox
from textwrap import dedent
from PyQt5.QtCore import Qt
from ._style import _font, _font_small
from ..calc import humanize
feed_label = FormatLabel(
fmt_str=dedent("""
actor: **{actor_name}**\n
|_ @**{host}:{port}**\n
|_ throttle_hz: **{throttle_rate}**\n
|_ streams: **{symbols}**\n
|_ shm: **{shm}**\n
"""),
font=_font.font,
font_size=_font_small.px_size,
font_color='default_lightest',
) )
from ._feedstatus import mk_feed_label
feed_label = mk_feed_label(
form,
feed,
chart,
)
# XXX: we set this because?
form.feed_label = feed_label
# add feed info label to top
vbox.insertWidget(
0,
feed_label,
alignment=Qt.AlignBottom,
)
# vbox.setAlignment(feed_label, Qt.AlignBottom)
# vbox.setAlignment(Qt.AlignBottom)
blank_h = chart.height() - (
form.height() +
form.fill_bar.height()
# feed_label.height()
)
vbox.setSpacing((1 + 5/8)*_font.px_size)
# fill in brokerd feed info
host, port = feed.portal.channel.raddr
if host == '127.0.0.1':
host = 'localhost'
mpshm = feed.shm._shm
shmstr = f'{humanize(mpshm.size)}'
form.feed_label.format(
actor_name=feed.portal.channel.uid[0],
host=host,
port=port,
symbols=len(feed.symbols),
shm=shmstr,
throttle_rate=feed.throttle_rate,
)
order_pane = SettingsPane(
form=form,
# XXX: ugh, so hideous...
@ -667,11 +698,6 @@ async def open_order_mode(
)
order_pane.set_accounts(list(trackers.keys()))
-form.vbox.addWidget(
-feed_label,
-alignment=Qt.AlignBottom,
-)
# update pp icons
for name, tracker in trackers.items():
order_pane.update_account_icons({name: tracker.live_pp})
@ -782,18 +808,8 @@ async def process_trades_and_update_ui(
'position',
):
sym = mode.chart.linked.symbol
-pp_msg_symbol = msg['symbol'].lower()
-fqsn = sym.front_fqsn()
-broker, key = sym.front_feed()
-# print(
-# f'pp msg symbol: {pp_msg_symbol}\n',
-# f'fqsn: {fqsn}\n',
-# f'front key: {key}\n',
-# )
-if (
-pp_msg_symbol == fqsn.replace(f'.{broker}', '')
-):
+if msg['symbol'].lower() in sym.key:
tracker = mode.trackers[msg['account']]
tracker.live_pp.update_from_msg(msg)
# update order pane widgets
@ -873,9 +889,7 @@ async def process_trades_and_update_ui(
mode.lines.remove_line(uuid=oid)
# each clearing tick is responded individually
-elif resp in (
-'broker_filled',
-):
+elif resp in ('broker_filled',):
known_order = book._sent_orders.get(oid)
if not known_order:

View File

@ -1,21 +1,9 @@
# we require a pinned dev branch to get some edge features that
# are often untested in tractor's CI and/or being tested by us
# first before committing as core features in tractor's base.
--e git+https://github.com/goodboy/tractor.git@master#egg=tractor
+-e git+git://github.com/goodboy/tractor.git@master#egg=tractor
# `pyqtgraph` peeps keep breaking, fixing, improving so might as well
# pin this to a dev branch that we have more control over especially
# as more graphics stuff gets hashed out.
--e git+https://github.com/pikers/pyqtgraph.git@piker_pin#egg=pyqtgraph
+-e git+git://github.com/pikers/pyqtgraph.git@piker_pin#egg=pyqtgraph
# our async client for ``marketstore`` (the tsdb)
-e git+https://github.com/pikers/anyio-marketstore.git@master#egg=anyio-marketstore
# ``trimeter`` for async history fetching
-e git+https://github.com/python-trio/trimeter.git@master#egg=trimeter
# ``asyncvnc`` for sending interactions to ib-gw inside docker
-e git+https://github.com/pikers/asyncvnc.git@vid_passthrough#egg=asyncvnc

View File

@ -51,56 +51,42 @@ setup(
# async
'trio',
'trio-websocket',
-'msgspec', # performant IPC messaging
+# 'tractor', # from github currently
'async_generator',
-# from github currently (see requirements.txt)
-# 'trimeter', # not released yet..
-# 'tractor',
-# asyncvnc,
# brokers
'asks==2.4.8',
'ib_insync',
# numerics
-'pendulum', # easier datetimes
+'arrow', # better datetimes
'bidict', # 2 way map
'cython',
'numpy',
'numba',
+'pandas',
+'msgpack-numpy',
# UI
'PyQt5',
-# 'pyqtgraph', from our fork see reqs.txt
+'pyqtgraph',
-'qdarkstyle >= 3.0.2', # themeing
+'qdarkstyle >= 3.0.2',
-'fuzzywuzzy[speedup]', # fuzzy search
+# fuzzy search
+'fuzzywuzzy[speedup]',
# tsdbs
-# anyio-marketstore # from gh see reqs.txt
+'pymarketstore',
],
-extras_require={
-'tsdb': [
-'docker',
-],
-},
tests_require=['pytest'],
-python_requires=">=3.10",
+python_requires=">=3.9", # literally for ``datetime.datetime.fromisoformat``...
-keywords=[
-"async",
-"trading",
-"finance",
-"quant",
-"charting",
-],
+keywords=["async", "trading", "finance", "quant", "charting"],
classifiers=[
'Development Status :: 3 - Alpha',
'License :: OSI Approved :: ',
'Operating System :: POSIX :: Linux',
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3 :: Only",
-"Programming Language :: Python :: 3.10",
+"Programming Language :: Python :: 3.9",
'Intended Audience :: Financial and Insurance Industry',
'Intended Audience :: Science/Research',
'Intended Audience :: Developers',

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
-# Copyright (C) Tyler Goodlet (in stewardship for pikers)
+# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -30,13 +30,11 @@ orig_win_id = t.find_focused().window
# for tws
win_names: list[str] = [
'Interactive Brokers', # tws running in i3
-'IB Gateway', # gw running in i3
+'IB Gateway.', # gw running in i3
-# 'IB', # gw running in i3 (newer version?)
]
for name in win_names:
-results = t.find_titled(name)
+results = t.find_named(name)
-print(f'results for {name}: {results}')
if results:
con = results[0]
print(f'Resetting data feed for {name}')
@ -49,32 +47,22 @@ for name in win_names:
# https://github.com/rr-/pyxdotool
# https://github.com/ShaneHutter/pyxdotool
# https://github.com/cphyc/pyxdotool
+subprocess.call([
+'xdotool',
+'windowactivate', '--sync', win_id,
-# TODO: only run the reconnect (2nd) kc on a detected
-# disconnect?
-for key_combo, timeout in [
-# only required if we need a connection reset.
-# ('ctrl+alt+r', 12),
-# data feed reset.
-('ctrl+alt+f', 6)
-]:
-subprocess.call([
-'xdotool',
-'windowactivate', '--sync', win_id,
# move mouse to bottom left of window (where there should
# be nothing to click).
'mousemove_relative', '--sync', str(w-4), str(h-4),
# NOTE: we may need to stick a `--retry 3` in here..
-'click', '--window', win_id,
-'--repeat', '3', '1',
+'click', '--window', win_id, '--repeat', '3', '1',
# hackzorzes
-'key', key_combo,
+'key', 'ctrl+alt+f',
],
-timeout=timeout,
+timeout=1,
)
# re-activate and focus original window
subprocess.call([