We are already packing framed ticks in extended lists from
the `.data._sampling.uniform_rate_send()` task so the natural solution
to avoid needless graphics cycles for HFT-ish feeds (like binance) is
to unpack those frames and for most cases only update graphics with the
"latest" data per loop iteration. Unpacking in this way also lessens
nested-iterations per tick type.
Btw, this also effectively solves all remaining issues of fast tick
feeds over-triggering the graphics loop renders as long as the original
quote stream is throttled appropriately, usually to the local display
rate.
Relates to #183, #192
Dirty deats:
- drop all per-tick rate checks, they were always somewhat pointless
when iterating a frame of ticks per render cycle XD.
- unpack tick frame into ticks per frame type, and last of each type;
the lasts are used to update each part of the UI/graphics by class
(see the sketch after this list).
- only skip the label update if we can't retrieve the last from a
graphics source array; it seems `chart.update_curve_from_array()`
already does a `len` check internally.
- add some draft commented code for tick type classes and a possible
wire framed tick data structure.
- move `chart_maxmin()` range computer to module level, bind a chart to
it with a `partial`.
- only check rate limits in main quote loop thus reporting actual
overages
- add in commented logic for only updating the "last" cleared price from
the most recent framed value if we want to eventually (right now seems
like this is only relevant to ib and its dark trades: `utrade`).
- rename `_clear_throttle_rate` -> `_quote_throttle_rate`, drop
`_book_throttle_rate`.
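For reference, a minimal sketch of the unpack step from the list above
(the quote/tick schema shown here - a `'ticks'` list of dicts each
carrying a `'type'` key - is an assumption for illustration, not the
exact wire format):

```python
from collections import defaultdict

def unpack_ticks(quote: dict) -> tuple[dict[str, list], dict[str, dict]]:
    '''Split a framed quote into per-type tick lists plus the "last"
    tick seen for each type; the lasts drive most graphics updates.
    '''
    by_type: dict[str, list] = defaultdict(list)
    lasts: dict[str, dict] = {}
    for tick in quote.get('ticks', ()):
        ttype = tick.get('type', 'n/a')
        by_type[ttype].append(tick)
        lasts[ttype] = tick  # later entries overwrite earlier ones
    return by_type, lasts
```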
This is in prep toward doing fsp graphics updates from the main quotes
update loop (where OHLC and volume are done). Updating fsp output from
that task should, for the majority of cases, be fine presuming the
processing is derived from the quote stream as a source. Further,
calling an update function on each fsp subplot/overlay is of course
faster than a full task switch - which is how it currently works with
a separate stream for every fsp output. This also will let us delay
adding full `Feed` support around fsp streams for the moment while still
getting quote throttling dictated by the quote stream.
Going forward, we can still support a separate task/fsp stream for
updates as needed (ex. some kind of fast external data source that isn't
synced with price data) but it should be enabled only as required by
the user.
The major change is moving the fsp "daemon" (more like wanna-be fspd)
endpoint to use the newer `tractor.Portal.open_context()` and
bi-directional streaming api.
There's a few other things in here too:
- make a helper for allocating single column fsp shm arrays
- rename some fsp related functions to be more explicit on their
purposes
Since our startup is very concurrent there are often races where widgets
have not fully spawned before Python (re-)sizing code gets a chance to
run its sizing logic and thus reads incorrect dimensions. Instead ensure
the Qt render loop gets to run in between such checks.
Also add a `open_sidepane()` mngr for creating a minimal form widget for
FSP subchart sidepanes which can be configured from an input `dict`.
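The fix amounts to polling the widget dimensions while letting Qt tick
in between; a rough sketch (assuming the Qt loop runs as a `trio` guest
so awaiting yields to it, and a made-up helper name):

```python
import trio

async def wait_until_sized(widget, timeout: float = 3.0) -> None:
    '''Poll until Qt has actually laid the widget out before running
    any (re-)sizing logic that reads its dimensions.
    '''
    with trio.fail_after(timeout):
        while widget.width() <= 1 or widget.height() <= 1:
            # yield so the guest-mode Qt render loop can process
            # pending resize/layout events between checks
            await trio.sleep(1 / 60)
```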
This should in theory result in increased burstiness since we remove
the plain `trio.sleep()` and instead always wait on the receive channel
as much as possible until the `trio.move_on_after()` (+ time diffing
calcs) times out and signals the next throttled send cycle. This also is
slightly easier to grok code-wise than the `try, except` plus
another tight while loop until a `trio.WouldBlock`. The only simpler
way I can think to do it is with 2 tasks: 1 to collect ticks and the
other to read and send at the throttle rate.
Comment out the log msg for now to avoid latency and add much more
detailed comments. Add an overrun log msg to the main sample loop.
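A simplified sketch of the pattern described above (not the actual
`uniform_rate_send()` body; tick framing details and overrun logging are
elided, and `quote_stream`/`send_chan` are assumed to be trio-style
receive/send channels):

```python
import time
import trio

async def throttled_forward(rate: float, quote_stream, send_chan) -> None:
    period = 1 / rate
    last_send = time.time()
    ticks: list = []

    while True:
        # wait on the receive side as long as possible, but never past
        # the next scheduled send time
        budget = max(period - (time.time() - last_send), 0)
        with trio.move_on_after(budget):
            while True:
                quote = await quote_stream.receive()
                ticks.extend(quote.get('ticks', ()))

        if ticks:
            await send_chan.send({'ticks': ticks})
            ticks = []

        last_send = time.time()
```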
A `QRectF` is easier to make and draw (I think?) so use that and fill it
on volume events for a decent, sleek real-time look. Adjust the step array
generator to allow for an endpoints flag. Comment and/or clean out all
the old path filling calls that gave us perf issues..
Turns out the performance of updating and refilling step curves > 1k ish
points is super slow :sadkek:. Disabling the fill basically returns
normal performance, so it seems maybe we'll stick with unfilled volume
"bars" for now. The other tricky bit is getting the path to extend and
fill which is particularly slow if you use the `QPainterPath.united()`
(what `+` set op does) operation which seems to require an entire redraw
of the curve each paint iteration. Removing the pixel buffer cache makes
things that much worse too..
One technique I tried was only setting a `._fill` flag when few enough
datums are in view (< 1k as determined by the chart widget), and this
helps, but under high load (trade rates) you still see more lag than
without the fill which makes me say screw it and let's stick with
unfilled bars for now. Trying to get performant filled curves will be
an exercise for an aspiring graphics eng :P
In latest `pyqtgraph` it seems there's a discrepancy
since `function.arrayToQPath()` was reworked and now
we need to *not* connect the last point for each bar.
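The gist of the compat fix, sketched (the per-point `connect` mask is
the relevant bit; the bar point layout shown is an assumption):

```python
import numpy as np
import pyqtgraph as pg

def bars_path(x: np.ndarray, y: np.ndarray, points_per_bar: int):
    # `arrayToQPath()` accepts a bool array where entry i says whether
    # point i connects to point i + 1; zero out the last point of each
    # bar so consecutive bars don't get joined together.
    connect = np.ones(len(x), dtype=bool)
    connect[points_per_bar - 1::points_per_bar] = False
    return pg.functions.arrayToQPath(x, y, connect=connect)
```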
The prior PR for fixing fsp array misalignment also added
`tractor.Context` usage which wasn't reflected in the graphics update
loop (newer code added it but the prior PR was factored from path
dependent history) and thus was broken. Further in newer work we don't
have fsp actors actually stream value updates since the display loop can
already pull from the source feed and update graphics at a preferred
throttle rate. Re-enabled the fsp stream sending here by default until
that newer only-throttle-pull-from-source code is landed in the display
loop.
This should finally be correct fsp src-to-dst array syncing now..
There's a few edge cases but mostly we need to be sure we sync both
back-filled history diffs and avoid current step lag/leads. Use
a polling routine and the more stringent task re-spawn system to get
this right.
There was a lingering issue where the fsp daemon would sync its shm
array with the source data and we'd set the start/end indices to the
same value. Under some races a reader would then read an empty `.array`
which it wasn't expecting. This fixes that as well as tidies up the
`ShmArray.push()` logic and adds a temporary check in `.array` for zero
length if the array hasn't been written yet.
We can now start removing read array length checks in consumer code
and hopefully no more races will show up.
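Very roughly, the push/read shape being fixed (this is a toy stand-in,
not the real `ShmArray`; the index bookkeeping names are paraphrased
from the description above):

```python
import numpy as np

class ShmArraySketch:
    def __init__(self, size: int = 64):
        self._buf = np.zeros(size, dtype=[('index', int), ('close', float)])
        self._first = 0   # index of the oldest written row
        self._last = 0    # one past the newest written row

    @property
    def array(self) -> np.ndarray:
        if self._last == self._first:
            # temporary zero-length check: the segment was synced but
            # never written, so hand back an explicitly empty view
            return self._buf[:0]
        return self._buf[self._first:self._last]

    def push(self, data: np.ndarray) -> int:
        start = self._last
        end = start + len(data)
        self._buf[start:end] = data
        self._last = end      # bump only after the write has landed
        return end
```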
Litter the engine code with `pyqtgraph` profiling to see if we can
improve startup times - likely it'll mean pre-allocating a small fsp
daemon cluster at startup.
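For example, the kind of instrumentation meant here, using `pyqtgraph`'s
bundled profiler (the checkpoint names and sleeps are stand-ins):

```python
import time
from pyqtgraph.debug import Profiler

def profiled_fsp_startup() -> None:
    # prints a per-checkpoint timing breakdown on `.finish()`
    profiler = Profiler(disabled=False, delayed=False)

    time.sleep(0.01)                 # stand-in for "allocate fsp shm"
    profiler('allocated fsp shm')

    time.sleep(0.02)                 # stand-in for "spawn fsp daemon"
    profiler('spawned fsp daemon')

    profiler.finish()
```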
Split up the rather large `.ui._chart` module into its constituents:
- a `.ui._app` for the high-level widget composition, qtractor entry
point and startup logic
- `.ui._display` for all the real-time graphics update tasks which
consume the `.ui._chart` widget apis
Must have run into some confusion with data structures in `brokerd` vs.
`emsd`. This fixes the ems `relay.positions` state tracking to be
composed of maps, whereas the messages from `brokerd` should just be a
sequence.
This reverts commit 6fa8958acf.
We actually do need it since the selection widget of course won't tell
you its "key" that we assign and further we'd have to use a (value, key)
style invocation which isn't super pythonic.
The paper engine returns `"paper"` instead of `None` in the pp msgs so
expect that. Don't bother with fills tracking for now (since we'll need
either the account in the msg or a lookup table locally for oids to
accounts). Change the order line update handler to a local module function,
there was no reason for it to be a pane method.
Make a pp tracker per account and load on order mode boot.
Only show details on the pp tracker for the selected account.
Make the settings pane assign a `.current_pp` state on the order mode
instance (for the charted symbol) on account selection switches and no
longer keep a ref to a single pp tracker and allocator in the pane.
`SettingsPane.update_status_ui()` now expects an explicit tracker
reference as input. Still need to figure out the pnl update task logic
despite the intermittent account changes.
This adds full support for a single `brokerd` managing multiple API
endpoint clients in tandem. Get the client scan loop correct and load
accounts from all discovered clients as specified in a user's
`broker.toml`. We now just always re-scan for all clients and if there's
a cache hit we just skip the creation/connection logic.
Route orders with an account name to the correct client in the
`handle_order_requests()` endpoint and spawn an event relay task per
client for transmitting trade events back to `emsd`.
Make the `handle_order_requests()` tasks now lookup the appropriate API
client for a given account (or error if it can't be found) and use it
for submission. Account names are loaded from the
`brokers.toml::accounts.ib` section both UI side and in the `brokerd`.
Change `_aio_get_client()` to a `load_aio_client()` which now tries to
scan and load api clients for all connections defined in the config as
well as deliver the client cache and account lookup tables.
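The routing step, roughly (the lookup-table and msg field names here are
illustrative and `submit_limit()` is a stand-in for the real client
call):

```python
async def route_order_requests(ems_stream, accounts2clients: dict) -> None:
    async for request in ems_stream:
        account = request.get('account')
        client = accounts2clients.get(account)

        if client is None:
            # no API client loaded for this account -> error back to ems
            await ems_stream.send({
                'name': 'error',
                'oid': request['oid'],
                'reason': f'no client loaded for account {account!r}',
            })
            continue

        # submit through the client matched to the requested account
        await client.submit_limit(
            symbol=request['symbol'],
            price=request['price'],
            size=request['size'],
            action=request['action'],
        )
```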
Each backend broker may support multiple (types) of accounts; this patch
lets clients send order requests that pass through an `account` field in
certain `emsd` <-> `brokerd` transactions. This allows each provider to read
in and conduct logic based on what account value is passed via requests
to the `trades_dialogue()` endpoint as well as tie together positioning
updates with relevant account keys for display in UIs.
This also adds relay support for a `Status` msg with a `'broker_errored'`
status which for now will trigger the same logic as cancelled orders on
the client side and thus will remove order lines submitted on a chart.
Get rid of `PositionTracker.init_status_ui()` and instead make
a helper func `mk_allocator()` which takes in the alloc and adjusts
default settings on the allocator alone (which is expected to be
passed in). Expect a `Position` instance to be passed into the tracker
which will be looked up for UI updates. Move *update-from-position-msg*
ops into a `Position.update_from_msg()` method.
We weren't updating the LHS size labels on creation and we now use the
lot size digits to do so. Change `PositionTracker.update()` to
`.update_from_pp_msg()`.
Acts as a fix for lo-dpi and better sizing logic for the pp status bar.
Drop all the redundant passing of the form to its child layouts during
instantiating (since they're all added as layouts to the tree). Comment
out the feed status label for now since it's not hooked up to the
backend and we'll get it going in a new PR.
Down the road we probably want to do all the pp pane component-widget
sizing *after* the `pyqtgraph` chart is up; it's going to take some
reworking of the charting api tho.
We were re-implementing a few things order lines already support.
All we really needed was to not add a pp size label if one is provided.
Use `.hide_label()` in the mouse hover handler.
When exiting a pp toward net-zero, we may sometimes run into the issue
of having a "fractional slot" worth of units in allocator limit terms.
This is further nuanced by live orders which are submitted above the
current clearing price which get allocated a size (based on that staged
but non-cleared price) according to their limit size unit which can be
calculated to be less than the size that would have been allocated at
the actual clearing price. In the short term cope with this discrepancy
by simply using a "slot and a half" as the decision point of whether to
exit a slot's worth or the remaining pp's worth of units. In other words
if you can exit 1.5x a slot's worth or less, exit the remaining pp,
otherwise exit a slot's worth. This is a stop gap until we have a better
solution to limiting staged orders to (some range around) the currently
computed clear-able price.
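In code terms the decision point is just (illustrative helper, not the
allocator's actual method):

```python
def next_exit_size(pp_size: float, slot_size: float) -> float:
    '''Size to use for the next order when exiting toward net-zero.'''
    if abs(pp_size) <= 1.5 * slot_size:
        # a "slot and a half" or less remains: close out the whole pp
        # instead of leaving a fractional slot dangling
        return abs(pp_size)
    return slot_size
```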
We need a subtask to compute the current pp PnL in real-time but really
only if a pp exists - a spawnable subtask would be ideal for this. Stage
a tick streaming task using a stream bcaster; no actual pnl calc yet.
Since we're going to need subtasks anyway might as well stick the order
mode UI processing loop in a task as well and then just give the whole
thing a ctx mngr api. This'll probably be handy for when we have
auto-strats that need to dynamically use the mode's api as well.
Oh, and move the time -> index mapper to a chart method for now.
Use this method to go through writing all allocator parameters and then
reading all changes back into the order mode pane including updating the
limit and step labels by the fill bar.
Machinery changes:
- add `.limit()` and `.step_sizes()` methods to the allocator to
provide the appropriate data depending on the pp limit size unit (eg.
currency vs. units)
- humanize the label display text such that you have nice suffixes and
a fixed precision (see the sketch after this list)
- tweak the fill bar labels to be simpler since the values are now
humanized
- expect `.on_ui_settings_change()` to be called for every slots hotkey
tweak
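The humanization amounts to something like this (stand-in helper; the
real one lives with the UI label code and handles more edge cases):

```python
def humanize(value: float, precision: int = 3) -> str:
    '''Format a size/limit value with a fixed precision and a K/M/B
    suffix for label display.
    '''
    for divisor, suffix in ((1e9, 'B'), (1e6, 'M'), (1e3, 'K')):
        if abs(value) >= divisor:
            return f'{value / divisor:.{precision}g}{suffix}'
    return f'{value:.{precision}g}'

# eg. humanize(12_500) -> '12.5K', humanize(2_000_000) -> '2M'
```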
Turned out to be pretty simple, on every pp update just recompute
the proportion of slots used based on the limit size units.
Don't assign the allocator callback method for alert lines since
there's no size to generate. Move from-existing-pp calculations
into the order pane itself.
Handling the edge cases in this was "fun", namely:
- entering with less than a slot's worth of units to purchase
before hitting the pp limit or, less than a slot's worth when exiting
toward a net-zero position.
- round pp msg updates using the symbol tick and lot size digits to
avoid super small (1e-30 lel) positions lingering in the ems (happens
moreso with the paper engine).
- don't expect the next size method to be called for alert level changes
- pass label text and field widget key separately
- fix fill status bar slot sizing logic (once and for all) and
create a new type that allows generating / resizing the bar's
size / values with a `.set_slots()` method
- pull account names from allocator attr
- set `.fill_bar` as the fill status bar on the form for now
- make `GodWidget.load_symbol()` async
- track loaded feeds with a private `._feeds` dict
- add methods to pause/resume all feeds when chart is (un)focussed
- add some commented test code for 2nd feed consumer task and rsi2 fsp
- load async signal handler for view clicking
- generate lines from staged `Order` msgs
- apply level update callback to each order that dynamically
updates the order size from the allocator calcs
- pass order msg instances to the ems client for submission
- update order size on line moves
- add `Order` msg and `Symbol` refs to each dialog
In an effort to simplify line creation and management from an order
mode here's a slew of changes:
- use our new ``LevelMarker`` for order lines and fully drop usage
of the original marker implementation stuff from `pg.InfiniteLine`
- add a left side label which shows the instrument's "units" value
- the most fundamental unit for the "size" of the order
- allow passing in an optional `marker_size: str` so that `action: str`
doesn't necessarily have to be passed (eg. when copying from an
existing line)
- change a couple of internal line config options to be public attrs
which can now be configured dynamically in real-time (since they're
all `bool` anyway):
* `hl_on_hover` -> `highlight_on_hover`
* `_always_show_labels` -> `always_show_labels`
- `LevelLine.set_level()` now only sets the position if it was **not**
called from the position changed signal (which would be redundant)
Move all the ``pydantic`` finagling to an `_orm.py` and
just keep an `Allocator` as the backing model for our pp controls
in the position module. This all needs to be tied together in some sane
way with facility for multiple symbols/streams per chart for when we
get to charting-trading aggregate feeds.
It was becoming too much with all the labels and markers and lines..
Might as well package it all together instead of cramming it in the
order mode loop, chief.
The technical summary:
- move `_lines.position_line()` -> `PositionInfo.position_line()`.
- slap a `.pp` on the order mode instance which *is* a `PositionInfo`
- drop the position info label for now (let's see what users want
eventually but for now let's keep it super minimal).
- add a `LevelMarker` type to replace the old `LevelLine` internal
marker system (includes ability to change the style and level on the
fly).
- change `_annotate.mk_marker()` -> `mk_maker_path()` and expect caller
to wrap in a `QGraphicsPathItem` if needed.
Generate and maintain position messages in the paper engine for each
`pikerd` session. We no longer tear down the engine on each client
disconnect. Ensure -ve size on sells to make the math work.
This gives us fast search over a known set of symbols you can't search
for with the api such as futures and commodities contracts.
Toss in a new client method to lookup contract details
`Client.con_deats()` and avoid calling it for now from `.search_stock()`
for speed; it seems originally we were doing the 2nd lookup due to weird
suffixes in the `.primaryExchange` which we can just discard.
In order to ensure the lifetime of the feed can in fact be kept open
until the last consumer task has completed we need to maintain
a lifetime which is hierarchically greater than all consumer tasks.
This solution is somewhat hacky but seems to work well: we just use the
`tractor` actor's "service nursery" (the one normally used to invoke rpc
tasks) to launch the task which will start and keep open the target
cached async context manager. To make this more "proper" we may want to
offer a "root nursery" in all piker actors that is exposed through some
singleton api or even introduce a public api for it into `tractor`
directly.
Think this was fixed by passing through `**kwargs` in
`maybe_open_feed()`, the shielding for fsp respawns wasn't being
properly passed through..
This reverts commit 2f1455d423.
Maybe I've finally learned my lesson that exit stacks and per-task ctx
manager caching is just not trionic.. Use the approach we've taken for
the daemon service manager as well: create a process global nursery for
each unique ctx manager we wish to cache and simply tear it down when
the number of consumers goes to zero.
This seems to resolve all prior issues and gets us error-free cached
feeds!
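A toy, single-actor version of the pattern (names and bookkeeping here
are made up to show the shape; the real thing keys on the wrapped
manager + kwargs and lives alongside the daemon service manager):

```python
from contextlib import asynccontextmanager
import trio

_cache: dict = {}  # key -> {'users', 'ready', 'value', 'cancel'}

@asynccontextmanager
async def maybe_open_cached(key, mngr_factory, service_nursery: trio.Nursery):
    entry = _cache.get(key)
    if entry is None:
        entry = _cache[key] = {'users': 0, 'ready': trio.Event()}

        async def _holder():
            # keep the target ctx mngr open in a long-lived task
            with trio.CancelScope() as cs:
                entry['cancel'] = cs
                async with mngr_factory() as value:
                    entry['value'] = value
                    entry['ready'].set()
                    await trio.sleep_forever()

        service_nursery.start_soon(_holder)

    await entry['ready'].wait()
    entry['users'] += 1
    try:
        yield entry['value']
    finally:
        entry['users'] -= 1
        if entry['users'] == 0:
            # last consumer out tears the cached manager down
            entry['cancel'].cancel()
            _cache.pop(key, None)
```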
Try out the new broadcast channels from `tractor` for data feeds
we already have cached. Any time there's a cache hit we load the
cached feed and just slap a broadcast receiver on it for the local
consumer task.
Add a new type/api to manage "contents labels" (labels that sit in
a view and display info about viewed data) since it's mostly used by
the linked charts cursor. Make `LinkedSplits.cursor` the new and only
instance var for the cursor such that charts can look it up from that
common class. Drop the `ChartPlotWidget._ohlc` array, just add
a `'ohlc'` entry to `._arrays`.
Orders in order mode should be chart oriented since there's a mode per
chart. If you want all orders just ask the ems or query all the charts
in a loop.
This fixes cancel-all-orders such that when 'cc' is tapped only the
orders on the *current* chart are cancelled, lel.
Generalize the methods for cancelling groups of orders (all or those
under cursor) and add new group status support such that statuses for
each cancel or order submission are displayed in the status bar. In the
"cancel-all-orders" case, use the new group status stuff.
Allows for submitting a top level "group status" associated with
a "group key" which eventually resolves once all sub-statuses associated
with that group key (and thus top level status) complete and are also
removed. Also add support for a "final message" for each status such
that once the status clear callback is called a final msg is placed on
the status bar that is then removed when the next status is set.
It's all a questionable bunch of closures/callbacks but it worx.
Instead of callbacks for key presses/releases convert our `ChartView`'s
kb input handling to async code using our event relaying-over-mem-chan
system. This is a first step toward a more async driven modal control
UX. Changed a bunch of "chart" component naming as part of this as well,
namely: `ChartSpace` -> `GodWidget` and `LinkedSplitCharts` ->
`LinkedSplits`. Engage the view box's async handler code as part of new
symbol data loading in `display_symbol_data()`. More re-orging to come!
Add an `open_handler()` ctx manager for wholesale handling event sets
with a passed in async func. Better document and implement the event
filtering core including adding support for key "auto repeat" filtering;
it turns out the events delivered when `trio` does its guest-mode tick
are not the same (Qt has somehow consumed them or something) so we have
to do certain things (like getting the `.type()`, `.isAutoRepeat()`,
etc.) before shipping over the mem chan. The alt might be to copy the
event objects first but haven't tried it yet. For now just offer
auto-repeat filtering through a flag.
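The capture-then-ship idea, sketched (assumes PyQt5 and that the Qt loop
runs as a `trio` guest so `send_nowait()` from the filter is safe; the
relay class name is made up):

```python
from PyQt5 import QtCore
import trio

class KeyRelay(QtCore.QObject):
    '''Read out what we need from the event *before* handing off to the
    async side, since the event object may be gone by then.
    '''
    def __init__(self, send_chan: trio.MemorySendChannel, auto_repeat: bool = False):
        super().__init__()
        self._send = send_chan
        self._auto_repeat = auto_repeat

    def eventFilter(self, obj, ev) -> bool:
        if ev.type() in (QtCore.QEvent.KeyPress, QtCore.QEvent.KeyRelease):
            if not self._auto_repeat and ev.isAutoRepeat():
                return False  # drop key repeats unless requested
            try:
                self._send.send_nowait((ev.type(), ev.key(), ev.text()))
            except trio.WouldBlock:
                pass  # handler is behind; don't block the Qt loop
        return False  # never swallow the event here
```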
If a client attaches to a quotes data feed and requests a throttle rate,
be sure to unsub that side-band memchan + task when it detaches and
especially so on any transport connection error.
Also, use an explicit `tractor.Context.cancel()` on the client feed
block exit since we removed the implicit cancel option from the
`tractor` api.
There is no reason to have more than one `brokerd` trades dialogue stream
open per `emsd`. Here we minimize to managing that lone stream and
multiplexing msgs from each client such that multiple clients can be
connected to the ems, conducting trading without requiring multiple
ems-client connections to the backend broker and without the broker
being aware there are even multiple flows going on.
This patch also sets up for being able to have ems clients which
register to receive and track trade flows from other piker clients thus
enabling so called "multi-player" trading where orders for both paper
and live trades can be shared between multiple participants in the form
of a pre-broker, local clearing service and trade signals dark book.
This solves a bunch of issues to do with `brokerd` order status msgs
getting relayed for each order to **every** correspondingly connected
EMS client. Previously we weren't keeping track of which emsd orders
were associated with which clients so you had backend msgs getting
broadcast to all clients which not only resulted in duplicate (and
sometimes erroneous, due to state tracking) actions taking place in the
UI's order mode, but it's also just duplicate traffic (usually to the
same actor) over multiple logical streams. Instead, only keep up **one**
(cached) stream with the `trades_dialogue()` endpoint such that **all**
emsd orders route over that single connection to the particular
`brokerd` actor.
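The essential bookkeeping, boiled down (names here are illustrative, not
the actual ems internals):

```python
class DialogueRouter:
    '''Map each order id to the single ems client stream that submitted
    it so `brokerd` status msgs only go back to that client.
    '''
    def __init__(self) -> None:
        self._oid2stream: dict[str, object] = {}

    def register(self, oid: str, client_stream) -> None:
        self._oid2stream[oid] = client_stream

    async def relay_status(self, msg: dict) -> None:
        stream = self._oid2stream.get(msg.get('oid'))
        if stream is not None:
            # no more broadcasting to every connected client
            await stream.send(msg)
```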
An async exit stack around the new `@tractor.context` is problematic
since a pushed context can't bubble errors unless the exit stack has
been closed. But in that case why do you need the exit stack if you're
going to push it and wait it right away; it seems more correct to use
a nursery and spawn a task in `pikerd` that waits on the both the
target context completion first (thus being able to bubble up any errors
from the remote, and top level service task) and the sub-actor portal.
(Sub)service Daemons are spawned with `.start_actor()` and thus will
block forever until cancelled, so add a way to cancel them explicitly
which we'll need eventually for restarts and dynamic feed management.
The big lesson here is that async exit stacks are not conducive to
spawning and monitoring service tasks, and especially so if
a `@tractor.context` is used since if the `.open_context()` call isn't
exited (only possible by the stack being closed), then there will be no
way for `trio` to cancel the task that pushed that context (since it
can't run a checkpoint while yielded inside the stack) without also
cancelling all other contexts pushed on that stack. Presuming one
`pikerd` task is used to do the original pushing (which it was) then
any error would have to kill all service daemon tasks which obviously
won't work.
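The nursery-based shape described above, roughly (simplified sketch;
assumes the `tractor.Portal.open_context()` async-ctx-manager api and a
`Context.result()`-style wait on the remote side):

```python
import trio

async def start_service_task(service_nursery, portal, target, **kwargs):
    '''Spawn one long-lived `pikerd` task per service which opens the
    remote context and simply waits on it, so any remote error bubbles
    up through the nursery instead of being trapped by an exit stack.
    '''
    started = trio.Event()

    async def _open_and_wait():
        async with portal.open_context(target, **kwargs) as (ctx, first):
            started.set()
            # park here until the remote side completes or errors
            await ctx.result()

    service_nursery.start_soon(_open_and_wait)
    await started.wait()
```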
I see this mostly as the painz of tinkering out an SC service manager
with `tractor` / `trio` for the first time, so try to go easy on the
process ;P
Adding binance's "hft" ws feeds has resulted in a lot of context
switching in our Qt charts, so much so it's chewin CPU and definitely
worth it to throttle to the detected display rate as per discussion in
issue #192.
This is a first very very naive attempt at throttling L1 tick feeds on
the `brokerd` end (producer side) using a constant and uniform delivery
rate by way of a `trio` task + mem chan. The new func is
`data._sampling.uniform_rate_send()`. Basically if a client requests
a feed and provides a throttle rate we just spawn a task and queue up
ticks until approximately the next display-rate period of time
has passed before forwarding. It's definitely nothing fancy but does
provide fodder and a start point for an up and coming queueing eng to
start digging into both #107 and #109 ;)
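In code terms the naive version is just an accumulate-and-flush loop
keyed off wall-clock time (sketch only, with assumed stream/channel
names; contrast with the `move_on_after()` rework described further up):

```python
import time

async def naive_rate_send(rate: float, quote_stream, send_chan) -> None:
    period = 1 / rate           # eg. 1/60 for a 60Hz display
    last_send = time.time()
    ticks: list = []

    async for quote in quote_stream:
        ticks.extend(quote.get('ticks', ()))
        if (time.time() - last_send) >= period:
            # roughly one display frame has elapsed: forward the batch
            await send_chan.send({'ticks': ticks})
            ticks = []
            last_send = time.time()
```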