An async exit stack around the new `@tractor.context` is problematic
since a pushed context can't bubble errors unless the exit stack has
been closed. But in that case, why do you need the exit stack at all
if you're just going to push the context and wait on it right away? It
seems more correct to use a nursery and spawn a task in `pikerd` that
waits on both the target context's completion first (thus being able
to bubble up any errors from the remote, top-level service task) and
on the sub-actor portal.
(Sub)service daemons are spawned with `.start_actor()` and thus will
block forever until cancelled, so add a way to cancel them explicitly;
we'll need this eventually for restarts and dynamic feed management.
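Something like this sketch is the shape we're after (`Services`,
`start_service_task()` and `cancel_service()` are made-up names for
illustration, not the actual `pikerd` api):

```python
import trio
import tractor


class Services:
    '''Hypothetical service-task mngr, illustrative only.'''
    service_n: trio.Nursery
    service_tasks: dict[str, trio.CancelScope] = {}

    async def start_service_task(
        self,
        name: str,
        portal: tractor.Portal,  # to the (sub)service daemon
        target,                  # some `@tractor.context` endpoint
        **kwargs,
    ) -> None:
        async def open_context_in_task(
            task_status=trio.TASK_STATUS_IGNORED,
        ) -> None:
            with trio.CancelScope() as cs:
                async with portal.open_context(
                    target,
                    **kwargs,
                ) as (ctx, first):
                    task_status.started(cs)
                    # wait on the ctx's final result so any remote
                    # error bubbles up through the service nursery
                    await ctx.result()

        cs = await self.service_n.start(open_context_in_task)
        self.service_tasks[name] = cs

    async def cancel_service(self, name: str) -> None:
        # the explicit cancel we'll need for restarts and
        # dynamic feed management
        self.service_tasks.pop(name).cancel()
```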
The big lesson here is that async exit stacks are not conducive to
spawning and monitoring service tasks, especially when
a `@tractor.context` is used: if the `.open_context()` call isn't
exited (which is only possible by closing the stack), then there is no
way for `trio` to cancel the task that pushed that context (since it
can't run a checkpoint while yielded inside the stack) without also
cancelling all the other contexts pushed onto that stack. Presuming one
`pikerd` task is used to do the original pushing (which it was), any
single error would have to kill all the service daemon tasks, which
obviously won't work.
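For contrast, the problematic shape looked roughly like this
(paraphrased, not the actual removed code):

```python
from contextlib import AsyncExitStack

import trio


async def spawn_services_via_stack(portal, endpoints) -> None:
    # anti-pattern: a single task enters *every* context, so no one
    # service can be cancelled without closing the whole stack
    async with AsyncExitStack() as stack:
        for ep in endpoints:
            ctx, first = await stack.enter_async_context(
                portal.open_context(ep)
            )
        # errors from any pushed context only surface once the stack
        # unwinds, and cancelling here kills *all* of the services
        await trio.sleep_forever()
```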
I see this mostly as the painz of tinkering out an SC service manager
with `tractor` / `trio` for the first time, so try to go easy on the
process ;P
This solves a bunch of issues to do with `brokerd` order status msgs
getting relayed for each order to **every** correspondingly connected
EMS client. Previously we weren't keeping track of which `emsd` orders
were associated with which clients, so backend msgs were getting
broadcast to all clients. Not only did that result in duplicate (and
sometimes erroneous, due to state tracking) actions taking place in the
UI's order mode, it was also just duplicate traffic (usually to the
same actor) over multiple logical streams. Instead, keep up only **one**
(cached) stream with the `trades_dialogue()` endpoint such that **all**
`emsd` orders route over that single connection to the particular
`brokerd` actor.
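Conceptually the caching looks something like the following sketch
(the `_cached_streams` table and `open_trades_stream()` wrapper are
hypothetical names; a real version would also need to ref-count users
before tearing the stream down):

```python
from contextlib import asynccontextmanager

_cached_streams: dict[str, tuple] = {}


@asynccontextmanager
async def open_trades_stream(
    broker: str,
    portal,           # `tractor.Portal` to the `brokerd`
    trades_dialogue,  # the `@tractor.context` endpoint func
):
    # reuse the one stream per `brokerd` if it's already up
    if broker in _cached_streams:
        yield _cached_streams[broker]
        return

    async with (
        portal.open_context(trades_dialogue) as (ctx, first),
        ctx.open_stream() as stream,
    ):
        _cached_streams[broker] = (ctx, stream)
        try:
            yield ctx, stream
        finally:
            _cached_streams.pop(broker, None)
```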
Add a new type/api to manage "contents labels" (labels that sit in
a view and display info about the viewed data), since they're mostly
used by the linked charts cursor. Make `LinkedSplits.cursor` the one
and only instance var for the cursor such that charts can look it up
from that common class. Drop the `ChartPlotWidget._ohlc` array; just
add an `'ohlc'` entry to `._arrays`.
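The lookup pattern now looks roughly like this (the scaffolding here
is invented for illustration):

```python
import numpy as np


class ChartPlotWidget:
    def __init__(self, linked: 'LinkedSplits') -> None:
        self.linked = linked  # the common parent class
        # ohlc data now lives in the generic arrays table
        self._arrays: dict[str, np.ndarray] = {'ohlc': np.empty(0)}

    @property
    def cursor(self):
        # charts look the (one) cursor up from the linked splits
        return self.linked.cursor
```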
Orders in order mode should be chart-oriented since there's a mode per
chart. If you want all orders, just ask the ems or query all the charts
in a loop.
This fixes cancel-all-orders such that when 'cc' is tapped, only the
orders on the *current* chart are cancelled, lel.
Generalize the methods for cancelling groups of orders (all, or those
under the cursor) and add new group status support such that the status
for each cancel or order submission is displayed in the status bar. In
the "cancel-all-orders" case, use the new group status stuff.
Allows for submitting a top level "group status" associated with
a "group key" which eventually resolves once all sub-statuses associated
with that group key (and thus the top level status) complete and are
removed. Also add support for a "final message" for each status such
that once the status-clear callback is called, a final msg is placed on
the status bar which is then removed when the next status is set.
It's all a questionable bunch of closures/callbacks but it worx.
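A toy version of the resolution logic (all names here are invented;
the real thing lives in the status bar code):

```python
from typing import Callable, Optional


class GroupStatusBar:
    '''Toy group-status tracker, illustrative only.'''

    def __init__(self) -> None:
        self._groups: dict[str, set[int]] = {}
        self._final_msgs: dict[str, str] = {}
        self._id = 0

    def open_status(
        self,
        msg: str,
        group_key: Optional[str] = None,
        final_msg: Optional[str] = None,
    ) -> Callable[[], None]:
        print(f'status: {msg}')
        self._id += 1
        sid = self._id
        if group_key:
            self._groups.setdefault(group_key, set()).add(sid)
            if final_msg:
                self._final_msgs[group_key] = final_msg

        def clear() -> None:
            if group_key:
                subs = self._groups[group_key]
                subs.discard(sid)
                if not subs:
                    # all sub-statuses done -> the group resolves and
                    # the final msg shows until the next status is set
                    print(f'status: {self._final_msgs.pop(group_key, "")}')

        return clear


# usage: the last clear triggers the final msg
bar = GroupStatusBar()
clears = [
    bar.open_status(f'cancelling order {i}', 'cancel-all', 'all cancelled')
    for i in range(3)
]
for clear in clears:
    clear()
```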
Instead of callbacks for key presses/releases, convert our
`ChartView`'s kb input handling to async code using our
event-relaying-over-mem-chan system. This is a first step toward a more
async-driven modal control UX. Changed a bunch of "chart" component
naming as part of this as well, namely: `ChartSpace` -> `GodWidget` and
`LinkedSplitCharts` -> `LinkedSplits`. Engage the view box's async
handler code as part of new symbol data loading in
`display_symbol_data()`. More re-orging to come!
Add an `open_handler()` ctx manager for wholesale handling of event
sets with a passed-in async func. Better document and implement the
event filtering core, including adding support for key "auto repeat"
filtering; it turns out the events delivered by the time `trio` does
its guest-mode tick are not the same (Qt has somehow consumed or
recycled them), so we have to read certain things (like the `.type()`,
`.isAutoRepeat()`, etc.) before shipping over the mem chan. The
alternative might be to copy the event objects first, but we haven't
tried that yet. For now just offer auto-repeat filtering through
a flag.
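The gist of the relay + filtering (the `EventRelay` internals here are
a guess at the shape, not a copy of the real code):

```python
from dataclasses import dataclass

import trio
from PyQt5.QtCore import QEvent, QObject


@dataclass
class KeyMsg:
    # fields read out eagerly, *before* the send
    etype: int
    key: int
    auto_repeat: bool


class EventRelay(QObject):
    # wired up by an `open_event_stream()`-style ctx mngr in practice
    send_chan: trio.MemorySendChannel
    filter_auto_repeats: bool = True

    def eventFilter(self, source, ev: QEvent) -> bool:
        if ev.type() in (QEvent.KeyPress, QEvent.KeyRelease):
            if self.filter_auto_repeats and ev.isAutoRepeat():
                return False
            # capture values now; the event obj may be recycled by
            # the time `trio`'s guest tick ships it downstream
            self.send_chan.send_nowait(
                KeyMsg(ev.type(), ev.key(), ev.isAutoRepeat())
            )
        return False  # never consume; let Qt keep processing
```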
Adding binance's "hft" ws feeds has resulted in a lot of context
switching in our Qt charts, so much so that it's chewing CPU and it's
definitely worth throttling to the detected display rate, as per the
discussion in issue #192.
This is a first, very naive attempt at throttling L1 tick feeds on
the `brokerd` end (producer side) using a constant and uniform delivery
rate by way of a `trio` task + mem chan. The new func is
`data._sampling.uniform_rate_send()`. Basically, if a client requests
a feed and provides a throttle rate, we just spawn a task and queue up
ticks until approximately one display-rate period has passed before
forwarding. It's definitely nothing fancy but does provide fodder and
a starting point for an up-and-coming queueing eng to start digging
into both #107 and #109 ;)
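Roughly, the send loop is shaped like this simplified sketch (the
actual `uniform_rate_send()` handles tick merging and teardown more
carefully):

```python
import time

import trio


async def uniform_rate_send(
    rate: float,  # target sends-per-sec (the detected display rate)
    quote_stream: trio.MemoryReceiveChannel,
    stream,       # the downstream client stream
) -> None:
    period = 1 / rate
    last_send = time.time()

    while True:
        quote = await quote_stream.receive()

        # queue further ticks until one period has passed; naive
        # merge where later fields overwrite earlier ones
        while (time.time() - last_send) < period:
            try:
                quote.update(quote_stream.receive_nowait())
            except trio.WouldBlock:
                await trio.sleep(period / 10)

        await stream.send(quote)
        last_send = time.time()
```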
Avoids some cyclical and confusing import-time stuff that we needed in
order to get DPI-aware fonts configured from the active display. Move
the main window singleton into its own module and add a `main_window()`
getter for it. Make `current_screen()` a `MainWindow` method to avoid
so many module-level variables.
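i.e. something along these lines (the `screenAt()` fallback logic is
a guess at what the method might do):

```python
# _window.py (sketch)
from PyQt5.QtGui import QScreen
from PyQt5.QtWidgets import QApplication, QMainWindow


class MainWindow(QMainWindow):

    def current_screen(self) -> QScreen:
        # the screen this window actually sits on; needed for
        # DPI-aware font configuration
        app = QApplication.instance()
        return app.screenAt(self.pos()) or app.primaryScreen()


_qt_win: MainWindow = None


def main_window() -> MainWindow:
    '''Return the singleton main window, creating it on first use.'''
    global _qt_win
    if _qt_win is None:
        _qt_win = MainWindow()
    return _qt_win
```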