An async exit stack around the new `@tractor.context` is problematic
since a pushed context can't bubble errors unless the exit stack has
been closed. But in that case, why do you need the exit stack at all if
you're just going to push a context and wait on it right away? It seems
more correct to use a nursery and spawn a task in `pikerd` that waits on
both the target context's completion (thus being able to bubble up any
errors from the remote and top level service task) and the sub-actor
portal.
(Sub)service daemons are spawned with `.start_actor()` and thus will
block forever until cancelled, so add a way to cancel them explicitly,
which we'll need eventually for restarts and dynamic feed management.
The big lesson here is that async exit stacks are not conducive to
spawning and monitoring service tasks, especially when
a `@tractor.context` is used: if the `.open_context()` call isn't
exited (only possible by closing the stack), then there is no way for
`trio` to cancel the task that pushed that context (since it can't run
a checkpoint while yielded inside the stack) without also cancelling
every other context pushed on that stack. Presuming a single `pikerd`
task is used to do the original pushing (which it was), any error would
have to kill all service daemon tasks, which obviously won't work.
I see this mostly as the pains of hashing out an SC service manager
with `tractor` / `trio` for the first time, so try to go easy on the
process ;P
Adding binance's "hft" ws feeds has resulted in a lot of context
switching in our Qt charts, so much so that it's chewing CPU and is
definitely worth throttling to the detected display rate as per the
discussion in issue #192.
This is a first, very naive attempt at throttling L1 tick feeds on
the `brokerd` end (producer side) using a constant and uniform delivery
rate by way of a `trio` task + mem chan. The new func is
`data._sampling.uniform_rate_send()`. Basically, if a client requests
a feed and provides a throttle rate, we just spawn a task and queue up
ticks until approximately one display-rate period has passed before
forwarding them. It's definitely nothing fancy but does provide fodder
and a starting point for an up and coming queueing eng to start digging
into both #107 and #109 ;)
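For reference, a hedged sketch of the basic idea; the actual
`uniform_rate_send()` signature and internals may well differ, and the
names here are illustrative only:

```python
import time

import trio


async def throttled_send(
    rate: float,                            # target delivery rate in Hz
    ticks_in: trio.MemoryReceiveChannel,    # raw L1 ticks from `brokerd`
    ticks_out: trio.MemorySendChannel,      # throttled stream to client
) -> None:
    '''Queue up ticks and only forward a batch once roughly one
    display-rate period has elapsed since the last send.
    '''
    period = 1 / rate
    pending: list = []
    last_send = time.monotonic()

    async for tick in ticks_in:
        pending.append(tick)
        now = time.monotonic()
        if now - last_send >= period:
            await ticks_out.send(pending)
            pending = []
            last_send = now
```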
Avoids some cyclical and confusing import-time logic that was needed
to get DPI-aware fonts configured from the active display. Move the
main window singleton into its own module and add a `main_window()`
getter for it. Make `current_screen()` a `MainWindow` method to avoid
so many module variables.
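A rough sketch of the resulting shape (illustrative only; the actual
module layout and Qt calls in the real code may differ):

```python
from typing import Optional

from PyQt5.QtGui import QScreen
from PyQt5.QtWidgets import QApplication, QMainWindow

# module level singleton, set when the main window is created
_main_window: Optional["MainWindow"] = None


class MainWindow(QMainWindow):

    def __init__(self, parent=None) -> None:
        super().__init__(parent)
        global _main_window
        _main_window = self

    def current_screen(self) -> QScreen:
        # the screen currently displaying this window; used for
        # DPI-aware font configuration
        return QApplication.screenAt(self.geometry().center())


def main_window() -> "MainWindow":
    '''Return the global main window singleton.'''
    assert _main_window is not None, 'No main window created yet!'
    return _main_window
```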
This moves the entire clearing system to typed messages built on
`pydantic.BaseModel`, such that the streamed request-response order
submission protocols can be explicitly viewed in terms of message
schema, flow, and sequencing. With explicit message formats we can now
dig into simplifying and normalizing across broker provider apis to
achieve better uniformity and simplicity.
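As a flavor of what typed clearing messages look like, here's a hedged
sketch using pydantic; the exact model names and field sets in the
clearing message module may differ:

```python
from typing import Optional

from pydantic import BaseModel


class Order(BaseModel):
    '''Client -> ems order submission request.'''
    action: str           # e.g. 'buy', 'sell' or 'alert'
    oid: str              # client side unique order id
    symbol: str
    price: float
    size: float


class Status(BaseModel):
    '''ems -> client status update for a given order dialogue.'''
    oid: str
    resp: str             # e.g. 'broker_submitted', 'dark_triggered',
                          # 'alert_triggered'
    time_ns: int
    reqid: Optional[str] = None   # broker side id, known only after ack
```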
The order submission sequence is now fully async: an order request is
expected to be explicitly acked with a new message, and if the client
requests cancellation before that ack arrives, the cancel message is
stashed and then sent immediately on receipt of the order submission's
ack from the backend broker. Backend brokers are now controlled using
a 2-way request-response streaming dialogue which is fully api-agnostic
of the clearing system's core processing; this leverages the new
bi-directional streaming apis from `tractor`. The clearing core
(`emsd`) was also simplified by moving the paper engine into its own
sub-actor and making it api-symmetric with the expected `brokerd`
endpoints.
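The "stash the cancel until the ack arrives" behaviour boils down to
something like the following sketch (hypothetical book and field names,
not the actual `emsd` relay code):

```python
# `oid` is the client order id, `reqid` the broker side id which only
# exists once the submission has been acked by the backend
async def relay_cancel(book, brokerd_stream, oid: str) -> None:
    reqid = book.oid_to_reqid.get(oid)
    if reqid is not None:
        # already acked: forward the cancel right away
        await brokerd_stream.send({'action': 'cancel', 'reqid': reqid})
    else:
        # not yet acked: stash it for later
        book.pending_cancels[oid] = True


async def relay_ack(book, brokerd_stream, oid: str, reqid: str) -> None:
    book.oid_to_reqid[oid] = reqid
    if book.pending_cancels.pop(oid, False):
        # a cancel arrived before this ack; send it on immediately
        await brokerd_stream.send({'action': 'cancel', 'reqid': reqid})
```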
A couple of the ems status messages were changed/added:
'dark_executed' -> 'dark_triggered'
added 'alert_triggered'
More cleaning of old code to come!
This makes the paper engine look, IPC-wise, exactly like any
broker-provider backend module and uses the new `trades_dialogue()`
2-way streaming endpoint for commanding order requests.
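Roughly, the endpoint shape looks like the sketch below; the real
`trades_dialogue()` signature and message contents will differ, this
just shows the `@tractor.context` 2-way streaming pattern:

```python
import tractor


@tractor.context
async def trades_dialogue(
    ctx: tractor.Context,
    broker: str,            # illustrative param only
) -> None:
    # sync with the caller, handing back e.g. any existing positions
    await ctx.started({})

    # open the bi-directional stream used by `emsd` to submit order
    # requests and by this engine to stream back acks / fills
    async with ctx.open_stream() as ems_stream:
        async for request in ems_stream:
            # paper engine: simulate an immediate ack for each request
            await ems_stream.send({'name': 'ack', 'oid': request['oid']})
```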
This serves as a first step toward truly distributed forward testing
since the paper engine can now be run out-of-tree from `pikerd` if
needed, thus demonstrating how real-time clearing signals can be shared
between fully distinct services.