Main "public" API change is to make `GodWidget.get/set_chart_symbol()`
accept and cache-on fqsn tuples to allow handling overlayed chart groups
and adjust method names to be plural to match.
Wrt `LinkedSplits`,
- create all chart widget axes with a `None` plotitem argument and set
the `.pi` field after axis creation (since apparently we have another
object reference causality dilemma..)
- set a monkeyed `PlotItem.chart_widget` for use in axes that still need
the widget reference.
- drop feed pause/resume for now since it's leaking feed tasks on the
`brokerd` side and we probably don't really need it any more; if we
still do, it should be done on the feed, not the flume.
Wrt `ChartPlotItem`,
- drop `._add_sticky()` and use the `Axis` method instead and add some
overlay + axis sanity checks.
- refactor `.draw_ohlc()` to be a lighter wrapper around a call to
`.add_plot()`.
We have this method on our `ChartPlotWidget` but it makes more sense to
directly associate axis-labels with, well, the label's parent axis XD.
We add `._stickies: dict[str, YAxisLabel]` to replace
`ChartPlotWidget._ysticks` and pass in the `pg.PlotItem` to each axis
instance, stored as `Axis.pi` instead of handing around linked split
references (which are way out of scope for a single axis).
More work needs to be done to remove dependence on `.chart:
ChartPlotWidget` references in the date axis type as per comments.
More or less a revamp (and possibly first draft for something similar in
`tractor` core) which ensures all actor trees attempt to discover the
`pikerd` registry actor.
Implementation improvements include:
- new `Registry` singleton which houses the `pikerd` discovery
socket-address `Registry.addr` + a `open_registry()` manager which
provides bootstrapped actor-local access.
- refine `open_piker_runtime()` to do the work of opening a root actor
and call the new `open_registry()` depending on whether a runtime has
yet been bootstrapped.
- rejig `[maybe_]open_pikerd()` in terms of the above.
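A minimal sketch of the described pattern, purely illustrative (the
default sockaddr and exact signatures here are assumptions, not the
actual internals):

```python
# illustrative only: an actor-local registry singleton plus a
# bootstrapping manager, roughly mirroring the description above.
from contextlib import asynccontextmanager as acm


class Registry:
    # the `pikerd` discovery socket-address, set once per actor
    addr: tuple[str, int] | None = None


@acm
async def open_registry(
    addr: tuple[str, int] | None = None,
):
    # bootstrap the actor-local address exactly once, then provide
    # access to it for all subsequent callers in this actor.
    if Registry.addr is None:
        Registry.addr = addr or ('127.0.0.1', 6116)  # assumed default

    yield Registry.addr
```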
Allows running simultaneous data feed services on the same (linux) host
by avoiding file-name collisions: shm buffer sets are now keyed by the
given `brokerd` instance. This allows, for example, either multiple dev
versions of the data layer to run side-by-side or the test suite to be
run seamlessly alongside a production instance.
Previously we were relying on implicit actor termination in
`maybe_spawn_daemon()` but really on `pikerd` teardown we should be sure
to tear down not only all service tasks in each actor but also the actor
runtimes. This adjusts `Services.cancel_service()` to only cancel the
service task scope and wait on the `complete` event and reworks the
`open_context_in_task()` inner closure body to,
- always cancel the service actor at exit.
- not call `.cancel_service()` (which could otherwise cause recursion
issues on cancellation).
- allocate a `complete: trio.Event` to signal full task + actor termination.
- pop the service task from the `.service_tasks` registry.
Further, add a `maybe_set_global_registry_sockaddr()` helper-cm to do
the work of checking whether a registry socket needs to be (or already
has been) set, and use it for discovery calls to the `pikerd` service
tree.
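A rough sketch of the adjusted cancellation flow; the registry layout
and names here are assumptions for illustration:

```python
# hypothetical shape of the service-task registry + cancel flow
import trio


class Services:
    # name -> (task cancel scope, full-termination event)
    service_tasks: dict[str, tuple[trio.CancelScope, trio.Event]] = {}

    @classmethod
    async def cancel_service(cls, name: str) -> None:
        # only cancel the service *task* scope here; actor cancellation
        # happens in the task's own exit path.
        cs, complete = cls.service_tasks[name]
        cs.cancel()
        # block until the task body signals full task + actor teardown
        await complete.wait()
```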
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when `end_dt=None`, indicating "get the latest bar". Add a breakpoint
block that should trigger whenever the latest bar vs. the latest epoch
time is mismatched; we'll remove this after some testing verifying the
history bars issue is resolved.
Further, this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
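A hedged sketch of the end-time bump (using `pendulum` for datetime
handling; the helper name is made up for illustration):

```python
import pendulum


def latest_bar_end(end_dt: pendulum.DateTime | None) -> pendulum.DateTime:
    # `end_dt=None` means "get the latest bar": bump the requested end
    # by a minute so round-down indexing still includes the latest bar.
    if end_dt is None:
        return pendulum.now('UTC').add(minutes=1)
    return end_dt
```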
Always use `open_sample_stream()` to register fast and slow quote feed
buffers and get a sampler stream which we use to trigger
`Sampler.broadcast_all()` calls on the service side after backfill
events.
Now spawned under the `pikerd` tree as a singleton daemon-actor, we offer
a slew of new routines in support of this micro-service:
- `maybe_open_samplerd()` and `spawn_samplerd()` which provide the
`._daemon.Services` integration to conduct service spawning.
- `open_sample_stream()`: a client-side endpoint which does all the work
of (lazily) starting the `samplerd` service (if it doesn't yet exist),
registers shm buffers for update, and connects a sample-index stream for
iteration by the caller (see the usage sketch below).
- `register_with_sampler()`: the `samplerd`-side service task endpoint
implementing all the shm buffer and index-stream registry details as
well as logic to ensure a lone service task runs
`Services.increment_ohlc_buffer()`; it increments at the shortest period
registered which, for now, is the default 1s duration.
Further impl notes:
- fixes to `Services.broadcast()` to ensure broken streams get discarded
gracefully.
- we use a `pikerd` side singleton mutex `trio.Lock()` to ensure
one-and-only-one `samplerd` is ever spawned per `pikerd` actor tree.
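A client-side usage sketch; the import path and the `period_s` argument
are assumptions based on the description above:

```python
from piker.data._sampling import open_sample_stream  # assumed path


async def consume_sample_indices() -> None:
    # lazily spawns/locates `samplerd`, registers our shm buffers and
    # connects a sample-index stream for iteration by the caller.
    async with open_sample_stream(period_s=1) as istream:
        async for msg in istream:
            # one event per sample period (1s by default)
            print('sample index event:', msg)
```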
Drop the `_services` module level ref and adjust all client code to
match. Drop struct inheritance and convert all methods to class level.
Move `Brokerd.locks` -> `Services.locks` and add the sampling mod to the
`pikerd` enabled-modules set.
We're moving toward a single actor managing sampler work, distributed
independently of `brokerd` services, such that a user can run samplers
on different hosts than the real-time data feed infra. Most of the
implementation work involves aggregating `.data._sampling` routines into
a new `Sampler` singleton type.
Move the following methods to class methods:
- `.increment_ohlc_buffer()` to allow a single task to increment all
registered shm buffers.
- `.broadcast()` for IPC relay to all registered clients/shms.
Further add a new `maybe_open_global_sampler()` which allocates
a service nursery and assigns it to the `Sampler.service_nursery`; this
is prep for putting the step incrementer in a singleton service task
higher up the data-layer actor tree.
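An illustrative shape for the singleton; field names and container types
are assumptions:

```python
import trio


class Sampler:
    # all state is class-level so every task in the actor shares it
    service_nursery: trio.Nursery | None = None
    ohlcv_shms: dict[float, list] = {}    # sample period -> shm buffers
    subscribers: dict[float, list] = {}   # sample period -> IPC streams

    @classmethod
    async def increment_ohlc_buffer(cls, period_s: float) -> None:
        # a single task steps *all* shm buffers registered for `period_s`
        ...

    @classmethod
    async def broadcast(cls, period_s: float, msg: dict) -> None:
        # relay `msg` to every registered client stream for this period
        ...
```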
When we see multiple history frames that are duplicates of the request
set, bail on retrying after a number of tries (6 just cuz) and return
early from the tsdb backfill loop; presume that this many duplicates
means we've hit the beginning of history. Use a `collections.Counter`
for the duplicate counts and make sure to warn-log in such cases.
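A tiny sketch of the bail-out logic (the threshold of 6 is from the note
above; names are illustrative):

```python
from collections import Counter

# count how many times we've received a frame starting at the same time
dupes: Counter = Counter()


def hit_start_of_history(frame_start_epoch: float) -> bool:
    dupes[frame_start_epoch] += 1
    # this many identical frames almost certainly means there's no
    # further history to pull; warn-log and bail in the caller.
    return dupes[frame_start_epoch] > 6
```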
Wow, turns out tick framing was totally borked since we weren't framing
on "greater than throttle period long waits" XD
This moves all the framing logic into a common func and calls it in
every case:
- every (normal) "pre throttle period expires" quote receive
- each "no new quote before throttle period expires" (slow case)
- each "no clearing tick yet received" / only burst on clears case
Add some (untested) data slicing util methods for mapping time ranges to
source data indices:
- `.get_index()` which maps a single input epoch time to an equivalent
array (int) index.
- `slice_from_time()` which returns a view of the shm data for an input
epoch range, presuming the underlying struct array contains a `'time'`
field with epoch stamps.
- `.view_data()` which slices out the "in view" data according to the
current state of the passed in `pg.PlotItem`'s view box.
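A rough, untested sketch of the epoch-range slicing; the `'time'` field
name is from the description above but the implementation is a guess:

```python
import numpy as np


def slice_from_time(
    arr: np.ndarray,
    start_t: float,
    stop_t: float,
) -> np.ndarray:
    # binary-search the (monotonic) epoch stamps to map the time range
    # onto integer indices, then return a view over that index range.
    times = arr['time']
    i_start = np.searchsorted(times, start_t, side='left')
    i_stop = np.searchsorted(times, stop_t, side='right')
    return arr[i_start:i_stop]
```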
This has been an outstanding idea for a while and changes the framing
format of tick events into a `dict[str, list[dict]]` wherein for each
tick "type" (eg. 'bid', 'ask', 'trade', 'asize'..etc) we create a
FIFO-ordered `list` of events (data) and then pack this table into each
(throttled) send. This gives an additional implied downsample reduction
(in terms of iteration on the consumer side) from `N` tick-events to
a (max) `T` tick-types, presuming the rx side only needs the latest tick
event.
Drop the `types: set` and adjust clearing event test to use the new
`ticks_by_type` map's keys.
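A condensed sketch of the per-type framing (the real framing func also
handles the throttle-period edge cases covered earlier):

```python
from collections import defaultdict


def frame_ticks(quote: dict) -> dict[str, list[dict]]:
    # pack ticks into a by-type table, FIFO ordered per type, so the
    # consumer can grab just the latest event per type if it wants.
    ticks_by_type: dict[str, list[dict]] = defaultdict(list)
    for tick in quote.get('ticks', ()):
        ticks_by_type[tick['type']].append(tick)
    return dict(ticks_by_type)
```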
Instead of uniformly distributing the msg send rate for a given
aggregate subscription, choose to be more bursty around clearing ticks
so as to avoid saturating the consumer with L1 book updates and instead
deliver real trade data as fast as possible.
Presuming the consumer is in the "UI land of slow" (eg. modern display
frame rates), such an approach is more useful for seeing "material
changes" in the market as-bursty-as-possible (i.e. more short-lived fast
changes in the last clearing price vs. many slower changes in the
bid-ask spread queues). Such an approach also lends itself better to
multi-feed overlays which in aggregate tend to scale linearly with the
number of feeds/overlays; centralization of bursty arrival rates allows
for a higher overall throttle rate if used cleverly with framing.
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when `end_dt=None`, indicating "get the latest bar". Add a breakpoint
block that should trigger whenever the latest bar vs. the latest epoch
time is mismatched; we'll remove this after some testing verifying the
history bars issue is resolved.
Further, this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
Trying to send a message in the `NoBsWs.fixture()` exit when the ws is
not currently disconnected causes a double `._stack.close()` call which
will corrupt `trio`'s coro stack. Instead only do the unsub if we detect
the ws is still up.
Also drops the legacy `backfill_bars()` module endpoint.
Fixes #437
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when `end_dt=None`, indicating "get the latest bar". Add a breakpoint
block that should trigger whenever the latest bar vs. the latest epoch
time is mismatched; we'll remove this after some testing verifying the
history bars issue is resolved.
Further, this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
Instead of requiring a `-b` flag, try to import all built-in broker
backend python modules by default and only load those detected from the
input symbol list's fqsn values. In other words the `piker chart` cmd
can now be run without `-b`; that flag is only required if you want to
load just a subset of the built-ins or are trying to load a specific
not-yet-built-in backend.
Allows using `set` ops for subscription management and guarantees no
duplicates per `brokerd` actor. The new API is simpler for dynamic
pause/resume changes per `Feed`:
- `_FeedsBus.add_subs()`, `.get_subs()`, `.remove_subs()` all accept multi-sub
`set` inputs.
- `Feed.pause()` / `.resume()` encapsulates management of *only* sending
a msg on each unique underlying IPC msg stream.
Use the new api in the sampler task.
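A sketch of the set-based subs management (container types are
assumptions):

```python
class _FeedsBus:
    _subscribers: dict[str, set] = {}

    def add_subs(self, key: str, subs: set) -> set:
        # `set` semantics guarantee no duplicate subs per key
        entry = self._subscribers.setdefault(key, set())
        entry.update(subs)
        return entry

    def get_subs(self, key: str) -> set:
        return self._subscribers.get(key, set())

    def remove_subs(self, key: str, subs: set) -> set:
        entry = self._subscribers.setdefault(key, set())
        entry.difference_update(subs)
        return entry
```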
Previously we would only detect overruns and drop subscriptions on
non-throttled feed subs, however you can get the same issue with
a wrapping throttler task:
- the intermediate mem chan can be blocked by the throttler task being
too slow, in which case we still want to warn about it.
- the stream's IPC channel can actually break, in which case we still
want to drop the connection and subscription so it doesn't become
a source of stale backpressure.
Set each quote-stream by matching the provider for each `Flume`, which
results in some flumes mapping to the same (multiplexed) stream.
Monkey-patch the equivalent `tractor.MsgStream._ctx: tractor.Context` on
each broadcast-receiver subscription to allow use by feed bus methods as
well as other internals which need to reference IPC channel/portal info.
Start a `_FeedsBus` subscription management API:
- add `.get_subs()` which returns the list of tuples registered for the
given key (normally the fqsn).
- add `.remove_sub()` which allows removing by key and tuple value and
provides encapsulation for sampler task(s) which deal with dropped
connections/subscribers.
Adds provider-list-filtered (quote) stream multiplexing support allowing
for merged real-time `tractor.MsgStream`s using an `@acm` interface.
Behind the scenes we are just doing a classic multi-task push to
a common mem chan (sketched below).
Details to make it work on `Feed`:
- add `Feed.mods: dict[str, ModuleType]` and
`Feed.portals: dict[ModuleType, tractor.Portal]`, both populated during
init in `open_feed()`.
- drop `Feed.portal` and `Feed.name`
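A bare-bones version of the multi-task-push approach; the real `@acm`
endpoint layers provider filtering and IPC stream typing on top:

```python
from contextlib import asynccontextmanager as acm

import trio


@acm
async def merge_streams(streams: list):
    send, recv = trio.open_memory_channel(100)

    async def push(stream, to_send):
        # each input stream gets its own relay task pushing into the
        # common mem chan; the clone keeps the chan open until done.
        async with to_send:
            async for msg in stream:
                await to_send.send(msg)

    async with trio.open_nursery() as n:
        for stream in streams:
            n.start_soon(push, stream, send.clone())
        send.close()
        try:
            # the consumer simply iterates the merged receive side
            yield recv
        finally:
            n.cancel_scope.cancel()
```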
Also fix a final lingering tsdb history loading loop termination bug.
A slight facepalm but, the main issue was a simple indexing logic error:
we need to slice with `tsdb_history[-shm._first.value:]` to push the
most recent history, not the oldest.. This allows cleanup of the tsdb
backfill loop as well.
Further, greatly simplify `diff_history()` time slicing by using the
classic `numpy` conditional slice on the epoch field.
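The conditional slice alluded to above, roughly (field and helper names
are illustrative):

```python
import numpy as np


def new_history_only(
    frame: np.ndarray,
    last_tsdb_epoch: float,
) -> np.ndarray:
    # keep only rows strictly newer than what the tsdb already holds
    return frame[frame['time'] > last_tsdb_epoch]
```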
This had a bug prior where the end of a frame (a partial) wasn't being
sliced correctly and we'd get odd gaps showing up in the backfilled from
`brokerd` vs. tsdb end index. Repair this by doing timeframe aware index
diffing in `diff_history()` which seems to resolve it. Also, use the
frame-result's `end_dt: datetime` for the loop exit condition.
Sync per-symbol sampler loop startup to subscription registration such
that the loop can't start until the consumer's stream subscription is
added; the task-sync uses a `trio.Event`. This patch also drops a ton of
commented cruft.
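The start-sync amounts to something like this (names are illustrative):

```python
import trio

sub_added = trio.Event()


async def register_subscription(subs: set, sub) -> None:
    subs.add(sub)
    # unblock the sampler loop now that a consumer is registered
    sub_added.set()


async def sample_loop() -> None:
    # can't start until the consumer's stream subscription is added
    await sub_added.wait()
    # ... per-symbol sampling / broadcast work ...
```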
Further adjustments needed to get parity with prior functionality:
- pass init msg 'symbol_info' field to the `Symbol.broker_info: dict`.
- ensure the `_FeedsBus._subscriptions` table uses the broker-specific
symbol (without the brokername suffix) as keys for lookup so that the
sampler loop doesn't have to append the brokername as a suffix.
- ensure the `open_feed_bus()` flumes-table-msg sent back via
`tractor.Context.started()` uses the `.to_msg()` form of all flume
structs.
- ensure `maybe_open_feed()` uses `tractor.MsgStream.subscribe()` on all
`Flume.stream`s on cache hits using the
`tractor.trionics.gather_contexts()` helper.
Orient shm-flow-arrays around the new idea of a `Flume` which provides
access, mgmt and basic measure of real-time data flow sets (see water
flow management semantics).
- We discard the previous idea of an "init message" which contained all
the shm attachment info and instead send a startup message full of
`Flume.to_msg()`s which are symmetrically loaded on the caller actor
side.
- Create data-flow "entries" for every passed-in fqsn such that the
consumer gets back streams and shm for each, now all wrapped in `Flume`
types. For now we allocate `brokermod.stream_quotes()` tasks 1-to-1 for
each fqsn (instead of expecting each backend to do multiplexing, though
we might want that eventually) as well as a `_FeedsBus._subscriber`
entry for each. The pause/resume management loop is adjusted to match.
Previously, `Feed`s were allocated 1-to-1 with each fqsn.
- Make `Feed` a `Struct` subtype instead of a `@dataclass` and move all
flow specific attrs to the new `Flume`:
- move `.index_stream()`, `.get_ds_info()` to `Flume`.
- drop `.receive()`: each fqsn entry will now require knowledge of
separate streams by feed users.
- add multi-fqsn tables: `.flumes`, `.streams` which point to the
appropriate per-symbol entries.
- Async load all `Flume`s from all contexts and all quote streams using
`tractor.trionics.gather_contexts()` on the client `open_feed()` side.
- Update feeds test to include streaming 2 symbols on the same (binance)
backend.
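A rough shape of the new types described above; this assumes
a msgspec-style `Struct` base and elides the shm/token fields whose
exact names aren't given here:

```python
from types import ModuleType

import tractor
from msgspec import Struct


class Flume(Struct):
    # one per fqsn: symbol info, first quote and the (possibly shared,
    # multiplexed) real-time quote stream.
    symbol: object
    first_quote: dict
    stream: tractor.MsgStream | None = None


class Feed(Struct):
    # multi-fqsn tables pointing at the per-symbol entries
    mods: dict[str, ModuleType] = {}
    portals: dict[ModuleType, tractor.Portal] = {}
    flumes: dict[str, Flume] = {}
    streams: dict[str, tractor.MsgStream] = {}
```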
This is to prep for multi-symbol feeds and charts so we accept
a sequence of fqsns to the top level entrypoints as well as the
`.data.feed.open_feed()` API (though we're not actually supporting true
multiplexed feeds nor shm lookups per fqsn yet).
Allows starting UI apps and passing the `pikerd` registry socket-addr
args via `--host` or `--port` such that a separate actor tree can be
started by selecting an unused port. This is handy when hacking new
features while also wishing to run a more stable version of the code for
trading on the same host.
Drop all attempts at rewiring `ViewBox` signals, monkey-patching
relayee handlers, and generally modifying event-source public
attributes. Instead take a much simpler approach where the event-source
graphics object simply has its handler dynamically overridden by
a broadcaster function which relays to all consumers using a Python
loop.
The benefits of this much simplified approach include:
- avoiding the tedious and often complex (re)connection of signals between
the source plot and the overlaid consumers.
- requiring zero modification of the public interface of any of the
publisher or consumer `ViewBox`s, no decoration, extra signal
definitions (eg. previous `mouseDragEventRelay` or the like).
- only a single dynamic method override on the event source graphics object
(`ViewBox`) which does the broadcasting work and requires no
modification to handler implementations.
Detailed `.ui._overlay` changes:
- drop `mk_relay_signal()`, `enable_relays()` which removes signal/slot
hacking methodology.
- drop unused `ComposedGridLayout.grid` and `.reverse`, change some
method names: `.insert()` -> `.insert_plotitem()`, `append()` ->
`.append_plotitem()`.
- in `PlotOverlay`, again drop all signal/slot rewiring in
`.add_plotitem()` and instead add our new closure-based python-loop
`broadcast()` routine which is used to override the event-source
object's handler (see the sketch below).
- comment out all the auxiliary/want-to-have event source selection
methods for now.
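The gist of the override, for a single handler; the helper below is
hypothetical and the real `broadcast()` covers the various mouse/wheel
handlers generically:

```python
def wire_broadcast(source_vb, relayees: list) -> None:
    # keep a ref to the original bound handler on the event source
    orig_handler = source_vb.mouseDragEvent

    def broadcast(ev, axis=None):
        # plain python-loop relay: source first, then every consumer
        orig_handler(ev, axis=axis)
        for vb in relayees:
            vb.mouseDragEvent(ev, axis=axis)

    # single dynamic method override on the source `ViewBox`; no
    # signal/slot rewiring or handler modification needed.
    source_vb.mouseDragEvent = broadcast
```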
Mainly this involves instantiating our overridden `PlotItem` in a few
places and tweaking type annots. A further detail is that inside
the fsp sub-chart creation code we hide some axes for overlays in the
flows subchart; these were previously somehow hidden implicitly?
Fork out our patch set submitted to upstream in multiple PRs (since they
aren't moving and/or aren't a priority to core) which can be seen in
full from the following diff:
https://github.com/pyqtgraph/pyqtgraph/compare/master...pikers:pyqtgraph:graphics_pin
Move these type extensions into the internal `.ui._pg_overrides` module.
The changes are related to both `pyqtgraph.PlotItem` and `.AxisItem` and
were driven by our need for multi-view overlays (overlaid charts with
optionally synced axis and interaction controls) as documented in the PR
to upstream: https://github.com/pyqtgraph/pyqtgraph/pull/2162
More specifically,
- wrt `AxisItem` we added lru caching of tick values as per:
https://github.com/pyqtgraph/pyqtgraph/pull/2160.
- wrt `PlotItem` we adjusted some of the axis management code, namely
adding a standalone `.removeAxis()` and modifying the `.setAxisItems()`
logic to use it in: https://github.com/pyqtgraph/pyqtgraph/pull/2162, as
well as some tweaks to `.updateGrid()` to loop through all possible axes
when setting the grid.