Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when `end_dt=None`, indicating "get the latest bar". Add
a breakpoint block that should trigger whenever the latest bar's time
is mismatched against the latest epoch time; we'll remove this once
testing verifies the history bars issue is resolved.
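A minimal sketch of both pieces; `request_bars()` and the bar/epoch
values are stand-ins for illustration, not the real `Client.bars()`
internals:

```python
import pendulum

def bars(end_dt=None):
    if end_dt is None:
        # provider indexing rounds down/back to the previous time
        # step, so pad forward a minute to include the latest bar.
        end_dt = pendulum.now('UTC').add(minutes=1)

    # `request_bars()` is a hypothetical stand-in for the provider call
    bars = request_bars(end_dt=end_dt)
    latest_epoch = pendulum.now('UTC').int_timestamp

    # temporary debug block: trip a debugger whenever the latest
    # bar's time mismatches the latest epoch time; to be removed
    # once testing verifies the history-bars issue is resolved.
    if bars[-1]['time'] != latest_epoch:
        breakpoint()

    return bars
```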
Further, this drops the legacy `backfill_bars()` endpoint, which has
been deprecated and unused for a while.
Instead of requiring any `-b`, try to import all built-in broker
backend python modules by default and only load those detected from the
input symbol list's fqsn values. In other words the `piker chart` cmd
can now be run without `-b`; that flag is only required if you want to
load only a subset of the built-ins or are trying to load a specific
not-yet-built-in backend.
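A hypothetical sketch of that default loading logic; the backend list
and the fqsn-suffix parsing helper are assumptions for illustration:

```python
import importlib

builtins = ['binance', 'ib', 'kraken', 'questrade']

def brokers_from_fqsns(fqsns: list[str]) -> set[str]:
    # an fqsn carries its broker name as the last dotted suffix,
    # eg. 'btcusdt.binance' -> 'binance'
    return {fqsn.rsplit('.', 1)[-1] for fqsn in fqsns}

# only import the built-ins actually referenced by the symbol list
mods = {
    name: importlib.import_module(f'piker.brokers.{name}')
    for name in brokers_from_fqsns(['btcusdt.binance', 'xbtusd.kraken'])
    if name in builtins
}
```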
Allows using `set` ops for subscription management and guarantees no
duplicates per `brokerd` actor. New API is simpler for dynamic
pause/resume changes per `Feed`:
- `_FeedsBus.add_subs()`, `.get_subs()`, `.remove_subs()` all accept multi-sub
`set` inputs.
- `Feed.pause()` / `.resume()` encapsulate sending a msg at most once
on each unique underlying IPC msg stream.
Use the new API in the sampler task.
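A rough sketch of the `.pause()` side of this; the plain-string msg
payload is an assumption:

```python
import tractor

class Feed:
    flumes: dict  # fqsn -> Flume, per the flume commits below

    async def pause(self) -> None:
        # dedupe with a `set` so each *unique* underlying IPC msg
        # stream gets the pause msg at most once, even when many
        # flumes share one multiplexed stream.
        streams: set[tractor.MsgStream] = {
            flume.stream for flume in self.flumes.values()
        }
        for stream in streams:
            await stream.send('pause')
```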
Previously we would only detect overruns and drop subscriptions on
non-throttled feed subs; however, you can hit the same issue with
a wrapping throttler task:
- the intermediate mem chan can be blocked because the throttler task
is too slow, in which case we still want to warn about it
- the stream's IPC channel actually breaks and we still want to drop
the connection and subscription so it doesn't become a source of
stale backpressure.
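Illustrative-only handling of the two cases; `send_chan`, `log`,
`bus`, `key` and `sub` are stand-ins:

```python
import trio

try:
    send_chan.send_nowait(quote)
except trio.WouldBlock:
    # the throttler (or consumer) task is too slow: warn, but keep
    # the subscription alive.
    log.warning(f'overrun on {key}: consumer too slow?')
except (
    trio.BrokenResourceError,
    trio.ClosedResourceError,
):
    # the IPC channel actually broke: drop the connection and
    # subscription so it can't become a source of stale
    # backpressure.
    bus.remove_subs(key, {sub})
```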
Set each quote-stream by matching the provider for each `Flume`, which
results in some flumes mapping to the same (multiplexed) stream.
Monkey-patch the equivalent `tractor.MsgStream._ctx: tractor.Context` on
each broadcast-receiver subscription to allow use by feed bus methods as
well as other internals which need to reference IPC channel/portal info.
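A minimal sketch of the monkey-patch, assuming `stream` is the
(possibly shared) `tractor.MsgStream` being subscribed:

```python
async with stream.subscribe() as bstream:
    # broadcast receivers don't carry the parent stream's context;
    # patch it on so feed bus methods (and other internals) can
    # reach the IPC channel/portal info.
    bstream._ctx = stream._ctx
```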
Start a `_FeedsBus` subscription management API:
- add `.get_subs()` which returns the list of tuples registered for the
given key (normally the fqsn).
- add `.remove_sub()` which allows removing by key and tuple value and
provides encapsulation for sampler task(s) which deal with dropped
connections/subscribers.
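A hypothetical shape for those two methods; the sub-tuple contents
(stream, throttle rate, etc.) are assumptions:

```python
from collections import defaultdict

class _FeedsBus:
    def __init__(self) -> None:
        self._subscriptions: dict[str, list[tuple]] = defaultdict(list)

    def get_subs(self, key: str) -> list[tuple]:
        # `key` is normally the fqsn
        return self._subscriptions[key]

    def remove_sub(self, key: str, sub: tuple) -> None:
        # encapsulated removal for sampler task(s) dealing with
        # dropped connections/subscribers.
        self._subscriptions[key].remove(sub)
```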
Adds provider-list-filtered (quote) stream multiplexing support allowing
for merged real-time `tractor.MsgStream`s using an `@acm` interface.
Behind the scenes we are just doing a classic multi-task push to
a common mem chan.
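A bare-bones version of the technique with illustrative names; each
input `stream` is assumed to be an async iterable of msgs:

```python
import trio
from contextlib import asynccontextmanager as acm

@acm
async def open_muxed_stream(streams: list):
    send, recv = trio.open_memory_channel(100)

    async def push(stream) -> None:
        # push every msg from this stream into the common chan
        async for msg in stream:
            await send.send(msg)

    async with trio.open_nursery() as n:
        for stream in streams:
            n.start_soon(push, stream)

        # the consumer just async-iterates the merged receiver
        yield recv
        n.cancel_scope.cancel()
```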
Details to make it work on `Feed`:
- add `Feed.mods: dict[str, ModuleType]` and
`Feed.portals: dict[ModuleType, tractor.Portal]`, both populated
during init in `open_feed()`.
- drop `Feed.portal` and `Feed.name`
Also fix a final lingering tsdb history loading loop termination bug.
A slight facepalm, but the main issue was a simple indexing logic
error: we need to slice with `tsdb_history[-shm._first.value:]` to push
the most recent history, not the oldest. This also allows cleanup of
the tsdb backfill loop.
Further, greatly simplify `diff_history()` time slicing by using the
classic `numpy` conditional slice on the epoch field.
This had a bug prior where the end of a frame (a partial) wasn't being
sliced correctly and we'd get odd gaps between the data backfilled from
`brokerd` and the tsdb's end index. Repair this by doing timeframe-aware
index diffing in `diff_history()`, which seems to resolve it. Also, use
the frame-result's `end_dt: datetime` for the loop exit condition.
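For reference, the simplified slice looks roughly like this; the
'time' field name and the "newest tsdb datetime" input are
assumptions:

```python
import numpy as np

def diff_history(
    array: np.ndarray,
    last_tsdb_dt,  # datetime of the newest tsdb row, or None

) -> np.ndarray:
    if last_tsdb_dt is None:
        # no tsdb history yet, ship the whole frame
        return array

    # classic `numpy` conditional slice on the epoch field: keep
    # only rows strictly newer than what the tsdb already holds.
    return array[array['time'] > last_tsdb_dt.timestamp()]
```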
Sync the per-symbol sampler loop start to subscription registration
such that the loop can't start until the consumer's stream subscription
is added; the task-sync uses a `trio.Event` (sketched below). This
patch also drops a ton of commented cruft.
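A condensed, runnable toy of the sync pattern; all names are
illustrative:

```python
import trio

async def main():
    sub_registered = trio.Event()
    subs: set[str] = set()

    async def sample_and_broadcast():
        # the loop can't start until the consumer's stream
        # subscription has been added.
        await sub_registered.wait()
        print(f'sampling for subs: {subs}')

    async with trio.open_nursery() as n:
        n.start_soon(sample_and_broadcast)
        # the feed-bus endpoint registers the sub then fires
        # the event to release the sampler loop.
        subs.add('btcusdt.binance')
        sub_registered.set()

trio.run(main)
```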
Further adjustments needed to get parity with prior functionality:
- pass the init msg's 'symbol_info' field to `Symbol.broker_info: dict`.
- ensure the `_FeedsBus._subscriptions` table uses the broker-specific
symbol (without the brokername suffix) as keys for lookup so that the
sampler loop doesn't have to append the brokername as a suffix.
- ensure the `open_feed_bus()` flumes-table-msg returned via
`tractor.Context.started()` uses the `.to_msg()` form of all flume
structs.
- ensure `maybe_open_feed()` uses `tractor.MsgStream.subscribe()` on all
`Flume.stream`s on cache hits using the
`tractor.trionics.gather_contexts()` helper, as sketched below.
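A hedged sketch of that cache-hit path, assuming `feed` is the cached
`Feed` with the `.streams`/`.flumes` tables described next:

```python
from tractor.trionics import gather_contexts

async def resubscribe(feed) -> None:
    async with gather_contexts(
        mngrs=[
            stream.subscribe()
            for stream in feed.streams.values()
        ],
    ) as bstreams:
        for bstream, flume in zip(bstreams, feed.flumes.values()):
            # each cache-hit consumer reads via its own
            # broadcast-receiver clone of the shared stream.
            flume.stream = bstream
```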
Orient shm-flow-arrays around the new idea of a `Flume` which provides
access, mgmt and basic measure of real-time data flow sets (see water
flow management semantics).
- We discard the previous idea of an "init message" which contained all
the shm attachment info and instead send a startup message full of
`Flume.to_msg()`s which are symmetrically loaded on the caller actor
side (a skeletal msg round-trip is sketched after this list).
- Create data-flow "entries" for every passed-in fqsn such that the
consumer gets back streams and shm for each, now all wrapped in
`Flume` types. For now we allocate `brokermod.stream_quotes()` tasks
1-to-1 for each fqsn (instead of expecting each backend to do
multiplexing, though we might want that eventually) as well as
a `_FeedsBus._subscriber` entry for each. The pause/resume management
loop is adjusted to match.
Previously `Feed`s were allocated 1-to-1 with each fqsn.
- Make `Feed` a `Struct` subtype instead of a `@dataclass` and move all
flow specific attrs to the new `Flume`:
- move `.index_stream()`, `.get_ds_info()` to `Flume`.
- drop `.receive()`: each fqsn entry will now require knowledge of
separate streams by feed users.
- add multi-fqsn tables: `.flumes`, `.streams` which point to the
appropriate per-symbol entries.
- Async load all `Flume`s from all contexts and all quote streams using
`tractor.trionics.gather_contexts()` on the client `open_feed()` side.
- Update feeds test to include streaming 2 symbols on the same (binance)
backend.
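The msg round-trip from the first bullet, skeletally; real `Flume`
fields beyond these are elided and the shapes are assumptions:

```python
from msgspec import Struct

class Flume(Struct):
    fqsn: str
    shm_token: dict   # shm attachment info
    first_quote: dict

    def to_msg(self) -> dict:
        # flatten to a wire-ready dict for `Context.started()`
        return {
            'fqsn': self.fqsn,
            'shm_token': self.shm_token,
            'first_quote': self.first_quote,
        }

    @classmethod
    def from_msg(cls, msg: dict) -> 'Flume':
        # symmetric load on the caller actor side
        return cls(**msg)
```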
Initial test that starts a `binance` feed and reads the quote messages
alongside shm buffers for 1s and 1m OHLC; just prints to console for
now.
Template out parametrization for multi-symbol quote-multiplexed feeds,
which are coming soon B)
This is to prep for multi-symbol feeds and charts, so we accept
a sequence of fqsns in the top-level entrypoints as well as the
`.data.feed.open_feed()` API (though we're not actually supporting true
multiplexed feeds nor shm lookups per fqsn yet).
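Hypothetical usage once that lands; everything but the
sequence-of-fqsns argument is an assumption:

```python
import trio
from piker.data.feed import open_feed

async def main():
    async with open_feed(
        ['btcusdt.binance', 'ethusdt.binance'],
    ) as feed:
        # one flume (stream + shm) per requested fqsn
        for fqsn, flume in feed.flumes.items():
            async for quote in flume.stream:
                print(fqsn, quote)
                break

trio.run(main)
```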