This took a teensy bit of reworking in some `.ui` modules
more or less in the following order of functional dependence:
- add a `Ctl-R` kb-binding to trigger a `Viz.reset_graphics()` in
the kb-handler task `handle_viewmode_kb_inputs()`.
- call the new method on all `Viz`s (for all sample-rates) of the
`DisplayState` refs provided in a (new input)
`dss: dict[str, DisplayState]` table, which was originally init-ed
from the multi-feed display loop (so orig in `.graphics_update_loop()`
but now provided as an input to that func, see below..)
- `._interaction`: allow binding in `async_handler()` kwargs (via
a `functools.partial`) passed to `ChartView.open_async_input_handler()`
such that our kb+mouse handler funcs can accept arbitrary extra inputs,
i.e. "wtv we desire" (see the binding sketch after this list).
- use ^ to bind in the aforementioned `dss` display-state table to
said handlers!
- define the `dss` table (as mentioned) inside `._display.display_symbol_data()`
and pass it into the update loop funcs as well as the newly augmented
`.open_async_input_handler()` calls,
- drop calling `chart.view.open_async_input_handler()` from the
`.order_mode.open_order_mode()`'s enter block and instead factor it
into the caller to support passing the `dss` table to the kb
handlers.
- comment out the original history update loop handling of forced `Viz`
redraws entirely since we now have a manual method via `Ctl-R`.
- now, just update the `._remote_ctl.dss: dict` with this table since
we want to also provide rc for **all** loaded feeds, not just the
currently shown one/set.
- docs, naming and typing tweaks to `._event.open_handlers()`
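For reference, the kwarg-binding pattern is roughly the following (a
sketch only; the handler-kwarg name passed to
`open_async_input_handler()` here is illustrative, not necessarily the
real signature):

```python
from functools import partial

# sketch: inside `._display.display_symbol_data()` (names per the
# bullets above).
dss: dict[str, DisplayState] = {}  # fqme -> DisplayState, per loaded feed

chart.view.open_async_input_handler(
    # bind in whatever extra inputs the kb/mouse handlers need
    handlers=partial(
        handle_viewmode_kb_inputs,
        dss=dss,
    ),
)
```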
Since we're now using it across multiple layers it probably makes sense
to impl and wrap it more correctly / publicly. The main (recent) use
case is when editing an underlying time series and then wanting to
refresh the graphics layers to reflect the changes in a chart. Part of
this also obviously includes wiping the y-range mx/mn cache.
Also ensure that `force_redraw` is proxying through to any `BarItems`
via the new `render_baritems()` func kwarg even when switching between
downsampled-line vs. bars modes.
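A rough sketch of what the reset helper amounts to (the `._mxmns` cache
and `.update_graphics()` names below are placeholders, not the actual
attrs):

```python
class Viz:
    # ... (sketch; placeholder attr/method names)

    def reset_graphics(self) -> None:
        '''
        Force a full graphics re-render of this viz, eg. after the
        underlying time series was edited.

        '''
        # wipe the y-range mx/mn cache so the next range scan is fresh
        self._mxmns.clear()

        # push a full redraw through the render machinery; in the ohlc
        # case `force_redraw` should proxy through to any `BarItems`
        # via `render_baritems()`.
        self.update_graphics(force_redraw=True)
```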
Since `polars` has a more sane set of (time-zone aware) datetime APIs it
makes more sense and is definitely no slower than the previous `numpy`
impl. Also, actually use the sample-rate specific formats defined in
`DynamicDateAxis.tick_tpl: dict[int, str]`, finally using the new
`Viz.time_step()` property.
Since we end up needing the actual (OHLC sampled) time step info (at
least in seconds) for various purposes (in this specific follow up use
case to determine sample-rate specific `datetime` format strings for
a charted time series x-axis label), allow always reading it from the
viz with the presumption (at least for now) the underlying data-frame
will have an epoch `'time'` col/field.
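A minimal sketch of the idea (assuming, per the above, an epoch
`'time'` field on the underlying (shm) array; the real impl likely
caches the value):

```python
import numpy as np

class Viz:
    # ... (sketch only; assumes a `.shm` handle to the source struct-array)

    def time_step(self) -> float:
        '''
        The OHLC sampling period in seconds, read straight from the
        epoch `'time'` col/field of the underlying array.

        '''
        times: np.ndarray = self.shm.array['time']
        # median of the 1st-differences is robust to the odd gap row
        return float(np.median(np.diff(times)))
```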
Thanks to oremanj in the `trio` room for this hot style tip which I much
prefer since it means fewer LOC and fewer places to change sub-pkg name
exports!
Also drop expecting a `gaps` frame output from `dedupe()`.
Turns out we were always filtering to time gaps longer than a day smh..
Instead tweak `detect_time_gaps()` to only return venue-gaps when
a `gap_dt_unit: str` is passed, and pass `'days'` (like it was by default
before) from `dedupe()`, though we should really pass in an actual venue
gap duration in the future.
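Roughly the shape of the new filtering (a standalone sketch assuming an
epoch-float `'time'` column; the real `detect_time_gaps()` signature
and dtype handling likely differ):

```python
import polars as pl

def detect_time_gaps(
    df: pl.DataFrame,
    time_col: str = 'time',           # epoch-float sampled times
    expect_period: float = 60,        # sampling period in seconds
    gap_dt_unit: str | None = None,   # eg. 'days' -> venue gaps only
) -> pl.DataFrame:
    # all rows whose time-diff to the prior row exceeds the expected
    # sampling period, ie. candidate gaps.
    gaps: pl.DataFrame = (
        df
        .with_columns(pl.col(time_col).diff().alias('dt_diff'))
        .filter(pl.col('dt_diff') > expect_period)
    )
    if gap_dt_unit is None:
        return gaps

    # only deliver "venue gaps": gaps at least as long as the given
    # duration unit (eg. `'days'` -> >= 1 day).
    unit_secs: float = {
        'minutes': 60,
        'hours': 3_600,
        'days': 86_400,
    }[gap_dt_unit]
    return gaps.filter(pl.col('dt_diff') >= unit_secs)
```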
In theory the `async for msg` loop can be re-purposed without having to
always call `remote_annotate()` so factor it into a new
`serve_rc_annots()` and then just call it from the former (for now) with
the wrapping `try:` block outside to delete per-client-ctx annotation
instance sets. Also, use some type aliases instead of repeatedly
defining the same complex `dict`-table defs B)
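The aliases are along these lines (names illustrative):

```python
from pyqtgraph.Qt import QtWidgets
import tractor

# instead of re-typing the full `dict`-table defs everywhere:
AnnotID = int  # we key annots by `id(obj)` of the UI-side graphics item
Annots = dict[AnnotID, QtWidgets.QGraphicsItem]
# per-remote-client (ctx) table of the annots it created, so they can
# all be deleted when that client's ctx tears down.
IpcAnnotsTable = dict[tractor.Context, Annots]
```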
In prep for supporting reverse-ipc connect-back to UI actors from
middle-ware systems (for the purposes of triggering data-view canvas
re-renders and built-in tsp annotations), add a new struct type to
better generalize the management of remote feed subscriptions. Include
a `Sub.rc_ui: bool` for now (with nearby todo-comment) and expose an
`allow_remote_ctl_ui: bool` through the feed endpoints to help drive
/ prep for all that ^
Rework all the sampler tasks to expect the `Sub`'s new iface (sketched
below):
- split up the `Sub.ipc: MsgStream` and `.send_chan` as separate fields
since we're handling the throttle case in separate
`sample_and_broadcast()` logic blocks anyway and this avoids needing to
monkey-patch on the `._ctx` malarkey..
- explicitly provide the optional handle to the `_throttle_cs:
CancelScope` again for the case where throttling/event-downsampling is
requested.
- add `_FeedsBus.subs_items()` as a public iterator.
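Sketched as a plain dataclass here (field names per the bullets above,
defaults assumed):

```python
from dataclasses import dataclass

import tractor
import trio

@dataclass
class Sub:
    '''
    Per-consumer feed subscription entry (sketch).

    '''
    ipc: tractor.MsgStream  # the raw (non-throttled) msg stream
    # only set when the client requested quote-rate throttling
    send_chan: trio.MemorySendChannel | None = None
    _throttle_cs: trio.CancelScope | None = None
    # TODO: actually wire this through on the UI side..
    rc_ui: bool = False  # client requested reverse-ipc UI control
```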
Apparently `.storage.nativedb.mk_ohlcv_shm_keyed_filepath()` was always
kinda broken if you passed in a `period: float` with an actual non-`int`
value to the format string? Fixed it to strictly cast to `int()` before
str-ifying so that you don't get a weird `60.0s.parquet` in there..
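Ie. roughly this shape of fix (real signature / naming scheme may
differ slightly):

```python
from pathlib import Path

def mk_ohlcv_shm_keyed_filepath(
    fqme: str,
    period: float,  # ohlc sample period in seconds
    datadir: Path,
) -> Path:
    # strictly cast so a `60.0` never leaks into the filename
    period_s: int = int(period)
    return datadir / f'{fqme}.ohlcv{period_s}s.parquet'
```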
Further this rejigs the `store ldshm` gap correction-annotation loop to,
- use `StorageClient.write_ohlcv()` instead of hackily re-implementing
it.. now that problem from above is fixed!
- use a `needs_correction: bool` var to determine if gap markup and
de-duplicated data should be pushed to the shm buffer,
- go back to using `AnnotCtl.add_rect()` for all detected gaps such that
they all persist (and thus are shown together) until the client
disconnects.
For non-full-`.__aexit__()` handlers we need this method instead (facepalm).
Also create and assign the `AnnotCtl._annot_stack: AsyncExitStack` just
before yielding the client since it's not needed prior and ensures annot
removal happens **before** ipc teardown.
Since leaking annots to a remote `chart` actor probably isn't a thing we
want to do (often), add a removal/deletion handler block to the
`remote_annotate()` ctx which can be triggered using a `{rm_annot: aid}`
msg.
Augment the `AnnotCtl` with,
- `.remove()` which sends said msg (from above) and returns a `bool`
indicating success.
- add an `.open_rect()` acm which does the `.add_rect()` / `.remove()`
calls underneath for use in scope oriented client usage (see the sketch
after this list).
- add a `._annot_stack: AsyncExitStack` which will always have any/all
non-`.open_rect()` calls to `.add_rect()` register removal on client
teardown, to avoid leaking annots when a client finally disconnects.
- comment out the `.modify()` meth idea for now.
- rename all `Xstream` var-tags to `Xipc` names.
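The scoped-annot bit looks something like (sketch):

```python
from contextlib import asynccontextmanager as acm

class AnnotCtl:
    # ... (sketch of just the scoped-annot api)

    @acm
    async def open_rect(self, **rect_kwargs):
        '''
        Scoped `.add_rect()`: the annotation is removed when the
        `async with` block exits instead of on client teardown.

        '''
        aid: int = await self.add_rect(**rect_kwargs)
        try:
            yield aid
        finally:
            await self.remove(aid)
```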
Got borked by the logic re-factoring to get more concurrency going
around tsdb vs. latest frame loads with nested nurseries. So, repair all
that such that we can still backfill symbols previously not loaded, as
well as drop all the `_FeedsBus` instance passing to subtasks where it's
definitely not needed.
Toss in a pause point around sampler stream `'backfilling'` msgs as well
since there seems to be a weird ctx-cancelled propagation going on
when a feed client disconnects during backfill and this might be where
the src `tractor.ContextCancelled` is getting bubbled from?
Obvi took a little `.ui` component fixing (as per prior commits) but
this is now a working PoC for gap detection and markup from a remote
(data) non-`chart` actor!
Iface and impl deats from `.ui._remote_ctl`:
- add new `open_annot_ctl()` mngr which attaches to all locally
discoverable chart actors, gathers annot-ctl streams per fqme set, and
delivers a new `AnnotCtl` client which allows adding annotation
rectangles via a `.add_rect()` method (example usage below).
- also template out some other soon-to-be-added methods for removing and
modifying pre-existing annotations on some `ChartView` 💥
- ensure the `chart` CLI subcmd starts the (`qtloops`) guest-mode init
with the `.ui._remote_ctl` module enabled.
- actually use this stuff in the `piker store ldshm` CLI to submit
markup rects around any detected null/time gaps in the tsdb data!
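Example client-side usage (a sketch; the exact `.add_rect()` kwargs
shown are illustrative, not a finalized api):

```python
from piker.ui._remote_ctl import open_annot_ctl

async def markup_gap(
    fqme: str,
    start: tuple[float, float],  # (epoch time, price) of gap start
    end: tuple[float, float],    # (epoch time, price) of gap end
) -> None:
    async with open_annot_ctl() as actl:
        # rect persists on the remote chart until this client
        # disconnects.
        aid: int = await actl.add_rect(
            fqme=fqme,
            timeframe=60,
            start_pos=start,
            end_pos=end,
        )
```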
Still lots to do:
- probably colorization of gaps depending on if they're venue
closures (aka real mkt gaps) vs. "missing data" from the backend (aka
timeseries consistency gaps).
- run gap detection and markup as part of the std `.tsp` sub-sys
runtime such that gap annots are a std "built-in" feature of
charting.
- support for epoch time stamp AND abs-shm-index rect x-values
(depending on chart operational state).
As mentioned in a prior commit this was the (seemingly, and so far) only
way to make our `.select_box` annotator's shift-click rect work properly
(and the same as `ViewBox.rbScaleBox`): by adopting the code around
`ViewBox.rbScaleBox` (which we now also disable). That means also
passing the scene coords to `SelectRect.set_scen_pos()`. Also add in the
proper `ev: pyqtgraph.GraphicsScene.mouseEvents.MouseDragEvent` type so
we can actually figure out what the hell all this pg custom mouse-event
stuff is XD
Turns out using the `.setRect()` method was the main cause of the issue
(though still don't really understand how or why) and this instead
adopts verbatim the code from `pg.ViewBox.updateScaleBox()` which uses
a scaling transform to set the rect for the "zoom scale box" thingy.
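For reference, the transform trick being adopted is roughly the
following (re-sketched standalone; assumes the rect item's base rect is
the unit square, like pg's `rbScaleBox`):

```python
from pyqtgraph.Qt import QtCore, QtGui

def scale_rect_to(
    rect_item,  # eg. a `QGraphicsRectItem` with base rect (0, 0, 1, 1)
    p1: QtCore.QPointF,
    p2: QtCore.QPointF,
) -> None:
    # instead of `.setRect()`, position the unit-rect item at the
    # drag-rect's origin and scale it into place with a transform.
    r = QtCore.QRectF(p1, p2)
    rect_item.setPos(r.topLeft())
    tr = QtGui.QTransform.fromScale(r.width(), r.height())
    rect_item.setTransform(tr)
    rect_item.show()
```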
Further add a shite ton more improvements and interface tweaks in
support of the new remote-annotation control msging subsys:
- re-impl `.set_scen_pos()` to expect `QGraphicsScene` coordinates (i.e.
passed from the interaction loop) and pass scene `QPointF`s from
`ViewBox.mouseDragEvent()` using `MouseDragEvent.scenePos()` and
friends; this is required to properly use the transform-setting
approach to resize the select-rect as mentioned above.
- add an `as_point()` converter to maybe-cast python `tuple[float, float]`
inputs (prolly from IPC msgs) to equivalent `QPointF`s (sketch below).
- add a ton more detailed Qt-obj-related typing throughout our deriv.
- call `.add_to_view()` from init so that whatever view is passed in
during instantiation is always set as the `.vb` after creation.
- factor the (proxy widget) label creation into a new `.init_label()`
so that both the `set_scen/view_pos()` methods can call it and just
generally decouple rect-pos mods from label content mods.
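The `as_point()` helper is basically just (sketch):

```python
from __future__ import annotations

from pyqtgraph.Qt import QtCore

def as_point(
    pair: QtCore.QPointF | tuple[float, float],
) -> QtCore.QPointF:
    '''
    Maybe-cast a raw `(x, y)` pair (eg. from an IPC msg) to a `QPointF`.

    '''
    if isinstance(pair, QtCore.QPointF):
        return pair
    return QtCore.QPointF(*pair)
```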
Since we can and want to eventually allow remote control of pretty much
all UIs, this drafts out a new `.ui._remote_ctl` module with a new
`@tractor.context` called `remote_annotate()` which simply starts a msg
loop allowing (eventual) initial control of a `SelectRect` through IPC
msgs.
Remote controller impl deats (see the msg-loop sketch after this list):
- make `._display.graphics_update_loop()` set a `._remote_ctl._dss:
dict` for all chart actor-global `DisplayState` instances which can
then be controlled from the `remote_annotate()` handler task.
- also stash any remote client controller `tractor.Context` handles in
a module var for broadband IPC cancellation on any display loop
shutdown.
- draft a further global map to track graphics object instances since
likely we'll want to support remote mutation where the client can use
the `id(obj): int` key as an IPC handle/uuid.
- just draft out a client-side `@acm` for now: `open_annots_client()` to
be filled out in upcoming commits.
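The msg-loop shape is roughly (a sketch; the msg schema shown is just
the draft idea, not a finalized protocol, and `_ctxs` is a placeholder
name for the stashed remote-controller handles):

```python
import tractor

# module-level tables (per the bullets above)
_dss: dict[str, 'DisplayState'] = {}   # set by the display loop
_ctxs: set[tractor.Context] = set()

@tractor.context
async def remote_annotate(ctx: tractor.Context) -> None:
    _ctxs.add(ctx)
    await ctx.started()
    async with ctx.open_stream() as ipc:
        async for msg in ipc:
            match msg:
                case {'annot': 'SelectRect', 'fqme': fqme, **draw_kwargs}:
                    ds = _dss[fqme]  # display state for this feed
                    # create a `SelectRect`, `.add_to_view()` it on the
                    # feed's chart view, set its pos from the msg
                    # coords, etc.
                    ...
```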
UI component tweaks in support of the above:
- change/add `SelectRect.set_view_pos()` and `.set_scene_pos()` to allow
specifying the rect coords in either of the scene or viewbox domains.
- use these new apis in the interaction loop.
- add a `SelectRect.add_to_view()` to avoid having annotation client
code knowing "how" a graphics obj needs to be added and can instead
just pass only the target `ChartView` during init.
- drop all the status label updates from the display loop since they
don't really work all the time, and probably it's not a feature we
want to keep in the longer term (over just console output and/or using
the status bar for simpler "current state / mkt" infos).
- allows a bit of simplification of `.ui._fsp` method APIs to not pass
around status (bar) callbacks as well!
Can't ref `dt_eps` and `tsdb_entry` if they don't exist.. like for 1s
sampling from `binance` (which dne). So make sure to add a better logic
guard and only open the final backload nursery if we actually need to
fill the gap between latest history and where tsdb history ends.
TO CHERRY #486
Also toss in a poll loop around the `hist_shm: ShmArray` backfill
read-check in the `.data.allocate_persistent_feed()` init to cope with
possible racy-ness from the increased tsdb history loading concurrency
now implemented.
Move `.data.history` -> `.tsp.__init__.py` for now as main pkg-mod
and `.data.tsp` -> `.tsp._anal` (for analysis).
Obviously follow commits will change surrounding codebase (imports) to
match..
Previously we were actually failing silently too fast instead of
actually trying multiple times (now we retry up to 100) before finally
raising any timeout in the final loop's `else:` block.
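Ie. the retry now has this general shape (not the literal code):

```python
from typing import Callable

import trio

async def poll_or_timeout(
    ready: Callable[[], bool],
    tries: int = 100,
    period: float = 0.1,
) -> None:
    for _ in range(tries):
        if ready():
            break
        await trio.sleep(period)
    else:
        # only reached when every attempt failed (the loop never broke)
        raise TimeoutError(f'Not ready after {tries} tries?!')
```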
Thinking about just moving all of that module (after a content breakup)
to a new `.piker.tsp` which will mostly depend on the `.data` and
`.storage` sub-pkgs; the idea is to move biz-logic for tsdb IO/mgmt and
orchestration with real-time (shm) buffers and the graphics layer into
a common spot for both manual analysis/research work and better
separation of low level data structure primitives from their higher
level usage.
Add a better `data.history` mod doc string in prep for this move
as well as clean out a bunch of legacy commented cruft from the
`trimeter` and `marketstore` days.
TO CHERRY #486 (if we can)
For each timeframe open a sub-nursery to do the backfilling + tsdb load
+ null-segment scanning in an effort to both speed up load time (though
we need to reverse the current order to really make it faster rn since
moving to the much faster parquet file backend) and do concurrent
time-gap/null-segment checking of tsdb history while mrf (most recent
frame) history is backfilling.
The details are more or less just `trio` related task-func composition
tricks and a reordering of said funcs for optimal startup latency.
Also commented the `back_load_from_tsdb()` task for now since it's
unused.
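Structurally the task composition is now something like (a sketch with
placeholder task-funcs):

```python
import trio

async def load_tsdb_hist(timeframe: int) -> None:
    ...  # read whatever history the (parquet) tsdb already has

async def start_backfill(timeframe: int) -> None:
    ...  # query the backend for most-recent-frame history

async def backfill_timeframe(timeframe: int) -> None:
    # per-timeframe sub-nursery: tsdb load (+ null-segment scanning)
    # and mrf backfilling overlap in time.
    async with trio.open_nursery() as tn:
        tn.start_soon(load_tsdb_hist, timeframe)
        tn.start_soon(start_backfill, timeframe)

async def tsdb_backfill(timeframes: tuple[int, ...] = (1, 60)) -> None:
    # one sub-nursery per timeframe so 1s and 1m loading run conc.
    async with trio.open_nursery() as tn:
        for tf in timeframes:
            tn.start_soon(backfill_timeframe, tf)
```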
Apparently it returns the index of the prior zero-row (prolly since we
do the backward difference) so ensure `fi_zgaps += 1`..
Also fix remaining edge case handling when there's only 2 zero-segs
which was borked after a refactor to the special case blocks (like
a single zero row) prior to the `absi_zsegs` building loop AND make sure
to always return abs indices OUTSIDE the zero seg, i.e. the indices of
the non-zero row just before and just after so that the history
backfiller can use non-zero timestamps to generate range datetimes for
backend frame queries.
Add much more detailed doc-comments with a small ascii diagram to
explain how all these somewhat subtle vec ops work. Also toss in some
sanity checks on the output indices to ensure they don't point to
zero (time) valued rows when used to read the frame.
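The core vec-op trick (a standalone toy version, not the real
`get_null_segs()`; frame-edge cases need the special handling described
above):

```python
import numpy as np

def zero_seg_edges(times: np.ndarray) -> list[tuple[int, int]]:
    '''
    Return (start, stop) index pairs bounding each zeroed segment
    where both indices point to the NON-zero rows just outside it.

    '''
    zi: np.ndarray = np.argwhere(times == 0).flatten()  # indices of zero rows
    if not zi.size:
        return []

    # a backward-diff > 1 marks the first zero-row of each *new*
    # segment, hence the `+ 1` offset (`fi_zgaps += 1` above).
    fi_zgaps: np.ndarray = np.argwhere(np.diff(zi) > 1).flatten() + 1
    seg_starts = np.concatenate(([0], fi_zgaps))           # into `zi`
    seg_stops = np.concatenate((fi_zgaps - 1, [zi.size - 1]))

    return [
        # step one row OUTSIDE the zero-seg on both ends
        (int(zi[s]) - 1, int(zi[e]) + 1)
        for s, e in zip(seg_starts, seg_stops)
    ]
```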
Call it `iter_null_segs()` (for now?) and use in the final (sequential)
stage of the `.history.start_backfill()` task-func. Delivers abs,
frame-relative, and equiv time stamps on each iteration pertaining to
each detected null-segment to make it easy to do piece-wise history
queries for each.
Further,
- handle edge case in `get_null_segs()` where there is only 1 zeroed
row value, in which case we deliver `absi_zsegs` as a single pair of
the same index value and,
- when this occurs `iter_null_segs()` delivers `None` for all the
`start_` related indices/timestamps since all `get_hist()` routines
(delivered by `open_history_client()`) should handle it as being a
"get max history from this end_dt" type query.
- add note about needing to do time gap handling where there's a gap in
the timeseries-history that isn't actually IN the data-history.
Using a bunch of fancy `numpy` vec ops (and ideally eventually extending
the same to `polars`) this is a first draft of `get_null_segs()`,
a `col: str` field-value-is-zero detector which filters to all zero-valued
input frame segments and returns the corresponding useful slice-indexes:
- gap absolute (in shm buffer terms) index-endpoints as
`absi_zsegs` for slicing to each null-segment in the src frame.
- ALL abs indices of rows with zeroed `col` values as `absi_zeros`.
- the full set of the input frame's row-entries (view) which are
null valued for the chosen `col` as `zero_t`.
Use this new null-segment-detector in the
`.data.history.start_backfill()` task to attempt to fill null gaps that
might be extant from some prior backfill attempt. Since
`get_null_segs()` should now deliver a sequence of slices for each gap
we don't really need to have the `while gap_indices:` loop any more, so
just move that to the end-of-func and warn log (for now) if all gaps
aren't eventually filled.
TODO:
-[ ] do the null-seg detection and filling concurrently from
most-recent-frame backfilling.
-[ ] offer the same detection in `.storage.cli` cmds for manual tsp
anal.
-[ ] make the graphics layer actually update correctly when null-segs
are filled (currently still broken somehow in the `Viz` caching
layer?)
CHERRY INTO #486
In an effort to catch out-of-order and/or partial-frame-duplicated
segments, add some `.tsp` calls throughout the backloader tasks
including a call to the new `.sort_diff()` to catch the out-of-order
history cases.
Since the `diff: int` serves as a predicate anyway (when `0` nothing
duplicate was detected) might as well just return it directly since it's
likely also useful for the caller when doing deeper anal.
Also, handle the zero-diff case by just returning early with a copy of
the input frame and a `diff=0`.
CHERRY INTO #486
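Roughly the predicate-style return being described (sketched here as a
generic `polars` de-dup helper, not the actual impl):

```python
import polars as pl

def dedupe(src: pl.DataFrame) -> tuple[pl.DataFrame, int]:
    '''
    Drop duplicate `'time'` rows, returning the cleaned frame and the
    dropped-row count, `diff: int`, which doubles as a "was anything
    duplicated?" predicate for the caller.

    '''
    deduped: pl.DataFrame = src.unique(
        subset=['time'],
        maintain_order=True,
    )
    diff: int = src.height - deduped.height
    if diff == 0:
        # nothing duplicated: return early with a copy of the input
        return src.clone(), 0

    return deduped, diff
```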
Turns out this was the main source of all sorts of gaps and overlaps
in history frame backfilling. The original idea was that when a gap
causes not enough (1m) bars to be delivered (like over a weekend or
holiday) we just implicitly do another frame query to try and at
least fill out the default duration (normally 1-2 days). Doing the
recursion sloppily was causing all sorts of stupid problems..
It's kinda obvious now what was wrong in hindsight:
- always pass the sampling period (timeframe) when recursing
- adjust the logic to not be mutex with the no-data case (since it
already is mutex..)
- pack to the `numpy` array BEFORE the recursive call to ensure the
`end_dt: DateTime` is selected and passed correctly!
Toss in some other helpfuls:
- more explicit `pendulum` typing imports
- some masked out sorted-diffing checks (that can be enabled when
debugging out-of-order frame issues)
- always error log about less-than time step mismatches since we should never
have time-diff steps **smaller** than specified in the
`sample_period_s`!