When we see multiple history frames that are duplicates of the requested
set, bail on re-trying after a number of attempts (6 just cuz) and
return early from the tsdb backfill loop; presume that this many
duplicates means we've hit the beginning of history. Use a
`collections.Counter` for the duplicate counts and make sure to warn-log
in such cases.
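For reference, a rough sketch of the bail-out pattern (the
`get_history()` fetcher and the frame key here are hypothetical
stand-ins, not the actual backfill API):

```python
from collections import Counter
import logging

log = logging.getLogger(__name__)

MAX_DUPLICATES: int = 6  # arbitrary cutoff per the above

async def backfill_until_start(get_history) -> None:
    # `get_history()` is a hypothetical stand-in for the frame fetcher.
    dupe_counts: Counter = Counter()

    while True:
        frame = await get_history()
        key = (frame['start'], frame['end'])  # hypothetical frame identity

        dupe_counts[key] += 1
        if dupe_counts[key] > MAX_DUPLICATES:
            # presume we've hit the start of available history
            log.warning(
                f'{key} duplicated {dupe_counts[key]} times; '
                'presuming start of history and bailing backfill loop'
            )
            return

        # ... otherwise write the frame to the tsdb / shm buffer ...
```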
Turns out we were updating the wrong `Viz`/`DisplayState` inside the
closure-style `increment_history_view()` (probably due to looping
through the flumes and dynamically closing over the loop var in that
task-func). Instead define the history incrementer at module level and
pass in the `DisplayState` explicitly. Further rework the `DisplayState`
attrs to be more focused around the `Viz` associated with the fast and
slow charts and be sure to adjust the output from each `Viz.incr_info()`
call to the latest update. Oh, and just tweaked the line palette for the
moment.
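For reference, a self-contained illustration of the late-binding closure
gotcha described above (generic names, not the actual piker internals)
plus the explicit-arg style fix:

```python
from dataclasses import dataclass

@dataclass
class State:
    name: str
    updates: int = 0

states = [State('fast'), State('slow')]

# buggy: each closure captures `st` by reference, resolved at call time
callbacks = []
for st in states:
    def increment_view():
        st.updates += 1  # always hits states[-1] once the loop has run
    callbacks.append(increment_view)

for cb in callbacks:
    cb()
print([s.updates for s in states])  # -> [0, 2], not [1, 1]

# fix: a module-level incrementer taking its state explicitly
def increment_view_explicit(state: State) -> None:
    state.updates += 1

for st in states:
    increment_view_explicit(st)
print([s.updates for s in states])  # -> [1, 3]
```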
FYI "treading" here is referring to the x-shifting of the curve when
the last datum is in view such that on new sampled appends the "last"
datum is kept in the same x-location in UI terms.
Mainly it was the global should-we-increment logic that needs to be
independent for the fast vs. slow chart such that the slow isn't
update-shifted by the fast and vice versa. We do this using a new
`'i_last_slow'` key in the `DisplayState.globalz: dict` which is a
singleton for each sample-rate-specific chart and works for both time
and array indexing.
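Roughly the idea (assumed structure, not the literal attrs):

```python
# one dict per feed-set; one "last step" entry per sample rate so the
# fast and slow charts decide to x-shift independently
globalz: dict = {
    'i_last': 0.0,        # last x-step seen by the fast chart
    'i_last_slow': 0.0,   # last x-step seen by the slow chart(s)
}

def should_increment(globalz: dict, key: str, i_step: float) -> bool:
    # works the same whether `i_step` is an array index or an epoch time
    if i_step > globalz[key]:
        globalz[key] = i_step
        return True
    return False
```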
Also, we drop some old commented `graphics.draw_last_datum()` code that
never ended up being needed again inside the coordinate cache reset
block.
Might as well since it makes the chart look less gappy and we can easily
flip the index switch now B)
Also adds a new `'i_slow_last'` key to `DisplayState` for a singleton
across all slow charts, and thus there's no more need for special-case
logic in `viz.incr_info()`.
Define the x-domain coords "offset" (determining the curve graphics'
per-datum placement) on each formatter such that there's only one place
to change it when needed. Obviously each graphics type has its own
dimensionality and this is reflected by the array shapes on each
subtype.
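Something like this pattern (class names and offset values are purely
illustrative, not the actual formatter API):

```python
import numpy as np

class LineFmtr:
    # one x point per datum
    x_offset: np.ndarray = np.array([0.0])

class OHLCBarsFmtr:
    # several x points per bar (open arm, body, close arm), all placed
    # by the same per-datum offset rule
    x_offset: np.ndarray = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 1.0])

def x_coords(index: np.ndarray, step: float, fmtr) -> np.ndarray:
    # broadcast each datum's index against the formatter's offset shape
    return index[:, None] + fmtr.x_offset[None, :] * step
```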
Previously we were drawing each bar centered on its index with arms to
either side: +/- some arm length. Instead this changes it so that each
bar is drawn *after* its index/timestamp such that in graphics coords
the bar's span more correctly matches the time span in the x-domain.
This makes the linked region between the slow and fast charts match
directly (without any transform) for epoch-time indexing such that the
last x-coord in view on the fast chart is no more than the next time
step in the (downsampled) slow view.
Deats:
- adjust `._pathops.path_arrays_from_ohlc()` to take a `bar_w` bar
width input (normally taken from the data step size).
- change `.ui._ohlc.bar_from_ohlc_row()` and
`BarItems.draw_last_datum()` to match.
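Roughly, the x-placement change looks like this (a sketch, not the
actual `._pathops` code):

```python
def bar_x_coords(t: float, bar_w: float) -> tuple[float, float, float]:
    # old, centered layout: (t - bar_w/2, t, t + bar_w/2)
    # new layout: open arm at the timestamp itself, body mid-span,
    # close arm just before the next time step
    return (t, t + bar_w / 2, t + bar_w)
```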
Allows easily switching between normal array `int` indexing and time
indexing by just flipping the `Viz._index_field: str`.
Also, guard all the x-data audit breakpoints with a time indexing
condition.
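I.e. something along these lines (assuming a structured source array
with 'index'/'time' fields):

```python
import numpy as np

def x_data(array: np.ndarray, index_field: str = 'time') -> np.ndarray:
    # `index_field` mirrors `Viz._index_field`: either 'index' (array
    # int) or 'time' (epoch seconds); the rest of the render path just
    # reads whichever column this names.
    return array[index_field]
```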
Turned out to be super simple to get the first draft to work since the
fast and slow charts now use the same domain; however, it seems like
maybe there's still an offset issue where the fast may be a couple
minutes ahead of the slow?
Need to dig in a bit..
Using a global "last index step" (via a module var) obviously has
problems when working with multiple feed sets in a single global app
instance: any separate feed-set will be incremented according to an
app-global index-step and thus won't correctly calc per-feed-set-step
update info.
Impl deatz:
- drop `DisplayState.incr_info()` (since it was previously moved to
`Viz`) and call that method on each appropriate `Viz` instance where
necessary; further ensure the appropriate `DisplayState` instance is
passed in to each call as a `state: DisplayState` arg.
- add `DisplayState.hist_vars: dict` for history chart (sets) to
determine the per-feed (not per-set) current slow chart (time) step.
- add `DisplayState.globalz: dict` to house a common per-feed-set state
and use it inside the new `Viz.incr_info()` such that
a `should_increment: bool` can be returned and used by the display
loop to determine whether to x-shift the current chart.
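A sketch of the flow under these assumptions (class names suffixed to
make clear they're not the real types):

```python
from dataclasses import dataclass, field

@dataclass
class DisplayStateSketch:
    # per-feed slow-chart step state and common per-feed-set state
    hist_vars: dict = field(default_factory=dict)
    globalz: dict = field(default_factory=dict)

class VizLike:
    def __init__(self, key: str = 'i_last') -> None:
        self._key = key        # e.g. 'i_last' vs. 'i_last_slow'
        self._latest_x = 0.0   # stand-in for the last datum's x value

    def incr_info(self, ds: DisplayStateSketch) -> tuple[float, bool]:
        i_step = self._latest_x
        last = ds.globalz.get(self._key, float('-inf'))
        should_increment = i_step > last
        if should_increment:
            ds.globalz[self._key] = i_step
        return i_step, should_increment

# display-loop side (pseudo-usage):
# i_step, should_increment = viz.incr_info(ds)
# if should_increment:
#     chart.increment_view(...)  # x-shift only this chart
```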
Read the `Viz.index_step()` directly to avoid always reading 1 on the
slow chart; this was completely broken before and resulted in not
rendering the bars graphic on the slow chart until a true uppx of 1,
which obviously doesn't work for 60-wide bars XD
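The gist of the check (the exact threshold logic is assumed here, not
copied from the render code):

```python
def use_line_graphic(uppx: float, index_step: float) -> bool:
    # old (broken with time indexing): effectively `uppx >= 1`
    # new: scale the cutoff by this viz's own x-domain step so a 60s
    # slow chart shows bars well before uppx ever reaches 1
    return uppx >= index_step
```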
Further cleanups to `._render` module:
- drop the `array` output from `Renderer.render()` and the
`read_from_key` input, and fix the type annots.
- drop `should_line`, `changed_to_line` and `render_kwargs` from the
`render_baritems()` outputs and instead calc the `should_redraw` logic
inside the func body and return it as an output.
First allocation vs. first "prepend" of source data to an xy `ndarray`
format **must be mutex** in order to avoid a double prepend.
Previously, when both blocks were executed, we'd end up with
a `.xy_nd_start` that was decremented (at least) twice as much as it
should have been on the first `.format_to_1d()` call, which is obviously
incorrect (and causes problems for m4 downsampling as discussed below).
Further, since the underlying `ShmArray` buffer indexing is managed
(i.e. write-updated) completely independently from the incremental
formatter updates and internal xy indexing, we can't use
`ShmArray._first.value` and instead need to use the particular `.diff()`
output's prepend length value to decrement the `.xy_nd_start` on updates
after initial alloc.
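A minimal sketch of the mutually-exclusive branches (structure assumed,
helpers elided):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FmtrState:
    # hypothetical stand-in for the incremental formatter's xy indices
    xy_nd_start: Optional[int] = None
    xy_nd_stop: Optional[int] = None

def update_xy_nd(
    state: FmtrState,
    shm_first: int,
    shm_last: int,
    prepend_len: int,
    append_len: int,
) -> None:
    if state.xy_nd_start is None:
        # first alloc: seed the xy index range from the shm buffer once
        state.xy_nd_start = shm_first
        state.xy_nd_stop = shm_last
        return  # mutex: never also run the prepend branch this pass

    if prepend_len:
        # decrement only by what *this* diff actually prepended, not by
        # re-reading `ShmArray._first.value` (which moves independently)
        state.xy_nd_start -= prepend_len

    if append_len:
        state.xy_nd_stop += append_len
```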
Problems this resolves with m4:
- m4 uses an x-domain diff to calculate the number of "frames" to
downsample to; this is normally based on the ratio of pixel columns on
screen vs. the size of the input xy data.
- previously, using an int-index (not epoch time), the max diff between
the first and last index would be the size of the input buffer and thus
would never cause a large mem allocation issue (though it may have
been inefficient in terms of needed size).
- with an epoch time index this max diff could explode if you had some
near-now epoch timestamp **minus** an x-allocation value: generally
some value in `[-0.5, 0.5]`, which would result in a massive frame count
and thus internal `np.ndarray()` allocation causing either a crash in
`numba` code or actual system mem over-allocation.
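Back-of-the-envelope numbers (the real m4 frame calc is more involved;
this just shows the magnitude difference):

```python
uppx = 2.0  # units-per-pixel (example)

# int indexing: the span is bounded by the buffer length
int_frames = (100_000 - 0) / uppx                # ~5e4 bins -> fine

# epoch-time indexing with a bogus near-zero x value mixed in
time_frames = (1_672_531_200.0 - (-0.5)) / uppx  # ~8e8 bins!

# the second case forces an `np.ndarray` allocation hundreds of
# millions of rows deep, which is what crashed the `numba` path or
# blew out system memory
print(int_frames, time_frames)
```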
Further, put in some more x-value checks that trigger breakpoints if we
detect values that caused this issue - we'll remove 'em after this has
been tested enough.
Turns out we can't seem to avoid the artefacts when click-drag-scrolling
(results in weird repeated "smeared" curve segments) so just go back to
the original code.
Ensures that a "last datum" graphics object exists so that zooming can
read it using `.x_last()`. Also, disable the linked region stuff for now
since it's totally borked after flipping to the time indexing.
Since we don't really need it defined on the "chart widget", move it to
a viz method and rework it to hell:
- always discard the invalid view l > r case.
- use the graphic's UPPX to determine UI-to-scene coordinate scaling for
the L1-label collision detection; if there is no L1 just offset by
a few (index-step scaled) datums; this allows us to drop the 2x
x-range calls as was hacked in previously.
- handle no-data-in-view cases explicitly and error if we get any
ostensibly impossible cases.
- expect the caller to trigger a graphics cycle if needed.
Further supporting this is a rework of a slew of other important
details:
- add `Viz.index_step()`, an idempotently computed, (presumably uniform)
index step value which is needed for variable-sample-rate graphics
displayed on an epoch (seconds) time index.
- rework `Viz.datums_range()` to pass the view x-endpoints as the first
and last elements of the returned `tuple`; tighten up snap-to-data
edge-case logic using `max()`/`min()` calls and better internal var
naming.
- adjust all calls to `slice_from_time()` to not expect an "abs" slice.
- drop all `.yrange` resetting since we can just have the `Renderer` do
it when necessary.
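A sketch of an idempotent `index_step()` (the caching spot and the
median-of-diffs choice are illustrative, not necessarily the real impl):

```python
from typing import Optional
import numpy as np

class VizSketch:
    # hypothetical stand-in; the real `Viz` presumably caches similarly
    _index_field: str = 'time'

    def __init__(self, array: np.ndarray) -> None:
        self._array = array
        self._index_step: Optional[float] = None

    def index_step(self) -> float:
        if self._index_step is None:
            index = self._array[self._index_field]
            # uniform sampling assumed -> the median diff *is* the step
            self._index_step = float(np.median(np.diff(index)))
        return self._index_step
```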