Since there's a growing list of top-level mods which are more or less
utils/tools for working with the runtime, begin to move them into a new
subpkg starting with a new `.toolz.debug`.
Start with:
- a new `open_crash_handler()` for dropping into a breakpoint around
  blocks that might error (see the sketch below).
- move in what was `piker._profile` into `.toolz.profile` and adjust
  all imports accordingly.
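A minimal sketch of the shape such a handler might take (only the name
comes from this changeset; the body below, using stdlib `pdb`, is an
assumption about the implementation):

```python
# sketch only: an assumed impl of `.toolz.debug.open_crash_handler()`
from contextlib import contextmanager
import pdb


@contextmanager
def open_crash_handler(
    catch: tuple[type[BaseException], ...] = (Exception,),
):
    '''
    Drop into a debugger post-mortem on any error raised inside the
    wrapped block, then re-raise so callers still see the failure.

    '''
    try:
        yield
    except catch:
        pdb.post_mortem()
        raise
```

Usage is then just `with open_crash_handler(): ...` around any block
you want to introspect when it blows up.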
To avoid another full format call we can stash the last rendered 1d xy
pre-graphics formats as `IncrementalFormatter.x/y_1d: np.ndarray`s and
allow readers in the viz and render machinery to use this data easily
for things like "only drawing the last uppx's worth of data as a line".
Also add a `.flat_index_ratio: float`: a scalar that can be applied to
indexes into the src array to get the corresponding indexes into the
(flattened) 1d xy formatted outputs. Finally, this drops the way
overdone/noisy `.__repr__()` meth we had XD
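A hedged sketch of how a reader might use these stashed attributes (the
attribute names are from this changeset; the helper and its slicing
logic are illustrative assumptions):

```python
import numpy as np


def last_uppx_slice(
    fmtr,  # an `IncrementalFormatter` after a full format pass
    uppx: float,  # x-domain units-per-pixel
) -> tuple[np.ndarray, np.ndarray]:
    '''
    Return only the last `uppx`'s worth of already-formatted 1d xy
    data, avoiding another full format call.

    '''
    # `.flat_index_ratio` scales a count of src-array datums into the
    # equivalent count of (flattened) 1d formatted samples.
    tail: int = max(int(uppx * fmtr.flat_index_ratio), 1)
    return fmtr.x_1d[-tail:], fmtr.y_1d[-tail:]
```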
Define the x-domain coords "offset" (determining the curve graphics
per-datum placement) for each formatter such that there's only one
place to change it when needed. Obviously each graphics type has its own
dimensionality and this is reflected by the array shapes on each
subtype.
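As a rough illustration (the offset values and helper below are
assumptions, not the exact ones in the diff): a plain line places one x
value per datum while an OHLC bar lays out several x values around each
datum, and the per-subtype offset array is the single place that
encodes this.

```python
import numpy as np

# illustrative per-datum x-offsets; each formatter subtype would own
# exactly one such array matching its graphics' dimensionality.
line_x_offsets = np.array([0.0])                  # 1 x value per datum
ohlc_x_offsets = np.array([-0.5, 0.0, 0.0, 0.5])  # 4 x values per bar


def flat_x(index: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    # broadcast the per-datum offsets over every source index and
    # flatten down to the 1d x array consumed by the graphics path.
    return (index[:, None] + offsets[None, :]).reshape(-1)
```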
Allows easily switching between normal array `int` indexing and time
indexing by just flipping the `Viz._index_field: str`.
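For example (a sketch; the reader-side code here is assumed):

```python
# flip a `Viz` between int-index and epoch-time x-domains:
viz._index_field = 'time'   # or 'index' for plain array-int indexing

# readers then just key the source (struct) array by that field,
# i.e. something like:
x_data = viz.shm.array[viz._index_field]
```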
Also, guard all the x-data audit breakpoints with a time indexing
condition.
First allocation vs. first "prepend" of source data to an xy `ndarray`
format **must be mutex** in order to avoid a double prepend.
Previously, when both blocks were executed, we'd end up with
a `.xy_nd_start` that was decremented (at least) twice as much as it
should be on the first `.format_to_1d()` call, which is obviously
incorrect (and causes problems for m4 downsampling as discussed below).
Further, since the underlying `ShmArray` buffer indexing is managed
(i.e. write-updated) completely independently from the incremental
formatter updates and internal xy indexing, we can't use
`ShmArray._first.value` and instead need to use the particular `.diff()`
output's prepend length value to decrement the `.xy_nd_start` on updates
after initial alloc.
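A hedged sketch of the mutually-exclusive branches (the `.xy_nd_start`
attr and `.diff()` call are from this changeset; the method structure,
the diff return shape and the helper names are assumptions):

```python
def format_to_1d(self, shm) -> None:
    # sketch only, not the real method body.
    to_prepend, to_append = self.diff(shm)

    if self.xy_nd_start is None:
        # FIRST ALLOC: seed the xy ndarrays from the full source;
        # never also run the prepend branch below on this call.
        self.allocate_xy_nd(shm)

    elif (prepend_len := len(to_prepend)):
        # LATER PREPENDS: decrement by the diff's prepend length,
        # NOT by `ShmArray._first.value`, since the shm buffer's
        # indexing is write-updated independently of the formatter's
        # internal xy indexing.
        self.xy_nd_start -= prepend_len
        self.incr_update_xy_nd(shm, to_prepend, prepend=True)

    if len(to_append):
        self.incr_update_xy_nd(shm, to_append, prepend=False)
```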
Problems this resolves with m4:
- m4 uses an x-domain diff to calculate the number of "frames" to
  downsample to; this is normally based on the ratio of pixel columns
  on screen vs. the size of the input xy data.
- previously, using an int-index (not epoch time), the max diff between
  first and last index would be the size of the input buffer and thus
  would never cause a large mem allocation issue (though it may have
  been inefficient in terms of needed size).
- with an epoch time index this max diff could explode if you had some
  near-now epoch time stamp **minus** an x-allocation value (generally
  some value in `[-0.5, 0.5]`), which would result in a massive frame
  count and thus a huge internal `np.ndarray()` allocation, causing
  either a crash in `numba` code or actual system mem over-allocation
  (see the sketch below).
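To make the failure mode concrete, a rough sketch of the frame-count
math (the names and exact formula are illustrative, not the real m4
code):

```python
import numpy as np


def m4_frame_count(x: np.ndarray, uppx: float) -> int:
    # m4 buckets the input into roughly one "frame" per pixel column,
    # so the frame count scales with the x-domain span of the input.
    x_span = float(x[-1] - x[0])
    return max(int(x_span / uppx), 1)

# int indexing: the span is bounded by the buffer length, e.g.
#   x = np.arange(10_000) -> span == 9_999 -> a sane frame count.
#
# epoch-time indexing with a bogus near-zero x (some allocation value
# in [-0.5, 0.5]) leaking in as the first element:
#   x = np.array([0.25, 1_670_000_000.0]) -> span ~= 1.7e9
# which demands a ~1.7e9-frame ndarray allocation and either crashes
# the `numba` kernel or over-allocates system memory.
```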
Further, put in some more x-value checks that trigger breakpoints if we
detect values that caused this issue; we'll remove them after this has
been tested enough.
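Roughly the kind of (temporary) audit being added (a sketch; the field
name and threshold are assumptions):

```python
# only audit x values when in (epoch) time-index mode, and drop into
# the debugger if a bogus near-zero x shows up that would blow up the
# m4 frame count as described above.
if index_field == 'time' and abs(float(x_1d[0])) < 1.0:
    breakpoint()
```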