The basic logic is now this:
- when zooming out, uppx (units per pixel in x) can be >= 1
- if the uppx is `n` then the next pixel in view only becomes occupied
by a new datum-x-coordinate-value when the number of datum steps since
the last such update is at least the current uppx -> `datums_diff >= n`
- if we're below some constant uppx we just always update (because
updating isn't costly enough to matter and we're not downsampling).
More or less this just avoids unnecessary real-time updates to flow
graphics until they would actually be noticeable via the next pixel
column on screen.
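For illustration, a minimal sketch of the gating condition (the
`datums_since_last_px` counter and `min_uppx` threshold are made-up
names, not the actual internals):

```python
def should_update_px_column(
    datums_since_last_px: int,  # hypothetical: new x-steps since last draw
    uppx: float,                # current x units-per-pixel
    min_uppx: float = 16,       # assumed "always cheap enough" threshold
) -> bool:
    if uppx < min_uppx:
        # not really downsampling and updates are cheap -> always draw
        return True
    # only draw once enough new datums have accrued to occupy the
    # next full pixel column on screen: `datums_diff >= n`
    return datums_since_last_px >= uppx
```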
This was a bit of a nightmare to figure out but it seems that the
coordinate caching system will really be a dick (like the nickname for
richard for you serious types) about leaving stale graphics if we don't
reset the cache on downsample full-redraw updates... Sooo, instead we do
this manual reset to avoid such artifacts and consequently (for now)
return a `reset: bool` flag in the return tuple from `Renderer.render()`
to indicate as much.
Some further shite:
- move the step mode `.draw_last()` equivalent graphics updates down
with the rest.
- drop some superfluous `should_redraw` logic from `Renderer.render()`
and fold it into the full path redraw block.
Adds a new pre-graphics data-format callback incremental update API to
our `Renderer`. A `Renderer` instance can now overload these custom
routines (roughly sketched in an example after the list below):
- `.update_xy()` a routine which accepts the latest [pre/a]pended data
sliced out from shm and returns it in a format suitable to store in
the optional `.[x/y]_data` arrays.
- `.allocate_xy()` which initially does the work of pre-allocating the
`.[x/y]_data` arrays based on the source shm sizing such that new
data can be filled in (to memory).
- `._xy_[first/last]: int` attrs to track index diffs between src shm
and the xy format data updates.
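As a rough illustration only (the signatures and names below are
assumptions, not the exact internal API), a step-curve flavoured set of
these overloads could look something like:

```python
import numpy as np

class StepCurveRenderer:
    '''
    Hedged sketch of the pre-graphics format hooks described above.

    '''
    def allocate_xy(self, shm_array: np.ndarray, key: str):
        # build (N, 2) "step level" buffers sized from the source shm
        # so new data can later be filled in (to memory) in-place
        i = shm_array['index']
        y = shm_array[key]
        x_data = np.stack([i - 0.5, i + 0.5], axis=1).astype(float)
        y_data = np.stack([y, y], axis=1).astype(float)
        return x_data, y_data

    def update_xy(self, appended: np.ndarray, key: str):
        # convert the freshly [pre/a]pended shm slice into the same
        # step format for storage in the `.x_data`/`.y_data` arrays
        i = appended['index']
        v = appended[key]
        return (
            np.stack([i - 0.5, i + 0.5], axis=1),
            np.stack([v, v], axis=1),
        )
```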
Implement the step curve data format with 3 super simple routines:
- `.allocate_xy()` -> `._pathops.to_step_format()`
- `.update_xy()` -> `._flows.update_step_xy()`
- `.format_xy()` -> `._flows.step_to_xy()`
Further, adjust `._pathops.gen_ohlc_qpath()` to adhere to the new
call signature.
We're doing this in `Flow.update_graphics()` atm and in general we're
probably going to want custom graphics objects for all the different
curve / path types. The new flows work seems to fix the bounding rect
width calcs to
not require the ad-hoc extra `+ 1` in the step mode case; before it was
always a bit hacky anyway. This also tries to add a more correct
bounding rect adjustment for the `._last_line` segment.
Finally this gets us much closer to a generic incremental update system
for graphics wherein the input array diffing, pre-graphical format data
processing, downsampler activation and incremental update and storage of
any of these data flow stages can be managed in one modular sub-system
:surfer_boi:.
Dirty deatz:
- reorg and move all path logic into `Renderer.render()` and have it
take in pretty much the same flags as the old
`FastAppendCurve.update_from_array()`, instead storing all update
state vars (even copies of the downsampler related ones) on the
renderer instance:
- new state vars: `._last_uppx, ._in_ds, ._vr, ._avr`
- `.render()` input bools: `new_sample_rate, should_redraw,
should_ds, showing_src_data`
- add a hack-around for passing in incremental update data (for now)
via an `input_data: tuple` of numpy arrays
- a default `uppx: float = 1`
- add new render interface attrs:
- `.format_xy()` which takes in the source data array and produces
x, y arrays (and maybe a `connect` array) that can be passed to
`.draw_path()`; the default for this is just to slice out the index
and `array_key: str` columns from the input struct array (see the
sketch after this list),
- `.draw_path()` which takes in the x, y, connect arrays and generates
a `QPainterPath`
- `.fast_path`, for "appendable" updates like there was on the fast
append curve
- move redraw (aka `.clear()` calls) into `.draw_path()` and trigger
via `redraw: bool` flag.
- our graphics objects no longer set their own `.path` state, it's done
by the `Flow.update_graphics()` method using output from
`Renderer.render()` (and its state if necessary)
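For reference, the default `.format_xy()` behaviour mentioned above
amounts to roughly this (a sketch, not the exact signature; the `'all'`
connect value follows the usual `pyqtgraph` path-generation convention):

```python
import numpy as np

def default_format_xy(
    src: np.ndarray,   # source struct-array read from shm
    array_key: str,    # the y-value column to plot
):
    # just slice out the index and value columns; 'all' means every
    # point gets connected when the path is generated
    x = src['index']
    y = src[array_key]
    return x, y, 'all'
```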
A bit hacky to get all graphics types working but this is hopefully the
first step toward moving all the generic update logic into `Renderer`
types which can be themselves managed more compactly and cached per
uppx-m4 level.
Yet another path ops routine which converts a 1d array into a data
format suitable for rendering a "step curve" graphics path (aka a "bar
graph" but implemented as a continuous line).
Also, factor the `BarItems` rendering logic (which determines whether to
render the literal bars lines or a downsampled curve) into a routine
`render_baritems()` until we figure out the right abstraction layer for
it.
Starts a module for grouping together all our `QPainterPath` related
generation and data format operations for creation of fast curve
graphics. To start, drops `FastAppendCurve.downsample()` and moves
it to a new `._pathops.xy_downsample()`.
Mostly just dropping old commented code for "step mode" format
generation. Always slice the tail part of the input data and move to the
new `ms_threshold` in the `pg` profiler.
Relates to the bug discovered in #310; this should avoid out-of-order
msgs which do not have a `.reqid` set from being error logged to console.
Further, add `pformat()` to kraken logging of ems msging.
Since downsampling with the more correct version of m4 (uppx driven
window sizing) is super fast now, we don't need to avoid downsampling
on low uppx values. Further, all graphics objects now support in-view
slicing so make sure to use it on interaction updates. Pass in the view
profiler to update method calls for more detailed measuring.
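In-view slicing here just means clipping the source array to the
current x view-range before formatting/drawing; roughly (the helper
name is made up for illustration):

```python
import numpy as np

def slice_in_view(
    src: np.ndarray,     # struct-array read from shm
    view_xfirst: float,  # left edge of the x view-range
    view_xlast: float,   # right edge of the x view-range
) -> np.ndarray:
    idx = src['index']
    lbar = np.searchsorted(idx, view_xfirst, side='left')
    rbar = np.searchsorted(idx, view_xlast, side='right')
    # only this slice gets formatted and drawn on interaction updates
    return src[lbar:rbar]
```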
Even moar,
- Add a manual call to `.maybe_downsample_graphics()` inside the mouse
wheel event handler since it seems that sometimes trailing events get
lost from the `.sigRangeChangedManually` signal which can result in
"non-downsampled-enough" graphics on chart given the scroll amount;
this manual call seems to entirely fix this?
- drop "max zoom" guard since internals now support (near) infinite
scroll out to graphics becoming a single pixel column line XD
- add back in commented xrange signal connect code for easy testing to
verify against range updates not happening without it
This took longer than I care to admit XD but it definitely adds a huge
speedup and with only a few outstanding correctness bugs:
- panning from left to right causes strange trailing artifacts in the
flows fsp (vlm) sub-plot but only when some data is off-screen on the
left, but doesn't appear to be an issue if we keep the `._set_yrange()`
handler hooked up to the `.sigXRangeChanged` signal (but we aren't
going to because this makes panning way slower). I've got a feeling
this is a bug to do with the device coordinate cache stuff and we may
need to report it to Qt core?
- factoring out the step curve logic from
`FastAppendCurve.update_from_array()` (un)fortunately required some
logic branch uncoupling but also meant we needed special input controls
to avoid things like redraws and curve appends for special cases; this
will hopefully all be better rectified in code when the core of this
method is moved into a renderer type/implementation.
- the `tina_vwap` fsp curve now somehow causes hangs when doing erratic
scrolling on downsampled graphics data. I have no idea why or how but
disabling it makes the issue go away (the UI will literally just freeze
and gobble CPU on a `.paint()` call until you ctrl-c the hell out of
it). My guess is that something in the logic for standard line curves
and appends on large data sets is the issue?
Code related changes/hacks:
- drop use of `step_path_arrays_from_1d()`, it was always a bit hacky
(being based on `pyqtgraph` internals) and was generally hard to
understand since it returns 1d data instead of the more expected (N,2)
array of "step levels"; instead this is now implemented (uglily) in
the `Flow.update_graphics()` block for step curves (which will
obviously get cleaned up and factored elsewhere).
- add a bunch of new flags to the update method on the fast append
curve: `draw_last: bool`, `slice_to_head: int`, `do_append: bool`,
`should_redraw: bool` which are all controls to aid with previously
mentioned issues specific to getting step curve updates working
correctly.
- add a ton of commented tinkering related code (that we may end up
using) to both the flow and append curve methods that was written as
part of the effort to get this all working.
- implement all step curve updating inline in `Flow.update_graphics()`
including prepend and append logic for pre-graphics incremental step
data maintenance and in-view slicing as well as "last step" graphics
updating.
Obviously clean up commits coming stat B)
Since we have in-view style rendering working for all curve types
(finally) we can avoid the guard for low uppx levels without losing
interaction speed. Further, don't delay the profiler so that the nested
method calls correctly report upward - which wasn't working, likely due
to some kinda GC collection related issue.
More or less this improves update latency like mad. Only draw data in
view and avoid full path regen as much as possible within a given
(down)sampling setting. We now support append path updates with in-view
data and the *SPECIAL CAVEAT* is that we avoid redrawing the whole curve
**only when** we calc an `append_length <= 1` **even if the view range
changed**. XXX: this should probably change in the future such that the
caller graphics update code can pass a flag which says whether or not to
do a full redraw based on it knowing whether it's an interaction based
view-range change or a flow update change which doesn't require a full
path re-render.
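A tiny sketch of that redraw caveat (names are illustrative, not the
actual internals, and the real branching takes more inputs):

```python
def needs_full_redraw(
    view_range_changed: bool,
    append_length: int,   # new datums since the path was last drawn
) -> bool:
    # the special caveat: a view-range change alone does NOT force a
    # full path regen when there's at most one new datum to append
    if append_length <= 1:
        return False
    return view_range_changed
```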
After much effort (and exhaustion) but failure to get a view into our
`numpy` OHLC struct-array, this instead allocates an in-thread-memory
array which is updated with flattened data every flow update cycle.
I need to report what I think is a bug to `numpy` core about the whole
view thing not working but, more or less, this gets the same behaviour
and minimizes the work to flatten the sampled data for line-graphics
drawing, thus improving refresh latency when drawing large downsampled
curves.
Update the OHLC ds curve with view aware data sliced out from the
pre-allocated and incrementally updated data (we had to add a last index
var `._iflat` to track appends - this should be moved into a renderer
eventually?).
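Roughly, the incremental flatten plus `._iflat` bookkeeping looks like
this (a hedged sketch; the real code differs in details and currently
uses `np.concatenate()` in places):

```python
import numpy as np

def flatten_ohlc_append(
    new_bars: np.ndarray,  # freshly appended OHLC struct rows from shm
    flat_y: np.ndarray,    # pre-allocated in-memory flat line buffer
    iflat: int,            # count of rows already flattened (aka `._iflat`)
) -> int:
    # each bar contributes 4 line points: open, high, low, close
    y = np.array(
        [new_bars[f] for f in ('open', 'high', 'low', 'close')]
    ).T.flatten()
    flat_y[iflat * 4:(iflat + len(new_bars)) * 4] = y
    # return the updated append cursor
    return iflat + len(new_bars)
```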
This begins the removal of data processing / analysis methods from the
chart widget and instead moving them to our new `Flow` API (in the new
module introduce here) and delegating the old chart methods to the
respective internal flow. Most importantly, we no longer store the
"last read" of an array from shm in an internal chart table (was
`._arrays`); instead the `ShmArray` instance is passed as input and
stored in the `Flow` instance. This greatly simplifies lookup logic such
that the display loop now doesn't have to worry about reading shm, it
can be done by internal graphics logic as desired. Generally speaking,
all previous `._arrays`/`._graphics` lookups are now delegated to the
entries in the chart's `._flows` table.
The new `Flow` methods are generally better factored and provide more
detailed output regarding data-stream <-> graphics inter-relations for
the future purpose of allowing much more efficient update calls in the
display loop as well as supporting low latency interaction UX.
The concept here is that we're introducing an intermediary layer that
ties together graphics and real-time data flows such that widget code is
oriented around plot layout and the flow apis are oriented around
real-time low latency updates and providing an efficient high level
metric layer for the UX.
The summary api transition is something like:
- `update_graphics_from_array()` -> `.update_graphics_from_flow()`
- `.bars_range()` -> `Flow.datums_range()`
The most important changes include:
- iterating the new `Flow` type and updating graphics
- adding detailed profiling
- increasing the min uppx before graphics updates are throttled
- including the L1 spread in y-range calcs so that you never have the
bid/ask go "out of view"..
- pass around `Flow`s instead of shms
- drop all the old prototyped downsampling code
If manually managing an overlay you'll likely call `.overlay_plotitem()`
and then a plotting method, so we need to accept a plot item input so
that the chart's pi doesn't get assigned incorrectly in the `Flow` entry
(though it is assigned by default if no input is provided).
More,
- add a `Flow.graphics` field and set it to the `pg.GraphicsObject`.
- make `Flow.maxmin()` return `None` in the "can't calculate" cases.
- set shm refs on `Flow` entries.
- don't run a graphics cycle on 'update' msgs from the engine
if the containing chart is hidden.
- drop `volume` from flows map and disable auto-yranging
once $vlm comes up.
Allows for removing resize callbacks for a flow/overlay that you wish to
remove from view (eg. unit volume after dollar volume is up), thus
reducing general interaction callback overhead for any plot you don't
wish to show or resize.
Further,
- drop the `autoscale_linked_plots` block for now since with
multi-view-box overlays each vb registers its own resize slots
- pull the graphics object from the chart's `Flow` map inside
`.maybe_downsample_graphics()`
This new type wraps a shm data flow and will eventually include things
like incremental path-graphics updates and serialization + bg downsampling
techniques. The main immediate motivation was to get a cached y-range max/min
calc going since profiling revealed the `numpy` equivalents were
actually quite slow as the data set grows large. Likely we can use all
this to drive a streaming mx/mn routine that's always launched as part
of each on-host flow.
This is our official foray into use of `msgspec.Struct` B) and I have to
say, I'm pretty impressed; we'll likely completely ditch `pydantic` from
here on out.
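To give a flavour of the idea (field names, the cache shape and the
min/max columns below are illustrative, not the real `Flow` layout):

```python
import numpy as np
import msgspec

class Flow(msgspec.Struct):
    '''
    Hedged sketch: a struct wrapping a shm data flow with a cached
    y-range calc keyed by the requested index range.

    '''
    name: str
    shm: object        # the shared-mem array wrapper
    mxmn_cache: dict   # (start, stop) -> (min, max)

    def maxmin(self, lbar: int, rbar: int):
        key = (lbar, rbar)
        cached = self.mxmn_cache.get(key)
        if cached is not None:
            return cached

        arr = self.shm.array[lbar:rbar]
        if not len(arr):
            return None  # the "can't calculate" case

        mxmn = float(np.min(arr['low'])), float(np.max(arr['high']))
        self.mxmn_cache[key] = mxmn
        return mxmn

# e.g. flow = Flow(name='btcusdt', shm=shm, mxmn_cache={})
```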
We don't need to update graphics on every x-range change since that's
what the display loop does. Instead, only on manual changes do we make
calls into `.update_graphics_from_array()` and be sure to iterate all
linked subplots and all their embedded graphics.
The pg profiler seems to have trouble with early `return`s in function
calls (likely muckery with the GC/`.__delete__()`) so let's just try
to avoid it for now until we either fix it (probably by implementing it
as a ctx mngr) or use a diff one.
Ugh, turns out the wacky `ChartView.maxmin` callback stuff we did (for
determining y-range sizings) currently requires that the volume array
has a "bars in view" result.. so let's make that keep working without
rendering the graphics for the curve (since we're disabling them once
$vlm comes up).
As with the `BarItems` graphics, this makes it possible to pass in an
"in view" range of array data such that *only* that data is rendered,
improving performance for large(r) data sets. All the other normal
behaviour is kept (i.e. a persistent, (pre/ap)pendable path can still be
maintained) if a ``view_range`` is not provided.
Further updates,
- drop the `.should_ds_or_redraw()` and `.maybe_downsample()` predicates,
instead moving all that logic inside `.update_from_array()`.
- disable the "cache flipping", which doesn't seem to be needed to avoid
artifacts any more?
- handle all redraw/downsampling logic in `.update_from_array()`.
- even more profiling.
- drop path `.reserve()` stuff until we better figure out how it's
supposed to work.
Drop all the logic originally in `.update_ds_line()` which is now done
internally by our `FastAppendCurve`. Add incremental update of the
flattened OHLC -> line curve (unfortunately using `np.concatenate()` for
the moment) and maintain a new `._ds_line_xy` arrays tuple which keeps
the internal state. Add `.maybe_downsample()` as per the new interaction
update method requirement. Draft out some fast path curve stuff like in
our line graphic. Short-circuit bars path updates when we downsample to
line. Oh, and add a ton more profiling in prep for getting
all this stuff faf.
Build out an interface that makes it super easy to downsample curves
using the m4 algorithm while keeping our incremental `QPainterPath`
update feature. A lot of hard work and tinkering went into getting this
working all in-thread correctly and there are quite a few details..
New interface methods:
- `.x_uppx()` which returns the x-axis "view units per pixel"
- `.px_width()` which returns the total (rounded) x-axis pixels spanned
by the curve in view (both are roughly sketched just after this list).
- `.should_ds_or_redraw()` a predicate which checks internal state to
see if either downsampling of the curve should take place, or the curve
should have all downsampling removed and be redrawn with source array
data.
- `.downsample()` the actual ds processing routine which delegates into
the m4 algo impl.
- `.maybe_downsample()` a simple update method which can be called by
the view box when the user changes the zoom level.
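The two pixel-geometry helpers above amount to roughly this (a sketch;
`view` stands in for the `pyqtgraph` view box the curve lives in, while
the real methods live on the curve item itself):

```python
def px_width(view) -> int:
    # total (rounded) device pixels spanned horizontally by the view
    rect = view.mapRectToDevice(view.boundingRect())
    return round(rect.width()) if rect else 0

def x_uppx(view) -> float:
    # x-axis "view units per pixel": view-range span / device pixels
    (xl, xr), _ = view.viewRange()
    pxs = px_width(view)
    return (xr - xl) / pxs if pxs else 0
```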
Implementation details/changes:
- make `.update_from_array()` check whether downsample (or revert to
source aka de-downsample) conditions exist and then downsample and
re-draw path graphics accordingly.
- in order to even further speed up path appends (since our main
bottleneck is measured to be `QPainter.drawPath()` calls with large
paths which are frequently updated), add a secondary path `.fast_path`
which is updated in real-time by incremental appends and which is
painted separately for speed in `.paint()`.
- drop all the `QPolyLine` stuff since it was tested to be much slower
in general and especially so for append-updates.
- stop disabling the cache settings on updates since it doesn't seem to
be required any more?
- move further toward deprecating and removing all lingering interface
requirements from `pg.PlotCurveItem` (like `.xData`/`.yData`).
- adjust `.paint()` and `.boundingRect()` to compensate for the new
`.fast_path`
- add a butt-load of profiling B)
Pretty sure this was most of the cause of the stale (more downsampled)
curves showing when zooming in and out from bars mode quickly. All this
stuff needs to get factored out into a new abstraction anyway, but
I think this gets the functionality mostly correct.
Only draw new ds curve on uppx steps >= 4 and stop adding/removing
graphics objects from the scene; doesn't seem to speed anything up
afaict. Add better reporting of ds scale changes.
Only if the uppx increases by more than 2 do we redraw the entire line;
otherwise just ds with the previous params and update the current curve.
This *should* avoid strange lower sample rate artefacts from showing on
updates.
Summary:
- stash both uppx and px width in `._dsi` (downsample info)
- use the new `ohlc_to_m4_line()` flags
- add notes about using `.reserve()` and friends
- always delete last `._array` ref prior to line updates
In an effort to try and make `QPainterPath.reserve()` work, add internal
logic to use the same object without de-allocating memory from
a previous path write/creation.
Note this required the addition of a `._redraw` flag (to be used in
`.clear()`) and a small patch to `pyqtgraph.functions.arrayToQPath` to
allow passing in an existing path (thus reusing the same underlying mem
alloc) which will likely be first pushed to our fork.
We were previously ad-hoc scaling up the px count/width to get more
detail at lower uppx values. Add a log scaling sigmoid that range scales
between 1 < px_width < 16.
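Something in this spirit (heavily hedged; the exact curve and constants
in the real code may differ):

```python
import math

def scale_px_width(uppx: float) -> float:
    # logistic squashing in the log2(uppx) domain: small uppx (zoomed
    # in) pushes toward a 16x px width for more detail, large uppx
    # (zoomed out) decays toward 1x.
    sig = 1 / (1 + math.exp(math.log2(max(uppx, 1e-9))))
    return 1 + 15 * sig
```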
Add in a flag to use the mxmn OH tracer in `ohlc_flatten()` if desired.
Helpers to quickly convert ohlc struct-array sequences into lines
for consumption by the m4 downsampler. Strip trailing zero entries
from the `ds_m4()` output if found (avoids lines back to origin).
This makes the `'r'` hotkey snap the last bar to the middle of the pp
line arrow marker no matter the zoom level. Now we also boot with
approximately the maximum number of x units on screen that keeps the
bars graphics drawn in full (just before downsampling to a line).
Moved some internals around to get this all in place,
- drop `_anchors.marker_right_points()` and move it to a chart method.
- change `.pre_l1_x()` -> `.pre_l1_xs()` and just have it return the
two view-mapped x values from the former method.
Instead of using a guess about how many x-indexes to reset the last
datum in-view to, calculate and shift the latest index such that it's
just before any L1 spread labels on the y-axis. This makes the view
placement "widget aware" and gives a much more consistent UX across
displays.
Summary:
- add `ChartPlotWidget.pre_l1_x()` which returns a `tuple` of
x view-coord points for the absolute x-pos and length of any L1
line/labels
- make `.default_view()` only shift to see the xlast just outside
the l1 but keep whatever view range xfirst as the first datum in view
- drop `LevelLine.right_point()` since this is now just a
`.pre_l1_x()` call and can be retrieved from the line's internal chart
ref
- drop `._style.bars_from/to_..` vars since we aren't using hard coded
offsets any more
`ChartPlotWidget.curve_width_pxs()` now can be used to get the total
horizontal (x) pixels on screen that are occupied by the current curve
graphics for a given chart. This will be used for downsampling large
data sets to the pixel domain using M4.
Probably the best place to root the profiler since we can get a better
top down view of bottlenecks in the graphics stack.
More,
- add in draft M4 downsampling code (commented) after getting it mostly
working; next step is to move this processing into an FSP subactor.
- always update the vlm chart last y-axis sticky
- call `.default_view()` just before the inf sleep on startup
Obviously determining the x-range from indices was wrong and was the
reason for the incorrect (downsampled) output size XD. Instead correctly
determine the x range and start value from the *values of* the input
x-array. Pretty sure this makes the implementation nearly production
ready.
Relates to #109
All the refs are in the comments and original sample code from infinite
has been reworked to expect the input x/y arrays to already be sliced
(though we can later support passing in the start-end indexes if
desired).
The new routines are `ds_m4()` the python top level API and `_m4()` the
fast `numba` implementation.
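For orientation, the core M4 idea behind `ds_m4()` is: bin the x-range
into one bucket per on-screen pixel column and keep only the
first/min/max/last y of each bucket. A naive (non-`numba`) reference,
purely for illustration and assuming a sorted x input:

```python
import numpy as np

def m4_reference(x: np.ndarray, y: np.ndarray, px_width: int):
    # bucket index per sample, derived from the *values of* x
    x0, x1 = x[0], x[-1]
    bins = np.minimum(
        ((x - x0) / (x1 - x0 + 1e-12) * px_width).astype(int),
        px_width - 1,
    )
    out_x, out_y = [], []
    for b in range(px_width):
        mask = bins == b
        if not mask.any():
            continue
        xs, ys = x[mask], y[mask]
        # first, min, max, last per pixel column
        out_x += [xs[0], xs[0], xs[-1], xs[-1]]
        out_y += [ys[0], ys.min(), ys.max(), ys[-1]]
    return np.array(out_x), np.array(out_y)
```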
- the chart's uppx (units-per-pixel) is > 4 (i.e. zoomed out a lot)
- don't shift the chart (to keep the most recent step in view) if the
last datum isn't in view (aka the user is probably looking at history)
When a bars graphic is zoomed out enough you get a high uppx (datum
units-per-pixel) and there is no point in drawing the 6 lines in each
bar element-graphic if you can't see them on the screen/display device.
Instead here we offer converting to a `FastAppendCurve` which traces
the high-low outline and display that when it's impossible to see the
details of bars - approximately when the uppx >= 2.
There is also some draft-commented code in here for downsampling the
outlines as zoom level increases but it's not fully working and should
likely be factored out into a higher level api anyway.
In an effort to start getting some graphics speedups as detailed in
#109, this adds a `FastAppendCurve` to every `BarItems` as a `._ds_line`
which is only displayed (instead of the normal multi-line bars curve)
when the "width" of a bar is indistinguishable on screen from a line ->
roughly once the uppx (units-per-pixel) exceeds 2 on the display device.
`BarItems.maybe_paint_line()` takes care of this scaling detection logic
and is called by the associated view's `.sigXRangeChanged` signal handler.
The graphics update loop is much easier to grok when all the UI
components which potentially need to be updated on a cycle are arranged
together in a high-level composite namespace, thus this new
`DisplayState` addition. Create and set this state on each
`LinkedSplits` chart set and add a new method `.graphics_cycle()` which
lets a caller trigger a graphics loop update manually. Use this method
in the fsp graphics manager such that a chain can update new history
output even if there is no real-time feed driving the display loop (eg.
when a market is "closed").
Use fqsn as input to the client-side EMS apis but strip broker-name
stuff before generating and sending `Brokerd*` msgs to each backend for
live order requests (since it's weird for a backend to expect its own
name, though maybe that could be a sanity check?).
Summary of fqsn use vs. broker native keys:
- client side pps, order requests and general UX for order management
use an fqsn for tracking
- brokerd side order dialogs use the broker-specific symbol which is
usually nearly the same key minus the broker name
- internal dark book and quote feed lookups use the fqsn where possible
Since moving to a "god loop" for graphics, we don't really need to have
a dedicated task for updating graphics on new sample increments. The
only UX difference will be that curves won't be updated until an actual new
rt-quote-event triggers the graphics loop -> so we'll have the chart
"jump" to a new position and new curve segments generated only when new
data arrives. This is imo fine since it just means fewer "idle" updates
where the chart would sit printing the same (last) value every step.
Instead only update the view increment if a new index is detected by
reading shm.
If we ever want this dedicated task update again this commit can be
easily reverted B)
You can get a weird "last line segment" artifact if *only* that segment
is drawn and the cache is enabled, so just disable unless in step mode
at startup and re-flash as normal when new path data is appended. Add
a `.disable_cache()` method for the multi-use in the update method. Use
line style on the `._last_line: QLineF` segment as well.
Enables retrieving all "named axes" on a particular "side" of the
overlayed plot items. This is useful for calculating how much space
needs to be allocated for the axes before the view box area starts.
Instead of referencing the remote processing funcs by a `str` name,
start embracing the new `@fsp`/`Fsp` API such that wrapped processing
functions are first class APIs.
Summary of the changeset:
- move and load the fsp built-in set in the new `.fsp._api` module
- handle processors ("fsps") which want to yield multiple keyed-values
(interleaved in time) by expecting both history that is keyed and
assigned to the appropriate struct-array field, *and* real-time
`yield`ed values in tuples of the form `tuple[str, float]` such that
any one (async) processing function can deliver multiple outputs from
the same base calculation (see the hedged sketch after this list).
- drop `maybe_mk_fsp_shm()` from UI module
- expect and manage `Fsp` instances (`@fsp` decorated funcs) throughout
the UI code, particularly the `FspAdmin` layer.
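A hedged sketch of what a multi-output processor might look like under
this API; the import path, decorator usage, argument names and the
exact streaming shapes here are assumptions based on the description
above, not the definitive interface:

```python
from piker.fsp._api import fsp  # assumed import path

@fsp
async def dolla_vlm(source, ohlcv):
    # history: a keyed mapping assigned to the matching struct-array
    # field(s) in one shot
    yield {
        'dolla_vlm': ohlcv.array['close'] * ohlcv.array['volume'],
    }
    # real-time: one `tuple[str, float]` per computed output so a
    # single (async) processor can drive multiple plotted curves
    async for quote in source:
        price, size = quote['last'], quote['size']
        yield 'dolla_vlm', price * size
```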