The main "public" API change is to make `GodWidget.get/set_chart_symbol()`
accept and cache on fqsn tuples to allow handling overlaid chart groups,
and to adjust the method names to be plural to match.
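A small sketch of the tuple-keyed cache idea (hypothetical names; the
real `GodWidget` methods and cached value types differ):

```python
class ChartSymbolCacheSketch:
    def __init__(self) -> None:
        # one cache entry per *group* of overlaid fqsns
        self._chart_cache: dict[tuple[str, ...], object] = {}

    def set_chart_symbols(self, fqsns: tuple[str, ...], chart) -> None:
        self._chart_cache[fqsns] = chart

    def get_chart_symbols(self, fqsns: tuple[str, ...]):
        return self._chart_cache.get(fqsns)
```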
Wrt `LinkedSplits`,
- create all chart widget axes with a `None` plotitem argument and set
the `.pi` field after axis creation (since apparently we have another
object reference causality dilemma..)
- set a monkeyed `PlotItem.chart_widget` for use in axes that still need
the widget reference.
- drop feed pause/resume for now since it's leaking feed tasks on the
`brokerd` side and we probably don't really need it any more; if we
still do, it should be done on the feed, not the flume.
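Roughly, the construction order looks like this (a hedged sketch;
`SketchAxis` and the kwarg names are illustrative, not the real `Axis`
type):

```python
import pyqtgraph as pg

class SketchAxis(pg.AxisItem):
    def __init__(self, plotitem: pg.PlotItem | None = None, **kwargs):
        super().__init__(**kwargs)
        self.pi = plotitem  # may be `None` at construction time

app = pg.mkQApp()  # a Qt app must exist before creating graphics items
axes = {
    'right': SketchAxis(orientation='right'),
    'bottom': SketchAxis(orientation='bottom'),
}
pi = pg.PlotItem(axisItems=axes)
pi.chart_widget = None  # monkey-patched widget back-ref goes here
for axis in axes.values():
    axis.pi = pi  # set only after the plotitem exists
```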
Wrt `ChartPlotItem`,
- drop `._add_sticky()` and use the `Axis` method instead; also add
some overlay + axis sanity checks.
- refactor `.draw_ohlc()` to be a lighter wrapper around a call to
`.add_plot()`.
We have this (sticky axis-label) method on our `ChartPlotWidget` but it
makes more sense to directly associate axis-labels with, well, the
label's parent axis XD.
We add `._stickies: dict[str, YAxisLabel]` to replace
`ChartPlotWidget._ysticks` and pass in the `pg.PlotItem` to each axis
instance, stored as `Axis.pi` instead of handing around linked split
references (which are way out of scope for a single axis).
More work needs to be done to remove dependence on `.chart:
ChartPlotWidget` references in the date axis type as per comments.
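A minimal sketch of the axis-owned sticky registry described above
(class and signature details are assumed; `YAxisLabel` is stubbed to
keep it self-contained):

```python
class YAxisLabel:
    ...  # stand-in for the real label type

class AxisWithStickies:
    def __init__(self, pi) -> None:
        # parent `pg.PlotItem` ref instead of a linked-splits ref
        self.pi = pi
        self._stickies: dict[str, YAxisLabel] = {}

    def add_sticky(self, name: str, label: YAxisLabel) -> YAxisLabel:
        # sticky labels now live on their parent axis, keyed by name
        self._stickies[name] = label
        return label
```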
Comment out the pixel-cache resetting since it no longer seems
necessary to avoid draw oddities.
For `.fast_path` appends, this nearly got it working, except the new
path segments are either not being connected correctly (step curve) or
not being drawn in full past the end of the history path (plain line).
Leaving the attempted code commented in for a retry in the future; my
best guesses are that maybe,
- the `.connectPath()` call is being done with an incorrect segment
length and/or start point.
- the "appended" data slice, `appended = array[-append_len-1:slice_to_head]`
(done inside the formatter), isn't correct (i.e. the endpoint handling
when appending to a path) and needs special handling for different
curve types?
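For reference, the append idea boils down to something like this
(a rough sketch, not the commented-in code itself):

```python
from pyqtgraph.Qt import QtGui

def append_segment(
    fast_path: QtGui.QPainterPath,
    appended: QtGui.QPainterPath,
) -> QtGui.QPainterPath:
    # `.connectPath()` draws a line from the current end of `fast_path`
    # to the first point of `appended`; a wrong start point or a
    # mis-sliced appended segment shows up as disconnected (step curve)
    # or partially drawn (plain line) graphics.
    fast_path.connectPath(appended)
    return fast_path
```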
Ensure `.boundingRect()` calcs and `.draw_last_datum()` do geo-sizing
based on source data instead of presuming `1.0` unit steps in some
spots; we need this to support an epoch (time) index for overlays.
Further, clean out a bunch of old bounding rect calc code and add some
commented code for trying out `QRectF.united()` on the path + last datum
curve segment. Turns out that approach is slower as per eyeballing the
added profiler points.
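As an illustration (not the actual widget code), sizing from the data
itself looks roughly like:

```python
import numpy as np
from pyqtgraph.Qt import QtCore

def data_bounding_rect(x: np.ndarray, y: np.ndarray) -> QtCore.QRectF:
    # derive the rect from the real x extents (e.g. epoch seconds)
    # rather than assuming `1.0` unit steps between samples
    x0, x1 = float(x[0]), float(x[-1])
    ylo, yhi = float(np.nanmin(y)), float(np.nanmax(y))
    return QtCore.QRectF(x0, ylo, x1 - x0, yhi - ylo)
```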
After trying to hack epoch indexed time series and failing miserably,
decided to properly factor out all formatting routines into a common
subsystem API: ``IncrementalFormatter`` which provides the interface for
incrementally updating and tracking pre-path-graphics formatted data.
Previously this functionality was mangled into our `Renderer` (which
also does the work of `QPath` generation and update) but splitting it
out also preps for being able to do graphics-buffer downsampling and
caching on a remote host B)
The ``IncrementalFormatter`` (parent type) has the default behaviour of
tracking a single field-array on some source `ShmArray`, updating
a flattened `numpy.ndarray` in-mem allocation, and providing a default
1d conversion for pre-downsampling and path generation.
Moved out of / changed in `Renderer`,
- `.allocate_xy()`, `.update_xy()` and `.format_xy()` are all moved to
more explicitly named formatter methods.
- all `.x/y_data` nd array management and update
- "last view range" tracking
- `.last_read`, `.diff()`
- now calls `IncrementalFormatter.format_to_1d()` inside `.render()`
The new API gets,
- `.diff()`, `.last_read`
- all view range diff tracking through `.track_inview_range()`.
- better nd format array names: `.x/y_nd`, `xy_nd_start/stop`.
- `.format_to_1d()` which renders pre-path formatted arrays ready for
both m4 sampling and path gen.
- better explicit overloadable formatting method names:
* `.allocate_xy()` -> `.allocate_xy_nd()`
* `.update_xy()` -> `.incr_update_xy_nd()`
* `.format_xy()` -> `.format_xy_nd_to_1d()`
Finally, this implements per-graphics-type formatters which each define
their own related formatting routines:
- `OHLCBarsFmtr`: std multi-line style bars
- `OHLCBarsAsCurveFmtr`: draws an interpolated line for ohlc sampled data
- `StepCurveFmtr`: handles vlm style curves
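Schematically the new interface looks something like the following
sketch (method bodies are elided/simplified and signatures are
assumptions, not the exact `IncrementalFormatter` API):

```python
import numpy as np

class FormatterSketch:
    def __init__(
        self,
        shm,  # the source `ShmArray` (typed loosely here)
        index_field: str = 'index',
        data_field: str = 'close',
    ) -> None:
        self.shm = shm
        self.index_field = index_field
        self.data_field = data_field

        # pre-path-graphics ("nd") allocations and trackers
        self.x_nd: np.ndarray | None = None
        self.y_nd: np.ndarray | None = None
        self.xy_nd_start: int = 0
        self.xy_nd_stop: int = 0

        self.last_read: tuple | None = None
        self._last_vr: tuple[float, float] | None = None

    def diff(self, new_read: tuple) -> tuple:
        ...  # compute prepend/append deltas vs. `self.last_read`

    def track_inview_range(self, view_range: tuple[float, float]) -> bool:
        # report whether the in-view x-range changed since last call
        changed = view_range != self._last_vr
        self._last_vr = view_range
        return changed

    # overloadable hooks; defaults implement single-field flattening
    def allocate_xy_nd(self, src: np.ndarray) -> None:
        self.x_nd = src[self.index_field].astype(float)
        self.y_nd = src[self.data_field].astype(float)
        self.xy_nd_start, self.xy_nd_stop = 0, len(src)

    def incr_update_xy_nd(self, src: np.ndarray) -> None:
        ...  # in-place/extend update of the nd arrays from new data

    def format_xy_nd_to_1d(self) -> tuple[np.ndarray, np.ndarray]:
        # 1d output ready for m4 downsampling + `QPainterPath` gen
        return (
            self.x_nd[self.xy_nd_start:self.xy_nd_stop],
            self.y_nd[self.xy_nd_start:self.xy_nd_stop],
        )

    def format_to_1d(self, src: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        if self.x_nd is None:
            self.allocate_xy_nd(src)
        else:
            self.incr_update_xy_nd(src)
        return self.format_xy_nd_to_1d()
```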
More or less a revamp (and possibly first draft for something similar in
`tractor` core) which ensures all actor trees attempt to discover the
`pikerd` registry actor.
Implementation improvements include:
- new `Registry` singleton which houses the `pikerd` discovery
socket-address `Registry.addr` + an `open_registry()` manager which
provides bootstrapped actor-local access.
- refine `open_piker_runtime()` to do the work of opening a root actor
and call the new `open_registry()` depending on whether a runtime has
yet been bootstrapped.
- rejig `[maybe_]open_pikerd()` in terms of the above.
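A hedged sketch of the singleton + manager shape (the default sockaddr
and the "already bootstrapped" handling are assumptions here):

```python
from contextlib import asynccontextmanager as acm

class Registry:
    # actor-local `pikerd` discovery socket-address
    addr: tuple[str, int] | None = None

@acm
async def open_registry(addr: tuple[str, int] | None = None):
    # the first entrant sets the registry addr, later entrants read it back
    if Registry.addr is None:
        Registry.addr = addr or ('127.0.0.1', 6116)  # assumed default
    yield Registry.addr
```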
Allows running simultaneous data feed services on the same (linux) host
by avoiding file-name collisions; instead, shm buffer sets are keyed by
the given `brokerd` instance. This allows, for example, either multiple dev
versions of the data layer to run side-by-side or for the test suite to
be seamlessly run alongside a production instance.
Previously we were relying on implicit actor termination in
`maybe_spawn_daemon()` but really, on `pikerd` teardown, we should be
sure to tear down not only all service tasks in each actor but also the actor
runtimes. This adjusts `Services.cancel_service()` to only cancel the
service task scope and wait on the `complete` event and reworks the
`open_context_in_task()` inner closure body to,
- always cancel the service actor at exit.
- not call `.cancel_service()` (potentially causing recursion issues on
cancellation).
- allocate a `complete: trio.Event` to signal full task + actor termination.
- pop the service task from the `.service_tasks` registry.
Further, add a `maybe_set_global_registry_sockaddr()` helper-cm to do
the work of checking whether a registry socket address needs to be (or
already has been) set, and use it for discovery calls to the `pikerd`
service tree.
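Condensed, the teardown shape is roughly as follows (a sketch, not the
real `Services` implementation; the portal type and its
`.cancel_actor()` call come from `tractor`):

```python
import trio

class ServicesSketch:
    service_tasks: dict[str, tuple[trio.CancelScope, object, trio.Event]] = {}

    @classmethod
    async def cancel_service(cls, name: str) -> None:
        cs, portal, complete = cls.service_tasks[name]
        cs.cancel()            # only cancel the service task scope..
        await complete.wait()  # ..then wait for task + actor teardown

    @classmethod
    async def open_context_in_task(cls, name: str, portal) -> None:
        complete = trio.Event()
        with trio.CancelScope() as cs:
            cls.service_tasks[name] = (cs, portal, complete)
            try:
                await trio.sleep_forever()  # stand-in for the ctx stream
            finally:
                # always cancel the service actor at exit (never call
                # `.cancel_service()` from here to avoid recursion)
                with trio.CancelScope(shield=True):
                    await portal.cancel_actor()
                cls.service_tasks.pop(name)
                complete.set()  # signal full task + actor termination
```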
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when `end_dt=None`, indicating "get the latest bar". Add a breakpoint
block that should trigger whenever the latest bar time and the latest
epoch time are mismatched; we'll remove this after some testing
verifying the history bars issue is resolved.
Further this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
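Illustratively (not the actual `Client.bars()` body), the end-time
handling amounts to:

```python
from datetime import datetime, timedelta, timezone

def effective_end_dt(end_dt: datetime | None) -> datetime:
    # when no `end_dt` is passed ("get the latest bar") bump the end
    # time forward one bar period so the provider's round-down indexing
    # still includes the most recent bar
    if end_dt is None:
        return datetime.now(timezone.utc) + timedelta(minutes=1)
    return end_dt
```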
Always use `open_sample_stream()` to register fast and slow quote feed
buffers and get a sampler stream which we use to trigger
`Sampler.broadcast_all()` calls on the service side after backfill
events.
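Usage-wise this amounts to something like the following sketch (the
stream payload and the backfill helper name are assumptions, not the
actual protocol):

```python
async def backfill_then_broadcast(shm) -> None:
    async with open_sample_stream(period_s=1.0) as istream:
        await do_backfill(shm)  # hypothetical history backfill step
        # nudge the service side to run `Sampler.broadcast_all()` so
        # all registered fast/slow buffer consumers see the new data
        await istream.send('broadcast_all')
```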
`samplerd` is now spawned under the `pikerd` tree as
a singleton-daemon-actor and we offer a slew of new routines in support
of this micro-service:
- `maybe_open_samplerd()` and `spawn_samplerd()` which provide the
`._daemon.Services` integration to conduct service spawning.
- `open_sample_stream()`, a client-side endpoint which does all the
work of (lazily) starting the `samplerd` service (if it doesn't yet
exist), registering shm buffers for update, and connecting
a sample-index stream for iteration by the caller.
- `register_with_sampler()` which is the `samplerd`-side service task
endpoint implementing all the shm buffer and index-stream registry
details as well as logic to ensure a lone service task runs
`Services.increment_ohlc_buffer()`; it increments at the shortest
registered period which, for now, is the default 1s duration.
Further impl notes:
- fixes to `Services.broadcast()` to ensure broken streams get discarded
gracefully.
- we use a `pikerd`-side singleton mutex (`trio.Lock()`) to ensure
one-and-only-one `samplerd` is ever spawned per `pikerd` actor tree.
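The spawn-guard portion reduces to roughly this (a sketch;
`spawn_samplerd()` is stubbed and the real version goes through
`Services`):

```python
import trio

_samplerd_lock = trio.Lock()   # pikerd-side singleton mutex
_samplerd_up: bool = False

async def spawn_samplerd() -> None:
    ...  # stand-in for the real `Services`-integrated spawner

async def maybe_open_samplerd_sketch() -> None:
    global _samplerd_up
    async with _samplerd_lock:
        # only the first entrant ever spawns; later callers just pass
        if not _samplerd_up:
            await spawn_samplerd()
            _samplerd_up = True
```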
Drop the `_services` module-level ref and adjust all client code to
match. Drop struct inheritance and convert all methods to class level.
Move `Brokerd.locks` -> `Services.locks` and add the sampling mod to the
`pikerd` enabled-modules set.
We're moving toward a single actor managing sampler work, distributed
independently of `brokerd` services, such that a user can run samplers
on different hosts than the real-time data feed infra. Most of the
implementation details involve aggregating `.data._sampling` routines
into a new `Sampler` singleton type.
Move the following methods to class methods:
- `.increment_ohlc_buffer()` to allow a single task to increment all
registered shm buffers.
- `.broadcast()` for IPC relay to all registered clients/shms.
Further add a new `maybe_open_global_sampler()` which allocates
a service nursery and assigns it to the `Sampler.service_nursery`; this
is prep for putting the step incrementer in a singleton service task
higher up the data-layer actor tree.
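A schematic of the class-level singleton state (simplified container
and payload types; not the full `Sampler` implementation):

```python
import trio

class SamplerSketch:
    service_nursery: trio.Nursery | None = None
    ohlcv_shms: dict[float, list] = {}   # period (s) -> registered shm bufs
    subscribers: dict[float, set] = {}   # period (s) -> client streams

    @classmethod
    async def increment_ohlc_buffer(cls, lowest_period_s: float = 1.0) -> None:
        # a single task increments *all* registered shm buffers, waking
        # at the shortest registered period (1s by default)
        while True:
            await trio.sleep(lowest_period_s)
            for period_s, shms in cls.ohlcv_shms.items():
                ...  # append a new row to each buffer whose period elapsed

    @classmethod
    async def broadcast(cls, period_s: float) -> None:
        # IPC relay of the new index to all registered clients
        for stream in cls.subscribers.get(period_s, set()).copy():
            try:
                await stream.send({'index': None})  # payload shape assumed
            except Exception:
                # broken streams get discarded gracefully
                cls.subscribers[period_s].discard(stream)
```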
When we see multiple history frames that are duplicates of the
requested set, bail on retrying after a number of tries (6 just cuz) and
return early from the tsdb backfill loop; presume that this many
duplicates means we've hit the beginning of history. Use
a `collections.Counter` for the duplicate counts. Make sure to warn-log
in such cases.
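The bail logic is essentially (a toy version of what the backfill loop
does):

```python
from collections import Counter

MAX_DUPLICATES = 6  # "6 just cuz"
dupe_counts: Counter = Counter()

def hit_start_of_history(frame_start) -> bool:
    # count how many times we've received the same (duplicate) frame
    # for this start time; too many repeats => presume start of history
    dupe_counts[frame_start] += 1
    return dupe_counts[frame_start] >= MAX_DUPLICATES
```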