It turns out (I guess not so shockingly?) that `marketstore` doesn't
always tear down gracefully under SIGINT; it seems to hang if there are
open client connections which are themselves mid-teardown. So instead
this first tries the SIGINT and then fails over to a SIGKILL (destroy
loop), which is much more reliable for ensuring shutdown and has no real
downside beyond being a "hard kill".
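For reference, a minimal sketch of that failover pattern using
`docker-py` (the container name, timeouts and retry count here are made
up, not the actual supervision code):

```python
import time
import docker

client = docker.from_env()
cntr = client.containers.get('marketstored')  # hypothetical container name

try:
    cntr.kill(signal='SIGINT')  # ask nicely first..
    cntr.wait(timeout=3)  # raises if the daemon is still up
except Exception:
    # graceful shutdown hung (likely on mid-teardown client conns)
    # so fail over to the hard-kill (destroy) loop.
    for _ in range(3):
        try:
            cntr.kill(signal='SIGKILL')
            cntr.wait(timeout=1)
            break
        except Exception:
            time.sleep(0.5)
```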
Originally I thought the issue was root-perms related (perms get
relegated solely to the `marketstored` daemon actor after spawn) but it
was actually the signalling / application layer causing the
hold-up/latency on teardown. There's a bunch of lingering (now
commented) code which tried to solve this non-problem, as well as a
bunch of logging/prints added to help decipher the root of the issue;
this will all get cleaned out shortly.
If `marketstore` is detected, try to load only the most recent missing
data from the data provider (broker) and the rest from the tsdb, then
push it all to shm for display in the UI. If the provider/broker doesn't
have the history client endpoint, just use the old one for now so we can
start adding support incrementally. Don't start the ohlc step
incrementer task until the backend signals that the feed is live.
Add some basic `numpy` epoch-slice logic to generate append and prepend
arrays to write to the db, roughly as sketched below.
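A rough sketch of that slice logic, assuming structured arrays with an
epoch `'time'` field (names are illustrative):

```python
import numpy as np

def slice_missing(
    tsdb: np.ndarray,  # series already stored in the db
    shm: np.ndarray,   # what the feed/broker just gave us
) -> tuple[np.ndarray, np.ndarray]:
    first, last = tsdb['time'][0], tsdb['time'][-1]
    prepend = shm[shm['time'] < first]  # history older than the db's
    append = shm[shm['time'] > last]    # new rows to write to the db
    return prepend, append
```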
Mooar cool things,
- add a `Storage.delete_ts()` method to easily wipe a column series
from the db (see the sketch after this list).
- don't attempt to read in any OHLC series by default on client load
- add some `pyqtgraph` profiling and drop manual latency measures
- if no db series exists for the fqsn, write the entire shm array
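A guess at the shape of `delete_ts()` over the synchronous
`pymarketstore` client (the real wrapper is async and structured
differently):

```python
import pymarketstore as pymkts

class Storage:
    def __init__(self) -> None:
        self.client = pymkts.Client()

    def delete_ts(self, key: str, timeframe: str = '1Min') -> None:
        # wipe the whole column series for this "time bucket key"
        self.client.destroy(f'{key}/{timeframe}/OHLCV')
```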
Also, start tinkering with `tractor.trionics.ipython_embed()`.
In an effort to get back to a usable REPL around the mkts client,
this adds usage of the new `tractor` integration api as well as logic
for skipping backfilling when existing tsdb arrays are found.
Starts a wrapper around the `marketstore` client to do basic ohlcv
queries and retrieval (see the sketch after the list below) and
prototypes out write methods for ohlc and tick data. Try to connect to
`marketstore` automatically (which will fail if it isn't yet started)
though eventually we'll first do a service query.
Further:
- get `pikerd` working with and without `--tsdb` flag.
- support spawning `brokerd` with no real-time quotes.
- bring back "fqsn" support that was lost from this history during
commit factoring.
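What a basic ohlcv query looks like via the synchronous `pymarketstore`
client (the wrapper being prototyped is async; symbol/timeframe values
are just examples):

```python
import pymarketstore as pymkts

client = pymkts.Client('http://localhost:5993/rpc')
params = pymkts.Params('BTCUSD', '1Min', 'OHLCV', limit=100)
reply = client.query(params)
ohlcv = reply.first().df()  # most recent bars as a pandas.DataFrame
print(ohlcv.tail())
```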
Not sure how I missed mapping the 5995 grpc port 🤦; done now.
Also adds graceful teardown via SIGINT, with container logging relayed
to the piker console B).
Found this caused breakage on `kraken` orders which triggered the
"insufficient funds" error response. Makes sense since they won't
generate an order id if the order can't ever be submitted.
The most important changes include:
- iterating the new `Flow` type and updating graphics
- adding detailed profiling
- increasing the min uppx before graphics updates are throttled
- including the L1 spread in y-range calcs so that the bid/ask never
goes "out of view"..
- pass around `Flow`s instead of shms
- drop all the old prototyped downsampling code
If manually managing an overlay you'll likely call `.overlay_plotitem()`
and then a plotting method, so we need to accept a plot item input such
that the chart's pi doesn't get assigned incorrectly in the `Flow` entry
(though it is by default when no input is provided).
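The gist of it, stripped down (method and field names here are
approximations of the chart API, not verbatim):

```python
import pyqtgraph as pg

class ChartPlotWidget(pg.PlotWidget):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._flows: dict[str, dict] = {}

    def draw_curve(
        self,
        name: str,
        data,
        pi: pg.PlotItem | None = None,
    ) -> pg.PlotCurveItem:
        # default to the chart's own plot item only when the caller
        # isn't managing an overlay via `.overlay_plotitem()`
        pi = pi or self.plotItem
        curve = pg.PlotCurveItem(data)
        pi.addItem(curve)
        # record the *correct* pi in the flow entry
        self._flows[name] = {'plot': pi, 'graphics': curve}
        return curve
```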
More,
- add a `Flow.graphics` field and set it to the `pg.GraphicsObject`.
- make `Flow.maxmin()` return `None` in the "can't calculate" cases.
- set shm refs on `Flow` entries.
- don't run a graphics cycle on 'update' msgs from the engine
if the containing chart is hidden.
- drop `volume` from flows map and disable auto-yranging
once $vlm comes up.
Allows removing the resize callbacks for a flow/overlay that you wish
to remove from view (eg. unit volume after dollar volume is up), thus
cutting the general interaction-callback overhead for any plot you don't
wish to show or resize (see the sketch after this list).
Further,
- drop the `autoscale_linked_plots` block for now since, with
multi-view-box overlays, each vb registers its own resize slots
- pull the graphics object from the chart's `Flow` map inside
`.maybe_downsample_graphics()`
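The removal itself is just a Qt signal disconnect on the overlay's view
box, something like (hypothetical helper):

```python
def disconnect_resize_cb(vb, slot) -> None:
    # each overlay's view box registers its own resize slot, so
    # dropping the overlay means disconnecting that slot to avoid
    # pointless per-resize work for hidden plots.
    try:
        vb.sigResized.disconnect(slot)
    except TypeError:
        pass  # already disconnected (PyQt raises `TypeError` here)
```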
This new type wraps a shm data flow and will eventually include things
like incremental path-graphics updates and serialization + bg downsampling
techniques. The main immediate motivation was to get a cached y-range max/min
calc going, since profiling revealed the `numpy` equivalents are
actually quite slow as the data set grows large. Likely we can use all
this to drive a streaming mx/mn routine that's always launched as part
of each on-host flow.
This is our official foray into use of `msgspec.Struct` B) and I have to
say, pretty impressed; we'll likely completely ditch `pydantic` from
here on out.
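A condensed sketch of the idea (not the actual field set), showing the
`msgspec.Struct` style plus the cached y-range calc:

```python
import msgspec
import numpy as np

class Flow(msgspec.Struct):
    name: str
    shm: np.ndarray  # stand-in for the real shm array wrapper
    mxmns: dict = msgspec.field(default_factory=dict)

    def maxmin(self, lbar: int, rbar: int) -> tuple[float, float] | None:
        '''Cached y-range (min, max) for the given x-range.'''
        key = (lbar, rbar)
        if key not in self.mxmns:
            slc = self.shm[lbar:rbar]
            if not slc.size:
                return None  # the "can't calculate" case
            self.mxmns[key] = (
                float(slc['low'].min()),
                float(slc['high'].max()),
            )
        return self.mxmns[key]
```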
We don't need to update graphics on every x-range change since that's
what the display loop does. Instead, only on manual changes do we make
explicit calls into `.update_graphics_from_array()`, being sure to
iterate all linked subplots and all their embedded graphics.
The pg profiler seems to have trouble with early `return`s in function
calls (likely muckery with the GC/`__del__()`) so let's just try to
avoid them for now until we either fix it (probably by implementing it
as a ctx mngr) or use a different one.
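The workaround pattern until then: funnel everything through a single
exit point so the profiler instance finalizes (and prints) predictably:

```python
import pyqtgraph as pg

def update_graphics(data) -> float | None:
    profiler = pg.debug.Profiler(disabled=False, delayed=False)
    result = None
    if data is not None:  # instead of an early `return` here..
        profiler('validated input')
        result = sum(data)  # ..do the real work..
        profiler('computed result')
    # ..then fall through to one exit so the profiler's
    # report isn't cut short.
    return result
```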
Ugh, turns out the wacky `ChartView.maxmin` callback stuff we did (for
determining y-range sizings) currently requires that the volume array
has a "bars in view" result.. so let's keep that working without
rendering the graphics for the curve (since we're disabling them once
$vlm comes up).
As with the `BarItems` graphics, this makes it possible to pass in an
"in view" range of array data such that *only* that range is rendered,
improving performance for large(r) data sets. All the other normal
behaviour is kept (i.e. a persistent, (pre/ap)pendable path can still be
maintained) if a ``view_range`` is not provided; see the sketch after
the list below.
Further updates,
- drop the `.should_ds_or_redraw()` and `.maybe_downsample()`
predicates, instead moving all that redraw/downsampling logic inside
`.update_from_array()`.
- disable the "cache flipping", which no longer seems to be needed to
avoid artifacts?
- even more profiling.
- drop path `.reserve()` stuff until we better figure out how it's
supposed to work.
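The core of the "in view"-only rendering, sketched with
`pg.functions.arrayToQPath` (the real code additionally maintains the
persistent/appendable paths):

```python
import numpy as np
import pyqtgraph as pg

def path_in_view(
    x: np.ndarray,
    y: np.ndarray,
    view_range: tuple[int, int] | None = None,
) -> pg.QtGui.QPainterPath:
    if view_range is not None:
        l, r = view_range
        # clip to the visible x-range so we only build (and thus
        # only ever draw) what's actually on screen
        x, y = x[l:r], y[l:r]
    return pg.functions.arrayToQPath(x, y, connect='all')
```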
Drop all the logic originally in `.update_ds_line()` which is now
handled internally by our `FastAppendCurve`. Add incremental update of
the flattened OHLC -> line curve (unfortunately using `np.concatenate()`
for the moment) and maintain a new `._ds_line_xy` arrays tuple which
keeps the internal state. Add `.maybe_downsample()` as per the new
interaction update method requirement. Draft out some fast path curve
stuff like in our line graphic. Short-circuit bars path updates when we
downsample to the line. Oh, and add a ton more profiling in prep for
getting all this stuff fast af.
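Roughly what the flatten step does: turn each OHLC bar into 4 line
points so the downsampled "line mode" curve tracks the bars (assumes a
structured array with 'open'/'high'/'low'/'close' fields):

```python
import numpy as np

def flatten_ohlc(ohlc: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # -> shape (len(ohlc), 4) then raveled to bar order:
    # open, high, low, close, open, high, ...
    y = np.array(
        ohlc[['open', 'high', 'low', 'close']].tolist()
    ).flatten()
    # 4 evenly spaced x points per bar index
    x = np.arange(len(y)) / 4
    return x, y
```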
Build out an interface that makes it super easy to downsample curves
using the m4 algorithm while keeping our incremental `QPainterPath`
update feature. A lot of hard work and tinkering went into getting this
working all in-thread correctly and there are quite a few details (the
core binning idea is sketched after the lists below)..
New interface methods:
- `.x_uppx()` which returns the x-axis "view units per pixel" (see the
sketch after this list)
- `.px_width()` which returns the total (rounded) x-axis pixels spanned
by the curve in view.
- `.should_ds_or_redraw()` a predicate which checks internal state to
see whether the curve should be downsampled, or should have all
downsampling removed and be redrawn from source array data.
- `.downsample()` the actual ds processing routine which delegates into
the m4 algo impl.
- `.maybe_downsample()` a simple update method which can be called by
the view box when the user changes the zoom level.
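For reference, `pyqtgraph` can already answer the pixel-scaling
questions via the view box, so the first two methods boil down to
roughly:

```python
import pyqtgraph as pg

def x_uppx(vb: pg.ViewBox) -> float:
    # `ViewBox.viewPixelSize()` returns the (w, h) of one screen
    # pixel in view (data) coordinates: its x component is exactly
    # the "view units per pixel".
    return vb.viewPixelSize()[0]

def px_width(vb: pg.ViewBox, x0: float, x1: float) -> int:
    # total (rounded) x-axis pixels spanned by [x0, x1] in view
    return round((x1 - x0) / x_uppx(vb))
```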
Implementation details/changes:
- make `.update_from_array()` check whether downsample (or
revert-to-source, aka de-downsample) conditions exist and then
downsample and re-draw path graphics accordingly.
- in order to further speed up path appends (since our main bottleneck
is measured to be `QPainter.drawPath()` calls with large paths which are
frequently updated), add a secondary path `.fast_path` which is updated
in real time by incremental appends and painted separately, for speed,
in `.paint()`.
- drop all the `QPolyLine` stuff since it was tested to be much slower
in general and especially so for append-updates.
- stop disabling the cache settings on updates since it no longer seems
to be required?
- move further toward deprecating and removing all lingering interface
requirements from `pg.PlotCurveItem` (like `.xData`/`.yData`).
- adjust `.paint()` and `.boundingRect()` to compensate for the new
`.fast_path`
- add a butt-load of profiling B)
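And the promised sketch of the m4 binning idea (heavily simplified: the
real impl is vectorized and sits behind `.downsample()`):

```python
import numpy as np

def m4_downsample(
    x: np.ndarray,
    y: np.ndarray,
    uppx: float,  # x units per pixel, i.e. `.x_uppx()`
) -> tuple[np.ndarray, np.ndarray]:
    # assign each sample to a pixel-column bin
    bins = ((x - x[0]) // uppx).astype(int)
    xs, ys = [], []
    for b in np.unique(bins):
        i = np.nonzero(bins == b)[0]
        # keep first, min, max and last of each bin: enough
        # points to draw a visually identical curve per column
        keep = np.unique(
            [i[0], i[y[i].argmin()], i[y[i].argmax()], i[-1]]
        )
        xs.append(x[keep])
        ys.append(y[keep])
    return np.concatenate(xs), np.concatenate(ys)
```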
Pretty sure this was most of the cause of the stale (more downsampled)
curves showing when zooming in and out of bars mode quickly. All this
stuff needs to get factored out into a new abstraction anyway, but I
think this gets the functionality mostly correct.
Only draw the new ds curve on uppx steps >= 4 and stop adding/removing
graphics objects from the scene, since that doesn't seem to speed
anything up afaict. Add better reporting of ds scale changes.