Found this caused breakage on `kraken` orders which triggered the
"insufficient funds" error response. Makes sense since they won't
generate an order id if the order can't ever be submitted.
The most important changes include:
- iterating the new `Flow` type and updating graphics
- adding detailed profiling
- increasing the min uppx before graphics updates are throttled
- including the L1 spread in y-range calcs so that you never have the
bid/ask go "out of view"..
- pass around `Flow`s instead of shms
- drop all the old prototyped downsampling code
If manually managing an overlay you'll likely call `.overlay_plotitem()`
and then a plotting method, so we need to accept a plot item input so
that the chart's pi doesn't get assigned incorrectly in the `Flow` entry
(though it is by default if no input is provided).
More,
- add a `Flow.graphics` field and set it to the `pg.GraphicsObject`.
- make `Flow.maxmin()` return `None` in the "can't calculate" cases.
- set shm refs on `Flow` entries.
- don't run a graphics cycle on 'update' msgs from the engine
if the containing chart is hidden.
- drop `volume` from flows map and disable auto-yranging
once $vlm comes up.
Allows for removing resize callbacks for a flow/overlay that you wish to
remove from view (eg. unit volume after dollar volume is up), thus
reducing general interaction callback overhead for any plot you don't
wish to show or resize.
Further,
- drop the `autoscale_linked_plots` block for now since with
  multi-view-box overlays each view box registers its own vb resize slots
- pull the graphics object from the chart's `Flow` map inside
`.maybe_downsample_graphics()`
This new type wraps a shm data flow and will eventually include things
like incremental path-graphics updates and serialization + bg downsampling
techniques. The main immediate motivation was to get a cached y-range max/min
calc going since profiling revealed the `numpy` equivalents were
actually quite slow as the data set grows large. Likely we can use all
this to drive a streaming mx/mn routine that's always launched as part
of each on-host flow.
This is our official foray into use of `msgspec.Struct` B) and I have to
say, pretty impressed; we'll likely completely ditch `pydantic` from
here on out.
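Roughly, the shape of the thing - a hypothetical sketch using `msgspec.Struct` (field names assumed, not the in-tree def):

```python
from typing import Any, Optional

import msgspec
import numpy as np


class Flow(msgspec.Struct):
    '''
    Hypothetical sketch of a shm-wrapping flow type with a cached
    y-range calc; field names are assumptions.

    '''
    name: str
    plot: Any               # owning `pg.PlotItem`
    shm: Any                # backing `ShmArray`
    graphics: Any = None    # the `pg.GraphicsObject` for this flow
    _mxmns: dict = {}       # msgspec treats mutable defaults as per-instance factories

    def maxmin(self, lbar: int, rbar: int) -> Optional[tuple[float, float]]:
        # cached y-range for a viewed x-range; `None` in "can't calculate" cases
        key = (lbar, rbar)
        cached = self._mxmns.get(key)
        if cached is not None:
            return cached

        slc = self.shm.array[lbar:rbar]
        if not len(slc):
            return None

        # assumes the flow name matches a struct-array field
        mxmn = float(np.nanmin(slc[self.name])), float(np.nanmax(slc[self.name]))
        self._mxmns[key] = mxmn
        return mxmn
```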
We don't need to update graphics on every x-range change since that's
what the display loop does. Instead, only on manual changes do we call
into `.update_graphics_from_array()`, making sure to iterate all linked
subplots and all their embedded graphics.
The pg profiler seems to have trouble with early `return`s in function
calls (likely muckery with the GC/`.__delete__()`) so let's just try
to avoid them for now until we either fix it (probably by implementing
it as a ctx mngr) or use a different one.
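For reference, the ctx mngr idea is basically this (a hand-rolled sketch, not the pg profiler itself):

```python
from contextlib import contextmanager
from time import perf_counter


@contextmanager
def profile_block(msg: str):
    start = perf_counter()
    try:
        yield
    finally:
        # always report, no matter how the wrapped code exits
        print(f'{msg}: {(perf_counter() - start) * 1e3:.3f} ms')


def update_graphics(data) -> None:
    with profile_block('update_graphics'):
        if not len(data):
            return  # an early return no longer skips the report
        ...
```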
Ugh, turns out the wacky `ChartView.maxmin` callback stuff we did (for
determining y-range sizings) currently requires that the volume array
has a "bars in view" result.. so let's make that keep working without
rendering the graphics for the curve (since we're disabling them once
$vlm comes up).
As with the `BarItems` graphics, this makes it possible to pass in an
"in view" range of array data such that *only* that range is rendered,
improving performance for large(r) data sets. All the other normal
behaviour is kept (i.e. a persistent, (pre/ap)pendable path can still be
maintained) if a ``view_range`` is not provided.
Further updates,
- drop the `.should_ds_or_redraw()` and `.maybe_downsample()` predicates,
  instead moving all that logic inside `.update_from_array()`.
- disable the "cache flipping", which doesn't seem to be needed to avoid
artifacts any more?
- handle all redraw/downsampling logic in `.update_from_array()`.
- even more profiling.
- drop path `.reserve()` stuff until we better figure out how it's
supposed to work.
Drop all the logic originally in `.update_ds_line()` which is now done
internal to our `FastAppendCurve`. Add incremental update of the
flattened OHLC -> line curve (unfortunately using `np.concatenate()` for
the moment) and maintain a new `._ds_line_xy` arrays tuple which keeps
the internal state. Add `.maybe_downsample()` as per the new interaction
update method requirement. Draft out some fast path curve stuff like in
our line graphic. Short-circuit bars path updates when we downsample to
line. Oh, and add a ton more profiling in prep for getting
all this stuff faf.
Build out an interface that makes it super easy to downsample curves
using the m4 algorithm while keeping our incremental `QPainterPath`
update feature. A lot of hard work and tinkering went into getting this
working all in-thread correctly and there are quite a few details..
New interface methods:
- `.x_uppx()` which returns the x-axis "view units per pixel"
- `.px_width()` which returns the total (rounded) x-axis pixels spanned
  by the curve in view (a rough calc sketch for these two follows this list).
- `.should_ds_or_redraw()` a predicate which checks internal state to
see if either downsampling of the curve should take place, or the curve
should have all downsampling removed and be redrawn with source array
data.
- `.downsample()` the actual ds processing routine which delegates into
the m4 algo impl.
- `.maybe_downsample()` a simple update method which can be called by
the view box when the user changes the zoom level.
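Roughly, the first two calcs can be derived straight from the view box; a hedged sketch using `pyqtgraph`'s stock `ViewBox` helpers (not the in-tree impl):

```python
import pyqtgraph as pg


def x_uppx(graphic: pg.GraphicsObject) -> float:
    # x-axis view-units spanned by one horizontal screen pixel
    vb = graphic.getViewBox()
    if vb is None:
        return 0
    xpx, _ypx = vb.viewPixelSize()
    return xpx


def px_width(graphic: pg.GraphicsObject) -> int:
    # total (rounded) horizontal pixels spanned by the viewed x-range
    vb = graphic.getViewBox()
    if vb is None:
        return 0
    (x0, x1), _ = vb.viewRange()
    uppx = x_uppx(graphic)
    return round((x1 - x0) / uppx) if uppx else 0
```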
Implementation details/changes:
- make `.update_from_array()` check for downsample (or revert to source
aka de-downsample) conditions exist and then downsample and re-draw
path graphics accordingly.
- in order to even further speed up path appends (since our main
  bottleneck is measured to be `QPainter.drawPath()` calls with large
  paths which are frequently updated), add a secondary path `.fast_path`
  which is the path that is real-time updated by incremental appends and
  which is painted separately for speed in `.paint()`.
- drop all the `QPolyLine` stuff since it was tested to be much slower
in general and especially so for append-updates.
- stop disabling the cache settings on updates since it doesn't seem to
be required any more?
- move further toward deprecating and removing all lingering interface
requirements from `pg.PlotCurveItem` (like `.xData`/`.yData`).
- adjust `.paint()` and `.boundingRect()` to compensate for the new
`.fast_path`
- add a butt-load of profiling B)
Pretty sure this was most of the cause of the stale (more downsampled)
curves showing when zooming in and out from bars mode quickly. All this
stuff needs to get factored out into a new abstraction anyway, but
i think this gets the functionality mostly correct.
Only draw new ds curve on uppx steps >= 4 and stop adding/removing
graphics objects from the scene; doesn't seem to speed anything up
afaict. Add better reporting of ds scale changes.
Only if the uppx increases by more than 2 do we redraw the entire line,
otherwise we just ds with the previous params and update the current curve.
This *should* avoid strange lower sample rate artefacts from showing on
updates.
Summary:
- stash both uppx and px width in `._dsi` (downsample info)
- use the new `ohlc_to_m4_line()` flags
- add notes about using `.reserve()` and friends
- always delete last `._array` ref prior to line updates
In an effort to try and make `QPainterPath.reserve()` work, add internal
logic to use the same object without de-allocating memory from
a previous path write/creation.
Note this required the addition of a `._redraw` flag (to be used in
`.clear()`) and a small patch to `pyqtgraph.functions.arrayToQPath` to
allow passing in an existing path (thus reusing the same underlying mem
alloc) which will likely be first pushed to our fork.
We were previously ad-hoc scaling up the px count/width to get more
detail at lower uppx values. Add a log scaling sigmoid that range scales
between 1 < px_width < 16.
Add in a flag to use the mxmn OH tracer in `ohlc_flatten()` if desired.
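The scaling idea is roughly the following (constants here are illustrative, not the exact curve used):

```python
from math import exp, log10


def ds_px_scale(uppx: float, k: float = 2.0, x0: float = 1.0) -> float:
    # logistic curve over log10(uppx), range-scaled into (1, 16):
    # more px detail at low uppx, approaching 1x when zoomed way out
    x = log10(max(uppx, 1e-9))
    return 1 + 15 / (1 + exp(k * (x - x0)))
```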
Helpers to quickly convert ohlc struct-array sequences into lines
for consumption by the m4 downsampler. Strip trailing zero entries
from the `ds_m4()` output if found (avoids lines back to origin).
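The flattening idea, sketched (field and start-index conventions are assumptions):

```python
import numpy as np


def ohlc_flatten(ohlc: np.ndarray, start: float = 0) -> tuple[np.ndarray, np.ndarray]:
    # interleave open, high, low, close -> 4 evenly spaced y-samples per bar
    y = np.column_stack(
        [ohlc[f] for f in ('open', 'high', 'low', 'close')]
    ).ravel()
    x = start + np.arange(len(y)) / 4
    return x, y


def strip_trailing_zeros(x: np.ndarray, y: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # drop any zero-padded tail so the line doesn't trace back to the origin
    nz = np.flatnonzero(y)
    end = int(nz[-1]) + 1 if len(nz) else 0
    return x[:end], y[:end]
```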
This makes the `'r'` hotkey snap the last bar to the middle of the pp
line arrow marker no matter the zoom level. Now we also boot with
approximately the largest number of x units on screen that keeps the bars
graphics drawn in full (just before downsampling to a line).
Moved some internals around to get this all in place,
- drop `_anchors.marker_right_points()` and move it to a chart method.
- change `.pre_l1_x()` -> `.pre_l1_xs()` and just have it return the
two view-mapped x values from the former method.
Instead of using a guess about how many x-indexes to reset the last
datum in-view to, calculate and shift the latest index such that it's
just before any L1 spread labels on the y-axis. This makes the view
placement "widget aware" and gives a much more consistent cross-display UX.
Summary:
- add `ChartPlotWidget.pre_l1_x()` which returns a `tuple` of
x view-coord points for the absolute x-pos and length of any L1
line/labels
- make `.default_view()` only shift to see the xlast just outside
the l1 but keep whatever view range xfirst as the first datum in view
- drop `LevelLine.right_point()` since this is now just a
`.pre_l1_x()` call and can be retrieved from the line's internal chart
ref
- drop `._style.bars_from/to_..` vars since we aren't using hard coded
offsets any more
`ChartPlotWidget.curve_width_pxs()` now can be used to get the total
horizontal (x) pixels on screen that are occupied by the current curve
graphics for a given chart. This will be used for downsampling large
data sets to the pixel domain using M4.
Probably the best place to root the profiler since we can get a better
top down view of bottlenecks in the graphics stack.
More,
- add in draft M4 downsampling code (commented) after getting it mostly
working; next step is to move this processing into an FSP subactor.
- always update the vlm chart last y-axis sticky
- call `.default_view()` just before the inf sleep on startup
Obviously determining the x-range from indices was wrong and was the
reason for the incorrect (downsampled) output size XD. Instead correctly
determine the x range and start value from the *values of* the input
x-array. Pretty sure this makes the implementation nearly production
ready.
Relates to #109
All the refs are in the comments and original sample code from infinite
has been reworked to expect the input x/y arrays to already be sliced
(though we can later support passing in the start-end indexes if
desired).
The new routines are `ds_m4()` the python top level API and `_m4()` the
fast `numba` implementation.
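For reference, the core M4 idea in plain `numpy` (a naive sketch; the real `_m4()` is the optimized `numba` routine):

```python
import numpy as np


def m4_sketch(
    x: np.ndarray,
    y: np.ndarray,
    px_width: int,
) -> tuple[np.ndarray, np.ndarray]:
    # determine the x-range from the *values* of the x-array (not indices)
    x0, x1 = float(x[0]), float(x[-1])
    if px_width < 1 or x1 <= x0:
        return x, y

    # map each x value to its pixel-column bin
    bins = np.minimum(
        ((x - x0) / (x1 - x0) * px_width).astype(int),
        px_width - 1,
    )

    xs, ys = [], []
    for b in range(px_width):
        mask = bins == b
        if not mask.any():
            continue
        xb, yb = x[mask], y[mask]
        # keep only the first, min, max and last sample per pixel column
        for i in (0, int(np.argmin(yb)), int(np.argmax(yb)), len(yb) - 1):
            xs.append(xb[i])
            ys.append(yb[i])

    return np.asarray(xs), np.asarray(ys)
```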
- the chart's uppx (units-per-pixel) is > 4 (i.e. zoomed out a lot)
- don't shift the chart (to keep the most recent step in view) if the
last datum isn't in view (aka the user is probably looking at history)
When a bars graphic is zoomed out enough you get a high uppx, datum
units-per-pixel, and there is no point in drawing the 6-lines in each
bar element-graphic if you can't see them on the screen/display device.
Instead here we offer converting to a `FastAppendCurve` which traces
the high-low outline and displaying that when it's impossible to see the
details of bars - approximately when the uppx >= 2.
There is also some draft-commented code in here for downsampling the
outlines as zoom level increases but it's not fully working and should
likely be factored out into a higher level api anyway.
In effort to start getting some graphics speedups as detailed in #109,
this adds a `FastAppendCurve` to every `BarItems` as a `._ds_line` which
is only displayed (instead of the normal multi-line bars curve) when the
"width" of a bar is indistinguishable on screen from a line -> so once
the view coordinates map to > 2 pixels on the display device.
`BarItems.maybe_paint_line()` takes care of this scaling detection logic and is
called by the associated view's `.sigXRangeChanged` signal handler.
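The detection logic is roughly this (names here are illustrative, not the exact in-tree code):

```python
import pyqtgraph as pg


def maybe_paint_line(bars: pg.GraphicsObject, ds_line: pg.GraphicsObject) -> None:
    vb = bars.getViewBox()
    if vb is None:
        return
    uppx, _ = vb.viewPixelSize()  # x data-units spanned by one pixel
    if uppx >= 2:
        # a bar's width is now ~sub-pixel: swap to the hi/lo outline curve
        bars.hide()
        ds_line.show()
    else:
        ds_line.hide()
        bars.show()


# hooked up to zoom/pan changes, roughly:
# view.sigXRangeChanged.connect(lambda *_: maybe_paint_line(bars, ds_line))
```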
The graphics update loop is much easier to grok when all the UI
components which potentially need to be updated on a cycle are arranged
together in a high-level composite namespace, thus this new
`DisplayState` addition. Create and set this state on each
`LinkedSplits` chart set and add a new method `.graphics_cycle()` which
lets a caller trigger a graphics loop update manually. Use this method
in the fsp graphics manager such that a chain can update new history
output even if there is no real-time feed driving the display loop (eg.
when a market is "closed").
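Roughly the shape of such a namespace (field names here are assumptions, not the in-tree def):

```python
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class DisplayState:
    # everything the per-cycle graphics update touches, in one place
    godwidget: Any                       # top level UI container
    quotes: dict[str, Any]               # most recent quote(s) keyed by symbol
    ohlcv: Any                           # shm array backing the main chart
    chart: Any                           # the main price chart widget
    vlm_chart: Optional[Any] = None      # volume subchart, if any
    maxmin: Optional[Any] = None         # cached y-range calc callable
    flows: dict[str, Any] = field(default_factory=dict)
```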
As per https://github.com/erdewit/ib_insync/pull/454 the more correct
way to do this is with `.reqContractDetailsAsync()` which we wrap with
`Client.con_deats()` and which works just as well. Further drop all the
`dict`-ifying that was being done in that method and instead always
return `ContractDetails` objects in an fqsn-like, explicitly keyed `dict`.
ib has a throttle limit for "hft" bars but contained in here is some
hackery using ``xdotool`` to reset data farms auto-magically B)
This copies the working script into the ib backend mod as a routine and
now uses `trio.run_process()` and calls into it from the `get_bars()`
history retriever and then waits for "data re-established" events to be
received from the client before making more history queries.
TL;DR summary of changes:
- relay ib's "system status" events (like for data farm statuses)
as a new "event" msg that can be processed by registers of
`Client.inline_errors()` (though we should probably make a new
method for this).
- add `MethodProxy.status_event()` which allows a proxy user to register
  for a particular "system event" (as mentioned above), which puts
  a `trio.Event` entry in a small table that can be set by a relay task if
  there are any detected waiters (see the sketch after this list).
- start a "msg relay task" when opening the method proxy which does
the event setting mentioned above in the background.
- drop the request error handling around the proxy creation, doesn't
seem necessary any more now that we have better error propagation from
`asyncio`.
- add event waiting logic around the data feed reset hackzorin.
- change the order relay task to only log system events for now (though
  we need to do some better parsing/logic to get tws-external order
  updates to work again..)
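The `status_event()` + relay-task handshake mentioned above looks roughly like this (attr names assumed):

```python
import trio


class MethodProxy:
    # sketch of just the event-table bits, not the full proxy type

    def __init__(self) -> None:
        self.event_table: dict[str, trio.Event] = {}

    async def status_event(self, pattern: str) -> None:
        # register a waiter for a matching "system event" then block on it
        event = self.event_table[pattern] = trio.Event()
        await event.wait()
        self.event_table.pop(pattern, None)


async def relay_events(proxy: MethodProxy, from_aio) -> None:
    # background "msg relay task": set any matching waiter's event
    async for msg in from_aio:
        status: str = msg.get('message', '')
        for pattern, event in list(proxy.event_table.items()):
            if pattern in status:
                event.set()
```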
Found an issue (that was predictably brushed aside XD) where the
`ib_insync.util.df()` helper was changing the timestamps on bars data to
be way off (probably a `pandas.Timestamp` timezone thing?).
Anyway, dropped all that (which will hopefully let us drop `pandas` as
a hard dep) and added a buncha timestamp checking as well as start/end
datetime return values using `pendulum` so that consumer code can know
which "slice" is output.
Also added some WIP code to work around "no history found" request
errors where instead now we try to increment backward another 200
seconds - not sure if this is actually correct yet.
Make the throttle error propagate through to `trio` again by adding
`dict`-msg support between the two loops such that errors can be
re-raised on the `trio` side. This is all integrated into the
`MethodProxy` and accompanying result relay task.
Further fix a longer-standing issue where sometimes the `ib_insync`
order entry method will raise a weird assertion error because it detects
some internal order-id state issue.. Just ignore those and relay back
an error to the ems in such cases.
Add a bunch of notes for todos surrounding data feed reset hackery.
To start we only have futes working but this allows both searching
and loading multiple expiries of the same instrument by specifying
different expiries with a `.<expiry>` suffix in the symbol key (eg.
`mnq.globex.20220617`). This also paves the way for options contracts
which will need something similar plus a strike property. This change
set also required a patch to `ib_insync` to allow retrieving multiple
"ambiguous" contracts from the `IB.reqContractDetailsAcync()` method,
see https://github.com/erdewit/ib_insync/pull/454 for further discussion
since the approach here might change.
This patch also includes a lot of serious reworking of some `trio`-`asyncio`
integration to use the newer `tractor.to_asyncio.open_channel_from()`
api and use it (with a relay task) to open a persistent connection with
an in-actor `ib_insync` `Client` mostly for history requests.
Deats,
- annot the module with a `_infect_asyncio: bool` for `tractor` spawning
- add a futes venue list
- support ambiguous futes contracts lookups so that all expiries will
show in search
- support both continuous and specific expiry fute contract
qualification
- allow searching with "fqsn" keys
- don't crash on "data not found" errors in history requests
- move all quotes msg "topic-key" generation (which should now be
a broker-specific fqsn) and per-contract quote processing into
`normalize()`
- set the fqsn key in the symbol info init msg
- use `open_client_proxy()` in bars backfiller endpoint
- include expiry suffix in position update keys
This adds a new client manager-factory: `open_client_proxy()` which uses
the newer `tractor.to_asyncio.open_channel_from()` (and thus the
inter-loop-task-channel style), an `aio_client_method_relay()` and
a re-implemented `MethodProxy` wrapper to allow transparently calling
`asyncio` client methods from `trio` tasks. Use this proxy in the
history backfiller task and add a new (prototype)
`open_history_client()` which will be used in the new storage management
layer. Drop `get_client()` which was the portal wrapping equivalent of
the same proxy but with a one-task-per-call approach. Oh, and
`Client.bars()` can take `datetime`, so let's use it B)
Use fqsn as input to the client-side EMS apis but strip broker-name
stuff before generating and sending `Brokerd*` msgs to each backend for
live order requests (since it's weird for a backend to expect its own
name, though maybe that could be a sanity check?).
Summary of fqsn use vs. broker native keys:
- client side pps, order requests and general UX for order management
use an fqsn for tracking
- brokerd side order dialogs use the broker-specific symbol which is
usually nearly the same key minus the broker name
- internal dark book and quote feed lookups use the fqsn where possible
In order to support instruments with lifetimes (aka derivatives) we
generally need special symbol annotations which detail such meta data
(such as `MNQ.GLOBEX.20220717` for daq futes). Further there is really
no reason for the public api for this feed layer to care about getting
a special "brokername" field since generally the data is coming directly
from UIs (eg. search selection) so we might as well accept a fqsn (fully
qualified symbol name) which includes the broker name; for now a suffix
like `'.ib'`. We may change this schema (soon) but this at least gets us
to a point where we expect the full name including broker/provider.
An additional detail: for certain "generic" symbol names (like for
futes) we will pull a so called "front contract" and map this to
a specific fqsn underneath, so there is a double (cached) entry for that
symbol such that other consumers can use it the same way if desired.
Some other machinery changes:
- expect the `stream_quotes()` endpoint to deliver its `.started()` msg
  almost immediately since we now need it to deliver any fqsn asap (yes
  this means the ep should no longer wait on a "live" first quote and
  instead deliver what quote data it can right away).
- expect the quotes ohlc sampler task to add in the broker name before
broadcast to remote (actor) consumers since the backend isn't (yet)
  expected to add it in itself.
- obviously we start using all the new fqsn related `Symbol` apis
Move the core ws message handling into `stream_messages()` and call that
from 2 new stream processors: `process_data_feed_msgs()` and
`process_order_msgs()`. Add comments for hints on how to implement the
order msg parsing as well as `pprint` received msgs to console for now.
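The split looks roughly like this (the `ws.recv_msg()` call is an assumption about the ws wrapper):

```python
from pprint import pprint
from typing import AsyncIterator


async def stream_messages(ws) -> AsyncIterator[dict]:
    # core recv loop: heartbeats, reconnects and error handling live here
    while True:
        yield await ws.recv_msg()


async def process_data_feed_msgs(ws) -> AsyncIterator[tuple]:
    async for msg in stream_messages(ws):
        # parse quote/book payloads and yield normalized events
        yield 'quote', msg  # placeholder normalization


async def process_order_msgs(ws) -> None:
    async for msg in stream_messages(ws):
        # order msg parsing TBD; just dump to console for now
        pprint(msg)
```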
Since moving to a "god loop" for graphics, we don't really need to have
a dedicated task for updating graphics on new sample increments. The
only UX difference will be that curves won't be updated until an actual new
rt-quote-event triggers the graphics loop -> so we'll have the chart
"jump" to a new position and new curve segments generated only when new
data arrives. This is imo fine since it's just less "idle" updates
where the chart would sit printing the same (last) value every step.
Instead only update the view increment if a new index is detected by
reading shm.
If we ever want this dedicated task update again this commit can be
easily reverted B)
Break up real-time quote feed and history loading into 2 separate tasks
and deliver a client side `data.Feed` as soon as history is loaded
(instead of waiting for a rt quote - the previous logic). If
a symbol doesn't have history then likely the feed shouldn't be loaded
(since presumably client code will need at least "some" datums history
to do anything) and waiting on a real-time quote is dumb, since it'll
hang if the market isn't open XD. If a symbol doesn't have history we
can always write a zero/null array when we run into that case. This also
greatly speeds up feed loading when both history and quotes are available.
TL;DR summary:
- add a `_Feedsbus.start_task()` one-cancel-scope-per-task method for
assisting with (re-)starting and stopping long running persistent
  feeds (basically a "one cancels one" style nursery API; sketched after this list).
- add a `manage_history()` task which does all history loading (and
eventually real-time writing) which has an independent signal and
start it in a separate task.
- drop the "sample rate per symbol" stuff since client code doesn't really
care when it can just inspect shm indexing/time-steps itself.
- run throttle tasks in the bus nursery thus avoiding cancelling the
underlying sampler task on feed client disconnects.
- don't store a repeated ref to the bus nursery's cancel scope..
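The `start_task()` idea sketched (only the task-management bits, names approximate):

```python
import trio


class _FeedsBus:
    # sketch of just the "one cancels one" task starter, not the full bus

    def __init__(self, nursery: trio.Nursery) -> None:
        self.nursery = nursery

    def start_task(self, target, *args) -> trio.CancelScope:
        # give each persistent feed task its own cancel scope so it can be
        # (re-)started or stopped without touching sibling tasks
        cancel_scope = trio.CancelScope()

        async def _with_scope() -> None:
            with cancel_scope:
                await target(*args)

        self.nursery.start_soon(_with_scope)
        return cancel_scope
```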
To avoid the "trigger finger" issue (darks execing before they should
due to a stale last price state, normally when generating a trigger
predicate..) always iterate the loop and update the last known book
price even when no execs/triggered orders are registered.
You can get a weird "last line segment" artifact if *only* that segment
is drawn and the cache is enabled, so just disable unless in step mode
at startup and re-flash as normal when new path data is appended. Add
a `.disable_cache()` method for the multi-use in the update method. Use
line style on the `._last_line: QLineF` segment as well.
Enables retrieving all "named axes" on a particular "side" of the
overlayed plot items. This is useful for calculating how much space
needs to be allocated for the axes before the view box area starts.
Though it's not per-tick accurate, accumulate the number of "trades"
(i.e. the "clearing rate" - maybe this is a better name?) per bar
inside the `dolla_vlm` fsp, then average it and report wmas of this in
the `flow_rates` fsp.
Define the flows table as a class var (thus making it a "global" and/or
actor-local state) which can be accessed by any in process task. Add
`Fsp.get_shm()` to allow accessing output streams by source-token + fsp
routine reference and thus providing inter-fsp low level access to
real-time flows.
In order for fsp routines to be able to look up other "flows" in the
cascade, we need a small registry-table which gives access to a map of
a source stream + an fsp -> an output stream. Eventually we'll also
likely want a dependency (injection) mechanism so that any fsp demanded
can either be dynamically allocated or at the least waited upon before
a consumer tries to access it.
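Sketched, the registry is basically this (structure assumed):

```python
class Fsp:
    # sketch of just the registry bits, not the full decorator type

    # class var => one shared table per process/actor
    _flow_registry: dict[tuple, object] = {}

    def __init__(self, func) -> None:
        self.func = func

    def get_shm(self, src_token: tuple):
        # look up this fsp's output stream for a given source flow
        return Fsp._flow_registry.get((src_token, self.func.__name__))

    def set_shm(self, src_token: tuple, shm) -> None:
        Fsp._flow_registry[(src_token, self.func.__name__)] = shm
```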
Instead of referencing the remote processing funcs by a `str` name start
embracing the new `@fsp`/`Fsp` API such that wrapped processing
functions are first class APIs.
Summary of the changeset:
- move and load the fsp built-in set in the new `.fsp._api` module
- handle processors ("fsps") which want to yield multiple keyed-values
(interleaved in time) by expecting both history that is keyed and
assigned to the appropriate struct-array field, *and* real-time
`yield`ed value in tuples of the form `tuple[str, float]` such that
any one (async) processing function can deliver multiple outputs from
the same base calculation.
- drop `maybe_mk_fsp_shm()` from UI module
- expect and manage `Fsp` instances (`@fsp` decorated funcs) throughout
the UI code, particularly the `FspAdmin` layer.
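A multi-output processor then looks roughly like this (the `fsp` decorator below is a stub stand-in, not the real one):

```python
def fsp(func):  # stub standing in for the real `@fsp` decorator
    return func


@fsp
async def dolla_vlm(source, ohlcv):
    # history pass: keyed outputs assigned to matching struct-array fields
    yield {
        'dolla_vlm': ohlcv.array['close'] * ohlcv.array['volume'],
    }
    # real-time pass: one `(key, value)` tuple per computed output
    async for quote in source:
        yield 'dolla_vlm', float(quote['last'] * quote['size'])
```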
Since more curves costs more processing and since the vlm and $vlm
curves are normally very close to the same (graphically) we hide the
unit volume curve once the dollar volume is up (after the fsp daemon-task is
spawned) and just expect the user to understand the diff in axes units.
Also, use the new `title=` api to `.overlay_plotitem()`.
Use our internal `Label` with much better dpi based sizing of text and
placement below the y-axis ticks area for more minimalism and less
clutter.
Play around with `lru_cache` on axis label bounding rects and for now
just hack sizing by subtracting half the text height (not sure why) from
the width to avoid over-extension / overlap with any adjacent axis.
Allow passing in a formatter function for processing tick values on an
axis. This makes it easy to, for example, `piker.calc.humanize()` dollar
volume on a subchart.
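E.g. roughly (the kwarg name and exact `PriceAxis` usage here are assumptions):

```python
def humanize(value: float) -> str:
    # 1_234_567 -> '1.23M' style tick labels
    for div, suffix in ((1e9, 'B'), (1e6, 'M'), (1e3, 'K')):
        if abs(value) >= div:
            return f'{value / div:.2f}{suffix}'
    return f'{value:.2f}'


# axis = PriceAxis(formatter=humanize)  # ticks render via the callable
```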
Factor `set_min_tick()` into the `PriceAxis` since it's not used on any
x-axis data thus far.
Adds `FspAdmin.open_fsp_chart()` which allows adding a real time graphics
display of an fsp's output with different options for where (which chart
or make a new one) to place it.
Further,
- change some method naming, namely the other fsp engine task methods to
`.open_chain()` and `.start_engine_task()`.
- make `run_fsp_ui()` a lone task function for now with the default
config parsing and chart setup logic (and it still includes a buncha
commented out stuff for doing graphics update which is now done in the
main loop to avoid task switching overhead).
- move all vlm related fsp config entries into the `open_vlm_displays()`
task for dedicated setup with the fsp admin api such as special
auto-yrange handling and graph overlays.
- `start_fsp_displays()` is now just a small loop through config entries
with synced startup status messages.
For wtv cucked reason all the viewbox/scene coordinate calcs do **not**
include a left axis in the geo (likely because it's a hacked in widget
+ layout thing managed by `PlotItem`). Detect if there's a left axis and
if so use it in the label placement scene coords calc. ToDo: probably
make this a non-move calc and only recompute any time the axis changes.
Other:
- rate limit mouse events down to the 60 (ish) Hz for now
- change one last lingering `'ohlc'` array lookup
- fix `.mouseMoved()` "event" type annot
This is a huge commit which moves a bunch of code around in order to
simplify some of our UI modules as well as support our first official
multi-axis chart: overlaid volume and "dollar volume". A good deal of
this change set is to make startup fast such that volume data which is
often shipped alongside OHLC history is loaded and shown asap and FSPs
are loaded in an actor cluster with their graphics overlayed
concurrently as each responsible worker generates plottable output.
For everything to work this commit requires use of a draft `pyqtgraph`
PR: https://github.com/pyqtgraph/pyqtgraph/pull/2162
Change summary:
- move remaining FSP actor cluster helpers into `.ui._fsp` mod as well
as fsp specific UI managers (`maybe_open_vlm_display()`,
`start_fsp_displays()`).
- add an `FspAdmin` API for starting fsp chains on the cluster
concurrently allowing for future work toward reload/unloading.
- bring FSP config dict into `start_fsp_displays()` and `.started()`-deliver
both the fsp admin and any volume chart back up to the calling display
loop code.
ToDo:
- repair `ChartView` click-drag interactions
- auto-range on $ vlm needs to use `ChartPlotWidget._set_yrange()`
- a lot better styling for the $_vlm overlay XD
As part of factoring `._set_yrange()` into the lower level view box,
move the y-range calculations into a new method. These calcs should
eventually be completely separate (as they are for the real-time version
in the graphics display update loop) and likely part of some kind of
graphics-related lower level management API. Draft such an API as an
`ArrayScene` (commented for now) as a sketch toward factoring array
tracking **out of** the chart widget. Drop the `'ohlc'` array name and
instead always use whatever `.name` was assigned to the chart widget
to lookup its "main" / source data array for now.
Enable auto-yranging on overlayed plotitems by enabling on its viewbox
and, for now, assign an ad-hoc `._maxmin()` since the widget version
from this commit has no easy way to know which internal array to use. If
an FSP (`dolla_vlm` in this case) is overlayed on an existing chart
without also having a full widget (which it doesn't in this case since
we're using an overlayed `PlotItem` instead of a full `ChartPlotWidget`)
we need some way to define the `.maxmin()` for the overlayed
data/graphics. This likely means the `.maxmin()` will eventually get
factored into wtv lowlevel `ArrayScene` API mentioned above.
Calculations for auto-yaxis ranging are both signalled and drawn by our
`ViewBox` so we might as well factor this handler down from the chart
widget into the view type. This makes it much easier (and clearer) that
`PlotItem` and other lower level overlayed `GraphicsObject`s can utilize
*size-to-data* style view modes easily without widget-level coupling.
Further changes,
- support a `._maxmin()` internal callable (temporarily) for allowing
  a viewed graphics object to define its own y-range max/min calc.
- add a `._static_range` var (though usage hasn't been moved from the
  chart plot widget yet)
- drop y-axis click-drag zoom instead reverting back to default viewbox
behaviour with wheel-zoom and click-drag-pan on the axis.
This brings in the WIP components developed as part of
https://github.com/pyqtgraph/pyqtgraph/pull/2162.
Most of the history can be understood from that issue and effort but the
TL;DR is,
- add an event handler wrapper system which can be used to
wrap `ViewBox` methods such that multiple views can be overlayed and
a single event stream broadcast from one "main" view to others which
are overlaid with it.
- add in 2 relay `Signal` attrs to our `ViewBox` subtype (`ChartView`)
to accomplish per event `MouseEvent.emit()` style broadcasting to
multiple (sub-)views.
- Add a `PlotItemOverlay` api which does all the work of overlaying the
actual chart graphics and arranging multiple-axes without collision as
well as tying together all the event/signalling so that only a single
"focussed" view relays to all overlays.
Each `pyqtgraph.PlotItem` uses a `QGraphicsGridLayout` to place its view
box, axes and titles in the traditional graph format. With multiple
overlayed charts we need those axes to not collide with one another and
further allow for an "order" specified by the user. We accomplish this
by adding `QGraphicsLinearLayout`s for each axis "side": `{'left',
'right', 'top', 'bottom'}` such that plot axes can be inserted and moved
easily without having to constantly re-stack/order a grid layout (which
does not have a linked-list style API).
The new type is called `ComposedGridLayout` for now and offers a basic
list-like API with `.insert()`, `.append()`, and eventually a dict-style
`.pop()`. We probably want to also eventually offer a `.focus()` to
allow user switching of *which* main graphics object (aka chart) is "in
use".
This syncs with a dev branch in our `pyqtgraph` fork:
https://github.com/pyqtgraph/pyqtgraph/pull/2162
The main idea is to get multi-yaxis display fully functional with
multiple view boxes running in a "relay mode" where some focussed view
relays signals to overlaid views which may have independent axes. This
preps us for both displaying independent codomain-set FSP output as well
as so called "aggregate" feeds of multiple fins underlyings on the same
chart (eg. options and futures over top of ETFs and underlying stocks).
The eventual desired UX is to support fast switching of instruments for
order mode trading without requiring entirely separate charts as well as
simple real-time anal of associated instruments.
The first effort here is to display vlm and $_vlm alongside each other
as a built-in FSP subchart.
We can instead use the god widget's nursery to schedule all the feed
pause/resume requests and be even more concurrent during a view (of
symbols) switch.
Use `tractor.trionics.gather_contexts()` to start up the fsp and volume
chart-displays (for an additional conc speedup). Drop `dolla_vlm` again for
now until we figure out how we can display it *and* vlm on the same
sub-chart? It would be nice to avoid having to spawn an fsp process
before showing the volume curve.
Call the resize method only after all FSP subcharts have rendered
such that the main OHLC chart's final width is read.
Further tweaks:
- drop rsi by default
- drop the stream drain stuff
- fix failed-to-read shm logging
This fixes a weird re-render bug/slowdown/artifact that was introduced
with the order mode sidepane work. Prior to the sidepane addition, chart
switching was immediate with zero noticeable widget rendering steps.
The slow down was caused by 2 things:
- not yielding back to the Qt loop asap after re-showing/focussing
a linked split chart that was already in memory.
- pausing/resuming feeds only after a Qt loop render cycle has
completed.
This now restores the near zero latency UX.
There was a lingering issue where the fsp daemon would sync its shm
array with the source data and we'd set the start/end indices to the
same value. Under some races a reader would then read an empty `.array`
which it wasn't expecting. This fixes that as well as tidies up the
`ShmArray.push()` logic and adds a temporary check in `.array` for zero
length if the array hasn't been written yet.
We can now start removing read array length checks in consumer code
and hopefully no more races will show up.
Revert to old shm "last" meaning last row
It can now be declared inside an fsp config dict under the name
`dolla_vlm`. We still need to offer an engine control that zeros
the newest sample value instead of copying from the previous.
This also litters the engine code with `pyqtgraph` profiling to see if
we can improve startup times - likely it'll mean pre-allocating a small
fsp daemon cluster at startup.
Use a fixed worker count and don't respawn for every chart, instead
opting for a round-robin to tasks in a cluster and (for now) hoping for
the best in terms of trio scheduling, though we should obviously route
via symbol-locality next. This is currently a boon for chart spawning
startup times since actor creation is done AOT.
Additionally,
- use `zero_on_step` for dollar volume
- drop rsi on startup (again)
- add dollar volume (via fsp) along side unit volume
- litter more profiling to fsp chart startup sequence
- pre-define tick type classes for update loop
We are already packing framed ticks in extended lists from
the `.data._sampling.uniform_rate_send()` task so the natural solution
to avoid needless graphics cycles for HFT-ish feeds (like binance) is
to unpack those frames and for most cases only update graphics with the
"latest" data per loop iteration. Unpacking in this way also lessens
nested-iterations per tick type.
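The unpacking step is roughly (a hedged sketch):

```python
from collections import defaultdict


def unpack_frame(ticks: list[dict]) -> tuple[dict[str, list[dict]], dict[str, dict]]:
    # group a frame of ticks by type and keep the last of each type for
    # the per-cycle graphics update
    by_type: dict[str, list[dict]] = defaultdict(list)
    lasts: dict[str, dict] = {}
    for tick in ticks:
        ttype = tick.get('type', 'n/a')
        by_type[ttype].append(tick)
        lasts[ttype] = tick  # last one wins per type
    return by_type, lasts
```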
Btw, this also effectively solves all remaining issues of fast tick
feeds over-triggering the graphics loop renders as long as the original
quote stream is throttled appropriately, usually to the local display
rate.
Relates to #183, #192
Dirty deats:
- drop all per-tick rate checks, they were always somewhat pointless
when iterating a frame of ticks per render cycle XD.
- unpack tick frame into ticks per frame type, and last of each type;
the lasts are used to update each part of the UI/graphics by class.
- only skip the label update if we can't retrieve the last datum from a
graphics source array; it seems `chart.update_curve_from_array()`
already does a `len` check internally.
- add some draft commented code for tick type classes and a possible
wire framed tick data structure.
- move `chart_maxmin()` range computer to module level, bind a chart to
  it with a `partial`.
- only check rate limits in main quote loop thus reporting actual
overages
- add in commented logic for only updating the "last" cleared price from
the most recent framed value if we want to eventually (right now seems
  like this is only relevant to ib and its dark trades: `utrade`).
- rename `_clear_throttle_rate` -> `_quote_throttle_rate`, drop
`_book_throttle_rate`.
This is in prep toward doing fsp graphics updates from the main quotes
update loop (where OHLC and volume are done). Updating fsp output from
that task should, for the majority of cases, be fine presuming the
processing is derived from the quote stream as a source. Further,
calling an update function on each fsp subplot/overlay is of course
faster then a full task switch - which is how it currently works with
a separate stream for every fsp output. This also will let us delay
adding full `Feed` support around fsp streams for the moment while still
getting quote throttling dictated by the quote stream.
Going forward, we can still support a separate task/fsp stream for
updates as needed (ex. some kind of fast external data source that isn't
synced with price data) but it should be enabled only as needed/required
by the user.
The major change is moving the fsp "daemon" (more like wanna-be fspd)
endpoint to use the newer `tractor.Portal.open_context()` and
bi-directional streaming api.
There's a few other things in here too:
- make a helper for allocating single column fsp shm arrays
- rename some fsp related functions to be more explicit on their
purposes
Since our startup is very concurrent there are often races where widgets
have not fully spawned before python (re-)sizing code has a chance to
run sizing logic and thus incorrect dimensions are read. Instead ensure
the Qt render loop gets to run in between such checks.
Also add a `open_sidepane()` mngr for creating a minimal form widget for
FSP subchart sidepanes which can be configured from an input `dict`.
This should in theory result in increased burstiness since we remove
the plain `trio.sleep()` and instead always wait on the receive channel
as much as possible until the `trio.move_on_after()` (+ time diffing
calcs) times out and signals the next throttled send cycle. This also is
slightly easier to grok code-wise instead of the `try, except` and
another tight while loop until a `trio.WouldBlock`. The only simpler
way i can think to do it is with 2 tasks: 1 to collect ticks and the
other to read and send at the throttle rate.
Comment out the log msg for now to avoid latency and add much more
detailed comments. Add an overrun log msg to the main sample loop.
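The receive-until-deadline pattern is roughly this (names and period handling approximated):

```python
import trio


async def uniform_rate_send(rate_hz: float, quote_stream, send) -> None:
    period = 1 / rate_hz
    ticks: list = []
    while True:
        deadline = trio.current_time() + period
        # keep blocking on the channel (no plain sleep) until the next send slot
        while True:
            with trio.move_on_at(deadline) as cs:
                quote = await quote_stream.receive()
                ticks.extend(quote.get('ticks', ()))
            if cs.cancelled_caught:
                break  # throttle period elapsed, flush what we have
        if ticks:
            await send({'ticks': ticks})
            ticks = []
```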
A `QRectF` is easier to make and draw (i think?) so use that and fill it
on volume events for decent sleek real-time look. Adjust the step array
generator to allow for an endpoints flag. Comment and/or clean out all
the old path filling calls that gave us perf issues..
Turns out the performance of updating and refilling step curves > 1k ish
points is super slow :sadkek:. Disabling the fill basically returns
normal performance, so it seems maybe we'll stick with unfilled volume
"bars" for now. The other tricky bit is getting the path to extend and
fill which is particularly slow if you use the `QPainterPath.united()`
(what `+` set op does) operation which seems to require an entire redraw
of the curve each paint iteration. Removing the pixel buffer cache makes
things that much worse too..
One technique i tried was only setting a `._fill` flag when so many
datums are in view (< 1k as determined by the chart widget), and this
helps, but under high load (trade rates) you still see more lag than
without the fill which makes me say screw it and let's stick with
unfilled bars for now. Trying to get performant filled curves will be
an exercise for an aspiring graphics eng :P
In latest `pyqtgraph` it seems there's a discrepancy
since `function.arrayToQPath()` was reworked and now
we need to *not* connect the last point for each bar.
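I.e. build the `connect` mask so each bar's last point breaks the path (points-per-bar here is an assumption):

```python
import numpy as np
import pyqtgraph as pg


def bars_path(x: np.ndarray, y: np.ndarray, points_per_bar: int = 6):
    connect = np.ones(len(x), dtype=bool)
    # don't connect the last point of each bar to the next bar's first point
    connect[points_per_bar - 1::points_per_bar] = False
    return pg.functions.arrayToQPath(x, y, connect=connect)
```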
The prior PR for fixing fsp array misalignment also added
`tractor.Context` usage which wasn't reflected in the graphics update
loop (newer code added it but the prior PR was factored from path
dependent history) and thus was broken. Further in newer work we don't
have fsp actors actually stream value updates since the display loop can
already pull from the source feed and update graphics at a preferred
throttle rate. Re-enabled the fsp stream sending here by default until
that newer only-throttle-pull-from-source code is landed in the display
loop.
This should finally be correct fsp src-to-dst array syncing now..
There's a few edge cases but mostly we need to be sure we sync both
back-filled history diffs and avoid current step lag/leads. Use
a polling routine and the more stringent task re-spawn system to get
this right.
Litter the engine code with `pyqtgraph` profiling to see if we can
improve startup times - likely it'll mean pre-allocating a small fsp
daemon cluster at startup.