Compare commits

...

113 Commits

Author SHA1 Message Date
Tyler Goodlet 40e49333be Bump mkts timeout to 2s 2023-03-08 15:25:38 -05:00
Tyler Goodlet f627fedf74 Move all docker and external db code to `piker.service` 2023-03-08 15:25:20 -05:00
Tyler Goodlet bd248381ea Start `piker.service` sub-package
For now just moves everything that was in `piker._daemon` to a subpkg
module but a reorg is coming pronto!
2023-03-08 15:14:39 -05:00
Tyler Goodlet a70d76e3e6 Set explicit `marketstore` container startup timeout 2023-03-08 15:01:06 -05:00
Tyler Goodlet a5caaef467 Hardcode `cancel` log level for `ahabd` for now 2023-03-08 15:00:24 -05:00
Tyler Goodlet 7e35696dbb Always passthrough loglevel to `ahabd` supervisor 2023-03-08 14:56:21 -05:00
Tyler Goodlet 93702320a3 Background docker-container logs processing
Previously we would make the `ahabd` supervisor-actor sync to docker
container startup using pseudo-blocking log message processing.

This has issues,
- we're forced to do a hacky "yield back to `trio`" in order to be
  "fake async" when reading the log stream and further,
- blocking on a message is fragile and often slow.

Instead, run the log processor in a background task and in the parent
task poll for the container to be in the client list using a similar
pseudo-async poll pattern. This allows the super to `Context.started()`
sooner (when the container is actually registered as "up") and thus
unblock its (remote) caller faster whilst still doing full log msg
proxying!

Deatz:
- adds `Container.cuid: str` a unique container id for logging.
- correctly proxy through the `loglevel: str` from `pikerd` caller task.
- shield around `Container.cancel()` in the teardown block and use
  cancel level logging in that method.
2023-03-08 14:28:48 -05:00
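The pattern above can be sketched roughly as follows; note this uses stdlib `asyncio` standing in for `trio`/`tractor`, and every name here (`process_logs`, `poll_registered`, the `containers` "client list", etc.) is illustrative rather than piker's actual API:

```python
import asyncio

containers: set = set()
events: list = []

async def process_logs(cid):
    # stand-in for the docker log stream: proxy msgs, yielding between each
    for line in (f'{cid}: starting', f'{cid}: ready'):
        events.append(line)
        await asyncio.sleep(0.001)

async def fake_container_up(cid):
    await asyncio.sleep(0.01)
    containers.add(cid)  # container now shows up in the "client list"

async def poll_registered(cid):
    # pseudo-async poll: check the client list instead of blocking on a log msg
    while cid not in containers:
        await asyncio.sleep(0.001)

async def supervise(cid):
    logs = asyncio.create_task(process_logs(cid))  # bg log proxying
    up = asyncio.create_task(fake_container_up(cid))
    await asyncio.wait_for(poll_registered(cid), timeout=1)  # startup timeout
    events.append('started')  # the real super would `Context.started()` here
    await asyncio.gather(logs, up)

asyncio.run(supervise('marketstore'))
```

The key point is that readiness is signalled as soon as the container registers, while log proxying continues in the background.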
Tyler Goodlet 5683eb8ef0 Doc string and types bump in logging mod 2023-03-08 14:22:23 -05:00

Tyler Goodlet ad6b655d7d Apply `Services` runtime state **immediately** inside startup block 2023-03-08 13:01:42 -05:00
Tyler Goodlet 6d1ecdde40 Deliver es specific ahab-super in endpoint startup config 2023-03-08 13:00:11 -05:00
Tyler Goodlet 899c6ebc09 Add warning around detach flag to docker client 2023-03-08 12:59:20 -05:00
Tyler Goodlet d3272ede7a Support startup-config overrides to `ahabd` super
With the addition of new `elasticsearch` docker support in
https://github.com/pikers/piker/pull/464, adjustments were made
to container startup sync logic (particularly the `trio` checkpoint
sleep period - which itself is a hack around a sync client api) which
caused a regression in upstream startup logic wherein container error
logs were not being bubbled up correctly causing a silent failure mode:

- `marketstore` container started with corrupt input config
- `ahabd` super code timed out on startup phase due to a larger log
  polling period, skipped processing startup logs from the container,
  and continued on as though the container was started
- history client fails on grpc connection with no clear error on why the
  connection failed.

Here we revert to the old poll period (1ms) to avoid any more silent
failures and further extend supervisor control through a configuration
override mechanism. To address the underlying design issue, this patch
adds support for container-endpoint-callbacks to override supervisor
startup configuration parameters via the 2nd value in their returned
tuple: the already delivered configuration `dict` value.

The current exposed values include:
    {
        'startup_timeout': 1.0,
        'startup_query_period': 0.001,
        'log_msg_key': 'msg',
    },

This allows for container specific control over the startup-sync query
period (the hack mentioned above) as well as the expected log msg key
and of course the startup timeout.
2023-03-08 12:56:56 -05:00
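A minimal sketch of that override mechanism (the endpoint name and merge helper here are hypothetical; the config keys are the ones listed above): the container-endpoint callback receives the default startup config and returns it, possibly mutated, as the 2nd value of its result tuple for the supervisor to apply.

```python
_default_config: dict = {
    'startup_timeout': 1.0,
    'startup_query_period': 0.001,
    'log_msg_key': 'msg',
}

def elasticsearch_ep(config: dict) -> tuple:
    # container-specific overrides, eg. a much longer startup timeout
    config['startup_timeout'] = 240.0
    config['log_msg_key'] = 'message'
    return 'es-container-id', config

def startup_config(ep) -> dict:
    # supervisor side: endpoint overrides win over the defaults
    _cid, overrides = ep(dict(_default_config))
    return {**_default_config, **overrides}

cfg = startup_config(elasticsearch_ep)
```

Keys the endpoint doesn't touch (like `startup_query_period`) fall through to the supervisor defaults.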
Tyler Goodlet 58f39d1829 First draft storage layer cli
Adds a `piker storage` subcmd with a `-d` flag to wipe a particular
fqsn's time series (both 1s and 60s). Obviously this needs to be
extended much more but provides a start point.
2023-03-08 11:23:33 -05:00
Tyler Goodlet 4379bfe760 Read `Symbol` tick precision fields when no entry in `.broker_info` 2023-03-08 09:06:50 -05:00
Tyler Goodlet 73089e5612 Always show a minimum number of bars during startup
This is particularly more "good looking" when we boot with a pair that
doesn't have historical 1s OHLC and thus the fast chart is empty from
outset. In this case it's a lot nicer to be already zoomed to
a comfortable preset number of "datums in view" even when the history
isn't yet filled in.

Adjusts the chart display `Viz.default_view()` startup to explicitly
ensure this happens via the `do_min_bars=True` flag B)
2023-03-07 20:40:21 -05:00
Tyler Goodlet 5bf40ceb79 Catch `KeyError` on bcast errors which pop the sub
Not sure how I missed this (handling for `list.remove()` was left in;
did that ever even work?) after the `samplerd` impl in 5ec1a72 but, this
adjusts the remove-broken-subscriber loop to catch the correct
`set.remove()` exception type on a missing (likely already removed)
subscription entry.
2023-03-07 15:42:06 -05:00
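The container-type mismatch in a nutshell: `list.remove()` raises `ValueError` on a missing item but `set.remove()` raises `KeyError`, so an except clause written for the old list type silently misses the set's error. A toy illustration (subscriber names are made up):

```python
subs = {'sub-a', 'sub-b'}

subs.remove('sub-a')
try:
    subs.remove('sub-a')  # already dropped, eg. by a concurrent task
except KeyError:
    pass  # correct exception type for a `set`

subs.discard('sub-a')  # alternative: a no-op when already missing
```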
Tyler Goodlet 6f3a6bcb42 Remove leftover debug print in cache reset meth 2023-03-07 15:41:38 -05:00
Tyler Goodlet ddd1722a0a Add (commented) draft 1min OHLC time index logging
For the purposes of eventually trying to resolve the last-step indexing
synchronization issue(s) (intermittent but still present) that can
happen due to races during history frame query and shm writing during
startup. In fact, here we drop all `hist_viz` info queries from the main
display loop for now anticipating that this code will either be removed
or improved later.
2023-03-07 15:35:07 -05:00
Tyler Goodlet 0cfad6838d Always pass step to `slice_from_time()` in view mode
Again, as per the signature change, never expect implicit time step
calcs from overlay processing/machinery code. Also, extend the debug
printing (yet again) to include better details around
"rescale-due-to-minor-range-out-of-view" cases and a detailed msg for
the transform/scaling calculation (inputs/outputs), particularly for the
cases when one of the curves has a lesser support.
2023-03-07 15:18:34 -05:00
Tyler Goodlet c0d2baaaaa Always pass `step` to `slice_from_time()` in the `Viz`
As per the change to `slice_from_time()` this ensures this `Viz` always
passes its self-calculated time indexing step size to the time slicing
routine(s).

Further this contains a slight impl tweak to `.scalars_from_index()` to
slice the actual view range from `xref` to `Viz.ViewState.xrange[1]` and
then reading the corresponding `yref` from the first entry in that
array; this should be no slower in theory and makes way for further
caching of x-read-range to `ViewState` opportunities later.
2023-03-07 15:05:42 -05:00
Tyler Goodlet b9c2c254dc Require `step: float` input to `slice_from_time()`
There's been way too many issues when trying to calculate this
dynamically from the input array, so just expect the caller to know what
it's doing and don't bother with ever hitting the error case of
calculating and incorrect value internally.
2023-03-07 14:36:19 -05:00
Tyler Goodlet 4da772292f Handle "target-is-shorter-than-pinned" case
When the target pinning curve (by default, the dispersion major) is
shorter than the pinned curve, we need to make sure we still find
the x-intersect for computing returns scalars! Use `Viz.i_from_t()` to
accomplish this as well and, augment that method with a `return_y: bool`
to allow the caller to also retrieve the equivalent y-value at the
requested input time `t: float` for convenience.

Also tweak a few more internals around the 'loglin_ref_to_curve'
method:
- only solve / adjust for the above case when the major's xref is
  detected as being "earlier" in time than the current minor's.
- pop the major viz entry from the overlay table ahead of time to avoid
  a needless iteration and simplify the transform calc phase loop to
  avoid handling that needless cycle B)
- add much better "organized" debug printing with clearer headers
  around which "phase"/loop the message pertains to, as well as more
  explicit details in terms of x and y-range values on each cycle of
  each loop.
2023-03-06 19:03:04 -05:00
Tyler Goodlet 7a37c700e5 Don't `@lru_cache` on `Viz.i_from_t()`, since view state.. 2023-03-06 18:30:58 -05:00
Tyler Goodlet f3fd627a75 Tweak debug printing to display y-mxmn per viz 2023-03-06 10:37:26 -05:00
Tyler Goodlet 1d649e55ca Fix curve up-sampling on `'r'` hotkey
Previously when very zoomed out and using the `'r'` hotkey the
interaction handler loop wouldn't trigger a re-(up)sampling to get
a more detailed curve graphic and instead the previous downsampled
(under-detailed) graphic would show. Fix that by ensuring we yield back
to the Qt event loop and do at least a couple render cycles with paired
`.interact_graphics_cycle()` calls.

Further this flips the `.start/signal_ic()` methods to use the new
`.reset_graphics_caches()` ctr-mngr method.
2023-03-05 21:23:42 -05:00
Tyler Goodlet 08f1f569d0 Facepalm: set `Viz.ViewState.yrange` even on cache hits.. 2023-03-05 21:22:55 -05:00
Tyler Goodlet 80af431c2a Drop remaining usage of `ChartPlotWidget.default_view()`
Instead delegate directly to `Viz.default_view()` throughout charting
startup and interaction handlers.

Also add a `ChartPlotWidget.reset_graphics_caches()` context mngr which
resets all managed graphics object's cacheing modes on enter and
restores them on exit for simplified use in interaction handling code.
2023-03-05 21:14:22 -05:00
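A hedged sketch of such a cache-reset context manager: disable every managed graphic's cache mode on enter and restore the prior modes on exit. `FakeGraphic` and the integer mode constants are stand-ins for Qt's `QGraphicsItem` cache-mode machinery, not piker's actual types:

```python
from contextlib import contextmanager

NO_CACHE, COORD_CACHE = 0, 2  # stand-ins for Qt's cache-mode enum

class FakeGraphic:
    def __init__(self):
        self.cache_mode = COORD_CACHE

@contextmanager
def reset_graphics_caches(graphics):
    prior = {g: g.cache_mode for g in graphics}
    for g in graphics:
        g.cache_mode = NO_CACHE  # avoid smear/jitter during interaction
    try:
        yield
    finally:
        for g, mode in prior.items():
            g.cache_mode = mode  # restore when interaction completes

gs = [FakeGraphic(), FakeGraphic()]
with reset_graphics_caches(gs):
    in_ctx_modes = [g.cache_mode for g in gs]
restored = [g.cache_mode for g in gs]
```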
Tyler Goodlet d32382f831 Add `do_min_bars: bool` flag to `Viz.default_view()` 2023-03-04 17:07:46 -05:00
Tyler Goodlet d2e64accf6 Drop remaining non-usage of `ChartPlotWidget.maxmin()` 2023-03-04 16:49:20 -05:00
Tyler Goodlet 0ac2e4e027 Expand mxmn view y-margins back to 0.06 2023-03-03 18:36:46 -05:00
Tyler Goodlet 82797a097b Handle yrange not set on view vase for vlm fsp plot 2023-03-03 18:36:46 -05:00
Tyler Goodlet eecae69076 Disable coordinate caching during interaction
This finally seems to mitigate all the "smearing" and "jitter" artifacts
when using Qt's "coordinate cache" graphics-mode:

- whenever we're in a mouse interaction (as per calls to
  `ChartView.start/signal_ic()`) we simply disable the caching mode
  (set `.NoCache`) until the interaction is complete.
- only do this (for now) during a pan since it doesn't seem to be an
  issue when zooming?
- ensure disabling all `Viz.graphics` and `.ds_graphics` to be agnostic
  to any case where there's both a zoom and a pan simultaneously (not
  that it's easy to do manually XD) as well as solving the problem
  whenever an OHLC series is in traced-and-downsampled mode (during low
  zoom).

Impl deatz:
- rename `ChartView._ic` -> `._in_interact: trio.Event`
- add `ChartView._interact_stack: ExitStack` which we use to open
  and close the `FlowGraphics.reset_cache()` mngrs from mouse handlers.
- drop all the commented per-subtype overrides for `.cache_mode: int`.
- write up much better doc strings for `FlattenedOHLC` and `StepCurve`
  including some very basic ASCII-art diagrams.
2023-03-03 18:36:46 -05:00
Tyler Goodlet e0dd8ae3cf Add per-chart `Viz`/overlay graphics iterator method 2023-03-03 18:36:46 -05:00
Tyler Goodlet 5c08b5658f Move cache-reset ctx mngr to parent type: `FlowGraphics.reset_cache()` 2023-03-03 18:36:46 -05:00
Tyler Goodlet a81b51b142 Fix focal min calc after switching to `Viz.datums_range()`.. 2023-03-03 18:36:46 -05:00
Tyler Goodlet 6df2c3d009 Simplify `FlowGraphics.x_last()` logics 2023-03-03 18:36:46 -05:00
Tyler Goodlet e2cb1aca8e Rename overlay technique var to `method` 2023-03-03 18:36:46 -05:00
Tyler Goodlet aa9bd9994d Repair x-label datetime labels when in array-index mode 2023-03-03 18:36:46 -05:00
Tyler Goodlet 1d76586701 Skip overlay handling when `N < 2` are detected 2023-03-03 18:36:46 -05:00
Tyler Goodlet 94682ed9d9 Drop passing overlay method from viewbox to view-mode handler 2023-03-03 18:36:46 -05:00
Tyler Goodlet 8149b25732 Drop a bunch of commented/unneeded cruft 2023-03-03 18:36:46 -05:00
Tyler Goodlet 2fd36d27f6 Solve a final minor-should-rescale edge case
When the minor has the same scaling as the major in a given direction we
should still do back-scaling against the major-target and previous
minors to avoid strange edge cases where only the target-major might not
be shifted correctly to show a matched intersect point? More or less
this just meant making the y-mxmn checks interval-inclusive with
`>=`/`<=` operators.

Also adds a shite ton of detailed comments throughout the pin-to-target
method blocks and moves the final major y-range call outside the final
`scaled: dict` loop.
2023-03-03 18:36:46 -05:00
Tyler Goodlet f8727251f9 Better doc string, use `Viz.vs: ViewState` 2023-03-03 18:36:46 -05:00
Tyler Goodlet d3c85bc925 Back-rescale previous (minor) curves from latest
For the "pin to target major/target curve" overlay method, this finally
solves the longstanding issue of ensuring that any new minor curve,
which requires an increase in the major/target curve y-range, also
re-scales all previously scaled minor curves retroactively. Thus we now
guarantee that all minor curves are correctly "pinned" to their
target/major on their earliest available datum **and** are all kept in
view.
2023-03-03 18:36:46 -05:00
Tyler Goodlet b118954bf7 Support "pin-to-target-curve" overlay method again
Yah yah, I know it's the same as before (the N > 2 curves case with
out-of-range-minor rescaling the previously scaled curves isn't fixed
yet...) but, this is a much better and optional implementation in less
code. Further we're now better leveraging various new cached properties
and methods on `Viz`.

We now handle different `overlay_technique: str` options using `match:`
syntax in the 2ndary scaling loop, stash the returns scalars per curve
in `overlay_table`, and store and iterate the curves by dispersion
measure sort order.

Further wrt "pin-to-target-curve" mode, which currently still pins to
the largest measured dispersion curve in the overlay set:

- pop major Ci overlay table entries at start for sub-calcs usage when
  handling the "minor requires major rescale after pin" case.
- (finally) correctly rescale the major curve y-mxmn to whatever the
  latest minor overlay curve by calcing the inverse transform from the
  minor *at that point*:
  - the intersect point being where the minor starts its support on
    the major's x-domain, using the new `Viz.scalars_from_index()` and,
  - checking that the minor is not out of range (versus what the major's
    transform calcs it to be, in which case,
  - calc the inverse transform from the current out-of-range minor and
    use it to project the new y-mxmn for the major/target based on the
    same intersect-reference point in the x-domain used by the minor.
  - always handle the target-major Ci specially by only setting the
    `mx_ymn` / `mx_ymx` values when iterating that entry in the overlay
    table.
  - add todos around also doing the last sub-sub bullet for all previously
    major-transform scaled minor overlays (this is coming next..i hope).
- add a final 3rd overlay loop which goes through a final `scaled: dict`
  to apply all range values to each view; this is where we will
  eventually solve that last edge case of an out-of-range minor's
  scaling needing to be used to rescale already scaled minors XD
2023-03-03 18:36:46 -05:00
Tyler Goodlet 55ec9ef5a0 Add cached dispersion methods to `Viz`
In an effort to make overlay calcs cleaner and leverage caching of view
range -> dispersion measures, this adds the following new methods:

- `._dispersion()` an lru-cached returns-scalar calculator given input
  y-range and y-ref values.
- `.disp_from_range()` which calls the above method and returns variable
  output depending on requested calc `method: str`.
- `.i_from_t()` a currently unused cached method for slicing the
  in-view's array index from time stamp (though not working yet due to
  needing to parameterize the cache by the input `.vs.xrange`).

Further refinements/adjustments:
- rename `.view_state: ViewState` -> `.vs`.
- drop the `.bars_range()` method as it's no longer used anywhere else
  in the code base.
- always set the `ViewState.in_view: np.ndarray` inside `.read()`.
- return the start array index (from slice) and `yref` value @ `xref`
  from `.scalars_from_index()` to aid with "pin to curve" rescaling
  caused by out-of-range pinned-minor curves.
2023-03-03 18:36:46 -05:00
Tyler Goodlet c5a9cc22c2 Avoid index-from-time slicing including gaps
Not sure why this was ever allowed but, for slicing to the sample
*before* whatever target time stamp is passed in we should definitely
not return the prior index for the slice start since that might
include a very large gap prior to whatever sample is scanned to have
the earliest matching time stamp.

This was essential to fixing overlay intersect points searching in our
``ui.view_mode`` machinery..
2023-03-03 18:36:46 -05:00
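A toy illustration of the gap hazard described above (the array values are made up): with `side='left'`, `np.searchsorted()` yields the first index whose time stamp is at-or-after the target, so the slice never backs up one extra sample and thereby swallows a large pre-target gap in the series.

```python
import numpy as np

times = np.array([10., 11., 12., 100., 101.])  # note the 12 -> 100 gap

i = int(np.searchsorted(times, 100.0, side='left'))
start_ts = float(times[i])
# slicing from `i` starts at the sample with the earliest matching time
# stamp, NOT the prior (pre-gap) sample
```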
Tyler Goodlet 5ec873fa2a Drop last lingering usage of `Viz.bars_range()` 2023-03-03 18:36:46 -05:00
Tyler Goodlet 247a77857f Add `Viz.view_state: ViewState`
Adds a small struct which is used to track the most recently viewed
data's x/y ranges as well as the last `Viz.read()` "in view" array data
for fast access by chart related graphics processing code, namely view
mode overlay handling.

Also adds new `Viz` interfaces:
- `Viz.ds_yrange: tuple[float, float]' which replaces the previous
  `.yrange` (now set by `.datums_range()` on manual y-range calcs) so
  that the m4 downsampler can set this field specifically and then it
  get used (when available) by `Viz.maxmin()`.
- `Viz.scalars_from_index()` a new returns-scalar generator which can be
  used to calc the up and down returns values (used for scaling overlay
  y-ranges) from an input `xref` x-domain index which maps to some
  `Ci(xref) = yref`.
2023-03-03 18:36:46 -05:00
Tyler Goodlet ca80b3b808 Make slow chart a teensie bit smaller 2023-03-03 18:36:46 -05:00
Tyler Goodlet d2b99c6889 Drop (now) unused major curve mx/mn variables 2023-03-03 18:36:46 -05:00
Tyler Goodlet cab335ef2f Move overlay transform logic to new `.ui.view_mode`
It was getting waayy too long to be jammed in a method XD

This moves all the chart-viz iteration and transform logic into a new
`piker.ui.view_mode.overlay_viewlists()` core routine which will make it
a lot nicer for,

- AOT compilation via `numba` / `cython` / `mypyc`.
- decoupling from the `pyqtgraph.ViewBox` APIs if we ever decide to get
  crazy and go without another graphics engine.
- keeping your head clear when trying to rework the code B)
2023-03-03 18:36:46 -05:00
Tyler Goodlet 1346058a48 Adjust `.ui` modules to new set-style "optional" annots 2023-03-03 18:36:46 -05:00
Tyler Goodlet 9b43639416 Remove vlm chart again, drop lotsa fsp cruft 2023-03-03 18:36:46 -05:00
Tyler Goodlet 22efd05d8c Rework overlay pin technique: "align to first"
As part of solving a final bullet-issue in #455, which is specifically
a case:
- with N > 2 curves, one of which is the "major" dispersion curve" and
  the others are "minors",
- we can run into a scenario where some minor curve which gets pinned to
  the major (due to the original "pinning technique" -> "align to
  major") at some `P(t)` which is *not* the major's minimum / maximum
  due to the minor having a smaller/shorter support and thus,
- requires that in order to show the max/min on the minor curve we have
  to expand the range of the major curve as well but,
- that also means any previously scaled (to the major) minor curves need
  to be adjusted as well or they'll not be pinned to the major the same
  way!

I originally was trying to avoid doing the recursive iteration back
through all previously scaled minor curves and instead decided to try
implementing the "per side" curve dispersion detection (as was
originally attempted when first starting this work). The idea is to
decide which curve's up or down "swing in % returns" would determine the
global y-range *on that side*. Turns out I stumbled on the "align to
first" technique in the process: "for each overlay curve we align its
earliest sample (in time) to the same level of the earliest such sample
for whatever is deemed the major (directionally disperse) curve in
view".

I decided (with help) that this "pin to first" approach/style is equally
as useful and maybe often more so when wanting to view support-disjoint
time series:

- instead of compressing the y-range on "longer series which have lesser
  sigma" to make whatever "shorter but larger-sigma series" pin to it at
  an intersect time step, this instead will expand the price ranges
  based on the earliest time step in each series.
- the output global-returns-overlay-range for any N-set of series is
  equal to that of the previous "pin to intersect time" technique.
- the only time this technique seems less useful is for overlaying
  market feeds which have the same destination asset but different
  source assets (eg. btceur and btcusd on the same chart since if one
  of the series is shorter it will always be aligned to the earliest
  datum on the longer instead of more naturally to the intersect sample
  level as was in the previous approach).

As such I'm going to keep this technique as discovered and will later
add back optional support for the "align to intersect" approach from
previous (which will again require detecting the highest dispersion
curve direction-agnostic) and pin all minors to the price level at which
they start on the major.

Further details of the implementation rework in
`.interact_graphics_cycle()` include:

- add `intersect_from_longer()` to detect and deliver a common datum
  from 2 series which are different in length: the first time-index
  sample in the longer.
- Rewrite the drafted `OverlayT` to only compute (inversed log-returns)
  transforms for a single direction and use 2 instances, one for each
  direction inside the `Viz`-overlay iteration loop.
- do all dispersion-per-side major curve detection in the first pass of
  all `Viz`s on a plot, instead updating the `OverlayT` instances for
  each side and compensating for any length mismatch and
  rescale-to-minor cases in each loop cycle.
2023-03-03 18:36:46 -05:00
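The "align to first" idea can be sketched as a toy returns-rescaling (this is pure illustration with made-up data, not the `OverlayT` transform code): each minor curve's %-returns from its own earliest sample are re-anchored onto the major's level at that same time step.

```python
import numpy as np

def align_to_first(major_t, major_y, minor_t, minor_y):
    # index of the major's sample at (or after) the minor's first time stamp
    i = int(np.searchsorted(major_t, minor_t[0], side='left'))
    yref_major = float(major_y[i])
    # %-returns of the minor relative to its own earliest datum..
    rets = minor_y / minor_y[0]
    # ..re-anchored onto the major's level at that intersect sample
    return yref_major * rets

major_t = np.arange(5.0)
major_y = np.array([100., 102., 101., 103., 104.])
minor_t = np.array([2., 3., 4.])  # shorter support, starts later
minor_y = np.array([50., 55., 52.5])
pinned = align_to_first(major_t, major_y, minor_t, minor_y)
# pinned[0] sits exactly at the major's level on the minor's first datum
```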
Tyler Goodlet 73912ab9a8 Try to hide all axes even when removed 2023-03-03 18:36:46 -05:00
Tyler Goodlet 7f91cda899 Add hack-zone UI REPL access via `ctl-u` 2023-03-03 18:36:46 -05:00
Tyler Goodlet fd1fd8d49b Facepalm, align overlay plot view exactly to parent
Previously we were aligning the child's `PlotItem` to the "root" (top
most) overlays `ViewBox`..smh. This is why there was a weird gap on the
LHS next to the 'left' price axes: something weird in the implied axes
offsets was getting jammed in that rect.

Also comments out "the-skipping-of" moving axes from the overlay's
`PlotItem.layout` to the root's linear layout(s) when an overlay's axis
is read as not visible; this isn't really necessary nor useful and if we
want to remove the axes entirely we should do it explicitly and/or
provide a way through the `ComposeGridLayout` API.
2023-03-03 18:36:46 -05:00
Tyler Goodlet a2934b7d18 Go back to caching on all curves
Despite there being artifacts when interacting, the speedups when
cross-hair-ing are just too good to ignore. We can always play with
disabling caches when interaction takes place much like we do with feed
pausing.
2023-03-03 18:36:46 -05:00
Tyler Goodlet 2a4a5588a8 Dynamically adjust y-range margin in display loop
When zoomed in a lot, and thus a quote-driven y-range resize takes place,
it makes more sense to increase the `range_margin: float` input to
`._set_yrange()` to ensure all L1 labels stay in view; generally the
more zoomed in,
- the smaller the y-range is and thus the larger the needed margin (on
  that range's dispersion diff) would be,
- it's more likely to get a last datum move outside the previous range.

Also, always do overlayT style scaling on the slow chart whenever it
treads.
2023-03-03 18:36:46 -05:00
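An illustrative margin rule for the above (the 0.09 base matches the default mentioned in these commits but the zoom threshold and scaling factor are invented for the sketch, not the in-repo formula): grow the relative margin when the visible y-range is tiny compared to the price level.

```python
def yrange_margin(ymn: float, ymx: float) -> float:
    base = 0.09  # default fraction of the y-range diff
    diff = ymx - ymn
    mid = (ymx + ymn) / 2 or 1.0  # guard div-by-zero at a 0 level
    rel = diff / abs(mid)  # range size relative to the price level
    if rel < 0.001:  # heavily zoomed in: tiny range vs. price level
        return base * 4  # inflate margin so L1 labels stay in view
    return base

wide = yrange_margin(90.0, 110.0)     # zoomed out: default margin
tight = yrange_margin(99.99, 100.01)  # zoomed in: inflated margin
```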
Tyler Goodlet 1e85668bc2 Expose `._set_yrange()` kwargs via `yrange_kwargs: dict`
Since it can be desirable to dynamically adjust inputs to the y-ranging
method (such as in the display loop when a chart is very zoomed in), this
adds such support through a new `yrange_kwargs: dict[Viz, dict]` which
replaces the `yrange` tuple we were passing through prior. Also, adjusts
the y-range margin back to the original 0.09 of the diff now that we can
support dynamic control.
2023-03-03 18:36:46 -05:00
Tyler Goodlet 7f7af4ba00 Go back to no-cache on OHLC downsample line 2023-03-03 18:36:46 -05:00
Tyler Goodlet d742dd25c9 Only use last `ChartView._yrange` if set 2023-03-03 18:36:46 -05:00
Tyler Goodlet 2b075c7644 Skip overlay transform calcs on common-pi curves
If there is a common `PlotItem` used for a set of `Viz`/curves (on
a given view) we don't need to do overlay scaling and thus can also
short circuit the viz iteration loop early.
2023-03-03 18:36:46 -05:00
Tyler Goodlet 45368ff19d Lel, always meant to no-cache the step curve.. 2023-03-03 18:36:46 -05:00
Tyler Goodlet 372f298b23 Incrementally set vlm chart yrange per quote 2023-03-03 18:36:46 -05:00
Tyler Goodlet ccbe7c75e2 Only set the specific view's yrange per quote
Somewhat of a facepalm but, for incremental update of the auto-yrange
from quotes in the display loop obviously we only want to update the
associated `Viz`/viewbox for *that* fqsn. Further we don't need to worry
about the whole "tick margin" stuff since `._set_yrange()` already adds
margin to the yrange by default; thus we remove all of that.
2023-03-03 18:36:46 -05:00
Tyler Goodlet b446dba493 Always set the `ChartView._viz` for each plot 2023-03-03 18:36:46 -05:00
Tyler Goodlet d19b663013 No-overlays, y-ranging optimizations
When the caller passes `do_overlay_scaling=False` we skip the given
chart's `Viz` iteration loop, and set the yrange immediately, then
continue to the next chart (if `do_linked_charts` is set) instead of
a `continue` short circuit within the viz sub-loop.

Deats:
- add a `_maybe_calc_yrange()` helper which makes the `yranges`
  provided-or-not case logic more terse (factored).
- add a `do_linked_charts=False` short circuit.
- drop the legacy commented swing % calcs stuff.
- use the `ChartView._viz` when `do_overlay_scaling=False` thus
  presuming that we want to handle the viz mapped to *this* view box.
- add a `._yrange` "last set yrange" tracking var which keeps record of
  the last ymn/ymx value set in `._set_yrange()` BEFORE doing range
  margins; this will be used for incremental update in the display loop.
2023-03-03 18:36:46 -05:00
Tyler Goodlet 858429cfcd Disable overlay scaling on per-symbol-feed updates
Since each symbol's feed is multiplexed by quote key (an fqsn), we can
avoid scaling overlay curves on any single update, presuming each quote
driven cycle will trigger **only** the specific symbol's curve.

Also disables fsp `.interact_graphics_cycle()` calls for now since it
seems they aren't really that critical and we should be using the
same technique as above (doing incremental y-range checks/updates) for
FSPs as well.
2023-03-03 18:36:46 -05:00
Tyler Goodlet 83f50af485 Iterate all charts (widgets) when only one overlay
The reason (fsp) subcharts were not linked-updating correctly was
because of the early termination of the interact update loop when only
one "overlay" (aka no other overlays than the main curve) is detected.
Obviously in this case we still need to iterate all linked charts in the
set (presuming the user doesn't disable this).

Also tweaks a few internals:
- rename `start_datums: dict` -> `overlay_table`.
- compact all "single curve" checks to one logic block.
- don't collect curve info into the `overlay_table: dict` when
  `do_overlay_scaling=True`.
2023-03-03 18:36:46 -05:00
Tyler Goodlet 554f3f05aa Pass windowed y-mxmn to `.interact_graphics_cycle()` calls in display loop 2023-03-03 18:36:46 -05:00
Tyler Goodlet 55de7244c5 Allow y-range input via a `yranges: dict[Viz, tuple[float, float]]` 2023-03-03 18:36:46 -05:00
Tyler Goodlet cfd3ff6527 Don't unset `Viz.render` for unit vlm
Such that the y-range auto-sort inside
`ChartView.interact_graphics_cycle()` still runs on the unit vlm axis
and we always size such that the y-label stays in view.
2023-03-03 18:36:46 -05:00
Tyler Goodlet eca140ac87 Fix profiler f-string 2023-03-03 18:36:46 -05:00
Tyler Goodlet 4d9d04d9db Update profile msgs to new apis 2023-03-03 18:36:46 -05:00
Tyler Goodlet 264d21d59e Move axis hiding into `.overlay_plotitem()`
Since we pretty much always want to hide the 'bottom' axis and any side
not declared by the caller, move the axis hiding into this method. Lets
us drop the same calls in `.ui._fsp` and `._display`.

This also disables the auto-ranging back-linking for now since it
doesn't seem to be working quite yet?
2023-03-03 18:36:46 -05:00
Tyler Goodlet 63e705bab0 Better handle dynamic registry sampler broadcasts
In situations where clients are (dynamically) subscribing *while*
broadcasts are starting to taking place we need to handle the
`set`-modified-during-iteration case. This scenario seems to be more
common during races on concurrent startup of multiple symbols. The
solution here is to use another set to take note of subscribers which
are successfully sent-to and then skipping them on re-try.

This also contains an attempt to exception-handle throttled stream
overruns caused by higher frequency feeds (like binance) pushing more
quotes than can be handled during (UI) client startup.
2023-03-03 18:36:46 -05:00
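A hedged sketch of that resilient broadcast pattern (all names here are illustrative; the real code sends over async streams): iterate a snapshot of the subscriber set so concurrent (un)subscribes can't trip "set changed size during iteration", and track who was already sent-to so a retry pass skips them.

```python
class Sub:
    def __init__(self):
        self.msgs = []

    def send(self, msg):  # stand-in for the async stream send
        self.msgs.append(msg)

def broadcast(subs, quote, sent=None):
    sent = set() if sent is None else sent
    for sub in list(subs):  # snapshot: safe vs. concurrent mutation
        if sub in sent:
            continue  # already delivered on a prior (failed) pass
        sub.send(quote)
        sent.add(sub)  # note success so retries skip this sub
    return sent

subs = {Sub(), Sub()}
sent = broadcast(subs, 'tick')
broadcast(subs, 'tick', sent=sent)  # retry pass: no double delivery
```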
Tyler Goodlet 1e078a3c30 Drop old loop and wait on fsp engine tasks startups 2023-03-03 18:36:46 -05:00
Tyler Goodlet b26cab416f Comment out all median usage, turns out it's unneeded.. 2023-03-03 18:36:46 -05:00
Tyler Goodlet d6a8d779cf Lul, actually scaled main chart from linked set
This was a subtle logic error when building the `plots: dict` we weren't
adding the "main (ohlc or other source) chart" from the `LinkedSplits`
set when interacting with some sub-chart from `.subplots`..

Further this tries out bypassing `numpy.median()` altogether by just
using `median = (ymx - ymn) / 2` which should be nearly the same?
2023-03-03 18:36:46 -05:00
Tyler Goodlet b8d94bd337 Use `._pathops.slice_from_time()` for overlay intersects
It's way faster since it uses a uniform time arithmetic to narrow the
`numpy.searchsorted()` range before actually doing the index search B)
2023-03-03 18:36:46 -05:00
Tyler Goodlet 69b79191f1 Don't scale overlays on linked from display loop
In the (incrementally updated) display loop we have range logic that is
incrementally updated in real-time by streams, as such we don't really
need to update all linked chart's (for any given, currently updated
chart) y-ranges on calls of each separate (sub-)chart's
`ChartView.interact_graphics_cycle()`. In practise there are plenty of
cases where resizing in one chart (say the vlm fsps sub-plot) requires
a y-range re-calc but not in the OHLC price chart. Therefore
we always avoid doing more resizing than necessary despite it resulting
in potentially more method call overhead (which will later be justified
by better leveraging incrementally updated `Viz.maxmin()` and
`media_from_range()` calcs).
2023-03-03 18:36:46 -05:00
Tyler Goodlet 8ae47acdb4 Don't skip overlay scaling in disp-loop for now 2023-03-03 18:36:46 -05:00
Tyler Goodlet cdcf4aa326 Add linked charts guard-flag for use in display loop 2023-03-03 18:36:46 -05:00
Tyler Goodlet 94a1fdee1a Use new cached median method in overlay scaling
Massively speeds up scaling transform cycles (duh).

Also includes a draft for an "overlay transform" type/api; obviously
still a WIP 🏄..
2023-03-03 18:36:45 -05:00
Tyler Goodlet 5e6e2f8925 Add `Viz.median_from_range()`
A super snappy `numpy.median()` calculator (per input range) which we
slap an `lru_cache` on thanks to handy dunder method hacks for such
things on mutable types XD
2023-03-03 18:36:45 -05:00
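One way the "dunder hack" can look (a sketch with a stand-in `VizLike` type, not piker's actual `Viz`): give the mutable instance identity-based `__hash__`/`__eq__` so it can serve as an `lru_cache` key, and key the cache on the input index range so different view ranges get separate entries.

```python
from functools import lru_cache

import numpy as np

class VizLike:
    def __init__(self, arr: np.ndarray):
        self.arr = arr
        self.computes = 0

    # identity-based dunders make the (mutable) instance hashable
    def __hash__(self):
        return id(self)

    def __eq__(self, other):
        return self is other

    @lru_cache(maxsize=512)
    def median_from_range(self, start: int, stop: int) -> float:
        self.computes += 1  # count actual (non-cached) computations
        return float(np.median(self.arr[start:stop]))

viz = VizLike(np.arange(100.0))
first = viz.median_from_range(0, 50)
again = viz.median_from_range(0, 50)  # cache hit: no recompute
```

One caveat of decorating a method this way is that the cache keeps a reference to `self` alive for the lifetime of the entry.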
Tyler Goodlet 0932a85c9f Speed up ranging in display loop
Use the new `do_overlay_scaling: bool` since we know each feed will have
its own updates (cuz multiplexed by feed..) and we can avoid
ranging/scaling overlays that will make their own calls.

Also, pass in the last datum "brighter" color for ohlc curves as it was
originally (and now that we can pass that styling bit through).
2023-03-03 18:36:45 -05:00
Tyler Goodlet 8ed7bd8a8c Add full profiling to `.interact_graphics_cycle()` 2023-03-03 18:36:45 -05:00
Tyler Goodlet ea913e160d Fix intersect detection using time indexing
Facepalm, obviously absolute array indexes are not going to necessarily
align vs. time over multiple feeds/history. Instead use
`np.searchsorted()` on whatever curve has the smallest support and find
the appropriate index of intersection in time so that alignment always
starts at a sensible reference.

Also adds a `debug_print: bool` input arg which can enable all the
prints when working on this.
2023-03-03 18:36:45 -05:00
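The fix described here can be sketched as: resolve the common reference *time* first, then map it to a per-curve index via `np.searchsorted()`. A minimal sketch (hypothetical helper name, not the actual in-tree code):

```python
import numpy as np

def intersect_indices(
    major_t: np.ndarray,  # time column of the longest-history curve
    minor_t: np.ndarray,
) -> tuple[int, int]:
    # the curve with the smallest x-data support defines the common
    # (latest-starting) reference time
    t0 = max(major_t[0], minor_t[0])

    # absolute array indexes don't align across feeds with different
    # history lengths, so search for the reference *time* in each series
    i_major = int(np.searchsorted(major_t, t0, side='left'))
    i_minor = int(np.searchsorted(minor_t, t0, side='left'))
    return i_major, i_minor
```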
Tyler Goodlet a9670a85e8 Factor curve-dispersion sorting into primary loop
We can determine the major curve (in view) in the first pass of all
`Viz`s so drop the 2nd loop and thus the `mxmn_groups: dict`. Also
simplifies logic for the case of only one (the major) curve in view.
2023-03-03 18:36:45 -05:00
Tyler Goodlet 84b5a5f3d6 When only one curve is in view, skip group ranging 2023-03-03 18:36:45 -05:00
Tyler Goodlet 59c2c8fa0d Adjust `.update_graphics()` to expect `in_view: bool` in `_fsp.py` 2023-03-03 18:36:45 -05:00
Tyler Goodlet edd73f9c58 Drop `update_graphics_from_flow()` 2023-03-03 18:36:45 -05:00
Tyler Goodlet ba1fa8c2aa Just warn log on bad intersect indexing errors (for now) 2023-03-03 18:36:45 -05:00
Tyler Goodlet 8a5fe9da79 Only set the major curve's range once (per render cycle)
Turns out this is a limitation of the `ViewBox.setYRange()` api: you
can't call it more than once and expect anything but the first call to
be applied without letting a render cycle run. As such, we wait until
the end of the log-linear scaling loop to finally apply the major curve's
y-mx/mn after all minor curves have been evaluated.

This also drops all the debug prints (for now) to get a feel for latency
in production mode.
2023-03-03 18:36:45 -05:00
Tyler Goodlet 50bd32aeca Only remove axis from scene when in one 2023-03-03 18:36:45 -05:00
Tyler Goodlet 70baf6db0c Drop `.group_maxmin()`
We ended up doing group maxmin sorting at the interaction layer (now in
the view box) and thus this method is no longer needed, though it was
the reference for the code now in `ChartView.interact_graphics_cycle()`.

Further this adds a `remove_axes: bool` arg to `.insert_plotitem()`
which can be used to drop axis entries from the inserted pi (though it
doesn't seem like we really ever need that?) and does the removal in
a separate loop to avoid removing axes before they are registered in
`ComposedGridLayout._pi2axes`.
2023-03-03 18:36:45 -05:00
Tyler Goodlet 520905a653 Clean up cross-curve intersect point indexing
When there are `N`-curves we need to consider the smallest
x-data-support subset when computing intersects for each major-minor
pair such that the "shorter" series is always returned aligned to the longer one.

This makes the var naming more explicit with `major/minor_i_start` as
well as clarifies more stringently a bunch of other variables and
explicitly uses the `minor_y_intersect` y value in the scaling transform
calcs. Also fixes some debug prints.
2023-03-03 18:36:45 -05:00
Tyler Goodlet b3ca8d83a6 3rdz the charm: log-linearize minor y-ranges to a major
In very close manner to the original (gut instinct) attempt, this
properly (y-axis-vertically) aligns and scales overlaid curves according
to what we are calling a "log-linearized y-range multi-plot" B)

The basic idea is that a simple returns measure (eg. `R = (p1 - p0)
/ p0`) applied to all curves gives a constant output `R` no matter the
price co-domain in use and thus gives a constant returns over all assets
in view styled scaling; an intuitive visual of returns correlation. The
reference point is for now the left-most point in view (or highest
common index available to all curves), though we can make this
a parameter based on user needs.

A slew of debug `print()`s are left in for now until we iron out the
remaining edge cases to do with re-scaling a major (dispersion) curve
based on a minor now requiring a larger log-linear y-range from that
previous major's range.
2023-03-03 18:36:45 -05:00
Tyler Goodlet 9b321bc7f1 2nd try: dispersion normalize y-ranges around median
In the dispersion swing calcs, use the series median from the in-view
data to determine swing proportions to apply on each "minor curve"
(series with lesser dispersion vs. the one with the greatest). Track the
major `Viz` as before by max dispersion. Apply the dispersion swing
proportions to each minor curve-series in a third loop/pass of all
overlay groups: this ensures all overlays are dispersion normalized in
their ranges but, minor curves are currently (vertically) centered (vs.
the major) via their medians.

There is a ton of commented code from attempts to try and vertically
align minor curves to the major via the "first datum" in-view/available.
This still needs work and we may want to offer it as optional.

Also adds logic to allow skipping margin adjustments in `._set_yrange()`
if you pass `range_margin=None`.
2023-03-03 18:36:45 -05:00
Tyler Goodlet 81f384db13 First draft, group y-minmax transform algo
On overlaid ohlc vizs we compute the largest max/min spread and
apply that maximum "up and down swing" proportion to each `Viz`'s
viewbox in the group.

We obviously still need to clip to the shortest x-range so that
it doesn't look exactly the same as before XD
2023-03-03 18:36:45 -05:00
Tyler Goodlet b15136a351 Rename `.maybe_downsample_graphics()` -> `.interact_graphics_cycle()` 2023-03-03 18:36:45 -05:00
Tyler Goodlet 1770ceeacc Right, handle y-ranging multiple paths per plot
We were hacking this before using the whole `ChartView._maxmin()`
setting stuff since in some cases you might want similarly ranged paths
on the same view, but of course you need to max/min them together..

This adds that group sorting by using a table of `dict[PlotItem,
tuple[float, float]]` and taking the abs highest/lowest value for each
plot in the viz interaction update loop.

Also removes the now commented signal registry calls and thus
`._yranger`, drops the `set_range: bool` from `._set_yrange` and adds
an extra `.maybe_downsample_graphics()` to the mouse wheel handler to
avoid a weird slow debounce where ds-ing is delayed until a further
interaction.
2023-03-03 18:36:45 -05:00
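The group-sorting table described above reduces to a simple fold over per-path ranges. A hypothetical sketch using string keys in place of `PlotItem` handles:

```python
def group_maxmin(
    entries: list[tuple[str, float, float]],  # (plot_key, ymn, ymx) per path
) -> dict[str, tuple[float, float]]:
    # take the absolute lowest/highest value over all paths rendered
    # on the same plot so they share one y-range
    table: dict[str, tuple[float, float]] = {}
    for key, ymn, ymx in entries:
        mn, mx = table.get(key, (float('inf'), float('-inf')))
        table[key] = (min(mn, ymn), max(mx, ymx))
    return table
```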
Tyler Goodlet 399186a10a Drop Qt interaction signal usage
It's kind of hard to understand with the C++ fan-out to multiple views
(imo a cluster-f#$*&) and seems honestly just plain faster to loop (in
python) through all the linked view handlers XD

Core adjustments:
- make the panning and wheel-scroll handlers just call
  `.maybe_downsample_graphics()` directly; drop all signal emissions.
- make `.maybe_downsample_graphics()` loop through all vizs per subchart
  and use the new pipeline-style call sequence of:
  - `Viz.update_graphics() -> <read_slc>: tuple`
  - `Viz.maxmin(i_read_range=<read_slc>) -> yrange: tuple`
  - `Viz.plot.vb._set_yrange(yrange=yrange)`
  which inlines all the necessary calls in the most efficient way whilst
  leveraging `.maxmin()` caching and ymxmn-from-m4-during-render to
  boot.
- drop registering `._set_yrange()` for handling `.sigRangeChangedManually`.
2023-03-03 18:36:45 -05:00
Tyler Goodlet 5ce4337d42 Add first-draft `PlotItemOverlay.group_maxmin()`
Computes the maxmin values for each underlying plot's in-view range as
well as the max up/down swing (in percentage terms) from the plot with
most dispersion and returns all these values plus a `dict` of plots to
their ranges as part of the output.
2023-03-03 18:36:45 -05:00
Tyler Goodlet 4c838474be `flake8` linter cleanup and comment out order ctl draft code 2023-03-03 18:32:24 -05:00
Tyler Goodlet 1bd421a0f3 Block hist queries for non-60s 2023-03-03 18:17:02 -05:00
Tyler Goodlet 2ea850eed0 `deribit`: add new `Trade.block_trade_id` field.. 2023-03-03 18:17:02 -05:00
Tyler Goodlet e6fd2adb69 Include `deribit` backend in default brokers scan set 2023-03-03 18:17:02 -05:00
Tyler Goodlet 3bfe541259 `deribit`: fix history query routine sig to take `timeframe: float` 2023-03-03 18:17:02 -05:00
Tyler Goodlet 18d70447cd `deribit`: various lib API compat fixes
- port to new `msgspec` "default fields must come after non-default
  ones" shite they changed.
- adjust to  `open_jsonrpc_session()` kwarg remap: `dtype` ->
  `response_type=JSONRPCResult`.
2023-03-03 18:17:02 -05:00
Tyler Goodlet c85324f142 `deribit`: drop removed (now deprecated and removed) `.backfill_bars()` endpoint 2023-03-03 18:17:02 -05:00
50 changed files with 2469 additions and 976 deletions

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers.
# Copyright 2020-eternity Tyler Goodlet (in stewardship for piker0)
# Copyright 2020-eternity Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -14,11 +14,11 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
'''
piker: trading gear for hackers.
"""
from ._daemon import open_piker_runtime
'''
from .service import open_piker_runtime
from .data.feed import open_feed
__all__ = [

View File

@ -24,6 +24,7 @@ __brokers__ = [
'binance',
'ib',
'kraken',
'deribit',
# broken but used to work
# 'questrade',

View File

@ -29,8 +29,15 @@ import tractor
from ..cli import cli
from .. import watchlists as wl
from ..log import get_console_log, colorize_json, get_logger
from .._daemon import maybe_spawn_brokerd, maybe_open_pikerd
from ..brokers import core, get_brokermod, data
from ..service import (
maybe_spawn_brokerd,
maybe_open_pikerd,
)
from ..brokers import (
core,
get_brokermod,
data,
)
log = get_logger('cli')
DEFAULT_BROKER = 'questrade'
@ -60,6 +67,7 @@ def get_method(client, meth_name: str):
print_ok('found!.')
return method
async def run_method(client, meth_name: str, **kwargs):
method = get_method(client, meth_name)
print('running...', end='', flush=True)
@ -67,19 +75,20 @@ async def run_method(client, meth_name: str, **kwargs):
print_ok(f'done! result: {type(result)}')
return result
async def run_test(broker_name: str):
brokermod = get_brokermod(broker_name)
total = 0
passed = 0
failed = 0
print(f'getting client...', end='', flush=True)
print('getting client...', end='', flush=True)
if not hasattr(brokermod, 'get_client'):
print_error('fail! no \'get_client\' context manager found.')
return
async with brokermod.get_client(is_brokercheck=True) as client:
print_ok(f'done! inside client context.')
print_ok('done! inside client context.')
# check for methods present on brokermod
method_list = [
@ -130,7 +139,6 @@ async def run_test(broker_name: str):
total += 1
# check for methods present con brokermod.Client and their
# results
@ -180,7 +188,6 @@ def brokercheck(config, broker):
trio.run(run_test, broker)
@cli.command()
@click.option('--keys', '-k', multiple=True,
help='Return results only for these keys')
@ -335,8 +342,6 @@ def contracts(ctx, loglevel, broker, symbol, ids):
brokermod = get_brokermod(broker)
get_console_log(loglevel)
contracts = trio.run(partial(core.contracts, brokermod, symbol))
if not ids:
# just print out expiry dates which can be used with

View File

@ -28,7 +28,7 @@ import trio
from ..log import get_logger
from . import get_brokermod
from .._daemon import maybe_spawn_brokerd
from ..service import maybe_spawn_brokerd
from .._cacheables import open_cached_client

View File

@ -30,7 +30,6 @@ from .feed import (
open_history_client,
open_symbol_search,
stream_quotes,
backfill_bars
)
# from .broker import (
# trades_dialogue,

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# Copyright (C) Guillermo Rodriguez (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -18,49 +18,48 @@
Deribit backend.
'''
import json
from __future__ import annotations
import time
import asyncio
from contextlib import asynccontextmanager as acm, AsyncExitStack
from contextlib import asynccontextmanager as acm
from functools import partial
from datetime import datetime
from typing import Any, Optional, Iterable, Callable
from typing import (
Any,
Optional,
Callable,
)
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT,
L1_BOOK,
TRADES,
OPTION,
CALL,
PUT,
)
import pendulum
import asks
import trio
from trio_typing import Nursery, TaskStatus
from trio_typing import TaskStatus
from fuzzywuzzy import process as fuzzy
import numpy as np
from tractor.trionics import (
broadcast_receiver,
maybe_open_context
)
from tractor import to_asyncio
from cryptofeed.symbols import Symbol
from piker.data.types import Struct
from piker.data._web_bs import (
NoBsWs,
open_autorecon_ws,
open_jsonrpc_session
)
from .._util import resproc
from piker import config
from piker.log import get_logger
from tractor.trionics import (
broadcast_receiver,
BroadcastReceiver,
maybe_open_context
)
from tractor import to_asyncio
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT,
L1_BOOK, TRADES,
OPTION, CALL, PUT
)
from cryptofeed.symbols import Symbol
log = get_logger(__name__)
@ -89,19 +88,20 @@ _ohlc_dtype = [
class JSONRPCResult(Struct):
jsonrpc: str = '2.0'
id: int
result: Optional[dict] = None
error: Optional[dict] = None
usIn: int
usOut: int
usIn: int
usOut: int
usDiff: int
testnet: bool
jsonrpc: str = '2.0'
result: Optional[dict] = None
error: Optional[dict] = None
class JSONRPCChannel(Struct):
jsonrpc: str = '2.0'
method: str
params: dict
jsonrpc: str = '2.0'
class KLinesResult(Struct):
@ -114,6 +114,7 @@ class KLinesResult(Struct):
ticks: list[int]
volume: list[float]
class Trade(Struct):
trade_seq: int
trade_id: str
@ -125,9 +126,11 @@ class Trade(Struct):
instrument_name: str
index_price: float
direction: str
combo_trade_id: Optional[int] = 0,
combo_id: Optional[str] = '',
amount: float
combo_trade_id: Optional[int] = 0
combo_id: Optional[str] = ''
block_trade_id: str | None = ''
class LastTradesResult(Struct):
trades: list[Trade]
@ -145,8 +148,8 @@ def str_to_cb_sym(name: str) -> Symbol:
quote = base
if option_type == 'put':
option_type = PUT
elif option_type == 'call':
option_type = PUT
elif option_type == 'call':
option_type = CALL
else:
raise Exception("Couldn\'t parse option type")
@ -167,8 +170,8 @@ def piker_sym_to_cb_sym(name: str) -> Symbol:
quote = base
if option_type == 'P':
option_type = PUT
elif option_type == 'C':
option_type = PUT
elif option_type == 'C':
option_type = CALL
else:
raise Exception("Couldn\'t parse option type")
@ -185,8 +188,13 @@ def cb_sym_to_deribit_inst(sym: Symbol):
# cryptofeed normalized
cb_norm = ['F', 'G', 'H', 'J', 'K', 'M', 'N', 'Q', 'U', 'V', 'X', 'Z']
# deribit specific
months = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP', 'OCT', 'NOV', 'DEC']
# deribit specific
months = [
'JAN', 'FEB', 'MAR',
'APR', 'MAY', 'JUN',
'JUL', 'AUG', 'SEP',
'OCT', 'NOV', 'DEC',
]
exp = sym.expiry_date
@ -206,14 +214,15 @@ def get_config() -> dict[str, Any]:
section = conf.get('deribit')
# TODO: document why we send this, basically because logging params for cryptofeed
# TODO: document why we send this, basically because logging params
# for cryptofeed
conf['log'] = {}
conf['log']['disabled'] = True
if section is None:
log.warning(f'No config section found for deribit in {path}')
return conf
return conf
class Client:
@ -360,6 +369,7 @@ class Client:
end_dt: Optional[datetime] = None,
limit: int = 1000,
as_np: bool = True,
) -> dict:
instrument = symbol
@ -385,14 +395,8 @@ class Client:
result = KLinesResult(**resp.result)
new_bars = []
for i in range(len(result.close)):
_open = result.open[i]
high = result.high[i]
low = result.low[i]
close = result.close[i]
volume = result.volume[i]
row = [
(start_time + (i * (60 * 1000))) / 1000.0, # time
result.open[i],
@ -405,7 +409,7 @@ class Client:
new_bars.append((i,) + tuple(row))
array = np.array(new_bars, dtype=_ohlc_dtype) if as_np else klines
array = np.array(new_bars, dtype=_ohlc_dtype) if as_np else new_bars
return array
async def last_trades(
@ -431,7 +435,9 @@ async def get_client(
async with (
trio.open_nursery() as n,
open_jsonrpc_session(
_testnet_ws_url, dtype=JSONRPCResult) as json_rpc
_testnet_ws_url,
response_type=JSONRPCResult
) as json_rpc
):
client = Client(json_rpc)
@ -457,7 +463,7 @@ async def get_client(
if time.time() - _expiry_time < renew_time:
# if we are close to token expiry time
if _refresh_token != None:
if _refresh_token is not None:
# if we have a refresh token already dont need to send
# secret
params = {
@ -467,7 +473,8 @@ async def get_client(
}
else:
# we don't have refresh token, send secret to initialize
# we don't have refresh token, send secret to
# initialize
params = {
'grant_type': 'client_credentials',
'client_id': client._key_id,
@ -535,20 +542,30 @@ async def aio_price_feed_relay(
}))
async def _l1(data: dict, receipt_timestamp):
to_trio.send_nowait(('l1', {
'symbol': cb_sym_to_deribit_inst(
str_to_cb_sym(data.symbol)).lower(),
'ticks': [
{'type': 'bid',
'price': float(data.bid_price), 'size': float(data.bid_size)},
{'type': 'bsize',
'price': float(data.bid_price), 'size': float(data.bid_size)},
{'type': 'ask',
'price': float(data.ask_price), 'size': float(data.ask_size)},
{'type': 'asize',
'price': float(data.ask_price), 'size': float(data.ask_size)}
]
}))
to_trio.send_nowait(
('l1', {
'symbol': cb_sym_to_deribit_inst(
str_to_cb_sym(data.symbol)).lower(),
'ticks': [
{'type': 'bid',
'price': float(data.bid_price),
'size': float(data.bid_size)},
{'type': 'bsize',
'price': float(data.bid_price),
'size': float(data.bid_size)},
{'type': 'ask',
'price': float(data.ask_price),
'size': float(data.ask_size)},
{'type': 'asize',
'price': float(data.ask_price),
'size': float(data.ask_size)}
]
})
)
fh.add_feed(
DERIBIT,
@ -604,69 +621,71 @@ async def maybe_open_price_feed(
yield feed
# TODO: order broker support: this is all draft code from @guilledk B)
async def aio_order_feed_relay(
fh: FeedHandler,
instrument: Symbol,
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
) -> None:
async def _fill(data: dict, receipt_timestamp):
breakpoint()
# async def aio_order_feed_relay(
# fh: FeedHandler,
# instrument: Symbol,
# from_trio: asyncio.Queue,
# to_trio: trio.abc.SendChannel,
async def _order_info(data: dict, receipt_timestamp):
breakpoint()
# ) -> None:
# async def _fill(data: dict, receipt_timestamp):
# breakpoint()
fh.add_feed(
DERIBIT,
channels=[FILLS, ORDER_INFO],
symbols=[instrument.upper()],
callbacks={
FILLS: _fill,
ORDER_INFO: _order_info,
})
# async def _order_info(data: dict, receipt_timestamp):
# breakpoint()
if not fh.running:
fh.run(
start_loop=False,
install_signal_handlers=False)
# fh.add_feed(
# DERIBIT,
# channels=[FILLS, ORDER_INFO],
# symbols=[instrument.upper()],
# callbacks={
# FILLS: _fill,
# ORDER_INFO: _order_info,
# })
# sync with trio
to_trio.send_nowait(None)
# if not fh.running:
# fh.run(
# start_loop=False,
# install_signal_handlers=False)
await asyncio.sleep(float('inf'))
# # sync with trio
# to_trio.send_nowait(None)
# await asyncio.sleep(float('inf'))
@acm
async def open_order_feed(
instrument: list[str]
) -> trio.abc.ReceiveStream:
async with maybe_open_feed_handler() as fh:
async with to_asyncio.open_channel_from(
partial(
aio_order_feed_relay,
fh,
instrument
)
) as (first, chan):
yield chan
# @acm
# async def open_order_feed(
# instrument: list[str]
# ) -> trio.abc.ReceiveStream:
# async with maybe_open_feed_handler() as fh:
# async with to_asyncio.open_channel_from(
# partial(
# aio_order_feed_relay,
# fh,
# instrument
# )
# ) as (first, chan):
# yield chan
@acm
async def maybe_open_order_feed(
instrument: str
) -> trio.abc.ReceiveStream:
# @acm
# async def maybe_open_order_feed(
# instrument: str
# ) -> trio.abc.ReceiveStream:
# TODO: add a predicate to maybe_open_context
async with maybe_open_context(
acm_func=open_order_feed,
kwargs={
'instrument': instrument,
'fh': fh
},
key=f'{instrument}-order',
) as (cache_hit, feed):
if cache_hit:
yield broadcast_receiver(feed, 10)
else:
yield feed
# # TODO: add a predicate to maybe_open_context
# async with maybe_open_context(
# acm_func=open_order_feed,
# kwargs={
# 'instrument': instrument,
# 'fh': fh
# },
# key=f'{instrument}-order',
# ) as (cache_hit, feed):
# if cache_hit:
# yield broadcast_receiver(feed, 10)
# else:
# yield feed

View File

@ -20,35 +20,28 @@ Deribit backend.
'''
from contextlib import asynccontextmanager as acm
from datetime import datetime
from typing import Any, Optional, Callable
import time
from typing import (
Callable,
)
import trio
from trio_typing import TaskStatus
import pendulum
from fuzzywuzzy import process as fuzzy
import numpy as np
import tractor
from piker._cacheables import open_cached_client
from piker.log import get_logger, get_console_log
from piker.data import ShmArray
from piker.brokers._util import (
BrokerError,
DataUnavailable,
)
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT, L1_BOOK, TRADES, OPTION, CALL, PUT
)
from cryptofeed.symbols import Symbol
from .api import (
Client, Trade,
get_config,
str_to_cb_sym, piker_sym_to_cb_sym, cb_sym_to_deribit_inst,
Client,
Trade,
piker_sym_to_cb_sym,
cb_sym_to_deribit_inst,
maybe_open_price_feed
)
@ -69,20 +62,24 @@ async def open_history_client(
async with open_cached_client('deribit') as client:
async def get_ohlc(
end_dt: Optional[datetime] = None,
start_dt: Optional[datetime] = None,
timeframe: float,
end_dt: datetime | None = None,
start_dt: datetime | None = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
if timeframe != 60:
raise DataUnavailable('Only 1m bars are supported')
array = await client.bars(
instrument,
start_dt=start_dt,
end_dt=end_dt,
)
if len(array) == 0:
raise DataUnavailable
@ -132,7 +129,7 @@ async def stream_quotes(
async with maybe_open_price_feed(sym) as stream:
cache = await client.cache_symbols()
await client.cache_symbols()
last_trades = (await client.last_trades(
cb_sym_to_deribit_inst(nsym), count=1)).trades
@ -174,7 +171,7 @@ async def open_symbol_search(
async with open_cached_client('deribit') as client:
# load all symbols locally for fast search
cache = await client.cache_symbols()
await client.cache_symbols()
await ctx.started()
async with ctx.open_stream() as stream:

View File

@ -29,8 +29,11 @@ from tractor.trionics import broadcast_receiver
from ..log import get_logger
from ..data.types import Struct
from .._daemon import maybe_open_emsd
from ._messages import Order, Cancel
from ..service import maybe_open_emsd
from ._messages import (
Order,
Cancel,
)
from ..brokers import get_brokermod
if TYPE_CHECKING:

View File

@ -19,16 +19,18 @@ CLI commons.
'''
import os
from pprint import pformat
from functools import partial
import click
import trio
import tractor
from ..log import get_console_log, get_logger, colorize_json
from ..log import (
get_console_log,
get_logger,
colorize_json,
)
from ..brokers import get_brokermod
from .._daemon import (
from ..service import (
_default_registry_host,
_default_registry_port,
)
@ -68,7 +70,7 @@ def pikerd(
'''
from .._daemon import open_pikerd
from ..service import open_pikerd
log = get_console_log(loglevel)
if pdb:
@ -171,7 +173,7 @@ def cli(
@click.pass_obj
def services(config, tl, ports):
from .._daemon import (
from ..service import (
open_piker_runtime,
_default_registry_port,
_default_registry_host,
@ -204,8 +206,8 @@ def services(config, tl, ports):
def _load_clis() -> None:
from ..data import marketstore # noqa
from ..data import elastic
from ..service import marketstore # noqa
from ..service import elastic
from ..data import cli # noqa
from ..brokers import cli # noqa
from ..ui import cli # noqa

View File

@ -295,7 +295,7 @@ def slice_from_time(
arr: np.ndarray,
start_t: float,
stop_t: float,
step: int | None = None,
step: float, # sampler period step-diff
) -> slice:
'''
@ -324,12 +324,6 @@ def slice_from_time(
# end of the input array.
read_i_max = arr.shape[0]
# TODO: require this is always passed in?
if step is None:
step = round(t_last - times[-2])
if step == 0:
step = 1
# compute (presumed) uniform-time-step index offsets
i_start_t = floor(start_t)
read_i_start = floor(((i_start_t - t_first) // step)) - 1
@ -395,7 +389,7 @@ def slice_from_time(
# f'diff: {t_diff}\n'
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
# )
read_i_start = new_read_i_start - 1
read_i_start = new_read_i_start
t_iv_stop = times[read_i_stop - 1]
if (
@ -412,7 +406,7 @@ def slice_from_time(
times[read_i_start:],
# times,
i_stop_t,
side='left',
side='right',
)
if (

View File

@ -42,7 +42,7 @@ from ..log import (
get_logger,
get_console_log,
)
from .._daemon import maybe_spawn_daemon
from ..service import maybe_spawn_daemon
if TYPE_CHECKING:
from ._sharedmem import (
@ -68,8 +68,8 @@ class Sampler:
This non-instantiated type is meant to be a singleton within
a `samplerd` actor-service spawned once by the user wishing to
time-step sample real-time quote feeds, see
``._daemon.maybe_open_samplerd()`` and the below
time-step-sample (real-time) quote feeds, see
``.service.maybe_open_samplerd()`` and the below
``register_with_sampler()``.
'''
@ -87,7 +87,6 @@ class Sampler:
# holds all the ``tractor.Context`` remote subscriptions for
# a particular sample period increment event: all subscribers are
# notified on a step.
# subscribers: dict[int, list[tractor.MsgStream]] = {}
subscribers: defaultdict[
float,
list[
@ -240,8 +239,11 @@ class Sampler:
subscribers for a given sample period.
'''
pair: list[float, set]
pair = self.subscribers[period_s]
last_ts: float
subs: set
last_ts, subs = pair
task = trio.lowlevel.current_task()
@ -253,25 +255,35 @@ class Sampler:
# f'consumers: {subs}'
)
borked: set[tractor.MsgStream] = set()
for stream in subs:
sent: set[tractor.MsgStream] = set()
while True:
try:
await stream.send({
'index': time_stamp or last_ts,
'period': period_s,
})
except (
trio.BrokenResourceError,
trio.ClosedResourceError
):
log.error(
f'{stream._ctx.chan.uid} dropped connection'
)
borked.add(stream)
for stream in (subs - sent):
try:
await stream.send({
'index': time_stamp or last_ts,
'period': period_s,
})
sent.add(stream)
except (
trio.BrokenResourceError,
trio.ClosedResourceError
):
log.error(
f'{stream._ctx.chan.uid} dropped connection'
)
borked.add(stream)
else:
break
except RuntimeError:
log.warning(f'Client subs {subs} changed while broadcasting')
continue
for stream in borked:
try:
subs.remove(stream)
except ValueError:
except KeyError:
log.warning(
f'{stream._ctx.chan.uid} sub already removed!?'
)
@ -379,7 +391,7 @@ async def spawn_samplerd(
update and increment count write and stream broadcasting.
'''
from piker._daemon import Services
from piker.service import Services
dname = 'samplerd'
log.info(f'Spawning `{dname}`')
@ -419,7 +431,7 @@ async def maybe_open_samplerd(
loglevel: str | None = None,
**kwargs,
) -> tractor._portal.Portal: # noqa
) -> tractor.Portal: # noqa
'''
Client-side helper to maybe startup the ``samplerd`` service
under the ``pikerd`` tree.
@ -609,6 +621,14 @@ async def sample_and_broadcast(
fqsn = f'{broker_symbol}.{brokername}'
lags: int = 0
# TODO: speed up this loop in an AOT compiled lang (like
# rust or nim or zig) and/or instead of doing a fan out to
# TCP sockets here, we add a shm-style tick queue which
# readers can pull from instead of placing the burden of
# broadcast on solely on this `brokerd` actor. see issues:
# - https://github.com/pikers/piker/issues/98
# - https://github.com/pikers/piker/issues/107
for (stream, tick_throttle) in subs.copy():
try:
with trio.move_on_after(0.2) as cs:
@ -738,9 +758,6 @@ def frame_ticks(
ticks_by_type[ttype].append(tick)
# TODO: a less naive throttler, here's some snippets:
# token bucket by njs:
# https://gist.github.com/njsmith/7ea44ec07e901cb78ebe1dd8dd846cb9
async def uniform_rate_send(
rate: float,
@ -750,8 +767,22 @@ async def uniform_rate_send(
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Throttle a real-time (presumably tick event) stream to a uniform
transmissiom rate, normally for the purposes of throttling a data
flow being consumed by a graphics rendering actor which itself is limited
by a fixed maximum display rate.
# try not to error-out on overruns of the subscribed (chart) client
Though this function isn't documented (nor was intentially written
to be) a token-bucket style algo, it effectively operates as one (we
think?).
TODO: a less naive throttler, here's some snippets:
token bucket by njs:
https://gist.github.com/njsmith/7ea44ec07e901cb78ebe1dd8dd846cb9
'''
# try not to error-out on overruns of the subscribed client
stream._ctx._backpressure = True
# TODO: compute the approx overhead latency per cycle
@ -848,6 +879,16 @@ async def uniform_rate_send(
# rate timing exactly lul
try:
await stream.send({sym: first_quote})
except tractor.RemoteActorError as rme:
if rme.type is not tractor._exceptions.StreamOverrun:
raise
ctx = stream._ctx
chan = ctx.chan
log.warning(
'Throttled quote-stream overrun!\n'
f'{sym}:{ctx.cid}@{chan.uid}'
)
except (
# NOTE: any of these can be raised by ``tractor``'s IPC
# transport-layer and we want to be highly resilient

View File

@ -19,7 +19,10 @@ marketstore cli.
"""
from functools import partial
from pprint import pformat
from pprint import (
pformat,
pprint,
)
from anyio_marketstore import open_marketstore_client
import trio
@ -113,15 +116,11 @@ def ms_stream(
@cli.command()
@click.option(
'--tl',
is_flag=True,
help='Enable tractor logging')
@click.option(
'--host',
'--tsdb_host',
default='localhost'
)
@click.option(
'--port',
'--tsdb_port',
default=5993
)
@click.argument('symbols', nargs=-1)
@ -137,18 +136,74 @@ def storesh(
Start an IPython shell ready to query the local marketstore db.
'''
from piker.data.marketstore import tsdb_history_update
from piker._daemon import open_piker_runtime
from piker.data.marketstore import open_tsdb_client
from piker.service import open_piker_runtime
async def main():
nonlocal symbols
async with open_piker_runtime(
'storesh',
enable_modules=['piker.data._ahab'],
enable_modules=['piker.service._ahab'],
):
symbol = symbols[0]
await tsdb_history_update(symbol)
async with open_tsdb_client(symbol) as storage:
# TODO: ask if user wants to write history for detected
# available shm buffers?
from tractor.trionics import ipython_embed
await ipython_embed()
trio.run(main)
@cli.command()
@click.option(
'--host',
default='localhost'
)
@click.option(
'--port',
default=5993
)
@click.option(
'--delete',
'-d',
is_flag=True,
help='Delete history (1 Min) for symbol(s)',
)
@click.argument('symbols', nargs=-1)
@click.pass_obj
def storage(
config,
host,
port,
symbols: list[str],
delete: bool,
):
'''
Start an IPython shell ready to query the local marketstore db.
'''
from piker.data.marketstore import open_tsdb_client
from piker.service import open_piker_runtime
async def main():
nonlocal symbols
async with open_piker_runtime(
'tsdb_storage',
enable_modules=['piker.service._ahab'],
):
symbol = symbols[0]
async with open_tsdb_client(symbol) as storage:
if delete:
for fqsn in symbols:
syms = await storage.client.list_symbols()
breakpoint()
await storage.delete_ts(fqsn, 60)
await storage.delete_ts(fqsn, 1)
trio.run(main)

View File

@ -58,7 +58,7 @@ from ..log import (
get_logger,
get_console_log,
)
from .._daemon import (
from ..service import (
maybe_spawn_brokerd,
check_for_service,
)
@ -1589,6 +1589,9 @@ async def open_feed(
(brokermod, bfqsns),
) in zip(ctxs, providers.items()):
# NOTE: do it asap to avoid overruns during multi-feed setup?
ctx._backpressure = backpressure
for fqsn, flume_msg in flumes_msg_dict.items():
flume = Flume.from_msg(flume_msg)
assert flume.symbol.fqsn == fqsn

View File

@ -21,7 +21,11 @@ import logging
import json
import tractor
from pygments import highlight, lexers, formatters
from pygments import (
highlight,
lexers,
formatters,
)
# Makes it so we only see the full module name when using ``__name__``
# without the extra "piker." prefix.
@ -32,26 +36,48 @@ def get_logger(
name: str = None,
) -> logging.Logger:
'''Return the package log or a sub-log for `name` if provided.
'''
Return the package log or a sub-log for `name` if provided.
'''
return tractor.log.get_logger(name=name, _root_name=_proj_name)
def get_console_log(level: str = None, name: str = None) -> logging.Logger:
'''Get the package logger and enable a handler which writes to stderr.
def get_console_log(
level: str | None = None,
name: str | None = None,
) -> logging.Logger:
'''
Get the package logger and enable a handler which writes to stderr.
Yeah yeah, i know we can use ``DictConfig``. You do it...
'''
return tractor.log.get_console_log(
level, name=name, _root_name=_proj_name) # our root logger
level,
name=name,
_root_name=_proj_name,
) # our root logger
def colorize_json(data, style='algol_nu'):
"""Colorize json output using ``pygments``.
"""
formatted_json = json.dumps(data, sort_keys=True, indent=4)
def colorize_json(
data: dict,
style='algol_nu',
):
'''
Colorize json output using ``pygments``.
'''
formatted_json = json.dumps(
data,
sort_keys=True,
indent=4,
)
return highlight(
formatted_json, lexers.JsonLexer(),
formatted_json,
lexers.JsonLexer(),
# likeable styles: algol_nu, tango, monokai
formatters.TerminalTrueColorFormatter(style=style)
)

View File

@ -199,8 +199,16 @@ class Position(Struct):
sym_info = s.broker_info[broker]
d['asset_type'] = sym_info['asset_type']
d['price_tick_size'] = sym_info['price_tick_size']
d['lot_tick_size'] = sym_info['lot_tick_size']
d['price_tick_size'] = (
sym_info.get('price_tick_size')
or
s.tick_size
)
d['lot_tick_size'] = (
sym_info.get('lot_tick_size')
or
s.lot_tick_size
)
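The switch from `sym_info['price_tick_size']` to `sym_info.get(...) or fallback` handles both a missing key and an explicit `None` stored under it. A minimal standalone sketch of that lookup pattern (the function name is illustrative, not from the diff):

```python
def resolve_tick_size(sym_info: dict, fallback: float) -> float:
    # `.get()` covers a missing key; the trailing `or` additionally
    # covers a present-but-falsy value (e.g. an explicit None).
    # NB: a legitimate 0 value would also fall back, which is fine
    # here since a tick size can never be zero.
    return sym_info.get('price_tick_size') or fallback

resolve_tick_size({'price_tick_size': 0.25}, 0.01)  # 0.25
resolve_tick_size({}, 0.01)                         # 0.01
resolve_tick_size({'price_tick_size': None}, 0.01)  # 0.01
```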
if self.expiry is None:
d.pop('expiry', None)

View File

@ -19,6 +19,8 @@ Structured, daemon tree service management.
"""
from __future__ import annotations
from pprint import pformat
from functools import partial
import os
from typing import (
Optional,
@ -35,14 +37,11 @@ import tractor
import trio
from trio_typing import TaskStatus
from .log import (
from ..log import (
get_logger,
get_console_log,
)
from .brokers import get_brokermod
from pprint import pformat
from functools import partial
from ..brokers import get_brokermod
log = get_logger(__name__)
@ -337,7 +336,6 @@ async def open_pikerd(
alive underling services (see below).
'''
async with (
open_piker_runtime(
@ -355,17 +353,26 @@ async def open_pikerd(
tractor.open_nursery() as actor_nursery,
trio.open_nursery() as service_nursery,
):
assert root_actor.accept_addr == reg_addr
if root_actor.accept_addr != reg_addr:
raise RuntimeError(f'Daemon failed to bind on {reg_addr}!?')
# assign globally for future daemon/task creation
Services.actor_n = actor_nursery
Services.service_n = service_nursery
Services.debug_mode = debug_mode
if tsdb:
from piker.data._ahab import start_ahab
from piker.data.marketstore import start_marketstore
from ._ahab import start_ahab
from .marketstore import start_marketstore
log.info('Spawning `marketstore` supervisor')
ctn_ready, config, (cid, pid) = await service_nursery.start(
start_ahab,
'marketstored',
start_marketstore,
partial(
start_ahab,
'marketstored',
start_marketstore,
loglevel=loglevel,
)
)
log.info(
@ -385,7 +392,7 @@ async def open_pikerd(
start_ahab,
'elasticsearch',
start_elasticsearch,
start_timeout=240.0 # high cause ci
loglevel=loglevel,
)
)
@ -396,12 +403,6 @@ async def open_pikerd(
f'config: {pformat(config)}'
)
# assign globally for future daemon/task creation
Services.actor_n = actor_nursery
Services.service_n = service_nursery
Services.debug_mode = debug_mode
try:
yield Services
@ -667,7 +668,7 @@ async def spawn_brokerd(
)
# non-blocking setup of brokerd service nursery
from .data import _setup_persistent_brokerd
from ..data import _setup_persistent_brokerd
await Services.start_service_task(
dname,
@ -695,7 +696,10 @@ async def maybe_spawn_brokerd(
f'brokerd.{brokername}',
service_task_target=spawn_brokerd,
spawn_args={'brokername': brokername, 'loglevel': loglevel},
spawn_args={
'brokername': brokername,
'loglevel': loglevel,
},
loglevel=loglevel,
**kwargs,
@ -727,7 +731,7 @@ async def spawn_emsd(
)
# non-blocking setup of clearing service
from .clearing._ems import _setup_persistent_emsd
from ..clearing._ems import _setup_persistent_emsd
await Services.start_service_task(
'emsd',

View File

@ -18,6 +18,8 @@
Supervisor for docker with included specific-image service helpers.
'''
from collections import ChainMap
from functools import partial
import os
import time
from typing import (
@ -45,7 +47,10 @@ from requests.exceptions import (
ReadTimeout,
)
from ..log import get_logger, get_console_log
from ..log import (
get_logger,
get_console_log,
)
from .. import config
log = get_logger(__name__)
@ -124,10 +129,19 @@ class Container:
async def process_logs_until(
self,
log_msg_key: str,
# this is a predicate func for matching log msgs emitted by the
# underlying containerized app
patt_matcher: Callable[[str], bool],
bp_on_msg: bool = False,
# XXX WARNING XXX: do not touch this sleep value unless
# you know what you are doing! the value is critical to
# making sure the caller code inside the startup context
# does not timeout BEFORE we receive a match on the
# ``patt_matcher()`` predicate above.
checkpoint_period: float = 0.001,
) -> bool:
'''
Attempt to capture container log messages and relay through our
@ -137,12 +151,14 @@ class Container:
seen_so_far = self.seen_so_far
while True:
logs = self.cntr.logs()
try:
logs = self.cntr.logs()
except (
docker.errors.NotFound,
docker.errors.APIError
):
log.exception('Failed to parse logs?')
return False
entries = logs.decode().split('\n')
@ -155,25 +171,23 @@ class Container:
entry = entry.strip()
try:
record = json.loads(entry)
if 'msg' in record:
msg = record['msg']
elif 'message' in record:
msg = record['message']
else:
raise KeyError(f'Unexpected log format\n{record}')
msg = record[log_msg_key]
level = record['level']
except json.JSONDecodeError:
msg = entry
level = 'error'
if msg and entry not in seen_so_far:
seen_so_far.add(entry)
if bp_on_msg:
await tractor.breakpoint()
# TODO: do we need a more general mechanism
# for these kinds of "log record entries"?
# if 'Error' in entry:
# raise RuntimeError(entry)
if (
msg
and entry not in seen_so_far
):
seen_so_far.add(entry)
getattr(log, level.lower(), log.error)(f'{msg}')
if level == 'fatal':
@ -183,10 +197,15 @@ class Container:
return True
# do a checkpoint so we don't block if cancelled B)
await trio.sleep(0.1)
await trio.sleep(checkpoint_period)
return False
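The per-image `log_msg_key` parameter replaces the old hardcoded `'msg'`/`'message'` branching: each endpoint declares which JSON key carries the message body, and non-JSON lines fall back to an `'error'` level. The parsing step can be sketched standalone (function name is illustrative, not from the diff; like the diff, a JSON record missing the key raises `KeyError`):

```python
import json

def parse_log_entry(entry: str, log_msg_key: str = 'msg') -> tuple[str, str]:
    '''Extract (msg, level) from one container log line.

    Structured lines are JSON with a per-image message key
    ('msg' for marketstore, 'message' for elasticsearch);
    anything unparseable is relayed verbatim at 'error' level.
    '''
    entry = entry.strip()
    try:
        record = json.loads(entry)
        return record[log_msg_key], record['level']
    except json.JSONDecodeError:
        return entry, 'error'
```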
@property
def cuid(self) -> str:
fqcn: str = self.cntr.attrs['Config']['Image']
return f'{fqcn}[{self.cntr.short_id}]'
def try_signal(
self,
signal: str = 'SIGINT',
@ -222,17 +241,23 @@ class Container:
async def cancel(
self,
stop_msg: str,
log_msg_key: str,
stop_predicate: Callable[[str], bool],
hard_kill: bool = False,
) -> None:
'''
Attempt to cancel this container gracefully, fail over to
a hard kill on timeout.
'''
cid = self.cntr.id
# first try a graceful cancel
log.cancel(
f'SIGINT cancelling container: {cid}\n'
f'waiting on stop msg: "{stop_msg}"'
f'SIGINT cancelling container: {self.cuid}\n'
'waiting on stop predicate...'
)
self.try_signal('SIGINT')
@ -243,7 +268,10 @@ class Container:
log.cancel('polling for CNTR logs...')
try:
await self.process_logs_until(stop_msg)
await self.process_logs_until(
log_msg_key,
stop_predicate,
)
except ApplicationLogError:
hard_kill = True
else:
@ -301,12 +329,16 @@ class Container:
async def open_ahabd(
ctx: tractor.Context,
endpoint: str, # ns-pointer str-msg-type
start_timeout: float = 1.0,
loglevel: str | None = 'cancel',
**kwargs,
) -> None:
get_console_log('info', name=__name__)
log = get_console_log(
loglevel,
name=__name__,
)
async with open_docker() as client:
@ -322,37 +354,94 @@ async def open_ahabd(
) = ep_func(client)
cntr = Container(dcntr)
with trio.move_on_after(start_timeout):
found = await cntr.process_logs_until(start_lambda)
conf: ChainMap[str, Any] = ChainMap(
if not found and dcntr not in client.containers.list():
# container specific
cntr_config,
# defaults
{
# startup time limit which is the max the supervisor
# will wait for the container to be registered in
# ``client.containers.list()``
'startup_timeout': 1.0,
# how fast to poll for the startup predicate by sleeping
# this amount incrementally, thus yielding to the
# ``trio`` scheduler during sync polling execution.
'startup_query_period': 0.001,
# str-key value expected to contain log message body-contents
# when read using:
# ``json.loads(entry for entry in DockerContainer.logs())``
'log_msg_key': 'msg',
},
)
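The `ChainMap` gives container-specific config precedence over the supervisor defaults: the first mapping shadows the second on lookup. A minimal standalone sketch of the same layering (the override values mirror the elasticsearch endpoint config elsewhere in this PR):

```python
from collections import ChainMap

# supervisor-level defaults
defaults = {
    'startup_timeout': 1.0,
    'startup_query_period': 0.001,
    'log_msg_key': 'msg',
}

# per-endpoint overrides returned by the container spec func;
# keys present here win, everything else falls through.
es_overrides = {
    'startup_timeout': 240.0,   # tolerant of slow CI image boot
    'log_msg_key': 'message',
}

conf = ChainMap(es_overrides, defaults)
```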
with trio.move_on_after(conf['startup_timeout']) as cs:
async with trio.open_nursery() as tn:
tn.start_soon(
partial(
cntr.process_logs_until,
log_msg_key=conf['log_msg_key'],
patt_matcher=start_lambda,
checkpoint_period=conf['startup_query_period'],
)
)
# poll for container startup or timeout
while not cs.cancel_called:
if dcntr in client.containers.list():
break
await trio.sleep(conf['startup_query_period'])
# sync with remote caller actor-task but allow log
# processing to continue running in bg.
await ctx.started((
cntr.cntr.id,
os.getpid(),
cntr_config,
))
try:
# XXX: if we timeout on finding the "startup msg" we expect then
# we want to FOR SURE raise an error upwards!
if cs.cancelled_caught:
# if dcntr not in client.containers.list():
for entry in cntr.seen_so_far:
log.info(entry)
raise RuntimeError(
f'Failed to start {dcntr.id} check logs deats'
raise DockerNotStarted(
f'Failed to start container: {cntr.cuid}\n'
f'due to startup_timeout={conf["startup_timeout"]}s\n\n'
"prolly you should check your container's logs for deats.."
)
await ctx.started((
cntr.cntr.id,
os.getpid(),
cntr_config,
))
try:
# TODO: we might eventually want a proxy-style msg-prot here
# to allow remote control of containers without needing
# callers to have root perms?
await trio.sleep_forever()
finally:
await cntr.cancel(stop_lambda)
# TODO: ensure loglevel can be set and teardown logs are
# reported if possible on error or cancel..
# XXX WARNING: currently shielding here can result in hangs
# on ctl-c from user.. ideally we can avoid a cancel getting
# consumed and not propagating whilst still doing teardown
# logging..
# with trio.CancelScope(shield=True):
await cntr.cancel(
log_msg_key=conf['log_msg_key'],
stop_predicate=stop_lambda,
)
async def start_ahab(
service_name: str,
endpoint: Callable[docker.DockerClient, DockerContainer],
start_timeout: float = 1.0,
loglevel: str | None = 'cancel',
task_status: TaskStatus[
tuple[
trio.Event,
@ -373,13 +462,12 @@ async def start_ahab(
'''
cn_ready = trio.Event()
try:
async with tractor.open_nursery(
loglevel='runtime',
) as tn:
async with tractor.open_nursery() as an:
portal = await tn.start_actor(
portal = await an.start_actor(
service_name,
enable_modules=[__name__]
enable_modules=[__name__],
loglevel=loglevel,
)
# TODO: we have issues with this on teardown
@ -400,7 +488,7 @@ async def start_ahab(
async with portal.open_context(
open_ahabd,
endpoint=str(NamespacePath.from_ref(endpoint)),
start_timeout=start_timeout
loglevel='cancel',
) as (ctx, first):
cid, pid, cntr_config = first

View File

@ -15,17 +15,11 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from pprint import pformat
from typing import (
Any,
TYPE_CHECKING,
)
import pyqtgraph as pg
import numpy as np
import tractor
if TYPE_CHECKING:
import docker
@ -65,14 +59,14 @@ def start_elasticsearch(
-itd \
--rm \
--network=host \
--mount type=bind,source="$(pwd)"/elastic,target=/usr/share/elasticsearch/data \
--mount type=bind,source="$(pwd)"/elastic,\
target=/usr/share/elasticsearch/data \
--env "elastic_username=elastic" \
--env "elastic_password=password" \
--env "xpack.security.enabled=false" \
elastic
'''
import docker
get_console_log('info', name=__name__)
dcntr: DockerContainer = client.containers.run(
@ -86,7 +80,7 @@ def start_elasticsearch(
async def start_matcher(msg: str):
try:
health = (await asks.get(
f'http://localhost:19200/_cat/health',
'http://localhost:19200/_cat/health',
params={'format': 'json'}
)).json()
@ -102,7 +96,17 @@ def start_elasticsearch(
return (
dcntr,
{},
{
# apparently we're REALLY tolerant of startup latency
# for CI XD
'startup_timeout': 240.0,
# XXX: decrease http poll period bc docker
# is shite at handling fast poll rates..
'startup_query_period': 0.1,
'log_msg_key': 'message',
},
# expected startup and stop msgs
start_matcher,
stop_matcher,

View File

@ -26,7 +26,6 @@
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from datetime import datetime
from pprint import pformat
from typing import (
Any,
Optional,
@ -55,7 +54,7 @@ if TYPE_CHECKING:
import docker
from ._ahab import DockerContainer
from .feed import maybe_open_feed
from ..data.feed import maybe_open_feed
from ..log import get_logger, get_console_log
from .._profile import Profiler
@ -63,11 +62,12 @@ from .._profile import Profiler
log = get_logger(__name__)
# container level config
# ahabd-supervisor and container level config
_config = {
'grpc_listen_port': 5995,
'ws_listen_port': 5993,
'log_level': 'debug',
'startup_timeout': 2,
}
_yaml_config = '''
@ -135,7 +135,7 @@ def start_marketstore(
# create dirs when dne
if not os.path.isdir(config._config_dir):
Path(config._config_dir).mkdir(parents=True, exist_ok=True)
Path(config._config_dir).mkdir(parents=True, exist_ok=True)
if not os.path.isdir(mktsdir):
os.mkdir(mktsdir)
@ -185,7 +185,11 @@ def start_marketstore(
config_dir_mnt,
data_dir_mnt,
],
# XXX: this must be set to allow backgrounding/non-blocking
# usage interaction with the container's process.
detach=True,
# stop_signal='SIGINT',
init=True,
# remove=True,
@ -510,7 +514,6 @@ class Storage:
client = self.client
syms = await client.list_symbols()
print(syms)
if key not in syms:
raise KeyError(f'`{key}` table key not found in\n{syms}?')
@ -627,10 +630,10 @@ async def open_storage_client(
yield Storage(client)
async def tsdb_history_update(
fqsn: Optional[str] = None,
) -> list[str]:
@acm
async def open_tsdb_client(
fqsn: str,
) -> Storage:
# TODO: real-time dedicated task for ensuring
# history consistency between the tsdb, shm and real-time feed..
@ -659,7 +662,7 @@ async def tsdb_history_update(
# - https://github.com/pikers/piker/issues/98
#
profiler = Profiler(
disabled=False, # not pg_profile_enabled(),
disabled=True, # not pg_profile_enabled(),
delayed=False,
)
@ -700,14 +703,10 @@ async def tsdb_history_update(
# profiler('Finished db arrays diffs')
syms = await storage.client.list_symbols()
log.info(f'Existing tsdb symbol set:\n{pformat(syms)}')
profiler(f'listed symbols {syms}')
# TODO: ask if user wants to write history for detected
# available shm buffers?
from tractor.trionics import ipython_embed
await ipython_embed()
syms = await storage.client.list_symbols()
# log.info(f'Existing tsdb symbol set:\n{pformat(syms)}')
# profiler(f'listed symbols {syms}')
yield storage
# for array in [to_append, to_prepend]:
# if array is None:

View File

@ -18,7 +18,7 @@
Annotations for ur faces.
"""
from typing import Callable, Optional
from typing import Callable
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF, QRectF
@ -105,7 +105,7 @@ class LevelMarker(QGraphicsPathItem):
get_level: Callable[..., float],
size: float = 20,
keep_in_view: bool = True,
on_paint: Optional[Callable] = None,
on_paint: Callable | None = None,
) -> None:

View File

@ -24,7 +24,7 @@ from types import ModuleType
from PyQt5.QtCore import QEvent
import trio
from .._daemon import maybe_spawn_brokerd
from ..service import maybe_spawn_brokerd
from . import _event
from ._exec import run_qtractor
from ..data.feed import install_brokerd_search

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -20,7 +20,7 @@ Chart axes graphics and behavior.
"""
from __future__ import annotations
from functools import lru_cache
from typing import Optional, Callable
from typing import Callable
from math import floor
import numpy as np
@ -60,7 +60,8 @@ class Axis(pg.AxisItem):
**kwargs
)
# XXX: pretty sure this makes things slower
# XXX: pretty sure this makes things slower!
# no idea why given we only move labels for the most part?
# self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self.pi = plotitem
@ -190,7 +191,7 @@ class PriceAxis(Axis):
*args,
min_tick: int = 2,
title: str = '',
formatter: Optional[Callable[[float], str]] = None,
formatter: Callable[[float], str] | None = None,
**kwargs
) -> None:
@ -202,8 +203,8 @@ class PriceAxis(Axis):
def set_title(
self,
title: str,
view: Optional[ChartView] = None,
color: Optional[str] = None,
view: ChartView | None = None,
color: str | None = None,
) -> Label:
'''
@ -303,8 +304,9 @@ class DynamicDateAxis(Axis):
viz = chart._vizs[chart.name]
shm = viz.shm
array = shm.array
times = array['time']
i_0, i_l = times[0], times[-1]
ifield = viz.index_field
index = array[ifield]
i_0, i_l = index[0], index[-1]
# edge cases
if (
@ -316,11 +318,13 @@ class DynamicDateAxis(Axis):
(indexes[0] > i_0
and indexes[-1] > i_l)
):
# print(f"x-label indexes empty edge case: {indexes}")
return []
if viz.index_field == 'index':
arr_len = times.shape[0]
if ifield == 'index':
arr_len = index.shape[0]
first = shm._first.value
times = array['time']
epochs = times[
list(
map(

View File

@ -19,9 +19,12 @@ High level chart-widget apis.
'''
from __future__ import annotations
from contextlib import (
contextmanager as cm,
ExitStack,
)
from typing import (
Iterator,
Optional,
TYPE_CHECKING,
)
@ -102,7 +105,7 @@ class GodWidget(QWidget):
super().__init__(parent)
self.search: Optional[SearchWidget] = None
self.search: SearchWidget | None = None
self.hbox = QHBoxLayout(self)
self.hbox.setContentsMargins(0, 0, 0, 0)
@ -116,22 +119,14 @@ class GodWidget(QWidget):
self.hbox.addLayout(self.vbox)
# self.toolbar_layout = QHBoxLayout()
# self.toolbar_layout.setContentsMargins(0, 0, 0, 0)
# self.vbox.addLayout(self.toolbar_layout)
# self.init_timeframes_ui()
# self.init_strategy_ui()
# self.vbox.addLayout(self.hbox)
self._chart_cache: dict[
str,
tuple[LinkedSplits, LinkedSplits],
] = {}
self.hist_linked: Optional[LinkedSplits] = None
self.rt_linked: Optional[LinkedSplits] = None
self._active_cursor: Optional[Cursor] = None
self.hist_linked: LinkedSplits | None = None
self.rt_linked: LinkedSplits | None = None
self._active_cursor: Cursor | None = None
# assigned in the startup func `_async_main()`
self._root_n: trio.Nursery = None
@ -143,15 +138,18 @@ class GodWidget(QWidget):
# and the window does not? Never right?!
# self.reg_for_resize(self)
# TODO: strat loader/saver that we don't need yet.
# def init_strategy_ui(self):
# self.toolbar_layout = QHBoxLayout()
# self.toolbar_layout.setContentsMargins(0, 0, 0, 0)
# self.vbox.addLayout(self.toolbar_layout)
# self.strategy_box = StrategyBoxWidget(self)
# self.toolbar_layout.addWidget(self.strategy_box)
@property
def linkedsplits(self) -> LinkedSplits:
return self.rt_linked
# XXX: strat loader/saver that we don't need yet.
# def init_strategy_ui(self):
# self.strategy_box = StrategyBoxWidget(self)
# self.toolbar_layout.addWidget(self.strategy_box)
def set_chart_symbols(
self,
group_key: tuple[str], # of form <fqsn>.<providername>
@ -263,7 +261,9 @@ class GodWidget(QWidget):
# last had the xlast in view, if so then shift so it's
# still in view, if the user was viewing history then
# do nothing yah?
self.rt_linked.chart.default_view()
self.rt_linked.chart.main_viz.default_view(
do_min_bars=True,
)
# if a history chart instance is already up then
# set the search widget as its sidepane.
@ -372,7 +372,7 @@ class ChartnPane(QFrame):
'''
sidepane: FieldsForm | SearchWidget
hbox: QHBoxLayout
chart: Optional[ChartPlotWidget] = None
chart: ChartPlotWidget | None = None
def __init__(
self,
@ -432,7 +432,7 @@ class LinkedSplits(QWidget):
self.godwidget = godwidget
self.chart: ChartPlotWidget = None # main (ohlc) chart
self.subplots: dict[tuple[str, ...], ChartPlotWidget] = {}
self.subplots: dict[str, ChartPlotWidget] = {}
self.godwidget = godwidget
# placeholder for last appended ``PlotItem``'s bottom axis.
@ -450,7 +450,7 @@ class LinkedSplits(QWidget):
# chart-local graphics state that can be passed to
# a ``graphic_update_cycle()`` call by any task wishing to
# update the UI for a given "chart instance".
self.display_state: Optional[DisplayState] = None
self.display_state: DisplayState | None = None
self._symbol: Symbol = None
@ -480,7 +480,7 @@ class LinkedSplits(QWidget):
def set_split_sizes(
self,
prop: Optional[float] = None,
prop: float | None = None,
) -> None:
'''
@ -494,7 +494,7 @@ class LinkedSplits(QWidget):
prop = 3/8
h = self.height()
histview_h = h * (6/16)
histview_h = h * (4/11)
h = h - histview_h
major = 1 - prop
@ -574,11 +574,11 @@ class LinkedSplits(QWidget):
shm: ShmArray,
flume: Flume,
array_key: Optional[str] = None,
array_key: str | None = None,
style: str = 'line',
_is_main: bool = False,
sidepane: Optional[QWidget] = None,
sidepane: QWidget | None = None,
draw_kwargs: dict = {},
**cpw_kwargs,
@ -634,6 +634,7 @@ class LinkedSplits(QWidget):
axis.pi = cpw.plotItem
cpw.hideAxis('left')
# cpw.removeAxis('left')
cpw.hideAxis('bottom')
if (
@ -750,12 +751,12 @@ class LinkedSplits(QWidget):
# NOTE: back-link the new sub-chart to trigger y-autoranging in
# the (ohlc parent) main chart for this linked set.
if self.chart:
main_viz = self.chart.get_viz(self.chart.name)
self.chart.view.enable_auto_yrange(
src_vb=cpw.view,
viz=main_viz,
)
# if self.chart:
# main_viz = self.chart.get_viz(self.chart.name)
# self.chart.view.enable_auto_yrange(
# src_vb=cpw.view,
# viz=main_viz,
# )
graphics = viz.graphics
data_key = viz.name
@ -793,7 +794,7 @@ class LinkedSplits(QWidget):
def resize_sidepanes(
self,
from_linked: Optional[LinkedSplits] = None,
from_linked: LinkedSplits | None = None,
) -> None:
'''
@ -816,11 +817,17 @@ class LinkedSplits(QWidget):
self.chart.sidepane.setMinimumWidth(sp_w)
# TODO: we should really drop using this type and instead just
# write our own wrapper around `PlotItem`..
# TODO: a general rework of this widget-interface:
# - we should really drop using this type and instead just lever our
# own override of `PlotItem`..
# - possibly rename to class -> MultiChart(pg.PlotWidget):
# where the widget is responsible for containing management
# harness for multi-Viz "view lists" and their associated mode-panes
# (fsp chain, order ctl, feed queue-ing params, actor ctl, etc).
class ChartPlotWidget(pg.PlotWidget):
'''
``GraphicsView`` subtype containing a ``.plotItem: PlotItem`` as well
``PlotWidget`` subtype containing a ``.plotItem: PlotItem`` as well
as a `.pi_overlay: PlotItemOverlay`` which helps manage and overlay flow
graphics view multiple compose view boxes.
@ -861,7 +868,7 @@ class ChartPlotWidget(pg.PlotWidget):
# TODO: load from config
use_open_gl: bool = False,
static_yrange: Optional[tuple[float, float]] = None,
static_yrange: tuple[float, float] | None = None,
parent=None,
**kwargs,
@ -876,7 +883,7 @@ class ChartPlotWidget(pg.PlotWidget):
# NOTE: must be set bfore calling ``.mk_vb()``
self.linked = linkedsplits
self.sidepane: Optional[FieldsForm] = None
self.sidepane: FieldsForm | None = None
# source of our custom interactions
self.cv = self.mk_vb(name)
@ -1010,36 +1017,10 @@ class ChartPlotWidget(pg.PlotWidget):
# )
return line_end, marker_right, r_axis_x
def default_view(
self,
bars_from_y: int = int(616 * 3/8),
y_offset: int = 0,
do_ds: bool = True,
) -> None:
'''
Set the view box to the "default" startup view of the scene.
'''
viz = self.get_viz(self.name)
if not viz:
log.warning(f'`Viz` for {self.name} not loaded yet?')
return
viz.default_view(
bars_from_y,
y_offset,
do_ds,
)
if do_ds:
self.linked.graphics_cycle()
def increment_view(
self,
datums: int = 1,
vb: Optional[ChartView] = None,
vb: ChartView | None = None,
) -> None:
'''
@ -1057,6 +1038,7 @@ class ChartPlotWidget(pg.PlotWidget):
# breakpoint()
return
# should trigger broadcast on all overlays right?
view.setXRange(
min=l + x_shift,
max=r + x_shift,
@ -1069,8 +1051,8 @@ class ChartPlotWidget(pg.PlotWidget):
def overlay_plotitem(
self,
name: str,
index: Optional[int] = None,
axis_title: Optional[str] = None,
index: int | None = None,
axis_title: str | None = None,
axis_side: str = 'right',
axis_kwargs: dict = {},
@ -1119,6 +1101,15 @@ class ChartPlotWidget(pg.PlotWidget):
link_axes=(0,),
)
# hide all axes not named by ``axis_side``
for axname in (
({'bottom'} | allowed_sides) - {axis_side}
):
try:
pi.hideAxis(axname)
except Exception:
pass
# add axis title
# TODO: do we want this API to still work?
# raxis = pi.getAxis('right')
@ -1134,11 +1125,11 @@ class ChartPlotWidget(pg.PlotWidget):
shm: ShmArray,
flume: Flume,
array_key: Optional[str] = None,
array_key: str | None = None,
overlay: bool = False,
color: Optional[str] = None,
color: str | None = None,
add_label: bool = True,
pi: Optional[pg.PlotItem] = None,
pi: pg.PlotItem | None = None,
step_mode: bool = False,
is_ohlc: bool = False,
add_sticky: None | str = 'right',
@ -1197,6 +1188,10 @@ class ChartPlotWidget(pg.PlotWidget):
)
pi.viz = viz
# so that viewboxes are associated 1-to-1 with
# their parent plotitem
pi.vb._viz = viz
assert isinstance(viz.shm, ShmArray)
# TODO: this probably needs its own method?
@ -1209,17 +1204,21 @@ class ChartPlotWidget(pg.PlotWidget):
pi = overlay
if add_sticky:
axis = pi.getAxis(add_sticky)
if pi.name not in axis._stickies:
if pi is not self.plotItem:
overlay = self.pi_overlay
assert pi in overlay.overlays
overlay_axis = overlay.get_axis(
pi,
add_sticky,
)
assert overlay_axis is axis
if pi is not self.plotItem:
# overlay = self.pi_overlay
# assert pi in overlay.overlays
overlay = self.pi_overlay
assert pi in overlay.overlays
axis = overlay.get_axis(
pi,
add_sticky,
)
else:
axis = pi.getAxis(add_sticky)
if pi.name not in axis._stickies:
# TODO: UGH! just make this not here! we should
# be making the sticky from code which has access
@ -1263,7 +1262,7 @@ class ChartPlotWidget(pg.PlotWidget):
shm: ShmArray,
flume: Flume,
array_key: Optional[str] = None,
array_key: str | None = None,
**draw_curve_kwargs,
) -> Viz:
@ -1280,24 +1279,6 @@ class ChartPlotWidget(pg.PlotWidget):
**draw_curve_kwargs,
)
def update_graphics_from_flow(
self,
graphics_name: str,
array_key: Optional[str] = None,
**kwargs,
) -> pg.GraphicsObject:
'''
Update the named internal graphics from ``array``.
'''
viz = self._vizs[array_key or graphics_name]
return viz.update_graphics(
array_key=array_key,
**kwargs,
)
# TODO: pretty sure we can just call the cursor
# directly not? i don't wee why we need special "signal proxies"
# for this lul..
@ -1310,43 +1291,6 @@ class ChartPlotWidget(pg.PlotWidget):
self.sig_mouse_leave.emit(self)
self.scene().leaveEvent(ev)
def maxmin(
self,
name: Optional[str] = None,
bars_range: Optional[tuple[
int, int, int, int, int, int
]] = None,
) -> tuple[float, float]:
'''
Return the max and min y-data values "in view".
If ``bars_range`` is provided use that range.
'''
# TODO: here we should instead look up the ``Viz.shm.array``
# and read directly from shm to avoid copying to memory first
# and then reading it again here.
viz_key = name or self.name
viz = self._vizs.get(viz_key)
if viz is None:
log.error(f"viz {viz_key} doesn't exist in chart {self.name} !?")
return 0, 0
res = viz.maxmin()
if (
res is None
):
mxmn = 0, 0
if not self._on_screen:
self.default_view(do_ds=False)
self._on_screen = True
else:
x_range, read_slc, mxmn = res
return mxmn
def get_viz(
self,
key: str,
@ -1360,3 +1304,32 @@ class ChartPlotWidget(pg.PlotWidget):
@property
def main_viz(self) -> Viz:
return self.get_viz(self.name)
def iter_vizs(self) -> Iterator[Viz]:
return iter(self._vizs.values())
@cm
def reset_graphics_caches(self) -> None:
'''
Reset all managed ``Viz`` (flow) graphics objects
Qt cache modes (to ``NoCache`` mode) on enter and
restore on exit.
'''
with ExitStack() as stack:
for viz in self.iter_vizs():
stack.enter_context(
viz.graphics.reset_cache(),
)
# also reset any downsampled alt-graphics objects which
# might be active.
dsg = viz.ds_graphics
if dsg:
stack.enter_context(
dsg.reset_cache(),
)
try:
yield
finally:
stack.close()
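`reset_graphics_caches()` uses `ExitStack` to enter a dynamic number of per-graphic `reset_cache()` context managers and restore them all on exit. The same stacking pattern sketched with a toy context manager (names here are illustrative, not the chart API):

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def cache_toggle(flags: dict, name: str):
    # stand-in for a per-graphic Qt cache-mode reset:
    # disable on enter, restore on exit
    flags[name] = False
    try:
        yield
    finally:
        flags[name] = True

def reset_all(names: list[str]) -> tuple[dict, dict]:
    flags = {n: True for n in names}
    with ExitStack() as stack:
        for n in names:
            stack.enter_context(cache_toggle(flags, n))
        # all "caches" are disabled while inside the stack
        inside = dict(flags)
    # leaving the `with` unwinds every entered ctx-mngr
    return inside, flags

inside, after = reset_all(['ohlc', 'vlm'])
```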

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -21,7 +21,6 @@ Mouse interaction graphics
from __future__ import annotations
from functools import partial
from typing import (
Optional,
Callable,
TYPE_CHECKING,
)
@ -38,7 +37,10 @@ from ._style import (
_font_small,
_font,
)
from ._axes import YAxisLabel, XAxisLabel
from ._axes import (
YAxisLabel,
XAxisLabel,
)
from ..log import get_logger
if TYPE_CHECKING:
@ -167,7 +169,7 @@ class ContentsLabel(pg.LabelItem):
anchor_at: str = ('top', 'right'),
justify_text: str = 'left',
font_size: Optional[int] = None,
font_size: int | None = None,
) -> None:
@ -338,7 +340,7 @@ class Cursor(pg.GraphicsObject):
self.linked = linkedsplits
self.graphics: dict[str, pg.GraphicsObject] = {}
self.xaxis_label: Optional[XAxisLabel] = None
self.xaxis_label: XAxisLabel | None = None
self.always_show_xlabel: bool = True
self.plots: list['PlotChartWidget'] = [] # type: ignore # noqa
self.active_plot = None

View File

@ -19,7 +19,7 @@ Fast, smooth, sexy curves.
"""
from contextlib import contextmanager as cm
from typing import Optional, Callable
from typing import Callable
import numpy as np
import pyqtgraph as pg
@ -86,7 +86,7 @@ class FlowGraphic(pg.GraphicsObject):
# line styling
color: str = 'bracket',
last_step_color: str | None = None,
fill_color: Optional[str] = None,
fill_color: str | None = None,
style: str = 'solid',
**kwargs
@ -158,14 +158,37 @@ class FlowGraphic(pg.GraphicsObject):
drawn yet, ``None``.
'''
return self._last_line.x1() if self._last_line else None
if self._last_line:
return self._last_line.x1()
return None
# XXX: due to a variety of weird jitter bugs and "smearing"
# artifacts when click-drag panning and viewing history time series,
# we offer this ctx-mngr interface to allow temporarily disabling
# Qt's graphics caching mode; this is now currently used from
# ``ChartView.start/signal_ic()`` methods which also disable the
# rt-display loop when the user is moving around a view.
@cm
def reset_cache(self) -> None:
try:
none = QGraphicsItem.NoCache
log.debug(
f'{self._name} -> CACHE DISABLE: {none}'
)
self.setCacheMode(none)
yield
finally:
mode = self.cache_mode
log.debug(f'{self._name} -> CACHE ENABLE {mode}')
self.setCacheMode(mode)
class Curve(FlowGraphic):
'''
A faster, simpler, append friendly version of
``pyqtgraph.PlotCurveItem`` built for highly customizable real-time
updates.
updates; a graphics object to render a simple "line" plot.
This type is a much stripped down version of a ``pyqtgraph`` style
"graphics object" in the sense that the internal lower level
@ -191,14 +214,14 @@ class Curve(FlowGraphic):
'''
# TODO: can we remove this?
# sub_br: Optional[Callable] = None
# sub_br: Callable | None = None
def __init__(
self,
*args,
# color: str = 'default_lightest',
# fill_color: Optional[str] = None,
# fill_color: str | None = None,
# style: str = 'solid',
**kwargs
@ -248,12 +271,6 @@ class Curve(FlowGraphic):
self.fast_path.clear()
# self.fast_path = None
@cm
def reset_cache(self) -> None:
self.setCacheMode(QtWidgets.QGraphicsItem.NoCache)
yield
self.setCacheMode(QGraphicsItem.DeviceCoordinateCache)
def boundingRect(self):
'''
Compute and then cache our rect.
@ -378,7 +395,6 @@ class Curve(FlowGraphic):
) -> None:
# default line draw last call
# with self.reset_cache():
x = src_data[index_field]
y = src_data[array_key]
@ -406,10 +422,20 @@ class Curve(FlowGraphic):
# element such that the current datum in view can be shown
# (via it's max / min) even when highly zoomed out.
class FlattenedOHLC(Curve):
'''
More or less the exact same as a standard line ``Curve`` above
but meant to handle a traced-and-downsampled OHLC time series.
_
_| | _
|_ | |_ | |
_| => |_| |
| |
|_ |_
The main implementation difference is that ``.draw_last_datum()``
expects an underlying OHLC array for the ``src_data`` input.
'''
# avoids strange dragging/smearing artifacts when panning..
cache_mode: int = QGraphicsItem.NoCache
def draw_last_datum(
self,
path: QPainterPath,
@ -434,7 +460,19 @@ class FlattenedOHLC(Curve):
class StepCurve(Curve):
'''
A familiar rectangle-with-y-height-per-datum type curve:
||
|| ||
|| || ||||
_||_||_||_||||_ where each datum's y-value is drawn as
a nearly full rectangle, each "level" spans some x-step size.
This is most often used for vlm and option OI style curves and/or
the very popular "bar chart".
'''
def declare_paintables(
self,
) -> None:

View File

@ -19,17 +19,20 @@ Data vizualization APIs
'''
from __future__ import annotations
from functools import lru_cache
from math import (
ceil,
floor,
)
from typing import (
Optional,
Literal,
TYPE_CHECKING,
)
import msgspec
from msgspec import (
Struct,
field,
)
import numpy as np
import pyqtgraph as pg
from PyQt5.QtCore import QLineF
@ -225,15 +228,51 @@ def render_baritems(
_sample_rates: set[float] = {1, 60}
class Viz(msgspec.Struct): # , frozen=True):
class ViewState(Struct):
'''
Indexing objects representing the current view x-range -> y-range.
'''
# (xl, xr) "input" view range in x-domain
xrange: tuple[
float | int,
float | int
] | None = None
# TODO: cache the (ixl, ixr) read_slc-into-.array style slice index?
# (ymn, ymx) "output" min and max in viewed y-codomain
yrange: tuple[
float | int,
float | int
] | None = None
# last in view ``ShmArray.array[read_slc]`` data
in_view: np.ndarray | None = None
class Viz(Struct):
'''
(Data) "Visualization" compound type which wraps a real-time
shm array stream with displayed graphics (curves, charts)
for high level access and control as well as efficient incremental
update.
update, oriented around the idea of a "view state".
The intention is for this type to eventually be capable of shm-passing
of incrementally updated graphics stream data between actors.
The (backend) intention is for this interface and type to
eventually be capable of shm-passing of incrementally updated
graphics stream data, thus providing a cross-actor solution to
sharing UI-related update state potentially in a (compressed)
binary-interchange format.
Further, from an interaction-triggers-view-in-UI perspective, this type
operates as a transform:
(x_left, x_right) -> output metrics {ymn, ymx, uppx, ...}
wherein each x-domain range maps to some output set of (graphics
related) vizualization metrics. In further documentation we often
refer to this abstraction as a vizualization curve: Ci. Each Ci is
considered a function which maps an x-range (input view range) to
a multi-variate (metrics) output.
'''
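The transform described above can be sketched as a plain function; this is a hypothetical, simplified version (the `px_width` parameter and the metric names are assumptions for illustration, not the real `Viz` API):

```python
import numpy as np


def view_metrics(
    arr: np.ndarray,
    xl: int,
    xr: int,
    px_width: float = 800.0,
) -> dict:
    '''
    Sketch of a Ci transform: map an x-range (input view range)
    to a multi-variate set of output vizualization metrics.

    '''
    in_view = arr[xl:xr]
    ymn, ymx = float(in_view.min()), float(in_view.max())
    return {
        'ymn': ymn,
        'ymx': ymx,
        # "units-per-pixel" in x: how many datums each pixel spans
        'uppx': max((xr - xl) / px_width, 0),
    }


y = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
m = view_metrics(y, 1, 4)
assert m['ymn'] == 1.0 and m['ymx'] == 4.0
```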
name: str
@ -242,13 +281,17 @@ class Viz(msgspec.Struct): # , frozen=True):
flume: Flume
graphics: Curve | BarItems
# for tracking y-mn/mx for y-axis auto-ranging
yrange: tuple[float, float] = None
vs: ViewState = field(default_factory=ViewState)
# last calculated y-mn/mx from m4 downsample code, this
# is updated in the body of `Renderer.render()`.
ds_yrange: tuple[float, float] | None = None
yrange: tuple[float, float] | None = None
# in some cases a viz may want to change its
# graphical "type" or, "form" when downsampling, to
# start this is only ever an interpolation line.
ds_graphics: Optional[Curve] = None
ds_graphics: Curve | None = None
is_ohlc: bool = False
render: bool = True # toggle for display loop
@ -264,7 +307,7 @@ class Viz(msgspec.Struct): # , frozen=True):
] = 'time'
# downsampling state
# TODO: maybe compound this into a downsampling state type?
_last_uppx: float = 0
_in_ds: bool = False
_index_step: float | None = None
@ -282,20 +325,44 @@ class Viz(msgspec.Struct): # , frozen=True):
tuple[float, float],
] = {}
# cache of median calcs from input read slice hashes
# see `.median()`
_meds: dict[
int,
float,
] = {}
# to make lru_cache-ing work, see
# https://docs.python.org/3/faq/programming.html#how-do-i-cache-method-calls
def __eq__(self, other):
return self._shm._token == other._shm._token
def __hash__(self):
return hash(self._shm._token)
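The `__eq__`/`__hash__` pair above exists so `functools.lru_cache` can key cached method calls on the instance, per the Python FAQ linked in the comment. A minimal sketch of that requirement (the `Series` type and token field are hypothetical stand-ins for the shm-token pattern):

```python
from functools import lru_cache


class Series:
    def __init__(self, token: str, data: tuple) -> None:
        self._token = token
        self.data = data

    # `lru_cache` hashes `self` as part of the cache key, so the
    # instance must be hashable; here identity is the shm-style token.
    def __eq__(self, other) -> bool:
        return self._token == other._token

    def __hash__(self) -> int:
        return hash(self._token)

    @lru_cache(maxsize=8)
    def total(self) -> float:
        return sum(self.data)


s = Series('shm-token-1', (1, 2, 3))
assert s.total() == 6
s.total()
assert Series.total.cache_info().hits >= 1  # second call was cached
```

One caveat worth noting: `lru_cache` on a method keeps a strong reference to `self` alive for the lifetime of the cache entry.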
@property
def shm(self) -> ShmArray:
return self._shm
@property
def index_field(self) -> str:
'''
The column name as ``str`` in the underlying ``._shm: ShmArray``
which will deliver the "index" array.
'''
return self._index_field
def index_step(
self,
reset: bool = False,
) -> float:
'''
Return the size between sample steps in the units of the
x-domain, normally either an ``int`` array index size or an
epoch time in seconds.
'''
# attempt to detect the best step size by scanning a sample of
# the source data.
if self._index_step is None:
@ -378,7 +445,7 @@ class Viz(msgspec.Struct): # , frozen=True):
# TODO: hash the slice instead maybe?
# https://stackoverflow.com/a/29980872
lbar, rbar = ixrng = round(x_range[0]), round(x_range[1])
ixrng = lbar, rbar = round(x_range[0]), round(x_range[1])
if use_caching:
cached_result = self._mxmns.get(ixrng)
@ -389,6 +456,7 @@ class Viz(msgspec.Struct): # , frozen=True):
f'{ixrng} -> {cached_result}'
)
read_slc, mxmn = cached_result
self.vs.yrange = mxmn
return (
ixrng,
read_slc,
@ -421,8 +489,8 @@ class Viz(msgspec.Struct): # , frozen=True):
)
return None
elif self.yrange:
mxmn = self.yrange
elif self.ds_yrange:
mxmn = self.ds_yrange
if do_print:
print(
f'{self.name} M4 maxmin:\n'
@ -455,6 +523,7 @@ class Viz(msgspec.Struct): # , frozen=True):
# cache result for input range
assert mxmn
self._mxmns[ixrng] = (read_slc, mxmn)
self.vs.yrange = mxmn
profiler(f'yrange mxmn cacheing: {x_range} -> {mxmn}')
return (
ixrng,
@ -473,20 +542,11 @@ class Viz(msgspec.Struct): # , frozen=True):
vr.right(),
)
def bars_range(self) -> tuple[int, int, int, int]:
'''
Return a range tuple for the left-view, left-datum, right-datum
and right-view x-indices.
'''
l, start, datum_start, datum_stop, stop, r = self.datums_range()
return l, datum_start, datum_stop, r
def datums_range(
self,
view_range: None | tuple[float, float] = None,
index_field: str | None = None,
array: None | np.ndarray = None,
array: np.ndarray | None = None,
) -> tuple[
int, int, int, int, int, int
@ -499,42 +559,47 @@ class Viz(msgspec.Struct): # , frozen=True):
index_field: str = index_field or self.index_field
if index_field == 'index':
l, r = round(l), round(r)
l: int = round(l)
r: int = round(r)
if array is None:
array = self.shm.array
index = array[index_field]
first = floor(index[0])
last = ceil(index[-1])
# first and last datums in view determined by
# l / r view range.
leftmost = floor(l)
rightmost = ceil(r)
first: int = floor(index[0])
last: int = ceil(index[-1])
# invalid view state
if (
r < l
or l < 0
or r < 0
or (l > last and r > last)
or (
l > last
and r > last
)
):
leftmost = first
rightmost = last
leftmost: int = first
rightmost: int = last
else:
# determine the first and last datums in view from the
# l -> r view range.
rightmost = max(
min(last, rightmost),
min(last, ceil(r)),
first,
)
leftmost = min(
max(first, leftmost),
max(first, floor(l)),
last,
rightmost - 1,
)
assert leftmost < rightmost
# sanity
# assert leftmost < rightmost
self.vs.xrange = leftmost, rightmost
return (
l, # left x-in-view
@ -547,7 +612,7 @@ class Viz(msgspec.Struct): # , frozen=True):
def read(
self,
array_field: Optional[str] = None,
array_field: str | None = None,
index_field: str | None = None,
profiler: None | Profiler = None,
@ -563,11 +628,9 @@ class Viz(msgspec.Struct): # , frozen=True):
'''
index_field: str = index_field or self.index_field
vr = l, r = self.view_range()
# readable data
array = self.shm.array
if profiler:
profiler('self.shm.array READ')
@ -579,7 +642,6 @@ class Viz(msgspec.Struct): # , frozen=True):
ilast,
r,
) = self.datums_range(
view_range=vr,
index_field=index_field,
array=array,
)
@ -595,17 +657,21 @@ class Viz(msgspec.Struct): # , frozen=True):
array,
start_t=lbar,
stop_t=rbar,
step=self.index_step(),
)
# TODO: maybe we should return this from the slicer call
# above?
in_view = array[read_slc]
if in_view.size:
self.vs.in_view = in_view
abs_indx = in_view['index']
abs_slc = slice(
int(abs_indx[0]),
int(abs_indx[-1]),
)
else:
self.vs.in_view = None
if profiler:
profiler(
@ -626,10 +692,11 @@ class Viz(msgspec.Struct): # , frozen=True):
# BUT the ``in_view`` slice DOES..
read_slc = slice(lbar_i, rbar_i)
in_view = array[lbar_i: rbar_i + 1]
self.vs.in_view = in_view
# in_view = array[lbar_i-1: rbar_i+1]
# XXX: same as ^
# to_draw = array[lbar - ifirst:(rbar - ifirst) + 1]
if profiler:
profiler('index arithmetic for slicing')
@ -664,8 +731,8 @@ class Viz(msgspec.Struct): # , frozen=True):
pg.GraphicsObject,
]:
'''
Read latest datums from shm and render to (incrementally)
render to graphics.
Read latest datums from shm and (incrementally) render to
graphics.
'''
profiler = Profiler(
@ -955,9 +1022,11 @@ class Viz(msgspec.Struct): # , frozen=True):
def default_view(
self,
bars_from_y: int = int(616 * 3/8),
min_bars_from_y: int = int(616 * 4/11),
y_offset: int = 0, # in datums
do_ds: bool = True,
do_min_bars: bool = False,
) -> None:
'''
@ -1013,12 +1082,10 @@ class Viz(msgspec.Struct): # , frozen=True):
data_diff = last_datum - first_datum
rl_diff = vr - vl
rescale_to_data: bool = False
# new_uppx: float = 1
if rl_diff > data_diff:
rescale_to_data = True
rl_diff = data_diff
new_uppx: float = data_diff / self.px_width()
# orient by offset from the y-axis including
# space to compensate for the L1 labels.
@ -1027,17 +1094,29 @@ class Viz(msgspec.Struct): # , frozen=True):
offset = l1_offset
if (
rescale_to_data
):
if rescale_to_data:
new_uppx: float = data_diff / self.px_width()
offset = (offset / uppx) * new_uppx
else:
offset = (y_offset * step) + uppx*step
# NOTE: if we are in the midst of start-up and a bunch of
# widgets are spawning/rendering concurrently, it's likely the
# label size above `l1_offset` won't have yet fully rendered.
# Here we try to compensate for that to ensure at least a static
# bar gap between the last datum and the y-axis.
if (
do_min_bars
and offset <= (6 * step)
):
offset = 6 * step
# align right side of view to the rightmost datum + the selected
# offset from above.
r_reset = (self.graphics.x_last() or last_datum) + offset
r_reset = (
self.graphics.x_last() or last_datum
) + offset
# no data is in view so check for the only 2 sane cases:
# - entire view is LEFT of data
@ -1062,12 +1141,20 @@ class Viz(msgspec.Struct): # , frozen=True):
else:
log.warning(f'Unknown view state {vl} -> {vr}')
return
# raise RuntimeError(f'Unknown view state {vl} -> {vr}')
else:
# maintain the l->r view distance
l_reset = r_reset - rl_diff
if (
do_min_bars
and (r_reset - l_reset) < min_bars_from_y
):
l_reset = (
(r_reset + offset)
-
min_bars_from_y * step
)
# remove any custom user yrange settings
if chartw._static_yrange == 'axis':
chartw._static_yrange = None
@ -1079,9 +1166,7 @@ class Viz(msgspec.Struct): # , frozen=True):
)
if do_ds:
# view.interaction_graphics_cycle()
view.maybe_downsample_graphics()
view._set_yrange(viz=self)
view.interact_graphics_cycle()
def incr_info(
self,
@ -1236,3 +1321,149 @@ class Viz(msgspec.Struct): # , frozen=True):
vr, 0,
)
).length()
@lru_cache(maxsize=6116)
def median_from_range(
self,
start: int,
stop: int,
) -> float:
in_view = self.shm.array[start:stop]
if self.is_ohlc:
return np.median(in_view['close'])
else:
return np.median(in_view[self.name])
@lru_cache(maxsize=6116)
def _dispersion(
self,
# xrange: tuple[float, float],
ymn: float,
ymx: float,
yref: float,
) -> tuple[float, float]:
return (
(ymx - yref) / yref,
(ymn - yref) / yref,
)
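The `_dispersion` helper above returns the up/down fractional moves of the viewed y-range relative to a reference level, i.e. `(ymx - yref)/yref` and `(ymn - yref)/yref`. A quick worked check of that arithmetic:

```python
def dispersion(ymn: float, ymx: float, yref: float) -> tuple[float, float]:
    # fractional distance of the view's max/min from the reference level
    return (
        (ymx - yref) / yref,
        (ymn - yref) / yref,
    )


r_up, r_down = dispersion(ymn=90.0, ymx=110.0, yref=100.0)
assert r_up == 0.1       # max is 10% above the reference
assert r_down == -0.1    # min is 10% below it
assert (r_up - r_down) == 0.2  # the 'full' (both sides) dispersion
```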
def disp_from_range(
self,
xrange: tuple[float, float] | None = None,
yref: float | None = None,
method: Literal[
'up',
'down',
'full', # both sides
'both', # both up and down as separate scalars
] = 'full',
) -> float | tuple[float, float] | None:
'''
Return a dispersion metric referenced from an optionally
provided ``yref`` or the left-most datum level by default.
'''
vs = self.vs
yrange = vs.yrange
if yrange is None:
return None
ymn, ymx = yrange
key = 'open' if self.is_ohlc else self.name
yref = yref or vs.in_view[0][key]
# xrange = xrange or vs.xrange
# call into the lru_cache-d sigma calculator method
r_up, r_down = self._dispersion(ymn, ymx, yref)
match method:
case 'full':
return r_up - r_down
case 'up':
return r_up
case 'down':
return r_down
case 'both':
return r_up, r_down
# @lru_cache(maxsize=6116)
def i_from_t(
self,
t: float,
return_y: bool = False,
) -> int | tuple[int, float]:
istart = slice_from_time(
self.vs.in_view,
start_t=t,
stop_t=t,
step=self.index_step(),
).start
if not return_y:
return istart
vs = self.vs
arr = vs.in_view
key = 'open' if self.is_ohlc else self.name
yref = arr[istart][key]
return istart, yref
def scalars_from_index(
self,
xref: float | None = None,
) -> tuple[
int,
float,
float,
float,
]:
'''
Calculate and deliver the log-returns scalars specifically
according to y-data supported on this ``Viz``'s underlying
x-domain data range from ``xref`` -> ``.vs.xrange[1]``.
The main use case for this method (currently) is to generate
scalars which will allow calculating the required y-range for
some "pinned" curve to be aligned *from* the ``xref`` time
stamped datum *to* the curve rendered by THIS viz.
'''
vs = self.vs
arr = vs.in_view
# TODO: make this work by parametrizing over input
# .vs.xrange input for caching?
# read_slc_start = self.i_from_t(xref)
read_slc = slice_from_time(
arr=self.vs.in_view,
start_t=xref,
stop_t=vs.xrange[1],
step=self.index_step(),
)
key = 'open' if self.is_ohlc else self.name
# NOTE: old code, it's no faster right?
# read_slc_start = read_slc.start
# yref = arr[read_slc_start][key]
read = arr[read_slc][key]
yref = read[0]
ymn, ymx = self.vs.yrange
# print(
# f'Viz[{self.name}].scalars_from_index(xref={xref})\n'
# f'read_slc: {read_slc}\n'
# f'ymnmx: {(ymn, ymx)}\n'
# )
return (
read_slc.start,
yref,
(ymx - yref) / yref,
(ymn - yref) / yref,
)
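The scalars returned above are simple fractional returns relative to the `xref` datum; a "pinned" overlay's y-range can then be derived from them so both curves align at that datum. A hedged sketch of that use (function names and numbers are hypothetical, not the real pinning API):

```python
def scalars(yref: float, ymn: float, ymx: float) -> tuple[float, float]:
    # fractional up/down moves of this curve's viewed y-range
    # vs the reference datum at `xref`
    return (
        (ymx - yref) / yref,
        (ymn - yref) / yref,
    )


def pinned_yrange(
    other_yref: float,
    up: float,
    down: float,
) -> tuple[float, float]:
    # scale another curve's y-range so its xref datum stays aligned
    # with ours: same fractional moves applied to its reference level.
    return (
        other_yref * (1 + down),
        other_yref * (1 + up),
    )


up, down = scalars(yref=100.0, ymn=95.0, ymx=120.0)
lo, hi = pinned_yrange(50.0, up, down)
assert abs(lo - 47.5) < 1e-9 and abs(hi - 60.0) < 1e-9
```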

View File

@ -21,18 +21,18 @@ this module ties together quote and computational (fsp) streams with
graphics update methods via our custom ``pyqtgraph`` charting api.
'''
from functools import partial
import itertools
from math import floor
import time
from typing import (
Optional,
Any,
TYPE_CHECKING,
)
import tractor
import trio
import pyqtgraph as pg
# import pendulum
from msgspec import field
@ -82,6 +82,9 @@ from .._profile import (
from ..log import get_logger
from .._profile import Profiler
if TYPE_CHECKING:
from ._interaction import ChartView
log = get_logger(__name__)
@ -146,12 +149,11 @@ def multi_maxmin(
profiler(f'vlm_viz.maxmin({read_slc})')
return (
mx,
# enforcing price can't be negative?
# TODO: do we even need this?
max(mn, 0),
mx,
mx_vlm_in_view, # vlm max
)
@ -183,29 +185,23 @@ class DisplayState(Struct):
# misc state tracking
vars: dict[str, Any] = field(
default_factory=lambda: {
'tick_margin': 0,
'i_last': 0,
'i_last_append': 0,
'last_mx_vlm': 0,
'last_mx': 0,
'last_mn': 0,
}
)
hist_vars: dict[str, Any] = field(
default_factory=lambda: {
'tick_margin': 0,
'i_last': 0,
'i_last_append': 0,
'last_mx_vlm': 0,
'last_mx': 0,
'last_mn': 0,
}
)
globalz: None | dict[str, Any] = None
vlm_chart: Optional[ChartPlotWidget] = None
vlm_sticky: Optional[YAxisLabel] = None
vlm_chart: ChartPlotWidget | None = None
vlm_sticky: YAxisLabel | None = None
wap_in_history: bool = False
@ -261,7 +257,10 @@ async def increment_history_view(
profiler('`hist Viz.update_graphics()` call')
if liv:
hist_viz.plot.vb._set_yrange(viz=hist_viz)
hist_viz.plot.vb.interact_graphics_cycle(
do_linked_charts=False,
do_overlay_scaling=True, # always overlayT slow chart
)
profiler('hist chart yrange view')
# check if tread-in-place view x-shift is needed
@ -351,8 +350,8 @@ async def graphics_update_loop(
vlm_viz = vlm_chart._vizs.get('volume') if vlm_chart else None
(
last_mx,
last_mn,
last_mx,
last_mx_vlm,
) = multi_maxmin(
None,
@ -379,9 +378,6 @@ async def graphics_update_loop(
# levels this might be dark volume we need to
# present differently -> likely dark vlm
tick_size = symbol.tick_size
tick_margin = 3 * tick_size
fast_chart.show()
last_quote_s = time.time()
@ -389,7 +385,6 @@ async def graphics_update_loop(
'fqsn': fqsn,
'godwidget': godwidget,
'quotes': {},
# 'maxmin': maxmin,
'flume': flume,
@ -406,12 +401,11 @@ async def graphics_update_loop(
'l1': l1,
'vars': {
'tick_margin': tick_margin,
'i_last': 0,
'i_last_append': 0,
'last_mx_vlm': last_mx_vlm,
'last_mx': last_mx,
'last_mn': last_mn,
# 'last_mx': last_mx,
# 'last_mn': last_mn,
},
'globalz': globalz,
})
@ -422,7 +416,9 @@ async def graphics_update_loop(
ds.vlm_chart = vlm_chart
ds.vlm_sticky = vlm_sticky
fast_chart.default_view()
fast_chart.main_viz.default_view(
do_min_bars=True,
)
# ds.hist_vars.update({
# 'i_last_append': 0,
@ -474,7 +470,7 @@ async def graphics_update_loop(
fast_chart.pause_all_feeds()
continue
ic = fast_chart.view._ic
ic = fast_chart.view._in_interact
if ic:
fast_chart.pause_all_feeds()
print(f'{fqsn} PAUSING DURING INTERACTION')
@ -494,7 +490,7 @@ def graphics_update_cycle(
wap_in_history: bool = False,
trigger_all: bool = False, # flag used by prepend history updates
prepend_update_index: Optional[int] = None,
prepend_update_index: int | None = None,
) -> None:
@ -517,7 +513,7 @@ def graphics_update_cycle(
chart = ds.chart
vlm_chart = ds.vlm_chart
varz = ds.vars
# varz = ds.vars
l1 = ds.l1
flume = ds.flume
ohlcv = flume.rt_shm
@ -527,8 +523,6 @@ def graphics_update_cycle(
main_viz = ds.viz
index_field = main_viz.index_field
tick_margin = varz['tick_margin']
(
uppx,
liv,
@ -547,35 +541,37 @@ def graphics_update_cycle(
# them as an additional graphic.
clear_types = _tick_groups['clears']
mx = varz['last_mx']
mn = varz['last_mn']
mx_vlm_in_view = varz['last_mx_vlm']
# TODO: fancier y-range sorting..
# https://github.com/pikers/piker/issues/325
# - a proper streaming mxmn algo as per above issue.
# - we should probably scale the view margin based on the size of
# the true range? This way you can slap in orders outside the
# current L1 (only) book range.
main_vb: ChartView = main_viz.plot.vb
this_viz: Viz = chart._vizs[fqsn]
this_vb: ChartView = this_viz.plot.vb
this_yr = this_vb._yrange
if this_yr:
lmn, lmx = this_yr
else:
lmn = lmx = 0
mn: float = lmn
mx: float = lmx
mx_vlm_in_view: float | None = None
yrange_margin = 0.09
# update ohlc sampled price bars
if (
# do_rt_update
# or do_px_step
(liv and do_px_step)
or trigger_all
):
# TODO: i think we're double calling this right now
# since .interact_graphics_cycle() also calls it?
# I guess we can add a guard in there?
_, i_read_range, _ = main_viz.update_graphics()
profiler('`Viz.update_graphics()` call')
(
mx_in_view,
mn_in_view,
mx_vlm_in_view,
) = multi_maxmin(
i_read_range,
main_viz,
ds.vlm_viz,
profiler,
)
mx = mx_in_view + tick_margin
mn = mn_in_view - tick_margin
profiler(f'{fqsn} `multi_maxmin()` call')
# don't real-time "shift" the curve to the
# left unless we get one of the following:
if (
@ -583,7 +579,6 @@ def graphics_update_cycle(
or trigger_all
):
chart.increment_view(datums=append_diff)
# main_viz.plot.vb._set_yrange(viz=main_viz)
# NOTE: since vlm and ohlc charts are axis linked now we don't
# need the double increment request?
@ -592,6 +587,21 @@ def graphics_update_cycle(
profiler('view incremented')
# NOTE: do this **after** the tread to ensure we take the yrange
# from the most current view x-domain.
(
mn,
mx,
mx_vlm_in_view,
) = multi_maxmin(
i_read_range,
main_viz,
ds.vlm_viz,
profiler,
)
profiler(f'{fqsn} `multi_maxmin()` call')
# iterate frames of ticks-by-type such that we only update graphics
# using the last update per type where possible.
ticks_by_type = quote.get('tbt', {})
@ -613,8 +623,22 @@ def graphics_update_cycle(
# TODO: make sure IB doesn't send ``-1``!
and price > 0
):
mx = max(price + tick_margin, mx)
mn = min(price - tick_margin, mn)
if (
price < mn
):
mn = price
yrange_margin = 0.16
# # print(f'{this_viz.name} new MN from TICK {mn}')
if (
price > mx
):
mx = price
yrange_margin = 0.16
# # print(f'{this_viz.name} new MX from TICK {mx}')
# mx = max(price, mx)
# mn = min(price, mn)
# clearing price update:
# generally, we only want to update graphics from the *last*
@ -677,14 +701,16 @@ def graphics_update_cycle(
# Y-autoranging: adjust y-axis limits based on state tracking
# of previous "last" L1 values which are in view.
lmx = varz['last_mx']
lmn = varz['last_mn']
mx_diff = mx - lmx
mn_diff = mn - lmn
mx_diff = mx - lmx
if (
mx_diff
or mn_diff
mn_diff or mx_diff # covers all cases below?
# (mx - lmx) > 0 # upward expansion
# or (mn - lmn) < 0 # downward expansion
# or (lmx - mx) > 0 # upward contraction
# or (lmn - mn) < 0 # downward contraction
):
# complain about out-of-range outliers which can show up
# in certain annoying feeds (like ib)..
@ -703,53 +729,77 @@ def graphics_update_cycle(
f'mn_diff: {mn_diff}\n'
)
# FAST CHART resize case
# TODO: track local liv maxmin without doing a recompute all the
# time..plus, just generally the user is more likely to be
# zoomed out enough on the slow chart that this is never an
# issue (the last datum going out of y-range).
# FAST CHART y-auto-range resize case
elif (
liv
and not chart._static_yrange == 'axis'
):
main_vb = main_viz.plot.vb
# NOTE: this auto-yranging approach is a sort of, hybrid,
# between always aligning overlays to the their common ref
# sample and not updating at all:
# - whenever an interaction happens the overlays are scaled
# to one another and thus are ref-point aligned and
# scaled.
# - on treads and range updates due to new mn/mx from last
# datum, we don't scale to the overlayT instead only
# adjusting when the latest datum is outside the previous
# dispersion range.
mn = min(mn, lmn)
mx = max(mx, lmx)
if (
main_vb._ic is None
or not main_vb._ic.is_set()
main_vb._in_interact is None
or not main_vb._in_interact.is_set()
):
yr = (mn, mx)
# print(
# f'MAIN VIZ yrange update\n'
# f'{fqsn}: {yr}'
# )
main_vb._set_yrange(
# TODO: we should probably scale
# the view margin based on the size
# of the true range? This way you can
# slap in orders outside the current
# L1 (only) book range.
# range_margin=0.1,
yrange=yr
# print(f'SETTING Y-mnmx -> {main_viz.name}: {(mn, mx)}')
this_vb.interact_graphics_cycle(
do_linked_charts=False,
# TODO: we could optionally offer always doing this
# on treads thus always keeping fast-chart overlays
# aligned by their LHS datum?
do_overlay_scaling=False,
yrange_kwargs={
this_viz: {
'yrange': (mn, mx),
'range_margin': yrange_margin,
},
}
)
profiler('main vb y-autorange')
# SLOW CHART resize case
(
_,
hist_liv,
_,
_,
_,
_,
_,
) = hist_viz.incr_info(
ds=ds,
is_1m=True,
)
profiler('hist `Viz.incr_info()`')
# SLOW CHART y-auto-range resize case
# (NOTE: this is still inside the y-range
# guard block above!)
# (
# _,
# hist_liv,
# _,
# _,
# _,
# _,
# _,
# ) = hist_viz.incr_info(
# ds=ds,
# is_1m=True,
# )
# if hist_liv:
# times = hist_viz.shm.array['time']
# last_t = times[-1]
# dt = pendulum.from_timestamp(last_t)
# log.info(
# f'{hist_viz.name} TIMESTEP:'
# f'epoch: {last_t}\n'
# f'datetime: {dt}\n'
# )
# profiler('hist `Viz.incr_info()`')
# TODO: track local liv maxmin without doing a recompute all the
# time..plus, just generally the user is more likely to be
# zoomed out enough on the slow chart that this is never an
# issue (the last datum going out of y-range).
# hist_chart = ds.hist_chart
# if (
# hist_liv
@ -764,7 +814,8 @@ def graphics_update_cycle(
# XXX: update this every draw cycle to ensure y-axis auto-ranging
# only adjusts when the in-view data co-domain actually expands or
# contracts.
varz['last_mx'], varz['last_mn'] = mx, mn
# varz['last_mn'] = mn
# varz['last_mx'] = mx
# TODO: a similar, only-update-full-path-on-px-step approach for all
# fsp overlays and vlm stuff..
@ -772,10 +823,12 @@ def graphics_update_cycle(
# run synchronous update on all `Viz` overlays
for curve_name, viz in chart._vizs.items():
if viz.is_ohlc:
continue
# update any overlayed fsp flows
if (
curve_name != fqsn
and not viz.is_ohlc
):
update_fsp_chart(
viz,
@ -788,8 +841,7 @@ def graphics_update_cycle(
# px column to give the user the mx/mn
# range of that set.
if (
curve_name != fqsn
and liv
liv
# and not do_px_step
# and not do_rt_update
):
@ -809,8 +861,14 @@ def graphics_update_cycle(
# TODO: can we unify this with the above loop?
if vlm_chart:
vlm_vizs = vlm_chart._vizs
main_vlm_viz = vlm_vizs['volume']
main_vlm_vb = main_vlm_viz.plot.vb
# TODO: we should probably read this
# from the `Viz.vs: ViewState`!
vlm_yr = main_vlm_vb._yrange
if vlm_yr:
(_, vlm_ymx) = vlm_yrange = vlm_yr
# always update y-label
ds.vlm_sticky.update_from_data(
@ -848,16 +906,30 @@ def graphics_update_cycle(
profiler('`main_vlm_viz.update_graphics()`')
if (
mx_vlm_in_view != varz['last_mx_vlm']
mx_vlm_in_view
and vlm_yr
and mx_vlm_in_view != vlm_ymx
):
varz['last_mx_vlm'] = mx_vlm_in_view
# vlm_yr = (0, mx_vlm_in_view * 1.375)
# vlm_chart.view._set_yrange(yrange=vlm_yr)
# profiler('`vlm_chart.view._set_yrange()`')
# in this case we want to scale all overlays in the
# sub-chart but only incrementally update the vlm since
# we already calculated the new range above.
# TODO: in theory we can incrementally update all
# overlays as well though it will require iteration of
# them here in the display loop right?
main_vlm_viz.plot.vb.interact_graphics_cycle(
do_overlay_scaling=True,
do_linked_charts=False,
yrange_kwargs={
main_vlm_viz: {
'yrange': vlm_yrange,
# 'range_margin': yrange_margin,
},
},
)
profiler('`vlm_chart.view.interact_graphics_cycle()`')
# update all downstream FSPs
for curve_name, viz in vlm_vizs.items():
if curve_name == 'volume':
continue
@ -882,10 +954,13 @@ def graphics_update_cycle(
# XXX: without this we get completely
# mangled/empty vlm display subchart..
# fvb = viz.plot.vb
# fvb._set_yrange(
# viz=viz,
# fvb.interact_graphics_cycle(
# do_linked_charts=False,
# do_overlay_scaling=False,
# )
profiler(f'vlm `Viz[{viz.name}].plot.vb._set_yrange()`')
profiler(
f'Viz[{viz.name}].plot.vb.interact_graphics_cycle()`'
)
# even if we're downsampled bigly
# draw the last datum in the final
@ -1224,6 +1299,9 @@ async def display_symbol_data(
# to avoid internal pane creation.
# sidepane=False,
sidepane=godwidget.search,
draw_kwargs={
'last_step_color': 'original',
},
)
# ensure the last datum graphic is generated
@ -1242,6 +1320,9 @@ async def display_symbol_data(
# in the case of history chart we explicitly set `False`
# to avoid internal pane creation.
sidepane=pp_pane,
draw_kwargs={
'last_step_color': 'original',
},
)
rt_viz = rt_chart.get_viz(fqsn)
pis.setdefault(fqsn, [None, None])[0] = rt_chart.plotItem
@ -1308,13 +1389,6 @@ async def display_symbol_data(
name=fqsn,
axis_title=fqsn,
)
# only show a singleton bottom-bottom axis by default.
hist_pi.hideAxis('bottom')
# XXX: TODO: THIS WILL CAUSE A GAP ON OVERLAYS,
# i think it needs to be "removed" instead when there
# are none?
hist_pi.hideAxis('left')
hist_viz = hist_chart.draw_curve(
fqsn,
@ -1333,10 +1407,6 @@ async def display_symbol_data(
# for zoom-interaction purposes.
hist_viz.draw_last(array_key=fqsn)
hist_pi.vb.maxmin = partial(
hist_chart.maxmin,
name=fqsn,
)
# TODO: we need a better API to do this..
# specially store ref to shm for lookup in display loop
# since only a placeholder of `None` is entered in
@ -1350,9 +1420,6 @@ async def display_symbol_data(
axis_title=fqsn,
)
rt_pi.hideAxis('left')
rt_pi.hideAxis('bottom')
rt_viz = rt_chart.draw_curve(
fqsn,
ohlcv,
@ -1365,10 +1432,6 @@ async def display_symbol_data(
color=bg_chart_color,
last_step_color=bg_last_bar_color,
)
rt_pi.vb.maxmin = partial(
rt_chart.maxmin,
name=fqsn,
)
# TODO: we need a better API to do this..
# specially store ref to shm for lookup in display loop
@ -1395,7 +1458,9 @@ async def display_symbol_data(
for fqsn, flume in feed.flumes.items():
# size view to data prior to order mode init
rt_chart.default_view()
rt_chart.main_viz.default_view(
do_min_bars=True,
)
rt_linked.graphics_cycle()
# TODO: look into this because not sure why it was
@ -1406,7 +1471,9 @@ async def display_symbol_data(
# determine if auto-range adjustements should be made.
# rt_linked.subplots.pop('volume', None)
hist_chart.default_view()
hist_chart.main_viz.default_view(
do_min_bars=True,
)
hist_linked.graphics_cycle()
godwidget.resize_all()
@ -1449,10 +1516,14 @@ async def display_symbol_data(
# default view adjuments and sidepane alignment
# as final default UX touch.
rt_chart.default_view()
rt_chart.main_viz.default_view(
do_min_bars=True,
)
await trio.sleep(0)
hist_chart.default_view()
hist_chart.main_viz.default_view(
do_min_bars=True,
)
hist_viz = hist_chart.get_viz(fqsn)
await trio.sleep(0)

View File

@ -21,7 +21,6 @@ Higher level annotation editors.
from __future__ import annotations
from collections import defaultdict
from typing import (
Optional,
TYPE_CHECKING
)
@ -67,7 +66,7 @@ class ArrowEditor(Struct):
x: float,
y: float,
color='default',
pointing: Optional[str] = None,
pointing: str | None = None,
) -> pg.ArrowItem:
'''
@ -221,7 +220,7 @@ class LineEditor(Struct):
line: LevelLine = None,
uuid: str = None,
) -> Optional[LevelLine]:
) -> LevelLine | None:
'''Remove a line by reference or uuid.
If no lines or ids are provided remove all lines under the

View File

@ -49,7 +49,7 @@ from qdarkstyle import DarkPalette
import trio
from outcome import Error
from .._daemon import (
from ..service import (
maybe_open_pikerd,
get_tractor_runtime_kwargs,
)

View File

@ -23,7 +23,9 @@ from contextlib import asynccontextmanager
from functools import partial
from math import floor
from typing import (
Optional, Any, Callable, Awaitable
Any,
Callable,
Awaitable,
)
import trio
@ -263,7 +265,7 @@ class Selection(QComboBox):
def set_icon(
self,
key: str,
icon_name: Optional[str],
icon_name: str | None,
) -> None:
self.setItemIcon(
@ -344,7 +346,7 @@ class FieldsForm(QWidget):
name: str,
font_size: Optional[int] = None,
font_size: int | None = None,
font_color: str = 'default_lightest',
) -> QtGui.QLabel:
@ -469,7 +471,7 @@ def mk_form(
parent: QWidget,
fields_schema: dict,
font_size: Optional[int] = None,
font_size: int | None = None,
) -> FieldsForm:
@ -628,7 +630,7 @@ def mk_fill_status_bar(
parent_pane: QWidget,
form: FieldsForm,
pane_vbox: QVBoxLayout,
label_font_size: Optional[int] = None,
label_font_size: int | None = None,
) -> (
# TODO: turn this into a composite?
@ -738,7 +740,7 @@ def mk_fill_status_bar(
def mk_order_pane_layout(
parent: QWidget,
# accounts: dict[str, Optional[str]],
# accounts: dict[str, str | None],
) -> FieldsForm:

View File

@ -24,7 +24,10 @@ from contextlib import asynccontextmanager as acm
from functools import partial
import inspect
from itertools import cycle
from typing import Optional, AsyncGenerator, Any
from typing import (
AsyncGenerator,
Any,
)
import numpy as np
import msgspec
@ -80,7 +83,7 @@ def has_vlm(ohlcv: ShmArray) -> bool:
def update_fsp_chart(
viz,
graphics_name: str,
array_key: Optional[str],
array_key: str | None,
**kwargs,
) -> None:
@ -476,7 +479,7 @@ class FspAdmin:
target: Fsp,
conf: dict[str, dict[str, Any]],
worker_name: Optional[str] = None,
worker_name: str | None = None,
loglevel: str = 'info',
) -> (Flume, trio.Event):
@ -608,10 +611,11 @@ async def open_vlm_displays(
linked: LinkedSplits,
flume: Flume,
dvlm: bool = True,
loglevel: str = 'info',
task_status: TaskStatus[ChartPlotWidget] = trio.TASK_STATUS_IGNORED,
) -> ChartPlotWidget:
) -> None:
'''
Volume subchart displays.
@ -666,7 +670,6 @@ async def open_vlm_displays(
# built-in vlm which we plot ASAP since it's
# usually data provided directly with OHLC history.
shm = ohlcv
# ohlc_chart = linked.chart
vlm_chart = linked.add_plot(
name='volume',
@ -690,7 +693,14 @@ async def open_vlm_displays(
# the axis on the left it's totally not lined up...
# show volume units value on LHS (for dinkus)
# vlm_chart.hideAxis('right')
# vlm_chart.showAxis('left')
vlm_chart.hideAxis('left')
# TODO: is it worth being able to remove axes (from i guess
# a perf perspective) enough that we can actually do this and
# other axis related calls (for eg. label updates in the
# display loop) don't raise when the axis can't be loaded and
# thus would normally cause many label related calls to crash?
# axis = vlm_chart.removeAxis('left')
# send back new chart to caller
task_status.started(vlm_chart)
@ -704,17 +714,9 @@ async def open_vlm_displays(
# read from last calculated value
value = shm.array['volume'][-1]
last_val_sticky.update_from_data(-1, value)
_, _, vlm_curve = vlm_chart.update_graphics_from_flow(
'volume',
)
# size view to data once at outset
vlm_chart.view._set_yrange(
viz=vlm_viz
)
_, _, vlm_curve = vlm_viz.update_graphics()
# add axis title
axis = vlm_chart.getAxis('right')
@ -722,7 +724,6 @@ async def open_vlm_displays(
if dvlm:
tasks_ready = []
# spawn and overlay $ vlm on the same subchart
dvlm_flume, started = await admin.start_engine_task(
dolla_vlm,
@ -736,22 +737,8 @@ async def open_vlm_displays(
},
},
},
# loglevel,
loglevel,
)
tasks_ready.append(started)
# FIXME: we should error on starting the same fsp right
# since it might collide with existing shm.. or wait we
# had this before??
# dolla_vlm
tasks_ready.append(started)
# profiler(f'created shm for fsp actor: {display_name}')
# wait for all engine tasks to startup
async with trio.open_nursery() as n:
for event in tasks_ready:
n.start_soon(event.wait)
# dolla vlm overlay
# XXX: the main chart already contains a vlm "units" axis
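The startup-sync pattern in the hunk above collects one `trio.Event` per spawned fsp engine task into `tasks_ready`, then waits on them all concurrently in a nursery before overlaying curves. A dependency-free sketch of that pattern, using stdlib `asyncio` in place of `trio` (so `gather` plays the role of the nursery; all names here are illustrative):

```python
import asyncio

async def wait_all_ready(tasks_ready: list[asyncio.Event]) -> None:
    # wait for every engine-startup event concurrently, the
    # asyncio analogue of `n.start_soon(event.wait)` per event
    await asyncio.gather(*(evt.wait() for evt in tasks_ready))

async def main() -> bool:
    tasks_ready = [asyncio.Event() for _ in range(3)]

    async def engines_come_up() -> None:
        # each "engine" signals started after a checkpoint
        for evt in tasks_ready:
            await asyncio.sleep(0)
            evt.set()

    await asyncio.gather(wait_all_ready(tasks_ready), engines_come_up())
    return all(evt.is_set() for evt in tasks_ready)

print(asyncio.run(main()))
```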
@ -774,10 +761,6 @@ async def open_vlm_displays(
},
)
# TODO: should this maybe be implicit based on input args to
# `.overlay_plotitem()` above?
dvlm_pi.hideAxis('bottom')
# all to be overlayed curve names
dvlm_fields = [
'dolla_vlm',
@ -827,6 +810,7 @@ async def open_vlm_displays(
)
assert viz.plot is pi
await started.wait()
chart_curves(
dvlm_fields,
dvlm_pi,
@ -835,19 +819,17 @@ async def open_vlm_displays(
step_mode=True,
)
# spawn flow rates fsp **ONLY AFTER** the 'dolla_vlm' fsp is
# up since this one depends on it.
# NOTE: spawn flow rates fsp **ONLY AFTER** the 'dolla_vlm' fsp is
# up since calculating vlm "rates" obvs first requires the
# underlying vlm event feed ;)
fr_flume, started = await admin.start_engine_task(
flow_rates,
{ # fsp engine conf
'func_name': 'flow_rates',
'zero_on_step': True,
},
# loglevel,
loglevel,
)
await started.wait()
# chart_curves(
# dvlm_rate_fields,
# dvlm_pi,
@ -859,13 +841,15 @@ async def open_vlm_displays(
# hide the original vlm curve since the $vlm one is now
# displayed and the curves are effectively the same minus
# liquidity events (well at least on low OHLC periods - 1s).
vlm_curve.hide()
# vlm_curve.hide()
vlm_chart.removeItem(vlm_curve)
vlm_viz = vlm_chart._vizs['volume']
vlm_viz.render = False
# avoid range sorting on volume once disabled
vlm_chart.view.disable_auto_yrange()
# NOTE: DON'T DO THIS.
# WHY: we want range sorting on volume for the RHS label!
# -> if you don't want that then use this but likely you
# only will if we decide to drop unit vlm..
# vlm_viz.render = False
# Trade rate overlay
# XXX: requires an additional overlay for
@ -888,8 +872,8 @@ async def open_vlm_displays(
},
)
tr_pi.hideAxis('bottom')
await started.wait()
chart_curves(
trade_rate_fields,
tr_pi,

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -14,16 +14,17 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
'''
Chart view box primitives
"""
'''
from __future__ import annotations
from contextlib import asynccontextmanager
from functools import partial
from contextlib import (
asynccontextmanager,
ExitStack,
)
import time
from typing import (
Optional,
Callable,
TYPE_CHECKING,
)
@ -40,6 +41,7 @@ import trio
from ..log import get_logger
from .._profile import Profiler
from .._profile import pg_profile_enabled, ms_slower_then
from .view_mode import overlay_viewlists
# from ._style import _min_points_to_show
from ._editors import SelectRect
from . import _event
@ -73,7 +75,7 @@ ORDER_MODE = {
async def handle_viewmode_kb_inputs(
view: 'ChartView',
view: ChartView,
recv_chan: trio.abc.ReceiveChannel,
) -> None:
@ -87,7 +89,7 @@ async def handle_viewmode_kb_inputs(
last = time.time()
action: str
on_next_release: Optional[Callable] = None
on_next_release: Callable | None = None
# for quick key sequence-combo pattern matching
# we have a min_tap period and these should not
@ -142,6 +144,23 @@ async def handle_viewmode_kb_inputs(
if mods == Qt.ControlModifier:
ctrl = True
# UI REPL-shell
if (
ctrl and key in {
Qt.Key_U,
}
):
import tractor
god = order_mode.godw # noqa
feed = order_mode.feed # noqa
chart = order_mode.chart # noqa
viz = chart.main_viz # noqa
vlm_chart = chart.linked.subplots['volume'] # noqa
vlm_viz = vlm_chart.main_viz # noqa
dvlm_pi = vlm_chart._vizs['dolla_vlm'].plot # noqa
await tractor.breakpoint()
view.interact_graphics_cycle()
# SEARCH MODE #
# ctlr-<space>/<l> for "lookup", "search" -> open search tree
if (
@ -169,9 +188,13 @@ async def handle_viewmode_kb_inputs(
# View modes
if key == Qt.Key_R:
# TODO: set this for all subplots
# edge triggered default view activation
view.chart.default_view()
# NOTE: seems that if we don't yield a Qt render
# cycle then the m4 downsampled curves will show here
# without another reset..
view._viz.default_view()
view.interact_graphics_cycle()
await trio.sleep(0)
view.interact_graphics_cycle()
if len(fast_key_seq) > 1:
# begin matches against sequences
@ -313,7 +336,7 @@ async def handle_viewmode_kb_inputs(
async def handle_viewmode_mouse(
view: 'ChartView',
view: ChartView,
recv_chan: trio.abc.ReceiveChannel,
) -> None:
@ -359,7 +382,7 @@ class ChartView(ViewBox):
name: str,
parent: pg.PlotItem = None,
static_yrange: Optional[tuple[float, float]] = None,
static_yrange: tuple[float, float] | None = None,
**kwargs,
):
@ -392,8 +415,13 @@ class ChartView(ViewBox):
self.order_mode: bool = False
self.setFocusPolicy(QtCore.Qt.StrongFocus)
self._ic = None
self._yranger: Callable | None = None
self._in_interact: trio.Event | None = None
self._interact_stack: ExitStack = ExitStack()
# TODO: probably just assign this whenever a new `PlotItem` is
# allocated since they're 1to1 with views..
self._viz: Viz | None = None
self._yrange: tuple[float, float] | None = None
def start_ic(
self,
@ -403,10 +431,15 @@ class ChartView(ViewBox):
to any interested task waiters.
'''
if self._ic is None:
if self._in_interact is None:
chart = self.chart
try:
self.chart.pause_all_feeds()
self._ic = trio.Event()
self._in_interact = trio.Event()
chart.pause_all_feeds()
self._interact_stack.enter_context(
chart.reset_graphics_caches()
)
except RuntimeError:
pass
@ -420,11 +453,13 @@ class ChartView(ViewBox):
to any waiters.
'''
if self._ic:
if self._in_interact:
try:
self._ic.set()
self._ic = None
self._interact_stack.close()
self.chart.resume_all_feeds()
self._in_interact.set()
self._in_interact = None
except RuntimeError:
pass
@ -432,7 +467,7 @@ class ChartView(ViewBox):
async def open_async_input_handler(
self,
) -> 'ChartView':
) -> ChartView:
async with (
_event.open_handlers(
@ -492,7 +527,7 @@ class ChartView(ViewBox):
# don't zoom more then the min points setting
viz = chart.get_viz(chart.name)
vl, lbar, rbar, vr = viz.bars_range()
_, vl, lbar, rbar, vr, r = viz.datums_range()
# TODO: max/min zoom limits incorporating time step size.
# rl = vr - vl
@ -507,7 +542,7 @@ class ChartView(ViewBox):
# return
# actual scaling factor
s = 1.015 ** (ev.delta() * -1 / 20) # self.state['wheelScaleFactor'])
s = 1.016 ** (ev.delta() * -1 / 20) # self.state['wheelScaleFactor'])
s = [(None if m is False else s) for m in mask]
if (
@ -533,12 +568,13 @@ class ChartView(ViewBox):
# scale_y = 1.3 ** (center.y() * -1 / 20)
self.scaleBy(s, center)
# zoom in view-box area
else:
# use right-most point of current curve graphic
xl = viz.graphics.x_last()
focal = min(
xl,
vr,
r,
)
self._resetTarget()
@ -552,7 +588,7 @@ class ChartView(ViewBox):
# update, but i gotta feelin that because this one is signal
# based (and thus not necessarily sync invoked right away)
# that calling the resize method manually might work better.
self.sigRangeChangedManually.emit(mask)
# self.sigRangeChangedManually.emit(mask)
# XXX: without this it seems as though sometimes
# when zooming in from far out (and maybe vice versa?)
@ -562,14 +598,15 @@ class ChartView(ViewBox):
# that never seems to happen? Only question is how much this
# "double work" is causing latency when these missing event
# fires don't happen?
self.maybe_downsample_graphics()
self.interact_graphics_cycle()
self.interact_graphics_cycle()
ev.accept()
def mouseDragEvent(
self,
ev,
axis: Optional[int] = None,
axis: int | None = None,
) -> None:
pos = ev.pos()
@ -581,7 +618,10 @@ class ChartView(ViewBox):
button = ev.button()
# Ignore axes if mouse is disabled
mouseEnabled = np.array(self.state['mouseEnabled'], dtype=np.float)
mouseEnabled = np.array(
self.state['mouseEnabled'],
dtype=np.float,
)
mask = mouseEnabled.copy()
if axis is not None:
mask[1-axis] = 0.0
@ -645,9 +685,6 @@ class ChartView(ViewBox):
self.start_ic()
except RuntimeError:
pass
# if self._ic is None:
# self.chart.pause_all_feeds()
# self._ic = trio.Event()
if axis == 1:
self.chart._static_yrange = 'axis'
@ -664,16 +701,19 @@ class ChartView(ViewBox):
if x is not None or y is not None:
self.translateBy(x=x, y=y)
self.sigRangeChangedManually.emit(self.state['mouseEnabled'])
# self.sigRangeChangedManually.emit(mask)
# self.state['mouseEnabled']
# )
self.interact_graphics_cycle()
if ev.isFinish():
self.signal_ic()
# self._ic.set()
# self._ic = None
# self._in_interact.set()
# self._in_interact = None
# self.chart.resume_all_feeds()
# XXX: WHY
ev.accept()
# # XXX: WHY
# ev.accept()
# WEIRD "RIGHT-CLICK CENTER ZOOM" MODE
elif button & QtCore.Qt.RightButton:
@ -695,10 +735,12 @@ class ChartView(ViewBox):
center = Point(tr.map(ev.buttonDownPos(QtCore.Qt.RightButton)))
self._resetTarget()
self.scaleBy(x=x, y=y, center=center)
self.sigRangeChangedManually.emit(self.state['mouseEnabled'])
# XXX: WHY
ev.accept()
# self.sigRangeChangedManually.emit(self.state['mouseEnabled'])
self.interact_graphics_cycle()
# XXX: WHY
ev.accept()
# def mouseClickEvent(self, event: QtCore.QEvent) -> None:
# '''This routine is rerouted to an async handler.
@ -719,19 +761,19 @@ class ChartView(ViewBox):
self,
*,
yrange: Optional[tuple[float, float]] = None,
yrange: tuple[float, float] | None = None,
viz: Viz | None = None,
# NOTE: this value pairs (more or less) with L1 label text
# height offset from the bid/ask lines.
range_margin: float = 0.09,
range_margin: float | None = 0.06,
bars_range: Optional[tuple[int, int, int, int]] = None,
bars_range: tuple[int, int, int, int] | None = None,
# flag to prevent triggering sibling charts from the same linked
# set from recursion errors.
autoscale_linked_plots: bool = False,
name: Optional[str] = None,
name: str | None = None,
) -> None:
'''
@ -743,14 +785,13 @@ class ChartView(ViewBox):
'''
name = self.name
# print(f'YRANGE ON {name}')
# print(f'YRANGE ON {name} -> yrange{yrange}')
profiler = Profiler(
msg=f'`ChartView._set_yrange()`: `{name}`',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
delayed=True,
)
set_range = True
chart = self._chart
# view has been set in 'axis' mode
@ -759,8 +800,8 @@ class ChartView(ViewBox):
# - disable autoranging
# - remove any y range limits
if chart._static_yrange == 'axis':
set_range = False
self.setLimits(yMin=None, yMax=None)
return
# static y-range has been set likely by
# a specialized FSP configuration.
@ -773,54 +814,72 @@ class ChartView(ViewBox):
elif yrange is not None:
ylow, yhigh = yrange
if set_range:
# XXX: only compute the mxmn range
# if none is provided as input!
if not yrange:
# XXX: only compute the mxmn range
# if none is provided as input!
if not yrange:
if not viz:
breakpoint()
if not viz:
breakpoint()
out = viz.maxmin()
if out is None:
log.warning(f'No yrange provided for {name}!?')
return
(
ixrng,
_,
yrange
) = out
out = viz.maxmin()
if out is None:
log.warning(f'No yrange provided for {name}!?')
return
(
ixrng,
_,
yrange
) = out
profiler(f'`{self.name}:Viz.maxmin()` -> {ixrng}=>{yrange}')
profiler(f'`{self.name}:Viz.maxmin()` -> {ixrng}=>{yrange}')
if yrange is None:
log.warning(f'No yrange provided for {name}!?')
return
if yrange is None:
log.warning(f'No yrange provided for {name}!?')
return
ylow, yhigh = yrange
# view margins: stay within a % of the "true range"
# always stash last range for diffing by
# incremental update calculations BEFORE adding
# margin.
self._yrange = ylow, yhigh
# view margins: stay within a % of the "true range"
if range_margin is not None:
diff = yhigh - ylow
ylow = ylow - (diff * range_margin)
yhigh = yhigh + (diff * range_margin)
# XXX: this often needs to be unset
# to get different view modes to operate
# correctly!
self.setLimits(
yMin=ylow,
yMax=yhigh,
ylow = max(
ylow - (diff * range_margin),
0,
)
yhigh = min(
yhigh + (diff * range_margin),
yhigh * (1 + range_margin),
)
self.setYRange(ylow, yhigh)
profiler(f'set limits: {(ylow, yhigh)}')
# print(
# f'set limits {self.name}:\n'
# f'ylow: {ylow}\n'
# f'yhigh: {yhigh}\n'
# )
self.setYRange(
ylow,
yhigh,
padding=0,
)
self.setLimits(
yMin=ylow,
yMax=yhigh,
)
self.update()
# LOL: yet another pg bug..
# can't use `msg=f'setYRange({ylow}, {yhigh}')`
profiler.finish()
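The margin math in the `_set_yrange` hunk above replaces a plain percentage pad with a clamped one: the low bound is padded but can never dip below zero, and the high bound's pad is capped at its own margined value. A standalone sketch of that clamping (function name is illustrative, not the piker API):

```python
def pad_yrange(
    ylow: float,
    yhigh: float,
    range_margin: float = 0.06,
) -> tuple[float, float]:
    # expand by a % of the "true range", clamping the low end
    # at zero and capping the high end, as in the hunk above
    diff = yhigh - ylow
    ylow = max(ylow - (diff * range_margin), 0)
    yhigh = min(
        yhigh + (diff * range_margin),
        yhigh * (1 + range_margin),
    )
    return ylow, yhigh

print(pad_yrange(10.0, 20.0))
```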
def enable_auto_yrange(
self,
viz: Viz,
src_vb: Optional[ChartView] = None,
src_vb: ChartView | None = None,
) -> None:
'''
@ -831,18 +890,6 @@ class ChartView(ViewBox):
if src_vb is None:
src_vb = self
if self._yranger is None:
self._yranger = partial(
self._set_yrange,
viz=viz,
)
# widget-UIs/splitter(s) resizing
src_vb.sigResized.connect(self._yranger)
# mouse wheel doesn't emit XRangeChanged
src_vb.sigRangeChangedManually.connect(self._yranger)
# re-sampling trigger:
# TODO: a smarter way to avoid calling this needlessly?
# 2 things i can think of:
@ -850,23 +897,20 @@ class ChartView(ViewBox):
# iterate those.
# - only register this when certain downsample-able graphics are
# "added to scene".
src_vb.sigRangeChangedManually.connect(
self.maybe_downsample_graphics
# src_vb.sigRangeChangedManually.connect(
# self.interact_graphics_cycle
# )
# widget-UIs/splitter(s) resizing
src_vb.sigResized.connect(
self.interact_graphics_cycle
)
def disable_auto_yrange(self) -> None:
# XXX: not entirely sure why we can't de-reg this..
self.sigResized.disconnect(
self._yranger,
)
self.sigRangeChangedManually.disconnect(
self._yranger,
)
self.sigRangeChangedManually.disconnect(
self.maybe_downsample_graphics
self.interact_graphics_cycle
)
def x_uppx(self) -> float:
@ -887,57 +931,54 @@ class ChartView(ViewBox):
else:
return 0
def maybe_downsample_graphics(
def interact_graphics_cycle(
self,
autoscale_overlays: bool = False,
*args, # capture Qt signal (slot) inputs
# debug_print: bool = False,
do_linked_charts: bool = True,
do_overlay_scaling: bool = True,
yrange_kwargs: dict[
str,
tuple[float, float],
] | None = None,
):
profiler = Profiler(
msg=f'ChartView.maybe_downsample_graphics() for {self.name}',
msg=f'ChartView.interact_graphics_cycle() for {self.name}',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
# XXX: important to avoid not seeing underlying
# ``.update_graphics_from_flow()`` nested profiling likely
# ``Viz.update_graphics()`` nested profiling likely
# due to the way delaying works and garbage collection of
# the profiler in the delegated method calls.
ms_threshold=6,
# ms_threshold=ms_slower_then,
delayed=True,
# for hardcore latency checking, comment these flags above.
# disabled=False,
# ms_threshold=4,
)
# TODO: a faster single-loop-iterator way of doing this XD
chart = self._chart
plots = {chart.name: chart}
linked = self.linked
if linked:
if (
do_linked_charts
and linked
):
plots = {linked.chart.name: linked.chart}
plots |= linked.subplots
for chart_name, chart in plots.items():
for name, flow in chart._vizs.items():
else:
chart = self._chart
plots = {chart.name: chart}
if (
not flow.render
# XXX: super important to be aware of this.
# or not flow.graphics.isVisible()
):
# print(f'skipping {flow.name}')
continue
# pass in no array which will read and render from the last
# passed array (normally provided by the display loop.)
chart.update_graphics_from_flow(name)
# for each overlay on this chart auto-scale the
# y-range to max-min values.
# if autoscale_overlays:
# overlay = chart.pi_overlay
# if overlay:
# for pi in overlay.overlays:
# pi.vb._set_yrange(
# # TODO: get the range once up front...
# # bars_range=br,
# viz=pi.viz,
# )
# profiler('autoscaled linked plots')
profiler(f'<{chart_name}>.update_graphics_from_flow({name})')
# TODO: a faster single-loop-iterator way of doing this?
return overlay_viewlists(
self._viz,
plots,
profiler,
do_overlay_scaling=do_overlay_scaling,
do_linked_charts=do_linked_charts,
yrange_kwargs=yrange_kwargs,
)

View File

@ -19,7 +19,10 @@ Non-shitty labels that don't re-invent the wheel.
"""
from inspect import isfunction
from typing import Callable, Optional, Any
from typing import (
Callable,
Any,
)
import pyqtgraph as pg
from PyQt5 import QtGui, QtWidgets
@ -70,9 +73,7 @@ class Label:
self._fmt_str = fmt_str
self._view_xy = QPointF(0, 0)
self.scene_anchor: Optional[
Callable[..., QPointF]
] = None
self.scene_anchor: Callable[..., QPointF] | None = None
self._x_offset = x_offset
@ -164,7 +165,7 @@ class Label:
self,
y: float,
x: Optional[float] = None,
x: float | None = None,
) -> None:

View File

@ -22,7 +22,6 @@ from __future__ import annotations
from functools import partial
from math import floor
from typing import (
Optional,
Callable,
TYPE_CHECKING,
)
@ -32,7 +31,7 @@ from pyqtgraph import Point, functions as fn
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF
from ._annotate import qgo_draw_markers, LevelMarker
from ._annotate import LevelMarker
from ._anchors import (
vbr_left,
right_axis,
@ -295,7 +294,7 @@ class LevelLine(pg.InfiniteLine):
# show y-crosshair again
cursor.show_xhair()
def get_cursor(self) -> Optional[Cursor]:
def get_cursor(self) -> Cursor | None:
chart = self._chart
cur = chart.linked.cursor
@ -610,11 +609,11 @@ def order_line(
chart,
level: float,
action: Optional[str] = 'buy', # buy or sell
action: str | None = 'buy', # buy or sell
marker_style: Optional[str] = None,
level_digits: Optional[float] = 3,
size: Optional[int] = 1,
marker_style: str | None = None,
level_digits: float | None = 3,
size: int | None = 1,
size_digits: int = 1,
show_markers: bool = False,
submit_price: float = None,

View File

@ -21,7 +21,6 @@ Notifications utils.
import os
import platform
import subprocess
from typing import Optional
import trio
@ -33,7 +32,7 @@ from ..clearing._messages import (
log = get_logger(__name__)
_dbus_uid: Optional[str] = ''
_dbus_uid: str | None = ''
async def notify_from_ems_status_msg(

View File

@ -28,7 +28,6 @@ from PyQt5.QtCore import (
QLineF,
QRectF,
)
from PyQt5.QtWidgets import QGraphicsItem
from PyQt5.QtGui import QPainterPath
from ._curve import FlowGraphic
@ -91,10 +90,6 @@ class BarItems(FlowGraphic):
"Price range" bars graphics rendered from a OHLC sampled sequence.
'''
# XXX: causes this weird jitter bug when click-drag panning
# where the path curve will awkwardly flicker back and forth?
cache_mode: int = QGraphicsItem.NoCache
def __init__(
self,
*args,
@ -113,9 +108,10 @@ class BarItems(FlowGraphic):
'''
if self._last_bar_lines:
close_arm_line = self._last_bar_lines[-1]
return close_arm_line.x2() if close_arm_line else None
else:
return None
if close_arm_line:
return close_arm_line.x2()
return None
# Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect
def boundingRect(self):

View File

@ -20,8 +20,9 @@ micro-ORM for coupling ``pydantic`` models with Qt input/output widgets.
"""
from __future__ import annotations
from typing import (
Optional, Generic,
TypeVar, Callable,
Generic,
TypeVar,
Callable,
)
# from pydantic import BaseModel, validator
@ -42,13 +43,11 @@ DataType = TypeVar('DataType')
class Field(GenericModel, Generic[DataType]):
widget_factory: Optional[
Callable[
[QWidget, 'Field'],
QWidget
]
]
value: Optional[DataType] = None
widget_factory: Callable[
[QWidget, 'Field'],
QWidget
] | None = None
value: DataType | None = None
class Selection(Field[DataType], Generic[DataType]):

View File

@ -22,7 +22,6 @@ from collections import defaultdict
from functools import partial
from typing import (
Callable,
Optional,
)
from pyqtgraph.graphicsItems.AxisItem import AxisItem
@ -116,6 +115,7 @@ class ComposedGridLayout:
layout.setContentsMargins(0, 0, 0, 0)
layout.setSpacing(0)
layout.setMinimumWidth(0)
if name in ('top', 'bottom'):
orient = Qt.Vertical
@ -125,7 +125,11 @@ class ComposedGridLayout:
layout.setOrientation(orient)
self.insert_plotitem(0, pi)
self.insert_plotitem(
0,
pi,
remove_axes=False,
)
# insert surrounding linear layouts into the parent pi's layout
# such that additional axes can be appended arbitrarily without
@ -140,7 +144,9 @@ class ComposedGridLayout:
assert linlayout.itemAt(0) is axis
# XXX: see comment in ``.insert_plotitem()``...
# our `PlotItem.removeAxis()` does this internally.
# pi.layout.removeItem(axis)
pi.layout.addItem(linlayout, *index)
layout = pi.layout.itemAt(*index)
assert layout is linlayout
@ -165,6 +171,8 @@ class ComposedGridLayout:
index: int,
plotitem: PlotItem,
remove_axes: bool = False,
) -> tuple[int, list[AxisItem]]:
'''
Place item at index by inserting all axes into the grid
@ -193,25 +201,19 @@ class ComposedGridLayout:
axis_view = axis.linkedView()
assert axis_view is plotitem.vb
if (
not axis.isVisible()
# if (
# not axis.isVisible()
# XXX: we never skip moving the axes for the *root*
# plotitem inserted (even if not shown) since we need to
# move all the hidden axes into linear sub-layouts for
# that "central" plot in the overlay. Also if we don't
# do it there's weird geometry calc offsets that make
# view coords slightly off somehow .. smh
and not len(self.pitems) == 0
):
continue
# XXX: Remove old axis?
# No, turns out we don't need this?
# DON'T UNLINK IT since we need the original ``ViewBox`` to
# still drive it with events/handlers B)
# popped = plotitem.removeAxis(name, unlink=False)
# assert axis is popped
# # XXX: we never skip moving the axes for the *root*
# # plotitem inserted (even if not shown) since we need to
# # move all the hidden axes into linear sub-layouts for
# # that "central" plot in the overlay. Also if we don't
# # do it there's weird geometry calc offsets that make
# # view coords slightly off somehow .. smh
# and not len(self.pitems) == 0
# ):
# print(f'SKIPPING MOVE: {plotitem.name}:{name} -> {axis}')
# continue
# invert insert index for layouts which are
# not-left-to-right, top-to-bottom insert oriented
@ -225,6 +227,16 @@ class ComposedGridLayout:
self._register_item(index, plotitem)
if remove_axes:
for name, axis_info in plotitem.axes.copy().items():
axis = axis_info['item']
# XXX: Remove old axis?
# No, turns out we don't need this?
# DON'T UNLINK IT since we need the original ``ViewBox`` to
# still drive it with events/handlers B)
popped = plotitem.removeAxis(name, unlink=False)
assert axis is popped
return (index, inserted_axes)
def append_plotitem(
@ -246,7 +258,7 @@ class ComposedGridLayout:
plot: PlotItem,
name: str,
) -> Optional[AxisItem]:
) -> AxisItem | None:
'''
Retrieve the named axis for overlayed ``plot`` or ``None``
if axis for that name is not shown.
@ -321,7 +333,7 @@ class PlotItemOverlay:
def add_plotitem(
self,
plotitem: PlotItem,
index: Optional[int] = None,
index: int | None = None,
# event/signal names which will be broadcasted to all added
# (relayee) ``PlotItem``s (eg. ``ViewBox.mouseDragEvent``).
@ -376,7 +388,7 @@ class PlotItemOverlay:
# TODO: drop this viewbox specific input and
# allow a predicate to be passed in by user.
axis: 'Optional[int]' = None,
axis: int | None = None,
*,
@ -487,10 +499,10 @@ class PlotItemOverlay:
else:
insert_index, axes = self.layout.insert_plotitem(index, plotitem)
plotitem.setGeometry(root.vb.sceneBoundingRect())
plotitem.vb.setGeometry(root.vb.sceneBoundingRect())
def size_to_viewbox(vb: 'ViewBox'):
plotitem.setGeometry(vb.sceneBoundingRect())
plotitem.vb.setGeometry(root.vb.sceneBoundingRect())
root.vb.sigResized.connect(size_to_viewbox)

View File

@ -22,8 +22,6 @@ Generally, ours does not require "scientific precision" for pixel-perfect
view transforms.
"""
from typing import Optional
import pyqtgraph as pg
from ._axes import Axis
@ -47,9 +45,10 @@ def invertQTransform(tr):
def _do_overrides() -> None:
"""Dooo eeet.
'''
Dooo eeet.
"""
'''
# we don't care about potential fp issues inside Qt
pg.functions.invertQTransform = invertQTransform
pg.PlotItem = PlotItem
@ -91,7 +90,7 @@ class PlotItem(pg.PlotItem):
title=None,
viewBox=None,
axisItems=None,
default_axes=['left', 'bottom'],
default_axes=['right', 'bottom'],
enableMenu=True,
**kargs
):
@ -119,7 +118,7 @@ class PlotItem(pg.PlotItem):
name: str,
unlink: bool = True,
) -> Optional[pg.AxisItem]:
) -> pg.AxisItem | None:
"""
Remove an axis from the contained axis items
by ```name: str```.
@ -130,7 +129,7 @@ class PlotItem(pg.PlotItem):
If the ``unlink: bool`` is set to ``False`` then the axis will
stay linked to its view and will only be removed from the
layoutonly be removed from the layout.
layout.
If no axis with ``name: str`` is found then this is a noop.
@ -144,7 +143,10 @@ class PlotItem(pg.PlotItem):
axis = entry['item']
self.layout.removeItem(axis)
axis.scene().removeItem(axis)
scn = axis.scene()
if scn:
scn.removeItem(axis)
if unlink:
axis.unlinkFromView()
@ -166,14 +168,14 @@ class PlotItem(pg.PlotItem):
def setAxisItems(
self,
# XXX: yeah yeah, i know we can't use type annots like this yet.
axisItems: Optional[dict[str, pg.AxisItem]] = None,
axisItems: dict[str, pg.AxisItem] | None = None,
add_to_layout: bool = True,
default_axes: list[str] = ['left', 'bottom'],
):
"""
Override axis item setting to only
'''
Override axis item setting to only what is passed in.
"""
'''
axisItems = axisItems or {}
# XXX: wth is this even saying?!?

View File

@ -25,7 +25,6 @@ from functools import partial
from math import floor, copysign
from typing import (
Callable,
Optional,
TYPE_CHECKING,
)
@ -170,12 +169,12 @@ class SettingsPane:
limit_label: QLabel
# encompasing high level namespace
order_mode: Optional['OrderMode'] = None # typing: ignore # noqa
order_mode: OrderMode | None = None # typing: ignore # noqa
def set_accounts(
self,
names: list[str],
sizes: Optional[list[float]] = None,
sizes: list[float] | None = None,
) -> None:
combo = self.form.fields['account']
@ -540,8 +539,8 @@ class Nav(Struct):
charts: dict[int, ChartPlotWidget]
pp_labels: dict[str, Label] = {}
size_labels: dict[str, Label] = {}
lines: dict[str, Optional[LevelLine]] = {}
level_markers: dict[str, Optional[LevelMarker]] = {}
lines: dict[str, LevelLine | None] = {}
level_markers: dict[str, LevelMarker | None] = {}
color: str = 'default_lightest'
def update_ui(
@ -550,7 +549,7 @@ class Nav(Struct):
price: float,
size: float,
slots_used: float,
size_digits: Optional[int] = None,
size_digits: int | None = None,
) -> None:
'''
@ -847,7 +846,7 @@ class PositionTracker:
def update_from_pp(
self,
position: Optional[Position] = None,
position: Position | None = None,
set_as_startup: bool = False,
) -> None:

View File

@ -51,7 +51,20 @@ log = get_logger(__name__)
class Renderer(msgspec.Struct):
'''
Low(er) level interface for converting a source, real-time updated,
data buffer (usually held in a ``ShmArray``) to a graphics data
format usable by `Qt`.
A renderer reads in context-specific source data using a ``Viz``,
formats that data to a 2D-xy pre-graphics format using
a ``IncrementalFormatter``, then renders that data to a set of
output graphics objects, normally a ``.ui._curve.FlowGraphics``
sub-type to which the ``Renderer.path`` is applied and further "last
datum" graphics are updated from the source buffer's latest
sample(s).
'''
viz: Viz
fmtr: IncrementalFormatter
@ -195,7 +208,7 @@ class Renderer(msgspec.Struct):
fast_path: QPainterPath = self.fast_path
reset: bool = False
self.viz.yrange = None
self.viz.ds_yrange = None
# redraw the entire source data if we have either of:
# - no prior path graphic rendered or,
@ -218,7 +231,7 @@ class Renderer(msgspec.Struct):
)
if ds_out is not None:
x_1d, y_1d, ymn, ymx = ds_out
self.viz.yrange = ymn, ymx
self.viz.ds_yrange = ymn, ymx
# print(f'{self.viz.name} post ds: ymn, ymx: {ymn},{ymx}')
reset = True
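The `Renderer` docstring above describes a three-stage pipeline: a `Viz` reads context-specific source data, an `IncrementalFormatter` converts it to a 2D-xy pre-graphics format, and the result drives a `FlowGraphics` output object holding the rendered path. A toy sketch of that shape (all class names here are illustrative stand-ins, not the real piker types):

```python
class ToyFormatter:
    # stand-in for `IncrementalFormatter`: source buffer -> 2D xy
    def format_xy(self, src: list[float]) -> list[tuple[int, float]]:
        return list(enumerate(src))

class ToyRenderer:
    # stand-in for `Renderer`: holds a formatter and the last
    # rendered "path" (here just the xy pair list)
    def __init__(self, fmtr: ToyFormatter) -> None:
        self.fmtr = fmtr
        self.path: list[tuple[int, float]] | None = None

    def render(self, src: list[float]) -> list[tuple[int, float]]:
        self.path = self.fmtr.format_xy(src)
        return self.path

r = ToyRenderer(ToyFormatter())
print(r.render([3.1, 3.2, 3.3]))
```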

View File

@ -35,7 +35,6 @@ from collections import defaultdict
from contextlib import asynccontextmanager
from functools import partial
from typing import (
Optional,
Callable,
Awaitable,
Sequence,
@ -178,8 +177,8 @@ class CompleterView(QTreeView):
def resize_to_results(
self,
w: Optional[float] = 0,
h: Optional[float] = None,
w: float | None = 0,
h: float | None = None,
) -> None:
model = self.model()
@ -380,7 +379,7 @@ class CompleterView(QTreeView):
self,
section: str,
) -> Optional[QModelIndex]:
) -> QModelIndex | None:
'''
Find the *first* depth = 1 section matching ``section`` in
the tree and return its index.
@ -504,7 +503,7 @@ class CompleterView(QTreeView):
def show_matches(
self,
wh: Optional[tuple[float, float]] = None,
wh: tuple[float, float] | None = None,
) -> None:
@ -529,7 +528,7 @@ class SearchBar(Edit):
self,
parent: QWidget,
godwidget: QWidget,
view: Optional[CompleterView] = None,
view: CompleterView | None = None,
**kwargs,
) -> None:
@ -708,7 +707,7 @@ class SearchWidget(QtWidgets.QWidget):
self,
clear_to_cache: bool = True,
) -> Optional[str]:
) -> str | None:
'''
Attempt to load and switch the current selected
completion result to the affiliated chart app.
@ -1167,7 +1166,7 @@ async def register_symbol_search(
provider_name: str,
search_routine: Callable,
pause_period: Optional[float] = None,
pause_period: float | None = None,
) -> AsyncIterator[dict]:

View File

@ -18,7 +18,7 @@
Qt UI styling.
'''
from typing import Optional, Dict
from typing import Dict
import math
import pyqtgraph as pg
@ -52,7 +52,7 @@ class DpiAwareFont:
# TODO: move to config
name: str = 'Hack',
font_size: str = 'default',
# size_in_inches: Optional[float] = None,
) -> None:
self.name = name
self._qfont = QtGui.QFont(name)
@ -91,13 +91,14 @@ class DpiAwareFont:
def px_size(self) -> int:
return self._qfont.pixelSize()
def configure_to_dpi(self, screen: Optional[QtGui.QScreen] = None):
"""Set an appropriately sized font size depending on the screen DPI.
def configure_to_dpi(self, screen: QtGui.QScreen | None = None):
'''
Set an appropriately sized font size depending on the screen DPI.
If we end up needing to generalize this more here, there are resources
listed in the script in ``snippets/qt_screen_info.py``.
"""
'''
if screen is None:
screen = self.screen

View File

@ -23,7 +23,6 @@ import signal
import time
from typing import (
Callable,
Optional,
Union,
)
import uuid
@ -64,9 +63,9 @@ class MultiStatus:
self,
msg: str,
final_msg: Optional[str] = None,
final_msg: str | None = None,
clear_on_next: bool = False,
group_key: Optional[Union[bool, str]] = False,
group_key: Union[bool, str] | None = False,
) -> Union[Callable[..., None], str]:
'''
@ -178,11 +177,11 @@ class MainWindow(QMainWindow):
self.setWindowTitle(self.title)
# set by runtime after `trio` is engaged.
self.godwidget: Optional[GodWidget] = None
self.godwidget: GodWidget | None = None
self._status_bar: QStatusBar = None
self._status_label: QLabel = None
self._size: Optional[tuple[int, int]] = None
self._size: tuple[int, int] | None = None
@property
def mode_label(self) -> QLabel:
@ -289,7 +288,7 @@ class MainWindow(QMainWindow):
def configure_to_desktop(
self,
size: Optional[tuple[int, int]] = None,
size: tuple[int, int] | None = None,
) -> None:
'''


@ -24,7 +24,7 @@ import tractor
from ..cli import cli
from .. import watchlists as wl
from .._daemon import maybe_spawn_brokerd
from ..service import maybe_spawn_brokerd
_config_dir = click.get_app_dir('piker')


@ -25,7 +25,6 @@ from functools import partial
from pprint import pformat
import time
from typing import (
Optional,
Callable,
Any,
TYPE_CHECKING,
@ -129,7 +128,7 @@ class OrderMode:
trackers: dict[str, PositionTracker]
# switched state, the current position
current_pp: Optional[PositionTracker] = None
current_pp: PositionTracker | None = None
active: bool = False
name: str = 'order'
dialogs: dict[str, Dialog] = field(default_factory=dict)
@ -139,7 +138,7 @@ class OrderMode:
'buy': 'buy_green',
'sell': 'sell_red',
}
_staged_order: Optional[Order] = None
_staged_order: Order | None = None
def on_level_change_update_next_order_info(
self,
@ -180,7 +179,7 @@ class OrderMode:
def new_line_from_order(
self,
order: Order,
chart: Optional[ChartPlotWidget] = None,
chart: ChartPlotWidget | None = None,
**line_kwargs,
) -> LevelLine:
@ -340,7 +339,7 @@ class OrderMode:
def submit_order(
self,
send_msg: bool = True,
order: Optional[Order] = None,
order: Order | None = None,
) -> Dialog:
'''
@ -452,7 +451,7 @@ class OrderMode:
def on_submit(
self,
uuid: str,
order: Optional[Order] = None,
order: Order | None = None,
) -> Dialog:
'''
@ -496,7 +495,7 @@ class OrderMode:
price: float,
time_s: float,
pointing: Optional[str] = None,
pointing: str | None = None,
) -> None:
'''


@ -0,0 +1,883 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Overlay (aka multi-chart) UX machinery.
'''
from __future__ import annotations
from typing import (
Any,
Literal,
TYPE_CHECKING,
)
import numpy as np
import pendulum
import pyqtgraph as pg
from ..data.types import Struct
from ..data._pathops import slice_from_time
from ..log import get_logger
from .._profile import Profiler
if TYPE_CHECKING:
from ._chart import ChartPlotWidget
from ._dataviz import Viz
from ._interaction import ChartView
log = get_logger(__name__)
class OverlayT(Struct):
'''
An overlay co-domain range transformer.
Used to translate and apply a range from one y-range
to another based on a returns logarithm:
R(ymx, yref) = (ymx - yref)/yref
which gives the log-scale multiplier, and
ymx_t = yref * (1 + R)
which gives the inverse to translate to the same value
in the target co-domain.
'''
viz: Viz | None = None
start_t: float | None = None
# % "range" computed from some ref value to the mn/mx
rng: float | None = None
in_view: np.ndarray | None = None
# pinned-minor curve modified mn and max for the major dispersion
# curve due to one series being shorter and the pin + scaling from
# that pin point causing the original range to have to increase.
y_val: float | None = None
def apply_r(
self,
y_ref: float, # reference value for dispersion metric
) -> float:
return y_ref * (1 + self.rng)
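The returns transform described in the ``OverlayT`` docstring can be sanity-checked with a small standalone sketch (hypothetical values, independent of the `Viz`/`Struct` machinery above):

```python
def returns_range(ymx: float, yref: float) -> float:
    # R(ymx, yref) = (ymx - yref) / yref: the log-scale multiplier
    return (ymx - yref) / yref

def apply_r(yref: float, rng: float) -> float:
    # inverse: translate a reference value into the target co-domain,
    # mirroring `OverlayT.apply_r()`
    return yref * (1 + rng)

# a curve that moved from 100 -> 110 has a +10% "range"
r = returns_range(110.0, 100.0)
assert abs(r - 0.10) < 1e-9

# pinning another curve's reference value 50 to that same range
# yields 55: the equivalent max in the second curve's co-domain
assert abs(apply_r(50.0, r) - 55.0) < 1e-9
```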
def intersect_from_longer(
start_t_first: float,
in_view_first: np.ndarray,
start_t_second: float,
in_view_second: np.ndarray,
step: float,
) -> np.ndarray:
tdiff = start_t_first - start_t_second
if tdiff == 0:
return False
i: int = 0
# first time series has a "later" first time stamp than the 2nd,
# aka the 1st is "shorter" than the 2nd.
if tdiff > 0:
longer = in_view_second
find_t = start_t_first
i = 1
# second time series has a "later" first time stamp than the 1st,
# aka the 2nd is "shorter" than the 1st.
elif tdiff < 0:
longer = in_view_first
find_t = start_t_second
i = 0
slc = slice_from_time(
arr=longer,
start_t=find_t,
stop_t=find_t,
step=step,
)
return (
longer[slc.start],
find_t,
i,
)
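The idea behind ``intersect_from_longer`` can be sketched without numpy or `slice_from_time`: given two series, the one that starts *later* defines the intersect time, and that time is looked up in the earlier-starting (longer) series. A minimal illustration using plain lists of hypothetical ``(time, value)`` rows:

```python
def first_common_datum(series_a, series_b):
    # each series is a time-ascending list of (time, value) rows
    t0_a, t0_b = series_a[0][0], series_b[0][0]
    if t0_a == t0_b:
        return None  # starts already aligned; nothing to do

    # the later-starting series picks the intersect time; search the
    # longer (earlier-starting) series for that timestamp.
    if t0_a > t0_b:
        longer, find_t, i = series_b, t0_a, 1
    else:
        longer, find_t, i = series_a, t0_b, 0

    row = next(r for r in longer if r[0] >= find_t)
    return row, find_t, i

a = [(3, 10.0), (4, 11.0)]                    # shorter, starts at t=3
b = [(1, 5.0), (2, 6.0), (3, 7.0), (4, 8.0)]  # longer, starts at t=1
row, t, i = first_common_datum(a, b)
assert (row, t, i) == ((3, 7.0), 3, 1)
```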
def _maybe_calc_yrange(
viz: Viz,
yrange_kwargs: dict[Viz, dict[str, Any]],
profiler: Profiler,
chart_name: str,
) -> tuple[
slice,
dict,
] | None:
if not viz.render:
return
# pass in no array which will read and render from the last
# passed array (normally provided by the display loop.)
in_view, i_read_range, _ = viz.update_graphics()
if not in_view:
return
profiler(f'{viz.name}@{chart_name} `Viz.update_graphics()`')
# check if explicit yrange (kwargs) was passed in by the caller
yrange_kwargs = yrange_kwargs.get(viz) if yrange_kwargs else None
if yrange_kwargs is not None:
read_slc = slice(*i_read_range)
else:
out = viz.maxmin(i_read_range=i_read_range)
if out is None:
log.warning(f'No yrange provided for {viz.name}!?')
return
(
_, # ixrng,
read_slc,
yrange
) = out
profiler(f'{viz.name}@{chart_name} `Viz.maxmin()`')
yrange_kwargs = {'yrange': yrange}
return (
read_slc,
yrange_kwargs,
)
def overlay_viewlists(
active_viz: Viz,
plots: dict[str, ChartPlotWidget],
profiler: Profiler,
# public config ctls
do_linked_charts: bool = True,
do_overlay_scaling: bool = True,
yrange_kwargs: dict[
str,
tuple[float, float],
] | None = None,
method: Literal[
'loglin_ref_to_curve',
'loglin_ref_to_first',
'mxmn',
'solo',
] = 'loglin_ref_to_curve',
# internal debug
debug_print: bool = False,
) -> None:
'''
Calculate and apply y-domain (axis y-range) multi-curve overlay adjustments
to a set of ``plots`` based on the requested ``method``.
'''
chart_name: str
chart: ChartPlotWidget
for chart_name, chart in plots.items():
overlay_viz_items = chart._vizs.items()
# Common `PlotItem` maxmin table; presumes that some path
# graphics (and thus their backing data sets) are in the
# same co-domain and view box (since they were added
# as separate graphics objects to a common plot) and thus can
# be sorted as one set per plot.
mxmns_by_common_pi: dict[
pg.PlotItem,
tuple[float, float],
] = {}
# proportional group auto-scaling per overlay set.
# -> loop through overlays on each multi-chart widget
# and scale all y-ranges based on autoscale config.
# -> for any "group" overlay we want to dispersion normalize
# and scale minor charts onto the major chart: the chart
# with the most dispersion in the set.
# ONLY auto-yrange the viz mapped to THIS view box
if (
not do_overlay_scaling
or len(overlay_viz_items) < 2
):
viz = active_viz
out = _maybe_calc_yrange(
viz,
yrange_kwargs,
profiler,
chart_name,
)
if out is None:
continue
read_slc, yrange_kwargs = out
viz.plot.vb._set_yrange(**yrange_kwargs)
profiler(f'{viz.name}@{chart_name} single curve yrange')
if debug_print:
print(f'ONLY ranging THIS viz: {viz.name}')
# don't iterate overlays, just move to next chart
continue
if debug_print:
divstr = '#'*46
print(
f'BEGIN UX GRAPHICS CYCLE: @{chart_name}\n'
+
divstr
+
'\n'
)
# create a group overlay log-linearized y-range transform to
# track and eventually inverse transform all overlay curves
# to a common target max dispersion range.
dnt = OverlayT()
upt = OverlayT()
# collect certain flows which have graphics objects **in
# separate plots/viewboxes** into groups and do a common calc to
# determine auto-ranging input for `._set_yrange()`.
# this is primarily used for our so-called "log-linearized
# multi-plot" overlay technique.
overlay_table: dict[
float,
tuple[
ChartView,
Viz,
float, # y start
float, # y min
float, # y max
float, # y median
slice, # in-view array slice
np.ndarray, # in-view array
float, # returns up scalar
float, # return down scalar
],
] = {}
# multi-curve overlay processing stage
for name, viz in overlay_viz_items:
out = _maybe_calc_yrange(
viz,
yrange_kwargs,
profiler,
chart_name,
)
if out is None:
continue
read_slc, yrange_kwargs = out
yrange = yrange_kwargs['yrange']
pi = viz.plot
# handle multiple graphics-objs per viewbox cases
mxmn = mxmns_by_common_pi.get(pi)
if mxmn:
yrange = mxmns_by_common_pi[pi] = (
min(yrange[0], mxmn[0]),
max(yrange[1], mxmn[1]),
)
else:
mxmns_by_common_pi[pi] = yrange
profiler(f'{viz.name}@{chart_name} common pi sort')
# non-overlay group case
if (
not viz.is_ohlc
or method == 'solo'
):
pi.vb._set_yrange(yrange=yrange)
profiler(
f'{viz.name}@{chart_name} simple std `._set_yrange()`'
)
continue
# handle overlay log-linearized group scaling cases
# TODO: a better predicate here, likely something
# to do with overlays and their settings..
# TODO: we probably eventually might want some other
# charts besides OHLC?
else:
ymn, ymx = yrange
# determine start datum in view
in_view = viz.vs.in_view
if not in_view.size:
log.warning(f'{viz.name} not in view?')
continue
row_start = in_view[0]
if viz.is_ohlc:
y_ref = row_start['open']
else:
y_ref = row_start[viz.name]
profiler(f'{viz.name}@{chart_name} MINOR curve median')
key = 'open' if viz.is_ohlc else viz.name
start_t = row_start['time']
# returns scalars
r_up = (ymx - y_ref) / y_ref
r_down = (ymn - y_ref) / y_ref
disp = r_up - r_down
msg = (
f'Viz[{viz.name}][{key}]: @{chart_name}\n'
f' .yrange = {viz.vs.yrange}\n'
f' .xrange = {viz.vs.xrange}\n\n'
f'start_t: {start_t}\n'
f'y_ref: {y_ref}\n'
f'ymn: {ymn}\n'
f'ymx: {ymx}\n'
f'r_up: {r_up}\n'
f'r_down: {r_down}\n'
f'(full) disp: {disp}\n'
)
profiler(msg)
if debug_print:
print(msg)
# track the "major" curve as the curve with most
# dispersion.
if (
dnt.rng is None
or (
r_down < dnt.rng
and r_down < 0
)
):
dnt.viz = viz
dnt.rng = r_down
dnt.in_view = in_view
dnt.start_t = in_view[0]['time']
dnt.y_val = ymn
profiler(f'NEW DOWN: {viz.name}@{chart_name} r: {r_down}')
else:
# minor in the down swing range so check that if
# we apply the current rng to the minor that it
# doesn't go outside the current range for the major
# otherwise we recompute the minor's range (when
# adjusted for its intersect point) to be the new
# major's range.
intersect = intersect_from_longer(
dnt.start_t,
dnt.in_view,
start_t,
in_view,
viz.index_step(),
)
profiler(f'{viz.name}@{chart_name} intersect by t')
if intersect:
longer_in_view, _t, i = intersect
scaled_mn = dnt.apply_r(y_ref)
if scaled_mn > ymn:
# after major curve scaling we detected
# the minor curve is still out of range
# so we need to adjust the major's range
# to include the new composed range.
y_maj_ref = longer_in_view[key]
new_major_ymn = y_maj_ref * (1 + r_down)
# rewrite the major range to the new
# minor-pinned-to-major range and mark
# the transform as "virtual".
msg = (
f'EXPAND DOWN bc {viz.name}@{chart_name}\n'
f'y_start epoch time @ {_t}:\n'
f'y_maj_ref @ {_t}: {y_maj_ref}\n'
f'R: {dnt.rng} -> {r_down}\n'
f'MN: {dnt.y_val} -> {new_major_ymn}\n'
)
dnt.rng = r_down
dnt.y_val = new_major_ymn
profiler(msg)
if debug_print:
print(msg)
# is the current up `OverlayT` not yet defined or
# the current `r_up` greater than the previous max.
if (
upt.rng is None
or (
r_up > upt.rng
and r_up > 0
)
):
upt.rng = r_up
upt.viz = viz
upt.in_view = in_view
upt.start_t = in_view[0]['time']
upt.y_val = ymx
profiler(f'NEW UP: {viz.name}@{chart_name} r: {r_up}')
else:
intersect = intersect_from_longer(
upt.start_t,
upt.in_view,
start_t,
in_view,
viz.index_step(),
)
profiler(f'{viz.name}@{chart_name} intersect by t')
if intersect:
longer_in_view, _t, i = intersect
# after major curve scaling we detect if
# the minor curve is still out of range
# so we need to adjust the major's range
# to include the new composed range.
scaled_mx = upt.apply_r(y_ref)
if scaled_mx < ymx:
y_maj_ref = longer_in_view[key]
new_major_ymx = y_maj_ref * (1 + r_up)
# rewrite the major range to the new
# minor-pinned-to-major range and mark
# the transform as "virtual".
msg = (
f'EXPAND UP bc {viz.name}@{chart_name}:\n'
f'y_maj_ref @ {_t}: {y_maj_ref}\n'
f'R: {upt.rng} -> {r_up}\n'
f'MX: {upt.y_val} -> {new_major_ymx}\n'
)
upt.rng = r_up
upt.y_val = new_major_ymx
profiler(msg)
print(msg)
# register curves by a "full" dispersion metric for
# later sort order in the overlay-technique
# application loop below.
overlay_table[disp] = (
viz.plot.vb,
viz,
y_ref,
ymn,
ymx,
read_slc,
in_view,
r_up,
r_down,
)
profiler(f'{viz.name}@{chart_name} yrange scan complete')
# NOTE: if there were no overlay charts
# detected/collected (could be either no group detected or
# chart with a single symbol, thus a single viz/overlay)
# then we ONLY set the lone chart's (viz) yrange and short
# circuit to the next chart in the linked charts loop. IOW
# there's no reason to go through the overlay dispersion
# scaling in the next loop below when only one curve is
# detected.
if (
not mxmns_by_common_pi
and len(overlay_table) < 2
):
if debug_print:
print(f'ONLY ranging major: {viz.name}')
out = _maybe_calc_yrange(
viz,
yrange_kwargs,
profiler,
chart_name,
)
if out is None:
continue
read_slc, yrange_kwargs = out
viz.plot.vb._set_yrange(**yrange_kwargs)
profiler(f'{viz.name}@{chart_name} single curve yrange')
# move to next chart in linked set since
# no overlay transforming is needed.
continue
elif (
mxmns_by_common_pi
and not overlay_table
):
# move to next chart in linked set since
# no overlay transforming is needed.
continue
profiler('`Viz` curve (first) scan phase complete\n')
r_up_mx: float
r_dn_mn: float
mx_disp = max(overlay_table)
if debug_print:
# print overlay table in descending dispersion order
msg = 'overlays in dispersion order:\n'
for i, disp in enumerate(reversed(overlay_table)):
entry = overlay_table[disp]
msg += f' [{i}] {disp}: {entry[1].name}\n'
print(
'TRANSFORM PHASE' + '-'*100 + '\n\n'
+
msg
)
if method == 'loglin_ref_to_curve':
mx_entry = overlay_table.pop(mx_disp)
else:
# TODO: for pin to first-in-view we need to not pop this from the
# table, but can we simplify the code below even more?
mx_entry = overlay_table[mx_disp]
(
mx_view, # viewbox
mx_viz, # viz
_, # y_ref
mx_ymn,
mx_ymx,
_, # read_slc
mx_in_view, # in_view array
r_up_mx,
r_dn_mn,
) = mx_entry
mx_time = mx_in_view['time']
mx_xref = mx_time[0]
# conduct "log-linearized multi-plot" range transform
# calculations for curves detected as overlays in the previous
# loop:
# -> iterate all curves Ci in dispersion-measure sorted order
# going from smallest swing to largest via the
# ``overlay_table: dict``,
# -> match on overlay ``method: str`` provided by caller,
# -> calc y-ranges from each curve's time series and store in
# a final table ``scaled: dict`` for final application in the
# scaling loop; the final phase.
scaled: dict[
float,
tuple[Viz, float, float, float, float]
] = {}
for full_disp in reversed(overlay_table):
(
view,
viz,
y_start,
y_min,
y_max,
read_slc,
minor_in_view,
r_up,
r_dn,
) = overlay_table[full_disp]
key = 'open' if viz.is_ohlc else viz.name
xref = minor_in_view[0]['time']
match method:
# Pin this curve to the "major dispersion" (or other
# target) curve:
#
# - find the intersect datum and then scale according
# to the returns log-lin transform 'at that intersect
# reference data'.
# - if the pinning/log-returns-based transform scaling
# results in this minor/pinned curve being out of
# view, adjust the scalars to match **this** curve's
# y-range to stay in view and then backpropagate that
# scaling to all curves, including the major-target,
# which were previously scaled before.
case 'loglin_ref_to_curve':
# calculate y-range scalars from the earliest
# "intersect" datum with the target-major
# (dispersion) curve so as to "pin" the curves
# in the y-domain at that spot.
# NOTE: there are 2 cases for un-matched support
# in x-domain (where one series is shorter than the
# other):
# => major is longer than minor:
# - need to scale the minor *from* the first
# supported datum in both series.
#
# => major is shorter than minor:
# - need to scale the minor *from* the first
# supported datum in both series (the
# intersect x-value) but using the
# intersecting point from the minor **not**
# its first value in view!
yref = y_start
if mx_xref > xref:
(
xref_pin,
yref,
) = viz.i_from_t(
mx_xref,
return_y=True,
)
xref_pin_dt = pendulum.from_timestamp(xref_pin)
xref = mx_xref
if debug_print:
print(
'MAJOR SHORTER!!!\n'
f'xref: {xref}\n'
f'xref_pin: {xref_pin}\n'
f'xref_pin-dt: {xref_pin_dt}\n'
f'yref@xref_pin: {yref}\n'
)
(
i_start,
y_ref_major,
r_up_from_major_at_xref,
r_down_from_major_at_xref,
) = mx_viz.scalars_from_index(xref)
if debug_print:
print(
'MAJOR PIN SCALING\n'
f'mx_xref: {mx_xref}\n'
f'major i_start: {i_start}\n'
f'y_ref_major: {y_ref_major}\n'
f'r_up_from_major_at_xref {r_up_from_major_at_xref}\n'
f'r_down_from_major_at_xref: {r_down_from_major_at_xref}\n'
f'-----to minor-----\n'
f'xref: {xref}\n'
f'y_start: {y_start}\n'
f'yref: {yref}\n'
)
ymn = yref * (1 + r_down_from_major_at_xref)
ymx = yref * (1 + r_up_from_major_at_xref)
# if this curve's y-range is detected as **not
# being in view** after applying the
# target-major's transform, adjust the
# target-major curve's range to (log-linearly)
# include it (the extra missing range) by
# adjusting the y-mxmn to this new y-range and
# applying the inverse transform of the minor
# back on the target-major (and possibly any
# other previously-scaled-to-target/major, minor
# curves).
if ymn >= y_min:
ymn = y_min
r_dn_minor = (ymn - yref) / yref
# rescale major curve's y-max to include new
# range increase required by **this minor**.
mx_ymn = y_ref_major * (1 + r_dn_minor)
mx_viz.vs.yrange = mx_ymn, mx_viz.vs.yrange[1]
if debug_print:
print(
f'RESCALE {mx_viz.name} DUE TO {viz.name} ymn -> {y_min}\n'
f'-> MAJ ymn (w r_down: {r_dn_minor}) -> {mx_ymn}\n\n'
)
# rescale all already scaled curves to new
# increased range for this side as
# determined by ``y_min`` staying in view;
# re-set the `scaled: dict` entry to
# ensure that this minor curve will be
# entirely in view.
# TODO: re updating already-scaled minor curves
# - is there a faster way to do this by
# mutating state on some object instead?
for _view in scaled:
_viz, _yref, _ymn, _ymx, _xref = scaled[_view]
(
_,
_,
_,
r_down_from_out_of_range,
) = mx_viz.scalars_from_index(_xref)
new_ymn = _yref * (1 + r_down_from_out_of_range)
scaled[_view] = (
_viz, _yref, new_ymn, _ymx, _xref)
if debug_print:
print(
f'RESCALE {_viz.name} ymn -> {new_ymn}'
f'RESCALE MAJ ymn -> {mx_ymn}'
)
# same as above but for minor being out-of-range
# on the upside.
if ymx <= y_max:
ymx = y_max
r_up_minor = (ymx - yref) / yref
mx_ymx = y_ref_major * (1 + r_up_minor)
mx_viz.vs.yrange = mx_viz.vs.yrange[0], mx_ymx
if debug_print:
print(
f'RESCALE {mx_viz.name} DUE TO {viz.name} ymx -> {y_max}\n'
f'-> MAJ ymx (r_up: {r_up_minor} -> {mx_ymx}\n\n'
)
for _view in scaled:
_viz, _yref, _ymn, _ymx, _xref = scaled[_view]
(
_,
_,
r_up_from_out_of_range,
_,
) = mx_viz.scalars_from_index(_xref)
new_ymx = _yref * (1 + r_up_from_out_of_range)
scaled[_view] = (
_viz, _yref, _ymn, new_ymx, _xref)
if debug_print:
print(
f'RESCALE {_viz.name} ymn -> {new_ymx}'
)
# register all overlays for a final pass where we
# apply all pinned-curve y-range transform scalings.
scaled[view] = (viz, yref, ymn, ymx, xref)
if debug_print:
print(
f'Viz[{viz.name}]: @ {chart_name}\n'
f' .yrange = {viz.vs.yrange}\n'
f' .xrange = {viz.vs.xrange}\n\n'
f'xref: {xref}\n'
f'xref-dt: {pendulum.from_timestamp(xref)}\n'
f'y_min: {y_min}\n'
f'y_max: {y_max}\n'
f'RESCALING\n'
f'r dn: {r_down_from_major_at_xref}\n'
f'r up: {r_up_from_major_at_xref}\n'
f'ymn: {ymn}\n'
f'ymx: {ymx}\n'
)
# Pin all curves by their first datum in view to all
# others such that each curve's earliest datum provides the
# reference point for returns vs. every other curve in
# view.
case 'loglin_ref_to_first':
ymn = dnt.apply_r(y_start)
ymx = upt.apply_r(y_start)
view._set_yrange(yrange=(ymn, ymx))
# Do not pin curves by log-linearizing their y-ranges,
# instead allow each curve to fully scale to the
# time-series in view's min and max y-values.
case 'mxmn':
view._set_yrange(yrange=(y_min, y_max))
case _:
raise RuntimeError(
f'overlay ``method`` is invalid: `{method}`'
)
if scaled:
if debug_print:
print(
'SCALING PHASE' + '-'*100 + '\n\n'
'_________MAJOR INFO___________\n'
f'SIGMA MAJOR C: {mx_viz.name} -> {mx_disp}\n'
f'UP MAJOR C: {upt.viz.name} with disp: {upt.rng}\n'
f'DOWN MAJOR C: {dnt.viz.name} with disp: {dnt.rng}\n'
f'xref: {mx_xref}\n'
f'xref-dt: {pendulum.from_timestamp(mx_xref)}\n'
f'dn: {r_dn_mn}\n'
f'up: {r_up_mx}\n'
f'mx_ymn: {mx_ymn}\n'
f'mx_ymx: {mx_ymx}\n'
'------------------------------'
)
for (
view,
(viz, yref, ymn, ymx, xref)
) in scaled.items():
# NOTE XXX: we have to set each curve's range once (and
# ONLY ONCE) here since we're doing this entire routine
# inside of a single render cycle (and apparently calling
# `ViewBox.setYRange()` multiple times within one only takes
# the first call as serious...) XD
view._set_yrange(yrange=(ymn, ymx))
profiler(f'{viz.name}@{chart_name} log-SCALE minor')
if debug_print:
print(
'_________MINOR INFO___________\n'
f'Viz[{viz.name}]: @ {chart_name}\n'
f' .yrange = {viz.vs.yrange}\n'
f' .xrange = {viz.vs.xrange}\n\n'
f'xref: {xref}\n'
f'xref-dt: {pendulum.from_timestamp(xref)}\n'
f'y_start: {y_start}\n'
f'y min: {y_min}\n'
f'y max: {y_max}\n'
f'T scaled ymn: {ymn}\n'
f'T scaled ymx: {ymx}\n\n'
'--------------------------------\n'
)
# finally, scale the major target/dispersion curve to
# the (possibly re-scaled/modified) values that were set in
# the transform-phase loop.
mx_view._set_yrange(yrange=(mx_ymn, mx_ymx))
if debug_print:
print(
f'END UX GRAPHICS CYCLE: @{chart_name}\n'
+
divstr
+
'\n'
)
profiler(f'<{chart_name}>.interact_graphics_cycle()')
if not do_linked_charts:
break
profiler.finish()


@ -1,7 +1,6 @@
from contextlib import asynccontextmanager as acm
from functools import partial
import os
from typing import AsyncContextManager
from pathlib import Path
from shutil import rmtree
@ -11,7 +10,7 @@ from piker import (
# log,
config,
)
from piker._daemon import (
from piker.service import (
Services,
)
from piker.clearing._client import open_ems
@ -88,7 +87,7 @@ async def _open_test_pikerd(
'''
import random
from piker._daemon import maybe_open_pikerd
from piker.service import maybe_open_pikerd
if reg_addr is None:
port = random.randint(6e3, 7e3)
@ -151,8 +150,9 @@ async def _open_test_pikerd_and_ems(
fqsn,
mode=mode,
loglevel=loglevel,
) as ems_services):
yield (services, ems_services)
) as ems_services,
):
yield (services, ems_services)
@pytest.fixture
@ -168,7 +168,7 @@ def open_test_pikerd_and_ems(
mode,
loglevel,
open_test_pikerd
)
)
@pytest.fixture(scope='module')


@ -3,7 +3,7 @@ import trio
from typing import AsyncContextManager
from piker._daemon import Services
from piker.service import Services
from piker.log import get_logger
from elasticsearch import Elasticsearch


@ -9,8 +9,7 @@ import pytest
import trio
import tractor
from piker.log import get_logger
from piker._daemon import (
from piker.service import (
find_service,
Services,
)