Compare commits

...

210 Commits

Author SHA1 Message Date
Tyler Goodlet e2e3e30d7a Attempt to keep selected item highlighted
This attempt was unsuccessful since trying to (re)select the last
highlighted item on either an "enter" or "click" of that item causes
a hang and then segfault in `Qt`; no clue why..

Adds a `keep_current_item_selected: bool` flag to
`CompleterView.show_cache_entries()` but using it seems to always cause
a hang and crash; we keep all potential use spots commented out for now,
obviously to avoid this. Also included is a bunch of tidying to logic
blocks in the kb-control loop for readability.
2023-01-10 12:42:26 -05:00
Tyler Goodlet dd292b3652 Don't raise on quote feed lags to dark clearing loop 2023-01-10 12:42:26 -05:00
Tyler Goodlet 140d21c179 Lol, pull hist chart from the display state 2023-01-10 12:42:26 -05:00
Tyler Goodlet 1412c435fd Make (cache) search-results a `set` and avoid overlay duplicate entries 2023-01-10 12:42:26 -05:00
Tyler Goodlet acef3505fd Move sync log msg back to info 2023-01-10 12:42:26 -05:00
Tyler Goodlet bfeebba734 Take outer-interval values in `Viz.datums_range()` 2023-01-10 12:42:26 -05:00
Tyler Goodlet e2a299fe6c Clean a buncha cruft from render mod 2023-01-10 12:42:26 -05:00
Tyler Goodlet 9c46b92ce7 Don't deliver shms from `start_backfill()`, they're not used 2023-01-10 12:42:26 -05:00
Tyler Goodlet 9a0605e405 `deribit`: drop old `backfill_bars()` ep 2023-01-10 12:42:26 -05:00
Tyler Goodlet ae6d5b07e7 `kraken`: only do unsub if connected
Trying to send a message in the `NoBsWs.fixture()` exit when the ws is
not currently connected causes a double `._stack.close()` call which
will corrupt `trio`'s coro stack. Instead only do the unsub if we detect
the ws is still up.

Also drops the legacy `backfill_bars()` module endpoint.

Fixes #437
2023-01-10 12:42:26 -05:00
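
A minimal sketch of the guard described in the commit above, using the `NoBsWs.connected()` predicate from the adjacent commit; the fixture shape and message payloads here are illustrative, not the actual `piker` code:

```python
from contextlib import asynccontextmanager

class NoBsWs:
    '''Stand-in for the real reconnecting-websocket wrapper.'''
    def __init__(self) -> None:
        self._connected: bool = False

    def connected(self) -> bool:
        # predicate from the adjacent commit
        return self._connected

    async def send_msg(self, msg: dict) -> None:
        ...  # JSON-over-ws send elided

@asynccontextmanager
async def fixture(ws: NoBsWs):
    await ws.send_msg({'event': 'subscribe'})  # illustrative payload
    try:
        yield ws
    finally:
        # only unsub when the socket is still up: sending on a dead ws
        # is what triggered the double `._stack.close()` described above.
        if ws.connected():
            await ws.send_msg({'event': 'unsubscribe'})
```
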
Tyler Goodlet 61c4147b73 Add `NoBsWs.connected()` predicate 2023-01-10 12:42:26 -05:00
Tyler Goodlet d2fec7016a Handle last-in-view time slicing edge case
Whenever the last datum is in view `slice_from_time()` needs to always
spec the final array index (i.e. the len - 1 value we set as
`read_i_max`) to avoid a uniform-step arithmetic error where gaps in the
underlying time series cause an index that's too low to be returned.
2023-01-10 12:42:26 -05:00
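
In sketch form (names here are hypothetical) the clamp amounts to:

```python
import numpy as np

def stop_index(times: np.ndarray, stop_t: float, step: int) -> int:
    # uniform-step arithmetic guess for the final read index..
    read_i_max = int((stop_t - times[0]) // step)

    # ..but when the last datum is in view, gaps in the series make
    # that guess too low, so always clamp to the final array index.
    if stop_t >= times[-1]:
        read_i_max = len(times) - 1

    return read_i_max
```
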
Tyler Goodlet 05fced37f1 Drop bp blocks from formatters mod 2023-01-10 12:42:26 -05:00
Tyler Goodlet b5fa00d63d Fix query-mode cursor labels to work with epoch-indexing 2023-01-10 12:42:26 -05:00
Tyler Goodlet 59483dc8e8 Breakpoint when bad 1m history offsets are detected 2023-01-10 12:42:26 -05:00
Tyler Goodlet 496ef0a9ac `binance`: always request an extra 1min OHLC bar
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when the `end_dt=None`, indicating "get the latest bar". Add
a breakpoint block that should trigger whenever the latest bar vs. the
latest epoch time is mismatched; we'll remove this after some testing
verifying the history bars issue is resolved.

Further this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
2023-01-10 12:42:26 -05:00
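
The "pad by a minute" logic is roughly the following; a sketch using stdlib `datetime`, the real `Client.bars()` signature may differ:

```python
from datetime import datetime, timedelta, timezone

def end_time_for_bars(end_dt: datetime | None) -> datetime:
    # `end_dt=None` means "get the latest bar": since the remote
    # history indexing rounds down/back to the previous time step,
    # request one extra minute so the most recent 1m bar is included.
    if end_dt is None:
        end_dt = datetime.now(timezone.utc) + timedelta(minutes=1)
    return end_dt
```
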
Tyler Goodlet ba7c8bb5a3 Don't receive sample-index msgs in feed layer 2023-01-10 12:42:26 -05:00
Tyler Goodlet a4408fc740 Support not registering for sample-index msgs via `sub_for_broadcasts: bool` flag 2023-01-10 12:42:26 -05:00
Tyler Goodlet c2c9053ca6 Never restart `ib-gw` containers on boot 2023-01-10 12:42:26 -05:00
Tyler Goodlet fdc581f215 Use `open_sample_stream()` in display loop 2023-01-10 12:42:26 -05:00
Tyler Goodlet c0f1a29bfd Use `open_sample_stream()` to increment fsp buffers 2023-01-10 12:42:26 -05:00
Tyler Goodlet 3328822e44 Port feed layer to use new `samplerd` APIs
Always use `open_sample_stream()` to register fast and slow quote feed
buffers and get a sampler stream which we use to trigger
`Sampler.broadcast_all()` calls on the service side after backfill
events.
2023-01-10 12:42:26 -05:00
Tyler Goodlet 8ed48add18 Drop `Flume.index_stream()`, `._sampling.open_sample_stream()` replaces it 2023-01-10 12:42:26 -05:00
Tyler Goodlet c531f8a69a Implement a `samplerd` singleton actor service
Now spawned under the `pikerd` tree as a singleton-daemon-actor we offer
a slew of new routines in support of this micro-service:

- `maybe_open_samplerd()` and `spawn_samplerd()` which provide the
  `._daemon.Services` integration to conduct service spawning.
- `open_sample_stream()` which is a client-side endpoint that does all
  the work of (lazily) starting the `samplerd` service (if dne),
  registering shm buffers for update, and connecting a sample-index
  stream for iteration by the caller.
- `register_with_sampler()` which is the `samplerd`-side service task
  endpoint implementing all the shm buffer and index-stream registry
  details as well as logic to ensure a lone service task runs
  `Services.increment_ohlc_buffer()`; it increments at the shortest period
  registered which, for now, is the default 1s duration.

Further impl notes:
- fixes to `Services.broadcast()` to ensure broken streams get discarded
  gracefully.
- we use a `pikerd` side singleton mutex `trio.Lock()` to ensure
  one-and-only-one `samplerd` is ever spawned per `pikerd` actor tree.
2023-01-10 12:42:26 -05:00
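
The spawn-mutex pattern is roughly as follows; a hedged sketch with the real `._daemon.Services` machinery elided:

```python
import trio

# one-and-only-one `samplerd` per `pikerd` actor tree: a single
# mutex guards the spawn-or-attach decision across client tasks.
_spawn_lock = trio.Lock()
_samplerd_up: bool = False

async def maybe_open_samplerd_sketch() -> None:
    global _samplerd_up
    async with _spawn_lock:
        # first task through the lock spawns the service; later
        # entrants see the flag set and simply attach/register.
        if not _samplerd_up:
            await spawn_samplerd_sketch()
            _samplerd_up = True

async def spawn_samplerd_sketch() -> None:
    ...  # `Services.start_service_task()` integration elided
```
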
Tyler Goodlet 09b53b133b Make `._daemon.Services` for use as singleton
Drop the `_services` module level ref and adjust all client code to
match. Drop struct inheritance and convert all methods to class level.
Move `Brokerd.locks` -> `Services.locks` and add sampling mod to pikerd
enabled set.
2023-01-10 12:42:26 -05:00
Tyler Goodlet 33e7e204d8 Begin formalizing `Sampler` singleton API
We're moving toward a single actor managing sampler work, distributed
independently of `brokerd` services, such that a user can run samplers
on different hosts than the real-time data feed infra. Most of the
implementation details include aggregating `.data._sampling` routines
into a new `Sampler` singleton type.

Move the following methods to class methods:
- `.increment_ohlc_buffer()` to allow a single task to increment all
  registered shm buffers.
- `.broadcast()` for IPC relay to all registered clients/shms.

Further add a new `maybe_open_global_sampler()` which allocates
a service nursery and assigns it to the `Sampler.service_nursery`; this
is prep for putting the step incrementer in a singleton service task
higher up the data-layer actor tree.
2023-01-10 12:42:26 -05:00
Tyler Goodlet 141f4cf018 Add back another panes resize during startup 2023-01-10 12:42:26 -05:00
Tyler Goodlet 7f2a5e267f Always zero-on-step $vlm 2023-01-10 12:42:26 -05:00
Tyler Goodlet a09735e0f0 Do full marker width after line 2023-01-10 12:42:26 -05:00
Tyler Goodlet 0fb44e1ec0 Fix indent level 2023-01-10 12:42:26 -05:00
Tyler Goodlet 049d7d0dc0 Make $vlm axis color same as clears 2023-01-10 12:42:26 -05:00
Tyler Goodlet f858dbcf68 Correctly load order mode for first fqsn in overlay set 2023-01-10 12:42:26 -05:00
Tyler Goodlet 075dd94759 Drop meaning the clearing rate, use per step count 2023-01-10 12:42:26 -05:00
Tyler Goodlet 6d2077e8e6 Move $vlm y-axis to LHS 2023-01-10 12:42:26 -05:00
Tyler Goodlet 7ec30efff4 Better index step value scanning by checking with our expected set 2023-01-10 12:42:26 -05:00
Tyler Goodlet adeb969810 Repair auto-y-ranging to always include L1 spread
Goes back to always adjusting the y-axis range to include the L1 spread
and clearing label in view whenever the last datum is also in view;
previously this was broken after reworking the display loop for
multi-feeds.

Drops a bunch of old commented tick looping cruft from before we started
using tick-type framing. Also adds more stringent guards for ignoring,
but error logging, quote values that are more than 25% out of range; it
seems particularly our `ib` feed has some issues with strange `price`
values that are way off here and there?
2023-01-10 12:42:26 -05:00
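
The out-of-range guard amounts to something like this; tolerance and names are illustrative:

```python
import logging

log = logging.getLogger(__name__)

def price_ok(price: float, last: float, tolerance: float = 0.25) -> bool:
    # ignore, but error-log, quotes way off from the last known price;
    # some feeds (`ib` in particular) emit junk values here and there.
    if last and abs(price - last) / last > tolerance:
        log.error(f'Ignoring out-of-range price {price} (last: {last})')
        return False
    return True
```
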
Tyler Goodlet bc271c4ebc Mouse interaction tweaks
- adjust the zoom focal to be the min of the view-right coord or the
  right-most point on the flow graphic in view and drop all the legacy
  l1-in-view focal point cruft.
- flip to not auto-scaling overlays by default.
- change the `._set_yrange()` margin to `0.09`.
- drop `use_vr: bool` usage.
2023-01-10 12:42:26 -05:00
Tyler Goodlet c47fa14d8c Modernize optional path variable type annots 2023-01-10 12:42:26 -05:00
Tyler Goodlet 783285e92c Drop `._index_step` from formatters and instead defer to `Viz.index_step()` 2023-01-10 12:42:26 -05:00
Tyler Goodlet 4ae46c1e20 Further fixes `Viz.default_view()` and `.index_step()`
Use proper uppx scaling both when scaling the data to the x-domain
index-range and when the uppx is < 1 (now that we support it) such that
both the fast and slow chart always appropriately scale and offset to
the y-axis with the last datum graphic just adjacent to the order line
arrow markers.

Further this fixes the `.index_step()` calc to use the "earliest" 16
values to compute the expected sample step diff, since the last set
often contained gaps due to start up race conditions and generated
unexpected/incorrect output.

Further this drops the `.curve_width_pxs()` method and replaces it with
`.px_width()` (taken from the graphics object API) which instead returns
the pixel count for the whole view width rather than for the
x-domain-data-range within the view.
2023-01-10 12:42:26 -05:00
Tyler Goodlet c0ef20894c Make `FlowGraphic.x_last()` be optionally `None`
In the case where the last-datum-graphic hasn't been created yet, simply
return a `None` from this method so the caller can choose to ignore the
output. Further, drop `.px_width()` since it makes more sense defined on
`Viz` as well as the previously commented `BarItems.x_uppx()` method.
Also, don't round the `.x_uppx()` output since it can then be used when
< 1 to do x-domain scaling during high zoom usage.
2023-01-10 12:42:26 -05:00
Tyler Goodlet 009102fc05 Drop edge case from `slice_from_time()`
Doesn't seem like we really need to handle the situation where the start
or stop input time stamps are outside the index range of the data since
the new binary search handling via `numpy.searchsorted()` covers this
case at minimal runtime cost and with an equally correct output. Allows
us to drop some other indexing endpoint internal variables as well.
2023-01-10 12:42:26 -05:00
Tyler Goodlet 2fde315089 Use left-style index search on RHS scan as well 2023-01-10 12:42:26 -05:00
Tyler Goodlet daf1cfc785 Use static `L1Label._x_br_offset` as l1 label length 2023-01-10 12:42:26 -05:00
Tyler Goodlet ef19604698 Add a parent-type for graphics: `FlowGraphic`
Factor some common methods into the parent type:
- `.x_uppx()` for reading the horizontal units-per-pixel.
- `.x_last()` for reading the "closest to y-axis" last datum coordinate
  for zooming "around" during mouse interaction.
- `.px_width()` for computing the max width of any curve in view in
  pixels.

Adjust all previous derived `pg.GraphicsObject` child types to now
inherit from this new parent and in particular enable proper `.x_uppx()`
support to `BarItems`.
2023-01-10 12:42:26 -05:00
Tyler Goodlet b32cb7ecad Just-offset-from-arrow-marker on slow chart
We want the fast and slow chart to behave the same on calls to
`Viz.default_view()` so adjust the offset calc to make both work:
- just offset by the line len regardless of step / uppx
- add back the `should_line: bool` output from `render_bar_items()` (and
  use it to set a new `ds_allowed: bool` guard variable) so that we can
  bypass calling the m4 downsampler unless the bars have been switched
  to the interpolation line graphic (which we normally required before
  any downsampling of OHLC graphics data).

Further, this drops use of the `use_vr: bool` flag from all rendering
since we pretty much always use it by default.
2023-01-10 12:42:26 -05:00
Tyler Goodlet aabd46d707 Drop l1 labels attr from chart widget 2023-01-10 12:42:26 -05:00
Tyler Goodlet 23070e5fab TOSQUASH: bd78f17f (duplicate hist frames) 2023-01-10 12:42:26 -05:00
Tyler Goodlet 575e60bd1d Handle empty `indexes` input edge case.. 2023-01-10 12:42:26 -05:00
Tyler Goodlet 905b37e7ac TOSQUASH: 84f19308 (l1 rework) 2023-01-10 12:42:26 -05:00
Tyler Goodlet 2019db0fe7 TOSQUASH: b6fd8427 (kraken src fiat parsing) 2023-01-10 12:42:26 -05:00
Tyler Goodlet d0858236c1 Set cursor label color to "bracket" 2023-01-10 12:42:26 -05:00
Tyler Goodlet 46d6b1f6e4 Don't set y-axis label colors to curve's, use the default from global scheme 2023-01-10 12:42:26 -05:00
Tyler Goodlet c9104880c8 Simplify L1 labels for multicharts
Instead of having the l1 lines be inside the view space, move them to be
inside their respective axis (with only a 16 unit portion inside the
view) such that the clear price label can overlay with them nicely
without obscuring them; this is much better suited to multiple adjacent
y-axes and in general is simpler and less noisy.

Further `L1Labels` + `LevelLabel` style tweaks:
- adjust `.rect` positioning to be "right" (i.e. inside the parent
  y-axis) with a slight 16 unit shift toward the viewbox (using the new
  `._x_br_offset`) to allow seeing each level label's line even when the
  clearing price label is positioned at that same level.
- add a newline's worth of vertical space to each of the bid/ask labels
  so that L1 labels' text content isn't ever obscured by the clear price
  label.
- set a low (10) z-value to ensure l1 labels are always placed
  underneath the clear price label.
- always fill the label rect with the chosen background color.
- make labels fully opaque so as to always make them hide the parent
  axes' `.tickStrings()` contents.
- make default color the "default" from the global scheme.
- drop the "price" part from the l1 label text contents, just show the
  book-queue's amount (in dst asset's units, aka the potential clearing vlm).
2023-01-10 12:42:26 -05:00
Tyler Goodlet ac1f4571d9 Fix x-axis labelling when using an epoch domain
Previously with array-int indexing we had to map the input x-domain
"indexes" passed to `DynamicDateAxis._indexes_to_timestr()`. In the
epoch-time indexing case we obviously don't need to lookup time stamps
from the underlying shm array and can instead just cast to `int` and
relay the values verbatim.

Further, this patch includes some style adjustments to `AxisLabel` to
better enable multi-feed chart overlays by avoiding L1 label clutter
when multiple y-axes are stacked adjacent:
- adjust the `Axis` typical max string to include a couple spaces suffix
  providing for a bit more margin between side-by-side y-axes.
- make the default label (fill) color the "default" from the global
  color scheme and drop its opacity to 0.9
- add some new label placement options and use them in the
  `.boundingRect()` method:
  * `._x/y_br_offset` for shifting the overall label relative to its
    parent axis.
  * `._y_txt_h_scaling` for increasing the bounding rect's height
    without including more whitespace in the label's text content.
- ensure labels have a high z-value such that by default they are always
  placed "on top"; this way when we adjust the l1 labels they can be set
  to a lower value and thus never obscure the last-price label.
2023-01-10 12:42:26 -05:00
Tyler Goodlet cdc22e0807 Sync 1s (or less) sampler steps using rounded now-epoch 2023-01-10 12:42:26 -05:00
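
In sketch form, "rounded now-epoch" syncing just sleeps to the next period boundary (names hypothetical):

```python
import time
import trio

async def wait_next_step(period_s: float = 1.0) -> float:
    # sync 1s (or shorter) sampler steps across tasks/hosts by
    # sleeping to the next rounded now-epoch period boundary.
    now = time.time()
    next_t = (now // period_s + 1) * period_s
    await trio.sleep(next_t - now)
    return next_t
```
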
Tyler Goodlet 7649df1a24 Add commented append slice-len sanity check 2023-01-10 12:42:26 -05:00
Tyler Goodlet b2cff0af6f Always `.error()` log unknown queries for `marketstore` 2023-01-10 12:42:26 -05:00
Tyler Goodlet 9f37b33167 Only accept 6 tries for the same duplicate hist frame
When we see multiple history frames that are duplicates of the request
set, bail on re-trying after a number of tries (6 just cuz) and return
early from the tsdb backfill loop; presume that this many duplicates
means we've hit the beginning of history. Use a `collections.Counter`
for the duplicate counts and make sure to warn-log in such cases.
2023-01-10 12:42:26 -05:00
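
A sketch of the bail-out logic with `collections.Counter` (names hypothetical):

```python
from collections import Counter

frame_dupes: Counter = Counter()
MAX_DUPES: int = 6  # "6 just cuz"

def hit_start_of_history(frame_start_t: float) -> bool:
    # count duplicate frames keyed by their start time; after enough
    # repeats presume there's no further (older) history to be had.
    frame_dupes[frame_start_t] += 1
    return frame_dupes[frame_start_t] > MAX_DUPES
```
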
Tyler Goodlet 7faca820bd Use `np.diff()` on last 16 samples instead of only last datum pair 2023-01-10 12:42:26 -05:00
Tyler Goodlet 983e495522 `kraken`: don't presume src fiat symbol size in pos predicate 2023-01-10 12:42:26 -05:00
Tyler Goodlet 03300549c2 Drop symbol token size =6 check 2023-01-10 12:42:26 -05:00
Tyler Goodlet aaf8754776 Use recon set on stack closing during reconnect
Hopefully resolves https://github.com/pikers/piker/issues/434
2023-01-10 12:42:26 -05:00
Tyler Goodlet 4b5b4f96a9 Enable the experimental `QPrivatePath` functionality from latest `pyqtgraph` 2023-01-10 12:42:26 -05:00
Tyler Goodlet d14435fa59 Fix overlayed slow chart "treading"
Turns out we were updating the wrong `Viz`/`DisplayState` inside the
closure-style `increment_history_view()` (probably due to looping
through the flumes and dynamically closing in that task-func).. Instead
define the history incrementer at module level and pass in the
`DisplayState` explicitly. Further rework the `DisplayState` attrs to be
more focused around the `Viz` associated with the fast and slow chart
and be sure to adjust output from each `Viz.incr_info()` call to the
latest update. Oh, and just tweaked the line palette for the moment.

FYI "treading" here is referring to the x-shifting of the curve when
the last datum is in view such that on new sampled appends the "last"
datum is kept in the same x-location in UI terms.
2023-01-10 12:42:26 -05:00
Tyler Goodlet 99e100cd6b Make `.increment_view()` take in a `datums: int` and always scale it by sample step size 2023-01-10 12:42:26 -05:00
Tyler Goodlet 8e300a3aed Make `Viz.incr_info()` do treading with time-index, and appending with array-index 2023-01-10 12:42:26 -05:00
Tyler Goodlet 89352a3b3b Rename `reset` -> `reset_cache` 2023-01-10 12:42:26 -05:00
Tyler Goodlet d4c2aeb4e0 Fix gap detection on RHS; always bin-search on overshot time range 2023-01-10 12:42:26 -05:00
Tyler Goodlet ddf8fa7b7a Add type annots to vars inside `Render.render()` 2023-01-10 12:42:26 -05:00
Tyler Goodlet abac60a0f4 Drop coordinate cacheing from `BarItems`, causes weird jitter on pan 2023-01-10 12:42:26 -05:00
Tyler Goodlet 6e6c6484fc Fix f-str in duplicate frame msg print 2023-01-10 12:42:26 -05:00
Tyler Goodlet 134b8129b5 `ib`: fix position log msg 2023-01-10 12:42:26 -05:00
Tyler Goodlet f7cfb848c5 Add `ChartPlotWidget.main_viz: Viz` convenience `@property` 2023-01-10 12:42:26 -05:00
Tyler Goodlet fd02a60ab0 Make `Viz.incr_info()` sample rate agnostic
Mainly it was the global should-we-increment logic that needs to be
independent for the fast vs. slow chart such that the slow isn't
update-shifted by the fast and vice versa. We do this using a new
`'i_last_slow'` key in the `DisplayState.globalz: dict` which is
a singleton for each sample-rate-specific chart and works for both time
and array indexing.

Also, we drop some old commented `graphics.draw_last_datum()` code that
never ended up being needed again inside the coordinate cache reset
block.
2023-01-10 12:42:26 -05:00
Tyler Goodlet de585d2dc1 Use array-`int`-indexing on single feed
Might as well since it makes the chart look less gappy and we can easily
flip the index switch now B)

Also adds a new `'i_slow_last'` key to `DisplayState` for a singleton
across all slow charts and thus no more need for special case logic in
`viz.incr_info()`.
2023-01-10 12:42:26 -05:00
Tyler Goodlet 4b76f9ec9a Align step curves the same as OHLC bars 2023-01-10 12:42:26 -05:00
Tyler Goodlet 28d9c781e8 Add `IncrementalFormatter.x_offset: np.ndarray`
Define the x-domain coords "offset" (determining the curve graphics
per-datum placement) for each formatter such that there's only one place
to change it when needed. Obviously each graphics type has its own
dimensionality and this is reflected by the array shapes on each
subtype.
2023-01-10 12:42:26 -05:00
Tyler Goodlet 6756ca5931 Adjust OHLC bar x-offsets to be time span matched
Previously we were drawing with the middle of the bar on each index with
arms to either side: +/- some arm length. Instead this changes so that
each bar is drawn *after* each index/timestamp such that in graphics
coords the bar's span more correctly matches the time span in the
x-domain. This makes the linked region between slow and fast chart
directly match (without any transform) for epoch-time indexing such that
the last x-coord in view on the fast chart is no more than the
next time step in (downsampled) slow view.

Deats:
- adjust in `._pathops.path_arrays_from_ohlc()` and take a `bar_w` bar
  width input (normally taken from the data step size).
- change `.ui._ohlc.bar_from_ohlc_row()` and
  `BarItems.draw_last_datum()` to match.
2023-01-10 12:42:26 -05:00
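
The offset arithmetic is roughly the following; the gap fraction here is an illustrative choice, not the repo's value:

```python
def bar_x_span(t: float, bar_w: float) -> tuple[float, float, float]:
    # draw each bar *after* its index/timestamp so the graphics-coord
    # span matches the time span: [t, t + bar_w) instead of t +/- arms.
    gap = bar_w * 0.16       # hypothetical inter-bar gap fraction
    start = t + gap          # open arm (left edge)
    mid = t + bar_w * 0.5    # vertical body
    stop = t + bar_w - gap   # close arm (right edge)
    return start, mid, stop
```
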
Tyler Goodlet ef6a1167b0 `Viz._index_field` a `typing.Literal[str]` 2023-01-10 12:42:25 -05:00
Tyler Goodlet 325fe1ca67 Set `path_arrays_from_ohlc(use_time_index=True)` on epoch indexing
Allows easily switching between normal array `int` indexing and time
indexing by just flipping the `Viz._index_field: str`.

Also, guard all the x-data audit breakpoints with a time indexing
condition.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 0316304e3d Ugh, use `bool` flag to determine index field.. 2023-01-10 12:42:25 -05:00
Tyler Goodlet 72c6b5f646 Make `LinearRegion` link using epoch-time index
Turned out to be super simple to get the first draft to work since the
fast and slow chart now use the same domain, however, it seems like
maybe there's an offset issue still where the fast may be a couple
minutes ahead of the slow?

Need to dig in a bit..
2023-01-10 12:42:25 -05:00
Tyler Goodlet 1f356b6e10 `ib`: Add treasury yield futs to adhoc fqsn set 2023-01-10 12:42:25 -05:00
Tyler Goodlet 1059520212 Add global `i_step` per overlay to `DisplayState`
Using a global "last index step" (via module var) obviously has problems
when working with multiple feed sets in a single global app instance:
any separate feed-set will be incremented according to an app-global
index-step and thus won't correctly calc per-feed-set-step update info.

Impl deatz:
- drop `DisplayState.incr_info()` (since previously moved to `Viz`) and
  call that method on each appropriate `Viz` instance where necessary;
  further ensure the appropriate `DisplayState` instance is passed in to
  each call as a `state: DisplayState` arg.
- add `DisplayState.hist_vars: dict` for history chart (sets) to
  determine the per-feed (not set) current slow chart (time) step.
- add `DisplayState.globalz: dict` to house a common per-feed-set state
  and use it inside the new `Viz.incr_info()` such that
  a `should_increment: bool` can be returned and used by the display
  loop to determine whether to x-shift the current chart.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 2a4fafcf21 Move `DisplayState.incr_info()` -> `Viz` 2023-01-10 12:42:25 -05:00
Tyler Goodlet e363f102a3 Drop `tractor` assert bug note 2023-01-10 12:42:25 -05:00
Tyler Goodlet d2b7cb7b35 Move `Viz` layer to new `.ui` mod 2023-01-10 12:42:25 -05:00
Tyler Goodlet 81d6d1d80b Fix line -> bars on 6x UPPX
Read the `Viz.index_step()` directly to avoid always reading 1 on the
slow chart; this was completely broken before and resulting in not
rendering the bars graphic on the slow chart until at a true uppx of
1 which obviously doesn't work for 60 width bars XD

Further cleanups to `._render` module:
- drop `array` output from `Renderer.render()`, `read_from_key` input
  and fix type annot.
- drop `should_line`, `changed_to_line` and `render_kwargs` from
  `render_baritems()` outputs and instead calc `should_redraw` logic
  inside the func body and return as output.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 57264f87c6 Drop unused `read_src_from_key: bool` to `.format_to_1d()` 2023-01-10 12:42:25 -05:00
Tyler Goodlet cd7d36d2d8 Right, do index lookup for int-index as well.. 2023-01-10 12:42:25 -05:00
Tyler Goodlet 30b9130be6 Fix formatter xy ndarray first prepend case
First allocation vs. first "prepend" of source data to an xy `ndarray`
format **must be mutex** in order to avoid a double prepend.

Previously when both blocks were executed we'd end up with
a `.xy_nd_start` that was decremented (at least) twice as much as it
should be on the first `.format_to_1d()` call which is obviously
incorrect (and causes problems for m4 downsampling as discussed below).
Further, since the underlying `ShmArray` buffer indexing is managed
(i.e. write-updated) completely independently from the incremental
formatter updates and internal xy indexing, we can't use
`ShmArray._first.value` and instead need to use the particular `.diff()`
output's prepend length value to decrement the `.xy_nd_start` on updates
after initial alloc.

Problems this resolves with m4:
- m4 uses an x-domain diff to calculate the number of "frames" to
  downsample to; this is normally based on the ratio of pixel columns on
  screen vs. the size of the input xy data.
- previously using an int-index (not epoch time) the max diff between
  first and last index would be the size of the input buffer and thus
  would never cause a large mem allocation issue (though it may have
  been inefficient in terms of needed size).
- with an epoch time index this max diff could explode if you had some
  near-now epoch time stamp **minus** an x-allocation value: generally
  some value in `[-0.5, 0.5]`, which would result in a massive frame
  count and thus an internal `np.ndarray()` allocation causing either
  a crash in `numba` code or actual system mem over-allocation.

Further, put in some more x value checks that trigger breakpoints if we
detect values that caused this issue - we'll remove em after this has
been tested enough.
2023-01-10 12:42:25 -05:00
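
To make the m4 failure mode concrete, the frame count scales with the x-range span, so a bad (near-zero) first x-value under epoch indexing explodes the allocation; a simplified sketch of the relationship, not the actual m4 code:

```python
import numpy as np

def m4_frame_count(x: np.ndarray, uppx: float) -> int:
    # number of downsample "frames" ~ x-range span / units-per-pixel;
    # with an int index the span is bounded by the buffer length, but
    # an epoch index paired with a tiny alloc-default first value gives
    # a span of ~1.6e9 "seconds" -> a massive `np.ndarray` allocation.
    x_span = float(x[-1] - x[0])
    return max(1, int(x_span / uppx))
```
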
Tyler Goodlet abf3b08328 Handle time-indexing for fill arrows
Call into a reworked `Flume.get_index()` for both the slow and fast
chart and do time index clipping to last datum where necessary.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 95b9ae66b2 Restore coord-cache resetting
Turns out we can't seem to avoid the artefacts when click-drag-scrolling
(results in weird repeated "smeared" curve segments) so just go back to
the original code.
2023-01-10 12:42:25 -05:00
Tyler Goodlet ac6a1b1521 ib: ignore throttles on `.get_head_time()` 2023-01-10 12:42:25 -05:00
Tyler Goodlet 734c818ed0 Add some commented debug prints for default fmtr 2023-01-10 12:42:25 -05:00
Tyler Goodlet 1546ff0001 Slice to an extra index around each timestamp input 2023-01-10 12:42:25 -05:00
Tyler Goodlet b4384209b6 Ensure FSPs last 2 times are synced with its source 2023-01-10 12:42:25 -05:00
Tyler Goodlet de791e62c8 Drop passing `render_data` to `Curve.draw_last_datum()` 2023-01-10 12:42:25 -05:00
Tyler Goodlet de26fecff4 Add back `.default_view()` slice logic for `int` indexing 2023-01-10 12:42:25 -05:00
Tyler Goodlet ff34ac9ae7 Block out `do_print` stuff inside `Viz.maxmin()` 2023-01-10 12:42:25 -05:00
Tyler Goodlet 69681347a4 Implement `stop_t` gap adjustments; the good lord said it is the problem 2023-01-10 12:42:25 -05:00
Tyler Goodlet 26a79d667e Draw last datums on boot
Ensures that a "last datum" graphics object exists so that zooming can
read it using `.x_last()`. Also, disable the linked region stuff for now
since it's totally borked after flipping to the time indexing.
2023-01-10 12:42:25 -05:00
Tyler Goodlet d5a4dcea70 Use `Curve.x_last()` for zoom focal point 2023-01-10 12:42:25 -05:00
Tyler Goodlet 7ef6219d01 Delegate to `Viz.default_view()` on chart
Also add a rage print to not forget about the global index
tracking/diffing in the display loop we still need to change.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 5976d68bb2 Re-implement `.default_view()` on `Viz`
Since we don't really need it defined on the "chart widget" move it to
a viz method and rework it to hell:

- always discard the invalid view l > r case.
- use the graphic's UPPX to determine UI-to-scene coordinate scaling for
  the L1-label collision detection, if there is no L1 just offset by
  a few (index step scaled) datums; this allows us to drop the 2x
  x-range calls as was hacked previous.
- handle no-data-in-view cases explicitly and error if we get any
  ostensibly impossible cases.
- expect caller to trigger a graphics cycle if needed.

Further supporting this includes reworking a slew of other important
details:

- add `Viz.index_step()`, an idempotently computed index (presumably
  uniform) step value which is needed for variable sample rate graphics
  displayed on an epoch (second) time index.
- rework `Viz.datums_range()` to pass view x-endpoints as first and last
  elements in return `tuple`; tighten up snap-to-data edge case logic
  using `max()`/`min()` calls and better internal var naming.
- adjust all calls to `slice_from_time()` to not expect an "abs" slice.
- drop all `.yrange` resetting since we can just have the `Renderer` do
  it when necessary.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 3a0cbe518e Add gap detection for `stop_t`, though only report atm 2023-01-10 12:42:25 -05:00
Tyler Goodlet aaa1bccd60 Add `.x_last()` meth to flow graphics 2023-01-10 12:42:25 -05:00
Tyler Goodlet 14bcba367e Drop `Flume.view_data()` 2023-01-10 12:42:25 -05:00
Tyler Goodlet 1b95668309 Drop old breakpoint 2023-01-10 12:42:25 -05:00
Tyler Goodlet 688d7d7f2f Drop `_slice_from_time()` 2023-01-10 12:42:25 -05:00
Tyler Goodlet 5c417fe815 Use uniform step arithmetic in `slice_from_time()`
If we presume that time indexing uses a uniform step we can calculate
the exact index (using `//`) for the input time, presuming the data
set has zero gaps. This gives a massive speedup over `numpy` fancy
indexing and (naive) `numba` iteration. Further, in the case where time
gaps are detected, we can use `numpy.searchsorted()` to binary search
for the nearest expected index at lower latency.

Deatz,
- comment-disable the call to the naive `numba` scan impl.
- add an optional `step: int` input (calced if not provided).
- add todos for caching binary search results in the gap detection
  cases.
- drop returning the "absolute buffer indexing" slice since the caller
  can always just use the read-relative slice to acquire it.
2023-01-10 12:42:25 -05:00
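
A condensed sketch of the approach; the real signature and gap handling differ in detail:

```python
import numpy as np

def slice_from_time_sketch(
    times: np.ndarray,  # epoch stamps from the source array
    start_t: float,
    stop_t: float,
    step: int | None = None,
) -> slice:
    if step is None:
        # calc the (expected uniform) step if not provided
        step = round(np.median(np.diff(times[-16:])))

    t0 = times[0]
    # O(1) arithmetic presuming zero gaps in the series
    i_start = int((start_t - t0) // step)
    i_stop = int((stop_t - t0) // step) + 1

    # gap detected: the stamp at the computed index overshoots the
    # requested time, so binary search for the nearest expected index.
    i_start = min(max(i_start, 0), len(times) - 1)
    if times[i_start] > start_t:
        i_start = int(np.searchsorted(times, start_t, side='left'))

    return slice(i_start, min(i_stop, len(times)))
```
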
Tyler Goodlet 94e0f48f39 Make `.default_view()` time step aware
When we use an epoch index and any sample rate > 1s we need to scale the
"number of bars" to that step in order to place the view correctly in
x-domain terms. For now we're calcing the step in-method but likely,
longer run, we'll pull this from elsewhere (like a ``Viz`` attr).
2023-01-10 12:42:25 -05:00
Tyler Goodlet aa404ab18b Flip over to epoch-time based x-domain indexing 2023-01-10 12:42:25 -05:00
Tyler Goodlet 3e62832580 Adjust all `slice_from_time()` calls to not expect mask 2023-01-10 12:42:25 -05:00
Tyler Goodlet 352bd4a1f7 Rewrite `slice_from_time()` using `numba`
Gives approx a 3-4x speedup using plain old iterate-with-for-loop style,
though still not really happy with this 0.5 to 1 ms latency..

Move the core `@njit` part to a `_slice_from_time()` with a pure python
func with orig name around it. Also, drop the output `mask` array since
we can generally just use the slices in the caller to accomplish the
same input array slicing, duh..
2023-01-10 12:42:25 -05:00
Tyler Goodlet f18247b855 Use index (time) step to calc OHLC bar/line uppx threshold 2023-01-10 12:42:25 -05:00
Tyler Goodlet 92a71293ac Use step size to determine bar gaps 2023-01-10 12:42:25 -05:00
Tyler Goodlet 6829daa79c Use step size to determine last datum bar gap 2023-01-10 12:42:25 -05:00
Tyler Goodlet cd58bfb8cf Move `Flume.slice_from_time()` to `.data._pathops` mod func 2023-01-10 12:42:25 -05:00
Tyler Goodlet 49ea4e1ef6 Drop `index_field` input to renders, add `.read()` profiling 2023-01-10 12:42:25 -05:00
Tyler Goodlet d8f325ddd9 Delegate formatter `.index_field` to the parent `Viz` 2023-01-10 12:42:25 -05:00
Tyler Goodlet 2e6f14afb3 Facepalm**2: fix array-read-slice, like actually..
We need to subtract the first index in the array segment read, not the
first index value in the time-sliced output, to get the correct offset
into the non-absolute (`ShmArray.array` read) array..

Further we **do** need the `&` between the advanced indexing conditions
and this adds profiling to see that it is indeed real slow (like
20ms-ish even when using `np.where()`).
2023-01-10 12:42:25 -05:00
Tyler Goodlet a2f75a83b6 TOSQUASH 4eb5fe0dd96 (FSP copy time from src -> dst)
Slice up to history's length worth of (latest) time stamps from source
series read at the start of the history init phase.
2023-01-10 12:42:25 -05:00
Tyler Goodlet faecd6f0e0 Markup OHLC->path gen with `numba` issue # 2023-01-10 12:42:25 -05:00
Tyler Goodlet 69d4fe9fef Facepalm: put graphics cycle in `do_ds: bool` block.. 2023-01-10 12:42:25 -05:00
Tyler Goodlet 6ce9872530 TOSQUASH: 552a8c298cd (return index for arrow..) 2023-01-10 12:42:25 -05:00
Tyler Goodlet c5a352bc64 Facepalm: actually return latest index on time slice fail.. 2023-01-10 12:42:25 -05:00
Tyler Goodlet 787fa53aa9 Go with explicit `.data._m4` mod name
Since it's a notable and self-contained graphics compression algo, might
as well give it a dedicated module B)
2023-01-10 12:42:25 -05:00
Tyler Goodlet 8de8a40a1e Move (unused) path gen routines to `.ui._pathops` 2023-01-10 12:42:25 -05:00
Tyler Goodlet a2d23244e7 Move qpath-ops routines back to separate mod 2023-01-10 12:42:25 -05:00
Tyler Goodlet 4ca8e23b5b Rename `.ui._pathops.py` -> `.ui._formatters.py 2023-01-10 12:42:25 -05:00
Tyler Goodlet 95ee69c119 Look up "index field" in display cycles
Again, to make epoch indexing a flip-of-a-switch for testing, look up
the `Viz.index_field: str` value when updating labels.

Also, drops the legacy tick-type set tracking which we no longer use
thanks to the new throttler subsys and its framing msgs.
2023-01-10 12:42:25 -05:00
Tyler Goodlet cab75217dd Fix from-time index slicing?
Apparently we want an `|` for the advanced indexing logic?
Also, fix `read_slc` start to not always be 0 XD
2023-01-10 12:42:25 -05:00
Tyler Goodlet 59766f53cf Move old label sizing cruft to label mod 2023-01-10 12:42:25 -05:00
Tyler Goodlet 3098d12221 Move path ops routines to top of mod
Planning to put the formatters into a new mod and aggregate all path
gen/op helpers into this module.

Further tweaks include:
- moving `path_arrays_from_ohlc()` back to module level
- slice out the last xy datum for `OHLCBarsAsCurveFmtr` 1d formatting
- always copy the new x-value from the source to `.x_nd`
2023-01-10 12:42:25 -05:00
Tyler Goodlet b1ad1f2af1 Drop diff state tracking in formatter
This was a major cause of error (particularly trying to get epoch
indexing working) and really isn't necessary; instead just have
`.diff()` always read from the underlying source array for current
index-step diffing and append/prepend slice construction.

Allows us to,
- drop `._last_read` state management and thus usage.
- better handle startup indexing by setting `.xy_nd_start/stop` to
  `None` initially so that the first update can be done in one large
  prepend.
- better understand and document the step curve "slice back to previous
  level" logic which is now heavily commented B)
- drop all the `slice_to_head` stuff and instead allow each formatter
  to choose its 1d segmenting.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 1bee6e3150 Explicitly enable chart widget yranging in display init 2023-01-10 12:42:25 -05:00
Tyler Goodlet ede2edc85c Enable/disable vlm chart yranging (TO SQUASH) 2023-01-10 12:42:25 -05:00
Tyler Goodlet 27b2daa448 Don't disable non-enabled vlm chart y-autoranging 2023-01-10 12:42:25 -05:00
Tyler Goodlet 620152d783 Comment out bps for time indexing 2023-01-10 12:42:25 -05:00
Tyler Goodlet a13b7aab7c Call `Viz.bars_range()` from display loop 2023-01-10 12:42:25 -05:00
Tyler Goodlet 5cff7a7193 TOSQUASH: f5dcf1dc (viz index field) 2023-01-10 12:42:25 -05:00
Tyler Goodlet 83b3cac807 Fix `.default_view()` to view-left-of-data 2023-01-10 12:42:25 -05:00
Tyler Goodlet 166f97c8af Add `Viz.index_field: str`, pass to graphics objs
In an effort to make it easy to override the indexing scheme.

Further, this repairs the `.datums_range()` special case to handle when
the view box is to-the-right-of the data set (i.e. l > datum_start).
2023-01-10 12:42:25 -05:00
Tyler Goodlet 0751f51cfa Expect `index_field: str` in all graphics objects 2023-01-10 12:42:25 -05:00
Tyler Goodlet 3096b206d9 TOSQUASH: 2dc706aa (.default_view w time) 2023-01-10 12:42:25 -05:00
Tyler Goodlet 16d5ea5b33 Frame ticks in helper routine
Wow, turns out tick framing was totally borked since we weren't framing
on "greater-than-throttle-period long waits" XD

This moves all the framing logic into a common func and calls it in
every case:
- every (normal) "pre throttle period expires" quote receive
- each "no new quote before throttle period expires" (slow case)
- each "no clearing tick yet received" / only burst on clears case
2023-01-10 12:42:25 -05:00
Tyler Goodlet ac0166f936 Facepalm: pass correct flume to each FSP chart group.. 2023-01-10 12:42:25 -05:00
Tyler Goodlet 925849b5e4 Attempt to make `.default_view()` time-index ready
As in make the call to `Flume.slice_from_time()` to try and convert any
time index values from the view range to array-indices; all untested
atm.

Also drop some old/unused/moved methods:
- `._set_xlimits()`
- `.bars_range()`
- `.curve_width_pxs()`

and fix some `flow` -> `viz` var naming.
2023-01-10 12:42:25 -05:00
Tyler Goodlet bd2abcb91f Simplify formatter update methodology
Don't expect values (array + slice) to be returned and applied by
`.incr_update_xy_nd()` and instead presume this will be implemented
internally in each (sub)formatter.

Attempt to simplify some incr-update routines, (particularly in the step
curve formatter, though most of it was reverted to just a simpler form
of the original implementation XD) including:
- dropping the need for the `slice_to_head: int` control.
- using the `xy_nd_start/stop` index counters over custom lookups.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 97feb195e6 TOSQUASH: f3d757c2 (flow->viz) 2023-01-10 12:42:25 -05:00
Tyler Goodlet c084a1122a First attempt, field-index agnostic formatting
Remove hardcoded `'index'` field refs from all formatters in a first
attempt at moving towards epoch-time alignment (though we don't actually
use it yet).

Adjustments to the formatter interface:
- a property for `.xy_nd`, the x/y nd arrays.
- a property for `.xy_slice`, the nd format array(s) start->stop index
  slice.

Internal routine tweaks:
- drop `read_src_from_key` and always pass full source array on updates
  and adjust handlers to expect to have to index the data field of
  interest.
- set `.last_read` right after update calls instead of after 1d
  conversion.
- drop `slice_to_head` array read slicing.
- add some debug points for testing 'time' indexing (though not used
  here yet).
- add `.x_nd` array update logic for when the `.index_field` is not
  'index' - i.e. when we begin to try and support epoch time.
- simplify some new y_nd updates to not require use of `np.broadcast()`
  where possible.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 1b9f6a7152 Pepper render routines with time-slice calls 2023-01-10 12:42:25 -05:00
Tyler Goodlet 3574548fe2 Add `Viz.bars_range()` (moved from chart API)
Call it from view kb loop.
2023-01-10 12:42:25 -05:00
Tyler Goodlet 598b1e2787 Make `Viz.slice_from_time()` take input array
Probably means it doesn't need to be a `Flume` method but it's
convenient to expect the caller to pass in the `np.ndarray` with
a `'time'` field instead of a `timeframe: str` arg; also, return the
slice mask instead of the sliced array as output (again allowing the
caller to do any slicing). Also, handle the slice-outside-time-range
case by just returning the entire index range with a `None` mask.

Adjust `Viz.view_data()` to instead do timeframe (for rt vs. hist shm
array) lookup and equiv array slicing with the returned mask.
2023-01-10 12:42:25 -05:00
Tyler Goodlet cb85079cf1 Add breakpoint on -ve range for now 2023-01-10 12:42:25 -05:00
Tyler Goodlet 670ba169e9 Copy timestamps from source to FSP dest buffer 2023-01-10 12:06:03 -05:00
Tyler Goodlet 3cf590eedf `Order.symbol` is a `str`.. 2023-01-10 12:06:03 -05:00
Tyler Goodlet d839fcb8e7 Avoid key error on already popped cancel 2023-01-10 12:06:03 -05:00
Tyler Goodlet 42faaa9870 Go back to hard-coded index field
Turns out https://github.com/numba/numba/issues/8622 is real
and the suggested `numba.literally` hack doesn't seem to work..
2023-01-10 12:05:57 -05:00
Tyler Goodlet b078235414 Move `ui._compression`/`._pathops` to `.data` subpkg
Since these modules no longer contain Qt specific code we might
as well include them in the data sub-package.

Also, add `IncrementalFormatter.index_field` as the single point to
define the indexing field that should be used for all x-domain
graphics-data rendering.
2023-01-10 12:05:57 -05:00
Tyler Goodlet 03e6a00efd Add some data-flows jargon notes (re: #270) 2023-01-10 12:05:57 -05:00
Tyler Goodlet 1bfcda70ae Rename `.ui._flows.py` -> `.ui._render.py` 2023-01-10 12:05:57 -05:00
Tyler Goodlet 498ed8757c Rename `._flumes.py` -> `.flows.py` 2023-01-10 12:05:57 -05:00
Tyler Goodlet 4f4b5e0280 Rename `Flow` -> `Viz`
The type is better described as a "data visualization":
https://en.wikipedia.org/wiki/Data_and_information_visualization

Add `ChartPlotWidget.get_viz()` to start working towards not accessing
the private table directly XD

We'll probably end up using the name `Flow` for a type that tracks
a collection of composed/cascaded `Flume`s:
https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
2023-01-10 12:05:57 -05:00
Tyler Goodlet 6cca1eb941 Expand sampler loop shm write lines 2023-01-10 12:05:57 -05:00
Tyler Goodlet a016a28032 Adjust order mode to use `Flume.get_index()` 2023-01-10 12:05:57 -05:00
Tyler Goodlet 75f21470a9 Pass `Flume`s throughout FSP-ui and charting APIs
Since higher level charting and fsp management need access to the
new `Flume` indexing apis this adjusts some func sigs to pass through
(and/or create) flume instances:
- `LinkedSplits.add_plot()` and dependents.
- `ChartPlotWidget.draw_curve()` and deps, and it now returns a `Flow`.
- `.ui._fsp.open_fsp_admin()` and `FspAdmin.open_fsp_ui()` related
  methods => now we wrap the destination fsp shm in a flume on the admin
  side and is returned from `.start_engine_method()`.

Drop a bunch of (unused) chart widget methods including some already
moved to flume methods: `.get_index()`, `.in_view()`,
`.last_bar_in_view()`, `.is_valid_index()`.
2023-01-10 12:05:57 -05:00
Tyler Goodlet d3be4caa6a Make hist shm token optional to allow for FSPs 2023-01-10 12:05:57 -05:00
Tyler Goodlet 13e86fbe30 Move `Flume` to a new `.data._flumes` module 2023-01-10 12:05:57 -05:00
Tyler Goodlet 8793b76ee2 Extend `Flume` methods
Add some (untested) data slicing util methods for mapping time ranges to
source data indices:
- `.get_index()` which maps a single input epoch time to an equiv array
  (int) index.
- add `slice_from_time()` which returns a view of the shm data from an
  input epoch range presuming the underlying struct array contains
  a `'time'` field with epoch stamps.
- `.view_data()` which slices out the "in view" data according to the
  current state of the passed in `pg.PlotItem`'s view box.
2023-01-10 12:05:57 -05:00
Tyler Goodlet d115f43885 Add epoch time index to fsp buffers 2023-01-10 12:05:57 -05:00
Tyler Goodlet 0442945ce5 Drop px-cache-resets, failed try at path appends
Comments out the pixel-cache resetting since it doesn't seem we need it
any more to avoid draw oddities?

For `.fast_path` appends, this nearly got it working except the new path
segments are either not being connected correctly (step curve) or not
being drawn in full since the history path (plain line).

Leaving the attempted code commented in for a retry in the future; my
best guesses are that maybe,
- `.connectPath()` call is being done with incorrect segment length
  and/or start point.
- the "appended" data: `appended = array[-append_len-1:slice_to_head]`
  (done inside the formatter) isn't correct (i.e. endpoint handling
  considering a path append) and needs special handling for different
  curve types?
2023-01-10 12:05:57 -05:00
Tyler Goodlet 07714c5cbd Mask profile points and drop rect `.united()` attempts 2023-01-10 12:05:57 -05:00
Tyler Goodlet f139e4f273 Make curve graphics timeframe agnostic
Ensure `.boundingRect()` calcs and `.draw_last_datum()` do geo-sizing
based on source data instead of presuming some `1.0` unit steps in some
spots; we need this to support an epoch index as is needed for overlays.

Further, clean out a bunch of old bounding rect calc code and add some
commented code for trying out `QRectF.united()` on the path + last datum
curve segment. Turns out that approach is slower as per eyeballing the
added profiler points.
2023-01-10 12:05:57 -05:00
Tyler Goodlet 366df3307f Add graphics incr-updated "formatter" subsys
After trying to hack epoch indexed time series and failing miserably,
decided to properly factor out all formatting routines into a common
subsystem API: ``IncrementalFormatter`` which provides the interface for
incrementally updating and tracking pre-path-graphics formatted data.

Previously this functionality was mangled into our `Renderer` (which
also does the work of `QPath` generation and update) but splitting it
out also preps for being able to do graphics-buffer downsampling and
caching on a remote host B)

The ``IncrementalFormatter`` (parent type) has the default behaviour of
tracking a single field-array on some source `ShmArray`, updating
a flattened `numpy.ndarray` in-mem allocation, and providing a default
1d conversion for pre-downsampling and path generation.

Changed out of `Renderer`,
- `.allocate_xy()`, `update_xy()` and `format_xy()` all are moved to
  more explicitly named formatter methods.
- all `.x/y_data` nd array management and update
- "last view range" tracking
- `.last_read`, `.diff()`
- now calls `IncrementalFormatter.format_to_1d()` inside `.render()`

The new API gets,
- `.diff()`, `.last_read`
- all view range diff tracking through `.track_inview_range()`.
- better nd format array names: `.x/y_nd`, `xy_nd_start/stop`.
- `.format_to_1d()` which renders pre-path formatted arrays ready for
  both m4 sampling and path gen.
- better explicit overloadable formatting method names:
  * `.allocate_xy()` -> `.allocate_xy_nd()`
  * `.update_xy()` -> `.incr_update_xy_nd()`
  * `.format_xy()` -> `.format_xy_nd_to_1d()`

Finally this implements per-graphics-type formatters, each of which
defines its related formatting routines:
- `OHLCBarsFmtr`: std multi-line style bars
- `OHLCBarsAsCurveFmtr`: draws an interpolated line for ohlc sampled data
- `StepCurveFmtr`: handles vlm style curves
2023-01-10 12:05:57 -05:00
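
In rough skeleton form the new interface looks something like the following; method bodies and exact signatures are elided/hypothetical:

```python
import numpy as np

class IncrementalFormatter:
    '''Incrementally update & track pre-path-graphics formatted data.'''
    def __init__(self, index_field: str = 'index') -> None:
        self.index_field = index_field
        self.x_nd: np.ndarray | None = None  # flattened nd allocs
        self.y_nd: np.ndarray | None = None
        self.xy_nd_start: int | None = None  # nd-array update cursors
        self.xy_nd_stop: int | None = None

    def diff(self, src: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        ...  # prepend/append slices vs. the last read

    def allocate_xy_nd(self, src: np.ndarray) -> None:
        ...  # one-shot nd array allocation from the source shm

    def incr_update_xy_nd(self, src: np.ndarray) -> None:
        ...  # in-place nd updates for newly appended/prepended data

    def format_xy_nd_to_1d(self) -> tuple[np.ndarray, np.ndarray]:
        ...  # per-graphics-type 1d conversion (overridden by subtypes)

    def format_to_1d(self, src: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        # full cycle: diff -> alloc-or-incr-update -> 1d conversion;
        # output is ready for m4 downsampling and path generation.
        ...

class OHLCBarsFmtr(IncrementalFormatter): ...         # std multi-line bars
class OHLCBarsAsCurveFmtr(IncrementalFormatter): ...  # interpolated ohlc line
class StepCurveFmtr(IncrementalFormatter): ...        # vlm style step curves
```
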
Tyler Goodlet cbd4119101 Max out per symbol throttle @ 22Hz 2023-01-10 12:05:53 -05:00
Tyler Goodlet 01b470faf4 Move all pre-path formatting routines to `._pathops`, proto formatter type 2023-01-10 11:09:49 -05:00
Tyler Goodlet 226e84d15f Ensure a rt shm buffer without backfill has correct epoch timestamping 2023-01-10 11:09:49 -05:00
Tyler Goodlet de1c0b1399 Use throttle period for wait-on-clearing-event timeout 2023-01-10 11:09:49 -05:00
Tyler Goodlet e7daf09a83 Expect and update from by-type tick frames
Move to expect and process new by-tick-event frames where the display
loop can now just iterate the most recent tick events by type instead of
the entire tick history sequence - thus we reduce iterations inside the
update loop.

Also, go back to using the detected display's refresh rate (minus 6)
as the default feed-requested throttle rate since we can now handle
much more bursty-ness in display updates thanks to the new framing
format B)
2023-01-10 11:09:49 -05:00
Tyler Goodlet 5fcc34a9e6 Implement by-type tick-framing in throttler loop
This has been an outstanding idea for a while and changes the framing
format of tick events into a `dict[str, list[dict]]` wherein for each
tick "type" (eg. 'bid', 'ask', 'trade', 'asize'..etc) we create a
FIFO-ordered `list` of events (data) and then pack this table into each
(throttled) send. This gives an additional implied downsample reduction
(in terms of iteration on the consumer side) from `N` tick-events to
a (max) `T` tick-types, presuming the rx side only needs the latest tick
event.

Drop the `types: set` and adjust clearing event test to use the new
`ticks_by_type` map's keys.
2023-01-10 11:09:49 -05:00
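
The framing itself is trivial; a sketch plus the consumer-side implication:

```python
def frame_ticks(ticks: list[dict]) -> dict[str, list[dict]]:
    # pack a FIFO-ordered `list` of events per tick "type"
    ticks_by_type: dict[str, list[dict]] = {}
    for tick in ticks:
        ticks_by_type.setdefault(tick['type'], []).append(tick)
    return ticks_by_type

# consumer side: only the latest event per type is usually needed, so
# per-send iteration drops from N tick events to at most T tick types.
frame = frame_ticks([
    {'type': 'bid', 'price': 10.0},
    {'type': 'bid', 'price': 10.1},
    {'type': 'trade', 'price': 10.1, 'size': 2},
])
assert frame['bid'][-1]['price'] == 10.1
```
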
Tyler Goodlet 6f0b1ea283 Factor info print into func 2023-01-10 11:09:49 -05:00
Tyler Goodlet e618d13fc9 Update/improve qt screen script 2023-01-10 11:09:49 -05:00
Tyler Goodlet 947f29aefb Improved clearing-tick-burst-oriented throttling
Instead of uniformly distributing the msg send rate for a given
aggregate subscription, choose to be more bursty around clearing ticks
so as to avoid saturating the consumer with L1 book updates and vs.
delivering real trade data as-fast-as-possible.

Presuming the consumer is in the "UI land of slow" (eg. modern display
frame rates) such an approach proves more useful for seeing "material
changes" in the market as-bursty-as-possible (i.e. more short lived fast
changes in last clearing price vs. many slower changes in the bid-ask
spread queues). Such an approach also lends better to multi-feed
overlays which in aggregate tend to scale linearly with the number of
feeds/overlays; centralization of bursty arrival rates allows for
a higher overall throttle rate if used cleverly with framing.
2023-01-10 11:09:49 -05:00
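
A toy version of the send policy; the clearing tick-type names and channel plumbing here are illustrative:

```python
import time
import trio

CLEAR_TYPES = {'trade', 'dark_trade', 'last'}  # illustrative tick types

async def bursty_relay(
    ticks: trio.MemoryReceiveChannel,
    send,  # async callable relaying a frame to the consumer
    period: float,  # throttle period in seconds
) -> None:
    # relay immediately on clearing ticks, otherwise accumulate until
    # the throttle period expires; L1 spread churn never saturates the
    # consumer but real trade data goes out as-fast-as-possible.
    pending: list[dict] = []
    last_send = time.monotonic()
    async for tick in ticks:
        pending.append(tick)
        now = time.monotonic()
        if (
            tick['type'] in CLEAR_TYPES
            or (now - last_send) >= period
        ):
            await send(pending)
            pending, last_send = [], now
```
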
Tyler Goodlet 363c7a2df2 Brighter last OHLC graphics datum by default 2023-01-10 11:09:49 -05:00
Tyler Goodlet 701eb7c2c5 Factor setup loop, 1 FSP chain, colors, throttling
Factor out the chart widget creation since it's only executed once
during rendering of the first feed/flow whilst keeping plotitem overlay
creation inside the (flume oriented) init loop. Only create one vlm and
FSP chart/chain for now until we figure out if we want FSPs overlayed by
default or selected based on the "front" symbol in use. Add a default
color-palette set using shades of gray when plotting overlays. Presume
that the display loop's quote throttle rate should be uniformly
distributed over all input symbol-feeds for now. Restore feed pausing on
mouse interaction.
2023-01-10 11:09:49 -05:00
Tyler Goodlet 4020a198c4 Type annot-declare fsp-engine data `Feed` 2023-01-10 11:09:49 -05:00
Tyler Goodlet d834dfac74 Define a single `ChartPlotWidget.feed: Feed` for pause/resume 2023-01-10 11:09:49 -05:00
Tyler Goodlet bdbc8de8c1 Assign pnl calc output for use when debugging 2023-01-10 11:09:49 -05:00
Tyler Goodlet 5e6ebca1e0 Rework `_FeedsBus` subscriptions mgmt using `set`
Allows using `set` ops for subscription management and guarantees no
duplicates per `brokerd` actor. New API is simpler for dynamic
pause/resume changes per `Feed`:
- `_FeedsBus.add_subs()`, `.get_subs()`, `.remove_subs()` all accept multi-sub
  `set` inputs.
- `Feed.pause()` / `.resume()` encapsulates management of *only* sending
  a msg on each unique underlying IPC msg stream.

Use new api in sampler task.
2023-01-10 11:09:49 -05:00
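
Shape-wise the new API is just `set` ops; a sketch with the sub element type elided:

```python
class _FeedsBus:
    '''Sketch of the `set`-based subscription registry.'''
    def __init__(self) -> None:
        self._subs: dict[str, set] = {}

    def add_subs(self, key: str, subs: set) -> set:
        # `set` union makes duplicate subs per `brokerd` impossible
        bus_subs = self._subs.setdefault(key, set())
        bus_subs |= subs
        return bus_subs

    def get_subs(self, key: str) -> set:
        return self._subs.get(key, set())

    def remove_subs(self, key: str, subs: set) -> set:
        bus_subs = self._subs.setdefault(key, set())
        bus_subs -= subs
        return bus_subs
```
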
Tyler Goodlet 42e934b912 Init msg keys are always lower case 2023-01-10 11:09:49 -05:00
Tyler Goodlet da285d6275 Make `PlotItemOverlay` add items inwards->out
Before this, axes were being stacked from the outside in (for `'right'`
and `'bottom'` axes) which is somewhat non-intuitive for an `.append()`
operation. As such this change makes a symbol list stack a set of
`'right'` axes from left-to-right.

Details:
- rename `ComposeGridLayout.items` -> `.pitems`
- return `(int, list[AxisItem])` pairs from `.insert/append_plotitem()`
  and the down stream `PlotItemOverlay.add_plotitem()`.
- drop `PlotItemOverlay.overlays` and add it back as `@property` around
  the underlying `.layout.pitems`.
2023-01-10 11:09:49 -05:00
Tyler Goodlet 396fb742bd Fix for empty tsdb query result case 2023-01-10 11:09:49 -05:00
Tyler Goodlet 8c6a18fdb7 Drop tick frame builder loop for now 2023-01-10 11:09:49 -05:00
Tyler Goodlet a6241a5a16 Adjust FSP UI/mgmt apis to be `Flume` oriented 2023-01-10 11:09:49 -05:00
Tyler Goodlet eb1650197b Make graphics-update-loop multi-sym aware B)
Initial support for real-time multi-symbol overlay charts using an
aggregate feed delivered by `Feed.open_multi_stream()`.

The setup steps for constructing the overlayed plot items are still very
very rough and will likely provide incentive for better refactoring of
high level "charting APIs". For each fqsn passed into `display_symbol_data()`
we now synchronously,
- create a single call to `LinkedSplits.plot_ohlc_main()` -> `ChartPlotWidget`
  where we cache the chart in scope and for all other "sibling" fqsns
  we,
- make a call to `ChartPlotWidget.overlay_plotitem()` -> `PlotItem`, hide its axes,
  make another call with this plotitem input to
  `ChartPlotWidget.draw_curve()`, set a sym-specific view box auto-yrange maxmin callback,
  register the plotitem in a global `pis: dict[str, list[pgo.PlotItem, pgo.PlotItem]] = {}`

Once all plots have been created we then asynchronously for each symbol,
- maybe create a volume chart and register it in a similar task-global
  table: `vlms: dict[str, ChartPlotWidget] = {}`
- start fsp displays for each symbol

Then common entrypoints are entered once for all symbols:
- a single `graphics_update_loop()` loop-task is started wherein
  real-time graphics update components for each symbol are created,
      * `L1Labels`
      * y-axis last clearing price stickies
      * `maxmin()` auto-ranger
      * `DisplayState` (stored in a table `dss: dict[str, DisplayState] = {}`)
      * an `increment_history_view()` task
  and a single call to `Feed.open_multi_stream()` is used to create
  a symbol-multiplexed quote stream which drives a single loop over all
  symbols wherein for each quote the appropriate components are looked
  up and passed to `graphics_update_cycle()`.
- a single call to `open_order_mode()` is made with the first symbol
  provided as input, though eventually we want to support passing in the
  entire list.

Further internal implementation details:
- special tweaks to the `pg.LinearRegionItem` setup wherein the region
  is added with a zero opacity and *after* all plotitem overlays to
  avoid an issue where overlays weren't being shown within the region
  area in the history chart.
- all symbol-specific graphics oriented update calls are adjusted to
  pass in the fqsn:
  * `update_fsp_chart()`
  * `ChartView._set_yrange()`
  * `ChartPlotWidget.update_graphics_from_flow()`
- avoid a double increment on sample step updates by not calling the
  increment on any vlm chart since it seems the vlm-ohlc chart linking
  already takes care of this now?
- use global counters for the last epoch time step to avoid incrementing
  all views more than once per new time step given underlying shm array
  buffers may be on different array-index values from one another.
2023-01-10 11:09:49 -05:00
Tyler Goodlet 6b4614f735 Only add plot to cursor set if not an overlay 2023-01-10 11:09:49 -05:00
Tyler Goodlet fc067eb7a8 Adjust search to handle multi-sym results 2023-01-10 11:09:49 -05:00
Tyler Goodlet 0d657553f9 Drop the legacy `relayed_from` cruft from our view box 2023-01-10 11:09:49 -05:00
Tyler Goodlet b384cea706 Only update pnl label on quotes with an fqsn match 2023-01-10 11:09:49 -05:00
Tyler Goodlet 70b24795a6 Pass plotitem to axis from cursor 2023-01-10 11:09:49 -05:00
Tyler Goodlet dc4a2c8c2b Adjust L1 labels to expect `.pi: PlotItem` 2023-01-10 11:09:49 -05:00
Tyler Goodlet 322ab34200 Allocate our internal `Axis` subtype in our `PlotItem` override 2023-01-10 11:09:49 -05:00
Tyler Goodlet b768eb19ec Passthrough fqsns list directly to `.load_symbols()` 2023-01-10 11:09:49 -05:00
Tyler Goodlet c3e5162c30 Initial chart widget adjustments for agg feeds
Main "public" API change is to make `GodWidget.get/set_chart_symbol()`
accept and cache-on fqsn tuples to allow handling overlayed chart groups
and adjust method names to be plural to match.

Wrt `LinkedSplits`,
- create all chart widget axes with a `None` plotitem argument and set
  the `.pi` field after axis creation (since apparently we have another
  object reference causality dilemma..)
- set a monkeyed `PlotItem.chart_widget` for use in axes that still need
  the widget reference.
- drop feed pause/resume for now since it's leaking feed tasks on the
  `brokerd` side and we probably don't really need it any more, and if
  we still do it should be done on the feed not the flume.

Wrt `ChartPlotItem`,
- drop `._add_sticky()` and use the `Axis` method instead and add some
  overlay + axis sanity checks.
- refactor `.draw_ohlc()` to be a lighter wrapper around a call to
  `.add_plot()`.
2023-01-10 11:09:49 -05:00
Tyler Goodlet e677cb1ddb Simplify OHLC graphic color instance var name 2023-01-10 11:09:49 -05:00
Tyler Goodlet fc7c498c65 Add `Axis.add_sticky()` for creating axis labels
We have this method on our `ChartPlotWidget` but it makes more sense to
directly associate axis-labels with, well, the label's parent axis XD.

We add `._stickies: dict[str, YAxisLabel]` to replace
`ChartPlotWidget._ysticks` and pass in the `pg.PlotItem` to each axis
instance, stored as `Axis.pi` instead of handing around linked split
references (which are way out of scope for a single axis).

More work needs to be done to remove dependence on `.chart:
ChartPlotWidget` references in the date axis type as per comments.
2023-01-10 11:09:48 -05:00
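A small sketch (assumed, simplified types) of the ownership move described above: each `Axis` keeps its own `_stickies` table and a `.pi` plotitem ref, so no linked-splits reference is needed.

```python
class YAxisLabel:
    def __init__(self, name: str) -> None:
        self.name = name


class Axis:
    def __init__(self, pi: object) -> None:
        # parent plotitem ref replaces out-of-scope
        # linked-splits references.
        self.pi = pi
        self._stickies: dict[str, YAxisLabel] = {}

    def add_sticky(self, name: str) -> YAxisLabel:
        # one label per name; re-adding returns the existing one.
        if name not in self._stickies:
            self._stickies[name] = YAxisLabel(name)
        return self._stickies[name]


ax = Axis(pi=object())
lbl = ax.add_sticky('last')
assert ax.add_sticky('last') is lbl
```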
Tyler Goodlet 6653ee8662 Add default `YAxisLabel.x_offset: int` 2023-01-10 11:09:48 -05:00
46 changed files with 6109 additions and 4113 deletions

View File

@ -8,7 +8,7 @@ services:
# https://github.com/waytrade/ib-gateway-docker#supported-tags
# image: waytrade/ib-gateway:981.3j
image: waytrade/ib-gateway:1012.2i
restart: always # restart whenever there's a crash or user clicks
restart: 'no' # never restart on boot (or after a crash / user close)
network_mode: 'host'
volumes:
@ -64,7 +64,7 @@ services:
# ib_gw_live:
# image: waytrade/ib-gateway:1012.2i
# restart: always
# restart: no
# network_mode: 'host'
# volumes:

View File

@ -22,7 +22,6 @@ from typing import Optional, Union, Callable, Any
from contextlib import asynccontextmanager as acm
from collections import defaultdict
from msgspec import Struct
import tractor
import trio
from trio_typing import TaskStatus
@ -54,16 +53,19 @@ _root_modules = [
__name__,
'piker.clearing._ems',
'piker.clearing._client',
'piker.data._sampling',
]
class Services(Struct):
class Services:
actor_n: tractor._supervise.ActorNursery
service_n: trio.Nursery
debug_mode: bool # tractor sub-actor debug mode flag
service_tasks: dict[str, tuple[trio.CancelScope, tractor.Portal]] = {}
locks = defaultdict(trio.Lock)
@classmethod
async def start_service_task(
self,
name: str,
@ -119,11 +121,11 @@ class Services(Struct):
return cs, first
# TODO: per service cancellation by scope, we aren't using this
# anywhere right?
@classmethod
async def cancel_service(
self,
name: str,
) -> Any:
log.info(f'Cancelling `pikerd` service {name}')
cs, portal = self.service_tasks[name]
@ -134,29 +136,25 @@ class Services(Struct):
return await portal.cancel_actor()
_services: Optional[Services] = None
@acm
async def open_pikerd(
start_method: str = 'trio',
loglevel: Optional[str] = None,
loglevel: str | None = None,
# XXX: you should pretty much never want debug mode
# for data daemons when running in production.
debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
) -> Optional[tractor._portal.Portal]:
) -> None:
'''
Start a root piker daemon whose lifetime extends indefinitely
until cancelled.
Start a root piker daemon whose lifetime extends indefinitely until
cancelled.
A root actor nursery is created which can be used to create and keep
alive underling services (see below).
'''
global _services
global _registry_addr
if (
@ -186,17 +184,11 @@ async def open_pikerd(
):
async with trio.open_nursery() as service_nursery:
# # setup service mngr singleton instance
# async with AsyncExitStack() as stack:
# assign globally for future daemon/task creation
_services = Services(
actor_n=actor_nursery,
service_n=service_nursery,
debug_mode=debug_mode,
)
yield _services
Services.actor_n = actor_nursery
Services.service_n = service_nursery
Services.debug_mode = debug_mode
yield
@acm
@ -217,7 +209,6 @@ async def open_piker_runtime(
existing piker actors on the local link based on configuration.
'''
global _services
global _registry_addr
if (
@ -276,11 +267,12 @@ async def maybe_open_pikerd(
**kwargs,
) -> Union[tractor._portal.Portal, Services]:
"""If no ``pikerd`` daemon-root-actor can be found start it and
'''
If no ``pikerd`` daemon-root-actor can be found start it and
yield up (we should probably figure out returning a portal to self
though).
"""
'''
if loglevel:
get_console_log(loglevel)
@ -316,7 +308,9 @@ async def maybe_open_pikerd(
yield None
# brokerd enabled modules
# `brokerd` enabled modules
# NOTE: keeping this list as small as possible is part of our caps-sec
# model and should be treated with utmost care!
_data_mods = [
'piker.brokers.core',
'piker.brokers.data',
@ -326,10 +320,6 @@ _data_mods = [
]
class Brokerd:
locks = defaultdict(trio.Lock)
@acm
async def find_service(
service_name: str,
@ -366,6 +356,8 @@ async def maybe_spawn_daemon(
service_task_target: Callable,
spawn_args: dict[str, Any],
loglevel: Optional[str] = None,
singleton: bool = False,
**kwargs,
) -> tractor.Portal:
@ -386,7 +378,7 @@ async def maybe_spawn_daemon(
# serialize access to this section to avoid
# 2 or more tasks racing to create a daemon
lock = Brokerd.locks[service_name]
lock = Services.locks[service_name]
await lock.acquire()
async with find_service(service_name) as portal:
@ -397,6 +389,9 @@ async def maybe_spawn_daemon(
log.warning(f"Couldn't find any existing {service_name}")
# TODO: really shouldn't the actor spawning be part of the service
# starting method `Services.start_service()` ?
# ask root ``pikerd`` daemon to spawn the daemon we need if
# pikerd is not live we now become the root of the
# process tree
@ -407,7 +402,6 @@ async def maybe_spawn_daemon(
) as pikerd_portal:
if pikerd_portal is None:
# we are the root and thus are `pikerd`
# so spawn the target service directly by calling
# the provided target routine.
@ -415,7 +409,9 @@ async def maybe_spawn_daemon(
# do the right things to setup both a sub-actor **and** call
# the ``_Services`` api from above to start the top level
# service task for that actor.
await service_task_target(**spawn_args)
started: bool
if pikerd_portal is None:
started = await service_task_target(**spawn_args)
else:
# tell the remote `pikerd` to start the target,
@ -424,11 +420,14 @@ async def maybe_spawn_daemon(
# non-blocking and the target task will persist running
# on `pikerd` after the client requesting it's start
# disconnects.
await pikerd_portal.run(
started = await pikerd_portal.run(
service_task_target,
**spawn_args,
)
if started:
log.info(f'Service {service_name} started!')
async with tractor.wait_for_actor(service_name) as portal:
lock.release()
yield portal
@ -451,9 +450,6 @@ async def spawn_brokerd(
extra_tractor_kwargs = getattr(brokermod, '_spawn_kwargs', {})
tractor_kwargs.update(extra_tractor_kwargs)
global _services
assert _services
# ask `pikerd` to spawn a new sub-actor and manage it under its
# actor nursery
modpath = brokermod.__name__
@ -466,18 +462,18 @@ async def spawn_brokerd(
subpath = f'{modpath}.{submodname}'
broker_enable.append(subpath)
portal = await _services.actor_n.start_actor(
portal = await Services.actor_n.start_actor(
dname,
enable_modules=_data_mods + broker_enable,
loglevel=loglevel,
debug_mode=_services.debug_mode,
debug_mode=Services.debug_mode,
**tractor_kwargs
)
# non-blocking setup of brokerd service nursery
from .data import _setup_persistent_brokerd
await _services.start_service_task(
await Services.start_service_task(
dname,
portal,
_setup_persistent_brokerd,
@ -523,24 +519,21 @@ async def spawn_emsd(
"""
log.info('Spawning emsd')
global _services
assert _services
portal = await _services.actor_n.start_actor(
portal = await Services.actor_n.start_actor(
'emsd',
enable_modules=[
'piker.clearing._ems',
'piker.clearing._client',
],
loglevel=loglevel,
debug_mode=_services.debug_mode, # set by pikerd flag
debug_mode=Services.debug_mode, # set by pikerd flag
**extra_tractor_kwargs
)
# non-blocking setup of clearing service
from .clearing._ems import _setup_persistent_emsd
await _services.start_service_task(
await Services.start_service_task(
'emsd',
portal,
_setup_persistent_emsd,
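The diff above converts `Services` from an instantiated `msgspec.Struct` (plus a module-global `_services`) into class-level singleton state accessed via classmethods. A runtime-free sketch of that pattern, with stdlib stand-ins for the trio/tractor objects:

```python
from collections import defaultdict
import threading


class Services:
    # class-level singleton state, assigned once at daemon boot
    # instead of constructing and passing around an instance.
    actor_n: object | None = None
    service_n: object | None = None
    debug_mode: bool = False
    service_tasks: dict[str, tuple[object, object]] = {}
    locks = defaultdict(threading.Lock)  # trio.Lock in piker

    @classmethod
    def start_service_task(cls, name: str, scope, portal):
        cls.service_tasks[name] = (scope, portal)
        return scope

    @classmethod
    def cancel_service(cls, name: str):
        scope, portal = cls.service_tasks.pop(name)
        return portal


# any module can now touch the registry without a global instance:
Services.debug_mode = True
Services.start_service_task('brokerd.kraken', scope='cs', portal='portal')
with Services.locks['samplerd_singleton']:
    assert 'brokerd.kraken' in Services.service_tasks
```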

View File

@ -94,21 +94,6 @@ async def open_history_client(
yield get_ohlc, {'erlangs': 3, 'rate': 3}
async def backfill_bars(
symbol: str,
shm: ShmArray, # type: ignore # noqa
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> None:
"""Fill historical bars into shared mem / storage afap.
"""
instrument = symbol
with trio.CancelScope() as cs:
async with open_cached_client('deribit') as client:
bars = await client.bars(instrument)
shm.push(bars)
task_status.started(cs)
async def stream_quotes(
send_chan: trio.abc.SendChannel,

View File

@ -162,6 +162,7 @@ _futes_venues = (
'CMECRYPTO',
'COMEX',
'CMDTY', # special name case..
'CBOT', # (treasury) yield futures
)
_adhoc_futes_set = {
@ -197,6 +198,21 @@ _adhoc_futes_set = {
'xagusd.cmdty', # silver spot
'ni.comex', # silver futes
'qi.comex', # mini-silver futes
# treasury yields
# etfs by duration:
# SHY -> IEI -> IEF -> TLT
'zt.cbot', # 2y
'z3n.cbot', # 3y
'zf.cbot', # 5y
'zn.cbot', # 10y
'zb.cbot', # 30y
# (micros of above)
'2yy.cbot',
'5yy.cbot',
'10y.cbot',
'30y.cbot',
}

View File

@ -611,7 +611,7 @@ async def trades_dialogue(
pp = table.pps[bsuid]
if msg.size != pp.size:
log.error(
'Position mismatch {pp.symbol.front_fqsn()}:\n'
f'Position mismatch {pp.symbol.front_fqsn()}:\n'
f'ib: {msg.size}\n'
f'piker: {pp.size}\n'
)

View File

@ -135,7 +135,10 @@ async def open_history_client(
# fx cons seem to not provide this endpoint?
'idealpro' not in fqsn
):
try:
head_dt = await proxy.get_head_time(fqsn=fqsn)
except RequestError:
head_dt = None
async def get_hist(
timeframe: float,

View File

@ -510,10 +510,6 @@ class Client:
'''
ticker = cls._ntable[ticker]
symlen = len(ticker)
if symlen != 6:
raise ValueError(f'Unhandled symbol: {ticker}')
return ticker.lower()

View File

@ -413,20 +413,27 @@ async def trades_dialogue(
) -> AsyncIterator[dict[str, Any]]:
# XXX: required to propagate ``tractor`` loglevel to piker logging
# XXX: required to propagate ``tractor`` loglevel to ``piker`` logging
get_console_log(loglevel or tractor.current_actor().loglevel)
async with get_client() as client:
# TODO: make ems flip to paper mode via
# some returned signal if the user only wants to use
# the data feed or we return this?
# await ctx.started(({}, ['paper']))
if not client._api_key:
raise RuntimeError(
'Missing Kraken API key in `brokers.toml`!?!?')
# TODO: make ems flip to paper mode via
# some returned signal if the user only wants to use
# the data feed or we return this?
# else:
# await ctx.started(({}, ['paper']))
# NOTE: currently we expect the user to define a "source fiat"
# (much like the web UI lets you set an "account currency")
# such that all positions (nested or flat) will be translated to
# this source currency's terms.
src_fiat = client.conf['src_fiat']
# auth required block
acctid = client._name
acc_name = 'kraken.' + acctid
@ -444,10 +451,9 @@ async def trades_dialogue(
# NOTE: testing code for making sure the rt incremental update
# of positions, via newly generated msgs works. In order to test
# this,
# - delete the *ABSOLUTE LAST* entry from accont's corresponding
# - delete the *ABSOLUTE LAST* entry from account's corresponding
# trade ledgers file (NOTE this MUST be the last record
# delivered from the
# api ledger),
# delivered from the api ledger),
# - open your ``pps.toml`` and find that same tid and delete it
# from the pp's clears table,
# - set this flag to `True`
@ -486,27 +492,51 @@ async def trades_dialogue(
# and do diff with ledger to determine
# what amount of trades-transactions need
# to be reloaded.
sizes = await client.get_balances()
for dst, size in sizes.items():
balances = await client.get_balances()
for dst, size in balances.items():
# we don't care about tracking positions
# in the user's source fiat currency.
if dst == client.conf['src_fiat']:
if dst == src_fiat:
continue
def has_pp(dst: str) -> Position | bool:
pps_dst_assets = {bsuid[:3]: bsuid for bsuid in table.pps}
pair = pps_dst_assets.get(dst)
def has_pp(
dst: str,
size: float,
) -> Position | bool:
src2dst: dict[str, str] = {}
for bsuid in table.pps:
try:
dst_name_start = bsuid.rindex(src_fiat)
except ValueError:  # `str.rindex()` raises ValueError on no match
# TODO: handle nested positions..(i.e.
# positions where the src fiat was used to
# buy some other dst which was further used
# to buy another dst..)
log.warning(
f'No src fiat {src_fiat} found in {bsuid}?'
)
continue
_dst = bsuid[:dst_name_start]
if _dst != dst:
continue
src2dst[src_fiat] = dst
for src, dst in src2dst.items():
pair = f'{dst}{src_fiat}'
pp = table.pps.get(pair)
if (
not pair or not pp
or not math.isclose(pp.size, size)
pp
and math.isclose(pp.size, size)
):
return False
return pp
pos = has_pp(dst)
return False
pos = has_pp(dst, size)
if not pos:
# we have a balance for which there is no pp
@ -514,12 +544,15 @@ async def trades_dialogue(
# ledger.
updated = table.update_from_trans(ledger_trans)
log.info(f'Updated pps from ledger:\n{pformat(updated)}')
pos = has_pp(dst)
pos = has_pp(dst, size)
if not pos and not simulate_pp_update:
if (
not pos
and not simulate_pp_update
):
# try reloading from API
table.update_from_trans(api_trans)
pos = has_pp(dst)
pos = has_pp(dst, size)
if not pos:
# get transfers to make sense of abs balances.
@ -557,7 +590,7 @@ async def trades_dialogue(
f'{pformat(updated)}'
)
if not has_pp(dst):
if not has_pp(dst, size):
raise ValueError(
'Could not reproduce balance:\n'
f'dst: {dst}, {size}\n'
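A toy sketch of the balance-to-position matching implemented above: kraken balance keys are dst assets (e.g. 'XBT') while position table keys are pair bsuids (e.g. 'XBTUSD'), so the trailing source fiat is stripped before sizes are compared. Simplified from the diff; the real table holds `Position` objects, not floats.

```python
import math

src_fiat = 'USD'
pps: dict[str, float] = {'XBTUSD': 1.5, 'ETHUSD': 10.0}  # pair -> size
balances: dict[str, float] = {'XBT': 1.5, 'ETH': 9.0, 'USD': 500.0}


def has_pp(dst: str, size: float) -> float | None:
    for bsuid, pp_size in pps.items():
        if not bsuid.endswith(src_fiat):
            continue
        # strip the trailing source fiat to recover the dst asset
        if bsuid[:bsuid.rindex(src_fiat)] != dst:
            continue
        if math.isclose(pp_size, size):
            return pp_size
    return None


assert has_pp('XBT', 1.5) == 1.5   # ledger and balance agree
assert has_pp('ETH', 9.0) is None  # mismatch -> triggers a reload
```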

View File

@ -303,24 +303,6 @@ async def open_history_client(
yield get_ohlc, {'erlangs': 1, 'rate': 1}
async def backfill_bars(
sym: str,
shm: ShmArray, # type: ignore # noqa
count: int = 10, # NOTE: any more and we'll overrun the underlying buffer
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Fill historical bars into shared mem / storage afap.
'''
with trio.CancelScope() as cs:
async with open_cached_client('kraken') as client:
bars = await client.bars(symbol=sym)
shm.push(bars)
task_status.started(cs)
async def stream_quotes(
send_chan: trio.abc.SendChannel,
@ -419,6 +401,7 @@ async def stream_quotes(
yield
# unsub from all pairs on teardown
if ws.connected():
await ws.send_msg({
'pair': list(ws_pairs.values()),
'event': 'unsubscribe',

View File

@ -172,6 +172,7 @@ async def clear_dark_triggers(
# TODO:
# - numba all this!
# - this stream may eventually contain multiple symbols
quote_stream._raise_on_lag = False
async for quotes in quote_stream:
# start = time.time()
for sym, quote in quotes.items():
@ -866,7 +867,7 @@ async def translate_and_relay_brokerd_events(
elif status == 'canceled':
log.cancel(f'Cancellation for {oid} is complete!')
status_msg = book._active.pop(oid)
status_msg = book._active.pop(oid, None)
else: # open
# relayed from backend but probably not handled so

View File

@ -0,0 +1,837 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Pre-(path)-graphics formatted x/y nd/1d rendering subsystem.
"""
from __future__ import annotations
from typing import (
Optional,
TYPE_CHECKING,
)
import msgspec
import numpy as np
from numpy.lib import recfunctions as rfn
from ._sharedmem import (
ShmArray,
)
from ._pathops import (
path_arrays_from_ohlc,
)
if TYPE_CHECKING:
from ._dataviz import (
Viz,
)
from .._profile import Profiler
class IncrementalFormatter(msgspec.Struct):
'''
Incrementally updating, pre-path-graphics tracking, formatter.
Allows tracking source data state in an updateable pre-graphics
``np.ndarray`` format (in local process memory) as well as
incrementally rendering from that format **to** 1d x/y for path
generation using ``pg.functions.arrayToQPath()``.
'''
shm: ShmArray
viz: Viz
@property
def index_field(self) -> 'str':
'''
Value (``str``) used to look up the "index series" from the
underlying source ``numpy`` struct-array; delegate directly to
the managing ``Viz``.
'''
return self.viz.index_field
# Incrementally updated xy ndarray formatted data, a pre-1d
# format which is updated and cached independently of the final
# pre-graphics-path 1d format.
x_nd: Optional[np.ndarray] = None
y_nd: Optional[np.ndarray] = None
@property
def xy_nd(self) -> tuple[np.ndarray, np.ndarray]:
return (
self.x_nd[self.xy_slice],
self.y_nd[self.xy_slice],
)
@property
def xy_slice(self) -> slice:
return slice(
self.xy_nd_start,
self.xy_nd_stop,
)
# indexes which slice into the above arrays (which are allocated
# based on source data shm input size) and allow retrieving
# incrementally updated data.
xy_nd_start: int | None = None
xy_nd_stop: int | None = None
# TODO: eventually incrementally update 1d-pre-graphics path data?
# x_1d: Optional[np.ndarray] = None
# y_1d: Optional[np.ndarray] = None
# incremental view-change state(s) tracking
_last_vr: tuple[float, float] | None = None
_last_ivdr: tuple[float, float] | None = None
@property
def index_step_size(self) -> float:
'''
Readonly value computed on first ``.diff()`` call.
'''
return self.viz.index_step()
def __repr__(self) -> str:
msg = (
f'{type(self)}: ->\n\n'
f'fqsn={self.viz.name}\n'
f'shm_name={self.shm.token["shm_name"]}\n\n'
f'last_vr={self._last_vr}\n'
f'last_ivdr={self._last_ivdr}\n\n'
f'xy_slice={self.xy_slice}\n'
# f'xy_nd_stop={self.xy_nd_stop}\n\n'
)
x_nd_len = 0
y_nd_len = 0
if self.x_nd is not None:
x_nd_len = len(self.x_nd)
y_nd_len = len(self.y_nd)
msg += (
f'x_nd_len={x_nd_len}\n'
f'y_nd_len={y_nd_len}\n'
)
return msg
def diff(
self,
new_read: tuple[np.ndarray],
) -> tuple[
np.ndarray,
np.ndarray,
]:
# TODO:
# - can the renderer just call ``Viz.read()`` directly? unpack
# latest source data read
# - eventually maybe we can implement some kind of
# transform on the ``QPainterPath`` that will more or less
# detect the diff in "elements" terms? update diff state since
# we've now rendered paths.
(
xfirst,
xlast,
array,
ivl,
ivr,
in_view,
) = new_read
index = array['index']
# if the first index in the read array is 0 then
# it means the source buffer has been completely backfilled to
# available space.
src_start = index[0]
src_stop = index[-1] + 1
# these are the "formatted output data" indices
# for the pre-graphics arrays.
nd_start = self.xy_nd_start
nd_stop = self.xy_nd_stop
if (
nd_start is None
):
assert nd_stop is None
# setup to do a prepend of all existing src history
nd_start = self.xy_nd_start = src_stop
# set us in a zero-to-append state
nd_stop = self.xy_nd_stop = src_stop
# compute the length diffs between the first/last index entry in
# the input data and the last indexes we have on record from the
# last time we updated the curve index.
prepend_length = int(nd_start - src_start)
append_length = int(src_stop - nd_stop)
# do diffing for prepend, append and last entry
return (
slice(src_start, nd_start),
prepend_length,
append_length,
slice(nd_stop, src_stop),
)
def _track_inview_range(
self,
view_range: tuple[int, int],
) -> bool:
# if a view range is passed, plan to draw the
# source output that's "in view" of the chart.
vl, vr = view_range
zoom_or_append = False
last_vr = self._last_vr
# incremental in-view data update.
if last_vr:
lvl, lvr = last_vr # relative slice indices
# TODO: detecting more specifically the interaction changes
# last_ivr = self._last_ivdr or (vl, vr)
# al, ar = last_ivr # abs slice indices
# left_change = abs(x_iv[0] - al) >= 1
# right_change = abs(x_iv[-1] - ar) >= 1
# likely a zoom/pan view change or data append update
if (
(vr - lvr) > 2
or vl < lvl
# append / prepend update
# we had an append update where the view range
# didn't change but the data-viewed (shifted)
# underneath, so we need to redraw.
# or left_change and right_change and last_vr == view_range
# not (left_change and right_change) and ivr
# (
# or abs(x_iv[ivr] - livr) > 1
):
zoom_or_append = True
self._last_vr = view_range
return zoom_or_append
def format_to_1d(
self,
new_read: tuple,
array_key: str,
profiler: Profiler,
slice_to_inview: bool = True,
) -> tuple[
np.ndarray,
np.ndarray,
]:
shm = self.shm
(
_,
_,
array,
ivl,
ivr,
in_view,
) = new_read
(
pre_slice,
prepend_len,
append_len,
post_slice,
) = self.diff(new_read)
# we first need to allocate xy data arrays
# from the source data.
if self.y_nd is None:
self.xy_nd_start = shm._first.value
self.xy_nd_stop = shm._last.value
self.x_nd, self.y_nd = self.allocate_xy_nd(
shm,
array_key,
)
profiler('allocated xy history')
# once allocated we do incremental pre/append
# updates from the diff with the source buffer.
else:
if prepend_len:
self.incr_update_xy_nd(
shm,
array_key,
# this is the pre-sliced, "normally expected"
# new data that an updater would normally be
# expected to process, however in some cases (like
# step curves) the updater routine may want to do
# the source history-data reading itself, so we pass
# both here.
shm._array[pre_slice],
pre_slice,
prepend_len,
self.xy_nd_start,
self.xy_nd_stop,
is_append=False,
)
self.xy_nd_start -= prepend_len
profiler(f'prepended xy history: {prepend_len}')
if append_len:
self.incr_update_xy_nd(
shm,
array_key,
shm._array[post_slice],
post_slice,
append_len,
self.xy_nd_start,
self.xy_nd_stop,
is_append=True,
)
self.xy_nd_stop += append_len
profiler(f'appended xy history: {append_len}')
# sanity
# slice_ln = post_slice.stop - post_slice.start
# assert append_len == slice_ln
view_changed: bool = False
view_range: tuple[int, int] = (ivl, ivr)
if slice_to_inview:
view_changed = self._track_inview_range(view_range)
array = in_view
profiler(f'{self.viz.name} view range slice {view_range}')
# hist = array[:slice_to_head]
# XXX: WOA WTF TRACTOR DEBUGGING BUGGG
# assert 0
# xy-path data transform: convert source data to a format
# able to be passed to a `QPainterPath` rendering routine.
if not len(array):
# XXX: this might be why the profiler only has exits?
return
# TODO: hist here should be the pre-sliced
# x/y_data in the case where allocate_xy is
# defined?
x_1d, y_1d, connect = self.format_xy_nd_to_1d(
array,
array_key,
view_range,
)
# app_tres = None
# if append_len:
# appended = array[-append_len-1:slice_to_head]
# app_tres = self.format_xy_nd_to_1d(
# appended,
# array_key,
# (
# view_range[1] - append_len + slice_to_head,
# view_range[1]
# ),
# )
# # assert (len(appended) - 1) == append_len
# # assert len(appended) == append_len
# print(
# f'{self.viz.name} APPEND LEN: {append_len}\n'
# f'{self.viz.name} APPENDED: {appended}\n'
# f'{self.viz.name} app_tres: {app_tres}\n'
# )
# update the last "in view data range"
if len(x_1d):
self._last_ivdr = x_1d[0], x_1d[-1]
profiler('.format_to_1d()')
return (
x_1d,
y_1d,
connect,
prepend_len,
append_len,
view_changed,
# app_tres,
)
###############################
# Sub-type override interface #
###############################
x_offset: np.ndarray = np.array([0])
# optional pre-graphics xy formatted data which
# is incrementally updated in sync with the source data.
# XXX: was ``.allocate_xy()``
def allocate_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.ndarray # y
]:
'''
Convert the structured-array ``src_shm`` format to
an equivalently shaped (and field-less) ``np.ndarray``.
Eg. a 4 field x N struct-array => (N, 4)
'''
y_nd = src_shm._array[data_field].copy()
x_nd = (
src_shm._array[self.index_field].copy()
+
self.x_offset
)
return x_nd, y_nd
# XXX: was ``.update_xy()``
def incr_update_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> None:
# write pushed data to flattened copy
y_nd_new = new_from_src[data_field]
self.y_nd[read_slc] = y_nd_new
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = (
new_from_src[self.index_field]
+
self.x_offset
)
# x_nd = self.x_nd[self.xy_slice]
# y_nd = self.y_nd[self.xy_slice]
# name = self.viz.name
# if 'trade_rate' == name:
# s = 4
# print(
# f'{name.upper()}:\n'
# 'NEW_FROM_SRC:\n'
# f'new_from_src: {new_from_src}\n\n'
# f'PRE self.x_nd:'
# f'\n{list(x_nd[-s:])}\n'
# f'PRE self.y_nd:\n'
# f'{list(y_nd[-s:])}\n\n'
# f'TO WRITE:\n'
# f'x_nd_new:\n'
# f'{x_nd_new[0]}\n'
# f'y_nd_new:\n'
# f'{y_nd_new}\n'
# )
# XXX: was ``.format_xy()``
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray, # 1d x
np.ndarray, # 1d y
np.ndarray | str, # connection array/style
]:
'''
Default xy-nd array to 1d pre-graphics-path render routine.
Return single field column data verbatim
'''
# NOTE: we don't include the very last datum which is filled in
# normally by another graphics object.
x_1d = array[self.index_field][:-1]
y_1d = array[array_key][:-1]
# name = self.viz.name
# if 'trade_rate' == name:
# s = 4
# x_nd = list(self.x_nd[self.xy_slice][-s:-1])
# y_nd = list(self.y_nd[self.xy_slice][-s:-1])
# print(
# f'{name}:\n'
# f'XY data:\n'
# f'x: {x_nd}\n'
# f'y: {y_nd}\n\n'
# f'x_1d: {list(x_1d[-s:])}\n'
# f'y_1d: {list(y_1d[-s:])}\n\n'
# )
return (
x_1d,
y_1d,
# 1d connection array or style-key to
# ``pg.functions.arrayToQPath()``
'all',
)
class OHLCBarsFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([
-0.5,
0,
0,
0.5,
])
fields: list[str] = ['open', 'high', 'low', 'close']
def allocate_xy_nd(
self,
ohlc_shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.ndarray # y
]:
'''
Convert an input struct-array holding OHLC samples into a pair of
flattened x, y arrays with the same size (datums wise) as the source
data.
'''
y_nd = ohlc_shm.ustruct(self.fields)
# generate a flat-interpolated x-domain
x_nd = (
np.broadcast_to(
ohlc_shm._array[self.index_field][:, None],
(
ohlc_shm._array.size,
# 4, # only ohlc
y_nd.shape[1],
),
)
+
self.x_offset
)
assert y_nd.any()
# write pushed data to flattened copy
return (
x_nd,
y_nd,
)
def incr_update_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> None:
# write newly pushed data to flattened copy
# a struct-arr is always passed in.
new_y_nd = rfn.structured_to_unstructured(
new_from_src[self.fields]
)
self.y_nd[read_slc] = new_y_nd
# generate same-valued-per-row x support based on y shape
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = np.broadcast_to(
new_from_src[self.index_field][:, None],
new_y_nd.shape,
) + self.x_offset
# TODO: can we drop this frame and just use the above?
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.16,
) -> tuple[
np.ndarray,
np.ndarray,
np.ndarray,
]:
'''
More or less direct proxy to the ``numba``-fied
``path_arrays_from_ohlc()`` (above) but with closed in kwargs
for line spacing.
'''
x, y, c = path_arrays_from_ohlc(
array,
start,
bar_w=self.index_step_size,
bar_gap=w * self.index_step_size,
# XXX: don't ask, due to a ``numba`` bug..
use_time_index=(self.index_field == 'time'),
)
return x, y, c
class OHLCBarsAsCurveFmtr(OHLCBarsFmtr):
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray,
np.ndarray,
str,
]:
# TODO: in the case of an existing ``.update_xy()``
# should we be passing in array as an xy arrays tuple?
# 2 more datum-indexes to capture zero at end
x_flat = self.x_nd[self.xy_nd_start:self.xy_nd_stop-1]
y_flat = self.y_nd[self.xy_nd_start:self.xy_nd_stop-1]
# slice to view
ivl, ivr = vr
x_iv_flat = x_flat[ivl:ivr]
y_iv_flat = y_flat[ivl:ivr]
# reshape to 1d for graphics rendering
y_iv = y_iv_flat.reshape(-1)
x_iv = x_iv_flat.reshape(-1)
return x_iv, y_iv, 'all'
class StepCurveFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([
0,
1,
])
def allocate_xy_nd(
self,
shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.ndarray # y
]:
'''
Convert an input 1d shm array to a "step array" format
for use by path graphics generation.
'''
i = shm._array[self.index_field].copy()
out = shm._array[data_field].copy()
x_out = (
np.broadcast_to(
i[:, None],
(i.size, 2),
)
+
self.x_offset
)
# fill out Nx2 array to hold each step's left + right vertices.
y_out = np.empty(
x_out.shape,
dtype=out.dtype,
)
# fill in (current) values from source shm buffer
y_out[:] = out[:, np.newaxis]
# TODO: pretty sure we can drop this?
# start y at origin level
# y_out[0, 0] = 0
# y_out[self.xy_nd_start] = 0
return x_out, y_out
def incr_update_xy_nd(
self,
src_shm: ShmArray,
array_key: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> tuple[
np.ndarray,
slice,
]:
# NOTE: for a step curve we slice from one datum prior
# to the current "update slice" to get the previous
# "level".
#
# why this is needed,
# - the current new append slice will often have a zero
# value in the latest datum-step (at least for zero-on-new
# cases like vlm) as per configuration of the FSP
# engine.
# - we need to look back a datum to get the last level which
# will be used to terminate/complete the last step x-width
# which will be set to pair with the last x-index.
#
# XXX: this means WE CAN'T USE the append slice since we need to
# "look backward" one step to get the needed back-to-zero level
# and the update data in ``new_from_src`` will only contain the
# latest new data.
back_1 = slice(
read_slc.start - 1,
read_slc.stop,
)
to_write = src_shm._array[back_1]
y_nd_new = self.y_nd[back_1]
y_nd_new[:] = to_write[array_key][:, None]
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = (
new_from_src[self.index_field][:, None]
+
self.x_offset
)
# XXX: uncomment for debugging
# x_nd = self.x_nd[self.xy_slice]
# y_nd = self.y_nd[self.xy_slice]
# name = self.viz.name
# if 'dolla_vlm' in name:
# s = 4
# print(
# f'{name}:\n'
# 'NEW_FROM_SRC:\n'
# f'new_from_src: {new_from_src}\n\n'
# f'PRE self.x_nd:'
# f'\n{x_nd[-s:]}\n'
# f'PRE self.y_nd:\n'
# f'{y_nd[-s:]}\n\n'
# f'TO WRITE:\n'
# f'x_nd_new:\n'
# f'{x_nd_new}\n'
# f'y_nd_new:\n'
# f'{y_nd_new}\n'
# )
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray,
np.ndarray,
str,
]:
last_t, last = array[-1][[self.index_field, array_key]]
start = self.xy_nd_start
stop = self.xy_nd_stop
x_step = self.x_nd[start:stop]
y_step = self.y_nd[start:stop]
# slice out in-view data
ivl, ivr = vr
# NOTE: add an extra step to get the vertical-line-down-to-zero
# adjacent to the last-datum graphic (filled rect).
x_step_iv = x_step[ivl:ivr+1]
y_step_iv = y_step[ivl:ivr+1]
# flatten to 1d
x_1d = x_step_iv.reshape(x_step_iv.size)
y_1d = y_step_iv.reshape(y_step_iv.size)
# debugging
# if y_1d.any():
# s = 6
# print(
# f'x_step_iv:\n{x_step_iv[-s:]}\n'
# f'y_step_iv:\n{y_step_iv[-s:]}\n\n'
# f'x_1d:\n{x_1d[-s:]}\n'
# f'y_1d:\n{y_1d[-s:]}\n'
# )
return x_1d, y_1d, 'all'
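A stand-alone sketch of the `.diff()` bookkeeping the formatter above performs: given the source buffer's current `[first, last)` index range and the last range already formatted, compute the prepend/append work. Pure python with no shm involved; names are simplified.

```python
def diff_ranges(
    src_start: int,
    src_stop: int,
    nd_start: int | None,
    nd_stop: int | None,
) -> tuple[slice, int, int, slice]:
    if nd_start is None:
        # first call: start in a zero-to-append state anchored at
        # the end of source history so everything prepends.
        nd_start = nd_stop = src_stop

    prepend_len = nd_start - src_start
    append_len = src_stop - nd_stop
    return (
        slice(src_start, nd_start),  # pre (history backfill) slice
        prepend_len,
        append_len,
        slice(nd_stop, src_stop),    # post (new samples) slice
    )


# first read formats all 100 datums as a prepend:
pre, plen, alen, post = diff_ranges(0, 100, None, None)
assert (plen, alen) == (100, 0) and pre == slice(0, 100)

# later, 5 new datums were pushed to the source buffer:
pre, plen, alen, post = diff_ranges(0, 105, 0, 100)
assert (plen, alen) == (0, 5) and post == slice(100, 105)
```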

View File

@ -15,17 +15,30 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Graphics related downsampling routines for compressing to pixel
limits on the display device.
Graphics downsampling using the infamous M4 algorithm.
This is one of ``piker``'s secret weapons allowing us to boss all other
charting platforms B)
(AND DON'T YOU DARE TAKE THIS CODE WITHOUT CREDIT OR WE'LL SUE UR F#&@* ASS).
NOTES: this method is a so-called "visualization driven data
aggregation" approach. It gives error-free line chart
downsampling; see further scientific paper resources:
- http://www.vldb.org/pvldb/vol7/p797-jugel.pdf
- http://www.vldb.org/2014/program/papers/demo/p997-jugel.pdf
Details on implementation of this algo are based in,
https://github.com/pikers/piker/issues/109
'''
import math
from typing import Optional
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import (
jit,
njit,
# float64, optional, int64,
)
@ -35,109 +48,6 @@ from ..log import get_logger
log = get_logger(__name__)
def hl2mxmn(ohlc: np.ndarray) -> np.ndarray:
'''
Convert an OHLC struct-array containing 'high'/'low' columns
to a "joined" max/min 1-d array.
'''
index = ohlc['index']
hls = ohlc[[
'low',
'high',
]]
mxmn = np.empty(2*hls.size, dtype=np.float64)
x = np.empty(2*hls.size, dtype=np.float64)
trace_hl(hls, mxmn, x, index[0])
x = x + index[0]
return mxmn, x
@jit(
# TODO: the type annots..
# float64[:](float64[:],),
nopython=True,
)
def trace_hl(
hl: 'np.ndarray',
out: np.ndarray,
x: np.ndarray,
start: int,
# the "offset" values in the x-domain which
# place the 2 output points around each ``int``
# master index.
margin: float = 0.43,
) -> None:
'''
"Trace" the outline of the high-low values of an ohlc sequence
as a line such that the maximum deviation (aka dispersion) between
bars is preserved.
This routine is expected to modify input arrays in-place.
'''
last_l = hl['low'][0]
last_h = hl['high'][0]
for i in range(hl.size):
row = hl[i]
l, h = row['low'], row['high']
up_diff = h - last_l
down_diff = last_h - l
if up_diff > down_diff:
out[2*i + 1] = h
out[2*i] = last_l
else:
out[2*i + 1] = l
out[2*i] = last_h
last_l = l
last_h = h
x[2*i] = int(i) - margin
x[2*i + 1] = int(i) + margin
return out
def ohlc_flatten(
ohlc: np.ndarray,
use_mxmn: bool = True,
) -> tuple[np.ndarray, np.ndarray]:
'''
Convert an OHLCV struct-array into a flat ready-for-line-plotting
1-d array that is 4 times the size with x-domain values distributed
evenly (by 0.5 steps) over each index.
'''
index = ohlc['index']
if use_mxmn:
# traces a line optimally over highs to lows
# using numba. NOTE: pretty sure this is faster
# and looks about the same as the below output.
flat, x = hl2mxmn(ohlc)
else:
flat = rfn.structured_to_unstructured(
ohlc[['open', 'high', 'low', 'close']]
).flatten()
x = np.linspace(
start=index[0] - 0.5,
stop=index[-1] + 0.5,
num=len(flat),
)
return x, flat
def ds_m4(
x: np.ndarray,
y: np.ndarray,
@ -160,16 +70,6 @@ def ds_m4(
This is more or less an OHLC style sampling of a line-style series.
'''
# NOTE: this method is a so called "visualization driven data
# aggregation" approach. It gives error-free line chart
# downsampling, see
# further scientific paper resources:
# - http://www.vldb.org/pvldb/vol7/p797-jugel.pdf
# - http://www.vldb.org/2014/program/papers/demo/p997-jugel.pdf
# Details on implementation of this algo are based in,
# https://github.com/pikers/piker/issues/109
# XXX: from infinite on downsampling viewable graphics:
# "one thing i remembered about the binning - if you are
# picking a range within your timeseries the start and end bin
@ -256,8 +156,7 @@ def ds_m4(
return nb, x_out, y_out, ymn, ymx
@jit(
nopython=True,
@njit(
nogil=True,
)
def _m4(
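A pure-python toy of the M4 reduction referenced above (the real `ds_m4` is numba-compiled and pixel-bin aware): each x-bin keeps only its first, min, max and last y values, which are exactly the vertices needed to draw that bin's line segment error-free.

```python
import numpy as np


def m4_bins(y: np.ndarray, bin_size: int) -> np.ndarray:
    # reduce each fixed-width bin to (first, min, max, last)
    nbins = len(y) // bin_size
    out = np.empty((nbins, 4))
    for b in range(nbins):
        seg = y[b * bin_size:(b + 1) * bin_size]
        out[b] = (seg[0], seg.min(), seg.max(), seg[-1])
    return out


y = np.sin(np.linspace(0, 8 * np.pi, 4_000))
ds = m4_bins(y, bin_size=100)  # 4000 points -> 40 x 4 "ohlc" rows
assert ds.shape == (40, 4)
```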

View File

@ -0,0 +1,448 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Super fast ``QPainterPath`` generation related operator routines.
"""
from math import (
ceil,
floor,
)
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import (
# types,
njit,
float64,
int64,
# optional,
)
# TODO: for ``numba`` typing..
# from ._source import numba_ohlc_dtype
from ._m4 import ds_m4
from .._profile import (
Profiler,
pg_profile_enabled,
ms_slower_then,
)
def xy_downsample(
x,
y,
uppx,
x_spacer: float = 0.5,
) -> tuple[
np.ndarray,
np.ndarray,
float,
float,
]:
'''
Downsample 1D (flat ``numpy.ndarray``) arrays using M4 given an input
``uppx`` (units-per-pixel) and add space between discrete datums.
'''
# downsample whenever more than 1 pixel per datum can be shown.
# always refresh data bounds until we get diffing
# working properly, see above..
bins, x, y, ymn, ymx = ds_m4(
x,
y,
uppx,
)
# flatten output to 1d arrays suitable for path-graphics generation.
x = np.broadcast_to(x[:, None], y.shape)
x = (x + np.array(
[-x_spacer, 0, 0, x_spacer]
)).flatten()
y = y.flatten()
return x, y, ymn, ymx
@njit(
# NOTE: need to construct this manually for readonly
# arrays, see https://github.com/numba/numba/issues/4511
# (
# types.Array(
# numba_ohlc_dtype,
# 1,
# 'C',
# readonly=True,
# ),
# int64,
# types.unicode_type,
# optional(float64),
# ),
nogil=True
)
def path_arrays_from_ohlc(
data: np.ndarray,
start: int64,
bar_w: float64,
bar_gap: float64 = 0.16,
use_time_index: bool = True,
# XXX: ``numba`` issue: https://github.com/numba/numba/issues/8622
# index_field: str,
) -> tuple[
np.ndarray,
np.ndarray,
np.ndarray,
]:
'''
Generate x, y, connect path-vertex arrays from input ohlc data.
'''
size = int(data.shape[0] * 6)
# XXX: see this for why the dtype might have to be defined outside
# the routine.
# https://github.com/numba/numba/issues/4098#issuecomment-493914533
x = np.zeros(
shape=size,
dtype=float64,
)
y, c = x.copy(), x.copy()
half_w: float = bar_w/2
# TODO: report bug for assert @
# /home/goodboy/repos/piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
for i, q in enumerate(data[start:], start):
open = q['open']
high = q['high']
low = q['low']
close = q['close']
if use_time_index:
index = float64(q['time'])
else:
index = float64(q['index'])
# XXX: ``numba`` issue: https://github.com/numba/numba/issues/8622
# index = float64(q[index_field])
# AND this (probably)
# open, high, low, close, index = q[
# ['open', 'high', 'low', 'close', 'index']]
istart = i * 6
istop = istart + 6
# x,y detail the 6 points which connect all vertexes of a ohlc bar
mid: float = index + half_w
x[istart:istop] = (
index + bar_gap,
mid,
mid,
mid,
mid,
index + bar_w - bar_gap,
)
y[istart:istop] = (
open,
open,
low,
high,
close,
close,
)
# specifies that the first edge is never connected to the
# prior bar's last edge thus providing a small "gap"/"space"
# between bars determined by ``bar_gap``.
c[istart:istop] = (1, 1, 1, 1, 1, 0)
return x, y, c
def hl2mxmn(
ohlc: np.ndarray,
index_field: str = 'index',
) -> np.ndarray:
'''
Convert an OHLC struct-array containing 'high'/'low' columns
to a "joined" max/min 1-d array.
'''
index = ohlc[index_field]
hls = ohlc[[
'low',
'high',
]]
mxmn = np.empty(2*hls.size, dtype=np.float64)
x = np.empty(2*hls.size, dtype=np.float64)
trace_hl(hls, mxmn, x, index[0])
x = x + index[0]
return mxmn, x
@njit(
# TODO: the type annots..
# float64[:](float64[:],),
)
def trace_hl(
hl: 'np.ndarray',
out: np.ndarray,
x: np.ndarray,
start: int,
# the "offset" values in the x-domain which
# place the 2 output points around each ``int``
# master index.
margin: float = 0.43,
) -> None:
'''
"Trace" the outline of the high-low values of an ohlc sequence
as a line such that the maximum deviation (aka dispersion) between
bars is preserved.
This routine is expected to modify input arrays in-place.
'''
last_l = hl['low'][0]
last_h = hl['high'][0]
for i in range(hl.size):
row = hl[i]
l, h = row['low'], row['high']
up_diff = h - last_l
down_diff = last_h - l
if up_diff > down_diff:
out[2*i + 1] = h
out[2*i] = last_l
else:
out[2*i + 1] = l
out[2*i] = last_h
last_l = l
last_h = h
x[2*i] = int(i) - margin
x[2*i + 1] = int(i) + margin
return out
def ohlc_flatten(
ohlc: np.ndarray,
use_mxmn: bool = True,
index_field: str = 'index',
) -> tuple[np.ndarray, np.ndarray]:
'''
Convert an OHLCV struct-array into a flat ready-for-line-plotting
1-d array that is 4 times the size with x-domain values distributed
evenly (by 0.5 steps) over each index.
'''
index = ohlc[index_field]
if use_mxmn:
# traces a line optimally over highs to lows
# using numba. NOTE: pretty sure this is faster
# and looks about the same as the below output.
flat, x = hl2mxmn(ohlc)
else:
flat = rfn.structured_to_unstructured(
ohlc[['open', 'high', 'low', 'close']]
).flatten()
x = np.linspace(
start=index[0] - 0.5,
stop=index[-1] + 0.5,
num=len(flat),
)
return x, flat
def slice_from_time(
arr: np.ndarray,
start_t: float,
stop_t: float,
step: int | None = None,
) -> tuple[
slice,
slice,
]:
'''
Calculate array indices mapped from a time range and return them in
a slice.
Given an input array with an epoch `'time'` series entry, calculate
the indices which span the time range and return in a slice. Presume
each `'time'` step increment is uniform; when the time stamp
series contains gaps (so the uniform presumption is untrue) use
``np.searchsorted()`` binary search to look up the appropriate
index.
'''
profiler = Profiler(
msg='slice_from_time()',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
)
times = arr['time']
t_first = floor(times[0])
t_last = ceil(times[-1])
# the greatest index we can return which slices to the
# end of the input array.
read_i_max = arr.shape[0]
# TODO: require this is always passed in?
if step is None:
step = round(t_last - times[-2])
if step == 0:
step = 1
# compute (presumed) uniform-time-step index offsets
i_start_t = floor(start_t)
read_i_start = floor(((i_start_t - t_first) // step)) - 1
i_stop_t = ceil(stop_t)
# XXX: edge case -> always set stop index to last in array whenever
# the input stop time is detected to be greater than the equiv time
# stamp at that last entry.
if i_stop_t >= t_last:
read_i_stop = read_i_max
else:
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
# always clip outputs to array support
# for read start:
# - never allow a start < the 0 index
# - never allow an end index > the read array len
read_i_start = min(
max(0, read_i_start),
read_i_max - 1,
)
read_i_stop = max(
0,
min(read_i_stop, read_i_max),
)
# check for larger-then-latest calculated index for given start
# time, in which case we do a binary search for the correct index.
# NOTE: this is usually the result of a time series with time gaps
# where it is expected that each index step maps to a uniform step
# in the time stamp series.
t_iv_start = times[read_i_start]
if (
t_iv_start > i_start_t
):
# do a binary search for the best index mapping to ``start_t``
# given we measured an overshoot using the uniform-time-step
# calculation from above.
# TODO: once we start caching these per source-array,
# we can just overwrite ``read_i_start`` directly.
new_read_i_start = np.searchsorted(
times,
i_start_t,
side='left',
)
# TODO: minimize binary search work as much as possible:
# - cache these remap values which compensate for gaps in the
# uniform time step basis where we calc a later start
# index for the given input ``start_t``.
# - can we shorten the input search sequence by heuristic?
# up_to_arith_start = index[:read_i_start]
if (
new_read_i_start <= read_i_start
):
# t_diff = t_iv_start - start_t
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
# f'diff: {t_diff}\n'
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
# )
read_i_start = new_read_i_start - 1
t_iv_stop = times[read_i_stop - 1]
if (
t_iv_stop > i_stop_t
):
# t_diff = stop_t - t_iv_stop
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
# f'diff: {t_diff}\n'
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
# )
new_read_i_stop = np.searchsorted(
times[read_i_start:],
# times,
i_stop_t,
side='left',
)
if (
new_read_i_stop <= read_i_stop
):
read_i_stop = read_i_start + new_read_i_stop + 1
# sanity checks for range size
# samples = (i_stop_t - i_start_t) // step
# index_diff = read_i_stop - read_i_start + 1
# if index_diff > (samples + 3):
# breakpoint()
# read-relative indexes: gives a slice where `shm.array[read_slc]`
# will be the data spanning the input time range `start_t` ->
# `stop_t`
read_slc = slice(
int(read_i_start),
int(read_i_stop),
)
profiler(
'slicing complete'
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
)
# NOTE: if caller needs absolute buffer indices they can
# slice the buffer abs index like so:
# index = arr['index']
# abs_indx = index[read_slc]
# abs_slc = slice(
# int(abs_indx[0]),
# int(abs_indx[-1]),
# )
return read_slc
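A hypothetical demo of the gap handling `slice_from_time()` compensates for: with a gap in the epoch series the uniform-step arithmetic overshoots, and an `np.searchsorted()` lookup recovers the correct index (shown here directly, without calling the real function).

```python
import numpy as np

dtype = [('time', 'f8'), ('close', 'f8')]
times = np.array([0, 1, 2, 3, 10, 11, 12], dtype='f8')  # gap: 3 -> 10
arr = np.zeros(len(times), dtype=dtype)
arr['time'] = times

# a uniform-step guess for start_t=10 would compute index
# (10 - 0) // 1 = 10, past the array's end; binary search maps
# it back to the true index 4.
i = np.searchsorted(arr['time'], 10, side='left')
assert i == 4

read_slc = slice(i, len(arr))
assert list(arr['time'][read_slc]) == [10, 11, 12]
```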

View File

@ -20,58 +20,96 @@ financial data flows.
"""
from __future__ import annotations
from collections import Counter
from collections import (
Counter,
defaultdict,
)
from contextlib import asynccontextmanager as acm
import time
from typing import (
AsyncIterator,
TYPE_CHECKING,
)
import tractor
from tractor.trionics import (
maybe_open_nursery,
)
import trio
from trio_typing import TaskStatus
from ..log import get_logger
from ..log import (
get_logger,
get_console_log,
)
from .._daemon import maybe_spawn_daemon
if TYPE_CHECKING:
from ._sharedmem import ShmArray
from ._sharedmem import (
ShmArray,
)
from .feed import _FeedsBus
log = get_logger(__name__)
# highest frequency sample step is 1 second by default, though in
# the future we may want to support shorter periods or a dynamic style
# tick-event stream.
_default_delay_s: float = 1.0
class sampler:
class Sampler:
'''
Global sampling engine registry.
Manages state for sampling events, shm incrementing and
sample period logic.
This non-instantiated type is meant to be a singleton within
a `samplerd` actor-service spawned once by the user wishing to
time-step sample real-time quote feeds, see
``._daemon.maybe_open_samplerd()`` and the below
``register_with_sampler()``.
'''
service_nursery: None | trio.Nursery = None
# TODO: we could stick these in a composed type to avoid
# angering the "i hate module scoped variables crowd" (yawn).
ohlcv_shms: dict[int, list[ShmArray]] = {}
ohlcv_shms: dict[float, list[ShmArray]] = {}
# holds one-task-per-sample-period tasks which are spawned as-needed by
# data feed requests with a given detected time step usually from
# history loading.
incrementers: dict[int, trio.CancelScope] = {}
incr_task_cs: trio.CancelScope | None = None
# holds all the ``tractor.Context`` remote subscriptions for
# a particular sample period increment event: all subscribers are
# notified on a step.
subscribers: dict[int, tractor.Context] = {}
# subscribers: dict[int, list[tractor.MsgStream]] = {}
subscribers: defaultdict[
float,
list[
float,
set[tractor.MsgStream]
],
] = defaultdict(
lambda: [
round(time.time()),
set(),
]
)
@classmethod
async def increment_ohlc_buffer(
delay_s: int,
self,
period_s: float,
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED,
):
'''
Task which inserts new bars into the provided shared memory array
every ``delay_s`` seconds.
every ``period_s`` seconds.
This task fulfills 2 purposes:
- it takes the subscribed set of shm arrays and increments them
@ -83,103 +121,143 @@ async def increment_ohlc_buffer(
the underlying buffers will actually be incremented.
'''
# # wait for brokerd to signal we should start sampling
# await shm_incrementing(shm_token['shm_name']).wait()
# TODO: right now we'll spin printing bars if the last time stamp is
# before a large period of no market activity. Likely the best way
# to solve this is to make this task aware of the instrument's
# tradable hours?
# adjust delay to compensate for trio processing time
ad = min(sampler.ohlcv_shms.keys()) - 0.001
total_s = 0 # total seconds counted
lowest = min(sampler.ohlcv_shms.keys())
lowest_shm = sampler.ohlcv_shms[lowest][0]
ad = lowest - 0.001
total_s: float = 0 # total seconds counted
ad = period_s - 0.001 # compensate for trio processing time
with trio.CancelScope() as cs:
# register this time period step as active
sampler.incrementers[delay_s] = cs
task_status.started(cs)
# sample step loop:
# includes broadcasting to all connected consumers on every
# new sample step as well as incrementing any registered
# buffers by registered sample period.
while True:
# TODO: do we want to support dynamically
# adding a "lower" lowest increment period?
await trio.sleep(ad)
total_s += delay_s
total_s += period_s
# increment all subscribed shm arrays
# TODO:
# - this in ``numba``
# - just lookup shms for this step instead of iterating?
for this_delay_s, shms in sampler.ohlcv_shms.items():
i_epoch = round(time.time())
broadcasted: set[float] = set()
# print(f'epoch: {i_epoch} -> REGISTRY {self.ohlcv_shms}')
for shm_period_s, shms in self.ohlcv_shms.items():
# short-circuit on any not-ready because slower sample
# rate consuming shm buffers.
if total_s % this_delay_s != 0:
# print(f'skipping `{this_delay_s}s` sample update')
if total_s % shm_period_s != 0:
# print(f'skipping `{shm_period_s}s` sample update')
continue
# update last epoch stamp for this period group
if shm_period_s not in broadcasted:
sub_pair = self.subscribers[shm_period_s]
sub_pair[0] = i_epoch
broadcasted.add(shm_period_s)
# TODO: ``numba`` this!
for shm in shms:
# TODO: in theory we could make this faster by copying the
# "last" readable value into the underlying larger buffer's
# next value and then incrementing the counter instead of
# using ``.push()``?
# print(f'UPDATE {shm_period_s}s STEP for {shm.token}')
# append new entry to buffer thus "incrementing" the bar
# append new entry to buffer thus "incrementing"
# the bar
array = shm.array
last = array[-1:][shm._write_fields].copy()
# (index, t, close) = last[0][['index', 'time', 'close']]
(t, close) = last[0][['time', 'close']]
# this copies non-std fields (eg. vwap) from the last datum
last[
['time', 'volume', 'open', 'high', 'low', 'close']
][0] = (t + this_delay_s, 0, close, close, close, close)
# guard against startup backfilling races where
# the buffer has not yet been filled.
if not last.size:
continue
(t, close) = last[0][[
'time',
'close',
]]
next_t = t + shm_period_s
if shm_period_s <= 1:
next_t = i_epoch
# this copies non-std fields (eg. vwap) from the
# last datum
last[[
'time',
'open',
'high',
'low',
'close',
'volume',
]][0] = (
# epoch timestamp
next_t,
# OHLC
close,
close,
close,
close,
0, # vlm
)
# TODO: in theory we could make this faster by
# copying the "last" readable value into the
# underlying larger buffer's next value and then
# incrementing the counter instead of using
# ``.push()``?
# write to the buffer
shm.push(last)
await broadcast(delay_s, shm=lowest_shm)
# broadcast increment msg to all updated subs per period
for shm_period_s in broadcasted:
await self.broadcast(
period_s=shm_period_s,
time_stamp=i_epoch,
)
@classmethod
async def broadcast(
delay_s: int,
shm: ShmArray | None = None,
self,
period_s: float,
time_stamp: float | None = None,
) -> None:
'''
Broadcast the given ``shm: ShmArray``'s buffer index step to any
Broadcast the period size and last index step value to all
subscribers for a given sample period.
The sent msg will include the first and last index which slice into
the buffer's non-empty data.
'''
subs = sampler.subscribers.get(delay_s, ())
first = last = -1
pair = self.subscribers[period_s]
if shm is None:
periods = sampler.ohlcv_shms.keys()
# if this is an update triggered by a history update there
# might not actually be any sampling bus setup since there's
# no "live feed" active yet.
if periods:
lowest = min(periods)
shm = sampler.ohlcv_shms[lowest][0]
first = shm._first.value
last = shm._last.value
last_ts, subs = pair
task = trio.lowlevel.current_task()
log.debug(
f'SUBS {self.subscribers}\n'
f'PAIR {pair}\n'
f'TASK: {task}: {id(task)}\n'
f'broadcasting {period_s} -> {last_ts}\n'
# f'consumers: {subs}'
)
borked: set[tractor.MsgStream] = set()
for stream in subs:
try:
await stream.send({
'first': first,
'last': last,
'index': last,
'index': time_stamp or last_ts,
'period': period_s,
})
except (
trio.BrokenResourceError,
@ -188,6 +266,9 @@ async def broadcast(
log.error(
f'{stream._ctx.chan.uid} dropped connection'
)
borked.add(stream)
for stream in borked:
try:
subs.remove(stream)
except ValueError:
@ -195,35 +276,227 @@ async def broadcast(
f'{stream._ctx.chan.uid} sub already removed!?'
)
@classmethod
async def broadcast_all(self) -> None:
for period_s in self.subscribers:
await self.broadcast(period_s)
@tractor.context
async def iter_ohlc_periods(
async def register_with_sampler(
ctx: tractor.Context,
delay_s: int,
period_s: float,
shms_by_period: dict[float, dict] | None = None,
open_index_stream: bool = True, # open a 2way stream for sample step msgs?
sub_for_broadcasts: bool = True, # sampler side to send step updates?
) -> None:
'''
Subscribe to OHLC sampling "step" events: when the time
aggregation period increments, this event stream emits an index
event.
'''
# add our subscription
subs = sampler.subscribers.setdefault(delay_s, [])
await ctx.started()
async with ctx.open_stream() as stream:
subs.append(stream)
get_console_log(tractor.current_actor().loglevel)
incr_was_started: bool = False
try:
# stream and block until cancelled
await trio.sleep_forever()
finally:
try:
subs.remove(stream)
except ValueError:
log.error(
f'iOHLC step stream was already dropped {ctx.chan.uid}?'
async with maybe_open_nursery(
Sampler.service_nursery
) as service_nursery:
# init startup, create (actor-)local service nursery and start
# increment task
Sampler.service_nursery = service_nursery
# always ensure a period subs entry exists
last_ts, subs = Sampler.subscribers[float(period_s)]
async with trio.Lock():
if Sampler.incr_task_cs is None:
Sampler.incr_task_cs = await service_nursery.start(
Sampler.increment_ohlc_buffer,
1.,
)
incr_was_started = True
# insert the base 1s period (for OHLC style sampling) into
# the increment buffer set to update and shift every second.
if shms_by_period is not None:
from ._sharedmem import (
attach_shm_array,
_Token,
)
for period in shms_by_period:
# load and register shm handles
shm_token_msg = shms_by_period[period]
shm = attach_shm_array(
_Token.from_msg(shm_token_msg),
readonly=False,
)
shms_by_period[period] = shm
Sampler.ohlcv_shms.setdefault(period, []).append(shm)
assert Sampler.ohlcv_shms
# unblock caller
await ctx.started(set(Sampler.ohlcv_shms.keys()))
if open_index_stream:
try:
async with ctx.open_stream() as stream:
if sub_for_broadcasts:
subs.add(stream)
# accept broadcast requests from the subscriber
async for msg in stream:
if msg == 'broadcast_all':
await Sampler.broadcast_all()
finally:
if sub_for_broadcasts:
subs.remove(stream)
else:
# if no shms are passed in we just wait until cancelled
# by caller.
await trio.sleep_forever()
finally:
# TODO: why tf isn't this working?
if shms_by_period is not None:
for period, shm in shms_by_period.items():
Sampler.ohlcv_shms[period].remove(shm)
if incr_was_started:
Sampler.incr_task_cs.cancel()
Sampler.incr_task_cs = None
async def spawn_samplerd(
loglevel: str | None = None,
**extra_tractor_kwargs
) -> bool:
'''
Daemon-side service task: start a sampling daemon for common step
update and increment count write and stream broadcasting.
'''
from piker._daemon import Services
dname = 'samplerd'
log.info(f'Spawning `{dname}`')
# singleton lock creation of ``samplerd`` since we only ever want
# one daemon per ``pikerd`` proc tree.
# TODO: make this built-into the service api?
async with Services.locks[dname + '_singleton']:
if dname not in Services.service_tasks:
portal = await Services.actor_n.start_actor(
dname,
enable_modules=[
'piker.data._sampling',
],
loglevel=loglevel,
debug_mode=Services.debug_mode, # set by pikerd flag
**extra_tractor_kwargs
)
await Services.start_service_task(
dname,
portal,
register_with_sampler,
period_s=1,
sub_for_broadcasts=False,
)
return True
return False
@acm
async def maybe_open_samplerd(
loglevel: str | None = None,
**kwargs,
) -> tractor._portal.Portal: # noqa
'''
Client-side helper to maybe startup the ``samplerd`` service
under the ``pikerd`` tree.
'''
dname = 'samplerd'
async with maybe_spawn_daemon(
dname,
service_task_target=spawn_samplerd,
spawn_args={'loglevel': loglevel},
loglevel=loglevel,
**kwargs,
) as portal:
yield portal
@acm
async def open_sample_stream(
period_s: float,
shms_by_period: dict[float, dict] | None = None,
open_index_stream: bool = True,
sub_for_broadcasts: bool = True,
cache_key: str | None = None,
allow_new_sampler: bool = True,
) -> AsyncIterator[dict[str, float]]:
'''
Subscribe to OHLC sampling "step" events: when the time aggregation
period increments, this event stream emits an index event.
This is a client-side endpoint that does all the work of ensuring
the `samplerd` actor is up and that multi-consumer tasks are given
a broadcast stream when possible.
'''
# TODO: wrap this manager with the following to make it cached
# per client-multitasks entry.
# maybe_open_context(
# acm_func=partial(
# portal.open_context,
# register_with_sampler,
# ),
# key=cache_key or period_s,
# )
# if cache_hit:
# # add a new broadcast subscription for the quote stream
# # if this feed is likely already in use
# async with istream.subscribe() as bistream:
# yield bistream
# else:
async with (
# XXX: this should be singleton on a host,
# a lone broker-daemon per provider should be
# created for all practical purposes
maybe_open_samplerd() as portal,
portal.open_context(
register_with_sampler,
**{
'period_s': period_s,
'shms_by_period': shms_by_period,
'open_index_stream': open_index_stream,
'sub_for_broadcasts': sub_for_broadcasts,
},
) as (ctx, first)
):
async with (
ctx.open_stream() as istream,
# TODO: we don't need this task-bcasting right?
# istream.subscribe() as istream,
):
yield istream
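A minimal consumer sketch (editor's illustration, not part of this changeset) of the endpoint above, assuming a `trio` task running somewhere inside the `pikerd` actor tree:

async def follow_steps() -> None:
    # wake on every 1s OHLC sample-period increment
    async with open_sample_stream(period_s=1.) as istream:
        async for msg in istream:
            # each msg marks a sample-period step; use it to
            # trigger a graphics redraw, fsp recompute, etc.
            print(f'step event: {msg}')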
async def sample_and_broadcast(
@ -236,7 +509,14 @@ async def sample_and_broadcast(
sum_tick_vlm: bool = True,
) -> None:
'''
`brokerd`-side task which writes the latest sampled datum.
This task is meant to run in the same actor (mem space) as the
`brokerd` real-time quote feed which is being sampled to
a ``ShmArray`` buffer.
'''
log.info("Started shared mem bar writer")
overruns = Counter()
@ -273,7 +553,6 @@ async def sample_and_broadcast(
for shm in [rt_shm, hist_shm]:
# update last entry
# benchmarked in the 4-5 us range
o, high, low, v = shm.array[-1][
['open', 'high', 'low', 'volume']
]
@ -383,6 +662,7 @@ async def sample_and_broadcast(
trio.ClosedResourceError,
trio.EndOfChannel,
):
ctx = stream._ctx
chan = ctx.chan
if ctx:
log.warning(
@ -404,10 +684,63 @@ async def sample_and_broadcast(
)
# a working tick-type-classes template
_tick_groups = {
'clears': {'trade', 'dark_trade', 'last'},
'bids': {'bid', 'bsize'},
'asks': {'ask', 'asize'},
}
def frame_ticks(
first_quote: dict,
last_quote: dict,
ticks_by_type: dict,
) -> None:
# append quotes since last iteration into the last quote's
# tick array/buffer.
ticks = last_quote.get('ticks')
# TODO: once we decide to get fancy really we should
# have a shared mem tick buffer that is just
# continually filled and the UI just reads from it
# at its display rate.
if ticks:
# TODO: do we need this any more or can we just
# expect the receiver to unwind the below
# `ticks_by_type: dict`?
# => unwinding would potentially require a
# `dict[str, set | list]` instead with an
# included `'types'` field which is an (ordered)
# set of tick type fields in the order which
# types arrived?
first_quote['ticks'].extend(ticks)
# XXX: build a tick-by-type table of lists
# of tick messages. This allows for less
# iteration on the receiver side by allowing for
# a single "latest tick event" look up by
# indexing the last entry in each sub-list.
# tbt = {
# 'types': ['bid', 'asize', 'last', .. '<type_n>'],
# 'bid': [tick0, tick1, tick2, .., tickn],
# 'asize': [tick0, tick1, tick2, .., tickn],
# 'last': [tick0, tick1, tick2, .., tickn],
# ...
# '<type_n>': [tick0, tick1, tick2, .., tickn],
# }
# append in reverse FIFO order for in-order iteration on
# receiver side.
for tick in ticks:
ttype = tick['type']
ticks_by_type[ttype].append(tick)
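For illustration (editor's sketch, not part of the changeset): the receiver-side "latest tick event" lookup described in the comment above is just an index into the tail of each per-type sub-list:

# hypothetical received frame, values made up for illustration only
ticks_by_type = {
    'bid': [
        {'type': 'bid', 'price': 99.9},
        {'type': 'bid', 'price': 100.0},
    ],
    'last': [{'type': 'last', 'price': 100.1}],
}
# the latest tick event per type is the last entry in its sub-list
assert ticks_by_type['bid'][-1]['price'] == 100.0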
# TODO: a less naive throttler, here's some snippets:
# token bucket by njs:
# https://gist.github.com/njsmith/7ea44ec07e901cb78ebe1dd8dd846cb9
async def uniform_rate_send(
rate: float,
@ -427,6 +760,12 @@ async def uniform_rate_send(
diff = 0
task_status.started()
ticks_by_type: defaultdict[
str,
list[dict],
] = defaultdict(list)
clear_types = _tick_groups['clears']
while True:
@ -445,34 +784,17 @@ async def uniform_rate_send(
if not first_quote:
first_quote = last_quote
# first_quote['tbt'] = ticks_by_type
if (throttle_period - diff) > 0:
# received a quote but the send cycle period hasn't yet
# expired, so we aren't supposed to send yet; append
# to the tick frame instead.
frame_ticks(
first_quote,
last_quote,
ticks_by_type,
)
# send cycle isn't due yet so continue waiting
continue
@ -489,12 +811,35 @@ async def uniform_rate_send(
# received quote ASAP.
sym, first_quote = await quote_stream.receive()
frame_ticks(
first_quote,
first_quote,
ticks_by_type,
)
# we have a quote already so send it now.
with trio.move_on_after(throttle_period) as cs:
while (
not set(ticks_by_type).intersection(clear_types)
):
try:
sym, last_quote = await quote_stream.receive()
except trio.EndOfChannel:
log.exception(f"feed for {stream} ended?")
break
frame_ticks(
first_quote,
last_quote,
ticks_by_type,
)
# measured_rate = 1 / (time.time() - last_send)
# log.info(
# f'`{sym}` throttled send hz: {round(measured_rate, ndigits=1)}'
# )
first_quote['tbt'] = ticks_by_type
# TODO: now if only we could sync this to the display
# rate timing exactly lul
@ -520,3 +865,4 @@ async def uniform_rate_send(
first_quote = last_quote = None
diff = 0
last_send = time.time()
ticks_by_type.clear()
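As an aside, the token-bucket throttler referenced in the TODO above could look roughly like the following (editor's sketch only; the class name and tuning are hypothetical and nothing like this ships in piker yet). It permits bursts up to `capacity` sends while enforcing an average `rate` over time:

import time
import trio

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate  # tokens (sends) accrued per second
        self.capacity = capacity  # max burst size
        self._tokens = capacity
        self._last = time.monotonic()

    async def acquire(self) -> None:
        # block until one token is available, then consume it
        while True:
            now = time.monotonic()
            self._tokens = min(
                self.capacity,
                self._tokens + (now - self._last) * self.rate,
            )
            self._last = now
            if self._tokens >= 1:
                self._tokens -= 1
                return
            await trio.sleep((1 - self._tokens) / self.rate)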

View File

@ -18,12 +18,20 @@
ToOlS fOr CoPInG wITh "tHE wEB" protocols.
"""
from contextlib import (
asynccontextmanager,
AsyncExitStack,
)
from itertools import count
from types import ModuleType
from typing import (
Any,
Optional,
Callable,
AsyncGenerator,
Iterable,
)
import json
import sys
import trio
import trio_websocket
@ -44,9 +52,12 @@ log = get_logger(__name__)
class NoBsWs:
"""Make ``trio_websocket`` sockets stay up no matter the bs.
'''
Make ``trio_websocket`` sockets stay up no matter the bs.
"""
You can provide a ``fixture`` async-context-manager which will be
enter/exitted around each reconnect operation.
'''
recon_errors = (
ConnectionClosed,
DisconnectionTimeout,
@ -68,14 +79,20 @@ class NoBsWs:
self._stack = stack
self._ws: 'WebSocketConnection' = None # noqa
# TODO: is there some method we can call
# on the underlying `._ws` to get this?
self._connected: bool = False
async def _connect(
self,
tries: int = 1000,
) -> None:
self._connected = False
while True:
try:
await self._stack.aclose()
except (DisconnectionTimeout, RuntimeError):
except self.recon_errors:
await trio.sleep(0.5)
else:
break
@ -96,6 +113,8 @@ class NoBsWs:
assert ret is None
log.info(f'Connection success: {self.url}')
self._connected = True
return self._ws
except self.recon_errors as err:
@ -105,11 +124,15 @@ class NoBsWs:
f'{type(err)}...retry attempt {i}'
)
await trio.sleep(0.5)
self._connected = False
continue
else:
log.exception('ws connection fail...')
raise last_err
def connected(self) -> bool:
return self._connected
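A usage sketch (editor's illustration; the subscription payload is hypothetical): guard any teardown messaging on the predicate so an unsub is only attempted over a live socket, avoiding a double `._stack.close()` on an already-dead connection:

# e.g. in a backend feed fixture's exit block:
if ws.connected():
    await ws.send_msg({'event': 'unsubscribe'})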
async def send_msg(
self,
data: Any,
@ -161,6 +184,7 @@ async def open_autorecon_ws(
'''
JSONRPC response-request style machinery for transparent multiplexing of msgs
over a NoBsWs.
'''
@ -170,6 +194,7 @@ class JSONRPCResult(Struct):
result: Optional[dict] = None
error: Optional[dict] = None
@asynccontextmanager
async def open_jsonrpc_session(
url: str,
@ -220,15 +245,16 @@ async def open_jsonrpc_session(
async def recv_task():
'''
Receives every ws message and stores it in its corresponding
result field, then sets the event to wake up the original sender
tasks. Also receives responses to requests originated from
the server side.
'''
async for msg in ws:
match msg:
case {
'result': _,
'id': mid,
} if res_entry := rpc_results.get(mid):
@ -239,7 +265,9 @@ async def open_jsonrpc_session(
'result': _,
'id': mid,
} if not rpc_results.get(mid):
log.warning(
f'Unexpected ws msg: {json.dumps(msg, indent=4)}'
)
case {
'method': _,
@ -259,7 +287,6 @@ async def open_jsonrpc_session(
case _:
log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')
n.start_soon(recv_task)
yield json_rpc
n.cancel_scope.cancel()

View File

@ -21,14 +21,17 @@ This module is enabled for ``brokerd`` daemons.
"""
from __future__ import annotations
from collections import (
defaultdict,
Counter,
)
from contextlib import asynccontextmanager as acm
from datetime import datetime
from functools import partial
import time
from types import ModuleType
from typing import (
Any,
AsyncIterator,
AsyncContextManager,
Callable,
Optional,
@ -56,11 +59,10 @@ from .._daemon import (
maybe_spawn_brokerd,
check_for_service,
)
from .flows import Flume
from ._sharedmem import (
maybe_open_shm_array,
attach_shm_array,
ShmArray,
_Token,
_secs_in_day,
)
from .ingest import get_ingestormod
@ -72,13 +74,9 @@ from ._source import (
)
from ..ui import _search
from ._sampling import (
sampler,
broadcast,
increment_ohlc_buffer,
iter_ohlc_periods,
open_sample_stream,
sample_and_broadcast,
uniform_rate_send,
_default_delay_s,
)
from ..brokers._util import (
DataUnavailable,
@ -128,7 +126,7 @@ class _FeedsBus(Struct):
target: Awaitable,
*args,
) -> trio.CancelScope:
async def start_with_cs(
task_status: TaskStatus[
@ -278,6 +276,7 @@ async def start_backfill(
bfqsn: str,
shm: ShmArray,
timeframe: float,
sampler_stream: tractor.MsgStream,
last_tsdb_dt: Optional[datetime] = None,
storage: Optional[Storage] = None,
@ -309,6 +308,11 @@ async def start_backfill(
- pendulum.from_timestamp(times[-2])
).seconds
if step_size_s == 60:
inow = round(time.time())
if (inow - times[-1]) > 60:
await tractor.breakpoint()
# frame's worth of sample-period-steps, in seconds
frame_size_s = len(array) * step_size_s
@ -326,8 +330,7 @@ async def start_backfill(
# TODO: *** THIS IS A BUG ***
# we need to only broadcast to subscribers for this fqsn..
# otherwise all fsps get reset on every chart..
for delay_s in sampler.subscribers:
await broadcast(delay_s)
await sampler_stream.send('broadcast_all')
# signal that backfilling to tsdb's end datum is complete
bf_done = trio.Event()
@ -376,8 +379,9 @@ async def start_backfill(
# erlangs = config.get('erlangs', 1)
# avoid duplicate history frames with a table of datetime frame
# starts and associated counts of how many duplicates we see
# per time stamp.
starts: Counter[datetime] = Counter()
# inline sequential loop where we simply pass the
# last retrieved start dt to the next request as
@ -405,14 +409,26 @@ async def start_backfill(
# request loop until the condition is resolved?
return
if (
next_start_dt in starts
and starts[next_start_dt] <= 6
):
start_dt = min(starts)
print("SKIPPING DUPLICATE FRAME @ {next_start_dt}")
log.warning(
f"{bfqsn}: skipping duplicate frame @ {next_start_dt}"
)
starts[start_dt] += 1
continue
elif starts[next_start_dt] > 6:
log.warning(
f'NO-MORE-DATA: backend {mod.name} before {next_start_dt}?'
)
return
# only update new start point if not-yet-seen
start_dt = next_start_dt
starts[start_dt] += 1
assert array['time'][0] == start_dt.timestamp()
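Schematically, the `Counter` gating above tolerates a backend re-serving the same frame start a handful of times before concluding the history is exhausted (editor's sketch with made-up values):

from collections import Counter
from datetime import datetime

starts: Counter = Counter()
frame_start = datetime(2022, 1, 1)
for _attempt in range(10):
    if starts[frame_start] > 6:
        # same frame seen too many times: no more data back there
        break
    # saw (another) duplicate of this frame start
    starts[frame_start] += 1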
@ -484,8 +500,7 @@ async def start_backfill(
# in the block above to avoid entering new ``frames``
# values while we're pipelining the current ones to
# memory...
for delay_s in sampler.subscribers:
await broadcast(delay_s)
await sampler_stream.send('broadcast_all')
# short-circuit (for now)
bf_done.set()
@ -496,6 +511,7 @@ async def basic_backfill(
mod: ModuleType,
bfqsn: str,
shms: dict[int, ShmArray],
sampler_stream: tractor.MsgStream,
) -> None:
@ -513,7 +529,8 @@ async def basic_backfill(
mod,
bfqsn,
shm,
timeframe,
sampler_stream,
)
)
except DataUnavailable:
@ -529,6 +546,7 @@ async def tsdb_backfill(
fqsn: str,
bfqsn: str,
shms: dict[int, ShmArray],
sampler_stream: tractor.MsgStream,
task_status: TaskStatus[
tuple[ShmArray, ShmArray]
@ -561,7 +579,8 @@ async def tsdb_backfill(
mod,
bfqsn,
shm,
timeframe,
sampler_stream,
last_tsdb_dt=last_tsdb_dt,
tsdb_is_up=True,
storage=storage,
@ -599,10 +618,7 @@ async def tsdb_backfill(
# unblock the feed bus management task
# assert len(shms[1].array)
task_status.started()
async def back_load_from_tsdb(
timeframe: int,
@ -658,10 +674,10 @@ async def tsdb_backfill(
# Load TSDB history into shm buffer (for display) if there is
# remaining buffer space.
if (
len(tsdb_history)
):
# load the first (smaller) bit of history originally loaded
# above from ``Storage.load()``.
to_push = tsdb_history[-prepend_start:]
@ -678,26 +694,27 @@ async def tsdb_backfill(
tsdb_last_frame_start = tsdb_history['Epoch'][0]
if timeframe == 1:
times = shm.array['time']
assert (times[1] - times[0]) == 1
# load as much from storage into shm possible (depends on
# user's shm size settings).
while shm._first.value > 0:
tsdb_history = await storage.read_ohlcv(
fqsn,
timeframe=timeframe,
end=tsdb_last_frame_start,
)
# empty query
if not len(tsdb_history):
break
next_start = tsdb_history['Epoch'][0]
if next_start >= tsdb_last_frame_start:
# no earlier data detected
break
else:
tsdb_last_frame_start = next_start
@ -725,8 +742,7 @@ async def tsdb_backfill(
# (usually a chart showing graphics for said fsp)
# which tells the chart to conduct a manual full
# graphics loop cycle.
for delay_s in sampler.subscribers:
await broadcast(delay_s)
await sampler_stream.send('broadcast_all')
# TODO: write new data to tsdb to be ready for the next read.
@ -815,6 +831,23 @@ async def manage_history(
"Persistent shm for sym was already open?!"
)
# register 1s and 1m buffers with the global incrementer task
async with open_sample_stream(
period_s=1.,
shms_by_period={
1.: rt_shm.token,
60.: hist_shm.token,
},
# NOTE: we want to only open a stream for doing broadcasts on
# backfill operations, not receive the sample index-stream
# (since there's no code in this data feed layer that needs to
# consume it).
open_index_stream=True,
sub_for_broadcasts=False,
) as sample_stream:
log.info('Scanning for existing `marketstored`')
tsdb_is_up = await check_for_service('marketstored')
@ -833,7 +866,8 @@ async def manage_history(
async with (
marketstore.open_storage_client(fqsn) as storage,
):
# TODO: drop returning the output that we pass in?
await bus.nursery.start(
tsdb_backfill,
mod,
marketstore,
@ -845,6 +879,7 @@ async def manage_history(
1: rt_shm,
60: hist_shm,
},
sample_stream,
)
# yield back after client connect with filled shm
@ -860,9 +895,9 @@ async def manage_history(
# data that can be used.
some_data_ready.set()
# history retrieval loop depending on user interaction
# and thus a small RPC-prot for remotely controlling
# what data is loaded for viewing.
await trio.sleep_forever()
# load less history if no tsdb can be found
@ -874,10 +909,11 @@ async def manage_history(
bus,
mod,
bfqsn,
{
1: rt_shm,
60: hist_shm,
},
sample_stream,
)
task_status.started((
hist_zero_index,
@ -889,151 +925,6 @@ async def manage_history(
await trio.sleep_forever()
class Flume(Struct):
'''
Composite reference type which points to all the addressing handles
and other meta-data necessary for the read, measure and management
of a set of real-time updated data flows.
Can be thought of as a "flow descriptor" or "flow frame" which
describes the high level properties of a set of data flows that can
be used seamlessly across process-memory boundaries.
Each instance's sub-components normally include:
- a msg oriented quote stream provided via an IPC transport
- history and real-time shm buffers which are both real-time
updated and backfilled.
- associated startup indexing information related to both buffer
real-time-append and historical prepend addresses.
- low level APIs to read and measure the updated data and manage
queuing properties.
'''
symbol: Symbol
first_quote: dict
_hist_shm_token: _Token
_rt_shm_token: _Token
# private shm refs loaded dynamically from tokens
_hist_shm: ShmArray | None = None
_rt_shm: ShmArray | None = None
stream: tractor.MsgStream | None = None
izero_hist: int = 0
izero_rt: int = 0
throttle_rate: int | None = None
# TODO: do we need this really if we can pull the `Portal` from
# ``tractor``'s internals?
feed: Feed | None = None
@property
def rt_shm(self) -> ShmArray:
if self._rt_shm is None:
self._rt_shm = attach_shm_array(
token=self._rt_shm_token,
readonly=True,
)
return self._rt_shm
@property
def hist_shm(self) -> ShmArray:
if self._hist_shm is None:
self._hist_shm = attach_shm_array(
token=self._hist_shm_token,
readonly=True,
)
return self._hist_shm
async def receive(self) -> dict:
return await self.stream.receive()
@acm
async def index_stream(
self,
delay_s: int = 1,
) -> AsyncIterator[int]:
if not self.feed:
raise RuntimeError('This flume is not part of any ``Feed``?')
# TODO: maybe a public (property) API for this in ``tractor``?
portal = self.stream._ctx._portal
assert portal
# XXX: this should be singleton on a host,
# a lone broker-daemon per provider should be
# created for all practical purposes
async with maybe_open_context(
acm_func=partial(
portal.open_context,
iter_ohlc_periods,
),
kwargs={'delay_s': delay_s},
) as (cache_hit, (ctx, first)):
async with ctx.open_stream() as istream:
if cache_hit:
# add a new broadcast subscription for the quote stream
# if this feed is likely already in use
async with istream.subscribe() as bistream:
yield bistream
else:
yield istream
def get_ds_info(
self,
) -> tuple[float, float, float]:
'''
Compute the "downsampling" ratio info between the historical shm
buffer and the real-time (HFT) one.
Return a tuple of the fast sample period, historical sample
period and ratio between them.
'''
times = self.hist_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s = (end - start).seconds
times = self.rt_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
rt_step_size_s = (end - start).seconds
ratio = hist_step_size_s / rt_step_size_s
return (
rt_step_size_s,
hist_step_size_s,
ratio,
)
# TODO: get native msgspec decoding for these working
def to_msg(self) -> dict:
msg = self.to_dict()
msg['symbol'] = msg['symbol'].to_dict()
# can't serialize the stream or feed objects, it's expected
# you'll have a ref to it since this msg should be rxed on
# a stream on whatever far end IPC..
msg.pop('stream')
msg.pop('feed')
return msg
@classmethod
def from_msg(cls, msg: dict) -> dict:
symbol = Symbol(**msg.pop('symbol'))
return cls(
symbol=symbol,
**msg,
)
async def allocate_persistent_feed(
bus: _FeedsBus,
sub_registered: trio.Event,
@ -1074,6 +965,8 @@ async def allocate_persistent_feed(
some_data_ready = trio.Event()
feed_is_live = trio.Event()
symstr = symstr.lower()
# establish broker backend quote stream by calling
# ``stream_quotes()``, which is a required broker backend endpoint.
init_msg, first_quote = await bus.nursery.start(
@ -1132,6 +1025,7 @@ async def allocate_persistent_feed(
# https://github.com/python-trio/trio/issues/2258
# bus.nursery.start_soon(
# await bus.start_task(
(
izero_hist,
hist_shm,
@ -1153,25 +1047,17 @@ async def allocate_persistent_feed(
flume = Flume(
symbol=symbol,
_hist_shm_token=hist_shm.token,
_rt_shm_token=rt_shm.token,
first_quote=first_quote,
_rt_shm_token=rt_shm.token,
_hist_shm_token=hist_shm.token,
izero_hist=izero_hist,
izero_rt=izero_rt,
# throttle_rate=tick_throttle,
)
# for ambiguous names we simply apply the retrieved
# feed to that name (for now).
bus.feeds[symstr] = bus.feeds[bfqsn] = flume
# insert 1s ohlc into the increment buffer set
# to update and shift every second
sampler.ohlcv_shms.setdefault(
1,
[]
).append(rt_shm)
task_status.started()
if not start_stream:
@ -1181,18 +1067,6 @@ async def allocate_persistent_feed(
# the backend will indicate when real-time quotes have begun.
await feed_is_live.wait()
# insert 1m ohlc into the increment buffer set
# to shift every 60s.
sampler.ohlcv_shms.setdefault(60, []).append(hist_shm)
# create buffer a single incrementer task broker backend
# (aka `brokerd`) using the lowest sampler period.
if sampler.incrementers.get(_default_delay_s) is None:
await bus.start_task(
increment_ohlc_buffer,
_default_delay_s,
)
sum_tick_vlm: bool = init_msg.get(
'shm_write_opts', {}
).get('sum_tick_vlm', True)
@ -1205,7 +1079,12 @@ async def allocate_persistent_feed(
rt_shm.push(hist_shm.array[-3:-1])
ohlckeys = ['open', 'high', 'low', 'close']
rt_shm.array[ohlckeys][-2:] = hist_shm.array['close'][-1]
rt_shm.array['volume'][-2:] = 0
# set fast buffer time step to 1s
ts = round(time.time())
rt_shm.array['time'][0] = ts
rt_shm.array['time'][1] = ts + 1
# wait the spawning parent task to register its subscriber
# send-stream entry before we start the sample loop.
@ -1261,10 +1140,6 @@ async def open_feed_bus(
servicename = tractor.current_actor().name
assert 'brokerd' in servicename
# XXX: figure this not crashing into debug!
# await tractor.breakpoint()
# assert 0
assert brokername in servicename
bus = get_feed_bus(brokername)
@ -1273,6 +1148,10 @@ async def open_feed_bus(
flumes: dict[str, Flume] = {}
for symbol in symbols:
# we always use lower case keys internally
symbol = symbol.lower()
# if no cached feed for this symbol has been created for this
# brokerd yet, start persistent stream and shm writer task in
# service nursery

210
piker/data/flows.py 100644
View File

@ -0,0 +1,210 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
abstractions for organizing, managing and generally operating-on
real-time data processing data-structures.
"Streams, flumes, cascades and flows.."
"""
from __future__ import annotations
from typing import (
TYPE_CHECKING,
)
import tractor
import pendulum
import numpy as np
from .types import Struct
from ._source import (
Symbol,
)
from ._sharedmem import (
attach_shm_array,
ShmArray,
_Token,
)
# from .._profile import (
# Profiler,
# pg_profile_enabled,
# )
if TYPE_CHECKING:
# from pyqtgraph import PlotItem
from .feed import Feed
# TODO: ideas for further abstractions as per
# https://github.com/pikers/piker/issues/216 and
# https://github.com/pikers/piker/issues/270:
# - a ``Cascade`` would be the minimal "connection" of 2 ``Flumes``
# as per circuit parlance:
# https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
# - could cover the combination of our `FspAdmin` and the
# backend `.fsp._engine` related machinery to "connect" one flume
# to another?
# - a (financial signal) ``Flow`` would be a "collection" of such
# minimal cascades. Some engineering based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
class Flume(Struct):
'''
Composite reference type which points to all the addressing handles
and other meta-data necessary for the read, measure and management
of a set of real-time updated data flows.
Can be thought of as a "flow descriptor" or "flow frame" which
describes the high level properties of a set of data flows that can
be used seamlessly across process-memory boundaries.
Each instance's sub-components normally include:
- a msg oriented quote stream provided via an IPC transport
- history and real-time shm buffers which are both real-time
updated and backfilled.
- associated startup indexing information related to both buffer
real-time-append and historical prepend addresses.
- low level APIs to read and measure the updated data and manage
queuing properties.
'''
symbol: Symbol
first_quote: dict
_rt_shm_token: _Token
# optional since some data flows won't have a "downsampled" history
# buffer/stream (eg. FSPs).
_hist_shm_token: _Token | None = None
# private shm refs loaded dynamically from tokens
_hist_shm: ShmArray | None = None
_rt_shm: ShmArray | None = None
stream: tractor.MsgStream | None = None
izero_hist: int = 0
izero_rt: int = 0
throttle_rate: int | None = None
# TODO: do we need this really if we can pull the `Portal` from
# ``tractor``'s internals?
feed: Feed | None = None
@property
def rt_shm(self) -> ShmArray:
if self._rt_shm is None:
self._rt_shm = attach_shm_array(
token=self._rt_shm_token,
readonly=True,
)
return self._rt_shm
@property
def hist_shm(self) -> ShmArray:
if self._hist_shm_token is None:
raise RuntimeError(
'No shm token has been set for the history buffer?'
)
if (
self._hist_shm is None
):
self._hist_shm = attach_shm_array(
token=self._hist_shm_token,
readonly=True,
)
return self._hist_shm
async def receive(self) -> dict:
return await self.stream.receive()
def get_ds_info(
self,
) -> tuple[float, float, float]:
'''
Compute the "downsampling" ratio info between the historical shm
buffer and the real-time (HFT) one.
Return a tuple of the fast sample period, historical sample
period and ratio between them.
'''
times = self.hist_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s = (end - start).seconds
times = self.rt_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
rt_step_size_s = (end - start).seconds
ratio = hist_step_size_s / rt_step_size_s
return (
rt_step_size_s,
hist_step_size_s,
ratio,
)
# TODO: get native msgspec decoding for these working
def to_msg(self) -> dict:
msg = self.to_dict()
msg['symbol'] = msg['symbol'].to_dict()
# can't serialize the stream or feed objects, it's expected
# you'll have a ref to it since this msg should be rxed on
# a stream on whatever far end IPC..
msg.pop('stream')
msg.pop('feed')
return msg
@classmethod
def from_msg(cls, msg: dict) -> dict:
symbol = Symbol(**msg.pop('symbol'))
return cls(
symbol=symbol,
**msg,
)
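Round-tripping a `Flume` across IPC then looks roughly like the following (editor's sketch); the receiving side re-attaches shm buffers lazily via the token-backed properties above:

msg = flume.to_msg()           # plain dict, sans `stream`/`feed` refs
flume2 = Flume.from_msg(msg)   # hydrate on the far end of the IPC link
arr = flume2.rt_shm.array      # shm is only attached on first access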
def get_index(
self,
time_s: float,
array: np.ndarray,
) -> int | float:
'''
Return array shm-buffer index for epoch time.
'''
times = array['time']
first = np.searchsorted(
times,
time_s,
side='left',
)
imx = times.shape[0] - 1
return min(first, imx)
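A worked example of that epoch-to-index mapping (editor's illustration):

import numpy as np

times = np.array([100., 101., 102., 105.])  # gappy time series
# left-bisect finds the first index whose time is >= the query
i = np.searchsorted(times, 103., side='left')
assert i == 3  # 105. is the first datum at-or-after t=103
# clamp queries past the end to the last valid index
i = min(np.searchsorted(times, 999., side='left'), times.shape[0] - 1)
assert i == 3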

View File

@ -454,8 +454,12 @@ class Storage:
try:
result = await client.query(params)
except purerpc.grpclib.exceptions.UnknownError as err:
# indicate there is no history for this timeframe
log.exception(
f'Unknown mkts QUERY error: {params}\n'
f'{err.args}'
)
return {}
# TODO: it turns out column access on recarrays is actually slower:

View File

@ -199,7 +199,10 @@ def maybe_mk_fsp_shm(
# TODO: load output types from `Fsp`
# - should `index` be a required internal field?
fsp_dtype = np.dtype(
[('index', int)]
+
[('time', float)]
+
[(field_name, float) for field_name in target.outputs]
)

View File

@ -21,7 +21,9 @@ core task logic for processing chains
from dataclasses import dataclass
from functools import partial
from typing import (
AsyncIterator,
Callable,
Optional,
Union,
)
@ -36,9 +38,13 @@ from .. import data
from ..data import attach_shm_array
from ..data.feed import (
Flume,
Feed,
)
from ..data._sharedmem import ShmArray
from ..data._sampling import _default_delay_s
from ..data._sampling import (
_default_delay_s,
open_sample_stream,
)
from ..data._source import Symbol
from ._api import (
Fsp,
@ -111,8 +117,9 @@ async def fsp_compute(
flume.rt_shm,
)
# HISTORY COMPUTE PHASE
# conduct a single iteration of fsp with historical bars input
# and get historical output.
history_output: Union[
dict[str, np.ndarray], # multi-output case
np.ndarray, # single output case
@ -129,9 +136,13 @@ async def fsp_compute(
# each respective field.
fields = getattr(dst.array.dtype, 'fields', None).copy()
fields.pop('index')
history_by_field: Optional[np.ndarray] = None
src_time = src.array['time']
if (
fields and
len(fields) > 1
):
if not isinstance(history_output, dict):
raise ValueError(
f'`{func_name}` is a multi-output FSP and should yield a '
@ -142,7 +153,7 @@ async def fsp_compute(
if key in history_output:
output = history_output[key]
if history_by_field is None:
if output is None:
length = len(src.array)
@ -152,7 +163,7 @@ async def fsp_compute(
# using the first output, determine
# the length of the struct-array that
# will be pushed to shm.
history = np.zeros(
history_by_field = np.zeros(
length,
dtype=dst.array.dtype
)
@ -160,7 +171,7 @@ async def fsp_compute(
if output is None:
continue
history[key] = output
history_by_field[key] = output
# single-key output stream
else:
@ -169,11 +180,13 @@ async def fsp_compute(
f'`{func_name}` is a single output FSP and should yield an '
'`np.ndarray` for history'
)
history = np.zeros(
history_by_field = np.zeros(
len(history_output),
dtype=dst.array.dtype
)
history[func_name] = history_output
history_by_field[func_name] = history_output
history_by_field['time'] = src_time[-len(history_by_field):]
# TODO: XXX:
# THERE'S A BIG BUG HERE WITH THE `index` field since we're
@ -190,7 +203,10 @@ async def fsp_compute(
# TODO: can we use this `start` flag instead of the manual
# setting above?
index = dst.push(history, start=first)
index = dst.push(
history_by_field,
start=first,
)
profiler(f'{func_name} pushed history')
profiler.finish()
@ -216,8 +232,14 @@ async def fsp_compute(
log.debug(f"{func_name}: {processed}")
key, output = processed
index = src.index
# dst.array[-1][key] = output
dst.array[[key, 'time']][-1] = (
output,
# TODO: what about pushing ``time.time_ns()``
# in which case we'll need to round at the graphics
# processing / sampling layer?
src.array[-1]['time']
)
# NOTE: for now we aren't streaming this to the consumer
# stream latest array index entry which basically just acts
@ -228,6 +250,7 @@ async def fsp_compute(
# N-consumers who subscribe for the real-time output,
# which we'll likely want to implement using local-mem
# chans for the fan out?
# index = src.index
# if attach_stream:
# await client_stream.send(index)
@ -302,6 +325,7 @@ async def cascade(
raise ValueError(f'Unknown fsp target: {ns_path}')
# open a data feed stream with requested broker
feed: Feed
async with data.feed.maybe_open_feed(
[fqsn],
@ -317,7 +341,6 @@ async def cascade(
symbol = flume.symbol
assert src.token == flume.rt_shm.token
profiler(f'{func}: feed up')
# last_len = new_len = len(src.array)
func_name = func.__name__
async with (
@ -365,7 +388,7 @@ async def cascade(
) -> tuple[TaskTracker, int]:
# TODO: adopt an incremental update engine/approach
# where possible here eventually!
log.info(f're-syncing fsp {func_name} to source')
tracker.cs.cancel()
await tracker.complete.wait()
tracker, index = await n.start(fsp_target)
@ -386,7 +409,8 @@ async def cascade(
src: ShmArray,
dst: ShmArray
) -> tuple[bool, int, int]:
'''
Predicate to determine if a destination FSP
output array is aligned to its source array.
'''
@ -395,16 +419,15 @@ async def cascade(
return not (
# the source is likely backfilling and we must
# sync history calculations
len_diff > 2
# we aren't step synced to the source and may be
# leading/lagging by a step
or step_diff > 1
or step_diff < 0
), step_diff, len_diff
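For intuition, a quick truth-table over the same predicate (editor's sketch; `step_diff`/`len_diff` are passed directly instead of being measured from the shm arrays):

def is_synced_vals(step_diff: int, len_diff: int) -> bool:
    return not (
        len_diff > 2
        or step_diff > 1
        or step_diff < 0
    )

assert is_synced_vals(0, 0)      # perfectly aligned
assert is_synced_vals(1, 2)      # dst one step behind: still ok
assert not is_synced_vals(2, 0)  # dst lagging by > 1 step
assert not is_synced_vals(0, 3)  # src backfilled; must re-sync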
async def poll_and_sync_to_step(
tracker: TaskTracker,
src: ShmArray,
dst: ShmArray,
@ -424,22 +447,22 @@ async def cascade(
# signal
times = src.array['time']
if len(times) > 1:
last_ts = times[-1]
delay_s = float(last_ts - times[times != last_ts][-1])
else:
# our default "HFT" sample rate.
delay_s = _default_delay_s
# sub and increment the underlying shared memory buffer
# on every step msg received from the global `samplerd`
# service.
async with open_sample_stream(float(delay_s)) as istream:
profiler(f'{func_name}: sample stream up')
profiler.finish()
async for i in istream:
# log.runtime(f'FSP incrementing {i}')
# print(f'FSP incrementing {i}')
# respawn the compute task if the source
# array has been updated such that we compute
@ -468,3 +491,23 @@ async def cascade(
last = array[-1:].copy()
dst.push(last)
# sync with source buffer's time step
src_l2 = src.array[-2:]
src_li, src_lt = src_l2[-1][['index', 'time']]
src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst._array['time'][src_li] = src_lt
dst._array['time'][src_2li] = src_2lt
# last2 = dst.array[-2:]
# if (
# last2[-1]['index'] != src_li
# or last2[-2]['index'] != src_2li
# ):
# dstl2 = list(last2)
# srcl2 = list(src_l2)
# print(
# # f'{dst.token}\n'
# f'src: {srcl2}\n'
# f'dst: {dstl2}\n'
# )

View File

@ -234,7 +234,7 @@ async def flow_rates(
# FSPs, user input, and possibly any general event stream in
# real-time. Hint: ideally implemented with caching until mutated
# ;)
period: 'Param[int]' = 1, # noqa
# TODO: support other means by providing a map
# to weights `partial()`-ed with `wma()`?
@ -268,8 +268,7 @@ async def flow_rates(
'dark_dvlm_rate': None,
}
quote = await anext(source)
# ltr = 0
# lvr = 0

View File

@ -118,17 +118,10 @@ async def _async_main(
# godwidget.hbox.addWidget(search)
godwidget.search = search
# this internally starts a ``display_symbol_data()`` task above
order_mode_ready = await godwidget.load_symbols(
fqsns=syms,
loglevel=loglevel,
)
# spin up a search engine for the local cached symbol set
@ -185,8 +178,7 @@ def _main(
tractor_kwargs,
) -> None:
'''
Sync entry point to start a chart: a ``tractor`` + Qt runtime.
'''
run_qtractor(

View File

@ -18,6 +18,7 @@
Chart axes graphics and behavior.
"""
from __future__ import annotations
from functools import lru_cache
from typing import Optional, Callable
from math import floor
@ -27,6 +28,7 @@ import pyqtgraph as pg
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF
from . import _pg_overrides as pgo
from ..data._source import float_digits
from ._label import Label
from ._style import DpiAwareFont, hcolor, _font
@ -46,7 +48,7 @@ class Axis(pg.AxisItem):
'''
def __init__(
self,
plotitem: pgo.PlotItem,
typical_max_str: str = '100 000.000 ',
text_color: str = 'bracket',
lru_cache_tick_strings: bool = True,
@ -61,36 +63,42 @@ class Axis(pg.AxisItem):
# XXX: pretty sure this makes things slower
# self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self.pi = plotitem
self._dpi_font = _font
font_size = self._dpi_font.font.pixelSize()
style_conf = {
'textFillLimits': [(0, 0.5)],
'tickFont': self._dpi_font.font,
}
text_offset = None
if self.orientation in ('bottom',):
text_offset = floor(0.25 * font_size)
elif self.orientation in ('left', 'right'):
text_offset = floor(font_size / 2)
if text_offset:
style_conf.update({
# offset of text *away from* axis line in px
# use approx. half the font pixel size (height)
'tickTextOffset': text_offset,
})
self.setStyle(**style_conf)
self.setTickFont(_font.font)
# NOTE: this is for surrounding "border"
self.setPen(_axis_pen)
# this is the text color
# self.setTextPen(pg.mkPen(hcolor(text_color)))
self.text_color = text_color
# generate a bounding rect based on sizing to a "typical"
# maximum length-ed string defined as init default.
self.typical_br = _font._qfm.boundingRect(typical_max_str)
# size the pertinent axis dimension to a "typical value"
@ -102,6 +110,9 @@ class Axis(pg.AxisItem):
maxsize=2**20
)(self.tickStrings)
# axis "sticky" labels
self._stickies: dict[str, YAxisLabel] = {}
# NOTE: only overriden to cast tick values entries into tuples
# for use with the lru caching.
def tickValues(
@ -139,6 +150,38 @@ class Axis(pg.AxisItem):
def txt_offsets(self) -> tuple[int, int]:
return tuple(self.style['tickTextOffset'])
def add_sticky(
self,
pi: pgo.PlotItem,
name: None | str = None,
digits: None | int = 2,
bg_color='default',
fg_color='black',
) -> YAxisLabel:
# if the sticky is for our symbol
# use the tick size precision for display
name = name or pi.name
digits = digits or 2
# TODO: ``._ysticks`` should really be an attr on each
# ``PlotItem`` now instead of the containing widget (because of
# overlays) ?
# add y-axis "last" value label
sticky = self._stickies[name] = YAxisLabel(
pi=pi,
parent=self,
digits=digits, # TODO: pass this from symbol data
opacity=0.9, # slight see-through
bg_color=bg_color,
fg_color=fg_color,
)
pi.sigRangeChanged.connect(sticky.update_on_resize)
return sticky
class PriceAxis(Axis):
@ -200,7 +243,6 @@ class PriceAxis(Axis):
self._min_tick = size
def size_to_values(self) -> None:
# self.typical_br = _font._qfm.boundingRect(typical_max_str)
self.setWidth(self.typical_br.width())
# XXX: drop for now since it just eats up h space
@ -255,28 +297,50 @@ class DynamicDateAxis(Axis):
) -> list[str]:
# XX: ARGGGGG AG:LKSKDJF:LKJSDFD
chart = self.pi.chart_widget
viz = chart._vizs[chart.name]
shm = viz.shm
array = shm.array
times = array['time']
i_0, i_l = times[0], times[-1]
# edge cases
if (
not indexes
or
(indexes[0] < i_0
and indexes[-1] < i_l)
or
(indexes[0] > i_0
and indexes[-1] > i_l)
):
return []
if viz.index_field == 'index':
arr_len = times.shape[0]
first = shm._first.value
epochs = times[
list(
map(
int,
filter(
lambda i: i > 0 and i < arr_len,
(i - first for i in indexes)
)
)
)
]
else:
epochs = list(map(int, indexes))
# TODO: **don't** have this hard coded shift to EST
# delay = times[-1] - times[-2]
dts = np.array(
epochs,
dtype='datetime64[s]',
)
# see units listing:
# https://numpy.org/devdocs/reference/arrays.datetime.html#datetime-units
@ -294,24 +358,39 @@ class DynamicDateAxis(Axis):
spacing: float,
) -> list[str]:
# NOTE: handy for debugging the lru cache
# info = self.tickStrings.cache_info()
# print(info)
return self._indexes_to_timestrs(values)
class AxisLabel(pg.GraphicsObject):
# relative offsets *OF* the bounding rect relative
# to parent graphics object.
# eg. <parent>| => <_x_br_offset> => | <text> |
_x_br_offset: float = 0
_y_br_offset: float = 0
# relative offsets of text *within* bounding rect
# eg. | <_x_margin> => <text> |
_x_margin: float = 0
_y_margin: float = 0
# multiplier of the text content's height in order
# to force a larger (y-dimension) bounding rect.
_y_txt_h_scaling: float = 1
def __init__(
self,
parent: pg.GraphicsItem,
digits: int = 2,
bg_color: str = 'default',
fg_color: str = 'black',
opacity: int = .8, # XXX: seriously don't set this to 0
font_size: str = 'default',
use_arrow: bool = True,
@ -322,6 +401,7 @@ class AxisLabel(pg.GraphicsObject):
self.setParentItem(parent)
self.setFlag(self.ItemIgnoresTransformations)
self.setZValue(100)
# XXX: pretty sure this is faster
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
@ -353,14 +433,14 @@ class AxisLabel(pg.GraphicsObject):
p: QtGui.QPainter,
opt: QtWidgets.QStyleOptionGraphicsItem,
w: QtWidgets.QWidget
) -> None:
"""Draw a filled rectangle based on the size of ``.label_str`` text.
'''
Draw a filled rectangle based on the size of ``.label_str`` text.
Subtypes can customize further by overloading ``.draw()``.
"""
# p.setCompositionMode(QtWidgets.QPainter.CompositionMode_SourceOver)
'''
if self.label_str:
# if not self.rect:
@ -371,7 +451,11 @@ class AxisLabel(pg.GraphicsObject):
p.setFont(self._dpifont.font)
p.setPen(self.fg_color)
p.drawText(
self.rect,
self.text_flags,
self.label_str,
)
def draw(
self,
@ -379,6 +463,8 @@ class AxisLabel(pg.GraphicsObject):
rect: QtCore.QRectF
) -> None:
p.setOpacity(self.opacity)
if self._use_arrow:
if not self.path:
self._draw_arrow_path()
@ -386,15 +472,13 @@ class AxisLabel(pg.GraphicsObject):
p.drawPath(self.path)
p.fillPath(self.path, pg.mkBrush(self.bg_color))
# this causes the L1 labels to glitch out if used in the subtype
# and it will leave a small black strip with the arrow path if
# done before the above
p.fillRect(
self.rect,
self.bg_color,
)
def boundingRect(self): # noqa
'''
@ -438,15 +522,18 @@ class AxisLabel(pg.GraphicsObject):
txt_h, txt_w = txt_br.height(), txt_br.width()
# print(f'wsw: {self._dpifont.boundingRect(" ")}')
# allow subtypes to override width and height
h, w = self.size_hint()
# print(f'axis size: {self._parent.size()}')
# print(f'axis geo: {self._parent.geometry()}')
self.rect = QtCore.QRectF(
0, 0,
# relative bounds offsets
self._x_br_offset,
self._y_br_offset,
(w or txt_w) + self._x_margin / 2,
(h or txt_h) * self._y_txt_h_scaling + (self._y_margin / 2),
)
# print(self.rect)
# hb = self.path.controlPointRect()
@ -522,7 +609,7 @@ class XAxisLabel(AxisLabel):
class YAxisLabel(AxisLabel):
_y_margin: int = 4
text_flags = (
QtCore.Qt.AlignLeft
@ -533,19 +620,19 @@ class YAxisLabel(AxisLabel):
def __init__(
self,
pi: pgo.PlotItem,
*args,
**kwargs
) -> None:
super().__init__(*args, **kwargs)
self._pi = pi
pi.sigRangeChanged.connect(self.update_on_resize)
self._last_datum = (None, None)
self.x_offset = 0
# pull text offset from parent axis
if getattr(self._parent, 'txt_offsets', False):
self.x_offset, y_offset = self._parent.txt_offsets()
@ -564,7 +651,8 @@ class YAxisLabel(AxisLabel):
value: float, # data for text
# on odd dimension and/or adds nice black line
x_offset: int = 0,
) -> None:
# this is read inside ``.paint()``
@ -610,7 +698,7 @@ class YAxisLabel(AxisLabel):
self._last_datum = (index, value)
self.update_label(
self._pi.mapFromView(QPointF(index, value)),
value
)

View File

@ -38,14 +38,12 @@ from PyQt5.QtWidgets import (
QVBoxLayout,
QSplitter,
)
import numpy as np
import pyqtgraph as pg
import trio
from ._axes import (
DynamicDateAxis,
PriceAxis,
YAxisLabel,
)
from ._cursor import (
Cursor,
@ -62,16 +60,19 @@ from ._style import (
hcolor,
CHART_MARGINS,
_xaxis_at,
# _min_points_to_show,
)
from ..data.feed import (
Feed,
Flume,
)
from ..data._source import Symbol
from ..log import get_logger
from ._interaction import ChartView
from ._forms import FieldsForm
from .._profile import pg_profile_enabled, ms_slower_then
from ._overlay import PlotItemOverlay
from ._dataviz import Viz
from ._search import SearchWidget
from . import _pg_overrides as pgo
from .._profile import Profiler
@ -126,7 +127,10 @@ class GodWidget(QWidget):
# self.init_strategy_ui()
# self.vbox.addLayout(self.hbox)
self._chart_cache: dict[
str,
tuple[LinkedSplits, LinkedSplits],
] = {}
self.hist_linked: Optional[LinkedSplits] = None
self.rt_linked: Optional[LinkedSplits] = None
@ -146,40 +150,23 @@ class GodWidget(QWidget):
def linkedsplits(self) -> LinkedSplits:
return self.rt_linked
# def init_timeframes_ui(self):
# self.tf_layout = QHBoxLayout()
# self.tf_layout.setSpacing(0)
# self.tf_layout.setContentsMargins(0, 12, 0, 0)
# time_frames = ('1M', '5M', '15M', '30M', '1H', '1D', '1W', 'MN')
# btn_prefix = 'TF'
# for tf in time_frames:
# btn_name = ''.join([btn_prefix, tf])
# btn = QtWidgets.QPushButton(tf)
# # TODO:
# btn.setEnabled(False)
# setattr(self, btn_name, btn)
# self.tf_layout.addWidget(btn)
# self.toolbar_layout.addLayout(self.tf_layout)
# XXX: strat loader/saver that we don't need yet.
# def init_strategy_ui(self):
# self.strategy_box = StrategyBoxWidget(self)
# self.toolbar_layout.addWidget(self.strategy_box)
def set_chart_symbols(
self,
group_key: tuple[str], # of form <fqsn>.<providername>
all_linked: tuple[LinkedSplits, LinkedSplits], # type: ignore
) -> None:
# re-sort org cache symbol list in LIFO order
cache = self._chart_cache
cache.pop(group_key, None)
cache[group_key] = all_linked
def get_chart_symbols(
self,
symbol_key: str,
@ -188,8 +175,7 @@ class GodWidget(QWidget):
async def load_symbols(
self,
fqsns: list[str],
loglevel: str,
reset: bool = False,
@ -200,20 +186,11 @@ class GodWidget(QWidget):
Expects a ``numpy`` structured array containing all the ohlcv fields.
'''
# NOTE: for now we use the first symbol in the set as the "key"
# for the overlay of feeds on the chart.
group_key: tuple[str] = tuple(fqsns)
all_linked = self.get_chart_symbols(group_key)
order_mode_started = trio.Event()
if not self.vbox.isEmpty():
@ -245,7 +222,6 @@ class GodWidget(QWidget):
self._root_n.start_soon(
display_symbol_data,
self,
fqsns,
loglevel,
order_mode_started,
@ -253,8 +229,8 @@ class GodWidget(QWidget):
# self.vbox.addWidget(hist_charts)
self.vbox.addWidget(rt_charts)
self.set_chart_symbols(
group_key,
(hist_charts, rt_charts),
)
@ -495,7 +471,11 @@ class LinkedSplits(QWidget):
from . import _display
ds = self.display_state
if ds:
return _display.graphics_update_cycle(
ds,
ds.quotes,
**kwargs,
)
@property
def symbol(self) -> Symbol:
@ -546,9 +526,10 @@ class LinkedSplits(QWidget):
symbol: Symbol,
shm: ShmArray,
flume: Flume,
sidepane: FieldsForm,
style: str = 'ohlc_bar',
) -> ChartPlotWidget:
'''
@ -568,12 +549,11 @@ class LinkedSplits(QWidget):
# be no distinction since we will have multiple symbols per
# view as part of "aggregate feeds".
self.chart = self.add_plot(
name=symbol.fqsn,
shm=shm,
flume=flume,
style=style,
_is_main=True,
sidepane=sidepane,
)
# add crosshair graphic
@ -592,6 +572,7 @@ class LinkedSplits(QWidget):
name: str,
shm: ShmArray,
flume: Flume,
array_key: Optional[str] = None,
style: str = 'line',
@ -615,12 +596,13 @@ class LinkedSplits(QWidget):
# TODO: we gotta possibly assign this back
# to the last subplot on removal of some last subplot
xaxis = DynamicDateAxis(
None,
orientation='bottom',
)
axes = {
'right': PriceAxis(linkedsplits=self, orientation='right'),
'left': PriceAxis(linkedsplits=self, orientation='left'),
'right': PriceAxis(None, orientation='right'),
'left': PriceAxis(None, orientation='left'),
'bottom': xaxis,
}
@ -645,6 +627,11 @@ class LinkedSplits(QWidget):
axisItems=axes,
**cpw_kwargs,
)
# TODO: wow i can't believe how confusing garbage all this axes
# stuff is..
for axis in axes.values():
axis.pi = cpw.plotItem
cpw.hideAxis('left')
cpw.hideAxis('bottom')
@ -707,11 +694,13 @@ class LinkedSplits(QWidget):
anchor_at = ('top', 'left')
# draw curve graphics
if style == 'ohlc_bar':
# graphics, data_key = cpw.draw_ohlc(
viz = cpw.draw_ohlc(
name,
shm,
flume=flume,
array_key=array_key
)
self.cursor.contents_labels.add_label(
@ -723,18 +712,22 @@ class LinkedSplits(QWidget):
elif style == 'line':
add_label = True
# graphics, data_key = cpw.draw_curve(
viz = cpw.draw_curve(
name,
shm,
flume,
array_key=array_key,
color='default_light',
)
elif style == 'step':
add_label = True
# graphics, data_key = cpw.draw_curve(
viz = cpw.draw_curve(
name,
shm,
flume,
array_key=array_key,
step_mode=True,
color='davies',
@ -744,22 +737,28 @@ class LinkedSplits(QWidget):
else:
raise ValueError(f"Chart style {style} is currently unsupported")
graphics = viz.graphics
data_key = viz.name
if _is_main:
assert style == 'ohlc_bar', 'main chart must be OHLC'
else:
# track by name
self.subplots[name] = cpw
if qframe is not None:
self.splitter.addWidget(qframe)
# add to cross-hair's known plots
# NOTE: add **AFTER** creating the underlying ``PlotItem``s
# since we require that global (linked charts wide) axes have
# been created!
if self.cursor:
if (
_is_main
or style != 'ohlc_bar'
):
self.cursor.add_plot(cpw)
if style != 'ohlc_bar':
self.cursor.add_curve_cursor(cpw, graphics)
if add_label:
@ -797,6 +796,8 @@ class LinkedSplits(QWidget):
self.chart.sidepane.setMinimumWidth(sp_w)
# TODO: we should really drop using this type and instead just
# write our own wrapper around `PlotItem`..
class ChartPlotWidget(pg.PlotWidget):
'''
``GraphicsView`` subtype containing a single ``PlotItem``.
@ -814,8 +815,6 @@ class ChartPlotWidget(pg.PlotWidget):
sig_mouse_leave = QtCore.pyqtSignal(object)
sig_mouse_enter = QtCore.pyqtSignal(object)
_l1_labels: L1Labels = None
mode_name: str = 'view'
# TODO: can take a ``background`` color setting - maybe there's
@ -860,7 +859,12 @@ class ChartPlotWidget(pg.PlotWidget):
# source of our custom interactions
self.cv = cv = self.mk_vb(name)
pi = pgo.PlotItem(
viewBox=cv,
name=name,
**kwargs,
)
pi.chart_widget = self
super().__init__(
background=hcolor(view_color),
viewBox=cv,
@ -890,9 +894,9 @@ class ChartPlotWidget(pg.PlotWidget):
# self.setViewportMargins(0, 0, 0, 0)
# registry of overlay curve names
self._vizs: dict[str, Viz] = {}
self.feed: Feed | None = None
self._labels = {} # registry of underlying graphics
self._ysticks = {} # registry of underlying graphics
@ -903,8 +907,6 @@ class ChartPlotWidget(pg.PlotWidget):
# show background grid
self.showGrid(x=False, y=True, alpha=0.3)
self.cv.enable_auto_yrange()
self.pi_overlay: PlotItemOverlay = PlotItemOverlay(self.plotItem)
# indempotent startup flag for auto-yrange subsys
@ -913,17 +915,17 @@ class ChartPlotWidget(pg.PlotWidget):
self._on_screen: bool = False
def resume_all_feeds(self):
feed = self.feed
if feed:
self.linked.godwidget._root_n.start_soon(feed.resume)
def pause_all_feeds(self):
feed = self.feed
if feed:
self.linked.godwidget._root_n.start_soon(feed.pause)
@property
@ -933,47 +935,6 @@ class ChartPlotWidget(pg.PlotWidget):
def focus(self) -> None:
self.view.setFocus()
def last_bar_in_view(self) -> int:
self._arrays[self.name][-1]['index']
def is_valid_index(self, index: int) -> bool:
return index >= 0 and index < self._arrays[self.name][-1]['index']
def _set_xlimits(
self,
xfirst: int,
xlast: int
) -> None:
"""Set view limits (what's shown in the main chart "pane")
based on max/min x/y coords.
"""
self.setLimits(
xMin=xfirst,
xMax=xlast,
minXRange=_min_points_to_show,
)
def view_range(self) -> tuple[int, int]:
vr = self.viewRect()
return int(vr.left()), int(vr.right())
def bars_range(self) -> tuple[int, int, int, int]:
'''
Return a range tuple for the bars present in view.
'''
main_flow = self._flows[self.name]
ifirst, l, lbar, rbar, r, ilast = main_flow.datums_range()
return l, lbar, rbar, r
def curve_width_pxs(
self,
) -> float:
_, lbar, rbar, _ = self.bars_range()
return self.view.mapViewToDevice(
QLineF(lbar, 0, rbar, 0)
).length()
def pre_l1_xs(self) -> tuple[float, float]:
'''
Return the view x-coord for the value just before
@ -982,11 +943,16 @@ class ChartPlotWidget(pg.PlotWidget):
'''
line_end, marker_right, yaxis_x = self.marker_right_points()
line = self.view.mapToView(
QLineF(line_end, 0, yaxis_x, 0)
)
linex, linelen = line.x1(), line.length()
# print(
# f'line: {line}\n'
# f'linex: {linex}\n'
# f'linelen: {linelen}\n'
# )
return linex, linelen
def marker_right_points(
self,
@ -1004,15 +970,22 @@ class ChartPlotWidget(pg.PlotWidget):
'''
# TODO: compute some sensible maximum value here
# and use a humanized scheme to limit to that length.
from ._l1 import L1Label
l1_len = abs(L1Label._x_br_offset)
ryaxis = self.getAxis('right')
r_axis_x = ryaxis.pos().x()
up_to_l1_sc = r_axis_x - l1_len
marker_right = up_to_l1_sc - (1.375 * 2 * marker_size)
# line_end = marker_right - (6/16 * marker_size)
line_end = marker_right - marker_size
# print(
# f'r_axis_x: {r_axis_x}\n'
# f'up_to_l1_sc: {up_to_l1_sc}\n'
# f'marker_right: {marker_right}\n'
# f'line_end: {line_end}\n'
# )
return line_end, marker_right, r_axis_x
def default_view(
@ -1026,133 +999,51 @@ class ChartPlotWidget(pg.PlotWidget):
Set the view box to the "default" startup view of the scene.
'''
flow = self._flows.get(self.name)
if not flow:
log.warning(f'`Flow` for {self.name} not loaded yet?')
viz = self.get_viz(self.name)
if not viz:
log.warning(f'`Viz` for {self.name} not loaded yet?')
return
index = flow.shm.array['index']
xfirst, xlast = index[0], index[-1]
l, lbar, rbar, r = self.bars_range()
view = self.view
if (
rbar < 0
or l < xfirst
or l < 0
or (rbar - lbar) < 6
):
# TODO: set fixed bars count on screen that approx includes as
# many bars as possible before a downsample line is shown.
begin = xlast - bars_from_y
view.setXRange(
min=begin,
max=xlast,
padding=0,
)
# re-get range
l, lbar, rbar, r = self.bars_range()
# we get the L1 spread label "length" in view coords
# terms now that we've scaled either by user control
# or to the default set of bars as per the immediate block
# above.
if not y_offset:
marker_pos, l1_len = self.pre_l1_xs()
end = xlast + l1_len + 1
else:
end = xlast + y_offset + 1
begin = end - (r - l)
# for debugging
# print(
# # f'bars range: {brange}\n'
# f'xlast: {xlast}\n'
# f'marker pos: {marker_pos}\n'
# f'l1 len: {l1_len}\n'
# f'begin: {begin}\n'
# f'end: {end}\n'
# )
# remove any custom user yrange settings
if self._static_yrange == 'axis':
self._static_yrange = None
view.setXRange(
min=begin,
max=end,
padding=0,
viz.default_view(
bars_from_y,
y_offset,
do_ds,
)
if do_ds:
self.view.maybe_downsample_graphics()
view._set_yrange()
try:
self.linked.graphics_cycle()
except IndexError:
pass
def increment_view(
self,
steps: int = 1,
datums: int = 1,
vb: Optional[ChartView] = None,
) -> None:
"""
Increment the data view one step to the right thus "following"
the current time slot/step/bar.
'''
Increment the data view ``datums``` steps toward y-axis thus
"following" the current time slot/step/bar.
"""
l, r = self.view_range()
'''
view = vb or self.view
viz = self.main_viz
l, r = viz.view_range()
x_shift = viz.index_step() * datums
if datums >= 300:
print("FUCKING FIX THE GLOBAL STEP BULLSHIT")
# breakpoint()
return
view.setXRange(
min=l + steps,
max=r + steps,
min=l + x_shift,
max=r + x_shift,
# TODO: holy shit, wtf dude... why tf would this not be 0 by
# default... speechless.
padding=0,
)
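A minimal sketch of the new shift arithmetic, assuming 1m OHLC data indexed by epoch time so that ``viz.index_step()`` returns 60 (values invented):

l, r = viz.view_range()         # e.g. (1672531200.0, 1672534800.0)
x_shift = viz.index_step() * 3  # datums=3 -> 180 x-units ("seconds")
view.setXRange(
    min=l + x_shift,
    max=r + x_shift,
    padding=0,
)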
def draw_ohlc(
self,
name: str,
shm: ShmArray,
array_key: Optional[str] = None,
) -> (pg.GraphicsObject, str):
'''
Draw OHLC datums to chart.
'''
graphics = BarItems(
self.linked,
self.plotItem,
pen_color=self.pen_color,
name=name,
)
# adds all bar/candle graphics objects for each data point in
# the np array buffer to be drawn on next render cycle
self.plotItem.addItem(graphics)
data_key = array_key or name
self._flows[data_key] = Flow(
name=name,
plot=self.plotItem,
_shm=shm,
is_ohlc=True,
graphics=graphics,
)
self._add_sticky(name, bg_color='davies')
return graphics, data_key
def overlay_plotitem(
self,
name: str,
@ -1172,8 +1063,8 @@ class ChartPlotWidget(pg.PlotWidget):
raise ValueError(f'``axis_side`` must be in {allowed_sides}')
yaxis = PriceAxis(
plotitem=None,
orientation=axis_side,
linkedsplits=self.linked,
**axis_kwargs,
)
@ -1188,6 +1079,9 @@ class ChartPlotWidget(pg.PlotWidget):
},
default_axes=[],
)
# pi.vb.background.setOpacity(0)
yaxis.pi = pi
pi.chart_widget = self
pi.hideButtons()
# compose this new plot's graphics with the current chart's
@ -1224,6 +1118,7 @@ class ChartPlotWidget(pg.PlotWidget):
name: str,
shm: ShmArray,
flume: Flume,
array_key: Optional[str] = None,
overlay: bool = False,
@ -1231,22 +1126,32 @@ class ChartPlotWidget(pg.PlotWidget):
add_label: bool = True,
pi: Optional[pg.PlotItem] = None,
step_mode: bool = False,
is_ohlc: bool = False,
add_sticky: None | str = 'right',
**pdi_kwargs,
**graphics_kwargs,
) -> (pg.PlotDataItem, str):
) -> Viz:
'''
Draw a "curve" (line plot graphics) for the provided data in
the input shm array ``shm``.
'''
color = color or self.pen_color or 'default_light'
pdi_kwargs.update({
'color': color
})
data_key = array_key or name
pi = pi or self.plotItem
if is_ohlc:
graphics = BarItems(
linked=self.linked,
plotitem=pi,
color=color,
name=name,
**graphics_kwargs,
)
else:
curve_type = {
None: Curve,
'step': StepCurve,
@ -1254,21 +1159,23 @@ class ChartPlotWidget(pg.PlotWidget):
# 'bars': BarsItems
}['step' if step_mode else None]
curve = curve_type(
name=name,
**pdi_kwargs,
)
pi = pi or self.plotItem
self._flows[data_key] = Flow(
name=name,
plot=pi,
_shm=shm,
is_ohlc=False,
# register curve graphics with this flow
graphics=curve,
)
graphics = curve_type(
name=name,
color=color,
**graphics_kwargs,
)
viz = self._vizs[data_key] = Viz(
data_key,
pi,
shm,
flume,
is_ohlc=is_ohlc,
# register curve graphics with this viz
graphics=graphics,
)
assert isinstance(viz.shm, ShmArray)
# TODO: this probably needs its own method?
if overlay:
@ -1278,12 +1185,42 @@ class ChartPlotWidget(pg.PlotWidget):
f'{overlay} must be from `.plotitem_overlay()`'
)
pi = overlay
else:
if add_sticky:
axis = pi.getAxis(add_sticky)
if pi.name not in axis._stickies:
if pi is not self.plotItem:
overlay = self.pi_overlay
assert pi in overlay.overlays
overlay_axis = overlay.get_axis(
pi,
add_sticky,
)
assert overlay_axis is axis
# TODO: UGH! just make this not here! we should
# be making the sticky from code which has access
# to the ``Symbol`` instance..
# if the sticky is for our symbol
# use the tick size precision for display
name = name or pi.name
sym = self.linked.symbol
digits = None
if name == sym.key:
digits = sym.tick_size_digits
# anchor_at = ('top', 'left')
# TODO: something instead of stickies for overlays
# (we need something that avoids clutter on x-axis).
self._add_sticky(name, bg_color=color)
axis.add_sticky(
pi=pi,
fg_color='black',
# bg_color=color,
digits=digits,
)
# NOTE: this is more or less the RENDER call that tells Qt to
# start showing the generated graphics-curves. This is kind of
@ -1294,38 +1231,32 @@ class ChartPlotWidget(pg.PlotWidget):
# the next render cycle; just note a lot of the real-time
# updates are implicit and require a bit of digging to
# understand.
pi.addItem(curve)
pi.addItem(graphics)
return curve, data_key
return viz
# TODO: make this a ctx mngr
def _add_sticky(
self,
name: str,
bg_color='bracket',
) -> YAxisLabel:
# if the sticky is for our symbol
# use the tick size precision for display
sym = self.linked.symbol
if name == sym.key:
digits = sym.tick_size_digits
else:
digits = 2
# add y-axis "last" value label
last = self._ysticks[name] = YAxisLabel(
chart=self,
# parent=self.getAxis('right'),
parent=self.pi_overlay.get_axis(self.plotItem, 'right'),
# TODO: pass this from symbol data
digits=digits,
opacity=1,
bg_color=bg_color,
)
return last
def draw_ohlc(
self,
name: str,
shm: ShmArray,
flume: Flume,
array_key: Optional[str] = None,
**draw_curve_kwargs,
) -> Viz:
'''
Draw OHLC datums to chart.
'''
return self.draw_curve(
name,
shm,
flume,
array_key=array_key,
is_ohlc=True,
**draw_curve_kwargs,
)
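A usage sketch of the new return convention (symbol and shm names invented): both entrypoints now hand back a ``Viz`` instead of the old ``(graphics, data_key)`` pair, so callers can key off it directly:

ohlc_viz = chart.draw_ohlc('btcusdt.binance', shm, flume)
vwap_viz = chart.draw_curve(
    'vwap',
    vwap_shm,
    flume,
    overlay=True,
    color='default_light',
)
assert vwap_viz is chart.get_viz('vwap')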
def update_graphics_from_flow(
self,
@ -1339,41 +1270,12 @@ class ChartPlotWidget(pg.PlotWidget):
Update the named internal graphics from ``array``.
'''
flow = self._flows[array_key or graphics_name]
return flow.update_graphics(
viz = self._vizs[array_key or graphics_name]
return viz.update_graphics(
array_key=array_key,
**kwargs,
)
# def _label_h(self, yhigh: float, ylow: float) -> float:
# # compute contents label "height" in view terms
# # to avoid having data "contents" overlap with them
# if self._labels:
# label = self._labels[self.name][0]
# rect = label.itemRect()
# tl, br = rect.topLeft(), rect.bottomRight()
# vb = self.plotItem.vb
# try:
# # on startup labels might not yet be rendered
# top, bottom = (vb.mapToView(tl).y(), vb.mapToView(br).y())
# # XXX: magic hack, how do we compute exactly?
# label_h = (top - bottom) * 0.42
# except np.linalg.LinAlgError:
# label_h = 0
# else:
# label_h = 0
# # print(f'label height {self.name}: {label_h}')
# if label_h > yhigh - ylow:
# label_h = 0
# print(f"bounds (ylow, yhigh): {(ylow, yhigh)}")
# TODO: pretty sure we can just call the cursor
# directly not? i don't see why we need special "signal proxies"
# for this lul..
@ -1386,37 +1288,6 @@ class ChartPlotWidget(pg.PlotWidget):
self.sig_mouse_leave.emit(self)
self.scene().leaveEvent(ev)
def get_index(self, time: float) -> int:
# TODO: this should go onto some sort of
# data-view thinger..right?
ohlc = self._flows[self.name].shm.array
# XXX: not sure why the time is so off here
# looks like we're gonna have to do some fixing..
indexes = ohlc['time'] >= time
if any(indexes):
return ohlc['index'][indexes][-1]
else:
return ohlc['index'][-1]
def in_view(
self,
array: np.ndarray,
) -> np.ndarray:
'''
Slice an input struct array providing only datums
"in view" of this chart.
'''
l, lbar, rbar, r = self.bars_range()
ifirst = array[0]['index']
# slice data by offset from the first index
# available in the passed datum set.
return array[lbar - ifirst:(rbar - ifirst) + 1]
def maxmin(
self,
name: Optional[str] = None,
@ -1438,36 +1309,34 @@ class ChartPlotWidget(pg.PlotWidget):
delayed=True,
)
# TODO: here we should instead look up the ``Flow.shm.array``
# TODO: here we should instead look up the ``Viz.shm.array``
# and read directly from shm to avoid copying to memory first
# and then reading it again here.
flow_key = name or self.name
flow = self._flows.get(flow_key)
if (
flow is None
):
log.error(f"flow {flow_key} doesn't exist in chart {self.name} !?")
viz_key = name or self.name
viz = self._vizs.get(viz_key)
if viz is None:
log.error(f"viz {viz_key} doesn't exist in chart {self.name} !?")
key = res = 0, 0
else:
(
first,
l,
_,
lbar,
rbar,
_,
r,
last,
) = bars_range or flow.datums_range()
profiler(f'{self.name} got bars range')
) = bars_range or viz.datums_range()
key = round(lbar), round(rbar)
res = flow.maxmin(*key)
profiler(f'{self.name} got bars range')
key = lbar, rbar
res = viz.maxmin(*key)
if (
res is None
):
log.warning(
f"{flow_key} no mxmn for bars_range => {key} !?"
f"{viz_key} no mxmn for bars_range => {key} !?"
)
res = 0, 0
if not self._on_screen:
@ -1475,5 +1344,19 @@ class ChartPlotWidget(pg.PlotWidget):
self._on_screen = True
profiler(f'yrange mxmn: {key} -> {res}')
# print(f'{flow_key} yrange mxmn: {key} -> {res}')
# print(f'{viz_key} yrange mxmn: {key} -> {res}')
return res
def get_viz(
self,
key: str,
) -> Viz:
'''
Try to get an underlying ``Viz`` by key.
'''
return self._vizs.get(key)
@property
def main_viz(self) -> Viz:
return self.get_viz(self.name)


@ -71,7 +71,7 @@ class LineDot(pg.CurvePoint):
plot: ChartPlotWidget, # type: ignore # noqa
pos=None,
color: str = 'default_light',
color: str = 'bracket',
) -> None:
# scale from dpi aware font size
@ -198,12 +198,11 @@ class ContentsLabel(pg.LabelItem):
self,
name: str,
index: int,
ix: int,
array: np.ndarray,
) -> None:
# this being "html" is the dumbest shit :eyeroll:
first = array[0]['index']
self.setText(
"<b>i</b>:{index}<br/>"
@ -216,7 +215,7 @@ class ContentsLabel(pg.LabelItem):
"<b>C</b>:{}<br/>"
"<b>V</b>:{}<br/>"
"<b>wap</b>:{}".format(
*array[index - first][
*array[ix][
[
'time',
'open',
@ -228,7 +227,7 @@ class ContentsLabel(pg.LabelItem):
]
],
name=name,
index=index,
index=ix,
)
)
@ -236,14 +235,11 @@ class ContentsLabel(pg.LabelItem):
self,
name: str,
index: int,
ix: int,
array: np.ndarray,
) -> None:
first = array[0]['index']
if index < array[-1]['index'] and index > first:
data = array[index - first][name]
data = array[ix][name]
self.setText(f"{name}: {data:.2f}")
@ -269,17 +265,20 @@ class ContentsLabels:
def update_labels(
self,
index: int,
x_in: int,
) -> None:
for chart, name, label, update in self._labels:
flow = chart._flows[name]
array = flow.shm.array
viz = chart.get_viz(name)
array = viz.shm.array
index = array[viz.index_field]
start = index[0]
stop = index[-1]
if not (
index >= 0
and index < array[-1]['index']
x_in >= start
and x_in <= stop
):
# out of range
print('WTF out of range?')
@ -288,7 +287,10 @@ class ContentsLabels:
# call provided update func with data point
try:
label.show()
update(index, array)
ix = np.searchsorted(index, x_in)
if ix > len(array):
breakpoint()
update(ix, array)
except IndexError:
log.exception(f"Failed to update label: {name}")
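The ``np.searchsorted()`` call above is what makes the labels work with a gappy, epoch-stamped index: the cursor's x-coord is mapped to the nearest stored row instead of assuming a uniform 0..N index. A standalone sketch (data invented):

import numpy as np

times = np.array([100., 160., 220., 340.])  # gappy epoch index
x_in = 230.0
ix = np.searchsorted(times, x_in)  # -> 3: first entry >= x_in
ix = min(ix, len(times) - 1)       # clamp for the right-edge case
row_time = times[ix]               # 340.0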
@ -349,7 +351,7 @@ class Cursor(pg.GraphicsObject):
# XXX: not sure why these are instance variables?
# It's not like we can change them on the fly..?
self.pen = pg.mkPen(
color=hcolor('default'),
color=hcolor('bracket'),
style=QtCore.Qt.DashLine,
)
self.lines_pen = pg.mkPen(
@ -365,7 +367,7 @@ class Cursor(pg.GraphicsObject):
self._lw = self.pixelWidth() * self.lines_pen.width()
# xhair label's color name
self.label_color: str = 'default'
self.label_color: str = 'bracket'
self._y_label_update: bool = True
@ -418,7 +420,7 @@ class Cursor(pg.GraphicsObject):
hl.hide()
yl = YAxisLabel(
chart=plot,
pi=plot.plotItem,
# parent=plot.getAxis('right'),
parent=plot.pi_overlay.get_axis(plot.plotItem, 'right'),
digits=digits or self.digits,
@ -482,25 +484,32 @@ class Cursor(pg.GraphicsObject):
def add_curve_cursor(
self,
plot: ChartPlotWidget, # noqa
chart: ChartPlotWidget, # noqa
curve: 'PlotCurveItem', # noqa
) -> LineDot:
# if this plot contains curves add line dot "cursors" to denote
# if this chart contains curves add line dot "cursors" to denote
# the current sample under the mouse
main_flow = plot._flows[plot.name]
main_viz = chart.get_viz(chart.name)
# read out last index
i = main_flow.shm.array[-1]['index']
i = main_viz.shm.array[-1]['index']
cursor = LineDot(
curve,
index=i,
plot=plot
plot=chart
)
plot.addItem(cursor)
self.graphics[plot].setdefault('cursors', []).append(cursor)
chart.addItem(cursor)
self.graphics[chart].setdefault('cursors', []).append(cursor)
return cursor
def mouseAction(self, action, plot): # noqa
def mouseAction(
self,
action: str,
plot: ChartPlotWidget,
) -> None: # noqa
log.debug(f"{(action, plot.name)}")
if action == 'Enter':
self.active_plot = plot


@ -28,10 +28,7 @@ from PyQt5.QtWidgets import QGraphicsItem
from PyQt5.QtCore import (
Qt,
QLineF,
QSizeF,
QRectF,
# QRect,
QPointF,
)
from PyQt5.QtGui import (
QPainter,
@ -39,10 +36,6 @@ from PyQt5.QtGui import (
)
from .._profile import pg_profile_enabled, ms_slower_then
from ._style import hcolor
# from ._compression import (
# # ohlc_to_m4_line,
# ds_m4,
# )
from ..log import get_logger
from .._profile import Profiler
@ -58,7 +51,39 @@ _line_styles: dict[str, int] = {
}
class Curve(pg.GraphicsObject):
class FlowGraphic(pg.GraphicsObject):
'''
Base class with minimal interface for `QPainterPath` implemented,
real-time updated "data flow" graphics.
See subtypes below.
'''
# sub-type customization methods
declare_paintables: Optional[Callable] = None
sub_paint: Optional[Callable] = None
# TODO: can we remove this?
# sub_br: Optional[Callable] = None
def x_uppx(self) -> int:
px_vecs = self.pixelVectors()[0]
if px_vecs:
return px_vecs.x()
else:
return 0
def x_last(self) -> float | None:
'''
Return the last most x value of the last line segment or if not
drawn yet, ``None``.
'''
return self._last_line.x1() if self._last_line else None
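A sketch of how render-layer code would typically consume this minimal interface (assumed usage, not shown in this diff): ``x_uppx()`` reports x-units per screen pixel which gates downsampling, while ``x_last()`` anchors any "last datum" updates:

uppx = graphic.x_uppx()
if uppx > 1:
    # more than one datum per pixel: render a downsampled
    # path instead of the raw point set.
    ...

if graphic.x_last() is None:
    # nothing drawn yet, skip last-datum updates.
    ...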
class Curve(FlowGraphic):
'''
A faster, simpler, append friendly version of
``pyqtgraph.PlotCurveItem`` built for highly customizable real-time
@ -75,7 +100,7 @@ class Curve(pg.GraphicsObject):
lower level graphics data can be rendered in different threads and
then read and drawn in this main thread without having to worry
about dealing with Qt's concurrency primitives. See
``piker.ui._flows.Renderer`` for details and logic related to lower
``piker.ui._render.Renderer`` for details and logic related to lower
level path generation and incremental update. The main differences in
the path generation code include:
@ -88,11 +113,6 @@ class Curve(pg.GraphicsObject):
'''
# sub-type customization methods
sub_br: Optional[Callable] = None
sub_paint: Optional[Callable] = None
declare_paintables: Optional[Callable] = None
def __init__(
self,
*args,
@ -102,7 +122,6 @@ class Curve(pg.GraphicsObject):
fill_color: Optional[str] = None,
style: str = 'solid',
name: Optional[str] = None,
use_fpath: bool = True,
**kwargs
@ -117,11 +136,11 @@ class Curve(pg.GraphicsObject):
# self._last_cap: int = 0
self.path: Optional[QPainterPath] = None
# additional path used for appends which tries to avoid
# triggering an update/redraw of the presumably larger
# historical ``.path`` above.
self.use_fpath = use_fpath
self.fast_path: Optional[QPainterPath] = None
# additional path that can be optionally used for appends which
# tries to avoid triggering an update/redraw of the presumably
# larger historical ``.path`` above. the flag to enable
# this behaviour is found in `Renderer.render()`.
self.fast_path: QPainterPath | None = None
# TODO: we can probably just dispense with the parent since
# we're basically only using the pen setting now...
@ -140,9 +159,7 @@ class Curve(pg.GraphicsObject):
# self.last_step_pen = pg.mkPen(hcolor(color), width=2)
self.last_step_pen = pg.mkPen(pen, width=2)
# self._last_line: Optional[QLineF] = None
self._last_line = QLineF()
self._last_w: float = 1
self._last_line: QLineF = QLineF()
# flat-top style histogram-like discrete curve
# self._step_mode: bool = step_mode
@ -163,51 +180,19 @@ class Curve(pg.GraphicsObject):
# endpoint (something we saw on trade rate curves)
self.setCacheMode(QGraphicsItem.DeviceCoordinateCache)
# XXX: see explanation for different caching modes:
# https://stackoverflow.com/a/39410081
# seems to only be useful if we don't re-generate the entire
# QPainterPath every time
# curve.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
# XXX-NOTE-XXX: graphics caching.
# see explanation for different caching modes:
# https://stackoverflow.com/a/39410081 seems to only be useful
# if we don't re-generate the entire QPainterPath every time
# don't ever use this - it's a colossal nightmare of artefacts
# and is disastrous for performance.
# curve.setCacheMode(QtWidgets.QGraphicsItem.ItemCoordinateCache)
# self.setCacheMode(QtWidgets.QGraphicsItem.ItemCoordinateCache)
# allow sub-type customization
declare = self.declare_paintables
if declare:
declare()
# TODO: probably stick this in a new parent
# type which will contain our own version of
# what ``PlotCurveItem`` had in terms of base
# functionality? A `FlowGraphic` maybe?
def x_uppx(self) -> int:
px_vecs = self.pixelVectors()[0]
if px_vecs:
xs_in_px = px_vecs.x()
return round(xs_in_px)
else:
return 0
def px_width(self) -> float:
vb = self.getViewBox()
if not vb:
return 0
vr = self.viewRect()
l, r = int(vr.left()), int(vr.right())
start, stop = self._xrange
lbar = max(l, start)
rbar = min(r, stop)
return vb.mapViewToDevice(
QLineF(lbar, 0, rbar, 0)
).length()
# XXX: lol brutal, the internals of `CurvePoint` (inherited by
# our `LineDot`) required ``.getData()`` to work..
def getData(self):
@ -231,8 +216,8 @@ class Curve(pg.GraphicsObject):
self.path.clear()
if self.fast_path:
# self.fast_path.clear()
self.fast_path = None
self.fast_path.clear()
# self.fast_path = None
@cm
def reset_cache(self) -> None:
@ -252,77 +237,65 @@ class Curve(pg.GraphicsObject):
self.boundingRect = self._path_br
return self._path_br()
# Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect
def _path_br(self):
'''
Post init ``.boundingRect()``.
'''
# hb = self.path.boundingRect()
hb = self.path.controlPointRect()
hb_size = hb.size()
fp = self.fast_path
if fp:
fhb = fp.controlPointRect()
hb_size = fhb.size() + hb_size
# print(f'hb_size: {hb_size}')
# if self._last_step_rect:
# hb_size += self._last_step_rect.size()
# if self._line:
# br = self._last_step_rect.bottomRight()
# tl = QPointF(
# # self._vr[0],
# # hb.topLeft().y(),
# # 0,
# # hb_size.height() + 1
# profiler = Profiler(
# msg=f'Curve.boundingRect(): `{self._name}`',
# disabled=not pg_profile_enabled(),
# ms_threshold=ms_slower_then,
# )
pr = self.path.controlPointRect()
hb_tl, hb_br = (
pr.topLeft(),
pr.bottomRight(),
)
mn_y = hb_tl.y()
mx_y = hb_br.y()
most_left = hb_tl.x()
most_right = hb_br.x()
# profiler('calc path vertices')
# br = self._last_step_rect.bottomRight()
# TODO: if/when we get fast path appends working in the
# `Renderer`, then we might need to actually use this..
# fp = self.fast_path
# if fp:
# fhb = fp.controlPointRect()
# # hb_size = fhb.size() + hb_size
# br = pr.united(fhb)
w = hb_size.width()
h = hb_size.height()
# XXX: *was* a way to allow sub-types to extend the
# boundingrect calc, but in the one use case for a step curve
# doesn't seem like we need it as long as the last line segment
# is drawn as it is?
# sbr = self.sub_br
# if sbr:
# # w, h = self.sub_br(w, h)
# sub_br = sbr()
# br = br.united(sub_br)
sbr = self.sub_br
if sbr:
w, h = self.sub_br(w, h)
else:
# assume plain line graphic and use
# default unit step in each direction.
ll = self._last_line
y1, y2 = ll.y1(), ll.y2()
x1, x2 = ll.x1(), ll.x2()
# only on a plane line do we include
# and extra index step's worth of width
# since in the step case the end of the curve
# actually terminates earlier so we don't need
# this for the last step.
w += self._last_w
# ll = self._last_line
h += 1 # ll.y2() - ll.y1()
ymn = min(y1, y2, mn_y)
ymx = max(y1, y2, mx_y)
most_left = min(x1, x2, most_left)
most_right = max(x1, x2, most_right)
# profiler('calc last line vertices')
# br = QPointF(
# self._vr[-1],
# # tl.x() + w,
# tl.y() + h,
# )
br = QRectF(
# top left
# hb.topLeft()
# tl,
QPointF(hb.topLeft()),
# br,
# total size
# QSizeF(hb_size)
# hb_size,
QSizeF(w, h)
)
# print(f'bounding rect: {br}')
return br
return QRectF(
most_left,
ymn,
most_right - most_left + 1,
ymx,
)
def paint(
self,
@ -340,7 +313,7 @@ class Curve(pg.GraphicsObject):
sub_paint = self.sub_paint
if sub_paint:
sub_paint(p, profiler)
sub_paint(p)
p.setPen(self.last_step_pen)
p.drawLine(self._last_line)
@ -374,22 +347,30 @@ class Curve(pg.GraphicsObject):
self,
path: QPainterPath,
src_data: np.ndarray,
render_data: np.ndarray,
reset: bool,
array_key: str,
index_field: str,
) -> None:
# default line draw last call
# with self.reset_cache():
x = render_data['index']
y = render_data[array_key]
x = src_data[index_field]
y = src_data[array_key]
x_last = x[-1]
x_2last = x[-2]
# draw the "current" step graphic segment so it
# lines up with the "middle" of the current
# (OHLC) sample.
self._last_line = QLineF(
x[-2], y[-2],
x[-1], y[-1],
# NOTE: currently we draw in x-domain
# from last datum to current such that
# the end of line touches the "beginning"
# of the current datum step span.
x_2last , y[-2],
x_last, y[-1],
)
return x, y
@ -405,13 +386,13 @@ class FlattenedOHLC(Curve):
self,
path: QPainterPath,
src_data: np.ndarray,
render_data: np.ndarray,
reset: bool,
array_key: str,
index_field: str,
) -> None:
lasts = src_data[-2:]
x = lasts['index']
x = lasts[index_field]
y = lasts['close']
# draw the "current" step graphic segment so it
@ -435,9 +416,9 @@ class StepCurve(Curve):
self,
path: QPainterPath,
src_data: np.ndarray,
render_data: np.ndarray,
reset: bool,
array_key: str,
index_field: str,
w: float = 0.5,
@ -446,40 +427,31 @@ class StepCurve(Curve):
# TODO: remove this and instead place all step curve
# updating into pre-path data render callbacks.
# full input data
x = src_data['index']
x = src_data[index_field]
y = src_data[array_key]
x_last = x[-1]
x_2last = x[-2]
y_last = y[-1]
step_size = x_last - x_2last
# lol, commenting this makes step curves
# all "black" for me :eyeroll:..
self._last_line = QLineF(
x_last - w, 0,
x_last + w, 0,
x_2last, 0,
x_last, 0,
)
self._last_step_rect = QRectF(
x_last - w, 0,
x_last + w, y_last,
x_last, 0,
step_size, y_last,
)
return x, y
def sub_paint(
self,
p: QPainter,
profiler: Profiler,
) -> None:
# p.drawLines(*tuple(filter(bool, self._last_step_lines)))
# p.drawRect(self._last_step_rect)
p.fillRect(self._last_step_rect, self._brush)
profiler('.fillRect()')
def sub_br(
self,
path_w: float,
path_h: float,
) -> (float, float):
# passthrough
return path_w, path_h

piker/ui/_dataviz.py (new file, mode 100644, 1108 lines)

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -377,7 +377,7 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
nbars = ixmx - ixmn + 1
chart = self._chart
data = chart._flows[chart.name].shm.array[ixmn:ixmx]
data = chart.get_viz(chart.name).shm.array[ixmn:ixmx]
if len(data):
std = data['close'].std()

File diff suppressed because it is too large.


@ -42,6 +42,8 @@ from ..data._sharedmem import (
_Token,
try_read,
)
from ..data.feed import Flume
from ..data._source import Symbol
from ._chart import (
ChartPlotWidget,
LinkedSplits,
@ -51,7 +53,10 @@ from ._forms import (
mk_form,
open_form_input_handling,
)
from ..fsp._api import maybe_mk_fsp_shm, Fsp
from ..fsp._api import (
maybe_mk_fsp_shm,
Fsp,
)
from ..fsp import cascade
from ..fsp._volume import (
# tina_vwap,
@ -74,14 +79,14 @@ def has_vlm(ohlcv: ShmArray) -> bool:
def update_fsp_chart(
chart: ChartPlotWidget,
flow,
viz,
graphics_name: str,
array_key: Optional[str],
**kwargs,
) -> None:
shm = flow.shm
shm = viz.shm
if not shm:
return
@ -107,7 +112,8 @@ def update_fsp_chart(
# sub-charts reference it under different 'named charts'.
# read from last calculated value and update any label
last_val_sticky = chart._ysticks.get(graphics_name)
last_val_sticky = chart.plotItem.getAxis(
'right')._stickies.get(graphics_name)
if last_val_sticky:
last = last_row[array_key]
last_val_sticky.update_from_data(-1, last)
@ -208,7 +214,7 @@ async def open_fsp_actor_cluster(
async def run_fsp_ui(
linkedsplits: LinkedSplits,
shm: ShmArray,
flume: Flume,
started: trio.Event,
target: Fsp,
conf: dict[str, dict],
@ -245,9 +251,11 @@ async def run_fsp_ui(
else:
chart = linkedsplits.subplots[overlay_with]
shm = flume.rt_shm
chart.draw_curve(
name=name,
shm=shm,
name,
shm,
flume,
overlay=True,
color='default_light',
array_key=name,
@ -257,8 +265,9 @@ async def run_fsp_ui(
else:
# create a new sub-chart widget for this fsp
chart = linkedsplits.add_plot(
name=name,
shm=shm,
name,
shm,
flume,
array_key=name,
sidepane=sidepane,
@ -280,7 +289,7 @@ async def run_fsp_ui(
# first UI update, usually from shm pushed history
update_fsp_chart(
chart,
chart._flows[array_key],
chart.get_viz(array_key),
name,
array_key=array_key,
)
@ -348,6 +357,9 @@ async def run_fsp_ui(
# last = time.time()
# TODO: maybe this should be our ``Viz`` type since it maps
# one flume to the next? The machinery for task/actor mgmt should
# be part of the instantiation API?
class FspAdmin:
'''
Client API for orchestrating FSP actors and displaying
@ -359,7 +371,7 @@ class FspAdmin:
tn: trio.Nursery,
cluster: dict[str, tractor.Portal],
linked: LinkedSplits,
src_shm: ShmArray,
flume: Flume,
) -> None:
self.tn = tn
@ -371,7 +383,11 @@ class FspAdmin:
tuple[tractor.MsgStream, ShmArray]
] = {}
self._flow_registry: dict[_Token, str] = {}
self.src_shm = src_shm
# TODO: make this a `.src_flume` and add
# a `dst_flume`?
# (=> but then wouldn't this be the most basic `Viz`?)
self.flume = flume
def rr_next_portal(self) -> tractor.Portal:
name, portal = next(self._rr_next_actor)
@ -384,7 +400,7 @@ class FspAdmin:
complete: trio.Event,
started: trio.Event,
fqsn: str,
dst_shm: ShmArray,
dst_fsp_flume: Flume,
conf: dict,
target: Fsp,
loglevel: str,
@ -405,9 +421,10 @@ class FspAdmin:
# data feed key
fqsn=fqsn,
# TODO: pass `Flume.to_msg()`s here?
# mems
src_shm_token=self.src_shm.token,
dst_shm_token=dst_shm.token,
src_shm_token=self.flume.rt_shm.token,
dst_shm_token=dst_fsp_flume.rt_shm.token,
# target
ns_path=ns_path,
@ -424,12 +441,14 @@ class FspAdmin:
ctx.open_stream() as stream,
):
dst_fsp_flume.stream: tractor.MsgStream = stream
# register output data
self._registry[
(fqsn, ns_path)
] = (
stream,
dst_shm,
dst_fsp_flume.rt_shm,
complete
)
@ -464,9 +483,9 @@ class FspAdmin:
worker_name: Optional[str] = None,
loglevel: str = 'info',
) -> (ShmArray, trio.Event):
) -> (Flume, trio.Event):
fqsn = self.linked.symbol.front_fqsn()
fqsn = self.flume.symbol.fqsn
# allocate an output shm array
key, dst_shm, opened = maybe_mk_fsp_shm(
@ -474,8 +493,28 @@ class FspAdmin:
target=target,
readonly=True,
)
portal = self.cluster.get(worker_name) or self.rr_next_portal()
provider_tag = portal.channel.uid
symbol = Symbol(
key=key,
broker_info={
provider_tag: {'asset_type': 'fsp'},
},
)
dst_fsp_flume = Flume(
symbol=symbol,
_rt_shm_token=dst_shm.token,
first_quote={},
# set to 0 presuming for now that we can't load
# FSP history (though we should eventually).
izero_hist=0,
izero_rt=0,
)
self._flow_registry[(
self.src_shm._token,
self.flume.rt_shm._token,
target.name
)] = dst_shm._token
@ -484,7 +523,6 @@ class FspAdmin:
# f'Already started FSP `{fqsn}:{func_name}`'
# )
portal = self.cluster.get(worker_name) or self.rr_next_portal()
complete = trio.Event()
started = trio.Event()
self.tn.start_soon(
@ -493,13 +531,13 @@ class FspAdmin:
complete,
started,
fqsn,
dst_shm,
dst_fsp_flume,
conf,
target,
loglevel,
)
return dst_shm, started
return dst_fsp_flume, started
async def open_fsp_chart(
self,
@ -511,7 +549,7 @@ class FspAdmin:
) -> (trio.Event, ChartPlotWidget):
shm, started = await self.start_engine_task(
flume, started = await self.start_engine_task(
target,
conf,
loglevel,
@ -523,7 +561,7 @@ class FspAdmin:
run_fsp_ui,
self.linked,
shm,
flume,
started,
target,
@ -537,7 +575,7 @@ class FspAdmin:
@acm
async def open_fsp_admin(
linked: LinkedSplits,
src_shm: ShmArray,
flume: Flume,
**kwargs,
) -> AsyncGenerator[dict, dict[str, tractor.Portal]]:
@ -558,7 +596,7 @@ async def open_fsp_admin(
tn,
cluster_map,
linked,
src_shm,
flume,
)
try:
yield admin
@ -572,7 +610,7 @@ async def open_fsp_admin(
async def open_vlm_displays(
linked: LinkedSplits,
ohlcv: ShmArray,
flume: Flume,
dvlm: bool = True,
task_status: TaskStatus[ChartPlotWidget] = trio.TASK_STATUS_IGNORED,
@ -594,6 +632,8 @@ async def open_vlm_displays(
sig = inspect.signature(flow_rates.func)
params = sig.parameters
ohlcv: ShmArray = flume.rt_shm
async with (
open_fsp_sidepane(
linked, {
@ -613,7 +653,7 @@ async def open_vlm_displays(
}
},
) as sidepane,
open_fsp_admin(linked, ohlcv) as admin,
open_fsp_admin(linked, flume) as admin,
):
# TODO: support updates
# period_field = sidepane.fields['period']
@ -621,14 +661,21 @@ async def open_vlm_displays(
# str(period_param.default)
# )
# use slightly less light (then bracket) gray
# for volume from "main exchange" and a more "bluey"
# gray for "dark" vlm.
vlm_color = 'i3'
dark_vlm_color = 'charcoal'
# built-in vlm which we plot ASAP since it's
# usually data provided directly with OHLC history.
shm = ohlcv
ohlc_chart = linked.chart
chart = linked.add_plot(
vlm_chart = linked.add_plot(
name='volume',
shm=shm,
flume=flume,
array_key='volume',
sidepane=sidepane,
@ -641,8 +688,12 @@ async def open_vlm_displays(
# the curve item internals are pretty convoluted.
style='step',
)
vlm_chart.view.enable_auto_yrange()
# back-link the volume chart to trigger y-autoranging
# in the ohlc (parent) chart.
ohlc_chart.view.enable_auto_yrange(
src_vb=chart.view,
src_vb=vlm_chart.view,
)
# force 0 to always be in view
@ -651,7 +702,7 @@ async def open_vlm_displays(
) -> tuple[float, float]:
'''
Flows "group" maxmin loop; assumes all named flows
Viz "group" maxmin loop; assumes all named flows
are in the same co-domain and thus can be sorted
as one set.
@ -664,7 +715,7 @@ async def open_vlm_displays(
'''
mx = 0
for name in names:
ymn, ymx = chart.maxmin(name=name)
ymn, ymx = vlm_chart.maxmin(name=name)
mx = max(mx, ymx)
return 0, mx
@ -672,40 +723,40 @@ async def open_vlm_displays(
# TODO: fix the x-axis label issue where if you put
# the axis on the left it's totally not lined up...
# show volume units value on LHS (for dinkus)
# chart.hideAxis('right')
# chart.showAxis('left')
# vlm_chart.hideAxis('right')
# vlm_chart.showAxis('left')
# send back new chart to caller
task_status.started(chart)
task_status.started(vlm_chart)
# should **not** be the same sub-chart widget
assert chart.name != linked.chart.name
assert vlm_chart.name != linked.chart.name
# sticky only on sub-charts atm
last_val_sticky = chart._ysticks[chart.name]
last_val_sticky = vlm_chart.plotItem.getAxis(
'right')._stickies.get(vlm_chart.name)
# read from last calculated value
value = shm.array['volume'][-1]
last_val_sticky.update_from_data(-1, value)
vlm_curve = chart.update_graphics_from_flow(
vlm_curve = vlm_chart.update_graphics_from_flow(
'volume',
# shm.array,
)
# size view to data once at outset
chart.view._set_yrange()
vlm_chart.view._set_yrange()
# add axis title
axis = chart.getAxis('right')
axis = vlm_chart.getAxis('right')
axis.set_title(' vlm')
if dvlm:
tasks_ready = []
# spawn and overlay $ vlm on the same subchart
dvlm_shm, started = await admin.start_engine_task(
dvlm_flume, started = await admin.start_engine_task(
dolla_vlm,
{ # fsp engine conf
@ -724,7 +775,7 @@ async def open_vlm_displays(
# FIXME: we should error on starting the same fsp right
# since it might collide with existing shm.. or wait we
# had this before??
# dolla_vlm,
# dolla_vlm
tasks_ready.append(started)
# profiler(f'created shm for fsp actor: {display_name}')
@ -738,22 +789,27 @@ async def open_vlm_displays(
# XXX: the main chart already contains a vlm "units" axis
# so here we add an overlay wth a y-range in
# $ liquidity-value units (normally a fiat like USD).
dvlm_pi = chart.overlay_plotitem(
dvlm_pi = vlm_chart.overlay_plotitem(
'dolla_vlm',
index=0, # place axis on inside (nearest to chart)
axis_title=' $vlm',
axis_side='right',
axis_side='left',
axis_kwargs={
'typical_max_str': ' 100.0 M ',
'formatter': partial(
humanize,
digits=2,
),
'text_color': vlm_color,
},
)
dvlm_pi.hideAxis('left')
# TODO: should this maybe be implicit based on input args to
# `.overlay_plotitem()` above?
dvlm_pi.hideAxis('bottom')
# all to be overlayed curve names
fields = [
'dolla_vlm',
@ -778,17 +834,12 @@ async def open_vlm_displays(
# add custom auto range handler
dvlm_pi.vb._maxmin = group_mxmn
# use slightly less light (then bracket) gray
# for volume from "main exchange" and a more "bluey"
# gray for "dark" vlm.
vlm_color = 'i3'
dark_vlm_color = 'charcoal'
# add dvlm (step) curves to common view
def chart_curves(
names: list[str],
pi: pg.PlotItem,
shm: ShmArray,
flume: Flume,
step_mode: bool = False,
style: str = 'solid',
@ -802,9 +853,13 @@ async def open_vlm_displays(
else:
color = 'bracket'
curve, _ = chart.draw_curve(
name=name,
shm=shm,
assert isinstance(shm, ShmArray)
assert isinstance(flume, Flume)
viz = vlm_chart.draw_curve(
name,
shm,
flume,
array_key=name,
overlay=pi,
color=color,
@ -812,29 +867,24 @@ async def open_vlm_displays(
style=style,
pi=pi,
)
# TODO: we need a better API to do this..
# specially store ref to shm for lookup in display loop
# since only a placeholder of `None` is entered in
# ``.draw_curve()``.
flow = chart._flows[name]
assert flow.plot is pi
assert viz.plot is pi
chart_curves(
fields,
dvlm_pi,
dvlm_shm,
dvlm_flume.rt_shm,
dvlm_flume,
step_mode=True,
)
# spawn flow rates fsp **ONLY AFTER** the 'dolla_vlm' fsp is
# up since this one depends on it.
fr_shm, started = await admin.start_engine_task(
fr_flume, started = await admin.start_engine_task(
flow_rates,
{ # fsp engine conf
'func_name': 'flow_rates',
'zero_on_step': False,
'zero_on_step': True,
},
# loglevel,
)
@ -843,7 +893,7 @@ async def open_vlm_displays(
# chart_curves(
# dvlm_rate_fields,
# dvlm_pi,
# fr_shm,
# fr_flume.rt_shm,
# )
# TODO: is there a way to "sync" the dual axes such that only
@ -852,24 +902,24 @@ async def open_vlm_displays(
# displayed and the curves are effectively the same minus
# liquidity events (well at least on low OHLC periods - 1s).
vlm_curve.hide()
chart.removeItem(vlm_curve)
vflow = chart._flows['volume']
vflow.render = False
vlm_chart.removeItem(vlm_curve)
vlm_viz = vlm_chart._vizs['volume']
vlm_viz.render = False
# avoid range sorting on volume once disabled
chart.view.disable_auto_yrange()
vlm_chart.view.disable_auto_yrange()
# Trade rate overlay
# XXX: requires an additional overlay for
# a trades-per-period (time) y-range.
tr_pi = chart.overlay_plotitem(
tr_pi = vlm_chart.overlay_plotitem(
'trade_rates',
# TODO: dynamically update period (and thus this axis?)
# title from user input.
axis_title='clears',
axis_side='left',
axis_kwargs={
'typical_max_str': ' 10.0 M ',
'formatter': partial(
@ -891,7 +941,8 @@ async def open_vlm_displays(
chart_curves(
trade_rate_fields,
tr_pi,
fr_shm,
fr_flume.rt_shm,
fr_flume,
# step_mode=True,
# dashed line to represent "individual trades" being
@ -925,7 +976,7 @@ async def open_vlm_displays(
async def start_fsp_displays(
linked: LinkedSplits,
ohlcv: ShmArray,
flume: Flume,
group_status_key: str,
loglevel: str,
@ -968,7 +1019,10 @@ async def start_fsp_displays(
async with (
# NOTE: this admin internally opens an actor cluster
open_fsp_admin(linked, ohlcv) as admin,
open_fsp_admin(
linked,
flume,
) as admin,
):
statuses = []
for target, conf in fsp_conf.items():


@ -76,7 +76,6 @@ async def handle_viewmode_kb_inputs(
pressed: set[str] = set()
last = time.time()
trigger_mode: str
action: str
on_next_release: Optional[Callable] = None
@ -468,7 +467,6 @@ class ChartView(ViewBox):
self,
ev,
axis=None,
# relayed_from: ChartView = None,
):
'''
Override "center-point" location for scrolling.
@ -483,7 +481,6 @@ class ChartView(ViewBox):
if (
not linked
):
# print(f'{self.name} not linked but relay from {relayed_from.name}')
return
if axis in (0, 1):
@ -495,18 +492,19 @@ class ChartView(ViewBox):
chart = self.linked.chart
# don't zoom more then the min points setting
l, lbar, rbar, r = chart.bars_range()
# vl = r - l
viz = chart.get_viz(chart.name)
vl, lbar, rbar, vr = viz.bars_range()
# if ev.delta() > 0 and vl <= _min_points_to_show:
# log.debug("Max zoom bruh...")
# TODO: max/min zoom limits incorporating time step size.
# rl = vr - vl
# if ev.delta() > 0 and rl <= _min_points_to_show:
# log.warning("Max zoom bruh...")
# return
# if (
# ev.delta() < 0
# and vl >= len(chart._flows[chart.name].shm.array) + 666
# and rl >= len(chart._vizs[chart.name].shm.array) + 666
# ):
# log.debug("Min zoom bruh...")
# log.warning("Min zoom bruh...")
# return
# actual scaling factor
@ -537,49 +535,17 @@ class ChartView(ViewBox):
self.scaleBy(s, center)
else:
# center = pg.Point(
# fn.invertQTransform(self.childGroup.transform()).map(ev.pos())
# )
# XXX: scroll "around" the right most element in the view
# which stays "pinned" in place.
# furthest_right_coord = self.boundingRect().topRight()
# yaxis = pg.Point(
# fn.invertQTransform(
# self.childGroup.transform()
# ).map(furthest_right_coord)
# )
# This seems like the most "intuitive" option, a hybrid of
# tws and tv styles.
last_bar = pg.Point(int(rbar)) + 1
ryaxis = chart.getAxis('right')
r_axis_x = ryaxis.pos().x()
end_of_l1 = pg.Point(
round(
chart.cv.mapToView(
pg.Point(r_axis_x - chart._max_l1_line_len)
# QPointF(chart._max_l1_line_len, 0)
).x()
)
) # .x()
# self.state['viewRange'][0][1] = end_of_l1
# focal = pg.Point((last_bar.x() + end_of_l1)/2)
# use right-most point of current curve graphic
xl = viz.graphics.x_last()
focal = min(
last_bar,
end_of_l1,
key=lambda p: p.x()
xl,
vr,
)
# focal = pg.Point(last_bar.x() + end_of_l1)
self._resetTarget()
# NOTE: scroll "around" the right most datum-element in view
# gives the feeling of staying "pinned" in place.
self.scaleBy(s, focal)
# XXX: the order of the next 2 lines i'm pretty sure
@ -605,21 +571,8 @@ class ChartView(ViewBox):
self,
ev,
axis: Optional[int] = None,
# relayed_from: ChartView = None,
) -> None:
# if relayed_from:
# print(f'PAN: {self.name} -> RELAYED FROM: {relayed_from.name}')
# NOTE since in the overlay case axes are already
# "linked" any x-range change will already be mirrored
# in all overlaid ``PlotItems``, so we need to simply
# ignore the signal here since otherwise we get N-calls
# from N-overlays resulting in an "accelerated" feeling
# panning motion instead of the expect linear shift.
# if relayed_from:
# return
pos = ev.pos()
lastPos = ev.lastPos()
dif = pos - lastPos
@ -689,9 +642,6 @@ class ChartView(ViewBox):
# PANNING MODE
else:
# XXX: WHY
ev.accept()
try:
self.start_ic()
except RuntimeError:
@ -723,6 +673,9 @@ class ChartView(ViewBox):
# self._ic = None
# self.chart.resume_all_feeds()
# XXX: WHY
ev.accept()
# WEIRD "RIGHT-CLICK CENTER ZOOM" MODE
elif button & QtCore.Qt.RightButton:
@ -768,7 +721,11 @@ class ChartView(ViewBox):
*,
yrange: Optional[tuple[float, float]] = None,
range_margin: float = 0.06,
# NOTE: this value pairs (more or less) with L1 label text
# height offset from the bid/ask lines.
range_margin: float = 0.09,
bars_range: Optional[tuple[int, int, int, int]] = None,
# flag to prevent triggering sibling charts from the same linked
@ -821,7 +778,7 @@ class ChartView(ViewBox):
# XXX: only compute the mxmn range
# if none is provided as input!
if not yrange:
# flow = chart._flows[name]
# flow = chart._vizs[name]
yrange = self._maxmin()
if yrange is None:
@ -912,7 +869,7 @@ class ChartView(ViewBox):
graphics items which are our children.
'''
graphics = [f.graphics for f in self._chart._flows.values()]
graphics = [f.graphics for f in self._chart._vizs.values()]
if not graphics:
return 0
@ -925,7 +882,7 @@ class ChartView(ViewBox):
def maybe_downsample_graphics(
self,
autoscale_overlays: bool = True,
autoscale_overlays: bool = False,
):
profiler = Profiler(
msg=f'ChartView.maybe_downsample_graphics() for {self.name}',
@ -948,7 +905,7 @@ class ChartView(ViewBox):
plots |= linked.subplots
for chart_name, chart in plots.items():
for name, flow in chart._flows.items():
for name, flow in chart._vizs.items():
if (
not flow.render
@ -961,10 +918,7 @@ class ChartView(ViewBox):
# pass in no array which will read and render from the last
# passed array (normally provided by the display loop.)
chart.update_graphics_from_flow(
name,
use_vr=True,
)
chart.update_graphics_from_flow(name)
# for each overlay on this chart auto-scale the
# y-range to max-min values.


@ -26,22 +26,24 @@ from PyQt5.QtCore import QPointF
from ._axes import YAxisLabel
from ._style import hcolor
from ._pg_overrides import PlotItem
class LevelLabel(YAxisLabel):
"""Y-axis (vertically) oriented, horizontal label that sticks to
'''
Y-axis (vertically) oriented, horizontal label that sticks to
where it's placed despite chart resizing and supports displaying
multiple fields.
TODO: replace the rectangle-text part with our new ``Label`` type.
"""
_x_margin = 0
_y_margin = 0
'''
_x_br_offset: float = -16
_y_txt_h_scaling: float = 2
# adjustment "further away from" anchor point
_x_offset = 9
_x_offset = 0
_y_offset = 0
# fields to be displayed in the label string
@ -57,12 +59,12 @@ class LevelLabel(YAxisLabel):
chart,
parent,
color: str = 'bracket',
color: str = 'default_light',
orient_v: str = 'bottom',
orient_h: str = 'left',
orient_h: str = 'right',
opacity: float = 0,
opacity: float = 1,
# makes order line labels offset from their parent axis
# such that they don't collide with the L1/L2 lines/prices
@ -98,13 +100,15 @@ class LevelLabel(YAxisLabel):
self._h_shift = {
'left': -1.,
'right': 0.
'right': 0.,
}[orient_h]
self.fields = self._fields.copy()
# ensure default format fields are in correct
self.set_fmt_str(self._fmt_str, self.fields)
self.setZValue(10)
@property
def color(self):
return self._hcolor
@ -112,7 +116,10 @@ class LevelLabel(YAxisLabel):
@color.setter
def color(self, color: str) -> None:
self._hcolor = color
self._pen = self.pen = pg.mkPen(hcolor(color))
self._pen = self.pen = pg.mkPen(
hcolor(color),
width=3,
)
def update_on_resize(self, vr, r):
"""Tiis is a ``.sigRangeChanged()`` handler.
@ -124,15 +131,16 @@ class LevelLabel(YAxisLabel):
self,
fields: dict = None,
) -> None:
"""Update the label's text contents **and** position from
'''
Update the label's text contents **and** position from
a view box coordinate datum.
"""
'''
self.fields.update(fields)
level = self.fields['level']
# map "level" to local coords
abs_xy = self._chart.mapFromView(QPointF(0, level))
abs_xy = self._pi.mapFromView(QPointF(0, level))
self.update_label(
abs_xy,
@ -149,7 +157,7 @@ class LevelLabel(YAxisLabel):
h, w = self.set_label_str(fields)
if self._adjust_to_l1:
self._x_offset = self._chart._max_l1_line_len
self._x_offset = self._pi.chart_widget._max_l1_line_len
self.setPos(QPointF(
self._h_shift * (w + self._x_offset),
@ -174,7 +182,8 @@ class LevelLabel(YAxisLabel):
fields: dict,
):
# use space as e3 delim
self.label_str = self._fmt_str.format(**fields).replace(',', ' ')
self.label_str = self._fmt_str.format(
**fields).replace(',', ' ')
br = self.boundingRect()
h, w = br.height(), br.width()
@ -187,14 +196,14 @@ class LevelLabel(YAxisLabel):
self,
p: QtGui.QPainter,
rect: QtCore.QRectF
) -> None:
p.setPen(self._pen)
rect = self.rect
if self._orient_v == 'bottom':
lp, rp = rect.topLeft(), rect.topRight()
# p.drawLine(rect.topLeft(), rect.topRight())
elif self._orient_v == 'top':
lp, rp = rect.bottomLeft(), rect.bottomRight()
@ -208,6 +217,11 @@ class LevelLabel(YAxisLabel):
])
)
p.fillRect(
self.rect,
self.bg_color,
)
def highlight(self, pen) -> None:
self._pen = pen
self.update()
@ -236,43 +250,46 @@ class L1Label(LevelLabel):
# Set a global "max L1 label length" so we can
# look it up on order lines and adjust their
# labels not to overlap with it.
chart = self._chart
chart = self._pi.chart_widget
chart._max_l1_line_len: float = max(
chart._max_l1_line_len,
w
w,
)
return h, w
class L1Labels:
"""Level 1 bid ask labels for dynamic update on price-axis.
'''
Level 1 bid ask labels for dynamic update on price-axis.
"""
'''
def __init__(
self,
chart: 'ChartPlotWidget', # noqa
plotitem: PlotItem,
digits: int = 2,
size_digits: int = 3,
font_size: str = 'small',
) -> None:
self.chart = chart
chart = self.chart = plotitem.chart_widget
raxis = chart.getAxis('right')
raxis = plotitem.getAxis('right')
kwargs = {
'chart': chart,
'chart': plotitem,
'parent': raxis,
'opacity': 1,
'opacity': .9,
'font_size': font_size,
'fg_color': chart.pen_color,
'bg_color': chart.view_color,
'fg_color': 'default_light',
'bg_color': chart.view_color, # normally 'papas_special'
}
# TODO: add humanized source-asset
# info format.
fmt_str = (
' {size:.{size_digits}f} x '
'{level:,.{level_digits}f} '
' {size:.{size_digits}f} u'
# '{level:,.{level_digits}f} '
)
fields = {
'level': 0,
@ -285,12 +302,17 @@ class L1Labels:
orient_v='bottom',
**kwargs,
)
bid.set_fmt_str(fmt_str=fmt_str, fields=fields)
bid.set_fmt_str(
fmt_str='\n' + fmt_str,
fields=fields,
)
bid.show()
ask = self.ask_label = L1Label(
orient_v='top',
**kwargs,
)
ask.set_fmt_str(fmt_str=fmt_str, fields=fields)
ask.set_fmt_str(
fmt_str=fmt_str,
fields=fields)
ask.show()


@ -233,6 +233,36 @@ class Label:
def delete(self) -> None:
self.vb.scene().removeItem(self.txt)
# NOTE: pulled out from ``ChartPlotWidget`` from way way old code.
# def _label_h(self, yhigh: float, ylow: float) -> float:
# # compute contents label "height" in view terms
# # to avoid having data "contents" overlap with them
# if self._labels:
# label = self._labels[self.name][0]
# rect = label.itemRect()
# tl, br = rect.topLeft(), rect.bottomRight()
# vb = self.plotItem.vb
# try:
# # on startup labels might not yet be rendered
# top, bottom = (vb.mapToView(tl).y(), vb.mapToView(br).y())
# # XXX: magic hack, how do we compute exactly?
# label_h = (top - bottom) * 0.42
# except np.linalg.LinAlgError:
# label_h = 0
# else:
# label_h = 0
# # print(f'label height {self.name}: {label_h}')
# if label_h > yhigh - ylow:
# label_h = 0
# print(f"bounds (ylow, yhigh): {(ylow, yhigh)}")
class FormatLabel(QLabel):
'''


@ -25,10 +25,18 @@ from typing import (
import numpy as np
import pyqtgraph as pg
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QLineF, QPointF
from PyQt5 import (
QtGui,
QtWidgets,
)
from PyQt5.QtCore import (
QLineF,
QRectF,
)
from PyQt5.QtGui import QPainterPath
from ._curve import FlowGraphic
from .._profile import pg_profile_enabled, ms_slower_then
from ._style import hcolor
from ..log import get_logger
@ -44,7 +52,8 @@ log = get_logger(__name__)
def bar_from_ohlc_row(
row: np.ndarray,
# 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.43
bar_w: float,
bar_gap: float = 0.16
) -> tuple[QLineF]:
'''
@ -52,8 +61,7 @@ def bar_from_ohlc_row(
OHLC "bar" for use in the "last datum" of a series.
'''
open, high, low, close, index = row[
['open', 'high', 'low', 'close', 'index']]
open, high, low, close, index = row
# TODO: maybe consider using `QGraphicsLineItem` ??
# gives us a ``.boundingRect()`` on the objects which may make
@ -61,9 +69,11 @@ def bar_from_ohlc_row(
# history path faster since it's done in C++:
# https://doc.qt.io/qt-5/qgraphicslineitem.html
mid: float = (bar_w / 2) + index
# high -> low vertical (body) line
if low != high:
hl = QLineF(index, low, index, high)
hl = QLineF(mid, low, mid, high)
else:
# XXX: if we don't do it renders a weird rectangle?
# see below for filtering this later...
@ -74,15 +84,18 @@ def bar_from_ohlc_row(
# the index's range according to the view mapping coordinates.
# open line
o = QLineF(index - w, open, index, open)
o = QLineF(index + bar_gap, open, mid, open)
# close line
c = QLineF(index, close, index + w, close)
c = QLineF(
mid, close,
index + bar_w - bar_gap, close,
)
return [hl, o, c]
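To make the new ``bar_w``/``bar_gap`` geometry concrete, a worked example with invented values for a 60s (1m) epoch-indexed bar:

row = (10.0, 12.0, 9.5, 11.0, 1000.0)  # (open, high, low, close, index)
hl, o, c = bar_from_ohlc_row(row, bar_w=60, bar_gap=0.16 * 60)
# mid = 60/2 + 1000 = 1030.0
# hl: (1030.0, 9.5) -> (1030.0, 12.0)   high-low body line
# o:  (1009.6, 10.0) -> (1030.0, 10.0)  open arm
# c:  (1030.0, 11.0) -> (1050.4, 11.0)  close arm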
class BarItems(pg.GraphicsObject):
class BarItems(FlowGraphic):
'''
"Price range" bars graphics rendered from a OHLC sampled sequence.
@ -91,8 +104,8 @@ class BarItems(pg.GraphicsObject):
self,
linked: LinkedSplits,
plotitem: 'pg.PlotItem', # noqa
pen_color: str = 'bracket',
last_bar_color: str = 'bracket',
color: str = 'bracket',
last_bar_color: str = 'original',
name: Optional[str] = None,
@ -101,21 +114,37 @@ class BarItems(pg.GraphicsObject):
self.linked = linked
# XXX: for the mega-lulz increasing width here increases draw
# latency... so probably don't do it until we figure that out.
self._color = pen_color
self.bars_pen = pg.mkPen(hcolor(pen_color), width=1)
self._color = color
self.bars_pen = pg.mkPen(hcolor(color), width=1)
self.last_bar_pen = pg.mkPen(hcolor(last_bar_color), width=2)
self._name = name
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
# XXX: causes this weird jitter bug when click-drag panning
# where the path curve will awkwardly flicker back and forth?
# self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self.path = QPainterPath()
self._last_bar_lines: Optional[tuple[QLineF, ...]] = None
self._last_bar_lines: tuple[QLineF, ...] | None = None
def x_uppx(self) -> int:
# we expect the downsample curve to report this.
return 0
def x_last(self) -> None | float:
'''
Return the last most x value of the close line segment
or if not drawn yet, ``None``.
'''
if self._last_bar_lines:
close_arm_line = self._last_bar_lines[-1]
return close_arm_line.x2() if close_arm_line else None
else:
return None
# Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect
def boundingRect(self):
# profiler = Profiler(
# msg=f'BarItems.boundingRect(): `{self._name}`',
# disabled=not pg_profile_enabled(),
# ms_threshold=ms_slower_then,
# )
# TODO: Can we do rect caching to make this faster
# like `pg.PlotCurveItem` does? In theory it's just
@ -135,32 +164,37 @@ class BarItems(pg.GraphicsObject):
hb.topLeft(),
hb.bottomRight(),
)
mn_y = hb_tl.y()
mx_y = hb_br.y()
most_left = hb_tl.x()
most_right = hb_br.x()
# profiler('calc path vertices')
# need to include last bar height or BR will be off
mx_y = hb_br.y()
mn_y = hb_tl.y()
last_lines = self._last_bar_lines
# OHLC line segments: [hl, o, c]
last_lines: tuple[QLineF] | None = self._last_bar_lines
if last_lines:
body_line = self._last_bar_lines[0]
if body_line:
mx_y = max(mx_y, max(body_line.y1(), body_line.y2()))
mn_y = min(mn_y, min(body_line.y1(), body_line.y2()))
(
hl,
o,
c,
) = last_lines
most_right = c.x2() + 1
ymx = ymn = c.y2()
if hl:
y1, y2 = hl.y1(), hl.y2()
ymn = min(y1, y2)
ymx = max(y1, y2)
mx_y = max(ymx, mx_y)
mn_y = min(ymn, mn_y)
# profiler('calc last bar vertices')
return QtCore.QRectF(
# top left
QPointF(
hb_tl.x(),
mn_y,
),
# bottom right
QPointF(
hb_br.x() + 1,
mx_y,
)
)
return QRectF(
most_left,
mn_y,
most_right - most_left + 1,
mx_y - mn_y,
)
def paint(
@ -197,29 +231,40 @@ class BarItems(pg.GraphicsObject):
self,
path: QPainterPath,
src_data: np.ndarray,
render_data: np.ndarray,
reset: bool,
array_key: str,
fields: list[str] = [
'index',
'open',
'high',
'low',
'close',
],
index_field: str,
) -> None:
# relevant fields
fields: list[str] = [
'open',
'high',
'low',
'close',
index_field,
]
ohlc = src_data[fields]
last_row = ohlc[-1:]
# last_row = ohlc[-1:]
# individual values
last_row = i, o, h, l, last = ohlc[-1]
last_row = o, h, l, last, i = ohlc[-1]
# times = src_data['time']
# if times[-1] - times[-2]:
# breakpoint()
index = src_data[index_field]
step_size = index[-1] - index[-2]
# generate new lines objects for updatable "current bar"
self._last_bar_lines = bar_from_ohlc_row(last_row)
bg: float = 0.16 * step_size
self._last_bar_lines = bar_from_ohlc_row(
last_row,
bar_w=step_size,
bar_gap=bg,
)
# assert i == graphics.start_index - 1
# assert i == last_index
@ -234,10 +279,16 @@ class BarItems(pg.GraphicsObject):
if l != h: # noqa
if body is None:
body = self._last_bar_lines[0] = QLineF(i, l, i, h)
body = self._last_bar_lines[0] = QLineF(
i + bg, l,
i + step_size - bg, h,
)
else:
# update body
body.setLine(i, l, i, h)
body.setLine(
body.x1(), l,
body.x2(), h,
)
# XXX: pretty sure this is causing an issue where the
# bar has a large upward move right before the next
@ -248,4 +299,5 @@ class BarItems(pg.GraphicsObject):
# date / from some previous sample. It's weird though
# because i've seen it do this to bars i - 3 back?
return ohlc['index'], ohlc['close']
# return ohlc['time'], ohlc['close']
return ohlc[index_field], ohlc['close']


@ -92,11 +92,11 @@ class ComposedGridLayout:
'''
def __init__(
self,
item: PlotItem,
pi: PlotItem,
) -> None:
self.items: list[PlotItem] = []
self.pitems: list[PlotItem] = []
self._pi2axes: dict[ # TODO: use a ``bidict`` here?
int,
dict[str, AxisItem],
@ -125,7 +125,7 @@ class ComposedGridLayout:
layout.setOrientation(orient)
self.insert_plotitem(0, item)
self.insert_plotitem(0, pi)
# insert surrounding linear layouts into the parent pi's layout
# such that additional axes can be appended arbitrarily without
@ -135,13 +135,14 @@ class ComposedGridLayout:
# TODO: do we need this?
# axis should have been removed during insert above
index = _axes_layout_indices[name]
axis = item.layout.itemAt(*index)
axis = pi.layout.itemAt(*index)
if axis and axis.isVisible():
assert linlayout.itemAt(0) is axis
# item.layout.removeItem(axis)
item.layout.addItem(linlayout, *index)
layout = item.layout.itemAt(*index)
# XXX: see comment in ``.insert_plotitem()``...
# pi.layout.removeItem(axis)
pi.layout.addItem(linlayout, *index)
layout = pi.layout.itemAt(*index)
assert layout is linlayout
def _register_item(
@ -157,14 +158,14 @@ class ComposedGridLayout:
self._pi2axes.setdefault(name, {})[index] = axis
# enter plot into list for index tracking
self.items.insert(index, plotitem)
self.pitems.insert(index, plotitem)
def insert_plotitem(
self,
index: int,
plotitem: PlotItem,
) -> (int, int):
) -> tuple[int, list[AxisItem]]:
'''
Place item at index by inserting all axes into the grid
at list-order appropriate position.
@ -175,11 +176,14 @@ class ComposedGridLayout:
'`.insert_plotitem()` only supports an index >= 0'
)
inserted_axes: list[AxisItem] = []
# add plot's axes in sequence to the embedded linear layouts
# for each "side" thus avoiding graphics collisions.
for name, axis_info in plotitem.axes.copy().items():
linlayout, axes = self.sides[name]
axis = axis_info['item']
inserted_axes.append(axis)
if axis in axes:
# TODO: re-order using ``.pop()`` ?
@ -192,19 +196,20 @@ class ComposedGridLayout:
if (
not axis.isVisible()
# XXX: we never skip moving the axes for the *first*
# XXX: we never skip moving the axes for the *root*
# plotitem inserted (even if not shown) since we need to
# move all the hidden axes into linear sub-layouts for
# that "central" plot in the overlay. Also if we don't
# do it there's weird geometry calc offsets that make
# view coords slightly off somehow .. smh
and not len(self.items) == 0
and not len(self.pitems) == 0
):
continue
# XXX: Remove old axis? No, turns out we don't need this?
# DON'T unlink it since we the original ``ViewBox``
# to still drive it B)
# XXX: Remove old axis?
# No, turns out we don't need this?
# DON'T UNLINK IT since we need the original ``ViewBox`` to
# still drive it with events/handlers B)
# popped = plotitem.removeAxis(name, unlink=False)
# assert axis is popped
@ -220,7 +225,7 @@ class ComposedGridLayout:
self._register_item(index, plotitem)
return index
return (index, inserted_axes)
def append_plotitem(
self,
@ -234,7 +239,7 @@ class ComposedGridLayout:
'''
# for left and bottom axes we have to first remove
# items and re-insert to maintain a list-order.
return self.insert_plotitem(len(self.items), item)
return self.insert_plotitem(len(self.pitems), item)
def get_axis(
self,
@ -247,7 +252,7 @@ class ComposedGridLayout:
if axis for that name is not shown.
'''
index = self.items.index(plot)
index = self.pitems.index(plot)
named = self._pi2axes[name]
return named.get(index)
@ -306,10 +311,13 @@ class PlotItemOverlay:
# events/signals.
root_plotitem.vb.setZValue(10)
self.overlays: list[PlotItem] = []
self.layout = ComposedGridLayout(root_plotitem)
self._relays: dict[str, Signal] = {}
@property
def overlays(self) -> list[PlotItem]:
return self.layout.pitems
def add_plotitem(
self,
plotitem: PlotItem,
@ -324,11 +332,9 @@ class PlotItemOverlay:
# (0, 1), # link both
link_axes: tuple[int] = (),
) -> None:
) -> tuple[int, list[AxisItem]]:
index = index or len(self.overlays)
root = self.root_plotitem
self.overlays.insert(index, plotitem)
vb: ViewBox = plotitem.vb
# TODO: some sane way to allow menu event broadcast XD
@ -476,7 +482,10 @@ class PlotItemOverlay:
# ``PlotItem`` dynamically.
# append-compose into the layout all axes from this plot
self.layout.insert_plotitem(index, plotitem)
if index is None:
insert_index, axes = self.layout.append_plotitem(plotitem)
else:
insert_index, axes = self.layout.insert_plotitem(index, plotitem)
plotitem.setGeometry(root.vb.sceneBoundingRect())
@ -496,6 +505,11 @@ class PlotItemOverlay:
vb.setZValue(100)
return (
index,
axes,
)
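As a quick reference, a minimal usage sketch of the updated overlay API; ``root`` and ``pi`` here are hypothetical, pre-constructed ``PlotItem`` instances and the method names follow the definitions above:

```python
# hypothetical: ``root`` and ``pi`` are pre-built PlotItems
overlay = PlotItemOverlay(root)

# no explicit index -> append-compose; the returned ``axes`` are
# this plot's AxisItems now re-parented into the grid layout
index, axes = overlay.add_plotitem(pi)

# ``overlays`` is now a read-only view onto the layout's plot list
assert pi in overlay.overlays
```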
def get_axis(
self,
plot: PlotItem,

View File

@ -1,241 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Super fast ``QPainterPath`` generation related operator routines.
"""
from __future__ import annotations
from typing import (
# Optional,
TYPE_CHECKING,
)
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import njit, float64, int64 # , optional
# import pyqtgraph as pg
from PyQt5 import QtGui
# from PyQt5.QtCore import QLineF, QPointF
from ..data._sharedmem import (
ShmArray,
)
# from .._profile import pg_profile_enabled, ms_slower_then
from ._compression import (
ds_m4,
)
if TYPE_CHECKING:
from ._flows import Renderer
def xy_downsample(
x,
y,
uppx,
x_spacer: float = 0.5,
) -> tuple[
np.ndarray,
np.ndarray,
float,
float,
]:
# downsample whenever more than 1 pixel per datum can be shown.
# always refresh data bounds until we get diffing
# working properly, see above..
bins, x, y, ymn, ymx = ds_m4(
x,
y,
uppx,
)
# flatten output to 1d arrays suitable for path-graphics generation.
x = np.broadcast_to(x[:, None], y.shape)
x = (x + np.array(
[-x_spacer, 0, 0, x_spacer]
)).flatten()
y = y.flatten()
return x, y, ymn, ymx
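A standalone sketch of the broadcast-and-offset flatten step above, assuming ``ds_m4()`` yielded a ``y`` with 4 columns per downsample bin:

```python
import numpy as np

x = np.arange(3, dtype=float)                  # 3 m4 bins
y = np.arange(12, dtype=float).reshape(3, 4)   # 4 y-points per bin

# pair every bin's x with its 4 y-points, nudging the first/last
# by ``x_spacer`` so the 4 points don't stack on one x-coord
x2d = np.broadcast_to(x[:, None], y.shape)
x_flat = (x2d + np.array([-0.5, 0, 0, 0.5])).flatten()
y_flat = y.flatten()
assert x_flat.size == y_flat.size == 12
```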
@njit(
# TODO: for now need to construct this manually for readonly arrays, see
# https://github.com/numba/numba/issues/4511
# ntypes.tuple((float64[:], float64[:], float64[:]))(
# numba_ohlc_dtype[::1], # contiguous
# int64,
# optional(float64),
# ),
nogil=True
)
def path_arrays_from_ohlc(
data: np.ndarray,
start: int64,
bar_gap: float64 = 0.43,
) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
'''
Generate arrays of line-segment vertices (x, y, connect) from input ohlc data.
'''
size = int(data.shape[0] * 6)
x = np.zeros(
# data,
shape=size,
dtype=float64,
)
y, c = x.copy(), x.copy()
# TODO: report bug for assert @
# /home/goodboy/repos/piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
for i, q in enumerate(data[start:], start):
# TODO: ask numba why this doesn't work..
# open, high, low, close, index = q[
# ['open', 'high', 'low', 'close', 'index']]
open = q['open']
high = q['high']
low = q['low']
close = q['close']
index = float64(q['index'])
istart = i * 6
istop = istart + 6
# x,y detail the 6 points which connect all vertices of an ohlc bar
x[istart:istop] = (
index - bar_gap,
index,
index,
index,
index,
index + bar_gap,
)
y[istart:istop] = (
open,
open,
low,
high,
close,
close,
)
# specifies that the first edge is never connected to the
# prior bar's last edge thus providing a small "gap"/"space"
# between bars determined by ``bar_gap``.
c[istart:istop] = (1, 1, 1, 1, 1, 0)
return x, y, c
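The 6-vertex-per-bar layout can be seen in isolation; a pure-numpy sketch of one bar and its connect mask:

```python
import numpy as np

# one bar at index 5: open=1.0, high=2.0, low=0.5, close=1.5
i, o, h, l, c, gap = 5.0, 1.0, 2.0, 0.5, 1.5, 0.43

x = np.array([i - gap, i, i, i, i, i + gap])
y = np.array([o, o, l, h, c, c])

# a 1 joins vertex k to k+1; the trailing 0 is the inter-bar gap
connect = np.array([1, 1, 1, 1, 1, 0])
```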
def gen_ohlc_qpath(
r: Renderer,
data: np.ndarray,
array_key: str, # we ignore this
vr: tuple[int, int],
start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.43,
) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
'''
More or less direct proxy to ``path_arrays_from_ohlc()``
but with baked-in kwargs for line spacing.
'''
x, y, c = path_arrays_from_ohlc(
data,
start,
bar_gap=w,
)
return x, y, c
def ohlc_to_line(
ohlc_shm: ShmArray,
data_field: str,
fields: list[str] = ['open', 'high', 'low', 'close']
) -> tuple[
np.ndarray,
np.ndarray,
]:
'''
Convert an input struct-array holding OHLC samples into a pair of
flattened x, y arrays with the same size (datum-wise) as the source
data.
'''
y_out = ohlc_shm.ustruct(fields)
first = ohlc_shm._first.value
last = ohlc_shm._last.value
# write pushed data to flattened copy
y_out[first:last] = rfn.structured_to_unstructured(
ohlc_shm.array[fields]
)
# generate a flat-interpolated x-domain
x_out = (
np.broadcast_to(
ohlc_shm._array['index'][:, None],
(
ohlc_shm._array.size,
# 4, # only ohlc
y_out.shape[1],
),
) + np.array([-0.5, 0, 0, 0.5])
)
assert y_out.any()
return (
x_out,
y_out,
)
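A self-contained sketch of the same struct-array flattening, using plain numpy arrays in place of the shm buffers:

```python
import numpy as np
from numpy.lib import recfunctions as rfn

ohlc = np.array(
    [(0, 1.0, 3.0, 0.5, 2.0), (1, 2.0, 4.0, 1.5, 3.0)],
    dtype=[('index', 'i8'), ('open', 'f8'), ('high', 'f8'),
           ('low', 'f8'), ('close', 'f8')],
)
fields = ['open', 'high', 'low', 'close']

# (n_bars, 4) unstructured copy of just the ohlc columns
y_out = rfn.structured_to_unstructured(ohlc[fields])

# x-domain: each bar index repeated across its 4 y-points,
# spread +/- 0.5 so every bar spans a unit step
x_out = ohlc['index'][:, None] + np.array([-0.5, 0, 0, 0.5])
assert x_out.shape == y_out.shape == (2, 4)
```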
def to_step_format(
shm: ShmArray,
data_field: str,
index_field: str = 'index',
) -> tuple[int, np.ndarray, np.ndarray]:
'''
Convert an input 1d shm array to a "step array" format
for use by path graphics generation.
'''
i = shm._array['index'].copy()
out = shm._array[data_field].copy()
x_out = np.broadcast_to(
i[:, None],
(i.size, 2),
) + np.array([-0.5, 0.5])
y_out = np.empty((len(out), 2), dtype=out.dtype)
y_out[:] = out[:, np.newaxis]
# start y at origin level
y_out[0, 0] = 0
return x_out, y_out
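And the step-format expansion in isolation, again a numpy-only sketch:

```python
import numpy as np

i = np.arange(4)
vals = np.array([1.0, 2.0, 2.0, 3.0])

# each sample becomes a (left-edge, right-edge) pair so the curve
# renders as flat segments across every index interval
x_out = np.broadcast_to(i[:, None], (i.size, 2)) + np.array([-0.5, 0.5])
y_out = np.empty((len(vals), 2), dtype=vals.dtype)
y_out[:] = vals[:, np.newaxis]
y_out[0, 0] = 0  # start the first step at the origin level
```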

View File

@ -26,6 +26,8 @@ from typing import Optional
import pyqtgraph as pg
from ._axes import Axis
def invertQTransform(tr):
"""Return a QTransform that is the inverse of *tr*.
@ -52,6 +54,10 @@ def _do_overrides() -> None:
pg.functions.invertQTransform = invertQTransform
pg.PlotItem = PlotItem
# enable "QPainterPathPrivate for faster arrayToQPath" from
# https://github.com/pyqtgraph/pyqtgraph/pull/2324
pg.setConfigOption('enableExperimental', True)
# NOTE: the below customized type contains all our changes on a method
# by method basis as per the diff:
@ -62,6 +68,20 @@ class PlotItem(pg.PlotItem):
Overrides for the core plot object mostly pertaining to overlayed
multi-view management as it relates to multi-axis management.
This object is the combination of a ``ViewBox`` and multiple
``AxisItem``s and so far we've added additional functionality and
APIs for:
- removal of axes
---
From ``pyqtgraph`` super type docs:
- Manage placement of ViewBox, AxisItems, and LabelItems
- Create and manage a list of PlotDataItems displayed inside the
ViewBox
- Implement a context menu with commonly used display and analysis
options
'''
def __init__(
self,
@ -86,6 +106,8 @@ class PlotItem(pg.PlotItem):
enableMenu=enableMenu,
kargs=kargs,
)
self.name = name
self.chart_widget = None
# self.setAxisItems(
# axisItems,
# default_axes=default_axes,
@ -209,7 +231,12 @@ class PlotItem(pg.PlotItem):
# adding this since without it there's some weird
# ``ViewBox`` geometry bug.. where a gap for the
# 'bottom' axis is somehow left in?
axis = pg.AxisItem(orientation=name, parent=self)
# axis = pg.AxisItem(orientation=name, parent=self)
axis = Axis(
self,
orientation=name,
parent=self,
)
axis.linkToView(self.vb)

View File

@ -41,7 +41,11 @@ from ._anchors import (
pp_tight_and_right, # wanna keep it straight in the long run
gpath_pin,
)
from ..calc import humanize, pnl, puterize
from ..calc import (
humanize,
pnl,
puterize,
)
from ..clearing._allocate import Allocator
from ..pp import Position
from ..data._normalize import iterticks
@ -80,9 +84,9 @@ async def update_pnl_from_feed(
'''
global _pnl_tasks
pp = order_mode.current_pp
live = pp.live_pp
key = live.symbol.front_fqsn()
pp: PositionTracker = order_mode.current_pp
live: Position = pp.live_pp
key: str = live.symbol.front_fqsn()
log.info(f'Starting pnl display for {pp.alloc.account}')
@ -101,11 +105,22 @@ async def update_pnl_from_feed(
async with flume.stream.subscribe() as bstream:
# last_tick = time.time()
async for quotes in bstream:
# now = time.time()
# period = now - last_tick
for sym, quote in quotes.items():
# print(f'{key} PnL: sym:{sym}')
# TODO: uggggh we probably want a better state
# management than this since we want to enable
# updating whatever the current symbol is in
# real-time right?
if sym != key:
continue
# watch out for wrong quote msg-data if you muck
# with backend feed subs code..
# assert sym == quote['fqsn']
for tick in iterticks(quote, types):
# print(f'{1/period} Hz')
@ -119,13 +134,17 @@ async def update_pnl_from_feed(
else:
# compute and display pnl status
order_mode.pane.pnl_label.format(
pnl=copysign(1, size) * pnl(
pnl_val = (
copysign(1, size)
*
pnl(
# live.ppu,
order_mode.current_pp.live_pp.ppu,
tick['price'],
),
)
)
# print(f'formatting PNL {sym} => {pnl_val}')
order_mode.pane.pnl_label.format(pnl=pnl_val)
# last_tick = time.time()
finally:
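A worked example of the sign flip above; note the ``pnl()`` body here is an assumption for illustration only (fractional return vs. the per-unit entry price), not the actual ``piker.calc`` implementation:

```python
from math import copysign

def pnl(ppu: float, price: float) -> float:
    # assumed semantics for illustration: fractional return
    # relative to the position's per-unit entry price
    return (price - ppu) / ppu

size = -10      # a short position
ppu = 100.0     # avg per-unit entry price
pnl_val = copysign(1, size) * pnl(ppu, 105.0)
# price rallied 5% against the short -> pnl_val == -0.05
```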

piker/ui/_render.py 100644 (+318)
View File

@ -0,0 +1,318 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
High level streaming graphics primitives.
This is an intermediate layer which associates real-time low latency
graphics primitives with underlying stream/flow related data structures
for fast incremental update.
'''
from __future__ import annotations
from typing import (
TYPE_CHECKING,
)
import msgspec
import numpy as np
import pyqtgraph as pg
from PyQt5.QtGui import QPainterPath
from ..data._formatters import (
IncrementalFormatter,
)
from ..data._pathops import (
xy_downsample,
)
from ..log import get_logger
from .._profile import (
Profiler,
)
if TYPE_CHECKING:
from ._dataviz import Viz
log = get_logger(__name__)
class Renderer(msgspec.Struct):
viz: Viz
fmtr: IncrementalFormatter
# output graphics rendering, the main object
# processed in ``QGraphicsObject.paint()``
path: QPainterPath | None = None
fast_path: QPainterPath | None = None
# downsampling state
_last_uppx: float = 0
_in_ds: bool = False
def draw_path(
self,
x: np.ndarray,
y: np.ndarray,
connect: str | np.ndarray = 'all',
path: QPainterPath | None = None,
redraw: bool = False,
) -> QPainterPath:
path_was_none = path is None
if redraw and path:
path.clear()
# TODO: avoid this?
if self.fast_path:
self.fast_path.clear()
path = pg.functions.arrayToQPath(
x,
y,
connect=connect,
finiteCheck=False,
# reserve mem allocs see:
# - https://doc.qt.io/qt-5/qpainterpath.html#reserve
# - https://doc.qt.io/qt-5/qpainterpath.html#capacity
# - https://doc.qt.io/qt-5/qpainterpath.html#clear
# XXX: right now this is based on ad-hoc checks on a
# hidpi 3840x2160 4k monitor but we should optimize for
# the target display(s) on the sys.
# if no_path_yet:
# graphics.path.reserve(int(500e3))
# path=path, # path re-use / reserving
)
# avoid mem allocs if possible
if path_was_none:
path.reserve(path.capacity())
return path
def render(
self,
new_read,
array_key: str,
profiler: Profiler,
uppx: float = 1,
# redraw and ds flags
should_redraw: bool = False,
new_sample_rate: bool = False,
should_ds: bool = False,
showing_src_data: bool = True,
do_append: bool = True,
use_fpath: bool = True,
# only render datums "in view" of the ``ChartView``
use_vr: bool = True,
) -> tuple[QPainterPath, bool]:
'''
Render the current graphics path(s)
There are (at least) 3 stages from source data to graphics data:
- a data transform (which can be stored in additional shm)
- a graphics transform which converts discrete basis data to
a `float`-basis view-coords graphics basis. (eg. ``ohlc_flatten()``,
``step_path_arrays_from_1d()``, etc.)
- blah blah blah (from notes)
'''
# TODO: can the renderer just call ``Viz.read()`` directly?
# unpack latest source data read
fmtr = self.fmtr
(
_,
_,
array,
ivl,
ivr,
in_view,
) = new_read
# xy-path data transform: convert source data to a format
# able to be passed to a `QPainterPath` rendering routine.
fmt_out = fmtr.format_to_1d(
new_read,
array_key,
profiler,
slice_to_inview=use_vr,
)
# no history in view case
if not fmt_out:
# XXX: this might be why the profiler only has exits?
return
(
x_1d,
y_1d,
connect,
prepend_length,
append_length,
view_changed,
# append_tres,
) = fmt_out
# redraw conditions
if (
prepend_length > 0
or new_sample_rate
or view_changed
# NOTE: comment this to try and make "append paths"
# work below..
or append_length > 0
):
should_redraw = True
path: QPainterPath = self.path
fast_path: QPainterPath = self.fast_path
reset: bool = False
self.viz.yrange = None
# redraw the entire source data if we have either of:
# - no prior path graphic rendered or,
# - we always intend to re-render the data only in view
if (
path is None
or should_redraw
):
# print(f"{self.viz.name} -> REDRAWING BRUH")
if new_sample_rate and showing_src_data:
log.info(f'DE-downsampling -> {array_key}')
self._in_ds = False
elif should_ds and uppx > 1:
x_1d, y_1d, ymn, ymx = xy_downsample(
x_1d,
y_1d,
uppx,
)
self.viz.yrange = ymn, ymx
# print(f'{self.viz.name} post ds: ymn, ymx: {ymn},{ymx}')
reset = True
profiler(f'FULL PATH downsample redraw={should_ds}')
self._in_ds = True
path = self.draw_path(
x=x_1d,
y=y_1d,
connect=connect,
path=path,
redraw=True,
)
profiler(
'generated fresh path. '
f'(should_redraw: {should_redraw} '
f'should_ds: {should_ds} new_sample_rate: {new_sample_rate})'
)
# TODO: get this piecewise prepend working - right now it's
# giving heck on vwap...
# elif prepend_length:
# prepend_path = pg.functions.arrayToQPath(
# x[0:prepend_length],
# y[0:prepend_length],
# connect='all'
# )
# # swap prepend path in "front"
# old_path = graphics.path
# graphics.path = prepend_path
# # graphics.path.moveTo(new_x[0], new_y[0])
# graphics.path.connectPath(old_path)
elif (
append_length > 0
and do_append
):
profiler(f'sliced append path {append_length}')
# (
# x_1d,
# y_1d,
# connect,
# ) = append_tres
profiler(
f'diffed array input, append_length={append_length}'
)
# if should_ds and uppx > 1:
# new_x, new_y = xy_downsample(
# new_x,
# new_y,
# uppx,
# )
# profiler(f'fast path downsample redraw={should_ds}')
append_path = self.draw_path(
x=x_1d,
y=y_1d,
connect=connect,
path=fast_path,
)
profiler('generated append qpath')
if use_fpath:
# an attempt at trying to make append-updates faster..
if fast_path is None:
fast_path = append_path
# fast_path.reserve(int(6e3))
else:
# print(
# f'{self.viz.name}: FAST PATH\n'
# f"append_path br: {append_path.boundingRect()}\n"
# f"path size: {size}\n"
# f"append_path len: {append_path.length()}\n"
# f"fast_path len: {fast_path.length()}\n"
# )
fast_path.connectPath(append_path)
size = fast_path.capacity()
profiler(f'connected fast path w size: {size}')
# graphics.path.moveTo(new_x[0], new_y[0])
# path.connectPath(append_path)
# XXX: lol this causes a hang..
# graphics.path = graphics.path.simplified()
else:
size = path.capacity()
profiler(f'connected history path w size: {size}')
path.connectPath(append_path)
self.path = path
self.fast_path = fast_path
return self.path, reset
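The history-path vs. fast-append-path split can be sketched standalone; this assumes pyqtgraph is importable (and that a ``QApplication`` may be needed on some platforms before touching Qt paint types):

```python
import numpy as np
import pyqtgraph as pg

x = np.arange(100, dtype=float)
y = np.random.random(100)

# the retained "history" path: fully redrawn only when needed
history = pg.functions.arrayToQPath(x, y, connect='all', finiteCheck=False)

# on new datums, draw just the tail as its own path and splice it
# on instead of regenerating the whole path
x2 = np.arange(100, 105, dtype=float)
y2 = np.random.random(5)
tail = pg.functions.arrayToQPath(x2, y2, connect='all', finiteCheck=False)
history.connectPath(tail)
```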

View File

@ -144,15 +144,29 @@ class CompleterView(QTreeView):
self._font_size: int = 0 # pixels
self._init: bool = False
async def on_pressed(self, idx: QModelIndex) -> None:
async def on_pressed(
self,
idx: QModelIndex,
) -> None:
'''
Mouse-press handler for the view.
'''
search = self.parent()
await search.chart_current_item()
await search.chart_current_item(
clear_to_cache=True,
)
# XXX: this causes Qt to hang and segfault..lovely
# self.show_cache_entries(
# only=True,
# keep_current_item_selected=True,
# )
search.focus()
def set_font_size(self, size: int = 18):
# print(size)
if size < 0:
@ -288,7 +302,7 @@ class CompleterView(QTreeView):
def select_first(self) -> QStandardItem:
'''
Select the first depth >= 2 entry from the completer tree and
return it's item.
return its item.
'''
# ensure we're **not** selecting the first level parent node and
@ -416,12 +430,26 @@ class CompleterView(QTreeView):
section: str,
values: Sequence[str],
clear_all: bool = False,
reverse: bool = False,
) -> None:
'''
Set result-rows for depth = 1 tree section ``section``.
'''
if (
values
and not isinstance(values[0], str)
):
flattened: list[str] = []
for val in values:
flattened.extend(val)
values = flattened
if reverse:
values = reversed(values)
model = self.model()
if clear_all:
# XXX: rewrite the model from scratch if caller requests it
@ -598,22 +626,50 @@ class SearchWidget(QtWidgets.QWidget):
self.show()
self.bar.focus()
def show_only_cache_entries(self) -> None:
def show_cache_entries(
self,
only: bool = False,
keep_current_item_selected: bool = False,
) -> None:
'''
Clear the search results view and show only cached (aka recently
loaded with active data) feeds in the results section.
'''
godw = self.godwidget
# first entry in the cache is the current symbol(s)
fqsns = set()
for multi_fqsns in list(godw._chart_cache):
for fqsn in set(multi_fqsns):
fqsns.add(fqsn)
if keep_current_item_selected:
sel = self.view.selectionModel()
cidx = sel.currentIndex()
self.view.set_section_entries(
'cache',
list(reversed(godw._chart_cache)),
list(fqsns),
# remove all other completion results except for cache
clear_all=True,
clear_all=only,
reverse=True,
)
def get_current_item(self) -> Optional[tuple[str, str]]:
'''Return the current completer tree selection as
if (
keep_current_item_selected
and cidx.isValid()
):
# set current selection back to what it was before filling out
# the view results.
self.view.select_from_idx(cidx)
else:
self.view.select_first()
def get_current_item(self) -> tuple[QModelIndex, str, str] | None:
'''
Return the current completer tree selection as
a tuple ``(QModelIndex, provider: str, symbol: str)`` if valid, else ``None``.
'''
@ -639,7 +695,11 @@ class SearchWidget(QtWidgets.QWidget):
if provider == 'cache':
symbol, _, provider = symbol.rpartition('.')
return provider, symbol
return (
cidx,
provider,
symbol,
)
else:
return None
@ -660,15 +720,16 @@ class SearchWidget(QtWidgets.QWidget):
if value is None:
return None
provider, symbol = value
cidx, provider, symbol = value
godw = self.godwidget
log.info(f'Requesting symbol: {symbol}.{provider}')
fqsn = f'{symbol}.{provider}'
log.info(f'Requesting symbol: {fqsn}')
# assert provider in symbol
await godw.load_symbols(
provider,
[symbol],
'info',
fqsns=[fqsn],
loglevel='info',
)
# fully qualified symbol name (SNS i guess is what we're
@ -682,13 +743,15 @@ class SearchWidget(QtWidgets.QWidget):
# Re-order the symbol cache on the chart to display in
# LIFO order. this is normally only done internally by
# the chart on new symbols being loaded into memory
godw.set_chart_symbol(
fqsn, (
godw.set_chart_symbols(
(fqsn,), (
godw.hist_linked,
godw.rt_linked,
)
)
self.show_only_cache_entries()
self.show_cache_entries(
only=True,
)
self.bar.focus()
return fqsn
@ -757,9 +820,10 @@ async def pack_matches(
with trio.CancelScope() as cs:
task_status.started(cs)
# ensure ^ status is updated
results = await search(pattern)
results = list(await search(pattern))
if provider != 'cache': # XXX: don't cache the cache results xD
# XXX: don't cache the cache results xD
if provider != 'cache':
matches[(provider, pattern)] = results
# print(f'results from {provider}: {results}')
@ -806,7 +870,7 @@ async def fill_results(
has_results: defaultdict[str, set[str]] = defaultdict(set)
# show cached feed list at startup
search.show_only_cache_entries()
search.show_cache_entries()
search.on_resize()
while True:
@ -860,8 +924,9 @@ async def fill_results(
# it hasn't already been searched with the current
# input pattern (in which case just look up the old
# results).
if (period >= pause) and (
provider not in already_has_results
if (
period >= pause
and provider not in already_has_results
):
# TODO: it may make more sense TO NOT search the
@ -869,7 +934,9 @@ async def fill_results(
# cpu-bound.
if provider != 'cache':
view.clear_section(
provider, status_field='-> searchin..')
provider,
status_field='-> searchin..',
)
await n.start(
pack_matches,
@ -890,11 +957,20 @@ async def fill_results(
# re-searching its ``dict`` since it's easier
# but it also causes it to be slower than cached
# results from other providers on occasion.
if results and provider != 'cache':
if (
results
):
if provider != 'cache':
view.set_section_entries(
section=provider,
values=results,
)
else:
# if provider == 'cache':
# for the cache just show what we got
# that matches
search.show_cache_entries()
else:
view.clear_section(provider)
@ -916,11 +992,10 @@ async def handle_keyboard_input(
global _search_active, _search_enabled
# startup
bar = searchbar
search = searchbar.parent()
godwidget = search.godwidget
view = bar.view
view.set_font_size(bar.dpi_font.px_size)
searchw = searchbar.parent()
godwidget = searchw.godwidget
view = searchbar.view
view.set_font_size(searchbar.dpi_font.px_size)
send, recv = trio.open_memory_channel(616)
async with trio.open_nursery() as n:
@ -931,13 +1006,13 @@ async def handle_keyboard_input(
n.start_soon(
partial(
fill_results,
search,
searchw,
recv,
)
)
bar.focus()
search.show_only_cache_entries()
searchbar.focus()
searchw.show_cache_entries()
await trio.sleep(0)
async for kbmsg in recv_chan:
@ -949,20 +1024,29 @@ async def handle_keyboard_input(
if mods == Qt.ControlModifier:
ctl = True
if key in (Qt.Key_Enter, Qt.Key_Return):
if key in (
Qt.Key_Enter,
Qt.Key_Return
):
_search_enabled = False
await search.chart_current_item(clear_to_cache=True)
search.show_only_cache_entries()
view.show_matches()
search.focus()
await searchw.chart_current_item(clear_to_cache=True)
elif not ctl and not bar.text():
# if nothing in search text show the cache
view.set_section_entries(
'cache',
list(reversed(godwidget._chart_cache)),
clear_all=True,
)
# XXX: causes hang and segfault..
# searchw.show_cache_entries(
# only=True,
# keep_current_item_selected=True,
# )
view.show_matches()
searchw.focus()
elif (
not ctl
and not searchbar.text()
):
# TODO: really should factor this somewhere..bc
# we're doin it in another spot as well..
searchw.show_cache_entries(only=True)
continue
# cancel and close
@ -971,7 +1055,7 @@ async def handle_keyboard_input(
Qt.Key_Space, # i feel like this is the "native" one
Qt.Key_Alt,
}:
bar.unfocus()
searchbar.unfocus()
# kill the search and focus back on main chart
if godwidget:
@ -979,41 +1063,54 @@ async def handle_keyboard_input(
continue
if ctl and key in {
Qt.Key_L,
}:
if (
ctl
and key in {Qt.Key_L}
):
# like url (link) highlight in a web browser
bar.focus()
searchbar.focus()
# selection navigation controls
elif ctl and key in {
Qt.Key_D,
}:
elif (
ctl
and key in {Qt.Key_D}
):
view.next_section(direction='down')
_search_enabled = False
elif ctl and key in {
Qt.Key_U,
}:
elif (
ctl
and key in {Qt.Key_U}
):
view.next_section(direction='up')
_search_enabled = False
# selection navigation controls
elif (ctl and key in {
elif (
ctl and (
key in {
Qt.Key_K,
Qt.Key_J,
}
}) or key in {
or key in {
Qt.Key_Up,
Qt.Key_Down,
}:
}
)
):
_search_enabled = False
if key in {Qt.Key_K, Qt.Key_Up}:
if key in {
Qt.Key_K,
Qt.Key_Up
}:
item = view.select_previous()
elif key in {Qt.Key_J, Qt.Key_Down}:
elif key in {
Qt.Key_J,
Qt.Key_Down,
}:
item = view.select_next()
if item:
@ -1022,26 +1119,39 @@ async def handle_keyboard_input(
# if we're in the cache section and thus the next
# selection is a cache item, switch and show it
# immediately since it should be very fast.
if parent_item and parent_item.text() == 'cache':
await search.chart_current_item(clear_to_cache=False)
if (
parent_item
and parent_item.text() == 'cache'
):
await searchw.chart_current_item(clear_to_cache=False)
# ACTUAL SEARCH BLOCK #
# where we fuzzy complete and fill out sections.
elif not ctl:
# relay to completer task
_search_enabled = True
send.send_nowait(search.bar.text())
send.send_nowait(searchw.bar.text())
_search_active.set()
async def search_simple_dict(
text: str,
source: dict,
) -> dict[str, Any]:
tokens = []
for key in source:
if not isinstance(key, str):
tokens.extend(key)
else:
tokens.append(key)
# search routine can be specified as a function such
# as in the case of the current app's local symbol cache
matches = fuzzy.extractBests(
text,
source.keys(),
tokens,
score_cutoff=90,
)
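A runnable sketch of the key flattening plus fuzzy match, assuming ``fuzzy`` is ``fuzzywuzzy.process`` (its import isn't shown in this hunk):

```python
from fuzzywuzzy import process as fuzzy

source = {
    ('btcusdt.binance', 'ethusdt.binance'): None,  # multi-fqsn tuple key
    'xbtusd.kraken': None,
}

tokens: list[str] = []
for key in source:
    if not isinstance(key, str):
        tokens.extend(key)
    else:
        tokens.append(key)

# lower cutoff than the production value (90) just so this
# toy query actually returns hits
matches = fuzzy.extractBests('btc', tokens, score_cutoff=50)
```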

View File

@ -240,12 +240,12 @@ def hcolor(name: str) -> str:
'gunmetal': '#91A3B0',
'battleship': '#848482',
# default ohlc-bars/curve gray
'bracket': '#666666', # like the logo
# bluish
'charcoal': '#36454F',
# default bars
'bracket': '#666666', # like the logo
# work well for filled polygons which want a 'bracket' feel
# going light to dark
'davies': '#555555',

View File

@ -88,7 +88,7 @@ class Dialog(Struct):
# TODO: use ``pydantic.UUID4`` field
uuid: str
order: Order
symbol: Symbol
symbol: str
lines: list[LevelLine]
last_status_close: Callable = lambda: None
msgs: dict[str, dict] = {}
@ -349,7 +349,7 @@ class OrderMode:
'''
if not order:
staged = self._staged_order
staged: Order = self._staged_order
# apply order fields for ems
oid = str(uuid.uuid4())
order = staged.copy()
@ -379,7 +379,7 @@ class OrderMode:
dialog = Dialog(
uuid=order.oid,
order=order,
symbol=order.symbol,
symbol=order.symbol, # XXX: always a str?
lines=lines,
last_status_close=self.multistatus.open_status(
f'submitting {order.exec_mode}-{order.action}',
@ -494,7 +494,7 @@ class OrderMode:
uuid: str,
price: float,
arrow_index: float,
time_s: float,
pointing: Optional[str] = None,
@ -513,22 +513,32 @@ class OrderMode:
'''
dialog = self.dialogs[uuid]
lines = dialog.lines
chart = self.chart
# XXX: seems to fail on certain types of races?
# assert len(lines) == 2
if lines:
flume: Flume = self.feed.flumes[self.chart.linked.symbol.fqsn]
flume: Flume = self.feed.flumes[chart.linked.symbol.fqsn]
_, _, ratio = flume.get_ds_info()
for i, chart in [
(arrow_index, self.chart),
(flume.izero_hist
+
round((arrow_index - flume.izero_rt)/ratio),
self.hist_chart)
for chart, shm in [
(self.chart, flume.rt_shm),
(self.hist_chart, flume.hist_shm),
]:
viz = chart.get_viz(chart.name)
index_field = viz.index_field
arr = shm.array
# TODO: borked for int index based..
index = flume.get_index(time_s, arr)
# get absolute index for arrow placement
arrow_index = arr[index_field][index]
self.arrows.add(
chart.plotItem,
uuid,
i,
arrow_index,
price,
pointing=pointing,
color=lines[0].color
@ -693,7 +703,6 @@ async def open_order_mode(
# symbol id
symbol = chart.linked.symbol
symkey = symbol.front_fqsn()
# map of per-provider account keys to position tracker instances
trackers: dict[str, PositionTracker] = {}
@ -854,7 +863,7 @@ async def open_order_mode(
# the expected symbol key in its positions msg.
for (broker, acctid), msgs in position_msgs.items():
for msg in msgs:
log.info(f'Loading pp for {symkey}:\n{pformat(msg)}')
log.info(f'Loading pp for {acctid}@{broker}:\n{pformat(msg)}')
await process_trade_msg(
mode,
book,
@ -930,7 +939,6 @@ async def process_trade_msg(
) -> tuple[Dialog, Status]:
get_index = mode.chart.get_index
fmsg = pformat(msg)
log.debug(f'Received order msg:\n{fmsg}')
name = msg['name']
@ -965,6 +973,9 @@ async def process_trade_msg(
oid = msg.oid
dialog: Dialog = mode.dialogs.get(oid)
if dialog:
fqsn = dialog.symbol
match msg:
case Status(
resp='dark_open' | 'open',
@ -1034,10 +1045,11 @@ async def process_trade_msg(
# should only be one "fill" for an alert
# add a triangle and remove the level line
req = Order(**req)
tm = time.time()
mode.on_fill(
oid,
price=req.price,
arrow_index=get_index(time.time()),
time_s=tm,
)
mode.lines.remove_line(uuid=oid)
msg.req = req
@ -1065,15 +1077,10 @@ async def process_trade_msg(
action = order.action
details = msg.brokerd_msg
# TODO: some kinda progress system
mode.on_fill(
oid,
price=details['price'],
pointing='up' if action == 'buy' else 'down',
# TODO: put the actual exchange timestamp?
# TODO: some kinda progress system?
# TODO: put the actual exchange timestamp
arrow_index=get_index(
# TODO: note currently the ``kraken`` openOrders sub
# NOTE: currently the ``kraken`` openOrders sub
# doesn't deliver their engine timestamp as part of
# its schema, so this value is **not** from them
# (see our backend code). We should probably either
@ -1083,8 +1090,12 @@ async def process_trade_msg(
# a true backend one? This will require finagling
# with how each backend tracks/summarizes time
# stamps for the downstream API.
details['broker_time']
),
tm = details['broker_time']
mode.on_fill(
oid,
price=details['price'],
time_s=tm,
pointing='up' if action == 'buy' else 'down',
)
# TODO: append these fill events to the position's clear

View File

@ -1,3 +0,0 @@
"""
Super hawt Qt UI components
"""

View File

@ -1,67 +0,0 @@
import sys
from PySide2.QtCharts import QtCharts
from PySide2.QtWidgets import QApplication, QMainWindow
from PySide2.QtCore import Qt, QPointF
from PySide2 import QtGui
import qdarkstyle
data = ((1, 7380, 7520, 7380, 7510, 7324),
(2, 7520, 7580, 7410, 7440, 7372),
(3, 7440, 7650, 7310, 7520, 7434),
(4, 7450, 7640, 7450, 7550, 7480),
(5, 7510, 7590, 7460, 7490, 7502),
(6, 7500, 7590, 7480, 7560, 7512),
(7, 7560, 7830, 7540, 7800, 7584))
app = QApplication([])
# set dark stylesheet
# import pdb; pdb.set_trace()
app.setStyleSheet(qdarkstyle.load_stylesheet_pyside())
series = QtCharts.QCandlestickSeries()
series.setDecreasingColor(Qt.darkRed)
series.setIncreasingColor(Qt.darkGreen)
ma5 = QtCharts.QLineSeries() # 5-days average data line
tm = [] # stores str type data
# in a loop, series and ma5 append corresponding data
for num, o, h, l, c, m in data:
candle = QtCharts.QCandlestickSet(o, h, l, c)
series.append(candle)
ma5.append(QPointF(num, m))
tm.append(str(num))
pen = candle.pen()
# import pdb; pdb.set_trace()
chart = QtCharts.QChart()
# import pdb; pdb.set_trace()
series.setBodyOutlineVisible(False)
series.setCapsVisible(False)
# brush = QtGui.QBrush()
# brush.setColor(Qt.green)
# series.setBrush(brush)
chart.addSeries(series) # candle
chart.addSeries(ma5) # ma5 line
chart.setAnimationOptions(QtCharts.QChart.SeriesAnimations)
chart.createDefaultAxes()
chart.legend().hide()
chart.axisX(series).setCategories(tm)
chart.axisX(ma5).setVisible(False)
view = QtCharts.QChartView(chart)
view.chart().setTheme(QtCharts.QChart.ChartTheme.ChartThemeDark)
view.setRubberBand(QtCharts.QChartView.HorizontalRubberBand)
# chartview.chart().setTheme(QtCharts.QChart.ChartTheme.ChartThemeBlueCerulean)
ui = QMainWindow()
# ui.setGeometry(50, 50, 500, 300)
ui.setCentralWidget(view)
ui.show()
sys.exit(app.exec_())

View File

@ -1,22 +1,26 @@
"""
Resource list for mucking with DPIs on multiple screens:
- https://stackoverflow.com/questions/42141354/convert-pixel-size-to-point-size-for-fonts-on-multiple-platforms
- https://stackoverflow.com/questions/25761556/qt5-font-rendering-different-on-various-platforms/25929628#25929628
- https://doc.qt.io/qt-5/highdpi.html
- https://stackoverflow.com/questions/20464814/changing-dpi-scaling-size-of-display-make-qt-applications-font-size-get-rendere
- https://stackoverflow.com/a/20465247
- https://doc.qt.io/archives/qt-4.8/qfontmetrics.html#width
- https://forum.qt.io/topic/54136/how-do-i-get-the-qscreen-my-widget-is-on-qapplication-desktop-screen-returns-a-qwidget-and-qobject_cast-qscreen-returns-null/3
- https://forum.qt.io/topic/43625/point-sizes-are-they-reliable/4
- https://stackoverflow.com/questions/16561879/what-is-the-difference-between-logicaldpix-and-physicaldpix-in-qt
- https://doc.qt.io/qt-5/qguiapplication.html#screenAt
DPI and info helper script for display metrics.
"""
from pyqtgraph import QtGui
# Resource list for mucking with DPIs on multiple screens:
# https://stackoverflow.com/questions/42141354/convert-pixel-size-to-point-size-for-fonts-on-multiple-platforms
# https://stackoverflow.com/questions/25761556/qt5-font-rendering-different-on-various-platforms/25929628#25929628
# https://doc.qt.io/qt-5/highdpi.html
# https://stackoverflow.com/questions/20464814/changing-dpi-scaling-size-of-display-make-qt-applications-font-size-get-rendere
# https://stackoverflow.com/a/20465247
# https://doc.qt.io/archives/qt-4.8/qfontmetrics.html#width
# https://forum.qt.io/topic/54136/how-do-i-get-the-qscreen-my-widget-is-on-qapplication-desktop-screen-returns-a-qwidget-and-qobject_cast-qscreen-returns-null/3
# https://forum.qt.io/topic/43625/point-sizes-are-they-reliable/4
# https://stackoverflow.com/questions/16561879/what-is-the-difference-between-logicaldpix-and-physicaldpix-in-qt
# https://doc.qt.io/qt-5/qguiapplication.html#screenAt
from pyqtgraph import (
QtGui,
QtWidgets,
)
from PyQt5.QtCore import (
Qt, QCoreApplication
Qt,
QCoreApplication,
)
# Proper high DPI scaling is available in Qt >= 5.6.0. This attribute
@ -28,55 +32,47 @@ if hasattr(Qt, 'AA_UseHighDpiPixmaps'):
QCoreApplication.setAttribute(Qt.AA_UseHighDpiPixmaps, True)
app = QtGui.QApplication([])
window = QtGui.QMainWindow()
main_widget = QtGui.QWidget()
app = QtWidgets.QApplication([])
window = QtWidgets.QMainWindow()
main_widget = QtWidgets.QWidget()
window.setCentralWidget(main_widget)
window.show()
# TODO: move widget through multiple displays and auto-detect the pixel
# ratio? (probably is gonna require calls to i3ipc on linux)..
pxr = main_widget.devicePixelRatioF()
# screen_num = app.desktop().screenNumber()
# TODO: how to detect list of displays from API?
# screen = app.screens()[screen_num]
screen = app.screenAt(main_widget.geometry().center())
def ppscreeninfo(screen: 'QScreen') -> None:
# screen_num = app.desktop().screenNumber()
name = screen.name()
size = screen.size()
geo = screen.availableGeometry()
phydpi = screen.physicalDotsPerInch()
logdpi = screen.logicalDotsPerInch()
rr = screen.refreshRate()
print(
# f'screen number: {screen_num}\n',
f'screen name: {name}\n'
f'screen size: {size}\n'
f'screen geometry: {geo}\n\n'
f'screen: {name}\n'
f' size: {size}\n'
f' geometry: {geo}\n'
f' logical dpi: {logdpi}\n'
f' devicePixelRatioF(): {pxr}\n'
f' physical dpi: {phydpi}\n'
f'logical dpi: {logdpi}\n'
f' refresh rate: {rr}\n'
)
print('-'*50)
print('-'*50 + '\n')
screen = app.screenAt(main_widget.geometry().center())
ppscreeninfo(screen)
screen = app.primaryScreen()
name = screen.name()
size = screen.size()
geo = screen.availableGeometry()
phydpi = screen.physicalDotsPerInch()
logdpi = screen.logicalDotsPerInch()
print(
# f'screen number: {screen_num}\n',
f'screen name: {name}\n'
f'screen size: {size}\n'
f'screen geometry: {geo}\n\n'
f'devicePixelRationF(): {pxr}\n'
f'physical dpi: {phydpi}\n'
f'logical dpi: {logdpi}\n'
)
ppscreeninfo(screen)
# app-wide font
font = QtGui.QFont("Hack")