Add a `MktPair` handling block for when a backend delivers
an init msg containing a `mkt_info` field. Adjust the original
`Symbol`-style `'symbol_info'` msg processing to use `Decimal` defaults
and convert to `MktPair`, including slapping in a hacky `_atype: str`
field XD
General initial name changes to `bs_mktid` and `_fqme` throughout!
For `price_tick` and `size_tick` we now read in `str`s, decimal-ize
them, and correctly fail over to default values of the same type..
Also, always treat `bs_mktid` field as a `str` in TOML form.
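To make the fail-over behaviour concrete, here's a minimal sketch (the `dec_or_default()` helper name and the msg shape are hypothetical, not the actual loader code):

```python
from decimal import Decimal

def dec_or_default(value: str | None, default: Decimal) -> Decimal:
    # read in as `str` and decimal-ize, failing over to a default
    # of the same (`Decimal`) type when the field is missing/empty.
    return Decimal(value) if value else default

# example `'symbol_info'`-style msg with one field missing:
info: dict[str, str | None] = {'price_tick': '0.01', 'size_tick': None}

price_tick = dec_or_default(info['price_tick'], Decimal('0.01'))
size_tick = dec_or_default(info['size_tick'], Decimal('0.001'))

# and always serialize the backend market id as a `str` in TOML form:
bs_mktid: str = str(12345)
```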
Drop the strange `clears: dict` var from the loading code (not sure why
that was left in smh) and use the better name `toml_clears_list` for the
TOML-loaded pre-transaction sequence.
Handle the case where the `'dst'` field is just a `str` (in which case
delegate to `.from_fqme()`) as well as do `Asset` loading, and use our
`Struct.copy()` to enforce type-casting (eg. to `Decimal`s) such
that we'll now capture typing errors despite IPC transport; a sketch of
this branching follows below.
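A minimal sketch of that branching using stand-in types (the real code goes through our `Struct.copy()` and `MktPair.from_fqme()`; the `load_dst()` helper here is hypothetical):

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class Asset:
    name: str
    atype: str
    tx_tick: Decimal

def load_dst(dst: str | dict) -> str | Asset:
    # plain `str` -> caller delegates to `MktPair.from_fqme()`
    if isinstance(dst, str):
        return dst

    # full msg -> do `Asset` loading with explicit casting so
    # typing errors are caught despite IPC transport stringifying
    return Asset(
        name=dst['name'],
        atype=dst['atype'],
        tx_tick=Decimal(dst['tx_tick']),
    )
```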
Change `Symbol.tick_size` and `.lot_tick_size` defaults to decimal
for proper casting and type `MktPair.atype: str` since `msgspec` can't
cast to `AssetTypeName` without special handling..
Allows building a `MktPair` from the backend-specific `Pair` for
eventual use in the data feed layer. Also adds `Pair.price_tick`/
`.size_tick` props to get to the expected tick precision info format
(sketched below).
Add a logic branch for now that switches on an instance check.
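A sketch of what such props might look like, assuming the backend reports precision as decimal digit counts (the field names here are made up and vary per backend):

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class Pair:
    symbol: str
    price_decimals: int  # hypothetical backend precision fields
    size_decimals: int

    @property
    def price_tick(self) -> Decimal:
        # 2 digits -> Decimal('0.01'), the expected tick format
        return Decimal(f'1e-{self.price_decimals}')

    @property
    def size_tick(self) -> Decimal:
        return Decimal(f'1e-{self.size_decimals}')
```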
Generally swap over all `Position.symbol` and `Transaction.sym` refs to
`MktPair`. Do a wholesale rename of all `.bsuid` var names to
`.bs_mktid`.
Instead let's name it `.sys` for "system", the thing we use to conduct
the "transactions" ..
Also rename `.bsuid` -> `.bs_mktid` for "backend system market id",
which is more explicit, easier to remember and read.
Prepping to entirely replace `Symbol`; this adds a buncha docs/comments,
better implementation for representing and parsing the FQME: "fully
qualified market endpoint".
Deatz:
- make `.src` an optional field until we figure out how we're going
to support loading source assets from all backends sensibly..
- implement `MktPair.fqme: str` (what was previously called `fqsn`)
using a new util func: `maybe_cons_tokens()`.
- drop `Symbol.brokers` and expect only `.broker` usage.
- remap anything with `fqsn` in the name to `fqme` with aliases from the
old name.
- implement `unpack_fqme()` with `match:` syntax B) (sketched after
this list).
- add `MktPair.tick_size_digits`, `.lot_size_digits`, `.fqsn`, `.key` for
backward compat.
- make all fqme generation related fields empty `str`s by default.
- add `MktPair.resolved: bool` a flag indicating whether or not `.dst`
is an `Asset` instance or just a string and, `.bs_mktid` the field
to hold the "backend system market id" per broker.
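A rough sketch of the two fqme utils mentioned above (the exact token layout and return shape are simplified here):

```python
def maybe_cons_tokens(tokens: list[str], delim: str = '.') -> str:
    # construct an fqme-style key, skipping empty `str` fields
    return delim.join(filter(bool, tokens))

def unpack_fqme(fqme: str) -> tuple[str, str, str]:
    # `match:`-based unpacking into (broker, key, venue)
    match fqme.split('.'):
        case [key, broker]:
            return (broker, key, '')
        case [key, venue, broker]:
            return (broker, key, venue)
        case _:
            raise ValueError(f'Invalid fqme: {fqme}')

assert maybe_cons_tokens(['eurusd', '', 'ib']) == 'eurusd.ib'
assert unpack_fqme('btcusdt.spot.binance') == ('binance', 'btcusdt', 'spot')
```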
Try out using our new internal type for storing kraken's asset
info, now kept in the `Client.assets: dict[str, Asset]` table. Handle
a server error when requesting such info msgs.
Drop everything we can in terms of methods and attrs from `Symbol`:
- kill `.tokens()`, `.front_feed()`, `.nearest_tick()`,
`.front_fqsn()`, instead moving logic from these methods into
dependents (and obviously removing any usage from rest of code base,
coming in follow up commits).
- rename `.quantize_size()` -> `.quantize()`.
- re-implement `.brokers`, `.lot_size_digits`, `.tick_size_digits` as
`@property` methods; for the latter two this allows us to accept only
min-tick decimal values on the alternative constructor class methods
and to drop the equivalent instance vars (see the sketch below).
- map `_fqsn` related variable names to new and preferred `_fqme`.
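The digit-count properties reduce to a one-liner off the min-tick `Decimal`'s exponent; a minimal sketch (not the actual class body):

```python
from decimal import Decimal

class Symbol:
    # only min-tick `Decimal`s accepted on construction..
    def __init__(self, tick_size: Decimal, lot_tick_size: Decimal):
        self.tick_size = tick_size
        self.lot_tick_size = lot_tick_size

    @property
    def tick_size_digits(self) -> int:
        # Decimal('0.01') -> 2
        return -self.tick_size.as_tuple().exponent

    @property
    def lot_size_digits(self) -> int:
        return -self.lot_tick_size.as_tuple().exponent

assert Symbol(Decimal('0.01'), Decimal('1')).tick_size_digits == 2
```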
We also juggle around some utility functions, moving limited precision
related `decimal.Decimal` routines to the top of module and soon-to-be
legacy `fqsn` related routines to the bottom.
`MktPair` draft type extensions:
- drop requirements for `src_type`, and offer the optional `.dst_type`
field as either a `str` or (new `typing.Literal`) `AssetTypeName`.
- define an equivalent `.quantize()` as (re)defined in `Symbol` but with
a `quantity_type: str` field which specifies whether to use the price or
the size precision (sketched after this list).
- add a lot more docs, a `.key` property for the "symbol" name, and
a draft property for a `.fqme: str`
- allow `.src` and `.dst` to be of type `str | Asset`
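A sketch of the dual-precision `.quantize()` idea as a standalone func (the real method lives on `MktPair` and the exact rounding mode is an assumption):

```python
from decimal import ROUND_HALF_EVEN, Decimal

def quantize(
    size: float,
    price_tick: Decimal,
    size_tick: Decimal,
    quantity_type: str = 'size',
) -> Decimal:
    # pick the price or size precision by the `quantity_type` tag
    tick = {
        'price': price_tick,
        'size': size_tick,
    }[quantity_type]
    return Decimal(str(size)).quantize(tick, rounding=ROUND_HALF_EVEN)

assert quantize(1.2345, Decimal('0.01'), Decimal('0.001')) == Decimal('1.234')
```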
Add a new `Asset` to capture "things which can be used in markets and/or
transactions" XD
- defines a `.name`, an `.atype: AssetTypeName` financial category tag,
a `tx_tick: Decimal` precision limit for transactions, and of course
a `.quantize()` method for doing accounting arithmetic on a given tech
stack (see the sketch after this list).
- define the `atype: AssetTypeName` type as a finite set of `str`s
expected to be used in various ways for default settings in other
parts of the data and order control layers..
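A minimal sketch of the type (the tag values here are illustrative, not the real finite set):

```python
from dataclasses import dataclass
from decimal import Decimal
from typing import Literal

# finite set of financial category tags (example values)
AssetTypeName = Literal[
    'crypto_currency',
    'fiat_currency',
    'stock',
    'future',
]

@dataclass
class Asset:
    name: str
    atype: AssetTypeName  # financial category tag
    tx_tick: Decimal  # transaction precision limit

    def quantize(self, size: float) -> Decimal:
        # accounting arithmetic limited to the tx precision
        return Decimal(str(size)).quantize(self.tx_tick)

usdt = Asset('usdt', 'crypto_currency', Decimal('0.01'))
assert usdt.quantize(3.14159) == Decimal('3.14')
```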
Our issue was not having the correct value set on each
`Symbol.lot_tick_size`.. and then doing PPU calcs with the default set
for legacy mkts..
Also,
- actually write `pps.toml` on broker mode exit.
- drop `get_likely_pair()` and import from pp module.
Not sure how this worked before but, the PPU calculation critically
requires that clearing transactions be processed in the correct
chronological order! Fix this by sorting `trans: dict[str, Transaction]`
in the `PpTable.update_from_trans()` method.
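The gist of the fix, sketched with a toy `Transaction` (the actual timestamp field name is an assumption):

```python
from dataclasses import dataclass
from operator import attrgetter

@dataclass
class Transaction:
    tid: str
    dt: float  # assumed datetime-like field

trans: dict[str, Transaction] = {
    'b': Transaction('b', dt=2.0),
    'a': Transaction('a', dt=1.0),
}

# iterate clears chronologically so PPU calcs see the right order
for t in sorted(trans.values(), key=attrgetter('dt')):
    ...  # accumulate position/ppu state per clear
```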
Also, move the `get_likely_pair()` parser from the `kraken` backend here
for future use particularly when we revamp the asset-transaction
processing layer.
Apparently it will likely fix our `trio`-cancel-scopes-corrupted crash
when we try to let our `._web_bs.NoBsWs` do reconnect logic around
the async-generator implemented data-feed streaming routines in `binance`
and `kraken`. See the project docs for deatz; obvs we add the lib as
a dep.
Solve this by always scaling the y-range for the major/target curve
*before* the final overlay scaling loop; this implicitly always solves
the case where the major series is the only one in view.
Tidy up debug print formatting and add some loop-end demarcation comment
lines.
This is particularly more "good looking" when we boot with a pair that
doesn't have historical 1s OHLC and thus the fast chart is empty from
outset. In this case it's a lot nicer to be already zoomed to
a comfortable preset number of "datums in view" even when the history
isn't yet filled in.
Adjusts the chart display `Viz.default_view()` startup to explicitly
ensure this happens via the `do_min_bars=True` flag B)
Not sure how i missed this (and left in handling of `list.remove()`;
did it ever work for that?) after the `samplerd` impl in 5ec1a72 but,
this adjusts the remove-broken-subscriber loop to catch the correct
`set.remove()` exception type on a missing (likely already removed)
subscription entry.
For the purposes of eventually trying to resolve last-step indexing
synchronization issue(s) (intermittent but still existing) that can
happen due to races during history frame query and shm writing during
startup. In fact, here we drop all `hist_viz` info queries from the main
display loop for now anticipating that this code will either be removed
or improved later.
Again, as per the signature change, never expect implicit time step
calcs from overlay processing/machinery code. Also, extend the debug
printing (yet again) to include better details around
"rescale-due-to-minor-range-out-of-view" cases and a detailed msg for
the transform/scaling calculation (inputs/outputs), particularly for the
cases when one of the curves has a lesser support.
As per the change to `slice_from_time()` this ensures this `Viz` always
passes its self-calculated time indexing step size to the time slicing
routine(s).
Further this contains a slight impl tweak to `.scalars_from_index()` to
slice the actual view range from `xref` to `Viz.ViewState.xrange[1]` and
then read the corresponding `yref` from the first entry in that
array; this should be no slower in theory and makes way for further
caching of x-read-range to `ViewState` opportunities later.
There's been way too many issues when trying to calculate this
dynamically from the input array, so just expect the caller to know what
it's doing and don't bother with ever hitting the error case of
calculating an incorrect value internally.
When the target pinning curve (by default, the dispersion major) is
shorter than the pinned curve, we need to make sure we still find
the x-intersect for computing returns scalars! Use `Viz.i_from_t()` to
accomplish this and also augment that method with a `return_y: bool`
to allow the caller to also retrieve the equivalent y-value at the
requested input time `t: float` for convenience (sketched below).
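Roughly what the augmented lookup does, assuming a structured array with `'time'` and `'close'` fields (a sketch, not the actual method body):

```python
import numpy as np

def i_from_t(
    arr: np.ndarray,
    t: float,
    return_y: bool = False,
) -> int | tuple[int, float]:
    # index of the earliest sample at-or-after the input time
    i = int(np.searchsorted(arr['time'], t, side='left'))
    if return_y:
        # also hand back the equivalent y-value for convenience
        return i, float(arr['close'][i])
    return i
```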
Also tweak a few more internals around the 'loglin_ref_to_curve'
method:
- only solve / adjust for the above case when the major's xref is
detected as being "earlier" in time than the current minor's.
- pop the major viz entry from the overlay table ahead of time to avoid
a needless iteration and simplify the transform calc phase loop to
avoid handling that needless cycle B)
- add much better "organized" debug printing with clearer headers
indicating which "phase"/loop the message pertains to, as well as more
explicit details in terms of x and y-range values on each cycle of
each loop.
Previously when very zoomed out and using the `'r'` hotkey the
interaction handler loop wouldn't trigger a re-(up)sampling to get
a more detailed curve graphic and instead the previous downsampled
(under-detailed) graphic would show. Fix that by ensuring we yield back
to the Qt event loop and do at least a couple render cycles with paired
`.interact_graphics_cycle()` calls.
Further this flips the `.start/signal_ic()` methods to use the new
`.reset_graphics_caches()` ctx-mngr method.
Instead delegate directly to `Viz.default_view()` throughout charting
startup and interaction handlers.
Also add a `ChartPlotWidget.reset_graphics_caches()` context mngr which
resets all managed graphics objects' caching modes on enter and
restores them on exit for simplified use in interaction handling code
(sketched below).
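Conceptually it's just a `@contextmanager` that snapshots and restores each item's Qt cache mode; a sketch under that assumption (not the exact impl):

```python
from contextlib import contextmanager

from pyqtgraph.Qt import QtWidgets

@contextmanager
def reset_graphics_caches(graphics: list):
    # snapshot each item's current caching mode
    modes = {g: g.cacheMode() for g in graphics}
    try:
        for g in graphics:
            g.setCacheMode(
                QtWidgets.QGraphicsItem.CacheMode.NoCache
            )
        yield
    finally:
        # restore on exit
        for g, mode in modes.items():
            g.setCacheMode(mode)
```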
This finally seems to mitigate all the "smearing" and "jitter" artifacts
when using Qt's "coordinate cache" graphics-mode:
- whenever we're in a mouse interaction (as per calls to
`ChartView.start/signal_ic()`) we simply disable the caching mode (set
`.NoCache`) until the interaction is complete.
- only do this (for now) during a pan since it doesn't seem to be an
issue when zooming?
- ensure we disable caching on all `Viz.graphics` and `.ds_graphics` to
be agnostic to any case where there's both a zoom and a pan
simultaneously (not that it's easy to do manually XD) as well as
solving the problem whenever an OHLC series is in
traced-and-downsampled mode (during low zoom).
Impl deatz:
- rename `ChartView._ic` -> `._in_interact: trio.Event`
- add `ChartView._interact_stack: ExitStack` which we use to open
and close the `FlowGraphics.reset_cache()` mngrs from mouse handlers
(sketch below).
- drop all the commented per-subtype overrides for `.cache_mode: int`.
- write up much better doc strings for `FlattenedOHLC` and `StepCurve`
including some very basic ASCII-art diagrams.
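The `ExitStack` pattern in (hypothetical) miniature; `flow_graphics()` here is a stand-in for however the view iterates its `FlowGraphics` items:

```python
from contextlib import ExitStack

class ChartView:
    def __init__(self):
        self._in_interact = None  # `trio.Event` in the real code
        self._interact_stack = ExitStack()

    def flow_graphics(self) -> list:
        return []  # stub; yields cache-managed graphics items

    def start_ic(self):
        # open a `reset_cache()`-style mngr per graphics item
        for gfx in self.flow_graphics():
            self._interact_stack.enter_context(gfx.reset_cache())

    def signal_ic(self):
        # interaction done: unwind all opened mngrs at once
        self._interact_stack.close()
```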
When the minor has the same scaling as the major in a given direction we
should still do back-scaling against the major-target and previous
minors to avoid strange edge cases where only the target-major might not
be shifted correctly to show a matched intersect point? More or less
this just meant making the y-mxmn checks interval-inclusive with
`>=`/`<=` operators.
Also adds a shite ton of detailed comments throughout the pin-to-target
method blocks and moves the final major y-range call outside the final
`scaled: dict` loop.
For the "pin to target major/target curve" overlay method, this finally
solves the longstanding issue of ensuring that any new minor curve,
which requires an increase in the major/target curve y-range, also
re-scales all previously scaled minor curves retroactively. Thus we now
guarantee that all minor curves are correctly "pinned" to their
target/major on their earliest available datum **and** are all kept in
view.
Yah yah, i know it's the same as before (the N > 2 curves case with
out-of-range-minor rescaling the previously scaled curves isn't fixed
yet...) but, this is a much better and optional implementation in less
code. Further we're now better leveraging various new cached properties
and methods on `Viz`.
We now handle different `overlay_technique: str` options using `match:`
syntax in the 2ndary scaling loop, stash the returns scalars per curve
in `overlay_table`, and store and iterate the curves by dispersion
measure sort order.
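The dispatch shape, with assumed option names (only `'loglin_ref_to_curve'` is confirmed by the notes below):

```python
overlay_technique: str = 'loglin_ref_to_curve'

match overlay_technique:
    case 'loglin_ref_to_curve':
        ...  # pin minors to the major/target curve
    case 'loglin_ref_to_first':
        ...  # align each curve's earliest datum
    case _:
        raise ValueError(f'Unknown technique: {overlay_technique}')
```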
Further wrt "pin-to-target-curve" mode, which currently still pins to
the largest measured dispersion curve in the overlay set:
- pop major Ci overlay table entries at start for sub-calcs usage when
handling the "minor requires major rescale after pin" case.
- (finally) correctly rescale the major curve y-mxmn to whatever the
latest minor overlay curve requires by calcing the inverse transform
from the minor *at that point*:
- the intersect point being that at which the minor starts support on
the major's x-domain, using the new `Viz.scalars_from_index()` and,
- checking that the minor is not out of range (versus what the major's
transform calcs it to be), in which case,
- calc the inverse transform from the current out-of-range minor and
use it to project the new y-mxmn for the major/target based on the
same intersect-reference point in the x-domain used by the minor.
- always handle the target-major Ci specially by only setting the
`mx_ymn` / `mx_ymx` values when iterating that entry in the overlay
table.
- add todos around also doing the last sub-sub bullet for all previously
major-transform scaled minor overlays (this is coming next..i hope).
- add a final 3rd overlay loop which goes through a final `scaled: dict`
to apply all range values to each view; this is where we will
eventually solve that last edge case of an out-of-range minor's
scaling needing to be used to rescale already scaled minors XD
In an effort to make overlay calcs cleaner and leverage caching of view
range -> dispersion measures, this adds the following new methods:
- `._dispersion()` an lru-cached returns-scalar calculator given input
y-range and y-ref values (sketched after this list).
- `.disp_from_range()` which calls the above method and returns variable
output depending on requested calc `method: str`.
- `.i_from_t()` a currently unused cached method for slicing the
in-view's array index from time stamp (though not working yet due to
needing to parameterize the cache by the input `.vs.xrange`).
Further refinements/adjustments:
- rename `.view_state: ViewState` -> `.vs`.
- drop the `.bars_range()` method as it's no longer used anywhere else
in the code base.
- always set the `ViewState.in_view: np.ndarray` inside `.read()`.
- return the start array index (from slice) and `yref` value @ `xref`
from `.scalars_from_index()` to aid with "pin to curve" rescaling
caused by out-of-range pinned-minor curves.
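The dispersion calc itself is tiny, which is what makes the lru-caching worthwhile; a sketch as a free function (it's a method in the real code):

```python
from functools import lru_cache

@lru_cache(maxsize=16)
def _dispersion(
    y_ref: float,
    mn: float,
    mx: float,
) -> tuple[float, float]:
    # up/down "% returns" swings about the reference value
    return (
        (mx - y_ref) / y_ref,
        (mn - y_ref) / y_ref,
    )

up, down = _dispersion(100.0, mn=95.0, mx=110.0)
assert (up, down) == (0.1, -0.05)
```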
Not sure why this was ever allowed but, for slicing to the sample
*before* whatever target time stamp is passed in we should definitely
not return the prior index for the slice start since that might
include a very large gap prior to whatever sample is scanned to have
the earliest matching time stamp.
This was essential to fixing overlay intersect points searching in our
``ui.view_mode`` machinery..
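A toy illustration of the indexing rule (not the actual `slice_from_time()` code):

```python
import numpy as np

times = np.array([10.0, 20.0, 50.0, 60.0])  # note the big gap

# slicing *from* t=45 must start at the earliest sample with
# a matching-or-later stamp (index 2 -> 50.0); returning the
# prior index (1 -> 20.0) would span the large gap.
i_start = int(np.searchsorted(times, 45.0, side='left'))
assert times[i_start] == 50.0
```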
Adds a small struct which is used to track the most recently viewed
data's x/y ranges as well as the last `Viz.read()` "in view" array data
for fast access by chart related graphics processing code, namely view
mode overlay handling.
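Something like the following, field-wise (a sketch; the real struct may carry more):

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class ViewState:
    # most recently viewed x/y ranges
    xrange: tuple[float, float] | None = None
    yrange: tuple[float, float] | None = None
    # last `Viz.read()` "in view" array data for fast access
    in_view: np.ndarray | None = None
```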
Also adds new `Viz` interfaces:
- `Viz.ds_yrange: tuple[float, float]` which replaces the previous
`.yrange` (now set by `.datums_range()` on manual y-range calcs) so
that the m4 downsampler can set this field specifically and then it
gets used (when available) by `Viz.maxmin()`.
- `Viz.scalars_from_index()` a new returns-scalar generator which can be
used to calc the up and down returns values (used for scaling overlay
y-ranges) from an input `xref` x-domain index which maps to some
`Ci(xref) = yref`.
It was getting waayy too long to be jammed in a method XD
This moves all the chart-viz iteration and transform logic into a new
`piker.ui.view_mode.overlay_viewlists()` core routine which will make it
a lot nicer for,
- AOT compilation via `numba` / `cython` / `mypyc`.
- decoupling from the `pyqtgraph.ViewBox` APIs if we ever decide to get
crazy and go without another graphics engine.
- keeping your head clear when trying to rework the code B)
As part of solving a final bullet-issue in #455, which is specifically
a case:
- with N > 2 curves, one of which is the "major" dispersion curve and
the others are "minors",
- we can run into a scenario where some minor curve which gets pinned to
the major (due to the original "pinning technique" -> "align to
major") at some `P(t)` which is *not* the major's minimum / maximum
due to the minor having a smaller/shorter support and thus,
- requires that in order to show the max/min on the minor curve we have
to expand the range of the major curve as well but,
- that also means any previously scaled (to the major) minor curves need
to be adjusted as well or they'll not be pinned to the major the same
way!
I originally was trying to avoid doing the recursive iteration back
through all previously scaled minor curves and instead decided to try
implementing the "per side" curve dispersion detection (as was
originally attempted when first starting this work). The idea is to
decide which curve's up or down "swing in % returns" would determine the
global y-range *on that side*. Turns out I stumbled on the "align to
first" technique in the process: "for each overlay curve we align its
earliest sample (in time) to the same level of the earliest such sample
for whatever is deemed the major (directionally disperse) curve in
view".
I decided (with help) that this "pin to first" approach/style is equally
as useful and maybe often more so when wanting to view support-disjoint
time series:
- instead of compressing the y-range on "longer series which have lesser
sigma" to make whatever "shorter but larger-sigma series" pin to it at
an intersect time step, this instead will expand the price ranges
based on the earliest time step in each series.
- the output global-returns-overlay-range for any N-set of series is
the same as in the previous "pin to intersect time" technique.
- the only time this technique seems less useful is for overlaying
market feeds which have the same destination asset but different
source assets (eg. btceur and btcusd on the same chart since if one
of the series is shorter it will always be aligned to the earliest
datum on the longer instead of more naturally to the intersect sample
level as was in the previous approach).
As such I'm going to keep this technique as discovered and will later
add back optional support for the "align to intersect" approach from
previous (which will again require detecting the highest dispersion
curve, direction-agnostic) and pin all minors to the price level at which
they start on the major.
Further details of the implementation rework in
`.interact_graphics_cycle()` include:
- add `intersect_from_longer()` to detect and deliver a common datum
from 2 series which are different in length: the first time-index
sample in the longer (see the sketch after this list).
- rewrite the drafted `OverlayT` to only compute (inversed log-returns)
transforms for a single direction and use 2 instances, one for each
direction, inside the `Viz`-overlay iteration loop.
- do all dispersion-per-side major curve detection in the first pass of
all `Viz`s on a plot, then update the `OverlayT` instances for
each side, compensating for any length mismatch and
rescale-to-minor cases in each loop cycle.
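A sketch of the intersect helper, assuming structured arrays with a `'time'` field (signature simplified from the real one):

```python
import numpy as np

def intersect_from_longer(
    start_t_first: float,
    in_view_first: np.ndarray,
    start_t_second: float,
    in_view_second: np.ndarray,
) -> np.ndarray:
    # pick the longer (earlier-starting) series and find the
    # sample in it at the shorter one's first time stamp
    longer, find_t = (
        (in_view_second, start_t_first)
        if start_t_first > start_t_second
        else (in_view_first, start_t_second)
    )
    i = int(np.searchsorted(longer['time'], find_t, side='left'))
    return longer[i]
```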
Previously we were aligning the child's `PlotItem` to the "root" (top
most) overlay's `ViewBox`.. smh. This is why there was a weird gap on the
LHS next to the 'left' price axes: something weird in the implied axes
offsets was getting jammed in that rect.
Also comments out the skipping of moving axes from the overlay's
`PlotItem.layout` to the root's linear layout(s) when an overlay's axis
is read as not visible; this isn't really necessary nor useful and if we
want to remove the axes entirely we should do it explicitly and/or
provide a way through the `ComposeGridLayout` API.
Despite there being artifacts when interacting, the speedups when
cross-hair-ing are just too good to ignore. We can always play with
disabling caches when interaction takes place much like we do with feed
pausing.
When zoomed in a lot, and thus a quote driven y-range resize takes place,
it makes more sense to increase the `range_margin: float` input to
`._set_yrange()` to ensure all L1 labels stay in view; generally the
more zoomed in,
- the smaller the y-range is and thus the larger the needed margin (on
that range's dispersion diff) would be,
- it's more likely to get a last datum move outside the previous range.
Also, always do overlayT style scaling on the slow chart whenever it
treads.
Since it can be desirable to dynamically adjust inputs to the y-ranging
method (such as in the display loop when a chart is very zoomed in), this
adds such support through a new `yrange_kwargs: dict[Viz, dict]` which
replaces the `yrange` tuple we were passing through prior. Also, adjusts
the y-range margin back to the original 0.09 of the diff now that we can
support dynamic control.
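Shape-wise the new input looks something like this (keyed by `Viz` in the real code; plain `str` keys here for brevity):

```python
computed_ranges: dict[str, tuple[float, float]] = {
    'btcusdt': (99.0, 101.0),
}

yrange_kwargs: dict[str, dict] = {
    viz: {
        'yrange': (ymn, ymx),
        # bump this up when very zoomed in so L1 labels stay
        # in view; 0.09 of the diff otherwise
        'range_margin': 0.09,
    }
    for viz, (ymn, ymx) in computed_ranges.items()
}
```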