The slow (history) chart requires its own y-range checker logic which
needs to be run in 2 cases:
- the last datum is in view and goes outside the previous mx/mn in view
- the chart is incremented a step
Since we'd otherwise duplicate this logic, this patch also factors the
incremental graphics update info "reading" into a new
`DisplayState.incr_info()` method that can be configured for a chart and
input state and returns all the relevant "graphics update measures" in
a tuple (for now).
Use this method throughout the rest of the display loop for both fast
and slow chart checks and in the `increment_history_view()` slow chart
task.
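A minimal sketch of the kind of per-cycle "measures" an `incr_info()`-style
reader could return; the field names, tuple layout and thresholds here are
illustrative assumptions, not piker's actual API:

```python
from dataclasses import dataclass

@dataclass
class DisplayState:
    last_index: int = 0

    def incr_info(
        self,
        shm_index: int,  # current last index of the backing shm buffer
        uppx: float,     # current units-per-pixel in x for the chart
    ) -> tuple[int, bool, bool]:
        # datums appended since the last update cycle
        i_diff = shm_index - self.last_index
        should_update = i_diff > 0
        # only "append" a new graphics step once enough datums have
        # accumulated to occupy a fresh x-pixel when zoomed out
        do_append = i_diff >= max(uppx, 1)
        if do_append:
            self.last_index = shm_index
        return i_diff, should_update, do_append
```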
Use the new `Feed.get_ds_info()` method in a poll loop to definitively
get the inter-chart sampling info and avoid races with shm buffer
backfilling.
Also, factor the history increment closure-task into
`graphics_update_loop()` which will make it clearer how to factor
all the "should we update" logic into some `DisplayState` API.
If you spawn a brokerd set and no `ib` data feed was started (via our
`.data.feed.Feed` api) then there will be no active client loaded and
thus won't be connected. So in these cases just return nothing, and
I guess we'll figure out real connection failures later?
Add an update call to the display loop to consistently update the last
datum in the history view chart. Compute the inter-chart sampling ratio
and use it to sync the linear region.
Add a first draft of a working `pyqtgraph.LinearRegionItem` link between
a history view chart (+ data set) and the normal real-time "HFT" chart
set.
Add the history view (aka more downsampled data view) chart set to the
rt/hft set's splitter as its "first widget". Hook up linear region
callbacks to enable syncing between charts including compensating for
the downsampling rate ratio (in this case hardcoded 60 since 1s to 1M,
but we'll actually compute it going forward obvs).
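A hedged sketch of the region-sync arithmetic: map the rt chart's visible
x-range into the history chart's downsampled index space. The hardcoded 60
mirrors the 1s -> 1M case above; in general the ratio would be the history
step divided by the rt step, and `hist_offset` plus the helper name are
assumptions:

```python
def rt_to_hist_range(
    rt_left: float,
    rt_right: float,
    ratio: float = 60.0,       # hist_step_s / rt_step_s
    hist_offset: float = 0.0,  # index alignment between the two buffers
) -> tuple[float, float]:
    '''Convert an rt-chart view x-range to history-chart coordinates.'''
    return (
        hist_offset + rt_left / ratio,
        hist_offset + rt_right / ratio,
    )

# a pg.LinearRegionItem on the history chart can then be kept in sync via
# region.setRegion(rt_to_hist_range(*rt_viewbox.viewRange()[0]))
```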
More to come dawgys..
Adds an additional `GodWidget.hist_linked: LinkedSplits` alongside the
renamed `.rt_linked` to enable 2 sets of linked charts with different
sampled data sets/flows. The history set is added without "all the
fixins" for now (i.e. no order mode sidepane or search integration) such
that it is merely a top level chart which shows a much longer term
history and can be added to the UI via embedding the entire history
linked-splits instance into the real-time linked set's splitter.
Further impl deats:
- adjust the `GodWidget._chart_cache: dict[str, tuple]` to store both
linked split chart sets per symbol so that symbol switching will
continue to work with the added history chart (set).
- rework `.load_symbol()` to operate on both the real-time (HFT) chart
set and the history set.
- rework `LinkedSplits.set_split_sizes()` to compensate for the history
chart and do more detailed height-calc arithmetic to make it appear
by default as a minor sub-chart.
- adjust `LinkedSplits.add_plot()` and `ChartPlotWidget` internals to allow
adding a plot without a sidepane and/or container `ChartnPane`
composite widget by checking for a `sidepane == False` input.
- make `.default_view()` accept a manual y-axis offset kwarg.
- adjust search mode to provide history linked splits to
`.set_chart_symbol()` call.
As part of supporting a "history view" chart which shows downsampled
datums alongside our 1s (or higher) sampled OHLC we need a separate
buffer to store the slower history from broker backends. This begins
that design by allocating 2 buffers:
- `rt_shm: ShmArray` which maps to a `/dev/shm/` file with `_rt` suffix
- `hist_shm: ShmArray` which maps to a file with `_hist` suffix
Deliver both of these shms back from `manage_history()` and load
them as `Feed.rt_shm`/`.hist_shm` on the client side.
Impl deats:
- init the rt buffer with the first datum from loaded history and
assign all OHLC values to that row's 'close' and the vlm to 0.
- pass the hist buffer to the backfiller task
- only spawn **one** global sampler array-row increment task per
`brokerd` and pass in the 1s delay which we presume is our lowest
OHLC sample rate for now.
- drop `open_sample_step_stream()` and just move its body contents into
`Feed.index_stream()`
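For illustration, a tiny sketch of the dual-buffer naming and the
rt-seeding rule described above (not piker's actual shm API; the helper
names are made up):

```python
def shm_keys(fqsn: str) -> tuple[str, str]:
    '''Return the (rt, hist) shared-memory key names for a symbol.'''
    return f'{fqsn}_rt', f'{fqsn}_hist'

def seed_rt_from_hist(last_hist_row: dict) -> dict:
    '''First rt datum: all OHLC fields start at the last history close
    and volume starts at zero.'''
    close = last_hist_row['close']
    return {
        'open': close,
        'high': close,
        'low': close,
        'close': close,
        'volume': 0.0,
    }
```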
Instead of worrying about the increment period per shm subscription,
just use the value passed as input and presume the caller knows that
only one task is necessary and that the wakeup (sampling) period should
be the shortest that is needed.
It's very unlikely we don't want at least a 1s sampling (both in terms
of task switching cost and general usage) which will eventually ship as
the default "real-time" feed "timeframe". Further, this "fast" increment
sampling task can handle all lower sampling periods (eg. 1m, 5m, 1H)
based on the current implementation just the same.
Also, add a global default sample period as `_defaul_delay_s` for use in
other internal modules.
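A rough `trio` sketch, under the assumptions above, of a single global
increment task servicing all registered buffers at the shortest (1s)
period; the `shm.push()` call stands in for whatever the real array-append
API is:

```python
import trio

async def increment_ohlc_buffers(
    shms: list,             # registered ShmArray-like buffers
    delay_s: float = 1.0,   # presumed lowest OHLC sample rate
) -> None:
    while True:
        await trio.sleep(delay_s)
        for shm in shms:
            # copy the last row forward as the new "current" sample;
            # slower timeframes (1m, 5m, 1H) can be derived downstream
            # from this single fastest stream.
            shm.push(shm.array[-1:])
```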
Clearly, the linter didn't help us here.. but, just pass the
`brokerd` time for now in the `.broker_time` field; we can't get it from
the fill-case incremental updates in the `openOrders` sub. Add some
notes about this and how we might handle backends with this
limitation.
This fixes a regression added after moving the msg parsing to later in
the order mode startup sequence. The `Allocator` needs to be configured
*to* the initial pos otherwise default settings will show in the UI..
Move the startup config logic from inside `mk_allocator()` to
`PositionTracker.update_from_pp()` and add a flag to allow setting the
`.startup_pp` from the current live one as is needed during initial
load.
In the short case (-ve size) we had a bug where the last sub-slot's worth
of exit size would never be limited to zero once the allocator limit pos
size was hit (i.e. you could keep going more -ve on the pos,
exponentially per slot over the limit). It's a simple fix, just
a `max()` around the `l_sub_pp` var used in the next-step-size calc.
Resolves #392
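The fix in miniature (variable and function names here are stand-ins, not
the exact allocator code): clamp the remaining limit headroom so the next
step size can never go below zero once the limit pos size is hit:

```python
def next_step_size(limit_size: float, live_size: float, slots: int) -> float:
    slot_size = limit_size / slots
    # the actual fix: `max()` so the headroom can't go -ve past the limit
    l_sub_pp = max(0.0, limit_size - abs(live_size))
    return min(slot_size, l_sub_pp)
```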
Turns out we were putting too many brokername suffixes in the symbol
field and thus the order mode msg parser wasn't matching the current
asset to said msgs correctly and pps weren't being shown...
This repairs that plus simplifies the order mode initial pos msg loading
to just delegate into `process_trade_msg()` just as is done for
real-time msg updates.
If a setting fails to apply try to log an error msg and revert to the
previous setting by not applying the UI read-update until after the new
`SettingsPane.apply_setting()` call. This prevents crashes when the user
tries to give bad inputs on editable allocator fields.
Previously we only simulated paper engine fills when the data feed's
provided L1 queue-levels matched an execution. This patch adds further
support for clear-level matches when there are real live clears on the
data feed that are faster/not synced with the L1 (aka usually during
periods of HFT).
The solution was to simply iterate the interleaved paper book entries on
both sides for said tick types and instead yield a side-specific
predicate per entry.
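A hedged sketch of what "side-specific predicates" might look like: for
each resting paper order yield a closure deciding whether a given clear
price crosses it, depending on the order's side (names and book layout are
assumptions):

```python
from typing import Callable, Iterator

def iter_cross_preds(
    buys: dict[str, float],    # oid -> limit price
    sells: dict[str, float],
) -> Iterator[tuple[str, Callable[[float], bool]]]:
    for oid, price in buys.items():
        # a resting buy fills when the clear trades at or below its limit
        yield oid, (lambda clear, p=price: clear <= p)
    for oid, price in sells.items():
        # a resting sell fills when the clear trades at or above its limit
        yield oid, (lambda clear, p=price: clear >= p)
```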
Not entirely sure why this all of a sudden became a problem but it seems
price changes on order edits were sometimes resulting in key errors when
modifying paper book entries quickly. This changes the implementation to
not care about matching the last price when keying/popping old orders
and use `bidict`s to more easily pop cleared orders in the paper loop.
When the paper engine is used it seems we can definitely hit races where
order ack msgs arrive close enough to status messages that `trio`
schedules the status processing before the acks. In such cases we want
to be tolerant and not crash but instead warn that we got an
unknown/out-of-order msg.
Quite a simple fix, we just assign the account-specific
`PositionTracker` to the level line's `._on_level_change()` handler
instead of whatever the current `OrderMode.current_pp` is set to.
Further this adds proper pane switching support such that when a user
modifies an order line from an account which is not the currently
selected one, the settings pane is changed to reflect the
account and thus corresponding position info for that account and
instrument B)
We were overwriting the existing loaded orders list in the per client
loop (lul) so move the def above all that.
Comment out the "try-to-cancel-inactive-orders-via-task-after-timeout"
stuff pertaining to https://github.com/erdewit/ib_insync/issues/363 for
now since we don't have a mechanism in place to cancel the re-cancel
task once the order is cancelled - plus who knows if this is even the
best way to do it..
Fills seem to be dual emitted from both the `status` and `fill` events
in `ib_insync` internals and more or less contain the same data nested
inside their `Trade` type. We started handling the 'fill' case to deal
with a race issue in commissions/cost report tracking but we don't
really want to leak that same race to incremental fills vs.
order-"closed" tracking.. So go back to only emitting the fill msgs
on statuses and a "closed" on `.remaining == 0`.
`ib` is super good at not being reliable with order event sequencing
and duplication of fill info. This adds some guards to try and avoid
popping the last status msg too early if we end up receiving
a `'closed'` before the expected `'fill'` event(s). Further delete the
`status_msg` ref on each iteration to avoid stale reference lookups in
the relay task/loop.
This includes darks, lives and alerts with all connecting clients
being broadcast all existing order-flow dialog states. Obviously
for now darks and alerts only live as long as the `emsd` actor lifetime
(though we will store these in local state eventually) and "live" orders
have lifetimes managed by their respective backend broker.
The details of this change-set are extensive, so here we go..
Messaging schema:
- change the messaging `Status` status-key set to:
`resp: Literal['pending', 'open', 'dark_open', 'triggered',
'closed', 'fill', 'canceled', 'error']`
which better reflects the semantics of order lifetimes and was
partially inspired by the status keys `kraken` provides for their
order-entry API. The prior key set was based on `ib`'s horrible
semantics which sound like they're right out of the 80s..
Also, we reflect this same set in the `BrokerdStatus` msg and likely
we'll just get rid of the separate brokerd-dialog side type
eventually.
- use `Literal` type annots for statuses where applicable and as they
are supported by `msgspec`.
- add additional optional `Status` fields:
- `req: Order` to allow each status msg to optionally ref its
commanding order-request msg allowing at least a request-response
style implicit tracing in all response msgs.
- `src: str` tag string to show the source of the msg.
- `reqid: str | int` such that the ems can relay the `brokerd`
request id both to the client side and have one spot to look
up prior status msgs.
- draft an (unused/commented) `Dialog` type which can eventually be used
at all EMS endpoints to track msg-flow states
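A sketch of the schema above using `msgspec` + `Literal`; the exact field
set, defaults and the `Order` stub are simplified assumptions rather than
the real piker definitions:

```python
from typing import Literal, Optional, Union
from msgspec import Struct

class Order(Struct):
    oid: str
    symbol: str
    action: str
    price: float
    size: float

class Status(Struct):
    time_ns: int
    oid: str
    resp: Literal[
        'pending', 'open', 'dark_open', 'triggered',
        'closed', 'fill', 'canceled', 'error',
    ]
    reqid: Optional[Union[int, str]] = None  # brokerd-side request id
    req: Optional[Order] = None              # boxed originating request
    src: Optional[str] = None                # msg source tag, eg. 'dark'
```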
EMS engine adjustments/rework:
- use the new status key set throughout and expect `BrokerdStatus` msgs
to use the same new schema as `Status`.
- add a `_DarkBook._active: dict[str, Status]` table which is now used for
all per-leg-dialog associations and order flow state tracking
allowing both the brokerd-relay and client-request handler loops
to read/write the same msg-table and providing for delivery of
the overall EMS-active-orders state to newly/re-connecting clients
with minimal processing; this table replaces the prior
`._ems_entries` table.
- add `Router.client_broadcast()` to send a msg to all currently
connected peers.
- a variety of msg handler block logic tweaks including more `case:`
blocks to make things both flatter and more explicit:
- for the relay loop move all `Status` msg update and sending to
within each block instead of a fallthrough case plus hard-to-follow
state logic.
- add a specific case for unhandled backend status keys and just log
them.
- pop alerts from `._active` immediately once triggered.
- where possible mutate status msg fields instead of instantiating new
ones.
- insert and expect `Order` instances in the dark clearing loop and
adjust `case:` blocks accordingly.
- tag `dark_open` and `triggered` statuses as sourced from the ems.
- drop all the `ChainMap` stuff for now; we're going to make our own
`Dialog` type for this purpose..
Order mode rework:
- always parse the `Status` msg and use match syntax cases with object
patterns, hackily assign the `.req` in many blocks to work around not
yet having proper on-the-wire decoding yet.
- make `.load_unknown_dialog_from_msg()` expect a `Status` with boxed
`.req: Order` as input.
- change `OrderDialog` -> `Dialog` in prep for a general purpose type
of the same name.
`ib` backend order loading support:
- do "closed" status detection inside the msg-relay loop instead
of expecting the ems to do this..
- add an attempt to cancel inactive orders by scheduling cancel
submissions continually (no idea if this works).
- add a status map to go from the 80s keys to our new set.
- deliver `Status` msgs with an embedded `Order` for existing live order
loading and make sure to try and get the source exchange info (instead
of SMART).
Paper engine ported to match:
- use new status keys in `BrokerdStatus` msgs
- use `match:` syntax in request handler loop
Ideally every client that connects to the ems can know its state
(immediately) meaning relay all the order dialogs that are currently
active. This adds full (hacky WIP) support to receive those dialog
(msgs) from the `open_ems()` startup values via the `.started()` msg
from `_emsd_main()`.
Further this adds support to the order mode chart-UI to display existing
(live) orders on the chart during startup. Details include,
- add an `OrderMode.load_unknown_dialog_from_msg()` for processing and
displaying a ``BrokerdStatus`` (for now) msg from the EMS that was not
previously created by the current ems client and registering and
displaying it on the chart.
- break out the ems msg processing into a new
`order_mode.process_trade_msg()` func so that it can be called on the
startup dialog-msg set as well as eventually be used as a more general
low level auto-strat API (eg. when we get to displaying auto-strat and
group trading automatically on an observing chart UI).
- hackyness around msg-processing for the dialogs delivery since we're
technically delivering `BrokerdStatus` msgs when the client-side
processing technically expects `Status` msgs.. we'll rectify this
soon!
In order to avoid missed existing order message emissions on startup we
need to be sure the client side stream is registered with the router
first. So defer starting the
`translate_and_relay_brokerd_events()` task until inside the client
stream block and start the task using the dark clearing loop nursery.
Also, ensure `oid` (and thus for `ib` the equivalent re-used `reqid`)
are cast to `str` before registering the dark book. Deliver the dark
book entries as part of the `_emsd_main()` context `.started()` values.
This seems to have been broken in refactoring from commit 279c899de5
which was never tested against multiple accounts/clients.
The fix is 2 part:
- position tables are now correctly loaded ahead of time and used by
account for each connected client in processing of ledgers and
existing positions.
- a task for each API client is started (as implemented prior) so that
we actually get status updates for every client used for submissions.
Further we add a bit of code using `bisect.insort()` to normalize
ledgers to a datetime-sorted list of records (though pretty sure the `dict`
transform ruins it?) in an effort to avoid issues with ledger
transaction processing with previously minimized `Position.clears`
tables, which should (but might not?) avoid incorporating clear events
prior to the last "net-zero" positioning state.
This firstly changes `.audit_sizing()` => `.ensure_state()` and makes it
return `None` as well as only error when positions with a (config
denoted) split ratio do not size as expected.
Further refinements,
- add an `.expired()` predicate method
- always return a size of zero from `.calc_size()` on expired assets
- load each `pps.toml` entry's clears table into `Transaction`s and use
`.add_clear()` during init-from-config.
In order to avoid issues with reloading ledger and API trades after an
existing `pps.toml` exists we have to make sure we not only avoid
duplicate entries but also avoid re-adding entries that would have been
removed during a prior call to the `Position.minimize_clears()` filter.
The easiest way to do this is to sort on timestamps and avoid adding any
record that pre-existed the last net-zero position ledger event that
`.minimize_clears()` discarded. In order to implement this it means
parsing config file clears table's timestamps into datetime objects for
inequality checks and we add a `Position.first_clear_dt` attr for
storing this value when managing pps in object form but never store it
in the config (since it should be obvious from the sorted clear event
table).
The (partial) fills from this sub are most indicative of clears (also
says support) whereas the msgs in the `ownTrades` sub are only emitted
after the entire order request has completed - there is no size-vlm
remaining.
Further enhancements:
- this also includes proper subscription-syncing inside `subscribe()` with
a small pre-msg-loop which waits on ack-msgs for each sub and raises any
errors. This approach should probably be implemented for the data feed
streams as well.
- configure the `ownTrades` sub to not bother sending historical data on
startup.
- make the `openOrders` sub include rate limit counters.
- handle the rare case where the ems is trying to cancel an order which
was just edited and hasn't yet had its new `txid` registered.
Since we figured out how to pass through ems dialog ids to the
`openOrders` sub we don't really need to do much with status updates
other than error handling. This drops `process_status()` and moves the
error handling logic into a status handler sub-block; we now just
info-log status updates for troubleshooting purposes.
Why we need so many fields to accomplish passing through a dialog key to
orders is beyond me but this is how they do it with edits..
Allows not having to handle `editOrderStatus` msgs to update the dialog
key table and instead just do it in the `openOrders` sub by checking the
canceled msg for a 'cancel_reason' of 'Order replaced', in which case we
just pop the txid and wait for the new order the kraken backend engine
will submit automatically, which will now have the correct 'userref'
value we passed in via the `newuserref`, and then we add that new `txid`
to our table.
Turns out you can pass both thus making mapping an ems `oid` to
a brokerd-side `reqid` much more simple. This allows us to avoid keeping
as much local dialog state but with still the following caveats:
- `ok` (acked) `editOrder` msgs must update the reqid<->txid map
- only pop `reqids2txids` entries inside the `cancelOrderStatus` handler
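Both caveats above, sketched with a `bidict`-backed map (handler names are
made up; the point is just the bookkeeping):

```python
from bidict import bidict

reqids2txids: bidict[int, str] = bidict()

def on_edit_ok(reqid: int, new_txid: str) -> None:
    # an acked `editOrder` replaces the underlying kraken txid for our reqid
    reqids2txids.forceput(reqid, new_txid)

def on_cancel_order_status(txid: str) -> None:
    # only drop the entry once kraken confirms the cancel
    reqids2txids.inverse.pop(txid, None)
```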
If we don't have a pos table built out already (in mem) we can't figure
out the likely dst asset (since there's no pair entry to guide us) that
we should use to search for withdrawal transactions; so move it later.
Further this ports to the new api changes in `piker.pp` that will land
with #365.
This ended up driving the rework of the `piker.pp` apis to use context
manager + table style which resulted in a much easier to follow
state/update system B). Also added is a flag to do a manual simulation
of a "fill triggered rt pp msg" which requires the user to delete the
last ledgered trade entry from config files and then allowing that trade
to emit through the `openOrders` sub and update client shortly after
order mode boot; this is how the rt updates were verified to work
without doing even more live orders 😂.
Patch details:
- open both `open_trade_ledger()` and `open_pps()` inside the trade
dialog startup and conduct a "pp state sync" logic phase where we now
pull the account balances and incrementally load pp data (in order,
from `pps.toml`, ledger, api) until we can generate the asset balance
by reverse incrementing through trade history eventually erroring out
if we can't reproduce the balance value.
- rework `trade2pps()` to take in the `PpTable` and generate new
ems msgs from table updates.
- return the new `dict[str, Transaction]` expected from
`norm_trade_records()`
- only update pp config and ledger on dialog exit.
Since our ems doesn't actually do blocking style client-side submission
updates, thus resulting in the client being able to update an existing
order's state before knowing its current state, we can run into race
conditions where for some backends an order is updated using the wrong
order id. For kraken we manually implement detecting this race (lol, for
now anyway) such that when a new client side edit comes in before the
new `txid` is known, we simply expect the handler loop to cancel the
order. Further this adds cancellation on arbitrary status errors, like
rate limits.
Also this adds 2 leg (ems <-> brokerd <-> kraken) msg tracing using
a `collections.ChainMap` which is likely going to end up being the POC
for a more general data structure recommended for backends that need to
trace msg flow for translation with the ems.
Turns out the `reqid` value returned by the `openOrders` and `ownTrades`
subs (the one brokerd sends to the kraken api in order requests) is
always set to zero, which seems to be a bug? So this includes patches to
work around that as well as reliance on the `openOrders` sub to do most
`BrokerdStatus` updates since `XOrderStatus` events don't seem to have
much data in them at all (they almost look like pure ack events so maybe
they aren't affirmative of final state changes anyway..).
Other fixes:
- respond with a `BrokerdOrderAck` immediately after `reqid` generation,
not after order submission, to ensure the ems has a valid `reqid`
*before* kraken api events are relayed through.
- add a `reqids2txids: bidict[int, str]` which maps brokerd genned
`reqid`s to kraken-side `txid`s since (as mentioned above) the
clearing and state endpoints don't relay back this value (it's always
0...)
- add log messages for each sub so that (at least for now) we can see
exact msg contents coming from kraken.
- drop `.remaining` calcs for now since we need to keep record of the
order states manually in order to retrieve the original submission
vlm..
- fix the `openOrders` case for fills, in this case the message includes
no `status` field and thus we must catch it in a block *after* the
normal state handler to avoid masking.
- drop response msg generation from the cancel status case since we
can do it again from the `openOrders` handler and sending a double
status causes issues on the client side.
- add a shite ton of notes around all this missing `reqid` stuff.
More or less just to avoid orders the user wasn't aware of persisting
until we get "open order relaying" through the ems working.
Some further fixes which required a new `reqids2txids` map which keeps
track of which `kraken` "txid" is mapped to our `reqid: int`; mainly
this was needed for cancel requests which require knowing the underlying
`txid`s (since apparently kraken doesn't keep track of the "reqid" we
pass it). Pass the ws instance into `handle_order_updates()` to enable
cancelling orders on startup. Don't key error on unknown `reqid`
values (for eg. when receiving historical trade events on startup).
Handle cancel requests first in the ems side loop.
Since we seem to always be able to get back the `reqid`/`userref` value
we send to kraken ws endpoints, we can use this as our brokerd side
order id and avoid all race cases with getting the true `txid` value
that `kraken` assigns (and which changes when you do "edits"
:eyeroll:). This simplifies status updates by allowing our relay loop
just to pass back our generated `.reqid` verbatim and allows responding
with a `BrokerdOrderAck` immediately in the request handler task which
should guarantee there are no further race conditions with the relay
loop and mapping `txid`s from kraken.. and figuring out wtf to do when
they change, etc.
Addressing the same issue as in #350 where we need to compute position
updates using the *first read* from the ledger **before** we update it,
to make sure `Position.lifo_update()` gets called and **not skipped**
because new trades were read as clears entries but haven't actually been
included in update calcs yet.
Main change here is to convert `update_ledger()` into a context mngr so
that the ledger write is committed after pps updates using
`pp.update_pps_conf()`..
This is basically a hotfix to #346 as well.
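Roughly the context-manager shape being described (file format and helper
naming are assumptions, not the actual `update_ledger()` signature): the
caller gets the pre-update ledger first, runs its pp calcs, and the write
only happens on exit:

```python
from contextlib import contextmanager
import toml

@contextmanager
def open_ledger_update(path: str):
    with open(path) as f:
        ledger: dict = toml.load(f)
    # caller reads the *pre-update* ledger and runs pp update calcs here
    yield ledger
    # commit only after the body completes so position calcs never see
    # the new records as "already cleared"
    with open(path, 'w') as f:
        toml.dump(ledger, f)
```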
Turns out the EMS can support this as originally expected: you can
update a `brokerd`-side `.reqid` through a `BrokerdAck` msg and the ems
will update its cross-dialog (leg) tracking correctly! The issue was
a bug in the `editOrderStatus` msg handling and appropriate tracking
of the correct `.oid` (ems uid) on the kraken side. This unfortunately
required adding a `emsflow: dict[str, list[BrokerdOrder]]` msg flow
tracing table which means the broker daemon is tracking all the msg flow
with the ems, though I'm wondering now if this is just good practice
anyway and maybe we should offer a small primitive type from our msging
utils to aid with this? I've used such constructs in event handling
systems prior.
There's a lot more factoring that can be done after these changes as
well but the quick detailed summary is,
- rework the `handle_order_requests()` loop to use `match:` syntax and
update the new `emsflow` table on every new request from the ems.
- fix the `editOrderStatus` case pattern to not include an error msg and
thus actually be triggered to respond to the ems with a `BrokerdAck`
containing the new `.reqid`, the new kraken side `txid`.
- skip any `openOrders` msgs which are detected as being kraken's
internal order "edits" by matching on the `cancel_reason` field.
- update the `emsflow` table in all ws-stream msg handling blocks
with responses sent to the ems.
Relates to #290
Move to using the websocket API for all order control ops and drop
the sync rest api approach which resulted in a bunch of buggy races.
Further this gets us much faster (batch) order cancellation for free
and a simpler ems request handler loop. We now heavily leverage the new
py3.10 `match:` syntax for all kraken-side API msg parsing and
processing and handle both the `openOrders` and `ownTrades` subscription
streams.
We also block "order editing" (by immediate cancellation) for now since
the EMS isn't entirely yet equipped to handle brokerd side `.reqid`
changes (which is how kraken implements so called order "updates" or
"edits") for a given order-request dialog and we may want to even
consider just implementing "updates" ourselves via independent cancel
and submit requests? Definitely something to ponder. Alternatively we
can "masquerade" such updates behind the count-style `.oid` remapping we
had to implement anyway (kraken's limitation) and maybe everything will
just work?
Further details in this patch:
- create 2 tables for tracking the EMS's `.oid` (uuid4) value to `int`s
that kraken expects (for `reqid`s): `ids` and `reqmsgs` which enable
local lookup of ems uids to piker-backend-client-side request ids and
received order messages.
- add `openOrders` sub support which more or less directly relays to
equivalent `BrokerdStatus` updates and calc the `.filled` and
`.remaining` values based on cleared vlm updates.
- add handler blocks for `[add/edit/cancel]OrderStatus` events including
error msg cases.
- don't do any order request response processing in
`handle_order_requests()` since responses are always received via one
(or both?) of the new ws subs: `ownTrades` and `openOrders` and thus
such msgs are now handled in the response relay loop.
Relates to #290. Resolves #310, #296
This drops the use of `pp.update_pps_conf()` (and friends) and instead
moves to using the context style `open_trade_ledger()` and `open_pps()`
managers for faster pp msg gen due to delayed file writing (which was
the main source of update latency).
In order to make this work with potentially multiple accounts this also
uses an exit stack which loads each ledger / `pps.toml` into an account
id mapped `dict`; a POC for likely how we should implement some higher
level position manager api.
The original implementation of `.calc_be_price()` wasn't correct since
the real so called "price per unit" (ppu), is actually defined by
a recurrence relation (which is why the original state-updated
`.lifo_update()` approach worked well) and requires the previous ppu to
be weighted by the new accumulated position size when considering a new
clear event. The ppu is the price that above or below which the trader
takes a win or loss on transacting one unit of the trading asset and
thus it is the true "break even price" that determines making or losing
money per fill. This patch fixes the implementation to use trailing
windows of the accumulated size and ppu to compute the next ppu value
for any new clear event as well as handle rare cases where the
"direction" changes polarity (eg. long to short in a single order). The
new method is `Position.calc_ppu()` and further details of the relation
can be seen in the doc strings.
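A hedged sketch of the recurrence (not the actual `Position.calc_ppu()`
code): each accumulating clear re-weights the previous ppu by the prior
accumulated size (folding in transaction cost), reductions leave the ppu
unchanged, and a polarity flip restarts it at the flipping clear's price:

```python
def calc_ppu(clears: list[dict]) -> float:
    ppu = 0.0
    accum = 0.0
    for c in clears:
        size, price, cost = c['size'], c['price'], c.get('cost', 0.0)
        if not size:
            continue
        new_accum = accum + size
        if accum == 0 or (accum > 0) == (size > 0):
            # opening or adding: weight the previous ppu by the prior
            # accumulated size and fold in the new clear + its txn cost
            ppu = (
                ppu * abs(accum) + price * abs(size) + cost
            ) / abs(new_accum)
        elif new_accum and (new_accum > 0) != (accum > 0):
            # direction flipped within a single clear: ppu restarts here
            ppu = price
        # a plain reduction (same sign, smaller size) keeps ppu as-is
        accum = new_accum
    return ppu
```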
This patch also includes a wack-ton of clean ups and removals in an
effort to refine position management api for easier use in new backends:
- drop `update_pps_conf()`, `load_pps_from_toml()` and rename
`load_trades_from_ledger()` -> `load_pps_from_ledger()`.
- extend `PpTable` to have a `.to_toml()` method which returns the
active set of positions ready to be serialized to the `pps.toml` file
which it collects from calling,
- `PpTable.dump_active()` which now returns double dicts of the
open/closed pp object maps.
- make `Position.minimize_clears()` now iterate the clears table in
chronological order (instead of reverse) and only drop fills prior
to any zero-size state (the old reversed way can result in incorrect
history-size-retracement in cases where a position is lessened but
not completely exited).
- drop `Position.add_clear()` and instead just manually add entries
inside `.update_from_trans()` and also add `accum_size` and `ppu`
fields to every entry, thus creating a position "history" sequence of
the ppu and accum size for every position and preparing for being
able to show "position lifetimes" in the UI.
- move fqsn getting into `Position.to_pretoml()`.
Use the new `.calc_[be_price/size]()` methods when serializing to and
from the `pps.toml` format and add an audit method which will warn about
mismatched values and assign the clears table calculated values pre-write.
Drop the `.lifo_update()` method and instead allow both
`.size`/`.be_price` properties to exist (for non-ledger related uses of
`Position`) alongside the new calc methods and only get fussy about
*what* the properties are set to in the case of ledger audits.
Also changes `Position.update()` -> `.add_clear()`.
Since we're going to need them anyway for desired features, add
2 new `Position` methods:
- `.calc_be_price()` which computes the breakeven cost basis price
from the entries in the clears table.
- `.calc_size()` which just sums the clear sizes.
Add a `cost_scalar: float` control to the `.update_from_trans()` method
to allow manual adjustment of the cost weighting for the case where
a "non-symmetrical" model is wanted.
Go back to always trying to write the backing ledger files on exit, even
when there's an error (obvs without the `return`-inside-`finally:`
f$#%-up).
Can't believe i missed this but any `return` inside a `finally` will
suppress the error from the `try:` part... XD
Thought i was losing my mind when the ledger was mutated and then
an error just after wasn't getting raised.. lul.
Never again...
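The gotcha in miniature:

```python
def write_ledger() -> bool:
    try:
        raise RuntimeError('boom')
    finally:
        return True  # the RuntimeError silently disappears here..

assert write_ledger() is True  # no exception ever surfaces
```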
In order to avoid double transaction adds/updates and too-early-discard
of zero sized pps (like when trades are loaded from a backend broker but
were already added to a ledger or `pps.toml` prior) we now **don't** pop
such `Position` entries from the `.pps` table in order to keep each
position's clears table always in place. This avoids the edge case where
an entry was removed too early (due to zero size) but then duplicate
trade entries that were in that entry's clears show up from the backend
and are entered into a new entry, resulting in an incorrect size in the
new entry.. We still only push non-net-zero entries to the `pps.toml`.
More fixes:
- return the updated set of `Positions` from `.lifo_update()`.
- return the full table set from `update_pps()`.
- use `PpTable.update_from_trans()` more throughout.
- always write the `pps.toml` on `open_pps()` exit.
- only return table from `load_pps_from_toml()`.
In an effort to begin allowing backends to have more granular control
over position updates, particular in the case where they need to be
reloaded from a trades ledger, this adds a new table API which can
be loaded using `open_pps()`.
- offer an `.update_trans()` method which takes in a `dict` of
`Transactions` and updates the current table of `Positions` from it.
- add a `.dump_active()` which renders the active pp entries dict in
a format ready for toml serialization and all closed positions since
the last update (we might want to not drop these?)
All other module-function apis currently in use should remain working as
before for the moment.
Change `.find_contract()` -> `.find_contracts()` to allow multi-search
for so called "ambiguous" contracts (like for `Future`s) such that the
method now returns a `list` of tracts and populates the contract cache
with all specific tracts retrieved. Let it take in an (unvalidated)
contract that will be fqsn-style-tokenized such that it can be called
from `.search_symbols()` (though we're not quite there yet XD).
More stuff,
- add `Client.parse_patt2fqsn()` which is an fqsn to token unpacker
built from the original logic in the old `.find_contract()`.
- handle fiat/forex pairs with the `'CASH'` sectype.
- add a flag to allow unqualified contracts to fail with a warning msg.
- populate the client's contract cache with all expiries of
an ambiguous derivative.
- allow `.con_deats()` to warn msg instead of raise on def-not-found.
- add commented `assert 0` which was triggering a debugger deadlock in
`tractor` which we still haven't been able to create a unit test for.
Minimize calling `.data._shmarray.attach_shm_array()` as much as is
possible to avoid the crash from #332. This is the suggested hack from
issue #359.
Resolves https://github.com/pikers/piker/issues/359
Not sure why I put this off for so long but the check is in now such
that if the market isn't open or no rt quote comes in from the first
query, we just pull from the last shm history 'close' value.
Includes another fix to avoid raising when a double remove on the client
side stream from the registry sometimes happens.
Not sure this didn't get caught in usage, but basically real-time
updates got broken by a rework of `update_ledger_from_api_trades()`.
The issue is that the ledger was being updated **before** calling
`piker.pp.update_pps_conf()` which resulted in the `Position.size`
not being updated correctly since the [latest added] clears passed
in via the `trade_records` arg were already found in the `.clears` table
and thus were causing the loop to skip the `Position.lifo_update()`
call..
The solution here is to not update the ledger **until after** we call
`update_pps_conf()` - it's more reads/writes but it's correct and we can
figure out a less io-heavy way to do the file writing later.
Further this includes a fix to avoid double emitting a pp update caused
by non-thorough logic that waits for a commission report to arrive
during a fill event; previously we were emitting the same message twice
due to the lack of a check for an existing comms report in the case
where the report arrives *after* the fill.
Moves to using the new `piker.pp` apis to both store real-time trade
events in a ledger file as well as emit position update msgs (which were
not in this backend at all prior) when new orders clear (aka fill).
In terms of outstanding issues,
- solves the pp update part of the bugs reported in #310
- starts a msg case block in prep for #293
Details of rework:
- move the `subscribe()` ws fixture to module level and `partial()` in
the client token instead of passing it to the instance; in prep for
removal of the `.token` attr from the `NoBsWs` wrapper.
- drop `make_auth_sub()` since it was too thin and we can just
do it all succinctly in `subscribe()`
- filter trade update msgs to those not yet stored in the toml ledger
- much better kraken api msg unpacking using new `match:` syntax B)
Resolves #311
No real-time update support (yet) but this is the first draft at writing
trades ledgers and `pps.toml` entries for the kraken backend.
Deatz:
- drop `pack_positions()`, no longer used.
- use `piker.pp` apis to both write a trades ledger file and update the
`pps.toml` inside the `trades_dialogue()` endpoint startup.
- drop the weird paper engine swap over if auth can't be done, we should
be doing something with messaging in the ems over this..
- more web API error response raising.
- pass the `pp.Transaction` set loaded from ledger into
`process_trade_msgs()` to avoid duplicate sends of already collected
trades msgs.
- add `norm_trade_records()` public endpoint (used by `piker.pp` api)
and `update_ledger()` helper.
- rejig `process_trade_msgs()` to drop the weird `try:` assertion block
and skip already-recorded-in-ledger trade msgs as well as yield *each*
trade instead of sub-sequences.
This was just implemented totally wrong but somehow worked XD
The idea was to include all trades that contribute to ongoing position
size since the last time the position was "net zero", i.e. no position
in the asset. Adjust arithmetic to *subtract* from the current size
until a zero size condition is met and then keep all those clears as
part of the "current state" clears table.
Additionally this fixes another bug where the positions freshly loaded
from a ledger *were not* being merged with the current `pps.toml` state.
Gah, was a remaining bug where if you tried to update the pps state with
both new trades and from the ledger you'd do a double add of
transactions that were cleared during an `update_pps()` loop. Instead now
keep all clears in tact until ready to serialize to the `pps.toml` file
in which cases we call a new method `Position.minimize_clears()` which
does the work of only keeping clears since the last net-zero size.
Re-implement `update_pps_conf()` update logic as a single pass loop
which does expiry and size checking for closed pps all in one pass thus
allowing us to drop `dump_active()` which was kinda redundant anyway..
Before we weren't emitting pp msgs when a position went back to "net
zero" (aka the size is zero) nor when a new one was opened (wasn't
previously loaded from the `pps.toml`). This reworks a bunch of the
incremental update logic as well as ports to the changes in the
`piker.pp` module:
- rename a few of the normalizing helpers to be more explicit.
- drop calling `pp.get_pps()` in the trades dialog task and instead
create msgs iteratively, per account, by iterating through collected
position and API trade records and calling
`pp.update_pps_conf()`.
- always from-ledger-update both positions reported from ib's pp sys and
session api trades detected on ems-trade-dialog startup.
- `update_ledger_from_api_trades()` now does **just** that: only updates
the trades ledger and returns the transaction set.
- `update_and_audit_msgs()` now audits only the input list of msgs and properly
generates new msgs for newly created positions that weren't previously
loaded from the `pps.toml`.
- use `tomli` package for reading since it's the fastest pure python
reader available apparently.
- add new fields to each pp's clears table: price, size, dt
- make `load_pps_from_toml()`'s `reload_records` a dict that can be
passed in by the caller and is verbatim used to re-read a ledger and
filter to the specified symbol set to build out fresh pp objects.
- add a `update_from_ledger: bool` flag to `load_pps_from_toml()`
to allow forcing a full backend ledger read.
- if a set of trades records is passed into `update_pps_conf()` parse
out the meta data required to cause a ledger reload as per 2 bullets
above.
- return active and closed pps in separate by-account maps from
`update_pps_conf()`.
- drop the `key_by` kwarg.
This makes it possible to refresh a single fqsn-position in one's
`pps.toml` by simply deleting the file entry, in which case, if there are
new trade records passed to `load_pps_from_toml()` via the new
`reload_records` kwarg, then the backend ledger entries matching that
symbol will be filtered and used to recompute a fresh position.
This turns out to be super handy when you have crashes that prevent
a `pps.toml` entry from being updated correctly but where the ledger
does have all the data necessary to calculate a fresh correct entry.
Since some positions obviously expire and thus shouldn't continually
exist inside a `pps.toml` add naive support for tracking and discarding
expired contracts:
- add `Transaction.expiry: Optional[pendulum.datetime]`.
- add `Position.expiry: Optional[pendulum.datetime]` which can be parsed
from a transaction ledger.
- only write pps with a non-none expiry to the `pps.toml`
- change `Position.avg_price` -> `.be_price` (be is "breakeven")
since it's a much less ambiguous name.
- change `load_pps_from_ledger()` to *not* call `dump_active()` since
for the only use case it ends up getting called later anyway.
We can probably make this better (and with less file sys accesses) later
such that we keep a consistent pps state in mem and only write async
maybe from another side-task?
What a nightmare this was.. main holdup was that cost (commissions)
reports are fired independent from "fills" so you can't really emit
a proper full position update until they both arrive.
Deatz:
- move `push_tradesies()` and relay loop in `deliver_trade_events()` to
the new py3.10 `match:` syntax B)
- subscribe for, and handle `CommissionReport` events from `ib_insync`
and repack as a `cost` event type.
- handle cons with no primary/listing exchange (like futes) in
`update_ledger_from_api_trades()` by falling back to the plain
'exchange' field.
- drop reverse fqsn lookup from ib positions map; just use contract
lookup for api trade logs since we're already connected..
- make validation in `update_and_audit()` optional via flag.
- pass in the accounts def, ib pp msg table and the proxies table to the
trade event relay task-loop.
- add `emit_pp_update()` to encapsulate a full api trade entry
incremental update which calls into the `piker.pp` apis to,
- update the ledger
- update the pps.toml
- generate a new `BrokerdPosition` msg to send to the ems
- adjust trades relay loop to only emit pp updates when a cost report
arrives for the fill/execution by maintaining a small table per exec
id.
I don't want to rant too much any more since it's pretty clear `ib` has
either zero concern for its (api) user's or a severely terrible data
management team and/or general inter-team coordination system, but this
patch more or less hacks the flex report records to be similar enough to
API "execution" / "fill" records such that they can be similarly
normalized and stored as well as processed for position calculations..
Dirty deats,
- use the `IB.fills()` method for pulling current session trade events
since it's both recommended in the docs and does seem to capture
more extensive meta-data.
- add an `update_ledger_from_api()` helper which does all the insane work
of making sure api trade entries are usable both within piker's global
fqsn system but also compatible with incremental updates of positions
computed from trade ledgers derived from ib's "flex reports".
- add "auditting" of `ib`'s reported positioning API messages by
comparison with piker's new "traders first" breakeven price style and
complain via logging on mismatches.
- handle buy vs. sell arithmetic (via a +ve or -ve multiplier) to make
"size" arithmetic work for API trade entries..
- draft out options contract transaction parsing but skip in pps
generation for now.
- always use the "execution id" as ledger keys both in flex and api
trade processing.
- for whatever weird reason `ib_insync` doesn't include the so called
"primary exchange" in contracts reported in fill events, so do manual
contract lookups in such cases such that pps entries can be placed
in the right fqsn section...
Still ToDo:
- incremental update on trade clears / position updates
- pps audit from ledger depending on user config?
This makes a few major changes but mostly is centered around including
transaction (aka trade-clear) costs in the avg breakeven price
calculation.
TL;DR:
- rename `TradeRecord` -> `Transaction`.
- make `Position.fills` a `dict[str, float]` which holds each clear's
cost value.
- change `Transaction.symkey` -> `.bsuid` for "backend symbol unique id".
- drop `brokername: str` arg to `update_pps()`
- rename `._split_active()` -> `dump_active()` and use input keys
verbatim in output map.
- in `update_pps_conf()` always incrementally update from trade records
even when no `pps.toml` exists yet since it may be both the case that
the ledger needs loading **and** the caller is handing new records not
yet in the ledger.
Begins the position tracking incremental update API which supports
constructing a `pps.toml` both from trade ledgers as well as via
diff-oriented incremental update from an existing config assumed to be
previously generated from some prior ledger.
New set of routines includes:
- `_split_active()` a helper to split a position table into the active
and closed positions (aka pps of size 0) for determining entry updates
in the `pps.toml`.
- `update_pps_conf()` to maybe load a `pps.toml` and update it from
an input trades ledger including necessary (de)serialization to and
from `Position` object form(s).
- `load_pps_from_ledger()` a ledger parser-loader which constructs
a table of pps strictly from the broker-account ledger data without
any consideration for any existing pps file.
Each "entry" in `pps.toml` also contains a `fills: list` attr (name may
change) which references the set of trade records which make up its
state since the last net-zero position in the instrument.
Add a `TradeRecord` struct which holds the minimal field set to build
out position entries. Add `.update_pps()` to convert a set of records
into LIFO position entries, optionally allowing for an update to some
existing pp input set. Add `load_pps_from_ledger()` which does a full
ledger extraction to pp objects, ready for writing a `pps.toml`.
Since "flex reports" are only available for the current session's trades
the day after, this adds support for also collecting trade execution
records for the current session and writing them to the equivalent
ledger file.
Summary:
- add `trades_to_records()` to handle parsing both flex and API event
objects into a common record form.
- add `norm_trade_records()` to handle converting ledger entries into
`TradeRecord` types from the new `piker.pps` mod (coming in next
commit).
Start a generic "position related" util mod and bring in the `Position`
type from the allocator, convert it to a `msgspec.Struct` and add
a `.lifo_update()` method. Implement a WIP pp parser from a trades
ledger and use the new lifo method to gather position entries.
Add `ChartPlotWidget._on_screen: bool` which allows detecting for the
first state where there is y-range-able flow data loaded and able to be
drawn. Check for this flag to be set in `.maxmin()` such that until the
historical data is loaded `.default_view()` will be called to ensure
that a blank view is never shown: race with the UI starting versus the
data layer loading flow graphics can have this outcome.
This should hopefully make teardown more reliable and includes better
logic to fail over to a hard kill path after a 3 second timeout waiting
for the instance to complete using the `docker-py` wait API. Also
generalize the supervisor teardown loop by allowing the container config
endpoint to return 2 msgs to expect:
- a startup message that can be read from the container's internal
process logging that indicates it is fully up and ready.
- a teardown msg that can be polled for that indicates the container has
gracefully terminated after a cancellation request which is passed to
our container wrappers `.cancel()` method.
Make the marketstore config endpoint return the 2 messages we previously
had hard coded and use this new api.
This was introduced in #302 but after thorough testing was clearly not
working XD. Adjust the display loop to update the last graphics
segment on both the OHLC and vlm charts (as well as all deriving fsp
flows) whenever the uppx >= 1 and there is no current path append
taking place (since more datums are needed to span an x-pixel in view).
Summary of tweaks:
- move vlm chart update code to be at the end of the cycle routine and
have that block include the tests for a "interpolated last datum in
view" line.
- make `do_append: bool` compare with a floor of the uppx value (i.e.
appends should happen when we're just fractionally over a pixel of
x units).
- never update the "volume" chart.
Allows for optionally updating a "downsampled" graphics type which is
currently necessary in the `BarItems` -> `FlattenedOHLC` curve switching
case; we don't want to be needlessly redrawing the `Flow.graphics`
object (which will be an OHLC curve) when in flattened curve mode.
Further add a `only_last_uppx: bool` flag to `.draw_last()` to allow
forcing a "last uppx's worth of data max/min" style interpolating line
as needed.
The single-file module was getting way out of hand size-wise with the
new flex report parsing stuff so this starts the process of breaking
things up into smaller modules oriented around trade, data, and ledger
related endpoints.
Add support for backends to declare sub-modules to enable in
a `__enable_modules__: list[str]` module var which is parsed by the
daemon spawning code passed to `tractor`'s `enable_modules: list[str]`
input.
When using m4, we downsample to the max and min of each
pixel-column's-worth of data thus preserving range / dispersion details
whilst not drawing more graphics than can be displayed by the available
amount of horizontal pixels.
Take and apply this exact same concept to the "last datum" graphics
elements for any `Flow` that is reported as being in a downsampled
state:
- take the xy output from the `Curve.draw_last_datum()`,
- slice out all data that fits in the last pixel's worth of x-range
by using the uppx,
- compute the highest and lowest value from that data,
- draw a single line segment which spans this yrange thus creating
a simple vertical set of pixels which are "filled in" and show the
entire y-range for the most recent data "contained with that pixel".
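Numerically that reduction is tiny; a sketch (the helper name is ours):

```python
import numpy as np

def last_pixel_yrange(y: np.ndarray, uppx: float) -> tuple[float, float]:
    '''Min/max of the tail slice covered by one x-pixel's worth of data.'''
    tail = y[-max(int(round(uppx)), 1):]
    return float(tail.min()), float(tail.max())

# the curve then draws a single vertical segment spanning this yrange at
# the last datum's x coordinate.
```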
Instead of using a bunch of internal logic to modify low level paint-able
elements create a `Curve` lineage that allows for graphics "style"
customization via a small set of public methods:
- `Curve.declare_paintables()` to allow setup of state/elements to be
drawn in later methods.
- `.sub_paint()` to allow painting additional elements along with the
defaults.
- `.sub_br()` to customize the `.boundingRect()` dimensions.
- `.draw_last_datum()` which is expected to produce the paintable
elements which will show the last datum in view.
Introduce the new sub-types and load as necessary in
`ChartPlotWidget.draw_curve()`:
- `FlattenedOHLC`
- `StepCurve`
Reimplement all `.draw_last()` routines as a `Curve` method
and call it the same way from `Flow.update_graphics()`
The basic logic is now this:
- when zooming out, uppx (units per pixel in x) can be >= 1
- if the uppx is `n` then the next pixel in view becomes occupied by
a new datum-x-coordinate-value when the diff between the last
datum step (since the last such update) is greater than the
current uppx -> `datums_diff >= n`
- if we're less than some constant uppx we just always update (because
it's not costly enough and we're not downsampling).
More or less this just avoids unnecessary real-time updates to flow
graphics until they would actually be noticeable via the next pixel
column on screen.
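The gate itself is trivial; a sketch of the rule set above (the constant
and names are illustrative):

```python
def should_append(datums_diff: int, uppx: float, always_below: float = 1.0) -> bool:
    if uppx <= always_below:
        # not downsampled: cheap enough to just always update
        return True
    # zoomed out: only append once enough new datums span a fresh x-pixel
    return datums_diff >= uppx
```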
This was a bit of a nightmare to figure out but, it seems that the
coordinate caching system will really be a dick (like the nickname for
richard for you serious types) about leaving stale graphics if we don't
reset the cache on downsample full-redraw updates... Sooo, instead we do
this manual reset to avoid such artifacts and consequently (for now)
return a `reset: bool` flag in the return tuple from `Renderer.render()`
to indicate as such.
Some further shite:
- move the step mode `.draw_last()` equivalent graphics updates down
with the rest..
- drop some superfluous `should_redraw` logic from
`Renderer.render()` and compound it in the full path redraw block.
Adds a new pre-graphics data-format callback incremental update api to
our `Renderer`. A `Renderer` instance can now overload these custom routines:
- `.update_xy()` a routine which accepts the latest [pre/a]pended data
sliced out from shm and returns it in a format suitable to store in
the optional `.[x/y]_data` arrays.
- `.allocate_xy()` which initially does the work of pre-allocating the
`.[x/y]_data` arrays based on the source shm sizing such that new
data can be filled in (to memory).
- `._xy_[first/last]: int` attrs to track index diffs between src shm
and the xy format data updates.
Implement the step curve data format with 3 super simple routines:
- `.allocate_xy()` -> `._pathops.to_step_format()`
- `.update_xy()` -> `._flows.update_step_xy()`
- `.format_xy()` -> `._flows.step_to_xy()`
Further, adjust `._pathops.gen_ohlc_qpath()` to adhere to the new
call signature.
We're doing this in `Flow.update_graphics()` atm and probably are going
to in general want custom graphics objects for all the diff curve / path
types. The new flows work seems to fix the bounding rect width calcs to
not require the ad-hoc extra `+ 1` in the step mode case; before it was
always a bit hacky anyway. This also tries to add a more correct
bounding rect adjustment for the `._last_line` segment.
Finally this gets us much closer to a generic incremental update system
for graphics wherein the input array diffing, pre-graphical format data
processing, downsampler activation and incremental update and storage of
any of these data flow stages can be managed in one modular sub-system
:surfer_boi:.
Dirty deatz:
- reorg and move all path logic into `Renderer.render()` and have it
take in pretty much the same flags as the old
`FastAppendCurve.update_from_array()` and instead storing all update
state vars (even copies of the downsampler related ones) on the
renderer instance:
- new state vars: `._last_uppx, ._in_ds, ._vr, ._avr`
- `.render()` input bools: `new_sample_rate, should_redraw,
should_ds, showing_src_data`
- add a hack-around for passing in incremental update data (for now)
via a `input_data: tuple` of numpy arrays
- a default `uppx: float = 1`
- add new render interface attrs:
- `.format_xy()` which takes in the source data array and produces the
x, y arrays (and maybe a `connect` array) that can be passed to
`.draw_path()` (the default for this is just to slice out the index
and `array_key: str` columns from the input struct array),
- `.draw_path()` which takes in the x, y, connect arrays and generates
a `QPainterPath`
- `.fast_path`, for "appendable" updates like there was on the fast
append curve
- move redraw (aka `.clear()` calls) into `.draw_path()` and trigger
via `redraw: bool` flag.
- our graphics objects no longer set their own `.path` state, it's done
by the `Flow.update_graphics()` method using output from
`Renderer.render()` (and its state if necessary)
A bit hacky to get all graphics types working but this is hopefully the
first step toward moving all the generic update logic into `Renderer`
types which can be themselves managed more compactly and cached per
uppx-m4 level.
Which is basically just "deleting" rows from a column series.
You can only use the trim command from the `.cmd` cli and only with a so
called `LocalClient` currently; it's also sketchy af and caused
a machine to hang due to mem usage..
Ideally we can patch in this functionality for use by the rpc api
and have it not hang like this XD
Pertains to https://github.com/alpacahq/marketstore/issues/264
Yet another path ops routine which converts a 1d array into a data
format suitable for rendering a "step curve" graphics path (aka a "bar
graph" but implemented as a continuous line).
Also, factor the `BarItems` rendering logic (which determines whether to
render the literal bars lines or a downsampled curve) into a routine
`render_baritems()` until we figure out the right abstraction layer for
it.
Starts a module for grouping together all our `QPainterpath` related
generation and data format operations for creation of fast curve
graphics. To start, drops `FastAppendCurve.downsample()` and moves
it to a new `._pathops.xy_downsample()`.
Mostly just dropping old commented code for "step mode" format
generation. Always slice the tail part of the input data and move to the
new `ms_threshold` in the `pg` profiler.
Relates to the bug discovered in #310, this should avoid out-of-order
msgs which do not have a `.reqid` set to be error logged to console.
Further, add `pformat()` to kraken logging of ems msging.
Since downsampling with the more correct version of m4 (uppx driven
windows sizing) is super fast now we don't need to avoid downsampling
on low uppx values. Further all graphics objects now support in-view
slicing so make sure to use it on interaction updates. Pass in the view
profiler to update method calls for more detailed measuring.
Even moar,
- Add a manual call to `.maybe_downsample_graphics()` inside the mouse
wheel event handler since it seems that sometimes trailing events get
lost from the `.sigRangeChangedManually` signal which can result in
"non-downsampled-enough" graphics on chart given the scroll amount;
this manual call seems to entirely fix this?
- drop "max zoom" guard since internals now support (near) infinite
scroll out to graphics becoming a single pixel column line XD
- add back in commented xrange signal connect code for easy testing to
verify against range updates not happening without it
This took longer than i care to admit XD but it definitely adds a huge
speedup and with only a few outstanding correctness bugs:
- panning from left to right causes strange trailing artifacts in the
flows fsp (vlm) sub-plot but only when some data is off-screen on the
left but doesn't appear to be an issue if we keep the `._set_yrange()`
handler hooked up to the `.sigXRangeChanged` signal (but we aren't
going to because this makes panning way slower). i've got a feeling
this is a bug todo with the device coordinate cache stuff and we may
need to report to Qt core?
- factoring out the step curve logic from
`FastAppendCurve.update_from_array()` (un)fortunately required some
logic branch uncoupling but also meant we needed special input controls
to avoid things like redraws and curve appends for special cases,
this will hopefully all be better rectified in code when the core of
this method is moved into a renderer type/implementation.
- the `tina_vwap` fsp curve now somehow causes hangs when doing erratic
scrolling on downsampled graphics data. i have no idea why or how but
disabling it makes the issue go away (ui will literally just freeze
and gobble CPU on a `.paint()` call until you ctrl-c the hell out of
it). my guess is that something in the logic for standard line curves
and appends on large data sets is the issue?
Code related changes/hacks:
- drop use of `step_path_arrays_from_1d()`, it was always a bit hacky
(being based on `pyqtgraph` internals) and was generally hard to
understand since it returns 1d data instead of the more expected (N,2)
array of "step levels"; instead this is now implemented (uglily) in
the `Flow.update_graphics()` block for step curves (which will
obviously get cleaned up and factored elsewhere).
- add a bunch of new flags to the update method on the fast append
curve: `draw_last: bool`, `slice_to_head: int`, `do_append: bool`,
`should_redraw: bool` which are all controls to aid with previously
mentioned issues specific to getting step curve updates working
correctly.
- add a ton of commented tinkering related code (that we may end up
using) to both the flow and append curve methods that was written as
part of the effort to get this all working.
- implement all step curve updating inline in `Flow.update_graphics()`
including prepend and append logic for pre-graphics incremental step
data maintenance and in-view slicing as well as "last step" graphics
updating.
Obviously clean up commits coming stat B)