Quite a simple fix: we just assign the account-specific
`PositionTracker` to the level line's `._on_level_change()` handler
instead of whatever the current `OrderMode.current_pp` is set to.
Further this adds proper pane switching support such that when a user
modifies an order line from an account which is not the currently
selected one, the settings pane is changed to reflect the
account and thus the corresponding position info for that account and
instrument B)
We were overwriting the existing loaded orders list in the per-client
loop (lul) so move the def above all that.
Comment out the "try-to-cancel-inactive-orders-via-task-after-timeout"
stuff pertaining to https://github.com/erdewit/ib_insync/issues/363 for
now since we don't have a mechanism in place to cancel the re-cancel
task once the order is cancelled - plus who knows if this is even the
best way to do it..
Fills seem to be dual-emitted from both the `status` and `fill` events
in `ib_insync` internals and more or less contain the same data nested
inside their `Trade` type. We started handling the 'fill' case to deal
with a race issue in commissions/cost report tracking but we don't
really want to leak that same race to incremental fills vs.
order-"closed" tracking.. So go back to only emitting the fill msgs
on statuses and a "closed" on `.remaining == 0`.
`ib` is super good at not being reliable with order event sequencing
and duplication of fill info. This adds some guards to try and avoid
popping the last status too early if we end up receiving
a `'closed'` before the expected `'fill'` event(s). Further delete the
`status_msg` ref on each iteration to avoid stale reference lookups in
the relay task/loop.
This includes darks, lives and alerts with all connecting clients
being broadcast all existing order-flow dialog states. Obviously
for now darks and alerts only live as long as the `emsd` actor lifetime
(though we will store these in local state eventually) and "live" orders
have lifetimes managed by their respective backend broker.
The details of this change-set are extensive, so here we go..
Messaging schema:
- change the messaging `Status` status-key set to:
`resp: Literal['pending', 'open', 'dark_open', 'triggered',
'closed', 'fill', 'canceled', 'error']`
which better reflects the semantics of order lifetimes and was
partially inspired by the status keys `kraken` provides for their
order-entry API. The prior key set was based on `ib`'s horrible
semantics which sound like they're right out of the 80s..
Also, we reflect this same set in the `BrokerdStatus` msg and likely
we'll just get rid of the separate brokerd-dialog side type
eventually.
- use `Literal` type annots for statuses where applicable and as they
are supported by `msgspec`.
- add additional optional `Status` fields (sketched after this list):
  - `req: Order` to allow each status msg to optionally ref its
    commanding order-request msg allowing at least a request-response
    style implicit tracing in all response msgs.
  - `src: str` tag string to show the source of the msg.
  - `reqid: str | int` such that the ems can relay the `brokerd`
    request id both to the client side and have one spot to look
    up prior status msgs.
- draft an (unused/commented) `Dialog` type which can eventually be used
at all EMS endpoints to track msg-flow states
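For reference, a rough sketch of what the reworked `Status` schema looks
like per the bullets above (not the verbatim module code; the defaults and
the minimal `Order` fields shown are just illustrative assumptions):

```python
from typing import Literal, Optional

import msgspec


class Order(msgspec.Struct):
    # assumed-minimal request-msg fields, for illustration only
    oid: str  # ems dialog id (uuid4)
    symbol: str
    action: Literal['buy', 'sell', 'alert']
    price: float
    size: float
    account: str


class Status(msgspec.Struct):
    name: str = 'status'
    oid: str = ''
    time_ns: int = 0

    # the new status-key set replacing the prior `ib`-flavoured keys
    resp: Literal[
        'pending', 'open', 'dark_open', 'triggered',
        'closed', 'fill', 'canceled', 'error',
    ] = 'pending'

    # the new optional fields described above
    req: Optional[Order] = None        # boxed commanding request msg
    src: Optional[str] = None          # msg source tag (eg. 'dark' or broker name)
    reqid: Optional[str | int] = None  # `brokerd`-side request id
```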
EMS engine adjustments/rework:
- use the new status key set throughout and expect `BrokerdStatus` msgs
to use the same new schema as `Status`.
- add a `_DarkBook._active: dict[str, Status]` table which is now used for
  all per-leg-dialog associations and order flow state tracking,
  allowing both the brokerd-relay and client-request handler loops
  to read/write the same msg-table and providing for delivery of
  the overall EMS-active-orders state to newly/re-connecting clients
  with minimal processing; this table replaces the prior `._ems_entries`
  table (see the rough sketch after this list).
- add `Router.client_broadcast()` to send a msg to all currently
connected peers.
- a variety of msg handler block logic tweaks, including more `case:`
  blocks, to be both flatter and more explicit:
- for the relay loop move all `Status` msg update and sending to
within each block instead of a fallthrough case plus hard-to-follow
state logic.
- add a specific case for unhandled backend status keys and just log
them.
- pop alerts from `._active` immediately once triggered.
- where possible mutate status msg fields instead of instantiating new
  ones.
- insert and expect `Order` instances in the dark clearing loop and
adjust `case:` blocks accordingly.
- tag `dark_open` and `triggered` statuses as sourced from the ems.
- drop all the `ChainMap` stuff for now; we're going to make our own
`Dialog` type for this purpose..
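A rough sketch of how the `._active` table and `Router.client_broadcast()`
pieces described above fit together (names from this change-set, the bodies
are just assumptions for illustration):

```python
from dataclasses import dataclass, field


@dataclass
class _DarkBook:
    broker: str
    # ems dialog id (oid) -> last `Status` msg for that order-flow leg;
    # read/written by both the brokerd-relay and client-request loops.
    _active: dict = field(default_factory=dict)


@dataclass
class Router:
    books: dict = field(default_factory=dict)   # broker -> _DarkBook
    clients: set = field(default_factory=set)   # connected peer streams

    async def client_broadcast(self, msg: dict) -> None:
        # relay a msg to every currently connected peer client.
        for stream in self.clients:
            await stream.send(msg)

    async def sync_new_client(self, stream) -> None:
        # deliver the overall EMS-active-orders state to a (re)connecting
        # client with minimal processing.
        for book in self.books.values():
            for status in book._active.values():
                await stream.send(status)
        self.clients.add(stream)
```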
Order mode rework:
- always parse the `Status` msg and use match syntax cases with object
patterns, hackily assign the `.req` in many blocks to work around not
  yet having proper on-the-wire decoding.
- make `.load_unknown_dialog_from_msg()` expect a `Status` with boxed
`.req: Order` as input.
- change `OrderDialog` -> `Dialog` in prep for a general purpose type
of the same name.
`ib` backend order loading support:
- do "closed" status detection inside the msg-relay loop instead
of expecting the ems to do this..
- add an attempt to cancel inactive orders by scheduling cancel
submissions continually (no idea if this works).
- add a status map to go from the 80s keys to our new set (sketched after
  this list).
- deliver `Status` msgs with an embedded `Order` for existing live order
  loading and make sure to try and get the source exchange info (instead
of SMART).
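A guess at what that status map might look like (the ib-side keys are the
standard `ib_insync` `OrderStatus` strings; the exact pairings in the
backend may differ):

```python
# guessed ib -> piker status-key map, illustrative only
_statuses: dict[str, str] = {
    'ApiPending': 'pending',
    'PendingSubmit': 'pending',
    'PreSubmitted': 'open',
    'Submitted': 'open',
    'PendingCancel': 'canceled',
    'ApiCancelled': 'canceled',
    'Cancelled': 'canceled',
    'Filled': 'closed',
    'Inactive': 'error',
}
```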
Paper engine ported to match:
- use new status keys in `BrokerdStatus` msgs
- use `match:` syntax in request handler loop
Ideally every client that connects to the ems can know its state
(immediately), meaning we relay all the order dialogs that are currently
active. This adds full (hacky WIP) support to receive those dialog
(msgs) from the `open_ems()` startup values via the `.started()` msg
from `_emsd_main()`.
Further this adds support to the order mode chart-UI to display existing
(live) orders on the chart during startup. Details include,
- add a `OrderMode.load_unknown_dialog_from_msg()` for processing and
displaying a ``BrokerdStatus`` (for now) msg from the EMS that was not
previously created by the current ems client and registering and
displaying it on the chart.
- break out the ems msg processing into a new
`order_mode.process_trade_msg()` func so that it can be called on the
  startup dialog-msg set as well as eventually used as a more general low
  level auto-strat API (eg. when we get to displaying auto-strat and
  group trading automatically on an observing chart UI).
- hackyness around msg-processing for the dialogs delivery since we're
technically delivering `BrokerdStatus` msgs when the client-side
processing technically expects `Status` msgs.. we'll rectify this
soon!
In order to avoid missed existing order message emissions on startup we
need to be sure the client side stream is registered with the router
first. So defer the start of the
`translate_and_relay_brokerd_events()` task until inside the client
stream block and start it using the dark clearing loop nursery.
Also, ensure `oid` (and thus for `ib` the equivalent re-used `reqid`)
are cast to `str` before registering the dark book. Deliver the dark
book entries as part of the `_emsd_main()` context `.started()` values.
This seems to have been broken in refactoring from commit 279c899de5
which was never tested against multiple accounts/clients.
The fix is 2 part:
- position tables are now correctly loaded ahead of time and used by
account for each connected client in processing of ledgers and
existing positions.
- a task for each API client is started (as implemented prior) so that
we actually get status updates for every client used for submissions.
Further we add a bit of code using `bisect.insort()` to normalize
ledgers to a datetime-sorted list of records (though pretty sure the `dict`
transform ruins it?) in an effort to avoid issues with ledger
transaction processing with previously minimized `Position.clears`
tables, which should (but might not?) avoid incorporating clear events
prior to the last "net-zero" positioning state.
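Roughly the `insort()`-based normalization in question (a sketch; the
record field names and the `pendulum` parsing are assumptions):

```python
from bisect import insort

import pendulum


def norm_ledger(ledger: dict[str, dict]) -> list[dict]:
    '''
    Return ledger records as a datetime-sorted list so later clears-table
    filtering can rely on chronological ordering.

    '''
    records: list[dict] = []
    for tid, record in ledger.items():
        record = dict(record)
        record['tid'] = tid
        # 'time' assumed to be an ISO formatted timestamp string
        record['dt'] = pendulum.parse(record['time'])
        # keep the list sorted as we insert (py3.10+ `key` support)
        insort(records, record, key=lambda r: r['dt'])

    return records
```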
This firstly changes `.audit_sizing()` => `.ensure_state()` and makes it
return `None` as well as only error when split-ratio-denoted (via
config) positions do not size as expected.
Further refinements,
- add an `.expired()` predicate method
- always return a size of zero from `.calc_size()` on expired assets
- load each `pps.toml` entry's clear table into `Transaction`s and use
`.add_clear()` during from config init.
In order to avoid issues with reloading ledger and API trades after an
existing `pps.toml` exists we have to make sure we not only avoid
duplicate entries but also avoid re-adding entries that would have been
removed during a prior call to the `Position.minimize_clears()` filter.
The easiest way to do this is to sort on timestamps and avoid adding any
record that pre-existed the last net-zero position ledger event that
`.minimize_clears()` discarded. In order to implement this it means
parsing config file clears table's timestamps into datetime objects for
inequality checks and we add a `Position.first_clear_dt` attr for
storing this value when managing pps in object form but never store it
in the config (since it should be obvious from the sorted clear event
table).
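The check then boils down to something like this (sketch only, field names
assumed):

```python
from datetime import datetime


def should_add_clear(
    first_clear_dt: datetime | None,
    clear_dt: datetime,
) -> bool:
    # any ledger/api record dated before the first clear currently kept in
    # the position's (minimized) clears table was already dropped by a
    # prior `.minimize_clears()` pass, so don't re-add it.
    if first_clear_dt and clear_dt < first_clear_dt:
        return False

    return True
```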
The (partial) fills from this sub are most indicative of clears (also
says support) whereas the msgs in the `ownTrades` sub are only emitted
after the entire order request has completed - there is no size-vlm
remaining.
Further enhancements:
- this also includes proper subscription-syncing inside `subscribe()` with
a small pre-msg-loop which waits on ack-msgs for each sub and raises any
errors (see the sketch after this list). This approach should probably
be implemented for the data feed streams as well.
- configure the `ownTrades` sub to not bother sending historical data on
startup.
- make the `openOrders` sub include rate limit counters.
- handle the rare case where the ems is trying to cancel an order which
was just edited and hasn't yet had its new `txid` registered.
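The sub-ack sync mentioned in the first bullet looks roughly like this
(a sketch assuming a `.send_msg()`/`.recv_msg()` ws wrapper; the
`subscriptionStatus` event shape is per kraken's public ws docs):

```python
async def subscribe(ws, token: str) -> None:
    subs = [
        {'name': 'openOrders', 'token': token, 'ratecounter': True},
        {'name': 'ownTrades', 'token': token, 'snapshot': False},
    ]
    for sub in subs:
        await ws.send_msg({'event': 'subscribe', 'subscription': sub})

    # wait on an ack per sub before entering the main msg loop,
    # raising on any error responses.
    acked = 0
    while acked < len(subs):
        msg = await ws.recv_msg()
        match msg:
            case {'event': 'subscriptionStatus', 'status': 'subscribed'}:
                acked += 1

            case {
                'event': 'subscriptionStatus',
                'status': 'error',
                'errorMessage': errmsg,
            }:
                raise RuntimeError(f'sub failed: {errmsg}')
```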
Since we figured out how to pass through ems dialog ids to the
`openOrders` sub we don't really need to do much with status updates
other than error handling. This drops `process_status()` and moves the
error handling logic into a status handler sub-block; we now just
info-log status updates for troubleshooting purposes.
Why we need so many fields to accomplish passing through a dialog key to
orders is beyond me but this is how they do it with edits..
Allows not having to handle `editOrderStatus` msgs to update the dialog
key table and instead just do it in the `openOrders` sub by checking the
canceled msg for a 'cancel_reason' of 'Order replaced', in which case we
just pop the txid and wait for the new order the kraken backend engine
will submit automatically, which will now have the correct 'userref'
value we passed in via the `newuserref`, and then we add that new `txid`
to our table.
Turns out you can pass both thus making mapping an ems `oid` to
a brokerd-side `reqid` much more simple. This allows us to avoid keeping
as much local dialog state but with still the following caveats:
- ok'd `editOrder` msgs must update the reqid <-> txid map
- only pop `reqids2txids` entries inside the `cancelOrderStatus` handler
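Those two caveats in sketch form (handler shapes assumed; only the
`bidict` calls are real api):

```python
from bidict import bidict

# brokerd-generated reqid (int) <-> kraken txid (str)
reqids2txids = bidict()


def on_edit_ok(reqid: int, new_txid: str) -> None:
    # a successful `editOrder` means kraken issued a fresh txid for the
    # same dialog, so remap our reqid to it (dropping any prior entry).
    reqids2txids.forceput(reqid, new_txid)


def on_cancel_status(txid: str) -> None:
    # only drop the mapping once the `cancelOrderStatus` ack arrives.
    reqids2txids.inverse.pop(txid, None)
```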
If we don't have a pos table built out already (in mem) we can't figure
out the likely dst asset (since there's no pair entry to guide us) that
we should use to search for withdrawal transactions; so move it later.
Further this ports to the new api changes in `piker.pp` that will land
with #365.
This ended up driving the rework of the `piker.pp` apis to use context
manager + table style which resulted in a much easier to follow
state/update system B). Also added is a flag to do a manual simulation
of a "fill triggered rt pp msg" which requires the user to delete the
last ledgered trade entry from config files and then allows that trade
to emit through the `openOrders` sub and update the client shortly after
order mode boot; this is how the rt updates were verified to work
without doing even more live orders 😂.
Patch details:
- open both `open_trade_ledger()` and `open_pps()` inside the trade
dialog startup and conduct a "pp state sync" logic phase where we now
pull the account balances and incrementally load pp data (in order,
from `pps.toml`, ledger, api) until we can generate the asset balance
by reverse incrementing through trade history eventually erroring out
if we can't reproduce the balance value.
- rework the `trade2pps()` to take in the `PpTable` and generate new
ems msgs from table updates.
- return the new `dict[str, Transaction]` expected from
`norm_trade_records()`
- only update pp config and ledger on dialog exit.
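Not the actual `piker.pp` code, but the general "mutate in mem, write the
file only on ctx exit" pattern these managers use looks like:

```python
from contextlib import contextmanager
from pathlib import Path

import tomli  # fast pure-python reader (as mentioned elsewhere in this log)
import toml   # assumed writer


@contextmanager
def open_toml_state(path: Path):
    if path.exists():
        with path.open('rb') as f:
            state = tomli.load(f)
    else:
        state = {}

    # caller mutates `state` freely; no file io per update
    yield state

    # single write on exit keeps real-time msg generation low latency
    with path.open('w') as f:
        toml.dump(state, f)
```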
Since our ems doesn't actually do blocking style client-side submission
updates, thus resulting in the client being able to update an existing
order's state before knowing its current state, we can run into race
conditions where for some backends an order is updated using the wrong
order id. For kraken we manually implement detecting this race (lol, for
now anyway) such that when a new client side edit comes in before the
new `txid` is known, we simply expect the handler loop to cancel the
order. Further this adds cancellation on arbitrary status errors, like
rate limits.
Also this adds 2 leg (ems <-> brokerd <-> kraken) msg tracing using
a `collections.ChainMap` which is likely going to end up being the POC
for a more general data structure recommended for backends that need to
trace msg flow for translation with the ems.
Turns out the `reqid` value returned by the `openOrders` and `ownTrades`
subs (the one brokerd sends to the kraken api in order requests) is
always set to zero, which seems to be a bug? So this includes patches to
work around that as well as reliance on the `openOrders` sub to do most
`BrokerdStatus` updates since `XOrderStatus` events don't seem to have
much data in them at all (they almost look like pure ack events so maybe
they aren't affirmative of final state changes anyway..).
Other fixes:
- respond with a `BrokerdOrderAck` immediately after `reqid` generation
  not after order submission to ensure the ems has a valid `reqid`
  *before* kraken api events are relayed through.
- add a `reqids2txids: bidict[int, str]` which maps brokerd genned
  `reqid`s to kraken-side `txid`s since (as mentioned above) the
clearing and state endpoints don't relay back this value (it's always
0...)
- add log messages for each sub so that (at least for now) we can see
exact msg contents coming from kraken.
- drop `.remaining` calcs for now since we need to keep record of the
order states manually in order to retrieve the original submission
vlm..
- fix the `openOrders` case for fills, in this case the message includes
no `status` field and thus we must catch it in a block *after* the
normal state handler to avoid masking.
- drop response msg generation from the cancel status case since we
can do it again from the `openOrders` handler and sending a double
status causes issues on the client side.
- add a shite ton of notes around all this missing `reqid` stuff.
More or less just to avoid orders the user wasn't aware of persisting
until we get "open order relaying" through the ems working.
Some further fixes which required a new `reqids2txids` map which keeps
track of which `kraken` "txid" is mapped to our `reqid: int`; mainly
this was needed for cancel requests which require knowing the underlying
`txid`s (since apparently kraken doesn't keep track of the "reqid" we
pass it). Pass the ws instance into `handle_order_updates()` to enable
the cancelling of orders on startup. Don't key-error on unknown `reqid`
values (for eg. when receiving historical trade events on startup).
Handle cancel requests first in the ems side loop.
Since we seem to always be able to get back the `reqid`/`userref` value
we send to kraken ws endpoints, we can use this as our brokerd side
order id and avoid all race cases with getting the true `txid` value
that `kraken` assigns (and which changes when you do "edits"
:eyeroll:). This simplifies status updates by allowing our relay loop
just to pass back our generated `.reqid` verbatim and allows responding
with a `BrokerdOrderAck` immediately in the request handler task which
should guarantee there are no further race conditions with the relay
loop and mapping `txid`s from kraken.. and figuring out wtf to do when
they change, etc.
Addressing the same issue as in #350 where we need to compute position
updates using the *first read* from the ledger **before** we update it,
to make sure `Position.lifo_update()` gets called and **not skipped**
because new trades were read as clears entries but haven't actually been
included in update calcs yet.. aka we want `Position.lifo_update()` to
actually run.
Main change here is to convert `update_ledger()` into a context mngr so
that the ledger write is committed after pps updates using
`pp.update_pps_conf()`..
This is basically a hotfix to #346 as well.
Turns out the EMS can support this as originally expected: you can
update a `brokerd`-side `.reqid` through a `BrokerdAck` msg and the ems
will update its cross-dialog (leg) tracking correctly! The issue was
a bug in the `editOrderStatus` msg handling and appropriate tracking
of the correct `.oid` (ems uid) on the kraken side. This unfortunately
required adding an `emsflow: dict[str, list[BrokerdOrder]]` msg flow
tracing table which means the broker daemon is tracking all the msg flow
with the ems, though I'm wondering now if this is just good practice
anyway and maybe we should offer a small primitive type from our msging
utils to aid with this? I've used such constructs in event handling
systems prior.
There's a lot more factoring that can be done after these changes as
well but the quick detailed summary is,
- rework the `handle_order_requests()` loop to use `match:` syntax and
update the new `emsflow` table on every new request from the ems.
- fix the `editOrderStatus` case pattern to not include an error msg and
thus actually be triggered to respond to the ems with a `BrokerdAck`
containing the new `.reqid`, the new kraken side `txid`.
- skip any `openOrders` msgs which are detected as being kraken's
internal order "edits" by matching on the `cancel_reason` field.
- update the `emsflow` table in all ws-stream msg handling blocks
with responses sent to the ems.
Relates to #290
Move to using the websocket API for all order control ops and drop
the sync rest api approach which resulted in a bunch of buggy races.
Further this gets us much faster (batch) order cancellation for free
and a simpler ems request handler loop. We now heavily leverage the new
py3.10 `match:` syntax for all kraken-side API msg parsing and
processing and handle both the `openOrders` and `ownTrades` subscription
streams.
We also block "order editing" (by immediate cancellation) for now since
the EMS isn't entirely yet equipped to handle brokerd side `.reqid`
changes (which is how kraken implements so called order "updates" or
"edits") for a given order-request dialog and we may want to even
consider just implementing "updates" ourselves via independent cancel
and submit requests? Definitely something to ponder. Alternatively we
can "masquerade" such updates behind the count-style `.oid` remapping we
had to implement anyway (kraken's limitation) and maybe everything will
just work?
Further details in this patch:
- create 2 tables for tracking the EMS's `.oid` (uuid4) value to `int`s
that kraken expects (for `reqid`s): `ids` and `reqmsgs` which enable
local lookup of ems uids to piker-backend-client-side request ids and
received order messages.
- add `openOrders` sub support which more or less directly relays to
equivalent `BrokerdStatus` updates and calc the `.filled` and
`.remaining` values based on cleared vlm updates.
- add handler blocks for `[add/edit/cancel]OrderStatus` events including
error msg cases.
- don't do any order request response processing in
`handle_order_requests()` since responses are always received via one
(or both?) of the new ws subs: `ownTrades` and `openOrders` and thus
such msgs are now handled in the response relay loop.
Relates to #290. Resolves #310, #296.
This drops the use of `pp.update_pps_conf()` (and friends) and instead
moves to using the context style `open_trade_ledger()` and `open_pps()`
managers for faster pp msg gen due to delayed file writing (which was
the main source of update latency).
In order to make this work with potentially multiple accounts this also
uses an exit stack which loads each ledger / `pps.toml` into an account
id mapped `dict`; a POC for likely how we should implement some higher
level position manager api.
The original implementation of `.calc_be_price()` wasn't correct since
the real so called "price per unit" (ppu), is actually defined by
a recurrence relation (which is why the original state-updated
`.lifo_update()` approach worked well) and requires the previous ppu to
be weighted by the new accumulated position size when considering a new
clear event. The ppu is the price above or below which the trader
takes a win or loss on transacting one unit of the trading asset and
thus it is the true "break even price" that determines making or losing
money per fill. This patch fixes the implementation to use trailing
windows of the accumulated size and ppu to compute the next ppu value
for any new clear event as well as handle rare cases where the
"direction" changes polarity (eg. long to short in a single order). The
new method is `Position.calc_ppu()` and further details of the relation
can be seen in the doc strings.
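A worked sketch of the recurrence (costs omitted for brevity; this is not
the actual `Position.calc_ppu()` body):

```python
def calc_ppu(clears: list[tuple[float, float]]) -> float:
    '''
    `clears`: (price, signed_size) fill events in chronological order.

    '''
    accum_size: float = 0
    ppu: float = 0

    for price, size in clears:
        new_size = accum_size + size

        if accum_size == 0 or accum_size * size > 0:
            # entering or increasing the position: weight the prior ppu by
            # the prior accumulated size against this clear's price/size.
            ppu = (ppu * accum_size + price * size) / new_size if new_size else 0

        elif new_size * accum_size < 0:
            # polarity flip (eg. long -> short in a single order): the
            # remaining (flipped) size was all opened at this clear's price.
            ppu = price

        # else: a pure reduction, which leaves the break even price unchanged.

        accum_size = new_size

    return ppu
```

Eg. buying 10 units @ 100 then 10 more @ 110 gives a ppu of 105, and
selling 5 of those leaves the ppu at 105.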
This patch also includes a wack-ton of clean ups and removals in an
effort to refine position management api for easier use in new backends:
- drop `update_pps_conf()`, `load_pps_from_toml()` and rename
  `load_trades_from_ledger()` -> `load_pps_from_ledger()`.
- extend `PpTable` to have a `.to_toml()` method which returns the
active set of positions ready to be serialized to the `pps.toml` file
  which it collects from calling,
- `PpTable.dump_active()` which now returns double dicts of the
open/closed pp object maps.
- make `Position.minimize_clears()` now iterate the clears table in
chronological order (instead of reverse) and only drop fills prior
  to any zero-size state (the old reversed way can result in incorrect
history-size-retracement in cases where a position is lessened but
not completely exited).
- drop `Position.add_clear()` and instead just manually add entries
  inside `.update_from_trans()` and also add `accum_size` and `ppu`
  fields to every entry thus creating a position "history" sequence of
  the ppu and accum size for every position, preparing for being
  able to show "position lifetimes" in the UI.
- move fqsn getting into `Position.to_pretoml()`.
Use the new `.calc_[be_price/size]()` methods when serializing to and
from the `pps.toml` format and add an audit method which will warn about
mismatched values and assign the clears table calculated values pre-write.
Drop the `.lifo_update()` method and instead allow both
`.size`/`.be_price` properties to exist (for non-ledger related uses of
`Position`) alongside the new calc methods and only get fussy about
*what* the properties are set to in the case of ledger audits.
Also changes `Position.update()` -> `.add_clear()`.
Since we're going to need them anyway for desired features, add
2 new `Position` methods:
- `.calc_be_price()` which computes the breakeven cost basis price
from the entries in the clears table.
- `.calc_size()` which just sums the clear sizes.
Add a `cost_scalar: float` control to the `.update_from_trans()` method
to allow manual adjustment of the cost weighting for the case where
a "non-symmetrical" model is wanted.
Go back to always trying to write the backing ledger files on exit, even
when there's an error (obvs without the `return` in the `finally:` block
f$#%-up).
Can't believe i missed this but any `return` inside a `finally` will
suppress the error from the `try:` part... XD
Thought i was losing my mind when the ledger was mutated and then
an error just after wasn't getting raised.. lul.
Never again...
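For posterity, the minimal repro of the gotcha:

```python
# a `return` inside `finally:` silently swallows the in-flight exception.
def write_ledger() -> bool:
    try:
        raise RuntimeError('ledger update failed!')
    finally:
        # this `return` discards the RuntimeError entirely..
        return True


assert write_ledger() is True  # no error ever propagates
```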
In order to avoid double transaction adds/updates and too-early-discard
of zero sized pps (like when trades are loaded from a backend broker but
were already added to a ledger or `pps.toml` prior) we now **don't** pop
such `Position` entries from the `.pps` table in order to keep each
position's clears table always in place. This avoids the edge case where
an entry was removed too early (due to zero size) but then duplicate
trade entries that were in that entry's clears show up from the backend
and are entered into a new entry resulting in an incorrect size in a new
entry.. We still only push non-net-zero entries to the `pps.toml`.
More fixes:
- return the updated set of `Positions` from `.lifo_update()`.
- return the full table set from `update_pps()`.
- use `PpTable.update_from_trans()` more throughout.
- always write the `pps.toml` on `open_pps()` exit.
- only return table from `load_pps_from_toml()`.
In an effort to begin allowing backends to have more granular control
over position updates, particularly in the case where they need to be
reloaded from a trades ledger, this adds a new table API which can
be loaded using `open_pps()`.
- offer an `.update_trans()` method which takes in a `dict` of
`Transactions` and updates the current table of `Positions` from it.
- add a `.dump_active()` which renders the active pp entries dict in
a format ready for toml serialization and all closed positions since
the last update (we might want to not drop these?)
All other module-function apis currently in use should remain working as
before for the moment.
Change `.find_contract()` -> `.find_contracts()` to allow multi-search
for so called "ambiguous" contracts (like for `Future`s) such that the
method now returns a `list` of tracts and populates the contract cache
with all specific tracts retrieved. Let it take in an (unvalidated)
contract that will be fqsn-style-tokenized such that it can be called
from `.search_symbols()` (though we're not quite there yet XD).
More stuff,
- add `Client.parse_patt2fqsn()` which is an fqsn to token unpacker
built from the original logic in the old `.find_contract()`.
- handle fiat/forex pairs with the `'CASH'` sectype.
- add a flag to allow unqualified contracts to fail with a warning msg.
- populate the client's contract cache with all expiries of
an ambiguous derivative.
- allow `.con_deats()` to warn msg instead of raise on def-not-found.
- add commented `assert 0` which was triggering a debugger deadlock in
`tractor` which we still haven't been able to create a unit test for.
Minimize calling `.data._shmarray.attach_shm_array()` as much as is
possible to avoid the crash from #332. This is the suggested hack from
issue #359.
Resolves https://github.com/pikers/piker/issues/359
Not sure why I put this off for so long but the check is in now such
that if the market isn't open or no rt quote comes in from the first
query, we just pull from the last shm history 'close' value.
Includes another fix to avoid raising when a double remove on the client
side stream from the registry sometimes happens.
Not sure how this didn't get caught in usage, but basically real-time
updates got broken by a rework of `update_ledger_from_api_trades()`.
The issue is that the ledger was being updated **before** calling
`piker.pp.update_pps_conf()` which resulted in the `Position.size`
not being updated correctly since the [latest added] clears passed
in via the `trade_records` arg were already found in the `.clears` table
and thus were causing the loop to skip the `Position.lifo_update()`
call..
The solution here is to not update the ledger **until after** we call
`update_pps_conf()` - it's more read/writes but it's correct and we can
figure out a less io-heavy way to do the file writing later.
Further this includes a fix to avoid double emitting a pp update caused
by non-thorough logic that waits for a commission report to arrive
during a fill event; previously we were emitting the same message twice
due to the lack of a check for an existing comms report in the case
where the report arrives *after* the fill.
Moves to using the new `piker.pp` apis to both store real-time trade
events in a ledger file as well as emit position update msgs (which were
not in this backend at all prior) when new orders clear (aka fill).
In terms of outstanding issues,
- solves the pp update part of the bugs reported in #310
- starts a msg case block in prep for #293
Details of rework:
- move the `subscribe()` ws fixture to module level and `partial()` in
the client token instead of passing it to the instance; in prep for
removal of the `.token` attr from the `NoBsWs` wrapper.
- drop `make_auth_sub()` since it was too thin and we can just
do it all succinctly in `subscribe()`
- filter trade update msgs to those not yet stored in the toml ledger
- much better kraken api msg unpacking using new `match:` syntax B)
Resolves #311
No real-time update support (yet) but this is the first draft at writing
trades ledgers and `pps.toml` entries for the kraken backend.
Deatz:
- drop `pack_positions()`, no longer used.
- use `piker.pp` apis to both write a trades ledger file and update the
`pps.toml` inside the `trades_dialogue()` endpoint startup.
- drop the weird paper engine swap over if auth can't be done, we should
be doing something with messaging in the ems over this..
- more web API error response raising.
- pass the `pp.Transaction` set loaded from ledger into
  `process_trade_msgs()` to avoid duplicate sends of already collected
trades msgs.
- add `norm_trade_records()` public endpoint (used by `piker.pp` api)
and `update_ledger()` helper.
- rejig `process_trade_msgs()` to drop the weird `try:` assertion block
and skip already-recorded-in-ledger trade msgs as well as yield *each*
trade instead of sub-sequences.
This was just implemented totally wrong but somehow worked XD
The idea was to include all trades that contribute to ongoing position
size since the last time the position was "net zero", i.e. no position
in the asset. Adjust arithmetic to *subtract* from the current size
until a zero size condition is met and then keep all those clears as
part of the "current state" clears table.
Additionally this fixes another bug where the positions freshly loaded
from a ledger *were not* being merged with the current `pps.toml` state.
Gah, there was a remaining bug where if you tried to update the pps state
with both new trades and from the ledger you'd do a double add of
transactions that were cleared during an `update_pps()` loop. Instead now
keep all clears intact until ready to serialize to the `pps.toml` file,
in which case we call a new method `Position.minimize_clears()` which
does the work of only keeping clears since the last net-zero size.
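Sketch of the "keep only clears since the last net-zero size" idea (the
clears-table field layout is assumed and entries are assumed inserted in
chronological order):

```python
def minimize_clears(clears: dict[str, dict]) -> dict[str, dict]:
    size: float = sum(entry['size'] for entry in clears.values())

    # walk the table backwards in time, subtracting each clear's size from
    # the current position size; once that hits net-zero everything
    # earlier can be dropped.
    keep: list[tuple[str, dict]] = []
    for tid, entry in reversed(list(clears.items())):
        keep.append((tid, entry))
        size -= entry['size']
        if size == 0:
            break

    # re-key back into (chronologically ordered) dict form
    return dict(reversed(keep))
```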
Re-implement `update_pps_conf()` update logic as a single pass loop
which does expiry and size checking for closed pps all in one pass thus
allowing us to drop `dump_active()` which was kinda redundant anyway..
Before we weren't emitting pp msgs when a position went back to "net
zero" (aka the size is zero) nor when a new one was opened (wasn't
previously loaded from the `pps.toml`). This reworks a bunch of the
incremental update logic as well as ports to the changes in the
`piker.pp` module:
- rename a few of the normalizing helpers to be more explicit.
- drop calling `pp.get_pps()` in the trades dialog task and instead
create msgs iteratively, per account, by iterating through collected
position and API trade records and calling instead
`pp.update_pps_conf()`.
- always from-ledger-update both positions reported from ib's pp sys and
session api trades detected on ems-trade-dialog startup.
- `update_ledger_from_api_trades()` now does **just** that: only updates
the trades ledger and returns the transaction set.
- `update_and_audit_msgs()` now only audits the input list of msgs and properly
generates new msgs for newly created positions that weren't previously
loaded from the `pps.toml`.
- use `tomli` package for reading since it's the fastest pure python
reader available apparently.
- add new fields to each pp's clears table: price, size, dt
- make `load_pps_from_toml()`'s `reload_records` a dict that can be
passed in by the caller and is verbatim used to re-read a ledger and
filter to the specified symbol set to build out fresh pp objects.
- add a `update_from_ledger: bool` flag to `load_pps_from_toml()`
to allow forcing a full backend ledger read.
- if a set of trade records is passed into `update_pps_conf()` parse
out the meta data required to cause a ledger reload as per 2 bullets
above.
- return active and closed pps in separate by-account maps from
`update_pps_conf()`.
- drop the `key_by` kwarg.
This makes it possible to refresh a single fqsn-position in one's
`pps.toml` by simply deleting the file entry, in which case, if there are
new trade records passed to `load_pps_from_toml()` via the new
`reload_records` kwarg, then the backend ledger entries matching that
symbol will be filtered and used to recompute a fresh position.
This turns out to be super handy when you have crashes that prevent
a `pps.toml` entry from being updated correctly but where the ledger
does have all the data necessary to calculate a fresh correct entry.
Since some positions obviously expire and thus shouldn't continually
exist inside a `pps.toml` add naive support for tracking and discarding
expired contracts:
- add `Transaction.expiry: Optional[pendulum.datetime]`.
- add `Position.expiry: Optional[pendulum.datetime]` which can be parsed
from a transaction ledger.
- only write pps with a non-none expiry to the `pps.toml`
- change `Position.avg_price` -> `.be_price` (be is "breakeven")
since it's a much less ambiguous name.
- change `load_pps_from_ledger()` to *not* call `dump_active()` since
for the only use case it ends up getting called later anyway.
We can probably make this better (and with less file sys accesses) later
such that we keep a consistent pps state in mem and only write async
maybe from another side-task?
What a nightmare this was.. main holdup was that cost (commissions)
reports are fired independent from "fills" so you can't really emit
a proper full position update until they both arrive.
Deatz:
- move `push_tradesies()` and relay loop in `deliver_trade_events()` to
the new py3.10 `match:` syntax B)
- subscribe for, and handle `CommissionReport` events from `ib_insync`
and repack as a `cost` event type.
- handle cons with no primary/listing exchange (like futes) in
`update_ledger_from_api_trades()` by falling back to the plain
'exchange' field.
- drop reverse fqsn lookup from ib positions map; just use contract
lookup for api trade logs since we're already connected..
- make validation in `update_and_audit()` optional via flag.
- pass in the accounts def, ib pp msg table and the proxies table to the
trade event relay task-loop.
- add `emit_pp_update()` to encapsulate a full api trade entry
incremental update which calls into the `piker.pp` apis to,
- update the ledger
- update the pps.toml
- generate a new `BrokerdPosition` msg to send to the ems
- adjust trades relay loop to only emit pp updates when a cost report
arrives for the fill/execution by maintaining a small table per exec
id.
I don't want to rant too much any more since it's pretty clear `ib` has
either zero concern for its (api) users or a severely terrible data
management team and/or general inter-team coordination system, but this
patch more or less hacks the flex report records to be similar enough to
API "execution" / "fill" records such that they can be similarly
normalized and stored as well as processed for position calculations..
Dirty deats,
- use the `IB.fills()` method for pulling current session trade events
since it's both recommended in the docs and does seem to capture
more extensive meta-data.
- add an `update_ledger_from_api()` helper which does all the insane work
of making sure api trade entries are usable both within piker's global
fqsn system but also compatible with incremental updates of positions
computed from trade ledgers derived from ib's "flex reports".
- add "auditting" of `ib`'s reported positioning API messages by
comparison with piker's new "traders first" breakeven price style and
complain via logging on mismatches.
- handle buy vs. sell arithmetic (via a +ve or -ve multiplier) to make
"size" arithmetic work for API trade entries..
- draft out options contract transaction parsing but skip in pps
generation for now.
- always use the "execution id" as ledger keys both in flex and api
trade processing.
- for whatever weird reason `ib_insync` doesn't include the so called
"primary exchange" in contracts reported in fill events, so do manual
contract lookups in such cases such that pps entries can be placed
in the right fqsn section...
Still ToDo:
- incremental update on trade clears / position updates
- pps audit from ledger depending on user config?
This makes a few major changes but mostly is centered around including
transaction (aka trade-clear) costs in the avg breakeven price
calculation.
TL;DR:
- rename `TradeRecord` -> `Transaction`.
- make `Position.fills` a `dict[str, float]` which holds each clear's
cost value.
- change `Transaction.symkey` -> `.bsuid` for "backend symbol unique id".
- drop `brokername: str` arg to `update_pps()`
- rename `._split_active()` -> `dump_active()` and use input keys
verbatim in output map.
- in `update_pps_conf()` always incrementally update from trade records
even when no `pps.toml` exists yet since it may be both the case that
the ledger needs loading **and** the caller is handing new records not
yet in the ledger.
Begins the position tracking incremental update API which supports both
constructing a `pps.toml` from trade ledgers as well as diff-oriented
incremental update from an existing config assumed to be previously
generated from some prior ledger.
New set of routines includes:
- `_split_active()` a helper to split a position table into the active
and closed positions (aka pps of size 0) for determining entry updates
in the `pps.toml`.
- `update_pps_conf()` to maybe load a `pps.toml` and update it from
an input trades ledger including necessary (de)serialization to and
from `Position` object form(s).
- `load_pps_from_ledger()` a ledger parser-loader which constructs
a table of pps strictly from the broker-account ledger data without
any consideration for any existing pps file.
Each "entry" in `pps.toml` also contains a `fills: list` attr (name may
change) which references the set of trade records which make up its
state since the last net-zero position in the instrument.
Add a `TradeRecord` struct which holds the minimal field set to build
out position entries. Add `.update_pps()` to convert a set of records
into LIFO position entries, optionally allowing for an update to some
existing pp input set. Add `load_pps_from_ledger()` which does a full
ledger extraction to pp objects, ready for writing a `pps.toml`.
Since "flex reports" are only available for the current session's trades
the day after, this adds support for also collecting trade execution
records for the current session and writing them to the equivalent
ledger file.
Summary:
- add `trades_to_records()` to handle parsing both flex and API event
objects into a common record form.
- add `norm_trade_records()` to handle converting ledger entries into
`TradeRecord` types from the new `piker.pps` mod (coming in next
commit).
Start a generic "position related" util mod and bring in the `Position`
type from the allocator, convert it to a `msgspec.Struct` and add
a `.lifo_update()` method. Implement a WIP pp parser from a trades
ledger and use the new lifo method to gather position entries.
Add `ChartPlotWidget._on_screen: bool` which allows detecting for the
first state where there is y-range-able flow data loaded and able to be
drawn. Check for this flag to be set in `.maxmin()` such that until the
historical data is loaded `.default_view()` will be called to ensure
that a blank view is never shown: race with the UI starting versus the
data layer loading flow graphics can have this outcome.
This should hopefully make teardown more reliable and includes better
logic to fail over to a hard kill path after a 3 second timeout waiting
for the instance to complete using the `docker-py` wait API. Also
generalize the supervisor teardown loop by allowing the container config
endpoint to return 2 msgs to expect:
- a startup message that can be read from the container's internal
process logging that indicates it is fully up and ready.
- a teardown msg that can be polled for that indicates the container has
gracefully terminated after a cancellation request which is passed to
our container wrappers `.cancel()` method.
Make the marketstore config endpoint return the 2 messages we previously
had hard coded and use this new api.
This was introduced in #302 but after thorough testing was clearly
not working XD. Adjust the display loop to update the last graphics
segment on both the OHLC and vlm charts (as well as all deriving fsp
flows) whenever the uppx >= 1 and there is no current path append
taking place (since more datums are needed to span an x-pixel in view).
Summary of tweaks:
- move vlm chart update code to be at the end of the cycle routine and
have that block include the tests for a "interpolated last datum in
view" line.
- make `do_append: bool` compare with a floor of the uppx value (i.e.
appends should happen when we're just fractionally over a pixel of
x units).
- never update the "volume" chart.
Allows for optionally updating a "downsampled" graphics type which is
currently necessary in the `BarItems` -> `FlattenedOHLC` curve switching
case; we don't want to be needlessly redrawing the `Flow.graphics`
object (which will be an OHLC curve) when in flattened curve mode.
Further add a `only_last_uppx: bool` flag to `.draw_last()` to allow
forcing a "last uppx's worth of data max/min" style interpolating line
as needed.
The single-file module was getting way out of hand size-wise with the
new flex report parsing stuff so this starts the process of breaking
things up into smaller modules oriented around trade, data, and ledger
related endpoints.
Add support for backends to declare sub-modules to enable in
a `__enable_modules__: list[str]` module var which is parsed by the
daemon spawning code passed to `tractor`'s `enable_modules: list[str]`
input.
When using m4, we downsample to the max and min of each
pixel-column's-worth of data thus preserving range / dispersion details
whilst not drawing more graphics than can be displayed by the available
amount of horizontal pixels.
Take and apply this exact same concept to the "last datum" graphics
elements for any `Flow` that is reported as being in a downsampled
state:
- take the xy output from the `Curve.draw_last_datum()`,
- slice out all data that fits in the last pixel's worth of x-range
by using the uppx,
- compute the highest and lowest value from that data,
- draw a single line segment which spans this yrange thus creating
  a simple vertical set of pixels which are "filled in" and show the
  entire y-range for the most recent data "contained within that pixel".
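In sketch form (illustrative only, not the actual `Curve`/`Flow` code):

```python
import numpy as np


def last_pixel_yrange(
    y: np.ndarray,  # 1d y-values for the flow
    uppx: float,    # units (datums) per x-pixel
) -> tuple[float, float]:
    # all samples that land inside the most recent x-pixel column
    tail = y[-max(1, int(round(uppx))):]
    # the vertical segment to draw spans this y-range
    return float(np.nanmin(tail)), float(np.nanmax(tail))
```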
Instead of using a bunch of internal logic to modify low level paint-able
elements create a `Curve` lineage that allows for graphics "style"
customization via a small set of public methods:
- `Curve.declare_paintables()` to allow setup of state/elements to be
drawn in later methods.
- `.sub_paint()` to allow painting additional elements along with the
defaults.
- `.sub_br()` to customize the `.boundingRect()` dimensions.
- `.draw_last_datum()` which is expected to produce the paintable
elements which will show the last datum in view.
Introduce the new sub-types and load as necessary in
`ChartPlotWidget.draw_curve()`:
- `FlattenedOHLC`
- `StepCurve`
Reimplement all `.draw_last()` routines as a `Curve` method
and call it the same way from `Flow.update_graphics()`
The basic logic is now this:
- when zooming out, uppx (units per pixel in x) can be >= 1
- if the uppx is `n` then the next pixel in view becomes occupied by
a new datum-x-coordinate-value when the diff between the last
datum step (since the last such update) is greater then the
current uppx -> `datums_diff >= n`
- if we're less than some constant uppx we just always update (because
  it's not costly enough and we're not downsampling).
More or less this just avoids unnecessary real-time updates to flow
graphics until they would actually be noticeable via the next pixel
column on screen.
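The gating rule in sketch form (names and the constant threshold are
illustrative only):

```python
def should_update(
    datums_since_last_update: int,
    uppx: float,
    min_uppx: float = 2,  # assumed "constant uppx" threshold
) -> bool:
    if uppx < min_uppx:
        # not downsampled (enough) to bother gating; always update.
        return True

    # only update once the accumulated new datums would occupy the next
    # x-pixel column on screen.
    return datums_since_last_update >= uppx
```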
This was a bit of a nightmare to figure out but, it seems that the
coordinate caching system will really be a dick (like the nickname for
richard for you serious types) about leaving stale graphics if we don't
reset the cache on downsample full-redraw updates... Sooo, instead we do
this manual reset to avoid such artifacts and consequently (for now)
return a `reset: bool` flag in the return tuple from `Renderer.render()`
to indicate as such.
Some further shite:
- move the step mode `.draw_last()` equivalent graphics updates down
with the rest..
- drop some superfluous `should_redraw` logic from
`Renderer.render()` and compound it in the full path redraw block.
Adds a new pre-graphics data-format callback incremental update api to
our `Renderer`. A `Renderer` instance can now overload these custom routines:
- `.update_xy()` a routine which accepts the latest [pre/a]pended data
sliced out from shm and returns it in a format suitable to store in
the optional `.[x/y]_data` arrays.
- `.allocate_xy()` which initially does the work of pre-allocating the
`.[x/y]_data` arrays based on the source shm sizing such that new
data can be filled in (to memory).
- `._xy_[first/last]: int` attrs to track index diffs between src shm
and the xy format data updates.
Implement the step curve data format with 3 super simple routines:
- `.allocate_xy()` -> `._pathops.to_step_format()`
- `.update_xy()` -> `._flows.update_step_xy()`
- `.format_xy()` -> `._flows.step_to_xy()`
Further, adjust `._pathops.gen_ohlc_qpath()` to adhere to the new
call signature.
We're doing this in `Flow.update_graphics()` atm and probably are going
to in general want custom graphics objects for all the diff curve / path
types. The new flows work seems to fix the bounding rect width calcs to
not require the ad-hoc extra `+ 1` in the step mode case; before it was
always a bit hacky anyway. This also tries to add a more correct
bounding rect adjustment for the `._last_line` segment.
Finally this gets us much closer to a generic incremental update system
for graphics wherein the input array diffing, pre-graphical format data
processing, downsampler activation and incremental update and storage of
any of these data flow stages can be managed in one modular sub-system
:surfer_boi:.
Dirty deatz:
- reorg and move all path logic into `Renderer.render()` and have it
take in pretty much the same flags as the old
  `FastAppendCurve.update_from_array()` and instead store all update
state vars (even copies of the downsampler related ones) on the
renderer instance:
- new state vars: `._last_uppx, ._in_ds, ._vr, ._avr`
- `.render()` input bools: `new_sample_rate, should_redraw,
should_ds, showing_src_data`
- add a hack-around for passing in incremental update data (for now)
via a `input_data: tuple` of numpy arrays
- a default `uppx: float = 1`
- add new render interface attrs:
- `.format_xy()` which takes in the source data array and produces out
x, y arrays (and maybe a `connect` array) that can be passed to
`.draw_path()` (the default for this is just to slice out the index
and `array_key: str` columns from the input struct array),
- `.draw_path()` which takes in the x, y, connect arrays and generates
a `QPainterPath`
- `.fast_path`, for "appendable" updates like there was on the fast
append curve
- move redraw (aka `.clear()` calls) into `.draw_path()` and trigger
via `redraw: bool` flag.
- our graphics objects no longer set their own `.path` state, it's done
by the `Flow.update_graphics()` method using output from
`Renderer.render()` (and it's state if necessary)
A bit hacky to get all graphics types working but this is hopefully the
first step toward moving all the generic update logic into `Renderer`
types which can be themselves managed more compactly and cached per
uppx-m4 level.
Which is basically just "deleting" rows from a column series.
You can only use the trim command from the `.cmd` cli and only with a so
called `LocalClient` currently; it's also sketchy af and caused
a machine to hang due to mem usage..
Ideally we can patch in this functionality for use by the rpc api
and have it not hang like this XD
Pertains to https://github.com/alpacahq/marketstore/issues/264
Yet another path ops routine which converts a 1d array into a data
format suitable for rendering a "step curve" graphics path (aka a "bar
graph" but implemented as a continuous line).
Also, factor the `BarItems` rendering logic (which determines whether to
render the literal bars lines or a downsampled curve) into a routine
`render_baritems()` until we figure out the right abstraction layer for
it.
Starts a module for grouping together all our `QPainterpath` related
generation and data format operations for creation of fast curve
graphics. To start, drops `FastAppendCurve.downsample()` and moves
it to a new `._pathops.xy_downsample()`.
Mostly just dropping old commented code for "step mode" format
generation. Always slice the tail part of the input data and move to the
new `ms_threshold` in the `pg` profiler.
Relates to the bug discovered in #310, this should avoid out-of-order
msgs which do not have a `.reqid` set to be error logged to console.
Further, add `pformat()` to kraken logging of ems msging.
Since downsampling with the more correct version of m4 (uppx driven
windows sizing) is super fast now we don't need to avoid downsampling
on low uppx values. Further all graphics objects now support in-view
slicing so make sure to use it on interaction updates. Pass in the view
profiler to update method calls for more detailed measuring.
Even moar,
- Add a manual call to `.maybe_downsample_graphics()` inside the mouse
wheel event handler since it seems that sometimes trailing events get
lost from the `.sigRangeChangedManually` signal which can result in
"non-downsampled-enough" graphics on chart given the scroll amount;
this manual call seems to entirely fix this?
- drop "max zoom" guard since internals now support (near) infinite
scroll out to graphics becoming a single pixel column line XD
- add back in commented xrange signal connect code for easy testing to
verify against range updates not happening without it
This took longer than I care to admit XD but it definitely adds a huge
speedup and with only a few outstanding correctness bugs:
- panning from left to right causes strange trailing artifacts in the
flows fsp (vlm) sub-plot but only when some data is off-screen on the
left but doesn't appear to be an issue if we keep the `._set_yrange()`
handler hooked up to the `.sigXRangeChanged` signal (but we aren't
going to because this makes panning way slower). i've got a feeling
this is a bug todo with the device coordinate cache stuff and we may
need to report to Qt core?
- factoring out the step curve logic from
`FastAppendCurve.update_from_array()` (un)fortunately required some
logic branch uncoupling but also meant we needed special input controls
to avoid things like redraws and curve appends for special cases,
this will hopefully all be better rectified in code when the core of
this method is moved into a renderer type/implementation.
- the `tina_vwap` fsp curve now somehow causes hangs when doing erratic
scrolling on downsampled graphics data. i have no idea why or how but
disabling it makes the issue go away (ui will literally just freeze
and gobble CPU on a `.paint()` call until you ctrl-c the hell out of
it). my guess is that something in the logic for standard line curves
and appends on large data sets is the issue?
Code related changes/hacks:
- drop use of `step_path_arrays_from_1d()`, it was always a bit hacky
(being based on `pyqtgraph` internals) and was generally hard to
understand since it returns 1d data instead of the more expected (N,2)
array of "step levels"; instead this is now implemented (uglily) in
the `Flow.update_graphics()` block for step curves (which will
obviously get cleaned up and factored elsewhere).
- add a bunch of new flags to the update method on the fast append
curve: `draw_last: bool`, `slice_to_head: int`, `do_append: bool`,
`should_redraw: bool` which are all controls to aid with previously
mentioned issues specific to getting step curve updates working
correctly.
- add a ton of commented tinkering related code (that we may end up
using) to both the flow and append curve methods that was written as
part of the effort to get this all working.
- implement all step curve updating inline in `Flow.update_graphics()`
including prepend and append logic for pre-graphics incremental step
data maintenance and in-view slicing as well as "last step" graphics
updating.
Obviously clean up commits coming stat B)
Since we have in-view style rendering working for all curve types
(finally) we can avoid the guard for low uppx levels and without losing
interaction speed. Further don't delay the profiler so that the nested
method calls correctly report upward - which wasn't working likely due
to some kinda GC collection related issue.
More or less this improves update latency like mad. Only draw data in
view and avoid full path regen as much as possible within a given
(down)sampling setting. We now support append path updates with in-view
data and the *SPECIAL CAVEAT* is that we avoid redrawing the whole curve
**only when** we calc an `append_length <= 1` **even if the view range
changed**. XXX: this should change in the future probably such that the
caller graphics update code can pass a flag which says whether or not to
do a full redraw based on it knowing whether it's an interaction-based
view-range change or a flow update change which doesn't require a full
path re-render.
After much effort (and exhaustion) but failure to get a view into our
`numpy` OHLC struct-array, this instead allocates an in-thread-memory
array which is updated with flattened data every flow update cycle.
I need to report what I think is a bug to `numpy` core about the whole
view thing not working but, more or less this gets the same behaviour
and minimizes work to flatten the sampled data for line-graphics drawing
thus improving refresh latency when drawing large downsampled curves.
Update the OHLC ds curve with view aware data sliced out from the
pre-allocated and incrementally updated data (we had to add a last index
var `._iflat` to track appends - this should be moved into a renderer
eventually?).
This begins the removal of data processing / analysis methods from the
chart widget and instead moving them to our new `Flow` API (in the new
module introduce here) and delegating the old chart methods to the
respective internal flow. Most importantly, we no longer store the
"last read" of an array from shm in an internal chart table (was
`._arrays`) and instead the `ShmArray` instance is passed as input and
stored in the `Flow` instance. This greatly simplifies lookup logic such
that the display loop now doesn't have to worry about reading shm, it
can be done by internal graphics logic as desired. Generally speaking,
all previous `._arrays`/`._graphics` lookups are now delegated to the
entries in the chart's `._flows` table.
The new `Flow` methods are generally better factored and provide more
detailed output regarding data-stream <-> graphics inter-relations for
the future purpose of allowing much more efficient update calls in the
display loop as well as supporting low latency interaction UX.
The concept here is that we're introducing an intermediary layer that
ties together graphics and real-time data flows such that widget code is
oriented around plot layout and the flow apis are oriented around
real-time low latency updates and providing an efficient high level
metric layer for the UX.
The summary api transition is something like:
- `update_graphics_from_array()` -> `.update_graphics_from_flow()`
- `.bars_range()` -> `Flow.datums_range()`
Given that naming the port map is mostly pointless, since accounts can
be detected once the client connects, just expect a `brokers.toml` to
define a simple sequence of port numbers. Toss in a warning for using
the old map/`dict` style.
Now that we have working client auth thanks to:
https://github.com/barneygale/asyncvnc/pull/4 and related issue,
we can use a pw for the vnc server, though we should eventually
auto-generate a random one from a docker super obviously.
Add logic to the data reset hack loop to do a connection reset after
2 failed/timeout attempts at the regular data reset. We need to also add
this logic around reconnection events that are due to the host
network connection: aka roaming that's faster than the timing logic
built into the gateway.
`ib-gw` seems particularly fragile to connections from clients with the
same id (can result in weird connect hangs and even crashes) and
`ib_insync` doesn't handle intermittent tcp disconnects that
well..(especially on dockerized IBC setups). This adds a bunch of
changes to our client caching and scan loop as well a proper
task-locking-to-cache-proxies so that,
- `asyncio`-side clients aren't double-loaded/connected even when
explicitly trying to reconnect repeatedly with a given client to work
around the unreliability of the `asyncio.Transport` design in
`ib_insync`.
- we can use `tractor.trionics.maybe_open_context()` to lock the `trio`
  side from loading more than one `Client` on the `asyncio` side and
instead on cache hits only making a new `MethodProxy` around the
reused `asyncio`-side client (since each `trio` task needs its own
inter-task msg channel).
- a `finally:` block teardown on all clients loaded in the scan loop
avoids stale connections.
- the connect params are now exposed as named args to
  `load_aio_clients()` so they can be easily controlled from caller code.
Oh, and we properly hooked up the internal `ib_insync` logging to our
own internal schema - makes it a lot easier to debug wtf is going on XD
In order to expose more `asyncio` powered `Client` methods to endpoint
task-code this adds a more extensive and layered set of `MethodProxy`
loading routines, in dependency order these are:
- `load_clients_for_trio()` a `tractor.to_asyncio.open_channel_from()`
entry-point factory for loading all scanned clients on the `asyncio` side
and delivering them over the inter-task channel to a `trio`-side task.
- `get_preferred_data_client()` a simple client instance loading routine
  which reads from the user's `brokers.toml` -> `prefer_data_account:
  list[str]` entry which must list account names, in priority order, that
  are acceptable to be used as the main "data connection client" such that
  only one of the detected clients is used for data (whereas the rest
  are used only for order entry); a rough sketch follows below.
- `open_client_proxies()` which delivers the detected `Client` set
wrapped each in a `MethodProxy`.
- `open_data_client()` which directly delivers the preferred data client
as a proxy for `trio` tasks.
- update `open_client_method_proxy()` and `open_client_proxy` to require
an input `Client` instance.
Further impl details:
- add `MethodProxy._aio_ns` to ref the original `asyncio` side proxied instance
- add `Client.trades()` to pull executions from the last day/session
- load proxies inside `trades_dialogue` and use the new `.trades()`
method to try and pull a fill ledger for eventual correct pp price
calcs (pertains to #307)..
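Sketch only (not the actual routine) of that preferred-data-client
selection:

```python
def pick_data_client(
    clients: dict[str, object],      # account name -> connected Client
    prefer_data_account: list[str],  # priority-ordered names from brokers.toml
):
    # return the first connected client whose account name appears in the
    # user's priority list; everything else is left for order entry only.
    for name in prefer_data_account:
        client = clients.get(name)
        if client:
            return name, client

    raise ValueError(
        'No preferred data account found in '
        f'{list(clients)}; check your brokers.toml'
    )
```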
We return a copy (since a view doesn't seem to work..) of the
(field filtered) shm array contents which is the same index-length as
the source data.
Further, fence off the resource tracker disable-hack into a helper
routine.
It seems once in a while a frame can get missed or dropped (at least
with binance?) so in those cases, when the request erlang count is already at
max, we just manually request the missing frame and presume things will
work out XD
Further, discard out of order frames that are "from the future" that
somehow end up in the async queue once in a while? Not sure why this
happens but it seems thus far just discarding them is nbd.
Bleh/🤦, the ``end_dt`` in scope is not the "earliest" frame's
`end_dt` in the async response queue.. Parse the queue's latest epoch
and use **that** to compare to the last pushed datetime index..
Add more detailed logging to help debug any (un)expected datetime index
gaps.