More or less this is just to prevent orders the user wasn't aware of
from persisting until we get "open order relaying" through the ems
working.
Some further fixes which required a new `reqids2txids` map which keeps
track of which `kraken` "txid" is mapped to our `reqid: int`; mainly
this was needed for cancel requests which require knowing the underlying
`txid`s (since apparently kraken doesn't keep track of the "reqid" we
pass it). Pass the ws instance into `handle_order_updates()` to enable
cancelling orders on startup. Don't `KeyError` on unknown `reqid`
values (e.g. when receiving historical trade events on startup).
Handle cancel requests first in the ems side loop.
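For reference, a minimal sketch of how such a map can drive cancels;
the `reqids2txids` name is from above while the msg shapes and
`send_msg()` call are assumptions modeled on kraken's documented ws
`cancelOrder` call:

```python
# sketch: kraken's ws `cancelOrder` takes the exchange-assigned
# txid(s), not our `reqid`, hence the lookup table.
reqids2txids: dict[int, str] = {}

def on_open_order(msg: dict) -> None:
    # record the txid kraken assigned for our request id; unknown
    # or absent reqids (eg. historical trade events replayed at
    # startup) are simply ignored instead of `KeyError`-ing.
    reqid, txid = msg.get('reqid'), msg.get('txid')
    if reqid is not None and txid:
        reqids2txids[reqid] = txid

async def cancel_order(ws, token: str, reqid: int) -> None:
    await ws.send_msg({
        'event': 'cancelOrder',
        'token': token,
        'txid': [reqids2txids[reqid]],
    })
```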
Since we seem to always be able to get back the `reqid`/`userref` value
we send to kraken ws endpoints, we can use this as our brokerd side
order id and avoid all the race conditions around getting the true
`txid` value that `kraken` assigns (and which changes when you do
"edits" :eyeroll:). This simplifies status updates by allowing our
relay loop to just pass back our generated `.reqid` verbatim, and
allows responding with a `BrokerdOrderAck` immediately in the request
handler task, which should guarantee there are no further race
conditions between the relay loop and mapping `txid`s from kraken..
and figuring out wtf to do when they change, etc.
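A rough sketch of that immediate-ack flow; the `BrokerdOrderAck`
fields, dict-shaped ems msgs, and stream/ws method names here are all
stand-in assumptions:

```python
# sketch: ack the ems right away with our generated int `reqid` so
# no status update ever has to wait on kraken's txid assignment.
from dataclasses import dataclass
from itertools import count

@dataclass
class BrokerdOrderAck:  # stand-in for piker's actual msg type
    oid: str    # ems-side uuid
    reqid: int  # our brokerd-side id (kraken's `userref`)
    account: str

_reqids = count(1)

async def handle_order_requests(ws, ems_stream, token: str) -> None:
    async for order in ems_stream:
        reqid = next(_reqids)
        # respond in the request handler task *before* kraken
        # confirms; the relay loop echoes `reqid` back verbatim.
        await ems_stream.send(BrokerdOrderAck(
            oid=order['oid'],
            reqid=reqid,
            account='kraken.spot',
        ))
        await ws.send_msg({
            'event': 'addOrder',
            'token': token,
            'reqid': reqid,  # passed back to us as `userref`
            # pair/type/price/volume fields elided..
        })
```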
Addressing the same issue as in #350 where we need to compute position
updates using the *first read* from the ledger **before** we update it,
to make sure `Position.lifo_update()` gets called and **not skipped**
just because new trades were read as clears entries but hadn't actually
been included in update calcs yet.
Main change here is to convert `update_ledger()` into a context manager
so that the ledger write is committed only after pps updates using
`pp.update_pps_conf()`..
This is basically a hotfix to #346 as well.
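The context-manager shape might look roughly like this sketch, where
`write_ledger()` is a hypothetical stand-in for the actual toml write:

```python
# sketch: defer the ledger file write until after pp calcs so
# `Position.lifo_update()` always sees the trades as *new* clears.
from contextlib import contextmanager

def write_ledger(acctid: str, records: list) -> None:
    ...  # stand-in for the actual toml ledger file write

@contextmanager
def update_ledger(acctid: str, records: list):
    # hand the not-yet-persisted records to the caller first..
    yield records
    # ..and only commit them to file on exit, *after* the caller
    # has run `pp.update_pps_conf()` over them.
    write_ledger(acctid, records)
```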
Turns out the EMS can support this as originally expected: you can
update a `brokerd`-side `.reqid` through a `BrokerdAck` msg and the ems
will update its cross-dialog (leg) tracking correctly! The issue was
a bug in the `editOrderStatus` msg handling and appropriate tracking
of the correct `.oid` (ems uid) on the kraken side. This unfortunately
required adding an `emsflow: dict[str, list[BrokerdOrder]]` msg flow
tracing table, which means the broker daemon is tracking all the msg
flow with the ems, though I'm wondering now if this is just good
practice anyway and maybe we should offer a small primitive type from
our msging utils to aid with this? I've used such constructs in event
handling systems prior.
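The table itself is just an append-only per-dialog msg history;
something like this sketch (the helper names are illustrative):

```python
# sketch: trace every order msg per ems dialog, keyed by ems `.oid`.
from collections import defaultdict

emsflow: defaultdict[str, list] = defaultdict(list)

def trace(oid: str, msg) -> None:
    # append-only history of all request/response legs for a dialog
    emsflow[oid].append(msg)

def latest_reqid(oid: str):
    # the most recent entry holds the current kraken-side `.reqid`
    msgs = emsflow[oid]
    return msgs[-1].reqid if msgs else None
```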
There's a lot more factoring that can be done after these changes as
well but the quick detailed summary is,
- rework the `handle_order_requests()` loop to use `match:` syntax and
update the new `emsflow` table on every new request from the ems.
- fix the `editOrderStatus` case pattern to not include an error msg and
  thus actually be triggered to respond to the ems with a `BrokerdAck`
  containing the new `.reqid`, the new kraken side `txid` (see the
  sketch after this list).
- skip any `openOrders` msgs which are detected as being kraken's
internal order "edits" by matching on the `cancel_reason` field.
- update the `emsflow` table in all ws-stream msg handling blocks
with responses sent to the ems.
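To make the second bullet concrete, a sketch of the corrected case
split, following kraken's documented `editOrderStatus` response shape
(the `ids` lookup table and msg type are stand-ins):

```python
from dataclasses import dataclass

@dataclass
class BrokerdOrderAck:  # stand-in for piker's actual msg type
    oid: str
    reqid: str | int

async def handle_edit_status(msg: dict, ems_stream, ids: dict) -> None:
    match msg:
        # an `errorMessage` key in the pattern means it *only*
        # matches error responses - the original bug - so keep the
        # error and success cases separate.
        case {
            'event': 'editOrderStatus',
            'status': 'error',
            'errorMessage': errmsg,
        }:
            print(f'order edit failed: {errmsg}')  # stand-in log

        case {
            'event': 'editOrderStatus',
            'status': 'ok',
            'reqid': reqid,
            'txid': txid,  # kraken assigns a *new* txid per edit
        }:
            # ack the ems with the new kraken-side id so its
            # cross-dialog (leg) tracking stays in sync.
            await ems_stream.send(BrokerdOrderAck(
                oid=ids[reqid],  # reqid -> ems uid lookup
                reqid=txid,
            ))
```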
Relates to #290
Move to using the websocket API for all order control ops and drop
the sync REST API approach, which resulted in a bunch of buggy races.
Further this gets us much faster (batch) order cancellation for free
and a simpler ems request handler loop. We now heavily leverage the new
py3.10 `match:` syntax for all kraken-side API msg parsing and
processing and handle both the `openOrders` and `ownTrades` subscription
streams.
We also block "order editing" (by immediate cancellation) for now since
the EMS isn't yet entirely equipped to handle brokerd side `.reqid`
changes (which is how kraken implements so called order "updates" or
"edits") for a given order-request dialog, and we may even want to
consider just implementing "updates" ourselves via independent cancel
and submit requests? Definitely something to ponder. Alternatively we
can "masquerade" such updates behind the count-style `.oid` remapping we
had to implement anyway (kraken's limitation) and maybe everything will
just work?
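The interim blocking might look something like this sketch (the
edit-msg shape and the `reqids2txids` table are assumptions carried
over from the cancel sketch above):

```python
# sketch: until the EMS handles brokerd-side `.reqid` swaps, turn
# inbound edits into outright cancels so no order lives on under a
# txid the dialog doesn't know about.
reqids2txids: dict[int, str] = {}  # as in the cancel sketch above

async def block_edits(ws, ems_stream, token: str) -> None:
    async for msg in ems_stream:
        match msg:
            case {'action': 'edit', 'reqid': reqid}:
                await ws.send_msg({
                    'event': 'cancelOrder',
                    'token': token,
                    'txid': [reqids2txids[reqid]],
                })
            case _:
                pass  # submit/cancel handling elided
```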
Further details in this patch:
- create 2 tables for tracking the EMS's `.oid` (uuid4) value to the
  `int`s that kraken expects (for `reqid`s): `ids` and `reqmsgs`, which
  enable local lookup of ems uids to piker-backend-client-side request
  ids and received order messages (see the sketch after this list).
- add `openOrders` sub support which more or less directly relays to
  equivalent `BrokerdStatus` updates and calcs the `.filled` and
  `.remaining` values based on cleared vlm updates.
- add handler blocks for `[add/edit/cancel]OrderStatus` events including
error msg cases.
- don't do any order request response processing in
`handle_order_requests()` since responses are always received via one
(or both?) of the new ws subs: `ownTrades` and `openOrders` and thus
such msgs are now handled in the response relay loop.
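To illustrate the first bullet, the two tables are just small lookup
maps, roughly like the following (exact value types assumed):

```python
# sketch: kraken only accepts integer `reqid`s while the EMS hands
# us uuid4 strings, so keep a pair of lookup tables.
from itertools import count

_id_gen = count(1)

ids: dict[str, int] = {}       # ems `.oid` (uuid4) -> kraken reqid
reqmsgs: dict[int, dict] = {}  # kraken reqid -> last order msg

def map_order(order: dict) -> int:
    # reuse the same int for a known ems dialog, else mint one
    reqid = ids.setdefault(order['oid'], next(_id_gen))
    reqmsgs[reqid] = order
    return reqid
```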
Relates to #290
Resolves #310, #296
Not sure why I put this off for so long but the check is in now such
that if the market isn't open or no rt quote comes in from the first
query, we just pull from the last shm history 'close' value.
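The fallback itself is tiny; a sketch assuming a numpy structured shm
array with a 'close' field:

```python
# sketch: seed the quote with the last historical close when the
# market is closed or the first query returns no rt quote.
import numpy as np

def last_or_close(rt_quote: dict | None, shm: np.ndarray) -> float:
    if rt_quote and rt_quote.get('last'):
        return float(rt_quote['last'])
    # fall back to the most recent 'close' in shm history
    return float(shm['close'][-1])
```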
Includes another fix to avoid raising when a double remove of the
client-side stream from the registry sometimes happens.
Not sure how this didn't get caught in usage, but basically real-time
updates got broken by a rework of `update_ledger_from_api_trades()`.
The issue is that the ledger was being updated **before** calling
`piker.pp.update_pps_conf()` which resulted in the `Position.size`
not being updated correctly since the [latest added] clears passed
in via the `trade_records` arg were already found in the `.clears` table
and thus were causing the loop to skip the `Position.lifo_update()`
call..
The solution here is to not update the ledger **until after** we call
`update_pps_conf()` - it's more reads/writes but it's correct, and we
can figure out a less IO-heavy way to do the file writing later.
Further, this includes a fix to avoid double-emitting a pp update caused
by non-thorough logic that waits for a commission report to arrive
during a fill event; previously we were emitting the same message twice
due to the lack of a check for an existing comms report in the case
where the report arrives *after* the fill.
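The dedupe check amounts to pairing each fill with its comms report
exactly once, whichever of the two arrives last; a sketch with
hypothetical tables:

```python
# sketch: emit a single pp update per execution id, only once both
# the fill event and its commission report have arrived.
fills: dict[str, dict] = {}    # exec id -> fill event
reports: dict[str, dict] = {}  # exec id -> comms report
emitted: set[str] = set()

def maybe_pair(execid: str) -> tuple[dict, dict] | None:
    fill, report = fills.get(execid), reports.get(execid)
    if fill and report and execid not in emitted:
        emitted.add(execid)  # guards the double-emit case
        return fill, report
    return None
```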
Moves to using the new `piker.pp` apis to both store real-time trade
events in a ledger file as well as emit position update msgs (which were
not in this backend at all prior) when new orders clear (aka fill).
In terms of outstanding issues,
- solves the pp update part of the bugs reported in #310
- starts a msg case block in prep for #293
Details of rework:
- move the `subscribe()` ws fixture to module level and `partial()` in
  the client token instead of passing it to the instance; in prep for
  removal of the `.token` attr from the `NoBsWs` wrapper (see the
  sketch after this list).
- drop `make_auth_sub()` since it was too thin and we can just
do it all succinctly in `subscribe()`
- filter trade update msgs to those not yet stored in the toml ledger
- much better kraken api msg unpacking using the new `match:` syntax B)
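The first bullet's module-level fixture looks roughly like the
following, using kraken's documented private-sub format with the token
bound via `partial()` (the `NoBsWs` usage shown is an assumption):

```python
# sketch: module-level ws `subscribe()` fixture; the client token
# is bound via `partial()` rather than stashed on the ws instance.
from functools import partial

async def subscribe(ws, token: str) -> None:
    # auth subs for both private streams, done inline instead of
    # via a separate `make_auth_sub()` helper.
    for name in ('ownTrades', 'openOrders'):
        await ws.send_msg({
            'event': 'subscribe',
            'subscription': {'name': name, 'token': token},
        })

# usage (assumed): NoBsWs(url, fixture=partial(subscribe, token=token))
```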
Resolves #311
No real-time update support (yet) but this is the first draft at writing
trades ledgers and `pps.toml` entries for the kraken backend.
Deatz:
- drop `pack_positions()`, no longer used.
- use `piker.pp` apis to both write a trades ledger file and update the
`pps.toml` inside the `trades_dialogue()` endpoint startup.
- drop the weird paper engine swap-over if auth can't be done; we should
  be doing something with messaging in the ems over this..
- more web API error response raising.
- pass the `pp.Transaction` set loaded from the ledger into
  `process_trade_msgs()` to avoid duplicate sends of already collected
  trades msgs.
- add `norm_trade_records()` public endpoint (used by the `piker.pp`
  api) and an `update_ledger()` helper (see the sketch after this list).
- rejig `process_trade_msgs()` to drop the weird `try:` assertion block
and skip already-recorded-in-ledger trade msgs as well as yield *each*
trade instead of sub-sequences.
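A sketch of the normalization step referenced above, using kraken's
REST trade-entry field names; the returned dicts stand in for the
actual `pp.Transaction` type:

```python
# sketch: normalize raw kraken trade entries into transaction
# records, flipping the size sign on sells.
def norm_trade_records(ledger: dict[str, dict]) -> dict[str, dict]:
    records = {}
    for tid, record in ledger.items():
        size = float(record['vol'])
        if record['type'] == 'sell':
            size = -size
        records[tid] = dict(  # stand-in for `pp.Transaction(...)`
            fqsn=f"{record['pair'].lower()}.kraken",
            tid=tid,
            size=size,
            price=float(record['price']),
            cost=float(record['fee']),
        )
    return records
```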