Commit Graph

2663 Commits (deribit)

Author SHA1 Message Date
Tyler Goodlet 2386270cad Handle too-fast-edits, add `ChainMap` msg tracing
Since our ems doesn't actually do blocking-style client-side submission
updates, the client can update an existing order's state before knowing
its current state, which can lead to race conditions where, for some
backends, an order is updated using the wrong order id. For kraken we
manually implement detecting this race (lol, for now anyway) such that
when a new client-side edit comes in before the new `txid` is known, we
simply expect the handler loop to cancel the order. Further, this adds
cancellation on arbitrary status errors, like rate limits.

Also this adds 2-leg (ems <-> brokerd <-> kraken) msg tracing using
a `collections.ChainMap`, which is likely going to end up being the POC
for a more general data structure recommended for backends that need to
trace msg flow for translation with the ems.
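A minimal, self-contained sketch of the 2-leg tracing idea using
`collections.ChainMap`; the field names and values are purely
illustrative, not the actual piker msg schema:

```python
from collections import ChainMap

# illustrative only: layer the kraken-side view of an order dialog on
# top of the ems-side view so lookups fall back through both "legs".
ems_leg = {'oid': 'a1b2c3d4-uuid', 'price': 10.5, 'size': 2.0}
kraken_leg = {'txid': 'OABC12-XYZ9-QRSTUV', 'price': 10.5}

msgflow = ChainMap(kraken_leg, ems_leg)

# keys resolve from the first (kraken) layer and fall back to the ems leg:
assert msgflow['txid'] == 'OABC12-XYZ9-QRSTUV'
assert msgflow['oid'] == 'a1b2c3d4-uuid'

# each new update from either side can be pushed as a fresh layer
# without clobbering prior state, giving a cheap per-dialog trace:
msgflow = msgflow.new_child({'status': 'open'})
assert msgflow['status'] == 'open'
```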
2022-07-30 17:33:45 -04:00
Tyler Goodlet 5b135fad61 Handle pre-existing open orders specifically by checking for null `oid` 2022-07-30 17:33:45 -04:00
Tyler Goodlet abb6854e74 Make all `.bsuid`s the normed symbol "altname"s 2022-07-30 17:33:45 -04:00
Tyler Goodlet 22f9b2552c Provide symbol norming via a classmethod + global table 2022-07-30 17:33:45 -04:00
Tyler Goodlet 57f2478dc7 Fixes for state updates and clears
Turns out the `reqid` value returned by the `openOrders` and `ownTrades`
subs (the one brokerd sends to the kraken api in order requests) is
always set to zero, which seems to be a bug? So this includes patches to
work around that as well as reliance on the `openOrders` sub to do most
`BrokerdStatus` updates since `XOrderStatus` events don't seem to have
much data in them at all (they almost look like pure ack events so maybe
they aren't affirmative of final state changes anyway..).

Other fixes:
- respond with a `BrokerdOrderAck` immediately after `reqid` generation,
  not after order submission, to ensure the ems has a valid `reqid`
  *before* kraken api events are relayed through.
- add a `reqids2txids: bidict[int, str]` which maps brokerd generated
  `reqid`s to kraken-side `txid`s (see the sketch after this list) since
  (as mentioned above) the clearing and state endpoints don't relay back
  this value (it's always 0...)
- add log messages for each sub so that (at least for now) we can see
  exact msg contents coming from kraken.
- drop `.remaining` calcs for now since we need to keep record of the
  order states manually in order to retrieve the original submission
  vlm..
- fix the `openOrders` case for fills, in this case the message includes
  no `status` field and thus we must catch it in a block *after* the
  normal state handler to avoid masking.
- drop response msg generation from the cancel status case since we
  can do it again from the `openOrders` handler and sending a double
  status causes issues on the client side.
- add a shite ton of notes around all this missing `reqid` stuff.
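A minimal sketch of that two-way mapping using the `bidict` package (the
ids shown are made up):

```python
from bidict import bidict

# maps brokerd generated int reqids <-> kraken-side txid strings
reqids2txids: bidict[int, str] = bidict()

# at submission time only our own reqid exists..
reqid = 10

# ..until a later `openOrders` (or edit response) msg reveals the txid:
reqids2txids[reqid] = 'OABC12-XYZ9-QRSTUV'

# lookups work in both directions, which is exactly what cancel
# requests (which need the kraken `txid`) require:
assert reqids2txids[10] == 'OABC12-XYZ9-QRSTUV'
assert reqids2txids.inverse['OABC12-XYZ9-QRSTUV'] == 10
```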
2022-07-30 17:33:45 -04:00
Tyler Goodlet 5dc9a61ec4 Use cancel level logging for cancelled orders 2022-07-30 17:33:45 -04:00
Tyler Goodlet b0d3d9bb01 TOSQUASH: lingering `.dict()`s 2022-07-30 17:33:45 -04:00
Tyler Goodlet caecbaa231 Cancel any live orders found on connect
More or less just to keep orders the user wasn't aware of from
persisting until we get "open order relaying" through the ems working.

Some further fixes required a new `reqids2txids` map which keeps
track of which `kraken` "txid" is mapped to our `reqid: int`; mainly
this was needed for cancel requests which require knowing the underlying
`txid`s (since apparently kraken doesn't keep track of the "reqid" we
pass it). Pass the ws instance into `handle_order_updates()` to enable
cancelling orders on startup. Don't key error on unknown `reqid`
values (eg. when receiving historical trade events on startup).
Handle cancel requests first in the ems side loop.
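A rough sketch of the connect-time cancel pass, assuming a thin async ws
wrapper exposing a `send_msg()` coroutine and the initial `openOrders`
snapshot already parsed into a list of `{txid: order}` maps (both
assumptions, not the exact piker api):

```python
async def cancel_all_on_connect(
    ws,                     # assumed: ws wrapper with an async `.send_msg()`
    token: str,             # kraken ws auth token
    snapshot: list[dict],   # parsed `openOrders` snapshot: [{txid: order}, ..]
) -> None:
    '''
    Cancel any pre-existing live orders found at connect time so nothing
    the user isn't aware of keeps sitting on the book.

    '''
    txids: list[str] = [txid for entry in snapshot for txid in entry]
    if txids:
        # kraken's ws `cancelOrder` request accepts a list of txids,
        # giving us batch cancellation in a single msg.
        await ws.send_msg({
            'event': 'cancelOrder',
            'token': token,
            'txid': txids,
        })
```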
2022-07-30 17:33:45 -04:00
Tyler Goodlet a20a8d95d5 Use `aclosing()` around ws async gen 2022-07-30 17:33:45 -04:00
Tyler Goodlet ba93f96c71 Lol, gotta `float()` that vlm before `*` XD 2022-07-30 17:33:45 -04:00
Tyler Goodlet 804e9afdde Pass our manually mapped `reqid: int` to EMS
Since we seem to always be able to get back the `reqid`/`userref` value
we send to kraken ws endpoints, we can use this as our brokerd-side
order id and avoid all race cases with getting the true `txid` value
that `kraken` assigns (and which changes when you do "edits"
:eyeroll:). This simplifies status updates by allowing our relay loop
to just pass back our generated `.reqid` verbatim and allows responding
with a `BrokerdOrderAck` immediately in the request handler task, which
should guarantee there are no further race conditions with the relay
loop and mapping `txid`s from kraken.. and figuring out wtf to do when
they change, etc.
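A condensed sketch of the resulting submit path; `BrokerdOrderAck` is
piker's ems ack msg but its exact field set and import path are
abbreviated/assumed here, as is the counter-based reqid generation:

```python
from itertools import count

from piker.clearing._messages import BrokerdOrderAck  # assumed import path

# monotonically increasing brokerd-side ints, doubling as kraken's
# `reqid`/`userref` value (illustrative, not the exact impl).
_reqids = count(1)

async def submit_limit(order, ems_stream, ws, token: str) -> None:
    reqid: int = next(_reqids)

    # ack the ems *immediately* with our generated reqid so every later
    # status msg can be matched on it, before kraken relays anything back.
    await ems_stream.send(BrokerdOrderAck(
        oid=order.oid,    # ems-side uuid4
        reqid=reqid,      # our brokerd/kraken-side int id
    ))

    # only now fire the actual ws submission to kraken:
    await ws.send_msg({
        'event': 'addOrder',
        'token': token,
        'reqid': reqid,
        'ordertype': 'limit',
        'type': order.action,           # 'buy' | 'sell'
        'pair': order.symbol,
        'price': str(order.price),
        'volume': str(abs(order.size)),
    })
```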
2022-07-30 17:33:45 -04:00
Tyler Goodlet 89bcaed15e Add ledger and `pps.toml` snippets 2022-07-30 17:33:45 -04:00
Tyler Goodlet bb2f8e4304 Try out a backend readme 2022-07-30 17:33:45 -04:00
Tyler Goodlet 8ab8268edc Don't require an ems msg symbol on error statuses 2022-07-30 17:33:45 -04:00
Tyler Goodlet bbcc55b24c Update ledger *after* pps updates from new trades
Addressing the same issue as in #350: we need to compute position
updates using the *first read* from the ledger, **before** we update it,
to make sure `Position.lifo_update()` gets called and **not skipped**
because new trades were read as clears entries but haven't actually been
included in the update calcs yet.

Main change here is to convert `update_ledger()` into a context mngr so
that the ledger write is committed after pps updates using
`pp.update_pps_conf()`..

This is basically a hotfix to #346 as well.
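A minimal sketch of that context-manager pattern using a plain
toml-backed dict in place of the real ledger io (names here are
stand-ins for the actual `update_ledger()` rework):

```python
from contextlib import contextmanager
import toml

@contextmanager
def open_ledger_update(path: str, new_trades: dict):
    '''
    Yield the *pre-update* ledger so position calcs run against the
    first read; only commit the new trades to disk after the `with`
    block (ie. after the pps updates) has completed.

    '''
    try:
        with open(path) as fp:
            ledger: dict = toml.load(fp)
    except FileNotFoundError:
        ledger = {}

    try:
        # caller computes pp updates against this not-yet-updated view
        yield ledger
    finally:
        ledger.update(new_trades)
        with open(path, 'w') as fp:
            toml.dump(ledger, fp)
```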
2022-07-30 17:33:45 -04:00
Tyler Goodlet 9fa9c27e4d Factor status handling into a new `process_status()` helper 2022-07-30 17:33:45 -04:00
Tyler Goodlet d9b4c4a413 Factor msg loop into new func: `handle_order_updates()` 2022-07-30 17:33:45 -04:00
Tyler Goodlet 84cab1327d Drop unneeded count-sequence verification 2022-07-30 17:33:45 -04:00
Tyler Goodlet df4cec930b Get order "editing" working fully
Turns out the EMS can support this as originally expected: you can
update a `brokerd`-side `.reqid` through a `BrokerdAck` msg and the ems
will update its cross-dialog (leg) tracking correctly! The issue was
a bug in the `editOrderStatus` msg handling and appropriate tracking
of the correct `.oid` (ems uid) on the kraken side. This unfortunately
required adding an `emsflow: dict[str, list[BrokerdOrder]]` msg flow
tracing table, which means the broker daemon is tracking all the msg
flow with the ems, though I'm wondering now if this is just good
practice anyway and maybe we should offer a small primitive type from
our msging utils to aid with this? I've used such constructs in event
handling systems prior.

There's a lot more factoring that can be done after these changes as
well but the quick detailed summary is,
- rework the `handle_order_requests()` loop to use `match:` syntax and
  update the new `emsflow` table on every new request from the ems
  (sketched after this list).
- fix the `editOrderStatus` case pattern to not include an error msg and
  thus actually be triggered to respond to the ems with a `BrokerdAck`
  containing the new `.reqid`, the new kraken side `txid`.
- skip any `openOrders` msgs which are detected as being kraken's
  internal order "edits" by matching on the `cancel_reason` field.
- update the `emsflow` table in all ws-stream msg handling blocks
  with responses sent to the ems.
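A self-contained toy version of the `emsflow` trace table and the
`match:` based request handling it gets updated from (msg shapes are
simplified dicts rather than the real `BrokerdOrder` structs):

```python
from collections import defaultdict

# per-dialog msg flow trace: ems `.oid` (uuid4 str) -> ordered list of
# request msgs seen for that order dialog.
emsflow: defaultdict[str, list[dict]] = defaultdict(list)

def handle_request(msg: dict) -> str:
    match msg:
        case {'action': 'buy' | 'sell', 'oid': oid}:
            emsflow[oid].append(msg)
            return 'submit-or-edit'

        case {'action': 'cancel', 'oid': oid}:
            emsflow[oid].append(msg)
            return 'cancel'

        case _:
            return 'unknown'

# a submit followed by an "edit" of the same dialog leaves a 2 entry
# trace which later ws responses can be matched back against:
handle_request({'action': 'buy', 'oid': 'deadbeef', 'price': 10.0, 'size': 1})
handle_request({'action': 'buy', 'oid': 'deadbeef', 'price': 10.5, 'size': 1})
assert len(emsflow['deadbeef']) == 2
```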

Relates to #290
2022-07-30 17:33:45 -04:00
Tyler Goodlet ab08dc582d Make ems relay loop report on brokerd `.reqid` changes 2022-07-30 17:33:45 -04:00
Tyler Goodlet f79d9865a0 Use `match:` syntax in data feed subs processing 2022-07-30 17:33:45 -04:00
Tyler Goodlet 00378c330c First draft, working WS based order management
Move to using the websocket API for all order control ops and drop
the sync rest api approach which resulted in a bunch of buggy races.
Further this gets us much faster (batch) order cancellation for free
and a simpler ems request handler loop. We now heavily leverage the new
py3.10 `match:` syntax for all kraken-side API msg parsing and
processing and handle both the `openOrders` and `ownTrades` subscription
streams.

We also block "order editing" (by immediate cancellation) for now since
the EMS isn't entirely yet equipped to handle brokerd side `.reqid`
changes (which is how kraken implements so called order "updates" or
"edits") for a given order-request dialog and we may want to even
consider just implementing "updates" ourselves via independent cancel
and submit requests? Definitely something to ponder. Alternatively we
can "masquerade" such updates behind the count-style `.oid` remapping we
had to implement anyway (kraken's limitation) and maybe everything will
just work?

Further details in this patch:
- create 2 tables for mapping the EMS's `.oid` (uuid4) value to the
  `int`s that kraken expects (for `reqid`s): `ids` and `reqmsgs`, which
  enable local lookup of ems uids to piker-backend-client-side request
  ids and received order messages.
- add `openOrders` sub support which more or less directly relays to
  equivalent `BrokerdStatus` updates and calc the `.filled` and
  `.remaining` values based on cleared vlm updates (see the sketch
  after this list).
- add handler blocks for `[add/edit/cancel]OrderStatus` events including
  error msg cases.
- don't do any order request response processing in
  `handle_order_requests()` since responses are always received via one
  (or both?) of the new ws subs: `ownTrades` and `openOrders` and thus
  such msgs are now handled in the response relay loop.
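A self-contained example of the `match:` style parsing for an
`openOrders` msg and the filled/remaining calc from the cumulative
executed volume (the sample values are made up; field names follow
kraken's ws docs):

```python
# one `openOrders` ws msg: a list of {txid: order-info} maps, the
# channel name, and a sequence marker.
msg = [
    [{'OABC12-XYZ9-QRSTUV': {
        'status': 'open',
        'vol': '2.0',        # original order volume
        'vol_exec': '0.5',   # cumulative executed volume
    }}],
    'openOrders',
    {'sequence': 3},
]

match msg:
    case [[*order_maps], 'openOrders', _]:
        for order_map in order_maps:
            for txid, info in order_map.items():
                filled = float(info['vol_exec'])
                remaining = float(info['vol']) - filled
                # in the real handler these feed a `BrokerdStatus` update
                print(f'{txid}: {info["status"]} '
                      f'filled={filled} remaining={remaining}')
```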

Relates to #290
Resolves #310, #296
2022-07-30 17:33:45 -04:00
goodboy 180b97b180
Merge pull request #369 from pikers/pydantic_zombie
Drop `pydantic.create_model()` usage for `msgspec.defstruct()`
2022-07-30 17:33:18 -04:00
Tyler Goodlet f0b3a4d5c0 Drop `pydantic.create_model()` usage for `msgspec.defstruct()` 2022-07-30 17:01:56 -04:00
goodboy e2e66324cc
Merge pull request #363 from pikers/ib_pps_upgrade
`ib` pps api layer upgrade
2022-07-27 14:50:28 -04:00
Tyler Goodlet d950c78b81 Mention liquidation in error msg 2022-07-27 14:40:32 -04:00
Tyler Goodlet 7dbcbfdcd5 Write `pps.toml` shortly after broker startup 2022-07-27 14:40:32 -04:00
Tyler Goodlet 279c899de5 Port to new `PpTable.dump_active()` output, move order event task to child nursery 2022-07-27 14:40:32 -04:00
Tyler Goodlet db5aacdb9c Only allow vnc client connections from localhost 2022-07-27 14:40:32 -04:00
Tyler Goodlet c7b84ab500 Port position calcs to new ctx mngr apis and drop multi-loop madness 2022-07-27 14:40:32 -04:00
Tyler Goodlet 9967adb371 Lol, drop unintended account name key layer from the ledger 2022-07-27 14:40:32 -04:00
Tyler Goodlet 30ff793a22 Port `ib` broker machinery to new ctx mngr pp api
This drops the use of `pp.update_pps_conf()` (and friends) and instead
moves to using the context style `open_trade_ledger()` and `open_pps()`
managers for faster pp msg gen due to delayed file writing (which was
the main source of update latency).

In order to make this work with potentially multiple accounts this also
uses an exit stack which loads each ledger / `pps.toml` into an account
id mapped `dict`; likely a POC for how we should implement some higher
level position manager api.
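A compact sketch of the exit-stack approach; `open_trade_ledger()` here
is replaced by a trivial stand-in so the snippet runs on its own:

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def open_trade_ledger(broker: str, acctid: str):
    # stand-in for the real file-backed ledger manager
    yield {'broker': broker, 'account': acctid, 'trades': {}}

accounts = ['algo', 'margin']  # made-up account ids

# enter one ledger ctx per account on a single exit stack so they are
# all torn down (and eventually written) together on exit.
with ExitStack() as stack:
    ledgers: dict[str, dict] = {
        acctid: stack.enter_context(open_trade_ledger('ib', acctid))
        for acctid in accounts
    }
    # ... generate pp msgs per account from `ledgers[acctid]` ...
    assert set(ledgers) == set(accounts)
```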
2022-07-27 12:29:53 -04:00
Tyler Goodlet 666587991a Avoid crash when no vnc server running 2022-07-27 12:29:53 -04:00
goodboy 01005e40a8
Merge pull request #366 from pikers/multisympaper
Fix #222 multi-symbol paper engine support
2022-07-27 12:29:05 -04:00
goodboy d81e629c29
Merge pull request #365 from pikers/ppu_history
Ppu history
2022-07-27 12:25:23 -04:00
Tyler Goodlet 2766fad719 Fix #222 multi-symbol paper engine support 2022-07-27 12:18:59 -04:00
Tyler Goodlet ae71168216 Change name `be_price` -> `ppu` throughout codebase 2022-07-27 12:18:36 -04:00
Tyler Goodlet a0c238daa7 Adjust paper-engine to use `Transaction` for pps updates 2022-07-27 11:20:59 -04:00
Tyler Goodlet 7cbdc6a246 Move clears updates back into a method 2022-07-27 11:17:57 -04:00
Tyler Goodlet 2ff8be71aa Add `PpTable.write_config()`, order `pps.toml` columns 2022-07-27 11:17:57 -04:00
Tyler Goodlet ddffaa952d Rework "breakeven" price as "price-per-unit": ppu
The original implementation of `.calc_be_price()` wasn't correct since
the real so called "price per unit" (ppu) is actually defined by
a recurrence relation (which is why the original state-updated
`.lifo_update()` approach worked well) and requires the previous ppu to
be weighted by the new accumulated position size when considering a new
clear event. The ppu is the price above or below which the trader
takes a win or loss on transacting one unit of the trading asset and
thus it is the true "break even price" that determines making or losing
money per fill. This patch fixes the implementation to use trailing
windows of the accumulated size and ppu to compute the next ppu value
for any new clear event as well as handle rare cases where the
"direction" changes polarity (eg. long to short in a single order). The
new method is `Position.calc_ppu()` and further details of the relation
can be seen in the doc strings.
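A stripped-down, runnable version of that recurrence (a simplified
stand-in for `Position.calc_ppu()`, not the real implementation): clears
that grow the position re-weight the prior ppu by the prior accumulated
size, clears that reduce it leave the ppu untouched, and a polarity flip
restarts it at the flipping clear's price.

```python
def calc_ppu(clears: list[tuple[float, float]]) -> float:
    '''
    `clears` is a chronological list of (price, signed_size) fills.

    '''
    ppu: float = 0.0
    accum: float = 0.0

    for price, size in clears:
        new_accum = accum + size

        if accum and (new_accum / accum) < 0:
            # polarity flip (eg. long -> short in one fill): the
            # remainder opens a fresh position at this clear's price.
            ppu = price

        elif abs(new_accum) > abs(accum):
            # position increased: weight the old ppu by the old size
            # and this clear's price by the newly added size.
            ppu = (ppu * abs(accum) + price * abs(size)) / abs(new_accum)

        # else: a reducing clear leaves the ppu (breakeven) unchanged.
        accum = new_accum

    return ppu

# buy 2 @ 10 then 2 @ 14 -> ppu == 12; a partial sell doesn't move it.
assert calc_ppu([(10, 2), (14, 2)]) == 12
assert calc_ppu([(10, 2), (14, 2), (12, -1)]) == 12
```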

This patch also includes a wack-ton of clean ups and removals in an
effort to refine position management api for easier use in new backends:

- drop `update_pps_conf()`, `load_pps_from_toml()` and rename
  `load_trands_from_ledger()` -> `load_pps_from_ledger()`.
- extend `PpTable` to have a `.to_toml()` method which returns the
  active set of positions ready to be serialized to the `pps.toml` file,
  which it collects by calling,
- `PpTable.dump_active()` which now returns double dicts of the
  open/closed pp object maps.
- make `Position.minimize_clears()` now iterate the clears table in
  chronological order (instead of reverse) and only drop fills prior
  to any zero-size state (the old reversed way can result in incorrect
  history-size-retracement in cases where a position is lessened but
  not completely exited).
- drop `Position.add_clear()` and instead just manually add entries
  inside `.update_from_trans()` and also add an `accum_size` and `ppu`
  field to every entry, thus creating a position "history" sequence of
  the ppu and accum size for every position and preparing for being
  able to show "position lifetimes" in the UI.
- move fqsn getting into `Position.to_pretoml()`.
2022-07-26 12:09:59 -04:00
Tyler Goodlet 5520e9ef21 Minimize clears and audit sizing for all updates in `.update_from_trans()` 2022-07-26 12:09:59 -04:00
Tyler Goodlet 958e542f7d Drop `.lifo_update()`, add `.audit_sizing()`
Use the new `.calc_[be_price/size]()` methods when serializing to and
from the `pps.toml` format and add an audit method which will warn about
mismatched values and assign the clears table calculated values pre-write.

Drop the `.lifo_update()` method and instead allow both
`.size`/`.be_price` properties to exist (for non-ledger related uses of
`Position`) alongside the new calc methods and only get fussy about
*what* the properties are set to in the case of ledger audits.

Also changes `Position.update()` -> `.add_clear()`.
2022-07-25 12:06:52 -04:00
goodboy 927bbc7258
Merge pull request #364 from pikers/historical_breakeven_pp_price
Add non-state-incremented calculation methods
2022-07-25 09:24:26 -04:00
Tyler Goodlet 45bef0cea9 Add non-state-incremented calculation methods
Since we're going to need them anyway for desired features, add
2 new `Position` methods:
- `.calc_be_price()` which computes the breakeven cost basis price
  from the entries in the clears table.
- `.calc_size()` which just sums the clear sizes.

Add a `cost_scalar: float` control to the `.update_from_trans()` method
to allow manual adjustment of the cost weighting for the case where
a "non-symmetrical" model is wanted.

Go back to always trying to write the backing ledger files on exit, even
when there's an error (obvs without the `return` in the `finally:` block
f$#%-up).
2022-07-23 19:39:47 -04:00
goodboy a3d46f713e
Merge pull request #361 from pikers/pptables
`PpTable`s
2022-07-21 17:54:43 -04:00
Tyler Goodlet 5684120c11 Wow, drop idiotic `return` inside `finally:`
Can't believe i missed this but any `return` inside a `finally` will
suppress the error from the `try:` part... XD

Thought i was losing my mind when the ledger was mutated and then
an error just after wasn't getting raised.. lul.

Never again...
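For the record, a tiny repro of the footgun:

```python
def sneaky() -> str:
    try:
        raise RuntimeError('ledger mutated, then boom')
    finally:
        # this `return` swallows the in-flight RuntimeError entirely:
        # the caller never sees it and just gets the value back.
        return 'no error?!'

assert sneaky() == 'no error?!'  # exception silently suppressed
```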
2022-07-21 17:52:44 -04:00
Tyler Goodlet ddb195ed2c Add a flag to prevent writing `pps.toml` on exit 2022-07-21 17:52:44 -04:00
Tyler Goodlet 6747831677 Don't pop zero pps from table in `.dump_active()`
In order to avoid double transaction adds/updates and too-early-discard
of zero sized pps (like when trades are loaded from a backend broker but
were already added to a ledger or `pps.toml` prior) we now **don't** pop
such `Position` entries from the `.pps` table, in order to keep each
position's clears table always in place. This avoids the edge case where
an entry was removed too early (due to zero size) but then duplicate
trade entries that were in that entry's clears show up from the backend
and are entered into a new entry, resulting in an incorrect size.. We
still only push non-net-zero entries to the `pps.toml`.

More fixes:
- return the updated set of `Positions` from `.lifo_update()`.
- return the full table set from `update_pps()`.
- use `PpTable.update_from_trans()` more throughout.
- always write the `pps.toml` on `open_pps()` exit.
- only return table from `load_pps_from_toml()`.
2022-07-21 17:52:44 -04:00
Tyler Goodlet 9326379b04 Add a `PpTable` type, give it the update methods
In an effort to begin allowing backends to have more granular control
over position updates, particularly in the case where they need to be
reloaded from a trades ledger, this adds a new table API which can
be loaded using `open_pps()`.

- offer an `.update_trans()` method which takes in a `dict` of
  `Transactions` and updates the current table of `Positions` from it.
- add a `.dump_active()` which renders the active pp entries dict in
  a format ready for toml serialization, as well as all closed positions
  since the last update (we might want to not drop these?)

All other module-function apis currently in use should remain working as
before for the moment.
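A rough usage sketch of the new api; the import path, exact signatures
and the `transactions` input are assumptions here, abbreviated for
illustration:

```python
from piker.pp import open_pps  # assumed import path

# `transactions` would normally be a dict[str, Transaction] built by
# the backend's ledger norming; empty here just for illustration.
transactions: dict = {}

with open_pps('kraken', 'algotrading') as table:

    # fold the new transactions into the table's `Position`s
    table.update_trans(transactions)

    # render the toml-ready active entries plus recently closed pps
    active, closed = table.dump_active()

    for bsuid, entry in active.items():
        print(bsuid, entry)
```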
2022-07-21 17:52:44 -04:00