This is a tricky edge case we weren't handling previously; an example is
submitting a limit order with a price tick precision that doesn't match
the supported one (probably because IB reported the wrong one..) where IB
responds immediately with an error event (via a special code..) but doesn't
include any `Trade` object(s) nor details beyond the `reqid`. So, we
have to do a little reverse EMS order lookup on our own and ideally
indicate to the requester which order failed and *why*.
To enable this we:
- create a `flows: OrderDialogs` instance and pass it to most order/event relay
tasks, particularly ensuring we update it ASAP in `handle_order_requests()`
such that any successful submit has an `Ack` recorded in the flow.
- on such errors look up the `.symbol` / `Order` from the `flow` and
respond back to the EMS with as many details as possible about the
prior msg history.
- always explicitly relay `error` events which don't fall into the
sensible filtered set and wrap them in
a `BrokerdError.broker_details['flow']: dict` snapshot for the EMS.
- in `symbols.get_mkt_info()` support adhoc lookup for `MktPair` inputs
and when defined we re-construct with those inputs; in this case we do
this for a first mkt: `'vtgn.nasdaq'`..
Finally, this is a reason to use our new `OrderDialogs` abstraction; on
order submission errors IB doesn't really pass back anything other than
the `orderId` and the reason so we have to conduct our own lookup for
a message to relay to the EMS..
So, for every EMS msg we send, add it to the dialog tracker and then use
the `flows: OrderDialogs` for lookup in the case where we need to relay
said error. Also, include sending a `canceled` status such that the
order won't get stuck as a stale entry in the `emsd`'s own dialog table.
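Roughly, the tracking/lookup pattern looks like the following minimal
sketch (the real `OrderDialogs` API in piker may differ; the
`DialogTracker` class and msg shapes here are hypothetical stand-ins):

```python
from collections import defaultdict
from typing import Any


class DialogTracker:
    '''Record every msg sent per `reqid` so that an error event which
    only carries an id + reason can be traced back to its order flow.

    '''
    def __init__(self) -> None:
        self._flows: dict[int | str, list[dict[str, Any]]] = defaultdict(list)

    def add_msg(self, reqid: int | str, msg: dict[str, Any]) -> None:
        self._flows[reqid].append(msg)

    def flow(self, reqid: int | str) -> list[dict[str, Any]]:
        return list(self._flows.get(reqid, []))


flows = DialogTracker()

# in `handle_order_requests()`: record the request and its ack ASAP.
flows.add_msg(1001, {'action': 'buy', 'symbol': 'vtgn.nasdaq', 'price': 4.2})
flows.add_msg(1001, {'resp': 'ack', 'reqid': 1001})

# later, IB errors out with nothing but the `reqid` and a reason; the
# prior msg history is snapshotted into the error relayed to the EMS.
error = {
    'resp': 'error',
    'reqid': 1001,
    'broker_details': {'flow': {'msgs': flows.flow(1001)}},
}
print(error)
```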
For now we just filter unrelated errors out of the stream since there's
always going to be stuff to do with live/history data
queries..
Since it's depended on by `.data` stuff as well as pretty much
everything else, it makes more sense to expose it as a top level module
(and maybe eventually as a subpkg as we add to it).
Instead of casting to `dict`s and rewriting event names in the
`push_tradesies()` handler, be transparent with event names and types
(also defining their piker-equivalent mappings in a redefined
`_statuses` table), passing them directly to the
`deliver_trade_events()` task and generally making event handler blocks
much easier to grok with type annotations. To deal with the causality
dilemma of *when to emit a pos msg* due to needing all of
`execDetailsEvent, commissionReportEvent, positionEvent` but having no
guarantee on their arrival order, we implement a small task-local
`clears: dict[Contract, tuple[Position, Fill]]` tracker table and (as
before) only emit a position event once the "cost" can be accessed for
the fill. We now ALWAYS relay any `Position` update from IB directly to
ensure (at least) the cumsize is correct (since it appears we still have
ongoing issues with computing this correctly via `.accounting.Position`
updates..).
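A toy version of that tracker pattern (event names and shapes are
simplified stand-ins, not the actual backend code):

```python
# partial per-contract state keyed by conId; a pos msg only fires once
# every piece (exec details, commission "cost" and IB's cumsize) is in.
clears: dict[str, dict] = {}


def maybe_emit(conid: str) -> None:
    entry = clears[conid]
    if {'size', 'price', 'cost', 'cumsize'} <= entry.keys():
        print(f'position msg for {conid}: {entry}')


def on_exec_details(conid: str, size: float, price: float) -> None:
    clears.setdefault(conid, {}).update(size=size, price=price)
    maybe_emit(conid)


def on_commission_report(conid: str, cost: float) -> None:
    clears.setdefault(conid, {}).update(cost=cost)
    maybe_emit(conid)


def on_position(conid: str, cumsize: float) -> None:
    # ALWAYS trust IB's reported cumulative size over our own books.
    clears.setdefault(conid, {}).update(cumsize=cumsize)
    maybe_emit(conid)


# events can arrive in any order; the msg only fires on the last piece.
on_position('12345', cumsize=10)
on_exec_details('12345', size=10, price=187.5)
on_commission_report('12345', cost=1.0)
```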
Further related adjustments:
- load (fiat) balances and startup positions into a new `IbAcnt` struct.
- change `update_and_audit_pos_msg()` to blindly forward ib position
event updates for **the size** since it should always be
considered the true gospel for accounting!
- drop ib-has-no-position handling since it should never occur..
- move `update_ledger_from_api_trades()` to the `.ledger` submod and do
processing of ib_insync `Fill` related objects instead of dict-casted
versions, now doing the casting in
`api_trades_to_ledger_entries()`.
- `norm_trade()`: add `symcache.mktmaps[bs_mktid] = mkt` since it
turns out API (and sometimes FLEX) records don't contain the listing
exchange/venue, thus making it impossible to map an asset pair in the
"position sense" (i.e. over multiple venues: qqq.nasdaq, qqq.arca,
qqq.directedge) to an fqme when doing offline ledger processing;
instead use frickin IB's internal int-id so there's no discrepancy
(see the sketch after this list).
- also much better handle futures mkt trade flex records such that the
parsed `MktPair.fqme` is consistent.
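The bs_mktid-keyed lookup idea in miniature (values are plain strings
here just to show the mapping; real entries are `MktPair` instances and
the conId is made up):

```python
# keyed by IB's internal contract id so records lacking a listing venue
# (API and some FLEX rows) can still resolve to a market during offline
# ledger processing.
mktmaps: dict[str, str] = {}

bs_mktid = '123456789'          # made-up conId
mktmaps[bs_mktid] = 'qqq.nasdaq.ib'

record = {'conId': 123456789, 'size': 100}  # no exchange/venue field!
assert mktmaps[str(record['conId'])] == 'qqq.nasdaq.ib'
```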
Define and bind in the `tx_sort()` routine to be used by
`open_trade_ledger()` when datetime sorting trade records.
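A loose sketch of such a sort key (the exact `tx_sort()` /
`open_trade_ledger()` signatures are assumed here and the record shapes
are simplified):

```python
import pendulum


def tx_sort(records: dict[str, dict]) -> list[tuple[str, dict]]:
    '''Return ledger records sorted by their parsed datetime stamp.'''
    def _key(item: tuple[str, dict]) -> pendulum.DateTime:
        _tid, rec = item
        # 2022-and-earlier records used a `date` field instead.
        stamp = rec.get('dateTime') or rec['date']
        return pendulum.parse(stamp)

    return sorted(records.items(), key=_key)


records = {
    't2': {'dateTime': '2023-06-02T10:00:00'},
    't1': {'date': '2022-12-29T08:30:15'},
}
assert [tid for tid, _ in tx_sort(records)] == ['t1', 't2']
```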
Further deats:
- always use the IB reported position size (since apparently our ledger
based accounting is getting rekt on occasion..).
- better ib pos msg formatting when there's mismatches with the piker
equivalent.
- never emit zero-size pos msgs (in terms of strict ib pos sizing) since
when there are piker ledger sizing errors we'll send the wrong thing to
the ems and its clients..
Since there's no easy way to support it yet, we bypass symbology caching
for now and instead allow the `ib.ledger` routines to fill in
`MktPair` and `Asset` entries ad-hoc for the purposes of txn ledger
processing.
Still kinda borked since I don't think there actually is a (per venue)
"get-all-symbologies" endpoint.. so we're likely gonna have to either
figure out how to hack it or provide a bypass in ledger processing?
Deatz:
- use new `Account` type name, rename endpoint vars to match and
obviously use any new method name(s).
- mask out split ratio handling for now.
- async open the symcache prior to ledger processing (again, for now).
- drop passing `Transaction.sym`.
- fix parser set for dt-sorter since apparently 2022 and back had
a `date` field instead?
Should be the final production backend to switch this over B)
Also tidy up the `update_and_audit_msgs()` validator to log vs. raise
when `validate: bool` is set; turn it off by default to avoid raises
until we figure out wtf is up with ib ledger processing or wtv..
Since we want to be able to support user-configurable vnc socketaddrs,
this preps for passing the piker client directly into the vnc hacker
routine so that we can (eventually) load and read the ib brokers config
settings into the client and then read those in the `asyncvnc` task
spawner.
It's been getting set up in the `brokerd` daemon-actor spawn task for
a while now and worker tasks already get a ref to that global log
instance, so there's no need to handle it in the (data or trading) task
spawn endpoints.
Also move to the new `open_trade_dialog()` naming for working broker
backends B)
Since `open_trade_ledger()` now requires a sort, we pass in a combo of
the std `pendulum.parse()` for API records and a custom flex parser for
flex entries pulled offline.
Add special handling for `MktPair.src` such that when it's a fiat (like
it should always be for most legacy assets) we try to get the fqme
without that `.src` token (i.e. not mnqusd) to avoid breaking
roundtripping of live feed requests (due to new symbology) as well as
the current tsdb table key set..
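As a rough illustration of the intent (the helper and fiat set below
are made up, not the backend's actual code):

```python
FIAT = {'usd', 'eur', 'gbp', 'jpy'}


def feed_key(dst: str, src: str, venue: str, broker: str = 'ib') -> str:
    # drop the `.src` token when it's a fiat so existing feed/tsdb keys
    # like 'mnq.cme.20230616.ib' keep round-tripping.
    pair = dst if src.lower() in FIAT else f'{dst}/{src}'
    return f'{pair}.{venue}.{broker}'


assert feed_key('mnq', 'usd', 'cme.20230616') == 'mnq.cme.20230616.ib'
```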
Do a wholesale renaming of fqsn -> fqme in most of the rest of the
backend modules.
As per the new market info packing schema this patch almost gets it
completely compatible and useful via implementing the `get_mkt_info()`
backend module endpoint B)
There's still some questions around `MktPair.src` since all the contract
search machinery in the ib api isn't expecting a fiat currency in the
symbol key: for ex. `mnq/usd.cme.20230616.ib` has no handling for the
`[/]usd` part. For now I'm just excluding the `.src` since it requires
extra parsing on quotes-feed requests even though this is also currently
breaking forex pairs (idealpro or wtv). I think ideally we do move to
a `dst/src.<venue>.<etc..>` style but it's going to require adjustments
to all the existing crypto backends..
This also allows dropping the old `mk_init_msgs()` closure.
Previously we were re-processing all ledgers for every position msg
received from the API, per client.. Instead do that once in a first pass
and drop all key-miss lookups for `bs_mktid`s; it should never happen.
Better typing for in-routine vars, convert pos msg/objects to `dict`
prior to logging so it's sane to read on console. Skip processing
option contracts specifically for now.
If the user has loaded from a flex report then we don't want the API records
from the same period to override those; instead just update with any
missing fields from the API schema.
Also, always `str`-ify the contract id (which is what gets set for the
`.bs_mktid`) *before* packing into the transaction type to ensure that
when serialized to `pps.toml` there are no discrepancies at the codec
level.. smh
Add a logic branch for now that switches on an instance check.
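A tiny illustration of the codec-level discrepancy being avoided
(values made up):

```python
conid = 123456789         # contract id as reported by the API (an int)
bs_mktid = str(conid)     # what gets packed into the transaction

# TOML keys round-trip as strings, so an un-stringified id would no
# longer match the same contract after a `pps.toml` reload.
loaded_key = '123456789'
assert bs_mktid == loaded_key
assert conid != loaded_key
```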
Generally swap over all `Position.symbol` and `Transaction.sym` refs to
`MktPair`. Do a wholesale rename of all `.bsuid` var names to
`.bs_mktid`.
To make it easier to manually read/decipher long ledger files, this adds
`dict` sorting based on record-type-specific (api vs. flex report)
datetime processing prior to ledger file write.
- break up parsers into separate routines for flex and api record
processing.
- add `parse_flex_dt()` for special handling of the weird semicolon
stamps in flex reports.
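For example, a `parse_flex_dt()` along these lines (the semicolon stamp
format shown is assumed for illustration) lets flex rows sort alongside
API records:

```python
import pendulum


def parse_flex_dt(stamp: str) -> pendulum.DateTime:
    # flex reports glue date and time with a semicolon, e.g. '20221229;083015'
    date, _, time = stamp.partition(';')
    return pendulum.from_format(date + time, 'YYYYMMDDHHmmss')


entries = {
    'b': {'dateTime': '20221230;093000'},
    'a': {'dateTime': '20221229;083015'},
}

# sort the (dict) ledger entries by parsed datetime prior to file write
entries = dict(
    sorted(
        entries.items(),
        key=lambda item: parse_flex_dt(item[1]['dateTime']),
    )
)
assert list(entries) == ['a', 'b']
```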
We were overwriting the existing loaded orders list in the per-client
loop (lul) so move the def above all that.
Comment out the "try-to-cancel-inactive-orders-via-task-after-timeout"
stuff pertaining to https://github.com/erdewit/ib_insync/issues/363 for
now since we don't have a mechanism in place to cancel the re-cancel
task once the order is cancelled - plus who knows if this is even the
best way to do it..
Fills seem to be dual emitted from both the `status` and `fill` events
in `ib_insync` internals and more or less contain the same data nested
inside their `Trade` type. We started handling the 'fill' case to deal
with a race issue in commissions/cost report tracking but we don't
really want to leak that same race to incremental fills vs.
order-"closed" tracking.. So go back to only emitting the fill msgs
on statuses and a "closed" on `.remaining == 0`.
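In pseudo-ish form (a simplified stand-in for the actual status-event
handler):

```python
def on_status_event(filled: float, remaining: float) -> list[str]:
    # fills are relayed only from 'status' events; a 'closed' follows
    # once nothing remains, sidestepping the dual-emit race with 'fill'.
    msgs: list[str] = []
    if filled:
        msgs.append('fill')
    if remaining == 0:
        msgs.append('closed')
    return msgs


assert on_status_event(filled=5, remaining=5) == ['fill']
assert on_status_event(filled=10, remaining=0) == ['fill', 'closed']
```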
This includes darks, lives and alerts with all connecting clients
being broadcast all existing order-flow dialog states. Obviously
for now darks and alerts only live as long as the `emsd` actor lifetime
(though we will store these in local state eventually) and "live" orders
have lifetimes managed by their respective backend broker.
The details of this change-set are extensive, so here we go..
Messaging schema:
- change the messaging `Status` status-key set to:
`resp: Literal['pending', 'open', 'dark_open', 'triggered',
'closed', 'fill', 'canceled', 'error']`
which better reflects the semantics of order lifetimes and was
partially inspired by the status keys `kraken` provides for their
order-entry API. The prior key set was based on `ib`'s horrible
semantics which sound like they're right out of the 80s..
Also, we reflect this same set in the `BrokerdStatus` msg and likely
we'll just get rid of the separate brokerd-dialog side type
eventually.
- use `Literal` type annots for statuses where applicable and as they
are supported by `msgspec`.
- add additional optional `Status` fields (see the sketch after this
schema list):
  - `req: Order` to allow each status msg to optionally ref its
    commanding order-request msg allowing at least a request-response
    style implicit tracing in all response msgs.
  - `src: str` tag string to show the source of the msg.
  - `reqid: str | int` such that the ems can relay the `brokerd`
    request id both to the client side and have one spot to look
    up prior status msgs.
- draft an (unused/commented) `Dialog` type which can eventually be used
at all EMS endpoints to track msg-flow states.
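A condensed sketch of the resulting msg shape (field set approximate,
not the verbatim `piker.clearing._messages` definitions):

```python
from typing import Literal
from msgspec import Struct


class Order(Struct):
    action: Literal['buy', 'sell', 'alert']
    symbol: str
    price: float
    size: float


class Status(Struct):
    resp: Literal[
        'pending', 'open', 'dark_open', 'triggered',
        'closed', 'fill', 'canceled', 'error',
    ]
    reqid: str | int
    src: str = 'ems'              # tag showing the msg's source
    req: Order | None = None      # boxed request msg for implicit tracing


status = Status(
    resp='dark_open',
    reqid='some-uuid',
    req=Order(action='buy', symbol='mnq.cme', price=4400.25, size=1),
)
print(status)
```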
EMS engine adjustments/rework:
- use the new status key set throughout and expect `BrokerdStatus` msgs
to use the same new schema as `Status`.
- add a `_DarkBook._active: dict[str, Status]` table which is now used for
all per-leg-dialog associations and order flow state tracking,
allowing both the brokerd-relay and client-request handler loops
to read/write the same msg-table and providing for delivery of
the overall EMS-active-orders state to newly/re-connecting clients
with minimal processing; this table replaces the prior
`._ems_entries` table.
- add `Router.client_broadcast()` to send a msg to all currently
connected peers.
- a variety of msg handler block logic tweaks including more `case:`
blocks to be both flatter and more explicit:
- for the relay loop move all `Status` msg update and sending to
within each block instead of a fallthrough case plus hard-to-follow
state logic.
- add a specific case for unhandled backend status keys and just log
them.
- pop alerts from `._active` immediately once triggered.
- where possible mutate status msg fields rather than instantiating new
ones.
- insert and expect `Order` instances in the dark clearing loop and
adjust `case:` blocks accordingly.
- tag `dark_open` and `triggered` statuses as sourced from the ems.
- drop all the `ChainMap` stuff for now; we're going to make our own
`Dialog` type for this purpose..
Order mode rework:
- always parse the `Status` msg and use match syntax cases with object
patterns, hackily assigning the `.req` in many blocks to work around not
yet having proper on-the-wire decoding.
- make `.load_unknown_dialog_from_msg()` expect a `Status` with boxed
`.req: Order` as input.
- change `OrderDialog` -> `Dialog` in prep for a general purpose type
of the same name.
`ib` backend order loading support:
- do "closed" status detection inside the msg-relay loop instead
of expecting the ems to do this..
- add an attempt to cancel inactive orders by scheduling cancel
submissions continually (no idea if this works).
- add a status map to go from the 80s keys to our new set (sketched
after this list).
- deliver `Status` msgs with an embedded `Order` for existing live order
loading and make sure to try and get the source exchange info (instead
of SMART).
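The mapping from IB's legacy status strings into the new key set looks
roughly like this (an illustrative table, not necessarily the exact
`_statuses` contents):

```python
_statuses: dict[str, str] = {
    'ApiPending': 'pending',
    'PendingSubmit': 'pending',
    'PreSubmitted': 'open',
    'Submitted': 'open',
    'ApiCancelled': 'canceled',
    'Cancelled': 'canceled',
    'PendingCancel': 'canceled',
    'Inactive': 'error',
    'Filled': 'closed',
}
```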
Paper engine ported to match:
- use new status keys in `BrokerdStatus` msgs
- use `match:` syntax in request handler loop