To make it easier to manually read/decipher long ledger files, this adds
`dict` sorting based on record-type-specific (api vs. flex report)
datetime parsing prior to the ledger file write.
- break up parsers into separate routines for flex and api record
processing.
- add `parse_flex_dt()` for special handling of the weird semicolon
stamps in flex reports.
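Conceptually the sort just rebuilds the ledger `dict` from entries ordered
by each record's parsed stamp; a minimal sketch, assuming hypothetical
record field names and stamp formats (the real keys live in the backend's
ledger code):

```python
from datetime import datetime


def parse_flex_dt(dtstr: str) -> datetime:
    # flex reports use a semicolon-separated stamp, eg. '20220601;204001'
    return datetime.strptime(dtstr, '%Y%m%d;%H%M%S')


def sort_ledger(ledger: dict[str, dict]) -> dict[str, dict]:
    '''
    Rebuild the ledger `dict` in datetime order prior to file write.

    '''
    def record_dt(item: tuple[str, dict]) -> datetime:
        _, record = item
        stamp = record.get('dateTime') or record['datetime']
        if ';' in stamp:  # flex report record
            return parse_flex_dt(stamp)

        # api records are assumed to carry an ISO-ish stamp
        return datetime.fromisoformat(stamp).replace(tzinfo=None)

    return dict(sorted(ledger.items(), key=record_dt))
```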
Allows keeping mutex state around data reset requests which (if more
than one is sent) can cause a throttling condition where ib's servers
will get slower and slower to conduct a reconnect. With this you can
have multiple ongoing contract requests without hitting that issue and
we can go back to having a nice 3s timeout on the history queries before
activating the hack.
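A minimal sketch of the mutex idea, with the helper names and the de-dup
window below being assumptions rather than the actual backend code:

```python
import trio

_reset_lock = trio.Lock()
_last_reset: float = 0


async def data_reset_hack() -> None:
    # stand-in for the actual keycombo/data-farm reset routine
    await trio.sleep(0)


async def maybe_reset_data(min_gap: float = 3) -> None:
    '''
    Trigger at most one reset per ``min_gap`` seconds no matter how
    many concurrent history queries hit their timeout.

    '''
    global _last_reset
    async with _reset_lock:
        if trio.current_time() - _last_reset < min_gap:
            return  # another task just reset; don't re-trigger

        await data_reset_hack()
        _last_reset = trio.current_time()
```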
When a network outage or data feed connection reset occurs, the
`ib_insync` task will often hang until some kind of (internal?) timeout
takes place or, in some (worst) cases, it never re-establishes (the
event stream) and thus the backend needs to restart or the live feed
will never resume..
In order to avoid this issue once and for all this patch implements an
additional (extremely simple) task that is started with the real-time
feed and simply waits for any market data reset events; when one is
detected it restarts the `open_aio_quote_stream()` call in a loop using
a surrounding cancel scope.
Been meaning to implement this for ages and it's finally working!
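Structurally the watcher boils down to something like the following
sketch; the `reset_event` plumbing, the send channel, and the exact
`open_aio_quote_stream()` signature are assumptions, not the actual
patch:

```python
import trio

# `open_aio_quote_stream()` is assumed to be imported from the `ib`
# backend and to be an async context manager yielding quote msgs.


async def restart_stream_on_reset(
    symbol: str,
    reset_event: trio.Event,
    send_chan: trio.MemorySendChannel,
) -> None:
    while True:
        with trio.CancelScope() as cs:
            async with trio.open_nursery() as n:

                async def relay_quotes() -> None:
                    async with open_aio_quote_stream(symbol) as stream:
                        async for quote in stream:
                            await send_chan.send(quote)
                    # stream ended on its own: tear down and restart
                    n.cancel_scope.cancel()

                n.start_soon(relay_quotes)

                # the "extremely simple" watcher: wait for any market
                # data reset event then blow away the enclosing scope.
                await reset_event.wait()
                cs.cancel()

        # re-arm for the next reset and loop to restart the stream
        reset_event = trio.Event()
```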
Allows for easier restarts of certain `trio`-side tasks without killing
the `asyncio`-side clients; support is added via a flag.
Also fix a bug in `Client.bars()`: we need to return the duration on the
empty bars case..
This allows the history manager to know the decrement size for
`end_dt: datetime` on the next query if a no-data / gap case was
encountered; subtract this in `get_bars()` in such cases. Define the
expected `pendulum.Duration`s in the `.api._samplings` table.
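For reference, a module-level sampling table of roughly this shape would
serve the purpose; the concrete bar/duration settings below are
illustrative guesses, not the backend's actual `_samplings` values:

```python
import pendulum

# OHLC sample period (s) -> (ib barSizeSetting, ib durationStr,
#                            expected frame duration)
_samplings: dict[int, tuple[str, str, pendulum.Duration]] = {
    1: ('1 secs', '1800 S', pendulum.duration(seconds=1800)),
    60: ('1 min', '1 D', pendulum.duration(days=1)),
}


def no_data_decrement(timeframe: int) -> pendulum.Duration:
    # amount to subtract from `end_dt` on a no-data/gap frame
    _, _, duration = _samplings[timeframe]
    return duration
```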
Also add a bit of query latency profiling that we may use later to more
dynamically determine timeout-driven data feed resets. Factor the `162`
error cases into a common exception handler block.
When we get a timeout or a `NoData` condition, still return a tuple of
empty sequences instead of `None` from `Client.bars()`. Move the
sampling period-duration table to module level.
Manual tinker-testing demonstrated that triggering data resets
completely independently of the frame request gets more throughput and,
further, that repeated requests (for the same frame after cancelling on
the `trio`-side) can yield duplicate frame responses. Re-work the
dual-task structure so that one task waits indefinitely on the frame
response (and thus never triggers duplicate frames) while the 2nd, data
reset task polls for the first task to complete in a loop which
terminates when the frame arrives via an event; a structural sketch
follows the list below.
Dirty deatz:
- make `get_bars()` take an optional timeout (which will eventually be
dynamically passed from the history mgmt machinery) and move request
logic inside a new `query()` closure meant to be spawned in a task
which sets an event on frame arrival; add a data reset poll loop in the
main/parent task and deliver the result on nursery completion.
- handle the frame-request-cancelled event case without crashing.
- on a no-frame result (due to a real history gap) hack in a 1 day
decrement which we eventually need to allow the caller to control,
likely based on measured frame rx latency.
- make `wait_on_data_reset()` a predicate (whose output indicates reset
success) as well as `trio.Nursery.start()` compatible so that it can
be started in a new task with the started values yielded being
a cancel scope and completion event.
- drop the legacy `backfill_bars()`, no longer used.
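A structural sketch of the two-task arrangement described above; the
`Client.bars()` call shape is assumed and `data_reset_hack()` is the
same placeholder used in the earlier mutex sketch:

```python
import trio


async def get_bars(
    client,  # proxy assumed to expose an async `.bars()` method
    fqsn: str,
    end_dt=None,
    timeout: float = 1.5,
):
    result = None
    frame_ready = trio.Event()

    async def query() -> None:
        # wait indefinitely on the frame so we never trigger duplicates
        nonlocal result
        result = await client.bars(fqsn, end_dt=end_dt)
        frame_ready.set()

    async with trio.open_nursery() as nurse:
        nurse.start_soon(query)

        # data reset poll loop: fire a reset whenever the frame hasn't
        # shown up within `timeout`, terminate once the event is set.
        while not frame_ready.is_set():
            with trio.move_on_after(timeout):
                await frame_ready.wait()
                break

            await data_reset_hack()

    # result delivered on nursery completion
    return result
```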
Allow the data feed sub-system to specify the timeframe (aka OHLC sample
period) to the history fetching API delivered by `open_history_client()`.
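Interface-wise that might read like the sketch below; the yielded
callable/config pair and the parameter names are assumptions made for
illustration only:

```python
from contextlib import asynccontextmanager as acm


@acm
async def open_history_client(fqsn: str):

    async def get_hist(
        timeframe: float,  # OHLC sample period in seconds, eg. 1 or 60
        end_dt=None,
        start_dt=None,
    ):
        # backend-specific frame query keyed on `timeframe` goes here..
        ...

    # second element: illustrative queue-config for the backfiller
    yield get_hist, {'erlangs': 1, 'rate': 3}
```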
Factor the data keycombo hack into a new routine so it can also be used
from the history backfiller code when request latency increases; there
is a first draft of using the feed reset to speed up 1m frame
throttling by timing out on the history frame response, but it needs
a lot of fine tuning.
If you spawn a brokerd set and no `ib` data feed was started (via our
`.data.feed.Feed` api) then there will be no active client loaded and
thus it won't be connected. So in these cases just return nothing, and
I guess we'll figure out real connection failures later?
We were overwriting the existing loaded orders list in the per-client
loop (lul) so move the def above all that.
Comment out the "try-to-cancel-inactive-orders-via-task-after-timeout"
stuff pertaining to https://github.com/erdewit/ib_insync/issues/363 for
now since we don't have a mechanism in place to cancel the re-cancel
task once the order is cancelled - plus who knows if this is even the
best way to do it..
Fills seem to be dual-emitted from both the `status` and `fill` events
in `ib_insync` internals and more or less contain the same data nested
inside their `Trade` type. We started handling the 'fill' case to deal
with a race issue in commissions/cost report tracking but we don't
really want to leak that same race into incremental fills vs.
order-"closed" tracking.. So go back to only emitting the fill msgs
on statuses and a "closed" on `.remaining == 0`.
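In `ib_insync` terms the single-source flow boils down to roughly the
following (simplified; the emitted msg dicts here stand in for the real
brokerd msg types):

```python
from ib_insync import Trade


def msgs_from_status(trade: Trade) -> list[dict]:
    msgs: list[dict] = []
    status = trade.orderStatus

    if status.filled:
        # incremental fill info relayed from the `status` event only
        msgs.append({
            'name': 'fill',
            'reqid': trade.order.orderId,
            'size': status.filled,
        })

    if status.remaining == 0:
        # the dialog is done; emit a final "closed" status
        msgs.append({
            'name': 'status',
            'resp': 'closed',
            'reqid': trade.order.orderId,
        })

    return msgs
```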
This includes darks, lives and alerts with all connecting clients
being broadcast all existing order-flow dialog states. Obviously
for now darks and alerts only live as long as the `emsd` actor lifetime
(though we will store these in local state eventually) and "live" orders
have lifetimes managed by their respective backend broker.
The details of this change-set are extensive, so here we go..
Messaging schema:
- change the messaging `Status` status-key set to:
`resp: Literal['pending', 'open', 'dark_open', 'triggered',
'closed', 'fill', 'canceled', 'error']`
which better reflects the semantics of order lifetimes and was
partially inspired by the status keys `kraken` provides for their
order-entry API. The prior key set was based on `ib`'s horrible
semantics which sound like they're right out of the 80s..
Also, we reflect this same set in the `BrokerdStatus` msg and likely
we'll just get rid of the separate brokerd-dialog side type
eventually.
- use `Literal` type annots for statuses where applicable and as they
are supported by `msgspec` (a trimmed sketch of the new msg shape
follows this list).
- add additional optional `Status` fields:
  - `req: Order` to allow each status msg to optionally ref its
  commanding order-request msg allowing at least a request-response
  style implicit tracing in all response msgs.
  - `src: str` tag string to show the source of the msg.
  - `reqid: str | int` such that the ems can relay the `brokerd`
  request id both to the client side and have one spot to look
  up prior status msgs.
- draft an (unused/commented) `Dialog` type which can eventually be used
at all EMS endpoints to track msg-flow states.
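A trimmed sketch of how the new `Status` shape reads with `msgspec` +
`Literal`; the field set shown is partial and the `Order` stub is only
illustrative:

```python
from typing import Literal, Optional, Union

from msgspec import Struct


class Order(Struct):
    # illustrative stub; the real type carries more fields
    action: str
    symbol: str
    size: float
    price: float


class Status(Struct):
    time_ns: int
    resp: Literal[
        'pending', 'open', 'dark_open', 'triggered',
        'closed', 'fill', 'canceled', 'error',
    ]
    oid: str  # uuid for the order dialog
    reqid: Optional[Union[str, int]] = None
    req: Optional[Order] = None  # boxed request msg for implicit tracing
    src: Optional[str] = None  # msg source tag
```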
EMS engine adjustments/rework:
- use the new status key set throughout and expect `BrokerdStatus` msgs
to use the same new schema as `Status`.
- add a `_DarkBook._active: dict[str, Status]` table which is now used
for all per-leg-dialog associations and order flow state tracking,
allowing both the brokerd-relay and client-request handler loops
to read/write the same msg-table and providing for delivery of
the overall EMS-active-orders state to newly/re-connecting clients
with minimal processing; this table replaces the prior
`._ems_entries` table.
- add `Router.client_broadcast()` to send a msg to all currently
connected peers (see the sketch after this list).
- a variety of msg handler block logic tweaks, including more `case:`
blocks, to be both flatter and more explicit:
- for the relay loop, move all `Status` msg updating and sending to
within each block instead of a fallthrough case with hard-to-follow
state logic.
- add a specific case for unhandled backend status keys and just log
them.
- pop alerts from `._active` immediately once triggered.
- where possible mutate status msg fields instead of instantiating new
ones.
- insert and expect `Order` instances in the dark clearing loop and
adjust `case:` blocks accordingly.
- tag `dark_open` and `triggered` statuses as sourced from the ems.
- drop all the `ChainMap` stuff for now; we're going to make our own
`Dialog` type for this purpose..
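The broadcast helper is conceptually tiny; a sketch follows (the real
`Router` tracks its client streams differently, this only shows the
fan-out idea):

```python
import trio


class Router:
    '''
    Trimmed to just the bit relevant here.

    '''
    def __init__(self) -> None:
        self.clients: set = set()  # currently connected client streams

    async def client_broadcast(self, msg: dict) -> None:
        # relay `msg` to all connected peers, dropping dead streams
        for stream in self.clients.copy():
            try:
                await stream.send(msg)
            except (trio.ClosedResourceError, trio.BrokenResourceError):
                self.clients.discard(stream)
```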
Order mode rework:
- always parse the `Status` msg and use match syntax cases with object
patterns (sketched below); hackily assign the `.req` in many blocks to
work around not yet having proper on-the-wire decoding.
- make `.load_unknown_dialog_from_msg()` expect a `Status` with boxed
`.req: Order` as input.
- change `OrderDialog` -> `Dialog` in prep for a general purpose type
of the same name.
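The object-pattern style reads roughly like this, reusing the trimmed
`Status`/`Order` sketch from the messaging schema section; the handled
cases here are just examples:

```python
def handle_status(msg: Status) -> None:
    match msg:
        case Status(resp='dark_open' | 'open'):
            # the boxed request msg, hackily assigned for now
            order = msg.req
            print(f'order accepted: {order}')

        case Status(resp='closed'):
            print(f'dialog {msg.oid} complete')

        case Status(resp='canceled' | 'error'):
            print(f'dialog {msg.oid} terminated: {msg.resp}')
```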
`ib` backend order loading support:
- do "closed" status detection inside the msg-relay loop instead
of expecting the ems to do this..
- add an attempt to cancel inactive orders by scheduling cancel
submissions continually (no idea if this works).
- add a status map to go from the 80s keys to our new set (sketched
below).
- deliver `Status` msgs with an embedded `Order` for existing live order
loading and make sure to try and get the source exchange info (instead
of SMART).
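The status map is just a flat translation table; a guess at its shape,
where the left-hand keys are `ib`'s order states and the right-hand
choices are illustrative rather than the backend's exact picks:

```python
_statuses: dict[str, str] = {
    'ApiPending': 'pending',
    'PendingSubmit': 'pending',
    'PreSubmitted': 'open',
    'Submitted': 'open',
    'ApiCancelled': 'canceled',
    'Cancelled': 'canceled',
    'PendingCancel': 'canceled',
    'Filled': 'closed',
    'Inactive': 'error',
}
```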
Paper engine ported to match:
- use new status keys in `BrokerdStatus` msgs
- use `match:` syntax in request handler loop
This seems to have been broken in refactoring from commit 279c899de5
which was never tested against multiple accounts/clients.
The fix is 2-part:
- position tables are now correctly loaded ahead of time and used per
account for each connected client when processing ledgers and
existing positions.
- a task for each API client is started (as implemented prior) so that
we actually get status updates for every client used for submissions.
Further we add a bit of code using `bisect.insort()` to normalize
ledgers to a datetime-sorted list of records (though pretty sure the
`dict` transform ruins it?) in an effort to avoid issues in ledger
transaction processing with previously minimized `Position.clears`
tables, which should (but might not?) avoid incorporating clear events
prior to the last "net-zero" positioning state.
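The insort bit amounts to roughly the following sketch; the record field
name is an assumption and the `key=` parameter requires Python >= 3.10:

```python
from bisect import insort
from datetime import datetime


def norm_ledger(records: list[dict]) -> list[dict]:
    '''
    Return the ledger records as a new, datetime-sorted list.

    '''
    out: list[dict] = []
    for record in records:
        insort(
            out,
            record,
            key=lambda r: datetime.fromisoformat(r['datetime']),
        )
    return out
```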