In `.feed` and `._sampling` move to using the new
`tractor.Context.open_stream(allow_overruns: bool)` (cough, A BREAKING
CHANGE).
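For reference, a minimal sketch of the new call style (the `echo` endpoint and actor wiring here are illustrative, not actual `piker` code):

```python
import tractor
import trio

@tractor.context
async def echo(ctx: tractor.Context) -> None:
    await ctx.started()
    async with ctx.open_stream() as stream:
        async for msg in stream:
            await stream.send(msg)

async def main():
    async with tractor.open_nursery() as an:
        portal = await an.start_actor('echoer', enable_modules=[__name__])
        async with (
            portal.open_context(echo) as (ctx, _first),
            # overruns must now be explicitly opted into per stream
            ctx.open_stream(allow_overruns=True) as stream,
        ):
            await stream.send('ping')
            assert await stream.receive() == 'ping'
        await portal.cancel_actor()

if __name__ == '__main__':
    trio.run(main)
```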
Also set `Flume.mkt` during construction in `.feed.open_feed()`.
Might as well try flipping the field over to the new `MktPair` type;
make the appropriate dict serialization changes in `.to_msg()` and alias
back to `.symbol: Symbol` with a property (sketched below).
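The alias is just a read-only property; a toy illustration (not the real `Flume` impl):

```python
from dataclasses import dataclass

@dataclass
class MktPair:
    fqme: str

@dataclass
class Flume:
    # new canonical field, now set during construction in `open_feed()`
    mkt: MktPair

    @property
    def symbol(self) -> MktPair:
        # legacy accessor kept working for not-yet-ported callers
        return self.mkt
```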
Frickin ib: they give you the `0.001` (or whatever) in
`ContractDetails.minSize: float` but won't accept fractional sizes
through the API.. Either way it's probably not sane to support
fractional order sizes for legacy instruments by default, especially
since in theory it affects a lot of the clearing outcomes by having ib
do whatever magical junk behind the scenes to make it work..
Order mode was previously just willy-nilly sending `float` prices
(particularly on order edits) as generated from the associated level
line. Now we use the `MktPair.price_tick: Decimal` to ensure the value
is rounded correctly before submission to the ems..
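The rounding itself is just stdlib `Decimal` quantization; a sketch of the idea (names mirror the commit, not necessarily the real code):

```python
from decimal import Decimal

def round_to_tick(price: float, price_tick: Decimal) -> Decimal:
    # go through `str` to avoid binary-float artifacts, then snap
    # onto the market's price grid before submission to the ems
    return Decimal(str(price)).quantize(price_tick)

assert round_to_tick(100.230000000001, Decimal('0.01')) == Decimal('100.23')
```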
Also adjusts the order mode init to expect a table of tables of startup
position messages, with the inner table being keyed by fqme per msg.
More or less a complete rework which allows passing a detailed
clearing/fills input and avoids rebooting the runtime / ems between
each position check.
Some further enhancements:
- use (unit) fractional sizes to simulate the more realistic and more
  "complex position calculation" case, since this is crypto.
- add a no-fqme-found test.
- factor cross-session/offline pos storage (pps.toml) checks into
  a `load_and_check_pos()` helper which does all entry loading directly
  from a provided `BrokerdPosition` msg (roughly sketched below).
- use the new `OrderClient.send()` async api.
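A rough shape for that helper using stand-in (dict) types; the real signature surely differs:

```python
from decimal import Decimal

def load_and_check_pos(
    msg: dict,       # a `BrokerdPosition`-like startup msg
    pps_conf: dict,  # parsed `pps.toml` contents
) -> None:
    # load the on-disk entry for this fqme and verify it matches
    # the position the (paper) broker reports on startup
    entry = pps_conf[msg['account']][msg['symbol']]
    assert Decimal(str(entry['size'])) == Decimal(str(msg['size']))
    assert Decimal(str(entry['avg_price'])) == Decimal(str(msg['avg_price']))
```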
Tried a couple of libs and ended up sticking with `rich` (since it's the
sibling lib to `typer`) but also (initially) implemented a version with
`blessings` that I ended up commenting out (and will likely remove).
Adjusted the CLI I/O a bit as well (roughly sketched after this list):
- require a fully qualified account name of the form:
`<brokername>.<accountname>` and error on non-matching input.
- dump position summary lines as humanized size, ppu and cost basis
  values per line.
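A minimal sketch of that `rich` output (columns and formatting are illustrative):

```python
from rich.console import Console
from rich.table import Table

def show_pos_summary(account: str, positions: list[dict]) -> None:
    # require the fully qualified `<brokername>.<accountname>` form
    brokername, _, accountname = account.partition('.')
    if not (brokername and accountname):
        raise ValueError(
            f'Expected <brokername>.<accountname>, got {account!r}'
        )

    table = Table(title=f'{account} positions')
    for col in ('fqme', 'size', 'ppu', 'cost basis'):
        table.add_column(col, justify='right')

    for p in positions:
        table.add_row(
            p['fqme'],
            f"{p['size']:,}",                # humanized size
            f"{p['ppu']:,.2f}",
            f"{p['size'] * p['ppu']:,.2f}",  # cost basis
        )

    Console().print(table)
```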
When processing paper trade ledgers we normally won't have specific
`MktPair` info for the backend market we're simulating, so we need to
look up this info when updating pps.toml files such that we get the
precision info correct (particularly in the case of cryptos!) and can
also run paper ledger processing without running the simulated clearing
loop. To make that happen we look up any `get_mkt_info()` ep on the
backend and pass the output to the `force_mkt` input of the
`PpTable.update_from_trans()` method.
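The gist of the wiring (a sketch; only `get_mkt_info()` and the `force_mkt` kwarg come from this change, the glue is assumed):

```python
import importlib

async def update_pps_with_mkt_info(
    table,            # a `PpTable`
    trans,            # transactions parsed from the paper ledger
    brokername: str,
    fqme: str,
) -> None:
    brokermod = importlib.import_module(f'piker.brokers.{brokername}')
    get_mkt_info = getattr(brokermod, 'get_mkt_info', None)
    if get_mkt_info:
        mkt, _pair = await get_mkt_info(fqme)
        # force precision-correct position calcs (crypto!) even when
        # no simulated clearing loop is running
        table.update_from_trans(trans, force_mkt=mkt)
    else:
        table.update_from_trans(trans)
```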
This will (likely) act as a new backend query endpoint for other `piker`
(client) code to look up `MktPair` info from each backend. To start, it
also returns the backend-broker's local `Pair` (or whatever other type)
as well.
The main motivation is our paper engine, which can require the mkt info
when processing paper-trades ledgers that don't contain appropriate
info to compute position metrics.
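The expected return shape, shown with toy stand-in types:

```python
from dataclasses import dataclass

@dataclass
class Pair:           # backend-local pair record (stand-in)
    symbol: str
    price_tick: str

@dataclass
class MktPair:        # piker's normalized market descriptor (stand-in)
    fqme: str
    price_tick: str

async def get_mkt_info(fqme: str) -> tuple[MktPair, Pair]:
    # a real backend would query its API here
    pair = Pair(symbol=fqme.split('.')[0], price_tick='0.01')
    return MktPair(fqme=fqme, price_tick=pair.price_tick), pair
```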
Instead of stripping the broker part just use the full fqme for all
`Transaction.bs_mktid: str` values since it makes indexing the `PpTable`
much easier with less key mangling..
Change the root-service-task entrypoint to accept the level and
set up a console log as is now expected for all sub-services. Cast all
backend delivered startup `BrokerdPosition` msgs and log them to
console.
If you want a sub-actor to write console logs (with the right level) the
`get_console_log()` call has to be made somewhere during service task
startup. Previously this wasn't well formalized nor used (depending on
the daemon), so passing `loglevel` to the service's root-task-endpoint
(eg. `_setup_persistent_brokerd()`) ensures the daemon's logging is
configured during init according to the spawning actor's requested
logging config. The previous `get_console_log()` call happening inside
`maybe_spawn_daemon()` wasn't actually doing anything in the target
daemon XD, so obviously remove that and instead pass `loglevel` through
to the ctx endpoints and service manager methods.
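So each root-task ep now follows roughly this pattern (a sketch; `_setup_persistent_brokerd()` is named above but its body here is illustrative):

```python
import tractor
from piker.log import get_console_log

@tractor.context
async def _setup_persistent_brokerd(
    ctx: tractor.Context,
    brokername: str,
    loglevel: str | None = None,
) -> None:
    # configure console logging *inside* the daemon at the level
    # requested by the spawning actor
    get_console_log(loglevel or 'info')
    await ctx.started()
    # ... continue with normal service task startup
```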
More or less we need to be able to audit not only simple "make trades,
check pps.toml files" tests (which btw were great to get started!); we
also need more sophisticated and granular order mgmt and service config
scenarios:
- full e2e EMS msg flow verification
- multi-client (dis)connection scenarios and/or monitoring
- dark order clearing and offline storage
- accounting schema and position calcs detailing
As such, this is the beginning of modularizing the components needed
in the test harness to this end by breaking up the `OrderClient` control
flows vs. position checking logic so as to allow for more flexible test
scenario cases and likely `pytest` parametrizations over different
transaction sequences.
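For example, enabling parametrized cases roughly like (illustrative only):

```python
import pytest

@pytest.mark.parametrize(
    'fqme, trade_seq',
    [
        # (instrument, [(action, size, price), ...])
        ('xbtusdt.kraken', [('buy', 0.25, 27_000), ('sell', 0.1, 27_500)]),
        ('mnq.cme.ib', [('buy', 2, 18_000), ('sell', 2, 18_025)]),
    ],
)
def test_ems_flow(fqme: str, trade_seq: list):
    # drive an `OrderClient` through `trade_seq` then audit pps
    ...
```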
Not sure how this worked before, but we also need to override the
`piker._config_dir: Path` in the root actor when running under `pytest`;
my guess is something in the old test suite was masking this problem
after the change to passing the dir path down through the runtime vars
via `tractor`?
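i.e. something along these lines in `conftest.py` (a sketch; the attribute path is from this change, the fixture wiring is assumed):

```python
import pytest
from pathlib import Path

@pytest.fixture
def root_conf_dir(tmp_path: Path, monkeypatch) -> Path:
    confdir = tmp_path / '.config' / 'piker'
    confdir.mkdir(parents=True)
    # override in the *root* actor too, not only in subactors which
    # receive the path via `tractor` runtime vars
    monkeypatch.setattr('piker._config_dir', confdir, raising=False)
    return confdir
```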
This also drops the ems related fixtures/factories since they're
specific enough to define in the clearing engine tests directly.
Turns out we don't hook up our eventkit handler until after
`load_aio_clients()` is complete, which means we can't get
`ib_insync.Client.apiError` events unless inside the asyncio side task.
So I guess try to report any such errors during the API scan (note the
duplicate client id case is a special one from ibis itself) even though
we're not going to catch them trio side. The hack to work around this is
to just increment the client id value with the `i` from the
`connect_retries` loop, even though that will break with more than 3
clients attached to an API endpoint lul ..
Further adjustments made while trying to fix this properly:
- add a `remove_handler_on_err()` cm to disconnect a handler when the
  trio side of the channel closes (sketched below).
- actually connect to client API error events in our
  `Client.inline_errors()`.
- increase the connect timeout to a sec.
- change the trio-asyncio proxy response-msg loop over to `match:`
syntax and raise on unhandled msgs from eventkit handlers.
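A plausible shape for that cm (a sketch only; the real impl may differ):

```python
from contextlib import contextmanager

@contextmanager
def remove_handler_on_err(event, handler):
    '''
    Detach an eventkit `handler` from its `event` if the wrapped
    (trio-side) block errors, eg. when the mem-chan feeding trio
    has already been closed.

    '''
    try:
        yield
    except BaseException:
        event.disconnect(handler)
        raise
```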