Since `tractor` and our runtime internals have now moved to multihomed semantics,
do the same in the CLI / config entrypoints.
Also, try using the new `tractor.devx.maybe_open_crash_handler()` around
the `pikerd` CLI.
When a new (actor) caller opens the registry there are 3 possible cases:
1. some task already opened the registry during init and set the global
superset of registrar addrs that are expected to be used,
2. some task after the init task opens with a subset of addrs,
3. some task after init opens with a disjoint set - should be an error?
In the 2nd case we don't want to error since the caller may just not need
to know about other registrar (multi-homed) addrs and thus only needs
specific access - so only warn about the diff in that case. If the
caller is requesting some disjoint set then we still raise at runtime.
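A rough sketch of that check (the helper name and error type here are
assumptions, not the actual impl):

```python
# sketch only: assumes (host, port) addr tuples and that the init
# task already populated the global registrar superset.
import logging

log = logging.getLogger(__name__)

def check_registry_addrs(
    global_addrs: set[tuple[str, int]],  # set by the init task
    requested: set[tuple[str, int]],     # from the new caller
) -> None:
    if requested <= global_addrs:
        if requested < global_addrs:
            # case 2: subset -> only warn about the diff
            log.warning(
                'Caller is only using a subset of registrar addrs:\n'
                f'{global_addrs - requested}'
            )
        return

    # case 3: disjoint (or partially unknown) addrs -> runtime raise
    raise RuntimeError(
        f'Requested registrar addrs {requested - global_addrs} are not '
        f'in the init-time set {global_addrs}!?'
    )
```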
Adjust `find_service()` to allow a null `registry_addrs` input in which
case we fail over to using whatever addrs are pre-set on `Registry.addrs`;
makes it simple for actors that don't want/need to know about the global
registrar set for their actor tree. Also, always pass
`tractor.find_actor(only_first=True)` (for now).
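Rough shape of that fallback (a sketch: the `registry_addrs` kwarg on
`tractor.find_actor()` is assumed from the multihomed changes described
below, and `Registry` is the module-level registrar state mentioned
above):

```python
from contextlib import asynccontextmanager as acm

import tractor

@acm
async def find_service(
    service_name: str,
    registry_addrs: list[tuple[str, int]] | None = None,
):
    # fail over to whatever addr set the registry was opened with
    addrs = registry_addrs or Registry.addrs
    async with tractor.find_actor(
        service_name,
        registry_addrs=addrs,
        only_first=True,  # XXX: single portal result (for now)
    ) as maybe_portal:
        yield maybe_portal
```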
This commit requires an equivalent commit in `tractor` which adds
multi-homed transport server support to the runtime and thus the ability
to listen on multiple (embedded protocol) addrs / networks as
well as exposing registry actors similarly. Multiple bind addresses can
now be (bare bones) specified either in the `conf.toml:[network]`
section, or passed on the `pikerd` CLI.
This patch specifically requires the ability to pass a `registry_addrs:
list[tuple]` into `tractor.open_root_actor()` as well as adjusts all
internal runtime routines to do the same, mostly inside the `.service`
pkg.
Further details include:
- adding a new `.service._multiaddr` parser module (which will likely be
moved into `tractor`'s core) which supports loading libp2p style
"multiaddresses" both from the `conf.toml` and the `pikerd` CLI (see
the sketch after this list).
- reworking the `pikerd` cmd to accept a new `--maddr`/`-m` param that
accepts multiaddresses.
- adjusting the actor-registry subsys to support multi-homing by also
accepting a list of addrs to its top level API eps.
- various internal name changes to reflect the multi-address interface
changes throughout.
- non-working CLI tweaks to `piker chart` (ui-client cmds) to begin
accepting maddrs.
- dropping all elasticsearch and marketstore flags / usage from `pikerd`
for now since we're planning to drop mkts and elasticsearch will be an
optional dep in the future.
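For illustration, a minimal parse of one such multiaddress into a
bind-addr tuple (the real `.service._multiaddr` parser surely handles
more protocol layers; this shape is an assumption):

```python
# sketch: parse '/ipv4/127.0.0.1/tcp/6116' -> ('127.0.0.1', 6116)
def parse_maddr(maddr: str) -> tuple[str, int]:
    tokens: list[str] = maddr.strip('/').split('/')
    # alternating <protocol>/<value> layer pairs
    layers: dict[str, str] = dict(zip(tokens[::2], tokens[1::2]))
    return layers['ipv4'], int(layers['tcp'])

# multiple bind addrs can then be passed on the CLI, eg.
# $ pikerd -m /ipv4/127.0.0.1/tcp/6116 -m /ipv4/192.168.0.2/tcp/6116
```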
Apparently they're changing their futes pair schema yet again, now
adding a `NEXT_QUARTER` contract type which we weren't
handling at all. The good news is falling back to an old symcache file
would have prevented this from crashing.
Add a new `FutesPair.expiry: str` field so that `.bs_fqme` can simply
call it during the summary FQME-ification output rendering..
A helper for scanning a "pairs table" that most backends should expose
as part of their (internal) symbology set using `rapidfuzz` over
a `dict[str, Struct]` input table.
Also expose the `data.types.Struct` at the subpkg top level.
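A minimal sketch of the helper (names are illustrative; `Struct` is the
`data.types.Struct` just mentioned):

```python
from rapidfuzz import process

from piker.data.types import Struct

def fuzzy_search(
    pairs: dict[str, Struct],
    pattern: str,
    limit: int = 10,
) -> dict[str, Struct]:
    # match against the table's string keys; for `list` inputs
    # `extract()` returns (choice, score, index) tuples.
    matches = process.extract(
        pattern,
        list(pairs),
        limit=limit,
    )
    return {key: pairs[key] for key, _score, _idx in matches}
```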
Previously we were assuming that the `Client._contracts: dict[str,
Contract]` would suffice for this directly, which obviously isn't true XD
Also,
- add the `NSE` venue to skip list.
- use new `rapidfuzz.process.extract()` lib API.
- only get con deats for non null exchange names..
Of course I missed this on the first try, but we need to use the ws market pair
symbology set (since apparently kraken loves redundancy at least 3 times
XD) when processing transactions that arrive from live clears since it's
an entirely different `LTC/EUR` style key than the `XLTCEUR` style
delivered from the ReST eps..
As part of this:
- add `Client._altnames`, `._wsnames` as `dict[str, Pair]` tables,
leaving the `._AssetPairs` table as is keyed by the "xname"s (see the
sketch after this list).
- Change `Pair.respname: str` -> `.xname` since these keys all just seem
to have a weird 'X' prefix.
- do the appropriately keyed pair table lookup via a new `api_name_set:
str` param to `norm_trade_records()` and set it correctly in the ws
live txn handler task.
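Illustrative shape of that caching (kraken's pair records really do
carry `altname` and `wsname` fields; the function name is made up):

```python
# sketch: cache each pair under all 3 of kraken's redundant keys XD
# (`Pair` is the kraken backend's pair struct)
def cache_pairs(client, pairs: list[Pair]) -> None:
    for pair in pairs:
        client._AssetPairs[pair.xname] = pair  # eg. 'XLTCEUR' rest key
        client._altnames[pair.altname] = pair  # eg. 'LTCEUR'
        client._wsnames[pair.wsname] = pair    # eg. 'LTC/EUR' ws key
```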
This is a tricky edge case we weren't handling prior; an example is
submitting a limit order with a price tick precision which mismatches
that supported (probably bc IB reported the wrong one..) and IB responds
immediately with an error event (via a special code..) but doesn't
include any `Trade` object(s) nor details beyond the `reqid`. So, we
have to do a little reverse EMS order lookup on our own and ideally
indicate to the requester which order failed and *why*.
To enable this we,
- create a `flows: OrderDialogs` instance and pass it to most order/event relay
tasks, particularly ensuring we update it ASAP in `handle_order_requests()`
such that any successful submit has an `Ack` recorded in the flow.
- on such errors lookup the `.symbol` / `Order` from the `flow` and
respond back to the EMS with as many details as possible about the
prior msg history (roughly sketched after this list).
- always explicitly relay `error` events which don't fall into the
sensible filtered set, wrapping them in
a `BrokerdError.broker_details['flow']: dict` snapshot for the EMS.
- in `symbols.get_mkt_info()` support adhoc lookup of `MktPair` inputs
and, when defined, re-construct with those inputs; in this case we do
this for a first mkt: `'vtgn.nasdaq'`..
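Roughly how that lookup-and-relay plays out (a sketch; the
`OrderDialogs` read API and the exact `BrokerdError` field set shown
are assumptions):

```python
# sketch: on an ib error event with no `Trade` attached, recover the
# original order from our own dialog history and relay the reason.
# (`BrokerdError` here is the clearing msg-spec type)
async def relay_error(
    flows,        # the `OrderDialogs` instance
    ems_stream,   # IPC stream back to the EMS
    reqid: str,
    reason: str,
) -> None:
    msgs: list[dict] = flows.get(reqid) or []
    last: dict = msgs[-1] if msgs else {}
    await ems_stream.send(
        BrokerdError(
            reqid=reqid,
            symbol=last.get('symbol', '<unknown>'),
            reason=reason,
            # snapshot of the prior msg history for the EMS
            broker_details={'flow': msgs},
        )
    )
```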
Turns out we were expecting/processing `Status(resp='error')` msgs not
`BrokerdError` (i guess bc the latter was only really being used in initial
`brokerd` msg responses and not for relay of actual provider clearing
engine failures?) and the case block match / logic wasn't really
correct. So this changes a few things:
- always do reverse `oid` lookups from `reqid`s if possible in the
error msg handling case.
- add a new `Error` client-dialog msg (derived from `Status`) which we
now relay when `brokerd` sends a `BrokerdError` and no prior `Status`
can be found (when one is found we still fill in appropriate fields from
the backend error and just send back the last status msg like before);
rough shape sketched below.
- try hard to look up the original `Order.symbol: str` for client
broadcasting, first trying any `Status.req` and failing over to
embedded `.brokerd_msg` field lookups.
- drop the `Status.name = 'error'` from literal def.
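Rough shape of the new msg type (the field set beyond what's described
above is an assumption):

```python
# sketch: explicit error msg derived from `Status`, replacing the old
# `Status.name = 'error'` literal.
class Error(Status):
    resp: str = 'error'
    # original `BrokerdError` msg content when no prior `Status`
    # can be found for the dialog.
    brokerd_msg: dict | None = None
```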
Finally this is a reason to use our new `OrderDialogs` abstraction; on
order submission errors IB doesn't really pass back anything other than
the `orderId` and the reason so we have to conduct our own lookup for
a message to relay to the EMS..
So, for every EMS msg we send, add it to the dialog tracker and then use
the `flows: OrderDialogs` for lookup in the case where we need to relay
said error. Also, include sending a `canceled` status such that the
order won't get stuck as a stale entry in the `emsd`'s own dialog table.
For now we just filter out unrelated errors from the stream
since there's always going to be stuff to do with live/history data
queries..
If a backend declares a top level `get_cost()` (provisional name)
we call it in the paper engine to try and simulate costs according to
the provider's own schedule. For now only `binance` has support (via the
ep def) but ideally we can fill these in incrementally as users start
forward testing on multiple cexes.
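Hypothetical shape of the ep (name provisional per the above; the fee
numbers are purely illustrative):

```python
# in eg. piker/brokers/binance/__init__.py (sketch)
def get_cost(
    price: float,
    size: float,
    is_taker: bool = True,
) -> float:
    '''
    Simulated clearing cost for a paper fill at `price` x `size`
    per this provider's (approximate) fee schedule.
    '''
    fee_rate: float = 0.001 if is_taker else 0.00075  # illustrative
    return abs(size) * price * fee_rate

# the paper engine can then feature-detect it:
# cost_fn = getattr(brokermod, 'get_cost', None)
```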
Since it's depended on by `.data` stuff as well as pretty much
everything else, it makes more sense to expose it as a top level module
(and maybe eventually as a subpkg as we add to it).
In order to attempt giving the user a realistic prediction for a BEP per
txn we need to model what the (worst case) anticipated exit txn costs
will be during the equivalent, paired entries. For now we use a simple
"symmetric cost prediction" model where we assume the exit costs will be
simply the same as the enter txn costs and thus on every entry we apply
2x the enter txn cost; on exit txns we then unroll these predictions by
keeping a cumulative sum of the cost-per-unit and reversing the charges
based on applying that mean to the current exit txn's size. Once
unrolled we apply the actual exit txn cost received from the
broker-provider.
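A worked sketch of that model (purely illustrative, not the actual
accounting code):

```python
class SymmetricCostModel:
    '''
    Predict exit costs as equal to entry costs: charge 2x on entry,
    unroll the prediction at the mean cost-per-unit on exits, then
    apply the provider-reported actual exit cost.
    '''
    def __init__(self) -> None:
        self.pred_exit_cost: float = 0.
        self.size: float = 0.

    def on_entry(self, size: float, cost: float) -> float:
        self.size += abs(size)
        self.pred_exit_cost += cost  # assume exit cost == entry cost
        return 2 * cost  # entry cost + predicted exit cost

    def on_exit(self, size: float, actual_cost: float) -> float:
        # reverse this exit's share of the prediction via the
        # cumulative mean cost-per-unit..
        per_unit: float = (
            self.pred_exit_cost / self.size if self.size else 0.
        )
        unrolled: float = per_unit * abs(size)
        self.pred_exit_cost -= unrolled
        self.size -= abs(size)
        # ..then charge the actual broker-provider reported cost.
        return actual_cost - unrolled
```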
Since it appears impossible to compute the recurrence relations for PPU
(at least sanely) without using embedded `polars.List` elements, this
instead just implements price-per-unit and break-even-price calcs
using a plain-ol' for-loop imperative approach with logic branching
(gist sketched below).
I burned wayy too much time trying to implement this in some kinda
`polars` DF native way without luck, so hopefully someone smarter can
come in and make it work at some point xD
Resolves a related bullet in #515
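The gist of that imperative approach (simplified: position flips
through zero aren't handled here):

```python
def ppu(clears: list[tuple[float, float]]) -> float:
    '''
    Price-per-unit over (price, size) clears where a positive
    size is a buy and negative a sell.
    '''
    cumsize: float = 0.
    avg_price: float = 0.
    for price, size in clears:
        if (
            cumsize == 0
            or (cumsize > 0) == (size > 0)
        ):
            # opening or increasing: size-weighted average price
            avg_price = (
                (avg_price * cumsize + price * size)
                / (cumsize + size)
            )
        # reducing clears leave the ppu unchanged
        cumsize += size
    return avg_price
```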
Took a little while to get right using declarative style but it's
finally workin and seems (mostly) correct B)
Computes the ppu (price per unit) using the PnL since last
net-zero-cumsize (aka the pnl from open to close) and uses it to calc
the pnl-per-exit trade (using the ppu).
Next up, bep (break even price), both per position and maybe since
ledger start or an arbitrary ref point?
We don't need it in `detect_time_gaps()` since doing straight up
datetime diffs in `polars` already gives a humanized `str` representation
but with higher precision, like '2d 1h 24m 1s' B)
Since we need `.get_mkt_info()` to remain symmetric across calls with
different fqme inputs, and binance generally uses upper case for its
symbology keys, we always upper the FQME related tokens for both
symcaching and general search purposes.
Also don't set `_atype` on mkt pairs since it should be fully handled
via the dst asset loading in `Client._cache_pairs()`.
In cases where a brokerd backend doesn't yet support a symcache we need
to do manual `.get_mkt_info()` queries and stash them in a table that we
pass in for the mkt failover lookup to `Account.update_from_ledger()`.
Set the `PaperBoi._mkts` to this table for use on real-time ledger
writes in `.fake_fill()`.
Since some backends are going to have the issue of supporting multiple
venues for a given "position distinguishing instrument", like IB, we
can't presume that every `Position` can be uniquely keyed by
a `MktPair.fqme` (since the venue part can change and still be the same
"pair" relationship in accounting terms) so instead presume the
"backend system's market id" is the unique key (at least for now)
instead of the fqme.
More practically we use the `bs_mktid` to groupby-partition the per
pair DFs from the trades ledger and attempt to scan-match the input
fqme (in `ledger disect` cli) against the fqme column values set.
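Sketch of that partition-and-scan step (`polars.DataFrame.partition_by()`
is standard API; column names per the above):

```python
import polars as pl

def dfs_by_mktid(
    ldf: pl.DataFrame,
    fqme: str,
) -> dict[str, pl.DataFrame]:
    # one df per backend market id (stable across venue changes)
    dfs: dict[str, pl.DataFrame] = {
        df['bs_mktid'][0]: df
        for df in ldf.partition_by('bs_mktid')
    }
    # scan-match the input fqme against each partition's fqme column
    return {
        mktid: df for mktid, df in dfs.items()
        if any(fqme in val for val in df['fqme'])
    }
```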
Not sure why i ever thought it would work otherwise but, obviously if
you're replicating a `Position` from a **summary** (IPC) msg we
need to wipe any prior clearing events from the events history..
The main use for this loading mechanism is precisely if you don't have
local access to the txn ledger and need to represent a position from
a summary 🤦
Also, never bother with ledger file fqme "rewriting" if the backend has
no symcache support (yet) since obviously there's then no symbol set to
search for a better key xD
Instead of casting to `dict`s and rewriting event names in the
`push_tradesies()` handler, be transparent with event names and types
(also defining piker-equivalent mappings for them in a redefined
`_statuses` table), passing them directly to the
`deliver_trade_events()` task and generally making event handler blocks
much easier to grok with type annotations. To deal with the causality
dilemma of *when to emit a pos msg*, due to needing all of
`execDetailsEvent, commissionReportEvent, positionEvent` but having no
guarantee on their received order, we implement a small in-task
`clears: dict[Contract, tuple[Position, Fill]]` tracker table and (as
before) only emit a position event once the "cost" can be accessed for
the fill; see the sketch below. We now ALWAYS relay any `Position`
update from IB directly to ensure (at least) the cumsize is correct
(since it appears we still have ongoing issues with computing this
correctly via `.accounting.Position` updates..).
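Roughly (a sketch; the emit helper is hypothetical but
`Fill.commissionReport` is real `ib_insync` API):

```python
from ib_insync import Contract, Fill, Position

# stash partial per-contract event state and only emit a pos msg
# once the fill's cost (commission) is actually accessible.
clears: dict[Contract, tuple[Position | None, Fill | None]] = {}

def on_ib_event(
    con: Contract,
    pos: Position | None = None,
    fill: Fill | None = None,
) -> None:
    prior_pos, prior_fill = clears.get(con, (None, None))
    pos, fill = pos or prior_pos, fill or prior_fill
    if (
        pos and fill
        and fill.commissionReport.commission  # cost has landed
    ):
        clears.pop(con, None)
        emit_pos_msg(pos, fill)  # hypothetical relay helper
    else:
        # wait for the remaining events to arrive
        clears[con] = (pos, fill)
```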
Further related adjustments:
- load (fiat) balances and startup positions into a new `IbAcnt` struct.
- change `update_and_audit_pos_msg()` to blindly forward ib position
event updates for **the size** since it should always be
considered the true gospel for accounting!
- drop ib-has-no-position handling since it should never occur..
- move `update_ledger_from_api_trades()` to the `.ledger` submod and do
processing of ib_insync `Fill` related objects instead of dict-casted
versions, now doing that casting in
`api_trades_to_ledger_entries()`.
- `norm_trade()`: add a `symcache.mktmaps[bs_mktid] = mkt` entry since it
turns out API (and sometimes FLEX) records don't contain the listing
exchange/venue thus making it impossible to map an asset pair in the
"position sense" (i.e. over multiple venues: qqq.nasdaq, qqq.arca,
qqq.directedge) to an fqme when doing offline ledger processing;
instead use frickin IB's internal int-id so there's no discrepancy.
- also much better handle futures mkt trade flex records such that
parsed `MktPair.fqme` is consistent.
Since getting a global symcache result from the API is basically
impossible, we ad-hoc fill out the needed client tables on demand
per client-code queries to the mkt info EP.
Also, use `unpack_fqme()` in the fqme (search) pattern parser instead of
hacky `str.partition()`.
Add new `Client` attr tables to better stash `Contract` lookup results
normally mapped from some input FQME (sketched below):
- `._contracts: dict[str, Contract]` for any input pattern (fqme).
- `._cons: dict[str, Contract] = {}` for the `.conId: int` inputs.
- `_cons2mkts: bidict[Contract, MktPair]` for mapping back and forth
between ib and piker internal pair types.
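Shape-wise (a sketch; `bidict` is the 3rd party two-way mapping lib and
the piker import path is assumed):

```python
from bidict import bidict
from ib_insync import Contract

from piker.accounting import MktPair

class Client:
    def __init__(self) -> None:
        # any input pattern (fqme) -> contract
        self._contracts: dict[str, Contract] = {}
        # keyed by ib's `Contract.conId: int`
        self._cons: dict[str, Contract] = {}
        # two-way ib <-> piker pair-type mapping
        self._cons2mkts: bidict[Contract, MktPair] = bidict()
```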
Further,
- type out as many ib_insync internal types as possible mostly for
contract related objects.
- change `Client.trades()` -> `.get_fills()` and directly return the
result from `IB.fills()`.
- start flipping over internals to `Position.cumsize`.
- allow passing in a `_mktmap_table` to `Account.update_from_ledger()`
for cases where the caller wants to per-call-dynamically insert the
`MktPair` via a one-off table (cough IB).
- use `polars.from_dicts()` in `.calc.open_ledger_dfs()` and wrap the
whole func in a new `toolz.open_crash_handler()`.
Since there's a growing list of top level mods which are more or less
utils/tools for working with the runtime; begin to move them into a new
subpkg starting with a new `.toolz.debug`.
Start with,
- a new `open_crash_handler()` for doing breakpoints around blocks that
might error (minimal sketch below).
- move in what was `piker._profile` into `.toolz.profile` and adjust all
importing appropriately.
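A minimal sketch of the handler (the actual impl may differ):

```python
from contextlib import contextmanager
import pdb

@contextmanager
def open_crash_handler(
    catch: tuple[type[BaseException], ...] = (Exception,),
):
    '''
    Drop into a debugger REPL post-mortem on any error raised
    inside the block.
    '''
    try:
        yield
    except catch:
        pdb.post_mortem()
        raise
```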
Define and bind in the `tx_sort()` routine to be used by
`open_trade_ledger()` when datetime sorting trade records.
Further deats:
- always use the IB reported position size (since apparently our ledger
based accounting is getting rekt on occasion..).
- better ib pos msg formatting when there's mismatches with the piker
equivalent.
- never emit zero-size pos msgs (in terms of strict ib pos sizing) since
when there's piker ledger sizing errors we'll send the wrong thing to
the ems and its clients..
Apparently rendering to a `dict` from a sorted generator func doesn't
preserve the order when using a `dict`-comprehension.. Further, there's
really no reason to strictly return a `dict`. Adjust
`.calc.ppu()` to make the return value instead a `list[tuple[str,
dict]]`; this results in the current df cumsum values matching the
original impl and the existing `binance.paper` unit tests now passing XD
Other details that fix a variety of nonsense..
- adjust all `.clearsitems()` consumers to the new list output.
- use `str(pendulum.now())` in `Position.from_msg()` since adding
multiple clears keyed with an `unknown` str will obviously discard them, facepalm.
- fix `.calc.ppu()` to NOT short circuit when `accum_size` is 0; it's
been causing all sorts of incorrect size outputs in the clearing
table.. lel, this is what fixed the unit test!
Move in the obvious things XD
- all the specially defined venue tables from `.api`.
- some parser funcs: `con2fqme()` and `parse_patt2fqme()`.
- the `get_mkt_info()` and `open_symbol_search()` broker eps.
- the `_asset_type_map` table which converts to `.accounting.Asset`
compat keys for each contract/security.
Since there's no easy way to support it yet, we bypass symbology caching
for now and instead allow the `ib.ledger` routines to fill in
`MktPair` and `Asset` entries ad-hoc for the purposes of txn ledger
processing.
Some backends like `ib` don't have an obvious (nor practical) way to
easily download the entire symbology set available from all its mkt
venues. For such backends loading might require a non-std approach (like
using the contract search from some input mkt-key set) and can't be
expected to necessarily be supported out of the box. As such, allow
annotating a broker sub-pkg module with a `_no_symcache: bool = True`
attr which will make `open_symcache()` yield early with an empty
`SymbologyCache` instance for use by the caller to fill in the mkt and
assets tables in whatever ad-hoc way desired.
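In sketch form (the empty-cache ctor args are an assumption):

```python
from contextlib import asynccontextmanager as acm

# in eg. piker/brokers/ib/__init__.py
_no_symcache: bool = True

# inside `open_symcache()` (rough shape):
@acm
async def open_symcache(mod):
    # backends can opt out of global symbology caching entirely
    if getattr(mod, '_no_symcache', False):
        # yield early with an empty cache which the caller fills
        # in ad-hoc (mkt and asset tables)..
        yield SymbologyCache(mod=mod)
        return
    # ... otherwise the normal load-or-build-from-provider path ...
```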