Apparently they're being a pain and changing their futes pair schema
again, now adding a `NEXT_QUARTER` contract type which we weren't
handling at all. The good news is that falling back to an old symcache
file would have prevented this from crashing.
Add a new `FutesPair.expiry: str` field so that `.bs_fqme` can simply
read it during the summary FQME-ification output rendering..
A helper for scanning a "pairs table" that most backends should expose
as part of their (internal) symbology set using `rapidfuzz` over
a `dict[str, Struct]` input table.
Also expose the `data.types.Struct` at the subpkg top level.
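A minimal sketch of such a scan helper (the name and signature here are illustrative, not the actual impl):

```python
from rapidfuzz import process


def fuzzy_search(
    pattern: str,
    table: dict[str, 'Struct'],  # fqme-ish key -> backend pair struct
    limit: int = 10,
) -> dict[str, 'Struct']:
    # match the pattern against the str keys then rebuild the
    # matching sub-table keyed as per the input
    matches = process.extract(
        pattern,
        list(table.keys()),
        limit=limit,
    )
    return {key: table[key] for key, _score, _idx in matches}
```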
Previously we were assuming that the `Client._contracts: dict[str,
Contract]` would suffice for this directly, which obviously isn't true XD
Also,
- add the `NSE` venue to the skip list.
- use the new `rapidfuzz.process.extract()` lib API.
- only fetch contract details for non-null exchange names..
Of course I missed this on the first try but, we need to use the ws
market pair symbology set (since apparently kraken loves redundancy at
least 3 times over XD) when processing transactions that arrive from
live clears since it's an entirely different `LTC/EUR` style key than
the `XLTCEUR` style delivered from the ReST eps..
As part of this:
- add `Client._altnames`, `._wsnames` as `dict[str, Pair]` tables,
leaving the `._AssetPairs` table as is keyed by the "xname"s.
- Change `Pair.respname: str` -> `.xname` since these keys all just seem
to have a weird 'X' prefix.
- do the appropriately keyed pair table lookup via a new `api_name_set:
str` arg to `norm_trade_records()` and set it correctly in the ws live
txn handler task.
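A minimal sketch of the keyed table selection this implies (the arg values and table wiring are assumptions from the above):

```python
def norm_trade_records(
    ledger: dict[str, dict],
    client: 'Client',
    api_name_set: str = 'xname',  # or 'altnames' | 'wsnames'
) -> dict:
    # pick the pair table whose keys match the input record symbology
    pairs: dict[str, 'Pair'] = {
        'xname': client._AssetPairs,  # 'XLTCEUR' style ReST keys
        'altnames': client._altnames,
        'wsnames': client._wsnames,   # 'LTC/EUR' style ws keys
    }[api_name_set]
    ...
```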
This is a tricky edge case we weren't handling prior; an example is
submitting a limit order whose price tick precision mismatches the
supported one (probably bc IB reported the wrong one..) and IB responds
immediately with an error event (via a special code..) but doesn't
include any `Trade` object(s) nor details beyond the `reqid`. So, we
have to do a little reverse EMS order lookup on our own and ideally
indicate to the requester which order failed and *why*.
To enable this we,
- create a `flows: OrderDialogs` instance and pass it to most order/event relay
  tasks, particularly ensuring we update it ASAP in `handle_order_requests()`
  such that any successful submit has an `Ack` recorded in the flow.
- on such errors lookup the `.symbol` / `Order` from the `flow` and
respond back to the EMS with as many details as possible about the
prior msg history.
- always explicitly relay `error` events which don't fall into the
  sensible filtered set and wrap them in
  a `BrokerdError.broker_details['flow']: dict` snapshot for the EMS.
- in `symbols.get_mkt_info()` support adhoc lookup for `MktPair` inputs
and when defined we re-construct with those inputs; in this case we do
this for a first mkt: `'vtgn.nasdaq'`..
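A rough sketch of the reverse-lookup-and-relay from the bullets above (the msg is shown as a plain dict; the exact field names are assumptions):

```python
async def relay_order_error(
    reqid: str,
    flows: 'OrderDialogs',
    ems_stream,
    reason: str,
) -> None:
    # reverse lookup the prior msg history for this dialog
    msgs: list[dict] = flows.get(reqid) or []
    symbol: str = next(
        (m['symbol'] for m in reversed(msgs) if 'symbol' in m),
        '<unknown>',
    )
    # relay a `BrokerdError`-style msg incl. the flow snapshot
    await ems_stream.send({
        'name': 'error',
        'reqid': reqid,
        'symbol': symbol,
        'reason': reason,
        'broker_details': {'flow': {'msgs': msgs}},
    })
```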
Turns out we were expecting/processing `Status(resp='error')` msgs not
`BrokerdError` (I guess bc the latter was only really being used in
initial `brokerd` msg responses and not for relaying actual provider
clearing engine failures?) and the case block match / logic wasn't
really correct. So this changes a few things:
- always do reverse `oid` lookups from `reqid`s if possible in error msg
handling case.
- add a new `Error` client-dialog msg (derived from `Status`) which we
  now relay when `brokerd` sends a `BrokerdError` and no prior `Status`
  can be found (when one is found we still fill in appropriate fields
  from the backend-error and just send back the last status msg like
  before).
- try hard to look up the original `Order.symbol: str` for client
broadcasting trying first using any `Status.req` and failing over to
embedded `.brokerd_msg` field lookups.
- drop the `Status.name = 'error'` from literal def.
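A minimal sketch of what the derived msg might look like (the field set is an assumption from the description above):

```python
from piker.clearing._messages import Status  # assumed location


class Error(Status):
    resp: str = 'error'
    # verbatim copy of the originating backend error msg (if any)
    brokerd_msg: dict | None = None
```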
Finally, this is a reason to use our new `OrderDialogs` abstraction; on
order submission errors IB doesn't really pass back anything other than
the `orderId` and the reason, so we have to conduct our own lookup for
a message to relay to the EMS..
So, for every EMS msg we send, add it to the dialog tracker and then use
the `flows: OrderDialogs` for lookup in the case where we need to relay
said error. Also, include sending a `canceled` status such that the
order won't get stuck as a stale entry in the `emsd`'s own dialog table.
For now we just filter out unrelated errors from the stream since
there's always going to be stuff to do with live/history data
queries..
If a backend declares a top level `get_cost()` (provisional name)
we call it in the paper engine to try and simulate costs according to
the provider's own schedule. For now only `binance` has support (via the
ep def) but ideally we can fill these in incrementally as users start
forward testing on multiple cexes.
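A minimal sketch of such an ep (the signature is provisional per the above; the rates shown are purely illustrative, not binance's actual schedule):

```python
def get_cost(
    price: float,
    size: float,
    is_taker: bool = True,
) -> float:
    # flat bps-style fee schedule (illustrative rates only!)
    rate: float = 0.0004 if is_taker else 0.0002
    return price * abs(size) * rate
```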
Since it's depended on by `.data` stuff as well as pretty much
everything else, it makes more sense to expose it as a top level module
(and maybe eventually as a subpkg as we add to it).
In order to attempt giving the user a realistic prediction for a BEP per
txn we need to model what the (worst case) anticipated exit txn costs
will be during the equivalent, paired entries. For now we use a simple
"symmetric cost prediction" model where we assume the exit costs will be
simply the same as the enter txn costs and thus on every entry we apply
2x the enter txn cost; on exit txns we then unroll these predictions by
keeping a cumulative sum of the cost-per-unit and reversing the charges
based on applying that mean to the current exit txn's size. Once
unrolled we apply the actual exit txn cost received from the
broker-provider.
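A minimal numeric sketch of the model (the names and exact unroll arithmetic here are illustrative):

```python
def predicted_cost(
    cumsize: float,        # position size *before* this txn
    txn_size: float,       # signed clear size
    txn_cost: float,       # actual fee reported by the provider
    cost_per_unit: float,  # cum-mean of predicted entry costs
) -> float:
    if abs(cumsize + txn_size) > abs(cumsize):
        # entry: worst-case assume the exit will cost the same,
        # so book 2x the actual entry cost now
        return 2 * txn_cost
    # exit: unroll the earlier prediction at the mean rate, then
    # apply the actual exit cost received from the broker
    return txn_cost - (cost_per_unit * abs(txn_size))
```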
Since it appears impossible to compute the recurrence relations for PPU
(at least sanely) without using embedded `polars.List` elements, this
instead just implements the price-per-unit and break-even-price calcs
with a plain-ol' imperative for-loop approach with logic branching.
I burned way too much time trying to implement this in some kinda
`polars` DF native way without luck, so hopefully someone smarter can
come in and make it work at some point xD
Resolves a related bullet in #515
Took a little while to get right using declarative style but it's
finally working and seems (mostly) correct B)
Computes the ppu (price per unit) using the PnL since last
net-zero-cumsize (aka the pnl from open to close) and uses it to calc
the pnl-per-exit trade (using the ppu).
Next up, bep (break even price), both per position and maybe since
ledger start or an arbitrary ref point?
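A minimal sketch of the net-zero-cumsize cycle tagging that drives this (expressions illustrative, using the older `polars` `.cumsum()` API):

```python
import polars as pl

# given a per-mkt clears df with a signed 'size' column:
df = df.with_columns(
    pl.col('size').cumsum().alias('cumsize'),
).with_columns(
    # bump a cycle id each time cumsize returns to net-zero so
    # pnl (and thus ppu) can be calc-ed per open->close round trip
    (pl.col('cumsize') == 0)
    .cast(pl.Int32)
    .cumsum()
    .shift(1)
    .fill_null(0)
    .alias('cycle'),
)
```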
We don't need it in `detect_time_gaps()` since straight up datetime
diffs in `polars` already have a humanized `str` representation, but
with higher precision like '2d 1h 24m 1s' B)
Since we need `.get_mkt_info()` to remain symmetric across calls with
different fqme inputs, and binance generally uses upper case for its
symbology keys, we always upper the FQME related tokens for both
symcaching and general search purposes.
Also don't set `_atype` on mkt pairs since it should be fully handled
via the dst asset loading in `Client._cache_pairs()`.
In cases where a brokerd backend doesn't yet support a symcache we need
to do manual `.get_mkt_info()` queries and stash them in a table that we
pass in for the mkt failover lookup to `Account.update_from_ledger()`.
Set the `PaperBoi._mkts` to this table for use on real-time ledger
writes in `.fake_fill()`.
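A minimal sketch of the prefetch + failover wiring (loop and table names illustrative):

```python
async def prefetch_mkts(
    brokermod,
    fqmes: list[str],
) -> dict[str, 'MktPair']:
    mkt_by_fqme: dict[str, 'MktPair'] = {}
    for fqme in fqmes:
        # manual query since this backend has no symcache (yet)
        mkt, _pair = await brokermod.get_mkt_info(fqme)
        mkt_by_fqme[mkt.fqme] = mkt
    return mkt_by_fqme

# then eg.
# acnt.update_from_ledger(ledger, _mktmap_table=mkt_by_fqme)
```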
Since some backends are going to have the issue of supporting multiple
venues for a given "position distinguishing instrument", like IB, we
can't presume that every `Position` can be uniquely keyed by
a `MktPair.fqme` (since the venue part can change and still be the same
"pair" relationship in accounting terms) so instead presume the
"backend system's market id" is the unique key (at least for now)
instead of the fqme.
More practically we use the `bs_mktid` to groupby-partition the per
pair DFs from the trades ledger and attempt to scan-match the input
fqme (in `ledger disect` cli) against the fqme column values set.
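A minimal sketch of that partition + scan (df column names assumed):

```python
import polars as pl

# one sub-frame per backend market id
dfs: dict = ledger_df.partition_by(
    'bs_mktid',
    as_dict=True,
)
for bs_mktid, df in dfs.items():
    # scan-match the input fqme against the pair's fqme column set
    if input_fqme in df['fqme'].unique().to_list():
        ...
```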
Not sure why I ever thought it would work otherwise but, obviously if
you're replicating a `Position` from a **summary** (IPC) msg we
need to wipe any prior clearing events from the events history..
The main use for this loading mechanism is precisely if you don't have
local access to the txn ledger and need to represent a position from
a summary 🤦
Also, never bother with ledger file fqme "rewriting" if the backend has
no symcache support (yet) since obviously there's then no symbol set to
search for a better key xD
Instead of casting to `dict`s and rewriting event names in the
`push_tradesies()` handler, be transparent with event names and types
(also defining piker-equivalent mappings for them in a redefined
`_statuses` table), passing them directly to the
`deliver_trade_events()` task and generally making event handler blocks
much easier to grok with type annotations. To deal with the causality
dilemma of *when to emit a pos msg* due to needing all of
`execDetailsEvent, commissionReportEvent, positionEvent` but having no
guarantee on received order, we implement a small
`clears: dict[Contract, tuple[Position, Fill]]` tracker table and (as
before) only emit a position event once the "cost" can be accessed for
the fill. We now ALWAYS relay any `Position` update from IB directly to
ensure (at least) the cumsize is correct (since it appears we still have
ongoing issues with computing this correctly via `.accounting.Position`
updates..).
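A minimal sketch of that tracker pattern (`Fill`/`Position`/`Contract` per `ib_insync`; the handler and relay helper names are hypothetical):

```python
from ib_insync import Contract, Fill, Position

clears: dict[Contract, tuple[Position | None, Fill | None]] = {}


def on_position(con: Contract, pos: Position) -> None:
    _, fill = clears.get(con, (None, None))
    clears[con] = (pos, fill)
    maybe_emit(con)


def maybe_emit(con: Contract) -> None:
    pos, fill = clears[con]
    # only emit once the fill's cost (commission) is accessible
    if pos and fill and fill.commissionReport.commission:
        emit_pos_msg(pos, fill)  # hypothetical relay helper
        clears.pop(con)
```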
Further related adjustments:
- load (fiat) balances and startup positions into a new `IbAcnt` struct.
- change `update_and_audit_pos_msg()` to blindly forward ib position
  event updates for **the size** since it should always be
  considered the true gospel for accounting!
- drop ib-has-no-position handling since it should never occur..
- move `update_ledger_from_api_trades()` to the `.ledger` submod and do
  processing of ib_insync `Fill` related objects instead of dict-casted
  versions, now doing the casting in
  `api_trades_to_ledger_entries()`.
- `norm_trade()`: add `symcache.mktmaps[bs_mktid] = mkt` since it
  turns out API (and sometimes FLEX) records don't contain the listing
  exchange/venue, thus making it impossible to map an asset pair in the
  "position sense" (i.e. over multiple venues: qqq.nasdaq, qqq.arca,
  qqq.directedge) to an fqme when doing offline ledger processing;
  instead use frickin IB's internal int-id so there's no discrepancy.
- also much better handle futures mkt trade flex records such that
parsed `MktPair.fqme` is consistent.
Since getting a global symcache result from the API is basically
impossible, we ad-hoc fill out the needed client tables on demand per
client code queries to the mkt info EP.
Also, use `unpack_fqme()` in the fqme (search) pattern parser instead of
hacky `str.partition()`.
Add new `Client` attr tables to better stash `Contract` lookup results
normally mapped from some input FQME:
- `._contracts: dict[str, Contract]` for any input pattern (fqme).
- `._cons: dict[str, Contract] = {}` for the `.conId: int` inputs.
- `_cons2mkts: bidict[Contract, MktPair]` for mapping back and forth
between ib and piker internal pair types.
Further,
- type out as many ib_insync internal types as possible mostly for
contract related objects.
- change `Client.trades()` -> `.get_fills()` and return directly the
result from `IB.fill()`.
- start flipping over internals to `Position.cumsize`
- allow passing in a `_mktmap_table` to `Account.update_from_ledger()`
  for cases where the caller wants to per-call-dynamically insert the
  `MktPair` via a one-off table (cough IB).
- use `polars.from_dicts()` in `.calc.open_ledger_dfs()` and wrap the
  whole func in a new `toolz.open_crash_handler()`.
Since there's a growing list of top level mods which are more or less
utils/tools for working with the runtime; begin to move them into a new
subpkg starting with a new `.toolz.debug`.
Start with,
- a new `open_crash_handler()` for doing breakpoints around blocks that
  might error.
- move in what was `piker._profile` into `.toolz.profile` and adjust all
importing appropriately.
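A minimal sketch of such a handler (stdlib-only here; the real impl presumably uses a nicer REPL like `pdbp`):

```python
import pdb
import traceback
from contextlib import contextmanager


@contextmanager
def open_crash_handler():
    # drop into a post-mortem debugger around any crashing block
    try:
        yield
    except BaseException:
        traceback.print_exc()
        pdb.post_mortem()
        raise
```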
Define and bind in the `tx_sort()` routine to be used by
`open_trade_ledger()` when datetime sorting trade records.
Further deats:
- always use the IB reported position size (since apparently our ledger
based accounting is getting rekt on occasion..).
- better ib pos msg formatting when there's mismatches with the piker
equivalent.
- never emit zero-size pos msgs (in terms of strict ib pos sizing) since
when there's piker ledger sizing errors we'll send the wrong thing to
the ems and its clients..
Apparently rendering to a `dict` from a sorted generator func doesn't
preserve the order when using a `dict`-comprehension.. further, there's
really no reason to strictly return a `dict`. Adjust
`.calc.ppu()` to make the return value instead a `list[tuple[str,
dict]]`; this results in the current df cumsum values matching the
original impl and the existing `binance.paper` unit tests now passing XD
Other details that fix a variety of nonsense..
- adjust all `.clearsitems()` consumers to the new list output.
- use `str(pendulum.now())` in `Position.from_msg()` since adding
multiples with an `unknown` str will obviously discard them, facepalm.
- fix `.calc.ppu()` to NOT short circuit when `accum_size` is 0; it's
been causing all sorts of incorrect size outputs in the clearing
table.. lel, this is what fixed the unit test!
Move in the obvious things XD
- all the specially defined venue tables from `.api`.
- some parser funcs: `con2fqme()` and `parse_patt2fqme()`.
- the `get_mkt_info()` and `open_symbol_search()` broker eps.
- the `_asset_type_map` table which converts to `.accounting.Asset`
compat keys for each contract/security.
Since there's no easy way to support it yet, we bypass symbology caching
for now and instead allow the `ib.ledger` routines to fill in
`MktPair` and `Asset` entries ad-hoc for the purposes of txn ledger
processing.
Some backends like `ib` don't have an obvious (nor practical) way to
easily download the entire symbology set available from all its mkt
venues. For such backends loading might require a non-std approach (like
using the contract search from some input mkt-key set) and can't be
expected to necessarily be supported out of the box. As such, allow
annotating a broker sub-pkg module with a `_no_symcache: bool = True`
attr which will make `open_symcache()` yield early with an empty
`SymbologyCache` instance for use by the caller to fill in the mkt and
assets tables in whatever ad-hoc way desired.
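A minimal sketch of that early-yield (ctor args and the load path are assumptions; `SymbologyCache` per `.data._symcache`):

```python
from contextlib import asynccontextmanager as acm


@acm
async def open_symcache(mod, **kwargs):
    # backend explicitly opts out of std symcache loading
    if getattr(mod, '_no_symcache', False):
        # deliver an empty cache for the caller to fill ad-hoc
        yield SymbologyCache(mod=mod)
        return

    ...  # normal (re)load-from-TOML-or-EPs path
```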
The list is `open_symcache()`, `get_symcache()`, `SymbologyCache`, and
`Struct`, which seems more or less fine to make part of the public
namespace. Also, make `._timeseries.t_unit` an instance of `Literal` to
make `ruff` happy?
This was more involved than expected but on the bright side, it's going
to help drive a more general `Account` update/processing/loading API
providing for all the high-level txn update methods needed for any
backend to generically update the participant's account *state* via
an input ledger/txn set B)
Key changes to enable `SymbologyCache` compat:
- adjust `Client` pairs / assets lookup tables to include a duplicate
  keying of all assets and "asset pairs" using the (chitty) default key
  set that kraken ships which is NOT the `.altname` nor `.wsname` keys;
  the "default ReST response keys" I guess?
- `._AssetPairs` and `._Assets` are *these ^* rest-key sets delivered
verbatim from the endpoint responses,
- `._pairs` and `._assets` the equivalent value-sets keyed by piker
style FQME-looking keys (now provided via the new
`.kraken.symbols.Pair.bs_fqme: str` and the delivered `'altname'`
field (for assets) respectively.
- re-implement `.get_assets()` and `.get_mkt_pairs()` to appropriately
delegate to internal methods and these new (multi-keyed) tables to
deliver the cacheable set of symbology info.
- adjust `.feed.get_mkt_info()` to handle parsing of both fqme-style and
  whatever kraken key set a caller passes, via
  a key-matches-first-table-style-scan after pre-processing the
  input `fqme: str`; also do the `Asset` lookups from the new
  `Pair.bs_dst/src_asset: str` fields which should always map correctly
  to an internal asset entry delivered by `Client.get_assets()`.
Dirty impl deatz:
- add new `.kraken.symbols` and move the newly refined `Pair` there.
- add `.kraken.ledger` and move in the factored out ledger processing
routines.
- also move out what was the `has_pp()` and a large chunk of nested-ish
  looking acnt-position verification logic blocks into a new
  `verify_balances()` B)
Provides for fully isolated symbology caching in a flat TOML table
without special case handling B)
Also explicitly define `.bs_mktid: str` which is now used by the
symcache to key-index the backend specific pair set and thus provides
for round-trip marshalling without special knowledge of any backend
schema.
Previously the cum-size calc(s) was in the `disect` CLI but it's better
stuffed into the backing df converter. Also, ensure that whenever
a `dt` field is type-detected as a `str` we parse it to `DateTime`.
For testing (and probably hacking) it's handy to be able to point
somewhere other than the default user-config dir for a ledger or account
file, to test offline processing apis from `.accounting` subsystems. For
now it's a private optional named-arg: `_fp: Path` and it's obviously
passed down into the `load_account()` config getter.
Note that in the non-paper account case `Account.update_from_ledger()`
will use the ledger's `.symcache` and `.iter_txns()` method to acquire
actual txn-structs to compute positions.
Since each broker backend generally needs to define a specific
field-name-schema to determine the exact instantiation arguments to
`Transaction`, we generally need each backend to define an endpoint
function to conduct this transaction from an input `dict[str, Any]`
received either directly from provided ledger APIs or from previously
stored `.accounting._ledger` saved trades ledger TOML files.
To accomplish this we now require backends to declare a new routine:
```python
def norm_trade(
    tid: str,  # the uuid for the transaction
    txdict: dict,  # the input record-dict

    # a table of mkt-symbols to backend struct objects which define
    # the (meta-data) for the backend specific venue's symbology
    pairs: dict[str, Struct],

) -> Transaction:
    ...
```
which implements that record conversion (at least for trades)
and can thus be used in `TransactionLedger.iter_txns()` which requires
"some code" to implement the loading from a serialization format (aka
the input `dict` record) to our local `Transaction` struct, normally
also using a `Pair`-struct table defined (and maybe previously cached)
by the specific backend such that our (normalization layer's)
`MktPair` fields can be set.
For the case of our `.clearing._paper_engine` we def the routine to
simply extract the exact same fields from the TOML ledger records that
we previously had written (to it) and define it in that module.
Also, we always pass `pairs=SymbologyCache.pairs: dict[str, Struct]` on
norm trade calls such that offline ledger and accounting processing
clients can use a previously cached symbology set without having to
necessarily start the async-actor runtime to query the actual backend API
if the data has already been saved locally on the system B)
Other related:
- always passthrough kwargs in overridden `.to_dict()` method.
- only do fqme related trade record field name rewrites/names when
operating on a paper ledger; normally a backend's records don't
contain these.
- fix `pendulum.DateTime` type annots.
- just deliver `Transaction`s from `.iter_txns()`.
Since some backends have multiple venues keyed by the same
symbol-pair-name, AND often the market/symbol info for those different
market-venues is entirely different (cough binance), we will have to
(sometimes) save the struct namespace-path as str for lookup when
deserializing a symcache to object form.
NOTE: this change is reliant on the following `tractor` dev commit which
improves support for constructing a path from object-instance:
bee2c36072
Add a backend(-wide) default struct path stored as a (TOML top level)
field `pair_ns_path: str` in the serialized `dict`-table as well as
allow for a per pair-`Struct` value optionally defined on each type def;
the global is only used if none was defined per struct via a `ns_path:
str`.
Further deats:
- don't write non-struct-member fields to dict for TOML file cache.
- always keep object forms, well as objects (in tables).. XD
- factor cache loading from `dict` (and thus from TOML or presumably any
other interchange form) into a `@classmethod` constructor method B)
- allow choosing the subtable for `.search()` by name.
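A minimal sketch of the `@classmethod` loader from the bullets above (table/field names per the description; the import dance is illustrative):

```python
import importlib


# on `SymbologyCache`:
@classmethod
def from_dict(cls, data: dict, **kwargs) -> 'SymbologyCache':
    cache = cls(**kwargs)
    # backend-wide default struct path (TOML top level field)
    default_path: str = data.get('pair_ns_path', '')
    for key, pairdict in data.get('pairs', {}).items():
        # a per-pair `ns_path` wins over the backend-wide default
        path: str = pairdict.pop('ns_path', None) or default_path
        modpath, _, typename = path.rpartition(':')
        PairType = getattr(
            importlib.import_module(modpath),
            typename,
        )
        # always keep object forms, well as objects B)
        cache.pairs[key] = PairType(**pairdict)
    return cache
```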
Still kinda borked since I don't think there actually is a (per venue)
"get-all-symbologies" endpoint.. so we're likely gonna have to figure
out either how to hack it or provide a bypass in ledger processing?
Deatz:
- use new `Account` type name, rename endpoint vars to match and
obviously use any new method name(s).
- mask out split ratio handling for now.
- async open the symcache prior to ledger processing (again, for now).
- drop passing `Transaction.sym`.
- fix parser set for dt-sorter since apparently 2022 and back had
a `date` field instead?
Since we don't want to be doing a `trio.run()` from async code (being
already in the `tractor` runtime and all); for now just put a top level
block wrapping async enter until we figure out how to embed it (likely)
inside `open_account()` and pass the ref to `open_trade_ledger()`.
Require passing an explicit flag when entering from sync code with an
extra super duper explicit runtime error to indicate how to use in the
async case as well!
Also, do rewrites of the fqme either from the best match in the symcache
according to search (the worst case) or from the `bs_mktid` field if
it exists (should only be true for paper engine accounts), AND of the
`bs_mktid` for paper accounts if it seems un-fully-qualified.
Turns out, in order to make things much cleaner from inside-the-runtime
usage, we probably do want to just make the manager async so that we can
generate the cache on demand from async UI inits as well as daemon
actors.. So change to that and instead make `get_symcache()` the helper
that should ONLY be called from sync funcs / offline ledger processing
utils!
Instead of constructing them (previously manually) in the `.get_mkt_info()` ep,
just call `.get_assets()` and do key lookups for assets to hand directly
to the `.src/dst` of `MktPair`.
Refine fqme input parsing to match:
- adjust parsing logic to only use `unpack_fqme()` on the input fqme
token.
- set `.mkt_mode: str` to the derivs venue when an expiry token is
detected in the fqme.
- pass the parsed `expiry: str` to `Client.exch_info()` to ensure
a deriv venue (table) is used for pair lookup.
- skip any "DEFI" venue or other unknown asset type cases (since binance
doesn't seem to define some assets anywhere?).
Also, just use the `Client._pairs` unified table for search input since
the first call to `.exch_info()` won't necessarily contain the most
up-to-date state whereas `._pairs` always will.
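A rough sketch of the parse flow (the `unpack_fqme()` return shape shown is an assumption):

```python
async def lookup_pair(client, fqme: str):
    # eg. 'btcusdt.usdtm.perp.binance' -> its tokens
    broker, mkt_ep_key, venue, expiry = unpack_fqme(fqme)
    if expiry:
        # an expiry token implies a derivs venue / mkt-mode
        client.mkt_mode = 'usdtm_futes'
    # pass the expiry so the deriv venue (table) is used for lookup
    return await client.exch_info(
        mkt_ep_key.upper(),
        expiry=expiry,
    )
```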
Meaning we add the `Client.get_assets()` and `.get_mkt_pairs()` methods.
Also implement `.exch_info()` to take in a `expiry: str` to detect
whether to look up a derivative venue instead of spot.
In support of all this we now explicitly key all assets (via
`._cache_pairs()` during the populate of `._venue2assets` sub-tables)
with their `.bs_dst_asset: str` value to ensure, for ex., a spot
`BTCUSDT` has a distinct value from any futures contracts with the same
`Pair.symbol: str` value!
Also, ensure we always create a `brokers.toml` (from template) if DNE
and binance is the user's first used backend XD
Add `bs_src/dst_asset: str` properties which provide for unique keying
into futures vs. spot venues by offering a `.venue: str` property which,
for non-spot, normally delivers an expiry suffix (eg. '.PERP') and for
spot just delivers the bare chain-token key.
This enables keying multiple venues with the same mkt pairs easily in
a global flat key->pair table needed as part of supporting a symcache.
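A minimal sketch of the keying idea (`baseAsset` is binance's own schema field; the suffix logic is illustrative):

```python
class SpotPair(Pair):

    @property
    def bs_dst_asset(self) -> str:
        # spot: the bare chain-token key, eg. 'BTC'
        return self.baseAsset


class FutesPair(Pair):

    @property
    def bs_dst_asset(self) -> str:
        # suffix with the venue so a perp's 'BTC.USDTM' never
        # collides with spot 'BTC' in a flat key->pair table
        return f'{self.baseAsset}.USDTM'
```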
So you can do a `Struct1` - `Struct2` and we dump a little diff `list`
of tuples for analysis on the REPL B)
Prolly can be broken out into its own micro-patch?
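A minimal sketch of the operator (assuming `msgspec.Struct`'s `__struct_fields__`):

```python
from typing import Any
import msgspec


class Struct(msgspec.Struct):

    def __sub__(
        self,
        other: 'Struct',
    ) -> list[tuple[str, Any, Any]]:
        # dump a (field, ours, theirs) tuple per mismatched value
        diffs: list[tuple[str, Any, Any]] = []
        for field in self.__struct_fields__:
            ours = getattr(self, field)
            theirs = getattr(other, field)
            if ours != theirs:
                diffs.append((field, ours, theirs))
        return diffs
```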
Drop all the old `polars` (groupby + agg related) mangling to get a df
per fqme by delegating to the new routine and add in the `.cumsum()`ing
(per frame) as a first start on computing pps using dfs instead of
python dicts + loops as in `ppu()`.
To isolate it from the ledger/account mods and bc it is actually for
doing (eventual) position calcs / analysis, might as well put it in this
mod. Add in the old-masked `ensure_state()` method content in case we
want to use it later for testing. Also tighten up the parser loading
inside `dyn_parse_to_dt()`.
Rename `open_pps()` -> `open_account()` for obvious reasons as well as
expect a bit tighter integration with `SymbologyCache` and consequently
`LedgerTransaction` in order to drop `Transaction.sym: MktPair`
dependence when compiling / allocating new `Position`s from a ledger.
Also we drop a bunch of prior attrs and do some cleaning,
- `Position.first_clear_dt` we no longer sort during insert.
- `._clears` now replaces by `._events` table.
- drop the now masked `.ensure_state()` method (eventually moved to
`.calc` submod for maybe-later-use).
- drop `.sym=` from all remaining txns init calls.
- clean out the `Position.add_clear()` method and only add the provided
txn directly to the `._events` table.
Improve some `Account` docs and interface:
- fill out the main type descr.
- add the backend broker modules as `Account.mod` allowing to drop
`.brokername` as input and instead wrap as a `@property`.
- make `.update_from_trans()` now a new `.update_from_ledger()` and
expect either of a `TransactionLedger` (user-dict) or a dict of txns;
in the latter case if we have not been also passed a symcache as input
then runtime error since the symcache is necessary to allocate
positions.
- also, delegate to `TransactionLedger.iter_txns()` instead of
a manual datetime sorted iter-loop.
- drop all the clears datetime don't-insert-if-earlier-than-first
  logic.
- rename `.to_toml()` -> `.prep_toml()`.
- drop old `PpTable` alias.
- rename `load_pps_from_ledger()` -> `load_account_from_ledger()` and
make it only deliver the account instance and also move out all the
`polars.DataFrame` related stuff (to `.calc`).
And tweak some account clears table formatting,
- store datetimes as TOML native equivs.
- drop `be_price` fixing.
- obviously drop the `.ensure_state()` call to pps.
Turns out we don't really need it directly for most "txn processing" AND
if we do it's usually related to some `Account`-ing related calcs; which
means we can instead just rely on the new `SymbologyCache` lookup to get
it when needed. So, basically just get rid of it and rely instead on the
`.fqme` to be the god-key to getting `MktPair` info (from the cache).
Further, extend the `TransactionLedger` to contain much more info on the
backend it pertains to:
- `.mod` mapping to the (pkg) py mod.
- `.filepath` pointing to the actual ledger TOML file.
- `_symcache` for doing any needed asset or mkt lookup as mentioned
above.
- rename `.iter_trans()` -> `.iter_txns()` and allow passing in
a symcache or using the init provided one.
- rename `.to_trans()` similarly.
- delegate paper account txn processing to the `.clearing._paper_engine`
mod's `norm_trade()` (and expect this similarly from other backends!)
- use new `SymbologyCache.search()` to find the best but
un-fully-qualified fqme for a given `txdict` being processed when
writing a config (aka always try to expand to the most verbose `.fqme`
possible).
- add a `rewrite: bool` control to `open_trade_ledger()`.
Since we now fully support interchange-as-dict-msg, use the msg codec
API and drop manual `Asset` unpacking. Also, wrap `get_symcache()` in
a `pdbp` crash handler block for now B)
As part of loading the cache we can now fill the asset sub-tables:
`.mktmaps` and `.assets` with their deserialized struct instances!
In theory this might be possible for the backend defined `Pair` structs
as well but we need to figure out probably an endpoint to offer
the conversion?
Also, add a `SymbologyCache.search()` which allows sync code to scan the
existing (known via cache) symbol set just like how async code can use the
(much slower) `open_symbol_search()` ctx endpoint 💥
Previously we weren't necessarily serializing mkt pairs (for IPC msging)
entirely bc the assets `.src/.dst` were being sent just by their
str-names. This now properly supports fully serializing `Asset`s as
`dict`-msgs such that use of `MktPair.to_dict()` can be transmitted over
`tractor.MsgStream`s and deserialized entirely back to struct from on
the receiver end.
Deats:
- implement `Asset.to_dict()` and `.from_msg()`
- adjust `MktPair.to_dict()` and `.from_msg()` to use these methods.
- drop all the hacky "if .src/.dst is str" handling.
- add better `MktPair.from_fqme()` input handling for expiry and venue;
ensure that either can be extracted from passed fqme *and* if so they
are also popped from any duplicate passed in `**kwargs**`.
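A minimal sketch of those codec methods (the `Asset` field set here is assumed):

```python
from decimal import Decimal
import msgspec


class Asset(msgspec.Struct):
    name: str
    atype: str        # eg. 'crypto', 'fiat'
    tx_tick: Decimal  # smallest transactable unit

    def to_dict(self) -> dict:
        return {
            'name': self.name,
            'atype': self.atype,
            # decimals as str for lossless interchange
            'tx_tick': str(self.tx_tick),
        }

    @classmethod
    def from_msg(cls, msg: dict) -> 'Asset':
        return cls(
            name=msg['name'],
            atype=msg['atype'],
            tx_tick=Decimal(msg['tx_tick']),
        )
```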
For starters rename the cache type to `SymbologyCache` and fill out its
interface to include an (async) `.reload()` which can be used to populate
the in-mem asset-table sets such that any tractor-runtime task can
actually directly call it. Use a symcache file name schema of
`_cache/<backend>.symcache.toml`.
Dirtier deatz:
- make `.open_symcache()` a `@cm` such that it can be used from sync code
and will actually call `trio.run()` in the case where it needs to do a
full (re)load; also don't write on exit only on reloads.
- add `.get_symcache()` a simple non-ctx-mngr reader which again can
mostly be called willy-nilly from sync code without the full runtime
being up (but likely will only work if symcache files already exist
for the backend).
New mod is `.data._symcache` and it needs backend clients to declare
`Client.get_assets()` and `.get_mkt_pairs()` to generate the cache files
which now go in the config dir under `_cache/`.
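A minimal sketch of the sync-code reader (helper names assumed; per the above the cm itself handles any needed `trio.run()` reload internally):

```python
from piker.brokers import get_brokermod


def get_symcache(brokername: str) -> 'SymbologyCache':
    # non-ctx-mngr reader; callable willy-nilly from sync code
    # given a symcache file already exists for the backend
    with open_symcache(get_brokermod(brokername)) as cache:
        return cache
```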
We're probably going to move to implementing all accounting using
`polars.DataFrame` and friends and thus this rejig preps for a much more
"stateless" implementation of our `Position` type and its internal
pos-accounting metrics: `ppu` and `cumsize`.
Summary:
- wrt to `._pos.Position`:
- rename `.size`/`.accum_size` to `.cumsize` to be more in line
with `polars.DataFrame.cumsum()`.
- make `Position.expiry` delegate to the underlying `.mkt: MktPair`
handling (hopefully) all edge cases..
- change over to a new `._events: dict[str, Transaction]` in prep
for #510 (and friends) and enforce a new `Transaction.etype: str`
which is by default `clear`.
- add `.iter_by_type()` which iterates, filters and sorts the
entries in `._events` from above.
- add `Position.clearsdict()` which returns the dict-ified and
datetime-sorted table which can more-or-less be stored in the
toml account file.
- add `.minimized_clears()` a new (and close) version of the old
method which always grabs at least one clear before
a position-side-polarity-change.
  - mask-drop `.ensure_state()` since there are no more `.size`/`.price`
    state vars (per se) as we always re-calc the ppu and cumsize from
    the clears records on every read.
  - `.add_clear` no longer does bisect-insorting since all sorting is
    done on position property *reads*.
- move the PPU (price per unit) calculator to a new `.accounting.calcs`
as well as add in the `iter_by_dt()` clearing transaction sorted
iterator.
- also make some fixes to this to handle both lists of `Transaction`
as well as `dict`s as before.
- start rename of `PpTable` -> `Account` and make a note about adding
a `.balances` table.
- always `float()` the transaction size/price values since it seems if
they get processed as `tomlkit.Integer` there's some suuper weird
double negative on read-then-write to the clears table?
- something like `cumsize = -1` -> `cumsize = --1` !?!?
- make `load_pps_from_ledger()` work again but now includes some very
very first draft `polars` df processing from a transaction ledger.
- use this from the `accounting.cli.disect` subcmd which is also in
*super early draft* mode ;)
- obviously as mentioned in the `Position` section, add the new `.calcs`
module with a `.ppu()` calculator func B)
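A minimal sketch of the core ppu recurrence such a calculator implements (the weighted-cost form shown is illustrative):

```python
def ppu(clears: list[dict]) -> float:
    cumsize: float = 0
    ppu: float = 0
    for clear in clears:
        size = float(clear['size'])
        price = float(clear['price'])
        cost = float(clear['cost'])
        new_size: float = cumsize + size
        if abs(new_size) > abs(cumsize):
            # (further) entry: re-weight the ppu with the new clear
            ppu = (
                (ppu * abs(cumsize))
                + (price * abs(size))
                + cost
            ) / abs(new_size)
        # exits leave the ppu unchanged; only the cumsize rolls
        cumsize = new_size
    return ppu
```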
Also finally adds full `FeedInit` and `MktPair` support for this backend
by handling:
- all "currency" fields for each `Contract` by constructing
and `Asset` and setting the `MktPair.src` with `.atype='fiat'`.
- always render the `MktPair.src` name in the `.fqme` for fiat pairs
(aka forex) but never for other instruments.
In an effort to properly support fiat pairs (aka forex) as well as more
generally insert a fully-qualified `MktPair` in for the
`Transaction.sym`. Note that there's a bit of special handling for API
`Contract`s-as-dict records vs. flex-report-from-xml equivalents.
No point having duplicate data when we already stash the `expiry` on the
mkt info type and can just read it (and cast to `datetime` obj).
Further this fixes a regression caused by converting `._clears` to
a list by adding a `._events: dict[str, Transaction]` which prevents
double entering transactions based on checking the events table for the
existing id.. Further add a sanity check that all events are popped
(for now) after serializing the clearing table for the toml account
file.
In the longer run, ideally we don't have the separate sequences
`._clears` and `._events`; by choosing a better data structure (a sorted
unique set of mkt events), maybe a specially used `polars.DataFrame`
(which we kinda need eventually anyway)?
When you look at usage we don't end up really needing clear entries to
be keyed by their `Transaction.tid`, instead it's much more important to
ensure the time sorted order of trade-clearing transactions such that
position properties such as the size and ppu are calculated correctly.
Thus, this instead simplifies the `.clears` table to a list of clear
dict entries, making a bunch of things simpler:
- object form `Position._clears` compared to the offline TOML schema
(saved in account files) is now data-structure-symmetrical.
- `Position.add_clear()` now uses `bisect.insort()` to
datetime-field-sort-insert into the *list* which saves having to worry
about sorting on every sequence *read*.
Further deats:
- adjust `.accounting._ledger.iter_by_dt()` to expect an input `list`.
- change `Position.iter_clears()` to iterate only the clearing entry
dicts without yielding a key/tid; no more tuples.
- drop `Position.to_dict()` since parent `Struct` already implements it.
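A minimal sketch of that sorted insert (assumes py3.10+ for `bisect.insort()`'s `key` kwarg):

```python
import bisect
from operator import itemgetter


def add_clear(self, entry: dict) -> None:
    # datetime-field-sort-insert so reads never need a re-sort
    bisect.insort(
        self._clears,
        entry,
        key=itemgetter('dt'),
    )
```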
Since it may be handy to get the latest ticks first, add a `reverse:
bool` to `iterticks()` and add some cleaner logic and a proper doc
string to `frame_ticks()`.
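A minimal sketch of the generator's new flag (the signature is illustrative):

```python
def iterticks(
    quote: dict,
    types: tuple[str, ...] = ('trade',),
    reverse: bool = False,
):
    # frame-relative tick iteration, optionally latest-first
    ticks: list[dict] = quote.get('ticks', [])
    for tick in (reversed(ticks) if reverse else ticks):
        if tick.get('type') in types:
            yield tick
```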
Allows for tracking paper engine orders despite the ems not necessarily
being opened by the current order mode instance (UI) in "paper"
execution mode; useful for tracking bots/strats running against the same
EMS daemon.
Like you'd think:
- `load_ledger()` -> `._ledger`
- `load_account()` -> `._pos`
Also fixup the old `load_pps_from_ledger()` and expose it from a new
`.accounting.cli.disect` cli cmd for trying to figure out why pp calcs
are totally mucked on stupid ib..
Should be the final production backend to switch this over B)
Also tidy up the `update_and_audit_msgs()` validator to log vs. raise
when `validate: bool` is set; turn it off by default to avoid raises
until we figure out wtf is up with ib ledger processing or wtv..
We might as well start standardizing on `brokerd` init such that it can
be used more generally in client code (such as the `.accounting.cli`
stuff).
Deats of `broker_init()` impl:
- loads appropriate py pkg module,
- reads any declared `__enable_modules__: list[str]` which will be
  passed to `tractor.ActorNursery.start_actor(enabled_modules=<this>)`,
- loads the `.brokers._daemon._setup_persistent_brokerd` ep.
As expected the `accounting.cli` tools can now import directly from this
new location and use the common daemon fixture definition.
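A rough sketch of `broker_init()` per the deats above (the return shape and helper names are assumptions):

```python
from types import ModuleType
from piker.brokers import get_brokermod


def broker_init(
    brokername: str,
    **start_actor_kwargs,
) -> tuple[ModuleType, dict]:
    # load the appropriate py pkg module
    brokermod = get_brokermod(brokername)
    modpath: str = brokermod.__name__
    # expand any declared submods for actor-side enabling
    enabled: list[str] = [modpath] + [
        f'{modpath}.{submod}'
        for submod in getattr(brokermod, '__enable_modules__', [])
    ]
    start_actor_kwargs['enable_modules'] = enabled
    return brokermod, start_actor_kwargs
```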
After #520 we've moved to better supporting explicit venues for cex
backends which is important where a provider offers both spot and
derivatives markets (kraken, binance, kucoin) and we need to distinguish
which is being traded given a common asset pair (eg. BTC/USDT). So, make
this work for `kraken`'s brokerd such that requests and pre-existing
live orders are (un)packed to/from EMS messaging form.
Since crypto backends now also may expand an FQME like `xbteur.kraken`
-> `xbteur.spot.kraken` (by filling in the venue token), we need to use
this identifier when looking up per-market order dialogs or submitting
new requests. The fix is to simply look up that expanded form from the
`Feed.flumes` table which is always keyed by the `MktPair.fqme:
str` - the expanded form.
Drop the older `dict[str, ChainMap]` prototype we had since the new
`OrderDialogs` built-out while adding `binance` order support is more
refined and general. Also, handle new and now expect `.spot` venue token
in FQMEs since kraken too has futes markets that we'll likely want to
support eventually.
Use dynamic lookups instead by mapping to the correct http session and
endpoints path using the venue routing/mode key. This lets us simplify
from 3 methods down to a single `Client._api()` which either can be
passed the `venue: str` explicitly by the caller (as is needed in the
`._cache_pairs()` case) or falls back to the client's current
`.mkt_mode: str` setting B)
Deatz:
- add couple more tables to suffice all authed-endpoint use cases:
- `.venue2configkey: dict[str, str]` which maps the venue key to the
`brokers.toml` subsection which should be used for auth creds and
testnet config.
- `.confkey2venuekeys: dict[str, list[str]]` which maps each config
subsection key to the list of venue name keys for doing config to
venues lookup.
- always build out testnet sessions for spot and futes venues (though if
not set the sessions obviously won't ever be used).
- add and use new `config.ConfigurationError` custom exceptions when api
creds are missing.
- rename `action: str` to `method: str` in `._api()` since it's the
proper ReST term and switch what was "method" to be `endpoint: str`.
- mask out `.get_positions()` since we can get that from a user stream
wss request (and are doing that).
- (in theory) import and use spot testnet url as necessary.
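A minimal sketch of the unified request method (the session table name and http session api shown here are hypothetical):

```python
class Client:
    ...

    async def _api(
        self,
        method: str,    # proper ReST verb: 'get', 'post', 'put', ..
        endpoint: str,  # venue-relative ep path
        params: dict,
        venue: str | None = None,  # explicit caller override
    ) -> dict:
        venue_key: str = venue or self.mkt_mode
        # hypothetical venue-key -> http session table
        sesh = self._venue_sessions[venue_key]
        resp = await sesh.request(method, endpoint, params=params)
        return resp.json()
```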
Do parsing of the `'symbol'` and check for an `_<expiry>` suffix, in
which case we re-format in capitalized FQME style, do the
`Client._pairs[str, Pair]` lookup and then send the `Pair.bs_fqme` in
the `Order.fqme: str` field.
It's finally a decent little design / interface and definitely can be
used in other backends like `kraken` which rolled something lower level
but more or less the same without a wrapper class.
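A minimal sketch of the wrapper's interface (method names per its use elsewhere in this series; internals assumed):

```python
class OrderDialogs:
    '''
    Map order dialog ids to their time-ordered msg histories.

    '''
    def __init__(self) -> None:
        self._dialogs: dict[str, list[dict]] = {}

    def add_msg(self, oid: str, msg: dict) -> None:
        self._dialogs.setdefault(oid, []).append(msg)

    def get(self, oid: str) -> list[dict] | None:
        return self._dialogs.get(oid)

    def pop(self, oid: str) -> list[dict] | None:
        # eg. on 'closed' statuses
        return self._dialogs.pop(oid, None)
```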
As you'd expect query and sync the EMS with existing live orders
reported by the market venue by packing them in `Status` msgs and
sending over the order dialog stream before starting the handler tasks.
XXX CAVEAT:
- there appears to be no way (at least on the usdtm market/venue) to
distinguish between different contracts such as perps vs. the
quarterlies?
- for now we just assume that the perp is being used since
there's no indicator otherwise in the 'symbol' field?
- we should maybe open an issue with the futures-connector project to
see how they'd recommend solving this discrepancy?
Only a couple tweaks to make this work according to the docs:
https://binance-docs.github.io/apidocs/futures/en/#modify-order-trade
- use a PUT request.
- provide the original user id in a `'origClientOrderId'` msg field.
- don't expect the same oid in the PUT response.
Other broker-mode related details:
- don't call `OrderDialogs.add_msg()` until after the existing check
since we want to check against the *last* msgs contents not the new
request.
- ensure we pass the `modify=True` flag in the edit case.
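A minimal sketch of the edit request per those docs (the param names are from binance's futures API; the client wiring is illustrative):

```python
async def modify_order(
    client,
    symbol: str,
    oid: str,
    side: str,
    size: float,
    price: float,
) -> dict:
    resp: dict = await client._api(
        'put',  # modify-order uses a PUT request
        'order',
        params={
            'symbol': symbol,
            'origClientOrderId': oid,  # the original user id
            'side': side,
            'quantity': size,
            'price': price,
        },
    )
    # NOTE: don't expect the same oid back in the PUT response!
    return resp
```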
One trick here: it seems that binance will often send
the account/position update event over the user stream *before* the
actual clearing (aka FILLED) order update event, so make sure we put an
entry in the `dialogs: OrderDialogs` as soon as an order request comes
in such that even if the account update arrives first the
`BrokerdPosition` msg can be relayed without delay / event-ordering
considerations.
Was using the wrong key before from our old code (not sure how that
slipped back in.. prolly doing too many git stashes XD), so fix that to
properly match against order update events with 'ORDER_TRADE_UPDATE'.
Also, don't match on the types we want to *cast to*, that's not how
match syntax works (facepalm), so we have to typecast prior to EMS msg
creation / downstream logic.
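A minimal sketch of the match-then-cast pattern (payload keys per binance's user stream; the struct type is hypothetical):

```python
def handle_wss_msg(msg: dict) -> None:
    match msg:
        case {
            'e': 'ORDER_TRADE_UPDATE',
            'o': raw_order,
        }:
            # class patterns in `match` only isinstance-check,
            # they never convert; typecast AFTER the match
            update = OrderUpdate(**raw_order)  # hypothetical type
            ...
```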
Further,
- try not bothering with binance's own internal `'orderId'` field
tracking since they seem to support just using your own user version
for all ctl endpoints? (thus we only need to track the EMS `.oid`s B)
- log all event update msgs for now.
- pop order dialogs on 'closed' statuses.
- wrap cancel requests in an error handler block since it seems the EMS
is double sending requests from the client?
For crypto derivatives (at least futes), yes they are margined, but
generally not around a single unit of vlm (like equities or commodities
futes) so don't pre-set the order mode allocator to use a #unit limit,
$limit is fine.
Since the request handler task will work concurrently across venues
(spot, futes, margin) we need to be sure that we look up the correct
venue to update the order dialog and this is naturally determined by the
FQME-style symbol in the `BrokerdOrder` msg; the best way to map that
symbol-key to the correct venue/`Pair` is by using said `._pairs:
ChainMap`.
Further, handle limit order errors by catching and relaying back an
error response to the EMS. Fix the "account name" to be `binance.usdtm`
so that we can eventually and explicitly support all venues by name.
Untested fully but has ostensibly working position and balance loading
(by delegating entirely to binance's internals for that) and an MVP ems
order request handler; still need to fill out the order status update
task implementation..
Notes:
- uses user data stream for all per account balance and position tracking.
- no support yet for `piker.accounting` position tracking.
- no support yet for full order / position real-time update via user
stream.
Since we need them for accounting and since we can get them directly
from the usdtm futes `exchangeInfo` ep, just preload all asset info that
we can during initial `Pair` caching. Cache the asset infos inside a new
per-venue `Client._venues2assets: dict[str, dict[str, Asset | None]]`
table and mostly be pedantic with the spot asset list for now since the
futes set seems much smaller and doesn't include transaction precision
info.
Further:
- load a testnet http session if `binance.use_testnet.futes = true`.
- add testnet support for all non-data endpoints.
- hardcode user stream methods to work for usdtm futes for the moment.
- add logging around order request calls.
As just added for binance, move to using an explicit `.<venue>.kraken`
style for spot markets which makes the current spot symbology expand to
`<PAIR>.SPOT` from the new `Pair.bs_fqme: str`. Reasons why are
laid out in the equivalent patch for binance. Obviously this also primes
for supporting kraken's futures venue APIs as well 🏄
https://docs.futures.kraken.com/#introduction
Details:
- add `.spot.kraken` parsing to `get_mkt_info()` so that if the venue
token is not passed by caller we implicitly expand it in.
- change `normalize()` to only return the `quote: dict` not the topic
key.
- rewrite live feed msg loop to use `match:` syntax B)
Since there are indeed multiple futures (perp swaps) contracts including
a set with expiry, we need a way to distinguish through search and
`FutesPair` lookup which contract we're requesting. To solve this extend
the `FutesPair` and `SpotPair` to include a `.bs_fqme` field similar to
`MktPair` and key the `Client._pairs: ChainMap`'s backing tables with
these expanded fqmes. For example the perp swap now expands to
`btcusdt.usdtm.perp` which fills in the venue as `'usdtm'` (the
usd-margined futures market) and the expiry as `'perp'` (as before).
This allows distinguishing explicitly from, for ex., coin-margined
contracts which would instead (once we add the support) get
fqmes of the sort `btcusdt.<coin>m.perp.binance`, thus making it
explicit and obvious which contract is which B)
Further we interpolate the venue token to `spot` for spot markets going
forward, which again makes cex spot markets explicit in symbology; we'll
need to add this as well to other cex backends ;)
Other misc details:
- change USD-M futes `MarketType` key to `'usdtm_futes'`.
- add `Pair.bs_fqme: str` for all pair subtypes with particular
special contract handling for futes including quarterlies, perps and
the weird "DEFI" ones..
- drop `OHLC.bar_wap` since it's no longer in the default time-series
schema and we weren't filling it in here anyway..
- `Client._pairs: ChainMap` is now a read-only fqme-re-keyed view into
the underlying pairs tables (which themselves are ideally keyed
identically cross-venue) which we populate inside `Client.exch_info()`
which itself now does concurrent pairs info fetching via a new
`._cache_pairs()` using a `trio` task per API-venue.
- support klines history query across all venues using same
`Client.mkt_mode_req[Client.mkt_mode]` style as we're doing for
`.exch_info()` B)
- use the venue specific klines history query limits where documented.
- handle new FQME venue / expiry fields inside `get_mkt_info()` ep such
that again the correct `Client.mkt_mode` is selected based on parsing
the desired spot vs. derivative contract.
- do venue-specific-WSS-addr lookup based on output from
`get_mkt_info()`; use usdtm venue WSS addr if a `FutesPair` is loaded.
- set `topic: str` to the `.bs_fqme` value in live feed quotes!
- use `Pair.bs_fqme: str` values for fuzzy-search input set.
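A minimal sketch of the fqme expansion on the futes pair type (quarterlies carry an `_<expiry>` suffix per binance's raw symbology; the exact logic is illustrative):

```python
class FutesPair(Pair):

    @property
    def bs_fqme(self) -> str:
        symbol, _, expiry = self.symbol.partition('_')
        # perp swaps have no `_<expiry>` suffix in the raw symbol
        return f'{symbol}.USDTM.{expiry or "PERP"}'

# eg. 'BTCUSDT'        -> 'BTCUSDT.USDTM.PERP'
#     'BTCUSDT_230929' -> 'BTCUSDT.USDTM.230929'
```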
The beginning of supporting multi-markets through a common API client.
Change to futes market mode in the client if `.perp.` is matched in the
fqme. Currently the exchange info and live feed ws impl will swap out
for their usd-margin futures market equivalent (endpoints).
Not sure why it seemed like futures pairs didn't have this field but add
it to the parent `Pair` type as well as drop the overridden
`.price/size_tick` fields, instead doing the same as in spot.
Also moves the `MarketType: Literal` (for the `Client.mkt_mode: str`)
and adds a pair type lookup table for exchange info loading.
Add the usd-futes "Pair" type and thus ability to load all exchange
(info for) contracts settled in USDT. Luckily we don't seem to have to
modify anything in the `Client` interface (yet) other than a new
`.mkt_mode: str` which determines which endpoint set to make requests.
Obviously data received from endpoints will likely need diff handling as
per below.
Deats:
- add a bunch more API and WSS top level domains to `.api` with comments.
- start a `.binance.schemas` module to house the structs for loading
different `Pair` subtypes depending on target market: `SpotPair`,
`FutesPair`, .. etc. and implement required `MktPair` fields on the
new futes type for compatibility with the clearing layer.
- add `Client.mkt_mode: str` and a method lookup for endpoint parent
  paths depending on market via `.mkt_req: dict`.
Also related to live feeds,
- drop `Struct` typecasting instead opting for specific fields both for
speed and simplicity atm.
- breakout `subscribe()` into module level acm from being embedded
closure.
- for now swap over the ws feed to be strictly the futes ep (while
testing) and set the `.mkt_mode = 'usd_futes'`.
- hack in `Client._pairs` to only load `FutesPair`s until we figure out
whether we want separate `Client` instances per market or not..