Since we don't want to be doing a `trio.run()` from async code (being
already in the `tractor` runtime and all), for now just put a top level
block wrapping the async enter until we figure out how to embed it (likely)
inside `open_account()` and pass the ref to `open_trade_ledger()`.
Require passing an explicit flag when entering from sync code with an
extra super duper explicit runtime error to indicate how to use in the
async case as well!
Also, do rewrites of both the fqme (from the best match in the symcache
according to search in the worst case, or from the `bs_mktid` field if
it exists, which should only be true for paper engine accounts) AND the
`bs_mktid` for paper accounts if it seems un-fully-qualified.
Turns out, in order to make things much cleaner for inside-the-runtime
usage we probably do want to just make the manager async so that we can
generate the cache on demand from async UI inits as well as daemon
actors.. So change to that and instead make `get_symcache()` the helper
that should ONLY be called from sync funcs / offline ledger processing
utils!
Instead of constructing them (previously manually) in `.get_mkt_info()` ep,
just call `.get_assets()` and do key lookups for assets to hand directly
to the `.src/dst` of `MktPair`.
Refine fqme input parsing to match:
- adjust parsing logic to only use `unpack_fqme()` on the input fqme
token.
- set `.mkt_mode: str` to the derivs venue when an expiry token is
detected in the fqme.
- pass the parsed `expiry: str` to `Client.exch_info()` to ensure
a deriv venue (table) is used for pair lookup.
- skip any "DEFI" venue or other unknown asset type cases (since binance
doesn't seem to define some assets anywhere?).
Also, just use the `Client._pairs` unified table for search input since
the first call to `.exch_info()` won't necessarily contain the most
up-to-date state whereas `._pairs` always will.
Meaning we add the `Client.get_assets()` and `.get_mkt_pairs()` methods.
Also implement `.exch_info()` to take in an `expiry: str` to detect
whether to look up a derivative venue instead of spot.
In support of all this we now explicitly key all assets (via
`._cache_pairs()` during the population of the `._venue2assets` sub-tables)
with their `.bs_dst_asset: str` value to ensure, for ex., a spot
`BTCUSDT` has a distinct value from any futures contracts with the same
`Pair.symbol: str` value!
Also, ensure we always create a `brokers.toml` (from template) if DNE
and binance is the user's first used backend XD
Add `bs_src/dst_asset: str` properties which provide for unique keying
into futures vs. spot venues by offering a `.venue: str` property which,
for non-spot normally delivers an expiry suffix (eg. '.PERP') and for
spot just delivers the bare chain-token key.
This enables keying multiple venues with the same mkt pairs easily in
a global flat key->pair table needed as part of supporting a symcache.
So you can do a `Struct1` - `Struct2` and we dump a little diff `list`
of tuples for anal on the REPL B)
Prolly can be broken out into its own micro-patch?
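A rough sketch of the idea (assuming a `msgspec.Struct` base like ours; not
the exact impl):

```python
import msgspec

class MktState(msgspec.Struct):
    fqme: str
    price: float
    size: float

def struct_diff(a: msgspec.Struct, b: msgspec.Struct) -> list[tuple]:
    # field-wise compare and collect (field, left, right) entries
    diffs: list[tuple] = []
    for key, aval in msgspec.structs.asdict(a).items():
        bval = getattr(b, key)
        if aval != bval:
            diffs.append((key, aval, bval))
    return diffs

# REPL anal:
# >>> struct_diff(
# ...     MktState('btcusdt.spot.binance', 30_000, 1),
# ...     MktState('btcusdt.spot.binance', 30_100, 1),
# ... )
# [('price', 30000, 30100)]
```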
Drop all the old `polars` (groupby + agg related) mangling to get a df
per fqme by delegating to the new routine and add in the `.cumsum()`ing
(per frame) as a first start on computing pps using dfs instead of
python dicts + loops as in `ppu()`.
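Something along these lines (a hedged sketch, not the actual routine; note
older `polars` releases spell the method `.cumsum()`):

```python
import polars as pl

ledger = pl.DataFrame({
    'fqme': ['mnq.cme.ib', 'mnq.cme.ib', 'btcusdt.spot.binance'],
    'size': [2.0, -1.0, 0.5],
    'price': [18_000.0, 18_050.0, 30_000.0],
})

# one df per fqme, each with a running position size column
dfs: dict[str, pl.DataFrame] = {
    df['fqme'][0]: df.with_columns(
        pl.col('size').cum_sum().alias('cumsize')
    )
    for df in ledger.partition_by('fqme')
}
```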
To isolate it from the ledger/account mods and bc it is actually for
doing (eventual) position calcs / anal, might as well put it in this
mod. Add in the old-masked `ensure_state()` method content in case we
want to use it later for testing. Also tighten up the parser loading
inside `dyn_parse_to_dt()`.
Rename `open_pps()` -> `open_account()` for obvious reasons as well as
expect a bit tighter integration with `SymbologyCache` and consequently
`LedgerTransaction` in order to drop `Transaction.sym: MktPair`
dependence when compiling / allocating new `Position`s from a ledger.
Also we drop a bunch of prior attrs and do some cleaning,
- `Position.first_clear_dt` we no longer sort during insert.
- `._clears` is now replaced by the `._events` table.
- drop the now masked `.ensure_state()` method (eventually moved to
`.calc` submod for maybe-later-use).
- drop `.sym=` from all remaining txns init calls.
- clean out the `Position.add_clear()` method and only add the provided
txn directly to the `._events` table.
Improve some `Account` docs and interface:
- fill out the main type descr.
- add the backend broker modules as `Account.mod` allowing us to drop
`.brokername` as input and instead wrap it as a `@property`.
- make `.update_from_trans()` now a new `.update_from_ledger()` and
expect either of a `TransactionLedger` (user-dict) or a dict of txns;
in the latter case if we have not been also passed a symcache as input
then runtime error since the symcache is necessary to allocate
positions.
- also, delegate to `TransactionLedger.iter_txns()` instead of
a manual datetime sorted iter-loop.
- drop all the clears datetime don't-insert-if-earlier-than-first
logic.
- rename `.to_toml()` -> `.prep_toml()`.
- drop old `PpTable` alias.
- rename `load_pps_from_ledger()` -> `load_account_from_ledger()` and
make it only deliver the account instance and also move out all the
`polars.DataFrame` related stuff (to `.calc`).
And tweak some account clears table formatting,
- store datetimes as TOML native equivs.
- drop `be_price` fixing.
- obvsly drop `.ensure_state()` call to pps.
Turns out we don't really need it directly for most "txn processing" AND
if we do it's usually related to some `Account`-ing related calcs; which
means we can instead just rely on the new `SymbologyCache` lookup to get
it when needed. So, basically just get rid of it and rely instead on the
`.fqme` to be the god-key to getting `MktPair` info (from the cache).
Further, extend the `TransactionLedger` to contain much more info on the
pertaining backend:
- `.mod` mapping to the (pkg) py mod.
- `.filepath` pointing to the actual ledger TOML file.
- `_symcache` for doing any needed asset or mkt lookup as mentioned
above.
- rename `.iter_trans()` -> `.iter_txns()` and allow passing in
a symcache or using the init provided one.
- rename `.to_trans()` similarly.
- delegate paper account txn processing to the `.clearing._paper_engine`
mod's `norm_trade()` (and expect this similarly from other backends!)
- use the new `SymbologyCache.search()` to find the best match for an
un-fully-qualified fqme from a given `txdict` being processed when
writing a config (aka always try to expand to the most verbose `.fqme`
possible).
- add a `rewrite: bool` control to `open_trade_ledger()`.
Since we now fully support interchange-as-dict-msg, use the msg codec
API and drop manual `Asset` unpacking. Also, wrap `get_symcache()` in
a `pdbp` crash handler block for now B)
As part of loading the cache we can now fill the asset sub-tables:
`.mktmaps` and `.assets` with their deserialized struct instances!
In theory this might be possible for the backend defined `Pair` structs
as well but we probably need to figure out an endpoint to offer
the conversion?
Also, add a `SymbologyCache.search()` which allows sync code to scan the
existing (known via cache) symbol set just like how async code can use the
(much slower) `open_symbol_search()` ctx endpoint 💥
Previously we weren't necessarily serializing mkt pairs (for IPC msging)
entirely bc the assets `.src/.dst` were being sent just by their
str-names. This now properly supports fully serializing `Asset`s as
`dict`-msgs such that use of `MktPair.to_dict()` can be transmitted over
`tractor.MsgStream`s and deserialized entirely back to struct form on
the receiver end.
Deats:
- implement `Asset.to_dict()` and `.from_msg()`
- adjust `MktPair.to_dict()` and `.from_msg()` to use these methods.
- drop all the hacky "if .src/.dst is str" handling.
- add better `MktPair.from_fqme()` input handling for expiry and venue;
ensure that either can be extracted from passed fqme *and* if so they
are also popped from any duplicate passed in `**kwargs`.
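Roughly the shape of it (a sketch with made-up fields, not the real
`Asset`):

```python
from __future__ import annotations
import msgspec

class Asset(msgspec.Struct):
    name: str
    atype: str       # eg. 'crypto_currency', 'fiat'
    tx_tick: float

    def to_dict(self) -> dict:
        # flatten to plain builtins for IPC msging
        return msgspec.structs.asdict(self)

    @classmethod
    def from_msg(cls, msg: dict) -> Asset:
        # re-hydrate the struct form on the receiver end
        return cls(**msg)

usdt = Asset(name='usdt', atype='crypto_currency', tx_tick=1e-6)
assert Asset.from_msg(usdt.to_dict()) == usdt
```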
For starters rename the cache type to `SymbologyCache` and fill out its
interface to include an (async) `.reload()` which can be used to populate
the in-mem asset-table sets such that any tractor-runtime task can
actually directly call it. Use a symcache file name schema of
`_cache/<backend>.symcache.toml`.
Dirtier deatz:
- make `.open_symcache()` a `@cm` such that it can be used from sync code
and will actually call `trio.run()` in the case where it needs to do a
full (re)load; also don't write on exit, only on reloads.
- add `.get_symcache()` a simple non-ctx-mngr reader which again can
mostly be called willy-nilly from sync code without the full runtime
being up (but likely will only work if symcache files already exist
for the backend).
New mod is `.data._symcache` and it needs backend clients to declare
`Client.get_assets()` and `.get_mkt_pairs()` to generate the cache files
which now go in the config dir under `_cache/`.
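To make the backend contract concrete, cache generation needs something like
the below (paths and TOML layout are illustrative only; just the two method
names come from this patch):

```python
from pathlib import Path
import tomli_w

class ExampleClient:
    # every backend `Client` must now expose these two:
    async def get_assets(self) -> dict[str, dict]:
        return {'usdt': {'atype': 'crypto_currency', 'tx_tick': 1e-6}}

    async def get_mkt_pairs(self) -> dict[str, dict]:
        return {'btcusdt.spot': {'price_tick': 0.01, 'size_tick': 1e-5}}

async def write_symcache(
    client: ExampleClient,
    confdir: Path,
    backend: str,
) -> Path:
    cache = {
        'assets': await client.get_assets(),
        'pairs': await client.get_mkt_pairs(),
    }
    path = confdir / '_cache' / f'{backend}.symcache.toml'
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(tomli_w.dumps(cache))
    return path

# eg. `trio.run(write_symcache, ExampleClient(), Path('/tmp/piker-conf'), 'binance')`
```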
We're probably going to move to implementing all accounting using
`polars.DataFrame` and friends and thus this rejig preps for a much more
"stateless" implementation of our `Position` type and its internal
pos-accounting metrics: `ppu` and `cumsize`.
Summary:
- wrt to `._pos.Position`:
- rename `.size`/`.accum_size` to `.cumsize` to be more in line
with `polars.DataFrame.cumsum()`.
- make `Position.expiry` delegate to the underlying `.mkt: MktPair`
handling (hopefully) all edge cases..
- change over to a new `._events: dict[str, Transaction]` in prep
for #510 (and friends) and enforce a new `Transaction.etype: str`
which is by default `clear`.
- add `.iter_by_type()` which iterates, filters and sorts the
entries in `._events` from above.
- add `Position.clearsdict()` which returns the dict-ified and
datetime-sorted table which can more-or-less be stored in the
toml account file.
- add `.minimized_clears()` a new (and close) version of the old
method which always grabs at least one clear before
a position-side-polarity-change.
- mask-drop `.ensure_state()` since there is no more `.size`/`.price`
state vars (per se) as we always re-calc the ppu and cumsize from
the clears records on every read.
- `.add_clear` no longer does bisec insorting since all sorting is
done on position properties *reads*.
- move the PPU (price per unit) calculator to a new `.accounting.calcs`
as well as add in the `iter_by_dt()` clearing transaction sorted
iterator.
- also make some fixes to this to handle both lists of `Transaction`
as well as `dict`s as before.
- start rename of `PpTable` -> `Account` and make a note about adding
a `.balances` table.
- always `float()` the transaction size/price values since it seems if
they get processed as `tomlkit.Integer` there's some suuper weird
double negative on read-then-write to the clears table?
- something like `cumsize = -1` -> `cumsize = --1` !?!?
- make `load_pps_from_ledger()` work again but now includes some very
very first draft `polars` df processing from a transaction ledger.
- use this from the `accounting.cli.disect` subcmd which is also in
*super early draft* mode ;)
- obviously as mentioned in the `Position` section, add the new `.calcs`
module with a `.ppu()` calculator func B)
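For reference, the gist of a ppu calc (a deliberately simplified
illustration, *not* the real `.ppu()` which handles way more edge cases):
clears that grow the position re-weight the average entry price, reductions
leave it alone, and a side-flip resets it to the flip price.

```python
def ppu(clears: list[dict]) -> float:
    cumsize: float = 0.0
    avg_price: float = 0.0
    for clear in clears:
        size, price = clear['size'], clear['price']
        new_size = cumsize + size
        if cumsize and (new_size * cumsize) < 0:
            # side flipped: whatever remains was entered at this price
            avg_price = price
        elif abs(new_size) > abs(cumsize):
            # position increased: weight in the new clear's cost
            avg_price = (cumsize * avg_price + size * price) / new_size
        # pure reductions don't move the entry ppu
        cumsize = new_size
    return avg_price

assert ppu([
    {'size': 2, 'price': 10.0},
    {'size': 2, 'price': 14.0},   # -> ppu 12.0
    {'size': -1, 'price': 20.0},  # reduction, ppu stays 12.0
]) == 12.0
```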
Also finally adds full `FeedInit` and `MktPair` support for this backend
by handling:
- all "currency" fields for each `Contract` by constructing
and `Asset` and setting the `MktPair.src` with `.atype='fiat'`.
- always render the `MktPair.src` name in the `.fqme` for fiat pairs
(aka forex) but never for other instruments.
In an effort to properly support fiat pairs (aka forex) as well as more
generally insert a fully-qualified `MktPair` in for the
`Transaction.sym`. Note that there's a bit of special handling for API
`Contract`s-as-dict records vs. flex-report-from-xml equivalents.
No point having duplicate data when we already stash the `expiry` on the
mkt info type and can just read it (and cast to `datetime` obj).
Further this fixes a regression caused by converting `._clears` to
a list by adding a `._events: dict[str, Transaction]` which prevents
double entering transactions based on checking the events table for the
existing id.. Further add a sanity check that all events are popped
(for now) after serializing the clearing table for the toml account
file.
In the longer run, ideally we don't have the separate sequences `._clears`
and `._events`; instead we should choose a better data structure (a sorted,
unique set of mkt events), maybe a specially-purposed `polars.DataFrame`
(which we kinda need eventually anyway)?
When you look at usage we don't end up really needing clear entries to
be keyed by their `Transaction.tid`, instead it's much more important to
ensure the time sorted order of trade-clearing transactions such that
position properties such as the size and ppu are calculated correctly.
Thus, this instead simplifies the `.clears` table to a list of clear
dict entries making a bunch of things simpler:
- object form `Position._clears` compared to the offline TOML schema
(saved in account files) is now data-structure-symmetrical.
- `Position.add_clear()` now uses `bisect.insort()` to
datetime-field-sort-insert into the *list* which saves having to worry
about sorting on every sequence *read* (see the sketch after this list).
Further deats:
- adjust `.accounting._ledger.iter_by_dt()` to expect an input `list`.
- change `Position.iter_clears()` to iterate only the clearing entry
dicts without yielding a key/tid; no more tuples.
- drop `Position.to_dict()` since parent `Struct` already implements it.
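The insort bit mentioned above in a nutshell (assuming py3.10+ for the
`key=` arg):

```python
from bisect import insort
from datetime import datetime, timedelta

def add_clear(clears: list[dict], entry: dict) -> None:
    # keep the list time-ordered on *insert* so reads never re-sort
    insort(clears, entry, key=lambda c: c['dt'])

clears: list[dict] = []
now = datetime.now()
add_clear(clears, {'dt': now, 'size': 1, 'price': 100.0})
add_clear(clears, {'dt': now - timedelta(minutes=1), 'size': 2, 'price': 99.0})
assert [c['price'] for c in clears] == [99.0, 100.0]
```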
Since it may be handy to get the latest ticks first, add a `reverse:
bool` to `iterticks()` and add some cleaner logic and a proper doc
string to `frame_ticks()`.
Allows for tracking paper engine orders despite the ems not necessarily
being opened by the current order mode instance (UI) in "paper"
execution mode; useful for tracking bots/strats running against the same
EMS daemon.
Like you'd think:
- `load_ledger()` -> `._ledger`
- `load_account()` -> `._pos`
Also fixup the old `load_pps_from_ledger()` and expose it from a new
`.accounting.cli.disect` cli cmd for trying to figure out why pp calcs
are totally mucked on stupid ib..
Should be the final production backend to switch this over B)
Also tidy up the `update_and_audit_msgs()` validator to log vs. raise
when `validate: bool` is set; turn it off by default to avoid raises
until we figure out wtf is up with ib ledger processing or wtv..
We might as well start standardizing on `brokerd` init such that it can
be used more generally in client code (such as the `.accounting.cli`
stuff).
Deats of `broker_init()` impl:
- loads appropriate py pkg module,
- reads any declared `__enable_modules__: list[str]` which will be
passed to `tractor.ActorNursery.start_actor(enable_modules=<this>)`
- loads the `.brokers._daemon._setup_persistent_brokerd()` ep.
As expected the `accounting.cli` tools can now import directly from this
new location and use the common daemon fixture definition.
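Roughly (a sketch assuming the std module layout, not the exact impl):

```python
from types import ModuleType
import importlib

def broker_init(brokername: str) -> tuple[ModuleType, list[str]]:
    # load the backend's (pkg) py module
    brokermod = importlib.import_module(f'piker.brokers.{brokername}')

    # any backend-declared modules get enabled in the spawned actor
    # alongside the persistent brokerd ep mod
    enabled: list[str] = list(getattr(brokermod, '__enable_modules__', []))
    enabled.append('piker.brokers._daemon')

    # hand these to `tractor.ActorNursery.start_actor()` (and/or the
    # `Services` layer) when spawning the `brokerd` subactor
    return brokermod, enabled
```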
After #520 we've moved to better supporting explicit venues for cex
backends which is important where a provider offers both spot and
derivatives markets (kraken, binance, kucoin) and we need to distinguish
which is being traded given a common asset pair (eg. BTC/USDT). So, make
this work for `kraken`'s brokerd such that requests and pre-existing
live orders are (un)packed to/from EMS messaging form.
Since crypto backends now also may expand an FQME like `xbteur.kraken`
-> `xbteur.spot.kraken` (by filling in the venue token), we need to use
this identifier when looking up per-market order dialogs or submitting
new requests. The simple fix is to look up that expanded form from
the `Feed.flumes` table which is always keyed by the `MktPair.fqme:
str` - the expanded form.
Drop the older `dict[str, ChainMap]` prototype we had since the new
`OrderDialogs` built-out while adding `binance` order support is more
refined and general. Also, handle new and now expect `.spot` venue token
in FQMEs since kraken too has futes markets that we'll likely want to
support eventually.
Use dynamic lookups instead by mapping to the correct http session and
endpoints path using the venue routing/mode key. This lets us simplify
from 3 methods down to a single `Client._api()` which either can be
passed the `venue: str` explicitly by the caller (as is needed in the
`._cache_pairs()` case) or falls back to the client's current
`.mkt_mode: str` setting B)
Deatz:
- add couple more tables to suffice all authed-endpoint use cases:
- `.venue2configkey: dict[str, str]` which maps the venue key to the
`brokers.toml` subsection which should be used for auth creds and
testnet config.
- `.confkey2venuekeys: dict[str, list[str]]` which maps each config
subsection key to the list of venue name keys for doing config to
venues lookup.
- always build out testnet sessions for spot and futes venues (though if
not set the sessions obviously won't ever be used).
- add and use new `config.ConfigurationError` custom exceptions when api
creds are missing.
- rename `action: str` to `method: str` in `._api()` since it's the
proper ReST term and switch what was "method" to be `endpoint: str`.
- mask out `.get_positions()` since we can get that from a user stream
wss request (and are doing that).
- (in theory) import and use spot testnet url as necessary.
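Shape of the dispatch (stand-in sessions/paths, not the actual client code):

```python
class Client:
    def __init__(self, sessions: dict[str, object], mkt_mode: str = 'spot'):
        self.venue_sesh = sessions        # one http session per venue
        self.mkt_mode = mkt_mode
        self.venue_paths = {              # venue -> endpoint parent path
            'spot': '/api/v3/',
            'usdtm_futes': '/fapi/v1/',
        }

    async def _api(
        self,
        endpoint: str,
        params: dict,
        method: str = 'get',        # proper ReST verb term
        venue: str | None = None,   # explicit override, eg. `._cache_pairs()`
    ):
        venue = venue or self.mkt_mode
        sesh = self.venue_sesh[venue]
        # http-verb dispatch by attr lookup on the session
        return await getattr(sesh, method)(
            path=self.venue_paths[venue] + endpoint,
            params=params,
        )
```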
Do parsing of the `'symbol'` and check for an `_<expiry>` suffix, in
which case we re-format in capitalized FQME style, do the
`Client._pairs[str, Pair]` lookup and then send the `Pair.bs_fqme` in
the `Order.fqme: str` field.
It's finally a decent little design / interface and definitely can be
used in other backends like `kraken` which rolled something lower level
but more or less the same without a wrapper class.
As you'd expect query and sync the EMS with existing live orders
reported by the market venue by packing them in `Status` msgs and
sending over the order dialog stream before starting the handler tasks.
XXX CAVEAT:
- there appears to be no way (at least on the usdtm market/venue) to
distinguish between different contracts such as perps vs. the
quarterlies?
- for now we just assume that the perp is being used since
there's no indicator otherwise in the 'symbol' field?
- we should maybe open an issue with the futures-connector project to
see how they'd recommend solving this discrepancy?
Only a couple tweaks to make this work according to the docs:
https://binance-docs.github.io/apidocs/futures/en/#modify-order-trade
- use a PUT request.
- provide the original user id in a `'origClientOrderId'` msg field.
- don't expect the same oid in the PUT response.
Other broker-mode related details:
- don't call `OrderDialogs.add_msg()` until after the existing check
since we want to check against the *last* msgs contents not the new
request.
- ensure we pass the `modify=True` flag in the edit case.
There was one trick which was that it seems that binance will often send
the account/position update event over the user stream *before* the
actual clearing (aka FILLED) order update event, so make sure we put an
entry in the `dialogs: OrderDialogs` as soon as an order request comes
in such that even if the account update arrives first the
`BrokerdPosition` msg can be relayed without delay / order event order
considerations.
Was using the wrong key before from our old code (not sure how that
slipped back in.. prolly doing too many git stashes XD), so fix that to
properly match against order update events with 'ORDER_TRADE_UPDATE'.
Also, don't match on the types we want to *cast to*, that's not how
match syntax works (facepalm), so we have to typecast prior to EMS msg
creation / downstream logic.
Further,
- try not bothering with binance's own internal `'orderId'` field
tracking since they seem to support just using your own user version
for all ctl endpoints? (thus we only need to track the EMS `.oid`s B)
- log all event update msgs for now.
- pop order dialogs on 'closed' statuses.
- wrap cancel requests in an error handler block since it seems the EMS
is double sending requests from the client?
For crypto derivatives (at least futes), yes they are margined, but
generally not around a single unit of vlm (like equities or commodities
futes) so don't pre-set the order mode allocator to use a #unit limit,
$limit is fine.
Since the request handler task will work concurrently across venues
(spot, futes, margin) we need to be sure that we look up the correct
venue to update the order dialog and this is naturally determined by the
FQME-style symbol in the `BrokerdOrder` msg; the best way to map that
symbol-key to the correct venue/`Pair` is by using said `._pairs:
ChainMap`.
Further, handle limit order errors by catching and relaying back an
error response to the EMS. Fix the "account name" to be `binance.usdtm`
so that we can eventually and explicitly support all venues by name.
Untested fully but has ostensibly working position and balance loading
(by delegating entirely to binance's internals for that) and an MVP ems
order request handler; still need to fill out the order status update
task implementation..
Notes:
- uses user data stream for all per account balance and position tracking.
- no support yet for `piker.accounting` position tracking.
- no support yet for full order / position real-time update via user
stream.
Since we need them for accounting and since we can get them directly
from the usdtm futes `exchangeInfo` ep, just preload all asset info that
we can during initial `Pair` caching. Cache the asset infos inside a new per venue
`Client._venues2assets: dict[str, dict[str, Asset | None]]` and mostly
be pedantic with the spot asset list for now since futes seems much
smaller and doesn't include transaction precision info.
Further:
- load a testnet http session if `binance.use_testnet.futes = true`.
- add testnet support for all non-data endpoints.
- hardcode user stream methods to work for usdtm futes for the moment.
- add logging around order request calls.
As just added for binance move to using an explicit `.<venue>.kraken`
style for spot markets which makes the current spot symbology expand to
`<PAIR>.SPOT` from the new `Pair.bs_fqme: str`. Reasons for why are
laid out in the equivalent patch for binance. Obviously this also primes
for supporting kraken's futures venue APIs as well 🏄 https://docs.futures.kraken.com/#introduction
Detalles:
- add `.spot.kraken` parsing to `get_mkt_info()` so that if the venue
token is not passed by caller we implicitly expand it in.
- change `normalize()` to only return the `quote: dict` not the topic
key.
- rewrite live feed msg loop to use `match:` syntax B)
Since there are indeed multiple futures (perp swaps) contracts including
a set with expiry, we need a way to distinguish through search and
`FutesPair` lookup which contract we're requesting. To solve this extend
the `FutesPair` and `SpotPair` to include a `.bs_fqme` field similar to
`MktPair` and key the `Client._pairs: ChainMap`'s backing tables with
these expanded fqmes. For example the perp swap now expands to
`btcusdt.usdtm.perp` which fills in the venue as `'usdtm'` (the
usd-margined futures market) and the expiry as `'perp'` (as before).
This allows distinguishing explicitly from, for ex., coin-margined
contracts which would instead (since we haven't added the support yet)
have fqmes of the sort `btcusdt.<coin>m.perp.binance` thus making it explicit
and obvious which contract is which B)
Further we interpolate the venue token to `spot` for spot markets going
forward, which again makes cex spot markets explicit in symbology; we'll
need to add this as well to other cex backends ;)
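To make the expansion concrete (illustrative structs only, not the backend's
actual `Pair` types):

```python
import msgspec

class SpotPair(msgspec.Struct):
    symbol: str                # eg. 'BTCUSDT'

    @property
    def bs_fqme(self) -> str:
        # spot mkts get an explicit venue token
        return f'{self.symbol}.SPOT'

class FutesPair(msgspec.Struct):
    symbol: str                # eg. 'BTCUSDT' or 'BTCUSDT_230630'
    contractType: str          # eg. 'PERPETUAL', 'CURRENT_QUARTER'

    @property
    def bs_fqme(self) -> str:
        symbol, _, expiry = self.symbol.partition('_')
        if self.contractType == 'PERPETUAL':
            expiry = 'PERP'
        # usd-margined futes venue token + expiry suffix
        return f'{symbol}.USDTM.{expiry}'

assert FutesPair(symbol='BTCUSDT', contractType='PERPETUAL').bs_fqme == 'BTCUSDT.USDTM.PERP'
assert SpotPair(symbol='BTCUSDT').bs_fqme == 'BTCUSDT.SPOT'
```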
Other misc detalles:
- change USD-M futes `MarketType` key to `'usdtm_futes'`.
- add `Pair.bs_fqme: str` for all pair subtypes with particular
special contract handling for futes including quarterlies, perps and
the weird "DEFI" ones..
- drop `OHLC.bar_wap` since it's no longer in the default time-series
schema and we weren't filling it in here anyway..
- `Client._pairs: ChainMap` is now a read-only fqme-re-keyed view into
the underlying pairs tables (which themselves are ideally keyed
identically cross-venue) which we populate inside `Client.exch_info()`
which itself now does concurrent pairs info fetching via a new
`._cache_pairs()` using a `trio` task per API-venue.
- support klines history query across all venues using same
`Client.mkt_mode_req[Client.mkt_mode]` style as we're doing for
`.exch_info()` B)
- use the venue specific klines history query limits where documented.
- handle new FQME venue / expiry fields inside `get_mkt_info()` ep such
that again the correct `Client.mkt_mode` is selected based on parsing
the desired spot vs. derivative contract.
- do venue-specific-WSS-addr lookup based on output from
`get_mkt_info()`; use usdtm venue WSS addr if a `FutesPair` is loaded.
- set `topic: str` to the `.bs_fqme` value in live feed quotes!
- use `Pair.bs_fqme: str` values for fuzzy-search input set.
The beginning of supporting multi-markets through a common API client.
Change to futes market mode in the client if `.perp.` is matched in the
fqme. Currently the exchange info and live feed ws impl will swap out
for their usd-margin futures market equivalent (endpoints).
Not sure why it seemed like futures pairs didn't have this field, but
add it to the parent `Pair` type as well as drop the overridden
`.price/size_tick` fields, instead doing the same as in spot.
Also moves the `MarketType: Literal` (for the `Client.mkt_mode: str`)
and adds a pair type lookup table for exchange info loading.
Add the usd-futes "Pair" type and thus ability to load all exchange
(info for) contracts settled in USDT. Luckily we don't seem to have to
modify anything in the `Client` interface (yet) other than a new
`.mkt_mode: str` which determines which endpoint set to make requests.
Obviously data received from endpoints will likely need diff handling as
per below.
Deats:
- add a bunch more API and WSS top level domains to `.api` with comments
- start a `.binance.schemas` module to house the structs for loading
different `Pair` subtypes depending on target market: `SpotPair`,
`FutesPair`, .. etc. and implement required `MktPair` fields on the
new futes type for compatibility with the clearing layer.
- add `Client.mkt_mode: str` and a method lookup for endpoint parent
paths depending on market via `.mkt_req: dict`
Also related to live feeds,
- drop `Struct` typecasting instead opting for specific fields both for
speed and simplicity atm.
- break out `subscribe()` into a module-level acm from being an
embedded closure.
- for now swap over the ws feed to be strictly the futes ep (while
testing) and set the `.mkt_mode = 'usd_futes'`.
- hack in `Client._pairs` to only load `FutesPair`s until we figure out
whether we want separate `Client` instances per market or not..
Instead of having a buncha logic branches for 'get', 'post', etc. just
pass the `method: str` and do an attr lookup on the `asks` sesh.
Also, adjust the `trades_dialogue()` ep to switch to paper mode when no
client API key is detected/loaded.
First draft originally by @guilledk but updated by myself 2 years later
xD. Will crash at runtime but at least has the machinery to setup signed
requests for auth-ed endpoints B)
Also adds a generic `NoSignature` error for when credentials are not
present in `brokers.toml` but user is trying to access auth-ed eps with
the client.
Since we only ever want to do incremental y-range calcs based on the
price always skip any tick types emitted by the data daemon which aren't
defined in the fundamental set. Further, toss in a new `debug_n_trade:
bool` toggle which by default turns off all logging and profiler calls;
if you want to do profiling this has to now be adjusted manually!
This was actually incorrect prior, we were rounding triggered limit
orders with the `.size_tick` value's digits when we should have been
using the `.price_tick` (facepalm). So fix that and compute the rounding
number of digits (as passed to the `round(<value>, ndigits=<here>)`
builtin) and store it in the `DarkBook.triggers` tuples so that at
trigger/match time the round call is done *just prior* to msg send to
`brokerd` given the last known live L1 queue price.
Not sure how this lasted so long without complaint (literally since we
added history 1m OHLC it seems; guess it means most backends are pretty
tolerant XD ) but we've been sending 2 cancels per order (dialog) due to
the mirrored lines on each chart: 1s and 1m. This fixes that by
reworking the `OrderMode` methods to be a bit more sane and less
conflated with the graphics (lines) layer.
Deatz:
- add new methods:
- `.oids_from_lines()` line -> oid extraction,
- `.cancel_orders()` which makes the order client cancel requests from
a `oids: list[str]`.
- re-impl `.cancel_all_orders()` and `.cancel_orders_under_cursor()` to
use the above methods thus fixing the original bug B)
Since we want to be able to support user-configurable vnc socketaddrs,
this preps for passing the piker client direct into the vnc hacker
routine so that we can (eventually) load and read the ib brokers config
settings into the client and then read those in the `asyncvnc` task
spawner.
It's been getting setup in the `brokerd` daemon-actor spawn task for
a while now and worker tasks already get a ref to that global log
instance so they don't need to care (in data or trading) task spawn
endpoints.
Also move to the new `open_trade_dialog()` naming for working broker
backends B)
Discovered due to originally having a history loading bug between
btcusdt futes display where the same time series was being loaded into
the graphics system, this avoids the issue where 2 (or more) curves are
measured to have the same dispersion and thus do not get added as unique
entries to the `overlay_table: dict[float, tuple]` during the scaling
phase..
Practically speaking this should never really be a problem if the curves
(and their backing timeseries) are indeed unique but keying the
overlay table by the dispersion and the `Viz` is a minimal performance
hit when looping the sorted table and, when you **do want to show**
duplicate curves, is a lot nicer than having one overlay just not be
ranged correctly at all XD
Instead of effectively (and poorly) duplicating the trade dialog setup
logic, just use the new helper we exposed in the EMS module B)
Also, handle paper accounts that have no ledger / positions existing.
As part of bringing the brokerd agnostic APIs up to date and modernizing
wrapping CLIs, this adds a new sub-cmd to allow more or less directly
calling the `.get_mkt_info()` broker mod endpoint and dumping both
the backend specific `Pair`-ish and `.accounting.MktPair` normalized
version to console.
Deatz:
- make the click config's `brokermods` entry a `dict`
- make `.brokers.core.mkt_info()` strip the broker name part from the
input fqme before calling the backend.
Connecting to a `brokerd` daemon's trading dialog via a helper `@acm`
func is handy so that arbitrary trading middleware clients **and** the
ems can setup a trading dialog and, at the least, query existing
position state; this is in fact our immediate need when simply querying
for an account's position status in the `.accounting.cli.ledger` cli.
It's now exposed (for now) as `.clearing._ems.open_brokerd_dialog()` and
is called by the `Router.maybe_open_brokerd_dialog()` for every new
relay allocation or paper-account engine instance.
Changed from the old `store clone` to instead simply load any shm buffer
matching a user provided `FQME: str` pattern; writing to parquet file is
only done if an explicit option flag is passed by user.
Implement new `iter_dfs_from_shms()` generator which allows iteratively
loading both 1m and 1s buffers delivering the `Path`, `ShmArray` and
`polars.DataFrame` instances per matching file B)
Also add a todo for a `NativeStorageClient.clear_range()` method.
Also adjust sizing such that the history buffer will backfill the last
six years by default (in 1m OHLC) and the hft buffer will do only 3 days
worth. Also ensure the fsp layer passes the src shm's buffer size when
allocating since the size is now required by allocators in the shm apis.
Avoid unnecessarily re-rendering the wrong (1min OHLC history) chart
and/or other such charts with update tasks listening to the sampler
stream. Instead only redraw in tasks which are updating vizs which match
the actual details of the backfill event.
We can probably also eventually match against a range tuple (emitted in
the msg) and then have the task further only update the formatter layer
unless the range is actually in view?
It's no longer part of the default OHLCV array-buffer schema and just
generally we should be processing and managing **any** non source data
in the FSP subsystem(s) despite it maybe being provided as a default by
some backends.
Explains why stuff always seemed wrong before XD
Previously whenever a time-gappy asset (like a stock due to its venue
operating hours) was being loaded, we weren't querying for a "durations
worth" of bars and this was causing all sorts of actual gaps in our
data set that shouldn't exist..
Fix that by always attempting to retrieve a min aggregate-time's
worth/duration of bars/datums in the history manager. Actually,
i implemented this in both the feed and api layers for this backend
since it doesn't seem to strictly work just implementing it at the
`Client.bars()` level, not sure why but..
Also, buncha `ruff` linting cleanups and fix the logger nameeee, lel.
For now, just detect and fill in gaps (via fresh backend queries)
*in the shm buffer* but eventually i'm pretty sure we can just write
these direct to the parquet file as well.
Use the new `.data._timeseries.detect_null_time_gap()` to find and fill
in the `ShmArray` index range, re-check it and enter a prompt if it
didn't totally fill.
Also,
- do a massive cleanup and removal of all unused/commented code.
- drop the duplicate frames tracking, don't think we need it after
removing multi-frame concurrent queries.
- change backfill loop variable `end_dt` -> `last_start_dt` which is
more semantically correct.
- fix logic to backfill any missing sub-sequence portion for any frame
query that overruns the shm buffer prependable space by detecting
the available rows left to insert and only push those.
- add a new `shm_push_in_between()` helper to match.
It took a little while (and a lot of commenting out of old no longer
needed code) but, this gets tsdb (from parquet file) loading *before*
final backfilling from the most recent history frame until the most
recent tsdb time stamp!
More or less all the convoluted concurrency shit we had for coping with
`marketstore` IPC junk is no longer needed, particularly all the query
size limits and accompanying load loops.. The recent frame loading
technique/order *has* now changed though since we'd like to show charts
asap once tsdb history loads.
The new load sequence is as follows:
- load mr (most recent) frame from backend.
- load existing history (one shot) from the "tsdb" aka parquet files
with `polars`.
- backfill the gap part from the mr frame back to the tsdb start
incrementally by making (hacky) `ShmArray.push(start=<blah>)` calls
and *not* updating the `._first.value` while doing it XD
Dirtier deatz:
- make `tsdb_backfill()` run per timeframe in a separate task.
- drop all the loop through timeframes and insert `dts_per_tf` crap.
- only spawn a subtask for the `start_backfill()` call which in turn
only does the gap backfilling as mentioned above.
- mask out all the code related to being limited to certain query sizes
(over gRPC) as was restricted by marketstore.. not gonna go through
what all of that was since it's probably getting deleted in a follow
up commit.
- buncha off-by-one tweaks to do with backfilling the gap from mr frame
to tsdb start.. mostly tinkered it to get it all right but seems to be
working correctly B)
- still use the `broadcast_all()` msg stuff when doing the gap backfill
though don't have it really working yet on the UI side (since
previously we were relying on the shm first/last values.. so this will
be "coming soon" :)
For OHLCV time series we normally presume a uniform sampling period
(1s or 60s by default) and it's handy to have tools to ensure a series
is gapless or contains expected gaps based on (legacy) market hours.
For this we leverage `polars`:
- add `.nativedb.with_dts()` a datetime-from-epoch-time-column frame
"column-expander" which inserts datetime-casted, epoch-diff and
dt-diff columns.
- add `.nativedb.detect_time_gaps()` which filters to any rows spaced
by more than the expected sampling period.
- wrap the above (for now) in a `piker store anal` (analysis) cmd which
atm always enters a breakpoint for tinkering.
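A hedged approximation of those two helpers (column names here are
assumptions):

```python
import polars as pl

def with_dts(df: pl.DataFrame, col: str = 'time') -> pl.DataFrame:
    # expand the epoch-seconds column into datetime + diff columns
    return df.with_columns([
        pl.from_epoch(pl.col(col), time_unit='s').alias('dt'),
        pl.col(col).diff().alias('s_diff'),
    ])

def detect_time_gaps(
    df: pl.DataFrame,
    expect_period: float = 60.0,
) -> pl.DataFrame:
    # any row stepped more than the expected sampling period from its
    # predecessor is a (potential) gap
    return with_dts(df).filter(pl.col('s_diff') > expect_period)

ohlcv = pl.DataFrame({'time': [0, 60, 120, 300], 'close': [1.0, 1.1, 1.2, 1.3]})
assert detect_time_gaps(ohlcv).height == 1   # the 120 -> 300 jump
```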
Supporting storage client additions:
- add a `detect_period()` helper for extracting expected OHLC time step.
- add new `NativedbStorageClient` methods and attrs to provide for the above:
- `.mk_path()` to **only** deliver a parquet-file path for use in
other methods.
- `._dfs` to house cached `pl.DataFrame`s loaded from `.parquet` files.
- `.as_df()` which loads cached frames or loads them from disk and
then caches (for next use).
- `_write_ohlcv()` a private-sync version of the public equivalent
meth since we don't currently have any actual async file IO
underneath; add a flag for whether to return as a `numpy.ndarray`.
- drop buncha cruft from `store ls` cmd and make it work for
multi-backend fqme listing.
- including adding an `.address` to the mkts client which shows the
grpc socketaddr details.
- change defaults to the new `'nativedb'`.
- drop `'marketstore'` from the built-in backend list (for now).
It was a concurrency-hack mess somewhat due to all sorts of limitations
imposed by marketstore (query size limits, strange datetime/timestamp
errors, slow table loads for large queries..) and we can drastically
simplify. There's still some issues with getting new backfills (not yet
in storage) correctly prepended: there's sometimes little gaps due to shm
races when reading history indexing vs. when the live-feed startup
finishes.
We generally need tests for all this and likely a better rework of the
feed layer's init such that we're showing history in chart afap instead
of waiting on backfills or the live feed to come up.
Much more to come B)
Turns out no backend (including kraken) requires it and really this
kind of measure should be implemented and recorded from our fsp layer
instead of (hackily) sometimes expecting it to be in "source data".
After much frustration with a particular tsdb (cough) this instead
implements a new native-file (and apache tech based) backend which
stores time series in parquet files (for now) using the `polars` apis
(since we plan to use that lib as well for processing).
Note this code is currently **very** rough and in draft mode.
Details:
- add conversion routines for going from `polars.DataFrame` to
`numpy.ndarray` and back (sketched after this list).
- lay out a simple file-name as series key symbology:
`fqme.<datadescriptions>.parquet`, though probably it will evolve.
- implement the entire `StorageClient` interface as it stands.
- adjust `storage.cli` cmds to instead expect to use this new backend,
which means it's a complete mess XD
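The conversion + write bits look roughly like this (assuming a structured
OHLCV array a la `ShmArray.array`; the real routines differ):

```python
import numpy as np
import polars as pl

ohlcv_dtype = [
    ('time', 'i8'), ('open', 'f8'), ('high', 'f8'),
    ('low', 'f8'), ('close', 'f8'), ('volume', 'f8'),
]
arr = np.zeros(3, dtype=ohlcv_dtype)

def np2pl(arr: np.ndarray) -> pl.DataFrame:
    # structured ndarray -> column dict -> DataFrame
    return pl.DataFrame({name: arr[name] for name in arr.dtype.names})

def pl2np(df: pl.DataFrame, dtype: list[tuple]) -> np.ndarray:
    # re-fill a fresh structured array column by column
    out = np.zeros(df.height, dtype=dtype)
    for name, _ in dtype:
        out[name] = df[name].to_numpy()
    return out

df = np2pl(arr)
df.write_parquet('/tmp/example.ohlcv1m.parquet')   # fast even at >= 1M rows
rt = pl2np(pl.read_parquet('/tmp/example.ohlcv1m.parquet'), ohlcv_dtype)
assert (rt['close'] == arr['close']).all()
```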
Main benefits/motivation:
- wayy faster load times with no "datums to load limit" required.
- smaller space footprint and we haven't even touched compression
settings yet!
- wayyy more compatible with other systems which can lever the apache
ecosystem.
- gives us finer grained control over the filesystem usage so we can
choose to swap out stuff like the replication system or networking
access.
Turns out just (over)writing `.parquet` files with >= 1M datums is like
less than a second, and we can likely speed up appends using
`fastparquet` (usage coming soon).
Includes:
- a new `clone` CLI subcmd to test this all out by ad-hoc copy of
(literally hardcoded to a daemon-actor specific shm allocation X) an
existing `/dev/shm/<ShmArray>` and push to `.parquet` file.
- code to convert from our `ShmArray.array: np.ndarray` ->
`polars.DataFrame` (thanks SO).
- timing checks around the file IO and np -> polars conversion.
- a `read` subcmd which i was using to test the sync `pymarketstore`
client against our async one to see if the issues from
https://github.com/pikers/piker/issues/443 were resolved, but nope!
Turns out trying to switch to the old sync client and going back to
using the old json-RPC API (after having had to patch the upstream repo
to not import gRPC machinery to avoid crashes..) I'm basically getting
the exact same issues.
New tinkering results does possibly tell some new stuff:
- the EOF error seems to indeed be due to trying to fetch records which haven't been
written (properly) - like asking for an `end=<epoch_int>` that is
earlier than the earliest record.
- the "snappy input corrupt" error seems to have something to do with
the `Params.end` field not being an `int` and/or the int precision not
being chosen correctly?
- toying with this a bunch manually shows that the internals of the
client (particularly `.build_query()` stuff) is parsing/calcing the
`Epoch` and `Nanoseconds` values out incorrectly.. which is likely
part of the problem.
- we also changed `anyio_marketstore.MarketStoreclient.build_query()`
logic when removing `pandas` a while back, which also seems to be
part of the problem on the async side, however reverting those
changes also didn't fix the issue entirely; likely something else
more subtle going on (maybe with the write vs. read `Epoch` field
type we pass?).
Despite all this malarky, we're already underway more or less obsoleting
this whole thing with a much less complex approach of using apache
parquet files and modern filesystem tools to get a more flexible and
numerics-native dataframe-oriented tsdb B)
Turns out the reason we were originally making the `time: float` column in our
ohlcv arrays was bc that's what **only** ib uses XD (and/or 🤦)
Instead we changed the default field type to be an `int` (which is also
more correct to avoid `float` rounding/precision discrepancies) and thus
**do not need to override it** in all other (crypto) backends (except
`ib`). Now we only do the customization (via `._ohlc_dtype`) to `float`
only for `ib` for now (though pretty sure we can also not do that
eventually as well..)!
Use `def_iohlcv_fields` for a name and instead of copying and inserting
the index field pop it for the non-index version. Drop creating
`np.dtype()` instances since `numpy`'s apis accept both input forms so
this is simpler on our end.
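Ie. something like (sketch; the non-index name is assumed):

```python
import numpy as np

def_iohlcv_fields: list[tuple[str, str]] = [
    ('index', 'i8'),
    ('time', 'i8'),   # int epoch by default now; only ib overrides to float
    ('open', 'f8'),
    ('high', 'f8'),
    ('low', 'f8'),
    ('close', 'f8'),
    ('volume', 'f8'),
]

# the non-index version just pops the leading `index` field
def_ohlcv_fields: list[tuple[str, str]] = def_iohlcv_fields.copy()
def_ohlcv_fields.pop(0)

# numpy accepts the field-list form directly, no `np.dtype()` needed:
arr = np.zeros(8, dtype=def_ohlcv_fields)
```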
Turns out you can mix and match `click` with `typer` so this moves what
was the `.data.cli` stuff into `storage.cli` and uses the integration
api to make it all work B)
New subcmd: `piker store`
- add `piker store ls` which lists all fqme keyed time-series from backend.
- add `store delete` to remove any such key->time-series.
- now uses a nursery for multi-timeframe concurrency B)
Mask out all the old `marketstore` specific subcmds for now (streaming,
ingest, storesh, etc..) in anticipation of moving them into
a subpkg-module and make sure to import the sub-cmd module in our top
level cli package.
Other `.storage` api tweaks:
- drop the reraising with custom error (for now).
- rename `Storage` -> `StorageClient` (or should it be API?).
To kick off our (tsdb) storage backends this adds our first implementing
a new `Storage(Protocol)` client interface. Going forward, the top level
`.storage` pkg-module will now expose backend agnostic APIs and helpers
whilst specific backend implementations will adhere to that middle-ware
layer.
Deats:
- add `.storage.marketstore.Storage` as the first client implementation,
moving all needed (import) dependencies out from
`.service.marketstore` as well as `.ohlc_key_map` and `get_client()`.
- move root `conf.toml` loading from `.data.history` into
`.storage.__init__.open_storage_client()` which now takes in a `name:
str` and does all the work of loading the correct backend module, its
config, and determining if a service-instance can be contacted and
a client loaded; in the case where this fails we raise a new
`StorageConnectionError`.
- add a new `.storage.get_storagemod()` just like we have for brokers.
- make `open_storage_client()` also return the backend module such that
the history-data layer can make backend specific calls as needed (eg.
ohlc_key_map).
- fall back to a basic non-tsdb backfill when `open_storage_client()`
raises the new connection error.
The plan is to offer multiple tsdb and other storage backends (for
a variety of use cases) and expose them similarly to how we do for
broker and data providers B)
Was borked on linux if you didn't provide the setting in `conf.toml` due
to some logic errors. Fix that by rejigging `DpiAwareFont` internal
variables:
- add new `._font_size_calc_key: str` which was the old `._font_size`
and is only used when no explicit font size is set by the user in the
`conf.toml` config:
- this is the "key" that is used to lookup a calculation function
which attempts to compute a best fit font size given the measured
system displays DPI settings and dimensions.
- make the `._font_size: int` the **actual** font size integer that is
cached and passed to `Qt` to set the size.
- this is overridden by user config now if defined.
- change the input kwarg `font_size: str` to the constructor to the
better named private `_font_size_key: str` which gets set to the new
`._font_size_calc_key`.
Also, adjust all client code which instantiates `DpiAwareFont` to use
the new `_font_size_key` kwarg input so nothing breaks XD
Was only really borked for higher-precision but lower priced assets
(like TLOS or peeneez) which have a `MktPair.price_tick_digits >= 2`.
The issue was using the wrong attr, the `size_tick_digits`..
Including changing to `LinkedSplits.mkt: MktPair` and adding an explicit
setter method for setting it and being sure that nothing breaks
in the display system init!
For this commit we leave in warning access to `LinkedSplits.symbol` but
will remove in following commit.
Only stuff left was the allocator stuff. Drop the top level subpkg
exports and finally kill off the awkwardly named
`Symbol.lot_size_digits` properties XD
Expose a bunch more util funcs at subpkg top level, do some typing in
allocator method internals.
Contract matching in live setup was borked; switch to
`MktPair.dst.atype` matching, don't override the `cmdty` "venue" (a
weird special case) in `get_mkt_info()` otherwise lookup will fail..
Stash it for now in the (now mutable by default) `.shm_write_opts` and
have the new `Flume._has_vlm: bool` (only set to false internally by
feed layer) which can be read via new public `.has_vlm()` predicate.
Move out the old `.ui/_fsp` helper logic to this flume method.
Using the new `._ahab.start_ahab_service()` mngr of course, and now
support user config overrides (such that our defaults can be modified by
a keen user, say using a config file, or for testing). This is where the
functionality moved out of the `pikerd` init has been moved - instead of
being triggered by bool flag inputs to that factory.
For marketstore actually support overriding the entire yaml config via
runtime `_yaml_config_str: str` formatting with any passed user dict,
primarily focussing on supporting override of the sockaddrs for testing.
Allows for using the `Services.cancel_service()` api for explicit
cancellation in tests and eventually for remote teardown. Change
`.start_ahab()` to an `@acm` `start_ahab_service()` and just yield back
the same values we were returning prior. Also fix the logging (level) to
actually reflect what's passed in - we weren't using the correct name
/ instance from the `.service` subpkg..
`Flume.mkt.fqme` might not be exactly the same as the local
version now since we've had to add some hacks to certain backends
(cough ib) to handle `MktPair.src` not being set as an `Asset` (yet).
Since `open_trade_ledger()` now requires a sort we pass in a combo of
the std `pendulum.parse()` for API records and a custom flex parser for
flex entries pulled offline.
Add special handling for `MktPair.src` such that when it's a fiat (like
it should always be for most legacy assets) we try to get the fqme
without that `.src` token (i.e. not mnqusd) to avoid breaking
roundtripping of live feed requests (due to new symbology) as well as
the current tsdb table key set..
Do a wholesale renaming of fqsn -> fqme in most of the rest of the
backend modules.
Turns out that reading **and** writing with `tomlkit` is just wayya slow
for large documents like ledger files so move to using the `tomli`
sibling pkg `tomli-w` which seems to much improve on the latency, though
obviously longer run we're likely going to want:
- a better algorithm for only back loading records using as little
history as possible
- a different serialization format for production maybe something
like apache parquet?
The only issue with using a non-style-preserving writer is that we don't
necessarily get TOML conf ordering for free (without first ordering it
ourselves), and thus this patch also adds much more general date-time
sorting machinery which is now **required** when using
`open_trade_ledger()` via a `tx_sort: Callable`. By default we now
provide `.accounting._ledger.iter_by_dt()` (exposed in the subpkg mod)
which conducts dynamic "datetime key detection" based parsing of records
based on a `parsers: dict[str, Callable]` input table. The default should
handle most use cases including all currently supported live backends
(kraken, ib) as well as our paper engine ledger-records format.
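A simplified rendition of that dynamic key-detection sort (example parser
table only, the real default covers more record shapes):

```python
from datetime import datetime
from typing import Any, Callable
import pendulum

def iter_by_dt(
    records: dict[str, Any] | list,
    parsers: dict[str, Callable | None] = {
        'dt': None,                       # already a datetime obj
        'datetime': pendulum.parse,       # ISO-str API records
        'time': datetime.fromtimestamp,   # epoch-float records
    },
):
    def dyn_parse_to_dt(txdict: dict) -> datetime:
        # first matching key in the parser table wins
        for k, parse in parsers.items():
            if k in txdict:
                return parse(txdict[k]) if parse else txdict[k]
        raise ValueError(f'no datetime key found in {txdict}!')

    recs = records.values() if isinstance(records, dict) else records
    yield from sorted(recs, key=dyn_parse_to_dt)
```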
Granulars:
- adjust `Position.iter_clears()` to use new `iter_by_dt(key=lambda ..)`
signature.
- add `tomli-w` to setup and our `tomlkit` fork to requirements file.
- move `.write_config()` to bottom of class defn.
- fix closed pos popping to not error if pp was already popped..
Since porting all backends to the new `FeedInit` + `MktPair` + `Asset`
style init, we can now just directly pass a `MktPair` instance to the
history endpoint(s) since it's always called *after* the live feed
`.stream_quotes()` ep B)
This has a lot of benefits including allowing brokerd backends to have
more flexible, pre-processed market endpoint meta-data that piker has
already validated; this makes handling special cases much more
straightforward as well, such as forex pairs from legacy brokers XD
First pass changes all crypto backends to expect this new input, ib will
come next after handling said special cases..
Since most (legacy) stock brokers design their symbology without
including the target exchange's source asset name - normally a fiat
currency like USD - this adds an option for rendering market endpoints
without that token for simpler use in backends for such brokers.
As an example IB doesn't expect a `mnq/usd.cme.ib` symbol and instead
presumes that since the CME lists all assets in USD then the source
asset is implied.
Impl details:
- add `MktPair.pair: str` which replaces `.key` as a better name.
- offer a `without_src: bool` to a new `.get_fqme()` getter method
which will render everything the same minus the src token.
- expose the new flag through both the new `.get_fqme()` and
`.get_bs_fqme()` methods and wrap those both under the original
property names `.bs_fqme` and `.fqme`.
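Pared down to the essentials it looks something like (not the real
`MktPair`, fields trimmed):

```python
import msgspec

class MktPair(msgspec.Struct):
    dst: str              # eg. 'mnq'
    src: str              # eg. 'usd'
    venue: str = ''
    expiry: str = ''
    broker: str = ''

    @property
    def pair(self) -> str:
        return f'{self.dst}/{self.src}'

    def get_fqme(self, without_src: bool = False) -> str:
        key = self.dst if without_src else self.pair
        tokens = (key, self.venue, self.expiry, self.broker)
        return '.'.join(t for t in tokens if t).lower()

mnq = MktPair(dst='mnq', src='usd', venue='cme', expiry='20230616', broker='ib')
assert mnq.get_fqme(without_src=True) == 'mnq.cme.20230616.ib'
assert mnq.get_fqme() == 'mnq/usd.cme.20230616.ib'
```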
Previously the subscription response handling was a bit sloppy what with
ignoring the welcome msg; this now correctly expects the correct startup
sequence. Also this avoids warn logging on pong messages by expecting
them in the msg loop and further drops the `KucoinMsg` struct and
instead changes the msg loop to expect `dict`s and only cast to structs
on live feed msgs that we actually process/relay.
Mostly renaming from the old acronym. This also contains necessary
conf.toml loading in order to call `open_storage_client()` which now
does not have default network contact info.
Previously we were passing the `fqme: str` which isn't as extensive nor
were we able to pass `MktPair` direct to backend history manager-loading
routines (which should be able to rely on always receiving it since
currently `stream_quotes()` is always called first for setup).
This also starts a slight bit of configuration oriented tsdb info
loading (via a new `conf.toml`) such that a user can decide to host
their (marketstore) db on a remote host and our container spawning and
client code will do the right startup automatically based on the config.
|-> Related to this I've added some comments about doing storage
backend module loading which should get actually written out as part of
patches coming in #486 (or something related).
Don't allow overruns again in history context since it seems it was
never a problem?
As per the new market info packing schema this patch almost gets it
completely compatible and useful via implementing the `get_mkt_info()`
backend module endpoint B)
There's still some questions around `MktPair.src` since all the contract
search machinery in the ib api isn't expecting a fiat currency in the
symbol key: for ex. `mnq/usd.cme.20230616.ib` has no handling for the
`[/]usd` part. For now i'm just excluding the `.src` since it requires
extra parsing on quotes-feed requests even though this is also currently
breaking forex pairs (idealpro or wtv). I think ideally we do move to
a `dst/src.<venue>.<etc..>` style but it's going to require adjustments
to all the existing crypto backends..
This also allows dropping the old `mk_init_msgs()` closure.
We need to allow overruns during the async multi-broker context spawning
init bc some backends might take longer then others to setup (eg.
binance vs. kucoin) and result in some context (stream) being overrun by
the time we get to the `.open_stream()` phase. Ideally, we can maybe
adjust the concurrent setup to be more of a task-per-provider style to
avoid this in the future - which would also in theory result in
more-immediate per-provider setup in terms showing ready feeds asap.
Also, does a bunch of renaming from fqsn -> fqme and drops the lower
casing of input symbols instead expecting the caller to know what the
data backend it's requesting is going to be able to handle in terms of
symbology.
Instead of pre-converting and mapping piker style fqmes to
`KucoinMktPair`s make `Client._pairs` keyed by the kucoin native market
ids and instead also create a `._fqmes2mktids: bidict[str, str]` for
doing lookups to the native pair from the fqme.
Also, adjust any remaining `fqsn` naming to fqme.
So that styling is preserved on write but requires that we pop `None`
values (in this case any unset `.expiry` transactions) due to `tomlkit`
having no support for writing them as values?
So that we aren't creating blank files for legacy configs (as we do name
changes or wtv). Further change `.get_conf_path()` to validate against
new `account.` prefix and a god `conf.toml` file.
Allows for direct loading of an "account file configuration" contents
without having to pass the explicit config dir path. In this case we are
also rewriting the `pps.<brokername>.<accnt_name>.toml` file names to
instead have a `account.` prefix, but providing this helper function
allows such changes more easily in the future - since callers won't have
to use the lower level `.load()` input signature.
Also add some todo comments about moving to `tomlkit`.
I figure we might as well support multiple types of distributed
multi-host setups; why not allow running the API (gateway) and thus vnc
server on a diff host and allowing clients to connect and do their thing
B)
Deatz:
- make `ib._util.data_reset_hack()` take in a `vnc_host` which gets
proxied through to the `asyncvnc` client.
- pull `ib_insync.client.Client` host value and pass-through to data
reset machinery, presuming the vnc server is running in the same
container (and/or the same host).
- if no vnc connection **and** no i3ipc trick can be used, just report
to the user that they need to remove the data throttle manually.
- fix `feed.get_bars()` to handle throttle cases the same based on error
msg matching, not the error code, and add a max `_failed_resets` count
to trigger bailing on the query loop.
No longer need to implement connection timeout logic in the streaming
code, instead we just `async for` that bby B)
Further refining:
- better `KucoinTrade` msg parsing and handling with object cases.
- make `subscribe()` do sub requests in a loop and wait for acks.
Previously you couldn't have a brokerd backend which defined
`.trades_dialogue()` but which could also indicate that the paper
clearing engine should be used. This adds that support by allowing the
endpoint task to return a simple `'paper'` string, in which case the ems
will boot a paperboi.
The obvious useful case for this is if you have a broker you want to use
but do not have actual broker credentials setup (yet) with that provider
in your `brokers.toml`; demonstrated here with the adjustment to
`kraken`'s startup to no longer raise a runtime error B)
To fit with the rest of the new requirements added in `.data.validate`
this adds `FeedInit` init including `MktPair` and `Asset` loading for
all spot currencies provided by `kucoin`.
Deatz:
- add a `Currency` struct and accompanying `Client.get_currencies()` for
storing all asset infos.
- implement `.get_mkt_info()` which loads all necessary accounting and
mkt meta-data structs including adding `.price/size_tick` fields to
the `KucoinMktPair`.
- on client boot, async spawn requests to cache both symbols and currencies.
- pass `subscribe()` as the `fixture` arg to `open_autorecon_ws()`
instead of opening it manually.
Other:
- tweak `Client._request` to not expect the prefixed `'/'` for the
`endpoint: str`.
- change the `api_v` arg to just be `api: str`.
Since it's a bit weird having service specific implementation details
inside the general service `._daemon` mod, and since i'd mentioned
trying this re-org; let's do it B)
Requires enabling the new mod in both `pikerd` and `brokerd` and
obviously a bit more runtime-loading of the service modules in the
`brokerd` service eps to avoid import cycles.
Also moved `_setup_persistent_brokerd()` into the new mod since the
naming would place it there even though the implementation really
wouldn't (longer run) since we want to split up `.data.feed` layer
backend-invoked eps into a separate actor eventually from the "actual"
`brokerd` which will be the actor running **only** the trade control eps
(eg. `trades_dialogue()` and friends).
`trio`'s internals don't allow for async generator (and thus by
consequence dynamic reset of async exit stacks containing `@acm`s)
interleaving since doing so corrupts the cancel-scope stack. See details
in:
- https://github.com/python-trio/trio/issues/638
- https://trio-util.readthedocs.io/en/latest/#trio_util.trio_async_generator
- `trio._core._run.MISNESTING_ADVICE`
We originally tried to address this using
`@trio_util.trio_async_generator` in backend streaming code but for
whatever reason stopped working recently (at least for me) and it's more
or less implemented the same way as this patch but with more layers and
an extra dep. I also don't want us to have to address this problem again
if/when that lib isn't able to keep up to date with wtv `trio` is
doing..
So instead this is a complete rewrite of the conc design of our
auto-reconnect ws API to move all reset logic and msg relay into a bg
task which is respawned on reset-requiring events: user spec-ed msg recv
latency, network errors, roaming events.
Deatz:
- drop all usage of `AsyncExitStack` and no longer require client code
to (hackily) call `NoBsWs._connect()` on msg latency conditions;
instead this is all done behind the scenes and the user can simply
pass in a `msg_recv_timeout: float`.
- massively simplify impl of `NoBsWs` and move all reset logic into a
new `_reconnect_forever()` task.
- offer use of `reset_after: int` a count value that determines how many
`msg_recv_timeout` events are allowed to occur before reconnecting the
entire ws from scratch again.
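Client-side usage now looks roughly like the below (hypothetical sketch:
only `open_autorecon_ws()`, the `fixture` arg and `msg_recv_timeout` come
from these notes, everything else including the import path is assumed):

```python
from contextlib import asynccontextmanager as acm

# module path assumed for illustration
from piker.data._web_bs import open_autorecon_ws

@acm
async def subscribe(ws):
    # re-run by the bg `_reconnect_forever()` task on every (re)connect
    # to (re)establish whatever subs the venue needs
    yield

async def stream_quotes():
    async with open_autorecon_ws(
        'wss://example.exchange/ws',
        fixture=subscribe,
        msg_recv_timeout=16,   # reset the conn if no msg within this many secs
    ) as ws:
        # no more manual `._connect()` calls; resets happen behind the scenes
        async for msg in ws:
            print(msg)
```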