Since it's a bit weird having service-specific implementation details
inside the general service `._daemon` mod, and since I'd mentioned
trying this re-org; let's do it B)
Requires enabling the new mod in both `pikerd` and `brokerd` and
obviously a bit more runtime-loading of the service modules in the
`brokerd` service eps to avoid import cycles.
Also moved `_setup_persistent_brokerd()` into the new mod since the
naming would place it there, even though the implementation really
won't live there longer-run: we eventually want to split up the
`.data.feed` layer's backend-invoked eps into a separate actor from
the "actual" `brokerd`, which will be the actor running **only** the
trade control eps (eg. `trades_dialogue()` and friends).
`trio`'s internals don't allow for interleaving of async generators
(and thus, by consequence, dynamic reset of async exit stacks
containing `@acm`s) since doing so corrupts the cancel-scope stack.
See details in:
- https://github.com/python-trio/trio/issues/638
- https://trio-util.readthedocs.io/en/latest/#trio_util.trio_async_generator
- `trio._core._run.MISNESTING_ADVICE`
We originally tried to address this using
`@trio_util.trio_async_generator` in backend streaming code, but for
whatever reason it stopped working recently (at least for me) and it's
more or less implemented the same way as this patch anyway, just with
more layers and an extra dep. I also don't want us to have to address
this problem again if/when that lib isn't able to keep up to date with
wtv `trio` is doing..
So instead this is a complete rewrite of the conc design of our
auto-reconnect ws API to move all reset logic and msg relay into a bg
task which is respawned on reset-requiring events: user spec-ed msg recv
latency, network errors, roaming events.
Deatz:
- drop all usage of `AsyncExitStack` and no longer require client code
to (hackily) call `NoBsWs._connect()` on msg latency conditions;
instead this is all done behind the scenes and the user can just
pass in a `msg_recv_timeout: float`.
- massively simplify impl of `NoBsWs` and move all reset logic into a
new `_reconnect_forever()` task.
- offer use of `reset_after: int`, a count value that determines how
many `msg_recv_timeout` events are allowed to occur before
reconnecting the entire ws from scratch again.
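For a rough feel of the new design, a minimal sketch; the `open_ws`
connector and all arg names besides `msg_recv_timeout`/`reset_after`
are assumptions for illustration, not the actual impl:

```python
import trio

async def _reconnect_forever(
    url: str,
    snd: trio.MemorySendChannel,
    msg_recv_timeout: float = 16,
    reset_after: int = 3,  # timeouts allowed before a full reconnect
) -> None:
    while True:
        # (re)establish the underlying ws connection from scratch
        async with open_ws(url) as ws:  # hypothetical connector
            timeouts: int = 0
            while timeouts < reset_after:
                with trio.move_on_after(msg_recv_timeout) as cs:
                    msg = await ws.get_message()
                if cs.cancelled_caught:
                    # user spec-ed recv latency was exceeded
                    timeouts += 1
                    continue
                timeouts = 0
                await snd.send(msg)  # relay to the consumer side
```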
Since we have made `MktPair.bs_mktid` mean something else now, change
all the feed setup var names to instead be more representative of the
actual value: `bs_fqme: str` and use the new `MktPair.bs_fqme` where
necessary.
The legacy version was a `dict` of `dict`s whereas now we want to be
handed a `list[FeedInit]`; process both in a factored way.
Drop `FeedInit.bs_mktid` since it's already defined on `.mkt.bs_mktid`
and we don't really need it top level.
More or less a replacement for what @guilledk did with the initial
attempt at a "broker check" type script a while back, except in this
case we're going to always run this validation routine and it now uses
a new `FeedInit` struct to ensure backends are delivering the right
schema-ed data during startup. Also allows us to stick deprecation
warnings and/or strict API compat errors all in one spot (at least for
live feeds).
Factors out a bunch of `MktPair` related adapter-logic into a new
`.validate.validate_backend()` which warns the backend implementer via
log msgs of all outstanding problems. Ideally we do our backend module
endpoint scan-and-complain regarding missing feature support from here
as well (eg. search, broker/trade ctl, ledger processing, etc.).
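For reference, the init-msg struct might read along these lines; this
is only a sketch where the `mkt_info` field is what's described above
and `shm_write_opts` is a hypothetical extra field:

```python
from typing import Any
from msgspec import Struct

from piker.accounting import MktPair  # module path may vary

class FeedInit(Struct, frozen=True):
    '''
    Startup schema a backend's feed ep must deliver per (relevant)
    mkt so the feed layer can validate it at init.

    '''
    mkt_info: MktPair
    shm_write_opts: dict[str, Any] | None = None  # hypothetical
```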
It needed some work..
- Make `unpack_fqme()` always return a 4-tuple handling the venue and
suffix parts more generally.
- add `Asset.guess_from_mkt_ep_key()`, a like-it-sounds hack at
trying to render a `.dst: Asset` for most purposes throughout the
stack.
- always try to preprocess the input `fqme: str` with `unpack_fqme()` in
`MktPair.from_fqme()` and use the new `Asset` method (above) to make
up a `.dst: Asset` pulling as much meta-info we can from the caller.
- add `MktPair.bs_fqme` to get the thing without the broker part..
- add an `'unknown'` value to the `_derivs` def.
- drop `Symbol.from_fqsn()` and `unpack_fqsn()` more generally (yes
BREAKING).
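A condensed sketch of the 4-tuple parsing; the token order in the
returned tuple is an assumption here:

```python
def unpack_fqme(fqme: str) -> tuple[str, str, str, str]:
    '''
    Parse an fqme of the general form
    <key>.<venue>.<suffix>.<broker> where the venue and
    (expiry) suffix parts are optional.

    '''
    match fqme.split('.'):
        case [key, broker]:
            return broker, key, '', ''
        case [key, venue, broker]:
            return broker, key, venue, ''
        case [key, venue, suffix, broker]:
            return broker, key, venue, suffix
        case _:
            raise ValueError(f'Invalid fqme: {fqme}')
```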
- only preload necessary (one for clearing, all for ledger sync)
`MktPair` info from the backend using `.get_mkt_info()`, build the
`mkt_by_fqme: dict[str, MktPair]` and pass it to
`TransactionLedger.iter_trans()`.
- use new `TransactionLedger.update_from_t()` method on clears.
- sanity check all `mkt_by_fqme` entries against `Flume.mkt` values
when we open a data feed.
- rename `PaperBoi._syms` -> `._mkts`.
Turns out we actually had further pp entry bugs due to *not quantizing*
the size inside `.minimize_clears()` method calcs; fix that using
`Position.sys.mkt.quantize()` as is done in `Position.calc_size()`.
Fix `PpTable.write_config()` to drop from the TOML config any
`closed: dict[str, Position]` entries delivered by `.dump_active()`.
Add a more detailed doc string for our position type and a little todo
for the `.bep` B)
Since ledger records are often provided (and thus stored) from most
backends *without* containing the info we normally need for accounting
defined by `MktPair`, this extends the ledger method to take in a table
that allows assigning the `Transaction.sys` from an fqme lookup. This
way client code (like the paper engine and new ledger mgmt tools) can
do the mkt info lookup beforehand and then load both ledger
`Transactions` and positions via the `PpTable` and get correct
accounting calculations, always :fingers_crossed:
Also adds `TransactionLedger.update_from_t(t: Transaction)` to allow
updating directly from an existing tran instead of making the user cast
to a `dict` first. Includes fix to `.to_dict()` to always pop the `.sym`
again to avoid client code having to do so.
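A hypothetical client-side flow per the above; the `get_mkt_info()`
return-unpacking and the exact `.iter_trans()` signature are
assumptions:

```python
# pre-load mkt info for each fqme found in the ledger..
mkt_by_fqme: dict[str, MktPair] = {}
for fqme in ledger_fqmes:  # hypothetical fqme set from the ledger
    mkt, _bs_pair = await get_mkt_info(fqme)
    mkt_by_fqme[fqme] = mkt

# ..then let the ledger assign `.sys` on each record it yields
for tid, tx in ledger.iter_trans(mkt_by_fqme):
    ...  # each `tx: Transaction` now has its `.sys: MktPair` set
```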
Wow not sure how that happened, but we should probably use the correct
market precision info for the correct parameter..
Also, use `@lru_cache` on new `get_mkt_info()` ep, seems to work?
In `.feed` and `._sampling` move to using the new
`tractor.Context.open_stream(allow_overruns: bool)` (cough, A BREAKING
CHANGE).
Also set `Flume.mkt` during construction in `.feed.open_feed()`.
Might as well try and flip it over to the new type; make appropriate
dict serialization changes in `.to_msg()`. Alias back to `.symbol:
Symbol` with a property.
Frickin ib, they give you the `0.001` (or wtv) in the
`ContractDetails.minSize: float` but won't accept fractional sizes
through the API.. Either way, it's probably not sane to be supporting
fractional order sizes for legacy instruments by default especially
since it in theory affects a lot of the clearing outcomes by having ib
do wtv magical junk behind the scenes to make it work..
Order mode previously was just willy-nilly sending `float` prices
(particularly on order edits) which are generated from the associated
level line. This actually uses the `MktPair.price_tick: Decimal` to
ensure the value is rounded correctly before submission to the ems..
Also adjusts the order mode init to expect a table of tables of startup
position messages, with the inner table being keyed by fqme per msg.
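The rounding itself is plain `Decimal` snapping against the mkt's
price tick; roughly like the sketch below (not the exact call site,
and the helper name is made up):

```python
from decimal import Decimal

def round_to_tick(price: float, price_tick: Decimal) -> float:
    # snap to the nearest multiple of the instrument's min price
    # increment before submitting the order msg to the ems
    p = Decimal(str(price))
    return float(
        (p / price_tick).quantize(Decimal('1')) * price_tick
    )

# eg. with a hypothetical 0.25 tick:
assert round_to_tick(4130.13, Decimal('0.25')) == 4130.25
```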
Tried a couple libs and ended up sticking with `rich` (since it's the
sibling lib to `typer`) but also (initially) implemented a version with
`blessings` that I ended up commenting out (and will likely remove).
Adjusted the CLI I/O a slight bit as well:
- require a fully qualified account name of the form:
`<brokername>.<accountname>` and error on non-matching input.
- dump positions summary lines as humanized size, ppu and cost basis
values per line.
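For flavor, the `rich` side of this boils down to something like the
following; the columns and values are purely illustrative:

```python
from rich.console import Console
from rich.table import Table

table = Table(title='binance.paper positions')  # hypothetical acct
for col in ('fqme', 'size', 'ppu', 'cost basis'):
    table.add_column(col, justify='right')

table.add_row('xbtusdt.binance', '0.042', '27,150.0', '1,140.3')
Console().print(table)
```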
When processing paper trades ledgers we normally won't have specific
`MktPair` info for the backend market we're simulating, as such we
need to look up this info when updating pps.toml files such that we
get precision info correct (particularly in the case of cryptos!) and
can also run paper ledger processing without running the simulated
clearing loop. In order to make this happen we look up any
`get_mkt_info()` ep on the backend and pass the output to the
`force_mkt` input of the `PpTable.update_from_trans()` method.
This will (likely) act as a new backend query endpoint for other `piker`
(client) code to look up `MktPair` info from each backend. To start,
it also returns the backend-broker's local `Pair` (or wtv other type)
as well.
The main motivation for this is for our paper engine which can require
the mkt info when processing paper-trades ledgers which do not contain
appropriate info to compute position metrics.
Instead of stripping the broker part just use the full fqme for all
`Transaction.bs_mktid: str` values since it makes indexing the `PpTable`
much easier with less key mangling..
Change the root-service-task entrypoint to accept the level and
set up a console log as is now expected for all sub-services. Cast all
backend delivered startup `BrokerdPosition` msgs and log them to
console.
If you want a sub-actor to write console logs (with the right level) the
`get_console_log()` call has to be made somewhere during service task
startup. Previously this wasn't well formalized nor used (depending on
daemon) so passing `loglevel` to the service's root-task-endpoint (eg.
`_setup_persistent_brokerd()`) encourages that the daemon's logging is
configured during init according to the spawner's requested logging
config. The previous `get_console_log()` call happening inside
`maybe_spawn_daemon()` wasn't actually doing anything in the target
daemon XD, so obviously remove that and instead pass the loglevel
through to the ctx endpoints and service manager methods.
Turns out we don't hook up our eventkit handler until after
`load_aio_clients()` is complete, which means we can't get
`ib_insync.Client.apiError` events unless inside the asyncio side task.
So I guess try to report any such errors during API scan (note the
duplicate client id case is a special one from ibis itself) even though
we're not going to catch them trio side. The hack to work around this
is to just increment the client id value with the `i` value from the
`connect_retries` loop, even though that will break on more than
3 clients attached to an API endpoint lul ..
Further adjustments toward trying to fix this proper:
- add `remove_handler_on_err()` cm to disconnect a handler when the trio
side of the channel closes.
- actually connect to client api errors in our `Client.inline_errors()`.
- increase connect timeout to a sec.
- change the trio-asyncio proxy response-msg loop over to `match:`
syntax and raise on unhandled msgs from eventkit handlers.
We previously only offered a sync API (which was recently renamed to
`.<meth>_nowait()` style) since initially all order control was from our
`OrderMode` Qt driven UI/UX. This adds the equivalent async methods for
both testing as well as eventual auto-strat driven control B)
Also includes a bunch of renaming:
- `OrderBook` -> `OrderClient`.
- better internal renaming of the client's mem chan vars and add a ref
`._ems_stream: tractor.MsgStream`.
- drop the `get_orders()` factory; just always check for the
actor-global instance and always set the ems stream on that client (in
case the old one was closed).
This will end up being super handy for testing our accounting subsystems
as well as providing unified and simple cli utils for managing ledgers
and position tracking. Allows loading the paper boi without starting
a data feed and instead just trigger ledger and pps loading without
starting the entire clearing engine.
Deatz:
- only init `PaperBoi` and start clearing loop (tasks) if a non-`None`
fqme is provided, otherwise just `Context.started()` the existing pps msgs
as loaded from the ledger.
- always update both the ledger and pp table on startup and pass
a single instance of each obj to the `PaperBoi` for reuse (without
opening and closing backing config files since we now have
`.write_config()`).
- drop the global `_positions` dict, it's not needed any more if we use
a `PaperBoi.ppt: PpTable` which persists with the engine actor's
lifetime.
Add a new `class TransactionLedger(collections.UserDict)` for managing
ledger (files) from a `dict`-like API. The main motivations being easy
conversion between `dict` <-> `Transaction` obj forms as well as dynamic
(toml) file updates via a set of methods:
- `.write_config()` to render and write state to the local toml file.
- `.iter_trans()` to allow iterator style conversion to `Transaction`
form for each entry.
- `.to_trans()` for the dict output from the above.
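Condensed, the type reads roughly as the sketch below; the real impl
of each method is elided:

```python
from collections import UserDict
from collections.abc import Iterator
from pathlib import Path

class TransactionLedger(UserDict):
    '''
    A `dict`-like api over a (toml backed) transaction ledger file.

    '''
    def __init__(self, ledger: dict, path: Path) -> None:
        self.path: Path = path
        super().__init__(ledger)

    def write_config(self) -> None:
        # render `self.data` to TOML and flush to `self.path`
        ...

    def iter_trans(self) -> Iterator[tuple[str, 'Transaction']]:
        # cast each entry to `Transaction` form, keyed by tid
        ...

    def to_trans(self) -> dict[str, 'Transaction']:
        return dict(self.iter_trans())
```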
Some adjustments to `Transaction` namely making `.sym/.sys` optional for
now so that paper engine entries can be loaded (offline) without
connecting to the emulated broker backend. Move to using `pathlib.Path`
throughout for bootyful toml file mgmt B)
When loading a `Position` from a pps file we might not have the entire
`MktPair` field-set loaded (though going forward that shouldn't really
ever happen except in the case of a legacy `pps.toml`), in which case we
can check if the `.fqme: str` value loaded from the transaction is
longer and use that instead - presuming it must have more mkt meta-data
filled out.
Also includes some more `fqsn` -> `fqme` renames.
Been meaning to do this port for a while; it makes passing around
file handles (presumably alongside the in-mem obj form) a lot
simpler/nicer and makes the implementations of all the config file
handling much more terse, with fewer presumptions about the form of
filename/dir `str` values all over the place B)
Moar technically, lets us:
- drop remaining `.config` usage of `os.path`.
- return `Path`s from most routines.
- adds a special case to `get_conf_path()` such that if the input name
contains a `pps.` pattern, we avoid validating the name; this is going
to be used by new `.accounting.open_pps()` code which will instead
write a separate TOML file for each account B)
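That special case might look roughly like this; `_conf_names` and the
config dir getter are assumptions:

```python
from pathlib import Path

def get_conf_path(conf_name: str = 'brokers') -> Path:
    # skip name validation for the per-account position files,
    # eg. a hypothetical `pps.binance.paper.toml`
    if 'pps.' not in conf_name:
        assert conf_name in _conf_names  # assumed module-level set

    config_dir: Path = get_config_dir()  # hypothetical getter
    return config_dir / f'{conf_name}.toml'
```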
Previously we were re-processing all ledgers for every position msg
received from the API, per client.. Instead do that once in a first pass
and drop all key-miss lookups for `bs_mktid`s; it should never happen.
Better typing for in-routine vars, convert pos msg/objects to `dict`
prior to logging so it's sane to read on console. Skip processing
specifically option contracts for now.
Turns out `binance` is pretty great with their schema since they have
more or less the same data schema for their exchange info ep which we
wrap in a `Pair` struct:
https://binance-docs.github.io/apidocs/spot/en/#exchange-information
That makes it super easy to provide the most general case for filling
out a `MktPair` with both `.src/dst: Asset` to maintain maximum
meta-data B)
Deatz:
- adjust `Pair` to have `.size/price_tick: Decimal` by parsing out
the values from the filters field (see the sketch after this list);
TODO: we should probably just rewrite the input `.filter` at init time
so we can keep the frozen style.
- rename `Client.mkt_info()` (was `.symbol_info`) to `.exch_info()`,
better matching the ep name, and have it build, cache, and return
a `dict[str, Pair]`; allows dropping `.cache_symbols()`.
- only pass the `mkt_info: MktPair` field in the init msg!
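The filter parsing from the first point amounts to indexing the
`filters` array per the linked schema, roughly:

```python
from decimal import Decimal

def parse_ticks(pair_entry: dict) -> tuple[Decimal, Decimal]:
    # the exchange info schema delivers a list of typed filters
    filters = {
        f['filterType']: f
        for f in pair_entry['filters']
    }
    price_tick = Decimal(filters['PRICE_FILTER']['tickSize'])
    size_tick = Decimal(filters['LOT_SIZE']['stepSize'])
    return price_tick, size_tick
```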
Accept a msg with any of:
- `.src: Asset` and `.dst: Asset`
- `.src: str` and `.dst: str`
- `.src: Asset` and `.dst: str`
but not the final combo tho XD
Also, fix `.key` to properly cast any `.src: Asset` to string!
If user has loaded from a flex report then we don't want the API records
from the same period to override those; instead just update with any
missing fields from the API schema.
Also, always `str`-ify the contract id (what is set for the
`.bs_mktid`) *before* packing into the transaction type to ensure that
when serialized to `pps.toml` there are no discrepancies at the codec
level.. smh
Instead adjust `load_aio_clients()` to only reload clients detected as
non-loaded or disconnected (2 birds), and avoid use of the global module
table which could result in stale disconnected clients persisting on
multiple `brokerd` client reconnects, resulting in error.
To make nested `msgspec.Struct`s work we need to tell the codec that the
`.symbol` is some struct def, since we don't really need to enforce that
(yet) we're just going to enc/dec as `str` until we further formalize
and/or need something more complex.
Initial attempt at getting the sampling and shm layer to use the new mkt
info meta-data type. Draft out a potential `BackendInitMsg:
msgspec.Struct` for validating the init msg returned from the
`stream_quotes()` start value; obvs don't actually use it yet.
To be compat with the `Symbol` (for now) and generally allow for reading
the (derivative) contract specific part of the fqme. Adjust
`contract_info: list[str]` and make `src: str = ''` by default.
Add `MktPair` handling block for when a backend delivers
a `mkt_info`-field containing init msg. Adjust the original
`Symbol`-style `'symbol_info'` msg processing to do `Decimal` defaults
and convert to `MktPair` including slapping in a hacky `_atype: str`
field XD
General initial name changes to `bs_mktid` and `_fqme` throughout!
For `price_tick` and `size_tick` we read in `str` and decimal-ize
and now correctly fail over to default values of the same type..
Also, always treat `bs_mktid` field as a `str` in TOML form.
Drop the strange `clears: dict` var from the loading code (not sure why
that was left in smh) and use the better name `toml_clears_list` for
the TOML-loaded, pre-transaction sequence.
Handle the case where the `'dst'` field is just a `str` (in which case
delegate to `.from_fqme()`) as well as do `Asset` loading, and use our
`Struct.copy()` to enforce type-casting (to eg. `Decimal`s) such that
we'll now capture typing errors despite the IPC transport.
Change `Symbol.tick_size` and `.lot_tick_size` defaults to decimal
for proper casting and type `MktPair.atype: str` since `msgspec` can't
cast to `AssetTypeName` without special handling..
Allows building a `MktPair` from the backend specific `Pair` for
eventual use in the data feed layer. Also adds `Pair.price/tick_size` to
get to the expected tick precision info format.
Add a logic branch for now that switches on an instance check.
Generally swap over all `Position.symbol` and `Transaction.sym` refs to
`MktPair`. Do a wholesale rename of all `.bsuid` var names to
`.bs_mktid`.
Instead let's name it `.sys` for "system", the thing we use to conduct
the "transactions" ..
Also rename `.bsuid` -> `.bs_mktid` for "backend system market id",
which is more explicit and easier to remember and read.
Prepping to entirely replace `Symbol`; this adds a buncha docs/comments,
better implementation for representing and parsing the FQME: "fully
qualified market endpoint".
Deatz:
- make `.src` an optional field until we figure out how we're going
to support loading source assets from all backends sensibly..
- implement `MktPair.fqme: str` (what was previously called `fqsn`)
using a new util func: `maybe_cons_tokens()` (see the sketch after
this list).
- drop `Symbol.brokers` and expect only `.broker` usage.
- remap anything with `fqsn` in the name to `fqme` with aliases from the
old name.
- implement `unpack_fqme()` with `match:` syntax B)
- add `MktPair.tick_size_digits`, `.lot_size_digits`, `.fqsn`, `.key` for
backward compat.
- make all fqme generation related fields empty `str`s by default.
- add `MktPair.resolved: bool` a flag indicating whether or not `.dst`
is an `Asset` instance or just a string and, `.bs_mktid` the field
to hold the "backend system market id" per broker.
Try out using our new internal type for kraken's asset infos, now
stored in the `Client.assets: dict[str, Asset]` table. Handle a server
error when requesting such info msgs.
Drop everything we can in terms of methods and attrs from `Symbol`:
- kill `.tokens()`, `.front_feed()`, `.nearest_tick()`,
`.front_fqsn()`, instead moving logic from these methods into
dependents (and obviously removing any usage from the rest of the code
base, coming in follow up commits).
- rename `.quantize_size()` -> `.quantize()`.
- re-implement `.brokers`, `.lot_size_digits`, `.tick_size_digits` as
`@property` methods; for the latter two, allows us to minimize to only
accepting min tick decimal values on alternative constructor class
methods and to drop the equivalent instance vars.
- map `_fqsn` related variable names to new and preferred `_fqme`.
We also juggle around some utility functions, moving limited precision
related `decimal.Decimal` routines to the top of module and soon-to-be
legacy `fqsn` related routines to the bottom.
`MktPair` draft type extensions:
- drop requirements for `src_type`, and offer the optional `.dst_type`
field as either a `str` or (new `typing.Literal`) `AssetTypeName`.
- define an equivalent `.quantize()` as (re)defined in `Symbol` but with
`quantity_type: str` field which specifies whether to use the price or
the size precision.
- add a lot more docs, a `.key` property for the "symbol" name, and
a draft property for a `.fqme: str`.
- allow `.src` and `.dst` to be of type `str | Asset`.
Add a new `Asset` to capture "things which can be used in markets and/or
transactions" XD
- defines a `.name`, an `.atype: AssetTypeName` financial category
tag, a `tx_tick: Decimal` precision limit for transactions, and of
course a `.quantize()` method for doing accounting arithmetic on
a given tech stack (see the sketch after this list).
- define the `atype: AssetTypeName` type as a finite set of `str`s
expected to be used in various ways for default settings in other
parts of the data and order control layers..
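Condensed, the type reads roughly as below; the tag values shown are
illustrative, not the canonical set:

```python
from decimal import Decimal
from typing import Literal
from msgspec import Struct

# illustrative subset of the finite tag set described above
AssetTypeName = Literal['crypto', 'fiat', 'stock', 'future', 'option']

class Asset(Struct, frozen=True):
    name: str
    atype: AssetTypeName  # financial category tag
    tx_tick: Decimal  # smallest transactable unit

    def quantize(self, size: float) -> Decimal:
        # round a clearing size to this asset's tx precision
        return Decimal(str(size)).quantize(self.tx_tick)
```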
Our issue was not having the correct value set on each
`Symbol.lot_tick_size`.. and then doing PPU calcs with the default set
for legacy mkts..
Also,
- actually write `pps.toml` on broker mode exit.
- drop `get_likely_pair()` and import from pp module.
Not sure how this worked before, but the PPU calculation critically
requires that clearing transactions are processed in correct
chronological order! Fix this by sorting `trans: dict[str,
Transaction]` in the `PpTable.update_from_trans()` method.
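i.e. something as simple as the following, assuming the
`Transaction.dt` timestamp field:

```python
# ensure chronological processing order for ppu calcs
trans = dict(
    sorted(
        trans.items(),
        key=lambda tup: tup[1].dt,
    )
)
```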
Also, move the `get_likely_pair()` parser from the `kraken` backend here
for future use particularly when we revamp the asset-transaction
processing layer.
Apparently it will likely fix our `trio`-cancel-scopes-corrupted crash
when we try to let our `._web_bs.NoBsWs` do reconnect logic around
the async-generator implemented data-feed streaming routines in `binance`
and `kraken`. See the project docs for deatz; obvs we add the lib as
a dep.
Solve this by always scaling the y-range for the major/target curve
*before* the final overlay scaling loop; this implicitly always solves
the case where the major series is the only one in view.
Tidy up debug print formatting and add some loop-end demarcation comment
lines.
This is particularly more "good looking" when we boot with a pair that
doesn't have historical 1s OHLC and thus the fast chart is empty from
outset. In this case it's a lot nicer to be already zoomed to
a comfortable preset number of "datums in view" even when the history
isn't yet filled in.
Adjusts the chart display `Viz.default_view()` startup to explicitly
ensure this happens via the `do_min_bars=True` flag B)
Not sure how I missed this (and left in handling of `list.remove()`;
did it ever work for that?) after the `samplerd` impl in 5ec1a72, but
this adjusts the remove-broken-subscriber loop to catch the correct
`set.remove()` exception type on a missing (likely already removed)
subscription entry.
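For reference, the two container types raise different exception types
on a missing entry, which was the crux of the bug:

```python
subs: set = {'sub-a'}
subs.remove('sub-a')
try:
    subs.remove('sub-a')  # already removed
except KeyError:  # `set.remove()` raises `KeyError`..
    pass

lst: list = []
try:
    lst.remove('sub-a')
except ValueError:  # ..whereas `list.remove()` raises `ValueError`
    pass
```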
For the purposes of eventually trying to resolve the (intermittent but
still existing) last-step indexing synchronization issue(s) that can
happen due to races between history frame queries and shm writing
during startup. In fact, here we drop all `hist_viz` info queries from
the main display loop for now, anticipating that this code will either
be removed or improved later.
Again, as per the signature change, never expect implicit time step
calcs from overlay processing/machinery code. Also, extend the debug
printing (yet again) to include better details around
"rescale-due-to-minor-range-out-of-view" cases and a detailed msg for
the transform/scaling calculation (inputs/outputs), particularly for the
cases when one of the curves has a lesser support.
As per the change to `slice_from_time()` this ensures this `Viz` always
passes its self-calculated time indexing step size to the time slicing
routine(s).
Further this contains a slight impl tweak to `.scalars_from_index()` to
slice the actual view range from `xref` to `Viz.ViewState.xrange[1]` and
then reading the corresponding `yref` from the first entry in that
array; this should be no slower in theory and makes way for further
caching of x-read-range to `ViewState` opportunities later.
There's been way too many issues when trying to calculate this
dynamically from the input array, so just expect the caller to know what
it's doing and don't bother with ever hitting the error case of
calculating an incorrect value internally.
When the target pinning curve (by default, the dispersion major) is
shorter than the pinned curve, we need to make sure we still find the
x-intersect for computing returns scalars! Use `Viz.i_from_t()` to
accomplish this as well, and augment that method with a `return_y:
bool` to allow the caller to also retrieve the equivalent y-value at
the requested input time `t: float` for convenience.
Also tweak a few more internals around the 'loglin_ref_to_curve'
method:
- only solve / adjust for the above case when the major's xref is
detected as being "earlier" in time than the current minor's.
- pop the major viz entry from the overlay table ahead of time to avoid
a needless iteration and simplify the transform calc phase loop to
avoid handling that needless cycle B)
- add much better "organized" debug printing with clearer headers
around which "phase"/loop the message pertains to, as well as more
explicit details in terms of x and y-range values on each cycle of
each loop.
Previously when very zoomed out and using the `'r'` hotkey the
interaction handler loop wouldn't trigger a re-(up)sampling to get
a more detailed curve graphic and instead the previous downsampled
(under-detailed) graphic would show. Fix that by ensuring we yield back
to the Qt event loop and do at least a couple render cycles with paired
`.interact_graphics_cycle()` calls.
Further this flips the `.start/signal_ic()` methods to use the new
`.reset_graphics_caches()` ctr-mngr method.
Instead delegate directly to `Viz.default_view()` throughout charting
startup and interaction handlers.
Also add a `ChartPlotWidget.reset_graphics_caches()` context mngr which
resets all managed graphics object's cacheing modes on enter and
restores them on exit for simplified use in interaction handling code.
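A plausible shape for that mngr; the viz registry and graphics attr
names below are assumptions:

```python
from contextlib import contextmanager
from PyQt5.QtWidgets import QGraphicsItem

@contextmanager
def reset_graphics_caches(self):
    # disable Qt's coordinate caching on all managed graphics
    # for the duration of an interaction, then restore on exit
    saved: dict = {}
    for viz in self._vizs.values():  # assumed viz table
        for gfx in (viz.graphics, viz.ds_graphics):
            if gfx is None:
                continue
            saved[gfx] = gfx.cacheMode()
            gfx.setCacheMode(QGraphicsItem.NoCache)
    try:
        yield
    finally:
        for gfx, mode in saved.items():
            gfx.setCacheMode(mode)
```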
This finally seems to mitigate all the "smearing" and "jitter" artifacts
when using Qt's "coordinate cache" graphics-mode:
- whenever we're in a mouse interaction (as per calls to
`ChartView.start/signal_ic()`) we simply disable the caching mode (set
`.NoCache`) until the interaction is complete.
- only do this (for now) during a pan since it doesn't seem to be an
issue when zooming?
- ensure all `Viz.graphics` and `.ds_graphics` are disabled so as to
be agnostic to any case where there's both a zoom and a pan
simultaneously (not that it's easy to do manually XD) as well as
solving the problem whenever an OHLC series is in
traced-and-downsampled mode (during low zoom).
Impl deatz:
- rename `ChartView._ic` -> `._in_interact: trio.Event`
- add `ChartView._interact_stack: ExitStack` which we use to open and
close the `FlowGraphics.reset_cache()` mngrs from mouse handlers.
- drop all the commented per-subtype overrides for `.cache_mode: int`.
- write up much better doc strings for `FlattenedOHLC` and `StepCurve`
including some very basic ASCII-art diagrams.