Since we need `.get_mkt_info()` to remain symmetric across calls with
different fqme inputs, and binance generally uses upper case for its
symbology keys, we always upper-case the FQME-related tokens for both
symcaching and general search purposes.
Also don't set `_atype` on mkt pairs since it should be fully handled
via the dst asset loading in `Client._cache_pairs()`.
In cases where a brokerd backend doesn't yet support a symcache we need
to do manual `.get_mkt_info()` queries and stash them in a table that we
pass in for the mkt failover lookup to `Account.update_from_ledger()`.
Set the `PaperBoi._mkts` to this table for use on real-time ledger
writes in `.fake_fill()`.
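Roughly the wiring looks like (a sketch; the loop and var names are made
up, only `_mktmap_table` and `PaperBoi._mkts` come from the text):
```python
# sketch: manually query mkt info when no symcache exists..
mkt_by_fqme: dict[str, MktPair] = {}
for fqme in ledger_fqmes:  # hypothetical iterable of ledger keys
    mkt, _pair = await brokermod.get_mkt_info(fqme)
    mkt_by_fqme[mkt.fqme] = mkt

# ..and pass the table in as the mkt failover lookup.
acnt.update_from_ledger(
    ledger,
    _mktmap_table=mkt_by_fqme,
)
paperboi._mkts = mkt_by_fqme  # reused on `.fake_fill()` writes
```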
Since some backends support multiple venues for a given "position
distinguishing instrument" (like IB), we can't presume that every
`Position` can be uniquely keyed by a `MktPair.fqme` (the venue part
can change while remaining the same "pair" relationship in accounting
terms), so instead presume the "backend system's market id" is the
unique key (at least for now) instead of the fqme.
More practically, we use the `bs_mktid` to groupby-partition the
per-pair DFs from the trades ledger and attempt to scan-match the input
fqme (in the `ledger disect` cli) against the fqme column value-set.
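In sketch form (assuming the ledger df has `bs_mktid` and `fqme`
columns; `match_mkt()` is a made-up name):
```python
import polars as pl

def match_mkt(
    ldf: pl.DataFrame,  # the full trades-ledger df
    input_fqme: str,
) -> pl.DataFrame | None:
    # groupby-partition by the backend's market id..
    dfs: dict = ldf.partition_by('bs_mktid', as_dict=True)

    # ..then scan-match the input fqme against each frame's
    # fqme column values.
    for _bs_mktid, df in dfs.items():
        if any(input_fqme in fqme for fqme in df['fqme']):
            return df
    return None
```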
Not sure why i ever thought it would work otherwise but, obviously if
you're replicating a `Position` from a **summary** (IPC) msg we
need to wipe any prior clearing events from the events history..
The main use for this loading mechanism is precisely if you don't have
local access to the txn ledger and need to represent a position from
a summary 🤦
Also, never bother with ledger file fqme "rewriting" if the backend has
no symcache support (yet) since obviously there's then no symbol set to
search for a better key xD
Instead of casting to `dict`s and rewriting event names in the
`push_tradesies()` handler, be transparent with event names (also
defining piker-equivalent mappings for them in a redefined `_statuses`
table) and types, passing them directly to the
`deliver_trade_events()` task and generally making event handler blocks
much easier to grok with type annotations. To
deal with the causality dilemma of *when to emit a pos msg* due to
needing all of `execDetailsEvent, commissionReportEvent, positionEvent`
but having no guarantee on received order, we implement a small
`clears: dict[Contract, tuple[Position, Fill]]` tracker table and (as
before) only emit a position event once the "cost" can be accessed for
the fill. We now ALWAYS relay any `Position` update from IB directly to
ensure (at least) the cumsize is correct (since it appears we still have
ongoing issues with computing this correctly via `.accounting.Position`
updates..).
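The tracker-table idea in sketch form (`relay_position_msg()` is a
made-up emitter name; only the `clears` table type comes from the
above):
```python
from ib_insync import Contract, Fill, Position

clears: dict[Contract, tuple[Position | None, Fill | None]] = {}

def on_event(
    con: Contract,
    pos: Position | None = None,
    fill: Fill | None = None,
) -> None:
    # merge with any prior partial state for this contract
    prior_pos, prior_fill = clears.get(con, (None, None))
    pos, fill = pos or prior_pos, fill or prior_fill
    clears[con] = (pos, fill)

    # only emit once the fill's "cost" is accessible
    if pos and fill and fill.commissionReport.commission:
        relay_position_msg(pos, fill)  # hypothetical emitter
        clears.pop(con)
```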
Further related adjustments:
- load (fiat) balances and startup positions into a new `IbAcnt` struct.
- change `update_and_audit_pos_msg()` to blindly forward ib position
  event updates for **the size** since it should always be
  considered the true gospel for accounting!
- drop ib-has-no-position handling since it should never occur..
- move `update_ledger_from_api_trades()` to the `.ledger` submod and
  process ib_insync `Fill` related objects instead of dict-casted
  versions, now doing the casting inside
  `api_trades_to_ledger_entries()`.
- `norm_trade()`: add `symcache.mktmaps[bs_mktid] = mkt` in, since it
turns out API (and sometimes FLEX) records don't contain the listing
exchange/venue thus making it impossible to map an asset pair in the
"position sense" (i.e. over multiple venues: qqq.nasdaq, qqq.arca,
qqq.directedge) to an fqme when doing offline ledger processing;
instead use frickin IB's internal int-id so there's no discrepancy.
- also much better handle futures mkt trade flex records such that
parsed `MktPair.fqme` is consistent.
Since getting a global symcache result from the API is basically
impossible, we ad-hoc fill out the needed client tables on demand per
client code queries to the mkt info EP.
Also, use `unpack_fqme()` in the fqme (search) pattern parser instead
of the hacky `str.partition()`.
Add new `Client` attr tables to better stash `Contract` lookup results
normally mapped from some input FQME;
- `._contracts: dict[str, Contract]` for any input pattern (fqme).
- `._cons: dict[str, Contract] = {}` for the `.conId: int` inputs.
- `_cons2mkts: bidict[Contract, MktPair]` for mapping back and forth
between ib and piker internal pair types.
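In sketch form (`bidict` being the 3rd party bi-directional mapping
pkg):
```python
from bidict import bidict
from ib_insync import Contract

class Client:
    def __init__(self) -> None:
        # keyed by any input fqme/pattern
        self._contracts: dict[str, Contract] = {}
        # keyed by the str-ified `.conId: int`
        self._cons: dict[str, Contract] = {}
        # 2-way map between ib contract and piker mkt pair types
        self._cons2mkts: bidict = bidict()
```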
Further,
- type out as many ib_insync internal types as possible mostly for
contract related objects.
- change `Client.trades()` -> `.get_fills()` and return directly the
result from `IB.fill()`.
- start flipping over internals to `Position.cumsize`.
- allow passing in a `_mktmap_table` to `Account.update_from_ledger()`
  for cases where the caller wants to per-call-dynamically insert the
`MktPair` via a one-off table (cough IB).
- use `polars.from_dicts()` in `.calc.open_ledger_dfs()` and wrap the
whole func in a new `toolz.open_crash_handler()`.
Since there's a growing list of top level mods which are more or less
utils/tools for working with the runtime; begin to move them into a new
subpkg starting with a new `.toolz.debug`.
Start with,
- a new `open_crash_handler()` for doing breakpoints around blocks that
might error.
- move in what was `piker._profile` into `.toolz.profile` and adjust all
importing appropriately.
Define and bind in the `tx_sort()` routine to be used by
`open_trade_ledger()` when datetime sorting trade records.
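Something like (a sketch; the real routine is bound per-backend as per
the above):
```python
import pendulum

def tx_sort(
    records: dict[str, dict],
) -> list[tuple[str, dict]]:
    # datetime-sort the (tid, txdict) pairs from a trades ledger
    return sorted(
        records.items(),
        key=lambda item: pendulum.parse(str(item[1]['dt'])),
    )
```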
Further deats:
- always use the IB reported position size (since apparently our ledger
based accounting is getting rekt on occasion..).
- better ib pos msg formatting when there's mismatches with the piker
equivalent.
- never emit zero-size pos msgs (in terms of strict ib pos sizing) since
when there's piker ledger sizing errors we'll send the wrong thing to
the ems and its clients..
Apparently rendering to a `dict` from a sorted generator func
doesn't preserve the order when using a `dict`-comprehension.. Further,
there's really no reason to strictly return a `dict`. Adjust
`.calc.ppu()` to make the return value instead a `list[tuple[str,
dict]]`; this results in the current df cumsum values matching the
original impl and the existing `binance.paper` unit tests now passing XD
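The shape change amounts to (a sketch with a made-up helper name; the
actual running ppu/cumsize calcs are elided):
```python
def dt_sorted_clears(
    clears: dict[str, dict],
) -> list[tuple[str, dict]]:
    # explicitly ordered (tid, clear-entry) tuples; unlike the
    # prior `dict`-comprehension rendering, consumers always
    # iterate in dt-sorted order.
    return sorted(
        clears.items(),
        key=lambda item: item[1]['dt'],
    )
```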
Other details that fix a variety of nonsense..
- adjust all `.clearsitems()` consumers to the new list output.
- use `str(pendulum.now())` in `Position.from_msg()` since adding
multiples with an `unknown` str will obviously discard them, facepalm.
- fix `.calc.ppu()` to NOT short circuit when `accum_size` is 0; it's
been causing all sorts of incorrect size outputs in the clearing
table.. lel, this is what fixed the unit test!
Move in the obvious things XD
- all the specially defined venue tables from `.api`.
- some parser funcs: `con2fqme()` and `parse_patt2fqme()`.
- the `get_mkt_info()` and `open_symbol_search()` broker eps.
- the `_asset_type_map` table which converts to `.accounting.Asset`
compat keys for each contract/security.
Since there's no easy way to support it yet, we bypass symbology
caching for now and instead allow the `ib.ledger` routines to fill in
`MktPair` and `Asset` entries ad-hoc for the purposes of txn ledger
processing.
Some backends like `ib` don't have an obvious (nor practical) way to
easily download the entire symbology set available from all its mkt
venues. For such backends loading might require a non-std approach (like
using the contract search from some input mkt-key set) and can't be
expected to necessarily be supported out of the box. As such, allow
annotating a broker sub-pkg module with a `_no_symcache: bool = True`
attr which will make `open_symcache()` yield early with an empty
`SymbologyCache` instance for use by the caller to fill in the mkt and
assets tables in whatever ad-hoc way desired.
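The early-yield in sketch form (ctor args are assumptions):
```python
# inside an async `open_symcache(mod, ...)` ctx-mngr:
if getattr(mod, '_no_symcache', False):
    # deliver an empty instance for the caller to fill
    # in the mkt and asset tables ad-hoc.
    yield SymbologyCache(mod=mod)
    return
```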
The list is `open_symcache()`, `get_symcache()`, `SymbologyCache`, and
`Struct` which seems more or less fine to make part of the public
namespace. Also, make `._timeseries.t_unit` an instance of `Literal` to
make `ruff` happy?
This was more involved than expected but on the bright side, is going to
help drive a more general `Account` update/processing/loading API
providing for all the high-level txn update methods needed for any
backend to generically update the participant's account *state* via
an input ledger/txn set B)
Key changes to enable `SymbologyCache` compat:
- adjust `Client` pairs / assets lookup tables to include a duplicate
keying of all assets and "asset pairs" using the (chitty) default key
  set that kraken ships, which is NOT the `.altname` nor `.wsname` keys;
the "default ReST response keys" i guess?
- `._AssetPairs` and `._Assets` are *these ^* rest-key sets delivered
verbatim from the endpoint responses,
  - `._pairs` and `._assets` are the equivalent value-sets keyed by
    piker-style FQME-looking keys, now provided via the new
    `.kraken.symbols.Pair.bs_fqme: str` and the delivered `'altname'`
    field (for assets) respectively.
- re-implement `.get_assets()` and `.get_mkt_pairs()` to appropriately
delegate to internal methods and these new (multi-keyed) tables to
deliver the cacheable set of symbology info.
- adjust `.feed.get_mkt_info()` to handle parsing of both fqme-style and
wtv(-the-shit-stupid) kraken key set a caller passes via
a key-matches-first-table-style-scan after pre-processing the
input `fqme: str`; also do the `Asset` lookups from the new
`Pair.bs_dst/src_asset: str` fields which should always map correctly
to an internal asset entry delivered by `Client.get_assets()`.
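The scan is roughly (a sketch; the fn name and error type are made up):
```python
def lookup_pair(client, key: str):
    # try piker-style fqme keys first, then kraken's
    # verbatim ReST-response keys; first match wins.
    for table in (
        client._pairs,
        client._AssetPairs,
    ):
        if pair := table.get(key):
            return pair

    raise ValueError(f'No kraken pair found for {key!r}!?')
```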
Dirty impl deatz:
- add new `.kraken.symbols` and move the newly refined `Pair` there.
- add `.kraken.ledger` and move in the factored out ledger processing
routines.
- also move out what was the `has_pp()` and a large chunk of
  nested-ish looking acnt-position verification logic blocks into a new
`verify_balances()` B)
Provides for fully isolated symbology caching in a flat TOML table
without special case handling B)
Also explicitly define `.bs_mktid: str` which is now used by the
symcache to key-index the backend specific pair set and thus provides
for round-trip marshalling without special knowledge of any backend
schema.
Previously the cum-size calc(s) were in the `disect` CLI but they're
better stuffed into the backing df converter. Also, ensure that whenever
a `dt` field is type-detected as a `str` we parse it to `DateTime`.
For testing (and probably hacking) it's handy to be able to point
somewhere other than the default user-config dir for a ledger or account file
to test offline processing apis from `.accounting` subsystems. For now
it's a private optional named-arg: `_fp: Path` and it's obviously passed
down into the `load_account()` config getter.
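For example (a sketch; the test dir path is made up):
```python
from pathlib import Path

with open_account(
    'binance',
    'paper',
    _fp=Path('tests/_inputs/'),  # non-default config dir
) as acnt:
    ...
```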
Note that in the non-paper account case `Account.update_from_ledger()`
will use the ledger's `.symcache` and `.iter_txns()` method to acquire
actual txn-structs to compute positions.
Since each broker backend generally needs to define a specific
field-name-schema to determine the exact instantiation arguments to
`Transaction`, we generally need each backend to define an endpoint
function to conduct this transaction from an input `dict[str, Any]`
received either directly from provided ledger APIs or from previously
stored `.accounting._ledger` saved trades ledger TOML files.
To accomplish this we now require backends to declare a new routine:
```python
def norm_trade(
    tid: str,  # the uuid for the transaction
    txdict: dict,  # the input record-dict

    # a table of mkt-symbols to backend struct objects which
    # define the (meta-data) for the backend specific venue's
    # symbology
    pairs: dict[str, Struct],

) -> Transaction:
    ...
```
which implements that record conversion (at least for trades)
and can thus be used in `TransactionLedger.iter_txns()` which requires
"some code" to implement the loading from a serialization format (aka
the input `dict` record) to our local `Transaction` struct, normally
also using a `Pair`-struct table defined (and maybe previously cached)
by the specific backend such that our (normalization layer's) `MktPair`'s
fields can be set.
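Meaning `.iter_txns()` reduces to roughly (a method-body sketch):
```python
from collections.abc import Iterator

def iter_txns(
    self,
    symcache: SymbologyCache | None = None,
) -> Iterator[Transaction]:
    symcache = symcache or self._symcache
    for tid, txdict in self.tx_sort(self.data):
        # delegate record normalization to the backend ep
        yield self.mod.norm_trade(
            tid,
            txdict,
            pairs=symcache.pairs,
        )
```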
For the case of our `.clearing._paper_engine` we define the routine
in that module to simply extract the exact same fields from the TOML
ledger records that we previously wrote (to it).
Also, we always pass `pairs=SymbologyCache.pairs: dict[str, Struct]` on
norm trade calls such that offline ledger and accounting processing
clients can use a previously cached symbology set without having to
necessarily start the async-actor runtime to query the actual backend API
if the data has already been saved locally on the system B)
Other related:
- always passthrough kwargs in overridden `.to_dict()` method.
- only do fqme-related trade record field name rewrites when
  operating on a paper ledger; normally a backend's records don't
  contain these.
- fix `pendulum.DateTime` type annots.
- just deliver `Transaction`s from `.iter_txns()`.
Since some backends have multiple venues keyed by the same
symbol-pair-name, AND often the market/symbol info for those different
market-venues is entirely different (cough binance), we will have to
(sometimes) save the struct namespace-path as str for lookup when
deserializing a symcache to object form.
NOTE: this change is reliant on the following `tractor` dev commit which
improves support for constructing a path from object-instance:
bee2c36072
Add a backend(-wide) default struct path stored as a (TOML top level)
field `pair_ns_path: str` in the serialized `dict`-table as well as
allow for a per pair-`Struct` value optionally defined on each type def;
the global is only used if none was defined per struct via a `ns_path:
str`.
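Deserialization can then resolve the type from the stored path,
something like (a sketch; the dict/var names are illustrative):
```python
from tractor.msg import NamespacePath

def load_pair(
    pair_dict: dict,   # one TOML pair table
    cache_dict: dict,  # the top level TOML doc
):
    ns_path: str = (
        pair_dict.pop('ns_path', None)  # per-struct value
        or cache_dict['pair_ns_path']   # backend-wide default
    )
    # resolve the struct type from its namespace-path str
    PairType = NamespacePath(ns_path).load_ref()
    return PairType(**pair_dict)
```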
Further deats:
- don't write non-struct-member fields to dict for TOML file cache.
- always keep object forms, well, as objects (in tables).. XD
- factor cache loading from `dict` (and thus from TOML or presumably any
other interchange form) into a `@classmethod` constructor method B)
- allow choosing the subtable for `.search()` by name.
Still kinda borked since i don't think there actually is a (per venue)
"get-all-symbologies" endpoint.. so we're likely gonna have to figure
out either how to hack it or provide a bypass in ledger processing?
Deatz:
- use new `Account` type name, rename endpoint vars to match and
obviously use any new method name(s).
- mask out split ratio handling for now.
- async open the symcache prior to ledger processing (again, for now).
- drop passing `Transaction.sym`.
- fix parser set for dt-sorter since apparently 2022 and back had
a `date` field instead?
Since we don't want to be doing a `trio.run()` from async code (being
already in the `tractor` runtime and all); for now just put a top level
block wrapping async enter until we figure out to embed it (likely)
inside `open_account()` and pass the ref to `open_trade_ledger()`.
Require passing an explicit flag when entering from sync code with an
extra super duper explicit runtime error to indicate how to use in the
async case as well!
Also, do rewrites of the fqme, either from the best match in the
symcache according to search (the worst case) or from the `bs_mktid`
field if it exists (should only be true for paper engine accounts), AND
rewrite the `bs_mktid` for paper accounts if it seems un-fully-qualified.
Turns out, in order to make things much cleaner for inside-the-runtime usage
we do probably want to just make the manager async so that we can
generate the cache on demand from async UI inits as well as daemon
actors.. So change to that and instead make `get_symcache()` the helper
that should ONLY be called from sync funcs / offline ledger processing
utils!
Instead of constructing them (previously manually) in `.get_mkt_info()` ep,
just call `.get_assets()` and do key lookups for assets to hand directly
to the `.src/dst` of `MktPair`.
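I.e. inside the (async) ep it's now roughly (tick/id field names on the
backend pair are assumptions):
```python
assets: dict[str, Asset] = await client.get_assets()
mkt = MktPair(
    dst=assets[pair.bs_dst_asset],
    src=assets[pair.bs_src_asset],
    price_tick=pair.price_tick,  # assumed tick fields
    size_tick=pair.size_tick,
    bs_mktid=pair.symbol,  # assumed id field
)
```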
Refine fqme input parsing to match:
- adjust parsing logic to only use `unpack_fqme()` on the input fqme
token.
- set `.mkt_mode: str` to the derivs venue when an expiry token is
detected in the fqme.
- pass the parsed `expiry: str` to `Client.exch_info()` to ensure
a deriv venue (table) is used for pair lookup.
- skip any "DEFI" venue or other unknown asset type cases (since binance
doesn't seem to define some assets anywhere?).
Also, just use the `Client._pairs` unified table for search input since
the first call to `.exch_info()` won't necessarily contain the most
up-to-date state whereas `._pairs` always will.
Meaning we add the `Client.get_assets()` and `.get_mkt_pairs()` methods.
Also implement `.exch_info()` to take in an `expiry: str` to detect
whether to look up a derivative venue instead of spot.
In support of all this we now explicitly key all assets (via
`._cache_pairs()` during the populate of `._venue2assets` sub-tables)
with their `.bs_dst_asset: str` value to ensure, for ex., a spot
`BTCUSDT` has a distinct value from any futures contracts with the same
`Pair.symbol: str` value!
Also, ensure we always create a `brokers.toml` (from template) if DNE
and binance is the user's first used backend XD
Add `bs_src/dst_asset: str` properties which provide for unique keying
into futures vs. spot venues by offering a `.venue: str` property which,
for non-spot, normally delivers an expiry suffix (eg. '.PERP') and for
spot just delivers the bare chain-token key.
This enables keying multiple venues with the same mkt pairs easily in
a global flat key->pair table needed as part of supporting a symcache.
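Sketched out (using `msgspec.Struct` as a stand-in for our own
`Struct`; fields besides `.symbol` are assumptions):
```python
from msgspec import Struct

class Pair(Struct):
    symbol: str
    baseAsset: str
    quoteAsset: str
    expiry: str = ''  # eg. 'PERP' for non-spot

    @property
    def venue(self) -> str:
        # non-spot: the expiry suffix; spot: the bare
        # chain-token key.
        return self.expiry or self.quoteAsset

    @property
    def bs_dst_asset(self) -> str:
        # eg. spot 'BTC' vs. perp 'BTC.PERP'
        suffix = f'.{self.expiry}' if self.expiry else ''
        return f'{self.baseAsset}{suffix}'
```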
So you can do a `Struct1` - `Struct2` and we dump a little diff `list`
of tuples for anal on the REPL B)
Prolly can be broken out into its own micro-patch?
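The operator in sketch form (again `msgspec.Struct` standing in for
ours, which provides `__struct_fields__`):
```python
from typing import Any

from msgspec import Struct

class DiffableStruct(Struct):
    def __sub__(
        self,
        other: Struct,
    ) -> list[tuple[str, Any, Any]]:
        # a (field, ours, theirs) tuple per differing value
        diffs: list[tuple[str, Any, Any]] = []
        for field in self.__struct_fields__:
            mine = getattr(self, field)
            theirs = getattr(other, field)
            if mine != theirs:
                diffs.append((field, mine, theirs))
        return diffs
```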
Drop all the old `polars` (groupby + agg related) mangling to get a df
per fqme by delegating to the new routine and add in the `.cumsum()`ing
(per frame) as a first start on computing pps using dfs instead of
python dicts + loops as in `ppu()`.
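The per-frame cumsum being just (sketch, assuming a signed `size`
column):
```python
import polars as pl

def with_cumsize(df: pl.DataFrame) -> pl.DataFrame:
    # running position size over the dt-sorted clears
    return df.with_columns(
        pl.col('size').cumsum().alias('cumsize'),
    )
```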
To isolate it from the ledger/account mods and bc it is actually for
doing (eventual) position calcs / anal, might as well put it in this
mod. Add in the old-masked `ensure_state()` method content in case we
want to use it later for testing. Also tighten up the parser loading
inside `dyn_parse_to_dt()`.
Rename `open_pps()` -> `open_account()` for obvious reasons as well as
expect a bit tighter integration with `SymbologyCache` and consequently
`LedgerTransaction` in order to drop `Transaction.sym: MktPair`
dependence when compiling / allocating new `Position`s from a ledger.
Also we drop a bunch of prior attrs and do some cleaning,
- drop `Position.first_clear_dt` since we no longer sort during insert.
- `._clears` is now replaced by the `._events` table.
- drop the now masked `.ensure_state()` method (eventually moved to
`.calc` submod for maybe-later-use).
- drop `.sym=` from all remaining txns init calls.
- clean out the `Position.add_clear()` method and only add the provided
txn directly to the `._events` table.
Improve some `Account` docs and interface:
- fill out the main type descr.
- add the backend broker modules as `Account.mod` allowing to drop
`.brokername` as input and instead wrap as a `@property`.
- rename `.update_from_trans()` to the new `.update_from_ledger()` and
  expect either a `TransactionLedger` (user-dict) or a dict of txns;
  in the latter case, if we haven't also been passed a symcache as
  input, raise a runtime error since the symcache is necessary to
  allocate positions.
- also, delegate to `TransactionLedger.iter_txns()` instead of
a manual datetime sorted iter-loop.
- drop all the clears datetime don't-insert-if-earlier-than-first
  logic.
- rename `.to_toml()` -> `.prep_toml()`.
- drop old `PpTable` alias.
- rename `load_pps_from_ledger()` -> `load_account_from_ledger()` and
make it only deliver the account instance and also move out all the
`polars.DataFrame` related stuff (to `.calc`).
And tweak some account clears table formatting,
- store datetimes as TOML native equivs.
- drop `be_price` fixing.
- obvsly drop `.ensure_state()` call to pps.
Turns out we don't really need it directly for most "txn processing" AND
if we do it's usually related to some `Account`-ing related calcs; which
means we can instead just rely on the new `SymbologyCache` lookup to get
it when needed. So, basically just get rid of it and rely instead on the
`.fqme` to be the god-key to getting `MktPair` info (from the cache).
Further, extend the `TransactionLedger` to contain much more info on the
pertaining backend:
- `.mod` mapping to the (pkg) py mod.
- `.filepath` pointing to the actual ledger TOML file.
- `_symcache` for doing any needed asset or mkt lookup as mentioned
above.
- rename `.iter_trans()` -> `.iter_txns()` and allow passing in
a symcache or using the init provided one.
- rename `.to_trans()` similarly.
- delegate paper account txn processing to the `.clearing._paper_engine`
mod's `norm_trade()` (and expect this similarly from other backends!)
- use new `SymbologyCache.search()` to find the best but
un-fully-qualified fqme for a given `txdict` being processed when
writing a config (aka always try to expand to the most verbose `.fqme`
possible).
- add a `rewrite: bool` control to `open_trade_ledger()`.
Since we now fully support interchange-as-dict-msg, use the msg codec
API and drop manual `Asset` unpacking. Also, wrap `get_symcache()` in
a `pdbp` crash handler block for now B)
As part of loading the cache we can now fill the asset sub-tables:
`.mktmaps` and `.assets` with their deserialized struct instances!
In theory this might be possible for the backend defined `Pair` structs
as well but we need to figure out probably an endpoint to offer
the conversion?
Also, add a `SymbologyCache.search()` which allows sync code to scan the
existing (known via cache) symbol set just like how async code can use the
(much slower) `open_symbol_search()` ctx endpoint 💥
Previously we weren't necessarily serializing mkt pairs (for IPC msging)
entirely bc the assets `.src/.dst` were being sent just by their
str-names. This now properly supports fully serializing `Asset`s as
`dict`-msgs such that use of `MktPair.to_dict()` can be transmitted over
`tractor.MsgStream`s and deserialized entirely back to struct form on
the receiver end.
Deats:
- implement `Asset.to_dict()` and `.from_msg()`
- adjust `MktPair.to_dict()` and `.from_msg()` to use these methods.
- drop all the hacky "if .src/.dst is str" handling.
- add better `MktPair.from_fqme()` input handling for expiry and venue;
  ensure that either can be extracted from the passed fqme *and*, if so,
  they are also popped from any duplicates passed in via `**kwargs`.
For starters rename the cache type to `SymbologyCache` and fill out its
interface to include an (async) `.reload()` which can be used to populate
the in-mem asset-table sets such that any tractor-runtime task can
actually directly call it. Use a symcache file name schema of
`_cache/<backend>.symcache.toml`.
Dirtier deatz:
- make `.open_symcache()` a `@cm` such that it can be used from sync code
and will actually call `trio.run()` in the case where it needs to do a
full (re)load; also don't write on exit only on reloads.
- add `.get_symcache()` a simple non-ctx-mngr reader which again can
mostly be called willy-nilly from sync code without the full runtime
being up (but likely will only work if symcache files already exist
for the backend).
New mod is `.data._symcache` and it needs backend clients to declare
`Client.get_assets()` and `.get_mkt_pairs()` to generate the cache files
which now go in the config dir under `_cache/`.
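Loosely the sync-entry behaviour is (a sketch; the read/write helper
names are assumptions):
```python
from contextlib import contextmanager

import trio

@contextmanager
def open_symcache(mod, reload: bool = False):
    cache = SymbologyCache(mod=mod)
    if reload:
        # a full (re)load needs async client access so spin up
        # a runtime just for it when entered from sync code.
        trio.run(cache.reload)
        cache.write_config()  # only write on reloads
    else:
        cache.load()  # assumed toml-file reader
    yield cache
```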
We're probably going to move to implementing all accounting using
`polars.DataFrame` and friends and thus this rejig preps for a much more
"stateless" implementation of our `Position` type and its internal
pos-accounting metrics: `ppu` and `cumsize`.
Summary:
- wrt to `._pos.Position`:
- rename `.size`/`.accum_size` to `.cumsize` to be more in line
with `polars.DataFrame.cumsum()`.
- make `Position.expiry` delegate to the underlying `.mkt: MktPair`
handling (hopefully) all edge cases..
- change over to a new `._events: dict[str, Transaction]` in prep
for #510 (and friends) and enforce a new `Transaction.etype: str`
which is by default `clear`.
  - add `.iter_by_type()` which iterates, filters and sorts the
    entries in `._events` from above (see the sketch after this list).
- add `Position.clearsdict()` which returns the dict-ified and
datetime-sorted table which can more-or-less be stored in the
toml account file.
- add `.minimized_clears()` a new (and close) version of the old
method which always grabs at least one clear before
a position-side-polarity-change.
  - mask-drop `.ensure_state()` since there are no more `.size`/`.price`
    state vars (per se) as we always re-calc the ppu and cumsize from
    the clears records on every read.
- `.add_clear` no longer does bisec insorting since all sorting is
done on position properties *reads*.
- move the PPU (price per unit) calculator to a new `.accounting.calcs`
as well as add in the `iter_by_dt()` clearing transaction sorted
iterator.
- also make some fixes to this to handle both lists of `Transaction`
as well as `dict`s as before.
- start rename of `PpTable` -> `Account` and make a note about adding
a `.balances` table.
- always `float()` the transaction size/price values since it seems if
they get processed as `tomlkit.Integer` there's some suuper weird
double negative on read-then-write to the clears table?
- something like `cumsize = -1` -> `cumsize = --1` !?!?
- make `load_pps_from_ledger()` work again but now includes some very
very first draft `polars` df processing from a transaction ledger.
- use this from the `accounting.cli.disect` subcmd which is also in
*super early draft* mode ;)
- obviously as mentioned in the `Position` section, add the new `.calcs`
module with a `.ppu()` calculator func B)
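The `.iter_by_type()` sketch promised above (method-body shape is
illustrative):
```python
from collections.abc import Iterator

def iter_by_type(
    self,
    etype: str,
) -> Iterator[tuple[str, Transaction]]:
    # dt-sort then filter the events table by entry type
    for tid, txn in sorted(
        self._events.items(),
        key=lambda item: item[1].dt,
    ):
        if txn.etype == etype:
            yield tid, txn
```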
Also finally adds full `FeedInit` and `MktPair` support for this backend
by handling:
- all "currency" fields for each `Contract` by constructing
and `Asset` and setting the `MktPair.src` with `.atype='fiat'`.
- always render the `MktPair.src` name in the `.fqme` for fiat pairs
(aka forex) but never for other instruments.
In an effort to properly support fiat pairs (aka forex) as well as more
generally insert a fully-qualified `MktPair` in for the
`Transaction.sym`. Note that there's a bit of special handling for API
`Contract`s-as-dict records vs. flex-report-from-xml equivalents.
No point having duplicate data when we already stash the `expiry` on the
mkt info type and can just read it (and cast to `datetime` obj).
Further, this fixes a regression (caused by converting `._clears` to
a list) by adding a `._events: dict[str, Transaction]` table which
prevents double-entering transactions, based on checking the events
table for an existing id.. Further, add a sanity check that all events
are popped (for now) after serializing the clearing table for the toml
account file.
In the longer run, ideally we don't keep the separate sequences
`._clears` and `._events` and instead choose a better data structure
(a sorted unique set of mkt events), maybe a specially used
`polars.DataFrame` (which we kinda need eventually anyway)?
When you look at usage we don't end up really needing clear entries to
be keyed by their `Transaction.tid`, instead it's much more important to
ensure the time sorted order of trade-clearing transactions such that
position properties such as the size and ppu are calculated correctly.
Thus, this instead simplifies the `.clears` table to a list of clear
dict entries making a bunch of things simpler:
- object form `Position._clears` compared to the offline TOML schema
(saved in account files) is now data-structure-symmetrical.
- `Position.add_clear()` now uses `bisect.insort()` to
  datetime-field-sort-insert into the *list*, which saves having to worry
  about sorting on every sequence *read* (see the sketch below).
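Roughly (requires py3.10+ for the `key` support):
```python
import bisect

def add_clear(self, entry: dict) -> None:
    # dt-sorted insert into the list-form clears table
    bisect.insort(
        self._clears,
        entry,
        key=lambda clear: clear['dt'],
    )
```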
Further deats:
- adjust `.accounting._ledger.iter_by_dt()` to expect an input `list`.
- change `Position.iter_clears()` to iterate only the clearing entry
dicts without yielding a key/tid; no more tuples.
- drop `Position.to_dict()` since parent `Struct` already implements it.
Since it may be handy to get the latest ticks first, add a `reverse:
bool` to `iterticks()` and add some cleaner logic and a proper doc
string to `frame_ticks()`.
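In sketch form (our std quote msg layout assumed):
```python
def iterticks(
    quote: dict,
    types: tuple[str, ...] = ('trade',),
    reverse: bool = False,
):
    '''
    Iterate the ticks of the input quote msg, filtered by
    type and optionally latest-first.

    '''
    ticks = quote.get('ticks', ())
    if reverse:
        ticks = list(reversed(ticks))
    for tick in ticks:
        if tick.get('type') in types:
            yield tick
```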
Allows for tracking paper engine orders despite the ems not necessarily
being opened by the current order mode instance (UI) in "paper"
execution mode; useful for tracking bots/strats running against the same
EMS daemon.
Like you'd think:
- `load_ledger()` -> `._ledger`
- `load_account()` -> `._pos`
Also fixup the old `load_pps_from_ledger()` and expose it from a new
`.accounting.cli.disect` cli cmd for trying to figure out why pp calcs
are totally mucked on stupid ib..
Should be the final production backend to switch this over B)
Also tidy up the `update_and_audit_msgs()` validator to log vs. raise
when `validate: bool` is set; turn it off by default to avoid raises
until we figure out wtf is up with ib ledger processing or wtv..