Was originally for debugging an issue with `kucoin` i think, but it
definitely shouldn't be left in XD
Also add `'perpetual_future'` to `start_backfill()` input literal set.
About time we tidied a buncha status logging in this backend..
particularly for boot-up where there's lots of client-try-connect poll
looping with account detection from the user config.
`.api.Client` pprint and logging fmt improvements:
- add `Client.__repr__()` which shows the minimally useful set of info
from the underlying `.ib: IB` as well as a new `.acnts: list[str]`
of the account aliases defined in the user's `brokers.toml` (see the
sketch after this list).
- mk `.bars()` define a comprehensive `query_info: str` with all the
request deats but only display if there's a problem with the response
data.
- mk `.get_config()` report both the config file path and the acnt
aliases (NOT the actual account #s).
- move all `.load_aio_clients()` client poll loop requests to
`log.runtime()` statuses, only falling through to a `.warning()` when
the loop fails to connect the client to the spec-ed API-gw addr, and
|_ don't allow loading accounts for which the user has not defined an
alias in `brokers.toml::[ib]`; raise a value-error in such cases
with a message indicating how to mod the config.
|_ only `log.info()` about acnts if some were loaded..
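As a rough sketch, the new repr boils down to something like the
following (which connection attrs get pulled off `.ib.client` here is an
illustrative guess, not the exact impl):

```python
from ib_insync import IB


class Client:
    '''
    Minimal sketch: only surface connection info from the underlying
    `.ib: IB` plus the `.acnts` alias list from `brokers.toml`.

    '''
    ib: IB
    acnts: list[str]

    def __repr__(self) -> str:
        conn = self.ib.client  # low-level socket client
        return (
            f'<{type(self).__name__}('
            f'host={getattr(conn, "host", None)!r}, '
            f'port={getattr(conn, "port", None)}, '
            f'client_id={getattr(conn, "clientId", None)}, '
            f'acnts={self.acnts}'
            ')>'
        )
```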
Other mod logging de-noising:
- better status fmting in `.symbols.open_symbol_search()` with
`repr(Client)`.
- for `.feed.stream_quotes()` first quote reporting use `.runtime()`.
- timestamps came as `'date'`-keyed from 2022 and before but now are
`'datetime'`..
- some symbols seem to have no commission field, so handle that..
- when no `'price'` field found return `None` from `norm_trade()`.
- add a warn log on mid-fill commission updates.
Like other backends, use the `AsyncClient` for all venue-specific
client-sessions, but change to allocating them inside `get_client()`
using an `AsyncExitStack` and inserting them directly into the
`Client.venue_sesh: dict` table during init.
Supporting impl tweaks:
- remove most of the API client session building logic and instead make
`Client.__init__()` take in a `venue_sessions: dict` (set to
`.venue_sesh`) and a `conf: dict`, opting to do the http client
configuration inside `get_client()` since all that code only needs to
be run once.
|_ load config inside `get_client()` once.
|_ move session token creation into a new util func `init_api_keys()` and
also call it from the `get_client()` factory; toss an example toml
section config into the doc string.
- define `_venue_urls: dict[str, str]` (content taken from old static
`.venue_sesh` dict) at module level and feed them as `base_url: str`
inputs to the client create loop.
- adjust all call sigs in httpx-sesh-using methods, namely just
`._api()`.
- do a `.exch_info()` call in `get_client()` to cache the symbology
set.
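Putting the above together, the allocation pattern is roughly the
following sketch (the venue URLs and the `Client` stub are illustrative;
the real factory obviously does more config/auth handling):

```python
from contextlib import AsyncExitStack, asynccontextmanager as acm
import httpx


class Client:
    # stub of the backend client per the notes above
    def __init__(self, venue_sessions: dict, conf: dict) -> None:
        self.venue_sesh = venue_sessions
        self.conf = conf


# module-level venue -> base-url table (values are illustrative)
_venue_urls: dict[str, str] = {
    'spot': 'https://api.kucoin.com',
    'futes': 'https://api-futures.kucoin.com',
}


@acm
async def get_client():
    conf: dict = {}  # normally loaded once from `brokers.toml`
    venue_sessions: dict[str, httpx.AsyncClient] = {}

    async with AsyncExitStack() as stack:
        for venue, base_url in _venue_urls.items():
            # each venue gets its own http session which is torn down
            # with the stack when this acm exits
            venue_sessions[venue] = await stack.enter_async_context(
                httpx.AsyncClient(base_url=base_url)
            )

        client = Client(venue_sessions=venue_sessions, conf=conf)
        # (the one-time `.exch_info()` symbology-cache call goes here)
        yield client
```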
Unrelated changes for various other outstanding buggers:
- to get futures feeds correctly loading when selected
from search (like 'XMRUSDT.USDTM.PERP'), expect a `MktPair` input to
`Client.bars()` such that the exact venue-key can be looked up (via
a new `.pair2venuekey()` meth) and then passed to `._api()`.
- adjust `.broker.open_trade_dialog()` to failover to paper engine when
there's no `api_key` key set for the `subconf` venue-key.
Like we do with other history backends to indicate lack of a data set.
This avoids any raise that will bring down the backloader task with
some downstream error.
Raise a `ValueError` on no time index for now.
Code base is already ported for `Qt6` so this removes the pyqt5 dep,
adds the latest pyqt6 as well as a buncha other updates:
- add `xonsh` and ptk as dev deps for those of us using wacky shells ;P
- bump compiled deps as needed for python 3.12 (`numpy`, `numba`)
- add `httpx` and drop `asks` since the latter is zombied and not compat
with other libs on 3.12.
- add `ruff` linting ignore rules for the new `.ui.qt` shim mod layer.
- few other deps updates to latest versions.
- add in the `keywords` and `classifiers` sections from the old
`setup.py`.
This also officially moves the code base to using `PyQt6` including all
necessary reference changes and enum namespace path moves.
Also includes a small `.ui.order_mode` fix to cancel any
`Order.price <= 0` and a name-error fix where logging used `msg`,
a name already bound to the input order msg..
For the future, like if we ever get a `PyQt7` (or wtv else..), add
a module which allows changing Qt binding lib imports from one spot for
all other `.ui` submodules. In some sense this is like a shoddier, less
dynamic version of how `pyqtgraph.Qt.__init__.py` supports multiple
libs; it might actually make sense to eventually import from their shim
layer instead?
Included is a draft attempt at exposing a bunch of enums under custom
names:
- while the specific groupings of values seem to always stay consistent,
the root enums seem to almost always get moved around in the `PyQtX`
module namespace.
- groupings and/or each top level enum's ns location can more simply be
changed/re-orged from one spot.
- allows `.ui` consumer code to use a name more relevant to `piker`'s
usage of wtv UI component is being configured.
Such that our internal structs can be pretty printed with indented and
type-hinted fields, AND for nested `Struct`-fields call `.pformat()`
while avoiding any recursion errors using `pprint.saferepr()`. Add
a `._sin_props()` iterator over the non-property fields; use it for
`dict` casting when called with `.to_dict(include_non_members=False)`.
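A rough sketch of the idea (here using `msgspec.structs.fields()` for
the field iteration; the real `._sin_props()` and exact output layout
differ):

```python
from pprint import saferepr
import msgspec


class Struct(msgspec.Struct):

    def _sin_props(self):
        # iterate only the "sin" (without) `@property` field names
        for fi in msgspec.structs.fields(self):
            yield fi.name

    def pformat(self, indent: int = 2) -> str:
        lines: list[str] = [f'{type(self).__name__}(']
        for name in self._sin_props():
            val = getattr(self, name)
            # recurse into nested `Struct`s but use `saferepr()` for
            # everything else so cyclic refs can't recursion-error us
            body: str = (
                val.pformat(indent)
                if isinstance(val, Struct)
                else saferepr(val)
            )
            lines.append(f'{" " * indent}{name}: {type(val).__name__} = {body},')
        lines.append(')')
        return '\n'.join(lines)
```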
Actually, we should also probably figure out how to only pprint when
required by the user in a REPL or log msg, by context-selectively using
`pprint.PrettyPrinter`, right? Also, if we can generalize decently enough
it'd be cool to maybe patch this in as a util to upstream `msgspec`?
Ran into this trading a peenee where a dark got (mistakenly) submitted
at a price of 0 and then that consequently broke upstream allocator/pp
code due to a divide-by-zero.. So instead always check for a zero-price
(since that should never ever be valid in any market) and instead cancel
any such order in the EMS and return `None` so that upstream callers can
ignore it without crash handling.
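The guard itself is tiny; a sketch (the func name here is hypothetical
and the EMS-side cancel is elided):

```python
import logging

log = logging.getLogger(__name__)


def validate_price(order) -> object | None:
    '''
    A zero (or negative) price is never valid in any market, so cancel
    such an order instead of letting it hit allocator/pp math and
    divide-by-zero downstream.

    '''
    if order.price <= 0:
        log.warning(f'Invalid (<= 0) price order, cancelling: {order!r}')
        # (the real code also issues the EMS-side cancel here)
        return None

    return order
```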
Since we definitely want to mark up gaps that are both data-errors and
just plain old venue closures (time gaps), generalize the `gaps:
pl.DataFrame` loop in a func and call it from the `ldshm` cmd for now.
Some other tweaks to `store ldshm`:
- add `np.median()` failover when detecting (ohlc) ts sample rate.
- use new `tsp.dedupe()` signature.
- differentiate between sample-period size gaps and venue closure gaps
by calling `tsp.detect_time_gaps()` with diff size thresholds.
- add todo for backfilling null-segs when detected.
Since service daemon actors may be cancelled remotely by clients (who
may have also requested said daemon-actor's spawn in the first place) we
specifically ignore `tractor.ContextCancelled`s from the `ctx.wait()`
inside `Services.start_service_task()` to avoid crashing the service
mngr, and thus for now `pikerd` (which **does** happen now due to
updated and more explicit remote cancellation semantics implemented in
`tractor`) since the `.canceller` field is not going to match the
`pikerd` uid in such cases!
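The shape of the check is roughly the following sketch (the `Services`
internals and the uid comparison are paraphrased from the above, not
copied from the real code):

```python
import logging
import tractor

log = logging.getLogger(__name__)


async def _wait_on_service_ctx(
    ctx: tractor.Context,
    pikerd_uid: tuple[str, str],
) -> None:
    try:
        await ctx.wait()
    except tractor.ContextCancelled as cce:
        if cce.canceller == pikerd_uid:
            raise  # cancelled by `pikerd` itself, bubble it up

        # a *client* cancelled the daemon-actor it requested; don't
        # crash the service mngr (and thus `pikerd`) over it.
        log.warning(
            f'Service remotely cancelled by {cce.canceller}, ignoring..'
        )
```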
This explicit check makes sense since the `Services` mngr is built to
allow remote requests to "spawn-n-supervise service actors" where the
services can remain persistent but also cancelled later as requested. We
may want to consider only allowing cancellation by actors who requested
spawn in the future tho?
Also change to more explicit imports to `tractor` types for annots
throughout the sub-pkg.
Since certain actors (even if client-like) will want to augment their
module set to provide remote features (such as our new rc annotation
msg-prot for `Qt`-chart actors) we need to ensure we merge in any input
`enable_modules: list[str]` to the value passed to the underlying
`tractor` spawning api. Previously we were passing `_root_modules` as
this value by name, but now instead we simply `list.extend()` that into
whatever is in the `kwargs` both in `open_piker_runtime()` and
`open_pikerd()`.
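A paraphrased sketch of the merge (the module names and
`open_root_actor()` kwargs shown are placeholders):

```python
from contextlib import asynccontextmanager as acm
import tractor

# always-enabled runtime modules (contents here are placeholders)
_root_modules: list[str] = [
    'piker._root_mod',
]


@acm
async def open_piker_runtime(
    enable_modules: list[str] = [],
    **kwargs,
):
    # extend the caller's module set into the root set instead of
    # passing `_root_modules` alone by name (which dropped the input)
    kwargs['enable_modules'] = [
        *_root_modules,
        *enable_modules,
    ]
    async with tractor.open_root_actor(
        name='piker',  # placeholder
        **kwargs,
    ) as actor:
        yield actor
```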
Presuming the data provider gives us a config with a `frame_types: dict`
(indicating frame sizes per query/request) we try to be clever and
decrement our submitted `end_dt: DateTime` based on it.. hoping for the
best on the next attempt.
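In a sketch it's just the following (the helper name is hypothetical;
assuming, as in the `.ib` notes further down, that `frame_types` maps a
sample period in seconds to a `pendulum.Duration` frame length):

```python
import pendulum


def decrement_end_dt(
    end_dt: pendulum.DateTime,
    timeframe: int,                             # sample period in seconds
    frame_types: dict[int, pendulum.Duration],  # provider's frame sizes
) -> pendulum.DateTime:
    # back the next request's `end_dt` off by one (provider-defined)
    # frame length and hope the next attempt returns data..
    return end_dt - frame_types[timeframe]
```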
Apparently publishing futures contracts that aren't yet trading AND
changing their contract type `str` format/schema was necessary (such
that there's a f@#$in space in it..)?
I honestly have no idea where they found their "data engineers" XD
TO CHERRY to #520
So now a chart rc client can ask to invoke the new
`Viz.reset_graphics()` by timeframe and fqme Bo. This is handy when doing
underlying (real time or tsp) edits and you want to make the UI reflect
the changes incrementally.
Impl deatz:
- tweak the msg schema to use a `cmd: str` which normally maps to
(something similar to) the UI method name instead of `annot` and now
offer 3 such "commands": 'redraw', 'remove', 'SelectRect'.
- impl `AnnotCtl.redraw()` which sends the underlying `msg: dict` on the
correct `tractor.MsgStream` ipc instance.
- since ipc-stream lookups now happen in multiple client methods impl
a private `._get_ipc()` to do the error raise on unknown fqmes.
Been hitting wayy too many cases like this so, finally put my foot down
and stuck in a buncha helper code to figure out why (especially for gappy
ass pennies) this can/is happening XD
inside the `.ib.api.Client()`:
- in `.bars()` pack all `.reqHistoricalDataAsync()` kwargs into a dict such that
when/if we rx a blank frame we can enter pdb and make sync calls using
a little `get_hist()` closure from the REPL (see the sketch after this
list).
- tidy up type annots a bit too.
- add a new `.maybe_get_head_time()` meth which will return `None` when
the dt can't be retrieved for the contract.
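For reference, the kwargs-packing/debug-closure pattern from the first
bullet looks roughly like this sketch (the kwarg set shown is
abbreviated, not the full `reqHistoricalDataAsync()` signature):

```python
from ib_insync import IB


class Client:
    ib: IB

    async def bars(self, contract, end_dt, duration: str, bar_size: str):
        req_kwargs: dict = dict(
            contract=contract,
            endDateTime=end_dt,
            durationStr=duration,
            barSizeSetting=bar_size,
        )
        bars = await self.ib.reqHistoricalDataAsync(**req_kwargs)

        if not bars:
            # sync-call closure usable from the REPL/pdb prompt to
            # re-issue (tweaked) versions of the same query
            def get_hist(**overrides):
                return self.ib.reqHistoricalData(
                    **(req_kwargs | overrides)
                )

            # enter the debugger (piker normally uses `tractor`'s pdb
            # integration here)
            breakpoint()

        return bars
```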
inside `.feed.open_history_client()`:
- use new `Client.maybe_get_head_time()` and only do `DataUnavailable`
raises when the requested `end_dt` is actually earlier.
- when `get_bars()` returns a `None` and the `head_dt` is not earlier
than the `end_dt` submitted, raise a `NoData` with more `.info: dict`.
- deliver a new `frame_types: dict[int, pendulum.Duration]` as part
of the yielded `config: dict`.
- in `.get_bars()` always assume a `tuple` returned from
`Client.bars()`.
- return a `None` on empty frames instead of raising `NoData` at this
call frame.
- do more explicit imports from `pendulum` for brevity.
inside `.brokers._util`:
- make `NoData` take an `info: dict` as input to allow backends to pack
in empty frame meta-data for (eventual) use in the tsp back-filling
layer.
This took a teensie bit of reworking in some `.ui` modules
more or less in the following order of functional dependence:
- add a `Ctl-R` kb-binding to trigger a `Viz.reset_graphics()` in
the kb-handler task `handle_viewmode_kb_inputs()`.
- call the new method on all `Viz`s (& for all sample-rates) and
`DisplayState` refs provided in a (new input)
`dss: dict[str, DisplayState]` table, which was originally init-ed
from the multi-feed display loop (so orig in `.graphics_update_loop()`
but now provided as an input to that func, see below..)
- `._interaction`: allow binding in `async_handler()` kwargs (via
a `functools.partial`) passed to `ChartView.open_async_input_handler()`
such that arbitrary inputs to our kb+mouse handler funcs can accept
"wtv we desire".
- use ^ to bind in the aforementioned `dss` display-state table to
said handlers!
- define the `dss` table (as mentioned) inside `._display.display_symbol_data()`
and pass it into the update loop funcs as well as the newly augmented
`.open_async_input_handler()` calls,
- drop calling `chart.view.open_async_input_handler()` from the
`.order_mode.open_order_mode()`'s enter block and instead factor it
into the caller to support passing the `dss` table to the kb
handlers.
- comment out the original history update loop handling of forced `Viz`
redraws entirely since we now have a manual method via `Ctl-R`.
- now, just update the `._remote_ctl.dss: dict` with this table since
we want to also provide rc for **all** loaded feeds, not just the
currently shown one/set.
- docs, naming and typing tweaks to `._event.open_handlers()`
Since we're now using it in multiple layers it probably makes sense to
impl and wrap it more correctly / publicly. The main (recent) use case is
where editing an underlying time series and then wanting to refresh the
graphics layers to reflect the changes in a chart. Part of this also
obviously includes wiping the y-range mx/mn cache.
Also ensure that `force_redraw` is proxying through to any `BarItems`
via the new `render_baritems()` func kwarg even when switching between
downsampled-line vs. bars modes.
Since `polars` has a more sane set of (time-zone aware) datetime APIs it
makes more sense and is definitely no slower than the previous `numpy`
impl. Also, actually use the sample-rate specific formats defined in
`DynamicDateAxis.tick_tpl: dict[int, str]`, finally using the new
`Viz.time_step()` property.
Since we end up needing the actual (OHLC sampled) time step info (at
least in seconds) for various purposes (in this specific follow up use
case to determine sample-rate specific `datetime` format strings for
a charted time series x-axis label), allow always reading it from the
viz with the presumption (at least for now) the underlying data-frame
will have an epoch `'time'` col/field.
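A sketch of the read (the shm attr access is paraphrased and the
median-of-diffs is just one robust way to get the step):

```python
import numpy as np


class Viz:
    shm: 'ShmArray'  # underlying shared-mem array wrapper (assumed attr)

    def time_step(self) -> float:
        # the (OHLC) sample period in seconds read straight off the
        # epoch `'time'` col; median-of-diffs so the odd gap can't skew it
        times: np.ndarray = self.shm.array['time']
        return float(np.median(np.diff(times)))
```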
Thanks to oremanj in the `trio` room for this hot style tip which i much
prefer since it means less LOC and fewer places to change sub-pkg name
exports!
Also drop expecting a `gaps` frame output from `dedupe()`.
Turns out we were always filtering to time gaps longer than a day smh..
Instead tweak `detect_time_gaps()` to only return venue-gaps when
a `gap_dt_unit: str` is passed and pass `'days'` (like it was by default
before) from `dedupe()` though we should really pass in an actual venue
gap duration in the future.
In theory the `async for msg` loop can be re-purposed without having to
always call `remote_annotate()` so factor it into a new
`serve_rc_annots()` and then just call it from the former (for now) with
the wrapping `try:` block outside to delete per-client-ctx annotation
instance sets. Also, use some type aliases instead of repeatedly
defining the same complex `dict`-table defs B)
In prep for supporting reverse-ipc connect-back to UI actors from
middle-ware systems (for the purposes of triggering data-view canvas
re-renders and built-in tsp annotations), add a new struct type to
better generalize the management of remote feed subscriptions. Include
a `Sub.rc_ui: bool` for now (with nearby todo-comment) and expose an
`allow_remote_ctl_ui: bool` through the feed endpoints to help drive
/ prep for all that ^
Rework all the sampler tasks to expect the `Sub`'s new iface (sketched below):
- split up the `Sub.ipc: MsgStream` and `.send_chan` as separate fields
since we're handling the throttle case in separate
`sample_and_broadcast()` logic blocks anyway, which avoids needing to
monkey-patch on the `._ctx` malarky..
- explicitly provide the optional handle to the `_throttle_cs:
CancelScope` again for the case where throttling/event-downsampling is
requested.
- add `_FeedsBus.subs_items()` as a public iterator.
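Roughly, the new sub type looks like the following sketch (field names
follow the notes above; exact types, defaults and the throttle-rate
field name are paraphrased):

```python
import msgspec
import tractor
import trio


class Sub(msgspec.Struct):
    '''
    A per-consumer feed subscription (sketch).

    '''
    ipc: tractor.MsgStream                            # raw quote stream to the consumer
    send_chan: trio.MemorySendChannel | None = None   # only set when throttling
    throttle_rate: float | None = None                # requested quotes/sec (name assumed)
    _throttle_cs: trio.CancelScope | None = None      # cancels the downsampler task
    rc_ui: bool = False                               # consumer is a remote-controllable UI
```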
Apparently `.storage.nativedb.mk_ohlcv_shm_keyed_filepath()` was always
kinda broken if you passed a `period: float` with an actual non-`int`
value into the format string? Fixed it to strictly cast to `int()` before
str-ifying so that you don't get weird `60.0s.parquet` in there..
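The fix is just a strict cast before interpolating; sketched (the exact
filename layout here is approximate):

```python
from pathlib import Path


def mk_ohlcv_shm_keyed_filepath(
    fqme: str,
    period: float | int,  # ohlc sample period in seconds
    datadir: Path,
) -> Path:
    # strictly cast so a `60.0` input can't render as `60.0s.parquet`
    return datadir / f'{fqme}.ohlcv{int(period)}s.parquet'
```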
Further this rejigs the `store ldshm` gap correction-annotation loop to,
- use `StorageClient.write_ohlcv()` instead of hackily re-implementing
it.. now that problem from above is fixed!
- use a `needs_correction: bool` var to determine if gap markup and
de-duplicated data should be pushed to the shm buffer,
- go back to using `AnnotCtl.add_rect()` for all detected gaps such that
they all persist (and thus are shown together) until the client
disconnects.
For non-full-`.__aexit__()` handlers we need this method instead (facepalm).
Also create and assign the `AnnotCtl._annot_stack: AsyncExitStack` just
before yielding the client since it's not needed prior and ensures annot
removal happens **before** ipc teardown.
Since leaking annots to a remote `chart` actor probably isn't a thing we
want to do (often), add a removal/deletion handler block to the
`remote_annotate()` ctx which can be triggered using a `{rm_annot: aid}`
msg.
Augment the `AnnotCtl` with,
- `.remove()` which sends said msg (from above) and returns a `bool`
indicating success.
- add an `.open_rect()` acm which does the `.add_rect()` / `.remove()`
calls underneath for use in scope oriented client usage (sketched
below).
- add a `._annot_stack: AsyncExitStack` which will always have any/all
non-`.open_rect()` calls to `.add_rect()` register removal on client
teardown, to avoid leaking annots when a client finally disconnects.
- comment out the `.modify()` meth idea for now.
- rename all `Xstream` var-tags to `Xipc` names.
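The scope-oriented usage from the `.open_rect()` bullet boils down to a
sketch like the following (assuming `.add_rect()` returns the
annotation's `aid`):

```python
from contextlib import asynccontextmanager as acm


class AnnotCtl:
    ...

    @acm
    async def open_rect(self, **rect_kwargs):
        # add the rect on entry and always remove it on exit so nothing
        # is leaked into the remote `chart` actor.
        aid: int = await self.add_rect(**rect_kwargs)
        try:
            yield aid
        finally:
            await self.remove(aid)
```

So client code can do `async with actl.open_rect(...) as aid:` and the
annot only lives for the duration of the block.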