Commit Graph

4179 Commits (1e3a4ca36d046a63a2dbdcc9113f98a5242db391)

Author SHA1 Message Date
Tyler Goodlet bc58e42a74 Refine accounting related config loading routine doc strings 2023-06-27 13:41:47 -04:00
Tyler Goodlet 77dfeb4bf2 Update brokerd msgs with modern type annots, add a "closed" status 2023-06-27 13:41:47 -04:00
Tyler Goodlet f2c1988536 Better empty account console msg styling 2023-06-27 13:41:47 -04:00
Tyler Goodlet 81d5ca9bc2 ib: drop `ibis` import and use fq object imports instead 2023-06-27 13:41:47 -04:00
Tyler Goodlet a4b8fb2d6b Woops, drop paper mode detection on client side.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet e7437cb722 Facepalm, break on first matching trades ep.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet f81ea64cab Drop unused `Union` 2023-06-27 13:41:47 -04:00
Tyler Goodlet 2e878ca52a Don't pass loglevel to trade dialog endpoint
It's been getting set up in the `brokerd` daemon-actor spawn task for
a while now and worker tasks already get a ref to that global log
instance, so they don't need to care about it in (data or trading) task
spawn endpoints.

Also move to the new `open_trade_dialog()` naming for working broker
backends B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 6b2e85e4b3 Add type-annots to sampler subscription method internals 2023-06-27 13:41:47 -04:00
Tyler Goodlet 6a1c49be4e view_mode: handle duplicate overlay dispersions
Discovered due to a history loading bug between btcusdt futes displays
where the same time series was being loaded into the graphics system;
this avoids the issue where 2 (or more) curves are measured to have the
same dispersion and thus do not get added as unique entries to the
`overlay_table: dict[float, tuple]` during the scaling phase..

Practically speaking this should never really be a problem if the curves
(and their backing timeseries) are indeed unique, but keying the overlay
table by both the dispersion and the `Viz` is a minimal performance hit
when looping the sorted table and is a lot nicer when you **do want to
show** duplicate curves than having one overlay just not be ranged
correctly at all XD
2023-06-27 13:41:47 -04:00
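
A minimal sketch of the keying fix described above, using plain strings as
stand-ins for the real `Viz` instances; not the actual view-mode code:

```python
# two curves that happen to measure to the same dispersion
vizs = [('btcusdt.spot', 0.042), ('btcusdt.perp', 0.042)]

# keyed only by dispersion: the second curve silently overwrites the first
by_disp: dict[float, str] = {disp: name for name, disp in vizs}
assert len(by_disp) == 1

# keyed by (dispersion, viz): both entries survive and a sorted loop over
# the table still orders primarily by dispersion.
by_disp_viz: dict[tuple[float, str], str] = {
    (disp, name): name for name, disp in vizs
}
assert len(by_disp_viz) == 2
for (disp, _), name in sorted(by_disp_viz.items()):
    print(disp, name)
```
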
Tyler Goodlet 0f8c685735 .clearing._client: return early on cancel-dead-dialog attempts 2023-06-27 13:41:47 -04:00
Tyler Goodlet 921e18728c Move `._cacheables.open_cached_client()` into `.brokers` pkg mod 2023-06-27 13:41:47 -04:00
Tyler Goodlet c0552fa352 Just use brokermods dict directly in chart entrypoint now 2023-06-27 13:41:47 -04:00
Tyler Goodlet 90810dcffd Right partition the fqme to remove broker part in mkt-info cli 2023-06-27 13:41:47 -04:00
Tyler Goodlet ebbfa7f48d Passthrough kwargs to `open_cached_client()` 2023-06-27 13:41:47 -04:00
Tyler Goodlet bb02775cab Change `ledger` CLI to use new `open_brokerd_dialog()`
Instead of effectively (and poorly) duplicating the trade dialog setup
logic, just use the new helper we exposed in the EMS module B)
Also, handle paper accounts that have no existing ledger / positions.
2023-06-27 13:41:47 -04:00
Tyler Goodlet b15e736e3e Change `piker symbol-info` -> `mkt-info`
As part of bringing the brokerd-agnostic APIs up to date and modernizing
the wrapping CLIs, this adds a new sub-cmd to allow more or less directly
calling the `.get_mkt_info()` broker mod endpoint and dumping both the
backend-specific `Pair`-ish and the `.accounting.MktPair` normalized
versions to console.

Deatz:
- make the click config's `brokermods` entry a `dict`
- make `.brokers.core.mkt_info()` strip the broker name part from the
  input fqme before calling the backend.
2023-06-27 13:41:47 -04:00
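
A small sketch of the fqme right-partition idea mentioned above (and in the
"Right partition the fqme" commit), assuming the broker name is always the
final dot-suffix; purely illustrative:

```python
def strip_broker(fqme: str) -> tuple[str, str]:
    # split off the trailing broker part,
    # e.g. 'xbtusd.kraken' -> ('xbtusd', 'kraken')
    sym, _, broker = fqme.rpartition('.')
    return sym, broker


assert strip_broker('btcusdt.usdtm.perp.binance') == (
    'btcusdt.usdtm.perp',
    'binance',
)
```
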
Tyler Goodlet cc3037149c Factor `brokerd` trade dialog init into acm
Connecting to a `brokerd` daemon's trading dialog via a helper `@acm`
func is handy so that arbitrary trading middleware clients **and** the
ems can set up a trading dialog and, at the least, query existing
position state; this is in fact our immediate need when simply querying
for an account's position status in the `.accounting.cli.ledger` cli.

It's now exposed (for now) as `.clearing._ems.open_brokerd_dialog()` and
is called by the `Router.maybe_open_brokerd_dialog()` for every new
relay allocation or paper-account engine instance.
2023-06-27 13:41:47 -04:00
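
An illustrative (not actual) usage shape for such an `@acm` helper; the
arguments and the yielded position state here are assumptions made for the
example:

```python
from contextlib import asynccontextmanager as acm

import trio


@acm
async def open_brokerd_dialog(broker: str, account: str):
    # the real helper would connect to the `brokerd` actor and run the
    # backend's trade dialog endpoint; here we just yield canned state.
    yield {f'{account}.{broker}': []}  # account -> positions


async def main():
    # e.g. what a ledger-style cli needs: just the current position state
    async with open_brokerd_dialog('paper', 'ledger-check') as positions:
        print(positions)


trio.run(main)
```
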
Tyler Goodlet d704d631ba Add `store ldshm` subcmd
Changed from the old `store clone` to instead simply load any shm buffer
matching a user-provided `FQME: str` pattern; writing to a parquet file is
only done if an explicit option flag is passed by the user.

Implement a new `iter_dfs_from_shms()` generator which allows iteratively
loading both 1m and 1s buffers, delivering the `Path`, `ShmArray` and
`polars.DataFrame` instances per matching file B)

Also add a todo for a `NativeStorageClient.clear_range()` method.
2023-06-27 13:41:47 -04:00
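
A rough stand-in for the kind of loop `iter_dfs_from_shms()` enables; the
real generator also yields the live `ShmArray` handle (omitted here), and
the data directory / glob pattern are hypothetical:

```python
from pathlib import Path

import polars as pl


def iter_dfs(datadir: Path, fqme_pattern: str):
    # yield (path, dataframe) pairs for every matching parquet file
    for path in sorted(datadir.glob(f'*{fqme_pattern}*.parquet')):
        yield path, pl.read_parquet(path)


datadir = Path('.')  # wherever the shm/parquet buffers live
for path, df in iter_dfs(datadir, 'btcusdt'):
    print(path.name, df.shape)
```
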
Tyler Goodlet 58c096bfad Bleh go back to using pdbp for REPL in anal 2023-06-27 13:41:47 -04:00
Tyler Goodlet 9eeea51165 Define shm buffer sizing in `.data.history`
Also adjust sizing such that the history buffer will backfill the last
six years by default (in 1m OHLC) and the hft buffer will do only 3 days'
worth. Also ensure the fsp layer passes the src shm's buffer size when
allocating since the size is now required by allocators in the shm apis.
2023-06-27 13:41:47 -04:00
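
Back-of-envelope sizing for the defaults described above; the exact
constants and rounding used in `.data.history` may differ:

```python
mins_per_day = 60 * 24

hist_buf_size = 6 * 365 * mins_per_day  # ~6 years of 1m OHLC -> 3_153_600 rows
hft_buf_size = 3 * 24 * 60 * 60         # 3 days of 1s samples ->  259_200 rows

print(hist_buf_size, hft_buf_size)
```
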
Tyler Goodlet 33ec27715b Sync shm mod with dev version in `tractor`, drop buffer sizing vars, require `size: int` to all allocators 2023-06-27 13:41:47 -04:00
Tyler Goodlet e1be098406 Only hard re-render `Viz`s matching backfill deats
Avoid unnecessarily re-rendering the wrong (1min OHLC history) chart
and/or other such charts with update tasks listening to the sampler
stream. Instead only redraw in tasks that are updating vizs which match
the actual details of the backfill event.

We can probably also eventually match against a range tuple (emitted in
the msg) and then have the task further only update the formatter layer
unless the range is actually in view?
2023-06-27 13:41:47 -04:00
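
A sketch of the "only redraw on matching details" guard; the msg field name
and payload shape are assumptions based on the description (see also the
"Emit backfill details in broadcasts" commit below):

```python
def should_redraw(msg: dict, viz_name: str, viz_tf: int) -> bool:
    # only hard re-render when the backfill event names *this* viz and
    # its sampling period.
    name, tf = msg.get('backfilling', (None, None))
    return (name, tf) == (viz_name, viz_tf)


assert should_redraw({'backfilling': ('btcusdt.binance', 60)}, 'btcusdt.binance', 60)
assert not should_redraw({'backfilling': ('btcusdt.binance', 60)}, 'btcusdt.binance', 1)
```
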
Tyler Goodlet dd3e4b5a1f Emit backfill details in broadcasts
Send both the `Viz.name` and `timeframe: int` so that the UI side can
match against them and only update a lone curve in a single plot.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 2a1835843f Drop `wap_in_history` stuff from display loop
It's no longer part of the default OHLCV array-buffer schema and just
generally we should be processing and managing **any** non-source data
in the FSP subsystem(s) despite it maybe being provided as a default by
some backends.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 8947932289 Use last 16 steps in period detection, not first 16.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 0484e97382 Try to not overrun shm during gap backfilling.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 5251561e20 TOCHERRY: into #486, add polars/apache deps for nix 2023-06-27 13:41:47 -04:00
Tyler Goodlet 937d8c410d binance: add futes API link, freeze the agg tradez struct 2023-06-27 13:41:47 -04:00
Tyler Goodlet 75ff3921b6 ib: fix mega borked hist queries on gappy assets
Explains why stuff always seemed wrong before XD

Previously whenever a time-gappy asset (like a stock due to its venue
operating hours) was being loaded, we weren't querying for a "duration's
worth" of bars and this was causing all sorts of actual gaps in our
data set that shouldn't exist..

Fix that by always attempting to retrieve a min aggregate-time's
worth/duration of bars/datums in the history manager. Actually,
I implemented this in both the feed and api layers for this backend
since it doesn't seem to strictly work just implementing it at the
`Client.bars()` level, not sure why but..

Also, buncha `ruff` linting cleanups and fix the logger nameeee, lel.
2023-06-27 13:41:47 -04:00
Tyler Goodlet c8f8724887 Mask out all the duplicate frame detection 2023-06-27 13:41:47 -04:00
Tyler Goodlet c1546eb043 Add note about appending parquet files on write 2023-06-27 13:41:47 -04:00
Tyler Goodlet f8ab3bde35 Allow sampler step events to overrun; only 1s period 2023-06-27 13:41:47 -04:00
Tyler Goodlet c1201c164c Parametrize index margin around gap detection segment 2023-06-27 13:41:47 -04:00
Tyler Goodlet a575e67fab Go back to just opening sampler stream inside history update task? 2023-06-27 13:41:47 -04:00
Tyler Goodlet 34dd6ffc22 Add a configurable timeout around backend live feed startup
For now make it a larger value but ideally in the long run we can tune
it to specific backends and expose it in the config(s).
2023-06-27 13:41:47 -04:00
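
One way such a startup timeout could be wrapped with `trio`; the default
value, names and the fake quote source are placeholders, not the actual
feed code:

```python
import trio


async def wait_first_quote(get_quote, live_feed_timeout: float = 60.0):
    # raise `trio.TooSlowError` if the backend takes too long to go live
    with trio.fail_after(live_feed_timeout):
        return await get_quote()


async def main():
    async def fake_quote():
        await trio.sleep(0.1)
        return {'last': 30_000.0}

    print(await wait_first_quote(fake_quote))


trio.run(main)
```
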
Tyler Goodlet fda7111305 Import from new `.data._timeseries` mod for anal 2023-06-27 13:41:47 -04:00
Tyler Goodlet 8233d12afb Detect and fill time gaps in tsdb history
For now, just detect and fill in gaps (via fresh backend queries)
*in the shm buffer*, but eventually I'm pretty sure we can just write
these directly to the parquet file as well.

Use the new `.data._timeseries.detect_null_time_gap()` to find and fill
in the `ShmArray` index range, re-check it and enter a prompt if it
didn't totally fill.

Also,
- do a massive cleanup and removal of all unused/commented code.
  - drop the duplicate frames tracking, don't think we need it after
    removing multi-frame concurrent queries.
- change backfill loop variable `end_dt` -> `last_start_dt` which is
  more semantically correct.
- fix logic to backfill any missing sub-sequence portion for any frame
  query that overruns the shm buffer prependable space by detecting
  the available rows left to insert and only push those.
  - add a new `shm_push_in_between()` helper to match.
2023-06-27 13:41:47 -04:00
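
A guess at the shape of "null time gap" detection, assuming unwritten shm
rows show up with a zeroed `time` field; not the actual
`detect_null_time_gap()` implementation:

```python
import numpy as np

# epoch stamps with an unwritten (zeroed) run in the middle
times = np.array([100, 160, 220, 0, 0, 0, 400, 460], dtype=float)

nulls = np.flatnonzero(times == 0)
if nulls.size:
    start, stop = int(nulls[0]), int(nulls[-1]) + 1
    print(f'null gap spans rows [{start}:{stop})')
    # a re-queried frame for that epoch range would then be pushed into
    # exactly this slice (cf. `shm_push_in_between()` above).
```
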
Tyler Goodlet f25248c871 Add `.data._timeseries` utility mod
Org all the new (time) gap detection routines here and also move in the
`slice_from_time()` epoch -> index converter routine from `._pathops` B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 54f8a615fc Use `code.interact()` in anal subcmd for now 2023-06-27 13:41:47 -04:00
Tyler Goodlet 2dbcecdac7 Generalize time-gap detector to accept unit and threshold 2023-06-27 13:41:47 -04:00
Tyler Goodlet 0dcfcea6ee Finally get partial backfills after tsdb load workinnn
It took a little while (and a lot of commenting out of old, no longer
needed code) but this gets tsdb (from parquet file) loading *before*
final backfilling from the most recent history frame until the most
recent tsdb time stamp!

More or less all the convoluted concurrency shit we had for coping with
`marketstore` IPC junk is no longer needed, particularly all the query
size limits and accompanying load loops.. The recent frame loading
technique/order *has* now changed though since we'd like to show charts
asap once tsdb history loads.

The new load sequence is as follows:
- load mr (most recent) frame from backend.
- load existing history (one shot) from the "tsdb" aka parquet files
  with `polars`.
- backfill the gap part from the mr frame back to the tsdb start
  incrementally by making (hacky) `ShmArray.push(start=<blah>)` calls
  and *not* updating the `._first.value` while doing it XD

Dirtier deatz:
- make `tsdb_backfill()` run per timeframe in a separate task.
  - drop all the loop through timeframes and insert `dts_per_tf` crap.
  - only spawn a subtask for the `start_backfill()` call which in turn
    only does the gap backfilling as mentioned above.
- mask out all the code related to being limited to certain query sizes
  (over gRPC) as was restricted by marketstore.. not gonna go through
  what all of that was since it's probably getting deleted in a follow
  up commit.
- buncha off-by-one tweaks to do with backfilling the gap from mr frame
  to tsdb start.. mostly tinkered it to get it all right but seems to be
  working correctly B)
- still use the `broadcast_all()` msg stuff when doing the gap backfill
  though don't have it really working yet on the UI side (since
  previously we were relying on the shm first/last values..), so this
  will be "coming soon" :)
2023-06-27 13:41:47 -04:00
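
A highly simplified sketch of that load order; every helper here is a
made-up stand-in (operating on bare epoch stamps), not piker's actual
history API:

```python
import trio


async def load_mr_frame(tf: int) -> list[int]:
    # pretend backend query: the most recent frame of epoch stamps
    return list(range(1_000, 1_100, tf))


def load_tsdb_history(tf: int) -> list[int]:
    # pretend one-shot parquet load of previously stored epoch stamps
    return list(range(0, 900, tf))


async def tsdb_backfill(tf: int) -> None:
    mr = await load_mr_frame(tf)   # 1. most recent frame first (charts asap)
    hist = load_tsdb_history(tf)   # 2. one-shot tsdb (parquet) load
    # 3. only the span between the tsdb's last stamp and the mr frame's
    #    first stamp still needs incremental backfilling from the backend.
    print(f'{tf}s: backfill gap ({hist[-1]}, {mr[0]})')


async def main():
    async with trio.open_nursery() as n:
        for tf in (60, 1):         # one task per sampling period
            n.start_soon(tsdb_backfill, tf)


trio.run(main)
```
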
Tyler Goodlet 7a5c43d01a Support injecting a `info: dict` to `Sampler.broadcast_all()` calls 2023-06-27 13:41:47 -04:00
Tyler Goodlet f1252983e4 kucoin: support start and end dt based bars queries 2023-06-27 13:41:47 -04:00
Tyler Goodlet 6dc3ed8d6a Expose a `force_reformat: bool` up through graphics stack 2023-06-27 13:41:47 -04:00
Tyler Goodlet 4f4860cfb0 Update shm.push() type sig style 2023-06-27 13:41:47 -04:00
Tyler Goodlet 1e683a4b91 Another guard around sampling subscriber popped race.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 9fd412f631 Add basic time-sampling gap detection via `polars`
For OHLCV time series we normally presume a uniform sampling period
(1s or 60s by default) and it's handy to have tools to ensure a series
is gapless or contains expected gaps based on (legacy) market hours.

For this we leverage `polars`:
- add `.nativedb.with_dts()`, a datetime-from-epoch-time-column frame
  "column-expander" which inserts datetime-casted, epoch-diff and
  dt-diff columns.
- add `.nativedb.detect_time_gaps()` which filters to any rows with a
  larger-than-expected sampling period.
- wrap the above (for now) in a `piker store anal` (analysis) cmd which
  atm always enters a breakpoint for tinkering.

Supporting storage client additions:
- add a `detect_period()` helper for extracting expected OHLC time step.
- add new `NativedbStorageClient` methods and attrs to provide for the above:
    - `.mk_path()` to **only** deliver a parquet-file path for use in
      other methods.
    - `._dfs` to house cached `pl.DataFrame`s loaded from `.parquet` files.
    - `.as_df()` which loads cached frames or loads them from disk and
      then caches (for next use).
    - `_write_ohlcv()` a private-sync version of the public equivalent
      meth since we don't currently have any actual async file IO
      underneath; add a flag for whether to return as a `numpy.ndarray`.
2023-06-27 13:41:47 -04:00
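
A rough polars sketch of the dt-column expansion plus gap filter described
above; column names, the expected-period handling and the use of
`pl.from_epoch` are assumptions, not the real `.nativedb` code:

```python
import polars as pl

ohlc = pl.DataFrame({
    'time': [0, 60, 120, 300, 360],  # note the 120 -> 300 gap
    'close': [1.0, 1.1, 1.2, 1.15, 1.3],
})

expected_period_s = 60

with_dts = ohlc.with_columns([
    pl.from_epoch(pl.col('time'), time_unit='s').alias('dt'),
    pl.col('time').diff().alias('s_diff'),
])

# any row whose epoch-diff exceeds the expected sampling period is a gap
gaps = with_dts.filter(pl.col('s_diff') > expected_period_s)
print(gaps)
```
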
Tyler Goodlet d027ad5a4f Whenever there are overlays, set a title on the main chart price-y axis! 2023-06-27 13:41:47 -04:00
Tyler Goodlet 106ebe94bf Drop marketstore and tina install from readme, add polars and apache! 2023-06-27 13:41:47 -04:00