Compare commits

...

2025 Commits

Author SHA1 Message Date
Nelson Torres 49ea380503 Add new `ruff.toml` config file
Based on the default provided in their
[docs](https://docs.astral.sh/ruff/configuration/) and migrating
previous config from the prior `poetry`-version of our `pyproject.toml`
2025-02-17 14:48:10 -05:00
Tyler Goodlet 933f169938 Add/reorg back some content from `poetry` old config 2025-02-14 13:47:02 -05:00
Nelson Torres 51337052a4 Remove legacy `poetry` config content from pyproject.toml 2025-02-14 15:01:29 -03:00
Tyler Goodlet 8abe55dcc6 Add `ruff` to deps, bump `uv.lock`
Such that we start encouraging devs to lint code they touch and
hopefully we'll include a lint pass as part of our tests/CI eventually B)

Also, mk local `tractor` install `--editable` since without it being
a locally hackable repo it's kinda pointless to install from the local
fs Xp
2025-02-13 21:20:11 -05:00
goodboy c933f2ad56 Merge pull request 'kucoin_and_binance_fix' (#9) from kucoin_and_binance_fix into gitea_feats
Reviewed-on: #9
2025-02-13 19:40:50 +00:00
Tyler Goodlet 00108010c9 Mask `pytest` detection block in `piker.config`
Seems to be some kinda super weird env bug since we moved to using
`uv`? When it triggers it also seems to cause a pretty fundamental crash
that not only breaks `tractor.devx._debug` stuff but also seems to get
us in a perma-hang state where no SIGINT or other sys sig will be able
to kill the root proc!?!?

TODO, a `gitea` issue to track so we can fix the fundamental problem as
well as the transitive fault in `tractor`'s core, which seems to be due to
the error taking place during a sub-actor's module import phase; that
prevents the runtime from booting fully and leaves the proc stuck
in a real gnarly SIG-state..
2025-02-13 13:32:11 -05:00
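
As a rough illustration of the kind of detection block being masked out above (names and the flag are hypothetical, not the actual `piker.config` code), a common "are we under `pytest`?" guard looks like:

```python
# Hypothetical sketch only: detect a pytest run via `sys.modules`; the commit
# above masks out a block along these lines due to the `uv`-env related crash.
import sys

def maybe_mark_test_config(conf: dict) -> dict:
    # `pytest` inserts itself into `sys.modules` when it's the test runner,
    # so this is a cheap (if imperfect) under-test heuristic.
    if 'pytest' in sys.modules:
        conf['_testing'] = True  # hypothetical flag name
    return conf
```
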
Tyler Goodlet 8a4901c517 `.binance.feed`: moar type fixes, drop `rapidfuzz` 2025-02-13 12:35:41 -05:00
Tyler Goodlet d7f6a5ab63 Update to latest `KucoinMktPair` spec 2025-02-12 18:08:40 -05:00
Tyler Goodlet e0fdabf651 Use `../tractor` srcs in editable mode? 2025-02-12 15:14:30 -05:00
Tyler Goodlet cb88dfc9da `kucoin`: repair live quotes streaming..
This must have broken at some point during the new `MktPair` and thus
`.fqme: str` updates; more or less the symbol key in the quote-msg-`dict`
was NOT set to the `MktPair.bs_fqme: str` value and thus wasn't being
processed by the downstream sampling and feed subsys.

So fix that as well as a few other refinements,
- set the `topic: mkt.bs_fqme` in quote msgs obvi.
- drop the "wait for first clearing vlm" quote poll loop; going to fix
  the sampler to handle a `first_quote` without a `'last'` key.
- add some typing around calls to `get_mkt_info()`.
- rename `stream_messages()` -> `iter_normed_quotes()`.
2025-02-12 15:05:22 -05:00
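
A minimal sketch of the quote-normalization loop described above (message shapes and field names here are assumptions, not the real `piker.brokers.kucoin` code); the key point is that each quote is keyed/`topic`-ed by `MktPair.bs_fqme` so the downstream sampler picks it up:

```python
from collections.abc import AsyncIterator

async def iter_normed_quotes(
    ws_msgs: AsyncIterator[dict],  # hypothetical raw websocket msg stream
    mkt,  # a `MktPair`-like object with a `.bs_fqme: str`
) -> AsyncIterator[tuple[str, dict]]:
    async for msg in ws_msgs:
        quote = {
            'symbol': mkt.bs_fqme,   # NOT the raw venue symbol!
            'last': msg.get('price'),
            'volume': msg.get('size'),
        }
        # the topic is what the sampling/feed subsys routes on
        yield mkt.bs_fqme, quote
```
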
Nelson Torres bb41dd6d18 Deleted settlePlan field from binance FutesPair. 2025-02-12 15:05:22 -05:00
Nelson Torres 99e90129ad Added missing fields for kucoin.
feeCategory, makerFeeCoefficient, takerFeeCoefficient and st.
2025-02-12 15:05:22 -05:00
Tyler Goodlet cceb7a37b9 Lel, forgot to add a `SPOT` venue for `binance`.. 2025-02-12 15:05:22 -05:00
goodboy 5382815b2d Merge pull request 'uv migration and default.nix for qt6' (#17) from uv_migration into gitea_feats
Reviewed-on: #17
2025-02-12 20:04:02 +00:00
Tyler Goodlet cb1ba8a05f Further root readme bump, factor `.clearing` content
In line with our move to `uv` and recent `nix` support, update a bunch of
the summary content and factor out the order-control section to a new
`.piker.clearing` readme file with embedded todos therein.
2025-02-12 15:01:51 -05:00
Nelson Torres 6c65ec4d3b Readme update:
- replace poetry with uv
- update nix-shell command
2025-02-12 13:05:25 -03:00
Nelson Torres 12e371b027 uv migration 2025-02-12 11:19:34 -03:00
goodboy 014bd58db4 Merge pull request 'go_httpx' (#2) from go_httpx into gitea_feats
Landed-in: #2
2025-02-12 13:01:19 +00:00
Tyler Goodlet 844544ed8e Port binance to `httpx`
Like other backends use the `AsyncClient` for all venue specific
client-sessions but change to allocating them inside `get_client()`
using an `AsyncExitStack` and inserting directly in the
`Client.venue_sesh: dict` table during init.

Supporting impl tweaks:
- remove most of the API client session building logic and instead make
  `Client.__init__()` take in a `venue_sessions: dict` (set it to
  `.venue_sesh`) and `conf: dict`, instead opting to do the http client
  configuration inside `get_client()` since all that code only needs to
  be run once.
 |_load config inside `get_client()` once.
 |_move session token creation into a new util func `init_api_keys()` and
  also call it from `get_client()` factory; toss in an ex. toml section
  config to the doc string.
- define `_venue_urls: dict[str, str]` (content taken from old static
  `.venue_sesh` dict) at module level and feed them as `base_url: str`
  inputs to the client create loop.
- adjust all call sigs in httpx-sesh-using methods, namely just
  `._api()`.
- do a `.exch_info()` call in `get_client()` to cache the symbology
  set.

Unrelated changes for various other outstanding buggers:
- to get futures feeds correctly loading when selected
  from search (like 'XMRUSDT.USDTM.PERP'), expect a `MktPair` input to
  `Client.bars()` such that the exact venue-key can be looked up (via
  a new `.pair2venuekey()` meth) and then passed to `._api()`.
- adjust `.broker.open_trade_dialog()` to failover to paper engine when
  there's no `api_key` key set for the `subconf` venue-key.
2025-02-11 16:27:28 -05:00
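
A hedged sketch of the client-session pattern described above (`_venue_urls` and `venue_sesh` are names from the commit text; the URLs, method bodies and overall structure are illustrative only, not the actual `piker.brokers.binance` implementation):

```python
from contextlib import AsyncExitStack, asynccontextmanager
import httpx

_venue_urls: dict[str, str] = {
    'spot': 'https://api.binance.com',        # example values only
    'usdtm_futes': 'https://fapi.binance.com',
}

class Client:
    def __init__(self, venue_sessions: dict[str, httpx.AsyncClient], conf: dict):
        self.venue_sesh = venue_sessions
        self.conf = conf

    async def _api(self, venue: str, endpoint: str, params: dict) -> dict:
        # all venue-specific requests go through the matching httpx session
        resp: httpx.Response = await self.venue_sesh[venue].get(endpoint, params=params)
        return resp.json()

@asynccontextmanager
async def get_client(conf: dict | None = None):
    # allocate one `AsyncClient` per venue and tear them all down on exit
    async with AsyncExitStack() as stack:
        sessions = {
            venue: await stack.enter_async_context(httpx.AsyncClient(base_url=url))
            for venue, url in _venue_urls.items()
        }
        yield Client(sessions, conf or {})
```
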
Nelson Torres f479252d26 Added note to exception when missing field in SpotPair class 2025-02-11 16:27:28 -05:00
Nelson Torres 033ef2e35e Added new fields to SpotPair class in venues 2025-02-11 16:27:28 -05:00
Tyler Goodlet 2cdece244c binance: raise `NoData` on null hist arrays
Like we do with other history backends to indicate lack of a data set.
This avoids any raise that would bring down the backloader task with
some downstream error.

Raise a `ValueError` on no time index for now.
2025-02-11 16:27:28 -05:00
Tyler Goodlet 018694bbdb Woops, `data` can be an empty list XD 2025-02-11 16:27:28 -05:00
Tyler Goodlet 128a2d507f Woops, fix missing `api_url` ref in error log 2025-02-11 16:27:28 -05:00
Tyler Goodlet 430650a6a7 Change type-annots to use `httpx.Response` 2025-02-11 16:27:28 -05:00
Tyler Goodlet 1da3cf5698 Port `kucoin` backend to `httpx` 2025-02-11 16:27:28 -05:00
Tyler Goodlet a348603fc4 Port `kraken` backend to `httpx` 2025-02-11 16:27:28 -05:00
goodboy 86047824d8 Merge pull request '`.brokers.ib` random fixes-n-improvements from various other dev branches..' (#27) from ib_refinements into gitea_feats
Merged-in: #27
2025-02-11 21:26:20 +00:00
Tyler Goodlet cb92abbc38 ib: add connect status info emit 2025-02-11 14:56:17 -05:00
Tyler Goodlet 70332e375b ib: `.api` mod and log-fmt cleaning
About time we tidy'd a buncha status logging in this backend..
particularly for boot-up where there's lots of client-try-connect poll
looping with account detection from the user config.

`.api.Client` pprint and logging fmt improvements:
- add `Client.__repr__()` which shows the minimally useful set of info
  from the underlying `.ib: IB` as well as a new `.acnts: list[str]`
  of the account aliases defined in the user's `brokers.toml`.
- mk `.bars()` define a comprehensive `query_info: str` with all the
  request deats but only display if there's a problem with the response
  data.
- mk `.get_config()` report both the config file path and the acnt
  aliases (NOT the actual account #s).
- move all `.load_aio_clients()` client poll loop requests to
  `log.runtime()` statuses, only falling through to a `.warning()` when
  the loop fails to connect the client to the spec-ed API-gw addr, and
 |_ don't allow loading accounts for which the user has not defined an
    alias in `brokers.toml::[ib]`; raise a value-error in such cases
    with a message indicating how to mod the config.
 |_ only `log.info()` about acnts if some were loaded..

Other mod logging de-noising:
- better status fmting in `.symbols.open_symbol_search()` with
  `repr(Client)`.
- for `.feed.stream_quotes()` first quote reporting use `.runtime()`.
2025-02-11 14:56:17 -05:00
Tyler Goodlet 4940aabe05 ib: warn about mkt precision cuckups that `Contract`s clearly deliver wrong.. 2025-02-11 14:56:17 -05:00
Tyler Goodlet 4f9998e9fb ib: mask out trade and vlm rates for now 2025-02-11 14:56:17 -05:00
Tyler Goodlet c92a236196 ib: more trade record edge case handling
- timestamps came as `'date'`-keyed from 2022 and before but now are
  `'datetime'`..
- some symbols seem to have no commission field, so handle that..
- when no `'price'` field found return `None` from `norm_trade()`.
- add a warn log on mid-fill commission updates.
2025-02-11 14:56:17 -05:00
goodboy e4cd1f85f6 Merge pull request 'pyqt6' (#3) from pyqt6 into gitea_feats
Reviewed-on: #3 (well by fomo anyway..)
2025-02-11 17:25:03 +00:00
Tyler Goodlet 129cf58d41 Bump deps for Py3.12, go PyQt6, tweak ruff rules
Code base is already ported for `Qt6` so this removes the pyqt5 dep,
adds latest pyqt6 as well as buncha other updates:

- add `xonsh` and ptk as dev deps for those of us using wacky shells ;P
- bump compiled deps as needed for python 3.12 (`numpy`, `numba`)
- add `httpx` and drop `asks` since the latter is zombied and not compat
  with other libs on 3.12.
- add `ruff` linting ignore rules for the new `.ui.qt` shim mod layer.
- few other deps updates to latest versions.
- add in the `keywords` and `classifiers` sections from the old
  `setup.py`.
2024-05-20 11:07:27 -04:00
Tyler Goodlet 1fd8654ca5 Port all `.ui*` submods to new `.ui.qt` imports
This also officially moves the code base to using `PyQt6` including all
necessary reference changes and enum namespace path moves.

Also includes a small `.ui.order_mode` fix to cancel any
`Order.price <= 0` and a name error fix with logging using `msg`,
which is already used for the input order msg..
2024-05-01 14:33:10 -04:00
Tyler Goodlet d0170982bf Add `piker.ui.qt` as a `PyQt6` shim module
For the future, like if we ever get a `PyQt7` (or wtv else..), add
a module which allows changing Qt binding lib imports from one spot for
all other `.ui` submodules. In some sense this is like a shoddier, less
dynamic version of how `pyqtgraph.Qt.__init__.py` supports multiple
libs; it might actually make sense eventually to instead import from
their shim layer instead?

Included is a draft attempt at exposing a bunch of enums under
custom names:
- while the specific grouping of values seems to always stay consistent,
  the root enums seem to almost always get moved around in the `PyQtX`
  module namespace.
- changing groupings and/or each top level enum's ns location can more
  simply be changed/re-orged from one spot.
- allows `.ui` consumer code to use a name more relevant to `piker`'s
  usage of wtv UI component is being configured.
2024-05-01 14:30:18 -04:00
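
As an illustration of the shim idea above (the alias names chosen here are assumptions, not the actual `piker.ui.qt` contents), the module re-exports the binding's enums under custom names from one spot so a future binding swap only touches this file:

```python
# illustrative-only Qt shim sketch
from PyQt6 import (
    QtCore,
    QtGui,
    QtWidgets,
)
from PyQt6.QtCore import Qt

# enum groupings tend to keep their values but move namespaces between
# PyQtX releases, so alias them once here for all `.ui` consumers:
align_flag = Qt.AlignmentFlag
keys = Qt.Key
key_mods = Qt.KeyboardModifier
```
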
Tyler Goodlet 821e73a409 Use a `unit_prefix: str` (like u or $) on health bar 2024-05-01 14:09:39 -04:00
Tyler Goodlet 3d03781810 Impl a sane (with nesting) `.types.Struct.pformat()`
Such that our internal structs can be pretty printed with indented and
type-hinted fields, AND for nested `Struct`-fields call `.pformat()` but
avoiding any recursion errors using `pprint.saferepr()`. Add
a `._sin_props()` iterator over the non-property fields; use it for
`dict` casting when called with `.to_dict(include_non_members=False)`.

Actually, we should also probably figure out how to only pprint
when required by the user in a REPL or log msg by context-selectively
using `pprint.PrettyPrinter`, right? Also, if we can generalize decently enough
it'd be cool to maybe patch this in as a util to upstream `msgspec`?
2024-01-17 15:50:27 -05:00
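
A minimal sketch of a nesting-aware `pformat()` in the spirit described above, assuming `msgspec.Struct` as the base (as in `piker`'s types); field iteration and formatting details here are illustrative, not the actual implementation:

```python
from pprint import saferepr
import msgspec

class Struct(msgspec.Struct):
    def pformat(self, indent: int = 2, _lvl: int = 1) -> str:
        pad = ' ' * indent * _lvl
        lines = [f'{type(self).__name__}(']
        for name in self.__struct_fields__:
            val = getattr(self, name)
            if isinstance(val, Struct):
                # nested structs recurse through their own `.pformat()`..
                rendered = val.pformat(indent=indent, _lvl=_lvl + 1)
            else:
                # ..everything else goes through `saferepr()` to avoid
                # recursion errors on self-referential containers.
                rendered = saferepr(val)
            lines.append(f'{pad}{name}: {type(val).__name__} = {rendered},')
        lines.append(' ' * indent * (_lvl - 1) + ')')
        return '\n'.join(lines)
```
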
Tyler Goodlet 83d1f117a8 Always cancel (loaded) zero-priced orders
Ran into this trading a peenee where a dark got (mistakenly) submitted
at a price of 0 and then that consecutively broke upstream allocator/pp
code due to a divide-by-zero.. So instead always check for a zero-price
(since that should never ever be valid in any market) and instead cancel
any such order in the EMS and return `None` so that upstream callers can
ignore it without crash handling.
2024-01-17 10:29:43 -05:00
Tyler Goodlet e4ce79f720 Delegate `.toolz.open_crash_handler()` to `tractor.devx`
Means we can drop `.toolz.debug` module outright.
2024-01-16 10:26:38 -05:00
Tyler Goodlet 264246d89b Fix `brokers.toml` load for `kraken` backend 2024-01-10 17:53:15 -05:00
Tyler Goodlet 7c96c9fafe Just warn log on mismatched `MktPair` in paper eng 2024-01-10 17:52:50 -05:00
Tyler Goodlet 52b349fe79 Always reload shm data before annotating gaps, so they line up.. 2024-01-09 15:55:16 -05:00
Tyler Goodlet 6959429af8 Factor gap annotating into new `markup_gaps()`
Since we definitely want to markup gaps that are both data-errors and
just plain old venue closures (time gaps), generalize the `gaps:
pl.DataFrame` loop in a func and call it from the `ldshm` cmd for now.

Some other tweaks to `store ldshm`:
- add `np.median()` failover when detecting (ohlc) ts sample rate.
- use new `tsp.dedupe()` signature.
- differentiate between sample-period size gaps and venue closure gaps
  by calling `tsp.detect_time_gaps()` with diff size thresholds.
- add todo for backfilling null-segs when detected.
2024-01-04 11:01:21 -05:00
Tyler Goodlet 05f874001a Ignore `ContextCancelled`s from non-mngr requests
Since service daemon actors may be cancelled remotely by clients (who
maybe also requested said daemon-actor's spawn in the first place) we
specifically ignore `tractor.ContextCancelled`s from the `ctx.wait()`
inside `Services.start_service_task()` to avoid crashing the service
mngr, and thus for now `pikerd`, (which **does** happen now due to
updated and more explicit remote cancellation semantics implemented in
`tractor`) since the `.canceller` field is not going to match the
`pikerd` uid in such cases!

This explicit check makes sense since the `Services` mngr is built to
allow remote requests to "spawn-n-supervise service actors" where the
services can remain persistent but also cancelled later as requested. We
may want to consider only allowing cancellation by actors who requested
spawn in the future tho?

Also change to more explicit imports to `tractor` types for annots
throughout the sub-pkg.
2024-01-04 10:06:42 -05:00
Tyler Goodlet fc216d37de Drop `__all__` import style from `.services` 2024-01-04 10:05:53 -05:00
Tyler Goodlet 03e429abf8 Extend `enable_modules` from input `tractor_kwargs`
Since certain actors (even if client-like) will want to augment their
module set to provide remote features (such as our new rc annotation
msg-prot for `Qt`-chart actors) we need to ensure we merge in any input
`enable_modules: list[str]` to the value passed to the underlying
`tractor` spawning api. Previously we were passing `_root_modules` as
this value by name, but now instead we simply `list.extend()` that into
whatever is in the `kwargs` both in `open_piker_runtime()` and
`open_pikerd()`.
2024-01-04 09:59:15 -05:00
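
The merge described above boils down to extending whatever `enable_modules` the caller passed instead of overwriting it; a simplified sketch (not the literal `open_piker_runtime()` body, and the example module name is illustrative):

```python
_root_modules: list[str] = [
    'piker.service._daemon',  # example entry only
]

def merge_enable_modules(tractor_kwargs: dict) -> dict:
    mods: list[str] = tractor_kwargs.setdefault('enable_modules', [])
    mods.extend(
        mod for mod in _root_modules
        if mod not in mods  # avoid dupes on repeated entry
    )
    return tractor_kwargs
```
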
Tyler Goodlet 7ae7cc829f `tsp`: on backfill, do a smart retry on a `NoData`
Presuming the data provider gives us a config with a `frame_types: dict`
(indicating frame sizes per query/request) we try to be clever and
decrement our submitted `end_dt: DateTime` based on it.. hoping for the
best on the next attempt.
2024-01-03 19:49:41 -05:00
Tyler Goodlet b23d44e21a ib; return `None` on empty bars frame resp so as to trigger raising `NoData` in the caller 2024-01-03 18:16:48 -05:00
Tyler Goodlet 2669db785c Workaround `binance`'s latest API schema bs..
Apparently publishing futures contracts that aren't yet trading AND
changing their contract type `str` format/schema was necessary (such
that there's a f@#$in space in it..)?

I honestly have no idea where they found their "data engineers" XD

TO CHERRY to #520
2024-01-03 17:50:09 -05:00
Tyler Goodlet d3e7b5cd0e Formalize rc `redraw()` msg-endpoint
So now a chart rc client can ask to invoke the new
`Viz.reset_graphics()` by timeframe and fqme Bo. This is handy when doing
underlying (real time or tsp) edits and you want to make the UI reflect
the changes incrementally.

Impl deatz:
- tweak the msg schema to use a `cmd: str` which normally maps to
  (something similar to) the UI method name instead of `annot` and now
  offer 3 such "commands": 'redraw', 'remove', 'SelectRect'.
- impl `AnnotCtl.redraw()` which sends the underlying `msg: dict` on the
  correct `tractor.Msgstream` ipc instance.
  - since ipc-stream lookups now happen in multiple client methods impl
    a private `._get_ipc()` to do the error raise on unknown fqmes.
2024-01-03 17:33:15 -05:00
Tyler Goodlet 9be29a707d Make `ib` failed history requests more debug-able
Been hitting wayy too many cases like this so, finally put my foot down
and stuck in a buncha helper code to figure why (especially for gappy
ass pennies) this can/is happening XD

inside the `.ib.api.Client()`:
- in `.bars()` pack all `.reqHistoricalDataAsync()` kwargs into a dict such that
  when/if we rx a blank frame we can enter pdb and make sync calls using
  a little `get_hist()` closure from the REPL.
  - tidy up type annots a bit too.
- add a new `.maybe_get_head_time()` meth which will return `None` when
  the dt can't be retrieved for the contract.

inside `.feed.open_history_client()`:
- use new `Client.maybe_get_head_time()` and only do `DataUnavailable`
  raises when the request `end_dt` is actually earlier.
- when `get_bars()` returns a `None` and the `head_dt` is not earlier
  than the `end_dt` submitted, raise a `NoData` with more `.info: dict`.
- deliver a new `frame_types: dict[int, pendulum.Duration]` as part
  of the yielded `config: dict`.
- in `.get_bars()` always assume a `tuple` returned from
  `Client.bars()`.
  - return a `None` on empty frames instead of raising `NoData` at this
    call frame.
- do more explicit imports from `pendulum` for brevity.

inside `.brokers._util`:
- make `NoData` take an `info: dict` as input to allow backends to pack
  in empty frame meta-data for (eventual) use in the tsp back-filling
  layer.
2023-12-29 21:59:59 -05:00
Tyler Goodlet c82ca812a8 Pass display state table to interaction handlers
This took a teensie bit of reworking in some `.ui` modules
more or less in the following order of functional dependence:

- add a `Ctl-R` kb-binding to trigger a `Viz.reset_graphics()` in
  the kb-handler task `handle_viewmode_kb_inputs()`.
  - call the new method on all `Viz`s (& for all sample-rates) and
    `DisplayState` refs provided in a (new input)
    `dss: dict[str, DisplayState]` table, which was originally init-ed
    from the multi-feed display loop (so orig in `.graphics_update_loop()`
    but now provided as an input to that func, see below..)
- `._interaction`: allow binding in `async_handler()` kwargs (via
  a `functools.partial`) passed to `ChartView.open_async_input_handler()`
  such that arbitrary inputs to our kb+mouse handler funcs can accept
  "wtv we desire".
  - use ^ to bind in the aforementioned `dss` display-state table to
    said handlers!
- define the `dss` table (as mentioned) inside `._display.display_symbol_data()`
  and pass it into the update loop funcs as well as the newly augmented
  `.open_async_input_handler()` calls,
  - drop calling `chart.view.open_async_input_handler()` from the
    `.order_mode.open_order_mode()`'s enter block and instead factor it
    into the caller to support passing the `dss` table to the kb
    handlers.
  - comment out the original history update loop handling of forced `Viz`
    redraws entirely since we now have a manual method via `Ctl-R`.
  - now, just update the `._remote_ctl.dss: dict` with this table since
    we want to also provide rc for **all** loaded feeds, not just the
    currently shown one/set.
- docs, naming and typing tweaks to `._event.open_handlers()`
2023-12-28 21:06:06 -05:00
Tyler Goodlet a7ad50cf8f Add `Viz.reset_graphics()` for "force re-render"
Since we're now using it in multiple layers it probably makes sense to impl
and wrap it more correctly / publicly. The main (recent) use case is
where editing an underlying time series and then wanting to refresh the
graphics layers to reflect the changes in a chart. Part of this also
obviously includes wiping the y-range mx/mn cache.

Also ensure that `force_redraw` is proxying through to any `BarItems`
via the new `render_baritems()` func kwarg even when switching between
downsampled-line vs. bars modes.
2023-12-28 18:00:26 -05:00
Tyler Goodlet 661805695e Reimpl axis dt label contents gen with `polars`
Since `polars` has a more sane set of (time-zone aware) datetime APIs it
makes more sense and is definitely no slower than the previous `numpy`
impl. Also, actually use the sample-rate specific formats defined in
`DynamicDateAxis.tick_tpl: dict[int, str]`, finally using the new
`Viz.time_step()` property.
2023-12-28 11:08:29 -05:00
Tyler Goodlet 3de7c9a9eb Add `Viz.time_step()`, the sample step-size in time
Since we end up needing the actual (OHLC sampled) time step info (at
least in seconds) for various purposes (in this specific follow up use
case to determine sample-rate specific `datetime` format strings for
a charted time series x-axis label), allow always reading it from the
viz with the presumption (at least for now) the underlying data-frame
will have an epoch `'time'` col/field.
2023-12-28 11:02:06 -05:00
Tyler Goodlet 59536bd284 Use `import <name> as <name>,` in `.tsp`
Thanks to oremanj in the `trio` room for this hot style tip which i much
prefer to have less LOC and places to change sub-pkg name exports!

Also drop expecting a `gaps` frame output from `dedupe()`.
2023-12-28 10:58:22 -05:00
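
The style tip in question, illustrated as a package-`__init__.py` fragment (the exact exported names are assumptions based on routines mentioned elsewhere in this log): explicit `import <name> as <name>` marks each name as an intentional re-export (type checkers treat it as public) while keeping one line per export.

```python
# hypothetical `.tsp/__init__.py`-style re-export fragment
from ._anal import (
    dedupe as dedupe,
    detect_time_gaps as detect_time_gaps,
    sort_diff as sort_diff,
)
```
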
Tyler Goodlet 5702e422d8 Drop gap detection from `dedupe()`, expect caller to handle it 2023-12-28 10:40:08 -05:00
Tyler Goodlet 07331a160e Expose "bar gap margin" as `.ui._formatters.BGM: float` 2023-12-28 10:37:20 -05:00
Tyler Goodlet 0d18cb65c3 Lul, actually detect gaps for 1s OHLC
Turns out we were always filtering to time gaps longer than a day smh..
Instead tweak `detect_time_gaps()` to only return venue-gaps when
a `gap_dt_unit: str` is passed and pass `'days'` (like it was by default
before) from `dedupe()` though we should really pass in an actual venue
gap duration in the future.
2023-12-27 16:55:00 -05:00
Tyler Goodlet ad565936ec Factor UI-rc loop into ctx-free func
In theory the `async for msg` loop can be re-purposed without having to
always call `remote_annotate()` so factor it into a new
`serve_rc_annots()` and then just call it from the former (for now) with
the wrapping `try:` block outside to delete per-client-ctx annotation
instance sets. Also, use some type aliases instead of repeatedly
defining the same complex `dict`-table defs B)
2023-12-26 20:56:04 -05:00
Tyler Goodlet d4b07cc95a `ui._lines`: more direct Qt imports for typing 2023-12-26 20:49:07 -05:00
Tyler Goodlet 1231c459aa Track data feed subscribers using a new `Sub(Struct)`
In prep for supporting reverse-ipc connect-back to UI actors from
middle-ware systems (for the purposes of triggering data-view canvas
re-renders and built-in tsp annotations), add a new struct type to
better generalize the management of remote feed subscriptions. Include
a `Sub.rc_ui: bool` for now (with nearby todo-comment) and expose an
`allow_remote_ctl_ui: bool` through the feed endpoints to help drive
/ prep for all that ^

Rework all the sampler tasks to expect the `Sub`'s new iface:

- split up the `Sub.ipc: MsgStream`  and `.send_chan` as separate fields
  since we're handling the throttle case in separate
  `sample_and_broadcast()` logic blocks anyway and avoids needing to
  monkey-patch on the `._ctx` malarky..
- explicitly provide the optional handle to the `_throttle_cs:
  CancelScope` again for the case where throttling/event-downsampling is
  requested.
- add `_FeedsBus.subs_items()` as a public iterator.
2023-12-26 20:48:06 -05:00
Tyler Goodlet 88f415e5b8 Cannot delete when the rect has no scene.. 2023-12-26 17:36:34 -05:00
Tyler Goodlet d9c574e291 Add `.sort()` support to `dedupe()` 2023-12-26 17:35:38 -05:00
Tyler Goodlet a86573b5a2 Fix .parquet filenaming..
Apparently `.storage.nativedb.mk_ohlcv_shm_keyed_filepath()` was always
kinda broken if you passed in a `period: float` with an actual non-`int`
to the format string? Fixed it to strictly cast to `int()` before
str-ifying so that you don't get weird `60.0s.parquet` in there..

Further this rejigs the `store ldshm` gap correction-annotation loop to,
- use `StorageClient.write_ohlcv()` instead of hackily re-implementing
  it.. now that problem from above is fixed!
- use a `needs_correction: bool` var to determine if gap markup and
  de-duplicated data should be pushed to the shm buffer,
- go back to using `AnnotCtl.add_rect()` for all detected gaps such that
  they all persist (and thus are shown together) until the client
  disconnects.
2023-12-26 17:14:26 -05:00
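
The filenaming fix described above amounts to strictly casting the sample period to `int` before str-ifying; a hedged sketch (the path layout here is illustrative, not the actual `.storage.nativedb` logic):

```python
from pathlib import Path

def mk_ohlcv_shm_keyed_filepath(
    fqme: str,
    period: float,  # ohlc sample period in seconds, may arrive as a float
    datadir: Path,
) -> Path:
    # int-cast so we never emit weird `60.0s.parquet` style names
    return datadir / f'{fqme}.ohlcv{int(period)}s.parquet'
```
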
Tyler Goodlet 1d7e97a295 Woops, need to use `.push_async_callback()`
For non-full-`.__aexit__()` handlers need this method instead (facepalm).
Also create and assign the `AnnotCtl._annot_stack: AsyncExitStack` just
before yielding the client since it's not needed prior and ensures annot
removal happens **before** ipc teardown.
2023-12-24 15:08:44 -05:00
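
A minimal illustration of the `AsyncExitStack` distinction mentioned above: `.push_async_callback()` registers a plain async teardown callable, whereas `.enter_async_context()` expects a full async context manager. The names below are stand-ins, not the actual `AnnotCtl` code:

```python
from contextlib import AsyncExitStack

async def aremove_annot(aid: int) -> None:
    ...  # hypothetical removal request sent over ipc

async def demo() -> None:
    async with AsyncExitStack() as stack:
        aid = 1  # pretend this came from an `.add_rect()` call
        # registered callbacks run (LIFO) when the stack unwinds,
        # i.e. annot removal happens before ipc teardown:
        stack.push_async_callback(aremove_annot, aid)
```
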
Tyler Goodlet bbb98597a0 Add annot removal via client methods or ctx-mngr
Since leaking annots to a remote `chart` actor probably isn't a thing we
want to do (often), add a removal/deletion handler block to the
`remote_annotate()` ctx which can be triggered using a `{rm_annot: aid}`
msg.

Augment the `AnnotCtl` with,
- `.remove()` which sends said msg (from above) and returns a `bool`
  indicating success.
- add an `.open_rect()` acm which does the `.add_rect()` / `.remove()`
  calls underneath for use in scope oriented client usage.
- add a `._annot_stack: AsyncExitStack` which will always have any/all
  non-`.open_rect()` calls to `.add_rect()` register removal on client
  teardown, to avoid leaking annots when a client finally disconnects.
- comment out the `.modify()` meth idea for now.
- rename all `Xstream` var-tags to `Xipc` names.
2023-12-24 14:42:12 -05:00
Tyler Goodlet e33d6333ec Woops, remove the label-proxy, not the widget.. 2023-12-24 13:59:16 -05:00
Tyler Goodlet 263a5a8d07 Add `SelectRect.delete()` for permanent scene dealloc 2023-12-23 13:37:47 -05:00
Tyler Goodlet a681b2f0bb Drop passing `bus` to `tsp.manage_history()` in feed allocator 2023-12-22 21:44:38 -05:00
Tyler Goodlet 5b0c94933b `.config`: don't hack the user config dir if user is 'root' and sudo was NOT used.. 2023-12-22 21:41:51 -05:00
Tyler Goodlet 61e52213b2 Oof, fix no-tsdb-entry since needs full backfill case!
Got borked by the logic re-factoring to get more conc going around
tsdb vs. latest frame loads with nested nurseries. So, repair all that
such that we can still backfill symbols previously not loaded as well as
drop all the `_FeedBus` instance passing to subtasks where it's
definitely not needed.

Toss in a pause point around sampler stream `'backfilling'` msgs as well
since there's seems to be a weird ctx-cancelled propagation going on
when a feed client disconnects during backfill and this might be where
the src `tractor.ContextCancelled` is getting bubbled from?
2023-12-22 21:34:31 -05:00
Tyler Goodlet b064a5f94d A working remote annotations controller B)
Obvi took a little `.ui` component fixing (as per prior commits) but
this is now a working PoC for gap detection and markup from a remote
(data) non-`chart` actor!

Iface and impl deats from `.ui._remote_ctl`:
- add new `open_annot_ctl()` mngr which attaches to all locally
  discoverable chart actors, gathers annot-ctl streams per fqme set, and
  delivers a new `AnnotCtl` client which allows adding annotation
  rectangles via a `.add_rect()` method.
  - also template out some other soon-to-get methods for removing and
    modifying pre-existing annotations on some `ChartView` 💥
- ensure the `chart` CLI subcmd starts the (`qtloops`) guest-mode init
  with the `.ui._remote_ctl` module enabled.
- actually use this stuff in the `piker store ldshm` CLI to submit
  markup rects around any detected null/time gaps in the tsdb data!

Still lots to do:
-  probably colorization of gaps depending on if they're venue
   closures (aka real mkt gaps) vs. "missing data" from the backend (aka
   timeseries consistency gaps).
- run gap detection and markup as part of the std `.tsp` sub-sys
   runtime such that gap annots are a std "built-in" feature of
   charting.
- support for epoch time stamp AND abs-shm-index rect x-values
  (depending on chart operational state).
2023-12-22 15:19:20 -05:00
Tyler Goodlet e7fa841263 Pass scene-points to `.select_box` as per prior comments
As mentioned in a prior commit this was the (seemingly, and so far) only
way to make our `.select_box` annotator shift-click rect work properly
(and the same as `ViewBox.rbScaleBox`) by adopting the code around it
(which we now also disable). That means also passing the scene coords to
the `SelectRect.set_scen_pos()`. Also add in the proper `ev:
pyqtgraph.GraphicsScene.mouseEvents.MouseDragEvent` so we can actually
figure out wut the hell all this pg custom mouse-event stuff is XD
2023-12-22 12:09:08 -05:00
Tyler Goodlet 1f346483a0 Always pass full `ShmArray._array` buf to `ContentsLables` updates so the label can be used outside the "backfilled-valid" range 2023-12-22 12:06:55 -05:00
Tyler Goodlet d006ecce7e Fix `._pg_overrides` import cycle caused by our `Axis` override 2023-12-22 12:05:18 -05:00
Tyler Goodlet 69368f20c2 Finally fix our `SelectRect` for use with cursor..
Turns out using the `.setRect()` method was the main cause of the issue
(though still don't really understand how or why) and this instead
adopts verbatim the code from `pg.ViewBox.updateScaleBox()` which uses
a scaling transform to set the rect for the "zoom scale box" thingy.

Further add a shite ton more improvements and interface tweaks in
support of the new remote-annotation control msging subsys:
- re-impl `.set_scen_pos()` to expect `QGraphicsScene` coordinates (i.e.
  passed from the interaction loop and pass scene `QPointF`s from
  `ViewBox.mouseDragEvent()` using the `MouseDragEvent.scenePos()` and
  friends; this is required to properly use the transform setting
  approach to resize the select-rect as mentioned above.
- add `as_point()` converter to maybe-cast python `tuple[float, float]`
  inputs (prolly from IPC msgs) to equivalent `QPointF`s.
- add a ton more detailed Qt-obj-related typing throughout our deriv.
- call `.add_to_view()` from init so that wtv view is passed in during
  instantiation is always set as the `.vb` after creation.
- factor the (proxy widget) label creation into a new `.init_label()`
  so that both the `set_scen/view_pos()` methods can call it and just
  generally decouple rect-pos mods from label content mods.
2023-12-22 11:47:31 -05:00
Tyler Goodlet 31fa0b02f5 Append any `enable_modules` specc-ed in the chart guest-mode runner 2023-12-21 20:40:00 -05:00
Tyler Goodlet 5a60974990 Use explicit `.data.feed` import of `tractor.trionics` 2023-12-21 20:26:45 -05:00
Tyler Goodlet 8d324acf91 First (untested) draft remote annotation ctl API
Since we can and want to eventually allow remote control of pretty much
all UIs, this drafts out a new `.ui._remote_ctl` module with a new
`@tractor.context` called `remote_annotate()` which simply starts a msg
loop which allows for (eventual) initial control of a `SelectRect`
through IPC msgs.

Remote controller impl deats:
- make `._display.graphics_update_loop()` set a `._remote_ctl._dss:
  dict` for all chart actor-global `DisplayState` instances which can
  then be controlled from the `remote_annotate()` handler task.
- also stash any remote client controller `tractor.Context` handles in
  a module var for broadband IPC cancellation on any display loop
  shutdown.
- draft a further global map to track graphics object instances since
  likely we'll want to support remote mutation where the client can use
  the `id(obj): int` key as an IPC handle/uuid.
- just draft out a client-side `@acm` for now: `open_annots_client()` to
  be filled out in upcoming commits.

UI component tweaks in support of the above:
- change/add `SelectRect.set_view_pos()` and `.set_scene_pos()` to allow
  specifying the rect coords in either of the scene or viewbox domains.
  - use these new apis in the interaction loop.
- add a `SelectRect.add_to_view()` to avoid having annotation client
  code knowing "how" a graphics obj needs to be added and can instead
  just pass only the target `ChartView` during init.
- drop all the status label updates from the display loop since they
  don't really work all the time, and probably it's not a feature we
  want to keep in the longer term (over just console output and/or using
  the status bar for simpler "current state / mkt" infos).
  - allows a bit of simplification of `.ui._fsp` method APIs to not pass
    around status (bar) callbacks as well!
2023-12-19 15:36:54 -05:00
Tyler Goodlet ab84303da7 Drop `SelectRect.mouse_drag_released()` since it was a dumb method 2023-12-18 20:32:17 -05:00
Tyler Goodlet 659649ec48 Bah, fix nursery indents for maybe tsdb backloading
Can't ref `dt_eps` and `tsdb_entry` if they don't exist.. like for 1s
sampling from `binance` (which dne). So make sure to add better logic
guard and only open the finaly backload nursery if we actually need to
fill the gap between latest history and where tsdb history ends.

TO CHERRY #486
2023-12-18 19:46:59 -05:00
Tyler Goodlet f7cc43ee0b Add pauses to `store anal/ldshm` only on bad segs
Particularly halting before maybe writing the repaired timeseries
history in `store anal` to optionally allow user to avoid writing to
storage.
2023-12-18 11:56:57 -05:00
Tyler Goodlet f5dc21d3f4 Adjust all `.tsp` imports to use new sub-pkg
Also toss in a poll loop around the `hist_shm: ShmArray` backfill
read-check in the `.data.allocate_persistent_feed()` init to cope with
possible racy-ness from the increased tsdb history loading concurrency
now implemented.
2023-12-18 11:54:28 -05:00
Tyler Goodlet 4568c55f17 Create `piker.tsp` "time series processing" subpkg
Move `.data.history` -> `.tsp.__init__.py` for now as main pkg-mod
and `.data.tsp` -> `.tsp._anal` (for analysis).

Obviously follow commits will change surrounding codebase (imports) to
match..
2023-12-18 11:53:27 -05:00
Tyler Goodlet d5d68f75ea ib: only raise first quote timeout err after tries
Previously we were actually failing silently too fast instead of
actually trying multiple times (now we do for 100) before finally
raising any timeout in the final loop `else:` block.
2023-12-18 11:45:19 -05:00
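
A sketch of the retry shape described above (the try count, timeout and helper name are illustrative): only raise the timeout error from the loop's `else:` block once every attempt has failed.

```python
import asyncio

async def wait_first_quote(get_quote, tries: int = 100, timeout: float = 1.0):
    for _ in range(tries):
        try:
            return await asyncio.wait_for(get_quote(), timeout)
        except asyncio.TimeoutError:
            continue  # silently retry instead of bailing on the first miss
    else:
        # only reached when all tries were exhausted without a quote
        raise TimeoutError(f'No quote arrived after {tries} tries!')
```
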
Tyler Goodlet 1f9a497637 Fixup symcache annot for kucoin as well 2023-12-15 16:01:31 -05:00
Tyler Goodlet 40c5d88a9b Fixup symcache type annots; no more `Pair` type 2023-12-15 16:00:51 -05:00
Tyler Goodlet 8989c73a93 Move `iter_dfs_from_shms` into `.data.history`
Thinking about just moving all of that module (after a content breakup)
to a new `.piker.tsp` which will mostly depend on the `.data` and
`.storage` sub-pkgs; the idea is to move biz-logic for tsdb IO/mgmt and
orchestration with real-time (shm) buffers and the graphics layer into
a common spot for both manual analysis/research work and better
separation of low level data structure primitives from their higher
level usage.

Add a better `data.history` mod doc string in prep for this move
as well as clean out a bunch of legacy commented cruft from the
`trimeter` and `marketstore` days.

TO CHERRY #486 (if we can)
2023-12-15 15:53:02 -05:00
Tyler Goodlet 3639f360c3 Reactivate forced viz updates from sampler broadcasts in hist display loop 2023-12-15 13:59:19 -05:00
Tyler Goodlet afd0781b62 Add (shm) abs index to `ContextLabel` 2023-12-15 13:57:10 -05:00
Tyler Goodlet ba154ef413 ib: don't bother with recursive not-enough-bars queries for now, causes more problems then it solves.. 2023-12-15 13:56:42 -05:00
Tyler Goodlet 97e2403fb1 Rework backfiller and null-segment task conc
For each timeframe open a sub-nursery to do the backfilling + tsdb load
+ null-segment scanning in an effort to both speed up load time (though
we need to reverse the current order to really make it faster rn since
moving to the much faster parquet file backend) and do concurrent
time-gap/null-segment checking of tsdb history while mrf (most recent
frame) history is backfilling.

The details are more or less just `trio` related task-func composition
tricks and a reordering of said funcs for optimal startup latency.
Also commented out the `back_load_from_tsdb()` task for now since it's
unused.
2023-12-15 13:11:00 -05:00
Tyler Goodlet a4084d6a0b Bleh, fix another off-by-one issue in `np.argwhere()`
Apparently it returns the index of the prior zero-row (prolly since we
do the backward difference) so ensure `fi_zgaps += 1`..

Also fix remaining edge case handling when there's only 2 zero-segs
which was borked after a refactor to the special case blocks (like
a single zero row) prior to the `absi_zsegs` building loop AND make sure
to always return abs indices OUTSIDE the zero seg, i.e. the indices of
the non-zero row just before and just after so that the history
backfiller can use non-zero timestamps to generate range datetimes for
backend frame queries.

Add much more detailed doc-comments with a small ascii diagram to
explain how all these somewhat subtle vec ops work. Also toss in some
sanity checks on the output indices to ensure they don't point to
zero (time) valued rows when used to read the frame.
2023-12-15 12:48:50 -05:00
Tyler Goodlet 83bdca46a2 Wrap null-gap detect and fill in async gen
Call it `iter_null_segs()` (for now?) and use in the final (sequential)
stage of the `.history.start_backfill()` task-func. Delivers abs,
frame-relative, and equiv time stamps on each iteration pertaining to
each detected null-segment to make it easy to do piece-wise history
queries for each.

Further,
- handle edge case in `get_null_segs()` where there is only 1 zeroed
  row value, in which case we deliver `absi_zsegs` as a single pair of
  the same index value and,
  - when this occurs `iter_null_segs()` delivers `None` for all the
    `start_` related indices/timestamps since all `get_hist()` routines
    (delivered by `open_history_client()`) should handle it as being a
    "get max history from this end_dt" type query.
- add note about needing to do time gap handling where there's a gap in
  the timeseries-history that isn't actually IN the data-history.
2023-12-13 18:29:06 -05:00
Tyler Goodlet c129f5bb4a Finally write a general purpose null-gap detector!
Using a bunch of fancy `numpy` vec ops (and ideally eventually extending
the same to `polars`) this is a first draft of `get_null_segs()`
a `col: str` field-value-is-zero detector which filters to all zero-valued
input frame segments and returns the corresponding useful slice-indexes:
- gap absolute (in shm buffer terms) index-endpoints as
  `absi_zsegs` for slicing to each null-segment in the src frame.
- ALL abs indices of rows with zeroed `col` values as `absi_zeros`.
- the full set of the input frame's row-entries (view) which are
  null valued for the chosen `col` as `zero_t`.

Use this new null-segment-detector in the
`.data.history.start_backfill()` task to attempt to fill null gaps that
might be extant from some prior backfill attempt. Since
`get_null_segs()` should now deliver a sequence of slices for each gap
we don't really need to have the `while gap_indices:` loop any more, so
just move that to the end-of-func and warn log (for now) if all gaps
aren't eventually filled.

TODO:
-[ ] do the null-seg detection and filling concurrently from
  most-recent-frame backfilling.
-[ ] offer the same detection in `.storage.cli` cmds for manual tsp
  anal.
-[ ] make the graphics layer actually update correctly when null-segs
  are filled (currently still broken somehow in the `Viz` caching
  layer?)

CHERRY INTO #486
2023-12-13 15:26:33 -05:00
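
A toy sketch of the core `numpy` trick described above (NOT the actual `get_null_segs()` implementation): find all rows whose `'time'` value is zero, split them into contiguous segments, and report the bounding indices just outside each segment so callers can read valid (non-zero) timestamps for range queries.

```python
import numpy as np

def find_zero_time_segs(times: np.ndarray) -> list[tuple[int, int]]:
    zeros = np.argwhere(times == 0).flatten()
    if not zeros.size:
        return []

    # segment boundaries are where consecutive zero-indices jump by > 1
    breaks = np.argwhere(np.diff(zeros) > 1).flatten()
    starts = np.concatenate(([zeros[0]], zeros[breaks + 1]))
    ends = np.concatenate((zeros[breaks], [zeros[-1]]))

    # return indices just OUTSIDE each zero seg (clamped to array bounds)
    # so the rows read there carry real timestamps.
    return [
        (max(s - 1, 0), min(e + 1, len(times) - 1))
        for s, e in zip(starts, ends)
    ]
```
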
Tyler Goodlet c4853a3fee Drop inter-method NL 2023-12-13 09:27:23 -05:00
Tyler Goodlet f274c3db3b Import `np2pl()` from `.data.tsp`
Also toss in todo for a timeseries search CLI cmd which can be handy
when doing offline store mgmt.
2023-12-13 09:25:44 -05:00
Tyler Goodlet b95932ea09 `.data.history`: run `.tsp.dedupe()` in backloader
In an effort to catch out-of-order and/or partial-frame-duplicated
segments, add some `.tsp` calls throughout the backloader tasks
including a call to the new `.sort_diff()` to catch the out-of-order
history cases.
2023-12-12 19:57:46 -05:00
Tyler Goodlet e8bf4c6e04 Return the `.len()` diff from `dedupe()` instead
Since the `diff: int` serves as a predicate anyway (when `0`, nothing
duplicate was detected) we might as well just return it directly since it's
likely also useful for the caller when doing deeper anal.

Also, handle the zero-diff case by just returning early with a copy of
the input frame and a `diff=0`.

CHERRY INTO #486
2023-12-12 16:48:56 -05:00
Tyler Goodlet 8e4d1a48ed Bleh, fix ib's `Client.bars()` recursion..
Turns out this was the main source of all sorts of gaps and overlaps
in history frame backfilling. The original idea was that when a gap
causes not enough (1m) bars to be delivered (like over a weekend or
holiday) when we just implicitly do another frame query to try and at
least fill out the default duration (normally 1-2 days). Doing the
recursion sloppily was causing all sorts of stupid problems..

It's kinda obvious now what was wrong in hindsight:
- always pass the sampling period (timeframe) when recursing
- adjust the logic to not be mutex with the no-data case (since it
  already is mutex..)
- pack to the `numpy` array BEFORE the recursive call to ensure the
  `end_dt: DateTime` is selected and passed correctly!

Toss in some other helpfuls:
- more explicit `pendulum` typing imports
- some masked out sorted-diffing checks (that can be enabled when
  debugging out-of-order frame issues)
- always error log about less-than time step mismatches since we should never
  have time-diff steps **smaller** than specified in the
  `sample_period_s`!
2023-12-12 16:19:21 -05:00
Tyler Goodlet b03eceebef data.tsp: drop masked `return` one liner 2023-12-11 20:11:42 -05:00
Tyler Goodlet f7a8d79b7b Add `NativeStorageClient._cache_df()` use it in `.write_ohlcv()` for caching on writes as well 2023-12-11 20:10:53 -05:00
Tyler Goodlet 49c458710e Move `numpy` <-> `polars` converters into `.data.tsp`
Yet again these are (going to be) generally useful in the data proc
layer as well as going forward with (possibly) moving the history and
shm rt-processing layer to apache (arrow or other) shared-ds
equivalents.
2023-12-11 17:53:31 -05:00
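
A hedged sketch of simple `numpy` <-> `polars` converters in the spirit of the move described above (the real `.data.tsp` versions may differ); it assumes structured (field-named) ndarrays on the `numpy` side:

```python
import numpy as np
import polars as pl

def np2pl(arr: np.ndarray) -> pl.DataFrame:
    # structured ndarray -> column-per-field DataFrame
    return pl.DataFrame({name: arr[name] for name in arr.dtype.names})

def pl2np(df: pl.DataFrame, dtype: np.dtype) -> np.ndarray:
    # column-per-field DataFrame -> structured ndarray of the given dtype
    out = np.empty(len(df), dtype=dtype)
    for name in dtype.names:
        out[name] = df[name].to_numpy()
    return out
```
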
Tyler Goodlet b94582cb35 Move `dedupe()` to `.data.tsp` (so it has pals)
Includes a rename of `.data._timeseries` -> `.data.tsp` for "time series
processing", making it a public sub-mod; it contains a highly useful set
of data-frame and `numpy.ndarray` ops routines in various subsystems Bo
2023-12-11 16:24:27 -05:00
Tyler Goodlet 7311000846 Facepalm, set `was_deduped` as bool not the deduped frame.. 2023-12-11 13:18:10 -05:00
Tyler Goodlet e719733f97 Comment out overlap case block for now too? 2023-12-08 19:08:10 -05:00
Tyler Goodlet cb941a5554 BABOSO.. fix last history frame overlap slicing!
I guess since I started supporting the whole "allow a gap between
the latest tsdb sample and the latest retrieved history frame" the
overlap slicing has been completely borked XD where we've been sticking
in duplicate history samples and this has caused all sorts of down
stream time-series processing issues..

So fix that by ensuring whenever there IS an overlap between history in
the latest frame and the tsdb that we always prefer the latest frame's
data and slice OUT the tsdb's duplicate indices..

CHERRY TO #486
2023-12-08 18:56:38 -05:00
Tyler Goodlet 2d72a052aa Woops, make sure non-disti mode still works when maybe getting `pikerd` XD 2023-12-08 17:43:52 -05:00
Tyler Goodlet 2eeef2a123 Add `dedupe()` to help with gap detection/resolution
Think I finally figured out the weird issue with out-of-order OHLC
history getting jammed in the wrong place:
- gap is detected in parquet/offline ts (likely due to a zero dt or
  other gap),
- query for history in the gap is made BUT that frame is then inserted
  in the shm buffer **at the end** (likely using array int-entry
  indexing) which inserts it at the wrong location,
- later this out-of-order frame is written to the storage layer
  (parquet) and then is repeated on further reboots with the original
  gap causing further queries for the same frame on every history
  backfill.

A set of tools useful for detecting these issues and annotating them
nicely on chart part of this patch's intent:
- `dedupe()` will detect any dt gaps, deduplicate datetime rows and
  return the de-duplicated df along with a gaps table.
- use this in both `piker store anal` such that we potentially
  resolve and backfill the gaps correctly if some rows were removed.
- possibly also use this to detect the backfilling error in logic at
  the time of backfilling the frame instead of after the fact (which
  would require re-writing the shm array from something like `store
  ldshm` and would be a manual post-hoc solution, not a fix to the
  original issue)..
2023-12-08 15:11:34 -05:00
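
A toy sketch of the `dedupe()` idea (not the exact `.data.tsp` signature or return layout): sort on the time column, drop duplicated timestamp rows, detect sample-period gaps, and report how many rows were removed.

```python
import polars as pl

def dedupe(df: pl.DataFrame, period_s: float = 60.0):
    sorted_df = df.sort('time')
    deduped = sorted_df.unique(subset=['time'], keep='first', maintain_order=True)
    # rows whose forward time-diff exceeds the expected sample period
    gaps = deduped.filter(pl.col('time').diff() > period_s)
    diff: int = len(sorted_df) - len(deduped)
    return deduped, gaps, diff
```
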
Tyler Goodlet b6d2550f33 Add datetime col de-duplicator 2023-12-08 14:38:27 -05:00
Tyler Goodlet b9af6176c5 Factor `TimeseriesNotFound` to top level
TO CHERRY into #486
2023-12-07 12:31:14 -05:00
Tyler Goodlet dd0167b9a5 Make `fsp.cascade()` expect src/dst `Flume`s
Been meaning to do this for a while, and there's still a few design
/ interface kinks (like `.mkt: MktPair` which should be better
generalized?) but this flips over all of the fsp chaining engine
to operate on the higher level `Flume` APIs via the newly cobbled
`Cascade` thinger..
2023-12-06 17:53:35 -05:00
Tyler Goodlet 9e71e0768f Define and pass a default `Flume._readonly: bool`
Allows opening with `.from_msg(readonly=False)` for write permissions;
by default the underlying shm arrays are made readonly. Also, make sure to pop the
`ShmArray` field entries prior to msg-ization, not sure how that worked
with the `Feed.flumes` equivalent..but?
2023-12-06 17:25:49 -05:00
Tyler Goodlet 6029f39a3f Allow `MktPair.from/to_msg()` to still do `.dst: str` for fsp flumes 2023-12-06 17:09:52 -05:00
Tyler Goodlet 656e2c6a88 fsp: intro a `Cascade` type that connects `Flume`s of streams 2023-12-05 16:59:07 -05:00
Tyler Goodlet b8065a413b ib: update ibc.ini from latest upstream template 2023-12-05 16:57:38 -05:00
Tyler Goodlet 9245d24b47 ib: add `.pause()` on symbol query overruns to aid in fixing the issue 2023-12-04 13:10:15 -05:00
Tyler Goodlet 22bd83943b .storage: support `store anal --pdb` flag 2023-12-04 13:00:33 -05:00
Tyler Goodlet b94931bbdd Fix `Portal.channel: Channel` attr name error 2023-12-04 13:00:04 -05:00
Tyler Goodlet 239c1c457e Sort fqme suggestions pre-print 2023-12-04 11:34:39 -05:00
Tyler Goodlet 24a54a7085 Add `TimeseriesNotFound` for fqme lookup failures
A common usage error is to run `piker anal mnq.cme.ib` where the CLI
passed fqme is not actually fully-qualified (in this case missing an
expiry token) and we get an underlying `FileNotFoundError` from the
`StorageClient.read_ohlcv()` call. In such key misses, scan the existing
`StorageClient._index` for possible matches and report in a `raise from`
the new error.

CHERRY into #486
2023-12-04 11:22:55 -05:00
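
A sketch of the "suggest close matches" flow described above; the storage-client internals here are stand-ins, only the `raise ... from` idea and the `TimeseriesNotFound` name come from the commit text.

```python
class TimeseriesNotFound(Exception):
    'No (tsdb) time series found for the requested fqme.'

def read_ohlcv_or_suggest(index: dict[str, object], fqme: str):
    try:
        return index[fqme]
    except KeyError as err:
        # scan existing keys for partial matches to hint at what the
        # caller probably meant (eg. a missing expiry token).
        matches = [key for key in index if fqme in key]
        raise TimeseriesNotFound(
            f'No entry for {fqme!r}, maybe you meant one of:\n{matches}'
        ) from err
```
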
Tyler Goodlet ebd1eb114e Port runtime init to new `tractor.Actor.reg_addrs` related changes 2023-11-21 15:18:52 -05:00
Tyler Goodlet 29ce8de462 Use new container image mentioned on IBC thread 2023-10-29 13:21:32 -04:00
Tyler Goodlet d3dab17939 order_mode: fix to avoid `Dialog.uuid` on null dialog.. 2023-10-20 13:57:52 -04:00
Tyler Goodlet cadc200818 Always ignore untracked-order error msgs from `brokerd` 2023-10-16 13:15:12 -04:00
Tyler Goodlet 363c8dfdb1 Default spec registrar set as empty addr list
Since it probably IS sane to just assume a root-actor-as-registrar
listening on the localhost as a default, AND it allows NOT expecting every
caller of `open_piker_runtime()` to have to pass an addr set XD

This makes a buncha CLI shit work again after breakage due to no
default..
2023-10-03 13:36:22 -04:00
Tyler Goodlet 00c046c280 Factor transport-ep parser/loader into helper
For now define it as `.cli.load_trans_eps()` just inside the pkg mod; only
loads the ep for `pikerd` which currently acts as the main service-actor
registrar per host. Delegate to this new `.load_trans_eps()`
as-it-was-used from the `pikerd` cmd body and add fresh support for
`piker chart --maddr <addr: str>` using the routine in the body of the
`piker.cli.cli` cmd group after loading the `conf.toml::network` section
B)

Also, toss in runtime debug mode wrapping around `piker chart` using the
new `tractor.devx.maybe_open_crash_handler()` and pull the switch from
a `--pdb` flag now factored into the `.cli.cli` click group.
2023-10-03 10:00:01 -04:00
Tyler Goodlet 9165515811 ib: more detailed comments on wait-for-quote-task todo 2023-10-02 17:57:47 -04:00
Tyler Goodlet 543c11f377 ib: only normalize and log first quote if it arrives 2023-10-01 19:14:08 -04:00
Tyler Goodlet 637d33d7cc Make `.config.load_accounts()` load `brokers.toml`.. 2023-10-01 19:09:15 -04:00
Tyler Goodlet e5fdb33e31 Port cache-`dict` search to new `rapidfuzz` api 2023-10-01 17:46:46 -04:00
Tyler Goodlet 81a8cd1685 binance: always load the `brokers.toml` file since default is `conf.toml` now 2023-10-01 17:37:09 -04:00
Tyler Goodlet a382f01c85 Move tsdb section to `service.tsdb.name` and get host from `.maddrs` 2023-10-01 17:23:39 -04:00
Tyler Goodlet 653348fcd8 Use `.service.find_service()` instead of of `tractor.find_actor()` in pape-eng 2023-10-01 16:10:37 -04:00
Tyler Goodlet e139d2e259 Set `registry_addrs` in CLI (click) context-config
Since `tractor` and our runtime internals is now moved to multihomed semantics,
do the same in the CLI / config entrypoints.

Also, try using the new `tractor.devx.maybe_open_crash_handler()` around
the `pikerd` CLI.
2023-10-01 15:42:31 -04:00
Tyler Goodlet 7258d57c69 Only warn on mismatched `open_registry()` input addrs
When a new (actor) caller opens the registry there are 3 possible cases:
1. - some task already opened the registry during init and set the global
  superset of registrar addrs that are expected to be used,
2. - some task after the init task opens with a subset of addrs.
3. - some task after init opens with a disjoint set - should be an error?

In the 2nd case we don't want to error since the caller may just not need to
know about other registrar (multi-homed) addrs and thus only needs
specific access - so only warn about the diff in that case. If the
caller is requesting some disjoint set then we still runtime raise.

Adjust `find_service()` to allow a null `registry_addrs` input in which
case we fail over to using whatever the pre-set `Registry.addrs` holds;
makes it simple for actors that don't want/need to know about the global
registrar set for their actor tree. Also, always pass
`tractor.find_actor(only_first=True)` (for now).
2023-10-01 15:36:17 -04:00
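
A sketch of the subset-vs-disjoint policy described above (the registry state and types are simplified stand-ins, not the actual `open_registry()` internals):

```python
import logging

log = logging.getLogger(__name__)

def check_registry_addrs(
    preset: set[tuple[str, int]],     # addrs set by the init task
    requested: set[tuple[str, int]],  # addrs a later caller passes in
) -> None:
    if requested <= preset:
        if requested != preset:
            # case 2: a subset is fine, the caller just doesn't care about
            # every (multi-homed) registrar addr; only warn on the diff.
            log.warning(f'Caller only using a registrar subset: {preset - requested}')
        return

    # case 3: a disjoint/novel addr set is likely an error.
    raise RuntimeError(f'Unexpected registrar addrs requested: {requested - preset}')
```
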
Tyler Goodlet 5d081a40d5 Port to new `parse_maddr()` name 2023-09-29 15:20:56 -04:00
Tyler Goodlet fcececce19 Move multi-addr parser mod to `tractor` 2023-09-29 14:33:15 -04:00
Tyler Goodlet b6ac6069fe Temporarily use crash handler around search CLI ep 2023-09-29 14:02:17 -04:00
Tyler Goodlet a98f5877bc ui._exec: use new `get_runtime_vars()` name 2023-09-28 12:31:24 -04:00
Tyler Goodlet 50ddef0985 data.feed: dynamically load `ui._search` mod for headless installs 2023-09-28 12:30:10 -04:00
Tyler Goodlet b1cde3df49 config: make `conf.toml` the default load target 2023-09-28 12:29:07 -04:00
Tyler Goodlet 57010d479d Support multi-homed service actors and multiaddrs
This commit requires an equivalent commit in `tractor` which adds
multi-homed transport server support to the runtime and thus the ability
ability to listen on multiple (embedded protocol) addrs / networks as
well as exposing registry actors similarly. Multiple bind addresses can
now be (bare bones) specified either in the `conf.toml:[network]`
section, or passed on the `pikerd` CLI.

This patch specifically requires the ability to pass a `registry_addrs:
list[tuple]` into `tractor.open_root_actor()` as well as adjusts all
internal runtime routines to do the same, mostly inside the `.service`
pkg.

Further details include:
- adding a new `.service._multiaddr` parser module (which will likely be
  moved into `tractor`'s core) which supports loading libp2p style
  "multiaddresses" both from the `conf.toml` and the `pikerd` CLI as
  per,
- reworking the `pikerd` cmd to accept a new `--maddr`/`-m` param that
  accepts multiaddresses.
- adjust the actor-registry subsys to support multi-homing by also
  accepting a list of addrs to its top level API eps.
- various internal name changes to reflect the multi-address interface
  changes throughout.
- non-working CLI tweaks to `piker chart` (ui-client cmds) to begin
  accepting maddrs.
- dropping all elasticsearch and marketstore flags / usage from `pikerd`
  for now since we're planning to drop mkts and elasticsearch will be an
  optional dep in the future.
2023-09-28 12:13:34 -04:00
Tyler Goodlet f94244aad4 Load `network` section from `conf.toml` for service-addr map 2023-09-28 12:04:24 -04:00
Tyler Goodlet 261c331602 Try using `.mkPoetryEnv` instead for devving (dont work yet..) 2023-09-22 16:39:54 -04:00
Tyler Goodlet 3b4a4db7b6 Muck with `develop.nix` to try and hack it with `poetry` venv, go py3.11 2023-09-22 16:39:54 -04:00
Tyler Goodlet ad59a581c7 symcache: passthrough `rapidfuzz.process.extract` kwargs 2023-09-22 15:56:49 -04:00
Tyler Goodlet c312f90c0c kucoin: port to using `rapidfuzz`
Just like the others but also flip to using a `Client.get_mkt_pairs()`
meth name for consistency across clients.
2023-09-22 15:55:19 -04:00
Tyler Goodlet 1a859bc1a2 kraken: drop now unused `rapidfuzz` import 2023-09-22 15:53:03 -04:00
Tyler Goodlet e9887cb611 binance: parse .expiry separate from .venue
Apparently they're being massive cucks and changing their futes pair
schema again now adding a `NEXT_QUARTER` contract type which we weren't
handling at all. The good news is falling back to an old symcache file
would have prevented this from crashing.

Add a new `FutesPair.expiry: str` field so that `.bs_fqme` can simply
call it during the summary FQME-ification output rendering..
2023-09-22 14:48:50 -04:00
Tyler Goodlet 0ba75df877 Add `data.match_from_pairs` fuzzy symbology scanner
A helper for scanning a "pairs table" that most backends should expose
as part of their (internal) symbology set using `rapidfuzz` over
a `dict[str, Struct]` input table.

Also expose the `data.types.Struct` at the subpkg top level.
2023-09-22 13:54:25 -04:00
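
A hedged sketch of a fuzzy pairs-table scanner in the spirit of `match_from_pairs()` (the actual signature and scoring knobs may differ): score over the table's string keys with `rapidfuzz` and return the matching entries.

```python
from rapidfuzz import fuzz, process

def match_from_pairs(
    pairs: dict[str, object],  # backend symbology table, eg. dict[str, Struct]
    query: str,
    score_cutoff: float = 50.0,
    limit: int = 10,
) -> dict[str, object]:
    matches = process.extract(
        query,
        list(pairs.keys()),
        scorer=fuzz.WRatio,
        limit=limit,
        score_cutoff=score_cutoff,
    )
    # `extract()` yields (choice, score, index) tuples for list inputs
    return {key: pairs[key] for key, score, _ in matches}
```
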
Tyler Goodlet a97a0ced8c kraken: switch to `rapidfuzz` API 2023-09-21 19:49:10 -04:00
Tyler Goodlet 46d83e9ca9 deribit: switch to `rapidfuzz` API 2023-09-21 19:44:27 -04:00
Tyler Goodlet d4833eba21 binance: switch to `rapidfuzz` API 2023-09-21 19:44:06 -04:00
Tyler Goodlet 14f124164a ib: fix mktpair fallback table: use `Client._con2mkts` to translate..
Previously we were assuming that the `Client._contracts: dict[str,
Contract]` would suffice this directly, which obviously isn't true XD

Also,
- add the `NSE` venue to skip list.
- use new `rapidfuzz.process.extract()` lib API.
- only get con deats for non null exchange names..
2023-09-21 19:14:44 -04:00
Tyler Goodlet 05959eaf70 Always ensure symcache mkt pair entry is valid type 2023-09-19 15:56:47 -04:00
Tyler Goodlet 30d55fdb27 Add `--pdb` support to `piker search` 2023-09-13 12:13:56 -04:00
Tyler Goodlet 2c88ebe697 binance: implement `Client.search_symbols()` using `rapidfuzz`
Change the deats inside the method and have the `brokerd` search task
just call it as needed since we already do internal mem caching on the
lookup table.

APIs changed so we need to make some tweaks as per:
- https://github.com/maxbachmann/RapidFuzz/blob/main/api_differences.md
- https://github.com/maxbachmann/RapidFuzz/blob/main/api_differences.md#differences-in-processor-functions

The main motivation is to get better wheel pkging support (for nixos),
better impl in C++, and a more simply licensed dep.
2023-09-13 11:59:51 -04:00
Tyler Goodlet 4a180019f0 Swap out `fuzzywuzzy` for the newer `rapidfuzz` lib 2023-09-13 11:57:02 -04:00
Tyler Goodlet 4d274b16d8 Attempt to generate .uis deps free lock file
Since `poetry` doesn't seem to actually mark optional group deps as such
in the lock file (!?) manually generate a `poetry.lock` with the
optional groups commented out in the `pyproject.toml`; this is all in
an attempt at trying to make `poetry2nix` build without any UI components
which seem to be the source of much frustration without hacking on p2n
and/or nixpkgs repos..

Further drop all the old build system files including the
setup.py and requirements.txt files.
2023-09-07 14:17:01 -04:00
Tyler Goodlet 481618cc51 kraken: handle ws live trading API symbology
Of course I missed this on the first try, but we need to use the ws market
pair symbology set (since apparently kraken loves redundancy at least 3 times
XD) when processing transactions that arrive from live clears since it's
an entirely different `LTC/EUR` style key than the `XLTCEUR` style
delivered from the ReST eps..

As part of this:
- add `Client._altnames`, `._wsnames` as `dict[str, Pair]` tables,
  leaving the `._AssetPairs` table as is keyed by the "xname"s.
- Change `Pair.respname: str` -> `.xname` since these keys all just seem
  to have a weird 'X' prefix.
- do the appropriately keyed pair table lookup via a new `api_name_set:
  str` arg to `norm_trade_records()` and set it correctly in the ws live txn
  handler task.
2023-08-30 16:32:34 -04:00
Tyler Goodlet 778d26067d ib.api: return None on manual quote timeout 2023-08-30 14:56:11 -04:00
Tyler Goodlet e54c3dc523 TOSQUASH 9005335e18: pack empty dict on no flow 2023-08-29 08:45:45 -04:00
Tyler Goodlet ad37cfbe2f Break backfill loop on `end_dt < start_dt` 2023-08-29 08:43:14 -04:00
Tyler Goodlet 8369f557c7 TOSQUASH 2e6b1330f375c310ad: adding .dev / .ui groups 2023-08-25 18:07:15 -04:00
Tyler Goodlet 461764419d ib.api: always key `._contracts` with '.ib' suffix
So that pos msgs from the ems are correctly loaded..
2023-08-25 17:47:30 -04:00
Tyler Goodlet 1002ce1e10 kraken.broker: one last fix to `Position.cumsize`.. 2023-08-25 17:47:30 -04:00
Tyler Goodlet 546049b62f data.history: handle venue-closure gap edge case 2023-08-25 17:47:30 -04:00
Tyler Goodlet e9517cdb02 ib: handle commodity-contract trade records 2023-08-25 17:47:30 -04:00
Tyler Goodlet 2b8cd031e8 By default silence `Client.get_quote()` timeout errors unless caller specifies to raise 2023-08-25 17:47:30 -04:00
Tyler Goodlet 2e6b1330f3 Add `.ui` and `.dev` deps groups via `poetry` Bo
Since we eventually want to allow users to minimally deploy `pikerd`
service-tree (aka distributed cross host) installs, we need to offer
a "headless" deps group. Really this is just the core dep set minus Qt
and some aux search related libs (for now).

The new `.dev` group is for adding hacking and testing tools including
`xonsh` since that will eventually be our REPL of choice more than
likely B)

Oh, and fix the namespace path (was a typo) for the `ledger` CLI and
of course bump the lock file.
2023-08-25 17:47:28 -04:00
Tyler Goodlet 995d1534b6 Drop hard redraws for now 2023-08-25 13:33:59 -04:00
Tyler Goodlet 9d31941d42 order_mode: embedded `Order` may be in dict form.. 2023-08-25 13:33:59 -04:00
Tyler Goodlet a695208992 brokers._daemon: drop question-comment about enabling feed module 2023-08-25 13:33:59 -04:00
Tyler Goodlet fed89562dc Import crash handler mngr from `piker.toolz` 2023-08-25 13:33:59 -04:00
Tyler Goodlet 9005335e18 ib: pack empty `dict` on no flow entry 2023-08-25 13:33:59 -04:00
Tyler Goodlet c3f8b089be Drop `.service._ahab` from storage cli runtime mods 2023-08-25 13:33:59 -04:00
Tyler Goodlet 0068119a6d ib: use `asyncio.wait_for()` on ticker first quote; on 3.11 input coros are not allowed.. 2023-08-25 13:33:59 -04:00
Tyler Goodlet 94540ce1cf Pin tomlkit as a path dep for now 2023-08-25 13:13:29 -04:00
Tyler Goodlet ea9a5e524c Factor prefer wheels deps into new `ahot_overrides`
Makes it easier to pass the overrides to multiple p2n functions (like
hopefully `.mkPoetryEnv`). Also, add some commented attempts at using
`mkPoetryEnv` and a todo list for "why", remove the `poetry` CLI main
entry point from the pyproject.toml, and bump the poetry lock file.
2023-08-17 15:56:28 -04:00
Tyler Goodlet 6b22024570 MVP get us working fully on nixos
NB: for now this is linking to a presumed local clone of the
`poetry2nix` repo since part of fixing what was adjusted here needs to
be patched upstream, which means hackin on the p2n repo in tandem B)

Since there are some dependency build issues we need
to tweak the following to get a baseline `nix develop` working:
- drop `python-levenshtein` (required by `fuzzywuzzy[speedup]`) for now
  since the overlay and/or wheel install needs to be properly figured
  out.
- build `pyqt5` from src for the moment (since `preferWheel` doesn't
  seem to be workin?) despite it taking forever XD
- add in the `flake.lock` file.
2023-08-16 12:19:00 -04:00
Tyler Goodlet 847cb7740c Drop `marketstore` mod import from CLIs loader 2023-08-16 12:15:49 -04:00
Tyler Goodlet 84dd0ae4ce Bump `msgspec`, `polars` versions and add CLI script eps 2023-08-16 08:07:35 -04:00
Tyler Goodlet 6b90e2e3ee Factor and gen per-dep overrides via "fancy" `.extend()`
As per the hot tip from the edgecases.md,
https://github.com/nix-community/poetry2nix/blob/master/docs/edgecases.md#modulenotfounderror-no-module-named-packagename

Factor all the (mostly `setuptools`) overrides into
a `pypkgs-build-requirements` set and `.extend()` in any `preferWheel`
additions (`polars`, `pyqt`, etc.) before passing to
`mkPoetryApplication(overrides=<it>)`.

Add a buncha todos for improving the poetry2nix pkging including:
- adding the override requirements to the json file for all our deps
  in the `pypkgs-build-requirements` set.
- maybe propose docs for the edgecases.md to show how to do the auto-gen
  set (via func) AND extend with further overrides like `preferWheel`?
- task to support `polars` build from src (by copying `cryptography`
  stuff) instead of only from a wheel?
- get pyqt5 building from wheel since it seems to be taking forever from
  src..
- get pyqt6 working in general - going to require taking stuff from
  nixpkgs and applying it in the overrides of p2n.
2023-08-15 12:40:01 -04:00
Tyler Goodlet 482ad1cc83 Add `prompt-toolkit` for full `xonsh` feats 2023-08-14 13:10:23 -04:00
Tyler Goodlet 6e8d07852c Pkg with `poetry`, `poetry2nix` and a `flake.nix` 2023-08-14 11:36:34 -04:00
Tyler Goodlet 4aa04e1c8e Add note about broadcast when no `.symbol` found 2023-08-11 14:52:10 -04:00
Tyler Goodlet c5ed6e6ac4 Facepalm: remove now unused `CostModel` idea.. 2023-08-11 13:34:23 -04:00
Tyler Goodlet 077d9bf1d2 Better commenting around order-mode error block 2023-08-10 12:41:53 -04:00
Tyler Goodlet 78178c2fb7 Add example mtr prober from `mtrpacket`
Started rejigging the example code linked below to use more modern
`asyncio` APIs:
https://github.com/matt-kimball/mtr-packet-python/blob/master/examples/trace-concurrent.py

Relates to #330
2023-08-10 11:49:09 -04:00
Tyler Goodlet f66a1f8b23 ib: relay submission errors, allow adhoc mkt overrides
This is a tricky edge case we weren't handling prior; an example is
submitting a limit order with a price tick precision which mismatches
that supported (probably bc IB reported the wrong one..) and IB responds
immediately with an error event (via a special code..) but doesn't
include any `Trade` object(s) nor details beyond the `reqid`. So, we
have to do a little reverse EMS order lookup on our own and ideally
indicate to the requester which order failed and *why*.

To enable this we,
- create a `flows: OrderDialogs` instance and pass it to most order/event relay
  tasks, particularly ensuring we update it ASAP in `handle_order_requests()`
  such that any successful submit has an `Ack` recorded in the flow.
- on such errors lookup the `.symbol` / `Order` from the `flow` and
  respond back to the EMS with as many details as possible about the
  prior msg history.
- always explicitly relay `error` events which don't fall into the
  sensible filtered set and wrap in
  a `BrokerdError.broker_details['flow']: dict` snapshot for the EMS.
- in `symbols.get_mkt_info()` support adhoc lookup for `MktPair` inputs
  and when defined we re-construct with those inputs; in this case we do
  this for a first mkt: `'vtgn.nasdaq'`..
2023-08-10 10:31:00 -04:00
Tyler Goodlet 562d027ee6 Relay brokerd errors to client side, correctly..
Turns out we were expecting/processing `Status(resp='error')` msgs not
`BrokerdError` (i guess bc the latter was only really being used in initial
`brokerd` msg responses and not for relay of actual provider clearing
engine failures?) and the case block match / logic wasn't really
correct. So this changes a few things:

- always do reverse `oid` lookups from `reqid`s if possible in error msg
  handling case.
- add a new `Error` client-dialog msg (derived from `Status`) which we
  now relay when `brokerd` sends a `BrokerdError` and no prior `Status`
  can be found (when it is we still fill in appropriate fields from the
  backend-error and just send back the last status msg like before).
- try hard to look up the original `Order.symbol: str` for client
  broadcasting trying first using any `Status.req` and failing over to
  embedded `.brokerd_msg` field lookups.
- drop the `Status.name = 'error'` from literal def.
2023-08-09 21:43:38 -04:00
Tyler Goodlet ff2bbd5aca ib: handle order errors via `reqid` lookup
Finally this is a reason to use our new `OrderDialogs` abstraction; on
order submission errors IB doesn't really pass back anything other than
the `orderId` and the reason so we have to conduct our own lookup for
a message to relay to the EMS..

So, for every EMS msg we send, add it to the dialog tracker and then use
the `flows: OrderDialogs` for lookup in the case where we need to relay
said error. Also, include sending a `canceled` status such that the
order won't get stuck as a stale entry in the `emsd`'s own dialog table.
For now we just filter unrelated errors out of the stream
since there's always going to be stuff to do with live/history data
queries..
2023-08-07 18:19:35 -04:00
Tyler Goodlet 85a38d057b Factor cumsize sign to var 2023-08-07 10:13:31 -04:00
Tyler Goodlet eba6a77966 Add paper-engine cost simulation support
If a backend declares a top level `get_cost()` (provisional name)
we call it in the paper engine to try and simulate costs according to
the provider's own schedule. For now only `binance` has support (via the
ep def) but ideally we can fill these in incrementally as users start
forward testing on multiple cexes.
2023-08-07 09:55:45 -04:00
Tyler Goodlet 5ed8544fd1 Bleh, move `.data.types` back up to top level pkg
Since it's depended on by `.data` stuff as well as pretty much
everything else, makes more sense to expose it as a top level module
(and maybe eventually as a subpkg as we add to it).
2023-08-05 15:57:10 -04:00
Tyler Goodlet 5d86d336f2 Parametrize account names for offline ledger tests 2023-08-03 17:28:08 -04:00
Tyler Goodlet e4ea7d6193 Lul, fix `open_ledger_dfs()` to `yield` when ledger passed in.. 2023-08-03 17:27:26 -04:00
Tyler Goodlet 60751acf85 Officially drop `Position.size` 2023-08-03 16:57:02 -04:00
Tyler Goodlet e9dfd28aac ib: add back `src/dst` parsing for fiat pairs 2023-08-03 16:56:33 -04:00
Tyler Goodlet ae444d1bc7 Add note about `xonsh.main.main()` attempted usage 2023-08-03 13:56:23 -04:00
Tyler Goodlet a51a61090d Drop `virt_cost: str` from df output 2023-08-02 20:42:18 -04:00
Tyler Goodlet 94ebe1e87e Add some new hotkey maps for chart zoom and pane hiding 2023-08-02 20:41:56 -04:00
Tyler Goodlet fff610fa8d Fix `PositionTracker.pane` attr resolve bug.. 2023-08-02 17:33:02 -04:00
Tyler Goodlet 7ecf2bd89a Guess exit transaction costs for BEP prediction
In order to attempt giving the user a realistic prediction for a BEP per
txn we need to model what the (worst case) anticipated exit txn costs
will be during the equivalent, paired entries. For now we use a simple
"symmetric cost prediction" model where we assume the exit costs will be
simply the same as the enter txn costs and thus on every entry we apply
2x the enter txn cost; on exit txns we then unroll these predictions by
keeping a cumulative sum of the cost-per-unit and reversing the charges
based on applying that mean to the current exit txn's size. Once
unrolled we apply the actual exit txn cost received from the
broker-provider.
2023-08-02 17:25:23 -04:00
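A minimal sketch of that "symmetric cost prediction" bookkeeping; the txn-dict layout and names here are illustrative only, and the real calc handles more edge cases (eg. polarity flips through zero):

```python
def project_costs(
    txns: list[dict],  # each: {'size': float, 'price': float, 'cost': float}
) -> float:
    '''
    Running (worst case) cost projection for the open position: every
    entry is charged 2x its cost (the entry fee plus an assumed-equal
    exit fee); each exit unrolls that prediction via the mean
    cost-per-unit for the closed units and then applies the actual
    exit cost reported by the broker.

    '''
    cumsize: float = 0.0
    cum_cost: float = 0.0

    for txn in txns:
        size, cost = txn['size'], txn['cost']
        entry: bool = (cumsize == 0) or (size * cumsize > 0)

        if entry:
            cum_cost += 2 * cost  # entry cost + predicted exit cost
        else:
            # reverse the prediction for the units being closed..
            per_unit = cum_cost / abs(cumsize)
            cum_cost -= per_unit * min(abs(size), abs(cumsize))
            # ..and charge the real exit cost instead.
            cum_cost += cost

        cumsize += size

    return cum_cost
```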
Tyler Goodlet 1e3a4ca36d Drop commented, now deprecated edge case notes 🏄 2023-08-01 15:49:56 -04:00
Tyler Goodlet b6a705852d Handle txn costs in BEP, factor enter/exit blocks and df row assignments B) 2023-08-01 15:42:30 -04:00
Tyler Goodlet 29bab02c64 Pass sync code flag in flex report processor 2023-08-01 09:12:52 -04:00
Tyler Goodlet 85ae180f8f Factor df conversion into lone routine: `ledger_to_dfs()` 2023-07-31 17:48:03 -04:00
Tyler Goodlet 5d24b5defb Swap branch order for enter/exit
Also fix a bug since we always need to reset cum_pos_pnl
after an `exit_to_zero` case.
2023-07-31 17:32:49 -04:00
Tyler Goodlet 100be54641 data.history: add TODO for non-zero epochs and some typing 2023-07-31 17:21:11 -04:00
Tyler Goodlet a088ebf5e2 Use inf row/col repr for debugging atm 2023-07-31 17:18:28 -04:00
Tyler Goodlet b37a447595 Implement PPU and BEP and inject the ledger frames
Since it appears impossible to compute the recurrence relations for PPU
(at least sanely) without using embedded `polars.List` elements, this
instead just implements price-per-unit and break-even-price calcs
doing a plain-ol-for-loop imperative approach with logic branching.

I burned wayy too much time trying to implement this in some kinda
`polars` DF native way without luck, so hopefully someone smarter can
come in and make it work at some point xD

Resolves a related bullet in #515
2023-07-31 16:01:31 -04:00
Tyler Goodlet b1edaf0639 First draft position accounting with `polars`
Took a little while to get right using declarative style but it's
finally workin and seems (mostly) correct B)

Computes the ppu (price per unit) using the PnL since last
net-zero-cumsize (aka the pnl from open to close) and uses it to calc
the pnl-per-exit trade (using the ppu).

Next up, bep (break even price), both per position and maybe since
ledger start or an arbitrary ref point?
2023-07-29 21:02:59 -04:00
Tyler Goodlet 385561276b Add gap detection into the `store ldshm` cmd 2023-07-26 15:45:55 -04:00
Tyler Goodlet d94ab9d5b2 order_mode: Only send cancels for dialogs that still exist 2023-07-26 15:43:48 -04:00
Tyler Goodlet 08e8990fe3 Do single `ShmArray.array` read on zero-time filtering 2023-07-26 15:41:04 -04:00
Tyler Goodlet 2c6ae5d994 Drop the `gap_dt_unit: str` column
We don't need it in `detect_time_gaps()` since doing straight up
datetime diffs in `polars` already has a humanized `str` representation
but with higher precision like '2d 1h 24m 1s' B)
2023-07-26 15:37:59 -04:00
Tyler Goodlet f1289ccce2 ib: Oof, right need to create ledger entries too.. 2023-07-26 14:55:17 -04:00
Tyler Goodlet 7802febd20 Backfill history gaps with pre-gap close 2023-07-26 12:56:06 -04:00
Tyler Goodlet 64329d44e7 Flip `tractor.breakpoint()`s to new `.pause()` 2023-07-26 12:48:19 -04:00
Tyler Goodlet bd0af7a4c0 kucoin: facepalm, use correct pair fields for price/size ticks 2023-07-26 12:44:41 -04:00
Tyler Goodlet 618c461bfb binance: always upper case venue and expiry tokens
Since we need `.get_mkt_info()` to remain symmetric across calls with
different fqme inputs, and binance generally uses upper case for its
symbology keys, we always upper the FQME related tokens for both
symcaching and general search purposes.

Also don't set `_atype` on mkt pairs since it should be fully handled
via the dst asset loading in `Client._cache_pairs()`.
2023-07-26 12:44:29 -04:00
Tyler Goodlet c00cf41541 kraken: `norm_trade()` now must accept an optional symcache 2023-07-26 12:40:58 -04:00
Tyler Goodlet 4436342d33 Change ui stuff to use new `Position.cumsize` attr name 2023-07-26 12:40:09 -04:00
Tyler Goodlet 58cf7ce10e Add `norm_trade()` ep to validator warnings 2023-07-26 12:39:08 -04:00
Tyler Goodlet 9fbb75ce7f Remove piker.trionics; already factored into `tractor` 2023-07-26 12:38:25 -04:00
Tyler Goodlet d0f72bf269 Wrap symcache loading into `.from_scratch()`
Since we need it both when explicitly reloading **and**
whenever either the file or data in the file doesn't exist.
2023-07-26 12:27:26 -04:00
Tyler Goodlet 188508575a Utilize the new `_mktmap_table` input in paper engine
In cases where a brokerd backend doesn't yet support a symcache we need
to do manual `.get_mkt_info()` queries and stash them in a table that we
pass in for the mkt failover lookup to `Account.update_from_ledger()`.
Set the `PaperBoi._mkts` to this table for use on real-time ledger
writes in `.fake_fill()`.
2023-07-26 12:21:27 -04:00
Tyler Goodlet bebc817d19 Partition ledger data frames by `bs_mktid`
Since some backends are going to have the issue of supporting multiple
venues for a given "position distinguishing instrument", like IB, we
can't presume that every `Position` can be uniquely keyed by
a `MktPair.fqme` (since the venue part can change and still be the same
"pair" relationship in accounting terms) so instead presume the
"backend system's market id" is the unique key (at least for now)
instead of the fqme.

More practically we use the `bs_mktid` to groupby-partition the per
pair DFs from the trades ledger and attempt to scan-match the input
fqme (in `ledger disect` cli) against the fqme column values set.
2023-07-26 12:13:54 -04:00
Tyler Goodlet 1d35747fbf Always clear `Position._events` in `.from_msg()`..
Not sure why i ever thought it would work otherwise but, obviously if
you're replicating a `Position` from a **summary** (IPC) msg we
need to wipe any prior clearing events from the events history..
The main use for this loading mechanism is precisely if you don't have
local access to the txn ledger and need to represent a position from
a summary 🤦

Also, never bother with ledger file fqme "rewriting" if the backend has
no symcache support (yet) since obviously there's then no symbol set to
search for a better key xD
2023-07-26 12:10:26 -04:00
Tyler Goodlet e344bdbf1b ib: rework trade handling, take ib position sizes as gospel
Instead of casting to `dict`s and rewriting event names in the
`push_tradesies()` handler, be transparent with event names (also
defining and piker-equivalent mapping them in a redefined `_statuses`
table) and types
passing them directly to the `deliver_trade_events()` task and generally
make event handler blocks much easier to grok with type annotations. To
deal with the causality dilemma of *when to emit a pos msg* due to
needing all of `execDetailsEvent, commissionReportEvent, positionEvent`
but having no guarantee on received order, we implement a small task
`clears: dict[Contract, tuple[Position, Fill]]` tracker table and (as
before) only emit a position event once the "cost" can be accessed for
the fill. We now ALWAYS relay any `Position` update from IB directly to
ensure (at least) the cumsize is correct (since it appears we still have
ongoing issues with computing this correctly via `.accounting.Position`
updates..).

Further related adjustments:
- load (fiat) balances and startup positions into a new `IbAcnt` struct.
- change `update_and_audit_pos_msg()` to blindly forward ib position
  event updates for **the size** since it should always be
  considered the true gospel for accounting!
  - drop ib-has-no-position handling since it should never occur..
- move `update_ledger_from_api_trades()` to the `.ledger` submod and do
  processing of ib_insync `Fill` related objects instead of dict-casted
  versions instead doing the casting in
  `api_trades_to_ledger_entries()`.
- `norm_trade()`: add `symcache.mktmaps[bs_mktid] = mkt` in since it
  turns out API (and sometimes FLEX) records don't contain the listing
  exchange/venue thus making it impossible to map an asset pair in the
  "position sense" (i.e. over multiple venues: qqq.nasdaq, qqq.arca,
  qqq.directedge) to an fqme when doing offline ledger processing;
  instead use frickin IB's internal int-id so there's no discrepancy.
  - also much better handle futures mkt trade flex records such that
    parsed `MktPair.fqme` is consistent.
2023-07-25 20:28:54 -04:00
Tyler Goodlet b33be86b2f ib: fill out contract tables in `.get_mkt_info()`
Since getting a global symcache result from the API is basically
impossible, we ad-hoc fill out the needed client tables on demand per
client code queries to the mkt info EP.

Also, use `unpack_fqme()` in fqme (search) pattern parser instead of
hacky `str.partition()`.
2023-07-25 16:43:08 -04:00
Tyler Goodlet 50b221f788 ib: rework client-internal contract caching
Add new `Client` attr tables to better stash `Contract` lookup results
normally mapped from some input FQME;

- `._contracts: dict[str, Contract]` for any input pattern (fqme).
- `._cons: dict[str, Contract] = {}` for the `.conId: int` inputs.
- `_cons2mkts: bidict[Contract, MktPair]` for mapping back and forth
  between ib and piker internal pair types.

Further,
- type out as many ib_insync internal types as possible mostly for
  contract related objects.
- change `Client.trades()` -> `.get_fills()` and return directly the
  result from `IB.fill()`.
2023-07-25 16:42:15 -04:00
Tyler Goodlet 897c20bd4a Moar `.accounting` tweaks
- start flipping over internals to `Position.cumsize`
- allow passing in a `_mktmap_table` to `Account.update_from_ledger()`
  for cases where the caller wants to per-call dynamically insert the
  `MktPair` via a one-off table (cough IB).
- use `polars.from_dicts()` in `.calc.open_ledger_dfs()`. and wrap the
  whole func in a new `toolz.open_crash_handler()`.
2023-07-21 23:48:53 -04:00
Tyler Goodlet 759ebe71e9 Allow disabling symcache load via kwarg as well 2023-07-20 15:27:46 -04:00
Tyler Goodlet e88913e1f3 .data._pathops: drop profiler imports, fix some naming to appease `ruff` 2023-07-20 15:27:22 -04:00
Tyler Goodlet 5e7916a0df Start `piker.toolz` subpkg for all our tooling B)
Since there's a growing list of top level mods which are more or less
utils/tools for working with the runtime; begin to move them into a new
subpkg starting with a new `.toolz.debug`.

Start with,
- a new `open_crash_handller()` for doing breakpoints around blocks that
  might error.
- move in what was `piker._profile` into `.toolz.profile` and adjust all
  importing appropriately.
2023-07-20 15:23:01 -04:00
Tyler Goodlet 5eb310cac9 ib: more fixes to try and get positioning correct..
Define and bind in the `tx_sort()` routine to be used by
`open_trade_ledger()` when datetime sorting trade records.

Further deats:
- always use the IB reported position size (since apparently our ledger
  based accounting is getting rekt on occasion..).
- better ib pos msg formatting when there's mismatches with the piker
  equivalent.
- never emit zero-size pos msgs (in terms of strict ib pos sizing) since
  when there's piker ledger sizing errors we'll send the wrong thing to
  the ems and its clients..
2023-07-19 16:46:36 -04:00
Tyler Goodlet 8a10cbf6ab Change `Position.clearsdict()` -> `.clearsitems()`
Since apparently rendering to dict from a sorted generator func clearly
doesn't preserve the order when using a `dict`-comprehension.. Further,
there's really no reason to strictly return a `dict`. Adjust
`.calc.ppu()` to make the return value instead a `list[tuple[str,
dict]]`; this results in the current df cumsum values matching the
original impl and the existing `binance.paper` unit tests now passing XD

Other details that fix a variety of nonsense..
- adjust all `.clearsitems()` consumers to the new list output.
- use `str(pendulum.now())` in `Position.from_msg()` since adding
  multiples with an `unknown` str will obviously discard them, facepalm.
- fix `.calc.ppu()` to NOT short circuit when `accum_size` is 0; it's
  been causing all sorts of incorrect size outputs in the clearing
  table.. lel, this is what fixed the unit test!
2023-07-18 21:00:19 -04:00
Tyler Goodlet fe78277948 ib: add new `.symbols` sub-mod
Move in the obvious things XD
- all the specially defined venue tables from `.api`.
- some parser funcs: `con2fqme()` and `parse_patt2fqme()`.
- the `get_mkt_info()` and `open_symbol_search()` broker eps.
- the `_asset_type_map` table which converts to `.accounting.Asset`
  compat keys for each contract/security.
2023-07-17 18:30:11 -04:00
Tyler Goodlet 9e87b6515b ib: be symcache compat by using bypass attr
Since there's no easy way to support it yet, we bypass symbology caching
for now and instead allow the `ib.ledger` routines to fill in
`MktPair` and `Asset` entries ad-hoc for the purposes of txn ledger
processing.
2023-07-17 17:31:34 -04:00
Tyler Goodlet a05a82486d Log a warning on no symcache support in a backend 2023-07-17 17:31:12 -04:00
Tyler Goodlet e4731eff10 Fix `Position.expiry == None` bug 2023-07-17 17:27:22 -04:00
Tyler Goodlet dfa13afe22 Allow backends to "bypass" symcache loading
Some backends like `ib` don't have an obvious (nor practical) way to
easily download the entire symbology set available from all its mkt
venues. For such backends loading might require a non-std approach (like
using the contract search from some input mkt-key set) and can't be
expected to necessarily be supported out of the box. As such, allow
annotating a broker sub-pkg module with a `_no_symcache: bool = True`
attr which will make `open_symcache()` yield early with an empty
`SymbologyCache` instance for use by the caller to fill in the mkt and
assets tables in whatever ad-hoc way desired.
2023-07-17 17:12:40 -04:00
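In practice the bypass is just a module-level flag on the broker sub-pkg which the cache loader checks before doing any real work; a rough sketch below using a stand-in cache type (the real `SymbologyCache` constructor and loader differ):

```python
from contextlib import asynccontextmanager as acm
from dataclasses import dataclass, field
from types import ModuleType


@dataclass
class SymbologyCache:
    # stand-in for the real cache type: per-backend asset/pair tables
    mod: ModuleType
    assets: dict = field(default_factory=dict)
    pairs: dict = field(default_factory=dict)
    mktmaps: dict = field(default_factory=dict)


@acm
async def open_symcache(mod: ModuleType):
    # backends annotated with `_no_symcache: bool = True` get an empty
    # cache yielded early which the caller then fills in ad-hoc.
    if getattr(mod, '_no_symcache', False):
        yield SymbologyCache(mod=mod)
        return

    # ..otherwise a full (re)load via the backend's `.get_assets()`
    # and `.get_mkt_pairs()` endpoints would happen here.
    raise NotImplementedError('full symcache load not sketched here')
```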
Tyler Goodlet 912f1bc635 .kraken: start new `.symbols` submod and move symcache and search stuff there 2023-07-17 16:20:11 -04:00
Tyler Goodlet 82fd785646 Adjust default `[binance]` config to use paper and disable testnets 2023-07-17 14:58:15 -04:00
Tyler Goodlet 71d0097dc7 Switch to `Position.cumsize` in tracker and order mode mods 2023-07-17 13:51:30 -04:00
Tyler Goodlet 8fb667686f Open symcaches as part of per-backend search spawning 2023-07-17 01:24:45 -04:00
Tyler Goodlet 2dab0e2e56 Expose `.data._symcache` stuff at subpkg toplevel
The list is `open_symcache()`, `get_symcache()`, `SymbologyCache`, and
`Struct`, which seems more or less fine to make part of the public
namespace. Also, make `._timeseries.t_unit` an instance of `Literal` to make
`ruff` happy?
2023-07-17 01:20:52 -04:00
Tyler Goodlet e8025d0985 .data.types.Struct: by default include non-members from `.to_dict()`.. 2023-07-16 21:32:36 -04:00
Tyler Goodlet 430309b5dc .accounting: type `Transaction.etype` as a `Literal`
Start working out the set of possible "txn types" we want to define in
a simple set.

Relates to #510
2023-07-16 21:22:15 -04:00
Tyler Goodlet 4c5507301e kraken: be symcache compatible!
This was more involved then expected but on the bright side, is going to
help drive a more general `Account` update/processing/loading API
providing for all the high-level txn update methods needed for any
backend to generically update the participant's account *state* via
an input ledger/txn set B)

Key changes to enable `SymbologyCache` compat:
- adjust `Client` pairs / assets lookup tables to include a duplicate
  keying of all assets and "asset pairs" using the (chitty) default key
  set that kraken ships which is NOT the `.altname` nor `.wsname` keys;
  the "default ReST response keys" i guess?
  - `._AssetPairs` and `._Assets` are *these ^* rest-key sets delivered
    verbatim from the endpoint responses,
  - `._pairs` and `._assets` the equivalent value-sets keyed by piker
    style FQME-looking keys (now provided via the new
    `.kraken.symbols.Pair.bs_fqme: str` and the delivered `'altname'`
    field (for assets) respectively.
- re-implement `.get_assets()` and `.get_mkt_pairs()` to appropriately
  delegate to internal methods and these new (multi-keyed) tables to
  deliver the cacheable set of symbology info.
- adjust `.feed.get_mkt_info()` to handle parsing of both fqme-style and
  wtv(-the-shit-stupid) kraken key set a caller passes via
  a key-matches-first-table-style-scan after pre-processing the
  input `fqme: str`; also do the `Asset` lookups from the new
  `Pair.bs_dst/src_asset: str` fields which should always map correctly
  to an internal asset entry delivered by `Client.get_assets()`.

Dirty impl deatz:
- add new `.kraken.symbols` and move the newly refined `Pair` there.
- add `.kraken.ledger` and move in the factored out ledger processing
  routines.
- also move out what was the `has_pp()` and a large chunk of nested-ish
  looking acnt-position verification logic blocks into a new
  `verify_balances()` B)
2023-07-16 21:21:53 -04:00
Tyler Goodlet a5821ae9b1 binance: spec `.ns_path: str` on pair structs
Provides for fully isolated symbology caching in a flat TOML table
without special case handling B)

Also explicitly define `.bs_mktid: str` which is now used by the
symcache to key-index the backend specific pair set and thus provides
for round-trip marshalling without special knowledge of any backend
schema.
2023-07-15 17:37:56 -04:00
Tyler Goodlet d794afcb5c Adjust `.clearing._paper_engine.norm_trade()` to new sig
Always expect a `tid: str` and `pair: dict[str, Struct]` for aiding with
txn struct packing B)
2023-07-15 17:35:41 -04:00
Tyler Goodlet 3d20490ee5 Move cum-calcs to `open_ledger_dfs()`, always parse `str`->`Datetime`
Previously the cum-size calc(s) was in the `disect` CLI but it's better
stuffed into the backing df converter. Also, ensure that whenever
a `dt` field is type-detected as a `str` we parse it to `DateTime`.
2023-07-15 15:43:09 -04:00
Tyler Goodlet 69314e9fca Passthrough all **kwargs `Struct.to_dict()`
Since for symcache-ing we don't want to write non-member fields we need
to allow passing the appropriate flag; i hate frickin inheritance XD
2023-07-14 20:29:05 -04:00
Tyler Goodlet b9fec091ca Allow accounting (file) dir override via kwarg
For testing (and probably hacking) it's handy to be able to point
somewhere other the default user-config dir for a ledger or account file
to test offline processing apis from `.accounting` subsystems. For now
it's a private optional named-arg: `_fp: Path` and it's obviously passed
down into the `load_account()` config getter.

Note that in the non-paper account case `Account.update_from_ledger()`
will use the ledger's `.symcache` and `.iter_txns()` method to acquire
actual txn-structs to compute positions.
2023-07-14 20:17:24 -04:00
Tyler Goodlet 803f4a6354 Add first account cumsize test; known to fail Bo 2023-07-14 17:54:13 -04:00
Tyler Goodlet 494b3faa9b Formalize transaction normalizer func signature
Since each broker backend generally needs to define a specific
field-name-schema to determine the exact instantiation arguments to
`Transaction`, we generally need each backend to define an endpoint
function to conduct this transaction from an input `dict[str, Any]`
received either directly from provided ledger APIs or from previously
stored `.accounting._ledger` saved trades ledger TOML files.

To accomplish this we now require backends to declare a new routine:

```python
def norm_trade(
    tid: str,  # the uuid for the transaction
    txdict: dict,  # the input record-dict

    # a table of mkt-symbols to backend
    # struct objects which define the (meta-data) for the backend specific
    # venue's symbology
    pairs: dict[str, Struct],

) -> Transaction:
    ...
```

which implements that record conversion (at least for trades)
and can thus be used in `TransactionLedger.iter_txns()` which requires
"some code" to implement the loading from a serialization format (aka
the input `dict` record) to our local `Transaction` struct, normally
also using a `Pair`-struct table defined (and maybe previously cached)
by the specific backend such that our (normalization layer's) `MktPair`'s
fields can be set.

For the case of our `.clearing._paper_engine` we def the routine to
simply extract the exact same fields from the TOML ledger records that
we previously had written (to it) and define it in that module.

Also, we always pass `pairs=SymbologyCache.pairs: dict[str, Struct]` on
norm trade calls such that offline ledger and accounting processing
clients can use a previously cached symbology set without having to
necessarily start the async-actor runtime to query the actual backend API
if the data has already been saved locally on the system B)

Other related:
- always passthrough kwargs in overridden `.to_dict()` method.
- only do fqme related trade record field name rewrites/renames when
  operating on a paper ledger; normally a backend's records don't
  contain these.
- fix `pendulum.DateTime` type annots.
- just deliver `Transaction`s from `.iter_txns()`
2023-07-14 16:13:04 -04:00
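For the paper-engine case mentioned above the routine can pretty much just repack the TOML record fields it previously wrote; a hedged sketch where the `Transaction` field set is assumed from surrounding commits (the real ep returns a `Transaction` instance and uses `pendulum` for datetimes):

```python
from datetime import datetime
from typing import Any


def norm_trade(
    tid: str,                # the uuid for the transaction
    txdict: dict[str, Any],  # the input record-dict
    pairs: dict[str, Any],   # symcache pairs table (unused for paper)
) -> dict[str, Any]:
    '''
    Repack a paper-ledger TOML record into `Transaction`-ready kwargs;
    since we wrote these records ourselves the field names map 1-to-1.

    '''
    dt = txdict['dt']
    return dict(
        tid=tid,
        fqme=txdict['fqme'],
        size=float(txdict['size']),
        price=float(txdict['price']),
        cost=float(txdict.get('cost', 0)),
        dt=datetime.fromisoformat(dt) if isinstance(dt, str) else dt,
        bs_mktid=txdict.get('bs_mktid'),
    )
```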
Tyler Goodlet da206f5242 Store "namespace path" for each backend's pair struct
Since some backends have multiple venues keyed by the same
symbol-pair-name, AND often the market/symbol info for those different
market-venues is entirely different (cough binance), we will have to
(sometimes) save the struct namespace-path as str for lookup when
deserializing a symcache to object form.

NOTE: this change is reliant on the following `tractor` dev commit which
improves support for constructing a path from object-instance:
bee2c36072

Add a backend(-wide) default struct path stored as a (TOML top level)
field `pair_ns_path: str` in the serialized `dict`-table as well as
allow for a per pair-`Struct` value optionally defined on each type def;
the global is only used if none was defined per struct via a `ns_path:
str`.

Further deats:
- don't write non-struct-member fields to dict for TOML file cache.
- always keep object forms, well as objects (in tables).. XD
- factor cache loading from `dict` (and thus from TOML or presumably any
  other interchange form) into a `@classmethod` constructor method B)
- allow choosing the subtable for `.search()` by name.
2023-07-13 17:58:50 -04:00
Tyler Goodlet 7f4884a6d9 data.types.Struct.to_dict(): discard non-member struct by default 2023-07-12 12:33:30 -04:00
Tyler Goodlet c30d8ac9ba ib: port to new `.accounting` APIs
Still kinda borked since i don't think there actually is a (per venue)
"get-all-symbologies" endpoint.. so we're likely gonna have to figure
out either how to hack it or provide a bypass in ledger processing?

Deatz:
- use new `Account` type name, rename endpoint vars to match and
  obviously use any new method name(s).
- mask out split ratio handling for now.
- async open the symcache prior to ledger processing (again, for now).
- drop passing `Transaction.sym`.
- fix parser set for dt-sorter since apparently 2022 and back had
  a `date` field instead?
2023-07-12 08:45:55 -04:00
Tyler Goodlet 8b9494281d Don't verify the history step period for now in `tsdb_backfill()` 2023-07-12 08:45:55 -04:00
Tyler Goodlet 06c581bfab Async enter/open the symcache in paper engine
Since we don't want to be doing a `trio.run()` from async code (being
already in the `tractor` runtime and all); for now just put a top level
block wrapping async enter until we figure out to embed it (likely)
inside `open_account()` and pass the ref to `open_trade_ledger()`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 108e8c7082 .accounting: expose `open_account()` at subsys pkg level 2023-07-12 08:45:55 -04:00
Tyler Goodlet ddcdbce1a2 Use `acnt` instead of `table` for ref name B) 2023-07-12 08:45:55 -04:00
Tyler Goodlet 14d5b3c963 Be pedantic in `open_trade_ledger()` from sync code
Require passing an explicit flag when entering from sync code with an
extra super duper explicit runtime error to indicate how to use in the
async case as well!

Also, do rewrites of both the fqme (from best match in the symcache
according to search - the worst case) or from the `bs_mktid` field if
it exists (should only be true for paper engine accounts) AND the
`bs_mktid` for paper accounts if it seems un-fully-qualified.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 8330b36e58 User/return explicit `symcache` var name in sync case 2023-07-12 08:45:55 -04:00
Tyler Goodlet 243821aab1 Bleh! Ok make `open_symcache()` and `@acm`..
Turns out, in order to make things much cleaner from inside-the-runtime usage
we do probably want to just make the manager async so that we can
generate the cache on demand from async UI inits as well as daemon
actors.. So change to that and instead make `get_symcache()` the helper
that should ONLY be called from sync funcs / offline ledger processing
utils!
2023-07-12 08:45:55 -04:00
Tyler Goodlet 4123c97139 Add symcache support to paper eng
- add the `.norm_trade()` required ep (for symcache offline loading)
- port to new `Account` apis (which now require a symcache input)
2023-07-12 08:45:55 -04:00
Tyler Goodlet 55c3d617fa brokers.core: open cached client before hitting `.get_mkt_info()` 2023-07-12 08:45:55 -04:00
Tyler Goodlet a2c6749112 binance.feed: use `Client.get_assets()` for mkt pairs
Instead of constructing them (previously manually) in `.get_mkt_info()` ep,
just call `.get_assets()` and do key lookups for assets to hand directly
to the `.src/dst` of `MktPair`.

Refine fqme input parsing to match:
- adjust parsing logic to only use `unpack_fqme()` on the input fqme
  token.
- set `.mkt_mode: str` to the derivs venue when an expiry token is
  detected in the fqme.
- pass the parsed `expiry: str` to `Client.exch_info()` to ensure
  a deriv venue (table) is used for pair lookup.
- skip any "DEFI" venue or other unknown asset type cases (since binance
  doesn't seem to define some assets anywhere?).

Also, just use the `Client._pairs` unified table for search input since
the first call to `.exch_info()` won't necessarily contain the most
up-to-date state whereas `._pairs` always will.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 19be8348e5 binance.api: add venue qualified symcache support
Meaning we add the `Client.get_assets()` and `.get_mkt_pairs()` methods.
Also implement `.exch_info()` to take in a `expiry: str` to detect
whether to look up a derivative venue instead of spot.

In support of all this we now explicitly key all assets (via
`._cache_pairs()` during the populate of `._venue2assets` sub-tables)
with their `.bs_dst_asset: str` value to ensure, for ex., a spot
`BTCUSDT` has a distinct value from any futures contracts with the same
`Pair.symbol: str` value!

Also, ensure we always create a `brokers.toml` (from template) if DNE
and binance is the user's first used backend XD
2023-07-12 08:45:55 -04:00
Tyler Goodlet 3c84ac326a binance.venues: add pair-type specific asset keying
Add `bs_src/dst_asset: str` properties which provide for unique keying
into futures vs. spot venues by offering a `.venue: str` property which,
for non-spot normally delivers an expiry suffix (eg. '.PERP') and for
spot just delivers the bare chain-token key.

This enables keying multiple venues with the same mkt pairs easily in
a global flat key->pair table needed as part of supporting a symcache.
2023-07-12 08:45:55 -04:00
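A rough sketch of that venue-qualified keying; the struct layout and field names here are assumptions drawn from the description, not binance's actual pair schema:

```python
from msgspec import Struct


class Pair(Struct, frozen=True):
    symbol: str       # eg. 'BTCUSDT'
    baseAsset: str    # eg. 'BTC'
    quoteAsset: str   # eg. 'USDT'
    expiry: str = ''  # eg. 'PERP' for perps, empty for spot

    @property
    def venue(self) -> str:
        # non-spot pairs get an expiry suffix, spot pairs don't
        return f'.{self.expiry}' if self.expiry else ''

    @property
    def bs_dst_asset(self) -> str:
        # venue-qualified dst-asset key so a spot BTCUSDT never
        # collides with a futes contract sharing the same `.symbol`
        return f'{self.baseAsset}{self.venue}'

    @property
    def bs_src_asset(self) -> str:
        return self.quoteAsset
```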
Tyler Goodlet c9681d0aa2 .nativedb: ignore an `expired/` subdir 2023-07-12 08:45:55 -04:00
Tyler Goodlet 8f40e522ef Add handy `DiffDump`ing for our `.types.Struct`
So you can do a `Struct1` - `Struct2` and we dump a little diff `list`
of tuples for anal on the REPL B)

Prolly can be broken out into its own micro-patch?
2023-07-12 08:45:55 -04:00
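Roughly, the "diff dump" can hang off `__sub__` on the struct base type by walking msgspec's `__struct_fields__` tuple and collecting mismatches; a hedged sketch, not the actual `.types.Struct` impl:

```python
from typing import Any

from msgspec import Struct


class DiffStruct(Struct):
    '''
    Subtracting two instances yields a list of (field, ours, theirs)
    tuples for every field value that differs.

    '''
    def __sub__(self, other: 'DiffStruct') -> list[tuple[str, Any, Any]]:
        diffs: list[tuple[str, Any, Any]] = []
        for field in self.__struct_fields__:  # msgspec's field name tuple
            ours, theirs = getattr(self, field), getattr(other, field)
            if ours != theirs:
                diffs.append((field, ours, theirs))
        return diffs


class Tick(DiffStruct):
    price: float
    size: float


assert (Tick(1.0, 2.0) - Tick(1.0, 3.0)) == [('size', 2.0, 3.0)]
```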
Tyler Goodlet 87185cf8bb Drop `config` get/set/del apis.. 2023-07-12 08:45:55 -04:00
Tyler Goodlet ff267890d1 Change cached-client hit msg to runtime level 2023-07-12 08:45:55 -04:00
Tyler Goodlet 749401e500 .accounting: expose new names at pkg top level 2023-07-12 08:45:55 -04:00
Tyler Goodlet 3704e2ceac Call `open_ledger_dfs()` for `disect` sub-cmd
Drop all the old `polars` (groupby + agg related) mangling to get a df
per fqme by delegating to the new routine and add in the `.cumsum()`ing
(per frame) as a first start on computing pps using dfs instead of
python dicts + loops as in `ppu()`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 8f1983fd8e Move df loading into `calc.load_ledger_dfs()`
To isolate it from the ledger/account mods and bc it is actually for
doing (eventual) position calcs / anal, might as well put it in this
mod. Add in the old-masked `ensure_state()` method content in case we
want to use it later for testing. Also tighten up the parser loading
inside `dyn_parse_to_dt()`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet f5d4f58610 `Account` api update and refine
Rename `open_pps()` -> `open_account()` for obvious reasons as well as
expect a bit tighter integration with `SymbologyCache` and consequently
`LedgerTransaction` in order to drop `Transaction.sym: MktPair`
dependence when compiling / allocating new `Position`s from a ledger.

Also we drop a bunch of  prior attrs and do some cleaning,
- `Position.first_clear_dt` we no longer sort during insert.
- `._clears` now replaces by `._events` table.
- drop the now masked `.ensure_state()` method (eventually moved to
  `.calc` submod for maybe-later-use).
- drop `.sym=` from all remaining txns init calls.
- clean out the `Position.add_clear()` method and only add the provided
  txn directly to the `._events` table.

Improve some `Account` docs and interface:
- fill out the main type descr.
- add the backend broker modules as `Account.mod` allowing to drop
  `.brokername` as input and instead wrap as a `@property`.
- make `.update_from_trans()` now a new `.update_from_ledger()` and
  expect either of a `TransactionLedger` (user-dict) or a dict of txns;
  in the latter case if we have not been also passed a symcache as input
  then runtime error since the symcache is necessary to allocate
  positions.
  - also, delegate to `TransactionLedger.iter_txns()` instead of
    a manual datetime sorted iter-loop.
  - drop all the clears datetime don't-insert-if-earlier-then-first
    logic.
- rename `.to_toml()` -> `.prep_toml()`.
- drop old `PpTable` alias.
- rename `load_pps_from_ledger()` -> `load_account_from_ledger()` and
  make it only deliver the account instance and also move out all the
  `polars.DataFrame` related stuff (to `.calc`).

And tweak some account clears table formatting,
- store datetimes as TOML native equivs.
- drop `be_price` fixing.
- obvsly drop `.ensure_state()` call to pps.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 0e94e89373 Finally, just drop `Transaction.sym`
Turns out we don't really need it directly for most "txn processing" AND
if we do it's usually related to some `Account`-ing related calcs; which
means we can instead just rely on the new `SymbologyCache` lookup to get
it when needed. So, basically just get rid of it and rely instead on the
`.fqme` to be the god-key to getting `MktPair` info (from the cache).

Further, extend the `TransactionLedger` to contain much more info on the
pertaining backend:
- `.mod` mapping to the (pkg) py mod.
- `.filepath` pointing to the actual ledger TOML file.
- `_symcache` for doing any needed asset or mkt lookup as mentioned
  above.
- rename `.iter_trans()` -> `.iter_txns()` and allow passing in
  a symcache or using the init provided one.
  - rename `.to_trans()` similarly.
- delegate paper account txn processing to the `.clearing._paper_engine`
  mod's `norm_trade()` (and expect this similarly from other backends!)
- use new `SymbologyCache.search()` to find the best but
  un-fully-qualified fqme for a given `txdict` being processed when
  writing a config (aka always try to expand to the most verbose `.fqme`
  possible).
- add a `rewrite: bool` control to `open_trade_ledger()`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 520414a096 Oof, fix `.size` tick msg encode.. 2023-07-12 08:45:55 -04:00
Tyler Goodlet ddc5f2b441 Use `MktPair.from_msg()` in symcache
Since we now fully support interchange-as-dict-msg, use the msg codec
API and drop manual `Asset` unpacking. Also, wrap `get_symcache()` in
a `pdbp` crash handler block for now B)
2023-07-12 08:45:55 -04:00
Tyler Goodlet 3994fd8384 Also handle `Decimal` interchange in `MktPair` msg-ification 2023-07-12 08:45:55 -04:00
Tyler Goodlet 13f231b926 Decode cached mkts and assets back to structs B)
As part of loading the cache we can now fill the asset sub-tables:
`.mktmaps` and `.assets` with their deserialized struct instances!
In theory this might be possible for the backend defined `Pair` structs
as well but we need to figure out probably an endpoint to offer
the conversion?

Also, add a `SymbologyCache.search()` which allows sync code to scan the
existing (known via cache) symbol set just like how async code can use the
(much slower) `open_symbol_search()` ctx endpoint 💥
2023-07-12 08:45:55 -04:00
Tyler Goodlet 309b91676d Finally, support full `MktPair` + `Asset` msgs
Previously we weren't necessarily serializing mkt pairs (for IPC msging)
entirely bc the assets `.src/.dst` were being sent just by their
str-names. This now properly supports fully serializing `Asset`s as
`dict`-msgs such that use of `MktPair.to_dict()` can be transmitted over
`tractor.MsgStream`s and deserialized entirely back to struct from on
the receiver end.

Deats:
- implement `Asset.to_dict()` and `.from_msg()`
- adjust `MktPair.to_dict()` and `.from_msg()` to use these methods.
  - drop all the hacky "if .src/.dst is str" handling.
- add better `MktPair.from_fqme()` input handling for expiry and venue;
  ensure that either can be extracted from passed fqme *and* if so they
  are also popped from any duplicate passed in `**kwargs`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet c8c28df62f Much (much) better symbology cache refinements
For starters rename the cache type to `SymbologyCache` and fill out its
interface to include an (async) `.reload()` which can be used to populate
the in-mem asset-table sets such that any tractor-runtime task can
actually directly call it. Use a symcache file name schema of
`_cache/<backend>.symcache.toml`.

Dirtier deatz:
- make `.open_symcache()` a `@cm` such that it can be used from sync code
  and will actually call `trio.run()` in the case where it needs to do a
  full (re)load; also don't write on exit only on reloads.
- add `.get_symcache()` a simple non-ctx-mngr reader which again can
  mostly be called willy-nilly from sync code without the full runtime
  being up (but likely will only work if symcache files already exist
  for the backend).
2023-07-12 08:45:55 -04:00
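Offline/sync callers then only need the already-written cache file; a tiny sketch of that read path showing the `_cache/<backend>.symcache.toml` naming (the real `get_symcache()` returns a `SymbologyCache` and can trigger a full reload, this just parses the raw TOML):

```python
from pathlib import Path
import tomllib  # stdlib as of py3.11


def read_symcache_toml(
    provider: str,
    confdir: Path,  # the user's piker config dir
) -> dict:
    # cache files are written per-backend under `_cache/`
    path = confdir / '_cache' / f'{provider}.symcache.toml'
    with path.open('rb') as fp:
        return tomllib.load(fp)
```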
Tyler Goodlet 005023275e Add a symbology cache subsys
New mod is `.data._symcache` and it needs backend clients to declare
`Client.get_assets()` and `.get_mkt_pairs()` to generate the cache files
which now go in the config dir under `_cache/`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 05af2b3e64 Rework `.accounting.Position` calcs to prep for `polars`
We're probably going to move to implementing all accounting using
`polars.DataFrame` and friends and thus this rejig preps for a much more
"stateless" implementation of our `Position` type and its internal
pos-accounting metrics: `ppu` and `cumsize`.

Summary:
- wrt to `._pos.Position`:
  - rename `.size`/`.accum_size` to `.cumsize` to be more in line
    with `polars.DataFrame.cumsum()`.
  - make `Position.expiry` delegate to the underlying `.mkt: MktPair`
    handling (hopefully) all edge cases..
  - change over to a new `._events: dict[str, Transaction]` in prep
    for #510 (and friends) and enforce a new `Transaction.etype: str`
    which is by default `clear`.
  - add `.iter_by_type()` which iterates, filters and sorts the
    entries in `._events` from above.
  - add `Position.clearsdict()` which returns the dict-ified and
    datetime-sorted table which can more-or-less be stored in the
    toml account file.
  - add `.minimized_clears()` a new (and close) version of the old
    method which always grabs at least one clear before
    a position-side-polarity-change.
  - mask-drop `.ensure_state()` since there is no more `.size`/`.price`
    state vars (per say) as we always re-calc the ppu and cumsize from
    the clears records on every read.
  - `.add_clear` no longer does bisec insorting since all sorting is
    done on position properties *reads*.
  - move the PPU (price per unit) calculator to a new `.accounting.calcs`
    as well as add in the `iter_by_dt()` clearing transaction sorted
    iterator.
    - also make some fixes to this to handle both lists of `Transaction`
      as well as `dict`s as before.

- start rename of `PpTable` -> `Account` and make a note about adding
  a `.balances` table.
- always `float()` the transaction size/price values since it seems if
  they get processed as `tomlkit.Integer` there's some suuper weird
  double negative on read-then-write to the clears table?
  - something like `cumsize = -1` -> `cumsize = --1` !?!?
- make `load_pps_from_ledger()` work again but now includes some very
  very first draft `polars` df processing from a transaction ledger.
  - use this from the `accounting.cli.disect` subcmd which is also in
    *super early draft* mode ;)
- obviously as mentioned in the `Position` section, add the new `.calcs`
  module with a `.ppu()` calculator func B)
2023-07-12 08:45:55 -04:00
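The dt-sorted iterator mentioned above is basically a tolerant sort-by-datetime over either record shape; a minimal sketch with assumed field names:

```python
from collections.abc import Iterator
from datetime import datetime
from typing import Any, Callable


def iter_by_dt(
    records: dict[str, Any] | list[Any],
    key: Callable | None = None,
) -> Iterator[Any]:
    '''
    Yield clearing records in datetime-sorted order, accepting either
    a dict of txns (only the values are used) or a plain list.

    '''
    if isinstance(records, dict):
        records = list(records.values())

    def default_key(rec: Any) -> datetime:
        # records may be dicts or `Transaction`-like objects
        dt = rec['dt'] if isinstance(rec, dict) else rec.dt
        return datetime.fromisoformat(dt) if isinstance(dt, str) else dt

    yield from sorted(records, key=key or default_key)
```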
Tyler Goodlet 745c144314 ib.feed: handle fiat (forex) pairs with `Asset`
Also finally adds full `FeedInit` and `MktPair` support for this backend
by handling:
- all "currency" fields for each `Contract` by constructing
  and `Asset` and setting the `MktPair.src` with `.atype='fiat'`.
- always render the `MktPair.src` name in the `.fqme` for fiat pairs
  (aka forex) but never for other instruments.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 10ebc855e4 ib: fully handle `MktPair.src` and `.dst` in ledger loading
In an effort to properly support fiat pairs (aka forex) as well as more
generally insert a fully-qualified `MktPair` in for the
`Transaction.sym`. Note that there's a bit of special handling for API
`Contract`s-as-dict records vs. flex-report-from-xml equivalents.
2023-07-12 08:45:55 -04:00
Tyler Goodlet c0929c042a ib: fix `Client.trades()` return type annot 2023-07-12 08:45:55 -04:00
Tyler Goodlet 9748b22d34 Always include the src asset for (parquet file names) for fiat pairs 2023-07-12 08:45:55 -04:00
Tyler Goodlet 3ff9fb3e10 clearing._messages: add todo to drop the `BrokedPosition` msg 2023-07-12 08:45:55 -04:00
Tyler Goodlet 75f01e22d7 Drop `Position.expiry`, delegate to `.mkt: MktPair`
No point having duplicate data when we already stash the `expiry` on the
mkt info type and can just read it (and cast to `datetime` obj).

Further this fixes a regression caused by converting `._clears` to
a list by adding a `._events: dict[str, Transaction]` which prevents
double entering transactions based on checking the events table for the
existing id.. Further add a sanity check that all events are popped
(for now) after serializing the clearing table for the toml account
file.

In the longer run, ideally we don't have the separate sequences ._clears
and ._events by choosing a better data structure (sorted unique set of
mkt events) maybe a specially used `polars.DataFrame` (which we kind
need eventually anyway)?
2023-07-12 08:45:55 -04:00
Tyler Goodlet 87d6115954 Add src asset name ignore via `MktPair._fqme_without_src: bool` 2023-07-12 08:45:55 -04:00
Tyler Goodlet c780164f69 Fix test to use new `load_account()` location 2023-07-12 08:45:55 -04:00
Tyler Goodlet 482403c887 Expose `.accounting.load_account()` 2023-07-12 08:45:55 -04:00
Ebisu 2ac8191722 discrepancy between live/testnet urls 2023-07-12 01:49:17 +02:00
Tyler Goodlet 35af5f11fa binance: Map `use_testnet` to off by default (since data feeds) 2023-06-30 20:20:14 -04:00
Tyler Goodlet a7ec59862a binance: Map `use_testnet` to off by default (since data feeds) 2023-06-30 20:17:02 -04:00
Tyler Goodlet ad4847cbac basic bot: iter latest ticks first to decide new submission price per quote 2023-06-27 15:47:23 -04:00
Tyler Goodlet da07685e8b Use `iterticks()` to filter to clears, get first price manually before submit.. 2023-06-27 15:47:23 -04:00
Tyler Goodlet f1eb76d29f Drop prints, break on latest clear match tick 2023-06-27 15:47:23 -04:00
Tyler Goodlet 46b22958f0 basic bot: add real-time price trailer (task) that keeps bid price 0.0005% below last clear value 2023-06-27 15:47:23 -04:00
Tyler Goodlet 57399e4f5d basic bot: drop registry addr and connect to default pikerd 2023-06-27 15:47:23 -04:00
Tyler Goodlet 5690595064 basic bot: set unix fileformat, add KBI handling to cancel order submission 2023-06-27 15:47:23 -04:00
Tyler Goodlet 63a6c6efde Add a super basic "order bot" example B)
Shows how to boot the piker runtime, submit an order to the ems, cancel
said order right away. NOTE, this uses piker's built in paper engine but
can be easily tweaked to use a live backend at the user's whim.
2023-06-27 15:47:23 -04:00
Tyler Goodlet f2fff5a5fa ib._ledger: move trades transaction processing helpers into new module 2023-06-27 15:47:05 -04:00
Tyler Goodlet c0d575c009 Change `Position.clears` -> `._clears[list[dict]]`
When you look at usage we don't end up really needing clear entries to
be keyed by their `Transaction.tid`, instead it's much more important to
ensure the time sorted order of trade-clearing transactions such that
position properties such as the size and ppu are calculated correctly.
Thus, this instead simplifies the `.clears` table to a list of clear
dict entries making a bunch of things simpler:
- object form `Position._clears` compared to the offline TOML schema
  (saved in account files) is now data-structure-symmetrical.
- `Position.add_clear()` now uses `bisect.insort()` to
  datetime-field-sort-insert into the *list* which saves having to worry
  about sorting on every sequence *read*.

Further deats:
- adjust `.accounting._ledger.iter_by_dt()` to expect an input `list`.
- change `Position.iter_clears()` to iterate only the clearing entry
  dicts without yielding a key/tid; no more tuples.
- drop `Position.to_dict()` since parent `Struct` already implements it.
2023-06-27 15:47:05 -04:00
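The sorted-on-insert bit boils down to `bisect.insort()` with a datetime key; a tiny sketch (clear-dict fields assumed, `key=` needs python >= 3.10):

```python
from bisect import insort
from datetime import datetime


def add_clear(clears: list[dict], entry: dict) -> None:
    # datetime-field-sorted insert so sequence *reads* never re-sort
    insort(clears, entry, key=lambda c: c['dt'])


clears: list[dict] = []
add_clear(clears, {'dt': datetime(2023, 6, 2), 'size': 1.0, 'price': 100.0})
add_clear(clears, {'dt': datetime(2023, 6, 1), 'size': 2.0, 'price': 99.0})
assert clears[0]['dt'] < clears[1]['dt']
```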
Tyler Goodlet 66d402b80e Load ledger records into `pl.DataFrame` for `disect`-tion 2023-06-27 15:47:05 -04:00
Tyler Goodlet ea270d3396 .data.ticktools: add reverse flag, better docs
Since it may be handy to get the latest ticks first, add a `reverse:
bool` to `iterticks()` and add some cleaner logic and a proper doc
string to `frame_ticks()`.
2023-06-27 15:47:05 -04:00
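A rough sketch of the reversed-iteration usage; the actual `iterticks()` signature in `.data.ticktools` likely differs:

```python
from collections.abc import Iterator


def iterticks(
    quote: dict,
    types: tuple[str, ...] = ('trade', 'dark_trade'),
    reverse: bool = False,
) -> Iterator[dict]:
    # walk the quote's tick list (newest-first when reverse=True)
    # yielding only the requested tick types, eg. clears.
    ticks = quote.get('ticks', [])
    for tick in (reversed(ticks) if reverse else ticks):
        if tick.get('type') in types:
            yield tick


quote = {'ticks': [
    {'type': 'bid', 'price': 99.9},
    {'type': 'trade', 'price': 100.0, 'size': 1},
    {'type': 'trade', 'price': 100.1, 'size': 2},
]}
last_clear = next(iterticks(quote, reverse=True))
assert last_clear['price'] == 100.1
```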
Tyler Goodlet 621634b5a2 Move `frame_ticks()` and tick-type defs into `.ticktools` 2023-06-27 15:47:05 -04:00
Tyler Goodlet eacc59226f rename `.data._normalize` -> `.ticktools` 2023-06-27 15:47:05 -04:00
Tyler Goodlet 7b4472e37e data._sampling.frame_ticks(): slight rework to generalize 2023-06-27 15:47:05 -04:00
Tyler Goodlet 4a8eafabb8 Never key error on bad flow pops.. 2023-06-27 13:48:03 -04:00
Tyler Goodlet e7e7919a43 Ensure paper engine logger is `piker.clearing` instance.. 2023-06-27 13:48:03 -04:00
Tyler Goodlet cdf9105d0d Export `Flume` and `Feed` from `piker.data` 2023-06-27 13:48:03 -04:00
Tyler Goodlet 49e67d5f36 Always add a paper (account) entry to order mode init
Allows for tracking paper engine orders despite the ems not necessarily
being opened by the current order mode instance (UI) in "paper"
execution mode; useful for tracking bots/strats running against the same
EMS daemon.
2023-06-27 13:48:03 -04:00
Tyler Goodlet 85fa87fe6f Update the `_emsd_main()` doc task tree layout 2023-06-27 13:48:03 -04:00
Tyler Goodlet 249b091c2f binance: better bad account in order request error msg 2023-06-27 13:48:03 -04:00
Tyler Goodlet 2d291bd2c3 ib: expose `.broker.norm_trade_records()` from pkg 2023-06-27 13:42:08 -04:00
Tyler Goodlet cf1f4bed75 Move `.accounting` related config loaders to subpkg
Like you'd think:
- `load_ledger()` -> ._ledger
- `load_account()` -> ._pos

Also fixup the old `load_pps_from_ledger()` and expose it from a new
`.accounting.cli.disect` cli cmd for trying to figure out why pp calcs
are totally mucked on stupid ib..
2023-06-27 13:42:08 -04:00
Tyler Goodlet 032976b118 view_mode: add in one missing debug_print block.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet cbe364cb62 Add explicit `piker.cli` logger name for `pikerd` 2023-06-27 13:42:08 -04:00
Tyler Goodlet efd52e8ce3 kraken: always insert ticks `list`, only append if vlm 2023-06-27 13:42:08 -04:00
Tyler Goodlet 3be1d610e0 ib: expose trade EP as `open_trade_dialog()`
Should be the final production backend to switch this over B)

Also tidy up the `update_and_audit_msgs()` validator to log vs. raise
when `validate: bool` is set; turn it off by default to avoid raises
until we figure out wtf is up with ib ledger processing or wtv..
2023-06-27 13:42:08 -04:00
Tyler Goodlet b1ef549276 Move `broker_init()` into `brokers._daemon`
We might as well start standardizing on `brokerd` init such that it can
be used more generally in client code (such as the `.accounting.cli`
stuff).

Deats of `broker_init()` impl:
- loads appropriate py pkg module,
- reads any declared `__enable_modules__: list[str]` which will be
  passed to `tractor.ActorNursery.start_actor(enabled_modules=<this>)`
- loads the `.brokers._daemon._setup_persistent_brokerd`

As expected the `accounting.cli` tools can now import directly from this
new location and use the common daemon fixture definition.
2023-06-27 13:42:08 -04:00
Tyler Goodlet f7f76137ca kraken: handle `.spot.kraken` new-style FQMEs
After #520 we've moved to better supporting explicit venues for cex
backends which is important where a provider offers both spot and
derivatives markets (kraken, binance, kucoin) and we need to distinguish
which is being traded given a common asset pair (eg. BTC/USDT). So, make
this work for `kraken`'s brokerd such that requests and pre-existing
live orders are (un)packed to/from EMS messaging form.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 3fcf44aa52 Skip marketstore docker tests, we're gonna drop it.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet d9708e28c8 kraken: drop `OHLC.ticks` field and just inject to quote before send 2023-06-27 13:42:08 -04:00
Tyler Goodlet 65f2549d90 binance: more explicit var naming in `OHLC` parse loop 2023-06-27 13:42:08 -04:00
Tyler Goodlet a4d16ec6ab Fix ems tests: add `.spot` venue token to fqme 2023-06-27 13:42:08 -04:00
Tyler Goodlet d82173dd50 Always use fully expanded FQME throughout `.clearing`
Since crypto backends now also may expand an FQME like `xbteur.kraken`
-> `xbteur.spot.kraken` (by filling in the venue token), we need to use
this identifier when looking up per-market order dialogs or submitting
new requests. The simple fix is to look up that expanded form
from the `Feed.flumes` table which is always keyed by the `MktPair.fqme:
str` - the expanded form.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 5d930175e4 kraken: use new `OrderDialogs` type, handle `.spot`
Drop the older `dict[str, ChainMap]` prototype we had since the new
`OrderDialogs` (built out while adding `binance` order support) is more
refined and general. Also, handle and now expect the `.spot` venue token
in FQMEs since kraken too has futes markets that we'll likely want to
support eventually.
2023-06-27 13:42:08 -04:00
Tyler Goodlet e4c1003aba Hard code futes venue(s) for now in `brokerd`.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet 676b00592d Don't allow `Client.api()` testnet queries by default, require explicit flag set 2023-06-27 13:42:08 -04:00
Tyler Goodlet 9970fa89ee Drop per-venue request methods from `Client`
Use dynamic lookups instead by mapping to the correct http session and
endpoints path using the venue routing/mode key. This lets us simplify
from 3 methods down to a single `Client._api()` which either can be
passed the `venue: str` explicitly by the caller (as is needed in the
`._cache_pairs()` case) or falls back to the client's current
`.mkt_mode: str` setting B)

Deatz:
- add a couple more tables to cover all authed-endpoint use cases:
  - `.venue2configkey: dict[str, str]` which maps the venue key to the
    `brokers.toml` subsection which should be used for auth creds and
    testnet config.
  - `.confkey2venuekeys: dict[str, list[str]]` which maps each config
    subsection key to the list of venue name keys for doing config to
    venues lookup.
- always build out testnet sessions for spot and futes venues (though if
  not set the sessions obviously won't ever be used).
- add and use new `config.ConfigurationError` custom exceptions when api
  creds are missing.
- rename `action: str` to `method: str` in `._api()` since it's the
  proper ReST term and switch what was "method" to be `endpoint: str`.
- mask out `.get_positions()` since we can get that from a user stream
  wss request (and are doing that).
- (in theory) import and use spot testnet url as necessary.
2023-06-27 13:42:08 -04:00
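To make the routing-table idea concrete, here's a rough sketch of how a single `_api()` can dispatch per venue; the table contents and session handling are illustrative guesses, not the actual `Client` internals:

```python
# illustrative sketch only; real tables / session objects differ
class Client:
    # venue key -> `brokers.toml` subsection used for auth creds
    venue2configkey: dict[str, str] = {
        'spot': 'spot',
        'margin': 'spot',
        'usdtm_futes': 'futes',
    }
    # config subsection -> venue name keys (reverse lookup)
    confkey2venuekeys: dict[str, list[str]] = {
        'spot': ['spot', 'margin'],
        'futes': ['usdtm_futes'],
    }

    def __init__(self, mkt_mode: str = 'spot') -> None:
        self.mkt_mode = mkt_mode
        self._sessions: dict[str, object] = {}  # venue -> http session

    async def _api(
        self,
        endpoint: str,
        method: str = 'get',
        venue: str | None = None,
        **kwargs,
    ):
        # explicit venue wins, else fall back to the current mkt mode
        sesh = self._sessions[venue or self.mkt_mode]
        return await getattr(sesh, method)(endpoint, **kwargs)
```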
Tyler Goodlet fe902c017b Drop `OrderedDict` usage, not necessary in modern python 2023-06-27 13:42:08 -04:00
Tyler Goodlet 77db2fa7c8 Support loading quarterly futes existing lives
Do parsing of the `'symbol'` and check for an `_<expiry>` suffix, in
which case we re-format in capitalized FQME style, do the
`Client._pairs[str, Pair]` lookup and then send the `Pair.bs_fqme` in
the `Order.fqme: str` field.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 7f39de59d4 Factor `OrderDialogs` into `.clearing._util`
It's finally a decent little design / interface and definitely can be
used in other backends like `kraken` which rolled something lower level
but more or less the same without a wrapper class.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 5c315ba163 Support live order loading (with caveats)
As you'd expect, query and sync the EMS with existing live orders
reported by the market venue by packing them in `Status` msgs and
sending over the order dialog stream before starting the handler tasks.

XXX CAVEAT:
- there appears to be no way (at least on the usdtm market/venue) to
  distinguish between different contracts such as perps vs. the
  quarterlies?
- for now we just assume that the perp is being used since
  there's no indicator otherwise in the 'symbol' field?
- we should maybe open an issue with the futures-connector project to
  see how they'd recommend solving this discrepancy?
2023-06-27 13:42:08 -04:00
Tyler Goodlet dc3ac8de01 binance: support order "modifies" B)
Only a couple tweaks to make this work according to the docs:
https://binance-docs.github.io/apidocs/futures/en/#modify-order-trade

- use a PUT request.
- provide the original user id in a `'origClientOrderId'` msg field.
- don't expect the same oid in the PUT response.

Other broker-mode related details:
- don't call `OrderDialogs.add_msg()` until after the existing check
  since we want to check against the *last* msgs contents not the new
  request.
- ensure we pass the `modify=True` flag in the edit case.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 6eee6ead79 binance: add accounts def to `brokers.toml` template 2023-06-27 13:42:08 -04:00
Tyler Goodlet 572badb4d8 Add full real-time position update support B)
There was one trick: it seems that binance will often send
the account/position update event over the user stream *before* the
actual clearing (aka FILLED) order update event, so make sure we put an
entry in the `dialogs: OrderDialogs` as soon as an order request comes
in such that even if the account update arrives first the
`BrokerdPosition` msg can be relayed without delay / order event order
considerations.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 4eeb232248 kraken: add more type annots in broker codez 2023-06-27 13:42:08 -04:00
Tyler Goodlet 3f555b2f5a Fix user event matching
Was using the wrong key before from our old code (not sure how that
slipped back in.. prolly doing too many git stashes XD), so fix that to
properly match against order update events with 'ORDER_TRADE_UPDATE'.

Also, don't match on the types we want to *cast to*, that's not how
match syntax works (facepalm), so we have to typecast prior to EMS msg
creation / downstream logic.

Further,
- try not bothering with binance's own internal `'orderId'` field
  tracking since they seem to support just using your own user version
  for all ctl endpoints? (thus we only need to track the EMS `.oid`s B)
- log all event update msgs for now.
- pop order dialogs on 'closed' statuses.
- wrap cancel requests in an error handler block since it seems the EMS
  is double sending requests from the client?
2023-06-27 13:42:08 -04:00
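For reference, a toy example of the match-statement gotcha mentioned above: match against the incoming event key and typecast explicitly afterwards (field names mimic binance's user-stream payloads but are only illustrative):

```python
# toy demo of "typecast before building the EMS msg", not piker code
msg = {'e': 'ORDER_TRADE_UPDATE', 'o': {'q': '0.01', 'ap': '27000.5'}}

match msg:
    # match on the event *value*, not on a type we want to cast to..
    case {'e': 'ORDER_TRADE_UPDATE', 'o': order}:
        # ..and do any casting explicitly prior to downstream logic
        size: float = float(order['q'])
        avg_price: float = float(order['ap'])
```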
Tyler Goodlet 09007cbf08 Do native symbology lookup in order methods, send user oid in cancel requests 2023-06-27 13:42:08 -04:00
Tyler Goodlet 8a06e4d073 Wrap dialog tracking in new `OrderDialogs` type, info log all user stream msgs 2023-06-27 13:42:08 -04:00
Tyler Goodlet 45ded4f2d1 binance: order submission "user id" is not the same as their internal `int` one.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet 60b0b721c5 Split out crypto$ derivs into separate type set
For crypto derivatives (at least futes), yes they are margined, but
generally not around a single unit of vlm (like equities or commodities
futes) so don't pre-set the order mode allocator to use a #unit limit,
$limit is fine.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 249d358737 Woops, fix wss_url lookup depending on venue.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet a9c016ba10 Use `Client._pairs` cross-venue table for orders
Since the request handler task will work concurrently across venues
(spot, futes, margin) we need to be sure that we look up the correct
venue to update the order dialog and this is naturally determined by the
FQME-style symbol in the `BrokerdOrder` msg; the best way to map that
symbol-key to the correct venue/`Pair` is by using said `._pairs:
ChainMap`.

Further, handle limit order errors by catching and relaying back an
error response to the EMS. Fix the "account name" to be `binance.usdtm`
so that we can eventually and explicitly support all venues by name.
2023-06-27 13:42:08 -04:00
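The cross-venue lookup boils down to something like the following (pair tables and keys made up for illustration):

```python
# illustration of the `._pairs: ChainMap` cross-venue lookup idea
from collections import ChainMap

spot_pairs = {'BTCUSDT.SPOT': 'SpotPair(...)'}
futes_pairs = {'BTCUSDT.USDTM.PERP': 'FutesPair(...)'}

_pairs = ChainMap(futes_pairs, spot_pairs)

def pair_from_fqme(bs_fqme: str):
    # one lookup resolves the symbol across every venue table
    return _pairs[bs_fqme.upper()]
```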
Tyler Goodlet 98f6d85b65 Make order request methods be venue aware 2023-06-27 13:42:08 -04:00
Tyler Goodlet f36061a149 binance: first draft live order ctl support B)
Untested fully but has ostensibly working position and balance loading
(by delegating entirely to binance's internals for that) and an MVP ems
order request handler; still need to fill out the order status update
task implementation..

Notes:
- uses user data stream for all per account balance and position tracking.
- no support yet for `piker.accounting` position tracking.
- no support yet for full order / position real-time update via user
  stream.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 43494e4994 Add note about expecting client side to cache search domain? 2023-06-27 13:42:08 -04:00
Tyler Goodlet c6d1007e66 Load `Asset`s during exchange info queries
Since we need them for accounting and since we can get them directly
from the usdtm futes `exchangeInfo` ep, just preload all asset info that
we can during initial `Pair` caching. Cache the asset infos inside a new per venue
`Client._venues2assets: dict[str, dict[str, Asset | None]]` and mostly
be pedantic with the spot asset list for now since futes seems much
smaller and doesn't include transaction precision info.

Further:
- load a testnet http session if `binance.use_testnet.futes = true`.
- add testnet support for all non-data endpoints.
- hardcode user stream methods to work for usdtm futes for the moment.
- add logging around order request calls.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 1bb7c9a2e4 Handle pending futes, optional `.filters` add testnet urls 2023-06-27 13:42:08 -04:00
Tyler Goodlet 2ee11f65f0 binance: facepalm, always lower case venue token.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet 0c74a67ee1 Move API urls to `.venues`
Also add a lookup helper for getting addrs by venue:
`get_api_eps()` which returns the rest and wss values.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 9972bd387a kraken: use new `open_trade_dialog()` ep name B) 2023-06-27 13:42:08 -04:00
Tyler Goodlet f792ecf3af binance: use new `open_trade_dialog()` endpoint name B) 2023-06-27 13:42:08 -04:00
Tyler Goodlet 3c89295efe Rename `.binance.schemas` -> `.venues` 2023-06-27 13:42:08 -04:00
Tyler Goodlet 9ff03ba00c kraken: add `<pair>.spot.kraken` fqme interpolation
As just added for binance move to using an explicit `.<venue>.kraken`
style for spot markets which makes the current spot symbology expand to
`<PAIR>.SPOT` from the new `Pair.bs_fqme: str`. Reasons for why are
laid out in the equivalent patch for binance. Obviously this also primes
for supporting kraken's futures venue APIs as well 🏄
https://docs.futures.kraken.com/#introduction

Detalles:
- add `.spot.kraken` parsing to `get_mkt_info()` so that if the venue
  token is not passed by caller we implicitly expand it in.
- change `normalize()` to only return the `quote: dict` not the topic
  key.
- rewrite live feed msg loop to use `match:` syntax B)
2023-06-27 13:42:08 -04:00
Tyler Goodlet 8e03212e40 Always expand FQMEs with .venue and .expiry values
Since there are indeed multiple futures (perp swaps) contracts including
a set with expiry, we need a way to distinguish through search and
`FutesPair` lookup which contract we're requesting. To solve this extend
the `FutesPair` and `SpotPair` to include a `.bs_fqme` field similar to
`MktPair` and key the `Client._pairs: ChainMap`'s backing tables with
these expanded fqmes. For example the perp swap now expands to
`btcusdt.usdtm.perp` which fills in the venue as `'usdtm'` (the
usd-margined futures market) and the expiry as `'perp'` (as before).
This allows distinguishing explicitly from, for ex., coin-margined
contracts which could instead (since we haven't added the support yet)
use fqmes of the sort `btcusdt.<coin>m.perp.binance`, thus making it explicit
and obvious which contract is which B)

Further we interpolate the venue token to `spot` for spot markets going
forward, which again makes cex spot markets explicit in symbology; we'll
need to add this as well to other cex backends ;)

Other misc detalles:

- change USD-M futes `MarketType` key to `'usdtm_futes'`.

- add `Pair.bs_fqme: str` for all pair subtypes with particular
  special contract handling for futes including quarterlies, perps and
  the weird "DEFI" ones..

- drop `OHLC.bar_wap` since it's no longer in the default time-series
  schema and we weren't filling it in here anyway..

- `Client._pairs: ChainMap` is now a read-only fqme-re-keyed view into
  the underlying pairs tables (which themselves are ideally keyed
  identically cross-venue) which we populate inside `Client.exch_info()`
  which itself now does concurrent pairs info fetching via a new
  `._cache_pairs()` using a `trio` task per API-venue.

- support klines history query across all venues using same
  `Client.mkt_mode_req[Client.mkt_mode]` style as we're doing for
  `.exch_info()` B)
  - use the venue specific klines history query limits where documented.

- handle new FQME venue / expiry fields inside `get_mkt_info()` ep such
  that again the correct `Client.mkt_mode` is selected based on parsing
  the desired spot vs. derivative contract.

- do venue-specific-WSS-addr lookup based on output from
  `get_mkt_info()`; use usdtm venue WSS addr if a `FutesPair` is loaded.

- set `topic: str` to the `.bs_fqme` value in live feed quotes!

- use `Pair.bs_fqme: str` values for fuzzy-search input set.
2023-06-27 13:42:08 -04:00
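The expansion scheme described above amounts to the following kind of token join (a loose sketch, not the actual parser):

```python
# loose sketch of venue/expiry fqme expansion, not the real code
def expand_bs_fqme(
    pair: str,
    venue: str = 'spot',
    expiry: str = '',
) -> str:
    tokens = [pair, venue]
    if expiry:
        tokens.append(expiry)
    return '.'.join(tokens).lower()

assert expand_bs_fqme('btcusdt', 'usdtm', 'perp') == 'btcusdt.usdtm.perp'
assert expand_bs_fqme('ethusdt') == 'ethusdt.spot'
```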
Tyler Goodlet 4c4787ce58 Add a "perpetual_future" mkt info type 2023-06-27 13:42:08 -04:00
Tyler Goodlet e68c55e9bd Switch `Client.mkt_mode` to 'usd_futes' if 'perp' in fqme
The beginning of supporting multi-markets through a common API client.
Change to futes market mode in the client if `.perp.` is matched in the
fqme. Currently the exchange info and live feed ws impl will swap out
for their usd-margin futures market equivalent (endpoints).
2023-06-27 13:42:08 -04:00
Tyler Goodlet dc23f1c9bd binance: fix `FutesPair` to have `.filters`
Not sure why it seemed like futures pairs didn't have this field but add
it to the parent `Pair` type as well as drop the overridden
`.price/size_tick` fields, instead doing the same as in spot.

Also moves the `MarketType: Literal` (for the `Client.mkt_mode: str`)
and adds a pair type lookup table for exchange info loading.
2023-06-27 13:42:08 -04:00
Tyler Goodlet d173d373cb kraken: raise `SymbolNotFound` on symbology query errors 2023-06-27 13:42:08 -04:00
Tyler Goodlet 8220bd152e Extend `MktPair` doc string to refer to binance pairs 2023-06-27 13:42:08 -04:00
Tyler Goodlet aa49c38d55 Add `binance` section to `brokers.toml` 2023-06-27 13:42:08 -04:00
Tyler Goodlet dac93dd8f8 Support USD-M futes live feeds and exchange info
Add the usd-futes "Pair" type and thus ability to load all exchange
(info for) contracts settled in USDT. Luckily we don't seem to have to
modify anything in the `Client` interface (yet) other than a new
`.mkt_mode: str` which determines which endpoint set to make requests against.
Obviously data received from endpoints will likely need diff handling as
per below.

Deats:
- add a bunch more API and WSS top level domains to `.api` with comments
- start a `.binance.schemas` module to house the structs for loading
  different `Pair` subtypes depending on target market: `SpotPair`,
  `FutesPair`, .. etc. and implement required `MktPair` fields on the
  new futes type for compatibility with the clearing layer.
- add `Client.mkt_mode: str` and a method lookup for endpoint parent
  paths depending on market via `.mkt_req: dict`

Also related to live feeds,
- drop `Struct` typecasting instead opting for specific fields both for
  speed and simplicity atm.
- breakout `subscribe()` into module level acm from being embedded
  closure.
- for now swap over the ws feed to be strictly the futes ep (while
  testing) and set the `.mkt_mode = 'usd_futes'`.
- hack in `Client._pairs` to only load `FutesPair`s until we figure out
  whether we want separate `Client` instances per market or not..
2023-06-27 13:42:08 -04:00
Tyler Goodlet ae1c5a0db0 binance: breakout into `feed` and `broker` mods like other backends 2023-06-27 13:42:08 -04:00
Tyler Goodlet ed0c2555fc binance: make pkgmod expose endpoints from coming submods 2023-06-27 13:42:08 -04:00
Tyler Goodlet 26a8638836 binance: convert to subpkg module 2023-06-27 13:42:08 -04:00
Tyler Goodlet e035af2f42 Don't filter out clearing ticks XD 2023-06-27 13:42:08 -04:00
Tyler Goodlet 2dc8ee2b4e Don't bother casting `AggTrade` values for now, just floatify the price/quantity 2023-06-27 13:42:08 -04:00
Tyler Goodlet 06026ec661 Add `binance` section to broker conf template 2023-06-27 13:42:08 -04:00
Guillermo Rodriguez 7c00ca0254 binance: add deposits/withdrawals API support
From @guilledk,
- Drop Decimal quantize for now
- Minor tweaks to trades_dialogue proto
2023-06-27 13:42:08 -04:00
Tyler Goodlet eaaf6e4cc1 kraken: fix `trades2pps()` type sig 2023-06-27 13:42:08 -04:00
Guillermo Rodriguez ef544ba55a Add order status tracking 2023-06-27 13:42:08 -04:00
Tyler Goodlet e85e031df7 Use new config get/set API in `brokercnf` cmd? 2023-06-27 13:42:08 -04:00
Tyler Goodlet e03da40867 Add a config get/set API (from @guilledk) ? 2023-06-27 13:42:08 -04:00
Tyler Goodlet f8af13d010 binance: add `submit_cancel()` & listen key mgmt
Patch again originally from @guilledk and adds a sesh for futures
testnet as well as a order canceller method B)
2023-06-27 13:42:08 -04:00
Tyler Goodlet 1d9c195506 kraken: tidy up paper mode activation comments 2023-06-27 13:42:08 -04:00
Tyler Goodlet d3a504864a Add draft `brokercnf` CLI cmd from @guilledk 2023-06-27 13:42:08 -04:00
Tyler Goodlet f99e8fe7eb binance: dynamically choose the rest method
Instead of having a buncha logic branches for 'get', 'post', etc. just
pass the `method: str` and do an attr lookup on the `asks` sesh.

Also, adjust the `trades_dialogue()` ep to switch to paper mode when no
client API key is detected/loaded.
2023-06-27 13:42:08 -04:00
Guillermo Rodriguez bc4ded2662 binance: start drafting live order ctl endpoints
First draft originally by @guilledk but updated by me 2 years later
xD. Will crash at runtime but at least has the machinery to setup signed
requests for auth-ed endpoints B)

Also adds a generic `NoSignature` error for when credentials are not
present in `brokers.toml` but user is trying to access auth-ed eps with
the client.
2023-06-27 13:42:08 -04:00
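The signed-request machinery for auth-ed binance endpoints boils down to an HMAC-SHA256 over the urlencoded query string per their public API docs; a generic sketch (not the draft endpoints from this commit):

```python
# generic binance-style request signing sketch
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_params(params: dict, api_secret: str) -> dict:
    params = dict(params, timestamp=int(time.time() * 1000))
    query: str = urlencode(params)
    params['signature'] = hmac.new(
        api_secret.encode(),
        query.encode(),
        hashlib.sha256,
    ).hexdigest()
    # the api key itself goes in the `X-MBX-APIKEY` header
    return params
```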
Tyler Goodlet 35359861bb .brokers._daemon: add notes around needed brokerd respawn tech 2023-06-27 13:41:47 -04:00
Tyler Goodlet a44bc4aeb3 binance: pre-#520 fixes for `open_cached_client()` import and struct-field casting 2023-06-27 13:41:47 -04:00
Tyler Goodlet c4277ebd8e .ui._display: filter y-ranging to `_auction_ticks`
Since we only ever want to do incremental y-range calcs based on the
price, always skip any tick types emitted by the data daemon which aren't
defined in the fundamental set. Further, toss in a new `debug_n_trade:
bool` toggle which by default turns off all logging and profiler calls;
if you want to do profiling this has to now be adjusted manually!
2023-06-27 13:41:47 -04:00
Tyler Goodlet d42aa60325 Define the flattened "fundamental double auction" emitted tick type set 2023-06-27 13:41:47 -04:00
Tyler Goodlet c57d4b2181 ib: map some tick types particulary "volumeRate" to avoid auto-range issue 2023-06-27 13:41:47 -04:00
Tyler Goodlet 6c10c2f623 order_mode: add comment around `Order` being a dict bug 2023-06-27 13:41:47 -04:00
Tyler Goodlet ad31631a8f Always round order pane $limit to 3 digits 2023-06-27 13:41:47 -04:00
Tyler Goodlet 020a3955d2 Always use fully expanded FQME throughout `.clearing`
Since crypto backends now also may expand an FQME like `xbteur.kraken`
-> `xbteur.spot.kraken` (by filling in the venue token), we need to use
this identifier when looking up per-market order dialogs or submitting
new requests. The simple fix is to look up that expanded form
from the `Feed.flumes` table which is always keyed by the `MktPair.fqme:
str` - the expanded form.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 736bbbff77 view_mode: drop rounding dispersions and "debug print" 2023-06-27 13:41:47 -04:00
Tyler Goodlet 80461e18a5 Use `MktPair.price_tick: Decimal` in dark triggers
This was actually incorrect prior, we were rounding triggered limit
orders with the `.size_tick` value's digits when we should have been
using the `.price_tick` (facepalm). So fix that and compute the rounding
number of digits (as passed to the `round(<value>, ndigits=<here>)`
builtin) and store it in the `DarkBook.triggers` tuples so that at
trigger/match time the round call is done *just prior* to msg send to
`brokerd` given the last known live L1 queue price.
2023-06-27 13:41:47 -04:00
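The digit computation is roughly the following (example values, not the actual `DarkBook` code):

```python
# example of deriving `ndigits` from a `Decimal` price tick
from decimal import Decimal

price_tick = Decimal('0.01')                 # eg. `MktPair.price_tick`
ndigits = -price_tick.as_tuple().exponent    # -> 2

last_l1_price = 105.12345
submit_price = round(last_l1_price, ndigits=ndigits)  # -> 105.12
```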
Tyler Goodlet a149e71fb1 ib: pull vnc sockaddrs from brokers.toml config if defined 2023-06-27 13:41:47 -04:00
Tyler Goodlet b28b38afab Fix double cancel bug!
Not sure how this lasted so long without complaint (literally since we
added history 1m OHLC it seems; guess it means most backends are pretty
tolerant XD ) but we've been sending 2 cancels per order (dialog) due to
the mirrored lines on each chart: 1s and 1m. This fixes that by
reworking the `OrderMode` methods to be a bit more sane and less
conflated with the graphics (lines) layer.

Deatz:
- add new methods:
  - `.oids_from_lines()` line -> oid extraction,
  - `.cancel_orders()` which makes the order client cancel requests from
    a `oids: list[str]`.
- re-impl `.cancel_all_orders()` and `.cancel_orders_under_cursor()` to
  use the above methods thus fixing the original bug B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 84613cd596 clearing._messages: don't require `.symbol` in brokerd side error msgs 2023-06-27 13:41:47 -04:00
Tyler Goodlet 909f880211 ib: prep for passing `Client` to data reset hacker
Since we want to be able to support user-configurable vnc socketaddrs,
this preps for passing the piker client directly into the vnc hacker
routine so that we can (eventually) load and read the ib brokers config
settings into the client and then read those in the `asyncvnc` task
spawner.
2023-06-27 13:41:47 -04:00
Tyler Goodlet bc58e42a74 Refine accounting related config loading routine doc strings 2023-06-27 13:41:47 -04:00
Tyler Goodlet 77dfeb4bf2 Update brokerd msgs with modern type annots, add a "closed" status 2023-06-27 13:41:47 -04:00
Tyler Goodlet f2c1988536 Better empty account console msg styling 2023-06-27 13:41:47 -04:00
Tyler Goodlet 81d5ca9bc2 ib: drop `ibis` import and use fq object imports instead 2023-06-27 13:41:47 -04:00
Tyler Goodlet a4b8fb2d6b Woops, drop paper mode detection on client side.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet e7437cb722 Facepalm, break on first matching trades ep.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet f81ea64cab Drop unused `Union` 2023-06-27 13:41:47 -04:00
Tyler Goodlet 2e878ca52a Don't pass loglevel to trade dialog endpoint
It's been getting setup in the `brokerd` daemon-actor spawn task for
a while now and worker tasks already get a ref to that global log
instance so they don't need to care (in data or trading) task spawn
endpoints.

Also move to the new `open_trade_dialog()` naming for working broker
backends B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 6b2e85e4b3 Add type-annots to sampler subscription method internals 2023-06-27 13:41:47 -04:00
Tyler Goodlet 6a1c49be4e view_mode: handle duplicate overlay dispersions
Discovered due to originally having a history loading bug between
btcusdt futes display where the same time series was being loaded into
the graphics system, this avoids the issue where 2 (or more) curves are
measured to have the same dispersion and thus do not get added as unique
entries to the `overlay_table: dict[float, tuple]` during the scaling
phase..

Practically speaking this should never really be a problem if the curves
(and their backing timeseries) are indeed unique but keying the
overlay table by the dispersion and the `Viz` is a minimal performance
hit when looping the sorted table, and it's a lot nicer, when you **do want
to show** duplicate curves, than having one overlay just not be ranged
correctly at all XD
2023-06-27 13:41:47 -04:00
Tyler Goodlet 0f8c685735 .clearing._client: return early on cancel-dead-dialog attempts 2023-06-27 13:41:47 -04:00
Tyler Goodlet 921e18728c Move `._cacheables.open_cached_client()` into `.brokers` pkg mod 2023-06-27 13:41:47 -04:00
Tyler Goodlet c0552fa352 Just use brokermods dict directly in chart entrypoint now 2023-06-27 13:41:47 -04:00
Tyler Goodlet 90810dcffd Right partition the fqme to remove broker part in mkt-info cli 2023-06-27 13:41:47 -04:00
Tyler Goodlet ebbfa7f48d Passthrough kwargs to `open_cached_client()` 2023-06-27 13:41:47 -04:00
Tyler Goodlet bb02775cab Change `ledger` CLI to use new `open_brokerd_dialog()`
Instead of effectively (and poorly) duplicating the trade dialog setup
logic, just use the new helper we exposed in the EMS module B)
Also, handle paper accounts that have no ledger / positions existing.
2023-06-27 13:41:47 -04:00
Tyler Goodlet b15e736e3e Change `piker symbol-info` -> `mkt-info`
As part of bringing the brokerd agnostic APIs up to date and modernizing
wrapping CLIs, this adds a new sub-cmd to allow more or less directly
calling the `.get_mkt_info()` broker mod endpoint and dumping both
the backend specific `Pair`-ish and `.accounting.MktPair` normalized
version to console.

Deatz:
- make the click config's `brokermods` entry a `dict`
- make `.brokers.core.mkt_info()` strip the broker name part from the
  input fqme before calling the backend.
2023-06-27 13:41:47 -04:00
Tyler Goodlet cc3037149c Factor `brokerd` trade dialog init into acm
Connecting to a `brokerd` daemon's trading dialog via a helper `@acm`
func is handy so that arbitrary trading middleware clients **and** the
ems can setup a trading dialog and, at the least, query existing
position state; this is in fact our immediate need when simply querying
for an account's position status in the `.accounting.cli.ledger` cli.

It's now exposed (for now) as `.clearing._ems.open_brokerd_dialog()` and
is called by the `Router.maybe_open_brokerd_dialog()` for every new
relay allocation or paper-account engine instance.
2023-06-27 13:41:47 -04:00
Tyler Goodlet d704d631ba Add `store ldshm` subcmd
Changed from the old `store clone` to instead simply load any shm buffer
matching a user provided `FQME: str` pattern; writing to parquet file is
only done if an explicit option flag is passed by user.

Implement new `iter_dfs_from_shms()` generator which allows iteratively
loading both 1m and 1s buffers delivering the `Path`, `ShmArray` and
`polars.DataFrame` instances per matching file B)

Also add a todo for a `NativeStorageClient.clear_range()` method.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 58c096bfad Bleh go back to using pdbp for REPL in anal 2023-06-27 13:41:47 -04:00
Tyler Goodlet 9eeea51165 Define shm buffer sizing in `.data.history`
Also adjust sizing such that the history buffer will backfill the last
six years by default (in 1m OHLC) and the hft buffer will do only 3 days
worth. Also ensure the fsp layer passes the src shm's buffer size when
allocating since the size is now required by allocators in the shm apis.
2023-06-27 13:41:47 -04:00
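Back-of-envelope sizing implied by the above (pure arithmetic, not the actual constants in `.data.history`):

```python
# ~6 years of 1m OHLC rows for the history buffer:
hist_size = 6 * 365 * 24 * 60      # 3_153_600 rows
# ~3 days of 1s rows for the hft buffer:
hft_size = 3 * 24 * 60 * 60        # 259_200 rows
```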
Tyler Goodlet 33ec27715b Sync shm mod with dev version in `tractor`, drop buffer sizing vars, require `size: int` to all allocators 2023-06-27 13:41:47 -04:00
Tyler Goodlet e1be098406 Only hard re-render `Viz`s matching backfill deats
Avoid unnecessarily re-rendering the wrong (1min OHLC history) chart
and/or other such charts with update tasks listening to the sampler
stream. Instead only redraw in tasks which are updating vizs which match
the actual details of the backfill event.

We can probably also eventually match against a range tuple (emitted in
the msg) and then have the task further only update the formatter layer
unless the range is actually in view?
2023-06-27 13:41:47 -04:00
Tyler Goodlet dd3e4b5a1f Emit backfill details in broadcasts
Send both the `Viz.name` and `timeframe: int` so that the UI side can
match against them and only update a lone curve in a single plot.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 2a1835843f Drop `wap_in_history` stuff from display loop
It's no longer part of the default OHLCV array-buffer schema and just
generally we should be processing and managing **any** non source data
in the FSP subsystem(s) despite it maybe being provided as a default by
some backends.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 8947932289 Use last 16 steps in period detection, not first 16.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 0484e97382 Try to not overrun shm during gap backfilling.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 5251561e20 TOCHERRY: into #486, add polars/apache deps for nix 2023-06-27 13:41:47 -04:00
Tyler Goodlet 937d8c410d binance: add futes API link, freeze the agg tradez struct 2023-06-27 13:41:47 -04:00
Tyler Goodlet 75ff3921b6 ib: fix mega borked hist queries on gappy assets
Explains why stuff always seemed wrong before XD

Previously whenever a time-gappy asset (like a stock due to its venue
operating hours) was being loaded, we weren't querying for a "durations
worth" of bars and this was causing all sorts of actual gaps in our
data set that shouldn't exist..

Fix that by always attempting to retrieve a min aggregate-time's
worth/duration of bars/datums in the history manager. Actually,
i implemented this in both the feed and api layers for this backend
since it doesn't seem to strictly work just implementing it at the
`Client.bars()` level, not sure why but..

Also, buncha `ruff` linting cleanups and fix the logger nameeee, lel.
2023-06-27 13:41:47 -04:00
Tyler Goodlet c8f8724887 Mask out all the duplicate frame detection 2023-06-27 13:41:47 -04:00
Tyler Goodlet c1546eb043 Add note about appending parquet files on write 2023-06-27 13:41:47 -04:00
Tyler Goodlet f8ab3bde35 Allow sampler step events to overrun; only 1s period 2023-06-27 13:41:47 -04:00
Tyler Goodlet c1201c164c Parametrize index margin around gap detection segment 2023-06-27 13:41:47 -04:00
Tyler Goodlet a575e67fab Go back to just opening sampler stream inside history update task? 2023-06-27 13:41:47 -04:00
Tyler Goodlet 34dd6ffc22 Add a configurable timeout around backend live feed startup
For now make it a larger value but ideally in the long run we can tune
it to specific backends and expose it in the config(s).
2023-06-27 13:41:47 -04:00
Tyler Goodlet fda7111305 Import from new `.data._timeseries` mod for anal 2023-06-27 13:41:47 -04:00
Tyler Goodlet 8233d12afb Detect and fill time gaps in tsdb history
For now, just detect and fill in gaps (via fresh backend queries)
*in the shm buffer* but eventually i'm pretty sure we can just write
these direct to the parquet file as well.

Use the new `.data._timeseries.detect_null_time_gap()` to find and fill
in the `ShmArray` index range, re-check it and enter a prompt if it
didn't totally fill.

Also,
- do a massive cleanup and removal of all unused/commented code.
  - drop the duplicate frames tracking, don't think we need it after
    removing multi-frame concurrent queries.
- change backfill loop variable `end_dt` -> `last_start_dt` which is
  more semantically correct.
- fix logic to backfill any missing sub-sequence portion for any frame
  query that overruns the shm buffer prependable space by detecting
  the available rows left to insert and only push those.
  - add a new `shm_push_in_between()` helper to match.
2023-06-27 13:41:47 -04:00
Tyler Goodlet f25248c871 Add `.data._timeseries` utility mod
Org all the new (time) gap detection routines here and also move in the
`slice_from_time()` epoch -> index converter routine from `._pathops` B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 54f8a615fc Use `code.interact()` in anal subcmd for now 2023-06-27 13:41:47 -04:00
Tyler Goodlet 2dbcecdac7 Generalize time-gap detector to accept unit and threshold 2023-06-27 13:41:47 -04:00
Tyler Goodlet 0dcfcea6ee Finally get partial backfills after tsdb load workinnn
It took a little while (and a lot of commenting out of old no longer
needed code) but, this gets tsdb (from parquet file) loading *before*
final backfilling from the most recent history frame until the most
recent tsdb time stamp!

More or less all the convoluted concurrency shit we had for coping with
`marketstore` IPC junk is no longer needed, particularly all the query
size limits and accompanying load loops.. The recent frame loading
technique/order *has* now changed though since we'd like to show charts
asap once tsdb history loads.

The new load sequence is as follows:
- load mr (most recent) frame from backend.
- load existing history (one shot) from the "tsdb" aka parquet files
  with `polars`.
- backfill the gap part from the mr frame back to the tsdb start
  incrementally by making (hacky) `ShmArray.push(start=<blah>)` calls
  and *not* updating the `._first.value` while doing it XD

Dirtier deatz:
- make `tsdb_backfill()` run per timeframe in a separate task.
  - drop all the loop through timeframes and insert `dts_per_tf` crap.
  - only spawn a subtask for the `start_backfill()` call which in turn
    only does the gap backfilling as mentioned above.
- mask out all the code related to being limited to certain query sizes
  (over gRPC) as was restricted by marketstore.. not gonna go through
  what all of that was since it's probably getting deleted in a follow
  up commit.
- buncha off-by-one tweaks to do with backfilling the gap from mr frame
  to tsdb start.. mostly tinkered it to get it all right but seems to be
  working correctly B)
- still use the `broadcast_all()` msg stuff when doing the gap backfill
  though don't have it really working yet on the UI side (since
  previously we were relying on the shm first/last values.. so this will
  be "coming soon" :)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 7a5c43d01a Support injecting a `info: dict` to `Sampler.broadcast_all()` calls 2023-06-27 13:41:47 -04:00
Tyler Goodlet f1252983e4 kucoin: support start and end dt based bars queries 2023-06-27 13:41:47 -04:00
Tyler Goodlet 6dc3ed8d6a Expose a `force_reformat: bool` up through graphics stack 2023-06-27 13:41:47 -04:00
Tyler Goodlet 4f4860cfb0 Update shm.push() type sig style 2023-06-27 13:41:47 -04:00
Tyler Goodlet 1e683a4b91 Another guard around sampling subscriber popped race.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 9fd412f631 Add basic time-sampling gap detection via `polars`
For OHLCV time series we normally presume a uniform sampling period
(1s or 60s by default) and it's handy to have tools to ensure a series
is gapless or contains expected gaps based on (legacy) market hours.

For this we leverage `polars`:
- add `.nativedb.with_dts()` a datetime-from-epoch-time-column frame
  "column-expander" which inserts datetime-casted, epoch-diff and
  dt-diff columns.
- add `.nativedb.detect_time_gaps()` which filters to any larger then
  expected sampling period rows.
- wrap the above (for now) in a `piker store anal` (analysis) cmd which
  atm always enters a breakpoint for tinkering.

Supporting storage client additions:
- add a `detect_period()` helper for extracting expected OHLC time step.
- add new `NativedbStorageClient` methods and attrs to provide for the above:
    - `.mk_path()` to **only** deliver a parquet-file path for use in
      other methods.
    - `._dfs` to house cached `pl.DataFrame`s loaded from `.parquet` files.
    - `.as_df()` which loads cached frames or loads them from disk and
      then caches (for next use).
    - `_write_ohlcv()` a private-sync version of the public equivalent
      meth since we don't currently have any actual async file IO
      underneath; add a flag for whether to return as a `numpy.ndarray`.
2023-06-27 13:41:47 -04:00
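A minimal `polars` sketch of the gap filter described above, assuming an integer epoch `time` column and a 60s expected step (column names illustrative, not the actual `.nativedb` impl):

```python
# minimal gap-detection sketch
import polars as pl

def detect_time_gaps(
    df: pl.DataFrame,
    expect_period: int = 60,
) -> pl.DataFrame:
    return (
        df
        .with_columns(pl.col('time').diff().alias('s_diff'))
        .filter(pl.col('s_diff') > expect_period)
    )
```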
Tyler Goodlet d027ad5a4f Whenever there is overlays, set a title on main chart price-y axis! 2023-06-27 13:41:47 -04:00
Tyler Goodlet 106ebe94bf Drop marketstore and tina install from readme, add polars and apache! 2023-06-27 13:41:47 -04:00
Tyler Goodlet d2accdac9b Drop remaining mkts nonsense from `store delete` 2023-06-27 13:41:47 -04:00
Tyler Goodlet c020ab76be Clean out marketstore specifics
- drop buncha cruft from `store ls` cmd and make it work for
  multi-backend fqme listing.
  - including adding an `.address` to the mkts client which shows the
    grpc socketaddr details.
- change defaults to the new `'nativedb'`.
- drop 'marketstore' from built-in backend list (for now)
2023-06-27 13:41:47 -04:00
Tyler Goodlet c52e889fe5 First draft history loading rework
It was a concurrency-hack mess somewhat due to all sorts of limitations
imposed by marketstore (query size limits, strange datetime/timestamp
errors, slow table loads for large queries..) and we can drastically
simplify. There's still some issues with getting new backfills (not yet
in storage) correctly prepended: there are sometimes little gaps due to shm
races when reading history indexing vs. when the live-feed startup
finishes.

We generally need tests for all this and likely a better rework of the
feed layer's init such that we're showing history in chart afap instead
of waiting on backfills or the live feed to come up.

Much more to come B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 0ba3c798d7 Drop `bar_wap` from default ohlc field set
Turns out no backend (including kraken) requires it and really this
kinda of measure should be implemented and recorded from our fsp layer
instead of (hackily) sometimes expecting it to be in "source data".
2023-06-27 13:41:47 -04:00
Tyler Goodlet 7b4f4bf804 First draft `.storage.nativedb.` using parquet files
After much frustration with a particular tsdb (cough) this instead
implements a new native-file (and apache tech based) backend which
stores time series in parquet files (for now) using the `polars` apis
(since we plan to use that lib as well for processing).

Note this code is currently **very** rough and in draft mode.

Details:
- add conversion routines for going from `polars.DataFrame` to
  `numpy.ndarray` and back.
- lay out a simple file-name as series key symbology:
  `fqme.<datadescriptions>.parquet`, though probably it will evolve.
- implement the entire `StorageClient` interface as it stands.
- adjust `storage.cli` cmds to instead expect to use this new backend,
  which means it's a complete mess XD

Main benefits/motivation:
- wayy faster load times with no "datums to load limit" required.
- smaller space footprint and we haven't even touched compression
  settings yet!
- wayyy more compatible with other systems which can lever the apache
  ecosystem.
- gives us finer grained control over the filesystem usage so we can
  choose to swap out stuff like the replication system or networking
  access.
2023-06-27 13:41:47 -04:00
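The core conversion and file-layout ideas look roughly like this; helper names and the example `<datadescriptions>` token are made up:

```python
# hedged sketch of shm-array <-> dataframe conversion and parquet IO
import numpy as np
import polars as pl

def np2pl(arr: np.ndarray) -> pl.DataFrame:
    # one column per field of the structured (shm) array
    return pl.DataFrame({name: arr[name] for name in arr.dtype.names})

def pl2np(df: pl.DataFrame, dtype: np.dtype) -> np.ndarray:
    out = np.empty(len(df), dtype=dtype)
    for name in dtype.names:
        out[name] = df[name].to_numpy()
    return out

# fqme-keyed file naming as laid out above, eg.
# df.write_parquet('btcusdt.usdtm.perp.binance.ohlcv.parquet')
# df = pl.read_parquet(path)
```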
Tyler Goodlet 8de92179da kucoin: fix missing default fields def import 2023-06-27 13:41:47 -04:00
Tyler Goodlet 94733c4a0b A PoC tsdb prototype: `parqdb` using `polars`
Turns out just (over)writing `.parquet` files with >= 1M datums is like
less than a second, and we can likely speed up appends using
`fastparquet` (usage coming soon).

Includes:
- a new `clone` CLI subcmd to test this all out by ad-hoc copy of
  (literally hardcoded to a daemon-actor specific shm allocation X) an
  existing `/dev/shm/<ShmArray>` and push to `.parquet` file.
  - code to convert from our `ShmArray.array: np.ndarray` ->
    `polars.DataFrame` (thanks SO).
  - timing checks around the file IO and np -> polars conversion.
- a `read` subcmd which i was using to test the sync `pymarketstore`
  client against our async one to see if the issues from
  https://github.com/pikers/piker/issues/443 were resolved, but nope!
2023-06-27 13:41:47 -04:00
Tyler Goodlet 7d1cc47db9 ROFL, even using `pymarketstore`'s json-RPC it's borked..
Turns out trying to switch to the old sync client and going back to
using the old json-RPC API (after having had to patch the upstream repo
to not import gRPC machinery to avoid crashes..) I'm basically getting
the exact same issues.

New tinkering results do possibly tell some new stuff:
- the EOF error seems to indeed be due to trying to fetch records which haven't been
  written (properly) - like asking for a `end=<epoch_int>` that is
  earlier then the earliest record.
- the "snappy input corrupt" error seems to have something to do with
  the `Params.end` field not being an `int` and/or the int precision not
  being chosen correctly?
  - toying with this a bunch manually shows that the internals of the
    client (particularly `.build_query()` stuff) is parsing/calcing the
    `Epoch` and `Nanoseconds` values out incorrectly.. which is likely
    part of the problem.
  - we also changed `anyio_marketstore.MarketStoreclient.build_query()`
    logic when removing `pandas` a while back, which also seems to be
    part of the problem on the async side, however reverting those
    changes also didn't fix the issue entirely; likely something else
    more subtle going on (maybe with the write vs. read `Epoch` field
    type we pass?).

Despite all this malarky, we're already underway more or less obsoleting
this whole thing with a much less complex approach of using apache
parquet files and modern filesystem tools to get a more flexible and
numerics-native dataframe-oriented tsdb B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 9859f601ca Invert data provider's OHLCV field defs
Turns out the reason we were originally making the `time: float` column in our
ohlcv arrays was bc that's what **only** ib uses XD (and/or 🤦)

Instead we changed the default field type to be an `int` (which is also
more correct to avoid `float` rounding/precision discrepancies) and thus
**do not need to override it** in all other (crypto) backends (except
`ib`). Now we only do the customization (via `._ohlc_dtype`) to `float`
only for `ib` for now (though pretty sure we can also not do that
eventually as well..)!
2023-06-27 13:41:47 -04:00
Tyler Goodlet af64152640 .data.history: update to new naming
-> `._source.def_iohlcv_fields`
-> `.storage.StorageClient`
2023-06-27 13:41:47 -04:00
Tyler Goodlet bf21d2e329 Rename default OHLCV `np.dtype` descriptions
Use `def_iohlcv_fields` for a name and instead of copying and inserting
the index field pop it for the non-index version. Drop creating
`np.dtype()` instances since `numpy`'s apis accept both input forms so
this is simpler on our end.
2023-06-27 13:41:47 -04:00
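In other words, keep the field defs as a plain tuple list and let `numpy` coerce it wherever a dtype is accepted; the field names below follow the description but may not match the real `def_iohlcv_fields` exactly:

```python
# example only; the actual field list contents may differ
import numpy as np

def_iohlcv_fields: list[tuple[str, type]] = [
    ('index', int),
    ('time', int),
    ('open', float),
    ('high', float),
    ('low', float),
    ('close', float),
    ('volume', float),
]
# pop the leading 'index' field for the non-index version
def_ohlcv_fields = def_iohlcv_fields[1:]

# numpy's apis accept the raw field list directly:
arr = np.zeros(8, dtype=def_iohlcv_fields)
```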
Tyler Goodlet 848577488e Add public config dir getter 2023-06-27 13:41:47 -04:00
Tyler Goodlet e82538eded .data: export ohlc dtypes at top level 2023-06-27 13:41:47 -04:00
Tyler Goodlet 8ccb8b0744 kucoin: drop shm-array `numpy` dtype def, our default is the same 2023-06-27 13:41:47 -04:00
Tyler Goodlet e83de2906f Relegate old marketstore cli eps to masked module 2023-06-27 13:41:47 -04:00
Tyler Goodlet 33c464524b Lower the paper engine order-cancel latency 2023-06-27 13:41:47 -04:00
Tyler Goodlet cb774e5a5d Re-implement `piker store` CLI with `typer`
Turns out you can mix and match `click` with `typer` so this moves what
was the `.data.cli` stuff into `storage.cli` and uses the integration
api to make it all work B)

New subcmd: `piker store`
- add `piker store ls` which lists all fqme keyed time-series from backend.
- add `store delete` to remove any such key->time-series.
  - now uses a nursery for multi-timeframe concurrency B)

Mask out all the old `marketstore` specific subcmds for now (streaming,
ingest, storesh, etc..) in anticipation of moving them into
a subpkg-module and make sure to import the sub-cmd module in our top
level cli package.

Other `.storage` api tweaks:
- drop the reraising with custom error (for now).
- rename `Storage` -> `StorageClient` (or should it be API?).
2023-06-27 13:41:47 -04:00
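The click/typer mixing pattern referenced above is roughly the following; the app and group names are placeholders, not piker's actual cli layout:

```python
# sketch of mounting a `typer` app on an existing `click` group
import click
import typer

store = typer.Typer()

@store.command()
def ls() -> None:
    """List all fqme-keyed time series from the backend."""

@store.command()
def delete(fqme: str) -> None:
    """Remove a key -> time-series from the backend."""

@click.group()
def cli() -> None:
    ...

# convert the typer app into a click command and attach it
cli.add_command(typer.main.get_command(store), name='store')
```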
Tyler Goodlet 1ec9b0565f Move `.data.cli` to `.storage.cli` 2023-06-27 13:41:47 -04:00
Tyler Goodlet 7ab97fb21d Add marketstore client as storage-backend module
To kick off our (tsdb) storage backends this adds the first one, implementing
a new `Storage(Protocol)` client interface. Going forward, the top level
`.storage` pkg-module will now expose backend agnostic APIs and helpers
whilst specific backend implementations will adhere to that middle-ware
layer.

Deats:
- add `.storage.marketstore.Storage` as the first client implementation,
  moving all needed (import) dependencies out from
  `.service.marketstore` as well as `.ohlc_key_map` and `get_client()`.
- move root `conf.toml` loading from `.data.history` into
  `.storage.__init__.open_storage_client()` which now takes in a `name:
  str` and does all the work of loading the correct backend module, its
  config, and determining if a service-instance can be contacted and
  a client loaded; in the case where this fails we raise a new
  `StorageConnectionError`.
- add a new `.storage.get_storagemod()` just like we have for brokers.
- make `open_storage_client()` also return the backend module such that
  the history-data layer can make backend specific calls as needed (eg.
  ohlc_key_map).
- fall back to a basic non-tsdb backfill when `open_storage_client()`
  raises the new connection error.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 29211b200d Start `piker.storage` subsys: cross-(ts)db middlewares
The plan is to offer multiple tsdb and other storage backends (for
a variety of use cases) and expose them similarly to how we do for
broker and data providers B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet ae8358a5e7 Tidy up unused imports and doc string 2023-06-27 13:32:18 -04:00
Tyler Goodlet 00a51c0288 Use new `msgspec.structs` api for `.typecast()` 2023-06-27 13:26:52 -04:00
Tyler Goodlet 994564f923 Just warn-print when annots are str values? 2023-06-27 13:26:52 -04:00
Tyler Goodlet 12172cc5cd Make `.data.types.Struct.typecast()` work via type lookup from `builtins` 2023-06-27 13:26:52 -04:00
goodboy a65910c732
Merge pull request #523 from ebisu4/master
get font style from main config
2023-06-27 13:25:11 -04:00
ebisu4 949fa9fbb9
Merge pull request #1 from pikers/fix_custom_font_settings
Fix reading font size from user config on linux
2023-06-21 10:53:46 +02:00
Tyler Goodlet 4b77de5e2d Fix reading font size from user config
Was borked on linux if you didn't provide the setting in `conf.toml` due
to some logic errors. Fix that by rejigging `DpiAwareFont` internal
variables:

- add new `._font_size_calc_key: str` which was the old `._font_size`
  and is only used when no explicit font size is set by the user in the
  `conf.toml` config:
  - this is the "key" that is used to lookup a calculation function
    which attempts to compute a best fit font size given the measured
    system displays DPI settings and dimensions.
- make the `._font_size: int` the **actual** font size integer that is
  cached and passed to `Qt` to set the size.
  - this is overridden by user config now if defined.
- change the input kwarg `font_size: str` to the constructor to the better
  named private `_font_size_key: str` which gets set to the new
  `._font_size_calc_key`.

Also, adjust all client code which instantiates `DpiAwareFont` to use
the new `_font_size_key` kwarg input so nothing breaks XD
2023-06-19 15:13:01 -04:00
Ebisu d660376206 get font style from main config 2023-06-19 00:10:37 +02:00
goodboy 201b0d99c1
Merge pull request #518 from pikers/fix_price_label_digits
Fix price label precision as `MktPair.price_tick_digits`
2023-05-31 10:59:30 -04:00
Tyler Goodlet c27da99e12 Fix price label precision as `MktPair.price_tick_digits`
Was only really borked for higher-precision but lower priced assets
(like TLOS or peeneez) which have a `MktPair.price_tick_digits >= 2`.

The issue was using the wrong attr, the `size_tick_digits`..
2023-05-31 10:36:20 -04:00
goodboy e51ba404fc
Merge pull request #489 from pikers/rekt_pps
Rekt pps? problem? => `piker.accounting`
2023-05-28 15:41:50 -04:00
Tyler Goodlet abd3cefd84 Parametrize ems service test to cancel with API and kbi 2023-05-28 14:28:56 -04:00
Tyler Goodlet f6549fcb62 Always allocate a new `OrderClient` per `open_ems()` call 2023-05-28 14:05:03 -04:00
Tyler Goodlet 41aa87f847 Fix `_digits` attr names in order mode.. 2023-05-28 13:13:43 -04:00
Tyler Goodlet d6331ce9e1 Add nonlocal annots to satisfy ruff 2023-05-28 12:41:14 -04:00
Tyler Goodlet 4f67ac0337 Change to new context-cancelled msg contents: pikerd is canceller 2023-05-26 17:16:43 -04:00
Tyler Goodlet 024cf8b8c2 add in `[kucoin]` section to brokers conf 2023-05-26 16:51:11 -04:00
Tyler Goodlet 9ec664f7c8 Drop elastic search container build for now since we're also skipping the test 2023-05-26 16:50:53 -04:00
Tyler Goodlet 5e2107ff15 Adjust `config.load()` to handle CI git checkout dir, seems they changed it!? 2023-05-26 16:50:15 -04:00
Tyler Goodlet 5f1d0fcb8c `tmpconfdir`: always assert brokers config created 2023-05-26 14:58:59 -04:00
Tyler Goodlet 3b5bd8f43e Ensure quote last price is a `float` 2023-05-26 14:42:35 -04:00
Tyler Goodlet 40c5f39f0d conftest: be explicit about which config we touch 2023-05-26 14:42:09 -04:00
Tyler Goodlet 3d8c1a7b3c ib: don't log-emit ib pp msg when none exists.. 2023-05-26 14:05:32 -04:00
Tyler Goodlet 06cc3ac92c Tidy up ems tests as per some `ruff`in 2023-05-25 18:04:52 -04:00
Tyler Goodlet 4a8e8a32f9 Fix account config loading logic discovered in new test XD 2023-05-25 17:56:14 -04:00
Tyler Goodlet 9bc11d8dd9 Add basic config checking tests 2023-05-25 17:55:20 -04:00
Tyler Goodlet 9c80969fd5 .data.validate: add missing endpoint warnings 2023-05-25 16:01:21 -04:00
Tyler Goodlet da4d344e63 Change to `piker_pin` branch in `tomlkit` fork 2023-05-25 13:53:14 -04:00
goodboy 073ff0103a
Merge pull request #506 from pikers/py311
`python3.11` support!
2023-05-24 19:34:10 -04:00
Tyler Goodlet f0a346dcc3 Some linting fixes after trying out `ruff` 2023-05-24 17:25:23 -04:00
Tyler Goodlet 7381c361cd Strictly drop `LinkedSplits.symbol` B) 2023-05-24 15:42:14 -04:00
Tyler Goodlet 1b577eebf6 Change over the UI layer to use `MktPair`
Including changing to `LinkedSplits.mkt: MktPair` and adding an explicit
setter method for setting it and being sure that nothing breaks
in the display system init!

For this commit we leave in warning access to `LinkedSplits.symbol` but
will remove in following commit.
2023-05-24 15:30:17 -04:00
Tyler Goodlet 39af215d61 kraken: use new `Position.mkt` attr 2023-05-24 15:29:42 -04:00
Tyler Goodlet 35f0520cb0 Drop `Symbol` / `.symbol` support from `.accounting`
Only stuff left was the allocator stuff. Drop the top level subpkg
exports and finally kill off the awkwardly named
`Symbol.lot_size_digits` properties XD

Expose a bunch more util funcs at subpkg top level, do some typing in
allocator method internals.
2023-05-24 15:26:51 -04:00
Tyler Goodlet 738d0ca38b Rename db tests to test_docker_services 2023-05-24 12:30:57 -04:00
Tyler Goodlet bd8e4760d5 Port everything strictly to `Position.mkt` and `Flume.mkt` 2023-05-24 12:16:28 -04:00
Tyler Goodlet 9a063ccb11 ib: Solve lingering bugs for non-vlm contracts
Contract matching in live setup was borked; switch to
`MktPair.dst.atype` matching, don't override the `cmdty` "venue" (a
weird special case) in `get_mkt_info()` otherwise lookup will fail..
2023-05-24 09:11:24 -04:00
Tyler Goodlet e8787d89c6 ib: unset vlm via new `FeedInit.shm_write_opts` field 2023-05-24 08:28:16 -04:00
Tyler Goodlet 8e97814c1f Add "no vlm" indication to `FeedInit`
Stash it for now in the (now mutable by default) `.shm_write_opts` and
have the new `Flume._has_vlm: bool` (only set to false internally by
feed layer) which can be read via new public `.has_vlm()` predicate.
Move out the old `.ui/_fsp` helper logic to this flume method.
2023-05-24 08:25:14 -04:00
Tyler Goodlet e82f7f9012 Skip elasticsearch test for now, container build seems borked? 2023-05-23 22:39:38 -04:00
Tyler Goodlet b44b0915ca ib: i guess only discard `MktPair.src: Asset` on non-forex XD 2023-05-23 19:11:40 -04:00
Tyler Goodlet ff74d47fd5 kucoin: fix fqme or search result key lookups 2023-05-23 16:46:21 -04:00
Tyler Goodlet 6ad8c603d5 More detailed `Position.events` todo 2023-05-23 16:45:58 -04:00
Tyler Goodlet cd55d027c4 Re-implement db tests using new ahab daemons
Avoids the really sloppy flag passing to `open_pikerd()` and allows for
separation of the individual docker daemon starts.

Also add a new `root_conf() -> Path` fixture which will open and load
the `dict` for the new root `conf.toml` file.
2023-05-23 14:16:08 -04:00
Tyler Goodlet d094625bd6 Activate docker daemons via flags using exit stack 2023-05-23 14:16:08 -04:00
Tyler Goodlet e7a172b656 Reimplement marketstore and elasticsearch daemons
Using the new `._ahab.start_ahab_service()` mngr of course, and now
support user config overrides (such that our defaults can be modified by
a keen user, say using a config file, or for testing). This is where the
functionality moved out of the `pikerd` init now lives - instead of
being triggered by bool flag inputs to that factory.

For marketstore actually support overriding the entire yaml config via
runtime `_yaml_config_str: str` formatting with any passed user dict,
primarily focussing on supporting override of the sockaddrs for testing.
2023-05-23 14:16:02 -04:00
Tyler Goodlet bd919f9d66 _ahab: use `Services` api to spawn docker tasks
Allows for using the `Services.cancel_service()` api for explicit
cancellation in tests and eventually for remote teardown. Change
`.start_ahab()` to an `@acm` `start_ahab_service()` and just yield back
the same values we were returning prior. Also fix the logging (level) to
actually reflect what's passed in - we weren't using the correct name
/ instance from the `.service` subpkg..
2023-05-23 14:16:02 -04:00
Tyler Goodlet 611d1ee3fc Drop db flags from pikerd startup 2023-05-23 14:16:02 -04:00
Tyler Goodlet 56b23e1fcc Add docker and elasticsearch to test deps 2023-05-23 14:16:02 -04:00
Tyler Goodlet d3bafb0063 Always prefer a config template if found 2023-05-23 14:16:02 -04:00
Tyler Goodlet 7f246697b4 Remove remaining `fqsn` usage from code base minus backward compats 2023-05-23 14:16:02 -04:00
Tyler Goodlet dd10acbbf9 Replace `Transaction.fqsn` -> `.fqme`
Change over all client (broker) code which constructs transactions
and finally wipe required `.fqsn` usage from `.accounting` B)
2023-05-23 14:15:57 -04:00
Tyler Goodlet 31a00eca94 Rename fqsn -> fqme in ui mods 2023-05-22 12:13:00 -04:00
Tyler Goodlet c93d119873 Move tmpdir creation into separate fixture
Since `.config.load()` was changed to not touch conf files by default
(without explicitly setting `touch_if_dne: bool`), this ensures both the
global module value is set and the `brokers.toml` file exists before
every test.
2023-05-22 12:03:32 -04:00
Tyler Goodlet 588770d034 ib: rename lingering fqsn -> fqme 2023-05-22 12:00:13 -04:00
Tyler Goodlet 2f2d612b5f Add todo to switch to `dst/src` delim 2023-05-22 11:57:37 -04:00
Tyler Goodlet 660a94d610 Don't expect `conf.toml`'s network section
For testing this is particularly true until we offer a template
with whatever (likely localhost) settings planned to ship.
2023-05-22 11:54:36 -04:00
Tyler Goodlet e4e4cacef3 .data.feed: Less stringency with fqme matching
`Flume.mkt.fqme` might not be exactly the same as the local
version now since we've had to add some hacks to certain backends
(cough ib) to handle `MktPair.src` not being set as an `Asset` (yet).
2023-05-22 11:52:36 -04:00
Tyler Goodlet 60a6f3269c ib: use flex report datetime sort
Since `open_trade_ledger()` now requires a sort we pass in a combo of
the std `pendulum.parse()` for API records and a custom flex parser for
flex entries pulled offline.

Add special handling for `MktPair.src` such that when it's a fiat (like
it should always be for most legacy assets) we try to get the fqme
without that `.src` token (i.e. not mnqusd) to avoid breaking
roundtripping of live feed requests (due to new symbology) as well as
the current tsdb table key set..

Do a wholesale renaming of fqsn -> fqme in most of the rest of the
backend modules.
2023-05-22 09:41:44 -04:00
Tyler Goodlet 53003618cb Add longer timeout on brokerd ctx cancel; seems to work? 2023-05-22 00:16:58 -04:00
Tyler Goodlet c6da09f3c6 Add fast(er), time-sorted ledger records
Turns out that reading **and** writing with `tomlkit` is just wayy too slow
for large documents like ledger files so move to using the `tomli`
sibling pkg `tomli-w` which seems to much improve on the latency, though
obviously longer run we're likely going to want:
- a better algorithm for only back loading records using as little
  history as possible
- a different serialization format for production maybe something
  like apache parquet?

The only issue with using a non-style-preserving writer is that we don't
necessarily get TOML conf ordering for free (without first ordering it
ourselves), and thus this patch also adds much more general date-time
sorting machinery which is now **required** when using
`open_trade_ledger()` via a `tx_sort: Callable`. By default we now
provide `.accounting._ledger.iter_by_dt()` (exposed in the subpkg mod)
which conducts dynamic "datetime key detection" based parsing of records
based on a `parsers: dict[str, Callable]` input table. The default should
handle most use cases including all currently supported live backends
(kraken, ib) as well as our paper engine ledger-records format.

Granulars:
- adjust `Position.iter_clears()` to use new `iter_by_dt(key=lambda ..)`
  signature.
- add `tomli-w` to setup and our `tomlkit` fork to requirements file.
- move `.write_config()` to bottom of class defn.
- fix closed pos popping to not error if pp was already popped..
2023-05-18 18:27:54 -04:00
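
A hedged sketch of the "datetime key detection" sort described in the commit above; the parser table entries and record field names here are assumptions, not the actual piker defaults:

```python
# Illustrative only: sort ledger records by whichever datetime-ish field
# each record happens to carry; parser table contents are assumed.
from datetime import datetime
from typing import Any, Callable, Iterator

import pendulum  # already a piker dep, used for ISO-string parsing


def iter_by_dt(
    records: dict[str, dict[str, Any]],
    parsers: dict[str, Callable[[Any], datetime]] | None = None,
) -> Iterator[tuple[str, dict[str, Any]]]:
    parsers = parsers or {
        'dt': lambda v: v,                # already a datetime object
        'datetime': pendulum.parse,       # ISO formatted strings
        'time': datetime.fromtimestamp,   # epoch floats
    }

    def dt_key(item: tuple[str, dict[str, Any]]) -> datetime:
        _, record = item
        for field, parse in parsers.items():
            if field in record:
                return parse(record[field])
        raise ValueError(f'no datetime-like field in {record!r}')

    return iter(sorted(records.items(), key=dt_key))
```
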
Tyler Goodlet 89d24cfe33 Oof, fix closed position popping by fqme.. 2023-05-18 12:52:34 -04:00
Tyler Goodlet 8d7a9fa19e Make `MktPair.pair()` a meth, allow passing in a delim character 2023-05-18 12:01:30 -04:00
Tyler Goodlet a1a10676cd Go back to `tomllib` for ledger loading, it's wayy faster 2023-05-18 11:27:31 -04:00
Tyler Goodlet 97b2b25256 Avoid import cycle in clearing client 2023-05-18 01:25:04 -04:00
Tyler Goodlet b2bf0b06f2 ib.api: wholesale fqsn -> fqme renames 2023-05-17 16:56:04 -04:00
Tyler Goodlet 907eaa68cb Pass `mkt: MktPair` to `.open_history_client()`
Since porting all backends to the new `FeedInit` + `MktPair` + `Asset`
style init, we can now just directly pass a `MktPair` instance to the
history endpoint(s) since it's always called *after* the live feed
`.stream_quotes()` ep B)

This has a lot of benefits including allowing brokerd backends to have
more flexible, pre-processed market endpoint meta-data that piker has
already validated; makes handling special cases much more straightforward
as well, such as forex pairs from legacy brokers XD

First pass changes all crypto backends to expect this new input, ib will
come next after handling said special cases..
2023-05-17 16:52:15 -04:00
Tyler Goodlet 89e8a834bf Support fqme rendering *without* the src key
Since most (legacy) stock brokers design their symbology without
including the target exchange's source asset name - normally a fiat
currency like USD - this adds an option for rendering market endpoints
without that token for simpler use in backends for such brokers.

As an example IB doesn't expect a `mnq/usd.cme.ib` symbol and instead
presumes that since the CME lists all assets in USD then the source
asset is implied.

Impl details:
- add `MktPair.pair: str` which replaces `.key` as a better name.
- offer a `without_src: bool` to a new `.get_fqme()` getter method
  which will render everything the same minus the src token.
- expose the new flag through both the new `.get_fqme()` and
  `.get_bs_fqme()` methods and wrap those both under the original
  property names `.bs_fqme` and `.fqme`.
2023-05-17 16:47:15 -04:00
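
A small stand-in example of the `without_src` rendering described above; this `SimpleMkt` type and its fields are illustrative assumptions, not the real `MktPair`:

```python
# Illustrative only: fqme rendering with and without the src token.
from dataclasses import dataclass


@dataclass
class SimpleMkt:
    dst: str            # eg. 'mnq'
    src: str = ''       # eg. 'usd' (often implied for legacy brokers)
    venue: str = ''     # eg. 'cme'
    suffix: str = ''    # eg. '20230616' expiry
    broker: str = ''    # eg. 'ib'

    def get_fqme(self, without_src: bool = False) -> str:
        key = self.dst if (without_src or not self.src) else f'{self.dst}/{self.src}'
        tokens = [key, self.venue, self.suffix, self.broker]
        return '.'.join(t for t in tokens if t)


mkt = SimpleMkt(dst='mnq', src='usd', venue='cme', suffix='20230616', broker='ib')
print(mkt.get_fqme())                   # mnq/usd.cme.20230616.ib
print(mkt.get_fqme(without_src=True))   # mnq.cme.20230616.ib
```
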
Tyler Goodlet 12bfabf056 Expose `.accounting.unpack_fqme()` 2023-05-17 16:43:31 -04:00
Tyler Goodlet a44e926c2f kucoin: handle ws welcome, subs-ack and pong msgs
Previously the subscription response handling was a bit sloppy what with
ignoring the welcome msg; this now expects the correct startup
sequence. Also this avoids warn logging on pong messages by expecting
them in the msg loop and further drops the `KucoinMsg` struct and
instead changes the msg loop to expect `dict`s and only cast to structs
on live feed msgs that we actually process/relay.
2023-05-17 12:30:52 -04:00
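
A hedged sketch of that msg-loop shape, matching on plain `dict`s; the kucoin field names and the relay function name here are assumptions:

```python
# Illustrative only: expect welcome, sub-ack and pong msgs in the loop
# instead of warn-logging on them; only live feed msgs get normalized.
async def relay_quotes(ws):
    async for msg in ws:  # `ws` assumed to be an async iterator of dicts
        match msg:
            case {'type': 'welcome'}:
                continue  # expected once right after connect
            case {'type': 'ack', 'id': _}:
                continue  # subscription acknowledged
            case {'type': 'pong'}:
                continue  # keepalive reply, not a warning condition
            case {'type': 'message', 'subject': subject, 'data': data}:
                # only here do we bother casting to structs / normalizing
                yield subject, data
            case _:
                print(f'unexpected ws msg: {msg!r}')
```
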
Tyler Goodlet d0ba9a0a58 Start draft `conf.toml` "root" config with tsdb contact info 2023-05-17 10:58:12 -04:00
Tyler Goodlet 3294defee1 `fqme` adjustments to marketstore module
Mostly renaming from the old acronym. This also contains necessary
conf.toml loading in order to call `open_storage_client()` which now
does not have default network contact info.
2023-05-17 10:46:32 -04:00
Tyler Goodlet ae049eb84f Pass and use `MktPair` throughout history routines
Previously we were passing the `fqme: str`, which isn't as extensive, nor
were we able to pass a `MktPair` directly to backend history manager-loading
routines (which should be able to rely on always receiving it since
currently `stream_quotes()` is always called first for setup).

This also starts a slight bit of configuration oriented tsdb info
loading (via a new `conf.toml`) such that a user can decide to host
their (marketstore) db on a remote host and our container spawning and
client code will do the right startup automatically based on the config.
|-> Related to this I've added some comments about doing storage
backend module loading which should get actually written out as part of
patches coming in #486 (or something related).

Don't allow overruns again in history context since it seems it was
never a problem?
2023-05-17 10:19:14 -04:00
Tyler Goodlet 5c8a45c64a Fix `MktPair.bs_fqme` to properly strip broker suffix 2023-05-17 09:45:00 -04:00
Tyler Goodlet 07b7d1d229 ib: implement `FeedInit` style quote stream setup
As per the new market info packing schema this patch almost gets it
completely compatible and useful via implementing the `get_mkt_info()`
backend module endpoint B)

There's still some questions around `MktPair.src` since all the contract
search machinery in the ib api isn't expecting a fiat currency in the
symbol key: for ex. `mnq/usd.cme.20230616.ib` has no handling for the
`[/]usd` part. For now i'm just excluding the `.src` since it requires
extra parsing on quotes-feed requests even though this is also currently
breaking forex pairs (idealpro or wtv). I think ideally we do move to
a `dst/src.<venue>.<etc..>` style but it's going to require adjustments
to all the existing crypto backends..

This also allows dropping the old `mk_init_msgs()` closure.
2023-05-16 17:29:07 -04:00
Tyler Goodlet 147e1baee9 Remove typo-ed `sum_tick_vlm` config from all crypto backends 2023-05-16 17:00:15 -04:00
Tyler Goodlet b096ee3b7a Make `FeedInit.shm_write_opts` an empty dict by default 2023-05-16 16:30:30 -04:00
Tyler Goodlet f20e2d6ee2 ib.feed: start drafting out `get_mkt_info()` endpoint 2023-05-15 15:35:57 -04:00
Tyler Goodlet 1263835034 ib.api: make `get_sym_details()` and `get_quote()` mutex methods 2023-05-15 15:35:30 -04:00
Tyler Goodlet 1e1e64f7f9 ib: fix op error when `end_dt` is `None`: the first query 2023-05-15 13:30:34 -04:00
Tyler Goodlet 98c043815a Woops, implement `Symbol.fqme` same as `MktPair`.. 2023-05-14 20:24:19 -04:00
Tyler Goodlet ebe351e2ee kucoin: raise `DataUnavailable` if we get empty time array at some point? 2023-05-14 15:13:14 -04:00
Tyler Goodlet cfb125beef `.data.feed`: finally solve startup overruns issue
We need to allow overruns during the async multi-broker context spawning
init bc some backends might take longer than others to setup (eg.
binance vs. kucoin) and result in some context (stream) being overrun by
the time we get to the `.open_stream()` phase. Ideally, we can maybe
adjust the concurrent setup to be more of a task-per-provider style to
avoid this in the future - which would also in theory result in
more-immediate per-provider setup in terms of showing ready feeds asap.

Also, does a bunch of renaming from fqsn -> fqme and drops the lower
casing of input symbols instead expecting the caller to know what the
data backend it's requesting is going to be able to handle in terms of
symbology.
2023-05-13 17:35:46 -04:00
Tyler Goodlet 1f0db3103d ib.broker: always cast `asset_type` to `str` 2023-05-13 17:27:45 -04:00
Tyler Goodlet 2e8268b53e Allow passing `allow_overruns: bool` to `Services.start_service_task()` 2023-05-13 16:51:11 -04:00
Tyler Goodlet b572cd1b77 kucoin: store fqme -> mktids table
Instead of pre-converting and mapping piker style fqmes to
`KucoinMktPair`s, make `Client._pairs` keyed by the kucoin native market
ids and also create a `._fqmes2mktids: bidict[str, str]` for
doing lookups to the native pair from the fqme.

Also, adjust any remaining `fqsn` naming to fqme.
2023-05-13 16:45:05 -04:00
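
A hedged sketch of that two-table lookup scheme; the pair-info fields and fqme formatting here are assumptions:

```python
# Illustrative only: native-id keyed pair table plus a bidirectional
# fqme <-> mktid map; `baseCurrency`/`quoteCurrency` fields are assumed.
from bidict import bidict


class PairTableSketch:
    def __init__(self, pairs: dict[str, dict]):
        # keyed by the exchange's native market id
        self._pairs = pairs
        # piker-style fqme <-> native market id
        self._fqmes2mktids = bidict({
            f'{p["baseCurrency"]}{p["quoteCurrency"]}.kucoin'.lower(): mktid
            for mktid, p in pairs.items()
        })

    def pair_from_fqme(self, fqme: str) -> dict:
        return self._pairs[self._fqmes2mktids[fqme]]

    def fqme_from_mktid(self, mktid: str) -> str:
        return self._fqmes2mktids.inverse[mktid]
```
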
Tyler Goodlet b288d7051a ib.broker: load account name map as a `bidict` (no `tomlkit` support) 2023-05-13 16:44:28 -04:00
Tyler Goodlet c349d50f2f Allow creation of empty account files 2023-05-13 16:12:18 -04:00
Tyler Goodlet 779c0b73c9 Make `.accounting._ledger` use `tomlkit`
So that styling is preserved on write but requires that we pop `None`
values (in this case any unset `.expiry` transactions) due to `tomlkit`
having no support for writing them as values?
2023-05-13 16:07:17 -04:00
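
Since TOML has no null value, the `None`-popping mentioned above might look something like this hedged sketch:

```python
# Illustrative only: drop unset (None) fields before rendering with
# `tomlkit` since TOML has no way to encode a null value.
import tomlkit


def render_ledger_toml(records: dict[str, dict]) -> str:
    doc = tomlkit.document()
    for tid, record in records.items():
        doc[tid] = {k: v for k, v in record.items() if v is not None}
    return tomlkit.dumps(doc)


print(render_ledger_toml({'tx-1': {'price': 10.5, 'size': 2, 'expiry': None}}))
```
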
Tyler Goodlet 50a4c425d3 Add `touch_if_dne: bool` to `config.load()`
So that we aren't creating blank files for legacy configs (as we do name
changes or wtv). Further change `.get_conf_path()` to validate against
new `account.` prefix and a god `conf.toml` file.
2023-05-13 16:05:23 -04:00
Tyler Goodlet df96155057 Always allow overruns in sampler context
Requires https://github.com/goodboy/tractor/pull/357.
Avoid overruns when doing concurrent live feed init over multiple
brokers.
2023-05-13 14:06:27 -04:00
Tyler Goodlet a62283bae2 Drop final use of `toml` 3rd party lib
We moved to `tomlkit` as per #496 and this lets us drop the mess that
was the inline-table encoder in `.accounting._toml` XD

Relates to #496
2023-05-12 16:15:12 -04:00
Tyler Goodlet 2865f0efe9 `piker.config`: use `tomlkit` for accounting files
We still need to get some patches landed in order to resolve:
- https://github.com/sdispater/tomlkit/issues/288
- https://github.com/sdispater/tomlkit/issues/289
- https://github.com/sdispater/tomlkit/issues/290

But, this does work for style preservation and the inline-table style we
were previously hacking into the `toml` lib in `.accounting._toml`,
which we can pretty much just drop now B)

Relates to #496 (pretty much solves it near-term i think?)
2023-05-12 16:05:45 -04:00
Tyler Goodlet 5f79434b23 Use new `.config` helpers for `accounting._pos/._ledger` file loading 2023-05-12 13:02:29 -04:00
Tyler Goodlet 5278f8b560 Add `.config.load_ledger()` for transaction record files 2023-05-12 13:01:45 -04:00
Tyler Goodlet 488a0cd119 Add `.config.load_account()`
Allows for direct loading of an "account file configuration" contents
without having to pass the explicit config dir path. In this case we are
also rewriting the `pps.<brokername>.<accnt_name>.toml` file names to
instead have a `account.` prefix, but providing this helper function
allows such changes more easily in the future - since callers won't have
to use the lower level `.load()` input signature.

Also add some todo comments about moving to `tomlkit`.
2023-05-12 12:40:09 -04:00
Tyler Goodlet 957224bdc5 ib: support remote host vnc client connections
I figure we might as well support multiple types of distributed
multi-host setups; why not allow running the API (gateway) and thus vnc
server on a diff host and allowing clients to connect and do their thing
B)

Deatz:
- make `ib._util.data_reset_hack()` take in a `vnc_host` which gets
  proxied through to the `asyncvnc` client.
- pull `ib_insync.client.Client` host value and pass-through to data
  reset machinery, presuming the vnc server is running in the same
  container (and/or the same host).
- if no vnc connection **and** no i3ipc trick can be used, just report
  to the user that they need to remove the data throttle manually.
- fix `feed.get_bars()` to handle throttle cases the same based on error
  msg matching, not the error code, and add a max `_failed_resets` count
  to trigger bailing on the query loop.
2023-05-12 09:48:31 -04:00
Tyler Goodlet 7ff8aa1ba0 ib: passthrough host arg to vnc client for click hack 2023-05-11 12:32:38 -04:00
Tyler Goodlet e06f9dc5c0 kucoin: port to new `NoBsWs` api semantics
No longer need to implement connection timeout logic in the streaming
code, instead we just `async for` that bby B)

Further refining:
- better `KucoinTrade` msg parsing and handling with object cases.
- make `subscribe()` do sub requests in a loop and wait for acks.
2023-05-10 16:22:09 -04:00
Tyler Goodlet c6e5368520 paperboi: fix fqme parsing to handle `bs_fqme` cases 2023-05-09 18:34:01 -04:00
Tyler Goodlet 769b292dca Allow `brokerd` runtime switch to paper mode
Previously you couldn't have a brokerd backend which defined
`.trades_dialogue()` but which could also indicate that the paper
clearing engine should be used. This adds that support by allowing the
endpoint task to return a simple `'paper'` string, in which case the ems
will boot a paperboi.

The obvious useful case for this is if you have a broker you want to use
but do not have actual broker credentials setup (yet) with that provider
in your `brokers.toml`; demonstrated here with the adjustment to
`kraken`'s startup to no longer raise a runtime error B)
2023-05-09 18:29:28 -04:00
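
A hedged sketch of what that "return `'paper'`" switch might look like in a backend ep; the config lookup and signature here are simplified assumptions, not the actual kraken code:

```python
# Illustrative only: a backend order-control ep that opts into the paper
# clearing engine when no credentials are configured.
def load_broker_section() -> dict:
    # stand-in for reading this backend's `brokers.toml` section
    return {}


async def trades_dialogue(ctx=None, loglevel: str = 'info'):
    section = load_broker_section()
    if not section.get('api_key'):
        # signal the ems to boot a paperboi instead of raising
        return 'paper'
    # ...otherwise run the real broker trades dialogue
```
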
Tyler Goodlet 361fc4645c Drop passing `loglevel` to `stream_quotes()`, level is set when actor spawns 2023-05-09 18:28:51 -04:00
Tyler Goodlet f1f2ba2e02 kucoin: deliver `FeedInit` msgs on feed startup
To fit with the rest of the new requirements added in `.data.validate`
this adds `FeedInit` init including `MktPair` and `Asset` loading for
all spot currencies provided by `kucoin`.

Deatz:
- add a `Currency` struct and accompanying `Client.get_currencies()` for
  storing all asset infos.
- implement `.get_mkt_info()` which loads all necessary accounting and
  mkt meta-data structs including adding `.price/size_tick` fields to
  the `KucoinMktPair`.
- on client boot, async spawn requests to cache both symbols and currencies.
- pass `subscribe()` as the `fixture` arg to `open_autorecon_ws()`
  instead of opening it manually.

Other:
- tweak `Client._request` to not expect the prefixed `'/'` for the
  `endpoint: str`.
- change the `api_v` arg to just be `api: str`.
2023-05-09 18:17:50 -04:00
Tyler Goodlet 80338e1ddd kucoin: WIP moving to FeedInit API 2023-05-09 14:49:46 -04:00
Tyler Goodlet f8c8f63e87 Drop `Optional` usage from marketstore module 2023-05-09 14:49:46 -04:00
Tyler Goodlet 96532ad38c ui._display: no downsampling on history chart default view call 2023-05-09 14:49:46 -04:00
Tyler Goodlet 88f3912b2d test_ems: doc out some remaining suites 2023-05-09 14:49:46 -04:00
Tyler Goodlet cb8833d430 ib: clear error events on every received? 2023-05-09 14:49:46 -04:00
Tyler Goodlet 038b20d13a wsbs: increase msg rx timeout to 16 secs 2023-05-09 14:49:46 -04:00
Tyler Goodlet 05fb4a4014 kraken: drop recv timeout for recon ws 2023-05-09 14:49:46 -04:00
Tyler Goodlet c415bd1ee1 If backend does not provide `bs_mktid`, use the `bs_fqme` 2023-05-09 14:49:46 -04:00
Tyler Goodlet 226c3364c3 Smh, handle `fixture==None` case.. 2023-05-09 14:49:46 -04:00
Tyler Goodlet 685688d2b2 ib: add `mbt.cme` micro-btc futes to adhoc set 2023-05-09 14:49:46 -04:00
Tyler Goodlet 7a3bce3f33 .data._web_bs: add client module name to log msgs 2023-05-09 14:49:46 -04:00
Tyler Goodlet 363a2bbcc6 binance: use new `int` sub-id for each request 2023-05-09 14:49:46 -04:00
Tyler Goodlet 0a8dd7b6da Try to disable `snappy` compression on variables; it breaks everything XD 2023-05-09 14:49:46 -04:00
Tyler Goodlet 0b43e0aa8c Try having `brokerd` eps defined in `.brokers._daemon`
Since it's a bit weird having service specific implementation details
inside the general service `._daemon` mod, and since i'd mentioned
trying this re-org; let's do it B)

Requires enabling the new mod in both `pikerd` and `brokerd` and
obviously a bit more runtime-loading of the service modules in the
`brokerd` service eps to avoid import cycles.

Also moved `_setup_persistent_brokerd()` into the new mod since the
naming would place it there even though the implementation really
wouldn't (longer run) since we want to split up `.data.feed` layer
backend-invoked eps into a separate actor eventually from the "actual"
`brokerd` which will be the actor running **only** the trade control eps
(eg. `trades_dialogue()` and friends).
2023-05-09 14:49:26 -04:00
Tyler Goodlet ed434e284b Disable ems init order-dialog notifications by default 2023-05-09 14:49:26 -04:00
Tyler Goodlet af068c5c51 binance: port `stream_messages()` to use `match:` and a new `L1` struct 2023-05-09 14:49:26 -04:00
Tyler Goodlet f6cd08c6fa Attempt to guard against numercial "anomalies" in `Viz.maxmin()`, add cacheing flag 2023-05-09 14:49:26 -04:00
Tyler Goodlet 34ff5ff249 kraken: port to new `NoBsWs`, passing timeout (counts) during setup 2023-05-09 14:49:26 -04:00
Tyler Goodlet b03564da2c binance: port to new `NoBsWs` api and drop `trio_util` usage 2023-05-09 14:49:26 -04:00
Tyler Goodlet 59743b7b73 Rework `NoBsWs` to avoid agen/`trio` incompatibility
`trio`'s internals don't allow for async generator (and thus by
consequence dynamic reset of async exit stacks containing `@acm`s)
interleaving since doing so corrupts the cancel-scope stack. See details
in:
- https://github.com/python-trio/trio/issues/638
- https://trio-util.readthedocs.io/en/latest/#trio_util.trio_async_generator
- `trio._core._run.MISNESTING_ADVICE`

We originally tried to address this using
`@trio_util.trio_async_generator` in backend streaming code but for
whatever reason stopped working recently (at least for me) and it's more
or less implemented the same way as this patch but with more layers and
an extra dep. I also don't want us to have to address this problem again
if/when that lib isn't able to keep up to date with wtv `trio` is
doing..

So instead this is a complete rewrite of the conc design of our
auto-reconnect ws API to move all reset logic and msg relay into a bg
task which is respawned on reset-requiring events: user spec-ed msg recv
latency, network errors, roaming events.

Deatz:
- drop all usage of `AsyncExitStack` and no longer require client code
  to (hackily) call `NoBsWs._connect()` on msg latency conditions,
  instead this is all done behind the scenes and the user can simply
  pass in a `msg_recv_timeout: float`.
- massively simplify impl of `NoBsWs` and move all reset logic into a
  new `_reconnect_forever()` task.
- offer use of `reset_after: int`, a count value that determines how many
  `msg_recv_timeout` events are allowed to occur before reconnecting the
  entire ws from scratch again.
2023-05-09 14:49:26 -04:00
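
A very condensed, hedged sketch of the bg-task reset design described above (not the real `NoBsWs`; `connect_ws` and `ws.get_message()` are stand-in assumptions):

```python
# Illustrative only: all reconnect/reset logic lives in one bg task which
# relays msgs to consumers over a memory channel.
import trio


async def _reconnect_forever(
    connect_ws,                      # assumed: @acm yielding a ws object
    send_chan: trio.MemorySendChannel,
    msg_recv_timeout: float = 16.0,
    reset_after: int = 3,
):
    while True:
        timeouts: int = 0
        try:
            async with connect_ws() as ws:
                while True:
                    try:
                        with trio.fail_after(msg_recv_timeout):
                            msg = await ws.get_message()  # assumed api
                    except trio.TooSlowError:
                        timeouts += 1
                        if timeouts >= reset_after:
                            break  # tear down, reconnect from scratch
                        continue
                    timeouts = 0
                    await send_chan.send(msg)
        except OSError:
            # network error or roaming event: retry after a beat
            await trio.sleep(1)


async def open_relay(connect_ws):
    send_chan, recv_chan = trio.open_memory_channel(0)
    async with trio.open_nursery() as n:
        n.start_soon(_reconnect_forever, connect_ws, send_chan)
        async for msg in recv_chan:
            ...  # hand msgs off to the consumer side
```
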
Tyler Goodlet 9d04accf2e Factor out all history mgmt-logic into a new `.data.history` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 3cd853cb5d order_mode: revert switch to `MktPair` for pre-order loading 2023-05-09 14:49:26 -04:00
Tyler Goodlet 4a0beda77e kraken: asyncify and use `get_mkt_info()` in `norm_trade_records()` 2023-05-09 14:49:26 -04:00
Tyler Goodlet d7288972b7 kraken: port to `FeedInit` and proper impl of `get_mkt_info()` ep 2023-05-09 14:49:26 -04:00
Tyler Goodlet 0d93871c88 kraken: drop `Client.cache_assets()`, simpler `.pair_info()`, drop `.mkt_info()` 2023-05-09 14:49:26 -04:00
Tyler Goodlet d0e01ff9b6 Fix `Symbol.from_fqme()` extra added symbols.. 2023-05-09 14:49:26 -04:00
Tyler Goodlet af2f8756c5 binance: use `@async_lifo_cache` on `.get_mkt_info()` ep 2023-05-09 14:49:26 -04:00
Tyler Goodlet bcf355e2c8 Fix up `@async_lifo_cache` typing, add TODOs for move to `tractor` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 1b50bff625 Error test harness if `--pdb` passed without `-s` 2023-05-09 14:49:26 -04:00
Tyler Goodlet e317310ed3 binance: make `stream_quotes()` deliver new `list[FeedInit]` API 2023-05-09 14:49:26 -04:00
Tyler Goodlet 4131ff1152 Rename `bs_mktid` -> `bs_fqme` and drop (some) `fqsn`s
Since we have made `MktPair.bs_mktid` mean something else now, change
all the feed setup var names to instead be more representative of the
actual value: `bs_fqme: str` and use the new `MktPair.bs_fqme` where
necessary.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 83802e932a Drop (missed) usage of `Symbol.from_fqsn()` in order mode 2023-05-09 14:49:26 -04:00
Tyler Goodlet 765b8f8e5c Support both input msg-sequence types
The legacy version was a `dict` of `dicts` vs. now we want to be handed
a `list[FeedInit]`; process both in a factored way.

Drop `FeedInit.bs_mktid` since it's already defined on `.mkt.bs_mktid`
and we don't really need it top level.
2023-05-09 14:49:26 -04:00
Tyler Goodlet b4f2f49001 ib: make `stream_quotes()` compat with new init msg bare-minimums 2023-05-09 14:49:26 -04:00
Tyler Goodlet d1cf90e2ae ib: finally convert ledger processing to use `MktPair` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 6008497b89 Use more "hierarchical" schema for fsp shm segment names 2023-05-09 14:49:26 -04:00
Tyler Goodlet adb62dc7b4 Port oustanding parts of codebase to `unpack_fqme()` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 4129d693be Add `.data.validate` checker for live feed layer
More or less a replacement for what @guilledk did with the initial
attempt at a "broker check" type script a while back except in this case
we're going to always run this validation routine and it now uses a new
`FeedInit` struct to ensure backends are delivering the right schema-ed
data during startup. Also allows us to stick deprecation warnings / and
or strict API compat errors all in one spot (at least for live feeds).

Factors out a bunch of `MktPair` related adapter-logic into a new
`.validate.validate_backend()` which warns to the backend implementer via
log msgs all the problems outstanding. Ideally we do our backend module
endpoint scan-and-complain regarding missing feature support from here
as well (eg. search, broker/trade ctl, ledger processing, etc.).
2023-05-09 14:49:26 -04:00
Tyler Goodlet d48b2c5b57 `._paper_engine`: right, load `MktPair` in `fqme is not None` usage 2023-05-09 14:49:26 -04:00
Tyler Goodlet 6f5a2654ab Port `.clearing` to new `unpack_fqme()` 2023-05-09 14:49:26 -04:00
Tyler Goodlet afdbf8e10a `.accounting`: Use `_fqme()` throughout and export decimal converters 2023-05-09 14:49:26 -04:00
Tyler Goodlet d4c8ba19a2 `.accounting._mktinfo`: better fqme `MktPair` handling
It needed some work..

- Make `unpack_fqme()` always return a 4-tuple handling the venue and
  suffix parts more generally.
- add `Asset.guess_from_mkt_ep_key()`, a like-it-sounds hack at
  trying to render a `.dst: Asset` for most purposes throughout the
  stack.
- always try to preprocess the input `fqme: str` with `unpack_fqme()` in
  `MktPair.from_fqme()` and use the new `Asset` method (above) to make
  up a `.dst: Asset` pulling as much meta-info we can from the caller.
- add `MktPair.bs_fqme` to get the thing without the broker part..
- add an `'unknown'` value to the `_derivs` def.
- drop `Symbol.from_fqsn()` and `unpack_fqsn()` more generally (yes
  BREAKING).
2023-05-09 14:49:26 -04:00
Tyler Goodlet 53a41ba93d Add subsys log to new `.data._util` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 06b80ff9ed ARRG, disable `dunst` notifications for now in order mode 2023-05-09 14:49:26 -04:00
Tyler Goodlet fa88924f84 Do we need feed mod enabled? no right? 2023-05-09 14:49:26 -04:00
Tyler Goodlet 83f1922f6e `binance.get_mkt_info()`: bleh, right `@lru_cache` dun work for async.. 2023-05-09 14:49:26 -04:00
Tyler Goodlet 4b7ac1d895 Port paper engine to latest `.accounting` sys fixes
- only preload necessary (one for clearing, all for ledger sync)
  `MktPair` info from the backend using `.get_mkt_info()`, build the
  `mkt_by_fqme: dict[str, MktPair]` and pass it to
  `TransactionLedger.iter_trans()`.
- use new `TransactionLedger.update_from_t()` method on clears.
- sanity check all `mkt_by_fqme` entries against `Flume.mkt` values
  when we open a data feed.
- rename `PaperBoi._syms` -> `._mkts`.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 7ee6f36e62 Actually, require `mkt_by_fqme` in `.iter_trans()` 2023-05-09 14:49:26 -04:00
Tyler Goodlet f106472bcb Fix size quantization and closed position popping..
Turns out we actually had further pp entry bugs due to *not quantizing*
the size inside `.minimize_clears()` method calcs; fix that using
`Position.sys.mkt.quantize()` as is done in `Position.calc_size()`.

Fix `PpTable.write_config()` to drop from the TOML config any
`closed: dict[str, Position]` entries delivered by `.dump_active()`.

Add a more detailed doc string for our position type and a little todo
for the `.bep` B)
2023-05-09 14:49:26 -04:00
Tyler Goodlet bba1ee43ff Allow mkt info table input to `.iter_trans()`
Since ledger records are often provided (and thus stored) from most
backends *without* containing the info we normally need for accounting
defined by `MktPair`, this extends the ledger method to take in a table
that allows assigning the `Transaction.sys` from an fqme lookup. This
way client code (like the paper engine and new ledger mgmt tools) can
do the mkt info lookup before hand and then load both ledger
`Transactions` and positions via the `PpTable` and get correct
accounting calculations, always :fingers_crossed:

Also adds `TransactionLedger.update_from_t(t: Transaction)` to allow
updating directly from an existing tran instead of making the user cast
to a `dict` first. Includes fix to `.to_dict()` to always pop the `.sym`
again to avoid client code having to do so.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 0d2e713e9a `binance`: facepalm, swap price/size_tick methods..
Wow not sure how that happened, but we should probably use the correct
market precision info for the correct parameter..

Also, use `@lru_cache` on new `get_mkt_info()` ep, seems to work?
2023-05-09 14:49:26 -04:00
Tyler Goodlet 10a39ca42c More detailed dark-slap comments 2023-05-09 14:49:26 -04:00
Tyler Goodlet 0917b580c9 Flip `.feed` and `._sampling` over to new stuff
In `.feed` and `._sampling` move to using the new
`tractor.Context.open_stream(allow_overruns: bool)` (cough, A BREAKING
CHANGE).

Also set `Flume.mkt` during construction in `.feed.open_feed()`.
2023-05-09 14:49:26 -04:00
Tyler Goodlet a301fabd6c Change`.ui._fsp` to use `Flume.mkt` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 611d86d988 Change `Flume.symbol` -> `.mkt: MktPair`
Might as well try and flip it over to the new type; make appropriate
dict serialization changes in `.to_msg()`. Alias back to `.symbol:
Symbol` with a property.
2023-05-09 14:49:26 -04:00
Tyler Goodlet b1e162ebb4 Fix ._util import in questrade backend 2023-05-09 14:49:26 -04:00
Tyler Goodlet b810de3089 Rename fqsn -> fqme in feeds tests 2023-05-09 14:49:26 -04:00
Tyler Goodlet 48cae3c178 `ib`: rejects their own fractional size tick..
Frickin ib, they give you the `0.001` (or wtv) in the
`ContractDetails.minSize: float` but won't accept fractional sizes
through the API.. Either way, it's probably not sane to be supporting
fractional order sizes for legacy instruments by default especially
since it in theory affects a lot of the clearing outcomes by having ib
do wtv magical junk behind the scenes to make it work..
2023-05-09 14:49:26 -04:00
Tyler Goodlet 02eb966a87 Rename ems test mod 2023-05-09 14:49:26 -04:00
Tyler Goodlet 146e0993a9 More explicit test mod docstring 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2cf7daca30 Another fqsn -> fqme rename 2023-05-09 14:49:26 -04:00
Tyler Goodlet dedc51a939 Quantize order prices prior to `OrderClient.send()`
Order mode previously was just willy-nilly sending `float` prices
(particularly on order edits) which are generated from the associated
level line. This actually uses the `MktPair.price_tick: Decimal` to
ensure the value is rounded correctly before submission to the ems..

Also adjusts the order mode init to expect a table of tables of startup
position messages, with the inner table being keyed by fqme per msg.
2023-05-09 14:49:26 -04:00
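
A hedged sketch of the tick-size rounding idea; the values and rounding mode here are illustrative assumptions:

```python
# Illustrative only: round a level-line float price to the market's
# `price_tick: Decimal` before shipping the order to the ems.
from decimal import ROUND_HALF_EVEN, Decimal


def quantize_price(level_price: float, price_tick: Decimal) -> float:
    d = Decimal(str(level_price))
    ticks = (d / price_tick).quantize(Decimal('1'), rounding=ROUND_HALF_EVEN)
    return float(ticks * price_tick)


print(quantize_price(4130.37, Decimal('0.25')))  # -> 4130.25
```
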
Tyler Goodlet 3b7579990b Link `tractor` debug mode to `pytest` --pdb flag 2023-05-09 14:49:26 -04:00
Tyler Goodlet 7de914d54c Fix bad-fqme test, adjust prices based on buy/sell 2023-05-09 14:49:26 -04:00
Tyler Goodlet 589232d12d Only flip size sign for sells if not already -ve 2023-05-09 14:49:26 -04:00
Tyler Goodlet 928765074f Fix zero-pp entry to toml case for new file-per-account format 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2ed9e40d5e Better EMS client-side msg formatting 2023-05-09 14:49:26 -04:00
Tyler Goodlet 30af91a82c Rewrite order ctl tests as a parametrization
More or less a complete rework which allows passing a detailed
clearing/fills input and allows for *not* rebooting the runtime / ems
between each position check.

Some further enhancements:
- use (unit) fractional sizes to simulate both the more realistic and
  more "complex position calculation" case; since this is crypto.
- add a no-fqme-found test.
- factor cross-session/offline pos storage (pps.toml) checks into
  a `load_and_check_pos()` helper which does all entry loading directly
  from a provided `BrokerdPosition` msg.
- use the new `OrderClient.send()` async api.
2023-05-09 14:49:26 -04:00
Tyler Goodlet e524c6fe4f `binance`: add startup caching info log msg 2023-05-09 14:49:26 -04:00
Tyler Goodlet abbba1fa6e Pack startup pps into a table keyed by fqmes 2023-05-09 14:49:26 -04:00
Tyler Goodlet 484565988d `order_mode`: broad rename book -> client 2023-05-09 14:49:26 -04:00
Tyler Goodlet f92c289842 Drop old blessings code, general cleanups 2023-05-09 14:49:26 -04:00
Tyler Goodlet b7ddf9cb05 paper-eng: close context and terminate actor on exit 2023-05-09 14:49:26 -04:00
Tyler Goodlet 250e1c4c51 `ledger` cli: dump colored summary lines to console
Tried a couple libs and ended up sticking with `rich` (since it's the
sibling lib to `typer`) but also (initially) implemented a version with
`blessings` that I ended up commenting out (and will likely remove).

Adjusted the CLI I/O a slight bit as well:
- require a fully qualified account name of the form:
  `<brokername>.<accountname>` and error on non-matching input.
- dump positions summary lines as humanized size, ppu and cost basis
  values per line.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 62259880fd paper: on no input fqme, load all mktinfos from pos table 2023-05-09 14:49:26 -04:00
Tyler Goodlet f42bc2dbce `pprint.pformat()` IB position mismatch log msgs 2023-05-09 14:49:26 -04:00
Tyler Goodlet 55b4866d5e Use `force_mkt` override in paper pps updates
When processing paper trades ledgers we normally won't have specific
`MktPair` info for the backend market we're simulating, as such we
need to look up this info when updating pps.toml files such that we
get precision info correct (particularly in the case of cryptos!) and
can also run paper ledger processing without running the simulated
clearing loop. In order to make it happen we lookup any `get_mkt_info()`
ep on the backend and pass the output to the `force_mkt` input of the
`PpTable.update_from_trans()` method.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 83514b0e90 `binance`: add `get_mkt_info()` ep 2023-05-09 14:49:26 -04:00
Tyler Goodlet 21401853c4 `kraken`: add module level `get_mkt_info()`
This will (likely) act as a new backend query endpoint for other `piker`
(client) code to lookup `MktPair` info from each backend. To start it
also returns the backend-broker's local `Pair` (or wtv other type) as
well.

The main motivation for this is for our paper engine which can require
the mkt info when processing paper-trades ledgers which do not contain
appropriate info to compute position metrics.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 6decd4112a kraken: drop console setup, now done during brokerd init 2023-05-09 14:49:26 -04:00
Tyler Goodlet 3f2f5edb28 kraken: rename `Client._atable` -> `_altnames` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 1d2d4b40a8 Only log about pps once in order mode code 2023-05-09 14:49:26 -04:00
Tyler Goodlet 5ee044e418 Another `@acm` in `._cacheables` XD 2023-05-09 14:49:26 -04:00
Tyler Goodlet 05a33ae634 Make default order size to decimal 2023-05-09 14:49:26 -04:00
Tyler Goodlet b8a975a3fd Drop `"<broker>.<account>.."` from pps.toml entries
Add special blocks to handle removing the broker account levels from
both writing and reading routines.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 33a78366ff paper: always sync pps.toml state on startup 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2806a4c0e5 Tweak ems msg-received log msg 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2d609dceac Drop `loglevel` from `spawn_args` inputs to `maybe_spawn_daemon()` 2023-05-09 14:49:26 -04:00
Tyler Goodlet b2a5f8698d Use `--pdb` flag to config `brokerd` debug mode 2023-05-09 14:49:26 -04:00
Tyler Goodlet 70efce1631 `kraken`: handle ws connection startup status msgs 2023-05-09 14:49:26 -04:00
Tyler Goodlet a63599828b Drop masked `MktPair.size_tick_digits()` cruft 2023-05-09 14:49:26 -04:00
Tyler Goodlet f51361435f paper engine: use the `fqme` for the `bs_mktid`
Instead of stripping the broker part just use the full fqme for all
`Transaction.bs_mktid: str` values since it makes indexing the `PpTable`
much easier with less key mangling..
2023-05-09 14:49:26 -04:00
Tyler Goodlet 9770a39d7b Cancel the `OrderClient` sync-method relay task on exit 2023-05-09 14:49:26 -04:00
Tyler Goodlet 97e3c06af8 Set `emsd` log level and clearly report startup pps
Change the root-service-task entrypoint to accept the level and
setup a console log as is now expected for all sub-services. Cast all
backend delivered startup `BrokerdPosition` msgs and log them to
console.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 4c1d174801 Expect `loglevel: str` in brokerd root task ep
Set the level right after spawn and once for the lifetime of the daemon.
2023-05-09 14:49:26 -04:00
Tyler Goodlet eb7a7462ad Always pass `loglevel: str` to daemon root task eps
If you want a sub-actor to write console logs (with the right level) the
`get_console_log()` call has to be made somewhere during service task
startup. Previously this wasn't well formalized nor used (depending on
daemon) so passing `loglevel` to the service's root-task-endpoint (eg.
`_setup_persistent_brokerd()`) encourages that the daemon's logging is
configured during init according to the spawner's requesting logging
config. The previous `get_console_log()` call happening inside
`maybe_spawn_daemon()` wasn't actually doing anything in the target
daemon XD, so obviously remove that and instead passthrough loglevel
to the ctx endpoints and service manager methods.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 1944f75ae8 Expose `piker.clearing.OrderClient` 2023-05-09 14:49:26 -04:00
Tyler Goodlet b619e4a82d WIP complete rework of paper engine tests
More or less we need to be able to audit not only simple "make trades,
check pps.toml files" tests (which btw were great to get started!).

We also need more sophisticated and granular order mgmt and service
config scenarios,

- full e2e EMS msg flow verification
- multi-client (dis)connection scenarios and/or monitoring
- dark order clearing and offline storage
- accounting schema and position calcs detailing

As such, this is the beginning of "modularlizingz" the components needed
in the test harness to this end by breaking up the `OrderClient` control
flows vs. position checking logic so as to allow for more flexible test
scenario cases and likely `pytest` parametrizations over different
transaction sequences.
2023-05-09 14:49:26 -04:00
Tyler Goodlet d67031d9ab Ensure we set the test config dir in the root actor..
Not sure how this worked before but we need to also override the
`piker._config_dir: Path` in the root actor when running in `pytest`; my
guess is something in the old test suite was masking this problem after
the change to passing the dir path down through the runtime vars via
`tractor`?

Also this drops the ems related fixtures/factories since they're
specific enough to define in the clearing engine tests directly.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 008bfed702 ib: lul, fix oil (cl) venue to correctly be nymex.. 2023-05-09 14:49:26 -04:00
Tyler Goodlet 96006b2422 Adjust tests to `.clearing._client.OrderClient` type 2023-05-09 14:49:26 -04:00
Tyler Goodlet 56cd15fa51 ib: maybe incr client id; can't catch api errors..
Turns out we don't hookup our eventkit handler until after the
`load_aio_clients()` is complete, which means we can't get
`ib_insync.Client.apiError` events unless inside the asyncio side task.
So I guess try to report any such errors during API scan (note the
duplicate client id case is a special one from ibis itself) even though
we're not going to catch them trio side. The hack to work around this is
to just increment the client id value with the `connect_retries` led `i`
value even though that will break on more than 3 clients attached to an
API endpoint lul ..

Further adjustments that were to the end of trying to fix this proper:
- add `remove_handler_on_err()` cm to disconnect a handler when the trio
  side of the channel closes.
- actually connect to client api errors in our `Client.inline_errors()`
- increase connect timeout to a sec.
- change the trio-asyncio proxy response-msg loop over to `match:`
  syntax and raise on unhandled msgs from eventkit handlers.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 879657cc75 Detail `pikerd` sock bind collision in error 2023-05-09 14:49:26 -04:00
Tyler Goodlet fb13c7cbf6 `ib`: drop pp mismatch err block, we already do it in audit routine 2023-05-09 14:49:26 -04:00
Tyler Goodlet 72abe98475 Async-ify order client methods and some renaming
We previously only offered a sync API (which was recently renamed to
`.<meth>_nowait()` style) since initially all order control was from our
`OrderMode` Qt driven UI/UX. This adds the equivalent async methods for
both testing as well as eventual auto-strat driven control B)

Also includes a bunch of renaming:
- `OrderBook` -> `OrderClient`.
- better internal renaming of the client's mem chan vars and add a ref
  `._ems_stream: tractor.MsgStream`.
- drop `get_orders()` factory, just always check for the actor-global
  instance and always set the ems stream on that client (in case old one
  was closed).
2023-05-09 14:49:26 -04:00
Tyler Goodlet 48f096995f `kraken`: write ledger and pps files on startup 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2cc77c21ba Rework paper engine for "offline" pp loading
This will end up being super handy for testing our accounting subsystems
as well as providing unified and simple cli utils for managing ledgers
and position tracking. Allows loading the paper boi without starting
a data feed and instead just trigger ledger and pps loading without
starting the entire clearing engine.

Deatz:
- only init `PaperBoi` and start clearing loop (tasks) if a non-`None`
  fqme is provided, ow just `Context.started()` the existing pps msgs
  as loaded from the ledger.
- always update both the ledger and pp table on startup and pass
  a single instance of each obj to the `PaperBoi` for reuse (without
  opening and closing backing config files since we now have
  `.write_config()`).
- drop the global `_positions` dict, it's not needed any more if we use
  a `PaperBoi.ppt: PpTable` which persists with the engine actor's
  lifetime.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 1560330acd Convert `Flume.MktPair.size_tick` to float for dark clearing 2023-05-09 14:49:26 -04:00
Tyler Goodlet a74caa9f77 Add paper engine "offline loading" support to the ledger cli 2023-05-09 14:49:26 -04:00
Tyler Goodlet 61fb783c4e Formalize a ledger type + API: `TransactionLedger`
Add a new `class TransactionLedger(collections.UserDict)` for managing
ledger (files) from a `dict`-like API. The main motivations being easy
conversion between `dict` <-> `Transaction` obj forms as well as dynamic
(toml) file updates via a set of methods:

- `.write_config()` to render and write state to the local toml file.
- `.iter_trans()` to allow iterator style conversion to `Transaction`
  form for each entry.
- `.to_trans()` for the dict output from the above.

Some adjustments to `Transaction` namely making `.sym/.sys` optional for
now so that paper engine entries can be loaded (offline) without
connecting to the emulated broker backend. Move to using `pathlib.Path`
throughout for bootyful toml file mgmt B)
2023-05-09 14:49:26 -04:00
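
A hedged, heavily simplified sketch of the `dict`-like ledger API described above; the file layout and method bodies are assumptions, not the actual `TransactionLedger` implementation:

```python
# Illustrative only: a `UserDict` backed by a TOML file on disk.
from collections import UserDict
from pathlib import Path
import tomllib     # stdlib reader (3.11+)

import tomli_w     # fast TOML writer


class LedgerSketch(UserDict):
    def __init__(self, path: Path):
        data = tomllib.loads(path.read_text()) if path.exists() else {}
        super().__init__(data)
        self.path = path

    def write_config(self) -> None:
        # drop unset (None) fields since TOML can't encode them
        clean = {
            tid: {k: v for k, v in rec.items() if v is not None}
            for tid, rec in self.data.items()
        }
        self.path.write_text(tomli_w.dumps(clean))

    def iter_trans(self):
        # in the real thing each record is cast to a `Transaction`
        yield from self.data.items()
```
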
Tyler Goodlet 9f7aa3d1ff Always use the "most resolved" `Position.symbol: MktPair`
When loading a `Position` from a pps file we might not have the entire
`MktPair` field-set loaded (though going forward that shouldn't really
ever happen except in the case of a legacy `pps.toml`), in which case we
can check if the `.fqme: str` value loaded from the transaction is
longer and use that instead - presuming it must have more mkt meta-data
filled out.

Also includes some more `fqsn` -> `fqme` renames.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 50be10a9bd `ib`: keep broker name in `Transaction.fqsn` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 29a5910b90 `ib`: move flex utils to new submod 2023-05-09 14:49:26 -04:00
Tyler Goodlet a336def65f `ib`: again, only *update* ledger records from API 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2cb59fe450 Flatter format for pos/ledger mngr statements 2023-05-09 14:49:26 -04:00
Tyler Goodlet 4494acbc01 Write a separate `pps.<brokername>.<accountname>.toml` file per account 2023-05-09 14:49:26 -04:00
Tyler Goodlet 7b3d724908 Rework `.config` routines to use `pathlib.Path`
Been meaning to do this port for a while and since it makes passing
around file handles (presumably alongside the in mem obj form) a lot
simpler/nicer and the implementations of all the config file handling
much more terse with less presumptions about the form of filename/dir
`str` values all over the place B)

moar technically, lets us:
- drop remaining `.config` usage of `os.path`.
- return `Path`s from most routines.
- adds a special case to `get_conf_path()` such that if the input name
  contains a `pps.` pattern, we avoid validating the name; this is going
  to be used by new `.accounting.open_pps()` code which will instead
  write a separate TOML file for each account B)
2023-05-09 14:49:26 -04:00
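
A hedged sketch of a `Path`-returning conf path helper with the `pps.` special case; the config dir and valid-name set here are assumptions:

```python
# Illustrative only: Path-centric config file lookup where per-account
# position files (`pps.<broker>.<acct>`) skip name validation.
from pathlib import Path

_config_dir: Path = Path.home() / '.config' / 'piker'
_conf_names: set[str] = {'brokers', 'conf', 'watchlists'}


def get_conf_path(name: str = 'brokers') -> Path:
    if not name.startswith('pps.'):
        assert name in _conf_names, f'unknown config file: {name}'
    return _config_dir / f'{name}.toml'


print(get_conf_path('pps.ib.margin'))   # ~/.config/piker/pps.ib.margin.toml
```
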
Tyler Goodlet bc249fbeca Move `.clearing._allocate` -> `accounting._allocate` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 53c76d3680 Drop `Optional` use from daemon mod 2023-05-09 14:49:26 -04:00
Tyler Goodlet 60123066e1 Use our `@acm` alias in paper eng 2023-05-09 14:49:26 -04:00
Tyler Goodlet 29ad20bc63 `ib`: only process ledger-txs once per client
Previous we were re-processing all ledgers for every position msg
received from the API, per client.. Instead do that once in a first pass
and drop all key-miss lookups for `bs_mktid`s; it should never happen.

Better typing for in-routine vars, convert pos msg/objects to `dict`
prior to logging so it's sane to read on console. Skip processing
specifically option contracts for now.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 978c59f5f0 `ib`: break up data vs. broker enabled modules 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2c23bc166b First working `brokerd` -> `trades_dialogue()` ep loader 2023-05-09 14:49:26 -04:00
Tyler Goodlet ff285fbbda `binance`: adjust search to expect `Pair`s 2023-05-09 14:49:26 -04:00
Tyler Goodlet ccfafeeec2 Drop `cryptofeed`, what a mess XD 2023-05-09 14:49:26 -04:00
Tyler Goodlet e0067a4e1d WIP: trying out `typer` for ledger cli 2023-05-09 14:49:26 -04:00
Tyler Goodlet 485a17af26 Drop weird extra line from license headers 2023-05-09 14:49:26 -04:00
Tyler Goodlet c5b172a7df `binance`: pre-process `Pair` filters at init
Allows us to keep the struct frozen as well as avoid complexity in the pure
data type. Also changes `.price/size_tick` to plain ol' properties.
2023-05-09 14:49:26 -04:00
Tyler Goodlet b718b5634e `binance`: use `MktPair` in live feed setup
Turns out `binance` is pretty great with their schema  since they have
more or less the same data schema for their exchange info ep which we
wrap in a `Pair` struct:
https://binance-docs.github.io/apidocs/spot/en/#exchange-information

That makes it super easy to provide the most general case for filling
out a `MktPair` with both `.src/dst: Asset` to maintain maximum
meta-data B)

Deatz:
- adjust `Pair` to have `.size/price_tick: Decimal` by parsing out
  the values from the filters field; TODO: we should probably just rewrite
  the input `.filter` at init time so we can keep the frozen style.
- rename `Client.mkt_info()` (was `.symbol_info()`) to `.exch_info()`,
  better matching the ep name, and have it build, cache, and return
  a `dict[str, Pair]`; allows dropping `.cache_symbols()`
- only pass the `mkt_info: MktPair` field in the init msg!
2023-05-09 14:49:26 -04:00
Tyler Goodlet 8f79c37b99 Generalize `MktPair.from_msg()` handling
Accept a msg with any of:
- `.src: Asset` and `.dst: Asset`
- `.src: str` and `.dst: str`
- `.src: Asset` and `.dst: str`

but not the final combo tho XD
Also, fix `.key` to properly cast any `.src: Asset` to string!
2023-05-09 14:49:26 -04:00
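
A hedged sketch of handling those src/dst combos with `match:`; the `Asset` stand-in here is simplified, not the real accounting type:

```python
# Illustrative only: accept (Asset, Asset), (str, str) and (Asset, str)
# src/dst combos from a msg dict, but not (str, Asset).
from dataclasses import dataclass


@dataclass
class Asset:
    name: str


def assets_from_msg(msg: dict) -> tuple[Asset, Asset]:
    match msg:
        case {'src': Asset() as src, 'dst': Asset() as dst}:
            pass
        case {'src': str() as src_name, 'dst': str() as dst_name}:
            src, dst = Asset(src_name), Asset(dst_name)
        case {'src': Asset() as src, 'dst': str() as dst_name}:
            dst = Asset(dst_name)
        case _:
            raise ValueError(f'unsupported src/dst combo: {msg!r}')
    return src, dst


print(assets_from_msg({'src': 'usd', 'dst': 'btc'}))
```
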
Tyler Goodlet aa5f25231a `ib`: never override existing ledger records
If user has loaded from a flex report then we don't want the API records
from the same period to override those; instead just update with any
missing fields from the API schema.

Also, always `str`-ify the contract id (what is set for the `.bs_mktid`
*before* packing into transaction type to ensure when serialized to
`pps.toml` there are no discrepancies at the codec level.. smh
2023-05-09 14:49:26 -04:00
Tyler Goodlet f3049016d6 `ib`: drop use of `_account2clients` in `load_clients_for_trio()`
Instead adjust `load_aio_clients()` to only reload clients detected as
non-loaded or disconnected (2 birds), and avoid use of the global module
table which could result in stale disconnected clients persisting on
multiple `brokerd` client reconnects, resulting in error.
2023-05-09 14:49:26 -04:00
Tyler Goodlet 16e11d447c Move toml table decoder to separate mod 2023-05-09 14:49:26 -04:00
Tyler Goodlet 199a5e8b38 `ib`: stick exc handler around client connection erros 2023-05-09 14:49:26 -04:00
Tyler Goodlet 59b095b2d5 `kraken`: heh, use `trio_util` for trades streamz tooo XD 2023-05-09 14:49:26 -04:00
Tyler Goodlet c59ec77d9c WIP: refactor ib pp load init 2023-05-09 14:49:26 -04:00
Tyler Goodlet 3e5da64571 Cache contract lookups from `Client.get_con()` 2023-05-09 14:49:26 -04:00
Tyler Goodlet 1c576d72d1 Dump `Position`s as pformatted dicts for now.. 2023-05-09 14:49:26 -04:00
Tyler Goodlet ea42f66b54 Use common `.brokers` logger in most backends 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2ae9576cd8 Add common logger instance for `.brokers` 2023-05-09 14:49:26 -04:00
Tyler Goodlet a462de6f2d Use a single log for entire `.service` subsys 2023-05-09 14:49:26 -04:00
Tyler Goodlet 3bf48ab597 Use a single log for entire `.clearing` subsys 2023-05-09 14:49:26 -04:00
Tyler Goodlet 2454dda18f Use `MktPair` attr `.size_tick` in charting 2023-05-09 14:49:26 -04:00
Tyler Goodlet 7498cbb5f4 Use `Struct.copy()` with update dict for `Order` from staged 2023-05-09 14:49:25 -04:00
Tyler Goodlet 581782800d Rename `Client.send_update()` -> `.update_nowait()` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 069466218e Use `str(cmd.symbol)` for fqme on cancels, add `_nowait()` method names 2023-05-09 14:49:25 -04:00
Tyler Goodlet fd9e484b55 Add `.__str__()` to mktpair and symbol types, fix `MktPair.fqme` token order 2023-05-09 14:49:25 -04:00
Tyler Goodlet 406565f74d Rename `fqsn` -> `fqme` in paper engine 2023-05-09 14:49:25 -04:00
Tyler Goodlet 6272cae8d4 Drop more `Optional` usage on our `Struct` 2023-05-09 14:49:25 -04:00
Tyler Goodlet dc2332c980 `kraken`: finally, use new `MktPair` in `'mkt_info'` init msg field! 2023-05-09 14:49:25 -04:00
Tyler Goodlet 7be85a882b Drop use of legacy `Symbol.broker_info` in display startup 2023-05-09 14:49:25 -04:00
Tyler Goodlet b6df83a0e9 Typecast `OrderMode.staged.symbol: str` before `.copy()`! 2023-05-09 14:49:25 -04:00
Tyler Goodlet d62fb655eb `kraken`: parse our source asset key and set on `MktPair.src: str` 2023-05-09 14:49:25 -04:00
Tyler Goodlet a9778e4001 Always cast `Order.symbol: str` for now
To make nested `msgspec.Struct`s work we need to tell the codec that the
`.symbol` is some struct def; since we don't really need to enforce that
(yet) we're just going to enc/dec as `str` until we further formalize
and/or need something more complex.
2023-05-09 14:49:25 -04:00
Tyler Goodlet 580165f2f4 Expect new `MktPair.tick_size: Decimal` attr in ems 2023-05-09 14:49:25 -04:00
Tyler Goodlet 0f3041724b Use `MktPair` for `Flume.symbol` when used by backend
Initial attempt at getting the sampling and shm layer to use the new mkt
info meta-data type. Draft out a potential `BackendInitMsg:
msgspec.Struct` for validating the init msg returned from the
`stream_quotes()` start value; obvs don't actually use it yet.
2023-05-09 14:49:25 -04:00
Tyler Goodlet 1d08ee6d01 `.clearing`: broad rename of `fqsn` -> `fqme` 2023-05-09 14:49:25 -04:00
Tyler Goodlet d4a5a3057c Add `MktPair.suffix: str` read from contract info
To be compat with the `Symbol` (for now) and generally allow for reading
the (derivative) contract specific part of the fqme. Adjust
`contract_info: list[str]` and make `src: str = ''` by default.
2023-05-09 14:49:25 -04:00
Tyler Goodlet 452cd7db8a Optionally load `MktPair` in `Flume`s 2023-05-09 14:49:25 -04:00
Tyler Goodlet 2cc80d53ca First stage port of `.data.feed` to `MktPair`
Add `MktPair` handling block for when a backend delivers
a `mkt_info`-field containing init msg. Adjust the original
`Symbol`-style `'symbol_info'` msg processing to do `Decimal` defaults
and convert to `MktPair` including slapping in a hacky `_atype: str`
field XD

General initial name changes to `bs_mktid` and `_fqme` throughout!
2023-05-09 14:49:25 -04:00
Tyler Goodlet 7eb0b1d249 Comment about `Struct.typecast()` conflict with frozen instances 2023-05-09 14:49:25 -04:00
Tyler Goodlet 589b3f4201 Default `pps.toml` precision fields to `Decimal`
For `price_tick` and `size_tick` we read in `str` and decimal-ize
and now correctly fail over to default values of the same type..
Also, always treat `bs_mktid` field as a `str` in TOML form.

Drop the strange `clears: dict` var from the loading code (not sure why
that was left in smh) and better name `toml_clears_list` for the
TOML-loaded-pre-transaction sequence.
2023-05-09 14:49:25 -04:00
Tyler Goodlet 6d5d9731ed Implement `MktPair.from_msg()` constructor
Handle case where `'dst'` field is just a `str` (in which case delegate to
`.from_fqme()`) as well as do `Asset` loading and use our
`Struct.copy()` to enforce type-casting to (for eg. `Decimal`s) such
that we'll now capture typing errors despite IPC transport.

Change `Symbol.tick_size` and `.lot_tick_size` defaults to decimal
for proper casting and type `MktPair.atype: str` since `msgspec` can't
cast to `AssetTypeName` without special handling..
2023-05-09 14:49:25 -04:00
Tyler Goodlet 25363ebd2e `ib`: deliver mkt precision info as `Decimal` 2023-05-09 14:49:25 -04:00
Tyler Goodlet b9c7e1b0c7 `binance`: deliver mkt precision info as `Decimal` 2023-05-09 14:49:25 -04:00
Tyler Goodlet ea9ea4a6d7 Rename `float_digits()` -> `dec_digits()`, since decimal. 2023-05-09 14:49:25 -04:00
Tyler Goodlet 76cd5519b3 Fix `Symbol.tick_size_digits`, add `.price/size_tick` props 2023-05-09 14:49:25 -04:00
Tyler Goodlet 677a6fc113 Cast to float from decimal for level line y-increment
Qt only accepts `float` to its APIs obvs..
2023-05-09 14:49:25 -04:00
Tyler Goodlet 99199905b6 Add parity mapping from altnames back to themselves in `Client._ntable` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 55b6cba31e Encode a `mktpair` field if passed in msg by caller 2023-05-09 14:49:25 -04:00
Tyler Goodlet 17b976eb88 Use `MktPair` building `Position` objects in `PpTable.update_from_trans()` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 7a8e615fa6 Explicitly decode tick sizes as decimal for symbol loading in `Flume` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 335e8d10d4 Cast back to float from decimal for cursor y-increment 2023-05-09 14:49:25 -04:00
Tyler Goodlet 6431071b2a Pass old fields in sym info init msg section 2023-05-09 14:49:25 -04:00
Tyler Goodlet 8fdff8769d Ensure `Symbol` tick sizes are decoded as `Decimal`.. 2023-05-09 14:49:25 -04:00
Tyler Goodlet 66782d29d1 `kraken`: use `Client.mkt_info()` in quotes feed init msg 2023-05-09 14:49:25 -04:00
Tyler Goodlet cfbba9e0b3 Add `MktPair._atype` for back-compat, always `str(.dst)` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 7aba290541 `kraken`: use `MktPair` in trasactions 2023-05-09 14:49:25 -04:00
Tyler Goodlet da10422160 `kraken`: add `Client.mkt_info()`
Allows building a `MktPair` from the backend specific `Pair` for
eventual use in the data feed layer. Also adds `Pair.price/tick_size` to
get to the expected tick precision info format.
2023-05-09 14:49:25 -04:00
Tyler Goodlet 9e2eff507e Drop shm logging levels to debug over warning 2023-05-09 14:49:25 -04:00
Tyler Goodlet 71fc8b95dd Flip to `.bs_mktid` in `ib` and `kraken` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 72c97d4672 Handle read and write of `pps.toml` using `MktPair`
Add a logic branch for now that switches on an instance check.
Generally swap over all `Position.symbol` and `Transaction.sym` refs to
`MktPair`. Do a wholesale rename of all `.bsuid` var names to
`.bs_mktid`.
2023-05-09 14:49:25 -04:00
Tyler Goodlet 7b28c7a43f Prep for dropping `Transaction.sym`
Instead let's name it `.sys` for "system", the thing we use to conduct
the "transactions" ..

Also rename `.bsuid` -> `.bs_mktid` for "backend system market id"
which is more explicit, easier to remember and read.
2023-05-09 14:49:25 -04:00
Tyler Goodlet cf9442f4d5 Further refinement and shimming of `MktPair`
Prepping to entirely replace `Symbol`; this adds a buncha docs/comments,
better implementation for representing and parsing the FQME: "fully
qualified market endpoint".

Deatz:
- make `.src` an optional field until we figure out how we're going
  to support loading source assets from all backends sensibly..
- implement `MktPair.fqme: str` (what was previously called `fqsn`)
  using a new util func: `maybe_cons_tokens()`.
- drop `Symbol.brokers` and expect only `.broker` usage.
- remap anything with `fqsn` in the name to `fqme` with aliases from the
  old name.
- implement `unpack_fqme()` with `match:` syntax B)
- add `MktPair.tick_size_digits`, `.lot_size_digits`, `.fqsn`, `.key` for
  backward compat.
- make all fqme generation related fields empty `str`s by default.
- add `MktPair.resolved: bool` a flag indicating whether or not `.dst`
  is an `Asset` instance or just a string and, `.bs_mktid` the field
  to hold the "backend system market id" per broker.
2023-05-09 14:49:25 -04:00
Tyler Goodlet 85ddfc0f2d Drop use of `mk_fqsn()` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 56f736e7ca Drop use of `Symbol.brokers` everywhere 2023-05-09 14:49:25 -04:00
Tyler Goodlet 63304f535c Start to prep `Transaction` for `MktPair`.. 2023-05-09 14:49:25 -04:00
Tyler Goodlet 2583706b35 Port `accounting._pos` to new `Symbol` simplifications 2023-05-09 14:49:25 -04:00
Tyler Goodlet 65a7853cf3 Delegate to new `.accounting._mktinfo._derivs` from `ui._positioning` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 69c9ecc5e3 `kraken`: write `pps.toml` on updates for now 2023-05-09 14:49:25 -04:00
Tyler Goodlet 3be53540c1 `kraken`: pack `Asset` into local client cache
Try out using our new internal type for storing info about kraken's asset
infos now stored in the `Client.assets: dict[str, Asset]` table. Handle
a server error when requesting such info msgs.
2023-05-09 14:49:25 -04:00
Tyler Goodlet a44b6f7c2f `ib`: adjust to new simplified `Symbol`
Drop usage of removed methods and attrs and only pass in the
`.tick_size: Decimal` value during construction.
2023-05-09 14:49:25 -04:00
Tyler Goodlet e65f3f84b9 Drop `Symbol.front_fqsn()` usage from chart, fsp and clearing stuff 2023-05-09 14:49:25 -04:00
Tyler Goodlet acc5af1fdb Drop `Symbol.front_feed()` usage from order mode 2023-05-09 14:49:25 -04:00
Tyler Goodlet 91dda3020e Simplify `Symbol`, extend `MktPair`, add `Asset`
Drop everything we can in terms of methods and attrs from `Symbol`:
- kill `.tokens()`, `.front_feed()`, `.nearest_tick()`,
  `.front_fqsn()`, instead moving logic from these methods into
  dependents (and obviously removing any usage from rest of code base,
  coming in follow up commits).
- rename `.quantize_size()` -> `.quantize()`.
- re-implement `.brokers`, `.lot_size_digits`, `.tick_size_digits` as
  `@property` methods; for the latter two, allows us to minimize to only
  accepting min tick decimal values on alternative constructor class
  methods and to drop the equivalent instance vars.
- map `_fqsn` related variable names to new and preferred `_fqme`.

We also juggle around some utility functions, moving limited precision
related `decimal.Decimal` routines to the top of module and soon-to-be
legacy `fqsn` related routines to the bottom.

`MktPair` draft type extensions:
- drop requirements for `src_type`, and offer the optional `.dst_type`
  field as either a `str` or (new `typing.Literal`) `AssetTypeName`.
- define an equivalent `.quantize()` as (re)defined in `Symbol` but with
  `quantity_type: str` field which specifies whether to use the price or
  the size precision.
- add a lot more docs, a `.key` property for the "symbol" name, draft
  property for a `.fqme: str`
- allow `.src` and `.dst` to be of type `str | Asset`

Add a new `Asset` to capture "things which can be used in markets and/or
transactions" XD
- defines a `.name`, an `.atype: AssetTypeName` financial category tag, a
  `tx_tick: Decimal` precision limit for transactions, and of course
  a `.quantize()` method for doing accounting arithmetic on a given tech
  stack.
- define the `atype: AssetTypeName` type as a finite set of `str`s
  expected to be used in various ways for default settings in other
  parts of the data and order control layers..
2023-05-09 14:49:25 -04:00
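As a rough illustration of the `Asset` type sketched in the message above (field names taken from the commit, the implementation itself is assumed), a tx-tick quantizer built on stdlib `Decimal` might look like:

```python
from decimal import Decimal, ROUND_HALF_EVEN
from typing import Literal

# a finite set of category tags as described above (values illustrative)
AssetTypeName = Literal['crypto', 'fiat', 'stock', 'future', 'option']


class Asset:
    def __init__(
        self,
        name: str,
        atype: AssetTypeName,
        tx_tick: Decimal,  # smallest transactable unit, eg. 1 satoshi
    ) -> None:
        self.name = name
        self.atype = atype
        self.tx_tick = tx_tick

    def quantize(self, size: float) -> Decimal:
        # clip a raw float size to this asset's transaction precision
        return Decimal(str(size)).quantize(
            self.tx_tick,
            rounding=ROUND_HALF_EVEN,
        )


btc = Asset('btc', 'crypto', Decimal('0.00000001'))
assert btc.quantize(0.123456789) == Decimal('0.12345679')
```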
Tyler Goodlet 9f03484c4d Move all fqsn parsing and `Symbol` to new `accounting._mktinfo` 2023-05-09 14:49:25 -04:00
Tyler Goodlet 7904c27127 (u)Limit the fd allocation for java 8 runtime..
Can't believe this was actually the issue..seriously i don't envy
jvm users.

See following issues:
- https://stackoverflow.com/a/56895801
- https://bugs.openjdk.org/browse/JDK-8150460
2023-05-09 14:49:25 -04:00
Tyler Goodlet 22622e1c01 `ib`: (cukcit) just presume a stonk if we can read type from existing ledger.. 2023-05-09 14:49:25 -04:00
Tyler Goodlet f549de7c88 Break out old `.pp` components into submods: `._ledger` and `._pos` 2023-05-09 14:49:25 -04:00
Tyler Goodlet beb6544bad Start a new `.accounting` subpkg, move `.pp` contents there 2023-05-09 14:49:25 -04:00
Tyler Goodlet d01fdbf981 `kraken`: fix pos loading by applying `digits_to_dec()` to pair info
Our issue was not having the correct value set on each
`Symbol.lot_tick_size`.. and then doing PPU calcs with the default set
for legacy mkts..

Also,
- actually write `pps.toml` on broker mode exit.
- drop `get_likely_pair()` and import from pp module.
2023-05-09 14:49:25 -04:00
Tyler Goodlet badc30baae Add an inverse of `float_digits()`: `digits_to_dec()` 2023-05-09 14:49:25 -04:00
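A plausible (assumed, not verbatim) sketch of this helper pair: `float_digits()` maps a tick size to its decimal digit count and `digits_to_dec()` maps a digit count back to a `Decimal` step size:

```python
from decimal import Decimal


def float_digits(tick_size: float) -> int:
    # eg. 0.001 -> 3
    return int(-Decimal(str(tick_size)).as_tuple().exponent)


def digits_to_dec(ndigits: int) -> Decimal:
    # the inverse: eg. 3 -> Decimal('0.001')
    return Decimal('1').scaleb(-ndigits)


assert float_digits(0.001) == 3
assert digits_to_dec(3) == Decimal('0.001')
```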
Tyler Goodlet 4f36a03df2 Ensure clearing table entries are time-sorted..
Not sure how this worked before but, the PPU calculation critically
requires that clearing transactions are processed in the correct
chronological order! Fix this by sorting `trans: dict[str, Transaction]`
in the `PpTable.update_from_trans()` method.

Also, move the `get_likely_pair()` parser from the `kraken` backend here
for future use particularly when we revamp the asset-transaction
processing layer.
2023-05-09 14:49:25 -04:00
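The chronological-sort requirement is simple enough to show in a few lines; a toy sketch (assuming each `Transaction` carries a datetime field, here named `dt`):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass(frozen=True)
class Transaction:
    tid: str
    size: float
    price: float
    dt: datetime


def iter_by_time(trans: dict[str, Transaction]) -> list[Transaction]:
    # PPU math is order sensitive: always clear in chronological order
    return sorted(trans.values(), key=lambda t: t.dt)


now = datetime.now()
trans = {
    'b': Transaction('b', 1, 101.0, now),
    'a': Transaction('a', 1, 100.0, now - timedelta(minutes=1)),
}
assert [t.tid for t in iter_by_time(trans)] == ['a', 'b']
```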
Tyler Goodlet 0a2187a73f Add 3.11 install tag 2023-05-08 19:35:45 -04:00
Tyler Goodlet 166f99b3d1 setup: reorg some deps, drop unused ones 2023-05-08 13:30:09 -04:00
Tyler Goodlet 0d9acb1cb0 numpy: drop `numpy.float` in py311 2023-05-04 12:01:59 -04:00
jaredgoldman 1ea0163b04
Merge pull request #494 from pikers/kucoin_backend
kucoin backend
2023-04-21 21:33:49 -04:00
jaredgoldman 3836f7d458 Run autopep8, add default case for message stream match case 2023-04-21 21:16:14 -04:00
jaredgoldman ae3f6696a7 Fix type hinting for stream_messages return type 2023-04-21 20:40:23 -04:00
jaredgoldman a06a4f67cc Remove unused timeframe var from open_history_client 2023-04-21 17:17:47 -04:00
jaredgoldman a69c8a8b44 Uncomment loglevel 2023-04-20 18:51:13 -04:00
jaredgoldman efad49ec5b Raise ValueError if no config is found when sending authenticated headers 2023-04-19 14:58:28 -04:00
jaredgoldman d772fe45c0 Comment out unused args 2023-04-19 14:55:58 -04:00
jaredgoldman 6f91c2932d Type bars data dict 2023-04-19 14:49:28 -04:00
jaredgoldman d07a73cf70 Add type annotation for open_ping_task 2023-04-19 14:47:19 -04:00
jaredgoldman fcdddadec1 Use single quotes 2023-04-18 10:42:30 -04:00
jaredgoldman 9fcfb8d780 More linting fixes 2023-04-18 10:39:47 -04:00
jaredgoldman 37ce04ca9a Linting fixes 2023-04-18 10:19:59 -04:00
jaredgoldman a109a8bf67 Add linting fixes 2023-04-18 09:51:50 -04:00
jaredgoldman b01771be1b Add comments to kucoin->piker bar conversion 2023-04-16 10:46:22 -04:00
jaredgoldman 0e4095c947 Don't yield ws from the ping task 2023-04-16 10:45:05 -04:00
jaredgoldman dae56baeba Refactor streaming logic to be less nested and readable 2023-04-16 10:12:29 -04:00
jaredgoldman 9706803220 Refactor streaming logic to be less nested and readable 2023-04-16 10:11:17 -04:00
jaredgoldman 8403d8a482 Simplify numpy mapping logic 2023-04-15 21:05:25 -04:00
jaredgoldman 59249a8c1e
Merge pull request #498 from pikers/small_kucoin_fixes
`kucoin` small fixes
2023-04-15 19:52:27 -04:00
Tyler Goodlet a111819667 Few fixes after review to get running again B)
- use `Struct.copy()` for frozen type
- fix `BrokerConfig` delegation attr lookups
- bit of linting according to `flake8`
2023-04-14 19:05:19 -04:00
jaredgoldman 4f576b6f36 Fix typo with ts vars 2023-04-13 22:37:17 -04:00
jaredgoldman 672c01f13a Use trade_data_ts for trade message receival 2023-04-13 22:35:21 -04:00
jaredgoldman f67ffeb70f Remove extra None check on msg.get 2023-04-13 22:34:04 -04:00
jaredgoldman 1b1e35d32d Add comment explaining waiting for first trade quote 2023-04-13 22:28:44 -04:00
jaredgoldman 9f5dfe8501 Remove anext() comment 2023-04-13 22:27:56 -04:00
jaredgoldman 11bd2e2f65 Use datetime | none instead of Optional[datetime] in get_bars 2023-04-13 22:04:43 -04:00
jaredgoldman ebfd490a1a Cache instead of get pairs in symbol search 2023-04-13 22:02:13 -04:00
jaredgoldman 89bb124728 Remove old comments, normalize arguments and improve pair fetching log 2023-04-13 22:00:41 -04:00
jaredgoldman 63e34cf595 Typecast config, add type hint to pair in init message creation and turn init msg vals into floats 2023-04-13 21:57:54 -04:00
jaredgoldman 92f372dcc8 Use proper value for init message 2023-04-13 21:52:40 -04:00
jaredgoldman b00abd0e51 Add a fail case for the ws token request 2023-04-13 21:48:17 -04:00
jaredgoldman 52a015d927 Remove typo in binance 2023-04-12 21:40:58 -04:00
jaredgoldman 2c82b2aba9 Remove breakpoint in binance 2023-04-12 20:43:28 -04:00
jaredgoldman ff0f8dfaca Improve client._get_ws_token docstring 2023-04-12 20:37:10 -04:00
jaredgoldman ace04af21a Use anext() in kucoin stream_quotes 2023-04-12 20:25:35 -04:00
goodboy 70db20b07c
Merge pull request #473 from pikers/binance_ws_ep_update
`binance`: use built-in `anext()` add note about new ws ep URL, fix agen streaming within `NoBsWs` usage
2023-04-12 19:53:53 -04:00
jaredgoldman d2f3a79c09 Use pendulum for header timestamp,
type hint cleanup
2023-04-12 19:48:46 -04:00
jaredgoldman bedbbc3025 Only diff trade time 2023-04-12 19:48:46 -04:00
jaredgoldman 6e55f6706f Format condition for filtering and add link to docs explaining need for filtering in the first case 2023-04-12 19:48:46 -04:00
jaredgoldman d1b0608c88 Remove breakpoint 2023-04-12 19:48:46 -04:00
jaredgoldman 3bed3a64c3 Implement duplicate filtering at message level 2023-04-12 19:48:46 -04:00
jaredgoldman 93e7d54c5e Add api doc links to _get_bars def 2023-04-12 19:48:46 -04:00
jaredgoldman 9db84e8029 Remove norm_pairs method and do all normalization in initial _get_pairs call 2023-04-12 19:48:46 -04:00
jaredgoldman ea21656624 Don't cache pairs in _get_pairs call 2023-04-12 19:48:46 -04:00
jaredgoldman 5a0d29c774 Add ws token api doc link 2023-04-12 19:48:46 -04:00
jaredgoldman 13df3e70d5 Refactor sign gen into one line 2023-04-12 19:48:46 -04:00
jaredgoldman 208a8e5d7a Remove unnecessary config vars 2023-04-12 19:48:46 -04:00
jaredgoldman ca937dff5e Add api doc links in structs 2023-04-12 19:48:46 -04:00
jaredgoldman c68fcf7e1c Remove extra line from docstrings 2023-04-12 19:48:46 -04:00
jaredgoldman 48c3b333b2 Format imports with parenthesis 2023-04-12 19:48:46 -04:00
jaredgoldman b71f6b6c67 Strip unnecessary data from ticks in l1 data feed 2023-04-12 19:48:46 -04:00
jaredgoldman 54cf648d74 Ensure sub logging dict attributes will be there 2023-04-12 19:48:46 -04:00
jaredgoldman 68d0327d41 Remove breakpoints, simplify backoff logic 2023-04-12 19:48:46 -04:00
jaredgoldman 68a06093e9 Format and ensure we're only grabbing the closest bid and ask 2023-04-12 19:48:46 -04:00
jaredgoldman 52aadb374b Add L1 data feed and correct history issue 2023-04-12 19:48:46 -04:00
jaredgoldman dfd030a6aa Remove float conversion of key_id again 2023-04-12 19:48:46 -04:00
jaredgoldman 788e158d9f Stop still converting datetime to float 2023-04-12 19:48:46 -04:00
jaredgoldman 81890a39d9 Leave datetimes alone! 2023-04-12 19:48:46 -04:00
jaredgoldman ae170f2645 Add more informative logs on startup 2023-04-12 19:48:46 -04:00
jaredgoldman e2e5191ded Remove breaking useless condition for determining if res is list of ohlc values 2023-04-12 19:48:46 -04:00
jaredgoldman dcbb7fa64f Remove float conversion for config key id 2023-04-12 19:48:46 -04:00
jaredgoldman 32107d0ac3 Strengthen retry case and add comments 2023-04-12 19:48:46 -04:00
jaredgoldman 7bdebd47d1 Add exponential retry case for history client 2023-04-12 19:48:46 -04:00
jaredgoldman ac31bca181 Make broker creds/auth optional 2023-04-12 19:48:46 -04:00
jaredgoldman 52070c00f9 Remove typo 2023-04-12 19:48:46 -04:00
jaredgoldman 5ff0cc7905 Cast/validate streamed messages
Update comments

Minor formatting
2023-04-12 19:48:46 -04:00
jaredgoldman 6ad1e3da38 Correct typo in license 2023-04-12 19:48:46 -04:00
jaredgoldman 9bf6f557ed Label private methods accordingly, remove cryptofeeds module 2023-04-12 19:48:46 -04:00
jaredgoldman 50e1070004 More cleanup, add comments re sub func 2023-04-12 19:48:46 -04:00
jaredgoldman 1c4c19b351 Clean up broker code,
Add typecasting for messages/rt-data and historical user trades,
ensure we're fetching all history,
add multi-symbol support
2023-04-12 19:48:46 -04:00
jaredgoldman 199a70880c Spawn background ping task 2023-04-12 19:48:46 -04:00
jaredgoldman b14b323068 Remove breakpoint in web_bs,
ensure we only unsub if ws is connected
2023-04-12 19:48:46 -04:00
jaredgoldman a3c7bec576 Implement working message streaming 2023-04-12 19:48:46 -04:00
jaredgoldman ac34ca7cad Add sub method to flow
Stash for checkout of master
2023-04-12 19:48:46 -04:00
jaredgoldman ade2c32adb Successfully connect to kucoin ws 2023-04-12 19:48:46 -04:00
jaredgoldman 109e7d7b43 Add back static API version in headers 2023-04-12 19:48:46 -04:00
jaredgoldman 1a655b7e39 Ensure we're passing the correct api version to the header builder,
make headers a default arg
2023-04-12 19:48:46 -04:00
jaredgoldman cda045f123 Abstract header gen to separate function 2023-04-12 19:48:46 -04:00
jaredgoldman 7074ca7713 Implement Kucoin auth and last trades call 2023-04-12 19:48:46 -04:00
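For context on what "Kucoin auth" entails, here's a rough sketch of KuCoin's documented v2 request-signing scheme (not necessarily `piker`'s exact helper): the signature is an HMAC-SHA256 over `timestamp + method + endpoint + body`, base64-encoded, and v2 API keys also require the passphrase itself to be HMAC-signed:

```python
import base64
import hashlib
import hmac
import time


def kucoin_auth_headers(
    key: str,
    secret: str,
    passphrase: str,
    method: str,
    endpoint: str,
    body: str = '',
) -> dict[str, str]:
    now_ms = str(int(time.time() * 1000))
    str_to_sign = now_ms + method.upper() + endpoint + body
    sign = base64.b64encode(
        hmac.new(secret.encode(), str_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    signed_passphrase = base64.b64encode(
        hmac.new(secret.encode(), passphrase.encode(), hashlib.sha256).digest()
    ).decode()
    return {
        'KC-API-KEY': key,
        'KC-API-SIGN': sign,
        'KC-API-TIMESTAMP': now_ms,
        'KC-API-PASSPHRASE': signed_passphrase,
        'KC-API-KEY-VERSION': '2',
    }
```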
Tyler Goodlet 8e91e215b3 WIP - ensure `asyncio` pumps the event loop each send 2023-04-12 19:48:46 -04:00
jaredgoldman c751c36a8b Update trade message format 2023-04-12 19:48:46 -04:00
jaredgoldman ad9d645782 WIP - setup basic history and streaming client 2023-04-12 19:48:46 -04:00
jaredgoldman c96d4387c5 Start adding history client 2023-04-12 19:48:46 -04:00
jaredgoldman 5fdec8012d Add cryptofeeds data feed module,
Add Kucoin backend client
wip
2023-04-12 19:48:46 -04:00
Tyler Goodlet 609b91e848 Try out `@trio_util.async_generator` for streaming
Apparently it will likely fix our `trio`-cancel-scopes-corrupted crash
when we try to let our `._web_bs.NoBsWs` do reconnect logic around
the async-generator implemented data-feed streaming routines in `binance`
and `kraken`.  See the project docs for deatz; obvs we add the lib as
a dep.
2023-03-20 12:54:48 -04:00
Tyler Goodlet 78eb784091 Stick `try:` outside all `xdotool` subproc calls 2023-03-13 15:36:45 -04:00
Tyler Goodlet 973e4b5f44 `binance`: wrap streamer async-gen in `aclosing()` 2023-03-13 15:36:29 -04:00
Tyler Goodlet 9197e6decb `binance`: use built-in `anext()` add note about new ws ep URL 2023-03-13 15:36:29 -04:00
goodboy f3b04f27e6
Merge pull request #490 from pikers/log_linearized_curve_overlays
Log linearized curve overlays
2023-03-13 15:32:42 -04:00
Tyler Goodlet 889e920796 Short-circuit rendering on no 1d-data; avoid m4 layer crash 2023-03-13 12:18:54 -04:00
Tyler Goodlet 1aab9f1f81 Actually yes, we need to handle empty in-view range.. 2023-03-10 18:20:22 -05:00
Tyler Goodlet 5c697de58e Presume never handling not-in-view case for minor curves 2023-03-10 18:20:22 -05:00
Tyler Goodlet 3066b1541e Handle (shorter supported) minor-curve not-in-view
Solve this by always scaling the y-range for the major/target curve
*before* the final overlay scaling loop; this implicitly always solves
the case where the major series is the only one in view.

Tidy up debug print formatting and add some loop-end demarcation comment
lines.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 32339cb41a Always show a minimum bars during startup
This is particularly more "good looking" when we boot with a pair that
doesn't have historical 1s OHLC and thus the fast chart is empty from
outset. In this case it's a lot nicer to be already zoomed to
a comfortable preset number of "datums in view" even when the history
isn't yet filled in.

Adjusts the chart display `Viz.default_view()` startup to explicitly
ensure this happens via the `do_min_bars=True` flag B)
2023-03-10 18:20:22 -05:00
Tyler Goodlet 12e196a6f7 Catch `KeyError` on bcast errors which pop the sub
Not sure how I missed this (and left in handling for `list.remove()`; did
it ever work for that?) after the `samplerd` impl in 5ec1a72 but, this
adjusts the remove-broken-subscriber loop to catch the correct
`set.remove()` exception type on a missing (likely already removed)
subscription entry.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 8a87e5f390 Remove leftover debug print in cache reset meth 2023-03-10 18:20:22 -05:00
Tyler Goodlet 5958acebe1 Add (commented) draft 1min OHLC time index logging
For the purposes of eventually trying to resolve last-step indexing
synchronization (an intermittent but still existing) issue(s) that can
happen due to races during history frame query and shm writing during
startup. In fact, here we drop all `hist_viz` info queries from the main
display loop for now anticipating that this code will either be removed
or improved later.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 8d1c713a5a Always pass step to `slice_from_time()` in view mode
Again, as per the signature change, never expect implicit time step
calcs from overlay processing/machinery code. Also, extend the debug
printing (yet again) to include better details around
"rescale-due-to-minor-range-out-of-view" cases and a detailed msg for
the transform/scaling calculation (inputs/outputs), particularly for the
cases when one of the curves has a lesser support.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 32926747c6 Always pass `step` to `slice_from_time()` in the `Viz`
As per the change to `slice_from_time()` this ensures this `Viz` always
passes its self-calculated time indexing step size to the time slicing
routine(s).

Further this contains a slight impl tweak to `.scalars_from_index()` to
slice the actual view range from `xref` to `Viz.ViewState.xrange[1]` and
then reading the corresponding `yref` from the first entry in that
array; this should be no slower in theory and makes way for further
caching of x-read-range to `ViewState` opportunities later.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 712f1a47a0 Require `step: float` input to `slice_from_time()`
There's been way too many issues when trying to calculate this
dynamically from the input array, so just expect the caller to know what
it's doing and don't bother with ever hitting the error case of
calculating and incorrect value internally.
2023-03-10 18:20:22 -05:00
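To make the intent concrete, a hypothetical sketch of a `slice_from_time()` that *requires* the caller's `step` and uses it only to narrow the `numpy.searchsorted()` window (the real routine's internals may differ):

```python
import numpy as np


def slice_from_time(
    times: np.ndarray,  # 1d epoch-float array, ascending
    start_t: float,
    stop_t: float,
    step: float,        # required: sampling period, never inferred here
) -> slice:
    t0 = times[0]
    n = len(times)
    # gapless arithmetic estimates; with gaps the true indexes can only
    # be *smaller*, so these serve as upper bounds to narrow the search
    hi_start = int(min(max((start_t - t0) // step + 1, 0), n))
    hi_stop = int(min(max((stop_t - t0) // step + 1, 0), n))
    i_start = int(np.searchsorted(times[:hi_start], start_t, side='left'))
    i_stop = int(np.searchsorted(times[:hi_stop], stop_t, side='right'))
    return slice(i_start, i_stop)
```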
Tyler Goodlet 51f3733487 Handle "target-is-shorter-then-pinned" case
When the target pinning curve (by default, the dispersion major) is
shorter than the pinned curve, we need to make sure we still find
the x-intersect for computing returns scalars! Use `Viz.i_from_t()` to
accomplish this as well and, augment that method with a `return_y: bool`
to allow the caller to also retrieve the equivalent y-value at the
requested input time `t: float` for convenience.

Also tweak a few more internals around the 'loglin_ref_to_curve'
method:
- only solve / adjust for the above case when the major's xref is
  detected as being "earlier" in time than the current minor's.
- pop the major viz entry from the overlay table ahead of time to avoid
  a needless iteration and simplify the transform calc phase loop to
  avoid handling that needless cycle B)
- add much better "organized" debug printing with more clear headers
  around which "phase"/loop the message pertains and well as more
  explicit details in terms of x and y-range values on each cycle of
  each loop.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 4bb580ae60 Don't `@lru_cache` on `Viz.i_from_t()`, since view state.. 2023-03-10 18:20:22 -05:00
Tyler Goodlet 05aee4a311 Tweak debug printing to display y-mxmn per viz 2023-03-10 18:20:22 -05:00
Tyler Goodlet fc98d66ffc Fix curve up-sampling on `'r'` hotkey
Previously when very zoomed out and using the `'r'` hotkey the
interaction handler loop wouldn't trigger a re-(up)sampling to get
a more detailed curve graphic and instead the previous downsampled
(under-detailed) graphic would show. Fix that by ensuring we yield back
to the Qt event loop and do at least a couple render cycles with paired
`.interact_graphics_cycle()` calls.

Further this flips the `.start/signal_ic()` methods to use the new
`.reset_graphics_caches()` ctr-mngr method.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 57d56c4791 Facepalm: set `Viz.ViewState.yrange` even on cache hits.. 2023-03-10 18:20:22 -05:00
Tyler Goodlet 7e6e04b7e2 Drop remaining usage of `ChartPlotWidget.default_view()`
Instead delegate directly to `Viz.default_view()` throughout charting
startup and interaction handlers.

Also add a `ChartPlotWidget.reset_graphics_caches()` context mngr which
resets all managed graphics object's cacheing modes on enter and
restores them on exit for simplified use in interaction handling code.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 12bee716c2 Add `do_min_bars: bool` flag to `Viz.default_view()` 2023-03-10 18:20:22 -05:00
Tyler Goodlet 6690bd4576 Drop remaining non-usage of `ChartPlotWidget.maxmin()` 2023-03-10 18:20:22 -05:00
Tyler Goodlet 9c8bd9b8ce Expand mxmn view y-margins back to 0.06 2023-03-10 18:20:22 -05:00
Tyler Goodlet eea850450a Handle yrange not set on view case for vlm fsp plot 2023-03-10 18:20:22 -05:00
Tyler Goodlet f7dfe57090 Disable coordinate caching during interaction
This finally seems to mitigate all the "smearing" and "jitter" artifacts
when using Qt's "coordinate cache" graphics-mode:

- whenever we're in a mouse interaction (as per calls to
  `ChartView.start/signal_ic()`) we simply disable the caching mode (set
  `.NoCache`) until the interaction is complete.
- only do this (for now) during a pan since it doesn't seem to be an
  issue when zooming?
- ensure disabling all `Viz.graphics` and `.ds_graphics` to be agnostic
  to any case where there's both a zoom and a pan simultaneously (not
  that it's easy to do manually XD) as well as solving the problem
  whenever an OHLC series is in traced-and-downsampled mode (during low
  zoom).

Impl deatz:
- rename `ChartView._ic` -> `._in_interact: trio.Event`
- add `.ChartView._interact_stack: ExitStack` which we use to open
  and close the `FlowGraphics.reset_cache()` mngrs from mouse handlers.
- drop all the commented per-subtype overrides for `.cache_mode: int`.
- write up much better doc strings for `FlattenedOHLC` and `StepCurve`
  including some very basic ASCII-art diagrams.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 9b960594aa Add per-chart `Viz`/overlay graphics iterator method 2023-03-10 18:20:22 -05:00
Tyler Goodlet 75642929e3 Move cache-reset ctx mngr to parent type: `FlowGraphics.reset_cache()` 2023-03-10 18:20:22 -05:00
Tyler Goodlet eda283f059 Fix focal min calc after switching to `Viz.datums_range()`.. 2023-03-10 18:20:22 -05:00
Tyler Goodlet 77401a94fb Simplify `FlowGraphics.x_last()` logics 2023-03-10 18:20:22 -05:00
Tyler Goodlet 75807f4a96 Rename overlay technique var to `method` 2023-03-10 18:20:22 -05:00
Tyler Goodlet 94f0ef13ef Repair x-label datetime labels when in array-index mode 2023-03-10 18:20:22 -05:00
Tyler Goodlet 7579863068 Skip overlay handling when `N < 2` are detected 2023-03-10 18:20:22 -05:00
Tyler Goodlet 993bb47138 Drop passing overlay method from viewbox to view-mode handler 2023-03-10 18:20:22 -05:00
Tyler Goodlet 8c392fda60 Drop a bunch of commented/uneeded cruft 2023-03-10 18:20:22 -05:00
Tyler Goodlet 45e97dd4c8 Solve a final minor-should-rescale edge case
When the minor has the same scaling as the major in a given direction we
should still do back-scaling against the major-target and previous
minors to avoid strange edge cases where only the target-major might not
be shifted correctly to show a matched intersect point? More or less
this just meant making the y-mxmn checks interval-inclusive with
`>=`/`<=` operators.

Also adds a shite ton of detailed comments throughout the pin-to-target
method blocks and moves the final major y-range call outside the final
`scaled: dict` loop.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 01ea706644 Better doc string, use `Viz.vs: ViewState` 2023-03-10 18:20:22 -05:00
Tyler Goodlet c1ea8552ac Back-rescale previous (minor) curves from latest
For the "pin to target major/target curve" overlay method, this finally
solves the longstanding issue of ensuring that any new minor curve,
which requires an increase in the major/target curve y-range, also
re-scales all previously scaled minor curves retroactively. Thus we now
guarantee that all minor curves are correctly "pinned" to their
target/major on their earliest available datum **and** are all kept in
view.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 6601dea8cc Support "pin-to-target-curve" overlay method again
Yah yah, i know it's the same as before (the N > 2 curves case with
out-of-range-minor rescaling the previously scaled curves isn't fixed
yet...) but, this is a much better and optional implementation in less
code. Further we're now better leveraging various new cached properties
and methods on `Viz`.

We now handle different `overlay_technique: str` options using `match:`
syntax in the 2ndary scaling loop, stash the returns scalars per curve
in `overlay_table`, and store and iterate the curves by dispersion
measure sort order.

Further wrt "pin-to-target-curve" mode, which currently still pins to
the largest measured dispersion curve in the overlay set:

- pop major Ci overlay table entries at start for sub-calcs usage when
  handling the "minor requires major rescale after pin" case.
- (finally) correctly rescale the major curve y-mxmn to whatever the
  latest minor overlay curve requires by calcing the inverse transform
  from the minor *at that point*:
  - the intersect point being that at which the minor starts support on
    the major's x-domain, using the new `Viz.scalars_from_index()` and,
  - checking that the minor is not out of range (versus what the major's
    transform calcs it to be, in which case,
  - calc the inverse transform from the current out-of-range minor and
    use it to project the new y-mxmn for the major/target based on the
    same intersect-reference point in the x-domain used by the minor.
  - always handle the target-major Ci specially by only setting the
    `mx_ymn` / `mx_ymx` values when iterating that entry in the overlay
    table.
  - add todos around also doing the last sub-sub bullet for all previously
    major-transform scaled minor overlays (this is coming next..i hope).
- add a final 3rd overlay loop which goes through a final `scaled: dict`
  to apply all range values to each view; this is where we will
  eventually solve that last edge case of an out-of-range minor's
  scaling needing to be used to rescale already scaled minors XD
2023-03-10 18:20:22 -05:00
Tyler Goodlet 4d11c5c89c Add cached dispersion methods to `Viz`
In an effort to make overlay calcs cleaner and leverage caching of view
range -> dispersion measures, this adds the following new methods:

- `._dispersion()` an lru cached returns scalar calculator given input
  y-range and y-ref values.
- `.disp_from_range()` which calls the above method and returns variable
  output depending on requested calc `method: str`.
- `.i_from_t()` a currently unused cached method for slicing the
  in-view's array index from time stamp (though not working yet due to
  needing to parameterize the cache by the input `.vs.xrange`).

Further refinements/adjustments:
- rename `.view_state: ViewState` -> `.vs`.
- drop the `.bars_range()` method as it's no longer used anywhere else
  in the code base.
- always set the `ViewState.in_view: np.ndarray` inside `.read()`.
- return the start array index (from slice) and `yref` value @ `xref`
  from `.scalars_from_index()` to aid with "pin to curve" rescaling
  caused by out-of-range pinned-minor curves.
2023-03-10 18:20:22 -05:00
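A compressed sketch of the caching idea (names from the message above, bodies assumed; the real methods hang off `Viz` instance state rather than a staticmethod):

```python
from functools import lru_cache


class Viz:

    @staticmethod
    @lru_cache(maxsize=128)
    def _dispersion(
        ymn: float,
        ymx: float,
        yref: float,
    ) -> tuple[float, float]:
        # up/down "swing" in returns terms relative to the ref value
        return (
            (ymx - yref) / yref,  # max upside return
            (ymn - yref) / yref,  # max downside return
        )

    def disp_from_range(
        self,
        ymn: float,
        ymx: float,
        yref: float,
        method: str = 'both',
    ) -> float | tuple[float, float]:
        up, down = self._dispersion(ymn, ymx, yref)
        match method:
            case 'up':
                return up
            case 'down':
                return down
            case _:
                return up, down


assert Viz().disp_from_range(90, 120, 100, method='up') == 0.2
```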
Tyler Goodlet 29418e9655 Avoid index-from-time slicing including gaps
Not sure why this was ever allowed but, for slicing to the sample
*before* whatever target time stamp is passed in we should definitely
not return the prior index for the slice start since that might
include a very large gap prior to whatever sample is scanned to have
the earliest matching time stamp.

This was essential to fixing overlay intersect points searching in our
``ui.view_mode`` machinery..
2023-03-10 18:20:22 -05:00
Tyler Goodlet 8fd5c67f2a Drop last lingering usage of `Viz.bars_range()` 2023-03-10 18:20:22 -05:00
Tyler Goodlet 62e0889bf5 Add `Viz.view_state: ViewState`
Adds a small struct which is used to track the most recently viewed
data's x/y ranges as well as the last `Viz.read()` "in view" array data
for fast access by chart related graphics processing code, namely view
mode overlay handling.

Also adds new `Viz` interfaces:
- `Viz.ds_yrange: tuple[float, float]` which replaces the previous
  `.yrange` (now set by `.datums_range()` on manual y-range calcs) so
  that the m4 downsampler can set this field specifically and then it
  get used (when available) by `Viz.maxmin()`.
- `Viz.scalars_from_index()` a new returns-scalar generator which can be
  used to calc the up and down returns values (used for scaling overlay
  y-ranges) from an input `xref` x-domain index which maps to some
  `Ci(xref) = yref`.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 26690b061b Make slow chart a teensie bit smaller 2023-03-10 18:20:22 -05:00
Tyler Goodlet 98b7d78476 Drop (now) unused major curve mx/mn variables 2023-03-10 18:20:22 -05:00
Tyler Goodlet 35c40e825a Move overlay transform logic to new `.ui.view_mode`
It was getting waayy too long to be jammed in a method XD

This moves all the chart-viz iteration and transform logic into a new
`piker.ui.view_mode.overlay_viewlists()` core routine which will make it
a lot nicer for,

- AOT compilation via `numba` / `cython` / `mypyc`.
- decoupling from the `pyqtgraph.ViewBox` APIs if we ever decide to get
  crazy and go without another graphics engine.
- keeping your head clear when trying to rework the code B)
2023-03-10 18:20:22 -05:00
Tyler Goodlet 753e991dae Adjust `.ui` modules to new set-style "optional" annots 2023-03-10 18:20:22 -05:00
Tyler Goodlet 54ecb0990f Remove vlm chart again, drop lotsa fsp cruft 2023-03-10 18:20:22 -05:00
Tyler Goodlet 5f470d6122 Rework overlay pin technique: "align to first"
As part of solving a final bullet-issue in #455, which is specifically
a case:
- with N > 2 curves, one of which is the "major" dispersion curve and
  the others are "minors",
- we can run into a scenario where some minor curve which gets pinned to
  the major (due to the original "pinning technique" -> "align to
  major") at some `P(t)` which is *not* the major's minimum / maximum
  due to the minor having a smaller/shorter support and thus,
- requires that in order to show the max/min on the minor curve we have
  to expand the range of the major curve as well but,
- that also means any previously scaled (to the major) minor curves need
  to be adjusted as well or they'll not be pinned to the major the same
  way!

I originally was trying to avoid doing the recursive iteration back
through all previously scaled minor curves and instead decided to try
implementing the "per side" curve dispersion detection (as was
originally attempted when first starting this work). The idea is to
decide which curve's up or down "swing in % returns" would determine the
global y-range *on that side*. Turns out I stumbled on the "align to
first" technique in the process: "for each overlay curve we align its
earliest sample (in time) to the same level of the earliest such sample
for whatever is deemed the major (directionally disperse) curve in
view".

I decided (with help) that this "pin to first" approach/style is equally
as useful and maybe often more so when wanting to view support-disjoint
time series:

- instead of compressing the y-range on "longer series which have lesser
  sigma" to make whatever "shorter but larger-sigma series" pin to it at
  an intersect time step, this instead will expand the price ranges
  based on the earliest time step in each series.
- the output global-returns-overlay-range for any N-set of series is the
  same as in the previous "pin to intersect time" technique.
- the only time this technique seems less useful is for overlaying
  market feeds which have the same destination asset but different
  source assets (eg. btceur and btcusd on the same chart since if one
  of the series is shorter it will always be aligned to the earliest
  datum on the longer instead of more naturally to the intersect sample
  level as was in the previous approach).

As such I'm going to keep this technique as discovered and will later
add back optional support for the "align to intersect" approach from
previous (which will again require detecting the highest dispersion
curve direction-agnostic) and pin all minors to the price level at which
they start on the major.

Further details of the implementation rework in
`.interact_graphics_cycle()` include:

- add `intersect_from_longer()` to detect and deliver a common datum
  from 2 series which are different in length: the first time-index
  sample in the longer.
- Rewrite the drafted `OverlayT` to only compute (inversed log-returns)
  transforms for a single direction and use 2 instances, one for each
  direction inside the `Viz`-overlay iteration loop.
- do all dispersion-per-side major curve detection in the first pass of
  all `Viz`s on a plot, instead updating the `OverlayT` instances for
  each side and compensating for any length mismatch and
  rescale-to-minor cases in each loop cycle.
2023-03-10 18:20:22 -05:00
Tyler Goodlet d5ba26cfaf Try to hide all axes even when removed 2023-03-10 18:20:22 -05:00
Tyler Goodlet cb5e2d48e2 Add hack-zone UI REPL access via `ctl-u` 2023-03-10 18:20:22 -05:00
Tyler Goodlet a6d1053c50 Facepalm, align overlay plot view exactly to parent
Previously we were aligning the child's `PlotItem` to the "root" (top
most) overlay's `ViewBox`.. smh. This is why there was a weird gap on the
LHS next to the 'left' price axes: something weird in the implied axes
offsets was getting jammed in that rect.

Also comments out "the-skipping-of" moving axes from the overlay's
`PlotItem.layout` to the root's linear layout(s) when an overlay's axis
is read as not visible; this isn't really necessary nor useful and if we
want to remove the axes entirely we should do it explicitly and/or
provide a way through the `ComposeGridLayout` API.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 3dc1f66ff6 Go back to caching on all curves
Despite there being artifacts when interacting, the speedups when
cross-hair-ing are just too good to ignore. We can always play with
disabling caches when interaction takes place much like we do with feed
pausing.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 091afccb72 Dynamically adjust y-range margin in display loop
When zoomed in a lot, and thus a quote driven y-range resize takes place,
it makes more sense to increase the `range_margin: float` input to
`._set_yrange()` to ensure all L1 labels stay in view; generally the
more zoomed in,
- the smaller the y-range is and thus the larger the needed margin (on
  that range's dispersion diff) would be,
- it's more likely to get a last datum move outside the previous range.

Also, always do overlayT style scaling on the slow chart whenever it
treads.
2023-03-10 18:20:22 -05:00
Tyler Goodlet cda3bcc1f6 Expose `._set_yrange()` kwargs via `yrange_kwargs: dict`
Since it can be desirable to dynamically adjust inputs to the y-ranging
method (such as in the display loop when a chart is very zoomed in), this
adds such support through a new `yrange_kwargs: dict[Viz, dict]` which
replaces the `yrange` tuple we were passing through prior. Also, adjusts
the y-range margin back to the original 0.09 of the diff now that we can
support dynamic control.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 2d7359851f Go back to no-cache on OHLC downsample line 2023-03-10 18:20:22 -05:00
Tyler Goodlet db1e0a04f8 Only use last `ChartView._yrange` if set 2023-03-10 18:20:22 -05:00
Tyler Goodlet 972b723a5d Skip overlay transform calcs on common-pi curves
If there is a common `PlotItem` used for a set of `Viz`/curves (on
a given view) we don't need to do overlay scaling and thus can also
short circuit the viz iteration loop early.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 74c215d5b2 Lel, always meant to no-cache the step curve.. 2023-03-10 18:20:22 -05:00
Tyler Goodlet c646b435bf Incrementally set vlm chart yrange per quote 2023-03-10 18:20:22 -05:00
Tyler Goodlet 0a939311fe Only set the specific view's yrange per quote
Somewhat of a facepalm but, for incremental update of the auto-yrange
from quotes in the display loop obviously we only want to update the
associated `Viz`/viewbox for *that* fqsn. Further we don't need to worry
about the whole "tick margin" stuff since `._set_yrange()` already adds
margin to the yrange by default; thus we remove all of that.
2023-03-10 18:20:22 -05:00
Tyler Goodlet a7db6adc2e Always set the `ChartView._viz` for each plot 2023-03-10 18:20:22 -05:00
Tyler Goodlet c57567ab0d No-overlays, y-ranging optimizations
When the caller passes `do_overlay_scaling=False` we skip the given
chart's `Viz` iteration loop, and set the yrange immediately, then
continue to the next chart (if `do_linked_charts` is set) instead of
a `continue` short circuit within the viz sub-loop.

Deats:
- add a `_maybe_calc_yrange()` helper which makes the `yranges`
  provided-or-not case logic more terse (factored).
- add a `do_linked_charts=False` short circuit.
- drop the legacy commented swing % calcs stuff.
- use the `ChartView._viz` when `do_overlay_scaling=False` thus
  presuming that we want to handle the viz mapped to *this* view box.
- add a `._yrange` "last set yrange" tracking var which keeps record of
  the last ymn/ymx value set in `._set_yrange()` BEFORE doing range
  margins; this will be used for incremental update in the display loop.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 3daee0caa9 Disable overlay scaling on per-symbol-feed updates
Since each symbol's feed is multiplexed by quote key (an fqsn), we can
avoid scaling overlay curves on any single update, presuming each quote
driven cycle will trigger **only** the specific symbol's curve.

Also disables fsp `.interact_graphics_cycle()` calls for now since it
seems they aren't really that critical and we should be using the
same technique as above (doing incremental y-range checks/updates) for
FSPs as well.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 6ea64a7d2e Iterate all charts (widgets) when only one overlay
The reason (fsp) subcharts were not linked-updating correctly was
because of the early termination of the interact update loop when only
one "overlay" (aka no other overlays then the main curve) is detected.
Obviously in this case we still need to iterate all linked charts in the
set (presuming the user doesn't disable this).

Also tweaks a few internals:
- rename `start_datums: dict` -> `overlay_table`.
- compact all "single curve" checks to one logic block.
- don't collect curve info into the `overlay_table: dict` when
  `do_overlay_scaling=True`.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 25cf8df367 Pass windowed y-mxmn to `.interact_graphics_cycle()` calls in display loop 2023-03-10 18:20:22 -05:00
Tyler Goodlet 91d41ebf76 Allow y-range input via a `yranges: dict[Viz, tuple[float, float]]` 2023-03-10 18:20:22 -05:00
Tyler Goodlet c690e141e1 Don't unset `Viz.render` for unit vlm
Such that the y-range auto-sorting inside
`ChartView.interact_graphics_cycle()` still runs on the unit vlm axis
and we always size such that the y-label stays in view.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 2ed43c0758 Fix profiler f-string 2023-03-10 18:20:22 -05:00
Tyler Goodlet 7a83a7288c Update profile msgs to new apis 2023-03-10 18:20:22 -05:00
Tyler Goodlet 9930f25ad3 Move axis hiding into `.overlay_plotitem()`
Since we pretty much always want to hide the 'bottom' axis and any side
that is not declared by the caller, move the axis hiding into this
method. Lets us drop the same calls in `.ui._fsp` and `._display`.

This also disables the auto-ranging back-linking for now since it
doesn't seem to be working quite yet?
2023-03-10 18:20:22 -05:00
Tyler Goodlet 5dd69b2295 Better handle dynamic registry sampler broadcasts
In situations where clients are (dynamically) subscribing *while*
broadcasts are starting to taking place we need to handle the
`set`-modified-during-iteration case. This scenario seems to be more
common during races on concurrent startup of multiple symbols. The
solution here is to use another set to take note of subscribers which
are successfully sent-to and then skipping them on re-try.

This also contains an attempt to exception-handle throttled stream
overruns caused by higher frequency feeds (like binance) pushing more
quotes than can be handled during (UI) client startup.
2023-03-10 18:20:22 -05:00
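The set-modified-during-iteration fix plus the "note who we already sent to" trick can be boiled down to a small `trio` sketch (illustrative names, not the actual sampler code):

```python
import trio


async def broadcast(
    quote: dict,
    subs: set[trio.MemorySendChannel],
    sent: set[trio.MemorySendChannel],
) -> None:
    # iterate a snapshot so concurrent (un)subscribes can't raise
    # "set changed size during iteration"
    for stream in subs.copy():
        if stream in sent:
            continue  # already delivered on a prior (re)try
        try:
            await stream.send(quote)
            sent.add(stream)
        except trio.BrokenResourceError:
            # consumer went away; `.discard()` won't raise even if some
            # other task already popped the entry (the `KeyError` case)
            subs.discard(stream)
```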
Tyler Goodlet 246d07021e Drop old loop and wait on fsp engine tasks startups 2023-03-10 18:20:22 -05:00
Tyler Goodlet 7ebcd6d734 Comment out all median usage, turns out it's unneeded.. 2023-03-10 18:20:22 -05:00
Tyler Goodlet 5a8fd42c0c Lul, actually scaled main chart from linked set
This was a subtle logic error: when building the `plots: dict` we weren't
adding the "main (ohlc or other source) chart" from the `LinkedSplits`
set when interacting with some sub-chart from `.subplots`..

Further this tries out bypassing `numpy.median()` altogether by just
using `median = (ymx - ymn) / 2` which should be nearly the same?
2023-03-10 18:20:22 -05:00
Tyler Goodlet 517c68f3ad Use `._pathops.slice_from_time()` for overlay intersects
It's way faster since it uses a uniform time arithmetic to narrow the
`numpy.searchsorted()` range before actually doing the index search B)
2023-03-10 18:20:22 -05:00
Tyler Goodlet ea84505682 Don't scale overlays on linked from display loop
In the (incrementally updated) display loop we have range logic that is
incrementally updated in real-time by streams, as such we don't really
need to update all linked charts' (for any given, currently updated
chart) y-ranges on calls of each separate (sub-)chart's
`ChartView.interact_graphics_cycle()`. In practise there are plenty of
cases where resizing in one chart (say the vlm fsps sub-plot) requires
a y-range re-calc but not in the OHLC price chart. Therefore
we always avoid doing more resizing than necessary despite it resulting
in potentially more method call overhead (which will later be justified
by better leveraging incrementally updated `Viz.maxmin()` and
`median_from_range()` calcs).
2023-03-10 18:20:22 -05:00
Tyler Goodlet 5eaca18ee0 Don't skip overlay scaling in disp-loop for now 2023-03-10 18:20:22 -05:00
Tyler Goodlet e06d4b405d Add linked charts guard-flag for use in display loop 2023-03-10 18:20:22 -05:00
Tyler Goodlet cf67c790e5 Use new cached median method in overlay scaling
Massively speeds up scaling transform cycles (duh).

Also includes a draft for an "overlay transform" type/api; obviously
still a WIP 🏄..
2023-03-10 18:20:22 -05:00
Tyler Goodlet ec8679ad74 Add `Viz.median_from_range()`
A super snappy `numpy.median()` calculator (per input range) which we
slap an `lru_cache` on thanks to handy dunder method hacks for such
things on mutable types XD
2023-03-10 18:20:22 -05:00
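A sketch of the trick being referenced (the hashing shim is the "dunder hack"; the actual `Viz` method signature may differ):

```python
from functools import lru_cache

import numpy as np


class Viz:

    def __init__(self, array: np.ndarray) -> None:
        self._arr = array

    # `lru_cache` keys include `self`, so a mutable type needs an
    # explicit (identity based) hash to be usable as a cache key
    def __hash__(self) -> int:
        return id(self)

    @lru_cache(maxsize=32)
    def median_from_range(self, start: int, stop: int) -> float:
        # only recomputed when the read range actually changes
        return float(np.median(self._arr[start:stop]))


viz = Viz(np.arange(100.0))
assert viz.median_from_range(0, 10) == 4.5
```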
Tyler Goodlet 9418f53244 Speed up ranging in display loop
Use the new `do_overlay_scaling: bool` since we know each feed will have
its own updates (cuz multiplexed by feed..) and we can avoid
ranging/scaling overlays that will make their own calls.

Also, pass in the last datum "brighter" color for ohlc curves as it was
originally (and now that we can pass that styling bit through).
2023-03-10 18:20:22 -05:00
Tyler Goodlet 497174c687 Add full profiling to `.interact_graphics_cycle()` 2023-03-10 18:20:22 -05:00
Tyler Goodlet 481f1b3d7e Fix intersect detection using time indexing
Facepalm, obviously absolute array indexes are not going to necessarily
align vs. time over multiple feeds/history. Instead use
`np.searchsorted()` on whatever curve has the smallest support and find
the appropriate index of intersection in time so that alignment always
starts at a sensible reference.

Also adds a `debug_print: bool` input arg which can enable all the
prints when working on this.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 776ffd2b1c Factor curve-dispersion sorting into primary loop
We can determine the major curve (in view) in the first pass of all
`Viz`s so drop the 2nd loop and thus the `mxmn_groups: dict`. Also
simplifies logic for the case of only one (the major) curve in view.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 896259d9e4 When only one curve is in view, skip group ranging 2023-03-10 18:20:22 -05:00
Tyler Goodlet 89e2e7fc54 Adjust `.update_graphics()` to expect `in_view: bool` in `_fsp.py` 2023-03-10 18:20:22 -05:00
Tyler Goodlet 32f21dc06b Drop `update_graphics_from_flow()` 2023-03-10 18:20:22 -05:00
Tyler Goodlet a0fb84f55b Just warn log on bad intersect indexing errors (for now) 2023-03-10 18:20:22 -05:00
Tyler Goodlet fc6ccc306c Only set the major curve's range once (per render cycle)
Turns out this is a limitation of the `ViewBox.setYRange()` api: you
can't call it more than once and expect anything but the first call to
be applied without letting a render cycle run. As such, we wait until
the end of the log-linear scaling loop to finally apply the major curves
y-mx/mn after all minor curves have been evaluated.

This also drops all the debug prints (for now) to get a feel for latency
in production mode.
2023-03-10 18:20:22 -05:00
Tyler Goodlet c2dd255e8a Only remove axis from scene when in one 2023-03-10 18:20:22 -05:00
Tyler Goodlet 7e421ba57b Drop `.group_maxmin()`
We ended up doing group maxmin sorting at the interaction layer (now in
the view box) and thus this method is no longer needed, though it was
the reference for the code now in `ChartView.interact_graphics_cycle()`.

Further this adds a `remove_axes: bool` arg to `.insert_plotitem()`
which can be used to drop axis entries from the inserted pi (though it
doesn't seem like we really ever need that?) and does the removal in
a separate loop to avoid removing axes before they are registered in
`ComposedGridLayout._pi2axes`.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 0591cb09f6 Clean up cross-curve intersect point indexing
When there are `N`-curves we need to consider the smallest
x-data-support subset when figuring out for each major-minor pair such
that the "shorter" series is always returns aligned to the longer one.

This makes the var naming more explicit with `major/minor_i_start` as
well as clarifies more stringently a bunch of other variables and
explicitly uses the `minor_y_intersect` y value in the scaling transform
calcs. Also fixes some debug prints.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 052ce65682 3rdz the charm: log-linearize minor y-ranges to a major
In very close manner to the original (gut instinct) attempt, this
properly (y-axis-vertically) aligns and scales overlaid curves according
to what we are calling a "log-linearized y-range multi-plot" B)

The basic idea is that a simple returns measure (eg. `R = (p1 - p0)
/ p0`) applied to all curves gives a constant output `R` no matter the
price co-domain in use and thus gives a constant returns over all assets
in view styled scaling; an intuitive visual of returns correlation. The
reference point is for now the left-most point in view (or highest
common index available to all curves), though we can make this
a parameter based on user needs.

A slew of debug `print()`s are left in for now until we iron out the
remaining edge cases to do with re-scaling a major (dispersion) curve
based on a minor now requiring a larger log-linear y-range from that
previous major's range.
2023-03-10 18:20:22 -05:00
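A tiny numeric illustration of the log-linearized idea (made-up price series, just to show that the returns transform makes disjoint price co-domains directly comparable):

```python
import numpy as np

btcusdt = np.array([20_000.0, 20_400.0, 21_000.0])
ethusdt = np.array([1_500.0, 1_530.0, 1_575.0])


def returns(curve: np.ndarray) -> np.ndarray:
    # R = (p1 - p0) / p0 with p0 the left-most (reference) datum in view
    p0 = curve[0]
    return (curve - p0) / p0


# both series show an identical 0% -> +2% -> +5% profile once
# transformed, so their overlaid y-ranges line up exactly
assert np.allclose(returns(btcusdt), returns(ethusdt))
assert np.allclose(returns(btcusdt), [0.0, 0.02, 0.05])
```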
Tyler Goodlet 52ac1053aa 2nd try: dispersion normalize y-ranges around median
In the dispersion swing calcs, use the series median from the in-view
data to determine swing proportions to apply on each "minor curve"
(series with lesser dispersion than the one with the greatest). Track the
major `Viz` as before by max dispersion. Apply the dispersion swing
proportions to each minor curve-series in a third loop/pass of all
overlay groups: this ensures all overlays are dispersion normalized in
their ranges but, minor curves are currently (vertically) centered (vs.
the major) via their medians.

There is a ton of commented code from attempts to try and vertically
align minor curves to the major via the "first datum" in-view/available.
This still needs work and we may want to offer it as optional.

Also adds logic to allow skipping margin adjustments in `._set_yrange()`
if you pass `range_margin=None`.
2023-03-10 18:20:22 -05:00
Tyler Goodlet dfc35253ea First draft, group y-minmax transform algo
On overlaid ohlc vizs we compute the largest max/min spread and
apply that maximum "up and down swing" proportion to each `Viz`'s
viewbox in the group.

We obviously still need to clip to the shortest x-range so that
it doesn't look exactly the same as before XD
2023-03-10 18:20:22 -05:00
Tyler Goodlet 8a5b9f4e8c Rename `.maybe_downsample_graphics()` -> `.interact_graphics_cycle()` 2023-03-10 18:20:22 -05:00
Tyler Goodlet f89e11fc7d Right, handle y-ranging multiple paths per plot
We were hacking this before using the whole `ChartView._maxmin()`
setting stuff since in some cases you might want similarly ranged paths
on the same view, but of course you need to max/min them together..

This adds that group sorting by using a table of `dict[PlotItem,
tuple[float, float]]` and taking the abs highest/lowest value for each
plot in the viz interaction update loop.

Also removes the now commented signal registry calls and thus
`._yranger`, drops the `set_range: bool` from `._set_yrange` and adds
an extra `.maybe_downsample_graphics()` to the mouse wheel handler to
avoid a weird slow debounce where ds-ing is delayed until a further
interaction.
2023-03-10 18:20:22 -05:00
Tyler Goodlet fc73becd5f Drop Qt interaction signal usage
It's kind of hard to understand with the C++ fan-out to multiple views
(imo a cluster-f#$*&) and seems honestly just plain faster to loop (in
python) through all the linked view handlers XD

Core adjustments:
- make the panning and wheel-scroll handlers just call
  `.maybe_downsample_graphics()` directly; drop all signal emissions.
- make `.maybe_downsample_graphics()` loop through all vizs per subchart
  and use the new pipeline-style call sequence of:
  - `Viz.update_graphics() -> <read_slc>: tuple`
  - `Viz.maxmin(i_read_range=<read_slc>) -> yrange: tuple`
  - `Viz.plot.vb._set_yrange(yrange=yrange)`
  which inlines all the necessary calls in the most efficient way whilst
  leveraging `.maxmin()` caching and ymxmn-from-m4-during-render to
  boot.
- drop registering `._set_yrange()` for handling `.sigRangeChangedManually`.
2023-03-10 18:20:22 -05:00
Tyler Goodlet 223e9d999c Add first-draft `PlotItemOverlay.group_maxmin()`
Computes the maxmin values for each underlying plot's in-view range as
well as the max up/down swing (in percentage terms) from the plot with
most dispersion and returns all these values plus a `dict` of plots to
their ranges as part of output.
2023-03-10 18:20:22 -05:00
goodboy eb51033b18
Merge pull request #483 from pikers/service_subpkg
`.service` subpkg
2023-03-10 10:37:46 -05:00
Tyler Goodlet 12883c3c90 Don't double send `enable_modules` and `debug_mode` in kwargs..
This broke non-disti-mode actor tree spawn / runtime, seemingly because
the cli entrypoint for a `piker chart` also sends these values down
through the call stack independently? Pretty sure we don't need to send
the `enable_modules` from the chart actor anyway.
2023-03-10 10:30:26 -05:00
Tyler Goodlet 8ceaa27872 Add ES client polling to ensure eventual connectivity.. 2023-03-09 18:42:24 -05:00
Tyler Goodlet 97290fcb05 Never drop root perms in test harness 2023-03-09 18:42:24 -05:00
Tyler Goodlet 44a3115539 Expose `drop_root_perms_for_ahab` from `pikerd` factories to `ahabd` 2023-03-09 18:42:24 -05:00
Tyler Goodlet 0772b4a0fa Hard code version from our container, predicate renaming 2023-03-09 18:42:24 -05:00
Tyler Goodlet 15064d94cb `ahabd`: Harden cancellation teardown (again XD)
Needed to move the startup sequence inside the `try:` block to guarantee
we always do the (now shielded) `.cancel()` call if we get a cancel
during startup.

Also, support an optional `started_afunc` field in the config if
backends want to just provide a one-off blocking async func to sync
container startup. Add a `drop_root_perms: bool` to allow persisting
sudo perms for testing or dyanmic container spawning purposes.
2023-03-09 18:42:24 -05:00
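The teardown-hardening pattern itself is generic `trio`; a minimal sketch (names illustrative, not the real `ahabd` code):

```python
import trio


class Container:
    async def start(self) -> None:
        await trio.sleep(0)   # stand-in for docker startup + log sync

    async def cancel(self) -> None:
        await trio.sleep(0)   # stand-in for docker stop/remove


async def open_ahabd(cntr: Container) -> None:
    try:
        # startup lives *inside* the try so a cancel raised mid-startup
        # still triggers the teardown below
        await cntr.start()
        await trio.sleep_forever()
    finally:
        # shield so the already-pending cancellation can't interrupt
        # the container teardown itself
        with trio.CancelScope(shield=True):
            await cntr.cancel()
```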
Tyler Goodlet 9a00c45923 Add `log` fixture for easy test plugin 2023-03-09 18:42:24 -05:00
Tyler Goodlet 7cc9911565 Add connection poll loop to es test as well 2023-03-09 15:37:43 -05:00
Tyler Goodlet 79b0db4449 Pass a config `tmp_dir: Path` to the runtime when testing 2023-03-09 15:37:43 -05:00
Tyler Goodlet 5aaa7f47dc Pull testing config dir from `tractor` runtime vars
Provides a more correct solution (particularly for distributed testing)
to override the `piker` configuration directory by reading the path from
a specific `tractor._state._runtime_vars` entry that can be provided by
the test harness.

Also fix some typing and comments.
2023-03-09 15:37:43 -05:00
Tyler Goodlet aa36abf36e Support passing `tractor` "actor runtime vars" down the runtime 2023-03-09 15:37:43 -05:00
Tyler Goodlet 2014019b06 Add reconnect loop to `marketstore` startup test
Due to making ahabd supervisor init more async we need to be more
tolerant to mkts server startup: the grpc machinery needs to be up
otherwise a client which connects too early may just hang on requests..

Add a reconnect loop (which might end up getting factored into client
code too) so that we only block on requests once we know the client
connection is actually responsive.
2023-03-09 15:37:43 -05:00
Tyler Goodlet 75b7a8b56e `marketstore`: Pull default socket from server config 2023-03-09 15:37:43 -05:00
Tyler Goodlet 31392af427 Move enabled module defs to just above where used 2023-03-09 15:37:43 -05:00
Tyler Goodlet 6540c415c1 Lul, fix imports in elasticsearch block.. 2023-03-09 15:37:43 -05:00
Tyler Goodlet fbc12b1b07 Add 10min timeout on CI job.. 2023-03-09 15:37:43 -05:00
Tyler Goodlet cda7a54718 Fix final missed `marketstore` mod import
Thanks @esme! XD

Also, do a linter pass and remove a buncha unused references.
2023-03-09 15:37:43 -05:00
Tyler Goodlet 6f92c6b52d Don't crash on a `xdotool` timeout.. 2023-03-09 15:37:43 -05:00
Tyler Goodlet 441243f83b Attempt to report `piker storage -d <fqsn>` errors
Not really sure there's much we can do besides dump Grpc stuff when we
detect an "error" `str` for the moment..

Either way leave a buncha complaints (as always) and do linting
fixups..
2023-03-09 15:37:43 -05:00
Tyler Goodlet cec2967071 Import `maybe_open_pikerd` at module level 2023-03-09 15:37:43 -05:00
Tyler Goodlet f95ea19b21 Move `pikerd` runtime bootstrap to `.service._actor_runtime` 2023-03-09 15:37:43 -05:00
Tyler Goodlet eca048c0c5 Move daemon spawning endpoints to `service._daemon` module 2023-03-09 15:37:43 -05:00
Tyler Goodlet a2d40937a3 Move actor-discovery utils to `.service._registry` 2023-03-09 15:37:42 -05:00
Tyler Goodlet 31f2b01c3e Move `Services` api to `.service._mngr` mod 2023-03-09 15:37:42 -05:00
Tyler Goodlet b226b678e9 Fix missed `marketstore` mod imports 2023-03-09 15:37:42 -05:00
Tyler Goodlet dd87d1142e Bump mkts timeout to 2s 2023-03-09 15:37:42 -05:00
Tyler Goodlet afac553ea2 Move all docker and external db code to `piker.service` 2023-03-09 15:37:42 -05:00
Tyler Goodlet 93c81fa4d1 Start `piker.service` sub-package
For now just moves everything that was in `piker._daemon` to a subpkg
module but a reorg is coming pronto!
2023-03-09 15:37:42 -05:00
Tyler Goodlet bfe3ea1f59 Set explicit `marketstore` container startup timeout 2023-03-09 15:37:42 -05:00
Tyler Goodlet 56629b6b2e Hardcode `cancel` log level for `ahabd` for now 2023-03-09 15:37:42 -05:00
Tyler Goodlet bb723abc9d Always passthrough loglevel to `ahabd` supervisor 2023-03-09 15:37:42 -05:00
Tyler Goodlet 7694419e71 Background docker-container logs processing
Previously we would make the `ahabd` supervisor-actor sync to docker
container startup using pseudo-blocking log message processing.

This has issues,
- we're forced to do a hacky "yield back to `trio`" in order to be
  "fake async" when reading the log stream and further,
- blocking on a message is fragile and often slow.

Instead, run the log processor in a background task and in the parent
task poll for the container to be in the client list using a similar
pseudo-async poll pattern. This allows the super to `Context.started()`
sooner (when the container is actually registered as "up") and thus
unblock its (remote) caller faster whilst still doing full log msg
proxying!

Deatz:
- adds `Container.cuid: str` a unique container id for logging.
- correctly proxy through the `loglevel: str` from `pikerd` caller task.
- shield around `Container.cancel()` in the teardown block and use
  cancel level logging in that method.
2023-03-09 15:37:42 -05:00
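In pseudo-`trio` terms the new supervision flow looks roughly like the following (illustrative names; the container object here is a stand-in, not docker SDK calls):

```python
import trio


async def proxy_logs(cntr) -> None:
    # background task: stream + re-log container output for the
    # lifetime of the supervisor
    async for line in cntr.logs():
        print(line)


async def poll_until_up(cntr, period: float = 0.001) -> None:
    # parent task: pseudo-async poll until the container registers as up
    while not await cntr.is_running():
        await trio.sleep(period)


async def supervise(cntr) -> None:
    async with trio.open_nursery() as tn:
        tn.start_soon(proxy_logs, cntr)
        await poll_until_up(cntr)
        # only now is the container actually "up"; this is where the
        # super would `Context.started()` back to its remote caller
        await trio.sleep_forever()
```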
Tyler Goodlet b078a06621 Doc string and types bump in loggin mod 2023-03-09 15:37:42 -05:00
Tyler Goodlet 05b67c27d0 Apply `Services` runtime state **immediately** inside starup block 2023-03-09 15:37:42 -05:00
Tyler Goodlet 8c66f066bd Deliver es specific ahab-super in endpoint startup config 2023-03-09 15:37:42 -05:00
Tyler Goodlet 959e423849 Add warning around detach flag to docker client 2023-03-09 15:37:42 -05:00
Tyler Goodlet 7b196b1b97 Support startup-config overrides to `ahabd` super
With the addition of new `elasticsearch` docker support in
https://github.com/pikers/piker/pull/464, adjustments were made
to container startup sync logic (particularly the `trio` checkpoint
sleep period - which itself is a hack around a sync client api) which
caused a regression in upstream startup logic wherein container error
logs were not being bubbled up correctly causing a silent failure mode:

- `marketstore` container started with corrupt input config
- `ahabd` super code timed out on startup phase due to a larger log
  polling period, skipped processing startup logs from the container,
  and continued on as though the container was started
- history client fails on grpc connection with no clear error on why the
  connection failed.

Here we revert to the old poll period (1ms) to avoid any more silent
failures and further extend supervisor control through a configuration
override mechanism. To address the underlying design issue, this patch
adds support for container-endpoint-callbacks to override supervisor
startup configuration parameters via the 2nd value in their returned
tuple: the already delivered configuration `dict` value.

The current exposed values include:
    {
        'startup_timeout': 1.0,
        'startup_query_period': 0.001,
        'log_msg_key': 'msg',
    },

This allows for container specific control over the startup-sync query
period (the hack mentioned above) as well as the expected log msg key
and of course the startup timeout.
2023-03-09 15:37:42 -05:00
Tyler Goodlet fe0695fb7b First draft storage layer cli
Adds a `piker storage` subcmd with a `-d` flag to wipe a particular
fqsn's time series (both 1s and 60s). Obviously this needs to be
extended much more but provides a start point.
2023-03-09 15:37:42 -05:00
jaredgoldman dae8e59d26
Merge pull request #484 from pikers/pps_precision_hotfixes
Pps precision hotfixes
2023-03-08 19:50:09 -05:00
Tyler Goodlet aba238e8b1 `kraken`: expect `Pair` in search results.. 2023-03-08 17:22:34 -05:00
Tyler Goodlet d3192bb8c2 Read `Symbol` tick precision fields when no entry in `.broker_info` 2023-03-08 17:22:34 -05:00
goodboy 6cd18576aa
Merge pull request #474 from pikers/xdo_and_you
`ib`: restore and (maybe) use `xdotool` + `i3ipc` reset method
2023-03-03 17:42:29 -05:00
Tyler Goodlet daa6a5c80a `ib`: restore and (maybe) use `xdotool` + `i3ipc` reset method
Since apparently the container we were using is totally borked on new
kernels and/or latest jvm, this moves our old manual local-X-desktop script
back for use in `brokerd` backend code.

Adds a new `.brokers.ib._util` which contains the 2 methods and fails
over to this one when we can't connect to a VNC server. Also adjusts the
original in `scripts/ib_data_reset.py` to import and run the module code
as a script-program.
2023-03-03 17:37:26 -05:00
goodboy 201f86e482
Merge pull request #470 from pikers/decimalization_take_2
Fixed float dust bug on zero position
2023-03-03 17:34:36 -05:00
Guillermo Rodriguez d4ac8972ac
Merge pull request #477 from pikers/backward_compat_trans_with_symbolinfo
Backward compat support for `Transaction.sym: Symbol`
2023-03-02 23:19:55 -03:00
Tyler Goodlet b4a1cc8f22 `kraken`: parse and load info `Transaction.sym: Symbol`
Also includes a retyping of `Client._pair: dict[str, Pair]` to look up
pair structs and map all alt-key-name-sets to each for easy precision
info lookup to set the `.sym` field for each transaction including for
on-chain transfers which kraken provides as an "asset decimals" field,
presumably pulled from the particular block-token's limitation info.
2023-03-02 19:25:43 -05:00
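
Roughly, the alt-key-name-set mapping might look like the following sketch (field names taken from kraken's public `AssetPairs` schema; the `Pair` struct here is simplified):

    from dataclasses import dataclass

    @dataclass
    class Pair:
        altname: str
        wsname: str
        pair_decimals: int   # price precision
        lot_decimals: int    # size precision

    def index_pairs(resp: dict[str, Pair]) -> dict[str, Pair]:
        # map every alternate symbol key to the *same* struct so precision
        # info can be found from whatever key a transaction record uses.
        _pair: dict[str, Pair] = {}
        for key, pair in resp.items():
            for alt in (key, pair.altname, pair.wsname):
                _pair[alt] = pair
        return _pair
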
Tyler Goodlet 69b85aa7e5 `ib`: parse and load info for new `Transaction.sym: Symbol` field 2023-03-02 19:23:47 -05:00
Tyler Goodlet 3a4794e9d1 Backward-compat: don't require `'lot_tick_size'`
In order to support existing `pps.toml` files in the wild which don't
have the `asset_type, price_tick_size, lot_tick_size` fields, we need to
only optionally read them and instead expect that backends will write
the fields going forward (coming in follow patches).

Further this makes some small asset-size (vlm accounting) quantization
related adjustments:
- rename `Symbol.decimal_quant()` -> `.quantize_size()` since that is
  explicitly what this method is doing.
- and expect an input `size: float` which we cast to decimal instead of
  doing it inside the `.calc_size()` caller code.
- drop `Symbol.iterfqsns()` which wasn't being used anywhere at all..

Additionally, this drafts out a new replacement market-trading-pair data
type to eventually replace `.data._source.Symbol` -> `MktPair` which we
aren't using yet, but serves as the documentation-driven motivator ;)
and, it relates to https://github.com/pikers/piker/issues/467.
2023-03-02 19:22:19 -05:00
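
The size quantization itself is just stdlib `Decimal` rounding; a minimal sketch (the default tick size here is made up):

    from decimal import Decimal

    def quantize_size(
        size: float,
        lot_tick_size: str = '0.00000001',  # hypothetical default
    ) -> Decimal:
        # cast the float size to Decimal and snap it to the venue's
        # lot tick size (the instrument's "size digits").
        return Decimal(str(size)).quantize(Decimal(lot_tick_size))

For example, `quantize_size(0.123456789, '0.0001')` gives `Decimal('0.1235')`.
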
Guillermo Rodriguez 6be96a96aa
Drop symbol section on Position serialization 2023-03-01 21:06:52 -03:00
Guillermo Rodriguez d704b153ca
Fix major bug found by fomo: sym info was getting stored incorrectly in pps.toml, causing the pp to load wrong on second open; also fix header leak bug 2023-03-01 21:06:52 -03:00
Guillermo Rodriguez 20d91f5e06
Good catch by j, unnecessary kwarg on open_pps 2023-03-01 21:06:52 -03:00
Guillermo Rodriguez 6c23c79f2a
Minor fixes after fomo's review 2023-03-01 21:06:52 -03:00
Guillermo Rodriguez f5b8b9a14f
Add sym registry to PaperBoi as well as a sym ref on Transaction
Add decimal quantize API to Symbol to simplify by-broker truncation
Add symbol info to `pps.toml`
Move _assert call to outside the _async_main context manager
Minor indentation and styling changes, also convert a few prints to log calls
Fix multi write / race condition on open_pps call
Switch open_pps to not write by default
Fix integer math kraken syminfo _tick_size initialization
2023-03-01 21:06:48 -03:00
Guillermo Rodriguez dc78994dcf
Fixed float dust bug on zero position case 2023-03-01 21:05:37 -03:00
goodboy 269a04ba1a
Merge pull request #475 from pikers/explicit_write_pps_on_exit
Explicitly write `pps.toml` on exit for `ib` and `kraken`
2023-03-01 17:47:57 -05:00
Tyler Goodlet 569df45d18 `kraken.`: drop trade history query limit 2023-03-01 17:40:36 -05:00
Tyler Goodlet f53f4df583 `ib/kraken`: adjust to new default of not-writing in `open_pps()` 2023-03-01 17:40:33 -05:00
jaredgoldman d04fe366ab
Merge pull request #462 from pikers/paper_trade_improvements_rebase
Paper trade improvements
2023-02-28 14:30:20 -05:00
jaredgoldman c83fe5aaa7 Fix typo in test docstring 2023-02-28 14:22:24 -05:00
jaredgoldman 41f81eb701 Make write on exit default false 2023-02-28 14:14:05 -05:00
jaredgoldman 05fdc9dd60 Add xfail 2023-02-28 13:55:12 -05:00
jaredgoldman 1323981cc4 Format lines in conftest
Add extra line in conftest
2023-02-28 13:52:12 -05:00
jaredgoldman 882032e3a3 Change skip to xfail 2023-02-28 13:52:03 -05:00
jaredgoldman a6257ae615 Add docstrings to test cases,
format function calls
2023-02-28 13:52:03 -05:00
jaredgoldman 973c068e96 Assert conditions like a nerd 2023-02-28 13:52:03 -05:00
jaredgoldman d7317c3710 Shorten assertion docstring 2023-02-28 13:52:03 -05:00
jaredgoldman 87eb9c5772 Format assertion conditions 2023-02-28 13:51:47 -05:00
jaredgoldman ecb22dda1a Remove whitespace, remove stale comments 2023-02-28 13:51:47 -05:00
jaredgoldman 6f15d47012 Add space in docstrings,
remove duplicate import
2023-02-28 13:51:47 -05:00
jaredgoldman 802af306ac Add specific location of _testing dir in delete_testing_dir fixture 2023-02-28 13:51:47 -05:00
jaredgoldman e4e368923d Add specific kwarg key to open_pps call when starting paperboi 2023-02-28 13:51:47 -05:00
jaredgoldman 342aec648b Skip zero test and use Path when creating a config folder in marketstore 2023-02-28 13:51:47 -05:00
jaredgoldman 55253c8469 Remove whitespace and correct typo 2023-02-28 13:51:47 -05:00
jaredgoldman 4b72d3ba99 Add backpressure setting back as it wasn't altering test behaviour 2023-02-28 13:51:47 -05:00
jaredgoldman 61296bbdfc Minor formatting, removing whitespace 2023-02-28 13:51:47 -05:00
jaredgoldman 36f466fff8 Ensure tests are running and working up until asserting pps 2023-02-28 13:51:47 -05:00
Guillermo Rodriguez 26146097eb
Merge pull request #469 from pikers/emsd_loglevel_fix
Fix `loglevel` not getting propagated to `emsd`
2023-02-26 00:49:43 -03:00
jaredgoldman fcd8b8eb78 Remove breaking call to load pps from ledger 2023-02-25 18:59:40 -05:00
jaredgoldman 3e83764b5b Remove whitespace, unneeded comments 2023-02-25 18:59:40 -05:00
jaredgoldman 3a6fbabaf8 Minor formatting 2023-02-25 18:59:40 -05:00
jaredgoldman 85ad23a1e9 Remove unneeded assert_precision arg 2023-02-25 18:59:40 -05:00
jaredgoldman 15525c2b46 Add functionality and tests for executing multiple orders 2023-02-25 18:59:40 -05:00
jaredgoldman 76736a5441 Refactor to avoid global state while testing 2023-02-25 18:59:40 -05:00
jaredgoldman 4c2e776e01 Ensure to cleanup by passing fixture in paper_test signature 2023-02-25 18:59:40 -05:00
jaredgoldman 1e748f11ef Ensure config path is being updated with _testing correctly during testing 2023-02-25 18:59:40 -05:00
jaredgoldman 3fcad16298 Ensure not to write to pps when asserting? 2023-02-25 18:59:40 -05:00
jaredgoldman 2d25d1f048 Push failing assert no pps test 2023-02-25 18:59:40 -05:00
jaredgoldman e54d928405 Reformat fake fill in paper engine,
Ensure tests pass, refactor test wrapper
2023-02-25 18:59:40 -05:00
jaredgoldman c99381216d Ensure actual pp is sent to ems
ensure not to write pp header on startup

Comment out pytest settings
Add comments explaining delete_testing_dir fixture
use nonlocal instead of global for test state

Add unpacking get_fqsn method
Format test_paper
Add comments explaining sync/async book.send calls
2023-02-25 18:59:40 -05:00
algorandpa db2e2ed78f Use constants value for test config dir path 2023-02-25 18:59:39 -05:00
algorandpa 3bc54e308f Use Path.mkdir instead of os.mkdir 2023-02-25 18:59:39 -05:00
algorandpa 8c9c165e0a Remove broken import 2023-02-25 18:59:39 -05:00
algorandpa 7bd8019876 Add back cleanup fixture 2023-02-25 18:59:39 -05:00
algorandpa 8122e6c86f Disable cleanup to see if CI passes 2023-02-25 18:59:39 -05:00
algorandpa 7e87dc52eb Scope fixture to session 2023-02-25 18:59:39 -05:00
algorandpa 2c366d7349 Fix type 2023-02-25 18:59:39 -05:00
algorandpa 9acbfacd4c only clean up if _testing file exists 2023-02-25 18:59:39 -05:00
algorandpa 316ead577d Remove scoping 2023-02-25 18:59:39 -05:00
algorandpa 4b6d3fe138 Scope cleanup fixture to module 2023-02-25 18:59:39 -05:00
algorandpa 0dec2b9c89 Enable backpressure during data-feed layer startup to avoid overruns 2023-02-25 18:59:39 -05:00
algorandpa acc86ae6db more formatting 2023-02-25 18:59:39 -05:00
algorandpa 730906a072 Minor formatting 2023-02-25 18:59:39 -05:00
algorandpa e5cefeb44b Format to prep for PR 2023-02-25 18:59:39 -05:00
algorandpa 7142a6a7ca Add hacky cleanup solution for _testing data 2023-02-25 18:59:39 -05:00
algorandpa dff8abd6ad Minor reformatting 2023-02-25 18:59:39 -05:00
algorandpa b180602a3e Make config grab _testing dir in pytest env,
- Remove print statements
2023-02-25 18:59:39 -05:00
algorandpa 95b9dacb7a Break test into steps 2023-02-25 18:59:39 -05:00
algorandpa df868cec35 Assert that trades persist in ems after teardown and startup 2023-02-25 18:59:39 -05:00
algorandpa 68a196218b force change branch name 2023-02-25 18:59:39 -05:00
algorandpa 84cd1e0059 initial commit on copy 2023-02-25 18:59:39 -05:00
algorandpa 86b4386522 minor changes, prepare for rebase of overlays branch 2023-02-25 18:59:39 -05:00
algorandpa 5bb93ccc5f change id to 'piker-paper' 2023-02-25 18:59:39 -05:00
algorandpa 3028a8b1f8 restore spacing 2023-02-25 18:59:39 -05:00
algorandpa 6126c4f438 restore spacing 2023-02-25 18:59:39 -05:00
algorandpa 41bb0445e0 remove unnecessary return 2023-02-25 18:59:39 -05:00
algorandpa 97627a4976 remove more logs 2023-02-25 18:59:39 -05:00
algorandpa 1b2fce430f remove logs, unused args 2023-02-25 18:59:39 -05:00
algorandpa 8cd2354d73 ensure that paper pps are pulled on open 2023-02-25 18:59:39 -05:00
algorandpa 9c28d7086e Add Generator as return type of open_trade_ledger 2023-02-25 18:59:39 -05:00
algorandpa a4bd51a01b change open_trade_ledger typing to return a Generator type 2023-02-25 18:59:39 -05:00
algorandpa b67d020e23 add basic func to load paper_trades file 2023-02-25 18:59:39 -05:00
Guillermo Rodriguez 85a1b858b4
Fix logging on emsd 2023-02-25 20:56:25 -03:00
Guillermo Rodriguez 47bf45f30e
Merge pull request #464 from pikers/elasticsearch_integration
Elasticsearch integration
2023-02-24 16:38:37 -03:00
Esmeralda Gallardo b96e2c314a
Minor style changes and removed unnecessary comments 2023-02-24 15:11:15 -03:00
Esmeralda Gallardo f96d6a04b6
Fixed UnboundLocalError on _ahab. Added test for marketstore's initialization 2023-02-22 13:28:07 -03:00
Guillermo Rodriguez acc6249d88
Remove unnecessary arguments to some pikerd functions, fix container init error
by switching from log reading to querying the es health endpoint, fix install on CI
and add more logging.
2023-02-21 20:45:10 -03:00
jaredgoldman 82174d01c5
Merge pull request #465 from pikers/loglevel_to_testpikerd
`loglevel` to `open_test_pikerd()` via `--ll <level>` flag
2023-02-21 12:34:55 -05:00
Tyler Goodlet 0b678c97f4 Pass `loglevel: str` cli value through to service tests 2023-02-21 12:02:26 -05:00
Tyler Goodlet d0d1554d74 Expose `emsd` task loglevel through to clients 2023-02-21 12:02:01 -05:00
Esmeralda Gallardo 4122c482ba
Added new tests for elasticsearch's and marketstore's initialization and stop 2023-02-21 13:34:29 -03:00
Esmeralda Gallardo b5cdf14036
Modified elasticsearch file name to 'elastic' to avoid name errors. Applied changes suggested in the PR. 2023-02-21 13:34:29 -03:00
Esmeralda Gallardo 3ce8bfa012
Moved database initialization code inside the open_pikerd context manager 2023-02-21 13:34:29 -03:00
Guillermo Rodriguez bf9ca4a4a8
Generalize ahab to support elasticsearch logs and init procedure 2023-02-21 13:34:29 -03:00
Guillermo Rodriguez 17a4fe4b2f
Trim unnecessary stuff left over from the marketstore copy, also fix elastic config name for docker build, add elasticsearch to dependencies 2023-02-21 13:34:28 -03:00
Esmeralda Gallardo 0dc24bd475
Added dockerfile, yaml file and script to start an elasticsearch docker instance. 2023-02-21 13:34:26 -03:00
Tyler Goodlet b3400f0d9c Add `loglevel: str` fixture, passthrough to `open_test_pikerd()` 2023-02-21 10:54:18 -05:00
Tyler Goodlet 2bad692703 Fix up some test warnings (summary) spots 2023-02-21 10:54:18 -05:00
Tyler Goodlet cd3e9b1b2a Move quest fixtures to test mod, clean out old travis fixture 2023-02-21 10:54:18 -05:00
Tyler Goodlet e01220af14 Type annot tweaks to feeds mod 2023-02-21 10:54:18 -05:00
goodboy bfc0220a47
Merge pull request #456 from pikers/nix-env
NixOS development environment
2023-02-16 14:59:48 -05:00
goodboy 139b8ba0f4
Merge pull request #453 from pikers/overlays_interaction_latency_tuning
Overlays interaction latency tuning
2023-02-14 13:48:12 -05:00
Guillermo Rodriguez 71b2f24a2e
Merge pull request #460 from pikers/fnf_notify-send
Fix crash on notification daemon not found
2023-02-13 18:22:27 -03:00
Guillermo Rodriguez ffd707db62
Add try/except for when notify-send is not present on the system 2023-02-13 18:08:56 -03:00
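
Something along these lines (a sketch using stdlib `subprocess`; the actual code may well go through `trio`'s process API instead):

    import subprocess

    def notify(title: str, body: str) -> None:
        try:
            subprocess.run(
                ['notify-send', '-u', 'normal', title, body],
                check=False,
            )
        except FileNotFoundError:
            # no notify-send binary on this system; don't crash the chart
            print('notify-send not found, skipping desktop notification')
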
Tyler Goodlet fefb0de51f Don't update overlays as fsps 2023-02-13 12:27:58 -05:00
Tyler Goodlet 59f34c94b0 Return fast on bad range in `.default_view()` 2023-02-13 12:27:58 -05:00
Tyler Goodlet ebf53e32bd Fix return type annot for `slice_from_time()` 2023-02-13 12:27:58 -05:00
Tyler Goodlet 9ce52033f0 Fix `do_px_step` output for epoch step sizing 2023-02-13 12:27:58 -05:00
Tyler Goodlet 9876f200c1 Support chart draw-api-kwargs-passthrough in lined plot meths 2023-02-13 12:27:58 -05:00
Tyler Goodlet 81b8cd5461 Use normal pen when last-datum color not provided 2023-02-13 12:27:58 -05:00
Tyler Goodlet 731eb91a58 Make profiler work when nested and not? 2023-02-13 12:27:58 -05:00
Tyler Goodlet 49ca743e6a Add back `.prepareGeometryChange()`, seems faster? 2023-02-13 12:27:58 -05:00
Tyler Goodlet a36d4b1dc6 Factor color and cache mode settings into `FlowGraphics`
Curve-path colouring and cache mode settings are used (and can thus be
factored out of) all child types; this moves them into the parent type's
`.__init__()` and adjusts all sub-types match:

- the bulk was moved out of the `Curve.__init__()` including all
  previous commentary around cache settings.
- adjust `BarItems` to use a `NoCache` mode and use the
  `last_step_pen: pg.Pen` and `._pen` inside its `.paint()` instead of
  defining functionally duplicate vars.
- adjust all (transitive) calls to `BarItems` to use the new kwargs
  names.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 33df4f9927 Return `in_view: bool` from `Viz.update_graphics()`
Allows callers to know if they should care about a particular viz
rendering call by immediately knowing if the graphics are in view. This
turns out super useful particularly when doing dynamic y-ranging overlay
calcs.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 72a9af21ac Fix profiler f-strings 2023-02-13 12:27:58 -05:00
Tyler Goodlet 1a10514cad Disable coordinate caching on OHLC ds curves to avoid smearing 2023-02-13 12:27:58 -05:00
Tyler Goodlet 5d9b7c72b3 Fix `Viz.draw_last()` to divide by `.flat_index_ratio` for uppx index lookback 2023-02-13 12:27:58 -05:00
Tyler Goodlet efddd43760 Drop masked `._maxmin()` override code from fsp stuff 2023-02-13 12:27:58 -05:00
Tyler Goodlet 1606b3a9c3 Document `Viz.incr_info()` outputs 2023-02-13 12:27:58 -05:00
Tyler Goodlet 8b5b1c214b Rework display loop maxmin-ing with `Viz` pipelining
First, we rename what was `chart_maxmin()` -> `multi_maxmin()` and don't
`partial` it in to the `DisplayState`, just call it with correct `Viz`
ref inputs.

Second, as we've done with `ChartView.maybe_downsample_graphics()` use
the output from the main `Viz.update_graphics()` and feed it to the
`.maxmin()` calls for the ohlc and vlm chart but still deliver the same
output signature as prior. Also accept and use an optional profiler
input, drop `DisplayState.maxmin()` and add `.vlm_viz`.

Further perf related tweak to do with more efficient incremental
updates:
- only call `multi_maxmin()` if the main fast chart viz does a pixel
  column step.
- mask out hist viz and vlm viz and all linked fsp `._set_yrange()`
  calls for now until we figure out how to best optimize these updates
  when considering the new group-scaled-by-% style for multicharts.
- drop `.enable_auto_yrange()` calls during startup.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 9780263cfa Adjust vlm fsp code to new `Viz.update_graphics()` output sig 2023-02-13 12:27:58 -05:00
Tyler Goodlet e1e3afb495 Support read-slice input to `Viz.maxmin()`
Acts as short cut when pipe-lining from `Viz.update_graphics()` (which
now returns the needed in-view array-relative-read-slice as output) such
that `Viz.read()` and `.datums_range()` don't need to be called
internally multiple times. In this case where `i_read_range` is provided
we of course skip doing time index translations and consequently look up
the appropriate (epoch-time) indices for caching.
2023-02-13 12:27:58 -05:00
Tyler Goodlet f9eb880404 Backlink subchart views to "main chart" in `.add_plot()` 2023-02-13 12:27:58 -05:00
Tyler Goodlet a3bbbeda9d Drop `ChartView._maxmin()` usage in `.ui._fsp`
Removes the multi-maxmin usage as well as ensures appropriate `Viz` refs
are passed into the view methods now requiring it. Also drops the "back
linking" of the vlm chart view to the source OHLC chart since we're
going to add this as a default to the charting API.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 3ad7844fdf Drop `ChartView._maxmin()` idea, use `Viz.maxmin()`
The max/min for a given data range is defined at the lowest level
through the `Viz` api; intermingling it with the view is a layering
issue. Instead make `._set_yrange()` call the appropriate view's viz
(since they should be one-to-one) directly and thus avoid any callback
monkey patching nonsense.

Requires that we now make `._set_yrange()` require either one of an
explicit `yrange: tuple[float, float]` min/max pair or the `Viz` ref (so
that maxmin can be called) as input. Adjust
`enable/disable_auto_yrange()` to bind in a new `._yranger()` partial
that's (solely) needed for signal reg/unreg which binds in the now
required input `Viz` to these methods.

Comment the `autoscale_overlays` block in `.maybe_downsample_graphics()`
for now until we figure out the most sane way to auto-range all linked
overlays and subplots (with their own overlays).
2023-02-13 12:27:58 -05:00
Tyler Goodlet b71c61e23f More thoroughly profile the display loop 2023-02-13 12:27:58 -05:00
Tyler Goodlet 9650b32786 Use `Viz.draw_last()` inside `.update_graphics()`
In an effort to ensure uniform and uppx-optimized last datum graphics
updates call this method directly instead of the equivalent graphics
object thus ensuring we only update the last pixel column according with
the appropriate max/min computed from the last uppx's worth of data.

Fixes / improvements to enable `.draw_last()` usage include,
- change `Viz._render_table` -> `._alt_r: tuple[Renderer, pg.GraphicsItem] | None`
  which holds an alternative (usually downsampled) render and graphics
  obj.
- extend the `.draw_last()` signature to include:
  - `last_read` to allow passing in the already read data from
    `.update_graphics()`, if it isn't passed then a manual read is done
    internally.
  - `reset_cache: bool` which is passed through to the graphics obj.
- use the new `Formatter.flat_index_ratio: float` when indexing into xy
  1d data to compute the max/min for that px column.

Other,
- drop `bars_range` input from `maxmin()` since it's unused.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 433697cc4f Add cached refs to last 1d xy outputs
For the purposes of avoiding another full format call we can stash the
last rendered 1d xy pre-graphics formats as
`IncrementalFormatter.x/y_1d: np.ndarray`s and allow readers in the viz
and render machinery to use this data easily for things like "only
drawing the last uppx's worth of data as a line". Also add
a `.flat_index_ratio: float` which can be used similarly as a scalar
applied to indexes into the src array but instead when indexing
(flattened) 1d xy formatted outputs. Finally, this drops the way
overdone/noisy `.__repr__()` meth we had XD
2023-02-13 12:27:58 -05:00
Tyler Goodlet d622b4157c Only draw up to 2nd last datum for OHLC bars paths 2023-02-13 12:27:58 -05:00
Tyler Goodlet 1add591b2c Only update last datum graphic(s) on clear ticks
When a new tick comes in but no new time step / bar is yet needed (to be
appended) we can simply adjust **only** the last bar datum
lines-graphic(s) to avoid a redraw of the preceding `QPainterPath` on
every tick. Do this by calling `Viz.draw_last()` on the fast and slow
chart and adjusting the guards around calls to `Viz.update_graphics()`
(which *does* update paths) to only enter when there's a `do_px_step`
condition. We can stop calling `main_viz.plot.vb._set_yrange()` on view
treading cases since the range should have already been adjusted by the
clearing-tick processing mxmn updates.

Further this changes,
- the `chart_maxmin()` helper (which we should eventually just get rid
  of) to take bound in `Viz`s for the ohlc and vlm chart instead of the
  chart widget handles.
- extend the guard around hist viz yranging to only enter when not in
  "axis mode" - the same as for the fast viz.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 60440bc6b7 Ensure full hist OHLC path is drawn on tread
Since we removed the `Viz.update_graphics()` call from the main rt loop
we have to be sure to call it in the history chart incr-loop to avoid
a gap between the  last bar and prior history since startup. We only
need to update on tread since that should be the only time a full redraw
is ever necessary, ow only the last datum is needed.

Further this moves the graphics cycle func's profiler init to the top in
an effort to get more correct latency measures.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 4003729231 Use `Viz.update_graphics()` throughout remainder of graphics loop where possible 2023-02-13 12:27:58 -05:00
Tyler Goodlet 934b32c342 Use `Viz` over charts where possible in display loop
Since `ChartPlotWidget.update_graphics_from_flow()` is more or less just
a call to `Viz.update_graphics()` try to call that directly where
possible.

Changes include:
- calling the viz in the display state specific `maxmin()`.
- passing a viz instance to each `ChartView._set_yrange()` call (in prep
  of explicit group auto-ranging); not that this input is unused in the
  method for now.
- drop `bars_range` var passing since we don't use it.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 97bb3b48da Set a `PlotItem.viz` for interaction lookup
Inside `._interaction` routines we need access to `Viz` instances.
Instead of doing `ChartPlotWidget._vizs: dict` lookups this ensures each
plot can look up its (parent) viz without error.

Also, adjusts `Viz.maxmin()` output parsing to new signature.
2023-02-13 12:27:58 -05:00
Tyler Goodlet da618e1d38 Always cache `read_slc` alongside y-mnmx values 2023-02-13 12:27:58 -05:00
Tyler Goodlet 23c03a0905 Add back coord-caching to ohlc graphic 2023-02-13 12:27:58 -05:00
Tyler Goodlet 07c8ed8a3a Use (modern) literal type annots in view code 2023-02-13 12:27:58 -05:00
Tyler Goodlet bcf2a9868d Drop x-range query from `ChartPlotWidget.maxmin()`
Move the `Viz.datums_range()` call into `Viz.maxmin()` itself thus
minimizing the chart `.maxmin()` method to an ultra light wrapper around
the viz call. Also move all profiling into the `Viz` method.

Adjust `Viz.maxmin()` to return both the (rounded) x-range values which
correspond to the range containing the y-domain min and max so that
it can be used for up and coming overlay group maxmin calcs.
2023-02-13 12:27:58 -05:00
Tyler Goodlet c09c3925a4 Drop multi mxmn from display mod 2023-02-13 12:27:58 -05:00
Tyler Goodlet 92ce1b3304 Only handle hist discrepancies when market is open
We obviously don't want to be debugging a sample-index issue if/when the
market for the asset is closed (since we'll be guaranteed to have
a mismatch, lul). Pass in the `feed_is_live: trio.Event` throughout the
backfilling routines to allow first checking for the live feed being active
so as to avoid breakpointing on false +ves. Also, add a detailed warning
log message for when *actually* investigating a mismatch.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 0fc06a98d4 Passthrough `tractor` kwargs directly 2023-02-13 12:27:58 -05:00
Tyler Goodlet 4ba99494f0 Fix `open_trade_ledger()` enter value type annot 2023-02-13 12:27:58 -05:00
Tyler Goodlet a8e1796a8b Comment bad x-range bp for now 2023-02-13 12:27:58 -05:00
Tyler Goodlet 5ced05aab0 Breakpoint bad (-ve or too large) x-ranges to m4
This should never really happen but when it does it appears to be a race
with writing startup pre-graphics-formatter array data where we get
`x_end` epoch value subtracting some really small offset value (like
`-/+0.5`) or the opposite where the `x_start` is epoch and `x_end` is
small.

This adds a warning msg and `breakpoint()` as well as guards around the
entire downsampling code path so that when resumed the downsampling
cycle should just be skipped and avoid a crash.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 4a6339ffc2 Downthrottle to 16Hz on multi-feed charts 2023-02-13 12:27:58 -05:00
Tyler Goodlet efa4089920 Attempt to keep selected item highlighted
This attempt was unsuccessful since trying to (re)select the last
highlighted item on both an "enter" or "click" of that item causes
a hang and then segfault in `Qt`; no clue why..

Adds a `keep_current_item_selected: bool` flag to
`CompleterView.show_cache_entries()` but using it seems to always cause
a hang and crash; we keep all potential use spots commented for now
obviously to avoid this. Also included is a bunch of tidying to logic
blocks in the kb-control loop for readability.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 35cc37ddc1 Lol, pull hist chart from the display state 2023-02-13 12:27:58 -05:00
Tyler Goodlet 5ea4be1d4b Make (cache) search-results a `set` and avoid overlay duplicate entries 2023-02-13 12:27:58 -05:00
Tyler Goodlet 0c5b5a5aea Take outer-interval values in `Viz.datums_range()` 2023-02-13 12:27:58 -05:00
Tyler Goodlet 4027d683e9 Clean a buncha cruft from render mod 2023-02-13 12:27:58 -05:00
Tyler Goodlet 7afc9301ac Handle last-in-view time slicing edge case
Whenever the last datum is in view `slice_from_time()` needs to always
spec the final array index (i.e. the len - 1 value we set as
`read_i_max`) to avoid a uniform-step arithmetic error where gaps in the
underlying time series causes an index that's too low to be returned.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 12c6d58c2a Drop bp blocks from formatters mod 2023-02-13 12:27:58 -05:00
Tyler Goodlet c5db7295e6 Fix query-mode cursor labels to work with epoch-indexing 2023-02-13 12:27:58 -05:00
Tyler Goodlet 02c3ea1743 Use `open_sample_stream()` in display loop 2023-02-13 12:27:58 -05:00
Tyler Goodlet 63f0567418 Drop `Flume.index_stream()`, `._sampling.open_sample_stream()` replaces it 2023-02-13 12:27:58 -05:00
Tyler Goodlet 3e17e52555 Add back another panes resize during startup 2023-02-13 12:27:58 -05:00
Tyler Goodlet 65dca16dc0 Always zero-on-step $vlm 2023-02-13 12:27:58 -05:00
Tyler Goodlet e742d18a6c Mouse interaction tweaks
- adjust zoom focal to be min of the view-right coord or the right-most
  point on the flow graphic in view and drop all the legacy l1-in-view
  focal point cruft.
- flip to not auto-scaling overlays by default.
- change the `._set_yrange()` margin to `0.09`.
- drop `use_vr: bool` usage.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 7e29c36a24 Modernize optional path variable type annots 2023-02-13 12:27:58 -05:00
Tyler Goodlet 4d2b5c8f86 Use `Curve.x_last()` for zoom focal point 2023-02-13 12:27:58 -05:00
Tyler Goodlet fe932a96a9 Make `PlotItemOverlay` add items inwards->out
Before this, axes were being stacked from the outside in (for `'right'`
and `'bottom'` axes) which is somewhat non-intuitive for an `.append()`
operation. As such this change makes a symbol list stack a set of
`'right'` axes from left-to-right.

Details:
- rename `ComposeGridLayout.items` -> `.pitems`
- return `(int, list[AxisItem])` pairs from `.insert/append_plotitem()`
  and the down stream `PlotItemOverlay.add_plotitem()`.
- drop `PlotItemOverlay.overlays` and add it back as `@property` around
  the underlying `.layout.pitems`.
2023-02-13 12:27:58 -05:00
Tyler Goodlet c1b7063e3c Drop the legacy `relayed_from` cruft from our view box 2023-02-13 12:27:58 -05:00
goodboy 42d2f9e461
Merge pull request #452 from pikers/l1_compaction
Compact L1 labels
2023-02-13 11:21:26 -05:00
goodboy 31fc2d73ce
Merge pull request #459 from pikers/kraken_deposits_fixes
`kraken`: make pps work with arbitrary deposits
2023-02-12 16:17:23 -05:00
Tyler Goodlet 1346c33f04 `kraken`: make pps work with arbitrary deposits
Factor and fix dst <- src pair parsing into a new func
`get_likely_pair()` and use throughout initial position loading; solves
a parsing bug for src asset balances which aren't only 3 chars long..
a terrible assumption.
2023-02-12 15:52:48 -05:00
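
A sketch of the likely-pair matching (the signature and naming here are assumptions): instead of slicing a fixed 3 chars for the src asset, search for the src symbol inside the pair key and check the remaining prefix against the dst asset:

    def get_likely_pair(
        src: str,        # source/settlement asset, eg. 'USD' or 'ZUSD'
        dst: str,        # destination asset of the position
        pair_key: str,   # backend pair key, eg. 'XXBTZUSD'
    ) -> str | None:
        try:
            src_start = pair_key.rindex(src)
        except ValueError:
            return None
        likely_dst = pair_key[:src_start]
        if likely_dst == dst:
            return pair_key
        return None
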
Tyler Goodlet cee6321a9f Do full marker width after line 2023-02-12 15:38:43 -05:00
Tyler Goodlet 1abed2ad9e Fix indent level 2023-02-12 15:38:43 -05:00
Tyler Goodlet 5bd6fa3cbf Make $vlm axis color same as clears 2023-02-12 15:38:43 -05:00
Tyler Goodlet a82911d8a9 Correctly load order mode for first fqsn in overlay set 2023-02-12 15:38:43 -05:00
Tyler Goodlet dc88364253 Move $vlm y-axis to LHS 2023-02-12 15:38:43 -05:00
Tyler Goodlet 4c51a68691 Better index step value scanning by checking with our expected set 2023-02-12 15:38:43 -05:00
Tyler Goodlet 42d3537516 Repair auto-y-ranging to always include L1 spread
Goes back to always adjusting the y-axis range to include the L1 spread
and clearing label in view whenever the last datum is also in view,
previously this was broken after reworking the display loop for
multi-feeds.

Drops a bunch of old commented tick looping cruft from before we started
using tick-type framing. Also adds more stringent guards for ignoring
but error logging quote values that are more than 25% out of range; it
seems particularly our `ib` feed has some issues with strange `price`
values that are way off here and there?
2023-02-12 15:38:43 -05:00
Tyler Goodlet 3fd394d693 Use static `L1Label._x_br_offset` as l1 label length 2023-02-12 15:38:43 -05:00
Tyler Goodlet a7a08aced9 Drop l1 labels attr from chart widget 2023-02-12 15:38:43 -05:00
Tyler Goodlet 1d83fdb510 Handle empty `indexes` input edge case.. 2023-02-12 15:38:43 -05:00
Tyler Goodlet 924fcca463 TOSQUASH: 84f19308 (l1 rework) 2023-02-12 15:38:43 -05:00
Tyler Goodlet 26f497e2bb Set cursor label color to "bracket" 2023-02-12 15:38:43 -05:00
Tyler Goodlet e37e118a7e Don't set y-axis label colors to curve's, use the default from global scheme 2023-02-12 15:38:43 -05:00
Tyler Goodlet b2bb7f4923 Simplify L1 labels for multicharts
Instead of having the l1 lines be inside the view space, move them to be
inside their respective axis (with only a 16 unit portion inside the
view) such that the clear price label can overlay with them nicely
without obscuring; this is much better suited to multiple adjacent
y-axes and in general is simpler and less noisy.

Further `L1Labels` + `LevelLabel` style tweaks:
- adjust `.rect` positioning to be "right" (i.e. inside the parent
  y-axis) with a slight 16 unit shift toward the viewbox (using the new
  `._x_br_offset`) to allow seeing each level label's line even when the
  clearing price label is positioned at that same level.
- add a newline's worth of vertical space to each of the bid/ask labels
  so that L1 labels' text content isn't ever obscured by the clear price
  label.
- set a low (10) z-value to ensure l1 labels are always placed
  underneath the clear price label.
- always fill the label rect with the chosen background color.
- make labels fully opaque so as to always make them hide the parent
  axes' `.tickStrings()` contents.
- make default color the "default" from the global scheme.
- drop the "price" part from the l1 label text contents, just show the
  book-queue's amount (in dst asset's units, aka the potential clearing vlm).
2023-02-12 15:38:43 -05:00
Tyler Goodlet 97b03bbfbb Move old label sizing cruft to label mod 2023-02-12 15:38:43 -05:00
goodboy d690ad2bab
Merge pull request #451 from pikers/epoch_indexing_and_dataviz_layer
Epoch indexing and dataviz layer
2023-02-12 14:27:43 -05:00
Guillermo Rodriguez 0f082ed9d4
Merge pull request #458 from pikers/missing_protobuf
Add missing protobuf dependency
2023-02-12 16:19:31 -03:00
Guillermo Rodriguez 2851a0ecc5
Add missing protobuf dependency 2023-02-12 16:07:42 -03:00
Tyler Goodlet 340045af77 Make `FlowGraphic.x_last()` be optionally `None`
In the case where the last-datum-graphic hasn't been created yet, simply
return a `None` from this method so the caller can choose to ignore the
output. Further, drop `.px_width()` since it makes more sense defined on
`Viz` as well as the previously commented `BarItems.x_uppx()` method.
Also, don't round the `.x_uppx()` output since it can then be used when
< 1 to do x-domain scaling during high zoom usage.
2023-02-12 13:55:26 -05:00
Tyler Goodlet c1988c4d8d Add a parent-type for graphics: `FlowGraphic`
Factor some common methods into the parent type:
- `.x_uppx()` for reading the horizontal units-per-pixel.
- `.x_last()` for reading the "closest to y-axis" last datum coordinate
  for zooming "around" during mouse interaction.
- `.px_width()` for computing the max width of any curve in view in
  pixels.

Adjust all previous derived `pg.GraphicsObject` child types to now
inherit from this new parent and in particular enable proper `.x_uppx()`
support to `BarItems`.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 6a0c36922e Drop `._index_step` from formatters and instead defer to `Viz.index_step()` 2023-02-12 13:55:26 -05:00
Tyler Goodlet 459cbfdbad Further fixes `Viz.default_view()` and `.index_step()`
Use proper uppx scaling when either scaling the data to the x-domain
index-range or when the uppx is < 1 (now that we support it) such that
both the fast and slow chart always appropriately scale and offset to
the y-axis with the last datum graphic just adjacent to the order line
arrow markers.

Further this fixes the `.index_step()` calc to use the "earliest" 16
values to compute the expected sample step diff since the last set often
contained gaps due to start up race conditions and generated
unexpected/incorrect output.

Further this drops the `.curve_width_pxs()` method and replaces it with
`.px_width()`, taken from the graphics object API and instead returns
the pixel account for the whole view width instead of the
x-domain-data-range within the view.
2023-02-12 13:55:26 -05:00
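
The step calc boils down to scanning a small sample of timestamp diffs; a rough sketch of the idea (not the exact impl):

    import numpy as np

    def index_step(times: np.ndarray, sample_count: int = 16) -> float:
        # use the *earliest* handful of samples since the most recent
        # ones may contain startup-race gaps.
        diffs = np.diff(times[:sample_count])
        # take the most frequent diff as the expected (uniform) step
        vals, counts = np.unique(diffs, return_counts=True)
        return float(vals[np.argmax(counts)])
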
Tyler Goodlet fc17187ff4 Drop edge case from `slice_from_time()`
Doesn't seem like we really need to handle the situation where the start
or stop input time stamps are outside the index range of the data since
the new binary search handling via `numpy.searchsorted()` covers this
case at minimal runtime cost and with an equally correct output. Allows
us to drop some other indexing endpoint internal variables as well.
2023-02-12 13:55:26 -05:00
Tyler Goodlet a7d78a3f40 Use left-style index search on RHS scan as well 2023-02-12 13:55:26 -05:00
Tyler Goodlet 7ce3f10e73 Just-offset-from-arrow-marker on slow chart
We want the fast and slow chart to behave the same on calls to
`Viz.default_view()` so adjust the offset calc to make both work:
- just offset by the line len regardless of step / uppx
- add back the `should_line: bool` output from `render_bar_items()` (and
  use it to set a new `ds_allowed: bool` guard variable) so that we can
  bypass calling the m4 downsampler unless the bars have been switched
  to the interpolation line graphic (which we normally required before
  any downsampling of OHLC graphics data).

Further, this drops use of the `use_vr: bool` flag from all rendering
since we pretty much always use it by default.
2023-02-12 13:55:26 -05:00
Tyler Goodlet bfc6014ad3 Fix history array name 2023-02-12 13:55:26 -05:00
Tyler Goodlet a5eed8fc1e Fix x-axis labelling when using an epoch domain
Previously with array-int indexing we had to map the input x-domain
"indexes" passed to `DynamicDateAxis._indexes_to_timestr()`. In the
epoch-time indexing case we obviously don't need to lookup time stamps
from the underlying shm array and can instead just cast to `int` and
relay the values verbatim.

Further, this patch includes some style adjustments to `AxisLabel` to
better enable multi-feed chart overlays by avoiding L1 label clutter
when multiple y-axes are stacked adjacent:
- adjust the `Axis` typical max string to include a couple spaces suffix
 providing for a bit more margin between side-by-side y-axes.
- make the default label (fill) color the "default" from the global
 color scheme and drop its opacity to .9
- add some new label placement options and use them in the
 `.boundingRect()` method:
 * `._x/y_br_offset` for shifting the overall label relative
   to its parent axis.
 * `._y_txt_h_scaling` for increasing the bounding rect's height
   without including more whitespace in the label's text content.
- ensure labels have a high z-value such that by default they are always
 placed "on top" such that when we adjust the l1 labels they can be set
 to a lower value and thus never obscure the last-price label.
2023-02-12 13:55:26 -05:00
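
In the epoch case the tick-string conversion reduces to something like this sketch (the real code presumably formats via the chart's configured timezone/format):

    from datetime import datetime

    def indexes_to_timestrs(indexes: list[float]) -> list[str]:
        # the x-domain values *are* epoch timestamps: just cast and format
        return [
            datetime.fromtimestamp(int(t)).strftime('%H:%M:%S')
            for t in indexes
        ]
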
Tyler Goodlet cdec4782f0 Add commented append slice-len sanity check 2023-02-12 13:55:26 -05:00
Tyler Goodlet f30a48b82c Use `np.diff()` on last 16 samples instead of only last datum pair 2023-02-12 13:55:26 -05:00
Tyler Goodlet 98de22a740 Enable the experimental `QPrivatePath` functionality from latest `pyqtgraph` 2023-02-12 13:55:26 -05:00
Tyler Goodlet efbb8e86d4 Fix overlayed slow chart "treading"
Turns out we were updating the wrong ``Viz``/``DisplayState`` inside the
closure style `increment_history_view()` (probably due to looping
through the flumes and dynamically closing in that task-func).. Instead
define the history incrementer at module level and pass in the
`DisplayState` explicitly. Further rework the `DisplayState` attrs to be
more focused around the `Viz` associated with the fast and slow chart
and be sure to adjust output from each `Viz.incr_info()` call to latest
update. Oh, and just tweaked the line palette for the moment.

FYI "treading" here refers to the x-shifting of the curve when
the last datum is in view such that on new sampled appends the "last"
datum is kept in the same x-location in UI terms.
2023-02-12 13:55:26 -05:00
Tyler Goodlet b6521498f4 Make `.increment_view()` take in a `datums: int` and always scale it by sample step size 2023-02-12 13:55:26 -05:00
Tyler Goodlet 06f1b94147 Make `Viz.incr_info()` do treading with time-index, and appending with array-index 2023-02-12 13:55:26 -05:00
Tyler Goodlet ffb57f0256 Rename `reset` -> `reset_cache` 2023-02-12 13:55:26 -05:00
Tyler Goodlet ed1f64cf43 Fix gap detection on RHS; always bin-search on overshot time range 2023-02-12 13:55:26 -05:00
Tyler Goodlet bf8ea33697 Add type annots to vars inside `Render.render()` 2023-02-12 13:55:26 -05:00
Tyler Goodlet bc17308de7 Drop coordinate cacheing from `BarItems`, causes weird jitter on pan 2023-02-12 13:55:26 -05:00
Tyler Goodlet 1ece704d6e Add `ChartPlotWidget.main_viz: Viz` convenience `@property` 2023-02-12 13:55:26 -05:00
Tyler Goodlet dea1c1c2d6 Make `Viz.incr_info()` sample rate agnostic
Mainly it was the global (should we) increment logic that needs to be
independent for the fast vs. slow chart such that the slow isn't
update-shifted by the fast and vice versa. We do this using a new
`'i_last_slow'` key in the `DisplayState.globalz: dict` which is
singleton for each sample-rate-specific chart and works for both time
and array indexing.

Also, we drop some old commented `graphics.draw_last_datum()` code that
never ended up being needed again inside the coordinate cache reset
block.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 3300a240c6 Use array-`int`-indexing on single feed
Might as well since it makes the chart look less gappy and we can easily
flip the index switch now B)

Also adds a new `'i_slow_last'` key to `DisplayState` for a singleton
across all slow charts and thus no more need for special case logic in
`viz.incr_info()`.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 50ef4efccb Align step curves the same as OHLC bars 2023-02-12 13:55:26 -05:00
Tyler Goodlet 51f2461e8b Add `IncrementalFormatter.x_offset: np.ndarray`
Define the x-domain coords "offset" (determining the curve graphics
per-datum placement) for each formatter such that there's only on place
to change it when needed. Obviously each graphics type has it's own
dimensionality and this is reflected by the array shapes on each
subtype.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 444768d30f Adjust OHLC bar x-offsets to be time span matched
Previously we were drawing with the middle of the bar on each index with
arms to either side: +/- some arm length. Instead this changes so that
each bar is drawn *after* each index/timestamp such that in graphics
coords the bar span more correctly matches the time span in the
x-domain. This makes the linked region between slow and fast chart
directly match (without any transform) for epoch-time indexing such that
the last x-coord in view on the fast chart is no more than the
next time step in (downsampled) slow view.

Deats:
- adjust in `._pathops.path_arrays_from_ohlc()` and take an `bar_w` bar
  width input (normally taken from the data step size).
- change `.ui._ohlc.bar_from_ohlc_row()` and
  `BarItems.draw_last_datum()` to match.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 0d0675ac7e `Viz._index_field` a `typing.Literal[str]` 2023-02-12 13:55:26 -05:00
Tyler Goodlet 24b384f3ef Set `path_arrays_from_ohlc(use_time_index=True)` on epoch indexing
Allows easily switching between normal array `int` indexing and time
indexing by just flipping the `Viz._index_field: str`.

Also, guard all the x-data audit breakpoints with a time indexing
condition.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 93330954c2 Ugh, use `bool` flag to determine index field.. 2023-02-12 13:55:26 -05:00
Tyler Goodlet edf721f755 Make `LinearRegion` link using epoch-time index
Turned out to be super simple to get the first draft to work since the
fast and slow chart now use the same domain, however, it seems like
maybe there's an offset issue still where the fast may be a couple
minutes ahead of the slow?

Need to dig in a bit..
2023-02-12 13:55:26 -05:00
Tyler Goodlet 530b2731ba Add global `i_step` per overlay to `DisplayState`
Using a global "last index step" (via module var) obviously has problems
when working with multiple feed sets in a single global app instance:
any separate feed-set will be incremented according to an app-global
index-step and thus won't correctly calc per-feed-set-step update info.

Impl deatz:
- drop `DisplayState.incr_info()` (since previously moved to `Viz`) and
  call that method on each appropriate `Viz` instance where necessary;
  further ensure the appropriate `DisplayState` instance is passed in to
  each call and make sure to pass a `state: DisplayState`.
- add `DisplayState.hist_vars: dict` for history chart (sets) to
  determine the per-feed (not set) current slow chart (time) step.
- add `DisplayState.globalz: dict` to house a common per-feed-set state
  and use it inside the new `Viz.incr_info()` such that
  a `should_increment: bool` can be returned and used by the display
  loop to determine whether to x-shift the current chart.
2023-02-12 13:55:24 -05:00
Tyler Goodlet 14104185d2 Move `DisplayState.incr_info()` -> `Viz` 2023-02-12 13:41:18 -05:00
Tyler Goodlet 3019c35e30 Move `Viz` layer to new `.ui` mod 2023-02-12 13:41:18 -05:00
Tyler Goodlet 4d74bc29b4 Fix line -> bars on 6x UPPX
Read the `Viz.index_step()` directly to avoid always reading 1 on the
slow chart; this was completely broken before and resulting in not
rendering the bars graphic on the slow chart until at a true uppx of
1 which obviously doesn't work for 60 width bars XD

Further cleanups to `._render` module:
- drop `array` output from `Renderer.render()`, `read_from_key` input
  and fix type annot.
- drop `should_line`, `changed_to_line` and `render_kwargs` from
  `render_baritems()` outputs and instead calc `should_redraw` logic
  inside the func body and return as output.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 3638ae8d3e Drop unused `read_src_from_key: bool` to `.format_to_1d()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet c5dd67e63c Right, do index lookup for int-index as well.. 2023-02-12 13:41:18 -05:00
Tyler Goodlet 0663880a6d Fix formatter xy ndarray first prepend case
First allocation vs. first "prepend" of source data to an xy `ndarray`
format **must be mutex** in order to avoid a double prepend.

Previously when both blocks were executed we'd end up with
a `.xy_nd_start` that was decremented (at least) twice as much as it
should be on the first `.format_to_1d()` call which is obviously
incorrect (and causes problems for m4 downsampling as discussed below).
Further, since the underlying `ShmArray` buffer indexing is managed
(i.e. write-updated) completely independently from the incremental
formatter updates and internal xy indexing, we can't use
`ShmArray._first.value` and instead need to use the particular `.diff()`
output's prepend length value to decrement the `.xy_nd_start` on updates
after initial alloc.

Problems this resolves with m4:
- m4 uses a x-domain diff to calculate the number of "frames" to
  downsample to, this is normally based on the ratio of pixel columns on
  screen vs. the size of the input xy data.
- previously using an int-index (not epoch time) the max diff between
  first and last index would be the size of the input buffer and thus
  would never cause a large mem allocation issue (though it may have
  been inefficient in terms of needed size).
- with an epoch time index this max diff could explode if you had some
  near-now epoch time stamp **minus** an x-allocation value: generally
  some value in `[0.5, -0.5]` which would result in a massive frames and
  thus internal `np.ndarray()` allocation causing either a crash in
  `numba` code or actual system mem over allocation.

Further, put in some more x value checks that trigger breakpoints if we
detect values that caused this issue - we'll remove em after this has
been tested enough.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 3bed142d15 Handle time-indexing for fill arrows
Call into a reworked `Flume.get_index()` for both the slow and fast
chart and do time index clipping to last datum where necessary.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 9fcc6f9c44 Restore coord-cache resetting
Turns out we can't seem to avoid the artefacts when click-drag-scrolling
(results in weird repeated "smeared" curve segments) so just go back to
the original code.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 7aef31701b Add some commented debug prints for default fmtr 2023-02-12 13:41:18 -05:00
Tyler Goodlet 135627e142 Slice to an extra index around each timestamp input 2023-02-12 13:41:18 -05:00
Tyler Goodlet 5216a6b732 Drop passing `render_data` to `Curve.draw_last_datum()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet 2a797d32dc Add back `.default_view()` slice logic for `int` indexing 2023-02-12 13:41:18 -05:00
Tyler Goodlet 35a16ded2d Block out `do_print` stuff inside `Viz.maxmin()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet 44f50e3d0e Implement `stop_t` gap adjustments; the good lord said it is the problem 2023-02-12 13:41:18 -05:00
Tyler Goodlet 96b871c4d7 Draw last datums on boot
Ensures that a "last datum" graphics object exists so that zooming can
read it using `.x_last()`. Also, disable the linked region stuff for now
since it's totally borked after flipping to the time indexing.
2023-02-12 13:41:18 -05:00
Tyler Goodlet d2aad74dfc Delegate to `Viz.default_view()` on chart
Also add a rage print to not forget about the global index
tracking/diffing in the display loop we still need to change.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 50209752c3 Re-implement `.default_view()` on `Viz`
Since we don't really need it defined on the "chart widget" move it to
a viz method and rework it to hell:

- always discard the invalid view l > r case.
- use the graphic's UPPX to determine UI-to-scene coordinate scaling for
  the L1-label collision detection, if there is no L1 just offset by
  a few (index step scaled) datums; this allows us to drop the 2x
  x-range calls as was hacked previously.
- handle no-data-in-view cases explicitly and error if we get any
  ostensibly impossible cases.
- expect caller to trigger a graphics cycle if needed.

Further support this includes a rework a slew of other important
details:

- add `Viz.index_step`, an idempotently computed (presumably uniform) index
  step value which is needed for variable sample rate graphics displayed
  on an epoch (second) time index.
- rework `Viz.datums_range()` to pass view x-endpoints as first and last
  elements in return `tuple`; tighten up snap-to-data edge case logic
  using `max()`/`min()` calls and better internal var naming.
- adjust all calls to `slice_from_time()` to not expect an "abs" slice.
- drop all `.yrange` resetting since we can just have the `Renderer` do
  it when necessary.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 5ab4e5493e Add gap detection for `stop_t`, though only report atm 2023-02-12 13:41:18 -05:00
Tyler Goodlet e252f70253 Add `.x_last()` meth to flow graphics 2023-02-12 13:41:18 -05:00
Tyler Goodlet 98438e29ef Drop `Flume.view_data()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet d649a7d1fa Drop old breakpoint 2023-02-12 13:41:18 -05:00
Tyler Goodlet 2669ced629 Drop `_slice_from_time()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet f2c0987a04 Use uniform step arithmetic in `slice_from_time()`
If we presume that time indexing uses a uniform step we can calculate
the exact index (using `//`) for the input time presuming the data
set has zero gaps. This gives a massive speedup over `numpy` fancy
indexing and (naive) `numba` iteration. Further in the case where time
gaps are detected, we can use `numpy.searchsorted()` to binary search
for the nearest expected index at lower latency.

Deatz,
- comment-disable the call to the naive `numba` scan impl.
- add an optional `step: int` input (calced if not provided).
- add todos for caching binary search results in the gap detection
  cases.
- drop returning the "absolute buffer indexing" slice since the caller
  can always just use the read-relative slice to acquire it.
2023-02-12 13:41:18 -05:00
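
The core idea as a hedged sketch (not the actual impl): floor-divide to get the exact index under a uniform-step assumption, then fall back to `np.searchsorted()` when the guess lands on a gap:

    import numpy as np

    def slice_from_time(
        times: np.ndarray,   # 1d epoch timestamps
        start_t: float,
        stop_t: float,
        step: int | None = None,
    ) -> slice:
        if step is None:
            step = round(np.median(np.diff(times[:16])))

        t_first = times[0]
        i_start = int((start_t - t_first) // step)
        i_stop = int((stop_t - t_first) // step)

        # clamp to the readable range
        i_start = max(0, min(i_start, len(times) - 1))
        i_stop = max(0, min(i_stop, len(times) - 1))

        # gap detected? binary search for the nearest expected index
        if times[i_start] != start_t:
            i_start = int(np.searchsorted(times, start_t, side='left'))
        if times[i_stop] != stop_t:
            i_stop = int(np.searchsorted(times, stop_t, side='left'))

        return slice(i_start, i_stop + 1)
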
Tyler Goodlet bb84715bf0 Make `.default_view()` time step aware
When we use an epoch index and any sample rate > 1s we need to scale the
"number of bars" to that step in order to place the view correctly in
x-domain terms. For now we're calcing the step in-method but likely,
longer run, we'll pull this from elsewhere (like a ``Viz`` attr).
2023-02-12 13:41:17 -05:00
Tyler Goodlet 0bdb7261d1 Flip over to epoch-time based x-domain indexing 2023-02-12 13:41:17 -05:00
Tyler Goodlet 12857a258b Adjust all `slice_from_time()` calls to not expect mask 2023-02-12 13:41:17 -05:00
Tyler Goodlet 46808fbb89 Rewrite `slice_from_time()` using `numba`
Gives approx a 3-4x speedup using plain old iterate-with-for-loop style
though still not really happy with this .5 to 1 ms latency..

Move the core `@njit` part to a `_slice_from_time()` with a pure python
func with orig name around it. Also, drop the output `mask` array since
we can generally just use the slices in the caller to accomplish the
same input array slicing, duh..
2023-02-12 13:41:17 -05:00
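
For contrast with the uniform-step version above, the naive `@njit` scan looks roughly like this sketch (this is the impl the later uniform-step commit comment-disables):

    import numpy as np
    from numba import njit

    @njit
    def _slice_from_time(times, start_t, stop_t):
        # linear scan for the first/last indices whose timestamps fall
        # inside [start_t, stop_t]
        i_start = 0
        i_stop = len(times) - 1
        for i in range(len(times)):
            if times[i] >= start_t:
                i_start = i
                break
        for i in range(len(times) - 1, -1, -1):
            if times[i] <= stop_t:
                i_stop = i
                break
        return i_start, i_stop
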
Tyler Goodlet 6ca8334253 Use index (time) step to calc OHLC bar/line uppx threshold 2023-02-12 13:41:17 -05:00
Tyler Goodlet a3844f9922 Use step size to determine bar gaps 2023-02-12 13:41:17 -05:00
Tyler Goodlet 58b36db2e5 Use step size to determine last datum bar gap 2023-02-12 13:41:17 -05:00
Tyler Goodlet a33f58a61a Move `Flume.slice_from_time()` to `.data._pathops` mod func 2023-02-12 13:41:17 -05:00
Tyler Goodlet a4392696a1 Drop `index_field` input to renders, add `.read()` profiling 2023-02-12 13:41:17 -05:00
Tyler Goodlet d5844ce8ff Delegate formatter `.index_field` to the parent `Viz` 2023-02-12 13:41:17 -05:00
Tyler Goodlet bf88b40a50 Facepalm**2: fix array-read-slice, like actually..
We need to subtract the first index in the array segment read, not the
first index value in the time-sliced output, to get the correct offset
into the non-absolute (`ShmArray.array` read) array..

Further we **do** need the `&` between the advance indexing conditions
and this adds profiling to see that it is indeed real slow (like 20ms
ish even when using `np.where()`).
2023-02-12 13:41:17 -05:00
Tyler Goodlet e4a0d4ecea Markup OHLC->path gen with `numba` issue # 2023-02-12 13:41:17 -05:00
Tyler Goodlet cca3417c57 Facepalm: put graphics cycle in `do_ds: bool` block.. 2023-02-12 13:41:17 -05:00
Tyler Goodlet 031d7967de Facepalm: actually return latest index on time slice fail.. 2023-02-12 13:41:17 -05:00
Tyler Goodlet 2e67e98b4d Go with explicit `.data._m4` mod name
Since it's a notable and self-contained graphics compression algo, might
as well give it a dedicated module B)
2023-02-12 13:41:17 -05:00
Tyler Goodlet 7124a131dd Move (unused) path gen routines to `.ui._pathops` 2023-02-12 13:41:17 -05:00
Tyler Goodlet 9052ed5ddf Move qpath-ops routines back to separate mod 2023-02-12 13:41:17 -05:00
Tyler Goodlet 7ec21c7f3b Rename `.ui._pathops.py` -> `.ui._formatters.py 2023-02-12 13:41:17 -05:00
Tyler Goodlet 309ae240cf Look up "index field" in display cycles
Again, to make epoch indexing a flip-of-switch for testing look up the
`Viz.index_field: str` value when updating labels.

Also, drops the legacy tick-type set tracking which we no longer use
thanks to the new throttler subsys and its framing msgs.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 382a619a03 Fix from-time index slicing?
Apparently we want an `|` for the advanced indexing logic?
Also, fix `read_slc` start to not always be 0 XD
2023-02-12 13:41:17 -05:00
Tyler Goodlet 7f3f6f871a Move path ops routines to top of mod
Planning to put the formatters into a new mod and aggregate all path
gen/op helpers into this module.

Further tweak include:
- moving `path_arrays_from_ohlc()` back to module level
- slice out the last xy datum for `OHLCBarsAsCurveFmtr` 1d formatting
- always copy the new x-value from the source to `.x_nd`
2023-02-12 13:41:17 -05:00
Tyler Goodlet 6ea04f850d Drop diff state tracking in formatter
This was a major cause of error (particularly trying to get epoch
indexing working) and really isn't necessary; instead just have
`.diff()` always read from the underlying source array for current
index-step diffing and append/prepend slice construction.

Allows us to,
- drop `._last_read` state management and thus usage.
- better handle startup indexing by setting `.xy_nd_start/stop` to
  `None` initially so that the first update can be done in one large
  prepend.
- better understand and document the step curve "slice back to previous
  level" logic which is now heavily commented B)
- drop all the `slice_to_head` stuff and instead allow each
  formatter to choose its 1d segmenting.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 3d5695f40a Explicitly enable chart widget yranging in display init 2023-02-12 13:41:17 -05:00
Tyler Goodlet 5affad942f Enable/disable vlm chart yranging (TO SQUASH) 2023-02-12 13:41:17 -05:00
Tyler Goodlet eb9ab20646 Don't disable non-enabled vlm chart y-autoranging 2023-02-12 13:41:17 -05:00
Tyler Goodlet f3bab826f6 Comment out bps for time indexing 2023-02-12 13:41:17 -05:00
Tyler Goodlet 2b9ca5f805 Call `Viz.bars_range()` from display loop 2023-02-12 13:41:17 -05:00
Tyler Goodlet 25a75e5bec Fix `.default_view()` to view-left-of-data 2023-02-12 13:41:17 -05:00
Tyler Goodlet 702ae29a2c Add `Viz.index_field: str`, pass to graphics objs
In an effort to make it easy to override the indexing scheme.

Further, this repairs the `.datums_range()` special case to handle when
the view box is to-the-right-of the data set (i.e. l > datum_start).
2023-02-12 13:41:17 -05:00
Tyler Goodlet ac1f37a2c2 Expect `index_field: str` in all graphics objects 2023-02-12 13:41:17 -05:00
Tyler Goodlet 344d2eeb9e Facepalm: pass correct flume to each FSP chart group.. 2023-02-12 13:41:17 -05:00
Tyler Goodlet 9133103f8f Attempt to make `.default_view()` time-index ready
As in make the call to `Flume.slice_from_time()` to try and convert any
time index values from the view range to array-indices; all untested
atm.

Also drop some old/unused/moved methods:
- `._set_xlimits()`
- `.bars_range()`
- `.curve_width_pxs()`

and fix some `flow` -> `viz` var naming.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 166d14af69 Simplify formatter update methodology
Don't expect values (array + slice) to be returned and applied by
`.incr_update_xy_nd()` and instead presume this will be implemented
internally in each (sub)formatter.

Attempt to simplify some incr-update routines, (particularly in the step
curve formatter, though most of it was reverted to just a simpler form
of the original implementation XD) including:
- dropping the need for the `slice_to_head: int` control.
- using the `xy_nd_start/stop` index counters over custom lookups.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 696c6f8897 First attempt, field-index agnostic formatting
Remove hardcoded `'index'` field refs from all formatters in a first
attempt at moving towards epoch-time alignment (though don't actually
use it yet).

Adjustments to the formatter interface:
- property for `.xy_nd`, the x/y nd arrays.
- property for `.xy_slice`, the nd format array(s) start->stop index
  slice.

Internal routine tweaks:
- drop `read_src_from_key` and always pass full source array on updates
  and adjust handlers to expect to have to index the data field of
  interest.
- set `.last_read` right after update calls instead of after 1d
  conversion.
- drop `slice_to_head` array read slicing.
- add some debug points for testing 'time' indexing (though not used
  here yet).
- add `.x_nd` array update logic for when the `.index_field` is not
  'index' - i.e. when we begin to try and support epoch time.
- simplify some new y_nd updates to not require use of `np.broadcast()`
  where possible.
2023-02-12 13:41:17 -05:00
Tyler Goodlet be21f9829e Pepper render routines with time-slice calls 2023-02-12 13:41:17 -05:00
Tyler Goodlet 5a0673d66f Add `Viz.bars_range()` (moved from chart API)
Call it from view kb loop.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 6cacd7d18b Make `Viz.slice_from_time()` take input array
Probably means it doesn't need to be a `Flume` method but it's
convenient to expect the caller to pass in the `np.ndarray` with
a `'time'` field instead of a `timeframe: str` arg; also, return the
slice mask instead of the sliced array as output (again allowing the
caller to do any slicing). Also, handle the slice-outside-time-range
case by just returning the entire index range with a `None` mask.

Adjust `Viz.view_data()` to instead do timeframe (for rt vs. hist shm
array) lookup and equiv array slicing with the returned mask.
2023-02-12 13:41:17 -05:00
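A hedged sketch of the mask-returning slice helper described above (the exact
signature and out-of-range semantics are assumptions based on the commit text):

```python
import numpy as np

def slice_from_time(
    arr: np.ndarray,  # struct array containing an epoch 'time' field
    start_t: float,
    stop_t: float,
) -> tuple[slice, np.ndarray | None]:
    times = arr['time']
    mask = (times >= start_t) & (times <= stop_t)
    if not mask.any():
        # slice falls entirely outside the buffer's time range:
        # hand back the full index range with a `None` mask and let
        # the caller decide what to do.
        return slice(0, len(arr)), None

    indices = np.flatnonzero(mask)
    return slice(indices[0], indices[-1] + 1), mask
```

The caller can then apply the mask (or the slice) against whichever
timeframe's shm array it looked up, as `Viz.view_data()` does per the
commit above.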
Tyler Goodlet 5b08e9cba3 Add breakpoint on -ve range for now 2023-02-12 13:41:17 -05:00
Tyler Goodlet d3f5ff1b4f Go back to hard-coded index field
Turns out https://github.com/numba/numba/issues/8622 is real
and the suggested `numba.literally` hack doesn't seem to work..
2023-02-12 13:41:16 -05:00
Tyler Goodlet e45bc4c619 Move `ui._compression`/`._pathops` to `.data` subpkg
Since these modules no longer contain Qt specific code we might
as well include them in the data sub-package.

Also, add `IncrementalFormatter.index_field` as single point to def the
indexing field that should be used for all x-domain graphics-data
rendering.
2023-02-12 13:39:10 -05:00
Tyler Goodlet baee86a2d6 Rename `.ui._flows.py` -> `.ui._render.py` 2023-02-12 13:39:10 -05:00
Tyler Goodlet 86d09d9305 Rename `Flow` -> `Viz`
The type is better described as a "data visualization":
https://en.wikipedia.org/wiki/Data_and_information_visualization

Add `ChartPlotWidget.get_viz()` to start working towards not accessing
the private table directly XD

We'll probably end up using the name `Flow` for a type that tracks
a collection of composed/cascaded `Flume`s:
https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
2023-02-12 13:39:10 -05:00
Tyler Goodlet 9ace053aaf Copy timestamps from source to FSP dest buffer 2023-02-12 13:39:10 -05:00
Guillermo Rodriguez 69707786fc
Fix environment spelling 2023-02-12 13:23:55 -03:00
Guillermo Rodriguez 096e87cd3b
Add info about nix to README.rst 2023-02-12 13:23:55 -03:00
Guillermo Rodriguez 5017c541db
Auto initialize and activate virtualenv 2023-02-12 13:23:55 -03:00
Guillermo Rodriguez 3ea6554ab0
Add nix development shell file 2023-02-12 13:23:45 -03:00
Guillermo Rodriguez f0b17cb8f7
Merge pull request #457 from pikers/msgspec-default-factories
Use new msgspec default factories
2023-02-12 13:17:31 -03:00
Guillermo Rodriguez 5ca45362c8
Add default factories for all required fields 2023-02-11 16:08:45 -03:00
Tyler Goodlet 1f2081911f Revert "Adjust chart call to graphics cycle to not pass quotes"
This reverts commit 50ad7370c7
which was originally applied due to missing API changes coming in
a future patchset..
2023-02-09 16:26:32 -05:00
goodboy a7d02ecec8
Merge pull request #449 from pikers/multi_symbol_input
Multi symbol input (support)
2023-02-09 16:20:34 -05:00
goodboy 11ba706797
Merge pull request #448 from pikers/axis_sticky_api
Axis sticky api, `PlotItem` is the new "chart"
2023-02-05 15:32:22 -05:00
Tyler Goodlet 50ad7370c7 Adjust chart call to graphics cycle to not pass quotes
Was breaking the `'r'` hotkey to reset the chart..
2023-02-05 15:27:12 -05:00
goodboy 0616cbd1f1
Merge pull request #454 from pikers/ib_fix_cmdtys
`ib`: fix cmdtys feeds
2023-02-03 07:53:39 -05:00
Tyler Goodlet af92602027 `ib`: make commodities search and feeds work again..
Was broken since the `_adhoc_futes_set` rework a while back. Moves the
cmdty symbols from that set into a new one and fixes the contract
case block to catch `Contract(secType='CMDTY')` case. Also makes
`Client.search_symbols()` return details `dict`s so that `piker search`
will work again..
2023-02-02 16:52:34 -05:00
Tyler Goodlet d8bf45b02d Use latest `asks` 2023-02-02 16:52:34 -05:00
Tyler Goodlet 07ab853d3d `Order.symbol` is a `str`.. 2023-02-02 15:05:26 -05:00
Tyler Goodlet 414866fc6b Assign pnl calc output for use when debugging 2023-02-02 15:05:26 -05:00
Tyler Goodlet bc7fe6114d Adjust order mode to use `Flume.get_index()` 2023-02-02 15:05:23 -05:00
Tyler Goodlet 8d592886fa Pass `Flume`s throughout FSP-ui and charting APIs
Since higher level charting and fsp management need access to the
new `Flume` indexing apis this adjusts some func sigs to pass through
(and/or create) flume instances:
- `LinkedSplits.add_plot()` and dependents.
- `ChartPlotWidget.draw_curve()` and deps, and it now returns a `Flow`.
- `.ui._fsp.open_fsp_admin()` and `FspAdmin.open_fsp_ui()` related
  methods => now we wrap the destination fsp shm in a flume on the admin
  side and is returned from `.start_engine_method()`.

Drop a bunch of (unused) chart widget methods including some already
moved to flume methods: `.get_index()`, `.in_view()`,
`.last_bar_in_view()`, `.is_valid_index()`.
2023-02-02 13:32:30 -05:00
Tyler Goodlet 69ea296a9b Max out per symbol throttle @ 22Hz 2023-02-02 13:32:30 -05:00
Tyler Goodlet 03821fdf6f Expect and update from by-type tick frames
Move to expect and process new by-tick-event frames where the display
loop can now just iterate the most recent tick events by type instead of
the entire tick history sequence - thus we reduce iterations inside the
update loop.

Also, go back to using the detected display's refresh rate (minus 6)
as the default feed requested throttle rate since we can now handle
much more bursty-ness in display updates thanks to the new framing
format B)
2023-02-02 13:32:30 -05:00
Tyler Goodlet 1aa9ab03da Brighter last OHLC graphics datum by default 2023-02-02 13:32:20 -05:00
Tyler Goodlet 1d83b43efe Factor setup loop, 1 FSP chain, colors, throttling
Factor out the chart widget creation since it's only executed once
during rendering of the first feed/flow whilst keeping plotitem overlay
creation inside the (flume oriented) init loop. Only create one vlm and
FSP chart/chain for now until we figure out if we want FSPs overlayed by
default or selected based on the "front" symbol in use. Add a default
color-palette set using shades of gray when plotting overlays. Presume
that the display loop's quote throttle rate should be uniformly
distributed over all input symbol-feeds for now. Restore feed pausing on
mouse interaction.
2023-02-02 13:32:20 -05:00
Tyler Goodlet 6986be1b21 Define a single `ChartPlotWidget.feed: Feed` for pause/resume 2023-02-02 13:32:20 -05:00
Tyler Goodlet 92c50aa6a7 Drop tick frame builder loop for now 2023-02-02 13:32:20 -05:00
Tyler Goodlet eac79c5cdd Adjust FSP UI/mgmt apis to be `Flume` oriented 2023-02-02 13:32:20 -05:00
Tyler Goodlet 7aec238f5f Make graphics-update-loop multi-sym aware B)
Initial support for real-time multi-symbol overlay charts using an
aggregate feed delivered by `Feed.open_multi_stream()`.

The setup steps for constructing the overlayed plot items are still very
rough and will likely provide incentive for better refactoring of the
high level "charting APIs". For each fqsn passed into `display_symbol_data()`
we now synchronously,
- create a single call to `LinkedSplits.plot_ohlc_main() -> `ChartPlotWidget`
  where we cache the chart in scope and for all other "sibling" fqsns
  we,
- make a call to `ChartPlotWidget.overlay_plotitem()` -> `PlotItem`, hide its axes,
  make another call with this plotitem input to
  `ChartPlotWidget.draw_curve()`, set a sym-specific view box auto-yrange maxmin callback,
  register the plotitem in a global `pis: dict[str, list[pgo.PlotItem, pgo.PlotItem]] = {}`

Once all plots have been created we then asynchronously for each symbol,
- maybe create a volume chart and register it in a similar task-global
  table: `vlms: dict[str, ChartPlotWidget] = {}`
- start fsp displays for each symbol

Then common entrypoints are entered once for all symbols:
- a single `graphics_update_loop()` loop-task is started wherein
  real-time graphics update components for each symbol are created,
      * `L1Labels`
      * y-axis last clearing price stickies
      * `maxmin()` auto-ranger
      * `DisplayState` (stored in a table `dss: dict[str, DisplayState] = {}`)
      * an `increment_history_view()` task
  and a single call to `Feed.open_multi_stream()` is used to create
  a symbol-multiplexed quote stream which drives a single loop over all
  symbols wherein for each quote the appropriate components are looked
  up and passed to `graphics_update_cycle()`.
- a single call to `open_order_mode()` is made with the first symbol
  provided as input, though eventually we want to support passing in the
  entire list.

Further internal implementation details:
- special tweaks to the `pg.LinearRegionItem` setup wherein the region
  is added with a zero opacity and *after* all plotitem overlays to
  avoid an issue where overlays weren't being shown within the region
  area in the history chart.
- all symbol-specific graphics oriented update calls are adjusted to
  pass in the fqsn:
  * `update_fsp_chart()`
  * `ChartView._set_yrange()`
  * `ChartPlotWidget.update_graphics_from_flow()`
- avoid a double increment on sample step updates by not calling the
  increment on any vlm chart since it seems the vlm-ohlc chart linking
  already takes care of this now?
- use global counters for the last epoch time step to avoid incrementing
  all views more than once per new time step given underlying shm array
  buffers may be on different array-index values from one another.
2023-02-02 13:30:02 -05:00
Tyler Goodlet be3dc69290 Only update pnl label on quotes with an fqsn match 2023-02-02 13:30:02 -05:00
Tyler Goodlet 6100bd19c7 Adjust search to handle multi-sym results 2023-01-31 15:16:34 -05:00
Tyler Goodlet d57bc6c6d9 Adjust to using `PlotItem`s for axis sticky mgmt 2023-01-31 15:15:56 -05:00
Tyler Goodlet 58b42d629f Passthrough fqsns list directly to `.load_symbols()` 2023-01-31 14:54:19 -05:00
Tyler Goodlet 36a81cb2de Only add plot to cursor set if not an overlay 2023-01-31 14:27:39 -05:00
Tyler Goodlet ae0f3118f4 Pass plotitem to axis from cursor 2023-01-31 14:27:39 -05:00
Tyler Goodlet 727c7ce2b1 Adjust L1 labels to expect `.pi: PlotItem` 2023-01-31 14:27:39 -05:00
Tyler Goodlet a39c980266 Allocate our internal `Axis` subtype in our `PlotItem` override 2023-01-31 14:27:39 -05:00
Tyler Goodlet 00be100e71 Initial chart widget adjustments for agg feeds
Main "public" API change is to make `GodWidget.get/set_chart_symbol()`
accept and cache-on fqsn tuples to allow handling overlayed chart groups
and adjust method names to be plural to match.

Wrt `LinkedSplits`,
- create all chart widget axes with a `None` plotitem argument and set
  the `.pi` field after axis creation (since apparently we have another
  object reference causality dilemma..)
- set a monkeyed `PlotItem.chart_widget` for use in axes that still need
  the widget reference.
- drop feed pause/resume for now since it's leaking feed tasks on the
  `brokerd` side and we probably don't really need it any more, and if
  we still do it should be done on the feed not the flume.

Wrt `ChartPlotItem`,
- drop `._add_sticky()` and use the `Axis` method instead and add some
  overlay + axis sanity checks.
- refactor `.draw_ohlc()` to be a lighter wrapper around a call to
  `.add_plot()`.
2023-01-31 14:27:39 -05:00
Tyler Goodlet 9217610734 Simplify OHLC graphic color instance var name 2023-01-31 14:27:39 -05:00
Tyler Goodlet 31af7a2c99 Add `Axis.add_sticky()` for creating axis labels
We have this method on our `ChartPlotWidget` but it makes more sense to
directly associate axis-labels with, well, the label's parent axis XD.

We add `._stickies: dict[str, YAxisLabel]` to replace
`ChartPlotWidget._ysticks` and pass in the `pg.PlotItem` to each axis
instance, stored as `Axis.pi` instead of handing around linked split
references (which are way out of scope for a single axis).

More work needs to be done to remove dependence on `.chart:
ChartPlotWidget` references in the date axis type as per comments.
2023-01-31 14:27:39 -05:00
Tyler Goodlet 34fac364fd Add default `YAxisLabel.x_offset: int` 2023-01-31 14:27:39 -05:00
goodboy dcdfd2577a
Merge pull request #447 from pikers/pregraphics_formatters
Pregraphics formatters: `IncrementalFormatter`
2023-01-31 13:55:04 -05:00
goodboy 6733dc57af
Merge pull request #441 from pikers/dark_clearing_repairs
Dark clearing repairs
2023-01-30 14:21:23 -05:00
Tyler Goodlet 05c4b6afb9 Drop px-cache-resets, failed try at path appends
Comments out the pixel-cache resetting since it doesn't seem we need it
any more to avoid draw oddities?

For `.fast_path` appends, this nearly got it working except the new path
segments are either not being connected correctly (step curve) or not
being drawn in full since the history path (plain line).

Leaving the attempted code commented in for a retry in the future; my
best guesses are that maybe,
- `.connectPath()` call is being done with incorrect segment length
  and/or start point.
- the "appended" data: `appended = array[-append_len-1:slice_to_head]`
  (done inside the formatter) isn't correct (i.e. endpoint handling
  considering a path append) and needs special handling for different
  curve types?
2023-01-30 13:22:24 -05:00
Tyler Goodlet 4b22325ffc Mask profile points and drop rect `.united()` attempts 2023-01-30 13:22:14 -05:00
Tyler Goodlet 9d16299f60 Make curve graphics timeframe agnostic
Ensure `.boundingRect()` calcs and `.draw_last_datum()` do geo-sizing
based on source data instead of presuming some `1.0` unit steps in some
spots; we need this to support an epoch index as is needed for overlays.

Further, clean out a bunch of old bounding rect calc code and add some
commented code for trying out `QRectF.united()` on the path + last datum
curve segment. Turns out that approach is slower as per eyeballing the
added profiler points.
2023-01-30 13:21:43 -05:00
Tyler Goodlet ab1f15506d Add graphics incr-updated "formatter" subsys
After trying to hack epoch indexed time series and failing miserably,
decided to properly factor out all formatting routines into a common
subsystem API: ``IncrementalFormatter`` which provides the interface for
incrementally updating and tracking pre-path-graphics formatted data.

Previously this functionality was mangled into our `Renderer` (which
also does the work of `QPath` generation and update) but splitting it
out also preps for being able to do graphics-buffer downsampling and
caching on a remote host B)

The ``IncrementalFormatter`` (parent type) has the default behaviour of
tracking a single field-array on some source `ShmArray`, updating
a flattened `numpy.ndarray` in-mem allocation, and providing a default
1d conversion for pre-downsampling and path generation.

Changed out of `Renderer`,
- `.allocate_xy()`, `update_xy()` and `format_xy()` all are moved to
  more explicitly named formatter methods.
- all `.x/y_data` nd array management and update
- "last view range" tracking
- `.last_read`, `.diff()`
- now calls `IncrementalFormatter.format_to_1d()` inside `.render()`

The new API gets,
- `.diff()`, `.last_read`
- all view range diff tracking through `.track_inview_range()`.
- better nd format array names: `.x/y_nd`, `xy_nd_start/stop`.
- `.format_to_1d()` which renders pre-path formatted arrays ready for
  both m4 sampling and path gen.
- better explicit overloadable formatting method names:
  * `.allocate_xy()` -> `.allocate_xy_nd()`
  * `.update_xy()` -> `.incr_update_xy_nd()`
  * `.format_xy()` -> `.format_xy_nd_to_1d()`

Finally this implements per-graphics-type formatters which each define
their own set of related formatting routines:
- `OHLCBarsFmtr`: std multi-line style bars
- `OHLCBarsAsCurveFmtr`: draws an interpolated line for ohlc sampled data
- `StepCurveFmtr`: handles vlm style curves
2023-01-30 13:20:17 -05:00
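A rough skeleton of the formatter hierarchy sketched above; class and method
names follow the commit text while bodies, field choices and attributes are
placeholders only:

```python
import numpy as np

class IncrementalFormatter:
    # single point defining the x-domain indexing field
    # ('index' or, eventually, 'time')
    index_field: str = 'index'

    def allocate_xy_nd(self, src: np.ndarray) -> None:
        # allocate flattened nd-format arrays from the source shm;
        # the 'close' field choice is an assumption for illustration.
        self.x_nd = src[self.index_field].copy()
        self.y_nd = src['close'].copy()

    def incr_update_xy_nd(self, src: np.ndarray) -> None:
        # append/prepend the latest datums into the nd arrays (elided)
        ...

    def format_xy_nd_to_1d(self) -> tuple[np.ndarray, np.ndarray]:
        # default case: the nd arrays are already 1d-ready for
        # m4 downsampling and path generation.
        return self.x_nd, self.y_nd


class OHLCBarsFmtr(IncrementalFormatter):
    # std multi-line bars: each bar expands to several 1d points
    ...

class OHLCBarsAsCurveFmtr(OHLCBarsFmtr):
    # interpolated line for ohlc sampled data
    ...

class StepCurveFmtr(IncrementalFormatter):
    # vlm style step curves: duplicate each x to form flat steps
    ...
```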
Tyler Goodlet 0db5451e47 Move all pre-path formatting routines to `._pathops`, proto formatter type 2023-01-30 13:19:33 -05:00
goodboy 61218f30f5
Merge pull request #440 from pikers/samplerd_service
`samplerd` service
2023-01-30 11:48:07 -05:00
Tyler Goodlet fcfc0f31f0 Enable backpressure in an effort to prevent bootup overruns 2023-01-30 11:45:29 -05:00
Tyler Goodlet 69074f4fa5 Bump up service tree spawn timeout a couple secs 2023-01-26 17:59:25 -05:00
Tyler Goodlet fe4fb37b58 Add service tree tests for data-feeds and the EMS 2023-01-24 15:15:27 -05:00
Tyler Goodlet 7cfd431a2b Yield `Services` in `open_test_pikerd()` fixture 2023-01-24 15:15:27 -05:00
Tyler Goodlet 61e20a86cc Fix clearing endpoint type annots, export `open_ems()` 2023-01-24 15:15:27 -05:00
Tyler Goodlet d9b73e1d08 Yield services (manager) from `maybe_open_pikerd()` 2023-01-24 15:15:27 -05:00
goodboy 4833d56ecb
Merge pull request #442 from pikers/misc_brokerd_backend_repairs
Misc brokerd backend repairs
2023-01-23 18:44:00 -05:00
Tyler Goodlet 090d1ba524 `kraken`: catch value error not index on missing `src_fiat` in pair 2023-01-23 15:36:20 -05:00
Tyler Goodlet afc45a8e16 `binance`: same thing, only unsub when connected 2023-01-23 15:29:24 -05:00
Tyler Goodlet 844626f6dc Move `brokerd` service task to root `.data` mod 2023-01-13 13:21:49 -05:00
Tyler Goodlet 470079665f Use new tractor kwargs getter func 2023-01-13 13:21:49 -05:00
Tyler Goodlet 0cd87d9e54 Drop commented markestored spawner code 2023-01-13 13:21:49 -05:00
Tyler Goodlet 09711750bf Registry subsys rework
More or less a revamp (and possibly first draft for something similar in
`tractor` core) which ensures all actor trees attempt to discover the
`pikerd` registry actor.

Implementation improvements include:
- new `Registry` singleton which houses the `pikerd` discovery
  socket-address `Registry.addr` + a `open_registry()` manager which
  provides bootstrapped actor-local access.
- refine `open_piker_runtime()` to do the work of opening a root actor
  and call the new `open_registry()` depending on whether a runtime has
  yet been bootstrapped.
- rejig `[maybe_]open_pikerd()` in terms of the above.
2023-01-13 13:21:49 -05:00
Tyler Goodlet 71ca4c8e1f Use actor uid in shm keys for rt quote buffers
Allows running simultaneous data feed services on the same (linux) host
by avoiding file-name collisions instead keying shm buffer sets by the
given `brokerd` instance. This allows, for example, either multiple dev
versions of the data layer to run side-by-side or for the test suite to
be seamlessly run alongside a production instance.
2023-01-13 13:21:49 -05:00
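Illustrative only: a hypothetical shm key layout showing how embedding the
spawning actor's uid avoids file-name collisions when multiple `brokerd`
instances run on one host (the real key format isn't shown in the commit):

```python
# (name, uuid)-style actor uid pair; values are made up
actor_uid = ('brokerd.binance', 'a1b2c3d4')
fqsn = 'btcusdt.binance'

shm_key = f'{actor_uid[0]}.{actor_uid[1]}.{fqsn}.hist'
# => 'brokerd.binance.a1b2c3d4.btcusdt.binance.hist'
```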
Tyler Goodlet 9811dcf5f3 Match `services` subcmd to new reg addr module variables 2023-01-13 13:21:49 -05:00
Tyler Goodlet da659cf607 Facepalm: definitely do not short circuit discovery helpers.. 2023-01-13 13:21:49 -05:00
Tyler Goodlet 37e0ec7b7d Assert fixture caller is `pikerd` 2023-01-13 13:21:49 -05:00
Tyler Goodlet 045b76bab5 Make `Flume.index_stream()` defer to new sampling api 2023-01-13 13:21:49 -05:00
Tyler Goodlet c8c641a038 Ensure all sub-services cancel on `pikerd` exit
Previously we were relying on implicit actor termination in
`maybe_spawn_daemon()` but really on `pikerd` teardown we should be sure
to tear down not only all service tasks in each actor but also the actor
runtimes. This adjusts `Services.cancel_service()` to only cancel the
service task scope and wait on the `complete` event and reworks the
`open_context_in_task()` inner closure body to,

- always cancel the service actor at exit.
- not call `.cancel_service()` (potentially causing recursion issues on
  cancellation).
- allocate a `complete: trio.Event` to signal full task + actor termination.
- pop the service task from the `.service_tasks` registry.

Further, add a `maybe_set_global_registry_sockaddr()` helper-cm to do
the work of checking whether a registry socket needs-to/has-been set
and use it for discovery calls to the `pikerd` service tree.
2023-01-13 13:21:49 -05:00
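A condensed sketch of the task-vs-actor teardown split described above; names
follow the commit text but the real bookkeeping (portals, service nursery,
etc.) is richer than shown:

```python
import trio

class Services:
    # per-service: (task cancel scope, completion event)
    service_tasks: dict[str, tuple[trio.CancelScope, trio.Event]] = {}

    @classmethod
    async def cancel_service(cls, name: str) -> None:
        # only cancel the service *task* scope and wait for full
        # completion; the service actor itself is always cancelled
        # at exit inside the `open_context_in_task()` closure body.
        scope, complete = cls.service_tasks[name]
        scope.cancel()
        await complete.wait()
        cls.service_tasks.pop(name, None)
```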
Tyler Goodlet 6a1bb13feb Add base `pikerd` service tree custom check test 2023-01-13 13:21:49 -05:00
Tyler Goodlet 75591dd7e9 Don't raise on quote feed lags to dark clearing loop 2023-01-13 13:21:49 -05:00
Tyler Goodlet d792fed099 Move sync log msg back to info 2023-01-13 13:21:49 -05:00
Tyler Goodlet d66fb49077 Don't deliver shms from `start_backfill()`, they're not used 2023-01-13 13:21:49 -05:00
Tyler Goodlet 78c7c8524c Breakpoint when bad 1m history offsets are detected 2023-01-13 13:21:49 -05:00
Tyler Goodlet a746258f99 `binance`: always request an extra 1min OHLC bar
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when the `end_dt=None`, indicating "get the latest bar". Add
a breakpoint block that should trigger whenever the latest bar vs. the
latest epoch time is mismatched; we'll remove this after some testing
verifying the history bars issue is resolved.

Further this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
2023-01-13 13:21:49 -05:00
Tyler Goodlet 5adb234a24 Don't receive sample-index msgs in feed layer 2023-01-13 13:21:49 -05:00
Tyler Goodlet 2778ee1401 Support not registering for sample-index msgs via `sub_for_broadcasts: bool` flag 2023-01-13 13:21:49 -05:00
Tyler Goodlet e0ca5d5200 Use `open_sample_stream()` to increment fsp buffers 2023-01-13 13:21:47 -05:00
Tyler Goodlet b3d1b1aa63 Port feed layer to use new `samplerd` APIs
Always use `open_sample_stream()` to register fast and slow quote feed
buffers and get a sampler stream which we use to trigger
`Sampler.broadcast_all()` calls on the service side after backfill
events.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 5ec1a72a3d Implement a `samplerd` singleton actor service
Now spawned under the `pikerd` tree as a singleton-daemon-actor we offer
a slew of new routines in support of this micro-service:

- `maybe_open_samplerd()` and `spawn_samplerd()` which provide the
  `._daemon.Services` integration to conduct service spawning.
- `open_sample_stream()` which is a client-side endpoint which does all
  the work of (lazily) starting the `samplerd` service (if dne) and
  registers shm buffers for update as well as connects a sample-index
  stream for iteration by the caller.
- `register_with_sampler()` which is the `samplerd`-side service task
  endpoint implementing all the shm buffer and index-stream registry
  details as well as logic to ensure a lone service task runs
  `Services.increment_ohlc_buffer()`; it increments at the shortest period
  registered which, for now, is the default 1s duration.

Further impl notes:
- fixes to `Services.broadcast()` to ensure broken streams get discarded
  gracefully.
- we use a `pikerd` side singleton mutex `trio.Lock()` to ensure
  one-and-only-one `samplerd` is ever spawned per `pikerd` actor tree.
2023-01-13 13:21:15 -05:00
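A hedged usage sketch of the client-side endpoint described above; only the
endpoint name appears in the commit text, so the call signature and msg
payload here are assumptions:

```python
async def consume_samples(open_sample_stream, update_view) -> None:
    # `open_sample_stream` and `update_view` are injected since their
    # real import paths/signatures aren't shown above.
    async with open_sample_stream(1) as istream:  # 1s sampling period
        async for index_msg in istream:
            # each msg implies all registered shm buffers were
            # incremented one step; trigger a view/graphics update.
            update_view(index_msg)
```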
Tyler Goodlet a342f7d2d4 Make `._daemon.Services` for use as singleton
Drop the `_services` module level ref and adjust all client code to
match. Drop struct inheritance and convert all methods to class level.
Move `Brokerd.locks` -> `Services.locks` and add sampling mod to pikerd
enabled set.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 2c76cee928 Begin formalizing `Sampler` singleton API
We're moving toward a single actor managing sampler work and distributed
independently of `brokerd` services such that a user can run samplers on
different hosts than the real-time data feed infra. Most of the
implementation details include aggregating `.data._sampling` routines
into a new `Sampler` singleton type.

Move the following methods to class methods:
- `.increment_ohlc_buffer()` to allow a single task to increment all
  registered shm buffers.
- `.broadcast()` for IPC relay to all registered clients/shms.

Further add a new `maybe_open_global_sampler()` which allocates
a service nursery and assigns it to the `Sampler.service_nursery`; this
is prep for putting the step incrementer in a singleton service task
higher up the data-layer actor tree.
2023-01-13 13:21:15 -05:00
Tyler Goodlet b5f2ff854c Drop meaning the clearing rate, use per step count 2023-01-13 13:21:15 -05:00
Tyler Goodlet 3efb0b5884 Sync 1s (or less) sampler steps using rounded now-epoch 2023-01-13 13:21:15 -05:00
Tyler Goodlet 009bbe456e Always `.error()` log unknown queries for `marketstore` 2023-01-13 13:21:15 -05:00
Tyler Goodlet daf7b3f4a5 Only accept 6 tries for the same duplicate hist frame
When we see multiple history frames that are duplicate to the request
set, bail re-trying after a number of tries (6 just cuz) and return
early from the tsdb backfill loop; presume that this many duplicates
means we've hit the beginning of history. Use a `collections.Counter`
for the duplicate counts. Make sure and warn log in such cases.
2023-01-13 13:21:15 -05:00
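A tiny sketch of the duplicate-frame counting; keying by the frame's start
stamp is an assumption for illustration:

```python
from collections import Counter

dupe_counts: Counter[float] = Counter()
MAX_DUPES = 6

def seen_too_many(frame_start_epoch: float) -> bool:
    '''
    Return `True` once the same frame has been (re)delivered enough
    times that we presume the beginning of history has been hit and
    the backfill loop should bail early.

    '''
    dupe_counts[frame_start_epoch] += 1
    return dupe_counts[frame_start_epoch] >= MAX_DUPES
```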
Tyler Goodlet b0a6dd46e4 Use recon set on stack closing during reconnect
Hopefully resolves https://github.com/pikers/piker/issues/434
2023-01-13 13:21:15 -05:00
Tyler Goodlet 1c5141f4c6 Fix f-str in duplicate frame msg print 2023-01-13 13:21:15 -05:00
Tyler Goodlet 4cdd2271b0 Drop `tractor` assert bug note 2023-01-13 13:21:15 -05:00
Tyler Goodlet 89095d4e9f Ensure FSPs last 2 times are synced with its source 2023-01-13 13:21:15 -05:00
Tyler Goodlet 04c0d77595 Frame ticks in helper routine
Wow, turns out tick framing was totally borked since we weren't framing
on "greater then throttle period long waits" XD

This moves all the framing logic into a common func and calls it in
every case:
- every (normal) "pre throttle period expires" quote receive
- each "no new quote before throttle period expires" (slow case)
- each "no clearing tick yet received" / only burst on clears case
2023-01-13 13:21:15 -05:00
Tyler Goodlet d1b07c625f Copy timestamps from source to FSP dest buffer
Slice up to history's length worth of (latest) time stamps from source
series read at the start of the history init phase.
2023-01-13 13:21:15 -05:00
Tyler Goodlet a5bb33b0ff Avoid key error on already popped cancel 2023-01-13 13:21:15 -05:00
Tyler Goodlet 8e1ceca43d Add some data-flows jargon notes (re: #270) 2023-01-13 13:21:15 -05:00
Tyler Goodlet c85e7790de Rename `._flumes.py` -> `.flows.py` 2023-01-13 13:21:15 -05:00
Tyler Goodlet 2399c618b6 Expand sampler loop shm write lines 2023-01-13 13:21:15 -05:00
Tyler Goodlet 7ec88f8cac Make hist shm token optional to allow for FSPs 2023-01-13 13:21:15 -05:00
Tyler Goodlet eacd44dd65 Move `Flume` to a new `.data._flumes` module 2023-01-13 13:21:15 -05:00
Tyler Goodlet e5e70a6011 Extend `Flume` methods
Add some (untested) data slicing util methods for mapping time ranges to
source data indices:
- `.get_index()` which maps a single input epoch time to an equiv array
  (int) index.
- add `slice_from_time()` which returns a view of the shm data from an
  input epoch range presuming the underlying struct array contains
  a `'time'` field with epoch stamps.
- `.view_data()` which slices out the "in view" data according to the
  current state of the passed in `pg.PlotItem`'s view box.
2023-01-13 13:21:15 -05:00
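For the epoch-to-index mapping a binary search over the monotonic `'time'`
field is the natural fit; a sketch only, not the actual implementation:

```python
import numpy as np

def get_index(times: np.ndarray, t: float) -> int:
    # map a single input epoch stamp to the equivalent (int) array
    # index, assuming `times` is the monotonically increasing epoch
    # column of the underlying struct array.
    return int(np.searchsorted(times, t, side='left'))
```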
Tyler Goodlet 7da5c2b238 Add epoch time index to fsp buffers 2023-01-13 13:21:15 -05:00
Tyler Goodlet 1ee49df31d Ensure a rt shm buffer without backfill has correct epoch timestamping 2023-01-13 13:21:15 -05:00
Tyler Goodlet f2df32a673 Use throttle period for wait-on-clearing-event timeout 2023-01-13 13:21:15 -05:00
Tyler Goodlet 125e31dbf3 Implement by-type tick-framing in throttler loop
This has been an outstanding idea for a while and changes the framing
format of tick events into a `dict[str, list[dict]]` wherein for each
tick "type" (eg. 'bid', 'ask', 'trade', 'asize'..etc) we create an FIFO
ordered `list` of events (data) and then pack this table into each
(throttled) send. This gives an additional implied downsample reduction
(in terms of iteration on the consumer side) from `N` tick-events to
a (max) `T` tick-types presuming the rx side only needs the latest tick
event.

Drop the `types: set` and adjust clearing event test to use the new
`ticks_by_type` map's keys.
2023-01-13 13:21:15 -05:00
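A minimal example of the by-type framing format (`dict[str, list[dict]]`);
the `'type'` key name is an assumption:

```python
from collections import defaultdict

def frame_ticks(ticks: list[dict]) -> dict[str, list[dict]]:
    '''
    Pack a raw tick sequence into a by-type table, FIFO ordered per
    type, ready to be sent as a single throttled frame.

    '''
    frame: dict[str, list[dict]] = defaultdict(list)
    for tick in ticks:
        frame[tick['type']].append(tick)
    return dict(frame)

# a consumer that only needs the most recent event per type can do:
# last_trade = frame.get('trade', [{}])[-1]
```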
Tyler Goodlet 715e693564 Improved clearing-tick-burst-oriented throttling
Instead of uniformly distributing the msg send rate for a given
aggregate subscription, choose to be more bursty around clearing ticks
so as to avoid saturating the consumer with L1 book updates versus
delivering real trade data as-fast-as-possible.

Presuming the consumer is in the "UI land of slow" (eg. modern display
frame rates) such an approach serves more useful for seeing "material
changes" in the market as-bursty-as-possible (i.e. more short lived fast
changes in last clearing price vs. many slower changes in the bid-ask
spread queues). Such an approach also lends better to multi-feed
overlays which in aggregate tend to scale linearly with the number of
feeds/overlays; centralization of bursty arrival rates allows for
a higher overall throttle rate if used cleverly with framing.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 43717c92d9 Type annot-declare fsp-engine data `Feed` 2023-01-13 13:21:15 -05:00
Tyler Goodlet f370685c62 Init msg keys are always lower case 2023-01-13 13:21:15 -05:00
Tyler Goodlet 4300470786 Fix for empty tsdb query result case 2023-01-13 13:21:15 -05:00
Tyler Goodlet b89fd9652c `binance`: always request an extra 1min OHLC bar
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when the `end_dt=None`, indicating "get the latest bar". Add
a breakpoint block that should trigger whenever the latest bar vs. the
latest epoch time is mismatched; we'll remove this after some testing
verifying the history bars issue is resolved.

Further this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
2023-01-13 13:14:35 -05:00
Tyler Goodlet 51f4afbd88 Don't raise on quote feed lags to dark clearing loop 2023-01-13 12:51:07 -05:00
Tyler Goodlet 7ef8111381 Provide `datetime`-sorted clears table iteration
Likely pertains to helping with stuff in issues #345 and #373 and just
generally is handy to have when processing ledgers / clearing event
tables.

Adds the following helper methods:
- `iter_by_dt()` to iter-sort an arbitrary `Transaction`-like table of
  clear entries.
- `Position.iter_clears()` as a convenience wrapper for the above.
2023-01-13 12:51:01 -05:00
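A sketch of the sorting helper; the `.dt` attribute name is an assumption
since the commit only says entries are iterated datetime-sorted:

```python
from collections.abc import Iterable, Iterator
from operator import attrgetter

def iter_by_dt(clears: Iterable) -> Iterator:
    # iter-sort an arbitrary `Transaction`-like table of clear
    # entries by their (assumed) `.dt` datetime stamp.
    return iter(sorted(clears, key=attrgetter('dt')))
```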
Tyler Goodlet 35b097469b Round spread (slap) offset to min tick digits 2023-01-13 12:51:01 -05:00
Tyler Goodlet 94290c7d8b `kraken`: ignore mismatched zero-ed pps (for now)
See more details in the GH comment:
https://github.com/pikers/piker/issues/373#issuecomment-1380988581

More or less we need to pull and include the transfer fees for
withdrawals in our ledger tracking but this serves as a sloppy
workaround for the moment.
2023-01-13 12:48:18 -05:00
Tyler Goodlet 73379d3627 Run CI on all PRs 2023-01-13 12:39:17 -05:00
Tyler Goodlet 23835f2c08 `deribit`: drop old `backfill_bars()` ep 2023-01-13 12:39:17 -05:00
Tyler Goodlet d2aee00a56 `kraken`: only do unsub if connected
Trying to send a message in the `NoBsWs.fixture()` exit when the ws is
not currently connected causes a double `._stack.close()` call which
will corrupt `trio`'s coro stack. Instead only do the unsub if we detect
the ws is still up.

Also drops the legacy `backfill_bars()` module endpoint.

Fixes #437
2023-01-13 12:39:17 -05:00
Tyler Goodlet cf6e44cb9c Add `NoBsWs.connected()` predicate 2023-01-13 12:39:17 -05:00
Tyler Goodlet a146ad9e69 Never restart `ib-gw` containers on boot 2023-01-13 12:37:49 -05:00
Tyler Goodlet 70ad1a1860 `kraken`: don't presume src fiat symbol size in pos predicate 2023-01-13 12:37:49 -05:00
Tyler Goodlet f3ef73ef41 `kraken`: drop symbol token size =6 check 2023-01-13 12:37:49 -05:00
Tyler Goodlet a9832dc0cb `ib`: fix position log msg 2023-01-13 12:37:49 -05:00
Tyler Goodlet 9be245e955 `ib`: Add treasury yield futs to adhoc fqsn set 2023-01-13 12:37:49 -05:00
Tyler Goodlet 800773e585 ib: ignore throttles on `.get_head_time()` 2023-01-13 12:37:49 -05:00
goodboy 8d1eb81f16
Merge pull request #414 from pikers/agg_feedz
Agg feedz
2023-01-13 12:20:47 -05:00
Tyler Goodlet 963e5bdd62 Go back to `Feed.pause/resume()`, new flume APIs coming later 2023-01-10 11:09:19 -05:00
Tyler Goodlet 55de9abc41 Adjust cli mod imports of daemon sockaddr vars 2023-01-10 11:09:19 -05:00
Tyler Goodlet 593db0ed0d Only run `kraken` feed tests in CI, use `open_test_pikerd()` 2023-01-10 11:09:19 -05:00
Tyler Goodlet 06622105cd Add a `open_test_pikerd()` acm fixture for easy booting of the service stack 2023-01-10 11:09:19 -05:00
Tyler Goodlet 008ae47e14 Reset `._registry_addr` to any passed in value from caller 2023-01-10 11:09:19 -05:00
Tyler Goodlet 81585d9e6e Set global registry addr after first entry point spawns `pikerd` 2023-01-10 11:09:19 -05:00
Tyler Goodlet f6b7057b0d `binance`: always request an extra 1min OHLC bar
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when the `end_dt=None`, indicating "get the latest bar". Add
a breakpoint block that should trigger whenever the latest bar vs. the
latest epoch time is mismatched; we'll remove this after some testing
verifying the history bars issue is resolved.

Further this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 76f920a16b Always force lowercase on `binance` symbol keys
Hopefully helps resolve #435
2023-01-10 11:09:19 -05:00
Tyler Goodlet f232d6d4ee Add `ci_env` detector fixture 2023-01-10 11:09:19 -05:00
Tyler Goodlet b7e1443618 Use ETH on kraken to ensure enough quotes 2023-01-10 11:09:19 -05:00
Tyler Goodlet 5d021ffb85 Bump up timeout on multi-feed test for CI 2023-01-10 11:09:19 -05:00
Tyler Goodlet 28fd795280 Only require `-b <brokername>` for filtering
Instead of requiring a `-b`, try to import all built-in broker backend
python modules by default and only load those detected from the input symbol
list's fqsn values. In other words the `piker chart` cmd can now be run without
`-b` and that flag is only required if you only want to load
a subset of the built-ins or are trying to load a specific
not-yet-builtin backend.
2023-01-10 11:09:19 -05:00
Tyler Goodlet c944db5f02 Revert "Fix `_main()` arg back to `sym: str`"
This reverts commit 02fbc0a0ed.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 967e28b7ac Adjust built-in backend list to known working 2023-01-10 11:09:19 -05:00
Tyler Goodlet 2a158aea2c Rework `_FeedsBus` subscriptions mgmt using `set`
Allows using `set` ops for subscription management and guarantees no
duplicates per `brokerd` actor. New API is simpler for dynamic
pause/resume changes per `Feed`:
- `_FeedsBus.add_subs()`, `.get_subs()`, `.remove_subs()` all accept multi-sub
  `set` inputs.
- `Feed.pause()` / `.resume()` encapsulates management of *only* sending
  a msg on each unique underlying IPC msg stream.

Use new api in sampler task.
2023-01-10 11:09:19 -05:00
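Roughly what the `set`-based registry might look like; a sketch only since
the real `_FeedsBus` tracks richer per-subscriber state (streams, throttle
rates, etc.):

```python
class _FeedsBusSketch:
    def __init__(self) -> None:
        self._subscriptions: dict[str, set] = {}

    def add_subs(self, key: str, subs: set) -> set:
        # set-union guarantees no duplicate subs per key/brokerd
        existing = self._subscriptions.setdefault(key, set())
        existing |= subs
        return existing

    def get_subs(self, key: str) -> set:
        return self._subscriptions.setdefault(key, set())

    def remove_subs(self, key: str, subs: set) -> set:
        existing = self._subscriptions.setdefault(key, set())
        existing -= subs
        return existing
```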
Tyler Goodlet 88870fdda7 Set `brokers: list[st]` from mods when not provided.. 2023-01-10 11:09:19 -05:00
Tyler Goodlet 326f153a47 Catch overruns on throttled feed subs too
Previously we would only detect overruns and drop subscriptions on
non-throttled feed subs, however you can get the same issue with
a wrapping throttler task:
- the intermediate mem chan can be blocked either by the throttler task
  being too slow, in which case we still want to warn about it
- the stream's IPC channel actually breaks and we still want to drop
  the connection and subscription so it doesn't become a source of
  stale backpressure.
2023-01-10 11:09:19 -05:00
Tyler Goodlet f5cd63ad35 Ensure correct stream is set on each `Flume`
Set each quote-stream by matching the provider for each `Flume` and thus
results in some flumes mapping to the same (multiplexed) stream.
Monkey-patch the equivalent `tractor.MsgStream._ctx: tractor.Context` on
each broadcast-receiver subscription to allow use by feed bus methods as
well as other internals which need to reference IPC channel/portal info.

Start a `_FeedsBus` subscription management API:
- add `.get_subs()` which returns the list of tuples registered for the
  given key (normally the fqsn).
- add `.remove_sub()` which allows removing by key and tuple value and
  provides encapsulation for sampler task(s) which deal with dropped
  connections/subscribers.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 1e96ca32df Move `maybe_open_feed()` above for readability 2023-01-10 11:09:19 -05:00
Tyler Goodlet c088963cf2 Always touch config file dir if dne 2023-01-10 11:09:19 -05:00
Tyler Goodlet 79fcbcc281 Add an sdist job to CI 2023-01-10 11:09:19 -05:00
Tyler Goodlet ddbba76095 Use (a new) `piker_pin` branch in `tractor` (again) 2023-01-10 11:09:19 -05:00
Tyler Goodlet 0a959c1c74 Not all accounts will have API trade transactions this session.. 2023-01-10 11:09:19 -05:00
Tyler Goodlet e348968113 Add multi-broker streaming test using both `binance` and `kraken` 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7bbe86d6fb Unpack broker mod and portal from fqsn for brokerd-trade-dialogs 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7b9db86753 Multi-`broker` quotes with `Feed.open_multi_stream()`
Adds provider-list-filtered (quote) stream multiplexing support allowing
for merged real-time `tractor.MsgStream`s using an `@acm` interface.
Behind the scenes we are just doing a classic multi-task push to common
mem chan approach.

Details to make it work on `Feed`:
- add `Feed.mods: dict[str, ModuleType]` and
  `Feed.portals: dict[ModuleType, tractor.Portal]` which are both populated
  during init in `open_feed()`
- drop `Feed.portal` and `Feed.name`

Also fix a final lingering tsdb history loading loop termination bug.
2023-01-10 11:09:19 -05:00
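A bare-bones sketch of the "multi-task push to common mem chan" pattern
mentioned above, using plain `trio` primitives; the stream objects and
dispatch step are stand-ins:

```python
import trio

async def merged_quote_loop(quote_streams: list) -> None:
    '''
    One relay task per underlying (per-provider) quote stream, all
    pushing onto a single memory channel which a lone consumer loop
    drains.

    '''
    send_chan, recv_chan = trio.open_memory_channel(max_buffer_size=100)

    async def relay(stream) -> None:
        async for quote in stream:
            await send_chan.send(quote)

    async with trio.open_nursery() as n:
        for stream in quote_streams:
            n.start_soon(relay, stream)

        # merged, multiplexed consumption; in this sketch the loop
        # runs until the nursery is cancelled from outside.
        async for quote in recv_chan:
            ...  # dispatch to the per-symbol handler set
```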
Tyler Goodlet 20a396270e `Storage.read_ohlcv()` now returns a `numpy` array 2023-01-10 11:09:19 -05:00
Tyler Goodlet 81516c5204 Finally fix tsdb -> shm backfill loading
A slight facepalm but, the main issue was a simple indexing logic error:
we need to slice with `tsdb_history[-shm._first.value:]` to push most
recent history not oldest.. This allows cleanup of tsdb backfill loop as
well.

Further, greatly simplify `diff_history()` time slicing by using the
classic `numpy` conditional slice on the epoch field.
2023-01-10 11:09:19 -05:00
Tyler Goodlet d6fb6fe3ae Just drop the pretty repr from our struct for now 2023-01-10 11:09:19 -05:00
Tyler Goodlet 8476d8d056 Fix partial-frame-missing backfill logic
This had a bug prior where the end of a frame (a partial) wasn't being
sliced correctly and we'd get odd gaps showing up in the data backfilled from
`brokerd` vs. the tsdb end index. Repair this by doing timeframe aware index
diffing in `diff_history()` which seems to resolve it. Also, use the
frame-result's `end_dt: datetime` for the loop exit condition.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 36868bb86e Add `kraken` test, ensure single broker-provider for now 2023-01-10 11:09:19 -05:00
Tyler Goodlet 29b6b3e54f Port `storesh` cli-cmd machinery to `Flume` apis 2023-01-10 11:09:19 -05:00
Tyler Goodlet 8a01c9e42b Fix broker-tail stripping using `str.removesuffix()` 2023-01-10 11:09:19 -05:00
Tyler Goodlet 2c4daf08e0 Adjust to per-fqsn-oriented `Flume` lookups throughout 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7daab6329d Make `Symbol` derive from internal `.types.Struct` 2023-01-10 11:09:19 -05:00
Tyler Goodlet bb6452b969 Further feed syncing fixes wrt to `Flumes`
Sync per-symbol sampler loop start to subscription registers such that
the loop can't start until the consumer's stream subscription is added;
the task-sync uses a `trio.Event`. This patch also drops a ton of
commented cruft.

Further adjustments needed to get parity with prior functionality:
- pass init msg 'symbol_info' field to the `Symbol.broker_info: dict`.
- ensure the `_FeedsBus._subscriptions` table uses the broker specific
  symbol key (without brokername suffix) for lookup so that the sampler
  loop doesn't have to append the brokername as a suffix.
- ensure the `open_feed_bus()` flumes-table-msg returned sent by
  `tractor.Context.started()` uses the `.to_msg()` form of all flume
  structs.
- ensure `maybe_open_feed()` uses `tractor.MsgStream.subscribe()` on all
  `Flume.stream`s on cache hits using the
  `tractor.trionics.gather_contexts()` helper.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 25bfe6f035 Use new |-union style type annots in sampling routines 2023-01-10 11:09:19 -05:00
Tyler Goodlet 32b36aa042 Expect init startup quotes from each symbol 2023-01-10 11:09:19 -05:00
Tyler Goodlet e7de5404d3 Add `Symbol.fqsn: str` property 2023-01-10 11:09:19 -05:00
Tyler Goodlet 18dc8b08e4 First draft aggregate feedz support
Orient shm-flow-arrays around the new idea of a `Flume` which provides
access, mgmt and basic measure of real-time data flow sets (see water
flow management semantics).

- We discard the previous idea of a "init message" which contained all
  the shm attachment info and instead send a startup message full of
  `Flume.to_msg()`s which are symmetrically loaded on the caller actor
  side.

- Create data-flows "entries" for every passed in fqsn such that the consumer gets back
  streams and shm for each, now all wrapped in `Flume` types. For now we
  allocate `brokermod.stream_quotes()` tasks 1-to-1 for each fqsn
  (instead of expecting each backend to do multi-plexing, though we
  might want that eventually) as well as a `_FeedsBus._subscriber` entry
  for each. The pause/resume management loop is adjusted to match.
  Previously `Feed`s were  allocated 1-to-1 with each fqsn.

- Make `Feed` a `Struct` subtype instead of a `@dataclass` and move all
  flow specific attrs to the new `Flume`:
  - move `.index_stream()`, `.get_ds_info()` to `Flume`.
  - drop `.receive()`: each fqsn entry will now require knowledge of
    separate streams by feed users.
  - add multi-fqsn tables: `.flumes`, `.streams` which point to the
    appropriate per-symbol entries.

- Async load all `Flume`s from all contexts and all quote streams using
  `tractor.trionics.gather_contexts()` on the client `open_feed()` side.

- Update feeds test to include streaming 2 symbols on the same (binance)
  backend.
2023-01-10 11:09:18 -05:00
Tyler Goodlet 5bf3cb8e4b Just warn on `ib` symbol search lags 2023-01-10 11:09:18 -05:00
Tyler Goodlet c7d5db5f90 Start data feed layer test suite
Initial test that starts a `binance` feed and reads the quote messages
alongside shm buffers for 1s and 1m OHLC; just prints to console for
now.

Template out parametrization for multi-symbol quote-multiplexed feeds
which are coming soon B)
2023-01-10 11:09:18 -05:00
Tyler Goodlet 1bf1965a8b Drop `tractor.log` level override fixture 2023-01-10 11:09:18 -05:00
Tyler Goodlet 051a8729b6 EMS: expect fqsn key in `Feed.symbols` 2023-01-10 11:09:18 -05:00
Tyler Goodlet 8e85ed92c8 Use new `GodWidget.load_symbols()` from search 2023-01-10 11:09:18 -05:00
Tyler Goodlet 2a9042b1b1 Make all UI entrypoints accept an fqsn `list`
This is to prep for multi-symbol feeds and charts so we accept
a sequence of fqsns to the top level entrypoints as well as the
`.data.feed.open_feed()` API (though we're not actually supporting true
multiplexed feeds nor shm lookups per fqsn yet).
2023-01-10 11:09:18 -05:00
Tyler Goodlet 344a634cb6 Always set fqsn in `Feed.symbols: dict` 2023-01-10 11:09:18 -05:00
Tyler Goodlet 508de6182a Drop duplicate live gateway from compose file for now 2023-01-10 11:09:18 -05:00
Tyler Goodlet 40000345a1 Only log pos size errors for `ib` 2023-01-10 11:09:18 -05:00
goodboy 220d38b4a9
Merge pull request #439 from pikers/binance_syminfo_fix
Update Binance exchange information
2023-01-10 11:08:19 -05:00
Esmeralda Gallardo 888438ca25
Add two attributes to Pair class to match Binance exchange information update 2023-01-10 10:18:40 -03:00
goodboy d84bcf77c0
Merge pull request #438 from pikers/msgspec_ordering
Msgspec field ordering
2023-01-09 19:01:12 -05:00
Guillermo Rodriguez 0474d66531
Switch msgspec struct ordering to always have required fields first and optionals last 2023-01-09 18:43:50 -03:00
algorandpa f218b804b4
Merge pull request #433 from pikers/add_config_dir_on_daemon_startup
Add config dir on daemon startup
2022-12-22 19:40:47 +00:00
Guillermo Rodriguez 7b14f498a8
Merge pull request #409 from esmegl/json_rpc_req
Added support for JSONRPC requests coming from the server side
2022-12-21 15:14:12 -03:00
Esmeralda Gallardo 18e4352faf
Deleted unused timeout logic 2022-12-19 14:55:06 -03:00
Esmeralda Gallardo a6e921548b
Modified recv_task(): added functionality to restart ws after timeout, modified match msg and added new case to match in case of receiving an error. 2022-12-19 13:48:18 -03:00
Esmeralda Gallardo 3f5dec82ed
Replaced try/except block in recv_task() by match msg, and added new changes to description comment 2022-12-19 13:48:17 -03:00
Esmeralda Gallardo db0b59abaa
Added support for JSONRPC requests coming from the server side 2022-12-19 13:48:10 -03:00
algorandpa f5bcd1d91c remove binance additions 2022-12-17 21:53:57 +00:00
algorandpa db11c3c0f8 add config dir on pikerd startup 2022-12-17 21:51:49 +00:00
Tyler Goodlet df6071ae9e `binance`: more fields.. `SelfTradePreventMode`.. 2022-12-15 22:23:56 +00:00
goodboy cc1694760c
Merge pull request #432 from pikers/kraken_limits_fields
Kraken limits fields
2022-12-10 16:12:54 -05:00
goodboy 4d8b22dd8f
Merge pull request #431 from pikers/cz_post_ftx
Cz post ftx
2022-12-10 16:08:39 -05:00
Tyler Goodlet fd296a557e Add position limit fields 2022-12-10 16:07:03 -05:00
Tyler Goodlet 0de2f863bd `kraken`: Explicitly report missing `Pair` fields in error 2022-12-10 16:07:03 -05:00
Tyler Goodlet de93da202b Reconnect on ping-pong errors too i guess? 2022-12-10 16:05:36 -05:00
Tyler Goodlet 5c459f21be Honestly, f$@%! you cz... 2022-12-10 16:05:36 -05:00
goodboy 5915cf3acf
Merge pull request #430 from pikers/catch_notification_daemon_error
Catch notification daemon error
2022-12-04 17:06:12 -05:00
algorandpa 997bf31bd4 remove spacing again 2022-12-04 21:19:34 +00:00
algorandpa f3427bb13b restore spacing 2022-12-04 21:15:41 +00:00
algorandpa 6fa266e3e0 wrap notification process in try catch and capture stderr data 2022-12-04 21:13:33 +00:00
Guillermo Rodriguez 019a6432fb
Merge pull request #421 from pikers/ib_contract_updates
`ib` futes contract consolidation fixes
2022-11-17 18:38:22 -03:00
goodboy 209e1085ae
Merge pull request #422 from pikers/kraken_pair_status
Add `.status: str` to kraken pairs..
2022-11-17 15:22:17 -05:00
Tyler Goodlet 0ef75e6aa6 Add `.status: str` to kraken pairs.. 2022-11-17 15:18:12 -05:00
Tyler Goodlet 243d0329f6 `Client.get_head_time()` seems unsupported for forex? 2022-11-17 15:12:10 -05:00
Tyler Goodlet a0ce9ecc0d Only append con suffix if not empty 2022-11-17 15:12:10 -05:00
Tyler Goodlet af9c30c3f5 Handle futes venue remaps as per oct-nov 2022 rollout 2022-11-17 15:12:10 -05:00
Zoltan ebbfa47baf
Merge pull request #419 from pikers/pre_multifeed_hotfix
HOTFIX: Fix `_main()` arg back to `sym: str`
2022-11-12 17:34:25 -05:00
Tyler Goodlet 02fbc0a0ed Fix `_main()` arg back to `sym: str`
This slipped in early from #414 before merge and was likely due to
cherry-picking from #417.
2022-11-12 16:26:21 -05:00
goodboy 4729e4c6bc
Merge pull request #418 from pikers/kraken_pair_updates
Kraken pair updates
2022-11-10 17:31:39 -05:00
goodboy a44b8e3e22
Merge pull request #417 from pikers/daemon_sockaddr_config
Daemon sockaddr config
2022-11-10 17:31:24 -05:00
goodboy 8a89303cb3
Merge pull request #415 from pikers/no_signal_pi_overlays
`Signal`-less pi overlays
2022-11-10 17:31:04 -05:00
Tyler Goodlet e547b307f6 Deflect 1s OHLC loading for `kraken` 2022-11-10 13:16:21 -05:00
Tyler Goodlet 72ec9b1e10 Add `Pair.tick_size` to `kraken` schema 2022-11-10 13:16:21 -05:00
Tyler Goodlet 40c70ae6d8 Drop unnecessary services var asserts? 2022-11-10 13:06:31 -05:00
Tyler Goodlet d3fefdeaff Expose registry sockaddr in `open_piker_runtime()` 2022-11-10 13:06:31 -05:00
Tyler Goodlet 8be005212f Expose `.open_feed()` and `open_piker_runtime()` eps at top level 2022-11-10 13:06:31 -05:00
Tyler Goodlet 5a2795e76b Passthrough registry sockaddr from chart cmd to daemon 2022-11-10 13:06:31 -05:00
Tyler Goodlet a987f0ab81 Add registry socket cli flags to all client cmds
Allows starting UI apps and passing the `pikerd` registry socket-addr
args via `--host` or `--port` such that a separate actor tree can be
started by selecting an unused port. This is handy when hacking new
features but while also wishing to run a more stable version of the code
for trading on the same host.
2022-11-10 13:06:31 -05:00
Tyler Goodlet d99b40317d Add a `pikerd -p <port_number>` flag 2022-11-10 13:06:31 -05:00
Tyler Goodlet 9ae519f6fa Re-work chart-overlay event broadcasting
Drop all attempts at rewiring `ViewBox` signals, monkey-patching
relayee handlers, and generally modifying event source public
attributes. Instead take a much simpler approach where the event source
graphics object simply has its handler dynamically overridden by
a broadcaster function which relays to all consumers using a Python
loop.

The benefits of this much simplified approach include:
- avoiding the tedious and often complex (re)connection of signals between
  the source plot and the overlayed consumers.
- requiring zero modification of the public interface of any of the
  publisher or consumer `ViewBox`s, no decoration, extra signal
  definitions (eg. previous `mouseDragEventRelay` or the like).
- only a single dynamic method override on the event source graphics object
  (`ViewBox`) which does the broadcasting work and requires no
  modification to handler implementations.

Detailed `.ui._overlay` changes:
- drop `mk_relay_signal()`, `enable_relays()` which removes signal/slot
  hacking methodology.
- drop unused `ComposedGridLayout.grid` and `.reverse`, change some
  method names: `.insert()` -> `.insert_plotitem()`, `append()` ->
  `.append_plotitem()`.
- in `PlotOverlay`, again drop all signal/slot rewiring in
  `.add_plotitem()` and instead add our new closure based python-loop in
  `broadcast()` routine which is used to override the event-source
  object's handler.
- comment out all the auxiliary/want-to-have event source selection
  methods for now.
2022-11-10 11:45:49 -05:00
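A minimal, framework-agnostic sketch of the "closure based python-loop"
broadcast idea; the handler name and call signature are placeholders, not
the real `.ui._overlay` code:

```python
def install_broadcaster(
    source_vb,            # the event-source view box / graphics object
    consumers: list,      # overlaid view boxes to relay events to
    handler_name: str,    # eg. 'wheelEvent', 'mouseDragEvent', ...
) -> None:
    '''
    Dynamically override a single event handler on the source object
    so the same event is relayed to every overlaid consumer via a
    plain python loop - no signal/slot rewiring required.

    '''
    original = getattr(source_vb, handler_name)

    def broadcast(ev, *args, **kwargs):
        # handle locally first, then relay to all consumers
        original(ev, *args, **kwargs)
        for vb in consumers:
            getattr(vb, handler_name)(ev, *args, **kwargs)

    # single per-instance method override on the event source
    setattr(source_vb, handler_name, broadcast)
```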
Tyler Goodlet 8f3fe8e542 Back link auto-y-ranging to ohlc chart from vlm overlay fsp 2022-11-10 11:45:49 -05:00
Tyler Goodlet 490d85aba5 Drop fast chart buffer to 2 days worth 2022-11-10 11:45:49 -05:00
goodboy ba2e1e04cd
Merge pull request #413 from pikers/pg_exts_fork
Pg exts fork
2022-11-08 12:47:32 -05:00
Tyler Goodlet 5d4929db9c Pin to our `pyqtgraph` fork's master branch 2022-10-31 15:00:38 -04:00
Tyler Goodlet c41400ae18 Use `.setRect()`; not sure how this was ever working? 2022-10-31 14:58:35 -04:00
Tyler Goodlet e71bd2cb1e Move axis-tick-values lru caching into our existing `Axis` 2022-10-31 14:23:29 -04:00
Tyler Goodlet be24473fb4 Adjust remaining chart internals to pg extensions
Mainly this involves instantiating our overriden `PlotItem` in a few
places and tweaking type annots. A further detail is that inside
the fsp sub-chart creation code we hide some axes for overlays in the
flows subchart; these were previously somehow hidden implicitly?
2022-10-31 14:13:02 -04:00
Tyler Goodlet b524ea5c22 Extract and fork `pyqtgraph` upstream submissions
Fork out our patch set submitted to upstream in multiple PRs (since they
aren't moving and/or aren't a priority to core) which can be seen in
full from the following diff:
https://github.com/pyqtgraph/pyqtgraph/compare/master...pikers:pyqtgraph:graphics_pin

Move these type extensions into the internal `.ui._pg_overrides` module.

The changes are related to both `pyqtgraph.PlotItem` and `.AxisItem` and
were driven for our need for multi-view overlays (overlaid charts with
optionally synced axis and interaction controls) as documented in the PR
to upstream: https://github.com/pyqtgraph/pyqtgraph/pull/2162

More specifically,
- wrt to `AxisItem` we added lru caching of tick values as per:
  https://github.com/pyqtgraph/pyqtgraph/pull/2160.
- wrt to `PlotItem` we adjusted some of the axis management code, namely
  adding a standalone `.removeAxis()` and modifying the `.setAxisItems()` logic
  to use it in: https://github.com/pyqtgraph/pyqtgraph/pull/2162
  as well as some tweaks to `.updateGrid()` to loop through all possible
  axes when grid setting.
2022-10-31 09:37:32 -04:00
Tyler Goodlet d46945cb09 Move profiler imports to internal version 2022-10-31 09:26:36 -04:00
Tyler Goodlet 1d4fc6f327 Fork our latency tune-able profiler from `pyqtgraph.debug`
Details of the original patch to upstream are in:
https://github.com/pyqtgraph/pyqtgraph/pull/2281

Instead of trying to land this we've opted to just copy out that version
of `.debug.Profiler` into our own internals (luckily the class is
entirely self-contained) until such a time when we choose to find
a better dependency as per https://github.com/pikers/piker/issues/337
2022-10-30 21:11:27 -04:00
Tyler Goodlet 5976acbe76 `PyQt5` + `pyqtgraph` import updates (`QtGui -> `QtWidgets`) 2022-10-30 21:11:14 -04:00
goodboy 11ecf9cb09
Merge pull request #401 from pikers/ib_1m_hist
Ib 1m hist
2022-10-29 13:14:53 -04:00
goodboy 2dac531729
Merge pull request #410 from pikers/even_moar_kraken_order_fixes
Even moar `kraken` order fixes
2022-10-28 19:52:20 -04:00
Tyler Goodlet 1fadf58ab7 Add todo for order duration setting `goodTillDuration` 2022-10-28 17:50:09 -04:00
Tyler Goodlet ceca0d9fb7 Order ledger entries by processed datetime
To make it easier to manually read/decipher long ledger files this adds
`dict` sorting based on record-type-specific (api vs. flex report)
datetime processing prior to ledger file write.

- break up parsers into separate routines for flex and api record
  processing.
- add `parse_flex_dt()` for special handling of the weird semicolon
  stamps in flex reports.
2022-10-28 16:17:27 -04:00
Tyler Goodlet df16726211 Just wipe wrong timeframe filled tsdb colseries for now 2022-10-28 16:17:14 -04:00
Tyler Goodlet fb4f1732b6 Drop key error again 2022-10-28 16:17:14 -04:00
Tyler Goodlet d5b357b69a Raise `DataUnavailable` on >= 6 no data error events 2022-10-28 16:17:14 -04:00
Tyler Goodlet 610fb5f7c6 Drop `NoData` handler, just let it bubble 2022-10-28 16:17:14 -04:00
Tyler Goodlet 2b231ba631 Lul, fix timeframe key when writing history
There never was any underlying db bug, it was a hardcoded timeframe in
the column series write key.. Now we always assert a matching timeframe
in results.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 286228c290 Only wait on backfill if provider supports timeframe 2022-10-28 16:17:14 -04:00
Tyler Goodlet a1a24da7b6 Make `binance` reject 1s OHLC history requests 2022-10-28 16:17:14 -04:00
Tyler Goodlet 553d0557b6 Raise `DataUnavailable` when a contract's 'earliest time' is hit 2022-10-28 16:17:14 -04:00
Tyler Goodlet 2f7b272d8c Make `ib` client's `.get_head_time()` (only) expect an fqsn 2022-10-28 16:17:14 -04:00
Tyler Goodlet dc1edeecda Do tsdb backloading to shm concurrently
Not only improves startup latency but also avoids a bug where the rt
buffer was being prepended with tsdb history *before* the backfilling of
recent data from the backend was complete, resulting in out-of-order
frames in shm.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 4ca7817735 Use feed-shm offsets in fill-arrow indexing arithmetic 2022-10-28 16:17:14 -04:00
Tyler Goodlet 5b63585398 Pack multi-chart region linking into helper
Factor the multi-sample-rate region UI connecting into a new helper
`link_views_with_region()` which reads in the shm buffer offsets from
the `Feed` and appropriately connects the fast and slow chart handlers
for the linear region graphics. Add detailed comments writeup for the
inter-sampling transform algebra.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 0000d9a314 Handle backends with no 1s OHLC history
If a history manager raises a `DataUnavailable` just assume the sample
rate isn't supported and that no shm prepends will be done. Further seed
the shm array in such cases as before from the 1m history's last datum.

Also, fix tsdb -> shm back-loading, cancelling tsdb queries when either
no array-data is returned or a frame is delivered whose start time is
no less than the earliest one previously retrieved. Use strict timeframes
for every `Storage` API call.
2022-10-28 16:17:14 -04:00
Tyler Goodlet f7ec66362e Only get dbus user on sudo-user-present 2022-10-28 16:17:14 -04:00
Tyler Goodlet b7ef0596b9 Drop remaining timeframe scanning from `.read_ohlcv()` 2022-10-28 16:17:14 -04:00
Tyler Goodlet 143e86a80c Handle super annoying mkts query bug..
Turns out querying for a high freq timeframe (like 1sec) will still
return a lower freq timeframe (like 1Min) SMH, and we have no idea if it's
the server or the client's fault, so we have to explicitly check the sample
step size and discard lower freq series-results. Do this inside
`Storage.read_ohlcv()` and return an empty `dict` when the wrong time
step is detected from the query result.

Further enforcements,
- both `.load()` and `read_ohlcv()` now require an explicit `timeframe:
  int` input to guarantee the time step of the output array.
- drop all calls to `.load()` with non-timeframe-specific input.
2022-10-28 16:17:14 -04:00
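A sketch of the sample-step guard described above (field/variable names are
illustrative, not the actual `Storage.read_ohlcv()` internals):

    import numpy as np


    def frame_matches_timeframe(
        ohlcv: np.ndarray,
        timeframe: int,  # requested sample period in seconds
        time_field: str = 'Epoch',  # assumed epoch-time column name
    ) -> bool:
        '''
        Only accept a query result whose actual time step matches the
        requested period; eg. asking for 1s but getting 1m back should
        be treated as "no data".

        '''
        if ohlcv.size < 2:
            return False

        step = ohlcv[time_field][1] - ohlcv[time_field][0]
        return step == timeframe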
Tyler Goodlet 956c7d3435 Add concurrent multi-time-frame history loading
Our default sample periods are 60s (1m) for the history chart and 1s for
the fast chart. This patch adds concurrent loading of both (or more)
different sample period data sets using the existing loading code but
with new support for looping through a passed "timeframe" table which
points to each shm instance.

More detailed adjustments include:
- breaking the "basic" and tsdb loading into 2 new funcs:
  `basic_backfill()` and `tsdb_backfill()` the latter of which is run
  when the tsdb daemon is discovered.
- adjust the fast shm buffer to offset with one day's worth of 1s so
  that only up to a day is backfilled as history in the fast chart.
- adjust bus task starting in `manage_history()` to deliver back the
  offset indices for both fast and slow shms and set them on the
  `Feed` object as `.izero_hist/rt: int` values:
  - allows the chart-UI linked view region handlers to use the offsets
    in the view-linking-transform math to index-align the history and
    fast chart.
2022-10-28 16:17:14 -04:00
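In broad strokes the concurrent loading pattern described above is just a
task-per-timeframe spawn (a `trio` sketch; the real
`basic_backfill()`/`tsdb_backfill()` split carries a lot more machinery):

    import trio


    async def backfill_timeframe(timeframe: int, shm) -> None:
        # request history frames for this sample period and prepend
        # them into the timeframe-specific shm buffer..
        await trio.sleep(0)


    async def backfill_all(timeframes: dict[int, object]) -> None:
        # eg. {60: hist_shm, 1: rt_shm}
        async with trio.open_nursery() as n:
            for timeframe, shm in timeframes.items():
                n.start_soon(backfill_timeframe, timeframe, shm)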
Tyler Goodlet 330d16262e Add data-reset-task global state var
Allows keeping mutex state around data reset requests which (if more
than one are sent) can cause a throttling condition where ib's servers
will get slower and slower to conduct a reconnect. With this you can
have multiple ongoing contract requests without hitting that issue and
we can go back to having a nice 3s timeout on the history queries before
activating the hack.
2022-10-28 16:17:14 -04:00
Tyler Goodlet c7f57b940c Add back adhoc symbol lookup support, some exchs info is off 2022-10-28 16:17:14 -04:00
Tyler Goodlet 27bd3c07af Comment format tweak 2022-10-28 16:17:14 -04:00
Tyler Goodlet 55dc27a197 Subtract duration instead of passing to `.subtract()` (facepalm) 2022-10-28 16:17:14 -04:00
Tyler Goodlet a11f20fac2 Fix `piker services`; `tractor.run()` is done.. 2022-10-28 16:17:14 -04:00
Tyler Goodlet daebb78755 Re-request quote feed on data reset events
When a network outage or data feed connection is reset often the
`ib_insync` task will hang until some kind of (internal?) timeout takes
place or, in some (worst) cases it never re-establishes (the event
stream) and thus the backend needs to restart or the live feed will
never resume..

In order to avoid this issue once and for all this patch implements an
additional (extremely simple) task that is started with the real-time
feed and simply waits for any market data reset events; when one is
detected, it restarts the `open_aio_quote_stream()` call in a loop using
a surrounding cancel scope.

Been meaning to implement this for ages and it's finally working!
2022-10-28 16:17:14 -04:00
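The watcher-task pattern reads roughly like this (a sketch; the reset-event
plumbing and quote processing are stand-ins here, only the
`open_aio_quote_stream()` name comes from the actual backend):

    from contextlib import asynccontextmanager as acm

    import trio


    @acm
    async def open_aio_quote_stream(symbol: str):
        # stand-in for the real ib quote stream opener
        async def quotes():
            while True:
                await trio.sleep(1)
                yield {'symbol': symbol, 'last': 0.0}

        yield quotes()


    async def requote_on_data_reset(
        symbol: str,
        resets: trio.MemoryReceiveChannel,  # data-reset event msgs
    ) -> None:
        while True:
            with trio.CancelScope() as cs:
                async with (
                    trio.open_nursery() as n,
                    open_aio_quote_stream(symbol) as stream,
                ):
                    async def cancel_on_reset():
                        await resets.receive()
                        # tear down and re-open the quote stream
                        cs.cancel()

                    n.start_soon(cancel_on_reset)
                    async for quote in stream:
                        ...  # relay to the sampler/feed layer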
Tyler Goodlet 90a395a069 Support no-disconnect on `open_aio_clients()` exit
Allows for easier restarts of certain `trio` side tasks without killing
the `asyncio`-side clients; support via flag.

Also fix a bug in `Client.bars()`: we need to return the duration on the
empty bars case..
2022-10-28 16:17:14 -04:00
Tyler Goodlet 23d0353934 Drop duplicate frame request
Must have gotten left in during refactor from the `trimeter` version?
Drop down to 6 years for 1m sampling.
2022-10-28 16:17:14 -04:00
Tyler Goodlet ede67ed184 Return history-frame duration from `.bars()`
This allows the history manager to know the decrement size for
`end_dt: datetime` on the next query if a no-data / gap case was
encountered; subtract this in `get_bars()` in such cases. Define the
expected `pendulum.Duration`s in the `.api._samplings` table.

Also add a bit of query latency profiling that we may use later to more
dynamically determine timeout driven data feed resets. Factor the `162`
error cases into a common exception handler block.
2022-10-28 16:17:14 -04:00
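The sampling table amounts to a small mapping from period-in-seconds to the
expected per-request frame duration, something like the following (values are
illustrative, not the exact `.api._samplings` contents):

    import pendulum

    # OHLC sample period (seconds) -> (ib bar size, frame duration)
    _samplings: dict[int, tuple[str, pendulum.Duration]] = {
        1: ('1 secs', pendulum.duration(seconds=2000)),
        60: ('1 min', pendulum.duration(days=1)),
    }

    # on a no-data/gap result the next query's `end_dt` can simply be
    # decremented by the frame duration for the requested timeframe:
    # end_dt -= _samplings[timeframe][1]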
Tyler Goodlet 811d21e111 Explicit fast chart naming, auto-yrange the fast chart on increment 2022-10-28 16:17:14 -04:00
Tyler Goodlet 54567d33da More correct no-data output handling
When we get a timeout or a `NoData` condition still return a tuple of
empty sequences instead of `None` from `Client.bars()`. Move the
sampling period-duration table to module level.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 61ca5f7e19 Drop `trimeter`-ized concurrent history querying
It doesn't seem to be any slower on our least throttled backend
(binance) and it removes a bunch of hard-to-get-correct frame
re-ordering logic that I'm not sure ever really fully worked XD

Commented some issues we still need to resolve as well.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 7396624be0 Rework history frame request concurrency
Manual tinker-testing demonstrated that triggering data resets
completely independent of the frame request gets more throughput and
further, that repeated requests (for the same frame after cancelling on
the `trio`-side) can yield duplicate frame responses. Re-work the
dual-task structure to instead have one task wait indefinitely on the
frame response (and thus not trigger duplicate frames) and the 2nd data
reset task poll for the first task to complete in a poll loop which
terminates when the frame arrives via an event.

Dirty deatz:
- make `get_bars()` take an optional timeout (which will eventually be
  dynamically passed from the history mgmt machinery) and move request
  logic inside a new `query()` closure meant to be spawned in a task
  which sets an event on frame arrival, add data reset poll loop in the
  main/parent task, deliver result on nursery completion.
- handle frame request cancelled event case without crash.
- on no-frame result (due to real history gap) hack in a 1 day decrement
  case which we need to eventually allow the caller to control likely
  based on measured frame rx latency.
- make `wait_on_data_reset()` a predicate without output indicating
  reset success as well as `trio.Nursery.start()` compat so that it can
  be started in a new task with the started values yielded being
  a cancel scope and completion event.
- drop the legacy `backfill_bars()`, no longer used.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 25b90afbdb Add `timeframe` input to `kraken` history api 2022-10-28 16:17:13 -04:00
Tyler Goodlet 72dfeb2b4e Pass back internal cancel scope from data reset task 2022-10-28 16:17:13 -04:00
Tyler Goodlet 6b34c9e866 Temporarily disable error on pos size mismatch 2022-10-28 16:17:13 -04:00
Tyler Goodlet e7ec01b8e6 Pass in default history time of 1 min
Adjust all history query machinery to pass a `timeframe: int` in seconds
and set default of 60 (aka 1m) such that history views from here forward
will be 1m sampled OHLCV. Further when the tsdb is detected as up load
a full 10 years of data if possible on the 1m - backends will eventually
get a config section (`brokers.toml`) that allows users to tune this.
2022-10-28 16:17:13 -04:00
Tyler Goodlet fce7055c62 Make `binance` history api accept a timeframe 2022-10-28 16:17:13 -04:00
Tyler Goodlet bf7d5e9a71 Make `marketstore` storage api timeframe aware
The `Store.load()`, `.read_ohlcv()` and `.write_ohlcv()` and
`.delete_ts()` now can take a `timeframe: Optional[float]` param which
is used to look up the appropriate sampling period table-key from
`marketstore`.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 2a866dde65 Make history routines `timeframe` aware
Allow data feed sub-system to specify the timeframe (aka OHLC sample
period) to the `open_history_client()` delivered history fetching API.
Factor the data keycombo hack into a new routine to be used also from
the history backfiller code when request latency increases; there is
a first draft at trying to use the feed reset to speed up 1m frame
throttling by timing out on the history frame response, but it needs
a lot of fine tuning.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 220981e718 Add 1m ohlc sample rate support to `Client.bars()`; frame query is 1 day 2022-10-28 16:17:13 -04:00
Tyler Goodlet 8537a4091b Use new `Status.cancel_called` in EMS msg loops 2022-10-28 16:16:45 -04:00
Tyler Goodlet 71a11a23bd Add `Status.cancel_called: bool`
This is a simpler (and oddly more `trio`-nic and/or SC) way to handle
the cancelled-before-acked race for order dialogs. Will allow keeping
the `.req` field as solely an `Order` msg.
2022-10-28 16:16:45 -04:00
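Conceptually the change is just one more field on the status struct, eg.
(a `msgspec` sketch; the real `Status` msg subclasses piker's own `Struct`
and carries more fields):

    from msgspec import Struct


    class Status(Struct):
        resp: str  # eg. 'pending', 'open', 'canceled', ..
        oid: str   # order dialog id

        # set when a client cancel request arrives before the brokerd
        # ack so the relay loop can cancel immediately on ack arrival
        # without stuffing a `BrokerdCancel` into `.req`.
        cancel_called: bool = False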
Tyler Goodlet fa368b1263 'Just getitem-access the 'action' from req msg' 2022-10-28 16:16:45 -04:00
Tyler Goodlet e6dd1458f8 `kraken`: the apiflows chain map needs a `dict` 2022-10-28 16:16:45 -04:00
Tyler Goodlet 9486d993ce Drop order mode settings change logmsgs to `.runtime` again 2022-10-28 16:16:45 -04:00
Tyler Goodlet 30994dac10 Better handle order-cancelled-but-not-yet-acked races
When the client is faster than a `brokerd` at submitting and cancelling
an order we run into the case where we need to specify that the EMS
cancels the order-flow as soon as the brokerd's ack arrives. Previously
we were stashing a `BrokerdCancel` msg as the `Status.req` msg (to be
both tested for as an "already cancelled" marker and sent immediately to
the broker on ack arrival), but for such
cases we can't use that msg to find the fqsn (since only the client side
msgs have it defined) which is required by the new
`Router.client_broadcast()`.

So, since `Status.req` is supposed to be a client-side flow msg anyway,
and we need the fqsn for client broadcasting, we change this `.req`
value to the client's submitted `Cancel` msg (thus rectifying the
missing `Router.client_broadcast()` fqsn input issue) and build the
`BrokerdCancel` request from that `Cancel` inline in the relay loop
from the `.req: Cancel` status msg lookup.

Further we allow `Cancel` msgs to define an `.account` and adjust the
order mode loop to expect `Cancel` source requests in cancelled status
updates.
2022-10-28 16:16:45 -04:00
Tyler Goodlet 8a61211c8c Handle brokerd errors even when no client-side-status found 2022-10-28 16:16:45 -04:00
Tyler Goodlet c43f7eb656 Fix missing `costmin: float` field in pair msgs 2022-10-28 16:16:45 -04:00
goodboy d05caa4b02
Merge pull request #411 from pikers/ci_fix_tractor_testing
Drop `tractor.testing` import in qt tests
2022-10-28 16:15:47 -04:00
Tyler Goodlet 63e9af002d Drop `tractor.testing` import in qt tests 2022-10-28 16:09:55 -04:00
goodboy 5144299f4f
Merge pull request #408 from pikers/offline_dark_clearing
Offline dark clearing
2022-10-10 09:25:59 -04:00
Tyler Goodlet c437f9370a Factor out all `maybe_open_context()` guff 2022-10-07 14:13:52 -04:00
Tyler Goodlet 94f81587ab Cache EMS trade relay tasks on feed fqsn
Except for paper accounts (in which case we need a trades dialog and
paper engine per symbol to enable simulated clearing) we can rely on the
instrument feed (symbol name) to be the caching key. Utilize
`tractor.trionics.maybe_open_context()` and the new key-as-callable
support in the paper case to ensure we have separate paper clearing
loops per symbol.

Requires https://github.com/goodboy/tractor/pull/329
2022-10-07 14:13:52 -04:00
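The caching pattern reads roughly as follows (a sketch: the relay opener is
a stand-in and the exact `key`-as-callable signature accepted by
`tractor.trionics.maybe_open_context()` is an assumption here):

    from contextlib import asynccontextmanager as acm

    from tractor.trionics import maybe_open_context


    @acm
    async def open_trades_relay(fqsn: str):
        # stand-in for the real brokerd trades-dialog setup
        yield f'relay for {fqsn}'


    @acm
    async def maybe_open_trades_relay(
        fqsn: str,
        broker: str,
    ):
        # paper accounts need a simulated clearing loop per symbol so
        # the cache key is derived from the inputs via a callable:
        def key(kwargs: dict) -> str:
            return (
                f'paper.{kwargs["fqsn"]}' if broker == 'paper'
                else kwargs['fqsn']
            )

        async with maybe_open_context(
            acm_func=open_trades_relay,
            kwargs={'fqsn': fqsn},
            key=key,
        ) as (cache_hit, relay):
            yield relay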
Tyler Goodlet 2bc25e3593 Repair already-open order relay, fix causality dilemma
With the refactor of the dark loop into a daemon task, already-open order
relaying from a `brokerd` was broken since no subscribed clients were
registered prior to the relay loop sending status msgs for such existing
live orders. Repair that by adding one more synchronization phase to the
`Router.open_trade_relays()` task: deliver a `client_ready: trio.Event`
which is set by the client task once the client stream has been
established and don't start the `brokerd` order dialog relay loop until
this event is ready.

Further implementation deats:
- factor the `brokerd` relay caching back into its own `@acm` method:
  `maybe_open_brokerd_dialog()` since we do want (but only this) stream
  singleton-cached per broker backend.
- spawn all relay tasks on every entry for the moment until we figure
  out what we're caching against (any pre-existing client, right? which
  would mean there's an entry in the `.subscribers` table?)
- rename `_DarkBook` -> `DarkBook` and `DarkBook.orders` -> `.triggers`
2022-10-07 14:13:52 -04:00
Tyler Goodlet 1d9ab7b0de More direct import 2022-10-07 14:13:52 -04:00
Tyler Goodlet 4c96a4878e Process unknown order mode msgs 2022-10-07 14:13:52 -04:00
Tyler Goodlet 8cd56cb6d3 Flip ems-side-client (`OrderBook`) to be a struct
`@dataclass` is so 2 years ago ;)

Also rename `.update()` -> `.send_update()` to be a bit more explicit
about actually sending an update msg.
2022-10-07 14:13:52 -04:00
Tyler Goodlet c246dcef6f Drop uuid from notify func inputs 2022-10-07 14:13:52 -04:00
Tyler Goodlet 26d6e10ad7 Parameterize duration, pprint msg 2022-10-07 14:13:52 -04:00
Tyler Goodlet 3924c66bd0 Move headless notifies into `.client_broadcast()` 2022-10-07 14:13:52 -04:00
Tyler Goodlet 2fbfe583dd Drop the `Router.clients: set`, `.subscribers` is enough 2022-10-07 14:13:52 -04:00
Tyler Goodlet 525f805cdb Port order mode to new notify routine 2022-10-07 14:13:52 -04:00
Tyler Goodlet b65c02336d Don't short circuit relay loop when headless
If no clients are connected we now process as normal and try to fire
a desktop notification on linux.
2022-10-07 14:13:52 -04:00
Tyler Goodlet d3abfce540 Start notify mod, linux only 2022-10-07 14:13:52 -04:00
Tyler Goodlet 49433ea87d Run dark-clear-loop in daemon task
This enables "headless" dark order matching and clearing where an `emsd`
daemon subactor can be left running with active dark (or other
algorithmic) orders which will still trigger despite there being no
attached-controlling ems-client.

Impl details:
- rename/add `Router.maybe_open_trade_relays()` which now does all work
  of starting up ems-side long living clearing and relay tasks and the
  associated data feed; make it a `Nursery.start()`-able task instead of
  an `@acm`.
- drop `open_brokerd_trades_dialog()` and move/factor contents into the
  above method.
- add support for a `router.client_broadcast('all', msg)` to wholesale
  fan out a msg to all clients.
2022-10-07 14:13:52 -04:00
goodboy 31b0d8cee8
Merge pull request #402 from pikers/multi_client_order_mgt
Multi client order mgt
2022-10-05 01:46:09 -04:00
Tyler Goodlet 35871d0213 Support line update from `Order` msg in `.on_submit()` 2022-10-05 01:41:18 -04:00
Tyler Goodlet 4877af9bc3 Add pub-sub broadcasting
Establishes a more formalized subscription based fan out pattern to ems
clients who subscribe for order flow for a particular symbol (the fqsn
is the default subscription key for now).

Make `Router.client_broadcast()` take a `sub_key: str` value which
determines the set of clients to forward a message to and drop all such
manually defined broadcast loops from task (func) code. Also add
`.get_subs()` which (hackily) allows getting the set of clients for
a given sub key where any stream that is detected as "closed" is
discarded in the output. Further we simplify to `Router.dialogs:
defaultdict[str, set[tractor.MsgStream]]` and `.subscriptions` as maps
to sets of streams for much easier broadcast management/logic using set
operations inside `.client_broadcast()`.
2022-10-05 01:41:18 -04:00
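Stripped way down, the subscription bookkeeping looks something like the
following (a sketch; the real `Router` obviously carries much more state and
the closed-stream detection is simplified here):

    from collections import defaultdict

    import trio
    import tractor


    class Router:
        '''
        Fan out order-flow msgs to all ems clients subscribed to
        a given sub key (the fqsn for now).

        '''
        def __init__(self) -> None:
            self.subscribers: defaultdict[
                str,
                set[tractor.MsgStream],
            ] = defaultdict(set)

        async def client_broadcast(
            self,
            sub_key: str,
            msg: dict,
        ) -> None:
            dead: set[tractor.MsgStream] = set()

            for stream in self.subscribers[sub_key]:
                try:
                    await stream.send(msg)
                except (
                    trio.BrokenResourceError,
                    trio.ClosedResourceError,
                ):
                    # client hung up; drop it from the sub set
                    dead.add(stream)

            self.subscribers[sub_key] -= dead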
Tyler Goodlet 909e068121 Support multi-client order-dialog management
This patch was originally to fix a bug where new clients who
re-connected to an `emsd` that was running a paper engine were not
getting updates from new fills and/or cancels. It turns out the solution
is more general: now, any client that creates an order dialog will be
subscribing to receive updates on the order flow set mapped for that
symbol/instrument as long as the client has registered for that
particular fqsn with the EMS. This means re-connecting clients as well
as "monitoring" clients can see the same orders, alerts, fills and
clears.

Impl details:
- change all var names spelled as `dialogues` -> `dialogs` to be
  murican.
- make `Router.dialogs: dict[str, defaultdict[str, list]]` so that each
  dialog id (oid) maps to a set of potential subscribing ems clients.
- add `Router.fqsn2dialogs: dict[str, list[str]]`, a map of fqsn entries to
  sets of oids.
- adjust all core task code to make appropriate lookups into these 2 new
  tables instead of being handed specific client streams as input.
- start the `translate_and_relay_brokerd_events` task as a daemon task
  that lives with the particular `TradesRelay` such that dialogs cleared
  while no client is connected are still processed.
- rename `TradesRelay.brokerd_dialogue` -> `.brokerd_stream`
- broadcast all status msgs to all subscribed clients in the relay loop.
- always de-reg each client stream from the `Router.dialogs` table on close.
2022-10-05 01:41:18 -04:00
Tyler Goodlet cf835b97ca Add some info logs around paper fills 2022-10-05 01:41:18 -04:00
Tyler Goodlet 30bce42c0b Don't spin paper clear loop on non-clearing ticks
Not sure what exactly happened but it seemed clears weren't working in
some cases without this; also, there's no point in spinning the simulated
clearing loop if we're handling a non-clearing tick type.
2022-10-05 01:41:18 -04:00
Tyler Goodlet 48ff4859e6 Update to new pair schema, adds `.cost_decimals` field 2022-10-05 01:41:18 -04:00
Tyler Goodlet 887583d27f Bleh, convert fill data to `float`s in kraken broker.. 2022-10-05 01:41:18 -04:00
Tyler Goodlet 45b97bf6c3 Make fill msg `.action: str` optional for `kraken` 2022-10-05 01:41:18 -04:00
Tyler Goodlet 91397b85a4 Fix missing f-str in ems msg sender err block 2022-10-05 01:41:18 -04:00
Tyler Goodlet 47f81b31af Kraken can cause status msg key error!? 2022-10-05 01:41:18 -04:00
goodboy 30c452cfd0
Merge pull request #404 from pikers/pin_tractor_main
Pin back to `tractor` master branch
2022-10-04 09:53:02 -04:00
Tyler Goodlet fda1c5b554 Pin back to `tractor` master branch 2022-10-03 13:48:58 -04:00
goodboy d6c9834a9a
Merge pull request #395 from pikers/history_view
History view
2022-09-23 20:28:02 -04:00
Tyler Goodlet 41b0c11aaa Hide existing level line markers on startup 2022-09-23 17:17:32 -04:00
Tyler Goodlet cc67d23eee Drop old marker drawing code from `LevelLine.paint()`
We haven't been using it for a while and the supposed (remembered)
latency issue on interaction doesn't seem to exist after applying the
cache mode. This allows dropping some internal state-logic and generally
simplifying the show-on-hover checks.

Further add `.show_markers()` and `.hide_markers()` as explicit methods
that can be called externally by UI business logic.
2022-09-23 17:17:32 -04:00
Tyler Goodlet 4818af1445 Add better doc string on marker factory 2022-09-21 15:43:35 -04:00
Tyler Goodlet 2cf1742999 Always apply at least the pos size as the limit 2022-09-21 15:43:35 -04:00
Tyler Goodlet 25ac6e6665 Soft pop lines, handle error-cancel races 2022-09-21 15:43:35 -04:00
Tyler Goodlet 90754f979b Tick the slow chart task on a 1sec index event 2022-09-19 17:39:26 -04:00
Tyler Goodlet c0d490ed63 Only show pos nav on non-zero size 2022-09-19 16:17:05 -04:00
Tyler Goodlet 7c6d12d982 Always set marker y-pos even if we're tracking its x-pos 2022-09-19 16:17:05 -04:00
Tyler Goodlet fd8c05e024 A lines entry should always exist or it's a bug 2022-09-19 16:17:05 -04:00
Tyler Goodlet 5d65c86c84 Don't delete pp lines or markers
Bit of a face palm but obviously `LevelLine.delete()` also removes any
`._marker` from the view which makes it disappear permanently when
moving from non-zero to zero to non-zero positions.. We don't really
need to delete the line since it can be re-used so just remove that
code.

Further this patch removes marker style setting logic from within the
`pp_line()` factory and instead expects the caller to set the correct
"direction" (for long / short) afterward.
2022-09-19 16:17:05 -04:00
Tyler Goodlet cf11e8d7d8 Update navs on all slow and fast charts, only default the fast chart on switch 2022-09-19 16:17:05 -04:00
Tyler Goodlet ed868f6246 Go back to origin slow chart split proportion 2022-09-19 16:17:05 -04:00
goodboy 5d371ad80e
Merge pull request #396 from pikers/tractor_core_port
Tractor core port
2022-09-16 18:09:33 -04:00
Tyler Goodlet 6897aed6b6 Don't call show on marker in `Nav.show()` 2022-09-14 16:02:07 -04:00
Tyler Goodlet a61a11f86b Add draft but commented "scale-to-fast-chart" logic 2022-09-14 10:11:43 -04:00
Tyler Goodlet 286f620f8e Use fqsn to key pnl tasks 2022-09-13 18:59:12 -04:00
Tyler Goodlet b7e60b9653 Hide labels, show markers for lines on slow chart 2022-09-13 18:31:21 -04:00
Tyler Goodlet df42e7acc4 Add `LevelLine.get_cursor()` to get any currently hovering mouse-cursor 2022-09-13 18:26:06 -04:00
Tyler Goodlet e492e9ca0c Fix pp arrow/label placement bugs
- Every time a symbol is switched on chart we need to wait until the
  search bar sidepane has been added beside the slow chart before
  determining the offset for the pp line's arrow/labels; trigger this in
  `GodWidget.load_symbol()` -> required monkeypatching on a
  `.mode: OrderMode` to the `.rt_linked` for now..
- Drop the search pane widget removal from the current linked chart,
  seems faster?
- On the slow chart override the `LevelMarker.scene_x()` callback to
  adjust for the case where no L1 labels are shown beside the y-axis.
2022-09-13 17:58:20 -04:00
Tyler Goodlet 44c6f6dfda Add level line flag to allow tracking its marker x-position 2022-09-13 17:43:04 -04:00
Tyler Goodlet ad2100fe3f Only don't pp arrow on startup 2022-09-13 16:21:49 -04:00
Tyler Goodlet ae64ac79a6 Doc str tweaks 2022-09-13 16:13:46 -04:00
Tyler Goodlet 20663dfa1c Add (more) order mode race guards to avoid crashes on "kitty-keys" 2022-09-12 20:25:15 -04:00
Tyler Goodlet 70f2241d22 Hide pp markers on startup 2022-09-12 20:25:15 -04:00
Tyler Goodlet b3fcc25e21 Add extra row count for header, drop prints 2022-09-12 20:25:15 -04:00
Tyler Goodlet 4f15ce346b Drop splitter resizes except for once at startup
Also adds a `GodWidget.resize_all()` helper method which resizes all
sub-widgets and charts to their default ratios and/or parent-widget
dependent defaults using the detected available space on screen. This is
a "default layout" config method that eventually we'll probably want
allow users to customize.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 445849337f Always resize to slow chart height, not just on changes 2022-09-12 20:25:15 -04:00
Tyler Goodlet 3fd7107e08 Scale view to measured results row count
In other words instead of some static view size previously determined by
the accompanying (slow) chart's height, (recursively) calculate the
number of displayed rows and compute the minimal height needed. This
still caps the view at the height of the chart such that the view will
switch to scroll bar mode when too many results are shown and can't all
be fit in the vertical space.

Deats:
- add a ``CompleterView.iter_df_rows()`` which recursively iterates all
  rows in depth-first order making it simple to compute the absolute
  number of result rows in view and thus the minimal number of pixels to
  show all results.
- always pass the height in the `.on_resize()` handler to ensure
  triggering the height logic when new results are generated in the
  search loop.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 73a02d54b7 Down size the slots bar by .9 2022-09-12 20:25:15 -04:00
Tyler Goodlet b734af6dd0 Only delete lines under cursor if not `None` 2022-09-12 20:25:15 -04:00
Tyler Goodlet f7c0ee930a Offset last (live) datum from y-axis by a 16th 2022-09-12 20:25:15 -04:00
Tyler Goodlet ead426abc4 More space to fast chart(s), less to slow chart 2022-09-12 20:25:15 -04:00
Tyler Goodlet bcd6bbb7ca Increase the `brokerd` mem-chan size
Intention is to hopefully minimize (as many) context switches when
processing (near-)HFT feeds - tho not sure if it's improving things that
much XD
2022-09-12 20:25:15 -04:00
Tyler Goodlet 80929d080f Add more detailed splitter of splitters comment 2022-09-12 20:25:15 -04:00
Tyler Goodlet eed47b3733 Add splitter move handler which calls search widget resizer method 2022-09-12 20:25:15 -04:00
Tyler Goodlet d5f0c59b57 Ignore resize events with the same height (for now) 2022-09-12 20:25:15 -04:00
Tyler Goodlet d11dc787a1 First working attempt of search results view scaling
Scales the "view" instance that holds search results to the size of the
accompanying "slow chart" for which the search pane is a "sidepane".
A lot of mucking about was required due to resizing of the view
seemingly feeding back into window resizing and further implementing the
sizing logic such that the parent `QSplitter` can be resized at the
user's whim as well.

Details,
- add a `CompleterView._init: bool` which is set once (and only once)
  after startup when the first display of the current symbol/feed is
  shown, allowing a single *width* padding to be applied once at startup
  to ensure we don't have an awkward line to the right of the longest
  result.
- in `.resize_to_results()` only apply a minimum height to the view
  using `.setMinimumHeight()` with a down-scaled (`0.91` for now) height
  value from input.
- re-implement `CompleterView.show_matches()` to accept an optional
  (width, height) tuple and, when not supplied, pull the slow chart's
  dimensions and pass those as input to the resize method.
- Make `SearchWidget` x dim sizing policy "fixed".
- register the `SearchWidget` for resize events with god.
- add `.show_only_cache_entries()` for easy results clearing.
- add `.space_dims()` to retrieve slow linked-charts dimensions.
- implement `SearchWidget.on_resize()` which is the caller of all the
  previously mentioned resizing routines.
- do resizing and cache entry showing on search loop startup and be sure
  to clear to cache when the user selects a symbol-feed with Enter.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 1e81feee46 Finally get chart startup view-state kinda correct
It ended up being what you'd expect: races on the UI accessing shm buffer
data during the whole "mega-async-startup-everything" phase XD

So we add the following list of ad-hoc startup steps:
- do `.default_view()` on the slow chart after the fast chart is mostly
  fully spawned with the intention being to capture the state where the
  historical buffer is mostly loaded before sizing the view to the
  graphical form of the data.
- resize slow chart sidepanes from the fast chart just before sleeping
  forever (and after order mode has booted).
2022-09-12 20:25:15 -04:00
Tyler Goodlet 40a9761943 Actually support resize events..
Turns out god widget resizes aren't triggered implicitly by window
resizes, so instead, hook into the window by moving what was our useless
method to that class. Further we explicitly define and declare that our
window has a `.godwidget: GodWidget` and set it up in the bootstrap
phase - in `run_qutractor()` during `trio` guest mode configuration.

Further deatz:
- retype the runtime/bootstrap routines to take a qwidget "type" not an
  instance, and drop the whole implicit `.main_widget` stuff.
- delegate into the `GodWidget.on_win_resize()` for any window resize
  which then triggers all the custom resize callbacks we already had in
  place.
- privatize `ChartnPane.sidepane` so that it can't be mutated willy
  nilly without calling `.set_sidepane()`.
- always adjust splitter sizes inside `LinkedSplits.add_plot()`.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 256bcf36d3 Drop use of `tractor.trionics.gather_contexts()` in `open_handlers()` 2022-09-12 20:25:15 -04:00
Tyler Goodlet 9944277096 Handle null lines that were removed, don't error on bad $size 2022-09-12 20:25:15 -04:00
Tyler Goodlet f9dc5637fa Use rt buffer for last price on nan in ems 2022-09-12 20:25:15 -04:00
Tyler Goodlet addedc20f1 WIP search pane always shown.. 2022-09-12 20:25:15 -04:00
Tyler Goodlet 1fa6e8d9ba Only show slow chart xlabel when focussed 2022-09-12 20:25:15 -04:00
Tyler Goodlet 2a06dc997f Use pixel caching on our level lines 2022-09-12 20:25:15 -04:00
Tyler Goodlet 6b93eedcda Port to new `._position.Nav` apis in order mode 2022-09-12 20:25:15 -04:00
Tyler Goodlet a786df65de Factor pos tracker UI element mgmt into new type
More or less moves all the UI related position "nav" logic and graphics
item management into a new `._position.Nav` composite type + api for
high level mgmt of position graphics indicators across multiple charts
(fast and slow).
2022-09-12 20:25:15 -04:00
Tyler Goodlet 8f2823d5f0 Stage line only on active cursor chart 2022-09-12 20:25:15 -04:00
Tyler Goodlet 58fe220fde Use ref annotations in position mod 2022-09-12 20:25:15 -04:00
Tyler Goodlet 161448c31a Support order staging from slow chart using `.get_cursor()` 2022-09-12 20:25:15 -04:00
Tyler Goodlet 1c685189d1 Change to using real type annots 2022-09-12 20:25:15 -04:00
Tyler Goodlet ceac3f2ee4 Adjust corresponding fast/slow chart line level on edits 2022-09-12 20:25:15 -04:00
Tyler Goodlet a07367fae2 Fix div-by-zero split sizing bug 2022-09-12 20:25:15 -04:00
Tyler Goodlet 006190d227 Add fill arrow-mark support to history view 2022-09-12 20:25:15 -04:00
Tyler Goodlet 412197019e Make ArrowEditor.add()` expect a `PlotItem` as input for render 2022-09-12 20:25:15 -04:00
Tyler Goodlet 271e378ce3 Add `GodWidget.iter_linked()` interator over linked split charts 2022-09-12 20:25:15 -04:00
Tyler Goodlet 8e07fda88f Expose multi-chart-lines support through to order mode api 2022-09-12 20:25:15 -04:00
Tyler Goodlet a4935b8fa8 Make line editor multi-line aware, drop `dataclass` for `Struct` 2022-09-12 20:25:15 -04:00
Tyler Goodlet 2b76baeb10 Pass god widget to line editor and order mode instances 2022-09-12 20:25:15 -04:00
Tyler Goodlet 2dfa8976a0 Make line editor expect god as input, use new .`get_cursor()` api 2022-09-12 20:25:15 -04:00
Tyler Goodlet d3402f715b Set godwidget active cursor from xhair callback 2022-09-12 20:25:15 -04:00
Tyler Goodlet f070f9a984 Add "active cursor" api to god widget 2022-09-12 20:25:15 -04:00
Tyler Goodlet 416270ee6c Refocus view on ctl-c from search 2022-09-12 20:25:15 -04:00
Tyler Goodlet 14bee778ec Hook up kb ctrls to hist chart, order mode not working yet 2022-09-12 20:25:15 -04:00
Tyler Goodlet 10c1944de5 Proper slow chart auto y-range support
The slow (history) chart requires its own y-range checker logic which
needs to be run in 2 cases:
- the last datum is in view and goes outside the previous mx/mn in view
- the chart is incremented a step

Since we need this duplicate logic this patch also factors the incremental
graphics update info "reading" into a new `DisplayState.incr_info()`
method that can be configured to a chart and input state and returns all
relevant "graphics update measure" in a tuple (for now).

Use this method throughout the rest of the display loop for both fast
and slow chart checks and in the `increment_history_view()` slow chart
task.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 7958d8ad4f Up sample info poll loop iters 2022-09-12 20:25:15 -04:00
Tyler Goodlet 50c5dc255c Update history view y-sticky with last clear price 2022-09-12 20:25:15 -04:00
Tyler Goodlet 31735f26d3 Poll for sampling info at startup, tolerate races
Use the new `Feed.get_ds_info()` method in a poll loop to definitively
get the inter-chart sampling info and avoid races with shm buffer
backfilling.

Also, factor the history increment closure-task into
`graphics_update_loop()` which will make it clearer how to factor
all the "should we update" logic into some `DisplayState` API.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 2ef6460853 Add `Feed.get_ds_info()` to detect/compute sample rates 2022-09-12 20:25:15 -04:00
Tyler Goodlet 5e98a30537 Add simplified history incrementer consumer task 2022-09-12 20:25:15 -04:00
Tyler Goodlet dd03ef42ac Return empty search result on connection failure
If you spawn a brokerd set and no `ib` data feed was started (via our
`.data.feed.Feed` api) then there will be no active client loaded and
thus it won't be connected. So in these cases just return nothing, and
I guess we'll figure out real connection failures later?
2022-09-12 20:25:15 -04:00
Tyler Goodlet 59884d251e Update history "last" bar, compute sampling ratio
Add an update call to the display loop to consistently update the last
datum in the history view chart. Compute the inter-chart sampling ratio
and use it to sync the linear region.
2022-09-12 20:25:15 -04:00
Tyler Goodlet e06e257a81 Another history view splitter proportion tweak 2022-09-12 20:25:15 -04:00
Tyler Goodlet 6e574835c8 Update history shm buffer in ohlc sampler loop 2022-09-12 20:25:15 -04:00
Tyler Goodlet 49ccfdd673 Pass history shm "last index" in init msg, assign on feed 2022-09-12 20:25:15 -04:00
Tyler Goodlet 3a434f312b Add sidepane like color region styling 2022-09-12 20:25:15 -04:00
Tyler Goodlet bb4dc448b3 Add history chart and "linear region" for syncing
Add a first draft of a working `pyqtgraph.LinearRegionItem` link between
a history view chart (+ data set) and the normal real-time "HFT" chart
set.

Add the history view (aka more downsampled data view) chart set to the
rt/hft set's splitter as its "first widget". Hook up linear region
callbacks to enable syncing between charts including compensating for
the downsampling rate ratio (in this case hardcoded 60 since 1s to 1M,
but we'll actually compute it going forward obvs).

More to come dawgys..
2022-09-12 20:25:15 -04:00
Tyler Goodlet 9846396df2 Add initial history (view) to charting sys
Adds an additional `GodWidget.hist_linked: LinkedSplits` alongside the
renamed `.rt_linked` to enable 2 sets of linked charts with different
sampled data sets/flows. The history set is added without "all the
fixins" for now (i.e. no order mode sidepane or search integration) such
that it is merely a top level chart which shows a much longer term
history and can be added to the UI via embedding the entire history
linked-splits instance into the real-time linked set's splitter.

Further impl deats:
- adjust the `GodWidget._chart_cache: dict[str, tuple]]` to store both
  linked split chart sets per symbol so that symbol switching will
  continue to work with the added history chart (set).
- rework `.load_symbol()` to operate on both the real-time (HFT) chart
  set and the history set.
- rework `LinkedSplits.set_split_sizes()` to compensate for the history
  chart and do more detailed height calcs arithmetic to make it appear
  by default as a minor sub-chart.
- adjust `LinkedSplits.add_plot()` and `ChartPlotWidget` internals to allow
  adding a plot without a sidepane and/or container `ChartnPane`
  composite widget by checking for a `sidepane == False` input.
- make `.default_view()` accept a manual y-axis offset kwarg.
- adjust search mode to provide history linked splits to
  `.set_chart_symbol()` call.
2022-09-12 20:25:15 -04:00
Tyler Goodlet f0d417ce42 Drop status msg var deleting from ns 2022-09-12 20:25:15 -04:00
Tyler Goodlet 55fc4114b4 Initial draft code working with `pg.LinearRegionItem` 2022-09-12 20:25:15 -04:00
Tyler Goodlet 97b074365b Use rt buffer for close price pnl calcs 2022-09-12 20:25:15 -04:00
Tyler Goodlet f79c3617d6 Always load FSPs with the default (fast) sampling period 2022-09-12 20:25:15 -04:00
Tyler Goodlet 861fe791eb Allocate 2 shm buffers for history and real-time
As part of supporting a "history view" chart which shows downsampled
datums alongside our 1s (or higher) sampled OHLC we need a separate
buffer to store the slower history from broker backends. This begins
that design by allocating 2 buffers:
- `rt_shm: ShmArray` which maps to a `/dev/shm/` file with `_rt` suffix
- `hist_shm: ShmArray` which maps to a file with `_hist` suffix

Deliver both of these shms back from both `manage_history()` and load
them as `Feed.rt_shm`/`.hist_shm` on the client side.

Impl deats:
- init the rt buffer with the first datum from loaded history and
  assign all OHLC values to that row's 'close' and the vlm to 0.
- pass the hist buffer to the backfiller task
- only spawn **one** global sampler array-row increment task per
  `brokerd` and pass in the 1s delay which we presume is our lowest
  OHLC sample rate for now.
- drop `open_sample_step_stream()` and just move its body contents into
  `Feed.index_stream()`
2022-09-12 20:25:15 -04:00
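The seeding step mentioned above amounts to copying one history datum into
the fresh real-time buffer with all OHLC fields set to its close and the
volume zeroed (a self-contained sketch; the real shm-backed dtype has more
fields):

    import numpy as np

    ohlc_dtype = np.dtype([
        ('time', 'f8'),
        ('open', 'f8'), ('high', 'f8'),
        ('low', 'f8'), ('close', 'f8'),
        ('volume', 'f8'),
    ])


    def seed_rt_row(hist_datum: np.void) -> np.ndarray:
        rt = np.zeros(1, dtype=ohlc_dtype)
        rt[0]['time'] = hist_datum['time']
        for field in ('open', 'high', 'low', 'close'):
            rt[0][field] = hist_datum['close']
        rt[0]['volume'] = 0
        return rt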
Tyler Goodlet 60052ff73a Presume shortest delay input to `increment_ohlc_buffer()`
Instead of worrying about the increment period per shm subscription,
just use the value passed as input and presume the caller knows that
only one task is necessary and that the wakeup (sampling) period should
be the shortest that is needed.

It's very unlikely we don't want at least a 1s sampling (both in terms
of task switching cost and general usage) which will eventually ship as
the default "real-time" feed "timeframe". Further, this "fast" increment
sampling task can handle all lower sampling periods (eg. 1m, 5m, 1H)
based on the current implementation just the same.

Also, add a global default sample period as `_defaul_delay_s` for use in
other internal modules.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 4d2708cd42 Force 1s sample step so crypto boiz can seee 2022-09-12 20:25:15 -04:00
Tyler Goodlet d1cc52dff5 Use new public lifetime-stack class attr 2022-09-12 20:24:56 -04:00
Tyler Goodlet 4fa901dbcb Port to new `tractor._runtime` mod 2022-09-12 20:24:56 -04:00
goodboy f2c488c1e0
Merge pull request #399 from pikers/kraken_fill_bugs
Kraken fill bugs
2022-09-12 20:12:04 -04:00
Tyler Goodlet 4a9c16d298 Fix stream type annot 2022-09-12 15:52:50 -04:00
Tyler Goodlet b9d5b904f4 Drop order entry removals on modify 2022-09-12 15:52:22 -04:00
Tyler Goodlet 0aef762d9a Bleh `kraken`, fix another ref error in fill block
Clearly, the linter didn't help us here.. but, just pass the
`brokerd` time for now in the `.broker_time` field; we can't get it from
the fill-case incremental updates in the `openOrders` sub. Add some
notes about this and how we might approach for backends with this
limitation.
2022-09-12 15:52:22 -04:00
goodboy c724117c1a
Merge pull request #398 from pikers/paper_clear_logics_fix
Oof, reverse clearing logic-routines in paper eng
2022-09-11 22:20:04 -04:00
Tyler Goodlet cc3bb85c66 Oof, reverse clearing logic-routines in paper eng 2022-09-10 16:35:31 -04:00
goodboy 20817313b1
Merge pull request #397 from pikers/kraken_nameerr_fix
Lul, fix name error on msg var name..
2022-09-06 08:18:17 -04:00
Tyler Goodlet 23d0b8a7ac Lul, fix name error on msg var name.. 2022-09-05 21:15:22 -04:00
goodboy 087a34f061
Merge pull request #367 from pikers/livenpaper
`ib`: live & paper accounts together, infra refinements
2022-08-31 18:15:39 -04:00
Tyler Goodlet 653f5c824b Drop empty vnc server script idea for live account 2022-08-31 17:45:02 -04:00
Tyler Goodlet f9217570ab Add initial `ib` backend readme 2022-08-31 17:38:24 -04:00
Tyler Goodlet 7f224f0342 Doc string typos 2022-08-31 17:22:15 -04:00
Tyler Goodlet 75a5f3795a I guess go back to doing vnc servers on both? 2022-08-31 17:22:15 -04:00
Tyler Goodlet de9f215c83 If more then one `ib` api client is available use next available for search 2022-08-31 17:22:15 -04:00
Tyler Goodlet 848e345364 POC using paper-in-docker gw for symbol search 2022-08-31 17:22:15 -04:00
Tyler Goodlet 38b190e598 Add `ib` `Crypto` contract support 2022-08-31 17:22:15 -04:00
Tyler Goodlet 3a9bc8058f Spawn a live account gateway alongside paper
This is like, super first-draft-y (and ideally we move to offering a
`piker.data._ahab` super for this) but, it's a start at allowing easy
setup of both paper and live `ib-gw` container spawning. We expect the
user to input creds for the live account manually and the vnc server is
(hackily) only run inside the paper instance, which most of the time
seems to make it possible to click on the live gui window and input creds
manually.

We also add extra files for the live instance:
- a `dockering/ib/run_x11_vnc_live.sh` which is a blank script
  that avoids running an `x11vnc` server in the live account cntr.
- a `dockering/ib/jts_live.ini` config which manually sets the live
  gw to use the `4001` port for api connections.

Further config tweaks:
- IBC: drop the api dynamic port override, decrease login display
  timeout to the riskier but likely to be faster 20s.
- `x11vnc` cmd: go back to using `rfbport` instead of `autoport`, drop
  `-logappend` so we see logging on docker console again, drop the
  frame caching flags and add in some x-hack disable flags.
2022-08-31 17:22:15 -04:00
Guillermo Rodriguez 739a231afc
Merge pull request #394 from pikers/size_in_shm_token
Store shm array size in token schema, use for loading
2022-08-29 15:15:49 -03:00
Tyler Goodlet 7dfa4c3cde Better comment on the `size`'s purpose/units 2022-08-29 13:56:26 -04:00
Tyler Goodlet 7b653fe4f4 Store shm array size in token schema, use for loading 2022-08-29 13:46:41 -04:00
goodboy 77a687bced
Merge pull request #386 from pikers/paper_tolerance
Paper race tolerance
2022-08-29 13:28:38 -04:00
Tyler Goodlet d5c1cdd91d Configure allocator from pos msg on startup
This fixes a regression added after moving the msg parsing to later in
the order mode startup sequence. The `Allocator` needs to be configured
*to* the initial pos otherwise default settings will show in the UI..

Move the startup config logic from inside `mk_allocator()` to
`PositionTracker.update_from_pp()` and add a flag to allow setting the
`.startup_pp` from the current live one as is needed during initial
load.
2022-08-29 11:39:28 -04:00
Tyler Goodlet 46d3fe88ca Fix sub-slot-remains limiting for -ve sizes
In the short case (-ve size) we had a bug where the last sub-slots worth
of exit size would never be limited to zero once the allocator limit pos
size was hit (i.e. you could keep going more -ve on the pos,
exponentially per slot over the limit). It's a simple fix, just
a `max()` around the `l_sub_pp` var used in the next-step-size calc.

Resolves #392
2022-08-28 13:51:54 -04:00
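The fix boils down to a one-line clamp in the next-step-size calc (variable
names below are lifted loosely from the description, not the exact allocator
code):

    # remaining "slots worth" of size before the limit pos size is hit;
    # for shorts this could previously go negative and let the next
    # order size keep growing past the limit..
    l_sub_pp = max(limit_size - abs(live_size), 0)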
Tyler Goodlet 5c8c5d8fbf Fix disti-mode paper pps relaying
Turns out we were putting too many brokername suffixes in the symbol
field and thus the order mode msg parser wasn't matching the current
asset to said msgs correctly and pps weren't being shown...

This repairs that plus simplifies the order mode initial pos msg loading
to just delegate into `process_trade_msg()` just as is done for
real-time msg updates.
2022-08-27 15:37:54 -04:00
goodboy 71412310c4
Merge pull request #391 from pikers/json_rpc_generic
Pull jsonrpc machinery out of deribit backend
2022-08-27 15:33:12 -04:00
Guillermo Rodriguez 0c323fdc0b
Minor style changes and warning on unexpected msg 2022-08-27 09:12:02 -03:00
Tyler Goodlet 02f53d0c13 Error on zero-size orders received by paper engine 2022-08-26 10:46:47 -04:00
Tyler Goodlet 8792c97de6 More stringent settings pane input handling
If a setting fails to apply try to log an error msg and revert to the
previous setting by not applying the UI read-update until after the new
`SettingsPane.apply_setting()` call. This prevents crashes when the user
tries to give bad inputs on editable allocator fields.
2022-08-26 10:46:47 -04:00
Tyler Goodlet 980815d075 Avoid handling account as numeric field in settings 2022-08-26 10:46:46 -04:00
Tyler Goodlet 4cedfedc21 Support clearing ticks ('last' & 'trade') fills
Previously we only simulated paper engine fills when the data feed's
provided L1 queue-levels matched an execution. This patch adds further
support for clear-level matches when there are real live clears on the
data feed that are faster/not synced with the L1 (aka usually during
periods of HFT).

The solution was to simply iterate the interleaved paper book entries on
both sides for said tick types and instead yield a side-specific predicate
per entry.
2022-08-26 10:46:46 -04:00
Tyler Goodlet fe3d0c6fdd Handle too-fast-edits with `defaultdict[str, bidict[str, tuple]]`
Not entirely sure why this all of a sudden became a problem but it seems
price changes on order edits were sometimes resulting in key errors when
modifying paper book entries quickly. This changes the implementation to
not care about matching the last price when keying/popping old orders
and use `bidict`s to more easily pop cleared orders in the paper loop.
2022-08-26 10:46:46 -04:00
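Roughly the per-side book becomes an oid-keyed `bidict` (a sketch using the
`bidict` package; entry contents are illustrative):

    from collections import defaultdict

    from bidict import bidict

    # symbol -> (oid <-> (price, size)) so that edits/cancels can pop
    # entries purely by oid, regardless of any price change since submit
    _buys: defaultdict[str, bidict] = defaultdict(bidict)

    # a fast price edit just overwrites the same oid entry..
    _buys['xbtusdt']['oid-1'] = (16_900.0, 0.1)
    _buys['xbtusdt']['oid-1'] = (16_850.0, 0.1)

    # ..while the clearing loop can still reverse-lookup by entry when
    # a matching clear comes in off the feed:
    oid = _buys['xbtusdt'].inverse[(16_850.0, 0.1)]
    assert oid == 'oid-1'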
Tyler Goodlet 9200e8da57 Raw-dog-pop cancelled paper entries; old price dun matter 2022-08-26 10:46:46 -04:00
Tyler Goodlet 430d065da6 Handle paper-engine too-fast clearing race cases
When the paper engine is used it seems we can definitely hit races where
order ack msgs arrive close enough to status messages that `trio`
schedules the status processing before the acks. In such cases we want
to be tolerant and not crash but instead warn that we got an
unknown/out-of-order msg.
2022-08-26 10:46:46 -04:00
Tyler Goodlet ecd93cb05a Pass symbol with broker suffix to `.submit_limit()`; fix clearing 2022-08-26 10:46:46 -04:00
Guillermo Rodriguez 4facd161a9
Pull jsonrpc machinery out of deribit backend into piker.data._web_bs module and make it generic 2022-08-25 14:08:09 -03:00
goodboy c5447fda06
Merge pull request #390 from pikers/actually_enable_modules
Oneliner enable rpc modules on runtime open
2022-08-25 13:06:53 -04:00
Guillermo Rodriguez 0447612b34
Oneliner enable rpc modules on runtime open 2022-08-25 11:47:40 -03:00
goodboy b5499b8225
Merge pull request #331 from pikers/deribit
Deribit backend & minimal broker check tool
2022-08-25 10:08:29 -04:00
Guillermo Rodriguez 00aabddfe8
Fix link 2022-08-25 09:22:15 -03:00
Guillermo Rodriguez 43fb720877
Do multiline imports 2022-08-25 09:20:41 -03:00
Guillermo Rodriguez 9626dbd7ac
Simplify rpc machinery, and switch refs to Dict and List to builtins, make brokercheck call public broker methods and get their results again 2022-08-25 09:18:52 -03:00
Guillermo Rodriguez f286c79a03
Woops enable backfill_bars in module __init__.py 2022-08-24 19:41:04 -03:00
Guillermo Rodriguez accb0eee6c
Add brokercheck guard on deribit.get_client && drop method running in brokercheck 2022-08-24 19:32:54 -03:00
Guillermo Rodriguez e97dd1cbdb
Stop using as many closures
Use a custom tractor branch that fixes a `maybe_open_context` re-entrancy related bug
2022-08-24 18:09:35 -03:00
Guillermo Rodriguez 34fb497eb4
Add aiter api to NoBsWs and rework cryptofeed relay to not be OOPy 2022-08-24 18:09:35 -03:00
Guillermo Rodriguez 6669ba6590
Switch back to using async for and dont install signal handlers on cryptofeed 2022-08-24 18:09:35 -03:00
Guillermo Rodriguez cb8099bb8c
Add README.rst and brokers.toml section in config example 2022-08-24 18:09:35 -03:00
Guillermo Rodriguez 80a1a58bfc
Refactor cryptofeed relay api and move it to client
Added submit_limit and submit_cancel
Cache syms correctly
Lowercase search results
2022-08-24 18:09:32 -03:00
Guillermo Rodriguez d60f222bb7
Add get_balances, and get_assets rpc to deribit.api.Client
Improve symbol_info search results
Expect cancellation on cryptofeeds asyncio task
Fix the no trades on instrument bug that we had on startup
2022-08-24 18:08:45 -03:00
Guillermo Rodriguez 2c2e43d8ac
Add comments and update cryptofeed fork url in requirements 2022-08-24 18:08:31 -03:00
Guillermo Rodriguez 212b3d620d
Tweaks on Client init to make api credentials optional 2022-08-24 18:08:29 -03:00
Guillermo Rodriguez 92090b01b8
Begin jsonrpc over ws refactor 2022-08-24 18:06:00 -03:00
Guillermo Rodriguez 9073fbc317
drop pydantic to match master 2022-08-23 15:18:45 -03:00
Guillermo Rodriguez f55f56a29f
Refactored deribit backend into new multi file format 2022-08-23 15:18:45 -03:00
Guillermo Rodriguez 28e025d02e
Finally get a chart going! lots of fixes to streaming machinery and custom cryptofeed fork with fixes 2022-08-23 15:18:43 -03:00
Guillermo Rodriguez e558e5837e
Introduce piker protocol in stream_messages 2022-08-23 15:17:18 -03:00
Guillermo Rodriguez a0b415095a
Brokermod check output fixed and tweaks to deribit Client.bars function 2022-08-23 15:17:18 -03:00
Guillermo Rodriguez 6df181c233
Add brokercheck test and got deribit to dump l1 and trades to console 2022-08-23 15:17:18 -03:00
Guillermo Rodriguez 7acc4e3208
Initial deribit mock up 2022-08-23 15:17:18 -03:00
Guillermo Rodriguez 10ea242143
Merge pull request #385 from pikers/asycvnc_pin_bump
Pin to `asyncvnc@main` after upstream fixes
2022-08-22 13:03:08 -03:00
Tyler Goodlet eda6ecd529 Pin to `asyncvnc@main` after upstream fixes
We helped drive a bunch of fixes in
https://github.com/barneygale/asyncvnc/pull/4

This pins to our forked but matched `main` branch to get those fixes
until such a time as upstream makes another release.
2022-08-22 11:58:40 -04:00
goodboy cf5b0bf9c6
Merge pull request #374 from pikers/open_order_loading
Open order loading
2022-08-19 15:23:49 -04:00
Tyler Goodlet b9dba48306 Show correct account label on loaded order lines
Quite a simple fix, we just assign the account-specific
`PositionTracker` to the level line's `._on_level_change()` handler
instead of whatever the current `OrderMode.current_pp` is set to.

Further this adds proper pane switching support such that when a user
modifies an order line from an account which is not the currently
selected one, the settings pane is changed to reflect the
account and thus corresponding position info for that account and
instrument B)
2022-08-18 16:04:44 -04:00
Tyler Goodlet 4d2e23b5ce Expose level line marker via property 2022-08-18 16:00:41 -04:00
Tyler Goodlet 973bf87e67 Don't log about unknown status msg if no oid 2022-08-18 11:51:12 -04:00
Tyler Goodlet 5861839783 Fix multi-account order loading..
We were overwriting the existing loaded orders list in the per client
loop (lul) so move the def above all that.

Comment out the "try-to-cancel-inactive-orders-via-task-after-timeout"
stuff pertaining to https://github.com/erdewit/ib_insync/issues/363 for
now since we don't have a mechanism in place to cancel the re-cancel
task once the order is cancelled - plus who knows if this is even the
best way to do it..
2022-08-18 11:51:12 -04:00
Tyler Goodlet 06845e5504 `kraken`: drop `make_sub()` and inline sub defs in `subscribe()` 2022-08-18 11:51:12 -04:00
Tyler Goodlet 43bdd4d022 Pass correct instrument symbol in position msgs 2022-08-18 11:51:12 -04:00
Tyler Goodlet bafd2cb44f Only relay fills if dialog still alive 2022-08-18 11:51:12 -04:00
Tyler Goodlet be8fd32e7d Only emit ems fill msgs for 'status' events from ib
Fills seem to be dual-emitted from both the `status` and `fill` events
in `ib_insync` internals and more or less contain the same data nested
inside their `Trade` type. We started handling the 'fill' case to deal
with a race issue in commissions/cost report tracking but we don't
really want to leak that same race to incremental fills vs.
order-"closed" tracking.. So go back to only emitting the fill msgs
on statuses and a "closed" on `.remaining == 0`.
2022-08-18 11:51:12 -04:00
Tyler Goodlet ee8c00684b Add actor-global "broker client" for tracking reqids 2022-08-18 11:51:12 -04:00
Tyler Goodlet 7379dc03af The `ps1` check doesn't work for `pdb`.. 2022-08-18 11:51:12 -04:00
Tyler Goodlet a602c47d47 Support loading paper engine live orders 2022-08-18 11:51:12 -04:00
Tyler Goodlet 317610e00a Store positions globally and deliver on ctx connects 2022-08-18 11:51:12 -04:00
Tyler Goodlet c4af706d51 Make order-book-vars globals to persist across ems-dialog connections 2022-08-18 11:51:12 -04:00
Tyler Goodlet 665bb183f7 Unpack existing live order params in case statement 2022-08-18 11:51:12 -04:00
Tyler Goodlet f6ba95a6c7 Split existing live-open case into its own block 2022-08-18 11:51:12 -04:00
Tyler Goodlet e2cd8c4aef Add initial `kraken` live order loading 2022-08-18 11:51:12 -04:00
Tyler Goodlet c8bff81220 Add runtime guards around feed pausing during interaction 2022-08-18 11:51:12 -04:00
Tyler Goodlet 2aec1c5f1d Only pprint our struct when we detect a py REPL 2022-08-18 11:51:12 -04:00
Tyler Goodlet bec32956a8 Move fill case-block earlier, log broker errors 2022-08-18 11:51:12 -04:00
Tyler Goodlet 91fdc7c5c7 Load boxed `.req` values as `Order`s in mode loop 2022-08-18 11:51:12 -04:00
Tyler Goodlet b59ed74bc1 'Only send `'closed'` on Filled events, lowercase all statuses' 2022-08-18 11:51:12 -04:00
Tyler Goodlet 16012f6f02 Include both symbols in error msg when a mismatch 2022-08-18 11:51:12 -04:00
Tyler Goodlet 2b61672723 Handle 'closed' vs. 'fill` race case..
`ib` is super good at not being reliable with order event sequencing
and duplication of fill info. This adds some guards to try and avoid
popping the last status msg too early if we end up receiving
a `'closed'` before the expected `'fill'` event(s). Further delete the
`status_msg` ref on each iteration to avoid stale reference lookups in
the relay task/loop.
2022-08-18 11:51:12 -04:00
Tyler Goodlet 176b230a46 Use modern `Union` pipe op syntax for msg fields 2022-08-18 11:51:12 -04:00
Tyler Goodlet 7fa9dbf869 Add full EMS order-dialog (re-)load support!
This includes darks, lives and alerts with all connecting clients
being broadcast all existing order-flow dialog states. Obviously
for now darks and alerts only live as long as the `emsd` actor lifetime
(though we will store these in local state eventually) and "live" orders
have lifetimes managed by their respective backend broker.

The details of this change-set is extensive, so here we go..

Messaging schema:
- change the messaging `Status` status-key set to:
  `resp: Literal['pending', 'open', 'dark_open', 'triggered',
                'closed',  'fill', 'canceled', 'error']`

  which better reflects the semantics of order lifetimes and was
  partially inspired by the status keys `kraken` provides for their
  order-entry API. The prior key set was based on `ib`'s horrible
  semantics which sound like they're right out of the 80s..
  Also, we reflect this same set in the `BrokerdStatus` msg and likely
  we'll just get rid of the separate brokerd-dialog side type
  eventually.
- use `Literal` type annots for statuses where applicable and as they
  are supported by `msgspec`.
- add additional optional `Status` fields:
  -`req: Order` to allow each status msg to optionally ref its
    commanding order-request msg allowing at least a request-response
    style implicit tracing in all response msgs.
  -`src: str` tag string to show the source of the msg.
  -`reqid: str | int` such that the ems can relay the `brokerd`
    request id both to the client side and have one spot to look
    up prior status msgs and
- draft an (unused/commented) `Dialog` type which can eventually be used
  at all EMS endpoints to track msg-flow states

EMS engine adjustments/rework:
- use the new status key set throughout and expect `BrokerdStatus` msgs
  to use the same new schema as `Status`.
- add a `_DarkBook._active: dict[str, Status]` table which is now used for
  all per-leg-dialog associations and order flow state tracking,
  allowing both the brokerd-relay and client-request handler loops
  to read/write the same msg-table and providing for delivery of
  the overall EMS-active-orders state to newly/re-connecting clients
  with minimal processing; this table replaces the prior
  `._ems_entries` table.
- add `Router.client_broadcast()` to send a msg to all currently
  connected peers.
- a variety of msg handler block logic tweaks, including more `case:`
  blocks, to be both flatter and more explicit:
  - for the relay loop move all `Status` msg update and sending to
    within each block instead of a fallthrough case plus hard-to-follow
    state logic.
  - add a specific case for unhandled backend status keys and just log
    them.
  - pop alerts from `._active` immediately once triggered.
  - where possible mutate status msg fields in place rather than
    instantiating new ones.
- insert and expect `Order` instances in the dark clearing loop and
  adjust `case:` blocks accordingly.
- tag `dark_open` and `triggered` statuses as sourced from the ems.
- drop all the `ChainMap` stuff for now; we're going to make our own
  `Dialog` type for this purpose..

Order mode rework:
- always parse the `Status` msg and use match syntax cases with object
  patterns; hackily assign the `.req` in many blocks to work around not
  yet having proper on-the-wire decoding.
- make `.load_unknown_dialog_from_msg()` expect a `Status` with boxed
  `.req: Order` as input.
- change `OrderDialog` -> `Dialog` in prep for a general purpose type
  of the same name.

`ib` backend order loading support:
- do "closed" status detection inside the msg-relay loop instead
  of expecting the ems to do this..
- add an attempt to cancel inactive orders by scheduling cancel
  submissions continually (no idea if this works).
- add a status map to go from the 80s keys to our new set.
- deliver `Status` msgs with an embedded `Order` for existing live order
  loading and make sure to try and get the source exchange info (instead
  of SMART).

Paper engine ported to match:
- use new status keys in `BrokerdStatus` msgs
- use `match:` syntax in request handler loop
2022-08-18 11:51:12 -04:00
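A minimal sketch of what the reworked `Status` msg with the new `Literal` status key set and boxed `.req: Order` could look like; the field defaults and the `Order` shape here are assumptions for illustration, not piker's actual definitions.

```python
# Hypothetical sketch of the EMS status msg schema described above.
from typing import Literal
import msgspec


class Order(msgspec.Struct):
    oid: str          # ems-side dialog/order uuid
    symbol: str
    action: Literal['buy', 'sell', 'alert']
    price: float
    size: float


class Status(msgspec.Struct):
    resp: Literal[
        'pending', 'open', 'dark_open', 'triggered',
        'closed', 'fill', 'canceled', 'error',
    ]
    oid: str
    time_ns: int
    src: str | None = None            # who generated this status
    reqid: str | int | None = None    # brokerd-side request id
    req: Order | None = None          # boxed originating request msg


# round-trip encode/decode just to show the schema is self-consistent
msg = Status(resp='dark_open', oid='deadbeef', time_ns=0, src='ems')
wire = msgspec.msgpack.encode(msg)
assert msgspec.msgpack.decode(wire, type=Status).resp == 'dark_open'
```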
Tyler Goodlet 87ed9abefa WIP playing with a `ChainMap` of messages 2022-08-18 11:51:12 -04:00
Tyler Goodlet 2548aae73d Deliver existing dialog (msgs) to every EMS client
Ideally every client that connects to the ems can know its state
(immediately), meaning we relay all the order dialogs that are currently
active. This adds full (hacky WIP) support to receive those dialog
(msgs) from the `open_ems()` startup values via the `.started()` msg
from `_emsd_main()`.

Further this adds support to the order mode chart-UI to display existing
(live) orders on the chart during startup. Details include,

- add an `OrderMode.load_unknown_dialog_from_msg()` for processing
  a ``BrokerdStatus`` (for now) msg from the EMS that was not
  previously created by the current ems client, registering it and
  displaying it on the chart.
- break out the ems msg processing into a new
  `order_mode.process_trade_msg()` func so that it can be called on the
  startup dialog-msg set as well as eventually used as a more general
  low level auto-strat API (eg. when we get to displaying auto-strat and
  group trading automatically on an observing chart UI).
- hackiness around msg-processing for the dialogs delivery since we're
  technically delivering `BrokerdStatus` msgs when the client-side
  processing technically expects `Status` msgs.. we'll rectify this
  soon!
2022-08-18 11:51:12 -04:00
Tyler Goodlet 1cfa04927d Lol, handle failed-to-cancel statuses.. 2022-08-18 11:51:12 -04:00
Tyler Goodlet e34ea94f9f Start brokerd relay loop after opening client stream
In order to avoid missed existing order message emissions on startup we
need to be sure the client side stream is registered with the router
first. So break out the starting of the
`translate_and_relay_brokerd_events()` task until inside the client
stream block and start the task using the dark clearing loop nursery.

Also, ensure `oid` (and thus for `ib` the equivalent re-used `reqid`)
are cast to `str` before registering the dark book. Deliver the dark
book entries as part of the `_emsd_main()` context `.started()` values.
2022-08-18 11:51:12 -04:00
Tyler Goodlet 1510383738 Always cast ems `reqid` values to `int` 2022-08-18 11:51:12 -04:00
Tyler Goodlet 016b669d63 Drop staged line runtime guard 2022-08-18 11:51:12 -04:00
Tyler Goodlet 682a0191ef First draft: relay open orders through ems and display on chart 2022-08-18 11:51:12 -04:00
Tyler Goodlet 9e36dbe47f Relay existing open orders from ib on startup 2022-08-18 11:51:12 -04:00
goodboy 8bef67642e
Merge pull request #383 from pikers/doin_the_splits
Doin the splits
2022-08-18 11:50:46 -04:00
Tyler Goodlet 52febac6ae Facepalm: order-handler tasks are one-to-one with unique clients 2022-08-18 11:34:11 -04:00
Tyler Goodlet f202699c25 Fix scan loop: only stash clients that actually connect.. 2022-08-18 11:34:11 -04:00
Tyler Goodlet 0fb07670d2 Fix multi-account positioning and order tracking..
This seems to have been broken in refactoring from commit 279c899de5
which was never tested against multiple accounts/clients.

The fix is 2 part:
- position tables are now correctly loaded ahead of time and used by
  account for each connected client in processing of ledgers and
  existing positions.
- a task for each API client is started (as implemented prior) so that
  we actually get status updates for every client used for submissions.

Further we add a bit of code using `bisect.insort()` to normalize
ledgers to a datetime-sorted list of records (though pretty sure the
`dict` transform ruins it?) in an effort to avoid issues with ledger
transaction processing against previously minimized `Position.clears`
tables, which should (but might not?) avoid incorporating clear events
prior to the last "net-zero" positioning state.
2022-08-17 14:14:20 -04:00
Tyler Goodlet 73d2e7716f Pre-loop clients to build out pps tables, handle missing commission field 2022-08-17 10:23:01 -04:00
Tyler Goodlet 999ae5a1c6 Handle `Position.split_ratio` in state audits
This firstly changes `.audit_sizing()` => `.ensure_state()` and makes it
return `None` as well as only error when split-ratio-denoted (via
config) positions do not size as expected.

Further refinements,
- add an `.expired()` predicate method
- always return a size of zero from `.calc_size()` on expired assets
- load each `pps.toml` entry's clears table into `Transaction`s and use
  `.add_clear()` during init-from-config.
2022-08-17 10:06:58 -04:00
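A rough sketch (not piker's actual `Position` code) of the expiry handling described in this commit, assuming an optional expiry datetime and a simple clears mapping:

```python
# Sketch: expired positions always report a zero size.
from datetime import datetime, timezone


class Position:
    def __init__(
        self,
        expiry: datetime | None = None,
        clears: dict | None = None,
    ):
        self.expiry = expiry
        self.clears = clears or {}

    def expired(self) -> bool:
        '''Predicate: has this contract's expiry already passed?'''
        return (
            self.expiry is not None
            and datetime.now(timezone.utc) >= self.expiry
        )

    def calc_size(self) -> float:
        '''Sum clear sizes, but always report zero for expired assets.'''
        if self.expired():
            return 0.0
        return sum(entry['size'] for entry in self.clears.values())
```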
Tyler Goodlet 23ba0e5e69 Don't raise on missing position for now, just error log 2022-08-17 10:06:41 -04:00
Tyler Goodlet 941a2196b3 Get pos entry from table not `updated: dict` output 2022-08-17 10:06:37 -04:00
Tyler Goodlet 0cf4e07b84 Use `datetime` sorting on clears table appends
In order to avoid issues with reloading ledger and API trades when an
existing `pps.toml` is already in place, we have to make sure we not only
avoid duplicate entries but also avoid re-adding entries that would have
been removed during a prior call to the `Position.minimize_clears()`
filter. The easiest way to do this is to sort on timestamps and avoid
adding any record that pre-existed the last net-zero position ledger
event that `.minimize_clears()` discarded. Implementing this means
parsing the config file clears table's timestamps into datetime objects
for inequality checks, and we add a `Position.first_clear_dt` attr for
storing this value when managing pps in object form, but never store it
in the config (since it should be obvious from the sorted clear event
table).
2022-08-17 10:05:05 -04:00
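A small sketch of the datetime-sorted insertion plus "don't re-add anything older than the first retained clear" guard described above; the record shape and `first_clear_dt` handling are assumptions, and the `key=` kwarg to `bisect.insort()` requires py3.10+:

```python
from bisect import insort
from datetime import datetime

clears: list[dict] = []                 # kept sorted by 'dt'
first_clear_dt: datetime | None = None  # set when loading from pps.toml


def add_clear(record: dict) -> bool:
    '''Insert `record` keeping `clears` time-sorted; skip anything that
    pre-dates the last known net-zero position event.'''
    dt: datetime = record['dt']
    if first_clear_dt and dt < first_clear_dt:
        return False  # already discarded by a prior `.minimize_clears()`

    insort(clears, record, key=lambda r: r['dt'])
    return True
```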
Tyler Goodlet 7bec989eed First try mega-basic stock (reverse) split support with `ib` and `pps.toml` 2022-08-17 09:54:49 -04:00
Tyler Goodlet 6856ca207f Fix for TWS created position loading 2022-08-17 09:53:42 -04:00
Guillermo Rodriguez 2e5616850c
Merge pull request #378 from pikers/msgpack_zombie
Drop `msgpack` from `marketstore` module
2022-08-11 17:07:47 -03:00
Tyler Goodlet a83bd9c608 Drop `msgpack` from `marketstore` module 2022-08-11 14:21:36 -04:00
goodboy 9651ca84bf
Merge pull request #372 from pikers/the_ems_flattening
The ems flattening
2022-08-05 21:03:59 -04:00
Tyler Goodlet 109b35f6eb Matchify paper clearing loop 2022-08-05 21:02:15 -04:00
Tyler Goodlet e28c1748fc Comment out "unknown msg" case for now 2022-08-05 21:02:15 -04:00
Tyler Goodlet 72889b4d1f Fix reference error 2022-08-05 21:02:15 -04:00
Tyler Goodlet ae001c3dd7 Matchify the dark trigger loop 2022-08-05 21:02:15 -04:00
Tyler Goodlet 2309e7ab05 Flatten the brokerd-dialog relay loop using `match:` 2022-08-05 21:02:15 -04:00
Tyler Goodlet 46c51b55f7 Flatten the client-request handler loop with `match:` 2022-08-05 21:02:15 -04:00
goodboy a9185e7d6f
Merge pull request #349 from pikers/kraken_ws_orders
Kraken ws orders
2022-08-05 21:01:24 -04:00
Tyler Goodlet 3a0987e0be Fix to-fast-edit guard case 2022-08-05 21:00:54 -04:00
Tyler Goodlet d280a592b1 Repair normalize method logic to only error on lookup failure 2022-08-05 16:14:19 -04:00
goodboy ef5829a6b7
Merge pull request #368 from pikers/kraken_userref_hackzin
`kraken`: use `userref` field AND `reqid`, utilize `openOrders` sub for most msging
2022-08-03 09:11:42 -04:00
Tyler Goodlet 30bcfdcc83 Emit fills from `openOrders` block
The (partial) fills from this sub are most indicative of clears (also
says support) whereas the msgs in the `ownTrades` sub are only emitted
after the entire order request has completed - there is no size-vlm
remaining.

Further enhancements:
- this also includes proper subscription-syncing inside `subscribe()` with
  a small pre-msg-loop which waits on ack-msgs for each sub and raises any
  errors. This approach should probably be implemented for the data feed
  streams as well.
- configure the `ownTrades` sub to not bother sending historical data on
  startup.
- make the `openOrders` sub include rate limit counters.
- handle the rare case where the ems is trying to cancel an order which
  was just edited and hasn't yet had it's new `txid` registered.
2022-08-01 19:22:31 -04:00
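A rough sketch of the "wait for each sub's ack before streaming" pre-msg-loop mentioned above; the ws wrapper methods and kraken ack-msg shapes here are assumptions:

```python
async def subscribe(ws, subs: list[dict]) -> None:
    # send every subscription request first
    for sub in subs:
        await ws.send_msg(sub)

    acked = 0
    # pre-msg-loop: drain until every sub is ack'd, raising on errors
    while acked < len(subs):
        msg = await ws.recv_msg()
        match msg:
            case {'event': 'subscriptionStatus', 'status': 'subscribed'}:
                acked += 1

            case {
                'event': 'subscriptionStatus',
                'status': 'error',
                'errorMessage': err,
            }:
                raise RuntimeError(f'subscription failed: {err}')

            case _:
                # anything else is an early data msg; a real impl would
                # buffer it for the main stream loop
                pass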
Tyler Goodlet 1a291939c3 Drop subs ack handling from streamer 2022-08-01 16:55:04 -04:00
Tyler Goodlet 69e501764a Drop status event processing at large
Since we figured out how to pass through ems dialog ids to the
`openOrders` sub we don't really need to do much with status updates
other then error handling. This drops `process_status()` and moves the
error handling logic into a status handler sub-block; we now just
info-log status updates for troubleshooting purposes.
2022-08-01 14:08:45 -04:00
goodboy 7f3f7f0372
Merge pull request #370 from pikers/kill_pydantic_from_kraken
Kill `pydantic` from `kraken`
2022-07-31 15:18:43 -04:00
Tyler Goodlet 1cbf45b4c4 Use the ``newuserref`` field on order edits
Why we need so many fields to accomplish passing through a dialog key to
orders is beyond me but this is how they do it with edits..

Allows not having to handle `editOrderStatus` msgs to update the dialog
key table and instead just do it in the `openOrders` sub by checking the
canceled msg for a 'cancel_reason' of 'Order replaced', in which case we
just pop the txid and wait for the new order the kraken backend engine
will submit automatically, which will now have the correct 'userref'
value we passed in via the `newuserref`, and then we add that new `txid`
to our table.
2022-07-31 14:36:06 -04:00
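A sketch of the `openOrders` handling described above: treat a cancel with `cancel_reason` of 'Order replaced' as one of kraken's internal "edits", drop the old `txid` and wait for the replacement order (which carries the `newuserref` we sent). The `reqids2txids` table and msg shape are assumptions for illustration:

```python
def handle_open_order_update(
    txid: str,
    update: dict,
    reqids2txids: dict[int, str],
) -> None:
    status = update.get('status')
    reason = update.get('cancel_reason')
    reqid = update.get('userref')

    if status == 'canceled' and reason == 'Order replaced':
        # kraken's own edit-cancel: forget this txid; the backend engine
        # will submit a new order whose `userref` matches our reqid.
        reqids2txids.pop(reqid, None)
        return

    # otherwise map (or re-map) the brokerd reqid to the current txid
    if reqid is not None:
        reqids2txids[reqid] = txid
```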
Tyler Goodlet 227a80469e Use both `reqid` and `userref` in order requests
Turns out you can pass both, thus making mapping an ems `oid` to
a brokerd-side `reqid` much simpler. This allows us to avoid keeping
as much local dialog state, but still with the following caveats:

- ok'd `editOrder` msgs must update the reqid<->txid map
- only pop `reqids2txids` entries inside the `cancelOrderStatus` handler
2022-07-31 14:36:06 -04:00
Tyler Goodlet dc8072c6db WIP: use `userref` field over `reqid`... 2022-07-31 14:36:06 -04:00
Tyler Goodlet 808dbb12e6 Drop forgotten `pydantic` dataclass in binance backend.. 2022-07-31 14:35:25 -04:00
Tyler Goodlet 44e21b1de9 Drop field import 2022-07-30 17:34:40 -04:00
Tyler Goodlet b3058b8c78 Drop remaining `pydantic` usage, convert `OHLC` to our struct variant 2022-07-30 17:34:40 -04:00
Tyler Goodlet db564d7977 Add casting method to our struct variant 2022-07-30 17:34:40 -04:00
Tyler Goodlet e6a3e8b65a Add warning msg for `openOrders.userref` always being 0 2022-07-30 17:33:45 -04:00
Tyler Goodlet d43ba47ebe Renames to `ppu` 2022-07-30 17:33:45 -04:00
Tyler Goodlet 168c9863cb Look for transfers after ledger + api trans load
If we don't have a pos table built out already (in mem) we can't figure
out the likely dst asset (since there's no pair entry to guide us) that
we should use to search for withdrawal transactions; so move it later.

Further this ports to the new api changes in `piker.pp` that will land
with #365.
2022-07-30 17:33:45 -04:00
Tyler Goodlet 0fb31586fd Go back to using `Position.size` property in pp loading audits 2022-07-30 17:33:45 -04:00
Tyler Goodlet 8b609f531b Add transfers knowledge to positions validation 2022-07-30 17:33:45 -04:00
Tyler Goodlet d502274eb9 Add a `Client.get_xfers()` to retrieve withdrawal transactions 2022-07-30 17:33:45 -04:00
Tyler Goodlet b1419c850d Update ledger from api immediately, cruft cleaning 2022-07-30 17:33:45 -04:00
Tyler Goodlet aa7f24b6db Drop old reversed order idea for rt-pp msg testing 2022-07-30 17:33:45 -04:00
Tyler Goodlet 319e68c855 TOSQUASH: revert to 22Hz display throttle 2022-07-30 17:33:45 -04:00
Tyler Goodlet 64f920d7e5 Accept direct fqsn matches on position msg updates 2022-07-30 17:33:45 -04:00
Tyler Goodlet 3b79743c7b Finally get real-time pp updates workin for `kraken`
This ended up driving the rework of the `piker.pp` apis to use context
manager + table style which resulted in a much easier to follow
state/update system B). Also added is a flag to do a manual simulation
of a "fill triggered rt pp msg": it requires the user to delete the
last ledgered trade entry from config files and then allows that trade
to emit through the `openOrders` sub and update the client shortly after
order mode boot; this is how the rt updates were verified to work
without doing even more live orders 😂.

Patch details:
- open both `open_trade_ledger()` and `open_pps()` inside the trade
  dialog startup and conduct a "pp state sync" logic phase where we now
  pull the account balances and incrementally load pp data (in order,
  from `pps.toml`, ledger, api) until we can generate the asset balance
  by reverse incrementing through trade history eventually erroring out
  if we can't reproduce the balance value.
- rework the `trade2pps()` to take in the `PpTable` and generate new
  ems msgs from table updates.
- return the new `dict[str, Transaction]` expected from
  `norm_trade_records()`
- only update pp config and ledger on dialog exit.
2022-07-30 17:33:45 -04:00
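A sketch of the context-managed startup flow named in this commit: open the trades ledger and the `pps.toml`-backed table together, sync state, and let both be written out on exit. The exact signatures are assumptions and the normalizer is passed in rather than imported:

```python
from piker.pp import open_pps, open_trade_ledger


def sync_pp_state(
    broker: str,
    acctid: str,
    norm_trade_records,  # backend's normalizer -> dict[str, Transaction]
) -> None:
    with (
        open_trade_ledger(broker, acctid) as ledger,  # raw per-account trades
        open_pps(broker, acctid) as table,            # position table state
    ):
        # "pp state sync" phase: fold ledger entries into position state
        # before serving the trade dialog; config + ledger are only
        # persisted when these ctx mngrs exit.
        table.update_from_trans(norm_trade_records(ledger))
```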
Tyler Goodlet 54008a1976 Add balance and assets retreival methods, cache assets on startup
Pass config dict into client and assign to `.conf`.
2022-07-30 17:33:45 -04:00
Tyler Goodlet b96b7a8b9c Use `aclosing()` on all msg async-gens 2022-07-30 17:33:45 -04:00
Tyler Goodlet 0fca1b3e1a Also map the ws symbol set to the alt set 2022-07-30 17:33:45 -04:00
Tyler Goodlet 2386270cad Handle too-fast-edits, add `ChainMap` msg tracing
Since our ems doesn't actually do blocking-style client-side submission
updates, the client can update an existing order's state before knowing
its current state, so we can run into race conditions where for some
backends an order is updated using the wrong order id. For kraken we
manually implement detecting this race (lol, for
now anyway) such that when a new client side edit comes in before the
new `txid` is known, we simply expect the handler loop to cancel the
order. Further this adds cancellation on arbitrary status errors, like
rate limits.

Also this adds 2 leg (ems <-> brokerd <-> kraken) msg tracing using
a `collections.ChainMap` which is likely going to end up being the POC
for a more general data structure recommended for backends that need to
trace msg flow for translation with the ems.
2022-07-30 17:33:45 -04:00
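A tiny sketch of the 2-leg msg tracing idea using `collections.ChainMap`: each leg's latest msg is layered on top so lookups fall back through the most recent known state for the dialog. The msg contents here are made up for illustration:

```python
from collections import ChainMap

dialog: ChainMap = ChainMap()
dialog = dialog.new_child({'oid': 'abc', 'action': 'buy', 'price': 10.0})  # ems request
dialog = dialog.new_child({'reqid': 7})                                    # brokerd ack
dialog = dialog.new_child({'txid': 'OXXYYZ-ABCDE', 'status': 'open'})      # kraken status

# reads see the merged, newest-first view across all legs of the dialog
assert dialog['status'] == 'open'
assert dialog['oid'] == 'abc'
assert dialog['reqid'] == 7
```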
Tyler Goodlet 5b135fad61 Handle pre-existing open orders specifically by checking for null `oid` 2022-07-30 17:33:45 -04:00
Tyler Goodlet abb6854e74 Make all `.bsuid`s the normed symbol "altname"s 2022-07-30 17:33:45 -04:00
Tyler Goodlet 22f9b2552c Provide symbol norming via a classmethod + global table 2022-07-30 17:33:45 -04:00
Tyler Goodlet 57f2478dc7 Fixes for state updates and clears
Turns out the `reqid` value returned by the `openOrders` and `ownTrades`
subs (the one brokerd sends to the kraken api in order requests) is
always set to zero, which seems to be a bug? So this includes patches to
work around that as well as reliance on the `openOrders` sub to do most
`BrokerdStatus` updates since `XOrderStatus` events don't seem to have
much data in them at all (they almost look like pure ack events so maybe
they aren't affirmative of final state changes anyway..).

Other fixes:
- respond with a `BrokerdOrderAck` immediately after `reqid` generation,
  not after order submission, to ensure the ems has a valid `reqid`
  *before* kraken api events are relayed through.
- add a `reqids2txids: bidict[int, str]` which maps brokerd genned
  `reqid`s to kraken-side `txid`s since (as mentioned above) the
  clearing and state endpoints don't relay back this value (it's always
  0...)
- add log messages for each sub so that (at least for now) we can see
  exact msg contents coming from kraken.
- drop `.remaining` calcs for now since we need to keep record of the
  order states manually in order to retrieve the original submission
  vlm..
- fix the `openOrders` case for fills, in this case the message includes
  no `status` field and thus we must catch it in a block *after* the
  normal state handler to avoid masking.
- drop response msg generation from the cancel status case since we
  can do it again from the `openOrders` handler and sending a double
  status causes issues on the client side.
- add a shite ton of notes around all this missing `reqid` stuff.
2022-07-30 17:33:45 -04:00
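A minimal sketch of the `reqids2txids` mapping using the third-party `bidict` package: brokerd generates the `int` reqid up front (so the ems gets its ack immediately) and the kraken-side txid is attached once known, resolvable in either direction. The values here are illustrative:

```python
from bidict import bidict

reqids2txids: bidict = bidict()

# 1) on ems request: generate our own reqid and ack right away
reqid = 1
# ... send BrokerdOrderAck(reqid=reqid) back to the ems here ...

# 2) once kraken reports the order, bind its txid to our reqid
reqids2txids[reqid] = 'OABC12-XYZ'

# 3) later msgs can be resolved from either side
assert reqids2txids[1] == 'OABC12-XYZ'
assert reqids2txids.inverse['OABC12-XYZ'] == 1
```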
Tyler Goodlet 5dc9a61ec4 Use cancel level logging for cancelled orders 2022-07-30 17:33:45 -04:00
Tyler Goodlet b0d3d9bb01 TOSQUASH: lingering `.dict()`s 2022-07-30 17:33:45 -04:00
Tyler Goodlet caecbaa231 Cancel any live orders found on connect
More or less just to avoid orders the user wasn't aware of from
persisting until we get "open order relaying" through the ems working.

Some further fixes which required a new `reqids2txids` map which keeps
track of which `kraken` "txid" is mapped to our `reqid: int`; mainly
this was needed for cancel requests which require knowing the underlying
`txid`s (since apparently kraken doesn't keep track of the "reqid" we
pass it). Pass the ws instance into `handle_order_updates()` to enable
cancelling orders on startup. Don't key error on unknown `reqid`
values (for eg. when receiving historical trade events on startup).
Handle cancel requests first in the ems side loop.
2022-07-30 17:33:45 -04:00
Tyler Goodlet a20a8d95d5 Use `aclosing()` around ws async gen 2022-07-30 17:33:45 -04:00
Tyler Goodlet ba93f96c71 Lol, gotta `float()` that vlm before `*` XD 2022-07-30 17:33:45 -04:00
Tyler Goodlet 804e9afdde Pass our manually mapped `reqid: int` to EMS
Since we seem to always be able to get back the `reqid`/`userref` value
we send to kraken ws endpoints, we can use this as our brokerd side
order id and avoid all race cases with getting the true `txid` value
that `kraken` assigns (and which changes when you do "edits"
:eyeroll:). This simplifies status updates by allowing our relay loop
just to pass back our generated `.reqid` verbatim and allows responding
with a `BrokerdOrderAck` immediately in the request handler task which
should guarantee there are no further race conditions with the relay
loop and mapping `txid`s from kraken.. and figuring out wtf to do when
they change, etc.
2022-07-30 17:33:45 -04:00
Tyler Goodlet 89bcaed15e Add ledger and `pps.toml` snippets 2022-07-30 17:33:45 -04:00
Tyler Goodlet bb2f8e4304 Try out a backend readme 2022-07-30 17:33:45 -04:00
Tyler Goodlet 8ab8268edc Don't require an ems msg symbol on error statuses 2022-07-30 17:33:45 -04:00
Tyler Goodlet bbcc55b24c Update ledger *after* pps updates from new trades
Addressing the same issue as in #350 where we need to compute position
updates using the *first read* from the ledger **before** we update it,
to make sure `Position.lifo_update()` gets called and **not skipped**
because new trades were read as clears entries but haven't actually been
included in update calcs yet.

Main change here is to convert `update_ledger()` into a context mngr so
that the ledger write is committed after pps updates using
`pp.update_pps_conf()`..

This is basically a hotfix to #346 as well.
2022-07-30 17:33:45 -04:00
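A sketch of the "commit the ledger write only after pp updates" pattern described above; names besides `update_ledger()` are illustrative:

```python
from contextlib import contextmanager


@contextmanager
def update_ledger(new_trades: dict[str, dict], ledger: dict[str, dict]):
    # hand back only the entries not already recorded so position calcs
    # run against the *first read* of the ledger,
    fresh = {tid: t for tid, t in new_trades.items() if tid not in ledger}
    yield fresh
    # ...and only persist them once the caller has finished its updates.
    ledger.update(fresh)


# usage: pps are computed from `fresh` *before* the ledger is mutated
ledger: dict[str, dict] = {'t1': {'size': 1.0}}
with update_ledger({'t1': {'size': 1.0}, 't2': {'size': 2.0}}, ledger) as fresh:
    assert list(fresh) == ['t2']   # compute pp updates from these
assert 't2' in ledger               # committed on exit
```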
Tyler Goodlet 9fa9c27e4d Factor status handling into a new `process_status()` helper 2022-07-30 17:33:45 -04:00
Tyler Goodlet d9b4c4a413 Factor msg loop into new func: `handle_order_updates()` 2022-07-30 17:33:45 -04:00
Tyler Goodlet 84cab1327d Drop unneeded count-sequence verification 2022-07-30 17:33:45 -04:00
Tyler Goodlet df4cec930b Get order "editing" working fully
Turns out the EMS can support this as originally expected: you can
update a `brokerd`-side `.reqid` through a `BrokerdAck` msg and the ems
will update its cross-dialog (leg) tracking correctly! The issue was
a bug in the `editOrderStatus` msg handling and appropriate tracking
of the correct `.oid` (ems uid) on the kraken side. This unfortunately
required adding an `emsflow: dict[str, list[BrokerdOrder]]` msg flow
tracing table which means the broker daemon is tracking all the msg flow
with the ems, though I'm wondering now if this is just good practice
anyway and maybe we should offer a small primitive type from our msging
utils to aid with this? I've used such constructs in event handling
systems before.

There's a lot more factoring that can be done after these changes as
well but the quick detailed summary is,
- rework the `handle_order_requests()` loop to use `match:` syntax and
  update the new `emsflow` table on every new request from the ems.
- fix the `editOrderStatus` case pattern to not include an error msg and
  thus actually be triggered to respond to the ems with a `BrokerdAck`
  containing the new `.reqid`, the new kraken side `txid`.
- skip any `openOrders` msgs which are detected as being kraken's
  internal order "edits" by matching on the `cancel_reason` field.
- update the `emsflow` table in all ws-stream msg handling blocks
  with responses sent to the ems.

Relates to #290
2022-07-30 17:33:45 -04:00
Tyler Goodlet ab08dc582d Make ems relay loop report on brokerd `.reqid` changes 2022-07-30 17:33:45 -04:00
Tyler Goodlet f79d9865a0 Use `match:` syntax in data feed subs processing 2022-07-30 17:33:45 -04:00
Tyler Goodlet 00378c330c First draft, working WS based order management
Move to using the websocket API for all order control ops and dropping
the sync rest api approach which resulted in a bunch of buggy races.
Further this gets us much faster (batch) order cancellation for free
and a simpler ems request handler loop. We now heavily leverage the new
py3.10 `match:` syntax for all kraken-side API msg parsing and
processing and handle both the `openOrders` and `ownTrades` subscription
streams.

We also block "order editing" (by immediate cancellation) for now since
the EMS isn't entirely yet equipped to handle brokerd side `.reqid`
changes (which is how kraken implements so called order "updates" or
"edits") for a given order-request dialog and we may want to even
consider just implementing "updates" ourselves via independent cancel
and submit requests? Definitely something to ponder. Alternatively we
can "masquerade" such updates behind the count-style `.oid` remapping we
had to implement anyway (kraken's limitation) and maybe everything will
just work?

Further details in this patch:
- create 2 tables for tracking the EMS's `.oid` (uuid4) value to `int`s
  that kraken expects (for `reqid`s): `ids` and `reqmsgs` which enable
  local lookup of ems uids to piker-backend-client-side request ids and
  received order messages.
- add `openOrders` sub support which more or less directly relays to
  equivalent `BrokerdStatus` updates and calc the `.filled` and
  `.remaining` values based on cleared vlm updates.
- add handler blocks for `[add/edit/cancel]OrderStatus` events including
  error msg cases.
- don't do any order request response processing in
  `handle_order_requests()` since responses are always received via one
  (or both?) of the new ws subs: `ownTrades` and `openOrders` and thus
  such msgs are now handled in the response relay loop.

Relates to #290
Resolves #310, #296
2022-07-30 17:33:45 -04:00
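A rough sketch of the py3.10 `match:`-style dispatch over `openOrders`/`ownTrades` and request-ack style msgs described above; kraken's real payloads are richer, so these shapes are assumptions:

```python
def dispatch(msg: list | dict) -> str:
    match msg:
        # sequence-style channel msgs: [payload, channel_name, ...]
        case [orders, 'openOrders', *_]:
            return f'status update for {len(orders)} order(s)'

        case [trades, 'ownTrades', *_]:
            return f'{len(trades)} fill(s) cleared'

        # event-style request acks / errors
        case {'event': 'addOrderStatus', 'status': 'ok', 'txid': txid}:
            return f'submitted: {txid}'

        case {'event': _, 'status': 'error', 'errorMessage': err}:
            return f'error: {err}'

        case _:
            return 'unknown msg'


assert dispatch([[{'OTX': {}}], 'openOrders', {'sequence': 1}]).startswith('status')
assert dispatch({'event': 'addOrderStatus', 'status': 'ok', 'txid': 'OTX'}) == 'submitted: OTX'
```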
goodboy 180b97b180
Merge pull request #369 from pikers/pydantic_zombie
Drop `pydantic.create_model()` usage for `msgspec.defstruct()`
2022-07-30 17:33:18 -04:00
Tyler Goodlet f0b3a4d5c0 Drop `pydantic.create_model()` usage for `msgspec.defstruct()` 2022-07-30 17:01:56 -04:00
goodboy e2e66324cc
Merge pull request #363 from pikers/ib_pps_upgrade
`ib` pps api layer upgrade
2022-07-27 14:50:28 -04:00
Tyler Goodlet d950c78b81 Mention liquidation in error msg 2022-07-27 14:40:32 -04:00
Tyler Goodlet 7dbcbfdcd5 Write `pps.toml` shortly after broker startup 2022-07-27 14:40:32 -04:00
Tyler Goodlet 279c899de5 Port to new PpTable.dump_active()` output, move order event task to child nursery 2022-07-27 14:40:32 -04:00
Tyler Goodlet db5aacdb9c Only allow vnc client connections from localhost 2022-07-27 14:40:32 -04:00
Tyler Goodlet c7b84ab500 Port position calcs to new ctx mngr apis and drop multi-loop madness 2022-07-27 14:40:32 -04:00
Tyler Goodlet 9967adb371 Lol, drop unintended account name key layer from ledger 2022-07-27 14:40:32 -04:00
Tyler Goodlet 30ff793a22 Port `ib` broker machinery to new ctx mngr pp api
This drops the use of `pp.update_pps_conf()` (and friends) and instead
moves to using the context style `open_trade_ledger()` and `open_pps()`
managers for faster pp msg gen due to delayed file writing (which was
the main source of update latency).

In order to make this work with potentially multiple accounts this also
uses an exit stack which loads each ledger / `pps.toml` into an account
id mapped `dict`; a POC for likely how we should implement some higher
level position manager api.
2022-07-27 12:29:53 -04:00
Tyler Goodlet 666587991a Avoid crash when no vnc server running 2022-07-27 12:29:53 -04:00
goodboy 01005e40a8
Merge pull request #366 from pikers/multisympaper
Fix #222 multi-symbol paper engine support
2022-07-27 12:29:05 -04:00
goodboy d81e629c29
Merge pull request #365 from pikers/ppu_history
Ppu history
2022-07-27 12:25:23 -04:00
Tyler Goodlet 2766fad719 Fix #222 multi-symbol paper engine support 2022-07-27 12:18:59 -04:00
Tyler Goodlet ae71168216 Change name `be_price` -> `ppu` throughout codebase 2022-07-27 12:18:36 -04:00
Tyler Goodlet a0c238daa7 Adjust paper-engine to use `Transaction` for pps updates 2022-07-27 11:20:59 -04:00
Tyler Goodlet 7cbdc6a246 Move clears updates back into a method 2022-07-27 11:17:57 -04:00
Tyler Goodlet 2ff8be71aa Add `PpTable.write_config(), order `pps.toml` columns 2022-07-27 11:17:57 -04:00
Tyler Goodlet ddffaa952d Rework "breakeven" price as "price-per-unit": ppu
The original implementation of `.calc_be_price()` wasn't correct since
the real so-called "price per unit" (ppu) is actually defined by
a recurrence relation (which is why the original state-updated
`.lifo_update()` approach worked well) and requires the previous ppu to
be weighted by the new accumulated position size when considering a new
clear event. The ppu is the price above or below which the trader
takes a win or loss on transacting one unit of the trading asset and
thus it is the true "break even price" that determines making or losing
money per fill. This patch fixes the implementation to use trailing
windows of the accumulated size and ppu to compute the next ppu value
for any new clear event as well as handle rare cases where the
"direction" changes polarity (eg. long to short in a single order). The
new method is `Position.calc_ppu()` and further details of the relation
can be seen in the doc strings.

This patch also includes a wack-ton of clean ups and removals in an
effort to refine position management api for easier use in new backends:

- drop `update_pps_conf()`, `load_pps_from_toml()` and rename
  `load_trades_from_ledger()` -> `load_pps_from_ledger()`.
- extend `PpTable` to have a `.to_toml()` method which returns the
  active set of positions ready to be serialized to the `pps.toml` file,
  which it collects by calling,
- `PpTable.dump_active()` which now returns double dicts of the
  open/closed pp object maps.
- make `Position.minimize_clears()` now iterate the clears table in
  chronological order (instead of reverse) and only drop fills prior
  to any zero-size state (the old reversed way can result in incorrect
  history-size-retracement in cases where a position is lessened but
  not completely exited).
- drop `Position.add_clear()` and instead just manually add entries
  inside `.update_from_trans()`, and also add an `accum_size` and `ppu`
  field to every entry, thus creating a position "history" sequence of
  the ppu and accum size for every position and preparing for being
  able to show "position lifetimes" in the UI.
- move fqsn getting into `Position.to_pretoml()`.
2022-07-26 12:09:59 -04:00
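A worked sketch of the ppu recurrence described above (not piker's exact `Position.calc_ppu()`): when a clear *increases* the accumulated size, the new ppu is the size-weighted blend of the prior ppu and the clear's price plus per-unit cost; when it *decreases* the size, the ppu is unchanged. Polarity-flip handling is simplified here:

```python
def calc_ppu(clears: list[dict], cost_scalar: float = 1.0) -> float:
    accum_size = 0.0
    ppu = 0.0

    for clear in clears:
        size = clear['size']          # signed: +ve buy, -ve sell
        price = clear['price']
        cost = clear.get('cost', 0.0) * cost_scalar

        new_size = accum_size + size
        is_entry = abs(new_size) > abs(accum_size)   # did we add exposure?

        if is_entry:
            # weight prior ppu by prior size, new price by clear size
            ppu = (
                (ppu * abs(accum_size))
                + (price * abs(size))
                + cost
            ) / abs(new_size)
        # reductions realize pnl but leave the break-even price as-is

        accum_size = new_size
        if accum_size == 0:
            ppu = 0.0                 # flat: reset

    return ppu


# 2 buys then a partial sell: ppu stays at the blended entry price
clears = [
    {'size': 10, 'price': 100.0},
    {'size': 10, 'price': 110.0},    # ppu -> 105.0
    {'size': -5, 'price': 120.0},    # reduction: ppu still 105.0
]
assert calc_ppu(clears) == 105.0
```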
Tyler Goodlet 5520e9ef21 Minimize clears and audit sizing for all updates in `.update_from_trans()` 2022-07-26 12:09:59 -04:00
Tyler Goodlet 958e542f7d Drop `.lifo_update()`, add `.audit_sizing()`
Use the new `.calc_[be_price/size]()` methods when serializing to and
from the `pps.toml` format and add an audit method which will warn about
mismatched values and assign the clears table calculated values pre-write.

Drop the `.lifo_update()` method and instead allow both
`.size`/`.be_price` properties to exist (for non-ledger related uses of
`Position`) alongside the new calc methods and only get fussy about
*what* the properties are set to in the case of ledger audits.

Also changes `Position.update()` -> `.add_clear()`.
2022-07-25 12:06:52 -04:00
goodboy 927bbc7258
Merge pull request #364 from pikers/historical_breakeven_pp_price
Add non-state-incremented calculation methods
2022-07-25 09:24:26 -04:00
Tyler Goodlet 45bef0cea9 Add non-state-incremented calculation methods
Since we're going to need them anyway for desired features, add
2 new `Position` methods:
- `.calc_be_price()` which computes the breakeven cost basis price
  from the entries in the clears table.
- `.calc_size()` which just sums the clear sizes.

Add a `cost_scalar: float` control to the `.update_from_trans()` method
to allow manual adjustment of the cost weighting for the case where
a "non-symmetrical" model is wanted.

Go back to always trying to write the backing ledger files on exit, even
when there's an error (obvs without the `return`-in-the-`finally:`-block
f$#%-up).
2022-07-23 19:39:47 -04:00
goodboy a3d46f713e
Merge pull request #361 from pikers/pptables
`PpTable`s
2022-07-21 17:54:43 -04:00
Tyler Goodlet 5684120c11 Wow, drop idiotic `return` inside `finally:`
Can't believe i missed this but any `return` inside a `finally` will
suppress the error from the `try:` part... XD

Thought i was losing my mind when the ledger was mutated and then
an error just after wasn't getting raised.. lul.

Never again...
2022-07-21 17:52:44 -04:00
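A tiny reproduction of the foot-gun above: a `return` inside `finally:` silently swallows any in-flight exception from the `try:` body.

```python
def swallows_errors() -> str:
    try:
        raise RuntimeError('ledger got mutated!')
    finally:
        return 'all good?'   # the RuntimeError never propagates


assert swallows_errors() == 'all good?'   # no exception reaches the caller
```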
Tyler Goodlet ddb195ed2c Add a flag to prevent writing `pps.toml` on exit 2022-07-21 17:52:44 -04:00
Tyler Goodlet 6747831677 Don't pop zero pps from table in `.dump_active()`
In order to avoid double transaction adds/updates and too-early-discard
of zero sized pps (like when trades are loaded from a backend broker but
were already added to a ledger or `pps.toml` prior) we now **don't** pop
such `Position` entries from the `.pps` table in order to keep each
position's clears table always in place. This avoids the edge case where
an entry was removed too early (due to zero size) but then duplicate
trade entries that were in that entry's clears show up from the backend
and are entered into a new entry, resulting in an incorrect size in the
new entry.. We still only push non-net-zero entries to the `pps.toml`.

More fixes:
- return the updated set of `Positions` from `.lifo_update()`.
- return the full table set from `update_pps()`.
- use `PpTable.update_from_trans()` more throughout.
- always write the `pps.toml` on `open_pps()` exit.
- only return table from `load_pps_from_toml()`.
2022-07-21 17:52:44 -04:00
Tyler Goodlet 9326379b04 Add a `PpTable` type, give it the update methods
In an effort to begin allowing backends to have more granular control
over position updates, particular in the case where they need to be
reloaded from a trades ledger, this adds a new table API which can
be loaded using `open_pps()`.

- offer an `.update_trans()` method which takes in a `dict` of
  `Transactions` and updates the current table of `Positions` from it.
- add a `.dump_active()` which renders the active pp entries dict in
  a format ready for toml serialization and all closed positions since
  the last update (we might want to not drop these?)

All other module-function apis currently in use should remain working as
before for the moment.
2022-07-21 17:52:44 -04:00
Tyler Goodlet 09d9a7ea2b Expect `<brokermod>.norm_trade_records()` to return `dict` 2022-07-21 17:52:44 -04:00
Tyler Goodlet 45871d5846 Freeze transactions, add todo notes for incr update 2022-07-21 17:52:44 -04:00
goodboy bf7a49c19b
Merge pull request #358 from pikers/fix_forex
Fix forex
2022-07-21 17:52:08 -04:00
goodboy 0a7fce087c
Merge pull request #362 from pikers/ahab_you_bad_boi
Revert to hard container kill on log error
2022-07-21 17:51:11 -04:00
Tyler Goodlet d3130ca04c Revert to hard container kill on log error 2022-07-21 17:00:36 -04:00
Tyler Goodlet e30a3c5b54 Single chart requires view reset to size to data on startup 2022-07-21 11:39:10 -04:00
Tyler Goodlet 2393965e83 Fix bottom axis when no fsps/subplots 2022-07-21 11:39:04 -04:00
Tyler Goodlet fb39da19f4 Add option and adhoc meta-info support to `con2fqsn()` 2022-07-21 11:38:53 -04:00
Tyler Goodlet a27431c34f Unify contract->fqsn translation with new cached-helper 2022-07-21 11:38:42 -04:00
Tyler Goodlet 070b9f3dc1 Log msg tweak 2022-07-19 09:58:43 -04:00
goodboy f2dba44169
Merge pull request #360 from pikers/fsp_shm_caching
Fsp shm caching
2022-07-19 09:55:27 -04:00
Tyler Goodlet 0ef5da0881 Unbreak regular searches and stock lookups..
Change `.find_contract()` -> `.find_contracts()` to allow multi-search
for so called "ambiguous" contracts (like for `Future`s) such that the
method now returns a `list` of tracts and populates the contract cache
with all specific tracts retrieved. Let it take in an (unvalidated)
contract that will be fqsn-style-tokenized such that it can be called
from `.search_symbols()` (though we're not quite there yet XD).

More stuff,

- add `Client.parse_patt2fqsn()` which is an fqsn to token unpacker
  built from the original logic in the old `.find_contract()`.
- handle fiat/forex pairs with the `'CASH'` sectype.
- add a flag to allow unqualified contracts to fail with a warning msg.
- populate the client's contract cache with all expiries of
  an ambiguous derivative.
- allow `.con_deats()` to warn msg instead of raise on def-not-found.
- add commented `assert 0` which was triggering a debugger deadlock in
  `tractor` which we still haven't been able to create a unit test for.
2022-07-19 09:42:01 -04:00
Tyler Goodlet 0580b204a3 A `size` field in ticks is optional 2022-07-19 09:41:37 -04:00
Tyler Goodlet 6ce699ae1f Repair display loop to work when no vlm chart is loaded 2022-07-19 09:41:37 -04:00
Tyler Goodlet 3aa72abacf Primary exchange can never be "smart" 2022-07-19 09:41:37 -04:00
Tyler Goodlet 04004525c1 Specifically denote no-vlm contracts in symbol info 2022-07-19 09:41:37 -04:00
Tyler Goodlet a7f0adf1cf Make forex rt feeds work again 2022-07-19 09:41:37 -04:00
Tyler Goodlet cef511092d Support `Forex` in the pp packer 2022-07-19 09:41:37 -04:00
Tyler Goodlet 4e5df973a9 Support `Forex` tracts in `normalize()` 2022-07-19 09:41:37 -04:00
Tyler Goodlet 6a1a62d8c0 Add (hacky) forex pair support to `Client.find_contract()` 2022-07-19 09:41:37 -04:00
Tyler Goodlet e0491cf2e7 Cache fsp ``ShmArrays`` where possible
Minimize calling `.data._shmarray.attach_shm_array()` as much as
possible to avoid the crash from #332. This is the suggested hack from
issue #359.

Resolves https://github.com/pikers/piker/issues/359
2022-07-19 09:07:40 -04:00
Tyler Goodlet 90bc9b9730 Only 4k seconds of 1s ohlc when no tsdb 2022-07-19 09:07:27 -04:00
goodboy f449672c68
Merge pull request #357 from pikers/paper_eng_msg_fixes
Oof, paper engine msg fixes after using `msgspec.Struct`..
2022-07-11 13:14:39 -04:00
Tyler Goodlet fd22f45178 Oof, paper engine msg fixes after using `msgspec.Struct`.. 2022-07-11 13:04:07 -04:00
goodboy 37f634a2ed
Merge pull request #353 from pikers/drop_pydantic
Drop `pydantic`
2022-07-09 14:15:50 -04:00
Tyler Goodlet dfee9dd97e Remove `pydantic` from deps 2022-07-09 13:10:09 -04:00
Tyler Goodlet 2a99f7a4d7 Drop remaining `BaseModel` api usage from rest of codebase 2022-07-09 12:38:17 -04:00
Tyler Goodlet b44e2d9ed9 Support `0` value `reqid`s 🤦 2022-07-09 12:10:23 -04:00
Tyler Goodlet 795d4d76f4 Add some todo-reminders for ``msgspec`` stuff 2022-07-09 12:09:50 -04:00
Tyler Goodlet c26acb1fa8 Add `Struct.copy()` which does a rountrip validate 2022-07-09 12:09:38 -04:00
Tyler Goodlet 11b6699a54 Change all clearing msgs over to `msgspec` 2022-07-09 12:09:38 -04:00
Tyler Goodlet f9bdd643cf Cast slots to `int` before range set 2022-07-09 12:09:38 -04:00
Tyler Goodlet 2baea21c7d Drop pydantic from allocator 2022-07-09 12:09:38 -04:00
Tyler Goodlet bea0111753 Add a custom `msgspec.Struct` with some humanizing 2022-07-09 12:09:38 -04:00
Tyler Goodlet c870665be0 Remove `BaseModel` use from all dataclass-like uses 2022-07-09 12:08:41 -04:00
Tyler Goodlet 4ff1090284 Use struct for shm tokens 2022-07-09 12:06:47 -04:00
Tyler Goodlet f22461a844 Use our struct for kraken `Pair` type 2022-07-09 12:06:47 -04:00
Tyler Goodlet 458c7211ee Drop `pydantic` from service mngr 2022-07-09 12:06:47 -04:00
Tyler Goodlet 5cc4b19a7c Use our struct in binance backend 2022-07-09 12:06:47 -04:00
goodboy f5236f658b
Merge pull request #356 from pikers/null_last_quote_fix
Finally solve the last-price-is-`nan` issue..
2022-07-08 17:47:45 -04:00
goodboy a360b66cc0
Merge pull request #355 from pikers/ahab_hardkill
Ahab hardkill
2022-07-08 17:47:17 -04:00
Tyler Goodlet 4bcb791161 Finally solve the last-price-is-`nan` issue..
Not sure why I put this off for so long but the check is in now such
that if the market isn't open or no rt quote comes in from the first
query, we just pull from the last shm history 'close' value.
Includes another fix to avoid raising when a double remove of the
client-side stream from the registry sometimes happens.
2022-07-08 17:30:34 -04:00
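A sketch of the fallback described above: if no rt quote (or a `nan` last price) comes back on the first query, seed from the most recent 'close' in the shm ohlc history; the array layout here is an assumption:

```python
import math
import numpy as np


def resolve_last_price(first_quote: dict, ohlc_hist: np.ndarray) -> float:
    last = first_quote.get('last')
    if last is None or math.isnan(last):
        # market closed / no rt quote yet: use the last recorded close
        last = float(ohlc_hist['close'][-1])
    return last


hist = np.array(
    [(99.5,), (100.25,)],
    dtype=[('close', float)],
)
assert resolve_last_price({'last': float('nan')}, hist) == 100.25
```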
Tyler Goodlet 4c7c78c815 Add a `ApplicationLogError` custom exc instead 2022-07-08 17:29:03 -04:00
Tyler Goodlet 019867b413 Fix missing container id, drop custom exception 2022-07-08 17:22:37 -04:00
Tyler Goodlet f356fb0a68 Hard kill container on both a timeout or connection error 2022-07-08 17:22:37 -04:00
goodboy 756249ff70
Merge pull request #348 from pikers/notokeninwswrapper
Drop token attr from `NoBsWs`
2022-07-05 20:57:30 -04:00
goodboy 419ebebe72
Merge pull request #346 from pikers/kraken_ledger_pps
Kraken ledger pps
2022-07-05 20:56:44 -04:00
goodboy a229996ebe
Merge pull request #350 from pikers/ib_rt_pp_update_hotfix
`ib` rt pps update hotfix..
2022-07-05 20:55:14 -04:00
Tyler Goodlet af01e89612 Create sub-pkg logger once during import 2022-07-05 16:59:47 -04:00
Tyler Goodlet 609034c634 Fix typo / line length 2022-07-05 16:46:31 -04:00
Tyler Goodlet 95dd0e6bd6 `ib` rt pps update hotfix..
Not sure how this didn't get caught in usage, but basically real-time
updates got broken by a rework of `update_ledger_from_api_trades()`.
The issue is that the ledger was being updated **before** calling
`piker.pp.update_pps_conf()` which resulted in the `Position.size`
not being updated correctly since the [latest added] clears passed
in via the `trade_records` arg were already found in the `.clears` table
and thus were causing the loop to skip the `Position.lifo_update()`
call..

The solution here is to not update the ledger **until after** we call
`update_pps_conf()` - it's more read/writes but it's correct and we can
figure out a less io-heavy way to do the file writing later.

Further this includes a fix to avoid double emitting a pp update caused
by non-thorough logic that waits for a commission report to arrive
during a fill event; previously we were emitting the same message twice
due to the lack of a check for an existing comms report in the case
where the report arrives *after* the fill.
2022-07-05 16:25:11 -04:00
goodboy 479ad1bb15
Merge pull request #347 from pikers/pps_postmortem
Pps postmortem
2022-07-04 15:28:27 -04:00
Tyler Goodlet d506235a8b Drop token attr from `NoBsWs` 2022-07-03 17:07:35 -04:00
Tyler Goodlet 7846446a44 Add real-time incremental pp updates
Moves to using the new `piker.pp` apis to both store real-time trade
events in a ledger file as well emit position update msgs (which were
not in this backend at all prior) when new orders clear (aka fill).

In terms of outstanding issues,
- solves the pp update part of the bugs reported in #310
- starts a msg case block in prep for #293

Details of rework:
- move the `subscribe()` ws fixture to module level and `partial()` in
  the client token instead of passing it to the instance; in prep for
  removal of the `.token` attr from the `NoBsWs` wrapper.
- drop `make_auth_sub()` since it was too thin and we can just
  do it all succinctly in `subscribe()`
- filter trade update msgs to those not yet stored in the toml ledger
- much better kraken api msg unpacking using new `match:` syntax B)

Resolves #311
2022-07-03 14:52:27 -04:00
Tyler Goodlet 214f864dcf Handle ws style symbol schema 2022-07-03 14:37:15 -04:00
Tyler Goodlet 4c0f2099aa Send fill msg first 2022-07-03 11:19:33 -04:00
Tyler Goodlet aea7bec2c3 Inline `process_trade_msgs()` into relay loop 2022-07-03 11:18:45 -04:00
Tyler Goodlet 47777e4192 Use new `str.removeprefix()` from py3.10 2022-07-02 16:20:22 -04:00
Tyler Goodlet f6888057c3 Just do a naive lookup for symbol normalization 2022-07-02 16:20:22 -04:00
Tyler Goodlet f65f56ec75 Initial `piker.pp` ledger support for `kraken`
No real-time update support (yet) but this is the first draft at writing
trades ledgers and `pps.toml` entries for the kraken backend.

Deatz:
- drop `pack_positions()`, no longer used.
- use `piker.pp` apis to both write a trades ledger file and update the
  `pps.toml` inside the `trades_dialogue()` endpoint startup.
- drop the weird paper engine swap over if auth can't be done, we should
  be doing something with messaging in the ems over this..
- more web API error response raising.
- pass the `pp.Transaction` set loaded from ledger into
  `process_trade_msgs()` to avoid duplicate sends of already collected
  trades msgs.
- add `norm_trade_records()` public endpoint (used by `piker.pp` api)
  and `update_ledger()` helper.
- rejig `process_trade_msgs()` to drop the weird `try:` assertion block
  and skip already-recorded-in-ledger trade msgs as well as yield *each*
  trade instead of sub-sequences.
2022-07-02 16:20:22 -04:00
Tyler Goodlet 5d39b04552 Invert normalizer branching logic, raise on edge case 2022-07-02 16:20:22 -04:00
Tyler Goodlet 735fbc6259 Raise any error from response 2022-07-02 16:20:22 -04:00
Tyler Goodlet fcd7e0f3f3 Avoid crash on trades ledger msgs
Just ignore them for now using new `match:` syntax B)
but we'll do incremental update sooon!

Resolves #311
2022-07-02 16:20:22 -04:00
Tyler Goodlet 9106d13dfe Drop wacky if block logic, while loop, handle errors and prep for async batching 2022-07-02 16:20:22 -04:00
Tyler Goodlet d3caad6e11 Factor data feeds endpoints into new sub-mod 2022-07-02 16:20:22 -04:00
Tyler Goodlet f87a2a810a Make broker mod import from new api mod 2022-07-02 16:20:21 -04:00
Tyler Goodlet 208e2e9e97 Move core api code into sub-module 2022-07-02 16:20:21 -04:00
Tyler Goodlet 90cc6eb317 Factor clearing related endpoints into new `.kraken.broker` submod 2022-07-02 16:20:21 -04:00
Tyler Goodlet b118becc84 Start `kraken` sub-pkg 2022-07-02 16:20:21 -04:00
Tyler Goodlet 7442d68ecf Drop nesting level from emsd's pp cacheing, adjust order mode 2022-07-02 16:19:58 -04:00
Tyler Goodlet 076c167d6e Fix ib pkg mod doc string 2022-07-02 16:14:34 -04:00
Tyler Goodlet 64d8cd448f Right, handle brand-new pp case.. 2022-07-02 16:14:34 -04:00
Tyler Goodlet ec6a28a8b1 Drop stale comment 2022-07-02 16:14:34 -04:00
Tyler Goodlet cc15d02488 Fix `.minimize_clears()` to include clears since zero
This was just implemented totally wrong but somehow worked XD

The idea was to include all trades that contribute to ongoing position
size since the last time the position was "net zero", i.e. no position
in the asset. Adjust arithmetic to *subtract* from the current size
until a zero size condition is met and then keep all those clears as
part of the "current state" clears table.

Additionally this fixes another bug where the positions freshly loaded
from a ledger *were not* being merged with the current `pps.toml` state.
2022-07-02 16:14:34 -04:00
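A sketch of the corrected `.minimize_clears()` idea: walk the clears backwards, subtracting each size from the current total, and keep everything encountered until the running size hits zero, i.e. back to the last time the position was flat. The record shape is an assumption:

```python
def minimize_clears(clears: list[dict]) -> list[dict]:
    size = sum(c['size'] for c in clears)
    if size == 0:
        return []   # currently flat: no clears contribute to current state

    keep: list[dict] = []
    for clear in reversed(clears):
        keep.append(clear)
        size -= clear['size']
        if size == 0:
            break
    return list(reversed(keep))


# a round-trip to flat followed by a fresh entry: only the last clear
# contributes to the *current* position state
clears = [
    {'size': 5, 'price': 10.0},
    {'size': -5, 'price': 11.0},
    {'size': 3, 'price': 12.0},
]
assert minimize_clears(clears) == [{'size': 3, 'price': 12.0}]
```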
goodboy d5bc43e8dd
Merge pull request #336 from pikers/lifo_pps_ib
LIFO/"breakeven" pps for `ib`
2022-06-29 10:07:56 -04:00
Tyler Goodlet 287a2c8396 Put swb2 in venue filter for now 2022-06-29 10:00:38 -04:00
Tyler Goodlet 453ebdfe30 Fix field name to new `.bsuid` 2022-06-28 10:07:57 -04:00
Tyler Goodlet 2b1fb90e03 Add tractor breaker assert.. 2022-06-28 10:07:57 -04:00
Tyler Goodlet 695ba5288d Comment-drop adhoc symbol (futes) matching in search 2022-06-28 10:07:57 -04:00
Tyler Goodlet d6c32bba86 Use new adhoc sym map for symbols without exchange tags (usually futes) 2022-06-28 10:07:57 -04:00
Tyler Goodlet fa89207583 Use sign of the new size which indicates direction of position 2022-06-28 10:07:57 -04:00
Tyler Goodlet 557562e25c Build out adhoc sym map from futes list 2022-06-28 10:07:57 -04:00
Tyler Goodlet c6efa2641b Cost part of position breakeven calc is direction dependent 2022-06-28 10:07:57 -04:00
Tyler Goodlet 8a7e391b4e Terser startup msg fields 2022-06-28 10:07:57 -04:00
Tyler Goodlet aec48a1dd5 Right, zero sized "closed out" msgs are totally fine 2022-06-28 10:07:57 -04:00
Tyler Goodlet 87f301500d Simplify updates to single-pass, fix clears minimizing
Gah, there was a remaining bug where if you tried to update the pps state
with both new trades and from the ledger you'd do a double add of
transactions that were cleared during an `update_pps()` loop. Instead now
keep all clears intact until ready to serialize to the `pps.toml` file,
in which case we call a new method `Position.minimize_clears()` which
does the work of only keeping clears since the last net-zero size.

Re-implement `update_pps_conf()` update logic as a single pass loop
which does expiry and size checking for closed pps all in one pass thus
allowing us to drop `dump_active()` which was kinda redundant anyway..
2022-06-28 10:07:57 -04:00
Tyler Goodlet 566a54ffb6 Reset the clears table on zero size conditions 2022-06-28 10:07:57 -04:00
Tyler Goodlet f9c4b3cc96 Fixes for newly opened and closed pps
Before we weren't emitting pp msgs when a position went back to "net
zero" (aka the size is zero) nor when a new one was opened (wasn't
previously loaded from the `pps.toml`). This reworks a bunch of the
incremental update logic as well as ports to the changes in the
`piker.pp` module:

- rename a few of the normalizing helpers to be more explicit.
- drop calling `pp.get_pps()` in the trades dialog task and instead
  create msgs iteratively, per account, by iterating through collected
  position and API trade records and calling
  `pp.update_pps_conf()`.
- always from-ledger-update both positions reported from ib's pp sys and
  session api trades detected on ems-trade-dialog startup.
- `update_ledger_from_api_trades()` now does **just** that: only updates
  the trades ledger and returns the transaction set.
- `update_and_audit_msgs()` now takes only the input list of msgs and
  properly generates new msgs for newly created positions that weren't
  previously loaded from the `pps.toml`.
2022-06-28 10:07:57 -04:00
Tyler Goodlet a12e6800ff Support per-symbol reload from ledger pp loading
- use `tomli` package for reading since it's the fastest pure python
  reader available apparently.
- add new fields to each pp's clears table: price, size, dt
- make `load_pps_from_toml()`'s `reload_records` a dict that can be
  passed in by the caller and is verbatim used to re-read a ledger and
  filter to the specified symbol set to build out fresh pp objects.
- add a `update_from_ledger: bool` flag to `load_pps_from_toml()`
  to allow forcing a full backend ledger read.
- if a set of trade records is passed into `update_pps_conf()` parse
  out the meta data required to cause a ledger reload as per 2 bullets
  above.
- return active and closed pps in separate by-account maps from
  `update_pps_conf()`.
- drop the `key_by` kwarg.
2022-06-28 10:07:57 -04:00
Tyler Goodlet cc68501c7a Make pp msg `.currency` not required 2022-06-28 10:07:57 -04:00
Tyler Goodlet 7ebf8a8dc0 Add `tomli` as dep being fastest in the west 2022-06-28 10:07:57 -04:00
Tyler Goodlet 4475823e48 Add draft ip-mismatch skip case 2022-06-28 10:07:57 -04:00
Tyler Goodlet 3713288b48 Strip ib prefix before acctid use 2022-06-28 10:07:57 -04:00
Tyler Goodlet 4fdfb81876 Support re-processing a filtered ledger entry set
This makes it possible to refresh a single fqsn-position in one's
`pps.toml` by simply deleting the file entry, in which case, if there are
new trade records passed to `load_pps_from_toml()` via the new
`reload_records` kwarg, then the backend ledger entries matching that
symbol will be filtered and used to recompute a fresh position.

This turns out to be super handy when you have crashes that prevent
a `pps.toml` entry from being updated correctly but where the ledger
does have all the data necessary to calculate a fresh correct entry.
2022-06-28 10:07:57 -04:00
Tyler Goodlet f32b4d37cb Support pp audits with multiple accounts 2022-06-28 10:07:56 -04:00
Tyler Goodlet 2063b9d8bb Drop ledger entries that have no transaction id 2022-06-28 10:07:56 -04:00
Tyler Goodlet fe14605034 Fix null case return 2022-06-28 10:07:56 -04:00
Tyler Goodlet 68b32208de Key pps by bsuid to avoid incorrect disparate entries 2022-06-28 10:07:56 -04:00
Tyler Goodlet f1fe369bbf Write clears table as a list of tables in toml 2022-06-28 10:07:56 -04:00
Tyler Goodlet 16b2937d23 Passthrough toml lib kwargs 2022-06-28 10:07:56 -04:00
Tyler Goodlet bfad676b7c Add expiry and datetime support to ledger parsing 2022-06-28 10:07:56 -04:00
Tyler Goodlet c617a06905 Port everything to `Position.be_price` 2022-06-28 10:07:56 -04:00
Tyler Goodlet ff74f4302a Support pp expiries, datetimes on transactions
Since some positions obviously expire and thus shouldn't continually
exist inside a `pps.toml`, add naive support for tracking and discarding
expired contracts:
- add `Transaction.expiry: Optional[pendulum.datetime]`.
- add `Position.expiry: Optional[pendulum.datetime]` which can be parsed
  from a transaction ledger.
- only write pps with a non-none expiry to the `pps.toml`
- change `Position.avg_price` -> `.be_price` (be is "breakeven")
  since it's a much less ambiguous name.
- change `load_pps_from_ledger()` to *not* call `dump_active()` since
  for the only use case it ends up getting called later anyway.
2022-06-28 10:07:56 -04:00
Tyler Goodlet 21153a0e1e Ugh, hack our own toml encoder since it seems everything in the lib is half-baked.. 2022-06-28 10:07:56 -04:00
Tyler Goodlet b6f344f34a Only emit pps msg for trade triggering instrument
We can probably make this better (and with fewer file sys accesses) later
such that we keep a consistent pps state in mem and only write async
maybe from another side-task?
2022-06-28 10:07:56 -04:00
Tyler Goodlet ecdc747ced Allow packing pps by a different key set 2022-06-28 10:07:56 -04:00
Tyler Goodlet 5147cd7be0 Drop global proxies table, isn't multi-task safe.. 2022-06-28 10:07:56 -04:00
Tyler Goodlet 3dcb72d429 Only finally-write around the ledger yield up 2022-06-28 10:07:56 -04:00
Tyler Goodlet fbee33b00d Get real-time trade oriented pp updates workin
What a nightmare this was.. main holdup was that cost (commissions)
reports are fired independently from "fills" so you can't really emit
a proper full position update until they both arrive.

Deatz:
- move `push_tradesies()` and relay loop in `deliver_trade_events()` to
  the new py3.10 `match:` syntax B)
- subscribe for, and handle `CommissionReport` events from `ib_insync`
  and repack as a `cost` event type.
- handle cons with no primary/listing exchange (like futes) in
  `update_ledger_from_api_trades()` by falling back to the plain
  'exchange' field.
- drop reverse fqsn lookup from ib positions map; just use contract
  lookup for api trade logs since we're already connected..
- make validation in `update_and_audit()` optional via flag.
- pass in the accounts def, ib pp msg table and the proxies table to the
  trade event relay task-loop.
- add `emit_pp_update()` to encapsulate a full api trade entry
  incremental update which calls into the `piker.pp` apis to,
  - update the ledger
  - update the pps.toml
  - generate a new `BrokerdPosition` msg to send to the ems
- adjust trades relay loop to only emit pp updates when a cost report
  arrives for the fill/execution by maintaining a small table per exec
  id.
2022-06-28 10:07:56 -04:00
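A sketch of the fill/commission pairing described above: since `ib` emits the fill and its cost report as separate events, stash whichever arrives first keyed by execution id and only emit the pp update once both halves are present. Names here are illustrative:

```python
_pending: dict[str, dict] = {}   # exec_id -> {'fill': ..., 'cost': ...}


def on_event(exec_id: str, kind: str, data: object) -> dict | None:
    entry = _pending.setdefault(exec_id, {})
    entry[kind] = data                      # kind is 'fill' or 'cost'

    if 'fill' in entry and 'cost' in entry:
        _pending.pop(exec_id)               # complete: emit exactly once
        return {'exec_id': exec_id, **entry}
    return None                             # still waiting on the other half


assert on_event('e1', 'fill', {'size': 1}) is None
update = on_event('e1', 'cost', {'commission': 0.35})
assert update and update['fill'] == {'size': 1}
assert on_event('e1', 'cost', {'commission': 0.35}) is None   # no double emit
```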
Tyler Goodlet 3991d8f911 Add `update_and_audit()` in prep for rt per-trade-event pp updates 2022-06-28 10:07:56 -04:00
Tyler Goodlet 7b2e8f1ba5 Return object form from `update_pps_conf()` 2022-06-28 10:07:56 -04:00
Tyler Goodlet cbcbb2b243 Filter pps loading to client-active accounts set 2022-06-28 10:07:56 -04:00
Tyler Goodlet cd3bfb1ea4 Maybe load from ledger in `get_pps()`, allow account filtering 2022-06-28 10:07:56 -04:00
Tyler Goodlet 82b718d5a3 Many, many `ib` trade log schema hackz
I don't want to rant too much any more since it's pretty clear `ib` has
either zero concern for its (api) users or a severely terrible data
management team and/or general inter-team coordination system, but this
patch more or less hacks the flex report records to be similar enough to
API "execution" / "fill" records such that they can be similarly
normalized and stored as well as processed for position calculations..

Dirty deats,
- use the `IB.fills()` method for pulling current session trade events
  since it's both recommended in the docs and does seem to capture
  more extensive meta-data.
- add an `update_ledger_from_api()` helper which does all the insane work
  of making sure api trade entries are usable both within piker's global
  fqsn system and also compatible with incremental updates of positions
  computed from trade ledgers derived from ib's "flex reports".
- add "auditting" of `ib`'s reported positioning API messages by
  comparison with piker's new "traders first" breakeven price style and
  complain via logging on mismatches.
- handle buy vs. sell arithmetic (via a +ve or -ve multiplier) to make
  "size" arithmetic work for API trade entries..
- draft out options contract transaction parsing but skip in pps
  generation for now.
- always use the "execution id" as ledger keys both in flex and api
  trade processing.
- for whatever weird reason `ib_insync` doesn't include the so called
  "primary exchange" in contracts reported in fill events, so do manual
  contract lookups in such cases such that pps entries can be placed
  in the right fqsn section...

Still ToDo:
- incremental update on trade clears / position updates
- pps audit from ledger depending on user config?
2022-06-28 10:07:56 -04:00
Tyler Goodlet 05a1a4e3d8 Use new `Position.bsuid` field throughout 2022-06-28 10:07:56 -04:00
Tyler Goodlet 412138a75b Add transaction costs to "fills"
This makes a few major changes but mostly is centered around including
transaction (aka trade-clear) costs in the avg breakeven price
calculation.

TL;DR:
- rename `TradeRecord` -> `Transaction`.
- make `Position.fills` a `dict[str, float]` which holds each clear's
  cost value.
- change `Transaction.symkey` -> `.bsuid` for "backend symbol unique id".
- drop `brokername: str` arg to `update_pps()`
- rename `._split_active()` -> `dump_active()` and use input keys
  verbatim in output map.
- in `update_pps_conf()` always incrementally update from trade records
  even when no `pps.toml` exists yet since it may be both the case that
  the ledger needs loading **and** the caller is handing new records not
  yet in the ledger.
2022-06-28 10:07:56 -04:00
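for the cost-inclusive breakeven idea above, a worked sketch (the exact
accounting in `piker.pp` may differ) of folding each clear's cost into the
average entry price whenever the position is being increased::

    def update_breakeven(
        avg_price: float,
        size: float,
        clear_size: float,
        clear_price: float,
        cost: float = 0.0,
    ) -> tuple[float, float]:
        new_size = size + clear_size
        if new_size == 0:
            return 0.0, 0.0
        if size * clear_size >= 0:
            # increasing (or opening) the position: fold the clear's
            # transaction cost into the avg breakeven price.
            avg_price = (
                abs(size) * avg_price
                + abs(clear_size) * clear_price
                + cost
            ) / abs(new_size)
        # reducing the position leaves the breakeven price unchanged
        return avg_price, new_size

    print(update_breakeven(100.0, 10, 5, 106.0, cost=1.5))  # (102.1, 15)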
Tyler Goodlet c1b63f4757 Use `IB.fills()` method for `Client.trades()` 2022-06-28 10:07:56 -04:00
Tyler Goodlet 5d774bef90 Move `open_trade_ledger()` to pp mod, add `get_pps()` 2022-06-28 10:07:56 -04:00
Tyler Goodlet de77c7d209 Better doc strings and detailed comments 2022-06-28 10:07:56 -04:00
Tyler Goodlet ce1eb11b59 Use new ledger pps but cross-ref with what ib says 2022-06-28 10:07:56 -04:00
Tyler Goodlet b629ce177d Ensure `.fills` are filled in during object construct.. 2022-06-28 10:07:56 -04:00
Tyler Goodlet 73fa320917 Cut schema-related comment down to major sections 2022-06-28 10:07:56 -04:00
Tyler Goodlet dd05ed1371 Implement updates and write to config: `pps.toml`
Begins the position tracking incremental update API which supports both
constructing a `pps.toml` from trade ledgers as well as diff-oriented
incremental updates from an existing config assumed to have been
previously generated from some prior ledger.

New set of routines includes:
- `_split_active()` a helper to split a position table into the active
  and closed positions (aka pps of size 0) for determining entry updates
  in the `pps.toml`.
- `update_pps_conf()` to maybe load a `pps.toml` and update it from
   an input trades ledger including necessary (de)serialization to and
   from `Position` object form(s).
- `load_pps_from_ledger()` a ledger parser-loader which constructs
  a table of pps strictly from the broker-account ledger data without
  any consideration for any existing pps file.

Each "entry" in `pps.toml` also contains a `fills: list` attr (name may
change) which references the set of trade records which make up its
state since the last net-zero position in the instrument.
2022-06-28 10:07:56 -04:00
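a minimal sketch of the active/closed split performed by `_split_active()`
(positions shown as plain dicts here; the real code works on `Position`
objects)::

    def split_active(
        pps: dict[str, dict],
    ) -> tuple[dict[str, dict], dict[str, dict]]:
        active, closed = {}, {}
        for key, pp in pps.items():
            # a size-zero pp is "closed" and only needs archiving
            (closed if pp['size'] == 0 else active)[key] = pp
        return active, closed

    active, closed = split_active({
        'mnq.globex': {'size': 2, 'avg_price': 17890.25},
        'qqq.nasdaq': {'size': 0, 'avg_price': 0.0},
    })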
Tyler Goodlet 2a641ab8b4 Call it `pps.toml`, allows toml passthrough kwargs 2022-06-28 10:07:56 -04:00
Tyler Goodlet f8f7ca350c Extend trade-record tools, add ledger to pps extraction
Add a `TradeRecord` struct which holds the minimal field set to build
out position entries. Add `.update_pps()` to convert a set of records
into LIFO position entries, optionally allowing for an update to some
existing pp input set. Add `load_pps_from_ledger()` which does a full
ledger extraction to pp objects, ready for writing a `pps.toml`.
2022-06-28 10:07:56 -04:00
Tyler Goodlet 88b4ccc768 Add API trade/exec entry parsing and ledger updates
Since "flex reports" are only available for the current session's trades
the day after, this adds support for also collecting trade execution
records for the current session and writing them to the equivalent
ledger file.

Summary:
- add `trades_to_records()` to handle parsing both flex and API event
  objects into a common record form.
- add `norm_trade_records()` to handle converting ledger entries into
  `TradeRecord` types from the new `piker.pps` mod (coming in next
  commit).
2022-06-28 10:07:56 -04:00
Tyler Goodlet eb2bad5138 Make our `Symbol` a `msgspec.Struct` 2022-06-28 10:07:56 -04:00
Tyler Goodlet f768576060 Delegate paper engine pp tracking to new type 2022-06-28 10:07:56 -04:00
Tyler Goodlet add0e92335 Drop old trade log config writing code 2022-06-28 10:07:56 -04:00
Tyler Goodlet 1eb7e109e6 Start `piker.pp` module, LIFO pp updates
Start a generic "position related" util mod and bring in the `Position`
type from the allocator, convert it to a `msgspec.Struct` and add
a `.lifo_update()` method. Implement a WIP pp parser from a trades
ledger and use the new lifo method to gather position entries.
2022-06-28 10:07:56 -04:00
Tyler Goodlet 725909a94c Convert accounts table to `bidict` after config load 2022-06-28 10:07:56 -04:00
Tyler Goodlet 050aa7594c Simplify trades ledger collection to single pass loop 2022-06-28 10:07:56 -04:00
Tyler Goodlet 450009ff9c Add `open_trade_ledger()` for writing `<confdir>/ledgers/trades_<broker>_<acct>.toml` files 2022-06-28 10:07:56 -04:00
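roughly what such a ledger file round-trip can look like, as a hedged sketch
(not the actual `piker.pp` implementation; the third-party ``toml`` package is
assumed here purely for read/write)::

    from contextlib import contextmanager
    from pathlib import Path
    import toml  # assumption: any toml lib with loads/dumps would do

    @contextmanager
    def open_trade_ledger(confdir: Path, broker: str, acctid: str):
        path = confdir / 'ledgers' / f'trades_{broker}_{acctid}.toml'
        path.parent.mkdir(parents=True, exist_ok=True)
        ledger: dict = toml.loads(path.read_text()) if path.exists() else {}
        yield ledger  # caller mutates this dict in place..
        path.write_text(toml.dumps(ledger))  # ..and it's re-written on exit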
goodboy b2d5892010
Merge pull request #342 from pikers/mxmn_from_m4
Mxmn from m4
2022-06-28 10:07:17 -04:00
goodboy 5a3b465ac0
Merge pull request #344 from pikers/310_plus
Go Python 3.10+ in anticipation of upcoming feature PRs
2022-06-28 10:04:45 -04:00
Tyler Goodlet be7afdaa89 Drop commented draft quotes-drain-loop code/idea 2022-06-28 09:43:49 -04:00
Tyler Goodlet 1c561207f5 Simplify `Flow.maxmin()` block logics 2022-06-28 09:43:49 -04:00
Tyler Goodlet ed2c962bb9 Add an idempotent, graphics-state startup flag
Add `ChartPlotWidget._on_screen: bool` which allows detecting for the
first state where there is y-range-able flow data loaded and able to be
drawn. Check for this flag in `.maxmin()` such that until the
historical data is loaded `.default_view()` will be called to ensure
a blank view is never shown; the race between the UI starting and the
data layer loading flow graphics can otherwise have that outcome.
2022-06-28 09:43:49 -04:00
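a sketch of that idempotent guard (attribute and method names per the commit
message above; everything else about the real widget is elided)::

    class ChartPlotWidget:
        _on_screen: bool = False  # set once y-range-able flow data is drawn

        def default_view(self) -> None:
            print('blank default view until flow graphics are loaded')

        def maxmin(self) -> tuple[float, float]:
            if not self._on_screen:
                # avoid racing the data layer during UI startup
                self.default_view()
                return 0.0, 0.0
            return self._compute_yrange()

        def _compute_yrange(self) -> tuple[float, float]:
            return -1.0, 1.0  # placeholder for the real flow scan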
Tyler Goodlet 147ceca016 Drop unneeded render filter idea 2022-06-28 09:43:49 -04:00
Tyler Goodlet 03a7940f83 Rewrite per-pi group mxmn sorter to always expect output 2022-06-27 18:24:09 -04:00
Tyler Goodlet dd2a9f74f1 Add todo around graphics loop vlm chart mxmn sort calls 2022-06-27 18:23:13 -04:00
Tyler Goodlet 49c720af3c Add commented prints for debugging 2022-06-27 18:22:51 -04:00
Tyler Goodlet c620517543 Set zeros for `Flow.maxmin() -> None` results 2022-06-27 18:22:30 -04:00
Tyler Goodlet a425c29ef1 Play with render skip logic on non-dark vlm crypto feeds 2022-06-27 13:59:08 -04:00
Tyler Goodlet 783914c7fe Better comment, use -inf as startup min 2022-06-27 13:59:08 -04:00
Tyler Goodlet 920a394539 Use new `anext()` builtin 2022-06-27 13:59:08 -04:00
Tyler Goodlet e977597cd0 Commented for doing incrementing when downsampled, but doesn't seem to work? 2022-06-27 13:59:08 -04:00
Tyler Goodlet 7a33ba64f1 Avoid crash due to race on chart instance ref during startup? 2022-06-27 13:59:08 -04:00
Tyler Goodlet 191b94b67c POC try using yrange mxmn from m4 when downsampling 2022-06-27 13:59:08 -04:00
Tyler Goodlet 4ad7b073c3 Proxy through input y-mx/mn from `xy_downsample()` 2022-06-27 13:59:08 -04:00
Tyler Goodlet d92ff9c7a0 Return input y-range min/max values from m4 2022-06-27 13:59:08 -04:00
180 changed files with 47412 additions and 14541 deletions

View File

@ -3,9 +3,8 @@ name: CI
on: on:
# Triggers the workflow on push or pull request events but only for the master branch # Triggers the workflow on push or pull request events but only for the master branch
push:
branches: [ master ]
pull_request: pull_request:
push:
branches: [ master ] branches: [ master ]
# Allows you to run this workflow manually from the Actions tab # Allows you to run this workflow manually from the Actions tab
@ -14,19 +13,49 @@ on:
jobs: jobs:
# test that we can generate a software distribution and install it
# thus avoid missing file issues after packaging.
sdist-linux:
name: 'sdist'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '3.10'
- name: Build sdist
run: python setup.py sdist --formats=zip
- name: Install sdist from .zips
run: python -m pip install dist/*.zip
testing: testing:
name: 'install + test-suite' name: 'install + test-suite'
timeout-minutes: 10
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v3
# elastic only
# - name: Build DB container
# run: docker build -t piker:elastic dockering/elastic
- name: Setup python - name: Setup python
uses: actions/setup-python@v3 uses: actions/setup-python@v4
with: with:
python-version: '3.10' python-version: '3.10'
# elastic only
# - name: Install dependencies
# run: pip install -U .[es] -r requirements-test.txt -r requirements.txt --upgrade-strategy eager
- name: Install dependencies - name: Install dependencies
run: pip install -U . -r requirements-test.txt -r requirements.txt --upgrade-strategy eager run: pip install -U . -r requirements-test.txt -r requirements.txt --upgrade-strategy eager

View File

@ -1,222 +1,161 @@
piker piker
----- -----
trading gear for hackers. trading gear for hackers
|gh_actions| |gh_actions|
.. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpikers%2Fpiker%2Fbadge&style=popout-square .. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpikers%2Fpiker%2Fbadge&style=popout-square
:target: https://actions-badge.atrox.dev/piker/pikers/goto :target: https://actions-badge.atrox.dev/piker/pikers/goto
``piker`` is a broker agnostic, next-gen FOSS toolset for real-time ``piker`` is a broker agnostic, next-gen FOSS toolset and runtime for
computational trading targeted at `hardcore Linux users <comp_trader>`_ . real-time computational trading targeted at `hardcore Linux users
<comp_trader>`_ .
we use as much bleeding edge tech as possible including (but not limited to): we use much bleeding edge tech including (but not limited to):
- latest python for glue_ - latest python for glue_
- trio_ for `structured concurrency`_ - uv_ for packaging and distribution
- tractor_ for distributed, multi-core, real-time streaming - trio_ & tractor_ for our distributed `structured concurrency`_ runtime
- marketstore_ for historical and real-time tick data persistence and sharing - Qt_ for pristine low latency UIs
- techtonicdb_ for L2 book storage - pyqtgraph_ (which we've extended) for real-time charting and graphics
- Qt_ for pristine high performance UIs - ``polars`` ``numpy`` and ``numba`` for redic `fast numerics`_
- pyqtgraph_ for real-time charting - `apache arrow and parquet`_ for time-series storage
- ``numpy`` and ``numba`` for `fast numerics`_
.. |travis| image:: https://img.shields.io/travis/pikers/piker/master.svg potential projects we might integrate with soon,
:target: https://travis-ci.org/pikers/piker
- (already prototyped in ) techtonicdb_ for L2 book storage
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
.. _uv: https://docs.astral.sh/uv/
.. _trio: https://github.com/python-trio/trio .. _trio: https://github.com/python-trio/trio
.. _tractor: https://github.com/goodboy/tractor .. _tractor: https://github.com/goodboy/tractor
.. _structured concurrency: https://trio.discourse.group/ .. _structured concurrency: https://trio.discourse.group/
.. _marketstore: https://github.com/alpacahq/marketstore
.. _techtonicdb: https://github.com/0b01/tectonicdb
.. _Qt: https://www.qt.io/ .. _Qt: https://www.qt.io/
.. _pyqtgraph: https://github.com/pyqtgraph/pyqtgraph .. _pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue .. _apache arrow and parquet: https://arrow.apache.org/faq/
.. _fast numerics: https://zerowithdot.com/python-numpy-and-pandas-performance/ .. _fast numerics: https://zerowithdot.com/python-numpy-and-pandas-performance/
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/ .. _techtonicdb: https://github.com/0b01/tectonicdb
focus and features: focus and feats:
*******************
- 100% federated: your code, your hardware, your data feeds, your broker fills.
- zero web: low latency, native software that doesn't try to re-invent the OS
- maximal **privacy**: prevent brokers and mms from knowing your
planz; smack their spreads with dark volume.
- zero clutter: modal, context oriented UIs that echew minimalism, reduce
thought noise and encourage un-emotion.
- first class parallelism: built from the ground up on next-gen structured concurrency
primitives.
- traders first: broker/exchange/asset-class agnostic
- systems grounded: real-time financial signal processing that will
make any queuing or DSP eng juice their shorts.
- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
- data collaboration: every process and protocol is multi-host scalable.
- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.
fitting with these tenets, we're always open to new framework suggestions and ideas.
building the best looking, most reliable, keyboard friendly trading
platform is the dream; join the cause.
install
*******
``piker`` is currently under heavy pre-alpha development and as such
should be cloned from this repo and hacked on directly.
for a development install::
git clone git@github.com:pikers/piker.git
cd piker
virtualenv env
source ./env/bin/activate
pip install -r requirements.txt -e .
install for tinas
*****************
for windows peeps you can start by installing all the prerequisite software:
- install git with all default settings - https://git-scm.com/download/win
- install anaconda all default settings - https://www.anaconda.com/products/individual
- install microsoft build tools (check the box for Desktop development for C++, you might be able to uncheck some optional downloads) - https://visualstudio.microsoft.com/visual-cpp-build-tools/
- install visual studio code default settings - https://code.visualstudio.com/download
then, `crack a conda shell`_ and run the following commands::
mkdir code # create code directory
cd code # change directory to code
git clone https://github.com/pikers/piker.git # downloads piker installation package from github
cd piker # change directory to piker
conda create -n pikonda # creates conda environment named pikonda
conda activate pikonda # activates pikonda
conda install -c conda-forge python-levenshtein # in case it is not already installed
conda install pip # may already be installed
pip # will show if pip is installed
pip install -e . -r requirements.txt # install piker in editable mode
test Piker to see if it is working::
piker -b binance chart btcusdt.binance # formatting for loading a chart
piker -b kraken -b binance chart xbtusdt.kraken
piker -b kraken -b binance -b ib chart qqq.nasdaq.ib
piker -b ib chart tsla.nasdaq.ib
potential error::
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user\\AppData\\Roaming\\piker\\brokers.toml'
solution:
- navigate to file directory above (may be different on your machine, location should be listed in the error code)
- copy and paste file from 'C:\\Users\\user\\code\\data/brokers.toml' or create a blank file using notepad at the location above
Visual Studio Code setup:
- now that piker is installed we can set up vscode as the default terminal for running piker and editing the code
- open Visual Studio Code
- file --> Add Folder to Workspace --> C:\Users\user\code\piker (adds piker directory where all piker files are located)
- file --> Save Workspace As --> save it wherever you want and call it whatever you want, this is going to be your default workspace for running and editing piker code
- ctrl + shift + p --> start typing Python: Select Interpetter --> when the option comes up select it --> Select at the workspace level --> select the one that shows ('pikonda')
- change the default terminal to cmd.exe instead of powershell (default)
- now when you create a new terminal VScode should automatically activate you conda env so that piker can be run as the first command after a new terminal is created
also, try out fancyzones as part of powertoyz for a decent tiling windows manager to manage all the cool new software you are going to be running.
.. _conda installed: https://
.. _C++ build toolz: https://
.. _crack a conda shell: https://
.. _vscode: https://
.. link to the tina guide
.. _setup a coolio tiled wm console: https://
provider support
**************** ****************
for live data feeds the in-progress set of supported brokers is: fitting with these tenets, we're always open to new
framework/lib/service interop suggestions and ideas!
- IB_ via ``ib_insync``, also see our `container docs`_ - **100% federated**:
- binance_ and kraken_ for crypto over their public websocket API your code, your hardware, your data feeds, your broker fills.
- questrade_ (ish) which comes with effectively free L1
coming soon... - **zero web**:
low latency as a prime objective, native UIs and modern IPC
protocols without trying to re-invent the "OS-as-an-app"..
- webull_ via the reverse engineered public API - **maximal privacy**:
- yahoo via yliveticker_ prevent brokers and mms from knowing your planz; smack their
spreads with dark volume from a VPN tunnel.
if you want your broker supported and they have an API let us know. - **zero clutter**:
modal, context oriented UIs that echew minimalism, reduce thought
noise and encourage un-emotion.
.. _IB: https://interactivebrokers.github.io/tws-api/index.html - **first class parallelism**:
.. _container docs: https://github.com/pikers/piker/tree/master/dockering/ib built from the ground up on a next-gen structured concurrency
.. _questrade: https://www.questrade.com/api/documentation supervision sys.
.. _kraken: https://www.kraken.com/features/api#public-market-data
.. _binance: https://github.com/pikers/piker/pull/182 - **traders first**:
.. _webull: https://github.com/tedchou12/webull broker/exchange/venue/asset-class/money-sys agnostic
.. _yliveticker: https://github.com/yahoofinancelive/yliveticker
.. _coinbase: https://docs.pro.coinbase.com/#websocket-feed - **systems grounded**:
real-time financial signal processing (fsp) that will make any
queuing or DSP eng juice their shorts.
- **non-tina UX**:
sleek, powerful keyboard driven interaction with expected use in
tiling wms (or maybe even a DDE).
- **data collab at scale**:
every actor-process and protocol is multi-host aware.
- **fight club ready**:
zero interest in adoption by suits; no corporate friendly license,
ever.
building the hottest looking, fastest, most reliable, keyboard
friendly FOSS trading platform is the dream; join the cause.
check out our charts a sane install with `uv`
******************** ************************
bet you weren't expecting this from the foss:: bc why install with `python` when you can faster with `rust` ::
piker -l info -b kraken -b binance chart btcusdt.binance --pdb uv lock
this runs the main chart (currently with 1m sampled OHLC) in in debug hacky install on nixos
mode and you can practice paper trading using the following **********************
micro-manual: ``NixOS`` is our core devs' distro of choice for which we offer
a stringently defined development shell envoirment that can be loaded with::
``order_mode`` ( nix-shell default.nix
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
``search_mode`` ( start a chart
``ctl-l`` or ``ctl-space`` to open, *************
``ctl-c`` or ``ctl-space`` to close run a realtime OHLCV chart stand-alone::
) :
- begin typing to have symbol search automatically lookup piker -l info chart btcusdt.spot.binance xmrusdt.spot.kraken
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection this runs a chart UI (with 1m sampled OHLCV) and shows 2 spot markets from 2 diff cexes
- vi-like ``ctl-[hjkl]`` for navigation overlayed on the same graph. Use of `piker` without first starting
a daemon (`pikerd` - see below) means there is an implicit spawning of the
multi-actor-runtime (implemented as a `tractor` app).
For additional subsystem feats available through our chart UI see the
various sub-readmes:
- order control using a mouse-n-keyboard UX B)
- cross venue market-pair (what most call "symbol") search, select, overlay Bo
- financial-signal-processing (`piker.fsp`) write-n-reload to sub-chart BO
- src-asset derivatives scan for anal, like the infamous "max pain" XO
you can also configure your position allocation limits from the spawn a daemon standalone
sidepane. *************************
we call the root actor-process the ``pikerd``. it can be (and is
recommended normally to be) started separately from the ``piker
run in distributed mode chart`` program::
***********************
start the service manager and data feed daemon in the background and
connect to it::
pikerd -l info --pdb pikerd -l info --pdb
the daemon does nothing until a ``piker``-client (like ``piker
chart``) connects and requests some particular sub-system. for
a connecting chart ``pikerd`` will spawn and manage at least,
connect your chart:: - a data-feed daemon: ``datad`` which does all the work of comms with
the backend provider (in this case the ``binance`` cex).
- a paper-trading engine instance, ``paperboi.binance``, (if no live
account has been configured) which allows for auto/manual order
control against the live quote stream.
piker -l info -b kraken -b binance chart xmrusdt.binance --pdb *using* an actor-service (aka micro-daemon) manager which dynamically
supervises various sub-subsystems-as-services throughout the ``piker``
runtime-stack.
now you can (implicitly) connect your chart::
enjoy persistent real-time data feeds tied to daemon lifetime. the next piker chart btcusdt.spot.binance
time you spawn a chart it will load much faster since the data feed has
been cached and is now always running live in the background until you since ``pikerd`` was started separately you can now enjoy a persistent
kill ``pikerd``. real-time data stream tied to the daemon-tree's lifetime. i.e. the next
time you spawn a chart it will obviously not only load much faster
(since the underlying ``datad.binance`` is left running with its
in-memory IPC data structures) but also the data-feed and any order
mgmt states should be persistent until you finally cancel ``pikerd``.
if anyone asks you what this project is about if anyone asks you what this project is about
********************************************* *********************************************
you don't talk about it. you don't talk about it; just use it.
how do i get involved? how do i get involved?
@ -226,6 +165,15 @@ enter the matrix.
how come there ain't that many docs how come there ain't that many docs
*********************************** ***********************************
suck it up, learn the code; no one is trying to sell you on anything. i mean we want/need them but building the core right has been higher
also, we need lotsa help so if you want to start somewhere and can't prio then marketting (and likely will stay that way Bp).
necessarily write serious code, this might be the place for you!
soo, suck it up bc,
- no one is trying to sell you on anything
- learning the code base is prolly way more valuable
- the UI/UXs are intended to be "intuitive" for any hacker..
we obviously need tonz help so if you want to start somewhere and
can't necessarily write "advanced" concurrent python/rust code, this
helping document literally anything might be the place for you!

View File

@ -1,19 +1,52 @@
[questrade] ################
refresh_token = "" # ---- CEXY ----
access_token = "" ################
api_server = "https://api06.iq.questrade.com/" [binance]
expires_in = 1800 accounts.paper = 'paper'
token_type = "Bearer"
expires_at = 1616095326.355846 accounts.usdtm = 'futes'
futes.use_testnet = false
futes.api_key = ''
futes.api_secret = ''
accounts.spot = 'spot'
spot.use_testnet = false
spot.api_key = ''
spot.api_secret = ''
[deribit]
key_id = ''
key_secret = ''
[kraken] [kraken]
key_descr = "api_0" key_descr = ''
api_key = "" api_key = ''
secret = "" secret = ''
[kucoin]
key_id = ''
key_secret = ''
key_passphrase = ''
################
# -- BROKERZ ---
################
[questrade]
refresh_token = ''
access_token = ''
api_server = 'https://api06.iq.questrade.com/'
expires_in = 1800
token_type = 'Bearer'
expires_at = 1616095326.355846
[ib] [ib]
hosts = [ hosts = [
"127.0.0.1", '127.0.0.1',
] ]
# XXX: the order in which ports will be scanned # XXX: the order in which ports will be scanned
# (by the `brokerd` daemon-actor) # (by the `brokerd` daemon-actor)
@ -30,8 +63,8 @@ ports = [
# is not supported so you have to manually download # is not supported so you have to manually download
# and XML report and put it in a location that can be # and XML report and put it in a location that can be
# accessed by the ``brokerd.ib`` backend code for parsing. # accessed by the ``brokerd.ib`` backend code for parsing.
flex_token = '666666666666666666666666' flex_token = ''
flex_trades_query_id = '666666' # live account flex_trades_query_id = '' # live account
# when clients are being scanned this determines # when clients are being scanned this determines
# which clients are preferred to be used for data # which clients are preferred to be used for data
@ -47,6 +80,6 @@ prefer_data_account = [
# the order in which accounts will be selectable # the order in which accounts will be selectable
# in the order mode UI (if found via clients during # in the order mode UI (if found via clients during
# API-app scanning)when a new symbol is loaded. # API-app scanning)when a new symbol is loaded.
paper = "XX0000000" paper = 'XX0000000'
margin = "X0000000" margin = 'X0000000'
ira = "X0000000" ira = 'X0000000'

12
config/conf.toml 100644
View File

@ -0,0 +1,12 @@
[network]
tsdb.backend = 'marketstore'
tsdb.host = 'localhost'
tsdb.grpc_port = 5995
[ui]
# set custom font + size which will scale entire UI
# font_size = 16
# font_name = 'Monospaced'
# colorscheme = 'default' # UNUSED
# graphics.update_throttle = 60 # Hz # TODO
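as a quick sanity check of how the dotted keys above nest, a tiny stand-alone
snippet using the stdlib ``tomllib`` parser (python 3.11+); whether
`piker.config` itself uses ``tomllib`` is not implied here::

    import tomllib

    conf = tomllib.loads("""
    [network]
    tsdb.backend = 'marketstore'
    tsdb.host = 'localhost'
    tsdb.grpc_port = 5995
    """)
    assert conf['network']['tsdb']['grpc_port'] == 5995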

134
default.nix 100644
View File

@ -0,0 +1,134 @@
with (import <nixpkgs> {});
let
glibStorePath = lib.getLib glib;
zlibStorePath = lib.getLib zlib;
zstdStorePath = lib.getLib zstd;
dbusStorePath = lib.getLib dbus;
libGLStorePath = lib.getLib libGL;
freetypeStorePath = lib.getLib freetype;
qt6baseStorePath = lib.getLib qt6.qtbase;
fontconfigStorePath = lib.getLib fontconfig;
libxkbcommonStorePath = lib.getLib libxkbcommon;
xcbutilcursorStorePath = lib.getLib xcb-util-cursor;
qtpyStorePath = lib.getLib python312Packages.qtpy;
pyqt6StorePath = lib.getLib python312Packages.pyqt6;
pyqt6SipStorePath = lib.getLib python312Packages.pyqt6-sip;
rapidfuzzStorePath = lib.getLib python312Packages.rapidfuzz;
qdarkstyleStorePath = lib.getLib python312Packages.qdarkstyle;
xorgLibX11StorePath = lib.getLib xorg.libX11;
xorgLibxcbStorePath = lib.getLib xorg.libxcb;
xorgxcbutilwmStorePath = lib.getLib xorg.xcbutilwm;
xorgxcbutilimageStorePath = lib.getLib xorg.xcbutilimage;
xorgxcbutilerrorsStorePath = lib.getLib xorg.xcbutilerrors;
xorgxcbutilkeysymsStorePath = lib.getLib xorg.xcbutilkeysyms;
xorgxcbutilrenderutilStorePath = lib.getLib xorg.xcbutilrenderutil;
in
stdenv.mkDerivation {
name = "piker-qt6-uv";
buildInputs = [
# System requirements.
glib
zlib
dbus
zstd
libGL
freetype
qt6.qtbase
libgcc.lib
fontconfig
libxkbcommon
# Xorg requirements
xcb-util-cursor
xorg.libxcb
xorg.libX11
xorg.xcbutilwm
xorg.xcbutilimage
xorg.xcbutilerrors
xorg.xcbutilkeysyms
xorg.xcbutilrenderutil
# Python requirements.
python312Full
python312Packages.uv
python312Packages.qdarkstyle
python312Packages.rapidfuzz
python312Packages.pyqt6
python312Packages.qtpy
];
src = null;
shellHook = ''
set -e
# Set the Qt plugin path
# export QT_DEBUG_PLUGINS=1
QTBASE_PATH="${qt6baseStorePath}/lib"
QT_PLUGIN_PATH="$QTBASE_PATH/qt-6/plugins"
QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"
LIB_GCC_PATH="${libgcc.lib}/lib"
GLIB_PATH="${glibStorePath}/lib"
ZSTD_PATH="${zstdStorePath}/lib"
ZLIB_PATH="${zlibStorePath}/lib"
DBUS_PATH="${dbusStorePath}/lib"
LIBGL_PATH="${libGLStorePath}/lib"
FREETYPE_PATH="${freetypeStorePath}/lib"
FONTCONFIG_PATH="${fontconfigStorePath}/lib"
LIB_XKB_COMMON_PATH="${libxkbcommonStorePath}/lib"
XCB_UTIL_CURSOR_PATH="${xcbutilcursorStorePath}/lib"
XORG_LIB_X11_PATH="${xorgLibX11StorePath}/lib"
XORG_LIB_XCB_PATH="${xorgLibxcbStorePath}/lib"
XORG_XCB_UTIL_IMAGE_PATH="${xorgxcbutilimageStorePath}/lib"
XORG_XCB_UTIL_WM_PATH="${xorgxcbutilwmStorePath}/lib"
XORG_XCB_UTIL_RENDER_UTIL_PATH="${xorgxcbutilrenderutilStorePath}/lib"
XORG_XCB_UTIL_KEYSYMS_PATH="${xorgxcbutilkeysymsStorePath}/lib"
XORG_XCB_UTIL_ERRORS_PATH="${xorgxcbutilerrorsStorePath}/lib"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_GCC_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$DBUS_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$GLIB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZLIB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZSTD_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBGL_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FONTCONFIG_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FREETYPE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_XKB_COMMON_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XCB_UTIL_CURSOR_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_X11_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_XCB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_IMAGE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_WM_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_RENDER_UTIL_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_KEYSYMS_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_ERRORS_PATH"
export LD_LIBRARY_PATH
RPDFUZZ_PATH="${rapidfuzzStorePath}/lib/python3.12/site-packages"
QDRKSTYLE_PATH="${qdarkstyleStorePath}/lib/python3.12/site-packages"
QTPY_PATH="${qtpyStorePath}/lib/python3.12/site-packages"
PYQT6_PATH="${pyqt6StorePath}/lib/python3.12/site-packages"
PYQT6_SIP_PATH="${pyqt6SipStorePath}/lib/python3.12/site-packages"
PATCH="$PATCH:$RPDFUZZ_PATH"
PATCH="$PATCH:$QDRKSTYLE_PATH"
PATCH="$PATCH:$QTPY_PATH"
PATCH="$PATCH:$PYQT6_PATH"
PATCH="$PATCH:$PYQT6_SIP_PATH"
export PATCH
# Install deps
uv lock
'';
}

47
develop.nix 100644
View File

@ -0,0 +1,47 @@
with (import <nixpkgs> {});
stdenv.mkDerivation {
name = "poetry-env";
buildInputs = [
# System requirements.
readline
# TODO: hacky non-poetry install stuff we need to get rid of!!
poetry
# virtualenv
# setuptools
# pip
# Python requirements (enough to get a virtualenv going).
python311Full
# obviously, and see below for hacked linking
python311Packages.pyqt5
python311Packages.pyqt5_sip
# python311Packages.qtpy
# numerics deps
python311Packages.levenshtein
python311Packages.fastparquet
python311Packages.polars
];
# environment.sessionVariables = {
# LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib";
# };
src = null;
shellHook = ''
# Allow the use of wheels.
SOURCE_DATE_EPOCH=$(date +%s)
# Augment the dynamic linker path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${R}/lib/R/lib:${readline}/lib
export QT_QPA_PLATFORM_PLUGIN_PATH="${qt5.qtbase.bin}/lib/qt-${qt5.qtbase.version}/plugins";
if [ ! -d ".venv" ]; then
poetry install --with uis
fi
poetry shell
'';
}

View File

@ -0,0 +1,11 @@
FROM elasticsearch:7.17.4
ENV ES_JAVA_OPTS "-Xms2g -Xmx2g"
ENV ELASTIC_USERNAME "elastic"
ENV ELASTIC_PASSWORD "password"
COPY elasticsearch.yml /usr/share/elasticsearch/config/
RUN printf "password" | ./bin/elasticsearch-keystore add -f -x "bootstrap.password"
EXPOSE 19200

View File

@ -0,0 +1,5 @@
network.host: 0.0.0.0
http.port: 19200
discovery.type: single-node

View File

@ -2,12 +2,27 @@
# https://github.com/waytrade/ib-gateway-docker/blob/master/docker-compose.yml # https://github.com/waytrade/ib-gateway-docker/blob/master/docker-compose.yml
version: "3.5" version: "3.5"
services: services:
ib-gateway:
ib_gw_paper:
# apparently java is a mega cukc:
# https://stackoverflow.com/a/56895801
# https://bugs.openjdk.org/browse/JDK-8150460
ulimits:
# nproc: 65535
nproc: 6000
nofile:
soft: 2000
hard: 3000
# other image tags available: # other image tags available:
# https://github.com/waytrade/ib-gateway-docker#supported-tags # https://github.com/waytrade/ib-gateway-docker#supported-tags
image: waytrade/ib-gateway:981.3j # image: waytrade/ib-gateway:1012.2i
restart: always image: ghcr.io/gnzsnz/ib-gateway:latest
restart: 'no' # restart on boot whenev there's a crash or user clicsk
network_mode: 'host' network_mode: 'host'
volumes: volumes:
@ -39,14 +54,12 @@ services:
# this compose file which looks something like: # this compose file which looks something like:
# TWS_USERID='myuser' # TWS_USERID='myuser'
# TWS_PASSWORD='guest' # TWS_PASSWORD='guest'
# TRADING_MODE=paper (or live)
# VNC_SERVER_PASSWORD='diggity'
environment: environment:
TWS_USERID: ${TWS_USERID} TWS_USERID: ${TWS_USERID}
TWS_PASSWORD: ${TWS_PASSWORD} TWS_PASSWORD: ${TWS_PASSWORD}
TRADING_MODE: ${TRADING_MODE:-paper} TRADING_MODE: 'paper'
VNC_SERVER_PASSWORD: ${VNC_SERVER_PASSWORD:-} VNC_SERVER_PASSWORD: 'doggy'
VNC_SERVER_PORT: '3003'
# ports: # ports:
# - target: 4002 # - target: 4002
@ -62,3 +75,40 @@ services:
# - "127.0.0.1:4001:4001" # - "127.0.0.1:4001:4001"
# - "127.0.0.1:4002:4002" # - "127.0.0.1:4002:4002"
# - "127.0.0.1:5900:5900" # - "127.0.0.1:5900:5900"
# ib_gw_live:
# image: waytrade/ib-gateway:1012.2i
# restart: no
# network_mode: 'host'
# volumes:
# - type: bind
# source: ./jts_live.ini
# target: /root/jts/jts.ini
# # don't let ibc clobber this file for
# # the main reason of not having a stupid
# # timezone set..
# read_only: true
# # force our own ibc config
# - type: bind
# source: ./ibc.ini
# target: /root/ibc/config.ini
# # force our noop script - socat isn't needed in host mode.
# - type: bind
# source: ./fork_ports_delayed.sh
# target: /root/scripts/fork_ports_delayed.sh
# # force our noop script - socat isn't needed in host mode.
# - type: bind
# source: ./run_x11_vnc.sh
# target: /root/scripts/run_x11_vnc.sh
# read_only: true
# # NOTE: to fill these out, define an `.env` file in the same dir as
# # this compose file which looks something like:
# environment:
# TRADING_MODE: 'live'
# VNC_SERVER_PASSWORD: 'doggy'
# VNC_SERVER_PORT: '3004'

View File

@ -117,9 +117,57 @@ SecondFactorDevice=
# If you use the IBKR Mobile app for second factor authentication, # If you use the IBKR Mobile app for second factor authentication,
# and you fail to complete the process before the time limit imposed # and you fail to complete the process before the time limit imposed
# by IBKR, you can use this setting to tell IBC to exit: arrangements # by IBKR, this setting tells IBC whether to automatically restart
# can then be made to automatically restart IBC in order to initiate # the login sequence, giving you another opportunity to complete
# the login sequence afresh. Otherwise, manual intervention at TWS's # second factor authentication.
#
# Permitted values are 'yes' and 'no'.
#
# If this setting is not present or has no value, then the value
# of the deprecated ExitAfterSecondFactorAuthenticationTimeout is
# used instead. If this also has no value, then this setting defaults
# to 'no'.
#
# NB: you must be using IBC v3.14.0 or later to use this setting:
# earlier versions ignore it.
ReloginAfterSecondFactorAuthenticationTimeout=
# This setting is only relevant if
# ReloginAfterSecondFactorAuthenticationTimeout is set to 'yes',
# or if ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
#
# It controls how long (in seconds) IBC waits for login to complete
# after the user acknowledges the second factor authentication
# alert at the IBKR Mobile app. If login has not completed after
# this time, IBC terminates.
# The default value is 60.
SecondFactorAuthenticationExitInterval=
# This setting specifies the timeout for second factor authentication
# imposed by IB. The value is in seconds. You should not change this
# setting unless you have reason to believe that IB has changed the
# timeout. The default value is 180.
SecondFactorAuthenticationTimeout=180
# DEPRECATED SETTING
# ------------------
#
# ExitAfterSecondFactorAuthenticationTimeout - THIS SETTING WILL BE
# REMOVED IN A FUTURE RELEASE. For IBC version 3.14.0 and later, see
# the notes for ReloginAfterSecondFactorAuthenticationTimeout above.
#
# For IBC versions earlier than 3.14.0: If you use the IBKR Mobile
# app for second factor authentication, and you fail to complete the
# process before the time limit imposed by IBKR, you can use this
# setting to tell IBC to exit: arrangements can then be made to
# automatically restart IBC in order to initiate the login sequence
# afresh. Otherwise, manual intervention at TWS's
# Second Factor Authentication dialog is needed to complete the # Second Factor Authentication dialog is needed to complete the
# login. # login.
# #
@ -132,29 +180,18 @@ SecondFactorDevice=
ExitAfterSecondFactorAuthenticationTimeout=no ExitAfterSecondFactorAuthenticationTimeout=no
# This setting is only relevant if
# ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
#
# It controls how long (in seconds) IBC waits for login to complete
# after the user acknowledges the second factor authentication
# alert at the IBKR Mobile app. If login has not completed after
# this time, IBC terminates.
# The default value is 40.
SecondFactorAuthenticationExitInterval=
# Trading Mode # Trading Mode
# ------------ # ------------
# #
# TWS 955 introduced a new Trading Mode combo box on its login # This indicates whether the live account or the paper trading
# dialog. This indicates whether the live account or the paper # account corresponding to the supplied credentials is to be used.
# trading account corresponding to the supplied credentials is # The allowed values are 'live' (the default) and 'paper'.
# to be used. The allowed values are 'live' (the default) and #
# 'paper'. For earlier versions of TWS this setting has no # If this is set to 'live', then the credentials for the live
# effect. # account must be supplied. If it is set to 'paper', then either
# the live or the paper-trading credentials may be supplied.
TradingMode= TradingMode=paper
# Paper-trading Account Warning # Paper-trading Account Warning
@ -188,7 +225,7 @@ AcceptNonBrokerageAccountWarning=yes
# #
# The default value is 60. # The default value is 60.
LoginDialogDisplayTimeout = 60 LoginDialogDisplayTimeout=60
@ -217,7 +254,15 @@ LoginDialogDisplayTimeout = 60
# but they are acceptable. # but they are acceptable.
# #
# The default is the current working directory when IBC is # The default is the current working directory when IBC is
# started. # started, unless the TWS_SETTINGS_PATH setting in the relevant
# start script is set.
#
# If both this setting and TWS_SETTINGS_PATH are set, then this
# setting takes priority. Note that if they have different values,
# auto-restart will not work.
#
# NB: this setting is now DEPRECATED. You should use the
# TWS_SETTINGS_PATH setting in the relevant start script.
IbDir=/root/Jts IbDir=/root/Jts
@ -286,13 +331,30 @@ ExistingSessionDetectedAction=primary
# #
# If OverrideTwsApiPort is set to an integer, IBC changes the # If OverrideTwsApiPort is set to an integer, IBC changes the
# 'Socket port' in TWS's API configuration to that number shortly # 'Socket port' in TWS's API configuration to that number shortly
# after startup. Leaving the setting blank will make no change to # after startup (but note that for the FIX Gateway, this setting is
# actually stored in jts.ini rather than the Gateway's settings
# file). Leaving the setting blank will make no change to
# the current setting. This setting is only intended for use in # the current setting. This setting is only intended for use in
# certain specialized situations where the port number needs to # certain specialized situations where the port number needs to
# be set dynamically at run-time, and for the FIX Gateway: most
# non-FIX users will never need it, so don't use it unless you know
# you need it.
OverrideTwsApiPort=4000
# Override TWS Master Client ID
# -----------------------------
#
# If OverrideTwsMasterClientID is set to an integer, IBC changes the
# 'Master Client ID' value in TWS's API configuration to that
# value shortly after startup. Leaving the setting blank will make
# no change to the current setting. This setting is only intended
# for use in certain specialized situations where the value needs to
# be set dynamically at run-time: most users will never need it, # be set dynamically at run-time: most users will never need it,
# so don't use it unless you know you need it. # so don't use it unless you know you need it.
OverrideTwsApiPort=4002 OverrideTwsMasterClientID=
# Read-only Login # Read-only Login
@ -302,11 +364,13 @@ OverrideTwsApiPort=4002
# account security programme, the user will not be asked to perform # account security programme, the user will not be asked to perform
# the second factor authentication action, and login to TWS will # the second factor authentication action, and login to TWS will
# occur automatically in read-only mode: in this mode, placing or # occur automatically in read-only mode: in this mode, placing or
# managing orders is not allowed. If set to 'no', and the user is # managing orders is not allowed.
# enrolled in IB's account security programme, the user must perform #
# the relevant second factor authentication action to complete the # If set to 'no', and the user is enrolled in IB's account security
# login. # programme, the second factor authentication process is handled
# according to the Second Factor Authentication Settings described
# elsewhere in this file.
#
# If the user is not enrolled in IB's account security programme, # If the user is not enrolled in IB's account security programme,
# this setting is ignored. The default is 'no'. # this setting is ignored. The default is 'no'.
@ -326,7 +390,44 @@ ReadOnlyLogin=no
# set the relevant checkbox (this only needs to be done once) and # set the relevant checkbox (this only needs to be done once) and
# not provide a value for this setting. # not provide a value for this setting.
ReadOnlyApi=no ReadOnlyApi=
# API Precautions
# ---------------
#
# These settings relate to the corresponding 'Precautions' checkboxes in the
# API section of the Global Configuration dialog.
#
# For all of these, the accepted values are:
# - 'yes' sets the checkbox
# - 'no' clears the checkbox
# - if not set, the existing TWS/Gateway configuration is unchanged
#
# NB: these settings are really only supplied for the benefit of new TWS
# or Gateway instances that are being automatically installed and
# started without user intervention, or where user settings are not preserved
# between sessions (eg some Docker containers). Where a user is involved, they
# should use the Global Configuration to set the relevant checkboxes and not
# provide values for these settings.
BypassOrderPrecautions=
BypassBondWarning=
BypassNegativeYieldToWorstConfirmation=
BypassCalledBondWarning=
BypassSameActionPairTradeWarning=
BypassPriceBasedVolatilityRiskWarning=
BypassUSStocksMarketDataInSharesWarning=
BypassRedirectOrderWarning=
BypassNoOverfillProtectionPrecaution=
# Market data size for US stocks - lots or shares # Market data size for US stocks - lots or shares
@ -381,54 +482,145 @@ AcceptBidAskLastSizeDisplayUpdateNotification=accept
SendMarketDataInLotsForUSstocks= SendMarketDataInLotsForUSstocks=
# Trusted API Client IPs
# ----------------------
#
# NB: THIS SETTING IS ONLY RELEVANT FOR THE GATEWAY, AND ONLY WHEN FIX=yes.
# In all other cases it is ignored.
#
# This is a list of IP addresses separated by commas. API clients with IP
# addresses in this list are able to connect to the API without Gateway
# generating the 'Incoming connection' popup.
#
# Note that 127.0.0.1 is always permitted to connect, so do not include it
# in this setting.
TrustedTwsApiClientIPs=
# Reset Order ID Sequence
# -----------------------
#
# The setting resets the order id sequence for orders submitted via the API, so
# that the next invocation of the `NextValidId` API callback will return the
# value 1. The reset occurs when TWS starts.
#
# Note that order ids are reset for all API clients, except those that have
# outstanding (ie incomplete) orders: their order id sequence carries on as
# before.
#
# Valid values are 'yes', 'true', 'false' and 'no'. The default is 'no'.
ResetOrderIdsAtStart=
# This setting specifies IBC's action when TWS displays the dialog asking for
# confirmation of a request to reset the API order id sequence.
#
# Note that the Gateway never displays this dialog, so this setting is ignored
# for a Gateway session.
#
# Valid values consist of two strings separated by a solidus '/'. The first
# value specifies the action to take when the order id reset request resulted
# from setting ResetOrderIdsAtStart=yes. The second specifies the action to
# take when the order id reset request is a result of the user clicking the
# 'Reset API order ID sequence' button in the API configuration. Each value
# must be one of the following:
#
# 'confirm'
# order ids will be reset
#
# 'reject'
# order ids will not be reset
#
# 'ignore'
# IBC will ignore the dialog. The user must take action.
#
# The default setting is ignore/ignore
# Examples:
#
# 'confirm/reject' - confirm order id reset only if ResetOrderIdsAtStart=yes
# and reject any user-initiated requests
#
# 'ignore/confirm' - user must decide what to do if ResetOrderIdsAtStart=yes
# and confirm user-initiated requests
#
# 'reject/ignore' - reject order id reset if ResetOrderIdsAtStart=yes but
# allow user to handle user-initiated requests
ConfirmOrderIdReset=
# ============================================================================= # =============================================================================
# 4. TWS Auto-Closedown # 4. TWS Auto-Logoff and Auto-Restart
# ============================================================================= # =============================================================================
# #
# IMPORTANT NOTE: Starting with TWS 974, this setting no longer # TWS and Gateway insist on being restarted every day. Two alternative
# works properly, because IB have changed the way TWS handles its # automatic options are offered:
# autologoff mechanism.
# #
# You should now configure the TWS autologoff time to something # - Auto-Logoff: at a specified time, TWS shuts down tidily, without
# convenient for you, and restart IBC each day. # restarting.
# #
# Alternatively, discontinue use of IBC and use the auto-relogin # - Auto-Restart: at a specified time, TWS shuts down and then restarts
# mechanism within TWS 974 and later versions (note that the # without the user having to re-autheticate.
# auto-relogin mechanism provided by IB is not available if you #
# use IBC). # The normal way to configure the time at which this happens is via the Lock
# and Exit section of the Configuration dialog. Once this time has been
# configured in this way, the setting persists until the user changes it again.
#
# However, there are situations where there is no user available to do this
# configuration, or where there is no persistent storage (for example some
# Docker images). In such cases, the auto-restart or auto-logoff time can be
# set whenever IBC starts with the settings below.
#
# The value, if specified, must be a time in HH:MM AM/PM format, for example
# 08:00 AM or 10:00 PM. Note that there must be a single space between the
# two parts of this value; also that midnight is "12:00 AM" and midday is
# "12:00 PM".
#
# If no value is specified for either setting, the currently configured
# settings will apply. If a value is supplied for one setting, the other
# setting is cleared. If values are supplied for both settings, only the
# auto-restart time is set, and the auto-logoff time is cleared.
#
# Note that for a normal TWS/Gateway installation with persistent storage
# (for example on a desktop computer) the value will be persisted as if the
# user had set it via the configuration dialog.
#
# If you choose to auto-restart, you should take note of the considerations
# described at the link below. Note that where this information mentions
# 'manual authentication', restarting IBC will do the job (IBKR does not
# recognise the existence of IBC in its docuemntation).
#
# https://www.interactivebrokers.com/en/software/tws/twsguide.htm#usersguidebook/configuretws/auto_restart_info.htm
#
# If you use the "RESTART" command via the IBC command server, and IBC is
# running any version of the Gateway (or a version of TWS earlier than 1018),
# note that this will set the Auto-Restart time in Gateway/TWS's configuration
# dialog to the time at which the restart actually happens (which may be up to
# a minute after the RESTART command is issued). To prevent future auto-
# restarts at this time, you must make sure you have set AutoLogoffTime or
# AutoRestartTime to your desired value before running IBC. NB: this does not
# apply to TWS from version 1018 onwards.
# Set to yes or no (lower case). AutoLogoffTime=
#
# yes means allow TWS to shut down automatically at its
# specified shutdown time, which is set via the TWS
# configuration menu.
#
# no means TWS never shuts down automatically.
#
# NB: IB recommends that you do not keep TWS running
# continuously. If you set this setting to 'no', you may
# experience incorrect TWS operation.
#
# NB: the default for this setting is 'no'. Since this will
# only work properly with TWS versions earlier than 974, you
# should explicitly set this to 'yes' for version 974 and later.
IbAutoClosedown=yes
AutoRestartTime=
# ============================================================================= # =============================================================================
# 5. TWS Tidy Closedown Time # 5. TWS Tidy Closedown Time
# ============================================================================= # =============================================================================
# #
# NB: starting with TWS 974 this is no longer a useful option # Specifies a time at which TWS will close down tidily, with no restart.
# because both TWS and Gateway now have the same auto-logoff
# mechanism, and IBC can no longer avoid this.
# #
# Note that giving this setting a value does not change TWS's # There is little reason to use this setting. It is similar to AutoLogoffTime,
# auto-logoff in any way: any setting will be additional to the # but can include a day-of-the-week, whereas AutoLogoffTime and AutoRestartTime
# TWS auto-logoff. # apply every day. So for example you could use ClosedownAt in conjunction with
# AutoRestartTime to shut down TWS on Friday evenings after the markets
# close, without it running on Saturday as well.
# #
# To tell IBC to tidily close TWS at a specified time every # To tell IBC to tidily close TWS at a specified time every
# day, set this value to <hh:mm>, for example: # day, set this value to <hh:mm>, for example:
@ -487,7 +679,7 @@ AcceptIncomingConnectionAction=reject
# no means the dialog remains on display and must be # no means the dialog remains on display and must be
# handled by the user. # handled by the user.
AllowBlindTrading=yes AllowBlindTrading=no
# Save Settings on a Schedule # Save Settings on a Schedule
@ -530,6 +722,26 @@ AllowBlindTrading=yes
SaveTwsSettingsAt= SaveTwsSettingsAt=
# Confirm Crypto Currency Orders Automatically
# --------------------------------------------
#
# When you place an order for a cryptocurrency contract, a dialog is displayed
# asking you to confirm that you want to place the order, and notifying you
# that you are placing an order to trade cryptocurrency with Paxos, a New York
# limited trust company, and not at Interactive Brokers.
#
# transmit means that the order will be placed automatically, and the
# dialog will then be closed
#
# cancel means that the order will not be placed, and the dialog will
# then be closed
#
# manual means that IBC will take no action and the user must deal
# with the dialog
ConfirmCryptoCurrencyOrders=transmit
# ============================================================================= # =============================================================================
# 7. Settings Specific to Indian Versions of TWS # 7. Settings Specific to Indian Versions of TWS
@ -566,13 +778,17 @@ DismissNSEComplianceNotice=yes
# #
# The port number that IBC listens on for commands # The port number that IBC listens on for commands
# such as "STOP". DO NOT set this to the port number # such as "STOP". DO NOT set this to the port number
# used for TWS API connections. There is no good reason # used for TWS API connections.
# to change this setting unless the port is used by #
# some other application (typically another instance of # The convention is to use 7462 for this port,
# IBC). The default value is 0, which tells IBC not to # but it must be set to a different value from any other
# start the command server # IBC instance that might run at the same time.
#
# The default value is 0, which tells IBC not to start
# the command server
#CommandServerPort=7462 #CommandServerPort=7462
CommandServerPort=0
# Permitted Command Sources # Permitted Command Sources
@ -583,19 +799,19 @@ DismissNSEComplianceNotice=yes
# IBC. Commands can always be sent from the # IBC. Commands can always be sent from the
# same host as IBC is running on. # same host as IBC is running on.
ControlFrom=127.0.0.1 ControlFrom=
# Address for Receiving Commands # Address for Receiving Commands
# ------------------------------ # ------------------------------
# #
# Specifies the IP address on which the Command Server # Specifies the IP address on which the Command Server
# is so listen. For a multi-homed host, this can be used # is to listen. For a multi-homed host, this can be used
# to specify that connection requests are only to be # to specify that connection requests are only to be
# accepted on the specified address. The default is to # accepted on the specified address. The default is to
# accept connection requests on all local addresses. # accept connection requests on all local addresses.
BindAddress=127.0.0.1 BindAddress=
# Command Prompt # Command Prompt
@ -621,7 +837,7 @@ CommandPrompt=
# information is sent. The default is that such information # information is sent. The default is that such information
# is not sent. # is not sent.
SuppressInfoMessages=no SuppressInfoMessages=yes
@ -651,10 +867,10 @@ SuppressInfoMessages=no
# The LogStructureScope setting indicates which windows are # The LogStructureScope setting indicates which windows are
# eligible for structure logging: # eligible for structure logging:
# #
# - if set to 'known', only windows that IBC recognizes # - (default value) if set to 'known', only windows that
# are eligible - these are windows that IBC has some # IBC recognizes are eligible - these are windows that
# interest in monitoring, usually to take some action # IBC has some interest in monitoring, usually to take
# on the user's behalf; # some action on the user's behalf;
# #
# - if set to 'unknown', only windows that IBC does not # - if set to 'unknown', only windows that IBC does not
# recognize are eligible. Most windows displayed by # recognize are eligible. Most windows displayed by
@ -667,9 +883,8 @@ SuppressInfoMessages=no
# - if set to 'all', then every window displayed by TWS # - if set to 'all', then every window displayed by TWS
# is eligible. # is eligible.
# #
# The default value is 'known'.
LogStructureScope=all LogStructureScope=known
# When to Log Window Structure # When to Log Window Structure
@ -682,13 +897,15 @@ LogStructureScope=all
# structure of an eligible window the first time it # structure of an eligible window the first time it
# is encountered; # is encountered;
# #
# - if set to 'openclose', the structure is logged every
# time an eligible window is opened or closed;
#
# - if set to 'activate', the structure is logged every # - if set to 'activate', the structure is logged every
# time an eligible window is made active; # time an eligible window is made active;
# #
# - if set to 'never' or 'no' or 'false', structure # - (default value) if set to 'never' or 'no' or 'false',
# information is never logged. # structure information is never logged.
# #
# The default value is 'never'.
LogStructureWhen=never LogStructureWhen=never
@ -708,4 +925,3 @@ LogStructureWhen=never
#LogComponents= #LogComponents=

View File

@ -0,0 +1,33 @@
[IBGateway]
ApiOnly=true
LocalServerPort=4001
# NOTE: must be set if using IBC's "reject" mode
TrustedIPs=127.0.0.1
; RemoteHostOrderRouting=ndc1.ibllc.com
; WriteDebug=true
; RemotePortOrderRouting=4001
; useRemoteSettings=false
; tradingMode=p
; Steps=8
; colorPalletName=dark
# window geo, this may be useful for sending `xdotool` commands?
; MainWindow.Width=1986
; screenHeight=3960
[Logon]
Locale=en
# most markets are oriented around this zone
# so might as well hard code it.
TimeZone=America/New_York
UseSSL=true
displayedproxymsg=1
os_titlebar=true
s3store=true
useRemoteSettings=false
[Communication]
ctciAutoEncrypt=true
Region=usr
; Peer=cdc1.ibllc.com:4001

View File

@ -1,16 +1,35 @@
#!/bin/sh #!/bin/sh
# start vnc server and listen for connections
# on port specced in `$VNC_SERVER_PORT`
# start VNC server
x11vnc \ x11vnc \
-ncache_cr \ -listen 127.0.0.1 \
-listen localhost \ -allow 127.0.0.1 \
-rfbport "${VNC_SERVER_PORT}" \
-display :1 \ -display :1 \
-forever \ -forever \
-shared \ -shared \
-logappend /var/log/x11vnc.log \
-bg \ -bg \
-nowf \
-noxdamage \
-noxfixes \
-no6 \
-noipv6 \ -noipv6 \
-autoport 3003 \
# can't use this because of ``asyncvnc`` issue:
# -nowcr \
# TODO: can't use this because of ``asyncvnc`` issue:
# https://github.com/barneygale/asyncvnc/issues/1 # https://github.com/barneygale/asyncvnc/issues/1
# -passwd 'ibcansmbz' # -passwd 'ibcansmbz'
# XXX: optional graphics caching flags that seem to rekt the overlay
# of the 2 gw windows? When running a single gateway
# this seems to maybe optimize some memory usage?
# -ncache_cr \
# -ncache \
# NOTE: this will prevent logs from going to the console.
# -logappend /var/log/x11vnc.log \
# where to start allocating ports
# -autoport "${VNC_SERVER_PORT}" \

View File

@ -0,0 +1,91 @@
### NOTE: this is likely out of date given it was written some
(years) time ago by a user who has since not really partaken in
contributing.
install for tinas
*****************
for windows peeps you can start by installing all the prerequisite software:
- install git with all default settings - https://git-scm.com/download/win
- install anaconda all default settings - https://www.anaconda.com/products/individual
- install microsoft build tools (check the box for Desktop development for C++, you might be able to uncheck some optional downloads) - https://visualstudio.microsoft.com/visual-cpp-build-tools/
- install visual studio code default settings - https://code.visualstudio.com/download
then, `crack a conda shell`_ and run the following commands::
mkdir code # create code directory
cd code # change directory to code
git clone https://github.com/pikers/piker.git # downloads piker installation package from github
cd piker # change directory to piker
conda create -n pikonda # creates conda environment named pikonda
conda activate pikonda # activates pikonda
conda install -c conda-forge python-levenshtein # in case it is not already installed
conda install pip # may already be installed
pip # will show if pip is installed
pip install -e . -r requirements.txt # install piker in editable mode
test Piker to see if it is working::
piker -b binance chart btcusdt.binance # formatting for loading a chart
piker -b kraken -b binance chart xbtusdt.kraken
piker -b kraken -b binance -b ib chart qqq.nasdaq.ib
piker -b ib chart tsla.nasdaq.ib
potential error::
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user\\AppData\\Roaming\\piker\\brokers.toml'
solution:
- navigate to file directory above (may be different on your machine, location should be listed in the error code)
- copy and paste file from 'C:\\Users\\user\\code\\data/brokers.toml' or create a blank file using notepad at the location above
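- alternatively, script the fix from a python shell inside your conda env; this creates the blank config file at the path shown in the error message above::

    # create an empty brokers.toml under the roaming app-data dir
    from pathlib import Path

    p = Path.home() / 'AppData' / 'Roaming' / 'piker' / 'brokers.toml'
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()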
Visual Studio Code setup:
- now that piker is installed we can set up vscode as the default terminal for running piker and editing the code
- open Visual Studio Code
- file --> Add Folder to Workspace --> C:\Users\user\code\piker (adds piker directory where all piker files are located)
- file --> Save Workspace As --> save it wherever you want and call it whatever you want, this is going to be your default workspace for running and editing piker code
- ctrl + shift + p --> start typing Python: Select Interpreter --> when the option comes up select it --> Select at the workspace level --> select the one that shows ('pikonda')
- change the default terminal to cmd.exe instead of powershell (default)
- now when you create a new terminal VSCode should automatically activate your conda env so that piker can be run as the first command after a new terminal is created
also, try out fancyzones as part of powertoyz for a decent tiling windows manager to manage all the cool new software you are going to be running.
.. _conda installed: https://
.. _C++ build toolz: https://
.. _crack a conda shell: https://
.. _vscode: https://
.. link to the tina guide
.. _setup a coolio tiled wm console: https://
provider support
****************
for live data feeds the in-progress set of supported brokers is:
- IB_ via ``ib_insync``, also see our `container docs`_
- binance_ and kraken_ for crypto over their public websocket API
- questrade_ (ish) which comes with effectively free L1
coming soon...
- webull_ via the reverse engineered public API
- yahoo via yliveticker_
if you want your broker supported and they have an API let us know.
.. _IB: https://interactivebrokers.github.io/tws-api/index.html
.. _container docs: https://github.com/pikers/piker/tree/master/dockering/ib
.. _questrade: https://www.questrade.com/api/documentation
.. _kraken: https://www.kraken.com/features/api#public-market-data
.. _binance: https://github.com/pikers/piker/pull/182
.. _webull: https://github.com/tedchou12/webull
.. _yliveticker: https://github.com/yahoofinancelive/yliveticker
.. _coinbase: https://docs.pro.coinbase.com/#websocket-feed

View File

@ -0,0 +1,263 @@
# from pprint import pformat
from functools import partial
from decimal import Decimal
from typing import Callable
import tractor
import trio
from uuid import uuid4
from piker.service import maybe_open_pikerd
from piker.accounting import dec_digits
from piker.clearing import (
open_ems,
OrderClient,
)
# TODO: we should probably expose these top level in this subsys?
from piker.clearing._messages import (
Order,
Status,
BrokerdPosition,
)
from piker.data import (
iterticks,
Flume,
open_feed,
Feed,
# ShmArray,
)
# TODO: handle other statuses:
# - fills, errors, and position tracking
async def wait_for_order_status(
trades_stream: tractor.MsgStream,
oid: str,
expect_status: str,
) -> tuple[
list[Status],
list[BrokerdPosition],
]:
'''
Wait for a specific order status for a given dialog, return msg flow
up to that msg and any position update msgs in a tuple.
'''
# Wait for position message before moving on to verify flow(s)
# for the multi-order position entry/exit.
status_msgs: list[Status] = []
pp_msgs: list[BrokerdPosition] = []
async for msg in trades_stream:
match msg:
case {'name': 'position'}:
ppmsg = BrokerdPosition(**msg)
pp_msgs.append(ppmsg)
case {
'name': 'status',
}:
msg = Status(**msg)
status_msgs.append(msg)
# if we get the status we expect then return all
# collected msgs from the brokerd dialog up to the
# expected msg B)
if (
msg.resp == expect_status
and msg.oid == oid
):
return status_msgs, pp_msgs
async def bot_main():
'''
Boot the piker runtime, open an ems connection, submit
and process orders statuses in real-time.
'''
ll: str = 'info'
# open an order ctl client, live data feed, trio nursery for
# spawning an order trailer task
client: OrderClient
trades_stream: tractor.MsgStream
feed: Feed
accounts: list[str]
fqme: str = 'btcusdt.usdtm.perp.binance'
async with (
# TODO: do this implicitly inside `open_ems()` ep below?
# init and sync actor-service runtime
maybe_open_pikerd(
loglevel=ll,
debug_mode=True,
),
open_ems(
fqme,
mode='paper', # {'live', 'paper'}
# mode='live', # for real-brokerd submissions
loglevel=ll,
) as (
client, # OrderClient
trades_stream, # tractor.MsgStream
_, # positions
accounts,
_, # dialogs
),
open_feed(
fqmes=[fqme],
loglevel=ll,
# TODO: if you want to throttle via downsampling
# how many tick updates your feed received on
# quote streams B)
# tick_throttle=10,
) as feed,
trio.open_nursery() as tn,
):
assert accounts
print(f'Loaded binance accounts: {accounts}')
flume: Flume = feed.flumes[fqme]
min_tick = Decimal(flume.mkt.price_tick)
min_tick_digits: int = dec_digits(min_tick)
price_round: Callable = partial(
round,
ndigits=min_tick_digits,
)
quote_stream: trio.abc.ReceiveChannel = feed.streams['binance']
# always keep the live limit 0.03% below the last
# clearing price
clear_margin: float = 0.9997
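# e.g. with a last clearing price of 50_000 the trailed limit
# sits at 50_000 * 0.9997 = 49_985, i.e. 15 USD (0.03%) below it.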
async def trailer(
order: Order,
):
# ref shm OHLCV array history, if you want
# s_shm: ShmArray = flume.rt_shm
# m_shm: ShmArray = flume.hist_shm
# NOTE: if you wanted to frame ticks by type like
# the quote throttler does.. and this is probably
# faster in terms of getting the latest tick type
# embedded value of interest?
# from piker.data._sampling import frame_ticks
async for quotes in quote_stream:
for fqme, quote in quotes.items():
# print(
# f'{quote["symbol"]} -> {quote["ticks"]}\n'
# f'last 1s OHLC:\n{s_shm.array[-1]}\n'
# f'last 1m OHLC:\n{m_shm.array[-1]}\n'
# )
for tick in iterticks(
quote,
reverse=True,
# types=('trade', 'dark_trade'), # defaults
):
await client.update(
uuid=order.oid,
price=price_round(
clear_margin
*
tick['price']
),
)
msgs, pps = await wait_for_order_status(
trades_stream,
order.oid,
'open'
)
# if multiple clears per quote just
# skip to the next quote?
break
# get first live quote to be sure we submit the initial
# live buy limit low enough that it doesn't clear due to
# a stale initial price from the data feed layer!
first_ask_price: float | None = None
async for quotes in quote_stream:
for fqme, quote in quotes.items():
# print(quote['symbol'])
for tick in iterticks(quote, types=('ask',)):
first_ask_price: float = tick['price']
break
if first_ask_price:
break
# setup order dialog via first msg
price: float = price_round(
clear_margin
*
first_ask_price,
)
# compute a 1k USD sized pos
size: float = round(1e3/price, ndigits=3)
order = Order(
# docs on how this all works, bc even i'm not entirely
# clear XD. also we probably want to figure out how to
# offer both the paper engine running and the brokerd
# order ctl tasks with the ems choosing which stream to
# route msgs on given the account value!
account='paper', # use built-in paper clearing engine and .accounting
# account='binance.usdtm', # for live binance futes
oid=str(uuid4()),
exec_mode='live', # {'dark', 'live', 'alert'}
action='buy', # TODO: remove this from our schema?
size=size,
symbol=fqme,
price=price,
brokers=['binance'],
)
await client.send(order)
msgs, pps = await wait_for_order_status(
trades_stream,
order.oid,
'open',
)
assert not pps
assert msgs[-1].oid == order.oid
# start "trailer task" which tracks rt quote stream
tn.start_soon(trailer, order)
try:
# wait for ctl-c from user..
await trio.sleep_forever()
except KeyboardInterrupt:
# cancel the open order
await client.cancel(order.oid)
msgs, pps = await wait_for_order_status(
trades_stream,
order.oid,
'canceled'
)
raise
if __name__ == '__main__':
trio.run(bot_main)

138
flake.lock 100644
View File

@ -0,0 +1,138 @@
{
"nodes": {
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1689068808,
"narHash": "sha256-6ixXo3wt24N/melDWjq70UuHQLxGV8jZvooRanIHXw0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "919d646de7be200f3bf08cb76ae1f09402b6f9b4",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1689068808,
"narHash": "sha256-6ixXo3wt24N/melDWjq70UuHQLxGV8jZvooRanIHXw0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "919d646de7be200f3bf08cb76ae1f09402b6f9b4",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nix-github-actions": {
"inputs": {
"nixpkgs": [
"poetry2nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1688870561,
"narHash": "sha256-4UYkifnPEw1nAzqqPOTL2MvWtm3sNGw1UTYTalkTcGY=",
"owner": "nix-community",
"repo": "nix-github-actions",
"rev": "165b1650b753316aa7f1787f3005a8d2da0f5301",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nix-github-actions",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1692174805,
"narHash": "sha256-xmNPFDi/AUMIxwgOH/IVom55Dks34u1g7sFKKebxUm0=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "caac0eb6bdcad0b32cb2522e03e4002c8975c62e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"poetry2nix": {
"inputs": {
"flake-utils": "flake-utils_2",
"nix-github-actions": "nix-github-actions",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1692048894,
"narHash": "sha256-cDw03rso2V4CDc3Mll0cHN+ztzysAvdI8pJ7ybbz714=",
"ref": "refs/heads/pyqt6",
"rev": "b059ad4c3051f45d6c912e17747aae37a9ec1544",
"revCount": 2276,
"type": "git",
"url": "file:///home/lord_fomo/repos/poetry2nix"
},
"original": {
"type": "git",
"url": "file:///home/lord_fomo/repos/poetry2nix"
}
},
"root": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"poetry2nix": "poetry2nix"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

180
flake.nix 100644
View File

@ -0,0 +1,180 @@
# NOTE: to convert to a poetry2nix env like this here are the
# steps:
# - install poetry in your system nix config
# - convert the repo to use poetry using `poetry init`:
# https://python-poetry.org/docs/basic-usage/#initialising-a-pre-existing-project
# - then manually ensuring all deps are converted over:
# - add this file to the repo and commit it
# -
# GROKin tips:
# - CLI eps are (ostensibly) added via an `entry_points.txt`:
# - https://packaging.python.org/en/latest/specifications/entry-points/#file-format
# - https://github.com/nix-community/poetry2nix/blob/master/editable.nix#L49
{
description = "piker: trading gear for hackers (pkged with poetry2nix)";
inputs.flake-utils.url = "github:numtide/flake-utils";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
# see https://github.com/nix-community/poetry2nix/tree/master#api
inputs.poetry2nix = {
# url = "github:nix-community/poetry2nix";
# url = "github:K900/poetry2nix/qt5-explicit-deps";
url = "/home/lord_fomo/repos/poetry2nix";
inputs.nixpkgs.follows = "nixpkgs";
};
outputs = {
self,
nixpkgs,
flake-utils,
poetry2nix,
}:
# TODO: build cross-OS and use the `${system}` var thingy..
flake-utils.lib.eachDefaultSystem (system:
let
# use PWD as sources
projectDir = ./.;
pyproject = ./pyproject.toml;
poetrylock = ./poetry.lock;
# TODO: port to 3.11 and support both versions?
python = "python3.10";
# for more functions and examples.
# inherit
# (poetry2nix.legacyPackages.${system})
# mkPoetryApplication;
# pkgs = nixpkgs.legacyPackages.${system};
pkgs = nixpkgs.legacyPackages.x86_64-linux;
lib = pkgs.lib;
p2npkgs = poetry2nix.legacyPackages.x86_64-linux;
# define all pkg overrides per dep, see edgecases.md:
# https://github.com/nix-community/poetry2nix/blob/master/docs/edgecases.md
# TODO: add these into the json file:
# https://github.com/nix-community/poetry2nix/blob/master/overrides/build-systems.json
pypkgs-build-requirements = {
asyncvnc = [ "setuptools" ];
eventkit = [ "setuptools" ];
ib-insync = [ "setuptools" "flake8" ];
msgspec = [ "setuptools"];
pdbp = [ "setuptools" ];
pyqt6-sip = [ "setuptools" ];
tabcompleter = [ "setuptools" ];
tractor = [ "setuptools" ];
tricycle = [ "setuptools" ];
trio-typing = [ "setuptools" ];
trio-util = [ "setuptools" ];
xonsh = [ "setuptools" ];
};
# auto-generate override entries
p2n-overrides = p2npkgs.defaultPoetryOverrides.extend (self: super:
builtins.mapAttrs (package: build-requirements:
(builtins.getAttr package super).overridePythonAttrs (old: {
buildInputs = (
old.buildInputs or [ ]
) ++ (
builtins.map (
pkg: if builtins.isString pkg then builtins.getAttr pkg super else pkg
) build-requirements
);
})
) pypkgs-build-requirements
);
# override some ahead-of-time compiled extensions
# to be built with their wheels.
ahot_overrides = p2n-overrides.extend(
final: prev: {
# llvmlite = prev.llvmlite.override {
# preferWheel = false;
# };
# TODO: get this workin with p2n and nixpkgs..
# pyqt6 = prev.pyqt6.override {
# preferWheel = true;
# };
# NOTE: this DOESN'T work atm but after a fix
# to poetry2nix, it will and actually this line
# won't be needed - thanks @k900:
# https://github.com/nix-community/poetry2nix/pull/1257
pyqt5 = prev.pyqt5.override {
# withWebkit = false;
preferWheel = true;
};
# see PR from @k900:
# https://github.com/nix-community/poetry2nix/pull/1257
# pyqt5-qt5 = prev.pyqt5-qt5.override {
# withWebkit = false;
# preferWheel = true;
# };
# TODO: patch in an override for polars to build
# from src! See the details likely needed from
# the cryptography entry:
# https://github.com/nix-community/poetry2nix/blob/master/overrides/default.nix#L426-L435
polars = prev.polars.override {
preferWheel = true;
};
}
);
# WHY!? -> output-attrs that `nix develop` scans for:
# https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-develop.html#flake-output-attributes
in
rec {
packages = {
# piker = poetry2nix.legacyPackages.x86_64-linux.mkPoetryEditablePackage {
# editablePackageSources = { piker = ./piker; };
piker = p2npkgs.mkPoetryApplication {
projectDir = projectDir;
# SEE ABOVE for auto-genned input set, override
# buncha deps with extras.. like `setuptools` mostly.
# TODO: maybe propose a patch to p2n to show that you
# can even do this in the edgecases docs?
overrides = ahot_overrides;
# XXX: won't work on llvmlite..
# preferWheels = true;
};
};
# devShells.default = pkgs.mkShell {
# projectDir = projectDir;
# python = "python3.10";
# overrides = ahot_overrides;
# inputsFrom = [ self.packages.x86_64-linux.piker ];
# packages = packages;
# # packages = [ poetry2nix.packages.${system}.poetry ];
# };
# TODO: grok the difference here..
# - avoid re-cloning git repos on every develop entry..
# - ideally allow hacking on the src code of some deps
# (tractor, pyqtgraph, tomlkit, etc.) WITHOUT having to
# re-install them every time a change is made.
# - boot a usable xonsh inside the poetry virtualenv when
# defined via a custom entry point?
devShells.default = p2npkgs.mkPoetryEnv {
# env = p2npkgs.mkPoetryEnv {
projectDir = projectDir;
python = pkgs.python310;
overrides = ahot_overrides;
editablePackageSources = packages;
# piker = "./";
# tractor = "../tractor/";
# }; # wut?
};
}
); # end of .outputs scope
}
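With flakes enabled, `nix develop` should drop you into the poetry2nix dev shell defined above and `nix build .#piker` should build the packaged output; note that both assume the hard-coded local `poetry2nix` checkout referenced in the inputs actually exists on your machine.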

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers. # piker: trading gear for hackers.
# Copyright 2020-eternity Tyler Goodlet (in stewardship for piker0) # Copyright 2020-eternity Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -14,7 +14,14 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
""" '''
piker: trading gear for hackers. piker: trading gear for hackers.
""" '''
from .service import open_piker_runtime
from .data.feed import open_feed
__all__ = [
'open_piker_runtime',
'open_feed',
]

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0) # Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -14,37 +14,71 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
""" '''
Cacheing apis and toolz. Cacheing apis and toolz.
""" '''
from collections import OrderedDict from collections import OrderedDict
from contextlib import ( from typing import (
asynccontextmanager, Awaitable,
Callable,
ParamSpec,
TypeVar,
) )
from tractor.trionics import maybe_open_context
from .brokers import get_brokermod
from .log import get_logger from .log import get_logger
log = get_logger(__name__) log = get_logger(__name__)
T = TypeVar("T")
P = ParamSpec("P")
def async_lifo_cache(maxsize=128):
"""Async ``cache`` with a LIFO policy. # TODO: move this to `tractor.trionics`..
# - egs. to replicate for tests: https://github.com/aio-libs/async-lru#usage
# - their suite as well:
# https://github.com/aio-libs/async-lru/tree/master/tests
# - asked trio_util about it too:
# https://github.com/groove-x/trio-util/issues/21
def async_lifo_cache(
maxsize=128,
# NOTE: typing style was learned from:
# https://stackoverflow.com/a/71132186
) -> Callable[
Callable[P, Awaitable[T]],
Callable[
Callable[P, Awaitable[T]],
Callable[P, Awaitable[T]],
],
]:
'''
Async ``cache`` with a LIFO policy.
Implemented my own since no one else seems to have Implemented my own since no one else seems to have
a standard. I'll wait for the smarter people to come a standard. I'll wait for the smarter people to come
up with one, but until then... up with one, but until then...
"""
NOTE: when decorating, due to this simple/naive implementation, you
MUST call the decorator like,
.. code:: python
@async_lifo_cache()
async def cache_target():
'''
cache = OrderedDict() cache = OrderedDict()
def decorator(fn): def decorator(
fn: Callable[P, Awaitable[T]],
) -> Callable[P, Awaitable[T]]:
async def wrapper(*args): async def decorated(
*args: P.args,
**kwargs: P.kwargs,
) -> T:
key = args key = args
try: try:
return cache[key] return cache[key]
@ -53,27 +87,13 @@ def async_lifo_cache(maxsize=128):
# discard last added new entry # discard last added new entry
cache.popitem() cache.popitem()
# do it # call underlying
cache[key] = await fn(*args) cache[key] = await fn(
*args,
**kwargs,
)
return cache[key] return cache[key]
return wrapper return decorated
return decorator return decorator
@asynccontextmanager
async def open_cached_client(
brokername: str,
) -> 'Client': # noqa
'''
Get a cached broker client from the current actor's local vars.
If one has not been setup do it and cache it.
'''
brokermod = get_brokermod(brokername)
async with maybe_open_context(
acm_func=brokermod.get_client,
) as (cache_hit, client):
yield client
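A minimal usage sketch (the import path here is assumed; note the required trailing parens per the NOTE above)::

    from piker._cacheables import async_lifo_cache

    @async_lifo_cache(maxsize=32)
    async def lookup_pair_info(pair: str) -> dict:
        ...  # some expensive, rarely-changing lookup

    # a second call with the same positional args returns the
    # cached value without re-awaiting the wrapped function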

View File

@ -1,561 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Structured, daemon tree service management.
"""
from typing import Optional, Union, Callable, Any
from contextlib import asynccontextmanager as acm
from collections import defaultdict
from pydantic import BaseModel
import trio
from trio_typing import TaskStatus
import tractor
from .log import get_logger, get_console_log
from .brokers import get_brokermod
log = get_logger(__name__)
_root_dname = 'pikerd'
_registry_addr = ('127.0.0.1', 6116)
_tractor_kwargs: dict[str, Any] = {
# use a different registry addr then tractor's default
'arbiter_addr': _registry_addr
}
_root_modules = [
__name__,
'piker.clearing._ems',
'piker.clearing._client',
]
class Services(BaseModel):
actor_n: tractor._supervise.ActorNursery
service_n: trio.Nursery
debug_mode: bool # tractor sub-actor debug mode flag
service_tasks: dict[str, tuple[trio.CancelScope, tractor.Portal]] = {}
class Config:
arbitrary_types_allowed = True
async def start_service_task(
self,
name: str,
portal: tractor.Portal,
target: Callable,
**kwargs,
) -> (trio.CancelScope, tractor.Context):
'''
Open a context in a service sub-actor, add to a stack
that gets unwound at ``pikerd`` teardown.
This allows for allocating long-running sub-services in our main
daemon and explicitly controlling their lifetimes.
'''
async def open_context_in_task(
task_status: TaskStatus[
trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> Any:
with trio.CancelScope() as cs:
async with portal.open_context(
target,
**kwargs,
) as (ctx, first):
# unblock once the remote context has started
task_status.started((cs, first))
log.info(
f'`pikerd` service {name} started with value {first}'
)
try:
# wait on any context's return value
ctx_res = await ctx.result()
except tractor.ContextCancelled:
return await self.cancel_service(name)
else:
# wait on any error from the sub-actor
# NOTE: this will block indefinitely until
# cancelled either by error from the target
# context function or by being cancelled here by
# the surrounding cancel scope
return (await portal.result(), ctx_res)
cs, first = await self.service_n.start(open_context_in_task)
# store the cancel scope and portal for later cancellation or
# restart if needed.
self.service_tasks[name] = (cs, portal)
return cs, first
# TODO: per service cancellation by scope, we aren't using this
# anywhere right?
async def cancel_service(
self,
name: str,
) -> Any:
log.info(f'Cancelling `pikerd` service {name}')
cs, portal = self.service_tasks[name]
# XXX: not entirely sure why this is required,
# and should probably be better fine tuned in
# ``tractor``?
cs.cancel()
return await portal.cancel_actor()
_services: Optional[Services] = None
@acm
async def open_pikerd(
start_method: str = 'trio',
loglevel: Optional[str] = None,
# XXX: you should pretty much never want debug mode
# for data daemons when running in production.
debug_mode: bool = False,
) -> Optional[tractor._portal.Portal]:
'''
Start a root piker daemon who's lifetime extends indefinitely
until cancelled.
A root actor nursery is created which can be used to create and keep
alive underling services (see below).
'''
global _services
assert _services is None
# XXX: this may open a root actor as well
async with (
tractor.open_root_actor(
# passed through to ``open_root_actor``
arbiter_addr=_registry_addr,
name=_root_dname,
loglevel=loglevel,
debug_mode=debug_mode,
start_method=start_method,
# TODO: eventually we should be able to avoid
# having the root have more then permissions to
# spawn other specialized daemons I think?
enable_modules=_root_modules,
) as _,
tractor.open_nursery() as actor_nursery,
):
async with trio.open_nursery() as service_nursery:
# # setup service mngr singleton instance
# async with AsyncExitStack() as stack:
# assign globally for future daemon/task creation
_services = Services(
actor_n=actor_nursery,
service_n=service_nursery,
debug_mode=debug_mode,
)
yield _services
@acm
async def open_piker_runtime(
name: str,
enable_modules: list[str] = [],
start_method: str = 'trio',
loglevel: Optional[str] = None,
# XXX: you should pretty much never want debug mode
# for data daemons when running in production.
debug_mode: bool = False,
) -> Optional[tractor._portal.Portal]:
'''
Start a piker actor who's runtime will automatically
sync with existing piker actors in local network
based on configuration.
'''
global _services
assert _services is None
# XXX: this may open a root actor as well
async with (
tractor.open_root_actor(
# passed through to ``open_root_actor``
arbiter_addr=_registry_addr,
name=name,
loglevel=loglevel,
debug_mode=debug_mode,
start_method=start_method,
# TODO: eventually we should be able to avoid
# having the root have more then permissions to
# spawn other specialized daemons I think?
enable_modules=_root_modules,
) as _,
):
yield tractor.current_actor()
@acm
async def maybe_open_runtime(
loglevel: Optional[str] = None,
**kwargs,
) -> None:
"""
Start the ``tractor`` runtime (a root actor) if none exists.
"""
settings = _tractor_kwargs
settings.update(kwargs)
if not tractor.current_actor(err_on_no_runtime=False):
async with tractor.open_root_actor(
loglevel=loglevel,
**settings,
):
yield
else:
yield
@acm
async def maybe_open_pikerd(
loglevel: Optional[str] = None,
**kwargs,
) -> Union[tractor._portal.Portal, Services]:
"""If no ``pikerd`` daemon-root-actor can be found start it and
yield up (we should probably figure out returning a portal to self
though).
"""
if loglevel:
get_console_log(loglevel)
# subtle, we must have the runtime up here or portal lookup will fail
async with maybe_open_runtime(loglevel, **kwargs):
async with tractor.find_actor(_root_dname) as portal:
# assert portal is not None
if portal is not None:
yield portal
return
# presume pikerd role since no daemon could be found at
# configured address
async with open_pikerd(
loglevel=loglevel,
debug_mode=kwargs.get('debug_mode', False),
) as _:
# in the case where we're starting up the
# tractor-piker runtime stack in **this** process
# we return no portal to self.
yield None
# brokerd enabled modules
_data_mods = [
'piker.brokers.core',
'piker.brokers.data',
'piker.data',
'piker.data.feed',
'piker.data._sampling'
]
class Brokerd:
locks = defaultdict(trio.Lock)
@acm
async def find_service(
service_name: str,
) -> Optional[tractor.Portal]:
log.info(f'Scanning for service `{service_name}`')
# attach to existing daemon by name if possible
async with tractor.find_actor(
service_name,
arbiter_sockaddr=_registry_addr,
) as maybe_portal:
yield maybe_portal
async def check_for_service(
service_name: str,
) -> bool:
'''
Service daemon "liveness" predicate.
'''
async with tractor.query_actor(
service_name,
arbiter_sockaddr=_registry_addr,
) as sockaddr:
return sockaddr
@acm
async def maybe_spawn_daemon(
service_name: str,
service_task_target: Callable,
spawn_args: dict[str, Any],
loglevel: Optional[str] = None,
**kwargs,
) -> tractor.Portal:
'''
If no ``service_name`` daemon-actor can be found,
spawn one in a local subactor and return a portal to it.
If this function is called from a non-pikerd actor, the
spawned service will persist as long as pikerd does or
it is requested to be cancelled.
This can be seen as a service starting api for remote-actor
clients.
'''
if loglevel:
get_console_log(loglevel)
# serialize access to this section to avoid
# 2 or more tasks racing to create a daemon
lock = Brokerd.locks[service_name]
await lock.acquire()
async with find_service(service_name) as portal:
if portal is not None:
lock.release()
yield portal
return
log.warning(f"Couldn't find any existing {service_name}")
# ask root ``pikerd`` daemon to spawn the daemon we need if
# pikerd is not live we now become the root of the
# process tree
async with maybe_open_pikerd(
loglevel=loglevel,
**kwargs,
) as pikerd_portal:
if pikerd_portal is None:
# we are the root and thus are `pikerd`
# so spawn the target service directly by calling
# the provided target routine.
# XXX: this assumes that the target is well formed and will
# do the right things to setup both a sub-actor **and** call
# the ``_Services`` api from above to start the top level
# service task for that actor.
await service_task_target(**spawn_args)
else:
# tell the remote `pikerd` to start the target,
# the target can't return a non-serializable value
# since it is expected that service starting is
# non-blocking and the target task will persist running
# on `pikerd` after the client requesting it's start
# disconnects.
await pikerd_portal.run(
service_task_target,
**spawn_args,
)
async with tractor.wait_for_actor(service_name) as portal:
lock.release()
yield portal
await portal.cancel_actor()
async def spawn_brokerd(
brokername: str,
loglevel: Optional[str] = None,
**tractor_kwargs,
) -> bool:
log.info(f'Spawning {brokername} broker daemon')
brokermod = get_brokermod(brokername)
dname = f'brokerd.{brokername}'
extra_tractor_kwargs = getattr(brokermod, '_spawn_kwargs', {})
tractor_kwargs.update(extra_tractor_kwargs)
global _services
assert _services
# ask `pikerd` to spawn a new sub-actor and manage it under its
# actor nursery
modpath = brokermod.__name__
broker_enable = [modpath]
for submodname in getattr(
brokermod,
'__enable_modules__',
[],
):
subpath = f'{modpath}.{submodname}'
broker_enable.append(subpath)
portal = await _services.actor_n.start_actor(
dname,
enable_modules=_data_mods + broker_enable,
loglevel=loglevel,
debug_mode=_services.debug_mode,
**tractor_kwargs
)
# non-blocking setup of brokerd service nursery
from .data import _setup_persistent_brokerd
await _services.start_service_task(
dname,
portal,
_setup_persistent_brokerd,
brokername=brokername,
)
return True
@acm
async def maybe_spawn_brokerd(
brokername: str,
loglevel: Optional[str] = None,
**kwargs,
) -> tractor.Portal:
'''
Helper to spawn a brokerd service *from* a client
who wishes to use the sub-actor-daemon.
'''
async with maybe_spawn_daemon(
f'brokerd.{brokername}',
service_task_target=spawn_brokerd,
spawn_args={'brokername': brokername, 'loglevel': loglevel},
loglevel=loglevel,
**kwargs,
) as portal:
yield portal
async def spawn_emsd(
loglevel: Optional[str] = None,
**extra_tractor_kwargs
) -> bool:
"""
Start the clearing engine under ``pikerd``.
"""
log.info('Spawning emsd')
global _services
assert _services
portal = await _services.actor_n.start_actor(
'emsd',
enable_modules=[
'piker.clearing._ems',
'piker.clearing._client',
],
loglevel=loglevel,
debug_mode=_services.debug_mode, # set by pikerd flag
**extra_tractor_kwargs
)
# non-blocking setup of clearing service
from .clearing._ems import _setup_persistent_emsd
await _services.start_service_task(
'emsd',
portal,
_setup_persistent_emsd,
)
return True
@acm
async def maybe_open_emsd(
brokername: str,
loglevel: Optional[str] = None,
**kwargs,
) -> tractor._portal.Portal: # noqa
async with maybe_spawn_daemon(
'emsd',
service_task_target=spawn_emsd,
spawn_args={'loglevel': loglevel},
loglevel=loglevel,
**kwargs,
) as portal:
yield portal
# TODO: ideally we can start the tsdb "on demand" but it's
# probably going to require "rootless" docker, at least if we don't
# want to expect the user to start ``pikerd`` with root perms all the
# time.
# async def maybe_open_marketstored(
# loglevel: Optional[str] = None,
# **kwargs,
# ) -> tractor._portal.Portal: # noqa
# async with maybe_spawn_daemon(
# 'marketstored',
# service_task_target=spawn_emsd,
# spawn_args={'loglevel': loglevel},
# loglevel=loglevel,
# **kwargs,
# ) as portal:
# yield portal

View File

@ -0,0 +1,16 @@
.accounting
-----------
A subsystem for transaction processing, storage and historical
measurement.
.pnl
----
BEP, the break even price: the price at which liquidating
a remaining position results in a zero PnL since the position was
"opened" in the destination asset.
PPU: price-per-unit: the "average cost" (in cumulative mean terms)
of the "entry" transactions which "make a position larger"; taking
a profit relative to this price means that you will "make more
profit than made prior" since the position was opened.
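As a rough worked example (ignoring fees and partial exits, which the real calcs do handle)::

    # two entry clears which "make the position larger"
    fills = [
        (0.5, 100.0),  # (size, price)
        (0.5, 110.0),
    ]
    cumsize = sum(size for size, _ in fills)                    # 1.0
    ppu = sum(size * price for size, price in fills) / cumsize  # 105.0

    # with no fees and no exits the BEP equals the PPU: selling
    # the remaining 1.0 unit at 105.0 yields a zero PnL.
    bep = ppu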

View File

@ -0,0 +1,107 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
"Accounting for degens": count dem numberz that tracks how much you got
for tendiez.
'''
from ..log import get_logger
from .calc import (
iter_by_dt,
)
from ._ledger import (
Transaction,
TransactionLedger,
open_trade_ledger,
)
from ._pos import (
Account,
load_account,
load_account_from_ledger,
open_pps,
open_account,
Position,
)
from ._mktinfo import (
Asset,
dec_digits,
digits_to_dec,
MktPair,
Symbol,
unpack_fqme,
_derivs as DerivTypes,
)
from ._allocate import (
mk_allocator,
Allocator,
)
log = get_logger(__name__)
__all__ = [
'Account',
'Allocator',
'Asset',
'MktPair',
'Position',
'Symbol',
'Transaction',
'TransactionLedger',
'dec_digits',
'digits_to_dec',
'iter_by_dt',
'load_account',
'load_account_from_ledger',
'mk_allocator',
'open_account',
'open_pps',
'open_trade_ledger',
'unpack_fqme',
'DerivTypes',
]
def get_likely_pair(
src: str,
dst: str,
bs_mktid: str,
) -> str | None:
'''
Attempt to get the likely trading pair matching a given destination
asset `dst: str`.
'''
try:
src_name_start: str = bs_mktid.rindex(src)
except (
ValueError, # substr not found
):
# TODO: handle nested positions..(i.e.
# positions where the src fiat was used to
# buy some other dst which was further used
# to buy another dst..)
# log.warning(
# f'No src fiat {src} found in {bs_mktid}?'
# )
return None
likely_dst: str = bs_mktid[:src_name_start]
if likely_dst == dst:
return bs_mktid
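A quick behavioural sketch with made-up symbology keys::

    >>> get_likely_pair('usd', 'eth', 'ethusd')
    'ethusd'
    >>> get_likely_pair('usd', 'btc', 'ethusd')  # dst mismatch -> returns None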

View File

@ -22,54 +22,10 @@ from enum import Enum
from typing import Optional from typing import Optional
from bidict import bidict from bidict import bidict
from pydantic import BaseModel, validator
from ..data._source import Symbol from ._pos import Position
from ._messages import BrokerdPosition, Status from . import MktPair
from piker.types import Struct
class Position(BaseModel):
'''
Basic pp (personal position) model with attached fills history.
This type should be IPC wire ready?
'''
symbol: Symbol
# last size and avg entry price
size: float
avg_price: float # TODO: contextual pricing
# ordered record of known constituent trade messages
fills: list[Status] = []
def update_from_msg(
self,
msg: BrokerdPosition,
) -> None:
# XXX: better place to do this?
symbol = self.symbol
lot_size_digits = symbol.lot_size_digits
avg_price, size = (
round(msg['avg_price'], ndigits=symbol.tick_size_digits),
round(msg['size'], ndigits=lot_size_digits),
)
self.avg_price = avg_price
self.size = size
@property
def dsize(self) -> float:
'''
The "dollar" size of the pp, normally in trading (fiat) unit
terms.
'''
return self.avg_price * self.size
_size_units = bidict({ _size_units = bidict({
@ -84,34 +40,9 @@ SizeUnit = Enum(
) )
class Allocator(BaseModel): class Allocator(Struct):
class Config: mkt: MktPair
validate_assignment = True
copy_on_model_validation = False
arbitrary_types_allowed = True
# required to get the account validator lookup working?
extra = 'allow'
underscore_attrs_are_private = False
symbol: Symbol
account: Optional[str] = 'paper'
# TODO: for enums this clearly doesn't fucking work, you can't set
# a default at startup by passing in a `dict` but yet you can set
# that value through assignment..for wtv cucked reason.. honestly, pure
# unintuitive garbage.
size_unit: str = 'currency'
_size_units: dict[str, Optional[str]] = _size_units
@validator('size_unit', pre=True)
def maybe_lookup_key(cls, v):
# apply the corresponding enum key for the text "description" value
if v not in _size_units:
return _size_units.inverse[v]
assert v in _size_units
return v
# TODO: if we ever want ot support non-uniform entry-slot-proportion # TODO: if we ever want ot support non-uniform entry-slot-proportion
# "sizes" # "sizes"
@ -120,6 +51,28 @@ class Allocator(BaseModel):
units_limit: float units_limit: float
currency_limit: float currency_limit: float
slots: int slots: int
account: Optional[str] = 'paper'
_size_units: bidict[str, Optional[str]] = _size_units
# TODO: for enums this clearly doesn't fucking work, you can't set
# a default at startup by passing in a `dict` but yet you can set
# that value through assignment..for wtv cucked reason.. honestly, pure
# unintuitive garbage.
_size_unit: str = 'currency'
@property
def size_unit(self) -> str:
return self._size_unit
@size_unit.setter
def size_unit(self, v: str) -> Optional[str]:
if v not in _size_units:
v = _size_units.inverse[v]
assert v in _size_units
self._size_unit = v
return v
def step_sizes( def step_sizes(
self, self,
@ -140,10 +93,13 @@ class Allocator(BaseModel):
else: else:
return self.units_limit return self.units_limit
def limit_info(self) -> tuple[str, float]:
return self.size_unit, self.limit()
def next_order_info( def next_order_info(
self, self,
# we only need a startup size for exit calcs, we can the # we only need a startup size for exit calcs, we can then
# determine how large slots should be if the initial pp size was # determine how large slots should be if the initial pp size was
# larger then the current live one, and the live one is smaller # larger then the current live one, and the live one is smaller
# then the initial config settings. # then the initial config settings.
@ -158,24 +114,24 @@ class Allocator(BaseModel):
depending on position / order entry config. depending on position / order entry config.
''' '''
sym = self.symbol mkt: MktPair = self.mkt
ld = sym.lot_size_digits ld: int = mkt.size_tick_digits
size_unit = self.size_unit size_unit = self.size_unit
live_size = live_pp.size live_size = live_pp.cumsize
abs_live_size = abs(live_size) abs_live_size = abs(live_size)
abs_startup_size = abs(startup_pp.size) abs_startup_size = abs(startup_pp.cumsize)
u_per_slot, currency_per_slot = self.step_sizes() u_per_slot, currency_per_slot = self.step_sizes()
if size_unit == 'units': if size_unit == 'units':
slot_size = u_per_slot slot_size: float = u_per_slot
l_sub_pp = self.units_limit - abs_live_size l_sub_pp: float = self.units_limit - abs_live_size
elif size_unit == 'currency': elif size_unit == 'currency':
live_cost_basis = abs_live_size * live_pp.avg_price live_cost_basis: float = abs_live_size * live_pp.ppu
slot_size = currency_per_slot / price slot_size: float = currency_per_slot / price
l_sub_pp = (self.currency_limit - live_cost_basis) / price l_sub_pp: float = (self.currency_limit - live_cost_basis) / price
else: else:
raise ValueError( raise ValueError(
@ -184,12 +140,20 @@ class Allocator(BaseModel):
# an entry (adding-to or starting a pp) # an entry (adding-to or starting a pp)
if ( if (
action == 'buy' and live_size > 0 or
action == 'sell' and live_size < 0 or
live_size == 0 live_size == 0
or (
action == 'buy'
and live_size > 0
)
or (
action == 'sell'
and live_size < 0
)
): ):
order_size = min(
order_size = min(slot_size, l_sub_pp) slot_size,
max(l_sub_pp, 0),
)
# an exit (removing-from or going to net-zero pp) # an exit (removing-from or going to net-zero pp)
else: else:
@ -205,7 +169,7 @@ class Allocator(BaseModel):
if size_unit == 'currency': if size_unit == 'currency':
# compute the "projected" limit's worth of units at the # compute the "projected" limit's worth of units at the
# current pp (weighted) price: # current pp (weighted) price:
slot_size = currency_per_slot / live_pp.avg_price slot_size = currency_per_slot / live_pp.ppu
else: else:
slot_size = u_per_slot slot_size = u_per_slot
@ -220,7 +184,7 @@ class Allocator(BaseModel):
order_size = max(slotted_pp, slot_size) order_size = max(slotted_pp, slot_size)
if ( if (
abs_live_size < slot_size or abs_live_size < slot_size
# NOTE: front/back "loading" heurstic: # NOTE: front/back "loading" heurstic:
# if the remaining pp is in between 0-1.5x a slot's # if the remaining pp is in between 0-1.5x a slot's
@ -229,14 +193,17 @@ class Allocator(BaseModel):
# **without** going past a net-zero pp. if the pp is # **without** going past a net-zero pp. if the pp is
# > 1.5x a slot size, then front load: exit a slot's and # > 1.5x a slot size, then front load: exit a slot's and
# expect net-zero to be acquired on the final exit. # expect net-zero to be acquired on the final exit.
slot_size < pp_size < round((1.5*slot_size), ndigits=ld) or or slot_size < pp_size < round((1.5*slot_size), ndigits=ld)
or (
# underlying requires discrete (int) units (eg. stocks) # underlying requires discrete (int) units (eg. stocks)
# and thus our slot size (based on our limit) would # and thus our slot size (based on our limit) would
# exit a fractional unit's worth so, presuming we aren't # exit a fractional unit's worth so, presuming we aren't
# supporting a fractional-units-style broker, we need # supporting a fractional-units-style broker, we need
# exit the final unit. # exit the final unit.
ld == 0 and abs_live_size == 1 ld == 0
and abs_live_size == 1
)
): ):
order_size = abs_live_size order_size = abs_live_size
@ -244,9 +211,13 @@ class Allocator(BaseModel):
if order_size < slot_size: if order_size < slot_size:
# compute a fractional slots size to display # compute a fractional slots size to display
slots_used = self.slots_used( slots_used = self.slots_used(
Position(symbol=sym, size=order_size, avg_price=price) Position(
mkt=mkt,
bs_mktid=mkt.bs_mktid,
)
) )
# TODO: render an actual ``Executable`` type here?
return { return {
'size': abs(round(order_size, ndigits=ld)), 'size': abs(round(order_size, ndigits=ld)),
'size_digits': ld, 'size_digits': ld,
@ -268,11 +239,11 @@ class Allocator(BaseModel):
Calc and return the number of slots used by this ``Position``. Calc and return the number of slots used by this ``Position``.
''' '''
abs_pp_size = abs(pp.size) abs_pp_size = abs(pp.cumsize)
if self.size_unit == 'currency': if self.size_unit == 'currency':
# live_currency_size = size or (abs_pp_size * pp.avg_price) # live_currency_size = size or (abs_pp_size * pp.ppu)
live_currency_size = abs_pp_size * pp.avg_price live_currency_size = abs_pp_size * pp.ppu
prop = live_currency_size / self.currency_limit prop = live_currency_size / self.currency_limit
else: else:
@ -284,23 +255,15 @@ class Allocator(BaseModel):
return round(prop * self.slots) return round(prop * self.slots)
_derivs = (
'future',
'continuous_future',
'option',
'futures_option',
)
def mk_allocator( def mk_allocator(
symbol: Symbol, mkt: MktPair,
startup_pp: Position, startup_pp: Position,
# default allocation settings # default allocation settings
defaults: dict[str, float] = { defaults: dict[str, float] = {
'account': None, # select paper by default 'account': None, # select paper by default
'size_unit': 'currency', # 'size_unit': 'currency',
'units_limit': 400, 'units_limit': 400,
'currency_limit': 5e3, 'currency_limit': 5e3,
'slots': 4, 'slots': 4,
@ -318,42 +281,9 @@ def mk_allocator(
'currency_limit': 6e3, 'currency_limit': 6e3,
'slots': 6, 'slots': 6,
} }
defaults.update(user_def) defaults.update(user_def)
alloc = Allocator( return Allocator(
symbol=symbol, mkt=mkt,
**defaults, **defaults,
) )
asset_type = symbol.type_key
# specific configs by asset class / type
if asset_type in _derivs:
# since it's harder to know how currency "applies" in this case
# given leverage properties
alloc.size_unit = '# units'
# set units limit to slots size thus making make the next
# entry step 1.0
alloc.units_limit = alloc.slots
# if the current position is already greater then the limit
# settings, increase the limit to the current position
if alloc.size_unit == 'currency':
startup_size = startup_pp.size * startup_pp.avg_price
if startup_size > alloc.currency_limit:
alloc.currency_limit = round(startup_size, ndigits=2)
else:
startup_size = abs(startup_pp.size)
if startup_size > alloc.units_limit:
alloc.units_limit = startup_size
if asset_type in _derivs:
alloc.slots = alloc.units_limit
return alloc

View File

@ -0,0 +1,421 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Trade and transaction ledger processing.
'''
from __future__ import annotations
from collections import UserDict
from contextlib import contextmanager as cm
from functools import partial
from pathlib import Path
from pprint import pformat
from types import ModuleType
from typing import (
Any,
Callable,
Generator,
Literal,
TYPE_CHECKING,
)
from pendulum import (
DateTime,
)
import tomli_w # for fast ledger writing
from piker.types import Struct
from piker import config
from ..log import get_logger
from .calc import (
iter_by_dt,
)
if TYPE_CHECKING:
from ..data._symcache import (
SymbologyCache,
)
log = get_logger(__name__)
TxnType = Literal[
'clear',
'transfer',
# TODO: see https://github.com/pikers/piker/issues/510
# 'split',
# 'rename',
# 'resize',
# 'removal',
]
class Transaction(Struct, frozen=True):
# NOTE: this is a unified acronym also used in our `MktPair`
# and can stand for any of a
# "fully qualified <blank> endpoint":
# - "market" in the case of financial trades
# (btcusdt.spot.binance).
# - "merkel (tree)" aka a blockchain system "wallet tranfers"
# (btc.blockchain)
# - "money" for tradtitional (digital databases)
# *bank accounts* (usd.swift, eur.sepa)
fqme: str
tid: str | int # unique transaction id
size: float
price: float
cost: float # commisions or other additional costs
dt: DateTime
# the "event type" in terms of "market events" see above and
# https://github.com/pikers/piker/issues/510
etype: TxnType = 'clear'
# TODO: we can drop this right since we
# can instead expect the backend to provide this
# via the `MktPair`?
expiry: DateTime | None = None
# (optional) key-id defined by the broker-service backend which
# ensures the instrument-symbol market key for this record is unique
# in the "their backend/system" sense; i.e. this uid for the market
# as defined (internally) in some namespace defined by the broker
# service.
bs_mktid: str | int | None = None
def to_dict(
self,
**kwargs,
) -> dict:
dct: dict[str, Any] = super().to_dict(**kwargs)
# ensure we use a pendulum formatted
# ISO style str here!@
dct['dt'] = str(self.dt)
return dct
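# hedged usage sketch (all field values here are made up):
#
#   from pendulum import now
#   t = Transaction(
#       fqme='btcusdt.spot.binance',
#       tid='deadbeef',
#       size=0.01,
#       price=25_000.0,
#       cost=0.25,  # commissions and other fees
#       dt=now(),
#   )
#   t.to_dict()['dt']  # rendered as an ISO-style str per above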
class TransactionLedger(UserDict):
'''
Very simple ``dict`` wrapper + ``pathlib.Path`` handle to
a TOML formatted transaction file for enabling file writes
dynamically whilst still looking exactly like a ``dict`` from the
outside.
'''
# NOTE: see `open_trade_ledger()` for defaults, this should
# never be constructed manually!
def __init__(
self,
ledger_dict: dict,
file_path: Path,
account: str,
mod: ModuleType, # broker mod
tx_sort: Callable,
symcache: SymbologyCache,
) -> None:
self.account: str = account
self.file_path: Path = file_path
self.mod: ModuleType = mod
self.tx_sort: Callable = tx_sort
self._symcache: SymbologyCache = symcache
# any added txns we keep in that form for meta-data
# gathering purposes
self._txns: dict[str, Transaction] = {}
super().__init__(ledger_dict)
def __repr__(self) -> str:
return (
f'TransactionLedger: {len(self)}\n'
f'{pformat(list(self.data))}'
)
@property
def symcache(self) -> SymbologyCache:
'''
Read-only ref to backend's ``SymbologyCache``.
'''
return self._symcache
def update_from_t(
self,
t: Transaction,
) -> None:
'''
Given an input `Transaction`, cast to `dict` and update
from its transaction id.
'''
self.data[t.tid] = t.to_dict()
self._txns[t.tid] = t
def iter_txns(
self,
symcache: SymbologyCache | None = None,
) -> Generator[
Transaction,
None,
None,
]:
'''
Deliver trades records in ``(key: str, t: Transaction)``
form via generator.
'''
symcache = symcache or self._symcache
if self.account == 'paper':
from piker.clearing import _paper_engine
norm_trade: Callable = partial(
_paper_engine.norm_trade,
brokermod=self.mod,
)
else:
norm_trade: Callable = self.mod.norm_trade
# datetime-sort and pack into txs
for tid, txdict in self.tx_sort(self.data.items()):
txn: Transaction = norm_trade(
tid,
txdict,
pairs=symcache.pairs,
symcache=symcache,
)
yield txn
def to_txns(
self,
symcache: SymbologyCache | None = None,
) -> dict[str, Transaction]:
'''
Return entire output from ``.iter_txns()`` in a ``dict``.
'''
txns: dict[str, Transaction] = {}
for t in self.iter_txns(symcache=symcache):
if not t:
log.warning(f'{self.mod.name}:{self.account} TXN is -> {t}')
continue
txns[t.tid] = t
return txns
def write_config(self) -> None:
'''
Render the self.data ledger dict to its TOML file form.
ALWAYS order datetime sorted!
'''
is_paper: bool = self.account == 'paper'
symcache: SymbologyCache = self._symcache
towrite: dict[str, Any] = {}
for tid, txdict in self.tx_sort(self.data.copy()):
# write blank-str expiry for non-expiring assets
if (
'expiry' in txdict
and txdict['expiry'] is None
):
txdict['expiry'] = ''
# (maybe) re-write old acro-key
if (
is_paper
# if symcache is empty/not supported (yet), don't
# bother xD
and symcache.mktmaps
):
fqme: str = txdict.pop('fqsn', None) or txdict['fqme']
bs_mktid: str | None = txdict.get('bs_mktid')
if (
fqme not in symcache.mktmaps
or (
# also try to see if this is maybe a paper
# engine ledger in which case the bs_mktid
# should be the fqme as well!
bs_mktid
and fqme != bs_mktid
)
):
# always take any (paper) bs_mktid if defined and
# in the backend's cache key set.
if bs_mktid in symcache.mktmaps:
fqme: str = bs_mktid
else:
best_fqme: str = list(symcache.search(fqme))[0]
log.warning(
f'Could not find FQME: {fqme} in qualified set?\n'
f'Qualifying and expanding {fqme} -> {best_fqme}'
)
fqme = best_fqme
if (
bs_mktid
and bs_mktid != fqme
):
# in paper account case always make sure both the
# fqme and bs_mktid are fully qualified..
txdict['bs_mktid'] = fqme
# in paper ledgers always write the latest
# symbology key field: an FQME.
txdict['fqme'] = fqme
towrite[tid] = txdict
with self.file_path.open(mode='wb') as fp:
tomli_w.dump(towrite, fp)
def load_ledger(
brokername: str,
acctid: str,
# for testing or manual load from file
dirpath: Path | None = None,
) -> tuple[dict, Path]:
'''
Load a ledger (TOML) file from user's config directory:
$CONFIG_DIR/accounting/ledgers/trades_<brokername>_<acctid>.toml
Return its `dict`-content and file path.
'''
import time
try:
import tomllib
except ModuleNotFoundError:
import tomli as tomllib
ldir: Path = (
dirpath
or
config._config_dir / 'accounting' / 'ledgers'
)
if not ldir.is_dir():
ldir.mkdir()
fname = f'trades_{brokername}_{acctid}.toml'
fpath: Path = ldir / fname
if not fpath.is_file():
log.info(
f'Creating new local trades ledger: {fpath}'
)
fpath.touch()
with fpath.open(mode='rb') as cf:
start = time.time()
ledger_dict = tomllib.load(cf)
log.debug(f'Ledger load took {time.time() - start}s')
return ledger_dict, fpath
@cm
def open_trade_ledger(
broker: str,
account: str,
allow_from_sync_code: bool = False,
symcache: SymbologyCache | None = None,
# default is to sort by detected datetime-ish field
tx_sort: Callable = iter_by_dt,
rewrite: bool = False,
# for testing or manual load from file
_fp: Path | None = None,
) -> Generator[TransactionLedger, None, None]:
'''
Idempotently create and read in a trade log file from the
``<configuration_dir>/ledgers/`` directory.
Files are named per broker account of the form
``<brokername>_<accountname>.toml``. The ``accountname`` here is the
name as defined in the user's ``brokers.toml`` config.
'''
from ..brokers import get_brokermod
mod: ModuleType = get_brokermod(broker)
ledger_dict, fpath = load_ledger(
broker,
account,
dirpath=_fp,
)
cpy = ledger_dict.copy()
# XXX NOTE: if not provided presume we are being called from
# sync code and need to maybe run `trio` to generate..
if symcache is None:
# XXX: be mega pedantic and ensure the caller knows what
# they're doing!
if not allow_from_sync_code:
raise RuntimeError(
'You MUST set `allow_from_sync_code=True` when '
'calling `open_trade_ledger()` from sync code! '
'If you are calling from async code you MUST '
'instead pass a `symcache: SymbologyCache`!'
)
from ..data._symcache import (
get_symcache,
)
symcache: SymbologyCache = get_symcache(broker)
assert symcache
ledger = TransactionLedger(
ledger_dict=cpy,
file_path=fpath,
account=account,
mod=mod,
symcache=symcache,
tx_sort=getattr(mod, 'tx_sort', tx_sort),
)
try:
yield ledger
finally:
if (
ledger.data != ledger_dict
or rewrite
):
# TODO: show diff output?
# https://stackoverflow.com/questions/12956957/print-diff-of-python-dictionaries
log.info(f'Updating ledger for {fpath}:\n')
ledger.write_config()
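# An illustrative usage sketch (hypothetical broker/account names)
# showing a sync-code read cycle; any change to `ledger.data` is
# flushed back to the TOML file on exit:
#
#   with open_trade_ledger(
#       'binance',
#       'paper',
#       allow_from_sync_code=True,
#   ) as ledger:
#       for tid, txdict in ledger.data.items():
#           ...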


@ -0,0 +1,766 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Market (pair) meta-info layer: sane addressing semantics and meta-data
for cross-provider marketplaces.
We introduce the concept of,
- a FQMA: fully qualified market address,
- a sane schema for FQMAs including derivatives,
- a msg-serializable description of markets for
easy sharing with other pikers B)
'''
from __future__ import annotations
from decimal import (
Decimal,
ROUND_HALF_EVEN,
)
from typing import (
Any,
Literal,
)
from piker.types import Struct
# TODO: make these literals..
_underlyings: list[str] = [
'stock',
'bond',
'crypto',
'fiat',
'commodity',
]
_crypto_derivs: list[str] = [
'perpetual_future',
'crypto_future',
]
_derivs: list[str] = [
'swap',
'future',
'continuous_future',
'option',
'futures_option',
# if we can't figure it out, presume the worst XD
'unknown',
]
# NOTE: a tag for other subsystems to try
# and do default settings for certain things:
# - allocator does unit vs. dolla size limiting.
AssetTypeName: Literal[
_underlyings
+
_derivs
+
_crypto_derivs
]
# egs. stock, future, option, bond etc.
def dec_digits(
value: float | str | Decimal,
) -> int:
'''
Return the number of precision digits read from a decimal or float
value.
'''
if value == 0:
return 0
return int(
-Decimal(str(value)).as_tuple().exponent
)
float_digits = dec_digits
def digits_to_dec(
ndigits: int,
) -> Decimal:
'''
Return the minimum ``Decimal`` increment for an input digit count.
eg. 3 -> 0.001
'''
if ndigits == 0:
return Decimal('0')
return Decimal('0.' + '0'*(ndigits-1) + '1')
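# A quick worked example of the 2 helpers above (values chosen purely
# for illustration):
#   dec_digits(0.001)  # -> 3
#   digits_to_dec(3)   # -> Decimal('0.001')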
class Asset(Struct, frozen=True):
'''
Container type describing any transactable asset and its
contract-like and/or underlying technology meta-info.
'''
name: str
atype: str # AssetTypeName
# minimum transaction size / precision.
# eg. for buttcoin this is a "satoshi".
tx_tick: Decimal
# NOTE: additional info optionally packed in by the backend, but
# should not be explicitly required in our generic API.
info: dict | None = None
# `None` is not toml-compat so drop info
# if no extra data added..
def to_dict(
self,
**kwargs,
) -> dict:
dct = super().to_dict(**kwargs)
if (info := dct.pop('info', None)):
dct['info'] = info
assert dct['tx_tick']
return dct
@classmethod
def from_msg(
cls,
msg: dict[str, Any],
) -> Asset:
return cls(
tx_tick=Decimal(str(msg.pop('tx_tick'))),
info=msg.pop('info', None),
**msg,
)
def __str__(self) -> str:
return self.name
def quantize(
self,
size: float,
) -> Decimal:
'''
Truncate input ``size: float`` using ``Decimal``
quantized form of the digit precision defined
by ``self.tx_tick``.
'''
digits = float_digits(self.tx_tick)
return Decimal(size).quantize(
Decimal(f'1.{"0".ljust(digits, "0")}'),
rounding=ROUND_HALF_EVEN
)
@classmethod
def guess_from_mkt_ep_key(
cls,
mkt_ep_key: str,
atype: str | None = None,
) -> Asset:
'''
A hacky guess method for presuming a (target) asset's properties
based on either the actual market endpoint key, or config settings
from the user.
'''
atype = atype or 'unknown'
# attempt to strip off any source asset
# via presumed syntax of:
# - <dst>/<src>
# - <dst>.<src>
# - etc.
for char in ['/', '.']:
dst, _, src = mkt_ep_key.partition(char)
if src:
if not atype:
atype = 'fiat'
break
return Asset(
name=dst,
atype=atype,
tx_tick=Decimal('0.01'),
)
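# An illustrative sketch (made up asset values) of tick-quantized
# transaction sizing via `Asset.quantize()`:
#   xmr = Asset(
#       name='xmr',
#       atype='crypto',
#       tx_tick=Decimal('0.0001'),
#   )
#   xmr.quantize(1.23456789)  # -> Decimal('1.2346')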
def maybe_cons_tokens(
tokens: list[Any],
delim_char: str = '.',
) -> str:
'''
Construct `str` output from a maybe-concatenation of input
sequence of elements in ``tokens``.
'''
return delim_char.join(filter(bool, tokens)).lower()
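# eg. (illustrative): empty tokens are dropped and the output lowered,
#   maybe_cons_tokens(['MNQ', '', 'CME', 'ib'])  # -> 'mnq.cme.ib'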
class MktPair(Struct, frozen=True):
'''
Market description for a pair of assets which are tradeable:
a market which enables transactions of the form,
buy: source asset -> destination asset
sell: destination asset -> source asset
The main intention of this type is for a **simple** cross-asset
venue/broker normalized description type from which all
market-auctions can be mapped from FQME identifiers.
TODO: our eventual target fqme format/schema is:
<dst>/<src>.<expiry>.<con_info_1>.<con_info_2>. -> .<venue>.<broker>
^ -- optional tokens ------------------------------- ^
Notes:
------
Some venues provide a different semantic (which we frankly find
confusing and non-general) such as "base" and "quote" asset.
For example this is how `binance` defines the terms:
https://binance-docs.github.io/apidocs/websocket_api/en/#public-api-definitions
https://binance-docs.github.io/apidocs/futures/en/#public-endpoints-info
- *base* asset refers to the asset that is the *quantity* of a symbol.
- *quote* asset refers to the asset that is the *price* of a symbol.
In other words the "quote" asset is the asset that the market
is pricing "buys" *in*, and the *base* asset is the one that the market
allows you to "buy" an *amount of*. Put more simply the *quote*
asset is our "source" asset and the *base* asset is our "destination"
asset.
This definition can be further understood by reading our
`.brokers.binance.api.Pair` type wherein the
`Pair.[quote/base]AssetPrecision` field determines the (transfer)
transaction precision available per asset; i.e. the satoshis
unit in bitcoin for representing the minimum size of a
transaction that can take place on the blockchain.
'''
dst: str | Asset
# "destination asset" (name) used to buy *to*
# (or used to sell *from*)
price_tick: Decimal # minimum price increment
size_tick: Decimal # minimum size (aka vlm) increment
# the tick size is the number describing the smallest step in value
# available in this market between the source and destination
# assets.
# https://en.wikipedia.org/wiki/Tick_size
# https://en.wikipedia.org/wiki/Commodity_tick
# https://en.wikipedia.org/wiki/Percentage_in_point
# unique "broker id" since every market endpoint provider
# has their own nomenclature and schema for market maps.
bs_mktid: str
broker: str # the middle man giving access
# NOTE: to start this field is optional but should eventually be
# required; the reason is for backward compat since more positioning
# calculations were not originally stored with a src asset..
src: str | Asset = ''
# "source asset" (name) used to buy *from*
# (or used to sell *to*).
venue: str = '' # market venue provider name
expiry: str = '' # for derivs, expiry datetime parseable str
# destination asset's financial type/classification name
# NOTE: this is required for the order size allocator system,
# since we use different default settings based on the type
# of the destination asset, eg. futes use a units limits vs.
# equities a $limit.
# dst_type: AssetTypeName | None = None
# source asset's financial type/classification name
# TODO: is a src type required for trading?
# there's no reason to need any more then the one-way alloc-limiter
# config right?
# src_type: AssetTypeName
# for derivs, info describing contract, egs.
# strike price, call or put, swap type, exercise model, etc.
contract_info: list[str] | None = None
# TODO: rename to sectype since all of these can
# be considered "securities"?
_atype: str = ''
# allow explicit disable of the src part of the market
# pair name -> useful for legacy markets like qqq.nasdaq.ib
_fqme_without_src: bool = False
# NOTE: when cast to `str` return fqme
def __str__(self) -> str:
return self.fqme
def to_dict(
self,
**kwargs,
) -> dict:
d = super().to_dict(**kwargs)
d['src'] = self.src.to_dict(**kwargs)
if not isinstance(self.dst, str):
d['dst'] = self.dst.to_dict(**kwargs)
else:
d['dst'] = str(self.dst)
d['price_tick'] = str(self.price_tick)
d['size_tick'] = str(self.size_tick)
if self.contract_info is None:
d.pop('contract_info')
# d.pop('_fqme_without_src')
return d
@classmethod
def from_msg(
cls,
msg: dict[str, Any],
) -> MktPair:
'''
Constructor for a received msg-dict normally received over IPC.
'''
if not isinstance(
dst_asset_msg := msg.pop('dst'),
str,
):
dst: Asset = Asset.from_msg(dst_asset_msg) # .copy()
else:
dst: str = dst_asset_msg
src_asset_msg: dict = msg.pop('src')
src: Asset = Asset.from_msg(src_asset_msg) # .copy()
# XXX NOTE: ``msgspec`` can encode `Decimal` but it doesn't
# decode it by default since we aren't spec-cing these
# msgs as structs proper to get them to decode implicitly
# (yet) as per,
# - https://github.com/pikers/piker/pull/354
# - https://github.com/goodboy/tractor/pull/311
# SO we have to ensure we do a struct type
# cast (which `.copy()` does) to ensure we get the right
# type!
return cls(
dst=dst,
src=src,
price_tick=Decimal(msg.pop('price_tick')),
size_tick=Decimal(msg.pop('size_tick')),
**msg,
).copy()
@property
def resolved(self) -> bool:
return isinstance(self.dst, Asset)
@classmethod
def from_fqme(
cls,
fqme: str,
price_tick: float | str,
size_tick: float | str,
bs_mktid: str,
broker: str | None = None,
**kwargs,
) -> MktPair:
_fqme: str = fqme
if (
broker
and broker not in fqme
):
_fqme = f'{fqme}.{broker}'
broker, mkt_ep_key, venue, expiry = unpack_fqme(_fqme)
kven: str = kwargs.pop('venue', venue)
if venue:
assert venue == kven
else:
venue = kven
exp: str = kwargs.pop('expiry', expiry)
if expiry:
assert exp == expiry
else:
expiry = exp
dst: Asset = Asset.guess_from_mkt_ep_key(
mkt_ep_key,
atype=kwargs.get('_atype'),
)
# XXX: loading from a fqme string will
# leave this pair as "unresolved" meaning
# we don't yet have `.dst` set as an `Asset`
# which we expect to be filled in by some
# backend client with access to that data-info.
return cls(
dst=dst,
# XXX: not resolved to ``Asset`` :(
#src=src,
broker=broker,
venue=venue,
# XXX NOTE: we presume this token
# is the expiry for now!
expiry=expiry,
price_tick=price_tick,
size_tick=size_tick,
bs_mktid=bs_mktid,
**kwargs,
).copy()
@property
def key(self) -> str:
'''
The "endpoint key" for this market.
'''
return self.pair()
def pair(
self,
delim_char: str | None = None,
) -> str:
'''
The "endpoint asset pair key" for this market.
Eg. mnq/usd or btc/usdt or xmr/btc
In most other tina platforms this is referred to as the
"symbol".
'''
return maybe_cons_tokens(
[str(self.dst),
str(self.src)],
# TODO: make the default '/'
delim_char=delim_char or '',
)
@property
def suffix(self) -> str:
'''
The "contract suffix" for this market.
Eg. mnq/usd.20230616.cme.ib
^ ----- ^
or tsla/usd.20230324.200c.cboe.ib
^ ---------- ^
In most other tina platforms they only show you these details in
some kinda "meta data" format; we have FQMEs so we do this up
front and explicitly.
'''
field_strs = [self.expiry]
con_info = self.contract_info
if con_info is not None:
field_strs.extend(con_info)
return maybe_cons_tokens(field_strs)
def get_fqme(
self,
# NOTE: allow dropping the source asset from the
# market endpoint's pair key. Eg. to change
# mnq/usd.<> -> mnq.<> which is useful when
# searching (legacy) stock exchanges.
without_src: bool = False,
delim_char: str | None = None,
) -> str:
'''
Return the fully qualified market endpoint-address for the
pair of transacting assets.
fqme = "fully qualified market endpoint"
And yes, you pronounce it colloquially as read..
Basically the idea here is that all client code (consumers of piker's
APIs which query the data/broker-provider agnostic layer(s)) should be
able to tell which backend / venue / derivative each data feed/flow is
from by an explicit string-key of the current form:
<market-instrument-name>
.<venue>
.<expiry>
.<derivative-suffix-info>
.<brokerbackendname>
eg. for an explicit daq mini futes contract: mnq.cme.20230317.ib
TODO: I have thoughts that we should actually change this to be
more like an "attr lookup" (like how the web should have done
urls, but marketing peeps ruined it etc. etc.)
<broker>.<venue>.<instrumentname>.<suffixwithmetadata>
TODO:
See community discussion on naming and nomenclature, order
of addressing hierarchy, general schema, internal representation:
https://github.com/pikers/piker/issues/467
'''
key: str = (
self.pair(delim_char=delim_char)
if not (without_src or self._fqme_without_src)
else str(self.dst)
)
return maybe_cons_tokens([
key, # final "pair name" (eg. qqq[/usd], btcusdt)
self.venue,
self.suffix, # includes expiry and other con info
self.broker,
])
# NOTE: the main idea behind an fqme is to map a "market address"
# to some endpoint from a transaction provider (eg. a broker) such
# that we build a table of `fqme: str -> bs_mktid: Any` where any "piker
# market address" maps 1-to-1 to some broker trading endpoint.
# @cached_property
fqme = property(get_fqme)
def get_bs_fqme(
self,
**kwargs,
) -> str:
'''
FQME sin broker part XD
'''
sin_broker, *_ = self.get_fqme(**kwargs).rpartition('.')
return sin_broker
bs_fqme = property(get_bs_fqme)
@property
def fqsn(self) -> str:
return self.fqme
def quantize(
self,
size: float,
quantity_type: Literal['price', 'size'] = 'size',
) -> Decimal:
'''
Truncate input ``size: float`` using ``Decimal``
and ``.size_tick``'s # of digits.
'''
match quantity_type:
case 'price':
digits = float_digits(self.price_tick)
case 'size':
digits = float_digits(self.size_tick)
return Decimal(size).quantize(
Decimal(f'1.{"0".ljust(digits, "0")}'),
rounding=ROUND_HALF_EVEN
)
# TODO: BACKWARD COMPAT, TO REMOVE?
@property
def type_key(self) -> str:
# if set explicitly then use it!
if self._atype:
return self._atype
if isinstance(self.dst, Asset):
return str(self.dst.atype)
return 'UNKNOWN'
@property
def price_tick_digits(self) -> int:
return float_digits(self.price_tick)
@property
def size_tick_digits(self) -> int:
return float_digits(self.size_tick)
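# An illustrative sketch (all values made up) of a CME micro-futes
# market and the FQME it renders:
#   mnq = MktPair(
#       dst='mnq',
#       src='usd',
#       price_tick=Decimal('0.25'),
#       size_tick=Decimal('1'),
#       bs_mktid='495512572',
#       broker='ib',
#       venue='cme',
#       expiry='20230616',
#   )
#   mnq.fqme  # -> 'mnqusd.cme.20230616.ib'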
def unpack_fqme(
fqme: str,
broker: str | None = None
) -> tuple[str, ...]:
'''
Unpack a fully-qualified-symbol-name to ``tuple``.
'''
venue = ''
suffix = ''
# TODO: probably reverse the order of all this XD
tokens = fqme.split('.')
match tokens:
case [mkt_ep, broker]:
# probably crypto
return (
broker,
mkt_ep,
'',
'',
)
# TODO: swap venue and suffix/deriv-info here?
case [mkt_ep, venue, suffix, broker]:
pass
# handle `bs_mktid` + `broker` input case
case [
mkt_ep, venue, suffix
] if (
broker
and suffix != broker
):
pass
case [mkt_ep, venue, broker]:
suffix = ''
case _:
raise ValueError(f'Invalid fqme: {fqme}')
return (
broker,
mkt_ep,
venue,
# '.'.join([mkt_ep, venue]),
suffix,
)
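# eg. (illustrative) unpackings:
#   unpack_fqme('mnqusd.cme.20230616.ib')
#   # -> ('ib', 'mnqusd', 'cme', '20230616')
#   unpack_fqme('btcusdt.binance')
#   # -> ('binance', 'btcusdt', '', '')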
class Symbol(Struct):
'''
I guess this is some kinda container thing for dealing with
all the different meta-data formats from brokers?
'''
key: str
broker: str = ''
venue: str = ''
# precision descriptors for price and vlm
tick_size: Decimal = Decimal('0.01')
lot_tick_size: Decimal = Decimal('0.0')
suffix: str = ''
broker_info: dict[str, dict[str, Any]] = {}
@classmethod
def from_fqme(
cls,
fqsn: str,
info: dict[str, Any],
) -> Symbol:
broker, mktep, venue, suffix = unpack_fqme(fqsn)
tick_size = info.get('price_tick_size', 0.01)
lot_size = info.get('lot_tick_size', 0.0)
return Symbol(
broker=broker,
key=mktep,
tick_size=tick_size,
lot_tick_size=lot_size,
venue=venue,
suffix=suffix,
broker_info={broker: info},
)
@property
def type_key(self) -> str:
return list(self.broker_info.values())[0]['asset_type']
@property
def tick_size_digits(self) -> int:
return float_digits(self.tick_size)
@property
def lot_size_digits(self) -> int:
return float_digits(self.lot_tick_size)
@property
def price_tick(self) -> Decimal:
return Decimal(str(self.tick_size))
@property
def size_tick(self) -> Decimal:
return Decimal(str(self.lot_tick_size))
@property
def broker(self) -> str:
return list(self.broker_info.keys())[0]
@property
def fqme(self) -> str:
return maybe_cons_tokens([
self.key, # final "pair name" (eg. qqq[/usd], btcusdt)
self.venue,
self.suffix, # includes expiry and other con info
self.broker,
])
def quantize(
self,
size: float,
) -> Decimal:
digits = float_digits(self.lot_tick_size)
return Decimal(size).quantize(
Decimal(f'1.{"0".ljust(digits, "0")}'),
rounding=ROUND_HALF_EVEN
)
# NOTE: when cast to `str` return fqme
def __str__(self) -> str:
return self.fqme


@ -0,0 +1,983 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Personal/Private position parsing, calculating, summarizing in a way
that doesn't try to cuk most humans who prefer to not lose their moneys..
(looking at you `ib` and dirt-bird friends)
'''
from __future__ import annotations
from contextlib import contextmanager as cm
from decimal import Decimal
from pprint import pformat
from pathlib import Path
from types import ModuleType
from typing import (
Any,
Iterator,
Generator
)
import pendulum
from pendulum import (
datetime,
now,
)
import polars as pl
import tomlkit
from ._ledger import (
Transaction,
TransactionLedger,
)
from ._mktinfo import (
MktPair,
Asset,
unpack_fqme,
)
from .calc import (
ppu,
# iter_by_dt,
)
from .. import config
from ..clearing._messages import (
BrokerdPosition,
)
from piker.types import Struct
from piker.data._symcache import SymbologyCache
from ..log import get_logger
log = get_logger(__name__)
class Position(Struct):
'''
An asset "position" model with attached clearing transaction history.
A financial "position" in `piker` terms is a summary of accounting
metrics computed from a transaction ledger; generally it describes
some accumulative "size" and "average price" from the summarized
underlying transaction set.
In piker we focus on the `.ppu` (price per unit) and the `.bep`
(break even price) including all transaction entries and exits since
the last "net-zero" size of the destination asset's holding.
This interface serves as an object API for computing and
tracking positions as well as supporting serialization for
storage in the local file system (in TOML) and to interchange
as a msg over IPC.
'''
mkt: MktPair
# can be +ve or -ve for long/short
# size: float
# "price-per-unit price" above or below which pnl moves above and
# below zero for the entirety of the current "trade state". The ppu
# is only modified on "increases of" the absolute size of a position
# in one of a long/short "direction" (i.e. abs(.size_i) > 0 after
# the next transaction given .size was > 0 before that tx, and vice
# versa for -ve sized positions).
# ppu: float
# TODO: break-even-price support!
# bep: float
# unique "backend system market id"
bs_mktid: str
split_ratio: int | None = None
# TODO: use a `pl.DataFrame` instead?
_events: dict[str, Transaction | dict] = {}
@property
def expiry(self) -> datetime | None:
'''
Security expiry if it has a limited lifetime.
For non-derivative markets this is normally `None`.
'''
exp: str | None = self.mkt.expiry
if exp is None:
return None
match exp.lower():
# empty str, 'perp' (contract) or simply a null
# signifies instrument with NO expiry.
case 'perp' | '' | None:
return None
case str():
return pendulum.parse(exp)
case _:
raise ValueError(
f'Unhandled `MktPair.expiry`: `{exp}`'
)
# TODO: idea: "real LIFO" dynamic positioning.
# - when a trade takes place where the pnl for
# the (set of) trade(s) is below the breakeven price
# it may be that the trader took a +ve pnl on a short(er)
# term trade in the same account.
# - in this case we could recalc the be price to
# be reverted back to its prior value before the nearest term
# trade was opened.?
# def bep() -> float:
# ...
def clears_df(self) -> pl.DataFrame:
...
def clearsitems(self) -> list[(str, dict)]:
return ppu(
self.iter_by_type('clear'),
as_ledger=True
)
def iter_by_type(
self,
etype: str,
) -> Iterator[dict | Transaction]:
'''
Iterate the internally managed ``._events: dict`` table in
datetime-stamped order.
'''
# sort on the expected datetime field
# for event in iter_by_dt(
for event in sorted(
self._events.values(),
key=lambda entry: entry.dt
):
# if event.etype == etype:
match event:
case (
{'etype': _etype} |
Transaction(etype=str(_etype))
):
assert _etype == etype
yield event
def minimized_clears(self) -> dict[str, dict]:
'''
Minimize the position's clears entries by removing
all transactions before the last net zero size except for when
a clear event causes a position "side" change (i.e. long to short
after a single fill) wherein we store the transaction prior to the
net-zero pass.
This avoids unnecessary history irrelevant to the current
non-net-zero size state when serializing for offline storage.
'''
# scan for the last "net zero" position by iterating
# transactions until the next net-zero cumsize, rinse,
# repeat.
cumsize: float = 0
clears_since_zero: list[dict] = []
for tid, cleardict in self.clearsitems():
cumsize = float(
# self.mkt.quantize(cumsize + cleardict['tx'].size
self.mkt.quantize(cleardict['cumsize'])
)
clears_since_zero.append(cleardict)
# NOTE: always pop sign change since we just use it to
# determine which entry to clear "up to".
sign_change: bool = cleardict.pop('sign_change')
if cumsize == 0:
clears_since_zero = clears_since_zero[:-2]
# clears_since_zero.clear()
elif sign_change:
clears_since_zero = clears_since_zero[:-1]
return clears_since_zero
def to_pretoml(self) -> tuple[str, dict]:
'''
Prep this position's data contents for export as an entry
in a TOML "account file" (such as
`account.binance.paper.toml`) including re-structuring of
the ``._events`` entries as an array of inline-subtables
for better ``pps.toml`` compactness.
'''
mkt: MktPair = self.mkt
assert isinstance(mkt, MktPair)
# TODO: we need to figure out how to have one top level
# listing venue here even when the backend isn't providing
# it via the trades ledger..
# drop symbol obj in serialized form
fqme: str = mkt.fqme
broker, mktep, venue, suffix = unpack_fqme(fqme)
# an asset resolved mkt where we have ``Asset`` info about
# each tradeable asset in the market.
asset_type: str = 'n/a'
if mkt.resolved:
dst: Asset = mkt.dst
asset_type = dst.atype
asdict: dict[str, Any] = {
'bs_mktid': self.bs_mktid,
# 'expiry': self.expiry or '',
'asset_type': asset_type,
'price_tick': mkt.price_tick,
'size_tick': mkt.size_tick,
}
if exp := self.expiry:
asdict['expiry'] = exp
clears_since_zero: list[dict] = self.minimized_clears()
# setup a "multi-line array of inline tables" which we call
# the "clears table", contained by each position entry in
# an "account file".
clears_table: tomlkit.Array = tomlkit.array()
clears_table.multiline(
multiline=True,
indent='',
)
for entry in clears_since_zero:
inline_table = tomlkit.inline_table()
# insert optional clear fields in column order
for k in ['ppu', 'cumsize']:
if val := entry.get(k):
inline_table[k] = val
# insert required fields
for k in ['price', 'size', 'cost']:
inline_table[k] = entry[k]
# NOTE: we don't actually need to serialize datetime to parsable `str`
# since `tomlkit` supports a native `DateTime` but
# seems like we're not doing it entirely in clearing
# tables yet?
inline_table['dt'] = entry['dt'] # .isoformat('T')
tid: str = entry['tid']
inline_table['tid'] = tid
clears_table.append(inline_table)
# assert not events
asdict['clears'] = clears_table
return fqme, asdict
def update_from_msg(
self,
msg: BrokerdPosition,
) -> None:
'''
Hard-set the current position from a remotely-received
(normally via IPC) msg by applying the msg as the one (and
only) txn in the `._events` table thus forcing the current
asset allocation blindly.
'''
mkt: MktPair = self.mkt
now_dt: pendulum.DateTime = now()
now_str: str = str(now_dt)
# XXX: wipe all prior txn history since if we wanted it we wouldn't
# be using this method to compute our state!
self._events.clear()
# NOTE WARNING XXX: we summarize the pos with a single
# summary transaction (for now) until we either pass THIS
# type as msg directly from emsd or come up with a better
# way?
t = Transaction(
fqme=mkt.fqme,
bs_mktid=mkt.bs_mktid,
size=msg['size'],
price=msg['avg_price'],
cost=0,
# NOTE: special provisions required!
# - tid needs to be unique or this txn will be ignored!!
tid=now_str,
# TODO: also figure out how to avoid this!
dt=now_dt,
)
self.add_clear(t)
@property
def dsize(self) -> float:
'''
The "dollar" size of the pp, normally in source asset
(fiat) units.
'''
return self.ppu * self.cumsize
def expired(self) -> bool:
'''
Predicate which checks if the contract/instrument is past
its expiry.
'''
return bool(self.expiry) and self.expiry < now()
def add_clear(
self,
t: Transaction,
) -> bool:
'''
Update clearing table by calculating the rolling ppu and
(accumulative) size in both the clears entry and local
attrs state.
Inserts are always done in datetime sorted order.
'''
# added: bool = False
tid: str = t.tid
if tid in self._events:
log.warning(f'{t} is already added?!')
# return added
# TODO: apparently this IS possible with a dict but not
# common and probably not that beneficial unless we're also
# going to do cum-calcs on each insert?
# https://stackoverflow.com/questions/38079171/python-insert-new-element-into-sorted-list-of-dictionaries
# from bisect import insort
# insort(
# self._clears,
# clear,
# key=lambda entry: entry['dt']
# )
self._events[tid] = t
return True
# TODO: compute these incrementally instead
# of re-looping through each time resulting in O(n**2)
# behaviour..? Can we have some kinda clears len to cached
# output subsys?
def calc_ppu(self) -> float:
return ppu(self.iter_by_type('clear'))
# # return self.clearsdict()
# # )
# return list(self.clearsdict())[-1][1]['ppu']
@property
def ppu(self) -> float:
return round(
self.calc_ppu(),
ndigits=self.mkt.price_tick_digits,
)
def calc_size(self) -> float:
'''
Calculate the unit size of this position in the destination
asset using the clears/trade event table; zero if expired.
'''
# time-expired pps (normally derivatives) are "closed"
# and have a zero size.
if self.expired():
return 0.
clears: list[(str, dict)] = self.clearsitems()
if clears:
return clears[-1][1]['cumsize']
else:
return 0.
# if self.split_ratio is not None:
# size = round(size * self.split_ratio)
# return float(
# self.mkt.quantize(size),
# )
# TODO: ideally we don't implicitly recompute the
# full sequence from `.clearsdict()` every read..
# the writer-updates-local-attr-state was actually kinda nice
# before, but sometimes led to hard to detect bugs when
# state was de-synced.
@property
def cumsize(self) -> float:
if (
self.expiry
and self.expiry < now()
):
return 0
return round(
self.calc_size(),
ndigits=self.mkt.size_tick_digits,
)
# TODO: once we have an `.events` table with diff
# mkt event types..?
# def suggest_split(self) -> float:
# ...
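# An illustrative sketch of seeding a `Position` with one (hypothetical)
# clearing `Transaction`; `mkt` here is assumed to be a resolved
# `MktPair` as sketched in the mkt-info module above:
#   pos = Position(mkt=mkt, bs_mktid='495512572')
#   pos.add_clear(Transaction(
#       fqme=mkt.fqme,
#       bs_mktid='495512572',
#       tid='deadbeef',
#       size=2,
#       price=100.0,
#       cost=1.0,
#       dt=now(),
#   ))
#   pos.cumsize  # -> 2.0
#   pos.ppu      # -> 101.0 given the 2x txn-cost model in `.calc.ppu()`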
class Account(Struct):
'''
The real-time (double-entry accounting) state of
a given **asset ownership tracking system**, normally offered
or measured from some brokerage, CEX or (implied virtual)
summary crypto$ "wallets" aggregated and tracked over some set
of DEX-es.
Both market-mapped and ledger-system-native (aka inter-account
"transfers") transactions are accounted and they pertain to
(implied) PnL relative to any other accountable asset.
More specifically in piker terms, an account tracks all of:
- the *balances* of all assets currently available for use either
in (future) market or (inter-account/wallet) transfer
transactions.
- a transaction *ledger* from a given brokerd backend which
is a recording of all (known) such transactions from the past.
- a set of financial *positions* as measured from the current
ledger state.
See the semantic origins from double-entry bookkeeping:
https://en.wikipedia.org/wiki/Double-entry_bookkeeping
'''
mod: ModuleType
acctid: str
pps: dict[str, Position]
conf_path: Path
conf: dict | None = {}
# TODO: track a table of asset balances as `.balances:
# dict[Asset, float]`?
@property
def brokername(self) -> str:
return self.mod.name
def update_from_ledger(
self,
ledger: TransactionLedger | dict[str, Transaction],
cost_scalar: float = 2,
symcache: SymbologyCache | None = None,
_mktmap_table: dict[str, MktPair] | None = None,
) -> dict[str, Position]:
'''
Update the internal `.pps[str, Position]` table from input
transactions recomputing the price-per-unit (ppu) and
accumulative size for each entry.
'''
if (
not isinstance(ledger, TransactionLedger)
):
if symcache is None:
raise RuntimeError(
'No ledger provided!\n'
'We can not determine the `MktPair`s without a symcache..\n'
'Please provide `symcache: SymbologyCache` when '
'processing NEW positions!'
)
itertxns = sorted(
ledger.values(),
key=lambda t: t.dt,
)
else:
itertxns = ledger.iter_txns()
symcache = ledger.symcache
pps = self.pps
updated: dict[str, Position] = {}
# lifo update all pps from records, ensuring
# we compute the PPU and size sorted in time!
for txn in itertxns:
fqme: str = txn.fqme
bs_mktid: str = txn.bs_mktid
# template the mkt-info presuming a legacy market ticks
# if no info exists in the transactions..
try:
mkt: MktPair = symcache.mktmaps[fqme]
except KeyError:
if _mktmap_table is None:
raise
# XXX: caller is allowed to provide a fallback
# mktmap table for the case where a new position is
# being added and the preloaded symcache didn't
# have this entry prior (eg. with frickin IB..)
mkt = _mktmap_table[fqme]
if not (pos := pps.get(bs_mktid)):
assert isinstance(
mkt,
MktPair,
)
# if no existing pos, allocate fresh one.
pos = pps[bs_mktid] = Position(
mkt=mkt,
bs_mktid=bs_mktid,
)
else:
# NOTE: if for some reason a "less resolved" mkt pair
# info has been set (based on the `.fqme` being
# a shorter string), instead use the one from the
# transaction since it likely has (more) full
# information from the provider.
if len(pos.mkt.fqme) < len(fqme):
pos.mkt = mkt
# update clearing acnt!
# NOTE: likely you'll see repeats of the same
# ``Transaction`` passed in here if/when you are
# restarting a ``brokerd.ib`` where the API will
# re-report trades from the current session, so we need
# to make sure we don't "double count" these in pp
# calculations; `Position.add_clear()` stores txs in
# a `._events: dict[tid, tx]` which should always
# ensure this is true!
pos.add_clear(txn)
updated[txn.bs_mktid] = pos
# NOTE: deliver only the position entries that were
# actually updated (modified the state) from the input
# transaction set.
return updated
def dump_active(
self,
) -> tuple[
dict[str, Position],
dict[str, Position]
]:
'''
Iterate all tabulated positions, render active positions to
a ``dict`` format amenable to serialization (via TOML) and drop
from state (``.pps``) as well as return in a ``dict`` all
``Position``s which have recently closed.
'''
# NOTE: newly closed position are also important to report/return
# since a consumer, like an order mode UI ;), might want to react
# based on the closure (for example removing the breakeven line
# and clearing the entry from any lists/monitors).
closed_pp_objs: dict[str, Position] = {}
open_pp_objs: dict[str, Position] = {}
pp_objs = self.pps
for bs_mktid in list(pp_objs):
pos = pp_objs[bs_mktid]
# pos.ensure_state()
# "net-zero" is a "closed" position
if pos.cumsize == 0:
# NOTE: we DO NOT pop the pos here since it can still be
# used to check for duplicate clears that may come in as
# new transaction from some backend API and need to be
# ignored; the closed positions won't be written to the
# ``pps.toml`` since ``pp_active_entries`` above is what's
# written.
closed_pp_objs[bs_mktid] = pos
else:
open_pp_objs[bs_mktid] = pos
return open_pp_objs, closed_pp_objs
def prep_toml(
self,
active: dict[str, Position] | None = None,
) -> dict[str, Any]:
if active is None:
active, _ = self.dump_active()
# ONLY dict-serialize all active positions; those that are
# closed we don't store in the ``pps.toml``.
to_toml_dict: dict[str, Any] = {}
pos: Position
for bs_mktid, pos in active.items():
# pos.ensure_state()
# serialize to pre-toml form
# NOTE: we only store the minimal amount of clears that
# make up this position since the last net-zero state,
# see `Position.to_pretoml()` for details
fqme, asdict = pos.to_pretoml()
# clears: list[dict] = asdict['clears']
# assert 'Datetime' not in [0]['dt']
log.info(f'Updating active pp: {fqme}')
# XXX: ugh, it's cuz we push the section under
# the broker name.. maybe we need to rethink this?
brokerless_key = fqme.removeprefix(f'{self.brokername}.')
to_toml_dict[brokerless_key] = asdict
return to_toml_dict
def write_config(self) -> None:
'''
Write the current account state to the user's account TOML file, normally
something like ``pps.toml``.
'''
# TODO: show diff output?
# https://stackoverflow.com/questions/12956957/print-diff-of-python-dictionaries
# active, closed_pp_objs = acnt.dump_active()
active, closed = self.dump_active()
pp_entries = self.prep_toml(active=active)
if pp_entries:
log.info(
f'Updating positions in ``{self.conf_path}``:\n'
f'{pformat(pp_entries)}'
)
if self.brokername in self.conf:
log.warning(
f'Rewriting {self.conf_path} keys to drop <broker.acct>!'
)
# legacy key schema including <brokername.account>, so
# rewrite all entries to drop those tables since we now
# put that in the filename!
accounts = self.conf.pop(self.brokername)
assert len(accounts) == 1
entries = accounts.pop(self.acctid)
self.conf.update(entries)
self.conf.update(pp_entries)
# drop any entries that are computed as net-zero
# we don't care about storing in the pps file.
if closed:
bs_mktid: str
for bs_mktid, pos in closed.items():
fqme: str = pos.mkt.fqme
if fqme in self.conf:
self.conf.pop(fqme)
else:
# TODO: we really need a diff set of
# loglevels/colors per subsys.
log.warning(
f'Recent position for {fqme} was closed!'
)
# if there are no active position entries according
# to the toml dump output above, then clear the config
# file of all entries.
elif self.conf:
for entry in list(self.conf):
del self.conf[entry]
# XXX WTF: if we use a tomlkit.Integer here we get this
# super weird --1 thing going on for cumsize!?1!
# NOTE: the fix was to always float() the size value loaded
# in open_pps() below!
config.write(
config=self.conf,
path=self.conf_path,
fail_empty=False,
)
def load_account(
brokername: str,
acctid: str,
dirpath: Path | None = None,
) -> tuple[dict, Path]:
'''
Load an accounting (with positions) file from
$CONFIG_DIR/accounting/account.<brokername>.<acctid>.toml
Where normally $CONFIG_DIR = ~/.config/piker/
and we implicitly create an accounting subdir which should
normally be linked to a git repo managed by the user B)
'''
legacy_fn: str = f'pps.{brokername}.{acctid}.toml'
fn: str = f'account.{brokername}.{acctid}.toml'
dirpath: Path = dirpath or (config._config_dir / 'accounting')
if not dirpath.is_dir():
dirpath.mkdir()
conf, path = config.load(
path=dirpath / fn,
decode=tomlkit.parse,
touch_if_dne=True,
)
if not conf:
legacypath = dirpath / legacy_fn
log.warning(
f'Your account file is using the legacy `pps.` prefix..\n'
f'Rewriting contents to new name -> {path}\n'
'Please delete the old file!\n'
f'|-> {legacypath}\n'
)
if legacypath.is_file():
legacy_config, _ = config.load(
path=legacypath,
# TODO: move to tomlkit:
# - needs to be fixed to support bidict?
# https://github.com/sdispater/tomlkit/issues/289
# - we need to use or fork's fix to do multiline array
# indenting.
decode=tomlkit.parse,
)
conf.update(legacy_config)
# XXX: override the presumably previously non-existent
# file with legacy's contents.
config.write(
conf,
path=path,
fail_empty=False,
)
return conf, path
# TODO: make this async and offer a `get_account()` that
# can be used from sync code which does the same thing as
# open_trade_ledger()!
@cm
def open_account(
brokername: str,
acctid: str,
write_on_exit: bool = False,
# for testing or manual load from file
_fp: Path | None = None,
) -> Generator[Account, None, None]:
'''
Read out broker-specific position entries from
incremental update file: ``pps.toml``.
'''
conf: dict
conf_path: Path
conf, conf_path = load_account(
brokername,
acctid,
dirpath=_fp,
)
if brokername in conf:
log.warning(
f'Rewriting {conf_path} keys to drop <broker.acct>!'
)
# legacy key schema including <brokername.account>, so
# rewrite all entries to drop those tables since we now
# put that in the filename!
accounts = conf.pop(brokername)
for acctid in accounts.copy():
entries = accounts.pop(acctid)
conf.update(entries)
# TODO: ideally we can pass in an existing
# pps state to this right? such that we
# don't have to do a ledger reload all the
# time.. a couple ideas I can think of,
# - mirror this in some client side actor which
# does the actual ledger updates (say the paper
# engine proc if we decide to always spawn it?),
# - do diffs against updates from the ledger writer
# actor and the in-mem state here?
from ..brokers import get_brokermod
mod: ModuleType = get_brokermod(brokername)
pp_objs: dict[str, Position] = {}
acnt = Account(
mod,
acctid,
pp_objs,
conf_path,
conf=conf,
)
# unmarshal/load ``pps.toml`` config entries into object form
# and update `Account` obj entries.
for fqme, entry in conf.items():
# unique broker-backend-system market id
bs_mktid = str(
entry.get('bsuid')
or entry.get('bs_mktid')
)
price_tick = Decimal(str(
entry.get('price_tick_size')
or entry.get('price_tick')
or '0.01'
))
size_tick = Decimal(str(
entry.get('lot_tick_size')
or entry.get('size_tick')
or '0.0'
))
# load the pair using the fqme which
# will make the pair "unresolved" until
# the backend broker actually loads
# the market and position info.
mkt = MktPair.from_fqme(
fqme,
price_tick=price_tick,
size_tick=size_tick,
bs_mktid=bs_mktid,
)
# TODO: RE: general "events" instead of just "clears":
# - make this an `events` field and support more event types
# such as 'split', 'name_change', 'mkt_info', etc..
# - should we make a ``Struct`` for clear/event entries? convert
# "clear events table" from the toml config (list of a dicts)
# and load it into object form for use in position processing of
# new clear events.
# convert clears sub-tables (only in this form
# for toml re-presentation) back into a master table.
toml_clears_list: list[dict[str, Any]] = entry['clears']
trans: list[Transaction] = []
for clears_table in toml_clears_list:
tid = clears_table['tid']
dt: tomlkit.items.DateTime | str = clears_table['dt']
# woa cool, `tomlkit` will actually load datetimes into
# native form B)
if isinstance(dt, str):
dt = pendulum.parse(dt)
clears_table['dt'] = dt
trans.append(Transaction(
fqme=bs_mktid,
# sym=mkt,
bs_mktid=bs_mktid,
tid=tid,
# XXX: not sure why sometimes these are loaded as
# `tomlkit.Integer` and are eventually written with
# an extra `-` in front like `--1`?
size=float(clears_table['size']),
price=float(clears_table['price']),
cost=clears_table['cost'],
dt=dt,
))
split_ratio = entry.get('split_ratio')
# if a string-ified expiry field is loaded we try to parse
# it, THO, they should normally be serialized as native
# TOML datetimes, since that's supported.
if (
(expiry := entry.get('expiry'))
and isinstance(expiry, str)
):
expiry: pendulum.DateTime = pendulum.parse(expiry)
pp = pp_objs[bs_mktid] = Position(
mkt,
split_ratio=split_ratio,
bs_mktid=bs_mktid,
)
# XXX: super critical, we need to be sure to include
# all pps.toml clears to avoid reusing clears that were
# already included in the current incremental update
# state, since today's records may have already been
# processed!
for t in trans:
pp.add_clear(t)
try:
yield acnt
finally:
if write_on_exit:
acnt.write_config()
# TODO: drop the old name and THIS!
@cm
def open_pps(
*args,
**kwargs,
) -> Generator[Account, None, None]:
log.warning(
'`open_pps()` is now deprecated!\n'
'Please use `with open_account() as cnt:`'
)
with open_account(*args, **kwargs) as acnt:
yield acnt
def load_account_from_ledger(
brokername: str,
acctname: str,
# post normalization filter on ledger entries to be processed
filter_by_ids: dict[str, list[str]] | None = None,
ledger: TransactionLedger | None = None,
**kwargs,
) -> Account:
'''
Open a ledger file by broker name and account and read in and
process any trade records into our normalized ``Transaction`` form
and then update the equivalent ``Account``, delivering it with its
bs_mktid-mapped table of positions.
'''
acnt: Account
with open_account(
brokername,
acctname,
**kwargs,
) as acnt:
if ledger is not None:
acnt.update_from_ledger(ledger)
return acnt
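# An illustrative usage sketch (hypothetical broker/account names):
#   with open_account('ib', 'algopaper') as acnt:
#       for bs_mktid, pos in acnt.pps.items():
#           print(pos.mkt.fqme, pos.cumsize, pos.ppu)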


@ -0,0 +1,698 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Calculation routines for balance and position tracking such that
you know when you're losing money (if possible) XD
'''
from __future__ import annotations
from collections.abc import ValuesView
from contextlib import contextmanager as cm
from math import copysign
from typing import (
Any,
Callable,
Iterator,
TYPE_CHECKING,
)
import polars as pl
from pendulum import (
DateTime,
from_timestamp,
parse,
)
if TYPE_CHECKING:
from ._ledger import (
Transaction,
TransactionLedger,
)
def ppu(
clears: Iterator[Transaction],
# include transaction cost in breakeven price
# and presume the worst case of the same cost
# to exit this transaction (even though in reality
# it will be dynamic based on exit strategy).
cost_scalar: float = 2,
# return the ledger of clears as a (now dt sorted) dict with
# new position fields inserted alongside each entry.
as_ledger: bool = False,
) -> float | list[(str, dict)]:
'''
Compute the "price-per-unit" price for the given non-zero sized
rolling position.
The recurrence relation which computes this (exponential) mean
per new clear which **increases** the accumulative position size
is:
ppu[-1] = (
ppu[-2] * accum_size[-2]
+
price[-1] * size[-1]
) / accum_size[-1]
where `cost_basis` for the current step is simply the price
* size of the most recent clearing transaction.
-----
TODO: get the BEP computed and working similarly!
-----
the equivalent "break even price" or bep at each new clear
event step conversely only changes when a "position exiting
clear" which **decreases** the cumulative dst asset size:
bep[-1] = ppu[-1] - (cum_pnl[-1] / cumsize[-1])
'''
asize_h: list[float] = [] # historical accumulative size
ppu_h: list[float] = [] # historical price-per-unit
# ledger: dict[str, dict] = {}
ledger: list[dict] = []
t: Transaction
for t in clears:
clear_size: float = t.size
clear_price: str | float = t.price
is_clear: bool = not isinstance(clear_price, str)
last_accum_size = asize_h[-1] if asize_h else 0
accum_size: float = last_accum_size + clear_size
accum_sign = copysign(1, accum_size)
sign_change: bool = False
# on transfers we normally write some non-valid
# price since withdrawal to another account/wallet
# has nothing to do with inter-asset-market prices.
# TODO: this should be better handled via a `type: 'tx'`
# field as per existing issue surrounding all this:
# https://github.com/pikers/piker/issues/510
if isinstance(clear_price, str):
# TODO: we can't necessarily have this commit to
# the overall pos size since we also need to
# include other positions contributions to this
# balance or we might end up with a -ve balance for
# the position..
continue
# test if the pp somehow went "past" a net zero size state
# resulting in a change of the "sign" of the size (+ve for
# long, -ve for short).
sign_change = (
copysign(1, last_accum_size) + accum_sign == 0
and last_accum_size != 0
)
# since we passed the net-zero-size state the new size
# after sum should be the remaining size the new
# "direction" (aka, long vs. short) for this clear.
if sign_change:
clear_size: float = accum_size
abs_diff: float = abs(accum_size)
asize_h.append(0)
ppu_h.append(0)
else:
# old size minus the new size gives us size diff with
# +ve -> increase in pp size
# -ve -> decrease in pp size
abs_diff = abs(accum_size) - abs(last_accum_size)
# XXX: LIFO breakeven price update. only an increase in size
# of the position contributes to the breakeven price,
# a decrease does not (i.e. the position is being made
# smaller).
# abs_clear_size = abs(clear_size)
abs_new_size: float | int = abs(accum_size)
if (
abs_diff > 0
and is_clear
):
cost_basis = (
# cost basis for this clear
clear_price * abs(clear_size)
+
# transaction cost
accum_sign * cost_scalar * t.cost
)
if asize_h:
size_last: float = abs(asize_h[-1])
cb_last: float = ppu_h[-1] * size_last
ppu: float = (cost_basis + cb_last) / abs_new_size
else:
ppu: float = cost_basis / abs_new_size
else:
# TODO: for PPU we should probably handle txs out
# (aka withdrawals) similarly by simply not having
# them contrib to the running PPU calc and only
# when the next entry clear comes in (which will
# then have a higher weighting on the PPU).
# on "exit" clears from a given direction,
# only the size changes not the price-per-unit
# need to be updated since the ppu remains constant
# and gets weighted by the new size.
ppu: float = ppu_h[-1] if ppu_h else 0 # set to previous value
# extend with new rolling metric for this step
ppu_h.append(ppu)
asize_h.append(accum_size)
# ledger[t.tid] = {
# 'txn': t,
# ledger[t.tid] = t.to_dict() | {
ledger.append((
t.tid,
t.to_dict() | {
'ppu': ppu,
'cumsize': accum_size,
'sign_change': sign_change,
# TODO: cum_pnl, bep
}
))
final_ppu = ppu_h[-1] if ppu_h else 0
# TODO: once we have etypes in all ledger entries..
# handle any split info entered (for now) manually by user
# if self.split_ratio is not None:
# final_ppu /= self.split_ratio
if as_ledger:
return ledger
else:
return final_ppu
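# A worked (made up) numeric example of the recurrence above with
# `cost_scalar=2` and a 1.0 txn cost per clear:
#   buy 2 @ 100 -> cost_basis = 100*2 + 2*1 = 202, ppu = 202/2 = 101.0
#   buy 2 @ 110 -> cost_basis = 110*2 + 2*1 = 222, ppu = (222 + 101*2)/4 = 106.0
#   sell 1 @ any price -> ppu stays 106.0 since exits don't change the mean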
def iter_by_dt(
records: (
dict[str, dict[str, Any]]
| ValuesView[dict] # eg. `Position._events.values()`
| list[dict]
| list[Transaction] # XXX preferred!
),
# NOTE: parsers are looked up in the insert order
# so if you know that the record stats show some field
# is more common then others, stick it at the top B)
parsers: dict[str, Callable | None] = {
'dt': parse, # parity case
'datetime': parse, # datetime-str
'time': from_timestamp, # float epoch
},
key: Callable | None = None,
) -> Iterator[tuple[str, dict]]:
'''
Iterate entries of a transaction table sorted by entry recorded
datetime presumably set at the ``'dt'`` field in each entry.
'''
if isinstance(records, dict):
records: list[tuple[str, dict]] = list(records.items())
def dyn_parse_to_dt(
tx: tuple[str, dict[str, Any]] | Transaction,
) -> DateTime:
# handle `.items()` inputs
if isinstance(tx, tuple):
tx = tx[1]
# dict or tx object?
isdict: bool = isinstance(tx, dict)
# get best parser for this record..
for k in parsers:
if (
isdict and k in tx
or getattr(tx, k, None)
):
v = tx[k] if isdict else tx.dt
assert v is not None, f'No valid value for `{k}`!?'
# only call parser on the value if not None from
# the `parsers` table above (when NOT using
# `.get()`), otherwise pass through the value and
# sort on it directly
if (
not isinstance(v, DateTime)
and (parser := parsers.get(k))
):
return parser(v)
else:
return v
else:
# XXX: should never get here..
breakpoint()
entry: tuple[str, dict] | Transaction
for entry in sorted(
records,
key=key or dyn_parse_to_dt,
):
# NOTE the type sig above; either pairs or txns B)
yield entry
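# eg. (illustrative): sorting a raw tid-keyed ledger dict,
#   for tid, txdict in iter_by_dt(ledger_dict):
#       ...
# where each `txdict` is expected to carry one of the datetime-ish
# fields named in the `parsers` table above.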
# TODO: probably just move this into the test suite or
# keep it here for use from as such?
# def ensure_state(self) -> None:
# '''
# Audit either the `.cumsize` and `.ppu` local instance vars against
# the clears table calculations and return the calc-ed values if
# they differ and log warnings to console.
# '''
# # clears: list[dict] = self._clears
# # self.first_clear_dt = min(clears, key=lambda e: e['dt'])['dt']
# last_clear: dict = clears[-1]
# csize: float = self.calc_size()
# accum: float = last_clear['accum_size']
# if not self.expired():
# if (
# csize != accum
# and csize != round(accum * (self.split_ratio or 1))
# ):
# raise ValueError(f'Size mismatch: {csize}')
# else:
# assert csize == 0, 'Contract is expired but non-zero size?'
# if self.cumsize != csize:
# log.warning(
# 'Position state mismatch:\n'
# f'{self.cumsize} => {csize}'
# )
# self.cumsize = csize
# cppu: float = self.calc_ppu()
# ppu: float = last_clear['ppu']
# if (
# cppu != ppu
# and self.split_ratio is not None
# # handle any split info entered (for now) manually by user
# and cppu != (ppu / self.split_ratio)
# ):
# raise ValueError(f'PPU mismatch: {cppu}')
# if self.ppu != cppu:
# log.warning(
# 'Position state mismatch:\n'
# f'{self.ppu} => {cppu}'
# )
# self.ppu = cppu
@cm
def open_ledger_dfs(
brokername: str,
acctname: str,
ledger: TransactionLedger | None = None,
**kwargs,
) -> tuple[
dict[str, pl.DataFrame],
TransactionLedger,
]:
'''
Open a ledger of trade records (presumably from some broker
backend), normalize the records into `Transactions` via the
backend's declared endpoint, cast to a `polars.DataFrame` which
can update the ledger on exit.
'''
from piker.toolz import open_crash_handler
with open_crash_handler():
if not ledger:
import time
from ._ledger import open_trade_ledger
now = time.time()
with open_trade_ledger(
brokername,
acctname,
rewrite=True,
allow_from_sync_code=True,
# proxied through from caller
**kwargs,
) as ledger:
if not ledger:
raise ValueError(f'No ledger for {acctname}@{brokername} exists?')
print(f'LEDGER LOAD TIME: {time.time() - now}')
yield ledger_to_dfs(ledger), ledger
def ledger_to_dfs(
ledger: TransactionLedger,
) -> dict[str, pl.DataFrame]:
txns: dict[str, Transaction] = ledger.to_txns()
# ldf = pl.DataFrame(
# list(txn.to_dict() for txn in txns.values()),
ldf = pl.from_dicts(
list(txn.to_dict() for txn in txns.values()),
# only for ordering the cols
schema=[
('fqme', str),
('tid', str),
('bs_mktid', str),
('expiry', str),
('etype', str),
('dt', str),
('size', pl.Float64),
('price', pl.Float64),
('cost', pl.Float64),
],
).sort( # chronological order
'dt'
).with_columns([
pl.col('dt').str.to_datetime(),
# pl.col('expiry').str.to_datetime(),
# pl.col('expiry').dt.date(),
])
# filter out to the columns matching values filter passed
# as input.
# if filter_by_ids:
# for col, vals in filter_by_ids.items():
# str_vals = set(map(str, vals))
# pred: pl.Expr = pl.col(col).eq(str_vals.pop())
# for val in str_vals:
# pred |= pl.col(col).eq(val)
# fdf = df.filter(pred)
# TODO: originally i had tried just using a plain ol' groupby
# + agg here but the issue was re-inserting to the src frame.
# however, learning more about `polars` seems like maybe we can
# use `.over()`?
# https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.Expr.over.html#polars.Expr.over
# => CURRENTLY we break up into a frame per mkt / fqme
dfs: dict[str, pl.DataFrame] = ldf.partition_by(
'bs_mktid',
as_dict=True,
)
# TODO: not sure if this is even possible but..
# - it'd be more ideal to use `ppt = df.groupby('fqme').agg([`
# - ppu and bep calcs!
for key in dfs:
# convert to lazy form (since apparently we might need it
# eventually ...)
df: pl.DataFrame = dfs[key]
ldf: pl.LazyFrame = df.lazy()
df = dfs[key] = ldf.with_columns([
pl.cumsum('size').alias('cumsize'),
# amount of source asset "sent" (via buy txns in
# the market) to acquire the dst asset, PER txn.
# when this value is -ve (i.e. a sell operation) then
# the amount sent is actually "returned".
(
(pl.col('price') * pl.col('size'))
+
(pl.col('cost')) # * pl.col('size').sign())
).alias('dst_bot'),
]).with_columns([
# rolling balance in src asset units
(pl.col('dst_bot').cumsum() * -1).alias('src_balance'),
# "position operation type" in terms of increasing the
# amount in the dst asset (entering) or decreasing the
# amount in the dst asset (exiting).
pl.when(
pl.col('size').sign() == pl.col('cumsize').sign()
).then(
pl.lit('enter') # see above, but is just price * size per txn
).otherwise(
pl.when(pl.col('cumsize') == 0)
.then(pl.lit('exit_to_zero'))
.otherwise(pl.lit('exit'))
).alias('descr'),
(pl.col('cumsize').sign() == pl.col('size').sign())
.alias('is_enter'),
]).with_columns([
# pl.lit(0, dtype=pl.Utf8).alias('virt_cost'),
pl.lit(0, dtype=pl.Float64).alias('applied_cost'),
pl.lit(0, dtype=pl.Float64).alias('pos_ppu'),
pl.lit(0, dtype=pl.Float64).alias('per_txn_pnl'),
pl.lit(0, dtype=pl.Float64).alias('cum_pos_pnl'),
pl.lit(0, dtype=pl.Float64).alias('pos_bep'),
pl.lit(0, dtype=pl.Float64).alias('cum_ledger_pnl'),
pl.lit(None, dtype=pl.Float64).alias('ledger_bep'),
# TODO: instead of the iterative loop below i guess we
# could try using embedded lists to track which txns
# are part of which ppu / bep calcs? Not sure this will
# look any better nor be any more performant though xD
# pl.lit([[0]], dtype=pl.List(pl.Float64)).alias('list'),
# choose fields to emit for accounting purposes
]).select([
pl.exclude([
'tid',
# 'dt',
'expiry',
'bs_mktid',
'etype',
# 'is_enter',
]),
]).collect()
# compute recurrence relations for ppu and bep
last_ppu: float = 0
last_cumsize: float = 0
last_ledger_pnl: float = 0
last_pos_pnl: float = 0
virt_costs: list[float, float] = [0., 0.]
# imperatively compute the PPU (price per unit) and BEP
# (break even price) iteratively over the ledger, oriented
# around each position state: a state of split balances in
# > 1 asset.
for i, row in enumerate(df.iter_rows(named=True)):
cumsize: float = row['cumsize']
is_enter: bool = row['is_enter']
price: float = row['price']
size: float = row['size']
# the profit is ALWAYS decreased, aka made a "loss"
# by the constant fee charged by the txn provider!
# see below in final PnL calculation and row element
# set.
txn_cost: float = row['cost']
pnl: float = 0
# ALWAYS reset per-position cum PnL
if last_cumsize == 0:
last_pos_pnl: float = 0
# a "position size INCREASING" or ENTER transaction
# which "makes larger", in src asset unit terms, the
# trade's side-size of the destination asset:
# - "buying" (more) units of the dst asset
# - "selling" (more short) units of the dst asset
if is_enter:
# Naively include transaction cost in breakeven
# price and presume the worst case of the
# exact-same-cost-to-exit this transaction's worth
# of size even though in reality it will be dynamic
# based on exit strategy, price, liquidity, etc..
virt_cost: float = txn_cost
# cpu: float = cost / size
# cummean of the cost-per-unit used for modelling
# a projected future exit cost which we immediately
# include in the costs incorporated to BEP on enters
last_cum_costs_size, last_cpu = virt_costs
cum_costs_size: float = last_cum_costs_size + abs(size)
cumcpu = (
(last_cpu * last_cum_costs_size)
+
txn_cost
) / cum_costs_size
virt_costs = [cum_costs_size, cumcpu]
txn_cost = txn_cost + virt_cost
# df[i, 'virt_cost'] = f'{-virt_cost} FROM {cumcpu}@{cum_costs_size}'
# a cumulative mean of the price-per-unit acquired
# in the destination asset:
# https://en.wikipedia.org/wiki/Moving_average#Cumulative_average
# You could also think of this measure more
# generally as an exponential mean with `alpha
# = 1/N` where `N` is the current number of txns
# included in the "position" defining set:
# https://en.wikipedia.org/wiki/Exponential_smoothing
ppu: float = (
(
(last_ppu * last_cumsize)
+
(price * size)
) /
cumsize
)
# a "position size DECREASING" or EXIT transaction
# which "makes smaller" the trade's side-size of the
# destination asset:
# - selling previously bought units of the dst asset
# (aka 'closing' a long position).
# - buying previously borrowed and sold (short) units
# of the dst asset (aka 'covering'/'closing' a short
# position).
else:
# only changes on position size increasing txns
ppu: float = last_ppu
# UNWIND IMPLIED COSTS FROM ENTRIES
# => Reverse the virtual/modelled (2x predicted) txn
# cost that was included in the least-recently
# entered txn that is still part of the current CSi
# set.
# => we look up the cost-per-unit cumsum and apply it
# over the current txn size (by multiplication)
# and then reverse that previously applied cost on
# the txn_cost for this record.
#
# NOTE: the current "model" just presumes 2x
# the txn cost for a matching enter-txn's
# cost-per-unit; we then immediately reverse this
# prediction and apply the real cost received here.
last_cum_costs_size, last_cpu = virt_costs
prev_virt_cost: float = last_cpu * abs(size)
txn_cost: float = txn_cost - prev_virt_cost # +ve thus a "reversal"
cum_costs_size: float = last_cum_costs_size - abs(size)
virt_costs = [cum_costs_size, last_cpu]
# df[i, 'virt_cost'] = (
# f'{-prev_virt_cost} FROM {last_cpu}@{cum_costs_size}'
# )
# the per-txn profit or loss (PnL) given we are
# (partially) "closing"/"exiting" the position via
# this txn.
pnl: float = (last_ppu - price) * size
# always subtract txn cost from total txn pnl
txn_pnl: float = pnl - txn_cost
# cumulative PnLs per txn
last_ledger_pnl = (
last_ledger_pnl + txn_pnl
)
last_pos_pnl = df[i, 'cum_pos_pnl'] = (
last_pos_pnl + txn_pnl
)
if cumsize == 0:
last_ppu = ppu = 0
# compute the BEP: "break even price", a value that
# determines at what price the remaining cumsize can be
# liquidated such that the net-PnL on the current
# position will result in ZERO gain or loss from open
# to close including all txn costs B)
if (
abs(cumsize) > 0 # non-exit-to-zero position txn
):
cumsize_sign: float = copysign(1, cumsize)
ledger_bep: float = (
(
(ppu * cumsize)
-
(last_ledger_pnl * cumsize_sign)
) / cumsize
)
# NOTE: when we "enter more" dst asset units (aka
# increase position state) AFTER having exited some
# units (aka decreasing the pos size some) the bep
# needs to be RECOMPUTED based on new ppu such that
# liquidation of the cumsize at the bep price
# results in a zero-pnl for the existing position
# (since the last one).
# for the position-lifetime BEP we can never have
# a valid value once the position is "closed"
# / fully exited Bo
pos_bep: float = (
(
(ppu * cumsize)
-
(last_pos_pnl * cumsize_sign)
) / cumsize
)
# inject DF row with all values
df[i, 'pos_ppu'] = ppu
df[i, 'per_txn_pnl'] = txn_pnl
df[i, 'applied_cost'] = -txn_cost
df[i, 'cum_pos_pnl'] = last_pos_pnl
df[i, 'pos_bep'] = pos_bep
df[i, 'cum_ledger_pnl'] = last_ledger_pnl
df[i, 'ledger_bep'] = ledger_bep
# keep back-refs to satisfy the recurrence relation
last_ppu: float = ppu
last_cumsize: float = cumsize
# TODO?: pass back the current `Position` object loaded from
# the account as well? Would provide incentive to do all
# this ledger loading inside a new async open_account().
# bs_mktid: str = df[0]['bs_mktid']
# pos: Position = acnt.pps[bs_mktid]
return dfs
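To make the recurrence relations above concrete, here is a minimal standalone sketch (not part of the diff; toy numbers only, virtual txn-cost modelling omitted) of the same cumulative-mean PPU and BEP updates:

from math import copysign

txns: list[tuple[float, float]] = [
    (100.0, 10),   # buy 10 @ 100 -> ppu = 100
    (110.0, 10),   # buy 10 @ 110 -> ppu = (100*10 + 110*10) / 20 = 105
    (120.0, -5),   # sell 5 @ 120 -> realized pnl = (105 - 120) * -5 = 75
]
ppu: float = 0
cumsize: float = 0
ledger_pnl: float = 0

for price, size in txns:
    new_cumsize = cumsize + size
    if cumsize == 0 or copysign(1, cumsize) == copysign(1, size):
        # "enter" txn: ppu is the cumulative mean of acquisition prices
        ppu = ((ppu * cumsize) + (price * size)) / new_cumsize
    else:
        # "exit" txn: realize pnl against the running ppu
        ledger_pnl += (ppu - price) * size

    cumsize = new_cumsize
    if cumsize:
        # price at which liquidating the remainder nets zero total pnl
        bep = ((ppu * cumsize) - (ledger_pnl * copysign(1, cumsize))) / cumsize
        print(f'{price=} {size=} -> {ppu=} {cumsize=} {ledger_pnl=} {bep=}')

assert (ppu, cumsize, ledger_pnl) == (105.0, 15, 75.0)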

View File

@ -0,0 +1,311 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
CLI front end for trades ledger and position tracking management.
'''
from __future__ import annotations
from pprint import pformat
from rich.console import Console
from rich.markdown import Markdown
import polars as pl
import tractor
import trio
import typer
from ..log import get_logger
from ..service import (
open_piker_runtime,
)
from ..clearing._messages import BrokerdPosition
from ..calc import humanize
from ..brokers._daemon import broker_init
from ._ledger import (
load_ledger,
TransactionLedger,
# open_trade_ledger,
)
from .calc import (
open_ledger_dfs,
)
ledger = typer.Typer()
def unpack_fqan(
fully_qualified_account_name: str,
console: Console | None = None,
) -> tuple | bool:
try:
brokername, account = fully_qualified_account_name.split('.')
return brokername, account
except ValueError:
if console is not None:
md = Markdown(
f'=> `{fully_qualified_account_name}` <=\n\n'
'is not a valid '
'__fully qualified account name?__\n\n'
'Your account name needs to be of the form '
'`<brokername>.<account_name>`\n'
)
console.print(md)
return False
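A couple of illustrative calls against the helper above (account names are hypothetical):

assert unpack_fqan('binance.paper') == ('binance', 'paper')
assert unpack_fqan('ib.algopaper') == ('ib', 'algopaper')
# a malformed name returns `False` (and prints the markdown help
# when a `rich.Console` is passed):
assert unpack_fqan('not-an-fqan') is False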
@ledger.command()
def sync(
fully_qualified_account_name: str,
pdb: bool = False,
loglevel: str = typer.Option(
'error',
"-l",
),
):
log = get_logger(loglevel)
console = Console()
pair: tuple[str, str]
if not (pair := unpack_fqan(
fully_qualified_account_name,
console,
)):
return
brokername, account = pair
brokermod, start_kwargs, deamon_ep = broker_init(
brokername,
loglevel=loglevel,
)
brokername: str = brokermod.name
async def main():
async with (
open_piker_runtime(
name='ledger_cli',
loglevel=loglevel,
debug_mode=pdb,
) as (actor, sockaddr),
tractor.open_nursery() as an,
):
try:
log.info(
f'Piker runtime up as {actor.uid}@{sockaddr}'
)
portal = await an.start_actor(
loglevel=loglevel,
debug_mode=pdb,
**start_kwargs,
)
from ..clearing import (
open_brokerd_dialog,
)
brokerd_stream: tractor.MsgStream
async with (
# engage the brokerd daemon context
portal.open_context(
deamon_ep,
brokername=brokername,
loglevel=loglevel,
),
# manually open the brokerd trade dialog EP
# (what the EMS normally does internally) B)
open_brokerd_dialog(
brokermod,
portal,
exec_mode=(
'paper'
if account == 'paper'
else 'live'
),
loglevel=loglevel,
) as (
brokerd_stream,
pp_msg_table,
accounts,
),
):
try:
assert len(accounts) == 1
if not pp_msg_table:
ld, fpath = load_ledger(brokername, account)
assert not ld, f'WTF did we fail to parse ledger:\n{ld}'
console.print(
'[yellow]'
'No pps found for '
f'`{brokername}.{account}` '
'account!\n\n'
'[/][underline]'
'None of the following ledger files exist:\n\n[/]'
f'{fpath.as_uri()}\n'
)
return
pps_by_symbol: dict[str, BrokerdPosition] = pp_msg_table[
brokername,
account,
]
summary: str = (
'[dim underline]Piker Position Summary[/] '
f'[dim blue underline]{brokername}[/]'
'[dim].[/]'
f'[blue underline]{account}[/]'
f'[dim underline] -> total pps: [/]'
f'[green]{len(pps_by_symbol)}[/]\n'
)
# for ppdict in positions:
for fqme, ppmsg in pps_by_symbol.items():
# ppmsg = BrokerdPosition(**ppdict)
size = ppmsg.size
if size:
ppu: float = round(
ppmsg.avg_price,
ndigits=2,
)
cost_basis: str = humanize(size * ppu)
h_size: str = humanize(size)
if size < 0:
pcolor = 'red'
else:
pcolor = 'green'
# semantic-highlight of fqme
fqme = ppmsg.symbol
tokens = fqme.split('.')
styled_fqme = f'[blue underline]{tokens[0]}[/]'
for tok in tokens[1:]:
styled_fqme += '[dim].[/]'
styled_fqme += f'[dim blue underline]{tok}[/]'
# TODO: instead display in a ``rich.Table``?
summary += (
styled_fqme +
'[dim]: [/]'
f'[{pcolor}]{h_size}[/]'
'[dim blue]u @[/]'
f'[{pcolor}]{ppu}[/]'
'[dim blue] = [/]'
f'[{pcolor}]$ {cost_basis}\n[/]'
)
console.print(summary)
finally:
# exit via ctx cancellation.
brokerd_ctx: tractor.Context = brokerd_stream._ctx
await brokerd_ctx.cancel(timeout=1)
# TODO: once ported to newer tractor branch we should
# be able to do a loop like this:
# while brokerd_ctx.cancel_called_remote is None:
# await trio.sleep(0.01)
# await brokerd_ctx.cancel()
finally:
await portal.cancel_actor()
trio.run(main)
@ledger.command()
def disect(
# "fully_qualified_account_name"
fqan: str,
fqme: str, # for ib
# TODO: in tractor we should really have
# a debug_mode ctx for wrapping any kind of code no?
pdb: bool = False,
bs_mktid: str = typer.Option(
None,
"-bid",
),
loglevel: str = typer.Option(
'error',
"-l",
),
):
from piker.log import get_console_log
from piker.toolz import open_crash_handler
get_console_log(loglevel)
pair: tuple[str, str]
if not (pair := unpack_fqan(fqan)):
raise ValueError(f'{fqan} malformed!?')
brokername, account = pair
# ledger dfs groupby-partitioned by fqme
dfs: dict[str, pl.DataFrame]
# actual ledger instance
ldgr: TransactionLedger
pl.Config.set_tbl_cols(-1)
pl.Config.set_tbl_rows(-1)
with (
open_crash_handler(),
open_ledger_dfs(
brokername,
account,
) as (dfs, ldgr),
):
# look up specific frame for fqme-selected asset
if (df := dfs.get(fqme)) is None:
mktids2fqmes: dict[str, list[str]] = {}
for bs_mktid in dfs:
df: pl.DataFrame = dfs[bs_mktid]
fqmes: pl.Series[str] = df['fqme']
uniques: list[str] = fqmes.unique()
mktids2fqmes[bs_mktid] = set(uniques)
if fqme in uniques:
break
print(
f'No specific ledger for fqme={fqme} could be found in\n'
f'{pformat(mktids2fqmes)}?\n'
f'Maybe the `{brokername}` backend uses something '
'else for its `bs_mktid` than the `fqme`?\n'
'Scanning for matches in unique fqmes per frame..\n'
)
# :pray:
assert not df.is_empty()
# muck around in pdbp REPL
breakpoint()
# TODO: we REALLY need a better console REPL for this
# kinda thing..
# - `xonsh` is an obvious option (and it looks amazing) but
# we need to figure out how to embed it better than just:
# from xonsh.main import main
# main(argv=[])
# which will not actually inject the `df` to globals?
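Assuming this `ledger` typer app ends up mounted under the root `piker` CLI (an assumption here; the wiring isn't shown in this diff), the two commands can be driven roughly like so via `typer`'s test runner; the account and fqme values are hypothetical:

from typer.testing import CliRunner

runner = CliRunner()

# equivalent of `piker ledger sync binance.paper -l info` on the shell
res = runner.invoke(ledger, ['sync', 'binance.paper', '-l', 'info'])
print(res.output)

# `disect` takes an extra fqme arg and drops into a debugger REPL, e.g.
# `piker ledger disect binance.paper btcusdt.usdtm.perp.binance`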

View File

@ -17,33 +17,95 @@
""" """
Broker clients, daemons and general back end machinery. Broker clients, daemons and general back end machinery.
""" """
from contextlib import (
asynccontextmanager as acm,
)
from importlib import import_module from importlib import import_module
from types import ModuleType from types import ModuleType
# TODO: move to urllib3/requests once supported from tractor.trionics import maybe_open_context
import asks
asks.init('trio')
__brokers__ = [ from ._util import (
log,
BrokerError,
SymbolNotFound,
NoData,
DataUnavailable,
DataThrottle,
resproc,
get_logger,
)
__all__: list[str] = [
'BrokerError',
'SymbolNotFound',
'NoData',
'DataUnavailable',
'DataThrottle',
'resproc',
'get_logger',
]
__brokers__: list[str] = [
'binance', 'binance',
'questrade',
'robinhood',
'ib', 'ib',
'kraken', 'kraken',
'kucoin',
# broken but used to work
# 'questrade',
# 'robinhood',
# TODO: we should get on these stat!
# alpaca
# wstrade
# iex
# deribit
# bitso
] ]
def get_brokermod(brokername: str) -> ModuleType: def get_brokermod(brokername: str) -> ModuleType:
"""Return the imported broker module by name. '''
""" Return the imported broker module by name.
module = import_module('.' + brokername, 'piker.brokers')
'''
module: ModuleType = import_module('.' + brokername, 'piker.brokers')
# we only allow monkeying because it's for internal keying # we only allow monkeying because it's for internal keying
module.name = module.__name__.split('.')[-1] module.name = module.__name__.split('.')[-1]
return module return module
def iter_brokermods(): def iter_brokermods():
"""Iterate all built-in broker modules. '''
""" Iterate all built-in broker modules.
'''
for name in __brokers__: for name in __brokers__:
yield get_brokermod(name) yield get_brokermod(name)
@acm
async def open_cached_client(
brokername: str,
**kwargs,
) -> 'Client': # noqa
'''
Get a cached broker client from the current actor's local vars.
If one has not been setup do it and cache it.
'''
brokermod = get_brokermod(brokername)
async with maybe_open_context(
acm_func=brokermod.get_client,
kwargs=kwargs,
) as (cache_hit, client):
if cache_hit:
log.runtime(f'Reusing existing {client}')
yield client
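For context (not part of this diff), a small sketch of how the new caching helper behaves when two tasks in the same actor request the same backend client:

import trio
from piker.brokers import open_cached_client

async def use_client(seen: list) -> None:
    async with open_cached_client('binance') as client:
        seen.append(client)
        await trio.sleep(0.1)  # overlap with the other task's usage

async def main() -> None:
    seen: list = []
    async with trio.open_nursery() as tn:
        tn.start_soon(use_client, seen)
        tn.start_soon(use_client, seen)
    # both tasks got the exact same cached client instance
    assert seen[0] is seen[1]

# trio.run(main)  # sketch only: needs a configured `binance` backend (and
# piker's runtime) to actually connect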

View File

@ -0,0 +1,276 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Broker-daemon-actor "endpoint-hooks": the service task entry points for
``brokerd``.
'''
from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
)
from types import ModuleType
from typing import (
TYPE_CHECKING,
AsyncContextManager,
)
import exceptiongroup as eg
import tractor
import trio
from . import _util
from . import get_brokermod
if TYPE_CHECKING:
from ..data import _FeedsBus
# `brokerd` enabled modules
# TODO: move this def to the `.data` subpkg..
# NOTE: keeping this list as small as possible is part of our caps-sec
# model and should be treated with utmost care!
_data_mods: list[str] = [
'piker.brokers.core',
'piker.brokers.data',
'piker.brokers._daemon',
'piker.data',
'piker.data.feed',
'piker.data._sampling'
]
# TODO: we should rename the daemon to datad prolly once we split up
# broker vs. data tasks into separate actors?
@tractor.context
async def _setup_persistent_brokerd(
ctx: tractor.Context,
brokername: str,
loglevel: str | None = None,
) -> None:
'''
Allocate an actor-wide service nursery in ``brokerd``
such that feeds can be run in the background persistently by
the broker backend as needed.
'''
# NOTE: we only need to setup logging once (and only) here
# since all hosted daemon tasks will reference this same
# log instance's (actor local) state and thus don't require
# any further (level) configuration on their own B)
log = _util.get_console_log(
loglevel or tractor.current_actor().loglevel,
name=f'{_util.subsys}.{brokername}',
)
# set global for this actor to this new process-wide instance B)
_util.log = log
# further, set the log level on any broker-specific
# logger instance.
from piker.data import feed
assert not feed._bus
# allocate a nursery to the bus for spawning background
# tasks to service client IPC requests, normally
# `tractor.Context` connections to explicitly required
# `brokerd` endpoints such as:
# - `stream_quotes()`,
# - `manage_history()`,
# - `allocate_persistent_feed()`,
# - `open_symbol_search()`
# NOTE: see ep invocation details inside `.data.feed`.
try:
async with trio.open_nursery() as service_nursery:
bus: _FeedsBus = feed.get_feed_bus(
brokername,
service_nursery,
)
assert bus is feed._bus
# unblock caller
await ctx.started()
# we pin this task to keep the feeds manager active until the
# parent actor decides to tear it down
await trio.sleep_forever()
except eg.ExceptionGroup:
# TODO: likely some underlying `brokerd` IPC connection
# broke so here we handle a respawn and re-connect attempt!
# This likely should pair with development of the OCO task
# nursery in dev over @ `tractor` B)
# https://github.com/goodboy/tractor/pull/363
raise
def broker_init(
brokername: str,
loglevel: str | None = None,
**start_actor_kwargs,
) -> tuple[
ModuleType,
dict,
AsyncContextManager,
]:
'''
Given an input broker name, load all named arguments
which can be passed for daemon endpoint + context spawn
as required in every `brokerd` (actor) service.
This includes:
- load the appropriate <brokername>.py pkg module,
- reads any declared `__enable_modules__: list[str]` which will be
passed to `tractor.ActorNursery.start_actor(enabled_modules=<this>)`
at actor start time,
- deliver a reference to the daemon lifetime fixture, which
for now is always the `_setup_persistent_brokerd()` context defined
above.
'''
from ..brokers import get_brokermod
brokermod = get_brokermod(brokername)
modpath: str = brokermod.__name__
start_actor_kwargs['name'] = f'brokerd.{brokername}'
start_actor_kwargs.update(
getattr(
brokermod,
'_spawn_kwargs',
{},
)
)
# XXX TODO: make this not so hacky/monkeypatched..
# -> we need a sane way to configure the logging level for all
# code running in brokerd.
# if utilmod := getattr(brokermod, '_util', False):
# utilmod.log.setLevel(loglevel.upper())
# lookup actor-enabled modules declared by the backend offering the
# `brokerd` endpoint(s).
enabled: list[str]
enabled = start_actor_kwargs['enable_modules'] = [
__name__, # so that eps from THIS mod can be invoked
modpath,
]
for submodname in getattr(
brokermod,
'__enable_modules__',
[],
):
subpath: str = f'{modpath}.{submodname}'
enabled.append(subpath)
return (
brokermod,
start_actor_kwargs, # to `ActorNursery.start_actor()`
# XXX see impl above; contains all (actor global)
# setup/teardown expected in all `brokerd` actor instances.
_setup_persistent_brokerd,
)
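A quick illustration (hypothetical backend name, run from within this module's namespace) of what the helper returns and how its pieces are consumed downstream:

brokermod, start_kwargs, daemon_ep = broker_init('binance', loglevel='info')
assert start_kwargs['name'] == 'brokerd.binance'
assert daemon_ep is _setup_persistent_brokerd
# `start_kwargs['enable_modules']` contains this module, the backend pkg
# module path, plus any backend-declared `__enable_modules__` entries.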
async def spawn_brokerd(
brokername: str,
loglevel: str | None = None,
**tractor_kwargs,
) -> bool:
from piker.service._util import log # use service mngr log
log.info(f'Spawning {brokername} broker daemon')
(
brokermode,
tractor_kwargs,
daemon_fixture_ep,
) = broker_init(
brokername,
loglevel,
**tractor_kwargs,
)
brokermod = get_brokermod(brokername)
extra_tractor_kwargs = getattr(brokermod, '_spawn_kwargs', {})
tractor_kwargs.update(extra_tractor_kwargs)
# ask `pikerd` to spawn a new sub-actor and manage it under its
# actor nursery
from piker.service import Services
dname: str = tractor_kwargs.pop('name') # f'brokerd.{brokername}'
portal = await Services.actor_n.start_actor(
dname,
enable_modules=_data_mods + tractor_kwargs.pop('enable_modules'),
debug_mode=Services.debug_mode,
**tractor_kwargs
)
# NOTE: the service mngr expects an already spawned actor + its
# portal ref in order to do non-blocking setup of brokerd
# service nursery.
await Services.start_service_task(
dname,
portal,
# signature of target root-task endpoint
daemon_fixture_ep,
brokername=brokername,
loglevel=loglevel,
)
return True
@acm
async def maybe_spawn_brokerd(
brokername: str,
loglevel: str | None = None,
**pikerd_kwargs,
) -> tractor.Portal:
'''
Helper to spawn a brokerd service *from* a client who wishes to
use the sub-actor-daemon but is fine with re-using any existing
and contactable `brokerd`.
More or less, acts as a cached-actor-getter factory.
'''
from piker.service import maybe_spawn_daemon
async with maybe_spawn_daemon(
f'brokerd.{brokername}',
service_task_target=spawn_brokerd,
spawn_args={
'brokername': brokername,
},
loglevel=loglevel,
**pikerd_kwargs,
) as portal:
yield portal
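A hedged usage sketch (not in the diff) of the factory from a client actor; the runtime setup mirrors the `ledger sync` CLI shown earlier:

import trio
from piker.service import open_piker_runtime
from piker.brokers._daemon import maybe_spawn_brokerd

async def main() -> None:
    async with (
        open_piker_runtime(name='brokerd_user'),
        maybe_spawn_brokerd('binance', loglevel='info') as portal,
    ):
        # `portal` points at a (possibly pre-existing) `brokerd.binance`
        # actor which can now serve feed / trade-dialog endpoints.
        print(f'connected to {portal.channel.uid}')

# trio.run(main)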

View File

@ -1,5 +1,5 @@
 # piker: trading gear for hackers
-# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
+# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)

 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as published by
@ -15,13 +15,32 @@
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.

 """
-Handy utils.
+Handy cross-broker utils.
+
 """
+from __future__ import annotations
+from functools import partial
 import json
-import asks
+import httpx
 import logging

-from ..log import colorize_json
+from ..log import (
+    get_logger,
+    get_console_log,
+    colorize_json,
+)
+
+subsys: str = 'piker.brokers'
+
+# NOTE: level should be reset by any actor that is spawned
+# as well as given a (more) explicit name/key such
+# as `piker.brokers.binance` matching the subpkg.
+log = get_logger(subsys)
+
+get_console_log = partial(
+    get_console_log,
+    name=subsys,
+)

 class BrokerError(Exception):
@ -32,6 +51,7 @@ class SymbolNotFound(BrokerError):
     "Symbol not found by broker search"

+# TODO: these should probably be moved to `.tsp/.data`?
 class NoData(BrokerError):
     '''
     Symbol data not permitted or no data
@ -41,14 +61,15 @@ class NoData(BrokerError):
     def __init__(
         self,
         *args,
-        frame_size: int = 1000,
+        info: dict|None = None,

     ) -> None:
         super().__init__(*args)
+        self.info: dict|None = info

         # when raised, machinery can check if the backend
         # set a "frame size" for doing datetime calcs.
-        self.frame_size: int = 1000
+        # self.frame_size: int = 1000

 class DataUnavailable(BrokerError):
@ -69,18 +90,19 @@ class DataThrottle(BrokerError):
     # TODO: add in throttle metrics/feedback

 def resproc(
-    resp: asks.response_objects.Response,
+    resp: httpx.Response,
     log: logging.Logger,
     return_json: bool = True,
     log_resp: bool = False,

-) -> asks.response_objects.Response:
-    """Process response and return its json content.
+) -> httpx.Response:
+    '''
+    Process response and return its json content.

     Raise the appropriate error on non-200 OK responses.
-    """
+
+    '''
     if not resp.status_code == 200:
         raise BrokerError(resp.body)

     try:

View File

@ -1,566 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Binance backend
"""
from contextlib import asynccontextmanager as acm
from datetime import datetime
from typing import (
Any, Union, Optional,
AsyncGenerator, Callable,
)
import time
import trio
from trio_typing import TaskStatus
import pendulum
import asks
from fuzzywuzzy import process as fuzzy
import numpy as np
import tractor
from pydantic.dataclasses import dataclass
from pydantic import BaseModel
import wsproto
from .._cacheables import open_cached_client
from ._util import resproc, SymbolNotFound
from ..log import get_logger, get_console_log
from ..data import ShmArray
from ..data._web_bs import open_autorecon_ws, NoBsWs
log = get_logger(__name__)
_url = 'https://api.binance.com'
# Broker specific ohlc schema (rest)
_ohlc_dtype = [
('index', int),
('time', int),
('open', float),
('high', float),
('low', float),
('close', float),
('volume', float),
('bar_wap', float), # will be zeroed by sampler if not filled
# XXX: some additional fields are defined in the docs:
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
# ('close_time', int),
# ('quote_vol', float),
# ('num_trades', int),
# ('buy_base_vol', float),
# ('buy_quote_vol', float),
# ('ignore', float),
]
# UI components allow this to be declared such that additional
# (historical) fields can be exposed.
ohlc_dtype = np.dtype(_ohlc_dtype)
_show_wap_in_history = False
# https://binance-docs.github.io/apidocs/spot/en/#exchange-information
class Pair(BaseModel):
symbol: str
status: str
baseAsset: str
baseAssetPrecision: int
quoteAsset: str
quotePrecision: int
quoteAssetPrecision: int
baseCommissionPrecision: int
quoteCommissionPrecision: int
orderTypes: list[str]
icebergAllowed: bool
ocoAllowed: bool
quoteOrderQtyMarketAllowed: bool
isSpotTradingAllowed: bool
isMarginTradingAllowed: bool
filters: list[dict[str, Union[str, int, float]]]
permissions: list[str]
@dataclass
class OHLC:
"""Description of the flattened OHLC quote format.
For schema details see:
https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-streams
"""
time: int
open: float
high: float
low: float
close: float
volume: float
close_time: int
quote_vol: float
num_trades: int
buy_base_vol: float
buy_quote_vol: float
ignore: int
# null the place holder for `bar_wap` until we
# figure out what to extract for this.
bar_wap: float = 0.0
# convert datetime obj timestamp to unixtime in milliseconds
def binance_timestamp(when):
return int((when.timestamp() * 1000) + (when.microsecond / 1000))
class Client:
def __init__(self) -> None:
self._sesh = asks.Session(connections=4)
self._sesh.base_location = _url
self._pairs: dict[str, Any] = {}
async def _api(
self,
method: str,
params: dict,
) -> dict[str, Any]:
resp = await self._sesh.get(
path=f'/api/v3/{method}',
params=params,
timeout=float('inf')
)
return resproc(resp, log)
async def symbol_info(
self,
sym: Optional[str] = None,
) -> dict[str, Any]:
'''Get symbol info for the exchange.
'''
# TODO: we can load from our self._pairs cache
# on repeat calls...
# will retrieve all symbols by default
params = {}
if sym is not None:
sym = sym.upper()
params = {'symbol': sym}
resp = await self._api(
'exchangeInfo',
params=params,
)
entries = resp['symbols']
if not entries:
raise SymbolNotFound(f'{sym} not found')
syms = {item['symbol']: item for item in entries}
if sym is not None:
return syms[sym]
else:
return syms
async def cache_symbols(
self,
) -> dict:
if not self._pairs:
self._pairs = await self.symbol_info()
return self._pairs
async def search_symbols(
self,
pattern: str,
limit: int = None,
) -> dict[str, Any]:
if self._pairs is not None:
data = self._pairs
else:
data = await self.symbol_info()
matches = fuzzy.extractBests(
pattern,
data,
score_cutoff=50,
)
# repack in dict form
return {item[0]['symbol']: item[0]
for item in matches}
async def bars(
self,
symbol: str,
start_dt: Optional[datetime] = None,
end_dt: Optional[datetime] = None,
limit: int = 1000, # <- max allowed per query
as_np: bool = True,
) -> dict:
if end_dt is None:
end_dt = pendulum.now('UTC')
if start_dt is None:
start_dt = end_dt.start_of(
'minute').subtract(minutes=limit)
start_time = binance_timestamp(start_dt)
end_time = binance_timestamp(end_dt)
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
bars = await self._api(
'klines',
params={
'symbol': symbol.upper(),
'interval': '1m',
'startTime': start_time,
'endTime': end_time,
'limit': limit
}
)
# TODO: pack this bars scheme into a ``pydantic`` validator type:
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
# TODO: we should port this to ``pydantic`` to avoid doing
# manual validation ourselves..
new_bars = []
for i, bar in enumerate(bars):
bar = OHLC(*bar)
row = []
for j, (name, ftype) in enumerate(_ohlc_dtype[1:]):
# TODO: maybe we should go nanoseconds on all
# history time stamps?
if name == 'time':
# convert to epoch seconds: float
row.append(bar.time / 1000.0)
else:
row.append(getattr(bar, name))
new_bars.append((i,) + tuple(row))
array = np.array(new_bars, dtype=_ohlc_dtype) if as_np else bars
return array
@acm
async def get_client() -> Client:
client = Client()
await client.cache_symbols()
yield client
# validation type
class AggTrade(BaseModel):
e: str # Event type
E: int # Event time
s: str # Symbol
a: int # Aggregate trade ID
p: float # Price
q: float # Quantity
f: int # First trade ID
l: int # Last trade ID
T: int # Trade time
m: bool # Is the buyer the market maker?
M: bool # Ignore
async def stream_messages(ws: NoBsWs) -> AsyncGenerator[NoBsWs, dict]:
timeouts = 0
while True:
with trio.move_on_after(3) as cs:
msg = await ws.recv_msg()
if cs.cancelled_caught:
timeouts += 1
if timeouts > 2:
log.error("binance feed seems down and slow af? rebooting...")
await ws._connect()
continue
# for l1 streams binance doesn't add an event type field so
# identify those messages by matching keys
# https://binance-docs.github.io/apidocs/spot/en/#individual-symbol-book-ticker-streams
if msg.get('u'):
sym = msg['s']
bid = float(msg['b'])
bsize = float(msg['B'])
ask = float(msg['a'])
asize = float(msg['A'])
yield 'l1', {
'symbol': sym,
'ticks': [
{'type': 'bid', 'price': bid, 'size': bsize},
{'type': 'bsize', 'price': bid, 'size': bsize},
{'type': 'ask', 'price': ask, 'size': asize},
{'type': 'asize', 'price': ask, 'size': asize}
]
}
elif msg.get('e') == 'aggTrade':
# validate
msg = AggTrade(**msg)
# TODO: type out and require this quote format
# from all backends!
yield 'trade', {
'symbol': msg.s,
'last': msg.p,
'brokerd_ts': time.time(),
'ticks': [{
'type': 'trade',
'price': msg.p,
'size': msg.q,
'broker_ts': msg.T,
}],
}
def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
"""Create a request subscription packet dict.
https://binance-docs.github.io/apidocs/spot/en/#live-subscribing-unsubscribing-to-streams
"""
return {
'method': 'SUBSCRIBE',
'params': [
f'{pair.lower()}@{sub_name}'
for pair in pairs
],
'id': uid
}
@acm
async def open_history_client(
symbol: str,
) -> tuple[Callable, int]:
# TODO implement history getter for the new storage layer.
async with open_cached_client('binance') as client:
async def get_ohlc(
end_dt: Optional[datetime] = None,
start_dt: Optional[datetime] = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
array = await client.bars(
symbol,
start_dt=start_dt,
end_dt=end_dt,
)
start_dt = pendulum.from_timestamp(array[0]['time'])
end_dt = pendulum.from_timestamp(array[-1]['time'])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3}
async def backfill_bars(
sym: str,
shm: ShmArray, # type: ignore # noqa
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> None:
"""Fill historical bars into shared mem / storage afap.
"""
with trio.CancelScope() as cs:
async with open_cached_client('binance') as client:
bars = await client.bars(symbol=sym)
shm.push(bars)
task_status.started(cs)
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str = None,
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
# XXX: required to propagate ``tractor`` loglevel to piker logging
get_console_log(loglevel or tractor.current_actor().loglevel)
sym_infos = {}
uid = 0
async with (
open_cached_client('binance') as client,
send_chan as send_chan,
):
# keep client cached for real-time section
cache = await client.cache_symbols()
for sym in symbols:
d = cache[sym.upper()]
syminfo = Pair(**d) # validation
si = sym_infos[sym] = syminfo.dict()
# XXX: after manually inspecting the response format we
# just directly pick out the info we need
si['price_tick_size'] = float(syminfo.filters[0]['tickSize'])
si['lot_tick_size'] = float(syminfo.filters[2]['stepSize'])
si['asset_type'] = 'crypto'
symbol = symbols[0]
init_msgs = {
# pass back token, and bool, signalling if we're the writer
# and that history has been written
symbol: {
'symbol_info': sym_infos[sym],
'shm_write_opts': {'sum_tick_vml': False},
'fqsn': sym,
},
}
@acm
async def subscribe(ws: wsproto.WSConnection):
# setup subs
# trade data (aka L1)
# https://binance-docs.github.io/apidocs/spot/en/#symbol-order-book-ticker
l1_sub = make_sub(symbols, 'bookTicker', uid)
await ws.send_msg(l1_sub)
# aggregate (each order clear by taker **not** by maker)
# trades data:
# https://binance-docs.github.io/apidocs/spot/en/#aggregate-trade-streams
agg_trades_sub = make_sub(symbols, 'aggTrade', uid)
await ws.send_msg(agg_trades_sub)
# ack from ws server
res = await ws.recv_msg()
assert res['id'] == uid
yield
subs = []
for sym in symbols:
subs.append("{sym}@aggTrade")
subs.append("{sym}@bookTicker")
# unsub from all pairs on teardown
await ws.send_msg({
"method": "UNSUBSCRIBE",
"params": subs,
"id": uid,
})
# XXX: do we need to ack the unsub?
# await ws.recv_msg()
async with open_autorecon_ws(
'wss://stream.binance.com/ws',
fixture=subscribe,
) as ws:
# pull a first quote and deliver
msg_gen = stream_messages(ws)
typ, quote = await msg_gen.__anext__()
while typ != 'trade':
# TODO: use ``anext()`` when it lands in 3.10!
typ, quote = await msg_gen.__anext__()
task_status.started((init_msgs, quote))
# signal to caller feed is ready for consumption
feed_is_live.set()
# import time
# last = time.time()
# start streaming
async for typ, msg in msg_gen:
# period = time.time() - last
# hz = 1/period if period else float('inf')
# if hz > 60:
# log.info(f'Binance quotez : {hz}')
topic = msg['symbol'].lower()
await send_chan.send({topic: msg})
# last = time.time()
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> Client:
async with open_cached_client('binance') as client:
# load all symbols locally for fast search
cache = await client.cache_symbols()
await ctx.started()
async with ctx.open_stream() as stream:
async for pattern in stream:
# results = await client.symbol_info(sym=pattern.upper())
matches = fuzzy.extractBests(
pattern,
cache,
score_cutoff=50,
)
# repack in dict form
await stream.send(
{item[0]['symbol']: item[0]
for item in matches}
)

View File

@ -0,0 +1,60 @@
# piker: trading gear for hackers
# Copyright (C)
# Guillermo Rodriguez (aka ze jefe)
# Tyler Goodlet
# (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
binancial secs on the floor, in the office, behind the dumpster.
"""
from .api import (
get_client,
)
from .feed import (
get_mkt_info,
open_history_client,
open_symbol_search,
stream_quotes,
)
from .broker import (
open_trade_dialog,
get_cost,
)
from .venues import (
SpotPair,
FutesPair,
)
__all__ = [
'get_client',
'get_mkt_info',
'get_cost',
'SpotPair',
'FutesPair',
'open_trade_dialog',
'open_history_client',
'open_symbol_search',
'stream_quotes',
]
# `brokerd` modules
__enable_modules__: list[str] = [
'api',
'feed',
'broker',
]

File diff suppressed because it is too large

View File

@ -0,0 +1,710 @@
# piker: trading gear for hackers
# Copyright (C)
# Guillermo Rodriguez (aka ze jefe)
# Tyler Goodlet
# (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Live order control B)
'''
from __future__ import annotations
from pprint import pformat
from typing import (
Any,
AsyncIterator,
)
import time
from time import time_ns
from bidict import bidict
import tractor
import trio
from piker.accounting import (
Asset,
)
from piker.brokers._util import (
get_logger,
)
from piker.data._web_bs import (
open_autorecon_ws,
NoBsWs,
)
from piker.brokers import (
open_cached_client,
BrokerError,
)
from piker.clearing import (
OrderDialogs,
)
from piker.clearing._messages import (
BrokerdOrder,
BrokerdOrderAck,
BrokerdStatus,
BrokerdPosition,
BrokerdFill,
BrokerdCancel,
BrokerdError,
Status,
Order,
)
from .venues import (
Pair,
_futes_ws,
_testnet_futes_ws,
)
from .api import Client
log = get_logger('piker.brokers.binance')
# Fee schedule template, mostly for paper engine fees modelling.
# https://www.binance.com/en/support/faq/what-are-market-makers-and-takers-360007720071
def get_cost(
price: float,
size: float,
is_taker: bool = False,
) -> float:
# https://www.binance.com/en/fee/trading
cb: float = price * size
match is_taker:
case True:
return cb * 0.001000
case False if cb < 1e6:
return cb * 0.001000
case False if 1e6 <= cb < 5e6:
return cb * 0.000900
# NOTE: there's more but are you really going
# to have a cb bigger than this per trade?
case False if cb >= 5e6:
return cb * 0.000800
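A couple of worked values for the fee helper above (toy sizes; assumes the corrected middle maker tier of 1e6 <= cb < 5e6):

# taker: flat 10 bps of the cost basis
assert get_cost(price=20_000.0, size=0.5, is_taker=True) == 10_000.0 * 0.001   # ~10.0

# maker: 10 bps below a 1m cost basis, 9 bps from 1m up to 5m
assert get_cost(price=50_000.0, size=10.0) == 500_000.0 * 0.001     # ~500.0
assert get_cost(price=50_000.0, size=40.0) == 2_000_000.0 * 0.0009  # ~1800.0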
async def handle_order_requests(
ems_order_stream: tractor.MsgStream,
client: Client,
dids: bidict[str, str],
dialogs: OrderDialogs,
) -> None:
'''
Receive order requests from `emsd`, translate them into API calls and transmit.
'''
msg: dict | BrokerdOrder | BrokerdCancel
async for msg in ems_order_stream:
log.info(f'Rx order request:\n{pformat(msg)}')
match msg:
case {
'action': 'cancel',
}:
cancel = BrokerdCancel(**msg)
existing: BrokerdOrder | None = dialogs.get(cancel.oid)
if not existing:
log.error(
f'NO Existing order-dialog for {cancel.oid}!?'
)
await ems_order_stream.send(BrokerdError(
oid=cancel.oid,
# TODO: do we need the symbol?
# https://github.com/pikers/piker/issues/514
symbol='unknown',
reason=(
'Invalid `binance` order request dialog oid'
)
))
continue
else:
symbol: str = existing['symbol']
try:
await client.submit_cancel(
symbol,
cancel.oid,
)
except BrokerError as be:
await ems_order_stream.send(
BrokerdError(
oid=msg['oid'],
symbol=symbol,
reason=(
'`binance` CANCEL failed:\n'
f'{be}'
))
)
continue
case {
'account': ('binance.usdtm' | 'binance.spot') as account,
'action': action,
} if action in {'buy', 'sell'}:
# validate
order = BrokerdOrder(**msg)
oid: str = order.oid # emsd order id
modify: bool = False
# NOTE: check and report edits
if existing := dialogs.get(order.oid):
log.info(
f'Existing order for {oid} updated:\n'
f'{pformat(existing.maps[-1])} -> {pformat(msg)}'
)
modify = True
# only add new msg AFTER the existing check
dialogs.add_msg(oid, msg)
else:
# XXX NOTE: update before the ack!
# track latest request state such that map
# lookups start at the most recent msg and then
# scan reverse-chronologically.
dialogs.add_msg(oid, msg)
# XXX: ACK the request **immediately** before sending
# the api side request to ensure the ems maps the oid ->
# reqid correctly!
resp = BrokerdOrderAck(
oid=oid, # ems order request id
reqid=oid, # our custom int mapping
account='binance', # piker account
)
await ems_order_stream.send(resp)
# call our client api to submit the order
# NOTE: modifies only require diff key for user oid:
# https://binance-docs.github.io/apidocs/futures/en/#modify-order-trade
try:
reqid = await client.submit_limit(
symbol=order.symbol,
side=order.action,
quantity=order.size,
price=order.price,
oid=oid,
modify=modify,
)
# SMH they do gen their own order id: ints..
# assert reqid == order.oid
dids[order.oid] = reqid
except BrokerError as be:
await ems_order_stream.send(
BrokerdError(
oid=msg['oid'],
symbol=msg['symbol'],
reason=(
'`binance` request failed:\n'
f'{be}'
))
)
continue
case _:
account = msg.get('account')
if account not in {'binance.spot', 'binance.usdtm'}:
log.error(
'Order request does not have a valid binance account name?\n'
'Only one of\n'
'- `binance.spot` or,\n'
'- `binance.usdtm`\n'
'is currently valid!'
)
await ems_order_stream.send(
BrokerdError(
oid=msg['oid'],
symbol=msg['symbol'],
reason=(
f'Invalid `binance` broker request msg:\n{msg}'
))
)
@tractor.context
async def open_trade_dialog(
ctx: tractor.Context,
) -> AsyncIterator[dict[str, Any]]:
# TODO: how do we set this from the EMS such that
# positions are loaded from the correct venue on the user
# stream at startup? (that is in an attempt to support both
# spot and futes markets?)
# - I guess we just want to instead start 2 separate user
# stream tasks right? unless we want another actor pool?
# XXX: see issue: <urlhere>
venue_name: str = 'futes'
venue_mode: str = 'usdtm_futes'
account_name: str = 'usdtm'
use_testnet: bool = False
# TODO: if/when we add .accounting support we need to
# do a open_symcache() call.. though maybe we can hide
# this in a new async version of open_account()?
async with open_cached_client('binance') as client:
subconf: dict|None = client.conf.get(venue_name)
# XXX: if no futes.api_key or spot.api_key has been set we
# always fall back to the paper engine!
if (
not subconf
or
not subconf.get('api_key')
):
await ctx.started('paper')
return
use_testnet: bool = subconf.get('use_testnet', False)
async with (
open_cached_client('binance') as client,
):
client.mkt_mode: str = venue_mode
# TODO: map these wss urls depending on spot or futes
# setting passed when this task is spawned?
wss_url: str = _futes_ws if not use_testnet else _testnet_futes_ws
wss: NoBsWs
async with (
client.manage_listen_key() as listen_key,
open_autorecon_ws(f'{wss_url}/?listenKey={listen_key}') as wss,
):
nsid: int = time_ns()
await wss.send_msg({
# "method": "SUBSCRIBE",
"method": "REQUEST",
"params":
[
f"{listen_key}@account",
f"{listen_key}@balance",
f"{listen_key}@position",
# TODO: does this even work!? seems to cause
# a hang on the first msg..? lelelel.
# f"{listen_key}@order",
],
"id": nsid
})
with trio.fail_after(6):
msg = await wss.recv_msg()
assert msg['id'] == nsid
# TODO: load other market wide data / statistics:
# - OI: https://binance-docs.github.io/apidocs/futures/en/#open-interest
# - OI stats: https://binance-docs.github.io/apidocs/futures/en/#open-interest-statistics
accounts: bidict[str, str] = bidict({'binance.usdtm': None})
balances: dict[Asset, float] = {}
positions: list[BrokerdPosition] = []
for resp_dict in msg['result']:
resp: dict = resp_dict['res']
req: str = resp_dict['req']
# @account response should be something like:
# {'accountAlias': 'sRFzFzAuuXsR',
# 'canDeposit': True,
# 'canTrade': True,
# 'canWithdraw': True,
# 'feeTier': 0}
if 'account' in req:
# NOTE: fill in the hash-like key/alias binance
# provides for the account.
alias: str = resp['accountAlias']
accounts['binance.usdtm'] = alias
# @balance response:
# {'accountAlias': 'sRFzFzAuuXsR',
# 'balances': [{'asset': 'BTC',
# 'availableBalance': '0.00000000',
# 'balance': '0.00000000',
# 'crossUnPnl': '0.00000000',
# 'crossWalletBalance': '0.00000000',
# 'maxWithdrawAmount': '0.00000000',
# 'updateTime': 0}]
# ...
# }
elif 'balance' in req:
for entry in resp['balances']:
name: str = entry['asset']
balance: float = float(entry['balance'])
last_update_t: int = entry['updateTime']
spot_asset: Asset = client._venue2assets['spot'][name]
if balance > 0:
balances[spot_asset] = (balance, last_update_t)
# await tractor.pause()
# @position response:
# {'positions': [{'entryPrice': '0.0',
# 'isAutoAddMargin': False,
# 'isolatedMargin': '0',
# 'leverage': 20,
# 'liquidationPrice': '0',
# 'marginType': 'CROSSED',
# 'markPrice': '0.60289650',
# 'markPrice': '0.00000000',
# 'maxNotionalValue': '25000',
# 'notional': '0',
# 'positionAmt': '0',
# 'positionSide': 'BOTH',
# 'symbol': 'ETHUSDT_230630',
# 'unRealizedProfit': '0.00000000',
# 'updateTime': 1672741444894}
# ...
# }
elif 'position' in req:
for entry in resp['positions']:
bs_mktid: str = entry['symbol']
entry_size: float = float(entry['positionAmt'])
pair: Pair | None = client._venue2pairs[
venue_mode
].get(bs_mktid)
if (
pair
and entry_size > 0
):
entry_price: float = float(entry['entryPrice'])
ppmsg = BrokerdPosition(
broker='binance',
account=f'binance.{account_name}',
# TODO: maybe we should be passing back
# a `MktPair` here?
symbol=pair.bs_fqme.lower() + '.binance',
size=entry_size,
avg_price=entry_price,
)
positions.append(ppmsg)
if pair is None:
log.warning(
f'`{bs_mktid}` Position entry but no market pair?\n'
f'{pformat(entry)}\n'
)
await ctx.started((
positions,
list(accounts)
))
# TODO: package more state tracking into the dialogs API?
# - hmm maybe we could include `OrderDialogs.dids:
# bidict` as part of the interface and then ask for
# a reqid field to be passed at init?
# |-> `OrderDialog(reqid_field='orderId')` kinda thing?
# - also maybe bundle in some kind of dialog to account
# table?
dialogs = OrderDialogs()
dids: dict[str, int] = bidict()
# TODO: further init setup things to get full EMS and
# .accounting support B)
# - live order loading via user stream subscription and
# update to the order dialog table.
# - MAKE SURE we add live orders loaded during init
# into the dialogs table to ensure they can be
# cancelled, meaning we can do a symbol lookup.
# - position loading using `piker.accounting` subsys
# and comparison with binance's own position calcs.
# - load pps and accounts using accounting apis, write
# the ledger and account files
# - table: Account
# - ledger: TransactionLedger
async with (
trio.open_nursery() as tn,
ctx.open_stream() as ems_stream,
):
# deliver all pre-exist open orders to EMS thus syncing
# state with existing live limits reported by them.
order: Order
for order in await client.get_open_orders():
status_msg = Status(
time_ns=time.time_ns(),
resp='open',
oid=order.oid,
reqid=order.oid,
# embedded order info
req=order,
src='binance',
)
dialogs.add_msg(order.oid, order.to_dict())
await ems_stream.send(status_msg)
tn.start_soon(
handle_order_requests,
ems_stream,
client,
dids,
dialogs,
)
tn.start_soon(
handle_order_updates,
venue_mode,
account_name,
client,
ems_stream,
wss,
dialogs,
)
await trio.sleep_forever()
async def handle_order_updates(
venue: str,
account_name: str,
client: Client,
ems_stream: tractor.MsgStream,
wss: NoBsWs,
dialogs: OrderDialogs,
) -> None:
'''
Main msg handling loop for all things order management.
This code is broken out to make the context explicit and state
variables defined in the signature clear to the reader.
'''
async for msg in wss:
log.info(f'Rx USERSTREAM msg:\n{pformat(msg)}')
match msg:
# ORDER update
# spot: https://binance-docs.github.io/apidocs/spot/en/#payload-balance-update
# futes: https://binance-docs.github.io/apidocs/futures/en/#event-order-update
# futes: https://binance-docs.github.io/apidocs/futures/en/#event-balance-and-position-update
# {'o': {
# 'L': '0',
# 'N': 'USDT',
# 'R': False,
# 'S': 'BUY',
# 'T': 1687028772484,
# 'X': 'NEW',
# 'a': '0',
# 'ap': '0',
# 'b': '7012.06520',
# 'c': '518d4122-8d3e-49b0-9a1e-1fabe6f62e4c',
# 'cp': False,
# 'f': 'GTC',
# 'i': 3376956924,
# 'l': '0',
# 'm': False,
# 'n': '0',
# 'o': 'LIMIT',
# 'ot': 'LIMIT',
# 'p': '21136.80',
# 'pP': False,
# 'ps': 'BOTH',
# 'q': '0.047',
# 'rp': '0',
# 's': 'BTCUSDT',
# 'si': 0,
# 'sp': '0',
# 'ss': 0,
# 't': 0,
# 'wt': 'CONTRACT_PRICE',
# 'x': 'NEW',
# 'z': '0'}
# }
case {
# 'e': 'executionReport',
'e': 'ORDER_TRADE_UPDATE',
'T': int(epoch_ms),
'o': {
's': bs_mktid,
# XXX NOTE XXX see special ids for market
# events or margin calls:
# // special client order id:
# // starts with "autoclose-": liquidation order
# // "adl_autoclose": ADL auto close order
# // "settlement_autoclose-": settlement order
# for delisting or delivery
'c': oid,
# 'i': reqid, # binance internal int id
# prices
'a': submit_price,
'ap': avg_price,
'L': fill_price,
# sizing
'q': req_size,
'l': clear_size_filled, # this event
'z': accum_size_filled, # accum
# commissions
'n': cost,
'N': cost_asset,
# state
'S': side,
'X': status,
},
} as order_msg:
log.info(
f'{status} for {side} ORDER oid: {oid}\n'
f'bs_mktid: {bs_mktid}\n\n'
f'order size: {req_size}\n'
f'cleared size: {clear_size_filled}\n'
f'accum filled size: {accum_size_filled}\n\n'
f'submit price: {submit_price}\n'
f'fill_price: {fill_price}\n'
f'avg clearing price: {avg_price}\n\n'
f'cost: {cost}@{cost_asset}\n'
)
# status remap from binance to piker's
# status set:
# - NEW
# - PARTIALLY_FILLED
# - FILLED
# - CANCELED
# - EXPIRED
# https://binance-docs.github.io/apidocs/futures/en/#event-order-update
req_size: float = float(req_size)
accum_size_filled: float = float(accum_size_filled)
fill_price: float = float(fill_price)
match status:
case 'PARTIALLY_FILLED' | 'FILLED':
status = 'fill'
fill_msg = BrokerdFill(
time_ns=time_ns(),
# reqid=reqid,
reqid=oid,
# just use size value for now?
# action=action,
size=clear_size_filled,
price=fill_price,
# TODO: maybe capture more msg data
# i.e fees?
broker_details={'name': 'broker'} | order_msg,
broker_time=time.time(),
)
await ems_stream.send(fill_msg)
if accum_size_filled == req_size:
status = 'closed'
dialogs.pop(oid)
case 'NEW':
status = 'open'
case 'EXPIRED':
status = 'canceled'
dialogs.pop(oid)
case _:
status = status.lower()
resp = BrokerdStatus(
time_ns=time_ns(),
# reqid=reqid,
reqid=oid,
# TODO: i feel like we don't need to make the
# ems and upstream clients aware of this?
# account='binance.usdtm',
status=status,
filled=accum_size_filled,
remaining=req_size - accum_size_filled,
broker_details={
'name': 'binance',
'broker_time': epoch_ms / 1000.
}
)
await ems_stream.send(resp)
# ACCOUNT and POSITION update B)
# {
# 'E': 1687036749218,
# 'e': 'ACCOUNT_UPDATE'
# 'T': 1687036749215,
# 'a': {'B': [{'a': 'USDT',
# 'bc': '0',
# 'cw': '1267.48920735',
# 'wb': '1410.90245576'}],
# 'P': [{'cr': '-3292.10973007',
# 'ep': '26349.90000',
# 'iw': '143.41324841',
# 'ma': 'USDT',
# 'mt': 'isolated',
# 'pa': '0.038',
# 'ps': 'BOTH',
# 's': 'BTCUSDT',
# 'up': '5.17555453'}],
# 'm': 'ORDER'},
# }
case {
'T': int(epoch_ms),
'e': 'ACCOUNT_UPDATE',
'a': {
'P': [{
's': bs_mktid,
'pa': pos_amount,
'ep': entry_price,
}],
},
}:
# real-time relay position updates back to EMS
pair: Pair | None = client._venue2pairs[venue].get(bs_mktid)
ppmsg = BrokerdPosition(
broker='binance',
account=f'binance.{account_name}',
# TODO: maybe we should be passing back
# a `MktPair` here?
symbol=pair.bs_fqme.lower() + '.binance',
size=float(pos_amount),
avg_price=float(entry_price),
)
await ems_stream.send(ppmsg)
case _:
log.warning(
'Unhandled event:\n'
f'{pformat(msg)}'
)

View File

@ -0,0 +1,557 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Real-time and historical data feed endpoints.
'''
from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
aclosing,
)
from datetime import datetime
from functools import (
partial,
)
import itertools
from pprint import pformat
from typing import (
Any,
AsyncGenerator,
Callable,
Generator,
)
import time
import trio
from trio_typing import TaskStatus
from pendulum import (
from_timestamp,
)
import numpy as np
import tractor
from piker.brokers import (
open_cached_client,
NoData,
)
from piker._cacheables import (
async_lifo_cache,
)
from piker.accounting import (
Asset,
DerivTypes,
MktPair,
unpack_fqme,
)
from piker.types import Struct
from piker.data.validate import FeedInit
from piker.data._web_bs import (
open_autorecon_ws,
NoBsWs,
)
from piker.brokers._util import (
DataUnavailable,
get_logger,
)
from .api import (
Client,
)
from .venues import (
Pair,
FutesPair,
get_api_eps,
)
log = get_logger('piker.brokers.binance')
class L1(Struct):
# https://binance-docs.github.io/apidocs/spot/en/#individual-symbol-book-ticker-streams
update_id: int
sym: str
bid: float
bsize: float
ask: float
asize: float
# validation type
class AggTrade(Struct, frozen=True):
e: str # Event type
E: int # Event time
s: str # Symbol
a: int # Aggregate trade ID
p: float # Price
q: float # Quantity
f: int # First trade ID
l: int # noqa Last trade ID
T: int # Trade time
m: bool # Is the buyer the market maker?
M: bool | None = None # Ignore
async def stream_messages(
ws: NoBsWs,
) -> AsyncGenerator[NoBsWs, dict]:
# TODO: match syntax here!
msg: dict[str, Any]
async for msg in ws:
match msg:
# for l1 streams binance doesn't add an event type field so
# identify those messages by matching keys
# https://binance-docs.github.io/apidocs/spot/en/#individual-symbol-book-ticker-streams
case {
# NOTE: this is never an old value it seems, so
# they are always sending real L1 spread updates.
'u': upid, # update id
's': sym,
'b': bid,
'B': bsize,
'a': ask,
'A': asize,
}:
# TODO: it would be super nice to have a `L1` piker type
# which "renders" incremental tick updates from a packed
# msg-struct:
# - backend msgs after packed into the type such that we
# can reduce IPC usage but without each backend having
# to do that incremental update logic manually B)
# - would it maybe be more efficient to use this instead?
# https://binance-docs.github.io/apidocs/spot/en/#diff-depth-stream
l1 = L1(
update_id=upid,
sym=sym,
bid=bid,
bsize=bsize,
ask=ask,
asize=asize,
)
# for speed probably better to only specifically
# cast fields we need in numerical form?
# l1.typecast()
# repack into piker's tick-quote format
yield 'l1', {
'symbol': l1.sym,
'ticks': [
{
'type': 'bid',
'price': float(l1.bid),
'size': float(l1.bsize),
},
{
'type': 'bsize',
'price': float(l1.bid),
'size': float(l1.bsize),
},
{
'type': 'ask',
'price': float(l1.ask),
'size': float(l1.asize),
},
{
'type': 'asize',
'price': float(l1.ask),
'size': float(l1.asize),
}
]
}
# https://binance-docs.github.io/apidocs/spot/en/#aggregate-trade-streams
case {
'e': 'aggTrade',
}:
# NOTE: this is purely for a definition,
# ``msgspec.Struct`` does not runtime-validate until you
# decode/encode, see:
# https://jcristharif.com/msgspec/structs.html#type-validation
msg = AggTrade(**msg) # TODO: should we .copy() ?
piker_quote: dict = {
'symbol': msg.s,
'last': float(msg.p),
'brokerd_ts': time.time(),
'ticks': [{
'type': 'trade',
'price': float(msg.p),
'size': float(msg.q),
'broker_ts': msg.T,
}],
}
yield 'trade', piker_quote
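For reference, the two normalized payload shapes yielded by the generator above (all values below are illustrative only):

example_l1 = ('l1', {
    'symbol': 'BTCUSDT',
    'ticks': [
        {'type': 'bid',   'price': 61_000.0, 'size': 0.5},
        {'type': 'bsize', 'price': 61_000.0, 'size': 0.5},
        {'type': 'ask',   'price': 61_000.1, 'size': 0.3},
        {'type': 'asize', 'price': 61_000.1, 'size': 0.3},
    ],
})

example_trade = ('trade', {
    'symbol': 'BTCUSDT',
    'last': 61_000.1,
    'brokerd_ts': 1687036749.2,
    'ticks': [{
        'type': 'trade',
        'price': 61_000.1,
        'size': 0.01,
        'broker_ts': 1687036749218,
    }],
})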
def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
'''
Create a request subscription packet dict.
- spot:
https://binance-docs.github.io/apidocs/spot/en/#live-subscribing-unsubscribing-to-streams
- futes:
https://binance-docs.github.io/apidocs/futures/en/#websocket-market-streams
'''
return {
'method': 'SUBSCRIBE',
'params': [
f'{pair.lower()}@{sub_name}'
for pair in pairs
],
'id': uid
}
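e.g. the packet produced for a book-ticker subscription over two (illustrative) pairs:

assert make_sub(['BTCUSDT', 'ETHUSDT'], 'bookTicker', 1) == {
    'method': 'SUBSCRIBE',
    'params': [
        'btcusdt@bookTicker',
        'ethusdt@bookTicker',
    ],
    'id': 1,
}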
# TODO, why aren't frame resp `log.info()`s showing in upstream
# code?!
@acm
async def open_history_client(
mkt: MktPair,
) -> tuple[Callable, int]:
# TODO implement history getter for the new storage layer.
async with open_cached_client('binance') as client:
async def get_ohlc(
timeframe: float,
end_dt: datetime | None = None,
start_dt: datetime | None = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
if timeframe != 60:
raise DataUnavailable('Only 1m bars are supported')
# TODO: better wrapping for venue / mode?
# - eventually logic for usd vs. coin settled futes
# based on `MktPair.src` type/value?
# - maybe something like `async with
# Client.use_venue('usdtm_futes')`
if mkt.type_key in DerivTypes:
client.mkt_mode = 'usdtm_futes'
else:
client.mkt_mode = 'spot'
array: np.ndarray = await client.bars(
mkt=mkt,
start_dt=start_dt,
end_dt=end_dt,
)
if array.size == 0:
raise NoData(
f'No frame for {start_dt} -> {end_dt}\n'
)
times = array['time']
if not times.any():
raise ValueError(
'Bad frame with null-times?\n\n'
f'{times}'
)
if end_dt is None:
inow: int = round(time.time())
if (inow - times[-1]) > 60:
await tractor.pause()
start_dt = from_timestamp(times[0])
end_dt = from_timestamp(times[-1])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3}
@async_lifo_cache()
async def get_mkt_info(
fqme: str,
) -> tuple[MktPair, Pair] | None:
# uppercase since kraken bs_mktid is always upper
if 'binance' not in fqme.lower():
fqme += '.binance'
mkt_mode: str = ''
broker, mkt_ep, venue, expiry = unpack_fqme(fqme)
# NOTE: we always upper case all tokens to be consistent with
# binance's symbology style for pairs, like `BTCUSDT`, but in
# theory we could also just keep things lower case; as long as
# we're consistent and the symcache matches whatever this func
# returns, always!
expiry: str = expiry.upper()
venue: str = venue.upper()
venue_lower: str = venue.lower()
# XXX TODO: we should change the usdtm_futes name to just
# usdm_futes (dropping the tether part) since it turns out that
# there are indeed USD-tokens OTHER THEN tether being used as
# the margin assets.. it's going to require a wholesale
# (variable/key) rename as well as file name adjustments to any
# existing tsdb set..
if 'usd' in venue_lower:
mkt_mode: str = 'usdtm_futes'
# NO IDEA what these contracts (some kinda DEX-ish futes?) are
# but we're masking them for now..
elif (
'defi' in venue_lower
# TODO: handle coinm futes which have a margin asset that
# is some crypto token!
# https://binance-docs.github.io/apidocs/delivery/en/#exchange-information
or 'btc' in venue_lower
):
return None
else:
# NOTE: see the `FutesPair.bs_fqme: str` implementation
# to understand the reverse market info lookup below.
mkt_mode = venue_lower or 'spot'
if (
venue
and 'spot' not in venue_lower
# XXX: catch all in case user doesn't know which
# venue they want (usdtm vs. coinm) and we can choose
# a default (via config?) once we support coin-m APIs.
or 'perp' in venue_lower
):
if not mkt_mode:
mkt_mode: str = f'{venue_lower}_futes'
async with open_cached_client(
'binance',
) as client:
assets: dict[str, Asset] = await client.get_assets()
pair_str: str = mkt_ep.upper()
# switch venue-mode depending on input pattern parsing
# since we want to use a particular endpoint (set) for
# pair info lookup!
client.mkt_mode = mkt_mode
pair: Pair = await client.exch_info(
pair_str,
venue=mkt_mode, # explicit
expiry=expiry,
)
if 'futes' in mkt_mode:
assert isinstance(pair, FutesPair)
dst: Asset | None = assets.get(pair.bs_dst_asset)
if (
not dst
# TODO: a known asset DNE list?
# and pair.baseAsset == 'DEFI'
):
log.warning(
f'UNKNOWN {venue} asset {pair.baseAsset} from,\n'
f'{pformat(pair.to_dict())}'
)
# XXX UNKNOWN missing "asset", though no idea why?
# maybe it's only avail in the margin venue(s): /dapi/ ?
return None
mkt = MktPair(
dst=dst,
src=assets[pair.bs_src_asset],
price_tick=pair.price_tick,
size_tick=pair.size_tick,
bs_mktid=pair.symbol,
expiry=expiry,
venue=venue,
broker='binance',
# NOTE: sectype is always taken from dst, see
# `MktPair.type_key` and `Client._cache_pairs()`
# _atype=sectype,
)
return mkt, pair
@acm
async def subscribe(
ws: NoBsWs,
symbols: list[str],
# defined once at import time to keep a global state B)
iter_subids: Generator[int, None, None] = itertools.count(),
):
# setup subs
subid: int = next(iter_subids)
# trade data (aka L1)
# https://binance-docs.github.io/apidocs/spot/en/#symbol-order-book-ticker
l1_sub = make_sub(symbols, 'bookTicker', subid)
await ws.send_msg(l1_sub)
# aggregate (each order clear by taker **not** by maker)
# trades data:
# https://binance-docs.github.io/apidocs/spot/en/#aggregate-trade-streams
agg_trades_sub = make_sub(symbols, 'aggTrade', subid)
await ws.send_msg(agg_trades_sub)
# might get ack from ws server, or maybe some
# other msg still in transit..
res = await ws.recv_msg()
# the ack (if any) should echo back the request's sub id
resp_id: str | None = res.get('id')
if resp_id is not None:
assert resp_id == subid
yield
subs = []
for sym in symbols:
subs.append("{sym}@aggTrade")
subs.append("{sym}@bookTicker")
# unsub from all pairs on teardown
if ws.connected():
await ws.send_msg({
"method": "UNSUBSCRIBE",
"params": subs,
"id": subid,
})
# XXX: do we need to ack the unsub?
# await ws.recv_msg()
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str = None,
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
async with (
send_chan as send_chan,
open_cached_client('binance') as client,
):
init_msgs: list[FeedInit] = []
for sym in symbols:
mkt: MktPair
pair: Pair
mkt, pair = await get_mkt_info(sym)
# build out init msgs according to latest spec
init_msgs.append(
FeedInit(mkt_info=mkt)
)
wss_url: str = get_api_eps(client.mkt_mode)[1] # 2nd elem is wss url
# TODO: for sanity, but remove eventually Xp
if 'future' in mkt.type_key:
assert 'fstream' in wss_url
async with (
open_autorecon_ws(
url=wss_url,
fixture=partial(
subscribe,
symbols=[mkt.bs_mktid],
),
) as ws,
# avoid stream-gen closure from breaking trio..
aclosing(stream_messages(ws)) as msg_gen,
):
# log.info('WAITING ON FIRST LIVE QUOTE..')
typ, quote = await anext(msg_gen)
# pull a first quote and deliver
while typ != 'trade':
typ, quote = await anext(msg_gen)
task_status.started((init_msgs, quote))
# signal to caller feed is ready for consumption
feed_is_live.set()
# import time
# last = time.time()
# XXX NOTE: can't include the `.binance` suffix
# or the sampling loop will not broadcast correctly
# since `bus._subscribers.setdefault(bs_fqme, set())`
# is used inside `.data.open_feed_bus()` !!!
topic: str = mkt.bs_fqme
# start streaming
async for typ, quote in msg_gen:
# period = time.time() - last
# hz = 1/period if period else float('inf')
# if hz > 60:
# log.info(f'Binance quotez : {hz}')
await send_chan.send({topic: quote})
# last = time.time()
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> Client:
# NOTE: symbology tables are loaded as part of client
# startup in ``.api.get_client()`` and in this case
# are stored as `Client._pairs`.
async with open_cached_client('binance') as client:
# TODO: maybe we should deliver the cache
# so that client's can always do a local-lookup-first
# style try and then update async as (new) match results
# are delivered from here?
await ctx.started()
async with ctx.open_stream() as stream:
pattern: str
async for pattern in stream:
# NOTE: pattern fuzzy-matching is done within
# the method impl.
pairs: dict[str, Pair] = await client.search_symbols(
pattern,
)
# repack in fqme-keyed table
byfqme: dict[str, Pair] = {}
for pair in pairs.values():
byfqme[pair.bs_fqme] = pair
await stream.send(byfqme)

View File

@ -0,0 +1,303 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Per market data-type definitions and schemas types.
"""
from __future__ import annotations
from typing import (
Literal,
)
from decimal import Decimal
from msgspec import field
from piker.types import Struct
# API endpoint paths by venue / sub-API
_domain: str = 'binance.com'
_spot_url = f'https://api.{_domain}'
_futes_url = f'https://fapi.{_domain}'
# WEBsocketz
# NOTE XXX: see api docs which show diff addr?
# https://developers.binance.com/docs/binance-trading-api/websocket_api#general-api-information
_spot_ws: str = 'wss://stream.binance.com/ws'
# or this one? ..
# 'wss://ws-api.binance.com:443/ws-api/v3',
# https://binance-docs.github.io/apidocs/futures/en/#websocket-market-streams
_futes_ws: str = f'wss://fstream.{_domain}/ws'
_auth_futes_ws: str = f'wss://fstream-auth.{_domain}/ws'
# test nets
# NOTE: spot test network only allows certain ep sets:
# https://testnet.binance.vision/
# https://www.binance.com/en/support/faq/how-to-test-my-functions-on-binance-testnet-ab78f9a1b8824cf0a106b4229c76496d
_testnet_spot_url: str = 'https://testnet.binance.vision/api'
_testnet_spot_ws: str = 'wss://testnet.binance.vision/ws'
# or this one? ..
# 'wss://testnet.binance.vision/ws-api/v3'
_testnet_futes_url: str = 'https://testnet.binancefuture.com'
_testnet_futes_ws: str = 'wss://stream.binancefuture.com/ws'
MarketType = Literal[
'spot',
# 'margin',
'usdtm_futes',
# 'coinm_futes',
]
def get_api_eps(venue: MarketType) -> tuple[str, str]:
'''
Return API ep root paths per venue.
'''
return {
'spot': (
_spot_url,
_spot_ws,
),
'usdtm_futes': (
_futes_url,
_futes_ws,
),
}[venue]
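# e.g. (derived from the constants above) `get_api_eps('usdtm_futes')`
# -> ('https://fapi.binance.com', 'wss://fstream.binance.com/ws')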
class Pair(Struct, frozen=True, kw_only=True):
symbol: str
status: str
orderTypes: list[str]
# src
quoteAsset: str
quotePrecision: int
# dst
baseAsset: str
baseAssetPrecision: int
filters: dict[
str,
str | int | float,
] = field(default_factory=dict)
@property
def price_tick(self) -> Decimal:
# XXX: lul, after manually inspecting the response format we
# just directly pick out the info we need
step_size: str = self.filters['PRICE_FILTER']['tickSize'].rstrip('0')
return Decimal(step_size)
@property
def size_tick(self) -> Decimal:
step_size: str = self.filters['LOT_SIZE']['stepSize'].rstrip('0')
return Decimal(step_size)
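# Illustrative example (values not from the original source): with
# filters['PRICE_FILTER']['tickSize'] == '0.01000000' and
# filters['LOT_SIZE']['stepSize'] == '0.00001000' the two properties
# above yield Decimal('0.01') and Decimal('0.00001') respectively.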
@property
def bs_fqme(self) -> str:
return self.symbol
@property
def bs_mktid(self) -> str:
return f'{self.symbol}.{self.venue}'
class SpotPair(Pair, frozen=True):
cancelReplaceAllowed: bool
allowTrailingStop: bool
quoteAssetPrecision: int
baseCommissionPrecision: int
quoteCommissionPrecision: int
icebergAllowed: bool
ocoAllowed: bool
quoteOrderQtyMarketAllowed: bool
isSpotTradingAllowed: bool
isMarginTradingAllowed: bool
otoAllowed: bool
defaultSelfTradePreventionMode: str
allowedSelfTradePreventionModes: list[str]
permissions: list[str]
permissionSets: list[list[str]]
# NOTE: see `.data._symcache.SymbologyCache.load()` for why
ns_path: str = 'piker.brokers.binance:SpotPair'
@property
def venue(self) -> str:
return 'SPOT'
@property
def bs_fqme(self) -> str:
return f'{self.symbol}.SPOT'
@property
def bs_src_asset(self) -> str:
return f'{self.quoteAsset}'
@property
def bs_dst_asset(self) -> str:
return f'{self.baseAsset}'
class FutesPair(Pair):
symbol: str # 'BTCUSDT',
pair: str # 'BTCUSDT',
baseAssetPrecision: int # 8,
contractType: str # 'PERPETUAL',
deliveryDate: int # 4133404800000,
liquidationFee: float # '0.012500',
maintMarginPercent: float # '2.5000',
marginAsset: str # 'USDT',
marketTakeBound: float # '0.05',
maxMoveOrderLimit: int # 10000,
onboardDate: int # 1569398400000,
pricePrecision: int # 2,
quantityPrecision: int # 3,
quoteAsset: str # 'USDT',
quotePrecision: int # 8,
requiredMarginPercent: float # '5.0000',
timeInForce: list[str] # ['GTC', 'IOC', 'FOK', 'GTX'],
triggerProtect: float # '0.0500',
underlyingSubType: list[str] # ['PoW'],
underlyingType: str # 'COIN'
# NOTE: see `.data._symcache.SymbologyCache.load()` for why
ns_path: str = 'piker.brokers.binance:FutesPair'
# NOTE: for compat with spot pairs and `MktPair.src: Asset`
# processing..
@property
def quoteAssetPrecision(self) -> int:
return self.quotePrecision
@property
def expiry(self) -> str:
symbol: str = self.symbol
contype: str = self.contractType
match contype:
case (
'CURRENT_QUARTER'
| 'CURRENT_QUARTER DELIVERING'
| 'NEXT_QUARTER' # su madre binance..
):
pair, _, expiry = symbol.partition('_')
assert pair == self.pair # sanity
return f'{expiry}'
case 'PERPETUAL':
return 'PERP'
case '':
subtype: list[str] = self.underlyingSubType
if not subtype:
if self.status == 'PENDING_TRADING':
return 'PENDING'
match subtype:
case ['DEFI']:
return 'PERP'
# wow, just wow you binance guys suck..
if self.status == 'PENDING_TRADING':
return 'PENDING'
# XXX: yeah no clue then..
raise ValueError(
f'Bad .expiry token match: {contype} for {symbol}'
)
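# e.g. (illustrative) a quarterly symbol like 'BTCUSDT_240628' maps to
# expiry '240628' while a PERPETUAL contract maps to 'PERP'.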
@property
def venue(self) -> str:
symbol: str = self.symbol
ctype: str = self.contractType
margin: str = self.marginAsset
match ctype:
case 'PERPETUAL':
return f'{margin}M'
case (
'CURRENT_QUARTER'
| 'CURRENT_QUARTER DELIVERING'
| 'NEXT_QUARTER' # su madre binance..
):
_, _, expiry = symbol.partition('_')
return f'{margin}M'
case '':
subtype: list[str] = self.underlyingSubType
if not subtype:
if self.status == 'PENDING_TRADING':
return f'{margin}M'
match subtype:
case (
['DEFI']
| ['USDC']
):
return f'{subtype[0]}'
# XXX: yeah no clue then..
raise ValueError(
f'Bad .venue token match: {ctype}'
)
@property
def bs_fqme(self) -> str:
symbol: str = self.symbol
ctype: str = self.contractType
venue: str = self.venue
pair: str = self.pair
match ctype:
case (
'CURRENT_QUARTER'
| 'NEXT_QUARTER' # su madre binance..
):
pair, _, expiry = symbol.partition('_')
assert pair == self.pair
return f'{pair}.{venue}.{self.expiry}'
# assumed fallback for other contract types (e.g. PERPETUAL), since a
# `str` return is expected by callers: use the plain symbol form.
return f'{symbol}.{venue}.{self.expiry}'
@property
def bs_src_asset(self) -> str:
return f'{self.quoteAsset}'
@property
def bs_dst_asset(self) -> str:
return f'{self.baseAsset}.{self.venue}'
PAIRTYPES: dict[MarketType, Pair] = {
'spot': SpotPair,
'usdtm_futes': FutesPair,
# TODO: support coin-margined venue:
# https://binance-docs.github.io/apidocs/delivery/en/#change-log
# 'coinm_futes': CoinFutesPair,
}
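# Usage sketch (assumption, mirroring how the client layer presumably
# decodes venue-specific pair payloads):
#
#   pair_type: type[Pair] = PAIRTYPES[venue]  # 'spot' | 'usdtm_futes'
#   pair: Pair = pair_type(**pair_payload_from_api)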

View File

@ -21,6 +21,7 @@ import os
from functools import partial
from operator import attrgetter
from operator import itemgetter
+ from types import ModuleType

import click
import trio
@ -28,20 +29,173 @@ import tractor
from ..cli import cli
from .. import watchlists as wl
- from ..log import get_console_log, colorize_json, get_logger
- from .._daemon import maybe_spawn_brokerd, maybe_open_pikerd
- from ..brokers import core, get_brokermod, data
-
- log = get_logger('cli')
- DEFAULT_BROKER = 'questrade'
+ from ..log import (
+     colorize_json,
+ )
+ from ._util import (
+     log,
+     get_console_log,
+ )
+ from ..service import (
+     maybe_spawn_brokerd,
+     maybe_open_pikerd,
+ )
+ from ..brokers import (
+     core,
+     get_brokermod,
+     data,
+ )
+
+ DEFAULT_BROKER = 'binance'

_config_dir = click.get_app_dir('piker')
_watchlists_data_path = os.path.join(_config_dir, 'watchlists.json')
OK = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
def print_ok(s: str, **kwargs):
print(OK + s + ENDC, **kwargs)
def print_error(s: str, **kwargs):
print(FAIL + s + ENDC, **kwargs)
def get_method(client, meth_name: str):
print(f'checking client for method \'{meth_name}\'...', end='', flush=True)
method = getattr(client, meth_name, None)
assert method
print_ok('found!.')
return method
async def run_method(client, meth_name: str, **kwargs):
method = get_method(client, meth_name)
print('running...', end='', flush=True)
result = await method(**kwargs)
print_ok(f'done! result: {type(result)}')
return result
async def run_test(broker_name: str):
brokermod = get_brokermod(broker_name)
total = 0
passed = 0
failed = 0
print('getting client...', end='', flush=True)
if not hasattr(brokermod, 'get_client'):
print_error('fail! no \'get_client\' context manager found.')
return
async with brokermod.get_client(is_brokercheck=True) as client:
print_ok('done! inside client context.')
# check for methods present on brokermod
method_list = [
'backfill_bars',
'get_client',
'trades_dialogue',
'open_history_client',
'open_symbol_search',
'stream_quotes',
]
for method in method_list:
print(
f'checking brokermod for method \'{method}\'...',
end='', flush=True)
if not hasattr(brokermod, method):
print_error(f'fail! method \'{method}\' not found.')
failed += 1
else:
print_ok('done!')
passed += 1
total += 1
# check for methods present on `brokermod.Client`; for these
# (private) methods we only check that they are present
method_list = [
'get_balances',
'get_assets',
'get_trades',
'get_xfers',
'submit_limit',
'submit_cancel',
'search_symbols',
]
for method_name in method_list:
try:
get_method(client, method_name)
passed += 1
except AssertionError:
print_error(f'fail! method \'{method_name}\' not found.')
failed += 1
total += 1
# run a handful of `Client` methods and sanity check their results
syms = await run_method(client, 'symbol_info')
total += 1
if len(syms) == 0:
raise BaseException('Empty Symbol list?')
passed += 1
first_sym = tuple(syms.keys())[0]
method_list = [
('cache_symbols', {}),
('search_symbols', {'pattern': first_sym[:-1]}),
('bars', {'symbol': first_sym})
]
for method_name, method_kwargs in method_list:
try:
await run_method(client, method_name, **method_kwargs)
passed += 1
except AssertionError:
print_error(f'fail! method \'{method_name}\' not found.')
failed += 1
total += 1
print(f'total: {total}, passed: {passed}, failed: {failed}')
@cli.command()
@click.argument('broker', nargs=1, required=True)
@click.pass_obj
def brokercheck(config, broker):
'''
Test broker apis for completeness.
'''
async def bcheck_main():
async with maybe_spawn_brokerd(broker) as portal:
await portal.run(run_test, broker)
await portal.cancel_actor()
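# NOTE: the `bcheck_main()` daemon-spawning path above appears to be
# unused; the check currently runs in-process via the call below.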
trio.run(run_test, broker)
@cli.command()
@click.option('--keys', '-k', multiple=True,
help='Return results only for these keys')
@click.argument('meth', nargs=1)
@click.argument('kwargs', nargs=-1)
@click.pass_obj
@ -88,7 +242,7 @@ def quote(config, tickers):
'''
# global opts
- brokermod = config['brokermods'][0]
+ brokermod = list(config['brokermods'].values())[0]
quotes = trio.run(partial(core.stocks_quote, brokermod, tickers))
if not quotes:
@ -115,7 +269,7 @@ def bars(config, symbol, count):
'''
# global opts
- brokermod = config['brokermods'][0]
+ brokermod = list(config['brokermods'].values())[0]
# broker backend should return at the least a
# list of candle dictionaries
@ -150,7 +304,7 @@ def record(config, rate, name, dhost, filename):
'''
# global opts
- brokermod = config['brokermods'][0]
+ brokermod = list(config['brokermods'].values())[0]
loglevel = config['loglevel']
log = config['log']
@ -215,7 +369,7 @@ def optsquote(config, symbol, date):
'''
# global opts
- brokermod = config['brokermods'][0]
+ brokermod = list(config['brokermods'].values())[0]
quotes = trio.run(
partial(
@ -232,58 +386,151 @@ def optsquote(config, symbol, date):
@cli.command()
@click.argument('tickers', nargs=-1, required=True)
@click.pass_obj
- def symbol_info(config, tickers):
+ def mkt_info(
+     config: dict,
+     tickers: list[str],
+ ):
    '''
    Print symbol quotes to the console
    '''
-     # global opts
-     brokermod = config['brokermods'][0]
-
-     quotes = trio.run(partial(core.symbol_info, brokermod, tickers))
-     if not quotes:
-         log.error(f"No quotes could be found for {tickers}?")
+     from msgspec.json import encode, decode
+     from ..accounting import MktPair
+     from ..service import (
+         open_piker_runtime,
+     )
+
+     # global opts
+     brokermods: dict[str, ModuleType] = config['brokermods']
+
+     mkts: list[MktPair] = []
+
+     async def main():
+         async with open_piker_runtime(
+             name='mkt_info_query',
+             # loglevel=loglevel,
+             debug_mode=True,
+         ) as (_, _):
+             for fqme in tickers:
+                 bs_fqme, _, broker = fqme.rpartition('.')
+                 brokermod: ModuleType = brokermods[broker]
+                 mkt, bs_pair = await core.mkt_info(
+                     brokermod,
+                     bs_fqme,
+                 )
+                 mkts.append((mkt, bs_pair))
+
+     trio.run(main)
+
+     if not mkts:
+         log.error(
+             f'No market info could be found for {tickers}'
+         )
        return

-     if len(quotes) < len(tickers):
-         syms = tuple(map(itemgetter('symbol'), quotes))
+     if len(mkts) < len(tickers):
+         syms = tuple(map(itemgetter('fqme'), mkts))
        for ticker in tickers:
            if ticker not in syms:
-                 brokermod.log.warn(f"Could not find symbol {ticker}?")
+                 log.warn(f"Could not find symbol {ticker}?")

-     click.echo(colorize_json(quotes))
+     # TODO: use ``rich.Table`` intead here!
+     for mkt, bs_pair in mkts:
+         click.echo(
+             '\n'
+             '----------------------------------------------------\n'
+             f'{type(bs_pair)}\n'
+             '----------------------------------------------------\n'
+             f'{colorize_json(bs_pair.to_dict())}\n'
+             '----------------------------------------------------\n'
+             f'as piker `MktPair` with fqme: {mkt.fqme}\n'
+             '----------------------------------------------------\n'
+             # NOTE: roundtrip to json codec for console print
+             f'{colorize_json(decode(encode(mkt)))}'
+         )

@cli.command()
@click.argument('pattern', required=True)
+ # TODO: move this to top level click/typer context for all subs
+ @click.option(
+     '--pdb',
+     is_flag=True,
+     help='Enable tractor debug mode',
+ )
@click.pass_obj
- def search(config, pattern):
+ def search(
+     config: dict,
+     pattern: str,
+     pdb: bool,
+ ):
    '''
    Search for symbols from broker backend(s).
    '''
    # global opts
-     brokermods = config['brokermods']
+     brokermods = list(config['brokermods'].values())

    # define tractor entrypoint
    async def main(func):

        async with maybe_open_pikerd(
            loglevel=config['loglevel'],
+             debug_mode=pdb,
        ):
            return await func()

-     quotes = trio.run(
-         main,
-         partial(
-             core.symbol_search,
-             brokermods,
-             pattern,
-         ),
-     )
+     from piker.toolz import open_crash_handler
+     with open_crash_handler():
+         quotes = trio.run(
+             main,
+             partial(
+                 core.symbol_search,
+                 brokermods,
+                 pattern,
+             ),
+         )

    if not quotes:
        log.error(f"No matches could be found for {pattern}?")
        return

    click.echo(colorize_json(quotes))
@cli.command()
@click.argument('section', required=False)
@click.argument('value', required=False)
@click.option('--delete', '-d', flag_value=True, help='Delete section')
@click.pass_obj
def brokercfg(config, section, value, delete):
'''
If invoked with no arguments, open an editor to edit broker
configs file or get / update an individual section.
'''
from .. import config
if section:
conf, path = config.load()
if not delete:
if value:
config.set_value(conf, section, value)
click.echo(
colorize_json(
config.get_value(conf, section))
)
else:
config.del_value(conf, section)
config.write(config=conf)
else:
conf, path = config.load(raw=True)
config.write(
raw=click.edit(text=conf)
)

View File

@ -26,13 +26,11 @@ from typing import List, Dict, Any, Optional
import trio

- from ..log import get_logger
+ from ._util import log
from . import get_brokermod
- from .._daemon import maybe_spawn_brokerd
- from .._cacheables import open_cached_client
+ from ..service import maybe_spawn_brokerd
+ from . import open_cached_client
+ from ..accounting import MktPair

- log = get_logger(__name__)

async def api(brokername: str, methname: str, **kwargs) -> dict:
@ -97,15 +95,15 @@ async def option_chain(
        return await client.option_chains(contracts)

- async def contracts(
-     brokermod: ModuleType,
-     symbol: str,
- ) -> Dict[str, Dict[str, Dict[str, Any]]]:
-     """Return option contracts (all expiries) for ``symbol``.
-     """
-     async with brokermod.get_client() as client:
-         # return await client.get_all_contracts([symbol])
-         return await client.get_all_contracts([symbol])
+ # async def contracts(
+ #     brokermod: ModuleType,
+ #     symbol: str,
+ # ) -> Dict[str, Dict[str, Dict[str, Any]]]:
+ #     """Return option contracts (all expiries) for ``symbol``.
+ #     """
+ #     async with brokermod.get_client() as client:
+ #         # return await client.get_all_contracts([symbol])
+ #         return await client.get_all_contracts([symbol])

async def bars(
@ -119,17 +117,6 @@ async def bars(
        return await client.bars(symbol, **kwargs)

- async def symbol_info(
-     brokermod: ModuleType,
-     symbol: str,
-     **kwargs,
- ) -> Dict[str, Dict[str, Dict[str, Any]]]:
-     """Return symbol info from broker.
-     """
-     async with brokermod.get_client() as client:
-         return await client.symbol_info(symbol, **kwargs)

async def search_w_brokerd(name: str, pattern: str) -> dict:
    async with open_cached_client(name) as client:
@ -158,7 +145,11 @@ async def symbol_search(
        async with maybe_spawn_brokerd(
            mod.name,
-             infect_asyncio=getattr(mod, '_infect_asyncio', False),
+             infect_asyncio=getattr(
+                 mod,
+                 '_infect_asyncio',
+                 False,
+             ),
        ) as portal:
            results.append((
@ -176,3 +167,20 @@ async def symbol_search(
            n.start_soon(search_backend, mod.name)
    return results
async def mkt_info(
brokermod: ModuleType,
fqme: str,
**kwargs,
) -> MktPair:
'''
Return MktPair info from broker including src and dst assets.
'''
async with open_cached_client(brokermod.name) as client:
assert client
return await brokermod.get_mkt_info(
fqme.replace(brokermod.name, '')
)

View File

@ -41,13 +41,13 @@ import tractor
from tractor.experimental import msgpub
from async_generator import asynccontextmanager

- from ..log import get_logger, get_console_log
+ from ._util import (
+     log,
+     get_console_log,
+ )
from . import get_brokermod

- log = get_logger(__name__)

async def wait_for_network(
    net_func: Callable,
    sleep: int = 1
@ -227,26 +227,28 @@ async def get_cached_feed(
@tractor.stream
async def start_quote_stream(
-     ctx: tractor.Context,  # marks this as a streaming func
+     stream: tractor.Context,  # marks this as a streaming func
    broker: str,
    symbols: List[Any],
    feed_type: str = 'stock',
    rate: int = 3,
) -> None:
-     """Handle per-broker quote stream subscriptions using a "lazy" pub-sub
+     '''
+     Handle per-broker quote stream subscriptions using a "lazy" pub-sub
    pattern.

    Spawns new quoter tasks for each broker backend on-demand.
    Since most brokers seems to support batch quote requests we
    limit to one task per process (for now).
-     """
+     '''
    # XXX: why do we need this again?
    get_console_log(tractor.current_actor().loglevel)

    # pull global vars from local actor
    symbols = list(symbols)
    log.info(
-         f"{ctx.chan.uid} subscribed to {broker} for symbols {symbols}")
+         f"{stream.chan.uid} subscribed to {broker} for symbols {symbols}")
    # another actor task may have already created it
    async with get_cached_feed(broker) as feed:
@ -290,13 +292,13 @@ async def start_quote_stream(
                        assert fquote['displayable']
                        payload[sym] = fquote

-                 await ctx.send_yield(payload)
+                 await stream.send_yield(payload)

        await stream_poll_requests(

            # ``trionics.msgpub`` required kwargs
            task_name=feed_type,
-             ctx=ctx,
+             ctx=stream,
            topics=symbols,
            packetizer=feed.mod.packetizer,
@ -319,9 +321,11 @@ async def call_client(

class DataFeed:
-     """Data feed client for streaming symbol data from and making API client calls
-     to a (remote) ``brokerd`` daemon.
-     """
+     '''
+     Data feed client for streaming symbol data from and making API
+     client calls to a (remote) ``brokerd`` daemon.
+     '''

    _allowed = ('stock', 'option')

    def __init__(self, portal, brokermod):

View File

@ -0,0 +1,70 @@
``deribit`` backend
-------------------
pretty good liquidity crypto derivatives exchange; uses a custom JSON-RPC
over websocket API for client methods and `cryptofeed` for data streams.
status
******
- supports option charts
- no order support yet
config
******
In order to get order mode support your ``brokers.toml``
needs to have something like the following:
.. code:: toml
[deribit]
key_id = 'XXXXXXXX'
key_secret = 'Xx_XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx'
To obtain an api id and secret you need to create an account, which can be a
real market account over at:
- deribit.com (requires KYC for deposit address)
Or a testnet account over at:
- test.deribit.com
For testnet once the account is created here is how you deposit fake crypto to
try it out:
1) Go to Wallet:
.. figure:: assets/0_wallet.png
:align: center
:target: assets/0_wallet.png
:alt: wallet page
2) Then click on the ellipsis menu and select deposit
.. figure:: assets/1_wallet_select_deposit.png
:align: center
:target: assets/1_wallet_select_deposit.png
:alt: wallet deposit page
3) This will take you to the deposit address page
.. figure:: assets/2_gen_deposit_addr.png
:align: center
:target: assets/2_gen_deposit_addr.png
:alt: generate deposit address page
4) After clicking generate you should see the address, copy it and go to the
`coin faucet <https://test.deribit.com/dericoin/BTC/deposit>`_ and send fake
coins to that address.
.. figure:: assets/3_deposit_address.png
:align: center
:target: assets/3_deposit_address.png
:alt: generated address
5) Back in the deposit address page you should see the deposit in your history
.. figure:: assets/4_wallet_deposit_history.png
:align: center
:target: assets/4_wallet_deposit_history.png
:alt: wallet deposit history

View File

@ -0,0 +1,65 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Deribit backend.
'''
from piker.log import get_logger
from .api import (
get_client,
)
from .feed import (
open_history_client,
open_symbol_search,
stream_quotes,
# backfill_bars,
)
# from .broker import (
# open_trade_dialog,
# norm_trade_records,
# )
log = get_logger(__name__)
__all__ = [
'get_client',
# 'trades_dialogue',
'open_history_client',
'open_symbol_search',
'stream_quotes',
# 'norm_trade_records',
]
# tractor RPC enable arg
__enable_modules__: list[str] = [
'api',
'feed',
# 'broker',
]
# passed to ``tractor.ActorNursery.start_actor()``
_spawn_kwargs = {
'infect_asyncio': True,
}
# annotation to let backend agnostic code
# know if ``brokerd`` should be spawned with
# ``tractor``'s aio mode.
_infect_asyncio: bool = True

View File

@ -0,0 +1,675 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Deribit backend.
'''
import asyncio
from contextlib import (
asynccontextmanager as acm,
)
from datetime import datetime
from functools import partial
import time
from typing import (
Any,
Optional,
Callable,
)
import pendulum
import trio
from trio_typing import TaskStatus
from rapidfuzz import process as fuzzy
import numpy as np
from tractor.trionics import (
broadcast_receiver,
maybe_open_context
)
from tractor import to_asyncio
# XXX WOOPS XD
# yeah you'll need to install it since it was removed in #489 by
# accident; well i thought we had removed all usage..
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT,
L1_BOOK, TRADES,
OPTION, CALL, PUT,
# NOTE: assumed also required by the order-feed relay further below
# which references `FILLS` and `ORDER_INFO`.
FILLS, ORDER_INFO,
)
from cryptofeed.symbols import Symbol
from piker.data import (
def_iohlcv_fields,
match_from_pairs,
Struct,
)
from piker.data._web_bs import (
open_jsonrpc_session
)
from piker import config
from piker.log import get_logger
log = get_logger(__name__)
_spawn_kwargs = {
'infect_asyncio': True,
}
_url = 'https://www.deribit.com'
_ws_url = 'wss://www.deribit.com/ws/api/v2'
_testnet_ws_url = 'wss://test.deribit.com/ws/api/v2'
class JSONRPCResult(Struct):
jsonrpc: str = '2.0'
id: int
result: Optional[list[dict]] = None
error: Optional[dict] = None
usIn: int
usOut: int
usDiff: int
testnet: bool
class JSONRPCChannel(Struct):
jsonrpc: str = '2.0'
method: str
params: dict
class KLinesResult(Struct):
close: list[float]
cost: list[float]
high: list[float]
low: list[float]
open: list[float]
status: str
ticks: list[int]
volume: list[float]
class Trade(Struct):
trade_seq: int
trade_id: str
timestamp: int
tick_direction: int
price: float
mark_price: float
iv: float
instrument_name: str
index_price: float
direction: str
combo_trade_id: Optional[int] = 0
combo_id: Optional[str] = ''
amount: float
class LastTradesResult(Struct):
trades: list[Trade]
has_more: bool
# convert datetime obj timestamp to unixtime in milliseconds
def deribit_timestamp(when):
return int((when.timestamp() * 1000) + (when.microsecond / 1000))
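# e.g. (illustrative) datetime(2023, 1, 1, tzinfo=UTC) -> 1672531200000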
def str_to_cb_sym(name: str) -> Symbol:
base, strike_price, expiry_date, option_type = name.split('-')
quote = base
if option_type == 'put':
option_type = PUT
elif option_type == 'call':
option_type = CALL
else:
raise Exception("Couldn\'t parse option type")
return Symbol(
base, quote,
type=OPTION,
strike_price=strike_price,
option_type=option_type,
expiry_date=expiry_date,
expiry_normalize=False)
def piker_sym_to_cb_sym(name: str) -> Symbol:
base, expiry_date, strike_price, option_type = tuple(
name.upper().split('-'))
quote = base
if option_type == 'P':
option_type = PUT
elif option_type == 'C':
option_type = CALL
else:
raise Exception("Couldn\'t parse option type")
return Symbol(
base, quote,
type=OPTION,
strike_price=strike_price,
option_type=option_type,
expiry_date=expiry_date.upper())
def cb_sym_to_deribit_inst(sym: Symbol):
# cryptofeed normalized
cb_norm = ['F', 'G', 'H', 'J', 'K', 'M', 'N', 'Q', 'U', 'V', 'X', 'Z']
# deribit specific
months = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP', 'OCT', 'NOV', 'DEC']
exp = sym.expiry_date
# YYMDD
# 01234
year, month, day = (
exp[:2], months[cb_norm.index(exp[2:3])], exp[3:])
otype = 'C' if sym.option_type == CALL else 'P'
return f'{sym.base}-{day}{month}{year}-{sym.strike_price}-{otype}'
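# e.g. (illustrative) a cryptofeed `Symbol` with base 'BTC', expiry_date
# '23H31' ('H' -> MAR), strike '50000' and option_type CALL maps to the
# deribit instrument name 'BTC-31MAR23-50000-C'.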
def get_config() -> dict[str, Any]:
conf, path = config.load()
section = conf.get('deribit')
# TODO: document why we send this, basically because logging params for cryptofeed
conf['log'] = {}
conf['log']['disabled'] = True
if section is None:
log.warning(f'No config section found for deribit in {path}')
return conf
class Client:
def __init__(self, json_rpc: Callable) -> None:
self._pairs: dict[str, Any] = None
config = get_config().get('deribit', {})
if ('key_id' in config) and ('key_secret' in config):
self._key_id = config['key_id']
self._key_secret = config['key_secret']
else:
self._key_id = None
self._key_secret = None
self.json_rpc = json_rpc
@property
def currencies(self):
return ['btc', 'eth', 'sol', 'usd']
async def get_balances(self, kind: str = 'option') -> dict[str, float]:
"""Return the set of positions for this account
by symbol.
"""
balances = {}
for currency in self.currencies:
resp = await self.json_rpc(
'private/get_positions', params={
'currency': currency.upper(),
'kind': kind})
balances[currency] = resp.result
return balances
async def get_assets(self) -> dict[str, float]:
"""Return the set of asset balances for this account
by symbol.
"""
balances = {}
for currency in self.currencies:
resp = await self.json_rpc(
'private/get_account_summary', params={
'currency': currency.upper()})
balances[currency] = resp.result['balance']
return balances
async def submit_limit(
self,
symbol: str,
price: float,
action: str,
size: float
) -> dict:
"""Place an order
"""
params = {
'instrument_name': symbol.upper(),
'amount': size,
'type': 'limit',
'price': price,
}
resp = await self.json_rpc(
f'private/{action}', params)
return resp.result
async def submit_cancel(self, oid: str):
"""Send cancel request for order id
"""
resp = await self.json_rpc(
'private/cancel', {'order_id': oid})
return resp.result
async def symbol_info(
self,
instrument: Optional[str] = None,
currency: str = 'btc', # BTC, ETH, SOL, USDC
kind: str = 'option',
expired: bool = False
) -> dict[str, dict]:
'''
Get symbol infos.
'''
if self._pairs:
return self._pairs
# will retrieve all symbols by default
params: dict[str, str] = {
'currency': currency.upper(),
'kind': kind,
'expired': str(expired).lower()
}
resp: JSONRPCResult = await self.json_rpc(
'public/get_instruments',
params,
)
# convert to symbol-keyed table
results: list[dict] | None = resp.result
instruments: dict[str, dict] = {
item['instrument_name'].lower(): item
for item in results
}
if instrument is not None:
return instruments[instrument]
else:
return instruments
async def cache_symbols(
self,
) -> dict:
if not self._pairs:
self._pairs = await self.symbol_info()
return self._pairs
async def search_symbols(
self,
pattern: str,
limit: int = 30,
) -> dict[str, Any]:
'''
Fuzzy search symbology set for pairs matching `pattern`.
'''
pairs: dict[str, Any] = await self.symbol_info()
matches: dict[str, Pair] = match_from_pairs(
pairs=pairs,
query=pattern.upper(),
score_cutoff=35,
limit=limit
)
# repack in name-keyed table
return {
pair['instrument_name'].lower(): pair
for pair in matches.values()
}
async def bars(
self,
symbol: str,
start_dt: Optional[datetime] = None,
end_dt: Optional[datetime] = None,
limit: int = 1000,
as_np: bool = True,
) -> dict:
instrument = symbol
if end_dt is None:
end_dt = pendulum.now('UTC')
if start_dt is None:
start_dt = end_dt.start_of(
'minute').subtract(minutes=limit)
start_time = deribit_timestamp(start_dt)
end_time = deribit_timestamp(end_dt)
# https://docs.deribit.com/#public-get_tradingview_chart_data
resp = await self.json_rpc(
'public/get_tradingview_chart_data',
params={
'instrument_name': instrument.upper(),
'start_timestamp': start_time,
'end_timestamp': end_time,
'resolution': '1'
})
result = KLinesResult(**resp.result)
new_bars = []
for i in range(len(result.close)):
_open = result.open[i]
high = result.high[i]
low = result.low[i]
close = result.close[i]
volume = result.volume[i]
row = [
(start_time + (i * (60 * 1000))) / 1000.0, # time
result.open[i],
result.high[i],
result.low[i],
result.close[i],
result.volume[i],
0
]
new_bars.append((i,) + tuple(row))
array = np.array(new_bars, dtype=def_iohlcv_fields) if as_np else new_bars
return array
async def last_trades(
self,
instrument: str,
count: int = 10
):
resp = await self.json_rpc(
'public/get_last_trades_by_instrument',
params={
'instrument_name': instrument,
'count': count
})
return LastTradesResult(**resp.result)
@acm
async def get_client(
is_brokercheck: bool = False
) -> Client:
async with (
trio.open_nursery() as n,
open_jsonrpc_session(
_testnet_ws_url, dtype=JSONRPCResult) as json_rpc
):
client = Client(json_rpc)
_refresh_token: Optional[str] = None
_access_token: Optional[str] = None
async def _auth_loop(
task_status: TaskStatus = trio.TASK_STATUS_IGNORED
):
"""Background task that adquires a first access token and then will
refresh the access token while the nursery isn't cancelled.
https://docs.deribit.com/?python#authentication-2
"""
renew_time = 10
access_scope = 'trade:read_write'
_expiry_time = time.time()
got_access = False
nonlocal _refresh_token
nonlocal _access_token
while True:
if time.time() - _expiry_time < renew_time:
# if we are close to token expiry time
if _refresh_token != None:
# if we have a refresh token already dont need to send
# secret
params = {
'grant_type': 'refresh_token',
'refresh_token': _refresh_token,
'scope': access_scope
}
else:
# we don't have refresh token, send secret to initialize
params = {
'grant_type': 'client_credentials',
'client_id': client._key_id,
'client_secret': client._key_secret,
'scope': access_scope
}
resp = await json_rpc('public/auth', params)
result = resp.result
_expiry_time = time.time() + result['expires_in']
_refresh_token = result['refresh_token']
if 'access_token' in result:
_access_token = result['access_token']
if not got_access:
# first time this loop runs we must indicate task is
# started, we have auth
got_access = True
task_status.started()
else:
await trio.sleep(renew_time / 2)
# if we have client creds launch auth loop
if client._key_id is not None:
await n.start(_auth_loop)
await client.cache_symbols()
yield client
n.cancel_scope.cancel()
@acm
async def open_feed_handler():
fh = FeedHandler(config=get_config())
yield fh
await to_asyncio.run_task(fh.stop_async)
@acm
async def maybe_open_feed_handler() -> trio.abc.ReceiveStream:
async with maybe_open_context(
acm_func=open_feed_handler,
key='feedhandler',
) as (cache_hit, fh):
yield fh
async def aio_price_feed_relay(
fh: FeedHandler,
instrument: Symbol,
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
) -> None:
async def _trade(data: dict, receipt_timestamp):
to_trio.send_nowait(('trade', {
'symbol': cb_sym_to_deribit_inst(
str_to_cb_sym(data.symbol)).lower(),
'last': data,
'broker_ts': time.time(),
'data': data.to_dict(),
'receipt': receipt_timestamp
}))
async def _l1(data: dict, receipt_timestamp):
to_trio.send_nowait(('l1', {
'symbol': cb_sym_to_deribit_inst(
str_to_cb_sym(data.symbol)).lower(),
'ticks': [
{'type': 'bid',
'price': float(data.bid_price), 'size': float(data.bid_size)},
{'type': 'bsize',
'price': float(data.bid_price), 'size': float(data.bid_size)},
{'type': 'ask',
'price': float(data.ask_price), 'size': float(data.ask_size)},
{'type': 'asize',
'price': float(data.ask_price), 'size': float(data.ask_size)}
]
}))
fh.add_feed(
DERIBIT,
channels=[TRADES, L1_BOOK],
symbols=[piker_sym_to_cb_sym(instrument)],
callbacks={
TRADES: _trade,
L1_BOOK: _l1
})
if not fh.running:
fh.run(
start_loop=False,
install_signal_handlers=False)
# sync with trio
to_trio.send_nowait(None)
await asyncio.sleep(float('inf'))
@acm
async def open_price_feed(
instrument: str
) -> trio.abc.ReceiveStream:
async with maybe_open_feed_handler() as fh:
async with to_asyncio.open_channel_from(
partial(
aio_price_feed_relay,
fh,
instrument
)
) as (first, chan):
yield chan
@acm
async def maybe_open_price_feed(
instrument: str
) -> trio.abc.ReceiveStream:
# TODO: add a predicate to maybe_open_context
async with maybe_open_context(
acm_func=open_price_feed,
kwargs={
'instrument': instrument
},
key=f'{instrument}-price',
) as (cache_hit, feed):
if cache_hit:
yield broadcast_receiver(feed, 10)
else:
yield feed
async def aio_order_feed_relay(
fh: FeedHandler,
instrument: Symbol,
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
) -> None:
async def _fill(data: dict, receipt_timestamp):
breakpoint()
async def _order_info(data: dict, receipt_timestamp):
breakpoint()
fh.add_feed(
DERIBIT,
channels=[FILLS, ORDER_INFO],
symbols=[instrument.upper()],
callbacks={
FILLS: _fill,
ORDER_INFO: _order_info,
})
if not fh.running:
fh.run(
start_loop=False,
install_signal_handlers=False)
# sync with trio
to_trio.send_nowait(None)
await asyncio.sleep(float('inf'))
@acm
async def open_order_feed(
instrument: list[str]
) -> trio.abc.ReceiveStream:
async with maybe_open_feed_handler() as fh:
async with to_asyncio.open_channel_from(
partial(
aio_order_feed_relay,
fh,
instrument
)
) as (first, chan):
yield chan
@acm
async def maybe_open_order_feed(
instrument: str
) -> trio.abc.ReceiveStream:
# TODO: add a predicate to maybe_open_context
async with maybe_open_context(
acm_func=open_order_feed,
kwargs={
'instrument': instrument,
},
key=f'{instrument}-order',
) as (cache_hit, feed):
if cache_hit:
yield broadcast_receiver(feed, 10)
else:
yield feed

(five binary image assets, the deribit README wallet/deposit screenshots referenced above, were added in this range; sizes 169, 106, 59, 70 and 132 KiB, not shown.)

View File

@ -0,0 +1,185 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Deribit backend.
'''
from contextlib import asynccontextmanager as acm
from datetime import datetime
from typing import Any, Optional, Callable
import time
import trio
from trio_typing import TaskStatus
import pendulum
from rapidfuzz import process as fuzzy
import numpy as np
import tractor
from piker.brokers import open_cached_client
from piker.log import get_logger, get_console_log
from piker.data import ShmArray
from piker.accounting import MktPair
from piker.brokers._util import (
BrokerError,
DataUnavailable,
)
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT, L1_BOOK, TRADES, OPTION, CALL, PUT
)
from cryptofeed.symbols import Symbol
from .api import (
Client, Trade,
get_config,
str_to_cb_sym, piker_sym_to_cb_sym, cb_sym_to_deribit_inst,
maybe_open_price_feed
)
_spawn_kwargs = {
'infect_asyncio': True,
}
log = get_logger(__name__)
@acm
async def open_history_client(
mkt: MktPair,
) -> tuple[Callable, int]:
instrument: str = mkt.bs_fqme
# TODO implement history getter for the new storage layer.
async with open_cached_client('deribit') as client:
async def get_ohlc(
end_dt: Optional[datetime] = None,
start_dt: Optional[datetime] = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
array = await client.bars(
instrument,
start_dt=start_dt,
end_dt=end_dt,
)
if len(array) == 0:
raise DataUnavailable
start_dt = pendulum.from_timestamp(array[0]['time'])
end_dt = pendulum.from_timestamp(array[-1]['time'])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3}
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str = None,
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
# XXX: required to propagate ``tractor`` loglevel to piker logging
get_console_log(loglevel or tractor.current_actor().loglevel)
sym = symbols[0]
async with (
open_cached_client('deribit') as client,
send_chan as send_chan
):
init_msgs = {
# pass back token, and bool, signalling if we're the writer
# and that history has been written
sym: {
'symbol_info': {
'asset_type': 'option',
'price_tick_size': 0.0005
},
'shm_write_opts': {'sum_tick_vml': False},
'fqsn': sym,
},
}
nsym = piker_sym_to_cb_sym(sym)
async with maybe_open_price_feed(sym) as stream:
cache = await client.cache_symbols()
last_trades = (await client.last_trades(
cb_sym_to_deribit_inst(nsym), count=1)).trades
if len(last_trades) == 0:
last_trade = None
async for typ, quote in stream:
if typ == 'trade':
last_trade = Trade(**(quote['data']))
break
else:
last_trade = Trade(**(last_trades[0]))
first_quote = {
'symbol': sym,
'last': last_trade.price,
'brokerd_ts': last_trade.timestamp,
'ticks': [{
'type': 'trade',
'price': last_trade.price,
'size': last_trade.amount,
'broker_ts': last_trade.timestamp
}]
}
task_status.started((init_msgs, first_quote))
feed_is_live.set()
async for typ, quote in stream:
topic = quote['symbol']
await send_chan.send({topic: quote})
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> Client:
async with open_cached_client('deribit') as client:
# load all symbols locally for fast search
cache = await client.cache_symbols()
await ctx.started()
async with ctx.open_stream() as stream:
async for pattern in stream:
# repack in dict form
await stream.send(
await client.search_symbols(pattern))

View File

@ -0,0 +1,134 @@
``ib`` backend
--------------
more or less the "everything broker" for traditional and international
markets. they are the "go to" provider for automatic retail trading
and we interface to their APIs using the `ib_insync` project.
status
******
current support is *production grade* and both real-time data and order
management should be correct and fast. this backend is used by core devs
for live trading.
currently there is not yet full support for:
- options charting and trading
- paxos based crypto rt feeds and trading
config
******
In order to get order mode support your ``brokers.toml``
needs to have something like the following:
.. code:: toml
[ib]
hosts = [
"127.0.0.1",
]
# TODO: when we eventually spawn gateways in our
# container, we can just dynamically allocate these
# using IBC.
ports = [
4002,
4003,
4006,
4001,
7497,
]
# XXX: for a paper account the flex web query service
# is not supported so you have to manually download
# and XML report and put it in a location that can be
# accessed by the ``brokerd.ib`` backend code for parsing.
flex_token = '1111111111111111'
flex_trades_query_id = '6969696' # live accounts only?
# 3rd party web-api token
# (XXX: not sure if this works yet)
trade_log_token = '111111111111111'
# when clients are being scanned this determines
# which clients are preferred to be used for data feeds
# based on account names which are detected as active
# on each client.
prefer_data_account = [
# this has to be first in order to make data work with dual paper + live
'main',
'algopaper',
]
[ib.accounts]
main = 'U69696969'
algopaper = 'DU9696969'
If everything works correctly you should see any current positions
loaded in the pps pane on chart load and you should also be able to
check your trade records in the file::
<pikerk_conf_dir>/ledgers/trades_ib_algopaper.toml
An example ledger file will have entries written verbatim from the
trade events schema:
.. code:: toml
["0000e1a7.630f5e5a.01.01"]
secType = "FUT"
conId = 515416577
symbol = "MNQ"
lastTradeDateOrContractMonth = "20221216"
strike = 0.0
right = ""
multiplier = "2"
exchange = "GLOBEX"
primaryExchange = ""
currency = "USD"
localSymbol = "MNQZ2"
tradingClass = "MNQ"
includeExpired = false
secIdType = ""
secId = ""
comboLegsDescrip = ""
comboLegs = []
execId = "0000e1a7.630f5e5a.01.01"
time = 1661972086.0
acctNumber = "DU69696969"
side = "BOT"
shares = 1.0
price = 12372.75
permId = 441472655
clientId = 6116
orderId = 985
liquidation = 0
cumQty = 1.0
avgPrice = 12372.75
orderRef = ""
evRule = ""
evMultiplier = 0.0
modelCode = ""
lastLiquidity = 1
broker_time = 1661972086.0
name = "ib"
commission = 0.57
realizedPNL = 243.41
yield_ = 0.0
yieldRedemptionDate = 0
listingExchange = "GLOBEX"
date = "2022-08-31T18:54:46+00:00"
your ``pps.toml`` file will have position entries like,
.. code:: toml
[ib.algopaper."mnq.globex.20221216"]
size = -1.0
ppu = 12423.630576923071
bs_mktid = 515416577
expiry = "2022-12-16T00:00:00+00:00"
clears = [
{ dt = "2022-08-31T18:54:46+00:00", ppu = 12423.630576923071, accum_size = -19.0, price = 12372.75, size = 1.0, cost = 0.57, tid = "0000e1a7.630f5e5a.01.01" },
]

View File

@ -20,41 +20,62 @@ Interactive Brokers API backend.
Sub-modules within break into the core functionalities:

- ``broker.py`` part for orders / trading endpoints
- - ``data.py`` for real-time data feed endpoints
- - ``api.py`` for the core API machinery which is ``trio``-ized
+ - ``feed.py`` for real-time data feed endpoints
+ - ``client.py`` for the core API machinery which is ``trio``-ized
  wrapping around ``ib_insync``.
- - ``report.py`` for the hackery to build manual pp calcs
-   to avoid ib's absolute bullshit FIFO style position
-   tracking..

"""
from .api import (
    get_client,
)
from .feed import (
    open_history_client,
-     open_symbol_search,
    stream_quotes,
)
- from .broker import trades_dialogue
+ from .broker import (
+     open_trade_dialog,
+ )
+ from .ledger import (
+     norm_trade,
+     norm_trade_records,
+     tx_sort,
+ )
+ from .symbols import (
+     get_mkt_info,
+     open_symbol_search,
+     _search_conf,
+ )

__all__ = [
    'get_client',
-     'trades_dialogue',
+     'get_mkt_info',
+     'norm_trade',
+     'norm_trade_records',
+     'open_trade_dialog',
    'open_history_client',
    'open_symbol_search',
    'stream_quotes',
+     '_search_conf',
+     'tx_sort',
+ ]
+
+ _brokerd_mods: list[str] = [
+     'api',
+     'broker',
+ ]
+
+ _datad_mods: list[str] = [
+     'feed',
+     'symbols',
]

# tractor RPC enable arg
- __enable_modules__: list[str] = [
-     'api',
-     'feed',
-     'broker',
- ]
+ __enable_modules__: list[str] = (
+     _brokerd_mods
+     +
+     _datad_mods
+ )

# passed to ``tractor.ActorNursery.start_actor()``
_spawn_kwargs = {
@ -65,3 +86,8 @@ _spawn_kwargs = {
# know if ``brokerd`` should be spawned with
# ``tractor``'s aio mode.
_infect_asyncio: bool = True
+
+ # XXX NOTE: for now we disable symcache with this backend since
+ # there is no clearly simple nor practical way to download "all
+ # symbology info" for all supported venues..
+ _no_symcache: bool = True

View File

@ -0,0 +1,195 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
"FLEX" report processing utils.
"""
from bidict import bidict
import pendulum
from pprint import pformat
from typing import Any
from .api import (
get_config,
log,
)
from piker.accounting import (
open_trade_ledger,
)
def parse_flex_dt(
record: str,
) -> pendulum.datetime:
'''
Parse stupid flex record datetime stamps for the `dateTime` field..
'''
date, ts = record.split(';')
dt = pendulum.parse(date)
ts = f'{ts[:2]}:{ts[2:4]}:{ts[4:]}'
tsdt = pendulum.parse(ts)
return dt.set(hour=tsdt.hour, minute=tsdt.minute, second=tsdt.second)
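# e.g. (illustrative) a flex stamp '20220831;185446' parses to the
# datetime 2022-08-31 18:54:46.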
def flex_records_to_ledger_entries(
accounts: bidict,
trade_entries: list[object],
) -> dict:
'''
Convert flex report entry objects into ``dict`` form, pretty much
straight up without modification except add a `pydatetime` field
from the parsed timestamp.
'''
trades_by_account = {}
for t in trade_entries:
entry = t.__dict__
# XXX: LOL apparently ``toml`` has a bug
# where a section key error will show up in the write
# if you leave a table key as an `int`? So i guess
# cast to strs for all keys..
# oddly for some so-called "BookTrade" entries
# this field seems to be blank, no cuckin clue.
# trade['ibExecID']
tid = str(entry.get('ibExecID') or entry['tradeID'])
# date = str(entry['tradeDate'])
# XXX: is it going to cause problems if a account name
# get's lost? The user should be able to find it based
# on the actual exec history right?
acctid = accounts[str(entry['accountId'])]
# probably a flex record with a wonky non-std timestamp..
dt = entry['pydatetime'] = parse_flex_dt(entry['dateTime'])
entry['datetime'] = str(dt)
if not tid:
# this is likely some kind of internal adjustment
# transaction, likely one of the following:
# - an expiry event that will show a "book trade" indicating
# some adjustment to cash balances: zeroing or itm settle.
# - a manual cash balance position adjustment likely done by
# the user from the accounts window in TWS where they can
# manually set the avg price and size:
# https://api.ibkr.com/lib/cstools/faq/web1/index.html#/tag/DTWS_ADJ_AVG_COST
log.warning(f'Skipping ID-less ledger entry:\n{pformat(entry)}')
continue
trades_by_account.setdefault(
acctid, {}
)[tid] = entry
for acctid in trades_by_account:
trades_by_account[acctid] = dict(sorted(
trades_by_account[acctid].items(),
key=lambda entry: entry[1]['pydatetime'],
))
return trades_by_account
def load_flex_trades(
path: str | None = None,
) -> dict[str, Any]:
from ib_insync import flexreport, util
conf = get_config()
if not path:
# load ``brokers.toml`` and try to get the flex
# token and query id that must be previously defined
# by the user.
token = conf.get('flex_token')
if not token:
raise ValueError(
'You must specify a ``flex_token`` field in your '
'`brokers.toml` in order to load your trade log, see our '
'instructions for how to set this up here:\n'
'PUT LINK HERE!'
)
qid = conf['flex_trades_query_id']
# TODO: hack this into our logging
# system like we do with the API client..
util.logToConsole()
# TODO: rewrite the query part of this with async..httpx?
report = flexreport.FlexReport(
token=token,
queryId=qid,
)
else:
# XXX: another project we could potentially look at,
# https://pypi.org/project/ibflex/
report = flexreport.FlexReport(path=path)
trade_entries = report.extract('Trade')
ln = len(trade_entries)
log.info(f'Loaded {ln} trades from flex query')
trades_by_account = flex_records_to_ledger_entries(
conf['accounts'].inverse, # reverse map to user account names
trade_entries,
)
ledger_dict: dict | None = None
for acctid in trades_by_account:
trades_by_id = trades_by_account[acctid]
with open_trade_ledger(
'ib',
acctid,
allow_from_sync_code=True,
) as ledger_dict:
tid_delta = set(trades_by_id) - set(ledger_dict)
log.info(
'New trades detected\n'
f'{pformat(tid_delta)}'
)
if tid_delta:
sorted_delta = dict(sorted(
{tid: trades_by_id[tid] for tid in tid_delta}.items(),
key=lambda entry: entry[1].pop('pydatetime'),
))
ledger_dict.update(sorted_delta)
return ledger_dict
if __name__ == '__main__':
import sys
import os
args = sys.argv
if len(args) > 1:
args = args[1:]
for arg in args:
path = os.path.abspath(arg)
load_flex_trades(path=path)
else:
# expect brokers.toml to have an entry and
# pull from the web service.
load_flex_trades()


@@ -0,0 +1,269 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
``ib`` utilities and hacks suitable for use in the backend and/or as
runnable script-programs.
'''
from __future__ import annotations
from functools import partial
from typing import (
Literal,
TYPE_CHECKING,
)
import subprocess
import tractor
from piker.brokers._util import get_logger
if TYPE_CHECKING:
from .api import Client
from ib_insync import IB
log = get_logger('piker.brokers.ib')
_reset_tech: Literal[
'vnc',
'i3ipc_xdotool',
# TODO: in theory we can use a different linux DE API or
# some other type of similar window scanning/mgmt client
# (on other OSs) to do the same.
] = 'vnc'
async def data_reset_hack(
# vnc_host: str,
client: Client,
reset_type: Literal['data', 'connection'],
) -> None:
'''
Run key combos for resetting data feeds and yield back to caller
when complete.
NOTE: this is a linux-only hack around!
There are multiple "techs" you can use depending on your infra setup:
- if running ib-gw in a container with a VNC server running the most
performant method is the `'vnc'` option.
- if running ib-gw/tws locally, and you are using `i3` you can use
the ``i3ipc`` lib and ``xdotool`` to send the appropriate click
and key-combos automatically to your local desktop's java X-apps.
https://interactivebrokers.github.io/tws-api/historical_limitations.html#pacing_violations
TODOs:
- a return type that hopefully determines if the hack was
successful.
- other OS support?
- integration with ``ib-gw`` run in docker + Xorg?
- is it possible to offer a local server that can be accessed by
a client? Would sure be handy for running native java blobs
that need to be wrangled.
'''
ib_client: IB = client.ib
# look up any user defined vnc socket address mapped from
# a particular API socket port.
api_port: str = str(ib_client.client.port)
vnc_host: str
vnc_port: int
vnc_sockaddr: tuple[str] | None = client.conf.get('vnc_addrs')
no_setup_msg:str = (
f'No data reset hack test setup for {vnc_sockaddr}!\n'
'See config setup tips @\n'
'https://github.com/pikers/piker/tree/master/piker/brokers/ib'
)
if not vnc_sockaddr:
log.warning(
no_setup_msg
+
'REQUIRES A `vnc_addrs: array` ENTRY'
)
vnc_host, vnc_port = vnc_sockaddr.get(
api_port,
('localhost', 3003)
)
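# e.g. a `brokers.toml` sketch for the above lookup (the exact table
# syntax here is an assumption; only the `vnc_addrs` key and the
# api-port -> (host, port) mapping are implied by this code):
#
#   [ib.vnc_addrs]
#   "4002" = ["localhost", 3003]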
global _reset_tech
match _reset_tech:
case 'vnc':
try:
await tractor.to_asyncio.run_task(
partial(
vnc_click_hack,
host=vnc_host,
port=vnc_port,
)
)
except OSError:
if vnc_host != 'localhost':
log.warning(no_setup_msg)
return False
try:
import i3ipc # noqa (since a deps dynamic check)
except ModuleNotFoundError:
log.warning(no_setup_msg)
return False
try:
i3ipc_xdotool_manual_click_hack()
_reset_tech = 'i3ipc_xdotool'
return True
except OSError:
log.exception(no_setup_msg)
return False
case 'i3ipc_xdotool':
i3ipc_xdotool_manual_click_hack()
case _ as tech:
raise RuntimeError(f'{tech} is not supported for reset tech!?')
# we don't really need the ``xdotool`` approach any more B)
return True
async def vnc_click_hack(
host: str,
port: int,
reset_type: str = 'data'
) -> None:
'''
Reset the data or network connection for the VNC attached
ib gateway using magic combos.
'''
try:
import asyncvnc
except ModuleNotFoundError:
log.warning(
"In order to leverage `piker`'s built-in data reset hacks, install "
"the `asyncvnc` project: https://github.com/barneygale/asyncvnc"
)
return
# two different hot keys which trigger diff types of reset
# requests B)
key = {
'data': 'f',
'connection': 'r'
}[reset_type]
async with asyncvnc.connect(
host,
port=port,
# TODO: doesn't work see:
# https://github.com/barneygale/asyncvnc/issues/7
# password='ibcansmbz',
) as client:
# move to middle of screen
# 640x1800
client.mouse.move(
x=500,
y=500,
)
client.mouse.click()
client.keyboard.press('Ctrl', 'Alt', key) # keys are stacked
def i3ipc_xdotool_manual_click_hack() -> None:
'''
Do the data reset hack but expecting a local X-window using `xdotool`.
'''
import i3ipc
i3 = i3ipc.Connection()
# TODO: might be worth offering some kinda api for grabbing
# the window id from the pid?
# https://stackoverflow.com/a/2250879
t = i3.get_tree()
orig_win_id = t.find_focused().window
# for tws
win_names: list[str] = [
'Interactive Brokers', # tws running in i3
'IB Gateway', # gw running in i3
# 'IB', # gw running in i3 (newer version?)
]
try:
for name in win_names:
results = t.find_titled(name)
print(f'results for {name}: {results}')
if results:
con = results[0]
print(f'Resetting data feed for {name}')
win_id = str(con.window)
w, h = con.rect.width, con.rect.height
# TODO: seems to be a few libs for python but not sure
# if they support all the sub commands we need, order of
# most recent commit history:
# https://github.com/rr-/pyxdotool
# https://github.com/ShaneHutter/pyxdotool
# https://github.com/cphyc/pyxdotool
# TODO: only run the reconnect (2nd) kc on a detected
# disconnect?
for key_combo, timeout in [
# only required if we need a connection reset.
# ('ctrl+alt+r', 12),
# data feed reset.
('ctrl+alt+f', 6)
]:
subprocess.call([
'xdotool',
'windowactivate', '--sync', win_id,
# move mouse to bottom left of window (where
# there should be nothing to click).
'mousemove_relative', '--sync', str(w-4), str(h-4),
# NOTE: we may need to stick a `--retry 3` in here..
'click', '--window', win_id,
'--repeat', '3', '1',
# hackzorzes
'key', key_combo,
],
timeout=timeout,
)
# re-activate and focus original window
subprocess.call([
'xdotool',
'windowactivate', '--sync', str(orig_win_id),
'click', '--window', str(orig_win_id), '1',
])
except subprocess.TimeoutExpired:
log.exception('xdotool timed out?')

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,529 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Trade transaction accounting and normalization.
'''
from __future__ import annotations
from bisect import insort
from dataclasses import asdict
from decimal import Decimal
from functools import partial
from pprint import pformat
from typing import (
Any,
Callable,
TYPE_CHECKING,
)
from bidict import bidict
from pendulum import (
DateTime,
parse,
from_timestamp,
)
from ib_insync import (
Contract,
Commodity,
Fill,
Execution,
CommissionReport,
)
from piker.types import Struct
from piker.data import (
SymbologyCache,
)
from piker.accounting import (
Asset,
dec_digits,
digits_to_dec,
Transaction,
MktPair,
iter_by_dt,
)
from ._flex_reports import parse_flex_dt
from ._util import log
if TYPE_CHECKING:
from .api import (
Client,
MethodProxy,
)
tx_sort: Callable = partial(
iter_by_dt,
parsers={
'dateTime': parse_flex_dt,
'datetime': parse,
# XXX: for some fucking 2022 and
# back options records.. f@#$ me..
'date': parse,
}
)
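# Usage sketch (assuming `iter_by_dt()` takes the records as its first
# positional arg): `tx_sort(ledger_entries)` yields entries oldest-first
# using whichever of the above stamp fields (flex vs. API) each record
# happens to carry.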
def norm_trade(
tid: str,
record: dict[str, Any],
# this is the dict that was returned from
# `Client.get_mkt_pairs()` and when running offline ledger
# processing from `.accounting`, this will be the table loaded
# into `SymbologyCache.pairs`.
pairs: dict[str, Struct],
symcache: SymbologyCache | None = None,
) -> Transaction | None:
conid: int = str(record.get('conId') or record['conid'])
bs_mktid: str = str(conid)
# NOTE: sometimes weird records (like BTTX?)
# have no field for this?
comms: float = -1 * (
record.get('commission')
or record.get('ibCommission')
or 0
)
if not comms:
log.warning(
'No commissions found for record?\n'
f'{pformat(record)}\n'
)
price: float = (
record.get('price')
or record.get('tradePrice')
)
if price is None:
log.warning(
'No `price` field found in record?\n'
'Skipping normalization..\n'
f'{pformat(record)}\n'
)
return None
# the api doesn't do the -/+ on the quantity for you but flex
# records do.. are you fucking serious ib...!?
size: float|int = (
record.get('quantity')
or record['shares']
) * {
'BOT': 1,
'SLD': -1,
}[record['side']]
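# e.g. (illustration only) a record with side='SLD' and quantity=3
# normalizes to size == -3, matching the sign flex records already carry.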
symbol: str = record['symbol']
exch: str = (
record.get('listingExchange')
or record.get('primaryExchange')
or record['exchange']
)
# NOTE: remove null values since `tomlkit` can't serialize
# them to file.
if dnc := record.pop('deltaNeutralContract', None):
record['deltaNeutralContract'] = dnc
# likely an opts contract record from a flex report..
# TODO: no idea how to parse ^ the strike part from flex..
# (00010000 any, or 00007500 tsla, ..)
# we probably must do the contract lookup for this?
if (
' ' in symbol
or '--' in exch
):
underlying, _, tail = symbol.partition(' ')
exch: str = 'opt'
expiry: str = tail[:6]
# otype = tail[6]
# strike = tail[7:]
log.warning(
f'Skipping option contract -> NO SUPPORT YET!\n'
f'{symbol}\n'
)
return None
# timestamping is way different in API records
dtstr: str = record.get('datetime')
date: str = record.get('date')
flex_dtstr: str = record.get('dateTime')
if dtstr or date:
dt: DateTime = parse(dtstr or date)
elif flex_dtstr:
# probably a flex record with a wonky non-std timestamp..
dt: DateTime = parse_flex_dt(record['dateTime'])
# special handling of symbol extraction from
# flex records using some ad-hoc schema parsing.
asset_type: str = (
record.get('assetCategory')
or record.get('secType')
or 'STK'
)
if (expiry := (
record.get('lastTradeDateOrContractMonth')
or record.get('expiry')
)
):
expiry: str = str(expiry).strip(' ')
# NOTE: we directly use the (simple and usually short)
# date-string expiry token when packing the `MktPair`
# since we want the fqme to contain *that* token.
# It might make sense later to instead parse and then
# render different output str format(s) for this same
# purpose depending on asset-type-market down the road.
# Eg. for derivs we use the short token only for fqme
# but use the isoformat('T') for transactions and
# account file position entries?
# dt_str: str = pendulum.parse(expiry).isoformat('T')
# XXX: pretty much all legacy market assets have a fiat
# currency (denomination) determined by their venue.
currency: str = record['currency']
src = Asset(
name=currency.lower(),
atype='fiat',
tx_tick=Decimal('0.01'),
)
match asset_type:
case 'FUT':
# XXX (flex) ledger entries don't necessarily have any
# simple 3-char key.. sometimes the .symbol is some
# weird internal key that we probably don't want in the
# .fqme => we should probably just wrap `Contract` to
# this like we do other crypto$ backends XD
# NOTE: at least older FLEX records should have
# this field.. no idea about API entries..
local_symbol: str | None = record.get('localSymbol')
underlying_key: str = record.get('underlyingSymbol')
descr: str | None = record.get('description')
if (
not (
local_symbol
and symbol in local_symbol
)
and (
descr
and symbol not in descr
)
):
con_key, exp_str = descr.split(' ')
symbol: str = underlying_key or con_key
dst = Asset(
name=symbol.lower(),
atype='future',
tx_tick=Decimal('1'),
)
case 'STK':
dst = Asset(
name=symbol.lower(),
atype='stock',
tx_tick=Decimal('1'),
)
case 'CASH':
if currency not in symbol:
# likely a dict-casted `Forex` contract which
# has .symbol as the dst and .currency as the
# src.
name: str = symbol.lower()
else:
# likely a flex-report record which puts
# EUR.USD as the symbol field and just USD in
# the currency field.
name: str = symbol.lower().replace(f'.{src.name}', '')
dst = Asset(
name=name,
atype='fiat',
tx_tick=Decimal('0.01'),
)
case 'OPT':
dst = Asset(
name=symbol.lower(),
atype='option',
tx_tick=Decimal('1'),
# TODO: we should probably always cast to the
# `Contract` instance then dict-serialize that for
# the `.info` field!
# info=asdict(Option()),
)
case 'CMDTY':
from .symbols import _adhoc_symbol_map
con_kwargs, _ = _adhoc_symbol_map[symbol.upper()]
dst = Asset(
name=symbol.lower(),
atype='commodity',
tx_tick=Decimal('1'),
info=asdict(Commodity(**con_kwargs)),
)
# try to build out piker fqme from record.
# src: str = record['currency']
price_tick: Decimal = digits_to_dec(dec_digits(price))
# NOTE: can't serlialize `tomlkit.String` so cast to native
atype: str = str(dst.atype)
# if not (mkt := symcache.mktmaps.get(bs_mktid)):
mkt = MktPair(
bs_mktid=bs_mktid,
dst=dst,
price_tick=price_tick,
# NOTE: for "legacy" assets, volume is normally discreet, not
# a float, but we keep a digit in case the suitz decide
# to get crazy and change it; we'll be kinda ready
# schema-wise..
size_tick=Decimal('1'),
src=src, # XXX: normally always a fiat
_atype=atype,
venue=exch,
expiry=expiry,
broker='ib',
_fqme_without_src=(atype != 'fiat'),
)
fqme: str = mkt.fqme
# XXX: if passed in, we fill out the symcache ad-hoc in order
# to make downstream accounting work..
if symcache is not None:
orig_mkt: MktPair | None = symcache.mktmaps.get(bs_mktid)
if (
orig_mkt
and orig_mkt.fqme != mkt.fqme
):
log.warning(
# print(
f'Contracts with common `conId`: {bs_mktid} mismatch..\n'
f'{orig_mkt.fqme} -> {mkt.fqme}\n'
# 'with DIFF:\n'
# f'{mkt - orig_mkt}'
)
symcache.mktmaps[bs_mktid] = mkt
symcache.mktmaps[fqme] = mkt
symcache.assets[src.name] = src
symcache.assets[dst.name] = dst
# NOTE: for flex records the normal fields for defining an fqme
# sometimes won't be available so we rely on two approaches for
# the "reverse lookup" of piker style fqme keys:
# - when dealing with API trade records received from
# `IB.trades()` we do a contract lookup at the time of processing
# - when dealing with flex records, it is assumed the record
# is at least a day old and thus the TWS position reporting system
# should already have entries if the pps are still open, in
# which case, we can pull the fqme from that table (see
# `trades_dialogue()` above).
return Transaction(
fqme=fqme,
tid=tid,
size=size,
price=price,
cost=comms,
dt=dt,
expiry=expiry,
bs_mktid=str(conid),
)
def norm_trade_records(
ledger: dict[str, Any],
symcache: SymbologyCache | None = None,
) -> dict[str, Transaction]:
'''
Normalize (xml) flex-report or (recent) API trade records into
our ledger format with parsing for `MktPair` and `Asset`
extraction to fill in the `Transaction.sys: MktPair` field.
'''
records: list[Transaction] = []
for tid, record in ledger.items():
txn = norm_trade(
tid,
record,
# NOTE: currently no symcache support
pairs={},
symcache=symcache,
)
if txn is None:
continue
# inject txns sorted by datetime
insort(
records,
txn,
key=lambda t: t.dt
)
return {r.tid: r for r in records}
def api_trades_to_ledger_entries(
accounts: bidict[str, str],
fills: list[Fill],
) -> dict[str, dict]:
'''
Convert API execution (`Fill`) objects into flattened-``dict``
form, pretty much straight up without modification except adding
a `pydatetime` field from the parsed timestamp so that entries
can be datetime-sorted on write.
'''
trades_by_account: dict[str, dict] = {}
for fill in fills:
# NOTE: for the schema, see the defn for `Fill` which is
# a `NamedTuple` subtype
fdict: dict = fill._asdict()
# flatten all (sub-)objects and convert to dicts.
# with values packed into one top level entry.
val: CommissionReport | Execution | Contract
txn_dict: dict[str, Any] = {}
for attr_name, val in fdict.items():
match attr_name:
# value is a `@dataclass` subtype
case 'contract' | 'execution' | 'commissionReport':
txn_dict.update(asdict(val))
case 'time':
# ib has wack ns timestamps, or is that us?
continue
# TODO: we can remove this case right since there's
# only 4 fields on a `Fill`?
case _:
txn_dict[attr_name] = val
tid = str(txn_dict['execId'])
dt = from_timestamp(txn_dict['time'])
txn_dict['datetime'] = str(dt)
acctid = accounts[txn_dict['acctNumber']]
# NOTE: only inserted (then later popped) for sorting below!
txn_dict['pydatetime'] = dt
if not tid:
# this is likely some kind of internal adjustment
# transaction, likely one of the following:
# - an expiry event that will show a "book trade" indicating
# some adjustment to cash balances: zeroing or itm settle.
# - a manual cash balance position adjustment likely done by
# the user from the accounts window in TWS where they can
# manually set the avg price and size:
# https://api.ibkr.com/lib/cstools/faq/web1/index.html#/tag/DTWS_ADJ_AVG_COST
log.warning(
'Skipping ID-less ledger txn_dict:\n'
f'{pformat(txn_dict)}'
)
continue
trades_by_account.setdefault(
acctid, {}
)[tid] = txn_dict
# TODO: maybe we should just bisect.insort() into a list of
# tuples and then return a dict of that?
# sort entries in output by python based datetime
for acctid in trades_by_account:
trades_by_account[acctid] = dict(sorted(
trades_by_account[acctid].items(),
key=lambda entry: entry[1].pop('pydatetime'),
))
return trades_by_account
async def update_ledger_from_api_trades(
fills: list[Fill],
client: Client | MethodProxy,
accounts_def_inv: bidict[str, str],
# NOTE: provided for ad-hoc insertions "as transactions are
# processed" -> see `norm_trade()` signature requirements.
symcache: SymbologyCache | None = None,
) -> tuple[
dict[str, Transaction],
dict[str, dict],
]:
# XXX; ERRGGG..
# pack in the "primary/listing exchange" value from a
# contract lookup since it seems this isn't available by
# default from the `.fills()` method endpoint...
fill: Fill
for fill in fills:
con: Contract = fill.contract
conid: str = con.conId
pexch: str | None = con.primaryExchange
if not pexch:
cons = await client.get_con(conid=conid)
if cons:
con = cons[0]
pexch = con.primaryExchange or con.exchange
else:
# for futes it seems like the primary is always empty?
pexch: str = con.exchange
# pack in the ``Contract.secType``
# entry['asset_type'] = condict['secType']
entries: dict[str, dict] = api_trades_to_ledger_entries(
accounts_def_inv,
fills,
)
# normalize recent session's trades to the `Transaction` type
trans_by_acct: dict[str, dict[str, Transaction]] = {}
for acctid, trades_by_id in entries.items():
# normalize to transaction form
trans_by_acct[acctid] = norm_trade_records(
trades_by_id,
symcache=symcache,
)
return trans_by_acct, entries


@@ -0,0 +1,615 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Symbology search and normalization.
'''
from __future__ import annotations
from contextlib import (
nullcontext,
)
from decimal import Decimal
import time
from typing import (
Awaitable,
TYPE_CHECKING,
)
from rapidfuzz import process as fuzzy
import ib_insync as ibis
import tractor
import trio
from piker.accounting import (
Asset,
MktPair,
unpack_fqme,
)
from piker._cacheables import (
async_lifo_cache,
)
from ._util import (
log,
)
if TYPE_CHECKING:
from .api import (
MethodProxy,
Client,
)
_futes_venues = (
'GLOBEX',
'NYMEX',
'CME',
'CMECRYPTO',
'COMEX',
# 'CMDTY', # special name case..
'CBOT', # (treasury) yield futures
)
_adhoc_cmdty_set = {
# metals
# https://misc.interactivebrokers.com/cstools/contract_info/v3.10/index.php?action=Conid%20Info&wlId=IB&conid=69067924
'xauusd.cmdty', # london gold spot ^
'xagusd.cmdty', # silver spot
}
# NOTE: if you aren't seeing one of these symbols' futures contracts
# show up, it's likely the `.<venue>` part is wrong!
_adhoc_futes_set = {
# equities
'nq.cme',
'mnq.cme', # micro
'es.cme',
'mes.cme', # micro
# crypto$
'brr.cme',
'mbt.cme', # micro
'ethusdrr.cme',
# agriculture
'he.comex', # lean hogs
'le.comex', # live cattle (geezers)
'gf.comex', # feeder cattle (younguns)
# raw
'lb.comex', # random len lumber
'gc.comex',
'mgc.comex', # micro
# oil & gas
'cl.nymex',
'ni.comex', # silver futes
'qi.comex', # mini-silver futes
# treasury yields
# etfs by duration:
# SHY -> IEI -> IEF -> TLT
'zt.cbot', # 2y
'z3n.cbot', # 3y
'zf.cbot', # 5y
'zn.cbot', # 10y
'zb.cbot', # 30y
# (micros of above)
'2yy.cbot',
'5yy.cbot',
'10y.cbot',
'30y.cbot',
}
# taken from list here:
# https://www.interactivebrokers.com/en/trading/products-spot-currencies.php
_adhoc_fiat_set = set(
    sym.strip() for sym in (
        'USD, AED, AUD, CAD,'
        'CHF, CNH, CZK, DKK,'
        'EUR, GBP, HKD, HUF,'
        'ILS, JPY, MXN, NOK,'
        'NZD, PLN, RUB, SAR,'
        'SEK, SGD, TRY, ZAR'
    ).split(',')
)
# manually discovered tick discrepancies,
# only god knows how or why they'd cuck these up..
_adhoc_mkt_infos: dict[int | str, dict] = {
'vtgn.nasdaq': {'price_tick': Decimal('0.01')},
}
# map of symbols to contract ids
_adhoc_symbol_map = {
# https://misc.interactivebrokers.com/cstools/contract_info/v3.10/index.php?action=Conid%20Info&wlId=IB&conid=69067924
# NOTE: some cmdtys/metals don't have trade data like gold/usd:
# https://groups.io/g/twsapi/message/44174
'XAUUSD': ({'conId': 69067924}, {'whatToShow': 'MIDPOINT'}),
}
for qsn in _adhoc_futes_set:
sym, venue = qsn.split('.')
assert venue.upper() in _futes_venues, f'{venue}'
_adhoc_symbol_map[sym.upper()] = (
{'exchange': venue},
{},
)
# exchanges we don't support at the moment due to not knowing
# how to do symbol-contract lookup correctly likely due
# to not having the data feeds subscribed.
_exch_skip_list = {
'ASX', # aussie stocks
'MEXI', # mexican stocks
# no idea
'NSE',
'VALUE',
'FUNDSERV',
'SWB2',
'PSE',
'PHLX',
}
# optional search config the backend can register for
# its symbol search handling (in this case we avoid
# accepting patterns before the kb has settled more than
# a quarter second).
_search_conf = {
'pause_period': 6 / 16,
}
@tractor.context
async def open_symbol_search(ctx: tractor.Context) -> None:
'''
Symbology search brokerd-endpoint.
'''
from .api import open_client_proxies
from .feed import open_data_client
# TODO: load user defined symbol set locally for fast search?
await ctx.started({})
async with (
open_client_proxies() as (proxies, _),
open_data_client() as data_proxy,
):
async with ctx.open_stream() as stream:
# select a non-history client for symbol search to lighten
# the load in the main data node.
proxy = data_proxy
for name, proxy in proxies.items():
if proxy is data_proxy:
continue
break
ib_client = proxy._aio_ns.ib
log.info(
f'Using API client for symbol-search\n'
f'{ib_client}\n'
)
last = time.time()
async for pattern in stream:
log.info(f'received {pattern}')
now: float = time.time()
# this causes tractor hang...
# assert 0
assert pattern, 'IB can not accept blank search pattern'
# throttle search requests to no faster than 1Hz
diff = now - last
if diff < 1.0:
log.debug('throttle sleeping')
await trio.sleep(diff)
try:
pattern = stream.receive_nowait()
except trio.WouldBlock:
pass
if (
not pattern
or pattern.isspace()
# XXX: not sure if this is a bad assumption but it
# seems to make search snappier?
or len(pattern) < 1
):
log.warning('empty pattern received, skipping..')
# TODO: *BUG* if nothing is returned here the client
# side will cache a null set result and not show
# anything to the user on re-searches when this query
# timed out. We probably need a special "timeout" msg
# or something...
# XXX: this unblocks the far end search task which may
# hold up a multi-search nursery block
await stream.send({})
continue
log.info(f'searching for {pattern}')
last = time.time()
# async batch search using api stocks endpoint and module
# defined adhoc symbol set.
stock_results = []
async def extend_results(
target: Awaitable[list]
) -> None:
try:
results = await target
except tractor.trionics.Lagged:
print("IB SYM-SEARCH OVERRUN?!?")
return
stock_results.extend(results)
for _ in range(10):
with trio.move_on_after(3) as cs:
async with trio.open_nursery() as sn:
sn.start_soon(
extend_results,
proxy.search_symbols(
pattern=pattern,
upto=5,
),
)
# trigger async request
await trio.sleep(0)
if cs.cancelled_caught:
log.warning(
f'Search timeout? {proxy._aio_ns.ib.client}'
)
continue
elif stock_results:
break
# else:
# await tractor.pause()
# # match against our ad-hoc set immediately
# adhoc_matches = fuzzy.extract(
# pattern,
# list(_adhoc_futes_set),
# score_cutoff=90,
# )
# log.info(f'fuzzy matched adhocs: {adhoc_matches}')
# adhoc_match_results = {}
# if adhoc_matches:
# # TODO: do we need to pull contract details?
# adhoc_match_results = {i[0]: {} for i in
# adhoc_matches}
log.debug(f'fuzzy matching stocks {stock_results}')
stock_matches = fuzzy.extract(
pattern,
stock_results,
score_cutoff=50,
)
# matches = adhoc_match_results | {
matches = {
item[0]: {} for item in stock_matches
}
# TODO: we used to deliver contract details
# {item[2]: item[0] for item in stock_matches}
log.debug(f"sending matches: {matches.keys()}")
await stream.send(matches)
# re-mapping to piker asset type names
# https://github.com/erdewit/ib_insync/blob/master/ib_insync/contract.py#L113
_asset_type_map = {
'STK': 'stock',
'OPT': 'option',
'FUT': 'future',
'CONTFUT': 'continuous_future',
'CASH': 'fiat',
'IND': 'index',
'CFD': 'cfd',
'BOND': 'bond',
'CMDTY': 'commodity',
'FOP': 'futures_option',
'FUND': 'mutual_fund',
'WAR': 'warrant',
'IOPT': 'warrant',
'BAG': 'bag',
'CRYPTO': 'crypto', # bc it's diff then fiat?
# 'NEWS': 'news',
}
def parse_patt2fqme(
# client: Client,
pattern: str,
) -> tuple[str, str, str, str]:
# TODO: we can't use this currently because
# ``wrapper.startTicker()`` currently caches ticker instances
# which means getting a single quote will potentially look up
# a quote for a ticker that is already streaming and thus run
# into state clobbering (eg. list: Ticker.ticks). It probably
# makes sense to try this once we get the pub-sub working on
# individual symbols...
# XXX UPDATE: we can probably do the tick/trades scraping
# inside our eventkit handler instead to bypass this entirely?
currency = ''
# fqme parsing stage
# ------------------
if '.ib' in pattern:
_, symbol, venue, expiry = unpack_fqme(pattern)
else:
symbol = pattern
expiry = ''
# # another hack for forex pairs lul.
# if (
# '.idealpro' in symbol
# # or '/' in symbol
# ):
# exch: str = 'IDEALPRO'
# symbol = symbol.removesuffix('.idealpro')
# if '/' in symbol:
# symbol, currency = symbol.split('/')
# else:
# TODO: yes, a cache..
# try:
# # give the cache a go
# return client._contracts[symbol]
# except KeyError:
# log.debug(f'Looking up contract for {symbol}')
expiry: str = ''
if symbol.count('.') > 1:
symbol, _, expiry = symbol.rpartition('.')
# use heuristics to figure out contract "type"
symbol, venue = symbol.upper().rsplit('.', maxsplit=1)
return symbol, currency, venue, expiry
def con2fqme(
con: ibis.Contract,
_cache: dict[int, (str, bool)] = {}
) -> tuple[str, bool]:
'''
Convert contracts to fqme-style strings to be used both in
symbol-search matching and as feed tokens passed to the front
end data feed layer.
Previously seen contracts are cached by id.
'''
# should be real volume for this contract by default
calc_price: bool = False
if con.conId:
try:
# TODO: LOL so apparently IB just changes the contract
# ID (int) on a whim.. so we probably need to use an
# FQME style key after all...
return _cache[con.conId]
except KeyError:
pass
suffix: str = con.primaryExchange or con.exchange
symbol: str = con.symbol
expiry: str = con.lastTradeDateOrContractMonth or ''
match con:
case ibis.Option():
# TODO: option symbol parsing and sane display:
symbol = con.localSymbol.replace(' ', '')
case (
ibis.Commodity()
# search API endpoint returns std con box..
| ibis.Contract(secType='CMDTY')
):
# commodities and forex don't have an exchange name and
# no real volume so we have to calculate the price
suffix = con.secType
# no real volume on this tract
calc_price = True
case ibis.Forex() | ibis.Contract(secType='CASH'):
dst, src = con.localSymbol.split('.')
symbol = ''.join([dst, src])
suffix = con.exchange or 'idealpro'
# no real volume on forex feeds..
calc_price = True
if not suffix:
entry = _adhoc_symbol_map.get(
con.symbol or con.localSymbol
)
if entry:
meta, kwargs = entry
cid = meta.get('conId')
if cid:
assert con.conId == meta['conId']
suffix = meta['exchange']
# append a `.<suffix>` to the returned symbol
# key for derivatives that normally is the expiry
# date key.
if expiry:
suffix += f'.{expiry}'
fqme_key = symbol.lower()
if suffix:
fqme_key = '.'.join((fqme_key, suffix)).lower()
_cache[con.conId] = fqme_key, calc_price
return fqme_key, calc_price
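# Illustrative mappings only (the contract field values are assumptions):
# - a forex `Contract(secType='CASH', localSymbol='EUR.USD',
#   exchange='IDEALPRO')` -> ('eurusd.idealpro', True)
# - a plain stock listed on NASDAQ -> ('<symbol>.nasdaq', False)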
@async_lifo_cache()
async def get_mkt_info(
fqme: str,
proxy: MethodProxy | None = None,
) -> tuple[MktPair, ibis.ContractDetails]:
if '.ib' not in fqme:
fqme += '.ib'
broker, pair, venue, expiry = unpack_fqme(fqme)
proxy: MethodProxy
if proxy is not None:
client_ctx = nullcontext(proxy)
else:
from .feed import (
open_data_client,
)
client_ctx = open_data_client
async with client_ctx as proxy:
try:
(
con, # Contract
details, # ContractDetails
) = await proxy.get_sym_details(fqme=fqme)
except ConnectionError:
log.exception(f'Proxy is ded {proxy._aio_ns}')
raise
# TODO: more consistent field translation
atype = _asset_type_map[con.secType]
if atype == 'commodity':
venue: str = 'cmdty'
else:
venue = con.primaryExchange or con.exchange
price_tick: Decimal = Decimal(str(details.minTick))
ib_min_tick_gt_2: Decimal = Decimal('0.01')
if (
price_tick < ib_min_tick_gt_2
):
# TODO: we need to add some kinda dynamic rounding sys
# to our MktPair i guess?
# not sure where the logic should sit, but likely inside
# the `.clearing._ems` i suppose...
log.warning(
'IB seems to disallow a min price tick < 0.01 '
'when the price is > 2.0..?\n'
f'Decreasing min tick precision for {fqme} to 0.01'
)
# price_tick = ib_min_tick
# await tractor.pause()
if atype == 'stock':
# XXX: GRRRR they don't support fractional share sizes for
# stocks from the API?!
# if con.secType == 'STK':
size_tick = Decimal('1')
else:
size_tick: Decimal = Decimal(
str(details.minSize).rstrip('0')
)
# |-> TODO: there is also the Contract.sizeIncrement, but wtf is it?
# NOTE: this is duplicate from the .broker.norm_trade_records()
# routine, we should factor all this parsing somewhere..
expiry_str = str(con.lastTradeDateOrContractMonth)
# if expiry:
# expiry_str: str = str(pendulum.parse(
# str(expiry).strip(' ')
# ))
# TODO: currently we can't pass the fiat src asset because
# then we'll get a `MNQUSD` request for history data..
# we need to figure out how we're going to handle this (later?)
# but likely we want all backends to eventually handle
# ``dst/src.venue.`` style !?
src = Asset(
name=str(con.currency).lower(),
atype='fiat',
tx_tick=Decimal('0.01'), # right?
)
dst = Asset(
name=con.symbol.lower(),
atype=atype,
tx_tick=size_tick,
)
mkt = MktPair(
src=src,
dst=dst,
price_tick=price_tick,
size_tick=size_tick,
bs_mktid=str(con.conId),
venue=str(venue),
expiry=expiry_str,
broker='ib',
# TODO: options contract info as str?
# contract_info=<optionsdetails>
_fqme_without_src=(atype != 'fiat'),
)
# just.. wow.
if entry := _adhoc_mkt_infos.get(mkt.bs_fqme):
log.warning(f'Frickin {mkt.fqme} has an adhoc {entry}..')
new = mkt.to_dict()
new['price_tick'] = entry['price_tick']
new['src'] = src
new['dst'] = dst
mkt = MktPair(**new)
# if possible register the bs_mktid to the just-built
# mkt so that it can be retrieved by order mode tasks later.
# TODO NOTE: this is going to be problematic if/when we split
# out the datatd vs. brokerd actors since the mktmap lookup
# table will now be inaccessible..
if proxy is not None:
client: Client = proxy._aio_ns
client._contracts[mkt.bs_fqme] = con
client._cons2mkts[con] = mkt
return mkt, details

File diff suppressed because it is too large Load Diff


@@ -0,0 +1,64 @@
``kraken`` backend
------------------
though they don't have the most liquidity of all the cexes they sure are
accommodating to those of us who appreciate a little ``xmr``.
status
******
current support is *production grade* and both real-time data and order
management should be correct and fast. this backend is used by core devs
for live trading.
config
******
In order to get order mode support your ``brokers.toml``
needs to have something like the following:
.. code:: toml
[kraken]
accounts.spot = 'spot'
key_descr = "spot"
api_key = "69696969696969696696969696969696969696969696969696969696"
secret = "BOOBSBOOBSBOOBSBOOBSBOOBSSMBZ69696969696969669969696969696"
If everything works correctly you should see any current positions
loaded in the pps pane on chart load and you should also be able to
check your trade records in the file::
<piker_conf_dir>/ledgers/trades_kraken_spot.toml
An example ledger file will have entries written verbatim from the
trade events schema:
.. code:: toml
[TFJBKK-SMBZS-VJ4UWS]
ordertxid = "SMBZSA-7CNQU-3HWLNJ"
postxid = "SMBZSE-M7IF5-CFI7LT"
pair = "XXMRZEUR"
time = 1655691993.4133966
type = "buy"
ordertype = "limit"
price = "103.97000000"
cost = "499.99999977"
fee = "0.80000000"
vol = "4.80907954"
margin = "0.00000000"
misc = ""
your ``pps.toml`` file will have position entries like,
.. code:: toml
[kraken.spot."xmreur.kraken"]
size = 4.80907954
ppu = 103.97000000
bs_mktid = "XXMRZEUR"
clears = [
{ tid = "TFJBKK-SMBZS-VJ4UWS", cost = 0.8, price = 103.97, size = 4.80907954, dt = "2022-05-20T02:26:33.413397+00:00" },
]


@@ -0,0 +1,75 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Kraken backend.
Sub-modules within break into the core functionalities:
- .api: for the core API machinery which is generally
  an ``asks``/``trio-websocket`` implemented ``Client``.
- .broker: part for orders / trading endpoints.
- .feed: for real-time and historical data query endpoints.
- .ledger: for transaction processing as it pertains to accounting.
- .symbols: for market (name) search and symbology meta-defs.
'''
from .symbols import (
Pair, # for symcache
open_symbol_search,
# required by `.accounting`, `.data`
get_mkt_info,
)
# required by `.brokers`
from .api import (
get_client,
)
from .feed import (
# required by `.data`
stream_quotes,
open_history_client,
)
from .broker import (
# required by `.clearing`
open_trade_dialog,
)
from .ledger import (
# required by `.accounting`
norm_trade,
norm_trade_records,
)
__all__ = [
'get_client',
'get_mkt_info',
'Pair',
'open_trade_dialog',
'open_history_client',
'open_symbol_search',
'stream_quotes',
'norm_trade_records',
'norm_trade',
]
# tractor RPC enable arg
__enable_modules__: list[str] = [
'api',
'broker',
'feed',
'symbols',
]


@@ -0,0 +1,703 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Core (web) API client
'''
from contextlib import asynccontextmanager as acm
from datetime import datetime
import itertools
from typing import (
Any,
Union,
)
import time
import httpx
import pendulum
import numpy as np
import urllib.parse
import hashlib
import hmac
import base64
import trio
from piker import config
from piker.data import (
def_iohlcv_fields,
match_from_pairs,
)
from piker.accounting._mktinfo import (
Asset,
digits_to_dec,
dec_digits,
)
from piker.brokers._util import (
resproc,
SymbolNotFound,
BrokerError,
DataThrottle,
)
from piker.accounting import Transaction
from piker.log import get_logger
from .symbols import Pair
log = get_logger('piker.brokers.kraken')
# <uri>/<version>/
_url = 'https://api.kraken.com/0'
_headers: dict[str, str] = {
'User-Agent': 'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
}
# TODO: this is the only backend providing this right?
# in which case we should drop it from the defaults and
# instead make a custom fields descr in this module!
_show_wap_in_history = True
_symbol_info_translation: dict[str, str] = {
'tick_decimals': 'pair_decimals',
}
def get_config() -> dict[str, Any]:
'''
Load our section from `piker/brokers.toml`.
'''
conf, path = config.load(
conf_name='brokers',
touch_if_dne=True,
)
if (section := conf.get('kraken')) is None:
log.warning(
f'No config section found for kraken in {path}'
)
return {}
return section
def get_kraken_signature(
urlpath: str,
data: dict[str, Any],
secret: str
) -> str:
postdata = urllib.parse.urlencode(data)
encoded = (str(data['nonce']) + postdata).encode()
message = urlpath.encode() + hashlib.sha256(encoded).digest()
mac = hmac.new(base64.b64decode(secret), message, hashlib.sha512)
sigdigest = base64.b64encode(mac.digest())
return sigdigest.decode()
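# Usage sketch (the values below are placeholders, not real credentials):
#
#   sig: str = get_kraken_signature(
#       urlpath='/0/private/Balance',
#       data={'nonce': '1616492376594'},
#       secret='<base64-encoded-api-secret>',
#   )
#
# i.e. kraken's documented `API-Sign`: HMAC-SHA512 over
# path + SHA256(nonce + urlencoded POST data), keyed with the
# base64-decoded secret and finally base64-encoded.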
class InvalidKey(ValueError):
'''
EAPI:Invalid key
This error is returned when the API key used for the call is
either expired or disabled, please review the API key in your
Settings -> API tab of account management or generate a new one
and update your application.
'''
class Client:
# assets and mkt pairs are key-ed by kraken's ReST response
# symbol-bs_mktids (we call them "X-keys" like fricking
# "XXMRZEUR"). these keys used directly since ledger endpoints
# return transaction sets keyed with the same set!
_Assets: dict[str, Asset] = {}
_AssetPairs: dict[str, Pair] = {}
# offer lookup tables for all .altname and .wsname
# to the equivalent .xname so that various symbol-schemas
# can be mapped to `Pair`s in the tables above.
_altnames: dict[str, str] = {}
_wsnames: dict[str, str] = {}
# key-ed by `Pair.bs_fqme: str`, and thus used for search
# allowing for lookup using piker's own FQME symbology sys.
_pairs: dict[str, Pair] = {}
_assets: dict[str, Asset] = {}
def __init__(
self,
config: dict[str, str],
httpx_client: httpx.AsyncClient,
name: str = '',
api_key: str = '',
secret: str = ''
) -> None:
self._sesh: httpx.AsyncClient = httpx_client
self._name = name
self._api_key = api_key
self._secret = secret
self.conf: dict[str, str] = config
@property
def pairs(self) -> dict[str, Pair]:
if self._pairs is None:
raise RuntimeError(
"Client didn't run `.get_mkt_pairs()` on startup?!"
)
return self._pairs
async def _public(
self,
method: str,
data: dict,
) -> dict[str, Any]:
resp: httpx.Response = await self._sesh.post(
url=f'/public/{method}',
json=data,
)
return resproc(resp, log)
async def _private(
self,
method: str,
data: dict,
uri_path: str
) -> dict[str, Any]:
headers = {
'Content-Type': 'application/x-www-form-urlencoded',
'API-Key': self._api_key,
'API-Sign': get_kraken_signature(
uri_path,
data,
self._secret,
),
}
resp: httpx.Response = await self._sesh.post(
url=f'/private/{method}',
data=data,
headers=headers,
)
return resproc(resp, log)
async def endpoint(
self,
method: str,
data: dict[str, Any]
) -> dict[str, Any]:
uri_path = f'/0/private/{method}'
data['nonce'] = str(int(1000*time.time()))
return await self._private(method, data, uri_path)
async def get_balances(
self,
) -> dict[str, float]:
'''
Return the set of asset balances for this account
by symbol.
'''
resp = await self.endpoint(
'Balance',
{},
)
by_bsmktid: dict[str, dict] = resp['result']
balances: dict = {}
for xname, bal in by_bsmktid.items():
asset: Asset = self._Assets[xname]
# TODO: which KEY should we use? it's used to index
# the `Account.pps: dict` ..
key: str = asset.name.lower()
# TODO: should we just return a `Decimal` here
# or is the rounded version ok?
balances[key] = round(
float(bal),
ndigits=dec_digits(asset.tx_tick)
)
return balances
async def get_assets(
self,
reload: bool = False,
) -> dict[str, Asset]:
'''
Load and cache all asset infos and pack into
our native ``Asset`` struct.
https://docs.kraken.com/rest/#tag/Market-Data/operation/getAssetInfo
return msg:
"asset1": {
"aclass": "string",
"altname": "string",
"decimals": 0,
"display_decimals": 0,
"collateral_value": 0,
"status": "string"
}
'''
if (
not self._assets
or reload
):
resp = await self._public('Assets', {})
assets: dict[str, dict] = resp['result']
for bs_mktid, info in assets.items():
altname: str = info['altname']
aclass: str = info['aclass']
asset = Asset(
name=altname,
atype=f'crypto_{aclass}',
tx_tick=digits_to_dec(info['decimals']),
info=info,
)
# NOTE: yes we keep 2 sets since kraken insists on
# keeping 3 frickin sets bc apparently they have
# no sane data engineers who all like different
# keys for their fricking symbology sets..
self._Assets[bs_mktid] = asset
self._assets[altname.lower()] = asset
self._assets[altname] = asset
# we return the "most native" set merged with our preferred
# naming (which i guess is the "altname" one) since that's
# what the symcache loader will be storing, and we need the
# keys that are easiest to match against in any trade
# records.
return self._Assets | self._assets
async def get_trades(
self,
fetch_limit: int | None = None,
) -> dict[str, Any]:
'''
Get the trades (aka cleared orders) history from the rest endpoint:
https://docs.kraken.com/rest/#operation/getTradeHistory
'''
ofs = 0
trades_by_id: dict[str, Any] = {}
for i in itertools.count():
if (
fetch_limit
and i >= fetch_limit
):
break
# increment 'ofs' pagination offset
ofs = i*50
resp = await self.endpoint(
'TradesHistory',
{'ofs': ofs},
)
by_id = resp['result']['trades']
trades_by_id.update(by_id)
# can get up to 50 results per query, see:
# https://docs.kraken.com/rest/#tag/User-Data/operation/getTradeHistory
if (
len(by_id) < 50
):
err = resp.get('error')
if err:
raise BrokerError(err)
# we know we received the max amount of
# trade results so there may be more history.
# catch the end of the trades
count = resp['result']['count']
break
# sanity check on update
assert count == len(trades_by_id.values())
return trades_by_id
async def get_xfers(
self,
asset: str,
src_asset: str = '',
) -> dict[str, Transaction]:
'''
Get asset balance transfer transactions.
Currently only withdrawals are supported.
'''
resp = await self.endpoint(
'WithdrawStatus',
{'asset': asset},
)
try:
xfers: list[dict] = resp['result']
except KeyError:
log.exception(f'Kraken suxxx: {resp}')
return []
# eg. resp schema:
# 'result': [{'method': 'Bitcoin', 'aclass': 'currency', 'asset':
# 'XXBT', 'refid': 'AGBJRMB-JHD2M4-NDI3NR', 'txid':
# 'b95d66d3bb6fd76cbccb93f7639f99a505cb20752c62ea0acc093a0e46547c44',
# 'info': 'bc1qc8enqjekwppmw3g80p56z5ns7ze3wraqk5rl9z',
# 'amount': '0.00300726', 'fee': '0.00001000', 'time':
# 1658347714, 'status': 'Success'}]}
if xfers:
import tractor
await tractor.pp()
trans: dict[str, Transaction] = {}
for entry in xfers:
# look up the normalized name and asset info
asset_key: str = entry['asset']
asset: Asset = self._Assets[asset_key]
asset_key: str = asset.name.lower()
# XXX: this is in the asset units (likely) so it isn't
# quite the same as a commission cost necessarily..)
# TODO: also round this based on `Pair` cost precision info?
cost = float(entry['fee'])
# fqme: str = asset_key + '.kraken'
tx = Transaction(
fqme=asset_key, # this must map to an entry in .assets!
tid=entry['txid'],
dt=pendulum.from_timestamp(entry['time']),
bs_mktid=f'{asset_key}{src_asset}',
size=-1*(
float(entry['amount'])
+
cost
),
# since this will be treated as a "sell" it
# shouldn't be needed to compute the be price.
price='NaN',
# XXX: see note above
cost=cost,
# not a trade but a withdrawal or deposit on the
# asset (chain) system.
etype='transfer',
)
trans[tx.tid] = tx
return trans
async def submit_limit(
self,
symbol: str,
price: float,
action: str,
size: float,
reqid: str = None,
validate: bool = False # set True test call without a real submission
) -> dict:
'''
Place an order and return integer request id provided by client.
'''
# Build common data dict for common keys from both endpoints
data = {
"pair": symbol,
"price": str(price),
"validate": validate
}
if reqid is None:
# Build order data for kraken api
data |= {
"ordertype": "limit",
"type": action,
"volume": str(size),
}
return await self.endpoint('AddOrder', data)
else:
# Edit order data for kraken api
data["txid"] = reqid
return await self.endpoint('EditOrder', data)
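# Call sketch (the pair/price/size below are made up):
#
#   await client.submit_limit(
#       symbol='XXBTZUSD',
#       price=20_000.0,
#       action='buy',
#       size=0.1,
#       validate=True,  # dry-run, no live submission
#   )
#
# with no `reqid` this hits `AddOrder`; passing an existing kraken
# txid as `reqid` routes to `EditOrder` instead.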
async def submit_cancel(
self,
reqid: str,
) -> dict:
'''
Send cancel request for order id ``reqid``.
'''
# txid is a transaction id given by kraken
return await self.endpoint('CancelOrder', {"txid": reqid})
async def asset_pairs(
self,
pair_patt: str | None = None,
) -> dict[str, Pair] | Pair:
'''
Query for a tradeable asset pair (info), or all if no input
pattern is provided.
https://docs.kraken.com/rest/#tag/Market-Data/operation/getTradableAssetPairs
'''
if not self._AssetPairs:
# get all pairs by default, or filter
# to whatever pattern is provided as input.
req_pairs: dict[str, str] | None = None
if pair_patt is not None:
req_pairs = {'pair': pair_patt}
resp = await self._public(
'AssetPairs',
req_pairs,
)
err = resp['error']
if err:
raise SymbolNotFound(pair_patt)
# NOTE: we try to key pairs by our custom defined
# `.bs_fqme` field since we want to offer search over
# this pattern set, callers should fill out lookup
# tables for kraken's bs_mktid keys to map to these
# keys!
# XXX: FURTHER kraken's data eng team decided to offer
# 3 frickin market-pair-symbol key sets depending on
# which frickin API is being used.
# Example for the trading pair 'LTC/EUR'
# - the "X-key" from rest eps 'XLTCZEUR'
# - the "websocket key" from ws msgs is 'LTC/EUR'
# - the "altname key" also delivered in pair info is 'LTCEUR'
for xkey, data in resp['result'].items():
# NOTE: always cache in pairs tables for faster lookup
pair = Pair(xname=xkey, **data)
# register the above `Pair` structs for all
# key-sets/monikers: a set of 4 (frickin) tables
# acting as a combined surjection of all possible
# (and stupid) kraken names to their `Pair` obj.
self._AssetPairs[xkey] = pair
self._pairs[pair.bs_fqme] = pair
self._altnames[pair.altname] = pair
self._wsnames[pair.wsname] = pair
if pair_patt is not None:
return next(iter(self._pairs.items()))[1]
return self._AssetPairs
async def get_mkt_pairs(
self,
reload: bool = False,
) -> dict:
'''
Load all market pair info, build and cache it for downstream
use.
Multiple pair info lookup tables (like ``._altnames:
dict[str, str]``) are created for looking up the
piker-native `Pair`-struct from any input of the three
(yes, it's that idiotic..) available symbol/pair-key-sets
that kraken frickin offers depending on the API including
the .altname, .wsname and the weird ass default set they
return in ReST responses .xname..
'''
if (
not self._pairs
or reload
):
await self.asset_pairs()
return self._AssetPairs
async def search_symbols(
self,
pattern: str,
) -> dict[str, Any]:
'''
Search for a symbol by "alt name"..
It is expected that the ``Client._pairs`` table
gets populated before conducting the underlying fuzzy-search
over the pair-key set.
'''
if not len(self._pairs):
await self.get_mkt_pairs()
assert self._pairs, '`Client.get_mkt_pairs()` was never called!?'
matches: dict[str, Pair] = match_from_pairs(
pairs=self._pairs,
query=pattern.upper(),
score_cutoff=50,
)
# repack in .altname-keyed output table
return {
pair.altname: pair
for pair in matches.values()
}
async def bars(
self,
symbol: str = 'XBTUSD',
# UTC 2017-07-02 12:53:20
since: Union[int, datetime] | None = None,
count: int = 720, # <- max allowed per query
as_np: bool = True,
) -> dict:
if since is None:
since = pendulum.now('UTC').start_of('minute').subtract(
minutes=count).timestamp()
elif isinstance(since, int):
since = pendulum.from_timestamp(since).timestamp()
else: # presumably a pendulum datetime
since = since.timestamp()
# UTC 2017-07-02 12:53:20 is oldest seconds value
since = str(max(1499000000, int(since)))
json = await self._public(
'OHLC',
data={
'pair': symbol,
'since': since,
},
)
try:
res = json['result']
res.pop('last')
bars = next(iter(res.values()))
new_bars = []
first = bars[0]
last_nz_vwap = first[-3]
if last_nz_vwap == 0:
# use close if vwap is zero
last_nz_vwap = first[-4]
# convert all fields to native types
for i, bar in enumerate(bars):
# normalize weird zero-ed vwap values..cmon kraken..
# indicates vwap didn't change since last bar
vwap = float(bar.pop(-3))
if vwap != 0:
last_nz_vwap = vwap
if vwap == 0:
vwap = last_nz_vwap
# re-insert vwap as the last of the fields
bar.append(vwap)
new_bars.append(
(i,) + tuple(
ftype(bar[j]) for j, (name, ftype) in enumerate(
def_iohlcv_fields[1:]
)
)
)
array = np.array(new_bars, dtype=def_iohlcv_fields) if as_np else bars
return array
except KeyError:
errmsg = json['error'][0]
if 'not found' in errmsg:
raise SymbolNotFound(errmsg + f': {symbol}')
elif 'Too many requests' in errmsg:
raise DataThrottle(f'{symbol}')
else:
raise BrokerError(errmsg)
@classmethod
def to_bs_fqme(
cls,
pair_str: str
) -> str:
'''
Normalize symbol names to a 3x3 pair from the global
definition map which we build out from the data retrieved from
the 'AssetPairs' endpoint, see methods above.
'''
try:
return cls._altnames[pair_str.upper()].bs_fqme
except KeyError as ke:
raise SymbolNotFound(f'kraken has no {ke.args[0]}')
@acm
async def get_client() -> Client:
conf: dict[str, Any] = get_config()
async with httpx.AsyncClient(
base_url=_url,
headers=_headers,
# TODO: is there a way to numerate this?
# https://www.python-httpx.org/advanced/clients/#why-use-a-client
# connections=4
) as trio_client:
if conf:
client = Client(
conf,
httpx_client=trio_client,
# TODO: don't break these up and just do internal
# conf lookups instead..
name=conf['key_descr'],
api_key=conf['api_key'],
secret=conf['secret']
)
else:
client = Client(
conf={},
httpx_client=trio_client,
)
# at startup, load all symbols, and asset info in
# batch requests.
async with trio.open_nursery() as nurse:
nurse.start_soon(client.get_assets)
await client.get_mkt_pairs()
yield client

File diff suppressed because it is too large


@@ -0,0 +1,415 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Real-time and historical data feed endpoints.
'''
from contextlib import (
asynccontextmanager as acm,
aclosing,
)
from datetime import datetime
from typing import (
AsyncGenerator,
Callable,
Optional,
)
import time
import numpy as np
import pendulum
from trio_typing import TaskStatus
import trio
from piker.accounting._mktinfo import (
MktPair,
)
from piker.brokers import (
open_cached_client,
)
from piker.brokers._util import (
BrokerError,
DataThrottle,
DataUnavailable,
)
from piker.types import Struct
from piker.data.validate import FeedInit
from piker.data._web_bs import open_autorecon_ws, NoBsWs
from .api import (
log,
)
from .symbols import get_mkt_info
class OHLC(Struct, frozen=True):
'''
Description of the flattened OHLC quote format.
For schema details see:
https://docs.kraken.com/websockets/#message-ohlc
'''
chan_id: int # internal kraken id
chan_name: str # eg. ohlc-1 (name-interval)
pair: str # fx pair
# unpacked from array
time: float # Begin time of interval, in seconds since epoch
etime: float # End time of interval, in seconds since epoch
open: float # Open price of interval
high: float # High price within interval
low: float # Low price within interval
close: float # Close price of interval
vwap: float # Volume weighted average price within interval
volume: float # Accumulated volume **within interval**
count: int # Number of trades within interval
async def stream_messages(
ws: NoBsWs,
):
'''
Message stream parser and heartbeat handler.
Deliver ws subscription messages as well as handle heartbeat logic
though a single async generator.
'''
last_hb: float = 0
async for msg in ws:
match msg:
case {'event': 'heartbeat'}:
now = time.time()
delay = now - last_hb
last_hb = now
# XXX: why tf is this not printing without --tl flag?
log.debug(f"Heartbeat after {delay}")
# print(f"Heartbeat after {delay}")
continue
case _:
# passthrough sub msgs
yield msg
async def process_data_feed_msgs(
ws: NoBsWs,
):
'''
Parse and pack data feed messages.
'''
async with aclosing(stream_messages(ws)) as ws_stream:
async for msg in ws_stream:
match msg:
case {
'errorMessage': errmsg
}:
raise BrokerError(errmsg)
case {
'event': 'subscriptionStatus',
} as sub:
log.info(
'WS subscription is active:\n'
f'{sub}'
)
continue
case [
chan_id,
*payload_array,
chan_name,
pair
]:
if 'ohlc' in chan_name:
array: list = payload_array[0]
ohlc = OHLC(
chan_id,
chan_name,
pair,
*map(float, array[:-1]),
count=array[-1],
)
yield 'ohlc', ohlc.copy()
elif 'spread' in chan_name:
bid, ask, ts, bsize, asize = map(
float, payload_array[0])
# TODO: really makes you think IB has a horrible API...
quote = {
'symbol': pair.replace('/', ''),
'ticks': [
{'type': 'bid', 'price': bid, 'size': bsize},
{'type': 'bsize', 'price': bid, 'size': bsize},
{'type': 'ask', 'price': ask, 'size': asize},
{'type': 'asize', 'price': ask, 'size': asize},
],
}
yield 'l1', quote
# elif 'book' in msg[-2]:
# chan_id, *payload_array, chan_name, pair = msg
# print(msg)
case {
'connectionID': conid,
'event': 'systemStatus',
'status': 'online',
'version': ver,
}:
log.info(
f'Established {ver} ws connection with id: {conid}'
)
continue
case _:
print(f'UNHANDLED MSG: {msg}')
# yield msg
def normalize(ohlc: OHLC) -> dict:
'''
Norm an `OHLC` msg to piker's minimal (live-)quote schema.
'''
quote = ohlc.to_dict()
quote['broker_ts'] = quote['time']
quote['brokerd_ts'] = time.time()
quote['symbol'] = quote['pair'] = quote['pair'].replace('/', '')
quote['last'] = quote['close']
quote['bar_wap'] = ohlc.vwap
return quote
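# For example (made-up values, illustrative only):
#
# ohlc = OHLC(
#     chan_id=42, chan_name='ohlc-1', pair='XBT/USD',
#     time=1_700_000_000.0, etime=1_700_000_060.0,
#     open=35000., high=35100., low=34950., close=35050.,
#     vwap=35020., volume=1.25, count=7,
# )
# quote = normalize(ohlc)
# assert quote['symbol'] == 'XBTUSD'  # '/' stripped for the quote key
# assert quote['last'] == quote['close'] == 35050.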
@acm
async def open_history_client(
mkt: MktPair,
) -> AsyncGenerator[Callable, None]:
symbol: str = mkt.bs_mktid
# TODO implement history getter for the new storage layer.
async with open_cached_client('kraken') as client:
# lol, kraken won't send any more than the "last"
# 720 1m bars.. so we have to just ignore further
# requests of this type..
queries: int = 0
async def get_ohlc(
timeframe: float,
end_dt: Optional[datetime] = None,
start_dt: Optional[datetime] = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
nonlocal queries
if (
queries > 0
or timeframe != 60
):
raise DataUnavailable(
'Only a single query for 1m bars supported')
count = 0
while count <= 3:
try:
array = await client.bars(
symbol,
since=end_dt,
)
count += 1
queries += 1
break
except DataThrottle:
log.warning(f'kraken OHLC throttle for {symbol}')
await trio.sleep(1)
start_dt = pendulum.from_timestamp(array[0]['time'])
end_dt = pendulum.from_timestamp(array[-1]['time'])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 1, 'rate': 1}
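# (sketch) the history layer would use the delivered callable roughly
# as follows; any subsequent 1m query (or a non-1m timeframe) raises
# `DataUnavailable` per the guard above:
#
# array, start_dt, end_dt = await get_ohlc(60)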
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str = None,
# backend specific
sub_type: str = 'ohlc',
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Subscribe to an ohlc quote stream for each of the given ``symbols``,
where each kraken ws pair is formatted as <crypto_symbol>/<fiat_symbol>.
'''
ws_pairs: list[str] = []
init_msgs: list[FeedInit] = []
async with (
send_chan as send_chan,
):
for sym_str in symbols:
mkt, pair = await get_mkt_info(sym_str)
init_msgs.append(
FeedInit(mkt_info=mkt)
)
ws_pairs.append(pair.wsname)
@acm
async def subscribe(ws: NoBsWs):
# XXX: setup subs
# https://docs.kraken.com/websockets/#message-subscribe
# specific logic for this in kraken's sync client:
# https://github.com/krakenfx/kraken-wsclient-py/blob/master/kraken_wsclient_py/kraken_wsclient_py.py#L188
ohlc_sub = {
'event': 'subscribe',
'pair': ws_pairs,
'subscription': {
'name': 'ohlc',
'interval': 1,
},
}
# TODO: we want to eventually allow unsubs which should
# be completely fine to request from a separate task
# since internally the ws methods appear to be FIFO
# locked.
await ws.send_msg(ohlc_sub)
# trade data (aka L1)
l1_sub = {
'event': 'subscribe',
'pair': ws_pairs,
'subscription': {
'name': 'spread',
# 'depth': 10}
},
}
# pull a first quote and deliver
await ws.send_msg(l1_sub)
yield
# unsub from all pairs on teardown
if ws.connected():
await ws.send_msg({
'pair': ws_pairs,
'event': 'unsubscribe',
'subscription': ['ohlc', 'spread'],
})
# XXX: do we need to ack the unsub?
# await ws.recv_msg()
# see the tips on reconnection logic:
# https://support.kraken.com/hc/en-us/articles/360044504011-WebSocket-API-unexpected-disconnections-from-market-data-feeds
ws: NoBsWs
async with (
open_autorecon_ws(
'wss://ws.kraken.com/',
fixture=subscribe,
reset_after=20,
) as ws,
# avoid stream-gen closure from breaking trio..
# NOTE: not sure this actually works XD particularly
# if we call `ws._connect()` manually in the streaming
# async gen..
aclosing(process_data_feed_msgs(ws)) as msg_gen,
):
# pull a first quote and deliver
typ, ohlc_last = await anext(msg_gen)
quote = normalize(ohlc_last)
task_status.started((init_msgs, quote))
feed_is_live.set()
# keep start of last interval for volume tracking
last_interval_start: float = ohlc_last.etime
# start streaming
topic: str = mkt.bs_fqme
async for typ, quote in msg_gen:
match typ:
# TODO: can get rid of all this by using
# ``trades`` subscription..? Not sure why this
# wasn't used originally? (music queues) zoltannn..
# https://docs.kraken.com/websockets/#message-trade
case 'ohlc':
# generate tick values to match time & sales pane:
# https://trade.kraken.com/charts/KRAKEN:BTC-USD?period=1m
volume = quote.volume
# new OHLC sample interval
if quote.etime > last_interval_start:
last_interval_start: float = quote.etime
tick_volume: float = volume
else:
# this is the tick volume *within the interval*
tick_volume: float = volume - ohlc_last.volume
ohlc_last = quote
last = quote.close
quote = normalize(quote)
ticks = quote.setdefault(
'ticks',
[],
)
if tick_volume:
ticks.append({
'type': 'trade',
'price': last,
'size': tick_volume,
})
case 'l1':
# passthrough quote msg
pass
case _:
log.warning(f'Unknown WSS message: {typ}, {quote}')
await send_chan.send({topic: quote})


@ -0,0 +1,269 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Trade transaction accounting and normalization.
'''
import math
from pprint import pformat
from typing import (
Any,
)
import pendulum
from piker.accounting import (
Transaction,
Position,
Account,
get_likely_pair,
TransactionLedger,
# MktPair,
)
from piker.types import Struct
from piker.data import (
SymbologyCache,
)
from .api import (
log,
Client,
Pair,
)
# from .feed import get_mkt_info
def norm_trade(
tid: str,
record: dict[str, Any],
# this is the dict that was returned from
# `Client.get_mkt_pairs()` and when running offline ledger
# processing from `.accounting`, this will be the table loaded
# into `SymbologyCache.pairs`.
pairs: dict[str, Struct],
symcache: SymbologyCache | None = None,
) -> Transaction:
size: float = float(record.get('vol')) * {
'buy': 1,
'sell': -1,
}[record['type']]
# NOTE: this value may be either the websocket OR the rest schema
# so we need to detect the key format and then choose the
# correct symbol lookup table to eventually get a ``Pair``..
# See internals of `Client.asset_pairs()` for deats!
src_pair_key: str = record['pair']
# XXX: kraken's data engineering is soo bad they require THREE
# different pair schemas (more or less seemingly tied to
# transport-APIs)..LITERALLY they return different market id
# pairs in the ledger endpoints vs. the websocket event subs..
# lookup pair using the appropriately provided table depending
# on API-key-schema..
pair: Pair = pairs[src_pair_key]
fqme: str = pair.bs_fqme.lower() + '.kraken'
return Transaction(
fqme=fqme,
tid=tid,
size=size,
price=float(record['price']),
cost=float(record['fee']),
dt=pendulum.from_timestamp(float(record['time'])),
bs_mktid=pair.bs_mktid,
)
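# e.g. (illustrative record fields, not real API output): a 'sell'
# with vol='0.5' and price='100.0' maps to a `Transaction` with
# `size == -0.5`, while a 'buy' of the same vol gives `size == +0.5`.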
async def norm_trade_records(
ledger: dict[str, Any],
client: Client,
api_name_set: str = 'xname',
) -> dict[str, Transaction]:
'''
Loop through an input ``dict`` of trade records
and convert them to ``Transactions``.
'''
records: dict[str, Transaction] = {}
for tid, record in ledger.items():
# manual_fqme: str = f'{bs_mktid.lower()}.kraken'
# mkt: MktPair = (await get_mkt_info(manual_fqme))[0]
# fqme: str = mkt.fqme
# assert fqme == manual_fqme
pairs: dict[str, Pair] = {
'xname': client._AssetPairs,
'wsname': client._wsnames,
'altname': client._altnames,
}[api_name_set]
records[tid] = norm_trade(
tid,
record,
pairs=pairs,
)
return records
def has_pp(
acnt: Account,
src_fiat: str,
dst: str,
size: float,
) -> Position | None:
src2dst: dict[str, str] = {}
for bs_mktid in acnt.pps:
likely_pair = get_likely_pair(
src_fiat,
dst,
bs_mktid,
)
if likely_pair:
src2dst[src_fiat] = dst
for src, dst in src2dst.items():
pair: str = f'{dst}{src_fiat}'
pos: Position = acnt.pps.get(pair)
if (
pos
and math.isclose(pos.size, size)
):
return pos
elif (
pos
and size == 0
and pos.size
):
log.warning(
f'`kraken` account says you have a ZERO '
f'balance for {bs_mktid}:{pair}\n'
f'but piker seems to think `{pos.size}`\n'
'This is likely a discrepancy in piker '
'accounting if the above number is '
'large, though it is likely due to a lack '
'of tracking transfer (xfer) fees..'
)
return pos
return None # indicate no entry found
# TODO: factor most of this "account updating from txns" into
# the `Account` impl so as to provide for hiding the mostly
# cross-provider updates from txn sets
async def verify_balances(
acnt: Account,
src_fiat: str,
balances: dict[str, float],
client: Client,
ledger: TransactionLedger,
ledger_trans: dict[str, Transaction], # from toml
api_trans: dict[str, Transaction], # from API
simulate_pp_update: bool = False,
) -> None:
for dst, size in balances.items():
# we don't care about tracking positions
# in the user's source fiat currency.
if (
dst == src_fiat
or not any(
dst in bs_mktid for bs_mktid in acnt.pps
)
):
log.warning(
f'Skipping balance `{dst}`:{size} for position calcs!'
)
continue
# we have a balance for which there is no pos entry
# - we have to likely update from the ledger?
if not has_pp(acnt, src_fiat, dst, size):
updated = acnt.update_from_ledger(
ledger_trans,
symcache=ledger.symcache,
)
log.info(f'Updated pps from ledger:\n{pformat(updated)}')
# FIRST try reloading from API records
if (
not has_pp(acnt, src_fiat, dst, size)
and not simulate_pp_update
):
acnt.update_from_ledger(
api_trans,
symcache=ledger.symcache,
)
# get transfers to make sense of abs
# balances.
# NOTE: we do this after ledger and API
# loading since we might not have an
# entry in the
# ``account.kraken.spot.toml`` for the
# necessary pair yet and thus this
# likely pair grabber will likely fail.
if not has_pp(acnt, src_fiat, dst, size):
for bs_mktid in acnt.pps:
likely_pair: str | None = get_likely_pair(
src_fiat,
dst,
bs_mktid,
)
if likely_pair:
break
else:
raise ValueError(
'Could not find a position pair in '
'ledger for likely withdrawal '
f'candidate: {dst}'
)
# this was likely pos that had a withdrawal
# from the dst asset out of the account.
if likely_pair:
xfer_trans = await client.get_xfers(
dst,
# TODO: not all src assets are
# 3 chars long...
src_asset=likely_pair[3:],
)
if xfer_trans:
updated = acnt.update_from_ledger(
xfer_trans,
cost_scalar=1,
symcache=ledger.symcache,
)
log.info(
f'Updated {dst} from transfers:\n'
f'{pformat(updated)}'
)
if not has_pp(acnt, src_fiat, dst, size):
raise ValueError(
'Could not reproduce balance:\n'
f'dst: {dst}, {size}\n'
)


@ -0,0 +1,206 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Symbology defs and search.
'''
from decimal import Decimal
import tractor
from rapidfuzz import process as fuzzy
from piker._cacheables import (
async_lifo_cache,
)
from piker.accounting._mktinfo import (
digits_to_dec,
)
from piker.brokers import (
open_cached_client,
SymbolNotFound,
)
from piker.types import Struct
from piker.accounting._mktinfo import (
Asset,
MktPair,
unpack_fqme,
)
# https://www.kraken.com/features/api#get-tradable-pairs
class Pair(Struct):
xname: str # idiotic bs_mktid equiv i guess?
altname: str # alternate pair name
wsname: str # WebSocket pair name (if available)
aclass_base: str # asset class of base component
base: str # asset id of base component
aclass_quote: str # asset class of quote component
quote: str # asset id of quote component
lot: str # volume lot size
cost_decimals: int
costmin: float
pair_decimals: int # scaling decimal places for pair
lot_decimals: int # scaling decimal places for volume
# amount to multiply lot volume by to get currency volume
lot_multiplier: float
# array of leverage amounts available when buying
leverage_buy: list[int]
# array of leverage amounts available when selling
leverage_sell: list[int]
# fee schedule array in [volume, percent fee] tuples
fees: list[tuple[int, float]]
# maker fee schedule array in [volume, percent fee] tuples (if on
# maker/taker)
fees_maker: list[tuple[int, float]]
fee_volume_currency: str # volume discount currency
margin_call: str # margin call level
margin_stop: str # stop-out/liquidation margin level
ordermin: float # minimum order volume for pair
tick_size: float # min price step size
status: str
short_position_limit: float = 0
long_position_limit: float = float('inf')
# TODO: should we make this a literal NamespacePath ref?
ns_path: str = 'piker.brokers.kraken:Pair'
@property
def bs_mktid(self) -> str:
'''
Kraken seems to index its market symbol sets in
transaction ledgers using the key returned from rest
queries.. so use that since apparently they can't
make up their minds on a better key set XD
'''
return self.xname
@property
def price_tick(self) -> Decimal:
return digits_to_dec(self.pair_decimals)
@property
def size_tick(self) -> Decimal:
return digits_to_dec(self.lot_decimals)
@property
def bs_dst_asset(self) -> str:
dst, _ = self.wsname.split('/')
return dst
@property
def bs_src_asset(self) -> str:
_, src = self.wsname.split('/')
return src
@property
def bs_fqme(self) -> str:
'''
Basically the `.altname` but with special '.' handling and
`.SPOT` suffix appending (for future multi-venue support).
'''
dst, src = self.wsname.split('/')
# XXX: omg for stupid shite like ETH2.S/ETH..
dst = dst.replace('.', '-')
return f'{dst}{src}.SPOT'
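# For example (illustrative only): a `Pair` with
# `.wsname == 'ETH2.S/ETH'` maps to the fqme 'ETH2-SETH.SPOT' via the
# '.' -> '-' replacement above, while plain 'XBT/USD' simply becomes
# 'XBTUSD.SPOT'.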
@tractor.context
async def open_symbol_search(ctx: tractor.Context) -> None:
async with open_cached_client('kraken') as client:
# load all symbols locally for fast search
cache = await client.get_mkt_pairs()
await ctx.started(cache)
async with ctx.open_stream() as stream:
async for pattern in stream:
await stream.send(
await client.search_symbols(pattern)
)
@async_lifo_cache()
async def get_mkt_info(
fqme: str,
) -> tuple[MktPair, Pair]:
'''
Query for and return a `MktPair` and backend-native `Pair` (or
wtv else) info.
If more than one fqme is provided return a ``dict`` of native
key-strs to `MktPair`s.
'''
venue: str = 'spot'
expiry: str = ''
if '.kraken' not in fqme:
fqme += '.kraken'
broker, pair, venue, expiry = unpack_fqme(fqme)
venue: str = venue or 'spot'
if venue.lower() != 'spot':
raise SymbolNotFound(
'kraken only supports spot markets right now!\n'
f'{fqme}\n'
)
async with open_cached_client('kraken') as client:
# uppercase since kraken bs_mktid is always upper
# bs_fqme, _, broker = fqme.partition('.')
# pair_str: str = bs_fqme.upper()
pair_str: str = f'{pair}.{venue}'
pair: Pair | None = client._pairs.get(pair_str.upper())
if not pair:
bs_fqme: str = client.to_bs_fqme(pair_str)
pair: Pair = client._pairs[bs_fqme]
if not (assets := client._assets):
assets: dict[str, Asset] = await client.get_assets()
dst_asset: Asset = assets[pair.bs_dst_asset]
src_asset: Asset = assets[pair.bs_src_asset]
mkt = MktPair(
dst=dst_asset,
src=src_asset,
price_tick=pair.price_tick,
size_tick=pair.size_tick,
bs_mktid=pair.bs_mktid,
expiry=expiry,
venue=venue or 'spot',
# TODO: futes
# _atype=_atype,
broker='kraken',
)
return mkt, pair
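# (sketch) typical lookup flow, assuming a cached kraken client is
# already running and a hypothetical fqme key:
#
# mkt, pair = await get_mkt_info('xbtusd.kraken')
# assert mkt.bs_mktid == pair.xname  # per `Pair.bs_mktid` above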

File diff suppressed because it is too large


@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0) # Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -40,13 +40,17 @@ import wrapt
import asks import asks
from ..calc import humanize, percent_change from ..calc import humanize, percent_change
from .._cacheables import open_cached_client, async_lifo_cache from . import open_cached_client
from piker._cacheables import async_lifo_cache
from .. import config from .. import config
from ._util import resproc, BrokerError, SymbolNotFound from ._util import resproc, BrokerError, SymbolNotFound
from ..log import get_logger, colorize_json, get_console_log from ..log import (
colorize_json,
)
log = get_logger(__name__) from ._util import (
log,
get_console_log,
)
_use_practice_account = False _use_practice_account = False
_refresh_token_ep = 'https://{}login.questrade.com/oauth2/' _refresh_token_ep = 'https://{}login.questrade.com/oauth2/'


@ -27,12 +27,13 @@ from typing import List
from async_generator import asynccontextmanager from async_generator import asynccontextmanager
import asks import asks
from ..log import get_logger from ._util import (
from ._util import resproc, BrokerError resproc,
BrokerError,
log,
)
from ..calc import percent_change from ..calc import percent_change
log = get_logger(__name__)
_service_ep = 'https://api.robinhood.com' _service_ep = 'https://api.robinhood.com'
@ -65,8 +66,10 @@ class Client:
self.api = _API(self._sess) self.api = _API(self._sess)
def _zip_in_order(self, symbols: [str], quotes: List[dict]): def _zip_in_order(self, symbols: [str], quotes: List[dict]):
return {quote.get('symbol', sym) if quote else sym: quote return {
for sym, quote in zip(symbols, results_dict)} quote.get('symbol', sym) if quote else sym: quote
for sym, quote in zip(symbols, quotes)
}
async def quote(self, symbols: [str]): async def quote(self, symbols: [str]):
"""Retrieve quotes for a list of ``symbols``. """Retrieve quotes for a list of ``symbols``.


@ -0,0 +1,49 @@
piker.clearing
______________
trade execution-n-control subsys for both live and paper trading as
well as algo-trading manual override/interaction across any backend
broker and data provider.
avail UIs
*********
order ctl
---------
the `piker.clearing` subsys is exposed mainly through
the `piker chart` GUI as a "chart trader" style UX and
is automatically enabled whenever a chart is opened.
.. ^TODO, more prose here!
the "manual" order control features are exposed via the
`piker.ui.order_mode` API and can pretty much always be
used (at least) in simulated-trading mode, aka "paper"-mode, and
the micro-manual is as follows:
``order_mode`` (
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
``search_mode`` (
``ctl-l`` or ``ctl-space`` to open,
``ctl-c`` or ``ctl-space`` to close
) :
- begin typing to have symbol search automatically lookup
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation
position (pp) mgmt
------------------
you can also configure your position allocation limits from the
sidepane.
.. ^TODO, explain and provide tut once more refined!


@ -18,3 +18,38 @@
Market machinery for order executions, book, management. Market machinery for order executions, book, management.
""" """
from ..log import get_logger
from ._client import (
open_ems,
OrderClient,
)
from ._ems import (
open_brokerd_dialog,
)
from ._util import OrderDialogs
from ._messages import (
Order,
Status,
Cancel,
# TODO: deprecate these and replace end-2-end with
# client-side-dialog set above B)
# https://github.com/pikers/piker/issues/514
BrokerdPosition
)
__all__ = [
'FeeModel',
'open_ems',
'OrderClient',
'open_brokerd_dialog',
'OrderDialogs',
'Order',
'Status',
'Cancel',
'BrokerdPosition'
]
log = get_logger(__name__)


@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0) # Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -18,211 +18,284 @@
Orders and execution client API. Orders and execution client API.
""" """
from __future__ import annotations
from contextlib import asynccontextmanager as acm from contextlib import asynccontextmanager as acm
from typing import Dict
from pprint import pformat from pprint import pformat
from dataclasses import dataclass, field from typing import TYPE_CHECKING
import trio import trio
import tractor import tractor
from tractor.trionics import broadcast_receiver from tractor.trionics import broadcast_receiver
from ..log import get_logger from ._util import (
from ._ems import _emsd_main log, # sub-sys logger
from .._daemon import maybe_open_emsd )
from ._messages import Order, Cancel from piker.types import Struct
from ..service import maybe_open_emsd
from ._messages import (
Order,
Cancel,
BrokerdPosition,
)
if TYPE_CHECKING:
from ._messages import (
Status,
)
log = get_logger(__name__) class OrderClient(Struct):
'''
EMS-client-side order book ctl and tracking.
(A)sync API for submitting orders and alerts to the `emsd` service;
@dataclass this is the main control for execution management from client code.
class OrderBook:
'''EMS-client-side order book ctl and tracking.
A style similar to "model-view" is used here where this api is
provided as a supervised control for an EMS actor which does all the
hard/fast work of talking to brokers/exchanges to conduct
executions.
Currently, this is mostly for keeping local state to match the EMS
and use received events to trigger graphics updates.
''' '''
# IPC stream to `emsd` actor
_ems_stream: tractor.MsgStream
# mem channels used to relay order requests to the EMS daemon # mem channels used to relay order requests to the EMS daemon
_to_ems: trio.abc.SendChannel _to_relay_task: trio.abc.SendChannel
_from_order_book: trio.abc.ReceiveChannel _from_sync_order_client: trio.abc.ReceiveChannel
_sent_orders: Dict[str, Order] = field(default_factory=dict) # history table
_ready_to_receive: trio.Event = trio.Event() _sent_orders: dict[str, Order] = {}
def send( def send_nowait(
self, self,
msg: Order, msg: Order | dict,
) -> dict: ) -> dict | Order:
'''
Sync version of ``.send()``.
'''
self._sent_orders[msg.oid] = msg self._sent_orders[msg.oid] = msg
self._to_ems.send_nowait(msg.dict()) self._to_relay_task.send_nowait(msg)
return msg return msg
def update( async def send(
self, self,
msg: Order | dict,
) -> dict | Order:
'''
Send a new order msg async to the `emsd` service.
'''
self._sent_orders[msg.oid] = msg
await self._ems_stream.send(msg)
return msg
def update_nowait(
self,
uuid: str, uuid: str,
**data: dict, **data: dict,
) -> dict: ) -> dict:
cmd = self._sent_orders[uuid] '''
msg = cmd.dict() Sync version of ``.update()``.
msg.update(data)
self._sent_orders[uuid] = Order(**msg)
self._to_ems.send_nowait(msg)
return cmd
def cancel(self, uuid: str) -> bool: '''
"""Cancel an order (or alert) in the EMS.
"""
cmd = self._sent_orders[uuid] cmd = self._sent_orders[uuid]
msg = Cancel( msg = cmd.copy(update=data)
self._sent_orders[uuid] = msg
self._to_relay_task.send_nowait(msg)
return msg
async def update(
self,
uuid: str,
**data: dict,
) -> dict:
'''
Update an existing order dialog with a msg updated from
``update`` kwargs.
'''
cmd = self._sent_orders[uuid]
msg = cmd.copy(update=data)
self._sent_orders[uuid] = msg
await self._ems_stream.send(msg)
return msg
def _mk_cancel_msg(
self,
uuid: str,
) -> Cancel:
cmd = self._sent_orders.get(uuid)
if not cmd:
log.error(
f'Unknown order {uuid}!?\n'
f'Maybe there is a stale entry or line?\n'
f'You should report this as a bug!'
)
return
fqme = str(cmd.symbol)
return Cancel(
oid=uuid, oid=uuid,
symbol=cmd.symbol, symbol=fqme,
)
self._to_ems.send_nowait(msg.dict())
_orders: OrderBook = None
def get_orders(
emsd_uid: tuple[str, str] = None
) -> OrderBook:
""""
OrderBook singleton factory per actor.
"""
if emsd_uid is not None:
# TODO: read in target emsd's active book on startup
pass
global _orders
if _orders is None:
size = 100
tx, rx = trio.open_memory_channel(size)
brx = broadcast_receiver(rx, size)
# setup local ui event streaming channels for request/resp
# streamging with EMS daemon
_orders = OrderBook(
_to_ems=tx,
_from_order_book=brx,
) )
return _orders def cancel_nowait(
self,
uuid: str,
) -> None:
'''
Sync version of ``.cancel()``.
'''
self._to_relay_task.send_nowait(
self._mk_cancel_msg(uuid)
)
async def cancel(
self,
uuid: str,
) -> bool:
'''
Cancel an already existing order (or alert) dialog.
'''
await self._ems_stream.send(
self._mk_cancel_msg(uuid)
)
# TODO: we can get rid of this relay loop once we move
# order_mode inputs to async code!
async def relay_order_cmds_from_sync_code(
async def relay_orders_from_sync_code(
client: OrderClient,
symbol_key: str, symbol_key: str,
to_ems_stream: tractor.MsgStream, to_ems_stream: tractor.MsgStream,
) -> None: ) -> None:
""" '''
Order streaming task: deliver orders transmitted from UI Order submission relay task: deliver orders sent from synchronous (UI)
to downstream consumers. code to the EMS via ``OrderClient._from_sync_order_client``.
This is run in the UI actor (usually the one running Qt but could be This is run in the UI actor (usually the one running Qt but could be
any other client service code). This process simply delivers order any other client service code). This process simply delivers order
messages to the above ``_to_ems`` send channel (from sync code using messages to the above ``_to_relay_task`` send channel (from sync code using
``.send_nowait()``), these values are pulled from the channel here ``.send_nowait()``), these values are pulled from the channel here
and relayed to any consumer(s) that called this function using and relayed to any consumer(s) that called this function using
a ``tractor`` portal. a ``tractor`` portal.
This effectively makes order messages look like they're being This effectively makes order messages look like they're being
"pushed" from the parent to the EMS where local sync code is likely "pushed" from the parent to the EMS where local sync code is likely
doing the pushing from some UI. doing the pushing from some non-async UI handler.
""" '''
book = get_orders() async with (
async with book._from_order_book.subscribe() as orders_stream: client._from_sync_order_client.subscribe() as sync_order_cmds
async for cmd in orders_stream: ):
if cmd['symbol'] == symbol_key: async for cmd in sync_order_cmds:
log.info(f'Send order cmd:\n{pformat(cmd)}') sym = cmd.symbol
msg = pformat(cmd.to_dict())
if sym == symbol_key:
log.info(f'Send order cmd:\n{msg}')
# send msg over IPC / wire # send msg over IPC / wire
await to_ems_stream.send(cmd) await to_ems_stream.send(cmd)
else:
log.warning(
f'Ignoring unmatched order cmd for {sym} != {symbol_key}:'
f'\n{msg}'
)
@acm @acm
async def open_ems( async def open_ems(
fqsn: str, fqme: str,
mode: str = 'live',
loglevel: str = 'error',
) -> ( ) -> tuple[
OrderBook, OrderClient, # client
tractor.MsgStream, tractor.MsgStream, # order ctl stream
dict, dict[
): # brokername, acctid
tuple[str, str],
dict[str, BrokerdPosition],
],
list[str],
dict[str, Status],
]:
''' '''
Spawn an EMS daemon and begin sending orders and receiving (Maybe) spawn an EMS-daemon (emsd), deliver an `OrderClient` for
alerts. requesting orders/alerts and a `trades_stream` which delivers all
response-msgs.
This EMS tries to reduce most broker's terrible order entry apis to This is a "client side" entrypoint which may spawn the `emsd` service
a very simple protocol built on a few easy to grok and/or if it can't be discovered and generally speaking is the lowest level
"rantsy" premises: broker control client-API.
- most users will prefer "dark mode" where orders are not submitted
to a broker until and execution condition is triggered
(aka client-side "hidden orders")
- Brokers over-complicate their apis and generally speaking hire
poor designers to create them. We're better off using creating a super
minimal, schema-simple, request-event-stream protocol to unify all the
existing piles of shit (and shocker, it'll probably just end up
looking like a decent crypto exchange's api)
- all order types can be implemented with client-side limit orders
- we aren't reinventing a wheel in this case since none of these
brokers are exposing FIX protocol; it is they doing the re-invention.
TODO: make some fancy diagrams using mermaid.io
the possible set of responses from the stream is currently:
- 'dark_submitted', 'broker_submitted'
- 'dark_cancelled', 'broker_cancelled'
- 'dark_executed', 'broker_executed'
- 'broker_filled'
''' '''
# wait for service to connect back to us signalling # TODO: prolly hand in the `MktPair` instance directly here as well!
# ready for order commands from piker.accounting import unpack_fqme
book = get_orders() broker, mktep, venue, suffix = unpack_fqme(fqme)
from ..data._source import unpack_fqsn async with maybe_open_emsd(
broker, symbol, suffix = unpack_fqsn(fqsn) broker,
loglevel=loglevel,
async with maybe_open_emsd(broker) as portal: ) as portal:
from ._ems import _emsd_main
async with ( async with (
# connect to emsd # connect to emsd
portal.open_context( portal.open_context(
_emsd_main, _emsd_main,
fqsn=fqsn, fqme=fqme,
exec_mode=mode,
loglevel=loglevel,
) as (ctx, (positions, accounts)), ) as (
ctx,
(
positions,
accounts,
dialogs,
)
),
# open 2-way trade command stream # open 2-way trade command stream
ctx.open_stream() as trades_stream, ctx.open_stream() as trades_stream,
): ):
size: int = 100 # what should this be?
tx, rx = trio.open_memory_channel(size)
brx = broadcast_receiver(rx, size)
# setup local ui event streaming channels for request/resp
# streamging with EMS daemon
client = OrderClient(
_ems_stream=trades_stream,
_to_relay_task=tx,
_from_sync_order_client=brx,
)
client._ems_stream = trades_stream
# start sync code order msg delivery task
async with trio.open_nursery() as n: async with trio.open_nursery() as n:
n.start_soon( n.start_soon(
relay_order_cmds_from_sync_code, relay_orders_from_sync_code,
fqsn, client,
fqme,
trades_stream trades_stream
) )
yield book, trades_stream, positions, accounts yield (
client,
trades_stream,
positions,
accounts,
dialogs,
)
# stop the sync-msg-relay task on exit.
n.cancel_scope.cancel()

File diff suppressed because it is too large


@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0) # Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -15,108 +15,148 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
""" """
Clearing system messaging types and protocols. Clearing sub-system message and protocols.
""" """
from typing import Optional, Union from __future__ import annotations
from typing import (
Literal,
)
# TODO: try out just encoding/send direction for now? from msgspec import field
# import msgspec
from pydantic import BaseModel
from ..data._source import Symbol from piker.types import Struct
# TODO: ``msgspec`` stuff worth paying attention to:
# - schema evolution:
# https://jcristharif.com/msgspec/usage.html#schema-evolution
# - for eg. ``BrokerdStatus``, instead just have separate messages?
# - use literals for a common msg determined by diff keys?
# - https://jcristharif.com/msgspec/usage.html#literal
# --------------
# Client -> emsd # Client -> emsd
# --------------
class Order(Struct):
class Cancel(BaseModel): # TODO: ideally we can combine these 2 fields into
'''Cancel msg for removing a dark (ems triggered) or # 1 and just use the size polarity to determine a buy/sell.
broker-submitted (live) trigger/order. # i would like to see this become more like
# https://jcristharif.com/msgspec/usage.html#literal
''' # action: Literal[
action: str = 'cancel' # 'live',
oid: str # uuid4 # 'dark',
symbol: str # 'alert',
# ]
class Order(BaseModel):
action: str # {'buy', 'sell', 'alert'}
# internal ``emdsd`` unique "order id"
oid: str # uuid4
symbol: Union[str, Symbol]
account: str # should we set a default as '' ?
price: float
size: float
brokers: list[str]
# Assigned once initial ack is received
# ack_time_ns: Optional[int] = None
action: Literal[
'buy',
'sell',
'alert',
]
# determines whether the create execution # determines whether the create execution
# will be submitted to the ems or directly to # will be submitted to the ems or directly to
# the backend broker # the backend broker
exec_mode: str # {'dark', 'live', 'paper'} exec_mode: Literal[
'dark',
'live',
# 'paper', no right?
]
class Config: # internal ``emdsd`` unique "order id"
# just for pre-loading a ``Symbol`` when used oid: str # uuid4
# in the order mode staging process # TODO: figure out how to optionally typecast this to `MktPair`?
arbitrary_types_allowed = True symbol: str # | MktPair
# don't copy this model instance when used in account: str # should we set a default as '' ?
# a recursive model
copy_on_model_validation = False
price: float
size: float # -ve is "sell", +ve is "buy"
brokers: list[str] = []
class Cancel(Struct):
'''
Cancel msg for removing a dark (ems triggered) or
broker-submitted (live) trigger/order.
'''
oid: str # uuid4
symbol: str
action: str = 'cancel'
# --------------
# Client <- emsd # Client <- emsd
# --------------
# update msgs from ems which relay state change info # update msgs from ems which relay state change info
# from the active clearing engine. # from the active clearing engine.
class Status(Struct):
class Status(BaseModel): time_ns: int
oid: str # uuid4 ems-order dialog id
resp: Literal[
'pending', # acked by broker but not yet open
'open',
'dark_open', # dark/algo triggered order is open in ems clearing loop
'triggered', # above triggered order sent to brokerd, or an alert closed
'closed', # fully cleared all size/units
'fill', # partial execution
'canceled',
'error',
]
name: str = 'status' name: str = 'status'
oid: str # uuid4
time_ns: int
# {
# 'dark_submitted',
# 'dark_cancelled',
# 'dark_triggered',
# 'broker_submitted',
# 'broker_cancelled',
# 'broker_executed',
# 'broker_filled',
# 'broker_errored',
# 'alert_submitted',
# 'alert_triggered',
# }
resp: str # "response", see above
# symbol: str
# trigger info
trigger_price: Optional[float] = None
# price: float
# broker: Optional[str] = None
# this maps normally to the ``BrokerdOrder.reqid`` below, an id # this maps normally to the ``BrokerdOrder.reqid`` below, an id
# normally allocated internally by the backend broker routing system # normally allocated internally by the backend broker routing system
broker_reqid: Optional[Union[int, str]] = None reqid: int | str | None = None
# for relaying backend msg data "through" the ems layer # the (last) source order/request msg if provided
# (eg. the Order/Cancel which causes this msg) and
# acts as a back-reference to the corresponding
# request message which was the source of this msg.
req: Order | None = None
# XXX: better design/name here?
# flag that can be set to indicate a message for an order
# event that wasn't originated by piker's emsd (eg. some external
# trading system which does it's own order control but that you
# might want to "track" using piker UIs/systems).
src: str | None = None
# set when a cancel request msg was set for this order flow dialog
# but the brokerd dialog isn't yet in a cancelled state.
cancel_called: bool = False
# for relaying a boxed brokerd-dialog-side msg data "through" the
# ems layer to clients.
brokerd_msg: dict = {} brokerd_msg: dict = {}
class Error(Status):
resp: str = 'error'
# TODO: allow re-wrapping from existing (last) status?
@classmethod
def from_status(
cls,
msg: Status,
) -> Error:
...
# ---------------
# emsd -> brokerd # emsd -> brokerd
# ---------------
# requests *sent* from ems to respective backend broker daemon # requests *sent* from ems to respective backend broker daemon
class BrokerdCancel(BaseModel): class BrokerdCancel(Struct):
action: str = 'cancel'
oid: str # piker emsd order id oid: str # piker emsd order id
time_ns: int time_ns: int
@ -127,34 +167,39 @@ class BrokerdCancel(BaseModel):
# for setting a unique order id then this value will be relayed back # for setting a unique order id then this value will be relayed back
# on the emsd order request stream as the ``BrokerdOrderAck.reqid`` # on the emsd order request stream as the ``BrokerdOrderAck.reqid``
# field # field
reqid: Optional[Union[int, str]] = None reqid: int | str | None = None
action: str = 'cancel'
class BrokerdOrder(BaseModel): class BrokerdOrder(Struct):
action: str # {buy, sell}
oid: str oid: str
account: str account: str
time_ns: int time_ns: int
symbol: str # fqme
price: float
size: float
# TODO: if we instead rely on a +ve/-ve size to determine
# the action we more or less don't need this field right?
action: str = '' # {buy, sell}
# "broker request id": broker specific/internal order id if this is # "broker request id": broker specific/internal order id if this is
# None, creates a new order otherwise if the id is valid the backend # None, creates a new order otherwise if the id is valid the backend
# api must modify the existing matching order. If the broker allows # api must modify the existing matching order. If the broker allows
# for setting a unique order id then this value will be relayed back # for setting a unique order id then this value will be relayed back
# on the emsd order request stream as the ``BrokerdOrderAck.reqid`` # on the emsd order request stream as the ``BrokerdOrderAck.reqid``
# field # field
reqid: Optional[Union[int, str]] = None reqid: int | str | None = None
symbol: str # symbol.<providername> ?
price: float
size: float
# ---------------
# emsd <- brokerd # emsd <- brokerd
# ---------------
# requests *received* to ems from broker backend # requests *received* to ems from broker backend
class BrokerdOrderAck(Struct):
class BrokerdOrderAck(BaseModel):
''' '''
Immediate response to a brokerd order request providing the broker Immediate response to a brokerd order request providing the broker
specific unique order id so that the EMS can associate this specific unique order id so that the EMS can associate this
@ -162,102 +207,100 @@ class BrokerdOrderAck(BaseModel):
``.oid`` (which is a uuid4). ``.oid`` (which is a uuid4).
''' '''
name: str = 'ack'
# defined and provided by backend # defined and provided by backend
reqid: Union[int, str] reqid: int | str
# emsd id originally sent in matching request msg # emsd id originally sent in matching request msg
oid: str oid: str
# TODO: do we need this?
account: str = '' account: str = ''
name: str = 'ack'
class BrokerdStatus(BaseModel): class BrokerdStatus(Struct):
name: str = 'status'
reqid: Union[int, str]
time_ns: int time_ns: int
reqid: int | str
status: Literal[
'open',
'canceled',
'pending',
# 'error', # NOTE: use `BrokerdError`
'closed',
]
name: str = 'status'
# XXX: should be best effort set for every update oid: str = ''
account: str = '' # TODO: do we need this?
account: str | None = None,
# {
# 'submitted',
# 'cancelled',
# 'filled',
# }
status: str
filled: float = 0.0 filled: float = 0.0
reason: str = '' reason: str = ''
remaining: float = 0.0 remaining: float = 0.0
# XXX: better design/name here? # external: bool = False
# flag that can be set to indicate a message for an order
# event that wasn't originated by piker's emsd (eg. some external
# trading system which does it's own order control but that you
# might want to "track" using piker UIs/systems).
external: bool = False
# XXX: not required schema as of yet # XXX: not required schema as of yet
broker_details: dict = { broker_details: dict = field(default_factory=lambda: {
'name': '', 'name': '',
} })
class BrokerdFill(BaseModel): class BrokerdFill(Struct):
''' '''
A single message indicating a "fill-details" event from the broker A single message indicating a "fill-details" event from the
if available. broker if available.
''' '''
name: str = 'fill'
reqid: Union[int, str]
time_ns: int
# order exeuction related
action: str
size: float
price: float
broker_details: dict = {} # meta-data (eg. commisions etc.)
# brokerd timestamp required for order mode arrow placement on x-axis # brokerd timestamp required for order mode arrow placement on x-axis
# TODO: maybe int if we force ns? # TODO: maybe int if we force ns?
# we need to normalize this somehow since backends will use their # we need to normalize this somehow since backends will use their
# own format and likely across many disparate epoch clocks... # own format and likely across many disparate epoch clocks...
time_ns: int
broker_time: float broker_time: float
reqid: int | str
# order execution related
size: float
price: float
name: str = 'fill'
action: str | None = None
broker_details: dict = {} # meta-data (eg. commissions etc.)
class BrokerdError(BaseModel): class BrokerdError(Struct):
''' '''
Optional error type that can be relayed to emsd for error handling. Optional error type that can be relayed to emsd for error handling.
This is still a TODO thing since we're not sure how to employ it yet. This is still a TODO thing since we're not sure how to employ it yet.
''' '''
name: str = 'error' reason: str
oid: str
# TODO: drop this right?
symbol: str | None = None
oid: str | None = None
# if no brokerd order request was actually submitted (eg. we errored # if no brokerd order request was actually submitted (eg. we errored
# at the ``pikerd`` layer) then there will be ``reqid`` allocated. # at the ``pikerd`` layer) then there will be ``reqid`` allocated.
reqid: Optional[Union[int, str]] = None reqid: str | None = None
symbol: str name: str = 'error'
reason: str
broker_details: dict = {} broker_details: dict = {}
class BrokerdPosition(BaseModel): # TODO: yeah, so we REALLY need to completely deprecate
'''Position update event from brokerd. # this and use the `.accounting.Position` msg-type instead..
class BrokerdPosition(Struct):
'''
Position update event from brokerd.
''' '''
name: str = 'position'
broker: str broker: str
account: str account: str
symbol: str symbol: str
currency: str
size: float size: float
avg_price: float avg_price: float
currency: str = ''
name: str = 'position'

File diff suppressed because it is too large


@ -0,0 +1,93 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Sub-sys module commons.
"""
from collections import ChainMap
from functools import partial
from typing import Any
from ..log import (
get_logger,
get_console_log,
)
from piker.types import Struct
subsys: str = 'piker.clearing'
log = get_logger(subsys)
get_console_log = partial(
get_console_log,
name=subsys,
)
class OrderDialogs(Struct):
'''
Order control dialog (and thus transaction) tracking via
message recording.
Allows easily recording messages associated with a given set of
order control transactions and looking up the latest field
state using the entire (reverse chronological) msg flow.
'''
_flows: dict[str, ChainMap] = {}
def add_msg(
self,
oid: str,
msg: dict,
) -> None:
# NOTE: manually enter a new map on the first msg add to
# avoid creating one with an empty dict first entry in
# `ChainMap.maps` which is the default if none passed at
# init.
cm: ChainMap = self._flows.get(oid)
if cm:
cm.maps.insert(0, msg)
else:
cm = ChainMap(msg)
self._flows[oid] = cm
# TODO: wrap all this in the `collections.abc.Mapping` interface?
def get(
self,
oid: str,
) -> ChainMap[str, Any]:
'''
Return the dialog `ChainMap` for provided id.
'''
return self._flows.get(oid, None)
def pop(
self,
oid: str,
) -> ChainMap[str, Any]:
'''
Pop and thus remove the `ChainMap` containing the msg flow
for the given order id.
'''
if (flow := self._flows.pop(oid, None)) is None:
log.warning(f'No flow found for oid: {oid}')
return flow
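# A minimal usage sketch (hypothetical oid and msg values) showing the
# reverse-chronological `ChainMap` lookup semantics: the most recently
# added msg's fields shadow older ones while unset fields fall through:
#
# dialogs = OrderDialogs()
# dialogs.add_msg('oid-123', {'status': 'open', 'size': 1.0})
# dialogs.add_msg('oid-123', {'status': 'filled'})
# flow = dialogs.get('oid-123')
# assert flow['status'] == 'filled'  # latest msg wins
# assert flow['size'] == 1.0  # falls through to the first msg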


@ -1,121 +1,295 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers) # Copyright (C) 2018-present Tyler Goodlet
# (in stewardship for pikers, everywhere.)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or
# it under the terms of the GNU Affero General Public License as published by # modify it under the terms of the GNU Affero General Public
# the Free Software Foundation, either version 3 of the License, or # License as published by the Free Software Foundation, either
# (at your option) any later version. # version 3 of the License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful, # This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of # but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# GNU Affero General Public License for more details. # Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public
# along with this program. If not, see <https://www.gnu.org/licenses/>. # License along with this program. If not, see
# <https://www.gnu.org/licenses/>.
''' '''
CLI commons. CLI commons.
''' '''
import os import os
from pprint import pformat # from contextlib import AsyncExitStack
from types import ModuleType
import click import click
import trio import trio
import tractor import tractor
from tractor._multiaddr import parse_maddr
from ..log import get_console_log, get_logger, colorize_json from ..log import (
get_console_log,
get_logger,
colorize_json,
)
from ..brokers import get_brokermod from ..brokers import get_brokermod
from .._daemon import _tractor_kwargs from ..service import (
_default_registry_host,
_default_registry_port,
)
from .. import config from .. import config
log = get_logger('cli') log = get_logger('piker.cli')
DEFAULT_BROKER = 'questrade'
def load_trans_eps(
network: dict | None = None,
maddrs: list[tuple] | None = None,
) -> dict[str, dict[str, dict]]:
# transport-oriented endpoint multi-addresses
eps: dict[
str, # service name, eg. `pikerd`, `emsd`..
# libp2p style multi-addresses parsed into prot layers
list[dict[str, str | int]]
] = {}
if (
network
and not maddrs
):
# load network section and (attempt to) connect all endpoints
# which are reachable B)
for key, maddrs in network.items():
match key:
# TODO: resolve table across multiple discov
# prots Bo
case 'resolv':
pass
case 'pikerd':
dname: str = key
for maddr in maddrs:
layers: dict = parse_maddr(maddr)
eps.setdefault(
dname,
[],
).append(layers)
elif maddrs:
# presume user is manually specifying the root actor ep.
eps['pikerd'] = [parse_maddr(maddr)]
return eps
@click.command() @click.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--pdb', is_flag=True, help='Enable tractor debug mode')
@click.option('--host', '-h', default='127.0.0.1', help='Host address to bind')
@click.option( @click.option(
'--tsdb', '--loglevel',
is_flag=True, '-l',
help='Enable local ``marketstore`` instance' default='warning',
help='Logging level',
) )
def pikerd(loglevel, host, tl, pdb, tsdb): @click.option(
'--tl',
is_flag=True,
help='Enable tractor-runtime logs',
)
@click.option(
'--pdb',
is_flag=True,
help='Enable tractor debug mode',
)
@click.option(
'--maddr',
'-m',
default=None,
help='Multiaddrs to bind or contact',
)
# @click.option(
# '--tsdb',
# is_flag=True,
# help='Enable local ``marketstore`` instance'
# )
# @click.option(
# '--es',
# is_flag=True,
# help='Enable local ``elasticsearch`` instance'
# )
def pikerd(
maddr: list[str] | None,
loglevel: str,
tl: bool,
pdb: bool,
# tsdb: bool,
# es: bool,
):
''' '''
Spawn the piker broker-daemon. Spawn the piker broker-daemon.
''' '''
from .._daemon import open_pikerd from tractor.devx import maybe_open_crash_handler
log = get_console_log(loglevel) with maybe_open_crash_handler(pdb=pdb):
log = get_console_log(loglevel, name='cli')
if pdb: if pdb:
log.warning(( log.warning((
"\n" "\n"
"!!! You have enabled daemon DEBUG mode !!!\n" "!!! YOU HAVE ENABLED DAEMON DEBUG MODE !!!\n"
"If a daemon crashes it will likely block" "When a `piker` daemon crashes it will block the "
" the service until resumed from console!\n" "task-thread until resumed from console!\n"
"\n" "\n"
)) ))
async def main(): # service-actor registry endpoint socket-address set
regaddrs: list[tuple[str, int]] = []
async with ( conf, _ = config.load(
open_pikerd( conf_name='conf',
loglevel=loglevel, )
debug_mode=pdb, network: dict = conf.get('network')
), # normally delivers a ``Services`` handle if (
trio.open_nursery() as n, network is None
and not maddr
): ):
if tsdb: regaddrs = [(
from piker.data._ahab import start_ahab _default_registry_host,
from piker.data.marketstore import start_marketstore _default_registry_port,
)]
log.info('Spawning `marketstore` supervisor') else:
ctn_ready, config, (cid, pid) = await n.start( eps: dict = load_trans_eps(
start_ahab, network,
'marketstored', maddr,
start_marketstore, )
for layers in eps['pikerd']:
regaddrs.append((
layers['ipv4']['addr'],
layers['tcp']['port'],
))
) from .. import service
log.info(
f'`marketstore` up!\n'
f'`marketstored` pid: {pid}\n'
f'docker container id: {cid}\n'
f'config: {pformat(config)}'
)
await trio.sleep_forever() async def main():
service_mngr: service.Services
trio.run(main) async with (
service.open_pikerd(
registry_addrs=regaddrs,
loglevel=loglevel,
debug_mode=pdb,
) as service_mngr, # normally delivers a ``Services`` handle
# AsyncExitStack() as stack,
):
# TODO: spawn all other sub-actor daemons according to
# multiaddress endpoint spec defined by user config
assert service_mngr
# if tsdb:
# dname, conf = await stack.enter_async_context(
# service.marketstore.start_ahab_daemon(
# service_mngr,
# loglevel=loglevel,
# )
# )
# log.info(f'TSDB `{dname}` up with conf:\n{conf}')
# if es:
# dname, conf = await stack.enter_async_context(
# service.elastic.start_ahab_daemon(
# service_mngr,
# loglevel=loglevel,
# )
# )
# log.info(f'DB `{dname}` up with conf:\n{conf}')
await trio.sleep_forever()
trio.run(main)
@click.group(context_settings=config._context_defaults) @click.group(context_settings=config._context_defaults)
@click.option( @click.option(
'--brokers', '-b', '--brokers', '-b',
default=[DEFAULT_BROKER], default=None,
multiple=True, multiple=True,
help='Broker backend to use' help='Broker backend to use'
) )
@click.option('--loglevel', '-l', default='warning', help='Logging level') @click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging') @click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--configdir', '-c', help='Configuration directory') @click.option('--configdir', '-c', help='Configuration directory')
@click.option(
'--pdb',
is_flag=True,
help='Enable runtime debug mode ',
)
@click.option(
'--maddr',
'-m',
default=None,
multiple=True,
help='Multiaddr to bind',
)
@click.option(
'--regaddr',
'-r',
default=None,
help='Registrar addr to contact',
)
@click.pass_context @click.pass_context
def cli(ctx, brokers, loglevel, tl, configdir): def cli(
ctx: click.Context,
brokers: list[str],
loglevel: str,
tl: bool,
configdir: str,
pdb: bool,
# TODO: make these list[str] with multiple -m maddr0 -m maddr1
maddr: list[str],
regaddr: str,
) -> None:
if configdir is not None: if configdir is not None:
assert os.path.isdir(configdir), f"`{configdir}` is not a valid path" assert os.path.isdir(configdir), f"`{configdir}` is not a valid path"
config._override_config_dir(configdir) config._override_config_dir(configdir)
# TODO: for typer see
# https://typer.tiangolo.com/tutorial/commands/context/
ctx.ensure_object(dict) ctx.ensure_object(dict)
if len(brokers) == 1: if not brokers:
brokermods = [get_brokermod(brokers[0])] # (try to) load all (supposedly) supported data/broker backends
else: from piker.brokers import __brokers__
brokermods = [get_brokermod(broker) for broker in brokers] brokers = __brokers__
brokermods: dict[str, ModuleType] = {
broker: get_brokermod(broker) for broker in brokers
}
assert brokermods
# TODO: load endpoints from `conf::[network].pikerd`
# - pikerd vs. regd, separate registry daemon?
# - expose datad vs. brokerd?
# - bind emsd with certain perms on public iface?
regaddrs: list[tuple[str, int]] = regaddr or [(
_default_registry_host,
_default_registry_port,
)]
# TODO: factor [network] section parsing out from pikerd
# above and call it here as well.
# if maddr:
# for addr in maddr:
# layers: dict = parse_maddr(addr)
ctx.obj.update({ ctx.obj.update({
'brokers': brokers, 'brokers': brokers,
@ -125,6 +299,12 @@ def cli(ctx, brokers, loglevel, tl, configdir):
'log': get_console_log(loglevel), 'log': get_console_log(loglevel),
'confdir': config._config_dir, 'confdir': config._config_dir,
'wl_path': config._watchlists_data_path, 'wl_path': config._watchlists_data_path,
'registry_addrs': regaddrs,
'pdb': pdb, # debug mode flag
# TODO: endpoint parsing, pinging and binding
# on no existing server.
# 'maddrs': maddr,
}) })
# allow enabling same loglevel in ``tractor`` machinery # allow enabling same loglevel in ``tractor`` machinery
@ -134,38 +314,52 @@ def cli(ctx, brokers, loglevel, tl, configdir):
@cli.command() @cli.command()
@click.option('--tl', is_flag=True, help='Enable tractor logging') @click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.argument('names', nargs=-1, required=False) @click.argument('ports', nargs=-1, required=False)
@click.pass_obj @click.pass_obj
def services(config, tl, names): def services(config, tl, ports):
from ..service import (
open_piker_runtime,
_default_registry_port,
_default_registry_host,
)
host = _default_registry_host
if not ports:
ports = [_default_registry_port]
async def list_services(): async def list_services():
nonlocal host
async with tractor.get_arbiter( async with (
*_tractor_kwargs['arbiter_addr'] open_piker_runtime(
) as portal: name='service_query',
loglevel=config['loglevel'] if tl else None,
),
tractor.get_arbiter(
host=host,
port=ports[0]
) as portal
):
registry = await portal.run_from_ns('self', 'get_registry') registry = await portal.run_from_ns('self', 'get_registry')
json_d = {} json_d = {}
for key, socket in registry.items(): for key, socket in registry.items():
# name, uuid = uid
host, port = socket host, port = socket
json_d[key] = f'{host}:{port}' json_d[key] = f'{host}:{port}'
click.echo(f"{colorize_json(json_d)}") click.echo(f"{colorize_json(json_d)}")
tractor.run( trio.run(list_services)
list_services,
name='service_query',
loglevel=config['loglevel'] if tl else None,
arbiter_addr=_tractor_kwargs['arbiter_addr'],
)
def _load_clis() -> None: def _load_clis() -> None:
from ..data import marketstore # noqa # from ..service import elastic # noqa
from ..data import cli # noqa
from ..brokers import cli # noqa from ..brokers import cli # noqa
from ..ui import cli # noqa from ..ui import cli # noqa
from ..watchlists import cli # noqa from ..watchlists import cli # noqa
# typer implemented
from ..storage import cli # noqa
from ..accounting import cli # noqa
# load downstream cli modules # load downstream cli modules
_load_clis() _load_clis()

View File

@ -15,27 +15,42 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
""" """
Broker configuration mgmt. Platform configuration (files) mgmt.
""" """
import platform import platform
import sys import sys
import os import os
from os.path import dirname
import shutil import shutil
from typing import Optional from typing import (
Callable,
MutableMapping,
)
from pathlib import Path
from bidict import bidict from bidict import bidict
import toml import tomlkit
try:
import tomllib
except ModuleNotFoundError:
import tomli as tomllib
from .log import get_logger from .log import get_logger
log = get_logger('broker-config') log = get_logger('broker-config')
# taken from ``click`` since apparently they have some # XXX NOTE: taken from ``click`` since apparently they have some
# super weirdness with sigint and sudo..no clue # super weirdness with sigint and sudo..no clue
def get_app_dir(app_name, roaming=True, force_posix=False): # we're probably going to slowly just modify it to our own version over
# time..
def get_app_dir(
app_name: str,
roaming: bool = True,
force_posix: bool = False,
) -> str:
r"""Returns the config folder for the application. The default behavior r"""Returns the config folder for the application. The default behavior
is to return whatever is most appropriate for the operating system. is to return whatever is most appropriate for the operating system.
@ -74,7 +89,31 @@ def get_app_dir(app_name, roaming=True, force_posix=False):
def _posixify(name): def _posixify(name):
return "-".join(name.split()).lower() return "-".join(name.split()).lower()
# if WIN: # NOTE: for testing with `pytest` we leverage the `tmp_dir`
# fixture to generate (and clean up) a test-request-specific
# directory for isolated configuration files such that,
# - multiple tests can run (possibly in parallel) without data races
# on the config state,
# - we don't need to ever worry about leaking configs into the
# system thus avoiding needing to manage config cleanup fixtures or
# other bothers (since obviously `tmp_dir` cleans up after itself).
#
# In order to "pass down" the test dir path to all (sub-)actors in
# the actor tree we preload the root actor's runtime vars state (an
# internal mechanism for inheriting state down an actor tree in
# `tractor`) with the testing dir and check for it whenever we
# detect `pytest` is being used (which it isn't under normal
# operation).
# if "pytest" in sys.modules:
# import tractor
# actor = tractor.current_actor(err_on_no_runtime=False)
# if actor: # runtime is up
# rvs = tractor._state._runtime_vars
# import pdbp; pdbp.set_trace()
# testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
# assert testdirpath.exists(), 'piker test harness might be borked!?'
# app_name = str(testdirpath)
if platform.system() == 'Windows': if platform.system() == 'Windows':
key = "APPDATA" if roaming else "LOCALAPPDATA" key = "APPDATA" if roaming else "LOCALAPPDATA"
folder = os.environ.get(key) folder = os.environ.get(key)
@ -94,28 +133,38 @@ def get_app_dir(app_name, roaming=True, force_posix=False):
) )
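For reference, a minimal usage sketch of the `get_app_dir()` helper above (the path in the comment is just the typical Linux default, not a guarantee on every system):

import piker.config as config

# on a stock Linux box this normally lands under the XDG config home,
# e.g. ~/.config/piker; `roaming`/`force_posix` only matter on
# Windows/macOS as per the docstring above.
conf_dir: str = config.get_app_dir('piker')
print(conf_dir)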
_config_dir = _click_config_dir = get_app_dir('piker') _click_config_dir: Path = Path(get_app_dir('piker'))
_parent_user = os.environ.get('SUDO_USER') _config_dir: Path = _click_config_dir
if _parent_user: # NOTE: when using `sudo` we attempt to determine the non-root user
non_root_user_dir = os.path.expanduser( # and still use their normal config dir.
f'~{_parent_user}' if (
(_parent_user := os.environ.get('SUDO_USER'))
and
_parent_user != 'root'
):
non_root_user_dir = Path(
os.path.expanduser(f'~{_parent_user}')
) )
root = 'root' root: str = 'root'
_ccds: str = str(_click_config_dir) # click config dir as string
i_tail: int = int(_ccds.rfind(root) + len(root))
_config_dir = ( _config_dir = (
non_root_user_dir + non_root_user_dir
_click_config_dir[ /
_click_config_dir.rfind(root) + len(root): Path(_ccds[i_tail+1:]) # +1 to capture trailing '/'
]
) )
_conf_names: set[str] = { _conf_names: set[str] = {
'brokers', 'conf', # god config
'trades', 'brokers', # sec backend deatz
'watchlists', 'watchlists', # (user defined) market lists
} }
_watchlists_data_path = os.path.join(_config_dir, 'watchlists.json') # TODO: probably drop all this super legacy, questrade specific,
# config stuff XD ?
_watchlists_data_path: Path = _config_dir / Path('watchlists.json')
_context_defaults = dict( _context_defaults = dict(
default_map={ default_map={
# Questrade specific quote poll rates # Questrade specific quote poll rates
@ -129,6 +178,14 @@ _context_defaults = dict(
) )
class ConfigurationError(Exception):
'Misconfigured settings, likely in a TOML file.'
class NoSignature(ConfigurationError):
'No credentials setup for broker backend!'
def _override_config_dir( def _override_config_dir(
path: str path: str
) -> None: ) -> None:
@ -143,75 +200,130 @@ def _conf_fn_w_ext(
return f'{name}.toml' return f'{name}.toml'
def get_conf_dir() -> Path:
'''
Return the user configuration directory ``Path``
on the local filesystem.
'''
return _config_dir
def get_conf_path( def get_conf_path(
conf_name: str = 'brokers', conf_name: str = 'brokers',
) -> str: ) -> Path:
"""Return the default config path normally under '''
``~/.config/piker`` on linux. Return the top-level default config path normally under
``~/.config/piker`` on linux for a given ``conf_name``, the config
name.
Contains files such as: Contains files such as:
- brokers.toml - brokers.toml
- watchlists.toml - watchlists.toml
- trades.toml
# maybe coming soon ;) # maybe coming soon ;)
- signals.toml - signals.toml
- strats.toml - strats.toml
""" '''
assert conf_name in _conf_names if 'account.' not in conf_name:
assert str(conf_name) in _conf_names
fn = _conf_fn_w_ext(conf_name) fn = _conf_fn_w_ext(conf_name)
return os.path.join( return _config_dir / Path(fn)
_config_dir,
fn,
)
def repodir(): def repodir() -> Path:
''' '''
Return the abspath to the repo directory. Return the abspath as ``Path`` to the git repo's root dir.
''' '''
dirpath = os.path.abspath( repodir: Path = Path(__file__).absolute().parent.parent
# we're 3 levels down in **this** module file confdir: Path = repodir / 'config'
dirname(dirname(os.path.realpath(__file__)))
) if not confdir.is_dir():
return dirpath # prolly inside stupid GH actions CI..
repodir: Path = Path(os.environ.get('GITHUB_WORKSPACE'))
confdir: Path = repodir / 'config'
assert confdir.is_dir(), f'{confdir} DNE, {repodir} is likely incorrect!'
return repodir
def load( def load(
conf_name: str = 'brokers', # NOTE: always appended with .toml suffix
path: str = None conf_name: str = 'conf',
path: Path | None = None,
) -> (dict, str): decode: Callable[
[str | bytes,],
MutableMapping,
] = tomllib.loads,
touch_if_dne: bool = False,
**tomlkws,
) -> tuple[dict, Path]:
''' '''
Load config file by name. Load config file by name.
If desired config is not in the top level piker-user config path then
pass the ``path: Path`` explicitly.
''' '''
path = path or get_conf_path(conf_name) # create the $HOME/.config/piker dir if dne
if not os.path.isfile(path): if not _config_dir.is_dir():
fn = _conf_fn_w_ext(conf_name) _config_dir.mkdir(
parents=True,
template = os.path.join( exist_ok=True,
repodir(), )
'config',
fn path_provided: bool = path is not None
path: Path = path or get_conf_path(conf_name)
if (
not path.is_file()
and touch_if_dne
):
# only do a template if no path provided,
# just touch an empty file with same name.
if path_provided:
with path.open(mode='x'):
pass
# try to copy in a template config to the user's dir if one
# exists.
else:
fn: str = _conf_fn_w_ext(conf_name)
template: Path = repodir() / 'config' / fn
if template.is_file():
shutil.copyfile(template, path)
elif fn and template:
assert template.is_file(), f'{template} is not a file!?'
assert path.is_file(), f'Config file {path} not created!?'
with path.open(mode='r') as fp:
config: dict = decode(
fp.read(),
**tomlkws,
) )
# try to copy in a template config to the user's directory
# if one exists.
if os.path.isfile(template):
shutil.copyfile(template, path)
config = toml.load(path)
log.debug(f"Read config file {path}") log.debug(f"Read config file {path}")
return config, path return config, path
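A minimal usage sketch of the reworked `load()` API above; the `conf_name` and the key listing are illustrative only:

import piker.config as config

# read (and create-if-missing, via `touch_if_dne`) the top-level
# `conf.toml` under the user config dir; returns the decoded mapping
# and the resolved `Path`.
conf, path = config.load('conf', touch_if_dne=True)
print(path, list(conf.keys()))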
def write( def write(
config: dict, # toml config as dict config: dict, # toml config as dict
name: str = 'brokers',
path: str = None, name: str | None = None,
path: Path | None = None,
fail_empty: bool = True,
**toml_kwargs,
) -> None: ) -> None:
'''' ''''
@ -220,31 +332,41 @@ def write(
Create a ``brokers.ini`` file if one does not exist. Create a ``brokers.ini`` file if one does not exist.
''' '''
path = path or get_conf_path(name) if name:
dirname = os.path.dirname(path) path: Path = path or get_conf_path(name)
if not os.path.isdir(dirname): dirname: Path = path.parent
log.debug(f"Creating config dir {_config_dir}") if not dirname.is_dir():
os.makedirs(dirname) log.debug(f"Creating config dir {_config_dir}")
dirname.mkdir()
if not config: if (
not config
and fail_empty
):
raise ValueError( raise ValueError(
"Watch out you're trying to write a blank config!") "Watch out you're trying to write a blank config!"
)
log.debug( log.debug(
f"Writing config `{name}` file to:\n" f"Writing config `{name}` file to:\n"
f"{path}" f"{path}"
) )
with open(path, 'w') as cf: with path.open(mode='w') as fp:
return toml.dump(config, cf) return tomlkit.dump( # preserve style on write B)
config,
fp,
**toml_kwargs,
)
def load_accounts( def load_accounts(
providers: list[str] | None = None
providers: Optional[list[str]] = None ) -> bidict[str, str | None]:
) -> bidict[str, Optional[str]]: conf, path = load(
conf_name='brokers',
conf, path = load() )
accounts = bidict() accounts = bidict()
for provider_name, section in conf.items(): for provider_name, section in conf.items():
accounts_section = section.get('accounts') accounts_section = section.get('accounts')

View File

@ -22,7 +22,7 @@ and storing data from your brokers as well as
sharing live streams over a network. sharing live streams over a network.
""" """
from ._normalize import iterticks from .ticktools import iterticks
from ._sharedmem import ( from ._sharedmem import (
maybe_open_shm_array, maybe_open_shm_array,
attach_shm_array, attach_shm_array,
@ -30,19 +30,42 @@ from ._sharedmem import (
get_shm_token, get_shm_token,
ShmArray, ShmArray,
) )
from .feed import ( from ._source import (
open_feed, def_iohlcv_fields,
_setup_persistent_brokerd, def_ohlcv_fields,
) )
from .feed import (
Feed,
open_feed,
)
from .flows import Flume
from ._symcache import (
SymbologyCache,
open_symcache,
get_symcache,
match_from_pairs,
)
from ._sampling import open_sample_stream
from ..types import Struct
__all__ = [ __all__: list[str] = [
'Flume',
'Feed',
'open_feed', 'open_feed',
'ShmArray', 'ShmArray',
'iterticks', 'iterticks',
'maybe_open_shm_array', 'maybe_open_shm_array',
'match_from_pairs',
'attach_shm_array', 'attach_shm_array',
'open_shm_array', 'open_shm_array',
'get_shm_token', 'get_shm_token',
'_setup_persistent_brokerd', 'def_iohlcv_fields',
'def_ohlcv_fields',
'open_symcache',
'open_sample_stream',
'get_symcache',
'Struct',
'SymbologyCache',
'types',
] ]
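Downstream callers can now pull the commonly used names straight from the package top-level, something like:

# a few of the names re-exported by the reworked `piker.data` package
from piker.data import (
    Feed,
    Flume,
    ShmArray,
    iterticks,
    open_feed,
)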

View File

@ -1,385 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Supervisor for docker with included specific-image service helpers.
'''
import os
import time
from typing import (
Optional,
Callable,
Any,
)
from contextlib import asynccontextmanager as acm
import trio
from trio_typing import TaskStatus
import tractor
from tractor.msg import NamespacePath
import docker
import json
from docker.models.containers import Container as DockerContainer
from docker.errors import (
DockerException,
APIError,
)
from requests.exceptions import ConnectionError, ReadTimeout
from ..log import get_logger, get_console_log
from .. import config
log = get_logger(__name__)
class DockerNotStarted(Exception):
'Prolly you dint start da daemon bruh'
class ContainerError(RuntimeError):
'Error reported via app-container logging level'
@acm
async def open_docker(
url: Optional[str] = None,
**kwargs,
) -> docker.DockerClient:
client: Optional[docker.DockerClient] = None
try:
client = docker.DockerClient(
base_url=url,
**kwargs
) if url else docker.from_env(**kwargs)
yield client
except (
DockerException,
APIError,
) as err:
def unpack_msg(err: Exception) -> str:
args = getattr(err, 'args', None)
if args:
return args
else:
return str(err)
# could be more specific so let's check if it's just perms.
if err.args:
errs = err.args
for err in errs:
msg = unpack_msg(err)
if 'PermissionError' in msg:
raise DockerException('You dint run as root yo!')
elif 'FileNotFoundError' in msg:
raise DockerNotStarted('Did you start da service sister?')
# not perms?
raise
finally:
if client:
client.close()
class Container:
'''
Wrapper around a ``docker.models.containers.Container`` to include
log capture and relay through our native logging system and helper
method(s) for cancellation/teardown.
'''
def __init__(
self,
cntr: DockerContainer,
) -> None:
self.cntr = cntr
# log msg de-duplication
self.seen_so_far = set()
async def process_logs_until(
self,
patt: str,
bp_on_msg: bool = False,
) -> bool:
'''
Attempt to capture container log messages and relay through our
native logging system.
'''
seen_so_far = self.seen_so_far
while True:
logs = self.cntr.logs()
entries = logs.decode().split('\n')
for entry in entries:
# ignore null lines
if not entry:
continue
try:
record = json.loads(entry.strip())
except json.JSONDecodeError:
if 'Error' in entry:
raise RuntimeError(entry)
raise
msg = record['msg']
level = record['level']
if msg and entry not in seen_so_far:
seen_so_far.add(entry)
if bp_on_msg:
await tractor.breakpoint()
getattr(log, level, log.error)(f'{msg}')
# print(f'level: {level}')
if level in ('error', 'fatal'):
raise ContainerError(msg)
if patt in msg:
return True
# do a checkpoint so we don't block if cancelled B)
await trio.sleep(0.01)
return False
def try_signal(
self,
signal: str = 'SIGINT',
) -> bool:
try:
# XXX: market store doesn't seem to shutdown nicely all the
# time with this (maybe because there are still open grpc
# connections?) noticeably after client connections have been
# made or are in use/teardown. It works just fine if you
# just start and stop the container tho?..
log.cancel(f'SENDING {signal} to {self.cntr.id}')
self.cntr.kill(signal)
return True
except docker.errors.APIError as err:
if 'is not running' in err.explanation:
return False
async def cancel(
self,
stop_msg: str,
) -> None:
cid = self.cntr.id
# first try a graceful cancel
log.cancel(
f'SIGINT cancelling container: {cid}\n'
f'waiting on stop msg: "{stop_msg}"'
)
self.try_signal('SIGINT')
start = time.time()
for _ in range(30):
with trio.move_on_after(0.5) as cs:
cs.shield = True
await self.process_logs_until(stop_msg)
# if we aren't cancelled on above checkpoint then we
# assume we read the expected stop msg and terminated.
break
try:
log.info(f'Polling for container shutdown:\n{cid}')
if self.cntr.status not in {'exited', 'not-running'}:
self.cntr.wait(
timeout=0.1,
condition='not-running',
)
break
except (
ReadTimeout,
):
log.info(f'Still waiting on container:\n{cid}')
continue
except (
docker.errors.APIError,
ConnectionError,
):
log.exception('Docker connection failure')
break
else:
delay = time.time() - start
log.error(
f'Failed to kill container {cid} after {delay}s\n'
'sending SIGKILL..'
)
# get out the big guns, bc apparently marketstore
# doesn't actually know how to terminate gracefully
# :eyeroll:...
self.try_signal('SIGKILL')
self.cntr.wait(
timeout=3,
condition='not-running',
)
log.cancel(f'Container stopped: {cid}')
@tractor.context
async def open_ahabd(
ctx: tractor.Context,
endpoint: str, # ns-pointer str-msg-type
**kwargs,
) -> None:
get_console_log('info', name=__name__)
async with open_docker() as client:
# TODO: eventually offer a config-oriented API to do the mounts,
# params, etc. passing to ``Container.run()``?
# call into endpoint for container config/init
ep_func = NamespacePath(endpoint).load_ref()
(
dcntr,
cntr_config,
start_msg,
stop_msg,
) = ep_func(client)
cntr = Container(dcntr)
with trio.move_on_after(1):
found = await cntr.process_logs_until(start_msg)
if not found and cntr not in client.containers.list():
raise RuntimeError(
'Failed to start `marketstore` check logs deats'
)
await ctx.started((
cntr.cntr.id,
os.getpid(),
cntr_config,
))
try:
# TODO: we might eventually want a proxy-style msg-prot here
# to allow remote control of containers without needing
# callers to have root perms?
await trio.sleep_forever()
finally:
with trio.CancelScope(shield=True):
await cntr.cancel(stop_msg)
async def start_ahab(
service_name: str,
endpoint: Callable[docker.DockerClient, DockerContainer],
task_status: TaskStatus[
tuple[
trio.Event,
dict[str, Any],
],
] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Start a ``docker`` container supervisor with given service name.
Currently the actor calling this task should normally be started
with root permissions (until we decide to use something that doesn't
require this, like docker's rootless mode or some wrapper project) but
the root perms are de-escalated after the docker supervisor sub-actor
is started.
'''
cn_ready = trio.Event()
try:
async with tractor.open_nursery(
loglevel='runtime',
) as tn:
portal = await tn.start_actor(
service_name,
enable_modules=[__name__]
)
# TODO: we have issues with this on teardown
# where ``tractor`` tries to issue ``os.kill()``
# and hits perms errors since the root process
# doesn't any longer have root perms..
# de-escalate root perms to the original user
# after the docker supervisor actor is spawned.
if config._parent_user:
import pwd
os.setuid(
pwd.getpwnam(
config._parent_user
)[2] # named user's uid
)
async with portal.open_context(
open_ahabd,
endpoint=str(NamespacePath.from_ref(endpoint)),
) as (ctx, first):
cid, pid, cntr_config = first
task_status.started((
cn_ready,
cntr_config,
(cid, pid),
))
await trio.sleep_forever()
# since we demoted root perms in this parent
# we'll get a perms error on proc cleanup in
# ``tractor`` nursery exit. just make sure
# the child is terminated and don't raise the
# error if so.
# TODO: we could also consider adding
# a ``tractor.ZombieDetected`` or something that we could raise
# if we find the child didn't terminate.
except PermissionError:
log.warning('Failed to cancel root permsed container')
except (
trio.MultiError,
) as err:
for subexc in err.exceptions:
if isinstance(subexc, PermissionError):
log.warning('Failed to cancel root perms-ed container')
return
else:
raise

View File

@ -0,0 +1,838 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Pre-(path)-graphics formatted x/y nd/1d rendering subsystem.
'''
from __future__ import annotations
from typing import (
Optional,
TYPE_CHECKING,
)
import msgspec
from msgspec import field
import numpy as np
from numpy.lib import recfunctions as rfn
from ._sharedmem import (
ShmArray,
)
from ._pathops import (
path_arrays_from_ohlc,
)
if TYPE_CHECKING:
from ._dataviz import (
Viz,
)
from piker.toolz import Profiler
# default gap between bars: "bar gap multiplier"
# - 0.5 is no overlap between OC arms,
# - 1.0 is full overlap on each neighbor sample
BGM: float = 0.16
class IncrementalFormatter(msgspec.Struct):
'''
Incrementally updating, pre-path-graphics tracking, formatter.
Allows tracking source data state in an updateable pre-graphics
``np.ndarray`` format (in local process memory) as well as
incrementally rendering from that format **to** 1d x/y for path
generation using ``pg.functions.arrayToQPath()``.
'''
shm: ShmArray
viz: Viz
# the value to be multiplied by any index into the x/y_1d arrays
# given the input index is based on the original source data array.
flat_index_ratio: float = 1
@property
def index_field(self) -> 'str':
'''
Value (``str``) used to look up the "index series" from the
underlying source ``numpy`` struct-array; delegate directly to
the managing ``Viz``.
'''
return self.viz.index_field
# Incrementally updated xy ndarray formatted data, a pre-1d
# format which is updated and cached independently of the final
# pre-graphics-path 1d format.
x_nd: Optional[np.ndarray] = None
y_nd: Optional[np.ndarray] = None
@property
def xy_nd(self) -> tuple[np.ndarray, np.ndarray]:
return (
self.x_nd[self.xy_slice],
self.y_nd[self.xy_slice],
)
@property
def xy_slice(self) -> slice:
return slice(
self.xy_nd_start,
self.xy_nd_stop,
)
# indexes which slice into the above arrays (which are allocated
# based on source data shm input size) and allow retrieving
# incrementally updated data.
xy_nd_start: int | None = None
xy_nd_stop: int | None = None
# TODO: eventually incrementally update 1d-pre-graphics path data?
x_1d: np.ndarray | None = None
y_1d: np.ndarray | None = None
# incremental view-change state(s) tracking
_last_vr: tuple[float, float] | None = None
_last_ivdr: tuple[float, float] | None = None
@property
def index_step_size(self) -> float:
'''
Readonly value computed on first ``.diff()`` call.
'''
return self.viz.index_step()
def diff(
self,
new_read: tuple[np.ndarray],
) -> tuple[
np.ndarray,
np.ndarray,
]:
# TODO:
# - can the renderer just call ``Viz.read()`` directly? unpack
# latest source data read
# - eventually maybe we can implement some kind of
# transform on the ``QPainterPath`` that will more or less
# detect the diff in "elements" terms? update diff state since
# we've now rendered paths.
(
xfirst,
xlast,
array,
ivl,
ivr,
in_view,
) = new_read
index = array['index']
# if the first index in the read array is 0 then
# it means the source buffer has been completely backfilled to
# available space.
src_start = index[0]
src_stop = index[-1] + 1
# these are the "formatted output data" indices
# for the pre-graphics arrays.
nd_start = self.xy_nd_start
nd_stop = self.xy_nd_stop
if (
nd_start is None
):
assert nd_stop is None
# setup to do a prepend of all existing src history
nd_start = self.xy_nd_start = src_stop
# set us in a zero-to-append state
nd_stop = self.xy_nd_stop = src_stop
# compute the length diffs between the first/last index entry in
# the input data and the last indexes we have on record from the
# last time we updated the curve index.
prepend_length = int(nd_start - src_start)
append_length = int(src_stop - nd_stop)
# blah blah blah
# do diffing for prepend, append and last entry
return (
slice(src_start, nd_start),
prepend_length,
append_length,
slice(nd_stop, src_stop),
)
def _track_inview_range(
self,
view_range: tuple[int, int],
) -> bool:
# if a view range is passed, plan to draw the
# source output that's "in view" of the chart.
vl, vr = view_range
zoom_or_append = False
last_vr = self._last_vr
# incremental in-view data update.
if last_vr:
lvl, lvr = last_vr # relative slice indices
# TODO: detecting more specifically the interaction changes
# last_ivr = self._last_ivdr or (vl, vr)
# al, ar = last_ivr # abs slice indices
# left_change = abs(x_iv[0] - al) >= 1
# right_change = abs(x_iv[-1] - ar) >= 1
# likely a zoom/pan view change or data append update
if (
(vr - lvr) > 2
or vl < lvl
# append / prepend update
# we had an append update where the view range
# didn't change but the data-viewed (shifted)
# underneath, so we need to redraw.
# or left_change and right_change and last_vr == view_range
# not (left_change and right_change) and ivr
# (
# or abs(x_iv[ivr] - livr) > 1
):
zoom_or_append = True
self._last_vr = view_range
return zoom_or_append
def format_to_1d(
self,
new_read: tuple,
array_key: str,
profiler: Profiler,
slice_to_inview: bool = True,
force_full_realloc: bool = False,
) -> tuple[
np.ndarray,
np.ndarray,
]:
shm = self.shm
(
_,
_,
array,
ivl,
ivr,
in_view,
) = new_read
(
pre_slice,
prepend_len,
append_len,
post_slice,
) = self.diff(new_read)
# we first need to allocate xy data arrays
# from the source data.
if (
self.y_nd is None
or force_full_realloc
):
self.xy_nd_start = shm._first.value
self.xy_nd_stop = shm._last.value
self.x_nd, self.y_nd = self.allocate_xy_nd(
shm,
array_key,
)
profiler('allocated xy history')
# once allocated we do incremental pre/append
# updates from the diff with the source buffer.
else:
if prepend_len:
self.incr_update_xy_nd(
shm,
array_key,
# this is the pre-sliced, "normally expected"
# new data that an updater would normally be
# expected to process, however in some cases (like
# step curves) the updater routine may want to do
# the source history-data reading itself, so we pass
# both here.
shm._array[pre_slice],
pre_slice,
prepend_len,
self.xy_nd_start,
self.xy_nd_stop,
is_append=False,
)
self.xy_nd_start -= prepend_len
profiler(f'prepended xy history: {prepend_len}')
if append_len:
self.incr_update_xy_nd(
shm,
array_key,
shm._array[post_slice],
post_slice,
append_len,
self.xy_nd_start,
self.xy_nd_stop,
is_append=True,
)
self.xy_nd_stop += append_len
profiler(f'appended xy history: {append_len}')
# sanity
# slice_ln = post_slice.stop - post_slice.start
# assert append_len == slice_ln
view_changed: bool = False
view_range: tuple[int, int] = (ivl, ivr)
if slice_to_inview:
view_changed = self._track_inview_range(view_range)
array = in_view
profiler(f'{self.viz.name} view range slice {view_range}')
# TODO: we need to check if the last-datum-in-view is true and
# if so only slice to the 2nd last datum.
# hist = array[:slice_to_head]
# XXX: WOA WTF TRACTOR DEBUGGING BUGGG
# assert 0
# xy-path data transform: convert source data to a format
# able to be passed to a `QPainterPath` rendering routine.
if not len(array):
# XXX: this might be why the profiler only has exits?
return
# TODO: hist here should be the pre-sliced
# x/y_data in the case where allocate_xy is
# defined?
x_1d, y_1d, connect = self.format_xy_nd_to_1d(
array,
array_key,
view_range,
)
# cache/save last 1d outputs for use by other
# readers (eg. `Viz.draw_last_datum()` in the
# only-draw-last-uppx case).
self.x_1d = x_1d
self.y_1d = y_1d
# app_tres = None
# if append_len:
# appended = array[-append_len-1:slice_to_head]
# app_tres = self.format_xy_nd_to_1d(
# appended,
# array_key,
# (
# view_range[1] - append_len + slice_to_head,
# view_range[1]
# ),
# )
# # assert (len(appended) - 1) == append_len
# # assert len(appended) == append_len
# print(
# f'{self.viz.name} APPEND LEN: {append_len}\n'
# f'{self.viz.name} APPENDED: {appended}\n'
# f'{self.viz.name} app_tres: {app_tres}\n'
# )
# update the last "in view data range"
if len(x_1d):
self._last_ivdr = x_1d[0], x_1d[-1]
profiler('.format_to_1d()')
return (
x_1d,
y_1d,
connect,
prepend_len,
append_len,
view_changed,
# app_tres,
)
###############################
# Sub-type override interface #
###############################
x_offset: np.ndarray = np.array([0])
# optional pre-graphics xy formatted data which
# is incrementally updated in sync with the source data.
# XXX: was ``.allocate_xy()``
def allocate_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.ndarray # y
]:
'''
Convert the structured-array ``src_shm`` format to
an equivalently shaped (and field-less) ``np.ndarray``.
Eg. a 4 field x N struct-array => (N, 4)
'''
y_nd = src_shm._array[data_field].copy()
x_nd = (
src_shm._array[self.index_field].copy()
+
self.x_offset
)
return x_nd, y_nd
# XXX: was ``.update_xy()``
def incr_update_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> None:
# write pushed data to flattened copy
y_nd_new = new_from_src[data_field]
self.y_nd[read_slc] = y_nd_new
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = (
new_from_src[self.index_field]
+
self.x_offset
)
# x_nd = self.x_nd[self.xy_slice]
# y_nd = self.y_nd[self.xy_slice]
# name = self.viz.name
# if 'trade_rate' == name:
# s = 4
# print(
# f'{name.upper()}:\n'
# 'NEW_FROM_SRC:\n'
# f'new_from_src: {new_from_src}\n\n'
# f'PRE self.x_nd:'
# f'\n{list(x_nd[-s:])}\n'
# f'PRE self.y_nd:\n'
# f'{list(y_nd[-s:])}\n\n'
# f'TO WRITE:\n'
# f'x_nd_new:\n'
# f'{x_nd_new[0]}\n'
# f'y_nd_new:\n'
# f'{y_nd_new}\n'
# )
# XXX: was ``.format_xy()``
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray, # 1d x
np.ndarray, # 1d y
np.ndarray | str, # connection array/style
]:
'''
Default xy-nd array to 1d pre-graphics-path render routine.
Return single field column data verbatim
'''
# NOTE: we don't include the very last datum which is filled in
# normally by another graphics object.
x_1d = array[self.index_field][:-1]
y_1d = array[array_key][:-1]
# name = self.viz.name
# if 'trade_rate' == name:
# s = 4
# x_nd = list(self.x_nd[self.xy_slice][-s:-1])
# y_nd = list(self.y_nd[self.xy_slice][-s:-1])
# print(
# f'{name}:\n'
# f'XY data:\n'
# f'x: {x_nd}\n'
# f'y: {y_nd}\n\n'
# f'x_1d: {list(x_1d[-s:])}\n'
# f'y_1d: {list(y_1d[-s:])}\n\n'
# )
return (
x_1d,
y_1d,
# 1d connection array or style-key to
# ``pg.functions.arrayToQPath()``
'all',
)
class OHLCBarsFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([
-0.5,
0,
0,
0.5,
])
fields: list[str] = field(
default_factory=lambda: ['open', 'high', 'low', 'close']
)
flat_index_ratio: float = 4
def allocate_xy_nd(
self,
ohlc_shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.ndarray # y
]:
'''
Convert an input struct-array holding OHLC samples into a pair of
flattened x, y arrays with the same size (datums wise) as the source
data.
'''
y_nd = ohlc_shm.ustruct(self.fields)
# generate a flat-interpolated x-domain
x_nd = (
np.broadcast_to(
ohlc_shm._array[self.index_field][:, None],
(
ohlc_shm._array.size,
# 4, # only ohlc
y_nd.shape[1],
),
)
+
self.x_offset
)
assert y_nd.any()
# write pushed data to flattened copy
return (
x_nd,
y_nd,
)
def incr_update_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> None:
# write newly pushed data to flattened copy
# a struct-arr is always passed in.
new_y_nd = rfn.structured_to_unstructured(
new_from_src[self.fields]
)
self.y_nd[read_slc] = new_y_nd
# generate same-valued-per-row x support based on y shape
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = np.broadcast_to(
new_from_src[self.index_field][:, None],
new_y_nd.shape,
) + self.x_offset
# TODO: can we drop this frame and just use the above?
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap
gap: float = BGM,
) -> tuple[
np.ndarray,
np.ndarray,
np.ndarray,
]:
'''
More or less direct proxy to the ``numba``-fied
``path_arrays_from_ohlc()`` (above) but with closed in kwargs
for line spacing.
'''
x, y, c = path_arrays_from_ohlc(
array[:-1],
start,
bar_w=self.index_step_size,
bar_gap=gap * self.index_step_size,
# XXX: don't ask, due to a ``numba`` bug..
use_time_index=(self.index_field == 'time'),
)
return x, y, c
class OHLCBarsAsCurveFmtr(OHLCBarsFmtr):
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray,
np.ndarray,
str,
]:
# TODO: in the case of an existing ``.update_xy()``
# should we be passing in array as an xy arrays tuple?
# 2 more datum-indexes to capture zero at end
x_flat = self.x_nd[self.xy_nd_start:self.xy_nd_stop-1]
y_flat = self.y_nd[self.xy_nd_start:self.xy_nd_stop-1]
# slice to view
ivl, ivr = vr
x_iv_flat = x_flat[ivl:ivr]
y_iv_flat = y_flat[ivl:ivr]
# reshape to 1d for graphics rendering
y_iv = y_iv_flat.reshape(-1)
x_iv = x_iv_flat.reshape(-1)
return x_iv, y_iv, 'all'
class StepCurveFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([
0,
1,
])
def allocate_xy_nd(
self,
shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.ndarray # y
]:
'''
Convert an input 1d shm array to a "step array" format
for use by path graphics generation.
'''
i = shm._array[self.index_field].copy()
out = shm._array[data_field].copy()
x_out = (
np.broadcast_to(
i[:, None],
(i.size, 2),
)
+
self.x_offset
)
# fill out Nx2 array to hold each step's left + right vertices.
y_out = np.empty(
x_out.shape,
dtype=out.dtype,
)
# fill in (current) values from source shm buffer
y_out[:] = out[:, np.newaxis]
# TODO: pretty sure we can drop this?
# start y at origin level
# y_out[0, 0] = 0
# y_out[self.xy_nd_start] = 0
return x_out, y_out
def incr_update_xy_nd(
self,
src_shm: ShmArray,
array_key: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> tuple[
np.ndarray,
slice,
]:
# NOTE: for a step curve we slice from one datum prior
# to the current "update slice" to get the previous
# "level".
#
# why this is needed,
# - the current new append slice will often have a zero
# value in the latest datum-step (at least for zero-on-new
# cases like vlm in the) as per configuration of the FSP
# engine.
# - we need to look back a datum to get the last level which
# will be used to terminate/complete the last step x-width
# which will be set to pair with the last x-index THIS MEANS
#
# XXX: this means WE CAN'T USE the append slice since we need to
# "look backward" one step to get the needed back-to-zero level
# and the update data in ``new_from_src`` will only contain the
# latest new data.
back_1 = slice(
read_slc.start - 1,
read_slc.stop,
)
to_write = src_shm._array[back_1]
y_nd_new = self.y_nd[back_1]
y_nd_new[:] = to_write[array_key][:, None]
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = (
new_from_src[self.index_field][:, None]
+
self.x_offset
)
# XXX: uncomment for debugging
# x_nd = self.x_nd[self.xy_slice]
# y_nd = self.y_nd[self.xy_slice]
# name = self.viz.name
# if 'dolla_vlm' in name:
# s = 4
# print(
# f'{name}:\n'
# 'NEW_FROM_SRC:\n'
# f'new_from_src: {new_from_src}\n\n'
# f'PRE self.x_nd:'
# f'\n{x_nd[-s:]}\n'
# f'PRE self.y_nd:\n'
# f'{y_nd[-s:]}\n\n'
# f'TO WRITE:\n'
# f'x_nd_new:\n'
# f'{x_nd_new}\n'
# f'y_nd_new:\n'
# f'{y_nd_new}\n'
# )
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray,
np.ndarray,
str,
]:
last_t, last = array[-1][[self.index_field, array_key]]
start = self.xy_nd_start
stop = self.xy_nd_stop
x_step = self.x_nd[start:stop]
y_step = self.y_nd[start:stop]
# slice out in-view data
ivl, ivr = vr
# NOTE: add an extra step to get the vertical-line-down-to-zero
# adjacent to the last-datum graphic (filled rect).
x_step_iv = x_step[ivl:ivr+1]
y_step_iv = y_step[ivl:ivr+1]
# flatten to 1d
x_1d = x_step_iv.reshape(x_step_iv.size)
y_1d = y_step_iv.reshape(y_step_iv.size)
# debugging
# if y_1d.any():
# s = 6
# print(
# f'x_step_iv:\n{x_step_iv[-s:]}\n'
# f'y_step_iv:\n{y_step_iv[-s:]}\n\n'
# f'x_1d:\n{x_1d[-s:]}\n'
# f'y_1d:\n{y_1d[-s:]}\n'
# )
return x_1d, y_1d, 'all'
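As a rough illustration of the x-domain shaping the OHLC formatter above performs (plain `numpy`, made-up index values, not the actual `ShmArray`-backed path):

import numpy as np

# fake 3-sample index column (could be epoch seconds or an int index)
index = np.array([100., 101., 102.])

# same offsets as `OHLCBarsFmtr.x_offset`: one x value per O/H/L/C arm
x_offset = np.array([-0.5, 0, 0, 0.5])

# broadcast to the (N, 4) "nd" layout, one row per source datum..
x_nd = np.broadcast_to(index[:, None], (index.size, 4)) + x_offset

# ..then flatten to the 1d array handed to the path generator.
x_1d = x_nd.reshape(-1)
print(x_nd.shape, x_1d[:8])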

View File

@ -15,127 +15,34 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
Graphics related downsampling routines for compressing to pixel Graphics downsampling using the infamous M4 algorithm.
limits on the display device.
This is one of ``piker``'s secret weapons allowing us to boss all other
charting platforms B)
(AND DON'T YOU DARE TAKE THIS CODE WITHOUT CREDIT OR WE'LL SUE UR F#&@* ASS).
NOTES: this method is a so called "visualization driven data
aggregation" approach. It gives error-free line chart
downsampling, see
further scientific paper resources:
- http://www.vldb.org/pvldb/vol7/p797-jugel.pdf
- http://www.vldb.org/2014/program/papers/demo/p997-jugel.pdf
Details on implementation of this algo are based in,
https://github.com/pikers/piker/issues/109
''' '''
import math import math
from typing import Optional from typing import Optional
import numpy as np import numpy as np
from numpy.lib import recfunctions as rfn
from numba import ( from numba import (
jit, njit,
# float64, optional, int64, # float64, optional, int64,
) )
from ..log import get_logger from ._util import log
log = get_logger(__name__)
def hl2mxmn(ohlc: np.ndarray) -> np.ndarray:
'''
Convert an OHLC struct-array containing 'high'/'low' columns
to a "joined" max/min 1-d array.
'''
index = ohlc['index']
hls = ohlc[[
'low',
'high',
]]
mxmn = np.empty(2*hls.size, dtype=np.float64)
x = np.empty(2*hls.size, dtype=np.float64)
trace_hl(hls, mxmn, x, index[0])
x = x + index[0]
return mxmn, x
@jit(
# TODO: the type annots..
# float64[:](float64[:],),
nopython=True,
)
def trace_hl(
hl: 'np.ndarray',
out: np.ndarray,
x: np.ndarray,
start: int,
# the "offset" values in the x-domain which
# place the 2 output points around each ``int``
# master index.
margin: float = 0.43,
) -> None:
'''
"Trace" the outline of the high-low values of an ohlc sequence
as a line such that the maximum deviation (aka dispersion) between
bars is preserved.
This routine is expected to modify input arrays in-place.
'''
last_l = hl['low'][0]
last_h = hl['high'][0]
for i in range(hl.size):
row = hl[i]
l, h = row['low'], row['high']
up_diff = h - last_l
down_diff = last_h - l
if up_diff > down_diff:
out[2*i + 1] = h
out[2*i] = last_l
else:
out[2*i + 1] = l
out[2*i] = last_h
last_l = l
last_h = h
x[2*i] = int(i) - margin
x[2*i + 1] = int(i) + margin
return out
def ohlc_flatten(
ohlc: np.ndarray,
use_mxmn: bool = True,
) -> tuple[np.ndarray, np.ndarray]:
'''
Convert an OHLCV struct-array into a flat ready-for-line-plotting
1-d array that is 4 times the size with x-domain values distributed
evenly (by 0.5 steps) over each index.
'''
index = ohlc['index']
if use_mxmn:
# traces a line optimally over highs to lows
# using numba. NOTE: pretty sure this is faster
# and looks about the same as the below output.
flat, x = hl2mxmn(ohlc)
else:
flat = rfn.structured_to_unstructured(
ohlc[['open', 'high', 'low', 'close']]
).flatten()
x = np.linspace(
start=index[0] - 0.5,
stop=index[-1] + 0.5,
num=len(flat),
)
return x, flat
def ds_m4( def ds_m4(
@ -160,16 +67,6 @@ def ds_m4(
This is more or less an OHLC style sampling of a line-style series. This is more or less an OHLC style sampling of a line-style series.
''' '''
# NOTE: this method is a so called "visualization driven data
# aggregation" approach. It gives error-free line chart
# downsampling, see
# further scientific paper resources:
# - http://www.vldb.org/pvldb/vol7/p797-jugel.pdf
# - http://www.vldb.org/2014/program/papers/demo/p997-jugel.pdf
# Details on implementation of this algo are based in,
# https://github.com/pikers/piker/issues/109
# XXX: from infinite on downsampling viewable graphics: # XXX: from infinite on downsampling viewable graphics:
# "one thing i remembered about the binning - if you are # "one thing i remembered about the binning - if you are
# picking a range within your timeseries the start and end bin # picking a range within your timeseries the start and end bin
@ -191,6 +88,14 @@ def ds_m4(
x_end = x[-1] # x end value/highest in domain x_end = x[-1] # x end value/highest in domain
xrange = (x_end - x_start) xrange = (x_end - x_start)
if xrange < 0:
log.error(f'-VE M4 X-RANGE: {x_start} -> {x_end}')
# XXX: broken x-range calc-case, likely the x-end points
# are wrong and have some default value set (such as
# x_end -> <some epoch float> while x_start -> 0.5).
# breakpoint()
return None
# XXX: always round up on the input pixels # XXX: always round up on the input pixels
# lnx = len(x) # lnx = len(x)
# uppx *= max(4 / (1 + math.log(uppx, 2)), 1) # uppx *= max(4 / (1 + math.log(uppx, 2)), 1)
@ -223,14 +128,20 @@ def ds_m4(
assert frames >= (xrange / uppx) assert frames >= (xrange / uppx)
# call into ``numba`` # call into ``numba``
nb, i_win, y_out = _m4( (
nb,
x_out,
y_out,
ymn,
ymx,
) = _m4(
x, x,
y, y,
frames, frames,
# TODO: see func below.. # TODO: see func below..
# i_win, # x_out,
# y_out, # y_out,
# first index in x data to start at # first index in x data to start at
@ -243,14 +154,14 @@ def ds_m4(
# filter out any overshoot in the input allocation arrays by # filter out any overshoot in the input allocation arrays by
# removing zero-ed tail entries which should start at a certain # removing zero-ed tail entries which should start at a certain
# index. # index.
i_win = i_win[i_win != 0] x_out = x_out[x_out != 0]
y_out = y_out[:i_win.size] y_out = y_out[:x_out.size]
return nb, i_win, y_out # print(f'M4 output ymn, ymx: {ymn},{ymx}')
return nb, x_out, y_out, ymn, ymx
@jit( @njit(
nopython=True,
nogil=True, nogil=True,
) )
def _m4( def _m4(
@ -260,8 +171,8 @@ def _m4(
frames: int, frames: int,
# TODO: using this approach by having the ``.zeros()`` alloc lines # TODO: using this approach, having the ``.zeros()`` alloc lines
# below, in put python was causing segs faults and alloc crashes.. # below in pure python, there were segs faults and alloc crashes..
# we might need to see how it behaves with shm arrays and consider # we might need to see how it behaves with shm arrays and consider
# allocating them once at startup? # allocating them once at startup?
@ -274,14 +185,22 @@ def _m4(
x_start: int, x_start: int,
step: float, step: float,
) -> int: ) -> tuple[
# nbins = len(i_win) int,
# count = len(xs) np.ndarray,
np.ndarray,
float,
float,
]:
'''
Implementation of the m4 algorithm in ``numba``:
http://www.vldb.org/pvldb/vol7/p797-jugel.pdf
'''
# these are pre-allocated and mutated by ``numba`` # these are pre-allocated and mutated by ``numba``
# code in-place. # code in-place.
y_out = np.zeros((frames, 4), ys.dtype) y_out = np.zeros((frames, 4), ys.dtype)
i_win = np.zeros(frames, xs.dtype) x_out = np.zeros(frames, xs.dtype)
bincount = 0 bincount = 0
x_left = x_start x_left = x_start
@ -295,24 +214,34 @@ def _m4(
# set all bins in the left-most entry to the starting left-most x value # set all bins in the left-most entry to the starting left-most x value
# (aka a row broadcast). # (aka a row broadcast).
i_win[bincount] = x_left x_out[bincount] = x_left
# set all y-values to the first value passed in. # set all y-values to the first value passed in.
y_out[bincount] = ys[0] y_out[bincount] = ys[0]
# full input y-data mx and mn
mx: float = -np.inf
mn: float = np.inf
# compute OHLC style max / min values per window sized x-frame.
for i in range(len(xs)): for i in range(len(xs)):
x = xs[i] x = xs[i]
y = ys[i] y = ys[i]
if x < x_left + step: # the current window "step" is [bin, bin+1) if x < x_left + step: # the current window "step" is [bin, bin+1)
y_out[bincount, 1] = min(y, y_out[bincount, 1]) ymn = y_out[bincount, 1] = min(y, y_out[bincount, 1])
y_out[bincount, 2] = max(y, y_out[bincount, 2]) ymx = y_out[bincount, 2] = max(y, y_out[bincount, 2])
y_out[bincount, 3] = y y_out[bincount, 3] = y
mx = max(mx, ymx)
mn = min(mn, ymn)
else: else:
# Find the next bin # Find the next bin
while x >= x_left + step: while x >= x_left + step:
x_left += step x_left += step
bincount += 1 bincount += 1
i_win[bincount] = x_left x_out[bincount] = x_left
y_out[bincount] = y y_out[bincount] = y
return bincount, i_win, y_out return bincount, x_out, y_out, mn, mx
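A toy, pure-`numpy` sketch of the per-bin retention idea the `_m4()` kernel above implements (first/min/max/last per x-window); it is not the `numba` routine itself and the numbers are made up:

import numpy as np

x = np.arange(1000, dtype=float)
y = np.sin(x / 20)
uppx = 100  # "units per pixel", i.e. samples per bin in this toy case

nbins = int(np.ceil((x[-1] - x[0]) / uppx))
bins: list[tuple[float, float, float, float]] = []
for b in range(nbins):
    sel = (x >= x[0] + b * uppx) & (x < x[0] + (b + 1) * uppx)
    ys = y[sel]
    if ys.size:
        # the 4 samples M4 keeps per window: first, min, max, last
        bins.append((ys[0], ys.min(), ys.max(), ys[-1]))

print(len(bins), bins[0])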

View File

@ -1,82 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Stream format enforcement.
'''
from itertools import chain
from typing import AsyncIterator
def iterticks(
quote: dict,
types: tuple[str] = (
'trade',
'dark_trade',
),
deduplicate_darks: bool = False,
) -> AsyncIterator:
'''
Iterate through ticks delivered per quote cycle.
'''
if deduplicate_darks:
assert 'dark_trade' in types
# print(f"{quote}\n\n")
ticks = quote.get('ticks', ())
trades = {}
darks = {}
if ticks:
# do a first pass and attempt to remove duplicate dark
# trades with the same tick signature.
if deduplicate_darks:
for tick in ticks:
ttype = tick.get('type')
time = tick.get('time', None)
if time:
sig = (
time,
tick['price'],
tick['size']
)
if ttype == 'dark_trade':
darks[sig] = tick
elif ttype == 'trade':
trades[sig] = tick
# filter duplicates
for sig, tick in trades.items():
tick = darks.pop(sig, None)
if tick:
ticks.remove(tick)
# print(f'DUPLICATE {tick}')
# re-insert ticks
ticks.extend(list(chain(trades.values(), darks.values())))
for tick in ticks:
# print(f"{quote['symbol']}: {tick}")
ttype = tick.get('type')
if ttype in types:
yield tick
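Typical consumption of `iterticks()` (now exposed via `piker.data`/`.ticktools`) looks roughly like the following; the quote dict is a made-up example of the normalized msg shape:

from piker.data import iterticks

quote = {
    'symbol': 'btcusdt',
    'ticks': [
        {'type': 'trade', 'price': 101.5, 'size': 0.25, 'time': 1},
        {'type': 'bid', 'price': 101.4, 'size': 3.0, 'time': 1},
    ],
}

# only the requested tick types are yielded (trades and dark trades
# by default) so the 'bid' tick above is skipped.
for tick in iterticks(quote):
    print(tick['price'], tick['size'])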

View File

@ -0,0 +1,281 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Super fast ``QPainterPath`` generation related operator routines.
"""
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import (
# types,
njit,
float64,
int64,
# optional,
)
# TODO: for ``numba`` typing..
# from ._source import numba_ohlc_dtype
from ._m4 import ds_m4
def xy_downsample(
x,
y,
uppx,
x_spacer: float = 0.5,
) -> tuple[
np.ndarray,
np.ndarray,
float,
float,
]:
'''
Downsample 1D (flat ``numpy.ndarray``) arrays using M4 given an input
``uppx`` (units-per-pixel) and add space between discrete datums.
'''
# downsample whenever more than 1 pixel per datum can be shown.
# always refresh data bounds until we get diffing
# working properly, see above..
m4_out = ds_m4(
x,
y,
uppx,
)
if m4_out is not None:
bins, x, y, ymn, ymx = m4_out
# flatten output to 1d arrays suitable for path-graphics generation.
x = np.broadcast_to(x[:, None], y.shape)
x = (x + np.array(
[-x_spacer, 0, 0, x_spacer]
)).flatten()
y = y.flatten()
return x, y, ymn, ymx
# XXX: we accept a None output for the case where the input range
# to ``ds_m4()`` is bad (-ve) and we want to catch and debug
# that (seemingly super rare) circumstance..
return None
@njit(
# NOTE: need to construct this manually for readonly
# arrays, see https://github.com/numba/numba/issues/4511
# (
# types.Array(
# numba_ohlc_dtype,
# 1,
# 'C',
# readonly=True,
# ),
# int64,
# types.unicode_type,
# optional(float64),
# ),
nogil=True
)
def path_arrays_from_ohlc(
data: np.ndarray,
start: int64,
bar_w: float64,
bar_gap: float64 = 0.16,
use_time_index: bool = True,
# XXX: ``numba`` issue: https://github.com/numba/numba/issues/8622
# index_field: str,
) -> tuple[
np.ndarray,
np.ndarray,
np.ndarray,
]:
'''
Generate an array of line objects from input ohlc data.
'''
size = int(data.shape[0] * 6)
# XXX: see this for why the dtype might have to be defined outside
# the routine.
# https://github.com/numba/numba/issues/4098#issuecomment-493914533
x = np.zeros(
shape=size,
dtype=float64,
)
y, c = x.copy(), x.copy()
half_w: float = bar_w/2
# TODO: report bug for assert @
# ../piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
for i, q in enumerate(data[start:], start):
open = q['open']
high = q['high']
low = q['low']
close = q['close']
if use_time_index:
index = float64(q['time'])
else:
index = float64(q['index'])
# XXX: ``numba`` issue: https://github.com/numba/numba/issues/8622
# index = float64(q[index_field])
# AND this (probably)
# open, high, low, close, index = q[
# ['open', 'high', 'low', 'close', 'index']]
istart = i * 6
istop = istart + 6
# x,y detail the 6 points which connect all vertexes of a ohlc bar
mid: float = index + half_w
x[istart:istop] = (
index + bar_gap,
mid,
mid,
mid,
mid,
index + bar_w - bar_gap,
)
y[istart:istop] = (
open,
open,
low,
high,
close,
close,
)
# specifies that the first edge is never connected to the
# prior bars last edge thus providing a small "gap"/"space"
# between bars determined by ``bar_gap``.
c[istart:istop] = (1, 1, 1, 1, 1, 0)
return x, y, c
def hl2mxmn(
ohlc: np.ndarray,
index_field: str = 'index',
) -> np.ndarray:
'''
Convert an OHLC struct-array containing 'high'/'low' columns
to a "joined" max/min 1-d array.
'''
index = ohlc[index_field]
hls = ohlc[[
'low',
'high',
]]
mxmn = np.empty(2*hls.size, dtype=np.float64)
x = np.empty(2*hls.size, dtype=np.float64)
trace_hl(hls, mxmn, x, index[0])
x = x + index[0]
return mxmn, x
@njit(
# TODO: the type annots..
# float64[:](float64[:],),
)
def trace_hl(
hl: 'np.ndarray',
out: np.ndarray,
x: np.ndarray,
start: int,
# the "offset" values in the x-domain which
# place the 2 output points around each ``int``
# master index.
margin: float = 0.43,
) -> None:
'''
"Trace" the outline of the high-low values of an ohlc sequence
as a line such that the maximum deviation (aka dispersion) between
bars is preserved.
This routine is expected to modify input arrays in-place.
'''
last_l = hl['low'][0]
last_h = hl['high'][0]
for i in range(hl.size):
row = hl[i]
lo, hi = row['low'], row['high']
up_diff = hi - last_l
down_diff = last_h - lo
if up_diff > down_diff:
out[2*i + 1] = hi
out[2*i] = last_l
else:
out[2*i + 1] = lo
out[2*i] = last_h
last_l = lo
last_h = hi
x[2*i] = int(i) - margin
x[2*i + 1] = int(i) + margin
return out
def ohlc_flatten(
ohlc: np.ndarray,
use_mxmn: bool = True,
index_field: str = 'index',
) -> tuple[np.ndarray, np.ndarray]:
'''
Convert an OHLCV struct-array into a flat ready-for-line-plotting
1-d array that is 4 times the size with x-domain values distributed
evenly (by 0.5 steps) over each index.
'''
index = ohlc[index_field]
if use_mxmn:
# traces a line optimally over highs to lows
# using numba. NOTE: pretty sure this is faster
# and looks about the same as the below output.
flat, x = hl2mxmn(ohlc)
else:
flat = rfn.structured_to_unstructured(
ohlc[['open', 'high', 'low', 'close']]
).flatten()
x = np.linspace(
start=index[0] - 0.5,
stop=index[-1] + 0.5,
num=len(flat),
)
return x, flat
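A small usage sketch of the flattening helpers above (assuming the new module lands as `piker.data._pathops`; the OHLC rows are made up):

import numpy as np
from piker.data._pathops import ohlc_flatten

ohlc = np.array(
    [
        (0, 10.0, 12.0, 9.0, 11.0),
        (1, 11.0, 13.0, 10.0, 12.0),
        (2, 12.0, 12.5, 11.0, 11.5),
    ],
    dtype=[
        ('index', 'i8'),
        ('open', 'f8'),
        ('high', 'f8'),
        ('low', 'f8'),
        ('close', 'f8'),
    ],
)

# with `use_mxmn=True` the high/low outline is traced via `hl2mxmn()`
# giving 2 output points per input bar.
x, flat = ohlc_flatten(ohlc, use_mxmn=True)
print(x.shape, flat.shape)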

File diff suppressed because it is too large.

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0) # Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -27,29 +27,22 @@ from multiprocessing.shared_memory import SharedMemory, _USE_POSIX
if _USE_POSIX: if _USE_POSIX:
from _posixshmem import shm_unlink from _posixshmem import shm_unlink
import tractor # import msgspec
import numpy as np import numpy as np
from pydantic import BaseModel
from numpy.lib import recfunctions as rfn from numpy.lib import recfunctions as rfn
import tractor
from ..log import get_logger from ._util import log
from ._source import base_iohlc_dtype from ._source import def_iohlcv_fields
from piker.types import Struct
log = get_logger(__name__)
# how much is probably dependent on lifestyle
_secs_in_day = int(60 * 60 * 24)
# we try for a buncha times, but only on a run-every-other-day kinda week.
_days_worth = 16
_default_size = _days_worth * _secs_in_day
# where to start the new data append index
_rt_buffer_start = int((_days_worth - 1) * _secs_in_day)
def cuckoff_mantracker(): def cuckoff_mantracker():
'''
Disable all ``multiprocessing`` "resource tracking" machinery since
it's an absolute multi-threaded mess of non-SC madness.
'''
from multiprocessing import resource_tracker as mantracker from multiprocessing import resource_tracker as mantracker
# Tell the "resource tracker" thing to fuck off. # Tell the "resource tracker" thing to fuck off.
@ -68,7 +61,6 @@ def cuckoff_mantracker():
mantracker._resource_tracker = ManTracker() mantracker._resource_tracker = ManTracker()
mantracker.register = mantracker._resource_tracker.register mantracker.register = mantracker._resource_tracker.register
mantracker.ensure_running = mantracker._resource_tracker.ensure_running mantracker.ensure_running = mantracker._resource_tracker.ensure_running
# ensure_running = mantracker._resource_tracker.ensure_running
mantracker.unregister = mantracker._resource_tracker.unregister mantracker.unregister = mantracker._resource_tracker.unregister
mantracker.getfd = mantracker._resource_tracker.getfd mantracker.getfd = mantracker._resource_tracker.getfd
@ -107,36 +99,39 @@ class SharedInt:
log.warning(f'Shm for {name} already unlinked?') log.warning(f'Shm for {name} already unlinked?')
class _Token(BaseModel): class _Token(Struct, frozen=True):
''' '''
Internal representation of a shared memory "token" Internal representation of a shared memory "token"
which can be used to key a system wide post shm entry. which can be used to key a system wide post shm entry.
''' '''
class Config:
frozen = True
shm_name: str # this serves as a "key" value shm_name: str # this serves as a "key" value
shm_first_index_name: str shm_first_index_name: str
shm_last_index_name: str shm_last_index_name: str
dtype_descr: tuple dtype_descr: tuple
size: int # in struct-array index / row terms
@property @property
def dtype(self) -> np.dtype: def dtype(self) -> np.dtype:
return np.dtype(list(map(tuple, self.dtype_descr))).descr return np.dtype(list(map(tuple, self.dtype_descr))).descr
def as_msg(self): def as_msg(self):
return self.dict() return self.to_dict()
@classmethod @classmethod
def from_msg(cls, msg: dict) -> _Token: def from_msg(cls, msg: dict) -> _Token:
if isinstance(msg, _Token): if isinstance(msg, _Token):
return msg return msg
# TODO: native struct decoding
# return _token_dec.decode(msg)
msg['dtype_descr'] = tuple(map(tuple, msg['dtype_descr'])) msg['dtype_descr'] = tuple(map(tuple, msg['dtype_descr']))
return _Token(**msg) return _Token(**msg)
# _token_dec = msgspec.msgpack.Decoder(_Token)
# TODO: this api? # TODO: this api?
# _known_tokens = tractor.ActorVar('_shm_tokens', {}) # _known_tokens = tractor.ActorVar('_shm_tokens', {})
# _known_tokens = tractor.ContextStack('_known_tokens', ) # _known_tokens = tractor.ContextStack('_known_tokens', )
@ -155,6 +150,7 @@ def get_shm_token(key: str) -> _Token:
def _make_token( def _make_token(
key: str, key: str,
size: int,
dtype: Optional[np.dtype] = None, dtype: Optional[np.dtype] = None,
) -> _Token: ) -> _Token:
''' '''
@ -162,12 +158,13 @@ def _make_token(
to access a shared array. to access a shared array.
''' '''
dtype = base_iohlc_dtype if dtype is None else dtype dtype = def_iohlcv_fields if dtype is None else dtype
return _Token( return _Token(
shm_name=key, shm_name=key,
shm_first_index_name=key + "_first", shm_first_index_name=key + "_first",
shm_last_index_name=key + "_last", shm_last_index_name=key + "_last",
dtype_descr=np.dtype(dtype).descr dtype_descr=tuple(np.dtype(dtype).descr),
size=size,
) )
@ -219,6 +216,7 @@ class ShmArray:
shm_first_index_name=self._first._shm.name, shm_first_index_name=self._first._shm.name,
shm_last_index_name=self._last._shm.name, shm_last_index_name=self._last._shm.name,
dtype_descr=tuple(self._array.dtype.descr), dtype_descr=tuple(self._array.dtype.descr),
size=self._len,
) )
@property @property
@ -250,7 +248,6 @@ class ShmArray:
# to load an empty array.. # to load an empty array..
if len(a) == 0 and self._post_init: if len(a) == 0 and self._post_init:
raise RuntimeError('Empty array race condition hit!?') raise RuntimeError('Empty array race condition hit!?')
# breakpoint()
return a return a
@ -260,7 +257,7 @@ class ShmArray:
# type that all field values will be cast to # type that all field values will be cast to
# in the returned view. # in the returned view.
common_dtype: np.dtype = np.float, common_dtype: np.dtype = float,
) -> np.ndarray: ) -> np.ndarray:
@ -315,7 +312,7 @@ class ShmArray:
field_map: Optional[dict[str, str]] = None, field_map: Optional[dict[str, str]] = None,
prepend: bool = False, prepend: bool = False,
update_first: bool = True, update_first: bool = True,
start: Optional[int] = None, start: int | None = None,
) -> int: ) -> int:
''' '''
@ -357,7 +354,11 @@ class ShmArray:
# tries to access ``.array`` (which due to the index # tries to access ``.array`` (which due to the index
# overlap will be empty). Pretty sure we've fixed it now # overlap will be empty). Pretty sure we've fixed it now
# but leaving this here as a reminder. # but leaving this here as a reminder.
if prepend and update_first and length: if (
prepend
and update_first
and length
):
assert index < self._first.value assert index < self._first.value
if ( if (
@ -431,10 +432,10 @@ class ShmArray:
def open_shm_array( def open_shm_array(
size: int,
key: Optional[str] = None, key: str | None = None,
size: int = _default_size, dtype: np.dtype | None = None,
dtype: Optional[np.dtype] = None, append_start_index: int | None = None,
readonly: bool = False, readonly: bool = False,
) -> ShmArray: ) -> ShmArray:
@ -464,7 +465,8 @@ def open_shm_array(
token = _make_token( token = _make_token(
key=key, key=key,
dtype=dtype size=size,
dtype=dtype,
) )
# create single entry arrays for storing an first and last indices # create single entry arrays for storing an first and last indices
@ -498,10 +500,13 @@ def open_shm_array(
# ``ShmArray._start.value: int = 0`` and the yet-to-be written # ``ShmArray._start.value: int = 0`` and the yet-to-be written
# real-time section will start at ``ShmArray.index: int``. # real-time section will start at ``ShmArray.index: int``.
# this sets the index to 3/4 of the length of the buffer # this sets the index to nearly 2/3rds into the length of
# leaving a "days worth of second samples" for the real-time # the buffer leaving at least a "days worth of second samples"
# section. # for the real-time section.
last.value = first.value = _rt_buffer_start if append_start_index is None:
append_start_index = round(size * 0.616)
last.value = first.value = append_start_index
shmarr = ShmArray( shmarr = ShmArray(
array, array,
@ -515,16 +520,15 @@ def open_shm_array(
# "unlink" created shm on process teardown by # "unlink" created shm on process teardown by
# pushing teardown calls onto actor context stack # pushing teardown calls onto actor context stack
stack = tractor.current_actor().lifetime_stack
tractor._actor._lifetime_stack.callback(shmarr.close) stack.callback(shmarr.close)
tractor._actor._lifetime_stack.callback(shmarr.destroy) stack.callback(shmarr.destroy)
return shmarr return shmarr
def attach_shm_array( def attach_shm_array(
token: tuple[str, str, tuple[str, str]], token: tuple[str, str, tuple[str, str]],
size: int = _default_size,
readonly: bool = True, readonly: bool = True,
) -> ShmArray: ) -> ShmArray:
@ -563,7 +567,7 @@ def attach_shm_array(
raise _err raise _err
shmarr = np.ndarray( shmarr = np.ndarray(
(size,), (token.size,),
dtype=token.dtype, dtype=token.dtype,
buffer=shm.buf buffer=shm.buf
) )
@ -602,15 +606,18 @@ def attach_shm_array(
if key not in _known_tokens: if key not in _known_tokens:
_known_tokens[key] = token _known_tokens[key] = token
# "close" attached shm on process teardown # "close" attached shm on actor teardown
tractor._actor._lifetime_stack.callback(sha.close) tractor.current_actor().lifetime_stack.callback(sha.close)
return sha return sha
def maybe_open_shm_array( def maybe_open_shm_array(
key: str, key: str,
dtype: Optional[np.dtype] = None, size: int,
dtype: np.dtype | None = None,
append_start_index: int | None = None,
readonly: bool = False,
**kwargs, **kwargs,
) -> tuple[ShmArray, bool]: ) -> tuple[ShmArray, bool]:
@ -634,23 +641,41 @@ def maybe_open_shm_array(
try: try:
# see if we already know this key # see if we already know this key
token = _known_tokens[key] token = _known_tokens[key]
return attach_shm_array(token=token, **kwargs), False return (
attach_shm_array(
token=token,
readonly=readonly,
),
False,
)
except KeyError: except KeyError:
log.warning(f"Could not find {key} in shms cache") log.debug(f"Could not find {key} in shms cache")
if dtype: if dtype:
token = _make_token(key, dtype) token = _make_token(
key,
size=size,
dtype=dtype,
)
try: try:
return attach_shm_array(token=token, **kwargs), False return attach_shm_array(token=token, **kwargs), False
except FileNotFoundError: except FileNotFoundError:
log.warning(f"Could not attach to shm with token {token}") log.debug(f"Could not attach to shm with token {token}")
# This actor does not know about memory # This actor does not know about memory
# associated with the provided "key". # associated with the provided "key".
# Attempt to open a block and expect # Attempt to open a block and expect
# to fail if a block has been allocated # to fail if a block has been allocated
# on the OS by someone else. # on the OS by someone else.
return open_shm_array(key=key, dtype=dtype, **kwargs), True return (
open_shm_array(
key=key,
size=size,
dtype=dtype,
append_start_index=append_start_index,
readonly=readonly,
),
True,
)
def try_read( def try_read(
array: np.ndarray array: np.ndarray

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship for piker0) # Copyright (C) 2018-present Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -18,34 +18,47 @@
numpy data source conversion helpers. numpy data source conversion helpers.
""" """
from __future__ import annotations from __future__ import annotations
from typing import Any
import decimal
from bidict import bidict from bidict import bidict
import numpy as np import numpy as np
from pydantic import BaseModel
# from numba import from_dtype
ohlc_fields = [ def_iohlcv_fields: list[tuple[str, type]] = [
('time', float),
# YES WE KNOW, this isn't needed in polars but we use it for doing
# ring-buffer like pre/append ops our our `ShmArray` real-time
# numpy-array buffering system such that there is a master index
# that can be used for index-arithmetic when write data to the
# "middle" of the array. See the ``tractor.ipc.shm`` pkg for more
# details.
('index', int),
# presume int for epoch stamps since it's most common
# and makes the most sense to avoid float rounding issues.
# TODO: if we want higher reso we should use the new
# ``time.time_ns()`` in python 3.10+
('time', int),
('open', float), ('open', float),
('high', float), ('high', float),
('low', float), ('low', float),
('close', float), ('close', float),
('volume', float), ('volume', float),
('bar_wap', float),
# TODO: can we elim this from default field set to save on mem?
# i think only kraken really uses this in terms of what we get from
# their ohlc history API?
# ('bar_wap', float), # shouldn't be default right?
] ]
ohlc_with_index = ohlc_fields.copy() # remove index field
ohlc_with_index.insert(0, ('index', int)) def_ohlcv_fields: list[tuple[str, type]] = def_iohlcv_fields.copy()
def_ohlcv_fields.pop(0)
# our minimum structured array layout for ohlc data assert (len(def_iohlcv_fields) - len(def_ohlcv_fields)) == 1
base_iohlc_dtype = np.dtype(ohlc_with_index)
base_ohlc_dtype = np.dtype(ohlc_fields)
# TODO: for now need to construct this manually for readonly arrays, see # TODO: for now need to construct this manually for readonly arrays, see
# https://github.com/numba/numba/issues/4511 # https://github.com/numba/numba/issues/4511
# from numba import from_dtype
# base_ohlc_dtype = np.dtype(def_ohlc_fields)
# numba_ohlc_dtype = from_dtype(base_ohlc_dtype) # numba_ohlc_dtype = from_dtype(base_ohlc_dtype)
# map time frame "keys" to seconds values # map time frame "keys" to seconds values
@ -60,28 +73,6 @@ tf_in_1s = bidict({
}) })
def mk_fqsn(
provider: str,
symbol: str,
) -> str:
'''
Generate a "fully qualified symbol name" which is
a reverse-hierarchical cross broker/provider symbol
'''
return '.'.join([symbol, provider]).lower()
def float_digits(
value: float,
) -> int:
if value == 0:
return 0
return int(-decimal.Decimal(str(value)).as_tuple().exponent)
def ohlc_zeros(length: int) -> np.ndarray: def ohlc_zeros(length: int) -> np.ndarray:
"""Construct an OHLC field formatted structarray. """Construct an OHLC field formatted structarray.
@ -92,168 +83,6 @@ def ohlc_zeros(length: int) -> np.ndarray:
return np.zeros(length, dtype=base_ohlc_dtype) return np.zeros(length, dtype=base_ohlc_dtype)
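A small illustration (values fabricated) of how the field tables above become numpy struct-array dtypes, in the same spirit as `ohlc_zeros()`:

import numpy as np

def_iohlcv_fields: list[tuple[str, type]] = [
    ('index', int),
    ('time', int),
    ('open', float),
    ('high', float),
    ('low', float),
    ('close', float),
    ('volume', float),
]
iohlcv_dtype = np.dtype(def_iohlcv_fields)
bars = np.zeros(3, dtype=iohlcv_dtype)   # same idea as `ohlc_zeros()`
bars['close'] = [101.0, 102.5, 102.0]
print(bars.dtype.names)
# ('index', 'time', 'open', 'high', 'low', 'close', 'volume')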
def unpack_fqsn(fqsn: str) -> tuple[str, str, str]:
'''
Unpack a fully-qualified-symbol-name to ``tuple``.
'''
venue = ''
suffix = ''
# TODO: probably reverse the order of all this XD
tokens = fqsn.split('.')
if len(tokens) < 3:
# probably crypto
symbol, broker = tokens
return (
broker,
symbol,
'',
)
elif len(tokens) > 3:
symbol, venue, suffix, broker = tokens
else:
symbol, venue, broker = tokens
suffix = ''
# head, _, broker = fqsn.rpartition('.')
# symbol, _, suffix = head.rpartition('.')
return (
broker,
'.'.join([symbol, venue]),
suffix,
)
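For clarity, the (now removed) legacy FQSN scheme unpacks like this, using the `unpack_fqsn()` helper defined just above; the symbols are illustrative only.

# 4 tokens: <symbol>.<venue>.<suffix>.<broker>
assert unpack_fqsn('es.cme.202312.ib') == ('ib', 'es.cme', '202312')
# 2 tokens (crypto style): <symbol>.<broker>
assert unpack_fqsn('btcusdt.binance') == ('binance', 'btcusdt', '')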
class Symbol(BaseModel):
'''
I guess this is some kinda container thing for dealing with
all the different meta-data formats from brokers?
'''
key: str
tick_size: float = 0.01
lot_tick_size: float = 0.0 # "volume" precision as min step value
tick_size_digits: int = 2
lot_size_digits: int = 0
suffix: str = ''
broker_info: dict[str, dict[str, Any]] = {}
# specifies a "class" of financial instrument
# ex. stock, future, option, bond etc.
# @validate_arguments
@classmethod
def from_broker_info(
cls,
broker: str,
symbol: str,
info: dict[str, Any],
suffix: str = '',
# XXX: like wtf..
# ) -> 'Symbol':
) -> None:
tick_size = info.get('price_tick_size', 0.01)
lot_tick_size = info.get('lot_tick_size', 0.0)
return Symbol(
key=symbol,
tick_size=tick_size,
lot_tick_size=lot_tick_size,
tick_size_digits=float_digits(tick_size),
lot_size_digits=float_digits(lot_tick_size),
suffix=suffix,
broker_info={broker: info},
)
@classmethod
def from_fqsn(
cls,
fqsn: str,
info: dict[str, Any],
# XXX: like wtf..
# ) -> 'Symbol':
) -> None:
broker, key, suffix = unpack_fqsn(fqsn)
return cls.from_broker_info(
broker,
key,
info=info,
suffix=suffix,
)
@property
def type_key(self) -> str:
return list(self.broker_info.values())[0]['asset_type']
@property
def brokers(self) -> list[str]:
return list(self.broker_info.keys())
def nearest_tick(self, value: float) -> float:
'''
Return the nearest tick value based on minimum increment.
'''
mult = 1 / self.tick_size
return round(value * mult) / mult
def front_feed(self) -> tuple[str, str]:
'''
Return the "current" feed key for this symbol.
(i.e. the broker + symbol key in a tuple).
'''
return (
list(self.broker_info.keys())[0],
self.key,
)
def tokens(self) -> tuple[str]:
broker, key = self.front_feed()
if self.suffix:
return (key, self.suffix, broker)
else:
return (key, broker)
def front_fqsn(self) -> str:
'''
fqsn = "fully qualified symbol name"
Basically the idea here is for all client-ish code (aka programs/actors
that ask the provider agnostic layers in the stack for data) should be
able to tell which backend / venue / derivative each data feed/flow is
from by an explicit string key of the current form:
<instrumentname>.<venue>.<suffixwithmetadata>.<brokerbackendname>
TODO: I have thoughts that we should actually change this to be
more like an "attr lookup" (like how the web should have done
urls, but marketing peeps ruined it etc. etc.):
<broker>.<venue>.<instrumentname>.<suffixwithmetadata>
'''
tokens = self.tokens()
fqsn = '.'.join(tokens)
return fqsn
def iterfqsns(self) -> list[str]:
keys = []
for broker in self.broker_info.keys():
fqsn = mk_fqsn(self.key, broker)
if self.suffix:
fqsn += f'.{self.suffix}'
keys.append(fqsn)
return keys
def _nan_to_closest_num(array: np.ndarray): def _nan_to_closest_num(array: np.ndarray):
"""Return interpolated values instead of NaN. """Return interpolated values instead of NaN.

View File

@ -0,0 +1,510 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Mega-simple symbology cache via TOML files.
Allow backend data providers and/or brokers to stash their
symbology sets (aka the meta data we normalize into our
`.accounting.MktPair` type) to the filesystem for faster lookup and
offline usage.
'''
from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
)
from pathlib import Path
from pprint import pformat
from typing import (
Any,
Sequence,
Hashable,
TYPE_CHECKING,
)
from types import ModuleType
from rapidfuzz import process as fuzzy
import tomli_w # for fast symbol cache writing
import tractor
import trio
try:
import tomllib
except ModuleNotFoundError:
import tomli as tomllib
from msgspec import field
from piker.log import get_logger
from piker import config
from piker.types import Struct
from piker.brokers import (
open_cached_client,
get_brokermod,
)
if TYPE_CHECKING:
from ..accounting import (
Asset,
MktPair,
)
log = get_logger('data.cache')
class SymbologyCache(Struct):
'''
Asset meta-data cache which holds lookup tables for 3 sets of
market-symbology related struct-types required by the
`.accounting` and `.data` subsystems.
'''
mod: ModuleType
fp: Path
# all asset-money-systems descriptions as minimally defined by
# in `.accounting.Asset`
assets: dict[str, Asset] = field(default_factory=dict)
# backend-system pairs loaded in provider (schema) specific
# structs.
pairs: dict[str, Struct] = field(default_factory=dict)
# serialized namespace path to the backend's pair-info-`Struct`
# defn B)
pair_ns_path: tractor.msg.NamespacePath | None = None
# TODO: piker-normalized `.accounting.MktPair` table?
# loaded from the `.pairs` and a normalizer
# provided by the backend pkg.
mktmaps: dict[str, MktPair] = field(default_factory=dict)
def write_config(self) -> None:
# put the backend's pair-struct type ref at the top
# of file if possible.
cachedict: dict[str, Any] = {
'pair_ns_path': str(self.pair_ns_path) or '',
}
# serialize all tables as dicts for TOML.
for key, table in {
'assets': self.assets,
'pairs': self.pairs,
'mktmaps': self.mktmaps,
}.items():
if not table:
log.warning(
f'Asset cache table for `{key}` is empty?'
)
continue
dct = cachedict[key] = {}
for key, struct in table.items():
dct[key] = struct.to_dict(include_non_members=False)
try:
with self.fp.open(mode='wb') as fp:
tomli_w.dump(cachedict, fp)
except TypeError:
self.fp.unlink()
raise
async def load(self) -> None:
'''
Explicitly load the "symbology set" for this provider by using
2 required `Client` methods:
- `.get_assets()`: returning a table of `Asset`s
- `.get_mkt_pairs()`: returning a table of pair-`Struct`
types, custom defined by the particular backend.
AND, the required `.get_mkt_info()` module-level endpoint
which maps `fqme: str` -> `MktPair`s.
These tables are then used to fill out the `.assets`, `.pairs` and
`.mktmaps` tables on this cache instance, respectively.
'''
async with open_cached_client(self.mod.name) as client:
if get_assets := getattr(client, 'get_assets', None):
assets: dict[str, Asset] = await get_assets()
for bs_mktid, asset in assets.items():
self.assets[bs_mktid] = asset
else:
log.warning(
f'No symbology cache `Asset` support for `{self.mod.name}`..\n'
'Implement `Client.get_assets()`!'
)
if get_mkt_pairs := getattr(client, 'get_mkt_pairs', None):
pairs: dict[str, Struct] = await get_mkt_pairs()
for bs_fqme, pair in pairs.items():
# NOTE: every backend defined pair should
# declare its ns path for roundtrip
# serialization lookup.
if not getattr(pair, 'ns_path', None):
raise TypeError(
f'Pair-struct for {self.mod.name} MUST define a '
'`.ns_path: str`!\n'
f'{pair}'
)
entry = await self.mod.get_mkt_info(pair.bs_fqme)
if not entry:
continue
mkt: MktPair
pair: Struct
mkt, _pair = entry
assert _pair is pair, (
f'`{self.mod.name}` backend probably has a '
'keying-symmetry problem between the pair-`Struct` '
'returned from `Client.get_mkt_pairs()`and the '
'module level endpoint: `.get_mkt_info()`\n\n'
"Here's the struct diff:\n"
f'{_pair - pair}'
)
# NOTE XXX: this means backends MUST implement
# a `Struct.bs_mktid: str` field to provide
# a native-keyed map to their own symbol
# set(s).
self.pairs[pair.bs_mktid] = pair
# NOTE: `MktPair`s are keyed here using piker's
# internal FQME schema so that search,
# accounting and feed init can be accomplished on
# a sane, uniform, normalized basis.
self.mktmaps[mkt.fqme] = mkt
self.pair_ns_path: str = tractor.msg.NamespacePath.from_ref(
pair,
)
else:
log.warning(
f'No symbology cache `Pair` support for `{self.mod.name}`..\n'
'Implement `Client.get_mkt_pairs()`!'
)
return self
@classmethod
def from_dict(
cls: type,
data: dict,
**kwargs,
) -> SymbologyCache:
# normal init inputs
cache = cls(**kwargs)
# XXX WARNING: this may break if backend namespacing
# changes (eg. `Pair` class def is moved to another
# module) in which case you can manually update the
# `pair_ns_path` in the symcache file and try again.
# TODO: probably a verbose error about this?
Pair: type = tractor.msg.NamespacePath(
str(data['pair_ns_path'])
).load_ref()
pairtable = data.pop('pairs')
for key, pairtable in pairtable.items():
# allow each serialized pair-dict-table to declare its
# specific struct type's path in cases where a backend
# supports multiples (normally with different
# schemas..) and we are storing them in a flat `.pairs`
# table.
ThisPair = Pair
if this_pair_type := pairtable.get('ns_path'):
ThisPair: type = tractor.msg.NamespacePath(
str(this_pair_type)
).load_ref()
pair: Struct = ThisPair(**pairtable)
cache.pairs[key] = pair
from ..accounting import (
Asset,
MktPair,
)
# load `dict` -> `Asset`
assettable = data.pop('assets')
for name, asdict in assettable.items():
cache.assets[name] = Asset.from_msg(asdict)
# load `dict` -> `MktPair`
dne: list[str] = []
mkttable = data.pop('mktmaps')
for fqme, mktdict in mkttable.items():
mkt = MktPair.from_msg(mktdict)
assert mkt.fqme == fqme
# sanity check asset refs from those (presumably)
# loaded asset set above.
src: Asset = cache.assets[mkt.src.name]
assert src == mkt.src
dst: Asset
if not (dst := cache.assets.get(mkt.dst.name)):
dne.append(mkt.dst.name)
continue
else:
assert dst.name == mkt.dst.name
cache.mktmaps[fqme] = mkt
log.warning(
f'These `MktPair.dst: Asset`s DNE says `{cache.mod.name}`?\n'
f'{pformat(dne)}'
)
return cache
@staticmethod
async def from_scratch(
mod: ModuleType,
fp: Path,
**kwargs,
) -> SymbologyCache:
'''
Generate (a) new symcache (contents) entirely from scratch
including all (TOML) serialized data and file.
'''
log.info(f'GENERATING symbology cache for `{mod.name}`')
cache = SymbologyCache(
mod=mod,
fp=fp,
**kwargs,
)
await cache.load()
cache.write_config()
return cache
def search(
self,
pattern: str,
table: str = 'mktmaps'
) -> dict[str, Struct]:
'''
(Fuzzy) search this cache's `.mktmaps` table, which is
keyed by FQMEs, for `pattern: str` and return the best
matches in a `dict` including the `MktPair` values.
'''
matches = fuzzy.extract(
pattern,
getattr(self, table),
score_cutoff=50,
)
# repack in dict[fqme, MktPair] form
return {
item[0].fqme: item[0]
for item in matches
}
# actor-process-local in-mem-cache of symcaches (by backend).
_caches: dict[str, SymbologyCache] = {}
def mk_cachefile(
provider: str,
) -> Path:
cachedir: Path = config.get_conf_dir() / '_cache'
if not cachedir.is_dir():
log.info(f'Creating `nativedb` directory: {cachedir}')
cachedir.mkdir()
cachefile: Path = cachedir / f'{str(provider)}.symcache.toml'
cachefile.touch()
return cachefile
@acm
async def open_symcache(
mod_or_name: ModuleType | str,
reload: bool = False,
only_from_memcache: bool = False, # no API req
_no_symcache: bool = False, # no backend support
) -> SymbologyCache:
if isinstance(mod_or_name, str):
mod = get_brokermod(mod_or_name)
else:
mod: ModuleType = mod_or_name
provider: str = mod.name
cachefile: Path = mk_cachefile(provider)
# NOTE: certain backends might not support a symbology cache
# (easily) and thus we allow for an empty instance to be loaded
# and manually filled in at the whim of the caller presuming
# the backend pkg-module is annotated appropriately.
if (
getattr(mod, '_no_symcache', False)
or _no_symcache
):
yield SymbologyCache(
mod=mod,
fp=cachefile,
)
# don't do nuttin
return
# actor-level cache-cache XD
global _caches
if not reload:
try:
yield _caches[provider]
except KeyError:
msg: str = (
f'No asset info cache exists yet for `{provider}`'
)
if only_from_memcache:
raise RuntimeError(msg)
else:
log.warning(msg)
# if no cache exists or an explicit reload is requested, load
# the provider API and call appropriate endpoints to populate
# the mkt and asset tables.
if (
reload
or not cachefile.is_file()
):
cache = await SymbologyCache.from_scratch(
mod=mod,
fp=cachefile,
)
else:
log.info(
f'Loading EXISTING `{mod.name}` symbology cache:\n'
f'> {cachefile}'
)
import time
now = time.time()
with cachefile.open('rb') as existing_fp:
data: dict[str, dict] = tomllib.load(existing_fp)
log.runtime(f'SYMCACHE TOML LOAD TIME: {time.time() - now}')
# if there's an empty file for some reason we need
# to do a full reload as well!
if not data:
cache = await SymbologyCache.from_scratch(
mod=mod,
fp=cachefile,
)
else:
cache = SymbologyCache.from_dict(
data,
mod=mod,
fp=cachefile,
)
# TODO: use a real profiling sys..
# https://github.com/pikers/piker/issues/337
log.info(f'SYMCACHE LOAD TIME: {time.time() - now}')
yield cache
# TODO: write only when changes detected? but that should
# never happen right except on reload?
# cache.write_config()
def get_symcache(
provider: str,
force_reload: bool = False,
) -> SymbologyCache:
'''
Get any available symbology/assets cache from sync code by
(maybe) manually running `trio` to do the work.
'''
# spawn tractor runtime and generate cache
# if not existing.
async def sched_gen_symcache():
async with (
# only for runtime's debug mode
tractor.open_nursery(debug_mode=True),
open_symcache(
get_brokermod(provider),
reload=force_reload,
) as symcache,
):
return symcache
try:
symcache: SymbologyCache = trio.run(sched_gen_symcache)
assert symcache
except BaseException:
import pdbp
pdbp.xpm()
return symcache
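A hedged sketch of driving the sync helper above; the provider name is arbitrary and the import path (`piker.data._symcache`) is an assumption based on this diff.

from piker.data._symcache import get_symcache  # assumed module path

symcache = get_symcache('binance', force_reload=False)
# `.mktmaps` is keyed by piker-normalized FQMEs, `.pairs` by the
# backend's native `bs_mktid`s:
for fqme in list(symcache.mktmaps)[:3]:
    print(fqme)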
def match_from_pairs(
pairs: dict[str, Struct],
query: str,
score_cutoff: int = 50,
**extract_kwargs,
) -> dict[str, Struct]:
'''
Fuzzy search over a "pairs table" maintained by most backends
as part of their symbology-info caching internals.
Scan the native symbol key set and return best ranked
matches back in a new `dict`.
'''
# TODO: somehow cache this list (per call) like we were in
# `open_symbol_search()`?
keys: list[str] = list(pairs)
matches: list[tuple[
Sequence[Hashable], # matching input key
Any, # scores
Any,
]] = fuzzy.extract(
# NOTE: most backends provide keys uppercased
query=query,
choices=keys,
score_cutoff=score_cutoff,
**extract_kwargs,
)
# pop and repack pairs in output dict
matched_pairs: dict[str, Struct] = {}
for item in matches:
pair_key: str = item[0]
matched_pairs[pair_key] = pairs[pair_key]
return matched_pairs
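Example use of the fuzzy matcher above over a toy pairs table; the keys and values are stand-ins for real backend pair structs and `match_from_pairs` is assumed importable from this module.

pairs = {
    'BTCUSDT': object(),   # placeholders for backend pair `Struct`s
    'BTCUSDC': object(),
    'ETHUSDT': object(),
}
matches = match_from_pairs(
    pairs=pairs,
    query='BTC',
    score_cutoff=50,
)
print(list(matches))  # likely ['BTCUSDT', 'BTCUSDC']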

View File

@ -0,0 +1,34 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Data layer module commons.
'''
from functools import partial
from ..log import (
get_logger,
get_console_log,
)
subsys: str = 'piker.data'
log = get_logger(subsys)
get_console_log = partial(
get_console_log,
name=subsys,
)

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0) # Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -18,13 +18,30 @@
ToOlS fOr CoPInG wITh "tHE wEB" protocols. ToOlS fOr CoPInG wITh "tHE wEB" protocols.
""" """
from contextlib import asynccontextmanager, AsyncExitStack from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
)
from itertools import count
from functools import partial
from types import ModuleType from types import ModuleType
from typing import Any, Callable, AsyncGenerator from typing import (
Any,
Optional,
Callable,
AsyncContextManager,
AsyncGenerator,
Iterable,
)
import json import json
import trio import trio
import trio_websocket from trio_typing import TaskStatus
from trio_websocket import (
WebSocketConnection,
open_websocket_url,
)
from wsproto.utilities import LocalProtocolError
from trio_websocket._impl import ( from trio_websocket._impl import (
ConnectionClosed, ConnectionClosed,
DisconnectionTimeout, DisconnectionTimeout,
@ -33,81 +50,71 @@ from trio_websocket._impl import (
ConnectionTimeout, ConnectionTimeout,
) )
from ..log import get_logger from piker.types import Struct
from ._util import log
log = get_logger(__name__)
class NoBsWs: class NoBsWs:
"""Make ``trio_websocket`` sockets stay up no matter the bs. '''
Make ``trio_websocket`` sockets stay up no matter the bs.
""" A shim interface that allows client code to stream from some
``WebSocketConnection`` but where any connectivity bs is handled
automatically and entirely in the background.
NOTE: this type should never be created directly but instead is
provided via the ``open_autorecon_ws()`` factory below.
'''
# apparently we can QoS for all sorts of reasons..so catch em.
recon_errors = ( recon_errors = (
ConnectionClosed, ConnectionClosed,
DisconnectionTimeout, DisconnectionTimeout,
ConnectionRejected, ConnectionRejected,
HandshakeError, HandshakeError,
ConnectionTimeout, ConnectionTimeout,
LocalProtocolError,
) )
def __init__( def __init__(
self, self,
url: str, url: str,
token: str, rxchan: trio.MemoryReceiveChannel,
stack: AsyncExitStack, msg_recv_timeout: float,
fixture: Callable,
serializer: ModuleType = json, serializer: ModuleType = json
): ):
self.url = url self.url = url
self.token = token self._rx = rxchan
self.fixture = fixture self._timeout = msg_recv_timeout
self._stack = stack
self._ws: 'WebSocketConnection' = None # noqa
async def _connect( # signaling between caller and relay task which determines when
self, # socket is connected (and subscribed).
tries: int = 1000, self._connected: trio.Event = trio.Event()
) -> None:
while True:
try:
await self._stack.aclose()
except (DisconnectionTimeout, RuntimeError):
await trio.sleep(0.5)
else:
break
last_err = None # dynamically reset by the bg relay task
for i in range(tries): self._ws: WebSocketConnection | None = None
try: self._cs: trio.CancelScope | None = None
self._ws = await self._stack.enter_async_context(
trio_websocket.open_websocket_url(self.url)
)
# rerun user code fixture
if self.token == '':
ret = await self._stack.enter_async_context(
self.fixture(self)
)
else:
ret = await self._stack.enter_async_context(
self.fixture(self, self.token)
)
assert ret is None # interchange codec methods
# TODO: obviously the method API here may be different
# for another interchange format..
self._dumps: Callable = serializer.dumps
self._loads: Callable = serializer.loads
log.info(f'Connection success: {self.url}') def connected(self) -> bool:
return self._ws return self._connected.is_set()
except self.recon_errors as err: async def reset(self) -> None:
last_err = err '''
log.error( Reset the underlying ws connection by cancelling
f'{self} connection bail with ' the bg relay task and waiting for it to signal
f'{type(err)}...retry attempt {i}' a new connection.
)
await trio.sleep(0.5) '''
continue self._connected = trio.Event()
else: self._cs.cancel()
log.exception('ws connection fail...') await self._connected.wait()
raise last_err
async def send_msg( async def send_msg(
self, self,
@ -115,38 +122,348 @@ class NoBsWs:
) -> None: ) -> None:
while True: while True:
try: try:
return await self._ws.send_message(json.dumps(data)) msg: Any = self._dumps(data)
return await self._ws.send_message(msg)
except self.recon_errors: except self.recon_errors:
await self._connect() await self.reset()
async def recv_msg( async def recv_msg(self) -> Any:
msg: Any = await self._rx.receive()
data = self._loads(msg)
return data
def __aiter__(self):
return self
async def __anext__(self):
return await self.recv_msg()
def set_recv_timeout(
self, self,
) -> Any: timeout: float,
) -> None:
self._timeout = timeout
async def _reconnect_forever(
url: str,
snd: trio.MemorySendChannel,
nobsws: NoBsWs,
reset_after: int, # msg recv timeout before reset attempt
fixture: AsyncContextManager | None = None,
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
) -> None:
# TODO: can we just report "where" in the call stack
# the client code is using the ws stream?
# Maybe we can just drop this since it's already in the log msg
# prefix?
if fixture is not None:
src_mod: str = fixture.__module__
else:
src_mod: str = 'unknown'
async def proxy_msgs(
ws: WebSocketConnection,
pcs: trio.CancelScope, # parent cancel scope
):
'''
Receive (under `timeout` deadline) all msgs from the underlying
websocket and relay them to (calling) parent task via ``trio``
mem chan.
'''
# after so many msg recv timeouts, reset the connection
timeouts: int = 0
while True: while True:
try: with trio.move_on_after(
return json.loads(await self._ws.get_message()) # can be dynamically changed by user code
except self.recon_errors: nobsws._timeout,
await self._connect() ) as cs:
try:
msg: Any = await ws.get_message()
await snd.send(msg)
except nobsws.recon_errors:
log.exception(
f'{src_mod}\n'
f'{url} connection bail with:'
)
await trio.sleep(0.5)
pcs.cancel()
# go back to reconnect loop in parent task
return
if cs.cancelled_caught:
timeouts += 1
if timeouts > reset_after:
log.error(
f'{src_mod}\n'
'WS feed seems down and slow af.. reconnecting\n'
)
pcs.cancel()
# go back to reconnect loop in parent task
return
async def open_fixture(
fixture: AsyncContextManager,
nobsws: NoBsWs,
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
):
'''
Open user provided `@acm` and sleep until any connection
reset occurs.
'''
async with fixture(nobsws) as ret:
assert ret is None
task_status.started()
await trio.sleep_forever()
# last_err = None
nobsws._connected = trio.Event()
task_status.started()
while not snd._closed:
log.info(
f'{src_mod}\n'
f'{url} trying (RE)CONNECT'
)
ws: WebSocketConnection
try:
async with (
trio.open_nursery() as n,
open_websocket_url(url) as ws,
):
cs = nobsws._cs = n.cancel_scope
nobsws._ws = ws
log.info(
f'{src_mod}\n'
f'Connection success: {url}'
)
# begin relay loop to forward msgs
n.start_soon(
proxy_msgs,
ws,
cs,
)
if fixture is not None:
log.info(
f'{src_mod}\n'
f'Entering fixture: {fixture}'
)
# TODO: should we return an explicit sub-cs
# from this fixture task?
await n.start(
open_fixture,
fixture,
nobsws,
)
# indicate to wrapper / opener that we are up and block
# to let tasks run **inside** the ws open block above.
nobsws._connected.set()
await trio.sleep_forever()
except HandshakeError:
log.exception(f'Retrying connection')
# ws & nursery block ends
nobsws._connected = trio.Event()
if cs.cancelled_caught:
log.cancel(
f'{url} connection cancelled!'
)
# if wrapper cancelled us, we expect it to also
# have re-assigned a new event
assert (
nobsws._connected
and not nobsws._connected.is_set()
)
# -> from here, move to next reconnect attempt iteration
# in the while loop above Bp
else:
log.exception(
f'{src_mod}\n'
'ws connection closed by client...'
)
@asynccontextmanager @acm
async def open_autorecon_ws( async def open_autorecon_ws(
url: str, url: str,
# TODO: proper type annot smh fixture: AsyncContextManager | None = None,
fixture: Callable,
# used for authenticated websockets
token: str = '',
) -> AsyncGenerator[tuple[...], NoBsWs]:
"""Apparently we can QoS for all sorts of reasons..so catch em.
""" # time in sec between msgs received before
async with AsyncExitStack() as stack: # we presume connection might need a reset.
ws = NoBsWs(url, token, stack, fixture=fixture) msg_recv_timeout: float = 16,
await ws._connect()
# count of the number of above timeouts before connection reset
reset_after: int = 3,
) -> AsyncGenerator[tuple[...], NoBsWs]:
'''
An auto-reconnect websocket (wrapper API) around
``trio_websocket.open_websocket_url()`` providing automatic
re-connection on network errors, msg latency and thus roaming.
Here we implement a re-connect websocket interface where a bg
nursery runs ``WebSocketConnection.get_message()``s in a loop
and restarts the full http(s) handshake on catches of certain
connectivity errors, or some user defined recv timeout.
You can provide a ``fixture`` async-context-manager which will be
entered/exitted around each connection reset; eg. for (re)requesting
subscriptions without requiring streaming setup code to rerun.
'''
snd: trio.MemorySendChannel
rcv: trio.MemoryReceiveChannel
snd, rcv = trio.open_memory_channel(616)
async with trio.open_nursery() as n:
nobsws = NoBsWs(
url,
rcv,
msg_recv_timeout=msg_recv_timeout,
)
await n.start(
partial(
_reconnect_forever,
url,
snd,
nobsws,
fixture=fixture,
reset_after=reset_after,
)
)
await nobsws._connected.wait()
assert nobsws._cs
assert nobsws.connected()
try: try:
yield ws yield nobsws
finally: finally:
await stack.aclose() n.cancel_scope.cancel()
'''
JSONRPC response-request style machinery for transparent multiplexing of msgs
over a NoBsWs.
'''
class JSONRPCResult(Struct):
id: int
jsonrpc: str = '2.0'
result: Optional[dict] = None
error: Optional[dict] = None
@acm
async def open_jsonrpc_session(
url: str,
start_id: int = 0,
response_type: type = JSONRPCResult,
request_type: Optional[type] = None,
request_hook: Optional[Callable] = None,
error_hook: Optional[Callable] = None,
) -> Callable[[str, dict], dict]:
async with (
trio.open_nursery() as n,
open_autorecon_ws(url) as ws
):
rpc_id: Iterable = count(start_id)
rpc_results: dict[int, dict] = {}
async def json_rpc(method: str, params: dict) -> dict:
'''
Perform a JSON-RPC call and wait for the result; raise an exception
if an error field is present in the response.
'''
msg = {
'jsonrpc': '2.0',
'id': next(rpc_id),
'method': method,
'params': params
}
_id = msg['id']
rpc_results[_id] = {
'result': None,
'event': trio.Event()
}
await ws.send_msg(msg)
await rpc_results[_id]['event'].wait()
ret = rpc_results[_id]['result']
del rpc_results[_id]
if ret.error is not None:
raise Exception(json.dumps(ret.error, indent=4))
return ret
async def recv_task():
'''
receives every ws message and stores it in its corresponding
result field, then sets the event to wakeup original sender
tasks. Also receives responses to requests originating from
the server side.
'''
async for msg in ws:
match msg:
case {
'result': _,
'id': mid,
} if res_entry := rpc_results.get(mid):
res_entry['result'] = response_type(**msg)
res_entry['event'].set()
case {
'result': _,
'id': mid,
} if not rpc_results.get(mid):
log.warning(
f'Unexpected ws msg: {json.dumps(msg, indent=4)}'
)
case {
'method': _,
'params': _,
}:
log.debug(f'Received\n{msg}')
if request_hook:
await request_hook(request_type(**msg))
case {
'error': error
}:
log.warning(f'Received\n{error}')
if error_hook:
await error_hook(response_type(**msg))
case _:
log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')
n.start_soon(recv_task)
yield json_rpc
n.cancel_scope.cancel()

View File

@ -1,196 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
marketstore cli.
"""
from functools import partial
from pprint import pformat
from anyio_marketstore import open_marketstore_client
import trio
import tractor
import click
import numpy as np
from .marketstore import (
get_client,
# stream_quotes,
ingest_quote_stream,
# _url,
_tick_tbk_ids,
mk_tbk,
)
from ..cli import cli
from .. import watchlists as wl
from ..log import get_logger
from ._sharedmem import (
maybe_open_shm_array,
)
from ._source import (
base_iohlc_dtype,
)
log = get_logger(__name__)
@cli.command()
@click.option(
'--url',
default='ws://localhost:5993/ws',
help='HTTP URL of marketstore instance'
)
@click.argument('names', nargs=-1)
@click.pass_obj
def ms_stream(
config: dict,
names: list[str],
url: str,
) -> None:
'''
Connect to a marketstore time bucket stream for (a set of) symbols(s)
and print to console.
'''
async def main():
# async for quote in stream_quotes(symbols=names):
# log.info(f"Received quote:\n{quote}")
...
trio.run(main)
# @cli.command()
# @click.option(
# '--url',
# default=_url,
# help='HTTP URL of marketstore instance'
# )
# @click.argument('names', nargs=-1)
# @click.pass_obj
# def ms_destroy(config: dict, names: list[str], url: str) -> None:
# """Destroy symbol entries in the local marketstore instance.
# """
# async def main():
# nonlocal names
# async with get_client(url) as client:
#
# if not names:
# names = await client.list_symbols()
#
# # default is to wipe db entirely.
# answer = input(
# "This will entirely wipe you local marketstore db @ "
# f"{url} of the following symbols:\n {pformat(names)}"
# "\n\nDelete [N/y]?\n")
#
# if answer == 'y':
# for sym in names:
# # tbk = _tick_tbk.format(sym)
# tbk = tuple(sym, *_tick_tbk_ids)
# print(f"Destroying {tbk}..")
# await client.destroy(mk_tbk(tbk))
# else:
# print("Nothing deleted.")
#
# tractor.run(main)
@cli.command()
@click.option(
'--tl',
is_flag=True,
help='Enable tractor logging')
@click.option(
'--host',
default='localhost'
)
@click.option(
'--port',
default=5993
)
@click.argument('symbols', nargs=-1)
@click.pass_obj
def storesh(
config,
tl,
host,
port,
symbols: list[str],
):
'''
Start an IPython shell ready to query the local marketstore db.
'''
from piker.data.marketstore import tsdb_history_update
from piker._daemon import open_piker_runtime
async def main():
nonlocal symbols
async with open_piker_runtime(
'storesh',
enable_modules=['piker.data._ahab'],
):
symbol = symbols[0]
await tsdb_history_update(symbol)
trio.run(main)
@cli.command()
@click.option('--test-file', '-t', help='Test quote stream file')
@click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.argument('name', nargs=1, required=True)
@click.pass_obj
def ingest(config, name, test_file, tl):
'''
Ingest real-time broker quotes and ticks to a marketstore instance.
'''
# global opts
loglevel = config['loglevel']
tractorloglevel = config['tractorloglevel']
# log = config['log']
watchlist_from_file = wl.ensure_watchlists(config['wl_path'])
watchlists = wl.merge_watchlist(watchlist_from_file, wl._builtins)
symbols = watchlists[name]
grouped_syms = {}
for sym in symbols:
symbol, _, provider = sym.rpartition('.')
if provider not in grouped_syms:
grouped_syms[provider] = []
grouped_syms[provider].append(symbol)
async def entry_point():
async with tractor.open_nursery() as n:
for provider, symbols in grouped_syms.items():
await n.run_in_actor(
ingest_quote_stream,
name='ingest_marketstore',
symbols=symbols,
brokername=provider,
tries=1,
actorloglevel=loglevel,
loglevel=tractorloglevel
)
tractor.run(entry_point)

File diff suppressed because it is too large

piker/data/flows.py (new file, 221 lines, mode 100644)
View File

@ -0,0 +1,221 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Public abstractions for organizing, managing and generally operating-on
real-time data processing data-structures.
"Streams, flumes, cascades and flows.."
"""
from __future__ import annotations
from typing import (
TYPE_CHECKING,
)
import tractor
import pendulum
import numpy as np
from piker.types import Struct
from ._sharedmem import (
attach_shm_array,
ShmArray,
_Token,
)
if TYPE_CHECKING:
from ..accounting import MktPair
from .feed import Feed
class Flume(Struct):
'''
Composite reference type which points to all the addressing
handles and other meta-data necessary for the read, measure and
management of a set of real-time updated data flows.
Can be thought of as a "flow descriptor" or "flow frame" which
describes the high level properties of a set of data flows that
can be used seamlessly across process-memory boundaries.
Each instance's sub-components normally includes:
- a msg oriented quote stream provided via an IPC transport
- history and real-time shm buffers which are both real-time
updated and backfilled.
- associated startup indexing information related to both buffer
real-time-append and historical prepend addresses.
- low level APIs to read and measure the updated data and manage
queuing properties.
'''
mkt: MktPair
first_quote: dict
_rt_shm_token: _Token
# optional since some data flows won't have a "downsampled" history
# buffer/stream (eg. FSPs).
_hist_shm_token: _Token | None = None
# private shm refs loaded dynamically from tokens
_hist_shm: ShmArray | None = None
_rt_shm: ShmArray | None = None
_readonly: bool = True
stream: tractor.MsgStream | None = None
izero_hist: int = 0
izero_rt: int = 0
throttle_rate: int | None = None
# TODO: do we need this really if we can pull the `Portal` from
# ``tractor``'s internals?
feed: Feed | None = None
@property
def rt_shm(self) -> ShmArray:
if self._rt_shm is None:
self._rt_shm = attach_shm_array(
token=self._rt_shm_token,
readonly=self._readonly,
)
return self._rt_shm
@property
def hist_shm(self) -> ShmArray:
if self._hist_shm_token is None:
raise RuntimeError(
'No shm token has been set for the history buffer?'
)
if self._hist_shm is None:
self._hist_shm = attach_shm_array(
token=self._hist_shm_token,
readonly=self._readonly,
)
return self._hist_shm
async def receive(self) -> dict:
return await self.stream.receive()
def get_ds_info(
self,
) -> tuple[float, float, float]:
'''
Compute the "downsampling" ratio info between the historical shm
buffer and the real-time (HFT) one.
Return a tuple of the fast sample period, historical sample
period and ratio between them.
'''
times: np.ndarray = self.hist_shm.array['time']
end: float | int = pendulum.from_timestamp(times[-1])
start: float | int = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s: float = (end - start).seconds
times = self.rt_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
rt_step_size_s = (end - start).seconds
ratio = hist_step_size_s / rt_step_size_s
return (
rt_step_size_s,
hist_step_size_s,
ratio,
)
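The downsample math above in plain terms, assuming 60s history bars over a 1s real-time buffer:

hist_step_size_s = 60.0   # delta between the last two history rows
rt_step_size_s = 1.0      # delta between the last two real-time rows
ratio = hist_step_size_s / rt_step_size_s
assert ratio == 60.0      # 60 rt samples per history sample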
# TODO: get native msgspec decoding for these workinn
def to_msg(self) -> dict:
msg = self.to_dict()
msg['mkt'] = self.mkt.to_dict()
# NOTE: pop all un-msg-serializable fields:
# - `tractor.MsgStream`
# - `Feed`
# - `Shmarray`
# it's expected the `.from_msg()` on the other side
# will get instead some kind of msg-compat version
# that it can load.
msg.pop('stream')
msg.pop('feed')
msg.pop('_rt_shm')
msg.pop('_hist_shm')
return msg
@classmethod
def from_msg(
cls,
msg: dict,
readonly: bool = True,
) -> dict:
'''
Load from an IPC msg presumably in either `dict` or
`msgspec.Struct` form.
'''
mkt_msg = msg.pop('mkt')
from ..accounting import MktPair # cycle otherwise..
mkt = MktPair.from_msg(mkt_msg)
msg |= {'_readonly': readonly}
return cls(
mkt=mkt,
**msg,
)
def get_index(
self,
time_s: float,
array: np.ndarray,
) -> int | float:
'''
Return the array shm-buffer index for an epoch time.
'''
times = array['time']
first = np.searchsorted(
times,
time_s,
side='left',
)
imx = times.shape[0] - 1
return min(first, imx)
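The index lookup above reduces to a plain `np.searchsorted()` clamped to the last row, e.g. with fabricated epoch stamps:

import numpy as np

times = np.array([100.0, 101.0, 102.0, 103.0])
first = np.searchsorted(times, 102.3, side='left')   # -> 3
imx = times.shape[0] - 1
assert min(first, imx) == 3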
# only set by external msg or creator, never
# manually!
_has_vlm: bool = True
def has_vlm(self) -> bool:
if not self._has_vlm:
return False
# make sure that the instrument supports volume history
# (sometimes this is not the case for some commodities and
# derivatives)
vlm: np.ndarray = self.rt_shm.array['volume']
return not bool(
np.all(np.isin(vlm, -1))
or np.all(np.isnan(vlm))
)

View File

@ -23,7 +23,7 @@ Api layer likely in here...
from types import ModuleType from types import ModuleType
from importlib import import_module from importlib import import_module
from ..log import get_logger from ._util import get_logger
log = get_logger(__name__) log = get_logger(__name__)

View File

@ -0,0 +1,173 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Tick event stream processing, filter-by-types, format-normalization.
'''
from itertools import chain
from typing import (
Any,
AsyncIterator,
)
# tick-type-classes template for all possible "lowest level" events
# that can can be emitted by the "top of book" L1 queues and
# price-matching (with eventual clearing) in a double auction
# market (queuing) system.
_tick_groups: dict[str, set[str]] = {
'clears': {'trade', 'dark_trade', 'last'},
'bids': {'bid', 'bsize'},
'asks': {'ask', 'asize'},
}
# XXX also define the flattened set of all such "fundamental ticks"
# so that it can be used as filter, eg. in the graphics display
# loop to compute running windowed y-ranges B)
_auction_ticks: set[str] = set.union(*_tick_groups.values())
def frame_ticks(
quote: dict[str, Any],
ticks_by_type: dict | None = None,
ticks_in_order: list[dict[str, Any]] | None = None
) -> dict[
str,
list[dict[str, Any]]
]:
'''
XXX: build a tick-by-type table of lists
of tick messages. This allows for less
iteration on the receiver side by allowing for
a single "latest tick event" look up by
indexing the last entry in each sub-list.
tbt = {
'types': ['bid', 'asize', 'last', .. '<type_n>'],
'bid': [tick0, tick1, tick2, .., tickn],
'asize': [tick0, tick1, tick2, .., tickn],
'last': [tick0, tick1, tick2, .., tickn],
...
'<type_n>': [tick0, tick1, tick2, .., tickn],
}
If `ticks_in_order` is provided, append any retrieved ticks
since last iteration into this array/buffer/list.
'''
# TODO: once we decide to get fancy really we should
# have a shared mem tick buffer that is just
# continually filled and the UI just reads from it
# at its display rate.
tbt = ticks_by_type if ticks_by_type is not None else {}
if not (ticks := quote.get('ticks')):
return tbt
# append in reverse FIFO order for in-order iteration on
# receiver side.
tick: dict[str, Any]
for tick in ticks:
tbt.setdefault(
tick['type'],
[],
).append(tick)
# TODO: do we need this any more or can we just
# expect the receiver to unwind the below
# `ticks_by_type: dict`?
# => unwinding would potentially require a
# `dict[str, set | list]` instead with an
# included `'types' field which is an (ordered)
# set of tick type fields in the order which
# types arrived?
if ticks_in_order:
ticks_in_order.extend(ticks)
return tbt
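A worked example of the tick-by-type table built above (assuming `frame_ticks` is importable from this module); the quote payload is fabricated.

quote = {
    'symbol': 'btcusdt.binance',
    'ticks': [
        {'type': 'bid', 'price': 100.0, 'size': 1.0},
        {'type': 'ask', 'price': 100.1, 'size': 2.0},
        {'type': 'trade', 'price': 100.05, 'size': 0.5},
        {'type': 'trade', 'price': 100.06, 'size': 0.2},
    ],
}
tbt = frame_ticks(quote)
# each type key holds its ticks in arrival order, so the "latest tick
# event" per type is just the last list entry:
assert tbt['trade'][-1]['price'] == 100.06
assert set(tbt) == {'bid', 'ask', 'trade'}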
def iterticks(
quote: dict,
types: tuple[str] = (
'trade',
'dark_trade',
),
deduplicate_darks: bool = False,
reverse: bool = False,
# TODO: should we offer delegating to `frame_ticks()` above
# with this?
frame_by_type: bool = False,
) -> AsyncIterator:
'''
Iterate through ticks delivered per quote cycle, filter and
yield any declared in `types`.
'''
if deduplicate_darks:
assert 'dark_trade' in types
# print(f"{quote}\n\n")
ticks = quote.get('ticks', ())
trades = {}
darks = {}
if ticks:
# do a first pass and attempt to remove duplicate dark
# trades with the same tick signature.
if deduplicate_darks:
for tick in ticks:
ttype = tick.get('type')
time = tick.get('time', None)
if time:
sig = (
time,
tick['price'],
tick.get('size')
)
if ttype == 'dark_trade':
darks[sig] = tick
elif ttype == 'trade':
trades[sig] = tick
# filter duplicates
for sig, tick in trades.items():
tick = darks.pop(sig, None)
if tick:
ticks.remove(tick)
# print(f'DUPLICATE {tick}')
# re-insert ticks
ticks.extend(list(chain(trades.values(), darks.values())))
# most-recent-first
if reverse:
ticks = reversed(ticks)
for tick in ticks:
# print(f"{quote['symbol']}: {tick}")
ttype = tick.get('type')
if ttype in types:
yield tick
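And a quick filtering sketch with `iterticks()` above (same fabricated quote shape):

quote = {
    'symbol': 'btcusdt.binance',
    'ticks': [
        {'type': 'bid', 'price': 100.0, 'size': 1.0},
        {'type': 'trade', 'price': 100.05, 'size': 0.5, 'time': 1700000000},
        {'type': 'dark_trade', 'price': 100.05, 'size': 0.5, 'time': 1700000000},
    ],
}
clears = list(iterticks(quote, types=('trade', 'dark_trade')))
assert len(clears) == 2   # the 'bid' tick is filtered out
# with `deduplicate_darks=True` the duplicate dark print would be dropped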

View File

@ -0,0 +1,265 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Data feed synchronization protocols, init msgs, and general
data-provider-backend-agnostic schema definitions.
'''
from __future__ import annotations
from decimal import Decimal
from pprint import pformat
from types import ModuleType
from typing import (
Any,
Callable,
)
from msgspec import field
from piker.types import Struct
from piker.accounting import (
Asset,
MktPair,
)
from ._util import log
class FeedInitializationError(ValueError):
'''
Live data feed setup failed due to API / msg incompatibility!
'''
class FeedInit(Struct, frozen=True):
'''
A stringent data provider startup msg schema validator.
The fields defined here are matched with those absolutely required
from each backend broker/data provider.
'''
mkt_info: MktPair
# NOTE: only field we use rn in ``.data.feed``
# TODO: maybe make a SamplerConfig(Struct)?
shm_write_opts: dict[str, Any] = field(
default_factory=lambda: {
'has_vlm': True,
'sum_tick_vlm': True,
})
# XXX: we group backend endpoints into 3
# groups to determine "degrees" of functionality.
_eps: dict[str, list[str]] = {
# basic API `Client` layer
'middleware': [
'get_client',
],
# (live) data streaming / loading / search
'datad': [
'get_mkt_info',
'open_history_client',
'open_symbol_search',
'stream_quotes',
],
# live order control and trading
'brokerd': [
'trades_dialogue',
'open_trade_dialog', # live order ctl
'norm_trade', # ledger normalizer for txns
],
}
def validate_backend(
mod: ModuleType,
syms: list[str],
init_msgs: list[FeedInit] | dict[str, dict[str, Any]],
# TODO: do a module method scan and report mismatches.
check_eps: bool = False,
api_log_msg_level: str = 'critical'
) -> FeedInit:
'''
Fail on malformed live quotes feed config/init or warn on changes
that haven't been implemented by this backend yet.
'''
for daemon_name, eps in _eps.items():
for name in eps:
ep: Callable = getattr(
mod,
name,
None,
)
if ep is None:
log.warning(
f'Provider backend {mod.name} is missing '
f'{daemon_name} support :(\n'
f'The following endpoint is missing: {name}'
)
inits: list[
FeedInit | dict[str, Any]
] = init_msgs
# convert to list if from old dict-style
if isinstance(init_msgs, dict):
inits = list(init_msgs.values())
init: FeedInit | dict[str, Any]
for i, init in enumerate(inits):
# XXX: eventually this WILL NOT necessarily be true.
if i > 0:
assert not len(init_msgs) == 1
if isinstance(init_msgs, dict):
keys: set = set(init_msgs.keys()) - set(syms)
raise FeedInitializationError(
'TOO MANY INIT MSGS!\n'
f'Unexpected keys: {keys}\n'
'ALL MSGS:\n'
f'{pformat(init_msgs)}\n'
)
else:
raise FeedInitializationError(
'TOO MANY INIT MSGS!\n'
f'{pformat(init_msgs)}\n'
)
# TODO: once all backends are updated we can remove this branching.
rx_msg: bool = False
warn_msg: str = ''
if not isinstance(init, FeedInit):
warn_msg += (
'\n'
'--------------------------\n'
':::DEPRECATED API STYLE:::\n'
'--------------------------\n'
f'`{mod.name}.stream_quotes()` should deliver '
'`.started(FeedInit)`\n'
f'|-> CURRENTLY it is using DEPRECATED `.started(dict)` style!\n'
f'|-> SEE `FeedInit` in `piker.data.validate`\n'
'--------------------------------------------\n'
)
else:
rx_msg = True
# verify feed init state / schema
bs_fqme: str # backend specific fqme
mkt: MktPair
match init:
# backend is using old dict msg delivery
case {
'symbol_info': dict(symbol_info),
'fqsn': bs_fqme,
} | {
'mkt_info': dict(symbol_info),
'fqsn': bs_fqme,
}:
symbol_info: dict
warn_msg += (
'It may also still be using the legacy `Symbol` style API\n'
'IT SHOULD BE PORTED TO THE NEW '
'`.accounting._mktinfo.MktPair`\n'
'STATTTTT!!!\n'
)
# XXX use default legacy (aka discrete precision) mkt
# price/size_ticks if none delivered.
price_tick = symbol_info.get(
'price_tick_size',
Decimal('0.01'),
)
size_tick = symbol_info.get(
'lot_tick_size',
Decimal('1'),
)
bs_mktid = init.get('bs_mktid') or bs_fqme
mkt = MktPair.from_fqme(
fqme=f'{bs_fqme}.{mod.name}',
price_tick=price_tick,
size_tick=size_tick,
bs_mktid=str(bs_mktid),
_atype=symbol_info['asset_type']
)
# backend is using new `MktPair` but not entirely
case {
'mkt_info': MktPair(
dst=Asset(),
) as mkt,
'fqsn': bs_fqme,
}:
warn_msg += (
f'{mod.name} in API compat transition?\n'
"It's half dict, half man..\n"
'-------------------------------------\n'
)
case FeedInit(
mkt_info=MktPair(dst=Asset()) as mkt,
shm_write_opts=dict(shm_opts),
) as init:
name: str = mod.name
log.info(
f"{name}'s `MktPair` info:\n"
f'{pformat(mkt.to_dict())}\n'
f'shm conf: {pformat(shm_opts)}\n'
)
case _:
raise FeedInitializationError(init)
# build a msg if we received a dict for input.
if not rx_msg:
assert bs_fqme in mkt.fqme
init = FeedInit(
mkt_info=mkt,
shm_write_opts=init.get('shm_write_opts'),
)
# `MktPair` value audits
mkt = init.mkt_info
assert mkt.type_key
# backend is using new `MktPair` but not embedded `Asset` types
# for the .src/.dst..
if not isinstance(mkt.src, Asset):
warn_msg += (
f'ALSO, {mod.name.upper()} should try to deliver\n'
'the new `MktPair.src: Asset` field!\n'
'-----------------------------------------------\n'
)
# complain about any non-idealities
if warn_msg:
# TODO: would be nice to register an API_COMPAT or something in
# maybe cyan for this in general throughout piker no?
logmeth = getattr(log, api_log_msg_level)
logmeth(warn_msg)
return init.copy()
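# --- editor's usage sketch (not part of the diff above) ---
# A hedged example of the new-style startup msg a backend's
# `stream_quotes()` is expected to deliver via `.started(FeedInit)`;
# `Decimal`, `MktPair` and `FeedInit` are the names imported/defined in
# this module above, while the fqme, tick sizes and asset type are made
# up for illustration only.
mkt: MktPair = MktPair.from_fqme(
    fqme='btcusdt.spot.binance',
    price_tick=Decimal('0.01'),
    size_tick=Decimal('0.000001'),
    bs_mktid='BTCUSDT',
    _atype='crypto',
)
init = FeedInit(
    mkt_info=mkt,
    # per-provider sampler hints, see the `shm_write_opts` default above.
    shm_write_opts={'has_vlm': True, 'sum_tick_vlm': False},
)
# the feed layer then audits whatever msg style the backend sent, e.g.:
# validate_backend(mod, syms=['btcusdt.spot.binance'], init_msgs=[init])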

View File

@ -22,17 +22,40 @@ from typing import AsyncIterator
import numpy as np import numpy as np
from ._engine import cascade from ._api import (
maybe_mk_fsp_shm,
Fsp,
)
from ._engine import (
cascade,
Cascade,
)
from ._volume import (
dolla_vlm,
flow_rates,
tina_vwap,
)
__all__ = ['cascade'] __all__: list[str] = [
'cascade',
'Cascade',
'maybe_mk_fsp_shm',
'Fsp',
'dolla_vlm',
'flow_rates',
'tina_vwap',
]
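# --- editor's note (not part of the diff above) ---
# with the expanded `__all__`, downstream code can import the public fsp
# API directly from the subpkg, e.g.:
# from piker.fsp import cascade, Cascade, maybe_mk_fsp_shm, dolla_vlm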
async def latency( async def latency(
source: 'TickStream[Dict[str, float]]', # noqa source: 'TickStream[Dict[str, float]]', # noqa
ohlcv: np.ndarray ohlcv: np.ndarray
) -> AsyncIterator[np.ndarray]: ) -> AsyncIterator[np.ndarray]:
"""Latency measurements, broker to piker. '''
""" Latency measurements, broker to piker.
'''
# TODO: do we want to offer yielding this async # TODO: do we want to offer yielding this async
# before the rt data connection comes up? # before the rt data connection comes up?

View File

@ -78,7 +78,8 @@ class Fsp:
# + the consuming fsp *to* the consumers output # + the consuming fsp *to* the consumers output
# shm flow. # shm flow.
_flow_registry: dict[ _flow_registry: dict[
tuple[_Token, str], _Token, tuple[_Token, str],
tuple[_Token, Optional[ShmArray]],
] = {} ] = {}
def __init__( def __init__(
@ -120,7 +121,6 @@ class Fsp:
): ):
return self.func(*args, **kwargs) return self.func(*args, **kwargs)
# TODO: lru_cache this? pretty sure it'll work?
def get_shm( def get_shm(
self, self,
src_shm: ShmArray, src_shm: ShmArray,
@ -131,12 +131,27 @@ class Fsp:
for this "instance" of a signal processor for for this "instance" of a signal processor for
the given ``key``. the given ``key``.
The destination shm "token" and array are cached if possible to
minimize multiple stdlib/system calls.
''' '''
dst_token = self._flow_registry[ dst_token, maybe_array = self._flow_registry[
(src_shm._token, self.name) (src_shm._token, self.name)
] ]
shm = attach_shm_array(dst_token) if maybe_array is None:
return shm self._flow_registry[
(src_shm._token, self.name)
] = (
dst_token,
# "cache" the ``ShmArray`` such that
# we call the underlying "attach" code as few
# times as possible as per:
# - https://github.com/pikers/piker/issues/359
# - https://github.com/pikers/piker/issues/332
maybe_array := attach_shm_array(dst_token)
)
return maybe_array
def fsp( def fsp(
@ -159,18 +174,10 @@ def fsp(
return Fsp(wrapped, outputs=(wrapped.__name__,)) return Fsp(wrapped, outputs=(wrapped.__name__,))
def mk_fsp_shm_key(
sym: str,
target: Fsp
) -> str:
uid = tractor.current_actor().uid
return f'{sym}.fsp.{target.name}.{".".join(uid)}'
def maybe_mk_fsp_shm( def maybe_mk_fsp_shm(
sym: str, sym: str,
target: Fsp, target: Fsp,
size: int,
readonly: bool = True, readonly: bool = True,
) -> (str, ShmArray, bool): ) -> (str, ShmArray, bool):
@ -179,20 +186,27 @@ def maybe_mk_fsp_shm(
exists, otherwise load the shm already existing for that token. exists, otherwise load the shm already existing for that token.
''' '''
assert isinstance(sym, str), '`sym` should be file-name-friendly `str`' if not isinstance(sym, str):
raise ValueError('`sym: str` should be file-name-friendly')
# TODO: load output types from `Fsp` # TODO: load output types from `Fsp`
# - should `index` be a required internal field? # - should `index` be a required internal field?
fsp_dtype = np.dtype( fsp_dtype = np.dtype(
[('index', int)] + [('index', int)]
+
[('time', float)]
+
[(field_name, float) for field_name in target.outputs] [(field_name, float) for field_name in target.outputs]
) )
key = mk_fsp_shm_key(sym, target) # (attempt to) uniquely key the fsp shm buffers
actor_name, uuid = tractor.current_actor().uid
uuid_snip: str = uuid[:16]
key: str = f'piker.{actor_name}[{uuid_snip}].{sym}.{target.name}'
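# --- editor's note (not part of the diff above) ---
# e.g. for a (made up) actor uid ('fsp_0', '3fa85f64-5717-4562-...')
# and sym 'btcusdt.binance' running the `dolla_vlm` fsp, the key becomes:
# 'piker.fsp_0[3fa85f64-5717-45].btcusdt.binance.dolla_vlm'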
shm, opened = maybe_open_shm_array( shm, opened = maybe_open_shm_array(
key, key,
# TODO: create entry for each time frame size=size,
dtype=fsp_dtype, dtype=fsp_dtype,
readonly=True, readonly=True,
) )

View File

@ -18,41 +18,43 @@
core task logic for processing chains core task logic for processing chains
''' '''
from dataclasses import dataclass from __future__ import annotations
from contextlib import asynccontextmanager as acm
from functools import partial from functools import partial
from typing import ( from typing import (
AsyncIterator, Callable, Optional, AsyncIterator,
Union, Callable,
) )
import numpy as np import numpy as np
import pyqtgraph as pg
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor import tractor
from tractor.msg import NamespacePath from tractor.msg import NamespacePath
from piker.types import Struct
from ..log import get_logger, get_console_log from ..log import get_logger, get_console_log
from .. import data from .. import data
from ..data import attach_shm_array from ..data.feed import (
from ..data.feed import Feed Flume,
Feed,
)
from ..data._sharedmem import ShmArray from ..data._sharedmem import ShmArray
from ..data._source import Symbol from ..data._sampling import (
_default_delay_s,
open_sample_stream,
)
from ..accounting import MktPair
from ._api import ( from ._api import (
Fsp, Fsp,
_load_builtins, _load_builtins,
_Token, _Token,
) )
from ..toolz import Profiler
log = get_logger(__name__) log = get_logger(__name__)
@dataclass
class TaskTracker:
complete: trio.Event
cs: trio.CancelScope
async def filter_quotes_by_sym( async def filter_quotes_by_sym(
sym: str, sym: str,
@ -73,50 +75,190 @@ async def filter_quotes_by_sym(
if quote: if quote:
yield quote yield quote
# TODO: unifying the abstractions in this FSP subsys/layer:
# -[ ] move the `.data.flows.Flume` type into this
# module/subsys/pkg?
# -[ ] ideas for further abstractions as per
# - https://github.com/pikers/piker/issues/216,
# - https://github.com/pikers/piker/issues/270:
# - a (financial signal) ``Flow`` would be a "collection" of such
# minimal cascades. Some engineering-based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
# - https://en.wikipedia.org/wiki/Signal-flow_graph
# - https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
async def fsp_compute( # -[ ] we probably want to eval THE BELOW design and unify with the
# proto `TaskManager` in the `tractor` dev branch as well as with
# our below idea for `Cascade`:
# - https://github.com/goodboy/tractor/pull/363
class Cascade(Struct):
'''
As per sig-proc engineering parlance, this is a chaining of
`Flume`s, which are themselves collections of "Streams"
implemented currently via `ShmArray`s.
symbol: Symbol, A `Cascade` is the minimal "connection" of 2 `Flumes`
feed: Feed, as per circuit parlance:
https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
TODO:
-[ ] could cover the combination of our `FspAdmin` and the
backend `.fsp._engine` related machinery to "connect" one flume
to another?
'''
# TODO: make these `Flume`s
src: Flume
dst: Flume
tn: trio.Nursery
fsp: Fsp # UI-side middleware ctl API
# filled during cascade/.bind_func() (fsp_compute) init phases
bind_func: Callable | None = None
complete: trio.Event | None = None
cs: trio.CancelScope | None = None
client_stream: tractor.MsgStream | None = None
async def resync(self) -> int:
# TODO: adopt an incremental update engine/approach
# where possible here eventually!
log.info(f're-syncing fsp {self.fsp.name} to source')
self.cs.cancel()
await self.complete.wait()
index: int = await self.tn.start(self.bind_func)
# always trigger UI refresh after history update,
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
# ``piker.ui._display.trigger_update()``.
dst_shm: ShmArray = self.dst.rt_shm
await self.client_stream.send({
'fsp_update': {
'key': dst_shm.token,
'first': dst_shm._first.value,
'last': dst_shm._last.value,
}
})
return index
def is_synced(self) -> tuple[bool, int, int]:
'''
Predicate to determine if a destination FSP
output array is aligned to its source array.
'''
src_shm: ShmArray = self.src.rt_shm
dst_shm: ShmArray = self.dst.rt_shm
step_diff = src_shm.index - dst_shm.index
len_diff = abs(len(src_shm.array) - len(dst_shm.array))
synced: bool = not (
# the source is likely backfilling and we must
# sync history calculations
len_diff > 2
# we aren't step synced to the source and may be
# leading/lagging by a step
or step_diff > 1
or step_diff < 0
)
if not synced:
fsp: Fsp = self.fsp
log.warning(
'***DESYNCED FSP***\n'
f'{fsp.ns_path}@{src_shm.token}\n'
f'step_diff: {step_diff}\n'
f'len_diff: {len_diff}\n'
)
return (
synced,
step_diff,
len_diff,
)
async def poll_and_sync_to_step(self) -> int:
synced, step_diff, _ = self.is_synced()
while not synced:
await self.resync()
synced, step_diff, _ = self.is_synced()
return step_diff
@acm
async def open_edge(
self,
bind_func: Callable,
) -> int:
self.bind_func = bind_func
index = await self.tn.start(bind_func)
yield index
# TODO: what do we want on teardown/error?
# -[ ] dynamic reconnection after update?
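# --- editor's usage sketch (not part of the diff above) ---
# schematic of how the engine further below drives a `Cascade` per
# sample-step; `casc` and `istream` stand in for the instances built
# inside `cascade()`/`connect_streams()`, so nothing here is new API.
async def _drive(casc: Cascade, istream) -> None:
    async for _ in istream:
        # respawn the history compute if the src flume was prepended
        # (backfilled) or we drifted by more than one step.
        synced, step_diff, _len_diff = casc.is_synced()
        if not synced:
            step_diff = await casc.poll_and_sync_to_step()
        if step_diff == 0:
            # already step aligned, nothing to append this round.
            continue
        # ... then copy/push the last dst row as in `connect_streams()`.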
async def connect_streams(
casc: Cascade,
mkt: MktPair,
quote_stream: trio.abc.ReceiveChannel, quote_stream: trio.abc.ReceiveChannel,
src: Flume,
dst: Flume,
src: ShmArray, edge_func: Callable,
dst: ShmArray,
func: Callable,
# attach_stream: bool = False, # attach_stream: bool = False,
task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED, task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,
) -> None: ) -> None:
'''
Stream and per-sample compute and write the cascade of
2 `Flumes`/streams given some operating `func`.
profiler = pg.debug.Profiler( https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
Not literally, but something like:
edge_func(Flume_in) -> Flume_out
'''
profiler = Profiler(
delayed=False, delayed=False,
disabled=True disabled=True
) )
fqsn = symbol.front_fqsn() # TODO: just pull it from src.mkt.fqme no?
out_stream = func( # fqme: str = mkt.fqme
fqme: str = src.mkt.fqme
# TODO: dynamic introspection of what the underlying (vertex)
# function actually requires from input node (flumes) then
# deliver those inputs as part of a graph "compilation" step?
out_stream = edge_func(
# TODO: do we even need this if we do the feed api right? # TODO: do we even need this if we do the feed api right?
# shouldn't a local stream do this before we get a handle # shouldn't a local stream do this before we get a handle
# to the async iterable? it's that or we do some kinda # to the async iterable? it's that or we do some kinda
# async itertools style? # async itertools style?
filter_quotes_by_sym(fqsn, quote_stream), filter_quotes_by_sym(fqme, quote_stream),
# XXX: currently the ``ohlcv`` arg # XXX: currently the ``ohlcv`` arg, but we should allow
feed.shm, # (dynamic) requests for src flume (node) streams?
src.rt_shm,
) )
# Conduct a single iteration of fsp with historical bars input # HISTORY COMPUTE PHASE
# and get historical output # conduct a single iteration of fsp with historical bars input
history_output: Union[ # and get historical output.
dict[str, np.ndarray], # multi-output case history_output: (
np.ndarray, # single output case dict[str, np.ndarray] # multi-output case
] | np.ndarray, # single output case
history_output = await out_stream.__anext__() )
history_output = await anext(out_stream)
func_name = func.__name__ func_name = edge_func.__name__
profiler(f'{func_name} generated history') profiler(f'{func_name} generated history')
# build struct array with an 'index' field to push as history # build struct array with an 'index' field to push as history
@ -124,11 +266,17 @@ async def fsp_compute(
# TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no? # TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no?
# if the output array is multi-field then push # if the output array is multi-field then push
# each respective field. # each respective field.
fields = getattr(dst.array.dtype, 'fields', None).copy() dst_shm: ShmArray = dst.rt_shm
fields = getattr(dst_shm.array.dtype, 'fields', None).copy()
fields.pop('index') fields.pop('index')
history: Optional[np.ndarray] = None # TODO: nptyping here! history_by_field: np.ndarray | None = None
src_shm: ShmArray = src.rt_shm
src_time = src_shm.array['time']
if fields and len(fields) > 1 and fields: if (
fields and
len(fields) > 1
):
if not isinstance(history_output, dict): if not isinstance(history_output, dict):
raise ValueError( raise ValueError(
f'`{func_name}` is a multi-output FSP and should yield a ' f'`{func_name}` is a multi-output FSP and should yield a '
@ -139,25 +287,25 @@ async def fsp_compute(
if key in history_output: if key in history_output:
output = history_output[key] output = history_output[key]
if history is None: if history_by_field is None:
if output is None: if output is None:
length = len(src.array) length = len(src_shm.array)
else: else:
length = len(output) length = len(output)
# using the first output, determine # using the first output, determine
# the length of the struct-array that # the length of the struct-array that
# will be pushed to shm. # will be pushed to shm.
history = np.zeros( history_by_field = np.zeros(
length, length,
dtype=dst.array.dtype dtype=dst_shm.array.dtype
) )
if output is None: if output is None:
continue continue
history[key] = output history_by_field[key] = output
# single-key output stream # single-key output stream
else: else:
@ -166,11 +314,15 @@ async def fsp_compute(
f'`{func_name}` is a single output FSP and should yield an ' f'`{func_name}` is a single output FSP and should yield an '
'`np.ndarray` for history' '`np.ndarray` for history'
) )
history = np.zeros( history_by_field = np.zeros(
len(history_output), len(history_output),
dtype=dst.array.dtype dtype=dst_shm.array.dtype
) )
history[func_name] = history_output history_by_field[func_name] = history_output
history_by_field['time'] = src_time[-len(history_by_field):]
history_output['time'] = src_shm.array['time']
# TODO: XXX: # TODO: XXX:
# THERE'S A BIG BUG HERE WITH THE `index` field since we're # THERE'S A BIG BUG HERE WITH THE `index` field since we're
@ -183,11 +335,14 @@ async def fsp_compute(
# is `index` aware such that historical data can be indexed # is `index` aware such that historical data can be indexed
# relative to the true first datum? Not sure if this is sane # relative to the true first datum? Not sure if this is sane
# for incremental computations. # for incremental computations.
first = dst._first.value = src._first.value first = dst_shm._first.value = src_shm._first.value
# TODO: can we use this `start` flag instead of the manual # TODO: can we use this `start` flag instead of the manual
# setting above? # setting above?
index = dst.push(history, start=first) index = dst_shm.push(
history_by_field,
start=first,
)
profiler(f'{func_name} pushed history') profiler(f'{func_name} pushed history')
profiler.finish() profiler.finish()
@ -195,12 +350,9 @@ async def fsp_compute(
# setup a respawn handle # setup a respawn handle
with trio.CancelScope() as cs: with trio.CancelScope() as cs:
# TODO: might be better to just make a "restart" method where casc.cs = cs
# the target task is spawned implicitly and then the event is casc.complete = trio.Event()
# set via some higher level api? At that point we might as well casc.complete = trio.Event()
# be writing a one-cancels-one nursery though right?
tracker = TaskTracker(trio.Event(), cs)
task_status.started((tracker, index))
profiler(f'{func_name} yield last index') profiler(f'{func_name} yield last index')
@ -213,8 +365,14 @@ async def fsp_compute(
log.debug(f"{func_name}: {processed}") log.debug(f"{func_name}: {processed}")
key, output = processed key, output = processed
index = src.index # dst.array[-1][key] = output
dst.array[-1][key] = output dst_shm.array[[key, 'time']][-1] = (
output,
# TODO: what about pushing ``time.time_ns()``
# in which case we'll need to round at the graphics
# processing / sampling layer?
src_shm.array[-1]['time']
)
# NOTE: for now we aren't streaming this to the consumer # NOTE: for now we aren't streaming this to the consumer
# stream latest array index entry which basically just acts # stream latest array index entry which basically just acts
@ -225,6 +383,7 @@ async def fsp_compute(
# N-consumers who subscribe for the real-time output, # N-consumers who subscribe for the real-time output,
# which we'll likely want to implement using local-mem # which we'll likely want to implement using local-mem
# chans for the fan out? # chans for the fan out?
# index = src_shm.index
# if attach_stream: # if attach_stream:
# await client_stream.send(index) # await client_stream.send(index)
@ -234,7 +393,7 @@ async def fsp_compute(
# log.info(f'FSP quote too fast: {hz}') # log.info(f'FSP quote too fast: {hz}')
# last = time.time() # last = time.time()
finally: finally:
tracker.complete.set() casc.complete.set()
@tractor.context @tractor.context
@ -243,17 +402,17 @@ async def cascade(
ctx: tractor.Context, ctx: tractor.Context,
# data feed key # data feed key
fqsn: str, fqme: str,
src_shm_token: dict,
dst_shm_token: tuple[str, np.dtype],
# flume pair cascaded using an "edge function"
src_flume_addr: dict,
dst_flume_addr: dict,
ns_path: NamespacePath, ns_path: NamespacePath,
shm_registry: dict[str, _Token], shm_registry: dict[str, _Token],
zero_on_step: bool = False, zero_on_step: bool = False,
loglevel: Optional[str] = None, loglevel: str | None = None,
) -> None: ) -> None:
''' '''
@ -261,7 +420,7 @@ async def cascade(
destination shm array buffer. destination shm array buffer.
''' '''
profiler = pg.debug.Profiler( profiler = Profiler(
delayed=False, delayed=False,
disabled=False disabled=False
) )
@ -269,8 +428,14 @@ async def cascade(
if loglevel: if loglevel:
get_console_log(loglevel) get_console_log(loglevel)
src = attach_shm_array(token=src_shm_token) src: Flume = Flume.from_msg(src_flume_addr)
dst = attach_shm_array(readonly=False, token=dst_shm_token) dst: Flume = Flume.from_msg(
dst_flume_addr,
readonly=False,
)
# src: ShmArray = attach_shm_array(token=src_shm_token)
# dst: ShmArray = attach_shm_array(readonly=False, token=dst_shm_token)
reg = _load_builtins() reg = _load_builtins()
lines = '\n'.join([f'{key.rpartition(":")[2]} => {key}' for key in reg]) lines = '\n'.join([f'{key.rpartition(":")[2]} => {key}' for key in reg])
@ -278,28 +443,33 @@ async def cascade(
f'Registered FSP set:\n{lines}' f'Registered FSP set:\n{lines}'
) )
# update actorlocal flows table which registers # NOTE XXX: update actorlocal flows table which registers
# readonly "instances" of this fsp for symbol/source # readonly "instances" of this fsp for symbol/source so that
# so that consumer fsps can look it up by source + fsp. # consumer fsps can look it up by source + fsp.
# TODO: ugh i hate this wind/unwind to list over the wire # TODO: ugh i hate this wind/unwind to list over the wire but
# but not sure how else to do it. # not sure how else to do it.
for (token, fsp_name, dst_token) in shm_registry: for (token, fsp_name, dst_token) in shm_registry:
Fsp._flow_registry[ Fsp._flow_registry[(
(_Token.from_msg(token), fsp_name) _Token.from_msg(token),
] = _Token.from_msg(dst_token) fsp_name,
)] = _Token.from_msg(dst_token), None
fsp: Fsp = reg.get( fsp: Fsp = reg.get(
NamespacePath(ns_path) NamespacePath(ns_path)
) )
func = fsp.func func: Callable = fsp.func
if not func: if not func:
# TODO: assume it's a func target path # TODO: assume it's a func target path
raise ValueError(f'Unknown fsp target: {ns_path}') raise ValueError(f'Unknown fsp target: {ns_path}')
_fqme: str = src.mkt.fqme
assert _fqme == fqme
# open a data feed stream with requested broker # open a data feed stream with requested broker
feed: Feed
async with data.feed.maybe_open_feed( async with data.feed.maybe_open_feed(
[fqsn], [fqme],
# TODO throttle tick outputs from *this* daemon since # TODO throttle tick outputs from *this* daemon since
# it'll emit tons of ticks due to the throttle only # it'll emit tons of ticks due to the throttle only
@ -307,154 +477,144 @@ async def cascade(
# needs to get throttled the ticks we generate. # needs to get throttled the ticks we generate.
# tick_throttle=60, # tick_throttle=60,
) as (feed, quote_stream): ) as feed:
symbol = feed.symbols[fqsn]
flume: Flume = feed.flumes[fqme]
# XXX: can't do this since flume.feed will be set XD
# assert flume == src
assert flume.mkt == src.mkt
mkt: MktPair = flume.mkt
# NOTE: FOR NOW, sanity checks around the feed as being
# always the src flume (until we get to fancier/lengthier
# chains/graphs.
assert src.rt_shm.token == flume.rt_shm.token
# XXX: won't work bc the _hist_shm_token value will be
# list[list] after IPC..
# assert flume.to_msg() == src_flume_addr
profiler(f'{func}: feed up') profiler(f'{func}: feed up')
assert src.token == feed.shm.token func_name: str = func.__name__
# last_len = new_len = len(src.array)
func_name = func.__name__
async with ( async with (
trio.open_nursery() as n, trio.open_nursery() as tn,
): ):
# TODO: might be better to just make a "restart" method where
# the target task is spawned implicitly and then the event is
# set via some higher level api? At that point we might as well
# be writing a one-cancels-one nursery though right?
casc = Cascade(
src,
dst,
tn,
fsp,
)
# TODO: this seems like it should be wrapped somewhere?
fsp_target = partial( fsp_target = partial(
connect_streams,
casc=casc,
mkt=mkt,
quote_stream=flume.stream,
fsp_compute, # flumes and shm passthrough
symbol=symbol,
feed=feed,
quote_stream=quote_stream,
# shm
src=src, src=src,
dst=dst, dst=dst,
# target # chain function which takes src flume input(s)
func=func # and renders dst flume output(s)
edge_func=func
) )
async with casc.open_edge(
bind_func=fsp_target,
) as index:
# casc.bind_func = fsp_target
# index = await tn.start(fsp_target)
dst_shm: ShmArray = dst.rt_shm
src_shm: ShmArray = src.rt_shm
tracker, index = await n.start(fsp_target) if zero_on_step:
last = dst.rt_shm.array[-1:]
zeroed = np.zeros(last.shape, dtype=last.dtype)
if zero_on_step: profiler(f'{func_name}: fsp up')
last = dst.array[-1:]
zeroed = np.zeros(last.shape, dtype=last.dtype)
profiler(f'{func_name}: fsp up') # sync to client-side actor
await ctx.started(index)
# sync client # XXX: rt stream with client which we MUST
await ctx.started(index) # open here (and keep it open) in order to make
# incremental "updates" as history prepends take
# place.
async with ctx.open_stream() as client_stream:
casc.client_stream: tractor.MsgStream = client_stream
# XXX: rt stream with client which we MUST s, step, ld = casc.is_synced()
# open here (and keep it open) in order to make
# incremental "updates" as history prepends take
# place.
async with ctx.open_stream() as client_stream:
# TODO: these likely should all become # detect sample period step for subscription to increment
# methods of this ``TaskLifetime`` or wtv # signal
# abstraction.. times = src.rt_shm.array['time']
async def resync( if len(times) > 1:
tracker: TaskTracker, last_ts = times[-1]
delay_s: float = float(last_ts - times[times != last_ts][-1])
else:
# our default "HFT" sample rate.
delay_s: float = _default_delay_s
) -> tuple[TaskTracker, int]: # sub and increment the underlying shared memory buffer
# TODO: adopt an incremental update engine/approach # on every step msg received from the global `samplerd`
# where possible here eventually! # service.
log.debug(f're-syncing fsp {func_name} to source') async with open_sample_stream(
tracker.cs.cancel() float(delay_s)
await tracker.complete.wait() ) as istream:
tracker, index = await n.start(fsp_target)
# always trigger UI refresh after history update, profiler(f'{func_name}: sample stream up')
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and profiler.finish()
# ``piker.ui._display.trigger_update()``.
await client_stream.send({
'fsp_update': {
'key': dst_shm_token,
'first': dst._first.value,
'last': dst._last.value,
}})
return tracker, index
def is_synced( async for i in istream:
src: ShmArray, # print(f'FSP incrementing {i}')
dst: ShmArray
) -> tuple[bool, int, int]:
'''Predicate to determine if a destination FSP
output array is aligned to its source array.
''' # respawn the compute task if the source
step_diff = src.index - dst.index # array has been updated such that we compute
len_diff = abs(len(src.array) - len(dst.array)) # new history from the (prepended) source.
return not ( synced, step_diff, _ = casc.is_synced()
# the source is likely backfilling and we must if not synced:
# sync history calculations step_diff: int = await casc.poll_and_sync_to_step()
len_diff > 2 or
# we aren't step synced to the source and may be # skip adding a last bar since we should already
# leading/lagging by a step # be step alinged
step_diff > 1 or if step_diff == 0:
step_diff < 0 continue
), step_diff, len_diff
async def poll_and_sync_to_step( # read out last shm row, copy and write new row
array = dst_shm.array
tracker: TaskTracker, # some metrics like vlm should be reset
src: ShmArray, # to zero every step.
dst: ShmArray, if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
) -> tuple[TaskTracker, int]: dst.rt_shm.push(last)
synced, step_diff, _ = is_synced(src, dst) # sync with source buffer's time step
while not synced: src_l2 = src_shm.array[-2:]
tracker, index = await resync(tracker) src_li, src_lt = src_l2[-1][['index', 'time']]
synced, step_diff, _ = is_synced(src, dst) src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst_shm._array['time'][src_li] = src_lt
dst_shm._array['time'][src_2li] = src_2lt
return tracker, step_diff # last2 = dst.array[-2:]
# if (
s, step, ld = is_synced(src, dst) # last2[-1]['index'] != src_li
# or last2[-2]['index'] != src_2li
# detect sample period step for subscription to increment # ):
# signal # dstl2 = list(last2)
times = src.array['time'] # srcl2 = list(src_l2)
delay_s = times[-1] - times[times != times[-1]][-1] # print(
# # f'{dst.token}\n'
# Increment the underlying shared memory buffer on every # f'src: {srcl2}\n'
# "increment" msg received from the underlying data feed. # f'dst: {dstl2}\n'
async with feed.index_stream( # )
int(delay_s)
) as istream:
profiler(f'{func_name}: sample stream up')
profiler.finish()
async for _ in istream:
# respawn the compute task if the source
# array has been updated such that we compute
# new history from the (prepended) source.
synced, step_diff, _ = is_synced(src, dst)
if not synced:
tracker, step_diff = await poll_and_sync_to_step(
tracker,
src,
dst,
)
# skip adding a last bar since we should already
# be step aligned
if step_diff == 0:
continue
# read out last shm row, copy and write new row
array = dst.array
# some metrics like vlm should be reset
# to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
dst.push(last)

View File

@ -24,7 +24,7 @@ import numpy as np
from numba import jit, float64, optional, int64 from numba import jit, float64, optional, int64
from ._api import fsp from ._api import fsp
from ..data._normalize import iterticks from ..data import iterticks
from ..data._sharedmem import ShmArray from ..data._sharedmem import ShmArray

View File

@ -20,7 +20,7 @@ import numpy as np
from tractor.trionics._broadcast import AsyncReceiver from tractor.trionics._broadcast import AsyncReceiver
from ._api import fsp from ._api import fsp
from ..data._normalize import iterticks from ..data import iterticks
from ..data._sharedmem import ShmArray from ..data._sharedmem import ShmArray
from ._momo import _wma from ._momo import _wma
from ..log import get_logger from ..log import get_logger
@ -234,7 +234,7 @@ async def flow_rates(
# FSPs, user input, and possibly any general event stream in # FSPs, user input, and possibly any general event stream in
# real-time. Hint: ideally implemented with caching until mutated # real-time. Hint: ideally implemented with caching until mutated
# ;) # ;)
period: 'Param[int]' = 6, # noqa period: 'Param[int]' = 1, # noqa
# TODO: support other means by providing a map # TODO: support other means by providing a map
# to weights `partial()`-ed with `wma()`? # to weights `partial()`-ed with `wma()`?
@ -268,8 +268,7 @@ async def flow_rates(
'dark_dvlm_rate': None, 'dark_dvlm_rate': None,
} }
# TODO: 3.10 do ``anext()`` quote = await anext(source)
quote = await source.__anext__()
# ltr = 0 # ltr = 0
# lvr = 0 # lvr = 0

Some files were not shown because too many files have changed in this diff.