Compare commits


606 Commits

Author SHA1 Message Date
goodboy 5e371f1d73 Merge pull request 'jsonrpc_err_in_rent' (#41) from jsonrpc_err_in_rent into gitea_feats
Reviewed-on: #41
2025-02-21 21:21:02 +00:00
goodboy 6c221bb348 Merge pull request 'tsp_gaps: fixes for fault-less OHLCV time-series loads' (#35) from tsp_gaps into gitea_feats
Reviewed-on: #35
2025-02-21 20:46:37 +00:00
Tyler Goodlet e391c896f8 Mk jsonrpc's underlying ws timeout `float('inf')`
Since currently we're only using this IPC subsys for `deribit`, and
generally speaking we're primarily supporting options markets (which are
fairly "slow moving"), flip to a default of NOT resetting the `NoBsWs`
on timeout since doing so normally breaks the json-rpc IPC session.
Without a proper `fixture` passed to `open_autorecon_ws()` (which we
should eventually implement!!) relying on a timeout-to-reset more or
less will just cause breakage issues - a proper reconnect sequence must
be implemented before using that feature.

Deats,
- expose and proxy through the `msg_recv_timeout` from
  `open_jsonrpc_session()` into the underlying `open_autorecon_ws()`
  call.
2025-02-19 17:05:13 -05:00
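
The timeout plumbing described in the commit above boils down to forwarding a kwarg through nested async context managers; a minimal sketch with illustrative signatures (not piker's actual ones):

```python
# Hypothetical sketch of proxying a `msg_recv_timeout` through nested
# async context managers; names/signatures are illustrative only.
from contextlib import asynccontextmanager as acm

import trio


@acm
async def open_autorecon_ws(
    url: str,
    msg_recv_timeout: float = float('inf'),  # never reset by default
):
    # stand-in for the real websocket wrapper: only the timeout
    # plumbing is shown here.
    yield {'url': url, 'timeout': msg_recv_timeout}


@acm
async def open_jsonrpc_session(
    url: str,
    msg_recv_timeout: float = float('inf'),
):
    # proxy the caller's timeout straight through to the underlying
    # ws layer so a json-rpc session is never implicitly reset.
    async with open_autorecon_ws(
        url,
        msg_recv_timeout=msg_recv_timeout,
    ) as ws:
        yield ws


async def main():
    async with open_jsonrpc_session('wss://example.com/api') as ws:
        print(ws)

trio.run(main)
```
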
Tyler Goodlet 5633f5614d Doc-n-clean `.data._web_bs.open_jsonrpc_session()`
Add a doc-string reflecting recent refinements, drop all the old hook
params, rename `n: trio.Nursery` -> `tn` for "task nursery" fitting with
code base's naming style.
2025-02-19 17:05:13 -05:00
Tyler Goodlet 76735189de data._web_bs: try to raise jsonrpc errors in parent task 2025-02-19 17:05:13 -05:00
Tyler Goodlet d49608f74e Refine history gap/termination signalling
Namely handling backends which do not provide a default "frame
size-duration" in their init-config by making the backfiller guess the
value based on the first frame received.

Deats,
- adjust `start_backfill()` to take a more explicit
  `def_frame_duration: Duration` expected to be unpacked from any
  backend hist init-config by the `tsdb_backfill()` caller which now
  also computes a value from the first received frame when the config
  section isn't provided.
- in `start_backfill()` we now always expect the `def_frame_duration`
  input and always decrement the query range by this value whenever
  a `NoData` is raised by the provider-backend paired with an explicit
  `log.warning()` about the handling.
- also relay any `DataUnavailable.args[0]` message from the provider
  in the handler.
- repair "gap reporting" which checks for expected frame duration vs.
  that received with much better humanized logging on the missing
  segment using `pendulum.Interval/Duration.in_words()` output.
2025-02-19 17:01:24 -05:00
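
The fallback described in the commit above (derive a frame duration from the first received frame when the backend's init-config omits one, then step the query's `end_dt` back by that amount on `NoData`) reduces to something like this sketch; the function names here are illustrative, not piker's:

```python
# Illustrative sketch only: guess a backend's frame duration from the
# first received frame's epoch timestamps, then step the next query's
# end-datetime backwards by that duration whenever `NoData` is raised.
import numpy as np
import pendulum


def guess_frame_duration(first_frame_times: np.ndarray) -> pendulum.Duration:
    # total time span covered by the first frame, in seconds
    span_s = float(first_frame_times[-1] - first_frame_times[0])
    return pendulum.duration(seconds=span_s)


def next_end_dt(
    end_dt: pendulum.DateTime,
    def_frame_duration: pendulum.Duration,
) -> pendulum.DateTime:
    # on `NoData`, decrement the query range by one frame's duration
    return end_dt - def_frame_duration


times = np.array([1.7e9, 1.7e9 + 60, 1.7e9 + 120])  # three 1m bars
dur = guess_frame_duration(times)
print(next_end_dt(pendulum.now('UTC'), dur))
```
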
Tyler Goodlet bf0ac93aa3 Only use `frame_types` if delivered during enter
The `open_history_client()` provider endpoint can *optionally*
deliver a `frame_types: dict[int, pendulum.Duration]` subsection in its
`config: dict[str, dict]` (as was implemented with the `ib` backend).
This allows the `tsp` backfilling machinery to use this "recommended
frame duration" to subtract from the `last_start_dt` any time a `NoData`
gap is signalled by the `get_hist()` call allowing gaps to be ignored
safely without missing history by knowing the next earliest dt we can
query from using the `end_dt`. However, currently all crypto$ providers
haven't implemented this feat yet..

As such only try to use the `frame_types` feature if provided when
handling `NoData` conditions inside `tsp.start_backfill()` and otherwise
raise as normal.
2025-02-19 17:01:24 -05:00
Tyler Goodlet d7179d47b0 `.tsp._anal`: add (unused) `detect_vlm_gaps()` 2025-02-19 17:01:24 -05:00
Tyler Goodlet c390e87536 `.storage.cli`: collect gap-markup-aids into `tf2aids: dict` prior to pause for introspection 2025-02-19 17:01:24 -05:00
Tyler Goodlet 5e4a6d61c7 Ignore any non-`.parquet` files under `.config/piker/nativedb/` subdir 2025-02-19 17:01:24 -05:00
Tyler Goodlet 3caaa30b03 Mask no-data pause, add perps to no-`/src`-in-fqme asset set
Was orig for debugging an issue with `kucoin` i think but definitely
shouldn't be left in XD

Also add `'perpetual_future'` to the `.start_backfill()` input literal
set since we don't expect the 'btc/usd.perp.binance' for now.
2025-02-19 17:01:24 -05:00
goodboy 1e3942fdc2 Merge pull request 'Add `ruff` to deps, bump `uv.lock`' (#32) from add_ruff_linter into gitea_feats
Reviewed-on: #32
2025-02-17 19:55:32 +00:00
Nelson Torres 49ea380503 Add new `ruff.toml` config file
Based on the default provided in their
[docs](https://docs.astral.sh/ruff/configuration/) and migrating
previous config from the prior `poetry`-version of our `pyproject.toml`
2025-02-17 14:48:10 -05:00
Tyler Goodlet 933f169938 Add/reorg back some content from `poetry` old config 2025-02-14 13:47:02 -05:00
Nelson Torres 51337052a4 Remove legacy `poetry` config content from pyproject.toml 2025-02-14 15:01:29 -03:00
Tyler Goodlet 8abe55dcc6 Add `ruff` to deps, bump `uv.lock`
Such that we start encouraging devs to lint code they touch and
hopefully we include a pass as part of our tests/CI eventually B)

Also, mk local `tractor` install `--editable` since without it being
a locally hackable repo it's kinda pointless to install from the local
fs Xp
2025-02-13 21:20:11 -05:00
goodboy c933f2ad56 Merge pull request 'kucoin_and_binance_fix' (#9) from kucoin_and_binance_fix into gitea_feats
Reviewed-on: #9
2025-02-13 19:40:50 +00:00
Tyler Goodlet 00108010c9 Mask `pytest` detection block in `piker.config`
Seems to be some kinda super weird env bug since we moved to using
`uv`? When it triggers it also seems to cause a pretty fundamental crash
that not only breaks `tractor.devx._debug` stuff but also seems to get
us in a perma-hang state where no SIGINT or other sys sig will be able
to kill the root proc!?!?

TODO, a `gitea` issue to track so we can fix the fundamental problem as
well as transitive fault in `tractor`'s core which seems to be due to
the error taking place during a sub-actor's module import phase which
prevents the runtime from booting fully and then the proc getting stuck
in a real gnarly SIG-state..
2025-02-13 13:32:11 -05:00
Tyler Goodlet 8a4901c517 `.binance.feed`: moar type fixes, drop `rapidfuzz` 2025-02-13 12:35:41 -05:00
Tyler Goodlet d7f6a5ab63 Update to latest `KucoinMktPair` spec 2025-02-12 18:08:40 -05:00
Tyler Goodlet e0fdabf651 Use `../tractor` srcs in editable mode? 2025-02-12 15:14:30 -05:00
Tyler Goodlet cb88dfc9da `kucoin`: repair live quotes streaming..
This must have broken at some point during the new `MktPair` and thus
`.fqme: str` updates; more or less the symbol key in the quote-msg-`dict`
was NOT set to the `MktPair.bs_fqme: str` value and thus wasn't being
processed by the downstream sampling and feed subsys.

So fix that as well as a few other refinements,
- set the `topic: mkt.bs_fqme` in quote msgs obvi.
- drop the "wait for first clearing vlm" quote poll loop; going to fix
  the sampler to handle a `first_quote` without a `'last'` key.
- add some typing around calls to `get_mkt_info()`.
- rename `stream_messages()` -> `iter_normed_quotes()`.
2025-02-12 15:05:22 -05:00
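
The crux of the fix above is keying every outgoing quote by the market's broker-specific fqme so the sampler routes it; a simplified stand-in for such a normalizer (not the actual kucoin code):

```python
# Simplified stand-in for a quote normalizer that keys each msg by the
# market's broker-specific fqme so downstream sampling routes it.
from typing import AsyncIterator


async def iter_normed_quotes(
    ws_msgs: AsyncIterator[dict],
    bs_fqme: str,  # e.g. 'btcusdt.kucoin' (illustrative)
) -> AsyncIterator[tuple[str, dict]]:
    async for msg in ws_msgs:
        quote = {
            'symbol': bs_fqme,
            'last': float(msg.get('price', 0)),
            'volume': float(msg.get('size', 0)),
        }
        # the topic key MUST be the broker-specific fqme, otherwise the
        # sampler/feed layer silently drops the quote.
        yield bs_fqme, quote
```
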
Nelson Torres bb41dd6d18 Deleted settlePlan field from binance FutesPair. 2025-02-12 15:05:22 -05:00
Nelson Torres 99e90129ad Added missing fields for kucoin.
feeCategory, makerFeeCoefficient, takerFeeCoefficient and st.
2025-02-12 15:05:22 -05:00
Tyler Goodlet cceb7a37b9 Lel, forgot to add a `SPOT` venue for `binance`.. 2025-02-12 15:05:22 -05:00
goodboy 5382815b2d Merge pull request 'uv migration and default.nix for qt6' (#17) from uv_migration into gitea_feats
Reviewed-on: #17
2025-02-12 20:04:02 +00:00
Tyler Goodlet cb1ba8a05f Further root readme bump, factor `.clearing` content
In line with our move to `uv` and recent `nix` support update a bunch of
the summary content and factor out the order-control section to a new
`.piker.clearing` readme file with embedded todos therein.
2025-02-12 15:01:51 -05:00
Nelson Torres 6c65ec4d3b Readme update:
- replace poetry with uv
- update nix-shell command
2025-02-12 13:05:25 -03:00
Nelson Torres 12e371b027 uv migration 2025-02-12 11:19:34 -03:00
goodboy 014bd58db4 Merge pull request 'go_httpx' (#2) from go_httpx into gitea_feats
Landed-in: #2
2025-02-12 13:01:19 +00:00
Tyler Goodlet 844544ed8e Port binance to `httpx`
Like other backends use the `AsyncClient` for all venue specific
client-sessions but change to allocating them inside `get_client()`
using an `AsyncExitStack` and inserting directly in the
`Client.venue_sesh: dict` table during init.

Supporting impl tweaks:
- remove most of the API client session building logic and instead make
  `Client.__init__()` take in a `venue_sessions: dict` (set it to
  `.venue_sesh`) and `conf: dict`, instead opting to do the http client
  configuration inside `get_client()` since all that code only needs to
  be run once.
 |_load config inside `get_client()` once.
 |_move session token creation into a new util func `init_api_keys()` and
  also call it from `get_client()` factory; toss in an ex. toml section
  config to the doc string.
- define `_venue_urls: dict[str, str]` (content taken from old static
  `.venue_sesh` dict) at module level and feed them as `base_url: str`
  inputs to the client create loop.
- adjust all call sigs in httpx-sesh-using methods, namely just
  `._api()`.
- do a `.exch_info()` call in `get_client()` to cache the symbology
  set.

Unrelated changes for various other outstanding buggers:
- to get futures feeds correctly loading when selected
  from search (like 'XMRUSDT.USDTM.PERP'), expect a `MktPair` input to
  `Client.bars()` such that the exact venue-key can be looked up (via
  a new `.pair2venuekey()` meth) and then passed to `._api()`.
- adjust `.broker.open_trade_dialog()` to failover to paper engine when
  there's no `api_key` key set for the `subconf` venue-key.
2025-02-11 16:27:28 -05:00
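
The session-allocation pattern described above (one `httpx.AsyncClient` per venue, opened in `get_client()` via an `AsyncExitStack` and stashed in a dict) looks roughly like this sketch; the venue URLs and `Client` shape are illustrative, not binance's actual config:

```python
# Rough sketch: allocate per-venue httpx sessions inside the client
# factory using an AsyncExitStack; URLs and the Client shape here are
# illustrative, not binance's actual endpoints/config.
from contextlib import asynccontextmanager as acm, AsyncExitStack

import httpx

_venue_urls: dict[str, str] = {
    'spot': 'https://api.example.com',
    'usdtm_futes': 'https://fapi.example.com',
}


class Client:
    def __init__(self, venue_sessions: dict[str, httpx.AsyncClient]):
        self.venue_sesh = venue_sessions

    async def _api(self, venue: str, path: str) -> dict:
        resp: httpx.Response = await self.venue_sesh[venue].get(path)
        resp.raise_for_status()
        return resp.json()


@acm
async def get_client():
    async with AsyncExitStack() as stack:
        sessions = {
            venue: await stack.enter_async_context(
                httpx.AsyncClient(base_url=url)
            )
            for venue, url in _venue_urls.items()
        }
        yield Client(sessions)
```
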
Nelson Torres f479252d26 Added note to exception when missing field in SpotPair class 2025-02-11 16:27:28 -05:00
Nelson Torres 033ef2e35e Added new fields to SpotPair class in venues 2025-02-11 16:27:28 -05:00
Tyler Goodlet 2cdece244c binance: raise `NoData` on null hist arrays
Like we do with other history backends to indicate lack of a data set.
This avoids any raise that will bring down the backloader task with
some downstream error.

Raise a `ValueError` on no time index for now.
2025-02-11 16:27:28 -05:00
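
A tiny sketch of that guard, with a stand-in `NoData` class rather than the real `piker.brokers._util` exception:

```python
# Minimal guard sketch; `NoData` here stands in for the real
# `piker.brokers._util.NoData` exception type.
import numpy as np


class NoData(Exception):
    '''No history returned for the requested range.'''


def check_hist_frame(array: np.ndarray) -> np.ndarray:
    if array.size == 0:
        # signal "no data set" instead of crashing the backloader
        raise NoData('empty history frame')

    if 'time' not in (array.dtype.names or ()):
        raise ValueError('frame has no time index')

    return array
```
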
Tyler Goodlet 018694bbdb Woops, `data` can be an empty list XD 2025-02-11 16:27:28 -05:00
Tyler Goodlet 128a2d507f Woops, fix missing `api_url` ref in error log 2025-02-11 16:27:28 -05:00
Tyler Goodlet 430650a6a7 Change type-annots to use `httpx.Response` 2025-02-11 16:27:28 -05:00
Tyler Goodlet 1da3cf5698 Port `kucoin` backend to `httpx` 2025-02-11 16:27:28 -05:00
Tyler Goodlet a348603fc4 Port `kraken` backend to `httpx` 2025-02-11 16:27:28 -05:00
goodboy 86047824d8 Merge pull request '`.brokers.ib` random fixes-n-improvements from various other dev branches..' (#27) from ib_refinements into gitea_feats
Merged-in: #27
2025-02-11 21:26:20 +00:00
Tyler Goodlet cb92abbc38 ib: add connect status info emit 2025-02-11 14:56:17 -05:00
Tyler Goodlet 70332e375b ib: `.api` mod and log-fmt cleaning
About time we tidy'd a buncha status logging in this backend..
particularly for boot-up where there's lots of client-try-connect poll
looping with account detection from the user config.

`.api.Client` pprint and logging fmt improvements:
- add `Client.__repr__()` which shows the minimally useful set of info
  from the underlying `.ib: IB` as well as a new `.acnts: list[str]`
  of the account aliases defined in the user's `brokers.toml`.
- mk `.bars()` define a comprehensive `query_info: str` with all the
  request deats but only display if there's a problem with the response
  data.
- mk `.get_config()` report both the config file path and the acnt
  aliases (NOT the actual account #s).
- move all `.load_aio_clients()` client poll loop requests to
  `log.runtime()` statuses, only falling through to a `.warning()` when
  the loop fails to connect the client to the spec-ed API-gw addr, and
 |_ don't allow loading accounts for which the user has not defined an
    alias in `brokers.toml::[ib]`; raise a value-error in such cases
    with a message indicating how to mod the config.
 |_ only `log.info()` about acnts if some were loaded..

Other mod logging de-noising:
- better status fmting in `.symbols.open_symbol_search()` with
  `repr(Client)`.
- for `.feed.stream_quotes()` first quote reporting use `.runtime()`.
2025-02-11 14:56:17 -05:00
Tyler Goodlet 4940aabe05 ib: warn about mkt precision cuckups that `Contract`s clearly deliver wrong.. 2025-02-11 14:56:17 -05:00
Tyler Goodlet 4f9998e9fb ib: mask out trade and vlm rates for now 2025-02-11 14:56:17 -05:00
Tyler Goodlet c92a236196 ib: more trade record edge case handling
- timestamps came as `'date'`-keyed from 2022 and before but now are
  `'datetime'`..
- some symbols seem to have no commission field, so handle that..
- when no `'price'` field found return `None` from `norm_trade()`.
- add a warn log on mid-fill commission updates.
2025-02-11 14:56:17 -05:00
goodboy e4cd1f85f6 Merge pull request 'pyqt6' (#3) from pyqt6 into gitea_feats
Reviewed-on: #3 (well by fomo anyway..)
2025-02-11 17:25:03 +00:00
Tyler Goodlet 129cf58d41 Bump deps for Py3.12, go PyQt6, tweak ruff rules
Code base is already ported for `Qt6` so this removes the pyqt5 dep,
adds latest pyqt6 as well as buncha other updates:

- add `xonsh` and ptk as dev deps for those of us using wacky shells ;P
- bump compiled deps as needed for python 3.12 (`numpy`, `numba`)
- add `httpx` and drop `asks` since the latter is zombied and not compat
  with other libs on 3.12.
- add `ruff` linting ignore rules for the new `.ui.qt` shim mod layer.
- few other deps updates to latest versions.
- add in the `keywords` and `classifiers` sections from the old
  `setup.py`.
2024-05-20 11:07:27 -04:00
Tyler Goodlet 1fd8654ca5 Port all `.ui*` submods to new `.ui.qt` imports
This also officially moves the code base to using `PyQt6` including all
necessary reference changes and enum namespace path moves.

Also includes a small `.ui.order_mode` fix to cancel any
`Order.price <= 0` and a name error fix with logging using `msg`,
which is already used for the input order msg..
2024-05-01 14:33:10 -04:00
Tyler Goodlet d0170982bf Add `piker.ui.qt` as a `PyQt6` shim module
For the future, like if we ever get a `PyQt7` (or wtv else..), add
a module which allows changing Qt binding lib imports from one spot for
all other `.ui` submodules. In some sense this is like a shoddier, less
dynamic version of how `pyqtgraph.Qt.__init__.py` supports multiple
libs; it might actually make sense eventually to instead import from
their shim layer instead?

Included is a draft attempt at exposing a bunch of enums under
custom names:
- while the specific grouping of values seems to always stay consistent,
  the root enums seem to almost always get moved around in the `PyQtX`
  module namespace.
- changing groupings and/or each top level enum's ns location can more
  simply be changed/re-orged from one spot.
- allows `.ui` consumer code to use a name more relevant to `piker`'s
  usage of wtv UI component is being configured.
2024-05-01 14:30:18 -04:00
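
A stripped-down sketch of what such a shim module can look like; the alias names are invented here and the enum locations should be re-checked against the installed `PyQt6` release:

```python
# Sketch of a one-spot Qt binding shim; alias names are invented and
# the enum locations should be verified against your PyQt6 version.
from PyQt6.QtCore import Qt

# re-export enum groups under piker-relevant names so `.ui` code never
# touches the binding's namespace layout directly
align_flag = Qt.AlignmentFlag
keys = Qt.Key
key_mods = Qt.KeyboardModifier
mouse_buttons = Qt.MouseButton

# usage from a `.ui` submodule would then read, e.g.:
# from piker.ui.qt import keys, key_mods
# if ev.key() == keys.Key_R and mods == key_mods.ControlModifier: ...
```
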
Tyler Goodlet 821e73a409 Use a `unit_prefix: str` (like u or $) on health bar 2024-05-01 14:09:39 -04:00
Tyler Goodlet 3d03781810 Impl a sane (with nesting) `.types.Struct.pformat()`
Such that our internal structs can be pretty printed with indented and
type-hinted fields, AND for nested `Struct`-fields call `.pformat()` but
avoiding any recursion errors using `pprint.saferepr()`. Add
a `._sin_props()` iterator over the non-property fields; use it for
`dict` casting when called with `.to_dict(include_non_members=False)`.

Actually, we should also probably figure out how to only pprint like
when required by the user in a REPL or log msg by context-selectively
`pprint.PrettyPrinter` right? Also, if we can generalize decently enough
it'd be cool to maybe patch this in as a util to upstream `msgspec`?
2024-01-17 15:50:27 -05:00
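
A minimal take on the nested `pformat()` idea directly on `msgspec.Struct`; the field handling is simplified versus the real `piker.types.Struct` (e.g. no property filtering):

```python
# Simplified nested pretty-formatter for msgspec Structs; not the real
# `piker.types.Struct.pformat()`, just the same idea.
from pprint import saferepr

import msgspec


class Struct(msgspec.Struct):

    def pformat(self, indent: int = 2, level: int = 0) -> str:
        pad = ' ' * indent * (level + 1)
        lines = [f'{type(self).__name__}(']
        for name in self.__struct_fields__:
            val = getattr(self, name)
            if isinstance(val, Struct):
                # recurse for nested struct-fields
                rendered = val.pformat(indent, level + 1)
            else:
                # saferepr() avoids infinite recursion on cycles
                rendered = saferepr(val)
            lines.append(f'{pad}{name}: {type(val).__name__} = {rendered},')
        lines.append(' ' * indent * level + ')')
        return '\n'.join(lines)


class Inner(Struct):
    x: int
    y: float


class Outer(Struct):
    name: str
    inner: Inner


print(Outer(name='demo', inner=Inner(x=1, y=2.5)).pformat())
```
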
Tyler Goodlet 83d1f117a8 Always cancel (loaded) zero-priced orders
Ran into this trading a peenee where a dark got (mistakenly) submitted
at a price of 0 and then that consecutively broke upstream allocator/pp
code due to a divide-by-zero.. So instead always check for a zero-price
(since that should never ever be valid in any market) and instead cancel
any such order in the EMS and return `None` so that upstream callers can
ignore it without crash handling.
2024-01-17 10:29:43 -05:00
Tyler Goodlet e4ce79f720 Delegate `.toolz.open_crash_handler()` to `tractor.devx`
Means we can drop `.toolz.debug` module outright.
2024-01-16 10:26:38 -05:00
Tyler Goodlet 264246d89b Fix `brokers.toml` load for `kraken` backend 2024-01-10 17:53:15 -05:00
Tyler Goodlet 7c96c9fafe Just warn log on mismatched `MktPair` in paper eng 2024-01-10 17:52:50 -05:00
Tyler Goodlet 52b349fe79 Always reload shm data before annotating gaps, so they line up.. 2024-01-09 15:55:16 -05:00
Tyler Goodlet 6959429af8 Factor gap annotating into new `markup_gaps()`
Since we definitely want to markup gaps that are both data-errors and
just plain old venue closures (time gaps), generalize the `gaps:
pl.DataFrame` loop in a func and call it from the `ldshm` cmd for now.

Some other tweaks to `store ldshm`:
- add `np.median()` failover when detecting (ohlc) ts sample rate.
- use new `tsp.dedupe()` signature.
- differentiate between sample-period size gaps and venue closure gaps
  by calling `tsp.detect_time_gaps()` with diff size thresholds.
- add todo for backfilling null-segs when detected.
2024-01-04 11:01:21 -05:00
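
The `np.median()` failover for sample-period detection mentioned above is essentially:

```python
# Estimate the (OHLC) sample period from an epoch-time column by taking
# the median step, which is robust to a few gap/outlier rows.
import numpy as np


def detect_sample_period_s(times: np.ndarray) -> float:
    steps = np.diff(times)
    return float(np.median(steps))


times = np.array([0, 60, 120, 180, 600, 660], dtype=float)  # one gap
assert detect_sample_period_s(times) == 60.0
```
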
Tyler Goodlet 05f874001a Ignore `ContextCancelled`s from non-mngr requests
Since service daemon actors may be cancelled remotely by clients (who
maybe also requested said daemon-actor's spawn in the first place) we
specifically ignore `tractor.ContextCancelled`s from the `ctx.wait()`
inside `Services.start_service_task()` to avoid crashing the service
mngr, and thus for now `pikerd`, (which **does** happen now due to
updated and more explicit remote cancellation semantics implemented in
`tractor`) since the `.canceller` field is not going to match the
`pikerd` uid in such cases!

This explicit check makes sense since the `Services` mngr is built to
allow remote requests to "spawn-n-supervise service actors" where the
services can remain persistent but also cancelled later as requested. We
may want to consider only allowing cancellation by actors who requested
spawn in the future tho?

Also change to more explicit imports to `tractor` types for annots
throughout the sub-pkg.
2024-01-04 10:06:42 -05:00
Tyler Goodlet fc216d37de Drop `__all__` import style from `.services` 2024-01-04 10:05:53 -05:00
Tyler Goodlet 03e429abf8 Extend `enable_modules` from input `tractor_kwargs`
Since certain actors (even if client-like) will want to augment their
module set to provide remote features (such as our new rc annotation
msg-prot for `Qt`-chart actors) we need to ensure we merge in any input
`enable_modules: list[str]` to the value passed to the underlying
`tractor` spawning api. Previously we were passing `_root_modules` as
this value by name, but now instead we simply `list.extend()` that into
whatever is in the `kwargs` both in `open_piker_runtime()` and
`open_pikerd()`.
2024-01-04 09:59:15 -05:00
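
The merge described above is just a list-extend into whatever the caller already passed; a sketch (the module names are illustrative):

```python
# Sketch: merge root-actor modules into any caller-provided
# `enable_modules` instead of overwriting them.
_root_modules: list[str] = [
    'piker._daemon',        # illustrative entries only
    'piker.clearing._ems',
]


def merge_enable_modules(**tractor_kwargs) -> dict:
    enabled: list[str] = tractor_kwargs.setdefault('enable_modules', [])
    enabled.extend(_root_modules)
    return tractor_kwargs


kwargs = merge_enable_modules(enable_modules=['piker.ui._remote_ctl'])
print(kwargs['enable_modules'])
```
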
Tyler Goodlet 7ae7cc829f `tsp`: on backfill, do a smart retry on a `NoData`
Presuming the data provider gives us a config with a `frame_types: dict`
(indicating frame sizes per query/request) we try to be clever and
decrement our submitted `end_dt: DateTime` based on it.. hoping for the
best on the next attempt.
2024-01-03 19:49:41 -05:00
Tyler Goodlet b23d44e21a ib; return `None` on empty bars frame resp so as to trigger raising `NoData` in the caller 2024-01-03 18:16:48 -05:00
Tyler Goodlet 2669db785c Workaround `binance`'s latest API schema bs..
Apparently publishing futures contracts that aren't yet trading AND
changing their contract type `str` format/schema was necessary (such
that there's a f@#$in space in it..)?

I honestly have no idea where they found their "data engineers" XD

TO CHERRY to #520
2024-01-03 17:50:09 -05:00
Tyler Goodlet d3e7b5cd0e Formalize rc `redraw()` msg-endpoint
So now a chart rc client can ask to invoke the new
`Viz.reset_graphics()` by timeframe and fqme Bo. This is handy when doing
underlying (real time or tsp) edits where you want to make the UI reflect
the changes incrementally.

Impl deatz:
- tweak the msg schema to use a `cmd: str` which normally maps to
  (something similar to) the UI method name instead of `annot` and now
  offer 3 such "commands": 'redraw', 'remove', 'SelectRect'.
- impl `AnnotCtl.redraw()` which sends the underlying `msg: dict` on the
  correct `tractor.Msgstream` ipc instance.
  - since ipc-stream lookups now happen in multiple client methods impl
    a private `._get_ipc()` to do the error raise on unknown fqmes.
2024-01-03 17:33:15 -05:00
Tyler Goodlet 9be29a707d Make `ib` failed history requests more debug-able
Been hitting wayy too many cases like this so, finally put my foot down
and stuck in a buncha helper code to figure out why (especially for gappy
ass pennies) this can/is happening XD

inside the `.ib.api.Client()`:
- in `.bars()` pack all `.reqHistoricalDataAsync()` kwargs into a dict such that
when/if we rx a blank frame we can enter pdb and make sync calls using
  a little `get_hist()` closure from the REPL.
  - tidy up type annots a bit too.
- add a new `.maybe_get_head_time()` meth which will return `None` when
  the dt can't be retrieved for the contract.

inside `.feed.open_history_client()`:
- use new `Client.maybe_get_head_time()` and only do `DataUnavailable`
  raises when the request `end_dt` is actually earlier.
- when `get_bars()` returns a `None` and the `head_dt` is not earlier
than the `end_dt` submitted, raise a `NoData` with more `.info: dict`.
- deliver a new `frame_types: dict[int, pendulum.Duration]` as part
  of the yielded `config: dict`.
- in `.get_bars()` always assume a `tuple` returned from
  `Client.bars()`.
  - return a `None` on empty frames instead of raising `NoData` at this
    call frame.
- do more explicit imports from `pendulum` for brevity.

inside `.brokers._util`:
- make `NoData` take an `info: dict` as input to allow backends to pack
  in empty frame meta-data for (eventual) use in the tsp back-filling
  layer.
2023-12-29 21:59:59 -05:00
Tyler Goodlet c82ca812a8 Pass display state table to interaction handlers
This took a teensie bit of reworking in some `.ui` modules
more or less in the following order of functional dependence:

- add a `Ctl-R` kb-binding to trigger a `Viz.reset_graphics()` in
  the kb-handler task `handle_viewmode_kb_inputs()`.
  - call the new method on all `Viz`s (& for all sample-rates) and
    `DisplayState` refs provided in a (new input)
`dss: dict[str, DisplayState]` table, which was originally init-ed
    from the multi-feed display loop (so orig in `.graphics_update_loop()`
    but now provided as an input to that func, see below..)
- `._interaction`: allow binding in `async_handler()` kwargs (via
  a `functools.partial`) passed to `ChartView.open_async_input_handler()`
  such that arbitrary inputs to our kb+mouse handler funcs can accept
  "wtv we desire".
  - use ^ to bind in the aforementioned `dss` display-state table to
    said handlers!
- define the `dss` table (as mentioned) inside `._display.display_symbol_data()`
  and pass it into the update loop funcs as well as the newly augmented
  `.open_async_input_handler()` calls,
  - drop calling `chart.view.open_async_input_handler()` from the
    `.order_mode.open_order_mode()`'s enter block and instead factor it
    into the caller to support passing the `dss` table to the kb
    handlers.
  - comment out the original history update loop handling of forced `Viz`
    redraws entirely since we now have a manual method via `Ctl-R`.
  - now, just update the `._remote_ctl.dss: dict` with this table since
    we want to also provide rc for **all** loaded feeds, not just the
    currently shown one/set.
- docs, naming and typing tweaks to `._event.open_handlers()`
2023-12-28 21:06:06 -05:00
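
The kwarg-binding trick for the input handlers is plain `functools.partial`; a reduced sketch with hypothetical handler and table shapes:

```python
# Reduced sketch of binding an extra table into an async input handler
# via functools.partial; handler and table names are hypothetical.
from functools import partial


async def handle_viewmode_kb_inputs(
    events,                      # async iterable of key events
    dss: dict[str, object],      # display-state table bound by caller
) -> None:
    async for ev in events:
        if ev == 'Ctrl-R':
            for fqme, ds in dss.items():
                print(f'force re-render: {fqme} -> {ds}')


def open_async_input_handler(handler) -> None:
    # the view layer only ever sees a 1-arg handler; any extra state
    # was already bound in by the caller.
    print(f'registered handler: {handler}')


dss: dict[str, object] = {'mnq.cme.ib': object()}
open_async_input_handler(partial(handle_viewmode_kb_inputs, dss=dss))
```
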
Tyler Goodlet a7ad50cf8f Add `Viz.reset_graphics()` for "force re-render"
Since we're now using it in multiple layers it probably makes sense to impl
and wrap it more correctly / publicly. The main (recent) use case is
where editing an underlying time series and then wanting to refresh the
graphics layers to reflect the changes in a chart. Part of this also
obviously includes wiping the y-range mx/mn cache.

Also ensure that `force_redraw` is proxying through to any `BarItems`
via the new `render_baritems()` func kwarg even when switching between
downsampled-line vs. bars modes.
2023-12-28 18:00:26 -05:00
Tyler Goodlet 661805695e Reimpl axis dt label contents gen with `polars`
Since `polars` has a more sane set of (time-zone aware) datetime APIs it
makes more sense and is definitely no slower than the previous `numpy`
impl. Also, actually use the sample-rate specific formats defined in
`DynamicDateAxis.tick_tpl: dict[int, str]`, finally using the new
`Viz.time_step()` property.
2023-12-28 11:08:29 -05:00
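
The `polars`-based label generation amounts to converting the epoch `'time'` values and applying a sample-rate keyed format string; a hedged sketch (the `tick_tpl` templates below are made up):

```python
# Sketch: render x-axis datetime labels from epoch times with polars;
# the per-sample-rate format templates below are made up.
import polars as pl

tick_tpl: dict[int, str] = {
    1: '%H:%M:%S',       # 1s charts
    60: '%d %b %H:%M',   # 1m charts
}


def dt_labels(epoch_times: list[int], step_s: int) -> list[str]:
    dts = pl.from_epoch(pl.Series('time', epoch_times), time_unit='s')
    return dts.dt.strftime(tick_tpl[step_s]).to_list()


print(dt_labels([1_700_000_000, 1_700_000_060], step_s=60))
```
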
Tyler Goodlet 3de7c9a9eb Add `Viz.time_step()`, the sample step-size in time
Since we end up needing the actual (OHLC sampled) time step info (at
least in seconds) for various purposes (in this specific follow up use
case to determine sample-rate specific `datetime` format strings for
a charted time series x-axis label), allow always reading it from the
viz with the presumption (at least for now) the underlying data-frame
will have an epoch `'time'` col/field.
2023-12-28 11:02:06 -05:00
Tyler Goodlet 59536bd284 Use `import <name> as <name>,` in `.tsp`
Thanks to oremanj in the `trio` room for this hot style tip which i much
prefer to have less LOC and places to change sub-pkg name exports!

Also drop expecting a `gaps` frame output from `dedupe()`.
2023-12-28 10:58:22 -05:00
Tyler Goodlet 5702e422d8 Drop gap detection from `dedupe()`, expect caller to handle it 2023-12-28 10:40:08 -05:00
Tyler Goodlet 07331a160e Expose "bar gap margin" as `.ui._formatters.BGM: float` 2023-12-28 10:37:20 -05:00
Tyler Goodlet 0d18cb65c3 Lul, actually detect gaps for 1s OHLC
Turns out we were always filtering to time gaps longer than a day smh..
Instead tweak `detect_time_gaps()` to only return venue-gaps when
a `gap_dt_unit: str` is passed and pass `'days'` (like it was by default
before) from `dedupe()` though we should really pass in an actual venue
gap duration in the future.
2023-12-27 16:55:00 -05:00
Tyler Goodlet ad565936ec Factor UI-rc loop into ctx-free func
In theory the `async for msg` loop can be re-purposed without having to
always call `remote_annotate()` so factor it into a new
`serve_rc_annots()` and then just call it from the former (for now) with
the wrapping `try:` block outside to delete per-client-ctx annotation
instance sets. Also, use some type aliases instead of repeatedly
defining the same complex `dict`-table defs B)
2023-12-26 20:56:04 -05:00
Tyler Goodlet d4b07cc95a `ui._lines`: more direct Qt imports for typing 2023-12-26 20:49:07 -05:00
Tyler Goodlet 1231c459aa Track data feed subscribers using a new `Sub(Struct)`
In prep for supporting reverse-ipc connect-back to UI actors from
middle-ware systems (for the purposes of triggering data-view canvas
re-renders and built-in tsp annotations), add a new struct type to
better generalize the management of remote feed subscriptions. Include
a `Sub.rc_ui: bool` for now (with nearby todo-comment) and expose an
`allow_remote_ctl_ui: bool` through the feed endpoints to help drive
/ prep for all that ^

Rework all the sampler tasks to expect the `Sub`'s new iface:

- split up the `Sub.ipc: MsgStream`  and `.send_chan` as separate fields
  since we're handling the throttle case in separate
  `sample_and_broadcast()` logic blocks anyway and avoids needing to
  monkey-patch on the `._ctx` malarky..
- explicitly provide the optional handle to the `_throttle_cs:
  CancelScope` again for the case where throttling/event-downsampling is
  requested.
- add `_FeedsBus.subs_items()` as a public iterator.
2023-12-26 20:48:06 -05:00
Tyler Goodlet 88f415e5b8 Cannot delete when the rect has no scene.. 2023-12-26 17:36:34 -05:00
Tyler Goodlet d9c574e291 Add `.sort()` support to `dedupe()` 2023-12-26 17:35:38 -05:00
Tyler Goodlet a86573b5a2 Fix .parquet filenaming..
Apparently `.storage.nativedb.mk_ohlcv_shm_keyed_filepath()` was always
kinda broken if you passed in a `period: float` with an actual non-`int`
to the format string? Fixed it to strictly cast to `int()` before
str-ifying so that you don't get weird `60.0s.parquet` in there..

Further this rejigs the `store ldshm` gap correction-annotation loop to,
- use `StorageClient.write_ohlcv()` instead of hackily re-implementing
  it.. now that problem from above is fixed!
- use a `needs_correction: bool` var to determine if gap markup and
  de-duplicated data should be pushed to the shm buffer,
- go back to using `AnnotCtl.add_rect()` for all detected gaps such that
  they all persist (and thus are shown together) until the client
  disconnects.
2023-12-26 17:14:26 -05:00
Tyler Goodlet 1d7e97a295 Woops, need to use `.push_async_callback()`
For non-full-`.__aexit__()` handlers need this method instead (facepalm).
Also create and assign the `AnnotCtl._annot_stack: AsyncExitStack` just
before yielding the client since it's not needed prior and ensures annot
removal happens **before** ipc teardown.
2023-12-24 15:08:44 -05:00
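
For reference, the stdlib distinction at play: `AsyncExitStack.enter_async_context()` wants a full async context manager while `.push_async_callback()` just schedules a coroutine function to run at unwind, LIFO; a tiny demo:

```python
# Tiny demo of AsyncExitStack.push_async_callback(): register a plain
# coroutine function (not a full async CM) to run at stack unwind.
from contextlib import AsyncExitStack

import trio


async def remove_annot(aid: int) -> None:
    print(f'removed annotation {aid}')


async def main() -> None:
    async with AsyncExitStack() as stack:
        for aid in (1, 2, 3):
            # runs in LIFO order when the stack exits
            stack.push_async_callback(remove_annot, aid)
        print('client in use..')

trio.run(main)
```
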
Tyler Goodlet bbb98597a0 Add annot removal via client methods or ctx-mngr
Since leaking annots to a remote `chart` actor probably isn't a thing we
want to do (often), add a removal/deletion handler block to the
`remote_annotate()` ctx which can be triggered using a `{rm_annot: aid}`
msg.

Augment the `AnnotCtl` with,
- `.remove()` which sends said msg (from above) and returns a `bool`
  indicating success.
- add an `.open_rect()` acm which does the `.add_rect()` / `.remove()`
  calls underneath for use in scope oriented client usage.
- add a `._annot_stack: AsyncExitStack` which will always have any/all
  non-`.open_rect()` calls to `.add_rect()` register removal on client
  teardown, to avoid leaking annots when a client finally disconnects.
- comment out the `.modify()` meth idea for now.
- rename all `Xstream` var-tags to `Xipc` names.
2023-12-24 14:42:12 -05:00
Tyler Goodlet e33d6333ec Woops, remove the label-proxy, not the widget.. 2023-12-24 13:59:16 -05:00
Tyler Goodlet 263a5a8d07 Add `SelectRect.delete()` for permanent scene dealloc 2023-12-23 13:37:47 -05:00
Tyler Goodlet a681b2f0bb Drop passing `bus` to `tsp.manage_history()` in feed allocator 2023-12-22 21:44:38 -05:00
Tyler Goodlet 5b0c94933b `.config`: don't hack the user config dir if user is 'root' and sudo was NOT used.. 2023-12-22 21:41:51 -05:00
Tyler Goodlet 61e52213b2 Oof, fix no-tsdb-entry since needs full backfill case!
Got borked by the logic re-factoring to get more conc going around
tsdb vs. latest frame loads with nested nurseries. So, repair all that
such that we can still backfill symbols previously not loaded as well as
drop all the `_FeedBus` instance passing to subtasks where it's
definitely not needed.

Toss in a pause point around sampler stream `'backfilling'` msgs as well
since there seems to be a weird ctx-cancelled propagation going on
when a feed client disconnects during backfill and this might be where
the src `tractor.ContextCancelled` is getting bubbled from?
2023-12-22 21:34:31 -05:00
Tyler Goodlet b064a5f94d A working remote annotations controller B)
Obvi took a little `.ui` component fixing (as per prior commits) but
this is now a working PoC for gap detection and markup from a remote
(data) non-`chart` actor!

Iface and impl deats from `.ui._remote_ctl`:
- add new `open_annot_ctl()` mngr which attaches to all locally
  discoverable chart actors, gathers annot-ctl streams per fqme set, and
  delivers a new `AnnotCtl` client which allows adding annotation
  rectangles via a `.add_rect()` method.
  - also template out some other soon-to-get methods for removing and
    modifying pre-existing annotations on some `ChartView` 💥
- ensure the `chart` CLI subcmd starts the (`qtloops`) guest-mode init
  with the `.ui._remote_ctl` module enabled.
- actually use this stuff in the `piker store ldshm` CLI to submit
  markup rects around any detected null/time gaps in the tsdb data!

Still lots to do:
-  probably colorization of gaps depending on if they're venue
   closures (aka real mkt gaps) vs. "missing data" from the backend (aka
   timeseries consistency gaps).
- run gap detection and markup as part of the std `.tsp` sub-sys
   runtime such that gap annots are a std "built-in" feature of
   charting.
- support for epoch time stamp AND abs-shm-index rect x-values
  (depending on chart operational state).
2023-12-22 15:19:20 -05:00
Tyler Goodlet e7fa841263 Pass scene-points to `.select_box` as per prior comments
As mentioned in a prior commit this was the (seemingly, and so far) only
way to make our `.select_box` annotator shift-click rect work properly
(and the same as) by adopting the code around `ViewBox.rbScaleBox`
(which we now also disable). That means also passing the scene coords to
the `SelectRect.set_scen_pos()`. Also add in the proper `ev:
pyqtgraph.GraphicsScene.mouseEvents.MouseDragEvent` so we can actually
figure out wut the hell all this pg custom mouse-event stuff is XD
2023-12-22 12:09:08 -05:00
Tyler Goodlet 1f346483a0 Always pass full `ShmArray._array` buf to `ContentsLables` updates so the label can be used outside the "backfilled-valid" range 2023-12-22 12:06:55 -05:00
Tyler Goodlet d006ecce7e Fix `._pg_overrides` import cycle caused by our `Axis` override 2023-12-22 12:05:18 -05:00
Tyler Goodlet 69368f20c2 Finally fix our `SelectRect` for use with cursor..
Turns out using the `.setRect()` method was the main cause of the issue
(though still don't really understand how or why) and this instead
adopts verbatim the code from `pg.ViewBox.updateScaleBox()` which uses
a scaling transform to set the rect for the "zoom scale box" thingy.

Further add a shite ton more improvements and interface tweaks in
support of the new remote-annotation control msging subsys:
- re-impl `.set_scen_pos()` to expect `QGraphicsScene` coordinates (i.e.
  passed from the interaction loop and pass scene `QPointF`s from
  `ViewBox.mouseDragEvent()` using the `MouseDragEvent.scenePos()` and
  friends; this is required to properly use the transform setting
  approach to resize the select-rect as mentioned above.
- add `as_point()` converter to maybe-cast python `tuple[float, float]`
  inputs (prolly from IPC msgs) to equivalent `QPointF`s.
- add a ton more detailed Qt-obj-related typing throughout our deriv.
- call `.add_to_view()` from init so that wtv view is passed in during
  instantiation is always set as the `.vb` after creation.
- factor the (proxy widget) label creation into a new `.init_label()`
  so that both the `set_scen/view_pos()` methods can call it and just
  generally decouple rect-pos mods from label content mods.
2023-12-22 11:47:31 -05:00
Tyler Goodlet 31fa0b02f5 Append any `enable_modules` specc-ed in the chart guest-mode runner 2023-12-21 20:40:00 -05:00
Tyler Goodlet 5a60974990 Use explicit `.data.feed` import of `tractor.trionics` 2023-12-21 20:26:45 -05:00
Tyler Goodlet 8d324acf91 First (untested) draft remote annotation ctl API
Since we can and want to eventually allow remote control of pretty much
all UIs, this drafts out a new `.ui._remote_ctl` module with a new
`@tractor.context` called `remote_annotate()` which simply starts a msg
loop which allows for (eventual) initial control of a `SelectRect`
through IPC msgs.

Remote controller impl deats:
- make `._display.graphics_update_loop()` set a `._remote_ctl._dss:
  dict` for all chart actor-global `DisplayState` instances which can
  then be controlled from the `remote_annotate()` handler task.
- also stash any remote client controller `tractor.Context` handles in
  a module var for broadband IPC cancellation on any display loop
  shutdown.
- draft a further global map to track graphics object instances since
  likely we'll want to support remote mutation where the client can use
  the `id(obj): int` key as an IPC handle/uuid.
- just draft out a client-side `@acm` for now: `open_annots_client()` to
  be filled out in up coming commits.

UI component tweaks in support of the above:
- change/add `SelectRect.set_view_pos()` and `.set_scene_pos()` to allow
  specifying the rect coords in either of the scene or viewbox domains.
  - use these new apis in the interaction loop.
- add a `SelectRect.add_to_view()` to avoid having annotation client
  code knowing "how" a graphics obj needs to be added and can instead
  just pass only the target `ChartView` during init.
- drop all the status label updates from the display loop since they
  don't really work all the time, and probably it's not a feature we
  want to keep in the longer term (over just console output and/or using
  the status bar for simpler "current state / mkt" infos).
  - allows a bit of simplification of `.ui._fsp` method APIs to not pass
    around status (bar) callbacks as well!
2023-12-19 15:36:54 -05:00
Tyler Goodlet ab84303da7 Drop `SelectRect.mouse_drag_released()` since it was a dumb method 2023-12-18 20:32:17 -05:00
Tyler Goodlet 659649ec48 Bah, fix nursery indents for maybe tsdb backloading
Can't ref `dt_eps` and `tsdb_entry` if they don't exist.. like for 1s
sampling from `binance` (which dne). So make sure to add a better logic
guard and only open the final backload nursery if we actually need to
fill the gap between latest history and where tsdb history ends.

TO CHERRY #486
2023-12-18 19:46:59 -05:00
Tyler Goodlet f7cc43ee0b Add pauses to `store anal/ldshm` only on bad segs
Particularly halting before maybe writing the repaired timeseries
history in `store anal` to optionally allow the user to avoid writing to
storage.
2023-12-18 11:56:57 -05:00
Tyler Goodlet f5dc21d3f4 Adjust all `.tsp` imports to use new sub-pkg
Also toss in a poll loop around the `hist_shm: ShmArray` backfill
read-check in the `.data.allocate_persistent_feed()` init to cope with
possible racy-ness from the increased tsdb history loading concurrency
now implemented.
2023-12-18 11:54:28 -05:00
Tyler Goodlet 4568c55f17 Create `piker.tsp` "time series processing" subpkg
Move `.data.history` -> `.tsp.__init__.py` for now as main pkg-mod
and `.data.tsp` -> `.tsp._anal` (for analysis).

Obviously follow commits will change surrounding codebase (imports) to
match..
2023-12-18 11:53:27 -05:00
Tyler Goodlet d5d68f75ea ib: only raise first quote timeout err after tries
Previously we were actually failing silently too fast instead of
actually trying multiple times (now we do for 100) before finally
raising any timeout in the final loop `else:` block.
2023-12-18 11:45:19 -05:00
Tyler Goodlet 1f9a497637 Fixup symcache annot for kucoin as well 2023-12-15 16:01:31 -05:00
Tyler Goodlet 40c5d88a9b Fixup symcache type annots; no more `Pair` type 2023-12-15 16:00:51 -05:00
Tyler Goodlet 8989c73a93 Move `iter_dfs_from_shms` into `.data.history`
Thinking about just moving all of that module (after a content breakup)
to a new `.piker.tsp` which will mostly depend on the `.data` and
`.storage` sub-pkgs; the idea is to move biz-logic for tsdb IO/mgmt and
orchestration with real-time (shm) buffers and the graphics layer into
a common spot for both manual analysis/research work and better
separation of low level data structure primitives from their higher
level usage.

Add a better `data.history` mod doc string in prep for this move
as well as clean out a bunch of legacy commented cruft from the
`trimeter` and `marketstore` days.

TO CHERRY #486 (if we can)
2023-12-15 15:53:02 -05:00
Tyler Goodlet 3639f360c3 Reactivate forced viz updates from sampler broadcasts in hist display loop 2023-12-15 13:59:19 -05:00
Tyler Goodlet afd0781b62 Add (shm) abs index to `ContextLabel` 2023-12-15 13:57:10 -05:00
Tyler Goodlet ba154ef413 ib: don't bother with recursive not-enough-bars queries for now, causes more problems then it solves.. 2023-12-15 13:56:42 -05:00
Tyler Goodlet 97e2403fb1 Rework backfiller and null-segment task conc
For each timeframe open a sub-nursery to do the backfilling + tsdb load
+ null-segment scanning in an effort to both speed up load time (though
we need to reverse the current order to really make it faster rn since
moving to the much faster parquet file backend) and do concurrent
time-gap/null-segment checking of tsdb history while mrf (most recent
frame) history is backfilling.

The details are more or less just `trio` related task-func composition
tricks and a reordering of said funcs for optimal startup latency.
Also commented the `back_load_from_tsdb()` task for now since it's
unused.
2023-12-15 13:11:00 -05:00
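
The task layout described above is standard `trio` nursery composition, one sub-nursery per timeframe; a skeletal sketch with stub task funcs:

```python
# Skeletal sketch of the per-timeframe task layout: each timeframe gets
# its own sub-nursery running backfill + tsdb-load + null-seg scanning
# concurrently; the task bodies here are stubs.
import trio


async def start_backfill(timeframe: int) -> None:
    await trio.sleep(0.1)
    print(f'{timeframe}s: most-recent-frame backfill done')


async def load_tsdb_hist(timeframe: int) -> None:
    await trio.sleep(0.05)
    print(f'{timeframe}s: tsdb history loaded')


async def scan_null_segs(timeframe: int) -> None:
    await trio.sleep(0.05)
    print(f'{timeframe}s: null-segment scan done')


async def per_timeframe(tf: int) -> None:
    async with trio.open_nursery() as sub_tn:
        sub_tn.start_soon(start_backfill, tf)
        sub_tn.start_soon(load_tsdb_hist, tf)
        sub_tn.start_soon(scan_null_segs, tf)


async def tsdb_backfill(timeframes: tuple[int, ...] = (60, 1)) -> None:
    async with trio.open_nursery() as tn:
        for tf in timeframes:
            tn.start_soon(per_timeframe, tf)

trio.run(tsdb_backfill)
```
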
Tyler Goodlet a4084d6a0b Bleh, fix another off-by-one issue in `np.argwhere()`
Apparently it returns the index of the prior zero-row (prolly since we
do the backward difference) so ensure `fi_zgaps += 1`..

Also fix remaining edge case handling when there's only 2 zero-segs
which was borked after a refactor to the special case blocks (like
a single zero row) prior to the `absi_zsegs` building loop AND make sure
to always return abs indices OUTSIDE the zero seg, i.e. the indices of
the non-zero row just before and just after so that the history
backfiller can use non-zero timestamps to generate range datetimes for
backend frame queries.

Add much more detailed doc-comments with a small ascii diagram to
explain how all these somewhat subtle vec ops work. Also toss in some
sanity checks on the output indices to ensure they don't point to
zero (time) valued rows when used to read the frame.
2023-12-15 12:48:50 -05:00
Tyler Goodlet 83bdca46a2 Wrap null-gap detect and fill in async gen
Call it `iter_null_segs()` (for now?) and use in the final (sequential)
stage of the `.history.start_backfill()` task-func. Delivers abs,
frame-relative, and equiv time stamps on each iteration pertaining to
each detected null-segment to make it easy to do piece-wise history
queries for each.

Further,
- handle edge case in `get_null_segs()` where there is only 1 zeroed
  row value, in which case we deliver `absi_zsegs` as a single pair of
  the same index value and,
  - when this occurs `iter_null_segs()` delivers `None` for all the
    `start_` related indices/timestamps since all `get_hist()` routines
    (delivered by `open_history_client()`) should handle it as being a
    "get max history from this end_dt" type query.
- add note about needing to do time gap handling where there's a gap in
  the timeseries-history that isn't actually IN the data-history.
2023-12-13 18:29:06 -05:00
Tyler Goodlet c129f5bb4a Finally write a general purpose null-gap detector!
Using a bunch of fancy `numpy` vec ops (and ideally eventually extending
the same to `polars`) this is a first draft of `get_null_segs()`
a `col: str` field-value-is-zero detector which filters to all zero-valued
input frame segments and returns the corresponding useful slice-indexes:
- gap absolute (in shm buffer terms) index-endpoints as
  `absi_zsegs` for slicing to each null-segment in the src frame.
- ALL abs indices of rows with zeroed `col` values as `absi_zeros`.
- the full set of the input frame's row-entries (view) which are
  null valued for the chosen `col` as `zero_t`.

Use this new null-segment-detector in the
`.data.history.start_backfill()` task to attempt to fill null gaps that
might be extant from some prior backfill attempt. Since
`get_null_segs()` should now deliver a sequence of slices for each gap
we don't really need to have the `while gap_indices:` loop any more, so
just move that to the end-of-func and warn log (for now) if all gaps
aren't eventually filled.

TODO:
-[ ] do the null-seg detection and filling concurrently from
  most-recent-frame backfilling.
-[ ] offer the same detection in `.storage.cli` cmds for manual tsp
  anal.
-[ ] make the graphics layer actually update correctly when null-segs
  are filled (currently still broken somehow in the `Viz` caching
  layer?)

CHERRY INTO #486
2023-12-13 15:26:33 -05:00
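
The heart of such a zero-segment detector is a vectorized mask-and-diff over the chosen column; a self-contained sketch (not the actual `get_null_segs()` signature) where the rows just outside each run are simply `start - 1` and `stop`:

```python
# Sketch of vectorized zero-run detection over a single column; the
# real `get_null_segs()` returns richer (abs-shm-index) output.
import numpy as np


def zero_segments(col: np.ndarray) -> list[tuple[int, int]]:
    '''
    Return (start, stop) index pairs (stop exclusive) for each
    contiguous run of zero values in `col`.
    '''
    zeros = (col == 0).astype(np.int8)
    # pad so runs touching either end still produce both boundaries
    edges = np.diff(np.concatenate(([0], zeros, [0])))
    starts = np.flatnonzero(edges == 1)
    stops = np.flatnonzero(edges == -1)
    return list(zip(starts.tolist(), stops.tolist()))


times = np.array([10., 20., 0., 0., 50., 0., 70.])
print(zero_segments(times))  # [(2, 4), (5, 6)]
```
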
Tyler Goodlet c4853a3fee Drop inter-method NL 2023-12-13 09:27:23 -05:00
Tyler Goodlet f274c3db3b Import `np2pl()` from `.data.tsp`
Also toss in a todo for a timeseries search CLI cmd which can be handy
when doing offline store mgmt.
2023-12-13 09:25:44 -05:00
Tyler Goodlet b95932ea09 `.data.history`: run `.tsp.dedupe()` in backloader
In an effort to catch out-of-order and/or partial-frame-duplicated
segments, add some `.tsp` calls throughout the backloader tasks
including a call to the new `.sort_diff()` to catch the out-of-order
history cases.
2023-12-12 19:57:46 -05:00
Tyler Goodlet e8bf4c6e04 Return the `.len()` diff from `dedupe()` instead
Since the `diff: int` serves as a predicate anyway (when `0`, no
duplicates were detected) we might as well just return it directly since it's
likely also useful for the caller when doing deeper anal.

Also, handle the zero-diff case by just returning early with a copy of
the input frame and a `diff=0`.

CHERRY INTO #486
2023-12-12 16:48:56 -05:00
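
A condensed version of that `dedupe()` contract using `polars`; the real `.tsp.dedupe()` does more (sorting, gap handling) than this sketch:

```python
# Condensed dedupe sketch: drop duplicate datetime rows and return the
# row-count diff so callers can use it as a "was anything removed?"
# predicate; the real `.tsp.dedupe()` does more than this.
import polars as pl


def dedupe(df: pl.DataFrame, col: str = 'time') -> tuple[pl.DataFrame, int]:
    deduped = df.unique(subset=col, keep='first', maintain_order=True)
    diff = df.height - deduped.height
    if diff == 0:
        # zero-diff fast path: nothing was duplicated
        return df, 0
    return deduped, diff


df = pl.DataFrame({'time': [1, 2, 2, 3], 'close': [1.0, 1.1, 1.1, 1.2]})
deduped, diff = dedupe(df)
print(diff)  # 1
```
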
Tyler Goodlet 8e4d1a48ed Bleh, fix ib's `Client.bars()` recursion..
Turns out this was the main source of all sorts of gaps and overlaps
in history frame backfilling. The original idea was that when a gap
causes not enough (1m) bars to be delivered (like over a weekend or
holiday) when we just implicitly do another frame query to try and at
least fill out the default duration (normally 1-2 days). Doing the
recursion sloppily was causing all sorts of stupid problems..

It's kinda obvious now what was wrong in hindsight:
- always pass the sampling period (timeframe) when recursing
- adjust the logic to not be mutex with the no-data case (since it
  already is mutex..)
- pack to the `numpy` array BEFORE the recursive call to ensure the
  `end_dt: DateTime` is selected and passed correctly!

Toss in some other helpfuls:
- more explicit `pendulum` typing imports
- some masked out sorted-diffing checks (that can be enabled when
  debugging out-of-order frame issues)
- always error log about less-than time step mismatches since we should never
  have time-diff steps **smaller** than specified in the
  `sample_period_s`!
2023-12-12 16:19:21 -05:00
Tyler Goodlet b03eceebef data.tsp: drop masked `return` one liner 2023-12-11 20:11:42 -05:00
Tyler Goodlet f7a8d79b7b Add `NativeStorageClient._cache_df()` use it in `.write_ohlcv()` for caching on writes as well 2023-12-11 20:10:53 -05:00
Tyler Goodlet 49c458710e Move `numpy` <-> `polars` converters into `.data.tsp`
Yet again these are (going to be) generally useful in the data proc
layer as well as going forward with (possibly) moving the history and
shm rt-processing layer to apache (arrow or other) shared-ds
equivalents.
2023-12-11 17:53:31 -05:00
Tyler Goodlet b94582cb35 Move `dedupe()` to `.data.tsp` (so it has pals)
Includes a rename of `.data._timeseries` -> `.data.tsp` for "time series
processing", making it a public sub-mod; it contains a highly useful set
of data-frame and `numpy.ndarray` ops routines in various subsystems Bo
2023-12-11 16:24:27 -05:00
Tyler Goodlet 7311000846 Facepalm, set `was_deduped` as bool not the deduped frame.. 2023-12-11 13:18:10 -05:00
Tyler Goodlet e719733f97 Comment out overlap case block for now too? 2023-12-08 19:08:10 -05:00
Tyler Goodlet cb941a5554 BABOSO.. fix last history frame overlap slicing!
I guess since i started supporting the whole "allow a gap between
the latest tsdb sample and the latest retrieved history frame" the
overlap slicing has been completely borked XD where we've been sticking
in duplicate history samples and this has caused all sorts of down
stream time-series processing issues..

So fix that by ensuring whenever there IS an overlap between history in
the latest frame and the tsdb that we always prefer the latest frame's
data and slice OUT the tsdb's duplicate indices..

CHERRY TO #486
2023-12-08 18:56:38 -05:00
Tyler Goodlet 2d72a052aa Woops, make sure non-disti mode still works when maybe getting `pikerd` XD 2023-12-08 17:43:52 -05:00
Tyler Goodlet 2eeef2a123 Add `dedupe()` to help with gap detection/resolution
Think i finally figured out the weird issue without out-of-order OHLC
history getting jammed in the wrong place:
- gap is detected in parquet/offline ts (likely due to a zero dt or
  other gap),
- query for history in the gap is made BUT that frame is then inserted
  in the shm buffer **at the end** (likely using array int-entry
  indexing) which inserts it at the wrong location,
- later this out-of-order frame is written to the storage layer
  (parquet) and then is repeated on further reboots with the original
  gap causing further queries for the same frame on every history
  backfill.

A set of tools useful for detecting these issues and annotating them
nicely on chart part of this patch's intent:
- `dedupe()` will detect any dt gaps, deduplicate datetime rows and
  return the de-duplicated df along with gaps table.
- use this in both `piker store anal` such that we potentially
  resolve and backfill the gaps correctly if some rows were removed.
- possibly also use this to detect the backfilling error in logic at
  the time of backfilling the frame instead of after the fact (which
  would require re-writing the shm array from something like `store
  ldshm` and would be a manual post-hoc solution, not a fix to the
  original issue..)
2023-12-08 15:11:34 -05:00
Tyler Goodlet b6d2550f33 Add datetime col de-duplicator 2023-12-08 14:38:27 -05:00
Tyler Goodlet b9af6176c5 Factor `TimeseriesNotFound` to top level
TO CHERRY into #486
2023-12-07 12:31:14 -05:00
Tyler Goodlet dd0167b9a5 Make `fsp.cascade()` expect src/dst `Flume`s
Been meaning to this for a while, and there's still a few design
/ interface kinks (like `.mkt: MktPair` which should be better
generalized?) but this flips over all of the fsp chaining engine
to operate on the higher level `Flume` APIs via the newly cobbled
`Cascade` thinger..
2023-12-06 17:53:35 -05:00
Tyler Goodlet 9e71e0768f Define and pass a default `Flume._readonly: bool`
Allows opening with `.from_msg(readonly=False)` for write permissions
making underlying shm arrays readonly. Also, make sure to pop the
`ShmArray` field entries prior to msg-ization, not sure how that worked
with the `Feed.flumes` equivalent..but?
2023-12-06 17:25:49 -05:00
Tyler Goodlet 6029f39a3f Allow `MktPair.from/to_msg()` to still do `.dst: str` for fsp flumes 2023-12-06 17:09:52 -05:00
Tyler Goodlet 656e2c6a88 fsp: intro a `Cascade` type that connects `Flume`s of streams 2023-12-05 16:59:07 -05:00
Tyler Goodlet b8065a413b ib: update ibc.ini from latest upstream template 2023-12-05 16:57:38 -05:00
Tyler Goodlet 9245d24b47 ib: add `.pause()` on symbol query overruns to aid in fixing the issue 2023-12-04 13:10:15 -05:00
Tyler Goodlet 22bd83943b .storage: support `store anal --pdb` flag 2023-12-04 13:00:33 -05:00
Tyler Goodlet b94931bbdd Fix `Portal.channel: Channel` attr name error 2023-12-04 13:00:04 -05:00
Tyler Goodlet 239c1c457e Sort fqme suggestions pre-print 2023-12-04 11:34:39 -05:00
Tyler Goodlet 24a54a7085 Add `TimeseriesNotFound` for fqme lookup failures
A common usage error is to run `piker anal mnq.cme.ib` where the CLI
passed fqme is not actually fully-qualified (in this case missing an
expiry token) and we get an underlying `FileNotFoundError` from the
`StorageClient.read_ohlcv()` call. In such key misses, scan the existing
`StorageClient._index` for possible matches and report in a `raise from`
the new error.

CHERRY into #486
2023-12-04 11:22:55 -05:00
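
The lookup-miss handling reads roughly like the following sketch; `difflib.get_close_matches()` stands in here for whatever scan the storage client actually does over its `_index`:

```python
# Sketch of surfacing fqme suggestions on a storage-key miss; difflib
# stands in here for the client's actual index scan.
from difflib import get_close_matches


class TimeseriesNotFound(Exception):
    '''No tsdb entry exists for the requested fqme.'''


def read_ohlcv(fqme: str, index: dict[str, str]) -> str:
    try:
        return index[fqme]
    except KeyError as err:
        suggestions = sorted(get_close_matches(fqme, index, n=5, cutoff=0.3))
        raise TimeseriesNotFound(
            f'No time series found for {fqme!r}\n'
            'Did you mean one of:\n' + '\n'.join(suggestions)
        ) from err


index = {'mnq.cme.20240315.ib': 'mnq_1m.parquet'}
try:
    read_ohlcv('mnq.cme.ib', index)
except TimeseriesNotFound as err:
    print(err)
```
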
Tyler Goodlet ebd1eb114e Port runtime init to new `tractor.Actor.reg_addrs` related changes 2023-11-21 15:18:52 -05:00
Tyler Goodlet 29ce8de462 Use new container image mentioned on IBC thread 2023-10-29 13:21:32 -04:00
Tyler Goodlet d3dab17939 order_mode: fix to avoid `Dialog.uuid` on null dialog.. 2023-10-20 13:57:52 -04:00
Tyler Goodlet cadc200818 Always ignore untracked-order error msgs from `brokerd` 2023-10-16 13:15:12 -04:00
Tyler Goodlet 363c8dfdb1 Default spec registrar set as empty addr list
Since it probably IS sane to just assume a root-actor-as-registrar
listening on the localhost as a default, AND it allows every
caller of `open_piker_runtime()` to not have to pass an addr set XD

This makes a bucha CLI shit work again after breakage due to no
default..
2023-10-03 13:36:22 -04:00
Tyler Goodlet 00c046c280 Factor transport-ep parser/loader into helper
For now def it `.cli.load_trans_eps()` just inside the pkg mod; only
loads the ep for `pikerd` which currently acts as the main service-actor
registrar per host. Delegate to this new `.load_trans_eps()`
as-it-was-used from the `pikerd` cmd body and add fresh support for
`piker chart --maddr <addr: str>` using the routine in the body of the
`piker.cli.cli` cmd group after loading the `conf.toml::network` section
B)

Also, toss in runtime debug mode wrapping around `piker chart` using the
new `tractor.devx.maybe_open_crash_handler()` and pull the switch from
a `--pdb` flag now factored into the `.cli.cli` click group.
2023-10-03 10:00:01 -04:00
Tyler Goodlet 9165515811 ib: more detailed comments on wait-for-quote-task todo 2023-10-02 17:57:47 -04:00
Tyler Goodlet 543c11f377 ib: only normalize and log first quote if it arrives 2023-10-01 19:14:08 -04:00
Tyler Goodlet 637d33d7cc Make `.config.load_accounts()` load `brokers.toml`.. 2023-10-01 19:09:15 -04:00
Tyler Goodlet e5fdb33e31 Port cache-`dict` search to new `rapidfuzz` api 2023-10-01 17:46:46 -04:00
Tyler Goodlet 81a8cd1685 binance: always load the `brokers.toml` file since default is `conf.toml` now 2023-10-01 17:37:09 -04:00
Tyler Goodlet a382f01c85 Move tsdb section to `service.tsdb.name` and get host from `.maddrs` 2023-10-01 17:23:39 -04:00
Tyler Goodlet 653348fcd8 Use `.service.find_service()` instead of of `tractor.find_actor()` in pape-eng 2023-10-01 16:10:37 -04:00
Tyler Goodlet e139d2e259 Set `registry_addrs` in CLI (click) context-config
Since `tractor` and our runtime internals is now moved to multihomed semantics,
do the same in the CLI / config entrypoints.

Also, try using the new `tractor.devx.maybe_open_crash_handler()` around
the `pikerd` CLI.
2023-10-01 15:42:31 -04:00
Tyler Goodlet 7258d57c69 Only warn on mismatched `open_registry()` input addrs
When a new (actor) caller opens the registry there are a few possible cases:
1. - some task already opened the registry during init and set the global
  superset of registrar addrs that are expected to be used,
2. - some task after the init task opens with a subset of addrs.
3. - some task after init opens with a disjoint set - should be an error?

In the 2nd case we don't want to error since they may just not need to
know about other registrar (multi-homed) addrs and thus only need
specific access - so only warn about the diff in that case. If the
caller is requesting some disjoint set then we still runtime raise.

Adjust `find_service()` to allow a null `registry_addrs` input in which
case we fail over to using whatever pre-set the `Registry.addrs` has;
makes it simple for actors that don't want/need to know about the global
registrar set for their actor tree. Also, always pass
`tractor.find_actor(only_first=True)` (for now).
2023-10-01 15:36:17 -04:00
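
The subset-warn vs. disjoint-raise policy above boils down to a couple of set comparisons; a sketch with a hypothetical helper name:

```python
# Sketch of the subset/warn vs. disjoint/raise policy for registrar
# addrs; `check_registry_addrs()` is a hypothetical helper name.
import logging

log = logging.getLogger('piker.service')


def check_registry_addrs(
    preset: set[tuple[str, int]],     # addrs set by the init task
    requested: set[tuple[str, int]],  # addrs a later caller passes in
) -> None:
    if not requested or requested == preset:
        return

    if requested < preset:
        # caller only needs a subset: fine, just note the diff
        log.warning(f'Using subset of registrar addrs: {preset - requested}')
        return

    if requested.isdisjoint(preset):
        raise RuntimeError(f'Disjoint registrar addrs requested: {requested}')


check_registry_addrs(
    {('127.0.0.1', 6116), ('10.0.0.2', 6116)},
    {('127.0.0.1', 6116)},
)
```
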
Tyler Goodlet 5d081a40d5 Port to new `parse_maddr()` name 2023-09-29 15:20:56 -04:00
Tyler Goodlet fcececce19 Move multi-addr parser mod to `tractor` 2023-09-29 14:33:15 -04:00
Tyler Goodlet b6ac6069fe Temporarily use crash handler around search CLI ep 2023-09-29 14:02:17 -04:00
Tyler Goodlet a98f5877bc ui._exec: use new `get_runtime_vars()` name 2023-09-28 12:31:24 -04:00
Tyler Goodlet 50ddef0985 data.feed: dynamically load `ui._search` mod for headless installs 2023-09-28 12:30:10 -04:00
Tyler Goodlet b1cde3df49 config: make `conf.toml` the default load target 2023-09-28 12:29:07 -04:00
Tyler Goodlet 57010d479d Support multi-homed service actors and multiaddrs
This commit requires an equivalent commit in `tractor` which adds
multi-homed transport server support to the runtime and thus the
ability to listen on multiple (embedded protocol) addrs / networks as
well as exposing registry actors similarly. Multiple bind addresses can
now be (bare bones) specified either in the `conf.toml:[network]`
section, or passed on the `pikerd` CLI.

This patch specifically requires the ability to pass a `registry_addrs:
list[tuple]` into `tractor.open_root_actor()` as well as adjusts all
internal runtime routines to do the same, mostly inside the `.service`
pkg.

Further details include:
- adding a new `.service._multiaddr` parser module (which will likely be
  moved into `tractor`'s core) which supports loading libp2p style
  "multiaddresses" both from the `conf.toml` and the `pikerd` CLI as
  per,
- reworking the `pikerd` cmd to accept a new `--maddr`/`-m` param that
  accepts multiaddresses.
- adjust the actor-registry subsys to support multi-homing by also
  accepting a list of addrs to its top level API eps.
- various internal name changes to reflect the multi-address interface
  changes throughout.
- non-working CLI tweaks to `piker chart` (ui-client cmds) to begin
  accepting maddrs.
- dropping all elasticsearch and marketstore flags / usage from `pikerd`
  for now since we're planning to drop mkts and elasticsearch will be an
  optional dep in the future.
2023-09-28 12:13:34 -04:00
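For context, a libp2p style multiaddr is just a flat sequence of `/protocol/value` layers; a minimal illustrative parser (not the actual `.service._multiaddr` API) might look like:

```python
def parse_maddr(maddr: str) -> list[tuple[str, str]]:
    '''
    Split a libp2p-style multiaddress such as '/ipv4/127.0.0.1/tcp/6116'
    into its (protocol, value) layer pairs; illustrative only.

    '''
    tokens: list[str] = [tok for tok in maddr.split('/') if tok]
    if len(tokens) % 2:
        raise ValueError(f'Unbalanced multiaddr: {maddr!r}')

    return list(zip(tokens[::2], tokens[1::2]))


# eg. parse_maddr('/ipv4/127.0.0.1/tcp/6116')
# -> [('ipv4', '127.0.0.1'), ('tcp', '6116')]
```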
Tyler Goodlet f94244aad4 Load `network` section from `conf.toml` for service-addr map 2023-09-28 12:04:24 -04:00
Tyler Goodlet 261c331602 Try using `.mkPoetryEnv` instead for devving (dont work yet..) 2023-09-22 16:39:54 -04:00
Tyler Goodlet 3b4a4db7b6 Muck with `develop.nix` to try and hack it with `poetry` venv, go py3.11 2023-09-22 16:39:54 -04:00
Tyler Goodlet ad59a581c7 symcache: passthrough `rapidfuzz.process.extract` kwargs 2023-09-22 15:56:49 -04:00
Tyler Goodlet c312f90c0c kucoin: port to using `rapidfuzz`
Just like the others but also flip to using a `Client.get_mkt_pairs()`
meth name for consistency across clients.
2023-09-22 15:55:19 -04:00
Tyler Goodlet 1a859bc1a2 kraken: drop now unused `rapidfuzz` import 2023-09-22 15:53:03 -04:00
Tyler Goodlet e9887cb611 binance: parse .expiry separate from .venue
Apparently they're being massive cucks and changing their futes pair
schema again now adding a `NEXT_QUARTER` contract type which we weren't
handling at all. The good news is falling back to an old symcache file
would have prevented this from crashing.

Add a new `FutesPair.expiry: str` field so that `.bs_fqme` can simply
call it during the summary FQME-ification output rendering..
2023-09-22 14:48:50 -04:00
Tyler Goodlet 0ba75df877 Add `data.match_from_pairs` fuzzy symbology scanner
A helper for scanning a "pairs table" that most backends should expose
as part of their (internal) symbology set using `rapidfuzz` over
a `dict[str, Struct]` input table.

Also expose the `data.types.Struct` at the subpkg top level.
2023-09-22 13:54:25 -04:00
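For reference, the core of such a scan with `rapidfuzz` over a string-keyed pairs table boils down to something like the following (a sketch, not the exact `match_from_pairs()` signature):

```python
from rapidfuzz import fuzz, process


def fuzzy_match_pairs(
    query: str,
    pairs: dict[str, object],  # fqme-ish key -> backend pair struct
    limit: int = 10,
    score_cutoff: float = 50,
) -> dict[str, object]:
    # score the query against the table's string keys; for a sequence
    # input rapidfuzz returns (matched_key, score, index) tuples
    matches = process.extract(
        query,
        list(pairs),
        scorer=fuzz.WRatio,
        limit=limit,
        score_cutoff=score_cutoff,
    )
    return {key: pairs[key] for key, _score, _idx in matches}
```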
Tyler Goodlet a97a0ced8c kraken: switch to `rapidfuzz` API 2023-09-21 19:49:10 -04:00
Tyler Goodlet 46d83e9ca9 deribit: switch to `rapidfuzz` API 2023-09-21 19:44:27 -04:00
Tyler Goodlet d4833eba21 binance: switch to `rapidfuzz` API 2023-09-21 19:44:06 -04:00
Tyler Goodlet 14f124164a ib: fix mktpair fallback table: use `Client._con2mkts` to translate..
Previously we were assuming that the `Client._contracts: dict[str,
Contract]` would suffice for this directly, which obviously isn't true XD

Also,
- add the `NSE` venue to skip list.
- use new `rapidfuzz.process.extract()` lib API.
- only get con deats for non null exchange names..
2023-09-21 19:14:44 -04:00
Tyler Goodlet 05959eaf70 Always ensure symcache mkt pair entry is valid type 2023-09-19 15:56:47 -04:00
Tyler Goodlet 30d55fdb27 Add `--pdb` support to `piker search` 2023-09-13 12:13:56 -04:00
Tyler Goodlet 2c88ebe697 binance: implement `Client.search_symbols()` using `rapidfuzz`
Change the deats inside the method and have the `brokerd` search task
just call it as needed since we already do internal mem caching on the
lookup table.

APIs changed so we need to make some tweaks as per:
- https://github.com/maxbachmann/RapidFuzz/blob/main/api_differences.md
- https://github.com/maxbachmann/RapidFuzz/blob/main/api_differences.md#differences-in-processor-functions

The main motivation is to get better wheel pkging support (for nixos),
better impl in C++, and a more simply licensed dep.
2023-09-13 11:59:51 -04:00
Tyler Goodlet 4a180019f0 Swap out `fuzzywuzzy` for the newer `rapidfuzz` lib 2023-09-13 11:57:02 -04:00
Tyler Goodlet 4d274b16d8 Attempt to generate .uis deps free lock file
Since `poetry` doesn't seem to actually mark optional group deps as such
in the lock file (!?) manually generate a `poetry.lock` with the
optional groups commented out in the `pyproject.toml`; this is all in
an attempt to make `poetry2nix` build without any UI components
which seem to be the source of much frustration without hacking on p2n
and/or nixpkgs repos..

Further drop all the old build system files including the
setup.py and requirements.txt files.
2023-09-07 14:17:01 -04:00
Tyler Goodlet 481618cc51 kraken: handle ws live trading API symbology
Of course I missed this first try but, we need to use the ws market pair
symbology set (since apparently kraken loves redundancy at least 3 times
XD) when processing transactions that arrive from live clears since it's
an entirely different `LTC/EUR` style key than the `XLTCEUR` style
delivered from the ReST eps..

As part of this:
- add `Client._altnames`, `._wsnames` as `dict[str, Pair]` tables,
  leaving the `._AssetPairs` table as is keyed by the "xname"s.
- Change `Pair.respname: str` -> `.xname` since these keys all just seem
  to have a weird 'X' prefix.
- do the appropriately keyed pair table lookup via a new `api_name_set:
  str` passed to `norm_trade_records()` and set it correctly in the ws live txn
  handler task.
2023-08-30 16:32:34 -04:00
Tyler Goodlet 778d26067d ib.api: return None on manual quote timeout 2023-08-30 14:56:11 -04:00
Tyler Goodlet e54c3dc523 TOSQUASH 9005335e18: pack empty dict on no flow 2023-08-29 08:45:45 -04:00
Tyler Goodlet ad37cfbe2f Break backfill loop on `end_dt < start_dt` 2023-08-29 08:43:14 -04:00
Tyler Goodlet 8369f557c7 TOSQUASH 2e6b1330f375c310ad: adding .dev / .ui groups 2023-08-25 18:07:15 -04:00
Tyler Goodlet 461764419d ib.api: always key `._contracts` with '.ib' suffix
So that pos msgs from the ems are correctly loaded..
2023-08-25 17:47:30 -04:00
Tyler Goodlet 1002ce1e10 kraken.broker: one last fix to `Position.cumsize`.. 2023-08-25 17:47:30 -04:00
Tyler Goodlet 546049b62f data.history: handle venue-closure gap edge case 2023-08-25 17:47:30 -04:00
Tyler Goodlet e9517cdb02 ib: handle commodity-contract trade records 2023-08-25 17:47:30 -04:00
Tyler Goodlet 2b8cd031e8 By default silence `Client.get_quote()` timeout errors unless caller specifies to raise 2023-08-25 17:47:30 -04:00
Tyler Goodlet 2e6b1330f3 Add `.ui` and `.dev` deps groups via `poetry` Bo
Since we eventually want to allow users to minimally deploy `pikerd`
service-tree (aka distributed cross host) installs, we need to offer
a "headless" deps group. Really this is just the core dep set minus Qt
and some aux search related libs (for now).

The new `.dev` group is for adding hacking and testing tools including
`xonsh` since that will eventually be our REPL of choice more than
likely B)

Oh, and fix the namespace path (was a typo) for the `ledger` CLI and
of course bump the lock file.
2023-08-25 17:47:28 -04:00
Tyler Goodlet 995d1534b6 Drop hard redraws for now 2023-08-25 13:33:59 -04:00
Tyler Goodlet 9d31941d42 order_mode: embedded `Order` maybe be in dict form.. 2023-08-25 13:33:59 -04:00
Tyler Goodlet a695208992 brokers._daemon: drop question-comment about enabling feed module 2023-08-25 13:33:59 -04:00
Tyler Goodlet fed89562dc Import crash handler mngr from `piker.toolz` 2023-08-25 13:33:59 -04:00
Tyler Goodlet 9005335e18 ib: pack empty `dict` on no flow entry 2023-08-25 13:33:59 -04:00
Tyler Goodlet c3f8b089be Drop `.service._ahab` from storage cli runtime mods 2023-08-25 13:33:59 -04:00
Tyler Goodlet 0068119a6d ib: use `asyncio.wait_for()` on ticker first quote; on 3.11 input coros are not allowed.. 2023-08-25 13:33:59 -04:00
Tyler Goodlet 94540ce1cf Pin tomlkit as a path dep for now 2023-08-25 13:13:29 -04:00
Tyler Goodlet ea9a5e524c Factor prefer wheels deps into new `ahot_overrides`
Makes it easier to pass the overrides to multiple p2n functions (like
hopefully `.mkPoetryEnv`). Also, add some commented attempts at using
`mkPoetryEnv` and todo list for "why", remove the `poetry` CLI main
point from the pyproject.toml, bump the poetry lock file.
2023-08-17 15:56:28 -04:00
Tyler Goodlet 6b22024570 MVP get us working fully on nixos
NB: for now this is linking to a presumed local clone of the
`poetry2nix` repo since part of fixing what was adjusted here needs to
be patched upstream, which means hackin on the p2n repo in tandem B)

Since there are some dependency build issues we need
to tweak the following to get baseline `nix develop` working:
- drop `python-levenshtein` (required by `fuzzywuzzy[speedup]`) for now
  since the overlay and/or wheel install needs to be properly figured
  out.
- build `pyqt5` from src for the moment (since `preferWheel` doesn't
  seem to be workin?) despite it taking forever XD
- add in the `flake.lock` file.
2023-08-16 12:19:00 -04:00
Tyler Goodlet 847cb7740c Drop `marketstore` mod import from CLIs loader 2023-08-16 12:15:49 -04:00
Tyler Goodlet 84dd0ae4ce Bump `msgspec`, `polars` versions and add CLI script eps 2023-08-16 08:07:35 -04:00
Tyler Goodlet 6b90e2e3ee Factor and gen per-dep overrides via "fancy" `.extend()`
As per the hot tip from the edgecases.md,
https://github.com/nix-community/poetry2nix/blob/master/docs/edgecases.md#modulenotfounderror-no-module-named-packagename

Factor all the (mostly `setuptools`) overrides into
a `pypkgs-build-requirements` set and `.extend()` in any `preferWheel`
additions (`polars`, `pyqt`, etc.) before passing to
`mkPoetryApplication(overrides=<it>)`.

Add a buncha todos for improving the poetry2nix pkging including:
- adding the override requirements to the json file for all our deps
  in the `pypkgs-build-requirement` set.
- maybe propose docs for the edgecases.md to show how to do the auto-gen
  set (via func) AND extend with further overrides like `preferWheel`?
- task to support `polars` build from src (by copying `cryptography`
  stuff) instead of only from a wheel?
- get pyqt5 building from wheel since it seems to be taking forever from
  src..
- get pyqt6 working in general - going to require taking stuff from
  nixpkgs and applying it in the overrides of p2n.
2023-08-15 12:40:01 -04:00
Tyler Goodlet 482ad1cc83 Add `prompt-toolkit` for full `xonsh` feats 2023-08-14 13:10:23 -04:00
Tyler Goodlet 6e8d07852c Pkg with `poetry`, `poetry2nix` and a `flake.nix` 2023-08-14 11:36:34 -04:00
Tyler Goodlet 4aa04e1c8e Add note about broadcast when no `.symbol` found 2023-08-11 14:52:10 -04:00
Tyler Goodlet c5ed6e6ac4 Facepalm: remove now unused `CostModel` idea.. 2023-08-11 13:34:23 -04:00
Tyler Goodlet 077d9bf1d2 Better commenting around order-mode error block 2023-08-10 12:41:53 -04:00
Tyler Goodlet 78178c2fb7 Add example mtr prober from `mtrpacket`
Started rejigging example code from this example to use more modern
`asyncio` APIs:
https://github.com/matt-kimball/mtr-packet-python/blob/master/examples/trace-concurrent.py

Relates to #330
2023-08-10 11:49:09 -04:00
Tyler Goodlet f66a1f8b23 ib: relay submission errors, allow adhoc mkt overrides
This is a tricky edge case we weren't handling prior; an example is
submitting a limit order with a price tick precision which mismatches
that supported (probably bc IB reported the wrong one..) and IB responds
immediately with an error event (via a special code..) but doesn't
include any `Trade` object(s) nor details beyond the `reqid`. So, we
have to do a little reverse EMS order lookup on our own and ideally
indicate to the requester which order failed and *why*.

To enable this we,
- create a `flows: OrderDialogs` instance and pass it to most order/event relay
  tasks, particularly ensuring we update ASAP in `handle_order_requests()`
  such that any successful submit has an `Ack` recorded in the flow.
- on such errors lookup the `.symbol` / `Order` from the `flow` and
  respond back to the EMS with as many details as possible about the
  prior msg history.
- always explicitly relay `error` events which don't fall into the
  sensible filtered set and wrap in
  a `BrokerdError.broker_details['flow']: dict` snapshot for the EMS.
- in `symbols.get_mkt_info()` support adhoc lookup for `MktPair` inputs
  and when defined we re-construct with those inputs; in this case we do
  this for a first mkt: `'vtgn.nasdaq'`..
2023-08-10 10:31:00 -04:00
Tyler Goodlet 562d027ee6 Relay brokerd errors to client side, correctly..
Turns out we were expecting/processing `Status(resp='error')` msgs not
`BrokerdError` (i guess bc the latter was only really being used in initial
`brokerd` msg responses and not for relay of actual provider clearing
engine failures?) and the case block match / logic wasn't really
correct. So this changes a few things:

- always do reverse `oid` lookups from `reqid`s if possible in error msg
  handling case.
- add a new `Error` client-dialog msg (derived from `Status`) which we
  now relay when `brokerd` sends a `BrokerdError` and no prior `Status`
  can be found (when it is we still fill in appropriate fields from the
  backend-error and just send back the last status msg like before).
- try hard to look up the original `Order.symbol: str` for client
  broadcasting trying first using any `Status.req` and failing over to
  embedded `.brokerd_msg` field lookups.
- drop the `Status.name = 'error'` from literal def.
2023-08-09 21:43:38 -04:00
Tyler Goodlet ff2bbd5aca ib: handle order errors via `reqid` lookup
Finally this is a reason to use our new `OrderDialogs` abstraction; on
order submission errors IB doesn't really pass back anything other than
the `orderId` and the reason so we have to conduct our own lookup for
a message to relay to the EMS..

So, for every EMS msg we send, add it to the dialog tracker and then use
the `flows: OrderDialogs` for lookup in the case where we need to relay
said error. Also, include sending a `canceled` status such that the
order won't get stuck as a stale entry in the `emsd`'s own dialog table.
For now we just filter out errors that are unrelated from the stream
since there's always going to be stuff to do with live/history data
queries..
2023-08-07 18:19:35 -04:00
Tyler Goodlet 85a38d057b Factor cumsize sign to var 2023-08-07 10:13:31 -04:00
Tyler Goodlet eba6a77966 Add paper-engine cost simulation support
If a backend declares a top level `get_cost()` (provisional name)
we call it in the paper engine to try and simulate costs according to
the provider's own schedule. For now only `binance` has support (via the
ep def) but ideally we can fill these in incrementally as users start
forward testing on multiple cexes.
2023-08-07 09:55:45 -04:00
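A hedged sketch of how such an optional backend endpoint might be dispatched (the `get_cost()` name is explicitly provisional above, so the param set here is an assumption):

```python
from types import ModuleType


def simulate_cost(
    brokermod: ModuleType,  # the backend's top level py module
    fqme: str,
    size: float,
    price: float,
) -> float:
    # if the backend exposes a `get_cost()` ep use it to estimate
    # clearing fees for a paper fill, otherwise assume zero cost
    get_cost = getattr(brokermod, 'get_cost', None)
    if get_cost is None:
        return 0.0

    # NOTE: the exact param set here is an assumption
    return get_cost(fqme, size=size, price=price)
```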
Tyler Goodlet 5ed8544fd1 Bleh, move `.data.types` back up to top level pkg
Since it's depended on by `.data` stuff as well as pretty much
everything else, makes more sense to expose it as a top level module
(and maybe eventually as a subpkg as we add to it).
2023-08-05 15:57:10 -04:00
Tyler Goodlet 5d86d336f2 Parametrize account names for offline ledger tests 2023-08-03 17:28:08 -04:00
Tyler Goodlet e4ea7d6193 Lul, fix `open_ledger_dfs()` to `yield` when ledger passed in.. 2023-08-03 17:27:26 -04:00
Tyler Goodlet 60751acf85 Officially drop `Position.size` 2023-08-03 16:57:02 -04:00
Tyler Goodlet e9dfd28aac ib: add back `src/dst` parsing for fiat pairs 2023-08-03 16:56:33 -04:00
Tyler Goodlet ae444d1bc7 Add note about `xonsh.main.main()` attempted usage 2023-08-03 13:56:23 -04:00
Tyler Goodlet a51a61090d Drop `virt_cost: str` from df output 2023-08-02 20:42:18 -04:00
Tyler Goodlet 94ebe1e87e Add some new hotkey maps for chart zoom and pane hiding 2023-08-02 20:41:56 -04:00
Tyler Goodlet fff610fa8d Fix `PositionTracker.pane` attr resolve bug.. 2023-08-02 17:33:02 -04:00
Tyler Goodlet 7ecf2bd89a Guess exit transaction costs for BEP prediction
In order to attempt giving the user a realistic prediction for a BEP per
txn we need to model what the (worst case) anticipated exit txn costs
will be during the equivalent, paired entries. For now we use a simple
"symmetric cost prediction" model where we assume the exit costs will be
simply the same as the enter txn costs and thus on every entry we apply
2x the enter txn cost; on exit txns we then unroll these predictions by
keeping a cumulative sum of the cost-per-unit and reversing the charges
based on applying that mean to the current exit txn's size. Once
unrolled we apply the actual exit txn cost received from the
broker-provider.
2023-08-02 17:25:23 -04:00
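A toy sketch of that symmetric model (illustrative only; not the actual `Position`/BEP code and the bookkeeping field names are made up):

```python
from dataclasses import dataclass


@dataclass
class SymmetricCostModel:
    '''
    Toy sketch only: book the actual entry fee plus an equal predicted
    exit fee on entries, then on exits reverse that prediction (mean
    predicted-cost-per-unit * exit size) and book the real exit fee.

    '''
    pos_size: float = 0.0
    total_cost: float = 0.0           # running BEP-relevant cost estimate
    predicted_exit_cost: float = 0.0  # cumulative predicted exit fees

    def on_entry(self, size: float, entry_fee: float) -> None:
        self.total_cost += 2 * entry_fee  # entry fee + symmetric prediction
        self.predicted_exit_cost += entry_fee
        self.pos_size += size

    def on_exit(self, size: float, exit_fee: float) -> None:
        per_unit: float = (
            self.predicted_exit_cost / self.pos_size
            if self.pos_size else 0.0
        )
        unrolled: float = per_unit * size
        self.total_cost += exit_fee - unrolled  # swap prediction for real fee
        self.predicted_exit_cost -= unrolled
        self.pos_size -= size
```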
Tyler Goodlet 1e3a4ca36d Drop commented, now deprecated edge case notes 🏄 2023-08-01 15:49:56 -04:00
Tyler Goodlet b6a705852d Handle txn costs in BEP, factor enter/exit blocks and df row assignments B) 2023-08-01 15:42:30 -04:00
Tyler Goodlet 29bab02c64 Pass sync code flag in flex report processor 2023-08-01 09:12:52 -04:00
Tyler Goodlet 85ae180f8f Factor df conversion into lone routine: `ledger_to_dfs()` 2023-07-31 17:48:03 -04:00
Tyler Goodlet 5d24b5defb Swap branch order for enter/exit
Also fix bug since we always need to reset cum_pos_pnl
after an `exit_to_zero` case.
2023-07-31 17:32:49 -04:00
Tyler Goodlet 100be54641 data.history: add TODO for non-zero epochs and some typing 2023-07-31 17:21:11 -04:00
Tyler Goodlet a088ebf5e2 Use inf row/col repr for debugging atm 2023-07-31 17:18:28 -04:00
Tyler Goodlet b37a447595 Implement PPU and BEP and inject the ledger frames
Since it appears impossible to compute the recurrence relations for PPU
(at least sanely) without using embedded `polars.List` elements, this
instead just implements price-per-unit and break-even-price calcs
doing a plain-ol-for-loop imperative approach with logic branching.

I burned wayy too much time trying to implement this in some kinda
`polars` DF native way without luck, so hopefully someone smarter can
come in and make it work at some point xD

Resolves a related bullet in #515
2023-07-31 16:01:31 -04:00
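For reference, the imperative PPU recurrence being described is essentially the classic weighted-average entry price loop; a standalone sketch (not the actual frame-injecting code):

```python
def ppu(clears: list[tuple[float, float]]) -> float:
    '''
    Price-per-unit (weighted avg entry price) over a datetime sorted
    sequence of (size, price) clears.

    '''
    cumsize: float = 0.0
    _ppu: float = 0.0

    for size, price in clears:
        new_size = cumsize + size

        if new_size == 0:
            _ppu = 0.0  # flat, reset

        elif cumsize == 0 or (cumsize > 0) != (new_size > 0):
            # opening a position or flipping sides: the remaining
            # units were all acquired at this clear's price
            _ppu = price

        elif abs(new_size) > abs(cumsize):
            # adding to the existing side: re-weight the mean
            _ppu = (
                (_ppu * abs(cumsize)) + (price * abs(size))
            ) / abs(new_size)

        # else: reducing on the same side leaves the ppu unchanged

        cumsize = new_size

    return _ppu
```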
Tyler Goodlet b1edaf0639 First draft position accounting with `polars`
Took a little while to get right using declarative style but it's
finally workin and seems (mostly) correct B)

Computes the ppu (price per unit) using the PnL since last
net-zero-cumsize (aka the pnl from open to close) and uses it to calc
the pnl-per-exit trade (using the ppu).

Next up, bep (break even price), both per position and maybe since
ledger start or an arbitrary ref point?
2023-07-29 21:02:59 -04:00
Tyler Goodlet 385561276b Add gap detection into the `store ldshm` cmd 2023-07-26 15:45:55 -04:00
Tyler Goodlet d94ab9d5b2 order_mode: Only send cancels for dialogs that still exist 2023-07-26 15:43:48 -04:00
Tyler Goodlet 08e8990fe3 Do single `ShmArray.array` read on zero-time filtering 2023-07-26 15:41:04 -04:00
Tyler Goodlet 2c6ae5d994 Drop the `gap_dt_unit: str` column
We don't need it in `detect_time_gaps()` since doing straight up
datetime diffs in `polars` already has a humanized `str` representation
but with higher precision like '2d 1h 24m 1s' B)
2023-07-26 15:37:59 -04:00
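For instance, a plain datetime-column diff in `polars` already yields a `Duration` dtype whose frame repr is humanized (something like `1d 17h 30m`); a minimal sketch:

```python
from datetime import datetime

import polars as pl

df = pl.DataFrame({
    'time': [
        datetime(2023, 7, 24, 9, 30),
        datetime(2023, 7, 24, 16, 0),
        datetime(2023, 7, 26, 9, 30),
    ],
})

# successive-row diffs come back as a `Duration` dtype column which
# polars renders in a compact humanized form in the frame repr
gaps = df.with_columns(
    (pl.col('time') - pl.col('time').shift(1)).alias('gap'),
)
print(gaps)
```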
Tyler Goodlet f1289ccce2 ib: Oof, right need to create ledger entries too.. 2023-07-26 14:55:17 -04:00
Tyler Goodlet 7802febd20 Backfill history gaps with pre-gap close 2023-07-26 12:56:06 -04:00
Tyler Goodlet 64329d44e7 Flip `tractor.breakpoint()`s to new `.pause()` 2023-07-26 12:48:19 -04:00
Tyler Goodlet bd0af7a4c0 kucoin: facepalm, use correct pair fields for price/size ticks 2023-07-26 12:44:41 -04:00
Tyler Goodlet 618c461bfb binance: always upper case venue and expiry tokens
Since we need `.get_mkt_info()` to remain symmetric across calls with
different fqme inputs, and binance generally uses upper case for it's
symbology keys, we always upper the FQME related tokens for both
symcaching and general search purposes.

Also don't set `_atype` on mkt pairs since it should be fully handled
via the dst asset loading in `Client._cache_pairs()`.
2023-07-26 12:44:29 -04:00
Tyler Goodlet c00cf41541 kraken: `norm_trade()` now must accept an optional symcache 2023-07-26 12:40:58 -04:00
Tyler Goodlet 4436342d33 Change ui stuff to use new `Position.cumsize` attr name 2023-07-26 12:40:09 -04:00
Tyler Goodlet 58cf7ce10e Add `norm_trade()` ep to validator warnings 2023-07-26 12:39:08 -04:00
Tyler Goodlet 9fbb75ce7f Remove piker.trionics; already factored into `tractor` 2023-07-26 12:38:25 -04:00
Tyler Goodlet d0f72bf269 Wrap symcache loading into `.from_scratch()`
Since we need it both when explicitly reloading **and**
whenever either the file or data in the file doesn't exist.
2023-07-26 12:27:26 -04:00
Tyler Goodlet 188508575a Utilize the new `_mktmap_table` input in paper engine
In cases where a brokerd backend doesn't yet support a symcache we need
to do manual `.get_mkt_info()` queries and stash them in a table that we
pass in for the mkt failover lookup to `Account.update_from_ledger()`.
Set the `PaperBoi._mkts` to this table for use on real-time ledger
writes in `.fake_fill()`.
2023-07-26 12:21:27 -04:00
Tyler Goodlet bebc817d19 Partition ledger data frames by `bs_mktid`
Since some backends are going to have the issue of supporting multiple
venues for a given "position distinguishing instrument", like IB, we
can't presume that every `Position` can be uniquely keyed by
a `MktPair.fqme` (since the venue part can change and still be the same
"pair" relationship in accounting terms) so instead presume the
"backend system's market id" is the unique key (at least for now)
instead of the fqme.

More practically we use the `bs_mktid` to groupby-partition the per
pair DFs from the trades ledger and attempt to scan-match the input
fqme (in `ledger disect` cli) against the fqme column values set.
2023-07-26 12:13:54 -04:00
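The groupby-partition plus fqme scan-match amounts to roughly the following `polars` usage (a sketch, column names per the commit):

```python
import polars as pl


def dfs_by_mktid(ledger_df: pl.DataFrame) -> dict[str, pl.DataFrame]:
    # split the trades-ledger frame into per-instrument frames keyed
    # by the backend's market id column
    return {
        part['bs_mktid'][0]: part
        for part in ledger_df.partition_by('bs_mktid')
    }


def scan_match_fqme(
    parts: dict[str, pl.DataFrame],
    fqme: str,
) -> pl.DataFrame | None:
    # scan-match an input fqme against each partition's `fqme` column
    for _bs_mktid, df in parts.items():
        if fqme in df['fqme'].unique().to_list():
            return df
    return None
```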
Tyler Goodlet 1d35747fbf Always clear `Position._events` in `.from_msg()`..
Not sure why i ever thought it would work otherwise but, obviously if
you're replicating a `Position` from a **summary** (IPC) msg we
need to wipe any prior clearing events from the events history..
The main use for this loading mechanism is precisely if you don't have
local access to the txn ledger and need to represent a position from
a summary 🤦

Also, never bother with ledger file fqme "rewriting" if the backend has
no symcache support (yet) since obviously there's then no symbol set to
search for a better key xD
2023-07-26 12:10:26 -04:00
Tyler Goodlet e344bdbf1b ib: rework trade handling, take ib position sizes as gospel
Instead of casting to `dict`s and rewriting event names in the
`push_tradesies()` handler, be transparent with event names (also
defining piker-equivalent mappings for them in a redefined `_statuses`
table) and types, passing them directly to the `deliver_trade_events()`
task and generally
make event handler blocks much easier to grok with type annotations. To
deal with the causality dilemma of *when to emit a pos msg* due to
needing all of `execDetailsEvent, commissionReportEvent, positionEvent`
but having no guarantee on received order, we implement a small task
`clears: dict[Contract, tuple[Position, Fill]]` tracker table and (as
before) only emit a position event once the "cost" can be accessed for
the fill. We now ALWAYS relay any `Position` update from IB directly to
ensure (at least) the cumsize is correct (since it appears we still have
ongoing issues with computing this correctly via `.accounting.Position`
updates..).

Further related adjustments:
- load (fiat) balances and startup positions into a new `IbAcnt` struct.
- change `update_and_audit_pos_msg()` to blindly forward ib position
  event updates for **the size** since it should always be
  considered the true gospel for accounting!
  - drop ib-has-no-position handling since it should never occur..
- move `update_ledger_from_api_trades()` to the `.ledger` submod and do
  processing of ib_insync `Fill` related objects instead of dict-casted
  versions instead doing the casting in
  `api_trades_to_ledger_entries()`.
- `norm_trade()`: add `symcache.mktmaps[bs_mktid] = mkt` in since it
  turns out API (and sometimes FLEX) records don't contain the listing
  exchange/venue thus making it impossible to map an asset pair in the
  "position sense" (i.e. over multiple venues: qqq.nasdaq, qqq.arca,
  qqq.directedge) to an fqme when doing offline ledger processing;
  instead use frickin IB's internal int-id so there's no discrepancy.
  - also much better handle futures mkt trade flex records such that
    parsed `MktPair.fqme` is consistent.
2023-07-25 20:28:54 -04:00
Tyler Goodlet b33be86b2f ib: fill out contract tables in `.get_mkt_info()`
Since getting a global symcache result from the API is basically
impossible, we ad-hoc fill out the needed client tables on demand per
client code queries to the mkt info EP.

Also, use `unpack_fqme()` in fqme (search) pattern parser instead of
hacky `str.partition()`.
2023-07-25 16:43:08 -04:00
Tyler Goodlet 50b221f788 ib: rework client-internal contract caching
Add new `Client` attr tables to better stash `Contract` lookup results
normally mapped from some input FQME:

- `._contracts: dict[str, Contract]` for any input pattern (fqme).
- `._cons: dict[str, Contract] = {}` for the `.conId: int` inputs.
- `_cons2mkts: bidict[Contract, MktPair]` for mapping back and forth
  between ib and piker internal pair types.

Further,
- type out as many ib_insync internal types as possible mostly for
  contract related objects.
- change `Client.trades()` -> `.get_fills()` and return directly the
  result from `IB.fill()`.
2023-07-25 16:42:15 -04:00
Tyler Goodlet 897c20bd4a Moar `.accounting` tweaks
- start flipping over internals to `Position.cumsize`
- allow passing in a `_mktmap_table` to `Account.update_from_ledger()`
  for cases where the caller wants to per-call-dynamically insert the
  `MktPair` via a one-off table (cough IB).
- use `polars.from_dicts()` in `.calc.open_ledger_dfs()`. and wrap the
  whole func in a new `toolz.open_crash_handler()`.
2023-07-21 23:48:53 -04:00
Tyler Goodlet 759ebe71e9 Allow disabling symcache load via kwarg as well 2023-07-20 15:27:46 -04:00
Tyler Goodlet e88913e1f3 .data._pathops: drop profiler imports, fix some naming to appease `ruff` 2023-07-20 15:27:22 -04:00
Tyler Goodlet 5e7916a0df Start `piker.toolz` subpkg for all our tooling B)
Since there's a growing list of top level mods which are more or less
utils/tools for working with the runtime; begin to move them into a new
subpkg starting with a new `.toolz.debug`.

Start with,
- a new `open_crash_handler()` for doing breakpoints around blocks that
  might error.
- move in what was `piker._profile` into `.toolz.profile` and adjust all
  importing appropriately.
2023-07-20 15:23:01 -04:00
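The core of such a crash handler is just a context manager that drops into a post-mortem debugger on any unhandled exception; a minimal sketch (the real `.toolz.debug` version presumably builds on `pdbp`/`tractor` tooling):

```python
from contextlib import contextmanager
import pdb
import traceback


@contextmanager
def open_crash_handler():
    # wrap any block that "might error" and break into a post-mortem
    # debugger instead of letting the exception tear things down
    try:
        yield
    except BaseException:
        traceback.print_exc()
        pdb.post_mortem()
        raise


# usage:
# with open_crash_handler():
#     do_something_sketchy()
```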
Tyler Goodlet 5eb310cac9 ib: more fixes to try and get positioning correct..
Define and bind in the `tx_sort()` routine to be used by
`open_trade_ledger()` when datetime sorting trade records.

Further deats:
- always use the IB reported position size (since apparently our ledger
  based accounting is getting rekt on occasion..).
- better ib pos msg formatting when there's mismatches with the piker
  equivalent.
- never emit zero-size pos msgs (in terms of strict ib pos sizing) since
  when there's piker ledger sizing errors we'll send the wrong thing to
  the ems and its clients..
2023-07-19 16:46:36 -04:00
Tyler Goodlet 8a10cbf6ab Change `Position.clearsdict()` -> `.clearsitems()`
Since apparently rendering to dict from a sorted generator func clearly
doesn't preserve the order when using a `dict`-comprehension.. Further,
there's really no reason to strictly return a `dict`. Adjust
`.calc.ppu()` to make the return value instead a `list[tuple[str,
dict]]`; this results in the current df cumsum values matching the
original impl and the existing `binance.paper` unit tests now passing XD

Other details that fix a variety of nonsense..
- adjust all `.clearsitems()` consumers to the new list output.
- use `str(pendulum.now())` in `Position.from_msg()` since adding
  multiples with an `unknown` str will obviously discard them, facepalm.
- fix `.calc.ppu()` to NOT short circuit when `accum_size` is 0; it's
  been causing all sorts of incorrect size outputs in the clearing
  table.. lel, this is what fixed the unit test!
2023-07-18 21:00:19 -04:00
Tyler Goodlet fe78277948 ib: add new `.symbols` sub-mod
Move in the obvious things XD
- all the specially defined venue tables from `.api`.
- some parser funcs: `con2fqme()` and `parse_patt2fqme()`.
- the `get_mkt_info()` and `open_symbol_search()` broker eps.
- the `_asset_type_map` table which converts to `.accounting.Asset`
  compat keys for each contract/security.
2023-07-17 18:30:11 -04:00
Tyler Goodlet 9e87b6515b ib: be symcache compat by using bypass attr
Since there's no easy way to support it yet, we bypass symbology caching
for now and instead allow the `ib.ledger` routines to fill in
`MktPair` and `Asset` entries ad-hoc for the purposes of txn ledger
processing.
2023-07-17 17:31:34 -04:00
Tyler Goodlet a05a82486d Log a warning on no symcache support in a backend 2023-07-17 17:31:12 -04:00
Tyler Goodlet e4731eff10 Fix `Position.expiry == None` bug 2023-07-17 17:27:22 -04:00
Tyler Goodlet dfa13afe22 Allow backends to "bypass" symcache loading
Some backends like `ib` don't have an obvious (nor practical) way to
easily download the entire symbology set available from all its mkt
venues. For such backends loading might require a non-std approach (like
using the contract search from some input mkt-key set) and can't be
expected to necessarily be supported out of the box. As such, allow
annotating a broker sub-pkg module with a `_no_symcache: bool = True`
attr which will make `open_symcache()` yield early with an empty
`SymbologyCache` instance for use by the caller to fill in the mkt and
assets tables in whatever ad-hoc way desired.
2023-07-17 17:12:40 -04:00
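In practice a backend opts out with a single module-level attr and the loader just checks for it before doing any real work; a hand-wavy sketch (not the exact `open_symcache()` body):

```python
from types import ModuleType


def should_skip_symcache(brokermod: ModuleType) -> bool:
    # a backend opts out of symcache generation by annotating its
    # broker sub-pkg with `_no_symcache: bool = True`
    return getattr(brokermod, '_no_symcache', False)

# inside `open_symcache()` the loader would then (roughly) yield an
# empty `SymbologyCache` early so the caller can fill in the mkt and
# asset tables in whatever ad-hoc way it needs.
```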
Tyler Goodlet 912f1bc635 .kraken: start new `.symbols` submod and move symcache and search stuff there 2023-07-17 16:20:11 -04:00
Tyler Goodlet 82fd785646 Adjust default `[binance]` config to use paper and disable testnets 2023-07-17 14:58:15 -04:00
Tyler Goodlet 71d0097dc7 Switch to `Position.cumsize` in tracker and order mode mods 2023-07-17 13:51:30 -04:00
Tyler Goodlet 8fb667686f Open symcaches as part of per-backend search spawning 2023-07-17 01:24:45 -04:00
Tyler Goodlet 2dab0e2e56 Expose `.data._symcache` stuff at subpkg toplevel
The list is `open_symcache()`, `get_symcache()`, `SymbologyCache`, and
`Struct` which seems more or less fine to make part of the public
namespace. Also, make `._timeseries.t_unit` an instance of literal to make
`ruff` happy?
2023-07-17 01:20:52 -04:00
Tyler Goodlet e8025d0985 .data.types.Struct: by default include non-members from `.to_dict()`.. 2023-07-16 21:32:36 -04:00
Tyler Goodlet 430309b5dc .accounting: type `Transaction.etype` as a `Literal`
Start working out the set of possible "txn types" we want to define in
a simple set.

Relates to #510
2023-07-16 21:22:15 -04:00
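Concretely this is just a `typing.Literal` annotation on the txn struct; a sketch with mostly hypothetical member values (only `'clear'` is confirmed as the default elsewhere in this log):

```python
from dataclasses import dataclass
from typing import Literal

TxnType = Literal[
    'clear',     # a normal fill/clearing txn (the confirmed default)
    'transfer',  # assumption
    'split',     # assumption
]


@dataclass
class Transaction:  # sketch only; the real type is a piker `Struct`
    tid: str
    etype: TxnType = 'clear'
```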
Tyler Goodlet 4c5507301e kraken: be symcache compatible!
This was more involved then expected but on the bright side, is going to
help drive a more general `Account` update/processing/loading API
providing for all the high-level txn update methods needed for any
backend to generically update the participant's account *state* via
an input ledger/txn set B)

Key changes to enable `SymbologyCache` compat:
- adjust `Client` pairs / assets lookup tables to include a duplicate
  keying of all assets and "asset pairs" using the (chitty) default key
  set that kraken ships which is NOT the `.altname` nor `.wsname` keys;
  the "default ReST response keys" i guess?
  - `._AssetPairs` and `._Assets` are *these ^* rest-key sets delivered
    verbatim from the endpoint responses,
  - `._pairs` and `._assets` the equivalent value-sets keyed by piker
    style FQME-looking keys (now provided via the new
    `.kraken.symbols.Pair.bs_fqme: str` and the delivered `'altname'`
    field (for assets) respectively).
- re-implement `.get_assets()` and `.get_mkt_pairs()` to appropriately
  delegate to internal methods and these new (multi-keyed) tables to
  deliver the cacheable set of symbology info.
- adjust `.feed.get_mkt_info()` to handle parsing of both fqme-style and
  wtv(-the-shit-stupid) kraken key set a caller passes via
  a key-matches-first-table-style-scan after pre-processing the
  input `fqme: str`; also do the `Asset` lookups from the new
  `Pair.bs_dst/src_asset: str` fields which should always map correctly
  to an internal asset entry delivered by `Client.get_assets()`.

Dirty impl deatz:
- add new `.kraken.symbols` and move the newly refined `Pair` there.
- add `.kraken.ledger` and move in the factored out ledger processing
  routines.
- also move out what was the `has_pp()` and large chung of nested-ish
  looking acnt-position verification logic blocks into a new
  `verify_balances()` B)
2023-07-16 21:21:53 -04:00
Tyler Goodlet a5821ae9b1 binance: spec `.ns_path: str` on pair structs
Provides for fully isolated symbology caching in a flat TOML table
without special case handling B)

Also explicitly define `.bs_mktid: str` which is now used by the
symcache to key-index the backend specific pair set and thus provides
for round-trip marshalling without special knowledge of any backend
schema.
2023-07-15 17:37:56 -04:00
Tyler Goodlet d794afcb5c Adjust `.clearing._paper_engine.norm_trade()` to new sig
Always expect a `tid: str` and `pair: dict[str, Struct]` for aiding with
txn struct packing B)
2023-07-15 17:35:41 -04:00
Tyler Goodlet 3d20490ee5 Move cum-calcs to `open_ledger_dfs()`, always parse `str`->`Datetime`
Previously the cum-size calc(s) was in the `disect` CLI but it's better
stuffed into the backing df converter. Also, ensure that whenever
a `dt` field is type-detected as a `str` we parse it to `DateTime`.
2023-07-15 15:43:09 -04:00
Tyler Goodlet 69314e9fca Passthrough all **kwargs `Struct.to_dict()`
Since for symcache-ing we don't want to write non-member fields we need
to allow passing the appropriate flag; i hate frickin inheritance XD
2023-07-14 20:29:05 -04:00
Tyler Goodlet b9fec091ca Allow accounting (file) dir override via kwarg
For testing (and probably hacking) it's handy to be able to point
somewhere other the default user-config dir for a ledger or account file
to test offline processing apis from `.accounting` subsystems. For now
it's a private optional named-arg: `_fp: Path` and it's obviously passed
down into the `load_account()` config getter.

Note that in the non-paper account case `Account.update_from_ledger()`
will use the ledger's `.symcache` and `.iter_txns()` method to acquire
actual txn-structs to compute positions.
2023-07-14 20:17:24 -04:00
Tyler Goodlet 803f4a6354 Add first account cumsize test; known to fail Bo 2023-07-14 17:54:13 -04:00
Tyler Goodlet 494b3faa9b Formalize transaction normalizer func signature
Since each broker backend generally needs to define a specific
field-name-schema to determine the exact instantiation arguments to
`Transaction`, we generally need each backend to define an endpoint
function to conduct this transaction from an input `dict[str, Any]`
received either directly from provided ledger APIs or from previously
stored `.accounting._ledger` saved trades ledger TOML files.

To accomplish this we now require backends to declare a new routine:

```python
def norm_trade(
    tid: str,  # the uuid for the transaction
    txdict: dict,  # the input record-dict

    # a table of mkt-symbols to backend
    # struct objects which define the (meta-data) for the backend specific
    # venue's symbology
    pairs: dict[str, Struct],

) -> Transaction:
    ...
```

which implements that record conversion (at least for trades)
and can thus be used in `TransactionLedger.iter_txns()` which requires
"some code" to implement the loading from a serialization format (aka
the input `dict` record) to our local `Transaction` struct, normally
also using a `Pair`-struct table defined (and maybe previously cached)
by the specific backend such our (normalization layer's) `MktPair`'s
fields can be set.

For the case of our `.clearing._paper_engine` we def the routine to
simply extract the exact same fields from the TOML ledger records that
we previously had written (to it) and define it in that module.

Also, we always pass `pairs=SymbologyCache.pairs: dict[str, Struct]` on
norm trade calls such that offline ledger and accounting processing
clients can use a previously cached symbology set without having to
necessarily start the async-actor runtime to query the actual backend API
if the data has already been saved locally on the system B)

Other related:
- always passthrough kwargs in overridden `.to_dict()` method.
- only do fqme related trade record field name rewrites/names when
  operating on a paper ledger; normally a backend's records don't
  contain these.
- fix `pendulum.DateTime` type annots.
- just deliver `Transaction`s from `.iter_txns()`
2023-07-14 16:13:04 -04:00
Tyler Goodlet da206f5242 Store "namespace path" for each backend's pair struct
Since some backends have multiple venues keyed by the same
symbol-pair-name, AND often the market/symbol info for those different
market-venues is entirely different (cough binance), we will have to
(sometimes) save the struct namespace-path as str for lookup when
deserializing a symcache to object form.

NOTE: this change is reliant on the following `tractor` dev commit which
improves support for constructing a path from object-instance:
bee2c36072

Add a backend(-wide) default struct path stored as a (TOML top level)
field `pair_ns_path: str` in the serialized `dict`-table as well as
allow for a per pair-`Struct` value optionally defined on each type def;
the global is only used if none was defined per struct via a `ns_path:
str`.

Further deats:
- don't write non-struct-member fields to dict for TOML file cache.
- always keep object forms, well as objects (in tables).. XD
- factor cache loading from `dict` (and thus from TOML or presumably any
  other interchange form) into a `@classmethod` constructor method B)
- allow choosing the subtable for `.search()` by name.
2023-07-13 17:58:50 -04:00
Tyler Goodlet 7f4884a6d9 data.types.Struct.to_dict(): discard non-member struct by default 2023-07-12 12:33:30 -04:00
Tyler Goodlet c30d8ac9ba ib: port to new `.accounting` APIs
Still kinda borked since i don't think there actually is a (per venue)
"get-all-symbologies" endpoint.. so we're likely gonna have to figure
out either how to hack it or provide a bypass in ledger processing?

Deatz:
- use new `Account` type name, rename endpoint vars to match and
  obviously use any new method name(s).
- mask out split ratio handling for now.
- async open the symcache prior to ledger processing (again, for now).
- drop passing `Transaction.sym`.
- fix parser set for dt-sorter since apparently 2022 and back had
  a `date` field instead?
2023-07-12 08:45:55 -04:00
Tyler Goodlet 8b9494281d Don't verify the history step period for now in `tsdb_backfill()` 2023-07-12 08:45:55 -04:00
Tyler Goodlet 06c581bfab Async enter/open the symcache in paper engine
Since we don't want to be doing a `trio.run()` from async code (being
already in the `tractor` runtime and all); for now just put a top level
block wrapping async enter until we figure out to embed it (likely)
inside `open_account()` and pass the ref to `open_trade_ledger()`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 108e8c7082 .accounting: expose `open_account()` at subsys pkg level 2023-07-12 08:45:55 -04:00
Tyler Goodlet ddcdbce1a2 Use `acnt` instead of `table` for ref name B) 2023-07-12 08:45:55 -04:00
Tyler Goodlet 14d5b3c963 Be pedantic in `open_trade_ledger()` from sync code
Require passing an explicit flag when entering from sync code with an
extra super duper explicit runtime error to indicate how to use in the
async case as well!

Also, do rewrites of both the fqme (from best match in the symcache
according to search - the worst case) or from the `bs_mktid` field if
it exists (should only be true for paper engine accounts) AND the
`bs_mktid` for paper accounts if it seems un-fully-qualified.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 8330b36e58 Use/return explicit `symcache` var name in sync case 2023-07-12 08:45:55 -04:00
Tyler Goodlet 243821aab1 Bleh! Ok make `open_symcache()` an `@acm`..
Turns out, in order to make things much cleaner from inside-the-runtime usage
we do probably want to just make the manager async so that we can
generate the cache on demand from async UI inits as well as daemon
actors.. So change to that and instead make `get_symcache()` the helper
that should ONLY be called from sync funcs / offline ledger processing
utils!
2023-07-12 08:45:55 -04:00
Tyler Goodlet 4123c97139 Add symcache support to paper eng
- add the `.norm_trade()` required ep (for symcache offline loading)
- port to new `Account` apis (which now require a symcache input)
2023-07-12 08:45:55 -04:00
Tyler Goodlet 55c3d617fa brokers.core: open cached client before hitting `.get_mkt_info()` 2023-07-12 08:45:55 -04:00
Tyler Goodlet a2c6749112 binance.feed: use `Client.get_assets()` for mkt pairs
Instead of constructing them (previously manually) in `.get_mkt_info()` ep,
just call `.get_assets()` and do key lookups for assets to hand directly
to the `.src/dst` of `MktPair`.

Refine fqme input parsing to match:
- adjust parsing logic to only use `unpack_fqme()` on the input fqme
  token.
- set `.mkt_mode: str` to the derivs venue when an expiry token is
  detected in the fqme.
- pass the parsed `expiry: str` to `Client.exch_info()` to ensure
  a deriv venue (table) is used for pair lookup.
- skip any "DEFI" venue or other unknown asset type cases (since binance
  doesn't seem to define some assets anywhere?).

Also, just use the `Client._pairs` unified table for search input since
the first call to `.exch_info()` won't necessarily contain the most
up-to-date state whereas `._pairs` always will.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 19be8348e5 binance.api: add venue qualified symcache support
Meaning we add the `Client.get_assets()` and `.get_mkt_pairs()` methods.
Also implement `.exch_info()` to take in a `expiry: str` to detect
whether to look up a derivative venue instead of spot.

In support of all this we now explicitly key all assets (via
`._cache_pairs()` during the populate of `._venue2assets` sub-tables)
with their `.bs_dst_asset: str` value to ensure, for ex., a spot
`BTCUSDT` has a distinct value from any futures contracts with the same
`Pair.symbol: str` value!

Also, ensure we always create a `brokers.toml` (from template) if DNE
and binance is the user's first used backend XD
2023-07-12 08:45:55 -04:00
Tyler Goodlet 3c84ac326a binance.venues: add pair-type specific asset keying
Add `bs_src/dst_asset: str` properties which provide for unique keying
into futures vs. spot venues by offering a `.venue: str` property which,
for non-spot delivers normally an expiry suffix (eg. '.PERP') and for
spot just delivers the bare chain-token key.

This enables keying multiple venues with the same mkt pairs easily in
a global flat key->pair table needed as part of supporting a symcache.
2023-07-12 08:45:55 -04:00
Tyler Goodlet c9681d0aa2 .nativedb: ignore an `expired/` subdir 2023-07-12 08:45:55 -04:00
Tyler Goodlet 8f40e522ef Add handy `DiffDump`ing for our `.types.Struct`
So you can do a `Struct1` - `Struct2` and we dump a little diff `list`
of tuples for anal on the REPL B)

Prolly can be broken out into its own micro-patch?
2023-07-12 08:45:55 -04:00
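The idea is field-wise struct subtraction yielding a little list of `(field, ours, theirs)` tuples; a plain `dataclass` sketch of the same trick (the real `.types.Struct` is `msgspec` based):

```python
from dataclasses import dataclass, fields


@dataclass
class DiffableStruct:
    def __sub__(
        self,
        other: 'DiffableStruct',
    ) -> list[tuple[str, object, object]]:
        # a little diff `list` of (field, ours, theirs) tuples for any
        # field values which don't match
        return [
            (f.name, getattr(self, f.name), getattr(other, f.name))
            for f in fields(self)
            if getattr(self, f.name) != getattr(other, f.name)
        ]
```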
Tyler Goodlet 87185cf8bb Drop `config` get/set/del apis.. 2023-07-12 08:45:55 -04:00
Tyler Goodlet ff267890d1 Change cached-client hit msg to runtime level 2023-07-12 08:45:55 -04:00
Tyler Goodlet 749401e500 .accounting: expose new names at pkg top level 2023-07-12 08:45:55 -04:00
Tyler Goodlet 3704e2ceac Call `open_ledger_dfs()` for `disect` sub-cmd
Drop all the old `polars` (groupby + agg related) mangling to get a df
per fqme by delegating to the new routine and add in the `.cumsum()`ing
(per frame) as a first start on computing pps using dfs instead of
python dicts + loops as in `ppu()`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 8f1983fd8e Move df loading into `calc.load_ledger_dfs()`
To isolate it from the ledger/account mods and bc it is actually for
doing (eventual) position calcs / anal, might as well put it in this
mod. Add in the old-masked `ensure_state()` method content in case we
want to use it later for testing. Also tighten up the parser loading
inside `dyn_parse_to_dt()`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet f5d4f58610 `Account` api update and refine
Rename `open_pps()` -> `open_account()` for obvious reasons as well as
expect a bit tighter integration with `SymbologyCache` and consequently
`LedgerTransaction` in order to drop `Transaction.sym: MktPair`
dependence when compiling / allocating new `Position`s from a ledger.

Also we drop a bunch of  prior attrs and do some cleaning,
- `Position.first_clear_dt` we no longer sort during insert.
- `._clears` now replaces by `._events` table.
- drop the now masked `.ensure_state()` method (eventually moved to
  `.calc` submod for maybe-later-use).
- drop `.sym=` from all remaining txns init calls.
- clean out the `Position.add_clear()` method and only add the provided
  txn directly to the `._events` table.

Improve some `Account` docs and interface:
- fill out the main type descr.
- add the backend broker modules as `Account.mod` allowing to drop
  `.brokername` as input and instead wrap as a `@property`.
- make `.update_from_trans()` now a new `.update_from_ledger()` and
  expect either of a `TransactionLedger` (user-dict) or a dict of txns;
  in the latter case if we have not been also passed a symcache as input
  then runtime error since the symcache is necessary to allocate
  positions.
  - also, delegate to `TransactionLedger.iter_txns()` instead of
    a manual datetime sorted iter-loop.
  - drop all the clears datetime don't-insert-if-earlier-then-first
    logic.
- rename `.to_toml()` -> `.prep_toml()`.
- drop old `PpTable` alias.
- rename `load_pps_from_ledger()` -> `load_account_from_ledger()` and
  make it only deliver the account instance and also move out all the
  `polars.DataFrame` related stuff (to `.calc`).

And tweak some account clears table formatting,
- store datetimes as TOML native equivs.
- drop `be_price` fixing.
- obvsly drop `.ensure_state()` call to pps.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 0e94e89373 Finally, just drop `Transaction.sym`
Turns out we don't really need it directly for most "txn processing" AND
if we do it's usually related to some `Account`-ing related calcs; which
means we can instead just rely on the new `SymbologyCache` lookup to get
it when needed. So, basically just get rid of it and rely instead on the
`.fqme` to be the god-key to getting `MktPair` info (from the cache).

Further, extend the `TransactionLedger` to contain much more info on the
pertaining backend:
- `.mod` mapping to the (pkg) py mod.
- `.filepath` pointing to the actual ledger TOML file.
- `_symcache` for doing any needed asset or mkt lookup as mentioned
  above.
- rename `.iter_trans()` -> `.iter_txns()` and allow passing in
  a symcache or using the init provided one.
  - rename `.to_trans()` similarly.
- delegate paper account txn processing to the `.clearing._paper_engine`
  mod's `norm_trade()` (and expect this similarly from other backends!)
- use new `SymbologyCache.search()` to find the best but
  un-fully-qualified fqme for a given `txdict` being processed when
  writing a config (aka always try to expand to the most verbose `.fqme`
  possible).
- add a `rewrite: bool` control to `open_trade_ledger()`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 520414a096 Oof, fix `.size` tick msg encode.. 2023-07-12 08:45:55 -04:00
Tyler Goodlet ddc5f2b441 Use `MktPair.from_msg()` in symcache
Since we now fully support interchange-as-dict-msg, use the msg codec
API and drop manual `Asset` unpacking. Also, wrap `get_symcache()` in
a `pdbp` crash handler block for now B)
2023-07-12 08:45:55 -04:00
Tyler Goodlet 3994fd8384 Also handle `Decimal` interchange in `MktPair` msg-ification 2023-07-12 08:45:55 -04:00
Tyler Goodlet 13f231b926 Decode cached mkts and assets back to structs B)
As part of loading the cache we can now fill the asset sub-tables:
`.mktmaps` and `.assets` with their deserialized struct instances!
In theory this might be possible for the backend defined `Pair` structs
as well but we need to figure out probably an endpoint to offer
the conversion?

Also, add a `SymbologyCache.search()` which allows sync code to scan the
existing (known via cache) symbol set just like how async code can use the
(much slower) `open_symbol_search()` ctx endpoint 💥
2023-07-12 08:45:55 -04:00
Tyler Goodlet 309b91676d Finally, support full `MktPair` + `Asset` msgs
Previously we weren't necessarily serializing mkt pairs (for IPC msging)
entirely bc the assets `.src/.dst` were being sent just by their
str-names. This now properly supports fully serializing `Asset`s as
`dict`-msgs such that use of `MktPair.to_dict()` can be transmitted over
`tractor.MsgStream`s and deserialized entirely back to struct from on
the receiver end.

Deats:
- implement `Asset.to_dict()` and `.from_msg()`
- adjust `MktPair.to_dict()` and `.from_msg()` to use these methods.
  - drop all the hacky "if .src/.dst is str" handling.
- add better `MktPair.from_fqme()` input handling for expiry and venue;
  ensure that either can be extracted from passed fqme *and* if so they
  are also popped from any duplicate passed in `**kwargs`.
2023-07-12 08:45:55 -04:00
Tyler Goodlet c8c28df62f Much (much) better symbology cache refinements
For starters rename the cache type to `SymbologyCache` and fill out its
interface to include an (async) `.reload()` which can be used to populate
the in-mem asset-table sets such that any tractor-runtime task can
actually directly call it. Use a symcache file name schema of
`_cache/<backend>.symcache.toml`.

Dirtier deatz:
- make `.open_symcache()` a `@cm` such that it can be used from sync code
  and will actually call `trio.run()` in the case where it needs to do a
  full (re)load; also don't write on exit only on reloads.
- add `.get_symcache()` a simple non-ctx-mngr reader which again can
  mostly be called willy-nilly from sync code without the full runtime
  being up (but likely will only work if symcache files already exist
  for the backend).
2023-07-12 08:45:55 -04:00
Tyler Goodlet 005023275e Add a symbology cache subsys
New mod is `.data._symcache` and it needs backend clients to declare
`Client.get_assets()` and `.get_mkt_pairs()` to generate the cache files
which now go in the config dir under `_cache/`.
2023-07-12 08:45:55 -04:00
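The backend-facing contract here is small: a `Client` just needs to deliver the two symbology tables; a hedged sketch of the expected shape (async-ness and return types are assumptions):

```python
from typing import Any, Protocol


class SymcacheableClient(Protocol):
    '''
    Sketch of what the symcache generator expects from a backend
    `Client`; async-ness and return types are assumptions.

    '''
    async def get_assets(self) -> dict[str, Any]:
        ...

    async def get_mkt_pairs(self) -> dict[str, Any]:
        ...
```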
Tyler Goodlet 05af2b3e64 Rework `.accounting.Position` calcs to prep for `polars`
We're probably going to move to implementing all accounting using
`polars.DataFrame` and friends and thus this rejig preps for a much more
"stateless" implementation of our `Position` type and its internal
pos-accounting metrics: `ppu` and `cumsize`.

Summary:
- wrt to `._pos.Position`:
  - rename `.size`/`.accum_size` to `.cumsize` to be more in line
    with `polars.DataFrame.cumsum()`.
  - make `Position.expiry` delegate to the underlying `.mkt: MktPair`
    handling (hopefully) all edge cases..
  - change over to a new `._events: dict[str, Transaction]` in prep
    for #510 (and friends) and enforce a new `Transaction.etype: str`
    which is by default `clear`.
  - add `.iter_by_type()` which iterates, filters and sorts the
    entries in `._events` from above.
  - add `Position.clearsdict()` which returns the dict-ified and
    datetime-sorted table which can more-or-less be stored in the
    toml account file.
  - add `.minimized_clears()` a new (and close) version of the old
    method which always grabs at least one clear before
    a position-side-polarity-change.
  - mask-drop `.ensure_state()` since there is no more `.size`/`.price`
    state vars (per se) as we always re-calc the ppu and cumsize from
    the clears records on every read.
  - `.add_clear` no longer does bisec insorting since all sorting is
    done on position properties *reads*.
  - move the PPU (price per unit) calculator to a new `.accounting.calcs`
    as well as add in the `iter_by_dt()` clearing transaction sorted
    iterator.
    - also make some fixes to this to handle both lists of `Transaction`
      as well as `dict`s as before.

- start rename of `PpTable` -> `Account` and make a note about adding
  a `.balances` table.
- always `float()` the transaction size/price values since it seems if
  they get processed as `tomlkit.Integer` there's some suuper weird
  double negative on read-then-write to the clears table?
  - something like `cumsize = -1` -> `cumsize = --1` !?!?
- make `load_pps_from_ledger()` work again but now includes some very
  very first draft `polars` df processing from a transaction ledger.
  - use this from the `accounting.cli.disect` subcmd which is also in
    *super early draft* mode ;)
- obviously as mentioned in the `Position` section, add the new `.calcs`
  module with a `.ppu()` calculator func B)
2023-07-12 08:45:55 -04:00
Tyler Goodlet 745c144314 ib.feed: handle fiat (forex) pairs with `Asset`
Also finally adds full `FeedInit` and `MktPair` support for this backend
by handling:
- all "currency" fields for each `Contract` by constructing
  and `Asset` and setting the `MktPair.src` with `.atype='fiat'`.
- always render the `MktPair.src` name in the `.fqme` for fiat pairs
  (aka forex) but never for other instruments.
2023-07-12 08:45:55 -04:00
Tyler Goodlet 10ebc855e4 ib: fully handle `MktPair.src` and `.dst` in ledger loading
In an effort to properly support fiat pairs (aka forex) as well as more
generally insert a fully-qualified `MktPair` in for the
`Transaction.sym`. Note that there's a bit of special handling for API
`Contract`s-as-dict records vs. flex-report-from-xml equivalents.
2023-07-12 08:45:55 -04:00
Tyler Goodlet c0929c042a ib: fix `Client.trades()` return type annot 2023-07-12 08:45:55 -04:00
Tyler Goodlet 9748b22d34 Always include the src asset for (parquet file names) for fiat pairs 2023-07-12 08:45:55 -04:00
Tyler Goodlet 3ff9fb3e10 clearing._messages: add todo to drop the `BrokedPosition` msg 2023-07-12 08:45:55 -04:00
Tyler Goodlet 75f01e22d7 Drop `Position.expiry`, delegate to `.mkt: MktPair`
No point having duplicate data when we already stash the `expiry` on the
mkt info type and can just read it (and cast to `datetime` obj).

Further this fixes a regression caused by converting `._clears` to
a list by adding a `._events: dict[str, Transaction]` which prevents
double entering transactions based on checking the events table for the
existing id.. Further add a sanity check that all events are popped
(for now) after serializing the clearing table for the toml account
file.

In the longer run, ideally we don't have the separate sequences ._clears
and ._events by choosing a better data structure (sorted unique set of
mkt events) maybe a specially used `polars.DataFrame` (which we kinda
need eventually anyway)?
2023-07-12 08:45:55 -04:00
Tyler Goodlet 87d6115954 Add src asset name ignore via `MktPair._fqme_without_src: bool` 2023-07-12 08:45:55 -04:00
Tyler Goodlet c780164f69 Fix test to use new `load_account()` location 2023-07-12 08:45:55 -04:00
Tyler Goodlet 482403c887 Expose `.accounting.load_account()` 2023-07-12 08:45:55 -04:00
Ebisu 2ac8191722 discrepancy between live/testnet urls 2023-07-12 01:49:17 +02:00
Tyler Goodlet 35af5f11fa binance: Map `use_testnet` to off by default (since data feeds) 2023-06-30 20:20:14 -04:00
Tyler Goodlet a7ec59862a binance: Map `use_testnet` to off by default (since data feeds) 2023-06-30 20:17:02 -04:00
Tyler Goodlet ad4847cbac basic bot: iter latest ticks first to decide new submission price per quote 2023-06-27 15:47:23 -04:00
Tyler Goodlet da07685e8b Use `iterticks()` to filter to clears, get first price manually before submit.. 2023-06-27 15:47:23 -04:00
Tyler Goodlet f1eb76d29f Drop prints, break on latest clear match tick 2023-06-27 15:47:23 -04:00
Tyler Goodlet 46b22958f0 basic bot: add real-time price trailer (task) that keeps bid price 0.0005% below last clear value 2023-06-27 15:47:23 -04:00
Tyler Goodlet 57399e4f5d basic bot: drop registry addr and connect to default pikerd 2023-06-27 15:47:23 -04:00
Tyler Goodlet 5690595064 basic bot: set unix fileformat, add KBI handling to cancel order submission 2023-06-27 15:47:23 -04:00
Tyler Goodlet 63a6c6efde Add a super basic "order bot" example B)
Shows how to boot the piker runtime, submit an order to the ems, cancel
said order right away. NOTE, this uses piker's built in paper engine but
can be easily tweaked to use a live backend at the user's whim.
2023-06-27 15:47:23 -04:00
Tyler Goodlet f2fff5a5fa ib._ledger: move trades transaction processing helpers into new module 2023-06-27 15:47:05 -04:00
Tyler Goodlet c0d575c009 Change `Position.clears` -> `._clears[list[dict]]`
When you look at usage we don't end up really needing clear entries to
be keyed by their `Transaction.tid`, instead it's much more important to
ensure the time sorted order of trade-clearing transactions such that
position properties such as the size and ppu are calculated correctly.
Thus, this instead converts the `.clears` table to a list of clear
dict entries which makes a bunch of things simpler:
- object form `Position._clears` compared to the offline TOML schema
  (saved in account files) is now data-structure-symmetrical.
- `Position.add_clear()` now uses `bisect.insort()` to
  datetime-field-sort-insert into the *list* which saves having to worry
  about sorting on every sequence *read*.

Further deats:
- adjust `.accounting._ledger.iter_by_dt()` to expect an input `list`.
- change `Position.iter_clears()` to iterate only the clearing entry
  dicts without yielding a key/tid; no more tuples.
- drop `Position.to_dict()` since parent `Struct` already implements it.
2023-06-27 15:47:05 -04:00
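For reference, a minimal sketch of the time-sorted clear insertion described above, assuming python>=3.10 for `bisect.insort()`'s `key=` kwarg; the module-level `_clears`/`_events` names are illustrative stand-ins, not the actual `Position` fields:

```python
from bisect import insort
from datetime import datetime, timezone

_clears: list[dict] = []       # stand-in for `Position._clears`
_events: dict[str, dict] = {}  # tid -> clear, guards against double entry

def add_clear(clear: dict) -> bool:
    tid: str = clear['tid']
    if tid in _events:
        # transaction already entered once, skip it
        return False
    _events[tid] = clear
    # datetime-field-sort-insert so reads never need to re-sort
    insort(_clears, clear, key=lambda entry: entry['dt'])
    return True

add_clear({'tid': 'a1', 'dt': datetime(2023, 6, 1, tzinfo=timezone.utc)})
add_clear({'tid': 'a0', 'dt': datetime(2023, 5, 1, tzinfo=timezone.utc)})
assert [c['tid'] for c in _clears] == ['a0', 'a1']
```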
Tyler Goodlet 66d402b80e Load ledger records into `pl.DataFrame` for `disect`-tion 2023-06-27 15:47:05 -04:00
Tyler Goodlet ea270d3396 .data.ticktools: add reverse flag, better docs
Since it may be handy to get the latest ticks first, add a `reverse:
bool` to `iterticks()` and add some cleaner logic and a proper doc
string to `frame_ticks()`.
2023-06-27 15:47:05 -04:00
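A rough sketch of the `reverse` flag idea (the tick-type filter set and exact signature here are assumptions, not the real `.data.ticktools` api):

```python
def iterticks(
    quote: dict,
    types: tuple[str, ...] = ('trade', 'dark_trade'),
    reverse: bool = False,
):
    '''Yield only ticks matching `types`, optionally newest-first.'''
    ticks = quote.get('ticks', [])
    if reverse:
        ticks = reversed(ticks)
    for tick in ticks:
        if tick.get('type') in types:
            yield tick

quote = {'ticks': [
    {'type': 'bid', 'price': 99.9},
    {'type': 'trade', 'price': 100.0},
    {'type': 'trade', 'price': 100.1},
]}
# latest clearing tick first
assert next(iterticks(quote, reverse=True))['price'] == 100.1
```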
Tyler Goodlet 621634b5a2 Move `frame_ticks()` and tick-type defs into `.ticktools` 2023-06-27 15:47:05 -04:00
Tyler Goodlet eacc59226f rename `.data._normalize` -> `.ticktools` 2023-06-27 15:47:05 -04:00
Tyler Goodlet 7b4472e37e data._sampling.frame_ticks(): slight rework to generalize 2023-06-27 15:47:05 -04:00
Tyler Goodlet 4a8eafabb8 Never key error on bad flow pops.. 2023-06-27 13:48:03 -04:00
Tyler Goodlet e7e7919a43 Ensure paper engine logger is `piker.clearing` instance.. 2023-06-27 13:48:03 -04:00
Tyler Goodlet cdf9105d0d Export `Flume` and `Feed` from `piker.data` 2023-06-27 13:48:03 -04:00
Tyler Goodlet 49e67d5f36 Always add a paper (account) entry to order mode init
Allows for tracking paper engine orders despite the ems not necessarily
being opened by the current order mode instance (UI) in "paper"
execution mode; useful for tracking bots/strats running against the same
EMS daemon.
2023-06-27 13:48:03 -04:00
Tyler Goodlet 85fa87fe6f Update the `_emsd_main()` doc task tree layout 2023-06-27 13:48:03 -04:00
Tyler Goodlet 249b091c2f binance: better bad account in order request error msg 2023-06-27 13:48:03 -04:00
Tyler Goodlet 2d291bd2c3 ib: expose `.broker.norm_trade_records()` from pkg 2023-06-27 13:42:08 -04:00
Tyler Goodlet cf1f4bed75 Move `.accounting` related config loaders to subpkg
Like you'd think:
- `load_ledger()` -> `._ledger`
- `load_account()` -> `._pos`

Also fixup the old `load_pps_from_ledger()` and expose it from a new
`.accounting.cli.disect` cli cmd for trying to figure out why pp calcs
are totally mucked on stupid ib..
2023-06-27 13:42:08 -04:00
Tyler Goodlet 032976b118 view_mode: add in one missing debug_print block.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet cbe364cb62 Add explicit `piker.cli` logger name for `pikerd` 2023-06-27 13:42:08 -04:00
Tyler Goodlet efd52e8ce3 kraken: always insert ticks `list`, only append if vlm 2023-06-27 13:42:08 -04:00
Tyler Goodlet 3be1d610e0 ib: expose trade EP as `open_trade_dialog()`
Should be the final production backend to switch this over B)

Also tidy up the `update_and_audit_msgs()` validator to log vs. raise
when `validate: bool` is set; turn it off by default to avoid raises
until we figure out wtf is up with ib ledger processing or wtv..
2023-06-27 13:42:08 -04:00
Tyler Goodlet b1ef549276 Move `broker_init()` into `brokers._daemon`
We might as well start standardizing on `brokerd` init such that it can
be used more generally in client code (such as the `.accounting.cli`
stuff).

Deats of `broker_init()` impl:
- loads appropriate py pkg module,
- reads any declared `__enable_modules__: list[str]` which will be
  passed to `tractor.ActorNursery.start_actor(enabled_modules=<this>)`
- loads the `.brokers._daemon._setup_persistent_brokerd` ep.

As expected the `accounting.cli` tools can now import directly from this
new location and use the common daemon fixture definition.
2023-06-27 13:42:08 -04:00
Tyler Goodlet f7f76137ca kraken: handle `.spot.kraken` new-style FQMEs
After #520 we've moved to better supporting explicit venues for cex
backends which is important where a provider offers both spot and
derivatives markets (kraken, binance, kucoin) and we need to distinguish
which is being traded given a common asset pair (eg. BTC/USDT). So, make
this work for `kraken`'s brokerd such that requests and pre-existing
live orders are (un)packed to/from EMS messaging form.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 3fcf44aa52 Skip marketstore docker tests, we're gonna drop it.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet d9708e28c8 kraken: drop `OHLC.ticks` field and just inject to quote before send 2023-06-27 13:42:08 -04:00
Tyler Goodlet 65f2549d90 binance: more explicit var naming in `OHLC` parse loop 2023-06-27 13:42:08 -04:00
Tyler Goodlet a4d16ec6ab Fix ems tests: add `.spot` venue token to fqme 2023-06-27 13:42:08 -04:00
Tyler Goodlet d82173dd50 Always use fully expanded FQME throughout `.clearing`
Since crypto backends now also may expand an FQME like `xbteur.kraken`
-> `xbteur.spot.kraken` (by filling in the venue token), we need to use
this identifier when looking up per-market order dialogs or submitting
new requests. The simple fix is to look up that expanded form from
the `Feed.flumes` table which is always keyed by the `MktPair.fqme:
str` - the expanded form.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 5d930175e4 kraken: use new `OrderDialogs` type, handle `.spot`
Drop the older `dict[str, ChainMap]` prototype we had since the new
`OrderDialogs` built-out while adding `binance` order support is more
refined and general. Also, handle new and now expect `.spot` venue token
in FQMEs since kraken too has futes markets that we'll likely want to
support eventually.
2023-06-27 13:42:08 -04:00
Tyler Goodlet e4c1003aba Hard code futes venue(s) for now in `brokerd`.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet 676b00592d Don't allow `Client.api()` testnet queries by default, require explicit flag set 2023-06-27 13:42:08 -04:00
Tyler Goodlet 9970fa89ee Drop per-venue request methods from `Client`
Use dynamic lookups instead by mapping to the correct http session and
endpoint paths using the venue routing/mode key. This lets us simplify
from 3 methods down to a single `Client._api()` which either can be
passed the `venue: str` explicitly by the caller (as is needed in the
`._cache_pairs()` case) or falls back to the client's current
`.mkt_mode: str` setting B)

Deatz:
- add a couple more tables to cover all authed-endpoint use cases:
  - `.venue2configkey: dict[str, str]` which maps the venue key to the
    `brokers.toml` subsection which should be used for auth creds and
    testnet config.
  - `.confkey2venuekeys: dict[str, list[str]]` which maps each config
    subsection key to the list of venue name keys for doing config to
    venues lookup.
- always build out testnet sessions for spot and futes venues (though if
  not set the sessions obviously won't ever be used).
- add and use new `config.ConfigurationError` custom exceptions when api
  creds are missing.
- rename `action: str` to `method: str` in `._api()` since it's the
  proper REST term and switch what was "method" to be `endpoint: str`.
- mask out `.get_positions()` since we can get that from a user stream
  wss request (and are doing that).
- (in theory) import and use spot testnet url as necessary.
2023-06-27 13:42:08 -04:00
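An illustrative-only sketch of that single venue-routed request method; the table contents, session type and endpoint handling are assumptions rather than the real `binance.Client`:

```python
class Client:
    # venue key -> `brokers.toml` subsection for auth creds / testnet config
    venue2configkey: dict[str, str] = {'spot': 'spot', 'usdtm': 'futes'}
    # config subsection -> venue name keys (reverse lookup)
    confkey2venuekeys: dict[str, list[str]] = {'spot': ['spot'], 'futes': ['usdtm']}

    def __init__(self, sessions: dict[str, object], mkt_mode: str = 'spot'):
        self._sessions = sessions  # one http session per venue
        self.mkt_mode = mkt_mode

    async def _api(
        self,
        endpoint: str,
        params: dict,
        method: str = 'get',       # proper REST verb
        venue: str | None = None,  # explicit override, eg. for `._cache_pairs()`
    ):
        venue = venue or self.mkt_mode  # fall back to current mkt mode
        sesh = self._sessions[venue]
        # dynamic verb dispatch instead of one request method per venue
        return await getattr(sesh, method)(endpoint, params=params)
```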
Tyler Goodlet fe902c017b Drop `OrderedDict` usage, not necessary in modern python 2023-06-27 13:42:08 -04:00
Tyler Goodlet 77db2fa7c8 Support loading existing live orders for quarterly futes
Do parsing of the `'symbol'` and check for an `_<expiry>` suffix, in
which case we re-format in capitalized FQME style, do the
`Client._pairs[str, Pair]` lookup and then send the `Pair.bs_fqme` in
the `Order.fqme: str` field.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 7f39de59d4 Factor `OrderDialogs` into `.clearing._util`
It's finally a decent little design / interface and definitely can be
used in other backends like `kraken` which rolled something lower level
but more or less the same without a wrapper class.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 5c315ba163 Support live order loading (with caveats)
As you'd expect query and sync the EMS with existing live orders
reported by the market venue by packing them in `Status` msgs and
sending over the order dialog stream before starting the handler tasks.

XXX CAVEAT:
- there appears to be no way (at least on the usdtm market/venue) to
  distinguish between different contracts such as perps vs. the
  quarterlies?
- for now we just assume that the perp is being used since
  there's no indicator otherwise in the 'symbol' field?
- we should maybe open an issue with the futures-connector project to
  see how they'd recommend solving this discrepancy?
2023-06-27 13:42:08 -04:00
Tyler Goodlet dc3ac8de01 binance: support order "modifies" B)
Only a couple tweaks to make this work according to the docs:
https://binance-docs.github.io/apidocs/futures/en/#modify-order-trade

- use a PUT request.
- provide the original user id in a `'origClientOrderId'` msg field.
- don't expect the same oid in the PUT response.

Other broker-mode related details:
- don't call `OrderDialogs.add_msg()` until after the existing check
  since we want to check against the *last* msgs contents not the new
  request.
- ensure we pass the `modify=True` flag in the edit case.
2023-06-27 13:42:08 -04:00
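A hedged sketch of such a "modify" request; the endpoint path, the params beyond `'origClientOrderId'`, and the omitted signing/timestamp step are placeholders here, not verified API details:

```python
import httpx

async def modify_order(
    client: httpx.AsyncClient,
    symbol: str,
    orig_client_oid: str,  # the *original* user order id
    price: float,
    quantity: float,
) -> dict:
    params = {
        'symbol': symbol,
        'origClientOrderId': orig_client_oid,
        'price': price,
        'quantity': quantity,
        # ...signature + timestamp params would be appended here...
    }
    # modifies use a PUT (not POST) per the docs linked above
    resp = await client.put('/fapi/v1/order', params=params)  # path assumed
    resp.raise_for_status()
    # NB: don't expect the same oid back in the PUT response
    return resp.json()
```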
Tyler Goodlet 6eee6ead79 binance: add accounts def to `brokers.toml` template 2023-06-27 13:42:08 -04:00
Tyler Goodlet 572badb4d8 Add full real-time position update support B)
There was one trick: it seems that binance will often send
the account/position update event over the user stream *before* the
actual clearing (aka FILLED) order update event, so make sure we put an
entry in the `dialogs: OrderDialogs` as soon as an order request comes
in such that even if the account update arrives first the
`BrokerdPosition` msg can be relayed without delay / order event order
considerations.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 4eeb232248 kraken: add more type annots in broker codez 2023-06-27 13:42:08 -04:00
Tyler Goodlet 3f555b2f5a Fix user event matching
Was using the wrong key before from our old code (not sure how that
slipped back in.. prolly doing too many git stashes XD), so fix that to
properly match against order update events with 'ORDER_TRADE_UPDATE'.

Also, don't match on the types we want to *cast to*, that's not how
match syntax works (facepalm), so we have to typecast prior to EMS msg
creation / downstream logic.

Further,
- try not bothering with binance's own internal `'orderId'` field
  tracking since they seem to support just using your own user version
  for all ctl endpoints? (thus we only need to track the EMS `.oid`s B)
- log all event update msgs for now.
- pop order dialogs on 'closed' statuses.
- wrap cancel requests in an error handler block since it seems the EMS
  is double sending requests from the client?
2023-06-27 13:42:08 -04:00
Tyler Goodlet 09007cbf08 Do native symbology lookup in order methods, send user oid in cancel requests 2023-06-27 13:42:08 -04:00
Tyler Goodlet 8a06e4d073 Wrap dialog tracking in new `OrderDialogs` type, info log all user stream msgs 2023-06-27 13:42:08 -04:00
Tyler Goodlet 45ded4f2d1 binance: order submission "user id" is not the same as their internal `int` one.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet 60b0b721c5 Split out crypto$ derivs into separate type set
For crypto derivatives (at least futes), yes they are margined, but
generally not around a single unit of vlm (like equities or commodities
futes), so don't pre-set the order mode allocator to use a #unit limit;
$limit is fine.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 249d358737 Woops, fix wss_url lookup depending on venue.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet a9c016ba10 Use `Client._pairs` cross-venue table for orders
Since the request handler task will work concurrently across venues
(spot, futes, margin) we need to be sure that we look up the correct
venue to update the order dialog and this is naturally determined by the
FQME-style symbol in the `BrokerdOrder` msg; the best way to map that
symbol-key to the correct venue/`Pair` is by using said `._pairs:
ChainMap`.

Further, handle limit order errors by catching and relaying back an
error response to the EMS. Fix the "account name" to be `binance.usdtm`
so that we can eventually and explicitly support all venues by name.
2023-06-27 13:42:08 -04:00
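A toy sketch of the cross-venue lookup: a `ChainMap` view over per-venue pair tables keyed by fqme-style symbols (the table shapes are assumed for illustration):

```python
from collections import ChainMap

spot_pairs = {'btcusdt.spot': {'venue': 'spot', 'symbol': 'BTCUSDT'}}
futes_pairs = {'btcusdt.usdtm.perp': {'venue': 'usdtm', 'symbol': 'BTCUSDT'}}

# ~ `Client._pairs`: one read-only view spanning all venue tables
pairs = ChainMap(spot_pairs, futes_pairs)

def venue_for(bs_fqme: str) -> str:
    # resolve the venue to route an order request from its symbol key
    return pairs[bs_fqme]['venue']

assert venue_for('btcusdt.usdtm.perp') == 'usdtm'
assert venue_for('btcusdt.spot') == 'spot'
```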
Tyler Goodlet 98f6d85b65 Make order request methods be venue aware 2023-06-27 13:42:08 -04:00
Tyler Goodlet f36061a149 binance: first draft live order ctl support B)
Untested fully but has ostensibly working position and balance loading
(by delegating entirely to binance's internals for that) and an MVP ems
order request handler; still need to fill out the order status update
task implementation..

Notes:
- uses user data stream for all per account balance and position tracking.
- no support yet for `piker.accounting` position tracking.
- no support yet for full order / position real-time update via user
  stream.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 43494e4994 Add note about expecting client side to cache search domain? 2023-06-27 13:42:08 -04:00
Tyler Goodlet c6d1007e66 Load `Asset`s during exchange info queries
Since we need them for accounting and since we can get them directly
from the usdtm futes `exchangeInfo` ep, just preload all asset info that
we can during initial `Pair` caching. Cache the asset infos inside a new per venue
`Client._venues2assets: dict[str, dict[str, Asset | None]]` and mostly
be pedantic with the spot asset list for now since futes seems much
smaller and doesn't include transaction precision info.

Further:
- load a testnet http session if `binance.use_testnet.futes = true`.
- add testnet support for all non-data endpoints.
- hardcode user stream methods to work for usdtm futes for the moment.
- add logging around order request calls.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 1bb7c9a2e4 Handle pending futes, optional `.filters` add testnet urls 2023-06-27 13:42:08 -04:00
Tyler Goodlet 2ee11f65f0 binance: facepalm, always lower case venue token.. 2023-06-27 13:42:08 -04:00
Tyler Goodlet 0c74a67ee1 Move API urls to `.venues`
Also add a lookup helper for getting addrs by venue:
`get_api_eps()` which returns the rest and wss values.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 9972bd387a kraken: use new `open_trade_dialog()` ep name B) 2023-06-27 13:42:08 -04:00
Tyler Goodlet f792ecf3af binance: use new `open_trade_dialog()` endpoint name B) 2023-06-27 13:42:08 -04:00
Tyler Goodlet 3c89295efe Rename `.binance.schemas` -> `.venues` 2023-06-27 13:42:08 -04:00
Tyler Goodlet 9ff03ba00c kraken: add `<pair>.spot.kraken` fqme interpolation
As just added for binance move to using an explicit `.<venue>.kraken`
style for spot markets which makes the current spot symbology expand to
`<PAIR>.SPOT` from the new `Pair.bs_fqme: str`. Reasons for why are
laid out in the equivalent patch for binance. Obviously this also primes
for supporting kraken's futures venue APIs as well 🏄
https://docs.futures.kraken.com/#introduction

Detalles:
- add `.spot.kraken` parsing to `get_mkt_info()` so that if the venue
  token is not passed by caller we implicitly expand it in.
- change `normalize()` to only return the `quote: dict` not the topic
  key.
- rewrite live feed msg loop to use `match:` syntax B)
2023-06-27 13:42:08 -04:00
Tyler Goodlet 8e03212e40 Always expand FQMEs with .venue and .expiry values
Since there are indeed multiple futures (perp swaps) contracts including
a set with expiry, we need a way to distinguish through search and
`FutesPair` lookup which contract we're requesting. To solve this extend
the `FutesPair` and `SpotPair` to include a `.bs_fqme` field similar to
`MktPair` and key the `Client._pairs: ChainMap`'s backing tables with
these expanded fqmes. For example the perp swap now expands to
`btcusdt.usdtm.perp` which fills in the venue as `'usdtm'` (the
usd-margined futures market) and the expiry as `'perp'` (as before).
This allows distinguishing explicitly from, for ex., coin-margined
contracts which could instead (since we haven't added the support yet)
use fqmes of the sort `btcusdt.<coin>m.perp.binance` thus making it explicit
and obvious which contract is which B)

Further we interpolate the venue token to `spot` for spot markets going
forward, which again makes cex spot markets explicit in symbology; we'll
need to add this as well to other cex backends ;)

Other misc detalles:

- change USD-M futes `MarketType` key to `'usdtm_futes'`.

- add `Pair.bs_fqme: str` for all pair subtypes with particular
  special contract handling for futes including quarterlies, perps and
  the weird "DEFI" ones..

- drop `OHLC.bar_wap` since it's no longer in the default time-series
  schema and we weren't filling it in here anyway..

- `Client._pairs: ChainMap` is now a read-only fqme-re-keyed view into
  the underlying pairs tables (which themselves are ideally keyed
  identically cross-venue) which we populate inside `Client.exch_info()`
  which itself now does concurrent pairs info fetching via a new
  `._cache_pairs()` using a `trio` task per API-venue.

- support klines history query across all venues using same
  `Client.mkt_mode_req[Client.mkt_mode]` style as we're doing for
  `.exch_info()` B)
  - use the venue specific klines history query limits where documented.

- handle new FQME venue / expiry fields inside `get_mkt_info()` ep such
  that again the correct `Client.mkt_mode` is selected based on parsing
  the desired spot vs. derivative contract.

- do venue-specific-WSS-addr lookup based on output from
  `get_mkt_info()`; use usdtm venue WSS addr if a `FutesPair` is loaded.

- set `topic: str` to the `.bs_fqme` value in live feed quotes!

- use `Pair.bs_fqme: str` values for fuzzy-search input set.
2023-06-27 13:42:08 -04:00
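The gist of the expanded-fqme keying, as a simplified sketch (the real formatting rules for quarterlies and the "DEFI" contracts are more involved; this helper is illustrative only):

```python
def expand_fqme(
    pair_symbol: str,     # eg. 'BTCUSDT'
    venue: str = 'spot',  # 'spot', 'usdtm', ...
    expiry: str = '',     # '', 'perp', a quarterly date, ...
) -> str:
    tokens = [pair_symbol.lower(), venue.lower()]
    if expiry:
        tokens.append(expiry.lower())
    return '.'.join(tokens)

# spot markets now explicitly carry the venue token,
assert expand_fqme('BTCUSDT') == 'btcusdt.spot'
# and the usd-margined perp swap keys distinctly from any quarterly:
assert expand_fqme('BTCUSDT', venue='usdtm', expiry='perp') == 'btcusdt.usdtm.perp'
```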
Tyler Goodlet 4c4787ce58 Add a "perpetual_future" mkt info type 2023-06-27 13:42:08 -04:00
Tyler Goodlet e68c55e9bd Switch `Client.mkt_mode` to 'usd_futes' if 'perp' in fqme
The beginning of supporting multi-markets through a common API client.
Change to futes market mode in the client if `.perp.` is matched in the
fqme. Currently the exchange info and live feed ws impl will swap out
for their usd-margin futures market equivalent (endpoints).
2023-06-27 13:42:08 -04:00
Tyler Goodlet dc23f1c9bd binance: fix `FutesPair` to have `.filters`
Not sure why it seemed like futures pairs didn't have this field but add
it to the parent `Pair` type as well as drop the overridden
`.price/size_tick` fields instead doing the same as in spot as well.

Also moves the `MarketType: Literal` (for the `Client.mkt_mode: str`)
and adds a pair type lookup table for exchange info loading.
2023-06-27 13:42:08 -04:00
Tyler Goodlet d173d373cb kraken: raise `SymbolNotFound` on symbology query errors 2023-06-27 13:42:08 -04:00
Tyler Goodlet 8220bd152e Extend `MktPair` doc string to refer to binance pairs 2023-06-27 13:42:08 -04:00
Tyler Goodlet aa49c38d55 Add `binance` section to `brokers.toml` 2023-06-27 13:42:08 -04:00
Tyler Goodlet dac93dd8f8 Support USD-M futes live feeds and exchange info
Add the usd-futes "Pair" type and thus ability to load all exchange
(info for) contracts settled in USDT. Luckily we don't seem to have to
modify anything in the `Client` interface (yet) other then a new
`.mkt_mode: str` which determines which endpoint set to make requests.
Obviously data received from endpoints will likely need diff handling as
per below.

Deats:
- add a bunch more API and WSS top level domains to `.api` with comments
- start a `.binance.schemas` module to house the structs for loading
  different `Pair` subtypes depending on target market: `SpotPair`,
  `FutesPair`, .. etc. and implement required `MktPair` fields on the
  new futes type for compatibility with the clearing layer.
- add `Client.mkt_mode: str` and a method lookup for endpoint parent
  paths depending on market via `.mkt_req: dict`

Also related to live feeds,
- drop `Struct` typecasting instead opting for specific fields both for
  speed and simplicity atm.
- breakout `subscribe()` into module level acm from being embedded
  closure.
- for now swap over the ws feed to be strictly the futes ep (while
  testing) and set the `.mkt_mode = 'usd_futes'`.
- hack in `Client._pairs` to only load `FutesPair`s until we figure out
  whether we want separate `Client` instances per market or not..
2023-06-27 13:42:08 -04:00
Tyler Goodlet ae1c5a0db0 binance: breakout into `feed` and `broker` mods like other backends 2023-06-27 13:42:08 -04:00
Tyler Goodlet ed0c2555fc binance: make pkgmod expose endpoints from coming submods 2023-06-27 13:42:08 -04:00
Tyler Goodlet 26a8638836 binance: convert to subpkg module 2023-06-27 13:42:08 -04:00
Tyler Goodlet e035af2f42 Don't filter out clearing ticks XD 2023-06-27 13:42:08 -04:00
Tyler Goodlet 2dc8ee2b4e Don't bother casting `AggTrade` values for now, just floatify the price/quantity 2023-06-27 13:42:08 -04:00
Tyler Goodlet 06026ec661 Add `binance` section to broker conf template 2023-06-27 13:42:08 -04:00
Guillermo Rodriguez 7c00ca0254 binance: add deposits/withdrawals API support
From @guilledk,
- Drop Decimal quantize for now
- Minor tweaks to trades_dialogue proto
2023-06-27 13:42:08 -04:00
Tyler Goodlet eaaf6e4cc1 kraken: fix `trades2pps()` type sig 2023-06-27 13:42:08 -04:00
Guillermo Rodriguez ef544ba55a Add order status tracking 2023-06-27 13:42:08 -04:00
Tyler Goodlet e85e031df7 Use new config get/set API in `brokercnf` cmd? 2023-06-27 13:42:08 -04:00
Tyler Goodlet e03da40867 Add a config get/set API (from @guilledk) ? 2023-06-27 13:42:08 -04:00
Tyler Goodlet f8af13d010 binance: add `submit_cancel()` & listen key mgmt
Patch again originally from @guilledk and adds a sesh for futures
testnet as well as a order canceller method B)
2023-06-27 13:42:08 -04:00
Tyler Goodlet 1d9c195506 kraken: tidy up paper mode activation comments 2023-06-27 13:42:08 -04:00
Tyler Goodlet d3a504864a Add draft `brokercnf` CLI cmd from @guilledk 2023-06-27 13:42:08 -04:00
Tyler Goodlet f99e8fe7eb binance: dynamically choose the rest method
Instead of having a buncha logic branches for 'get', 'post', etc. just
pass the `method: str` and do a attr lookup on the `asks` sesh.

Also, adjust the `trades_dialogue()` ep to switch to paper mode when no
client API key is detected/loaded.
2023-06-27 13:42:08 -04:00
Guillermo Rodriguez bc4ded2662 binance: start drafting live order ctl endpoints
First draft originally by @guilledk but updated by myself 2 years later
xD. Will crash at runtime but at least has the machinery to setup signed
requests for auth-ed endpoints B)

Also adds a generic `NoSignature` error for when credentials are not
present in `brokers.toml` but user is trying to access auth-ed eps with
the client.
2023-06-27 13:42:08 -04:00
Tyler Goodlet 35359861bb .brokers._daemon: add notes around needed brokerd respawn tech 2023-06-27 13:41:47 -04:00
Tyler Goodlet a44bc4aeb3 binance: pre-#520 fixes for `open_cached_client()` import and struct-field casting 2023-06-27 13:41:47 -04:00
Tyler Goodlet c4277ebd8e .ui._display: filter y-ranging to `_auction_ticks`
Since we only ever want to do incremental y-range calcs based on the
price always skip any tick types emitted by the data daemon which aren't
defined in the fundamental set. Further, toss in a new `debug_n_trade:
bool` toggle which by default turns off all logging and profiler calls;
if you want to do profiling this has to now be adjusted manually!
2023-06-27 13:41:47 -04:00
Tyler Goodlet d42aa60325 Define the flattened "fundamental double auction" emitted tick type set 2023-06-27 13:41:47 -04:00
Tyler Goodlet c57d4b2181 ib: map some tick types particulary "volumeRate" to avoid auto-range issue 2023-06-27 13:41:47 -04:00
Tyler Goodlet 6c10c2f623 order_mode: add comment around `Order` being a dict bug 2023-06-27 13:41:47 -04:00
Tyler Goodlet ad31631a8f Always round order pane $limit to 3 digits 2023-06-27 13:41:47 -04:00
Tyler Goodlet 020a3955d2 Always use fully expanded FQME throughout `.clearing`
Since crypto backends now also may expand an FQME like `xbteur.kraken`
-> `xbteur.spot.kraken` (by filling in the venue token), we need to use
this identifier when looking up per-market order dialogs or submitting
new requests. The simple fix is to look up that expanded form from
the `Feed.flumes` table which is always keyed by the `MktPair.fqme:
str` - the expanded form.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 736bbbff77 view_mode: drop rounding dispersions and "debug print" 2023-06-27 13:41:47 -04:00
Tyler Goodlet 80461e18a5 Use `MktPair.price_tick: Decimal` in dark triggers
This was actually incorrect prior, we were rounding triggered limit
orders with the `.size_tick` value's digits when we should have been
using the `.price_tick` (facepalm). So fix that and compute the rounding
number of digits (as passed to the round(<value>, ndigits=<here>)`
builtin) and store it in the `DarkBook.triggers` tuples so that at
trigger/match time the round call is done *just prior* to msg send to
`brokerd` given the last known live L1 queue price.
2023-06-27 13:41:47 -04:00
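A small sketch of the digit-derivation plus late rounding described above (the helper names are illustrative):

```python
from decimal import Decimal

def price_tick_digits(price_tick: Decimal) -> int:
    # Decimal('0.01') -> exponent -2 -> 2 digits
    return abs(int(price_tick.as_tuple().exponent))

def round_trigger_price(price: float, price_tick: Decimal) -> float:
    # done *just prior* to the msg send to `brokerd`
    return round(price, ndigits=price_tick_digits(price_tick))

assert price_tick_digits(Decimal('0.001')) == 3
assert round_trigger_price(123.45678, Decimal('0.01')) == 123.46
```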
Tyler Goodlet a149e71fb1 ib: pull vnc sockaddrs from brokers.toml config if defined 2023-06-27 13:41:47 -04:00
Tyler Goodlet b28b38afab Fix double cancel bug!
Not sure how this lasted so long without complaint (literally since we
added history 1m OHLC it seems; guess it means most backends are pretty
tolerant XD ) but we've been sending 2 cancels per order (dialog) due to
the mirrored lines on each chart: 1s and 1m. This fixes that by
reworking the `OrderMode` methods to be a bit more sane and less
conflated with the graphics (lines) layer.

Deatz:
- add new methods:
  - `.oids_from_lines()` line -> oid extraction,
  - `.cancel_orders()` which makes the order client cancel requests from
    a `oids: list[str]`.
- re-impl `.cancel_all_orders()` and `.cancel_orders_under_cursor()` to
  use the above methods thus fixing the original bug B)
2023-06-27 13:41:47 -04:00
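The de-duplication at the heart of the fix, sketched with stand-in types (real `OrderMode` lines are graphics objects, not dicts):

```python
def oids_from_lines(lines: list[dict]) -> list[str]:
    # mirrored 1s/1m chart lines share an oid: collect them as a set
    oids: set[str] = set()
    for line in lines:
        if (oid := line.get('oid')):
            oids.add(oid)
    return list(oids)

def cancel_orders(cancel_request, oids: list[str]) -> None:
    for oid in oids:
        cancel_request(oid)

lines = [{'oid': 'abc', 'tf': '1s'}, {'oid': 'abc', 'tf': '1m'}]
sent: list[str] = []
cancel_orders(sent.append, oids_from_lines(lines))
assert sent == ['abc']  # exactly one cancel per dialog now
```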
Tyler Goodlet 84613cd596 clearing._messages: don't require `.symbol` in brokerd side error msgs 2023-06-27 13:41:47 -04:00
Tyler Goodlet 909f880211 ib: prep for passing `Client` to data reset hacker
Since we want to be able to support user-configurable vnc socketaddrs,
this preps for passing the piker client directly into the vnc hacker
routine so that we can (eventually) load and read the ib brokers config
settings into the client and then read those in the `asyncvnc` task
spawner.
2023-06-27 13:41:47 -04:00
Tyler Goodlet bc58e42a74 Refine accounting related config loading routine doc strings 2023-06-27 13:41:47 -04:00
Tyler Goodlet 77dfeb4bf2 Update brokerd msgs with modern type annots, add a "closed" status 2023-06-27 13:41:47 -04:00
Tyler Goodlet f2c1988536 Better empty account console msg styling 2023-06-27 13:41:47 -04:00
Tyler Goodlet 81d5ca9bc2 ib: drop `ibis` import and use fq object imports instead 2023-06-27 13:41:47 -04:00
Tyler Goodlet a4b8fb2d6b Woops, drop paper mode detection on client side.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet e7437cb722 Facepalm, break on first matching trades ep.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet f81ea64cab Drop unused `Union` 2023-06-27 13:41:47 -04:00
Tyler Goodlet 2e878ca52a Don't pass loglevel to trade dialog endpoint
It's been getting setup in the `brokerd` daemon-actor spawn task for
a while now and worker tasks already get a ref to that global log
instance so they don't need to care (in data or trading) task spawn
endpoints.

Also move to the new `open_trade_dialog()` naming for working broker
backends B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 6b2e85e4b3 Add type-annots to sampler subscription method internals 2023-06-27 13:41:47 -04:00
Tyler Goodlet 6a1c49be4e view_mode: handle duplicate overlay dispersions
Discovered due to originally having a history loading bug between
btcusdt futes display where the same time series was being loaded into
the graphics system, this avoids the issue where 2 (or more) curves are
measured to have the same dispersion and thus do not get added as unique
entries to the `overlay_table: dict[float, tuple]` during the scaling
phase..

Practically speaking this should never really be a problem if the curves
(and their backing timeseries) are indeed unique but keying the
overlay table by the dispersion and the `Viz` is a minimal performance
hit when looping the sorted table, and when you **do want to show**
duplicate curves it's a lot nicer than having one overlay just not be
ranged correctly at all XD
2023-06-27 13:41:47 -04:00
Tyler Goodlet 0f8c685735 .clearing._client: return early on cancel-dead-dialog attempts 2023-06-27 13:41:47 -04:00
Tyler Goodlet 921e18728c Move `._cacheables.open_cached_client()` into `.brokers` pkg mod 2023-06-27 13:41:47 -04:00
Tyler Goodlet c0552fa352 Just use brokermods dict directly in chart entrypoint now 2023-06-27 13:41:47 -04:00
Tyler Goodlet 90810dcffd Right partition the fqme to remove broker part in mkt-info cli 2023-06-27 13:41:47 -04:00
Tyler Goodlet ebbfa7f48d Passthrough kwargs to `open_cached_client()` 2023-06-27 13:41:47 -04:00
Tyler Goodlet bb02775cab Change `ledger` CLI to use new `open_brokerd_dialog()`
Instead of effectively (and poorly) duplicating the trade dialog setup
logic, just use the new helper we exposed in the EMS module B)
Also, handle paper accounts that have no ledger / positions existing.
2023-06-27 13:41:47 -04:00
Tyler Goodlet b15e736e3e Change `piker symbol-info` -> `mkt-info`
As part of bringing the brokerd agnostic APIs up to date and modernizing
wrapping CLIs, this adds a new sub-cmd to allow more or less directly
calling the `.get_mkt_info()` broker mod endpoint and dumping both
the backend specific `Pair`-ish and `.accounting.MktPair` normalized
version to console.

Deatz:
- make the click config's `brokermods` entry a `dict`
- make `.brokers.core.mkt_info()` strip the broker name part from the
  input fqme before calling the backend.
2023-06-27 13:41:47 -04:00
Tyler Goodlet cc3037149c Factor `brokerd` trade dialog init into acm
Connecting to a `brokerd` daemon's trading dialog via a helper `@acm`
func is handy so that arbitrary trading middleware clients **and** the
ems can setup a trading dialog and, at the least, query existing
position state; this is in fact our immediate need when simply querying
for an account's position status in the `.accounting.cli.ledger` cli.

It's now exposed (for now) as `.clearing._ems.open_brokerd_dialog()` and
is called by the `Router.maybe_open_brokerd_dialog()` for every new
relay allocation or paper-account engine instance.
2023-06-27 13:41:47 -04:00
Tyler Goodlet d704d631ba Add `store ldshm` subcmd
Changed from the old `store clone` to instead simply load any shm buffer
matching a user provided `FQME: str` pattern; writing to parquet file is
only done if an explicit option flag is passed by user.

Implement new `iter_dfs_from_shms()` generator which allows iteratively
loading both 1m and 1s buffers delivering the `Path`, `ShmArray` and
`polars.DataFrame` instances per matching file B)

Also add a todo for a `NativeStorageClient.clear_range()` method.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 58c096bfad Bleh go back to using pdbp for REPL in anal 2023-06-27 13:41:47 -04:00
Tyler Goodlet 9eeea51165 Define shm buffer sizing in `.data.history`
Also adjust sizing such that the history buffer will backfill the last
six years by default (in 1m OHLC) and the hft buffer will do only 3 days
worth. Also ensure the fsp layer passes the src shm's buffer size when
allocating since the size is now required by allocators in the shm apis.
2023-06-27 13:41:47 -04:00
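Back-of-envelope sizing per the above (calendar-day approximation; the real constants may differ):

```python
SECS_PER_DAY = 24 * 60 * 60
DAYS_PER_YEAR = 365

hist_buffer_size = 6 * DAYS_PER_YEAR * SECS_PER_DAY // 60  # ~6 yrs of 1m bars
hft_buffer_size = 3 * SECS_PER_DAY                         # 3 days of 1s rows

assert hist_buffer_size == 3_153_600
assert hft_buffer_size == 259_200
```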
Tyler Goodlet 33ec27715b Sync shm mod with dev version in `tractor`, drop buffer sizing vars, require `size: int` to all allocators 2023-06-27 13:41:47 -04:00
Tyler Goodlet e1be098406 Only hard re-render `Viz`s matching backfill deats
Avoid unnecessarily re-rendering the wrong (1min OHLC history) chart
and/or other such charts with update tasks listening to the sampler
stream. Instead only redraw in tasks which are updating vizs which match
the actual details of the backfill event.

We can probably also eventually match against a range tuple (emitted in
the msg) and then have the task further only update the formatter layer
unless the range is actually in view?
2023-06-27 13:41:47 -04:00
Tyler Goodlet dd3e4b5a1f Emit backfill details in broadcasts
Send both the `Viz.name` and `timeframe: int` so that the UI side can
match against them and only update a lone curve in a single plot.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 2a1835843f Drop `wap_in_history` stuff from display loop
It's no longer part of the default OHLCV array-buffer schema and just
generally we should be processing and managing **any** non source data
in the FSP subsystem(s) despite it maybe being provided as a default by
some backends.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 8947932289 Use last 16 steps in period detection, not first 16.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 0484e97382 Try to not overrun shm during gap backfilling.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 5251561e20 TOCHERRY: into #486, add polars/apache deps for nix 2023-06-27 13:41:47 -04:00
Tyler Goodlet 937d8c410d binance: add futes API link, freeze the agg tradez struct 2023-06-27 13:41:47 -04:00
Tyler Goodlet 75ff3921b6 ib: fix mega borked hist queries on gappy assets
Explains why stuff always seemed wrong before XD

Previously whenever a time-gappy asset (like a stock due to its venue
operating hours) was being loaded, we weren't querying for a "duration's
worth" of bars and this was causing all sorts of actual gaps in our
data set that shouldn't exist..

Fix that by always attempting to retrieve a min aggregate-time's
worth/duration of bars/datums in the history manager. Actually,
i implemented this in both the feed and api layers for this backend
since it doesn't seem to strictly work just implementing it at the
`Client.bars()` level, not sure why but..

Also, buncha `ruff` linting cleanups and fix the logger nameeee, lel.
2023-06-27 13:41:47 -04:00
Tyler Goodlet c8f8724887 Mask out all the duplicate frame detection 2023-06-27 13:41:47 -04:00
Tyler Goodlet c1546eb043 Add note about appending parquet files on write 2023-06-27 13:41:47 -04:00
Tyler Goodlet f8ab3bde35 Allow sampler step events to overrun; only 1s period 2023-06-27 13:41:47 -04:00
Tyler Goodlet c1201c164c Parametrize index margin around gap detection segment 2023-06-27 13:41:47 -04:00
Tyler Goodlet a575e67fab Go back to just opening sampler stream inside history update task? 2023-06-27 13:41:47 -04:00
Tyler Goodlet 34dd6ffc22 Add a configurable timeout around backend live feed startup
For now make it a larger value but ideally in the long run we can tune
it to specific backends and expose it in the config(s).
2023-06-27 13:41:47 -04:00
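A tiny `trio` sketch of the idea; the default value and event name here are made up for illustration:

```python
import trio

async def wait_feed_ready(
    feed_is_live: trio.Event,
    startup_timeout: float = 150.0,  # would come from per-backend config
):
    # raises `trio.TooSlowError` if the backend never comes up
    with trio.fail_after(startup_timeout):
        await feed_is_live.wait()

async def main():
    ev = trio.Event()
    async with trio.open_nursery() as tn:
        tn.start_soon(wait_feed_ready, ev)
        await trio.sleep(0.1)
        ev.set()  # backend signalled its first live quote

trio.run(main)
```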
Tyler Goodlet fda7111305 Import from new `.data._timeseries` mod for anal 2023-06-27 13:41:47 -04:00
Tyler Goodlet 8233d12afb Detect and fill time gaps in tsdb history
For now, just detect and fill in gaps (via fresh backend queries)
*in the shm buffer* but eventually i'm pretty sure we can just write
these direct to the parquet file as well.

Use the new `.data._timeseries.detect_null_time_gap()` to find and fill
in the `ShmArray` index range, re-check it and enter a prompt if it
didn't totally fill.

Also,
- do a massive cleanup and removal of all unused/commented code.
  - drop the duplicate frames tracking, don't think we need it after
    removing multi-frame concurrent queries.
- change backfill loop variable `end_dt` -> `last_start_dt` which is
  more semantically correct.
- fix logic to backfill any missing sub-sequence portion for any frame
  query that overruns the shm buffer prependable space by detecting
  the available rows left to insert and only push those.
  - add a new `shm_push_in_between()` helper to match.
2023-06-27 13:41:47 -04:00
Tyler Goodlet f25248c871 Add `.data._timeseries` utility mod
Org all the new (time) gap detection routines here and also move in the
`slice_from_time()` epoch -> index converter routine from `._pathops` B)
2023-06-27 13:41:47 -04:00
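In the spirit of that epoch -> index converter, a minimal sketch using a binary search over the (sorted) time column; the signature is assumed, not the real `slice_from_time()`:

```python
import numpy as np

def slice_from_time(times: np.ndarray, start_t: float, stop_t: float) -> slice:
    i_start, i_stop = np.searchsorted(times, [start_t, stop_t])
    return slice(int(i_start), int(i_stop) + 1)

times = np.arange(0, 600, 60)  # 1m sampled epochs
sl = slice_from_time(times, 120, 300)
assert list(times[sl]) == [120, 180, 240, 300]
```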
Tyler Goodlet 54f8a615fc Use `code.interact()` in anal subcmd for now 2023-06-27 13:41:47 -04:00
Tyler Goodlet 2dbcecdac7 Generalize time-gap detector to accept unit and threshold 2023-06-27 13:41:47 -04:00
Tyler Goodlet 0dcfcea6ee Finally get partial backfills after tsdb load workinnn
It took a little while (and a lot of commenting out of old no longer
needed code) but this gets tsdb (from parquet file) loading *before*
final backfilling from the most recent history frame until the most
recent tsdb time stamp!

More or less all the convoluted concurrency shit we had for coping with
`marketstore` IPC junk is no longer needed, particularly all the query
size limits and accompanying load loops.. The recent frame loading
technique/order *has* now changed though since we'd like to show charts
asap once tsdb history loads.

The new load sequence is as follows:
- load mr (most recent) frame from backend.
- load existing history (one shot) from the "tsdb" aka parquet files
  with `polars`.
- backfill the gap part from the mr frame back to the tsdb start
  incrementally by making (hacky) `ShmArray.push(start=<blah>)` calls
  and *not* updating the `._first.value` while doing it XD

Dirtier deatz:
- make `tsdb_backfill()` run per timeframe in a separate task.
  - drop all the loop through timeframes and insert `dts_per_tf` crap.
  - only spawn a subtask for the `start_backfill()` call which in turn
    only does the gap backfilling as mentioned above.
- mask out all the code related to being limited to certain query sizes
  (over gRPC) as was restricted by marketstore.. not gonna go through
  what all of that was since it's probably getting deleted in a follow
  up commit.
- buncha off-by-one tweaks to do with backfilling the gap from mr frame
  to tsdb start.. mostly tinkered it to get it all right but seems to be
  working correctly B)
- still use the `broadcast_all()` msg stuff when doing the gap backfill
  though don't have it really working yet on the UI side (since
  previously we were relying on the shm first/last values.. so this will
  be "coming soon" :)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 7a5c43d01a Support injecting a `info: dict` to `Sampler.broadcast_all()` calls 2023-06-27 13:41:47 -04:00
Tyler Goodlet f1252983e4 kucoin: support start and end dt based bars queries 2023-06-27 13:41:47 -04:00
Tyler Goodlet 6dc3ed8d6a Expose a `force_reformat: bool` up through graphics stack 2023-06-27 13:41:47 -04:00
Tyler Goodlet 4f4860cfb0 Update shm.push() type sig style 2023-06-27 13:41:47 -04:00
Tyler Goodlet 1e683a4b91 Another guard around sampling subscriber popped race.. 2023-06-27 13:41:47 -04:00
Tyler Goodlet 9fd412f631 Add basic time-sampling gap detection via `polars`
For OHLCV time series we normally presume a uniform sampling period
(1s or 60s by default) and it's handy to have tools to ensure a series
is gapless or contains expected gaps based on (legacy) market hours.

For this we leverage `polars`:
- add `.nativedb.with_dts()` a datetime-from-epoch-time-column frame
  "column-expander" which inserts datetime-casted, epoch-diff and
  dt-diff columns.
- add `.nativedb.detect_time_gaps()` which filters to any larger then
  expected sampling period rows.
- wrap the above (for now) in a `piker store anal` (analysis) cmd which
  atm always enters a breakpoint for tinkering.

Supporting storage client additions:
- add a `detect_period()` helper for extracting expected OHLC time step.
- add new `NativedbStorageClient` methods and attrs to provide for the above:
    - `.mk_path()` to **only** deliver a parquet-file path for use in
      other methods.
    - `._dfs` to house cached `pl.DataFrame`s loaded from `.parquet` files.
    - `.as_df()` which loads cached frames or loads them from disk and
      then caches (for next use).
    - `_write_ohlcv()` a private-sync version of the public equivalent
      meth since we don't currently have any actual async file IO
      underneath; add a flag for whether to return as a `numpy.ndarray`.
2023-06-27 13:41:47 -04:00
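Approximately what the dt-expansion plus gap filter boils down to in `polars` terms (column names and expressions here are assumptions, not the real `.storage.nativedb` code):

```python
import polars as pl

def with_dts(df: pl.DataFrame, col: str = 'time') -> pl.DataFrame:
    return df.with_columns([
        pl.from_epoch(pl.col(col), time_unit='s').alias('dt'),
        pl.col(col).diff().alias('s_diff'),  # epoch diff between rows
    ])

def detect_time_gaps(df: pl.DataFrame, expect_period_s: float = 60) -> pl.DataFrame:
    # keep only rows whose step from the prior row exceeds the sample period
    return with_dts(df).filter(pl.col('s_diff') > expect_period_s)

ohlcv = pl.DataFrame({
    'time': [0, 60, 120, 300],  # note the 180s jump at the end
    'close': [1.0, 1.1, 1.2, 1.3],
})
assert detect_time_gaps(ohlcv).height == 1
```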
Tyler Goodlet d027ad5a4f Whenever there is overlays, set a title on main chart price-y axis! 2023-06-27 13:41:47 -04:00
Tyler Goodlet 106ebe94bf Drop marketstore and tina install from readme, add polars and apache! 2023-06-27 13:41:47 -04:00
Tyler Goodlet d2accdac9b Drop remaining mkts nonsense from `store delete` 2023-06-27 13:41:47 -04:00
Tyler Goodlet c020ab76be Clean out marketstore specifics
- drop buncha cruft from `store ls` cmd and make it work for
  multi-backend fqme listing.
  - including adding an `.address` to the mkts client which shows the
    grpc socketaddr details.
- change defaults to the new `'nativedb'`.
- drop 'marketstore' from built-in backend list (for now)
2023-06-27 13:41:47 -04:00
Tyler Goodlet c52e889fe5 First draft history loading rework
It was a concurrency-hack mess somewhat due to all sorts of limitations
imposed by marketstore (query size limits, strange datetime/timestamp
errors, slow table loads for large queries..) and we can drastically
simplify. There's still some issues with getting new backfills (not yet
in storage) correctly prepended: there's sometimes little gaps due to shm
races when reading history indexing vs. when the live-feed startup
finishes.

We generally need tests for all this and likely a better rework of the
feed layer's init such that we're showing history in chart afap instead
of waiting on backfills or the live feed to come up.

Much more to come B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 0ba3c798d7 Drop `bar_wap` from default ohlc field set
Turns out no backend (including kraken) requires it and really this
kinda of measure should be implemented and recorded from our fsp layer
instead of (hackily) sometimes expecting it to be in "source data".
2023-06-27 13:41:47 -04:00
Tyler Goodlet 7b4f4bf804 First draft `.storage.nativedb.` using parquet files
After much frustration with a particular tsdb (cough) this instead
implements a new native-file (and apache tech based) backend which
stores time series in parquet files (for now) using the `polars` apis
(since we plan to use that lib as well for processing).

Note this code is currently **very** rough and in draft mode.

Details:
- add conversion routines for going from `polars.DataFrame` to
  `numpy.ndarray` and back.
- lay out a simple file-name as series key symbology:
  `fqme.<datadescriptions>.parquet`, though probably it will evolve.
- implement the entire `StorageClient` interface as it stands.
- adjust `storage.cli` cmds to instead expect to use this new backend,
  which means it's a complete mess XD

Main benefits/motivation:
- wayy faster load times with no "datums to load limit" required.
- smaller space footprint and we haven't even touched compression
  settings yet!
- wayyy more compatible with other systems which can lever the apache
  ecosystem.
- gives us finer grained control over the filesystem usage so we can
  choose to swap out stuff like the replication system or networking
  access.
2023-06-27 13:41:47 -04:00
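The conversion plus file round-trip idea, roughly (the field set and the fqme-style file name are illustrative, not the final symbology):

```python
import os
import tempfile

import numpy as np
import polars as pl

ohlcv_dtype = [
    ('time', 'i8'), ('open', 'f8'), ('high', 'f8'),
    ('low', 'f8'), ('close', 'f8'), ('volume', 'f8'),
]

def np2pl(arr: np.ndarray) -> pl.DataFrame:
    return pl.DataFrame({name: arr[name] for name in arr.dtype.names})

def pl2np(df: pl.DataFrame, dtype=ohlcv_dtype) -> np.ndarray:
    out = np.empty(df.height, dtype=dtype)
    for name, _ in dtype:
        out[name] = df[name].to_numpy()
    return out

arr = np.zeros(3, dtype=ohlcv_dtype)
arr['time'] = [0, 60, 120]

path = os.path.join(tempfile.gettempdir(), 'btcusdt.usdtm.perp.binance.ohlcv1m.parquet')
np2pl(arr).write_parquet(path)
back = pl2np(pl.read_parquet(path))
assert (back['time'] == arr['time']).all()
```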
Tyler Goodlet 8de92179da kucoin: fix missing default fields def import 2023-06-27 13:41:47 -04:00
Tyler Goodlet 94733c4a0b A PoC tsdb prototype: `parqdb` using `polars`
Turns out just (over)writing `.parquet` files with >= 1M datums takes
less than a second, and we can likely speed up appends using
`fastparquet` (usage coming soon).

Includes:
- a new `clone` CLI subcmd to test this all out by ad-hoc copy of
  (literally hardcoded to a daemon-actor specific shm allocation X) an
  existing `/dev/shm/<ShmArray>` and push to `.parquet` file.
  - code to convert from our `ShmArray.array: np.ndarray` ->
    `polars.DataFrame` (thanks SO).
  - timing checks around the file IO and np -> polars conversion.
- a `read` subcmd which i was using to test the sync `pymarketstore`
  client against our async one to see if the issues from
  https://github.com/pikers/piker/issues/443 were resolved, but nope!
2023-06-27 13:41:47 -04:00
Tyler Goodlet 7d1cc47db9 ROFL, even using `pymarketstore`'s json-RPC it's borked..
Turns out trying to switch to the old sync client and going back to
using the old json-RPC API (after having had to patch the upstream repo
to not import gRPC machinery to avoid crashes..) I'm basically getting
the exact same issues.

New tinkering results does possibly tell some new stuff:
- the EOF error seems to indeed be due to trying to fetch records which
  haven't been written (properly) - like asking for an `end=<epoch_int>`
  that is earlier than the earliest record.
- the "snappy input corrupt" error seems to have something to do with
  the `Params.end` field not being an `int` and/or the int precision not
  being chosen correctly?
  - toying with this a bunch manually shows that the internals of the
    client (particularly `.build_query()` stuff) is parsing/calcing the
    `Epoch` and `Nanoseconds` values out incorrectly.. which is likely
    part of the problem.
  - we also changed `anyio_marketstore.MarketStoreclient.build_query()`
    logic when removing `pandas` a while back, which also seems to be
    part of the problem on the async side, however reverting those
    changes also didn't fix the issue entirely; likely something else
    more subtle going on (maybe with the write vs. read `Epoch` field
    type we pass?).

Despite all this malarky, we're already underway more or less obsoleting
this whole thing with a much less complex approach of using apache
parquet files and modern filesystem tools to get a more flexible and
numerics-native dataframe-oriented tsdb B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet 9859f601ca Invert data provider's OHLCV field defs
Turns out the reason we were originally making the `time: float` column in our
ohlcv arrays was bc that's what **only** ib uses XD (and/or 🤦)

Instead we changed the default field type to be an `int` (which is also
more correct to avoid `float` rounding/precision discrepancies) and thus
**do not need to override it** in all other (crypto) backends (except
`ib`). Now we only do the customization (via `._ohlc_dtype`) to `float`
only for `ib` for now (though pretty sure we can also not do that
eventually as well..)!
2023-06-27 13:41:47 -04:00
Tyler Goodlet af64152640 .data.history: update to new naming
-> `._source.def_iohlcv_fields`
-> `.storage.StorageClient`
2023-06-27 13:41:47 -04:00
Tyler Goodlet bf21d2e329 Rename default OHLCV `np.dtype` descriptions
Use `def_iohlcv_fields` for a name and instead of copying and inserting
the index field pop it for the non-index version. Drop creating
`np.dtype()` instances since `numpy`'s apis accept both input forms so
this is simpler on our end.
2023-06-27 13:41:47 -04:00
Tyler Goodlet 848577488e Add public config dir getter 2023-06-27 13:41:47 -04:00
Tyler Goodlet e82538eded .data: export ohlc dtypes at top level 2023-06-27 13:41:47 -04:00
Tyler Goodlet 8ccb8b0744 kucoin: drop shm-array `numpy` dtype def, our default is the same 2023-06-27 13:41:47 -04:00
Tyler Goodlet e83de2906f Relegate old marketstore cli eps to masked module 2023-06-27 13:41:47 -04:00
Tyler Goodlet 33c464524b Lower the paper engine order-cancel latency 2023-06-27 13:41:47 -04:00
Tyler Goodlet cb774e5a5d Re-implement `piker store` CLI with `typer`
Turns out you can mix and match `click` with `typer` so this moves what
was the `.data.cli` stuff into `storage.cli` and uses the integration
api to make it all work B)

New subcmd: `piker store`
- add `piker store ls` which lists all fqme keyed time-series from backend.
- add `store delete` to remove any such key->time-series.
  - now uses a nursery for multi-timeframe concurrency B)

Mask out all the old `marketstore` specific subcmds for now (streaming,
ingest, storesh, etc..) in anticipation of moving them into
a subpkg-module and make sure to import the sub-cmd module in our top
level cli package.

Other `.storage` api tweaks:
- drop the reraising with custom error (for now).
- rename `Storage` -> `StorageClient` (or should it be API?).
2023-06-27 13:41:47 -04:00
Tyler Goodlet 1ec9b0565f Move `.data.cli` to `.storage.cli` 2023-06-27 13:41:47 -04:00
Tyler Goodlet 7ab97fb21d Add marketstore client as storage-backend module
To kick off our (tsdb) storage backends this adds our first implementing
a new `Storage(Protocol)` client interface. Going forward, the top level
`.storage` pkg-module will now expose backend agnostic APIs and helpers
whilst specific backend implementations will adhere to that middle-ware
layer.

Deats:
- add `.storage.marketstore.Storage` as the first client implementation,
  moving all needed (import) dependencies out from
  `.service.marketstore` as well as `.ohlc_key_map` and `get_client()`.
- move root `conf.toml` loading from `.data.history` into
  `.storage.__init__.open_storage_client()` which now takes in a `name:
  str` and does all the work of loading the correct backend module, its
  config, and determining if a service-instance can be contacted and
  a client loaded; in the case where this fails we raise a new
  `StorageConnectionError`.
- add a new `.storage.get_storagemod()` just like we have for brokers.
- make `open_storage_client()` also return the backend module such that
  the history-data layer can make backend specific calls as needed (eg.
  ohlc_key_map).
- fall back to a basic non-tsdb backfill when `open_storage_client()`
  raises the new connection error.
2023-06-27 13:41:47 -04:00
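A loose sketch of the loader shape described above; the module path, `get_client()` factory and failure mode are assumptions layered on top of the commit notes:

```python
from contextlib import asynccontextmanager
from importlib import import_module
from types import ModuleType

class StorageConnectionError(ConnectionError):
    'Can not connect to the configured storage backend?'

@asynccontextmanager
async def open_storage_client(name: str = 'nativedb'):
    # load the correct backend module by name (pkg layout assumed)
    mod: ModuleType = import_module(f'piker.storage.{name}')
    try:
        async with mod.get_client() as client:
            # deliver the module too so callers can make
            # backend specific calls as needed (eg. ohlc_key_map)
            yield mod, client
    except OSError as err:
        raise StorageConnectionError(
            f'No storage service reachable for backend {name!r} ?'
        ) from err
```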
Tyler Goodlet 29211b200d Start `piker.storage` subsys: cross-(ts)db middlewares
The plan is to offer multiple tsdb and other storage backends (for
a variety of use cases) and expose them similarly to how we do for
broker and data providers B)
2023-06-27 13:41:47 -04:00
Tyler Goodlet ae8358a5e7 Tidy up unused imports and doc string 2023-06-27 13:32:18 -04:00
Tyler Goodlet 00a51c0288 Use new `msgspec.structs` api for `.typecast()` 2023-06-27 13:26:52 -04:00
Tyler Goodlet 994564f923 Just warn-print when annots are str values? 2023-06-27 13:26:52 -04:00
Tyler Goodlet 12172cc5cd Make `.data.types.Struct.typecast()` work via type lookup from `builtins` 2023-06-27 13:26:52 -04:00
goodboy a65910c732 Merge pull request #523 from ebisu4/master
get font style from main config
2023-06-27 13:25:11 -04:00
ebisu4 949fa9fbb9 Merge pull request #1 from pikers/fix_custom_font_settings
Fix reading font size from user config on linux
2023-06-21 10:53:46 +02:00
Tyler Goodlet 4b77de5e2d Fix reading font size from user config
Was borked on linux if you didn't provide the setting in `conf.toml` due
to some logic errors. Fix that by rejigging `DpiAwareFont` internal
variables:

- add new `._font_size_calc_key: str` which was the old `._font_size`
  and is only used when no explicit font size is set by the user in the
  `conf.toml` config:
  - this is the "key" that is used to lookup a calculation function
    which attempts to compute a best fit font size given the measured
    system displays DPI settings and dimensions.
- make the `._font_size: int` the **actual** font size integer that is
  cached and passed to `Qt` to set the size.
  - this is overridden by user config now if defined.
- change the input kwarg `font_size: str` to the constructor to the
  better named private `_font_size_key: str` which gets set to the new
  `._font_size_calc_key`.

Also, adjust all client code which instantiates `DpiAwareFont` to use
the new `_font_size_key` kwarg input so nothing breaks XD
2023-06-19 15:13:01 -04:00
Ebisu d660376206 get font style from main config 2023-06-19 00:10:37 +02:00
goodboy 201b0d99c1 Merge pull request #518 from pikers/fix_price_label_digits
Fix price label precision as `MktPair.price_tick_digits`
2023-05-31 10:59:30 -04:00
Tyler Goodlet c27da99e12 Fix price label precision as `MktPair.price_tick_digits`
Was only really borked for higher-precision but lower priced assets
(like TLOS or peeneez) which have a `MktPair.price_tick_digits >= 2`.

The issue was using the wrong attr, the `size_tick_digits`..
2023-05-31 10:36:20 -04:00
goodboy e51ba404fc Merge pull request #489 from pikers/rekt_pps
Rekt pps? problem? => `piker.accounting`
2023-05-28 15:41:50 -04:00
Tyler Goodlet abd3cefd84 Parametrize ems service test to cancel with API and kbi 2023-05-28 14:28:56 -04:00
Tyler Goodlet f6549fcb62 Always allocate a new `OrderClient` per `open_ems()` call 2023-05-28 14:05:03 -04:00
Tyler Goodlet 41aa87f847 Fix `_digits` attr names in order mode.. 2023-05-28 13:13:43 -04:00
Tyler Goodlet d6331ce9e1 Add nonlocal annots to satisfy ruff 2023-05-28 12:41:14 -04:00
Tyler Goodlet 4f67ac0337 Change to new context-cancelled msg contents: pikerd is canceller 2023-05-26 17:16:43 -04:00
Tyler Goodlet 024cf8b8c2 add in `[kucoin]` section to brokers conf 2023-05-26 16:51:11 -04:00
Tyler Goodlet 9ec664f7c8 Drop elastic search container build for now since we're also skipping the test 2023-05-26 16:50:53 -04:00
Tyler Goodlet 5e2107ff15 Adjust `config.load()` to handle CI git checkout dir, seems they changed it!? 2023-05-26 16:50:15 -04:00
Tyler Goodlet 5f1d0fcb8c `tmpconfdir`: always assert brokers config created 2023-05-26 14:58:59 -04:00
Tyler Goodlet 3b5bd8f43e Ensure quote last price is a `float` 2023-05-26 14:42:35 -04:00
Tyler Goodlet 40c5f39f0d conftest: be explicit about which config we touch 2023-05-26 14:42:09 -04:00
Tyler Goodlet 3d8c1a7b3c ib: don't log-emit ib pp msg when none exists.. 2023-05-26 14:05:32 -04:00
Tyler Goodlet 06cc3ac92c Tidy up ems tests as per some `ruff`in 2023-05-25 18:04:52 -04:00
Tyler Goodlet 4a8e8a32f9 Fix account config loading logic discovered in new test XD 2023-05-25 17:56:14 -04:00
Tyler Goodlet 9bc11d8dd9 Add basic config checking tests 2023-05-25 17:55:20 -04:00
Tyler Goodlet 9c80969fd5 .data.validate: add missing endpoint warnings 2023-05-25 16:01:21 -04:00
Tyler Goodlet da4d344e63 Change to `piker_pin` branch in `tomlkit` fork 2023-05-25 13:53:14 -04:00
goodboy 073ff0103a Merge pull request #506 from pikers/py311
`python3.11` support!
2023-05-24 19:34:10 -04:00
Tyler Goodlet f0a346dcc3 Some linting fixes after trying out `ruff` 2023-05-24 17:25:23 -04:00
Tyler Goodlet 7381c361cd Strictly drop `LinkedSplits.symbol` B) 2023-05-24 15:42:14 -04:00
Tyler Goodlet 1b577eebf6 Change over the UI layer to use `MktPair`
Including changing to `LinkedSplits.mkt: MktPair` and adding an explicit
setter method for setting it and being sure that nothing breaks
in the display system init!

For this commit we leave in warning access to `LinkedSplits.symbol` but
will remove it in a following commit.
2023-05-24 15:30:17 -04:00
Tyler Goodlet 39af215d61 kraken: use new `Position.mkt` attr 2023-05-24 15:29:42 -04:00
Tyler Goodlet 35f0520cb0 Drop `Symbol` / `.symbol` support from `.accounting`
Only stuff left was the allocator stuff. Drop the top level subpkg
exports and finally kill off the awkwardly named
`Symbol.lot_size_digits` properties XD

Expose a bunch more util funcs at subpkg top level, do some typing in
allocator method internals.
2023-05-24 15:26:51 -04:00
Tyler Goodlet 738d0ca38b Rename db tests to test_docker_services 2023-05-24 12:30:57 -04:00
Tyler Goodlet bd8e4760d5 Port everything strictly to `Position.mkt` and `Flume.mkt` 2023-05-24 12:16:28 -04:00
Tyler Goodlet 9a063ccb11 ib: Solve lingering bugs for non-vlm contracts
Contract matching in live setup was borked; switch to
`MktPair.dst.atype` matching, don't override the `cmdty` "venue" (a
weird special case) in `get_mkt_info()` otherwise lookup will fail..
2023-05-24 09:11:24 -04:00
Tyler Goodlet e8787d89c6 ib: unset vlm via new `FeedInit.shm_write_opts` field 2023-05-24 08:28:16 -04:00
Tyler Goodlet 8e97814c1f Add "no vlm" indication to `FeedInit`
Stash it for now in the (now mutable by default) `.shm_write_opts` and
have the new `Flume._has_vlm: bool` (only set to false internally by
feed layer) which can be read via new public `.has_vlm()` predicate.
Move out the old `.ui/_fsp` helper logic to this flume method.
2023-05-24 08:25:14 -04:00
Tyler Goodlet e82f7f9012 Skip elasticsearch test for now, container build seems borked? 2023-05-23 22:39:38 -04:00
Tyler Goodlet b44b0915ca ib: i guess only discard `MktPair.src: Asset` on non-forex XD 2023-05-23 19:11:40 -04:00
Tyler Goodlet ff74d47fd5 kucoin: fix fqme or search result key lookups 2023-05-23 16:46:21 -04:00
Tyler Goodlet 6ad8c603d5 More detailed `Position.events` todo 2023-05-23 16:45:58 -04:00
Tyler Goodlet cd55d027c4 Re-implement db tests using new ahab daemons
Avoids the really sloppy flag passing to `open_pikerd()` and allows for
separation of the individual docker daemon starts.

Also add a new `root_conf() -> Path` fixture which will open and load
the `dict` for the new root `conf.toml` file.
2023-05-23 14:16:08 -04:00
Tyler Goodlet d094625bd6 Activate docker daemons via flags using exit stack 2023-05-23 14:16:08 -04:00
Tyler Goodlet e7a172b656 Reimplement marketstore and elasticsearch daemons
Using the new `._ahab.start_ahab_service()` mngr of course, and now
support user config overrides (such that our defaults can be modified by
a keen user, say using a config file, or for testing). This is where the
functionality that was moved out of the `pikerd` init now lives - instead of
being triggered by bool flag inputs to that factory.

For marketstore actually support overriding the entire yaml config via
runtime `_yaml_config_str: str` formatting with any passed user dict,
primarily focussing on supporting override of the sockaddrs for testing.
2023-05-23 14:16:02 -04:00
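A minimal sketch of that yaml-override idea, assuming a PyYAML dep and
hypothetical `host`/`grpc_port` placeholder names (not the actual
`_yaml_config_str` template)::

    import yaml  # PyYAML, assumed available

    # hypothetical template, not piker's real `_yaml_config_str`
    _yaml_template: str = (
        'root_directory: data\n'
        'listen_host: {host}\n'
        'listen_port: {grpc_port}\n'
    )

    def render_mkts_config(user_dict: dict | None = None) -> dict:
        # overlay any user provided settings on our defaults then
        # render + parse the template to verify it's still valid yaml.
        params = {'host': 'localhost', 'grpc_port': 5995}
        params.update(user_dict or {})
        return yaml.safe_load(_yaml_template.format(**params))

    assert render_mkts_config({'grpc_port': 6666})['listen_port'] == 6666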
Tyler Goodlet bd919f9d66 _ahab: use `Services` api to spawn docker tasks
Allows for using the `Services.cancel_service()` api for explicit
cancellation in tests and eventually for remote teardown. Change
`.start_ahab()` to an `@acm` `start_ahab_service()` and just yield back
the same values we were returning prior. Also fix the logging (level) to
actually reflect what's passed in - we weren't using the correct name
/ instance from the `.service` subpkg..
2023-05-23 14:16:02 -04:00
Tyler Goodlet 611d1ee3fc Drop db flags from pikerd startup 2023-05-23 14:16:02 -04:00
Tyler Goodlet 56b23e1fcc Add docker and elasticsearch to test deps 2023-05-23 14:16:02 -04:00
Tyler Goodlet d3bafb0063 Always prefer a config template if found 2023-05-23 14:16:02 -04:00
Tyler Goodlet 7f246697b4 Remove remaining `fqsn` usage from code base minus backward compats 2023-05-23 14:16:02 -04:00
Tyler Goodlet dd10acbbf9 Replace `Transaction.fqsn` -> `.fqme`
Change over all client (broker) code which constructs transactions
and finally wipe required `.fqsn` usage from `.accounting` B)
2023-05-23 14:15:57 -04:00
Tyler Goodlet 31a00eca94 Rename fqsn -> fqme in ui mods 2023-05-22 12:13:00 -04:00
Tyler Goodlet c93d119873 Move tmpdir creation into separate fixture
Since `.config.load()` was changed to not touch conf files by default
(without explicitly setting `touch_if_dne: bool`), this ensures both the
global module value is set and the `brokers.toml` file exists before
every test.
2023-05-22 12:03:32 -04:00
Tyler Goodlet 588770d034 ib: rename lingering fqsn -> fqme 2023-05-22 12:00:13 -04:00
Tyler Goodlet 2f2d612b5f Add todo to switch to `dst/src` delim 2023-05-22 11:57:37 -04:00
Tyler Goodlet 660a94d610 Don't expect `conf.toml`'s network section
For testing this is particularly true until we offer a template
with whatever (likely localhost) settings planned to ship.
2023-05-22 11:54:36 -04:00
Tyler Goodlet e4e4cacef3 .data.feed: Less stringency with fqme matching
`Flume.mkt.fqme` might not be exactly the same as the local
version now since we've had to add some hacks to certain backends
(cough ib) to handle `MktPair.src` not being set as an `Asset` (yet).
2023-05-22 11:52:36 -04:00
Tyler Goodlet 60a6f3269c ib: use flex report datetime sort
Since `open_trade_ledger()` now requires a sort we pass in a combo of
the std `pendulum.parse()` for API records and a custom flex parser for
flex entries pulled offline.

Add special handling for `MktPair.src` such that when it's a fiat (like
it should always be for most legacy assets) we try to get the fqme
without that `.src` token (i.e. not mnqusd) to avoid breaking
roundtripping of live feed requests (due to new symbology) as well as
the current tsdb table key set..

Do a wholesale renaming of fqsn -> fqme in most of the rest of the
backend modules.
2023-05-22 09:41:44 -04:00
Tyler Goodlet 53003618cb Add longer timeout on brokerd ctx cancel; seems to work? 2023-05-22 00:16:58 -04:00
Tyler Goodlet c6da09f3c6 Add fast(er), time-sorted ledger records
Turns out that reading **and** writing with `tomlkit` is just way too slow
for large documents like ledger files so move to using the `tomli`
sibling pkg `tomli-w` which seems to much improve the latency, though
obviously longer run we're likely going to want:
- a better algorithm for only back loading records using as little
  history as possible
- a different serialization format for production maybe something
  like apache parquet?

The only issue with using a non-style-preserving writer is that we don't
necessarily get TOML conf ordering for free (without first ordering it
ourselves), and thus this patch also adds much more general date-time
sorting machinery which is now **required** when using
`open_trade_ledger()` via a `tx_sort: Callable`. By default we now
provide `.accounting._ledger.iter_by_dt()` (exposed in the subpkg mod)
which conducts dynamic "datetime key detection" based parsing of records
based on a `parsers: dict[str, Callable]` input table. The default should
handle most use cases including all currently supported live backends
(kraken, ib) as well as our paper engine ledger-records format.

Granulars:
- adjust `Position.iter_clears()` to use new `iter_by_dt(key=lambda ..)`
  signature.
- add `tomli-w` to setup and our `tomlkit` fork to requirements file.
- move `.write_config()` to bottom of class defn.
- fix closed pos popping to not error if pp was already popped..
2023-05-18 18:27:54 -04:00
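A rough sketch of that "datetime key detection" sort (not the actual
`.accounting._ledger.iter_by_dt()`; the field names and fallback are
illustrative only)::

    from datetime import datetime
    from typing import Callable, Iterator

    def iter_by_dt(
        records: dict[str, dict],
        parsers: dict[str, Callable[[str], datetime]] | None = None,
    ) -> Iterator[tuple[str, dict]]:
        # guess which field of each record holds its timestamp and yield
        # (tid, record) pairs sorted oldest-first.
        parsers = parsers or {
            'dt': datetime.fromisoformat,
            'datetime': datetime.fromisoformat,
            'time': datetime.fromisoformat,
        }

        def key(item: tuple[str, dict]) -> datetime:
            _, rec = item
            for field, parse in parsers.items():
                if field in rec:
                    val = rec[field]
                    return val if isinstance(val, datetime) else parse(val)
            return datetime.min  # no known dt field: sort to the front

        yield from sorted(records.items(), key=key)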
Tyler Goodlet 89d24cfe33 Oof, fix closed position popping by fqme.. 2023-05-18 12:52:34 -04:00
Tyler Goodlet 8d7a9fa19e Make `MktPair.pair()` a meth, allow passing in a delim character 2023-05-18 12:01:30 -04:00
Tyler Goodlet a1a10676cd Go back to `tomllib` for ledger loading, it's wayy faster 2023-05-18 11:27:31 -04:00
Tyler Goodlet 97b2b25256 Avoid import cycle in clearing client 2023-05-18 01:25:04 -04:00
Tyler Goodlet b2bf0b06f2 ib.api: wholesale fqsn -> fqme renames 2023-05-17 16:56:04 -04:00
Tyler Goodlet 907eaa68cb Pass `mkt: MktPair` to `.open_history_client()`
Since porting all backends to the new `FeedInit` + `MktPair` + `Asset`
style init, we can now just directly pass a `MktPair` instance to the
history endpoint(s) since it's always called *after* the live feed
`.stream_quotes()` ep B)

This has a lot of benefits including allowing brokerd backends to have
more flexible, pre-processed market endpoint meta-data that piker has
already validated; it also makes handling special cases much more
straightforward, such as forex pairs from legacy brokers XD

First pass changes all crypto backends to expect this new input, ib will
come next after handling said special cases..
2023-05-17 16:52:15 -04:00
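Roughly the endpoint shape implied above, as a hedged sketch only (the
exact yield shape and signature here are assumptions, not the real ep
contract)::

    from contextlib import asynccontextmanager
    from typing import Any, AsyncIterator, Callable

    @asynccontextmanager
    async def open_history_client(
        mkt: Any,  # a pre-validated `MktPair`-style struct, not an fqme str
    ) -> AsyncIterator[tuple[Callable, dict]]:
        # hypothetical backend ep: key native symbology straight off the
        # already-validated market info instead of re-parsing a string.
        async def get_hist(timeframe: float, end_dt=None, start_dt=None):
            raise NotImplementedError(f'query {mkt} history here..')

        config: dict = {}  # optional meta-data (eg. frame durations)
        yield get_hist, config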
Tyler Goodlet 89e8a834bf Support fqme rendering *without* the src key
Since most (legacy) stock brokers design their symbology without
including the target exchange's source asset name - normally a fiat
currency like USD - this adds an option for rendering market endpoints
without that token for simpler use in backends for such brokers.

As an example IB doesn't expect a `mnq/usd.cme.ib` symbol and instead
presumes that since the CME lists all assets in USD then the source
asset is implied.

Impl details:
- add `MktPair.pair: str` which replaces `.key` as a better name.
- offer a `without_src: bool` to a new `.get_fqme()` getter method
  which will render everything the same minus the src token.
- expose the new flag through both the new `.get_fqme()` and
  `.get_bs_fqme()` methods and wrap those both under the original
  property names `.bs_fqme` and `.fqme`.
2023-05-17 16:47:15 -04:00
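The rendering rule in a tiny, hypothetical helper (not the real
`MktPair.get_fqme()` impl)::

    def get_fqme(
        dst: str,
        src: str,
        venue: str,
        broker: str,
        without_src: bool = False,
        delim: str = '/',
    ) -> str:
        # render `mnq/usd.cme.ib` or, for legacy brokers that imply the
        # src fiat, just `mnq.cme.ib`.
        pair = dst if without_src or not src else f'{dst}{delim}{src}'
        return '.'.join(filter(None, (pair, venue, broker)))

    assert get_fqme('mnq', 'usd', 'cme', 'ib') == 'mnq/usd.cme.ib'
    assert get_fqme('mnq', 'usd', 'cme', 'ib', without_src=True) == 'mnq.cme.ib'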
Tyler Goodlet 12bfabf056 Expose `.accounting.unpack_fqme()` 2023-05-17 16:43:31 -04:00
Tyler Goodlet a44e926c2f kucoin: handle ws welcome, subs-ack and pong msgs
Previously the subscription response handling was a bit sloppy what with
ignoring the welcome msg; this now expects the correct startup
sequence. Also this avoids warn logging on pong messages by expecting
them in the msg loop and further drops the `KucoinMsg` struct and
instead changes the msg loop to expect `dict`s and only cast to structs
on live feed msgs that we actually process/relay.
2023-05-17 12:30:52 -04:00
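The gist of that msg-loop change, sketched with made-up names and
assuming `ws` async-iterates already-decoded `dict` msgs::

    async def iter_live_msgs(ws):
        # swallow expected control traffic (welcome, sub-acks, pongs)
        # instead of warn-logging on it; only live feed msgs get yielded
        # (and only those would be cast to structs downstream).
        async for msg in ws:
            match msg.get('type'):
                case 'welcome' | 'ack' | 'pong':
                    continue
                case 'message':
                    yield msg
                case _:
                    print(f'unexpected ws msg: {msg!r}')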
Tyler Goodlet d0ba9a0a58 Start draft `conf.toml` "root" config with tsdb contact info 2023-05-17 10:58:12 -04:00
Tyler Goodlet 3294defee1 `fqme` adjustments to marketstore module
Mostly renaming from the old acronym. This also contains necessary
conf.toml loading in order to call `open_storage_client()` which now
does not have default network contact info.
2023-05-17 10:46:32 -04:00
Tyler Goodlet ae049eb84f Pass and use `MktPair` throughout history routines
Previously we were passing the `fqme: str`, which isn't as extensive, nor
were we able to pass a `MktPair` directly to backend history manager-loading
routines (which should be able to rely on always receiving it since
currently `stream_quotes()` is always called first for setup).

This also starts a slight bit of configuration oriented tsdb info
loading (via a new `conf.toml`) such that a user can decide to host
their (marketstore) db on a remote host and our container spawning and
client code will do the right startup automatically based on the config.
|-> Related to this I've added some comments about doing storage
backend module loading which should get actually written out as part of
patches coming in #486 (or something related).

Don't allow overruns again in history context since it seems it was
never a problem?
2023-05-17 10:19:14 -04:00
Tyler Goodlet 5c8a45c64a Fix `MktPair.bs_fqme` to properly strip broker suffix 2023-05-17 09:45:00 -04:00
Tyler Goodlet 07b7d1d229 ib: implement `FeedInit` style quote stream setup
As per the new market info packing schema this patch almost gets it
completely compatible and useful via implementing the `get_mkt_info()`
backend module endpoint B)

There's still some questions around `MktPair.src` since all the contract
search machinery in the ib api isn't expecting a fiat currency in the
symbol key: for ex. `mnq/usd.cme.20230616.ib` has no handling for the
`[/]usd` part. For now i'm just excluding the `.src` since it requires
extra parsing on quotes-feed requests even though this is also currently
breaking forex pairs (idealpro or wtv). I think ideally we do move to
a `dst/src.<venue>.<etc..>` style but it's going to require adjustments
to all the existing crypto backends..

This also allows dropping the old `mk_init_msgs()` closure.
2023-05-16 17:29:07 -04:00
Tyler Goodlet 147e1baee9 Remove typo-ed `sum_tick_vlm` config from all crypto backends 2023-05-16 17:00:15 -04:00
Tyler Goodlet b096ee3b7a Make `FeedInit.shm_write_opts` an empty dict by default 2023-05-16 16:30:30 -04:00
Tyler Goodlet f20e2d6ee2 ib.feed: start drafting out `get_mkt_info()` endpoint 2023-05-15 15:35:57 -04:00
Tyler Goodlet 1263835034 ib.api: make `get_sym_details()` and `get_quote()` mutex methods 2023-05-15 15:35:30 -04:00
Tyler Goodlet 1e1e64f7f9 ib: fix op error when `end_dt` is `None`: the first query 2023-05-15 13:30:34 -04:00
Tyler Goodlet 98c043815a Woops, implement `Symbol.fqme` same as `MktPair`.. 2023-05-14 20:24:19 -04:00
Tyler Goodlet ebe351e2ee kucoin: raise `DataUnavailable` if we get empty time array at some point? 2023-05-14 15:13:14 -04:00
Tyler Goodlet cfb125beef `.data.feed`: finally solve startup overruns issue
We need to allow overruns during the async multi-broker context spawning
init bc some backends might take longer than others to setup (eg.
binance vs. kucoin) and result in some context (stream) being overrun by
the time we get to the `.open_stream()` phase. Ideally, we can maybe
adjust the concurrent setup to be more of a task-per-provider style to
avoid this in the future - which would also in theory result in
more-immediate per-provider setup in terms of showing ready feeds asap.

Also, does a bunch of renaming from fqsn -> fqme and drops the lower
casing of input symbols instead expecting the caller to know what the
data backend it's requesting is going to be able to handle in terms of
symbology.
2023-05-13 17:35:46 -04:00
Tyler Goodlet 1f0db3103d ib.broker: always cast `asset_type` to `str` 2023-05-13 17:27:45 -04:00
Tyler Goodlet 2e8268b53e Allow passing `allow_overruns: bool` to `Services.start_service_task()` 2023-05-13 16:51:11 -04:00
Tyler Goodlet b572cd1b77 kucoin: store fqme -> mktids table
Instead of pre-converting and mapping piker style fqmes to
`KucoinMktPair`s make `Client._pairs` keyed by the kucoin native market
ids and instead also create a `._fqmes2mktids: bidict[str, str]` for
doing lookups to the native pair from the fqme.

Also, adjust any remaining `fqsn` naming to fqme.
2023-05-13 16:45:05 -04:00
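How a `bidict` gives both directions of that lookup (the entries below
are made-up samples)::

    from bidict import bidict  # two-way mapping dep

    _fqmes2mktids = bidict({
        'btcusdt.spot.kucoin': 'BTC-USDT',
        'xmrusdt.spot.kucoin': 'XMR-USDT',
    })

    # piker-style fqme -> kucoin native market id..
    assert _fqmes2mktids['btcusdt.spot.kucoin'] == 'BTC-USDT'
    # ..and the reverse via the inverse view
    assert _fqmes2mktids.inverse['XMR-USDT'] == 'xmrusdt.spot.kucoin'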
Tyler Goodlet b288d7051a ib.broker: load account name map as a `bidict` (no `tomlkit` support) 2023-05-13 16:44:28 -04:00
Tyler Goodlet c349d50f2f Allow creation of empty account files 2023-05-13 16:12:18 -04:00
Tyler Goodlet 779c0b73c9 Make `.accounting._ledger` use `tomlkit`
So that styling is preserved on write but requires that we pop `None`
values (in this case any unset `.expiry` transactions) due to `tomlkit`
having no support for writing them as values?
2023-05-13 16:07:17 -04:00
Tyler Goodlet 50a4c425d3 Add `touch_if_dne: bool` to `config.load()`
So that we aren't creating blank files for legacy configs (as we do name
changes or wtv). Further change `.get_conf_path()` to validate against
new `account.` prefix and a god `conf.toml` file.
2023-05-13 16:05:23 -04:00
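The flag's semantics in a stripped-down form (illustrative only, not the
real `config.load()` signature; shown with the stdlib `tomllib` reader)::

    from pathlib import Path
    import tomllib  # py3.11+ stdlib toml reader

    def load_conf(path: Path, touch_if_dne: bool = False) -> dict:
        # never create blank files for legacy/renamed configs unless the
        # caller explicitly opts in.
        if not path.exists():
            if not touch_if_dne:
                return {}
            path.parent.mkdir(parents=True, exist_ok=True)
            path.touch()
        with path.open('rb') as fp:
            return tomllib.load(fp)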
Tyler Goodlet df96155057 Always allow overruns in sampler context
Requires https://github.com/goodboy/tractor/pull/357.
Avoid overruns when doing concurrent live feed init over multiple
brokers.
2023-05-13 14:06:27 -04:00
Tyler Goodlet a62283bae2 Drop final use of `toml` 3rd party lib
We moved to `tomlkit` as per #496 and this lets us drop the mess that
was the inline-table encoder in `.accounting._toml` XD

Relates to #496
2023-05-12 16:15:12 -04:00
Tyler Goodlet 2865f0efe9 `piker.config`: use `tomlkit` for accounting files
We still need to get some patches landed in order to resolve:
- https://github.com/sdispater/tomlkit/issues/288
- https://github.com/sdispater/tomlkit/issues/289
- https://github.com/sdispater/tomlkit/issues/290

But, this does work for style preservation and the inline-table style we
were previously hacking into the `toml` lib in `.accounting._toml`,
which we can pretty much just drop now B)

Relates to #496 (pretty much solves it near-term i think?)
2023-05-12 16:05:45 -04:00
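The style-preservation win in a nutshell (keys below are made up)::

    import tomlkit

    doc = tomlkit.parse(
        "# broker creds\n"
        "[kraken]\n"
        "api_key = ''  # fill me in\n"
    )
    doc['kraken']['api_key'] = 'abc123'

    # unlike plain `toml`/`tomli-w` writers, comments, ordering and
    # (inline-)table style all survive the round-trip:
    print(tomlkit.dumps(doc))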
Tyler Goodlet 5f79434b23 Use new `.config` helpers for `accounting._pos/._ledger` file loading 2023-05-12 13:02:29 -04:00
Tyler Goodlet 5278f8b560 Add `.config.load_ledger()` for transaction record files 2023-05-12 13:01:45 -04:00
Tyler Goodlet 488a0cd119 Add `.config.load_account()`
Allows for direct loading of an "account file configuration" contents
without having to pass the explicit config dir path. In this case we are
also rewriting the `pps.<brokername>.<accnt_name>.toml` file names to
instead have a `account.` prefix, but providing this helper function
allows such changes more easily in the future - since callers won't have
to use the lower level `.load()` input signature.

Also add some todo comments about moving to `tomlkit`.
2023-05-12 12:40:09 -04:00
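The file-name convention it wraps, as a hypothetical path helper::

    from pathlib import Path

    def account_conf_path(
        confdir: Path,
        brokername: str,
        acctid: str,
    ) -> Path:
        # `pps.<broker>.<acct>.toml` -> `account.<broker>.<acct>.toml`;
        # callers never need the lower level `.load()` signature.
        return confdir / f'account.{brokername}.{acctid}.toml'

    print(account_conf_path(Path('~/.config/piker'), 'ib', 'margin'))
    # -> ~/.config/piker/account.ib.margin.toml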
Tyler Goodlet 957224bdc5 ib: support remote host vnc client connections
I figure we might as well support multiple types of distributed
multi-host setups; why not allow running the API (gateway) and thus vnc
server on a diff host and allowing clients to connect and do their thing
B)

Deatz:
- make `ib._util.data_reset_hack()` take in a `vnc_host` which gets
  proxied through to the `asyncvnc` client.
- pull `ib_insync.client.Client` host value and pass-through to data
  reset machinery, presuming the vnc server is running in the same
  container (and/or the same host).
- if no vnc connection **and** no i3ipc trick can be used, just report
  to the user that they need to remove the data throttle manually.
- fix `feed.get_bars()` to handle throttle cases the same based on error
  msg matching, not the error code, and add a max `_failed_resets` count
  to trigger bailing on the query loop.
2023-05-12 09:48:31 -04:00
Tyler Goodlet 7ff8aa1ba0 ib: passthrough host arg to vnc client for click hack 2023-05-11 12:32:38 -04:00
Tyler Goodlet e06f9dc5c0 kucoin: port to new `NoBsWs` api semantics
No longer need to implement connection timeout logic in the streaming
code, instead we just `async for` that bby B)

Further refining:
- better `KucoinTrade` msg parsing and handling with object cases.
- make `subscribe()` do sub requests in a loop and wait for acks.
2023-05-10 16:22:09 -04:00
Tyler Goodlet c6e5368520 paperboi: fix fqme parsing to handle `bs_fqme` cases 2023-05-09 18:34:01 -04:00
Tyler Goodlet 769b292dca Allow `brokerd` runtime switch to paper mode
Previously you couldn't have a brokerd backend which defined
`.trades_dialogue()` but which could also indicate that the paper
clearing engine should be used. This adds that support by allowing the
endpoint task to return a simple `'paper'` string, in which case the ems
will boot a paperboi.

The obvious useful case for this is if you have a broker you want to use
but do not have actual broker credentials setup (yet) with that provider
in your `brokers.toml`; demonstrated here with the adjustment to
`kraken`'s startup to no longer raise a runtime error B)
2023-05-09 18:29:28 -04:00
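The escape hatch in minimal form (the credential check and return shape
here are placeholders, not the actual ep contract)::

    async def trades_dialogue(ctx=None) -> str | None:
        # when no live creds are configured just hand back 'paper' and
        # let the ems boot a paperboi clearing engine instead of raising.
        creds: dict = {}  # stand-in for whatever `brokers.toml` provides
        if not creds.get('api_key'):
            return 'paper'
        # ..otherwise open the real broker trades dialogue here..
        return None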
Tyler Goodlet 361fc4645c Drop passing `loglevel` to `stream_quotes()`, level is set when actor spawns 2023-05-09 18:28:51 -04:00
Tyler Goodlet f1f2ba2e02 kucoin: deliver `FeedInit` msgs on feed startup
To fit with the rest of the new requirements added in `.data.validate`
this adds `FeedInit` init including `MktPair` and `Asset` loading for
all spot currencies provided by `kucoin`.

Deatz:
- add a `Currency` struct and accompanying `Client.get_currencies()` for
  storing all asset infos.
- implement `.get_mkt_info()` which loads all necessary accounting and
  mkt meta-data structs including adding `.price/size_tick` fields to
  the `KucoinMktPair`.
- on client boot, async spawn requests to cache both symbols and currencies.
- pass `subscribe()` as the `fixture` arg to `open_autorecon_ws()`
  instead of opening it manually.

Other:
- tweak `Client._request` to not expect the prefixed `'/'` for the
  `endpoint: str`.
- change the `api_v` arg to just be `api: str`.
2023-05-09 18:17:50 -04:00
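For the `.price/size_tick` bit, a hedged guess at the field mapping
(assuming kucoin's REST symbol schema exposes
`priceIncrement`/`baseIncrement` strings)::

    from decimal import Decimal

    def tick_fields(kucoin_pair: dict) -> dict:
        # map the venue's increments onto piker-style tick sizes
        return {
            'price_tick': Decimal(kucoin_pair['priceIncrement']),
            'size_tick': Decimal(kucoin_pair['baseIncrement']),
        }

    print(tick_fields({'priceIncrement': '0.1', 'baseIncrement': '0.00001'}))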
Tyler Goodlet 80338e1ddd kucoin: WIP moving to FeedInit API 2023-05-09 14:49:46 -04:00
Tyler Goodlet f8c8f63e87 Drop `Optional` usage from marketstore module 2023-05-09 14:49:46 -04:00
Tyler Goodlet 96532ad38c ui._display: no downsampling on history chart default view call 2023-05-09 14:49:46 -04:00
Tyler Goodlet 88f3912b2d test_ems: doc out some remaining suites 2023-05-09 14:49:46 -04:00
Tyler Goodlet cb8833d430 ib: clear error events on every received? 2023-05-09 14:49:46 -04:00
Tyler Goodlet 038b20d13a wsbs: increase msg rx timeout to 16 secs 2023-05-09 14:49:46 -04:00
Tyler Goodlet 05fb4a4014 kraken: drop recv timeout for recon ws 2023-05-09 14:49:46 -04:00
Tyler Goodlet c415bd1ee1 If backend does not provide `bs_mktid`, use the `bs_fqme` 2023-05-09 14:49:46 -04:00
Tyler Goodlet 226c3364c3 Smh, handle `fixture==None` case.. 2023-05-09 14:49:46 -04:00
Tyler Goodlet 685688d2b2 ib: add `mbt.cme` micro-btc futes to adhoc set 2023-05-09 14:49:46 -04:00
Tyler Goodlet 7a3bce3f33 .data._web_bs: add client module name to log msgs 2023-05-09 14:49:46 -04:00
Tyler Goodlet 363a2bbcc6 binance: use new `int` sub-id for each request 2023-05-09 14:49:46 -04:00
Tyler Goodlet 0a8dd7b6da Try to disable `snappy` compression on variables; it breaks everything XD 2023-05-09 14:49:46 -04:00
151 changed files with 25878 additions and 9209 deletions


@@ -43,16 +43,21 @@ jobs:
- name: Checkout
uses: actions/checkout@v3
- name: Build DB container
run: docker build -t piker:elastic dockering/elastic
# elastic only
# - name: Build DB container
# run: docker build -t piker:elastic dockering/elastic
- name: Setup python
uses: actions/setup-python@v3
uses: actions/setup-python@v4
with:
python-version: '3.10'
# elastic only
# - name: Install dependencies
# run: pip install -U .[es] -r requirements-test.txt -r requirements.txt --upgrade-strategy eager
- name: Install dependencies
run: pip install -U .[es] -r requirements-test.txt -r requirements.txt --upgrade-strategy eager
run: pip install -U . -r requirements-test.txt -r requirements.txt --upgrade-strategy eager
- name: Test suite
run: pytest tests -rs


@@ -1,235 +1,161 @@
piker
-----
trading gear for hackers.
trading gear for hackers
|gh_actions|
.. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpikers%2Fpiker%2Fbadge&style=popout-square
:target: https://actions-badge.atrox.dev/piker/pikers/goto
``piker`` is a broker agnostic, next-gen FOSS toolset for real-time
computational trading targeted at `hardcore Linux users <comp_trader>`_ .
``piker`` is a broker agnostic, next-gen FOSS toolset and runtime for
real-time computational trading targeted at `hardcore Linux users
<comp_trader>`_ .
we use as much bleeding edge tech as possible including (but not limited to):
we use much bleeding edge tech including (but not limited to):
- latest python for glue_
- trio_ for `structured concurrency`_
- tractor_ for distributed, multi-core, real-time streaming
- marketstore_ for historical and real-time tick data persistence and sharing
- techtonicdb_ for L2 book storage
- Qt_ for pristine high performance UIs
- pyqtgraph_ for real-time charting
- ``numpy`` and ``numba`` for `fast numerics`_
- uv_ for packaging and distribution
- trio_ & tractor_ for our distributed `structured concurrency`_ runtime
- Qt_ for pristine low latency UIs
- pyqtgraph_ (which we've extended) for real-time charting and graphics
- ``polars`` ``numpy`` and ``numba`` for redic `fast numerics`_
- `apache arrow and parquet`_ for time-series storage
.. |travis| image:: https://img.shields.io/travis/pikers/piker/master.svg
:target: https://travis-ci.org/pikers/piker
potential projects we might integrate with soon,
- (already prototyped in ) techtonicdb_ for L2 book storage
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
.. _uv: https://docs.astral.sh/uv/
.. _trio: https://github.com/python-trio/trio
.. _tractor: https://github.com/goodboy/tractor
.. _structured concurrency: https://trio.discourse.group/
.. _marketstore: https://github.com/alpacahq/marketstore
.. _techtonicdb: https://github.com/0b01/tectonicdb
.. _Qt: https://www.qt.io/
.. _pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
.. _apache arrow and parquet: https://arrow.apache.org/faq/
.. _fast numerics: https://zerowithdot.com/python-numpy-and-pandas-performance/
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
.. _techtonicdb: https://github.com/0b01/tectonicdb
focus and features:
*******************
- 100% federated: your code, your hardware, your data feeds, your broker fills.
- zero web: low latency, native software that doesn't try to re-invent the OS
- maximal **privacy**: prevent brokers and mms from knowing your
planz; smack their spreads with dark volume.
- zero clutter: modal, context oriented UIs that echew minimalism, reduce
thought noise and encourage un-emotion.
- first class parallelism: built from the ground up on next-gen structured concurrency
primitives.
- traders first: broker/exchange/asset-class agnostic
- systems grounded: real-time financial signal processing that will
make any queuing or DSP eng juice their shorts.
- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
- data collaboration: every process and protocol is multi-host scalable.
- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.
fitting with these tenets, we're always open to new framework suggestions and ideas.
building the best looking, most reliable, keyboard friendly trading
platform is the dream; join the cause.
install
*******
``piker`` is currently under heavy pre-alpha development and as such
should be cloned from this repo and hacked on directly.
for a development install::
git clone git@github.com:pikers/piker.git
cd piker
virtualenv env
source ./env/bin/activate
pip install -r requirements.txt -e .
install for nixos
*****************
for users of `NixOS` we offer a development shell envoirment that can be
loaded with::
nix-shell develop.nix
this will setup the required python environment to run piker, make sure to
run::
pip install -r requirements.txt -e .
once after loading the shell
install for tinas
*****************
for windows peeps you can start by installing all the prerequisite software:
- install git with all default settings - https://git-scm.com/download/win
- install anaconda all default settings - https://www.anaconda.com/products/individual
- install microsoft build tools (check the box for Desktop development for C++, you might be able to uncheck some optional downloads) - https://visualstudio.microsoft.com/visual-cpp-build-tools/
- install visual studio code default settings - https://code.visualstudio.com/download
then, `crack a conda shell`_ and run the following commands::
mkdir code # create code directory
cd code # change directory to code
git clone https://github.com/pikers/piker.git # downloads piker installation package from github
cd piker # change directory to piker
conda create -n pikonda # creates conda environment named pikonda
conda activate pikonda # activates pikonda
conda install -c conda-forge python-levenshtein # in case it is not already installed
conda install pip # may already be installed
pip # will show if pip is installed
pip install -e . -r requirements.txt # install piker in editable mode
test Piker to see if it is working::
piker -b binance chart btcusdt.binance # formatting for loading a chart
piker -b kraken -b binance chart xbtusdt.kraken
piker -b kraken -b binance -b ib chart qqq.nasdaq.ib
piker -b ib chart tsla.nasdaq.ib
potential error::
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user\\AppData\\Roaming\\piker\\brokers.toml'
solution:
- navigate to file directory above (may be different on your machine, location should be listed in the error code)
- copy and paste file from 'C:\\Users\\user\\code\\data/brokers.toml' or create a blank file using notepad at the location above
Visual Studio Code setup:
- now that piker is installed we can set up vscode as the default terminal for running piker and editing the code
- open Visual Studio Code
- file --> Add Folder to Workspace --> C:\Users\user\code\piker (adds piker directory where all piker files are located)
- file --> Save Workspace As --> save it wherever you want and call it whatever you want, this is going to be your default workspace for running and editing piker code
- ctrl + shift + p --> start typing Python: Select Interpetter --> when the option comes up select it --> Select at the workspace level --> select the one that shows ('pikonda')
- change the default terminal to cmd.exe instead of powershell (default)
- now when you create a new terminal VScode should automatically activate you conda env so that piker can be run as the first command after a new terminal is created
also, try out fancyzones as part of powertoyz for a decent tiling windows manager to manage all the cool new software you are going to be running.
.. _conda installed: https://
.. _C++ build toolz: https://
.. _crack a conda shell: https://
.. _vscode: https://
.. link to the tina guide
.. _setup a coolio tiled wm console: https://
provider support
focus and feats:
****************
for live data feeds the in-progress set of supported brokers is:
fitting with these tenets, we're always open to new
framework/lib/service interop suggestions and ideas!
- IB_ via ``ib_insync``, also see our `container docs`_
- binance_ and kraken_ for crypto over their public websocket API
- questrade_ (ish) which comes with effectively free L1
- **100% federated**:
your code, your hardware, your data feeds, your broker fills.
coming soon...
- **zero web**:
low latency as a prime objective, native UIs and modern IPC
protocols without trying to re-invent the "OS-as-an-app"..
- webull_ via the reverse engineered public API
- yahoo via yliveticker_
- **maximal privacy**:
prevent brokers and mms from knowing your planz; smack their
spreads with dark volume from a VPN tunnel.
if you want your broker supported and they have an API let us know.
- **zero clutter**:
modal, context oriented UIs that eschew minimalism, reduce thought
noise and encourage un-emotion.
.. _IB: https://interactivebrokers.github.io/tws-api/index.html
.. _container docs: https://github.com/pikers/piker/tree/master/dockering/ib
.. _questrade: https://www.questrade.com/api/documentation
.. _kraken: https://www.kraken.com/features/api#public-market-data
.. _binance: https://github.com/pikers/piker/pull/182
.. _webull: https://github.com/tedchou12/webull
.. _yliveticker: https://github.com/yahoofinancelive/yliveticker
.. _coinbase: https://docs.pro.coinbase.com/#websocket-feed
- **first class parallelism**:
built from the ground up on a next-gen structured concurrency
supervision sys.
- **traders first**:
broker/exchange/venue/asset-class/money-sys agnostic
- **systems grounded**:
real-time financial signal processing (fsp) that will make any
queuing or DSP eng juice their shorts.
- **non-tina UX**:
sleek, powerful keyboard driven interaction with expected use in
tiling wms (or maybe even a DDE).
- **data collab at scale**:
every actor-process and protocol is multi-host aware.
- **fight club ready**:
zero interest in adoption by suits; no corporate friendly license,
ever.
building the hottest looking, fastest, most reliable, keyboard
friendly FOSS trading platform is the dream; join the cause.
check out our charts
********************
bet you weren't expecting this from the foss::
a sane install with `uv`
************************
bc why install with `python` when you can go faster with `rust` ::
piker -l info -b kraken -b binance chart btcusdt.binance --pdb
uv lock
this runs the main chart (currently with 1m sampled OHLC) in in debug
mode and you can practice paper trading using the following
micro-manual:
hacky install on nixos
**********************
``NixOS`` is our core devs' distro of choice for which we offer
a stringently defined development shell environment that can be loaded with::
``order_mode`` (
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
nix-shell default.nix
``search_mode`` (
``ctl-l`` or ``ctl-space`` to open,
``ctl-c`` or ``ctl-space`` to close
) :
start a chart
*************
run a realtime OHLCV chart stand-alone::
- begin typing to have symbol search automatically lookup
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation
piker -l info chart btcusdt.spot.binance xmrusdt.spot.kraken
this runs a chart UI (with 1m sampled OHLCV) and shows 2 spot markets from 2 diff cexes
overlayed on the same graph. Use of `piker` without first starting
a daemon (`pikerd` - see below) means there is an implicit spawning of the
multi-actor-runtime (implemented as a `tractor` app).
For additional subsystem feats available through our chart UI see the
various sub-readmes:
- order control using a mouse-n-keyboard UX B)
- cross venue market-pair (what most call "symbol") search, select, overlay Bo
- financial-signal-processing (`piker.fsp`) write-n-reload to sub-chart BO
- src-asset derivatives scan for anal, like the infamous "max pain" XO
you can also configure your position allocation limits from the
sidepane.
run in distributed mode
***********************
start the service manager and data feed daemon in the background and
connect to it::
spawn a daemon standalone
*************************
we call the root actor-process the ``pikerd``. it can be (and is
recommended normally to be) started separately from the ``piker
chart`` program::
pikerd -l info --pdb
the daemon does nothing until a ``piker``-client (like ``piker
chart``) connects and requests some particular sub-system. for
a connecting chart ``pikerd`` will spawn and manage at least,
connect your chart::
- a data-feed daemon: ``datad`` which does all the work of comms with
the backend provider (in this case the ``binance`` cex).
- a paper-trading engine instance, ``paperboi.binance``, (if no live
account has been configured) which allows for auto/manual order
control against the live quote stream.
piker -l info -b kraken -b binance chart xmrusdt.binance --pdb
*using* an actor-service (aka micro-daemon) manager which dynamically
supervises various sub-subsystems-as-services throughout the ``piker``
runtime-stack.
now you can (implicitly) connect your chart::
enjoy persistent real-time data feeds tied to daemon lifetime. the next
time you spawn a chart it will load much faster since the data feed has
been cached and is now always running live in the background until you
kill ``pikerd``.
piker chart btcusdt.spot.binance
since ``pikerd`` was started separately you can now enjoy a persistent
real-time data stream tied to the daemon-tree's lifetime. i.e. the next
time you spawn a chart it will obviously not only load much faster
(since the underlying ``datad.binance`` is left running with its
in-memory IPC data structures) but also the data-feed and any order
mgmt states should be persistent until you finally cancel ``pikerd``.
if anyone asks you what this project is about
*********************************************
you don't talk about it.
you don't talk about it; just use it.
how do i get involved?
@@ -239,6 +165,15 @@ enter the matrix.
how come there ain't that many docs
***********************************
suck it up, learn the code; no one is trying to sell you on anything.
also, we need lotsa help so if you want to start somewhere and can't
necessarily write serious code, this might be the place for you!
i mean we want/need them but building the core right has been higher
prio than marketing (and likely will stay that way Bp).
soo, suck it up bc,
- no one is trying to sell you on anything
- learning the code base is prolly way more valuable
- the UI/UXs are intended to be "intuitive" for any hacker..
we obviously need tonz of help so if you want to start somewhere and
can't necessarily write "advanced" concurrent python/rust code, then
helping to document literally anything might be the place for you!


@@ -1,19 +1,52 @@
[questrade]
refresh_token = ""
access_token = ""
api_server = "https://api06.iq.questrade.com/"
expires_in = 1800
token_type = "Bearer"
expires_at = 1616095326.355846
################
# ---- CEXY ----
################
[binance]
accounts.paper = 'paper'
accounts.usdtm = 'futes'
futes.use_testnet = false
futes.api_key = ''
futes.api_secret = ''
accounts.spot = 'spot'
spot.use_testnet = false
spot.api_key = ''
spot.api_secret = ''
[deribit]
key_id = ''
key_secret = ''
[kraken]
key_descr = "api_0"
api_key = ""
secret = ""
key_descr = ''
api_key = ''
secret = ''
[kucoin]
key_id = ''
key_secret = ''
key_passphrase = ''
################
# -- BROKERZ ---
################
[questrade]
refresh_token = ''
access_token = ''
api_server = 'https://api06.iq.questrade.com/'
expires_in = 1800
token_type = 'Bearer'
expires_at = 1616095326.355846
[ib]
hosts = [
"127.0.0.1",
'127.0.0.1',
]
# XXX: the order in which ports will be scanned
# (by the `brokerd` daemon-actor)
@@ -30,8 +63,8 @@ ports = [
# is not supported so you have to manually download
# and XML report and put it in a location that can be
# accessed by the ``brokerd.ib`` backend code for parsing.
flex_token = '666666666666666666666666'
flex_trades_query_id = '666666' # live account
flex_token = ''
flex_trades_query_id = '' # live account
# when clients are being scanned this determines
# which clients are preferred to be used for data
@@ -47,11 +80,6 @@ prefer_data_account = [
# the order in which accounts will be selectable
# in the order mode UI (if found via clients during
# API-app scanning)when a new symbol is loaded.
paper = "XX0000000"
margin = "X0000000"
ira = "X0000000"
[deribit]
key_id = 'XXXXXXXX'
key_secret = 'Xx_XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx'
paper = 'XX0000000'
margin = 'X0000000'
ira = 'X0000000'

config/conf.toml (new file, mode 100644, +12 lines)

@@ -0,0 +1,12 @@
[network]
tsdb.backend = 'marketstore'
tsdb.host = 'localhost'
tsdb.grpc_port = 5995
[ui]
# set custom font + size which will scale entire UI
# font_size = 16
# font_name = 'Monospaced'
# colorscheme = 'default' # UNUSED
# graphics.update_throttle = 60 # Hz # TODO
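One way client code might consume the `[network]` section above, as a
sketch using the stdlib `tomllib` (defaults mirror the template)::

    import tomllib
    from pathlib import Path

    conf = tomllib.loads(Path('conf.toml').read_text())
    # the tsdb sub-table is optional, so don't hard-require it:
    tsdb = conf.get('network', {}).get('tsdb', {})
    addr = (
        tsdb.get('host', 'localhost'),
        tsdb.get('grpc_port', 5995),
    )
    print(f"contacting {tsdb.get('backend', 'marketstore')} @ {addr}")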

default.nix (new file, mode 100644, +134 lines)

@@ -0,0 +1,134 @@
with (import <nixpkgs> {});
let
glibStorePath = lib.getLib glib;
zlibStorePath = lib.getLib zlib;
zstdStorePath = lib.getLib zstd;
dbusStorePath = lib.getLib dbus;
libGLStorePath = lib.getLib libGL;
freetypeStorePath = lib.getLib freetype;
qt6baseStorePath = lib.getLib qt6.qtbase;
fontconfigStorePath = lib.getLib fontconfig;
libxkbcommonStorePath = lib.getLib libxkbcommon;
xcbutilcursorStorePath = lib.getLib xcb-util-cursor;
qtpyStorePath = lib.getLib python312Packages.qtpy;
pyqt6StorePath = lib.getLib python312Packages.pyqt6;
pyqt6SipStorePath = lib.getLib python312Packages.pyqt6-sip;
rapidfuzzStorePath = lib.getLib python312Packages.rapidfuzz;
qdarkstyleStorePath = lib.getLib python312Packages.qdarkstyle;
xorgLibX11StorePath = lib.getLib xorg.libX11;
xorgLibxcbStorePath = lib.getLib xorg.libxcb;
xorgxcbutilwmStorePath = lib.getLib xorg.xcbutilwm;
xorgxcbutilimageStorePath = lib.getLib xorg.xcbutilimage;
xorgxcbutilerrorsStorePath = lib.getLib xorg.xcbutilerrors;
xorgxcbutilkeysymsStorePath = lib.getLib xorg.xcbutilkeysyms;
xorgxcbutilrenderutilStorePath = lib.getLib xorg.xcbutilrenderutil;
in
stdenv.mkDerivation {
name = "piker-qt6-uv";
buildInputs = [
# System requirements.
glib
zlib
dbus
zstd
libGL
freetype
qt6.qtbase
libgcc.lib
fontconfig
libxkbcommon
# Xorg requirements
xcb-util-cursor
xorg.libxcb
xorg.libX11
xorg.xcbutilwm
xorg.xcbutilimage
xorg.xcbutilerrors
xorg.xcbutilkeysyms
xorg.xcbutilrenderutil
# Python requirements.
python312Full
python312Packages.uv
python312Packages.qdarkstyle
python312Packages.rapidfuzz
python312Packages.pyqt6
python312Packages.qtpy
];
src = null;
shellHook = ''
set -e
# Set the Qt plugin path
# export QT_DEBUG_PLUGINS=1
QTBASE_PATH="${qt6baseStorePath}/lib"
QT_PLUGIN_PATH="$QTBASE_PATH/qt-6/plugins"
QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"
LIB_GCC_PATH="${libgcc.lib}/lib"
GLIB_PATH="${glibStorePath}/lib"
ZSTD_PATH="${zstdStorePath}/lib"
ZLIB_PATH="${zlibStorePath}/lib"
DBUS_PATH="${dbusStorePath}/lib"
LIBGL_PATH="${libGLStorePath}/lib"
FREETYPE_PATH="${freetypeStorePath}/lib"
FONTCONFIG_PATH="${fontconfigStorePath}/lib"
LIB_XKB_COMMON_PATH="${libxkbcommonStorePath}/lib"
XCB_UTIL_CURSOR_PATH="${xcbutilcursorStorePath}/lib"
XORG_LIB_X11_PATH="${xorgLibX11StorePath}/lib"
XORG_LIB_XCB_PATH="${xorgLibxcbStorePath}/lib"
XORG_XCB_UTIL_IMAGE_PATH="${xorgxcbutilimageStorePath}/lib"
XORG_XCB_UTIL_WM_PATH="${xorgxcbutilwmStorePath}/lib"
XORG_XCB_UTIL_RENDER_UTIL_PATH="${xorgxcbutilrenderutilStorePath}/lib"
XORG_XCB_UTIL_KEYSYMS_PATH="${xorgxcbutilkeysymsStorePath}/lib"
XORG_XCB_UTIL_ERRORS_PATH="${xorgxcbutilerrorsStorePath}/lib"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_GCC_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$DBUS_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$GLIB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZLIB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZSTD_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBGL_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FONTCONFIG_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FREETYPE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_XKB_COMMON_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XCB_UTIL_CURSOR_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_X11_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_XCB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_IMAGE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_WM_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_RENDER_UTIL_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_KEYSYMS_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_ERRORS_PATH"
export LD_LIBRARY_PATH
RPDFUZZ_PATH="${rapidfuzzStorePath}/lib/python3.12/site-packages"
QDRKSTYLE_PATH="${qdarkstyleStorePath}/lib/python3.12/site-packages"
QTPY_PATH="${qtpyStorePath}/lib/python3.12/site-packages"
PYQT6_PATH="${pyqt6StorePath}/lib/python3.12/site-packages"
PYQT6_SIP_PATH="${pyqt6SipStorePath}/lib/python3.12/site-packages"
PATCH="$PATCH:$RPDFUZZ_PATH"
PATCH="$PATCH:$QDRKSTYLE_PATH"
PATCH="$PATCH:$QTPY_PATH"
PATCH="$PATCH:$PYQT6_PATH"
PATCH="$PATCH:$PYQT6_SIP_PATH"
export PATCH
# Install deps
uv lock
'';
}


@@ -1,18 +1,34 @@
with (import <nixpkgs> {});
with python310Packages;
stdenv.mkDerivation {
name = "pip-env";
name = "poetry-env";
buildInputs = [
# System requirements.
readline
# TODO: hacky non-poetry install stuff we need to get rid of!!
poetry
# virtualenv
# setuptools
# pip
# Python requirements (enough to get a virtualenv going).
python310Full
virtualenv
setuptools
pyqt5
pip
python311Full
# obviously, and see below for hacked linking
python311Packages.pyqt5
python311Packages.pyqt5_sip
# python311Packages.qtpy
# numerics deps
python311Packages.levenshtein
python311Packages.fastparquet
python311Packages.polars
];
# environment.sessionVariables = {
# LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib";
# };
src = null;
shellHook = ''
# Allow the use of wheels.
@@ -20,13 +36,12 @@ stdenv.mkDerivation {
# Augment the dynamic linker path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${R}/lib/R/lib:${readline}/lib
export QT_QPA_PLATFORM_PLUGIN_PATH="${qt5.qtbase.bin}/lib/qt-${qt5.qtbase.version}/plugins";
if [ ! -d "venv" ]; then
virtualenv venv
if [ ! -d ".venv" ]; then
poetry install --with uis
fi
source venv/bin/activate
poetry shell
'';
}


@@ -19,8 +19,9 @@ services:
# other image tags available:
# https://github.com/waytrade/ib-gateway-docker#supported-tags
# image: waytrade/ib-gateway:981.3j
image: waytrade/ib-gateway:1012.2i
# image: waytrade/ib-gateway:1012.2i
image: ghcr.io/gnzsnz/ib-gateway:latest
restart: 'no' # restart on boot whenever there's a crash or user clicks
network_mode: 'host'


@@ -117,9 +117,57 @@ SecondFactorDevice=
# If you use the IBKR Mobile app for second factor authentication,
# and you fail to complete the process before the time limit imposed
# by IBKR, you can use this setting to tell IBC to exit: arrangements
# can then be made to automatically restart IBC in order to initiate
# the login sequence afresh. Otherwise, manual intervention at TWS's
# by IBKR, this setting tells IBC whether to automatically restart
# the login sequence, giving you another opportunity to complete
# second factor authentication.
#
# Permitted values are 'yes' and 'no'.
#
# If this setting is not present or has no value, then the value
# of the deprecated ExitAfterSecondFactorAuthenticationTimeout is
# used instead. If this also has no value, then this setting defaults
# to 'no'.
#
# NB: you must be using IBC v3.14.0 or later to use this setting:
# earlier versions ignore it.
ReloginAfterSecondFactorAuthenticationTimeout=
# This setting is only relevant if
# ReloginAfterSecondFactorAuthenticationTimeout is set to 'yes',
# or if ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
#
# It controls how long (in seconds) IBC waits for login to complete
# after the user acknowledges the second factor authentication
# alert at the IBKR Mobile app. If login has not completed after
# this time, IBC terminates.
# The default value is 60.
SecondFactorAuthenticationExitInterval=
# This setting specifies the timeout for second factor authentication
# imposed by IB. The value is in seconds. You should not change this
# setting unless you have reason to believe that IB has changed the
# timeout. The default value is 180.
SecondFactorAuthenticationTimeout=180
# DEPRECATED SETTING
# ------------------
#
# ExitAfterSecondFactorAuthenticationTimeout - THIS SETTING WILL BE
# REMOVED IN A FUTURE RELEASE. For IBC version 3.14.0 and later, see
# the notes for ReloginAfterSecondFactorAuthenticationTimeout above.
#
# For IBC versions earlier than 3.14.0: If you use the IBKR Mobile
# app for second factor authentication, and you fail to complete the
# process before the time limit imposed by IBKR, you can use this
# setting to tell IBC to exit: arrangements can then be made to
# automatically restart IBC in order to initiate the login sequence
# afresh. Otherwise, manual intervention at TWS's
# Second Factor Authentication dialog is needed to complete the
# login.
#
@@ -132,29 +180,18 @@ SecondFactorDevice=
ExitAfterSecondFactorAuthenticationTimeout=no
# This setting is only relevant if
# ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
#
# It controls how long (in seconds) IBC waits for login to complete
# after the user acknowledges the second factor authentication
# alert at the IBKR Mobile app. If login has not completed after
# this time, IBC terminates.
# The default value is 40.
SecondFactorAuthenticationExitInterval=
# Trading Mode
# ------------
#
# TWS 955 introduced a new Trading Mode combo box on its login
# dialog. This indicates whether the live account or the paper
# trading account corresponding to the supplied credentials is
# to be used. The allowed values are 'live' (the default) and
# 'paper'. For earlier versions of TWS this setting has no
# effect.
# This indicates whether the live account or the paper trading
# account corresponding to the supplied credentials is to be used.
# The allowed values are 'live' (the default) and 'paper'.
#
# If this is set to 'live', then the credentials for the live
# account must be supplied. If it is set to 'paper', then either
# the live or the paper-trading credentials may be supplied.
TradingMode=
TradingMode=paper
# Paper-trading Account Warning
@@ -188,7 +225,7 @@ AcceptNonBrokerageAccountWarning=yes
#
# The default value is 60.
LoginDialogDisplayTimeout=20
LoginDialogDisplayTimeout=60
@@ -217,7 +254,15 @@ LoginDialogDisplayTimeout=20
# but they are acceptable.
#
# The default is the current working directory when IBC is
# started.
# started, unless the TWS_SETTINGS_PATH setting in the relevant
# start script is set.
#
# If both this setting and TWS_SETTINGS_PATH are set, then this
# setting takes priority. Note that if they have different values,
# auto-restart will not work.
#
# NB: this setting is now DEPRECATED. You should use the
# TWS_SETTINGS_PATH setting in the relevant start script.
IbDir=/root/Jts
@@ -284,15 +329,32 @@ ExistingSessionDetectedAction=primary
# Override TWS API Port Number
# ----------------------------
#
# If OverrideTwsApiPort is set to an integer, IBC changes the
# 'Socket port' in TWS's API configuration to that number shortly
# after startup. Leaving the setting blank will make no change to
# the current setting. This setting is only intended for use in
# certain specialized situations where the port number needs to
# If OverrideTwsApiPort is set to an integer, IBC changes the
# 'Socket port' in TWS's API configuration to that number shortly
# after startup (but note that for the FIX Gateway, this setting is
# actually stored in jts.ini rather than the Gateway's settings
# file). Leaving the setting blank will make no change to
# the current setting. This setting is only intended for use in
# certain specialized situations where the port number needs to
# be set dynamically at run-time, and for the FIX Gateway: most
# non-FIX users will never need it, so don't use it unless you know
# you need it.
OverrideTwsApiPort=4000
# Override TWS Master Client ID
# -----------------------------
#
# If OverrideTwsMasterClientID is set to an integer, IBC changes the
# 'Master Client ID' value in TWS's API configuration to that
# value shortly after startup. Leaving the setting blank will make
# no change to the current setting. This setting is only intended
# for use in certain specialized situations where the value needs to
# be set dynamically at run-time: most users will never need it,
# so don't use it unless you know you need it.
; OverrideTwsApiPort=4002
OverrideTwsMasterClientID=
# Read-only Login
@@ -302,11 +364,13 @@ ExistingSessionDetectedAction=primary
# account security programme, the user will not be asked to perform
# the second factor authentication action, and login to TWS will
# occur automatically in read-only mode: in this mode, placing or
# managing orders is not allowed. If set to 'no', and the user is
# enrolled in IB's account security programme, the user must perform
# the relevant second factor authentication action to complete the
# login.
# managing orders is not allowed.
#
# If set to 'no', and the user is enrolled in IB's account security
# programme, the second factor authentication process is handled
# according to the Second Factor Authentication Settings described
# elsewhere in this file.
#
# If the user is not enrolled in IB's account security programme,
# this setting is ignored. The default is 'no'.
@@ -326,7 +390,44 @@ ReadOnlyLogin=no
# set the relevant checkbox (this only needs to be done once) and
# not provide a value for this setting.
ReadOnlyApi=no
ReadOnlyApi=
# API Precautions
# ---------------
#
# These settings relate to the corresponding 'Precautions' checkboxes in the
# API section of the Global Configuration dialog.
#
# For all of these, the accepted values are:
# - 'yes' sets the checkbox
# - 'no' clears the checkbox
# - if not set, the existing TWS/Gateway configuration is unchanged
#
# NB: these settings are really only supplied for the benefit of new TWS
# or Gateway instances that are being automatically installed and
# started without user intervention, or where user settings are not preserved
# between sessions (eg some Docker containers). Where a user is involved, they
# should use the Global Configuration to set the relevant checkboxes and not
# provide values for these settings.
BypassOrderPrecautions=
BypassBondWarning=
BypassNegativeYieldToWorstConfirmation=
BypassCalledBondWarning=
BypassSameActionPairTradeWarning=
BypassPriceBasedVolatilityRiskWarning=
BypassUSStocksMarketDataInSharesWarning=
BypassRedirectOrderWarning=
BypassNoOverfillProtectionPrecaution=
# Market data size for US stocks - lots or shares
@@ -381,54 +482,145 @@ AcceptBidAskLastSizeDisplayUpdateNotification=accept
SendMarketDataInLotsForUSstocks=
# Trusted API Client IPs
# ----------------------
#
# NB: THIS SETTING IS ONLY RELEVANT FOR THE GATEWAY, AND ONLY WHEN FIX=yes.
# In all other cases it is ignored.
#
# This is a list of IP addresses separated by commas. API clients with IP
# addresses in this list are able to connect to the API without Gateway
# generating the 'Incoming connection' popup.
#
# Note that 127.0.0.1 is always permitted to connect, so do not include it
# in this setting.
TrustedTwsApiClientIPs=
# Reset Order ID Sequence
# -----------------------
#
# The setting resets the order id sequence for orders submitted via the API, so
# that the next invocation of the `NextValidId` API callback will return the
# value 1. The reset occurs when TWS starts.
#
# Note that order ids are reset for all API clients, except those that have
# outstanding (ie incomplete) orders: their order id sequence carries on as
# before.
#
# Valid values are 'yes', 'true', 'false' and 'no'. The default is 'no'.
ResetOrderIdsAtStart=
# This setting specifies IBC's action when TWS displays the dialog asking for
# confirmation of a request to reset the API order id sequence.
#
# Note that the Gateway never displays this dialog, so this setting is ignored
# for a Gateway session.
#
# Valid values consist of two strings separated by a solidus '/'. The first
# value specifies the action to take when the order id reset request resulted
# from setting ResetOrderIdsAtStart=yes. The second specifies the action to
# take when the order id reset request is a result of the user clicking the
# 'Reset API order ID sequence' button in the API configuration. Each value
# must be one of the following:
#
# 'confirm'
# order ids will be reset
#
# 'reject'
# order ids will not be reset
#
# 'ignore'
# IBC will ignore the dialog. The user must take action.
#
# The default setting is ignore/ignore
# Examples:
#
# 'confirm/reject' - confirm order id reset only if ResetOrderIdsAtStart=yes
# and reject any user-initiated requests
#
# 'ignore/confirm' - user must decide what to do if ResetOrderIdsAtStart=yes
# and confirm user-initiated requests
#
# 'reject/ignore' - reject order id reset if ResetOrderIdsAtStart=yes but
# allow user to handle user-initiated requests
ConfirmOrderIdReset=
# =============================================================================
# 4. TWS Auto-Closedown
# 4. TWS Auto-Logoff and Auto-Restart
# =============================================================================
#
# IMPORTANT NOTE: Starting with TWS 974, this setting no longer
# works properly, because IB have changed the way TWS handles its
# autologoff mechanism.
# TWS and Gateway insist on being restarted every day. Two alternative
# automatic options are offered:
#
# You should now configure the TWS autologoff time to something
# convenient for you, and restart IBC each day.
# - Auto-Logoff: at a specified time, TWS shuts down tidily, without
# restarting.
#
# Alternatively, discontinue use of IBC and use the auto-relogin
# mechanism within TWS 974 and later versions (note that the
# auto-relogin mechanism provided by IB is not available if you
# use IBC).
# - Auto-Restart: at a specified time, TWS shuts down and then restarts
# without the user having to re-authenticate.
#
# The normal way to configure the time at which this happens is via the Lock
# and Exit section of the Configuration dialog. Once this time has been
# configured in this way, the setting persists until the user changes it again.
#
# However, there are situations where there is no user available to do this
# configuration, or where there is no persistent storage (for example some
# Docker images). In such cases, the auto-restart or auto-logoff time can be
# set whenever IBC starts with the settings below.
#
# The value, if specified, must be a time in HH:MM AM/PM format, for example
# 08:00 AM or 10:00 PM. Note that there must be a single space between the
# two parts of this value; also that midnight is "12:00 AM" and midday is
# "12:00 PM".
#
# If no value is specified for either setting, the currently configured
# settings will apply. If a value is supplied for one setting, the other
# setting is cleared. If values are supplied for both settings, only the
# auto-restart time is set, and the auto-logoff time is cleared.
#
# Note that for a normal TWS/Gateway installation with persistent storage
# (for example on a desktop computer) the value will be persisted as if the
# user had set it via the configuration dialog.
#
# If you choose to auto-restart, you should take note of the considerations
# described at the link below. Note that where this information mentions
# 'manual authentication', restarting IBC will do the job (IBKR does not
# recognise the existence of IBC in its documentation).
#
# https://www.interactivebrokers.com/en/software/tws/twsguide.htm#usersguidebook/configuretws/auto_restart_info.htm
#
# If you use the "RESTART" command via the IBC command server, and IBC is
# running any version of the Gateway (or a version of TWS earlier than 1018),
# note that this will set the Auto-Restart time in Gateway/TWS's configuration
# dialog to the time at which the restart actually happens (which may be up to
# a minute after the RESTART command is issued). To prevent future auto-
# restarts at this time, you must make sure you have set AutoLogoffTime or
# AutoRestartTime to your desired value before running IBC. NB: this does not
# apply to TWS from version 1018 onwards.
# Set to yes or no (lower case).
#
# yes means allow TWS to shut down automatically at its
# specified shutdown time, which is set via the TWS
# configuration menu.
#
# no means TWS never shuts down automatically.
#
# NB: IB recommends that you do not keep TWS running
# continuously. If you set this setting to 'no', you may
# experience incorrect TWS operation.
#
# NB: the default for this setting is 'no'. Since this will
# only work properly with TWS versions earlier than 974, you
# should explicitly set this to 'yes' for version 974 and later.
IbAutoClosedown=yes
AutoLogoffTime=
AutoRestartTime=
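# For illustration only (this example block is not part of the stock
# config): to have TWS restart itself every night at 11:30 PM, using the
# HH:MM AM/PM format described above, while leaving the auto-logoff time
# cleared, you would set:
#
#   AutoLogoffTime=
#   AutoRestartTime=11:30 PM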
# =============================================================================
# 5. TWS Tidy Closedown Time
# =============================================================================
#
# NB: starting with TWS 974 this is no longer a useful option
# because both TWS and Gateway now have the same auto-logoff
# mechanism, and IBC can no longer avoid this.
# Specifies a time at which TWS will close down tidily, with no restart.
#
# Note that giving this setting a value does not change TWS's
# auto-logoff in any way: any setting will be additional to the
# TWS auto-logoff.
# There is little reason to use this setting. It is similar to AutoLogoffTime,
# but can include a day-of-the-week, whereas AutoLogoffTime and AutoRestartTime
# apply every day. So for example you could use ClosedownAt in conjunction with
# AutoRestartTime to shut down TWS on Friday evenings after the markets
# close, without it running on Saturday as well.
#
# To tell IBC to tidily close TWS at a specified time every
# day, set this value to <hh:mm>, for example:
@ -487,7 +679,7 @@ AcceptIncomingConnectionAction=reject
# no means the dialog remains on display and must be
# handled by the user.
AllowBlindTrading=yes
AllowBlindTrading=no
# Save Settings on a Schedule
@ -530,6 +722,26 @@ AllowBlindTrading=yes
SaveTwsSettingsAt=
# Confirm Crypto Currency Orders Automatically
# --------------------------------------------
#
# When you place an order for a cryptocurrency contract, a dialog is displayed
# asking you to confirm that you want to place the order, and notifying you
# that you are placing an order to trade cryptocurrency with Paxos, a New York
# limited trust company, and not at Interactive Brokers.
#
# transmit means that the order will be placed automatically, and the
# dialog will then be closed
#
# cancel means that the order will not be placed, and the dialog will
# then be closed
#
# manual means that IBC will take no action and the user must deal
# with the dialog
ConfirmCryptoCurrencyOrders=transmit
# =============================================================================
# 7. Settings Specific to Indian Versions of TWS
@ -566,13 +778,17 @@ DismissNSEComplianceNotice=yes
#
# The port number that IBC listens on for commands
# such as "STOP". DO NOT set this to the port number
# used for TWS API connections. There is no good reason
# to change this setting unless the port is used by
# some other application (typically another instance of
# IBC). The default value is 0, which tells IBC not to
# start the command server
# used for TWS API connections.
#
# The convention is to use 7462 for this port,
# but it must be set to a different value from any other
# IBC instance that might run at the same time.
#
# The default value is 0, which tells IBC not to start
# the command server
#CommandServerPort=7462
CommandServerPort=0
# Permitted Command Sources
@ -583,19 +799,19 @@ DismissNSEComplianceNotice=yes
# IBC. Commands can always be sent from the
# same host as IBC is running on.
ControlFrom=127.0.0.1
ControlFrom=
# Address for Receiving Commands
# ------------------------------
#
# Specifies the IP address on which the Command Server
# is so listen. For a multi-homed host, this can be used
# is to listen. For a multi-homed host, this can be used
# to specify that connection requests are only to be
# accepted on the specified address. The default is to
# accept connection requests on all local addresses.
BindAddress=127.0.0.1
BindAddress=
# Command Prompt
@ -621,7 +837,7 @@ CommandPrompt=
# information is sent. The default is that such information
# is not sent.
SuppressInfoMessages=no
SuppressInfoMessages=yes
@ -651,10 +867,10 @@ SuppressInfoMessages=no
# The LogStructureScope setting indicates which windows are
# eligible for structure logging:
#
# - if set to 'known', only windows that IBC recognizes
# are eligible - these are windows that IBC has some
# interest in monitoring, usually to take some action
# on the user's behalf;
# - (default value) if set to 'known', only windows that
# IBC recognizes are eligible - these are windows that
# IBC has some interest in monitoring, usually to take
# some action on the user's behalf;
#
# - if set to 'unknown', only windows that IBC does not
# recognize are eligible. Most windows displayed by
@ -667,9 +883,8 @@ SuppressInfoMessages=no
# - if set to 'all', then every window displayed by TWS
# is eligible.
#
# The default value is 'known'.
LogStructureScope=all
LogStructureScope=known
# When to Log Window Structure
@ -682,13 +897,15 @@ LogStructureScope=all
# structure of an eligible window the first time it
# is encountered;
#
# - if set to 'openclose', the structure is logged every
# time an eligible window is opened or closed;
#
# - if set to 'activate', the structure is logged every
# time an eligible window is made active;
#
# - if set to 'never' or 'no' or 'false', structure
# information is never logged.
# - (default value) if set to 'never' or 'no' or 'false',
# structure information is never logged.
#
# The default value is 'never'.
LogStructureWhen=never
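# For illustration only (not part of the stock config): combining the two
# settings above so that the structure of every TWS window is logged each
# time it becomes active would look like:
#
#   LogStructureScope=all
#   LogStructureWhen=activate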
@ -708,4 +925,3 @@ LogStructureWhen=never
#LogComponents=

View File

@ -0,0 +1,91 @@
### NOTE: this is likely out of date given it was written some
years ago by a user who has not really contributed since.
install for tinas
*****************
for windows peeps you can start by installing all the prerequisite software:
- install git with all default settings - https://git-scm.com/download/win
- install anaconda all default settings - https://www.anaconda.com/products/individual
- install microsoft build tools (check the box for Desktop development for C++, you might be able to uncheck some optional downloads) - https://visualstudio.microsoft.com/visual-cpp-build-tools/
- install visual studio code default settings - https://code.visualstudio.com/download
then, `crack a conda shell`_ and run the following commands::
mkdir code # create code directory
cd code # change directory to code
git clone https://github.com/pikers/piker.git # downloads piker installation package from github
cd piker # change directory to piker
conda create -n pikonda # creates conda environment named pikonda
conda activate pikonda # activates pikonda
conda install -c conda-forge python-levenshtein # in case it is not already installed
conda install pip # may already be installed
pip # will show if pip is installed
pip install -e . -r requirements.txt # install piker in editable mode
test Piker to see if it is working::
piker -b binance chart btcusdt.binance # formatting for loading a chart
piker -b kraken -b binance chart xbtusdt.kraken
piker -b kraken -b binance -b ib chart qqq.nasdaq.ib
piker -b ib chart tsla.nasdaq.ib
potential error::
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user\\AppData\\Roaming\\piker\\brokers.toml'
solution:
- navigate to the directory above (it may be different on your machine; the location should be listed in the error message)
- copy and paste the file from 'C:\\Users\\user\\code\\data/brokers.toml' or create a blank file using notepad at the location above
Visual Studio Code setup:
- now that piker is installed we can set up vscode as the default terminal for running piker and editing the code
- open Visual Studio Code
- file --> Add Folder to Workspace --> C:\Users\user\code\piker (adds piker directory where all piker files are located)
- file --> Save Workspace As --> save it wherever you want and call it whatever you want, this is going to be your default workspace for running and editing piker code
- ctrl + shift + p --> start typing Python: Select Interpreter --> when the option comes up select it --> Select at the workspace level --> select the one that shows ('pikonda')
- change the default terminal to cmd.exe instead of powershell (default)
- now when you create a new terminal VScode should automatically activate your conda env so that piker can be run as the first command after a new terminal is created
also, try out fancyzones as part of powertoyz for a decent tiling windows manager to manage all the cool new software you are going to be running.
.. _conda installed: https://
.. _C++ build toolz: https://
.. _crack a conda shell: https://
.. _vscode: https://
.. link to the tina guide
.. _setup a coolio tiled wm console: https://
provider support
****************
for live data feeds the in-progress set of supported brokers is:
- IB_ via ``ib_insync``, also see our `container docs`_
- binance_ and kraken_ for crypto over their public websocket API
- questrade_ (ish) which comes with effectively free L1
coming soon...
- webull_ via the reverse engineered public API
- yahoo via yliveticker_
if you want your broker supported and they have an API let us know.
.. _IB: https://interactivebrokers.github.io/tws-api/index.html
.. _container docs: https://github.com/pikers/piker/tree/master/dockering/ib
.. _questrade: https://www.questrade.com/api/documentation
.. _kraken: https://www.kraken.com/features/api#public-market-data
.. _binance: https://github.com/pikers/piker/pull/182
.. _webull: https://github.com/tedchou12/webull
.. _yliveticker: https://github.com/yahoofinancelive/yliveticker
.. _coinbase: https://docs.pro.coinbase.com/#websocket-feed

View File

@ -0,0 +1,263 @@
# from pprint import pformat
from functools import partial
from decimal import Decimal
from typing import Callable
import tractor
import trio
from uuid import uuid4
from piker.service import maybe_open_pikerd
from piker.accounting import dec_digits
from piker.clearing import (
open_ems,
OrderClient,
)
# TODO: we should probably expose these top level in this subsys?
from piker.clearing._messages import (
Order,
Status,
BrokerdPosition,
)
from piker.data import (
iterticks,
Flume,
open_feed,
Feed,
# ShmArray,
)
# TODO: handle other statuses:
# - fills, errors, and position tracking
async def wait_for_order_status(
trades_stream: tractor.MsgStream,
oid: str,
expect_status: str,
) -> tuple[
list[Status],
list[BrokerdPosition],
]:
'''
Wait for a specific order status for a given dialog, return msg flow
up to that msg and any position update msgs in a tuple.
'''
# Wait for position message before moving on to verify flow(s)
# for the multi-order position entry/exit.
status_msgs: list[Status] = []
pp_msgs: list[BrokerdPosition] = []
async for msg in trades_stream:
match msg:
case {'name': 'position'}:
ppmsg = BrokerdPosition(**msg)
pp_msgs.append(ppmsg)
case {
'name': 'status',
}:
msg = Status(**msg)
status_msgs.append(msg)
# if we get the status we expect then return all
# collected msgs from the brokerd dialog up to the
# expected msg B)
if (
msg.resp == expect_status
and msg.oid == oid
):
return status_msgs, pp_msgs
async def bot_main():
'''
Boot the piker runtime, open an ems connection, submit
and process orders statuses in real-time.
'''
ll: str = 'info'
# open an order ctl client, live data feed, trio nursery for
# spawning an order trailer task
client: OrderClient
trades_stream: tractor.MsgStream
feed: Feed
accounts: list[str]
fqme: str = 'btcusdt.usdtm.perp.binance'
async with (
# TODO: do this implicitly inside `open_ems()` ep below?
# init and sync actor-service runtime
maybe_open_pikerd(
loglevel=ll,
debug_mode=True,
),
open_ems(
fqme,
mode='paper', # {'live', 'paper'}
# mode='live', # for real-brokerd submissions
loglevel=ll,
) as (
client, # OrderClient
trades_stream, # tractor.MsgStream
_, # positions
accounts,
_, # dialogs
),
open_feed(
fqmes=[fqme],
loglevel=ll,
# TODO: if you want to throttle via downsampling
# how many tick updates your feed received on
# quote streams B)
# tick_throttle=10,
) as feed,
trio.open_nursery() as tn,
):
assert accounts
print(f'Loaded binance accounts: {accounts}')
flume: Flume = feed.flumes[fqme]
min_tick = Decimal(flume.mkt.price_tick)
min_tick_digits: int = dec_digits(min_tick)
price_round: Callable = partial(
round,
ndigits=min_tick_digits,
)
quote_stream: trio.abc.ReceiveChannel = feed.streams['binance']
# always keep live limit 0.03% below last
# clearing price
clear_margin: float = 0.9997
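# e.g. (illustrative numbers only) with a last clearing price of
# 50_000 the trailing limit would be submitted as
# price_round(0.9997 * 50_000) == 49_985.0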
async def trailer(
order: Order,
):
# ref shm OHLCV array history, if you want
# s_shm: ShmArray = flume.rt_shm
# m_shm: ShmArray = flume.hist_shm
# NOTE: if you wanted to frame ticks by type like
# the quote throttler does.. and this is probably
# faster in terms of getting the latest tick type
# embedded value of interest?
# from piker.data._sampling import frame_ticks
async for quotes in quote_stream:
for fqme, quote in quotes.items():
# print(
# f'{quote["symbol"]} -> {quote["ticks"]}\n'
# f'last 1s OHLC:\n{s_shm.array[-1]}\n'
# f'last 1m OHLC:\n{m_shm.array[-1]}\n'
# )
for tick in iterticks(
quote,
reverse=True,
# types=('trade', 'dark_trade'), # defaults
):
await client.update(
uuid=order.oid,
price=price_round(
clear_margin
*
tick['price']
),
)
msgs, pps = await wait_for_order_status(
trades_stream,
order.oid,
'open'
)
# if multiple clears per quote just
# skip to the next quote?
break
# get first live quote to be sure we submit the initial
# live buy limit low enough that it doesn't clear due to
# a stale initial price from the data feed layer!
first_ask_price: float | None = None
async for quotes in quote_stream:
for fqme, quote in quotes.items():
# print(quote['symbol'])
for tick in iterticks(quote, types=('ask',)):
first_ask_price: float = tick['price']
break
if first_ask_price:
break
# setup order dialog via first msg
price: float = price_round(
clear_margin
*
first_ask_price,
)
# compute a 1k USD sized pos
size: float = round(1e3/price, ndigits=3)
order = Order(
# docs on how this all works, bc even i'm not entirely
# clear XD. also we probably want to figure out how to
# offer both the paper engine running and the brokerd
# order ctl tasks with the ems choosing which stream to
# route msgs on given the account value!
account='paper', # use built-in paper clearing engine and .accounting
# account='binance.usdtm', # for live binance futes
oid=str(uuid4()),
exec_mode='live', # {'dark', 'live', 'alert'}
action='buy', # TODO: remove this from our schema?
size=size,
symbol=fqme,
price=price,
brokers=['binance'],
)
await client.send(order)
msgs, pps = await wait_for_order_status(
trades_stream,
order.oid,
'open',
)
assert not pps
assert msgs[-1].oid == order.oid
# start "trailer task" which tracks rt quote stream
tn.start_soon(trailer, order)
try:
# wait for ctl-c from user..
await trio.sleep_forever()
except KeyboardInterrupt:
# cancel the open order
await client.cancel(order.oid)
msgs, pps = await wait_for_order_status(
trades_stream,
order.oid,
'canceled'
)
raise
if __name__ == '__main__':
trio.run(bot_main)

138
flake.lock 100644
View File

@ -0,0 +1,138 @@
{
"nodes": {
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1689068808,
"narHash": "sha256-6ixXo3wt24N/melDWjq70UuHQLxGV8jZvooRanIHXw0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "919d646de7be200f3bf08cb76ae1f09402b6f9b4",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1689068808,
"narHash": "sha256-6ixXo3wt24N/melDWjq70UuHQLxGV8jZvooRanIHXw0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "919d646de7be200f3bf08cb76ae1f09402b6f9b4",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nix-github-actions": {
"inputs": {
"nixpkgs": [
"poetry2nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1688870561,
"narHash": "sha256-4UYkifnPEw1nAzqqPOTL2MvWtm3sNGw1UTYTalkTcGY=",
"owner": "nix-community",
"repo": "nix-github-actions",
"rev": "165b1650b753316aa7f1787f3005a8d2da0f5301",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nix-github-actions",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1692174805,
"narHash": "sha256-xmNPFDi/AUMIxwgOH/IVom55Dks34u1g7sFKKebxUm0=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "caac0eb6bdcad0b32cb2522e03e4002c8975c62e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"poetry2nix": {
"inputs": {
"flake-utils": "flake-utils_2",
"nix-github-actions": "nix-github-actions",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1692048894,
"narHash": "sha256-cDw03rso2V4CDc3Mll0cHN+ztzysAvdI8pJ7ybbz714=",
"ref": "refs/heads/pyqt6",
"rev": "b059ad4c3051f45d6c912e17747aae37a9ec1544",
"revCount": 2276,
"type": "git",
"url": "file:///home/lord_fomo/repos/poetry2nix"
},
"original": {
"type": "git",
"url": "file:///home/lord_fomo/repos/poetry2nix"
}
},
"root": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"poetry2nix": "poetry2nix"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

180
flake.nix 100644
View File

@ -0,0 +1,180 @@
# NOTE: to convert to a poetry2nix env like this here are the
# steps:
# - install poetry in your system nix config
# - convert the repo to use poetry using `poetry init`:
# https://python-poetry.org/docs/basic-usage/#initialising-a-pre-existing-project
# - then manually ensuring all deps are converted over:
# - add this file to the repo and commit it
# -
# GROKin tips:
# - CLI eps are (ostensibly) added via an `entry_points.txt`:
# - https://packaging.python.org/en/latest/specifications/entry-points/#file-format
# - https://github.com/nix-community/poetry2nix/blob/master/editable.nix#L49
{
description = "piker: trading gear for hackers (pkged with poetry2nix)";
inputs.flake-utils.url = "github:numtide/flake-utils";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
# see https://github.com/nix-community/poetry2nix/tree/master#api
inputs.poetry2nix = {
# url = "github:nix-community/poetry2nix";
# url = "github:K900/poetry2nix/qt5-explicit-deps";
url = "/home/lord_fomo/repos/poetry2nix";
inputs.nixpkgs.follows = "nixpkgs";
};
outputs = {
self,
nixpkgs,
flake-utils,
poetry2nix,
}:
# TODO: build cross-OS and use the `${system}` var thingy..
flake-utils.lib.eachDefaultSystem (system:
let
# use PWD as sources
projectDir = ./.;
pyproject = ./pyproject.toml;
poetrylock = ./poetry.lock;
# TODO: port to 3.11 and support both versions?
python = "python3.10";
# for more functions and examples.
# inherit
# (poetry2nix.legacyPackages.${system})
# mkPoetryApplication;
# pkgs = nixpkgs.legacyPackages.${system};
pkgs = nixpkgs.legacyPackages.x86_64-linux;
lib = pkgs.lib;
p2npkgs = poetry2nix.legacyPackages.x86_64-linux;
# define all pkg overrides per dep, see edgecases.md:
# https://github.com/nix-community/poetry2nix/blob/master/docs/edgecases.md
# TODO: add these into the json file:
# https://github.com/nix-community/poetry2nix/blob/master/overrides/build-systems.json
pypkgs-build-requirements = {
asyncvnc = [ "setuptools" ];
eventkit = [ "setuptools" ];
ib-insync = [ "setuptools" "flake8" ];
msgspec = [ "setuptools"];
pdbp = [ "setuptools" ];
pyqt6-sip = [ "setuptools" ];
tabcompleter = [ "setuptools" ];
tractor = [ "setuptools" ];
tricycle = [ "setuptools" ];
trio-typing = [ "setuptools" ];
trio-util = [ "setuptools" ];
xonsh = [ "setuptools" ];
};
# auto-generate override entries
p2n-overrides = p2npkgs.defaultPoetryOverrides.extend (self: super:
builtins.mapAttrs (package: build-requirements:
(builtins.getAttr package super).overridePythonAttrs (old: {
buildInputs = (
old.buildInputs or [ ]
) ++ (
builtins.map (
pkg: if builtins.isString pkg then builtins.getAttr pkg super else pkg
) build-requirements
);
})
) pypkgs-build-requirements
);
# override some ahead-of-time compiled extensions
# to be built with their wheels.
ahot_overrides = p2n-overrides.extend(
final: prev: {
# llvmlite = prev.llvmlite.override {
# preferWheel = false;
# };
# TODO: get this workin with p2n and nixpkgs..
# pyqt6 = prev.pyqt6.override {
# preferWheel = true;
# };
# NOTE: this DOESN'T work atm but after a fix
# to poetry2nix, it will and actually this line
# won't be needed - thanks @k900:
# https://github.com/nix-community/poetry2nix/pull/1257
pyqt5 = prev.pyqt5.override {
# withWebkit = false;
preferWheel = true;
};
# see PR from @k900:
# https://github.com/nix-community/poetry2nix/pull/1257
# pyqt5-qt5 = prev.pyqt5-qt5.override {
# withWebkit = false;
# preferWheel = true;
# };
# TODO: patch in an override for polars to build
# from src! See the details likely needed from
# the cryptography entry:
# https://github.com/nix-community/poetry2nix/blob/master/overrides/default.nix#L426-L435
polars = prev.polars.override {
preferWheel = true;
};
}
);
# WHY!? -> output-attrs that `nix develop` scans for:
# https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-develop.html#flake-output-attributes
in
rec {
packages = {
# piker = poetry2nix.legacyPackages.x86_64-linux.mkPoetryEditablePackage {
# editablePackageSources = { piker = ./piker; };
piker = p2npkgs.mkPoetryApplication {
projectDir = projectDir;
# SEE ABOVE for auto-genned input set, override
# buncha deps with extras.. like `setuptools` mostly.
# TODO: maybe propose a patch to p2n to show that you
# can even do this in the edgecases docs?
overrides = ahot_overrides;
# XXX: won't work on llvmlite..
# preferWheels = true;
};
};
# devShells.default = pkgs.mkShell {
# projectDir = projectDir;
# python = "python3.10";
# overrides = ahot_overrides;
# inputsFrom = [ self.packages.x86_64-linux.piker ];
# packages = packages;
# # packages = [ poetry2nix.packages.${system}.poetry ];
# };
# TODO: grok the difference here..
# - avoid re-cloning git repos on every develop entry..
# - ideally allow hacking on the src code of some deps
# (tractor, pyqtgraph, tomlkit, etc.) WITHOUT having to
# re-install them every time a change is made.
# - boot a usable xonsh inside the poetry virtualenv when
# defined via a custom entry point?
devShells.default = p2npkgs.mkPoetryEnv {
# env = p2npkgs.mkPoetryEnv {
projectDir = projectDir;
python = pkgs.python310;
overrides = ahot_overrides;
editablePackageSources = packages;
# piker = "./";
# tractor = "../tractor/";
# }; # wut?
};
}
); # end of .outputs scope
}

View File

@ -20,9 +20,6 @@ Cacheing apis and toolz.
'''
from collections import OrderedDict
from contextlib import (
asynccontextmanager as acm,
)
from typing import (
Awaitable,
Callable,
@ -30,12 +27,8 @@ from typing import (
TypeVar,
)
from tractor.trionics import maybe_open_context
from .brokers import get_brokermod
from .log import get_logger
log = get_logger(__name__)
T = TypeVar("T")
@ -104,21 +97,3 @@ def async_lifo_cache(
return decorated
return decorator
# TODO: move this to `.brokers.utils`..
@acm
async def open_cached_client(
brokername: str,
) -> 'Client': # noqa
'''
Get a cached broker client from the current actor's local vars.
If one has not been setup do it and cache it.
'''
brokermod = get_brokermod(brokername)
async with maybe_open_context(
acm_func=brokermod.get_client,
) as (cache_hit, client):
yield client

View File

@ -0,0 +1,16 @@
.accounting
-----------
A subsystem for transaction processing, storage and historical
measurement.
.pnl
----
BEP, the break even price: the price at which liquidating
a remaining position results in a zero PnL since the position was
"opened" in the destination asset.
PPU: price-per-unit: the "average cost" (in cumulative mean terms)
of the "entry" transactions which "make a position larger"; taking
a profit relative to this price means that you will "make more
profit than made prior" since the position was opened.
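As a rough numeric sketch of the PPU idea (illustrative only; the real
recurrence lives in the ``ppu()`` routine appearing later in this diff)::

    # two 1-unit "entry" clears at 10.0 then 12.0, costs ignored
    ppu_1 = (0 * 0 + 10.0 * 1) / 1      # -> 10.0
    ppu_2 = (ppu_1 * 1 + 12.0 * 1) / 2  # -> 11.0, the "average cost"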

View File

@ -21,38 +21,59 @@ for tendiez.
'''
from ..log import get_logger
from .calc import (
iter_by_dt,
)
from ._ledger import (
Transaction,
TransactionLedger,
open_trade_ledger,
)
from ._pos import (
load_pps_from_ledger,
Account,
load_account,
load_account_from_ledger,
open_pps,
open_account,
Position,
PpTable,
)
from ._mktinfo import (
Asset,
dec_digits,
digits_to_dec,
MktPair,
Symbol,
unpack_fqme,
_derivs as DerivTypes,
)
from ._allocate import (
mk_allocator,
Allocator,
)
log = get_logger(__name__)
__all__ = [
'Account',
'Allocator',
'Asset',
'dec_digits',
'digits_to_dec',
'MktPair',
'Position',
'PpTable',
'Symbol',
'Transaction',
'TransactionLedger',
'load_pps_from_ledger',
'dec_digits',
'digits_to_dec',
'iter_by_dt',
'load_account',
'load_account_from_ledger',
'mk_allocator',
'open_account',
'open_pps',
'open_trade_ledger',
'unpack_fqme',
'DerivTypes',
]
@ -61,14 +82,14 @@ def get_likely_pair(
dst: str,
bs_mktid: str,
) -> str:
) -> str | None:
'''
Attempt to get the likely trading pair matching a given destination
asset `dst: str`.
'''
try:
src_name_start = bs_mktid.rindex(src)
src_name_start: str = bs_mktid.rindex(src)
except (
ValueError, # substr not found
):
@ -76,28 +97,11 @@ def get_likely_pair(
# positions where the src fiat was used to
# buy some other dst which was further used
# to buy another dst..)
log.warning(
f'No src fiat {src} found in {bs_mktid}?'
)
return
# log.warning(
# f'No src fiat {src} found in {bs_mktid}?'
# )
return None
likely_dst = bs_mktid[:src_name_start]
likely_dst: str = bs_mktid[:src_name_start]
if likely_dst == dst:
return bs_mktid
if __name__ == '__main__':
import sys
from pprint import pformat
args = sys.argv
assert len(args) > 1, 'Specifiy account(s) from `brokers.toml`'
args = args[1:]
for acctid in args:
broker, name = acctid.split('.')
trans, updated_pps = load_pps_from_ledger(broker, name)
print(
f'Processing transactions into pps for {broker}:{acctid}\n'
f'{pformat(trans)}\n\n'
f'{pformat(updated_pps)}'
)
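A rough sketch (not library code, values are hypothetical) of what the
``rindex()`` based matching inside ``get_likely_pair()`` above does::

    src, dst = 'usd', 'xbt'           # src fiat and destination asset
    bs_mktid = 'xbtusd'               # broker-system market id
    start = bs_mktid.rindex(src)      # -> 3, right-most position of src
    assert bs_mktid[:start] == dst    # prefix matches dst -> 'xbtusd' is the likely pair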

View File

@ -24,8 +24,8 @@ from typing import Optional
from bidict import bidict
from ._pos import Position
from ._mktinfo import Symbol
from ..data.types import Struct
from . import MktPair
from piker.types import Struct
_size_units = bidict({
@ -42,7 +42,7 @@ SizeUnit = Enum(
class Allocator(Struct):
symbol: Symbol
mkt: MktPair
# TODO: if we ever want ot support non-uniform entry-slot-proportion
# "sizes"
@ -114,24 +114,24 @@ class Allocator(Struct):
depending on position / order entry config.
'''
sym = self.symbol
ld = sym.lot_size_digits
mkt: MktPair = self.mkt
ld: int = mkt.size_tick_digits
size_unit = self.size_unit
live_size = live_pp.size
live_size = live_pp.cumsize
abs_live_size = abs(live_size)
abs_startup_size = abs(startup_pp.size)
abs_startup_size = abs(startup_pp.cumsize)
u_per_slot, currency_per_slot = self.step_sizes()
if size_unit == 'units':
slot_size = u_per_slot
l_sub_pp = self.units_limit - abs_live_size
slot_size: float = u_per_slot
l_sub_pp: float = self.units_limit - abs_live_size
elif size_unit == 'currency':
live_cost_basis = abs_live_size * live_pp.ppu
slot_size = currency_per_slot / price
l_sub_pp = (self.currency_limit - live_cost_basis) / price
live_cost_basis: float = abs_live_size * live_pp.ppu
slot_size: float = currency_per_slot / price
l_sub_pp: float = (self.currency_limit - live_cost_basis) / price
else:
raise ValueError(
@ -141,8 +141,14 @@ class Allocator(Struct):
# an entry (adding-to or starting a pp)
if (
live_size == 0
or (action == 'buy' and live_size > 0)
or action == 'sell' and live_size < 0
or (
action == 'buy'
and live_size > 0
)
or (
action == 'sell'
and live_size < 0
)
):
order_size = min(
slot_size,
@ -178,7 +184,7 @@ class Allocator(Struct):
order_size = max(slotted_pp, slot_size)
if (
abs_live_size < slot_size or
abs_live_size < slot_size
# NOTE: front/back "loading" heuristic:
# if the remaining pp is in between 0-1.5x a slot's
@ -187,14 +193,17 @@ class Allocator(Struct):
# **without** going past a net-zero pp. if the pp is
# > 1.5x a slot size, then front load: exit a slot's and
# expect net-zero to be acquired on the final exit.
slot_size < pp_size < round((1.5*slot_size), ndigits=ld) or
or slot_size < pp_size < round((1.5*slot_size), ndigits=ld)
or (
# underlying requires discrete (int) units (eg. stocks)
# and thus our slot size (based on our limit) would
# exit a fractional unit's worth so, presuming we aren't
# supporting a fractional-units-style broker, we need
# exit the final unit.
ld == 0 and abs_live_size == 1
# underlying requires discrete (int) units (eg. stocks)
# and thus our slot size (based on our limit) would
# exit a fractional unit's worth so, presuming we aren't
# supporting a fractional-units-style broker, we need
# exit the final unit.
ld == 0
and abs_live_size == 1
)
):
order_size = abs_live_size
@ -203,13 +212,12 @@ class Allocator(Struct):
# compute a fractional slots size to display
slots_used = self.slots_used(
Position(
symbol=sym,
size=order_size,
ppu=price,
bs_mktid=sym,
mkt=mkt,
bs_mktid=mkt.bs_mktid,
)
)
# TODO: render an actual ``Executable`` type here?
return {
'size': abs(round(order_size, ndigits=ld)),
'size_digits': ld,
@ -231,7 +239,7 @@ class Allocator(Struct):
Calc and return the number of slots used by this ``Position``.
'''
abs_pp_size = abs(pp.size)
abs_pp_size = abs(pp.cumsize)
if self.size_unit == 'currency':
# live_currency_size = size or (abs_pp_size * pp.ppu)
@ -249,7 +257,7 @@ class Allocator(Struct):
def mk_allocator(
symbol: Symbol,
mkt: MktPair,
startup_pp: Position,
# default allocation settings
@ -276,6 +284,6 @@ def mk_allocator(
defaults.update(user_def)
return Allocator(
symbol=symbol,
mkt=mkt,
**defaults,
)
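A back-of-the-envelope reading of the ``'currency'`` sizing branch above
(assuming ``step_sizes()`` splits ``currency_limit`` evenly across the
configured slot count, which is not shown in this hunk; all numbers are
illustrative)::

    currency_limit = 1_000.0                     # allocation for this market
    slots = 4
    currency_per_slot = currency_limit / slots   # 250.0
    price = 50.0
    slot_size = currency_per_slot / price        # 5.0 units added per entry slot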

View File

@ -21,69 +21,77 @@ Trade and transaction ledger processing.
from __future__ import annotations
from collections import UserDict
from contextlib import contextmanager as cm
from functools import partial
from pathlib import Path
import time
from pprint import pformat
from types import ModuleType
from typing import (
Any,
Iterator,
Union,
Generator
Callable,
Generator,
Literal,
TYPE_CHECKING,
)
from pendulum import (
datetime,
parse,
DateTime,
)
import tomli
import toml
import tomli_w # for fast ledger writing
from .. import config
from ..data.types import Struct
from piker.types import Struct
from piker import config
from ..log import get_logger
from ._mktinfo import (
Symbol, # legacy
MktPair,
Asset,
from .calc import (
iter_by_dt,
)
if TYPE_CHECKING:
from ..data._symcache import (
SymbologyCache,
)
log = get_logger(__name__)
TxnType = Literal[
'clear',
'transfer',
# TODO: see https://github.com/pikers/piker/issues/510
# 'split',
# 'rename',
# 'resize',
# 'removal',
]
class Transaction(Struct, frozen=True):
# TODO: unify this with the `MktPair`,
# once we have that as a required field,
# we don't really need the fqsn any more..
fqsn: str
# NOTE: this is a unified acronym also used in our `MktPair`
# and can stand for any of a
# "fully qualified <blank> endpoint":
# - "market" in the case of financial trades
# (btcusdt.spot.binance).
# - "merkel (tree)" aka a blockchain system "wallet tranfers"
# (btc.blockchain)
# - "money" for tradtitional (digital databases)
# *bank accounts* (usd.swift, eur.sepa)
fqme: str
tid: Union[str, int] # unique transaction id
tid: str | int # unique transaction id
size: float
price: float
cost: float # commisions or other additional costs
dt: datetime
dt: DateTime
# the "event type" in terms of "market events" see above and
# https://github.com/pikers/piker/issues/510
etype: TxnType = 'clear'
# TODO: we can drop this right since we
# can instead expect the backend to provide this
# via the `MktPair`?
expiry: datetime | None = None
# remap for back-compat
@property
def fqme(self) -> str:
return self.fqsn
# TODO: drop the Symbol type, construct using
# t.sys (the transaction system)
# the underlying "transaction system", normally one of a ``MktPair``
# (a description of a tradable double auction) or a ledger-recorded
# ("ledger" in any sense as long as you can record transfers) of any
# sort) ``Asset``.
sym: MktPair | Asset | Symbol | None = None
@property
def sys(self) -> Symbol:
return self.sym
expiry: DateTime | None = None
# (optional) key-id defined by the broker-service backend which
# ensures the instrument-symbol market key for this record is unique
@ -92,15 +100,16 @@ class Transaction(Struct, frozen=True):
# service.
bs_mktid: str | int | None = None
def to_dict(self) -> dict:
dct = super().to_dict()
# TODO: switch to sys!
dct.pop('sym')
def to_dict(
self,
**kwargs,
) -> dict:
dct: dict[str, Any] = super().to_dict(**kwargs)
# ensure we use a pendulum formatted
# ISO style str here!@
dct['dt'] = str(self.dt)
return dct
@ -112,42 +121,63 @@ class TransactionLedger(UserDict):
outside.
'''
# NOTE: see `open_trade_ledger()` for defaults, this should
# never be constructed manually!
def __init__(
self,
ledger_dict: dict,
file_path: Path,
account: str,
mod: ModuleType, # broker mod
tx_sort: Callable,
symcache: SymbologyCache,
) -> None:
self.file_path = file_path
self.account: str = account
self.file_path: Path = file_path
self.mod: ModuleType = mod
self.tx_sort: Callable = tx_sort
self._symcache: SymbologyCache = symcache
# any added txns we keep in that form for meta-data
# gathering purposes
self._txns: dict[str, Transaction] = {}
super().__init__(ledger_dict)
def write_config(self) -> None:
def __repr__(self) -> str:
return (
f'TransactionLedger: {len(self)}\n'
f'{pformat(list(self.data))}'
)
@property
def symcache(self) -> SymbologyCache:
'''
Render the self.data ledger dict to it's TML file form.
Read-only ref to backend's ``SymbologyCache``.
'''
with self.file_path.open(mode='w') as fp:
# rewrite the key name to fqme if needed
fqsn: str = self.data.get('fqsn')
if fqsn:
self.data['fqme'] = fqsn
toml.dump(self.data, fp)
return self._symcache
def update_from_t(
self,
t: Transaction,
) -> None:
self.data[t.tid] = t.to_dict()
'''
Given an input `Transaction`, cast to `dict` and update
from it's transaction id.
def iter_trans(
'''
self.data[t.tid] = t.to_dict()
self._txns[t.tid] = t
def iter_txns(
self,
mkt_by_fqme: dict[str, MktPair],
broker: str = 'paper',
symcache: SymbologyCache | None = None,
) -> Generator[
tuple[str, Transaction],
Transaction,
None,
None,
]:
@ -156,60 +186,162 @@ class TransactionLedger(UserDict):
form via generator.
'''
if broker != 'paper':
raise NotImplementedError('Per broker support not dun yet!')
symcache = symcache or self._symcache
# TODO: lookup some standard normalizer
# func in the backend?
# from ..brokers import get_brokermod
# mod = get_brokermod(broker)
# trans_dict = mod.norm_trade_records(self.data)
# NOTE: instead i propose the normalizer is
# a one shot routine (that can be lru cached)
# and instead call it for each entry incrementally:
# normer = mod.norm_trade_record(txdict)
for tid, txdict in self.data.items():
# special field handling for datetimes
# to ensure pendulum is used!
fqme = txdict.get('fqme', txdict['fqsn'])
dt = parse(txdict['dt'])
expiry = txdict.get('expiry')
mkt = mkt_by_fqme.get(fqme)
if not mkt:
# we can't build a trans if we don't have
# the ``.sys: MktPair`` info, so skip.
continue
yield (
tid,
Transaction(
fqsn=fqme,
tid=txdict['tid'],
dt=dt,
price=txdict['price'],
size=txdict['size'],
cost=txdict.get('cost', 0),
bs_mktid=txdict['bs_mktid'],
# TODO: change to .sys!
sym=mkt,
expiry=parse(expiry) if expiry else None,
)
if self.account == 'paper':
from piker.clearing import _paper_engine
norm_trade: Callable = partial(
_paper_engine.norm_trade,
brokermod=self.mod,
)
def to_trans(
else:
norm_trade: Callable = self.mod.norm_trade
# datetime-sort and pack into txs
for tid, txdict in self.tx_sort(self.data.items()):
txn: Transaction = norm_trade(
tid,
txdict,
pairs=symcache.pairs,
symcache=symcache,
)
yield txn
def to_txns(
self,
**kwargs,
symcache: SymbologyCache | None = None,
) -> dict[str, Transaction]:
'''
Return entire output from ``.iter_trans()`` in a ``dict``.
Return entire output from ``.iter_txns()`` in a ``dict``.
'''
return dict(self.iter_trans(**kwargs))
txns: dict[str, Transaction] = {}
for t in self.iter_txns(symcache=symcache):
if not t:
log.warning(f'{self.mod.name}:{self.account} TXN is -> {t}')
continue
txns[t.tid] = t
return txns
def write_config(self) -> None:
'''
Render the self.data ledger dict to its TOML file form.
ALWAYS order datetime sorted!
'''
is_paper: bool = self.account == 'paper'
symcache: SymbologyCache = self._symcache
towrite: dict[str, Any] = {}
for tid, txdict in self.tx_sort(self.data.copy()):
# write blank-str expiry for non-expiring assets
if (
'expiry' in txdict
and txdict['expiry'] is None
):
txdict['expiry'] = ''
# (maybe) re-write old acro-key
if (
is_paper
# if symcache is empty/not supported (yet), don't
# bother xD
and symcache.mktmaps
):
fqme: str = txdict.pop('fqsn', None) or txdict['fqme']
bs_mktid: str | None = txdict.get('bs_mktid')
if (
fqme not in symcache.mktmaps
or (
# also try to see if this is maybe a paper
# engine ledger in which case the bs_mktid
# should be the fqme as well!
bs_mktid
and fqme != bs_mktid
)
):
# always take any (paper) bs_mktid if defined and
# in the backend's cache key set.
if bs_mktid in symcache.mktmaps:
fqme: str = bs_mktid
else:
best_fqme: str = list(symcache.search(fqme))[0]
log.warning(
f'Could not find FQME: {fqme} in qualified set?\n'
f'Qualifying and expanding {fqme} -> {best_fqme}'
)
fqme = best_fqme
if (
bs_mktid
and bs_mktid != fqme
):
# in paper account case always make sure both the
# fqme and bs_mktid are fully qualified..
txdict['bs_mktid'] = fqme
# in paper ledgers always write the latest
# symbology key field: an FQME.
txdict['fqme'] = fqme
towrite[tid] = txdict
with self.file_path.open(mode='wb') as fp:
tomli_w.dump(towrite, fp)
def load_ledger(
brokername: str,
acctid: str,
# for testing or manual load from file
dirpath: Path | None = None,
) -> tuple[dict, Path]:
'''
Load a ledger (TOML) file from user's config directory:
$CONFIG_DIR/accounting/ledgers/trades_<brokername>_<acctid>.toml
Return its `dict`-content and file path.
'''
import time
try:
import tomllib
except ModuleNotFoundError:
import tomli as tomllib
ldir: Path = (
dirpath
or
config._config_dir / 'accounting' / 'ledgers'
)
if not ldir.is_dir():
ldir.mkdir()
fname = f'trades_{brokername}_{acctid}.toml'
fpath: Path = ldir / fname
if not fpath.is_file():
log.info(
f'Creating new local trades ledger: {fpath}'
)
fpath.touch()
with fpath.open(mode='rb') as cf:
start = time.time()
ledger_dict = tomllib.load(cf)
log.debug(f'Ledger load took {time.time() - start}s')
return ledger_dict, fpath
@cm
@ -217,7 +349,17 @@ def open_trade_ledger(
broker: str,
account: str,
) -> Generator[dict, None, None]:
allow_from_sync_code: bool = False,
symcache: SymbologyCache | None = None,
# default is to sort by detected datetime-ish field
tx_sort: Callable = iter_by_dt,
rewrite: bool = False,
# for testing or manual load from file
_fp: Path | None = None,
) -> Generator[TransactionLedger, None, None]:
'''
Idempotently create and read in a trade log file from the
``<configuration_dir>/ledgers/`` directory.
@ -227,52 +369,53 @@ def open_trade_ledger(
name as defined in the user's ``brokers.toml`` config.
'''
ldir: Path = config._config_dir / 'ledgers'
if not ldir.is_dir():
ldir.mkdir()
from ..brokers import get_brokermod
mod: ModuleType = get_brokermod(broker)
fname = f'trades_{broker}_{account}.toml'
tradesfile: Path = ldir / fname
ledger_dict, fpath = load_ledger(
broker,
account,
dirpath=_fp,
)
cpy = ledger_dict.copy()
if not tradesfile.is_file():
log.info(
f'Creating new local trades ledger: {tradesfile}'
# XXX NOTE: if not provided presume we are being called from
# sync code and need to maybe run `trio` to generate..
if symcache is None:
# XXX: be mega pedantic and ensure the caller knows what
# they're doing!
if not allow_from_sync_code:
raise RuntimeError(
'You MUST set `allow_from_sync_code=True` when '
'calling `open_trade_ledger()` from sync code! '
'If you are calling from async code you MUST '
'instead pass a `symcache: SymbologyCache`!'
)
from ..data._symcache import (
get_symcache,
)
tradesfile.touch()
symcache: SymbologyCache = get_symcache(broker)
with tradesfile.open(mode='rb') as cf:
start = time.time()
ledger_dict = tomli.load(cf)
log.info(f'Ledger load took {time.time() - start}s')
cpy = ledger_dict.copy()
assert symcache
ledger = TransactionLedger(
ledger_dict=cpy,
file_path=tradesfile,
file_path=fpath,
account=account,
mod=mod,
symcache=symcache,
tx_sort=getattr(mod, 'tx_sort', tx_sort),
)
try:
yield ledger
finally:
if ledger.data != ledger_dict:
if (
ledger.data != ledger_dict
or rewrite
):
# TODO: show diff output?
# https://stackoverflow.com/questions/12956957/print-diff-of-python-dictionaries
log.info(f'Updating ledger for {tradesfile}:\n')
log.info(f'Updating ledger for {fpath}:\n')
ledger.write_config()
def iter_by_dt(
clears: dict[str, Any],
) -> Iterator[tuple[str, dict]]:
'''
Iterate entries of a ``clears: dict`` table sorted by entry recorded
datetime presumably set at the ``'dt'`` field in each entry.
'''
for tid, data in sorted(
list(clears.items()),
key=lambda item: item[1]['dt'],
):
yield tid, data
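A minimal usage sketch of the sync-code path guarded above (the broker and
account names are illustrative, not prescribed by this diff)::

    from piker.accounting import open_trade_ledger

    with open_trade_ledger(
        'binance',                  # broker backend name
        'paper',                    # account name
        allow_from_sync_code=True,  # no SymbologyCache passed from async code
    ) as ledger:
        txns = ledger.to_txns()     # dict[str, Transaction], datetime sorted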

View File

@ -36,9 +36,10 @@ from typing import (
Literal,
)
from ..data.types import Struct
from piker.types import Struct
# TODO: make these literals..
_underlyings: list[str] = [
'stock',
'bond',
@ -47,6 +48,10 @@ _underlyings: list[str] = [
'commodity',
]
_crypto_derivs: list[str] = [
'perpetual_future',
'crypto_future',
]
_derivs: list[str] = [
'swap',
@ -66,6 +71,8 @@ AssetTypeName: Literal[
_underlyings
+
_derivs
+
_crypto_derivs
]
# egs. stock, futer, option, bond etc.
@ -121,10 +128,31 @@ class Asset(Struct, frozen=True):
# NOTE: additional info optionally packed in by the backend, but
# should not be explicitly required in our generic API.
info: dict = {} # make it frozen?
info: dict | None = None
# TODO?
# _to_dict_skip = {'info'}
# `None` is not toml-compat so drop info
# if no extra data added..
def to_dict(
self,
**kwargs,
) -> dict:
dct = super().to_dict(**kwargs)
if (info := dct.pop('info', None)):
dct['info'] = info
assert dct['tx_tick']
return dct
@classmethod
def from_msg(
cls,
msg: dict[str, Any],
) -> Asset:
return cls(
tx_tick=Decimal(str(msg.pop('tx_tick'))),
info=msg.pop('info', None),
**msg,
)
def __str__(self) -> str:
return self.name
@ -207,6 +235,33 @@ class MktPair(Struct, frozen=True):
<dst>/<src>.<expiry>.<con_info_1>.<con_info_2>. -> .<venue>.<broker>
^ -- optional tokens ------------------------------- ^
Notes:
------
Some venues provide a different semantic (which we frankly find
confusing and non-general) such as "base" and "quote" asset.
For example this is how `binance` defines the terms:
https://binance-docs.github.io/apidocs/websocket_api/en/#public-api-definitions
https://binance-docs.github.io/apidocs/futures/en/#public-endpoints-info
- *base* asset refers to the asset that is the *quantity* of a symbol.
- *quote* asset refers to the asset that is the *price* of a symbol.
In other words the "quote" asset is the asset that the market
is pricing "buys" *in*, and the *base* asset it the one that the market
allows you to "buy" an *amount of*. Put more simply the *quote*
asset is our "source" asset and the *base* asset is our "destination"
asset.
This definition can be further understood by reading our
`.brokers.binance.api.Pair` type wherein the
`Pair.[quote/base]AssetPrecision` field determines the (transfer)
transaction precision available per asset; i.e. the satoshis
unit in bitcoin for representing the minimum size of a
transaction that can take place on the blockchain.
'''
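# A concrete (illustrative) reading of the note above for a binance spot
# market such as the fqme 'btcusdt.spot.binance':
#   - "base" asset  -> dst = 'btc'  (the amount the market lets you buy)
#   - "quote" asset -> src = 'usdt' (the asset that buys are priced in)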
dst: str | Asset
# "destination asset" (name) used to buy *to*
@ -254,12 +309,40 @@ class MktPair(Struct, frozen=True):
# strike price, call or put, swap type, exercise model, etc.
contract_info: list[str] | None = None
# TODO: rename to sectype since all of these can
# be considered "securities"?
_atype: str = ''
# allow explicit disable of the src part of the market
# pair name -> useful for legacy markets like qqq.nasdaq.ib
_fqme_without_src: bool = False
# NOTE: when cast to `str` return fqme
def __str__(self) -> str:
return self.fqme
def to_dict(
self,
**kwargs,
) -> dict:
d = super().to_dict(**kwargs)
d['src'] = self.src.to_dict(**kwargs)
if not isinstance(self.dst, str):
d['dst'] = self.dst.to_dict(**kwargs)
else:
d['dst'] = str(self.dst)
d['price_tick'] = str(self.price_tick)
d['size_tick'] = str(self.size_tick)
if self.contract_info is None:
d.pop('contract_info')
# d.pop('_fqme_without_src')
return d
@classmethod
def from_msg(
cls,
@ -270,30 +353,31 @@ class MktPair(Struct, frozen=True):
Constructor for a received msg-dict normally received over IPC.
'''
dst_asset_msg = msg.pop('dst')
src_asset_msg = msg.pop('src')
if isinstance(dst_asset_msg, str):
src: str = str(src_asset_msg)
assert isinstance(src, str)
return cls.from_fqme(
dst_asset_msg,
src=src,
**msg,
)
if not isinstance(
dst_asset_msg := msg.pop('dst'),
str,
):
dst: Asset = Asset.from_msg(dst_asset_msg) # .copy()
else:
# NOTE: we call `.copy()` here to ensure
# type casting!
dst = Asset(**dst_asset_msg).copy()
if not isinstance(src_asset_msg, str):
src = Asset(**src_asset_msg).copy()
else:
src = str(src_asset_msg)
dst: str = dst_asset_msg
src_asset_msg: dict = msg.pop('src')
src: Asset = Asset.from_msg(src_asset_msg) # .copy()
# XXX NOTE: ``msgspec`` can encode `Decimal` but it doesn't
# decode to it by default since we aren't spec-cing these
# msgs as structs proper to get them to decode implicitly
# (yet) as per,
# - https://github.com/pikers/piker/pull/354
# - https://github.com/goodboy/tractor/pull/311
# SO we have to ensure we do a struct type
# cast (which `.copy()` does) to ensure we get the right
# type!
return cls(
dst=dst,
src=src,
price_tick=Decimal(msg.pop('price_tick')),
size_tick=Decimal(msg.pop('size_tick')),
**msg,
).copy()
@ -322,7 +406,20 @@ class MktPair(Struct, frozen=True):
):
_fqme = f'{fqme}.{broker}'
broker, mkt_ep_key, venue, suffix = unpack_fqme(_fqme)
broker, mkt_ep_key, venue, expiry = unpack_fqme(_fqme)
kven: str = kwargs.pop('venue', venue)
if venue:
assert venue == kven
else:
venue = kven
exp: str = kwargs.pop('expiry', expiry)
if expiry:
assert exp == expiry
else:
expiry = exp
dst: Asset = Asset.guess_from_mkt_ep_key(
mkt_ep_key,
atype=kwargs.get('_atype'),
@ -334,14 +431,15 @@ class MktPair(Struct, frozen=True):
# which we expect to be filled in by some
# backend client with access to that data-info.
return cls(
# XXX: not resolved to ``Asset`` :(
dst=dst,
# XXX: not resolved to ``Asset`` :(
#src=src,
broker=broker,
venue=venue,
# XXX NOTE: we presume this token
# if the expiry for now!
expiry=suffix,
expiry=expiry,
price_tick=price_tick,
size_tick=size_tick,
@ -356,6 +454,15 @@ class MktPair(Struct, frozen=True):
'''
The "endpoint key" for this market.
'''
return self.pair
def pair(
self,
delim_char: str | None = None,
) -> str:
'''
The "endpoint asset pair key" for this market.
Eg. mnq/usd or btc/usdt or xmr/btc
In most other tina platforms this is referred to as the
@ -365,7 +472,8 @@ class MktPair(Struct, frozen=True):
return maybe_cons_tokens(
[str(self.dst),
str(self.src)],
delim_char='',
# TODO: make the default '/'
delim_char=delim_char or '',
)
@property
@ -390,13 +498,17 @@ class MktPair(Struct, frozen=True):
return maybe_cons_tokens(field_strs)
# NOTE: the main idea behind an fqme is to map a "market address"
# to some endpoint from a transaction provider (eg. a broker) such
# that we build a table of `fqme: str -> bs_mktid: Any` where any "piker
# market address" maps 1-to-1 to some broker trading endpoint.
# @cached_property
@property
def fqme(self) -> str:
def get_fqme(
self,
# NOTE: allow dropping the source asset from the
# market endpoint's pair key. Eg. to change
# mnq/usd.<> -> mnq.<> which is useful when
# searching (legacy) stock exchanges.
without_src: bool = False,
delim_char: str | None = None,
) -> str:
'''
Return the fully qualified market endpoint-address for the
pair of transacting assets.
@ -431,20 +543,38 @@ class MktPair(Struct, frozen=True):
https://github.com/pikers/piker/issues/467
'''
key: str = (
self.pair(delim_char=delim_char)
if not (without_src or self._fqme_without_src)
else str(self.dst)
)
return maybe_cons_tokens([
self.key, # final "pair name" (eg. qqq[/usd], btcusdt)
key, # final "pair name" (eg. qqq[/usd], btcusdt)
self.venue,
self.suffix, # includes expiry and other con info
self.broker,
])
@property
def bs_fqme(self) -> str:
# NOTE: the main idea behind an fqme is to map a "market address"
# to some endpoint from a transaction provider (eg. a broker) such
# that we build a table of `fqme: str -> bs_mktid: Any` where any "piker
# market address" maps 1-to-1 to some broker trading endpoint.
# @cached_property
fqme = property(get_fqme)
def get_bs_fqme(
self,
**kwargs,
) -> str:
'''
FQME sin broker part XD
'''
return self.fqme.rstrip(f'.{self.broker}')
sin_broker, *_ = self.get_fqme(**kwargs).rpartition('.')
return sin_broker
bs_fqme = property(get_bs_fqme)
@property
def fqsn(self) -> str:
@ -476,17 +606,22 @@ class MktPair(Struct, frozen=True):
# TODO: BACKWARD COMPAT, TO REMOVE?
@property
def type_key(self) -> str:
# if set explicitly then use it!
if self._atype:
return self._atype
if isinstance(self.dst, Asset):
return str(self.dst.atype)
return self._atype
return 'UNKNOWN'
@property
def tick_size_digits(self) -> int:
def price_tick_digits(self) -> int:
return float_digits(self.price_tick)
@property
def lot_size_digits(self) -> int:
def size_tick_digits(self) -> int:
return float_digits(self.size_tick)
@ -552,6 +687,9 @@ class Symbol(Struct):
'''
key: str
broker: str = ''
venue: str = ''
# precision descriptors for price and vlm
tick_size: Decimal = Decimal('0.01')
lot_tick_size: Decimal = Decimal('0.0')
@ -571,9 +709,11 @@ class Symbol(Struct):
lot_size = info.get('lot_tick_size', 0.0)
return Symbol(
broker=broker,
key=mktep,
tick_size=tick_size,
lot_tick_size=lot_size,
venue=venue,
suffix=suffix,
broker_info={broker: info},
)
@ -603,17 +743,13 @@ class Symbol(Struct):
return list(self.broker_info.keys())[0]
@property
def fqsn(self) -> str:
broker = self.broker
key = self.key
if self.suffix:
tokens = (key, self.suffix, broker)
else:
tokens = (key, broker)
return '.'.join(tokens).lower()
fqme = fqsn
def fqme(self) -> str:
return maybe_cons_tokens([
self.key, # final "pair name" (eg. qqq[/usd], btcusdt)
self.venue,
self.suffix, # includes expiry and other con info
self.broker,
])
def quantize(
self,

File diff suppressed because it is too large

View File

@ -1,156 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
TOML codec hacks to make position tables look decent.
(looking at you "`toml`-lib"..)
'''
import re
import toml
# TODO: instead see if we can hack tomli and tomli-w to do the same:
# - https://github.com/hukkin/tomli
# - https://github.com/hukkin/tomli-w
class PpsEncoder(toml.TomlEncoder):
'''
Special "styled" encoder that makes a ``pps.toml`` redable and
compact by putting `.clears` tables inline and everything else
flat-ish.
'''
separator = ','
def dump_list(self, v):
'''
Dump an inline list with a newline after every element and
with consideration for denoted inline table types.
'''
retval = "[\n"
for u in v:
if isinstance(u, toml.decoder.InlineTableDict):
out = self.dump_inline_table(u)
else:
out = str(self.dump_value(u))
retval += " " + out + "," + "\n"
retval += "]"
return retval
def dump_inline_table(self, section):
"""Preserve inline table in its compact syntax instead of expanding
into subsection.
https://github.com/toml-lang/toml#user-content-inline-table
"""
val_list = []
for k, v in section.items():
# if isinstance(v, toml.decoder.InlineTableDict):
if isinstance(v, dict):
val = self.dump_inline_table(v)
else:
val = str(self.dump_value(v))
val_list.append(k + " = " + val)
retval = "{ " + ", ".join(val_list) + " }"
return retval
def dump_sections(self, o, sup):
retstr = ""
if sup != "" and sup[-1] != ".":
sup += '.'
retdict = self._dict()
arraystr = ""
for section in o:
qsection = str(section)
value = o[section]
if not re.match(r'^[A-Za-z0-9_-]+$', section):
qsection = toml.encoder._dump_str(section)
# arrayoftables = False
if (
self.preserve
and isinstance(value, toml.decoder.InlineTableDict)
):
retstr += (
qsection
+
" = "
+
self.dump_inline_table(o[section])
+
'\n' # only on the final terminating left brace
)
# XXX: this code i'm pretty sure is just blatantly bad
# and/or wrong..
# if isinstance(o[section], list):
# for a in o[section]:
# if isinstance(a, dict):
# arrayoftables = True
# if arrayoftables:
# for a in o[section]:
# arraytabstr = "\n"
# arraystr += "[[" + sup + qsection + "]]\n"
# s, d = self.dump_sections(a, sup + qsection)
# if s:
# if s[0] == "[":
# arraytabstr += s
# else:
# arraystr += s
# while d:
# newd = self._dict()
# for dsec in d:
# s1, d1 = self.dump_sections(d[dsec], sup +
# qsection + "." +
# dsec)
# if s1:
# arraytabstr += ("[" + sup + qsection +
# "." + dsec + "]\n")
# arraytabstr += s1
# for s1 in d1:
# newd[dsec + "." + s1] = d1[s1]
# d = newd
# arraystr += arraytabstr
elif isinstance(value, dict):
retdict[qsection] = o[section]
elif o[section] is not None:
retstr += (
qsection
+
" = "
+
str(self.dump_value(o[section]))
)
# if not isinstance(value, dict):
if not isinstance(value, toml.decoder.InlineTableDict):
# inline tables should not contain newlines:
# https://toml.io/en/v1.0.0#inline-table
retstr += '\n'
else:
raise ValueError(value)
retstr += arraystr
return (retstr, retdict)

View File

@ -0,0 +1,698 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Calculation routines for balance and position tracking such that
you know when you're losing money (if possible) XD
'''
from __future__ import annotations
from collections.abc import ValuesView
from contextlib import contextmanager as cm
from math import copysign
from typing import (
Any,
Callable,
Iterator,
TYPE_CHECKING,
)
import polars as pl
from pendulum import (
DateTime,
from_timestamp,
parse,
)
if TYPE_CHECKING:
from ._ledger import (
Transaction,
TransactionLedger,
)
def ppu(
clears: Iterator[Transaction],
# include transaction cost in breakeven price
# and presume the worst case of the same cost
# to exit this transaction (even though in reality
# it will be dynamic based on exit strategy).
cost_scalar: float = 2,
# return the ledger of clears as a (now dt sorted) dict with
# new position fields inserted alongside each entry.
as_ledger: bool = False,
) -> float | list[(str, dict)]:
'''
Compute the "price-per-unit" price for the given non-zero sized
rolling position.
The recurrence relation which computes this (exponential) mean
per new clear which **increases** the accumulative position size
is:
ppu[-1] = (
ppu[-2] * accum_size[-2]
+
price * size
) / accum_size[-1]
where `cost_basis` for the current step is simply the price
* size of the most recent clearing transaction.
-----
TODO: get the BEP computed and working similarly!
-----
the equivalent "break even price" or bep at each new clear
event step conversely only changes on a "position exiting
clear" which **decreases** the cumulative dst asset size:
bep[-1] = ppu[-1] - (cum_pnl[-1] / cumsize[-1])
'''
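# NOTE: a tiny worked example of the recurrence above (ignoring txn
# costs, i.e. with `cost_scalar * t.cost == 0`):
#   clear 1: buy 2 @ 10 -> accum_size = 2, ppu = 20/2 = 10
#   clear 2: buy 2 @ 14 -> accum_size = 4, ppu = (10*2 + 14*2)/4 = 12
#   clear 3: sell 1 (an exit) -> accum_size = 3, ppu stays at 12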
asize_h: list[float] = [] # historical accumulative size
ppu_h: list[float] = [] # historical price-per-unit
# ledger: dict[str, dict] = {}
ledger: list[dict] = []
t: Transaction
for t in clears:
clear_size: float = t.size
clear_price: str | float = t.price
is_clear: bool = not isinstance(clear_price, str)
last_accum_size = asize_h[-1] if asize_h else 0
accum_size: float = last_accum_size + clear_size
accum_sign = copysign(1, accum_size)
sign_change: bool = False
# on transfers we normally write some non-valid
# price since withdrawal to another account/wallet
# has nothing to do with inter-asset-market prices.
# TODO: this should be better handled via a `type: 'tx'`
# field as per existing issue surrounding all this:
# https://github.com/pikers/piker/issues/510
if isinstance(clear_price, str):
# TODO: we can't necessarily have this commit to
# the overall pos size since we also need to
# include other positions contributions to this
# balance or we might end up with a -ve balance for
# the position..
continue
# test if the pp somehow went "past" a net-zero size state
# resulting in a change of the "sign" of the size (+ve for
# long, -ve for short).
sign_change = (
copysign(1, last_accum_size) + accum_sign == 0
and last_accum_size != 0
)
# since we passed the net-zero-size state the new size
# after sum should be the remaining size in the new
# "direction" (aka, long vs. short) for this clear.
if sign_change:
clear_size: float = accum_size
abs_diff: float = abs(accum_size)
asize_h.append(0)
ppu_h.append(0)
else:
# old size minus the new size gives us size diff with
# +ve -> increase in pp size
# -ve -> decrease in pp size
abs_diff = abs(accum_size) - abs(last_accum_size)
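# NOTE: a sketch of the sign-flip handling above: with a prior short
# of -1 unit (last_accum_size == -1) a +3 buy clears through zero to
# accum_size == 2; the clear is then treated as a fresh 2-unit entry
# (clear_size = accum_size) and a zeroed size/ppu pair is appended to
# the history so the new long's ppu is computed from scratch.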
# XXX: LIFO breakeven price update. only an increase in size
# of the position contributes to the breakeven price,
# a decrease does not (i.e. the position is being made
# smaller).
# abs_clear_size = abs(clear_size)
abs_new_size: float | int = abs(accum_size)
if (
abs_diff > 0
and is_clear
):
cost_basis = (
# cost basis for this clear
clear_price * abs(clear_size)
+
# transaction cost
accum_sign * cost_scalar * t.cost
)
if asize_h:
size_last: float = abs(asize_h[-1])
cb_last: float = ppu_h[-1] * size_last
ppu: float = (cost_basis + cb_last) / abs_new_size
else:
ppu: float = cost_basis / abs_new_size
else:
# TODO: for PPU we should probably handle txs out
# (aka withdrawals) similarly by simply not having
# them contrib to the running PPU calc and only
# when the next entry clear comes in (which will
# then have a higher weighting on the PPU).
# on "exit" clears from a given direction,
# only the size changes not the price-per-unit
# need to be updated since the ppu remains constant
# and gets weighted by the new size.
ppu: float = ppu_h[-1] if ppu_h else 0 # set to previous value
# extend with new rolling metric for this step
ppu_h.append(ppu)
asize_h.append(accum_size)
# ledger[t.tid] = {
# 'txn': t,
# ledger[t.tid] = t.to_dict() | {
ledger.append((
t.tid,
t.to_dict() | {
'ppu': ppu,
'cumsize': accum_size,
'sign_change': sign_change,
# TODO: cum_pnl, bep
}
))
final_ppu = ppu_h[-1] if ppu_h else 0
# TODO: once we have etypes in all ledger entries..
# handle any split info entered (for now) manually by user
# if self.split_ratio is not None:
# final_ppu /= self.split_ratio
if as_ledger:
return ledger
else:
return final_ppu
def iter_by_dt(
records: (
dict[str, dict[str, Any]]
| ValuesView[dict] # eg. `Position._events.values()`
| list[dict]
| list[Transaction] # XXX preferred!
),
# NOTE: parsers are looked up in the insert order
# so if you know that the record stats show some field
# is more common than others, stick it at the top B)
parsers: dict[str, Callable | None] = {
'dt': parse, # parity case
'datetime': parse, # datetime-str
'time': from_timestamp, # float epoch
},
key: Callable | None = None,
) -> Iterator[tuple[str, dict]]:
'''
Iterate entries of a transaction table sorted by entry recorded
datetime presumably set at the ``'dt'`` field in each entry.
'''
if isinstance(records, dict):
records: list[tuple[str, dict]] = list(records.items())
def dyn_parse_to_dt(
tx: tuple[str, dict[str, Any]] | Transaction,
) -> DateTime:
# handle `.items()` inputs
if isinstance(tx, tuple):
tx = tx[1]
# dict or tx object?
isdict: bool = isinstance(tx, dict)
# get best parser for this record..
for k in parsers:
if (
isdict and k in tx
or getattr(tx, k, None)
):
v = tx[k] if isdict else tx.dt
assert v is not None, f'No valid value for `{k}`!?'
# only call parser on the value if not None from
# the `parsers` table above (when NOT using
# `.get()`), otherwise pass through the value and
# sort on it directly
if (
not isinstance(v, DateTime)
and (parser := parsers.get(k))
):
return parser(v)
else:
return v
else:
# XXX: should never get here..
breakpoint()
entry: tuple[str, dict] | Transaction
for entry in sorted(
records,
key=key or dyn_parse_to_dt,
):
# NOTE the type sig above; either pairs or txns B)
yield entry
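# NOTE: minimal usage sketch (the input mapping here is illustrative
# only): given some mapping of `tid -> txdict` where each entry carries
# a `'dt'`, `'datetime'` or `'time'` field,
#
#   for tid, txdict in iter_by_dt(records):
#       ...  # entries are delivered oldest-to-newest
#
# the `parsers` table maps the first matching field to a callable
# (`pendulum.parse` for datetime-strs, `from_timestamp` for float
# epochs) which is only used to compute the sort key.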
# TODO: probably just move this into the test suite or
# keep it here for use as such?
# def ensure_state(self) -> None:
# '''
# Audit either the `.cumsize` and `.ppu` local instance vars against
# the clears table calculations and return the calc-ed values if
# they differ and log warnings to console.
# '''
# # clears: list[dict] = self._clears
# # self.first_clear_dt = min(clears, key=lambda e: e['dt'])['dt']
# last_clear: dict = clears[-1]
# csize: float = self.calc_size()
# accum: float = last_clear['accum_size']
# if not self.expired():
# if (
# csize != accum
# and csize != round(accum * (self.split_ratio or 1))
# ):
# raise ValueError(f'Size mismatch: {csize}')
# else:
# assert csize == 0, 'Contract is expired but non-zero size?'
# if self.cumsize != csize:
# log.warning(
# 'Position state mismatch:\n'
# f'{self.cumsize} => {csize}'
# )
# self.cumsize = csize
# cppu: float = self.calc_ppu()
# ppu: float = last_clear['ppu']
# if (
# cppu != ppu
# and self.split_ratio is not None
# # handle any split info entered (for now) manually by user
# and cppu != (ppu / self.split_ratio)
# ):
# raise ValueError(f'PPU mismatch: {cppu}')
# if self.ppu != cppu:
# log.warning(
# 'Position state mismatch:\n'
# f'{self.ppu} => {cppu}'
# )
# self.ppu = cppu
@cm
def open_ledger_dfs(
brokername: str,
acctname: str,
ledger: TransactionLedger | None = None,
**kwargs,
) -> tuple[
dict[str, pl.DataFrame],
TransactionLedger,
]:
'''
Open a ledger of trade records (presumably from some broker
backend), normalize the records into `Transaction`s via the
backend's declared endpoint, and cast them to `polars.DataFrame`s
which can update the ledger on exit.
'''
from piker.toolz import open_crash_handler
with open_crash_handler():
if not ledger:
import time
from ._ledger import open_trade_ledger
now = time.time()
with open_trade_ledger(
brokername,
acctname,
rewrite=True,
allow_from_sync_code=True,
# proxied through from caller
**kwargs,
) as ledger:
if not ledger:
raise ValueError(f'No ledger for {acctname}@{brokername} exists?')
print(f'LEDGER LOAD TIME: {time.time() - now}')
yield ledger_to_dfs(ledger), ledger
def ledger_to_dfs(
ledger: TransactionLedger,
) -> dict[str, pl.DataFrame]:
txns: dict[str, Transaction] = ledger.to_txns()
# ldf = pl.DataFrame(
# list(txn.to_dict() for txn in txns.values()),
ldf = pl.from_dicts(
list(txn.to_dict() for txn in txns.values()),
# only for ordering the cols
schema=[
('fqme', str),
('tid', str),
('bs_mktid', str),
('expiry', str),
('etype', str),
('dt', str),
('size', pl.Float64),
('price', pl.Float64),
('cost', pl.Float64),
],
).sort( # chronological order
'dt'
).with_columns([
pl.col('dt').str.to_datetime(),
# pl.col('expiry').str.to_datetime(),
# pl.col('expiry').dt.date(),
])
# filter out to the columns matching values filter passed
# as input.
# if filter_by_ids:
# for col, vals in filter_by_ids.items():
# str_vals = set(map(str, vals))
# pred: pl.Expr = pl.col(col).eq(str_vals.pop())
# for val in str_vals:
# pred |= pl.col(col).eq(val)
# fdf = df.filter(pred)
# TODO: originally i had tried just using a plain ol' groupby
# + agg here but the issue was re-inserting to the src frame.
# however, learning more about `polars` seems like maybe we can
# use `.over()`?
# https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.Expr.over.html#polars.Expr.over
# => CURRENTLY we break up into a frame per mkt / fqme
dfs: dict[str, pl.DataFrame] = ldf.partition_by(
'bs_mktid',
as_dict=True,
)
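# NOTE: an (untested) sketch of the `.over()` idea from the TODO above,
# which would avoid the per-mkt partitioning by computing the rolling
# sum within each `bs_mktid` group in-frame:
#
#   ldf.with_columns(
#       pl.col('size').cumsum().over('bs_mktid').alias('cumsize'),
#   )
#
# (newer `polars` releases rename `cumsum` -> `cum_sum`).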
# TODO: not sure if this is even possible but..
# - it'd be more ideal to use `ppt = df.groupby('fqme').agg([`
# - ppu and bep calcs!
for key in dfs:
# convert to lazy form (since apparently we might need it
# eventually ...)
df: pl.DataFrame = dfs[key]
ldf: pl.LazyFrame = df.lazy()
df = dfs[key] = ldf.with_columns([
pl.cumsum('size').alias('cumsize'),
# amount of source asset "sent" (via buy txns in
# the market) to acquire the dst asset, PER txn.
# when this value is -ve (i.e. a sell operation) then
# the amount sent is actually "returned".
(
(pl.col('price') * pl.col('size'))
+
(pl.col('cost')) # * pl.col('size').sign())
).alias('dst_bot'),
]).with_columns([
# rolling balance in src asset units
(pl.col('dst_bot').cumsum() * -1).alias('src_balance'),
# "position operation type" in terms of increasing the
# amount in the dst asset (entering) or decreasing the
# amount in the dst asset (exiting).
pl.when(
pl.col('size').sign() == pl.col('cumsize').sign()
).then(
pl.lit('enter') # see above, but is just price * size per txn
).otherwise(
pl.when(pl.col('cumsize') == 0)
.then(pl.lit('exit_to_zero'))
.otherwise(pl.lit('exit'))
).alias('descr'),
(pl.col('cumsize').sign() == pl.col('size').sign())
.alias('is_enter'),
]).with_columns([
# pl.lit(0, dtype=pl.Utf8).alias('virt_cost'),
pl.lit(0, dtype=pl.Float64).alias('applied_cost'),
pl.lit(0, dtype=pl.Float64).alias('pos_ppu'),
pl.lit(0, dtype=pl.Float64).alias('per_txn_pnl'),
pl.lit(0, dtype=pl.Float64).alias('cum_pos_pnl'),
pl.lit(0, dtype=pl.Float64).alias('pos_bep'),
pl.lit(0, dtype=pl.Float64).alias('cum_ledger_pnl'),
pl.lit(None, dtype=pl.Float64).alias('ledger_bep'),
# TODO: instead of the iterative loop below i guess we
# could try using embedded lists to track which txns
# are part of which ppu / bep calcs? Not sure this will
# look any better nor be any more performant though xD
# pl.lit([[0]], dtype=pl.List(pl.Float64)).alias('list'),
# choose fields to emit for accounting purposes
]).select([
pl.exclude([
'tid',
# 'dt',
'expiry',
'bs_mktid',
'etype',
# 'is_enter',
]),
]).collect()
# compute recurrence relations for ppu and bep
last_ppu: float = 0
last_cumsize: float = 0
last_ledger_pnl: float = 0
last_pos_pnl: float = 0
virt_costs: list[float, float] = [0., 0.]
# imperatively compute the PPU (price per unit) and BEP
# (break even price) iteratively over the ledger, oriented
# around each position state: a state of split balances in
# > 1 asset.
for i, row in enumerate(df.iter_rows(named=True)):
cumsize: float = row['cumsize']
is_enter: bool = row['is_enter']
price: float = row['price']
size: float = row['size']
# the profit is ALWAYS decreased, aka made a "loss"
# by the constant fee charged by the txn provider!
# see below in final PnL calculation and row element
# set.
txn_cost: float = row['cost']
pnl: float = 0
# ALWAYS reset per-position cum PnL
if last_cumsize == 0:
last_pos_pnl: float = 0
# a "position size INCREASING" or ENTER transaction
# which "makes larger", in src asset unit terms, the
# trade's side-size of the destination asset:
# - "buying" (more) units of the dst asset
# - "selling" (more short) units of the dst asset
if is_enter:
# Naively include transaction cost in breakeven
# price and presume the worst case of the
# exact-same-cost-to-exit this transaction's worth
# of size even though in reality it will be dynamic
# based on exit strategy, price, liquidity, etc..
virt_cost: float = txn_cost
# cpu: float = cost / size
# cummean of the cost-per-unit used for modelling
# a projected future exit cost which we immediately
# include in the costs incorporated into the BEP on entries
last_cum_costs_size, last_cpu = virt_costs
cum_costs_size: float = last_cum_costs_size + abs(size)
cumcpu = (
(last_cpu * last_cum_costs_size)
+
txn_cost
) / cum_costs_size
virt_costs = [cum_costs_size, cumcpu]
txn_cost = txn_cost + virt_cost
# df[i, 'virt_cost'] = f'{-virt_cost} FROM {cumcpu}@{cum_costs_size}'
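# NOTE: worked sketch of the virtual-cost cummean above: starting
# from virt_costs == [0., 0.], an enter of size 2 with txn_cost 1.0
# gives cum_costs_size == 2 and cumcpu == (0*0 + 1.0)/2 == 0.5, and
# `txn_cost` is doubled to 2.0 (the real cost plus a modelled future
# exit cost); a later exit of size 1 then reverses 1 * 0.5 of that
# prediction in the else-branch below.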
# a cumulative mean of the price-per-unit acquired
# in the destination asset:
# https://en.wikipedia.org/wiki/Moving_average#Cumulative_average
# You could also think of this measure more
# generally as an exponential mean with `alpha
# = 1/N` where `N` is the current number of txns
# included in the "position" defining set:
# https://en.wikipedia.org/wiki/Exponential_smoothing
ppu: float = (
(
(last_ppu * last_cumsize)
+
(price * size)
) /
cumsize
)
# a "position size DECREASING" or EXIT transaction
# which "makes smaller" the trade's side-size of the
# destination asset:
# - selling previously bought units of the dst asset
# (aka 'closing' a long position).
# - buying previously borrowed and sold (short) units
# of the dst asset (aka 'covering'/'closing' a short
# position).
else:
# only changes on position size increasing txns
ppu: float = last_ppu
# UNWIND IMPLIED COSTS FROM ENTRIES
# => Reverse the virtual/modelled (2x predicted) txn
# cost that was included in the least-recently
# entered txn that is still part of the current CSi
# set.
# => we look up the cost-per-unit cumsum and apply
# it over the current txn size (by multiplication)
# and then reverse that previously applied cost on
# the txn_cost for this record.
#
# NOTE: the current "model" is that we previously assumed 2x
# the txn cost for a matching enter-txn's
# cost-per-unit; we then immediately reverse this
# prediction and apply the real cost received here.
last_cum_costs_size, last_cpu = virt_costs
prev_virt_cost: float = last_cpu * abs(size)
txn_cost: float = txn_cost - prev_virt_cost # +ve thus a "reversal"
cum_costs_size: float = last_cum_costs_size - abs(size)
virt_costs = [cum_costs_size, last_cpu]
# df[i, 'virt_cost'] = (
# f'{-prev_virt_cost} FROM {last_cpu}@{cum_costs_size}'
# )
# the per-txn profit or loss (PnL) given we are
# (partially) "closing"/"exiting" the position via
# this txn.
pnl: float = (last_ppu - price) * size
# always subtract txn cost from total txn pnl
txn_pnl: float = pnl - txn_cost
# cumulative PnLs per txn
last_ledger_pnl = (
last_ledger_pnl + txn_pnl
)
last_pos_pnl = df[i, 'cum_pos_pnl'] = (
last_pos_pnl + txn_pnl
)
if cumsize == 0:
last_ppu = ppu = 0
# compute the BEP: "break even price", a value that
# determines at what price the remaining cumsize can be
# liquidated such that the net-PnL on the current
# position will result in ZERO gain or loss from open
# to close including all txn costs B)
if (
abs(cumsize) > 0 # non-exit-to-zero position txn
):
cumsize_sign: float = copysign(1, cumsize)
ledger_bep: float = (
(
(ppu * cumsize)
-
(last_ledger_pnl * cumsize_sign)
) / cumsize
)
# NOTE: when we "enter more" dst asset units (aka
# increase position state) AFTER having exited some
# units (aka decreasing the pos size some) the bep
# needs to be RECOMPUTED based on new ppu such that
# liquidation of the cumsize at the bep price
# results in a zero-pnl for the existing position
# (since the last one).
# for position lifetime BEP we can never have
# a valid value once the position is "closed"
# / fully exited Bo
pos_bep: float = (
(
(ppu * cumsize)
-
(last_pos_pnl * cumsize_sign)
) / cumsize
)
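# NOTE: a quick numeric check of the bep relation above (txn costs
# ignored): long 2 units with ppu == 12 and a realized pos pnl of -1
# so far gives pos_bep == (12*2 - (-1)*1) / 2 == 12.5; exiting the
# remaining 2 units at 12.5 yields (12.5 - 12) * 2 == +1 which exactly
# offsets the prior -1 for a net-zero position pnl.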
# inject DF row with all values
df[i, 'pos_ppu'] = ppu
df[i, 'per_txn_pnl'] = txn_pnl
df[i, 'applied_cost'] = -txn_cost
df[i, 'cum_pos_pnl'] = last_pos_pnl
df[i, 'pos_bep'] = pos_bep
df[i, 'cum_ledger_pnl'] = last_ledger_pnl
df[i, 'ledger_bep'] = ledger_bep
# keep backrefs to satisfy the recurrence relation
last_ppu: float = ppu
last_cumsize: float = cumsize
# TODO?: pass back the current `Position` object loaded from
# the account as well? Would provide incentive to do all
# this ledger loading inside a new async open_account().
# bs_mktid: str = df[0]['bs_mktid']
# pos: Position = acnt.pps[bs_mktid]
return dfs

View File

@ -18,78 +18,55 @@
CLI front end for trades ledger and position tracking management.
'''
from typing import (
Any,
)
from __future__ import annotations
from pprint import pformat
from rich.console import Console
from rich.markdown import Markdown
import polars as pl
import tractor
import trio
import typer
from ..log import get_logger
from ..service import (
open_piker_runtime,
)
from ..clearing._messages import BrokerdPosition
from ..calc import humanize
from ..brokers._daemon import broker_init
from ._ledger import (
load_ledger,
TransactionLedger,
# open_trade_ledger,
)
from .calc import (
open_ledger_dfs,
)
ledger = typer.Typer()
def broker_init(
brokername: str,
loglevel: str | None = None,
**start_actor_kwargs,
) -> dict:
'''
Given an input broker name, load all named arguments
which can be passed to a daemon + context spawn for
the relevant `brokerd` service endpoint.
'''
from ..brokers import get_brokermod
brokermod = get_brokermod(brokername)
modpath = brokermod.__name__
start_actor_kwargs['name'] = f'brokerd.{brokername}'
start_actor_kwargs.update(
getattr(
brokermod,
'_spawn_kwargs',
{},
)
)
# lookup actor-enabled modules declared by the backend offering the
# `brokerd` endpoint(s).
enabled = start_actor_kwargs['enable_modules'] = [modpath]
for submodname in getattr(
brokermod,
'__enable_modules__',
[],
):
subpath = f'{modpath}.{submodname}'
enabled.append(subpath)
# TODO XXX: DO WE NEED THIS?
# enabled.append('piker.data.feed')
# non-blocking setup of brokerd service nursery
from ..brokers import _setup_persistent_brokerd
return (
start_actor_kwargs, # to `ActorNursery.start_actor()`
_setup_persistent_brokerd, # service task ep
getattr( # trades endpoint
brokermod,
'trades_dialogue',
None,
),
)
def unpack_fqan(
fully_qualified_account_name: str,
console: Console | None = None,
) -> tuple | bool:
try:
brokername, account = fully_qualified_account_name.split('.')
return brokername, account
except ValueError:
if console is not None:
md = Markdown(
f'=> `{fully_qualified_account_name}` <=\n\n'
'is not a valid '
'__fully qualified account name?__\n\n'
'Your account name needs to be of the form '
'`<brokername>.<account_name>`\n'
)
console.print(md)
return False
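# NOTE: e.g. `unpack_fqan('binance.paper')` -> `('binance', 'paper')`,
# while a malformed name like `'binance'` returns `False` (and prints
# a rich markdown error when a `Console` is passed).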
@ledger.command()
@ -102,25 +79,23 @@ def sync(
"-l",
),
):
log = get_logger(loglevel)
console = Console()
try:
brokername, account = fully_qualified_account_name.split('.')
except ValueError:
md = Markdown(
f'=> `{fully_qualified_account_name}` <=\n\n'
'is not a valid '
'__fully qualified account name?__\n\n'
'Your account name needs to be of the form '
'`<brokername>.<account_name>`\n'
)
console.print(md)
pair: tuple[str, str]
if not (pair := unpack_fqan(
fully_qualified_account_name,
console,
)):
return
start_kwargs, _, trades_ep = broker_init(
brokername, account = pair
brokermod, start_kwargs, deamon_ep = broker_init(
brokername,
loglevel=loglevel,
)
brokername: str = brokermod.name
async def main():
@ -134,87 +109,203 @@ def sync(
tractor.open_nursery() as an,
):
portal = await an.start_actor(
loglevel=loglevel,
debug_mode=pdb,
**start_kwargs,
)
try:
log.info(
f'Piker runtime up as {actor.uid}@{sockaddr}'
)
if (
brokername == 'paper'
or trades_ep is None
):
from ..clearing import _paper_engine as paper
open_trades_endpoint = paper.open_paperboi(
fqme=None, # tell paper to not start clearing loop
broker=brokername,
loglevel=loglevel,
)
else:
# open live brokerd trades endpoint
open_trades_endpoint = portal.open_context(
trades_ep,
portal = await an.start_actor(
loglevel=loglevel,
debug_mode=pdb,
**start_kwargs,
)
positions: dict[str, Any]
accounts: list[str]
async with (
open_trades_endpoint as (
brokerd_ctx,
(positions, accounts,),
),
):
summary: str = (
'[dim underline]Piker Position Summary[/] '
f'[dim blue underline]{brokername}[/]'
'[dim].[/]'
f'[blue underline]{account}[/]'
f'[dim underline] -> total pps: [/]'
f'[green]{len(positions)}[/]\n'
from ..clearing import (
open_brokerd_dialog,
)
for ppdict in positions:
ppmsg = BrokerdPosition(**ppdict)
size = ppmsg.size
if size:
ppu: float = round(
ppmsg.avg_price,
ndigits=2,
brokerd_stream: tractor.MsgStream
async with (
# engage the brokerd daemon context
portal.open_context(
deamon_ep,
brokername=brokername,
loglevel=loglevel,
),
# manually open the brokerd trade dialog EP
# (what the EMS normally does internally) B)
open_brokerd_dialog(
brokermod,
portal,
exec_mode=(
'paper'
if account == 'paper'
else 'live'
),
loglevel=loglevel,
) as (
brokerd_stream,
pp_msg_table,
accounts,
),
):
try:
assert len(accounts) == 1
if not pp_msg_table:
ld, fpath = load_ledger(brokername, account)
assert not ld, f'WTF did we fail to parse ledger:\n{ld}'
console.print(
'[yellow]'
'No pps found for '
f'`{brokername}.{account}` '
'account!\n\n'
'[/][underline]'
'None of the following ledger files exist:\n\n[/]'
f'{fpath.as_uri()}\n'
)
return
pps_by_symbol: dict[str, BrokerdPosition] = pp_msg_table[
brokername,
account,
]
summary: str = (
'[dim underline]Piker Position Summary[/] '
f'[dim blue underline]{brokername}[/]'
'[dim].[/]'
f'[blue underline]{account}[/]'
f'[dim underline] -> total pps: [/]'
f'[green]{len(pps_by_symbol)}[/]\n'
)
cost_basis: str = humanize(size * ppu)
h_size: str = humanize(size)
# for ppdict in positions:
for fqme, ppmsg in pps_by_symbol.items():
# ppmsg = BrokerdPosition(**ppdict)
size = ppmsg.size
if size:
ppu: float = round(
ppmsg.avg_price,
ndigits=2,
)
cost_basis: str = humanize(size * ppu)
h_size: str = humanize(size)
if size < 0:
pcolor = 'red'
else:
pcolor = 'green'
if size < 0:
pcolor = 'red'
else:
pcolor = 'green'
# semantic-highlight of fqme
fqme = ppmsg.symbol
tokens = fqme.split('.')
styled_fqme = f'[blue underline]{tokens[0]}[/]'
for tok in tokens[1:]:
styled_fqme += '[dim].[/]'
styled_fqme += f'[dim blue underline]{tok}[/]'
# semantic-highlight of fqme
fqme = ppmsg.symbol
tokens = fqme.split('.')
styled_fqme = f'[blue underline]{tokens[0]}[/]'
for tok in tokens[1:]:
styled_fqme += '[dim].[/]'
styled_fqme += f'[dim blue underline]{tok}[/]'
# TODO: instead display in a ``rich.Table``?
summary += (
styled_fqme +
'[dim]: [/]'
f'[{pcolor}]{h_size}[/]'
'[dim blue]u @[/]'
f'[{pcolor}]{ppu}[/]'
'[dim blue] = [/]'
f'[{pcolor}]$ {cost_basis}\n[/]'
)
# TODO: instead display in a ``rich.Table``?
summary += (
styled_fqme +
'[dim]: [/]'
f'[{pcolor}]{h_size}[/]'
'[dim blue]u @[/]'
f'[{pcolor}]{ppu}[/]'
'[dim blue] = [/]'
f'[{pcolor}]$ {cost_basis}\n[/]'
)
console.print(summary)
await brokerd_ctx.cancel()
console.print(summary)
await portal.cancel_actor()
finally:
# exit via ctx cancellation.
brokerd_ctx: tractor.Context = brokerd_stream._ctx
await brokerd_ctx.cancel(timeout=1)
# TODO: once ported to newer tractor branch we should
# be able to do a loop like this:
# while brokerd_ctx.cancel_called_remote is None:
# await trio.sleep(0.01)
# await brokerd_ctx.cancel()
finally:
await portal.cancel_actor()
trio.run(main)
if __name__ == "__main__":
ledger()
@ledger.command()
def disect(
# "fully_qualified_account_name"
fqan: str,
fqme: str, # for ib
# TODO: in tractor we should really have
# a debug_mode ctx for wrapping any kind of code no?
pdb: bool = False,
bs_mktid: str = typer.Option(
None,
"-bid",
),
loglevel: str = typer.Option(
'error',
"-l",
),
):
from piker.log import get_console_log
from piker.toolz import open_crash_handler
get_console_log(loglevel)
pair: tuple[str, str]
if not (pair := unpack_fqan(fqan)):
raise ValueError(f'{fqan} malformed!?')
brokername, account = pair
# ledger dfs groupby-partitioned by fqme
dfs: dict[str, pl.DataFrame]
# actual ledger instance
ldgr: TransactionLedger
pl.Config.set_tbl_cols(-1)
pl.Config.set_tbl_rows(-1)
with (
open_crash_handler(),
open_ledger_dfs(
brokername,
account,
) as (dfs, ldgr),
):
# look up specific frame for fqme-selected asset
if (df := dfs.get(fqme)) is None:
mktids2fqmes: dict[str, list[str]] = {}
for bs_mktid in dfs:
df: pl.DataFrame = dfs[bs_mktid]
fqmes: pl.Series[str] = df['fqme']
uniques: list[str] = fqmes.unique()
mktids2fqmes[bs_mktid] = set(uniques)
if fqme in uniques:
break
print(
f'No specific ledger for fqme={fqme} could be found in\n'
f'{pformat(mktids2fqmes)}?\n'
f'Maybe the `{brokername}` backend uses something '
'else for its `bs_mktid` than the `fqme`?\n'
'Scanning for matches in unique fqmes per frame..\n'
)
# :pray:
assert not df.is_empty()
# muck around in pdbp REPL
breakpoint()
# TODO: we REALLY need a better console REPL for this
# kinda thing..
# - `xonsh` is an obvious option (and it looks amazin) but
# we need to figure out how to embed it better then just:
# from xonsh.main import main
# main(argv=[])
# which will not actually inject the `df` to globals?

View File

@ -17,14 +17,41 @@
"""
Broker clients, daemons and general back end machinery.
"""
from contextlib import (
asynccontextmanager as acm,
)
from importlib import import_module
from types import ModuleType
__brokers__ = [
from tractor.trionics import maybe_open_context
from ._util import (
log,
BrokerError,
SymbolNotFound,
NoData,
DataUnavailable,
DataThrottle,
resproc,
get_logger,
)
__all__: list[str] = [
'BrokerError',
'SymbolNotFound',
'NoData',
'DataUnavailable',
'DataThrottle',
'resproc',
'get_logger',
]
__brokers__: list[str] = [
'binance',
'ib',
'kraken',
'kucoin'
'kucoin',
# broken but used to work
# 'questrade',
# 'robinhood',
@ -44,7 +71,7 @@ def get_brokermod(brokername: str) -> ModuleType:
Return the imported broker module by name.
'''
module = import_module('.' + brokername, 'piker.brokers')
module: ModuleType = import_module('.' + brokername, 'piker.brokers')
# we only allow monkeying because it's for internal keying
module.name = module.__name__.split('.')[-1]
return module
@ -57,3 +84,28 @@ def iter_brokermods():
'''
for name in __brokers__:
yield get_brokermod(name)
@acm
async def open_cached_client(
brokername: str,
**kwargs,
) -> 'Client': # noqa
'''
Get a cached broker client from the current actor's local vars.
If one has not been set up yet, create and cache it.
'''
brokermod = get_brokermod(brokername)
async with maybe_open_context(
acm_func=brokermod.get_client,
kwargs=kwargs,
) as (cache_hit, client):
if cache_hit:
log.runtime(f'Reusing existing {client}')
yield client
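# NOTE: minimal usage sketch from any backend-agnostic task:
#
#   async with open_cached_client('kraken') as client:
#       await client.symbol_info()
#
# (the method called is illustrative only; each backend `Client`
# exposes its own api). Repeat entries within the same actor reuse the
# already-opened instance via `maybe_open_context()`.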

View File

@ -19,9 +19,16 @@ Broker-daemon-actor "endpoint-hooks": the service task entry points for
``brokerd``.
'''
from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
)
from types import ModuleType
from typing import (
TYPE_CHECKING,
AsyncContextManager,
)
import exceptiongroup as eg
import tractor
import trio
@ -29,11 +36,14 @@ import trio
from . import _util
from . import get_brokermod
if TYPE_CHECKING:
from ..data import _FeedsBus
# `brokerd` enabled modules
# TODO: move this def to the `.data` subpkg..
# NOTE: keeping this list as small as possible is part of our caps-sec
# model and should be treated with utmost care!
_data_mods = [
_data_mods: list[str] = [
'piker.brokers.core',
'piker.brokers.data',
'piker.brokers._daemon',
@ -58,31 +68,125 @@ async def _setup_persistent_brokerd(
the broker backend as needed.
'''
# NOTE: we only need to setup logging once (and only) here
# since all hosted daemon tasks will reference this same
# log instance's (actor local) state and thus don't require
# any further (level) configuration on their own B)
log = _util.get_console_log(
loglevel or tractor.current_actor().loglevel,
name=f'{_util.subsys}.{brokername}',
)
# set global for this actor to this new process-wide instance B)
_util.log = log
from piker.data.feed import (
_bus,
get_feed_bus,
# further, set the log level on any broker specific
# logger instance.
from piker.data import feed
assert not feed._bus
# allocate a nursery to the bus for spawning background
# tasks to service client IPC requests, normally
# `tractor.Context` connections to explicitly required
# `brokerd` endpoints such as:
# - `stream_quotes()`,
# - `manage_history()`,
# - `allocate_persistent_feed()`,
# - `open_symbol_search()`
# NOTE: see ep invocation details inside `.data.feed`.
try:
async with trio.open_nursery() as service_nursery:
bus: _FeedsBus = feed.get_feed_bus(
brokername,
service_nursery,
)
assert bus is feed._bus
# unblock caller
await ctx.started()
# we pin this task to keep the feeds manager active until the
# parent actor decides to tear it down
await trio.sleep_forever()
except eg.ExceptionGroup:
# TODO: likely some underlying `brokerd` IPC connection
# broke so here we handle a respawn and re-connect attempt!
# This likely should pair with development of the OCO task
# nursery in dev over @ `tractor` B)
# https://github.com/goodboy/tractor/pull/363
raise
def broker_init(
brokername: str,
loglevel: str | None = None,
**start_actor_kwargs,
) -> tuple[
ModuleType,
dict,
AsyncContextManager,
]:
'''
Given an input broker name, load all named arguments
which can be passed for daemon endpoint + context spawn
as required in every `brokerd` (actor) service.
This includes:
- load the appropriate <brokername>.py pkg module,
- read any declared `__enable_modules__: list[str]` which will be
passed to `tractor.ActorNursery.start_actor(enabled_modules=<this>)`
at actor start time,
- deliver a reference to the daemon lifetime fixture, which
for now is always the `_setup_persistent_brokerd()` context defined
above.
'''
from ..brokers import get_brokermod
brokermod = get_brokermod(brokername)
modpath: str = brokermod.__name__
start_actor_kwargs['name'] = f'brokerd.{brokername}'
start_actor_kwargs.update(
getattr(
brokermod,
'_spawn_kwargs',
{},
)
)
global _bus
assert not _bus
async with trio.open_nursery() as service_nursery:
# assign a nursery to the feeds bus for spawning
# background tasks from clients
get_feed_bus(brokername, service_nursery)
# XXX TODO: make this not so hacky/monkeypatched..
# -> we need a sane way to configure the logging level for all
# code running in brokerd.
# if utilmod := getattr(brokermod, '_util', False):
# utilmod.log.setLevel(loglevel.upper())
# unblock caller
await ctx.started()
# lookup actor-enabled modules declared by the backend offering the
# `brokerd` endpoint(s).
enabled: list[str]
enabled = start_actor_kwargs['enable_modules'] = [
__name__, # so that eps from THIS mod can be invoked
modpath,
]
for submodname in getattr(
brokermod,
'__enable_modules__',
[],
):
subpath: str = f'{modpath}.{submodname}'
enabled.append(subpath)
# we pin this task to keep the feeds manager active until the
# parent actor decides to tear it down
await trio.sleep_forever()
return (
brokermod,
start_actor_kwargs, # to `ActorNursery.start_actor()`
# XXX see impl above; contains all (actor global)
# setup/teardown expected in all `brokerd` actor instances.
_setup_persistent_brokerd,
)
async def spawn_brokerd(
@ -94,44 +198,44 @@ async def spawn_brokerd(
) -> bool:
from piker.service import Services
from piker.service._util import log # use service mngr log
log.info(f'Spawning {brokername} broker daemon')
brokermod = get_brokermod(brokername)
dname = f'brokerd.{brokername}'
(
brokermode,
tractor_kwargs,
daemon_fixture_ep,
) = broker_init(
brokername,
loglevel,
**tractor_kwargs,
)
brokermod = get_brokermod(brokername)
extra_tractor_kwargs = getattr(brokermod, '_spawn_kwargs', {})
tractor_kwargs.update(extra_tractor_kwargs)
# ask `pikerd` to spawn a new sub-actor and manage it under its
# actor nursery
modpath = brokermod.__name__
broker_enable = [modpath]
for submodname in getattr(
brokermod,
'__enable_modules__',
[],
):
subpath = f'{modpath}.{submodname}'
broker_enable.append(subpath)
from piker.service import Services
dname: str = tractor_kwargs.pop('name') # f'brokerd.{brokername}'
portal = await Services.actor_n.start_actor(
dname,
enable_modules=_data_mods + broker_enable,
loglevel=loglevel,
enable_modules=_data_mods + tractor_kwargs.pop('enable_modules'),
debug_mode=Services.debug_mode,
**tractor_kwargs
)
# non-blocking setup of brokerd service nursery
# NOTE: the service mngr expects an already spawned actor + its
# portal ref in order to do non-blocking setup of brokerd
# service nursery.
await Services.start_service_task(
dname,
portal,
# signature of target root-task endpoint
_setup_persistent_brokerd,
daemon_fixture_ep,
brokername=brokername,
loglevel=loglevel,
)
@ -148,8 +252,11 @@ async def maybe_spawn_brokerd(
) -> tractor.Portal:
'''
Helper to spawn a brokerd service *from* a client
who wishes to use the sub-actor-daemon.
Helper to spawn a brokerd service *from* a client who wishes to
use the sub-actor-daemon but is fine with re-using any existing
and contactable `brokerd`.
More or less, it acts as a cached-actor-getter factory.
'''
from piker.service import maybe_spawn_daemon

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -18,10 +18,11 @@
Handy cross-broker utils.
"""
from __future__ import annotations
from functools import partial
import json
import asks
import httpx
import logging
from ..log import (
@ -32,6 +33,8 @@ from ..log import (
subsys: str = 'piker.brokers'
# NOTE: level should be reset by any actor that is spawned
# as well as given a (more) explicit name/key such
# as `piker.brokers.binance` matching the subpkg.
log = get_logger(subsys)
get_console_log = partial(
@ -48,6 +51,7 @@ class SymbolNotFound(BrokerError):
"Symbol not found by broker search"
# TODO: these should probably be moved to `.tsp/.data`?
class NoData(BrokerError):
'''
Symbol data not permitted or no data
@ -57,14 +61,15 @@ class NoData(BrokerError):
def __init__(
self,
*args,
frame_size: int = 1000,
info: dict|None = None,
) -> None:
super().__init__(*args)
self.info: dict|None = info
# when raised, machinery can check if the backend
# set a "frame size" for doing datetime calcs.
self.frame_size: int = 1000
# self.frame_size: int = 1000
class DataUnavailable(BrokerError):
@ -86,16 +91,18 @@ class DataThrottle(BrokerError):
def resproc(
resp: asks.response_objects.Response,
resp: httpx.Response,
log: logging.Logger,
return_json: bool = True,
log_resp: bool = False,
) -> asks.response_objects.Response:
"""Process response and return its json content.
) -> httpx.Response:
'''
Process response and return its json content.
Raise the appropriate error on non-200 OK responses.
"""
'''
if not resp.status_code == 200:
raise BrokerError(resp.body)
try:

View File

@ -1,680 +0,0 @@
# piker: trading gear for hackers
# Copyright (C)
# Guillermo Rodriguez
# Tyler Goodlet
# (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Binance backend
"""
from contextlib import (
asynccontextmanager as acm,
aclosing,
)
from datetime import datetime
# from functools import lru_cache
from decimal import Decimal
from typing import (
Any, Union, Optional,
AsyncGenerator, Callable,
)
import time
import trio
from trio_typing import TaskStatus
import pendulum
import asks
from fuzzywuzzy import process as fuzzy
import numpy as np
import tractor
from .._cacheables import async_lifo_cache
from ..accounting._mktinfo import (
Asset,
MktPair,
digits_to_dec,
)
from .._cacheables import open_cached_client
from ._util import (
resproc,
SymbolNotFound,
DataUnavailable,
)
from ._util import (
log,
get_console_log,
)
from ..data.types import Struct
from ..data.validate import FeedInit
from ..data._web_bs import (
open_autorecon_ws,
NoBsWs,
)
_url = 'https://api.binance.com'
# Broker specific ohlc schema (rest)
_ohlc_dtype = [
('index', int),
('time', int),
('open', float),
('high', float),
('low', float),
('close', float),
('volume', float),
('bar_wap', float), # will be zeroed by sampler if not filled
# XXX: some additional fields are defined in the docs:
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
# ('close_time', int),
# ('quote_vol', float),
# ('num_trades', int),
# ('buy_base_vol', float),
# ('buy_quote_vol', float),
# ('ignore', float),
]
# UI components allow this to be declared such that additional
# (historical) fields can be exposed.
ohlc_dtype = np.dtype(_ohlc_dtype)
_show_wap_in_history = False
# https://binance-docs.github.io/apidocs/spot/en/#exchange-information
# TODO: make this frozen again by pre-processing the
# filters list to a dict at init time?
class Pair(Struct, frozen=True):
symbol: str
status: str
baseAsset: str
baseAssetPrecision: int
cancelReplaceAllowed: bool
allowTrailingStop: bool
quoteAsset: str
quotePrecision: int
quoteAssetPrecision: int
baseCommissionPrecision: int
quoteCommissionPrecision: int
orderTypes: list[str]
icebergAllowed: bool
ocoAllowed: bool
quoteOrderQtyMarketAllowed: bool
isSpotTradingAllowed: bool
isMarginTradingAllowed: bool
defaultSelfTradePreventionMode: str
allowedSelfTradePreventionModes: list[str]
filters: dict[
str,
Union[str, int, float]
]
permissions: list[str]
@property
def price_tick(self) -> Decimal:
# XXX: lul, after manually inspecting the response format we
# just directly pick out the info we need
step_size: str = self.filters['PRICE_FILTER']['tickSize'].rstrip('0')
return Decimal(step_size)
@property
def size_tick(self) -> Decimal:
step_size: str = self.filters['LOT_SIZE']['stepSize'].rstrip('0')
return Decimal(step_size)
class OHLC(Struct):
'''
Description of the flattened OHLC quote format.
For schema details see:
https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-streams
'''
time: int
open: float
high: float
low: float
close: float
volume: float
close_time: int
quote_vol: float
num_trades: int
buy_base_vol: float
buy_quote_vol: float
ignore: int
# null the place holder for `bar_wap` until we
# figure out what to extract for this.
bar_wap: float = 0.0
class L1(Struct):
# https://binance-docs.github.io/apidocs/spot/en/#individual-symbol-book-ticker-streams
update_id: int
sym: str
bid: float
bsize: float
ask: float
asize: float
# convert datetime obj timestamp to unixtime in milliseconds
def binance_timestamp(
when: datetime
) -> int:
return int((when.timestamp() * 1000) + (when.microsecond / 1000))
class Client:
def __init__(self) -> None:
self._sesh = asks.Session(connections=4)
self._sesh.base_location = _url
self._pairs: dict[str, Pair] = {}
async def _api(
self,
method: str,
params: dict,
) -> dict[str, Any]:
resp = await self._sesh.get(
path=f'/api/v3/{method}',
params=params,
timeout=float('inf')
)
return resproc(resp, log)
async def exch_info(
self,
sym: str | None = None,
) -> dict[str, Pair] | Pair:
'''
Fresh exchange-pairs info query for symbol ``sym: str``:
https://binance-docs.github.io/apidocs/spot/en/#exchange-information
'''
cached_pair = self._pairs.get(sym)
if cached_pair:
return cached_pair
# retrieve all symbols by default
params = {}
if sym is not None:
sym = sym.lower()
params = {'symbol': sym}
resp = await self._api('exchangeInfo', params=params)
entries = resp['symbols']
if not entries:
raise SymbolNotFound(f'{sym} not found:\n{resp}')
# pre-process .filters field into a table
pairs = {}
for item in entries:
symbol = item['symbol']
filters = {}
filters_ls: list = item.pop('filters')
for entry in filters_ls:
ftype = entry['filterType']
filters[ftype] = entry
pairs[symbol] = Pair(
filters=filters,
**item,
)
# pairs = {
# item['symbol']: Pair(**item) for item in entries
# }
self._pairs.update(pairs)
if sym is not None:
return pairs[sym]
else:
return self._pairs
symbol_info = exch_info
async def search_symbols(
self,
pattern: str,
limit: int = None,
) -> dict[str, Any]:
if self._pairs is not None:
data = self._pairs
else:
data = await self.exch_info()
matches = fuzzy.extractBests(
pattern,
data,
score_cutoff=50,
)
# repack in dict form
return {item[0]['symbol']: item[0]
for item in matches}
async def bars(
self,
symbol: str,
start_dt: Optional[datetime] = None,
end_dt: Optional[datetime] = None,
limit: int = 1000, # <- max allowed per query
as_np: bool = True,
) -> dict:
if end_dt is None:
end_dt = pendulum.now('UTC').add(minutes=1)
if start_dt is None:
start_dt = end_dt.start_of(
'minute').subtract(minutes=limit)
start_time = binance_timestamp(start_dt)
end_time = binance_timestamp(end_dt)
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
bars = await self._api(
'klines',
params={
'symbol': symbol.upper(),
'interval': '1m',
'startTime': start_time,
'endTime': end_time,
'limit': limit
}
)
# TODO: pack this bars scheme into a ``pydantic`` validator type:
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
# TODO: we should port this to ``pydantic`` to avoid doing
# manual validation ourselves..
new_bars = []
for i, bar in enumerate(bars):
bar = OHLC(*bar)
bar.typecast()
row = []
for j, (name, ftype) in enumerate(_ohlc_dtype[1:]):
# TODO: maybe we should go nanoseconds on all
# history time stamps?
if name == 'time':
# convert to epoch seconds: float
row.append(bar.time / 1000.0)
else:
row.append(getattr(bar, name))
new_bars.append((i,) + tuple(row))
array = np.array(new_bars, dtype=_ohlc_dtype) if as_np else bars
return array
@acm
async def get_client() -> Client:
client = Client()
log.info('Caching exchange infos..')
await client.exch_info()
yield client
# validation type
class AggTrade(Struct):
e: str # Event type
E: int # Event time
s: str # Symbol
a: int # Aggregate trade ID
p: float # Price
q: float # Quantity
f: int # First trade ID
l: int # Last trade ID
T: int # Trade time
m: bool # Is the buyer the market maker?
M: bool # Ignore
async def stream_messages(
ws: NoBsWs,
) -> AsyncGenerator[NoBsWs, dict]:
# TODO: match syntax here!
msg: dict[str, Any]
async for msg in ws:
match msg:
# for l1 streams binance doesn't add an event type field so
# identify those messages by matching keys
# https://binance-docs.github.io/apidocs/spot/en/#individual-symbol-book-ticker-streams
case {
# NOTE: this is never an old value it seems, so
# they are always sending real L1 spread updates.
'u': upid, # update id
's': sym,
'b': bid,
'B': bsize,
'a': ask,
'A': asize,
}:
# TODO: it would be super nice to have a `L1` piker type
# which "renders" incremental tick updates from a packed
# msg-struct:
# - backend msgs after packed into the type such that we
# can reduce IPC usage but without each backend having
# to do that incremental update logic manually B)
# - would it maybe be more efficient to use this instead?
# https://binance-docs.github.io/apidocs/spot/en/#diff-depth-stream
l1 = L1(
update_id=upid,
sym=sym,
bid=bid,
bsize=bsize,
ask=ask,
asize=asize,
)
l1.typecast()
# repack into piker's tick-quote format
yield 'l1', {
'symbol': l1.sym,
'ticks': [
{
'type': 'bid',
'price': l1.bid,
'size': l1.bsize,
},
{
'type': 'bsize',
'price': l1.bid,
'size': l1.bsize,
},
{
'type': 'ask',
'price': l1.ask,
'size': l1.asize,
},
{
'type': 'asize',
'price': l1.ask,
'size': l1.asize,
}
]
}
# https://binance-docs.github.io/apidocs/spot/en/#aggregate-trade-streams
case {
'e': 'aggTrade',
}:
# NOTE: this is purely for a definition,
# ``msgspec.Struct`` does not runtime-validate until you
# decode/encode, see:
# https://jcristharif.com/msgspec/structs.html#type-validation
msg = AggTrade(**msg)
msg.typecast()
yield 'trade', {
'symbol': msg.s,
'last': msg.p,
'brokerd_ts': time.time(),
'ticks': [{
'type': 'trade',
'price': msg.p,
'size': msg.q,
'broker_ts': msg.T,
}],
}
def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
"""Create a request subscription packet dict.
https://binance-docs.github.io/apidocs/spot/en/#live-subscribing-unsubscribing-to-streams
"""
return {
'method': 'SUBSCRIBE',
'params': [
f'{pair.lower()}@{sub_name}'
for pair in pairs
],
'id': uid
}
@acm
async def open_history_client(
symbol: str,
) -> tuple[Callable, int]:
# TODO implement history getter for the new storage layer.
async with open_cached_client('binance') as client:
async def get_ohlc(
timeframe: float,
end_dt: datetime | None = None,
start_dt: datetime | None = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
if timeframe != 60:
raise DataUnavailable('Only 1m bars are supported')
array = await client.bars(
symbol,
start_dt=start_dt,
end_dt=end_dt,
)
times = array['time']
if (
end_dt is None
):
inow = round(time.time())
if (inow - times[-1]) > 60:
await tractor.breakpoint()
start_dt = pendulum.from_timestamp(times[0])
end_dt = pendulum.from_timestamp(times[-1])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3}
@async_lifo_cache()
async def get_mkt_info(
fqme: str,
) -> tuple[MktPair, Pair]:
async with open_cached_client('binance') as client:
pair: Pair = await client.exch_info(fqme.upper())
mkt = MktPair(
dst=Asset(
name=pair.baseAsset,
atype='crypto',
tx_tick=digits_to_dec(pair.baseAssetPrecision),
),
src=Asset(
name=pair.quoteAsset,
atype='crypto',
tx_tick=digits_to_dec(pair.quoteAssetPrecision),
),
price_tick=pair.price_tick,
size_tick=pair.size_tick,
bs_mktid=pair.symbol,
broker='binance',
)
both = mkt, pair
return both
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str = None,
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
# XXX: required to propagate ``tractor`` loglevel to piker logging
get_console_log(loglevel or tractor.current_actor().loglevel)
uid = 0
async with (
send_chan as send_chan,
):
init_msgs: list[FeedInit] = []
for sym in symbols:
mkt, pair = await get_mkt_info(sym)
# build out init msgs according to latest spec
init_msgs.append(
FeedInit(
mkt_info=mkt,
shm_write_opts={'sum_tick_vml': False},
)
)
@acm
async def subscribe(ws: NoBsWs):
# setup subs
# trade data (aka L1)
# https://binance-docs.github.io/apidocs/spot/en/#symbol-order-book-ticker
l1_sub = make_sub(symbols, 'bookTicker', uid)
await ws.send_msg(l1_sub)
# aggregate (each order clear by taker **not** by maker)
# trades data:
# https://binance-docs.github.io/apidocs/spot/en/#aggregate-trade-streams
agg_trades_sub = make_sub(symbols, 'aggTrade', uid)
await ws.send_msg(agg_trades_sub)
# ack from ws server
res = await ws.recv_msg()
assert res['id'] == uid
yield
subs = []
for sym in symbols:
subs.append("{sym}@aggTrade")
subs.append("{sym}@bookTicker")
# unsub from all pairs on teardown
if ws.connected():
await ws.send_msg({
"method": "UNSUBSCRIBE",
"params": subs,
"id": uid,
})
# XXX: do we need to ack the unsub?
# await ws.recv_msg()
async with (
open_autorecon_ws(
# XXX: see api docs which show diff addr?
# https://developers.binance.com/docs/binance-trading-api/websocket_api#general-api-information
# 'wss://ws-api.binance.com:443/ws-api/v3',
'wss://stream.binance.com/ws',
fixture=subscribe,
) as ws,
# avoid stream-gen closure from breaking trio..
aclosing(stream_messages(ws)) as msg_gen,
):
typ, quote = await anext(msg_gen)
# pull a first quote and deliver
while typ != 'trade':
typ, quote = await anext(msg_gen)
task_status.started((init_msgs, quote))
# signal to caller feed is ready for consumption
feed_is_live.set()
# import time
# last = time.time()
# start streaming
async for typ, msg in msg_gen:
# period = time.time() - last
# hz = 1/period if period else float('inf')
# if hz > 60:
# log.info(f'Binance quotez : {hz}')
topic = msg['symbol'].lower()
await send_chan.send({topic: msg})
# last = time.time()
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> Client:
async with open_cached_client('binance') as client:
# load all symbols locally for fast search
cache = await client.exch_info()
await ctx.started()
async with ctx.open_stream() as stream:
async for pattern in stream:
# results = await client.exch_info(sym=pattern.upper())
matches = fuzzy.extractBests(
pattern,
cache,
score_cutoff=50,
)
# repack in dict form
await stream.send({
item[0].symbol: item[0]
for item in matches
})

View File

@ -0,0 +1,60 @@
# piker: trading gear for hackers
# Copyright (C)
# Guillermo Rodriguez (aka ze jefe)
# Tyler Goodlet
# (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
binancial secs on the floor, in the office, behind the dumpster.
"""
from .api import (
get_client,
)
from .feed import (
get_mkt_info,
open_history_client,
open_symbol_search,
stream_quotes,
)
from .broker import (
open_trade_dialog,
get_cost,
)
from .venues import (
SpotPair,
FutesPair,
)
__all__ = [
'get_client',
'get_mkt_info',
'get_cost',
'SpotPair',
'FutesPair',
'open_trade_dialog',
'open_history_client',
'open_symbol_search',
'stream_quotes',
]
# `brokerd` modules
__enable_modules__: list[str] = [
'api',
'feed',
'broker',
]

File diff suppressed because it is too large

View File

@ -0,0 +1,710 @@
# piker: trading gear for hackers
# Copyright (C)
# Guillermo Rodriguez (aka ze jefe)
# Tyler Goodlet
# (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Live order control B)
'''
from __future__ import annotations
from pprint import pformat
from typing import (
Any,
AsyncIterator,
)
import time
from time import time_ns
from bidict import bidict
import tractor
import trio
from piker.accounting import (
Asset,
)
from piker.brokers._util import (
get_logger,
)
from piker.data._web_bs import (
open_autorecon_ws,
NoBsWs,
)
from piker.brokers import (
open_cached_client,
BrokerError,
)
from piker.clearing import (
OrderDialogs,
)
from piker.clearing._messages import (
BrokerdOrder,
BrokerdOrderAck,
BrokerdStatus,
BrokerdPosition,
BrokerdFill,
BrokerdCancel,
BrokerdError,
Status,
Order,
)
from .venues import (
Pair,
_futes_ws,
_testnet_futes_ws,
)
from .api import Client
log = get_logger('piker.brokers.binance')
# Fee schedule template, mostly for paper engine fees modelling.
# https://www.binance.com/en/support/faq/what-are-market-makers-and-takers-360007720071
def get_cost(
price: float,
size: float,
is_taker: bool = False,
) -> float:
# https://www.binance.com/en/fee/trading
cb: float = price * size
match is_taker:
case True:
return cb * 0.001000
case False if cb < 1e6:
return cb * 0.001000
case False if 1e6 <= cb < 5e6:
return cb * 0.000900
# NOTE: there's more but are you really going
# to have a cb bigger than this per trade?
case False if cb >= 5e6:
return cb * 0.000800
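# NOTE: quick sanity example for the schedule above: a maker fill of
# 2 units at 25_000 has cb == 50_000 (< 1e6) so
# `get_cost(25_000.0, 2.0) == 50.0` (0.10%), and the same fill as a
# taker is also 50.0; only maker notional in the 1e6-5e6 and >= 5e6
# tiers drops to 0.09% / 0.08% per the fee comments above.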
async def handle_order_requests(
ems_order_stream: tractor.MsgStream,
client: Client,
dids: bidict[str, str],
dialogs: OrderDialogs,
) -> None:
'''
Receive order requests from `emsd`, translate them into API calls and transmit.
'''
msg: dict | BrokerdOrder | BrokerdCancel
async for msg in ems_order_stream:
log.info(f'Rx order request:\n{pformat(msg)}')
match msg:
case {
'action': 'cancel',
}:
cancel = BrokerdCancel(**msg)
existing: BrokerdOrder | None = dialogs.get(cancel.oid)
if not existing:
log.error(
f'NO Existing order-dialog for {cancel.oid}!?'
)
await ems_order_stream.send(BrokerdError(
oid=cancel.oid,
# TODO: do we need the symbol?
# https://github.com/pikers/piker/issues/514
symbol='unknown',
reason=(
'Invalid `binance` order request dialog oid',
)
))
continue
else:
symbol: str = existing['symbol']
try:
await client.submit_cancel(
symbol,
cancel.oid,
)
except BrokerError as be:
await ems_order_stream.send(
BrokerdError(
oid=msg['oid'],
symbol=symbol,
reason=(
'`binance` CANCEL failed:\n'
f'{be}'
))
)
continue
case {
'account': ('binance.usdtm' | 'binance.spot') as account,
'action': action,
} if action in {'buy', 'sell'}:
# validate
order = BrokerdOrder(**msg)
oid: str = order.oid # emsd order id
modify: bool = False
# NOTE: check and report edits
if existing := dialogs.get(order.oid):
log.info(
f'Existing order for {oid} updated:\n'
f'{pformat(existing.maps[-1])} -> {pformat(msg)}'
)
modify = True
# only add new msg AFTER the existing check
dialogs.add_msg(oid, msg)
else:
# XXX NOTE: update before the ack!
# track latest request state such that map
# lookups start at the most recent msg and then
# scan reverse-chronologically.
dialogs.add_msg(oid, msg)
# XXX: ACK the request **immediately** before sending
# the api side request to ensure the ems maps the oid ->
# reqid correctly!
resp = BrokerdOrderAck(
oid=oid, # ems order request id
reqid=oid, # our custom int mapping
account='binance', # piker account
)
await ems_order_stream.send(resp)
# call our client api to submit the order
# NOTE: modifies only require diff key for user oid:
# https://binance-docs.github.io/apidocs/futures/en/#modify-order-trade
try:
reqid = await client.submit_limit(
symbol=order.symbol,
side=order.action,
quantity=order.size,
price=order.price,
oid=oid,
modify=modify,
)
# SMH they do gen their own order id: ints..
# assert reqid == order.oid
dids[order.oid] = reqid
except BrokerError as be:
await ems_order_stream.send(
BrokerdError(
oid=msg['oid'],
symbol=msg['symbol'],
reason=(
'`binance` request failed:\n'
f'{be}'
))
)
continue
case _:
account = msg.get('account')
if account not in {'binance.spot', 'binance.futes'}:
log.error(
'Order request does not have a valid binance account name?\n'
'Only one of\n'
'- `binance.spot` or,\n'
'- `binance.usdtm`\n'
'is currently valid!'
)
await ems_order_stream.send(
BrokerdError(
oid=msg['oid'],
symbol=msg['symbol'],
reason=(
f'Invalid `binance` broker request msg:\n{msg}'
))
)
@tractor.context
async def open_trade_dialog(
ctx: tractor.Context,
) -> AsyncIterator[dict[str, Any]]:
# TODO: how do we set this from the EMS such that
# positions are loaded from the correct venue on the user
# stream at startup? (that is in an attempt to support both
# spot and futes markets?)
# - I guess we just want to instead start 2 separate user
# stream tasks right? unless we want another actor pool?
# XXX: see issue: <urlhere>
venue_name: str = 'futes'
venue_mode: str = 'usdtm_futes'
account_name: str = 'usdtm'
use_testnet: bool = False
# TODO: if/when we add .accounting support we need to
# do a open_symcache() call.. though maybe we can hide
# this in a new async version of open_account()?
async with open_cached_client('binance') as client:
subconf: dict|None = client.conf.get(venue_name)
# XXX: if no futes.api_key or spot.api_key has been set we
# always fall back to the paper engine!
if (
not subconf
or
not subconf.get('api_key')
):
await ctx.started('paper')
return
use_testnet: bool = subconf.get('use_testnet', False)
async with (
open_cached_client('binance') as client,
):
client.mkt_mode: str = venue_mode
# TODO: map these wss urls depending on spot or futes
# setting passed when this task is spawned?
wss_url: str = _futes_ws if not use_testnet else _testnet_futes_ws
wss: NoBsWs
async with (
client.manage_listen_key() as listen_key,
open_autorecon_ws(f'{wss_url}/?listenKey={listen_key}') as wss,
):
nsid: int = time_ns()
await wss.send_msg({
# "method": "SUBSCRIBE",
"method": "REQUEST",
"params":
[
f"{listen_key}@account",
f"{listen_key}@balance",
f"{listen_key}@position",
# TODO: does this even work!? seems to cause
# a hang on the first msg..? lelelel.
# f"{listen_key}@order",
],
"id": nsid
})
with trio.fail_after(6):
msg = await wss.recv_msg()
assert msg['id'] == nsid
# TODO: load other market wide data / statistics:
# - OI: https://binance-docs.github.io/apidocs/futures/en/#open-interest
# - OI stats: https://binance-docs.github.io/apidocs/futures/en/#open-interest-statistics
accounts: bidict[str, str] = bidict({'binance.usdtm': None})
balances: dict[Asset, tuple[float, int]] = {}
positions: list[BrokerdPosition] = []
for resp_dict in msg['result']:
resp: dict = resp_dict['res']
req: str = resp_dict['req']
# @account response should be something like:
# {'accountAlias': 'sRFzFzAuuXsR',
# 'canDeposit': True,
# 'canTrade': True,
# 'canWithdraw': True,
# 'feeTier': 0}
if 'account' in req:
# NOTE: fill in the hash-like key/alias binance
# provides for the account.
alias: str = resp['accountAlias']
accounts['binance.usdtm'] = alias
# @balance response:
# {'accountAlias': 'sRFzFzAuuXsR',
# 'balances': [{'asset': 'BTC',
# 'availableBalance': '0.00000000',
# 'balance': '0.00000000',
# 'crossUnPnl': '0.00000000',
# 'crossWalletBalance': '0.00000000',
# 'maxWithdrawAmount': '0.00000000',
# 'updateTime': 0}]
# ...
# }
elif 'balance' in req:
for entry in resp['balances']:
name: str = entry['asset']
balance: float = float(entry['balance'])
last_update_t: int = entry['updateTime']
spot_asset: Asset = client._venue2assets['spot'][name]
if balance > 0:
balances[spot_asset] = (balance, last_update_t)
# await tractor.pause()
# @position response:
# {'positions': [{'entryPrice': '0.0',
# 'isAutoAddMargin': False,
# 'isolatedMargin': '0',
# 'leverage': 20,
# 'liquidationPrice': '0',
# 'marginType': 'CROSSED',
# 'markPrice': '0.60289650',
# 'markPrice': '0.00000000',
# 'maxNotionalValue': '25000',
# 'notional': '0',
# 'positionAmt': '0',
# 'positionSide': 'BOTH',
# 'symbol': 'ETHUSDT_230630',
# 'unRealizedProfit': '0.00000000',
# 'updateTime': 1672741444894}
# ...
# }
elif 'position' in req:
for entry in resp['positions']:
bs_mktid: str = entry['symbol']
entry_size: float = float(entry['positionAmt'])
pair: Pair | None = client._venue2pairs[
venue_mode
].get(bs_mktid)
if (
pair
and entry_size > 0
):
entry_price: float = float(entry['entryPrice'])
ppmsg = BrokerdPosition(
broker='binance',
account=f'binance.{account_name}',
# TODO: maybe we should be passing back
# a `MktPair` here?
symbol=pair.bs_fqme.lower() + '.binance',
size=entry_size,
avg_price=entry_price,
)
positions.append(ppmsg)
if pair is None:
log.warning(
f'`{bs_mktid}` Position entry but no market pair?\n'
f'{pformat(entry)}\n'
)
await ctx.started((
positions,
list(accounts)
))
# TODO: package more state tracking into the dialogs API?
# - hmm maybe we could include `OrderDialogs.dids:
# bidict` as part of the interface and then ask for
# a reqid field to be passed at init?
# |-> `OrderDialog(reqid_field='orderId')` kinda thing?
# - also maybe bundle in some kind of dialog to account
# table?
dialogs = OrderDialogs()
dids: bidict[str, int] = bidict()
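# `dids` maps each EMS `oid: str` to the int order id binance
# generates server side (see `dids[order.oid] = reqid` in
# `handle_order_requests()` above); `dialogs` keeps the per-oid
# msg history used for edit/cancel lookups.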
# TODO: further init setup things to get full EMS and
# .accounting support B)
# - live order loading via user stream subscription and
# update to the order dialog table.
# - MAKE SURE we add live orders loaded during init
# into the dialogs table to ensure they can be
# cancelled, meaning we can do a symbol lookup.
# - position loading using `piker.accounting` subsys
# and comparison with binance's own position calcs.
# - load pps and accounts using accounting apis, write
# the ledger and account files
# - table: Account
# - ledger: TransactionLedger
async with (
trio.open_nursery() as tn,
ctx.open_stream() as ems_stream,
):
# deliver all pre-existing open orders to the EMS thus syncing
# state with the live limits currently reported by the venue.
order: Order
for order in await client.get_open_orders():
status_msg = Status(
time_ns=time.time_ns(),
resp='open',
oid=order.oid,
reqid=order.oid,
# embedded order info
req=order,
src='binance',
)
dialogs.add_msg(order.oid, order.to_dict())
await ems_stream.send(status_msg)
tn.start_soon(
handle_order_requests,
ems_stream,
client,
dids,
dialogs,
)
tn.start_soon(
handle_order_updates,
venue_mode,
account_name,
client,
ems_stream,
wss,
dialogs,
)
await trio.sleep_forever()
async def handle_order_updates(
venue: str,
account_name: str,
client: Client,
ems_stream: tractor.MsgStream,
wss: NoBsWs,
dialogs: OrderDialogs,
) -> None:
'''
Main msg handling loop for all things order management.
This code is broken out to make the context explicit and state
variables defined in the signature clear to the reader.
'''
async for msg in wss:
log.info(f'Rx USERSTREAM msg:\n{pformat(msg)}')
match msg:
# ORDER update
# spot: https://binance-docs.github.io/apidocs/spot/en/#payload-balance-update
# futes: https://binance-docs.github.io/apidocs/futures/en/#event-order-update
# futes: https://binance-docs.github.io/apidocs/futures/en/#event-balance-and-position-update
# {'o': {
# 'L': '0',
# 'N': 'USDT',
# 'R': False,
# 'S': 'BUY',
# 'T': 1687028772484,
# 'X': 'NEW',
# 'a': '0',
# 'ap': '0',
# 'b': '7012.06520',
# 'c': '518d4122-8d3e-49b0-9a1e-1fabe6f62e4c',
# 'cp': False,
# 'f': 'GTC',
# 'i': 3376956924,
# 'l': '0',
# 'm': False,
# 'n': '0',
# 'o': 'LIMIT',
# 'ot': 'LIMIT',
# 'p': '21136.80',
# 'pP': False,
# 'ps': 'BOTH',
# 'q': '0.047',
# 'rp': '0',
# 's': 'BTCUSDT',
# 'si': 0,
# 'sp': '0',
# 'ss': 0,
# 't': 0,
# 'wt': 'CONTRACT_PRICE',
# 'x': 'NEW',
# 'z': '0'}
# }
case {
# 'e': 'executionReport',
'e': 'ORDER_TRADE_UPDATE',
'T': int(epoch_ms),
'o': {
's': bs_mktid,
# XXX NOTE XXX see special ids for market
# events or margin calls:
# // special client order id:
# // starts with "autoclose-": liquidation order
# // "adl_autoclose": ADL auto close order
# // "settlement_autoclose-": settlement order
# for delisting or delivery
'c': oid,
# 'i': reqid, # binance internal int id
# prices
'a': submit_price,
'ap': avg_price,
'L': fill_price,
# sizing
'q': req_size,
'l': clear_size_filled, # this event
'z': accum_size_filled, # accum
# commissions
'n': cost,
'N': cost_asset,
# state
'S': side,
'X': status,
},
} as order_msg:
log.info(
f'{status} for {side} ORDER oid: {oid}\n'
f'bs_mktid: {bs_mktid}\n\n'
f'order size: {req_size}\n'
f'cleared size: {clear_size_filled}\n'
f'accum filled size: {accum_size_filled}\n\n'
f'submit price: {submit_price}\n'
f'fill_price: {fill_price}\n'
f'avg clearing price: {avg_price}\n\n'
f'cost: {cost}@{cost_asset}\n'
)
# status remap from binance to piker's
# status set:
# - NEW
# - PARTIALLY_FILLED
# - FILLED
# - CANCELED
# - EXPIRED
# https://binance-docs.github.io/apidocs/futures/en/#event-order-update
req_size: float = float(req_size)
clear_size_filled: float = float(clear_size_filled)
accum_size_filled: float = float(accum_size_filled)
fill_price: float = float(fill_price)
match status:
case 'PARTIALLY_FILLED' | 'FILLED':
status = 'fill'
fill_msg = BrokerdFill(
time_ns=time_ns(),
# reqid=reqid,
reqid=oid,
# just use size value for now?
# action=action,
size=clear_size_filled,
price=fill_price,
# TODO: maybe capture more msg data
# i.e fees?
broker_details={'name': 'binance'} | order_msg,
broker_time=time.time(),
)
await ems_stream.send(fill_msg)
if accum_size_filled == req_size:
status = 'closed'
dialogs.pop(oid)
case 'NEW':
status = 'open'
case 'EXPIRED':
status = 'canceled'
dialogs.pop(oid)
case _:
status = status.lower()
resp = BrokerdStatus(
time_ns=time_ns(),
# reqid=reqid,
reqid=oid,
# TODO: i feel like we don't need to make the
# ems and upstream clients aware of this?
# account='binance.usdtm',
status=status,
filled=accum_size_filled,
remaining=req_size - accum_size_filled,
broker_details={
'name': 'binance',
'broker_time': epoch_ms / 1000.
}
)
await ems_stream.send(resp)
# ACCOUNT and POSITION update B)
# {
# 'E': 1687036749218,
# 'e': 'ACCOUNT_UPDATE'
# 'T': 1687036749215,
# 'a': {'B': [{'a': 'USDT',
# 'bc': '0',
# 'cw': '1267.48920735',
# 'wb': '1410.90245576'}],
# 'P': [{'cr': '-3292.10973007',
# 'ep': '26349.90000',
# 'iw': '143.41324841',
# 'ma': 'USDT',
# 'mt': 'isolated',
# 'pa': '0.038',
# 'ps': 'BOTH',
# 's': 'BTCUSDT',
# 'up': '5.17555453'}],
# 'm': 'ORDER'},
# }
case {
'T': int(epoch_ms),
'e': 'ACCOUNT_UPDATE',
'a': {
'P': [{
's': bs_mktid,
'pa': pos_amount,
'ep': entry_price,
}],
},
}:
# real-time relay position updates back to EMS
pair: Pair | None = client._venue2pairs[venue].get(bs_mktid)
if pair is None:
log.warning(
f'Position update for unknown bs_mktid `{bs_mktid}`?\n'
f'{pformat(msg)}'
)
continue
ppmsg = BrokerdPosition(
broker='binance',
account=f'binance.{account_name}',
# TODO: maybe we should be passing back
# a `MktPair` here?
symbol=pair.bs_fqme.lower() + '.binance',
size=float(pos_amount),
avg_price=float(entry_price),
)
await ems_stream.send(ppmsg)
case _:
log.warning(
'Unhandled event:\n'
f'{pformat(msg)}'
)

View File

@ -0,0 +1,557 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Real-time and historical data feed endpoints.
'''
from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
aclosing,
)
from datetime import datetime
from functools import (
partial,
)
import itertools
from pprint import pformat
from typing import (
Any,
AsyncGenerator,
Callable,
Generator,
)
import time
import trio
from trio_typing import TaskStatus
from pendulum import (
from_timestamp,
)
import numpy as np
import tractor
from piker.brokers import (
open_cached_client,
NoData,
)
from piker._cacheables import (
async_lifo_cache,
)
from piker.accounting import (
Asset,
DerivTypes,
MktPair,
unpack_fqme,
)
from piker.types import Struct
from piker.data.validate import FeedInit
from piker.data._web_bs import (
open_autorecon_ws,
NoBsWs,
)
from piker.brokers._util import (
DataUnavailable,
get_logger,
)
from .api import (
Client,
)
from .venues import (
Pair,
FutesPair,
get_api_eps,
)
log = get_logger('piker.brokers.binance')
class L1(Struct):
# https://binance-docs.github.io/apidocs/spot/en/#individual-symbol-book-ticker-streams
update_id: int
sym: str
bid: float
bsize: float
ask: float
asize: float
# validation type
class AggTrade(Struct, frozen=True):
e: str # Event type
E: int # Event time
s: str # Symbol
a: int # Aggregate trade ID
p: float # Price
q: float # Quantity
f: int # First trade ID
l: int # noqa Last trade ID
T: int # Trade time
m: bool # Is the buyer the market maker?
M: bool | None = None # Ignore
async def stream_messages(
ws: NoBsWs,
) -> AsyncGenerator[tuple[str, dict], None]:
# TODO: match syntax here!
msg: dict[str, Any]
async for msg in ws:
match msg:
# for l1 streams binance doesn't add an event type field so
# identify those messages by matching keys
# https://binance-docs.github.io/apidocs/spot/en/#individual-symbol-book-ticker-streams
case {
# NOTE: this is never an old value it seems, so
# they are always sending real L1 spread updates.
'u': upid, # update id
's': sym,
'b': bid,
'B': bsize,
'a': ask,
'A': asize,
}:
# TODO: it would be super nice to have a `L1` piker type
# which "renders" incremental tick updates from a packed
# msg-struct:
# - backend msgs after packed into the type such that we
# can reduce IPC usage but without each backend having
# to do that incremental update logic manually B)
# - would it maybe be more efficient to use this instead?
# https://binance-docs.github.io/apidocs/spot/en/#diff-depth-stream
l1 = L1(
update_id=upid,
sym=sym,
bid=bid,
bsize=bsize,
ask=ask,
asize=asize,
)
# for speed probably better to only specifically
# cast fields we need in numerical form?
# l1.typecast()
# repack into piker's tick-quote format
yield 'l1', {
'symbol': l1.sym,
'ticks': [
{
'type': 'bid',
'price': float(l1.bid),
'size': float(l1.bsize),
},
{
'type': 'bsize',
'price': float(l1.bid),
'size': float(l1.bsize),
},
{
'type': 'ask',
'price': float(l1.ask),
'size': float(l1.asize),
},
{
'type': 'asize',
'price': float(l1.ask),
'size': float(l1.asize),
}
]
}
# https://binance-docs.github.io/apidocs/spot/en/#aggregate-trade-streams
case {
'e': 'aggTrade',
}:
# NOTE: this is purely for a definition,
# ``msgspec.Struct`` does not runtime-validate until you
# decode/encode, see:
# https://jcristharif.com/msgspec/structs.html#type-validation
msg = AggTrade(**msg) # TODO: should we .copy() ?
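# NOTE: since validation only happens on decode/encode, the
# `AggTrade(**msg)` call above does NOT type-check the fields;
# if strict checking is ever wanted, a round-trip like
# `msgspec.json.decode(msgspec.json.encode(msg), type=AggTrade)`
# should do it (at the cost of an extra codec pass).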
piker_quote: dict = {
'symbol': msg.s,
'last': float(msg.p),
'brokerd_ts': time.time(),
'ticks': [{
'type': 'trade',
'price': float(msg.p),
'size': float(msg.q),
'broker_ts': msg.T,
}],
}
yield 'trade', piker_quote
def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str | list[str] | int]:
'''
Create a request subscription packet dict.
- spot:
https://binance-docs.github.io/apidocs/spot/en/#live-subscribing-unsubscribing-to-streams
- futes:
https://binance-docs.github.io/apidocs/futures/en/#websocket-market-streams
'''
return {
'method': 'SUBSCRIBE',
'params': [
f'{pair.lower()}@{sub_name}'
for pair in pairs
],
'id': uid
}
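# e.g. a (hypothetical) call `make_sub(['BTCUSDT'], 'bookTicker', 1)`
# renders the payload,
#
#   {
#       'method': 'SUBSCRIBE',
#       'params': ['btcusdt@bookTicker'],
#       'id': 1,
#   }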
# TODO, why aren't frame resp `log.info()`s showing in upstream
# code?!
@acm
async def open_history_client(
mkt: MktPair,
) -> tuple[Callable, int]:
# TODO implement history getter for the new storage layer.
async with open_cached_client('binance') as client:
async def get_ohlc(
timeframe: float,
end_dt: datetime | None = None,
start_dt: datetime | None = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
if timeframe != 60:
raise DataUnavailable('Only 1m bars are supported')
# TODO: better wrapping for venue / mode?
# - eventually logic for usd vs. coin settled futes
# based on `MktPair.src` type/value?
# - maybe something like `async with
# Client.use_venue('usdtm_futes')`
if mkt.type_key in DerivTypes:
client.mkt_mode = 'usdtm_futes'
else:
client.mkt_mode = 'spot'
array: np.ndarray = await client.bars(
mkt=mkt,
start_dt=start_dt,
end_dt=end_dt,
)
if array.size == 0:
raise NoData(
f'No frame for {start_dt} -> {end_dt}\n'
)
times = array['time']
if not times.any():
raise ValueError(
'Bad frame with null-times?\n\n'
f'{times}'
)
if end_dt is None:
inow: int = round(time.time())
if (inow - times[-1]) > 60:
await tractor.pause()
start_dt = from_timestamp(times[0])
end_dt = from_timestamp(times[-1])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3}
@async_lifo_cache()
async def get_mkt_info(
fqme: str,
) -> tuple[MktPair, Pair] | None:
# ensure a '.binance' suffix so the fqme unpacks with our broker key
if 'binance' not in fqme.lower():
fqme += '.binance'
mkt_mode: str = ''
broker, mkt_ep, venue, expiry = unpack_fqme(fqme)
# NOTE: we always upper case all tokens to be consistent with
# binance's symbology style for pairs, like `BTCUSDT`, but in
# theory we could also just keep things lower case; as long as
# we're consistent and the symcache matches whatever this func
# returns, always!
expiry: str = expiry.upper()
venue: str = venue.upper()
venue_lower: str = venue.lower()
# XXX TODO: we should change the usdtm_futes name to just
# usdm_futes (dropping the tether part) since it turns out that
# there are indeed USD-tokens OTHER THEN tether being used as
# the margin assets.. it's going to require a wholesale
# (variable/key) rename as well as file name adjustments to any
# existing tsdb set..
if 'usd' in venue_lower:
mkt_mode: str = 'usdtm_futes'
# NO IDEA what these contracts (some kinda DEX-ish futes?) are
# but we're masking them for now..
elif (
'defi' in venue_lower
# TODO: handle coinm futes which have a margin asset that
# is some crypto token!
# https://binance-docs.github.io/apidocs/delivery/en/#exchange-information
or 'btc' in venue_lower
):
return None
else:
# NOTE: see the `FutesPair.bs_fqme: str` implementation
# to understand the reverse market info lookup below.
mkt_mode = venue_lower or 'spot'
if (
venue
and 'spot' not in venue_lower
# XXX: catch all in case user doesn't know which
# venue they want (usdtm vs. coinm) and we can choose
# a default (via config?) once we support coin-m APIs.
or 'perp' in venue_lower
):
if not mkt_mode:
mkt_mode: str = f'{venue_lower}_futes'
async with open_cached_client(
'binance',
) as client:
assets: dict[str, Asset] = await client.get_assets()
pair_str: str = mkt_ep.upper()
# switch venue-mode depending on input pattern parsing
# since we want to use a particular endpoint (set) for
# pair info lookup!
client.mkt_mode = mkt_mode
pair: Pair = await client.exch_info(
pair_str,
venue=mkt_mode, # explicit
expiry=expiry,
)
if 'futes' in mkt_mode:
assert isinstance(pair, FutesPair)
dst: Asset | None = assets.get(pair.bs_dst_asset)
if (
not dst
# TODO: a known asset DNE list?
# and pair.baseAsset == 'DEFI'
):
log.warning(
f'UNKNOWN {venue} asset {pair.baseAsset} from,\n'
f'{pformat(pair.to_dict())}'
)
# XXX UNKNOWN missing "asset", though no idea why?
# maybe it's only avail in the margin venue(s): /dapi/ ?
return None
mkt = MktPair(
dst=dst,
src=assets[pair.bs_src_asset],
price_tick=pair.price_tick,
size_tick=pair.size_tick,
bs_mktid=pair.symbol,
expiry=expiry,
venue=venue,
broker='binance',
# NOTE: sectype is always taken from dst, see
# `MktPair.type_key` and `Client._cache_pairs()`
# _atype=sectype,
)
return mkt, pair
@acm
async def subscribe(
ws: NoBsWs,
symbols: list[str],
# defined once at import time to keep a global state B)
iter_subids: Generator[int, None, None] = itertools.count(),
):
# setup subs
subid: int = next(iter_subids)
# trade data (aka L1)
# https://binance-docs.github.io/apidocs/spot/en/#symbol-order-book-ticker
l1_sub = make_sub(symbols, 'bookTicker', subid)
await ws.send_msg(l1_sub)
# aggregate (each order clear by taker **not** by maker)
# trades data:
# https://binance-docs.github.io/apidocs/spot/en/#aggregate-trade-streams
agg_trades_sub = make_sub(symbols, 'aggTrade', subid)
await ws.send_msg(agg_trades_sub)
# might get ack from ws server, or maybe some
# other msg still in transit..
res = await ws.recv_msg()
ack_id: int | None = res.get('id')
if ack_id is not None:
assert ack_id == subid
yield
subs = []
for sym in symbols:
subs.append("{sym}@aggTrade")
subs.append("{sym}@bookTicker")
# unsub from all pairs on teardown
if ws.connected():
await ws.send_msg({
"method": "UNSUBSCRIBE",
"params": subs,
"id": subid,
})
# XXX: do we need to ack the unsub?
# await ws.recv_msg()
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str | None = None,
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
async with (
send_chan as send_chan,
open_cached_client('binance') as client,
):
init_msgs: list[FeedInit] = []
for sym in symbols:
mkt: MktPair
pair: Pair
mkt, pair = await get_mkt_info(sym)
# build out init msgs according to latest spec
init_msgs.append(
FeedInit(mkt_info=mkt)
)
wss_url: str = get_api_eps(client.mkt_mode)[1] # 2nd elem is wss url
# TODO: for sanity, but remove eventually Xp
if 'future' in mkt.type_key:
assert 'fstream' in wss_url
async with (
open_autorecon_ws(
url=wss_url,
fixture=partial(
subscribe,
symbols=[mkt.bs_mktid],
),
) as ws,
# avoid stream-gen closure from breaking trio..
aclosing(stream_messages(ws)) as msg_gen,
):
# log.info('WAITING ON FIRST LIVE QUOTE..')
typ, quote = await anext(msg_gen)
# pull a first quote and deliver
while typ != 'trade':
typ, quote = await anext(msg_gen)
task_status.started((init_msgs, quote))
# signal to caller feed is ready for consumption
feed_is_live.set()
# import time
# last = time.time()
# XXX NOTE: can't include the `.binance` suffix
# or the sampling loop will not broadcast correctly
# since `bus._subscribers.setdefault(bs_fqme, set())`
# is used inside `.data.open_feed_bus()` !!!
topic: str = mkt.bs_fqme
# start streaming
async for typ, quote in msg_gen:
# period = time.time() - last
# hz = 1/period if period else float('inf')
# if hz > 60:
# log.info(f'Binance quotez : {hz}')
await send_chan.send({topic: quote})
# last = time.time()
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> Client:
# NOTE: symbology tables are loaded as part of client
# startup in ``.api.get_client()`` and in this case
# are stored as `Client._pairs`.
async with open_cached_client('binance') as client:
# TODO: maybe we should deliver the cache
# so that client's can always do a local-lookup-first
# style try and then update async as (new) match results
# are delivered from here?
await ctx.started()
async with ctx.open_stream() as stream:
pattern: str
async for pattern in stream:
# NOTE: pattern fuzzy-matching is done within
# the method impl.
pairs: dict[str, Pair] = await client.search_symbols(
pattern,
)
# repack in fqme-keyed table
byfqme: dict[str, Pair] = {}
for pair in pairs.values():
byfqme[pair.bs_fqme] = pair
await stream.send(byfqme)

View File

@ -0,0 +1,303 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Per market data-type definitions and schemas types.
"""
from __future__ import annotations
from typing import (
Literal,
)
from decimal import Decimal
from msgspec import field
from piker.types import Struct
# API endpoint paths by venue / sub-API
_domain: str = 'binance.com'
_spot_url = f'https://api.{_domain}'
_futes_url = f'https://fapi.{_domain}'
# WEBsocketz
# NOTE XXX: see api docs which show diff addr?
# https://developers.binance.com/docs/binance-trading-api/websocket_api#general-api-information
_spot_ws: str = 'wss://stream.binance.com/ws'
# or this one? ..
# 'wss://ws-api.binance.com:443/ws-api/v3',
# https://binance-docs.github.io/apidocs/futures/en/#websocket-market-streams
_futes_ws: str = f'wss://fstream.{_domain}/ws'
_auth_futes_ws: str = f'wss://fstream-auth.{_domain}/ws'
# test nets
# NOTE: spot test network only allows certain ep sets:
# https://testnet.binance.vision/
# https://www.binance.com/en/support/faq/how-to-test-my-functions-on-binance-testnet-ab78f9a1b8824cf0a106b4229c76496d
_testnet_spot_url: str = 'https://testnet.binance.vision/api'
_testnet_spot_ws: str = 'wss://testnet.binance.vision/ws'
# or this one? ..
# 'wss://testnet.binance.vision/ws-api/v3'
_testnet_futes_url: str = 'https://testnet.binancefuture.com'
_testnet_futes_ws: str = 'wss://stream.binancefuture.com/ws'
MarketType = Literal[
'spot',
# 'margin',
'usdtm_futes',
# 'coinm_futes',
]
def get_api_eps(venue: MarketType) -> tuple[str, str]:
'''
Return API ep root paths per venue.
'''
return {
'spot': (
_spot_url,
_spot_ws,
),
'usdtm_futes': (
_futes_url,
_futes_ws,
),
}[venue]
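# e.g. callers unpack the 2-tuple as,
#
#   http_url, ws_url = get_api_eps('usdtm_futes')
#   # -> ('https://fapi.binance.com', 'wss://fstream.binance.com/ws')
#
# per the module constants defined above.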
class Pair(Struct, frozen=True, kw_only=True):
symbol: str
status: str
orderTypes: list[str]
# src
quoteAsset: str
quotePrecision: int
# dst
baseAsset: str
baseAssetPrecision: int
filters: dict[
str,
str | int | float,
] = field(default_factory=dict)
@property
def price_tick(self) -> Decimal:
# XXX: lul, after manually inspecting the response format we
# just directly pick out the info we need
step_size: str = self.filters['PRICE_FILTER']['tickSize'].rstrip('0')
return Decimal(step_size)
@property
def size_tick(self) -> Decimal:
step_size: str = self.filters['LOT_SIZE']['stepSize'].rstrip('0')
return Decimal(step_size)
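# e.g. a raw filter entry `{'LOT_SIZE': {'stepSize': '0.00100000'}}`
# trims to '0.001' and thus `size_tick == Decimal('0.001')`; the same
# zero-stripping applies to `PRICE_FILTER.tickSize` for `price_tick`.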
@property
def bs_fqme(self) -> str:
return self.symbol
@property
def bs_mktid(self) -> str:
return f'{self.symbol}.{self.venue}'
class SpotPair(Pair, frozen=True):
cancelReplaceAllowed: bool
allowTrailingStop: bool
quoteAssetPrecision: int
baseCommissionPrecision: int
quoteCommissionPrecision: int
icebergAllowed: bool
ocoAllowed: bool
quoteOrderQtyMarketAllowed: bool
isSpotTradingAllowed: bool
isMarginTradingAllowed: bool
otoAllowed: bool
defaultSelfTradePreventionMode: str
allowedSelfTradePreventionModes: list[str]
permissions: list[str]
permissionSets: list[list[str]]
# NOTE: see `.data._symcache.SymbologyCache.load()` for why
ns_path: str = 'piker.brokers.binance:SpotPair'
@property
def venue(self) -> str:
return 'SPOT'
@property
def bs_fqme(self) -> str:
return f'{self.symbol}.SPOT'
@property
def bs_src_asset(self) -> str:
return f'{self.quoteAsset}'
@property
def bs_dst_asset(self) -> str:
return f'{self.baseAsset}'
class FutesPair(Pair):
symbol: str # 'BTCUSDT',
pair: str # 'BTCUSDT',
baseAssetPrecision: int # 8,
contractType: str # 'PERPETUAL',
deliveryDate: int # 4133404800000,
liquidationFee: float # '0.012500',
maintMarginPercent: float # '2.5000',
marginAsset: str # 'USDT',
marketTakeBound: float # '0.05',
maxMoveOrderLimit: int # 10000,
onboardDate: int # 1569398400000,
pricePrecision: int # 2,
quantityPrecision: int # 3,
quoteAsset: str # 'USDT',
quotePrecision: int # 8,
requiredMarginPercent: float # '5.0000',
timeInForce: list[str] # ['GTC', 'IOC', 'FOK', 'GTX'],
triggerProtect: float # '0.0500',
underlyingSubType: list[str] # ['PoW'],
underlyingType: str # 'COIN'
# NOTE: see `.data._symcache.SymbologyCache.load()` for why
ns_path: str = 'piker.brokers.binance:FutesPair'
# NOTE: for compat with spot pairs and `MktPair.src: Asset`
# processing..
@property
def quoteAssetPrecision(self) -> int:
return self.quotePrecision
@property
def expiry(self) -> str:
symbol: str = self.symbol
contype: str = self.contractType
match contype:
case (
'CURRENT_QUARTER'
| 'CURRENT_QUARTER DELIVERING'
| 'NEXT_QUARTER' # su madre binance..
):
pair, _, expiry = symbol.partition('_')
assert pair == self.pair # sanity
return f'{expiry}'
case 'PERPETUAL':
return 'PERP'
case '':
subtype: list[str] = self.underlyingSubType
if not subtype:
if self.status == 'PENDING_TRADING':
return 'PENDING'
match subtype:
case ['DEFI']:
return 'PERP'
# wow, just wow you binance guys suck..
if self.status == 'PENDING_TRADING':
return 'PENDING'
# XXX: yeah no clue then..
raise ValueError(
f'Bad .expiry token match: {contype} for {symbol}'
)
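# e.g. a quarterly like 'BTCUSDT_230630' (contractType
# 'CURRENT_QUARTER') yields `.expiry == '230630'` while any
# 'PERPETUAL' contract yields 'PERP'.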
@property
def venue(self) -> str:
symbol: str = self.symbol
ctype: str = self.contractType
margin: str = self.marginAsset
match ctype:
case 'PERPETUAL':
return f'{margin}M'
case (
'CURRENT_QUARTER'
| 'CURRENT_QUARTER DELIVERING'
| 'NEXT_QUARTER' # su madre binance..
):
_, _, expiry = symbol.partition('_')
return f'{margin}M'
case '':
subtype: list[str] = self.underlyingSubType
if not subtype:
if self.status == 'PENDING_TRADING':
return f'{margin}M'
match subtype:
case (
['DEFI']
| ['USDC']
):
return f'{subtype[0]}'
# XXX: yeah no clue then..
raise ValueError(
f'Bad .venue token match: {ctype}'
)
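# e.g. a USDT-margined perp (marginAsset 'USDT', contractType
# 'PERPETUAL') maps to `.venue == 'USDTM'`.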
@property
def bs_fqme(self) -> str:
symbol: str = self.symbol
ctype: str = self.contractType
venue: str = self.venue
pair: str = self.pair
match ctype:
case (
'CURRENT_QUARTER'
| 'NEXT_QUARTER' # su madre binance..
):
pair, _, expiry = symbol.partition('_')
assert pair == self.pair
return f'{pair}.{venue}.{self.expiry}'
@property
def bs_src_asset(self) -> str:
return f'{self.quoteAsset}'
@property
def bs_dst_asset(self) -> str:
return f'{self.baseAsset}.{self.venue}'
PAIRTYPES: dict[MarketType, type[Pair]] = {
'spot': SpotPair,
'usdtm_futes': FutesPair,
# TODO: support coin-margined venue:
# https://binance-docs.github.io/apidocs/delivery/en/#change-log
# 'coinm_futes': CoinFutesPair,
}

View File

@ -21,6 +21,7 @@ import os
from functools import partial
from operator import attrgetter
from operator import itemgetter
from types import ModuleType
import click
import trio
@ -194,7 +195,7 @@ def brokercheck(config, broker):
@cli.command()
@click.option('--keys', '-k', multiple=True,
help='Return results only for these keys')
help='Return results only for these keys')
@click.argument('meth', nargs=1)
@click.argument('kwargs', nargs=-1)
@click.pass_obj
@ -241,7 +242,7 @@ def quote(config, tickers):
'''
# global opts
brokermod = config['brokermods'][0]
brokermod = list(config['brokermods'].values())[0]
quotes = trio.run(partial(core.stocks_quote, brokermod, tickers))
if not quotes:
@ -268,7 +269,7 @@ def bars(config, symbol, count):
'''
# global opts
brokermod = config['brokermods'][0]
brokermod = list(config['brokermods'].values())[0]
# broker backend should return at the least a
# list of candle dictionaries
@ -303,7 +304,7 @@ def record(config, rate, name, dhost, filename):
'''
# global opts
brokermod = config['brokermods'][0]
brokermod = list(config['brokermods'].values())[0]
loglevel = config['loglevel']
log = config['log']
@ -368,7 +369,7 @@ def optsquote(config, symbol, date):
'''
# global opts
brokermod = config['brokermods'][0]
brokermod = list(config['brokermods'].values())[0]
quotes = trio.run(
partial(
@ -385,58 +386,151 @@ def optsquote(config, symbol, date):
@cli.command()
@click.argument('tickers', nargs=-1, required=True)
@click.pass_obj
def symbol_info(config, tickers):
def mkt_info(
config: dict,
tickers: list[str],
):
'''
Print symbol quotes to the console
'''
# global opts
brokermod = config['brokermods'][0]
from msgspec.json import encode, decode
from ..accounting import MktPair
from ..service import (
open_piker_runtime,
)
quotes = trio.run(partial(core.symbol_info, brokermod, tickers))
if not quotes:
log.error(f"No quotes could be found for {tickers}?")
# global opts
brokermods: dict[str, ModuleType] = config['brokermods']
mkts: list[MktPair] = []
async def main():
async with open_piker_runtime(
name='mkt_info_query',
# loglevel=loglevel,
debug_mode=True,
) as (_, _):
for fqme in tickers:
bs_fqme, _, broker = fqme.rpartition('.')
brokermod: ModuleType = brokermods[broker]
mkt, bs_pair = await core.mkt_info(
brokermod,
bs_fqme,
)
mkts.append((mkt, bs_pair))
trio.run(main)
if not mkts:
log.error(
f'No market info could be found for {tickers}'
)
return
if len(quotes) < len(tickers):
syms = tuple(map(itemgetter('symbol'), quotes))
if len(mkts) < len(tickers):
syms = tuple(map(itemgetter('fqme'), mkts))
for ticker in tickers:
if ticker not in syms:
brokermod.log.warn(f"Could not find symbol {ticker}?")
log.warn(f"Could not find symbol {ticker}?")
click.echo(colorize_json(quotes))
# TODO: use ``rich.Table`` instead here!
for mkt, bs_pair in mkts:
click.echo(
'\n'
'----------------------------------------------------\n'
f'{type(bs_pair)}\n'
'----------------------------------------------------\n'
f'{colorize_json(bs_pair.to_dict())}\n'
'----------------------------------------------------\n'
f'as piker `MktPair` with fqme: {mkt.fqme}\n'
'----------------------------------------------------\n'
# NOTE: roundtrip to json codec for console print
f'{colorize_json(decode(encode(mkt)))}'
)
@cli.command()
@click.argument('pattern', required=True)
# TODO: move this to top level click/typer context for all subs
@click.option(
'--pdb',
is_flag=True,
help='Enable tractor debug mode',
)
@click.pass_obj
def search(config, pattern):
def search(
config: dict,
pattern: str,
pdb: bool,
):
'''
Search for symbols from broker backend(s).
'''
# global opts
brokermods = config['brokermods']
brokermods = list(config['brokermods'].values())
# define tractor entrypoint
async def main(func):
async with maybe_open_pikerd(
loglevel=config['loglevel'],
debug_mode=pdb,
):
return await func()
quotes = trio.run(
main,
partial(
core.symbol_search,
brokermods,
pattern,
),
)
from piker.toolz import open_crash_handler
with open_crash_handler():
quotes = trio.run(
main,
partial(
core.symbol_search,
brokermods,
pattern,
),
)
if not quotes:
log.error(f"No matches could be found for {pattern}?")
return
if not quotes:
log.error(f"No matches could be found for {pattern}?")
return
click.echo(colorize_json(quotes))
click.echo(colorize_json(quotes))
@cli.command()
@click.argument('section', required=False)
@click.argument('value', required=False)
@click.option('--delete', '-d', flag_value=True, help='Delete section')
@click.pass_obj
def brokercfg(config, section, value, delete):
'''
If invoked with no arguments, open an editor to edit broker
configs file or get / update an individual section.
'''
from .. import config
if section:
conf, path = config.load()
if not delete:
if value:
config.set_value(conf, section, value)
click.echo(
colorize_json(
config.get_value(conf, section))
)
else:
config.del_value(conf, section)
config.write(config=conf)
else:
conf, path = config.load(raw=True)
config.write(
raw=click.edit(text=conf)
)

View File

@ -29,7 +29,8 @@ import trio
from ._util import log
from . import get_brokermod
from ..service import maybe_spawn_brokerd
from .._cacheables import open_cached_client
from . import open_cached_client
from ..accounting import MktPair
async def api(brokername: str, methname: str, **kwargs) -> dict:
@ -94,15 +95,15 @@ async def option_chain(
return await client.option_chains(contracts)
async def contracts(
brokermod: ModuleType,
symbol: str,
) -> Dict[str, Dict[str, Dict[str, Any]]]:
"""Return option contracts (all expiries) for ``symbol``.
"""
async with brokermod.get_client() as client:
# return await client.get_all_contracts([symbol])
return await client.get_all_contracts([symbol])
# async def contracts(
# brokermod: ModuleType,
# symbol: str,
# ) -> Dict[str, Dict[str, Dict[str, Any]]]:
# """Return option contracts (all expiries) for ``symbol``.
# """
# async with brokermod.get_client() as client:
# # return await client.get_all_contracts([symbol])
# return await client.get_all_contracts([symbol])
async def bars(
@ -116,17 +117,6 @@ async def bars(
return await client.bars(symbol, **kwargs)
async def symbol_info(
brokermod: ModuleType,
symbol: str,
**kwargs,
) -> Dict[str, Dict[str, Dict[str, Any]]]:
"""Return symbol info from broker.
"""
async with brokermod.get_client() as client:
return await client.symbol_info(symbol, **kwargs)
async def search_w_brokerd(name: str, pattern: str) -> dict:
async with open_cached_client(name) as client:
@ -155,7 +145,11 @@ async def symbol_search(
async with maybe_spawn_brokerd(
mod.name,
infect_asyncio=getattr(mod, '_infect_asyncio', False),
infect_asyncio=getattr(
mod,
'_infect_asyncio',
False,
),
) as portal:
results.append((
@ -173,3 +167,20 @@ async def symbol_search(
n.start_soon(search_backend, mod.name)
return results
async def mkt_info(
brokermod: ModuleType,
fqme: str,
**kwargs,
) -> MktPair:
'''
Return MktPair info from broker including src and dst assets.
'''
async with open_cached_client(brokermod.name) as client:
assert client
return await brokermod.get_mkt_info(
fqme.replace(brokermod.name, '')
)

View File

@ -21,8 +21,6 @@ Deribit backend.
from piker.log import get_logger
log = get_logger(__name__)
from .api import (
get_client,
)
@ -30,13 +28,15 @@ from .feed import (
open_history_client,
open_symbol_search,
stream_quotes,
backfill_bars
# backfill_bars,
)
# from .broker import (
# trades_dialogue,
# open_trade_dialog,
# norm_trade_records,
# )
log = get_logger(__name__)
__all__ = [
'get_client',
# 'trades_dialogue',

View File

@ -18,43 +18,33 @@
Deribit backend.
'''
import json
import time
import asyncio
from contextlib import asynccontextmanager as acm, AsyncExitStack
from functools import partial
from contextlib import (
asynccontextmanager as acm,
)
from datetime import datetime
from typing import Any, Optional, Iterable, Callable
import pendulum
import asks
import trio
from trio_typing import Nursery, TaskStatus
from fuzzywuzzy import process as fuzzy
import numpy as np
from piker.data.types import Struct
from piker.data._web_bs import (
NoBsWs,
open_autorecon_ws,
open_jsonrpc_session
from functools import partial
import time
from typing import (
Any,
Optional,
Callable,
)
from .._util import resproc
from piker import config
from piker.log import get_logger
import pendulum
import trio
from trio_typing import TaskStatus
from rapidfuzz import process as fuzzy
import numpy as np
from tractor.trionics import (
broadcast_receiver,
BroadcastReceiver,
maybe_open_context
)
from tractor import to_asyncio
# XXX WOOPS XD
# yeah you'll need to install it since it was removed in #489 by
# accident; well i thought we had removed all usage..
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT,
L1_BOOK, TRADES,
@ -62,6 +52,20 @@ from cryptofeed.defines import (
)
from cryptofeed.symbols import Symbol
from piker.data import (
def_iohlcv_fields,
match_from_pairs,
Struct,
)
from piker.data._web_bs import (
open_jsonrpc_session
)
from piker import config
from piker.log import get_logger
log = get_logger(__name__)
@ -75,26 +79,13 @@ _ws_url = 'wss://www.deribit.com/ws/api/v2'
_testnet_ws_url = 'wss://test.deribit.com/ws/api/v2'
# Broker specific ohlc schema (rest)
_ohlc_dtype = [
('index', int),
('time', int),
('open', float),
('high', float),
('low', float),
('close', float),
('volume', float),
('bar_wap', float), # will be zeroed by sampler if not filled
]
class JSONRPCResult(Struct):
jsonrpc: str = '2.0'
id: int
result: Optional[dict] = None
result: Optional[list[dict]] = None
error: Optional[dict] = None
usIn: int
usOut: int
usIn: int
usOut: int
usDiff: int
testnet: bool
@ -301,24 +292,29 @@ class Client:
currency: str = 'btc', # BTC, ETH, SOL, USDC
kind: str = 'option',
expired: bool = False
) -> dict[str, Any]:
"""Get symbol info for the exchange.
"""
) -> dict[str, dict]:
'''
Get symbol infos.
'''
if self._pairs:
return self._pairs
# will retrieve all symbols by default
params = {
params: dict[str, str] = {
'currency': currency.upper(),
'kind': kind,
'expired': str(expired).lower()
}
resp = await self.json_rpc('public/get_instruments', params)
results = resp.result
instruments = {
resp: JSONRPCResult = await self.json_rpc(
'public/get_instruments',
params,
)
# convert to symbol-keyed table
results: list[dict] | None = resp.result
instruments: dict[str, dict] = {
item['instrument_name'].lower(): item
for item in results
}
@ -331,6 +327,7 @@ class Client:
async def cache_symbols(
self,
) -> dict:
if not self._pairs:
self._pairs = await self.symbol_info()
@ -341,17 +338,23 @@ class Client:
pattern: str,
limit: int = 30,
) -> dict[str, Any]:
data = await self.symbol_info()
'''
Fuzzy search symbology set for pairs matching `pattern`.
matches = fuzzy.extractBests(
pattern,
data,
'''
pairs: dict[str, Any] = await self.symbol_info()
matches: dict[str, Pair] = match_from_pairs(
pairs=pairs,
query=pattern.upper(),
score_cutoff=35,
limit=limit
)
# repack in dict form
return {item[0]['instrument_name'].lower(): item[0]
for item in matches}
# repack in name-keyed table
return {
pair['instrument_name'].lower(): pair
for pair in matches.values()
}
async def bars(
self,
@ -405,7 +408,7 @@ class Client:
new_bars.append((i,) + tuple(row))
array = np.array(new_bars, dtype=_ohlc_dtype) if as_np else klines
array = np.array(new_bars, dtype=def_iohlcv_fields) if as_np else klines
return array
async def last_trades(

View File

@ -26,11 +26,11 @@ import time
import trio
from trio_typing import TaskStatus
import pendulum
from fuzzywuzzy import process as fuzzy
from rapidfuzz import process as fuzzy
import numpy as np
import tractor
from piker._cacheables import open_cached_client
from piker.brokers import open_cached_client
from piker.log import get_logger, get_console_log
from piker.data import ShmArray
from piker.brokers._util import (
@ -39,7 +39,6 @@ from piker.brokers._util import (
)
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT, L1_BOOK, TRADES, OPTION, CALL, PUT
)
@ -62,9 +61,10 @@ log = get_logger(__name__)
@acm
async def open_history_client(
instrument: str,
mkt: MktPair,
) -> tuple[Callable, int]:
instrument: str = mkt.bs_fqme
# TODO implement history getter for the new storage layer.
async with open_cached_client('deribit') as client:

View File

@ -30,19 +30,33 @@ from .api import (
)
from .feed import (
open_history_client,
open_symbol_search,
stream_quotes,
)
from .broker import (
trades_dialogue,
open_trade_dialog,
)
from .ledger import (
norm_trade,
norm_trade_records,
tx_sort,
)
from .symbols import (
get_mkt_info,
open_symbol_search,
_search_conf,
)
__all__ = [
'get_client',
'trades_dialogue',
'get_mkt_info',
'norm_trade',
'norm_trade_records',
'open_trade_dialog',
'open_history_client',
'open_symbol_search',
'stream_quotes',
'_search_conf',
'tx_sort',
]
_brokerd_mods: list[str] = [
@ -52,6 +66,7 @@ _brokerd_mods: list[str] = [
_datad_mods: list[str] = [
'feed',
'symbols',
]
@ -71,3 +86,8 @@ _spawn_kwargs = {
# know if ``brokerd`` should be spawned with
# ``tractor``'s aio mode.
_infect_asyncio: bool = True
# XXX NOTE: for now we disable symcache with this backend since
# there is no clearly simple nor practical way to download "all
# symbology info" for all supported venues..
_no_symcache: bool = True

View File

@ -35,6 +35,10 @@ from piker.accounting import (
def parse_flex_dt(
record: str,
) -> pendulum.datetime:
'''
Parse stupid flex record datetime stamps for the `dateTime` field..
'''
date, ts = record.split(';')
dt = pendulum.parse(date)
ts = f'{ts[:2]}:{ts[2:4]}:{ts[4:]}'
@ -155,7 +159,11 @@ def load_flex_trades(
for acctid in trades_by_account:
trades_by_id = trades_by_account[acctid]
with open_trade_ledger('ib', acctid) as ledger_dict:
with open_trade_ledger(
'ib',
acctid,
allow_from_sync_code=True,
) as ledger_dict:
tid_delta = set(trades_by_id) - set(ledger_dict)
log.info(
'New trades detected\n'

View File

@ -19,13 +19,23 @@
runnable script-programs.
'''
from typing import Literal
from __future__ import annotations
from functools import partial
from typing import (
Literal,
TYPE_CHECKING,
)
import subprocess
import tractor
from .._util import log
from piker.brokers._util import get_logger
if TYPE_CHECKING:
from .api import Client
from ib_insync import IB
log = get_logger('piker.brokers.ib')
_reset_tech: Literal[
'vnc',
@ -39,7 +49,9 @@ _reset_tech: Literal[
async def data_reset_hack(
reset_type: str = 'data',
# vnc_host: str,
client: Client,
reset_type: Literal['data', 'connection'],
) -> None:
'''
@ -69,18 +81,61 @@ async def data_reset_hack(
that need to be wrangle.
'''
ib_client: IB = client.ib
# look up any user defined vnc socket address mapped from
# a particular API socket port.
api_port: str = str(ib_client.client.port)
vnc_host: str
vnc_port: int
vnc_sockaddr: dict[str, tuple[str, int]] | None = client.conf.get('vnc_addrs')
no_setup_msg: str = (
f'No data reset hack test setup for {vnc_sockaddr}!\n'
'See config setup tips @\n'
'https://github.com/pikers/piker/tree/master/piker/brokers/ib'
)
if not vnc_sockaddr:
log.warning(
no_setup_msg
+
'REQUIRES A `vnc_addrs: array` ENTRY'
)
vnc_host, vnc_port = (vnc_sockaddr or {}).get(
api_port,
('localhost', 3003)
)
global _reset_tech
match _reset_tech:
case 'vnc':
try:
await tractor.to_asyncio.run_task(vnc_click_hack)
await tractor.to_asyncio.run_task(
partial(
vnc_click_hack,
host=vnc_host,
port=vnc_port,
)
)
except OSError:
_reset_tech = 'i3ipc_xdotool'
if vnc_host != 'localhost':
log.warning(no_setup_msg)
return False
try:
import i3ipc # noqa (since a deps dynamic check)
except ModuleNotFoundError:
log.warning(no_setup_msg)
return False
try:
i3ipc_xdotool_manual_click_hack()
_reset_tech = 'i3ipc_xdotool'
return True
except OSError:
log.exception(no_setup_msg)
return False
case 'i3ipc_xdotool':
@ -94,21 +149,39 @@ async def data_reset_hack(
async def vnc_click_hack(
host: str,
port: int,
reset_type: str = 'data'
) -> None:
'''
Reset the data or netowork connection for the VNC attached
Reset the data or network connection for the VNC attached
ib gateway using magic combos.
'''
key = {'data': 'f', 'connection': 'r'}[reset_type]
try:
import asyncvnc
except ModuleNotFoundError:
log.warning(
"In order to leverage `piker`'s built-in data reset hacks, install "
"the `asyncvnc` project: https://github.com/barneygale/asyncvnc"
)
return
import asyncvnc
# two different hot keys which trigger diff types of reset
# requests B)
key = {
'data': 'f',
'connection': 'r'
}[reset_type]
async with asyncvnc.connect(
'localhost',
port=3003,
host,
port=port,
# TODO: doesn't work see:
# https://github.com/barneygale/asyncvnc/issues/7
# password='ibcansmbz',
) as client:
# move to middle of screen
@ -122,9 +195,16 @@ async def vnc_click_hack(
def i3ipc_xdotool_manual_click_hack() -> None:
import i3ipc
'''
Do the data reset hack but expecting a local X-window using `xdotool`.
'''
import i3ipc
i3 = i3ipc.Connection()
# TODO: might be worth offering some kinda api for grabbing
# the window id from the pid?
# https://stackoverflow.com/a/2250879
t = i3.get_tree()
orig_win_id = t.find_focused().window
@ -179,7 +259,7 @@ def i3ipc_xdotool_manual_click_hack() -> None:
timeout=timeout,
)
# re-activate and focus original window
# re-activate and focus original window
subprocess.call([
'xdotool',
'windowactivate', '--sync', str(orig_win_id),

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,529 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Trade transaction accounting and normalization.
'''
from __future__ import annotations
from bisect import insort
from dataclasses import asdict
from decimal import Decimal
from functools import partial
from pprint import pformat
from typing import (
Any,
Callable,
TYPE_CHECKING,
)
from bidict import bidict
from pendulum import (
DateTime,
parse,
from_timestamp,
)
from ib_insync import (
Contract,
Commodity,
Fill,
Execution,
CommissionReport,
)
from piker.types import Struct
from piker.data import (
SymbologyCache,
)
from piker.accounting import (
Asset,
dec_digits,
digits_to_dec,
Transaction,
MktPair,
iter_by_dt,
)
from ._flex_reports import parse_flex_dt
from ._util import log
if TYPE_CHECKING:
from .api import (
Client,
MethodProxy,
)
tx_sort: Callable = partial(
iter_by_dt,
parsers={
'dateTime': parse_flex_dt,
'datetime': parse,
# XXX: for some fucking 2022-and-back
# options records.. f@#$ me..
'date': parse,
}
)
def norm_trade(
tid: str,
record: dict[str, Any],
# this is the dict that was returned from
# `Client.get_mkt_pairs()` and when running offline ledger
# processing from `.accounting`, this will be the table loaded
# into `SymbologyCache.pairs`.
pairs: dict[str, Struct],
symcache: SymbologyCache | None = None,
) -> Transaction | None:
conid: str = str(record.get('conId') or record['conid'])
bs_mktid: str = str(conid)
# NOTE: sometimes weird records (like BTTX?)
# have no field for this?
comms: float = -1 * (
record.get('commission')
or record.get('ibCommission')
or 0
)
if not comms:
log.warning(
'No commissions found for record?\n'
f'{pformat(record)}\n'
)
price: float = (
record.get('price')
or record.get('tradePrice')
)
if price is None:
log.warning(
'No `price` field found in record?\n'
'Skipping normalization..\n'
f'{pformat(record)}\n'
)
return None
# the api doesn't do the -/+ on the quantity for you but flex
# records do.. are you fucking serious ib...!?
size: float|int = (
record.get('quantity')
or record['shares']
) * {
'BOT': 1,
'SLD': -1,
}[record['side']]
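# e.g. an API record with `quantity=5, side='SLD'` normalizes to
# `size == -5`, matching the signing flex records already carry.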
symbol: str = record['symbol']
exch: str = (
record.get('listingExchange')
or record.get('primaryExchange')
or record['exchange']
)
# NOTE: remove null values since `tomlkit` can't serialize
# them to file.
if dnc := record.pop('deltaNeutralContract', None):
record['deltaNeutralContract'] = dnc
# likely an opts contract record from a flex report..
# TODO: no idea how to parse ^ the strike part from flex..
# (00010000 any, or 00007500 tsla, ..)
# we probably must do the contract lookup for this?
if (
' ' in symbol
or '--' in exch
):
underlying, _, tail = symbol.partition(' ')
exch: str = 'opt'
expiry: str = tail[:6]
# otype = tail[6]
# strike = tail[7:]
log.warning(
f'Skipping option contract -> NO SUPPORT YET!\n'
f'{symbol}\n'
)
return None
# timestamping is way different in API records
dtstr: str = record.get('datetime')
date: str = record.get('date')
flex_dtstr: str = record.get('dateTime')
if dtstr or date:
dt: DateTime = parse(dtstr or date)
elif flex_dtstr:
# probably a flex record with a wonky non-std timestamp..
dt: DateTime = parse_flex_dt(record['dateTime'])
# special handling of symbol extraction from
# flex records using some ad-hoc schema parsing.
asset_type: str = (
record.get('assetCategory')
or record.get('secType')
or 'STK'
)
if (expiry := (
record.get('lastTradeDateOrContractMonth')
or record.get('expiry')
)
):
expiry: str = str(expiry).strip(' ')
# NOTE: we directly use the (simple and usually short)
# date-string expiry token when packing the `MktPair`
# since we want the fqme to contain *that* token.
# It might make sense later to instead parse and then
# render different output str format(s) for this same
# purpose depending on asset-type-market down the road.
# Eg. for derivs we use the short token only for fqme
# but use the isoformat('T') for transactions and
# account file position entries?
# dt_str: str = pendulum.parse(expiry).isoformat('T')
# XXX: pretty much all legacy market assets have a fiat
# currency (denomination) determined by their venue.
currency: str = record['currency']
src = Asset(
name=currency.lower(),
atype='fiat',
tx_tick=Decimal('0.01'),
)
match asset_type:
case 'FUT':
# XXX (flex) ledger entries don't necessarily have any
# simple 3-char key.. sometimes the .symbol is some
# weird internal key that we probably don't want in the
# .fqme => we should probably just wrap `Contract` to
# this like we do other crypto$ backends XD
# NOTE: at least older FLEX records should have
# this field.. no idea about API entries..
local_symbol: str | None = record.get('localSymbol')
underlying_key: str = record.get('underlyingSymbol')
descr: str | None = record.get('description')
if (
not (
local_symbol
and symbol in local_symbol
)
and (
descr
and symbol not in descr
)
):
con_key, exp_str = descr.split(' ')
symbol: str = underlying_key or con_key
dst = Asset(
name=symbol.lower(),
atype='future',
tx_tick=Decimal('1'),
)
case 'STK':
dst = Asset(
name=symbol.lower(),
atype='stock',
tx_tick=Decimal('1'),
)
case 'CASH':
if currency not in symbol:
# likely a dict-casted `Forex` contract which
# has .symbol as the dst and .currency as the
# src.
name: str = symbol.lower()
else:
# likely a flex-report record which puts
# EUR.USD as the symbol field and just USD in
# the currency field.
name: str = symbol.lower().replace(f'.{src.name}', '')
dst = Asset(
name=name,
atype='fiat',
tx_tick=Decimal('0.01'),
)
case 'OPT':
dst = Asset(
name=symbol.lower(),
atype='option',
tx_tick=Decimal('1'),
# TODO: we should probably always cast to the
# `Contract` instance then dict-serialize that for
# the `.info` field!
# info=asdict(Option()),
)
case 'CMDTY':
from .symbols import _adhoc_symbol_map
con_kwargs, _ = _adhoc_symbol_map[symbol.upper()]
dst = Asset(
name=symbol.lower(),
atype='commodity',
tx_tick=Decimal('1'),
info=asdict(Commodity(**con_kwargs)),
)
# try to build out piker fqme from record.
# src: str = record['currency']
price_tick: Decimal = digits_to_dec(dec_digits(price))
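# e.g. (assuming `dec_digits()` counts the decimal places of `price`
# and `digits_to_dec()` converts that count to a tick) a fill at
# 123.45 gives a 2-digit count and thus `price_tick == Decimal('0.01')`.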
# NOTE: can't serlialize `tomlkit.String` so cast to native
atype: str = str(dst.atype)
# if not (mkt := symcache.mktmaps.get(bs_mktid)):
mkt = MktPair(
bs_mktid=bs_mktid,
dst=dst,
price_tick=price_tick,
# NOTE: for "legacy" assets, volume is normally discreet, not
# a float, but we keep a digit in case the suitz decide
# to get crazy and change it; we'll be kinda ready
# schema-wise..
size_tick=Decimal('1'),
src=src, # XXX: normally always a fiat
_atype=atype,
venue=exch,
expiry=expiry,
broker='ib',
_fqme_without_src=(atype != 'fiat'),
)
fqme: str = mkt.fqme
# XXX: if passed in, we fill out the symcache ad-hoc in order
# to make downstream accounting work..
if symcache is not None:
orig_mkt: MktPair | None = symcache.mktmaps.get(bs_mktid)
if (
orig_mkt
and orig_mkt.fqme != mkt.fqme
):
log.warning(
# print(
f'Contracts with common `conId`: {bs_mktid} mismatch..\n'
f'{orig_mkt.fqme} -> {mkt.fqme}\n'
# 'with DIFF:\n'
# f'{mkt - orig_mkt}'
)
symcache.mktmaps[bs_mktid] = mkt
symcache.mktmaps[fqme] = mkt
symcache.assets[src.name] = src
symcache.assets[dst.name] = dst
# NOTE: for flex records the normal fields for defining an fqme
# sometimes won't be available so we rely on two approaches for
# the "reverse lookup" of piker style fqme keys:
# - when dealing with API trade records received from
# `IB.trades()` we do a contract lookup at he time of processing
# - when dealing with flex records, it is assumed the record
# is at least a day old and thus the TWS position reporting system
# should already have entries if the pps are still open, in
# which case, we can pull the fqme from that table (see
# `trades_dialogue()` above).
return Transaction(
fqme=fqme,
tid=tid,
size=size,
price=price,
cost=comms,
dt=dt,
expiry=expiry,
bs_mktid=str(conid),
)
def norm_trade_records(
ledger: dict[str, Any],
symcache: SymbologyCache | None = None,
) -> dict[str, Transaction]:
'''
Normalize (xml) flex-report or (recent) API trade records into
our ledger format with parsing for `MktPair` and `Asset`
extraction to fill in the `Transaction.sys: MktPair` field.
'''
records: list[Transaction] = []
for tid, record in ledger.items():
txn = norm_trade(
tid,
record,
# NOTE: currently no symcache support
pairs={},
symcache=symcache,
)
if txn is None:
continue
# inject txns sorted by datetime
insort(
records,
txn,
key=lambda t: t.dt
)
return {r.tid: r for r in records}
def api_trades_to_ledger_entries(
accounts: bidict[str, str],
fills: list[Fill],
) -> dict[str, dict]:
'''
Convert API execution (`Fill`) objects into flattened-``dict``
form, pretty much straight up without modification except adding
a `pydatetime` field from the parsed timestamp so that entries
can be datetime-sorted on write.
'''
trades_by_account: dict[str, dict] = {}
for fill in fills:
# NOTE: for the schema, see the defn for `Fill` which is
# a `NamedTuple` subtype
fdict: dict = fill._asdict()
# flatten all (sub-)objects and convert to dicts.
# with values packed into one top level entry.
val: CommissionReport | Execution | Contract
txn_dict: dict[str, Any] = {}
for attr_name, val in fdict.items():
match attr_name:
# value is a `@dataclass` subtype
case 'contract' | 'execution' | 'commissionReport':
txn_dict.update(asdict(val))
case 'time':
# ib has wack ns timestamps, or is that us?
continue
# TODO: we can remove this case right since there's
# only 4 fields on a `Fill`?
case _:
txn_dict[attr_name] = val
tid = str(txn_dict['execId'])
dt = from_timestamp(txn_dict['time'])
txn_dict['datetime'] = str(dt)
acctid = accounts[txn_dict['acctNumber']]
# NOTE: only inserted (then later popped) for sorting below!
txn_dict['pydatetime'] = dt
if not tid:
# this is likely some kind of internal adjustment
# transaction, likely one of the following:
# - an expiry event that will show a "book trade" indicating
# some adjustment to cash balances: zeroing or itm settle.
# - a manual cash balance position adjustment likely done by
# the user from the accounts window in TWS where they can
# manually set the avg price and size:
# https://api.ibkr.com/lib/cstools/faq/web1/index.html#/tag/DTWS_ADJ_AVG_COST
log.warning(
'Skipping ID-less ledger txn_dict:\n'
f'{pformat(txn_dict)}'
)
continue
trades_by_account.setdefault(
acctid, {}
)[tid] = txn_dict
# TODO: maybe we should just bisect.insort() into a list of
# tuples and then return a dict of that?
# sort entries in output by python based datetime
for acctid in trades_by_account:
trades_by_account[acctid] = dict(sorted(
trades_by_account[acctid].items(),
key=lambda entry: entry[1].pop('pydatetime'),
))
return trades_by_account
async def update_ledger_from_api_trades(
fills: list[Fill],
client: Client | MethodProxy,
accounts_def_inv: bidict[str, str],
# NOTE: provided for ad-hoc insertions "as transactions are
# processed" -> see `norm_trade()` signature requirements.
symcache: SymbologyCache | None = None,
) -> tuple[
dict[str, Transaction],
dict[str, dict],
]:
# XXX; ERRGGG..
# pack in the "primary/listing exchange" value from a
# contract lookup since it seems this isn't available by
# default from the `.fills()` method endpoint...
fill: Fill
for fill in fills:
con: Contract = fill.contract
conid: str = con.conId
pexch: str | None = con.primaryExchange
if not pexch:
cons = await client.get_con(conid=conid)
if cons:
con = cons[0]
pexch = con.primaryExchange or con.exchange
else:
# for futes it seems like the primary is always empty?
pexch: str = con.exchange
# pack in the ``Contract.secType``
# entry['asset_type'] = condict['secType']
entries: dict[str, dict] = api_trades_to_ledger_entries(
accounts_def_inv,
fills,
)
# normalize recent session's trades to the `Transaction` type
trans_by_acct: dict[str, dict[str, Transaction]] = {}
for acctid, trades_by_id in entries.items():
# normalize to transaction form
trans_by_acct[acctid] = norm_trade_records(
trades_by_id,
symcache=symcache,
)
return trans_by_acct, entries

View File

@ -0,0 +1,615 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Symbology search and normalization.
'''
from __future__ import annotations
from contextlib import (
nullcontext,
)
from decimal import Decimal
import time
from typing import (
Awaitable,
TYPE_CHECKING,
)
from rapidfuzz import process as fuzzy
import ib_insync as ibis
import tractor
import trio
from piker.accounting import (
Asset,
MktPair,
unpack_fqme,
)
from piker._cacheables import (
async_lifo_cache,
)
from ._util import (
log,
)
if TYPE_CHECKING:
from .api import (
MethodProxy,
Client,
)
_futes_venues = (
'GLOBEX',
'NYMEX',
'CME',
'CMECRYPTO',
'COMEX',
# 'CMDTY', # special name case..
'CBOT', # (treasury) yield futures
)
_adhoc_cmdty_set = {
# metals
# https://misc.interactivebrokers.com/cstools/contract_info/v3.10/index.php?action=Conid%20Info&wlId=IB&conid=69067924
'xauusd.cmdty', # london gold spot ^
'xagusd.cmdty', # silver spot
}
# NOTE: if you aren't seeing one of these symbols' futures contracts
# show up, it's likely the `.<venue>` part is wrong!
_adhoc_futes_set = {
# equities
'nq.cme',
'mnq.cme', # micro
'es.cme',
'mes.cme', # micro
# cypto$
'brr.cme',
'mbt.cme', # micro
'ethusdrr.cme',
# agriculture
'he.comex', # lean hogs
'le.comex', # live cattle (geezers)
'gf.comex', # feeder cattle (younguns)
# raw
'lb.comex', # random len lumber
'gc.comex',
'mgc.comex', # micro
# oil & gas
'cl.nymex',
'ni.comex', # silver futes
'qi.comex', # mini-silver futes
# treasury yields
# etfs by duration:
# SHY -> IEI -> IEF -> TLT
'zt.cbot', # 2y
'z3n.cbot', # 3y
'zf.cbot', # 5y
'zn.cbot', # 10y
'zb.cbot', # 30y
# (micros of above)
'2yy.cbot',
'5yy.cbot',
'10y.cbot',
'30y.cbot',
}
# taken from list here:
# https://www.interactivebrokers.com/en/trading/products-spot-currencies.php
_adhoc_fiat_set = set((
'USD, AED, AUD, CAD,'
'CHF, CNH, CZK, DKK,'
'EUR, GBP, HKD, HUF,'
'ILS, JPY, MXN, NOK,'
'NZD, PLN, RUB, SAR,'
'SEK, SGD, TRY, ZAR'
).replace(' ', '').split(',')
)
# manually discovered tick discrepancies,
# only god knows how or why they'd cuck these up..
_adhoc_mkt_infos: dict[int | str, dict] = {
'vtgn.nasdaq': {'price_tick': Decimal('0.01')},
}
# map of symbols to contract ids
_adhoc_symbol_map = {
# https://misc.interactivebrokers.com/cstools/contract_info/v3.10/index.php?action=Conid%20Info&wlId=IB&conid=69067924
# NOTE: some cmdtys/metals don't have trade data like gold/usd:
# https://groups.io/g/twsapi/message/44174
'XAUUSD': ({'conId': 69067924}, {'whatToShow': 'MIDPOINT'}),
}
for qsn in _adhoc_futes_set:
sym, venue = qsn.split('.')
assert venue.upper() in _futes_venues, f'{venue}'
_adhoc_symbol_map[sym.upper()] = (
{'exchange': venue},
{},
)
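As a sanity check on the mapping just built, a hedged example of what the loop yields for one of the ad-hoc futes entries (note the venue is kept lowercase as provided):
# e.g. the 'nq.cme' entry from `_adhoc_futes_set` becomes:
contract_kwargs, hist_kwargs = _adhoc_symbol_map['NQ']
assert contract_kwargs == {'exchange': 'cme'}
assert hist_kwargs == {}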
# exchanges we don't support at the moment due to not knowing
# how to do symbol-contract lookup correctly likely due
# to not having the data feeds subscribed.
_exch_skip_list = {
'ASX', # aussie stocks
'MEXI', # mexican stocks
# no idea
'NSE',
'VALUE',
'FUNDSERV',
'SWB2',
'PSE',
'PHLX',
}
# optional search config the backend can register for
# its symbol-search handling (in this case we avoid
# accepting patterns before the kb has settled for more
# than roughly a third of a second, 6/16s).
_search_conf = {
'pause_period': 6 / 16,
}
@tractor.context
async def open_symbol_search(ctx: tractor.Context) -> None:
'''
Symbology search brokerd-endpoint.
'''
from .api import open_client_proxies
from .feed import open_data_client
# TODO: load user defined symbol set locally for fast search?
await ctx.started({})
async with (
open_client_proxies() as (proxies, _),
open_data_client() as data_proxy,
):
async with ctx.open_stream() as stream:
# select a non-history client for symbol search to lighten
# the load in the main data node.
proxy = data_proxy
for name, proxy in proxies.items():
if proxy is data_proxy:
continue
break
ib_client = proxy._aio_ns.ib
log.info(
f'Using API client for symbol-search\n'
f'{ib_client}\n'
)
last = time.time()
async for pattern in stream:
log.info(f'received {pattern}')
now: float = time.time()
# this causes tractor hang...
# assert 0
assert pattern, 'IB can not accept blank search pattern'
# throttle search requests to no faster than 1Hz
diff = now - last
if diff < 1.0:
log.debug('throttle sleeping')
await trio.sleep(diff)
try:
pattern = stream.receive_nowait()
except trio.WouldBlock:
pass
if (
not pattern
or pattern.isspace()
# XXX: not sure if this is a bad assumption but it
# seems to make search snappier?
or len(pattern) < 1
):
log.warning('empty pattern received, skipping..')
# TODO: *BUG* if nothing is returned here the client
# side will cache a null set result and not show
# anything to the user on re-searches when this query
# times out. We probably need a special "timeout" msg
# or something...
# XXX: this unblocks the far end search task which may
# hold up a multi-search nursery block
await stream.send({})
continue
log.info(f'searching for {pattern}')
last = time.time()
# async batch search using api stocks endpoint and module
# defined adhoc symbol set.
stock_results = []
async def extend_results(
target: Awaitable[list]
) -> None:
try:
results = await target
except tractor.trionics.Lagged:
print("IB SYM-SEARCH OVERRUN?!?")
return
stock_results.extend(results)
for _ in range(10):
with trio.move_on_after(3) as cs:
async with trio.open_nursery() as sn:
sn.start_soon(
extend_results,
proxy.search_symbols(
pattern=pattern,
upto=5,
),
)
# trigger async request
await trio.sleep(0)
if cs.cancelled_caught:
log.warning(
f'Search timeout? {proxy._aio_ns.ib.client}'
)
continue
elif stock_results:
break
# else:
# await tractor.pause()
# # match against our ad-hoc set immediately
# adhoc_matches = fuzzy.extract(
# pattern,
# list(_adhoc_futes_set),
# score_cutoff=90,
# )
# log.info(f'fuzzy matched adhocs: {adhoc_matches}')
# adhoc_match_results = {}
# if adhoc_matches:
# # TODO: do we need to pull contract details?
# adhoc_match_results = {i[0]: {} for i in
# adhoc_matches}
log.debug(f'fuzzy matching stocks {stock_results}')
stock_matches = fuzzy.extract(
pattern,
stock_results,
score_cutoff=50,
)
# matches = adhoc_match_results | {
matches = {
item[0]: {} for item in stock_matches
}
# TODO: we used to deliver contract details
# {item[2]: item[0] for item in stock_matches}
log.debug(f"sending matches: {matches.keys()}")
await stream.send(matches)
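Below is a toy, self-contained sketch of the throttle-and-drain pattern from the search loop above, using a plain `trio` memory channel in place of the `tractor` stream; note it sleeps out the *remainder* of the 1Hz window (the loop above sleeps `diff` itself) and all names here are illustrative:
import time
import trio

async def throttled_consumer(rx: trio.MemoryReceiveChannel) -> list[str]:
    handled: list[str] = []
    last: float = time.time()
    async for pattern in rx:
        now: float = time.time()
        diff: float = now - last
        if diff < 1.0:
            # wait out the rest of the 1Hz window..
            await trio.sleep(1.0 - diff)
            # ..then drain any fresher pattern that queued up meanwhile
            try:
                pattern = rx.receive_nowait()
            except trio.WouldBlock:
                pass
        last = time.time()
        handled.append(pattern)
    return handled

async def main() -> None:
    tx, rx = trio.open_memory_channel(8)
    for patt in ('qqq', 'spy', 'tsla'):
        await tx.send(patt)
    await tx.aclose()
    print(await throttled_consumer(rx))

trio.run(main)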
# re-mapping to piker asset type names
# https://github.com/erdewit/ib_insync/blob/master/ib_insync/contract.py#L113
_asset_type_map = {
'STK': 'stock',
'OPT': 'option',
'FUT': 'future',
'CONTFUT': 'continuous_future',
'CASH': 'fiat',
'IND': 'index',
'CFD': 'cfd',
'BOND': 'bond',
'CMDTY': 'commodity',
'FOP': 'futures_option',
'FUND': 'mutual_fund',
'WAR': 'warrant',
'IOPT': 'warran',
'BAG': 'bag',
'CRYPTO': 'crypto', # bc it's diff than fiat?
# 'NEWS': 'news',
}
def parse_patt2fqme(
# client: Client,
pattern: str,
) -> tuple[str, str, str, str]:
# TODO: we can't use this currently because
# ``wrapper.startTicker()`` currently caches ticker instances
# which means getting a single quote will potentially look up
# a quote for a ticker that is already streaming and thus run
# into state clobbering (eg. list: Ticker.ticks). It probably
# makes sense to try this once we get the pub-sub working on
# individual symbols...
# XXX UPDATE: we can probably do the tick/trades scraping
# inside our eventkit handler instead to bypass this entirely?
currency = ''
# fqme parsing stage
# ------------------
if '.ib' in pattern:
_, symbol, venue, expiry = unpack_fqme(pattern)
else:
symbol = pattern
expiry = ''
# # another hack for forex pairs lul.
# if (
# '.idealpro' in symbol
# # or '/' in symbol
# ):
# exch: str = 'IDEALPRO'
# symbol = symbol.removesuffix('.idealpro')
# if '/' in symbol:
# symbol, currency = symbol.split('/')
# else:
# TODO: yes, a cache..
# try:
# # give the cache a go
# return client._contracts[symbol]
# except KeyError:
# log.debug(f'Looking up contract for {symbol}')
expiry: str = ''
if symbol.count('.') > 1:
symbol, _, expiry = symbol.rpartition('.')
# use heuristics to figure out contract "type"
symbol, venue = symbol.upper().rsplit('.', maxsplit=1)
return symbol, currency, venue, expiry
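A worked example of the heuristic above, assuming my reading of the parse is right (a derivative-style pattern with a venue and expiry suffix; a bare pattern with no '.<venue>' part would fail the final `rsplit` unpack):
assert parse_patt2fqme('mnq.cme.20230616') == (
    'MNQ',        # symbol
    '',           # currency (not parsed in this path)
    'CME',        # venue
    '20230616',   # expiry
)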
def con2fqme(
con: ibis.Contract,
_cache: dict[int, (str, bool)] = {}
) -> tuple[str, bool]:
'''
Convert contracts to fqme-style strings to be used both in
symbol-search matching and as feed tokens passed to the front
end data feed layer.
Previously seen contracts are cached by id.
'''
# should be real volume for this contract by default
calc_price: bool = False
if con.conId:
try:
# TODO: LOL so apparently IB just changes the contract
# ID (int) on a whim.. so we probably need to use an
# FQME style key after all...
return _cache[con.conId]
except KeyError:
pass
suffix: str = con.primaryExchange or con.exchange
symbol: str = con.symbol
expiry: str = con.lastTradeDateOrContractMonth or ''
match con:
case ibis.Option():
# TODO: option symbol parsing and sane display:
symbol = con.localSymbol.replace(' ', '')
case (
ibis.Commodity()
# search API endpoint returns std con box..
| ibis.Contract(secType='CMDTY')
):
# commodities and forex don't have an exchange name and
# no real volume so we have to calculate the price
suffix = con.secType
# no real volume on this contract
calc_price = True
case ibis.Forex() | ibis.Contract(secType='CASH'):
dst, src = con.localSymbol.split('.')
symbol = ''.join([dst, src])
suffix = con.exchange or 'idealpro'
# no real volume on forex feeds..
calc_price = True
if not suffix:
entry = _adhoc_symbol_map.get(
con.symbol or con.localSymbol
)
if entry:
meta, kwargs = entry
cid = meta.get('conId')
if cid:
assert con.conId == meta['conId']
suffix = meta['exchange']
# append a `.<suffix>` to the returned symbol
# key for derivatives that normally is the expiry
# date key.
if expiry:
suffix += f'.{expiry}'
fqme_key = symbol.lower()
if suffix:
fqme_key = '.'.join((fqme_key, suffix)).lower()
_cache[con.conId] = fqme_key, calc_price
return fqme_key, calc_price
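A hypothetical usage sketch for the forex branch (the manual `localSymbol` assignment stands in for what a fully-qualified `ib_insync` contract would already carry; values are illustrative):
import ib_insync as ibis

con = ibis.Forex('EURUSD')   # defaults to exchange='IDEALPRO'
con.localSymbol = 'EUR.USD'  # normally filled in by qualification

fqme_key, calc_price = con2fqme(con)
# expect ('eurusd.idealpro', True) per the Forex/CASH case above:
# forex feeds carry no real volume so price calc is flagged.
assert calc_price is True
assert fqme_key == 'eurusd.idealpro'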
@async_lifo_cache()
async def get_mkt_info(
fqme: str,
proxy: MethodProxy | None = None,
) -> tuple[MktPair, ibis.ContractDetails]:
if '.ib' not in fqme:
fqme += '.ib'
broker, pair, venue, expiry = unpack_fqme(fqme)
proxy: MethodProxy
if proxy is not None:
client_ctx = nullcontext(proxy)
else:
from .feed import (
open_data_client,
)
client_ctx = open_data_client
async with client_ctx as proxy:
try:
(
con, # Contract
details, # ContractDetails
) = await proxy.get_sym_details(fqme=fqme)
except ConnectionError:
log.exception(f'Proxy is ded {proxy._aio_ns}')
raise
# TODO: more consistent field translation
atype = _asset_type_map[con.secType]
if atype == 'commodity':
venue: str = 'cmdty'
else:
venue = con.primaryExchange or con.exchange
price_tick: Decimal = Decimal(str(details.minTick))
ib_min_tick_gt_2: Decimal = Decimal('0.01')
if (
price_tick < ib_min_tick_gt_2
):
# TODO: we need to add some kinda dynamic rounding sys
# to our MktPair i guess?
# not sure where the logic should sit, but likely inside
# the `.clearing._ems` i suppose...
log.warning(
'IB seems to disallow a min price tick < 0.01 '
'when the price is > 2.0..?\n'
f'Decreasing min tick precision for {fqme} to 0.01'
)
# price_tick = ib_min_tick
# await tractor.pause()
if atype == 'stock':
# XXX: GRRRR they don't support fractional share sizes for
# stocks from the API?!
# if con.secType == 'STK':
size_tick = Decimal('1')
else:
size_tick: Decimal = Decimal(
str(details.minSize).rstrip('0')
)
# |-> TODO: there is also the Contract.sizeIncrement, but wtf is it?
# NOTE: this is duplicate from the .broker.norm_trade_records()
# routine, we should factor all this parsing somewhere..
expiry_str = str(con.lastTradeDateOrContractMonth)
# if expiry:
# expiry_str: str = str(pendulum.parse(
# str(expiry).strip(' ')
# ))
# TODO: currently we can't pass the fiat src asset because
# then we'll get a `MNQUSD` request for history data..
# we need to figure out how we're going to handle this (later?)
# but likely we want all backends to eventually handle
# ``dst/src.venue.`` style !?
src = Asset(
name=str(con.currency).lower(),
atype='fiat',
tx_tick=Decimal('0.01'), # right?
)
dst = Asset(
name=con.symbol.lower(),
atype=atype,
tx_tick=size_tick,
)
mkt = MktPair(
src=src,
dst=dst,
price_tick=price_tick,
size_tick=size_tick,
bs_mktid=str(con.conId),
venue=str(venue),
expiry=expiry_str,
broker='ib',
# TODO: options contract info as str?
# contract_info=<optionsdetails>
_fqme_without_src=(atype != 'fiat'),
)
# just.. wow.
if entry := _adhoc_mkt_infos.get(mkt.bs_fqme):
log.warning(f'Frickin {mkt.fqme} has an adhoc {entry}..')
new = mkt.to_dict()
new['price_tick'] = entry['price_tick']
new['src'] = src
new['dst'] = dst
mkt = MktPair(**new)
# if possible register the bs_mktid to the just-built
# mkt so that it can be retrieved by order mode tasks later.
# TODO NOTE: this is going to be problematic if/when we split
# out the datad vs. brokerd actors since the mktmap lookup
# table will now be inaccessible..
if proxy is not None:
client: Client = proxy._aio_ns
client._contracts[mkt.bs_fqme] = con
client._cons2mkts[con] = mkt
return mkt, details

View File

@ -19,44 +19,57 @@ Kraken backend.
Sub-modules within break into the core functionalities:
- ``broker.py`` part for orders / trading endpoints
- ``feed.py`` for real-time data feed endpoints
- ``api.py`` for the core API machinery which is ``trio``-ized
wrapping around ``ib_insync``.
- .api: for the core API machinery, generally a ``Client``
implemented with ``asks``/``trio-websocket``.
- .broker: part for orders / trading endpoints.
- .feed: for real-time and historical data query endpoints.
- .ledger: for transaction processing as it pertains to accounting.
- .symbols: for market (name) search and symbology meta-defs.
'''
from piker.log import get_logger
log = get_logger(__name__)
from .symbols import (
Pair, # for symcache
open_symbol_search,
# required by `.accounting`, `.data`
get_mkt_info,
)
# required by `.brokers`
from .api import (
get_client,
)
from .feed import (
get_mkt_info,
open_history_client,
open_symbol_search,
# required by `.data`
stream_quotes,
open_history_client,
)
from .broker import (
trades_dialogue,
# required by `.clearing`
open_trade_dialog,
)
from .ledger import (
# required by `.accounting`
norm_trade,
norm_trade_records,
)
__all__ = [
'get_client',
'trades_dialogue',
'get_mkt_info',
'Pair',
'open_trade_dialog',
'open_history_client',
'open_symbol_search',
'stream_quotes',
'norm_trade_records',
'norm_trade',
]
# tractor RPC enable arg
__enable_modules__: list[str] = [
'api',
'feed',
'broker',
'feed',
'symbols',
]

View File

@ -15,12 +15,11 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Kraken web API wrapping.
Core (web) API client
'''
from contextlib import asynccontextmanager as acm
from datetime import datetime
from decimal import Decimal
import itertools
from typing import (
Any,
@ -28,10 +27,8 @@ from typing import (
)
import time
from bidict import bidict
import httpx
import pendulum
import asks
from fuzzywuzzy import process as fuzzy
import numpy as np
import urllib.parse
import hashlib
@ -40,10 +37,14 @@ import base64
import trio
from piker import config
from piker.data.types import Struct
from piker.data import (
def_iohlcv_fields,
match_from_pairs,
)
from piker.accounting._mktinfo import (
Asset,
digits_to_dec,
dec_digits,
)
from piker.brokers._util import (
resproc,
@ -52,29 +53,21 @@ from piker.brokers._util import (
DataThrottle,
)
from piker.accounting import Transaction
from . import log
from piker.log import get_logger
from .symbols import Pair
log = get_logger('piker.brokers.kraken')
# <uri>/<version>/
_url = 'https://api.kraken.com/0'
_headers: dict[str, str] = {
'User-Agent': 'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
}
# Broker specific ohlc schema which includes a vwap field
_ohlc_dtype = [
('index', int),
('time', int),
('open', float),
('high', float),
('low', float),
('close', float),
('volume', float),
('count', int),
('bar_wap', float),
]
# UI components allow this to be declared such that additional
# (historical) fields can be exposed.
ohlc_dtype = np.dtype(_ohlc_dtype)
# TODO: this is the only backend providing this right?
# in which case we should drop it from the defaults and
# instead make a custom fields descr in this module!
_show_wap_in_history = True
_symbol_info_translation: dict[str, str] = {
'tick_decimals': 'pair_decimals',
@ -82,12 +75,18 @@ _symbol_info_translation: dict[str, str] = {
def get_config() -> dict[str, Any]:
'''
Load our section from `piker/brokers.toml`.
conf, path = config.load()
section = conf.get('kraken')
if section is None:
log.warning(f'No config section found for kraken in {path}')
'''
conf, path = config.load(
conf_name='brokers',
touch_if_dne=True,
)
if (section := conf.get('kraken')) is None:
log.warning(
f'No config section found for kraken in {path}'
)
return {}
return section
@ -118,92 +117,51 @@ class InvalidKey(ValueError):
'''
# https://www.kraken.com/features/api#get-tradable-pairs
class Pair(Struct):
altname: str # alternate pair name
wsname: str # WebSocket pair name (if available)
aclass_base: str # asset class of base component
base: str # asset id of base component
aclass_quote: str # asset class of quote component
quote: str # asset id of quote component
lot: str # volume lot size
cost_decimals: int
costmin: float
pair_decimals: int # scaling decimal places for pair
lot_decimals: int # scaling decimal places for volume
# amount to multiply lot volume by to get currency volume
lot_multiplier: float
# array of leverage amounts available when buying
leverage_buy: list[int]
# array of leverage amounts available when selling
leverage_sell: list[int]
# fee schedule array in [volume, percent fee] tuples
fees: list[tuple[int, float]]
# maker fee schedule array in [volume, percent fee] tuples (if on
# maker/taker)
fees_maker: list[tuple[int, float]]
fee_volume_currency: str # volume discount currency
margin_call: str # margin call level
margin_stop: str # stop-out/liquidation margin level
ordermin: float # minimum order volume for pair
tick_size: float # min price step size
status: str
short_position_limit: float = 0
long_position_limit: float = float('inf')
@property
def price_tick(self) -> Decimal:
return digits_to_dec(self.pair_decimals)
@property
def size_tick(self) -> Decimal:
return digits_to_dec(self.lot_decimals)
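For intuition on the two tick properties above, a standalone stand-in for the digits-to-tick conversion they delegate to (this is a local sketch, not the actual `digits_to_dec` implementation):
from decimal import Decimal

def digits_to_tick(ndigits: int) -> Decimal:
    # 2 decimal places of precision -> a min step of 0.01
    return Decimal('1').scaleb(-ndigits)

assert digits_to_tick(2) == Decimal('0.01')   # eg. pair_decimals=2
assert digits_to_tick(8) == Decimal('1E-8')   # eg. lot_decimals=8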
class Client:
# symbol mapping from all names to the altname
_ntable: dict[str, str] = {}
# assets and mkt pairs are key-ed by kraken's ReST response
# symbol-bs_mktids (we call them "X-keys" like fricking
# "XXMRZEUR"). these keys used directly since ledger endpoints
# return transaction sets keyed with the same set!
_Assets: dict[str, Asset] = {}
_AssetPairs: dict[str, Pair] = {}
# 2-way map of symbol names to their "alt names" ffs XD
_altnames: bidict[str, str] = bidict()
# offer lookup tables for all .altname and .wsname
# to the equivalent .xname so that various symbol-schemas
# can be mapped to `Pair`s in the tables above.
_altnames: dict[str, str] = {}
_wsnames: dict[str, str] = {}
# key-ed by `Pair.bs_fqme: str`, and thus used for search
# allowing for lookup using piker's own FQME symbology sys.
_pairs: dict[str, Pair] = {}
_assets: dict[str, Asset] = {}
def __init__(
self,
config: dict[str, str],
httpx_client: httpx.AsyncClient,
name: str = '',
api_key: str = '',
secret: str = ''
) -> None:
self._sesh = asks.Session(connections=4)
self._sesh.base_location = _url
self._sesh.headers.update({
'User-Agent':
'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
})
self._sesh: httpx.AsyncClient = httpx_client
self._name = name
self._api_key = api_key
self._secret = secret
self.conf: dict[str, str] = config
self.assets: dict[str, Asset] = {}
@property
def pairs(self) -> dict[str, Pair]:
if self._pairs is None:
raise RuntimeError(
"Make sure to run `cache_symbols()` on startup!"
"Client didn't run `.get_mkt_pairs()` on startup?!"
)
# retrieve and cache all symbols
return self._pairs
@ -212,10 +170,9 @@ class Client:
method: str,
data: dict,
) -> dict[str, Any]:
resp = await self._sesh.post(
path=f'/public/{method}',
resp: httpx.Response = await self._sesh.post(
url=f'/public/{method}',
json=data,
timeout=float('inf')
)
return resproc(resp, log)
@ -226,18 +183,18 @@ class Client:
uri_path: str
) -> dict[str, Any]:
headers = {
'Content-Type':
'application/x-www-form-urlencoded',
'API-Key':
self._api_key,
'API-Sign':
get_kraken_signature(uri_path, data, self._secret)
'Content-Type': 'application/x-www-form-urlencoded',
'API-Key': self._api_key,
'API-Sign': get_kraken_signature(
uri_path,
data,
self._secret,
),
}
resp = await self._sesh.post(
path=f'/private/{method}',
resp: httpx.Response = await self._sesh.post(
url=f'/private/{method}',
data=data,
headers=headers,
timeout=float('inf')
)
return resproc(resp, log)
@ -263,17 +220,29 @@ class Client:
'Balance',
{},
)
by_bsmktid = resp['result']
by_bsmktid: dict[str, dict] = resp['result']
# TODO: we need to pull out the "asset" decimals
# data and return a `decimal.Decimal` instead here!
# using the underlying Asset
return {
self._altnames[sym].lower(): float(bal)
for sym, bal in by_bsmktid.items()
}
balances: dict = {}
for xname, bal in by_bsmktid.items():
asset: Asset = self._Assets[xname]
async def get_assets(self) -> dict[str, Asset]:
# TODO: which KEY should we use? it's used to index
# the `Account.pps: dict` ..
key: str = asset.name.lower()
# TODO: should we just return a `Decimal` here
# or is the rounded version ok?
balances[key] = round(
float(bal),
ndigits=dec_digits(asset.tx_tick)
)
return balances
async def get_assets(
self,
reload: bool = False,
) -> dict[str, Asset]:
'''
Load and cache all asset infos and pack into
our native ``Asset`` struct.
@ -291,21 +260,37 @@ class Client:
}
'''
resp = await self._public('Assets', {})
assets = resp['result']
if (
not self._assets
or reload
):
resp = await self._public('Assets', {})
assets: dict[str, dict] = resp['result']
for bs_mktid, info in assets.items():
altname = self._altnames[bs_mktid] = info['altname']
aclass = info['aclass']
for bs_mktid, info in assets.items():
self.assets[bs_mktid] = Asset(
name=altname.lower(),
atype=f'crypto_{aclass}',
tx_tick=digits_to_dec(info['decimals']),
info=info,
)
altname: str = info['altname']
aclass: str = info['aclass']
asset = Asset(
name=altname,
atype=f'crypto_{aclass}',
tx_tick=digits_to_dec(info['decimals']),
info=info,
)
# NOTE: yes we keep 2 sets since kraken insists on
# keeping 3 frickin sets bc apparently they have
# no sane data engineers who all like different
# keys for their fricking symbology sets..
self._Assets[bs_mktid] = asset
self._assets[altname.lower()] = asset
self._assets[altname] = asset
return self.assets
# we return the "most native" set merged with our preferred
# naming (which i guess is the "altname" one) since that's
# what the symcache loader will be storing, and we need the
# keys that are easiest to match against in any trade
# records.
return self._Assets | self._assets
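A small note-by-example on the dict-union return above: with `|` the right-hand mapping wins on key collisions, so the altname-keyed entries shadow any identically-named X-keys (toy keys, not real kraken names):
xkeyed = {'XAAA': 1, 'SHARED': 2}
altkeyed = {'AAA': 1, 'SHARED': 3}

merged = xkeyed | altkeyed
assert merged['SHARED'] == 3          # rhs takes precedence
assert set(merged) == {'XAAA', 'AAA', 'SHARED'}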
async def get_trades(
self,
@ -386,23 +371,25 @@ class Client:
# 'amount': '0.00300726', 'fee': '0.00001000', 'time':
# 1658347714, 'status': 'Success'}]}
if xfers:
import tractor
await tractor.pp()
trans: dict[str, Transaction] = {}
for entry in xfers:
# look up the normalized name and asset info
asset_key = entry['asset']
asset = self.assets[asset_key]
asset_key = self._altnames[asset_key].lower()
asset_key: str = entry['asset']
asset: Asset = self._Assets[asset_key]
asset_key: str = asset.name.lower()
# XXX: this is in the asset units (likely) so it isn't
# quite the same as a commisions cost necessarily..)
# TODO: also round this based on `Pair` cost precision info?
cost = float(entry['fee'])
fqme = asset_key + '.kraken'
# fqme: str = asset_key + '.kraken'
tx = Transaction(
fqsn=fqme,
sym=asset,
fqme=asset_key, # this must map to an entry in .assets!
tid=entry['txid'],
dt=pendulum.from_timestamp(entry['time']),
bs_mktid=f'{asset_key}{src_asset}',
@ -417,6 +404,11 @@ class Client:
# XXX: see note above
cost=cost,
# not a trade but a withdrawal or deposit on the
# asset (chain) system.
etype='transfer',
)
trans[tx.tid] = tx
@ -467,7 +459,7 @@ class Client:
# txid is a transaction id given by kraken
return await self.endpoint('CancelOrder', {"txid": reqid})
async def pair_info(
async def asset_pairs(
self,
pair_patt: str | None = None,
@ -479,64 +471,76 @@ class Client:
https://docs.kraken.com/rest/#tag/Market-Data/operation/getTradableAssetPairs
'''
# get all pairs by default, or filter
# to whatever pattern is provided as input.
pairs: dict[str, str] | None = None
if pair_patt is not None:
pairs = {'pair': pair_patt}
if not self._AssetPairs:
# get all pairs by default, or filter
# to whatever pattern is provided as input.
req_pairs: dict[str, str] | None = None
if pair_patt is not None:
req_pairs = {'pair': pair_patt}
resp = await self._public(
'AssetPairs',
pairs,
)
err = resp['error']
if err:
raise SymbolNotFound(pair_patt)
resp = await self._public(
'AssetPairs',
req_pairs,
)
err = resp['error']
if err:
raise SymbolNotFound(pair_patt)
pairs: dict[str, Pair] = {
# NOTE: we try to key pairs by our custom defined
# `.bs_fqme` field since we want to offer search over
# this pattern set, callers should fill out lookup
# tables for kraken's bs_mktid keys to map to these
# keys!
# XXX: FURTHER kraken's data eng team decided to offer
# 3 frickin market-pair-symbol key sets depending on
# which frickin API is being used.
# Example for the trading pair 'LTC/EUR'
# - the "X-key" from rest eps 'XLTCZEUR'
# - the "websocket key" from ws msgs is 'LTC/EUR'
# - the "altname key" also delivered in pair info is 'LTCEUR'
for xkey, data in resp['result'].items():
key: Pair(**data)
for key, data in resp['result'].items()
}
# always cache so we can possibly do faster lookup
self._pairs.update(pairs)
# NOTE: always cache in pairs tables for faster lookup
pair = Pair(xname=xkey, **data)
# register the above `Pair` structs for all
# key-sets/monikers: a set of 4 (frickin) tables
# acting as a combined surjection of all possible
# (and stupid) kraken names to their `Pair` obj.
self._AssetPairs[xkey] = pair
self._pairs[pair.bs_fqme] = pair
self._altnames[pair.altname] = pair
self._wsnames[pair.wsname] = pair
if pair_patt is not None:
return next(iter(pairs.items()))[1]
return next(iter(self._pairs.items()))[1]
return pairs
return self._AssetPairs
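A toy illustration of the surjective key-set registration above, using a stand-in record instead of a full `Pair` (the three kraken key styles plus the piker-native fqme all resolve to the same object; field values here are illustrative):
from types import SimpleNamespace

pair = SimpleNamespace(
    xname='XXBTZUSD',       # ReST response key
    wsname='XBT/USD',       # websocket key
    altname='XBTUSD',       # "alternate" key
    bs_fqme='XBTUSD.SPOT',  # piker-native key (illustrative)
)
_AssetPairs = {pair.xname: pair}
_pairs = {pair.bs_fqme: pair}
_altnames = {pair.altname: pair}
_wsnames = {pair.wsname: pair}

# any schema's key lands on the same struct
assert (
    _AssetPairs['XXBTZUSD']
    is _altnames['XBTUSD']
    is _wsnames['XBT/USD']
)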
async def cache_symbols(self) -> dict:
async def get_mkt_pairs(
self,
reload: bool = False,
) -> dict:
'''
Load all market pair info build and cache it for downstream use.
Load all market pair info, build and cache it for downstream
use.
A ``._ntable: dict[str, str]`` is available for mapping the
websocket pair name-keys and their http endpoint API (smh)
equivalents to the "alternative name" which is generally the one
we actually want to use XD
Multiple pair info lookup tables (like ``._altnames:
dict[str, str]``) are created for looking up the
piker-native `Pair`-struct from any input of the three
(yes, it's that idiotic..) available symbol/pair-key-sets
that kraken frickin offers depending on the API including
the .altname, .wsname and the weird ass default set they
return in ReST responses .xname..
'''
if not self._pairs:
pairs = await self.pair_info()
assert self._pairs == pairs
if (
not self._pairs
or reload
):
await self.asset_pairs()
# table of all ws and rest keys to their alt-name values.
ntable: dict[str, str] = {}
for rest_key in list(pairs.keys()):
pair: Pair = pairs[rest_key]
altname = pair.altname
wsname = pair.wsname
ntable[altname] = ntable[rest_key] = ntable[wsname] = altname
# register the pair under all monikers, a giant flat
# surjection of all possible names to each info obj.
self._pairs[altname] = self._pairs[wsname] = pair
self._ntable.update(ntable)
return self._pairs
return self._AssetPairs
async def search_symbols(
self,
@ -552,16 +556,20 @@ class Client:
'''
if not len(self._pairs):
await self.cache_symbols()
assert self._pairs, '`Client.cache_symbols()` was never called!?'
await self.get_mkt_pairs()
assert self._pairs, '`Client.get_mkt_pairs()` was never called!?'
matches = fuzzy.extractBests(
pattern,
self._pairs,
matches: dict[str, Pair] = match_from_pairs(
pairs=self._pairs,
query=pattern.upper(),
score_cutoff=50,
)
# repack in dict form
return {item[0].altname: item[0] for item in matches}
# repack in .altname-keyed output table
return {
pair.altname: pair
for pair in matches.values()
}
async def bars(
self,
@ -622,11 +630,11 @@ class Client:
new_bars.append(
(i,) + tuple(
ftype(bar[j]) for j, (name, ftype) in enumerate(
_ohlc_dtype[1:]
def_iohlcv_fields[1:]
)
)
)
array = np.array(new_bars, dtype=_ohlc_dtype) if as_np else bars
array = np.array(new_bars, dtype=def_iohlcv_fields) if as_np else bars
return array
except KeyError:
errmsg = json['error'][0]
@ -641,37 +649,55 @@ class Client:
raise BrokerError(errmsg)
@classmethod
def normalize_symbol(
def to_bs_fqme(
cls,
ticker: str
) -> tuple[str, Pair]:
pair_str: str
) -> str:
'''
Normalize symbol names to a 3x3 pair from the global
definition map which we build out from the data retrieved from
the 'AssetPairs' endpoint, see methods above.
'''
return cls._ntable[ticker].lower()
try:
return cls._altnames[pair_str.upper()].bs_fqme
except KeyError as ke:
raise SymbolNotFound(f'kraken has no {ke.args[0]}')
@acm
async def get_client() -> Client:
conf = get_config()
if conf:
client = Client(
conf,
name=conf['key_descr'],
api_key=conf['api_key'],
secret=conf['secret']
)
else:
client = Client({})
conf: dict[str, Any] = get_config()
async with httpx.AsyncClient(
base_url=_url,
headers=_headers,
# at startup, load all symbols, and asset info in
# batch requests.
async with trio.open_nursery() as nurse:
nurse.start_soon(client.get_assets)
await client.cache_symbols()
# TODO: is there a way to numerate this?
# https://www.python-httpx.org/advanced/clients/#why-use-a-client
# connections=4
) as trio_client:
if conf:
client = Client(
conf,
httpx_client=trio_client,
yield client
# TODO: don't break these up and just do internal
# conf lookups instead..
name=conf['key_descr'],
api_key=conf['api_key'],
secret=conf['secret']
)
else:
client = Client(
conf={},
httpx_client=trio_client,
)
# at startup, load all symbols, and asset info in
# batch requests.
async with trio.open_nursery() as nurse:
nurse.start_soon(client.get_assets)
await client.get_mkt_pairs()
yield client

View File

@ -18,14 +18,12 @@
Order api and machinery
'''
from collections import ChainMap, defaultdict
from contextlib import (
asynccontextmanager as acm,
aclosing,
)
from functools import partial
from itertools import count
import math
from pprint import pformat
import time
from typing import (
@ -36,21 +34,19 @@ from typing import (
)
from bidict import bidict
import pendulum
import trio
import tractor
from piker.accounting import (
Position,
PpTable,
Account,
Transaction,
TransactionLedger,
open_trade_ledger,
open_pps,
get_likely_pair,
open_account,
)
from piker.accounting._mktinfo import (
MktPair,
from piker.clearing import(
OrderDialogs,
)
from piker.clearing._messages import (
Order,
@ -63,18 +59,24 @@ from piker.clearing._messages import (
BrokerdPosition,
BrokerdStatus,
)
from . import log
from piker.brokers import (
open_cached_client,
)
from piker.data import open_symcache
from .api import (
log,
Client,
BrokerError,
get_client,
)
from .feed import (
get_mkt_info,
open_autorecon_ws,
NoBsWs,
stream_messages,
)
from .ledger import (
norm_trade_records,
verify_balances,
)
MsgUnion = Union[
BrokerdCancel,
@ -124,7 +126,7 @@ async def handle_order_requests(
client: Client,
ems_order_stream: tractor.MsgStream,
token: str,
apiflows: dict[int, ChainMap[dict[str, dict]]],
apiflows: OrderDialogs,
ids: bidict[str, int],
reqids2txids: dict[int, str],
@ -134,10 +136,8 @@ async def handle_order_requests(
and deliver acks or errors.
'''
# XXX: UGH, let's unify this.. with ``msgspec``.
msg: dict[str, Any]
order: BrokerdOrder
# XXX: UGH, let's unify this.. with ``msgspec``!!!
msg: dict | Order
async for msg in ems_order_stream:
log.info(f'Rx order msg:\n{pformat(msg)}')
match msg:
@ -183,11 +183,12 @@ async def handle_order_requests(
# logic from old `Client.submit_limit()`
if order.oid in ids:
ep = 'editOrder'
reqid = ids[order.oid] # integer not txid
ep: str = 'editOrder'
reqid: int = ids[order.oid] # integer not txid
try:
txid = reqids2txids[reqid]
txid: str = reqids2txids[reqid]
except KeyError:
# XXX: not sure if this block ever gets hit now?
log.error('TOO FAST EDIT')
reqids2txids[reqid] = TooFastEdit(reqid)
@ -208,7 +209,7 @@ async def handle_order_requests(
}
else:
ep = 'addOrder'
ep: str = 'addOrder'
reqid = BrokerClient.new_reqid()
ids[order.oid] = reqid
@ -221,8 +222,12 @@ async def handle_order_requests(
'type': order.action,
}
psym = order.symbol.upper()
pair = f'{psym[:3]}/{psym[3:]}'
# XXX strip any .<venue> token which should
# ONLY ever be '.spot' rn, until we support
# futes.
bs_fqme: str = order.symbol.replace('.spot', '')
psym: str = bs_fqme.upper()
pair: str = f'{psym[:3]}/{psym[3:]}'
# XXX: ACK the request **immediately** before sending
# the api side request to ensure the ems maps the oid ->
@ -260,7 +265,7 @@ async def handle_order_requests(
await ws.send_msg(req)
# placehold for sanity checking in relay loop
apiflows[reqid].maps.append(msg)
apiflows.add_msg(reqid, msg)
case _:
account = msg.get('account')
@ -366,24 +371,23 @@ async def subscribe(
def trades2pps(
table: PpTable,
acnt: Account,
ledger: TransactionLedger,
acctid: str,
new_trans: dict[str, Transaction] = {},
write_storage: bool = True,
) -> tuple[
list[BrokerdPosition],
list[Transaction],
]:
) -> list[BrokerdPosition]:
if new_trans:
updated = table.update_from_trans(
updated = acnt.update_from_ledger(
new_trans,
symcache=ledger.symcache,
)
log.info(f'Updated pps:\n{pformat(updated)}')
pp_entries, closed_pp_objs = table.dump_active()
pp_objs: dict[Union[str, int], Position] = table.pps
pp_entries, closed_pp_objs = acnt.dump_active()
pp_objs: dict[Union[str, int], Position] = acnt.pps
pps: dict[int, Position]
position_msgs: list[dict] = []
@ -397,13 +401,13 @@ def trades2pps(
# backend suffix prefixed but when
# reading accounts from ledgers we
# don't need it and/or it's prefixed
# in the section table.. we should
# in the section acnt.. we should
# just strip this from the message
# right since `.broker` is already
# included?
account='kraken.' + acctid,
symbol=p.symbol.fqme,
size=p.size,
symbol=p.mkt.fqme,
size=p.cumsize,
avg_price=p.ppu,
currency='',
)
@ -414,29 +418,28 @@ def trades2pps(
# as little as possible. we need to either do
# these writes in another actor, or try out `trio`'s
# async file IO api?
table.write_config()
acnt.write_config()
return position_msgs
@tractor.context
async def trades_dialogue(
async def open_trade_dialog(
ctx: tractor.Context,
loglevel: str = None,
) -> AsyncIterator[dict[str, Any]]:
async with get_client() as client:
async with (
# TODO: maybe bind these together and deliver
# a tuple from `.open_cached_client()`?
open_cached_client('kraken') as client,
open_symcache('kraken') as symcache,
):
# make ems flip to paper mode when no creds setup in
# `brokers.toml` B0
if not client._api_key:
raise RuntimeError(
'Missing Kraken API key in `brokers.toml`!?!?')
# TODO: make ems flip to paper mode via
# some returned signal if the user only wants to use
# the data feed or we return this?
# else:
# await ctx.started(({}, ['paper']))
await ctx.started('paper')
return
# NOTE: currently we expect the user to define a "source fiat"
# (much like the web UI let's you set an "account currency")
@ -449,10 +452,7 @@ async def trades_dialogue(
acc_name = 'kraken.' + acctid
# task local msg dialog tracking
apiflows: defaultdict[
int,
ChainMap[dict[str, dict]],
] = defaultdict(ChainMap)
apiflows = OrderDialogs()
# 2way map for ems ids to kraken int reqids..
ids: bidict[str, int] = bidict()
@ -464,8 +464,8 @@ async def trades_dialogue(
# - delete the *ABSOLUTE LAST* entry from account's corresponding
# trade ledgers file (NOTE this MUST be the last record
# delivered from the api ledger),
# - open you ``pps.toml`` and find that same tid and delete it
# from the pp's clears table,
# - open you ``account.kraken.spot.toml`` and find that
# same tid and delete it from the pos's clears table,
# - set this flag to `True`
#
# You should see an update come in after the order mode
@ -476,172 +476,85 @@ async def trades_dialogue(
# update things correctly.
simulate_pp_update: bool = False
table: PpTable
acnt: Account
ledger: TransactionLedger
with (
open_pps(
open_account(
'kraken',
acctid,
write_on_exit=True,
) as table,
) as acnt,
open_trade_ledger(
'kraken',
acctid,
symcache=symcache,
) as ledger,
):
# transaction-ify the ledger entries
ledger_trans = await norm_trade_records(ledger)
# TODO: loading ledger entries should all be done
# within a newly implemented `async with open_account()
# as acnt` where `Account.ledger: TransactionLedger`
# can be used to explicitly update and write the
# offline TOML files!
# ------ - ------
# MOL the init sequence is:
# - get `Account` (with presumed pre-loaded ledger done
# behind the scenes as part of ctx enter).
# - pull new trades from API, update the ledger with
# normalized to `Transaction` entries of those
# records, presumably (and implicitly) update the
# acnt state including expiries, positions,
# transfers..), and finally of course existing
# per-asset balances.
# - validate all pos and balances ensuring there's
# no seemingly noticeable discrepancies?
if not table.pps:
# NOTE: we can't use this since it first needs
# broker: str input support!
# table.update_from_trans(ledger.to_trans())
table.update_from_trans(ledger_trans)
table.write_config()
# LOAD and transaction-ify the EXISTING LEDGER
ledger_trans: dict[str, Transaction] = await norm_trade_records(
ledger,
client,
api_name_set='xname',
)
if not acnt.pps:
acnt.update_from_ledger(
ledger_trans,
symcache=ledger.symcache,
)
acnt.write_config()
# TODO: eventually probably only load
# as far back as it seems is not deliverd in the
# most recent 50 trades and assume that by ordering we
# already have those records in the ledger.
tids2trades = await client.get_trades()
# already have those records in the ledger?
tids2trades: dict[str, dict] = await client.get_trades()
ledger.update(tids2trades)
if tids2trades:
ledger.write_config()
api_trans = await norm_trade_records(tids2trades)
api_trans: dict[str, Transaction] = await norm_trade_records(
tids2trades,
client,
api_name_set='xname',
)
# retrieve kraken reported balances
# and do diff with ledger to determine
# what amount of trades-transactions need
# to be reloaded.
balances = await client.get_balances()
balances: dict[str, float] = await client.get_balances()
for dst, size in balances.items():
verify_balances(
acnt,
src_fiat,
balances,
client,
ledger,
ledger_trans,
api_trans,
)
# we don't care about tracking positions
# in the user's source fiat currency.
if (
dst == src_fiat
or not any(
dst in bs_mktid for bs_mktid in table.pps
)
):
log.warning(
f'Skipping balance `{dst}`:{size} for position calcs!'
)
continue
def has_pp(
dst: str,
size: float,
) -> Position | None:
src2dst: dict[str, str] = {}
for bs_mktid in table.pps:
likely_pair = get_likely_pair(
src_fiat,
dst,
bs_mktid,
)
if likely_pair:
src2dst[src_fiat] = dst
for src, dst in src2dst.items():
pair = f'{dst}{src_fiat}'
pp = table.pps.get(pair)
if (
pp
and math.isclose(pp.size, size)
):
return pp
elif (
size == 0
and pp.size
):
log.warning(
f'`kraken` account says you have a ZERO '
f'balance for {bs_mktid}:{pair}\n'
f'but piker seems to think `{pp.size}`\n'
'This is likely a discrepancy in piker '
'accounting if the above number is '
"large, though it's likely due to lack "
"of tracking xfer fees.."
)
return pp
return None # signal no entry
pos = has_pp(dst, size)
if not pos:
# we have a balance for which there is no pp
# entry? so we have to likely update from the
# ledger.
updated = table.update_from_trans(ledger_trans)
log.info(f'Updated pps from ledger:\n{pformat(updated)}')
pos = has_pp(dst, size)
if (
not pos
and not simulate_pp_update
):
# try reloading from API
table.update_from_trans(api_trans)
pos = has_pp(dst, size)
if not pos:
# get transfers to make sense of abs balances.
# NOTE: we do this after ledger and API
# loading since we might not have an entry
# in the ``pps.toml`` for the necessary pair
# yet and thus this likely pair grabber will
# likely fail.
for bs_mktid in table.pps:
likely_pair = get_likely_pair(
src_fiat,
dst,
bs_mktid,
)
if likely_pair:
break
else:
raise ValueError(
'Could not find a position pair in '
'ledger for likely withdrawal '
f'candidate: {dst}'
)
if likely_pair:
# this was likely pp that had a withdrawal
# from the dst asset out of the account.
xfer_trans = await client.get_xfers(
dst,
# TODO: not all src assets are
# 3 chars long...
src_asset=likely_pair[3:],
)
if xfer_trans:
updated = table.update_from_trans(
xfer_trans,
cost_scalar=1,
)
log.info(
f'Updated {dst} from transfers:\n'
f'{pformat(updated)}'
)
if has_pp(dst, size):
raise ValueError(
'Could not reproduce balance:\n'
f'dst: {dst}, {size}\n'
)
# only for simulate-testing a "new fill" since
# XXX NOTE: only for simulate-testing a "new fill" since
# otherwise we have to actually conduct a live clear.
if simulate_pp_update:
tid = list(tids2trades)[0]
@ -649,26 +562,28 @@ async def trades_dialogue(
# stage a first reqid of `0`
reqids2txids[0] = last_trade_dict['ordertxid']
ppmsgs = trades2pps(
table,
ppmsgs: list[BrokerdPosition] = trades2pps(
acnt,
ledger,
acctid,
)
# sync with EMS delivering pps and accounts
await ctx.started((ppmsgs, [acc_name]))
# TODO: ideally this blocks the this task
# as little as possible. we need to either do
# these writes in another actor, or try out `trio`'s
# async file IO api?
table.write_config()
acnt.write_config()
# Get websocket token for authenticated data stream
# Assert that a token was actually received.
resp = await client.endpoint('GetWebSocketsToken', {})
err = resp.get('error')
if err:
if err := resp.get('error'):
raise BrokerError(err)
token = resp['result']['token']
# resp token for ws init
token: str = resp['result']['token']
ws: NoBsWs
async with (
@ -697,32 +612,35 @@ async def trades_dialogue(
# enter relay loop
await handle_order_updates(
ws,
stream,
ems_stream,
apiflows,
ids,
reqids2txids,
table,
api_trans,
acctid,
acc_name,
token,
client=client,
ws=ws,
ws_stream=stream,
ems_stream=ems_stream,
apiflows=apiflows,
ids=ids,
reqids2txids=reqids2txids,
acnt=acnt,
ledger=ledger,
acctid=acctid,
acc_name=acc_name,
token=token,
)
async def handle_order_updates(
client: Client, # only for pairs table needed in ledger proc
ws: NoBsWs,
ws_stream: AsyncIterator,
ems_stream: tractor.MsgStream,
apiflows: dict[int, ChainMap[dict[str, dict]]],
apiflows: OrderDialogs,
ids: bidict[str, int],
reqids2txids: bidict[int, str],
table: PpTable,
acnt: Account,
# transaction records which will be updated
# on new trade clearing events (aka order "fills")
ledger_trans: dict[str, Transaction],
ledger: TransactionLedger,
# ledger_trans: dict[str, Transaction],
acctid: str,
acc_name: str,
token: str,
@ -740,7 +658,7 @@ async def handle_order_updates(
# TODO: turns out you get the fill events from the
# `openOrders` before you get this, so it might be better
# to do all fill/status/pp updates in that sub and just use
# to do all fill/status/pos updates in that sub and just use
# this one for ledger syncs?
# For eg. we could take the "last 50 trades" and do a diff
@ -782,7 +700,8 @@ async def handle_order_updates(
# if tid not in ledger_trans
}
for tid, trade in trades.items():
assert tid not in ledger_trans
# assert tid not in ledger_trans
assert tid not in ledger
txid = trade['ordertxid']
reqid = trade.get('userref')
@ -825,12 +744,22 @@ async def handle_order_updates(
)
await ems_stream.send(status_msg)
new_trans = await norm_trade_records(trades)
ppmsgs = trades2pps(
table,
acctid,
new_trans,
new_trans = await norm_trade_records(
trades,
client,
api_name_set='wsname',
)
ppmsgs: list[BrokerdPosition] = trades2pps(
acnt=acnt,
ledger=ledger,
acctid=acctid,
new_trans=new_trans,
)
# ppmsgs = trades2pps(
# acnt,
# acctid,
# new_trans,
# )
for pp_msg in ppmsgs:
await ems_stream.send(pp_msg)
@ -875,8 +804,9 @@ async def handle_order_updates(
# 'vol_exec': exec_vlm} # 0.0000
match update_msg:
# EMS-unknown LIVE order that needs to be
# delivered and loaded on the client-side.
# EMS-unknown pre-exising-submitted LIVE
# order that needs to be delivered and
# loaded on the client-side.
case {
'userref': reqid,
'descr': {
@ -895,7 +825,7 @@ async def handle_order_updates(
ids.inverse.get(reqid) is None
):
# parse out existing live order
fqsn = pair.replace('/', '').lower()
fqme = pair.replace('/', '').lower() + '.spot'
price = float(price)
size = float(vol)
@ -922,14 +852,14 @@ async def handle_order_updates(
action=action,
exec_mode='live',
oid=oid,
symbol=fqsn,
symbol=fqme,
account=acc_name,
price=price,
size=size,
),
src='kraken',
)
apiflows[reqid].maps.append(status_msg.to_dict())
apiflows.add_msg(reqid, status_msg.to_dict())
await ems_stream.send(status_msg)
continue
@ -1065,7 +995,7 @@ async def handle_order_updates(
),
)
apiflows[reqid].maps.append(update_msg)
apiflows.add_msg(reqid, update_msg)
await ems_stream.send(resp)
# fill msg.
@ -1144,9 +1074,8 @@ async def handle_order_updates(
)
continue
# update the msg chain
chain = apiflows[reqid]
chain.maps.append(event)
# update the msg history
apiflows.add_msg(reqid, event)
if status == 'error':
# any of ``{'add', 'edit', 'cancel'}``
@ -1156,11 +1085,16 @@ async def handle_order_updates(
f'Failed to {action} order {reqid}:\n'
f'{errmsg}'
)
symbol: str = 'N/A'
if chain := apiflows.get(reqid):
symbol: str = chain.get('symbol', 'N/A')
await ems_stream.send(BrokerdError(
oid=oid,
# XXX: use old reqid in case it changed?
reqid=reqid,
symbol=chain.get('symbol', 'N/A'),
symbol=symbol,
reason=f'Failed {action}:\n{errmsg}',
broker_details=event
@ -1185,36 +1119,3 @@ async def handle_order_updates(
})
case _:
log.warning(f'Unhandled trades update msg: {msg}')
async def norm_trade_records(
ledger: dict[str, Any],
) -> dict[str, Transaction]:
records: dict[str, Transaction] = {}
for tid, record in ledger.items():
size = float(record.get('vol')) * {
'buy': 1,
'sell': -1,
}[record['type']]
# we normalize to kraken's `altname` always..
bs_mktid = Client.normalize_symbol(record['pair'])
fqme = f'{bs_mktid}.kraken'
mkt: MktPair = (await get_mkt_info(fqme))[0]
records[tid] = Transaction(
fqsn=fqme,
sym=mkt,
tid=tid,
size=size,
price=float(record['price']),
cost=float(record['fee']),
dt=pendulum.from_timestamp(float(record['time'])),
bs_mktid=bs_mktid,
)
return records

View File

@ -24,43 +24,38 @@ from contextlib import (
)
from datetime import datetime
from typing import (
Any,
Optional,
AsyncGenerator,
Callable,
Optional,
)
import time
from fuzzywuzzy import process as fuzzy
import numpy as np
import pendulum
from trio_typing import TaskStatus
import tractor
import trio
from piker.accounting._mktinfo import (
Asset,
MktPair,
)
from piker._cacheables import (
from piker.brokers import (
open_cached_client,
async_lifo_cache,
)
from piker.brokers._util import (
BrokerError,
DataThrottle,
DataUnavailable,
)
from piker.data.types import Struct
from piker.types import Struct
from piker.data.validate import FeedInit
from piker.data._web_bs import open_autorecon_ws, NoBsWs
from . import log
from .api import (
Client,
Pair,
log,
)
from .symbols import get_mkt_info
class OHLC(Struct):
class OHLC(Struct, frozen=True):
'''
Description of the flattened OHLC quote format.
@ -71,6 +66,8 @@ class OHLC(Struct):
chan_id: int # internal kraken id
chan_name: str # eg. ohlc-1 (name-interval)
pair: str # fx pair
# unpacked from array
time: float # Begin time of interval, in seconds since epoch
etime: float # End time of interval, in seconds since epoch
open: float # Open price of interval
@ -80,8 +77,6 @@ class OHLC(Struct):
vwap: float # Volume weighted average price within interval
volume: float # Accumulated volume **within interval**
count: int # Number of trades within interval
# (sampled) generated tick data
ticks: list[Any] = []
async def stream_messages(
@ -145,14 +140,15 @@ async def process_data_feed_msgs(
pair
]:
if 'ohlc' in chan_name:
array: list = payload_array[0]
ohlc = OHLC(
chan_id,
chan_name,
pair,
*payload_array[0]
*map(float, array[:-1]),
count=array[-1],
)
ohlc.typecast()
yield 'ohlc', ohlc
yield 'ohlc', ohlc.copy()
elif 'spread' in chan_name:
@ -192,31 +188,27 @@ async def process_data_feed_msgs(
# yield msg
def normalize(
ohlc: OHLC,
def normalize(ohlc: OHLC) -> dict:
'''
Norm an `OHLC` msg to piker's minimal (live-)quote schema.
) -> dict:
'''
quote = ohlc.to_dict()
quote['broker_ts'] = quote['time']
quote['brokerd_ts'] = time.time()
quote['symbol'] = quote['pair'] = quote['pair'].replace('/', '')
quote['last'] = quote['close']
quote['bar_wap'] = ohlc.vwap
# seriously eh? what's with this non-symmetry everywhere
# in subscription systems...
# XXX: piker style is always lowercases symbols.
topic = quote['pair'].replace('/', '').lower()
# print(quote)
return topic, quote
return quote
@acm
async def open_history_client(
symbol: str,
mkt: MktPair,
) -> tuple[Callable, int]:
) -> AsyncGenerator[Callable, None]:
symbol: str = mkt.bs_mktid
# TODO implement history getter for the new storage layer.
async with open_cached_client('kraken') as client:
@ -266,44 +258,6 @@ async def open_history_client(
yield get_ohlc, {'erlangs': 1, 'rate': 1}
@async_lifo_cache()
async def get_mkt_info(
fqme: str,
) -> tuple[MktPair, Pair]:
'''
Query for and return a `MktPair` and backend-native `Pair` (or
wtv else) info.
If more than one fqme is provided return a ``dict`` of native
key-strs to `MktPair`s.
'''
async with open_cached_client('kraken') as client:
# uppercase since kraken bs_mktid is always upper
bs_fqme, _, broker = fqme.partition('.')
pair_str: str = bs_fqme.upper()
bs_mktid: str = Client.normalize_symbol(pair_str)
pair: Pair = await client.pair_info(pair_str)
assets = client.assets
dst_asset: Asset = assets[pair.base]
src_asset: Asset = assets[pair.quote]
mkt = MktPair(
dst=dst_asset,
src=src_asset,
price_tick=pair.price_tick,
size_tick=pair.size_tick,
bs_mktid=bs_mktid,
broker='kraken',
)
return mkt, pair
async def stream_quotes(
send_chan: trio.abc.SendChannel,
@ -334,12 +288,7 @@ async def stream_quotes(
for sym_str in symbols:
mkt, pair = await get_mkt_info(sym_str)
init_msgs.append(
FeedInit(
mkt_info=mkt,
shm_write_opts={
'sum_tick_vml': False,
},
)
FeedInit(mkt_info=mkt)
)
ws_pairs.append(pair.wsname)
@ -399,7 +348,6 @@ async def stream_quotes(
open_autorecon_ws(
'wss://ws.kraken.com/',
fixture=subscribe,
msg_recv_timeout=5,
reset_after=20,
) as ws,
@ -411,78 +359,57 @@ async def stream_quotes(
):
# pull a first quote and deliver
typ, ohlc_last = await anext(msg_gen)
topic, quote = normalize(ohlc_last)
quote = normalize(ohlc_last)
task_status.started((init_msgs, quote))
# lol, only "closes" when they're margin squeezing clients ;P
feed_is_live.set()
# keep start of last interval for volume tracking
last_interval_start = ohlc_last.etime
last_interval_start: float = ohlc_last.etime
# start streaming
async for typ, ohlc in msg_gen:
if typ == 'ohlc':
topic: str = mkt.bs_fqme
async for typ, quote in msg_gen:
match typ:
# TODO: can get rid of all this by using
# ``trades`` subscription...
# ``trades`` subscription..? Not sure why this
# wasn't used originally? (music queues) zoltannn..
# https://docs.kraken.com/websockets/#message-trade
case 'ohlc':
# generate tick values to match time & sales pane:
# https://trade.kraken.com/charts/KRAKEN:BTC-USD?period=1m
volume = quote.volume
# generate tick values to match time & sales pane:
# https://trade.kraken.com/charts/KRAKEN:BTC-USD?period=1m
volume = ohlc.volume
# new OHLC sample interval
if quote.etime > last_interval_start:
last_interval_start: float = quote.etime
tick_volume: float = volume
# new OHLC sample interval
if ohlc.etime > last_interval_start:
last_interval_start = ohlc.etime
tick_volume = volume
else:
# this is the tick volume *within the interval*
tick_volume: float = volume - ohlc_last.volume
else:
# this is the tick volume *within the interval*
tick_volume = volume - ohlc_last.volume
ohlc_last = quote
last = quote.close
ohlc_last = ohlc
last = ohlc.close
quote = normalize(quote)
ticks = quote.setdefault(
'ticks',
[],
)
if tick_volume:
ticks.append({
'type': 'trade',
'price': last,
'size': tick_volume,
})
if tick_volume:
ohlc.ticks.append({
'type': 'trade',
'price': last,
'size': tick_volume,
})
case 'l1':
# passthrough quote msg
pass
topic, quote = normalize(ohlc)
elif typ == 'l1':
quote = ohlc
topic = quote['symbol'].lower()
case _:
log.warning(f'Unknown WSS message: {typ}, {quote}')
await send_chan.send({topic: quote})
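A minimal numeric sketch of the interval-delta volume logic above (pure python, no feed required): the accumulated bar volume is differenced against the previous sample to recover the per-tick trade size, and the delta resets when a new bar interval starts.
def tick_volumes(samples: list[tuple[float, float]]) -> list[float]:
    '''
    samples: (interval_end_time, accumulated_volume) pairs as
    they would arrive from the ohlc subscription.
    '''
    out: list[float] = []
    last_end, last_vlm = samples[0]
    for end, vlm in samples[1:]:
        if end > last_end:
            # new bar -> the full accumulated volume is the "tick"
            out.append(vlm)
            last_end = end
        else:
            # same bar -> only the increment since the last sample
            out.append(vlm - last_vlm)
        last_vlm = vlm
    return out

# two updates inside one bar, then a fresh bar
assert tick_volumes(
    [(60.0, 1.0), (60.0, 1.5), (60.0, 2.5), (120.0, 0.4)]
) == [0.5, 1.0, 0.4]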
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> Client:
async with open_cached_client('kraken') as client:
# load all symbols locally for fast search
cache = await client.cache_symbols()
await ctx.started(cache)
async with ctx.open_stream() as stream:
async for pattern in stream:
matches = fuzzy.extractBests(
pattern,
cache,
score_cutoff=50,
)
# repack in dict form
await stream.send({
pair[0].altname: pair[0]
for pair in matches
})

View File

@ -0,0 +1,269 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Trade transaction accounting and normalization.
'''
import math
from pprint import pformat
from typing import (
Any,
)
import pendulum
from piker.accounting import (
Transaction,
Position,
Account,
get_likely_pair,
TransactionLedger,
# MktPair,
)
from piker.types import Struct
from piker.data import (
SymbologyCache,
)
from .api import (
log,
Client,
Pair,
)
# from .feed import get_mkt_info
def norm_trade(
tid: str,
record: dict[str, Any],
# this is the dict that was returned from
# `Client.get_mkt_pairs()` and when running offline ledger
# processing from `.accounting`, this will be the table loaded
# into `SymbologyCache.pairs`.
pairs: dict[str, Struct],
symcache: SymbologyCache | None = None,
) -> Transaction:
size: float = float(record.get('vol')) * {
'buy': 1,
'sell': -1,
}[record['type']]
# NOTE: this value may be either the websocket OR the rest schema
# so we need to detect the key format and then choose the
# correct symbol lookup table to eventually get a ``Pair``..
# See internals of `Client.asset_pairs()` for deats!
src_pair_key: str = record['pair']
# XXX: kraken's data engineering is soo bad they require THREE
# different pair schemas (more or less seemingly tied to
# transport-APIs)..LITERALLY they return different market id
# pairs in the ledger endpoints vs. the websocket event subs..
# lookup the pair using the appropriately provided table depending
# on API-key-schema..
pair: Pair = pairs[src_pair_key]
fqme: str = pair.bs_fqme.lower() + '.kraken'
return Transaction(
fqme=fqme,
tid=tid,
size=size,
price=float(record['price']),
cost=float(record['fee']),
dt=pendulum.from_timestamp(float(record['time'])),
bs_mktid=pair.bs_mktid,
)
async def norm_trade_records(
ledger: dict[str, Any],
client: Client,
api_name_set: str = 'xname',
) -> dict[str, Transaction]:
'''
Loop through an input ``dict`` of trade records
and convert them to ``Transactions``.
'''
records: dict[str, Transaction] = {}
for tid, record in ledger.items():
# manual_fqme: str = f'{bs_mktid.lower()}.kraken'
# mkt: MktPair = (await get_mkt_info(manual_fqme))[0]
# fqme: str = mkt.fqme
# assert fqme == manual_fqme
pairs: dict[str, Pair] = {
'xname': client._AssetPairs,
'wsname': client._wsnames,
'altname': client._altnames,
}[api_name_set]
records[tid] = norm_trade(
tid,
record,
pairs=pairs,
)
return records
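A minimal usage sketch, assuming an already-initialized `Client` with its pair tables loaded; the ledger entry, its field values and the pair key below are purely illustrative of the rest-schema ('xname') case:
async def _example_norm_kraken_ledger(client: Client) -> dict[str, Transaction]:
    # hypothetical rest-schema ledger entry; all values are made up.
    ledger: dict[str, dict] = {
        'TXID-1': {
            'pair': 'XXBTZUSD',   # rest-style key -> resolved via the 'xname' table
            'type': 'buy',
            'vol': '0.1',
            'price': '27000.0',
            'fee': '4.3',
            'time': '1685620800.0',
        },
    }
    return await norm_trade_records(
        ledger,
        client,
        api_name_set='xname',
    )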
def has_pp(
acnt: Account,
src_fiat: str,
dst: str,
size: float,
) -> Position | None:
src2dst: dict[str, str] = {}
for bs_mktid in acnt.pps:
likely_pair = get_likely_pair(
src_fiat,
dst,
bs_mktid,
)
if likely_pair:
src2dst[src_fiat] = dst
for src, dst in src2dst.items():
pair: str = f'{dst}{src_fiat}'
pos: Position = acnt.pps.get(pair)
if (
pos
and math.isclose(pos.size, size)
):
return pos
elif (
size == 0
and pos.size
):
log.warning(
f'`kraken` account says you have a ZERO '
f'balance for {bs_mktid}:{pair}\n'
f'but piker seems to think `{pos.size}`\n'
'This is likely a discrepancy in piker '
'accounting, if the above number is '
"large, though it's likely due to a lack "
'of tracking xfer fees..'
)
return pos
return None # indicate no entry found
# TODO: factor most of this "account updating from txns" into the
# the `Account` impl so has to provide for hiding the mostly
# cross-provider updates from txn sets
async def verify_balances(
acnt: Account,
src_fiat: str,
balances: dict[str, float],
client: Client,
ledger: TransactionLedger,
ledger_trans: dict[str, Transaction], # from toml
api_trans: dict[str, Transaction], # from API
simulate_pp_update: bool = False,
) -> None:
for dst, size in balances.items():
# we don't care about tracking positions
# in the user's source fiat currency.
if (
dst == src_fiat
or not any(
dst in bs_mktid for bs_mktid in acnt.pps
)
):
log.warning(
f'Skipping balance `{dst}`:{size} for position calcs!'
)
continue
# we have a balance for which there is no pos entry
# - we have to likely update from the ledger?
if not has_pp(acnt, src_fiat, dst, size):
updated = acnt.update_from_ledger(
ledger_trans,
symcache=ledger.symcache,
)
log.info(f'Updated pps from ledger:\n{pformat(updated)}')
# FIRST try reloading from API records
if (
not has_pp(acnt, src_fiat, dst, size)
and not simulate_pp_update
):
acnt.update_from_ledger(
api_trans,
symcache=ledger.symcache,
)
# get transfers to make sense of abs
# balances.
# NOTE: we do this after ledger and API
# loading since we might not have an
# entry in the
# ``account.kraken.spot.toml`` for the
# necessary pair yet and thus this
# likely pair grabber will likely fail.
if not has_pp(acnt, src_fiat, dst, size):
for bs_mktid in acnt.pps:
likely_pair: str | None = get_likely_pair(
src_fiat,
dst,
bs_mktid,
)
if likely_pair:
break
else:
raise ValueError(
'Could not find a position pair in '
'ledger for likely withdrawal '
f'candidate: {dst}'
)
# this was likely a pos that had a withdrawal
# from the dst asset out of the account.
if likely_pair:
xfer_trans = await client.get_xfers(
dst,
# TODO: not all src assets are
# 3 chars long...
src_asset=likely_pair[3:],
)
if xfer_trans:
updated = acnt.update_from_ledger(
xfer_trans,
cost_scalar=1,
symcache=ledger.symcache,
)
log.info(
f'Updated {dst} from transfers:\n'
f'{pformat(updated)}'
)
if has_pp(acnt, src_fiat, dst, size):
raise ValueError(
'Could not reproduce balance:\n'
f'dst: {dst}, {size}\n'
)

View File

@ -0,0 +1,206 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Symbology defs and search.
'''
from decimal import Decimal
import tractor
from rapidfuzz import process as fuzzy
from piker._cacheables import (
async_lifo_cache,
)
from piker.accounting._mktinfo import (
digits_to_dec,
)
from piker.brokers import (
open_cached_client,
SymbolNotFound,
)
from piker.types import Struct
from piker.accounting._mktinfo import (
Asset,
MktPair,
unpack_fqme,
)
# https://www.kraken.com/features/api#get-tradable-pairs
class Pair(Struct):
xname: str # idiotic bs_mktid equiv i guess?
altname: str # alternate pair name
wsname: str # WebSocket pair name (if available)
aclass_base: str # asset class of base component
base: str # asset id of base component
aclass_quote: str # asset class of quote component
quote: str # asset id of quote component
lot: str # volume lot size
cost_decimals: int
costmin: float
pair_decimals: int # scaling decimal places for pair
lot_decimals: int # scaling decimal places for volume
# amount to multiply lot volume by to get currency volume
lot_multiplier: float
# array of leverage amounts available when buying
leverage_buy: list[int]
# array of leverage amounts available when selling
leverage_sell: list[int]
# fee schedule array in [volume, percent fee] tuples
fees: list[tuple[int, float]]
# maker fee schedule array in [volume, percent fee] tuples (if on
# maker/taker)
fees_maker: list[tuple[int, float]]
fee_volume_currency: str # volume discount currency
margin_call: str # margin call level
margin_stop: str # stop-out/liquidation margin level
ordermin: float # minimum order volume for pair
tick_size: float # min price step size
status: str
short_position_limit: float = 0
long_position_limit: float = float('inf')
# TODO: should we make this a literal NamespacePath ref?
ns_path: str = 'piker.brokers.kraken:Pair'
@property
def bs_mktid(self) -> str:
'''
Kraken seems to index its market symbol sets in
transaction ledgers using the key returned from rest
queries.. so use that since apparently they can't
make up their minds on a better key set XD
'''
return self.xname
@property
def price_tick(self) -> Decimal:
return digits_to_dec(self.pair_decimals)
@property
def size_tick(self) -> Decimal:
return digits_to_dec(self.lot_decimals)
@property
def bs_dst_asset(self) -> str:
dst, _ = self.wsname.split('/')
return dst
@property
def bs_src_asset(self) -> str:
_, src = self.wsname.split('/')
return src
@property
def bs_fqme(self) -> str:
'''
Basically the `.altname` but with special '.' handling and
`.SPOT` suffix appending (for future multi-venue support).
'''
dst, src = self.wsname.split('/')
# XXX: omg for stupid shite like ETH2.S/ETH..
dst = dst.replace('.', '-')
return f'{dst}{src}.SPOT'
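A tiny worked example of the '.'-handling above, assuming a hypothetical `Pair` whose `wsname` is the staked-ETH style 'ETH2.S/ETH':
# given: pair.wsname == 'ETH2.S/ETH'
# then:  pair.bs_dst_asset  -> 'ETH2.S'
#        pair.bs_src_asset  -> 'ETH'
#        pair.bs_fqme       -> 'ETH2-SETH.SPOT'  # '.' swapped for '-'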
@tractor.context
async def open_symbol_search(ctx: tractor.Context) -> None:
async with open_cached_client('kraken') as client:
# load all symbols locally for fast search
cache = await client.get_mkt_pairs()
await ctx.started(cache)
async with ctx.open_stream() as stream:
async for pattern in stream:
await stream.send(
await client.search_symbols(pattern)
)
@async_lifo_cache()
async def get_mkt_info(
fqme: str,
) -> tuple[MktPair, Pair]:
'''
Query for and return a `MktPair` and backend-native `Pair` (or
wtv else) info.
If more than one fqme is provided return a ``dict`` of native
key-strs to `MktPair`s.
'''
venue: str = 'spot'
expiry: str = ''
if '.kraken' not in fqme:
fqme += '.kraken'
broker, pair, venue, expiry = unpack_fqme(fqme)
venue: str = venue or 'spot'
if venue.lower() != 'spot':
raise SymbolNotFound(
'kraken only supports spot markets right now!\n'
f'{fqme}\n'
)
async with open_cached_client('kraken') as client:
# uppercase since kraken bs_mktid is always upper
# bs_fqme, _, broker = fqme.partition('.')
# pair_str: str = bs_fqme.upper()
pair_str: str = f'{pair}.{venue}'
pair: Pair | None = client._pairs.get(pair_str.upper())
if not pair:
bs_fqme: str = client.to_bs_fqme(pair_str)
pair: Pair = client._pairs[bs_fqme]
if not (assets := client._assets):
assets: dict[str, Asset] = await client.get_assets()
dst_asset: Asset = assets[pair.bs_dst_asset]
src_asset: Asset = assets[pair.bs_src_asset]
mkt = MktPair(
dst=dst_asset,
src=src_asset,
price_tick=pair.price_tick,
size_tick=pair.size_tick,
bs_mktid=pair.bs_mktid,
expiry=expiry,
venue=venue or 'spot',
# TODO: futes
# _atype=_atype,
broker='kraken',
)
return mkt, pair
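A minimal call sketch; the fqme is an assumed kraken spot symbol and only illustrates the expected call shape and return pair:
async def _example_lookup() -> None:
    # hypothetical symbol; any '<pair>[.<venue>].kraken' spot fqme
    # should resolve the same way.
    mkt, pair = await get_mkt_info('xbtusd.kraken')
    assert mkt.venue == 'spot'
    assert mkt.bs_mktid == pair.bs_mktid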

File diff suppressed because it is too large

View File

@ -40,7 +40,8 @@ import wrapt
import asks
from ..calc import humanize, percent_change
from .._cacheables import open_cached_client, async_lifo_cache
from . import open_cached_client
from piker._cacheables import async_lifo_cache
from .. import config
from ._util import resproc, BrokerError, SymbolNotFound
from ..log import (

View File

@ -0,0 +1,49 @@
piker.clearing
______________
trade execution-n-control subsys for both live and paper trading as
well as algo-trading manual override/interaction across any backend
broker and data provider.
avail UIs
*********
order ctl
---------
the `piker.clearing` subsys is exposed mainly through
the `piker chart` GUI as a "chart trader" style UX and
is automatically enabled whenever a chart is opened.
.. ^TODO, more prose here!
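for example, booting any chart, eg. something like ``piker chart xbtusd.kraken`` (symbol purely illustrative, assuming a configured kraken backend), will also bring up the order-entry and search modes described below.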
the "manual" order control features are exposed via the
`piker.ui.order_mode` API and can pretty much always be
used (at least) in simulated-trading mode, aka "paper"-mode, and
the micro-manual is as follows:
``order_mode`` (
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
``search_mode`` (
``ctl-l`` or ``ctl-space`` to open,
``ctl-c`` or ``ctl-space`` to close
):
- begin typing to have symbol search automatically lookup
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation
position (pp) mgmt
------------------
you can also configure your position allocation limits from the
sidepane.
.. ^TODO, explain and provide tut once more refined!

View File

@ -23,11 +23,32 @@ from ._client import (
open_ems,
OrderClient,
)
from ._ems import (
open_brokerd_dialog,
)
from ._util import OrderDialogs
from ._messages import(
Order,
Status,
Cancel,
# TODO: deprecate these and replace end-2-end with
# client-side-dialog set above B)
# https://github.com/pikers/piker/issues/514
BrokerdPosition
)
__all__ = [
'FeeModel',
'open_ems',
'OrderClient',
'open_brokerd_dialog',
'OrderDialogs',
'Order',
'Status',
'Cancel',
'BrokerdPosition'
]

View File

@ -30,15 +30,13 @@ from tractor.trionics import broadcast_receiver
from ._util import (
log, # sub-sys logger
)
from ..accounting._mktinfo import unpack_fqme
from ..data.types import Struct
from piker.types import Struct
from ..service import maybe_open_emsd
from ._messages import (
Order,
Cancel,
BrokerdPosition,
)
from ..brokers import get_brokermod
if TYPE_CHECKING:
from ._messages import (
@ -133,6 +131,8 @@ class OrderClient(Struct):
f'Maybe there is a stale entry or line?\n'
f'You should report this as a bug!'
)
return
fqme = str(cmd.symbol)
return Cancel(
oid=uuid,
@ -166,8 +166,6 @@ class OrderClient(Struct):
)
_client: OrderClient = None
async def relay_orders_from_sync_code(
@ -218,8 +216,8 @@ async def open_ems(
loglevel: str = 'error',
) -> tuple[
OrderClient,
tractor.MsgStream,
OrderClient, # client
tractor.MsgStream, # order ctl stream
dict[
# brokername, acctid
tuple[str, str],
@ -238,6 +236,8 @@ async def open_ems(
broker control client-API.
'''
# TODO: prolly hand in the `MktPair` instance directly here as well!
from piker.accounting import unpack_fqme
broker, mktep, venue, suffix = unpack_fqme(fqme)
async with maybe_open_emsd(
@ -245,13 +245,6 @@ async def open_ems(
loglevel=loglevel,
) as portal:
mod = get_brokermod(broker)
if (
not getattr(mod, 'trades_dialogue', None)
or mode == 'paper'
):
mode = 'paper'
from ._ems import _emsd_main
async with (
# connect to emsd
@ -273,34 +266,31 @@ async def open_ems(
# open 2-way trade command stream
ctx.open_stream() as trades_stream,
):
# use any pre-existing actor singleton client.
global _client
if _client is None:
size = 100
tx, rx = trio.open_memory_channel(size)
brx = broadcast_receiver(rx, size)
size: int = 100 # what should this be?
tx, rx = trio.open_memory_channel(size)
brx = broadcast_receiver(rx, size)
# setup local ui event streaming channels for request/resp
# streaming with the EMS daemon
_client = OrderClient(
_ems_stream=trades_stream,
_to_relay_task=tx,
_from_sync_order_client=brx,
)
# setup local ui event streaming channels for request/resp
# streaming with the EMS daemon
client = OrderClient(
_ems_stream=trades_stream,
_to_relay_task=tx,
_from_sync_order_client=brx,
)
_client._ems_stream = trades_stream
client._ems_stream = trades_stream
# start sync code order msg delivery task
async with trio.open_nursery() as n:
n.start_soon(
relay_orders_from_sync_code,
_client,
client,
fqme,
trades_stream
)
yield (
_client,
client,
trades_stream,
positions,
accounts,

View File

@ -24,9 +24,10 @@ from collections import (
# ChainMap,
)
from contextlib import asynccontextmanager as acm
from decimal import Decimal
from math import isnan
from pprint import pformat
import time
from time import time_ns
from types import ModuleType
from typing import (
AsyncIterator,
@ -34,6 +35,7 @@ from typing import (
Callable,
Hashable,
Optional,
TYPE_CHECKING,
)
from bidict import bidict
@ -45,22 +47,17 @@ from ._util import (
log, # sub-sys logger
get_console_log,
)
from ..data._normalize import iterticks
from ..accounting._mktinfo import (
unpack_fqme,
float_digits,
)
from ..data.feed import (
Feed,
Flume,
maybe_open_feed,
dec_digits,
)
from piker.types import Struct
from ..ui._notify import notify_from_ems_status_msg
from ..data.types import Struct
from . import _paper_engine as paper
from ..data import iterticks
from ._messages import (
Order,
Status,
Error,
BrokerdCancel,
BrokerdOrder,
# BrokerdOrderAck,
@ -70,6 +67,12 @@ from ._messages import (
BrokerdPosition,
)
if TYPE_CHECKING:
from ..data import (
Feed,
Flume,
)
# TODO: numba all of this
def mk_check(
@ -129,11 +132,16 @@ class DarkBook(Struct):
triggers: dict[
str, # symbol
dict[
str, # uuid
str, # uuid for triggerable execution
tuple[
Callable[[float], bool], # predicate
str, # name
dict, # cmd / msg type
tuple[str, ...], # tickfilter
dict | Order, # cmd / msg type
# live submission constraint parameters
float, # percent_away max price diff
float, # abs_diff_away max price diff
int, # min_tick_digits to round the clearable price
]
]
] = {}
@ -176,7 +184,8 @@ async def clear_dark_triggers(
async for quotes in quote_stream:
# start = time.time()
for sym, quote in quotes.items():
execs = book.triggers.get(sym, {})
# TODO: make this a msg-compat struct
execs: tuple = book.triggers.get(sym, {})
for tick in iterticks(
quote,
# dark order price filter(s)
@ -199,7 +208,8 @@ async def clear_dark_triggers(
# TODO: send this msg instead?
cmd,
percent_away,
abs_diff_away
abs_diff_away,
price_tick_digits,
) in (
tuple(execs.items())
):
@ -232,8 +242,11 @@ async def clear_dark_triggers(
size=size,
):
bfqme: str = symbol.replace(f'.{broker}', '')
submit_price = price + abs_diff_away
resp = 'triggered' # hidden on client-side
submit_price: float = round(
price + abs_diff_away,
ndigits=price_tick_digits,
)
resp: str = 'triggered' # hidden on client-side
log.info(
f'Dark order triggered for price {price}\n'
@ -243,7 +256,7 @@ async def clear_dark_triggers(
action=action,
oid=oid,
account=account,
time_ns=time.time_ns(),
time_ns=time_ns(),
symbol=bfqme,
price=submit_price,
size=size,
@ -256,18 +269,18 @@ async def clear_dark_triggers(
# fallthrough logic
status = Status(
oid=oid, # ems dialog id
time_ns=time.time_ns(),
time_ns=time_ns(),
resp=resp,
req=cmd,
brokerd_msg=brokerd_msg,
)
# remove exec-condition from set
log.info(f'removing pred for {oid}')
pred = execs.pop(oid, None)
if not pred:
log.info(f'Removing trigger for {oid}')
trigger: tuple | None = execs.pop(oid, None)
if not trigger:
log.warning(
f'pred for {oid} was already removed!?'
f'trigger for {oid} was already removed!?'
)
# update actives
@ -307,15 +320,177 @@ class TradesRelay(Struct):
# map of symbols to dicts of accounts to pp msgs
positions: dict[
# brokername, acctid
# brokername, acctid ->
tuple[str, str],
list[BrokerdPosition],
# fqme -> msg
dict[str, BrokerdPosition],
]
# allowed account names
accounts: tuple[str]
@acm
async def open_brokerd_dialog(
brokermod: ModuleType,
portal: tractor.Portal,
exec_mode: str,
fqme: str | None = None,
loglevel: str | None = None,
) -> tuple[
tractor.MsgStream,
# {(brokername, accountname) -> {fqme -> msg}}
dict[(str, str), dict[str, BrokerdPosition]],
list[str],
]:
'''
Open either a live trades control dialog or a dialog with a new
paper engine instance depending on live trading support for the
broker backend, configuration, or client code usage.
'''
broker: str = brokermod.name
def mk_paper_ep():
from . import _paper_engine as paper_mod
nonlocal brokermod, exec_mode
# for logging purposes
brokermod = paper_mod
# for paper mode we need to mock this trades response feed
# so we load bidir stream to a new sub-actor running
# a paper-simulator clearing engine.
# load the paper trading engine
log.info(f'{broker}: Entering `paper` trading mode')
# load the paper trading engine as a subactor of this emsd
# actor to simulate the real IPC load it'll have when also
# pulling data from feeds
if not fqme:
log.warning(
f'Paper engine activated for {broker} but no fqme provided?'
)
return paper_mod.open_paperboi(
fqme=fqme,
broker=broker,
loglevel=loglevel,
)
# take the first supported ep we detect
# on the backend mod.
trades_endpoint: Callable
for ep_name in [
'open_trade_dialog', # probably final name?
'trades_dialogue', # legacy
]:
trades_endpoint = getattr(
brokermod,
ep_name,
None,
)
if trades_endpoint:
break
else:
log.warning(
f'No live trading EP found: {brokermod.name}?'
)
exec_mode: str = 'paper'
if (
trades_endpoint is not None
or exec_mode != 'paper'
):
# open live brokerd trades endpoint
open_trades_endpoint = portal.open_context(
trades_endpoint,
)
@acm
async def maybe_open_paper_ep():
if exec_mode == 'paper':
async with mk_paper_ep() as msg:
yield msg
return
# open trades-dialog endpoint with backend broker
async with open_trades_endpoint as msg:
ctx, first = msg
# runtime indication that the backend can't support live
# order ctrl yet, so boot the paperboi B0
if first == 'paper':
async with mk_paper_ep() as msg:
yield msg
return
else:
# working live ep case B)
yield msg
return
pps_by_broker_account: dict[(str, str), BrokerdPosition] = {}
async with (
maybe_open_paper_ep() as (
brokerd_ctx,
(position_msgs, accounts),
),
brokerd_ctx.open_stream() as brokerd_trades_stream,
):
# XXX: really we only want one stream per `emsd`
# actor to relay global `brokerd` order events
# unless we're going to expect each backend to
# relay only orders affiliated with a particular
# ``trades_dialogue()`` session (seems annoying
# for implementers). So, here we cache the relay
# task and instead of running multiple tasks
# (which will result in multiples of the same
# msg being relayed for each EMS client) we just
# register each client stream to this single
# relay loop in the dialog table.
# begin processing order events from the target
# brokerd backend by receiving order submission
# response messages, normalizing them to EMS
# messages and relaying back to the piker order
# client set.
# locally cache and track positions per account with
# a nested table of msgs:
# tuple(brokername, acctid) ->
# (fqme: str ->
# `BrokerdPosition`)
for msg in position_msgs:
msg = BrokerdPosition(**msg)
log.info(
f'loading pp for {brokermod.__name__}:\n'
f'{pformat(msg.to_dict())}',
)
# TODO: state any mismatch here?
account: str = msg.account
assert account in accounts
pps_by_broker_account.setdefault(
(broker, account),
{},
)[msg.symbol] = msg
# should be unique entries, verdad!
assert len(set(accounts)) == len(accounts)
yield (
brokerd_trades_stream,
pps_by_broker_account,
accounts,
)
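An illustrative call-site sketch mirroring how the `Router` below consumes this acm; the helper name, symbol and arg values are assumptions:
async def _example_dialog(
    brokermod: ModuleType,
    portal: tractor.Portal,
) -> None:
    async with open_brokerd_dialog(
        brokermod=brokermod,
        portal=portal,
        exec_mode='paper',      # force the paperboi path
        fqme='xbtusd.kraken',   # hypothetical symbol
        loglevel='info',
    ) as (
        brokerd_stream,
        pps_by_broker_account,
        accounts,
    ):
        # relay/processing tasks would be spawned here..
        assert 'paper' in accounts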
class Router(Struct):
'''
Order router which manages and tracks per-broker dark book,
@ -347,6 +522,7 @@ class Router(Struct):
] = defaultdict(set)
# TODO: mapping of ems dialog ids to msg flow history
# - use the new ._util.OrderDialogs?
# msgflows: defaultdict[
# str,
# ChainMap[dict[str, dict]],
@ -407,93 +583,25 @@ class Router(Struct):
yield relay
return
trades_endpoint = getattr(brokermod, 'trades_dialogue', None)
if (
trades_endpoint is None
or exec_mode == 'paper'
async with open_brokerd_dialog(
brokermod=brokermod,
portal=portal,
exec_mode=exec_mode,
fqme=fqme,
loglevel=loglevel,
) as (
brokerd_stream,
pp_msg_table,
accounts,
):
# for logging purposes
brokermod = paper
# for paper mode we need to mock this trades response feed
# so we load bidir stream to a new sub-actor running
# a paper-simulator clearing engine.
# load the paper trading engine
exec_mode = 'paper'
log.info(f'{broker}: Entering `paper` trading mode')
# load the paper trading engine as a subactor of this emsd
# actor to simulate the real IPC load it'll have when also
# pulling data from feeds
open_trades_endpoint = paper.open_paperboi(
fqme=fqme,
loglevel=loglevel,
)
else:
# open live brokerd trades endpoint
open_trades_endpoint = portal.open_context(
trades_endpoint,
loglevel=loglevel,
)
# open trades-dialog endpoint with backend broker
positions: list[BrokerdPosition]
accounts: tuple[str]
async with (
open_trades_endpoint as (
brokerd_ctx,
(positions, accounts,),
),
brokerd_ctx.open_stream() as brokerd_trades_stream,
):
# XXX: really we only want one stream per `emsd`
# actor to relay global `brokerd` order events
# unless we're going to expect each backend to
# relay only orders affiliated with a particular
# ``trades_dialogue()`` session (seems annoying
# for implementers). So, here we cache the relay
# task and instead of running multiple tasks
# (which will result in multiples of the same
# msg being relayed for each EMS client) we just
# register each client stream to this single
# relay loop in the dialog table.
# begin processing order events from the target
# brokerd backend by receiving order submission
# response messages, normalizing them to EMS
# messages and relaying back to the piker order
# client set.
# locally cache and track positions per account with
# a nested table of msgs:
# tuple(brokername, acctid) ->
# (fqme: str ->
# `BrokerdPosition`)
# create a new relay and sync it's state according
# to brokerd-backend reported position msgs.
relay = TradesRelay(
brokerd_stream=brokerd_trades_stream,
positions={},
accounts=accounts,
brokerd_stream=brokerd_stream,
positions=pp_msg_table,
accounts=tuple(accounts),
)
for msg in positions:
msg = BrokerdPosition(**msg)
log.info(
f'loading pp for {brokermod.__name__}:\n'
f'{pformat(msg.to_dict())}',
)
# TODO: state any mismatch here?
account = msg.account
assert account in accounts
relay.positions.setdefault(
(broker, account),
{},
)[msg.symbol] = msg
self.relays[broker] = relay
# this context should block here indefinitely until
@ -525,12 +633,17 @@ class Router(Struct):
indefinitely.
'''
from ..data.feed import maybe_open_feed
async with (
maybe_open_feed(
[fqme],
loglevel=loglevel,
) as feed,
):
# extract expanded fqme in case input was of a less
# qualified form, eg. xbteur.kraken -> xbteur.spot.kraken
fqme: str = list(feed.flumes.keys())[0]
brokername, _, _, _ = unpack_fqme(fqme)
brokermod = feed.mods[brokername]
broker = brokermod.name
@ -540,7 +653,7 @@ class Router(Struct):
flume = feed.flumes[fqme]
first_quote: dict = flume.first_quote
book: DarkBook = self.get_dark_book(broker)
book.lasts[fqme]: float = first_quote['last']
book.lasts[fqme]: float = float(first_quote['last'])
async with self.maybe_open_brokerd_dialog(
brokermod=brokermod,
@ -565,7 +678,7 @@ class Router(Struct):
client_ready = trio.Event()
task_status.started(
(relay, feed, client_ready)
(fqme, relay, feed, client_ready)
)
# sync to the client side by waiting for the stream
@ -714,8 +827,8 @@ async def translate_and_relay_brokerd_events(
# keep pps per account up to date locally in ``emsd`` mem
# sym, broker = pos_msg.symbol, pos_msg.broker
# NOTE: translate to a FQME!
relay.positions.setdefault(
# NOTE: translate to a FQSN!
(broker, pos_msg.account),
{}
)[pos_msg.symbol] = pos_msg
@ -771,7 +884,7 @@ async def translate_and_relay_brokerd_events(
BrokerdCancel(
oid=oid,
reqid=reqid,
time_ns=time.time_ns(),
time_ns=time_ns(),
account=status_msg.req.account,
)
)
@ -786,38 +899,75 @@ async def translate_and_relay_brokerd_events(
continue
# BrokerdError
# TODO: figure out how this will interact with EMS clients
# for ex. on an error do we react with a dark orders
# management response, like cancelling all dark orders?
# This looks like a supervision policy for pending orders on
# some unexpected failure - something we need to think more
# about. In most default situations, with composed orders
# (ex. brackets), most brokers seem to use a oca policy.
case {
'name': 'error',
'oid': oid, # ems order-dialog id
'reqid': reqid, # brokerd generated order-request id
}:
status_msg = book._active.get(oid)
if (
not oid
# try to lookup any order dialog by
# brokerd-side id..
and not (
oid := book._ems2brokerd_ids.inverse.get(reqid)
)
):
log.warning(
f'Rxed unusable error-msg:\n'
f'{brokerd_msg}'
)
continue
msg = BrokerdError(**brokerd_msg)
log.error(fmsg) # XXX make one when it's blank?
# TODO: figure out how this will interact with EMS clients
# for ex. on an error do we react with a dark orders
# management response, like cancelling all dark orders?
# This looks like a supervision policy for pending orders on
# some unexpected failure - something we need to think more
# about. In most default situations, with composed orders
# (ex. brackets), most brokers seem to use a oca policy.
# only relay to client side if we have an active
# ongoing dialog
if status_msg:
# NOTE: retrieve the last client-side response
# OR create an error when we have no last msg/dialog
# on record
status_msg: Status
if not (status_msg := book._active.get(oid)):
status_msg = Error(
time_ns=time_ns(),
oid=oid,
reqid=reqid,
brokerd_msg=msg,
)
else:
# only modify last status if we have an active
# ongoing dialog..
status_msg.resp = 'error'
status_msg.brokerd_msg = msg
book._active[oid] = status_msg
await router.client_broadcast(
status_msg.req.symbol,
status_msg,
book._active[oid] = status_msg
log.error(
'Translating brokerd error to status:\n'
f'{fmsg}'
f'{status_msg.to_dict()}'
)
if req := status_msg.req:
fqme: str = req.symbol
else:
bdmsg: Struct = status_msg.brokerd_msg
fqme: str = (
bdmsg.symbol # might be None
or
bdmsg.broker_details['flow']
# NOTE: what happens in the empty case in the
# broadcast below? is it a problem?
.get('symbol', '')
)
else:
log.error(f'Error for unknown order flow:\n{msg}')
continue
await router.client_broadcast(
fqme,
status_msg,
)
# BrokerdStatus
case {
@ -958,7 +1108,7 @@ async def translate_and_relay_brokerd_events(
status_msg.req = order
assert status_msg.src # source tag?
oid = str(status_msg.reqid)
oid: str = str(status_msg.reqid)
# attempt to avoid collisions
status_msg.reqid = oid
@ -975,38 +1125,28 @@ async def translate_and_relay_brokerd_events(
status_msg,
)
# don't fall through
continue
# brokerd error
case {
'name': 'status',
'status': 'error',
}:
log.error(f'Broker error:\n{fmsg}')
# XXX: we presume the brokerd cancels its own order
continue
# TOO FAST ``BrokerdStatus`` that arrives
# before the ``BrokerdAck``.
# NOTE XXX: sometimes there is a race with the backend (like
# `ib` where the pending status will be relayed *before*
# the ack msg, in which case we just ignore the faster
# pending msg and wait for our expected ack to arrive
# later (i.e. the first block below should enter).
case {
# XXX: sometimes there is a race with the backend (like
# `ib` where the pending stauts will be related before
# the ack, in which case we just ignore the faster
# pending msg and wait for our expected ack to arrive
# later (i.e. the first block below should enter).
'name': 'status',
'status': status,
'reqid': reqid,
}:
oid = book._ems2brokerd_ids.inverse.get(reqid)
msg = f'Unhandled broker status for dialog {reqid}:\n'
if oid:
status_msg = book._active.get(oid)
# status msg may not have been set yet or popped?
msg = (
f'Unhandled broker status for dialog {reqid}:\n'
f'{pformat(brokerd_msg)}'
)
if (
oid := book._ems2brokerd_ids.inverse.get(reqid)
):
# NOTE: have seen a key error here on kraken
# clearable limits..
if status_msg:
if status_msg := book._active.get(oid):
msg += (
f'last status msg: {pformat(status_msg)}\n\n'
f'this msg:{fmsg}\n'
@ -1102,7 +1242,7 @@ async def process_client_order_cmds(
BrokerdCancel(
oid=oid,
reqid=reqid,
time_ns=time.time_ns(),
time_ns=time_ns(),
account=order.account,
)
)
@ -1116,14 +1256,15 @@ async def process_client_order_cmds(
and status.resp == 'dark_open'
):
# remove from dark book clearing
entry = dark_book.triggers[fqme].pop(oid, None)
entry: tuple | None = dark_book.triggers[fqme].pop(oid, None)
if entry:
(
pred,
tickfilter,
cmd,
percent_away,
abs_diff_away
abs_diff_away,
min_tick_digits,
) = entry
# tell client side that we've cancelled the
@ -1176,7 +1317,7 @@ async def process_client_order_cmds(
msg = BrokerdOrder(
oid=oid, # no ib support for oids...
time_ns=time.time_ns(),
time_ns=time_ns(),
# if this is None, creates a new order
# otherwise will modify any existing one
@ -1194,7 +1335,7 @@ async def process_client_order_cmds(
oid=oid,
reqid=reqid,
resp='pending',
time_ns=time.time_ns(),
time_ns=time_ns(),
brokerd_msg=msg,
req=req,
)
@ -1258,33 +1399,36 @@ async def process_client_order_cmds(
# TODO: make this configurable from our top level
# config, prolly in a .clearing` section?
spread_slap: float = 5
min_tick = float(flume.symbol.size_tick)
min_tick_digits = float_digits(min_tick)
min_tick = Decimal(flume.mkt.price_tick)
min_tick_digits: int = dec_digits(min_tick)
tickfilter: tuple[str, ...]
percent_away: float
if action == 'buy':
tickfilter = ('ask', 'last', 'trade')
percent_away = 0.005
percent_away: float = 0.005
# TODO: we probably need to scale this based
# on some near term historical spread
# measure?
abs_diff_away = round(
abs_diff_away = float(round(
spread_slap * min_tick,
ndigits=min_tick_digits,
)
))
elif action == 'sell':
tickfilter = ('bid', 'last', 'trade')
percent_away = -0.005
abs_diff_away = round(
percent_away: float = -0.005
abs_diff_away: float = float(round(
-spread_slap * min_tick,
ndigits=min_tick_digits,
)
))
else: # alert
tickfilter = ('trade', 'utrade', 'last')
percent_away = 0
abs_diff_away = 0
percent_away: float = 0
abs_diff_away: float = 0
# submit execution/order to EMS scan loop
# NOTE: this may result in an override of an existing
@ -1296,7 +1440,8 @@ async def process_client_order_cmds(
tickfilter,
req,
percent_away,
abs_diff_away
abs_diff_away,
min_tick_digits,
)
resp = 'dark_open'
@ -1307,7 +1452,7 @@ async def process_client_order_cmds(
status = Status(
resp=resp,
oid=oid,
time_ns=time.time_ns(),
time_ns=time_ns(),
req=req,
src='dark',
)
@ -1353,13 +1498,13 @@ async def maybe_open_trade_relays(
loglevel: str = 'info',
):
relay, feed, client_ready = await _router.nursery.start(
fqme, relay, feed, client_ready = await _router.nursery.start(
_router.open_trade_relays,
fqme,
exec_mode,
loglevel,
)
yield relay, feed, client_ready
yield fqme, relay, feed, client_ready
async with tractor.trionics.maybe_open_context(
acm_func=cached_mngr,
@ -1372,9 +1517,13 @@ async def maybe_open_trade_relays(
key=cache_on_fqme_unless_paper,
) as (
cache_hit,
(relay, feed, client_ready)
(fqme, relay, feed, client_ready)
):
yield relay, feed, client_ready
if cache_hit:
log.info(f'Reusing existing trades relay for {fqme}:\n'
f'{relay}\n')
yield fqme, relay, feed, client_ready
@tractor.context
@ -1408,30 +1557,34 @@ async def _emsd_main(
received in a stream from that client actor and then responses are
streamed back up to the original calling task in the same client.
The primary ``emsd`` task trees are:
The primary ``emsd`` task tree is:
- ``_setup_persistent_emsd()``:
is the ``emsd`` actor's primary root task which sets up an
actor-global ``Router`` instance and starts a relay loop task
which lives until the backend broker is shutdown or the ems is
terminated.
|
- (maybe) ``translate_and_relay_brokerd_events()``:
accept normalized trades responses from brokerd, process and
relay to ems client(s); this is effectively a "trade event
response" proxy-broker.
- ``_emsd_main()``:
attaches a brokerd real-time quote feed and trades dialogue with
brokerd trading api for every connecting client.
|
- ``clear_dark_triggers()``:
run (dark order) conditions on inputs and trigger brokerd "live"
order submissions.
|
- ``process_client_order_cmds()``:
accepts order cmds from requesting clients, registers dark orders and
alerts with clearing loop.
is the ``emsd`` actor's primary *service-fixture* task which
is opened by the `pikerd` service manager and sets up
a process-global (actor-local) ``Router`` instance and opens
a service nursery which lives until the backend broker is
shutdown or the ems is terminated; all tasks are
*dynamically* started (and persisted) within this service
nursery when the below endpoint context is opened:
|
- ``_emsd_main()``:
attaches a real-time quote feed and trades dialogue with
a `brokerd` actor which connects to the backend broker's
trading api for every connecting client.
|
- ``clear_dark_triggers()``:
run (dark order) conditions on inputs and trigger brokerd
"live" order submissions.
|
- ``process_client_order_cmds()``:
accepts order cmds from requesting clients, registers
dark orders and alerts with above (dark) clearing loop.
|
- (maybe) ``translate_and_relay_brokerd_events()``:
accept normalized trades responses from brokerd, process and
relay to ems client(s); this is effectively a "trade event
response" proxy-broker.
'''
global _router
@ -1439,9 +1592,9 @@ async def _emsd_main(
broker, _, _, _ = unpack_fqme(fqme)
# TODO: would be nice if in tractor we can require either a ctx arg,
# or a named arg with ctx in it and a type annotation of
# tractor.Context instead of strictly requiring a ctx arg.
# TODO: would be nice if in tractor we can require either a ctx
# arg, or a named arg with ctx in it and a type annotation of
# `tractor.Context` instead of strictly requiring a ctx arg.
ems_ctx = ctx
# spawn one task per broker feed
@ -1457,7 +1610,7 @@ async def _emsd_main(
fqme,
exec_mode,
loglevel,
) as (relay, feed, client_ready):
) as (fqme, relay, feed, client_ready):
brokerd_stream = relay.brokerd_stream
dark_book = _router.get_dark_book(broker)

View File

@ -18,40 +18,14 @@
Clearing sub-system message and protocols.
"""
# from collections import (
# ChainMap,
# deque,
# )
from __future__ import annotations
from typing import (
Optional,
Literal,
)
from msgspec import field
from ..data.types import Struct
# TODO: a composite for tracking msg flow on 2-legged
# dialogs.
# class Dialog(ChainMap):
# '''
# Msg collection abstraction to easily track the state changes of
# a msg flow in one high level, query-able and immutable construct.
# The main use case is to query data from a (long-running)
# msg-transaction-sequence
# '''
# def update(
# self,
# msg,
# ) -> None:
# self.maps.insert(0, msg.to_dict())
# def flatten(self) -> dict:
# return dict(self)
from piker.types import Struct
# TODO: ``msgspec`` stuff worth paying attention to:
@ -140,7 +114,7 @@ class Status(Struct):
# this maps normally to the ``BrokerdOrder.reqid`` below, an id
# normally allocated internally by the backend broker routing system
reqid: Optional[int | str] = None
reqid: int | str | None = None
# the (last) source order/request msg if provided
# (eg. the Order/Cancel which causes this msg) and
@ -153,7 +127,7 @@ class Status(Struct):
# event that wasn't originated by piker's emsd (eg. some external
# trading system which does it's own order control but that you
# might want to "track" using piker UIs/systems).
src: Optional[str] = None
src: str | None = None
# set when a cancel request msg was set for this order flow dialog
# but the brokerd dialog isn't yet in a cancelled state.
@ -164,6 +138,18 @@ class Status(Struct):
brokerd_msg: dict = {}
class Error(Status):
resp: str = 'error'
# TODO: allow re-wrapping from existing (last) status?
@classmethod
def from_status(
cls,
msg: Status,
) -> Error:
...
# ---------------
# emsd -> brokerd
# ---------------
@ -181,7 +167,7 @@ class BrokerdCancel(Struct):
# for setting a unique order id then this value will be relayed back
# on the emsd order request stream as the ``BrokerdOrderAck.reqid``
# field
reqid: Optional[int | str] = None
reqid: int | str | None = None
action: str = 'cancel'
@ -191,7 +177,7 @@ class BrokerdOrder(Struct):
account: str
time_ns: int
symbol: str # fqsn
symbol: str # fqme
price: float
size: float
@ -205,7 +191,7 @@ class BrokerdOrder(Struct):
# for setting a unique order id then this value will be relayed back
# on the emsd order request stream as the ``BrokerdOrderAck.reqid``
# field
reqid: Optional[int | str] = None
reqid: int | str | None = None
# ---------------
@ -227,24 +213,27 @@ class BrokerdOrderAck(Struct):
# emsd id originally sent in matching request msg
oid: str
# TODO: do we need this?
account: str = ''
name: str = 'ack'
class BrokerdStatus(Struct):
reqid: int | str
time_ns: int
reqid: int | str
status: Literal[
'open',
'canceled',
'fill',
'pending',
'error',
# 'error', # NOTE: use `BrokerdError`
'closed',
]
account: str
name: str = 'status'
oid: str = ''
# TODO: do we need this?
account: str | None = None
filled: float = 0.0
reason: str = ''
remaining: float = 0.0
@ -259,24 +248,24 @@ class BrokerdStatus(Struct):
class BrokerdFill(Struct):
'''
A single message indicating a "fill-details" event from the broker
if available.
A single message indicating a "fill-details" event from the
broker if available.
'''
# brokerd timestamp required for order mode arrow placement on x-axis
# TODO: maybe int if we force ns?
# we need to normalize this somehow since backends will use their
# own format and likely across many disparate epoch clocks...
time_ns: int
broker_time: float
reqid: int | str
time_ns: int
# order execution related
size: float
price: float
name: str = 'fill'
action: Optional[str] = None
action: str | None = None
broker_details: dict = {} # meta-data (eg. commissions etc.)
@ -287,18 +276,22 @@ class BrokerdError(Struct):
This is still a TODO thing since we're not sure how to employ it yet.
'''
oid: str
symbol: str
reason: str
# TODO: drop this right?
symbol: str | None = None
oid: str | None = None
# if no brokerd order request was actually submitted (eg. we errored
# at the ``pikerd`` layer) then there will be ``reqid`` allocated.
reqid: Optional[int | str] = None
reqid: str | None = None
name: str = 'error'
broker_details: dict = {}
# TODO: yeah, so we REALLY need to completely deprecate
# this and use the `.accounting.Position` msg-type instead..
class BrokerdPosition(Struct):
'''
Position update event from brokerd.

View File

@ -26,10 +26,12 @@ from contextlib import asynccontextmanager as acm
from datetime import datetime
from operator import itemgetter
import itertools
from pprint import pformat
import time
from typing import (
Callable,
)
from types import ModuleType
import uuid
from bidict import bidict
@ -37,24 +39,29 @@ import pendulum
import trio
import tractor
from ..brokers import get_brokermod
from .. import data
from ..data.types import Struct
from ..accounting._mktinfo import (
from piker.brokers import get_brokermod
from piker.service import find_service
from piker.accounting import (
Account,
MktPair,
)
from ..accounting import (
Position,
PpTable,
Transaction,
TransactionLedger,
open_account,
open_trade_ledger,
open_pps,
unpack_fqme,
)
from ..data._normalize import iterticks
from ..accounting._mktinfo import unpack_fqme
from piker.data import (
Feed,
SymbologyCache,
iterticks,
open_feed,
open_symcache,
)
from piker.types import Struct
from ._util import (
log, # sub-sys logger
get_console_log,
)
from ._messages import (
BrokerdCancel,
@ -76,11 +83,10 @@ class PaperBoi(Struct):
'''
broker: str
ems_trades_stream: tractor.MsgStream
ppt: PpTable
acnt: Account
ledger: TransactionLedger
fees: Callable
# map of paper "live" orders which be used
# to simulate fills based on paper engine settings
@ -124,7 +130,7 @@ class PaperBoi(Struct):
# for dark orders since we want the dark_executed
# to trigger first thus creating a lookup entry
# in the broker trades event processing loop
await trio.sleep(0.05)
await trio.sleep(0.01)
if (
action == 'sell'
@ -191,7 +197,7 @@ class PaperBoi(Struct):
self._sells[symbol].pop(oid, None)
# TODO: net latency model
await trio.sleep(0.05)
await trio.sleep(0.01)
msg = BrokerdStatus(
status='canceled',
@ -224,7 +230,7 @@ class PaperBoi(Struct):
'''
# TODO: net latency model
await trio.sleep(0.05)
await trio.sleep(0.01)
fill_time_ns = time.time_ns()
fill_time_s = time.time()
@ -262,29 +268,42 @@ class PaperBoi(Struct):
# we don't actually have any unique backend symbol ourselves
# other than this thing, our fqme address.
bs_mktid: str = fqme
if fees := self.fees:
cost: float = fees(price, size)
else:
cost: float = 0
t = Transaction(
fqsn=fqme,
sym=self._mkts[fqme],
fqme=fqme,
tid=oid,
size=size,
price=price,
cost=0, # TODO: cost model
cost=cost,
dt=pendulum.from_timestamp(fill_time_s),
bs_mktid=bs_mktid,
)
# update in-mem ledger and pos table
self.ledger.update_from_t(t)
self.ppt.update_from_trans({oid: t})
self.acnt.update_from_ledger(
{oid: t},
symcache=self.ledger._symcache,
# XXX when a backend has no symcache support yet we can
# simply pass in the gmi() retreived table created
# during init :o
_mktmap_table=self._mkts,
)
# transmit pp msg to ems
pp = self.ppt.pps[bs_mktid]
pp: Position = self.acnt.pps[bs_mktid]
pp_msg = BrokerdPosition(
broker=self.broker,
account='paper',
symbol=fqme,
size=pp.size,
size=pp.cumsize,
avg_price=pp.ppu,
# TODO: we need to look up the asset currency from
@ -295,7 +314,7 @@ class PaperBoi(Struct):
# write all updates to filesys immediately
# (adds latency but that works for simulation anyway)
self.ledger.write_config()
self.ppt.write_config()
self.acnt.write_config()
await self.ems_trades_stream.send(pp_msg)
@ -324,6 +343,7 @@ async def simulate_fills(
# this stream may eventually contain multiple symbols
async for quotes in quote_stream:
for sym, quote in quotes.items():
# print(sym)
for tick in iterticks(
quote,
# dark order price filter(s)
@ -527,7 +547,7 @@ _sells: defaultdict[
@tractor.context
async def trades_dialogue(
async def open_trade_dialog(
ctx: tractor.Context,
broker: str,
@ -536,141 +556,188 @@ async def trades_dialogue(
) -> None:
tractor.log.get_console_log(loglevel)
# enable piker.clearing console log for *this* subactor
get_console_log(loglevel)
ppt: PpTable
ledger: TransactionLedger
with (
open_pps(
broker,
'paper',
write_on_exit=True,
) as ppt,
symcache: SymbologyCache
async with open_symcache(get_brokermod(broker)) as symcache:
open_trade_ledger(
broker,
'paper',
) as ledger
):
# NOTE: retrieve market(pair) info from the backend broker
# since ledger entries (in their backend native format) often
# don't contain necessary market info per trade record entry..
# - if no fqme was passed in, we presume we're running in
# "ledger-sync-only mode" and thus we load mkt info for
# each symbol found in the ledger to a ppt table manually.
acnt: Account
ledger: TransactionLedger
with (
# TODO: how to process ledger info from backends?
# - should we be rolling our own actor-cached version of these
# client API refs or using portal IPC to send requests to the
# existing brokerd daemon?
# - alternatively we can possibly expect and use
# a `.broker.norm_trade_records()` ep?
brokermod = get_brokermod(broker)
gmi = getattr(brokermod, 'get_mkt_info', None)
# TODO: probably do the symcache and ledger loading
# implicitly behind this? Deliver an account, and ledger
# pair or make the ledger an attr of the account?
open_account(
broker,
'paper',
write_on_exit=True,
) as acnt,
# update all transactions with mkt info before
# loading any pps
mkt_by_fqme: dict[str, MktPair] = {}
if fqme:
mkt, _ = await brokermod.get_mkt_info(fqme.rstrip(f'.{broker}'))
mkt_by_fqme[fqme] = mkt
open_trade_ledger(
broker,
'paper',
symcache=symcache,
) as ledger
):
# NOTE: WE MUST retrieve market(pair) info from each
# backend broker since ledger entries (in their
# provider-native format) often don't contain necessary
# market info per trade record entry..
# FURTHER, if no fqme was passed in, we presume we're
# running in "ledger-sync-only mode" and thus we load
# mkt info for each symbol found in the ledger to
# an acnt table manually.
# for each sym in the ledger load it's `MktPair` info
for tid, tdict in ledger.data.items():
# TODO: switch this to fqme
l_fqme = tdict['fqsn']
# TODO: how to process ledger info from backends?
# - should we be rolling our own actor-cached version of these
# client API refs or using portal IPC to send requests to the
# existing brokerd daemon?
# - alternatively we can possibly expect and use
# a `.broker.ledger.norm_trade()` ep?
brokermod: ModuleType = get_brokermod(broker)
gmi: Callable = getattr(brokermod, 'get_mkt_info', None)
# update all transactions with mkt info before
# loading any pps
mkt_by_fqme: dict[str, MktPair] = {}
if (
gmi
and l_fqme not in mkt_by_fqme
fqme
and fqme not in symcache.mktmaps
):
mkt, pair = await brokermod.get_mkt_info(
l_fqme.rstrip(f'.{broker}'),
log.warning(
f'Symcache for {broker} has no `{fqme}` entry?\n'
'Manually requesting mkt map data via `.get_mkt_info()`..'
)
mkt_by_fqme[l_fqme] = mkt
# if an ``fqme: str`` input was provided we only
# need a ``MktPair`` for that one market, since we're
# running in real simulated-clearing mode, not just ledger
# syncing.
if (
fqme is not None
and fqme in mkt_by_fqme
):
break
bs_fqme, _, broker = fqme.rpartition('.')
mkt, pair = await gmi(bs_fqme)
mkt_by_fqme[mkt.fqme] = mkt
# update pos table from ledger history and provide a ``MktPair``
# lookup for internal position accounting calcs.
ppt.update_from_trans(ledger.to_trans(mkt_by_fqme=mkt_by_fqme))
# for each sym in the ledger load its `MktPair` info
for tid, txdict in ledger.data.items():
l_fqme: str = txdict.get('fqme') or txdict['fqsn']
pp_msgs: list[BrokerdPosition] = []
pos: Position
token: str # f'{symbol}.{self.broker}'
for token, pos in ppt.pps.items():
pp_msgs.append(BrokerdPosition(
broker=broker,
account='paper',
symbol=pos.symbol.fqme,
size=pos.size,
avg_price=pos.ppu,
if (
gmi
and l_fqme not in symcache.mktmaps
and l_fqme not in mkt_by_fqme
):
log.warning(
f'Symcache for {broker} has no `{l_fqme}` entry?\n'
'Manually requesting mkt map data via `.get_mkt_info()`..'
)
mkt, pair = await gmi(
l_fqme.rstrip(f'.{broker}'),
)
mkt_by_fqme[l_fqme] = mkt
# if an ``fqme: str`` input was provided we only
# need a ``MktPair`` for that one market, since we're
# running in real simulated-clearing mode, not just ledger
# syncing.
if (
fqme is not None
and fqme in mkt_by_fqme
):
break
# update pos table from ledger history and provide a ``MktPair``
# lookup for internal position accounting calcs.
acnt.update_from_ledger(
ledger,
# NOTE: if the symcache fails on fqme lookup
# (either sycache not yet supported or not filled
# in) use manually constructed table from calling
# the `.get_mkt_info()` provider EP above.
_mktmap_table=mkt_by_fqme,
)
pp_msgs: list[BrokerdPosition] = []
pos: Position
token: str # f'{symbol}.{self.broker}'
for token, pos in acnt.pps.items():
pp_msgs.append(BrokerdPosition(
broker=broker,
account='paper',
symbol=pos.mkt.fqme,
size=pos.cumsize,
avg_price=pos.ppu,
))
await ctx.started((
pp_msgs,
['paper'],
))
await ctx.started((
pp_msgs,
['paper'],
))
# write new positions state in case ledger was
# newer than that tracked in pps.toml
acnt.write_config()
# write new positions state in case ledger was
# newer than that tracked in pps.toml
ppt.write_config()
# exit early since no fqme was passed,
# normally this case is just to load
# positions "offline".
if fqme is None:
log.warning(
'Paper engine only running in position delivery mode!\n'
'NO SIMULATED CLEARING LOOP IS ACTIVE!'
)
await trio.sleep_forever()
return
async with (
data.open_feed(
[fqme],
loglevel=loglevel,
) as feed,
):
# sanity check all the mkt infos
for fqme, flume in feed.flumes.items():
assert mkt_by_fqme[fqme] == flume.mkt
# exit early since no fqme was passed,
# normally this case is just to load
# positions "offline".
if fqme is None:
log.warning(
'Paper engine only running in position delivery mode!\n'
'NO SIMULATED CLEARING LOOP IS ACTIVE!'
)
await trio.sleep_forever()
return
feed: Feed
async with (
ctx.open_stream() as ems_stream,
trio.open_nursery() as n,
open_feed(
[fqme],
loglevel=loglevel,
) as feed,
):
client = PaperBoi(
broker=broker,
ems_trades_stream=ems_stream,
ppt=ppt,
ledger=ledger,
_buys=_buys,
_sells=_sells,
_reqids=_reqids,
_mkts=mkt_by_fqme,
# sanity check all the mkt infos
for fqme, flume in feed.flumes.items():
mkt: MktPair = symcache.mktmaps.get(fqme) or mkt_by_fqme[fqme]
if mkt != flume.mkt:
diff: tuple = mkt - flume.mkt
log.warning(
'MktPair sig mismatch?\n'
f'{pformat(diff)}'
)
get_cost: Callable = getattr(
brokermod,
'get_cost',
None,
)
n.start_soon(
handle_order_requests,
client,
ems_stream,
)
async with (
ctx.open_stream() as ems_stream,
trio.open_nursery() as n,
):
client = PaperBoi(
broker=broker,
ems_trades_stream=ems_stream,
acnt=acnt,
ledger=ledger,
fees=get_cost,
# paper engine simulator clearing task
await simulate_fills(feed.streams[broker], client)
_buys=_buys,
_sells=_sells,
_reqids=_reqids,
_mkts=mkt_by_fqme,
)
n.start_soon(
handle_order_requests,
client,
ems_stream,
)
# paper engine simulator clearing task
await simulate_fills(feed.streams[broker], client)
@acm
@ -694,22 +761,22 @@ async def open_paperboi(
service_name = f'paperboi.{broker}'
async with (
tractor.find_actor(service_name) as portal,
tractor.open_nursery() as tn,
find_service(service_name) as portal,
tractor.open_nursery() as an,
):
# NOTE: only spawn if no paperboi already is up since we likely
# don't need more than one actor for simulated order clearing
# per broker-backend.
if portal is None:
log.info('Starting new paper-engine actor')
portal = await tn.start_actor(
portal = await an.start_actor(
service_name,
enable_modules=[__name__]
)
we_spawned = True
async with portal.open_context(
trades_dialogue,
open_trade_dialog,
broker=broker,
fqme=fqme,
loglevel=loglevel,
@ -717,7 +784,59 @@ async def open_paperboi(
) as (ctx, first):
yield ctx, first
# tear down connection and any spawned actor on exit
# ALWAYS tear down connection AND any newly spawned
# paperboi actor on exit!
await ctx.cancel()
if we_spawned:
await portal.cancel_actor()
def norm_trade(
tid: str,
txdict: dict,
pairs: dict[str, Struct],
symcache: SymbologyCache | None = None,
brokermod: ModuleType | None = None,
) -> Transaction:
from pendulum import (
DateTime,
parse,
)
# special field handling for datetimes
# to ensure pendulum is used!
dt: DateTime = parse(txdict['dt'])
expiry: str | None = txdict.get('expiry')
fqme: str = txdict.get('fqme') or txdict.pop('fqsn')
price: float = txdict['price']
size: float = txdict['size']
cost: float = txdict.get('cost', 0)
if (
brokermod
and (get_cost := getattr(
brokermod,
'get_cost',
False,
))
):
cost = get_cost(
price,
size,
is_taker=True,
)
return Transaction(
fqme=fqme,
tid=txdict['tid'],
dt=dt,
price=price,
size=size,
cost=cost,
bs_mktid=txdict['bs_mktid'],
expiry=parse(expiry) if expiry else None,
etype='clear',
)
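A quick sketch of feeding one toml-decoded paper-ledger entry through this `norm_trade()`; every field value is made up for illustration:
_txdict: dict = {
    'tid': 'paper-TID-1',
    'dt': '2023-06-01T12:00:00+00:00',
    'fqme': 'xbtusd.spot.kraken',
    'price': 27_000.0,
    'size': 0.1,
    'cost': 0.0,
    'bs_mktid': 'xbtusd.spot.kraken',
}
_tx: Transaction = norm_trade(
    tid=_txdict['tid'],
    txdict=_txdict,
    pairs={},  # unused by this paper-engine variant
)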

View File

@ -17,12 +17,15 @@
Sub-sys module commons.
"""
from collections import ChainMap
from functools import partial
from typing import Any
from ..log import (
get_logger,
get_console_log,
)
from piker.types import Struct
subsys: str = 'piker.clearing'
log = get_logger(subsys)
@ -31,3 +34,60 @@ get_console_log = partial(
get_console_log,
name=subsys,
)
class OrderDialogs(Struct):
'''
Order control dialog (and thus transaction) tracking via
message recording.
Allows easily recording messages associated with a given set of
order control transactions and looking up the latest field
state using the entire (reverse chronological) msg flow.
'''
_flows: dict[str, ChainMap] = {}
def add_msg(
self,
oid: str,
msg: dict,
) -> None:
# NOTE: manually enter a new map on the first msg add to
# avoid creating one with an empty dict first entry in
# `ChainMap.maps` which is the default if none passed at
# init.
cm: ChainMap = self._flows.get(oid)
if cm:
cm.maps.insert(0, msg)
else:
cm = ChainMap(msg)
self._flows[oid] = cm
# TODO: wrap all this in the `collections.abc.Mapping` interface?
def get(
self,
oid: str,
) -> ChainMap[str, Any]:
'''
Return the dialog `ChainMap` for provided id.
'''
return self._flows.get(oid, None)
def pop(
self,
oid: str,
) -> ChainMap[str, Any]:
'''
Pop and thus remove the `ChainMap` containing the msg flow
for the given order id.
'''
if (flow := self._flows.pop(oid, None)) is None:
log.warning(f'No flow found for oid: {oid}')
return flow
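A minimal usage sketch showing the reverse-chronological field lookup described in the docstring; the oid and msg fields are purely illustrative:
dialogs = OrderDialogs()
dialogs.add_msg('oid-123', {'resp': 'pending', 'price': 100.0})
dialogs.add_msg('oid-123', {'resp': 'open'})

flow = dialogs.get('oid-123')
assert flow['resp'] == 'open'    # latest msg wins..
assert flow['price'] == 100.0    # ..older fields still resolve
dialogs.pop('oid-123')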

View File

@ -1,28 +1,33 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# Copyright (C) 2018-present Tyler Goodlet
# (in stewardship for pikers, everywhere.)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is free software: you can redistribute it and/or
# modify it under the terms of the GNU Affero General Public
# License as published by the Free Software Foundation, either
# version 3 of the License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# You should have received a copy of the GNU Affero General Public
# License along with this program. If not, see
# <https://www.gnu.org/licenses/>.
'''
CLI commons.
'''
import os
# from contextlib import AsyncExitStack
from types import ModuleType
import click
import trio
import tractor
from tractor._multiaddr import parse_maddr
from ..log import (
get_console_log,
@ -37,74 +42,178 @@ from ..service import (
from .. import config
log = get_logger('cli')
log = get_logger('piker.cli')
def load_trans_eps(
network: dict | None = None,
maddrs: list[tuple] | None = None,
) -> dict[str, dict[str, dict]]:
# transport-oriented endpoint multi-addresses
eps: dict[
str, # service name, eg. `pikerd`, `emsd`..
# libp2p style multi-addresses parsed into prot layers
list[dict[str, str | int]]
] = {}
if (
network
and not maddrs
):
# load network section and (attempt to) connect all endpoints
# which are reachable B)
for key, maddrs in network.items():
match key:
# TODO: resolve table across multiple discov
# prots Bo
case 'resolv':
pass
case 'pikerd':
dname: str = key
for maddr in maddrs:
layers: dict = parse_maddr(maddr)
eps.setdefault(
dname,
[],
).append(layers)
elif maddrs:
# presume user is manually specifying the root actor ep.
eps['pikerd'] = [parse_maddr(maddr)]
return eps
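A sketch of the `[network]` table this expects once decoded from `conf.toml`, expressed as the equivalent python dict; the multiaddr string and its parsed layer layout are assumptions inferred from how `regaddrs` is built further below:
# hypothetical network section with a single libp2p-style multiaddr
# for the root `pikerd` registry; values are illustrative only.
_network: dict = {
    'pikerd': [
        '/ipv4/127.0.0.1/tcp/6116',
    ],
}
_eps: dict = load_trans_eps(network=_network)
# each parsed maddr is later indexed as `layers['ipv4']['addr']`
# and `layers['tcp']['port']` to build the registry addr list.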
@click.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--pdb', is_flag=True, help='Enable tractor debug mode')
@click.option('--host', '-h', default=None, help='Host addr to bind')
@click.option('--port', '-p', default=None, help='Port number to bind')
@click.option(
'--tsdb',
is_flag=True,
help='Enable local ``marketstore`` instance'
'--loglevel',
'-l',
default='warning',
help='Logging level',
)
@click.option(
'--es',
'--tl',
is_flag=True,
help='Enable local ``elasticsearch`` instance'
help='Enable tractor-runtime logs',
)
@click.option(
'--pdb',
is_flag=True,
help='Enable tractor debug mode',
)
@click.option(
'--maddr',
'-m',
default=None,
help='Multiaddrs to bind or contact',
)
# @click.option(
# '--tsdb',
# is_flag=True,
# help='Enable local ``marketstore`` instance'
# )
# @click.option(
# '--es',
# is_flag=True,
# help='Enable local ``elasticsearch`` instance'
# )
def pikerd(
maddr: list[str] | None,
loglevel: str,
host: str,
port: int,
tl: bool,
pdb: bool,
tsdb: bool,
es: bool,
# tsdb: bool,
# es: bool,
):
'''
Spawn the piker broker-daemon.
'''
from tractor.devx import maybe_open_crash_handler
with maybe_open_crash_handler(pdb=pdb):
log = get_console_log(loglevel, name='cli')
from ..service import open_pikerd
log = get_console_log(loglevel)
if pdb:
log.warning((
"\n"
"!!! YOU HAVE ENABLED DAEMON DEBUG MODE !!!\n"
"When a `piker` daemon crashes it will block the "
"task-thread until resumed from console!\n"
"\n"
))
if pdb:
log.warning((
"\n"
"!!! You have enabled daemon DEBUG mode !!!\n"
"If a daemon crashes it will likely block"
" the service until resumed from console!\n"
"\n"
))
# service-actor registry endpoint socket-address set
regaddrs: list[tuple[str, int]] = []
reg_addr: None | tuple[str, int] = None
if host or port:
reg_addr = (
host or _default_registry_host,
int(port) or _default_registry_port,
conf, _ = config.load(
conf_name='conf',
)
async def main():
async with (
open_pikerd(
tsdb=tsdb,
es=es,
loglevel=loglevel,
debug_mode=pdb,
registry_addr=reg_addr,
), # normally delivers a ``Services`` handle
trio.open_nursery() as n,
network: dict = conf.get('network')
if (
network is None
and not maddr
):
regaddrs = [(
_default_registry_host,
_default_registry_port,
)]
await trio.sleep_forever()
else:
eps: dict = load_trans_eps(
network,
maddr,
)
for layers in eps['pikerd']:
regaddrs.append((
layers['ipv4']['addr'],
layers['tcp']['port'],
))
trio.run(main)
from .. import service
async def main():
service_mngr: service.Services
async with (
service.open_pikerd(
registry_addrs=regaddrs,
loglevel=loglevel,
debug_mode=pdb,
) as service_mngr, # normally delivers a ``Services`` handle
# AsyncExitStack() as stack,
):
# TODO: spawn all other sub-actor daemons according to
# multiaddress endpoint spec defined by user config
assert service_mngr
# if tsdb:
# dname, conf = await stack.enter_async_context(
# service.marketstore.start_ahab_daemon(
# service_mngr,
# loglevel=loglevel,
# )
# )
# log.info(f'TSDB `{dname}` up with conf:\n{conf}')
# if es:
# dname, conf = await stack.enter_async_context(
# service.elastic.start_ahab_daemon(
# service_mngr,
# loglevel=loglevel,
# )
# )
# log.info(f'DB `{dname}` up with conf:\n{conf}')
await trio.sleep_forever()
trio.run(main)
@click.group(context_settings=config._context_defaults)
@ -117,8 +226,24 @@ def pikerd(
@click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--configdir', '-c', help='Configuration directory')
@click.option('--host', '-h', default=None, help='Host addr to bind')
@click.option('--port', '-p', default=None, help='Port number to bind')
@click.option(
'--pdb',
is_flag=True,
help='Enable runtime debug mode',
)
@click.option(
'--maddr',
'-m',
default=None,
multiple=True,
help='Multiaddr to bind',
)
@click.option(
'--regaddr',
'-r',
default=None,
help='Registrar addr to contact',
)
@click.pass_context
def cli(
ctx: click.Context,
@ -126,14 +251,19 @@ def cli(
loglevel: str,
tl: bool,
configdir: str,
host: str,
port: int,
pdb: bool,
# TODO: make these list[str] with multiple -m maddr0 -m maddr1
maddr: list[str],
regaddr: str,
) -> None:
if configdir is not None:
assert os.path.isdir(configdir), f"`{configdir}` is not a valid path"
config._override_config_dir(configdir)
# TODO: for typer see
# https://typer.tiangolo.com/tutorial/commands/context/
ctx.ensure_object(dict)
if not brokers:
@ -141,15 +271,25 @@ def cli(
from piker.brokers import __brokers__
brokers = __brokers__
brokermods = [get_brokermod(broker) for broker in brokers]
brokermods: dict[str, ModuleType] = {
broker: get_brokermod(broker) for broker in brokers
}
assert brokermods
reg_addr: None | tuple[str, int] = None
if host or port:
reg_addr = (
host or _default_registry_host,
int(port) or _default_registry_port,
)
# TODO: load endpoints from `conf::[network].pikerd`
# - pikerd vs. regd, separate registry daemon?
# - expose datad vs. brokerd?
# - bind emsd with certain perms on public iface?
regaddrs: list[tuple[str, int]] = regaddr or [(
_default_registry_host,
_default_registry_port,
)]
# TODO: factor [network] section parsing out from pikerd
# above and call it here as well.
# if maddr:
# for addr in maddr:
# layers: dict = parse_maddr(addr)
ctx.obj.update({
'brokers': brokers,
@ -159,7 +299,12 @@ def cli(
'log': get_console_log(loglevel),
'confdir': config._config_dir,
'wl_path': config._watchlists_data_path,
'registry_addr': reg_addr,
'registry_addrs': regaddrs,
'pdb': pdb, # debug mode flag
# TODO: endpoint parsing, pinging and binding
# on no existing server.
# 'maddrs': maddr,
})
# allow enabling same loglevel in ``tractor`` machinery
@ -206,13 +351,15 @@ def services(config, tl, ports):
def _load_clis() -> None:
from ..service import marketstore # noqa
from ..service import elastic
from ..data import cli # noqa
# from ..service import elastic # noqa
from ..brokers import cli # noqa
from ..ui import cli # noqa
from ..watchlists import cli # noqa
# typer implemented
from ..storage import cli # noqa
from ..accounting import cli # noqa
# load downstream cli modules
_load_clis()

View File

@ -22,11 +22,19 @@ import platform
import sys
import os
import shutil
from typing import Optional
from typing import (
Callable,
MutableMapping,
)
from pathlib import Path
from bidict import bidict
import toml
import tomlkit
try:
import tomllib
except ModuleNotFoundError:
import tomli as tomllib
from .log import get_logger
@ -96,14 +104,15 @@ def get_app_dir(
# `tractor`) with the testing dir and check for it whenever we
# detect `pytest` is being used (which it isn't under normal
# operation).
if "pytest" in sys.modules:
import tractor
actor = tractor.current_actor(err_on_no_runtime=False)
if actor: # runtime is up
rvs = tractor._state._runtime_vars
testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
assert testdirpath.exists(), 'piker test harness might be borked!?'
app_name = str(testdirpath)
# if "pytest" in sys.modules:
# import tractor
# actor = tractor.current_actor(err_on_no_runtime=False)
# if actor: # runtime is up
# rvs = tractor._state._runtime_vars
# import pdbp; pdbp.set_trace()
# testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
# assert testdirpath.exists(), 'piker test harness might be borked!?'
# app_name = str(testdirpath)
if platform.system() == 'Windows':
key = "APPDATA" if roaming else "LOCALAPPDATA"
@ -126,14 +135,19 @@ def get_app_dir(
_click_config_dir: Path = Path(get_app_dir('piker'))
_config_dir: Path = _click_config_dir
_parent_user: str = os.environ.get('SUDO_USER')
if _parent_user:
# NOTE: when using `sudo` we attempt to determine the non-root user
# and still use their normal config dir.
if (
(_parent_user := os.environ.get('SUDO_USER'))
and
_parent_user != 'root'
):
non_root_user_dir = Path(
os.path.expanduser(f'~{_parent_user}')
)
root: str = 'root'
_ccds: str = str(_click_config_dir) # click config dir string
_ccds: str = str(_click_config_dir) # click config dir as string
i_tail: int = int(_ccds.rfind(root) + len(root))
_config_dir = (
non_root_user_dir
@ -143,11 +157,9 @@ if _parent_user:
_conf_names: set[str] = {
'brokers',
# 'pps',
'trades',
'watchlists',
'paper_trades'
'conf', # god config
'brokers', # sec backend deatz
'watchlists', # (user defined) market lists
}
# TODO: probably drop all this super legacy, questrade specific,
@ -166,6 +178,14 @@ _context_defaults = dict(
)
class ConfigurationError(Exception):
'Misconfigured settings, likely in a TOML file.'
class NoSignature(ConfigurationError):
'No credentials setup for broker backend!'
def _override_config_dir(
path: str
) -> None:
@ -180,6 +200,15 @@ def _conf_fn_w_ext(
return f'{name}.toml'
def get_conf_dir() -> Path:
'''
Return the user configuration directory ``Path``
on the local filesystem.
'''
return _config_dir
def get_conf_path(
conf_name: str = 'brokers',
@ -191,7 +220,6 @@ def get_conf_path(
Contains files such as:
- brokers.toml
- pp.toml
- watchlists.toml
# maybe coming soon ;)
@ -199,7 +227,7 @@ def get_conf_path(
- strats.toml
'''
if 'pps.' not in conf_name:
if 'account.' not in conf_name:
assert str(conf_name) in _conf_names
fn = _conf_fn_w_ext(conf_name)
@ -211,45 +239,79 @@ def repodir() -> Path:
Return the abspath as ``Path`` to the git repo's root dir.
'''
return Path(__file__).absolute().parent.parent
repodir: Path = Path(__file__).absolute().parent.parent
confdir: Path = repodir / 'config'
if not confdir.is_dir():
# prolly inside stupid GH actions CI..
repodir: Path = Path(os.environ.get('GITHUB_WORKSPACE'))
confdir: Path = repodir / 'config'
assert confdir.is_dir(), f'{confdir} DNE, {repodir} is likely incorrect!'
return repodir
def load(
conf_name: str = 'brokers',
# NOTE: always appended with .toml suffix
conf_name: str = 'conf',
path: Path | None = None,
decode: Callable[
[str | bytes,],
MutableMapping,
] = tomllib.loads,
touch_if_dne: bool = False,
**tomlkws,
) -> tuple[dict, str]:
) -> tuple[dict, Path]:
'''
Load config file by name.
'''
path: Path = path or get_conf_path(conf_name)
If desired config is not in the top level piker-user config path then
pass the ``path: Path`` explicitly.
'''
# create the $HOME/.config/piker dir if dne
if not _config_dir.is_dir():
_config_dir.mkdir(
parents=True,
exist_ok=True,
)
if not path.is_file():
fn: str = _conf_fn_w_ext(conf_name)
path_provided: bool = path is not None
path: Path = path or get_conf_path(conf_name)
# try to copy in a template config to the user's directory if
# one exists.
template: Path = repodir() / 'config' / fn
if template.is_file():
shutil.copyfile(template, path)
else:
# create empty file
if (
not path.is_file()
and touch_if_dne
):
# only do a template if no path provided,
# just touch an empty file with same name.
if path_provided:
with path.open(mode='x'):
pass
else:
with path.open(mode='r'):
pass # touch it
config: dict = toml.load(str(path), **tomlkws)
# try to copy in a template config to the user's dir if one
# exists.
else:
fn: str = _conf_fn_w_ext(conf_name)
template: Path = repodir() / 'config' / fn
if template.is_file():
shutil.copyfile(template, path)
elif fn and template:
assert template.is_file(), f'{template} is not a file!?'
assert path.is_file(), f'Config file {path} not created!?'
with path.open(mode='r') as fp:
config: dict = decode(
fp.read(),
**tomlkws,
)
log.debug(f"Read config file {path}")
return config, path
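
A small usage sketch of the reworked loader (whether the files exist on your box obviously varies; `touch_if_dne` behaves as per the code above):

from piker import config

# the "god config", read with `tomllib` under the hood
conf, path = config.load(conf_name='conf')
network: dict | None = conf.get('network')

# broker creds; create an empty/template file if it doesn't exist yet
brokers_conf, brokers_path = config.load(
    conf_name='brokers',
    touch_if_dne=True,
)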
@ -289,20 +351,22 @@ def write(
f"Writing config `{name}` file to:\n"
f"{path}"
)
with path.open(mode='w') as cf:
return toml.dump(
with path.open(mode='w') as fp:
return tomlkit.dump( # preserve style on write B)
config,
cf,
fp,
**toml_kwargs,
)
def load_accounts(
providers: Optional[list[str]] = None
providers: list[str] | None = None
) -> bidict[str, Optional[str]]:
) -> bidict[str, str | None]:
conf, path = load()
conf, path = load(
conf_name='brokers',
)
accounts = bidict()
for provider_name, section in conf.items():
accounts_section = section.get('accounts')

View File

@ -22,13 +22,7 @@ and storing data from your brokers as well as
sharing live streams over a network.
"""
import tractor
import trio
from ._util import (
get_console_log,
)
from ._normalize import iterticks
from .ticktools import iterticks
from ._sharedmem import (
maybe_open_shm_array,
attach_shm_array,
@ -36,17 +30,42 @@ from ._sharedmem import (
get_shm_token,
ShmArray,
)
from ._source import (
def_iohlcv_fields,
def_ohlcv_fields,
)
from .feed import (
Feed,
open_feed,
)
from .flows import Flume
from ._symcache import (
SymbologyCache,
open_symcache,
get_symcache,
match_from_pairs,
)
from ._sampling import open_sample_stream
from ..types import Struct
__all__ = [
__all__: list[str] = [
'Flume',
'Feed',
'open_feed',
'ShmArray',
'iterticks',
'maybe_open_shm_array',
'match_from_pairs',
'attach_shm_array',
'open_shm_array',
'get_shm_token',
'def_iohlcv_fields',
'def_ohlcv_fields',
'open_symcache',
'open_sample_stream',
'get_symcache',
'Struct',
'SymbologyCache',
'types',
]

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -13,10 +13,10 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
'''
Pre-(path)-graphics formatted x/y nd/1d rendering subsystem.
"""
'''
from __future__ import annotations
from typing import (
Optional,
@ -39,7 +39,12 @@ if TYPE_CHECKING:
from ._dataviz import (
Viz,
)
from .._profile import Profiler
from piker.toolz import Profiler
# default gap between bars: "bar gap multiplier"
# - 0.5 is no overlap between OC arms,
# - 1.0 is full overlap on each neighbor sample
BGM: float = 0.16
class IncrementalFormatter(msgspec.Struct):
@ -222,6 +227,7 @@ class IncrementalFormatter(msgspec.Struct):
profiler: Profiler,
slice_to_inview: bool = True,
force_full_realloc: bool = False,
) -> tuple[
np.ndarray,
@ -248,7 +254,10 @@ class IncrementalFormatter(msgspec.Struct):
# we first need to allocate xy data arrays
# from the source data.
if self.y_nd is None:
if (
self.y_nd is None
or force_full_realloc
):
self.xy_nd_start = shm._first.value
self.xy_nd_stop = shm._last.value
self.x_nd, self.y_nd = self.allocate_xy_nd(
@ -509,6 +518,7 @@ class IncrementalFormatter(msgspec.Struct):
class OHLCBarsFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([
-0.5,
0,
@ -600,8 +610,9 @@ class OHLCBarsFmtr(IncrementalFormatter):
vr: tuple[int, int],
start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.16,
gap: float = BGM,
) -> tuple[
np.ndarray,
@ -618,7 +629,7 @@ class OHLCBarsFmtr(IncrementalFormatter):
array[:-1],
start,
bar_w=self.index_step_size,
bar_gap=w * self.index_step_size,
bar_gap=gap * self.index_step_size,
# XXX: don't ask, due to a ``numba`` bug..
use_time_index=(self.index_field == 'time'),
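
As a rough sketch of what the new `BGM`-based `gap` kwarg amounts to in index units (values hypothetical):

BGM: float = 0.16
index_step_size: float = 1.0  # eg. 1 (sec) per sample step

bar_w: float = index_step_size
bar_gap: float = BGM * index_step_size  # passed as `bar_gap=` above

# 0.5 would mean no overlap between OC arms, 1.0 full overlap
assert 0 < bar_gap < bar_w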

View File

@ -1,82 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Stream format enforcement.
'''
from itertools import chain
from typing import AsyncIterator
def iterticks(
quote: dict,
types: tuple[str] = (
'trade',
'dark_trade',
),
deduplicate_darks: bool = False,
) -> AsyncIterator:
'''
Iterate through ticks delivered per quote cycle.
'''
if deduplicate_darks:
assert 'dark_trade' in types
# print(f"{quote}\n\n")
ticks = quote.get('ticks', ())
trades = {}
darks = {}
if ticks:
# do a first pass and attempt to remove duplicate dark
# trades with the same tick signature.
if deduplicate_darks:
for tick in ticks:
ttype = tick.get('type')
time = tick.get('time', None)
if time:
sig = (
time,
tick['price'],
tick.get('size')
)
if ttype == 'dark_trade':
darks[sig] = tick
elif ttype == 'trade':
trades[sig] = tick
# filter duplicates
for sig, tick in trades.items():
tick = darks.pop(sig, None)
if tick:
ticks.remove(tick)
# print(f'DUPLICATE {tick}')
# re-insert ticks
ticks.extend(list(chain(trades.values(), darks.values())))
for tick in ticks:
# print(f"{quote['symbol']}: {tick}")
ttype = tick.get('type')
if ttype in types:
yield tick
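
A quick usage sketch of the (now `.ticktools`-housed) iterator with a hand-made quote (tick values hypothetical):

from piker.data import iterticks

quote = {
    'symbol': 'xbtusd.kraken',  # hypothetical
    'ticks': [
        {'type': 'bid', 'price': 100.0, 'size': 5},
        {'type': 'trade', 'price': 100.5, 'size': 1},
        {'type': 'ask', 'price': 101.0, 'size': 3},
    ],
}

# only yield clearing (trade) events
clears = list(iterticks(quote, types=('trade', 'dark_trade')))
assert [t['price'] for t in clears] == [100.5]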

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -17,11 +17,6 @@
Super fast ``QPainterPath`` generation related operator routines.
"""
from math import (
ceil,
floor,
)
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import (
@ -35,11 +30,6 @@ from numba import (
# TODO: for ``numba`` typing..
# from ._source import numba_ohlc_dtype
from ._m4 import ds_m4
from .._profile import (
Profiler,
pg_profile_enabled,
ms_slower_then,
)
def xy_downsample(
@ -135,7 +125,7 @@ def path_arrays_from_ohlc(
half_w: float = bar_w/2
# TODO: report bug for assert @
# /home/goodboy/repos/piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
# ../piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
for i, q in enumerate(data[start:], start):
open = q['open']
@ -237,20 +227,20 @@ def trace_hl(
for i in range(hl.size):
row = hl[i]
l, h = row['low'], row['high']
lo, hi = row['low'], row['high']
up_diff = h - last_l
down_diff = last_h - l
up_diff = hi - last_l
down_diff = last_h - lo
if up_diff > down_diff:
out[2*i + 1] = h
out[2*i + 1] = hi
out[2*i] = last_l
else:
out[2*i + 1] = l
out[2*i + 1] = lo
out[2*i] = last_h
last_l = l
last_h = h
last_l = lo
last_h = hi
x[2*i] = int(i) - margin
x[2*i + 1] = int(i) + margin
@ -289,158 +279,3 @@ def ohlc_flatten(
num=len(flat),
)
return x, flat
def slice_from_time(
arr: np.ndarray,
start_t: float,
stop_t: float,
step: float, # sampler period step-diff
) -> slice:
'''
Calculate array indices mapped from a time range and return them in
a slice.
Given an input array with an epoch `'time'` series entry, calculate
the indices which span the time range and return in a slice. Presume
each `'time'` step increment is uniform and when the time stamp
series contains gaps (the uniform presumption is untrue) use
``np.searchsorted()`` binary search to look up the appropriate
index.
'''
profiler = Profiler(
msg='slice_from_time()',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
)
times = arr['time']
t_first = floor(times[0])
t_last = ceil(times[-1])
# the greatest index we can return which slices to the
# end of the input array.
read_i_max = arr.shape[0]
# compute (presumed) uniform-time-step index offsets
i_start_t = floor(start_t)
read_i_start = floor(((i_start_t - t_first) // step)) - 1
i_stop_t = ceil(stop_t)
# XXX: edge case -> always set stop index to last in array whenever
# the input stop time is detected to be greater than the equiv time
# stamp at that last entry.
if i_stop_t >= t_last:
read_i_stop = read_i_max
else:
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
# always clip outputs to array support
# for read start:
# - never allow a start < the 0 index
# - never allow an end index > the read array len
read_i_start = min(
max(0, read_i_start),
read_i_max - 1,
)
read_i_stop = max(
0,
min(read_i_stop, read_i_max),
)
# check for larger-than-latest calculated index for given start
# time, in which case we do a binary search for the correct index.
# NOTE: this is usually the result of a time series with time gaps
# where it is expected that each index step maps to a uniform step
# in the time stamp series.
t_iv_start = times[read_i_start]
if (
t_iv_start > i_start_t
):
# do a binary search for the best index mapping to ``start_t``
# given we measured an overshoot using the uniform-time-step
# calculation from above.
# TODO: once we start caching these per source-array,
# we can just overwrite ``read_i_start`` directly.
new_read_i_start = np.searchsorted(
times,
i_start_t,
side='left',
)
# TODO: minimize binary search work as much as possible:
# - cache these remap values which compensate for gaps in the
# uniform time step basis where we calc a later start
# index for the given input ``start_t``.
# - can we shorten the input search sequence by heuristic?
# up_to_arith_start = index[:read_i_start]
if (
new_read_i_start <= read_i_start
):
# t_diff = t_iv_start - start_t
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
# f'diff: {t_diff}\n'
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
# )
read_i_start = new_read_i_start
t_iv_stop = times[read_i_stop - 1]
if (
t_iv_stop > i_stop_t
):
# t_diff = stop_t - t_iv_stop
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
# f'diff: {t_diff}\n'
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
# )
new_read_i_stop = np.searchsorted(
times[read_i_start:],
# times,
i_stop_t,
side='right',
)
if (
new_read_i_stop <= read_i_stop
):
read_i_stop = read_i_start + new_read_i_stop + 1
# sanity checks for range size
# samples = (i_stop_t - i_start_t) // step
# index_diff = read_i_stop - read_i_start + 1
# if index_diff > (samples + 3):
# breakpoint()
# read-relative indexes: gives a slice where `shm.array[read_slc]`
# will be the data spanning the input time range `start_t` ->
# `stop_t`
read_slc = slice(
int(read_i_start),
int(read_i_stop),
)
profiler(
'slicing complete'
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
)
# NOTE: if caller needs absolute buffer indices they can
# slice the buffer abs index like so:
# index = arr['index']
# abs_indx = index[read_slc]
# abs_slc = slice(
# int(abs_indx[0]),
# int(abs_indx[-1]),
# )
return read_slc
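
The gist of the removed routine (uniform-step index arithmetic with a `np.searchsorted()` fallback when time gaps break the uniform presumption) in a simplified sketch:

import numpy as np

times = np.array([0., 1., 2., 3., 10., 11., 12.])  # gap after t=3
step: float = 1.0
start_t: float = 10.0

# presumed-uniform arithmetic overshoots when there's a gap..
i_start = int((start_t - times[0]) // step)  # -> 10, out of range
i_start = min(max(0, i_start), times.size - 1)

# ..so fall back to a binary search for the true index
if times[i_start] > start_t:
    i_start = int(np.searchsorted(times, start_t, side='left'))

assert times[i_start] == 10.0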

View File

@ -27,17 +27,27 @@ from collections import (
from contextlib import asynccontextmanager as acm
import time
from typing import (
Any,
AsyncIterator,
TYPE_CHECKING,
)
import tractor
from tractor import (
Context,
MsgStream,
Channel,
)
from tractor.trionics import (
maybe_open_nursery,
)
import trio
from trio_typing import TaskStatus
from .ticktools import (
frame_ticks,
_tick_groups,
)
from ._util import (
log,
get_console_log,
@ -48,7 +58,10 @@ if TYPE_CHECKING:
from ._sharedmem import (
ShmArray,
)
from .feed import _FeedsBus
from .feed import (
_FeedsBus,
Sub,
)
# highest frequency sample step is 1 second by default, though in
@ -89,7 +102,7 @@ class Sampler:
float,
list[
float,
set[tractor.MsgStream]
set[MsgStream]
],
] = defaultdict(
lambda: [
@ -230,6 +243,7 @@ class Sampler:
self,
period_s: float,
time_stamp: float | None = None,
info: dict | None = None,
) -> None:
'''
@ -252,16 +266,20 @@ class Sampler:
f'broadcasting {period_s} -> {last_ts}\n'
# f'consumers: {subs}'
)
borked: set[tractor.MsgStream] = set()
sent: set[tractor.MsgStream] = set()
borked: set[MsgStream] = set()
sent: set[MsgStream] = set()
while True:
try:
for stream in (subs - sent):
try:
await stream.send({
msg = {
'index': time_stamp or last_ts,
'period': period_s,
})
}
if info:
msg.update(info)
await stream.send(msg)
sent.add(stream)
except (
@ -287,14 +305,24 @@ class Sampler:
)
@classmethod
async def broadcast_all(self) -> None:
for period_s in self.subscribers:
await self.broadcast(period_s)
async def broadcast_all(
self,
info: dict | None = None,
) -> None:
# NOTE: take a copy of subs since removals can happen
# during the broadcast checkpoint which can cause
# a `RuntimeError` on iteration of the underlying `dict`.
for period_s in list(self.subscribers):
await self.broadcast(
period_s,
info=info,
)
@tractor.context
async def register_with_sampler(
ctx: tractor.Context,
ctx: Context,
period_s: float,
shms_by_period: dict[float, dict] | None = None,
@ -351,17 +379,29 @@ async def register_with_sampler(
if open_index_stream:
try:
async with ctx.open_stream() as stream:
async with ctx.open_stream(
allow_overruns=True,
) as stream:
if sub_for_broadcasts:
subs.add(stream)
# except broadcast requests from the subscriber
async for msg in stream:
if msg == 'broadcast_all':
await Sampler.broadcast_all()
if 'broadcast_all' in msg:
await Sampler.broadcast_all(
info=msg['broadcast_all'],
)
finally:
if sub_for_broadcasts:
subs.remove(stream)
if (
sub_for_broadcasts
and subs
):
try:
subs.remove(stream)
except KeyError:
log.warning(
f'{stream._ctx.chan.uid} sub already removed!?'
)
else:
# if no shms are passed in we just wait until cancelled
# by caller.
@ -458,6 +498,8 @@ async def open_sample_stream(
cache_key: str | None = None,
allow_new_sampler: bool = True,
ensure_is_active: bool = False,
) -> AsyncIterator[dict[str, float]]:
'''
Subscribe to OHLC sampling "step" events: when the time aggregation
@ -500,11 +542,20 @@ async def open_sample_stream(
},
) as (ctx, first)
):
async with (
ctx.open_stream() as istream,
if ensure_is_active:
assert len(first) > 1
# TODO: we don't need this task-bcasting right?
# istream.subscribe() as istream,
async with (
ctx.open_stream(
allow_overruns=True,
) as istream,
# TODO: we DO need this task-bcasting so that
# for eg. the history chart update loop eventually
# receives all backfilling event msgs such that
# the underlying graphics format arrays are
# re-allocated until all history is loaded!
istream.subscribe() as istream,
):
yield istream
@ -549,9 +600,9 @@ async def sample_and_broadcast(
# TODO: we should probably not write every single
# value to an OHLC sample stream XD
# for a tick stream sure.. but this is excessive..
ticks = quote['ticks']
ticks: list[dict] = quote['ticks']
for tick in ticks:
ticktype = tick['type']
ticktype: str = tick['type']
# write trade events to shm last OHLC sample
if ticktype in ('trade', 'utrade'):
@ -561,13 +612,14 @@ async def sample_and_broadcast(
# more compact inline-way to do this assignment
# to both buffers?
for shm in [rt_shm, hist_shm]:
# update last entry
# benchmarked in the 4-5 us range
o, high, low, v = shm.array[-1][
['open', 'high', 'low', 'volume']
]
new_v = tick.get('size', 0)
new_v: float = tick.get('size', 0)
if v == 0 and new_v:
# no trades for this bar yet so the open
@ -586,14 +638,14 @@ async def sample_and_broadcast(
'high',
'low',
'close',
'bar_wap', # can be optionally provided
# 'bar_wap', # can be optionally provided
'volume',
]][-1] = (
o,
max(high, last),
min(low, last),
last,
quote.get('bar_wap', 0),
# quote.get('bar_wap', 0),
volume,
)
@ -605,48 +657,49 @@ async def sample_and_broadcast(
# eventually block this producer end of the feed and
# thus other consumers still attached.
sub_key: str = broker_symbol.lower()
subs: list[
tuple[
tractor.MsgStream | trio.MemorySendChannel,
float | None, # tick throttle in Hz
]
] = bus.get_subs(sub_key)
subs: set[Sub] = bus.get_subs(sub_key)
# NOTE: by default the broker backend doesn't append
# it's own "name" into the fqsn schema (but maybe it
# it's own "name" into the fqme schema (but maybe it
# should?) so we have to manually generate the correct
# key here.
fqsn = f'{broker_symbol}.{brokername}'
fqme: str = f'{broker_symbol}.{brokername}'
lags: int = 0
# TODO: speed up this loop in an AOT compiled lang (like
# rust or nim or zig) and/or instead of doing a fan out to
# TCP sockets here, we add a shm-style tick queue which
# readers can pull from instead of placing the burden of
# broadcast on solely on this `brokerd` actor. see issues:
# XXX TODO XXX: speed up this loop in an AOT compiled
# lang (like rust or nim or zig)!
# AND/OR instead of doing a fan out to TCP sockets
# here, we add a shm-style tick queue which readers can
# pull from instead of placing the burden of broadcast
# solely on this `brokerd` actor. see issues:
# - https://github.com/pikers/piker/issues/98
# - https://github.com/pikers/piker/issues/107
for (stream, tick_throttle) in subs.copy():
# for (stream, tick_throttle) in subs.copy():
for sub in subs.copy():
ipc: MsgStream = sub.ipc
throttle: float = sub.throttle_rate
try:
with trio.move_on_after(0.2) as cs:
if tick_throttle:
if throttle:
send_chan: trio.abc.SendChannel = sub.send_chan
# this is a send mem chan that likely
# pushes to the ``uniform_rate_send()`` below.
try:
stream.send_nowait(
(fqsn, quote)
send_chan.send_nowait(
(fqme, quote)
)
except trio.WouldBlock:
overruns[sub_key] += 1
ctx = stream._ctx
chan = ctx.chan
ctx: Context = ipc._ctx
chan: Channel = ctx.chan
log.warning(
f'Feed OVERRUN {sub_key}'
'@{bus.brokername} -> \n'
f'feed @ {chan.uid}\n'
f'throttle = {tick_throttle} Hz'
f'throttle = {throttle} Hz'
)
if overruns[sub_key] > 6:
@ -663,33 +716,33 @@ async def sample_and_broadcast(
f'{sub_key}:'
f'{ctx.cid}@{chan.uid}'
)
await stream.aclose()
await ipc.aclose()
raise trio.BrokenResourceError
else:
await stream.send(
{fqsn: quote}
await ipc.send(
{fqme: quote}
)
if cs.cancelled_caught:
lags += 1
if lags > 10:
await tractor.breakpoint()
await tractor.pause()
except (
trio.BrokenResourceError,
trio.ClosedResourceError,
trio.EndOfChannel,
):
ctx = stream._ctx
chan = ctx.chan
ctx: Context = ipc._ctx
chan: Channel = ctx.chan
if ctx:
log.warning(
'Dropped `brokerd`-quotes-feed connection:\n'
f'{broker_symbol}:'
f'{ctx.cid}@{chan.uid}'
)
if tick_throttle:
assert stream._closed
if sub.throttle_rate:
assert ipc._closed
# XXX: do we need to deregister here
# if it's done in the feed bus code?
@ -698,69 +751,15 @@ async def sample_and_broadcast(
# since there seems to be some kinda race..
bus.remove_subs(
sub_key,
{(stream, tick_throttle)},
{sub},
)
# a working tick-type-classes template
_tick_groups = {
'clears': {'trade', 'dark_trade', 'last'},
'bids': {'bid', 'bsize'},
'asks': {'ask', 'asize'},
}
def frame_ticks(
first_quote: dict,
last_quote: dict,
ticks_by_type: dict,
) -> None:
# append quotes since last iteration into the last quote's
# tick array/buffer.
ticks = last_quote.get('ticks')
# TODO: once we decide to get fancy really we should
# have a shared mem tick buffer that is just
# continually filled and the UI just reads from it
# at its display rate.
if ticks:
# TODO: do we need this any more or can we just
# expect the receiver to unwind the below
# `ticks_by_type: dict`?
# => unwinding would potentially require a
# `dict[str, set | list]` instead with an
# included `'types' field which is an (ordered)
# set of tick type fields in the order which
# types arrived?
first_quote['ticks'].extend(ticks)
# XXX: build a tick-by-type table of lists
# of tick messages. This allows for less
# iteration on the receiver side by allowing for
# a single "latest tick event" look up by
# indexing the last entry in each sub-list.
# tbt = {
# 'types': ['bid', 'asize', 'last', .. '<type_n>'],
# 'bid': [tick0, tick1, tick2, .., tickn],
# 'asize': [tick0, tick1, tick2, .., tickn],
# 'last': [tick0, tick1, tick2, .., tickn],
# ...
# '<type_n>': [tick0, tick1, tick2, .., tickn],
# }
# append in reverse FIFO order for in-order iteration on
# receiver side.
for tick in ticks:
ttype = tick['type']
ticks_by_type[ttype].append(tick)
async def uniform_rate_send(
rate: float,
quote_stream: trio.abc.ReceiveChannel,
stream: tractor.MsgStream,
stream: MsgStream,
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
@ -789,10 +788,10 @@ async def uniform_rate_send(
diff = 0
task_status.started()
ticks_by_type: defaultdict[
ticks_by_type: dict[
str,
list[dict],
] = defaultdict(list)
list[dict[str, Any]],
] = {}
clear_types = _tick_groups['clears']
@ -820,9 +819,9 @@ async def uniform_rate_send(
# expired we aren't supposed to send yet so append
# to the tick frame.
frame_ticks(
first_quote,
last_quote,
ticks_by_type,
ticks_in_order=first_quote['ticks'],
ticks_by_type=ticks_by_type,
)
# send cycle isn't due yet so continue waiting
@ -842,8 +841,8 @@ async def uniform_rate_send(
frame_ticks(
first_quote,
first_quote,
ticks_by_type,
ticks_in_order=first_quote['ticks'],
ticks_by_type=ticks_by_type,
)
# we have a quote already so send it now.
@ -859,9 +858,9 @@ async def uniform_rate_send(
break
frame_ticks(
first_quote,
last_quote,
ticks_by_type,
ticks_in_order=first_quote['ticks'],
ticks_by_type=ticks_by_type,
)
# measured_rate = 1 / (time.time() - last_send)

View File

@ -33,17 +33,8 @@ from numpy.lib import recfunctions as rfn
import tractor
from ._util import log
from ._source import base_iohlc_dtype
from .types import Struct
# how much is probably dependent on lifestyle
_secs_in_day = int(60 * 60 * 24)
# we try for a buncha times, but only on a run-every-other-day kinda week.
_days_worth = 16
_default_size = _days_worth * _secs_in_day
# where to start the new data append index
_rt_buffer_start = int((_days_worth - 1) * _secs_in_day)
from ._source import def_iohlcv_fields
from piker.types import Struct
def cuckoff_mantracker():
@ -70,7 +61,6 @@ def cuckoff_mantracker():
mantracker._resource_tracker = ManTracker()
mantracker.register = mantracker._resource_tracker.register
mantracker.ensure_running = mantracker._resource_tracker.ensure_running
# ensure_running = mantracker._resource_tracker.ensure_running
mantracker.unregister = mantracker._resource_tracker.unregister
mantracker.getfd = mantracker._resource_tracker.getfd
@ -168,7 +158,7 @@ def _make_token(
to access a shared array.
'''
dtype = base_iohlc_dtype if dtype is None else dtype
dtype = def_iohlcv_fields if dtype is None else dtype
return _Token(
shm_name=key,
shm_first_index_name=key + "_first",
@ -258,7 +248,6 @@ class ShmArray:
# to load an empty array..
if len(a) == 0 and self._post_init:
raise RuntimeError('Empty array race condition hit!?')
# breakpoint()
return a
@ -323,7 +312,7 @@ class ShmArray:
field_map: Optional[dict[str, str]] = None,
prepend: bool = False,
update_first: bool = True,
start: Optional[int] = None,
start: int | None = None,
) -> int:
'''
@ -365,7 +354,11 @@ class ShmArray:
# tries to access ``.array`` (which due to the index
# overlap will be empty). Pretty sure we've fixed it now
# but leaving this here as a reminder.
if prepend and update_first and length:
if (
prepend
and update_first
and length
):
assert index < self._first.value
if (
@ -439,10 +432,10 @@ class ShmArray:
def open_shm_array(
key: Optional[str] = None,
size: int = _default_size, # see above
dtype: Optional[np.dtype] = None,
size: int,
key: str | None = None,
dtype: np.dtype | None = None,
append_start_index: int | None = None,
readonly: bool = False,
) -> ShmArray:
@ -507,10 +500,13 @@ def open_shm_array(
# ``ShmArray._start.value: int = 0`` and the yet-to-be written
# real-time section will start at ``ShmArray.index: int``.
# this sets the index to 3/4 of the length of the buffer
# leaving a "days worth of second samples" for the real-time
# section.
last.value = first.value = _rt_buffer_start
# this sets the index to nearly 2/3rds into the length of
# the buffer leaving at least a "days worth of second samples"
# for the real-time section.
if append_start_index is None:
append_start_index = round(size * 0.616)
last.value = first.value = append_start_index
shmarr = ShmArray(
array,
@ -524,7 +520,6 @@ def open_shm_array(
# "unlink" created shm on process teardown by
# pushing teardown calls onto actor context stack
stack = tractor.current_actor().lifetime_stack
stack.callback(shmarr.close)
stack.callback(shmarr.destroy)
@ -619,7 +614,10 @@ def attach_shm_array(
def maybe_open_shm_array(
key: str,
dtype: Optional[np.dtype] = None,
size: int,
dtype: np.dtype | None = None,
append_start_index: int | None = None,
readonly: bool = False,
**kwargs,
) -> tuple[ShmArray, bool]:
@ -640,11 +638,16 @@ def maybe_open_shm_array(
use ``attach_shm_array``.
'''
size = kwargs.pop('size', _default_size)
try:
# see if we already know this key
token = _known_tokens[key]
return attach_shm_array(token=token, **kwargs), False
return (
attach_shm_array(
token=token,
readonly=readonly,
),
False,
)
except KeyError:
log.debug(f"Could not find {key} in shms cache")
if dtype:
@ -663,8 +666,16 @@ def maybe_open_shm_array(
# Attempt to open a block and expect
# to fail if a block has been allocated
# on the OS by someone else.
return open_shm_array(key=key, dtype=dtype, **kwargs), True
return (
open_shm_array(
key=key,
size=size,
dtype=dtype,
append_start_index=append_start_index,
readonly=readonly,
),
True,
)
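
Example of the now-explicit `size` requirement (key and sizing hypothetical):

from piker.data import maybe_open_shm_array

secs_per_day: int = 60 * 60 * 24
shm, opened = maybe_open_shm_array(
    key='btcusdt.binance.hist',  # hypothetical key
    size=3 * secs_per_day,  # caller now chooses the buffer size
    # `dtype` defaults to the iohlcv fields; `append_start_index`
    # defaults to ~61.6% of `size` per `open_shm_array()` above
)
if opened:
    ...  # we allocated the segment: backfill history first
else:
    ...  # attached to an existing buffer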
def try_read(
array: np.ndarray

View File

@ -23,26 +23,42 @@ from bidict import bidict
import numpy as np
ohlc_fields = [
('time', float),
def_iohlcv_fields: list[tuple[str, type]] = [
# YES WE KNOW, this isn't needed in polars but we use it for doing
# ring-buffer like pre/append ops on our `ShmArray` real-time
# numpy-array buffering system such that there is a master index
# that can be used for index-arithmetic when writing data to the
# "middle" of the array. See the ``tractor.ipc.shm`` pkg for more
# details.
('index', int),
# presume int for epoch stamps since it's most common
# and makes the most sense to avoid float rounding issues.
# TODO: if we want higher reso we should use the new
# ``time.time_ns()`` in python 3.10+
('time', int),
('open', float),
('high', float),
('low', float),
('close', float),
('volume', float),
('bar_wap', float),
# TODO: can we elim this from default field set to save on mem?
# i think only kraken really uses this in terms of what we get from
# their ohlc history API?
# ('bar_wap', float), # shouldn't be default right?
]
ohlc_with_index = ohlc_fields.copy()
ohlc_with_index.insert(0, ('index', int))
# our minimum structured array layout for ohlc data
base_iohlc_dtype = np.dtype(ohlc_with_index)
base_ohlc_dtype = np.dtype(ohlc_fields)
# remove index field
def_ohlcv_fields: list[tuple[str, type]] = def_iohlcv_fields.copy()
def_ohlcv_fields.pop(0)
assert (len(def_iohlcv_fields) - len(def_ohlcv_fields)) == 1
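
A minimal sketch of turning these field lists into numpy structured dtypes (mirroring what the shm layer does with them):

import numpy as np
from piker.data import (
    def_iohlcv_fields,
    def_ohlcv_fields,
)

iohlcv_dtype = np.dtype(def_iohlcv_fields)
ohlcv_dtype = np.dtype(def_ohlcv_fields)
assert 'index' in iohlcv_dtype.names
assert 'index' not in ohlcv_dtype.names

# an empty day's worth of 1s bars
arr = np.zeros(60 * 60 * 24, dtype=iohlcv_dtype)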
# TODO: for now need to construct this manually for readonly arrays, see
# https://github.com/numba/numba/issues/4511
# from numba import from_dtype
# base_ohlc_dtype = np.dtype(def_ohlc_fields)
# numba_ohlc_dtype = from_dtype(base_ohlc_dtype)
# map time frame "keys" to seconds values

View File

@ -0,0 +1,510 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Mega-simple symbology cache via TOML files.
Allow backend data providers and/or brokers to stash their
symbology sets (aka the meta data we normalize into our
`.accounting.MktPair` type) to the filesystem for faster lookup and
offline usage.
'''
from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
)
from pathlib import Path
from pprint import pformat
from typing import (
Any,
Sequence,
Hashable,
TYPE_CHECKING,
)
from types import ModuleType
from rapidfuzz import process as fuzzy
import tomli_w # for fast symbol cache writing
import tractor
import trio
try:
import tomllib
except ModuleNotFoundError:
import tomli as tomllib
from msgspec import field
from piker.log import get_logger
from piker import config
from piker.types import Struct
from piker.brokers import (
open_cached_client,
get_brokermod,
)
if TYPE_CHECKING:
from ..accounting import (
Asset,
MktPair,
)
log = get_logger('data.cache')
class SymbologyCache(Struct):
'''
Asset meta-data cache which holds lookup tables for 3 sets of
market-symbology related struct-types required by the
`.accounting` and `.data` subsystems.
'''
mod: ModuleType
fp: Path
# all asset-money-systems descriptions as minimally defined by
# in `.accounting.Asset`
assets: dict[str, Asset] = field(default_factory=dict)
# backend-system pairs loaded in provider (schema) specific
# structs.
pairs: dict[str, Struct] = field(default_factory=dict)
# serialized namespace path to the backend's pair-info-`Struct`
# defn B)
pair_ns_path: tractor.msg.NamespacePath | None = None
# TODO: piker-normalized `.accounting.MktPair` table?
# loaded from the `.pairs` and a normalizer
# provided by the backend pkg.
mktmaps: dict[str, MktPair] = field(default_factory=dict)
def write_config(self) -> None:
# put the backend's pair-struct type ref at the top
# of file if possible.
cachedict: dict[str, Any] = {
'pair_ns_path': str(self.pair_ns_path) or '',
}
# serialize all tables as dicts for TOML.
for key, table in {
'assets': self.assets,
'pairs': self.pairs,
'mktmaps': self.mktmaps,
}.items():
if not table:
log.warning(
f'Asset cache table for `{key}` is empty?'
)
continue
dct = cachedict[key] = {}
for key, struct in table.items():
dct[key] = struct.to_dict(include_non_members=False)
try:
with self.fp.open(mode='wb') as fp:
tomli_w.dump(cachedict, fp)
except TypeError:
self.fp.unlink()
raise
async def load(self) -> None:
'''
Explicitly load the "symbology set" for this provider by using
2 required `Client` methods:
- `.get_assets()`: returning a table of `Asset`s
- `.get_mkt_pairs()`: returning a table of pair-`Struct`
types, custom defined by the particular backend.
AND, the required `.get_mkt_info()` module-level endpoint
which maps `fqme: str` -> `MktPair`s.
These tables are then used to fill out the `.assets`, `.pairs` and
`.mktmaps` tables on this cache instance, respectively.
'''
async with open_cached_client(self.mod.name) as client:
if get_assets := getattr(client, 'get_assets', None):
assets: dict[str, Asset] = await get_assets()
for bs_mktid, asset in assets.items():
self.assets[bs_mktid] = asset
else:
log.warning(
'No symbology cache `Asset` support for `{provider}`..\n'
'Implement `Client.get_assets()`!'
)
if get_mkt_pairs := getattr(client, 'get_mkt_pairs', None):
pairs: dict[str, Struct] = await get_mkt_pairs()
for bs_fqme, pair in pairs.items():
# NOTE: every backend defined pair should
# declare its ns path for roundtrip
# serialization lookup.
if not getattr(pair, 'ns_path', None):
raise TypeError(
f'Pair-struct for {self.mod.name} MUST define a '
'`.ns_path: str`!\n'
f'{pair}'
)
entry = await self.mod.get_mkt_info(pair.bs_fqme)
if not entry:
continue
mkt: MktPair
pair: Struct
mkt, _pair = entry
assert _pair is pair, (
f'`{self.mod.name}` backend probably has a '
'keying-symmetry problem between the pair-`Struct` '
'returned from `Client.get_mkt_pairs()` and the '
'module level endpoint: `.get_mkt_info()`\n\n'
"Here's the struct diff:\n"
f'{_pair - pair}'
)
# NOTE XXX: this means backends MUST implement
# a `Struct.bs_mktid: str` field to provide
# a native-keyed map to their own symbol
# set(s).
self.pairs[pair.bs_mktid] = pair
# NOTE: `MktPair`s are keyed here using piker's
# internal FQME schema so that search,
# accounting and feed init can be accomplished
# on a sane, uniform, normalized basis.
self.mktmaps[mkt.fqme] = mkt
self.pair_ns_path: str = tractor.msg.NamespacePath.from_ref(
pair,
)
else:
log.warning(
'No symbology cache `Pair` support for `{provider}`..\n'
'Implement `Client.get_mkt_pairs()`!'
)
return self
@classmethod
def from_dict(
cls: type,
data: dict,
**kwargs,
) -> SymbologyCache:
# normal init inputs
cache = cls(**kwargs)
# XXX WARNING: this may break if backend namespacing
# changes (eg. `Pair` class def is moved to another
# module) in which case you can manually update the
# `pair_ns_path` in the symcache file and try again.
# TODO: probably a verbose error about this?
Pair: type = tractor.msg.NamespacePath(
str(data['pair_ns_path'])
).load_ref()
pairtable = data.pop('pairs')
for key, pairtable in pairtable.items():
# allow each serialized pair-dict-table to declare its
# specific struct type's path in cases where a backend
# supports multiples (normally with different
# schemas..) and we are storing them in a flat `.pairs`
# table.
ThisPair = Pair
if this_pair_type := pairtable.get('ns_path'):
ThisPair: type = tractor.msg.NamespacePath(
str(this_pair_type)
).load_ref()
pair: Struct = ThisPair(**pairtable)
cache.pairs[key] = pair
from ..accounting import (
Asset,
MktPair,
)
# load `dict` -> `Asset`
assettable = data.pop('assets')
for name, asdict in assettable.items():
cache.assets[name] = Asset.from_msg(asdict)
# load `dict` -> `MktPair`
dne: list[str] = []
mkttable = data.pop('mktmaps')
for fqme, mktdict in mkttable.items():
mkt = MktPair.from_msg(mktdict)
assert mkt.fqme == fqme
# sanity check asset refs from those (presumably)
# loaded asset set above.
src: Asset = cache.assets[mkt.src.name]
assert src == mkt.src
dst: Asset
if not (dst := cache.assets.get(mkt.dst.name)):
dne.append(mkt.dst.name)
continue
else:
assert dst.name == mkt.dst.name
cache.mktmaps[fqme] = mkt
log.warning(
f'These `MktPair.dst: Asset`s DNE says `{cache.mod.name}`?\n'
f'{pformat(dne)}'
)
return cache
@staticmethod
async def from_scratch(
mod: ModuleType,
fp: Path,
**kwargs,
) -> SymbologyCache:
'''
Generate (a) new symcache (contents) entirely from scratch
including all (TOML) serialized data and file.
'''
log.info(f'GENERATING symbology cache for `{mod.name}`')
cache = SymbologyCache(
mod=mod,
fp=fp,
**kwargs,
)
await cache.load()
cache.write_config()
return cache
def search(
self,
pattern: str,
table: str = 'mktmaps'
) -> dict[str, Struct]:
'''
(Fuzzy) search this cache's `.mktmaps` table, which is
keyed by FQMEs, for `pattern: str` and return the best
matches in a `dict` including the `MktPair` values.
'''
matches = fuzzy.extract(
pattern,
getattr(self, table),
score_cutoff=50,
)
# repack in dict[fqme, MktPair] form
return {
item[0].fqme: item[0]
for item in matches
}
# actor-process-local in-mem-cache of symcaches (by backend).
_caches: dict[str, SymbologyCache] = {}
def mk_cachefile(
provider: str,
) -> Path:
cachedir: Path = config.get_conf_dir() / '_cache'
if not cachedir.is_dir():
log.info(f'Creating `nativedb` directory: {cachedir}')
cachedir.mkdir()
cachefile: Path = cachedir / f'{str(provider)}.symcache.toml'
cachefile.touch()
return cachefile
@acm
async def open_symcache(
mod_or_name: ModuleType | str,
reload: bool = False,
only_from_memcache: bool = False, # no API req
_no_symcache: bool = False, # no backend support
) -> SymbologyCache:
if isinstance(mod_or_name, str):
mod = get_brokermod(mod_or_name)
else:
mod: ModuleType = mod_or_name
provider: str = mod.name
cachefile: Path = mk_cachefile(provider)
# NOTE: certain backends might not support a symbology cache
# (easily) and thus we allow for an empty instance to be loaded
# and manually filled in at the whim of the caller presuming
# the backend pkg-module is annotated appropriately.
if (
getattr(mod, '_no_symcache', False)
or _no_symcache
):
yield SymbologyCache(
mod=mod,
fp=cachefile,
)
# don't do nuttin
return
# actor-level cache-cache XD
global _caches
if not reload:
try:
yield _caches[provider]
except KeyError:
msg: str = (
f'No asset info cache exists yet for `{provider}`'
)
if only_from_memcache:
raise RuntimeError(msg)
else:
log.warning(msg)
# if no cache exists or an explicit reload is requested, load
# the provider API and call appropriate endpoints to populate
# the mkt and asset tables.
if (
reload
or not cachefile.is_file()
):
cache = await SymbologyCache.from_scratch(
mod=mod,
fp=cachefile,
)
else:
log.info(
f'Loading EXISTING `{mod.name}` symbology cache:\n'
f'> {cachefile}'
)
import time
now = time.time()
with cachefile.open('rb') as existing_fp:
data: dict[str, dict] = tomllib.load(existing_fp)
log.runtime(f'SYMCACHE TOML LOAD TIME: {time.time() - now}')
# if there's an empty file for some reason we need
# to do a full reload as well!
if not data:
cache = await SymbologyCache.from_scratch(
mod=mod,
fp=cachefile,
)
else:
cache = SymbologyCache.from_dict(
data,
mod=mod,
fp=cachefile,
)
# TODO: use a real profiling sys..
# https://github.com/pikers/piker/issues/337
log.info(f'SYMCACHE LOAD TIME: {time.time() - now}')
yield cache
# TODO: write only when changes detected? but that should
# never happen right except on reload?
# cache.write_config()
def get_symcache(
provider: str,
force_reload: bool = False,
) -> SymbologyCache:
'''
Get any available symbology/assets cache from sync code by
(maybe) manually running `trio` to do the work.
'''
# spawn tractor runtime and generate cache
# if not existing.
async def sched_gen_symcache():
async with (
# only for runtime's debug mode
tractor.open_nursery(debug_mode=True),
open_symcache(
get_brokermod(provider),
reload=force_reload,
) as symcache,
):
return symcache
try:
symcache: SymbologyCache = trio.run(sched_gen_symcache)
assert symcache
except BaseException:
import pdbp
pdbp.xpm()
return symcache
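
Sync-code usage sketch (provider name hypothetical):

from piker.data import get_symcache

symcache = get_symcache('binance')  # hypothetical provider
matches = symcache.search('btcusdt')  # fqme-keyed `MktPair`s
for fqme, mkt in matches.items():
    print(fqme, mkt.dst, mkt.src)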
def match_from_pairs(
pairs: dict[str, Struct],
query: str,
score_cutoff: int = 50,
**extract_kwargs,
) -> dict[str, Struct]:
'''
Fuzzy search over a "pairs table" maintained by most backends
as part of their symbology-info caching internals.
Scan the native symbol key set and return best ranked
matches back in a new `dict`.
'''
# TODO: somehow cache this list (per call) like we were in
# `open_symbol_search()`?
keys: list[str] = list(pairs)
matches: list[tuple[
Sequence[Hashable], # matching input key
Any, # scores
Any,
]] = fuzzy.extract(
# NOTE: most backends provide keys uppercased
query=query,
choices=keys,
score_cutoff=score_cutoff,
**extract_kwargs,
)
# pop and repack pairs in output dict
matched_pairs: dict[str, Struct] = {}
for item in matches:
pair_key: str = item[0]
matched_pairs[pair_key] = pairs[pair_key]
return matched_pairs
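
For example (toy pairs table; real backends store provider-specific pair-`Struct`s as the values):

from piker.data import match_from_pairs

pairs = {
    'BTCUSDT': {'bs_mktid': 'BTCUSDT'},
    'ETHUSDT': {'bs_mktid': 'ETHUSDT'},
    'BTCUSDC': {'bs_mktid': 'BTCUSDC'},
}
matches = match_from_pairs(
    pairs=pairs,
    query='BTCUSD',
    score_cutoff=50,
)
assert 'BTCUSDT' in matches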

View File

@ -50,8 +50,8 @@ from trio_websocket._impl import (
ConnectionTimeout,
)
from piker.types import Struct
from ._util import log
from .types import Struct
class NoBsWs:
@ -156,6 +156,15 @@ async def _reconnect_forever(
) -> None:
# TODO: can we just report "where" in the call stack
# the client code is using the ws stream?
# Maybe we can just drop this since it's already in the log msg
# prefix?
if fixture is not None:
src_mod: str = fixture.__module__
else:
src_mod: str = 'unknown'
async def proxy_msgs(
ws: WebSocketConnection,
pcs: trio.CancelScope, # parent cancel scope
@ -179,6 +188,7 @@ async def _reconnect_forever(
await snd.send(msg)
except nobsws.recon_errors:
log.exception(
f'{src_mod}\n'
f'{url} connection bail with:'
)
await trio.sleep(0.5)
@ -191,7 +201,8 @@ async def _reconnect_forever(
timeouts += 1
if timeouts > reset_after:
log.error(
'WS feed seems down and slow af? .. resetting\n'
f'{src_mod}\n'
'WS feed seems down and slow af.. reconnecting\n'
)
pcs.cancel()
@ -218,14 +229,23 @@ async def _reconnect_forever(
task_status.started()
while not snd._closed:
log.info(f'{url} trying (RE)CONNECT')
log.info(
f'{src_mod}\n'
f'{url} trying (RE)CONNECT'
)
async with trio.open_nursery() as n:
cs = nobsws._cs = n.cancel_scope
ws: WebSocketConnection
async with open_websocket_url(url) as ws:
ws: WebSocketConnection
try:
async with (
trio.open_nursery() as n,
open_websocket_url(url) as ws,
):
cs = nobsws._cs = n.cancel_scope
nobsws._ws = ws
log.info(f'Connection success: {url}')
log.info(
f'{src_mod}\n'
f'Connection success: {url}'
)
# begin relay loop to forward msgs
n.start_soon(
@ -235,7 +255,10 @@ async def _reconnect_forever(
)
if fixture is not None:
log.info(f'Entering fixture: {fixture}')
log.info(
f'{src_mod}\n'
f'Entering fixture: {fixture}'
)
# TODO: should we return an explicit sub-cs
# from this fixture task?
@ -249,9 +272,11 @@ async def _reconnect_forever(
# to let tasks run **inside** the ws open block above.
nobsws._connected.set()
await trio.sleep_forever()
except HandshakeError:
log.exception('Retrying connection')
# ws & nursery block ends
# ws open block end
# nursery block end
nobsws._connected = trio.Event()
if cs.cancelled_caught:
log.cancel(
@ -264,10 +289,14 @@ async def _reconnect_forever(
and not nobsws._connected.is_set()
)
# -> from here, move to next reconnect attempt
# -> from here, move to next reconnect attempt iteration
# in the while loop above Bp
else:
log.exception('ws connection closed by client...')
log.exception(
f'{src_mod}\n'
'ws connection closed by client...'
)
@acm
@ -276,8 +305,9 @@ async def open_autorecon_ws(
fixture: AsyncContextManager | None = None,
# time in sec
msg_recv_timeout: float = 3,
# time in sec between msgs received before
# we presume connection might need a reset.
msg_recv_timeout: float = 16,
# count of the number of above timeouts before connection reset
reset_after: int = 3,
@ -329,8 +359,8 @@ async def open_autorecon_ws(
'''
JSONRPC response-request style machinery for transparent multiplexing of msgs
over a NoBsWs.
JSONRPC response-request style machinery for transparent multiplexing
of msgs over a `NoBsWs`.
'''
@ -347,43 +377,82 @@ async def open_jsonrpc_session(
url: str,
start_id: int = 0,
response_type: type = JSONRPCResult,
request_type: Optional[type] = None,
request_hook: Optional[Callable] = None,
error_hook: Optional[Callable] = None,
msg_recv_timeout: float = float('inf'),
# ^NOTE, since only `deribit` is using this jsonrpc stuff atm
# and options mkts are generally "slow moving"..
#
# FURTHER if we break the underlying ws connection then since we
# don't pass a `fixture` to the task that manages `NoBsWs`, i.e.
# `_reconnect_forever()`, the jsonrpc "transport pipe" get's
# broken and never restored with wtv init sequence is required to
# re-establish a working req-resp session.
) -> Callable[[str, dict], dict]:
'''
Init a json-RPC-over-websocket connection to the provided `url`.
A `json_rpc: Callable[[str, dict], dict` is delivered to the
caller for sending requests and a bg-`trio.Task` handles
processing of response msgs including error reporting/raising in
the parent/caller task.
'''
# NOTE, store all request msgs so we can raise errors on the
# caller side!
req_msgs: dict[int, dict] = {}
async with (
trio.open_nursery() as n,
open_autorecon_ws(url) as ws
trio.open_nursery() as tn,
open_autorecon_ws(
url=url,
msg_recv_timeout=msg_recv_timeout,
) as ws
):
rpc_id: Iterable = count(start_id)
rpc_id: Iterable[int] = count(start_id)
rpc_results: dict[int, dict] = {}
async def json_rpc(method: str, params: dict) -> dict:
async def json_rpc(
method: str,
params: dict,
) -> dict:
'''
Perform a json-rpc call and wait for the result; raise an
exception if an error field is present in the response.
'''
nonlocal req_msgs
req_id: int = next(rpc_id)
msg = {
'jsonrpc': '2.0',
'id': next(rpc_id),
'id': req_id,
'method': method,
'params': params
}
_id = msg['id']
rpc_results[_id] = {
result = rpc_results[_id] = {
'result': None,
'event': trio.Event()
'error': None,
'event': trio.Event(), # signal caller resp arrived
}
req_msgs[_id] = msg
await ws.send_msg(msg)
# wait for response before unblocking requester code
await rpc_results[_id]['event'].wait()
ret = rpc_results[_id]['result']
if (maybe_result := result['result']):
ret = maybe_result
del rpc_results[_id]
del rpc_results[_id]
else:
err = result['error']
raise Exception(
f'JSONRPC request failed\n'
f'req: {msg}\n'
f'resp: {err}\n'
)
if ret.error is not None:
raise Exception(json.dumps(ret.error, indent=4))
@ -398,6 +467,7 @@ async def open_jsonrpc_session(
the server side.
'''
nonlocal req_msgs
async for msg in ws:
match msg:
case {
@ -421,19 +491,28 @@ async def open_jsonrpc_session(
'params': _,
}:
log.debug(f'Received\n{msg}')
if request_hook:
await request_hook(request_type(**msg))
case {
'error': error
}:
log.warning(f'Received\n{error}')
if error_hook:
await error_hook(response_type(**msg))
# retrieve orig request msg, set error
# response in original "result" msg,
# THEN FINALLY set the event to signal caller
# to raise the error in the parent task.
req_id: int = error['id']
req_msg: dict = req_msgs[req_id]
result: dict = rpc_results[req_id]
result['error'] = error
result['event'].set()
log.error(
f'JSONRPC request failed\n'
f'req: {req_msg}\n'
f'resp: {error}\n'
)
case _:
log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')
n.start_soon(recv_task)
tn.start_soon(recv_task)
yield json_rpc
n.cancel_scope.cancel()
tn.cancel_scope.cancel()
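
Usage sketch of the slimmed down API (url and method are hypothetical, loosely deribit-style; server errors now raise in the caller's task instead of going to an `error_hook`):

import trio
from piker.data._web_bs import open_jsonrpc_session

async def main():
    async with open_jsonrpc_session(
        url='wss://test.deribit.com/ws/api/v2',  # hypothetical url
    ) as json_rpc:
        resp = await json_rpc(
            'public/get_time',  # hypothetical method
            params={},
        )
        print(resp)

trio.run(main)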

View File

@ -1,255 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
marketstore cli.
"""
import trio
import tractor
import click
from ..service.marketstore import (
# get_client,
# stream_quotes,
ingest_quote_stream,
# _url,
# _tick_tbk_ids,
# mk_tbk,
)
from ..cli import cli
from .. import watchlists as wl
from ._util import (
log,
)
@cli.command()
@click.option(
'--url',
default='ws://localhost:5993/ws',
help='HTTP URL of marketstore instance'
)
@click.argument('names', nargs=-1)
@click.pass_obj
def ms_stream(
config: dict,
names: list[str],
url: str,
) -> None:
'''
Connect to a marketstore time bucket stream for (a set of) symbol(s)
and print to console.
'''
async def main():
# async for quote in stream_quotes(symbols=names):
# log.info(f"Received quote:\n{quote}")
...
trio.run(main)
# @cli.command()
# @click.option(
# '--url',
# default=_url,
# help='HTTP URL of marketstore instance'
# )
# @click.argument('names', nargs=-1)
# @click.pass_obj
# def ms_destroy(config: dict, names: list[str], url: str) -> None:
# """Destroy symbol entries in the local marketstore instance.
# """
# async def main():
# nonlocal names
# async with get_client(url) as client:
#
# if not names:
# names = await client.list_symbols()
#
# # default is to wipe db entirely.
# answer = input(
# "This will entirely wipe you local marketstore db @ "
# f"{url} of the following symbols:\n {pformat(names)}"
# "\n\nDelete [N/y]?\n")
#
# if answer == 'y':
# for sym in names:
# # tbk = _tick_tbk.format(sym)
# tbk = tuple(sym, *_tick_tbk_ids)
# print(f"Destroying {tbk}..")
# await client.destroy(mk_tbk(tbk))
# else:
# print("Nothing deleted.")
#
# tractor.run(main)
@cli.command()
@click.option(
'--tsdb_host',
default='localhost'
)
@click.option(
'--tsdb_port',
default=5993
)
@click.argument('symbols', nargs=-1)
@click.pass_obj
def storesh(
config,
tl,
host,
port,
symbols: list[str],
):
'''
Start an IPython shell ready to query the local marketstore db.
'''
from piker.data.marketstore import open_tsdb_client
from piker.service import open_piker_runtime
async def main():
nonlocal symbols
async with open_piker_runtime(
'storesh',
enable_modules=['piker.service._ahab'],
):
symbol = symbols[0]
async with open_tsdb_client(symbol):
# TODO: ask if user wants to write history for detected
# available shm buffers?
from tractor.trionics import ipython_embed
await ipython_embed()
trio.run(main)
@cli.command()
@click.option(
'--host',
default='localhost'
)
@click.option(
'--port',
default=5993
)
@click.option(
'--delete',
'-d',
is_flag=True,
help='Delete history (1 Min) for symbol(s)',
)
@click.argument('symbols', nargs=-1)
@click.pass_obj
def storage(
config,
host,
port,
symbols: list[str],
delete: bool,
):
'''
Start an IPython shell ready to query the local marketstore db.
'''
from piker.service.marketstore import open_tsdb_client
from piker.service import open_piker_runtime
async def main():
nonlocal symbols
async with open_piker_runtime(
'tsdb_storage',
enable_modules=['piker.service._ahab'],
):
symbol = symbols[0]
async with open_tsdb_client(symbol) as storage:
if delete:
for fqsn in symbols:
syms = await storage.client.list_symbols()
resp60s = await storage.delete_ts(fqsn, 60)
msgish = resp60s.ListFields()[0][1]
if 'error' in str(msgish):
# TODO: MEGA LOL, apparently the symbols don't
# flush out until you refresh something or other
# (maybe the WALFILE)... #lelandorlulzone, classic
# alpaca(Rtm) design here ..
# well, if we ever can make this work we
# probably want to dogsplain the real reason
# for the delete errurz..llululu
if fqsn not in syms:
log.error(f'Pair {fqsn} dne in DB')
log.error(f'Deletion error: {fqsn}\n{msgish}')
resp1s = await storage.delete_ts(fqsn, 1)
msgish = resp1s.ListFields()[0][1]
if 'error' in str(msgish):
log.error(f'Deletion error: {fqsn}\n{msgish}')
trio.run(main)
@cli.command()
@click.option('--test-file', '-t', help='Test quote stream file')
@click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.argument('name', nargs=1, required=True)
@click.pass_obj
def ingest(config, name, test_file, tl):
'''
Ingest real-time broker quotes and ticks to a marketstore instance.
'''
# global opts
loglevel = config['loglevel']
tractorloglevel = config['tractorloglevel']
# log = config['log']
watchlist_from_file = wl.ensure_watchlists(config['wl_path'])
watchlists = wl.merge_watchlist(watchlist_from_file, wl._builtins)
symbols = watchlists[name]
grouped_syms = {}
for sym in symbols:
symbol, _, provider = sym.rpartition('.')
if provider not in grouped_syms:
grouped_syms[provider] = []
grouped_syms[provider].append(symbol)
async def entry_point():
async with tractor.open_nursery() as n:
for provider, symbols in grouped_syms.items():
await n.run_in_actor(
ingest_quote_stream,
name='ingest_marketstore',
symbols=symbols,
brokername=provider,
tries=1,
actorloglevel=loglevel,
loglevel=tractorloglevel
)
tractor.run(entry_point)
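Before the removal above, the `ingest` command batched watchlist symbols per provider by splitting on the last '.'; a tiny standalone sketch of that grouping step (symbol names are invented for illustration):

symbols = ['xbtusd.kraken', 'ethusd.kraken', 'qqq.ib']
grouped_syms: dict[str, list[str]] = {}
for sym in symbols:
    # everything after the last '.' is the provider/backend name
    symbol, _, provider = sym.rpartition('.')
    grouped_syms.setdefault(provider, []).append(symbol)

assert grouped_syms == {
    'kraken': ['xbtusd', 'ethusd'],
    'ib': ['qqq'],
}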


@ -28,6 +28,7 @@ module.
from __future__ import annotations
from collections import (
defaultdict,
abc,
)
from contextlib import asynccontextmanager as acm
from functools import partial
@ -36,51 +37,70 @@ from types import ModuleType
from typing import (
Any,
AsyncContextManager,
Optional,
Awaitable,
Sequence,
Union,
)
import trio
from trio.abc import ReceiveChannel
from trio_typing import TaskStatus
import tractor
from tractor.trionics import (
maybe_open_context,
gather_contexts,
)
from tractor import trionics
from ..brokers import get_brokermod
from ..calc import humanize
from piker.accounting import (
MktPair,
unpack_fqme,
)
from piker.types import Struct
from piker.brokers import get_brokermod
from piker.service import (
maybe_spawn_brokerd,
)
from piker.calc import humanize
from ._util import (
log,
get_console_log,
)
from ..service import (
maybe_spawn_brokerd,
)
from .flows import Flume
from .validate import (
FeedInit,
validate_backend,
)
from .history import (
from ..tsp import (
manage_history,
)
from .ingest import get_ingestormod
from .types import Struct
from ..accounting._mktinfo import (
MktPair,
unpack_fqme,
)
from ..ui import _search
from ._sampling import (
sample_and_broadcast,
uniform_rate_send,
)
class Sub(Struct, frozen=True):
'''
A live feed subscription entry.
Contains meta-data on the remote-actor type (in functionality
terms) as well as refs to IPC streams and sampler runtime
params.
'''
ipc: tractor.MsgStream
send_chan: trio.abc.SendChannel | None = None
# tick throttle rate in Hz; determines how live
# quotes/ticks should be downsampled before relay
# to the receiving remote consumer (process).
throttle_rate: float | None = None
_throttle_cs: trio.CancelScope | None = None
# TODO: actually stash comms info for the far end to allow
# `.tsp`, `.fsp` and `.data._sampling` sub-systems to re-render
# the data view as needed via msging with the `._remote_ctl`
# ipc ctx.
rc_ui: bool = False
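For a `Sub` carrying a `throttle_rate`, the quote path is mem-chan-then-throttle: the sampler pushes into a memory channel and a bg task relays only the freshest value downstream at the capped rate. A generic sketch of that pattern follows; this is NOT piker's actual `uniform_rate_send()` implementation, just an illustration of the idea:

import trio

async def throttled_forward(
    rate_hz: float,
    recv: trio.MemoryReceiveChannel,
    ipc,  # anything with an async `.send()`, eg. a `tractor.MsgStream`
) -> None:
    period: float = 1 / rate_hz
    while True:
        quote = await recv.receive()
        # drain any backlog so only the latest quote is relayed
        while True:
            try:
                quote = recv.receive_nowait()
            except trio.WouldBlock:
                break
        await ipc.send(quote)
        await trio.sleep(period)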
class _FeedsBus(Struct):
'''
Data feeds broadcaster and persistence management.
@ -105,13 +125,7 @@ class _FeedsBus(Struct):
_subscribers: defaultdict[
str,
set[
tuple[
tractor.MsgStream | trio.MemorySendChannel,
# tractor.Context,
float | None, # tick throttle in Hz
]
]
set[Sub]
] = defaultdict(set)
async def start_task(
@ -126,6 +140,8 @@ class _FeedsBus(Struct):
trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> None:
with trio.CancelScope() as cs:
# TODO: shouldn't this be a direct await to avoid
# cancellation contagion to the bus nursery!?!?!
await self.nursery.start(
target,
*args,
@ -143,33 +159,28 @@ class _FeedsBus(Struct):
def get_subs(
self,
key: str,
) -> set[
tuple[
Union[tractor.MsgStream, trio.MemorySendChannel],
# tractor.Context,
float | None, # tick throttle in Hz
]
]:
) -> set[Sub]:
'''
Get the ``set`` of consumer subscription entries for the given key.
'''
return self._subscribers[key]
def subs_items(self) -> abc.ItemsView[str, set[Sub]]:
return self._subscribers.items()
def add_subs(
self,
key: str,
subs: set[tuple[
tractor.MsgStream | trio.MemorySendChannel,
# tractor.Context,
float | None, # tick throttle in Hz
]],
) -> set[tuple]:
subs: set[Sub],
) -> set[Sub]:
'''
Add a ``set`` of consumer subscription entries for the given key.
'''
_subs = self._subscribers[key]
_subs: set[Sub] = self._subscribers.setdefault(key, set())
_subs.update(subs)
return _subs
@ -183,7 +194,7 @@ class _FeedsBus(Struct):
Remove a ``set`` of consumer subscription entries for key.
'''
_subs = self.get_subs(key)
_subs: set[tuple] = self.get_subs(key)
_subs.difference_update(subs)
return _subs
@ -193,7 +204,7 @@ _bus: _FeedsBus = None
def get_feed_bus(
brokername: str,
nursery: Optional[trio.Nursery] = None,
nursery: trio.Nursery | None = None,
) -> _FeedsBus:
'''
@ -226,6 +237,7 @@ async def allocate_persistent_feed(
loglevel: str,
start_stream: bool = True,
init_timeout: float = 616,
task_status: TaskStatus[FeedInit] = trio.TASK_STATUS_IGNORED,
@ -267,26 +279,23 @@ async def allocate_persistent_feed(
# TODO: probably make a struct msg type for this as well
# since eventually we do want to have more efficient IPC..
first_quote: dict[str, Any]
with trio.fail_after(init_timeout):
(
init_msgs,
first_quote,
) = await bus.nursery.start(
partial(
mod.stream_quotes,
send_chan=send,
feed_is_live=feed_is_live,
symstr = symstr.lower()
(
init_msgs,
first_quote,
) = await bus.nursery.start(
partial(
mod.stream_quotes,
send_chan=send,
feed_is_live=feed_is_live,
# NOTE / TODO: eventually we may support providing more than
# one input here such that a datad daemon can multiplex
# multiple live feeds from one task, instead of getting
# a new request (and thus new task) for each subscription.
symbols=[symstr],
loglevel=loglevel,
# NOTE / TODO: eventually we may support providing more than
# one input here such that a datad daemon can multiplex
# multiple live feeds from one task, instead of getting
# a new request (and thus new task) for each subscription.
symbols=[symstr],
)
)
)
# TODO: this is indexed by symbol for now since we've planned (for
# some time) to expect backends to handle single
@ -313,7 +322,7 @@ async def allocate_persistent_feed(
init: FeedInit = validate_backend(
mod,
[symstr],
init_msgs,
init_msgs, # NOTE: only 1 should be delivered for now..
)
mkt: MktPair = init.mkt_info
fqme: str = mkt.fqme
@ -336,15 +345,14 @@ async def allocate_persistent_feed(
) = await bus.nursery.start(
manage_history,
mod,
bus,
fqme,
mkt,
some_data_ready,
feed_is_live,
)
# yield back control to starting nursery once we receive either
# some history or a real-time quote.
log.info(f'waiting on history to load: {fqme}')
log.info(f'loading OHLCV history: {fqme}')
await some_data_ready.wait()
flume = Flume(
@ -359,6 +367,10 @@ async def allocate_persistent_feed(
_hist_shm_token=hist_shm.token,
izero_hist=izero_hist,
izero_rt=izero_rt,
# NOTE: some instruments don't have this provided,
# eg. commodities and forex from ib.
_has_vlm=init.shm_write_opts['has_vlm'],
)
# for ambiguous names we simply register the
@ -370,7 +382,8 @@ async def allocate_persistent_feed(
mkt.bs_fqme: flume,
})
# signal the ``open_feed_bus()`` caller task to continue
# signal the ``open_feed_bus()`` caller task to continue since
# we now have (some) history pushed to the shm buffer.
task_status.started(init)
if not start_stream:
@ -382,7 +395,12 @@ async def allocate_persistent_feed(
# NOTE: if not configured otherwise, we always sum tick volume
# values in the OHLCV sampler.
sum_tick_vlm: bool = (init.shm_write_opts or {}).get('sum_tick_vlm', True)
sum_tick_vlm: bool = True
if init.shm_write_opts:
sum_tick_vlm: bool = init.shm_write_opts.get(
'sum_tick_vlm',
True,
)
# NOTE: if no high-freq sampled data has (yet) been loaded,
# seed the buffer with a history datum - this is most handy
@ -403,7 +421,13 @@ async def allocate_persistent_feed(
rt_shm.array['time'][1] = ts + 1
elif hist_shm.array.size == 0:
raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')
for i in range(100):
await trio.sleep(0.1)
if hist_shm.array.size > 0:
break
else:
await tractor.pause()
raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')
# wait for the spawning parent task to register its subscriber
# send-stream entry before we start the sample loop.
@ -433,8 +457,9 @@ async def open_feed_bus(
symbols: list[str], # normally expected to the broker-specific fqme
loglevel: str = 'error',
tick_throttle: Optional[float] = None,
tick_throttle: float | None = None,
start_stream: bool = True,
allow_remote_ctl_ui: bool = False,
) -> dict[
str, # fqme
@ -449,8 +474,12 @@ async def open_feed_bus(
if loglevel is None:
loglevel = tractor.current_actor().loglevel
# XXX: required to propagate ``tractor`` loglevel to piker logging
get_console_log(loglevel or tractor.current_actor().loglevel)
# XXX: required to propagate ``tractor`` loglevel to piker
# logging
get_console_log(
loglevel
or tractor.current_actor().loglevel
)
# local state sanity checks
# TODO: check for any stale shm entries for this symbol
@ -460,16 +489,13 @@ async def open_feed_bus(
assert 'brokerd' in servicename
assert brokername in servicename
bus = get_feed_bus(brokername)
bus: _FeedsBus = get_feed_bus(brokername)
sub_registered = trio.Event()
flumes: dict[str, Flume] = {}
for symbol in symbols:
# we always use lower case keys internally
symbol = symbol.lower()
# if no cached feed for this symbol has been created for this
# brokerd yet, start persistent stream and shm writer task in
# service nursery
@ -510,10 +536,10 @@ async def open_feed_bus(
# pack for ``.started()`` sync msg
flumes[fqme] = flume
# we use the broker-specific fqme (bs_fqme) for the
# sampler subscription since the backend isn't (yet) expected to
# append its own name to the fqme, so we filter on keys which
# *do not* include that name (e.g .ib) .
# we use the broker-specific fqme (bs_fqme) for the sampler
# subscription since the backend isn't (yet) expected to
# append its own name to the fqme, so we filter on keys
# which *do not* include that name (e.g .ib) .
bus._subscribers.setdefault(bs_fqme, set())
# sync feed subscribers with flume handles
@ -532,7 +558,7 @@ async def open_feed_bus(
# NOTE we allow this since it's common to have the live
# quote feed actor's sampling task push faster than the
# local UI-graphics code during startup.
allow_overruns=True,
# allow_overruns=True,
) as stream,
):
@ -552,49 +578,60 @@ async def open_feed_bus(
# that the ``sample_and_broadcast()`` task (spawned inside
# ``allocate_persistent_feed()``) will push real-time quote
# (ticks) to this new consumer.
cs: trio.CancelScope | None = None
send: trio.MemorySendChannel | None = None
if tick_throttle:
flume.throttle_rate = tick_throttle
# open a bg task which receives quotes over a mem chan
# and only pushes them to the target actor-consumer at
# a max ``tick_throttle`` instantaneous rate.
# open a bg task which receives quotes over a mem
# chan and only pushes them to the target
# actor-consumer at a max ``tick_throttle``
# (instantaneous) rate.
send, recv = trio.open_memory_channel(2**10)
cs = await bus.start_task(
# NOTE: the ``.send`` channel here is a swapped-in
# trio mem chan which gets `.send()`-ed by the normal
# sampler task but instead of being sent directly
# over the IPC msg stream it's the throttle task
# does the work of incrementally forwarding to the
# IPC stream at the throttle rate.
cs: trio.CancelScope = await bus.start_task(
uniform_rate_send,
tick_throttle,
recv,
stream,
)
# NOTE: so the ``send`` channel here is actually a swapped
# in trio mem chan which gets pushed by the normal sampler
# task but instead of being sent directly over the IPC msg
# stream it's the throttle task does the work of
# incrementally forwarding to the IPC stream at the throttle
# rate.
send._ctx = ctx # mock internal ``tractor.MsgStream`` ref
sub = (send, tick_throttle)
else:
sub = (stream, tick_throttle)
sub = Sub(
ipc=stream,
send_chan=send,
throttle_rate=tick_throttle,
_throttle_cs=cs,
rc_ui=allow_remote_ctl_ui,
)
# TODO: add an api for this on the bus?
# maybe use the current task-id to key the sub list that's
# added / removed? Or maybe we can add a general
# pause-resume by sub-key api?
bs_fqme = fqme.removesuffix(f'.{brokername}')
local_subs.setdefault(bs_fqme, set()).add(sub)
bus.add_subs(bs_fqme, {sub})
local_subs.setdefault(
bs_fqme,
set()
).add(sub)
bus.add_subs(
bs_fqme,
{sub}
)
# sync caller with all subs registered state
sub_registered.set()
uid = ctx.chan.uid
uid: tuple[str, str] = ctx.chan.uid
try:
# ctrl protocol for start/stop of quote streams based on UI
# state (eg. don't need a stream when a symbol isn't being
# displayed).
# ctrl protocol for start/stop of live quote streams
# based on UI state (eg. don't need a stream when
# a symbol isn't being displayed).
async for msg in stream:
if msg == 'pause':
@ -640,9 +677,12 @@ class Feed(Struct):
'''
mods: dict[str, ModuleType] = {}
portals: dict[ModuleType, tractor.Portal] = {}
flumes: dict[str, Flume] = {}
flumes: dict[
str, # FQME
Flume,
] = {}
streams: dict[
str,
str, # broker name
trio.abc.ReceiveChannel[dict[str, Any]],
] = {}
@ -655,7 +695,12 @@ class Feed(Struct):
brokers: Sequence[str] | None = None,
) -> trio.abc.ReceiveChannel:
'''
Open streams to multiple data providers (``brokers``) and
multiplex their msgs onto a common mem chan for
only-requires-a-single-thread style consumption.
'''
if brokers is None:
mods = self.mods
brokers = list(self.mods)
@ -711,7 +756,7 @@ async def install_brokerd_search(
async with portal.open_context(
brokermod.open_symbol_search
) as (ctx, cache):
) as (ctx, _):
# shield here since we expect the search rpc to be
# cancellable by the user as they see fit.
@ -724,6 +769,7 @@ async def install_brokerd_search(
except trio.EndOfChannel:
return {}
from piker.ui import _search
async with _search.register_symbol_search(
provider_name=brokermod.name,
@ -741,8 +787,8 @@ async def install_brokerd_search(
@acm
async def maybe_open_feed(
fqsns: list[str],
loglevel: Optional[str] = None,
fqmes: list[str],
loglevel: str | None = None,
**kwargs,
@ -756,12 +802,12 @@ async def maybe_open_feed(
in a tractor broadcast receiver.
'''
fqsn = fqsns[0]
fqme = fqmes[0]
async with maybe_open_context(
async with trionics.maybe_open_context(
acm_func=open_feed,
kwargs={
'fqsns': fqsns,
'fqmes': fqmes,
'loglevel': loglevel,
'tick_throttle': kwargs.get('tick_throttle'),
@ -769,16 +815,16 @@ async def maybe_open_feed(
'allow_overruns': kwargs.get('allow_overruns', True),
'start_stream': kwargs.get('start_stream', True),
},
key=fqsn,
key=fqme,
) as (cache_hit, feed):
if cache_hit:
log.info(f'Using cached feed for {fqsn}')
log.info(f'Using cached feed for {fqme}')
# add a new broadcast subscription for the quote stream
# if this feed is likely already in use
async with gather_contexts(
async with trionics.gather_contexts(
mngrs=[stream.subscribe() for stream in feed.streams.values()]
) as bstreams:
for bstream, flume in zip(bstreams, feed.flumes.values()):
@ -795,13 +841,15 @@ async def maybe_open_feed(
@acm
async def open_feed(
fqsns: list[str],
fqmes: list[str],
loglevel: str | None = None,
allow_overruns: bool = True,
start_stream: bool = True,
tick_throttle: float | None = None, # Hz
allow_remote_ctl_ui: bool = False,
) -> Feed:
'''
Open a "data feed" which provides streamed real-time quotes.
@ -810,9 +858,9 @@ async def open_feed(
providers: dict[ModuleType, list[str]] = {}
feed = Feed()
for fqsn in fqsns:
brokername, *_ = unpack_fqme(fqsn)
bfqsn = fqsn.replace('.' + brokername, '')
for fqme in fqmes:
brokername, *_ = unpack_fqme(fqme)
bs_fqme = fqme.replace('.' + brokername, '')
try:
mod = get_brokermod(brokername)
@ -820,13 +868,13 @@ async def open_feed(
mod = get_ingestormod(brokername)
# built a per-provider map to instrument names
providers.setdefault(mod, []).append(bfqsn)
providers.setdefault(mod, []).append(bs_fqme)
feed.mods[mod.name] = mod
# one actor per brokerd for now
brokerd_ctxs = []
for brokermod, bfqsns in providers.items():
for brokermod, bfqmes in providers.items():
# if no `brokerd` for this backend exists yet we spawn
# a daemon actor for it.
@ -838,14 +886,14 @@ async def open_feed(
)
portals: tuple[tractor.Portal]
async with gather_contexts(
async with trionics.gather_contexts(
brokerd_ctxs,
) as portals:
bus_ctxs: list[AsyncContextManager] = []
for (
portal,
(brokermod, bfqsns),
(brokermod, bfqmes),
) in zip(portals, providers.items()):
feed.portals[brokermod] = portal
@ -870,28 +918,45 @@ async def open_feed(
portal.open_context(
open_feed_bus,
brokername=brokermod.name,
symbols=bfqsns,
symbols=bfqmes,
loglevel=loglevel,
start_stream=start_stream,
tick_throttle=tick_throttle,
# XXX: super important to prevent
# the brokerd from some other
# backend overrunning the task here
# bc some other brokerd took longer
# to startup before we hit the `.open_stream()`
# loop below XD .. really we should try to do each
# of these stream open sequences sequentially per
# backend? .. needs some thought!
allow_overruns=True,
# NOTE: UI actors (like charts) can allow
# remote control of certain graphics rendering
# capabilities via the
# `.ui._remote_ctl.remote_annotate()` msg loop.
allow_remote_ctl_ui=allow_remote_ctl_ui,
)
)
assert len(feed.mods) == len(feed.portals)
async with (
gather_contexts(bus_ctxs) as ctxs,
trionics.gather_contexts(bus_ctxs) as ctxs,
):
stream_ctxs = []
stream_ctxs: list[tractor.MsgStream] = []
for (
(ctx, flumes_msg_dict),
(brokermod, bfqsns),
(brokermod, bfqmes),
) in zip(ctxs, providers.items()):
for fqsn, flume_msg in flumes_msg_dict.items():
for fqme, flume_msg in flumes_msg_dict.items():
flume = Flume.from_msg(flume_msg)
assert flume.symbol.fqsn == fqsn
feed.flumes[fqsn] = flume
# assert flume.mkt.fqme == fqme
feed.flumes[fqme] = flume
# TODO: do we need this?
flume.feed = feed
@ -917,23 +982,32 @@ async def open_feed(
)
)
stream: tractor.MsgStream
brokermod: ModuleType
fqmes: list[str]
async with (
gather_contexts(stream_ctxs) as streams,
trionics.gather_contexts(stream_ctxs) as streams,
):
for (
stream,
(brokermod, bfqsns),
(brokermod, bfqmes),
) in zip(streams, providers.items()):
assert stream
feed.streams[brokermod.name] = stream
# apply `brokerd`-common steam to each flume
# tracking a symbol from that provider.
for fqsn, flume in feed.flumes.items():
if brokermod.name == flume.symbol.broker:
# apply `brokerd`-common stream to each flume
# tracking a live market feed from that provider.
for fqme, flume in feed.flumes.items():
if brokermod.name == flume.mkt.broker:
flume.stream = stream
assert len(feed.mods) == len(feed.portals) == len(feed.streams)
assert (
len(feed.mods)
==
len(feed.portals)
==
len(feed.streams)
)
yield feed
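A hedged consumer-side sketch of the re-keyed (fqsn -> fqme) `open_feed()` API above; the import path, fqme string, throttle rate and quote-payload shape are all assumptions for illustration only:

import trio
from piker.data.feed import open_feed  # module path assumed

async def consume():
    fqme: str = 'btcusdt.spot.binance'  # hypothetical fqme
    async with open_feed(
        fqmes=[fqme],
        loglevel='info',
        tick_throttle=10,  # Hz, downsample quotes for this consumer
    ) as feed:
        flume = feed.flumes[fqme]
        # the `brokerd`-common stream attached to the flume above
        async for quotes in flume.stream:
            for sym, quote in quotes.items():  # assumed {fqme: quote} shape
                print(sym, quote.get('ticks', []))

trio.run(consume)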


@ -22,7 +22,6 @@ real-time data processing data-structures.
"""
from __future__ import annotations
# from decimal import Decimal
from typing import (
TYPE_CHECKING,
)
@ -31,58 +30,27 @@ import tractor
import pendulum
import numpy as np
from ..accounting._mktinfo import (
MktPair,
Symbol,
)
from ._util import (
log,
)
from .types import Struct
from piker.types import Struct
from ._sharedmem import (
attach_shm_array,
ShmArray,
_Token,
)
# from .._profile import (
# Profiler,
# pg_profile_enabled,
# )
if TYPE_CHECKING:
# from pyqtgraph import PlotItem
from ..accounting import MktPair
from .feed import Feed
# TODO: ideas for further abstractions as per
# https://github.com/pikers/piker/issues/216 and
# https://github.com/pikers/piker/issues/270:
# - a ``Cascade`` would be the minimal "connection" of 2 ``Flumes``
# as per circuit parlance:
# https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
# - could cover the combination of our `FspAdmin` and the
# backend `.fsp._engine` related machinery to "connect" one flume
# to another?
# - a (financial signal) ``Flow`` would be a "collection" of such
# minimal cascades. Some engineering-based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
class Flume(Struct):
'''
Composite reference type which points to all the addressing handles
and other meta-data necessary for the read, measure and management
of a set of real-time updated data flows.
Composite reference type which points to all the addressing
handles and other meta-data necessary for the read, measure and
management of a set of real-time updated data flows.
Can be thought of as a "flow descriptor" or "flow frame" which
describes the high level properties of a set of data flows that can
be used seamlessly across process-memory boundaries.
describes the high level properties of a set of data flows that
can be used seamlessly across process-memory boundaries.
Each instance's sub-components normally includes:
- a msg oriented quote stream provided via an IPC transport
@ -94,18 +62,10 @@ class Flume(Struct):
queuing properties.
'''
mkt: MktPair | Symbol
mkt: MktPair
first_quote: dict
_rt_shm_token: _Token
@property
def symbol(self) -> MktPair | Symbol:
log.warning(
'`Flume.symbol` is deprecated!\n'
'Use `.mkt: MktPair` instead!'
)
return self.mkt
# optional since some data flows won't have a "downsampled" history
# buffer/stream (eg. FSPs).
_hist_shm_token: _Token | None = None
@ -113,6 +73,7 @@ class Flume(Struct):
# private shm refs loaded dynamically from tokens
_hist_shm: ShmArray | None = None
_rt_shm: ShmArray | None = None
_readonly: bool = True
stream: tractor.MsgStream | None = None
izero_hist: int = 0
@ -129,7 +90,7 @@ class Flume(Struct):
if self._rt_shm is None:
self._rt_shm = attach_shm_array(
token=self._rt_shm_token,
readonly=True,
readonly=self._readonly,
)
return self._rt_shm
@ -142,12 +103,10 @@ class Flume(Struct):
'No shm token has been set for the history buffer?'
)
if (
self._hist_shm is None
):
if self._hist_shm is None:
self._hist_shm = attach_shm_array(
token=self._hist_shm_token,
readonly=True,
readonly=self._readonly,
)
return self._hist_shm
@ -166,10 +125,10 @@ class Flume(Struct):
period and ratio between them.
'''
times = self.hist_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s = (end - start).seconds
times: np.ndarray = self.hist_shm.array['time']
end: float | int = pendulum.from_timestamp(times[-1])
start: float | int = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s: float = (end - start).seconds
times = self.rt_shm.array['time']
end = pendulum.from_timestamp(times[-1])
@ -189,17 +148,25 @@ class Flume(Struct):
msg = self.to_dict()
msg['mkt'] = self.mkt.to_dict()
# can't serialize the stream or feed objects, it's expected
# you'll have a ref to it since this msg should be rxed on
# a stream on whatever far end IPC..
# NOTE: pop all un-msg-serializable fields:
# - `tractor.MsgStream`
# - `Feed`
# - `Shmarray`
# it's expected the `.from_msg()` on the other side
# will get instead some kind of msg-compat version
# that it can load.
msg.pop('stream')
msg.pop('feed')
msg.pop('_rt_shm')
msg.pop('_hist_shm')
return msg
@classmethod
def from_msg(
cls,
msg: dict,
readonly: bool = True,
) -> dict:
'''
@ -208,19 +175,13 @@ class Flume(Struct):
'''
mkt_msg = msg.pop('mkt')
if 'dst' in mkt_msg:
mkt = MktPair.from_msg(mkt_msg)
else:
# XXX NOTE: ``msgspec`` can encode `Decimal`
# but it doesn't decide to it by default since
# we aren't spec-cing these msgs as structs, SO
# we have to ensure we do a struct type case (which `.copy()`
# does) to ensure we get the right type!
mkt = Symbol(**mkt_msg).copy()
return cls(mkt=mkt, **msg)
from ..accounting import MktPair # cycle otherwise..
mkt = MktPair.from_msg(mkt_msg)
msg |= {'_readonly': readonly}
return cls(
mkt=mkt,
**msg,
)
def get_index(
self,
@ -240,3 +201,21 @@ class Flume(Struct):
)
imx = times.shape[0] - 1
return min(first, imx)
# only set by external msg or creator, never
# manually!
_has_vlm: bool = True
def has_vlm(self) -> bool:
if not self._has_vlm:
return False
# make sure that the instrument supports volume history
# (sometimes this is not the case for some commodities and
# derivatives)
vlm: np.ndarray = self.rt_shm.array['volume']
return not bool(
np.all(np.isin(vlm, -1))
or np.all(np.isnan(vlm))
)
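Standalone restatement of the `_has_vlm` heuristic above: a flow is treated as volume-less when every sample is the `-1` sentinel or `NaN` (the arrays below are illustrative):

import numpy as np

def has_real_volume(vlm: np.ndarray) -> bool:
    # mirrors the check in `Flume.has_vlm()` above
    return not bool(
        np.all(np.isin(vlm, -1))
        or np.all(np.isnan(vlm))
    )

assert has_real_volume(np.array([0., 12., 3.5]))
assert not has_real_volume(np.array([-1., -1., -1.]))
assert not has_real_volume(np.array([np.nan, np.nan]))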


@ -1,770 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Historical data business logic for load, backfill and tsdb storage.
'''
from __future__ import annotations
from collections import (
Counter,
)
from datetime import datetime
from functools import partial
import time
from types import ModuleType
from typing import (
Callable,
Optional,
TYPE_CHECKING,
)
import trio
from trio_typing import TaskStatus
import tractor
import pendulum
import numpy as np
from ._util import (
log,
)
from ..service import (
check_for_service,
)
from ._sharedmem import (
maybe_open_shm_array,
ShmArray,
_secs_in_day,
)
from ..accounting._mktinfo import (
unpack_fqme,
)
from ._source import base_iohlc_dtype
from ._sampling import (
open_sample_stream,
)
from ..brokers._util import (
DataUnavailable,
)
if TYPE_CHECKING:
from ..service.marketstore import Storage
from .feed import _FeedsBus
def diff_history(
array: np.ndarray,
timeframe: int,
start_dt: datetime,
end_dt: datetime,
last_tsdb_dt: datetime | None = None
) -> np.ndarray:
# no diffing with tsdb dt index possible..
if last_tsdb_dt is None:
return array
time = array['time']
return array[time > last_tsdb_dt.timestamp()]
async def start_backfill(
mod: ModuleType,
bfqsn: str,
shm: ShmArray,
timeframe: float,
sampler_stream: tractor.MsgStream,
feed_is_live: trio.Event,
last_tsdb_dt: Optional[datetime] = None,
storage: Optional[Storage] = None,
write_tsdb: bool = True,
tsdb_is_up: bool = False,
task_status: TaskStatus[tuple] = trio.TASK_STATUS_IGNORED,
) -> int:
hist: Callable[
[int, datetime, datetime],
tuple[np.ndarray, str]
]
config: dict[str, int]
async with mod.open_history_client(bfqsn) as (hist, config):
# get latest query's worth of history all the way
# back to what is recorded in the tsdb
array, start_dt, end_dt = await hist(
timeframe,
end_dt=None,
)
times = array['time']
# sample period step size in seconds
step_size_s = (
pendulum.from_timestamp(times[-1])
- pendulum.from_timestamp(times[-2])
).seconds
# if the market is open (aka we have a live feed) but the
# history sample step index seems off we report the surrounding
# data and drop into a bp. this case shouldn't really ever
# happen if we're doing history retrieval correctly.
if (
step_size_s == 60
and feed_is_live.is_set()
):
inow = round(time.time())
diff = inow - times[-1]
if abs(diff) > 60:
surr = array[-6:]
diff_in_mins = round(diff/60., ndigits=2)
log.warning(
f'STEP ERROR `{bfqsn}` for period {step_size_s}s:\n'
f'Off by `{diff}` seconds (or `{diff_in_mins}` mins)\n'
'Surrounding 6 time stamps:\n'
f'{list(surr["time"])}\n'
'Here are the surrounding 6 samples:\n'
f'{surr}\n'
)
# uncomment this for a hacker who wants to investigate
# this case manually..
# await tractor.breakpoint()
# frame's worth of sample-period-steps, in seconds
frame_size_s = len(array) * step_size_s
to_push = diff_history(
array,
timeframe,
start_dt,
end_dt,
last_tsdb_dt=last_tsdb_dt,
)
log.info(f'Pushing {to_push.size} to shm!')
shm.push(to_push, prepend=True)
# TODO: *** THIS IS A BUG ***
# we need to only broadcast to subscribers for this fqsn..
# otherwise all fsps get reset on every chart..
await sampler_stream.send('broadcast_all')
# signal that backfilling to tsdb's end datum is complete
bf_done = trio.Event()
# let caller unblock and deliver latest history frame
task_status.started((
start_dt,
end_dt,
bf_done,
))
# based on the sample step size, maybe load a certain amount history
if last_tsdb_dt is None:
if step_size_s not in (1, 60):
raise ValueError(
'`piker` only needs to support 1m and 1s sampling '
'but your api is trying to deliver a longer '
f'timeframe of {step_size_s} seconds..\n'
'So yuh.. dun do dat brudder.'
)
# when no tsdb "last datum" is provided, we just load
# some near-term history.
periods = {
1: {'days': 1},
60: {'days': 14},
}
if tsdb_is_up:
# do a decently sized backfill and load it into storage.
periods = {
1: {'days': 6},
60: {'years': 6},
}
period_duration = periods[step_size_s]
# NOTE: manually set the "latest" datetime which we intend to
# backfill history "until" so as to adhere to the history
# settings above when the tsdb is detected as being empty.
last_tsdb_dt = start_dt.subtract(**period_duration)
# configure async query throttling
# rate = config.get('rate', 1)
# XXX: legacy from ``trimeter`` code but unsupported now.
# erlangs = config.get('erlangs', 1)
# avoid duplicate history frames with a set of datetime frame
# starts and associated counts of how many duplicates we see
# per time stamp.
starts: Counter[datetime] = Counter()
# inline sequential loop where we simply pass the
# last retrieved start dt to the next request as
# it's end dt.
while end_dt > last_tsdb_dt:
log.debug(
f'Requesting {step_size_s}s frame ending in {start_dt}'
)
try:
array, next_start_dt, end_dt = await hist(
timeframe,
end_dt=start_dt,
)
# broker says there never was or is no more history to pull
except DataUnavailable:
log.warning(
f'NO-MORE-DATA: backend {mod.name} halted history!?'
)
# ugh, what's a better way?
# TODO: fwiw, we probably want a way to signal a throttle
# condition (eg. with ib) so that we can halt the
# request loop until the condition is resolved?
return
if (
next_start_dt in starts
and starts[next_start_dt] <= 6
):
start_dt = min(starts)
log.warning(
f"{bfqsn}: skipping duplicate frame @ {next_start_dt}"
)
starts[start_dt] += 1
continue
elif starts[next_start_dt] > 6:
log.warning(
f'NO-MORE-DATA: backend {mod.name} before {next_start_dt}?'
)
return
# only update new start point if not-yet-seen
start_dt = next_start_dt
starts[start_dt] += 1
assert array['time'][0] == start_dt.timestamp()
diff = end_dt - start_dt
frame_time_diff_s = diff.seconds
expected_frame_size_s = frame_size_s + step_size_s
if frame_time_diff_s > expected_frame_size_s:
# XXX: query result includes a start point prior to our
# expected "frame size" and thus is likely some kind of
# history gap (eg. market closed period, outage, etc.)
# so just report it to console for now.
log.warning(
f'History frame ending @ {end_dt} appears to have a gap:\n'
f'{diff} ~= {frame_time_diff_s} seconds'
)
to_push = diff_history(
array,
timeframe,
start_dt,
end_dt,
last_tsdb_dt=last_tsdb_dt,
)
ln = len(to_push)
if ln:
log.info(f'{ln} bars for {start_dt} -> {end_dt}')
else:
log.warning(
f'{ln} BARS TO PUSH after diff?!: {start_dt} -> {end_dt}'
)
# bail gracefully on shm allocation overrun/full condition
try:
shm.push(to_push, prepend=True)
except ValueError:
log.info(
f'Shm buffer overrun on: {start_dt} -> {end_dt}?'
)
# can't push the entire frame? so
# push only the amount that can fit..
break
log.info(
f'Shm pushed {ln} frame:\n'
f'{start_dt} -> {end_dt}'
)
if (
storage is not None
and write_tsdb
):
log.info(
f'Writing {ln} frame to storage:\n'
f'{start_dt} -> {end_dt}'
)
await storage.write_ohlcv(
f'{bfqsn}.{mod.name}', # lul..
to_push,
timeframe,
)
# TODO: can we only trigger this if the respective
# history in "in view"?!?
# XXX: extremely important, there can be no checkpoints
# in the block above to avoid entering new ``frames``
# values while we're pipelining the current ones to
# memory...
await sampler_stream.send('broadcast_all')
# short-circuit (for now)
bf_done.set()
async def basic_backfill(
bus: _FeedsBus,
mod: ModuleType,
bfqsn: str,
shms: dict[int, ShmArray],
sampler_stream: tractor.MsgStream,
feed_is_live: trio.Event,
) -> None:
# do a legacy incremental backfill from the provider.
log.info('No TSDB (marketstored) found, doing basic backfill..')
# start history backfill task ``backfill_bars()`` is
# a required backend func this must block until shm is
# filled with first set of ohlc bars
for timeframe, shm in shms.items():
try:
await bus.nursery.start(
partial(
start_backfill,
mod,
bfqsn,
shm,
timeframe,
sampler_stream,
feed_is_live,
)
)
except DataUnavailable:
# XXX: timeframe not supported for backend
continue
async def tsdb_backfill(
mod: ModuleType,
marketstore: ModuleType,
bus: _FeedsBus,
storage: Storage,
fqsn: str,
bfqsn: str,
shms: dict[int, ShmArray],
sampler_stream: tractor.MsgStream,
feed_is_live: trio.Event,
task_status: TaskStatus[
tuple[ShmArray, ShmArray]
] = trio.TASK_STATUS_IGNORED,
) -> None:
# TODO: this should be used verbatim for the pure
# shm backfiller approach below.
dts_per_tf: dict[int, datetime] = {}
# start history anal and load missing new data via backend.
for timeframe, shm in shms.items():
# loads a (large) frame of data from the tsdb depending
# on the db's query size limit.
tsdb_history, first_tsdb_dt, last_tsdb_dt = await storage.load(
fqsn,
timeframe=timeframe,
)
broker, *_ = unpack_fqme(fqsn)
try:
(
latest_start_dt,
latest_end_dt,
bf_done,
) = await bus.nursery.start(
partial(
start_backfill,
mod,
bfqsn,
shm,
timeframe,
sampler_stream,
feed_is_live,
last_tsdb_dt=last_tsdb_dt,
tsdb_is_up=True,
storage=storage,
)
)
except DataUnavailable:
# XXX: timeframe not supported for backend
dts_per_tf[timeframe] = (
tsdb_history,
last_tsdb_dt,
None,
None,
None,
)
continue
# tsdb_history = series.get(timeframe)
dts_per_tf[timeframe] = (
tsdb_history,
last_tsdb_dt,
latest_start_dt,
latest_end_dt,
bf_done,
)
# if len(hist_shm.array) < 2:
# TODO: there's an edge case here to solve where if the last
# frame before market close (at least on ib) was pushed and
# there was only "1 new" row pushed from the first backfill
# query-iteration, then the sample step sizing calcs will
# break upstream from here since you can't diff on at least
# 2 steps... probably should also add logic to compute from
# the tsdb series and stash that somewhere as meta data on
# the shm buffer?.. not sure.
# unblock the feed bus management task
# assert len(shms[1].array)
task_status.started()
async def back_load_from_tsdb(
timeframe: int,
shm: ShmArray,
):
(
tsdb_history,
last_tsdb_dt,
latest_start_dt,
latest_end_dt,
bf_done,
) = dts_per_tf[timeframe]
# sync to backend history task's query/load completion
if bf_done:
await bf_done.wait()
# TODO: eventually it'd be nice to not require a shm array/buffer
# to accomplish this.. maybe we can do some kind of tsdb direct to
# graphics format eventually in a child-actor?
# TODO: see if there's faster multi-field reads:
# https://numpy.org/doc/stable/user/basics.rec.html#accessing-multiple-fields
# re-index with a `time` and index field
prepend_start = shm._first.value
array = shm.array
if len(array):
shm_last_dt = pendulum.from_timestamp(shm.array[0]['time'])
else:
shm_last_dt = None
if last_tsdb_dt:
assert shm_last_dt >= last_tsdb_dt
# do diff against start index of last frame of history and only
# fill in an amount of datums from tsdb allows for most recent
# to be loaded into mem *before* tsdb data.
if (
last_tsdb_dt
and latest_start_dt
):
backfilled_size_s = (
latest_start_dt - last_tsdb_dt
).seconds
# if the shm buffer len is not large enough to contain
# all missing data between the most recent backend-queried frame
# and the most recent dt-index in the db we warn that we only
# want to load a portion of the next tsdb query to fill that
# space.
log.info(
f'{backfilled_size_s} seconds worth of {timeframe}s loaded'
)
# Load TSDB history into shm buffer (for display) if there is
# remaining buffer space.
if (
len(tsdb_history)
):
# load the first (smaller) bit of history originally loaded
# above from ``Storage.load()``.
to_push = tsdb_history[-prepend_start:]
shm.push(
to_push,
# insert the history pre a "days worth" of samples
# to leave some real-time buffer space at the end.
prepend=True,
# update_first=False,
# start=prepend_start,
field_map=marketstore.ohlc_key_map,
)
tsdb_last_frame_start = tsdb_history['Epoch'][0]
if timeframe == 1:
times = shm.array['time']
assert (times[1] - times[0]) == 1
# load as much from storage into shm possible (depends on
# user's shm size settings).
while shm._first.value > 0:
tsdb_history = await storage.read_ohlcv(
fqsn,
timeframe=timeframe,
end=tsdb_last_frame_start,
)
# empty query
if not len(tsdb_history):
break
next_start = tsdb_history['Epoch'][0]
if next_start >= tsdb_last_frame_start:
# no earlier data detected
break
else:
tsdb_last_frame_start = next_start
prepend_start = shm._first.value
to_push = tsdb_history[-prepend_start:]
# insert the history pre a "days worth" of samples
# to leave some real-time buffer space at the end.
shm.push(
to_push,
prepend=True,
field_map=marketstore.ohlc_key_map,
)
log.info(f'Loaded {to_push.shape} datums from storage')
# manually trigger step update to update charts/fsps
# which need an incremental update.
# NOTE: the way this works is super duper
# un-intuitive right now:
# - the broadcaster fires a msg to the fsp subsystem.
# - fsp subsys then checks for a sample step diff and
# possibly recomputes prepended history.
# - the fsp then sends back to the parent actor
# (usually a chart showing graphics for said fsp)
# which tells the chart to conduct a manual full
# graphics loop cycle.
await sampler_stream.send('broadcast_all')
# TODO: write new data to tsdb to be ready to for next read.
# backload from db (concurrently per timeframe) once backfilling of
# recent data is loaded from the backend provider (see
# ``bf_done.wait()`` call).
async with trio.open_nursery() as nurse:
for timeframe, shm in shms.items():
nurse.start_soon(
back_load_from_tsdb,
timeframe,
shm,
)
async def manage_history(
mod: ModuleType,
bus: _FeedsBus,
fqsn: str,
some_data_ready: trio.Event,
feed_is_live: trio.Event,
timeframe: float = 60, # in seconds
task_status: TaskStatus[
tuple[ShmArray, ShmArray]
] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Load and manage historical data including the loading of any
available series from `marketstore` as well as conducting real-time
update of both that existing db and the allocated shared memory
buffer.
'''
# TODO: is there a way to make each shm file key
# actor-tree-discovery-addr unique so we avoid collisions
# when doing tests which also allocate shms for certain instruments
# that may be in use on the system by some other running daemons?
# from tractor._state import _runtime_vars
# port = _runtime_vars['_root_mailbox'][1]
uid = tractor.current_actor().uid
name, uuid = uid
service = name.rstrip(f'.{mod.name}')
# (maybe) allocate shm array for this broker/symbol which will
# be used for fast near-term history capture and processing.
hist_shm, opened = maybe_open_shm_array(
# key=f'{fqsn}_hist_p{port}',
key=f'piker.{service}[{uuid[:16]}.{fqsn}.hist',
# use any broker defined ohlc dtype:
dtype=getattr(mod, '_ohlc_dtype', base_iohlc_dtype),
# we expect the sub-actor to write
readonly=False,
)
hist_zero_index = hist_shm.index - 1
# TODO: history validation
if not opened:
raise RuntimeError(
"Persistent shm for sym was already open?!"
)
rt_shm, opened = maybe_open_shm_array(
# key=f'{fqsn}_rt_p{port}',
# key=f'piker.{service}.{fqsn}_rt.{uuid}',
key=f'piker.{service}[{uuid[:16]}.{fqsn}.rt',
# use any broker defined ohlc dtype:
dtype=getattr(mod, '_ohlc_dtype', base_iohlc_dtype),
# we expect the sub-actor to write
readonly=False,
size=3*_secs_in_day,
)
# (for now) set the rt (hft) shm array with space to prepend
# only a few days worth of 1s history.
days = 2
start_index = days*_secs_in_day
rt_shm._first.value = start_index
rt_shm._last.value = start_index
rt_zero_index = rt_shm.index - 1
if not opened:
raise RuntimeError(
"Persistent shm for sym was already open?!"
)
# register 1s and 1m buffers with the global incrementer task
async with open_sample_stream(
period_s=1.,
shms_by_period={
1.: rt_shm.token,
60.: hist_shm.token,
},
# NOTE: we want to only open a stream for doing broadcasts on
# backfill operations, not receive the sample index-stream
# (since there's no code in this data feed layer that needs to
# consume it).
open_index_stream=True,
sub_for_broadcasts=False,
) as sample_stream:
log.info('Scanning for existing `marketstored`')
tsdb_is_up = await check_for_service('marketstored')
bfqsn = fqsn.replace('.' + mod.name, '')
open_history_client = getattr(mod, 'open_history_client', None)
assert open_history_client
if (
tsdb_is_up
and opened
and open_history_client
):
log.info('Found existing `marketstored`')
from ..service import marketstore
async with (
marketstore.open_storage_client(fqsn) as storage,
):
# TODO: drop returning the output that we pass in?
await bus.nursery.start(
tsdb_backfill,
mod,
marketstore,
bus,
storage,
fqsn,
bfqsn,
{
1: rt_shm,
60: hist_shm,
},
sample_stream,
feed_is_live,
)
# yield back after client connect with filled shm
task_status.started((
hist_zero_index,
hist_shm,
rt_zero_index,
rt_shm,
))
# indicate to caller that feed can be delivered to
# remote requesting client since we've loaded history
# data that can be used.
some_data_ready.set()
# history retrieval loop depending on user interaction
# and thus a small RPC protocol for remotely controlling
# what data is loaded for viewing.
await trio.sleep_forever()
# load less history if no tsdb can be found
elif (
not tsdb_is_up
and opened
):
await basic_backfill(
bus,
mod,
bfqsn,
{
1: rt_shm,
60: hist_shm,
},
sample_stream,
feed_is_live,
)
task_status.started((
hist_zero_index,
hist_shm,
rt_zero_index,
rt_shm,
))
some_data_ready.set()
await trio.sleep_forever()
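The (now removed) `diff_history()` above only forwards rows strictly newer than the last datum already in the tsdb; a minimal standalone sketch of that filter with invented timestamps:

from datetime import datetime, timezone
import numpy as np

last_tsdb_dt = datetime(2023, 1, 1, tzinfo=timezone.utc)
array = np.array(
    [
        (last_tsdb_dt.timestamp() - 60, 1.0),  # already stored, dropped
        (last_tsdb_dt.timestamp() + 60, 2.0),  # new, gets pushed to shm
    ],
    dtype=[('time', 'f8'), ('close', 'f8')],
)
time = array['time']
to_push = array[time > last_tsdb_dt.timestamp()]
assert len(to_push) == 1 and to_push[0]['close'] == 2.0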


@ -0,0 +1,173 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Tick event stream processing, filter-by-types, format-normalization.
'''
from itertools import chain
from typing import (
Any,
AsyncIterator,
)
# tick-type-classes template for all possible "lowest level" events
# that can be emitted by the "top of book" L1 queues and
# price-matching (with eventual clearing) in a double auction
# market (queuing) system.
_tick_groups: dict[str, set[str]] = {
'clears': {'trade', 'dark_trade', 'last'},
'bids': {'bid', 'bsize'},
'asks': {'ask', 'asize'},
}
# XXX also define the flattened set of all such "fundamental ticks"
# so that it can be used as filter, eg. in the graphics display
# loop to compute running windowed y-ranges B)
_auction_ticks: set[str] = set.union(*_tick_groups.values())
def frame_ticks(
quote: dict[str, Any],
ticks_by_type: dict | None = None,
ticks_in_order: list[dict[str, Any]] | None = None
) -> dict[
str,
list[dict[str, Any]]
]:
'''
XXX: build a tick-by-type table of lists
of tick messages. This allows for less
iteration on the receiver side by allowing for
a single "latest tick event" look up by
indexing the last entry in each sub-list.
tbt = {
'types': ['bid', 'asize', 'last', .. '<type_n>'],
'bid': [tick0, tick1, tick2, .., tickn],
'asize': [tick0, tick1, tick2, .., tickn],
'last': [tick0, tick1, tick2, .., tickn],
...
'<type_n>': [tick0, tick1, tick2, .., tickn],
}
If `ticks_in_order` is provided, append any retrieved ticks
since last iteration into this array/buffer/list.
'''
# TODO: once we decide to get fancy really we should
# have a shared mem tick buffer that is just
# continually filled and the UI just reads from it
# at its display rate.
tbt = ticks_by_type if ticks_by_type is not None else {}
if not (ticks := quote.get('ticks')):
return tbt
# append in reverse FIFO order for in-order iteration on
# receiver side.
tick: dict[str, Any]
for tick in ticks:
tbt.setdefault(
tick['type'],
[],
).append(tick)
# TODO: do we need this any more or can we just
# expect the receiver to unwind the below
# `ticks_by_type: dict`?
# => unwinding would potentially require a
# `dict[str, set | list]` instead with an
# included `'types'` field which is an (ordered)
# set of tick type fields in the order in which
# types arrived?
if ticks_in_order:
ticks_in_order.extend(ticks)
return tbt
def iterticks(
quote: dict,
types: tuple[str] = (
'trade',
'dark_trade',
),
deduplicate_darks: bool = False,
reverse: bool = False,
# TODO: should we offer delegating to `frame_ticks()` above
# with this?
frame_by_type: bool = False,
) -> AsyncIterator:
'''
Iterate through ticks delivered per quote cycle, filter and
yield any declared in `types`.
'''
if deduplicate_darks:
assert 'dark_trade' in types
# print(f"{quote}\n\n")
ticks = quote.get('ticks', ())
trades = {}
darks = {}
if ticks:
# do a first pass and attempt to remove duplicate dark
# trades with the same tick signature.
if deduplicate_darks:
for tick in ticks:
ttype = tick.get('type')
time = tick.get('time', None)
if time:
sig = (
time,
tick['price'],
tick.get('size')
)
if ttype == 'dark_trade':
darks[sig] = tick
elif ttype == 'trade':
trades[sig] = tick
# filter duplicates
for sig, tick in trades.items():
tick = darks.pop(sig, None)
if tick:
ticks.remove(tick)
# print(f'DUPLICATE {tick}')
# re-insert ticks
ticks.extend(list(chain(trades.values(), darks.values())))
# most-recent-first
if reverse:
ticks = reversed(ticks)
for tick in ticks:
# print(f"{quote['symbol']}: {tick}")
ttype = tick.get('type')
if ttype in types:
yield tick
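A quick usage sketch of the `frame_ticks()` defined above with an invented quote payload, showing the single "latest tick per type" lookup the table enables:

quote = {
    'symbol': 'xbtusd.kraken',  # hypothetical
    'ticks': [
        {'type': 'bid', 'price': 100.0, 'size': 1.0},
        {'type': 'last', 'price': 100.5, 'size': 0.1},
        {'type': 'last', 'price': 100.6, 'size': 0.2},
    ],
}
tbt = frame_ticks(quote)
assert list(tbt) == ['bid', 'last']
assert tbt['last'][-1]['price'] == 100.6  # latest clearing price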


@ -1,89 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Built-in (extension) types.
"""
import sys
from pprint import pformat
import msgspec
class Struct(
msgspec.Struct,
# https://jcristharif.com/msgspec/structs.html#tagged-unions
# tag='pikerstruct',
# tag=True,
):
'''
A "human friendlier" (aka repl buddy) struct subtype.
'''
def to_dict(self) -> dict:
return {
f: getattr(self, f)
for f in self.__struct_fields__
}
# Lul, doesn't seem to work that well..
# def __repr__(self):
# # only turn on pprint when we detect a python REPL
# # at runtime B)
# if (
# hasattr(sys, 'ps1')
# # TODO: check if we're in pdb
# ):
# return self.pformat()
# return super().__repr__()
def pformat(self) -> str:
return f'Struct({pformat(self.to_dict())})'
def copy(
self,
update: dict | None = None,
) -> msgspec.Struct:
'''
Validate-typecast all self defined fields, return a copy of us
with all such fields.
This is kinda like the default behaviour in `pydantic.BaseModel`.
'''
if update:
for k, v in update.items():
setattr(self, k, v)
# roundtrip serialize to validate
return msgspec.msgpack.Decoder(
type=type(self)
).decode(
msgspec.msgpack.Encoder().encode(self)
)
# NOTE XXX: this won't work on frozen types!
# use ``.copy()`` above in such cases.
def typecast(
self,
# fields: list[str] | None = None,
) -> None:
for fname, ftype in self.__annotations__.items():
setattr(self, fname, ftype(getattr(self, fname)))
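The `.copy()` above validates by round-tripping through msgpack; a minimal standalone `msgspec` sketch of that trick (the `Point` type is invented for illustration):

import msgspec

class Point(msgspec.Struct):
    x: float
    y: float

p = Point(x=1, y=2)  # struct creation doesn't validate, so these stay ints
# encode then decode against the struct type: fields come back
# validated/coerced (ints -> floats here), like `.copy()` above.
p2 = msgspec.msgpack.Decoder(type=Point).decode(
    msgspec.msgpack.Encoder().encode(p)
)
assert isinstance(p2.x, float) and p2.x == 1.0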


@ -18,15 +18,19 @@ Data feed synchronization protocols, init msgs, and general
data-provider-backend-agnostic schema definitions.
'''
from __future__ import annotations
from decimal import Decimal
from pprint import pformat
from types import ModuleType
from typing import (
Any,
Callable,
)
from .types import Struct
from ..accounting import (
from msgspec import field
from piker.types import Struct
from piker.accounting import (
Asset,
MktPair,
)
@ -49,7 +53,39 @@ class FeedInit(Struct, frozen=True):
'''
mkt_info: MktPair
shm_write_opts: dict[str, Any] | None = None
# NOTE: only field we use rn in ``.data.feed``
# TODO: maybe make a SamplerConfig(Struct)?
shm_write_opts: dict[str, Any] = field(
default_factory=lambda: {
'has_vlm': True,
'sum_tick_vlm': True,
})
# XXX: we group backend endpoints into 3
# groups to determine "degrees" of functionality.
_eps: dict[str, list[str]] = {
# basic API `Client` layer
'middleware': [
'get_client',
],
# (live) data streaming / loading / search
'datad': [
'get_mkt_info',
'open_history_client',
'open_symbol_search',
'stream_quotes',
],
# live order control and trading
'brokerd': [
'trades_dialogue',
'open_trade_dialog', # live order ctl
'norm_trade', # ledger normalizer for txns
],
}
def validate_backend(
@ -68,6 +104,20 @@ def validate_backend(
that haven't been implemented by this backend yet.
'''
for daemon_name, eps in _eps.items():
for name in eps:
ep: Callable = getattr(
mod,
name,
None,
)
if ep is None:
log.warning(
f'Provider backend {mod.name} is missing '
f'{daemon_name} support :(\n'
f'The following endpoint is missing: {name}'
)
inits: list[
FeedInit | dict[str, Any]
] = init_msgs
@ -119,6 +169,8 @@ def validate_backend(
mkt: MktPair
match init:
# backend is using old dict msg delivery
case {
'symbol_info': dict(symbol_info),
'fqsn': bs_fqme,
@ -144,16 +196,18 @@ def validate_backend(
'lot_tick_size',
Decimal('1'),
)
bs_mktid = init.get('bs_mktid') or bs_fqme
mkt = MktPair.from_fqme(
fqme=f'{bs_fqme}.{mod.name}',
price_tick=price_tick,
size_tick=size_tick,
bs_mktid=str(init['bs_mktid']),
bs_mktid=str(bs_mktid),
_atype=symbol_info['asset_type']
)
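For reference, a sketch of the legacy dict-style init msg this first case still accepts; the field values are invented and the `price_tick_size` key name is assumed from the surrounding tick-size handling. New backends should instead deliver a fully spec'd `FeedInit`/`MktPair` as matched further below:

legacy_init_msg: dict = {
    'fqsn': 'xbtusd.kraken',       # broker-specific fqme
    'bs_mktid': 'XBTUSD',          # optional, falls back to the fqsn
    'symbol_info': {
        'asset_type': 'crypto',
        'price_tick_size': 0.1,    # key name assumed
        'lot_tick_size': 1e-8,     # defaults to Decimal('1') when absent
    },
}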
# backend is using new `MktPair` but not entirely
case {
'mkt_info': MktPair(
dst=Asset(),
@ -172,7 +226,6 @@ def validate_backend(
) as init:
name: str = mod.name
log.info(
f'NICE JOB {name} BACKEND being fully up to API spec B)\n'
f"{name}'s `MktPair` info:\n"
f'{pformat(mkt.to_dict())}\n'
f'shm conf: {pformat(shm_opts)}\n'
@ -193,7 +246,8 @@ def validate_backend(
mkt = init.mkt_info
assert mkt.type_key
# `MktPair` wish list
# backend is using new `MktPair` but not embedded `Asset` types
# for the .src/.dst..
if not isinstance(mkt.src, Asset):
warn_msg += (
f'ALSO, {mod.name.upper()} should try to deliver\n'


@ -22,17 +22,40 @@ from typing import AsyncIterator
import numpy as np
from ._engine import cascade
from ._api import (
maybe_mk_fsp_shm,
Fsp,
)
from ._engine import (
cascade,
Cascade,
)
from ._volume import (
dolla_vlm,
flow_rates,
tina_vwap,
)
__all__ = ['cascade']
__all__: list[str] = [
'cascade',
'Cascade',
'maybe_mk_fsp_shm',
'Fsp',
'dolla_vlm',
'flow_rates',
'tina_vwap',
]
async def latency(
source: 'TickStream[Dict[str, float]]', # noqa
ohlcv: np.ndarray
) -> AsyncIterator[np.ndarray]:
"""Latency measurements, broker to piker.
"""
'''
Latency measurements, broker to piker.
'''
# TODO: do we want to offer yielding this async
# before the rt data connection comes up?


@ -174,19 +174,10 @@ def fsp(
return Fsp(wrapped, outputs=(wrapped.__name__,))
def mk_fsp_shm_key(
sym: str,
target: Fsp
) -> str:
actor_name, uuid = tractor.current_actor().uid
uuid_snip: str = uuid[:16]
return f'piker.{actor_name}[{uuid_snip}].{sym}.{target.name}'
def maybe_mk_fsp_shm(
sym: str,
target: Fsp,
size: int,
readonly: bool = True,
) -> (str, ShmArray, bool):
@ -195,7 +186,8 @@ def maybe_mk_fsp_shm(
exists, otherwise load the shm already existing for that token.
'''
assert isinstance(sym, str), '`sym` should be file-name-friendly `str`'
if not isinstance(sym, str):
raise ValueError('`sym: str` should be file-name-friendly')
# TODO: load output types from `Fsp`
# - should `index` be a required internal field?
@ -207,11 +199,14 @@ def maybe_mk_fsp_shm(
[(field_name, float) for field_name in target.outputs]
)
key = mk_fsp_shm_key(sym, target)
# (attempt to) uniquely key the fsp shm buffers
actor_name, uuid = tractor.current_actor().uid
uuid_snip: str = uuid[:16]
key: str = f'piker.{actor_name}[{uuid_snip}].{sym}.{target.name}'
shm, opened = maybe_open_shm_array(
key,
# TODO: create entry for each time frame
size=size,
dtype=fsp_dtype,
readonly=True,
)
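A quick sketch of the shm-key scheme used above, with a made-up actor name/uuid standing in for `tractor.current_actor().uid`:

def mk_key(
    actor_name: str,
    uuid: str,
    sym: str,
    fsp_name: str,
) -> str:
    # same format as above: piker.<actor>[<uuid-snip>].<sym>.<fsp>
    uuid_snip: str = uuid[:16]
    return f'piker.{actor_name}[{uuid_snip}].{sym}.{fsp_name}'

# hypothetical values purely for illustration
print(mk_key(
    'fsp_runtime',
    'deadbeefdeadbeefdeadbeef',
    'btcusdt.binance',
    'dolla_vlm',
))
# -> piker.fsp_runtime[deadbeefdeadbeef].btcusdt.binance.dolla_vlm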

View File

@ -18,13 +18,12 @@
core task logic for processing chains
'''
from dataclasses import dataclass
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from functools import partial
from typing import (
AsyncIterator,
Callable,
Optional,
Union,
)
import numpy as np
@ -33,9 +32,9 @@ from trio_typing import TaskStatus
import tractor
from tractor.msg import NamespacePath
from piker.types import Struct
from ..log import get_logger, get_console_log
from .. import data
from ..data import attach_shm_array
from ..data.feed import (
Flume,
Feed,
@ -45,23 +44,17 @@ from ..data._sampling import (
_default_delay_s,
open_sample_stream,
)
from ..accounting._mktinfo import Symbol
from ..accounting import MktPair
from ._api import (
Fsp,
_load_builtins,
_Token,
)
from .._profile import Profiler
from ..toolz import Profiler
log = get_logger(__name__)
@dataclass
class TaskTracker:
complete: trio.Event
cs: trio.CancelScope
async def filter_quotes_by_sym(
sym: str,
@ -82,51 +75,190 @@ async def filter_quotes_by_sym(
if quote:
yield quote
# TODO: unifying the abstractions in this FSP subsys/layer:
# -[ ] move the `.data.flows.Flume` type into this
# module/subsys/pkg?
# -[ ] ideas for further abstractions as per
# - https://github.com/pikers/piker/issues/216,
# - https://github.com/pikers/piker/issues/270:
# - a (financial signal) ``Flow`` would be a "collection" of such
# minimal cascades. Some engineering-based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
# - https://en.wikipedia.org/wiki/Signal-flow_graph
# - https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
async def fsp_compute(
# -[ ] we probably want to eval THE BELOW design and unify with the
# proto `TaskManager` in the `tractor` dev branch as well as with
# our below idea for `Cascade`:
# - https://github.com/goodboy/tractor/pull/363
class Cascade(Struct):
'''
As per sig-proc engineering parlance, this is a chaining of
`Flume`s, which are themselves collections of "Streams"
implemented currently via `ShmArray`s.
symbol: Symbol,
flume: Flume,
A `Cascade` is the minimal "connection" of 2 `Flumes`
as per circuit parlance:
https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
TODO:
-[ ] could cover the combination of our `FspAdmin` and the
backend `.fsp._engine` related machinery to "connect" one flume
to another?
'''
# TODO: make these `Flume`s
src: Flume
dst: Flume
tn: trio.Nursery
fsp: Fsp # UI-side middleware ctl API
# filled during cascade/.bind_func() (fsp_compute) init phases
bind_func: Callable | None = None
complete: trio.Event | None = None
cs: trio.CancelScope | None = None
client_stream: tractor.MsgStream | None = None
async def resync(self) -> int:
# TODO: adopt an incremental update engine/approach
# where possible here eventually!
log.info(f're-syncing fsp {self.fsp.name} to source')
self.cs.cancel()
await self.complete.wait()
index: int = await self.tn.start(self.bind_func)
# always trigger UI refresh after history update,
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
# ``piker.ui._display.trigger_update()``.
dst_shm: ShmArray = self.dst.rt_shm
await self.client_stream.send({
'fsp_update': {
'key': dst_shm.token,
'first': dst_shm._first.value,
'last': dst_shm._last.value,
}
})
return index
def is_synced(self) -> tuple[bool, int, int]:
'''
Predicate to determine if a destination FSP
output array is aligned to its source array.
'''
src_shm: ShmArray = self.src.rt_shm
dst_shm: ShmArray = self.dst.rt_shm
step_diff = src_shm.index - dst_shm.index
len_diff = abs(len(src_shm.array) - len(dst_shm.array))
synced: bool = not (
# the source is likely backfilling and we must
# sync history calculations
len_diff > 2
# we aren't step synced to the source and may be
# leading/lagging by a step
or step_diff > 1
or step_diff < 0
)
if not synced:
fsp: Fsp = self.fsp
log.warning(
'***DESYNCED FSP***\n'
f'{fsp.ns_path}@{src_shm.token}\n'
f'step_diff: {step_diff}\n'
f'len_diff: {len_diff}\n'
)
return (
synced,
step_diff,
len_diff,
)
async def poll_and_sync_to_step(self) -> int:
synced, step_diff, _ = self.is_synced()
while not synced:
await self.resync()
synced, step_diff, _ = self.is_synced()
return step_diff
@acm
async def open_edge(
self,
bind_func: Callable,
) -> int:
self.bind_func = bind_func
index = await self.tn.start(bind_func)
yield index
# TODO: what do we want on teardown/error?
# -[ ] dynamic reconnection after update?
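For reference, the `is_synced()` predicate above reduces to the following pure-int check (a sketch only; the real method reads `ShmArray.index` and array lengths from the src/dst flumes):

def is_synced(
    src_index: int,
    dst_index: int,
    src_len: int,
    dst_len: int,
) -> tuple[bool, int, int]:
    step_diff: int = src_index - dst_index
    len_diff: int = abs(src_len - dst_len)
    synced: bool = not (
        len_diff > 2       # src likely backfilling, must resync history
        or step_diff > 1   # lagging the source by more than a step
        or step_diff < 0   # somehow leading the source
    )
    return synced, step_diff, len_diff

assert is_synced(100, 100, 500, 500)[0]
assert not is_synced(100, 98, 500, 500)[0]   # lagging 2 steps -> resync
assert not is_synced(100, 100, 500, 490)[0]  # history length mismatch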
async def connect_streams(
casc: Cascade,
mkt: MktPair,
quote_stream: trio.abc.ReceiveChannel,
src: Flume,
dst: Flume,
src: ShmArray,
dst: ShmArray,
func: Callable,
edge_func: Callable,
# attach_stream: bool = False,
task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Stream and per-sample compute and write the cascade of
2 `Flumes`/streams given some operating `func`.
https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
Not literally, but something like:
edge_func(Flume_in) -> Flume_out
'''
profiler = Profiler(
delayed=False,
disabled=True
)
fqsn = symbol.fqme
out_stream = func(
# TODO: just pull it from src.mkt.fqme no?
# fqme: str = mkt.fqme
fqme: str = src.mkt.fqme
# TODO: dynamic introspection of what the underlying (vertex)
# function actually requires from input node (flumes) then
# deliver those inputs as part of a graph "compilation" step?
out_stream = edge_func(
# TODO: do we even need this if we do the feed api right?
# shouldn't a local stream do this before we get a handle
# to the async iterable? it's that or we do some kinda
# async itertools style?
filter_quotes_by_sym(fqsn, quote_stream),
filter_quotes_by_sym(fqme, quote_stream),
# XXX: currently the ``ohlcv`` arg
flume.rt_shm,
# XXX: currently the ``ohlcv`` arg, but we should allow
# (dynamic) requests for src flume (node) streams?
src.rt_shm,
)
# HISTORY COMPUTE PHASE
# conduct a single iteration of fsp with historical bars input
# and get historical output.
history_output: Union[
dict[str, np.ndarray], # multi-output case
np.ndarray, # single output case
]
history_output: (
dict[str, np.ndarray] # multi-output case
| np.ndarray, # single output case
)
history_output = await anext(out_stream)
func_name = func.__name__
func_name = edge_func.__name__
profiler(f'{func_name} generated history')
# build struct array with an 'index' field to push as history
@ -134,10 +266,12 @@ async def fsp_compute(
# TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no?
# if the output array is multi-field then push
# each respective field.
fields = getattr(dst.array.dtype, 'fields', None).copy()
dst_shm: ShmArray = dst.rt_shm
fields = getattr(dst_shm.array.dtype, 'fields', None).copy()
fields.pop('index')
history_by_field: Optional[np.ndarray] = None
src_time = src.array['time']
history_by_field: np.ndarray | None = None
src_shm: ShmArray = src.rt_shm
src_time = src_shm.array['time']
if (
fields and
@ -156,7 +290,7 @@ async def fsp_compute(
if history_by_field is None:
if output is None:
length = len(src.array)
length = len(src_shm.array)
else:
length = len(output)
@ -165,7 +299,7 @@ async def fsp_compute(
# will be pushed to shm.
history_by_field = np.zeros(
length,
dtype=dst.array.dtype
dtype=dst_shm.array.dtype
)
if output is None:
@ -182,13 +316,13 @@ async def fsp_compute(
)
history_by_field = np.zeros(
len(history_output),
dtype=dst.array.dtype
dtype=dst_shm.array.dtype
)
history_by_field[func_name] = history_output
history_by_field['time'] = src_time[-len(history_by_field):]
history_output['time'] = src.array['time']
history_output['time'] = src_shm.array['time']
# TODO: XXX:
# THERE'S A BIG BUG HERE WITH THE `index` field since we're
@ -201,11 +335,11 @@ async def fsp_compute(
# is `index` aware such that historical data can be indexed
# relative to the true first datum? Not sure if this is sane
# for incremental computations.
first = dst._first.value = src._first.value
first = dst_shm._first.value = src_shm._first.value
# TODO: can we use this `start` flag instead of the manual
# setting above?
index = dst.push(
index = dst_shm.push(
history_by_field,
start=first,
)
@ -216,12 +350,9 @@ async def fsp_compute(
# setup a respawn handle
with trio.CancelScope() as cs:
# TODO: might be better to just make a "restart" method where
# the target task is spawned implicitly and then the event is
# set via some higher level api? At that point we might as well
# be writing a one-cancels-one nursery though right?
tracker = TaskTracker(trio.Event(), cs)
task_status.started((tracker, index))
casc.cs = cs
casc.complete = trio.Event()
task_status.started(index)
profiler(f'{func_name} yield last index')
@ -235,12 +366,12 @@ async def fsp_compute(
log.debug(f"{func_name}: {processed}")
key, output = processed
# dst.array[-1][key] = output
dst.array[[key, 'time']][-1] = (
dst_shm.array[[key, 'time']][-1] = (
output,
# TODO: what about pushing ``time.time_ns()``
# in which case we'll need to round at the graphics
# processing / sampling layer?
src.array[-1]['time']
src_shm.array[-1]['time']
)
# NOTE: for now we aren't streaming this to the consumer
@ -252,7 +383,7 @@ async def fsp_compute(
# N-consumers who subscribe for the real-time output,
# which we'll likely want to implement using local-mem
# chans for the fan out?
# index = src.index
# index = src_shm.index
# if attach_stream:
# await client_stream.send(index)
@ -262,7 +393,7 @@ async def fsp_compute(
# log.info(f'FSP quote too fast: {hz}')
# last = time.time()
finally:
tracker.complete.set()
casc.complete.set()
@tractor.context
@ -271,17 +402,17 @@ async def cascade(
ctx: tractor.Context,
# data feed key
fqsn: str,
src_shm_token: dict,
dst_shm_token: tuple[str, np.dtype],
fqme: str,
# flume pair cascaded using an "edge function"
src_flume_addr: dict,
dst_flume_addr: dict,
ns_path: NamespacePath,
shm_registry: dict[str, _Token],
zero_on_step: bool = False,
loglevel: Optional[str] = None,
loglevel: str | None = None,
) -> None:
'''
@ -297,8 +428,14 @@ async def cascade(
if loglevel:
get_console_log(loglevel)
src = attach_shm_array(token=src_shm_token)
dst = attach_shm_array(readonly=False, token=dst_shm_token)
src: Flume = Flume.from_msg(src_flume_addr)
dst: Flume = Flume.from_msg(
dst_flume_addr,
readonly=False,
)
# src: ShmArray = attach_shm_array(token=src_shm_token)
# dst: ShmArray = attach_shm_array(readonly=False, token=dst_shm_token)
reg = _load_builtins()
lines = '\n'.join([f'{key.rpartition(":")[2]} => {key}' for key in reg])
@ -306,11 +443,11 @@ async def cascade(
f'Registered FSP set:\n{lines}'
)
# update actorlocal flows table which registers
# readonly "instances" of this fsp for symbol/source
# so that consumer fsps can look it up by source + fsp.
# TODO: ugh i hate this wind/unwind to list over the wire
# but not sure how else to do it.
# NOTE XXX: update actorlocal flows table which registers
# readonly "instances" of this fsp for symbol/source so that
# consumer fsps can look it up by source + fsp.
# TODO: ugh i hate this wind/unwind to list over the wire but
# not sure how else to do it.
for (token, fsp_name, dst_token) in shm_registry:
Fsp._flow_registry[(
_Token.from_msg(token),
@ -320,16 +457,19 @@ async def cascade(
fsp: Fsp = reg.get(
NamespacePath(ns_path)
)
func = fsp.func
func: Callable = fsp.func
if not func:
# TODO: assume it's a func target path
raise ValueError(f'Unknown fsp target: {ns_path}')
_fqme: str = src.mkt.fqme
assert _fqme == fqme
# open a data feed stream with requested broker
feed: Feed
async with data.feed.maybe_open_feed(
[fqsn],
[fqme],
# TODO throttle tick outputs from *this* daemon since
# it'll emit tons of ticks due to the throttle only
@ -339,177 +479,142 @@ async def cascade(
) as feed:
flume = feed.flumes[fqsn]
symbol = flume.symbol
assert src.token == flume.rt_shm.token
flume: Flume = feed.flumes[fqme]
# XXX: can't do this since flume.feed will be set XD
# assert flume == src
assert flume.mkt == src.mkt
mkt: MktPair = flume.mkt
# NOTE: FOR NOW, sanity checks around the feed as being
# always the src flume (until we get to fancier/lengthier
# chains/graphs).
assert src.rt_shm.token == flume.rt_shm.token
# XXX: won't work bc the _hist_shm_token value will be
# list[list] after IPC..
# assert flume.to_msg() == src_flume_addr
profiler(f'{func}: feed up')
func_name = func.__name__
func_name: str = func.__name__
async with (
trio.open_nursery() as n,
trio.open_nursery() as tn,
):
# TODO: might be better to just make a "restart" method where
# the target task is spawned implicitly and then the event is
# set via some higher level api? At that point we might as well
# be writing a one-cancels-one nursery though right?
casc = Cascade(
src,
dst,
tn,
fsp,
)
# TODO: this seems like it should be wrapped somewhere?
fsp_target = partial(
fsp_compute,
symbol=symbol,
flume=flume,
connect_streams,
casc=casc,
mkt=mkt,
quote_stream=flume.stream,
# shm
# flumes and shm passthrough
src=src,
dst=dst,
# target
func=func
# chain function which takes src flume input(s)
# and renders dst flume output(s)
edge_func=func
)
async with casc.open_edge(
bind_func=fsp_target,
) as index:
# casc.bind_func = fsp_target
# index = await tn.start(fsp_target)
dst_shm: ShmArray = dst.rt_shm
src_shm: ShmArray = src.rt_shm
tracker, index = await n.start(fsp_target)
if zero_on_step:
last = dst.rt_shm.array[-1:]
zeroed = np.zeros(last.shape, dtype=last.dtype)
if zero_on_step:
last = dst.array[-1:]
zeroed = np.zeros(last.shape, dtype=last.dtype)
profiler(f'{func_name}: fsp up')
profiler(f'{func_name}: fsp up')
# sync to client-side actor
await ctx.started(index)
# sync client
await ctx.started(index)
# XXX: rt stream with client which we MUST
# open here (and keep it open) in order to make
# incremental "updates" as history prepends take
# place.
async with ctx.open_stream() as client_stream:
casc.client_stream: tractor.MsgStream = client_stream
# XXX: rt stream with client which we MUST
# open here (and keep it open) in order to make
# incremental "updates" as history prepends take
# place.
async with ctx.open_stream() as client_stream:
s, step, ld = casc.is_synced()
# TODO: these likely should all become
# methods of this ``TaskLifetime`` or wtv
# abstraction..
async def resync(
tracker: TaskTracker,
# detect sample period step for subscription to increment
# signal
times = src.rt_shm.array['time']
if len(times) > 1:
last_ts = times[-1]
delay_s: float = float(last_ts - times[times != last_ts][-1])
else:
# our default "HFT" sample rate.
delay_s: float = _default_delay_s
) -> tuple[TaskTracker, int]:
# TODO: adopt an incremental update engine/approach
# where possible here eventually!
log.info(f're-syncing fsp {func_name} to source')
tracker.cs.cancel()
await tracker.complete.wait()
tracker, index = await n.start(fsp_target)
# sub and increment the underlying shared memory buffer
# on every step msg received from the global `samplerd`
# service.
async with open_sample_stream(
float(delay_s)
) as istream:
# always trigger UI refresh after history update,
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
# ``piker.ui._display.trigger_update()``.
await client_stream.send({
'fsp_update': {
'key': dst_shm_token,
'first': dst._first.value,
'last': dst._last.value,
}
})
return tracker, index
profiler(f'{func_name}: sample stream up')
profiler.finish()
def is_synced(
src: ShmArray,
dst: ShmArray
) -> tuple[bool, int, int]:
'''
Predicate to determine if a destination FSP
output array is aligned to its source array.
async for i in istream:
# print(f'FSP incrementing {i}')
'''
step_diff = src.index - dst.index
len_diff = abs(len(src.array) - len(dst.array))
return not (
# the source is likely backfilling and we must
# sync history calculations
len_diff > 2
# respawn the compute task if the source
# array has been updated such that we compute
# new history from the (prepended) source.
synced, step_diff, _ = casc.is_synced()
if not synced:
step_diff: int = await casc.poll_and_sync_to_step()
# we aren't step synced to the source and may be
# leading/lagging by a step
or step_diff > 1
or step_diff < 0
), step_diff, len_diff
# skip adding a last bar since we should already
# be step aligned
if step_diff == 0:
continue
async def poll_and_sync_to_step(
tracker: TaskTracker,
src: ShmArray,
dst: ShmArray,
# read out last shm row, copy and write new row
array = dst_shm.array
) -> tuple[TaskTracker, int]:
# some metrics like vlm should be reset
# to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
synced, step_diff, _ = is_synced(src, dst)
while not synced:
tracker, index = await resync(tracker)
synced, step_diff, _ = is_synced(src, dst)
dst.rt_shm.push(last)
return tracker, step_diff
# sync with source buffer's time step
src_l2 = src_shm.array[-2:]
src_li, src_lt = src_l2[-1][['index', 'time']]
src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst_shm._array['time'][src_li] = src_lt
dst_shm._array['time'][src_2li] = src_2lt
s, step, ld = is_synced(src, dst)
# detect sample period step for subscription to increment
# signal
times = src.array['time']
if len(times) > 1:
last_ts = times[-1]
delay_s = float(last_ts - times[times != last_ts][-1])
else:
# our default "HFT" sample rate.
delay_s = _default_delay_s
# sub and increment the underlying shared memory buffer
# on every step msg received from the global `samplerd`
# service.
async with open_sample_stream(float(delay_s)) as istream:
profiler(f'{func_name}: sample stream up')
profiler.finish()
async for i in istream:
# print(f'FSP incrementing {i}')
# respawn the compute task if the source
# array has been updated such that we compute
# new history from the (prepended) source.
synced, step_diff, _ = is_synced(src, dst)
if not synced:
tracker, step_diff = await poll_and_sync_to_step(
tracker,
src,
dst,
)
# skip adding a last bar since we should already
# be step aligned
if step_diff == 0:
continue
# read out last shm row, copy and write new row
array = dst.array
# some metrics like vlm should be reset
# to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
dst.push(last)
# sync with source buffer's time step
src_l2 = src.array[-2:]
src_li, src_lt = src_l2[-1][['index', 'time']]
src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst._array['time'][src_li] = src_lt
dst._array['time'][src_2li] = src_2lt
# last2 = dst.array[-2:]
# if (
# last2[-1]['index'] != src_li
# or last2[-2]['index'] != src_2li
# ):
# dstl2 = list(last2)
# srcl2 = list(src_l2)
# print(
# # f'{dst.token}\n'
# f'src: {srcl2}\n'
# f'dst: {dstl2}\n'
# )
# last2 = dst.array[-2:]
# if (
# last2[-1]['index'] != src_li
# or last2[-2]['index'] != src_2li
# ):
# dstl2 = list(last2)
# srcl2 = list(src_l2)
# print(
# # f'{dst.token}\n'
# f'src: {srcl2}\n'
# f'dst: {dstl2}\n'
# )
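The sample-period detection used above (to pick the right `samplerd` step stream) boils down to the gap between the live bar's timestamp and the most recent differing one; a small numpy sketch with made-up values:

import numpy as np

# hypothetical 'time' column: settled 1s bars plus a repeating live bar ts
times = np.array([100.0, 101.0, 102.0, 103.0, 103.0, 103.0])

if len(times) > 1:
    last_ts = times[-1]
    # most recent timestamp that differs from the live one
    delay_s: float = float(last_ts - times[times != last_ts][-1])
else:
    delay_s = 1.0  # fall back to a default sample rate

print(delay_s)  # -> 1.0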

View File

@ -24,7 +24,7 @@ import numpy as np
from numba import jit, float64, optional, int64
from ._api import fsp
from ..data._normalize import iterticks
from ..data import iterticks
from ..data._sharedmem import ShmArray

View File

@ -20,7 +20,7 @@ import numpy as np
from tractor.trionics._broadcast import AsyncReceiver
from ._api import fsp
from ..data._normalize import iterticks
from ..data import iterticks
from ..data._sharedmem import ShmArray
from ._momo import _wma
from ..log import get_logger

View File

@ -40,7 +40,10 @@ def get_logger(
Return the package log or a sub-log for `name` if provided.
'''
return tractor.log.get_logger(name=name, _root_name=_proj_name)
return tractor.log.get_logger(
name=name,
_root_name=_proj_name,
)
def get_console_log(

View File

@ -14,49 +14,45 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Actor-runtime service orchestration machinery.
'''
Actor runtime primitives and (distributed) service APIs for,
"""
from __future__ import annotations
- daemon-service mgmt: `_daemon` (i.e. low-level spawn and supervise machinery
for sub-actors like `brokerd`, `emsd`, `datad`, etc.)
from ._mngr import Services
from ._registry import ( # noqa
_tractor_kwargs,
_default_reg_addr,
_default_registry_host,
_default_registry_port,
open_registry,
find_service,
check_for_service,
- service-actor supervision (via `trio` tasks) API: `._mngr`
- discovery interface (via light wrapping around `tractor`'s built-in
prot): `._registry`
- `docker` cntr SC supervision for use with `trio`: `_ahab`
- wrappers for marketstore and elasticsearch dbs
=> TODO: maybe to (re)move elsewhere?
'''
from ._mngr import Services as Services
from ._registry import (
_tractor_kwargs as _tractor_kwargs,
_default_reg_addr as _default_reg_addr,
_default_registry_host as _default_registry_host,
_default_registry_port as _default_registry_port,
open_registry as open_registry,
find_service as find_service,
check_for_service as check_for_service,
)
from ._daemon import ( # noqa
maybe_spawn_daemon,
spawn_emsd,
maybe_open_emsd,
from ._daemon import (
maybe_spawn_daemon as maybe_spawn_daemon,
spawn_emsd as spawn_emsd,
maybe_open_emsd as maybe_open_emsd,
)
from ._actor_runtime import (
open_piker_runtime,
maybe_open_pikerd,
open_pikerd,
get_tractor_runtime_kwargs,
open_piker_runtime as open_piker_runtime,
maybe_open_pikerd as maybe_open_pikerd,
open_pikerd as open_pikerd,
get_runtime_vars as get_runtime_vars,
)
from ..brokers._daemon import (
spawn_brokerd,
maybe_spawn_brokerd,
spawn_brokerd as spawn_brokerd,
maybe_spawn_brokerd as maybe_spawn_brokerd,
)
__all__ = [
'check_for_service',
'Services',
'maybe_spawn_daemon',
'spawn_brokerd',
'maybe_spawn_brokerd',
'spawn_emsd',
'maybe_open_emsd',
'open_piker_runtime',
'maybe_open_pikerd',
'open_pikerd',
'get_tractor_runtime_kwargs',
]

View File

@ -19,8 +19,6 @@
"""
from __future__ import annotations
from pprint import pformat
from functools import partial
import os
from typing import (
Optional,
@ -35,7 +33,6 @@ import tractor
import trio
from ._util import (
log, # sub-sys logger
get_console_log,
)
from ._mngr import (
@ -48,7 +45,7 @@ from ._registry import ( # noqa
)
def get_tractor_runtime_kwargs() -> dict[str, Any]:
def get_runtime_vars() -> dict[str, Any]:
'''
Deliver ``tractor`` related runtime variables in a `dict`.
@ -59,6 +56,8 @@ def get_tractor_runtime_kwargs() -> dict[str, Any]:
@acm
async def open_piker_runtime(
name: str,
registry_addrs: list[tuple[str, int]] = [],
enable_modules: list[str] = [],
loglevel: Optional[str] = None,
@ -66,8 +65,6 @@ async def open_piker_runtime(
# for data daemons when running in production.
debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
# TODO: once we have `rsyscall` support we will read a config
# and spawn the service tree distributed per that.
start_method: str = 'trio',
@ -77,7 +74,7 @@ async def open_piker_runtime(
) -> tuple[
tractor.Actor,
tuple[str, int],
list[tuple[str, int]],
]:
'''
Start a piker actor whose runtime will automatically sync with
@ -87,21 +84,31 @@ async def open_piker_runtime(
a root actor.
'''
# check for existing runtime, boot it
# if not already running.
try:
# check for existing runtime
actor = tractor.current_actor().uid
actor = tractor.current_actor()
except tractor._exceptions.NoRuntime:
tractor._state._runtime_vars[
'piker_vars'] = tractor_runtime_overrides
'piker_vars'
] = tractor_runtime_overrides
registry_addr = registry_addr or _default_reg_addr
# NOTE: if no registrar list passed used the default of just
# setting it as the root actor on localhost.
registry_addrs = (
registry_addrs
or [_default_reg_addr]
)
if ems := tractor_kwargs.pop('enable_modules', None):
# import pdbp; pdbp.set_trace()
enable_modules.extend(ems)
async with (
tractor.open_root_actor(
# passed through to ``open_root_actor``
arbiter_addr=registry_addr,
registry_addrs=registry_addrs,
name=name,
loglevel=loglevel,
debug_mode=debug_mode,
@ -113,24 +120,30 @@ async def open_piker_runtime(
enable_modules=enable_modules,
**tractor_kwargs,
) as _,
) as actor,
open_registry(registry_addr, ensure_exists=False) as addr,
open_registry(
registry_addrs,
ensure_exists=False,
) as addrs,
):
yield (
tractor.current_actor(),
addr,
)
else:
async with open_registry(registry_addr) as addr:
assert actor is tractor.current_actor()
yield (
actor,
addr,
addrs,
)
else:
async with open_registry(
registry_addrs
) as addrs:
yield (
actor,
addrs,
)
_root_dname = 'pikerd'
_root_modules = [
_root_dname: str = 'pikerd'
_root_modules: list[str] = [
__name__,
'piker.service._daemon',
'piker.brokers._daemon',
@ -144,18 +157,13 @@ _root_modules = [
@acm
async def open_pikerd(
registry_addrs: list[tuple[str, int]],
loglevel: str | None = None,
# XXX: you should pretty much never want debug mode
# for data daemons when running in production.
debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
# db init flags
tsdb: bool = False,
es: bool = False,
drop_root_perms_for_ahab: bool = True,
**kwargs,
@ -167,79 +175,45 @@ async def open_pikerd(
alive underlying services (see below).
'''
# NOTE: for the root daemon we always enable the root
# mod set and we `list.extend()` it into wtv the
# caller requested.
# TODO: make this mod set more strict?
# -[ ] eventually we should be able to avoid
# having the root have more then permissions to spawn other
# specialized daemons I think?
ems: list[str] = kwargs.setdefault('enable_modules', [])
ems.extend(_root_modules)
async with (
open_piker_runtime(
name=_root_dname,
# TODO: eventually we should be able to avoid
# having the root have more than permissions to
# spawn other specialized daemons I think?
enable_modules=_root_modules,
loglevel=loglevel,
debug_mode=debug_mode,
registry_addr=registry_addr,
registry_addrs=registry_addrs,
**kwargs,
) as (root_actor, reg_addr),
) as (
root_actor,
reg_addrs,
),
tractor.open_nursery() as actor_nursery,
trio.open_nursery() as service_nursery,
):
if root_actor.accept_addr != reg_addr:
raise RuntimeError(
f'`pikerd` failed to bind on {reg_addr}!\n'
'Maybe you have another daemon already running?'
)
for addr in reg_addrs:
if addr not in root_actor.accept_addrs:
raise RuntimeError(
f'`pikerd` failed to bind on {addr}!\n'
'Maybe you have another daemon already running?'
)
# assign globally for future daemon/task creation
Services.actor_n = actor_nursery
Services.service_n = service_nursery
Services.debug_mode = debug_mode
if tsdb:
from ._ahab import start_ahab
from .marketstore import start_marketstore
log.info('Spawning `marketstore` supervisor')
ctn_ready, config, (cid, pid) = await service_nursery.start(
partial(
start_ahab,
'marketstored',
start_marketstore,
loglevel=loglevel,
drop_root_perms=drop_root_perms_for_ahab,
)
)
log.info(
f'`marketstored` up!\n'
f'pid: {pid}\n'
f'container id: {cid[:12]}\n'
f'config: {pformat(config)}'
)
if es:
from ._ahab import start_ahab
from .elastic import start_elasticsearch
log.info('Spawning `elasticsearch` supervisor')
ctn_ready, config, (cid, pid) = await service_nursery.start(
partial(
start_ahab,
'elasticsearch',
start_elasticsearch,
loglevel=loglevel,
drop_root_perms=drop_root_perms_for_ahab,
)
)
log.info(
f'`elasticsearch` up!\n'
f'pid: {pid}\n'
f'container id: {cid[:12]}\n'
f'config: {pformat(config)}'
)
try:
yield Services
@ -277,12 +251,9 @@ async def open_pikerd(
@acm
async def maybe_open_pikerd(
loglevel: Optional[str] = None,
registry_addr: None | tuple = None,
tsdb: bool = False,
es: bool = False,
drop_root_perms_for_ahab: bool = True,
registry_addrs: list[tuple[str, int]] | None = None,
loglevel: str | None = None,
**kwargs,
) -> tractor._portal.Portal | ClassVar[Services]:
@ -308,37 +279,51 @@ async def maybe_open_pikerd(
# async with open_portal(chan) as arb_portal:
# yield arb_portal
registry_addrs: list[tuple[str, int]] = (
registry_addrs
or [_default_reg_addr]
)
pikerd_portal: tractor.Portal | None
async with (
open_piker_runtime(
name=query_name,
registry_addr=registry_addr,
registry_addrs=registry_addrs,
loglevel=loglevel,
**kwargs,
) as _,
tractor.find_actor(
_root_dname,
arbiter_sockaddr=registry_addr,
) as portal
) as (actor, addrs),
):
# connect to any existing daemon presuming
# its registry socket was selected.
if (
portal is not None
):
yield portal
if _root_dname in actor.uid:
yield None
return
# NOTE: IFF running in disti mode, try to attach to any
# existing (host-local) `pikerd`.
else:
async with tractor.find_actor(
_root_dname,
registry_addrs=registry_addrs,
only_first=True,
# raise_on_none=True,
) as pikerd_portal:
# connect to any existing remote daemon presuming its
# registry socket was selected.
if pikerd_portal is not None:
# sanity check that we are actually connecting to
# a remote process and not ourselves.
assert actor.uid != pikerd_portal.channel.uid
assert registry_addrs
yield pikerd_portal
return
# presume pikerd role since no daemon could be found at
# configured address
async with open_pikerd(
loglevel=loglevel,
registry_addr=registry_addr,
# ahabd (docker super) specific controls
tsdb=tsdb,
es=es,
drop_root_perms_for_ahab=drop_root_perms_for_ahab,
registry_addrs=registry_addrs,
# passthrough to ``tractor`` init
**kwargs,
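The attach-or-spawn flow above can be summarized as a small decision function (a sketch of the control flow only, not the actual `tractor` discovery calls; the names here are illustrative):

from typing import Literal

def pikerd_role(
    our_name: str,
    discovered: dict[str, object],  # actor name -> portal-like handle
) -> tuple[Literal['is_root', 'attach', 'spawn'], object | None]:
    '''
    Decision mirroring `maybe_open_pikerd()`: either we *are* the root
    daemon, we attach to a discovered one, or we spawn it ourselves.
    '''
    if our_name == 'pikerd':
        return 'is_root', None
    portal = discovered.get('pikerd')
    if portal is not None:
        return 'attach', portal
    return 'spawn', None

assert pikerd_role('pikerd', {})[0] == 'is_root'
assert pikerd_role('chart', {'pikerd': object()})[0] == 'attach'
assert pikerd_role('chart', {})[0] == 'spawn'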

View File

@ -15,10 +15,11 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Supervisor for ``docker`` with included async and SC wrapping
to ensure a cancellable container lifetime system.
Supervisor for ``docker`` with included async and SC wrapping to
ensure a cancellable container lifetime system.
'''
from __future__ import annotations
from collections import ChainMap
from functools import partial
import os
@ -48,6 +49,7 @@ from requests.exceptions import (
ReadTimeout,
)
from ._mngr import Services
from ._util import (
log, # sub-sys logger
get_console_log,
@ -187,7 +189,11 @@ class Container:
and entry not in seen_so_far
):
seen_so_far.add(entry)
getattr(log, level.lower(), log.error)(f'{msg}')
getattr(
log,
level.lower(),
log.error
)(f'{msg}')
if level == 'fatal':
raise ApplicationLogError(msg)
@ -263,8 +269,10 @@ class Container:
start = time.time()
for _ in range(6):
with trio.move_on_after(0.5) as cs:
log.cancel('polling for CNTR logs...')
with trio.move_on_after(1) as cs:
log.cancel(
'polling for CNTR logs for {stop_predicate}..'
)
try:
await self.process_logs_until(
@ -328,16 +336,13 @@ class Container:
async def open_ahabd(
ctx: tractor.Context,
endpoint: str, # ns-pointer str-msg-type
loglevel: str | None = 'cancel',
loglevel: str | None = None,
**kwargs,
**ep_kwargs,
) -> None:
log = get_console_log(
loglevel,
name=__name__,
)
log = get_console_log(loglevel or 'cancel')
async with open_docker() as client:
@ -350,7 +355,7 @@ async def open_ahabd(
cntr_config,
start_pred,
stop_pred,
) = ep_func(client)
) = ep_func(client, **ep_kwargs)
cntr = Container(dcntr)
conf: ChainMap[str, Any] = ChainMap(
@ -446,10 +451,17 @@ async def open_ahabd(
)
async def start_ahab(
@acm
async def start_ahab_service(
services: Services,
service_name: str,
# endpoint config passed as **kwargs
endpoint: Callable[docker.DockerClient, DockerContainer],
ep_kwargs: dict,
loglevel: str | None = 'cancel',
# supervisor config
drop_root_perms: bool = True,
task_status: TaskStatus[
@ -470,6 +482,9 @@ async def start_ahab(
is started.
'''
# global log
log = get_console_log(loglevel or 'cancel')
cn_ready = trio.Event()
try:
async with tractor.open_nursery() as an:
@ -498,21 +513,28 @@ async def start_ahab(
)[2] # named user's uid
)
async with portal.open_context(
open_ahabd,
cs, first = await services.start_service_task(
name=service_name,
portal=portal,
# rest: endpoint inputs
target=open_ahabd,
endpoint=str(NamespacePath.from_ref(endpoint)),
loglevel='cancel',
) as (ctx, first):
**ep_kwargs,
)
cid, pid, cntr_config = first
cid, pid, cntr_config = first
task_status.started((
try:
yield (
cn_ready,
cntr_config,
(cid, pid),
))
await trio.sleep_forever()
)
finally:
log.info(f'Cancelling ahab service `{service_name}`')
await services.cancel_service(service_name)
# since we demoted root perms in this parent
# we'll get a perms error on proc cleanup in
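The bounded log-polling loop above follows a common `trio` pattern: retry a possibly-hanging await a fixed number of times, each under its own timeout. A self-contained sketch (the sleep values and `wait_for_stop_msg()` stand-in are made up):

import trio

async def poll_for_stop(max_attempts: int = 6) -> bool:
    '''
    Retry a possibly hanging await a handful of times, each attempt
    bounded by its own cancel scope timeout.
    '''
    async def wait_for_stop_msg() -> None:
        # stand-in for `Container.process_logs_until(<stop predicate>)`
        await trio.sleep(0.3)

    for attempt in range(max_attempts):
        with trio.move_on_after(0.1) as cs:
            await wait_for_stop_msg()
        if not cs.cancelled_caught:
            return True  # awaited condition completed within the timeout
        print(f'timed out, retrying (attempt {attempt + 1})..')
    return False

print(trio.run(poll_for_stop))  # -> False (the stand-in never completes in time)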

View File

@ -70,7 +70,10 @@ async def maybe_spawn_daemon(
lock = Services.locks[service_name]
await lock.acquire()
async with find_service(service_name) as portal:
async with find_service(
service_name,
registry_addrs=[('127.0.0.1', 6116)],
) as portal:
if portal is not None:
lock.release()
yield portal

View File

@ -27,14 +27,25 @@ from typing import (
import trio
from trio_typing import TaskStatus
import tractor
from tractor import (
current_actor,
ContextCancelled,
Context,
Portal,
)
from ._util import (
log, # sub-sys logger
)
# TODO: factor this into a ``tractor.highlevel`` extension
# pack for the library.
# TODO: we need remote wrapping and a general soln:
# - factor this into a ``tractor.highlevel`` extension # pack for the
# library.
# - wrap a "remote api" wherein you can get a method proxy
# to the pikerd actor for starting services remotely!
# - prolly rename this to ActorServicesNursery since it spawns
# new actors and supervises them to completion?
class Services:
actor_n: tractor._supervise.ActorNursery
@ -44,7 +55,7 @@ class Services:
str,
tuple[
trio.CancelScope,
tractor.Portal,
Portal,
trio.Event,
]
] = {}
@ -54,11 +65,12 @@ class Services:
async def start_service_task(
self,
name: str,
portal: tractor.Portal,
portal: Portal,
target: Callable,
allow_overruns: bool = False,
**ctx_kwargs,
) -> (trio.CancelScope, tractor.Context):
) -> (trio.CancelScope, Context):
'''
Open a context in a service sub-actor, add to a stack
that gets unwound at ``pikerd`` teardown.
@ -79,8 +91,10 @@ class Services:
) -> Any:
with trio.CancelScope() as cs:
async with portal.open_context(
target,
allow_overruns=allow_overruns,
**ctx_kwargs,
) as (ctx, first):
@ -95,13 +109,30 @@ class Services:
# wait on any context's return value
# and any final portal result from the
# sub-actor.
ctx_res = await ctx.result()
ctx_res: Any = await ctx.result()
# NOTE: blocks indefinitely until cancelled
# either by error from the target context
# function or by being cancelled here by the
# surrounding cancel scope.
return (await portal.result(), ctx_res)
except ContextCancelled as ctxe:
canceller: tuple[str, str] = ctxe.canceller
our_uid: tuple[str, str] = current_actor().uid
if (
canceller != portal.channel.uid
and
canceller != our_uid
):
log.cancel(
f'Actor-service {name} was remotely cancelled?\n'
f'remote canceller: {canceller}\n'
f'Keeping {our_uid} alive, ignoring sub-actor cancel..\n'
)
else:
raise
finally:
await portal.cancel_actor()
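The `ContextCancelled` branch above keys off who the canceller was; the decision reduces to a small predicate over actor uids (a sketch with made-up uid tuples):

def should_reraise_cancel(
    canceller: tuple[str, str],
    our_uid: tuple[str, str],
    subactor_uid: tuple[str, str],
) -> bool:
    '''
    Mirror of the `ContextCancelled` handling above: only re-raise when
    the cancel came from the sub-actor itself or from us, otherwise
    treat it as a remote cancel to log and ignore.
    '''
    return (
        canceller == subactor_uid
        or canceller == our_uid
    )

us = ('pikerd', 'aaaa')
sub = ('brokerd.ib', 'bbbb')
assert should_reraise_cancel(sub, us, sub)                # sub-actor cancelled itself
assert not should_reraise_cancel(('gui', 'cc'), us, sub)  # someone else -> ignore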

View File

@ -27,6 +27,7 @@ from typing import (
)
import tractor
from tractor import Portal
from ._util import (
log, # sub-sys logger
@ -46,7 +47,9 @@ _registry: Registry | None = None
class Registry:
addr: None | tuple[str, int] = None
# TODO: should this be a set or should we complain
# on duplicates?
addrs: list[tuple[str, int]] = []
# TODO: table of uids to sockaddrs
peers: dict[
@ -60,69 +63,115 @@ _tractor_kwargs: dict[str, Any] = {}
@acm
async def open_registry(
addr: None | tuple[str, int] = None,
addrs: list[tuple[str, int]],
ensure_exists: bool = True,
) -> tuple[str, int]:
) -> list[tuple[str, int]]:
'''
Open the service-actor-discovery registry by returning a set of
transport socket-addrs to registrar actors which may be
contacted and queried for similar addresses for other
non-registrar actors.
'''
global _tractor_kwargs
actor = tractor.current_actor()
uid = actor.uid
preset_reg_addrs: list[tuple[str, int]] = Registry.addrs
if (
Registry.addr is not None
and addr
preset_reg_addrs
and addrs
):
raise RuntimeError(
f'`{uid}` registry addr already bound @ {_registry.sockaddr}'
)
if preset_reg_addrs != addrs:
# if any(addr in preset_reg_addrs for addr in addrs):
diff: set[tuple[str, int]] = set(preset_reg_addrs) - set(addrs)
if diff:
log.warning(
f'`{uid}` requested only subset of registrars: {addrs}\n'
f'However there are more @{diff}'
)
else:
raise RuntimeError(
f'`{uid}` has non-matching registrar addresses?\n'
f'request: {addrs}\n'
f'already set: {preset_reg_addrs}'
)
was_set: bool = False
if (
not tractor.is_root_process()
and Registry.addr is None
and not Registry.addrs
):
Registry.addr = actor._arb_addr
Registry.addrs.extend(actor.reg_addrs)
if (
ensure_exists
and Registry.addr is None
and not Registry.addrs
):
raise RuntimeError(
f"`{uid}` registry should already exist bug doesn't?"
f"`{uid}` registry should already exist but doesn't?"
)
if (
Registry.addr is None
not Registry.addrs
):
was_set = True
Registry.addr = addr or _default_reg_addr
Registry.addrs = addrs or [_default_reg_addr]
_tractor_kwargs['arbiter_addr'] = Registry.addr
# NOTE: only spot this seems currently used is inside
# `.ui._exec` which is the (eventual qtloops) bootstrapping
# with guest mode.
_tractor_kwargs['registry_addrs'] = Registry.addrs
try:
yield Registry.addr
yield Registry.addrs
finally:
# XXX: always clear the global addr if we set it so that the
# next (set of) calls will apply whatever new one is passed
# in.
if was_set:
Registry.addr = None
Registry.addrs = None
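The bind-conflict check above compares the already-bound registrar set against the requested one via a set difference; a sketch of that logic with made-up addrs (mirroring, not replacing, the code above):

def check_registrar_addrs(
    preset: list[tuple[str, int]],
    requested: list[tuple[str, int]],
) -> None:
    '''
    Mirror of the conflict check above: requesting a strict subset of
    the already-bound registrar addrs only logs a warning, while a
    request that adds addrs the registry never bound is an error.
    '''
    if (
        preset
        and requested
        and preset != requested
    ):
        diff = set(preset) - set(requested)
        if diff:
            print(f'requested only a subset of registrars, extra: {diff}')
        else:
            raise RuntimeError(
                f'non-matching registrar addresses?\n'
                f'request: {requested}\n'
                f'already set: {preset}'
            )

check_registrar_addrs(
    [('127.0.0.1', 6116), ('10.0.0.2', 6116)],
    [('127.0.0.1', 6116)],
)  # warns about the extra ('10.0.0.2', 6116) registrar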
@acm
async def find_service(
service_name: str,
) -> tractor.Portal | None:
registry_addrs: list[tuple[str, int]] | None = None,
async with open_registry() as reg_addr:
first_only: bool = True,
) -> (
Portal
| list[Portal]
| None
):
reg_addrs: list[tuple[str, int]]
async with open_registry(
addrs=(
registry_addrs
# NOTE: if no addr set is passed assume the registry has
# already been opened and use the previously applied
# startup set.
or Registry.addrs
),
) as reg_addrs:
log.info(f'Scanning for service `{service_name}`')
maybe_portals: list[Portal] | Portal | None
# attach to existing daemon by name if possible
async with tractor.find_actor(
service_name,
arbiter_sockaddr=reg_addr,
) as maybe_portal:
yield maybe_portal
registry_addrs=reg_addrs,
only_first=first_only, # if set only returns single ref
) as maybe_portals:
if not maybe_portals:
yield None
return
yield maybe_portals
async def check_for_service(
@ -133,9 +182,11 @@ async def check_for_service(
Service daemon "liveness" predicate.
'''
async with open_registry(ensure_exists=False) as reg_addr:
async with tractor.query_actor(
async with (
open_registry(ensure_exists=False) as reg_addr,
tractor.query_actor(
service_name,
arbiter_sockaddr=reg_addr,
) as sockaddr:
return sockaddr
) as sockaddr,
):
return sockaddr

View File

@ -15,6 +15,7 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from typing import (
Any,
TYPE_CHECKING,
@ -122,3 +123,47 @@ def start_elasticsearch(
health_query,
chk_for_closed_msg,
)
@acm
async def start_ahab_daemon(
service_mngr: Services,
user_config: dict | None = None,
loglevel: str | None = None,
) -> tuple[str, dict]:
'''
Task entrypoint to start the elasticsearch docker container using
the service manager.
'''
from ._ahab import start_ahab_service
# dict-merge any user settings
conf: dict = _config.copy()
if user_config:
conf = conf | user_config
dname: str = 'esd'
log.info(f'Spawning `{dname}` supervisor')
async with start_ahab_service(
service_mngr,
dname,
# NOTE: docker-py client is passed at runtime
start_elasticsearch,
ep_kwargs={'user_config': conf},
loglevel=loglevel,
) as (
ctn_ready,
config,
(cid, pid),
):
log.info(
f'`{dname}` up!\n'
f'pid: {pid}\n'
f'container id: {cid[:12]}\n'
f'config: {pformat(config)}'
)
yield dname, conf

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -25,11 +25,9 @@
'''
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from datetime import datetime
from pprint import pformat
from typing import (
Any,
Optional,
Union,
TYPE_CHECKING,
)
import time
@ -37,29 +35,34 @@ from math import isnan
from pathlib import Path
from bidict import bidict
from msgspec.msgpack import encode, decode
from msgspec.msgpack import (
encode,
decode,
)
# import pyqtgraph as pg
import numpy as np
import tractor
from trio_websocket import open_websocket_url
from anyio_marketstore import (
from anyio_marketstore import ( # noqa
open_marketstore_client,
MarketstoreClient,
Params,
)
import pendulum
import purerpc
# TODO: import this for specific error set expected by mkts client
# import purerpc
from ..data.feed import maybe_open_feed
from . import Services
from ._util import (
log, # sub-sys logger
get_console_log,
)
if TYPE_CHECKING:
import docker
from ._ahab import DockerContainer
from ._util import (
log, # sub-sys logger
get_console_log,
)
from ..data.feed import maybe_open_feed
from .._profile import Profiler
# ahabd-supervisor and container level config
@ -70,7 +73,7 @@ _config = {
'startup_timeout': 2,
}
_yaml_config = '''
_yaml_config_str: str = '''
# piker's ``marketstore`` config.
# mount this config using:
@ -89,6 +92,12 @@ stale_threshold: 5
enable_add: true
enable_remove: false
# SUPER DUPER CRITICAL to address a super weird issue:
# https://github.com/pikers/piker/issues/443
# seems like "variable compression" is possibly borked
# or snappy compression somehow breaks easily?
disable_variable_compression: true
triggers:
- module: ondiskagg.so
on: "*/1Sec/OHLCV"
@ -106,18 +115,18 @@ triggers:
# config:
# filter: "nasdaq"
'''.format(**_config)
'''
def start_marketstore(
client: docker.DockerClient,
user_config: dict,
**kwargs,
) -> tuple[DockerContainer, dict[str, Any]]:
'''
Start and supervise a marketstore instance with its config bind-mounted
in from the piker config directory on the system.
Start and supervise a marketstore instance with its config
bind-mounted in from the piker config directory on the system.
The equivalent cli cmd to this code is:
@ -141,14 +150,16 @@ def start_marketstore(
os.mkdir(mktsdir)
yml_file = os.path.join(mktsdir, 'mkts.yml')
yaml_config = _yaml_config_str.format(**user_config)
if not os.path.isfile(yml_file):
log.warning(
f'No `marketstore` config exists?: {yml_file}\n'
'Generating new file from template:\n'
f'{_yaml_config}\n'
f'{yaml_config}\n'
)
with open(yml_file, 'w') as yf:
yf.write(_yaml_config)
yf.write(yaml_config)
# create a mount from user's local piker config dir into container
config_dir_mnt = docker.types.Mount(
@ -171,6 +182,9 @@ def start_marketstore(
type='bind',
)
grpc_listen_port = int(user_config['grpc_listen_port'])
ws_listen_port = int(user_config['ws_listen_port'])
dcntr: DockerContainer = client.containers.run(
'alpacamarkets/marketstore:latest',
# do we need this for cmds?
@ -178,8 +192,8 @@ def start_marketstore(
# '-p 5993:5993',
ports={
'5993/tcp': 5993, # jsonrpc / ws?
'5995/tcp': 5995, # grpc
f'{ws_listen_port}/tcp': ws_listen_port,
f'{grpc_listen_port}/tcp': grpc_listen_port,
},
mounts=[
config_dir_mnt,
@ -199,7 +213,13 @@ def start_marketstore(
return "launching tcp listener for all services..." in msg
async def stop_matcher(msg: str):
return "exiting..." in msg
return (
# not sure when this happens, some kinda stop condition
"exiting..." in msg
# after we send SIGINT..
or "initiating graceful shutdown due to 'interrupt' request" in msg
)
return (
dcntr,
@ -211,6 +231,49 @@ def start_marketstore(
)
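A trimmed-down sketch of the config templating and port mapping shown above; the yaml keys and override values here are illustrative only, not the full `marketstore` template:

# hypothetical, abridged defaults standing in for `_config` above
_config = {
    'grpc_listen_port': 5995,
    'ws_listen_port': 5993,
    'log_level': 'debug',
}

_yaml_config_str = '''
root_directory: data
listen_port: {ws_listen_port}
grpc_listen_port: {grpc_listen_port}
log_level: {log_level}
'''

# dict-merge any user overrides, as in `start_ahab_daemon()` above
user_config = _config | {'grpc_listen_port': 6000}

yaml_config = _yaml_config_str.format(**user_config)

# container port map keyed off the *user* config, as in the diff above
ports = {
    f"{user_config['ws_listen_port']}/tcp": user_config['ws_listen_port'],
    f"{user_config['grpc_listen_port']}/tcp": user_config['grpc_listen_port'],
}
print(yaml_config)
print(ports)  # -> {'5993/tcp': 5993, '6000/tcp': 6000}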
@acm
async def start_ahab_daemon(
service_mngr: Services,
user_config: dict | None = None,
loglevel: str | None = None,
) -> tuple[str, dict]:
'''
Task entrypoint to start the marketstore docker container using the
service manager.
'''
from ._ahab import start_ahab_service
# dict-merge any user settings
conf: dict = _config.copy()
if user_config:
conf: dict = conf | user_config
dname: str = 'marketstored'
log.info(f'Spawning `{dname}` supervisor')
async with start_ahab_service(
service_mngr,
dname,
# NOTE: docker-py client is passed at runtime
start_marketstore,
ep_kwargs={'user_config': conf},
loglevel=loglevel,
) as (
_,
config,
(cid, pid),
):
log.info(
f'`{dname}` up!\n'
f'pid: {pid}\n'
f'container id: {cid[:12]}\n'
f'config: {pformat(config)}'
)
yield dname, conf
_tick_tbk_ids: tuple[str, str] = ('1Sec', 'TICK')
_tick_tbk: str = '{}/' + '/'.join(_tick_tbk_ids)
@ -264,16 +327,6 @@ _ohlcv_dt = [
]
ohlc_key_map = bidict({
'Epoch': 'time',
'Open': 'open',
'High': 'high',
'Low': 'low',
'Close': 'close',
'Volume': 'volume',
})
def mk_tbk(keys: tuple[str, str, str]) -> str:
'''
Generate a marketstore table key from a tuple.
@ -286,7 +339,7 @@ def mk_tbk(keys: tuple[str, str, str]) -> str:
def quote_to_marketstore_structarray(
quote: dict[str, Any],
last_fill: Optional[float]
last_fill: float | None,
) -> np.array:
'''
@ -325,24 +378,6 @@ def quote_to_marketstore_structarray(
return np.array([tuple(array_input)], dtype=_quote_dt)
@acm
async def get_client(
host: str = 'localhost',
port: int = _config['grpc_listen_port'],
) -> MarketstoreClient:
'''
Load an ``anyio_marketstore`` grpc client connected
to an existing ``marketstore`` server.
'''
async with open_marketstore_client(
host,
port
) as client:
yield client
class MarketStoreError(Exception):
"Generic marketstore client error"
@ -370,356 +405,6 @@ tf_in_1s = bidict({
})
class Storage:
'''
High level storage api for both real-time and historical ingest.
'''
def __init__(
self,
client: MarketstoreClient,
) -> None:
# TODO: eventually this should be an api/interface type that
# ensures we can support multiple tsdb backends.
self.client = client
# series' cache from tsdb reads
self._arrays: dict[str, np.ndarray] = {}
async def list_keys(self) -> list[str]:
return await self.client.list_symbols()
async def search_keys(self, pattern: str) -> list[str]:
'''
Search for time series key in the storage backend.
'''
...
async def write_ticks(self, ticks: list) -> None:
...
async def load(
self,
fqsn: str,
timeframe: int,
) -> tuple[
np.ndarray, # timeframe sampled array-series
Optional[datetime], # first dt
Optional[datetime], # last dt
]:
first_tsdb_dt, last_tsdb_dt = None, None
hist = await self.read_ohlcv(
fqsn,
# on first load we don't need to pull the max
# history per request size worth.
limit=3000,
timeframe=timeframe,
)
log.info(f'Loaded tsdb history {hist}')
if len(hist):
times = hist['Epoch']
first, last = times[0], times[-1]
first_tsdb_dt, last_tsdb_dt = map(
pendulum.from_timestamp, [first, last]
)
return (
hist, # array-data
first_tsdb_dt, # start of query-frame
last_tsdb_dt, # most recent
)
async def read_ohlcv(
self,
fqsn: str,
timeframe: int | str,
end: Optional[int] = None,
limit: int = int(800e3),
) -> np.ndarray:
client = self.client
syms = await client.list_symbols()
if fqsn not in syms:
return {}
# use the provided timeframe or 1s by default
tfstr = tf_in_1s.get(timeframe, tf_in_1s[1])
params = Params(
symbols=fqsn,
timeframe=tfstr,
attrgroup='OHLCV',
end=end,
# limit_from_start=True,
# TODO: figure the max limit here given the
# ``purepc`` msg size limit of purerpc: 33554432
limit=limit,
)
try:
result = await client.query(params)
except purerpc.grpclib.exceptions.UnknownError as err:
# indicate there is no history for this timeframe
log.exception(
f'Unknown mkts QUERY error: {params}\n'
f'{err.args}'
)
return {}
# TODO: it turns out column access on recarrays is actually slower:
# https://jakevdp.github.io/PythonDataScienceHandbook/02.09-structured-data-numpy.html#RecordArrays:-Structured-Arrays-with-a-Twist
# it might make sense to make these structured arrays?
data_set = result.by_symbols()[fqsn]
array = data_set.array
# XXX: ensure sample rate is as expected
time = data_set.array['Epoch']
if len(time) > 1:
time_step = time[-1] - time[-2]
ts = tf_in_1s.inverse[data_set.timeframe]
if time_step != ts:
log.warning(
f'MKTS BUG: wrong timeframe loaded: {time_step}'
'YOUR DATABASE LIKELY CONTAINS BAD DATA FROM AN OLD BUG'
f'WIPING HISTORY FOR {ts}s'
)
await self.delete_ts(fqsn, timeframe)
# try reading again..
return await self.read_ohlcv(
fqsn,
timeframe,
end,
limit,
)
return array
async def delete_ts(
self,
key: str,
timeframe: Optional[Union[int, str]] = None,
fmt: str = 'OHLCV',
) -> bool:
client = self.client
syms = await client.list_symbols()
if key not in syms:
raise KeyError(f'`{key}` table key not found in\n{syms}?')
tbk = mk_tbk((
key,
tf_in_1s.get(timeframe, tf_in_1s[60]),
fmt,
))
return await client.destroy(tbk=tbk)
async def write_ohlcv(
self,
fqsn: str,
ohlcv: np.ndarray,
timeframe: int,
append_and_duplicate: bool = True,
limit: int = int(800e3),
) -> None:
# build mkts schema compat array for writing
mkts_dt = np.dtype(_ohlcv_dt)
mkts_array = np.zeros(
len(ohlcv),
dtype=mkts_dt,
)
# copy from shm array (yes it's this easy):
# https://numpy.org/doc/stable/user/basics.rec.html#assignment-from-other-structured-arrays
mkts_array[:] = ohlcv[[
'time',
'open',
'high',
'low',
'close',
'volume',
]]
m, r = divmod(len(mkts_array), limit)
tfkey = tf_in_1s[timeframe]
for i in range(m, 1):
to_push = mkts_array[i-1:i*limit]
# write to db
resp = await self.client.write(
to_push,
tbk=f'{fqsn}/{tfkey}/OHLCV',
# NOTE: this will append duplicates
# for the same timestamp-index.
# TODO: pre-deduplicate?
isvariablelength=append_and_duplicate,
)
log.info(
f'Wrote {mkts_array.size} datums to tsdb\n'
)
for resp in resp.responses:
err = resp.error
if err:
raise MarketStoreError(err)
if r:
to_push = mkts_array[m*limit:]
# write to db
resp = await self.client.write(
to_push,
tbk=f'{fqsn}/{tfkey}/OHLCV',
# NOTE: this will append duplicates
# for the same timestamp-index.
# TODO: pre deduplicate?
isvariablelength=append_and_duplicate,
)
log.info(
f'Wrote {mkts_array.size} datums to tsdb\n'
)
for resp in resp.responses:
err = resp.error
if err:
raise MarketStoreError(err)
# XXX: currently the only way to do this is through the CLI:
# sudo ./marketstore connect --dir ~/.config/piker/data
# >> \show mnq.globex.20220617.ib/1Sec/OHLCV 2022-05-15
# and this seems to block and use up mem..
# >> \trim mnq.globex.20220617.ib/1Sec/OHLCV 2022-05-15
# relevant source code for this is here:
# https://github.com/alpacahq/marketstore/blob/master/cmd/connect/session/trim.go#L14
# def delete_range(self, start_dt, end_dt) -> None:
# ...
@acm
async def open_storage_client(
fqsn: str,
period: Optional[Union[int, str]] = None, # in seconds
) -> tuple[Storage, dict[str, np.ndarray]]:
'''
Load a series by key and deliver in ``numpy`` struct array format.
'''
async with (
# eventually a storage backend endpoint
get_client() as client,
):
# slap on our wrapper api
yield Storage(client)
@acm
async def open_tsdb_client(
fqsn: str,
) -> Storage:
# TODO: real-time dedicated task for ensuring
# history consistency between the tsdb, shm and real-time feed..
# update sequence design notes:
# - load existing highest frequency data from mkts
# * how do we want to offer this to the UI?
# - lazy loading?
# - try to load it all and expect graphics caching/diffing
# to hide extra bits that aren't in view?
# - compute the diff between latest data from broker and shm
# * use sql api in mkts to determine where the backend should
# start querying for data?
# * append any diff with new shm length
# * determine missing (gapped) history by scanning
# * how far back do we look?
# - begin rt update ingest and aggregation
# * could start by always writing ticks to mkts instead of
# worrying about a shm queue for now.
# * we have a short list of shm queues worth groking:
# - https://github.com/pikers/piker/issues/107
# * the original data feed arch blurb:
# - https://github.com/pikers/piker/issues/98
#
profiler = Profiler(
disabled=True, # not pg_profile_enabled(),
delayed=False,
)
async with (
open_storage_client(fqsn) as storage,
maybe_open_feed(
[fqsn],
start_stream=False,
) as feed,
):
profiler(f'opened feed for {fqsn}')
# to_append = feed.hist_shm.array
# to_prepend = None
if fqsn:
flume = feed.flumes[fqsn]
symbol = flume.symbol
if symbol:
fqsn = symbol.fqsn
# diff db history with shm and only write the missing portions
# ohlcv = flume.hist_shm.array
# TODO: use pg profiler
# for secs in (1, 60):
# tsdb_array = await storage.read_ohlcv(
# fqsn,
# timeframe=timeframe,
# )
# # hist diffing:
# # these aren't currently used but can be referenced from
# # within the embedded ipython shell below.
# to_append = ohlcv[ohlcv['time'] > ts['Epoch'][-1]]
# to_prepend = ohlcv[ohlcv['time'] < ts['Epoch'][0]]
# profiler('Finished db arrays diffs')
_ = await storage.client.list_symbols()
# log.info(f'Existing tsdb symbol set:\n{pformat(syms)}')
# profiler(f'listed symbols {syms}')
yield storage
# for array in [to_append, to_prepend]:
# if array is None:
# continue
# log.info(
# f'Writing datums {array.size} -> to tsdb from shm\n'
# )
# await storage.write_ohlcv(fqsn, array)
# profiler('Finished db writes')
async def ingest_quote_stream(
symbols: list[str],
brokername: str,
@ -731,6 +416,7 @@ async def ingest_quote_stream(
Ingest a broker quote stream into a ``marketstore`` tsdb.
'''
from piker.storage.marketstore import get_client
async with (
maybe_open_feed(brokername, symbols, loglevel=loglevel) as feed,
get_client() as ms_client,

View File

@ -0,0 +1,320 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
(time-series) database middleware layer.
- APIs for read, write, delete, replicate over multiple
db systems.
- backend agnostic tick msg ingest machinery.
- broadcast systems for fan out of real-time ingested
data to live consumers.
- test harness utilities for data-processing verification.
'''
from abc import abstractmethod
from contextlib import asynccontextmanager as acm
from functools import partial
from importlib import import_module
from datetime import datetime
from types import ModuleType
from typing import (
# Callable,
# Awaitable,
# Any,
# AsyncIterator,
Protocol,
# Generic,
# TypeVar,
)
import numpy as np
from .. import config
from ..service import (
check_for_service,
)
from ..log import (
get_logger,
get_console_log,
)
subsys: str = 'piker.storage'
log = get_logger(subsys)
get_console_log = partial(
get_console_log,
name=subsys,
)
__tsdbs__: list[str] = [
'nativedb',
# 'marketstore',
]
class StorageClient(
Protocol,
):
'''
API description that all storage backends must implement
in order to satisfy the historical data mgmt layer.
'''
name: str
@abstractmethod
async def list_keys(self) -> list[str]:
...
@abstractmethod
def search_keys(self) -> list[str]:
...
# @abstractmethod
# async def write_ticks(
# self,
# ticks: list,
# ) -> ReceiveType:
# ...
# ``trio.abc.AsyncResource`` methods
@abstractmethod
async def load(
self,
fqme: str,
timeframe: int,
) -> tuple[
np.ndarray, # timeframe sampled array-series
datetime | None, # first dt
datetime | None, # last dt
]:
...
@abstractmethod
async def delete_ts(
self,
key: str,
timeframe: int | str | None = None,
fmt: str = 'OHLCV',
) -> bool:
...
@abstractmethod
async def read_ohlcv(
self,
fqme: str,
timeframe: int | str,
end: int | None = None,
limit: int = int(800e3),
) -> np.ndarray:
...
async def write_ohlcv(
self,
fqme: str,
ohlcv: np.ndarray,
timeframe: int,
append_and_duplicate: bool = True,
limit: int = int(800e3),
) -> None:
...
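As a toy illustration of what satisfying the `StorageClient` protocol above entails, here's an in-memory stand-in (a sketch only; real backends persist to disk or a db service and `MemStorageClient` is not part of piker):

from datetime import datetime
import numpy as np

class MemStorageClient:
    '''
    Toy in-memory stand-in for the `StorageClient` protocol above.
    '''
    name: str = 'memdb'

    def __init__(self) -> None:
        self._series: dict[tuple[str, int], np.ndarray] = {}

    async def list_keys(self) -> list[str]:
        return [fqme for fqme, _tf in self._series]

    def search_keys(self, pattern: str = '') -> list[str]:
        return [fqme for fqme, _tf in self._series if pattern in fqme]

    async def write_ohlcv(
        self,
        fqme: str,
        ohlcv: np.ndarray,
        timeframe: int,
        append_and_duplicate: bool = True,
        limit: int = int(800e3),
    ) -> None:
        self._series[(fqme, timeframe)] = ohlcv[-limit:]

    async def read_ohlcv(
        self,
        fqme: str,
        timeframe: int | str,
        end: int | None = None,
        limit: int = int(800e3),
    ) -> np.ndarray:
        return self._series.get(
            (fqme, int(timeframe)),
            np.empty(0, dtype=[('time', float)]),
        )

    async def load(
        self,
        fqme: str,
        timeframe: int,
    ) -> tuple[np.ndarray, datetime | None, datetime | None]:
        array = await self.read_ohlcv(fqme, timeframe)
        if not len(array):
            return array, None, None
        times = array['time']
        return (
            array,
            datetime.fromtimestamp(times[0]),
            datetime.fromtimestamp(times[-1]),
        )

    async def delete_ts(
        self,
        key: str,
        timeframe: int | str | None = None,
        fmt: str = 'OHLCV',
    ) -> bool:
        return self._series.pop((key, timeframe), None) is not None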
class TimeseriesNotFound(Exception):
'''
No timeseries entry can be found for this backend.
'''
class StorageConnectionError(ConnectionError):
'''
Can't connect to the desired tsdb subsys/service.
'''
def get_storagemod(name: str) -> ModuleType:
mod: ModuleType = import_module(
'.' + name,
'piker.storage',
)
# we only allow monkeying because it's for internal keying
mod.name = mod.__name__.split('.')[-1]
return mod
@acm
async def open_storage_client(
backend: str | None = None,
) -> tuple[ModuleType, StorageClient]:
'''
Load the ``StorageClient`` for the named backend.
'''
def_backend: str = 'nativedb'
tsdb_host: str = 'localhost'
# load root config and any tsdb user defined settings
conf, path = config.load(
conf_name='conf',
touch_if_dne=True,
)
# TODO: maybe not under a "network" section.. since
# no more chitty `marketstore`..
tsdbconf: dict = {}
service_section = conf.get('service')
if (
not backend
and service_section
):
tsdbconf = service_section.get('tsdb') or {}  # guard against an empty section
# lookup backend tsdb module by name and load any user service
# settings for connecting to the tsdb service.
backend: str = tsdbconf.pop(
'name',
def_backend,
)
tsdb_host: str = tsdbconf.get('maddrs', [])
if backend is None:
backend: str = def_backend
# import and load storagemod by name
mod: ModuleType = get_storagemod(backend)
get_client = mod.get_client
log.info(f'Scanning for existing `{backend}`')
if backend != def_backend:
tsdb_is_up: bool = await check_for_service(f'{backend}d')
if (
tsdb_host == 'localhost'
or tsdb_is_up
):
log.info(f'Connecting to local: {backend}@{tsdbconf}')
else:
log.info(f'Attempting to connect to remote: {backend}@{tsdbconf}')
else:
log.info(f'Connecting to default storage: {backend}@{tsdbconf}')
async with (
get_client(**tsdbconf) as client,
):
# slap on our wrapper api
yield mod, client
# NOTE: pretty sure right now this is only being
# called by a CLI entrypoint?
@acm
async def open_tsdb_client(
fqme: str,
) -> StorageClient:
# TODO: real-time dedicated task for ensuring
# history consistency between the tsdb, shm and real-time feed..
# update sequence design notes:
# - load existing highest frequency data from mkts
# * how do we want to offer this to the UI?
# - lazy loading?
# - try to load it all and expect graphics caching/diffing
# to hide extra bits that aren't in view?
# - compute the diff between latest data from broker and shm
# * use sql api in mkts to determine where the backend should
# start querying for data?
# * append any diff with new shm length
# * determine missing (gapped) history by scanning
# * how far back do we look?
# - begin rt update ingest and aggregation
# * could start by always writing ticks to mkts instead of
# worrying about a shm queue for now.
# * we have a short list of shm queues worth groking:
# - https://github.com/pikers/piker/issues/107
# * the original data feed arch blurb:
# - https://github.com/pikers/piker/issues/98
#
from ..toolz import Profiler
profiler = Profiler(
disabled=True, # not pg_profile_enabled(),
delayed=False,
)
from ..data.feed import maybe_open_feed
async with (
open_storage_client() as (_, storage),
maybe_open_feed(
[fqme],
start_stream=False,
) as feed,
):
profiler(f'opened feed for {fqme}')
# to_append = feed.hist_shm.array
# to_prepend = None
if fqme:
flume = feed.flumes[fqme]
symbol = flume.mkt
if symbol:
fqme = symbol.fqme
# diff db history with shm and only write the missing portions
# ohlcv = flume.hist_shm.array
# TODO: use pg profiler
# for secs in (1, 60):
# tsdb_array = await storage.read_ohlcv(
# fqme,
# timeframe=timeframe,
# )
# # hist diffing:
# # these aren't currently used but can be referenced from
# # within the embedded ipython shell below.
# to_append = ohlcv[ohlcv['time'] > ts['Epoch'][-1]]
# to_prepend = ohlcv[ohlcv['time'] < ts['Epoch'][0]]
# profiler('Finished db arrays diffs')
_ = await storage.list_keys()  # protocol meth, works for any backend
# log.info(f'Existing tsdb symbol set:\n{pformat(syms)}')
# profiler(f'listed symbols {syms}')
yield storage
# for array in [to_append, to_prepend]:
# if array is None:
# continue
# log.info(
# f'Writing datums {array.size} -> to tsdb from shm\n'
# )
# await storage.write_ohlcv(fqme, array)
# profiler('Finished db writes')
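For reference, a minimal consumer-side sketch of the storage API above (not part of the diff): it assumes the default `nativedb` backend with at least one series already written to disk, and both the fqme key and the actor name are made-up placeholders.

import trio
from piker.service import open_piker_runtime
from piker.storage import open_storage_client

async def dump_first_last():
    async with (
        # the piker runtime, as the CLI cmds below also open first
        open_piker_runtime('storage_sketch'),
        # no `backend=` falls back to the `nativedb` default
        open_storage_client() as (mod, client),
    ):
        keys: list[str] = await client.list_keys()
        print(f'{mod.name} holds {len(keys)} series')

        # hypothetical key, pick a real one from `list_keys()`
        array, first_dt, last_dt = await client.load(
            'btcusdt.spot.binance',
            timeframe=60,
        )
        print(f'{len(array)} bars spanning {first_dt} -> {last_dt}')

trio.run(dump_first_last)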

View File

@@ -0,0 +1,553 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Storage middle-ware CLIs.
"""
from __future__ import annotations
# from datetime import datetime
# from contextlib import (
# AsyncExitStack,
# )
from pathlib import Path
from math import copysign
import time
from types import ModuleType
from typing import (
Any,
TYPE_CHECKING,
)
import polars as pl
import numpy as np
import tractor
# import pendulum
from rich.console import Console
import trio
# from rich.markdown import Markdown
import typer
from piker.service import open_piker_runtime
from piker.cli import cli
from piker.data import (
ShmArray,
)
from piker import tsp
from piker.data._formatters import BGM
from . import log
from . import (
__tsdbs__,
open_storage_client,
StorageClient,
)
if TYPE_CHECKING:
from piker.ui._remote_ctl import AnnotCtl
store = typer.Typer()
@store.command()
def ls(
backends: list[str] = typer.Argument(
default=None,
help='Storage backends to query, default is all.'
),
):
from rich.table import Table
if not backends:
backends: list[str] = __tsdbs__
console = Console()
async def query_all():
nonlocal backends
async with (
open_piker_runtime(
'tsdb_storage',
),
):
for i, backend in enumerate(backends):
table = Table()
try:
async with open_storage_client(backend=backend) as (
mod,
client,
):
table.add_column(f'{mod.name}@{client.address}')
keys: list[str] = await client.list_keys()
for key in keys:
table.add_row(key)
console.print(table)
except Exception:
log.error(f'Unable to connect to storage engine: `{backend}`')
trio.run(query_all)
# TODO: like ls but takes in a pattern and matches
# @store.command()
# def search(
# patt: str,
# backends: list[str] = typer.Argument(
# default=None,
# help='Storage backends to query, default is all.'
# ),
# ):
# ...
@store.command()
def delete(
symbols: list[str],
backend: str = typer.Option(
default=None,
help='Storage backend to update'
),
# TODO: expose this as flagged multi-option?
timeframes: list[int] = [1, 60],
):
'''
Delete a storage backend's time series for (table) keys provided as
``symbols``.
'''
from . import open_storage_client
async def main(symbols: list[str]):
async with (
open_piker_runtime(
'tsdb_storage',
),
open_storage_client(backend) as (_, client),
trio.open_nursery() as n,
):
# spawn queries as tasks for max conc!
for fqme in symbols:
for tf in timeframes:
n.start_soon(
client.delete_ts,
fqme,
tf,
)
trio.run(main, symbols)
@store.command()
def anal(
fqme: str,
period: int = 60,
pdb: bool = False,
) -> np.ndarray:
'''
Anal-ysis is when you take the data and do stuff to it.
NOTE: This ONLY loads the offline timeseries data (by default
from a parquet file) NOT the in-shm version you might be seeing
in a chart.
'''
async def main():
async with (
open_piker_runtime(
# are you a bear or boi?
'tsdb_polars_anal',
debug_mode=pdb,
),
open_storage_client() as (
mod,
client,
),
):
syms: list[str] = await client.list_keys()
log.info(f'{len(syms)} FOUND for {mod.name}')
history: np.ndarray # np struct-array buffer format
(
history,
first_dt,
last_dt,
) = await client.load(
fqme,
period,
)
assert first_dt < last_dt
null_segs: tuple = tsp.get_null_segs(
frame=history,
period=period,
)
# TODO: do tsp queries to backend to fill in missing
# history and then prolly write it to tsdb!
shm_df: pl.DataFrame = await client.as_df(
fqme,
period,
)
df: pl.DataFrame # with dts
deduped: pl.DataFrame # deduplicated dts
(
df,
deduped,
diff,
) = tsp.dedupe(
shm_df,
period=period,
)
write_edits: bool = True
if (
write_edits
and (
diff
or null_segs
)
):
await tractor.pause()
await client.write_ohlcv(
fqme,
ohlcv=deduped,
timeframe=period,
)
else:
# TODO: something better with tab completion..
# is there something more minimal but nearly as
# functional as ipython?
await tractor.pause()
assert not null_segs
trio.run(main)
async def markup_gaps(
fqme: str,
timeframe: float,
actl: AnnotCtl,
wdts: pl.DataFrame,
gaps: pl.DataFrame,
) -> dict[int, dict]:
'''
Remotely annotate time-gaps in a dt-fielded ts (normally OHLC)
with rectangles.
'''
aids: dict[int, dict] = {}
for i in range(gaps.height):
row: pl.DataFrame = gaps[i]
# the gap's RIGHT-most bar's OPEN value
# at that time (sample) step.
iend: int = row['index'][0]
# dt: datetime = row['dt'][0]
# dt_prev: datetime = row['dt_prev'][0]
# dt_end_t: float = dt.timestamp()
# TODO: can we eventually remove this
# once we figure out why the epoch cols
# don't match?
# TODO: FIX HOW/WHY these aren't matching
# and are instead off by 4hours (EST
# vs. UTC?!?!)
# end_t: float = row['time']
# assert (
# dt.timestamp()
# ==
# end_t
# )
# the gap's LEFT-most bar's CLOSE value
# at that time (sample) step.
prev_r: pl.DataFrame = wdts.filter(
pl.col('index') == iend - 1
)
# XXX: probably a gap in the (newly sorted or de-duplicated)
# dt-df, so we might need to re-index first..
if prev_r.is_empty():
await tractor.pause()
istart: int = prev_r['index'][0]
# dt_start_t: float = dt_prev.timestamp()
# start_t: float = prev_r['time']
# assert (
# dt_start_t
# ==
# start_t
# )
# TODO: implement px-col width measure
# and ensure at least as many px-cols
# shown per rect as configured by user.
# gap_w: float = abs((iend - istart))
# if gap_w < 6:
# margin: float = 6
# iend += margin
# istart -= margin
rect_gap: float = BGM*3/8
opn: float = row['open'][0]
ro: tuple[float, float] = (
# dt_end_t,
iend + rect_gap + 1,
opn,
)
cls: float = prev_r['close'][0]
lc: tuple[float, float] = (
# dt_start_t,
istart - rect_gap, # + 1 ,
cls,
)
color: str = 'dad_blue'
diff: float = cls - opn
sgn: float = copysign(1, diff)
color: str = {
-1: 'buy_green',
1: 'sell_red',
}[sgn]
rect_kwargs: dict[str, Any] = dict(
fqme=fqme,
timeframe=timeframe,
start_pos=lc,
end_pos=ro,
color=color,
)
aid: int = await actl.add_rect(**rect_kwargs)
assert aid
aids[aid] = rect_kwargs
# tell chart to redraw all its
# graphics view layers Bo
await actl.redraw(
fqme=fqme,
timeframe=timeframe,
)
return aids
@store.command()
def ldshm(
fqme: str,
write_parquet: bool = True,
reload_parquet_to_shm: bool = True,
) -> None:
'''
Linux ONLY: load any shm buffer (file) matching fqme from
/dev/shm/ into an OHLCV numpy array and polars DataFrame,
optionally writing to offline storage via a `.parquet` file.
'''
async def main():
from piker.ui._remote_ctl import (
open_annot_ctl,
)
actl: AnnotCtl
mod: ModuleType
client: StorageClient
async with (
open_piker_runtime(
'polars_boi',
enable_modules=['piker.data._sharedmem'],
debug_mode=True,
),
open_storage_client() as (
mod,
client,
),
open_annot_ctl() as actl,
):
shm_df: pl.DataFrame | None = None
tf2aids: dict[float, dict] = {}
for (
shmfile,
shm,
# parquet_path,
shm_df,
) in tsp.iter_dfs_from_shms(fqme):
times: np.ndarray = shm.array['time']
d1: float = float(times[-1] - times[-2])
d2: float = float(times[-2] - times[-3])
med: float = np.median(np.diff(times))
if (
d1 < 1.
and d2 < 1.
and med < 1.
):
raise ValueError(
f'Something is wrong with time period for {shm}:\n{times}'
)
period_s: float = float(max(d1, d2, med))
null_segs: tuple = tsp.get_null_segs(
frame=shm.array,
period=period_s,
)
# TODO: call null-seg fixer somehow?
if null_segs:
await tractor.pause()
# async with (
# trio.open_nursery() as tn,
# mod.open_history_client(
# mkt,
# ) as (get_hist, config),
# ):
# nulls_detected: trio.Event = await tn.start(partial(
# tsp.maybe_fill_null_segments,
# shm=shm,
# timeframe=timeframe,
# get_hist=get_hist,
# sampler_stream=sampler_stream,
# mkt=mkt,
# ))
# over-write back to shm?
wdts: pl.DataFrame # with dts
deduped: pl.DataFrame # deduplicated dts
(
wdts,
deduped,
diff,
) = tsp.dedupe(
shm_df,
period=period_s,
)
# detect gaps in the expected (uniform OHLC) sample period
step_gaps: pl.DataFrame = tsp.detect_time_gaps(
deduped,
expect_period=period_s,
)
# TODO: by default we always want to mark these up
# with rects showing up/down gaps Bo
venue_gaps: pl.DataFrame = tsp.detect_time_gaps(
deduped,
expect_period=period_s,
# TODO: actually pull the exact duration
# expected for each venue operational period?
gap_dt_unit='days',
gap_thresh=1,
)
# TODO: find the disjoint set of step gaps from
# venue (closure) set!
# -[ ] do a set diff by checking for the unique
# gap set only in the step_gaps?
if (
not venue_gaps.is_empty()
or (
period_s < 60
and not step_gaps.is_empty()
)
):
# write repaired ts to parquet-file?
if write_parquet:
start: float = time.time()
path: Path = await client.write_ohlcv(
fqme,
ohlcv=deduped,
timeframe=period_s,
)
write_delay: float = round(
time.time() - start,
ndigits=6,
)
# read back from fs
start: float = time.time()
read_df: pl.DataFrame = pl.read_parquet(path)
read_delay: float = round(
time.time() - start,
ndigits=6,
)
log.info(
f'parquet write took {write_delay} secs\n'
f'file path: {path}\n'
f'parquet read took {read_delay} secs\n'
f'polars df: {read_df}'
)
if reload_parquet_to_shm:
new = tsp.pl2np(
deduped,
dtype=shm.array.dtype,
)
# since normally readonly
shm._array.setflags(
write=int(1),
)
shm.push(
new,
prepend=True,
start=new['index'][-1],
update_first=False, # don't update ._first
)
do_markup_gaps: bool = True
if do_markup_gaps:
new_df: pl.DataFrame = tsp.np2pl(new)
aids: dict = await markup_gaps(
fqme,
period_s,
actl,
new_df,
step_gaps,
)
# last chance manual overwrites in REPL
# await tractor.pause()
assert aids
tf2aids[period_s] = aids
else:
# allow interaction even when no ts problems.
assert not diff
await tractor.pause()
log.info('Exiting TSP shm anal-izer!')
if shm_df is None:
log.error(
f'No matching shm buffers for {fqme} ?'
)
trio.run(main)
typer_click_object = typer.main.get_command(store)
cli.add_command(typer_click_object, 'store')
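A condensed sketch (not part of the diff) of the dedupe and gap-detection flow driven by the `anal` and `ldshm` commands above; it assumes the `nativedb` client (for `.as_df()`) and the `find_gaps` helper name is made up.

import polars as pl
from piker import tsp

async def find_gaps(
    client,  # a `NativeStorageClient` as yielded by `open_storage_client()`
    fqme: str,
    period: int = 60,
) -> pl.DataFrame:
    # offline (parquet-backed) series as a polars df
    shm_df: pl.DataFrame = await client.as_df(fqme, period)

    # drop duplicate dt rows; `diff` signals whether any were dropped
    wdts, deduped, diff = tsp.dedupe(shm_df, period=period)

    # flag rows whose dt step (presumably) exceeds the expected
    # uniform sample period
    step_gaps: pl.DataFrame = tsp.detect_time_gaps(
        deduped,
        expect_period=period,
    )
    return step_gaps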

View File

@@ -0,0 +1,384 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
marketstore tsdb backend:
https://github.com/alpacahq/marketstore
We wrote an async gRPC client:
https://github.com/pikers/anyio-marketstore
which is normally preferred, minus the issues discovered in
https://github.com/pikers/piker/issues/443
which are the main reason we're moving away from this
platform..
'''
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from datetime import datetime
# from pprint import pformat
from typing import (
Union,
)
from bidict import bidict
import tractor
import numpy as np
from anyio_marketstore import (
Params,
)
import pendulum
# import purerpc
from piker.service.marketstore import (
MarketstoreClient,
tf_in_1s,
mk_tbk,
_ohlcv_dt,
MarketStoreError,
)
from anyio_marketstore import ( # noqa
open_marketstore_client,
MarketstoreClient,
Params,
)
from piker.log import get_logger
log = get_logger(__name__)
class MktsStorageClient:
'''
High level storage api for both real-time and historical ingest.
'''
name: str = 'marketstore'
def __init__(
self,
client: MarketstoreClient,
config: dict,
) -> None:
# TODO: eventually this should be an api/interface type that
# ensures we can support multiple tsdb backends.
self.client = client
self._config = config
# series' cache from tsdb reads
self._arrays: dict[str, np.ndarray] = {}
@property
def address(self) -> str:
conf = self._config
return f'grpc://{conf["host"]}:{conf["port"]}'
async def list_keys(self) -> list[str]:
return await self.client.list_symbols()
async def search_keys(self, pattern: str) -> list[str]:
'''
Search for time series key in the storage backend.
'''
...
async def write_ticks(self, ticks: list) -> None:
...
async def load(
self,
fqme: str,
timeframe: int,
) -> tuple[
np.ndarray, # timeframe sampled array-series
datetime | None, # first dt
datetime | None, # last dt
]:
first_tsdb_dt, last_tsdb_dt = None, None
hist = await self.read_ohlcv(
fqme,
# on first load we don't need to pull the max
# history per request size worth.
limit=3000,
timeframe=timeframe,
)
log.info(f'Loaded tsdb history {hist}')
if len(hist):
# breakpoint()
times: np.ndarray = hist['Epoch']
first, last = times[0], times[-1]
first_tsdb_dt, last_tsdb_dt = map(
pendulum.from_timestamp,
[first, last]
)
return (
hist, # array-data
first_tsdb_dt, # start of query-frame
last_tsdb_dt, # most recent
)
async def read_ohlcv(
self,
fqme: str,
timeframe: int | str,
end: float | None = None, # epoch or none
limit: int = int(200e3),
) -> np.ndarray:
client = self.client
syms = await client.list_symbols()
if fqme not in syms:
return {}
# ensure end time is in correct int format!
if (
end
and not isinstance(end, float)
):
end = int(float(end))
# breakpoint()
# use the provided timeframe or 1s by default
tfstr = tf_in_1s.get(timeframe, tf_in_1s[1])
import pymarketstore as pymkts
sync_client = pymkts.Client()
param = pymkts.Params(
symbols=fqme,
timeframe=tfstr,
attrgroup='OHLCV',
end=end,
limit=limit,
# limit_from_start=True,
)
try:
reply = sync_client.query(param)
except Exception as err:
if 'no files returned from query parse: None' in err.args:
return []
raise
data_set: pymkts.results.DataSet = reply.first()
array: np.ndarray = data_set.array
# params = Params(
# symbols=fqme,
# timeframe=tfstr,
# attrgroup='OHLCV',
# end=end,
# # limit_from_start=True,
# # TODO: figure the max limit here given the
# # ``purepc`` msg size limit of purerpc: 33554432
# limit=limit,
# )
# for i in range(3):
# try:
# result = await client.query(params)
# break
# except purerpc.grpclib.exceptions.UnknownError as err:
# if 'snappy' in err.args:
# await tractor.pause()
# # indicate there is no history for this timeframe
# log.exception(
# f'Unknown mkts QUERY error: {params}\n'
# f'{err.args}'
# )
# else:
# return {}
# # TODO: it turns out column access on recarrays is actually slower:
# # https://jakevdp.github.io/PythonDataScienceHandbook/02.09-structured-data-numpy.html#RecordArrays:-Structured-Arrays-with-a-Twist
# # it might make sense to make these structured arrays?
# data_set = result.by_symbols()[fqme]
# array = data_set.array
# XXX: ensure sample rate is as expected
time = data_set.array['Epoch']
if len(time) > 1:
time_step = time[-1] - time[-2]
ts = tf_in_1s.inverse[data_set.timeframe]
if time_step != ts:
log.warning(
f'MKTS BUG: wrong timeframe loaded: {time_step}\n'
'YOUR DATABASE LIKELY CONTAINS BAD DATA FROM AN OLD BUG '
f'WIPING HISTORY FOR {ts}s'
)
await tractor.pause()
# await self.delete_ts(fqme, timeframe)
# try reading again..
# return await self.read_ohlcv(
# fqme,
# timeframe,
# end,
# limit,
# )
return array
async def delete_ts(
self,
key: str,
timeframe: Union[int, str | None] = None,
fmt: str = 'OHLCV',
) -> bool:
client = self.client
# syms = await client.list_symbols()
# if key not in syms:
# raise KeyError(f'`{key}` table key not found in\n{syms}?')
tbk = mk_tbk((
key,
tf_in_1s.get(timeframe, tf_in_1s[60]),
fmt,
))
return await client.destroy(tbk=tbk)
async def write_ohlcv(
self,
fqme: str,
ohlcv: np.ndarray,
timeframe: int,
append_and_duplicate: bool = True,
limit: int = int(800e3),
) -> None:
# build mkts schema compat array for writing
mkts_dt = np.dtype(_ohlcv_dt)
mkts_array = np.zeros(
len(ohlcv),
dtype=mkts_dt,
)
# copy from shm array (yes it's this easy):
# https://numpy.org/doc/stable/user/basics.rec.html#assignment-from-other-structured-arrays
mkts_array[:] = ohlcv[[
'time',
'open',
'high',
'low',
'close',
'volume',
]]
m, r = divmod(len(mkts_array), limit)
tfkey = tf_in_1s[timeframe]
for i in range(1, m + 1):
to_push = mkts_array[(i-1)*limit:i*limit]
# write to db
resp = await self.client.write(
to_push,
tbk=f'{fqme}/{tfkey}/OHLCV',
# NOTE: this will append duplicates
# for the same timestamp-index.
# TODO: pre-deduplicate?
isvariablelength=append_and_duplicate,
)
log.info(
f'Wrote {mkts_array.size} datums to tsdb\n'
)
for resp in resp.responses:
err = resp.error
if err:
raise MarketStoreError(err)
if r:
to_push = mkts_array[m*limit:]
# write to db
resp = await self.client.write(
to_push,
tbk=f'{fqme}/{tfkey}/OHLCV',
# NOTE: this will append duplicates
# for the same timestamp-index.
# TODO: pre deduplicate?
isvariablelength=append_and_duplicate,
)
log.info(
f'Wrote {mkts_array.size} datums to tsdb\n'
)
for resp in resp.responses:
err = resp.error
if err:
raise MarketStoreError(err)
# XXX: currently the only way to do this is through the CLI:
# sudo ./marketstore connect --dir ~/.config/piker/data
# >> \show mnq.globex.20220617.ib/1Sec/OHLCV 2022-05-15
# and this seems to block and use up mem..
# >> \trim mnq.globex.20220617.ib/1Sec/OHLCV 2022-05-15
# relevant source code for this is here:
# https://github.com/alpacahq/marketstore/blob/master/cmd/connect/session/trim.go#L14
# def delete_range(self, start_dt, end_dt) -> None:
# ...
ohlc_key_map = bidict({
'Epoch': 'time',
'Open': 'open',
'High': 'high',
'Low': 'low',
'Close': 'close',
'Volume': 'volume',
})
@acm
async def get_client(
grpc_port: int = 5995, # required
host: str = 'localhost',
) -> MarketstoreClient:
'''
Load an ``anyio_marketstore`` grpc client connected
to an existing ``marketstore`` server.
'''
async with open_marketstore_client(
host or 'localhost',
grpc_port,
) as client:
yield MktsStorageClient(
client,
config={'host': host, 'port': grpc_port},
)
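A minimal write/read round-trip sketch (not part of the diff) against the `MktsStorageClient` above; it assumes a running marketstore service on the default gRPC port and an `ohlcv` array carrying piker's time/open/high/low/close/volume fields. Run it with `trio.run(...)` given real inputs.

import numpy as np
from piker.storage.marketstore import get_client

async def roundtrip(
    fqme: str,
    ohlcv: np.ndarray,
):
    # connects to grpc://localhost:5995 by default
    async with get_client() as client:
        await client.write_ohlcv(
            fqme,
            ohlcv=ohlcv,
            timeframe=60,
        )
        hist, first_dt, last_dt = await client.load(fqme, timeframe=60)
        print(f'{client.address}: {len(hist)} bars {first_dt} -> {last_dt}')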

View File

@@ -0,0 +1,177 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Legacy marketstore ingest and streaming related clis.
'''
# from .. import watchlists as wl
# from ..service.marketstore import (
# get_client,
# stream_quotes,
# ingest_quote_stream,
# _url,
# _tick_tbk_ids,
# mk_tbk,
# )
# @cli.command()
# @click.option(
# '--url',
# default='ws://localhost:5993/ws',
# help='HTTP URL of marketstore instance'
# )
# @click.argument('names', nargs=-1)
# @click.pass_obj
# def ms_stream(
# config: dict,
# names: list[str],
# url: str,
# ) -> None:
# '''
# Connect to a marketstore time bucket stream for (a set of) symbols(s)
# and print to console.
# '''
# async def main():
# # async for quote in stream_quotes(symbols=names):
# # log.info(f"Received quote:\n{quote}")
# ...
# trio.run(main)
# @cli.command()
# @click.option(
# '--url',
# default=_url,
# help='HTTP URL of marketstore instance'
# )
# @click.argument('names', nargs=-1)
# @click.pass_obj
# def ms_destroy(config: dict, names: list[str], url: str) -> None:
# """Destroy symbol entries in the local marketstore instance.
# """
# async def main():
# nonlocal names
# async with get_client(url) as client:
#
# if not names:
# names = await client.list_symbols()
#
# # default is to wipe db entirely.
# answer = input(
# "This will entirely wipe you local marketstore db @ "
# f"{url} of the following symbols:\n {pformat(names)}"
# "\n\nDelete [N/y]?\n")
#
# if answer == 'y':
# for sym in names:
# # tbk = _tick_tbk.format(sym)
# tbk = tuple(sym, *_tick_tbk_ids)
# print(f"Destroying {tbk}..")
# await client.destroy(mk_tbk(tbk))
# else:
# print("Nothing deleted.")
#
# tractor.run(main)
# @cli.command()
# @click.option(
# '--tsdb_host',
# default='localhost'
# )
# @click.option(
# '--tsdb_port',
# default=5993
# )
# @click.argument('symbols', nargs=-1)
# @click.pass_obj
# def storesh(
# config,
# tl,
# host,
# port,
# symbols: list[str],
# ):
# '''
# Start an IPython shell ready to query the local marketstore db.
# '''
# from piker.storage import open_tsdb_client
# from piker.service import open_piker_runtime
# async def main():
# nonlocal symbols
# async with open_piker_runtime(
# 'storesh',
# enable_modules=['piker.service._ahab'],
# ):
# symbol = symbols[0]
# async with open_tsdb_client(symbol):
# # TODO: ask if user wants to write history for detected
# # available shm buffers?
# from tractor.trionics import ipython_embed
# await ipython_embed()
# trio.run(main)
# @cli.command()
# @click.option('--test-file', '-t', help='Test quote stream file')
# @click.option('--tl', is_flag=True, help='Enable tractor logging')
# @click.argument('name', nargs=1, required=True)
# @click.pass_obj
# def ingest(config, name, test_file, tl):
# '''
# Ingest real-time broker quotes and ticks to a marketstore instance.
# '''
# # global opts
# loglevel = config['loglevel']
# tractorloglevel = config['tractorloglevel']
# # log = config['log']
# watchlist_from_file = wl.ensure_watchlists(config['wl_path'])
# watchlists = wl.merge_watchlist(watchlist_from_file, wl._builtins)
# symbols = watchlists[name]
# grouped_syms = {}
# for sym in symbols:
# symbol, _, provider = sym.rpartition('.')
# if provider not in grouped_syms:
# grouped_syms[provider] = []
# grouped_syms[provider].append(symbol)
# async def entry_point():
# async with tractor.open_nursery() as n:
# for provider, symbols in grouped_syms.items():
# await n.run_in_actor(
# ingest_quote_stream,
# name='ingest_marketstore',
# symbols=symbols,
# brokername=provider,
# tries=1,
# actorloglevel=loglevel,
# loglevel=tractorloglevel
# )
# tractor.run(entry_point)

View File

@@ -0,0 +1,433 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
`nativedb`: a lulzy Apache-parquet file manager (that some might
call a poor man's tsdb).
AKA a `piker`-native, filesystem-based "time series database"
which needs no extra process and offers no standard TSDB features,
YET!
'''
# TODO: like there's soo much..
# - better name like "parkdb" or "nativedb" (lel)? bundle this lib with
# others to make full system:
# - tractor for failover and reliability?
# - borg for replication and sync?
#
# - use `fastparquet` for appends:
# https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
# (presuming it's actually faster than overwrites and
# makes sense in terms of impl?)
#
# - use `polars` support for lazy scanning, processing and schema
# validation?
# - https://pola-rs.github.io/polars-book/user-guide/io/parquet/#scan
# - https://pola-rs.github.io/polars-book/user-guide/concepts/lazy-vs-eager/
# - consider delta writes for appends?
# - https://github.com/pola-rs/polars/blob/main/py-polars/polars/dataframe/frame.py#L3232
# - consider multi-file appends with appropriate time-range naming?
# - https://pola-rs.github.io/polars-book/user-guide/io/multiple/
#
# - use `borg` for replication?
# - https://borgbackup.readthedocs.io/en/stable/quickstart.html#remote-repositories
# - https://github.com/borgbackup/borg
# - https://borgbackup.readthedocs.io/en/stable/faq.html#usage-limitations
# - https://github.com/borgbackup/community
# - https://github.com/spslater/borgapi
# - https://nixos.wiki/wiki/ZFS
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from datetime import datetime
from pathlib import Path
import time
import numpy as np
import polars as pl
from pendulum import (
from_timestamp,
)
from piker import config
from piker import tsp
from piker.data import (
def_iohlcv_fields,
ShmArray,
)
from piker.log import get_logger
from . import TimeseriesNotFound
log = get_logger('storage.nativedb')
def detect_period(shm: ShmArray) -> float:
'''
Attempt to detect the series time step sampling period
in seconds.
'''
# TODO: detect sample rate helper?
# calc ohlc sample period for naming
ohlcv: np.ndarray = shm.array
times: np.ndarray = ohlcv['time']
period: float = times[-1] - times[-2]
if period == 0:
# maybe just last sample is borked?
period: float = times[-2] - times[-3]
return period
def mk_ohlcv_shm_keyed_filepath(
fqme: str,
period: float | int, # otherwise known as the "timeframe"
datadir: Path,
) -> Path:
if period < 1.:
raise ValueError('Sample period should be >= 1.!?')
path: Path = (
datadir
/
f'{fqme}.ohlcv{int(period)}s.parquet'
)
return path
def unpack_fqme_from_parquet_filepath(path: Path) -> str:
filename: str = str(path.name)
fqme, fmt_descr, suffix = filename.rsplit('.', maxsplit=2)
assert suffix == 'parquet'
return fqme
ohlc_key_map = None
class NativeStorageClient:
'''
High level storage api for OHLCV time series stored in
a (modern) filesystem as apache parquet files B)
Part of a grander scheme to use arrow and parquet as our main
lowlevel data framework: https://arrow.apache.org/faq/.
'''
name: str = 'nativedb'
def __init__(
self,
datadir: Path,
) -> None:
self._datadir = datadir
self._index: dict[str, dict] = {}
# series' cache from tsdb reads
self._dfs: dict[str, dict[str, pl.DataFrame]] = {}
@property
def address(self) -> str:
return self._datadir.as_uri()
@property
def cardinality(self) -> int:
return len(self._index)
# @property
# def compression(self) -> str:
# ...
async def list_keys(self) -> list[str]:
return list(self._index)
def index_files(self):
for path in self._datadir.iterdir():
if (
path.is_dir()
or
'.parquet' not in str(path)
# or
# path.name in {'borked', 'expired',}
):
continue
key: str = path.name.removesuffix('.parquet')
fqme, _, descr = key.rpartition('.')
prefix, _, suffix = descr.partition('ohlcv')
period: int = int(suffix.strip('s'))
# cache description data
self._index[fqme] = {
'path': path,
'period': period,
}
return self._index
# async def search_keys(self, pattern: str) -> list[str]:
# '''
# Search for time series key in the storage backend.
# '''
# ...
# async def write_ticks(self, ticks: list) -> None:
# ...
async def load(
self,
fqme: str,
timeframe: int,
) -> tuple[
np.ndarray, # timeframe sampled array-series
datetime | None, # first dt
datetime | None, # last dt
] | None:
try:
array: np.ndarray = await self.read_ohlcv(
fqme,
timeframe,
)
except FileNotFoundError as fnfe:
bs_fqme, _, *_ = fqme.rpartition('.')
possible_matches: list[str] = []
for tskey in self._index:
if bs_fqme in tskey:
possible_matches.append(tskey)
match_str: str = '\n'.join(sorted(possible_matches))
raise TimeseriesNotFound(
f'No entry for `{fqme}`?\n'
f'Maybe you need a more specific fqme-key like:\n\n'
f'{match_str}'
) from fnfe
times = array['time']
return (
array,
from_timestamp(times[0]),
from_timestamp(times[-1]),
)
def mk_path(
self,
fqme: str,
period: float,
) -> Path:
return mk_ohlcv_shm_keyed_filepath(
fqme=fqme,
period=period,
datadir=self._datadir,
)
def _cache_df(
self,
fqme: str,
df: pl.DataFrame,
timeframe: float,
) -> None:
# cache df for later usage since we (currently) need to
# convert to np.ndarrays to push to our `ShmArray` rt
# buffers subsys but later we may operate entirely on
# pyarrow arrays/buffers so keeping the dfs around for
# a variety of purposes is handy.
self._dfs.setdefault(
timeframe,
{},
)[fqme] = df
async def read_ohlcv(
self,
fqme: str,
timeframe: int | str,
end: float | None = None, # epoch or none
# limit: int = int(200e3),
) -> np.ndarray:
path: Path = self.mk_path(
fqme,
period=int(timeframe),
)
df: pl.DataFrame = pl.read_parquet(path)
self._cache_df(
fqme=fqme,
df=df,
timeframe=timeframe,
)
# TODO: filter by end and limit inputs
# times: pl.Series = df['time']
array: np.ndarray = tsp.pl2np(
df,
dtype=np.dtype(def_iohlcv_fields),
)
return array
async def as_df(
self,
fqme: str,
period: int = 60,
load_from_offline: bool = True,
) -> pl.DataFrame:
try:
return self._dfs[period][fqme]
except KeyError:
if not load_from_offline:
raise
await self.read_ohlcv(fqme, period)
return self._dfs[period][fqme]
def _write_ohlcv(
self,
fqme: str,
ohlcv: np.ndarray | pl.DataFrame,
timeframe: int,
) -> Path:
'''
Sync version of the public interface meth, since we don't
currently actually need or support an async impl.
'''
path: Path = mk_ohlcv_shm_keyed_filepath(
fqme=fqme,
period=timeframe,
datadir=self._datadir,
)
if isinstance(ohlcv, np.ndarray):
df: pl.DataFrame = tsp.np2pl(ohlcv)
else:
df = ohlcv
self._cache_df(
fqme=fqme,
df=df,
timeframe=timeframe,
)
# TODO: in terms of managing the ultra long term data
# -[ ] use a proper profiler to measure all this IO and
# roundtripping!
# -[ ] implement parquet append!? see issue:
# https://github.com/pikers/piker/issues/536
# -[ ] try out ``fastparquet``'s append writing:
# https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
start = time.time()
df.write_parquet(path)
delay: float = round(
time.time() - start,
ndigits=6,
)
log.info(
f'parquet write took {delay} secs\n'
f'file path: {path}'
)
return path
async def write_ohlcv(
self,
fqme: str,
ohlcv: np.ndarray | pl.DataFrame,
timeframe: int,
) -> Path:
'''
Write input ohlcv time series for fqme and sampling period
to (local) disk.
'''
return self._write_ohlcv(
fqme,
ohlcv,
timeframe,
)
async def delete_ts(
self,
key: str,
timeframe: int | None = None,
) -> bool:
path: Path = mk_ohlcv_shm_keyed_filepath(
fqme=key,
period=timeframe,
datadir=self._datadir,
)
if path.is_file():
path.unlink()
log.warning(f'Deleting parquet entry:\n{path}')
return True
else:
log.error(f'No path exists:\n{path}')
return False
# TODO: allow wiping and refetching a segment of the OHLCV timeseries
# data.
# def clear_range(
# self,
# key: str,
# start_dt: datetime,
# end_dt: datetime,
# timeframe: int | None = None,
# ) -> pl.DataFrame:
# '''
# Clear and re-fetch a range of datums for the OHLCV time series.
# Useful for series editing from a chart B)
# '''
# ...
# TODO: does this need to be async on average?
# I guess for any IPC connected backend yes?
@acm
async def get_client(
# TODO: eventually support something something apache arrow
# transport over ssh something..?
# host: str | None = None,
**kwargs,
) -> NativeStorageClient:
'''
Load a ``NativeStorageClient`` instance backed by the local
filesystem's `nativedb` parquet data dir.
'''
datadir: Path = config.get_conf_dir() / 'nativedb'
if not datadir.is_dir():
log.info(f'Creating `nativedb` dir: {datadir}')
datadir.mkdir()
client = NativeStorageClient(datadir)
client.index_files()
yield client
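Similarly, a round-trip sketch (not part of the diff) for the parquet-backed `NativeStorageClient`, assuming the default conf-dir layout; the fqme is a placeholder and `parquet_roundtrip` is a made-up helper name. Run it with `trio.run(...)` given real inputs.

from pathlib import Path
import numpy as np
from piker import config
from piker.storage.nativedb import (
    get_client,
    mk_ohlcv_shm_keyed_filepath,
)

async def parquet_roundtrip(
    fqme: str,
    ohlcv: np.ndarray,
):
    # where the 60s series for this key should land on disk
    datadir: Path = config.get_conf_dir() / 'nativedb'
    keyed_path: Path = mk_ohlcv_shm_keyed_filepath(fqme, 60, datadir)

    async with get_client() as client:
        # np array -> polars df -> .parquet file
        written: Path = await client.write_ohlcv(fqme, ohlcv, timeframe=60)
        assert written == keyed_path

        # read back as a structured array in the def_iohlcv_fields dtype
        arr: np.ndarray = await client.read_ohlcv(fqme, timeframe=60)
        print(f'{len(arr)} rows loaded from {client.address}')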

View File

@@ -0,0 +1,29 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Toolz for debug, profile and trace of the distributed runtime :surfer:
'''
from tractor.devx import (
open_crash_handler as open_crash_handler,
)
from .profile import (
Profiler as Profiler,
pg_profile_enabled as pg_profile_enabled,
ms_slower_then as ms_slower_then,
timeit as timeit,
)
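A quick usage sketch (not part of the diff) of the re-exported `Profiler`, mirroring how `open_tsdb_client()` above constructs it; the message strings are arbitrary and whatever gets printed depends on the profiling flags and `pg_profile_enabled()`.

from piker.toolz import Profiler

profiler = Profiler(
    disabled=False,  # the tsdb code above passes `disabled=True`
    delayed=False,
)
# ... do some work, e.g. load history ...
profiler('loaded history from tsdb')
# ... more work, e.g. flush to parquet ...
profiler('wrote parquet file')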

View File

@@ -1,80 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
sugarz for trio/tractor conc peeps.
'''
from typing import AsyncContextManager
from typing import TypeVar
from contextlib import asynccontextmanager as acm
import trio
# A regular invariant generic type
T = TypeVar("T")
async def _enter_and_sleep(
mngr: AsyncContextManager[T],
to_yield: dict[int, T],
all_entered: trio.Event,
# task_status: TaskStatus[T] = trio.TASK_STATUS_IGNORED,
) -> T:
'''Open the async context manager, deliver its value
to this task's spawner and sleep until cancelled.
'''
async with mngr as value:
to_yield[id(mngr)] = value
if all(to_yield.values()):
all_entered.set()
# sleep until cancelled
await trio.sleep_forever()
@acm
async def async_enter_all(
*mngrs: list[AsyncContextManager[T]],
) -> tuple[T]:
to_yield = {}.fromkeys(id(mngr) for mngr in mngrs)
all_entered = trio.Event()
async with trio.open_nursery() as n:
for mngr in mngrs:
n.start_soon(
_enter_and_sleep,
mngr,
to_yield,
all_entered,
)
# deliver control once all managers have started up
await all_entered.wait()
yield tuple(to_yield.values())
# tear down all sleeper tasks thus triggering individual
# mngr ``__aexit__()``s.
n.cancel_scope.cancel()
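For comparison, a usage sketch of the removed `async_enter_all` helper as defined above (not part of the diff); the `open_resource` manager is a stand-in and the snippet assumes the definitions above are in scope.

import trio
from contextlib import asynccontextmanager as acm

@acm
async def open_resource(name: str):
    # stand-in for any real async context manager (feed, client, ..)
    yield f'{name}-handle'

async def main():
    # concurrently enter both managers and receive their yielded
    # values back in passed-in order once every manager has entered.
    async with async_enter_all(
        open_resource('feed'),
        open_resource('storage'),
    ) as (feed_h, storage_h):
        print(feed_h, storage_h)

trio.run(main)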

1449 piker/tsp/__init__.py 100644

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff