Compare commits

...

315 Commits

Author SHA1 Message Date
goodboy 9e82a46c0b Merge pull request 'how_to_show_ur_pp: fixes for end-2-end order/position display'
Reviewed-on: https://www.pikers.dev/pikers/piker/pulls/60
2026-01-07 19:32:55 +00:00
Tyler Goodlet 7b68444c7a accounting.calc: `.error()` on bad txn-time fields..
Since i'm seeing IB records with a `None` value and i don't want to be
debugging every time order-mode boots up..

Also use `pdb=debug` in `.open_ledger_dfs()`

Note, this had conflicts on `piker/accounting/calc.py` when rebasing
onto the refactored `brokers_refinery` history which were resolved
manually!
2026-01-07 14:05:23 -05:00
Tyler Goodlet 58654915ac Set `.bs_mktid` on all IB position-msg emissions.. 2026-01-07 13:41:07 -05:00
Tyler Goodlet 90389d0b94 `accounting.calc`: enable crash handlers on `debug_mode` input (via test harness) 2026-01-07 13:41:07 -05:00
Tyler Goodlet f5850fe5c2 Draft a gt-one-`.fqme`-in-txns/account-file test
To start this is just a shell for the test, there's no checking logic
yet.. put it as `test_accounting.test_ib_account_with_duplicated_mktids()`.
The test is composed for now to be completely runtime-free using only
the offline txn-ledger / symcache / account loading APIs, ideally we
fill in the activated symbology-data-runtime cases once we figure a sane
way to handle incremental symcache updates for backends like IB..

To actually fill the test out with real checks we still need to,
- extract the problem account file from my ib.algopape into the test
  harness data.
- pick some contracts with multiple fqmes despite a single bs_mktid and
  ensure they're aggregated as a single `Position` as well as,
  * ideally de-duplicating txns from the account file section for the
    mkt..
  * warning appropriately about greater-than-one fqme for the bs_mktid
    and providing a way for the ledger re-writing to choose the
    appropriate `<venue>` as the "primary" when the
    data-symbology-runtime is up and possibly use it to incrementally
    update the IB symcache and store offline for next use?
2026-01-07 13:41:07 -05:00
Tyler Goodlet 1a4f8fa76f Drop `open_pps()` from ems tests 2026-01-07 13:41:07 -05:00
Tyler Goodlet c609858f20 `ui._remote_ctl`: shield remote rect removals
Since under `trio`-cancellation the `.remove()` is a checkpoint and will
be masked by a taskc AND we **always want to remove the rect** despite
the surrounding teardown conditions.
2026-01-07 13:41:07 -05:00
Tyler Goodlet 0e9b50de4b `_ems`: tolerate and warn on already popped execs
In the `translate_and_relay_brokerd_events()` loop task that is, such
that we never crash on a `status_msg = book._active.pop(oid)` in the
'closed' status handler whenever a double removal happens.

Turns out there were unforeseen races here when a benign backend error
would cause an order-mode dialog to be cancelled (incorrectly) and then
a UI side `.on_cancel()` would trigger too-early removal from the
`book._active` table despite the backend sending an actual 'closed'
event (much) later, this would crash on the now missing entry..

So instead we now,
- obviously use `book._active.pop(oid, None)`
- emit a `log.warning()` (not info lol) on a null-read and with a less
  "one-line-y" message explaining the double removal and maybe *why*.
2026-01-07 13:41:07 -05:00
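To make the tolerant-pop pattern above concrete, a minimal sketch (the `book._active` table and `oid` naming come from the msg; the `log` handle and handler shape are assumptions):

```python
import logging

log = logging.getLogger(__name__)

def on_closed_status(book, oid: str) -> None:
    # tolerate a double removal: a UI-side `.on_cancel()` may have
    # already popped the entry before the backend's (late) 'closed'
    # event arrives.
    status_msg = book._active.pop(oid, None)
    if status_msg is None:
        log.warning(
            f'Order {oid!r} was already removed from `book._active`?\n'
            'Likely an early UI-side cancel raced this late `closed` '
            'event from the backend..'
        )
        return
    # ..continue normal 'closed' status handling
```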
Tyler Goodlet 388a9a4da7 ui.order_mode: prioritize mkt-match on `.bs_mktid`
For backends which opt to set the new `BrokerdPosition.bs_mktid` field,
give (matching logic) priority to it such that even if the `.symbol`
field doesn't match the mkt currently focused on chart, it will
always match on a provider's own internal asset-mapping-id. The original
fallback logic for `.fqme` matching is left as is.

As an example with IB, a qqq.nasdaq.ib txn may have been filled on
a non-primary venue as qqq.directedea.ib, in this case if the mkt is
displayed and focused on chart we want the **entire position info** to
be overlayed by the `OrderMode` UX without discrepancy.

Other refinements,
- improve logging and add a detailed edge-case-comment around the
  `.on_fill()` handler to clarify where if a benign 'error' msg is
  relayed from a backend it will cause the UI to operate as though the
  order **was not-cleared/cancelled** since the `.on_cancel()` handler
  will have likely been called just before, popping the `.dialogs`
  entry. Return `bool` to indicate whether the UI removed-lines
  / added-fill-arrows.
- invert the `return` branching logic in `.on_cancel()` to reduce
  indent.
- add a very loud `log.error()` in `Status(resp='error')` case-block
  ensuring the console yells about the order being cancelled, also
  a todo for the weird msg-field recursion nonsense..
2026-01-07 13:41:07 -05:00
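A hedged sketch of the described priority logic (field names per the msg; the `curr_mkt` handle mirrors the `OrderMode.curr_mkt` property added in the decimal-prices work further below):

```python
def pos_msg_matches_chart(msg, curr_mkt) -> bool:
    # prefer the backend's own internal asset-mapping id when the
    # `BrokerdPosition.bs_mktid` field was opted into..
    bs_mktid: str|None = getattr(msg, 'bs_mktid', None)
    if bs_mktid and bs_mktid == curr_mkt.bs_mktid:
        return True

    # ..otherwise fall back to the original fqme matching.
    return msg.symbol == curr_mkt.fqme
```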
Tyler Goodlet 5b91b08963 Add an optional `BrokerdPosition.bs_mktid` field
Such that backends can deliver their own internal unique
`MktPair.bs_mktid` when they can't seem to get it right via the
`.fqme: str` export.. (COUGH ib, you piece of sh#$).

Also add todo for possibly replacing the msg with a `Position.summary()`
"snapshot" as a better and more rigorously generated wire-ready msg.
2026-01-07 13:41:06 -05:00
Tyler Goodlet d67ace75a4 Don't override `Account.pps: dict` entries..
Despite a `.bs_mktid` ideally being a bijection with `MktPair.fqme`
values, apparently some backends (cough IB) will switch the `.<venue>`
part in txn records resulting in multiple account-conf-file sections for
the same dst asset. Obviously that means we can't blindly allocate new
`Position` entries keyed by that `bs_mktid`; be sure to **update** any
pre-existing entry instead!

Deats,
- add case logic to avoid pp overwrites using a `pp_objs.get()` check.
- warn on duplicated pos entries whenever the current account-file
  entry's `mkt` doesn't match the pre-existing position's.
- mk `Position.add_clear()` return a `bool` indicating if the record was
  newly added, warn when it was already existing/added prior.

Also,
- drop the already deprecated `open_pps()`, also from sub-pkg exports.
- draft TODO for `Position.summary()` idea as a replacement for
  `BrokerdPosition`-msgs.
2026-01-07 13:41:06 -05:00
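As a rough sketch of the update-not-overwrite case logic (the `Position` ctor sig and `log` handle are assumptions; `Position` is passed in to keep this import-free):

```python
import logging

log = logging.getLogger(__name__)

def update_pp(pp_objs: dict, bs_mktid: str, mkt, txn, Position) -> None:
    pos = pp_objs.get(bs_mktid)
    if pos is None:
        # no pre-existing entry, safe to allocate
        pos = pp_objs[bs_mktid] = Position(mkt)  # ctor sig assumed
    elif pos.mkt != mkt:
        # multiple account-file sections for the same dst asset..
        log.warning(
            f'Duplicate pos entries for {bs_mktid!r}:\n'
            f'{pos.mkt} vs. {mkt}'
        )
    if not pos.add_clear(txn):
        log.warning(f'Txn was already added to {bs_mktid!r}: {txn!r}')
```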
Tyler Goodlet b6d70d5012 ib-related: cope with invalid txn timestamps
That is, inside the embedded `.accounting.calc.dyn_parse_to_dt()` closure,
add an optional `_invalid: list` param to which we can report
bad-timestamped records; these are overridden and returned as
`from_timestamp(0.)` (when the parser loop falls through) and reported
later (in summary) from the `.accounting.calc.iter_by_dt()` caller. Add
some logging and an optional debug block for future tracing.

NOTE, this commit was re-edited during a conflict between the orig
branches: `dev/binance_api_3.1` & `dev/alt_tpts_for_perf`.
2026-01-07 13:41:06 -05:00
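The fallback behaviour might look something like this sketch (the record field keys and parse call are illustrative assumptions; `pendulum` is the dt lib used throughout):

```python
import pendulum

def dyn_parse_to_dt(
    record: dict,
    _invalid: list|None = None,
) -> pendulum.DateTime:
    # try whichever timestamp fields the txn record may carry
    # (key names here are illustrative)
    for key in ('datetime', 'date', 'time'):
        if (val := record.get(key)):
            try:
                return pendulum.parse(val)
            except Exception:
                continue

    # parser loop fell through: stash the bad record so the
    # `iter_by_dt()` caller can report a summary later.
    if _invalid is not None:
        _invalid.append(record)

    return pendulum.from_timestamp(0.)
```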
goodboy 2ca50348ce Merge pull request 'ib_2025_updates: to make it not suck despite edwault's epic exit'
Reviewed-on: https://www.pikers.dev/pikers/piker/pulls/59
2026-01-07 18:40:51 +00:00
Tyler Goodlet 55116eea01 Bump `brokers.toml`, update ib and deribit sections
For `[ib]` adjust content to match changes to the
`dockering/ib/README.rst` and for `[deribit]` toss in the WIP options
related params for anyone who wants to play around with @nt's work.
2026-01-07 13:39:34 -05:00
Tyler Goodlet a0020d485e Bump ib-container docs and compose file
Add necessary details for the `brokers.toml`, cleanup and link to the
new GH container repo in the `docker-compose.yml`.
2026-01-07 13:39:34 -05:00
Tyler Goodlet ccb4f79170 Bump various `.brokers.core` doc string content/style 2026-01-07 13:39:34 -05:00
Tyler Goodlet 1089de024a ib: multiline stylings, typing, timeout report 2026-01-07 13:39:34 -05:00
Tyler Goodlet 05bdac5542 Woops, fix to read `.api_port` ref from the `Client.ib.client`.. 2026-01-07 13:39:34 -05:00
Tyler Goodlet a392185d2f Support per-`ib.vnc_addrs` vnc passwords
Such that the `brokers.toml` can contain any of the following
<port> = dict|tuple styles,

```toml
    [ib.vnc_addrs]
    4002 = {host = 'localhost', port = 5900, pw = 'doggy'}  # host, port, pw
    4002 = {host = 'localhost', port = 5900}  # host, port
    4002 = ['localhost', 5900]  # host, port
```

With the first line demonstrating a vnc-server password (as normally set
via a `.env` file in the `dockering/ib/` subdir) with the `pw =` field.
This obviously removes the hardcoded `'doggy'` password from prior.

Impl details in `.brokers.ib._util`:
- pass the `ib.api.Client` down into `vnc_click_hack()` doing all config
 reading within and removing host, port unpacking in the calling
 `data_reset_hack()`.
- also pass the client to `try_xdo_manual()` and comment (with plans to
  remove) the recently added localhost-only fallback section since
  we now have a fully working py vnc client again with `pyvnc` B)
- in `vnc_click_hack()` match for all the possible config line styles
  and,
  * pass any `pw` field to `pyvnc.VNCConfig`,
  * continue matching host, port without password,
  * fallthrough to raising a val-err when neither ^ match.
2026-01-07 13:39:34 -05:00
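The style matching might be sketched with a `match` block like so (names are illustrative, not the actual impl):

```python
def unpack_vnc_entry(
    entry: dict|list|tuple,
) -> tuple[str, int, str|None]:
    # match the `[ib.vnc_addrs]` entry styles shown above,
    # falling through to a val-err when none do.
    match entry:
        case {'host': host, 'port': port, 'pw': pw}:
            return host, port, pw

        case {'host': host, 'port': port}:
            return host, port, None

        case [host, port]:
            return host, port, None

        case _:
            raise ValueError(f'Invalid vnc-addr entry: {entry!r}')
```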
Tyler Goodlet 9fd14ad6ce ib: bump `docker/ib/README.rst`
For the new github image, a high-level look at its basic
features/usage/docs and prosing around our expected default usage with
the `piker.brokers.ib` backend.
2026-01-07 13:39:34 -05:00
Tyler Goodlet 6ff9ba2e78 ib.feed: better no-bars error-log message format 2026-01-07 13:39:34 -05:00
Tyler Goodlet c1fbf70c62 Switch to `pyvnc` for IB reset hackz
It actually works for vncAuth(2) (thank god!) which the previous
`asyncvnc` **did not**, and seems to be mostly based on the work
from the `asyncvnc` author anyway (so all my past efforts don't seem to
have been in vain XD).

NOTE, the below deats ended up being factored in earlier into the
`pyproject.toml` alongside nix(os) support needed for testing and
landing this history. As such, the comments are the originals but
the changes are not.

Deats,
- switch to `pyvnc` async API (using `asyncio` again obvi) in
  `.ib._util._vnc_click_hack()`.
- add `pyvnc` as src installed dep from GH.
- drop `asyncvnc` as dep.

Other,
- update `pytest` version range to avoid weird auto-load plugin exposed
  by `xonsh`?
- add a `tool.pytest.ini_options` to project file with vars to,
  - disable that^ `xonsh` plug using `addopts = '-p no:xonsh'`.
  - set a `testpaths` to avoid running anything but that subdir.
  - try out the `'progress'` style console output (does it work?).
2026-01-07 13:23:41 -05:00
Tyler Goodlet 269b8158e6 Convert remaining `.to_asyncio.open_channel_from()` to `chan` fn-sig usage 2026-01-07 13:23:41 -05:00
Tyler Goodlet 728a6f428e `ib.feed`: finally solve `push()` exc propagation
Such that if/when the `push()` ticker callback (closure) errors
internally, we actually eventually bubble the error out-and-up from the
`asyncio.Task` and from there out the `.to_asyncio.open_channel_from()` to
the parent `trio.Task`..

It ended up being much more subtle to solve than i would have liked
thanks to,

- whatever `Ticker.updateEvent.connect()` does behind the scenes in
  terms of (clearly) swallowing with only log reporting any exc raised
  in the registered callback (in our case `push()`),

- `asyncio.Task.set_exception()` never working and instead needing to
  resort to `Task.cancel()`, catching `CancelledError` and re-raising
  the stashed `maybe_exc` from `push()` when set..

Further this ports `.to_asyncio.open_channel_from()` usage to use
the new `chan: tractor.to_asyncio.LinkedTaskChannel` fn-sig API, namely
for `_setup_quote_stream()` task. Requires the latest `tractor` updates
to the inter-eventloop-chan iface providing a `.set_nowait()` and
`.get()` for the `asyncio`-side.

Impl deats within `_setup_quote_stream()`,
- implement `push()` error-bubbling by adding a `maybe_exc` which can be
  set by that callback itself or by its registering task; when set it is
  both,
  * reported on by the `teardown()` cb,
  * re-raised by the terminated (via `.cancel()`) `asyncio.Task` after
    woken from its sleep, aka "cancelled" (since that's apparently one
    of the only options.. see big rant further todo comments).
- add explicit error-tolerance-tuning via a `handler_tries: int` counter
  and `tries_before_raise: int` limit such that we only bubble
  a `push()` raised exc once enough tries have consecutively failed.
- as mentioned, use the new `chan` fn-sig support and thus the new
  method API for `asyncio` -> `trio` comms.
- a big TODO XXX around the need to use a better sys for terminating
  `asyncio.Task`s whether it's by delegating to some `.to_asyncio`
  internals after a factor-out OR by potentially going full bore `anyio`
  throughout `.to_asyncio`'s impl in general..
- mk `teardown()` use appropriate `log.<level>()`s based on outcome.

Surroundingly,
- add a ton of doc-strings to mod fns previously missing them.
- improved / added-new comments to `wait_on_data_reset()` internals and
  anything changed per ^above.

NOTE, resolved conflicts on `piker/brokers/ib/feed.py` due to
`brokers_refinery` commit:

d809c797 `.brokers.ib.feed`: better `tractor.to_asyncio` typing and var naming throughout!
2026-01-07 13:23:41 -05:00
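The error-bubbling pattern can be sketched roughly as below (assuming a plain `asyncio` task for brevity; the real impl runs via `tractor.to_asyncio` and registers `push()` on `Ticker.updateEvent`):

```python
import asyncio

async def quote_task() -> None:
    maybe_exc: BaseException|None = None

    def push(ticker) -> None:
        nonlocal maybe_exc
        try:
            ...  # normalize and relay the quote
        except BaseException as exc:
            # the event-emitter swallows (and only log-reports)
            # raises from registered callbacks, so stash for
            # re-raise by the task itself.
            maybe_exc = exc

    # (callback registration elided)
    try:
        await asyncio.sleep(float('inf'))
    except asyncio.CancelledError:
        # `Task.set_exception()` never works; instead the waker
        # calls `Task.cancel()` and we re-raise any stashed exc.
        if maybe_exc is not None:
            raise maybe_exc
        raise
```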
Tyler Goodlet 323840fdfc `ib`: various type-annot, multiline styling and todos updates 2026-01-07 13:23:41 -05:00
Tyler Goodlet 27c83fae0c ib: add venue-hours checking
Such that we can avoid other (pretty unreliable) "alternative" checks to
determine whether a real-time quote should be waited on or (when venue
is closed) we should just signal that historical backfilling can
commence immediately.

This has been a todo for a very long time and it turned out to be much
easier to accomplish than anticipated..

Deats,
- add a new `is_current_time_in_range()` dt range checker to predicate
  whether an input range contains `datetime.now(start_dt.tzinfo)`.
- in `.ib.feed.stream_quotes()` add a `venue_is_open: bool` which uses
  all of the new ^^ to determine whether to branch for the
  short-circuit-and-do-history-now-case or the std real-time-quotes
  should-be-awaited-since-venue-is-open case; drop all the old hacks
  trying to workaround not figuring that venue state stuff..

Other,
- also add a gpt5 composed parser to `._util` for the
  `ib_insync.ContractDetails.tradingHours: str` from before i realized
  there was a `.tradingSessions` property XD
- in `.ib.feed`,
  * add various EG-collapsings per recent tractor/trio updates.
  * better logging / exc-handling around ticker quote pushes.
  * stop clearing `Ticker.ticks` each quote iteration; not sure if this
    is needed/correct tho?
  * add masked `Ticker.ticks` poll loop that logs.
- fix some `str.format()` usage in `._util.try_xdo_manual()`

NOTE, resolved conflicts on `piker/brokers/ib/feed.py` due to
rebasing onto upstream `brokers_refinery` commit,

d809c797 `.brokers.ib.feed`: better `tractor.to_asyncio` typing and var naming throughout
2026-01-07 13:23:41 -05:00
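A minimal version of the range predicate (signature guessed from the msg):

```python
from datetime import datetime

def is_current_time_in_range(
    start_dt: datetime,
    end_dt: datetime,
) -> bool:
    # compare "now" in the same tz as carried by `start_dt`
    now = datetime.now(start_dt.tzinfo)
    return start_dt <= now <= end_dt
```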
Tyler Goodlet e92d5baf99 ib: never relay "Warning:" errors to EMS..
You'd think they could be bothered to make either a "log" or "warning"
msg type instead of a `type='error'`.. but alas, this attempts to detect
all such "warning"-errors and never proxy them to the clearing engine
thus avoiding the cancellation of any associated (by `reqid`)
pre-existing orders (control dialogs).

Also update all surrounding log messages to a more multiline style.
2026-01-07 13:23:41 -05:00
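The detection presumably reduces to something like this sketch (the exact err-msg shape is an assumption):

```python
def is_warning_error(err_msg: dict) -> bool:
    # ib delivers warnings as `type='error'` msgs; filter on the
    # message text so they're never proxied to the EMS, which would
    # wrongly cancel any order dialog sharing the `reqid`.
    return (
        err_msg.get('type') == 'error'
        and 'Warning:' in err_msg.get('message', '')
    )
```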
Tyler Goodlet b1111bf9b0 ib: jig `.data_reset_hack()` with vnc-client failover
Since apparently porting to the new docker container enforces using
a vnc password and `asyncvnc` seems to have a bug/mis-config whenever
i've tried a pw over a wg tunnel..?

Soo, this tries out the old `i3ipc`-win-focus + `xdo` click hack when
the above fails.

Deats,
- add a mod-level `try_xdo_manual()` to wrap calling
  `i3ipc_xdotool_manual_click_hack()` with an oserr handler, ensure we
  don't bother trying if `i3ipc` import fails beforehand tho.
- call ^ from both the orig case block and the failover from the
  vnc-client case.
- factor the `+no_setup_msg: str` out to mod level and expect it to be
  `.format()`-ed.
- refresh todo around `asyncvnc` pw ish..
- add a new `i3ipc_fin_wins_titled()` window-title scanner which
  predicates input `titles` and delivers any matches alongside the orig
  focused win at call time.
- tweak `i3ipc_xdotool_manual_click_hack()` to call ^ and remove prior
  unfactored window scanning logic.
2026-01-07 13:23:41 -05:00
goodboy d75c34d173 Merge pull request 'providers_sync: required API updates and `.brokers` refinements'
From providers_sync into main.
Reviewed-on: https://www.pikers.dev/pikers/piker/pulls/56
2026-01-07 18:23:13 +00:00
Tyler Goodlet 9be8ca6097 binance: add `AggTrade.nq: float`: "normal quantity" field.. 2026-01-06 23:43:44 -05:00
Tyler Goodlet bda8154d55 binance: handle new `TRADIFI_PERPETUAL`.. 2026-01-06 23:43:44 -05:00
Tyler Goodlet fd4dca9963 binance: add `Pair.opoAllowed` field
Handle new API field per 2025-12-02 update.

(this patch was generated in some part by [`claude-code`][claude-code-gh])
[claude-code-gh]: https://github.com/anthropics/claude-code
2026-01-06 23:43:44 -05:00
Tyler Goodlet 3c024206d4 binance: set `Pair.pegInstructionsAllowed = False`
Lol, a cheeky unforeseen bug due to TOML's lack of a null type and
thinking i can render an `Optional` field on a `msgspec.Struct`
(defaulted to `None`) to the `binance.symcache.toml` cache file..

I didn't catch this when i first updated to the 3.1 API in f7caa75228
because i never did a cache-files flush.. lesson learned and we **really
need tests for this**!!
2026-01-06 23:43:44 -05:00
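A sketch of the gist: since TOML can't encode `None`, the field needs a concrete default to round-trip through the symcache file:

```python
import msgspec

class Pair(msgspec.Struct):
    # an `Optional[bool] = None` default can't be rendered to
    # `binance.symcache.toml` (TOML has no null type), so use
    # a concrete `bool` default instead.
    pegInstructionsAllowed: bool = False
```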
Tyler Goodlet 4e9394f24b Add fix for binance API 3.1 rollout..
See https://developers.binance.com/docs/binance-spot-api-docs#2025-08-26
2026-01-06 23:43:44 -05:00
Tyler Goodlet cc0da23687 kraken: add crash-handling around `Pair()` init
Since it can otherwise be difficult to debug due to nursery cancellation
(we need that taskman yo!).
2026-01-06 23:43:44 -05:00
Tyler Goodlet c6998431ea kraken: `Pair.costmin` is now optional?
Some pairs don't seem to define it but it's not listed as deprecated on
the official API page (new one now linked in the type def's doc string).
2026-01-06 23:43:44 -05:00
Tyler Goodlet af39a8d0a7 binance: add new `permissionSets` to base `Pair` 2026-01-06 23:43:44 -05:00
Tyler Goodlet 85834b41eb Update `binance` spot pairs with `amendAllowed`
As per API updates,
https://developers.binance.com/docs/binance-spot-api-docs
https://developers.binance.com/docs/binance-spot-api-docs/faqs/order_amend_keep_priority

I also slightly tweaked the field mismatch exception note to include the
`repr(pair_type)` so the dev can know which pair types should be
changed.
2026-01-06 23:43:44 -05:00
Tyler Goodlet 04be48e2d2 `.kraken`: add masked pauses for order req debug
Such that the next time i inevitably must debug some order-request
error status or precision discrepancy, i have the mkt-symbol branch
ready to go. Also, switch to `'action': 'buy'|'sell' as action,` style
`case` matching instead of the post-`if` predicate style.
2026-01-06 23:43:44 -05:00
Tyler Goodlet b6d8ddae94 `.questrade`: link in ws-API issue! 2026-01-06 23:43:44 -05:00
Tyler Goodlet 925a12bd81 `.kraken.broker`: need to `await verify_balances()` .. 2026-01-06 23:43:44 -05:00
Tyler Goodlet 13b7dfe1d0 `.brokers.ib.feed`: better `tractor.to_asyncio` typing and var naming throughout! 2026-01-06 23:43:44 -05:00
Tyler Goodlet 19609b3214 `.brokers.cli`: module type and todo for `--pdb` flag to NOT src from sub-cmd 2026-01-06 23:43:44 -05:00
Tyler Goodlet 51541b46be Type loaded backend modules 2026-01-06 23:43:44 -05:00
goodboy f218cf450e Merge pull request 'port_to_latest_tractor'
#45 from port_to_latest_tractor into main
Reviewed-on: https://www.pikers.dev/pikers/piker/pulls/45
2026-01-07 04:43:27 +00:00
Tyler Goodlet c77aca1f90 Flip (back) `pikerd` to use TCP by default
Since a UDS default would break all non-linux OS-platforms atm, and bc
a "non-std transport" should only be set through the config anyways.

Yeah yeah, we're slowly appealing to the frickin masses..
2026-01-06 23:34:32 -05:00
Tyler Goodlet 3adbabcba6 Use `pytest` plugin now exposed by `tractor` 2026-01-06 22:27:58 -05:00
Tyler Goodlet 2b17b99964 `.ui._search`: collapse EGs as needed, use `tn` naming. 2026-01-06 22:27:58 -05:00
Tyler Goodlet f3767e4269 Port `.data._web_bs` stuff to strict-EGs
Using `tractor.trionics.collapse_eg()` as needed and doing
some renames, in similar style as elsewhere:
- `pcs` -> `rent_cs`,
- `n` -> `tn` for nursery handles,

Also,
- tweak the `._reconnect_forever()` while loop to use the
  (also) `trio`-internal
  `mc_state: trio._channel.MemoryChannelState = snd._state` instead
  of `snd._close` to poll for open send/receive consumer task counts
  since,
    1. it seems more reliable than using `snd._closed`,
    2. there's no other way to access the info.. afaik?

- handle `ConnectionRejected` explicitly alongside handshake-errs as
  a retry case.
- add a base-exc handler which `.exception()` reports the reconnect
  attempt failure explicitly.
- drop some lingering `Optional` usage.
2026-01-06 22:27:58 -05:00
Tyler Goodlet c065ff6b86 Port `.cli` & `.service` to latest `tractor` registry APIs
Namely changes for the `registry_addrs: list`, `enable_transports: list`
and related `tractor._addr` primitive requirements.

Other updates include,
- passing `maybe_enable_greenback=True`,
- additional exc logging around `pikerd` syncing/booting,
- changing to newer `Context.wait_for_result()`,
- dropping (unnecessary?) `maybe_open_crash_handler()` around `pikerd` ep.
2026-01-06 22:27:58 -05:00
Tyler Goodlet 5dc0ecc802 binance; unmask around send-chan @acm usage 2026-01-06 22:27:58 -05:00
Tyler Goodlet ff81e57e73 Spurious first-draft of EG collapsing
Topically, throughout various (seemingly) console-UX-affecting or benign
spots in the code base; nothing that required intervention beyond
the superficial. A few spots also include `trio.Nursery` ref renames
(always to something with a `tn` in it) and log-level reductions to
quiet (benign) console noise oriented around issues meant to be solved
long..

Note there's still a couple spots i left with the loose-ify flag because
i haven't fully tested them without using the latest version of
`tractor.trionics.collapse_eg()`, but more than likely they should flip
over fine.
2026-01-06 22:27:58 -05:00
Tyler Goodlet ef748c7599 Use `.trionics.collapse_eg()` in `.deribit.api`
Commit this change separate from the (original) broader set applied to
the entire code base since the `.deribit.api` mod contained changes from
upstream max-pain work (from our very own @nt) which caused a noticeable
conflict and intros un-required changes from his work to re-enable
`deribit` support.

Note the original commit, "69eac7bb Spurious first-draft of EG
collapsing", applied similar changes through the rest of the code base.
AGAIN, this mod's change is only being broken out to minimize upstream
change conflicts due to updates to the `deribit` backend done earlier in
time-history.
2026-01-06 22:27:58 -05:00
Tyler Goodlet 3f6853a437 Try running daemons on UDS tpt
The root daemon, pikerd, needs to be adjusted to use diff default
registry addrs to also utilize non-TCP, but for now this gets us started
testing; so far so good B)
2026-01-06 22:27:58 -05:00
Tyler Goodlet 0bd8cd1882 Adjust feed status fields/display-pane to new actor-ID
That is to use the new `tractor.msg.types.Aid` struct to pull the
`brokerd` info from the `tractor.Channel.aid: Aid` attr as well as more
generally handling the new `Channel.raddr.proto_key: str` and no longer
assuming a TCP IPC transport; this per the recent `tractor.ipc`
subsys which adds multi-IPC-transports!

Downstream tweaks to match,
- use an "opt-in" field set to display in the `brokerd` info pane in
  `.ui._feedstatus.mk_feed_label()`.
 |_ also add some todos and drop some seemingly unneeded form sizing
    calcs?
- tweak `.ui._label` to allow not using markdown, though ended up not
  doing that since it looked too plain..
2026-01-06 22:27:58 -05:00
Tyler Goodlet 28db478da1 Adjust to `trio`'s strict eg nurseries throughout!
Using `tractor.trionics.collapse_eg()` as needed to avoid, at the least,
crash-worthy (in debug-mode REPL-ing terms) nested cancellation egs that
exhibit on SIGINT/ctl-c of each "app" (chart & daemon).

Also a bit of renaming of all `trio.Nursery`s to `tn`, the new "task
nursery" shorthand-var-name being used in all our other `tractor`
related projects.
2026-01-06 22:27:58 -05:00
Tyler Goodlet d36575cd0d Port to newer `tractor.get_registry()` 2026-01-06 22:27:58 -05:00
Tyler Goodlet 9a2b43495d Update legacy type to `tractor.MsgStream` 2026-01-06 22:27:58 -05:00
goodboy 8a17a75ba2 Merge pull request 'decimal_prices_thru_ems
Yeah, just suck it up and do `Order.price: Decimal` for now..'

(#44) from decimal_prices_thru_ems into main
Reviewed-on: https://www.pikers.dev/pikers/piker/pulls/44
2026-01-07 03:25:27 +00:00
Tyler Goodlet 838ddd6e79 Fix type-check assertion in ems test to use `is` 2026-01-06 21:43:59 -05:00
Tyler Goodlet aaf2dbcd79 Cast to `float` as needed from order-mode and ems
Since we're not quite yet using automatic typed msging from
`tractor`/`msgspec` (i.e. still manually decoding order ctl msgs from
built-in types..`dict`s still not `msgspec.Struct`) this adds the
appropriate typecasting ops to ensure the required precision is attained
prior to processing and/or submission to a brokerd backend service.

For the `.clearing._ems`,
- flip all `trigger_price` previously presumed to be `float` to just
  the field-identical `price: Decimal` and ensure we cast to `float`
  for any `trigger_price` usage, like before passing to `mk_check()`.

For `.ui.order_mode.OrderMode`,
- add a new `.curr_mkt: MktPair` convenience property to get the
  chart-active value.
- ensure we always use the `.curr_mkt.quantize() -> Decimal` before
  setting any IPC-msg's `.price` field!
- always cast `float(Order.price)` before use in setting line-levels.
- don't bother setting `Order.symbol` to a (now fully removed) `Symbol`
  instance since it's not really required-for-use anywhere; leaving it
  a `str` (per the type-annot) is fine for now?
2026-01-06 21:43:59 -05:00
Tyler Goodlet cf976ff12b Mk `Brokerd[Order].price` avoid `float`-errs
By re-typing to a `.price: Decimal` field on both legs of the EMS.

It seems we must do it ourselves since,
- these msg's (fields) are relayed through the clearing engine to each
  `brokerd` backend and,
- bc many (if not all) of those backends `.broker`-clients (nor their
  encapsulated "brokerage services") **are not** doing any
  precision-truncation themselves.

So, for now, instead we opt to expect rounding at the source. This means
we will explicitly require casting to/from `float` at the line-graphics
interface to the order-clearing-engine (as implemented throughout
`.ui.order_mode.OrderMode`); and this is coming shortly.
2026-01-06 21:43:59 -05:00
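The round-at-the-source idea, sketched with stdlib `Decimal` (the actual quantization runs through `MktPair.quantize()`):

```python
from decimal import Decimal

# a hypothetical mkt price-tick of 2 decimal places
tick = Decimal('0.01')

# quantize once at the order-msg boundary..
price: Decimal = Decimal('123.456789').quantize(tick)

# ..and only cast at the line-graphics interface.
line_level: float = float(price)
```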
goodboy fa0d088ebc Merge pull request 'rando_data_subsys_styling: mostly `.data` subsys styling from feats branches' (#58) from rando_data_subsys_styling into main
Reviewed-on: https://www.pikers.dev/pikers/piker/pulls/58
2026-01-07 02:43:35 +00:00
Tyler Goodlet dc61e6fc4f Report with `{fqme!r}` in feed allocator for type clarity 2026-01-06 19:33:23 -05:00
Tyler Goodlet b2b0e4c40d `.config.get_app_dir()`: link to `click`'s orig impl on GH 2026-01-06 19:33:23 -05:00
Tyler Goodlet 4b1fa2173b Touch `conf.toml` by default when dne? 2026-01-06 19:33:23 -05:00
Tyler Goodlet b3d345fc41 Wow, update root `conf.toml` to new multiaddr style
I don't know how this wasn't already committed but.. drops the legacy
`marketstore` tsdb socket info vars since we're going all in on
`nativedb` BP
2026-01-06 19:33:23 -05:00
Tyler Goodlet 0282e632f9 `data._symcache`, impl a summary `.__repr__()`, avoids `Asset` causality issues 2026-01-06 19:33:23 -05:00
Tyler Goodlet 7e600b3901 Avoid `msgspec` eval-err on `Asset` in symcache? 2026-01-06 19:33:23 -05:00
Tyler Goodlet dbe2567fe8 Flip screen-info script to qt6, refine it to heck.
Buncha updates and improvements,
- adjust sub-namespace imports according to console warnings.
- iterate all detected screens in a loop and instead report which is the
  primary and the current.
- type annotate all vars where non-obvious, particularly the `Qt` refs.
2026-01-06 19:33:23 -05:00
Tyler Goodlet 60df863a6a Mk a `notes_to_self/`, move orig file to `ideas.rst` 2026-01-06 19:33:23 -05:00
Tyler Goodlet 2d44a9afaa Drop old/masked ahab-docker daemon starting 2026-01-06 19:33:23 -05:00
Tyler Goodlet 57a5903ccf Start a manual `tags` file for internal refs 2026-01-06 19:33:23 -05:00
Tyler Goodlet cbe0cbd29c Add a couple new grays to the palette 2026-01-06 19:33:23 -05:00
Tyler Goodlet 2158e27a66 Add missing f-str prefix to log line 2026-01-06 19:33:23 -05:00
Tyler Goodlet 323290d20b Teensie `piker.data` styling tweaks
- use more compact optional value style with `|`-union
- fix `.flows` typing-only import since we need `MktPair` to be
  immediately defined for use on a `msgspec.Struct` field.
- more "tree-like" warning msg in `.validate()` reporting.
2026-01-06 19:33:23 -05:00
goodboy 4dd7391da7 Merge pull request 'bump_polars: new version with API adjustments' (#57) from bump_polars into main
Reviewed-on: https://www.pikers.dev/pikers/piker/pulls/57
2026-01-06 23:02:07 +00:00
Tyler Goodlet 2ced05c4d5 `polars.cumsum()` is now `.cum_sum()` 2026-01-06 16:10:36 -05:00
Tyler Goodlet e10f3a16dd Bump to (latest) `polars`, the `0.20.6x` series B)
Since I was trying out the neat lookin `polars-fuzzy-match` (also added
for now as a core dep here) which requires the new plugin sys, plus it's
about time we synced with upstream!

Adjust some column syntax to the new `.name` sub-field-space and the
`uv` lock-file to match.

Other,
- add back `trio-typing` bc i guess something else needs it (debug
  tooling stuff in new `tractor`?)
- flip back to the `tractor` pre-main pin since the new `main`-branch
  requires new `trio` stuff we haven't ported yet..
2026-01-06 16:10:36 -05:00
Tyler Goodlet 44a3385604 Just drop the merge-msg template, more trouble than it's worth XD 2026-01-06 12:35:51 -05:00
Tyler Goodlet 65320a5e0f Gitea template, wow fix it again.. 2026-01-06 12:28:30 -05:00
goodboy 272b74d214 Simplify gitea merge template
Mk title line same as PR, drop issues bit, keep `ReviewedOn` (since nothing else will contain the web addr..) and put the reviewers list.
2026-01-06 17:25:49 +00:00
Tyler Goodlet 4baa330e23 Ye, nm it turns out there's no ${URL} !?
Lol like wtf, how can they have this `ReviewedOn` but not just the PR's
web addr.. XD

I guess i'll just suck back the OCD and try it like this.
2026-01-06 12:22:56 -05:00
Tyler Goodlet f9514582b8 Mk title line same as PR, drop issues bit
Left in docs link for ref, hopefully that doesn't also do something
annoying in the web UI Bp
2026-01-05 14:55:34 -05:00
goodboy 8f24a35a5d Merge pull request 'Merge-msg template' (#54) from gitea_merge_template into main
Submitted-in: https://www.pikers.dev/pikers/piker/pulls/54
2026-01-05 18:51:52 +00:00
Tyler Goodlet cccf001aa4 Try out what gemini says will work? 2026-01-05 13:43:10 -05:00
goodboy 65a4fafb5d Merge pull request 'no_symcache_no_problem: be more tolerant of not-yet-implemented provider backends' (#39) from no_symcache_no_problem into main
Submitted-in: https://www.pikers.dev/pikers/piker/pulls/39
2026-01-05 16:28:59 +00:00
Tyler Goodlet 07fbe859c3 Finally drop `Symbol`
It was replaced by `MktPair` long ago in,
https://github.com/pikers/piker/pull/489

with follow up for final removal in,
https://github.com/pikers/piker/issues/517

Resolves #517
2026-01-02 16:49:16 -05:00
Tyler Goodlet db0872e350 `.accounting._ledger`: typing and more multiline styling 2026-01-02 16:49:16 -05:00
Tyler Goodlet 878002aee0 Drop some bps and style logic to multiline 2026-01-02 16:49:16 -05:00
Tyler Goodlet c9e6510535 Invert `getattr()` check for `get_mkt_pairs()` ep
Such that we `return` early when not defined by the provider backend to
reduce an indent level in `SymbologyCache.load()`.
2026-01-02 16:49:16 -05:00
Tyler Goodlet 4cae3778c1 Allow ledger passes to ignore (symcache) unknown fqmes
For example in the paper-eng, if you have a backend that doesn't fully
support a symcache (yet) it's handy to be able to ignore processing
other paper-eng txns when all you care about at the moment is the
simulated symbol.

NOTE, that currently this will still result in a key-error when you load
more than one mkt with the paper engine (for which the backend does not
have the symcache implemented) since no fqme ad-hoc query was made for
the 2nd symbol (and i'm not sure we should support that kinda hackery
over just encouraging the sym-cache being added?). Def needs a little
more thought depending on how many backends are never going to be able
to (easily) support caching..
2026-01-02 16:49:16 -05:00
goodboy ff49ff0376 Merge pull request 'wayland_nix_py313: keeping up with modern DEs and nix(os)' (#53) from wayland_nix_py313 into main
Submitted-in: https://www.pikers.dev/pikers/piker/pulls/53
2026-01-02 21:47:40 +00:00
Tyler Goodlet b884febd5f Update readme with `nix develop`/flake usage on wayland, and tweaked `uv sync` cmds 2026-01-02 14:07:56 -05:00
Tyler Goodlet 291508a9b1 Fix readme to `uv sync`.. link to astral docs 2026-01-02 14:01:49 -05:00
Tyler Goodlet 7498c221a8 Drop variable regex from `ruff.toml`
Same as in other projects, seems to be not parsing and causing `ruff` to
crash?!?
2026-01-02 12:38:36 -05:00
Tyler Goodlet 64828d2fe1 Bump `uv.lock` on nixos
Namely from `pyproject.toml` re-org of dep-groups.
2026-01-02 12:38:19 -05:00
Tyler Goodlet 1e6fa8675d A better dep-groups specificity breakdown
Trying to start organizing non-hard deps into groups with sensible
"domain names" as it were. I coulda sworn we originally had at least UI
libs setup this way.. musta got lost in prior nix(os) porting.

Specifics,
- move all Qt and `rapidfuzz` deps into the `uis` group.
- add a new `repl` group for all the `pdbp` (debugging utils) and
  `xonsh` (@goodboy's shell pref) related console related extensions.
- add a `testing` group for the harness' needs.
- add a `de` group for (as of rn) TWM-specific libs.
- nest all the new ^ groups in the `dev` group as needed.
2026-01-02 12:37:49 -05:00
Tyler Goodlet 51fb871f57 Skip `ruff` dev-dep on nix(os) overlays
Since the linking will be borked if we pull the wheel using `uv`; we
need to instead delegate to the `nixpkgs` version in the dev-shell.

`pyproject` deats,
- add a new deps-group: 'lint' which contains `ruff`.
- drop `ruff` from std deps (not sure how it got there anyway).
- mv `elasticsearch` to a new `dbs` deps group (we don't really even
  want to be using it in the near future).
- mv `uis` group into dep-groups section from `project.optionals-deps`.
- add a `tool.uv.default-groups = ['uis', 'dev']` setting which then
  will avoid install of any non-explicit extras.
- put `rapidfuzz` only in `uis` group.

`flake.nix` tweaks,
- include `ruff` and `pypkgs.ruff` in the overlay.
- pass `--no-group ruff` to the `uv sync` line of shell init.
2026-01-02 12:36:39 -05:00
Tyler Goodlet ffd6438b88 Add bash-completion pkgs to flake overlay
Mks completions work inside custom embedded shells (like `xonsh`!).
2026-01-02 12:36:33 -05:00
Tyler Goodlet 5449141ec4 Update `default.nix` (from @nt) for py313 2026-01-02 12:36:19 -05:00
Tyler Goodlet 5337f8abee nix: make Qt6 work on wayland
Taking many tips from our `default.nix` (thanks @nt!) this seems to be
the minimal overlay required for a flake to get up and running with
`piker chart` B)

Notes,
- for now, we're pinning to a major `cpython` version (3.13)
- ensure we (can) build with `nixpkgs.qt6.qtwayland`
- add the minimal Qt ld-lib-path linkings including those for plugin
  use (required for wayland mode).
- for now, hardcode "wayland" platform-mode and the linux standard
  "xdg-shell" integration.
- leave some TODOs to better parameterize around py versions.
2026-01-02 12:36:12 -05:00
Tyler Goodlet 0329a6d852 Bump `flake.lock`, seemingly nicely minimized B) 2026-01-02 12:36:05 -05:00
Tyler Goodlet ff045f699f Redo `flake.nix` using `pyproject.nix` recos
Particularly using their recommended "impure template",
- https://pyproject-nix.github.io/pyproject.nix/templates.html#impure
- code: https://github.com/pyproject-nix/pyproject.nix/blob/master/templates/impure/flake.nix

Note the `shellHook` now contains various `uv`-specific osenv settings
and cmds to get a dev-env setup the way i would do it by default, that
includes all dev and extra (group) deps. For now i've hard coded the
"virt-env subdir" used by `uv` to match the cpython version. We can
obviously parameterize this much better in the future.

For those who want to spawn a diff shell then bash see the commented
line, for ex. i personally use `nix develop -c uv run xonsh`.
2026-01-02 12:35:59 -05:00
Tyler Goodlet 6d6ca1a908 Don't pin `pendulum` version so we can use wheel
Bump version in lock file to match.
2026-01-02 12:35:44 -05:00
goodboy a00e9c0e64 Merge pull request 'ems_no_last_required: don't require `last` field to boot dark-pool engine' (#38) from ems_no_last_required into main
Submitted-in: https://www.pikers.dev/pikers/piker/pulls/38
2026-01-01 20:15:57 +00:00
goodboy cb694700c2 Merge pull request 'stop_is_eoc: expect `trio.EndOfChannel` as graceful stream shutdown' (#52) from stop_is_eoc into main
Submitted-as: https://www.pikers.dev/pikers/piker/pulls/52
2026-01-01 19:57:35 +00:00
Tyler Goodlet 11c931f65d Use `piker_pin` branch from gitea `tractor` repo 2026-01-01 14:50:23 -05:00
Tyler Goodlet 60390ae596 Various `.clearing` todos/notes on potential issues with loglevel settings.. 2025-02-21 16:25:22 -05:00
Tyler Goodlet 9592735aaa .clearing._ems: Don't require `first_quote['last']`
Instead just check for the field (which i'm not huge on the key-name for
anyway) and if not found get the "last price" from the real-time shm
buffer's latest 'close' sample.

Unrelatedly, use a `subs.copy()` in the `Router.client_broadcast()` loop
such that if a `client_stream` is popped on connection failure, we don't
RTE for the "size changed on iteration".
2025-02-21 16:25:22 -05:00
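Both tweaks sketched together (msg/field shapes assumed):

```python
def get_last(first_quote: dict, rt_shm) -> float:
    # fall back to the rt shm buffer's latest 'close' sample
    # when the first quote lacks a 'last' key..
    last = first_quote.get('last')
    if last is None:
        last = rt_shm.array[-1]['close']
    return float(last)

async def client_broadcast(subs: dict, msg: dict) -> None:
    # iterate a copy so popping a failed `client_stream` doesn't
    # RTE with "dictionary changed size during iteration"..
    for sub_key, client_stream in subs.copy().items():
        try:
            await client_stream.send(msg)
        except Exception:
            subs.pop(sub_key, None)
```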
Tyler Goodlet 49841f5b91 Catch using `Sampler.bcast_errors` where possible
In all other possible IPC disconnect handling blocks. Also more
comprehensive typing throughout `uniform_rate_send()`.
2025-02-21 16:24:54 -05:00
Tyler Goodlet b2827ef3c3 Group bcast errors as `Sampler.bcast_errors`
A new class var `tuple[Exception]` such that the err set can be reffed
externally as needed for catching other similar pub-sub/IPC failures in
other (related) real-time sub-systems.

Also added some now-masked logging for debugging live-feed stream reading
issues that should ONLY be used for debugging since they'll greatly
degrade HFT perf. Used the new `log.mk_repr()` stuff (that one day we
should prolly pull from `modden` as a dep) for pretty console emissions.
2025-02-21 16:24:54 -05:00
Tyler Goodlet 2fc4ccf011 Suppress `trio.EndOfChannel`s raised by remote peer
Since now `tractor` will raise this native `trio`-exc translated from
a `Stop` msg when the peer gracefully terminates a `tractor.MsgStream`.
Just `info()` log in such cases versus continuing to warn for the
others.
2025-02-21 16:24:54 -05:00
goodboy 5e371f1d73 Merge pull request 'jsonrpc_err_in_rent' (#41) from jsonrpc_err_in_rent into gitea_feats
Reviewed-on: #41
2025-02-21 21:21:02 +00:00
goodboy 6c221bb348 Merge pull request 'tsp_gaps: fixes for fault-less OHLCV time-series loads' (#35) from tsp_gaps into gitea_feats
Reviewed-on: #35
2025-02-21 20:46:37 +00:00
Tyler Goodlet e391c896f8 Mk jsonrpc's underlying ws timeout `float('inf')`
Since currently we're only using this IPC subsys for `deribit`, and
generally speaking we're primarily supporting options markets (which are
fairly "slow moving"), flip to a default of NOT resetting the `NoBsWs`
on timeout since doing so normally breaks the json-rpc IPC session.
Without a proper `fixture` passed to `open_autorecon_ws()` (which we
should eventually implement!!) relying on a timeout-to-reset more or
less will just cause breakage issues - a proper reconnect sequence must
be implemented before using that feature.

Deats,
- expose and proxy through the `msg_recv_timeout` from
  `open_jsonrpc_session()` into the underlying `open_autorecon_ws()`
  call.
2025-02-19 17:05:13 -05:00
Tyler Goodlet 5633f5614d Doc-n-clean `.data._web_bs.open_jsonrpc_session()`
Add a doc-string reflecting recent refinements, drop all the old hook
params, rename `n: trio.Nursery` -> `tn` for "task nursery" fitting with
code base's naming style.
2025-02-19 17:05:13 -05:00
Tyler Goodlet 76735189de data._web_bs: try to raise jsonrpc errors in parent task 2025-02-19 17:05:13 -05:00
Tyler Goodlet d49608f74e Refine history gap/termination signalling
Namely handling backends which do not provide a default "frame
size-duration" in their init-config by making the backfiller guess the
value based on the first frame received.

Deats,
- adjust `start_backfill()` to take a more explicit
  `def_frame_duration: Duration` expected to be unpacked from any
  backend hist init-config by the `tsdb_backfill()` caller which now
  also computes a value from the first received frame when the config
  section isn't provided.
- in `start_backfill()` we now always expect the `def_frame_duration`
  input and always decrement the query range by this value whenever
  a `NoData` is raised by the provider-backend paired with an explicit
  `log.warning()` about the handling.
- also relay any `DataUnavailable.args[0]` message from the provider
  in the handler.
- repair "gap reporting" which checks for expected frame duration vs.
  that received with much better humanized logging on the missing
  segment using `pendulum.Interval/Duration.in_words()` output.
2025-02-19 17:01:24 -05:00
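The decrement-on-`NoData` handling, roughly (the exc type is stubbed here and `get_hist` is assumed to return the frame and its earliest dt; in piker both come from the data layer):

```python
import logging
import pendulum

log = logging.getLogger(__name__)

class NoData(Exception):
    # stand-in for piker's provider-signalled exc type
    ...

async def backfill(
    get_hist,
    end_dt: pendulum.DateTime,
    first_dt: pendulum.DateTime,
    def_frame_duration: pendulum.Duration,
) -> None:
    while end_dt > first_dt:
        try:
            frame, end_dt = await get_hist(end_dt=end_dt)
        except NoData:
            # gap signalled by the provider: always decrement the
            # query range by the (expected) frame duration.
            log.warning(f'No data ending @ {end_dt}, decrementing..')
            end_dt -= def_frame_duration
            continue
        # ..write `frame` to the tsdb / shm buffer here
```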
Tyler Goodlet bf0ac93aa3 Only use `frame_types` if delivered during enter
The `open_history_client()` provider endpoint can *optionally*
deliver a `frame_types: dict[int, pendulum.Duration]` subsection in its
`config: dict[str, dict]` (as was implemented with the `ib` backend).
This allows the `tsp` backfilling machinery to use this "recommended
frame duration" to subtract from the `last_start_dt` any time a `NoData`
gap is signalled by the `get_hist()` call allowing gaps to be ignored
safely without missing history by knowing the next earliest dt we can
query from using the `end_dt`. However, currently all crypto$ providers
haven't implemented this feat yet..

As such only try to use the `frame_types` feature if provided when
handling `NoData` conditions inside `tsp.start_backfill()` and otherwise
raise as normal.
2025-02-19 17:01:24 -05:00
Tyler Goodlet d7179d47b0 `.tsp._anal`: add (unused) `detect_vlm_gaps()` 2025-02-19 17:01:24 -05:00
Tyler Goodlet c390e87536 `.storage.cli`: collect gap-markup-aids into `tf2aids: dict` prior to pause for introspection 2025-02-19 17:01:24 -05:00
Tyler Goodlet 5e4a6d61c7 Ignore any non-`.parquet` files under `.config/piker/nativedb/` subdir 2025-02-19 17:01:24 -05:00
Tyler Goodlet 3caaa30b03 Mask no-data pause, add perps to no-`/src`-in-fqme asset set
Was orig for debugging an issue with `kucoin` i think but definitely
shouldn't be left in XD

Also add `'perpetual_future'` to the `.start_backfill()` input literal
set since we don't expect the 'btc/usd.perp.binance' for now.
2025-02-19 17:01:24 -05:00
goodboy 1e3942fdc2 Merge pull request 'Add `ruff` to deps, bump `uv.lock`' (#32) from add_ruff_linter into gitea_feats
Reviewed-on: #32
2025-02-17 19:55:32 +00:00
Nelson Torres 49ea380503 Add new `ruff.toml` config file
Based on the default provided in their
[docs](https://docs.astral.sh/ruff/configuration/) and migrating
previous config from the prior `poetry`-version of our `pyproject.toml`
2025-02-17 14:48:10 -05:00
Tyler Goodlet 933f169938 Add/reorg back some content from `poetry` old config 2025-02-14 13:47:02 -05:00
Nelson Torres 51337052a4 Remove legacy `poetry` config content from pyproject.toml 2025-02-14 15:01:29 -03:00
Tyler Goodlet 8abe55dcc6 Add `ruff` to deps, bump `uv.lock`
Such that we start encouraging devs to lint code they touch and
hopefully we include a pass as part of our tests/CI eventually B)

Also, mk local `tractor` install `--editable` since without it being
a locally hackable repo it's kinda pointless to install from the local
fs Xp
2025-02-13 21:20:11 -05:00
goodboy c933f2ad56 Merge pull request 'kucoin_and_binance_fix' (#9) from kucoin_and_binance_fix into gitea_feats
Reviewed-on: #9
2025-02-13 19:40:50 +00:00
Tyler Goodlet 00108010c9 Mask `pytest` detection block in `piker.config`
Seems to be some kinda super weird env bug since we moved to using
`uv`? When it triggers it also seems to cause a pretty fundamental crash
that not only breaks `tractor.devx._debug` stuff but also seems to get
us in a perma-hang state where no SIGINT or other sys sig will be able
to kill the root proc!?!?

TODO, a `gitea` issue to track so we can fix the fundamental problem as
well as transitive fault in `tractor`'s core which seems to be due to
the error taking place during a sub-actor's module import phase which
prevents the runtime from booting fully and then the proc getting stuck
in a real gnarly SIG-state..
2025-02-13 13:32:11 -05:00
Tyler Goodlet 8a4901c517 `.binance.feed`: moar type fixes, drop `rapidfuzz` 2025-02-13 12:35:41 -05:00
Tyler Goodlet d7f6a5ab63 Update to latest `KucoinMktPair` spec 2025-02-12 18:08:40 -05:00
Tyler Goodlet e0fdabf651 Use `../tractor` srcs in editable mode? 2025-02-12 15:14:30 -05:00
Tyler Goodlet cb88dfc9da `kucoin`: repair live quotes streaming..
This must have broken at some point during the new `MktPair` and thus
`.fqme: str` updates; more-or-less the symbol key in the quote-msg-`dict`
was NOT set to the `MktPair.bs_fqme: str` value and thus wasn't being
processed by the downstream sampling and feed subsys.

So fix that as well as a few other refinements,
- set the `topic: mkt.bs_fqme` in quote msgs obvi.
- drop the "wait for first clearing vlm" quote poll loop; going to fix
  the sampler to handle a `first_quote` without a `'last'` key.
- add some typing around calls to `get_mkt_info()`.
- rename `stream_messages()` -> `iter_normed_quotes()`.
2025-02-12 15:05:22 -05:00
Nelson Torres bb41dd6d18 Deleted settlePlan field from binance FutesPair. 2025-02-12 15:05:22 -05:00
Nelson Torres 99e90129ad Added missing fields for kucoin.
feeCategory, makerFeeCoefficient, takerFeeCoefficient and st.
2025-02-12 15:05:22 -05:00
Tyler Goodlet cceb7a37b9 Lel, forgot to add a `SPOT` venue for `binance`.. 2025-02-12 15:05:22 -05:00
goodboy 5382815b2d Merge pull request 'uv migration and default.nix for qt6' (#17) from uv_migration into gitea_feats
Reviewed-on: #17
2025-02-12 20:04:02 +00:00
Tyler Goodlet cb1ba8a05f Further root readme bump, factor `.clearing` content
In line with our move to `uv` and recent `nix` support update a bunch of
the summary content and factor out the order-control section to a new
`.piker.clearing` readme file with embedded todos therein.
2025-02-12 15:01:51 -05:00
Nelson Torres 6c65ec4d3b Readme update:
- uv replaces poetry
- update nix-shell command
2025-02-12 13:05:25 -03:00
Nelson Torres 12e371b027 uv migration 2025-02-12 11:19:34 -03:00
goodboy 014bd58db4 Merge pull request 'go_httpx' (#2) from go_httpx into gitea_feats
Landed-in: #2
2025-02-12 13:01:19 +00:00
Tyler Goodlet 844544ed8e Port binance to `httpx`
Like other backends use the `AsyncClient` for all venue specific
client-sessions but change to allocating them inside `get_client()`
using an `AsyncExitStack` and inserting directly in the
`Client.venue_sesh: dict` table during init.

Supporting impl tweaks:
- remove most of the API client session building logic and instead make
  `Client.__init__()` take in a `venue_sessions: dict` (set it to
  `.venue_sesh`) and `conf: dict`, instead opting to do the http client
  configuration inside `get_client()` since all that code only needs to
  be run once.
 |_load config inside `get_client()` once.
 |_move session token creation into a new util func `init_api_keys()` and
  also call it from `get_client()` factory; toss in an ex. toml section
  config to the doc string.
- define `_venue_urls: dict[str, str]` (content taken from old static
  `.venue_sesh` dict) at module level and feed them as `base_url: str`
  inputs to the client create loop.
- adjust all call sigs in httpx-sesh-using methods, namely just
  `._api()`.
- do a `.exch_info()` call in `get_client()` to cache the symbology
  set.

Unrelated changes for various other outstanding buggers:
- to get futures feeds correctly loading when selected
  from search (like 'XMRUSDT.USDTM.PERP'), expect a `MktPair` input to
  `Client.bars()` such that the exact venue-key can be looked up (via
  a new `.pair2venuekey()` meth) and then passed to `._api()`.
- adjust `.broker.open_trade_dialog()` to failover to paper engine when
  there's no `api_key` key set for the `subconf` venue-key.
2025-02-11 16:27:28 -05:00
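The session-alloc pattern in sketch form (table contents illustrative; the real factory also loads config, inits api keys and caches symbology via `.exch_info()`):

```python
from contextlib import AsyncExitStack
import httpx

# hypothetical venue -> api base-url table
_venue_urls: dict[str, str] = {
    'spot': 'https://api.binance.com',
    'usdtm_futes': 'https://fapi.binance.com',
}

async def get_client() -> tuple[dict, AsyncExitStack]:
    stack = AsyncExitStack()
    venue_sesh: dict[str, httpx.AsyncClient] = {}
    for venue, base_url in _venue_urls.items():
        # each venue gets its own http client-session, all torn
        # down together via the exit stack.
        venue_sesh[venue] = await stack.enter_async_context(
            httpx.AsyncClient(base_url=base_url)
        )
    return venue_sesh, stack
```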
Nelson Torres f479252d26 Added note to exception when missing field in SpotPair class 2025-02-11 16:27:28 -05:00
Nelson Torres 033ef2e35e Added new fields to SpotPair class in venues 2025-02-11 16:27:28 -05:00
Tyler Goodlet 2cdece244c binance: raise `NoData` on null hist arrays
Like we do with other history backends to indicate lack of a data set.
This avoids any raise that will bring down the backloader task with
some downstream error.

Raise a `ValueError` on no time index for now.
2025-02-11 16:27:28 -05:00
Tyler Goodlet 018694bbdb Woops, `data` can be an empty list XD 2025-02-11 16:27:28 -05:00
Tyler Goodlet 128a2d507f Woops, fix missing `api_url` ref in error log 2025-02-11 16:27:28 -05:00
Tyler Goodlet 430650a6a7 Change type-annots to use `httpx.Response` 2025-02-11 16:27:28 -05:00
Tyler Goodlet 1da3cf5698 Port `kucoin` backend to `httpx` 2025-02-11 16:27:28 -05:00
Tyler Goodlet a348603fc4 Port `kraken` backend to `httpx` 2025-02-11 16:27:28 -05:00
goodboy 86047824d8 Merge pull request '`.brokers.ib` random fixes-n-improvements from various other dev branches..' (#27) from ib_refinements into gitea_feats
Merged-in: #27
2025-02-11 21:26:20 +00:00
Tyler Goodlet cb92abbc38 ib: add connect status info emit 2025-02-11 14:56:17 -05:00
Tyler Goodlet 70332e375b ib: `.api` mod and log-fmt cleaning
About time we tidy'd a buncha status logging in this backend..
particularly for boot-up where there's lots of client-try-connect poll
looping with account detection from the user config.

`.api.Client` pprint and logging fmt improvements:
- add `Client.__repr__()` which shows the minimally useful set of info
  from the underlying `.ib: IB` as well as a new `.acnts: list[str]`
  of the account aliases defined in the user's `brokers.toml`.
- mk `.bars()` define a comprehensive `query_info: str` with all the
  request deats but only display if there's a problem with the response
  data.
- mk `.get_config()` report both the config file path and the acnt
  aliases (NOT the actual account #s).
- move all `.load_aio_clients()` client poll loop requests to
  `log.runtime()` statuses, only falling through to a `.warning()` when
  the loop fails to connect the client to the spec-ed API-gw addr, and
 |_ don't allow loading accounts for which the user has not defined an
    alias in `brokers.toml::[ib]`; raise a value-error in such cases
    with a message indicating how to mod the config.
 |_ only `log.info()` about acnts if some were loaded..

Other mod logging de-noising:
- better status fmting in `.symbols.open_symbol_search()` with
  `repr(Client)`.
- for `.feed.stream_quotes()` first quote reporting use `.runtime()`.
2025-02-11 14:56:17 -05:00
Tyler Goodlet 4940aabe05 ib: warn about mkt precision cuckups that `Contract`s clearly deliver wrong.. 2025-02-11 14:56:17 -05:00
Tyler Goodlet 4f9998e9fb ib: mask out trade and vlm rates for now 2025-02-11 14:56:17 -05:00
Tyler Goodlet c92a236196 ib: more trade record edge case handling
- timestamps came as `'date'`-keyed from 2022 and before but now are
  `'datetime'`..
- some symbols seem to have no commission field, so handle that..
- when no `'price'` field found return `None` from `norm_trade()`.
- add a warn log on mid-fill commission updates.
2025-02-11 14:56:17 -05:00
goodboy e4cd1f85f6 Merge pull request 'pyqt6' (#3) from pyqt6 into gitea_feats
Reviewed-on: #3 (well by fomo anyway..)
2025-02-11 17:25:03 +00:00
Tyler Goodlet 129cf58d41 Bump deps for Py3.12, go PyQt6, tweak ruff rules
Code base is already ported for `Qt6` so this removes the pyqt5 dep,
adds latest pyqt6 as well as buncha other updates:

- add `xonsh` and ptk as dev deps for those of us using wacky shells ;P
- bump compiled deps as needed for python 3.12 (`numpy`, `numba`)
- add `httpx` and drop `asks` since the latter is zombied and not compat
  with other libs on 3.12.
- add `ruff` linting ignore rules for the new `.ui.qt` shim mod layer.
- few other deps updates to latest versions.
- add in the `keywords` and `classifiers` sections from the old
  `setup.py`.
2024-05-20 11:07:27 -04:00
Tyler Goodlet 1fd8654ca5 Port all `.ui*` submods to new `.ui.qt` imports
This also officially moves the code base to using `PyQt6` including all
necessary reference changes and enum namespace path moves.

Also includes a small `.ui.order_mode` fix to cancel any
`Order.price <= 0` and a name error fix with logging using `msg`,
which is already used for the input order msg..
2024-05-01 14:33:10 -04:00
Tyler Goodlet d0170982bf Add `piker.ui.qt` as a `PyQt6` shim module
For the future, like if we ever get a `PyQt7` (or wtv else..), add
a module which allows changing Qt binding lib imports from one spot for
all other `.ui` submodules. In some sense this is like a shoddier, less
dynamic version of how `pyqtgraph.Qt.__init__.py` supports multiple
libs; it might actually make sense eventually to instead import from
their shim layer instead?

Included is a draft attempt at exposing a bunch of enums under
custom names:
- while the specific grouping of values seem to always stay consistent,
  the root enum's seem to almost always get moved around in the `PyQtX`
  module namespace.
- changing groupings and/or each top level enum's ns location can more
  simply be changed/re-orged from one spot.
- allows `.ui` consumer code to use a name more relevant to `piker`'s
  usage of wtv UI component is being configured.
2024-05-01 14:30:18 -04:00
Tyler Goodlet 821e73a409 Use a `unit_prefix: str` (like u or $) on health bar 2024-05-01 14:09:39 -04:00
Tyler Goodlet 3d03781810 Impl a sane (with nesting) `.types.Struct.pformat()`
Such that our internal structs can be pretty printed with indented and
type-hinted fields, AND for nested `Struct`-fields call `.pformat()` but
avoiding any recursion errors using `pprint.saferepr()`. Add
a `._sin_props()` iterator over the non-property fields; use it for
`dict` casting when called with `.to_dict(include_non_members=False)`.

Actually, we should also probably figure out how to only pprint like
when required by the user in a REPL or log msg by context-selectively
`pprint.PrettyPrinter` right? Also, if we can generalize decently enough
it'd be cool to maybe patch this in as a util to upstream `msgspec`?
2024-01-17 15:50:27 -05:00
Tyler Goodlet 83d1f117a8 Always cancel (loaded) zero-priced orders
Ran into this trading a peenee where a dark got (mistakenly) submitted
at a price of 0 and then that consecutively broke upstream allocator/pp
code due to a divide-by-zero.. So instead always check for a zero-price
(since that should never ever be valid in any market) and instead cancel
any such order in the EMS and return `None` so that upstream callers can
ignore it without crash handling.
2024-01-17 10:29:43 -05:00
Tyler Goodlet e4ce79f720 Delegate `.toolz.open_crash_handler()` to `tractor.devx`
Means we can drop `.toolz.debug` module outright.
2024-01-16 10:26:38 -05:00
Tyler Goodlet 264246d89b Fix `brokers.toml` load for `kraken` backend 2024-01-10 17:53:15 -05:00
Tyler Goodlet 7c96c9fafe Just warn log on mismatched `MktPair` in paper eng 2024-01-10 17:52:50 -05:00
Tyler Goodlet 52b349fe79 Always reload shm data before annotating gaps, so they line up.. 2024-01-09 15:55:16 -05:00
Tyler Goodlet 6959429af8 Factor gap annotating into new `markup_gaps()`
Since we definitely want to markup gaps that are both data-errors and
just plain old venue closures (time gaps), generalize the `gaps:
pl.DataFrame` loop in a func and call it from the `ldshm` cmd for now.

Some other tweaks to `store ldshm`:
- add `np.median()` failover when detecting (ohlc) ts sample rate.
- use new `tsp.dedupe()` signature.
- differentiate between sample-period size gaps and venue closure gaps
  by calling `tsp.detect_time_gaps()` with diff size thresholds.
- add todo for backfilling null-segs when detected.
2024-01-04 11:01:21 -05:00
Tyler Goodlet 05f874001a Ignore `ContextCancelled`s from non-mngr requests
Since service daemon actors may be cancelled remotely by clients (who
maybe also requested said daemon-actor's spawn in the first place) we
specifically ignore `tractor.ContextCancelled`s from the `ctx.wait()`
inside `Services.start_service_task()` to avoid crashing the service
mngr, and thus for now `pikerd`, (which **does** happen now due to
updated and more explicit remote cancellation semantics implemented in
`tractor`) since the `.canceller` field is not going to match the
`pikerd` uid in such cases!

This explicit check makes sense since the `Services` mngr is built to
allow remote requests to "spawn-n-supervise service actors" where the
services can remain persistent but also be cancelled later as requested. We
may want to consider only allowing cancellation by actors who requested
spawn in the future tho?
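One way to read the guard described above (`pikerd_uid` is
a hypothetical stand-in var, `.wait()` is named per this commit msg):

    try:
        await ctx.wait()
    except tractor.ContextCancelled as cce:
        # a non-`pikerd` canceller implies a legit remote client
        # request; don't crash the service mngr over it
        if cce.canceller == pikerd_uid:
            raise
        log.warning(f'Service cancelled by remote {cce.canceller}')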

Also change to more explicit imports to `tractor` types for annots
throughout the sub-pkg.
2024-01-04 10:06:42 -05:00
Tyler Goodlet fc216d37de Drop `__all__` import style from `.services` 2024-01-04 10:05:53 -05:00
Tyler Goodlet 03e429abf8 Extend `enable_modules` from input `tractor_kwargs`
Since certain actors (even if client-like) will want to augment their
module set to provide remote features (such as our new rc annotation
msg-prot for `Qt`-chart actors) we need to ensure we merge in any input
`enable_modules: list[str]` to the value passed to the underlying
`tractor` spawning api. Previously we were passing `_root_modules` as
this value by name, but now instead we simply `list.extend()` that into
whatever is in the `kwargs` both in `open_piker_runtime()` and
`open_pikerd()`.
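i.e. something along the lines of (kwargs dict var name assumed):

    # merge any caller-provided module set with the runtime's root set
    enable_modules: list[str] = list(_root_modules)
    enable_modules.extend(
        tractor_kwargs.pop('enable_modules', []),
    )
    tractor_kwargs['enable_modules'] = enable_modules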
2024-01-04 09:59:15 -05:00
Tyler Goodlet 7ae7cc829f `tsp`: on backfill, do a smart retry on a `NoData`
Presuming the data provider gives us a config with a `frame_types: dict`
(indicating frame sizes per query/request) we try to be clever and
decrement our submitted `end_dt: DateTime` based on it.. hoping for the
best on the next attempt.
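A hedged sketch of the retry decrement (dict key / helper names assumed):

    try:
        array = await get_hist(timeframe, end_dt=end_dt)
    except NoData:
        # shrink the next query's end by the provider's advertised
        # per-request frame size for this timeframe
        frame_size = config['frame_types'][timeframe]  # pendulum.Duration
        end_dt -= frame_size  # hope for the best on the next attempt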
2024-01-03 19:49:41 -05:00
Tyler Goodlet b23d44e21a ib; return `None` on empty bars frame resp so as to trigger raising `NoData` in the caller 2024-01-03 18:16:48 -05:00
Tyler Goodlet 2669db785c Workaround `binance`'s latest API schema bs..
Apparently publishing futures contracts that aren't yet trading AND
changing their contract type `str` format/schema was necessary (such
that there's a f@#$in space in it..)?

I honestly have no idea where they found their "data engineers" XD

TO CHERRY to #520
2024-01-03 17:50:09 -05:00
Tyler Goodlet d3e7b5cd0e Formalize rc `redraw()` msg-endpoint
So now a chart rc client can ask to invoke the new
`Viz.reset_graphics()` by timeframe and fqme Bo. This is handy when
doing underlying (real time or tsp) edits and you want to make the UI
reflect the changes incrementally.

Impl deatz:
- tweak the msg schema to use a `cmd: str` which normally maps to
  (something similar to) the UI method name instead of `annot` and now
  offer 3 such "commands": 'redraw', 'remove', 'SelectRect' (see the
  msg sketch after this list).
- impl `AnnotCtl.redraw()` which sends the underlying `msg: dict` on the
  correct `tractor.MsgStream` ipc instance.
  - since ipc-stream lookups now happen in multiple client methods impl
    a private `._get_ipc()` to do the error raise on unknown fqmes.
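e.g. the underlying msg is now of the shape (key names besides `cmd`
are assumptions):

    msg: dict = {
        'cmd': 'redraw',  # or 'remove' / 'SelectRect'
        'fqme': fqme,
        'timeframe': 60,
    }
    await ipc.send(msg)  # ipc: tractor.MsgStream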
2024-01-03 17:33:15 -05:00
Tyler Goodlet 9be29a707d Make `ib` failed history requests more debug-able
Been hitting wayy too many cases like this so, finally put my foot down
and stuck in a buncha helper code to figure out why (especially for
gappy ass pennies) this can/is happening XD

inside the `.ib.api.Client()`:
- in `.bars()` pack all `.reqHistoricalDataAsync()` kwargs into a dict such that
  when/if we rx a blank frame we can enter pdb and make sync calls using
  a little `get_hist()` closure from the REPL.
  - tidy up type annots a bit too.
- add a new `.maybe_get_head_time()` meth which will return `None` when
  the dt can't be retrieved for the contract.

inside `.feed.open_history_client()`:
- use new `Client.maybe_get_head_time()` and only do `DataUnavailable`
  raises when the requested `end_dt` is actually earlier than the
  contract's head datetime.
- when `get_bars()` returns a `None` and the `head_dt` is not earlier
  than the `end_dt` submitted, raise a `NoData` with more `.info: dict`.
- deliver a new `frame_types: dict[int, pendulum.Duration]` as part
  of the yielded `config: dict`.
- in `.get_bars()` always assume a `tuple` returned from
  `Client.bars()`.
  - return a `None` on empty frames instead of raising `NoData` at this
    call frame.
- do more explicit imports from `pendulum` for brevity.

inside `.brokers._util`:
- make `NoData` take an `info: dict` as input to allow backends to pack
  in empty frame meta-data for (eventual) use in the tsp back-filling
  layer (sketch below).
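i.e. roughly:

    # .brokers._util - sketch of the extended exc type
    class NoData(Exception):
        def __init__(
            self,
            *args,
            info: dict | None = None,
        ) -> None:
            super().__init__(*args)
            # backend-packed (empty frame) meta-data for
            # (eventual) use in the tsp back-filling layer
            self.info: dict = info or {}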
2023-12-29 21:59:59 -05:00
Tyler Goodlet c82ca812a8 Pass display state table to interaction handlers
This took a teensie bit of reworking in some `.ui` modules
more or less in the following order of functional dependence:

- add a `Ctl-R` kb-binding to trigger a `Viz.reset_graphics()` in
  the kb-handler task `handle_viewmode_kb_inputs()`.
  - call the new method on all `Viz`s (& for all sample-rates) and
    `DisplayState` refs provided in a (new input)
    `dss: dict[str, DisplayState]` table, which was originally init-ed
    from the multi-feed display loop (so orig in `.graphics_update_loop()`
    but now provided as an input to that func, see below..)
- `._interaction`: allow binding in `async_handler()` kwargs (via
  a `functools.partial`) passed to `ChartView.open_async_input_handler()`
  such that arbitrary inputs to our kb+mouse handler funcs can accept
  "wtv we desire".
  - use ^ to bind in the aforementioned `dss` display-state table to
    said handlers!
- define the `dss` table (as mentioned) inside `._display.display_symbol_data()`
  and pass it into the update loop funcs as well as the newly augmented
  `.open_async_input_handler()` calls,
  - drop calling `chart.view.open_async_input_handler()` from the
    `.order_mode.open_order_mode()`'s enter block and instead factor it
    into the caller to support passing the `dss` table to the kb
    handlers.
  - comment out the original history update loop handling of forced `Viz`
    redraws entirely since we now have a manual method via `Ctl-R`.
  - now, just update the `._remote_ctl.dss: dict` with this table since
    we want to also provide rc for **all** loaded feeds, not just the
    currently shown one/set.
- docs, naming and typing tweaks to `._event.open_handlers()`
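The binding trick from the `._interaction` bullet is roughly (exact
kwarg plumbing into the open-handler call is assumed):

    from functools import partial

    # bind the display-state table into the kb handler so it can
    # accept "wtv we desire" beyond the stock event inputs
    kb_handler = partial(
        handle_viewmode_kb_inputs,
        dss=dss,
    )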
2023-12-28 21:06:06 -05:00
Tyler Goodlet a7ad50cf8f Add `Viz.reset_graphics()` for "force re-render"
Since we're now using it in multiple layers it probably makes sense to
impl and wrap it more correctly / publicly. The main (recent) use case
is editing an underlying time series and then wanting to refresh the
graphics layers to reflect the changes in a chart. Part of this also
obviously includes wiping the y-range mx/mn cache.

Also ensure that `force_redraw` is proxying through to any `BarItems`
via the new `render_baritems()` func kwarg even when switching between
downsampled-line vs. bars modes.
2023-12-28 18:00:26 -05:00
Tyler Goodlet 661805695e Reimpl axis dt label contents gen with `polars`
Since `polars` has a more sane set of (time-zone aware) datetime APIs it
makes more sense and is definitely no slower than the previous `numpy`
impl. Also, actually use the sample-rate specific formats defined in
`DynamicDateAxis.tick_tpl: dict[int, str]`, finally using the new
`Viz.time_step()` property.
2023-12-28 11:08:29 -05:00
Tyler Goodlet 3de7c9a9eb Add `Viz.time_step()`, the sample step-size in time
Since we end up needing the actual (OHLC sampled) time step info (at
least in seconds) for various purposes (in this specific follow up use
case to determine sample-rate specific `datetime` format strings for
a charted time series x-axis label), allow always reading it from the
viz with the presumption (at least for now) the underlying data-frame
will have an epoch `'time'` col/field.
2023-12-28 11:02:06 -05:00
Tyler Goodlet 59536bd284 Use `import <name> as <name>,` in `.tsp`
Thanks to oremanj in the `trio` room for this hot style tip which i much
prefer since it means less LOC and fewer places to change sub-pkg name
exports!
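i.e. the re-export style in question (module / func names here are
merely illustrative):

    # .tsp/__init__.py
    from ._anal import (
        dedupe as dedupe,
        detect_time_gaps as detect_time_gaps,
    )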

Also drop expecting a `gaps` frame output from `dedupe()`.
2023-12-28 10:58:22 -05:00
Tyler Goodlet 5702e422d8 Drop gap detection from `dedupe()`, expect caller to handle it 2023-12-28 10:40:08 -05:00
Tyler Goodlet 07331a160e Expose "bar gap margin" as `.ui._formatters.BGM: float` 2023-12-28 10:37:20 -05:00
Tyler Goodlet 0d18cb65c3 Lul, actually detect gaps for 1s OHLC
Turns out we were always filtering to time gaps longer than a day smh..
Instead tweak `detect_time_gaps()` to only return venue-gaps when
a `gap_dt_unit: str` is passed and pass `'days'` (like it was by default
before) from `dedupe()` though we should really pass in an actual venue
gap duration in the future.
2023-12-27 16:55:00 -05:00
Tyler Goodlet ad565936ec Factor UI-rc loop into ctx-free func
In theory the `async for msg` loop can be re-purposed without having to
always call `remote_annotate()` so factor it into a new
`serve_rc_annots()` and then just call it from the former (for now) with
the wrapping `try:` block outside to delete per-client-ctx annotation
instance sets. Also, use some type aliases instead of repeatedly
defining the same complex `dict`-table defs B)
2023-12-26 20:56:04 -05:00
Tyler Goodlet d4b07cc95a `ui._lines`: more direct Qt imports for typing 2023-12-26 20:49:07 -05:00
Tyler Goodlet 1231c459aa Track data feed subscribers using a new `Sub(Struct)`
In prep for supporting reverse-ipc connect-back to UI actors from
middle-ware systems (for the purposes of triggering data-view canvas
re-renders and built-in tsp annotations), add a new struct type to
better generalize the management of remote feed subscriptions. Include
a `Sub.rc_ui: bool` for now (with nearby todo-comment) and expose an
`allow_remote_ctl_ui: bool` through the feed endpoints to help drive
/ prep for all that ^

Rework all the sampler tasks to expect the `Sub`'s new iface:

- split up the `Sub.ipc: MsgStream` and `.send_chan` as separate fields
  since we're handling the throttle case in separate
  `sample_and_broadcast()` logic blocks anyway and avoids needing to
  monkey-patch on the `._ctx` malarky..
- explicitly provide the optional handle to the `_throttle_cs:
  CancelScope` again for the case where throttling/event-downsampling is
  requested.
- add `_FeedsBus.subs_items()` as a public iterator.
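Roughly the new sub iface then looks like (field types per the bullets
above, `Struct` import path assumed):

    import trio
    import tractor
    from piker.types import Struct  # path assumed

    class Sub(Struct):
        ipc: tractor.MsgStream
        send_chan: trio.MemorySendChannel | None = None  # throttle case
        rc_ui: bool = False  # allow remote-ctl of the sub's UI actor?
        _throttle_cs: trio.CancelScope | None = None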
2023-12-26 20:48:06 -05:00
Tyler Goodlet 88f415e5b8 Cannot delete when the rect has no scene.. 2023-12-26 17:36:34 -05:00
Tyler Goodlet d9c574e291 Add `.sort()` support to `dedupe()` 2023-12-26 17:35:38 -05:00
Tyler Goodlet a86573b5a2 Fix .parquet filenaming..
Apparently `.storage.nativedb.mk_ohlcv_shm_keyed_filepath()` was always
kinda broken if you passed in a `period: float` with an actual non-`int`
value into the format string? Fixed it to strictly cast to `int()` before
str-ifying so that you don't get weird `60.0s.parquet` in there..
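i.e. something like (the filename template here is hypothetical):

    # strictly cast before str-ifying so a `period: float` can't
    # leak a '60.0s' into the keyed filepath
    period_s: int = int(period)
    path: str = f'{fqme}.{period_s}s.parquet'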

Further this rejigs the `store ldshm` gap correction-annotation loop to,
- use `StorageClient.write_ohlcv()` instead of hackily re-implementing
  it.. now that problem from above is fixed!
- use a `needs_correction: bool` var to determine if gap markup and
  de-duplicated data should be pushed to the shm buffer,
- go back to using `AnnotCtl.add_rect()` for all detected gaps such that
  they all persist (and thus are shown together) until the client
  disconnects.
2023-12-26 17:14:26 -05:00
Tyler Goodlet 1d7e97a295 Woops, need to use `.push_async_callback()`
For non-full-`.__aexit__()` handlers you need this method instead (facepalm).
Also create and assign the `AnnotCtl._annot_stack: AsyncExitStack` just
before yielding the client since it's not needed prior and ensures annot
removal happens **before** ipc teardown.
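i.e. the stdlib distinction in play (callback arg names per prior
commits):

    from contextlib import AsyncExitStack

    stack = AsyncExitStack()
    # for a plain async callable (vs. a full async CM with
    # a real `.__aexit__()`) this is the method you want:
    stack.push_async_callback(actl.remove, aid)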
2023-12-24 15:08:44 -05:00
Tyler Goodlet bbb98597a0 Add annot removal via client methods or ctx-mngr
Since leaking annots to a remote `chart` actor probably isn't a thing we
want to do (often), add a removal/deletion handler block to the
`remote_annotate()` ctx which can be triggered using a `{rm_annot: aid}`
msg.

Augment the `AnnotCtl` with,
- `.remove()` which sends said msg (from above) and returns a `bool`
  indicating success.
- add an `.open_rect()` acm which does the `.add_rect()` / `.remove()`
  calls underneath for use in scope oriented client usage (see the
  sketch after this list).
- add a `._annot_stack: AsyncExitStack` which will always have any/all
  non-`.open_rect()` calls to `.add_rect()` register removal on client
  teardown, to avoid leaking annots when a client finally disconnects.
- comment out the `.modify()` meth idea for now.
- rename all `Xstream` var-tags to `Xipc` names.
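The `.open_rect()` pairing is roughly:

    from contextlib import asynccontextmanager as acm

    @acm
    async def open_rect(self, **rect_kwargs):
        # scope-oriented add/remove pairing per the bullets above
        aid: int = await self.add_rect(**rect_kwargs)
        try:
            yield aid
        finally:
            await self.remove(aid)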
2023-12-24 14:42:12 -05:00
Tyler Goodlet e33d6333ec Woops, remove the label-proxy, not the widget.. 2023-12-24 13:59:16 -05:00
Tyler Goodlet 263a5a8d07 Add `SelectRect.delete()` for permanent scene dealloc 2023-12-23 13:37:47 -05:00
Tyler Goodlet a681b2f0bb Drop passing `bus` to `tsp.manage_history()` in feed allocator 2023-12-22 21:44:38 -05:00
Tyler Goodlet 5b0c94933b `.config`: don't hack the user config dir if user is 'root' and sudo was NOT used.. 2023-12-22 21:41:51 -05:00
Tyler Goodlet 61e52213b2 Oof, fix no-tsdb-entry since needs full backfill case!
Got borked by the logic re-factoring to get more conc going around
tsdb vs. latest frame loads with nested nurseries. So, repair all that
such that we can still backfill symbols previously not loaded as well as
drop all the `_FeedBus` instance passing to subtasks where it's
definitely not needed.

Toss in a pause point around sampler stream `'backfilling'` msgs as well
since there seems to be a weird ctx-cancelled propagation going on
when a feed client disconnects during backfill and this might be where
the src `tractor.ContextCancelled` is getting bubbled from?
2023-12-22 21:34:31 -05:00
Tyler Goodlet b064a5f94d A working remote annotations controller B)
Obvi took a little `.ui` component fixing (as per prior commits) but
this is now a working PoC for gap detection and markup from a remote
(data) non-`chart` actor!

Iface and impl deats from `.ui._remote_ctl`:
- add new `open_annot_ctl()` mngr which attaches to all locally
  discoverable chart actors, gathers annot-ctl streams per fqme set, and
  delivers a new `AnnotCtl` client which allows adding annotation
  rectangles via a `.add_rect()` method.
  - also template out some other soon-to-get methods for removing and
    modifying pre-existing annotations on some `ChartView` 💥
- ensure the `chart` CLI subcmd starts the (`qtloops`) guest-mode init
  with the `.ui._remote_ctl` module enabled.
- actually use this stuff in the `piker store ldshm` CLI to submit
  markup rects around any detected null/time gaps in the tsdb data!

Still lots to do:
- probably colorization of gaps depending on if they're venue
  closures (aka real mkt gaps) vs. "missing data" from the backend (aka
  timeseries consistency gaps).
- run gap detection and markup as part of the std `.tsp` sub-sys
  runtime such that gap annots are a std "built-in" feature of
  charting.
- support for epoch time stamp AND abs-shm-index rect x-values
  (depending on chart operational state).
2023-12-22 15:19:20 -05:00
Tyler Goodlet e7fa841263 Pass scene-points to `.select_box` as per prior comments
As mentioned in a prior commit this was the (seemingly, and so far) only
way to make our `.select_box` annotator's shift-click rect work properly
(and the same as the stock zoom-scale box): by adopting the code around
`ViewBox.rbScaleBox` (which we now also disable). That means also passing
the scene coords to the `SelectRect.set_scen_pos()`. Also add in the proper `ev:
pyqtgraph.GraphicsScene.mouseEvents.MouseDragEvent` so we can actually
figure out wut the hell all this pg custom mouse-event stuff is XD
2023-12-22 12:09:08 -05:00
Tyler Goodlet 1f346483a0 Always pass full `ShmArray._array` buf to `ContentsLabels` updates so the label can be used outside the "backfilled-valid" range 2023-12-22 12:06:55 -05:00
Tyler Goodlet d006ecce7e Fix `._pg_overrides` import cycle caused by our `Axis` override 2023-12-22 12:05:18 -05:00
Tyler Goodlet 69368f20c2 Finally fix our `SelectRect` for use with cursor..
Turns out using the `.setRect()` method was the main cause of the issue
(though still don't really understand how or why) and this instead
adopts verbatim the code from `pg.ViewBox.updateScaleBox()` which uses
a scaling transform to set the rect for the "zoom scale box" thingy.

Further add a shite ton more improvements and interface tweaks in
support of the new remote-annotation control msging subsys:
- re-impl `.set_scen_pos()` to expect `QGraphicsScene` coordinates (i.e.
  passed from the interaction loop) and pass scene `QPointF`s from
  `ViewBox.mouseDragEvent()` using the `MouseDragEvent.scenePos()` and
  friends; this is required to properly use the transform setting
  approach to resize the select-rect as mentioned above.
- add `as_point()` converter to maybe-cast python `tuple[float, float]`
  inputs (prolly from IPC msgs) to equivalent `QPointF`s (see the sketch
  after this list).
- add a ton more detailed Qt-obj-related typing throughout our deriv.
- call `.add_to_view()` from init so that wtv view is passed in during
  instantiation is always set as the `.vb` after creation.
- factor the (proxy widget) label creation into a new `.init_label()`
  so that both the `set_scen/view_pos()` methods can call it and just
  generally decouple rect-pos mods from label content mods.
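The `as_point()` converter is roughly:

    from PyQt6.QtCore import QPointF

    def as_point(
        pair: QPointF | tuple[float, float],
    ) -> QPointF:
        # maybe-cast tuple inputs (prolly from IPC msgs)
        # to an equivalent Qt scene/view point
        if isinstance(pair, QPointF):
            return pair
        return QPointF(pair[0], pair[1])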
2023-12-22 11:47:31 -05:00
Tyler Goodlet 31fa0b02f5 Append any `enable_modules` specc-ed in the chart guest-mode runner 2023-12-21 20:40:00 -05:00
Tyler Goodlet 5a60974990 Use explicit `.data.feed` import of `tractor.trionics` 2023-12-21 20:26:45 -05:00
Tyler Goodlet 8d324acf91 First (untested) draft remote annotation ctl API
Since we can and want to eventually allow remote control of pretty much
all UIs, this drafts out a new `.ui._remote_ctl` module with a new
`@tractor.context` called `remote_annotate()` which simply starts a msg
loop which allows for (eventual) initial control of a `SelectRect`
through IPC msgs.

Remote controller impl deats:
- make `._display.graphics_update_loop()` set a `._remote_ctl._dss:
  dict` for all chart actor-global `DisplayState` instances which can
  then be controlled from the `remote_annotate()` handler task.
- also stash any remote client controller `tractor.Context` handles in
  a module var for broadband IPC cancellation on any display loop
  shutdown.
- draft a further global map to track graphics object instances since
  likely we'll want to support remote mutation where the client can use
  the `id(obj): int` key as an IPC handle/uuid.
- just draft out a client-side `@acm` for now: `open_annots_client()` to
  be filled out in upcoming commits.

UI component tweaks in support of the above:
- change/add `SelectRect.set_view_pos()` and `.set_scene_pos()` to allow
  specifying the rect coords in either of the scene or viewbox domains.
  - use these new apis in the interaction loop.
- add a `SelectRect.add_to_view()` to avoid having annotation client
  code knowing "how" a graphics obj needs to be added and can instead
  just pass only the target `ChartView` during init.
- drop all the status label updates from the display loop since they
  don't really work all the time, and probably it's not a feature we
  want to keep in the longer term (over just console output and/or using
  the status bar for simpler "current state / mkt" infos).
  - allows a bit of simplification of `.ui._fsp` method APIs to not pass
    around status (bar) callbacks as well!
2023-12-19 15:36:54 -05:00
Tyler Goodlet ab84303da7 Drop `SelectRect.mouse_drag_released()` since it was a dumb method 2023-12-18 20:32:17 -05:00
Tyler Goodlet 659649ec48 Bah, fix nursery indents for maybe tsdb backloading
Can't ref `dt_eps` and `tsdb_entry` if they don't exist.. like for 1s
sampling from `binance` (which dne). So make sure to add a better logic
guard and only open the final backload nursery if we actually need to
fill the gap between latest history and where tsdb history ends.

TO CHERRY #486
2023-12-18 19:46:59 -05:00
Tyler Goodlet f7cc43ee0b Add pauses to `store anal/ldshm` only on bad segs
Particularly halting before maybe writing the repaired timeseries
history in `store anal` to optionally allow the user to avoid writing to
storage.
2023-12-18 11:56:57 -05:00
Tyler Goodlet f5dc21d3f4 Adjust all `.tsp` imports to use new sub-pkg
Also toss in a poll loop around the `hist_shm: ShmArray` backfill
read-check in the `.data.allocate_persistent_feed()` init to cope with
possible racy-ness from the increased tsdb history loading concurrency
now implemented.
2023-12-18 11:54:28 -05:00
Tyler Goodlet 4568c55f17 Create `piker.tsp` "time series processing" subpkg
Move `.data.history` -> `.tsp.__init__.py` for now as main pkg-mod
and `.data.tsp` -> `.tsp._anal` (for analysis).

Obviously follow commits will change surrounding codebase (imports) to
match..
2023-12-18 11:53:27 -05:00
Tyler Goodlet d5d68f75ea ib: only raise first quote timeout err after tries
Previously we were actually failing silently too fast instead of
actually trying multiple times (now we do, for 100 tries) before finally
raising any timeout in the final loop `else:` block.
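The loop shape is roughly (helper / exc names hypothetical):

    for _ in range(100):
        ticker = await get_quote(contract)  # hypothetical helper
        if ticker is not None:
            break
    else:
        # ONLY raise once every try has timed out
        raise TimeoutError(f'No first quote for {contract!r}!')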
2023-12-18 11:45:19 -05:00
Tyler Goodlet 1f9a497637 Fixup symcache annot for kucoin as well 2023-12-15 16:01:31 -05:00
Tyler Goodlet 40c5d88a9b Fixup symcache type annots; no more `Pair` type 2023-12-15 16:00:51 -05:00
Tyler Goodlet 8989c73a93 Move `iter_dfs_from_shms` into `.data.history`
Thinking about just moving all of that module (after a content breakup)
to a new `.piker.tsp` which will mostly depend on the `.data` and
`.storage` sub-pkgs; the idea is to move biz-logic for tsdb IO/mgmt and
orchestration with real-time (shm) buffers and the graphics layer into
a common spot for both manual analysis/research work and better
separation of low level data structure primitives from their higher
level usage.

Add a better `data.history` mod doc string in prep for this move
as well as clean out a bunch of legacy commented cruft from the
`trimeter` and `marketstore` days.

TO CHERRY #486 (if we can)
2023-12-15 15:53:02 -05:00
Tyler Goodlet 3639f360c3 Reactivate forced viz updates from sampler broadcasts in hist display loop 2023-12-15 13:59:19 -05:00
Tyler Goodlet afd0781b62 Add (shm) abs index to `ContextLabel` 2023-12-15 13:57:10 -05:00
Tyler Goodlet ba154ef413 ib: don't bother with recursive not-enough-bars queries for now, causes more problems then it solves.. 2023-12-15 13:56:42 -05:00
Tyler Goodlet 97e2403fb1 Rework backfiller and null-segment task conc
For each timeframe open a sub-nursery to do the backfilling + tsdb load
+ null-segment scanning in an effort to both speed up load time (though
we need to reverse the current order to really make it faster rn since
moving to the much faster parquet file backend) and do concurrent
time-gap/null-segment checking of tsdb history while mrf (most recent
frame) history is backfilling.

The details are more or less just `trio` related task-func composition
tricks and a reordering of said funcs for optimal startup latency.
Also commented the `back_load_from_tsdb()` task for now since it's
unused.
2023-12-15 13:11:00 -05:00
Tyler Goodlet a4084d6a0b Bleh, fix another off-by-one issue in `np.argwhere()`
Apparently it returns the index of the prior zero-row (prolly since we
do the backward difference) so ensure `fi_zgaps += 1`..

Also fix remaining edge case handling when there's only 2 zero-segs
which was borked after a refactor to the special case blocks (like
a single zero row) prior to the `absi_zsegs` building loop AND make sure
to always return abs indices OUTSIDE the zero seg, i.e. the indices of
the non-zero row just before and just after so that the history
backfiller can use non-zero timestamps to generate range datetimes for
backend frame queries.

Add much more detailed doc-comments with a small ascii diagram to
explain how all these somewhat subtle vec ops work. Also toss in some
sanity checks on the output indices to ensure they don't point to
zero (time) valued rows when used to read the frame.
2023-12-15 12:48:50 -05:00
Tyler Goodlet 83bdca46a2 Wrap null-gap detect and fill in async gen
Call it `iter_null_segs()` (for now?) and use in the final (sequential)
stage of the `.history.start_backfill()` task-func. Delivers abs,
frame-relative, and equiv time stamps on each iteration pertaining to
each detected null-segment to make it easy to do piece-wise history
queries for each.

Further,
- handle edge case in `get_null_segs()` where there is only 1 zeroed
  row value, in which case we deliver `absi_zsegs` as a single pair of
  the same index value and,
  - when this occurs `iter_null_segs()` delivers `None` for all the
    `start_` related indices/timestamps since all `get_hist()` routines
    (delivered by `open_history_client()`) should handle it as being a
    "get max history from this end_dt" type query.
- add note about needing to do time gap handling where there's a gap in
  the timeseries-history that isn't actually IN the data-history.
2023-12-13 18:29:06 -05:00
Tyler Goodlet c129f5bb4a Finally write a general purpose null-gap detector!
Using a bunch of fancy `numpy` vec ops (and ideally eventually extending
the same to `polars`) this is a first draft of `get_null_segs()`,
a `col: str` field-value-is-zero detector which filters to all zero-valued
input frame segments and returns the corresponding useful slice-indexes:
- gap absolute (in shm buffer terms) index-endpoints as
  `absi_zsegs` for slicing to each null-segment in the src frame.
- ALL abs indices of rows with zeroed `col` values as `absi_zeros`.
- the full set of the input frame's row-entries (view) which are
  null valued for the chosen `col` as `zero_t`.

Use this new null-segment-detector in the
`.data.history.start_backfill()` task to attempt to fill null gaps that
might be extant from some prior backfill attempt. Since
`get_null_segs()` should now deliver a sequence of slices for each gap
we don't really need to have the `while gap_indices:` loop any more, so
just move that to the end-of-func and warn log (for now) if all gaps
aren't eventually filled.

TODO:
-[ ] do the null-seg detection and filling concurrently from
  most-recent-frame backfilling.
-[ ] offer the same detection in `.storage.cli` cmds for manual tsp
  anal.
-[ ] make the graphics layer actually update correctly when null-segs
  are filled (currently still broken somehow in the `Viz` caching
  layer?)

CHERRY INTO #486
2023-12-13 15:26:33 -05:00
Tyler Goodlet c4853a3fee Drop inter-method NL 2023-12-13 09:27:23 -05:00
Tyler Goodlet f274c3db3b Import `np2pl()` from `.data.tsp`
Also toss in a todo for a timeseries search CLI cmd which can be handy
when doing offline store mgmt.
2023-12-13 09:25:44 -05:00
Tyler Goodlet b95932ea09 `.data.history`: run `.tsp.dedupe()` in backloader
In an effort to catch out-of-order and/or partial-frame-duplicated
segments, add some `.tsp` calls throughout the backloader tasks
including a call to the new `.sort_diff()` to catch the out-of-order
history cases.
2023-12-12 19:57:46 -05:00
Tyler Goodlet e8bf4c6e04 Return the `.len()` diff from `dedupe()` instead
Since the `diff: int` serves as a predicate anyway (when `0` no
duplicates were detected) we might as well just return it directly since
it's likely also useful for the caller when doing deeper anal.

Also, handle the zero-diff case by just returning early with a copy of
the input frame and a `diff=0`.
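A hedged usage sketch (the exact return layout is assumed here):

    deduped, diff = dedupe(df)
    if diff:  # 0 => no duplicates were detected
        await client.write_ohlcv(fqme, deduped)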

CHERRY INTO #486
2023-12-12 16:48:56 -05:00
Tyler Goodlet 8e4d1a48ed Bleh, fix ib's `Client.bars()` recursion..
Turns out this was the main source of all sorts of gaps and overlaps
in history frame backfilling. The original idea was that when a gap
causes not enough (1m) bars to be delivered (like over a weekend or
holiday) we just implicitly do another frame query to try and at
least fill out the default duration (normally 1-2 days). Doing the
recursion sloppily was causing all sorts of stupid problems..

It's kinda obvious now what was wrong in hindsight:
- always pass the sampling period (timeframe) when recursing
- adjust the logic to not be mutex with the no-data case (since it
  already is mutex..)
- pack to the `numpy` array BEFORE the recursive call to ensure the
  `end_dt: DateTime` is selected and passed correctly!

Toss in some other helpfuls:
- more explicit `pendulum` typing imports
- some masked out sorted-diffing checks (that can be enabled when
  debugging out-of-order frame issues)
- always error log about less-than time step mismatches since we should never
  have time-diff steps **smaller** than specified in the
  `sample_period_s`!
2023-12-12 16:19:21 -05:00
Tyler Goodlet b03eceebef data.tsp: drop masked `return` one liner 2023-12-11 20:11:42 -05:00
Tyler Goodlet f7a8d79b7b Add `NativeStorageClient._cache_df()` use it in `.write_ohlcv()` for caching on writes as well 2023-12-11 20:10:53 -05:00
Tyler Goodlet 49c458710e Move `numpy` <-> `polars` converters into `.data.tsp`
Yet again these are (going to be) generally useful in the data proc
layer as well as going forward with (possibly) moving the history and
shm rt-processing layer to apache (arrow or other) shared-ds
equivalents.
2023-12-11 17:53:31 -05:00
Tyler Goodlet b94582cb35 Move `dedupe()` to `.data.tsp` (so it has pals)
Includes a rename of `.data._timeseries` -> `.data.tsp` for "time series
processing", making it a public sub-mod; it contains a highly useful set
of data-frame and `numpy.ndarray` ops routines in various subsystems Bo
2023-12-11 16:24:27 -05:00
Tyler Goodlet 7311000846 Facepalm, set `was_deduped` as bool not the deduped frame.. 2023-12-11 13:18:10 -05:00
Tyler Goodlet e719733f97 Comment out overlap case block for now too? 2023-12-08 19:08:10 -05:00
Tyler Goodlet cb941a5554 BABOSO.. fix last history frame overlap slicing!
I guess since i started supporting the whole "allow a gap between
the latest tsdb sample and the latest retrieved history frame" the
overlap slicing has been completely borked XD where we've been sticking
in duplicate history samples and this has caused all sorts of
downstream time-series processing issues..

So fix that by ensuring whenever there IS an overlap between history in
the latest frame and the tsdb that we always prefer the latest frame's
data and slice OUT the tsdb's duplicate indices..

CHERRY TO #486
2023-12-08 18:56:38 -05:00
Tyler Goodlet 2d72a052aa Woops, make sure non-disti mode still works when maybe getting `pikerd` XD 2023-12-08 17:43:52 -05:00
Tyler Goodlet 2eeef2a123 Add `dedupe()` to help with gap detection/resolution
Think i finally figured out the weird issue with out-of-order OHLC
history getting jammed in the wrong place:
- gap is detected in parquet/offline ts (likely due to a zero dt or
  other gap),
- query for history in the gap is made BUT that frame is then inserted
  in the shm buffer **at the end** (likely using array int-entry
  indexing) which inserts it at the wrong location,
- later this out-of-order frame is written to the storage layer
  (parquet) and then is repeated on further reboots with the original
  gap causing further queries for the same frame on every history
  backfill.

A set of tools useful for detecting these issues and annotating them
nicely on chart part of this patch's intent:
- `dedupe()` will detect any dt gaps, deduplicate datetime rows and
  return the de-duplicated df along with gaps table.
- use this in both `piker store anal` such that we potentially
  resolve and backfill the gaps correctly if some rows were removed.
- possibly also use this to detect the backfilling error in logic at
  the time of backfilling the frame instead of after the fact (which
  would require re-writing the shm array from something like `store
  ldshm` and would be a manual post-hoc solution, not a fix to the
  original issue..)
2023-12-08 15:11:34 -05:00
Tyler Goodlet b6d2550f33 Add datetime col de-duplicator 2023-12-08 14:38:27 -05:00
Tyler Goodlet b9af6176c5 Factor `TimeseriesNotFound` to top level
TO CHERRY into #486
2023-12-07 12:31:14 -05:00
Tyler Goodlet dd0167b9a5 Make `fsp.cascade()` expect src/dst `Flume`s
Been meaning to this for a while, and there's still a few design
/ interface kinks (like `.mkt: MktPair` which should be better
generalized?) but this flips over all of the fsp chaining engine
to operate on the higher level `Flume` APIs via the newly cobbled
`Cascade` thinger..
2023-12-06 17:53:35 -05:00
Tyler Goodlet 9e71e0768f Define and pass a default `Flume._readonly: bool`
Allows opening with `.from_msg(readonly=False)` for write permissions
instead of the default which makes the underlying shm arrays readonly.
Also, make sure to pop the `ShmArray` field entries prior to
msg-ization, not sure how that worked
with the `Feed.flumes` equivalent..but?
2023-12-06 17:25:49 -05:00
Tyler Goodlet 6029f39a3f Allow `MktPair.from/to_msg()` to still do `.dst: str` for fsp flumes 2023-12-06 17:09:52 -05:00
Tyler Goodlet 656e2c6a88 fsp: intro a `Cascade` type that connects `Flume`s of streams 2023-12-05 16:59:07 -05:00
Tyler Goodlet b8065a413b ib: update ibc.ini from latest upstream template 2023-12-05 16:57:38 -05:00
Tyler Goodlet 9245d24b47 ib: add `.pause()` on symbol query overruns to aid in fixing the issue 2023-12-04 13:10:15 -05:00
Tyler Goodlet 22bd83943b .storage: support `store anal --pdb` flag 2023-12-04 13:00:33 -05:00
Tyler Goodlet b94931bbdd Fix `Portal.channel: Channel` attr name error 2023-12-04 13:00:04 -05:00
Tyler Goodlet 239c1c457e Sort fqme suggestions pre-print 2023-12-04 11:34:39 -05:00
Tyler Goodlet 24a54a7085 Add `TimeseriesNotFound` for fqme lookup failures
A common usage error is to run `piker anal mnq.cme.ib` where the
CLI-passed fqme is not actually fully-qualified (in this case missing an
expiry token) and we get an underlying `FileNotFoundError` from the
`StorageClient.read_ohlcv()` call. In such key misses, scan the existing
`StorageClient._index` for possible matches and report them via
a `raise ... from` on the new error, roughly as sketched below.
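(the matching heuristic and `._index` iteration here are purely
illustrative):

    try:
        df = await client.read_ohlcv(fqme)
    except FileNotFoundError as fnfe:
        # naive prefix-ish scan for possible fqme matches
        possibles: list[str] = sorted(
            key for key in client._index
            if fqme.split('.')[0] in key
        )
        raise TimeseriesNotFound(
            f'No tsdb entry for {fqme!r}!\n'
            f'Maybe you meant one of:\n{possibles}'
        ) from fnfe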

CHERRY into #486
2023-12-04 11:22:55 -05:00
Tyler Goodlet ebd1eb114e Port runtime init to new `tractor.Actor.reg_addrs` related changes 2023-11-21 15:18:52 -05:00
Tyler Goodlet 29ce8de462 Use new container image mentioned on IBC thread 2023-10-29 13:21:32 -04:00
Tyler Goodlet d3dab17939 order_mode: fix to avoid `Dialog.uuid` on null dialog.. 2023-10-20 13:57:52 -04:00
Tyler Goodlet cadc200818 Always ignore untracked-order error msgs from `brokerd` 2023-10-16 13:15:12 -04:00
Tyler Goodlet 363c8dfdb1 Default spec registrar set as empty addr list
Since it probably IS sane to just assume a root-actor-as-registrar
listening on the localhost as a default, AND it allows NOT expecting
every caller of `open_piker_runtime()` to have to pass an addr set XD

This makes a bucha CLI shit work again after breakage due to no
default..
2023-10-03 13:36:22 -04:00
Tyler Goodlet 00c046c280 Factor transport-ep parser/loader into helper
For now def it as `.cli.load_trans_eps()` just inside the pkg mod; only
loads the ep for `pikerd` which currently acts as the main service-actor
registrar per host. Delegate to this new `.load_trans_eps()`
as-it-was-used from the `pikerd` cmd body and add fresh support for
`piker chart --maddr <addr: str>` using the routine in the body of the
`piker.cli.cli` cmd group after loading the `conf.toml::network` section
B)

Also, toss in runtime debug mode wrapping around `piker chart` using the
new `tractor.devx.maybe_open_crash_handler()` and pull the switch from
a `--pdb` flag now factored into the `.cli.cli` click group.
2023-10-03 10:00:01 -04:00
Tyler Goodlet 9165515811 ib: more detailed comments on wait-for-quote-task todo 2023-10-02 17:57:47 -04:00
Tyler Goodlet 543c11f377 ib: only normalize and log first quote if it arrives 2023-10-01 19:14:08 -04:00
Tyler Goodlet 637d33d7cc Make `.config.load_accounts()` load `brokers.toml`.. 2023-10-01 19:09:15 -04:00
Tyler Goodlet e5fdb33e31 Port cache-`dict` search to new `rapidfuzz` api 2023-10-01 17:46:46 -04:00
Tyler Goodlet 81a8cd1685 binance: always load the `brokers.toml` file since default is `conf.toml` now 2023-10-01 17:37:09 -04:00
Tyler Goodlet a382f01c85 Move tsdb section to `service.tsdb.name` and get host from `.maddrs` 2023-10-01 17:23:39 -04:00
Tyler Goodlet 653348fcd8 Use `.service.find_service()` instead of of `tractor.find_actor()` in pape-eng 2023-10-01 16:10:37 -04:00
Tyler Goodlet e139d2e259 Set `registry_addrs` in CLI (click) context-config
Since `tractor` and our runtime internals is now moved to multihomed semantics,
do the same in the CLI / config entrypoints.

Also, try using the new `tractor.devx.maybe_open_crash_handler()` around
the `pikerd` CLI.
2023-10-01 15:42:31 -04:00
Tyler Goodlet 7258d57c69 Only warn on mismatched `open_registry()` input addrs
When a new (actor) caller opens the registry there are a few possible cases:
1. - some task already opened the registry during init and set the global
  superset of registrar addrs that are expected to be used,
2. - some task after the init task opens with a subset of addrs.
3. - some task after init opens with a disjoint set - should be an error?

In the 2nd case we don't want to error since they may just not need to
know about other registrar (multi-homed) addrs and thus only need
specific access - so only warn about the diff in that case. If the
caller is requesting some disjoint set then we still runtime raise.

Adjust `find_service()` to allow a null `registry_addrs` input in which
case we fail over to using whatever the pre-set `Registry.addrs` has;
makes it simple for actors that don't want/need to know about the global
registrar set for their actor tree. Also, always pass
`tractor.find_actor(only_first=True)` (for now).
2023-10-01 15:36:17 -04:00
Tyler Goodlet 5d081a40d5 Port to new `parse_maddr()` name 2023-09-29 15:20:56 -04:00
Tyler Goodlet fcececce19 Move multi-addr parser mod to `tractor` 2023-09-29 14:33:15 -04:00
Tyler Goodlet b6ac6069fe Temporarily use crash handler around search CLI ep 2023-09-29 14:02:17 -04:00
Tyler Goodlet a98f5877bc ui._exec: use new `get_runtime_vars()` name 2023-09-28 12:31:24 -04:00
Tyler Goodlet 50ddef0985 data.feed: dynamically load `ui._search` mod for headless installs 2023-09-28 12:30:10 -04:00
Tyler Goodlet b1cde3df49 config: make `conf.toml` the default load target 2023-09-28 12:29:07 -04:00
Tyler Goodlet 57010d479d Support multi-homed service actors and multiaddrs
This commit requires an equivalent commit in `tractor` which adds
multi-homed transport server support to the runtime and thus the
ability to listen on multiple (embedded protocol) addrs / networks as
well as exposing registry actors similarly. Multiple bind addresses can
now be (bare bones) specified either in the `conf.toml:[network]`
section, or passed on the `pikerd` CLI.

This patch specifically requires the ability to pass a `registry_addrs:
list[tuple]` into `tractor.open_root_actor()` as well as adjusts all
internal runtime routines to do the same, mostly inside the `.service`
pkg.

Further details include:
- adding a new `.service._multiaddr` parser module (which will likely be
  moved into `tractor`'s core) which supports loading libp2p style
  "multiaddresses" both from the `conf.toml` and the `pikerd` CLI as
  per,
- reworking the `pikerd` cmd to accept a new `--maddr`/`-m` param that
  accepts multiaddresses.
- adjust the actor-registry subsys to support multi-homing by also
  accepting a list of addrs to its top level API eps.
- various internal name changes to reflect the multi-address interface
  changes throughout.
- non-working CLI tweaks to `piker chart` (ui-client cmds) to begin
  accepting maddrs.
- dropping all elasticsearch and marketstore flags / usage from `pikerd`
  for now since we're planning to drop mkts and elasticsearch will be an
  optional dep in the future.
2023-09-28 12:13:34 -04:00
Tyler Goodlet f94244aad4 Load `network` section from `conf.toml` for service-addr map 2023-09-28 12:04:24 -04:00
Tyler Goodlet 261c331602 Try using `.mkPoetryEnv` instead for devving (dont work yet..) 2023-09-22 16:39:54 -04:00
Tyler Goodlet 3b4a4db7b6 Muck with `develop.nix` to try and hack it with `poetry` venv, go py3.11 2023-09-22 16:39:54 -04:00
Tyler Goodlet ad59a581c7 symcache: passthrough `rapidfuzz.process.extract` kwargs 2023-09-22 15:56:49 -04:00
Tyler Goodlet c312f90c0c kucoin: port to using `rapidfuzz`
Just like the others but also flip to using a `Client.get_mkt_pairs()`
meth name for consistency across clients.
2023-09-22 15:55:19 -04:00
Tyler Goodlet 1a859bc1a2 kraken: drop now unused `rapidfuzz` import 2023-09-22 15:53:03 -04:00
Tyler Goodlet e9887cb611 binance: parse .expiry separate from .venue
Apparently they're being massive cucks and changing their futes pair
schema again now adding a `NEXT_QUARTER` contract type which we weren't
handling at all. The good news is falling back to an old symcache file
would have prevented this from crashing.

Add a new `FutesPair.expiry: str` field so that `.bs_fqme` can simply
call it during the summary FQME-ification output rendering..
2023-09-22 14:48:50 -04:00
Tyler Goodlet 0ba75df877 Add `data.match_from_pairs` fuzzy symbology scanner
A helper for scanning a "pairs table" that most backends should expose
as part of their (internal) symbology set using `rapidfuzz` over
a `dict[str, Struct]` input table.
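The core of it is roughly (signature details assumed):

    from rapidfuzz import process

    def match_from_pairs(
        pairs: dict[str, Struct],
        query: str,
        score_cutoff: int = 50,
    ) -> dict[str, Struct]:
        # fuzzy-rank the table's keys then re-map the hits
        matches = process.extract(
            query,
            list(pairs.keys()),
            score_cutoff=score_cutoff,
        )
        return {key: pairs[key] for key, score, index in matches}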

Also expose the `data.types.Struct` at the subpkg top level.
2023-09-22 13:54:25 -04:00
Tyler Goodlet a97a0ced8c kraken: switch to `rapidfuzz` API 2023-09-21 19:49:10 -04:00
Tyler Goodlet 46d83e9ca9 deribit: switch to `rapidfuzz` API 2023-09-21 19:44:27 -04:00
Tyler Goodlet d4833eba21 binance: switch to `rapidfuzz` API 2023-09-21 19:44:06 -04:00
Tyler Goodlet 14f124164a ib: fix mktpair fallback table: use `Client._con2mkts` to translate..
Previously we were assuming that the `Client._contracts: dict[str,
Contract]` would suffice for this directly, which obviously isn't true XD

Also,
- add the `NSE` venue to skip list.
- use new `rapidfuzz.process.extract()` lib API.
- only get con deats for non null exchange names..
2023-09-21 19:14:44 -04:00
Tyler Goodlet 05959eaf70 Always ensure symcache mkt pair entry is valid type 2023-09-19 15:56:47 -04:00
Tyler Goodlet 30d55fdb27 Add `--pdb` support to `piker search` 2023-09-13 12:13:56 -04:00
Tyler Goodlet 2c88ebe697 binance: implement `Client.search_symbols()` using `rapidfuzz`
Change the deats inside the method and have the `brokerd` search task
just call it as needed since we already do internal mem caching on the
lookup table.

APIs changed so we need to make some tweaks as per:
- https://github.com/maxbachmann/RapidFuzz/blob/main/api_differences.md
- https://github.com/maxbachmann/RapidFuzz/blob/main/api_differences.md#differences-in-processor-functions

The main motivation is to get better wheel pkging support (for nixos),
better impl in C++, and a more simply licensed dep.
2023-09-13 11:59:51 -04:00
Tyler Goodlet 4a180019f0 Swap out `fuzzywuzzy` for the newer `rapidfuzz` lib 2023-09-13 11:57:02 -04:00
Tyler Goodlet 4d274b16d8 Attempt to generate .uis deps free lock file
Since `poetry` doesn't seem to actually mark optional group deps as such
in the lock file (!?) manually generate a `poetry.lock` with the
optional groups commented out in the `pyproject.toml`; this is all in
an attempt at trying to make `poetry2nix` build without any UI components
which seem to be the source of much frustration without hacking on p2n
and/or nixpkgs repos..

Further drop all the old build system files including the
setup.py and requirements.txt files.
2023-09-07 14:17:01 -04:00
Tyler Goodlet 481618cc51 kraken: handle ws live trading API symbology
Of course I missed this first try but, we need to use the ws market pair
symbology set (since apparently kraken loves redundancy at least 3 times
XD) when processing transactions that arrive from live clears since it's
an entirely different `LTC/EUR` style key than the `XLTCEUR` style
delivered from the ReST eps..

As part of this:
- add `Client._altnames`, `._wsnames` as `dict[str, Pair]` tables,
  leaving the `._AssetPairs` table as is keyed by the "xname"s.
- Change `Pair.respname: str` -> `.xname` since these keys all just seem
  to have a weird 'X' prefix.
- do the appropriately keyed pair table lookup by passing a new
  `api_name_set: str` to `norm_trade_records()` and set it correctly in
  the ws live txn handler task (see the sketch after this list).
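A rough sketch of the keyed-table dispatch (table attrs per the bullets
above, the lookup-key var is illustrative):

    pair_table: dict[str, Pair] = {
        'xname': client._AssetPairs,
        'wsname': client._wsnames,
        'altname': client._altnames,
    }[api_name_set]
    pair: Pair = pair_table[norm_sym]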
2023-08-30 16:32:34 -04:00
Tyler Goodlet 778d26067d ib.api: return None on manual quote timeout 2023-08-30 14:56:11 -04:00
Tyler Goodlet e54c3dc523 TOSQUASH 9005335e18: pack empty dict on no flow 2023-08-29 08:45:45 -04:00
Tyler Goodlet ad37cfbe2f Break backfill loop on `end_dt < start_dt` 2023-08-29 08:43:14 -04:00
Tyler Goodlet 8369f557c7 TOSQUASH 2e6b1330f375c310ad: adding .dev / .ui groups 2023-08-25 18:07:15 -04:00
Tyler Goodlet 461764419d ib.api: always key `._contracts` with '.ib' suffix
So that pos msgs from the ems are correctly loaded..
2023-08-25 17:47:30 -04:00
Tyler Goodlet 1002ce1e10 kraken.broker: one last fix to `Position.cumsize`.. 2023-08-25 17:47:30 -04:00
Tyler Goodlet 546049b62f data.history: handle venue-closure gap edge case 2023-08-25 17:47:30 -04:00
Tyler Goodlet e9517cdb02 ib: handle commodity-contract trade records 2023-08-25 17:47:30 -04:00
Tyler Goodlet 2b8cd031e8 By default silence `Client.get_quote()` timeout errors unless caller specifies to raise 2023-08-25 17:47:30 -04:00
Tyler Goodlet 2e6b1330f3 Add `.ui` and `.dev` deps groups via `poetry` Bo
Since we eventually want to allow users to minimally deploy `pikerd`
service-tree (aka distributed cross host) installs, we need to offer
a "headless" deps group. Really this is just the core dep set minus Qt
and some aux search related libs (for now).

The new `.dev` group is for adding hacking and testing tools including
`xonsh` since that will eventually be our REPL of choice more then
likely B)

Oh, and fix the namespace path (was a typo) for the `ledger` CLI and
of course bump the lock file.
2023-08-25 17:47:28 -04:00
Tyler Goodlet 995d1534b6 Drop hard redraws for now 2023-08-25 13:33:59 -04:00
Tyler Goodlet 9d31941d42 order_mode: embedded `Order` maybe be in dict form.. 2023-08-25 13:33:59 -04:00
Tyler Goodlet a695208992 brokers._daemon: drop question-comment about enabling feed module 2023-08-25 13:33:59 -04:00
Tyler Goodlet fed89562dc Import crash handler mngr from `piker.toolz` 2023-08-25 13:33:59 -04:00
Tyler Goodlet 9005335e18 ib: pack empty `dict` on no flow entry 2023-08-25 13:33:59 -04:00
Tyler Goodlet c3f8b089be Drop `.service._ahab` from storage cli runtime mods 2023-08-25 13:33:59 -04:00
Tyler Goodlet 0068119a6d ib: use `asyncio.wait_for()` on ticker first quote; on 3.11 input coros are not allowed.. 2023-08-25 13:33:59 -04:00
Tyler Goodlet 94540ce1cf Pin tomlkit as a path dep for now 2023-08-25 13:13:29 -04:00
Tyler Goodlet ea9a5e524c Factor prefer wheels deps into new `ahot_overrides`
Makes it easier to pass the overrides to multiple p2n functions (like
hopefully `.mkPoetryEnv`). Also, add some commented attempts at using
`mkPoetryEnv` and a todo list for "why", remove the `poetry` CLI main
entry point from the pyproject.toml, bump the poetry lock file.
2023-08-17 15:56:28 -04:00
Tyler Goodlet 6b22024570 MVP get us working fully on nixos
NB: for now this is linking to a presumed local clone of the
`poetry2nix` repo since part of fixing what was adjusted here needs to
be patched upstream, which means hackin on the p2n repo in tandem B)

Since there's some dependency build issues we need
to tweak the following to get baseline `nix develop` working:
- drop `python-levenshtein` (required by `fuzzywuzzy[speedup]`) for now
  since the overlay and/or wheel install needs to be properly figured
  out.
- build `pyqt5` from src for the moment (since `preferWheel` doesn't
  seem to be workin?) despite it taking forever XD
- add in the `flake.lock` file.
2023-08-16 12:19:00 -04:00
Tyler Goodlet 847cb7740c Drop `marketstore` mod import from CLIs loader 2023-08-16 12:15:49 -04:00
Tyler Goodlet 84dd0ae4ce Bump `msgspec`, `polars` versions and add CLI script eps 2023-08-16 08:07:35 -04:00
Tyler Goodlet 6b90e2e3ee Factor and gen per-dep overrides via "fancy" `.extend()`
As per the hot tip from the edgecases.md,
https://github.com/nix-community/poetry2nix/blob/master/docs/edgecases.md#modulenotfounderror-no-module-named-packagename

Factor all the (mostly `setuptools`) overrides into
a `pypkgs-build-requirements` set and `.extend()` in any `preferWheel`
additions (`polars`, `pyqt`, etc.) before passing to to
`mkPoetryApplication(overrides=<it>)`.

Add a buncha todos for improving the poetry2nix pkging including:
- adding the override requirements to the json file for all our deps
  in the `pypkgs-build-requirements` set.
- maybe propose docs for the edgecases.md to show how to do the auto-gen
  set (via func) AND extend with further overrides like `preferWheel`?
- task to support `polars` build from src (by copying `cryptography`
  stuff) instead of only from a wheel?
- get pyqt5 building from wheel since it seems to be taking forever from
  src..
- get pyqt6 working in general - going to require taking stuff from
  nixpkgs and applying it in the overrides of p2n.
2023-08-15 12:40:01 -04:00
Tyler Goodlet 482ad1cc83 Add `prompt-toolkit` for full `xonsh` feats 2023-08-14 13:10:23 -04:00
Tyler Goodlet 6e8d07852c Pkg with `poetry`, `poetry2nix` and a `flake.nix` 2023-08-14 11:36:34 -04:00
Tyler Goodlet 4aa04e1c8e Add note about broadcast when no `.symbol` found 2023-08-11 14:52:10 -04:00
122 changed files with 13615 additions and 4532 deletions

View File

@ -1,162 +1,196 @@
piker
-----
trading gear for hackers.
trading gear for hackers
|gh_actions|
.. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpikers%2Fpiker%2Fbadge&style=popout-square
:target: https://actions-badge.atrox.dev/piker/pikers/goto
``piker`` is a broker agnostic, next-gen FOSS toolset for real-time
computational trading targeted at `hardcore Linux users <comp_trader>`_ .
``piker`` is a broker agnostic, next-gen FOSS toolset and runtime for
real-time computational trading targeted at `hardcore Linux users
<comp_trader>`_ .
we use as much bleeding edge tech as possible including (but not limited to):
we use much bleeding edge tech including (but not limited to):
- latest python for glue_
- trio_ & tractor_ for our distributed, multi-core, real-time streaming
`structured concurrency`_ runtime B)
- Qt_ for pristine high performance UIs
- pyqtgraph_ for real-time charting
- ``polars`` ``numpy`` and ``numba`` for `fast numerics`_
- `apache arrow and parquet`_ for time series history management
persistence and sharing
- (prototyped) techtonicdb_ for L2 book storage
- uv_ for packaging and distribution
- trio_ & tractor_ for our distributed `structured concurrency`_ runtime
- Qt_ for pristine low latency UIs
- pyqtgraph_ (which we've extended) for real-time charting and graphics
- ``polars`` ``numpy`` and ``numba`` for redic `fast numerics`_
- `apache arrow and parquet`_ for time-series storage
.. |travis| image:: https://img.shields.io/travis/pikers/piker/master.svg
:target: https://travis-ci.org/pikers/piker
potential projects we might integrate with soon,
- (already prototyped in ) techtonicdb_ for L2 book storage
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
.. _uv: https://docs.astral.sh/uv/
.. _trio: https://github.com/python-trio/trio
.. _tractor: https://github.com/goodboy/tractor
.. _structured concurrency: https://trio.discourse.group/
.. _marketstore: https://github.com/alpacahq/marketstore
.. _techtonicdb: https://github.com/0b01/tectonicdb
.. _Qt: https://www.qt.io/
.. _pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
.. _glue: https://numpy.org/doc/stable/user/c-info.python-as-glue.html#using-python-as-glue
.. _apache arrow and parquet: https://arrow.apache.org/faq/
.. _fast numerics: https://zerowithdot.com/python-numpy-and-pandas-performance/
.. _comp_trader: https://jfaleiro.wordpress.com/2019/10/09/computational-trader/
.. _techtonicdb: https://github.com/0b01/tectonicdb
focus and features:
*******************
- 100% federated: your code, your hardware, your data feeds, your broker fills.
- zero web: low latency, native software that doesn't try to re-invent the OS
- maximal **privacy**: prevent brokers and mms from knowing your
planz; smack their spreads with dark volume.
- zero clutter: modal, context oriented UIs that echew minimalism, reduce
thought noise and encourage un-emotion.
- first class parallelism: built from the ground up on next-gen structured concurrency
primitives.
- traders first: broker/exchange/asset-class agnostic
- systems grounded: real-time financial signal processing that will
make any queuing or DSP eng juice their shorts.
- non-tina UX: sleek, powerful keyboard driven interaction with expected use in tiling wms
- data collaboration: every process and protocol is multi-host scalable.
- fight club ready: zero interest in adoption by suits; no corporate friendly license, ever.
focus and feats:
****************
fitting with these tenets, we're always open to new
framework/lib/service interop suggestions and ideas!
fitting with these tenets, we're always open to new framework suggestions and ideas.
- **100% federated**:
your code, your hardware, your data feeds, your broker fills.
building the best looking, most reliable, keyboard friendly trading
platform is the dream; join the cause.
- **zero web**:
low latency as a prime objective, native UIs and modern IPC
protocols without trying to re-invent the "OS-as-an-app"..
- **maximal privacy**:
prevent brokers and mms from knowing your planz; smack their
spreads with dark volume from a VPN tunnel.
- **zero clutter**:
modal, context oriented UIs that echew minimalism, reduce thought
noise and encourage un-emotion.
- **first class parallelism**:
built from the ground up on a next-gen structured concurrency
supervision sys.
- **traders first**:
broker/exchange/venue/asset-class/money-sys agnostic
- **systems grounded**:
real-time financial signal processing (fsp) that will make any
queuing or DSP eng juice their shorts.
- **non-tina UX**:
sleek, powerful keyboard driven interaction with expected use in
tiling wms (or maybe even a DDE).
- **data collab at scale**:
every actor-process and protocol is multi-host aware.
- **fight club ready**:
zero interest in adoption by suits; no corporate friendly license,
ever.
building the hottest looking, fastest, most reliable, keyboard
friendly FOSS trading platform is the dream; join the cause.
sane install with `poetry`
**************************
TODO!
a sane install with `uv`
************************
bc why install with `python` when you can faster with `rust` ::
uv sync
# ^ astral's docs,
# https://docs.astral.sh/uv/concepts/projects/sync/
include all GUIs (ex. for charting)::
uv sync --extra uis
AND with all our hacking tools and WIP integrations::
uv sync --dev --all-extras
rigorous install on ``nixos`` using ``poetry2nix``
**************************************************
TODO!
Ensure you can run the root-daemon::
uv run pikerd [-l info --pdb]
hacky install on nixos
**********************
`NixOS` is our core devs' distro of choice for which we offer
a stringently defined development shell envoirment that can be loaded with::
install on nix(os)
******************
``NixOS`` is our core devs' distro of choice for which we offer
a stringently defined development shell envoirment that can currently
be applied in one of 2 ways::
nix-shell develop.nix
# ONLY if running on X11
nix-shell default.nix
this will setup the required python environment to run piker, make sure to
run::
Or if you prefer flakes style and a modern DE::
pip install -r requirements.txt -e .
once after loading the shell
# ONLY if also running on Wayland
nix develop # for default bash
nix develop -c uv run xonsh # for @goodboy's preferred sh B)
install wild-west style via `pip`
*********************************
``piker`` is currently under heavy pre-alpha development and as such
should be cloned from this repo and hacked on directly.
for a development install::

    git clone git@github.com:pikers/piker.git
    cd piker
    virtualenv env
    source ./env/bin/activate
    pip install -r requirements.txt -e .


start a chart
*************
run a realtime OHLCV chart stand-alone::

    [uv run] piker -l info chart btcusdt.spot.binance xmrusdt.spot.kraken

    # ^^^ iff you haven't activated the py-env,
    # - https://docs.astral.sh/uv/concepts/projects/run/
    #
    # in order to create an explicit virt-env see,
    # - https://docs.astral.sh/uv/concepts/projects/layout/#the-project-environment
    # - https://docs.astral.sh/uv/pip/environments/
    #
    # use $UV_PROJECT_ENVIRONMENT to select any non-`.venv/`
    # dir as the venv subdir in the repo's root.
    # - https://docs.astral.sh/uv/reference/environment/#uv_project_environment
this runs a chart UI (with 1m sampled OHLCV) and shows 2 spot markets
from 2 diff cexes overlaid on the same graph. Use of `piker` without
first starting a daemon (`pikerd` - see below) means there is an
implicit spawning of the multi-actor-runtime (implemented as
a `tractor` app).
For additional subsystem feats available through our chart UI see the
various sub-readmes:
- order control using a mouse-n-keyboard UX B)
- cross venue market-pair (what most call "symbol") search, select, overlay Bo
- financial-signal-processing (`piker.fsp`) write-n-reload to sub-chart BO
- src-asset derivatives scan for anal, like the infamous "max pain" XO
check out our charts
********************
bet you weren't expecting this from the foss::
piker -l info -b kraken -b binance chart btcusdt.binance --pdb
this runs the main chart (currently with 1m sampled OHLC) in debug
mode and you can practice paper trading using the following
micro-manual:
``order_mode`` (
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
``search_mode`` (
``ctl-l`` or ``ctl-space`` to open,
``ctl-c`` or ``ctl-space`` to close
) :
- begin typing to have symbol search automatically lookup
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation
you can also configure your position allocation limits from the
sidepane.
run in distributed mode
***********************
start the service manager and data feed daemon in the background and
connect to it::

    pikerd -l info --pdb

connect your chart::

    piker -l info -b kraken -b binance chart xmrusdt.binance --pdb

spawn a daemon standalone
*************************
we call the root actor-process the ``pikerd``. it can be (and is
recommended normally to be) started separately from the ``piker
chart`` program::

    pikerd -l info --pdb

the daemon does nothing until a ``piker``-client (like ``piker
chart``) connects and requests some particular sub-system. for
a connecting chart ``pikerd`` will spawn and manage at least,

- a data-feed daemon: ``datad`` which does all the work of comms with
  the backend provider (in this case the ``binance`` cex).
- a paper-trading engine instance, ``paperboi.binance``, (if no live
  account has been configured) which allows for auto/manual order
  control against the live quote stream.

*using* an actor-service (aka micro-daemon) manager which dynamically
supervises various sub-subsystems-as-services throughout the ``piker``
runtime-stack.

now you can (implicitly) connect your chart::

    piker chart btcusdt.spot.binance

since ``pikerd`` was started separately you can now enjoy a persistent
real-time data stream tied to the daemon-tree's lifetime. i.e. the next
time you spawn a chart it will obviously not only load much faster
(since the underlying ``datad.binance`` is left running with its
in-memory IPC data structures) but also the data-feed and any order
mgmt states should be persistent until you finally cancel ``pikerd``.
if anyone asks you what this project is about
*********************************************
you don't talk about it.
you don't talk about it; just use it.
how do i get involved?
@ -166,6 +200,15 @@ enter the matrix.
how come there ain't that many docs
***********************************
suck it up, learn the code; no one is trying to sell you on anything.
also, we need lotsa help so if you want to start somewhere and can't
necessarily write serious code, this might be the place for you!
i mean we want/need them but building the core right has been higher
prio than marketing (and likely will stay that way Bp).
soo, suck it up bc,
- no one is trying to sell you on anything
- learning the code base is prolly way more valuable
- the UI/UXs are intended to be "intuitive" for any hacker..
we obviously need tonz help so if you want to start somewhere and
can't necessarily write "advanced" concurrent python/rust code, then
helping to document literally anything might be the place for you!

View File

@ -1,6 +1,5 @@
################
# ---- CEXY ----
################
[binance]
accounts.paper = 'paper'
@ -13,28 +12,41 @@ accounts.spot = 'spot'
spot.use_testnet = false
spot.api_key = ''
spot.api_secret = ''
# ------ binance ------
[deribit]
# std assets
key_id = ''
key_secret = ''
# options
accounts.option = 'option'
option.use_testnet = false
option.key_id = ''
option.key_secret = ''
# aux logging from `cryptofeed`
option.log.filename = 'cryptofeed.log'
option.log.level = 'DEBUG'
option.log.disabled = true
# ------ deribit ------
[kraken]
key_descr = ''
api_key = ''
secret = ''
# ------ kraken ------
[kucoin]
key_id = ''
key_secret = ''
key_passphrase = ''
# ------ kucoin ------
################
# -- BROKERZ ---
################
[questrade]
refresh_token = ''
access_token = ''
@ -42,44 +54,55 @@ api_server = 'https://api06.iq.questrade.com/'
expires_in = 1800
token_type = 'Bearer'
expires_at = 1616095326.355846
# ------ questrade ------
[ib]
# define the (set of) host-port socketaddrs that
# brokerd.ib will scan to connect to an API endpoint
# (ib-gw or ib-tws listening instances)
hosts = [
'127.0.0.1',
]
# XXX: the order in which ports will be scanned
# (by the `brokerd` daemon-actor)
# is determined by the line order here.
# TODO: when we eventually spawn gateways in our
# container, we can just dynamically allocate these
# using IBC.
ports = [
4002, # gw
7497, # tws
]
# XXX: for a paper account the flex web query service
# is not supported so you have to manually download
# an XML report and put it in a location that can be
# accessed by the ``brokerd.ib`` backend code for parsing.
flex_token = ''
flex_trades_query_id = '' # live account
# when clients are being scanned this determines
# which clients are preferred to be used for data
# feeds based on the order of account names, if
# detected as active on an API client.
# When API endpoints are being scanned during startup, the order
# of user-defined-account "names" (as defined below) here
# determines which py-client connection is given priority to be
# used for data-feed-requests according to whichever client
# connected to an API endpoint which reported the equivalent
# account number for that name.
prefer_data_account = [
'paper',
'margin',
'ira',
]
# For long-term trades txn (transaction) history
# processing (i.e. your txn ledger with IB) you can
# (automatically for live accounts) query the FLEX
# report system for past history.
#
# (For paper accounts the web query service
# is not supported so you have to manually download
# an XML report and put it in a location that can be
# accessed by our `brokerd.ib` backend code for parsing).
#
flex_token = ''
flex_trades_query_id = '' # live account
# define "aliases" (names) for each account number
# such that the names can be referenced and logged throughout
# `piker.accounting` subsys and more easily
# referred to by the user.
#
# These keys will be the set exposed through the order-mode
# account-selection UI so that numbers are never shown.
[ib.accounts]
# the order in which accounts will be selectable
# in the order mode UI (if found via clients during
# API-app scanning) when a new symbol is loaded.
paper = 'XX0000000'
margin = 'X0000000'
ira = 'X0000000'
paper = 'DU0000000' # <- literal account #
margin = 'U0000000'
ira = 'U0000000'
# ------ ib ------

View File

@ -1,7 +1,9 @@
[network]
tsdb.backend = 'marketstore'
tsdb.host = 'localhost'
tsdb.grpc_port = 5995
pikerd = [
'/ipv4/127.0.0.1/tcp/6116', # std localhost daemon-actor tree
# '/uds/6116', # TODO std uds socket file
]
[ui]
# set custom font + size which will scale entire UI

135
default.nix 100644
View File

@ -0,0 +1,135 @@
with (import <nixpkgs> {});
let
glibStorePath = lib.getLib glib;
zlibStorePath = lib.getLib zlib;
zstdStorePath = lib.getLib zstd;
dbusStorePath = lib.getLib dbus;
libGLStorePath = lib.getLib libGL;
freetypeStorePath = lib.getLib freetype;
qt6baseStorePath = lib.getLib qt6.qtbase;
fontconfigStorePath = lib.getLib fontconfig;
libxkbcommonStorePath = lib.getLib libxkbcommon;
xcbutilcursorStorePath = lib.getLib xcb-util-cursor;
pypkgs = python313Packages;
qtpyStorePath = lib.getLib pypkgs.qtpy;
pyqt6StorePath = lib.getLib pypkgs.pyqt6;
pyqt6SipStorePath = lib.getLib pypkgs.pyqt6-sip;
rapidfuzzStorePath = lib.getLib pypkgs.rapidfuzz;
qdarkstyleStorePath = lib.getLib pypkgs.qdarkstyle;
xorgLibX11StorePath = lib.getLib xorg.libX11;
xorgLibxcbStorePath = lib.getLib xorg.libxcb;
xorgxcbutilwmStorePath = lib.getLib xorg.xcbutilwm;
xorgxcbutilimageStorePath = lib.getLib xorg.xcbutilimage;
xorgxcbutilerrorsStorePath = lib.getLib xorg.xcbutilerrors;
xorgxcbutilkeysymsStorePath = lib.getLib xorg.xcbutilkeysyms;
xorgxcbutilrenderutilStorePath = lib.getLib xorg.xcbutilrenderutil;
in
stdenv.mkDerivation {
name = "piker-qt6-uv";
buildInputs = [
# System requirements.
glib
zlib
dbus
zstd
libGL
freetype
qt6.qtbase
libgcc.lib
fontconfig
libxkbcommon
# Xorg requirements
xcb-util-cursor
xorg.libxcb
xorg.libX11
xorg.xcbutilwm
xorg.xcbutilimage
xorg.xcbutilerrors
xorg.xcbutilkeysyms
xorg.xcbutilrenderutil
# Python requirements.
python313
uv
pypkgs.qdarkstyle
pypkgs.rapidfuzz
pypkgs.pyqt6
pypkgs.qtpy
];
src = null;
shellHook = ''
set -e
# Set the Qt plugin path
# export QT_DEBUG_PLUGINS=1
QTBASE_PATH="${qt6baseStorePath}/lib"
QT_PLUGIN_PATH="$QTBASE_PATH/qt-6/plugins"
QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"
LIB_GCC_PATH="${libgcc.lib}/lib"
GLIB_PATH="${glibStorePath}/lib"
ZSTD_PATH="${zstdStorePath}/lib"
ZLIB_PATH="${zlibStorePath}/lib"
DBUS_PATH="${dbusStorePath}/lib"
LIBGL_PATH="${libGLStorePath}/lib"
FREETYPE_PATH="${freetypeStorePath}/lib"
FONTCONFIG_PATH="${fontconfigStorePath}/lib"
LIB_XKB_COMMON_PATH="${libxkbcommonStorePath}/lib"
XCB_UTIL_CURSOR_PATH="${xcbutilcursorStorePath}/lib"
XORG_LIB_X11_PATH="${xorgLibX11StorePath}/lib"
XORG_LIB_XCB_PATH="${xorgLibxcbStorePath}/lib"
XORG_XCB_UTIL_IMAGE_PATH="${xorgxcbutilimageStorePath}/lib"
XORG_XCB_UTIL_WM_PATH="${xorgxcbutilwmStorePath}/lib"
XORG_XCB_UTIL_RENDER_UTIL_PATH="${xorgxcbutilrenderutilStorePath}/lib"
XORG_XCB_UTIL_KEYSYMS_PATH="${xorgxcbutilkeysymsStorePath}/lib"
XORG_XCB_UTIL_ERRORS_PATH="${xorgxcbutilerrorsStorePath}/lib"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_GCC_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$DBUS_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$GLIB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZLIB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ZSTD_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBGL_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FONTCONFIG_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$FREETYPE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIB_XKB_COMMON_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XCB_UTIL_CURSOR_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_X11_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_LIB_XCB_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_IMAGE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_WM_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_RENDER_UTIL_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_KEYSYMS_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$XORG_XCB_UTIL_ERRORS_PATH"
export LD_LIBRARY_PATH
RPDFUZZ_PATH="${rapidfuzzStorePath}/lib/python3.13/site-packages"
QDRKSTYLE_PATH="${qdarkstyleStorePath}/lib/python3.13/site-packages"
QTPY_PATH="${qtpyStorePath}/lib/python3.13/site-packages"
PYQT6_PATH="${pyqt6StorePath}/lib/python3.13/site-packages"
PYQT6_SIP_PATH="${pyqt6SipStorePath}/lib/python3.13/site-packages"
# NOTE: these presumably were meant to extend `PYTHONPATH` (nothing
# reads a `PATCH` env var) so the nix-store py-pkgs resolve in-venv.
PYTHONPATH="$PYTHONPATH:$RPDFUZZ_PATH"
PYTHONPATH="$PYTHONPATH:$QDRKSTYLE_PATH"
PYTHONPATH="$PYTHONPATH:$QTPY_PATH"
PYTHONPATH="$PYTHONPATH:$PYQT6_PATH"
PYTHONPATH="$PYTHONPATH:$PYQT6_SIP_PATH"
export PYTHONPATH
# install all dev and extras
uv sync --dev --all-extras
'';
}

View File

@ -1,28 +1,34 @@
with (import <nixpkgs> {});
with python310Packages;
stdenv.mkDerivation {
name = "pip-env";
name = "poetry-env";
buildInputs = [
# System requirements.
readline
# TODO: hacky non-poetry install stuff we need to get rid of!!
virtualenv
setuptools
pip
# obviously, and see below for hacked linking
pyqt5
poetry
# virtualenv
# setuptools
# pip
# Python requirements (enough to get a virtualenv going).
python310Full
python311Full
# obviously, and see below for hacked linking
python311Packages.pyqt5
python311Packages.pyqt5_sip
# python311Packages.qtpy
# numerics deps
python310Packages.python-Levenshtein
python310Packages.fastparquet
python310Packages.polars
python311Packages.levenshtein
python311Packages.fastparquet
python311Packages.polars
];
# environment.sessionVariables = {
# LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib";
# };
src = null;
shellHook = ''
# Allow the use of wheels.
@ -30,13 +36,12 @@ stdenv.mkDerivation {
# Augment the dynamic linker path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${R}/lib/R/lib:${readline}/lib
export QT_QPA_PLATFORM_PLUGIN_PATH="${qt5.qtbase.bin}/lib/qt-${qt5.qtbase.version}/plugins";
if [ ! -d "venv" ]; then
virtualenv venv
if [ ! -d ".venv" ]; then
poetry install --with uis
fi
source venv/bin/activate
poetry shell
'';
}

View File

@ -1,30 +1,138 @@
running ``ib`` gateway in ``docker``
------------------------------------
We have a config based on the (now defunct)
image from "waytrade":
We have a config based on a well maintained community
image from `@gnzsnz`:
https://github.com/waytrade/ib-gateway-docker
https://github.com/gnzsnz/ib-gateway-docker
To startup this image with our custom settings
simply run the command::
To startup this image simply run the command::
docker compose up
And you should have the following socket-available services:

- ``x11vnc1@127.0.0.1:3003``
- ``ib-gw@127.0.0.1:4002``

You can attach to the container via a VNC client
without password auth.

(For further usage^ see the official `docker-compose`_ docs)

And you should have the following socket-available services by
default:

- ``x11vnc1 @ 127.0.0.1:5900``
- ``ib-gw @ 127.0.0.1:4002``

You can now attach to the container via a VNC client with password-auth;
here is an example using ``vncviewer`` on ``linux``::

    vncviewer localhost:5900

now enter the pw (password) you set via an `.env file`_ (see second
code blob) or pw-file according to the `credentials section`_.
If you want to change away from their default config see the example
`docker-compose.yml`-config issue and config-section of the readme,
- https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#configuration
- https://github.com/gnzsnz/ib-gateway-docker/discussions/103
.. _.env file: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#how-to-use-it
.. _docker-compose: https://docs.docker.com/compose/
.. _credentials section: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#credentials
Connecting to the API from `piker`
---------------------------------
In order to expose the container's API endpoint to the
`brokerd/datad/ib` actor, we need to add a section to the user's
`brokers.toml` config (note the below is similar to the repo-shipped
template file),
.. code:: toml
[ib]
# define the (set of) host-port socketaddrs that
# brokerd.ib will scan to connect to an API endpoint
# (ib-gw or ib-tws listening instances)
hosts = [
'127.0.0.1',
]
ports = [
4002, # gw
7497, # tws
]
# When API endpoints are being scanned during startup, the order
# of user-defined-account "names" (as defined below) here
# determines which py-client connection is given priority to be
# used for data-feed-requests according to whichever client
# connected to an API endpoint which reported the equivalent
# account number for that name.
prefer_data_account = [
'paper',
'margin',
'ira',
]
# define "aliases" (names) for each account number
# such that the names can be referenced and logged throughout
# `piker.accounting` subsys and more easily
# referred to by the user.
#
# These keys will be the set exposed through the order-mode
# account-selection UI so that numbers are never shown.
[ib.accounts]
paper = 'XX0000000'
margin = 'X0000000'
ira = 'X0000000'
the broker daemon can also connect to the container's VNC server for
added functionalities including,
- viewing the API endpoint program's GUI for manual interventions,
- workarounds for historical data throttling using hotkey hacks,
Add a further section to `brokers.toml` which maps each API-ep's
port to a table of VNC server connection info like,
.. code:: toml
[ib.vnc_addrs]
4002 = {host = 'localhost', port = 5900, pw = 'doggy'}
The `pw = 'doggy'` here ^ should be the same value as the particular
container instance's `.env` file setting (when it was run),
.. code:: ini
VNC_SERVER_PASSWORD='doggy'
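
For reference, that `[ib.vnc_addrs]` table parses as a plain nested
dict; here's a tiny sketch using stdlib `tomllib` (the literal file
path is a stand-in, `piker` resolves its own config dir),

.. code:: python

    import tomllib

    with open('brokers.toml', 'rb') as f:
        conf = tomllib.load(f)

    # NB: TOML bare keys are strings, so the port key is '4002'
    vnc: dict = conf['ib']['vnc_addrs']['4002']
    assert vnc == {'host': 'localhost', 'port': 5900, 'pw': 'doggy'}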
IF you also want to run ``TWS``
-------------------------------
You can also run it containerized,
https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#using-tws
SECURITY stuff (advanced, only if you're paranoid)
--------------------------------------------------
First and foremost if doing a "distributed" container setup where you
run the ``ib-gw`` docker container and your connecting API client
(likely ``ib_async`` from python) on **different hosts** be sure to
read the `security considerations`_ section!
And for a further (somewhat paranoid) perspective from
a long-time-ago serious devops eng..
Though "``ib``" claims they filter remote host connections outside
``localhost`` (aka ``127.0.0.1`` on ipv4) it's prolly justified if
you'd like to filter the socket at the *OS level* using a stateless
firewall rule::
ip rule add not unicast iif lo to 0.0.0.0/0 dport 4002
We will soon have this baked into our own custom image but for
now you'll have to do it urself dawgy.
We will soon have this either baked into our own custom derivative
image (or patched into the current upstream one after further testin)
but for now you'll have to do it urself, diggity dawg.
.. _security considerations: https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#security-considerations

View File

@ -1,10 +1,15 @@
# rework from the original @
# https://github.com/waytrade/ib-gateway-docker/blob/master/docker-compose.yml
version: "3.5"
# a community maintained IB API container!
#
# https://github.com/gnzsnz/ib-gateway-docker
#
# For piker we (currently) include some minor deviations
# for some config files in the `volumes` section.
#
# See full configuration settings @
# - https://github.com/gnzsnz/ib-gateway-docker?tab=readme-ov-file#configuration
# - https://github.com/gnzsnz/ib-gateway-docker/discussions/103
services:
ib_gw_paper:
# apparently java is a mega cukc:
@ -19,8 +24,9 @@ services:
# other image tags available:
# https://github.com/waytrade/ib-gateway-docker#supported-tags
# image: waytrade/ib-gateway:981.3j
image: waytrade/ib-gateway:1012.2i
# image: waytrade/ib-gateway:1012.2i
image: ghcr.io/gnzsnz/ib-gateway:latest
restart: 'no' # restart on boot whenev there's a crash or user clicks
network_mode: 'host'
@ -49,16 +55,22 @@ services:
target: /root/scripts/run_x11_vnc.sh
read_only: true
# NOTE:to fill these out, define an `.env` file in the same dir as
# this compose file which looks something like:
# TWS_USERID='myuser'
# TWS_PASSWORD='guest'
# NOTE: an alt method to fill these out is to
# define an `.env` file in the same dir as
# this compose file.
environment:
TWS_USERID: ${TWS_USERID}
# TWS_USERID: 'myuser'
TWS_PASSWORD: ${TWS_PASSWORD}
TRADING_MODE: 'paper'
VNC_SERVER_PASSWORD: 'doggy'
VNC_SERVER_PORT: '3003'
# TWS_PASSWORD: 'guest'
TRADING_MODE: ${TRADING_MODE}
# TRADING_MODE: 'paper'
VNC_SERVER_PASSWORD: ${VNC_SERVER_PASSWORD}
# VNC_SERVER_PASSWORD: 'doggy'
# TODO, see if we can get this supported like it
# was on the old `waytrade` image?
# VNC_SERVER_PORT: '3003'
# ports:
# - target: 4002
@ -75,6 +87,9 @@ services:
# - "127.0.0.1:4002:4002"
# - "127.0.0.1:5900:5900"
# TODO, a masked but working example of dual paper + live
# ib-gw instances running in a single app run!
#
# ib_gw_live:
# image: waytrade/ib-gateway:1012.2i
# restart: no

View File

@ -117,9 +117,57 @@ SecondFactorDevice=
# If you use the IBKR Mobile app for second factor authentication,
# and you fail to complete the process before the time limit imposed
# by IBKR, you can use this setting to tell IBC to exit: arrangements
# can then be made to automatically restart IBC in order to initiate
# the login sequence afresh. Otherwise, manual intervention at TWS's
# by IBKR, this setting tells IBC whether to automatically restart
# the login sequence, giving you another opportunity to complete
# second factor authentication.
#
# Permitted values are 'yes' and 'no'.
#
# If this setting is not present or has no value, then the value
# of the deprecated ExitAfterSecondFactorAuthenticationTimeout is
# used instead. If this also has no value, then this setting defaults
# to 'no'.
#
# NB: you must be using IBC v3.14.0 or later to use this setting:
# earlier versions ignore it.
ReloginAfterSecondFactorAuthenticationTimeout=
# This setting is only relevant if
# ReloginAfterSecondFactorAuthenticationTimeout is set to 'yes',
# or if ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
#
# It controls how long (in seconds) IBC waits for login to complete
# after the user acknowledges the second factor authentication
# alert at the IBKR Mobile app. If login has not completed after
# this time, IBC terminates.
# The default value is 60.
SecondFactorAuthenticationExitInterval=
# This setting specifies the timeout for second factor authentication
# imposed by IB. The value is in seconds. You should not change this
# setting unless you have reason to believe that IB has changed the
# timeout. The default value is 180.
SecondFactorAuthenticationTimeout=180
# DEPRECATED SETTING
# ------------------
#
# ExitAfterSecondFactorAuthenticationTimeout - THIS SETTING WILL BE
# REMOVED IN A FUTURE RELEASE. For IBC version 3.14.0 and later, see
# the notes for ReloginAfterSecondFactorAuthenticationTimeout above.
#
# For IBC versions earlier than 3.14.0: If you use the IBKR Mobile
# app for second factor authentication, and you fail to complete the
# process before the time limit imposed by IBKR, you can use this
# setting to tell IBC to exit: arrangements can then be made to
# automatically restart IBC in order to initiate the login sequence
# afresh. Otherwise, manual intervention at TWS's
# Second Factor Authentication dialog is needed to complete the
# login.
#
@ -132,29 +180,18 @@ SecondFactorDevice=
ExitAfterSecondFactorAuthenticationTimeout=no
# This setting is only relevant if
# ExitAfterSecondFactorAuthenticationTimeout is set to 'yes'.
#
# It controls how long (in seconds) IBC waits for login to complete
# after the user acknowledges the second factor authentication
# alert at the IBKR Mobile app. If login has not completed after
# this time, IBC terminates.
# The default value is 40.
SecondFactorAuthenticationExitInterval=
# Trading Mode
# ------------
#
# TWS 955 introduced a new Trading Mode combo box on its login
# dialog. This indicates whether the live account or the paper
# trading account corresponding to the supplied credentials is
# to be used. The allowed values are 'live' (the default) and
# 'paper'. For earlier versions of TWS this setting has no
# effect.
# This indicates whether the live account or the paper trading
# account corresponding to the supplied credentials is to be used.
# The allowed values are 'live' (the default) and 'paper'.
#
# If this is set to 'live', then the credentials for the live
# account must be supplied. If it is set to 'paper', then either
# the live or the paper-trading credentials may be supplied.
TradingMode=
TradingMode=paper
# Paper-trading Account Warning
@ -188,7 +225,7 @@ AcceptNonBrokerageAccountWarning=yes
#
# The default value is 60.
LoginDialogDisplayTimeout=20
LoginDialogDisplayTimeout=60
@ -217,7 +254,15 @@ LoginDialogDisplayTimeout=20
# but they are acceptable.
#
# The default is the current working directory when IBC is
# started.
# started, unless the TWS_SETTINGS_PATH setting in the relevant
# start script is set.
#
# If both this setting and TWS_SETTINGS_PATH are set, then this
# setting takes priority. Note that if they have different values,
# auto-restart will not work.
#
# NB: this setting is now DEPRECATED. You should use the
# TWS_SETTINGS_PATH setting in the relevant start script.
IbDir=/root/Jts
@ -284,15 +329,32 @@ ExistingSessionDetectedAction=primary
# Override TWS API Port Number
# ----------------------------
#
# If OverrideTwsApiPort is set to an integer, IBC changes the
# 'Socket port' in TWS's API configuration to that number shortly
# after startup. Leaving the setting blank will make no change to
# the current setting. This setting is only intended for use in
# certain specialized situations where the port number needs to
# If OverrideTwsApiPort is set to an integer, IBC changes the
# 'Socket port' in TWS's API configuration to that number shortly
# after startup (but note that for the FIX Gateway, this setting is
# actually stored in jts.ini rather than the Gateway's settings
# file). Leaving the setting blank will make no change to
# the current setting. This setting is only intended for use in
# certain specialized situations where the port number needs to
# be set dynamically at run-time, and for the FIX Gateway: most
# non-FIX users will never need it, so don't use it unless you know
# you need it.
OverrideTwsApiPort=4000
# Override TWS Master Client ID
# -----------------------------
#
# If OverrideTwsMasterClientID is set to an integer, IBC changes the
# 'Master Client ID' value in TWS's API configuration to that
# value shortly after startup. Leaving the setting blank will make
# no change to the current setting. This setting is only intended
# for use in certain specialized situations where the value needs to
# be set dynamically at run-time: most users will never need it,
# so don't use it unless you know you need it.
; OverrideTwsApiPort=4002
OverrideTwsMasterClientID=
# Read-only Login
@ -302,11 +364,13 @@ ExistingSessionDetectedAction=primary
# account security programme, the user will not be asked to perform
# the second factor authentication action, and login to TWS will
# occur automatically in read-only mode: in this mode, placing or
# managing orders is not allowed. If set to 'no', and the user is
# enrolled in IB's account security programme, the user must perform
# the relevant second factor authentication action to complete the
# login.
# managing orders is not allowed.
#
# If set to 'no', and the user is enrolled in IB's account security
# programme, the second factor authentication process is handled
# according to the Second Factor Authentication Settings described
# elsewhere in this file.
#
# If the user is not enrolled in IB's account security programme,
# this setting is ignored. The default is 'no'.
@ -326,7 +390,44 @@ ReadOnlyLogin=no
# set the relevant checkbox (this only needs to be done once) and
# not provide a value for this setting.
ReadOnlyApi=no
ReadOnlyApi=
# API Precautions
# ---------------
#
# These settings relate to the corresponding 'Precautions' checkboxes in the
# API section of the Global Configuration dialog.
#
# For all of these, the accepted values are:
# - 'yes' sets the checkbox
# - 'no' clears the checkbox
# - if not set, the existing TWS/Gateway configuration is unchanged
#
# NB: these settings are really only supplied for the benefit of new TWS
# or Gateway instances that are being automatically installed and
# started without user intervention, or where user settings are not preserved
# between sessions (eg some Docker containers). Where a user is involved, they
# should use the Global Configuration to set the relevant checkboxes and not
# provide values for these settings.
BypassOrderPrecautions=
BypassBondWarning=
BypassNegativeYieldToWorstConfirmation=
BypassCalledBondWarning=
BypassSameActionPairTradeWarning=
BypassPriceBasedVolatilityRiskWarning=
BypassUSStocksMarketDataInSharesWarning=
BypassRedirectOrderWarning=
BypassNoOverfillProtectionPrecaution=
# Market data size for US stocks - lots or shares
@ -381,54 +482,145 @@ AcceptBidAskLastSizeDisplayUpdateNotification=accept
SendMarketDataInLotsForUSstocks=
# Trusted API Client IPs
# ----------------------
#
# NB: THIS SETTING IS ONLY RELEVANT FOR THE GATEWAY, AND ONLY WHEN FIX=yes.
# In all other cases it is ignored.
#
# This is a list of IP addresses separated by commas. API clients with IP
# addresses in this list are able to connect to the API without Gateway
# generating the 'Incoming connection' popup.
#
# Note that 127.0.0.1 is always permitted to connect, so do not include it
# in this setting.
TrustedTwsApiClientIPs=
# Reset Order ID Sequence
# -----------------------
#
# The setting resets the order id sequence for orders submitted via the API, so
# that the next invocation of the `NextValidId` API callback will return the
# value 1. The reset occurs when TWS starts.
#
# Note that order ids are reset for all API clients, except those that have
# outstanding (ie incomplete) orders: their order id sequence carries on as
# before.
#
# Valid values are 'yes', 'true', 'false' and 'no'. The default is 'no'.
ResetOrderIdsAtStart=
# This setting specifies IBC's action when TWS displays the dialog asking for
# confirmation of a request to reset the API order id sequence.
#
# Note that the Gateway never displays this dialog, so this setting is ignored
# for a Gateway session.
#
# Valid values consist of two strings separated by a solidus '/'. The first
# value specifies the action to take when the order id reset request resulted
# from setting ResetOrderIdsAtStart=yes. The second specifies the action to
# take when the order id reset request is a result of the user clicking the
# 'Reset API order ID sequence' button in the API configuration. Each value
# must be one of the following:
#
# 'confirm'
# order ids will be reset
#
# 'reject'
# order ids will not be reset
#
# 'ignore'
# IBC will ignore the dialog. The user must take action.
#
# The default setting is ignore/ignore
# Examples:
#
# 'confirm/reject' - confirm order id reset only if ResetOrderIdsAtStart=yes
# and reject any user-initiated requests
#
# 'ignore/confirm' - user must decide what to do if ResetOrderIdsAtStart=yes
# and confirm user-initiated requests
#
# 'reject/ignore' - reject order id reset if ResetOrderIdsAtStart=yes but
# allow user to handle user-initiated requests
ConfirmOrderIdReset=
# =============================================================================
# 4. TWS Auto-Closedown
# 4. TWS Auto-Logoff and Auto-Restart
# =============================================================================
#
# IMPORTANT NOTE: Starting with TWS 974, this setting no longer
# works properly, because IB have changed the way TWS handles its
# autologoff mechanism.
# TWS and Gateway insist on being restarted every day. Two alternative
# automatic options are offered:
#
# You should now configure the TWS autologoff time to something
# convenient for you, and restart IBC each day.
# - Auto-Logoff: at a specified time, TWS shuts down tidily, without
# restarting.
#
# Alternatively, discontinue use of IBC and use the auto-relogin
# mechanism within TWS 974 and later versions (note that the
# auto-relogin mechanism provided by IB is not available if you
# use IBC).
# - Auto-Restart: at a specified time, TWS shuts down and then restarts
# without the user having to re-authenticate.
#
# The normal way to configure the time at which this happens is via the Lock
# and Exit section of the Configuration dialog. Once this time has been
# configured in this way, the setting persists until the user changes it again.
#
# However, there are situations where there is no user available to do this
# configuration, or where there is no persistent storage (for example some
# Docker images). In such cases, the auto-restart or auto-logoff time can be
# set whenever IBC starts with the settings below.
#
# The value, if specified, must be a time in HH:MM AM/PM format, for example
# 08:00 AM or 10:00 PM. Note that there must be a single space between the
# two parts of this value; also that midnight is "12:00 AM" and midday is
# "12:00 PM".
#
# If no value is specified for either setting, the currently configured
# settings will apply. If a value is supplied for one setting, the other
# setting is cleared. If values are supplied for both settings, only the
# auto-restart time is set, and the auto-logoff time is cleared.
#
# Note that for a normal TWS/Gateway installation with persistent storage
# (for example on a desktop computer) the value will be persisted as if the
# user had set it via the configuration dialog.
#
# If you choose to auto-restart, you should take note of the considerations
# described at the link below. Note that where this information mentions
# 'manual authentication', restarting IBC will do the job (IBKR does not
# recognise the existence of IBC in its documentation).
#
# https://www.interactivebrokers.com/en/software/tws/twsguide.htm#usersguidebook/configuretws/auto_restart_info.htm
#
# If you use the "RESTART" command via the IBC command server, and IBC is
# running any version of the Gateway (or a version of TWS earlier than 1018),
# note that this will set the Auto-Restart time in Gateway/TWS's configuration
# dialog to the time at which the restart actually happens (which may be up to
# a minute after the RESTART command is issued). To prevent future auto-
# restarts at this time, you must make sure you have set AutoLogoffTime or
# AutoRestartTime to your desired value before running IBC. NB: this does not
# apply to TWS from version 1018 onwards.
# Set to yes or no (lower case).
#
# yes means allow TWS to shut down automatically at its
# specified shutdown time, which is set via the TWS
# configuration menu.
#
# no means TWS never shuts down automatically.
#
# NB: IB recommends that you do not keep TWS running
# continuously. If you set this setting to 'no', you may
# experience incorrect TWS operation.
#
# NB: the default for this setting is 'no'. Since this will
# only work properly with TWS versions earlier than 974, you
# should explicitly set this to 'yes' for version 974 and later.
IbAutoClosedown=yes
AutoLogoffTime=
AutoRestartTime=
# =============================================================================
# 5. TWS Tidy Closedown Time
# =============================================================================
#
# NB: starting with TWS 974 this is no longer a useful option
# because both TWS and Gateway now have the same auto-logoff
# mechanism, and IBC can no longer avoid this.
# Specifies a time at which TWS will close down tidily, with no restart.
#
# Note that giving this setting a value does not change TWS's
# auto-logoff in any way: any setting will be additional to the
# TWS auto-logoff.
# There is little reason to use this setting. It is similar to AutoLogoffTime,
# but can include a day-of-the-week, whereas AutoLogoffTime and AutoRestartTime
# apply every day. So for example you could use ClosedownAt in conjunction with
# AutoRestartTime to shut down TWS on Friday evenings after the markets
# close, without it running on Saturday as well.
#
# To tell IBC to tidily close TWS at a specified time every
# day, set this value to <hh:mm>, for example:
@ -487,7 +679,7 @@ AcceptIncomingConnectionAction=reject
# no means the dialog remains on display and must be
# handled by the user.
AllowBlindTrading=yes
AllowBlindTrading=no
# Save Settings on a Schedule
@ -530,6 +722,26 @@ AllowBlindTrading=yes
SaveTwsSettingsAt=
# Confirm Crypto Currency Orders Automatically
# --------------------------------------------
#
# When you place an order for a cryptocurrency contract, a dialog is displayed
# asking you to confirm that you want to place the order, and notifying you
# that you are placing an order to trade cryptocurrency with Paxos, a New York
# limited trust company, and not at Interactive Brokers.
#
# transmit means that the order will be placed automatically, and the
# dialog will then be closed
#
# cancel means that the order will not be placed, and the dialog will
# then be closed
#
# manual means that IBC will take no action and the user must deal
# with the dialog
ConfirmCryptoCurrencyOrders=transmit
# =============================================================================
# 7. Settings Specific to Indian Versions of TWS
@ -566,13 +778,17 @@ DismissNSEComplianceNotice=yes
#
# The port number that IBC listens on for commands
# such as "STOP". DO NOT set this to the port number
# used for TWS API connections. There is no good reason
# to change this setting unless the port is used by
# some other application (typically another instance of
# IBC). The default value is 0, which tells IBC not to
# start the command server
# used for TWS API connections.
#
# The convention is to use 7462 for this port,
# but it must be set to a different value from any other
# IBC instance that might run at the same time.
#
# The default value is 0, which tells IBC not to start
# the command server
#CommandServerPort=7462
CommandServerPort=0
# Permitted Command Sources
@ -583,19 +799,19 @@ DismissNSEComplianceNotice=yes
# IBC. Commands can always be sent from the
# same host as IBC is running on.
ControlFrom=127.0.0.1
ControlFrom=
# Address for Receiving Commands
# ------------------------------
#
# Specifies the IP address on which the Command Server
# is so listen. For a multi-homed host, this can be used
# is to listen. For a multi-homed host, this can be used
# to specify that connection requests are only to be
# accepted on the specified address. The default is to
# accept connection requests on all local addresses.
BindAddress=127.0.0.1
BindAddress=
# Command Prompt
@ -621,7 +837,7 @@ CommandPrompt=
# information is sent. The default is that such information
# is not sent.
SuppressInfoMessages=no
SuppressInfoMessages=yes
@ -651,10 +867,10 @@ SuppressInfoMessages=no
# The LogStructureScope setting indicates which windows are
# eligible for structure logging:
#
# - if set to 'known', only windows that IBC recognizes
# are eligible - these are windows that IBC has some
# interest in monitoring, usually to take some action
# on the user's behalf;
# - (default value) if set to 'known', only windows that
# IBC recognizes are eligible - these are windows that
# IBC has some interest in monitoring, usually to take
# some action on the user's behalf;
#
# - if set to 'unknown', only windows that IBC does not
# recognize are eligible. Most windows displayed by
@ -667,9 +883,8 @@ SuppressInfoMessages=no
# - if set to 'all', then every window displayed by TWS
# is eligible.
#
# The default value is 'known'.
LogStructureScope=all
LogStructureScope=known
# When to Log Window Structure
@ -682,13 +897,15 @@ LogStructureScope=all
# structure of an eligible window the first time it
# is encountered;
#
# - if set to 'openclose', the structure is logged every
# time an eligible window is opened or closed;
#
# - if set to 'activate', the structure is logged every
# time an eligible window is made active;
#
# - if set to 'never' or 'no' or 'false', structure
# information is never logged.
# - (default value) if set to 'never' or 'no' or 'false',
# structure information is never logged.
#
# The default value is 'never'.
LogStructureWhen=never
@ -708,4 +925,3 @@ LogStructureWhen=never
#LogComponents=

View File

@ -121,6 +121,7 @@ async def bot_main():
# tick_throttle=10,
) as feed,
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn,
):
assert accounts
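
(a standalone sketch of the `collapse_eg()` + nursery combo added
above; the task body is a stand-in)::

    import trio
    import tractor

    async def main():
        async with (
            # collapse single-exc `ExceptionGroup`s for saner tbs
            tractor.trionics.collapse_eg(),
            trio.open_nursery() as tn,
        ):
            tn.start_soon(trio.sleep, 0)

    trio.run(main)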

27
flake.lock 100644
View File

@ -0,0 +1,27 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1765779637,
"narHash": "sha256-KJ2wa/BLSrTqDjbfyNx70ov/HdgNBCBBSQP3BIzKnv4=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "1306659b587dc277866c7b69eb97e5f07864d8c4",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

103
flake.nix 100644
View File

@ -0,0 +1,103 @@
# An "impure" template thx to `pyproject.nix`,
# https://pyproject-nix.github.io/pyproject.nix/templates.html#impure
# https://github.com/pyproject-nix/pyproject.nix/blob/master/templates/impure/flake.nix
{
description = "An impure `piker` overlay using `uv` with Nix(OS)";
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
};
outputs =
{ nixpkgs, ... }:
let
inherit (nixpkgs) lib;
forAllSystems = lib.genAttrs lib.systems.flakeExposed;
in
{
devShells = forAllSystems (
system:
let
pkgs = nixpkgs.legacyPackages.${system};
# do store-path extractions
qt6baseStorePath = lib.getLib pkgs.qt6.qtbase;
# ?TODO? can remove below since manual linking not needed?
# qt6QtWaylandStorePath = lib.getLib pkgs.qt6.qtwayland;
# XXX NOTE XXX, for now we overlay specific pkgs via
# a major-version-pinned-`cpython`
cpython = "python313";
pypkgs = pkgs."${cpython}Packages";
in
{
default = pkgs.mkShell {
packages = with pkgs; [
# XXX, ensure sh completions active!
bashInteractive
bash-completion
# dev utils
ruff
pypkgs.ruff
qt6.qtwayland
qt6.qtbase
uv
python313 # ?TODO^ how to set from `cpython` above?
pypkgs.pyqt6
pypkgs.pyqt6-sip
pypkgs.qtpy
pypkgs.qdarkstyle
pypkgs.rapidfuzz
];
shellHook = ''
# unmask to debug **this** dev-shell-hook
# set -e
# set qt-base/plugin path(s)
QTBASE_PATH="${qt6baseStorePath}/lib"
QT_PLUGIN_PATH="${qt6baseStorePath}/lib/qt-6/plugins"
QT_QPA_PLATFORM_PLUGIN_PATH="$QT_PLUGIN_PATH/platforms"
# link in Qt cc lib paths from <nixpkgs>
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QTBASE_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_PLUGIN_PATH"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$QT_QPA_PLATFORM_PLUGIN_PATH"
# link-in c++ stdlib for various AOT-ext-pkgs (numpy, etc.)
LD_LIBRARY_PATH="${pkgs.stdenv.cc.cc.lib}/lib:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
# RUNTIME-SETTINGS
#
# ------ Qt ------
# XXX, unmask to debug qt .so linking/loading deats
# export QT_DEBUG_PLUGINS=1
#
# ALSO, for *modern linux* DEs,
# - maybe set wayland-mode (TODO, parametrize this!)
# * a chosen wayland-mode shell-integration
export QT_QPA_PLATFORM="wayland"
export QT_WAYLAND_SHELL_INTEGRATION="xdg-shell"
# ------ uv ------
# - always use the ./py313/ venv-subdir
export UV_PROJECT_ENVIRONMENT="py313"
# sync project-env with all extras
uv sync --dev --all-extras --no-group lint
# ------ TIPS ------
# NOTE, to launch the py-venv installed `xonsh` (like @goodboy)
# run the `nix develop` cmd with,
# >> nix develop -c uv run xonsh
'';
};
}
);
};
}

View File

@ -33,7 +33,6 @@ from ._pos import (
Account,
load_account,
load_account_from_ledger,
open_pps,
open_account,
Position,
)
@ -42,7 +41,6 @@ from ._mktinfo import (
dec_digits,
digits_to_dec,
MktPair,
Symbol,
unpack_fqme,
_derivs as DerivTypes,
)
@ -60,7 +58,6 @@ __all__ = [
'Asset',
'MktPair',
'Position',
'Symbol',
'Transaction',
'TransactionLedger',
'dec_digits',
@ -70,7 +67,6 @@ __all__ = [
'load_account_from_ledger',
'mk_allocator',
'open_account',
'open_pps',
'open_trade_ledger',
'unpack_fqme',
'DerivTypes',
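
(with `open_pps()` and `Symbol` dropped from the pkg namespace, a
quick migration sketch; the broker/account names are placeholders)::

    from piker.accounting import open_account

    # was: `with open_pps('ib', 'paper') as acnt:`
    with open_account('ib', 'paper') as acnt:
        for bs_mktid, pos in acnt.pps.items():
            print(bs_mktid, pos)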

View File

@ -40,7 +40,7 @@ import tomli_w # for fast ledger writing
from piker.types import Struct
from piker import config
from ..log import get_logger
from piker.log import get_logger
from .calc import (
iter_by_dt,
)
@ -239,7 +239,9 @@ class TransactionLedger(UserDict):
symcache: SymbologyCache = self._symcache
towrite: dict[str, Any] = {}
for tid, txdict in self.tx_sort(self.data.copy()):
for tid, txdict in self.tx_sort(
self.data.copy()
):
# write blank-str expiry for non-expiring assets
if (
'expiry' in txdict
@ -377,7 +379,7 @@ def open_trade_ledger(
account,
dirpath=_fp,
)
cpy = ledger_dict.copy()
cpy: dict = ledger_dict.copy()
# XXX NOTE: if not provided presume we are being called from
# sync code and need to maybe run `trio` to generate..
@ -406,7 +408,13 @@ def open_trade_ledger(
account=account,
mod=mod,
symcache=symcache,
tx_sort=getattr(mod, 'tx_sort', tx_sort),
# NOTE: allow backends to provide custom ledger sorting
tx_sort=getattr(
mod,
'tx_sort',
tx_sort,
),
)
try:
yield ledger
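
(re the backend-custom `tx_sort` hook above, a sketch of what a
broker-backend mod might export; the `time_ns` field-name is
hypothetical)::

    from functools import partial
    from pendulum import from_timestamp
    from piker.accounting.calc import iter_by_dt

    # sort txn records by a (made up) ns-epoch field instead of
    # the default parser table
    tx_sort = partial(
        iter_by_dt,
        parsers={'time_ns': lambda v: from_timestamp(float(v) / 1e9)},
    )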

View File

@ -305,8 +305,8 @@ class MktPair(Struct, frozen=True):
# config right?
# src_type: AssetTypeName
# for derivs, info describing contract, egs.
# strike price, call or put, swap type, exercise model, etc.
# for derivs, info describing contract, egs. strike price, call
# or put, swap type, exercise model, etc.
contract_info: list[str] | None = None
# TODO: rename to sectype since all of these can
@ -327,7 +327,11 @@ class MktPair(Struct, frozen=True):
) -> dict:
d = super().to_dict(**kwargs)
d['src'] = self.src.to_dict(**kwargs)
d['dst'] = self.dst.to_dict(**kwargs)
if not isinstance(self.dst, str):
d['dst'] = self.dst.to_dict(**kwargs)
else:
d['dst'] = str(self.dst)
d['price_tick'] = str(self.price_tick)
d['size_tick'] = str(self.size_tick)
@ -349,11 +353,16 @@ class MktPair(Struct, frozen=True):
Constructor for a received msg-dict normally received over IPC.
'''
dst_asset_msg = msg.pop('dst')
dst = Asset.from_msg(dst_asset_msg) # .copy()
if not isinstance(
dst_asset_msg := msg.pop('dst'),
str,
):
dst: Asset = Asset.from_msg(dst_asset_msg) # .copy()
else:
dst: str = dst_asset_msg
src_asset_msg = msg.pop('src')
src = Asset.from_msg(src_asset_msg) # .copy()
src_asset_msg: dict = msg.pop('src')
src: Asset = Asset.from_msg(src_asset_msg) # .copy()
# XXX NOTE: ``msgspec`` can encode `Decimal` but it doesn't
# decide to it by default since we aren't spec-cing these
@ -381,8 +390,8 @@ class MktPair(Struct, frozen=True):
cls,
fqme: str,
price_tick: float | str,
size_tick: float | str,
price_tick: float|str,
size_tick: float|str,
bs_mktid: str,
broker: str | None = None,
@ -668,90 +677,3 @@ def unpack_fqme(
# '.'.join([mkt_ep, venue]),
suffix,
)
class Symbol(Struct):
'''
I guess this is some kinda container thing for dealing with
all the different meta-data formats from brokers?
'''
key: str
broker: str = ''
venue: str = ''
# precision descriptors for price and vlm
tick_size: Decimal = Decimal('0.01')
lot_tick_size: Decimal = Decimal('0.0')
suffix: str = ''
broker_info: dict[str, dict[str, Any]] = {}
@classmethod
def from_fqme(
cls,
fqsn: str,
info: dict[str, Any],
) -> Symbol:
broker, mktep, venue, suffix = unpack_fqme(fqsn)
tick_size = info.get('price_tick_size', 0.01)
lot_size = info.get('lot_tick_size', 0.0)
return Symbol(
broker=broker,
key=mktep,
tick_size=tick_size,
lot_tick_size=lot_size,
venue=venue,
suffix=suffix,
broker_info={broker: info},
)
@property
def type_key(self) -> str:
return list(self.broker_info.values())[0]['asset_type']
@property
def tick_size_digits(self) -> int:
return float_digits(self.tick_size)
@property
def lot_size_digits(self) -> int:
return float_digits(self.lot_tick_size)
@property
def price_tick(self) -> Decimal:
return Decimal(str(self.tick_size))
@property
def size_tick(self) -> Decimal:
return Decimal(str(self.lot_tick_size))
@property
def broker(self) -> str:
return list(self.broker_info.keys())[0]
@property
def fqme(self) -> str:
return maybe_cons_tokens([
self.key, # final "pair name" (eg. qqq[/usd], btcusdt)
self.venue,
self.suffix, # includes expiry and other con info
self.broker,
])
def quantize(
self,
size: float,
) -> Decimal:
digits = float_digits(self.lot_tick_size)
return Decimal(size).quantize(
Decimal(f'1.{"0".ljust(digits, "0")}'),
rounding=ROUND_HALF_EVEN
)
# NOTE: when cast to `str` return fqme
def __str__(self) -> str:
return self.fqme
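
(the `Symbol` type removed above is superseded by `MktPair`; a rough
equivalent via the `from_fqme()` constructor shown in this same hunk,
with made-up tick values)::

    from piker.accounting import MktPair

    mkt = MktPair.from_fqme(
        'btcusdt.spot.binance',
        price_tick='0.01',
        size_tick='0.00001',
        bs_mktid='BTCUSDT',  # backend-system's market id
    )
    print(mkt.fqme)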

View File

@ -30,7 +30,8 @@ from types import ModuleType
from typing import (
Any,
Iterator,
Generator
Generator,
TYPE_CHECKING,
)
import pendulum
@ -59,8 +60,10 @@ from ..clearing._messages import (
BrokerdPosition,
)
from piker.types import Struct
from piker.data._symcache import SymbologyCache
from ..log import get_logger
from piker.log import get_logger
if TYPE_CHECKING:
from piker.data._symcache import SymbologyCache
log = get_logger(__name__)
@ -353,17 +356,20 @@ class Position(Struct):
) -> bool:
'''
Update clearing table by calculating the rolling ppu and
(accumulative) size in both the clears entry and local
attrs state.
(accumulative) size in both the clears entry and local attrs
state.
Inserts are always done in datetime sorted order.
'''
# added: bool = False
tid: str = t.tid
if tid in self._events:
log.warning(f'{t} is already added?!')
# return added
log.debug(
f'Txn is already added?\n'
f'\n'
f'{t}\n'
)
return False
# TODO: apparently this IS possible with a dict but not
# common and probably not that beneficial unless we're also
@ -444,6 +450,12 @@ class Position(Struct):
# def suggest_split(self) -> float:
# ...
# ?TODO, for sending rendered state over the wire?
# def summary(self) -> PositionSummary:
# do minimal conversion to a subset of fields
# currently defined in `.clearing._messages.BrokerdPosition`
class Account(Struct):
'''
@ -487,12 +499,23 @@ class Account(Struct):
def update_from_ledger(
self,
ledger: TransactionLedger | dict[str, Transaction],
ledger: TransactionLedger|dict[str, Transaction],
cost_scalar: float = 2,
symcache: SymbologyCache | None = None,
symcache: SymbologyCache|None = None,
_mktmap_table: dict[str, MktPair] | None = None,
only_require: list[str]|True = True,
# ^list of fqmes that are "required" to be processed from
# this ledger pass; we often don't care about others and
# definitely shouldn't always error in such cases.
# (eg. broker backend loaded that doesn't yet support the
# symcache but also, inside the paper engine we don't ad-hoc
# request `get_mkt_info()` for every symbol in the ledger,
# only the one for which we're simulating against).
# TODO, not sure if there's a better soln for this, ideally
# all backends get symcache support afap i guess..
) -> dict[str, Position]:
'''
Update the internal `.pps[str, Position]` table from input
@ -535,14 +558,40 @@ class Account(Struct):
if _mktmap_table is None:
raise
required: bool = (
only_require is True
or (
only_require is not True
and
fqme in only_require
)
)
# XXX: caller is allowed to provide a fallback
# mktmap table for the case where a new position is
# being added and the preloaded symcache didn't
# have this entry prior (eg. with frickin IB..)
mkt = _mktmap_table[fqme]
if (
not (mkt := _mktmap_table.get(fqme))
and
required
):
raise
elif not required:
continue
else:
# should be an entry retrieved somewhere
assert mkt
if not (pos := pps.get(bs_mktid)):
assert isinstance(
mkt,
MktPair,
)
# if no existing pos, allocate fresh one.
pos = pps[bs_mktid] = Position(
mkt=mkt,
@ -651,7 +700,7 @@ class Account(Struct):
def write_config(self) -> None:
'''
Write the current account state to the user's account TOML file, normally
something like ``pps.toml``.
something like `pps.toml`.
'''
# TODO: show diff output?
@ -691,7 +740,7 @@ class Account(Struct):
else:
# TODO: we reallly need a diff set of
# loglevels/colors per subsys.
log.warning(
log.debug(
f'Recent position for {fqme} was closed!'
)
@ -705,7 +754,7 @@ class Account(Struct):
# XXX WTF: if we use a tomlkit.Integer here we get this
# super weird --1 thing going on for cumsize!?1!
# NOTE: the fix was to always float() the size value loaded
# in open_pps() below!
# in open_account() below!
config.write(
config=self.conf,
path=self.conf_path,
@ -889,7 +938,6 @@ def open_account(
clears_table['dt'] = dt
trans.append(Transaction(
fqme=bs_mktid,
# sym=mkt,
bs_mktid=bs_mktid,
tid=tid,
# XXX: not sure why sometimes these are loaded as
@ -912,11 +960,22 @@ def open_account(
):
expiry: pendulum.DateTime = pendulum.parse(expiry)
pp = pp_objs[bs_mktid] = Position(
mkt,
split_ratio=split_ratio,
bs_mktid=bs_mktid,
)
# !XXX, should never be duplicates over
# a backend-(broker)-system's unique market-IDs!
if pos := pp_objs.get(bs_mktid):
if mkt != pos.mkt:
log.warning(
f'Duplicated position but diff `MktPair.fqme` ??\n'
f'bs_mktid: {bs_mktid!r}\n'
f'pos.mkt: {pos.mkt}\n'
f'mkt: {mkt}\n'
)
else:
pos = pp_objs[bs_mktid] = Position(
mkt,
split_ratio=split_ratio,
bs_mktid=bs_mktid,
)
# XXX: super critical, we need to be sure to include
# all pps.toml clears to avoid reusing clears that were
@ -924,8 +983,13 @@ def open_account(
# state, since today's records may have already been
# processed!
for t in trans:
pp.add_clear(t)
added: bool = pos.add_clear(t)
if not added:
log.warning(
f'Txn already recorded in pp ??\n'
f'\n'
f'{t}\n'
)
try:
yield acnt
finally:
@ -933,20 +997,6 @@ def open_account(
acnt.write_config()
# TODO: drop the old name and THIS!
@cm
def open_pps(
*args,
**kwargs,
) -> Generator[Account, None, None]:
log.warning(
'`open_pps()` is now deprecated!\n'
'Please use `with open_account() as cnt:`'
)
with open_account(*args, **kwargs) as acnt:
yield acnt
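
(and a sketch of the new `only_require` filter to
`Account.update_from_ledger()` added above; broker, account and fqme
values are placeholders)::

    from piker.accounting import (
        open_account,
        open_trade_ledger,
    )

    with (
        open_trade_ledger('binance', 'paper') as ledger,
        open_account('binance', 'paper') as acnt,
    ):
        # only process the mkt we care about; other fqmes in the
        # ledger are skipped instead of raising on a missing
        # `MktPair` lookup.
        pps = acnt.update_from_ledger(
            ledger,
            only_require=['btcusdt.spot.binance'],
        )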
def load_account_from_ledger(
brokername: str,

View File

@ -22,7 +22,9 @@ you know when you're losing money (if possible) XD
from __future__ import annotations
from collections.abc import ValuesView
from contextlib import contextmanager as cm
from functools import partial
from math import copysign
from pprint import pformat
from typing import (
Any,
Callable,
@ -30,6 +32,7 @@ from typing import (
TYPE_CHECKING,
)
from tractor.devx import maybe_open_crash_handler
import polars as pl
from pendulum import (
DateTime,
@ -37,12 +40,16 @@ from pendulum import (
parse,
)
from ..log import get_logger
if TYPE_CHECKING:
from ._ledger import (
Transaction,
TransactionLedger,
)
log = get_logger(__name__)
def ppu(
clears: Iterator[Transaction],
@ -238,6 +245,9 @@ def iter_by_dt(
def dyn_parse_to_dt(
tx: tuple[str, dict[str, Any]] | Transaction,
debug: bool = False,
_invalid: list|None = None,
) -> DateTime:
# handle `.items()` inputs
@ -250,33 +260,90 @@ def iter_by_dt(
# get best parser for this record..
for k in parsers:
if (
isdict and k in tx
or getattr(tx, k, None)
(v := getattr(tx, k, None))
or
(
isdict
and
(v := tx.get(k))
)
):
v = tx[k] if isdict else tx.dt
assert v is not None, f'No valid value for `{k}`!?'
# only call parser on the value if not None from
# the `parsers` table above (when NOT using
# `.get()`), otherwise pass through the value and
# sort on it directly
if (
not isinstance(v, DateTime)
and (parser := parsers.get(k))
and
(parser := parsers.get(k))
):
return parser(v)
ret = parser(v)
else:
return v
ret = v
return ret
else:
log.debug(
f'Parser-field not found in txn\n'
f'\n'
f'parser-field: {k!r}\n'
f'txn: {tx!r}\n'
f'\n'
f'Trying next..\n'
)
continue
# XXX: we should never really get here bc it means some kinda
# bad txn-record (field) data..
#
# -> set the `debug_mode = True` if you want to trace such
# cases from REPL ;)
else:
# XXX: should never get here..
breakpoint()
# XXX: we should really never get here..
# only if a ledger record has no expected sort(able)
# field will we likely hit this.. like with ze IB.
# if no sortable field just deliver epoch?
log.warning(
'No (time) sortable field for TXN:\n'
f'{tx!r}\n'
)
report: str = (
f'No supported time-field found in txn !?\n'
f'\n'
f'supported-time-fields: {parsers!r}\n'
f'\n'
f'txn: {tx!r}\n'
)
if debug:
with maybe_open_crash_handler(
pdb=debug,
raise_on_exit=False,
):
raise ValueError(report)
else:
log.error(report)
entry: tuple[str, dict] | Transaction
if _invalid is not None:
_invalid.append(tx)
return from_timestamp(0.)
entry: tuple[str, dict]|Transaction
invalid: list = []
for entry in sorted(
records,
key=key or dyn_parse_to_dt,
key=key or partial(
dyn_parse_to_dt,
_invalid=invalid,
),
):
if entry in invalid:
log.warning(
f'Ignoring txn w invalid timestamp ??\n'
f'{pformat(entry)}\n'
)
continue
# NOTE the type sig above; either pairs or txns B)
yield entry
@ -339,6 +406,7 @@ def open_ledger_dfs(
acctname: str,
ledger: TransactionLedger | None = None,
debug_mode: bool = False,
**kwargs,
@ -353,8 +421,10 @@ def open_ledger_dfs(
can update the ledger on exit.
'''
from tractor._debug import open_crash_handler
with open_crash_handler():
with maybe_open_crash_handler(
pdb=debug_mode,
# raise_on_exit=False,
):
if not ledger:
import time
from ._ledger import open_trade_ledger
@ -446,7 +516,7 @@ def ledger_to_dfs(
df = dfs[key] = ldf.with_columns([
pl.cumsum('size').alias('cumsize'),
pl.cum_sum('size').alias('cumsize'),
# amount of source asset "sent" (via buy txns in
# the market) to acquire the dst asset, PER txn.
@ -461,7 +531,7 @@ def ledger_to_dfs(
]).with_columns([
# rolling balance in src asset units
(pl.col('dst_bot').cumsum() * -1).alias('src_balance'),
(pl.col('dst_bot').cum_sum() * -1).alias('src_balance'),
# "position operation type" in terms of increasing the
# amount in the dst asset (entering) or decreasing the
@ -603,7 +673,7 @@ def ledger_to_dfs(
# cost that was included in the least-recently
# entered txn that is still part of the current CSi
# set.
# => we look up the cost-per-unit cumsum and apply
# => we look up the cost-per-unit cum_sum and apply
# if over the current txn size (by multiplication)
# and then reverse that previously applied cost on
# the txn_cost for this record.
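# Aside: the `cumsum` -> `cum_sum` renames through this hunk track
# the polars >=0.19 expression API; a quick standalone sanity check
# (assumes a recent polars install):
import polars as pl

df = pl.DataFrame({'size': [1.0, -0.5, 2.0]})
df = df.with_columns(
    pl.col('size').cum_sum().alias('cumsize'),
)
assert df['cumsize'].to_list() == [1.0, 0.5, 2.5]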

View File

@ -300,7 +300,8 @@ def disect(
assert not df.is_empty()
# muck around in pdbp REPL
breakpoint()
# tractor.devx.mk_pdb().set_trace()
# breakpoint()
# TODO: we REALLY need a better console REPL for this
# kinda thing..

View File

@ -50,7 +50,7 @@ __brokers__: list[str] = [
'binance',
'ib',
'kraken',
'kucoin'
'kucoin',
# broken but used to work
# 'questrade',
@ -71,7 +71,7 @@ def get_brokermod(brokername: str) -> ModuleType:
Return the imported broker module by name.
'''
module = import_module('.' + brokername, 'piker.brokers')
module: ModuleType = import_module('.' + brokername, 'piker.brokers')
# we only allow monkeying because it's for internal keying
module.name = module.__name__.split('.')[-1]
return module
@ -98,13 +98,14 @@ async def open_cached_client(
If one has not been setup do it and cache it.
'''
brokermod = get_brokermod(brokername)
brokermod: ModuleType = get_brokermod(brokername)
# TODO: make abstract or `typing.Protocol`
# client: Client
async with maybe_open_context(
acm_func=brokermod.get_client,
kwargs=kwargs,
) as (cache_hit, client):
if cache_hit:
log.runtime(f'Reusing existing {client}')

View File

@ -96,7 +96,10 @@ async def _setup_persistent_brokerd(
# - `open_symbol_search()`
# NOTE: see ep invocation details inside `.data.feed`.
try:
async with trio.open_nursery() as service_nursery:
async with (
tractor.trionics.collapse_eg(),
trio.open_nursery() as service_nursery
):
bus: _FeedsBus = feed.get_feed_bus(
brokername,
service_nursery,
@ -179,9 +182,6 @@ def broker_init(
subpath: str = f'{modpath}.{submodname}'
enabled.append(subpath)
# TODO XXX: DO WE NEED THIS?
# enabled.append('piker.data.feed')
return (
brokermod,
start_actor_kwargs, # to `ActorNursery.start_actor()`

View File

@ -18,10 +18,11 @@
Handy cross-broker utils.
"""
from __future__ import annotations
from functools import partial
import json
import asks
import httpx
import logging
from ..log import (
@ -50,6 +51,7 @@ class SymbolNotFound(BrokerError):
"Symbol not found by broker search"
# TODO: these should probably be moved to `.tsp/.data`?
class NoData(BrokerError):
'''
Symbol data not permitted or no data
@ -59,14 +61,15 @@ class NoData(BrokerError):
def __init__(
self,
*args,
frame_size: int = 1000,
info: dict|None = None,
) -> None:
super().__init__(*args)
self.info: dict|None = info
# when raised, machinery can check if the backend
# set a "frame size" for doing datetime calcs.
self.frame_size: int = 1000
# self.frame_size: int = 1000
class DataUnavailable(BrokerError):
@ -88,16 +91,18 @@ class DataThrottle(BrokerError):
def resproc(
resp: asks.response_objects.Response,
resp: httpx.Response,
log: logging.Logger,
return_json: bool = True,
log_resp: bool = False,
) -> asks.response_objects.Response:
"""Process response and return its json content.
) -> httpx.Response:
'''
Process response and return its json content.
Raise the appropriate error on non-200 OK responses.
"""
'''
if not resp.status_code == 200:
raise BrokerError(resp.text)
try:

View File

@ -1,8 +1,8 @@
# piker: trading gear for hackers
# Copyright (C)
# Guillermo Rodriguez (aka ze jefe)
# Tyler Goodlet
# (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -25,6 +25,7 @@ from __future__ import annotations
from collections import ChainMap
from contextlib import (
asynccontextmanager as acm,
AsyncExitStack,
)
from datetime import datetime
from pprint import pformat
@ -41,8 +42,7 @@ import trio
from pendulum import (
now,
)
import asks
from fuzzywuzzy import process as fuzzy
import httpx
import numpy as np
from piker import config
@ -52,9 +52,13 @@ from piker.clearing._messages import (
from piker.accounting import (
Asset,
digits_to_dec,
MktPair,
)
from piker.types import Struct
from piker.data import def_iohlcv_fields
from piker.data import (
def_iohlcv_fields,
match_from_pairs,
)
from piker.brokers import (
resproc,
SymbolNotFound,
@ -64,7 +68,6 @@ from .venues import (
PAIRTYPES,
Pair,
MarketType,
_spot_url,
_futes_url,
_testnet_futes_url,
@ -74,16 +77,18 @@ from .venues import (
log = get_logger('piker.brokers.binance')
def get_config() -> dict:
def get_config() -> dict[str, Any]:
conf: dict
path: Path
conf, path = config.load(touch_if_dne=True)
section = conf.get('binance')
conf, path = config.load(
conf_name='brokers',
touch_if_dne=True,
)
section: dict = conf.get('binance')
if not section:
log.warning(f'No config section found for binance in {path}')
log.warning(
f'No config section found for binance in {path}'
)
return {}
return section
@ -139,7 +144,7 @@ def binance_timestamp(
class Client:
'''
Async ReST API client using ``trio`` + ``asks`` B)
Async ReST API client using `trio` + `httpx` B)
Supports all of the spot, margin and futures endpoints depending
on method.
@ -148,10 +153,17 @@ class Client:
def __init__(
self,
venue_sessions: dict[
str, # venue key
tuple[httpx.AsyncClient, str] # session, eps path
],
conf: dict[str, Any],
# TODO: change this to `Client.[mkt_]venue: MarketType`?
mkt_mode: MarketType = 'spot',
) -> None:
self.conf = conf
# build out pair info tables for each market type
# and wrap in a chain-map view for search / query.
self._spot_pairs: dict[str, Pair] = {} # spot info table
@ -178,44 +190,13 @@ class Client:
# market symbols for use by search. See `.exch_info()`.
self._pairs: ChainMap[str, Pair] = ChainMap()
# spot EPs sesh
self._sesh = asks.Session(connections=4)
self._sesh.base_location: str = _spot_url
# spot testnet
self._test_sesh: asks.Session = asks.Session(connections=4)
self._test_sesh.base_location: str = _testnet_spot_url
# margin and extended spot endpoints session.
self._sapi_sesh = asks.Session(connections=4)
self._sapi_sesh.base_location: str = _spot_url
# futes EPs sesh
self._fapi_sesh = asks.Session(connections=4)
self._fapi_sesh.base_location: str = _futes_url
# futes testnet
self._test_fapi_sesh: asks.Session = asks.Session(connections=4)
self._test_fapi_sesh.base_location: str = _testnet_futes_url
# global client "venue selection" mode.
# set this when you want to switch venues and not have to
# specify the venue for the next request.
self.mkt_mode: MarketType = mkt_mode
# per 8
self.venue_sesh: dict[
str, # venue key
tuple[asks.Session, str] # session, eps path
] = {
'spot': (self._sesh, '/api/v3/'),
'spot_testnet': (self._test_sesh, '/fapi/v1/'),
'margin': (self._sapi_sesh, '/sapi/v1/'),
'usdtm_futes': (self._fapi_sesh, '/fapi/v1/'),
'usdtm_futes_testnet': (self._test_fapi_sesh, '/fapi/v1/'),
# 'futes_coin': self._dapi, # TODO
}
# per-mkt-venue API client table
self.venue_sesh = venue_sessions
# lookup for going from `.mkt_mode: str` to the config
# subsection `key: str`
@ -230,40 +211,6 @@ class Client:
'futes': ['usdtm_futes'],
}
# for creating API keys see,
# https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
self.conf: dict = get_config()
for key, subconf in self.conf.items():
if api_key := subconf.get('api_key', ''):
venue_keys: list[str] = self.confkey2venuekeys[key]
venue_key: str
sesh: asks.Session
for venue_key in venue_keys:
sesh, _ = self.venue_sesh[venue_key]
api_key_header: dict = {
# taken from official:
# https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
"Content-Type": "application/json;charset=utf-8",
# TODO: prolly should just always query and copy
# in the real latest ver?
"User-Agent": "binance-connector/6.1.6smbz6",
"X-MBX-APIKEY": api_key,
}
sesh.headers.update(api_key_header)
# if `.use_tesnet = true` in the config then
# also add headers for the testnet session which
# will be used for all order control
if subconf.get('use_testnet', False):
testnet_sesh, _ = self.venue_sesh[
venue_key + '_testnet'
]
testnet_sesh.headers.update(api_key_header)
def _mk_sig(
self,
data: dict,
@ -282,7 +229,6 @@ class Client:
'to define the creds for auth-ed endpoints!?'
)
# XXX: Info on security and authentification
# https://binance-docs.github.io/apidocs/#endpoint-security-type
if not (api_secret := subconf.get('api_secret')):
@ -311,7 +257,7 @@ class Client:
params: dict,
method: str = 'get',
venue: str | None = None, # if None use `.mkt_mode` state
venue: str|None = None, # if None use `.mkt_mode` state
signed: bool = False,
allow_testnet: bool = False,
@ -322,8 +268,9 @@ class Client:
- /fapi/v3/ USD-M FUTURES, or
- /api/v3/ SPOT/MARGIN
account/market endpoint request depending on either passed in `venue: str`
or the current setting `.mkt_mode: str` setting, default `'spot'`.
account/market endpoint request depending on either passed in
`venue: str` or the current setting `.mkt_mode: str` setting,
default `'spot'`.
Docs per venue API:
@ -352,9 +299,6 @@ class Client:
venue=venue_key,
)
sesh: asks.Session
path: str
# Check if we're configured to route order requests to the
# venue equivalent's testnet.
use_testnet: bool = False
@ -379,11 +323,12 @@ class Client:
# ctl machinery B)
venue_key += '_testnet'
sesh, path = self.venue_sesh[venue_key]
meth: Callable = getattr(sesh, method)
client: httpx.AsyncClient
path: str
client, path = self.venue_sesh[venue_key]
meth: Callable = getattr(client, method)
resp = await meth(
path=path + endpoint,
url=path + endpoint,
params=params,
timeout=float('inf'),
)
@ -425,7 +370,20 @@ class Client:
item['filters'] = filters
pair_type: Type = PAIRTYPES[venue]
pair: Pair = pair_type(**item)
try:
pair: Pair = pair_type(**item)
except Exception as e:
e.add_note(
f'\n'
f'New or removed field we need to codify!\n'
f'pair-type: {pair_type!r}\n'
f'\n'
f"Don't panic, prolly stupid binance changed their symbology schema again..\n"
f'Check out their API docs here:\n'
f'\n'
f'https://binance-docs.github.io/apidocs/spot/en/#exchange-information\n'
)
raise
pair_table[pair.symbol.upper()] = pair
# update an additional top-level-cross-venue-table
@ -520,7 +478,9 @@ class Client:
'''
pair_table: dict[str, Pair] = self._venue2pairs[
venue or self.mkt_mode
venue
or
self.mkt_mode
]
if (
expiry
@ -539,9 +499,9 @@ class Client:
venues: list[str] = [venue]
# batch per-venue download of all exchange infos
async with trio.open_nursery() as rn:
async with trio.open_nursery() as tn:
for ven in venues:
rn.start_soon(
tn.start_soon(
self._cache_pairs,
ven,
)
@ -549,7 +509,7 @@ class Client:
if sym:
return pair_table[sym]
else:
self._pairs
return self._pairs
async def get_assets(
self,
@ -594,20 +554,32 @@ class Client:
) -> dict[str, Any]:
fq_pairs: dict = await self.exch_info()
fq_pairs: dict[str, Pair] = await self.exch_info()
matches = fuzzy.extractBests(
pattern,
fq_pairs,
# TODO: cache this list like we were in
# `open_symbol_search()`?
# keys: list[str] = list(fq_pairs)
return match_from_pairs(
pairs=fq_pairs,
query=pattern.upper(),
score_cutoff=50,
)
# repack in dict form
return {item[0]['symbol']: item[0]
for item in matches}
def pair2venuekey(
self,
pair: Pair,
) -> str:
return {
'USDTM': 'usdtm_futes',
'SPOT': 'spot',
# 'COINM': 'coin_futes',
# ^-TODO-^ bc someone might want it..?
}[pair.venue]
async def bars(
self,
symbol: str,
mkt: MktPair,
start_dt: datetime | None = None,
end_dt: datetime | None = None,
@ -637,16 +609,20 @@ class Client:
start_time = binance_timestamp(start_dt)
end_time = binance_timestamp(end_dt)
bs_pair: Pair = self._pairs[mkt.bs_fqme.upper()]
# https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data
bars = await self._api(
'klines',
params={
'symbol': symbol.upper(),
# NOTE: always query using their native symbology!
'symbol': mkt.bs_mktid.upper(),
'interval': '1m',
'startTime': start_time,
'endTime': end_time,
'limit': limit
},
venue=self.pair2venuekey(bs_pair),
allow_testnet=False,
)
new_bars: list[tuple] = []
@ -963,17 +939,148 @@ class Client:
await self.close_listen_key(key)
_venue_urls: dict[str, str] = {
'spot': (
_spot_url,
'/api/v3/',
),
'spot_testnet': (
_testnet_spot_url,
'/fapi/v1/'
),
# margin and extended spot endpoints session.
# TODO: did this ever get implemented fully?
# 'margin': (
# _spot_url,
# '/sapi/v1/'
# ),
'usdtm_futes': (
_futes_url,
'/fapi/v1/',
),
'usdtm_futes_testnet': (
_testnet_futes_url,
'/fapi/v1/',
),
# TODO: for anyone who actually needs it ;P
# 'coin_futes': ()
}
def init_api_keys(
client: Client,
conf: dict[str, Any],
) -> None:
'''
Set up per-venue API keys on each http client according to the
user's `brokers.toml`.
For ex, to use spot-testnet and live usdt futures APIs:
```toml
[binance]
# spot test net
spot.use_testnet = true
spot.api_key = '<spot_api_key_from_binance_account>'
spot.api_secret = '<spot_api_key_password>'
# futes live
futes.use_testnet = false
accounts.usdtm = 'futes'
futes.api_key = '<futes_api_key_from_binance>'
futes.api_secret = '<futes_api_key_password>'
# if uncommented will use the built-in paper engine and not
# connect to `binance` API servers for order ctl.
# accounts.paper = 'paper'
```
'''
for key, subconf in conf.items():
if api_key := subconf.get('api_key', ''):
venue_keys: list[str] = client.confkey2venuekeys[key]
venue_key: str
venue_api: httpx.AsyncClient  # don't shadow the `Client` param!
for venue_key in venue_keys:
venue_api, _ = client.venue_sesh[venue_key]
api_key_header: dict = {
# taken from official:
# https://github.com/binance/binance-futures-connector-python/blob/main/binance/api.py#L47
"Content-Type": "application/json;charset=utf-8",
# TODO: prolly should just always query and copy
# in the real latest ver?
"User-Agent": "binance-connector/6.1.6smbz6",
"X-MBX-APIKEY": api_key,
}
venue_api.headers.update(api_key_header)
# if `.use_tesnet = true` in the config then
# also add headers for the testnet session which
# will be used for all order control
if subconf.get('use_testnet', False):
testnet_sesh, _ = client.venue_sesh[
venue_key + '_testnet'
]
testnet_sesh.headers.update(api_key_header)
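# In isolation the header wiring above reduces to the following
# (std `httpx` API only; the base url and key value are placeholder
# assumptions):
import httpx

api = httpx.AsyncClient(base_url='https://fapi.binance.com')
api.headers.update({
    'Content-Type': 'application/json;charset=utf-8',
    'X-MBX-APIKEY': '<your_api_key>',
})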
@acm
async def get_client() -> Client:
async def get_client(
mkt_mode: MarketType = 'spot',
) -> Client:
'''
Construct a single `piker` client which composes multiple
underlying venue-specific API clients for both live and test
networks.
client = Client()
await client.exch_info()
log.info(
f'{client} in {client.mkt_mode} mode: caching exchange infos..\n'
'Cached multi-market pairs:\n'
f'spot: {len(client._spot_pairs)}\n'
f'usdtm_futes: {len(client._ufutes_pairs)}\n'
f'Total: {len(client._pairs)}\n'
)
'''
venue_sessions: dict[
str, # venue key
tuple[httpx.AsyncClient, str] # session, eps path
] = {}
async with AsyncExitStack() as client_stack:
for name, (base_url, path) in _venue_urls.items():
api: httpx.AsyncClient = await client_stack.enter_async_context(
httpx.AsyncClient(
base_url=base_url,
# headers={},
yield client
# TODO: is there a way to numerate this?
# https://www.python-httpx.org/advanced/clients/#why-use-a-client
# connections=4
)
)
venue_sessions[name] = (
api,
path,
)
conf: dict[str, Any] = get_config()
# for creating API keys see,
# https://www.binance.com/en/support/faq/how-to-create-api-keys-on-binance-360002502072
client = Client(
venue_sessions=venue_sessions,
conf=conf,
mkt_mode=mkt_mode,
)
init_api_keys(
client=client,
conf=conf,
)
fq_pairs: dict[str, Pair] = await client.exch_info()
assert fq_pairs
log.info(
f'Loaded multi-venue `Client` in mkt_mode={client.mkt_mode!r}\n\n'
f'Symbology Summary:\n'
f'------ - ------\n'
f'spot: {len(client._spot_pairs)}\n'
f'usdtm_futes: {len(client._ufutes_pairs)}\n'
'------ - ------\n'
f'total: {len(client._pairs)}\n'
)
yield client
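# Hypothetical usage sketch for the refactored acm above: enter the
# multi-venue client and read back the cached symbology (needs net
# access to binance, hence the commented entrypoint).
import trio

async def _demo() -> None:
    async with get_client(mkt_mode='usdtm_futes') as client:
        pairs = await client.exch_info()
        print(f'cached {len(pairs)} pairs')

# trio.run(_demo)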

View File

@ -264,15 +264,20 @@ async def open_trade_dialog(
# do a open_symcache() call.. though maybe we can hide
# this in a new async version of open_account()?
async with open_cached_client('binance') as client:
subconf: dict = client.conf[venue_name]
use_testnet = subconf.get('use_testnet', False)
subconf: dict|None = client.conf.get(venue_name)
# XXX: if no futes.api_key or spot.api_key has been set we
# always fall back to the paper engine!
if not subconf.get('api_key'):
if (
not subconf
or
not subconf.get('api_key')
):
await ctx.started('paper')
return
use_testnet: bool = subconf.get('use_testnet', False)
async with (
open_cached_client('binance') as client,
):
@ -435,6 +440,7 @@ async def open_trade_dialog(
# - ledger: TransactionLedger
async with (
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn,
ctx.open_stream() as ems_stream,
):

View File

@ -42,12 +42,12 @@ from trio_typing import TaskStatus
from pendulum import (
from_timestamp,
)
from fuzzywuzzy import process as fuzzy
import numpy as np
import tractor
from piker.brokers import (
open_cached_client,
NoData,
)
from piker._cacheables import (
async_lifo_cache,
@ -94,13 +94,15 @@ class L1(Struct):
# validation type
# https://developers.binance.com/docs/derivatives/usds-margined-futures/websocket-market-streams/Aggregate-Trade-Streams#response-example
class AggTrade(Struct, frozen=True):
e: str # Event type
E: int # Event time
s: str # Symbol
a: int # Aggregate trade ID
p: float # Price
q: float # Quantity
q: float # Quantity with all the market trades
nq: float # Normal quantity without the trades involving RPI orders
f: int # First trade ID
l: int # noqa Last trade ID
T: int # Trade time
@ -110,6 +112,7 @@ class AggTrade(Struct, frozen=True):
async def stream_messages(
ws: NoBsWs,
) -> AsyncGenerator[NoBsWs, dict]:
# TODO: match syntax here!
@ -220,6 +223,8 @@ def make_sub(pairs: list[str], sub_name: str, uid: int) -> dict[str, str]:
}
# TODO, why aren't frame resp `log.info()`s showing in upstream
# code?!
@acm
async def open_history_client(
mkt: MktPair,
@ -252,24 +257,30 @@ async def open_history_client(
else:
client.mkt_mode = 'spot'
# NOTE: always query using their native symbology!
mktid: str = mkt.bs_mktid
array = await client.bars(
mktid,
array: np.ndarray = await client.bars(
mkt=mkt,
start_dt=start_dt,
end_dt=end_dt,
)
if array.size == 0:
raise NoData(
f'No frame for {start_dt} -> {end_dt}\n'
)
times = array['time']
if (
end_dt is None
):
inow = round(time.time())
if not times.any():
raise ValueError(
'Bad frame with null-times?\n\n'
f'{times}'
)
if end_dt is None:
inow: int = round(time.time())
if (inow - times[-1]) > 60:
await tractor.pause()
start_dt = from_timestamp(times[0])
end_dt = from_timestamp(times[-1])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3}
@ -439,7 +450,6 @@ async def subscribe(
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
@ -451,11 +461,14 @@ async def stream_quotes(
) -> None:
async with (
tractor.trionics.maybe_raise_from_masking_exc(),
send_chan as send_chan,
open_cached_client('binance') as client,
):
init_msgs: list[FeedInit] = []
for sym in symbols:
mkt: MktPair
pair: Pair
mkt, pair = await get_mkt_info(sym)
# build out init msgs according to latest spec
@ -504,7 +517,6 @@ async def stream_quotes(
# start streaming
async for typ, quote in msg_gen:
# period = time.time() - last
# hz = 1/period if period else float('inf')
# if hz > 60:
@ -533,14 +545,15 @@ async def open_symbol_search(
pattern: str
async for pattern in stream:
matches = fuzzy.extractBests(
# NOTE: pattern fuzzy-matching is done within
# the methd impl.
pairs: dict[str, Pair] = await client.search_symbols(
pattern,
client._pairs,
score_cutoff=50,
)
# repack in dict form
await stream.send({
item[0].bs_fqme: item[0]
for item in matches
})
# repack in fqme-keyed table
byfqme: dict[str, Pair] = {}
for pair in pairs.values():
byfqme[pair.bs_fqme] = pair
await stream.send(byfqme)

View File

@ -97,6 +97,16 @@ class Pair(Struct, frozen=True, kw_only=True):
baseAsset: str
baseAssetPrecision: int
permissionSets: list[list[str]]
# https://developers.binance.com/docs/binance-spot-api-docs#2025-08-26
# will become non-optional 2025-08-28?
# https://developers.binance.com/docs/binance-spot-api-docs#future-changes
pegInstructionsAllowed: bool = False
# https://developers.binance.com/docs/binance-spot-api-docs#2025-12-02
opoAllowed: bool = False
filters: dict[
str,
str | int | float,
@ -137,11 +147,17 @@ class SpotPair(Pair, frozen=True):
quoteOrderQtyMarketAllowed: bool
isSpotTradingAllowed: bool
isMarginTradingAllowed: bool
otoAllowed: bool
defaultSelfTradePreventionMode: str
allowedSelfTradePreventionModes: list[str]
permissions: list[str]
# can the paint botz create liq gaps even easier on this asset?
# Bp
# https://developers.binance.com/docs/binance-spot-api-docs/faqs/order_amend_keep_priority
amendAllowed: bool
# NOTE: see `.data._symcache.SymbologyCache.load()` for why
ns_path: str = 'piker.brokers.binance:SpotPair'
@ -179,7 +195,6 @@ class FutesPair(Pair):
quoteAsset: str # 'USDT',
quotePrecision: int # 8,
requiredMarginPercent: float # '5.0000',
settlePlan: int # 0,
timeInForce: list[str] # ['GTC', 'IOC', 'FOK', 'GTX'],
triggerProtect: float # '0.0500',
underlyingSubType: list[str] # ['PoW'],
@ -194,6 +209,45 @@ class FutesPair(Pair):
def quoteAssetPrecision(self) -> int:
return self.quotePrecision
@property
def expiry(self) -> str:
symbol: str = self.symbol
contype: str = self.contractType
match contype:
case (
'CURRENT_QUARTER'
| 'CURRENT_QUARTER DELIVERING'
| 'NEXT_QUARTER' # su madre binance..
):
pair, _, expiry = symbol.partition('_')
assert pair == self.pair # sanity
return f'{expiry}'
case (
'PERPETUAL'
| 'TRADIFI_PERPETUAL'
):
return 'PERP'
case '':
subtype: list[str] = self.underlyingSubType
if not subtype:
if self.status == 'PENDING_TRADING':
return 'PENDING'
match subtype:
case ['DEFI']:
return 'PERP'
# wow, just wow you binance guys suck..
if self.status == 'PENDING_TRADING':
return 'PENDING'
# XXX: yeah no clue then..
raise ValueError(
f'Bad .expiry token match: {contype} for {symbol}'
)
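# For reference, hedged examples of the `.expiry` mapping above
# (field values are illustrative, not pulled from a live schema):
#   symbol='BTCUSDT_230929', contractType='CURRENT_QUARTER' -> '230929'
#   symbol='BTCUSDT', contractType='PERPETUAL' -> 'PERP'
#   contractType='', underlyingSubType=['DEFI'] -> 'PERP'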
@property
def venue(self) -> str:
symbol: str = self.symbol
@ -201,37 +255,54 @@ class FutesPair(Pair):
margin: str = self.marginAsset
match ctype:
case 'PERPETUAL':
return f'{margin}M.PERP'
case (
'PERPETUAL'
| 'TRADIFI_PERPETUAL'
):
return f'{margin}M'
case 'CURRENT_QUARTER':
case (
'CURRENT_QUARTER'
| 'CURRENT_QUARTER DELIVERING'
| 'NEXT_QUARTER' # su madre binance..
):
_, _, expiry = symbol.partition('_')
return f'{margin}M.{expiry}'
return f'{margin}M'
case '':
subtype: list[str] = self.underlyingSubType
if not subtype:
if self.status == 'PENDING_TRADING':
return f'{margin}M.PENDING'
return f'{margin}M'
match subtype:
case ['DEFI']:
return f'{subtype[0]}.PERP'
case (
['DEFI']
| ['USDC']
):
return f'{subtype[0]}'
# XXX: yeah no clue then..
return 'WTF.PWNED.BBQ'
raise ValueError(
f'Bad .venue token match: {ctype}'
)
@property
def bs_fqme(self) -> str:
symbol: str = self.symbol
ctype: str = self.contractType
venue: str = self.venue
pair: str = self.pair
match ctype:
case 'CURRENT_QUARTER':
symbol, _, expiry = symbol.partition('_')
case (
'CURRENT_QUARTER'
| 'NEXT_QUARTER' # su madre binance..
):
pair, _, expiry = symbol.partition('_')
assert pair == self.pair
return f'{symbol}.{venue}'
return f'{pair}.{venue}.{self.expiry}'
@property
def bs_src_asset(self) -> str:

View File

@ -454,37 +454,54 @@ def mkt_info(
@cli.command()
@click.argument('pattern', required=True)
# TODO: move this to top level click/typer context for all subs
@click.option(
'--pdb',
is_flag=True,
help='Enable tractor debug mode',
)
@click.pass_obj
def search(config, pattern):
def search(
config: dict,
pattern: str,
pdb: bool,
):
'''
Search for symbols from broker backend(s).
'''
# global opts
brokermods = list(config['brokermods'].values())
brokermods: list[ModuleType] = list(config['brokermods'].values())
# TODO: this is coming from the `search --pdb` NOT from
# the `piker --pdb` XD ..
# -[ ] pull from the parent click ctx's values..dumdum
# assert pdb
# define tractor entrypoint
async def main(func):
async with maybe_open_pikerd(
loglevel=config['loglevel'],
debug_mode=pdb,
):
return await func()
quotes = trio.run(
main,
partial(
core.symbol_search,
brokermods,
pattern,
),
)
from piker.toolz import open_crash_handler
with open_crash_handler():
quotes = trio.run(
main,
partial(
core.symbol_search,
brokermods,
pattern,
),
)
if not quotes:
log.error(f"No matches could be found for {pattern}?")
return
click.echo(colorize_json(quotes))
@cli.command()
@ -493,9 +510,11 @@ def search(config, pattern):
@click.option('--delete', '-d', flag_value=True, help='Delete section')
@click.pass_obj
def brokercfg(config, section, value, delete):
"""If invoked with no arguments, open an editor to edit broker configs file
or get / update an individual section.
"""
'''
If invoked with no arguments, open an editor to edit broker
configs file or get / update an individual section.
'''
from .. import config
if section:

View File

@ -22,7 +22,9 @@ routines should be primitive data types where possible.
"""
import inspect
from types import ModuleType
from typing import List, Dict, Any, Optional
from typing import (
Any,
)
import trio
@ -34,8 +36,10 @@ from ..accounting import MktPair
async def api(brokername: str, methname: str, **kwargs) -> dict:
"""Make (proxy through) a broker API call by name and return its result.
"""
'''
Make (proxy through) a broker API call by name and return its result.
'''
brokermod = get_brokermod(brokername)
async with brokermod.get_client() as client:
meth = getattr(client, methname, None)
@ -62,10 +66,14 @@ async def api(brokername: str, methname: str, **kwargs) -> dict:
async def stocks_quote(
brokermod: ModuleType,
tickers: List[str]
) -> Dict[str, Dict[str, Any]]:
"""Return quotes dict for ``tickers``.
"""
tickers: list[str]
) -> dict[str, dict[str, Any]]:
'''
Return a `dict` of snapshot quotes for the provided input
`tickers`: a `list` of fqmes.
'''
async with brokermod.get_client() as client:
return await client.quote(tickers)
@ -74,13 +82,15 @@ async def stocks_quote(
async def option_chain(
brokermod: ModuleType,
symbol: str,
date: Optional[str] = None,
) -> Dict[str, Dict[str, Dict[str, Any]]]:
"""Return option chain for ``symbol`` for ``date``.
date: str|None = None,
) -> dict[str, dict[str, dict[str, Any]]]:
'''
Return option chain for ``symbol`` for ``date``.
By default all expiries are returned. If ``date`` is provided
then contract quotes for that single expiry are returned.
"""
'''
async with brokermod.get_client() as client:
if date:
id = int((await client.tickers2ids([symbol]))[symbol])
@ -98,7 +108,7 @@ async def option_chain(
# async def contracts(
# brokermod: ModuleType,
# symbol: str,
# ) -> Dict[str, Dict[str, Dict[str, Any]]]:
# ) -> dict[str, dict[str, dict[str, Any]]]:
# """Return option contracts (all expiries) for ``symbol``.
# """
# async with brokermod.get_client() as client:
@ -110,15 +120,24 @@ async def bars(
brokermod: ModuleType,
symbol: str,
**kwargs,
) -> Dict[str, Dict[str, Dict[str, Any]]]:
"""Return option contracts (all expiries) for ``symbol``.
"""
) -> dict[str, dict[str, dict[str, Any]]]:
'''
Return option contracts (all expiries) for ``symbol``.
'''
async with brokermod.get_client() as client:
return await client.bars(symbol, **kwargs)
async def search_w_brokerd(name: str, pattern: str) -> dict:
async def search_w_brokerd(
name: str,
pattern: str,
) -> dict:
# TODO: WHY NOT WORK!?!
# when we `step` through the next block?
# import tractor
# await tractor.pause()
async with open_cached_client(name) as client:
# TODO: support multiple asset type concurrent searches.
@ -130,12 +149,12 @@ async def symbol_search(
pattern: str,
**kwargs,
) -> Dict[str, Dict[str, Dict[str, Any]]]:
) -> dict[str, dict[str, dict[str, Any]]]:
'''
Return symbol info from broker.
'''
results = []
results: list[str] = []
async def search_backend(
brokermod: ModuleType
@ -143,9 +162,20 @@ async def symbol_search(
brokername: str = mod.name
# TODO: figure this the FUCK OUT
# -> ok so obvi in the root actor any async task that's
# spawned outside the main tractor-root-actor task needs to
# call this..
# await tractor.devx._debug.maybe_init_greenback()
# tractor.pause_from_sync()
async with maybe_spawn_brokerd(
mod.name,
infect_asyncio=getattr(mod, '_infect_asyncio', False),
infect_asyncio=getattr(
mod,
'_infect_asyncio',
False,
),
) as portal:
results.append((
@ -158,7 +188,6 @@ async def symbol_search(
))
async with trio.open_nursery() as n:
for mod in brokermods:
n.start_soon(search_backend, mod.name)
@ -168,11 +197,13 @@ async def symbol_search(
async def mkt_info(
brokermod: ModuleType,
fqme: str,
**kwargs,
) -> MktPair:
'''
Return MktPair info from broker including src and dst assets.
Return the `piker.accounting.MktPair` info struct from a given
backend broker tradable src/dst asset pair.
'''
async with open_cached_client(brokermod.name) as client:

View File

@ -31,14 +31,15 @@ from typing import (
Callable,
)
import pendulum
from pendulum import now
import trio
from trio_typing import TaskStatus
from fuzzywuzzy import process as fuzzy
from rapidfuzz import process as fuzzy
import numpy as np
from tractor.trionics import (
broadcast_receiver,
maybe_open_context
collapse_eg,
)
from tractor import to_asyncio
# XXX WOOPS XD
@ -52,8 +53,11 @@ from cryptofeed.defines import (
)
from cryptofeed.symbols import Symbol
from piker.data.types import Struct
from piker.data import def_iohlcv_fields
from piker.data import (
def_iohlcv_fields,
match_from_pairs,
Struct,
)
from piker.data._web_bs import (
open_jsonrpc_session
)
@ -79,7 +83,7 @@ _testnet_ws_url = 'wss://test.deribit.com/ws/api/v2'
class JSONRPCResult(Struct):
jsonrpc: str = '2.0'
id: int
result: Optional[dict] = None
result: Optional[list[dict]] = None
error: Optional[dict] = None
usIn: int
usOut: int
@ -289,24 +293,29 @@ class Client:
currency: str = 'btc', # BTC, ETH, SOL, USDC
kind: str = 'option',
expired: bool = False
) -> dict[str, Any]:
"""Get symbol info for the exchange.
"""
) -> dict[str, dict]:
'''
Get symbol infos.
'''
if self._pairs:
return self._pairs
# will retrieve all symbols by default
params = {
params: dict[str, str] = {
'currency': currency.upper(),
'kind': kind,
'expired': str(expired).lower()
}
resp = await self.json_rpc('public/get_instruments', params)
results = resp.result
instruments = {
resp: JSONRPCResult = await self.json_rpc(
'public/get_instruments',
params,
)
# convert to symbol-keyed table
results: list[dict] | None = resp.result
instruments: dict[str, dict] = {
item['instrument_name'].lower(): item
for item in results
}
@ -319,6 +328,7 @@ class Client:
async def cache_symbols(
self,
) -> dict:
if not self._pairs:
self._pairs = await self.symbol_info()
@ -329,17 +339,23 @@ class Client:
pattern: str,
limit: int = 30,
) -> dict[str, Any]:
data = await self.symbol_info()
'''
Fuzzy search symbology set for pairs matching `pattern`.
matches = fuzzy.extractBests(
pattern,
data,
'''
pairs: dict[str, Any] = await self.symbol_info()
matches: dict[str, Pair] = match_from_pairs(
pairs=pairs,
query=pattern.upper(),
score_cutoff=35,
limit=limit
)
# repack in dict form
return {item[0]['instrument_name'].lower(): item[0]
for item in matches}
# repack in name-keyed table
return {
pair['instrument_name'].lower(): pair
for pair in matches.values()
}
async def bars(
self,
@ -417,6 +433,7 @@ async def get_client(
) -> Client:
async with (
collapse_eg(),
trio.open_nursery() as n,
open_jsonrpc_session(
_testnet_ws_url, dtype=JSONRPCResult) as json_rpc

View File

@ -26,7 +26,7 @@ import time
import trio
from trio_typing import TaskStatus
import pendulum
from fuzzywuzzy import process as fuzzy
from rapidfuzz import process as fuzzy
import numpy as np
import tractor

View File

@ -20,6 +20,11 @@ runnable script-programs.
'''
from __future__ import annotations
from datetime import ( # noqa
datetime,
date,
tzinfo as TzInfo,
)
from functools import partial
from typing import (
Literal,
@ -33,7 +38,7 @@ from piker.brokers._util import get_logger
if TYPE_CHECKING:
from .api import Client
from ib_insync import IB
import i3ipc
log = get_logger('piker.brokers.ib')
@ -48,8 +53,39 @@ _reset_tech: Literal[
] = 'vnc'
no_setup_msg:str = (
'No data reset hack test setup for {vnc_sockaddr}!\n'
'See config setup tips @\n'
'https://github.com/pikers/piker/tree/master/piker/brokers/ib'
)
def try_xdo_manual(
client: Client,
):
'''
Do the "manual" `xdo`-based screen switch + click
combo since apparently the `asyncvnc` client ain't workin..
Note this is only meant as a backup method for Xorg users,
ideally you can use a real vnc client and the `vnc_click_hack()`
impl!
'''
global _reset_tech
try:
i3ipc_xdotool_manual_click_hack()
_reset_tech = 'i3ipc_xdotool'
return True
except OSError:
vnc_sockaddr: str = client.conf.get('vnc_addrs')
log.exception(
no_setup_msg.format(vnc_sockaddr=vnc_sockaddr)
)
return False
async def data_reset_hack(
# vnc_host: str,
client: Client,
reset_type: Literal['data', 'connection'],
@ -81,56 +117,60 @@ async def data_reset_hack(
that need to be wrangled.
'''
ib_client: IB = client.ib
# look up any user defined vnc socket address mapped from
# a particular API socket port.
api_port: str = str(ib_client.client.port)
vnc_host: str
vnc_port: int
vnc_host, vnc_port = client.conf['vnc_addrs'].get(
api_port,
('localhost', 3003)
)
vnc_addrs: tuple[str]|None = client.conf.get('vnc_addrs')
if not vnc_addrs:
log.warning(
no_setup_msg.format(vnc_sockaddr=client.conf)
+
'REQUIRES A `vnc_addrs: array` ENTRY'
)
no_setup_msg:str = (
f'No data reset hack test setup for {vnc_host}!\n'
'See setup @\n'
'https://github.com/pikers/piker/tree/master/piker/brokers/ib'
)
global _reset_tech
match _reset_tech:
case 'vnc':
try:
await tractor.to_asyncio.run_task(
partial(
vnc_click_hack,
host=vnc_host,
port=vnc_port,
client=client,
)
)
except OSError:
if vnc_host != 'localhost':
log.warning(no_setup_msg)
return False
except (
OSError, # no VNC server avail..
PermissionError, # asyncvnc pw fail..
):
try:
import i3ipc # noqa (since a deps dynamic check)
except ModuleNotFoundError:
log.warning(no_setup_msg)
log.warning(
no_setup_msg.format(vnc_sockaddr=client.conf)
)
return False
try:
i3ipc_xdotool_manual_click_hack()
_reset_tech = 'i3ipc_xdotool'
return True
except OSError:
log.exception(no_setup_msg)
return False
# XXX, Xorg only workaround..
# TODO? remove now that we have `pyvnc`?
# if vnc_host not in {
# 'localhost',
# '127.0.0.1',
# }:
# focussed, matches = i3ipc_fin_wins_titled()
# if not matches:
# log.warning(
# no_setup_msg.format(vnc_sockaddr=vnc_sockaddr)
# )
# return False
# else:
# try_xdo_manual(vnc_sockaddr)
# localhost but no vnc-client or it borked..
else:
try_xdo_manual(client)
case 'i3ipc_xdotool':
i3ipc_xdotool_manual_click_hack()
try_xdo_manual(client)
# i3ipc_xdotool_manual_click_hack()
case _ as tech:
raise RuntimeError(f'{tech} is not supported for reset tech!?')
@ -140,21 +180,66 @@ async def data_reset_hack(
async def vnc_click_hack(
host: str,
port: int,
reset_type: str = 'data'
client: Client,
reset_type: str = 'data',
pw: str|None = None,
) -> None:
'''
Reset the data or network connection for the VNC attached
ib gateway using magic combos.
ib-gateway using a (magic) keybinding combo.
A vnc-server password can be set either via the input `pw` param
or in the client's config; the latter is loaded from the user's
`brokers.toml` in a vnc-addrs port-mapping section,
.. code:: toml
[ib.vnc_addrs]
4002 = {host = 'localhost', port = 5900, pw = 'doggy'}
'''
api_port: str = str(client.ib.client.port)
conf: dict = client.conf
vnc_addrs: dict[int, tuple] = conf.get('vnc_addrs')
if not vnc_addrs:
return None
addr_entry: dict|tuple = vnc_addrs.get(
api_port,
('localhost', 5900) # a typical default
)
match addr_entry:
case (
host,
port,
):
pass
case {
'host': host,
'port': port,
'pw': conf_pw,
}:
# only fall back to the config-file pw when one
# wasn't explicitly passed in.
if pw is None:
pw = conf_pw
case _:
raise ValueError(
f'Invalid `ib.vnc_addrs` entry ?\n'
f'{addr_entry!r}\n'
)
try:
import asyncvnc
from pyvnc import (
AsyncVNCClient,
VNCConfig,
Point,
MOUSE_BUTTON_LEFT,
)
except ModuleNotFoundError:
log.warning(
"In order to leverage `piker`'s built-in data reset hacks, install "
"the `asyncvnc` project: https://github.com/barneygale/asyncvnc"
"the `pyvnc` project: https://github.com/regulad/pyvnc.git"
)
return
@ -165,24 +250,79 @@ async def vnc_click_hack(
'connection': 'r'
}[reset_type]
async with asyncvnc.connect(
host,
port=port,
# TODO: doesn't work see:
# https://github.com/barneygale/asyncvnc/issues/7
# password='ibcansmbz',
) as client:
# move to middle of screen
# 640x1800
client.mouse.move(
x=500,
y=500,
with tractor.devx.open_crash_handler():
client = await AsyncVNCClient.connect(
VNCConfig(
host=host,
port=port,
password=pw,
)
)
client.mouse.click()
client.keyboard.press('Ctrl', 'Alt', key) # keys are stacked
async with client:
# move to middle of screen
# 640x1800
await client.move(
Point(
500,
500,
)
)
# ensure the ib-gw window is active
await client.click(MOUSE_BUTTON_LEFT)
# send the hotkeys combo B)
await client.press('Ctrl', 'Alt', key) # keys are stacked
def i3ipc_fin_wins_titled(
titles: list[str] = [
'Interactive Brokers', # tws running in i3
'IB Gateway', # gw running in i3
# 'IB', # gw running in i3 (newer version?)
# !TODO, remote vnc instance
# -[ ] something in title (or other Con-props) that indicates
# this is explicitly for ibrk sw?
# |_[ ] !can use modden spawn eventually!
'TigerVNC',
# 'vncviewer', # the terminal..
],
) -> tuple[
i3ipc.Con, # orig focussed win
list[tuple[str, i3ipc.Con]], # matching wins by title
]:
'''
Attempt to find a local-DE window titled with an entry in
`titles`.
If found deliver the current focussed window and all matching
`i3ipc.Con`s in a list.
'''
import i3ipc
ipc = i3ipc.Connection()
# TODO: might be worth offering some kinda api for grabbing
# the window id from the pid?
# https://stackoverflow.com/a/2250879
tree = ipc.get_tree()
focussed: i3ipc.Con = tree.find_focused()
matches: list[i3ipc.Con] = []
for name in titles:
results = tree.find_titled(name)
print(f'results for {name}: {results}')
if results:
con = results[0]
matches.append((
name,
con,
))
return (
focussed,
matches,
)
def i3ipc_xdotool_manual_click_hack() -> None:
@ -190,67 +330,48 @@ def i3ipc_xdotool_manual_click_hack() -> None:
Do the data reset hack but expecting a local X-window using `xdotool`.
'''
import i3ipc
i3 = i3ipc.Connection()
# TODO: might be worth offering some kinda api for grabbing
# the window id from the pid?
# https://stackoverflow.com/a/2250879
t = i3.get_tree()
orig_win_id = t.find_focused().window
# for tws
win_names: list[str] = [
'Interactive Brokers', # tws running in i3
'IB Gateway', # gw running in i3
# 'IB', # gw running in i3 (newer version?)
]
focussed, matches = i3ipc_fin_wins_titled()
orig_win_id = focussed.window
try:
for name in win_names:
results = t.find_titled(name)
print(f'results for {name}: {results}')
if results:
con = results[0]
for name, con in matches:
print(f'Resetting data feed for {name}')
win_id = str(con.window)
w, h = con.rect.width, con.rect.height
# TODO: seems to be a few libs for python but not sure
# if they support all the sub commands we need, order of
# most recent commit history:
# https://github.com/rr-/pyxdotool
# https://github.com/ShaneHutter/pyxdotool
# https://github.com/cphyc/pyxdotool
# TODO: only run the reconnect (2nd) kc on a detected
# disconnect?
for key_combo, timeout in [
# only required if we need a connection reset.
# ('ctrl+alt+r', 12),
# data feed reset.
('ctrl+alt+f', 6)
]:
subprocess.call([
'xdotool',
'windowactivate', '--sync', win_id,
# move mouse to bottom left of window (where
# there should be nothing to click).
'mousemove_relative', '--sync', str(w-4), str(h-4),
# NOTE: we may need to stick a `--retry 3` in here..
'click', '--window', win_id,
'--repeat', '3', '1',
# hackzorzes
'key', key_combo,
],
timeout=timeout,
)
# re-activate and focus original window
subprocess.call([
'xdotool',
'windowactivate', '--sync', str(orig_win_id),
@ -258,3 +379,99 @@ def i3ipc_xdotool_manual_click_hack() -> None:
])
except subprocess.TimeoutExpired:
log.exception('xdotool timed out?')
def is_current_time_in_range(
start_dt: datetime,
end_dt: datetime,
) -> bool:
'''
Check if current time is within the datetime range.
Use any/the-same timezone as provided by `start_dt.tzinfo` value
in the range.
'''
now: datetime = datetime.now(start_dt.tzinfo)
return start_dt <= now <= end_dt
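# e.g. a trivial (naive-tz) self-check using the module-level
# `datetime` import above:
from datetime import timedelta

_now = datetime.now()
assert is_current_time_in_range(
    start_dt=_now - timedelta(hours=1),
    end_dt=_now + timedelta(hours=1),
)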
# TODO, put this into `._util` and call it from here!
#
# NOTE, this was generated by @guille from a gpt5 prompt
# and was originally thot to be needed before learning about
# `ib_insync.contract.ContractDetails._parseSessions()` and
# it's downstream meths..
#
# This is still likely useful to keep for now to parse the
# `.tradingHours: str` value manually if we ever decide
# to move off `ib_async` and implement our own `trio`/`anyio`
# based version Bp
#
# >attempt to parse the retarted ib "time stampy thing" they
# >do for "venue hours" with this.. written by
# >gpt5-"thinking",
#
def parse_trading_hours(
spec: str,
tz: TzInfo|None = None
) -> dict[
date,
tuple[datetime, datetime]|None
]:
'''
Parse venue hours like:
'YYYYMMDD:HHMM-YYYYMMDD:HHMM;YYYYMMDD:CLOSED;...'
Returns a `dict` mapping each `date` to its `(open_dt,
close_dt)` range, or to `None` on days the venue is closed.
'''
if (
not isinstance(spec, str)
or
not spec
):
raise ValueError('spec must be a non-empty string')
out: dict[
date,
tuple[datetime, datetime]|None
] = {}
for part in (p.strip() for p in spec.split(';') if p.strip()):
if part.endswith(':CLOSED'):
day_s, _ = part.split(':', 1)
d = datetime.strptime(day_s, '%Y%m%d').date()
out[d] = None
continue
try:
start_s, end_s = part.split('-', 1)
start_dt = datetime.strptime(start_s, '%Y%m%d:%H%M')
end_dt = datetime.strptime(end_s, '%Y%m%d:%H%M')
except ValueError as exc:
raise ValueError(f'invalid segment: {part}') from exc
if tz is not None:
start_dt = start_dt.replace(tzinfo=tz)
end_dt = end_dt.replace(tzinfo=tz)
out[start_dt.date()] = (start_dt, end_dt)
return out
# ORIG desired usage,
#
# TODO, for non-drunk tomorrow,
# - call above fn and check that `output[today] is not None`
# trading_hrs: dict = parse_trading_hours(
# details.tradingHours
# )
# liq_hrs: dict = parse_trading_hours(
# details.liquidHours
# )
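# Worked example of the parser above on a made-up spec string:
#
# >>> parse_trading_hours('20260107:0930-20260107:1600;20260108:CLOSED')
# {date(2026, 1, 7): (datetime(2026, 1, 7, 9, 30),
#                     datetime(2026, 1, 7, 16, 0)),
#  date(2026, 1, 8): None}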

View File

@ -41,7 +41,6 @@ import time
from typing import (
Any,
Callable,
Union,
)
from types import SimpleNamespace
@ -49,7 +48,13 @@ from bidict import bidict
import trio
import tractor
from tractor import to_asyncio
import pendulum
from tractor import trionics
from pendulum import (
from_timestamp,
DateTime,
Duration,
duration as mk_duration,
)
from eventkit import Event
from ib_insync import (
client as ib_client,
@ -221,16 +226,20 @@ def bars_to_np(bars: list) -> np.ndarray:
# https://interactivebrokers.github.io/tws-api/historical_limitations.html#non-available_hd
_samplings: dict[int, tuple[str, str]] = {
1: (
# ib strs
'1 secs',
f'{int(2e3)} S',
pendulum.duration(seconds=2e3),
mk_duration(seconds=2e3),
),
# TODO: benchmark >1 D duration on query to see if
# throughput can be made faster during backfilling.
60: (
# ib strs
'1 min',
'1 D',
pendulum.duration(days=1),
'2 D',
mk_duration(days=2),
),
}
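# e.g. the 1m-bars request params resolve from the table above as,
_bar_size, _ib_dur_str, _frame_dur = _samplings[60]
assert (_bar_size, _ib_dur_str) == ('1 min', '2 D')
assert _frame_dur == mk_duration(days=2)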
@ -279,9 +288,31 @@ class Client:
self.conf = config
# NOTE: the ib.client here is "throttled" to 45 rps by default
self.ib = ib
self.ib: IB = ib
self.ib.RaiseRequestErrors: bool = True
# self._acnt_names: set[str] = {}
self._acnt_names: list[str] = []
@property
def acnts(self) -> list[str]:
# return list(self._acnt_names)
return self._acnt_names
def __repr__(self) -> str:
return (
f'<{type(self).__name__}('
f'ib={self.ib} '
f'acnts={self.acnts}'
# TODO: we need to mask out acnt-#s and other private
# infos if we're going to console this!
# f' |_.conf:\n'
# f' {pformat(self.conf)}\n'
')>'
)
async def get_fills(self) -> list[Fill]:
'''
Return the list of recent `Fill`s from the trading session.
@ -303,19 +334,19 @@ class Client:
fqme: str,
# EST in ISO 8601 format is required... below is EPOCH
start_dt: Union[datetime, str] = "1970-01-01T00:00:00.000000-05:00",
end_dt: Union[datetime, str] = "",
start_dt: datetime|str = "1970-01-01T00:00:00.000000-05:00",
end_dt: datetime|str = "",
# ohlc sample period in seconds
sample_period_s: int = 1,
# optional "duration of time" equal to the
# length of the returned history frame.
duration: str | None = None,
duration: str|None = None,
**kwargs,
) -> tuple[BarDataList, np.ndarray, pendulum.Duration]:
) -> tuple[BarDataList, np.ndarray, Duration]:
'''
Retrieve OHLCV bars for a fqme over a range to the present.
@ -324,14 +355,19 @@ class Client:
# https://interactivebrokers.github.io/tws-api/historical_data.html
bars_kwargs = {'whatToShow': 'TRADES'}
bars_kwargs.update(kwargs)
bar_size, duration, dt_duration = _samplings[sample_period_s]
(
bar_size,
ib_duration_str,
default_dt_duration,
) = _samplings[sample_period_s]
global _enters
log.info(
f"REQUESTING {duration}'s worth {bar_size} BARS\n"
f'{_enters} @ end={end_dt}"'
dt_duration: Duration = (
duration
or default_dt_duration
)
# TODO: maybe remove all this?
global _enters
if not end_dt:
end_dt = ''
@ -340,8 +376,8 @@ class Client:
contract: Contract = (await self.find_contracts(fqme))[0]
bars_kwargs.update(getattr(contract, 'bars_kwargs', {}))
bars = await self.ib.reqHistoricalDataAsync(
contract,
kwargs: dict[str, Any] = dict(
contract=contract,
endDateTime=end_dt,
formatDate=2,
@ -353,7 +389,7 @@ class Client:
# time history length values format:
# ``durationStr=integer{SPACE}unit (S|D|W|M|Y)``
durationStr=duration,
durationStr=ib_duration_str,
# always use extended hours
useRTH=False,
@ -363,50 +399,122 @@ class Client:
# whatToShow='MIDPOINT',
# whatToShow='TRADES',
)
bars = await self.ib.reqHistoricalDataAsync(
**kwargs,
)
query_info: str = (
f'REQUESTING IB history BARS\n'
f' ------ - ------\n'
f'dt_duration: {dt_duration}\n'
f'ib_duration_str: {ib_duration_str}\n'
f'bar_size: {bar_size}\n'
f'fqme: {fqme}\n'
f'actor-global _enters: {_enters}\n'
f'kwargs: {pformat(kwargs)}\n'
)
# tail case if no history for range or none prior.
# NOTE: there's actually 3 cases here to handle (and
# this should be read alongside the implementation of
# `.reqHistoricalDataAsync()`):
# - a timeout occurred in which case insync internals return
# an empty list thing with bars.clear()...
# - no data exists for the period likely due to
# a weekend, holiday or other non-trading period prior to
# ``end_dt`` which exceeds the ``duration``,
# - LITERALLY this is the start of the mkt's history!
if not bars:
# NOTE: there's 2 cases here to handle (and this should be
# read alongside the implementation of
# ``.reqHistoricalDataAsync()``):
# - no data is returned for the period likely due to
# a weekend, holiday or other non-trading period prior to
# ``end_dt`` which exceeds the ``duration``,
# - a timeout occurred in which case insync internals return
# an empty list thing with bars.clear()...
return [], np.empty(0), dt_duration
# TODO: figure out wut's going on here.
# TODO: is this handy, a sync requester for tinkering
# with empty frame cases?
# def get_hist():
# return self.ib.reqHistoricalData(**kwargs)
# import pdbp
# pdbp.set_trace()
log.critical(
'STUPID IB SAYS NO HISTORY\n\n'
+ query_info
)
# TODO: we could maybe raise ``NoData`` instead if we
# rewrite the method in the first case? right now there's no
# way to detect a timeout.
# rewrite the method in the first case?
# right now there's no way to detect a timeout..
return [], np.empty(0), dt_duration
# NOTE XXX: ensure minimum duration in bars B)
# => we recursively call this method until we get at least
# as many bars such that they sum in aggregate to the
# desired total time (duration) at most.
elif (
end_dt
and (
(len(bars) * sample_period_s) < dt_duration.in_seconds()
)
log.info(query_info)
# NOTE XXX: ensure minimum duration in bars?
# => recursively call this method until we get at least as
# many bars such that they sum in aggregate to the
# desired total time (duration) at most.
# - if you query over a gap and get no data
# that may short circuit the history
if (
# XXX XXX XXX
# => WHY DID WE EVEN NEED THIS ORIGINALLY!? <=
# XXX XXX XXX
False
and end_dt
):
log.warning(
f'Recursing to get more bars from {end_dt} for {dt_duration}'
)
end_dt -= dt_duration
(
r_bars,
r_arr,
r_duration,
) = await self.bars(
fqme,
start_dt=start_dt,
end_dt=end_dt,
)
r_bars.extend(bars)
bars = r_bars
nparr: np.ndarray = bars_to_np(bars)
times: np.ndarray = nparr['time']
first: float = times[0]
tdiff: float = times[-1] - first
nparr = bars_to_np(bars)
return bars, nparr, dt_duration
if (
# len(bars) * sample_period_s) < dt_duration.in_seconds()
tdiff < dt_duration.in_seconds()
# and False
):
end_dt: DateTime = from_timestamp(first)
log.warning(
f'Frame result was shorter than {dt_duration}!?\n'
'Recursing for more bars:\n'
f'end_dt: {end_dt}\n'
f'dt_duration: {dt_duration}\n'
)
(
r_bars,
r_arr,
r_duration,
) = await self.bars(
fqme,
start_dt=start_dt,
end_dt=end_dt,
sample_period_s=sample_period_s,
# TODO: make a table for Duration to
# the ib str values in order to use this?
# duration=duration,
)
r_bars.extend(bars)
bars = r_bars
nparr: np.ndarray = bars_to_np(bars)
# timestep should always be at least as large as the
# period step.
tdiff: np.ndarray = np.diff(nparr['time'])
to_short: np.ndarray = tdiff < sample_period_s
if (to_short).any():
# raise ValueError(
log.error(
f'OHLC frame for {sample_period_s} has {to_short.sum()} '
'time steps which are shorter than expected?!'
)
# OOF: this will break teardown?
# -[ ] check if it's greenback
# -[ ] why tf are we leaking shm entries..
# -[ ] make a test on the debugging asyncio testing
# branch..
# breakpoint()
return (
bars,
nparr,
dt_duration,
)
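# The under-period gap check above on a toy time vector, assuming a
# 60s sample period (standalone sketch):
import numpy as np

_times = np.array([0., 60., 90., 150.])
_tdiff = np.diff(_times)     # -> [60., 30., 60.]
_to_short = _tdiff < 60      # the 30s step is under-period
assert _to_short.any() and _to_short.sum() == 1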
async def con_deats(
self,
@ -416,20 +524,26 @@ class Client:
futs: list[asyncio.Future] = []
for con in contracts:
if con.primaryExchange not in _exch_skip_list:
exch: str = con.primaryExchange or con.exchange
if (
exch
and exch not in _exch_skip_list
):
futs.append(self.ib.reqContractDetailsAsync(con))
# batch request all details
try:
results: list[ContractDetails] = await asyncio.gather(*futs)
except RequestError as err:
msg = err.message
msg: str = err.message
if (
'No security definition' in msg
):
log.warning(f'{msg}: {contracts}')
return {}
raise
# one set per future result
details: dict[str, ContractDetails] = {}
for details_set in results:
@ -602,8 +716,8 @@ class Client:
async def find_contracts(
self,
pattern: str | None = None,
contract: Contract | None = None,
pattern: str|None = None,
contract: Contract|None = None,
qualify: bool = True,
err_on_qualify: bool = True,
@ -663,7 +777,7 @@ class Client:
# commodities
elif exch == 'CMDTY': # eg. XAUUSD.CMDTY
con_kwargs, bars_kwargs = _adhoc_symbol_map[symbol]
con_kwargs, bars_kwargs = _adhoc_symbol_map[symbol.upper()]
con = Commodity(**con_kwargs)
con.bars_kwargs = bars_kwargs
@ -727,12 +841,16 @@ class Client:
or tract.exchange
or exch
)
pattern: str = f'{symbol}.{exch}'
pattern: str = f'{symbol}.{exch.lower()}'
expiry: str = tract.lastTradeDateOrContractMonth
# add an entry with expiry suffix if available
if expiry:
pattern += f'.{expiry}'
# since pos update msgs will always have the full fqme
# with suffix?
pattern += '.ib'
# directly cache the input pattern to the output
# contract match as well as by the IB-internal conId.
self._contracts[pattern] = tract
@ -740,6 +858,23 @@ class Client:
return contracts
async def maybe_get_head_time(
self,
fqme: str,
) -> datetime|None:
'''
Return the first datetime stamp for `fqme` or `None`
on request failure.
'''
try:
head_dt: datetime = await self.get_head_time(fqme=fqme)
return head_dt
except RequestError:
log.warning(f'Unable to get head time: {fqme} ?')
return None
async def get_head_time(
self,
fqme: str,
@ -779,8 +914,11 @@ class Client:
async def get_quote(
self,
contract: Contract,
timeout: float = 1,
tries: int = 100,
raise_on_timeout: bool = False,
) -> Ticker:
) -> Ticker|None:
'''
Return a single (snap) quote for symbol.
@ -789,30 +927,53 @@ class Client:
contract,
snapshot=True,
)
ready = ticker.updateEvent
ready: ticker.TickerUpdateEvent = ticker.updateEvent
# ensure a last price gets filled in before we deliver quote
timeouterr: Exception|None = None
warnset: bool = False
for _ in range(100):
if isnan(ticker.last):
for _ in range(tries):
# wait for a first update(Event) indicating a
# live quote feed.
if isnan(ticker.last):
try:
tkr = await asyncio.wait_for(
ready,
timeout=timeout,
)
if tkr:
break
except TimeoutError as err:
timeouterr = err
await asyncio.sleep(0.01)
continue
done, pending = await asyncio.wait(
[ready],
timeout=0.01,
)
if ready in done:
break
else:
if not warnset:
log.warning(
f'Quote for {contract} timed out: market is closed?'
f'Quote req timed out..\n'
f'Maybe the venue is closed?\n'
f'\n'
f'{asdict(contract)}'
)
warnset = True
else:
log.info(f'Got first quote for {contract}')
log.info(
'Got first quote for contract\n'
f'{contract}\n'
)
break
else:
if (
timeouterr
and
raise_on_timeout
):
raise timeouterr
if not warnset:
log.warning(
f'Contract {contract} is not returning a quote '
@ -820,6 +981,8 @@ class Client:
)
warnset = True
return None
return ticker
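# Hypothetical call sketch for the snap-quote method above (needs a
# live gateway connection, hence commented):
#
# ticker = await client.get_quote(
#     contract,
#     timeout=2,
#     tries=10,
#     raise_on_timeout=True,
# )
# if ticker is not None:
#     print(ticker.last)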
# async to be consistent for the client proxy, and cuz why not.
@ -867,8 +1030,12 @@ class Client:
outsideRth=True,
optOutSmartRouting=True,
# TODO: need to understand this setting better as
# it pertains to shit ass mms..
routeMarketableToBbo=True,
designatedLocation='SMART',
# TODO: make all orders GTC?
# https://interactivebrokers.github.io/tws-api/classIBApi_1_1Order.html#a95539081751afb9980f4c6bd1655a6ba
# goodTillDate=f"yyyyMMdd-HH:mm:ss",
@ -981,7 +1148,9 @@ _scan_ignore: set[tuple[str, int]] = set()
def get_config() -> dict[str, Any]:
conf, path = config.load('brokers')
conf, path = config.load(
conf_name='brokers',
)
section = conf.get('ib')
accounts = section.get('accounts')
@ -994,8 +1163,8 @@ def get_config() -> dict[str, Any]:
names = list(accounts.keys())
accts = section['accounts'] = bidict(accounts)
log.info(
f'brokers.toml defines {len(accts)} accounts: '
f'{pformat(names)}'
f'{path} defines {len(accts)} account aliases:\n'
f'{pformat(names)}\n'
)
if section is None:
@ -1062,7 +1231,7 @@ async def load_aio_clients(
try_ports = list(try_ports.values())
_err = None
accounts_def = config.load_accounts(['ib'])
accounts_def: dict[str, str] = config.load_accounts(['ib'])
ports = try_ports if port is None else [port]
combos = list(itertools.product(hosts, ports))
accounts_found: dict[str, Client] = {}
@ -1087,6 +1256,12 @@ async def load_aio_clients(
for i in range(connect_retries):
try:
log.info(
'Trying `ib_async` connect\n'
f'{host}: {port}\n'
f'clientId: {client_id}\n'
f'timeout: {connect_timeout}\n'
)
await ib.connectAsync(
host,
port,
@ -1101,7 +1276,9 @@ async def load_aio_clients(
client = Client(ib=ib, config=conf)
# update all actor-global caches
log.info(f"Caching client for {sockaddr}")
log.runtime(
f'Connected and caching `Client` @ {sockaddr!r}'
)
_client_cache[sockaddr] = client
break
@ -1116,37 +1293,59 @@ async def load_aio_clients(
OSError,
) as ce:
_err = ce
log.warning(
f'Failed to connect on {host}:{port} for {i} time with,\n'
f'{ib.client.apiError.value()}\n'
'retrying with a new client id..')
message: str = (
f'Failed to connect on {host}:{port} after {i} tries with\n'
f'{ib.client.apiError.value()!r}\n\n'
'Retrying with a new client id..\n'
)
log.runtime(message)
else:
# XXX report loudly if we never established after all
# re-tries
log.warning(message)
# Pre-collect all accounts available for this
# connection and map account names to this client
# instance.
for value in ib.accountValues():
acct_number = value.account
acct_number: str = value.account
entry = accounts_def.inverse.get(acct_number)
if not entry:
acnt_alias: str = accounts_def.inverse.get(acct_number)
if not acnt_alias:
# TODO: should we construct the below reco-ex from
# the existing config content?
_, path = config.load(
conf_name='brokers',
)
raise ValueError(
'No section in brokers.toml for account:'
f' {acct_number}\n'
f'Please add entry to continue using this API client'
'No alias in account section for account!\n'
f'Please add an acnt alias entry to your {path}\n'
'For example,\n\n'
'[ib.accounts]\n'
f'margin = {acct_number!r}\n'
'^^^^^^ <- you need this part!\n\n'
'This ensures `piker` will not leak private acnt info '
'to console output by default!\n'
)
# surjection of account names to operating clients.
if acct_number not in accounts_found:
accounts_found[entry] = client
if acnt_alias not in accounts_found:
accounts_found[acnt_alias] = client
# client._acnt_names.add(acnt_alias)
client._acnt_names.append(acnt_alias)
log.info(
f'Loaded accounts for client @ {host}:{port}\n'
f'{pformat(accounts_found)}'
)
if accounts_found:
log.info(
f'Loaded accounts for api client\n\n'
f'{pformat(accounts_found)}\n'
)
# XXX: why aren't we just updating this directy above
# instead of using the intermediary `accounts_found`?
_accounts2clients.update(accounts_found)
# XXX: why aren't we just updating this directy above
# instead of using the intermediary `accounts_found`?
_accounts2clients.update(accounts_found)
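The alias indirection is just a two-way mapping; a minimal `bidict` sketch (the account number here is made up):

from bidict import bidict

# hypothetical `[ib.accounts]` section loaded from brokers.toml
accounts_def = bidict({'margin': 'U1234567'})

# alias -> account number, the forward mapping
assert accounts_def['margin'] == 'U1234567'

# account number -> alias, the `.inverse` lookup the connect loop uses
assert accounts_def.inverse['U1234567'] == 'margin'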
# if we have no clients after the scan loop then error out.
if not _client_cache:
@ -1169,21 +1368,20 @@ async def load_aio_clients(
async def load_clients_for_trio(
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
chan: tractor.to_asyncio.LinkedTaskChannel,
) -> None:
'''
Pure async mngr proxy to ``load_aio_clients()``.
This is a bootstrap entrypoing to call from
a ``tractor.to_asyncio.open_channel_from()``.
This is a bootstrap entrypoint to call from
a `tractor.to_asyncio.open_channel_from()`.
'''
async with load_aio_clients() as accts2clients:
to_trio.send_nowait(accts2clients)
async with load_aio_clients(
disconnect_on_exit=False,
) as accts2clients:
chan.started_nowait(accts2clients)
# TODO: maybe a sync event to wait on instead?
await asyncio.sleep(float('inf'))
@ -1196,7 +1394,10 @@ async def open_client_proxies() -> tuple[
async with (
tractor.trionics.maybe_open_context(
acm_func=tractor.to_asyncio.open_channel_from,
kwargs={'target': load_clients_for_trio},
kwargs={
'target': load_clients_for_trio,
# ^XXX, kwarg to `open_channel_from()`
},
# lock around current actor task access
# TODO: maybe this should be the default in tractor?
@ -1306,7 +1507,7 @@ class MethodProxy:
self,
pattern: str,
) -> Union[dict[str, Any], trio.Event]:
) -> dict[str, Any]|trio.Event:
ev = self.event_table.get(pattern)
@ -1327,27 +1528,25 @@ class MethodProxy:
async def open_aio_client_method_relay(
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
chan: tractor.to_asyncio.LinkedTaskChannel,
client: Client,
event_consumers: dict[str, trio.Event],
) -> None:
# sync with `open_client_proxy()` caller
to_trio.send_nowait(client)
chan.started_nowait(client)
# TODO: separate channel for error handling?
client.inline_errors(to_trio)
client.inline_errors(chan)
# relay all method requests to ``asyncio``-side client and deliver
# back results
while not to_trio._closed:
msg = await from_trio.get()
while not chan._to_trio._closed: # <- TODO, better check like `._web_bs`?
msg: tuple[str, dict]|dict|None = await chan.get()
match msg:
case None: # termination sentinel
print('asyncio PROXY-RELAY SHUTDOWN')
log.info('asyncio `Client` method-proxy SHUTDOWN!')
break
case (meth_name, kwargs):
@ -1357,7 +1556,7 @@ async def open_aio_client_method_relay(
try:
resp = await meth(**kwargs)
# echo the msg back
to_trio.send_nowait({'result': resp})
chan.send_nowait({'result': resp})
except (
RequestError,
@ -1365,10 +1564,10 @@ async def open_aio_client_method_relay(
# TODO: relay all errors to trio?
# BaseException,
) as err:
to_trio.send_nowait({'exception': err})
chan.send_nowait({'exception': err})
case {'error': content}:
to_trio.send_nowait({'exception': content})
chan.send_nowait({'exception': content})
case _:
raise ValueError(f'Unhandled msg {msg}')
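The dispatch above reduces to a sentinel-terminated request/response loop; a pure-`asyncio` sketch using plain queues in place of the `tractor` channel:

import asyncio

async def method_relay(
    requests: asyncio.Queue,
    responses: asyncio.Queue,
    client,  # any object whose async methods we proxy; hypothetical here
) -> None:
    # serve `(method_name, kwargs)` tuples until a `None` sentinel,
    # mirroring the match-based dispatch above.
    while True:
        msg = await requests.get()
        match msg:
            case None:  # termination sentinel
                break
            case (meth_name, kwargs):
                meth = getattr(client, meth_name)
                try:
                    await responses.put({'result': await meth(**kwargs)})
                except Exception as err:
                    await responses.put({'exception': err})
            case {'error': content}:
                await responses.put({'exception': content})
            case _:
                raise ValueError(f'Unhandled msg {msg!r}')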
@ -1390,7 +1589,8 @@ async def open_client_proxy(
event_consumers=event_table,
) as (first, chan),
trio.open_nursery() as relay_n,
trionics.collapse_eg(), # loose-ify
trio.open_nursery() as relay_tn,
):
assert isinstance(first, Client)
@ -1430,7 +1630,7 @@ async def open_client_proxy(
continue
relay_n.start_soon(relay_events)
relay_tn.start_soon(relay_events)
yield proxy

View File

@ -20,7 +20,7 @@ Order and trades endpoints for use with ``piker``'s EMS.
"""
from __future__ import annotations
from contextlib import ExitStack
from collections import ChainMap
# from collections import ChainMap
from functools import partial
from pprint import pformat
import time
@ -34,6 +34,7 @@ import trio
from trio_typing import TaskStatus
import tractor
from tractor.to_asyncio import LinkedTaskChannel
from tractor import trionics
from ib_insync.contract import (
Contract,
)
@ -116,7 +117,11 @@ def pack_position(
symbol=fqme,
currency=con.currency,
size=float(pos.position),
avg_price=float(pos.avgCost) / float(con.multiplier or 1.0),
avg_price=(
float(pos.avgCost)
/
float(con.multiplier or 1.0)
),
),
)
@ -357,6 +362,10 @@ async def update_and_audit_pos_msg(
size=ibpos.position,
avg_price=pikerpos.ppu,
# XXX ensures matching even if multiple venue-names
# in `.bs_fqme`, likely from txn records..
bs_mktid=mkt.bs_mktid,
)
ibfmtmsg: str = pformat(ibpos._asdict())
@ -407,7 +416,7 @@ async def update_and_audit_pos_msg(
# TODO: make this a "propaganda" log level?
if ibpos.avgCost != msg.avg_price:
log.warning(
log.debug(
f'IB "FIFO" avg price for {msg.symbol} is DIFF:\n'
f'ib: {ibfmtmsg}\n'
'---------------------------\n'
@ -425,7 +434,8 @@ async def aggr_open_orders(
) -> None:
'''
Collect all open orders from client and fill in `order_msgs: list`.
Collect all open orders from client and fill in `order_msgs:
list`.
'''
trades: list[Trade] = client.ib.openTrades()
@ -546,7 +556,10 @@ async def open_trade_dialog(
),
# TODO: do this as part of `open_account()`!?
open_symcache('ib', only_from_memcache=True) as symcache,
open_symcache(
'ib',
only_from_memcache=True,
) as symcache,
):
# Open a trade ledgers stack for appending trade records over
# multiple accounts.
@ -554,8 +567,10 @@ async def open_trade_dialog(
ledgers: dict[str, TransactionLedger] = {}
tables: dict[str, Account] = {}
order_msgs: list[Status] = []
conf = get_config()
accounts_def_inv: bidict[str, str] = bidict(conf['accounts']).inverse
conf: dict = get_config()
accounts_def_inv: bidict[str, str] = bidict(
conf['accounts']
).inverse
with (
ExitStack() as lstack,
@ -705,7 +720,11 @@ async def open_trade_dialog(
# client-account and build out position msgs to deliver to
# EMS.
for acctid, acnt in tables.items():
active_pps, closed_pps = acnt.dump_active()
active_pps: dict[str, Position]
(
active_pps,
closed_pps,
) = acnt.dump_active()
for pps in [active_pps, closed_pps]:
piker_pps: list[Position] = list(pps.values())
@ -721,6 +740,7 @@ async def open_trade_dialog(
)
if ibpos:
bs_mktid: str = str(ibpos.contract.conId)
msg = await update_and_audit_pos_msg(
acctid,
pikerpos,
@ -738,7 +758,7 @@ async def open_trade_dialog(
f'UNEXPECTED POSITION says IB => {msg.symbol}\n'
'Maybe they LIQUIDATED YOU or your ledger is wrong?\n'
)
log.error(logmsg)
log.debug(logmsg)
await ctx.started((
all_positions,
@ -747,21 +767,22 @@ async def open_trade_dialog(
async with (
ctx.open_stream() as ems_stream,
trio.open_nursery() as n,
trionics.collapse_eg(),
trio.open_nursery() as tn,
):
# relay existing open orders to ems
for msg in order_msgs:
await ems_stream.send(msg)
for client in set(aioclients.values()):
trade_event_stream: LinkedTaskChannel = await n.start(
trade_event_stream: LinkedTaskChannel = await tn.start(
open_trade_event_stream,
client,
)
# start order request handler **before** local trades
# event loop
n.start_soon(
tn.start_soon(
handle_order_requests,
ems_stream,
accounts_def,
@ -769,7 +790,7 @@ async def open_trade_dialog(
)
# allocate event relay tasks for each client connection
n.start_soon(
tn.start_soon(
deliver_trade_events,
trade_event_stream,
@ -846,6 +867,18 @@ async def emit_pp_update(
# con: Contract = fill.contract
# provide a backup fqme -> MktPair table in case the
# symcache does not (yet) have an entry for the current mkt
# txn.
backup_table: dict[str, MktPair] = {}
for tid, txn in trans.items():
fqme: str = txn.fqme
if fqme not in ledger.symcache.mktmaps:
# bs_mktid: str = txn.bs_mktid
backup_table[fqme] = client._cons2mkts[
client._contracts[fqme]
]
acnt.update_from_ledger(
trans,
@ -855,7 +888,7 @@ async def emit_pp_update(
# TODO: remove this hack by attempting to symcache an
# incrementally updated table?
_mktmap_table=client._contracts
_mktmap_table=backup_table,
)
# re-compute all positions that have changed state.
@ -1171,7 +1204,14 @@ async def deliver_trade_events(
pos
and fill
):
assert fill.commissionReport == cr
now_cr: CommissionReport = fill.commissionReport
if (now_cr != cr):
log.warning(
'UhhHh ib updated the commission report mid-fill..?\n'
f'was: {pformat(cr)}\n'
f'now: {pformat(now_cr)}\n'
)
await emit_pp_update(
ems_stream,
accounts_def,
@ -1222,34 +1262,52 @@ async def deliver_trade_events(
# never relay errors for non-broker related issues
# https://interactivebrokers.github.io/tws-api/message_codes.html
code: int = err['error_code']
if code in {
200, # uhh
reason: str = err['reason']
reqid: str = str(err['reqid'])
# "Warning:" msg codes,
# https://interactivebrokers.github.io/tws-api/message_codes.html#warning_codes
# - 2109: 'Outside Regular Trading Hours'
if 'Warning:' in reason:
log.warning(
f'Order-API-warning: {code!r}\n'
f'reqid: {reqid!r}\n'
f'\n'
f'{pformat(err)}\n'
# ^TODO? should we just print the `reason`
# not the full `err`-dict?
)
continue
# XXX known special (ignore) cases
elif code in {
200, # uhh.. no idea
# hist pacing / connectivity
162,
165,
# WARNING codes:
# https://interactivebrokers.github.io/tws-api/message_codes.html#warning_codes
# Attribute 'Outside Regular Trading Hours' is
# ignored based on the order type and
# destination. PlaceOrder is now being
# processed.
2109,
# XXX: lol this isn't even documented..
# 'No market data during competing live session'
1669,
}:
log.error(
f'Order-API-error which is non-cancel-causing ?!\n'
f'\n'
f'{pformat(err)}\n'
)
continue
reqid: str = str(err['reqid'])
reason: str = err['reason']
if err['reqid'] == -1:
log.error(f'TWS external order error:\n{pformat(err)}')
log.error(
f'TWS external order error ??\n'
f'{pformat(err)}\n'
)
flow: ChainMap = flows.get(reqid)
flow: dict = dict(
flows.get(reqid)
or {}
)
# TODO: we don't want to relay data feed / lookup errors
# so we need some further filtering logic here..
@ -1260,7 +1318,7 @@ async def deliver_trade_events(
reason=reason,
broker_details={
'name': 'ib',
'flow': dict(flow),
'flow': flow,
},
)
flows.add_msg(reqid, err_msg.to_dict())
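The triage order above can be summarized as a small classifier; a sketch using the same codes (the return labels are made up, and the real loop still builds a `BrokerdError` for external TWS orders):

def classify_ib_error(err: dict) -> str:
    # rough triage mirroring the relay loop: warnings pass through,
    # known-benign codes are only logged, the rest get relayed.
    code: int = err['error_code']
    reason: str = err['reason']
    if 'Warning:' in reason:
        return 'warn'          # eg. 2109 outside-RTH notice
    if code in {200, 162, 165, 2109, 1669}:
        return 'ignore'        # benign / non-cancel-causing
    if err.get('reqid') == -1:
        return 'external'      # order placed from TWS itself
    return 'relay'

assert classify_ib_error(
    {'error_code': 2109, 'reason': 'Warning: ...', 'reqid': 4}
) == 'warn'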

File diff suppressed because it is too large

View File

@ -31,9 +31,14 @@ from typing import (
)
from bidict import bidict
import pendulum
from ib_insync.objects import (
from pendulum import (
DateTime,
parse,
from_timestamp,
)
from ib_insync import (
Contract,
Commodity,
Fill,
Execution,
CommissionReport,
@ -65,10 +70,11 @@ tx_sort: Callable = partial(
iter_by_dt,
parsers={
'dateTime': parse_flex_dt,
'datetime': pendulum.parse,
# for some some fucking 2022 and
# back options records...fuck me.
'date': pendulum.parse,
'datetime': parse,
# XXX: for some fucking 2022 and
# back options records.. f@#$ me..
'date': parse,
}
)
@ -88,15 +94,38 @@ def norm_trade(
conid: str = str(record.get('conId') or record['conid'])
bs_mktid: str = str(conid)
comms = record.get('commission')
if comms is None:
comms = -1*record['ibCommission']
price = record.get('price') or record['tradePrice']
# NOTE: sometimes weird records (like BTTX?)
# have no field for this?
comms: float = -1 * (
record.get('commission')
or record.get('ibCommission')
or 0
)
if not comms:
log.warning(
'No commissions found for record?\n'
f'{pformat(record)}\n'
)
price: float = (
record.get('price')
or record.get('tradePrice')
)
if price is None:
log.warning(
'No `price` field found in record?\n'
'Skipping normalization..\n'
f'{pformat(record)}\n'
)
return None
# the api doesn't do the -/+ on the quantity for you but flex
# records do.. are you fucking serious ib...!?
size = record.get('quantity') or record['shares'] * {
size: float|int = (
record.get('quantity')
or record['shares']
) * {
'BOT': 1,
'SLD': -1,
}[record['side']]
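A runnable sketch of the field fallbacks above, fed a made-up flex-style record (the warning/skip logging is elided):

def norm_fill_fields(record: dict) -> tuple[float, float | None, float]:
    # (comms, price, size) extraction with the same API-vs-flex
    # fallbacks as above; price may be missing -> None.
    comms: float = -1 * (
        record.get('commission')
        or record.get('ibCommission')
        or 0
    )
    price = record.get('price') or record.get('tradePrice')
    # flex records pre-sign the qty, raw API records do not..
    size = (
        record.get('quantity')
        or record['shares']
    ) * {'BOT': 1, 'SLD': -1}[record['side']]
    return comms, price, size

print(norm_fill_fields(
    {'ibCommission': 1.5, 'tradePrice': 42.0, 'shares': 10, 'side': 'SLD'}
))  # -> (-1.5, 42.0, -10)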
@ -127,26 +156,31 @@ def norm_trade(
# otype = tail[6]
# strike = tail[7:]
print(f'skipping opts contract {symbol}')
log.warning(
f'Skipping option contract -> NO SUPPORT YET!\n'
f'{symbol}\n'
)
return None
# timestamping is way different in API records
dtstr = record.get('datetime')
date = record.get('date')
flex_dtstr = record.get('dateTime')
dtstr: str = record.get('datetime')
date: str = record.get('date')
flex_dtstr: str = record.get('dateTime')
if dtstr or date:
dt = pendulum.parse(dtstr or date)
dt: DateTime = parse(dtstr or date)
elif flex_dtstr:
# probably a flex record with a wonky non-std timestamp..
dt = parse_flex_dt(record['dateTime'])
dt: DateTime = parse_flex_dt(record['dateTime'])
# special handling of symbol extraction from
# flex records using some ad-hoc schema parsing.
asset_type: str = record.get(
'assetCategory'
) or record.get('secType', 'STK')
asset_type: str = (
record.get('assetCategory')
or record.get('secType')
or 'STK'
)
if (expiry := (
record.get('lastTradeDateOrContractMonth')
@ -237,6 +271,21 @@ def norm_trade(
name=symbol.lower(),
atype='option',
tx_tick=Decimal('1'),
# TODO: we should probably always cast to the
# `Contract` instance then dict-serialize that for
# the `.info` field!
# info=asdict(Option()),
)
case 'CMDTY':
from .symbols import _adhoc_symbol_map
con_kwargs, _ = _adhoc_symbol_map[symbol.upper()]
dst = Asset(
name=symbol.lower(),
atype='commodity',
tx_tick=Decimal('1'),
info=asdict(Commodity(**con_kwargs)),
)
# try to build out piker fqme from record.
@ -341,6 +390,7 @@ def norm_trade_records(
if txn is None:
continue
# inject txns sorted by datetime
insort(
records,
txn,
@ -389,7 +439,7 @@ def api_trades_to_ledger_entries(
txn_dict[attr_name] = val
tid = str(txn_dict['execId'])
dt = pendulum.from_timestamp(txn_dict['time'])
dt = from_timestamp(txn_dict['time'])
txn_dict['datetime'] = str(dt)
acctid = accounts[txn_dict['acctNumber']]
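For reference, the two `pendulum` entrypoints now imported directly behave as below (sample values are made up; the non-std flex-report stamp is massaged separately by `parse_flex_dt()`):

import pendulum
from pendulum import DateTime

# std ISO-ish strings from live API records
dt: DateTime = pendulum.parse('2022-07-20T14:28:34')

# epoch-seconds from execution objects
dt2: DateTime = pendulum.from_timestamp(1658347714)

print(dt, dt2)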

View File

@ -29,7 +29,7 @@ from typing import (
TYPE_CHECKING,
)
from fuzzywuzzy import process as fuzzy
from rapidfuzz import process as fuzzy
import ib_insync as ibis
import tractor
import trio
@ -165,6 +165,7 @@ _exch_skip_list = {
'MEXI', # mexican stocks
# no idea
'NSE',
'VALUE',
'FUNDSERV',
'SWB2',
@ -208,12 +209,15 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
break
ib_client = proxy._aio_ns.ib
log.info(f'Using {ib_client} for symbol search')
log.info(
f'Using API client for symbol-search\n'
f'{ib_client}\n'
)
last = time.time()
async for pattern in stream:
log.info(f'received {pattern}')
now = time.time()
now: float = time.time()
# this causes tractor hang...
# assert 0
@ -260,7 +264,9 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
# defined adhoc symbol set.
stock_results = []
async def stash_results(target: Awaitable[list]):
async def extend_results(
target: Awaitable[list]
) -> None:
try:
results = await target
except tractor.trionics.Lagged:
@ -269,11 +275,11 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
stock_results.extend(results)
for i in range(10):
for _ in range(10):
with trio.move_on_after(3) as cs:
async with trio.open_nursery() as sn:
sn.start_soon(
stash_results,
extend_results,
proxy.search_symbols(
pattern=pattern,
upto=5,
@ -288,11 +294,13 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
f'Search timeout? {proxy._aio_ns.ib.client}'
)
continue
else:
elif stock_results:
break
# else:
# await tractor.pause()
# # match against our ad-hoc set immediately
# adhoc_matches = fuzzy.extractBests(
# adhoc_matches = fuzzy.extract(
# pattern,
# list(_adhoc_futes_set),
# score_cutoff=90,
@ -305,7 +313,7 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
# adhoc_matches}
log.debug(f'fuzzy matching stocks {stock_results}')
stock_matches = fuzzy.extractBests(
stock_matches = fuzzy.extract(
pattern,
stock_results,
score_cutoff=50,
@ -423,9 +431,9 @@ def con2fqme(
except KeyError:
pass
suffix = con.primaryExchange or con.exchange
symbol = con.symbol
expiry = con.lastTradeDateOrContractMonth or ''
suffix: str = con.primaryExchange or con.exchange
symbol: str = con.symbol
expiry: str = con.lastTradeDateOrContractMonth or ''
match con:
case ibis.Option():
@ -517,7 +525,21 @@ async def get_mkt_info(
venue = con.primaryExchange or con.exchange
price_tick: Decimal = Decimal(str(details.minTick))
# price_tick: Decimal = Decimal('0.01')
ib_min_tick_gt_2: Decimal = Decimal('0.01')
if (
price_tick < ib_min_tick_gt_2
):
# TODO: we need to add some kinda dynamic rounding sys
# to our MktPair i guess?
# not sure where the logic should sit, but likely inside
# the `.clearing._ems` i suppose...
log.warning(
'IB seems to disallow a min price tick < 0.01 '
'when the price is > 2.0..?\n'
f'Decreasing min tick precision for {fqme} to 0.01'
)
# price_tick = ib_min_tick
# await tractor.pause()
if atype == 'stock':
# XXX: GRRRR they don't support fractional share sizes for

View File

@ -27,18 +27,21 @@ from typing import (
)
import time
import httpx
import pendulum
import asks
from fuzzywuzzy import process as fuzzy
import numpy as np
import urllib.parse
import hashlib
import hmac
import base64
import tractor
import trio
from piker import config
from piker.data import def_iohlcv_fields
from piker.data import (
def_iohlcv_fields,
match_from_pairs,
)
from piker.accounting._mktinfo import (
Asset,
digits_to_dec,
@ -58,6 +61,11 @@ log = get_logger('piker.brokers.kraken')
# <uri>/<version>/
_url = 'https://api.kraken.com/0'
_headers: dict[str, str] = {
'User-Agent': 'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
}
# TODO: this is the only backend providing this right?
# in which case we should drop it from the defaults and
# instead make a custom fields descr in this module!
@ -68,12 +76,18 @@ _symbol_info_translation: dict[str, str] = {
def get_config() -> dict[str, Any]:
'''
Load our section from `piker/brokers.toml`.
conf, path = config.load()
section = conf.get('kraken')
if section is None:
log.warning(f'No config section found for kraken in {path}')
'''
conf, path = config.load(
conf_name='brokers',
touch_if_dne=True,
)
if (section := conf.get('kraken')) is None:
log.warning(
f'No config section found for kraken in {path}'
)
return {}
return section
@ -106,16 +120,19 @@ class InvalidKey(ValueError):
class Client:
# symbol mapping from all names to the altname
_altnames: dict[str, str] = {}
# key-ed by kraken's own bs_mktids (like fricking "XXMRZEUR")
# with said keys used directly from EP responses so that ledger
# parsing can be easily accomplished from both trade-event-msgs
# and offline toml files
# assets and mkt pairs are key-ed by kraken's ReST response
# symbol-bs_mktids (we call them "X-keys" like fricking
# "XXMRZEUR"). these keys used directly since ledger endpoints
# return transaction sets keyed with the same set!
_Assets: dict[str, Asset] = {}
_AssetPairs: dict[str, Pair] = {}
# offer lookup tables for all .altname and .wsname
# to the equivalent .xname so that various symbol-schemas
# can be mapped to `Pair`s in the tables above.
_altnames: dict[str, str] = {}
_wsnames: dict[str, str] = {}
# key-ed by `Pair.bs_fqme: str`, and thus used for search
# allowing for lookup using piker's own FQME symbology sys.
_pairs: dict[str, Pair] = {}
@ -124,16 +141,15 @@ class Client:
def __init__(
self,
config: dict[str, str],
httpx_client: httpx.AsyncClient,
name: str = '',
api_key: str = '',
secret: str = ''
) -> None:
self._sesh = asks.Session(connections=4)
self._sesh.base_location = _url
self._sesh.headers.update({
'User-Agent':
'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
})
self._sesh: httpx.AsyncClient = httpx_client
self._name = name
self._api_key = api_key
self._secret = secret
@ -155,10 +171,9 @@ class Client:
method: str,
data: dict,
) -> dict[str, Any]:
resp = await self._sesh.post(
path=f'/public/{method}',
resp: httpx.Response = await self._sesh.post(
url=f'/public/{method}',
json=data,
timeout=float('inf')
)
return resproc(resp, log)
@ -169,18 +184,18 @@ class Client:
uri_path: str
) -> dict[str, Any]:
headers = {
'Content-Type':
'application/x-www-form-urlencoded',
'API-Key':
self._api_key,
'API-Sign':
get_kraken_signature(uri_path, data, self._secret)
'Content-Type': 'application/x-www-form-urlencoded',
'API-Key': self._api_key,
'API-Sign': get_kraken_signature(
uri_path,
data,
self._secret,
),
}
resp = await self._sesh.post(
path=f'/private/{method}',
resp: httpx.Response = await self._sesh.post(
url=f'/private/{method}',
data=data,
headers=headers,
timeout=float('inf')
)
return resproc(resp, log)
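A sketch of the standard Kraken signing scheme which `get_kraken_signature()` presumably implements per the public API docs: HMAC-SHA512 over the URI path plus a SHA256 of nonce + POST data, keyed with the base64-decoded secret.

import base64
import hashlib
import hmac
import urllib.parse

def kraken_signature(
    uri_path: str,
    data: dict,   # must include a 'nonce' field
    secret: str,  # base64-encoded api secret
) -> str:
    postdata = urllib.parse.urlencode(data)
    encoded = (str(data['nonce']) + postdata).encode()
    message = uri_path.encode() + hashlib.sha256(encoded).digest()
    mac = hmac.new(base64.b64decode(secret), message, hashlib.sha512)
    return base64.b64encode(mac.digest()).decode()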
@ -209,8 +224,8 @@ class Client:
by_bsmktid: dict[str, dict] = resp['result']
balances: dict = {}
for respname, bal in by_bsmktid.items():
asset: Asset = self._Assets[respname]
for xname, bal in by_bsmktid.items():
asset: Asset = self._Assets[xname]
# TODO: which KEY should we use? it's used to index
# the `Account.pps: dict` ..
@ -358,8 +373,7 @@ class Client:
# 1658347714, 'status': 'Success'}]}
if xfers:
import tractor
await tractor.pp()
await tractor.pause()
trans: dict[str, Transaction] = {}
for entry in xfers:
@ -367,7 +381,6 @@ class Client:
asset_key: str = entry['asset']
asset: Asset = self._Assets[asset_key]
asset_key: str = asset.name.lower()
# asset_key: str = self._altnames[asset_key].lower()
# XXX: this is in the asset units (likely) so it isn't
# quite the same as a commisions cost necessarily..)
@ -473,25 +486,32 @@ class Client:
if err:
raise SymbolNotFound(pair_patt)
# NOTE: we key pairs by our custom defined `.bs_fqme`
# field since we want to offer search over this key
# set, callers should fill out lookup tables for
# kraken's bs_mktid keys to map to these keys!
for key, data in resp['result'].items():
pair = Pair(respname=key, **data)
# NOTE: we try to key pairs by our custom defined
# `.bs_fqme` field since we want to offer search over
# this pattern set, callers should fill out lookup
# tables for kraken's bs_mktid keys to map to these
# keys!
# XXX: FURTHER kraken's data eng team decided to offer
# 3 frickin market-pair-symbol key sets depending on
# which frickin API is being used.
# Example for the trading pair 'LTC<EUR'
# - the "X-key" from rest eps 'XLTCZEUR'
# - the "websocket key" from ws msgs is 'LTC/EUR'
# - the "altname key" also delivered in pair info is 'LTCEUR'
for xkey, data in resp['result'].items():
# always cache so we can possibly do faster lookup
self._AssetPairs[key] = pair
# NOTE: always cache in pairs tables for faster lookup
with tractor.devx.maybe_open_crash_handler(): # as bxerr:
pair = Pair(xname=xkey, **data)
bs_fqme: str = pair.bs_fqme
self._pairs[bs_fqme] = pair
# register the piker pair under all monikers, a giant flat
# surjection of all possible (and stupid) kraken names to
# the FMQE style piker key.
self._altnames[pair.altname] = bs_fqme
self._altnames[pair.wsname] = bs_fqme
# register the above `Pair` structs for all
# key-sets/monikers: a set of 4 (frickin) tables
# acting as a combined surjection of all possible
# (and stupid) kraken names to their `Pair` obj.
self._AssetPairs[xkey] = pair
self._pairs[pair.bs_fqme] = pair
self._altnames[pair.altname] = pair
self._wsnames[pair.wsname] = pair
if pair_patt is not None:
return next(iter(self._pairs.items()))[1]
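The 4-table surjection can be illustrated with a toy stand-in (`FakePair` below is hypothetical, not the real `Pair` struct):

from dataclasses import dataclass

@dataclass
class FakePair:
    xname: str     # ReST key, eg. 'XLTCZEUR'
    wsname: str    # ws-subs key, eg. 'LTC/EUR'
    altname: str   # 'LTCEUR'
    bs_fqme: str   # piker-native key

pair = FakePair('XLTCZEUR', 'LTC/EUR', 'LTCEUR', 'LTCEUR')

# the 4 lookup tables built per-pair by `.asset_pairs()`
_AssetPairs = {pair.xname: pair}
_pairs = {pair.bs_fqme: pair}
_altnames = {pair.altname: pair}
_wsnames = {pair.wsname: pair}

# any schema's key resolves to the same struct
assert _AssetPairs['XLTCZEUR'] is _wsnames['LTC/EUR']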
@ -506,12 +526,13 @@ class Client:
Load all market pair info build and cache it for downstream
use.
An ``._altnames: dict[str, str]`` is available for looking
up the piker-native FQME style `Pair.bs_fqme: str` for any
input of the three (yes, it's that idiotic) available
key-sets that kraken frickin offers depending on the API
including the .altname, .wsname and the weird ass default
set they return in rest responses..
Multiple pair info lookup tables (like ``._altnames:
dict[str, str]``) are created for looking up the
piker-native `Pair`-struct from any input of the three
(yes, it's that idiotic..) available symbol/pair-key-sets
that kraken frickin offers depending on the API including
the .altname, .wsname and the weird ass default set they
return in ReST responses .xname..
'''
if (
@ -539,13 +560,17 @@ class Client:
await self.get_mkt_pairs()
assert self._pairs, '`Client.get_mkt_pairs()` was never called!?'
matches = fuzzy.extractBests(
pattern,
self._pairs,
matches: dict[str, Pair] = match_from_pairs(
pairs=self._pairs,
query=pattern.upper(),
score_cutoff=50,
)
# repack in dict form
return {item[0].altname: item[0] for item in matches}
# repack in .altname-keyed output table
return {
pair.altname: pair
for pair in matches.values()
}
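`rapidfuzz.process.extract()` accepts a mapping and yields `(value, score, key)` triples, which is presumably what `match_from_pairs()` builds on; a toy run with made-up fqme values:

from rapidfuzz import process

# toy bs_fqme-keyed table standing in for `Client._pairs`
pairs = {
    'XBTUSD': 'xbtusd.spot.kraken',
    'ETHUSD': 'ethusd.spot.kraken',
    'XMREUR': 'xmreur.spot.kraken',
}
# returns (value, score, key) triples for dict inputs
matches = process.extract(
    'xbt',
    pairs,
    limit=3,
    score_cutoff=30,
)
print(matches)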
async def bars(
self,
@ -628,7 +653,7 @@ class Client:
def to_bs_fqme(
cls,
pair_str: str
) -> tuple[str, Pair]:
) -> str:
'''
Normalize symbol names to a 3x3 pair from the global
definition map which we build out from the data retrieved from
@ -636,7 +661,7 @@ class Client:
'''
try:
return cls._altnames[pair_str.upper()]
return cls._altnames[pair_str.upper()].bs_fqme
except KeyError as ke:
raise SymbolNotFound(f'kraken has no {ke.args[0]}')
@ -644,24 +669,36 @@ class Client:
@acm
async def get_client() -> Client:
conf = get_config()
if conf:
client = Client(
conf,
conf: dict[str, Any] = get_config()
async with httpx.AsyncClient(
base_url=_url,
headers=_headers,
# TODO: don't break these up and just do internal
# conf lookups instead..
name=conf['key_descr'],
api_key=conf['api_key'],
secret=conf['secret']
)
else:
client = Client({})
# TODO: is there a way to numerate this?
# https://www.python-httpx.org/advanced/clients/#why-use-a-client
# connections=4
) as trio_client:
if conf:
client = Client(
conf,
httpx_client=trio_client,
# at startup, load all symbols, and asset info in
# batch requests.
async with trio.open_nursery() as nurse:
nurse.start_soon(client.get_assets)
await client.get_mkt_pairs()
# TODO: don't break these up and just do internal
# conf lookups instead..
name=conf['key_descr'],
api_key=conf['api_key'],
secret=conf['secret']
)
else:
client = Client(
conf={},
httpx_client=trio_client,
)
yield client
# at startup, load all symbols, and asset info in
# batch requests.
async with trio.open_nursery() as nurse:
nurse.start_soon(client.get_assets)
await client.get_mkt_pairs()
yield client

View File

@ -175,9 +175,8 @@ async def handle_order_requests(
case {
'account': 'kraken.spot' as account,
'action': action,
} if action in {'buy', 'sell'}:
'action': 'buy'|'sell',
}:
# validate
order = BrokerdOrder(**msg)
@ -262,6 +261,12 @@ async def handle_order_requests(
} | extra
log.info(f'Submitting WS order request:\n{pformat(req)}')
# NOTE HOWTO, debug order requests
#
# if 'XRP' in pair:
# await tractor.pause()
await ws.send_msg(req)
# placehold for sanity checking in relay loop
@ -407,7 +412,7 @@ def trades2pps(
# included?
account='kraken.' + acctid,
symbol=p.mkt.fqme,
size=p.size,
size=p.cumsize,
avg_price=p.ppu,
currency='',
)
@ -513,6 +518,7 @@ async def open_trade_dialog(
ledger_trans: dict[str, Transaction] = await norm_trade_records(
ledger,
client,
api_name_set='xname',
)
if not acnt.pps:
@ -534,6 +540,7 @@ async def open_trade_dialog(
api_trans: dict[str, Transaction] = await norm_trade_records(
tids2trades,
client,
api_name_set='xname',
)
# retrieve kraken reported balances
@ -542,7 +549,7 @@ async def open_trade_dialog(
# to be reloaded.
balances: dict[str, float] = await client.get_balances()
verify_balances(
await verify_balances(
acnt,
src_fiat,
balances,
@ -610,18 +617,18 @@ async def open_trade_dialog(
# enter relay loop
await handle_order_updates(
client,
ws,
stream,
ems_stream,
apiflows,
ids,
reqids2txids,
acnt,
api_trans,
acctid,
acc_name,
token,
client=client,
ws=ws,
ws_stream=stream,
ems_stream=ems_stream,
apiflows=apiflows,
ids=ids,
reqids2txids=reqids2txids,
acnt=acnt,
ledger=ledger,
acctid=acctid,
acc_name=acc_name,
token=token,
)
@ -637,7 +644,8 @@ async def handle_order_updates(
# transaction records which will be updated
# on new trade clearing events (aka order "fills")
ledger_trans: dict[str, Transaction],
ledger: TransactionLedger,
# ledger_trans: dict[str, Transaction],
acctid: str,
acc_name: str,
token: str,
@ -697,7 +705,8 @@ async def handle_order_updates(
# if tid not in ledger_trans
}
for tid, trade in trades.items():
assert tid not in ledger_trans
# assert tid not in ledger_trans
assert tid not in ledger
txid = trade['ordertxid']
reqid = trade.get('userref')
@ -743,12 +752,19 @@ async def handle_order_updates(
new_trans = await norm_trade_records(
trades,
client,
api_name_set='wsname',
)
ppmsgs = trades2pps(
acnt,
acctid,
new_trans,
ppmsgs: list[BrokerdPosition] = trades2pps(
acnt=acnt,
ledger=ledger,
acctid=acctid,
new_trans=new_trans,
)
# ppmsgs = trades2pps(
# acnt,
# acctid,
# new_trans,
# )
for pp_msg in ppmsgs:
await ems_stream.send(pp_msg)
@ -1074,6 +1090,8 @@ async def handle_order_updates(
f'Failed to {action} order {reqid}:\n'
f'{errmsg}'
)
# if tractor._state.debug_mode():
# await tractor.pause()
symbol: str = 'N/A'
if chain := apiflows.get(reqid):

View File

@ -64,9 +64,19 @@ def norm_trade(
'sell': -1,
}[record['type']]
rest_pair_key: str = record['pair']
pair: Pair = pairs[rest_pair_key]
# NOTE: this value may be either the websocket OR the rest schema
# so we need to detect the key format and then choose the
# correct symbol lookup table to eventually get a ``Pair``..
# See internals of `Client.asset_pairs()` for deats!
src_pair_key: str = record['pair']
# XXX: kraken's data engineering is soo bad they require THREE
# different pair schemas (more or less seemingly tied to
# transport-APIs)..LITERALLY they return different market id
# pairs in the ledger endpoints vs. the websocket event subs..
# lookup the pair using the appropriately provided table
# depending on the API-key-schema..
pair: Pair = pairs[src_pair_key]
fqme: str = pair.bs_fqme.lower() + '.kraken'
return Transaction(
@ -83,6 +93,7 @@ def norm_trade(
async def norm_trade_records(
ledger: dict[str, Any],
client: Client,
api_name_set: str = 'xname',
) -> dict[str, Transaction]:
'''
@ -97,11 +108,16 @@ async def norm_trade_records(
# mkt: MktPair = (await get_mkt_info(manual_fqme))[0]
# fqme: str = mkt.fqme
# assert fqme == manual_fqme
pairs: dict[str, Pair] = {
'xname': client._AssetPairs,
'wsname': client._wsnames,
'altname': client._altnames,
}[api_name_set]
records[tid] = norm_trade(
tid,
record,
pairs=client._AssetPairs,
pairs=pairs,
)
return records

View File

@ -21,7 +21,6 @@ Symbology defs and search.
from decimal import Decimal
import tractor
from fuzzywuzzy import process as fuzzy
from piker._cacheables import (
async_lifo_cache,
@ -41,9 +40,14 @@ from piker.accounting._mktinfo import (
)
# https://www.kraken.com/features/api#get-tradable-pairs
class Pair(Struct):
respname: str # idiotic bs_mktid equiv i guess?
'''
A tradable asset pair as schema-defined by,
https://docs.kraken.com/api/docs/rest-api/get-tradable-asset-pairs
'''
xname: str # idiotic bs_mktid equiv i guess?
altname: str # alternate pair name
wsname: str # WebSocket pair name (if available)
aclass_base: str # asset class of base component
@ -53,7 +57,6 @@ class Pair(Struct):
lot: str # volume lot size
cost_decimals: int
costmin: float
pair_decimals: int # scaling decimal places for pair
lot_decimals: int # scaling decimal places for volume
@ -79,6 +82,7 @@ class Pair(Struct):
tick_size: float # min price step size
status: str
costmin: str|None = None # XXX, only some mktpairs?
short_position_limit: float = 0
long_position_limit: float = float('inf')
@ -94,7 +98,7 @@ class Pair(Struct):
make up their minds on a better key set XD
'''
return self.respname
return self.xname
@property
def price_tick(self) -> Decimal:
@ -136,19 +140,10 @@ async def open_symbol_search(ctx: tractor.Context) -> None:
await ctx.started(cache)
async with ctx.open_stream() as stream:
async for pattern in stream:
matches = fuzzy.extractBests(
pattern,
client._pairs,
score_cutoff=50,
await stream.send(
await client.search_symbols(pattern)
)
# repack in dict form
await stream.send({
pair[0].altname: pair[0]
for pair in matches
})
@async_lifo_cache()

View File

@ -16,10 +16,9 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Kucoin broker backend
Kucoin cex API backend.
'''
from contextlib import (
asynccontextmanager as acm,
aclosing,
@ -41,9 +40,8 @@ from typing import (
import wsproto
from uuid import uuid4
from fuzzywuzzy import process as fuzzy
from trio_typing import TaskStatus
import asks
import httpx
from bidict import bidict
import numpy as np
import pendulum
@ -64,8 +62,11 @@ from piker._cacheables import (
)
from piker.log import get_logger
from piker.data.validate import FeedInit
from piker.types import Struct
from piker.data import def_iohlcv_fields
from piker.types import Struct # NOTE, this is already a `tractor.msg.Struct`
from piker.data import (
def_iohlcv_fields,
match_from_pairs,
)
from piker.data._web_bs import (
open_autorecon_ws,
NoBsWs,
@ -97,9 +98,18 @@ class KucoinMktPair(Struct, frozen=True):
def size_tick(self) -> Decimal:
return Decimal(str(self.quoteMinSize))
callauctionFirstStageStartTime: None|float
callauctionIsEnabled: bool
callauctionPriceCeiling: float|None
callauctionPriceFloor: float|None
callauctionSecondStageStartTime: float|None
callauctionThirdStageStartTime: float|None
enableTrading: bool
feeCategory: int
feeCurrency: str
isMarginEnabled: bool
makerFeeCoefficient: float
market: str
minFunds: float
name: str
@ -109,7 +119,10 @@ class KucoinMktPair(Struct, frozen=True):
quoteIncrement: float
quoteMaxSize: float
quoteMinSize: float
st: bool
symbol: str # our bs_mktid, kucoin's internal id
takerFeeCoefficient: float
tradingStartTime: float|None
class AccountTrade(Struct, frozen=True):
@ -210,8 +223,12 @@ def get_config() -> BrokerConfig | None:
class Client:
def __init__(self) -> None:
self._config: BrokerConfig | None = get_config()
def __init__(
self,
httpx_client: httpx.AsyncClient,
) -> None:
self._http: httpx.AsyncClient = httpx_client
self._config: BrokerConfig|None = get_config()
self._pairs: dict[str, KucoinMktPair] = {}
self._fqmes2mktids: bidict[str, str] = bidict()
self._bars: list[list[float]] = []
@ -225,18 +242,24 @@ class Client:
) -> dict[str, str | bytes]:
'''
Generate authenticated request headers
Generate authenticated request headers:
https://docs.kucoin.com/#authentication
https://www.kucoin.com/docs/basic-info/connection-method/authentication/creating-a-request
https://www.kucoin.com/docs/basic-info/connection-method/authentication/signing-a-message
'''
if not self._config:
raise ValueError(
'No config found when trying to send authenticated request')
'No config found when trying to send authenticated request'
)
str_to_sign = (
str(int(time.time() * 1000))
+ action + f'/api/{api}/{endpoint.lstrip("/")}'
+
action
+
f'/api/{api}/{endpoint.lstrip("/")}'
)
signature = base64.b64encode(
@ -247,6 +270,7 @@ class Client:
).digest()
)
# TODO: can we cache this between calls?
passphrase = base64.b64encode(
hmac.new(
self._config.key_secret.encode('utf-8'),
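A sketch of the documented KuCoin v2 signing recipe the header builder above follows (header names per the public docs; the key/secret/passphrase args are obviously placeholders):

import base64
import hashlib
import hmac
import time

def kucoin_headers(
    key: str,
    secret: str,
    passphrase: str,
    action: str,  # 'GET'/'POST'
    path: str,    # eg. '/api/v2/symbols'
) -> dict[str, str | bytes]:
    now_ms = str(int(time.time() * 1000))
    str_to_sign = now_ms + action + path
    sig = base64.b64encode(
        hmac.new(secret.encode(), str_to_sign.encode(), hashlib.sha256).digest()
    )
    pp = base64.b64encode(
        hmac.new(secret.encode(), passphrase.encode(), hashlib.sha256).digest()
    )
    return {
        'KC-API-SIGN': sig,
        'KC-API-TIMESTAMP': now_ms,
        'KC-API-KEY': key,
        'KC-API-PASSPHRASE': pp,
        'KC-API-KEY-VERSION': '2',
    }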
@ -268,8 +292,10 @@ class Client:
self,
action: Literal['POST', 'GET'],
endpoint: str,
api: str = 'v2',
headers: dict = {},
) -> Any:
'''
Generic request wrapper for Kucoin API
@ -282,14 +308,19 @@ class Client:
api,
)
api_url = f'https://api.kucoin.com/api/{api}/{endpoint}'
res = await asks.request(action, api_url, headers=headers)
json = res.json()
if 'data' in json:
return json['data']
req_meth: Callable = getattr(
self._http,
action.lower(),
)
res = await req_meth(
url=f'/{api}/{endpoint}',
headers=headers,
)
json: dict = res.json()
if (data := json.get('data')) is not None:
return data
else:
api_url: str = self._http.base_url
log.error(
f'Error making request to {api_url} ->\n'
f'{pformat(res)}'
@ -309,7 +340,7 @@ class Client:
'''
token_type = 'private' if private else 'public'
try:
data: dict[str, Any] | None = await self._request(
data: dict[str, Any]|None = await self._request(
'POST',
endpoint=f'bullet-{token_type}',
api='v1'
@ -347,8 +378,8 @@ class Client:
currencies: dict[str, Currency] = {}
entries: list[dict] = await self._request(
'GET',
api='v1',
endpoint='currencies',
api='v1',
)
for entry in entries:
curr = Currency(**entry).copy()
@ -364,20 +395,29 @@ class Client:
dict[str, KucoinMktPair],
bidict[str, KucoinMktPair],
]:
entries = await self._request('GET', 'symbols')
entries = await self._request(
'GET',
endpoint='symbols',
)
log.info(f' {len(entries)} Kucoin market pairs fetched')
pairs: dict[str, KucoinMktPair] = {}
fqmes2mktids: bidict[str, str] = bidict()
for item in entries:
pair = pairs[item['name']] = KucoinMktPair(**item)
try:
pair = pairs[item['name']] = KucoinMktPair(**item)
except TypeError as te:
raise TypeError(
'`KucoinMktPair` and response fields do not match ??\n'
f'{KucoinMktPair.fields_diff(item)}\n'
) from te
fqmes2mktids[
item['name'].lower().replace('-', '')
] = pair.name
return pairs, fqmes2mktids
async def cache_pairs(
async def get_mkt_pairs(
self,
update: bool = False,
@ -405,16 +445,27 @@ class Client:
) -> dict[str, KucoinMktPair]:
'''
Use fuzzy search to match against all market names.
Use fuzzy search engine to match against pairs, deliver
matching ones.
'''
data = await self.cache_pairs()
if not len(self._pairs):
await self.get_mkt_pairs()
assert self._pairs, '`Client.get_mkt_pairs()` was never called!?'
matches = fuzzy.extractBests(
pattern, data, score_cutoff=35, limit=limit
matches: dict[str, KucoinMktPair] = match_from_pairs(
pairs=self._pairs,
# query=pattern.upper(),
query=pattern.upper(),
score_cutoff=35,
limit=limit,
)
# repack in dict form
return {item[0].name: item[0] for item in matches}
return {
pair.name: pair
for pair in matches.values()
}
async def last_trades(self, sym: str) -> list[AccountTrade]:
trades = await self._request(
@ -554,13 +605,21 @@ def fqme_to_kucoin_sym(
@acm
async def get_client() -> AsyncGenerator[Client, None]:
client = Client()
'''
Load an API `Client` preconfigured from user settings
async with trio.open_nursery() as n:
n.start_soon(client.cache_pairs)
await client.get_currencies()
'''
async with (
httpx.AsyncClient(
base_url='https://api.kucoin.com/api',
) as trio_client,
):
client = Client(httpx_client=trio_client)
async with trio.open_nursery() as tn:
tn.start_soon(client.get_mkt_pairs)
await client.get_currencies()
yield client
yield client
@tractor.context
@ -569,7 +628,7 @@ async def open_symbol_search(
) -> None:
async with open_cached_client('kucoin') as client:
# load all symbols locally for fast search
await client.cache_pairs()
await client.get_mkt_pairs()
await ctx.started()
async with ctx.open_stream() as stream:
@ -596,7 +655,7 @@ async def open_ping_task(
await trio.sleep((ping_interval - 1000) / 1000)
await ws.send_msg({'id': connect_id, 'type': 'ping'})
log.info('Starting ping task for kucoin ws connection')
log.warning('Starting ping task for kucoin ws connection')
n.start_soon(ping_server)
yield
@ -608,16 +667,21 @@ async def open_ping_task(
async def get_mkt_info(
fqme: str,
) -> tuple[MktPair, KucoinMktPair]:
) -> tuple[
MktPair,
KucoinMktPair,
]:
'''
Query for and return a `MktPair` and `KucoinMktPair`.
Query for and return both a `piker.accounting.MktPair` and
`KucoinMktPair` from provided `fqme: str`
(fully-qualified-market-endpoint).
'''
async with open_cached_client('kucoin') as client:
# split off any fqme broker part
bs_fqme, _, broker = fqme.partition('.')
pairs: dict[str, KucoinMktPair] = await client.cache_pairs()
pairs: dict[str, KucoinMktPair] = await client.get_mkt_pairs()
try:
# likely search result key which is already in native mkt symbol form
@ -685,6 +749,8 @@ async def stream_quotes(
log.info(f'Starting up quote stream(s) for {symbols}')
for sym_str in symbols:
mkt: MktPair
pair: KucoinMktPair
mkt, pair = await get_mkt_info(sym_str)
init_msgs.append(
FeedInit(mkt_info=mkt)
@ -692,7 +758,11 @@ async def stream_quotes(
ws: NoBsWs
token, ping_interval = await client._get_ws_token()
connect_id = str(uuid4())
log.info(f'API reported ping_interval: {ping_interval}\n')
connect_id: str = str(uuid4())
typ: str
quote: dict
async with (
open_autorecon_ws(
(
@ -706,20 +776,37 @@ async def stream_quotes(
),
) as ws,
open_ping_task(ws, ping_interval, connect_id),
aclosing(stream_messages(ws, sym_str)) as msg_gen,
aclosing(
iter_normed_quotes(
ws, sym_str
)
) as iter_quotes,
):
typ, quote = await anext(msg_gen)
typ, quote = await anext(iter_quotes)
while typ != 'trade':
# take care to not unblock here until we get a real
# trade quote
typ, quote = await anext(msg_gen)
# take care to not unblock here until we get a real
# trade quote?
# ^TODO, remove this right?
# -[ ] what often blocks chart boot/new-feed switching
# since we're waiting for a live quote instead of just
# loading history afap..
# |_ XXX, not sure if we require a bit of rework to core
# feed init logic or if backends just gotta be
# changed up.. feel like there was some causality
# dilemma prolly only seen with IB too..
# while typ != 'trade':
# typ, quote = await anext(iter_quotes)
task_status.started((init_msgs, quote))
feed_is_live.set()
async for typ, msg in msg_gen:
await send_chan.send({sym_str: msg})
# XXX NOTE, DO NOT include the `.<backend>` suffix!
# OW the sampling loop will not broadcast correctly..
# since `bus._subscribers.setdefault(bs_fqme, set())`
# is used inside `.data.open_feed_bus()` !!!
topic: str = mkt.bs_fqme
async for typ, quote in iter_quotes:
await send_chan.send({topic: quote})
@acm
@ -774,7 +861,7 @@ async def subscribe(
)
async def stream_messages(
async def iter_normed_quotes(
ws: NoBsWs,
sym: str,
@ -805,6 +892,9 @@ async def stream_messages(
yield 'trade', {
'symbol': sym,
# TODO, is 'last' even used elsewhere/a-good
# semantic? can't we just read the ticks with our
# `.data.ticktools.frame_ticks()`?
'last': trade_data.price,
'brokerd_ts': last_trade_ts,
'ticks': [
@ -897,7 +987,7 @@ async def open_history_client(
if end_dt is None:
inow = round(time.time())
print(
log.debug(
f'difference in time between load and processing'
f'{inow - times[-1]}'
)

View File

@ -37,6 +37,12 @@ import tractor
from async_generator import asynccontextmanager
import numpy as np
import wrapt
# TODO, port to `httpx`/`trio-websocket` whenver i get back to
# writing a proper ws-api streamer for this backend (since the data
# feeds are free now) as per GH feat-req:
# https://github.com/pikers/piker/issues/509
#
import asks
from ..calc import humanize, percent_change

View File

@ -0,0 +1,49 @@
piker.clearing
______________
trade execution-n-control subsys for both live and paper trading as
well as algo-trading manual override/interaction across any backend
broker and data provider.
avail UIs
*********
order ctl
---------
the `piker.clearing` subsys is exposed mainly through
the `piker chart` GUI as a "chart trader" style UX and
is automatically enabled whenever a chart is opened.
.. ^TODO, more prose here!
the "manual" order control features are exposed via the
`piker.ui.order_mode` API and can pretty much always be
used (at least) in simulated-trading mode, aka "paper"-mode, and
the micro-manual is as follows:
``order_mode`` (
edge triggered activation by any of the following keys,
``mouse-click`` on y-level to submit at that price
):
- ``f``/ ``ctl-f`` to stage buy
- ``d``/ ``ctl-d`` to stage sell
- ``a`` to stage alert
``search_mode`` (
``ctl-l`` or ``ctl-space`` to open,
``ctl-c`` or ``ctl-space`` to close
):
- begin typing to have symbol search automatically lookup
symbols from all loaded backend (broker) providers
- arrow keys and mouse click to navigate selection
- vi-like ``ctl-[hjkl]`` for navigation
position (pp) mgmt
------------------
you can also configure your position allocation limits from the
sidepane.
.. ^TODO, explain and provide tut once more refined!

View File

@ -25,7 +25,10 @@ from typing import TYPE_CHECKING
import trio
import tractor
from tractor.trionics import broadcast_receiver
from tractor.trionics import (
broadcast_receiver,
collapse_eg,
)
from ._util import (
log, # sub-sys logger
@ -168,7 +171,6 @@ class OrderClient(Struct):
async def relay_orders_from_sync_code(
client: OrderClient,
symbol_key: str,
to_ems_stream: tractor.MsgStream,
@ -242,6 +244,11 @@ async def open_ems(
async with maybe_open_emsd(
broker,
# XXX NOTE, LOL so this determines the daemon `emsd` loglevel
# then FYI.. that's kinda wrong no?
# -[ ] shouldn't it be set by `pikerd -l` or no?
# -[ ] would make a lot more sense to have a subsys ctl for
# levels.. like `-l emsd.info` or something?
loglevel=loglevel,
) as portal:
@ -281,8 +288,11 @@ async def open_ems(
client._ems_stream = trades_stream
# start sync code order msg delivery task
async with trio.open_nursery() as n:
n.start_soon(
async with (
collapse_eg(),
trio.open_nursery() as tn,
):
tn.start_soon(
relay_orders_from_sync_code,
client,
fqme,
@ -298,4 +308,4 @@ async def open_ems(
)
# stop the sync-msg-relay task on exit.
n.cancel_scope.cancel()
tn.cancel_scope.cancel()

View File

@ -42,6 +42,7 @@ from bidict import bidict
import trio
from trio_typing import TaskStatus
import tractor
from tractor import trionics
from ._util import (
log, # sub-sys logger
@ -76,7 +77,6 @@ if TYPE_CHECKING:
# TODO: numba all of this
def mk_check(
trigger_price: float,
known_last: float,
action: str,
@ -162,7 +162,7 @@ async def clear_dark_triggers(
router: Router,
brokerd_orders_stream: tractor.MsgStream,
quote_stream: tractor.ReceiveMsgStream, # noqa
quote_stream: tractor.MsgStream,
broker: str,
fqme: str,
@ -178,6 +178,7 @@ async def clear_dark_triggers(
'''
# XXX: optimize this for speed!
# TODO:
# - port to the new ringbuf stuff in `tractor.ipc`!
# - numba all this!
# - this stream may eventually contain multiple symbols
quote_stream._raise_on_lag = False
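One plausible shape for the `mk_check()` predicate factory referenced above; a hypothetical reconstruction for illustration, not piker's actual implementation:

def mk_check(
    trigger_price: float,
    known_last: float,
    action: str,
):
    # hypothetical: choose the crossing direction from where the
    # last price sits relative to the trigger level.
    if trigger_price >= known_last:
        def check(price: float) -> float | None:
            # fires when price rises up through the trigger
            return trigger_price if price >= trigger_price else None
    else:
        def check(price: float) -> float | None:
            # fires when price falls down through the trigger
            return trigger_price if price <= trigger_price else None
    return check

pred = mk_check(100.0, 95.0, 'buy')
assert pred(101.0) == 100.0
assert pred(99.0) is None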
@ -387,6 +388,7 @@ async def open_brokerd_dialog(
for ep_name in [
'open_trade_dialog', # probably final name?
'trades_dialogue', # legacy
# ^!TODO, rm this since all backends ported no ?!?
]:
trades_endpoint = getattr(
brokermod,
@ -500,7 +502,7 @@ class Router(Struct):
'''
# setup at actor spawn time
nursery: trio.Nursery
_tn: trio.Nursery
# broker to book map
books: dict[str, DarkBook] = {}
@ -653,7 +655,11 @@ class Router(Struct):
flume = feed.flumes[fqme]
first_quote: dict = flume.first_quote
book: DarkBook = self.get_dark_book(broker)
book.lasts[fqme]: float = float(first_quote['last'])
if not (last := first_quote.get('last')):
last: float = flume.rt_shm.array[-1]['close']
book.lasts[fqme]: float = float(last)
async with self.maybe_open_brokerd_dialog(
brokermod=brokermod,
@ -666,7 +672,7 @@ class Router(Struct):
# dark book clearing loop, also lives with parent
# daemon to allow dark order clearing while no
# client is connected.
self.nursery.start_soon(
self._tn.start_soon(
clear_dark_triggers,
self,
relay.brokerd_stream,
@ -689,7 +695,7 @@ class Router(Struct):
# spawn a ``brokerd`` order control dialog stream
# that syncs lifetime with the parent `emsd` daemon.
self.nursery.start_soon(
self._tn.start_soon(
translate_and_relay_brokerd_events,
broker,
relay.brokerd_stream,
@ -716,7 +722,7 @@ class Router(Struct):
subs = self.subscribers[sub_key]
sent_some: bool = False
for client_stream in subs:
for client_stream in subs.copy():
try:
await client_stream.send(msg)
sent_some = True
@ -763,10 +769,12 @@ async def _setup_persistent_emsd(
global _router
# open a root "service nursery" for the ``emsd`` actor
async with trio.open_nursery() as service_nursery:
_router = Router(nursery=service_nursery)
# open a root "service task-nursery" for the `emsd`-actor
async with (
trionics.collapse_eg(),
trio.open_nursery() as tn
):
_router = Router(_tn=tn)
# TODO: send back the full set of persistent
# orders/execs?
@ -913,8 +921,17 @@ async def translate_and_relay_brokerd_events(
}:
if (
not oid
# try to lookup any order dialog by
# brokerd-side id..
and not (
oid := book._ems2brokerd_ids.inverse.get(reqid)
)
):
oid: str = book._ems2brokerd_ids.inverse[reqid]
log.warning(
f'Rxed unusable error-msg:\n'
f'{brokerd_msg}'
)
continue
msg = BrokerdError(**brokerd_msg)
@ -949,7 +966,10 @@ async def translate_and_relay_brokerd_events(
fqme: str = (
bdmsg.symbol # might be None
or
bdmsg.broker_details['flow']['symbol']
bdmsg.broker_details['flow']
# NOTE: what happens in empty case in the
# broadcast below? it's a problem?
.get('symbol', '')
)
await router.client_broadcast(
@ -998,14 +1018,28 @@ async def translate_and_relay_brokerd_events(
status_msg.brokerd_msg = msg
status_msg.src = msg.broker_details['name']
await router.client_broadcast(
status_msg.req.symbol,
status_msg,
)
if not status_msg.req:
# likely some order change state?
await tractor.pause()
else:
await router.client_broadcast(
status_msg.req.symbol,
status_msg,
)
if status == 'closed':
log.info(f'Execution for {oid} is complete!')
status_msg = book._active.pop(oid)
log.info(
f'Execution is complete!\n'
f'oid: {oid!r}\n'
)
status_msg = book._active.pop(oid, None)
if status_msg is None:
log.warning(
f'Order was already cleared from book ??\n'
f'oid: {oid!r}\n'
f'\n'
f'Maybe the order was cancelled before it was submitted ??\n'
)
elif status == 'canceled':
log.cancel(f'Cancellation for {oid} is complete!')
@ -1170,12 +1204,16 @@ async def process_client_order_cmds(
submitting live orders immediately if requested by the client.
'''
# cmd: dict
# TODO, only allow `msgspec.Struct` form!
cmd: dict
async for cmd in client_order_stream:
log.info(f'Received order cmd:\n{pformat(cmd)}')
log.info(
f'Received order cmd:\n'
f'{pformat(cmd)}\n'
)
# CAWT DAMN we need struct support!
oid = str(cmd['oid'])
oid: str = str(cmd['oid'])
# register this stream as an active order dialog (msg flow) for
# this order id such that translated message from the brokerd
@ -1281,7 +1319,7 @@ async def process_client_order_cmds(
case {
'oid': oid,
'symbol': fqme,
'price': trigger_price,
'price': price,
'size': size,
'action': ('buy' | 'sell') as action,
'exec_mode': ('live' | 'paper'),
@ -1313,7 +1351,7 @@ async def process_client_order_cmds(
symbol=sym,
action=action,
price=trigger_price,
price=price,
size=size,
account=req.account,
)
@ -1335,7 +1373,11 @@ async def process_client_order_cmds(
# (``translate_and_relay_brokerd_events()`` above) will
# handle relaying the ems side responses back to
# the client/cmd sender from this request
log.info(f'Sending live order to {broker}:\n{pformat(msg)}')
log.info(
f'Sending live order to {broker}:\n'
f'{pformat(msg)}'
)
await brokerd_order_stream.send(msg)
# an immediate response should be ``BrokerdOrderAck``
@ -1351,7 +1393,7 @@ async def process_client_order_cmds(
case {
'oid': oid,
'symbol': fqme,
'price': trigger_price,
'price': price,
'size': size,
'exec_mode': exec_mode,
'action': action,
@ -1379,7 +1421,12 @@ async def process_client_order_cmds(
if isnan(last):
last = flume.rt_shm.array[-1]['close']
pred = mk_check(trigger_price, last, action)
trigger_price: float = float(price)
pred = mk_check(
trigger_price,
last,
action,
)
# NOTE: for dark orders currently we submit
# the triggered live order at a price 5 ticks
@ -1486,7 +1533,7 @@ async def maybe_open_trade_relays(
loglevel: str = 'info',
):
fqme, relay, feed, client_ready = await _router.nursery.start(
fqme, relay, feed, client_ready = await _router._tn.start(
_router.open_trade_relays,
fqme,
exec_mode,
@ -1516,19 +1563,18 @@ async def maybe_open_trade_relays(
@tractor.context
async def _emsd_main(
ctx: tractor.Context,
ctx: tractor.Context, # becomes `ems_ctx` below
fqme: str,
exec_mode: str, # ('paper', 'live')
loglevel: str | None = None,
loglevel: str|None = None,
) -> tuple[
dict[
# brokername, acctid
tuple[str, str],
) -> tuple[ # `ctx.started()` value!
dict[ # positions
tuple[str, str], # brokername, acctid
list[BrokerdPosition],
],
list[str],
dict[str, Status],
list[str], # accounts
dict[str, Status], # dialogs
]:
'''
EMS (sub)actor entrypoint providing the execution management

View File

@ -19,6 +19,7 @@ Clearing sub-system message and protocols.
"""
from __future__ import annotations
from decimal import Decimal
from typing import (
Literal,
)
@ -71,7 +72,15 @@ class Order(Struct):
symbol: str # | MktPair
account: str # should we set a default as '' ?
price: float
# https://docs.python.org/3/library/decimal.html#decimal-objects
#
# ?TODO? decimal usage throughout?
# -[ ] possibly leverage the `Encoder(decimal_format='number')`
# bit?
# |_https://jcristharif.com/msgspec/supported-types.html#decimal
# -[ ] should we also use it for .size?
#
price: Decimal
size: float # -ve is "sell", +ve is "buy"
brokers: list[str] = []
@ -178,7 +187,7 @@ class BrokerdOrder(Struct):
time_ns: int
symbol: str # fqme
price: float
price: Decimal
size: float
# TODO: if we instead rely on a +ve/-ve size to determine
@ -292,6 +301,9 @@ class BrokerdError(Struct):
# TODO: yeah, so we REALLY need to completely deprecate
# this and use the `.accounting.Position` msg-type instead..
# -[ ] an alternative might be to add a `Position.summary() ->
# `PositionSummary`-msg that we generate since `Position` has a lot
# of fields by default we likely don't want to send over the wire?
class BrokerdPosition(Struct):
'''
Position update event from brokerd.
@ -304,3 +316,4 @@ class BrokerdPosition(Struct):
avg_price: float
currency: str = ''
name: str = 'position'
bs_mktid: str|int|None = None
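Regarding the `?TODO?` above, `msgspec`'s decimal support (per the linked docs) would let `price: Decimal` serialize as a plain JSON number; a minimal sketch assuming a recent `msgspec` release with the `decimal_format` option:

from decimal import Decimal
import msgspec

class Order(msgspec.Struct):
    price: Decimal
    size: float

enc = msgspec.json.Encoder(decimal_format='number')
print(enc.encode(Order(price=Decimal('101.25'), size=2.0)))
# b'{"price":101.25,"size":2.0}'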

View File

@ -26,6 +26,7 @@ from contextlib import asynccontextmanager as acm
from datetime import datetime
from operator import itemgetter
import itertools
from pprint import pformat
import time
from typing import (
Callable,
@ -39,6 +40,7 @@ import trio
import tractor
from piker.brokers import get_brokermod
from piker.service import find_service
from piker.accounting import (
Account,
MktPair,
@ -295,6 +297,8 @@ class PaperBoi(Struct):
# transmit pp msg to ems
pp: Position = self.acnt.pps[bs_mktid]
# TODO, this will break if `require_only=True` was passed to
# `.update_from_ledger()`
pp_msg = BrokerdPosition(
broker=self.broker,
@ -506,7 +510,7 @@ async def handle_order_requests(
reqid = await client.submit_limit(
oid=order.oid,
symbol=f'{order.symbol}.{client.broker}',
price=order.price,
price=float(order.price),
action=order.action,
size=order.size,
# XXX: by default 0 tells ``ib_insync`` methods that
@ -651,6 +655,7 @@ async def open_trade_dialog(
# in) use manually constructed table from calling
# the `.get_mkt_info()` provider EP above.
_mktmap_table=mkt_by_fqme,
only_require=list(mkt_by_fqme),
)
pp_msgs: list[BrokerdPosition] = []
@ -696,7 +701,12 @@ async def open_trade_dialog(
# sanity check all the mkt infos
for fqme, flume in feed.flumes.items():
mkt: MktPair = symcache.mktmaps.get(fqme) or mkt_by_fqme[fqme]
assert mkt == flume.mkt
if mkt != flume.mkt:
diff: tuple = mkt - flume.mkt
log.warning(
'MktPair sig mismatch?\n'
f'{pformat(diff)}'
)
get_cost: Callable = getattr(
brokermod,
@ -754,7 +764,7 @@ async def open_paperboi(
service_name = f'paperboi.{broker}'
async with (
tractor.find_actor(service_name) as portal,
find_service(service_name) as portal,
tractor.open_nursery() as an,
):
# NOTE: only spawn if no paperboi already is up since we likely
@ -777,8 +787,10 @@ async def open_paperboi(
) as (ctx, first):
yield ctx, first
# tear down connection and any spawned actor on exit
# ALWAYS tear down connection AND any newly spawned
# paperboi actor on exit!
await ctx.cancel()
if we_spawned:
await portal.cancel_actor()

View File

@ -30,6 +30,7 @@ subsys: str = 'piker.clearing'
log = get_logger(subsys)
# TODO, oof doesn't this ignore the `loglevel` then???
get_console_log = partial(
get_console_log,
name=subsys,

View File

@ -1,30 +1,33 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# Copyright (C) 2018-present Tyler Goodlet
# (in stewardship for pikers, everywhere.)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is free software: you can redistribute it and/or
# modify it under the terms of the GNU Affero General Public
# License as published by the Free Software Foundation, either
# version 3 of the License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# You should have received a copy of the GNU Affero General Public
# License along with this program. If not, see
# <https://www.gnu.org/licenses/>.
'''
CLI commons.
'''
import os
from contextlib import AsyncExitStack
# from contextlib import AsyncExitStack
from types import ModuleType
import click
import trio
import tractor
from tractor._multiaddr import parse_maddr
from ..log import (
get_console_log,
@ -42,35 +45,97 @@ from .. import config
log = get_logger('piker.cli')
def load_trans_eps(
network: dict | None = None,
maddrs: list[tuple] | None = None,
) -> dict[str, dict[str, dict]]:
# transport-oriented endpoint multi-addresses
eps: dict[
str, # service name, eg. `pikerd`, `emsd`..
# libp2p style multi-addresses parsed into prot layers
list[dict[str, str | int]]
] = {}
if (
network
and not maddrs
):
# load network section and (attempt to) connect all endpoints
# which are reachable B)
for key, maddrs in network.items():
match key:
# TODO: resolve table across multiple discov
# prots Bo
case 'resolv':
pass
case 'pikerd':
dname: str = key
for maddr in maddrs:
layers: dict = parse_maddr(maddr)
eps.setdefault(
dname,
[],
).append(layers)
elif maddrs:
# presume user is manually specifying the root actor ep.
eps['pikerd'] = [parse_maddr(m) for m in maddrs]
return eps
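Usage sketch for the endpoint loader above; the multiaddr string and the parsed-layer shape are inferred from the `layers['ipv4']['addr']` / `layers['tcp']['port']` reads in `pikerd` below, so treat both as assumptions:

    # eg. a `conf.toml` [network] section mapping
    network = {
        'pikerd': ['/ipv4/127.0.0.1/tcp/6116'],  # hypothetical maddr
    }
    eps = load_trans_eps(network=network)
    # presumed result shape:
    # {'pikerd': [{'ipv4': {'addr': '127.0.0.1'},
    #              'tcp': {'port': 6116}}]}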
@click.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--pdb', is_flag=True, help='Enable tractor debug mode')
@click.option('--host', '-h', default=None, help='Host addr to bind')
@click.option('--port', '-p', default=None, help='Port number to bind')
@click.option(
'--tsdb',
is_flag=True,
help='Enable local ``marketstore`` instance'
'--loglevel',
'-l',
default='warning',
help='Logging level',
)
@click.option(
'--es',
'--tl',
is_flag=True,
help='Enable local ``elasticsearch`` instance'
help='Enable tractor-runtime logs',
)
@click.option(
'--pdb',
is_flag=True,
help='Enable tractor debug mode',
)
@click.option(
'--maddr',
'-m',
default=None,
help='Multiaddrs to bind or contact',
)
# @click.option(
# '--tsdb',
# is_flag=True,
# help='Enable local ``marketstore`` instance'
# )
# @click.option(
# '--es',
# is_flag=True,
# help='Enable local ``elasticsearch`` instance'
# )
def pikerd(
maddr: list[str] | None,
loglevel: str,
host: str,
port: int,
tl: bool,
pdb: bool,
tsdb: bool,
es: bool,
# tsdb: bool,
# es: bool,
):
'''
Spawn the piker broker-daemon.
'''
# from tractor.devx import maybe_open_crash_handler
# with maybe_open_crash_handler(pdb=False):
log = get_console_log(loglevel, name='cli')
if pdb:
@ -82,46 +147,49 @@ def pikerd(
"\n"
))
reg_addr: None | tuple[str, int] = None
if host or port:
reg_addr = (
host or _default_registry_host,
int(port) or _default_registry_port,
# service-actor registry endpoint socket-address set
regaddrs: list[tuple[str, int]] = []
conf, _ = config.load(
conf_name='conf',
)
network: dict = conf.get('network')
if (
network is None
and not maddr
):
regaddrs = [(
_default_registry_host,
_default_registry_port,
)]
else:
eps: dict = load_trans_eps(
network,
maddr,
)
for layers in eps['pikerd']:
regaddrs.append((
layers['ipv4']['addr'],
layers['tcp']['port'],
))
from .. import service
async def main():
service_mngr: service.Services
async with (
service.open_pikerd(
registry_addrs=regaddrs,
loglevel=loglevel,
debug_mode=pdb,
registry_addr=reg_addr,
) as service_mngr, # normally delivers a ``Services`` handle
AsyncExitStack() as stack,
# enable_transports=['uds'],
enable_transports=['tcp'],
) as service_mngr,
):
if tsdb:
dname, conf = await stack.enter_async_context(
service.marketstore.start_ahab_daemon(
service_mngr,
loglevel=loglevel,
)
)
log.info(f'TSDB `{dname}` up with conf:\n{conf}')
if es:
dname, conf = await stack.enter_async_context(
service.elastic.start_ahab_daemon(
service_mngr,
loglevel=loglevel,
)
)
log.info(f'DB `{dname}` up with conf:\n{conf}')
assert service_mngr
# ?TODO? spawn all other sub-actor daemons according to
# multiaddress endpoint spec defined by user config
await trio.sleep_forever()
trio.run(main)
@ -137,8 +205,24 @@ def pikerd(
@click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--configdir', '-c', help='Configuration directory')
@click.option('--host', '-h', default=None, help='Host addr to bind')
@click.option('--port', '-p', default=None, help='Port number to bind')
@click.option(
'--pdb',
is_flag=True,
help='Enable runtime debug mode',
)
@click.option(
'--maddr',
'-m',
default=None,
multiple=True,
help='Multiaddr to bind',
)
@click.option(
'--regaddr',
'-r',
default=None,
help='Registrar addr to contact',
)
@click.pass_context
def cli(
ctx: click.Context,
@ -146,8 +230,11 @@ def cli(
loglevel: str,
tl: bool,
configdir: str,
host: str,
port: int,
pdb: bool,
# TODO: make these list[str] with multiple -m maddr0 -m maddr1
maddr: list[str],
regaddr: str,
) -> None:
if configdir is not None:
@ -168,12 +255,20 @@ def cli(
}
assert brokermods
reg_addr: None | tuple[str, int] = None
if host or port:
reg_addr = (
host or _default_registry_host,
int(port) or _default_registry_port,
)
# TODO: load endpoints from `conf::[network].pikerd`
# - pikerd vs. regd, separate registry daemon?
# - expose datad vs. brokerd?
# - bind emsd with certain perms on public iface?
regaddrs: list[tuple[str, int]] = regaddr or [(
_default_registry_host,
_default_registry_port,
)]
# TODO: factor [network] section parsing out from pikerd
# above and call it here as well.
# if maddr:
# for addr in maddr:
# layers: dict = parse_maddr(addr)
ctx.obj.update({
'brokers': brokers,
@ -183,7 +278,12 @@ def cli(
'log': get_console_log(loglevel),
'confdir': config._config_dir,
'wl_path': config._watchlists_data_path,
'registry_addr': reg_addr,
'registry_addrs': regaddrs,
'pdb': pdb, # debug mode flag
# TODO: endpoint parsing, pinging and binding
# on no existing server.
# 'maddrs': maddr,
})
# allow enabling same loglevel in ``tractor`` machinery
@ -207,6 +307,10 @@ def services(config, tl, ports):
if not ports:
ports = [_default_registry_port]
addr = tractor._addr.wrap_address(
addr=(host, ports[0])
)
async def list_services():
nonlocal host
async with (
@ -214,24 +318,25 @@ def services(config, tl, ports):
name='service_query',
loglevel=config['loglevel'] if tl else None,
),
tractor.get_arbiter(
host=host,
port=ports[0]
tractor.get_registry(
addr=addr,
) as portal
):
registry = await portal.run_from_ns('self', 'get_registry')
registry = await portal.run_from_ns(
'self',
'get_registry',
)
json_d = {}
for key, socket in registry.items():
host, port = socket
json_d[key] = f'{host}:{port}'
json_d[key] = f'{socket}'
click.echo(f"{colorize_json(json_d)}")
trio.run(list_services)
def _load_clis() -> None:
from ..service import marketstore # noqa
from ..service import elastic # noqa
# from ..service import elastic # noqa
from ..brokers import cli # noqa
from ..ui import cli # noqa
from ..watchlists import cli # noqa

View File

@ -41,10 +41,13 @@ from .log import get_logger
log = get_logger('broker-config')
# XXX NOTE: taken from ``click`` since apparently they have some
# super weirdness with sigint and sudo..no clue
# we're probably going to slowly just modify it to our own version over
# time..
# XXX NOTE: taken from `click`
# |_https://github.com/pallets/click/blob/main/src/click/utils.py#L449
#
# (since apparently they have some super weirdness with SIGINT and
# sudo.. no clue; we're probably going to slowly just modify it to our
# own version over time..)
#
def get_app_dir(
app_name: str,
roaming: bool = True,
@ -104,14 +107,15 @@ def get_app_dir(
# `tractor`) with the testing dir and check for it whenever we
# detect `pytest` is being used (which it isn't under normal
# operation).
if "pytest" in sys.modules:
import tractor
actor = tractor.current_actor(err_on_no_runtime=False)
if actor: # runtime is up
rvs = tractor._state._runtime_vars
testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
assert testdirpath.exists(), 'piker test harness might be borked!?'
app_name = str(testdirpath)
# if "pytest" in sys.modules:
# import tractor
# actor = tractor.current_actor(err_on_no_runtime=False)
# if actor: # runtime is up
# rvs = tractor._state._runtime_vars
# import pdbp; pdbp.set_trace()
# testdirpath = Path(rvs['piker_vars']['piker_test_dir'])
# assert testdirpath.exists(), 'piker test harness might be borked!?'
# app_name = str(testdirpath)
if platform.system() == 'Windows':
key = "APPDATA" if roaming else "LOCALAPPDATA"
@ -134,14 +138,19 @@ def get_app_dir(
_click_config_dir: Path = Path(get_app_dir('piker'))
_config_dir: Path = _click_config_dir
_parent_user: str = os.environ.get('SUDO_USER')
if _parent_user:
# NOTE: when using `sudo` we attempt to determine the non-root user
# and still use their normal config dir.
if (
(_parent_user := os.environ.get('SUDO_USER'))
and
_parent_user != 'root'
):
non_root_user_dir = Path(
os.path.expanduser(f'~{_parent_user}')
)
root: str = 'root'
_ccds: str = str(_click_config_dir) # click config dir string
_ccds: str = str(_click_config_dir) # click config dir as string
i_tail: int = int(_ccds.rfind(root) + len(root))
_config_dir = (
non_root_user_dir
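The path-join expression is truncated by the hunk above, so this is an inferred reconstruction of the intended string-splice, assuming `sudo` was invoked by user 'alice':

    from pathlib import Path

    _ccds = '/root/.config/piker'  # str(_click_config_dir) under sudo
    i_tail = _ccds.rfind('root') + len('root')
    non_root_user_dir = Path('/home/alice')  # expanduser(f'~{SUDO_USER}')
    # strip the leading '/' so the join doesn't reset to fs-root
    _config_dir = non_root_user_dir / _ccds[i_tail:].lstrip('/')
    # -> PosixPath('/home/alice/.config/piker')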
@ -246,7 +255,8 @@ def repodir() -> Path:
def load(
conf_name: str = 'brokers', # appended with .toml suffix
# NOTE: always appended with .toml suffix
conf_name: str = 'conf',
path: Path | None = None,
decode: Callable[
@ -254,7 +264,7 @@ def load(
MutableMapping,
] = tomllib.loads,
touch_if_dne: bool = False,
touch_if_dne: bool = True,
**tomlkws,
@ -263,7 +273,7 @@ def load(
Load config file by name.
If desired config is not in the top level piker-user config path then
pass the ``path: Path`` explicitly.
pass the `path: Path` explicitly.
'''
# create the $HOME/.config/piker dir if dne
@ -278,7 +288,8 @@ def load(
if (
not path.is_file()
and touch_if_dne
and
touch_if_dne
):
# only do a template if no path provided,
# just touch an empty file with same name.
@ -357,7 +368,9 @@ def load_accounts(
) -> bidict[str, str | None]:
conf, path = load()
conf, path = load(
conf_name='brokers',
)
accounts = bidict()
for provider_name, section in conf.items():
accounts_section = section.get('accounts')

View File

@ -43,8 +43,10 @@ from ._symcache import (
SymbologyCache,
open_symcache,
get_symcache,
match_from_pairs,
)
from ._sampling import open_sample_stream
from ..types import Struct
__all__: list[str] = [
@ -54,6 +56,7 @@ __all__: list[str] = [
'ShmArray',
'iterticks',
'maybe_open_shm_array',
'match_from_pairs',
'attach_shm_array',
'open_shm_array',
'get_shm_token',
@ -62,6 +65,7 @@ __all__: list[str] = [
'open_symcache',
'open_sample_stream',
'get_symcache',
'Struct',
'SymbologyCache',
'types',
]

View File

@ -41,6 +41,11 @@ if TYPE_CHECKING:
)
from piker.toolz import Profiler
# default gap between bars: "bar gap multiplier"
# - 0.5 is no overlap between OC arms,
# - 1.0 is full overlap on each neighbor sample
BGM: float = 0.16
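A quick worked example of how the multiplier scales with the index step (numbers illustrative only):

    # for a time-indexed series sampled at 60s per bar:
    index_step_size = 60.0
    bar_gap = BGM * index_step_size   # 0.16 * 60 -> 9.6 (seconds of gap)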
class IncrementalFormatter(msgspec.Struct):
'''
@ -513,6 +518,7 @@ class IncrementalFormatter(msgspec.Struct):
class OHLCBarsFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([
-0.5,
0,
@ -604,8 +610,9 @@ class OHLCBarsFmtr(IncrementalFormatter):
vr: tuple[int, int],
start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.16,
gap: float = BGM,
) -> tuple[
np.ndarray,
@ -622,7 +629,7 @@ class OHLCBarsFmtr(IncrementalFormatter):
array[:-1],
start,
bar_w=self.index_step_size,
bar_gap=w * self.index_step_size,
bar_gap=gap * self.index_step_size,
# XXX: don't ask, due to a ``numba`` bug..
use_time_index=(self.index_field == 'time'),

View File

@ -33,6 +33,11 @@ from typing import (
)
import tractor
from tractor import (
Context,
MsgStream,
Channel,
)
from tractor.trionics import (
maybe_open_nursery,
)
@ -53,7 +58,10 @@ if TYPE_CHECKING:
from ._sharedmem import (
ShmArray,
)
from .feed import _FeedsBus
from .feed import (
_FeedsBus,
Sub,
)
# highest frequency sample step is 1 second by default, though in
@ -87,6 +95,12 @@ class Sampler:
# history loading.
incr_task_cs: trio.CancelScope | None = None
bcast_errors: tuple[type[Exception], ...] = (
trio.BrokenResourceError,
trio.ClosedResourceError,
trio.EndOfChannel,
)
# holds all the ``tractor.Context`` remote subscriptions for
# a particular sample period increment event: all subscribers are
# notified on a step.
@ -94,7 +108,7 @@ class Sampler:
float,
list[
float,
set[tractor.MsgStream]
set[MsgStream]
],
] = defaultdict(
lambda: [
@ -250,16 +264,17 @@ class Sampler:
subs: set
last_ts, subs = pair
task = trio.lowlevel.current_task()
log.debug(
f'SUBS {self.subscribers}\n'
f'PAIR {pair}\n'
f'TASK: {task}: {id(task)}\n'
f'broadcasting {period_s} -> {last_ts}\n'
# f'consumers: {subs}'
)
borked: set[tractor.MsgStream] = set()
sent: set[tractor.MsgStream] = set()
# NOTE, for debugging pub-sub issues
# task = trio.lowlevel.current_task()
# log.debug(
# f'ALL-SUBS@{period_s!r}: {self.subscribers}\n'
# f'PAIR: {pair}\n'
# f'TASK: {task}: {id(task)}\n'
# f'broadcasting {period_s} -> {last_ts}\n'
# f'consumers: {subs}'
# )
borked: set[MsgStream] = set()
sent: set[MsgStream] = set()
while True:
try:
for stream in (subs - sent):
@ -274,12 +289,11 @@ class Sampler:
await stream.send(msg)
sent.add(stream)
except (
trio.BrokenResourceError,
trio.ClosedResourceError
):
except self.bcast_errors as err:
log.error(
f'{stream._ctx.chan.uid} dropped connection'
f'Connection dropped for IPC ctx\n'
f'{stream._ctx}\n\n'
f'Due to {type(err)}'
)
borked.add(stream)
else:
@ -314,7 +328,7 @@ class Sampler:
@tractor.context
async def register_with_sampler(
ctx: tractor.Context,
ctx: Context,
period_s: float,
shms_by_period: dict[float, dict] | None = None,
@ -386,7 +400,8 @@ async def register_with_sampler(
finally:
if (
sub_for_broadcasts
and subs
and
subs
):
try:
subs.remove(stream)
@ -553,8 +568,7 @@ async def open_sample_stream(
async def sample_and_broadcast(
bus: _FeedsBus, # noqa
bus: _FeedsBus,
rt_shm: ShmArray,
hist_shm: ShmArray,
quote_stream: trio.abc.ReceiveChannel,
@ -574,11 +588,33 @@ async def sample_and_broadcast(
overruns = Counter()
# NOTE, only used for debugging live-data-feed issues, though
# this should be resolved more correctly in the future using the
# new typed-msgspec feats of `tractor`!
#
# XXX, a multiline nested `dict` formatter (since rn quote-msgs
# are just that).
# pfmt: Callable[[str], str] = mk_repr()
# iterate stream delivered by broker
async for quotes in quote_stream:
# print(quotes)
# TODO: ``numba`` this!
# XXX WARNING XXX only enable for debugging bc ow can cost
# ALOT of perf with HF-feedz!!!
#
# log.info(
# 'Rx live quotes:\n'
# f'{pfmt(quotes)}'
# )
# TODO,
# -[ ] `numba` or `cython`-nize this loop possibly?
# |_alternatively could we do it in rust somehow by unpacking
# arrow msgs instead of using `msgspec`?
# -[ ] use `msgspec.Struct` support in new typed-msging from
# `tractor` to ensure only allowed msgs are transmitted?
#
for broker_symbol, quote in quotes.items():
# TODO: in theory you can send the IPC msg *before* writing
# to the sharedmem array to decrease latency, however, that
@ -649,12 +685,22 @@ async def sample_and_broadcast(
# eventually block this producer end of the feed and
# thus other consumers still attached.
sub_key: str = broker_symbol.lower()
subs: list[
tuple[
tractor.MsgStream | trio.MemorySendChannel,
float | None, # tick throttle in Hz
]
] = bus.get_subs(sub_key)
subs: set[Sub] = bus.get_subs(sub_key)
# TODO, figure out how to make this useful whilst
# incorporating feed "pausing" ..
#
# if not subs:
# all_bs_fqmes: list[str] = list(
# bus._subscribers.keys()
# )
# log.warning(
# f'No subscribers for {brokername!r} live-quote ??\n'
# f'broker_symbol: {broker_symbol}\n\n'
# f'Maybe the backend-sys symbol does not match one of,\n'
# f'{pfmt(all_bs_fqmes)}\n'
# )
# NOTE: by default the broker backend doesn't append
# it's own "name" into the fqme schema (but maybe it
@ -663,34 +709,40 @@ async def sample_and_broadcast(
fqme: str = f'{broker_symbol}.{brokername}'
lags: int = 0
# TODO: speed up this loop in an AOT compiled lang (like
# rust or nim or zig) and/or instead of doing a fan out to
# TCP sockets here, we add a shm-style tick queue which
# readers can pull from instead of placing the burden of
# broadcast solely on this `brokerd` actor. see issues:
# XXX TODO XXX: speed up this loop in an AOT compiled
# lang (like rust or nim or zig)!
# AND/OR instead of doing a fan out to TCP sockets
# here, we add a shm-style tick queue which readers can
# pull from instead of placing the burden of broadcast
# solely on this `brokerd` actor. see issues:
# - https://github.com/pikers/piker/issues/98
# - https://github.com/pikers/piker/issues/107
for (stream, tick_throttle) in subs.copy():
# for (stream, tick_throttle) in subs.copy():
for sub in subs.copy():
ipc: MsgStream = sub.ipc
throttle: float = sub.throttle_rate
try:
with trio.move_on_after(0.2) as cs:
if tick_throttle:
if throttle:
send_chan: trio.abc.SendChannel = sub.send_chan
# this is a send mem chan that likely
# pushes to the ``uniform_rate_send()`` below.
try:
stream.send_nowait(
send_chan.send_nowait(
(fqme, quote)
)
except trio.WouldBlock:
overruns[sub_key] += 1
ctx = stream._ctx
chan = ctx.chan
ctx: Context = ipc._ctx
chan: Channel = ctx.chan
log.warning(
f'Feed OVERRUN {sub_key}'
'@{bus.brokername} -> \n'
f'@{bus.brokername} -> \n'
f'feed @ {chan.uid}\n'
f'throttle = {tick_throttle} Hz'
f'throttle = {throttle} Hz'
)
if overruns[sub_key] > 6:
@ -707,10 +759,10 @@ async def sample_and_broadcast(
f'{sub_key}:'
f'{ctx.cid}@{chan.uid}'
)
await stream.aclose()
await ipc.aclose()
raise trio.BrokenResourceError
else:
await stream.send(
await ipc.send(
{fqme: quote}
)
@ -719,21 +771,17 @@ async def sample_and_broadcast(
if lags > 10:
await tractor.pause()
except (
trio.BrokenResourceError,
trio.ClosedResourceError,
trio.EndOfChannel,
):
ctx = stream._ctx
chan = ctx.chan
except Sampler.bcast_errors as ipc_err:
ctx: Context = ipc._ctx
chan: Channel = ctx.chan
if ctx:
log.warning(
'Dropped `brokerd`-quotes-feed connection:\n'
f'{broker_symbol}:'
f'{ctx.cid}@{chan.uid}'
f'Dropped `brokerd`-feed for {broker_symbol!r} due to,\n'
f'x>) {ctx.cid}@{chan.uid}\n'
f'|_{ipc_err!r}\n\n'
)
if tick_throttle:
assert stream._closed
if sub.throttle_rate:
assert ipc._closed
# XXX: do we need to deregister here
# if it's done in the feed bus code?
@ -742,17 +790,16 @@ async def sample_and_broadcast(
# since there seems to be some kinda race..
bus.remove_subs(
sub_key,
{(stream, tick_throttle)},
{sub},
)
async def uniform_rate_send(
rate: float,
quote_stream: trio.abc.ReceiveChannel,
stream: tractor.MsgStream,
stream: MsgStream,
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
@ -770,13 +817,16 @@ async def uniform_rate_send(
https://gist.github.com/njsmith/7ea44ec07e901cb78ebe1dd8dd846cb9
'''
# TODO: compute the approx overhead latency per cycle
left_to_sleep = throttle_period = 1/rate - 0.000616
# ?TODO? dynamically compute the **actual** approx overhead latency per cycle
# instead of this magic # bidinezz?
throttle_period: float = 1/rate - 0.000616
left_to_sleep: float = throttle_period
# send cycle state
first_quote: dict|None
first_quote = last_quote = None
last_send = time.time()
diff = 0
last_send: float = time.time()
diff: float = 0
task_status.started()
ticks_by_type: dict[
@ -787,22 +837,28 @@ async def uniform_rate_send(
clear_types = _tick_groups['clears']
while True:
# compute the remaining time to sleep for this throttled cycle
left_to_sleep = throttle_period - diff
left_to_sleep: float = throttle_period - diff
if left_to_sleep > 0:
cs: trio.CancelScope
with trio.move_on_after(left_to_sleep) as cs:
sym: str
last_quote: dict
try:
sym, last_quote = await quote_stream.receive()
except trio.EndOfChannel:
log.exception(f"feed for {stream} ended?")
log.exception(
f'Live stream for feed ended?\n'
f'<=c\n'
f' |_{stream!r}\n'
)
break
diff = time.time() - last_send
diff: float = time.time() - last_send
if not first_quote:
first_quote = last_quote
first_quote: dict = last_quote
# first_quote['tbt'] = ticks_by_type
if (throttle_period - diff) > 0:
@ -863,7 +919,9 @@ async def uniform_rate_send(
# TODO: now if only we could sync this to the display
# rate timing exactly lul
try:
await stream.send({sym: first_quote})
await stream.send({
sym: first_quote
})
except tractor.RemoteActorError as rme:
if rme.type is not tractor._exceptions.StreamOverrun:
raise
@ -874,19 +932,28 @@ async def uniform_rate_send(
f'{sym}:{ctx.cid}@{chan.uid}'
)
# NOTE: any of these can be raised by `tractor`'s IPC
# transport-layer and we want to be highly resilient
# to consumers which crash or lose network connection.
# I.e. we **DO NOT** want to crash and propagate up to
# ``pikerd`` these kinds of errors!
except (
# NOTE: any of these can be raised by ``tractor``'s IPC
# transport-layer and we want to be highly resilient
# to consumers which crash or lose network connection.
# I.e. we **DO NOT** want to crash and propagate up to
# ``pikerd`` these kinds of errors!
trio.ClosedResourceError,
trio.BrokenResourceError,
ConnectionResetError,
):
# if the feed consumer goes down then drop
# out of this rate limiter
log.warning(f'{stream} closed')
) + Sampler.bcast_errors as ipc_err:
match ipc_err:
case trio.EndOfChannel():
log.info(
f'{stream} terminated by peer,\n'
f'{ipc_err!r}'
)
case _:
# if the feed consumer goes down then drop
# out of this rate limiter
log.warning(
f'{stream} closed due to,\n'
f'{ipc_err!r}'
)
await stream.aclose()
return
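The throttle arithmetic above reduces to a fixed-period pacing loop; a toy synchronous sketch (the 0.000616s overhead constant is the same magic number as upstream, everything else is illustrative):

    import time

    rate = 60.0                            # target send rate in Hz
    throttle_period = 1/rate - 0.000616    # ~0.01605s per send cycle
    last_send = time.time()
    for _ in range(10):                    # bounded toy loop
        diff = time.time() - last_send
        left_to_sleep = throttle_period - diff
        if left_to_sleep > 0:
            time.sleep(left_to_sleep)      # `trio.move_on_after()` upstream
        # ... relay the most recently accumulated quote here ...
        last_send = time.time()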

View File

@ -31,11 +31,14 @@ from pathlib import Path
from pprint import pformat
from typing import (
Any,
Callable,
Sequence,
Hashable,
TYPE_CHECKING,
)
from types import ModuleType
from fuzzywuzzy import process as fuzzy
from rapidfuzz import process as fuzzy
import tomli_w # for fast symbol cache writing
import tractor
import trio
@ -54,7 +57,7 @@ from piker.brokers import (
)
if TYPE_CHECKING:
from ..accounting import (
from piker.accounting import (
Asset,
MktPair,
)
@ -88,6 +91,18 @@ class SymbologyCache(Struct):
# provided by the backend pkg.
mktmaps: dict[str, MktPair] = field(default_factory=dict)
def pformat(self) -> str:
return (
f'<{type(self).__name__}(\n'
f' .mod: {self.mod!r}\n'
f' .assets: {len(self.assets)!r}\n'
f' .pairs: {len(self.pairs)!r}\n'
f' .mktmaps: {len(self.mktmaps)!r}\n'
f')>'
)
__repr__ = pformat
def write_config(self) -> None:
# put the backend's pair-struct type ref at the top
@ -128,8 +143,8 @@ class SymbologyCache(Struct):
- `.get_mkt_pairs()`: returning a table of pair-`Struct`
types, custom defined by the particular backend.
AND, the required `.get_mkt_info()` module-level endpoint which
maps `fqme: str` -> `MktPair`s.
AND, the required `.get_mkt_info()` module-level endpoint
which maps `fqme: str` -> `MktPair`s.
These tables are then used to fill out the `.assets`, `.pairs` and
`.mktmaps` tables on this cache instance, respectively.
@ -147,57 +162,68 @@ class SymbologyCache(Struct):
'Implement `Client.get_assets()`!'
)
if get_mkt_pairs := getattr(client, 'get_mkt_pairs', None):
pairs: dict[str, Struct] = await get_mkt_pairs()
for bs_fqme, pair in pairs.items():
# NOTE: every backend defined pair should
# declare it's ns path for roundtrip
# serialization lookup.
if not getattr(pair, 'ns_path', None):
raise TypeError(
f'Pair-struct for {self.mod.name} MUST define a '
'`.ns_path: str`!\n'
f'{pair}'
)
entry = await self.mod.get_mkt_info(pair.bs_fqme)
if not entry:
continue
mkt: MktPair
pair: Struct
mkt, _pair = entry
assert _pair is pair, (
f'`{self.mod.name}` backend probably has a '
'keying-symmetry problem between the pair-`Struct` '
'returned from `Client.get_mkt_pairs()` and the '
'module level endpoint: `.get_mkt_info()`\n\n'
"Here's the struct diff:\n"
f'{_pair - pair}'
)
# NOTE XXX: this means backends MUST implement
# a `Struct.bs_mktid: str` field to provide
# a native-keyed map to their own symbol
# set(s).
self.pairs[pair.bs_mktid] = pair
# NOTE: `MktPair`s are keyed here using piker's
# internal FQME schema so that search,
# accounting and feed init can be accomplished
# on a sane, uniform, normalized basis.
self.mktmaps[mkt.fqme] = mkt
self.pair_ns_path: str = tractor.msg.NamespacePath.from_ref(
pair,
)
else:
get_mkt_pairs: Callable|None = getattr(
client,
'get_mkt_pairs',
None,
)
if not get_mkt_pairs:
log.warning(
f'No symbology cache `Pair` support for `{self.mod.name}`..\n'
'Implement `Client.get_mkt_pairs()`!'
)
return self
pairs: dict[str, Struct] = await get_mkt_pairs()
if not pairs:
log.warning(
f'No pairs from initial {self.mod.name!r} sym-cache request?\n\n'
f'`Client.get_mkt_pairs()` -> {pairs!r} ?'
)
return self
for bs_fqme, pair in pairs.items():
if not getattr(pair, 'ns_path', None):
# XXX: every backend defined pair must declare
# a `.ns_path: tractor.NamespacePath` to enable
# roundtrip serialization lookup from a local
# cache file.
raise TypeError(
f'Pair-struct for {self.mod.name} MUST define a '
'`.ns_path: str`!\n\n'
f'{pair!r}'
)
entry = await self.mod.get_mkt_info(pair.bs_fqme)
if not entry:
continue
mkt: MktPair
pair: Struct
mkt, _pair = entry
assert _pair is pair, (
f'`{self.mod.name}` backend probably has a '
'keying-symmetry problem between the pair-`Struct` '
'returned from `Client.get_mkt_pairs()` and the '
'module level endpoint: `.get_mkt_info()`\n\n'
"Here's the struct diff:\n"
f'{_pair - pair}'
)
# NOTE XXX: this means backends MUST implement
# a `Struct.bs_mktid: str` field to provide
# a native-keyed map to their own symbol
# set(s).
self.pairs[pair.bs_mktid] = pair
# NOTE: `MktPair`s are keyed here using piker's
# internal FQME schema so that search,
# accounting and feed init can be accomplished
# on a sane, uniform, normalized basis.
self.mktmaps[mkt.fqme] = mkt
self.pair_ns_path: str = tractor.msg.NamespacePath.from_ref(
pair,
)
return self
@ -308,7 +334,7 @@ class SymbologyCache(Struct):
matches in a `dict` including the `MktPair` values.
'''
matches = fuzzy.extractBests(
matches = fuzzy.extract(
pattern,
getattr(self, table),
score_cutoff=50,
@ -466,3 +492,43 @@ def get_symcache(
pdbp.xpm()
return symcache
def match_from_pairs(
pairs: dict[str, Struct],
query: str,
score_cutoff: int = 50,
**extract_kwargs,
) -> dict[str, Struct]:
'''
Fuzzy search over a "pairs table" maintained by most backends
as part of their symbology-info caching internals.
Scan the native symbol key set and return best ranked
matches back in a new `dict`.
'''
# TODO: somehow cache this list (per call) like we were in
# `open_symbol_search()`?
keys: list[str] = list(pairs)
matches: list[tuple[
Sequence[Hashable], # matching input key
Any, # scores
Any,
]] = fuzzy.extract(
# NOTE: most backends provide keys uppercased
query=query,
choices=keys,
score_cutoff=score_cutoff,
**extract_kwargs,
)
# pop and repack pairs in output dict
matched_pairs: dict[str, Struct] = {}
for item in matches:
pair_key: str = item[0]
matched_pairs[pair_key] = pairs[pair_key]
return matched_pairs
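Usage sketch against a made-up pairs table; `rapidfuzz.process.extract()` yields `(match, score, index)` triples when passed a sequence of choices:

    from rapidfuzz import process as fuzzy

    pairs = {
        'XBTUSDT': {'bs_mktid': 'XBTUSDT'},   # hypothetical entries
        'ETHUSDT': {'bs_mktid': 'ETHUSDT'},
    }
    # NOTE: backends generally provide keys uppercased, so match case
    matches = fuzzy.extract('XBT', list(pairs), score_cutoff=50)
    matched = {key: pairs[key] for key, _score, _i in matches}
    # -> {'XBTUSDT': {'bs_mktid': 'XBTUSDT'}}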

View File

@ -1,336 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Financial time series processing utilities usually
pertaining to OHLCV style sampled data.
Routines are generally implemented in either ``numpy`` or
``polars`` B)
'''
from __future__ import annotations
from typing import Literal
from math import (
ceil,
floor,
)
import numpy as np
import polars as pl
from ._sharedmem import ShmArray
from ..toolz.profile import (
Profiler,
pg_profile_enabled,
ms_slower_then,
)
def slice_from_time(
arr: np.ndarray,
start_t: float,
stop_t: float,
step: float, # sampler period step-diff
) -> slice:
'''
Calculate array indices mapped from a time range and return them in
a slice.
Given an input array with an epoch `'time'` series entry, calculate
the indices which span the time range and return in a slice. Presume
each `'time'` step increment is uniform; when the time stamp
series contains gaps (i.e. the uniform presumption is untrue) use
``np.searchsorted()`` binary search to look up the appropriate
index.
'''
profiler = Profiler(
msg='slice_from_time()',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
)
times = arr['time']
t_first = floor(times[0])
t_last = ceil(times[-1])
# the greatest index we can return which slices to the
# end of the input array.
read_i_max = arr.shape[0]
# compute (presumed) uniform-time-step index offsets
i_start_t = floor(start_t)
read_i_start = floor(((i_start_t - t_first) // step)) - 1
i_stop_t = ceil(stop_t)
# XXX: edge case -> always set stop index to last in array whenever
# the input stop time is detected to be greater than the equiv time
# stamp at that last entry.
if i_stop_t >= t_last:
read_i_stop = read_i_max
else:
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
# always clip outputs to array support
# for read start:
# - never allow a start < the 0 index
# - never allow an end index > the read array len
read_i_start = min(
max(0, read_i_start),
read_i_max - 1,
)
read_i_stop = max(
0,
min(read_i_stop, read_i_max),
)
# check for larger-than-latest calculated index for given start
# time, in which case we do a binary search for the correct index.
# NOTE: this is usually the result of a time series with time gaps
# where it is expected that each index step maps to a uniform step
# in the time stamp series.
t_iv_start = times[read_i_start]
if (
t_iv_start > i_start_t
):
# do a binary search for the best index mapping to ``start_t``
# given we measured an overshoot using the uniform-time-step
# calculation from above.
# TODO: once we start caching these per source-array,
# we can just overwrite ``read_i_start`` directly.
new_read_i_start = np.searchsorted(
times,
i_start_t,
side='left',
)
# TODO: minimize binary search work as much as possible:
# - cache these remap values which compensate for gaps in the
# uniform time step basis where we calc a later start
# index for the given input ``start_t``.
# - can we shorten the input search sequence by heuristic?
# up_to_arith_start = index[:read_i_start]
if (
new_read_i_start <= read_i_start
):
# t_diff = t_iv_start - start_t
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
# f'diff: {t_diff}\n'
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
# )
read_i_start = new_read_i_start
t_iv_stop = times[read_i_stop - 1]
if (
t_iv_stop > i_stop_t
):
# t_diff = stop_t - t_iv_stop
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
# f'diff: {t_diff}\n'
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
# )
new_read_i_stop = np.searchsorted(
times[read_i_start:],
# times,
i_stop_t,
side='right',
)
if (
new_read_i_stop <= read_i_stop
):
read_i_stop = read_i_start + new_read_i_stop + 1
# sanity checks for range size
# samples = (i_stop_t - i_start_t) // step
# index_diff = read_i_stop - read_i_start + 1
# if index_diff > (samples + 3):
# breakpoint()
# read-relative indexes: gives a slice where `shm.array[read_slc]`
# will be the data spanning the input time range `start_t` ->
# `stop_t`
read_slc = slice(
int(read_i_start),
int(read_i_stop),
)
profiler(
'slicing complete'
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
)
# NOTE: if caller needs absolute buffer indices they can
# slice the buffer abs index like so:
# index = arr['index']
# abs_indx = index[read_slc]
# abs_slc = slice(
# int(abs_indx[0]),
# int(abs_indx[-1]),
# )
return read_slc
def detect_null_time_gap(
shm: ShmArray,
imargin: int = 1,
) -> tuple[int, float, float, int] | None:
'''
Detect if there are any zero-epoch stamped rows in
the presumed 'time' field-column.
Filter to the gap and return a surrounding index range.
NOTE: for now presumes only ONE gap XD
'''
# ensure we read buffer state only once so that ShmArray rt
# circular-buffer updates don't cause a indexing/size mismatch.
array: np.ndarray = shm.array
zero_pred: np.ndarray = array['time'] == 0
zero_t: np.ndarray = array[zero_pred]
if zero_t.size:
istart, iend = zero_t['index'][[0, -1]]
start, end = shm._array['time'][
[istart - imargin, iend + imargin]
]
return (
istart - imargin,
start,
end,
iend + imargin,
)
return None
t_unit: Literal = Literal[
'days',
'hours',
'minutes',
'seconds',
'milliseconds',
'microseconds',
'nanoseconds',
]
def with_dts(
df: pl.DataFrame,
time_col: str = 'time',
) -> pl.DataFrame:
'''
Insert datetime (casted) columns to a (presumably) OHLC sampled
time series with an epoch-time column keyed by ``time_col``.
'''
return df.with_columns([
pl.col(time_col).shift(1).suffix('_prev'),
pl.col(time_col).diff().alias('s_diff'),
pl.from_epoch(pl.col(time_col)).alias('dt'),
]).with_columns([
pl.from_epoch(pl.col(f'{time_col}_prev')).alias('dt_prev'),
pl.col('dt').diff().alias('dt_diff'),
]) #.with_columns(
# pl.col('dt').diff().dt.days().alias('days_dt_diff'),
# )
def detect_time_gaps(
df: pl.DataFrame,
time_col: str = 'time',
# epoch sampling step diff
expect_period: float = 60,
# datetime diff unit and gap value
# crypto mkts
# gap_dt_unit: t_unit = 'minutes',
# gap_thresh: int = 1,
# NOTE: legacy stock mkts have venue operating hours
# and thus gaps normally no more than 1-2 days at
# a time.
# XXX -> must be valid ``polars.Expr.dt.<name>``
# TODO: allow passing in a frame of operating hours
# durations/ranges for faster legit gap checks.
gap_dt_unit: t_unit = 'days',
gap_thresh: int = 1,
) -> pl.DataFrame:
'''
Filter to OHLC datums which contain sample step gaps.
For eg. legacy markets which have venue close gaps and/or
actual missing data segments.
'''
return (
with_dts(df)
.filter(
pl.col('s_diff').abs() > expect_period
)
.filter(
getattr(
pl.col('dt_diff').dt,
gap_dt_unit,
)().abs() > gap_thresh
)
)
def detect_price_gaps(
df: pl.DataFrame,
gt_multiplier: float = 2.,
price_fields: list[str] = ['high', 'low'],
) -> pl.DataFrame:
'''
Detect gaps in clearing price over an OHLC series.
2 types of gaps generally exist; up gaps and down gaps:
- UP gap: when any next sample's lo price is strictly greater
than the current sample's hi price.
- DOWN gap: when any next sample's hi price is strictly
less than the current sample's lo price.
'''
# return df.filter(
# pl.col('high') - ) > expect_period,
# ).select([
# pl.dt.datetime(pl.col(time_col).shift(1)).suffix('_previous'),
# pl.all(),
# ]).select([
# pl.all(),
# (pl.col(time_col) - pl.col(f'{time_col}_previous')).alias('diff'),
# ])
...

View File

@ -27,7 +27,6 @@ from functools import partial
from types import ModuleType
from typing import (
Any,
Optional,
Callable,
AsyncContextManager,
AsyncGenerator,
@ -35,6 +34,7 @@ from typing import (
)
import json
import tractor
import trio
from trio_typing import TaskStatus
from trio_websocket import (
@ -167,7 +167,7 @@ async def _reconnect_forever(
async def proxy_msgs(
ws: WebSocketConnection,
pcs: trio.CancelScope, # parent cancel scope
rent_cs: trio.CancelScope, # parent cancel scope
):
'''
Receive (under `timeout` deadline) all msgs from the underlying
@ -192,7 +192,7 @@ async def _reconnect_forever(
f'{url} connection bail with:'
)
await trio.sleep(0.5)
pcs.cancel()
rent_cs.cancel()
# go back to reconnect loop in parent task
return
@ -204,7 +204,7 @@ async def _reconnect_forever(
f'{src_mod}\n'
'WS feed seems down and slow af.. reconnecting\n'
)
pcs.cancel()
rent_cs.cancel()
# go back to reconnect loop in parent task
return
@ -228,16 +228,25 @@ async def _reconnect_forever(
nobsws._connected = trio.Event()
task_status.started()
while not snd._closed:
mc_state: trio._channel.MemoryChannelState = snd._state
while (
mc_state.open_receive_channels > 0
and
mc_state.open_send_channels > 0
):
log.info(
f'{src_mod}\n'
f'{url} trying (RE)CONNECT'
)
async with trio.open_nursery() as n:
cs = nobsws._cs = n.cancel_scope
ws: WebSocketConnection
async with open_websocket_url(url) as ws:
ws: WebSocketConnection
try:
async with (
open_websocket_url(url) as ws,
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn,
):
cs = nobsws._cs = tn.cancel_scope
nobsws._ws = ws
log.info(
f'{src_mod}\n'
@ -245,7 +254,7 @@ async def _reconnect_forever(
)
# begin relay loop to forward msgs
n.start_soon(
tn.start_soon(
proxy_msgs,
ws,
cs,
@ -259,7 +268,7 @@ async def _reconnect_forever(
# TODO: should we return an explicit sub-cs
# from this fixture task?
await n.start(
await tn.start(
open_fixture,
fixture,
nobsws,
@ -270,8 +279,22 @@ async def _reconnect_forever(
nobsws._connected.set()
await trio.sleep_forever()
# ws open block end
# nursery block end
except (
HandshakeError,
ConnectionRejected,
):
log.exception('Retrying connection')
await trio.sleep(0.5) # throttle
except BaseException as _berr:
berr = _berr
log.exception(
'Reconnect-attempt failed ??\n'
)
await trio.sleep(0.2) # throttle
raise berr
#|_ws & nursery block ends
nobsws._connected = trio.Event()
if cs.cancelled_caught:
log.cancel(
@ -284,7 +307,8 @@ async def _reconnect_forever(
and not nobsws._connected.is_set()
)
# -> from here, move to next reconnect attempt
# -> from here, move to next reconnect attempt iteration
# in the while loop above Bp
else:
log.exception(
@ -318,21 +342,25 @@ async def open_autorecon_ws(
connectivity errors, or some user defined recv timeout.
You can provide a ``fixture`` async-context-manager which will be
entered/exited around each connection reset; eg. for (re)requesting
subscriptions without requiring streaming setup code to rerun.
entered/exited around each connection reset; eg. for
(re)requesting subscriptions without requiring streaming setup
code to rerun.
'''
snd: trio.MemorySendChannel
rcv: trio.MemoryReceiveChannel
snd, rcv = trio.open_memory_channel(616)
async with trio.open_nursery() as n:
async with (
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn
):
nobsws = NoBsWs(
url,
rcv,
msg_recv_timeout=msg_recv_timeout,
)
await n.start(
await tn.start(
partial(
_reconnect_forever,
url,
@ -345,16 +373,15 @@ async def open_autorecon_ws(
await nobsws._connected.wait()
assert nobsws._cs
assert nobsws.connected()
try:
yield nobsws
finally:
n.cancel_scope.cancel()
tn.cancel_scope.cancel()
'''
JSONRPC response-request style machinery for transparent multiplexing of msgs
over a NoBsWs.
JSONRPC response-request style machinery for transparent multiplexing
of msgs over a `NoBsWs`.
'''
@ -362,8 +389,8 @@ over a NoBsWs.
class JSONRPCResult(Struct):
id: int
jsonrpc: str = '2.0'
result: Optional[dict] = None
error: Optional[dict] = None
result: dict|None = None
error: dict|None = None
@acm
@ -371,43 +398,82 @@ async def open_jsonrpc_session(
url: str,
start_id: int = 0,
response_type: type = JSONRPCResult,
request_type: Optional[type] = None,
request_hook: Optional[Callable] = None,
error_hook: Optional[Callable] = None,
msg_recv_timeout: float = float('inf'),
# ^NOTE, since only `deribit` is using this jsonrpc stuff atm
# and options mkts are generally "slow moving"..
#
# FURTHER if we break the underlying ws connection then since we
# don't pass a `fixture` to the task that manages `NoBsWs`, i.e.
# `_reconnect_forever()`, the jsonrpc "transport pipe" gets
# broken and never restored with whatever init sequence is
# required to re-establish a working req-resp session.
) -> Callable[[str, dict], dict]:
'''
Init a json-RPC-over-websocket connection to the provided `url`.
A `json_rpc: Callable[[str, dict], dict` is delivered to the
caller for sending requests and a bg-`trio.Task` handles
processing of response msgs including error reporting/raising in
the parent/caller task.
'''
# NOTE, store all request msgs so we can raise errors on the
# caller side!
req_msgs: dict[int, dict] = {}
async with (
trio.open_nursery() as n,
open_autorecon_ws(url) as ws
trio.open_nursery() as tn,
open_autorecon_ws(
url=url,
msg_recv_timeout=msg_recv_timeout,
) as ws
):
rpc_id: Iterable = count(start_id)
rpc_id: Iterable[int] = count(start_id)
rpc_results: dict[int, dict] = {}
async def json_rpc(method: str, params: dict) -> dict:
async def json_rpc(
method: str,
params: dict,
) -> dict:
'''
Perform a json-rpc call and wait for the result, raising an
exception if an error field is present in the response.
'''
nonlocal req_msgs
req_id: int = next(rpc_id)
msg = {
'jsonrpc': '2.0',
'id': next(rpc_id),
'id': req_id,
'method': method,
'params': params
}
_id = msg['id']
rpc_results[_id] = {
result = rpc_results[_id] = {
'result': None,
'event': trio.Event()
'error': None,
'event': trio.Event(), # signal caller resp arrived
}
req_msgs[_id] = msg
await ws.send_msg(msg)
# wait for reponse before unblocking requester code
await rpc_results[_id]['event'].wait()
ret = rpc_results[_id]['result']
if (maybe_result := result['result']):
ret = maybe_result
del rpc_results[_id]
else:
err = result['error']
raise Exception(
f'JSONRPC request failed\n'
f'req: {msg}\n'
f'resp: {err}\n'
)
if ret.error is not None:
raise Exception(json.dumps(ret.error, indent=4))
@ -422,6 +488,7 @@ async def open_jsonrpc_session(
the server side.
'''
nonlocal req_msgs
async for msg in ws:
match msg:
case {
@ -445,19 +512,28 @@ async def open_jsonrpc_session(
'params': _,
}:
log.debug(f'Received\n{msg}')
if request_hook:
await request_hook(request_type(**msg))
case {
'error': error
}:
log.warning(f'Received\n{error}')
if error_hook:
await error_hook(response_type(**msg))
# retrieve orig request msg, set error
# response in original "result" msg,
# THEN FINALLY set the event to signal caller
# to raise the error in the parent task.
req_id: int = msg['id']
req_msg: dict = req_msgs[req_id]
result: dict = rpc_results[req_id]
result['error'] = error
result['event'].set()
log.error(
f'JSONRPC request failed\n'
f'req: {req_msg}\n'
f'resp: {error}\n'
)
case _:
log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')
n.start_soon(recv_task)
tn.start_soon(recv_task)
yield json_rpc
n.cancel_scope.cancel()
tn.cancel_scope.cancel()
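Caller-side usage sketch; the url and method name are placeholders (per the note above only the `deribit` backend drives this machinery atm):

    async with open_jsonrpc_session(
        'wss://test.deribit.com/ws/api/v2',  # placeholder url
        msg_recv_timeout=31.,
    ) as json_rpc:
        # an error-response now raises here in the caller task
        resp = await json_rpc(
            method='public/test',  # hypothetical method name
            params={},
        )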

View File

@ -28,6 +28,7 @@ module.
from __future__ import annotations
from collections import (
defaultdict,
abc,
)
from contextlib import asynccontextmanager as acm
from functools import partial
@ -36,19 +37,16 @@ from types import ModuleType
from typing import (
Any,
AsyncContextManager,
Optional,
Awaitable,
Sequence,
TYPE_CHECKING,
)
import trio
from trio.abc import ReceiveChannel
from trio_typing import TaskStatus
import tractor
from tractor.trionics import (
maybe_open_context,
gather_contexts,
)
from tractor import trionics
from piker.accounting import (
MktPair,
@ -59,7 +57,6 @@ from piker.brokers import get_brokermod
from piker.service import (
maybe_spawn_brokerd,
)
from piker.ui import _search
from piker.calc import humanize
from ._util import (
log,
@ -70,7 +67,7 @@ from .validate import (
FeedInit,
validate_backend,
)
from .history import (
from ..tsp import (
manage_history,
)
from .ingest import get_ingestormod
@ -79,6 +76,35 @@ from ._sampling import (
uniform_rate_send,
)
if TYPE_CHECKING:
from tractor._addr import Address
from tractor.msg.types import Aid
class Sub(Struct, frozen=True):
'''
A live feed subscription entry.
Contains meta-data on the remote-actor type (in functionality
terms) as well as refs to IPC streams and sampler runtime
params.
'''
ipc: tractor.MsgStream
send_chan: trio.abc.SendChannel | None = None
# tick throttle rate in Hz; determines how live
# quotes/ticks should be downsampled before relay
# to the receiving remote consumer (process).
throttle_rate: float | None = None
_throttle_cs: trio.CancelScope | None = None
# TODO: actually stash comms info for the far end to allow
# `.tsp`, `.fsp` and `.data._sampling` sub-systems to re-render
# the data view as needed via msging with the `._remote_ctl`
# ipc ctx.
rc_ui: bool = False
class _FeedsBus(Struct):
'''
@ -104,13 +130,7 @@ class _FeedsBus(Struct):
_subscribers: defaultdict[
str,
set[
tuple[
tractor.MsgStream | trio.MemorySendChannel,
# tractor.Context,
float | None, # tick throttle in Hz
]
]
set[Sub]
] = defaultdict(set)
async def start_task(
@ -125,6 +145,8 @@ class _FeedsBus(Struct):
trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> None:
with trio.CancelScope() as cs:
# TODO: shouldn't this be a direct await to avoid
# cancellation contagion to the bus nursery!?!?!
await self.nursery.start(
target,
*args,
@ -142,31 +164,28 @@ class _FeedsBus(Struct):
def get_subs(
self,
key: str,
) -> set[
tuple[
tractor.MsgStream | trio.MemorySendChannel,
float | None, # tick throttle in Hz
]
]:
) -> set[Sub]:
'''
Get the ``set`` of consumer subscription entries for the given key.
'''
return self._subscribers[key]
def subs_items(self) -> abc.ItemsView[str, set[Sub]]:
return self._subscribers.items()
def add_subs(
self,
key: str,
subs: set[tuple[
tractor.MsgStream | trio.MemorySendChannel,
float | None, # tick throttle in Hz
]],
) -> set[tuple]:
subs: set[Sub],
) -> set[Sub]:
'''
Add a ``set`` of consumer subscription entries for the given key.
'''
_subs: set[tuple] = self._subscribers[key]
_subs: set[Sub] = self._subscribers.setdefault(key, set())
_subs.update(subs)
return _subs
@ -331,7 +350,6 @@ async def allocate_persistent_feed(
) = await bus.nursery.start(
manage_history,
mod,
bus,
mkt,
some_data_ready,
feed_is_live,
@ -339,7 +357,9 @@ async def allocate_persistent_feed(
# yield back control to starting nursery once we receive either
# some history or a real-time quote.
log.info(f'loading OHLCV history: {fqme}')
log.info(
f'loading OHLCV history: {fqme!r}\n'
)
await some_data_ready.wait()
flume = Flume(
@ -408,7 +428,13 @@ async def allocate_persistent_feed(
rt_shm.array['time'][1] = ts + 1
elif hist_shm.array.size == 0:
raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')
for i in range(100):
await trio.sleep(0.1)
if hist_shm.array.size > 0:
break
else:
await tractor.pause()
raise RuntimeError(f'History (1m) Shm for {fqme} is empty!?')
# wait the spawning parent task to register its subscriber
# send-stream entry before we start the sample loop.
@ -438,8 +464,9 @@ async def open_feed_bus(
symbols: list[str], # normally expected to the broker-specific fqme
loglevel: str = 'error',
tick_throttle: Optional[float] = None,
tick_throttle: float | None = None,
start_stream: bool = True,
allow_remote_ctl_ui: bool = False,
) -> dict[
str, # fqme
@ -454,8 +481,12 @@ async def open_feed_bus(
if loglevel is None:
loglevel = tractor.current_actor().loglevel
# XXX: required to propagate ``tractor`` loglevel to piker logging
get_console_log(loglevel or tractor.current_actor().loglevel)
# XXX: required to propagate ``tractor`` loglevel to piker
# logging
get_console_log(
loglevel
or tractor.current_actor().loglevel
)
# local state sanity checks
# TODO: check for any stale shm entries for this symbol
@ -465,7 +496,7 @@ async def open_feed_bus(
assert 'brokerd' in servicename
assert brokername in servicename
bus = get_feed_bus(brokername)
bus: _FeedsBus = get_feed_bus(brokername)
sub_registered = trio.Event()
flumes: dict[str, Flume] = {}
@ -512,10 +543,10 @@ async def open_feed_bus(
# pack for ``.started()`` sync msg
flumes[fqme] = flume
# we use the broker-specific fqme (bs_fqme) for the
# sampler subscription since the backend isn't (yet) expected to
# append it's own name to the fqme, so we filter on keys which
# *do not* include that name (e.g .ib) .
# we use the broker-specific fqme (bs_fqme) for the sampler
# subscription since the backend isn't (yet) expected to
# append it's own name to the fqme, so we filter on keys
# which *do not* include that name (e.g .ib) .
bus._subscribers.setdefault(bs_fqme, set())
# sync feed subscribers with flume handles
@ -554,49 +585,60 @@ async def open_feed_bus(
# that the ``sample_and_broadcast()`` task (spawned inside
# ``allocate_persistent_feed()``) will push real-time quote
# (ticks) to this new consumer.
cs: trio.CancelScope | None = None
send: trio.MemorySendChannel | None = None
if tick_throttle:
flume.throttle_rate = tick_throttle
# open a bg task which receives quotes over a mem chan
# and only pushes them to the target actor-consumer at
# a max ``tick_throttle`` instantaneous rate.
# open a bg task which receives quotes over a mem
# chan and only pushes them to the target
# actor-consumer at a max ``tick_throttle``
# (instantaneous) rate.
send, recv = trio.open_memory_channel(2**10)
cs = await bus.start_task(
# NOTE: the ``.send`` channel here is a swapped-in
# trio mem chan which gets `.send()`-ed by the normal
# sampler task but instead of being sent directly
# over the IPC msg stream it's the throttle task
# does the work of incrementally forwarding to the
# IPC stream at the throttle rate.
cs: trio.CancelScope = await bus.start_task(
uniform_rate_send,
tick_throttle,
recv,
stream,
)
# NOTE: so the ``send`` channel here is actually a swapped
# in trio mem chan which gets pushed by the normal sampler
# task but instead of being sent directly over the IPC msg
# stream it's the throttle task does the work of
# incrementally forwarding to the IPC stream at the throttle
# rate.
send._ctx = ctx # mock internal ``tractor.MsgStream`` ref
sub = (send, tick_throttle)
else:
sub = (stream, tick_throttle)
sub = Sub(
ipc=stream,
send_chan=send,
throttle_rate=tick_throttle,
_throttle_cs=cs,
rc_ui=allow_remote_ctl_ui,
)
# TODO: add an api for this on the bus?
# maybe use the current task-id to key the sub list that's
# added / removed? Or maybe we can add a general
# pause-resume by sub-key api?
bs_fqme = fqme.removesuffix(f'.{brokername}')
local_subs.setdefault(bs_fqme, set()).add(sub)
bus.add_subs(bs_fqme, {sub})
local_subs.setdefault(
bs_fqme,
set()
).add(sub)
bus.add_subs(
bs_fqme,
{sub}
)
# sync caller with all subs registered state
sub_registered.set()
uid = ctx.chan.uid
uid: tuple[str, str] = ctx.chan.uid
try:
# ctrl protocol for start/stop of quote streams based on UI
# state (eg. don't need a stream when a symbol isn't being
# displayed).
# ctrl protocol for start/stop of live quote streams
# based on UI state (eg. don't need a stream when
# a symbol isn't being displayed).
async for msg in stream:
if msg == 'pause':
@ -688,7 +730,10 @@ class Feed(Struct):
async for msg in stream:
await tx.send(msg)
async with trio.open_nursery() as nurse:
async with (
tractor.trionics.collapse_eg(),
trio.open_nursery() as nurse
):
# spawn a relay task for each stream so that they all
# multiplex to a common channel.
for brokername in mods:
@ -734,6 +779,7 @@ async def install_brokerd_search(
except trio.EndOfChannel:
return {}
from piker.ui import _search
async with _search.register_symbol_search(
provider_name=brokermod.name,
@ -750,9 +796,8 @@ async def install_brokerd_search(
@acm
async def maybe_open_feed(
fqmes: list[str],
loglevel: Optional[str] = None,
loglevel: str | None = None,
**kwargs,
@ -768,7 +813,7 @@ async def maybe_open_feed(
'''
fqme = fqmes[0]
async with maybe_open_context(
async with trionics.maybe_open_context(
acm_func=open_feed,
kwargs={
'fqmes': fqmes,
@ -788,7 +833,7 @@ async def maybe_open_feed(
# add a new broadcast subscription for the quote stream
# if this feed is likely already in use
async with gather_contexts(
async with trionics.gather_contexts(
mngrs=[stream.subscribe() for stream in feed.streams.values()]
) as bstreams:
for bstream, flume in zip(bstreams, feed.flumes.values()):
@ -804,13 +849,14 @@ async def maybe_open_feed(
@acm
async def open_feed(
fqmes: list[str],
loglevel: str | None = None,
loglevel: str|None = None,
allow_overruns: bool = True,
start_stream: bool = True,
tick_throttle: float | None = None, # Hz
tick_throttle: float|None = None, # Hz
allow_remote_ctl_ui: bool = False,
) -> Feed:
'''
@ -848,7 +894,7 @@ async def open_feed(
)
portals: tuple[tractor.Portal]
async with gather_contexts(
async with trionics.gather_contexts(
brokerd_ctxs,
) as portals:
@ -861,19 +907,19 @@ async def open_feed(
feed.portals[brokermod] = portal
# fill out "status info" that the UI can show
host, port = portal.channel.raddr
if host == '127.0.0.1':
host = 'localhost'
chan: tractor.Channel = portal.chan
raddr: Address = chan.raddr
aid: Aid = chan.aid
# TAG_feed_status_update
feed.status.update({
'actor_name': portal.channel.uid[0],
'host': host,
'port': port,
'actor_id': aid,
'actor_short_id': f'{aid.name}@{aid.pid}',
'ipc': chan.raddr.proto_key,
'ipc_addr': raddr,
'hist_shm': 'NA',
'rt_shm': 'NA',
'throttle_rate': tick_throttle,
'throttle_hz': tick_throttle,
})
# feed.status.update(init_msg.pop('status', {}))
# (allocate and) connect to any feed bus for this broker
bus_ctxs.append(
@ -894,13 +940,19 @@ async def open_feed(
# of these stream open sequences sequentially per
# backend? .. need some thot!
allow_overruns=True,
# NOTE: UI actors (like charts) can allow
# remote control of certain graphics rendering
# capabilities via the
# `.ui._remote_ctl.remote_annotate()` msg loop.
allow_remote_ctl_ui=allow_remote_ctl_ui,
)
)
assert len(feed.mods) == len(feed.portals)
async with (
gather_contexts(bus_ctxs) as ctxs,
trionics.gather_contexts(bus_ctxs) as ctxs,
):
stream_ctxs: list[tractor.MsgStream] = []
for (
@ -942,7 +994,7 @@ async def open_feed(
brokermod: ModuleType
fqmes: list[str]
async with (
gather_contexts(stream_ctxs) as streams,
trionics.gather_contexts(stream_ctxs) as streams,
):
for (
stream,
@ -958,6 +1010,12 @@ async def open_feed(
if brokermod.name == flume.mkt.broker:
flume.stream = stream
assert len(feed.mods) == len(feed.portals) == len(feed.streams)
assert (
len(feed.mods)
==
len(feed.portals)
==
len(feed.streams)
)
yield feed

View File

@ -36,41 +36,21 @@ from ._sharedmem import (
ShmArray,
_Token,
)
from piker.accounting import MktPair
if TYPE_CHECKING:
from ..accounting import MktPair
from .feed import Feed
# TODO: ideas for further abstractions as per
# https://github.com/pikers/piker/issues/216 and
# https://github.com/pikers/piker/issues/270:
# - a ``Cascade`` would be the minimal "connection" of 2 ``Flumes``
# as per circuit parlance:
# https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
# - could cover the combination of our `FspAdmin` and the
# backend `.fsp._engine` related machinery to "connect" one flume
# to another?
# - a (financial signal) ``Flow`` would be the a "collection" of such
# minmial cascades. Some engineering based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
from piker.data.feed import Feed
class Flume(Struct):
'''
Composite reference type which points to all the addressing handles
and other meta-data necessary for the read, measure and management
of a set of real-time updated data flows.
Composite reference type which points to all the addressing
handles and other meta-data necessary for the read, measure and
management of a set of real-time updated data flows.
Can be thought of as a "flow descriptor" or "flow frame" which
describes the high level properties of a set of data flows that can
be used seamlessly across process-memory boundaries.
describes the high level properties of a set of data flows that
can be used seamlessly across process-memory boundaries.
Each instance's sub-components normally includes:
- a msg oriented quote stream provided via an IPC transport
@ -93,6 +73,7 @@ class Flume(Struct):
# private shm refs loaded dynamically from tokens
_hist_shm: ShmArray | None = None
_rt_shm: ShmArray | None = None
_readonly: bool = True
stream: tractor.MsgStream | None = None
izero_hist: int = 0
@ -101,7 +82,7 @@ class Flume(Struct):
# TODO: do we need this really if we can pull the `Portal` from
# ``tractor``'s internals?
feed: Feed | None = None
feed: Feed|None = None
@property
def rt_shm(self) -> ShmArray:
@ -109,7 +90,7 @@ class Flume(Struct):
if self._rt_shm is None:
self._rt_shm = attach_shm_array(
token=self._rt_shm_token,
readonly=True,
readonly=self._readonly,
)
return self._rt_shm
@ -122,12 +103,10 @@ class Flume(Struct):
'No shm token has been set for the history buffer?'
)
if (
self._hist_shm is None
):
if self._hist_shm is None:
self._hist_shm = attach_shm_array(
token=self._hist_shm_token,
readonly=True,
readonly=self._readonly,
)
return self._hist_shm
@ -146,10 +125,10 @@ class Flume(Struct):
period and ratio between them.
'''
times = self.hist_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s = (end - start).seconds
times: np.ndarray = self.hist_shm.array['time']
end: pendulum.DateTime = pendulum.from_timestamp(times[-1])
start: pendulum.DateTime = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s: float = (end - start).seconds
times = self.rt_shm.array['time']
end = pendulum.from_timestamp(times[-1])
@ -169,17 +148,25 @@ class Flume(Struct):
msg = self.to_dict()
msg['mkt'] = self.mkt.to_dict()
# can't serialize the stream or feed objects, it's expected
# you'll have a ref to it since this msg should be rxed on
# a stream on whatever far end IPC..
# NOTE: pop all un-msg-serializable fields:
# - `tractor.MsgStream`
# - `Feed`
# - `ShmArray`
# it's expected the `.from_msg()` on the other side
# will get instead some kind of msg-compat version
# that it can load.
msg.pop('stream')
msg.pop('feed')
msg.pop('_rt_shm')
msg.pop('_hist_shm')
return msg
@classmethod
def from_msg(
cls,
msg: dict,
readonly: bool = True,
) -> dict:
'''
@ -190,7 +177,11 @@ class Flume(Struct):
mkt_msg = msg.pop('mkt')
from ..accounting import MktPair # cycle otherwise..
mkt = MktPair.from_msg(mkt_msg)
return cls(mkt=mkt, **msg)
msg |= {'_readonly': readonly}
return cls(
mkt=mkt,
**msg,
)
def get_index(
self,

View File

@ -1,982 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Historical data business logic for load, backfill and tsdb storage.
'''
from __future__ import annotations
# from collections import (
# Counter,
# )
from datetime import datetime
from functools import partial
# import time
from types import ModuleType
from typing import (
Callable,
TYPE_CHECKING,
)
import trio
from trio_typing import TaskStatus
import tractor
from pendulum import (
Duration,
from_timestamp,
)
import numpy as np
from ..accounting import (
MktPair,
)
from ._util import (
log,
)
from ._sharedmem import (
maybe_open_shm_array,
ShmArray,
)
from ._source import def_iohlcv_fields
from ._sampling import (
open_sample_stream,
)
from ..brokers._util import (
DataUnavailable,
)
if TYPE_CHECKING:
from bidict import bidict
from ..service.marketstore import StorageClient
from .feed import _FeedsBus
# `ShmArray` buffer sizing configuration:
_mins_in_day = int(60 * 24)
# how much is probably dependent on lifestyle
# but we reco a buncha times (but only on a
# run-every-other-day kinda week).
_secs_in_day = int(60 * _mins_in_day)
_days_in_week: int = 7
_days_worth: int = 3
_default_hist_size: int = 6 * 365 * _mins_in_day
_hist_buffer_start = int(
_default_hist_size - round(7 * _mins_in_day)
)
_default_rt_size: int = _days_worth * _secs_in_day
# NOTE: start the append index in rt buffer such that 1 day's worth
# can be appended before overrun.
_rt_buffer_start = int((_days_worth - 1) * _secs_in_day)
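For concreteness, the sizing constants above work out to:

_mins_in_day = 60 * 24                        # 1,440
_secs_in_day = 60 * _mins_in_day              # 86,400
_default_hist_size = 6 * 365 * _mins_in_day   # 3,153,600 1m rows (~6y)
_hist_buffer_start = _default_hist_size - 7 * _mins_in_day  # 3,143,520
_default_rt_size = 3 * _secs_in_day           # 259,200 1s rows (3 days)
_rt_buffer_start = 2 * _secs_in_day           # 172,800 -> 1 day of append room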
def diff_history(
array: np.ndarray,
append_until_dt: datetime | None = None,
prepend_until_dt: datetime | None = None,
) -> np.ndarray:
# no diffing with tsdb dt index possible..
if (
prepend_until_dt is None
and append_until_dt is None
):
return array
times = array['time']
if append_until_dt:
return array[times < append_until_dt.timestamp()]
else:
return array[times >= prepend_until_dt.timestamp()]
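A quick numpy illustration of the slice `diff_history()` keeps (toy frame, not real feed data):

import numpy as np
from datetime import datetime, timezone

frame = np.array(
    [(float(t), 1.0) for t in range(100, 110)],
    dtype=[('time', 'f8'), ('close', 'f8')],
)
cutoff = datetime.fromtimestamp(105, tz=timezone.utc)
# append case: keep only rows strictly older than the cutoff datum
older = frame[frame['time'] < cutoff.timestamp()]
assert older['time'].max() == 104.0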
async def shm_push_in_between(
shm: ShmArray,
to_push: np.ndarray,
prepend_index: int,
update_start_on_prepend: bool = False,
) -> int:
shm.push(
to_push,
prepend=True,
# XXX: only update the ._first index if no tsdb
# segment was previously prepended by the
# parent task.
update_first=update_start_on_prepend,
# XXX: only prepend from a manually calculated shm
# index if there was already a tsdb history
# segment prepended (since then the
# ._first.value is going to be wayyy in the
# past!)
start=(
prepend_index
if not update_start_on_prepend
else None
),
)
# XXX: extremely important, there can be no checkpoints
# in the block above to avoid entering new ``frames``
# values while we're pipelining the current ones to
# memory...
array = shm.array
zeros = array[array['low'] == 0]
# always backfill gaps with the earliest (price) datum's
# value to avoid the y-ranger including zeros and completely
# stretching the y-axis..
if 0 < zeros.size:
zeros[[
'open',
'high',
'low',
'close',
]] = shm._array[zeros['index'][0] - 1]['close']
# await tractor.pause()
async def start_backfill(
get_hist,
mod: ModuleType,
mkt: MktPair,
shm: ShmArray,
timeframe: float,
backfill_from_shm_index: int,
backfill_from_dt: datetime,
sampler_stream: tractor.MsgStream,
backfill_until_dt: datetime | None = None,
storage: StorageClient | None = None,
write_tsdb: bool = True,
task_status: TaskStatus[tuple] = trio.TASK_STATUS_IGNORED,
) -> int:
# let caller unblock and deliver latest history frame
# and use to signal that backfilling the shm gap until
# the tsdb end is complete!
bf_done = trio.Event()
task_status.started(bf_done)
# based on the sample step size, maybe load a certain amount history
update_start_on_prepend: bool = False
if backfill_until_dt is None:
# TODO: drop this right and just expose the backfill
# limits inside a [storage] section in conf.toml?
# when no tsdb "last datum" is provided, we just load
# some near-term history.
# periods = {
# 1: {'days': 1},
# 60: {'days': 14},
# }
# do a decently sized backfill and load it into storage.
periods = {
1: {'days': 6},
60: {'years': 6},
}
period_duration: dict[str, int] = periods[timeframe]
update_start_on_prepend = True
# NOTE: manually set the "latest" datetime which we intend to
# backfill history "until" so as to adhere to the history
# settings above when the tsdb is detected as being empty.
backfill_until_dt = backfill_from_dt.subtract(**period_duration)
# TODO: can we drop this? without conc i don't think this
# is necessary any more?
# configure async query throttling
# rate = config.get('rate', 1)
# XXX: legacy from ``trimeter`` code but unsupported now.
# erlangs = config.get('erlangs', 1)
# avoid duplicate history frames with a set of datetime frame
# starts and associated counts of how many duplicates we see
# per time stamp.
# starts: Counter[datetime] = Counter()
# conduct "backward history gap filling" where we push to
# the shm buffer until we have history back until the
# latest entry loaded from the tsdb's table B)
last_start_dt: datetime = backfill_from_dt
next_prepend_index: int = backfill_from_shm_index
while last_start_dt > backfill_until_dt:
log.debug(
f'Requesting {timeframe}s frame ending in {last_start_dt}'
)
try:
(
array,
next_start_dt,
next_end_dt,
) = await get_hist(
timeframe,
end_dt=last_start_dt,
)
# broker says there never was or is no more history to pull
except DataUnavailable:
log.warning(
f'NO-MORE-DATA: backend {mod.name} halted history!?'
)
# ugh, what's a better way?
# TODO: fwiw, we probably want a way to signal a throttle
# condition (eg. with ib) so that we can halt the
# request loop until the condition is resolved?
return
# TODO: drop this? see todo above..
# if (
# next_start_dt in starts
# and starts[next_start_dt] <= 6
# ):
# start_dt = min(starts)
# log.warning(
# f"{mkt.fqme}: skipping duplicate frame @ {next_start_dt}"
# )
# starts[start_dt] += 1
# await tractor.pause()
# continue
# elif starts[next_start_dt] > 6:
# log.warning(
# f'NO-MORE-DATA: backend {mod.name} before {next_start_dt}?'
# )
# return
# # only update new start point if not-yet-seen
# starts[next_start_dt] += 1
assert array['time'][0] == next_start_dt.timestamp()
diff = last_start_dt - next_start_dt
frame_time_diff_s = diff.seconds
# frame's worth of sample-period-steps, in seconds
frame_size_s = len(array) * timeframe
expected_frame_size_s = frame_size_s + timeframe
if frame_time_diff_s > expected_frame_size_s:
# XXX: query result includes a start point prior to our
# expected "frame size" and thus is likely some kind of
# history gap (eg. market closed period, outage, etc.)
# so just report it to console for now.
log.warning(
f'History frame ending @ {last_start_dt} appears to have a gap:\n'
f'{diff} ~= {frame_time_diff_s} seconds'
)
to_push = diff_history(
array,
prepend_until_dt=backfill_until_dt,
)
ln = len(to_push)
if ln:
log.info(f'{ln} bars for {next_start_dt} -> {last_start_dt}')
else:
log.warning(
'0 BARS TO PUSH after diff!?\n'
f'{next_start_dt} -> {last_start_dt}'
)
# bail gracefully on shm allocation overrun/full
# condition
try:
await shm_push_in_between(
shm,
to_push,
prepend_index=next_prepend_index,
update_start_on_prepend=update_start_on_prepend,
)
await sampler_stream.send({
'broadcast_all': {
'backfilling': (mkt.fqme, timeframe),
},
})
# decrement next prepend point
next_prepend_index = next_prepend_index - ln
last_start_dt = next_start_dt
except ValueError as ve:
_ve = ve
log.error(
f'Shm prepend OVERRUN on: {next_start_dt} -> {last_start_dt}?'
)
if next_prepend_index < ln:
log.warning(
f'Shm buffer can only hold {next_prepend_index} more rows..\n'
f'Appending those from recent {ln}-sized frame, no more!'
)
to_push = to_push[-next_prepend_index + 1:]
await shm_push_in_between(
shm,
to_push,
prepend_index=next_prepend_index,
update_start_on_prepend=update_start_on_prepend,
)
await sampler_stream.send({
'broadcast_all': {
'backfilling': (mkt.fqme, timeframe),
},
})
# can't push the entire frame? so
# push only the amount that can fit..
break
log.info(
f'Shm pushed {ln} frame:\n'
f'{next_start_dt} -> {last_start_dt}'
)
# FINALLY, maybe write immediately to the tsdb backend for
# long-term storage.
if (
storage is not None
and write_tsdb
):
log.info(
f'Writing {ln} frame to storage:\n'
f'{next_start_dt} -> {last_start_dt}'
)
# always drop the src asset token for
# non-currency-pair like market types (for now)
if mkt.dst.atype not in {
'crypto',
'crypto_currency',
'fiat', # a "forex pair"
}:
# for now, our table key schema is not including
# the dst[/src] source asset token.
col_sym_key: str = mkt.get_fqme(
delim_char='',
without_src=True,
)
else:
col_sym_key: str = mkt.get_fqme(delim_char='')
# TODO: implement parquet append!?
await storage.write_ohlcv(
col_sym_key,
shm.array,
timeframe,
)
else:
# finally filled gap
log.info(
f'Finished filling gap to tsdb start @ {backfill_until_dt}!'
)
# conduct tsdb timestamp gap detection and backfill any
# seemingly missing sequence segments..
# TODO: ideally these never exist but somehow it seems
# sometimes we're writing zero-ed segments on certain
# (teardown) cases?
from ._timeseries import detect_null_time_gap
gap_indices: tuple | None = detect_null_time_gap(shm)
while gap_indices:
(
istart,
start,
end,
iend,
) = gap_indices
start_dt = from_timestamp(start)
end_dt = from_timestamp(end)
(
array,
next_start_dt,
next_end_dt,
) = await get_hist(
timeframe,
start_dt=start_dt,
end_dt=end_dt,
)
# XXX TODO: pretty sure if i plot tsla, btcusdt.binance
# and mnq.cme.ib this causes a Qt crash XXDDD
# make sure we don't overrun the buffer start
len_to_push: int = min(iend, array.size)
to_push: np.ndarray = array[-len_to_push:]
await shm_push_in_between(
shm,
to_push,
prepend_index=iend,
update_start_on_prepend=False,
)
# TODO: UI side needs IPC event to update..
# - make sure the UI actually always handles
# this update!
# - remember that in the display side, only refresh this
# if the respective history is actually "in view".
# loop
await sampler_stream.send({
'broadcast_all': {
'backfilling': (mkt.fqme, timeframe),
},
})
gap_indices: tuple | None = detect_null_time_gap(shm)
# XXX: extremely important, there can be no checkpoints
# in the block above to avoid entering new ``frames``
# values while we're pipelining the current ones to
# memory...
# await sampler_stream.send('broadcast_all')
# short-circuit (for now)
bf_done.set()
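The `next_prepend_index` bookkeeping above amounts to stepping a write cursor backwards through the flat buffer one frame at a time until either the gap is filled or the buffer start is hit. A toy model of just the index arithmetic (assumed semantics, no shm involved):

next_prepend_index = 300  # earliest valid row so far
for frame_len in (100, 100, 150):
    if next_prepend_index < frame_len:
        # overrun branch: push only the rows that still fit
        frame_len = next_prepend_index
    start = next_prepend_index - frame_len
    # buf[start:next_prepend_index] = frame   <- the shm.push() above
    next_prepend_index = start
    if next_prepend_index == 0:
        break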
async def back_load_from_tsdb(
storemod: ModuleType,
storage: StorageClient,
fqme: str,
tsdb_history: np.ndarray,
last_tsdb_dt: datetime,
latest_start_dt: datetime,
latest_end_dt: datetime,
bf_done: trio.Event,
timeframe: int,
shm: ShmArray,
):
assert len(tsdb_history)
# sync to backend history task's query/load completion
# if bf_done:
# await bf_done.wait()
# TODO: eventually it'd be nice to not require a shm array/buffer
# to accomplish this.. maybe we can do some kind of tsdb direct to
# graphics format eventually in a child-actor?
if storemod.name == 'nativedb':
return
await tractor.pause()
assert shm._first.value == 0
array = shm.array
# if timeframe == 1:
# times = shm.array['time']
# assert (times[1] - times[0]) == 1
if len(array):
shm_last_dt = from_timestamp(
shm.array[0]['time']
)
else:
shm_last_dt = None
if last_tsdb_dt:
assert shm_last_dt >= last_tsdb_dt
# do diff against start index of last frame of history and only
# fill in the amount of datums from the tsdb that allows the most
# recent history to be loaded into mem *before* the tsdb data.
if (
last_tsdb_dt
and latest_start_dt
):
backfilled_size_s = (
latest_start_dt - last_tsdb_dt
).seconds
# if the shm buffer len is not large enough to contain
# all missing data between the most recent backend-queried frame
# and the most recent dt-index in the db we warn that we only
# want to load a portion of the next tsdb query to fill that
# space.
log.info(
f'{backfilled_size_s} seconds worth of {timeframe}s loaded'
)
# Load TSDB history into shm buffer (for display) if there is
# remaining buffer space.
time_key: str = 'time'
if getattr(storemod, 'ohlc_key_map', False):
keymap: bidict = storemod.ohlc_key_map
time_key: str = keymap.inverse['time']
# if (
# not len(tsdb_history)
# ):
# return
tsdb_last_frame_start: datetime = last_tsdb_dt
# load as much from storage into shm as possible (depends on
# user's shm size settings).
while shm._first.value > 0:
tsdb_history = await storage.read_ohlcv(
fqme,
timeframe=timeframe,
end=tsdb_last_frame_start,
)
# # empty query
# if not len(tsdb_history):
# break
next_start = tsdb_history[time_key][0]
if next_start >= tsdb_last_frame_start:
# no earlier data detected
break
else:
tsdb_last_frame_start = next_start
# TODO: see if there's faster multi-field reads:
# https://numpy.org/doc/stable/user/basics.rec.html#accessing-multiple-fields
# re-index with a `time` and index field
prepend_start = shm._first.value
to_push = tsdb_history[-prepend_start:]
shm.push(
to_push,
# insert the history pre a "days worth" of samples
# to leave some real-time buffer space at the end.
prepend=True,
# update_first=False,
# start=prepend_start,
field_map=storemod.ohlc_key_map,
)
log.info(f'Loaded {to_push.shape} datums from storage')
tsdb_last_frame_start = tsdb_history[time_key][0]
# manually trigger step update to update charts/fsps
# which need an incremental update.
# NOTE: the way this works is super duper
# un-intuitive right now:
# - the broadcaster fires a msg to the fsp subsystem.
# - fsp subsys then checks for a sample step diff and
# possibly recomputes prepended history.
# - the fsp then sends back to the parent actor
# (usually a chart showing graphics for said fsp)
# which tells the chart to conduct a manual full
# graphics loop cycle.
# await sampler_stream.send('broadcast_all')
async def tsdb_backfill(
mod: ModuleType,
storemod: ModuleType,
tn: trio.Nursery,
storage: StorageClient,
mkt: MktPair,
shm: ShmArray,
timeframe: float,
sampler_stream: tractor.MsgStream,
task_status: TaskStatus[
tuple[ShmArray, ShmArray]
] = trio.TASK_STATUS_IGNORED,
) -> None:
get_hist: Callable[
[int, datetime, datetime],
tuple[np.ndarray, str]
]
config: dict[str, int]
async with mod.open_history_client(
mkt,
) as (get_hist, config):
log.info(f'{mod} history client returned backfill config: {config}')
# get latest query's worth of history all the way
# back to what is recorded in the tsdb
try:
array, mr_start_dt, mr_end_dt = await get_hist(
timeframe,
end_dt=None,
)
# XXX: timeframe not supported for backend (since
# above exception type), terminate immediately since
# there's no backfilling possible.
except DataUnavailable:
task_status.started()
return
# TODO: fill in non-zero epoch time values ALWAYS!
# hist_shm._array['time'] = np.arange(
# start=
# NOTE: removed for now since it'll always break
# on the first 60s of the venue open..
# times: np.ndarray = array['time']
# # sample period step size in seconds
# step_size_s = (
# from_timestamp(times[-1])
# - from_timestamp(times[-2])
# ).seconds
# if step_size_s not in (1, 60):
# log.error(f'Last 2 sample period is off!? -> {step_size_s}')
# step_size_s = (
# from_timestamp(times[-2])
# - from_timestamp(times[-3])
# ).seconds
# NOTE: on the first history, most recent history
# frame we PREPEND from the current shm ._last index
# and thus a gap between the earliest datum loaded here
# and the latest loaded from the tsdb may exist!
log.info(f'Pushing {array.size} to shm!')
shm.push(
array,
prepend=True, # append on first frame
)
backfill_gap_from_shm_index: int = shm._first.value + 1
# tell parent task to continue
task_status.started()
# loads a (large) frame of data from the tsdb depending
# on the db's query size limit; our "nativedb" (using
# parquet) generally can load the entire history into mem
# but if not then below the remaining history can be lazy
# loaded?
fqme: str = mkt.fqme
tsdb_entry: tuple | None = await storage.load(
fqme,
timeframe=timeframe,
)
last_tsdb_dt: datetime | None = None
if tsdb_entry:
(
tsdb_history,
first_tsdb_dt,
last_tsdb_dt,
) = tsdb_entry
# calc the index from which the tsdb data should be
# prepended, presuming there is a gap between the
# latest frame (loaded/read above) and the latest
# sample loaded from the tsdb.
backfill_diff: Duration = mr_start_dt - last_tsdb_dt
offset_s: float = backfill_diff.in_seconds()
offset_samples: int = round(offset_s / timeframe)
# TODO: see if there's faster multi-field reads:
# https://numpy.org/doc/stable/user/basics.rec.html#accessing-multiple-fields
# re-index with a `time` and index field
prepend_start = shm._first.value - offset_samples + 1
# tsdb history is so far in the past we can't fit it in
# shm buffer space so simply don't load it!
if prepend_start > 0:
to_push = tsdb_history[-prepend_start:]
shm.push(
to_push,
# insert the history pre a "days worth" of samples
# to leave some real-time buffer space at the end.
prepend=True,
# update_first=False,
start=prepend_start,
field_map=storemod.ohlc_key_map,
)
log.info(f'Loaded {to_push.shape} datums from storage')
# TODO: maybe start history anal and load missing "history
# gaps" via backend..
if timeframe not in (1, 60):
raise ValueError(
'`piker` only needs to support 1m and 1s sampling '
'but ur api is trying to deliver a longer '
f'timeframe of {timeframe} seconds..\n'
'So yuh.. dun do dat brudder.'
)
# if there is a gap to backfill from the first
# history frame until the last datum loaded from the tsdb
# continue that now in the background
bf_done = await tn.start(
partial(
start_backfill,
get_hist,
mod,
mkt,
shm,
timeframe,
backfill_from_shm_index=backfill_gap_from_shm_index,
backfill_from_dt=mr_start_dt,
sampler_stream=sampler_stream,
backfill_until_dt=last_tsdb_dt,
storage=storage,
)
)
# if len(hist_shm.array) < 2:
# TODO: there's an edge case here to solve where if the last
# frame before market close (at least on ib) was pushed and
# there was only "1 new" row pushed from the first backfill
# query-iteration, then the sample step sizing calcs will
# break upstream from here since you can't diff without at
# least 2 steps... probably should also add logic to compute
# from the tsdb series and stash that somewhere as meta data
# on the shm buffer?.. who knows.
# backload any further data from tsdb (concurrently per
# timeframe) if not all data was able to be loaded (in memory)
# from the ``StorageClient.load()`` call above.
try:
await trio.sleep_forever()
finally:
return
# IF we need to continue backloading incrementally from the
# tsdb client..
tn.start_soon(
back_load_from_tsdb,
storemod,
storage,
fqme,
tsdb_history,
last_tsdb_dt,
mr_start_dt,
mr_end_dt,
bf_done,
timeframe,
shm,
)
async def manage_history(
mod: ModuleType,
bus: _FeedsBus,
mkt: MktPair,
some_data_ready: trio.Event,
feed_is_live: trio.Event,
timeframe: float = 60, # in seconds
task_status: TaskStatus[
tuple[ShmArray, ShmArray]
] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Load and manage historical data including the loading of any
available series from any connected tsdb as well as conduct
real-time update of both that existing db and the allocated
shared memory buffer.
Init sequence:
- allocate shm (numpy array) buffers for 60s & 1s sample rates
- configure "zero index" for each buffer: the index where
history will be prepended *to* and new live data will be
appended *from*.
- open a ``.storage.StorageClient`` and load any existing tsdb
history as well as (async) start a backfill task which loads
missing (newer) history from the data provider backend:
- tsdb history is loaded first and pushed to shm ASAP.
- the backfill task loads the most recent history before
unblocking its parent task, so that the `ShmArray._last` is
up to date to allow the OHLC sampler to begin writing new
samples as the correct buffer index once the provider feed
engages.
'''
# TODO: is there a way to make each shm file key
# actor-tree-discovery-addr unique so we avoid collisions
# when doing tests which also allocate shms for certain instruments
# that may be in use on the system by some other running daemons?
# from tractor._state import _runtime_vars
# port = _runtime_vars['_root_mailbox'][1]
uid: tuple = tractor.current_actor().uid
name, uuid = uid
service: str = name.rstrip(f'.{mod.name}')
fqme: str = mkt.get_fqme(delim_char='')
# (maybe) allocate shm array for this broker/symbol which will
# be used for fast near-term history capture and processing.
hist_shm, opened = maybe_open_shm_array(
size=_default_hist_size,
append_start_index=_hist_buffer_start,
key=f'piker.{service}[{uuid[:16]}].{fqme}.hist',
# use any broker defined ohlc dtype:
dtype=getattr(mod, '_ohlc_dtype', def_iohlcv_fields),
# we expect the sub-actor to write
readonly=False,
)
hist_zero_index = hist_shm.index - 1
# TODO: history validation
if not opened:
raise RuntimeError(
"Persistent shm for sym was already open?!"
)
rt_shm, opened = maybe_open_shm_array(
size=_default_rt_size,
append_start_index=_rt_buffer_start,
key=f'piker.{service}[{uuid[:16]}].{fqme}.rt',
# use any broker defined ohlc dtype:
dtype=getattr(mod, '_ohlc_dtype', def_iohlcv_fields),
# we expect the sub-actor to write
readonly=False,
)
# (for now) set the rt (hft) shm array with space to prepend
# only a few days worth of 1s history.
days: int = 2
start_index: int = days*_secs_in_day
rt_shm._first.value = start_index
rt_shm._last.value = start_index
rt_zero_index = rt_shm.index - 1
if not opened:
raise RuntimeError(
"Persistent shm for sym was already open?!"
)
open_history_client = getattr(
mod,
'open_history_client',
)
assert open_history_client
# TODO: maybe it should be a subpkg of `.data`?
from piker import storage
async with (
storage.open_storage_client() as (storemod, client),
trio.open_nursery() as tn,
):
log.info(
f'Connecting to storage backend `{storemod.name}`:\n'
f'location: {client.address}\n'
f'db cardinality: {client.cardinality}\n'
# TODO: show backend config, eg:
# - network settings
# - storage size with compression
# - number of loaded time series?
)
# NOTE: this call ONLY UNBLOCKS once the latest-most frame
# (i.e. history just before the live feed latest datum) of
# history has been loaded and written to the shm buffer:
# - the backfiller task can write in reverse chronological
# to the shm and tsdb
# - the tsdb data can be loaded immediately and the
# backfiller can do a single append from its end datum and
# then prepends backward to that from the current time
# step.
tf2mem: dict = {
1: rt_shm,
60: hist_shm,
}
async with open_sample_stream(
period_s=1.,
shms_by_period={
1.: rt_shm.token,
60.: hist_shm.token,
},
# NOTE: we want to only open a stream for doing
# broadcasts on backfill operations, not receive the
# sample index-stream (since there's no code in this
# data feed layer that needs to consume it).
open_index_stream=True,
sub_for_broadcasts=False,
) as sample_stream:
# register 1s and 1m buffers with the global incrementer task
log.info(f'Connected to sampler stream: {sample_stream}')
for timeframe in [60, 1]:
await tn.start(
tsdb_backfill,
mod,
storemod,
tn,
# bus,
client,
mkt,
tf2mem[timeframe],
timeframe,
sample_stream,
)
# indicate to caller that feed can be delivered to
# remote requesting client since we've loaded history
# data that can be used.
some_data_ready.set()
# wait for a live feed before starting the sampler.
await feed_is_live.wait()
# yield back after client connect with filled shm
task_status.started((
hist_zero_index,
hist_shm,
rt_zero_index,
rt_shm,
))
# history retrieval loop depending on user interaction
# and thus a small RPC-prot for remotely controlling
# what data is loaded for viewing.
await trio.sleep_forever()
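Both `tsdb_backfill()` and `manage_history()` lean on `trio`'s startup-handshake idiom: the child task hands values back via `task_status.started()` and then stays alive to keep servicing the buffers. A self-contained sketch of that pattern:

import trio

async def loader(
    task_status=trio.TASK_STATUS_IGNORED,
) -> None:
    # ... allocate shm, push the latest history frame ...
    task_status.started(('hist-shm', 'rt-shm'))  # unblock the parent
    await trio.sleep_forever()  # keep backfilling/serving

async def main():
    async with trio.open_nursery() as tn:
        handles = await tn.start(loader)
        assert handles == ('hist-shm', 'rt-shm')
        tn.cancel_scope.cancel()  # demo teardown

trio.run(main)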

View File

@ -113,9 +113,9 @@ def validate_backend(
)
if ep is None:
log.warning(
f'Provider backend {mod.name} is missing '
f'{daemon_name} support :(\n'
f'The following endpoint is missing: {name}'
f'Provider backend {mod.name!r} is missing '
f'{daemon_name!r} support?\n'
f'|_module endpoint-func missing: {name!r}\n'
)
inits: list[

View File

@ -26,7 +26,10 @@ from ._api import (
maybe_mk_fsp_shm,
Fsp,
)
from ._engine import cascade
from ._engine import (
cascade,
Cascade,
)
from ._volume import (
dolla_vlm,
flow_rates,
@ -35,6 +38,7 @@ from ._volume import (
__all__: list[str] = [
'cascade',
'Cascade',
'maybe_mk_fsp_shm',
'Fsp',
'dolla_vlm',
@ -46,9 +50,12 @@ __all__: list[str] = [
async def latency(
source: 'TickStream[Dict[str, float]]', # noqa
ohlcv: np.ndarray
) -> AsyncIterator[np.ndarray]:
"""Latency measurements, broker to piker.
"""
'''
Latency measurements, broker to piker.
'''
# TODO: do we want to offer yielding this async
# before the rt data connection comes up?

View File

@ -18,13 +18,12 @@
core task logic for processing chains
'''
from dataclasses import dataclass
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from functools import partial
from typing import (
AsyncIterator,
Callable,
Optional,
Union,
)
import numpy as np
@ -33,9 +32,9 @@ from trio_typing import TaskStatus
import tractor
from tractor.msg import NamespacePath
from piker.types import Struct
from ..log import get_logger, get_console_log
from .. import data
from ..data import attach_shm_array
from ..data.feed import (
Flume,
Feed,
@ -56,12 +55,6 @@ from ..toolz import Profiler
log = get_logger(__name__)
@dataclass
class TaskTracker:
complete: trio.Event
cs: trio.CancelScope
async def filter_quotes_by_sym(
sym: str,
@ -82,30 +75,168 @@ async def filter_quotes_by_sym(
if quote:
yield quote
# TODO: unifying the abstractions in this FSP subsys/layer:
# -[ ] move the `.data.flows.Flume` type into this
# module/subsys/pkg?
# -[ ] ideas for further abstractions as per
# - https://github.com/pikers/piker/issues/216,
# - https://github.com/pikers/piker/issues/270:
# - a (financial signal) ``Flow`` would be a "collection" of such
# minimal cascades. Some engineering based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
# - https://en.wikipedia.org/wiki/Signal-flow_graph
# - https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
async def fsp_compute(
# -[ ] we probably want to eval THE BELOW design and unify with the
# proto `TaskManager` in the `tractor` dev branch as well as with
# our below idea for `Cascade`:
# - https://github.com/goodboy/tractor/pull/363
class Cascade(Struct):
'''
As per sig-proc engineering parlance, this is a chaining of
`Flume`s, which are themselves collections of "Streams"
implemented currently via `ShmArray`s.
A `Cascade` is the minimal "connection" of 2 `Flume`s
as per circuit parlance:
https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
TODO:
-[ ] could cover the combination of our `FspAdmin` and the
backend `.fsp._engine` related machinery to "connect" one flume
to another?
'''
# TODO: make these `Flume`s
src: Flume
dst: Flume
tn: trio.Nursery
fsp: Fsp # UI-side middleware ctl API
# filled during cascade/.bind_func() (fsp_compute) init phases
bind_func: Callable | None = None
complete: trio.Event | None = None
cs: trio.CancelScope | None = None
client_stream: tractor.MsgStream | None = None
async def resync(self) -> int:
# TODO: adopt an incremental update engine/approach
# where possible here eventually!
log.info(f're-syncing fsp {self.fsp.name} to source')
self.cs.cancel()
await self.complete.wait()
index: int = await self.tn.start(self.bind_func)
# always trigger UI refresh after history update,
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
# ``piker.ui._display.trigger_update()``.
dst_shm: ShmArray = self.dst.rt_shm
await self.client_stream.send({
'fsp_update': {
'key': dst_shm.token,
'first': dst_shm._first.value,
'last': dst_shm._last.value,
}
})
return index
def is_synced(self) -> tuple[bool, int, int]:
'''
Predicate to determine if a destination FSP
output array is aligned to its source array.
'''
src_shm: ShmArray = self.src.rt_shm
dst_shm: ShmArray = self.dst.rt_shm
step_diff = src_shm.index - dst_shm.index
len_diff = abs(len(src_shm.array) - len(dst_shm.array))
synced: bool = not (
# the source is likely backfilling and we must
# sync history calculations
len_diff > 2
# we aren't step synced to the source and may be
# leading/lagging by a step
or step_diff > 1
or step_diff < 0
)
if not synced:
fsp: Fsp = self.fsp
log.warning(
'***DESYNCED FSP***\n'
f'{fsp.ns_path}@{src_shm.token}\n'
f'step_diff: {step_diff}\n'
f'len_diff: {len_diff}\n'
)
return (
synced,
step_diff,
len_diff,
)
async def poll_and_sync_to_step(self) -> int:
synced, step_diff, _ = self.is_synced()
while not synced:
await self.resync()
synced, step_diff, _ = self.is_synced()
return step_diff
@acm
async def open_edge(
self,
bind_func: Callable,
) -> int:
self.bind_func = bind_func
index = await self.tn.start(bind_func)
yield index
# TODO: what do we want on teardown/error?
# -[ ] dynamic reconnection after update?
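The `.is_synced()` predicate reduces to two tolerances: the dst may lag the src by at most one sample step, and their array lengths may differ by at most 2 rows. A stand-alone check of that truth table (thresholds copied from above):

def toy_is_synced(src_i, dst_i, src_len, dst_len) -> bool:
    step_diff: int = src_i - dst_i
    len_diff: int = abs(src_len - dst_len)
    return not (
        len_diff > 2       # source likely backfilling
        or step_diff > 1   # dst lagging by more than a step
        or step_diff < 0   # dst somehow *ahead* of src
    )

assert toy_is_synced(100, 100, 500, 500)
assert toy_is_synced(101, 100, 500, 499)      # one step behind: ok
assert not toy_is_synced(105, 100, 500, 495)  # desynced: resync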
async def connect_streams(
casc: Cascade,
mkt: MktPair,
flume: Flume,
quote_stream: trio.abc.ReceiveChannel,
src: Flume,
dst: Flume,
src: ShmArray,
dst: ShmArray,
func: Callable,
edge_func: Callable,
# attach_stream: bool = False,
task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Stream and, per-sample, compute and write the cascade of
2 `Flume`s/streams given some operating `edge_func`.
https://en.wikipedia.org/wiki/Signal-flow_graph#Basic_components
Not literally, but something like:
edge_func(Flume_in) -> Flume_out
'''
profiler = Profiler(
delayed=False,
disabled=True
)
fqme = mkt.fqme
out_stream = func(
# TODO: just pull it from src.mkt.fqme no?
# fqme: str = mkt.fqme
fqme: str = src.mkt.fqme
# TODO: dynamic introspection of what the underlying (vertex)
# function actually requires from input node (flumes) then
# deliver those inputs as part of a graph "compilation" step?
out_stream = edge_func(
# TODO: do we even need this if we do the feed api right?
# shouldn't a local stream do this before we get a handle
@ -113,20 +244,21 @@ async def fsp_compute(
# async itertools style?
filter_quotes_by_sym(fqme, quote_stream),
# XXX: currently the ``ohlcv`` arg
flume.rt_shm,
# XXX: currently the ``ohlcv`` arg, but we should allow
# (dynamic) requests for src flume (node) streams?
src.rt_shm,
)
# HISTORY COMPUTE PHASE
# conduct a single iteration of fsp with historical bars input
# and get historical output.
history_output: Union[
dict[str, np.ndarray], # multi-output case
np.ndarray, # single output case
]
history_output: (
dict[str, np.ndarray] # multi-output case
| np.ndarray, # single output case
)
history_output = await anext(out_stream)
func_name = func.__name__
func_name = edge_func.__name__
profiler(f'{func_name} generated history')
# build struct array with an 'index' field to push as history
@ -134,10 +266,12 @@ async def fsp_compute(
# TODO: push using a[['f0', 'f1', .., 'fn']] = .. syntax no?
# if the output array is multi-field then push
# each respective field.
fields = getattr(dst.array.dtype, 'fields', None).copy()
dst_shm: ShmArray = dst.rt_shm
fields = getattr(dst_shm.array.dtype, 'fields', None).copy()
fields.pop('index')
history_by_field: Optional[np.ndarray] = None
src_time = src.array['time']
history_by_field: np.ndarray | None = None
src_shm: ShmArray = src.rt_shm
src_time = src_shm.array['time']
if (
fields and
@ -156,7 +290,7 @@ async def fsp_compute(
if history_by_field is None:
if output is None:
length = len(src.array)
length = len(src_shm.array)
else:
length = len(output)
@ -165,7 +299,7 @@ async def fsp_compute(
# will be pushed to shm.
history_by_field = np.zeros(
length,
dtype=dst.array.dtype
dtype=dst_shm.array.dtype
)
if output is None:
@ -182,13 +316,13 @@ async def fsp_compute(
)
history_by_field = np.zeros(
len(history_output),
dtype=dst.array.dtype
dtype=dst_shm.array.dtype
)
history_by_field[func_name] = history_output
history_by_field['time'] = src_time[-len(history_by_field):]
history_output['time'] = src.array['time']
history_output['time'] = src_shm.array['time']
# TODO: XXX:
# THERE'S A BIG BUG HERE WITH THE `index` field since we're
@ -201,11 +335,11 @@ async def fsp_compute(
# is `index` aware such that historical data can be indexed
# relative to the true first datum? Not sure if this is sane
# for incremental compuations.
first = dst._first.value = src._first.value
first = dst_shm._first.value = src_shm._first.value
# TODO: can we use this `start` flag instead of the manual
# setting above?
index = dst.push(
index = dst_shm.push(
history_by_field,
start=first,
)
@ -216,12 +350,9 @@ async def fsp_compute(
# setup a respawn handle
with trio.CancelScope() as cs:
# TODO: might be better to just make a "restart" method where
# the target task is spawned implicitly and then the event is
# set via some higher level api? At that point we might as well
# be writing a one-cancels-one nursery though right?
tracker = TaskTracker(trio.Event(), cs)
task_status.started((tracker, index))
casc.cs = cs
casc.complete = trio.Event()
task_status.started(index)
profiler(f'{func_name} yield last index')
@ -235,12 +366,12 @@ async def fsp_compute(
log.debug(f"{func_name}: {processed}")
key, output = processed
# dst.array[-1][key] = output
dst.array[[key, 'time']][-1] = (
dst_shm.array[[key, 'time']][-1] = (
output,
# TODO: what about pushing ``time.time_ns()``
# in which case we'll need to round at the graphics
# processing / sampling layer?
src.array[-1]['time']
src_shm.array[-1]['time']
)
# NOTE: for now we aren't streaming this to the consumer
@ -252,7 +383,7 @@ async def fsp_compute(
# N-consumers who subscribe for the real-time output,
# which we'll likely want to implement using local-mem
# chans for the fan out?
# index = src.index
# index = src_shm.index
# if attach_stream:
# await client_stream.send(index)
@ -262,7 +393,7 @@ async def fsp_compute(
# log.info(f'FSP quote too fast: {hz}')
# last = time.time()
finally:
tracker.complete.set()
casc.complete.set()
@tractor.context
@ -273,15 +404,15 @@ async def cascade(
# data feed key
fqme: str,
src_shm_token: dict,
dst_shm_token: tuple[str, np.dtype],
# flume pair cascaded using an "edge function"
src_flume_addr: dict,
dst_flume_addr: dict,
ns_path: NamespacePath,
shm_registry: dict[str, _Token],
zero_on_step: bool = False,
loglevel: Optional[str] = None,
loglevel: str | None = None,
) -> None:
'''
@ -297,8 +428,14 @@ async def cascade(
if loglevel:
get_console_log(loglevel)
src = attach_shm_array(token=src_shm_token)
dst = attach_shm_array(readonly=False, token=dst_shm_token)
src: Flume = Flume.from_msg(src_flume_addr)
dst: Flume = Flume.from_msg(
dst_flume_addr,
readonly=False,
)
# src: ShmArray = attach_shm_array(token=src_shm_token)
# dst: ShmArray = attach_shm_array(readonly=False, token=dst_shm_token)
reg = _load_builtins()
lines = '\n'.join([f'{key.rpartition(":")[2]} => {key}' for key in reg])
@ -306,11 +443,11 @@ async def cascade(
f'Registered FSP set:\n{lines}'
)
# update actorlocal flows table which registers
# readonly "instances" of this fsp for symbol/source
# so that consumer fsps can look it up by source + fsp.
# TODO: ugh i hate this wind/unwind to list over the wire
# but not sure how else to do it.
# NOTE XXX: update actorlocal flows table which registers
# readonly "instances" of this fsp for symbol/source so that
# consumer fsps can look it up by source + fsp.
# TODO: ugh i hate this wind/unwind to list over the wire but
# not sure how else to do it.
for (token, fsp_name, dst_token) in shm_registry:
Fsp._flow_registry[(
_Token.from_msg(token),
@ -320,12 +457,15 @@ async def cascade(
fsp: Fsp = reg.get(
NamespacePath(ns_path)
)
func = fsp.func
func: Callable = fsp.func
if not func:
# TODO: assume it's a func target path
raise ValueError(f'Unknown fsp target: {ns_path}')
_fqme: str = src.mkt.fqme
assert _fqme == fqme
# open a data feed stream with requested broker
feed: Feed
async with data.feed.maybe_open_feed(
@ -339,177 +479,143 @@ async def cascade(
) as feed:
flume = feed.flumes[fqme]
mkt = flume.mkt
assert src.token == flume.rt_shm.token
flume: Flume = feed.flumes[fqme]
# XXX: can't do this since flume.feed will be set XD
# assert flume == src
assert flume.mkt == src.mkt
mkt: MktPair = flume.mkt
# NOTE: FOR NOW, sanity checks around the feed as being
# always the src flume (until we get to fancier/lengthier
# chains/graphs.
assert src.rt_shm.token == flume.rt_shm.token
# XXX: won't work bc the _hist_shm_token value will be
# list[list] after IPC..
# assert flume.to_msg() == src_flume_addr
profiler(f'{func}: feed up')
func_name = func.__name__
func_name: str = func.__name__
async with (
trio.open_nursery() as n,
tractor.trionics.collapse_eg(), # avoid multi-taskc tb in console
trio.open_nursery() as tn,
):
# TODO: might be better to just make a "restart" method where
# the target task is spawned implicitly and then the event is
# set via some higher level api? At that point we might as well
# be writing a one-cancels-one nursery though right?
casc = Cascade(
src,
dst,
tn,
fsp,
)
# TODO: this seems like it should be wrapped somewhere?
fsp_target = partial(
fsp_compute,
connect_streams,
casc=casc,
mkt=mkt,
flume=flume,
quote_stream=flume.stream,
# shm
# flumes and shm passthrough
src=src,
dst=dst,
# target
func=func
# chain function which takes src flume input(s)
# and renders dst flume output(s)
edge_func=func
)
async with casc.open_edge(
bind_func=fsp_target,
) as index:
# casc.bind_func = fsp_target
# index = await tn.start(fsp_target)
dst_shm: ShmArray = dst.rt_shm
src_shm: ShmArray = src.rt_shm
tracker, index = await n.start(fsp_target)
if zero_on_step:
last = dst.rt_shm.array[-1:]
zeroed = np.zeros(last.shape, dtype=last.dtype)
if zero_on_step:
last = dst.array[-1:]
zeroed = np.zeros(last.shape, dtype=last.dtype)
profiler(f'{func_name}: fsp up')
profiler(f'{func_name}: fsp up')
# sync to client-side actor
await ctx.started(index)
# sync client
await ctx.started(index)
# XXX: rt stream with client which we MUST
# open here (and keep it open) in order to make
# incremental "updates" as history prepends take
# place.
async with ctx.open_stream() as client_stream:
casc.client_stream: tractor.MsgStream = client_stream
# XXX: rt stream with client which we MUST
# open here (and keep it open) in order to make
# incremental "updates" as history prepends take
# place.
async with ctx.open_stream() as client_stream:
s, step, ld = casc.is_synced()
# TODO: these likely should all become
# methods of this ``TaskLifetime`` or wtv
# abstraction..
async def resync(
tracker: TaskTracker,
# detect sample period step for subscription to increment
# signal
times = src.rt_shm.array['time']
if len(times) > 1:
last_ts = times[-1]
delay_s: float = float(last_ts - times[times != last_ts][-1])
else:
# our default "HFT" sample rate.
delay_s: float = _default_delay_s
) -> tuple[TaskTracker, int]:
# TODO: adopt an incremental update engine/approach
# where possible here eventually!
log.info(f're-syncing fsp {func_name} to source')
tracker.cs.cancel()
await tracker.complete.wait()
tracker, index = await n.start(fsp_target)
# sub and increment the underlying shared memory buffer
# on every step msg received from the global `samplerd`
# service.
async with open_sample_stream(
float(delay_s)
) as istream:
# always trigger UI refresh after history update,
# see ``piker.ui._fsp.FspAdmin.open_chain()`` and
# ``piker.ui._display.trigger_update()``.
await client_stream.send({
'fsp_update': {
'key': dst_shm_token,
'first': dst._first.value,
'last': dst._last.value,
}
})
return tracker, index
profiler(f'{func_name}: sample stream up')
profiler.finish()
def is_synced(
src: ShmArray,
dst: ShmArray
) -> tuple[bool, int, int]:
'''
Predicate to determine if a destination FSP
output array is aligned to its source array.
async for i in istream:
# print(f'FSP incrementing {i}')
'''
step_diff = src.index - dst.index
len_diff = abs(len(src.array) - len(dst.array))
return not (
# the source is likely backfilling and we must
# sync history calculations
len_diff > 2
# respawn the compute task if the source
# array has been updated such that we compute
# new history from the (prepended) source.
synced, step_diff, _ = casc.is_synced()
if not synced:
step_diff: int = await casc.poll_and_sync_to_step()
# we aren't step synced to the source and may be
# leading/lagging by a step
or step_diff > 1
or step_diff < 0
), step_diff, len_diff
# skip adding a last bar since we should already
# be step aligned
if step_diff == 0:
continue
async def poll_and_sync_to_step(
tracker: TaskTracker,
src: ShmArray,
dst: ShmArray,
# read out last shm row, copy and write new row
array = dst_shm.array
) -> tuple[TaskTracker, int]:
# some metrics like vlm should be reset
# to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
synced, step_diff, _ = is_synced(src, dst)
while not synced:
tracker, index = await resync(tracker)
synced, step_diff, _ = is_synced(src, dst)
dst.rt_shm.push(last)
return tracker, step_diff
# sync with source buffer's time step
src_l2 = src_shm.array[-2:]
src_li, src_lt = src_l2[-1][['index', 'time']]
src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst_shm._array['time'][src_li] = src_lt
dst_shm._array['time'][src_2li] = src_2lt
s, step, ld = is_synced(src, dst)
# detect sample period step for subscription to increment
# signal
times = src.array['time']
if len(times) > 1:
last_ts = times[-1]
delay_s = float(last_ts - times[times != last_ts][-1])
else:
# our default "HFT" sample rate.
delay_s = _default_delay_s
# sub and increment the underlying shared memory buffer
# on every step msg received from the global `samplerd`
# service.
async with open_sample_stream(float(delay_s)) as istream:
profiler(f'{func_name}: sample stream up')
profiler.finish()
async for i in istream:
# print(f'FSP incrementing {i}')
# respawn the compute task if the source
# array has been updated such that we compute
# new history from the (prepended) source.
synced, step_diff, _ = is_synced(src, dst)
if not synced:
tracker, step_diff = await poll_and_sync_to_step(
tracker,
src,
dst,
)
# skip adding a last bar since we should already
# be step aligned
if step_diff == 0:
continue
# read out last shm row, copy and write new row
array = dst.array
# some metrics like vlm should be reset
# to zero every step.
if zero_on_step:
last = zeroed
else:
last = array[-1:].copy()
dst.push(last)
# sync with source buffer's time step
src_l2 = src.array[-2:]
src_li, src_lt = src_l2[-1][['index', 'time']]
src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst._array['time'][src_li] = src_lt
dst._array['time'][src_2li] = src_2lt
# last2 = dst.array[-2:]
# if (
# last2[-1]['index'] != src_li
# or last2[-2]['index'] != src_2li
# ):
# dstl2 = list(last2)
# srcl2 = list(src_l2)
# print(
# # f'{dst.token}\n'
# f'src: {srcl2}\n'
# f'dst: {dstl2}\n'
# )
# last2 = dst.array[-2:]
# if (
# last2[-1]['index'] != src_li
# or last2[-2]['index'] != src_2li
# ):
# dstl2 = list(last2)
# srcl2 = list(src_l2)
# print(
# # f'{dst.token}\n'
# f'src: {srcl2}\n'
# f'dst: {dstl2}\n'
# )
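Worth noting: the sample-period detection above (`times[times != last_ts][-1]`) deliberately picks the last *unique* timestamp so a duplicated final row (mid-write) can't yield a zero delta. In isolation:

import numpy as np

times = np.array([1.0, 2.0, 3.0, 3.0])  # tail stamp duplicated mid-write
last_ts = times[-1]
delay_s = float(last_ts - times[times != last_ts][-1])
assert delay_s == 1.0  # not 0.0, despite the duplicate tail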

View File

@ -19,6 +19,10 @@ Log like a forester!
"""
import logging
import json
import reprlib
from typing import (
Callable,
)
import tractor
from pygments import (
@ -84,3 +88,29 @@ def colorize_json(
# likeable styles: algol_nu, tango, monokai
formatters.TerminalTrueColorFormatter(style=style)
)
# TODO, eventually defer to the version in `modden` once
# it becomes a dep!
def mk_repr(
**repr_kws,
) -> Callable[[str], str]:
'''
Allocate and deliver a `reprlib.Repr` instance with provided input
settings using the std-lib's `reprlib` mod,
* https://docs.python.org/3/library/reprlib.html
------ Ex. ------
An up to 6-layer-nested `dict` as multi-line:
- https://stackoverflow.com/a/79102479
- https://docs.python.org/3/library/reprlib.html#reprlib.Repr.maxlevel
'''
def_kws: dict[str, int] = dict(
indent=2,
maxlevel=6, # recursion levels
maxstring=66, # match editor line-len limit
)
def_kws |= repr_kws
reprr = reprlib.Repr(**def_kws)
return reprr.repr
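Usage sketch (note the keyword-arg `reprlib.Repr(...)` ctor used above requires Python 3.12+):

# a truncating, multi-line repr for nested msg dicts
nested = {'a': {'b': {'c': list(range(50)), 'd': 'x' * 200}}}
repr_fn = mk_repr()     # defaults: indent=2, maxlevel=6, maxstring=66
print(repr_fn(nested))  # deep/long leaves are elided with '...'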

View File

@ -14,49 +14,45 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Actor-runtime service orchestration machinery.
'''
Actor runtime primitives and (distributed) service APIs for,
"""
from __future__ import annotations
- daemon-service mgmt: `_daemon` (i.e. low-level spawn and supervise machinery
for sub-actors like `brokerd`, `emsd`, `datad`, etc.)
from ._mngr import Services
from ._registry import ( # noqa
_tractor_kwargs,
_default_reg_addr,
_default_registry_host,
_default_registry_port,
open_registry,
find_service,
check_for_service,
- service-actor supervision (via `trio` tasks) API: `._mngr`
- discovery interface (via light wrapping around `tractor`'s built-in
prot): `._registry`
- `docker` cntr SC supervision for use with `trio`: `_ahab`
- wrappers for marketstore and elasticsearch dbs
=> TODO: maybe to (re)move elsewhere?
'''
from ._mngr import Services as Services
from ._registry import (
_tractor_kwargs as _tractor_kwargs,
_default_reg_addr as _default_reg_addr,
_default_registry_host as _default_registry_host,
_default_registry_port as _default_registry_port,
open_registry as open_registry,
find_service as find_service,
check_for_service as check_for_service,
)
from ._daemon import ( # noqa
maybe_spawn_daemon,
spawn_emsd,
maybe_open_emsd,
from ._daemon import (
maybe_spawn_daemon as maybe_spawn_daemon,
spawn_emsd as spawn_emsd,
maybe_open_emsd as maybe_open_emsd,
)
from ._actor_runtime import (
open_piker_runtime,
maybe_open_pikerd,
open_pikerd,
get_tractor_runtime_kwargs,
open_piker_runtime as open_piker_runtime,
maybe_open_pikerd as maybe_open_pikerd,
open_pikerd as open_pikerd,
get_runtime_vars as get_runtime_vars,
)
from ..brokers._daemon import (
spawn_brokerd,
maybe_spawn_brokerd,
spawn_brokerd as spawn_brokerd,
maybe_spawn_brokerd as maybe_spawn_brokerd,
)
__all__ = [
'check_for_service',
'Services',
'maybe_spawn_daemon',
'spawn_brokerd',
'maybe_spawn_brokerd',
'spawn_emsd',
'maybe_open_emsd',
'open_piker_runtime',
'maybe_open_pikerd',
'open_pikerd',
'get_tractor_runtime_kwargs',
]

View File

@ -45,7 +45,7 @@ from ._registry import ( # noqa
)
def get_tractor_runtime_kwargs() -> dict[str, Any]:
def get_runtime_vars() -> dict[str, Any]:
'''
Deliver ``tractor`` related runtime variables in a `dict`.
@ -56,6 +56,8 @@ def get_tractor_runtime_kwargs() -> dict[str, Any]:
@acm
async def open_piker_runtime(
name: str,
registry_addrs: list[tuple[str, int]] = [],
enable_modules: list[str] = [],
loglevel: Optional[str] = None,
@ -63,8 +65,6 @@ async def open_piker_runtime(
# for data daemons when running in production.
debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
# TODO: once we have `rsyscall` support we will read a config
# and spawn the service tree distributed per that.
start_method: str = 'trio',
@ -74,7 +74,7 @@ async def open_piker_runtime(
) -> tuple[
tractor.Actor,
tuple[str, int],
list[tuple[str, int]],
]:
'''
Start a piker actor whose runtime will automatically sync with
@ -84,50 +84,71 @@ async def open_piker_runtime(
a root actor.
'''
# check for existing runtime, boot it
# if not already running.
try:
# check for existing runtime
actor = tractor.current_actor().uid
actor = tractor.current_actor()
except tractor._exceptions.NoRuntime:
tractor._state._runtime_vars[
'piker_vars'] = tractor_runtime_overrides
'piker_vars'
] = tractor_runtime_overrides
registry_addr = registry_addr or _default_reg_addr
# NOTE: if no registrar list passed used the default of just
# setting it as the root actor on localhost.
registry_addrs = (
registry_addrs
or [_default_reg_addr]
)
if ems := tractor_kwargs.pop('enable_modules', None):
# import pdbp; pdbp.set_trace()
enable_modules.extend(ems)
async with (
tractor.open_root_actor(
# passed through to ``open_root_actor``
arbiter_addr=registry_addr,
# passed through to `open_root_actor`
registry_addrs=registry_addrs,
name=name,
start_method=start_method,
loglevel=loglevel,
debug_mode=debug_mode,
start_method=start_method,
# XXX NOTE MEMBER DAT der's a perf hit yo!!
# https://greenback.readthedocs.io/en/latest/principle.html#performance
maybe_enable_greenback=True,
# TODO: eventually we should be able to avoid
# having the root have more than permissions to
# spawn other specialized daemons I think?
enable_modules=enable_modules,
hide_tb=False,
**tractor_kwargs,
) as _,
) as actor,
open_registry(registry_addr, ensure_exists=False) as addr,
open_registry(
registry_addrs,
ensure_exists=False,
) as addrs,
):
yield (
tractor.current_actor(),
addr,
)
else:
async with open_registry(registry_addr) as addr:
assert actor is tractor.current_actor()
yield (
actor,
addr,
addrs,
)
else:
async with open_registry(
registry_addrs
) as addrs:
yield (
actor,
addrs,
)
_root_dname = 'pikerd'
_root_modules = [
_root_dname: str = 'pikerd'
_root_modules: list[str] = [
__name__,
'piker.service._daemon',
'piker.brokers._daemon',
@ -141,13 +162,13 @@ _root_modules = [
@acm
async def open_pikerd(
registry_addrs: list[tuple[str, int]],
loglevel: str | None = None,
# XXX: you should pretty much never want debug mode
# for data daemons when running in production.
debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
**kwargs,
@ -159,33 +180,44 @@ async def open_pikerd(
alive underling services (see below).
'''
# NOTE: for the root daemon we always enable the root
# mod set and we `list.extend()` it into wtv the
# caller requested.
# TODO: make this mod set more strict?
# -[ ] eventually we should be able to avoid
# having the root have more then permissions to spawn other
# specialized daemons I think?
ems: list[str] = kwargs.setdefault('enable_modules', [])
ems.extend(_root_modules)
async with (
open_piker_runtime(
name=_root_dname,
# TODO: eventually we should be able to avoid
# having the root have more than permissions to
# spawn other specialized daemons I think?
enable_modules=_root_modules,
loglevel=loglevel,
debug_mode=debug_mode,
registry_addr=registry_addr,
registry_addrs=registry_addrs,
**kwargs,
) as (root_actor, reg_addr),
) as (
root_actor,
reg_addrs,
),
tractor.open_nursery() as actor_nursery,
trio.open_nursery() as service_nursery,
tractor.trionics.collapse_eg(),
trio.open_nursery() as service_tn,
):
if root_actor.accept_addr != reg_addr:
raise RuntimeError(
f'`pikerd` failed to bind on {reg_addr}!\n'
'Maybe you have another daemon already running?'
)
for addr in reg_addrs:
if addr not in root_actor.accept_addrs:
raise RuntimeError(
f'`pikerd` failed to bind on {addr}!\n'
'Maybe you have another daemon already running?'
)
# assign globally for future daemon/task creation
Services.actor_n = actor_nursery
Services.service_n = service_nursery
Services.service_n = service_tn
Services.debug_mode = debug_mode
try:
@ -195,7 +227,7 @@ async def open_pikerd(
# TODO: is this more clever/efficient?
# if 'samplerd' in Services.service_tasks:
# await Services.cancel_service('samplerd')
service_nursery.cancel_scope.cancel()
service_tn.cancel_scope.cancel()
# TODO: do we even need this?
@ -225,12 +257,15 @@ async def open_pikerd(
@acm
async def maybe_open_pikerd(
loglevel: Optional[str] = None,
registry_addr: None | tuple = None,
registry_addrs: list[tuple[str, int]] | None = None,
loglevel: str | None = None,
**kwargs,
) -> tractor._portal.Portal | ClassVar[Services]:
) -> (
tractor._portal.Portal
| ClassVar[Services]
):
'''
If no ``pikerd`` daemon-root-actor can be found, start it and
yield up (we should probably figure out returning a portal to self
@ -253,32 +288,52 @@ async def maybe_open_pikerd(
# async with open_portal(chan) as arb_portal:
# yield arb_portal
registry_addrs: list[tuple[str, int]] = (
registry_addrs
or
[_default_reg_addr]
)
pikerd_portal: tractor.Portal|None
async with (
open_piker_runtime(
name=query_name,
registry_addr=registry_addr,
registry_addrs=registry_addrs,
loglevel=loglevel,
**kwargs,
) as _,
tractor.find_actor(
_root_dname,
arbiter_sockaddr=registry_addr,
) as portal
) as (actor, addrs),
):
# connect to any existing daemon presuming
# its registry socket was selected.
if (
portal is not None
):
yield portal
if _root_dname in actor.uid:
yield None
return
# NOTE: IFF running in disti mode, try to attach to any
# existing (host-local) `pikerd`.
else:
async with tractor.find_actor(
_root_dname,
registry_addrs=registry_addrs,
only_first=True,
# raise_on_none=True,
) as pikerd_portal:
# connect to any existing remote daemon presuming its
# registry socket was selected.
if pikerd_portal is not None:
# sanity check that we are actually connecting to
# a remote process and not ourselves.
assert actor.uid != pikerd_portal.channel.uid
assert registry_addrs
yield pikerd_portal
return
# presume pikerd role since no daemon could be found at
# configured address
async with open_pikerd(
loglevel=loglevel,
registry_addr=registry_addr,
registry_addrs=registry_addrs,
# passthrough to ``tractor`` init
**kwargs,
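A usage sketch of the refactored entrypoint, assuming the branches above (yield an existing daemon's portal, or `None` when this process takes the `pikerd` role):

import trio
from piker.service import maybe_open_pikerd

async def main():
    async with maybe_open_pikerd(
        registry_addrs=[('127.0.0.1', 6116)],
        loglevel='info',
    ) as portal:
        if portal is None:
            print('no remote found: this process became pikerd')
        else:
            print(f'attached to existing pikerd: {portal.channel.uid}')

# trio.run(main)  # requires a full piker install + runtime deps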

View File

@ -15,8 +15,8 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Supervisor for ``docker`` with included async and SC wrapping
to ensure a cancellable container lifetime system.
Supervisor for ``docker`` with included async and SC wrapping to
ensure a cancellable container lifetime system.
'''
from __future__ import annotations

View File

@ -28,6 +28,7 @@ from contextlib import (
)
import tractor
from trio.lowlevel import current_task
from ._util import (
log, # sub-sys logger
@ -70,66 +71,84 @@ async def maybe_spawn_daemon(
lock = Services.locks[service_name]
await lock.acquire()
async with find_service(service_name) as portal:
if portal is not None:
lock.release()
yield portal
return
try:
async with find_service(
service_name,
registry_addrs=[('127.0.0.1', 6116)],
) as portal:
if portal is not None:
lock.release()
yield portal
return
log.warning(
f"Couldn't find any existing {service_name}\n"
'Attempting to spawn new daemon-service..'
)
log.warning(
f"Couldn't find any existing {service_name}\n"
'Attempting to spawn new daemon-service..'
)
# ask the root ``pikerd`` daemon to spawn the daemon we need;
# if pikerd is not live we now become the root of the
# process tree
async with maybe_open_pikerd(
loglevel=loglevel,
**pikerd_kwargs,
# ask the root ``pikerd`` daemon to spawn the daemon we need;
# if pikerd is not live we now become the root of the
# process tree
async with maybe_open_pikerd(
loglevel=loglevel,
**pikerd_kwargs,
) as pikerd_portal:
) as pikerd_portal:
# we are the root and thus are `pikerd`
# so spawn the target service directly by calling
# the provided target routine.
# XXX: this assumes that the target is well formed and will
# do the right things to setup both a sub-actor **and** call
# the ``_Services`` api from above to start the top level
# service task for that actor.
started: bool
if pikerd_portal is None:
started = await service_task_target(
loglevel=loglevel,
**spawn_args,
# we are the root and thus are `pikerd`
# so spawn the target service directly by calling
# the provided target routine.
# XXX: this assumes that the target is well formed and will
# do the right things to setup both a sub-actor **and** call
# the ``_Services`` api from above to start the top level
# service task for that actor.
started: bool
if pikerd_portal is None:
started = await service_task_target(
loglevel=loglevel,
**spawn_args,
)
else:
# request a remote `pikerd` (service manager) to start the
# target daemon-task, the target can't return
# a non-serializable value since it is expected that service
# starting is non-blocking and the target task will persist
# running "under" or "within" the `pikerd` actor tree after
# the requesting client disconnects. in other words this
# spawns a persistent daemon actor that continues to live
# for the lifespan of whatever the service manager inside
# `pikerd` says it should.
started = await pikerd_portal.run(
service_task_target,
loglevel=loglevel,
**spawn_args,
)
if started:
log.info(f'Service {service_name} started!')
# block until we can discover (via an IPC connection) the newly
# spawned daemon-actor and then deliver the portal to the
# caller.
async with tractor.wait_for_actor(service_name) as portal:
lock.release()
yield portal
await portal.cancel_actor()
except BaseException as _err:
err = _err
if (
lock.locked()
and
lock.statistics().owner is current_task()
):
log.exception(
f'Releasing stale lock after crash..?'
f'{err!r}\n'
)
else:
# request a remote `pikerd` (service manager) to start the
# target daemon-task, the target can't return
# a non-serializable value since it is expected that service
# starting is non-blocking and the target task will persist
# running "under" or "within" the `pikerd` actor tree after
# the requesting client disconnects. in other words this
# spawns a persistent daemon actor that continues to live
# for the lifespan of whatever the service manager inside
# `pikerd` says it should.
started = await pikerd_portal.run(
service_task_target,
loglevel=loglevel,
**spawn_args,
)
if started:
log.info(f'Service {service_name} started!')
# block until we can discover (via an IPC connection) the newly
# spawned daemon-actor and then deliver the portal to the
# caller.
async with tractor.wait_for_actor(service_name) as portal:
lock.release()
yield portal
await portal.cancel_actor()
raise err
async def spawn_emsd(

View File

@ -27,6 +27,12 @@ from typing import (
import trio
from trio_typing import TaskStatus
import tractor
from tractor import (
current_actor,
ContextCancelled,
Context,
Portal,
)
from ._util import (
log, # sub-sys logger
@ -38,6 +44,8 @@ from ._util import (
# library.
# - wrap a "remote api" wherein you can get a method proxy
# to the pikerd actor for starting services remotely!
# - prolly rename this to ActorServicesNursery since it spawns
# new actors and supervises them to completion?
class Services:
actor_n: tractor._supervise.ActorNursery
@ -47,7 +55,7 @@ class Services:
str,
tuple[
trio.CancelScope,
tractor.Portal,
Portal,
trio.Event,
]
] = {}
@ -57,12 +65,12 @@ class Services:
async def start_service_task(
self,
name: str,
portal: tractor.Portal,
portal: Portal,
target: Callable,
allow_overruns: bool = False,
**ctx_kwargs,
) -> (trio.CancelScope, tractor.Context):
) -> (trio.CancelScope, Context):
'''
Open a context in a service sub-actor, add to a stack
that gets unwound at ``pikerd`` teardown.
@ -101,13 +109,30 @@ class Services:
# wait on any context's return value
# and any final portal result from the
# sub-actor.
ctx_res = await ctx.result()
ctx_res: Any = await ctx.wait_for_result()
# NOTE: blocks indefinitely until cancelled
# either by error from the target context
# function or by being cancelled here by the
# surrounding cancel scope.
return (await portal.result(), ctx_res)
except ContextCancelled as ctxe:
canceller: tuple[str, str] = ctxe.canceller
our_uid: tuple[str, str] = current_actor().uid
if (
canceller != portal.channel.uid
and
canceller != our_uid
):
log.cancel(
f'Actor-service {name} was remotely cancelled?\n'
f'remote canceller: {canceller}\n'
f'Keeping {our_uid} alive, ignoring sub-actor cancel..\n'
)
else:
raise
finally:
await portal.cancel_actor()

View File

@ -27,6 +27,7 @@ from typing import (
)
import tractor
from tractor import Portal
from ._util import (
log, # sub-sys logger
@ -46,7 +47,9 @@ _registry: Registry | None = None
class Registry:
addr: None | tuple[str, int] = None
# TODO: should this be a set or should we complain
# on duplicates?
addrs: list[tuple[str, int]] = []
# TODO: table of uids to sockaddrs
peers: dict[
@ -60,69 +63,134 @@ _tractor_kwargs: dict[str, Any] = {}
@acm
async def open_registry(
addr: None | tuple[str, int] = None,
addrs: list[tuple[str, int]],
ensure_exists: bool = True,
) -> tuple[str, int]:
) -> list[tuple[str, int]]:
'''
Open the service-actor-discovery registry by returning a set of
transport socket-addrs for registrar actors which may be
contacted and queried for the addresses of other
non-registrar actors.
'''
global _tractor_kwargs
actor = tractor.current_actor()
uid = actor.uid
preset_reg_addrs: list[tuple[str, int]] = Registry.addrs
if (
Registry.addr is not None
and addr
preset_reg_addrs
and addrs
):
raise RuntimeError(
f'`{uid}` registry addr already bound @ {_registry.sockaddr}'
)
if preset_reg_addrs != addrs:
# if any(addr in preset_reg_addrs for addr in addrs):
diff: set[tuple[str, int]] = set(preset_reg_addrs) - set(addrs)
if diff:
log.warning(
f'`{uid}` requested only a subset of registrars: {addrs}\n'
f'However there are more @{diff}'
)
else:
raise RuntimeError(
f'`{uid}` has non-matching registrar addresses?\n'
f'request: {addrs}\n'
f'already set: {preset_reg_addrs}'
)
was_set: bool = False
if (
not tractor.is_root_process()
and Registry.addr is None
and
not Registry.addrs
):
Registry.addr = actor._arb_addr
Registry.addrs.extend(actor.reg_addrs)
if (
ensure_exists
and Registry.addr is None
and
not Registry.addrs
):
raise RuntimeError(
f"`{uid}` registry should already exist bug doesn't?"
f"`{uid}` registry should already exist but doesn't?"
)
if (
Registry.addr is None
not Registry.addrs
):
was_set = True
Registry.addr = addr or _default_reg_addr
Registry.addrs = addrs or [_default_reg_addr]
_tractor_kwargs['arbiter_addr'] = Registry.addr
# NOTE: the only spot this seems currently used is inside
# `.ui._exec` which is the (eventual qtloops) bootstrapping
# with guest mode.
_tractor_kwargs['registry_addrs'] = Registry.addrs
try:
yield Registry.addr
yield Registry.addrs
finally:
# XXX: always clear the global addr if we set it so that the
# next (set of) calls will apply whatever new one is passed
# in.
if was_set:
Registry.addr = None
Registry.addrs = None
@acm
async def find_service(
service_name: str,
) -> tractor.Portal | None:
registry_addrs: list[tuple[str, int]] | None = None,
first_only: bool = True,
) -> (
Portal
| list[Portal]
| None
):
# try:
reg_addrs: list[tuple[str, int]]
async with open_registry(
addrs=(
registry_addrs
# NOTE: if no addr set is passed assume the registry has
# already been opened and use the previously applied
# startup set.
or Registry.addrs
),
) as reg_addrs:
log.info(
f'Scanning for service {service_name!r}'
)
async with open_registry() as reg_addr:
log.info(f'Scanning for service `{service_name}`')
# attach to existing daemon by name if possible
maybe_portals: list[Portal]|Portal|None
async with tractor.find_actor(
service_name,
arbiter_sockaddr=reg_addr,
) as maybe_portal:
yield maybe_portal
registry_addrs=reg_addrs,
only_first=first_only, # if set only returns single ref
) as maybe_portals:
if not maybe_portals:
# log.info(
print(
f'Could NOT find service {service_name!r} -> {maybe_portals!r}'
)
yield None
return
# log.info(
print(
f'Found service {service_name!r} -> {maybe_portals}'
)
yield maybe_portals
# except BaseException as _berr:
# berr = _berr
# log.exception(
# 'tractor.find_actor() failed with,\n'
# )
# raise berr
async def check_for_service(
@ -133,9 +201,11 @@ async def check_for_service(
Service daemon "liveness" predicate.
'''
async with open_registry(ensure_exists=False) as reg_addr:
async with tractor.query_actor(
async with (
open_registry(ensure_exists=False) as reg_addr,
tractor.query_actor(
service_name,
arbiter_sockaddr=reg_addr,
) as sockaddr:
return sockaddr
) as sockaddr,
):
return sockaddr

View File

@ -139,6 +139,13 @@ class StorageClient(
...
class TimeseriesNotFound(Exception):
'''
No timeseries entry can be found for this backend.
'''
class StorageConnectionError(ConnectionError):
'''
Can't connect to the desired tsdb subsys/service.
@ -169,10 +176,13 @@ async def open_storage_client(
tsdb_host: str = 'localhost'
# load root config and any tsdb user defined settings
conf, path = config.load('conf', touch_if_dne=True)
conf, path = config.load(
conf_name='conf',
touch_if_dne=True,
)
# TODO: maybe not under a "network" section.. since
# no more chitty mkts..
# no more chitty `marketstore`..
tsdbconf: dict = {}
service_section = conf.get('service')
if (
@ -183,8 +193,11 @@ async def open_storage_client(
# lookup backend tsdb module by name and load any user service
# settings for connecting to the tsdb service.
backend: str = tsdbconf.pop('backend')
tsdb_host: str = tsdbconf['host']
backend: str = tsdbconf.pop(
'name',
def_backend,
)
tsdb_host: str = tsdbconf.get('maddrs', [])
if backend is None:
backend: str = def_backend

View File

@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -19,10 +19,18 @@ Storage middle-ware CLIs.
"""
from __future__ import annotations
# from datetime import datetime
# from contextlib import (
# AsyncExitStack,
# )
from pathlib import Path
from math import copysign
import time
from typing import Generator
# from typing import TYPE_CHECKING
from types import ModuleType
from typing import (
Any,
TYPE_CHECKING,
)
import polars as pl
import numpy as np
@ -35,24 +43,21 @@ import typer
from piker.service import open_piker_runtime
from piker.cli import cli
from piker.config import get_conf_dir
from piker.data import (
maybe_open_shm_array,
def_iohlcv_fields,
ShmArray,
)
from piker.data.history import (
_default_hist_size,
_default_rt_size,
)
from . import (
log,
)
from piker import tsp
from piker.data._formatters import BGM
from . import log
from . import (
__tsdbs__,
open_storage_client,
StorageClient,
)
if TYPE_CHECKING:
from piker.ui._remote_ctl import AnnotCtl
store = typer.Typer()
@ -77,7 +82,6 @@ def ls(
async with (
open_piker_runtime(
'tsdb_storage',
enable_modules=['piker.service._ahab'],
),
):
for i, backend in enumerate(backends):
@ -99,6 +103,18 @@ def ls(
trio.run(query_all)
# TODO: like ls but takes in a pattern and matches
# @store.command()
# def search(
# patt: str,
# backends: list[str] = typer.Argument(
# default=None,
# help='Storage backends to query, default is all.'
# ),
# ):
# ...
@store.command()
def delete(
symbols: list[str],
@ -121,7 +137,6 @@ def delete(
async with (
open_piker_runtime(
'tsdb_storage',
enable_modules=['piker.service._ahab']
),
open_storage_client(backend) as (_, client),
trio.open_nursery() as n,
@ -142,21 +157,33 @@ def delete(
def anal(
fqme: str,
period: int = 60,
pdb: bool = False,
) -> np.ndarray:
'''
Anal-ysis is when you take the data and do stuff to it.
NOTE: This ONLY loads the offline timeseries data (by default
from a parquet file) NOT the in-shm version you might be seeing
in a chart.
'''
async def main():
async with (
open_piker_runtime(
# are you a bear or boi?
'tsdb_polars_anal',
# enable_modules=['piker.service._ahab']
debug_mode=True,
debug_mode=pdb,
),
open_storage_client() as (
mod,
client,
),
open_storage_client() as (mod, client),
):
syms: list[str] = await client.list_keys()
print(f'{len(syms)} FOUND for {mod.name}')
log.info(f'{len(syms)} FOUND for {mod.name}')
history: ShmArray # np buffer format
(
history,
first_dt,
@ -167,179 +194,357 @@ def anal(
)
assert first_dt < last_dt
src_df = await client.as_df(fqme, period)
from piker.data import _timeseries as tsmod
df: pl.DataFrame = tsmod.with_dts(src_df)
gaps: pl.DataFrame = tsmod.detect_time_gaps(df)
null_segs: tuple = tsp.get_null_segs(
frame=history,
period=period,
)
# TODO: do tsp queries to backend to fill in missing
# history and then prolly write it to tsdb!
if not gaps.is_empty():
print(f'Gaps found:\n{gaps}')
shm_df: pl.DataFrame = await client.as_df(
fqme,
period,
)
# TODO: something better with tab completion..
# is there something more minimal but nearly as
# functional as ipython?
await tractor.pause()
df: pl.DataFrame # with dts
deduped: pl.DataFrame # deduplicated dts
(
df,
deduped,
diff,
) = tsp.dedupe(
shm_df,
period=period,
)
write_edits: bool = True
if (
write_edits
and (
diff
or null_segs
)
):
await tractor.pause()
await client.write_ohlcv(
fqme,
ohlcv=deduped,
timeframe=period,
)
else:
# TODO: something better with tab completion..
# is there something more minimal but nearly as
# functional as ipython?
await tractor.pause()
assert not null_segs
trio.run(main)
def iter_dfs_from_shms(fqme: str) -> Generator[
tuple[Path, ShmArray, pl.DataFrame],
None,
None,
]:
# shm buffer size table based on known sample rates
sizes: dict[str, int] = {
'hist': _default_hist_size,
'rt': _default_rt_size,
}
async def markup_gaps(
fqme: str,
timeframe: float,
actl: AnnotCtl,
wdts: pl.DataFrame,
gaps: pl.DataFrame,
# load all detected shm buffer files which have the
# passed FQME pattern in the file name.
shmfiles: list[Path] = []
shmdir = Path('/dev/shm/')
) -> dict[int, dict]:
'''
Remotely annotate time-gaps in a dt-fielded ts (normally OHLC)
with rectangles.
for shmfile in shmdir.glob(f'*{fqme}*'):
filename: str = shmfile.name
'''
aids: dict[int] = {}
for i in range(gaps.height):
# skip index files
if (
'_first' in filename
or '_last' in filename
):
continue
row: pl.DataFrame = gaps[i]
assert shmfile.is_file()
log.debug(f'Found matching shm buffer file: {filename}')
shmfiles.append(shmfile)
# the gap's RIGHT-most bar's OPEN value
# at that time (sample) step.
iend: int = row['index'][0]
# dt: datetime = row['dt'][0]
# dt_prev: datetime = row['dt_prev'][0]
# dt_end_t: float = dt.timestamp()
for shmfile in shmfiles:
# lookup array buffer size based on file suffix
# being either .rt or .hist
key: str = shmfile.name.rsplit('.')[-1]
# TODO: can we eventually remove this
# once we figure out why the epoch cols
# don't match?
# TODO: FIX HOW/WHY these aren't matching
# and are instead off by 4hours (EST
# vs. UTC?!?!)
# end_t: float = row['time']
# assert (
# dt.timestamp()
# ==
# end_t
# )
# skip FSP buffers for now..
if key not in sizes:
continue
size: int = sizes[key]
# attach to any shm buffer, load array into polars df,
# write to local parquet file.
shm, opened = maybe_open_shm_array(
key=shmfile.name,
size=size,
dtype=def_iohlcv_fields,
readonly=True,
# the gap's LEFT-most bar's CLOSE value
# at that time (sample) step.
prev_r: pl.DataFrame = wdts.filter(
pl.col('index') == iend - 1
)
assert not opened
ohlcv = shm.array
# XXX: probably a gap in the (newly sorted or de-duplicated)
# dt-df, so we might need to re-index first..
if prev_r.is_empty():
await tractor.pause()
start = time.time()
istart: int = prev_r['index'][0]
# dt_start_t: float = dt_prev.timestamp()
# XXX: thanks to this SO answer for this conversion tip:
# https://stackoverflow.com/a/72054819
df = pl.DataFrame({
field_name: ohlcv[field_name]
for field_name in ohlcv.dtype.fields
})
delay: float = round(
time.time() - start,
ndigits=6,
# start_t: float = prev_r['time']
# assert (
# dt_start_t
# ==
# start_t
# )
# TODO: implement px-col width measure
# and ensure at least as many px-cols
# shown per rect as configured by user.
# gap_w: float = abs((iend - istart))
# if gap_w < 6:
# margin: float = 6
# iend += margin
# istart -= margin
rect_gap: float = BGM*3/8
opn: float = row['open'][0]
ro: tuple[float, float] = (
# dt_end_t,
iend + rect_gap + 1,
opn,
)
log.info(
f'numpy -> polars conversion took {delay} secs\n'
f'polars df: {df}'
cls: float = prev_r['close'][0]
lc: tuple[float, float] = (
# dt_start_t,
istart - rect_gap, # + 1 ,
cls,
)
yield (
shmfile,
shm,
df,
color: str = 'dad_blue'
diff: float = cls - opn
sgn: float = copysign(1, diff)
color: str = {
-1: 'buy_green',
1: 'sell_red',
}[sgn]
rect_kwargs: dict[str, Any] = dict(
fqme=fqme,
timeframe=timeframe,
start_pos=lc,
end_pos=ro,
color=color,
)
aid: int = await actl.add_rect(**rect_kwargs)
assert aid
aids[aid] = rect_kwargs
# tell chart to redraw all its
# graphics view layers Bo
await actl.redraw(
fqme=fqme,
timeframe=timeframe,
)
return aids
@store.command()
def ldshm(
fqme: str,
write_parquet: bool = False,
write_parquet: bool = True,
reload_parquet_to_shm: bool = True,
) -> None:
'''
Linux ONLY: load any shm buffer (from /dev/shm/) whose file
name matches the fqme into an OHLCV numpy array and polars DataFrame,
optionally write to .parquet file.
optionally write to offline storage via `.parquet` file.
'''
async def main():
from piker.ui._remote_ctl import (
open_annot_ctl,
)
actl: AnnotCtl
mod: ModuleType
client: StorageClient
async with (
open_piker_runtime(
'polars_boi',
enable_modules=['piker.data._sharedmem'],
debug_mode=True,
),
open_storage_client() as (
mod,
client,
),
open_annot_ctl() as actl,
):
df: pl.DataFrame | None = None
for shmfile, shm, src_df in iter_dfs_from_shms(fqme):
shm_df: pl.DataFrame | None = None
tf2aids: dict[float, dict] = {}
for (
shmfile,
shm,
# parquet_path,
shm_df,
) in tsp.iter_dfs_from_shms(fqme):
# compute ohlc properties for naming
times: np.ndarray = shm.array['time']
secs: float = times[-1] - times[-2]
if secs < 1.:
d1: float = float(times[-1] - times[-2])
d2: float = float(times[-2] - times[-3])
med: float = np.median(np.diff(times))
if (
d1 < 1.
and d2 < 1.
and med < 1.
):
raise ValueError(
f'Something is wrong with time period for {shm}:\n{times}'
)
from piker.data import _timeseries as tsmod
df: pl.DataFrame = tsmod.with_dts(src_df)
gaps: pl.DataFrame = tsmod.detect_time_gaps(df)
period_s: float = float(max(d1, d2, med))
# TODO: maybe only optionally enter this depending
# on some CLI flags and/or gap detection?
if (
not gaps.is_empty()
or secs > 2
):
null_segs: tuple = tsp.get_null_segs(
frame=shm.array,
period=period_s,
)
# TODO: call null-seg fixer somehow?
if null_segs:
await tractor.pause()
# async with (
# trio.open_nursery() as tn,
# mod.open_history_client(
# mkt,
# ) as (get_hist, config),
# ):
# nulls_detected: trio.Event = await tn.start(partial(
# tsp.maybe_fill_null_segments,
# write to parquet file?
if write_parquet:
timeframe: str = f'{secs}s'
# shm=shm,
# timeframe=timeframe,
# get_hist=get_hist,
# sampler_stream=sampler_stream,
# mkt=mkt,
# ))
datadir: Path = get_conf_dir() / 'nativedb'
if not datadir.is_dir():
datadir.mkdir()
# over-write back to shm?
wdts: pl.DataFrame # with dts
deduped: pl.DataFrame # deduplicated dts
(
wdts,
deduped,
diff,
) = tsp.dedupe(
shm_df,
period=period_s,
)
path: Path = datadir / f'{fqme}.{timeframe}.parquet'
# detect gaps in the expected (uniform OHLC) sample period
step_gaps: pl.DataFrame = tsp.detect_time_gaps(
deduped,
expect_period=period_s,
)
# write to fs
start = time.time()
df.write_parquet(path)
delay: float = round(
time.time() - start,
ndigits=6,
)
log.info(
f'parquet write took {delay} secs\n'
f'file path: {path}'
# TODO: by default we always want to mark these up
# with rects showing up/down gaps Bo
venue_gaps: pl.DataFrame = tsp.detect_time_gaps(
deduped,
expect_period=period_s,
# TODO: actually pull the exact duration
# expected for each venue operational period?
gap_dt_unit='days',
gap_thresh=1,
)
# TODO: find the disjoint set of step gaps from
# venue (closure) set!
# -[ ] do a set diff by checking for the unique
# gap set only in the step_gaps?
if (
not venue_gaps.is_empty()
or (
period_s < 60
and not step_gaps.is_empty()
)
):
# write repaired ts to parquet-file?
if write_parquet:
start: float = time.time()
path: Path = await client.write_ohlcv(
fqme,
ohlcv=deduped,
timeframe=period_s,
)
write_delay: float = round(
time.time() - start,
ndigits=6,
)
# read back from fs
start = time.time()
read_df: pl.DataFrame = pl.read_parquet(path)
delay: float = round(
time.time() - start,
ndigits=6,
)
print(
f'parquet read took {delay} secs\n'
f'polars df: {read_df}'
)
# read back from fs
start: float = time.time()
read_df: pl.DataFrame = pl.read_parquet(path)
read_delay: float = round(
time.time() - start,
ndigits=6,
)
log.info(
f'parquet write took {write_delay} secs\n'
f'file path: {path}'
f'parquet read took {read_delay} secs\n'
f'polars df: {read_df}'
)
if df is None:
log.error(f'No matching shm buffers for {fqme} ?')
if reload_parquet_to_shm:
new = tsp.pl2np(
deduped,
dtype=shm.array.dtype,
)
# since normally readonly
shm._array.setflags(
write=int(1),
)
shm.push(
new,
prepend=True,
start=new['index'][-1],
update_first=False, # don't update ._first
)
do_markup_gaps: bool = True
if do_markup_gaps:
new_df: pl.DataFrame = tsp.np2pl(new)
aids: dict = await markup_gaps(
fqme,
period_s,
actl,
new_df,
step_gaps,
)
# last chance manual overwrites in REPL
# await tractor.pause()
assert aids
tf2aids[period_s] = aids
else:
# allow interaction even when no ts problems.
assert not diff
await tractor.pause()
log.info('Exiting TSP shm anal-izer!')
if shm_df is None:
log.error(
f'No matching shm buffers for {fqme} ?'
)
trio.run(main)

View File

@ -19,7 +19,8 @@
call a poor man's tsdb).
AKA a `piker`-native file-system native "time series database"
without needing an extra process and no standard TSDB features, YET!
without needing an extra process and no standard TSDB features,
YET!
'''
# TODO: like there's soo much..
@ -55,8 +56,6 @@ from datetime import datetime
from pathlib import Path
import time
# from bidict import bidict
# import tractor
import numpy as np
import polars as pl
from pendulum import (
@ -64,45 +63,18 @@ from pendulum import (
)
from piker import config
from piker.data import def_iohlcv_fields
from piker.data import ShmArray
from piker import tsp
from piker.data import (
def_iohlcv_fields,
ShmArray,
)
from piker.log import get_logger
from . import TimeseriesNotFound
log = get_logger('storage.nativedb')
# NOTE: thanks to this SO answer for the below conversion routines
# to go from numpy struct-arrays to polars dataframes and back:
# https://stackoverflow.com/a/72054819
def np2pl(array: np.ndarray) -> pl.DataFrame:
return pl.DataFrame({
field_name: array[field_name]
for field_name in array.dtype.fields
})
def pl2np(
df: pl.DataFrame,
dtype: np.dtype,
) -> np.ndarray:
# Create numpy struct array of the correct size and dtype
# and loop through df columns to fill in array fields.
array = np.empty(
df.height,
dtype,
)
for field, col in zip(
dtype.fields,
df.columns,
):
array[field] = df.get_column(col).to_numpy()
return array
def detect_period(shm: ShmArray) -> float:
'''
Attempt to detect the series time step sampling period
@ -123,16 +95,19 @@ def detect_period(shm: ShmArray) -> float:
def mk_ohlcv_shm_keyed_filepath(
fqme: str,
period: float, # otherwise known as the "timeframe"
period: float | int, # otherwise known as the "timeframe"
datadir: Path,
) -> str:
) -> Path:
if period < 1.:
raise ValueError('Sample period should be >= 1.!?')
period_s: str = f'{period}s'
path: Path = datadir / f'{fqme}.ohlcv{period_s}.parquet'
path: Path = (
datadir
/
f'{fqme}.ohlcv{int(period)}s.parquet'
)
return path
@ -186,7 +161,13 @@ class NativeStorageClient:
def index_files(self):
for path in self._datadir.iterdir():
if path.name in {'borked', 'expired',}:
if (
path.is_dir()
or
'.parquet' not in str(path)
# or
# path.name in {'borked', 'expired',}
):
continue
key: str = path.name.rstrip('.parquet')
@ -228,8 +209,21 @@ class NativeStorageClient:
fqme,
timeframe,
)
except FileNotFoundError:
return None
except FileNotFoundError as fnfe:
bs_fqme, _, *_ = fqme.rpartition('.')
possible_matches: list[str] = []
for tskey in self._index:
if bs_fqme in tskey:
possible_matches.append(tskey)
match_str: str = '\n'.join(sorted(possible_matches))
raise TimeseriesNotFound(
f'No entry for `{fqme}`?\n'
f'Maybe you need a more specific fqme-key like:\n\n'
f'{match_str}'
) from fnfe
times = array['time']
return (
@ -242,6 +236,7 @@ class NativeStorageClient:
self,
fqme: str,
period: float,
) -> Path:
return mk_ohlcv_shm_keyed_filepath(
fqme=fqme,
@ -249,6 +244,23 @@ class NativeStorageClient:
datadir=self._datadir,
)
def _cache_df(
self,
fqme: str,
df: pl.DataFrame,
timeframe: float,
) -> None:
# cache df for later usage since we (currently) need to
# convert to np.ndarrays to push to our `ShmArray` rt
# buffers subsys but later we may operate entirely on
# pyarrow arrays/buffers so keeping the dfs around for
# a variety of purposes is handy.
self._dfs.setdefault(
timeframe,
{},
)[fqme] = df
async def read_ohlcv(
self,
fqme: str,
@ -257,13 +269,20 @@ class NativeStorageClient:
# limit: int = int(200e3),
) -> np.ndarray:
path: Path = self.mk_path(fqme, period=int(timeframe))
path: Path = self.mk_path(
fqme,
period=int(timeframe),
)
df: pl.DataFrame = pl.read_parquet(path)
self._dfs.setdefault(timeframe, {})[fqme] = df
self._cache_df(
fqme=fqme,
df=df,
timeframe=timeframe,
)
# TODO: filter by end and limit inputs
# times: pl.Series = df['time']
array: np.ndarray = pl2np(
array: np.ndarray = tsp.pl2np(
df,
dtype=np.dtype(def_iohlcv_fields),
)
@ -273,11 +292,15 @@ class NativeStorageClient:
self,
fqme: str,
period: int = 60,
load_from_offline: bool = True,
) -> pl.DataFrame:
try:
return self._dfs[period][fqme]
except KeyError:
if not load_from_offline:
raise
await self.read_ohlcv(fqme, period)
return self._dfs[period][fqme]
@ -299,32 +322,39 @@ class NativeStorageClient:
datadir=self._datadir,
)
if isinstance(ohlcv, np.ndarray):
df: pl.DataFrame = np2pl(ohlcv)
df: pl.DataFrame = tsp.np2pl(ohlcv)
else:
df = ohlcv
self._cache_df(
fqme=fqme,
df=df,
timeframe=timeframe,
)
# TODO: in terms of managing the ultra long term data
# - use a proper profiler to measure all this IO and
# -[ ] use a proper profiler to measure all this IO and
# roundtripping!
# - try out ``fastparquet``'s append writing:
# https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
# -[ ] implement parquet append!? see issue:
# https://github.com/pikers/piker/issues/536
# -[ ] try out ``fastparquet``'s append writing:
# https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
start = time.time()
df.write_parquet(path)
delay: float = round(
time.time() - start,
ndigits=6,
)
print(
log.info(
f'parquet write took {delay} secs\n'
f'file path: {path}'
)
return path
async def write_ohlcv(
self,
fqme: str,
ohlcv: np.ndarray,
ohlcv: np.ndarray | pl.DataFrame,
timeframe: int,
) -> Path:
@ -376,6 +406,8 @@ class NativeStorageClient:
# ...
# TODO: does this need to be async on average?
# I guess for any IPC connected backend yes?
@acm
async def get_client(
@ -393,7 +425,7 @@ async def get_client(
'''
datadir: Path = config.get_conf_dir() / 'nativedb'
if not datadir.is_dir():
log.info(f'Creating `nativedb` director: {datadir}')
log.info(f'Creating `nativedb` dir: {datadir}')
datadir.mkdir()
client = NativeStorageClient(datadir)

View File

@ -18,24 +18,12 @@
Toolz for debug, profile and trace of the distributed runtime :surfer:
'''
from .debug import (
open_crash_handler,
from tractor.devx import (
open_crash_handler as open_crash_handler,
)
from .profile import (
Profiler,
pg_profile_enabled,
ms_slower_then,
timeit,
Profiler as Profiler,
pg_profile_enabled as pg_profile_enabled,
ms_slower_then as ms_slower_then,
timeit as timeit,
)
# TODO: other mods to include?
# - DROP .trionics, already moved into tractor
# - move in `piker.calc`
__all__: list[str] = [
'open_crash_handler',
'pg_profile_enabled',
'ms_slower_then',
'Profiler',
'timeit',
]

View File

@ -1,40 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Debugger wrappers for `pdbp` as used by `tractor`.
'''
from contextlib import contextmanager as cm
import pdbp
# TODO: better naming and what additionals?
# - optional runtime plugging?
# - detection for sync vs. async code?
# - specialized REPL entry when in distributed mode?
@cm
def open_crash_handler():
'''
Super basic crash handler using `pdbp` debugger.
'''
try:
yield
except BaseException:
pdbp.xpm()
raise

1462
piker/tsp/__init__.py 100644

File diff suppressed because it is too large

746
piker/tsp/_anal.py 100644
View File

@ -0,0 +1,746 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Financial time series processing utilities usually
pertaining to OHLCV style sampled data.
Routines are generally implemented in either ``numpy`` or
``polars`` B)
'''
from __future__ import annotations
from functools import partial
from math import (
ceil,
floor,
)
import time
from typing import (
Literal,
# AsyncGenerator,
Generator,
)
import numpy as np
import polars as pl
from pendulum import (
DateTime,
from_timestamp,
)
from ..toolz.profile import (
Profiler,
pg_profile_enabled,
ms_slower_then,
)
from ..log import (
get_logger,
get_console_log,
)
# for "time series processing"
subsys: str = 'piker.tsp'
log = get_logger(subsys)
get_console_log = partial(
get_console_log,
name=subsys,
)
# NOTE: union type-defs to handle generic `numpy` and `polars` types
# side-by-side Bo
# |_ TODO: schema spec typing?
# -[ ] nptyping!
# -[ ] wtv we can with polars?
Frame = pl.DataFrame | np.ndarray
Seq = pl.Series | np.ndarray
def slice_from_time(
arr: np.ndarray,
start_t: float,
stop_t: float,
step: float, # sampler period step-diff
) -> slice:
'''
Calculate array indices mapped from a time range and return them in
a slice.
Given an input array with an epoch `'time'` series entry, calculate
the indices which span the time range and return them in a slice.
Presume each `'time'` step increment is uniform; when the time stamp
series contains gaps (i.e. the uniformity presumption is broken) use
``np.searchsorted()`` binary search to look up the appropriate
index.
'''
profiler = Profiler(
msg='slice_from_time()',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
)
times = arr['time']
t_first = floor(times[0])
t_last = ceil(times[-1])
# the greatest index we can return which slices to the
# end of the input array.
read_i_max = arr.shape[0]
# compute (presumed) uniform-time-step index offsets
i_start_t = floor(start_t)
read_i_start = floor(((i_start_t - t_first) // step)) - 1
i_stop_t = ceil(stop_t)
# XXX: edge case -> always set stop index to last in array whenever
# the input stop time is detected to be greater than the equiv time
# stamp at that last entry.
if i_stop_t >= t_last:
read_i_stop = read_i_max
else:
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
# always clip outputs to array support
# for read start:
# - never allow a start < the 0 index
# - never allow an end index > the read array len
read_i_start = min(
max(0, read_i_start),
read_i_max - 1,
)
read_i_stop = max(
0,
min(read_i_stop, read_i_max),
)
# check for a larger-than-latest calculated index for the given start
# time, in which case we do a binary search for the correct index.
# NOTE: this is usually the result of a time series with time gaps
# where it is expected that each index step maps to a uniform step
# in the time stamp series.
t_iv_start = times[read_i_start]
if (
t_iv_start > i_start_t
):
# do a binary search for the best index mapping to ``start_t``
# given we measured an overshoot using the uniform-time-step
# calculation from above.
# TODO: once we start caching these per source-array,
# we can just overwrite ``read_i_start`` directly.
new_read_i_start = np.searchsorted(
times,
i_start_t,
side='left',
)
# TODO: minimize binary search work as much as possible:
# - cache these remap values which compensate for gaps in the
# uniform time step basis where we calc a later start
# index for the given input ``start_t``.
# - can we shorten the input search sequence by heuristic?
# up_to_arith_start = index[:read_i_start]
if (
new_read_i_start <= read_i_start
):
# t_diff = t_iv_start - start_t
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
# f'diff: {t_diff}\n'
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
# )
read_i_start = new_read_i_start
t_iv_stop = times[read_i_stop - 1]
if (
t_iv_stop > i_stop_t
):
# t_diff = stop_t - t_iv_stop
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
# f'diff: {t_diff}\n'
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
# )
new_read_i_stop = np.searchsorted(
times[read_i_start:],
# times,
i_stop_t,
side='right',
)
if (
new_read_i_stop <= read_i_stop
):
read_i_stop = read_i_start + new_read_i_stop + 1
# sanity checks for range size
# samples = (i_stop_t - i_start_t) // step
# index_diff = read_i_stop - read_i_start + 1
# if index_diff > (samples + 3):
# breakpoint()
# read-relative indexes: gives a slice where `shm.array[read_slc]`
# will be the data spanning the input time range `start_t` ->
# `stop_t`
read_slc = slice(
int(read_i_start),
int(read_i_stop),
)
profiler(
'slicing complete'
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
)
# NOTE: if caller needs absolute buffer indices they can
# slice the buffer abs index like so:
# index = arr['index']
# abs_indx = index[read_slc]
# abs_slc = slice(
# int(abs_indx[0]),
# int(abs_indx[-1]),
# )
return read_slc
def get_null_segs(
frame: Frame,
period: float, # sampling step in seconds
imargin: int = 1,
col: str = 'time',
) -> tuple[
# Seq, # TODO: can we make it an array-type instead?
list[
list[int, int],
],
Seq,
Frame
] | None:
'''
Detect if there are any zero(-epoch stamped) valued
rows for the provided `col: str` column; by default
presume the 'time' field/column.
Filter to all such zero (time) segments and return
the corresponding frame's zeroed segments:
- gap absolute (in buffer terms) indices-endpoints as
`absi_zsegs`
- abs indices of all rows with zeroed `col` values as `absi_zeros`
- the corresponding frame's row-entries (view) which are
zeroed for the `col` as `zero_t`
'''
times: Seq = frame['time']
zero_pred: Seq = (times == 0)
if isinstance(frame, np.ndarray):
tis_zeros: int = zero_pred.any()
else:
tis_zeros: int = zero_pred.any()
if not tis_zeros:
return None
# TODO: use ndarray for this?!
absi_zsegs: list[list[int, int]] = []
if isinstance(frame, np.ndarray):
# view of ONLY the zero segments as one continuous chunk
zero_t: np.ndarray = frame[zero_pred]
# abs indices of said zeroed rows
absi_zeros = zero_t['index']
# diff of abs index steps between each zeroed row
absi_zdiff: np.ndarray = np.diff(absi_zeros)
# scan for all frame-indices where the
# zeroed-row-abs-index-step-diff is greater than the
# expected increment of 1.
# data 1st zero seg data zeros
# ---- ------------ ---- ----- ------ ----
# ||||..000000000000..||||..00000..||||||..0000
# ---- ------------ ---- ----- ------ ----
# ^zero_t[0] ^zero_t[-1]
# ^fi_zgaps[0] ^fi_zgaps[1]
# ^absi_zsegs[0][0] ^---^ => absi_zsegs[1]: tuple
# absi_zsegs[0][1]^
#
# NOTE: the first entry in `fi_zgaps` is where
# the first (absolute) index step diff is > 1.
# and it is a frame-relative index into `zero_t`.
fi_zgaps = np.argwhere(
absi_zdiff > 1
# NOTE: the +1 here ensures we index to the "start" of each
# segment (if we didn't, the below loop would need to be
# re-written to expect `fi_end_rows`!)
) + 1
# the rows from the contiguous zeroed segments which have
# abs-index steps >1 compared to the previous zero row
# (indicating an end of zeroed segment).
fi_zseg_start_rows = zero_t[fi_zgaps]
# TODO: equiv for pl.DataFrame case!
else:
izeros: pl.Series = zero_pred.arg_true()
zero_t: pl.DataFrame = frame[izeros]
absi_zeros = zero_t['index']
absi_zdiff: pl.Series = absi_zeros.diff()
fi_zgaps = (absi_zdiff > 1).arg_true()
# XXX: our goal (in this func) is to select out slice index
# pairs (zseg0_start, zseg_end) in abs index units for each
# null-segment portion detected throughout entire input frame.
# only up to one null-segment in entire frame?
num_gaps: int = fi_zgaps.size + 1
if num_gaps < 1:
if absi_zeros.size > 1:
absi_zsegs = [[
# TODO: maybe mk these max()/min() limits func
# consts instead of calling them more than once?
max(
absi_zeros[0] - 1,
0,
),
# NOTE: need the + 1 to guarantee we index "up to"
# the next non-null row-datum.
min(
absi_zeros[-1] + 1,
frame['index'][-1],
),
]]
else:
# XXX EDGE CASE: only one null-datum found so
# mark the start abs index as None to trigger
# a full frame-len query to the respective backend?
absi_zsegs = [[
# see `get_hist()` in backend, should ALWAYS be
# able to handle a `start_dt=None`!
# None,
None,
absi_zeros[0] + 1,
]]
# XXX NOTE XXX: if >= 2 zeroed segments are found, there should
# ALWAYS be more than one zero-segment-abs-index-step-diff row
# in `absi_zdiff`, so loop through all such
# abs-index-step-diffs >1 (i.e. the entries of `absi_zdiff`)
# and add them as the "end index" entries for each segment.
# Then, iff NOT iterating the first such segment end, look back
# for the prior segment's zero-segment start index by relatively
# indexing the `zero_t` frame by -1 and grabbing the abs index
# of what should be the prior zero-segment's abs start index.
else:
# NOTE: since `absi_zdiff` will never have a row
# corresponding to the first zero-segment's row, we add it
# manually here.
absi_zsegs.append([
max(
absi_zeros[0] - 1,
0,
),
None,
])
# TODO: can we do it with vec ops?
for i, (
fi, # frame index of zero-seg start
zseg_start_row, # full row for ^
) in enumerate(zip(
fi_zgaps,
fi_zseg_start_rows,
)):
assert (zseg_start_row == zero_t[fi]).all()
iabs: int = zseg_start_row['index'][0]
absi_zsegs.append([
iabs - 1,
None, # backfilled on next iter
])
# final iter case, backfill FINAL end iabs!
if (i + 1) == fi_zgaps.size:
absi_zsegs[-1][1] = absi_zeros[-1] + 1
# NOTE: only after the first segment (due to `.diff()`
# usage above) can we do a lookback to the prior
# segment's end row and determine its abs index to
# retroactively insert to the prior
# `absi_zsegs[i-1][1]` entry Bo
last_end: int = absi_zsegs[i][1]
if last_end is None:
prev_zseg_row = zero_t[fi - 1]
absi_post_zseg = prev_zseg_row['index'][0] + 1
# XXX: MUST BACKFILL previous end iabs!
absi_zsegs[i][1] = absi_post_zseg
else:
if 0 < num_gaps < 2:
absi_zsegs[-1][1] = min(
absi_zeros[-1] + 1,
frame['index'][-1],
)
iabs_first: int = frame['index'][0]
for start, end in absi_zsegs:
ts_start: float = times[start - iabs_first]
ts_end: float = times[end - iabs_first]
if (
(ts_start == 0 and not start == 0)
or
ts_end == 0
):
import pdbp
pdbp.set_trace()
assert end
assert start < end
log.warning(
f'Frame has {len(absi_zsegs)} NULL GAPS!?\n'
f'period: {period}\n'
f'total null samples: {len(zero_t)}\n'
)
return (
absi_zsegs, # [start, end] abs slice indices of seg
absi_zeros, # all abs indices within all null-segs
zero_t, # sliced-view of all null-segment rows-datums
)
def iter_null_segs(
timeframe: float,
frame: Frame | None = None,
null_segs: tuple | None = None,
) -> Generator[
tuple[
int, int,
int, int,
float, float,
float, float,
# Seq, # TODO: can we make it an array-type instead?
# list[
# list[int, int],
# ],
# Seq,
# Frame
],
None,
]:
if not (
null_segs := get_null_segs(
frame,
period=timeframe,
)
):
return
absi_pairs_zsegs: list[list[float, float]]
izeros: Seq
zero_t: Frame
(
absi_pairs_zsegs,
izeros,
zero_t,
) = null_segs
absi_first: int = frame[0]['index']
for (
absi_start,
absi_end,
) in absi_pairs_zsegs:
fi_end: int = absi_end - absi_first
end_row: Seq = frame[fi_end]
end_t: float = end_row['time']
end_dt: DateTime = from_timestamp(end_t)
fi_start = None
start_row = None
start_t = None
start_dt = None
if (
absi_start is not None
and start_t != 0
):
fi_start: int = absi_start - absi_first
start_row: Seq = frame[fi_start]
start_t: float = start_row['time']
start_dt: DateTime = from_timestamp(start_t)
if absi_start < 0:
import pdbp
pdbp.set_trace()
yield (
absi_start, absi_end, # abs indices
fi_start, fi_end, # relative "frame" indices
start_t, end_t,
start_dt, end_dt,
)
def with_dts(
df: pl.DataFrame,
time_col: str = 'time',
) -> pl.DataFrame:
'''
Insert datetime (casted) columns to a (presumably) OHLC sampled
time series with an epoch-time column keyed by `time_col: str`.
'''
return df.with_columns([
pl.col(time_col).shift(1).name.suffix('_prev'),
pl.col(time_col).diff().alias('s_diff'),
pl.from_epoch(pl.col(time_col)).alias('dt'),
]).with_columns([
pl.from_epoch(
column=pl.col(f'{time_col}_prev'),
).alias('dt_prev'),
pl.col('dt').diff().alias('dt_diff'),
])
t_unit: Literal = Literal[
'days',
'hours',
'minutes',
'seconds',
'milliseconds',
'microseconds',
'nanoseconds',
]
def detect_time_gaps(
w_dts: pl.DataFrame,
time_col: str = 'time',
# epoch sampling step diff
expect_period: float = 60,
# NOTE: legacy stock mkts have venue operating hours
# and thus gaps normally no more than 1-2 days at
# a time.
gap_thresh: float = 1.,
# TODO: allow passing in a frame of operating hours?
# -[ ] durations/ranges for faster legit gap checks?
# XXX -> must be valid ``polars.Expr.dt.<name>``
# like 'days' which is a sane default for venue closures
# though will detect weekend gaps which are normal :o
gap_dt_unit: t_unit | None = None,
) -> pl.DataFrame:
'''
Filter to OHLC datums which contain sample step gaps.
For eg. legacy markets which have venue close gaps and/or
actual missing data segments.
'''
# first select by any sample-period (in seconds unit) step size
# greater than expected.
step_gaps: pl.DataFrame = w_dts.filter(
pl.col('s_diff').abs() > expect_period
)
if gap_dt_unit is None:
return step_gaps
# NOTE: this flag is to indicate that on this (sampling) time
# scale we expect to only be filtering against larger venue
# closures-scale time gaps.
return step_gaps.filter(
# second, filter by an arbitrary dt-unit step size
getattr(
pl.col('dt_diff').dt,
gap_dt_unit,
)().abs() > gap_thresh
)
def detect_price_gaps(
df: pl.DataFrame,
gt_multiplier: float = 2.,
price_fields: list[str] = ['high', 'low'],
) -> pl.DataFrame:
'''
Detect gaps in clearing price over an OHLC series.
2 types of gaps generally exist; up gaps and down gaps:
- UP gap: when any next sample's lo price is strictly greater
than the current sample's hi price.
- DOWN gap: when any next sample's hi price is strictly
less than the current sample's lo price.
'''
# return df.filter(
# pl.col('high') - ) > expect_period,
# ).select([
# pl.dt.datetime(pl.col(time_col).shift(1)).suffix('_previous'),
# pl.all(),
# ]).select([
# pl.all(),
# (pl.col(time_col) - pl.col(f'{time_col}_previous')).alias('diff'),
# ])
...
# TODO: probably just use the null_segs impl above?
def detect_vlm_gaps(
df: pl.DataFrame,
col: str = 'volume',
) -> pl.DataFrame:
vnull: pl.DataFrame = df.filter(
pl.col(col) == 0
)
return vnull
def dedupe(
src_df: pl.DataFrame,
time_gaps: pl.DataFrame | None = None,
sort: bool = True,
period: float = 60,
) -> tuple[
pl.DataFrame, # with dts
pl.DataFrame, # with deduplicated dts (aka gap/repeat removal)
int, # len diff between input and deduped
]:
'''
Check for time series gaps and, if any are found,
de-duplicate the datetime entries; check for
a frame height diff and return the newly
dt-deduplicated frame.
'''
wdts: pl.DataFrame = with_dts(src_df)
deduped = wdts
# remove duplicated datetime samples/sections
deduped: pl.DataFrame = wdts.unique(
# subset=['dt'],
subset=['time'],
maintain_order=True,
)
# maybe sort on any time field
if sort:
deduped = deduped.sort(by='time')
# TODO: detect out-of-order segments which were corrected!
# -[ ] report in log msg
# -[ ] possibly return segment sections which were moved?
diff: int = (
wdts.height
-
deduped.height
)
return (
wdts,
deduped,
diff,
)
def sort_diff(
src_df: pl.DataFrame,
col: str = 'time',
) -> tuple[
pl.DataFrame, # with dts
pl.DataFrame, # sorted
list[int], # indices of segments that are out-of-order
]:
ser: pl.Series = src_df[col]
sortd: pl.DataFrame = ser.sort()
diff: pl.Series = ser.diff()
sortd_diff: pl.Series = sortd.diff()
i_step_diff = (diff != sortd_diff).arg_true()
frame_reorders: int = i_step_diff.len()
if frame_reorders:
log.warn(
f'Resorted frame on col: {col}\n'
f'{frame_reorders}'
)
# import pdbp; pdbp.set_trace()
# NOTE: thanks to this SO answer for the below conversion routines
# to go from numpy struct-arrays to polars dataframes and back:
# https://stackoverflow.com/a/72054819
def np2pl(array: np.ndarray) -> pl.DataFrame:
start: float = time.time()
# XXX: thanks to this SO answer for this conversion tip:
# https://stackoverflow.com/a/72054819
df = pl.DataFrame({
field_name: array[field_name]
for field_name in array.dtype.fields
})
delay: float = round(
time.time() - start,
ndigits=6,
)
log.info(
f'numpy -> polars conversion took {delay} secs\n'
f'polars df: {df}'
)
return df
def pl2np(
df: pl.DataFrame,
dtype: np.dtype,
) -> np.ndarray:
# Create numpy struct array of the correct size and dtype
# and loop through df columns to fill in array fields.
array = np.empty(
df.height,
dtype,
)
for field, col in zip(
dtype.fields,
df.columns,
):
array[field] = df.get_column(col).to_numpy()
return array

View File

@ -21,15 +21,16 @@ Extensions to built-in or (heavily used but 3rd party) friend-lib
types.
'''
from __future__ import annotations
from collections import UserList
from pprint import (
pformat,
saferepr,
)
from typing import Any
from msgspec import (
msgpack,
Struct,
Struct as _Struct,
structs,
)
@ -62,7 +63,7 @@ class DiffDump(UserList):
class Struct(
Struct,
_Struct,
# https://jcristharif.com/msgspec/structs.html#tagged-unions
# tag='pikerstruct',
@ -72,9 +73,27 @@ class Struct(
A "human friendlier" (aka repl buddy) struct subtype.
'''
def _sin_props(self) -> Iterator[
tuple[
structs.FieldInfo,
str,
Any,
]
]:
'''
Iterate over all non-@property fields of this struct.
'''
fi: structs.FieldInfo
for fi in structs.fields(self):
key: str = fi.name
val: Any = getattr(self, key)
yield fi, key, val
def to_dict(
self,
include_non_members: bool = True,
) -> dict:
'''
Like it sounds.. direct delegation to:
@ -90,16 +109,72 @@ class Struct(
# only return a dict of the struct members
# which were provided as input, NOT anything
# added as `@properties`!
# added as type-defined `@property` methods!
sin_props: dict = {}
for fi in structs.fields(self):
key: str = fi.name
sin_props[key] = asdict[key]
fi: structs.FieldInfo
for fi, k, v in self._sin_props():
sin_props[k] = asdict[k]
return sin_props
def pformat(self) -> str:
return f'Struct({pformat(self.to_dict())})'
def pformat(
self,
field_indent: int = 2,
indent: int = 0,
) -> str:
'''
Recursion-safe `pprint.pformat()` style formatting of
a `msgspec.Struct` for sane reading by a human using a REPL.
'''
# global whitespace indent
ws: str = ' '*indent
# field whitespace indent
field_ws: str = ' '*(field_indent + indent)
# qtn: str = ws + self.__class__.__qualname__
qtn: str = self.__class__.__qualname__
obj_str: str = '' # accumulator
fi: structs.FieldInfo
k: str
v: Any
for fi, k, v in self._sin_props():
# TODO: how can we prefer `Literal['option1', 'option2,
# ..]` over .__name__ == `Literal` but still get only the
# latter for simple types like `str | int | None` etc..?
ft: type = fi.type
typ_name: str = getattr(ft, '__name__', str(ft))
# recurse to get sub-struct's `.pformat()` output Bo
if isinstance(v, Struct):
val_str: str = v.pformat(
indent=field_indent + indent,
field_indent=indent + field_indent,
)
else: # the `pprint` recursion-safe format:
# https://docs.python.org/3.11/library/pprint.html#pprint.saferepr
val_str: str = saferepr(v)
obj_str += (field_ws + f'{k}: {typ_name} = {val_str},\n')
return (
f'{qtn}(\n'
f'{obj_str}'
f'{ws})'
)
# TODO: use a pprint.PrettyPrinter instance around ONLY rendering
# inside a known tty?
# def __repr__(self) -> str:
# ...
# __str__ = __repr__ = pformat
__repr__ = pformat
def copy(
self,

View File

@ -14,9 +14,8 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Stuff for your eyes, aka super hawt Qt UI components.
'''
UI components built using `Qt` with major versions swapped in via
the import indirection in the `.qt` sub-mod.
Currently we only support PyQt5 due to this issue in Pyside2:
https://bugreports.qt.io/projects/PYSIDE/issues/PYSIDE-1313
"""
'''

View File

Anchor functions for UI placement of annotations.
from __future__ import annotations
from typing import Callable, TYPE_CHECKING
from PyQt5.QtCore import QPointF
from PyQt5.QtWidgets import QGraphicsPathItem
from piker.ui.qt import (
QPointF,
QGraphicsPathItem,
)
if TYPE_CHECKING:
from ._chart import ChartPlotWidget

View File

@ -20,12 +20,22 @@ Annotations for ur faces.
"""
from typing import Callable
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF, QRectF
from PyQt5.QtWidgets import QGraphicsPathItem
from pyqtgraph import Point, functions as fn, Color
from pyqtgraph import (
Point,
functions as fn,
Color,
)
import numpy as np
from piker.ui.qt import (
QtCore,
QtGui,
QtWidgets,
QPointF,
QRectF,
QGraphicsPathItem,
)
def mk_marker_path(

View File

@ -21,9 +21,12 @@ Main app startup and run.
from functools import partial
from types import ModuleType
from PyQt5.QtCore import QEvent
import tractor
import trio
from piker.ui.qt import (
QEvent,
)
from ..service import maybe_spawn_brokerd
from . import _event
from ._exec import run_qtractor
@ -114,6 +117,7 @@ async def _async_main(
needed_brokermods[brokername] = brokers[brokername]
async with (
tractor.trionics.collapse_eg(),
trio.open_nursery() as root_n,
):
# set root nursery and task stack for spawning other charts/feeds

View File

@ -23,16 +23,24 @@ from functools import lru_cache
from typing import Callable
from math import floor
import numpy as np
import polars as pl
import pyqtgraph as pg
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF
from piker.ui.qt import (
QtCore,
QtGui,
QtWidgets,
QPointF,
txt_flag,
align_flag,
px_cache_mode,
)
from . import _pg_overrides as pgo
from ..accounting._mktinfo import float_digits
from ._label import Label
from ._style import DpiAwareFont, hcolor, _font
from ._interaction import ChartView
from ._dataviz import Viz
_axis_pen = pg.mkPen(hcolor('bracket'))
@ -287,9 +295,7 @@ class DynamicDateAxis(Axis):
# time formats mapped by seconds between bars
tick_tpl = {
60 * 60 * 24: '%Y-%b-%d',
60: '%H:%M',
30: '%H:%M:%S',
5: '%H:%M:%S',
60: '%Y-%b-%d(%H:%M)',
1: '%H:%M:%S',
}
@ -305,10 +311,10 @@ class DynamicDateAxis(Axis):
# XX: ARGGGGG AG:LKSKDJF:LKJSDFD
chart = self.pi.chart_widget
viz = chart._vizs[chart.name]
viz: Viz = chart._vizs[chart.name]
shm = viz.shm
array = shm.array
ifield = viz.index_field
ifield: str = viz.index_field
index = array[ifield]
i_0, i_l = index[0], index[-1]
@ -329,7 +335,7 @@ class DynamicDateAxis(Axis):
arr_len = index.shape[0]
first = shm._first.value
times = array['time']
epochs = times[
epochs: list[int] = times[
list(
map(
int,
@ -341,23 +347,30 @@ class DynamicDateAxis(Axis):
)
]
else:
epochs = list(map(int, indexes))
epochs: list[int] = list(map(int, indexes))
# TODO: **don't** have this hard coded shift to EST
# delay = times[-1] - times[-2]
dts = np.array(
delay: float = viz.time_step()
if delay > 1:
# NOTE: use less granular dt-str when using 1M+ OHLC
fmtstr: str = self.tick_tpl[delay]
else:
fmtstr: str = '%Y-%m-%d(%H:%M:%S)'
# https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.from_epoch.html#polars-from-epoch
pl_dts: pl.Series = pl.from_epoch(
epochs,
dtype='datetime64[s]',
time_unit='s',
# NOTE: kinda weird we can pass it to `.from_epoch()` no?
).dt.replace_time_zone(
time_zone='UTC'
).dt.convert_time_zone(
# TODO: pull this from either:
# -[ ] the mkt venue tz by default
# -[ ] the user's config under `sys.mkt_timezone: str`
'EST'
)
# see units listing:
# https://numpy.org/devdocs/reference/arrays.datetime.html#datetime-units
return list(np.datetime_as_string(dts))
# TODO: per timeframe formatting?
# - we probably need this based on zoom now right?
# prec = self.np_dt_precision[delay]
# return dts.strftime(self.tick_tpl[delay])
return pl_dts.dt.to_string(fmtstr).to_list()
def tickStrings(
self,
@ -408,11 +421,15 @@ class AxisLabel(pg.GraphicsObject):
super().__init__()
self.setParentItem(parent)
self.setFlag(self.ItemIgnoresTransformations)
self.setFlag(
self.GraphicsItemFlag.ItemIgnoresTransformations
)
self.setZValue(100)
# XXX: pretty sure this is faster
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self.setCacheMode(
px_cache_mode.DeviceCoordinateCache
)
self._parent = parent
@ -549,21 +566,14 @@ class AxisLabel(pg.GraphicsObject):
return (self.rect.width(), self.rect.height())
# _common_text_flags = (
# QtCore.Qt.TextDontClip |
# QtCore.Qt.AlignCenter |
# QtCore.Qt.AlignTop |
# QtCore.Qt.AlignHCenter |
# QtCore.Qt.AlignVCenter
# )
class XAxisLabel(AxisLabel):
_x_margin = 8
text_flags = (
QtCore.Qt.TextDontClip
| QtCore.Qt.AlignCenter
align_flag.AlignCenter
| txt_flag.TextDontClip
)
def size_hint(self) -> tuple[float, float]:
@ -620,10 +630,10 @@ class YAxisLabel(AxisLabel):
_y_margin: int = 4
text_flags = (
QtCore.Qt.AlignLeft
# QtCore.Qt.AlignHCenter
| QtCore.Qt.AlignVCenter
| QtCore.Qt.TextDontClip
align_flag.AlignLeft
| align_flag.AlignVCenter
# | align_flag.AlignHCenter
| txt_flag.TextDontClip
)
def __init__(

View File

@ -28,22 +28,19 @@ from typing import (
TYPE_CHECKING,
)
from PyQt5 import QtCore, QtWidgets
from PyQt5.QtCore import (
import pyqtgraph as pg
import trio
from piker.ui.qt import (
QtCore,
Qt,
QLineF,
# QPointF,
)
from PyQt5.QtWidgets import (
QFrame,
QWidget,
QHBoxLayout,
QVBoxLayout,
QSplitter,
)
import pyqtgraph as pg
import trio
from ._axes import (
DynamicDateAxis,
PriceAxis,
@ -570,8 +567,8 @@ class LinkedSplits(QWidget):
# style?
self.chart.setFrameStyle(
QFrame.StyledPanel |
QFrame.Plain
QFrame.Shape.StyledPanel |
QFrame.Shadow.Plain
)
return self.chart
@ -689,8 +686,8 @@ class LinkedSplits(QWidget):
cpw.plotItem.vb.linked = self
cpw.setFrameStyle(
QtWidgets.QFrame.StyledPanel
# | QtWidgets.QFrame.Plain
QFrame.Shape.StyledPanel
# | QFrame.Shadow.Plain
)
# don't show the little "autoscale" A label.

View File

@ -28,9 +28,14 @@ from typing import (
import inspect
import numpy as np
import pyqtgraph as pg
from PyQt5 import QtCore, QtWidgets
from PyQt5.QtCore import QPointF, QRectF
from piker.ui.qt import (
QPointF,
QRectF,
QtCore,
QtWidgets,
px_cache_mode,
)
from ._style import (
_xaxis_at,
hcolor,
@ -104,7 +109,9 @@ class LineDot(pg.CurvePoint):
dot.setParentItem(self)
# keep a static size
self.setFlag(self.ItemIgnoresTransformations)
self.setFlag(
self.GraphicsItemFlag.ItemIgnoresTransformations
)
def event(
self,
@ -207,9 +214,10 @@ class ContentsLabel(pg.LabelItem):
# this being "html" is the dumbest shit :eyeroll:
self.setText(
"<b>i</b>:{index}<br/>"
"<b>i_arr</b>:{index}<br/>"
# NB: these fields must be indexed in the correct order via
# the slice syntax below.
"<b>i_shm</b>:{}<br/>"
"<b>epoch</b>:{}<br/>"
"<b>O</b>:{}<br/>"
"<b>H</b>:{}<br/>"
@ -219,6 +227,7 @@ class ContentsLabel(pg.LabelItem):
# "<b>wap</b>:{}".format(
*array[ix][
[
'index',
'time',
'open',
'high',
@ -270,10 +279,15 @@ class ContentsLabels:
x_in: int,
) -> None:
for chart, name, label, update in self._labels:
for (
chart,
name,
label,
update,
) in self._labels:
viz = chart.get_viz(name)
array = viz.shm.array
array: np.ndarray = viz.shm._array
index = array[viz.index_field]
start = index[0]
stop = index[-1]
@ -284,7 +298,7 @@ class ContentsLabels:
):
# out of range
print('WTF out of range?')
continue
# continue
# call provided update func with data point
try:
@ -292,6 +306,7 @@ class ContentsLabels:
ix = np.searchsorted(index, x_in)
if ix > len(array):
breakpoint()
update(ix, array)
except IndexError:
@ -416,10 +431,10 @@ class Cursor(pg.GraphicsObject):
# vertical and horizonal lines and a y-axis label
vl = plot.addLine(x=0, pen=self.lines_pen, movable=False)
vl.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
vl.setCacheMode(px_cache_mode.DeviceCoordinateCache)
hl = plot.addLine(y=0, pen=self.lines_pen, movable=False)
hl.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
hl.setCacheMode(px_cache_mode.DeviceCoordinateCache)
hl.hide()
yl = YAxisLabel(
@ -503,7 +518,10 @@ class Cursor(pg.GraphicsObject):
plot=chart
)
chart.addItem(cursor)
self.graphics[chart].setdefault('cursors', []).append(cursor)
self.graphics[chart].setdefault(
'cursors',
[],
).append(cursor)
return cursor
def mouseAction(


@ -19,20 +19,21 @@ Fast, smooth, sexy curves.
"""
from contextlib import contextmanager as cm
from enum import EnumType
from typing import Callable
import numpy as np
import pyqtgraph as pg
from PyQt5 import QtWidgets
from PyQt5.QtWidgets import QGraphicsItem
from PyQt5.QtCore import (
from piker.ui.qt import (
QtWidgets,
QGraphicsItem,
Qt,
QLineF,
QRectF,
)
from PyQt5.QtGui import (
QPainter,
QPainterPath,
px_cache_mode,
)
from ._style import hcolor
from ..log import get_logger
@ -42,22 +43,23 @@ from ..toolz.profile import (
ms_slower_then,
)
log = get_logger(__name__)
pen_style: EnumType = Qt.PenStyle
_line_styles: dict[str, int] = {
'solid': Qt.PenStyle.SolidLine,
'dash': Qt.PenStyle.DashLine,
'dot': Qt.PenStyle.DotLine,
'dashdot': Qt.PenStyle.DashDotLine,
'solid': pen_style.SolidLine,
'dash': pen_style.DashLine,
'dot': pen_style.DotLine,
'dashdot': pen_style.DashDotLine,
}
class FlowGraphic(pg.GraphicsObject):
'''
Base class with minimal interface for `QPainterPath` implemented,
real-time updated "data flow" graphics.
Base class with minimal interface for `QPainterPath`
implemented, real-time updated "data flow" graphics.
See subtypes below.
@ -69,12 +71,12 @@ class FlowGraphic(pg.GraphicsObject):
# XXX-NOTE-XXX: graphics caching B)
# see explanation for different caching modes:
# https://stackoverflow.com/a/39410081
cache_mode: int = QGraphicsItem.DeviceCoordinateCache
cache_mode: int = px_cache_mode.DeviceCoordinateCache
# XXX: WARNING item caching seems to only be useful
# if we don't re-generate the entire QPainterPath every time
# don't ever use this - it's a colossal nightmare of artefacts
# and is disastrous for performance.
# QGraphicsItem.ItemCoordinateCache
# cache_mode.ItemCoordinateCache
# TODO: still questions todo with coord-cacheing that we should
# probably talk to a core dev about:
# - if this makes transform interactions slower (such as zooming)
@ -167,15 +169,16 @@ class FlowGraphic(pg.GraphicsObject):
return None
# XXX: due to a variety of weird jitter bugs and "smearing"
# artifacts when click-drag panning and viewing history time series,
# we offer this ctx-mngr interface to allow temporarily disabling
# Qt's graphics caching mode; this is now currently used from
# ``ChartView.start/signal_ic()`` methods which also disable the
# rt-display loop when the user is moving around a view.
# artifacts when click-drag panning and viewing history time
# series, we offer this ctx-mngr interface to allow temporarily
# disabling Qt's graphics caching mode; this is now currently
# used from ``ChartView.start/signal_ic()`` methods which also
# disable the rt-display loop when the user is moving around
# a view.
@cm
def reset_cache(self) -> None:
try:
none = QGraphicsItem.NoCache
none = px_cache_mode.NoCache
log.debug(
f'{self._name} -> CACHE DISABLE: {none}'
)


@ -36,9 +36,12 @@ from msgspec import (
field,
)
import numpy as np
from numpy import (
ndarray,
)
import pyqtgraph as pg
from PyQt5.QtCore import QLineF
from piker.ui.qt import QLineF
from ..data._sharedmem import (
ShmArray,
)
@ -49,7 +52,7 @@ from ..data._formatters import (
OHLCBarsAsCurveFmtr, # OHLC converted to line
StepCurveFmtr, # "step" curve (like for vlm)
)
from ..data._timeseries import (
from ..tsp import (
slice_from_time,
)
from ._ohlc import (
@ -82,10 +85,11 @@ def render_baritems(
viz: Viz,
graphics: BarItems,
read: tuple[
int, int, np.ndarray,
int, int, np.ndarray,
int, int, ndarray,
int, int, ndarray,
],
profiler: Profiler,
force_redraw: bool = False,
**kwargs,
) -> None:
@ -216,9 +220,11 @@ def render_baritems(
viz._in_ds = should_line
should_redraw = (
changed_to_line
force_redraw
or changed_to_line
or not should_line
)
# print(f'should_redraw: {should_redraw}')
return (
graphics,
r,
@ -250,7 +256,7 @@ class ViewState(Struct):
] | None = None
# last in view ``ShmArray.array[read_slc]`` data
in_view: np.ndarray | None = None
in_view: ndarray | None = None
class Viz(Struct):
@ -313,6 +319,7 @@ class Viz(Struct):
_last_uppx: float = 0
_in_ds: bool = False
_index_step: float | None = None
_time_step: float | None = None
# map from uppx -> (downsampled data, incremental graphics)
_src_r: Renderer | None = None
@ -359,7 +366,8 @@ class Viz(Struct):
def index_step(
self,
reset: bool = False,
index_field: str | None = None,
) -> float:
'''
Return the size between sample steps in the units of the
@ -367,12 +375,17 @@ class Viz(Struct):
epoch time in seconds.
'''
# attempt to dectect the best step size by scanning a sample of
# the source data.
if self._index_step is None:
index: np.ndarray = self.shm.array[self.index_field]
isample: np.ndarray = index[-16:]
# attempt to detect the best step size by scanning a sample
# of the source data.
if (
self._index_step is None
or index_field is not None
):
index: ndarray = self.shm.array[
index_field
or self.index_field
]
isample: ndarray = index[-16:]
mxdiff: None | float = None
for step in np.diff(isample):
@ -386,7 +399,15 @@ class Viz(Struct):
)
mxdiff = step
self._index_step = max(mxdiff, 1)
step: float = max(mxdiff, 1)
# only SET the internal index step if an explicit
# field name is NOT passed, since in such cases this
# is likely just being called from `.time_step()`.
if index_field is not None:
return step
self._index_step = step
if (
mxdiff < 1
or 1 < mxdiff < 60
@ -397,6 +418,17 @@ class Viz(Struct):
return self._index_step
def time_step(self) -> float:
'''
Attempt to determine the per-sample time-step period by
forcing an epoch-index and calling `.index_step()`.
'''
if self._time_step is None:
self._time_step: float = self.index_step(index_field='time')
return self._time_step
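Reduced to a standalone snippet (sample values assumed), the max-gap scan above amounts to:

import numpy as np

index = np.array([0, 60, 120, 180, 240], dtype=float)  # epoch secs
isample = index[-16:]            # only scan a trailing sample
mxdiff = np.diff(isample).max()  # largest inter-sample gap
step = max(mxdiff, 1)            # clamp to >= 1
print(step)  # -> 60.0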
def maxmin(
self,
@ -404,6 +436,9 @@ class Viz(Struct):
i_read_range: tuple[int, int] | None = None,
use_caching: bool = True,
# XXX: internal debug
_do_print: bool = False
) -> tuple[float, float] | None:
'''
Compute the cached max and min y-range values for a given
@ -423,15 +458,14 @@ class Viz(Struct):
if shm is None:
return None
do_print: bool = False
arr = shm.array
arr: ndarray = shm.array
if i_read_range is not None:
read_slc = slice(*i_read_range)
index = arr[read_slc][self.index_field]
index: ndarray = arr[read_slc][self.index_field]
if not index.size:
return None
ixrng = (index[0], index[-1])
ixrng: tuple[int, int] = (index[0], index[-1])
else:
if x_range is None:
@ -449,15 +483,24 @@ class Viz(Struct):
# TODO: hash the slice instead maybe?
# https://stackoverflow.com/a/29980872
ixrng = lbar, rbar = round(x_range[0]), round(x_range[1])
ixrng = lbar, rbar = (
round(x_range[0]),
round(x_range[1]),
)
if (
use_caching
and self._mxmn_cache_enabled
):
# TODO: is there a way to ONLY clear ranges containing
# a certain sub-range?
# -[ ] currently we have a problem where a previously
# cached mxmn will persist even if the viz is "hard
# re-rendered" (usually bc underlying data was
# corrected)
cached_result = self._mxmns.get(ixrng)
if cached_result:
if do_print:
if _do_print:
print(
f'{self.name} CACHED maxmin\n'
f'{ixrng} -> {cached_result}'
@ -487,7 +530,7 @@ class Viz(Struct):
(rbar - ifirst) + 1
)
slice_view = arr[read_slc]
slice_view: ndarray = arr[read_slc]
if not slice_view.size:
log.warning(
@ -498,7 +541,7 @@ class Viz(Struct):
elif self.ds_yrange:
mxmn = self.ds_yrange
if do_print:
if _do_print:
print(
f'{self.name} M4 maxmin:\n'
f'{ixrng} -> {mxmn}'
@ -515,7 +558,7 @@ class Viz(Struct):
mxmn = ylow, yhigh
if (
do_print
_do_print
):
s = 3
print(
@ -529,14 +572,23 @@ class Viz(Struct):
# cache result for input range
ylow, yhi = mxmn
diff: float = yhi - ylow
# order-of-magnitude check
# TODO: really we should be checking the hi or low
# against the previous sample to catch stuff like,
# - rando stock (reverse-)split
# - null-segments written by some prior
# crash-during-backfill
if diff > 0:
omg: float = abs(logf(diff, 10))
else:
omg: float = 0
try:
prolly_anomaly: bool = (
(
abs(logf(ylow, 10)) > 16
if ylow
else False
)
# diff == 0
(ylow and omg > 10)
or (
isnan(ylow) or isnan(yhi)
)
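The guard reduces to a standalone predicate like this (a sketch of the new logic only; stdlib `math` assumed):

from math import log as logf, isnan

def prolly_anomaly(ylow: float, yhi: float) -> bool:
    diff = yhi - ylow
    # order-of-magnitude of the y-range spread
    omg = abs(logf(diff, 10)) if diff > 0 else 0
    return (
        bool(ylow and omg > 10)  # spread spans > 10 decades?
        or isnan(ylow)
        or isnan(yhi)
    )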
@ -563,7 +615,8 @@ class Viz(Struct):
def view_range(self) -> tuple[int, int]:
'''
Return the start and stop x-indexes for the managed ``ViewBox``.
Return the start and stop x-indexes for the managed
``ViewBox``.
'''
vr = self.plot.viewRect()
@ -576,7 +629,7 @@ class Viz(Struct):
self,
view_range: None | tuple[float, float] = None,
index_field: str | None = None,
array: np.ndarray | None = None,
array: ndarray | None = None,
) -> tuple[
int, int, int, int, int, int
@ -647,8 +700,8 @@ class Viz(Struct):
profiler: None | Profiler = None,
) -> tuple[
int, int, np.ndarray,
int, int, np.ndarray,
int, int, ndarray,
int, int, ndarray,
]:
'''
Read the underlying shm array buffer and
@ -818,6 +871,10 @@ class Viz(Struct):
graphics,
read,
profiler,
# NOTE: only set when caller says to
force_redraw=should_redraw,
**kwargs,
)
@ -980,6 +1037,39 @@ class Viz(Struct):
graphics,
)
def reset_graphics(
self,
# TODO: allow only resetting within some x-domain range?
# ixrng: tuple[int, int] | None = None,
) -> None:
'''
Hard reset all graphics (rendering) layers for this
data viz including clearing the mxmn auto-y-range
cache.
Normally called when the underlying data set is modified
(probably by some `.tsp` correcting/editing routine) and
the (now cached) graphics need to be fully re-rendered from
source.
'''
log.warning(
f'Forcing hard Viz graphics RESET:\n'
f'.name: {self.name}\n'
f'.index_field: {self.index_field}\n'
f'.index_step(): {self.index_step()}\n'
f'.time_step(): {self.time_step()}\n'
)
# XXX: always clear the mxn y-range cache
# to avoid old data (anomalies) from being
# retained in auto-yrange output.
self._mxmn_cache_enabled = False
self._mxmns.clear()
self.update_graphics(force_redraw=True)
self._mxmn_cache_enabled = True
def draw_last(
self,
array_key: str | None = None,
@ -1072,7 +1162,7 @@ class Viz(Struct):
'''
shm: ShmArray = self.shm
array: np.ndarray = shm.array
array: ndarray = shm.array
view: ChartView = self.plot.vb
(
vl,


@ -57,6 +57,7 @@ from piker.toolz import (
Profiler,
)
from piker.log import get_logger
from piker import config
# from ..data._source import tf_in_1s
from ._axes import YAxisLabel
from ._chart import (
@ -210,9 +211,9 @@ async def increment_history_view(
):
hist_chart: ChartPlotWidget = ds.hist_chart
hist_viz: Viz = ds.hist_viz
viz: Viz = ds.viz
# viz: Viz = ds.viz
assert 'hist' in hist_viz.shm.token['shm_name']
name: str = hist_viz.name
# name: str = hist_viz.name
# TODO: seems this is more reliable at keeping the slow
# chart incremented in view more correctly?
@ -225,7 +226,8 @@ async def increment_history_view(
# draw everything from scratch on first entry!
for curve_name, hist_viz in hist_chart._vizs.items():
log.info(f'Forcing hard redraw -> {curve_name}')
hist_viz.update_graphics(force_redraw=True)
hist_viz.reset_graphics()
# hist_viz.update_graphics(force_redraw=True)
async with open_sample_stream(1.) as min_istream:
async for msg in min_istream:
@ -248,17 +250,27 @@ async def increment_history_view(
# - samplerd could emit the actual update range via
# tuple and then we only enter the below block if that
# range is detected as in-view?
if (
(bf_wut := msg.get('backfilling', False))
):
viz_name, timeframe = bf_wut
if viz_name == name:
log.info(f'Forcing hard redraw -> {name}@{timeframe}')
match timeframe:
case 60:
hist_viz.update_graphics(force_redraw=True)
case 1:
viz.update_graphics(force_redraw=True)
# match msg:
# case {
# 'backfilling': (viz_name, timeframe),
# } if (
# viz_name == name
# ):
# log.warning(
# f'Forcing HARD REDRAW:\n'
# f'name: {name}\n'
# f'timeframe: {timeframe}\n'
# )
# # TODO: only allow this when the data is IN VIEW!
# # also, we probably can do this more efficiently
# # / smarter by only redrawing the portion of the
# # path necessary?
# {
# 60: hist_viz,
# 1: viz,
# }[timeframe].update_graphics(
# force_redraw=True
# )
# check if slow chart needs an x-domain shift and/or
# y-range resize.
@ -299,6 +311,7 @@ async def increment_history_view(
async def graphics_update_loop(
dss: dict[str, DisplayState],
nurse: trio.Nursery,
godwidget: GodWidget,
feed: Feed,
@ -340,8 +353,6 @@ async def graphics_update_loop(
'i_last_slow_t': 0, # multiview-global slow (1m) step index
}
dss: dict[str, DisplayState] = {}
for fqme, flume in feed.flumes.items():
ohlcv = flume.rt_shm
hist_ohlcv = flume.hist_shm
@ -460,10 +471,18 @@ async def graphics_update_loop(
if ds.hist_vars['i_last'] < ds.hist_vars['i_last_append']:
await tractor.pause()
# try:
# XXX TODO: we need to do _dss UPDATE here so that when
# a feed-view is switched you can still remote annotate the
# prior view..
from . import _remote_ctl
_remote_ctl._dss.update(dss)
# main real-time quotes update loop
stream: tractor.MsgStream
async with feed.open_multi_stream() as stream:
assert stream
# assert stream
async for quotes in stream:
quote_period = time.time() - last_quote_s
quote_rate = round(
@ -479,7 +498,7 @@ async def graphics_update_loop(
pass
# log.warning(f'High quote rate {mkt.fqme}: {quote_rate}')
last_quote_s = time.time()
last_quote_s: float = time.time()
for fqme, quote in quotes.items():
ds = dss[fqme]
@ -509,6 +528,12 @@ async def graphics_update_loop(
quote,
)
# finally:
# # XXX: cancel any remote annotation control ctxs
# _remote_ctl._dss = None
# for cid, (ctx, aids) in _remote_ctl._ctxs.items():
# await ctx.cancel()
def graphics_update_cycle(
ds: DisplayState,
@ -1207,6 +1232,8 @@ async def link_views_with_region(
# region.sigRegionChangeFinished.connect(update_pi_from_region)
# NOTE: default is set to 60 FPS until the runtime delivers the
# discoverd hw value below.
_quote_throttle_rate: int = 60 - 6
@ -1225,7 +1252,7 @@ async def display_symbol_data(
fast from a cached watch-list.
'''
sbar = godwidget.window.status_bar
# sbar = godwidget.window.status_bar
# historical data fetch
# brokermod = brokers.get_brokermod(provider)
@ -1235,11 +1262,11 @@ async def display_symbol_data(
# group_key=loading_sym_key,
# )
for fqme in fqmes:
loading_sym_key = sbar.open_status(
f'loading {fqme} ->',
group_key=True
)
# for fqme in fqmes:
# loading_sym_key = sbar.open_status(
# f'loading {fqme} ->',
# group_key=True
# )
# (TODO: make this not so shit XD)
# close group status once a symbol feed fully loads to view.
@ -1248,26 +1275,54 @@ async def display_symbol_data(
# TODO: ctl over update loop's maximum frequency.
# - load this from a config.toml!
# - allow dyanmic configuration from chart UI?
(
conf,
path,
) = config.load()
ui_conf: dict = conf['ui']
global _quote_throttle_rate
from ._window import main_window
display_rate = main_window().current_screen().refreshRate()
_quote_throttle_rate = floor(display_rate) - 6
display_rate: int = floor(
main_window().current_screen().refreshRate()
) - 6
mx_redraw_rate: int = ui_conf.get(
'max_redraw_rate',
_quote_throttle_rate,
)
if mx_redraw_rate < display_rate:
log.info(
'Down-throttling redraw rate to config setting\n'
f'display FPS: {display_rate}\n'
f'max_redraw_rate: {mx_redraw_rate}\n'
)
# also apply the config-requested down-throttle
_quote_throttle_rate = mx_redraw_rate
else:
_quote_throttle_rate = display_rate
# TODO: we should be able to increase this if we use some
# `mypyc` speedups elsewhere? 22ish seems to be the sweet
# spot for single-feed chart.
num_of_feeds = len(fqmes)
mx: int = 22
if num_of_feeds > 1:
# there will be more ctx switches with more than 1 feed so we
# max throttle down a bit more.
mx = 16
# if num_of_feeds > 1:
# there will be more ctx switches with more than 1 feed so we
# max throttle down a bit more.
mx_per_feed: int = (
ui_conf.get(
'per_feed_redraw_rate',
mx_redraw_rate,
)
or 16
)
# limit to at least display's FPS
# avoiding needless Qt-in-guest-mode context switches
cycles_per_feed = min(
round(_quote_throttle_rate/num_of_feeds),
mx,
mx_per_feed,
)
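Worked through with assumed values, the throttle arithmetic goes:

from math import floor

display_rate = floor(59.95) - 6  # 60Hz screen minus headroom -> 53
num_of_feeds = 3
mx_per_feed = 16
cycles_per_feed = min(
    round(display_rate / num_of_feeds),  # round(53/3) -> 18
    mx_per_feed,
)
print(cycles_per_feed)  # -> 16, the per-feed cap wins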
feed: Feed
@ -1390,7 +1445,10 @@ async def display_symbol_data(
# for pause/resume on mouse interaction
rt_chart.feed = feed
async with trio.open_nursery() as ln:
async with (
tractor.trionics.collapse_eg(),
trio.open_nursery() as ln,
):
# if available load volume related built-in display(s)
vlm_charts: dict[
str,
@ -1412,7 +1470,7 @@ async def display_symbol_data(
start_fsp_displays,
rt_linked,
flume,
loading_sym_key,
# loading_sym_key,
loglevel,
)
@ -1531,8 +1589,10 @@ async def display_symbol_data(
)
# start update loop task
dss: dict[str, DisplayState] = {}
ln.start_soon(
graphics_update_loop,
dss,
ln,
godwidget,
feed,
@ -1546,15 +1606,31 @@ async def display_symbol_data(
order_ctl_fqme: str = fqmes[0]
mode: OrderMode
async with (
open_order_mode(
feed,
godwidget,
order_ctl_fqme,
order_mode_started,
loglevel=loglevel
) as mode
):
) as mode,
# TODO: maybe have these startup sooner before
# order mode fully boots? but we gotta,
# -[ ] decouple the order mode bindings until
# the mode has fully booted..
# -[ ] maybe do an Event to sync?
# start input handling for ``ChartView`` input
# (i.e. kb + mouse handling loops)
rt_chart.view.open_async_input_handler(
dss=dss,
),
hist_chart.view.open_async_input_handler(
dss=dss,
),
):
rt_linked.mode = mode
rt_viz = rt_chart.get_viz(order_ctl_fqme)


@ -21,7 +21,8 @@ Higher level annotation editors.
from __future__ import annotations
from collections import defaultdict
from typing import (
TYPE_CHECKING
Sequence,
TYPE_CHECKING,
)
import pyqtgraph as pg
@ -31,24 +32,34 @@ from pyqtgraph import (
QtCore,
QtWidgets,
)
from PyQt5.QtGui import (
QColor,
)
from PyQt5.QtWidgets import (
QLabel,
)
from pyqtgraph import functions as fn
from PyQt5.QtCore import QPointF
import numpy as np
from piker.types import Struct
from ._style import hcolor, _font
from piker.ui.qt import (
Qt,
QPointF,
QRectF,
QGraphicsProxyWidget,
QGraphicsScene,
QLabel,
QColor,
QTransform,
)
from ._style import (
hcolor,
_font,
)
from ._lines import LevelLine
from ..log import get_logger
if TYPE_CHECKING:
from ._chart import GodWidget
from ._chart import (
GodWidget,
ChartPlotWidget,
)
from ._interaction import ChartView
log = get_logger(__name__)
@ -65,7 +76,7 @@ class ArrowEditor(Struct):
uid: str,
x: float,
y: float,
color='default',
color: str = 'default',
pointing: str | None = None,
) -> pg.ArrowItem:
@ -251,43 +262,75 @@ class LineEditor(Struct):
return lines
class SelectRect(QtWidgets.QGraphicsRectItem):
def as_point(
pair: Sequence[float, float] | QPointF,
) -> QPointF:
'''
Cast any input pair of floats to a `QPointF` for use in Qt
geometry routines.
'''
if isinstance(pair, QPointF):
return pair
return QPointF(pair[0], pair[1])
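A quick usage check of the coercion helper defined above:

p = as_point((1.0, 2.0))
assert (p.x(), p.y()) == (1.0, 2.0)
q = as_point(p)
assert q is p  # `QPointF` inputs pass straight through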
# TODO: maybe implement better, something something RectItemProxy??
# -[ ] dig into details of how proxy's work?
# https://doc.qt.io/qt-5/qgraphicsscene.html#addWidget
# -[ ] consider using `.addRect()` maybe?
class SelectRect(QtWidgets.QGraphicsRectItem):
'''
A data-view "selection rectangle": the most fundamental
geometry for annotating data views.
- https://doc.qt.io/qt-5/qgraphicsrectitem.html
- https://doc.qt.io/qt-6/qgraphicsrectitem.html
'''
def __init__(
self,
viewbox: ViewBox,
color: str = 'dad_blue',
color: str | None = None,
) -> None:
super().__init__(0, 0, 1, 1)
# self.rbScaleBox = QGraphicsRectItem(0, 0, 1, 1)
self.vb = viewbox
self._chart: 'ChartPlotWidget' = None # noqa
self.vb: ViewBox = viewbox
# override selection box color
self._chart: ChartPlotWidget | None = None # noqa
# TODO: maybe allow this to be dynamic via a method?
# override selection box color
color: str = color or 'dad_blue'
color = QColor(hcolor(color))
self.setPen(fn.mkPen(color, width=1))
color.setAlpha(66)
self.setBrush(fn.mkBrush(color))
self.setZValue(1e9)
self.hide()
self._label = None
label = self._label = QLabel()
label.setTextFormat(0) # markdown
label.setTextFormat(
Qt.TextFormat.MarkdownText
)
label.setFont(_font.font)
label.setMargin(0)
label.setAlignment(
QtCore.Qt.AlignLeft
# | QtCore.Qt.AlignVCenter
)
label.hide() # always right after init
# proxy is created after containing scene is initialized
self._label_proxy = None
self._abs_top_right = None
self._label_proxy: QGraphicsProxyWidget | None = None
self._abs_top_right: Point | None = None
# TODO: "swing %" might be handy here (data's max/min # % change)
self._contents = [
# TODO: "swing %" might be handy here (data's max/min
# # % change)?
self._contents: list[str] = [
'change: {pchng:.2f} %',
'range: {rng:.2f}',
'bars: {nbars}',
@ -297,12 +340,31 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
'sigma: {std:.2f}',
]
self.add_to_view(viewbox)
self.hide()
def add_to_view(
self,
view: ChartView,
) -> None:
'''
Self-defined view hookup impl which will
also re-assign the internal ref.
'''
view.addItem(
self,
ignoreBounds=True,
)
if self.vb is not view:
self.vb = view
@property
def chart(self) -> 'ChartPlotWidget': # noqa
def chart(self) -> ChartPlotWidget: # noqa
return self._chart
@chart.setter
def chart(self, chart: 'ChartPlotWidget') -> None: # noqa
def chart(self, chart: ChartPlotWidget) -> None: # noqa
self._chart = chart
chart.sigRangeChanged.connect(self.update_on_resize)
palette = self._label.palette()
@ -315,57 +377,155 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
)
def update_on_resize(self, vr, r):
"""Re-position measure label on view range change.
'''
Re-position measure label on view range change.
"""
'''
if self._abs_top_right:
self._label_proxy.setPos(
self.vb.mapFromView(self._abs_top_right)
)
def mouse_drag_released(
def set_scen_pos(
self,
p1: QPointF,
p2: QPointF
scen_p1: QPointF,
scen_p2: QPointF,
update_label: bool = True,
) -> None:
"""Called on final button release for mouse drag with start and
end positions.
'''
Set position from scene coords of selection rect (normally
from mouse position) and accompanying label, move label to
match.
"""
self.set_pos(p1, p2)
'''
# NOTE XXX: apparently just setting it doesn't work!?
# i have no idea why but it's pretty weird we have to do
# this transform thing which was basically pulled verbatim
# from the `pg.ViewBox.updateScaleBox()` method.
view_rect: QRectF = self.vb.childGroup.mapRectFromScene(
QRectF(
scen_p1,
scen_p2,
)
)
self.setPos(view_rect.topLeft())
# XXX: does not work..!?!?
# https://doc.qt.io/qt-5/qgraphicsrectitem.html#setRect
# self.setRect(view_rect)
def set_pos(
self,
p1: QPointF,
p2: QPointF
) -> None:
"""Set position of selection rect and accompanying label, move
label to match.
tr = QTransform.fromScale(
view_rect.width(),
view_rect.height(),
)
self.setTransform(tr)
"""
if self._label_proxy is None:
# https://doc.qt.io/qt-5/qgraphicsproxywidget.html
self._label_proxy = self.vb.scene().addWidget(self._label)
start_pos = self.vb.mapToView(p1)
end_pos = self.vb.mapToView(p2)
# map to view coords and update area
r = QtCore.QRectF(start_pos, end_pos)
# old way; don't need right?
# lr = QtCore.QRectF(p1, p2)
# r = self.vb.childGroup.mapRectFromParent(lr)
self.setPos(r.topLeft())
self.resetTransform()
self.setRect(r)
# XXX: never got this working, was always offset
# / transformed completely wrong (and off to the far right
# from the cursor?)
# self.set_view_pos(
# view_rect=view_rect,
# # self.vwqpToView(p1),
# # self.vb.mapToView(p2),
# # start_pos=self.vb.mapToScene(p1),
# # end_pos=self.vb.mapToScene(p2),
# )
self.show()
y1, y2 = start_pos.y(), end_pos.y()
x1, x2 = start_pos.x(), end_pos.x()
if update_label:
self.init_label(view_rect)
# TODO: heh, could probably use a max-min streamin algo here too
def set_view_pos(
self,
start_pos: QPointF | Sequence[float, float] | None = None,
end_pos: QPointF | Sequence[float, float] | None = None,
view_rect: QRectF | None = None,
update_label: bool = True,
) -> None:
'''
Set position from `ViewBox` coords (i.e. from the actual
data domain) of rect (and any accompanying label which is
moved to match).
'''
if self._chart is None:
raise RuntimeError(
'You MUST assign a `SelectRect.chart: ChartPlotWidget`!'
)
if view_rect is None:
# ensure point casting
start_pos: QPointF = as_point(start_pos)
end_pos: QPointF = as_point(end_pos)
# map to view coords and update area
view_rect = QtCore.QRectF(
start_pos,
end_pos,
)
self.setPos(view_rect.topLeft())
# NOTE: SERIOUSLY NO IDEA WHY THIS WORKS...
# but it does and all the other commented stuff above
# dint, dawg..
# self.resetTransform()
# self.setRect(view_rect)
tr = QTransform.fromScale(
view_rect.width(),
view_rect.height(),
)
self.setTransform(tr)
if update_label:
self.init_label(view_rect)
print(
'SelectRect modify:\n'
f'QRectF: {view_rect}\n'
f'start_pos: {start_pos}\n'
f'end_pos: {end_pos}\n'
)
self.show()
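A self-contained sketch of the unit-rect + scale-transform trick used by both setters (corner points are made up; `pg.mkQApp()` just supplies the Qt app instance Qt requires):

import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui, QtWidgets

app = pg.mkQApp()
vb = pg.ViewBox()
rect = QtWidgets.QGraphicsRectItem(0, 0, 1, 1)  # unit rect
vb.addItem(rect, ignoreBounds=True)

# map the (scene-space) drag corners into the view's data coords
view_rect = vb.childGroup.mapRectFromScene(
    QtCore.QRectF(
        QtCore.QPointF(10, 10),
        QtCore.QPointF(50, 30),
    )
)
# position the unit rect at the top-left, then stretch it to
# cover the full area via a scale transform
rect.setPos(view_rect.topLeft())
rect.setTransform(
    QtGui.QTransform.fromScale(
        view_rect.width(),
        view_rect.height(),
    )
)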
def init_label(
self,
view_rect: QRectF,
) -> QLabel:
# should be init-ed in `.__init__()`
label: QLabel = self._label
cv: ChartView = self.vb
# https://doc.qt.io/qt-5/qgraphicsproxywidget.html
if self._label_proxy is None:
scen: QGraphicsScene = cv.scene()
# NOTE: specifically this is passing a widget
# pointer to the scene's `.addWidget()` as per,
# https://doc.qt.io/qt-5/qgraphicsproxywidget.html#embedding-a-widget-with-qgraphicsproxywidget
self._label_proxy: QGraphicsProxyWidget = scen.addWidget(label)
# get label startup coords
tl: QPointF = view_rect.topLeft()
br: QPointF = view_rect.bottomRight()
x1, y1 = tl.x(), tl.y()
x2, y2 = br.x(), br.y()
# TODO: to remove, previous label corner point unpacking
# x1, y1 = start_pos.x(), start_pos.y()
# x2, y2 = end_pos.x(), end_pos.y()
# y1, y2 = start_pos.y(), end_pos.y()
# x1, x2 = start_pos.x(), end_pos.x()
# TODO: heh, could probably use a max-min streamin algo
# here too?
_, xmn = min(y1, y2), min(x1, x2)
ymx, xmx = max(y1, y2), max(x1, x2)
@ -375,26 +535,35 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
ixmn, ixmx = round(xmn), round(xmx)
nbars = ixmx - ixmn + 1
chart = self._chart
data = chart.get_viz(chart.name).shm.array[ixmn:ixmx]
chart: ChartPlotWidget = self._chart
data: np.ndarray = chart.get_viz(
chart.name
).shm.array[ixmn:ixmx]
if len(data):
std = data['close'].std()
dmx = data['high'].max()
dmn = data['low'].min()
std: float = data['close'].std()
dmx: float = data['high'].max()
dmn: float = data['low'].min()
else:
dmn = dmx = std = np.nan
# update label info
self._label.setText('\n'.join(self._contents).format(
pchng=pchng, rng=rng, nbars=nbars,
std=std, dmx=dmx, dmn=dmn,
label.setText('\n'.join(self._contents).format(
pchng=pchng,
rng=rng,
nbars=nbars,
std=std,
dmx=dmx,
dmn=dmn,
))
# print(f'x2, y2: {(x2, y2)}')
# print(f'xmn, ymn: {(xmn, ymx)}')
label_anchor = Point(xmx + 2, ymx)
label_anchor = Point(
xmx + 2,
ymx,
)
# XXX: in the drag bottom-right -> top-left case we don't
# want the label to overlay the box.
@ -403,13 +572,40 @@ class SelectRect(QtWidgets.QGraphicsRectItem):
# # label_anchor = Point(x2, y2 + self._label.height())
# label_anchor = Point(xmn, ymn)
self._abs_top_right = label_anchor
self._label_proxy.setPos(self.vb.mapFromView(label_anchor))
# self._label.show()
self._abs_top_right: Point = label_anchor
self._label_proxy.setPos(
cv.mapFromView(label_anchor)
)
label.show()
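Pulled out of context, the measure-label stats are plain numpy reductions over the selected OHLC slice (sample data assumed):

import numpy as np

dt = np.dtype([('close', 'f8'), ('high', 'f8'), ('low', 'f8')])
data = np.array(
    [(10.0, 11.0, 9.5), (10.5, 12.0, 10.0)],
    dtype=dt,
)
std: float = data['close'].std()
dmx: float = data['high'].max()
dmn: float = data['low'].min()
print(f'sigma: {std:.2f}, hi: {dmx}, lo: {dmn}')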
def clear(self):
"""Clear the selection box from view.
def hide(self):
'''
Clear the selection box from its graphics scene but
don't delete it permanently.
"""
'''
super().hide()
self._label.hide()
self.hide()
# TODO: ensure noone else using dis.
clear = hide
def delete(self) -> None:
'''
De-allocate this rect from its rendering graphics scene.
Like a permanent hide.
'''
scen: QGraphicsScene = self.scene()
if scen is None:
return
scen.removeItem(self)
if (
self._label
and
self._label_proxy
):
scen.removeItem(self._label_proxy)


@ -22,29 +22,33 @@ from contextlib import asynccontextmanager as acm
from typing import Callable
import trio
from tractor.trionics import gather_contexts
from PyQt5 import QtCore
from PyQt5.QtCore import QEvent, pyqtBoundSignal
from PyQt5.QtWidgets import QWidget
from PyQt5.QtWidgets import (
QGraphicsSceneMouseEvent as gs_mouse,
from tractor.trionics import (
gather_contexts,
collapse_eg,
)
from piker.ui.qt import (
QtCore,
QWidget,
QEvent,
keys,
gs_keys,
pyqtBoundSignal,
)
from piker.types import Struct
MOUSE_EVENTS = {
gs_mouse.GraphicsSceneMousePress,
gs_mouse.GraphicsSceneMouseRelease,
QEvent.MouseButtonPress,
QEvent.MouseButtonRelease,
gs_keys.GraphicsSceneMousePress,
gs_keys.GraphicsSceneMouseRelease,
keys.MouseButtonPress,
keys.MouseButtonRelease,
# QtGui.QMouseEvent,
}
# TODO: maybe consider some constrained ints down the road?
# https://pydantic-docs.helpmanual.io/usage/types/#constrained-types
class KeyboardMsg(Struct):
'''Unpacked Qt keyboard event data.
@ -114,7 +118,10 @@ class EventRelay(QtCore.QObject):
# something to do with Qt internals and calling the
# parent handler?
if etype in {QEvent.KeyPress, QEvent.KeyRelease}:
if etype in {
QEvent.Type.KeyPress,
QEvent.Type.KeyRelease,
}:
msg = KeyboardMsg(
event=ev,
@ -160,7 +167,9 @@ class EventRelay(QtCore.QObject):
async def open_event_stream(
source_widget: QWidget,
event_types: set[QEvent] = {QEvent.KeyPress},
event_types: set[QEvent] = {
QEvent.Type.KeyPress,
},
filter_auto_repeats: bool = True,
) -> trio.abc.ReceiveChannel:
@ -201,8 +210,11 @@ async def open_signal_handler(
async for args in recv:
await async_handler(*args)
async with trio.open_nursery() as n:
n.start_soon(proxy_to_handler)
async with (
collapse_eg(),
trio.open_nursery() as tn
):
tn.start_soon(proxy_to_handler)
async with send:
yield
@ -212,18 +224,49 @@ async def open_handlers(
source_widgets: list[QWidget],
event_types: set[QEvent],
async_handler: Callable[[QWidget, trio.abc.ReceiveChannel], None],
**kwargs,
# NOTE: if you want to bind in additional kwargs to the handler
# pass in a `partial()` instead!
async_handler: Callable[
[QWidget, trio.abc.ReceiveChannel], # required handler args
None
],
# XXX: these are ONLY inputs available to the
# `open_event_stream()` event-relay to mem-chan factor above!
**open_ev_stream_kwargs,
) -> None:
'''
Connect and schedule an async handler function to receive an
arbitrary `QWidget`'s events with kb/mouse msgs repacked into
structs (see above) and shuttled over a mem-chan to the input
`async_handler` to allow interaction-IO processing from
a `trio` func-as-task.
'''
widget: QWidget
streams: list[trio.abc.ReceiveChannel]
async with (
trio.open_nursery() as n,
collapse_eg(),
trio.open_nursery() as tn,
gather_contexts([
open_event_stream(widget, event_types, **kwargs)
open_event_stream(
widget,
event_types,
**open_ev_stream_kwargs,
)
for widget in source_widgets
]) as streams,
):
for widget, event_recv_stream in zip(source_widgets, streams):
n.start_soon(async_handler, widget, event_recv_stream)
for widget, event_recv_stream in zip(
source_widgets,
streams,
):
tn.start_soon(
async_handler,
widget,
event_recv_stream,
)
yield
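With the kwargs now reserved for the event-stream factory, per-call handler state has to be pre-bound via `partial`; an illustrative sketch (`my_kb_handler` and the `dss` value are made-up names, not piker APIs):

from functools import partial

async def my_kb_handler(widget, recv_chan, dss) -> None:
    # only (widget, recv_chan) are supplied by `open_handlers()`;
    # `dss` arrives pre-bound below
    async for kbmsg in recv_chan:
        print(kbmsg, dss)

handler = partial(my_kb_handler, dss={})
# ..then pass `handler` as the `async_handler` input above.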


@ -30,34 +30,35 @@ from typing import (
import platform
import traceback
# Qt specific
import PyQt5 # noqa
from PyQt5.QtWidgets import (
QWidget,
QMainWindow,
QApplication,
)
from PyQt5 import QtCore
from PyQt5.QtCore import (
pyqtRemoveInputHook,
Qt,
QCoreApplication,
)
import qdarkstyle
from qdarkstyle import DarkPalette
# import qdarkgraystyle # TODO: play with it
import trio
from outcome import Error
# Qt version-agnostic
from .qt import (
QWidget,
QMainWindow,
QApplication,
QtCore,
pyqtRemoveInputHook,
Qt,
QCoreApplication,
)
from ..service import (
maybe_open_pikerd,
get_tractor_runtime_kwargs,
get_runtime_vars,
)
from ..log import get_logger
from ._pg_overrides import _do_overrides
from . import _style
if TYPE_CHECKING:
from ._chart import GodWidget
log = get_logger(__name__)
# pyqtgraph global config
@ -146,7 +147,7 @@ def run_qtractor(
# load dark theme
stylesheet = qdarkstyle.load_stylesheet(
qt_api='pyqt5',
qt_api='pyqt6',
palette=DarkPalette,
)
app.setStyleSheet(stylesheet)
@ -173,7 +174,9 @@ def run_qtractor(
instance.window = window
# override tractor's defaults
tractor_kwargs.update(get_tractor_runtime_kwargs())
tractor_kwargs.update(
get_runtime_vars()
)
# define tractor entrypoint
async def main():


@ -18,10 +18,11 @@
Feed status and controls widget(s) for embedding in a UI-pane.
"""
from __future__ import annotations
from textwrap import dedent
from typing import TYPE_CHECKING
from typing import (
Any,
TYPE_CHECKING,
)
# from PyQt5.QtCore import Qt
@ -49,35 +50,55 @@ def mk_feed_label(
a feed control protocol.
'''
status = feed.status
status: dict[str, Any] = feed.status
assert status
msg = dedent("""
actor: **{actor_name}**\n
|_ @**{host}:{port}**\n
""")
# SO tips on ws/nls,
# https://stackoverflow.com/a/15721400
ws: str = '&nbsp;'
# nl: str = '<br>' # dun work?
actor_info_repr: str = (
f')> **{status["actor_short_id"]}**\n'
'\n' # bc md?
)
for key, val in status.items():
if key in ('host', 'port', 'actor_name'):
continue
msg += f'\n|_ {key}: **{{{key}}}**\n'
# fields to select *IN* for display
# (see `.data.feed.open_feed()` status
# update -> TAG_feed_status_update)
for key in [
'ipc',
'hist_shm',
'rt_shm',
'throttle_hz',
]:
# NOTE, the 2nd key is filled via `.format()` updates.
actor_info_repr += (
f'\n' # bc md?
f'{ws}|_{key}: **{{{key}}}**\n'
)
# ^TODO? formatting and content..
# -[ ] showing which fqme is "forward" on the
# chart/fsp/order-mode?
# '|_ flows: **{symbols}**\n'
#
# -[x] why isn't the indent working?
# => markdown, now solved..
feed_label = FormatLabel(
fmt_str=msg,
# |_ streams: **{symbols}**\n
fmt_str=actor_info_repr,
font=_font.font,
font_size=_font_small.px_size,
font_color='default_lightest',
)
# ?TODO, remove this?
# form.vbox.setAlignment(feed_label, Qt.AlignBottom)
# form.vbox.setAlignment(Qt.AlignBottom)
_ = chart.height() - (
form.height() +
form.fill_bar.height()
# feed_label.height()
)
# _ = chart.height() - (
# form.height() +
# form.fill_bar.height()
# # feed_label.height()
# )
feed_label.format(**feed.status)
return feed_label
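The `&nbsp;`-indent trick works with any markdown-mode `QLabel`; a minimal sketch (PyQt6 assumed, the actor-id string is invented):

from PyQt6.QtCore import Qt
from PyQt6.QtWidgets import QApplication, QLabel

app = QApplication([])
ws: str = '&nbsp;'  # markdown collapses plain leading spaces
lbl = QLabel()
lbl.setTextFormat(Qt.TextFormat.MarkdownText)
lbl.setText(
    ')> **brokerd.example@localhost:1616**\n'
    '\n'
    f'{ws}|_ipc: **tcp**\n'
)
lbl.show()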


@ -28,9 +28,15 @@ from typing import (
)
import trio
from PyQt5 import QtGui
from PyQt5.QtCore import QSize, QModelIndex, Qt, QEvent
from PyQt5.QtWidgets import (
from piker.ui.qt import (
keys,
size_policy,
QtGui,
QSize,
QModelIndex,
Qt,
QEvent,
QWidget,
QLabel,
QComboBox,
@ -39,7 +45,6 @@ from PyQt5.QtWidgets import (
QVBoxLayout,
QFormLayout,
QProgressBar,
QSizePolicy,
QStyledItemDelegate,
QStyleOptionViewItem,
)
@ -71,14 +76,14 @@ class Edit(QLineEdit):
if width_in_chars:
self._chars = int(width_in_chars)
x_size_policy = QSizePolicy.Fixed
x_size_policy = size_policy.Fixed
else:
# chart count which will be used to calculate
# width of input field.
self._chars: int = 6
# fit to surroundingn frame width
x_size_policy = QSizePolicy.Expanding
x_size_policy = size_policy.Expanding
super().__init__(parent)
@ -86,7 +91,7 @@ class Edit(QLineEdit):
# https://doc.qt.io/qt-5/qsizepolicy.html#Policy-enum
self.setSizePolicy(
x_size_policy,
QSizePolicy.Fixed,
size_policy.Fixed,
)
self.setFont(font.font)
@ -180,11 +185,13 @@ class Selection(QComboBox):
self._items: dict[str, int] = {}
super().__init__(parent=parent)
self.setSizeAdjustPolicy(QComboBox.AdjustToContents)
self.setSizeAdjustPolicy(
QComboBox.SizeAdjustPolicy.AdjustToContents,
)
# make line edit expand to surrounding frame
self.setSizePolicy(
QSizePolicy.Expanding,
QSizePolicy.Fixed,
size_policy.Expanding,
size_policy.Fixed,
)
view = self.view()
view.setUniformItemSizes(True)
@ -308,8 +315,8 @@ class FieldsForm(QWidget):
# size it as we specify
self.setSizePolicy(
QSizePolicy.Expanding,
QSizePolicy.Expanding,
size_policy.Expanding,
size_policy.Expanding,
)
# XXX: not sure why we have to create this here exactly
@ -416,8 +423,8 @@ class FieldsForm(QWidget):
select.set_items(values)
self.setSizePolicy(
QSizePolicy.Fixed,
QSizePolicy.Fixed,
size_policy.Fixed,
size_policy.Fixed,
)
select.show()
self.form.addRow(label, select)
@ -437,7 +444,10 @@ async def handle_field_input(
async for kbmsg in recv_chan:
if kbmsg.etype in {QEvent.KeyPress, QEvent.KeyRelease}:
if kbmsg.etype in {
keys.KeyPress,
keys.KeyRelease,
}:
event, etype, key, mods, txt = kbmsg.to_tuple()
print(f'key: {kbmsg.key}, mods: {kbmsg.mods}, txt: {kbmsg.txt}')
@ -703,7 +713,8 @@ def mk_fill_status_bar(
)
bottom_label = form.add_field_label(
'x: {step_size}',
# 'x: {step_size}',
'{unit_prefix}: {step_size}',
font_size=bar_label_font_size,
font_color='gunmetal',
)


@ -181,7 +181,10 @@ async def open_fsp_sidepane(
async def open_fsp_actor_cluster(
names: list[str] = ['fsp_0', 'fsp_1'],
) -> AsyncGenerator[int, dict[str, tractor.Portal]]:
) -> AsyncGenerator[
int,
dict[str, tractor.Portal]
]:
from tractor._clustering import open_actor_cluster
@ -390,7 +393,7 @@ class FspAdmin:
complete: trio.Event,
started: trio.Event,
fqme: str,
dst_fsp_flume: Flume,
dst_flume: Flume,
conf: dict,
target: Fsp,
loglevel: str,
@ -408,16 +411,14 @@ class FspAdmin:
# chaining entrypoint
cascade,
# TODO: can't we just drop this and expect
# far end to read the src flume's .mkt.fqme?
# data feed key
fqme=fqme,
# TODO: pass `Flume.to_msg()`s here?
# mems
src_shm_token=self.flume.rt_shm.token,
dst_shm_token=dst_fsp_flume.rt_shm.token,
# target
ns_path=ns_path,
src_flume_addr=self.flume.to_msg(),
dst_flume_addr=dst_flume.to_msg(),
ns_path=ns_path, # edge-bind-func
loglevel=loglevel,
zero_on_step=conf.get('zero_on_step', False),
@ -431,14 +432,14 @@ class FspAdmin:
ctx.open_stream() as stream,
):
dst_fsp_flume.stream: tractor.MsgStream = stream
dst_flume.stream: tractor.MsgStream = stream
# register output data
self._registry[
(fqme, ns_path)
] = (
stream,
dst_fsp_flume.rt_shm,
dst_flume.rt_shm,
complete
)
@ -515,7 +516,7 @@ class FspAdmin:
broker='piker',
_atype='fsp',
)
dst_fsp_flume = Flume(
dst_flume = Flume(
mkt=mkt,
_rt_shm_token=dst_shm.token,
first_quote={},
@ -543,13 +544,13 @@ class FspAdmin:
complete,
started,
fqme,
dst_fsp_flume,
dst_flume,
conf,
target,
loglevel,
)
return dst_fsp_flume, started
return dst_flume, started
async def open_fsp_chart(
self,
@ -559,7 +560,7 @@ class FspAdmin:
conf: dict, # yeah probably dumb..
loglevel: str = 'error',
) -> (trio.Event, ChartPlotWidget):
) -> trio.Event:
flume, started = await self.start_engine_task(
target,
@ -599,6 +600,7 @@ async def open_fsp_admin(
kwargs=kwargs,
) as (cache_hit, cluster_map),
tractor.trionics.collapse_eg(),
trio.open_nursery() as tn,
):
if cache_hit:
@ -612,6 +614,8 @@ async def open_fsp_admin(
)
try:
yield admin
# ??TODO, does this *need* to be inside a finally?
finally:
# terminate all tasks via signals
for key, entry in admin._registry.items():
@ -926,7 +930,7 @@ async def start_fsp_displays(
linked: LinkedSplits,
flume: Flume,
group_status_key: str,
# group_status_key: str,
loglevel: str,
) -> None:
@ -973,21 +977,23 @@ async def start_fsp_displays(
flume,
) as admin,
):
statuses = []
statuses: list[trio.Event] = []
for target, conf in fsp_conf.items():
started = await admin.open_fsp_chart(
started: trio.Event = await admin.open_fsp_chart(
target,
conf,
)
done = linked.window().status_bar.open_status(
f'loading fsp, {target}..',
group_key=group_status_key,
)
statuses.append((started, done))
# done = linked.window().status_bar.open_status(
# f'loading fsp, {target}..',
# group_key=group_status_key,
# )
# statuses.append((started, done))
statuses.append(started)
for fsp_loaded, status_cb in statuses:
# for fsp_loaded, status_cb in statuses:
for fsp_loaded in statuses:
await fsp_loaded.wait()
profiler(f'attached to fsp portal: {target}')
status_cb()
# status_cb()
# blocks on nursery until all fsp actors complete


@ -15,15 +15,18 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
``QIcon`` hackery.
`QIcon` hackery.
Mostly dynamically loading pixmaps for use with `QGraphicsScene`.
'''
from PyQt5.QtWidgets import QStyle
from PyQt5.QtGui import (
QIcon, QPixmap, QColor
from piker.ui.qt import (
QSize,
QStyle,
QIcon,
QPixmap,
QColor,
)
from PyQt5.QtCore import QSize
from ._style import hcolor
# https://www.pythonguis.com/faq/built-in-qicons-pyqt/
@ -44,7 +47,8 @@ def mk_icons(
size: QSize,
) -> dict[str, QIcon]:
'''This helper is indempotent.
'''
This helper is idempotent.
'''
global _icons, _icon_names
@ -56,7 +60,11 @@ def mk_icons(
# load account selection using current style
for name, icon_name in _icon_names.items():
stdpixmap = getattr(QStyle, icon_name)
stdpixmap = getattr(
# https://www.pythonguis.com/faq/built-in-qicons-pyqt/
QStyle.StandardPixmap, # pyqt/pyside6
icon_name,
)
stdicon = style.standardIcon(stdpixmap)
pixmap = stdicon.pixmap(size)
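The name-based `StandardPixmap` lookup generalizes like so (standalone sketch, PyQt6 assumed; the icon name is just an example):

from PyQt6.QtCore import QSize
from PyQt6.QtWidgets import QApplication, QStyle

app = QApplication([])
style = app.style()
# scoped-enum member fetched by string name, as in `mk_icons()`
stdpixmap = getattr(QStyle.StandardPixmap, 'SP_DialogCancelButton')
icon = style.standardIcon(stdpixmap)
pixmap = icon.pixmap(QSize(16, 16))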


@ -23,6 +23,7 @@ from contextlib import (
asynccontextmanager,
ExitStack,
)
from functools import partial
import time
from typing import (
Callable,
@ -30,24 +31,26 @@ from typing import (
)
import pyqtgraph as pg
# from pyqtgraph.GraphicsScene import mouseEvents
from PyQt5.QtWidgets import QGraphicsSceneMouseEvent as gs_mouse
from PyQt5.QtGui import (
QWheelEvent,
)
from PyQt5.QtCore import (
Qt,
QEvent,
)
# NOTE XXX: pg is super annoying and re-implements its own mouse
# event subsystem.. we should really look into re-working/writing
# this down the road.. Bo
from pyqtgraph.GraphicsScene import mouseEvents as mevs
# from pyqtgraph.GraphicsScene.mouseEvents import MouseDragEvent
from pyqtgraph import (
ViewBox,
Point,
QtCore,
functions as fn,
)
from pyqtgraph import functions as fn
import numpy as np
import trio
from piker.ui.qt import (
QWheelEvent,
QGraphicsSceneMouseEvent as gs_mouse,
Qt,
QEvent,
)
from ..log import get_logger
from ..toolz import (
Profiler,
@ -70,27 +73,28 @@ if TYPE_CHECKING:
)
from ._dataviz import Viz
from .order_mode import OrderMode
from ._display import DisplayState
log = get_logger(__name__)
NUMBER_LINE = {
Qt.Key_1,
Qt.Key_2,
Qt.Key_3,
Qt.Key_4,
Qt.Key_5,
Qt.Key_6,
Qt.Key_7,
Qt.Key_8,
Qt.Key_9,
Qt.Key_0,
Qt.Key.Key_1,
Qt.Key.Key_2,
Qt.Key.Key_3,
Qt.Key.Key_4,
Qt.Key.Key_5,
Qt.Key.Key_6,
Qt.Key.Key_7,
Qt.Key.Key_8,
Qt.Key.Key_9,
Qt.Key.Key_0,
}
ORDER_MODE = {
Qt.Key_A,
Qt.Key_F,
Qt.Key_D,
Qt.Key.Key_A,
Qt.Key.Key_F,
Qt.Key.Key_D,
}
@ -98,6 +102,7 @@ async def handle_viewmode_kb_inputs(
view: ChartView,
recv_chan: trio.abc.ReceiveChannel,
dss: dict[str, DisplayState],
) -> None:
@ -173,17 +178,42 @@ async def handle_viewmode_kb_inputs(
Qt.Key_P,
}
):
import tractor
feed = order_mode.feed # noqa
chart = order_mode.chart # noqa
viz = chart.main_viz # noqa
vlm_chart = chart.linked.subplots['volume'] # noqa
vlm_viz = vlm_chart.main_viz # noqa
dvlm_pi = vlm_chart._vizs['dolla_vlm'].plot # noqa
import tractor
await tractor.pause()
view.interact_graphics_cycle()
# SEARCH MODE #
# FORCE graphics reset-and-render of all currently
# shown data `Viz`s for the current chart app.
if (
ctrl
and key in {
Qt.Key_R,
}
):
fqme: str
ds: DisplayState
for fqme, ds in dss.items():
viz: Viz
for tf, viz in {
60: ds.hist_viz,
1: ds.viz,
}.items():
# TODO: only allow this when the data is IN VIEW!
# also, we probably can do this more efficiently
# / smarter by only redrawing the portion of the
# path necessary?
viz.reset_graphics()
# ------ - ------
# SEARCH MODE
# ------ - ------
# ctlr-<space>/<l> for "lookup", "search" -> open search tree
if (
ctrl
@ -243,8 +273,10 @@ async def handle_viewmode_kb_inputs(
delta=-view.def_delta,
)
elif key == Qt.Key_R:
elif (
not ctrl
and key == Qt.Key_R
):
# NOTE: seems that if we don't yield a Qt render
# cycle then the m4 downsampled curves will show here
# without another reset..
@ -427,6 +459,7 @@ async def handle_viewmode_mouse(
view: ChartView,
recv_chan: trio.abc.ReceiveChannel,
dss: dict[str, DisplayState],
) -> None:
@ -466,6 +499,7 @@ class ChartView(ViewBox):
mode_name: str = 'view'
def_delta: float = 616 * 6
def_scale_factor: float = 1.016 ** (def_delta * -1 / 20)
# annots: dict[int, GraphicsObject] = {}
def __init__(
self,
@ -486,6 +520,7 @@ class ChartView(ViewBox):
# defaultPadding=0.,
**kwargs
)
# for "known y-range style"
self._static_yrange = static_yrange
@ -500,7 +535,11 @@ class ChartView(ViewBox):
# add our selection box annotator
self.select_box = SelectRect(self)
self.addItem(self.select_box, ignoreBounds=True)
# self.select_box.add_to_view(self)
# self.addItem(
# self.select_box,
# ignoreBounds=True,
# )
self.mode = None
self.order_mode: bool = False
@ -557,6 +596,7 @@ class ChartView(ViewBox):
@asynccontextmanager
async def open_async_input_handler(
self,
**handler_kwargs,
) -> ChartView:
@ -567,14 +607,20 @@ class ChartView(ViewBox):
QEvent.KeyPress,
QEvent.KeyRelease,
},
async_handler=handle_viewmode_kb_inputs,
async_handler=partial(
handle_viewmode_kb_inputs,
**handler_kwargs,
),
),
_event.open_handlers(
[self],
event_types={
gs_mouse.GraphicsSceneMousePress,
},
async_handler=handle_viewmode_mouse,
async_handler=partial(
handle_viewmode_mouse,
**handler_kwargs,
),
),
):
yield self
@ -711,17 +757,18 @@ class ChartView(ViewBox):
def mouseDragEvent(
self,
ev,
ev: mevs.MouseDragEvent,
axis: int | None = None,
) -> None:
pos = ev.pos()
lastPos = ev.lastPos()
dif = pos - lastPos
dif = dif * -1
pos: Point = ev.pos()
lastPos: Point = ev.lastPos()
dif: Point = (pos - lastPos) * -1
# dif: Point = pos - lastPos
# dif: Point = dif * -1
# NOTE: if axis is specified, event will only affect that axis.
button = ev.button()
btn = ev.button()
# Ignore axes if mouse is disabled
mouseEnabled = np.array(
@ -733,7 +780,7 @@ class ChartView(ViewBox):
mask[1-axis] = 0.0
# Scale or translate based on mouse button
if button & (
if btn & (
QtCore.Qt.LeftButton | QtCore.Qt.MidButton
):
# zoom y-axis ONLY when click-n-drag on it
@ -756,34 +803,55 @@ class ChartView(ViewBox):
# XXX: WHY
ev.accept()
down_pos = ev.buttonDownPos()
down_pos: Point = ev.buttonDownPos(
btn=btn,
)
scen_pos: Point = ev.scenePos()
scen_down_pos: Point = ev.buttonDownScenePos(
btn=btn,
)
# This is the final position in the drag
if ev.isFinish():
self.select_box.mouse_drag_released(down_pos, pos)
# import pdbp; pdbp.set_trace()
ax = QtCore.QRectF(down_pos, pos)
ax = self.childGroup.mapRectFromParent(ax)
# NOTE: think of this as a `.mouse_drag_release()`
# (bc HINT that's what i called the shit ass
# method that wrapped this call [yes, as a single
# fucking call] originally.. you bish, guille)
# Bo.. oraleeee
self.select_box.set_scen_pos(
# down_pos,
# pos,
scen_down_pos,
scen_pos,
)
# this is the zoom transform cmd
self.showAxRect(ax)
ax = QtCore.QRectF(down_pos, pos)
ax = self.childGroup.mapRectFromParent(ax)
# self.showAxRect(ax)
# axis history tracking
self.axHistoryPointer += 1
self.axHistory = self.axHistory[
:self.axHistoryPointer] + [ax]
else:
print('drag finish?')
self.select_box.set_pos(down_pos, pos)
self.select_box.set_scen_pos(
# down_pos,
# pos,
scen_down_pos,
scen_pos,
)
# update shape of scale box
# self.updateScaleBox(ev.buttonDownPos(), ev.pos())
self.updateScaleBox(
down_pos,
ev.pos(),
)
# breakpoint()
# self.updateScaleBox(
# down_pos,
# ev.pos(),
# )
# PANNING MODE
else:
@ -822,7 +890,7 @@ class ChartView(ViewBox):
# ev.accept()
# WEIRD "RIGHT-CLICK CENTER ZOOM" MODE
elif button & QtCore.Qt.RightButton:
elif btn & QtCore.Qt.RightButton:
if self.state['aspectLocked'] is not False:
mask[0] = 0

View File

@ -21,9 +21,12 @@ Double auction top-of-book (L1) graphics.
from typing import Tuple
import pyqtgraph as pg
from PyQt5 import QtCore, QtGui
from PyQt5.QtCore import QPointF
from piker.ui.qt import (
QPointF,
QtCore,
QtGui,
)
from ._axes import YAxisLabel
from ._style import hcolor
from ._pg_overrides import PlotItem

View File

@ -25,10 +25,17 @@ from typing import (
)
import pyqtgraph as pg
from PyQt5 import QtGui, QtWidgets
from PyQt5.QtWidgets import QLabel, QSizePolicy
from PyQt5.QtCore import QPointF, QRectF, Qt
from piker.ui.qt import (
px_cache_mode,
QtGui,
QtWidgets,
QLabel,
size_policy,
QPointF,
QRectF,
Qt,
)
from ._style import (
DpiAwareFont,
hcolor,
@ -78,7 +85,7 @@ class Label:
self._x_offset = x_offset
txt = self.txt = QtWidgets.QGraphicsTextItem(parent=parent)
txt.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
txt.setCacheMode(px_cache_mode.DeviceCoordinateCache)
vb.scene().addItem(txt)
@ -103,7 +110,7 @@ class Label:
self._anchor_func = self.txt.pos().x
# not sure if this makes a diff
self.txt.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self.txt.setCacheMode(px_cache_mode.DeviceCoordinateCache)
# TODO: edit and selection support
# https://doc.qt.io/qt-5/qt.html#TextInteractionFlag-enum
@ -278,18 +285,20 @@ class FormatLabel(QLabel):
font_size: int,
font_color: str,
use_md: bool = True,
parent=None,
) -> None:
super().__init__(parent)
# by default set the format string verbatim and expect user to
# call ``.format()`` later (presumably they'll notice the
# by default set the format string verbatim and expect user
# to call ``.format()`` later (presumably they'll notice the
# unformatted content if ``fmt_str`` isn't meant to be
# unformatted).
self.fmt_str = fmt_str
self.setText(fmt_str)
# self.setText(fmt_str) # ?TODO, why here?
self.setStyleSheet(
f"""QLabel {{
@ -299,15 +308,21 @@ class FormatLabel(QLabel):
"""
)
self.setFont(_font.font)
self.setTextFormat(Qt.MarkdownText) # markdown
if use_md:
self.setTextFormat(
Qt.TextFormat.MarkdownText
)
self.setMargin(0)
self.setSizePolicy(
QSizePolicy.Expanding,
QSizePolicy.Expanding,
size_policy.Expanding,
size_policy.Expanding,
)
self.setAlignment(
Qt.AlignVCenter | Qt.AlignLeft
Qt.AlignLeft
|
Qt.AlignBottom
# Qt.AlignVCenter
)
self.setText(self.fmt_str)


@ -27,10 +27,22 @@ from typing import (
)
import pyqtgraph as pg
from pyqtgraph import Point, functions as fn
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF
from pyqtgraph import (
Point,
functions as fn,
)
from piker.ui.qt import (
px_cache_mode,
QtCore,
QtGui,
QGraphicsPathItem,
QStyleOptionGraphicsItem,
QGraphicsItem,
QGraphicsScene,
QWidget,
QPointF,
)
from ._annotate import LevelMarker
from ._anchors import (
vbr_left,
@ -130,7 +142,9 @@ class LevelLine(pg.InfiniteLine):
self._right_end_sc: float = 0
# use px caching
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self.setCacheMode(
px_cache_mode.DeviceCoordinateCache
)
def txt_offsets(self) -> tuple[int, int]:
return 0, 0
@ -201,7 +215,7 @@ class LevelLine(pg.InfiniteLine):
) -> None:
if not called_from_on_pos_change:
last = self.value()
last: float = self.value()
# if the position hasn't changed then ``.update_labels()``
# will not be called by a non-triggered `.on_pos_change()`,
@ -308,7 +322,7 @@ class LevelLine(pg.InfiniteLine):
Remove this line from containing chart/view/scene.
'''
scene = self.scene()
scene: QGraphicsScene = self.scene()
if scene:
for label in self._labels:
label.delete()
@ -339,8 +353,8 @@ class LevelLine(pg.InfiniteLine):
self,
p: QtGui.QPainter,
opt: QtWidgets.QStyleOptionGraphicsItem,
w: QtWidgets.QWidget
opt: QStyleOptionGraphicsItem,
w: QWidget
) -> None:
'''
@ -417,9 +431,9 @@ class LevelLine(pg.InfiniteLine):
def add_marker(
self,
path: QtWidgets.QGraphicsPathItem,
path: QGraphicsPathItem,
) -> QtWidgets.QGraphicsPathItem:
) -> QGraphicsPathItem:
self._marker = path
self._marker.setPen(self.currentPen)


@ -20,16 +20,14 @@ Super fast OHLC sampling graphics types.
from __future__ import annotations
import numpy as np
from PyQt5 import (
from piker.ui.qt import (
QtGui,
QtWidgets,
)
from PyQt5.QtCore import (
QPainterPath,
QLineF,
QRectF,
)
from PyQt5.QtGui import QPainterPath
from ._curve import FlowGraphic
from ..toolz import (
Profiler,


@ -24,8 +24,6 @@ view transforms.
"""
import pyqtgraph as pg
from ._axes import Axis
def invertQTransform(tr):
"""Return a QTransform that is the inverse of *tr*.
@ -53,6 +51,9 @@ def _do_overrides() -> None:
pg.functions.invertQTransform = invertQTransform
pg.PlotItem = PlotItem
from ._axes import Axis
pg.Axis = Axis
# enable "QPainterPathPrivate for faster arrayToQPath" from
# https://github.com/pyqtgraph/pyqtgraph/pull/2324
pg.setConfigOption('enableExperimental', True)
@ -234,7 +235,7 @@ class PlotItem(pg.PlotItem):
# ``ViewBox`` geometry bug.. where a gap for the
# 'bottom' axis is somehow left in?
# axis = pg.AxisItem(orientation=name, parent=self)
axis = Axis(
axis = pg.Axis(
self,
orientation=name,
parent=self,


@ -344,7 +344,10 @@ class SettingsPane:
dsize = tracker.live_pp.dsize
# READ out settings and update the status UI / settings widgets
suffix = {'currency': ' $', 'units': ' u'}[alloc.size_unit]
unit_char: str = {
'currency': '$',
'units': 'u',
}[alloc.size_unit]
size_unit, limit = alloc.limit_info()
step_size, currency_per_slot = alloc.step_sizes()
@ -358,10 +361,11 @@ class SettingsPane:
self.apply_setting('limit', limit)
self.step_label.format(
step_size=str(humanize(step_size)) + suffix
unit_prefix=unit_char,
step_size=str(humanize(step_size))
)
self.limit_label.format(
limit=str(humanize(limit)) + suffix
limit=f'{unit_char}: {str(humanize(limit))}'
)
# update size unit in UI
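A toy re-run of the relabelled fill-bar fields (`humanize` here is a stand-in for piker's real helper):

def humanize(n: float) -> str:
    return f'{n/1000:.1f}k' if n >= 1000 else str(n)

unit_char: str = {'currency': '$', 'units': 'u'}['currency']
fmt: str = '{unit_prefix}: {step_size}'  # the new fill-bar fmt
print(fmt.format(
    unit_prefix=unit_char,
    step_size=humanize(1500),
))  # -> '$: 1.5k'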

Some files were not shown because too many files have changed in this diff.