Compare commits

...

1010 Commits

Author SHA1 Message Date
Tyler Goodlet daa6a5c80a `ib`: restore and (maybe) use `xdotool` + `i3ipc` reset method
Since apparently the container we were using is totally borked on new
kernels and/or latest jvm, this moves our old manual local-X-desktop script
back for use in `brokerd` backend code.

Adds a new `.brokers.ib._util` which contains the 2 methods; we fail
over to this local method when we can't connect to a VNC server. Also adjusts the
original in `scripts/ib_data_reset.py` to import and run the module code
as a script-program.
2023-03-03 17:37:26 -05:00
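A rough sketch of the failover flow described above; the VNC probe, window-class pattern and reset hotkey here are all assumptions, not the actual values in `.brokers.ib._util`:

```python
# sketch: prefer the VNC-based reset, fall back to local-X via i3ipc + xdotool
import socket
import subprocess

import i3ipc

DATA_RESET_KEY: str = 'ctrl+alt+f'  # hypothetical TWS reset hotkey


def vnc_up(host: str = 'localhost', port: int = 5900) -> bool:
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False


def i3ipc_xdotool_reset() -> None:
    # walk the i3 window tree, focus any TWS window and key it
    conn = i3ipc.Connection()
    for win in conn.get_tree().find_classed('.*tws.*'):  # assumed class
        win.command('focus')
        subprocess.call(
            ['xdotool', 'key', '--window', str(win.window), DATA_RESET_KEY],
        )


def reset_data_feed() -> None:
    if vnc_up():
        ...  # normal VNC-click reset path
    else:
        i3ipc_xdotool_reset()
```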
goodboy 201f86e482
Merge pull request #470 from pikers/decimalization_take_2
Fixed float dust bug on zero position
2023-03-03 17:34:36 -05:00
Guillermo Rodriguez d4ac8972ac
Merge pull request #477 from pikers/backward_compat_trans_with_symbolinfo
Backward compat support for `Transaction.sym: Symbol`
2023-03-02 23:19:55 -03:00
Tyler Goodlet b4a1cc8f22 `kraken`: parse and load info `Transaction.sym: Symbol`
Also includes a retyping of `Client._pair: dict[str, Pair]` to look up
pair structs, mapping every alt-key-name-set to its pair for easy
precision-info lookup when setting the `.sym` field on each transaction,
including on-chain transfers for which kraken provides an "asset
decimals" field, presumably pulled from the particular block-token's
limitation info.
2023-03-02 19:25:43 -05:00
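A minimal sketch of that alt-key-name mapping idea, assuming the usual kraken `AssetPairs` response fields (`altname`, `wsname`, `pair_decimals`, `lot_decimals`); the real `Pair` struct surely carries more:

```python
# sketch: index one `Pair` struct under every known alias so precision
# info can be looked up by any key-name variant.
from dataclasses import dataclass


@dataclass
class Pair:
    altname: str
    wsname: str
    pair_decimals: int
    lot_decimals: int


def build_pair_table(resp: dict[str, dict]) -> dict[str, Pair]:
    table: dict[str, Pair] = {}
    for key, info in resp.items():
        pair = Pair(
            altname=info['altname'],
            wsname=info['wsname'],
            pair_decimals=info['pair_decimals'],
            lot_decimals=info['lot_decimals'],
        )
        for alias in (key, pair.altname, pair.wsname):
            table[alias] = pair  # same struct under every alias
    return table
```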
Tyler Goodlet 69b85aa7e5 `ib`: parse and load info for new `Transaction.sym: Symbol` field 2023-03-02 19:23:47 -05:00
Tyler Goodlet 3a4794e9d1 Backward-compat: don't require `'lot_tick_size'`
In order to support existing `pps.toml` files in the wild which don't
have the `asset_type, price_tick_size, lot_tick_size` fields, we need to
read them only optionally and instead expect that backends will write
the fields going forward (coming in follow-up patches).

Further this makes some small asset-size (vlm accounting)
quantization-related adjustments:
- rename `Symbol.decimal_quant()` -> `.quantize_size()` since that is
  explicitly what this method is doing.
- and expect an input `size: float` which we cast to decimal instead of
  doing it inside the `.calc_size()` caller code.
- drop `Symbol.iterfqsns()` which wasn't being used anywhere at all..

Additionally, this drafts out a new replacement market-trading-pair data
type to eventually replace `.data._source.Symbol` -> `MktPair` which we
aren't using yet, but serves as the documentation-driven motivator ;)
and, it relates to https://github.com/pikers/piker/issues/467.
2023-03-02 19:22:19 -05:00
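A sketch of the two ideas above, the optional field reads for old `pps.toml` files plus the renamed `.quantize_size()`, where the default tick value is an assumption:

```python
# sketch: Decimal-quantize a float size to the symbol's lot tick
from decimal import Decimal


class Symbol:
    def __init__(self, lot_tick_size: str = '0.01'):
        self.lot_tick_size = Decimal(lot_tick_size)

    def quantize_size(self, size: float) -> Decimal:
        # cast here so callers (eg. `.calc_size()`) can pass plain floats
        return Decimal(str(size)).quantize(self.lot_tick_size)


def symbol_from_pps_entry(entry: dict) -> Symbol:
    # optionally read fields which old `pps.toml` files may lack
    return Symbol(lot_tick_size=entry.get('lot_tick_size', '0.01'))
```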
Guillermo Rodriguez 6be96a96aa
Drop symbol section on Position serialization 2023-03-01 21:06:52 -03:00
Guillermo Rodriguez d704b153ca
Fix major bug found by fomo: sym info was getting stored incorrectly in pps.toml, causing the pp to load wrong on second open; also fix header leak bug 2023-03-01 21:06:52 -03:00
Guillermo Rodriguez 20d91f5e06
Good catch by j, unnecessary kwarg on open_pps 2023-03-01 21:06:52 -03:00
Guillermo Rodriguez 6c23c79f2a
Minor fixes after fomo's review 2023-03-01 21:06:52 -03:00
Guillermo Rodriguez f5b8b9a14f
Add sym registry to PaperBoi as well as a sym ref on Transaction
Add decimal quantize API to Symbol to simplify by-broker truncation
Add symbol info to `pps.toml`
Move _assert call to outside the _async_main context manager
Minor indentation and styling changes, also convert a few prints to log calls
Fix multi write / race condition on open_pps call
Switch open_pps to not write by default
Fix integer math in kraken syminfo _tick_size initialization
2023-03-01 21:06:48 -03:00
Guillermo Rodriguez dc78994dcf
Fixed float dust bug on zero position case 2023-03-01 21:05:37 -03:00
goodboy 269a04ba1a
Merge pull request #475 from pikers/explicit_write_pps_on_exit
Explicitly write `pps.toml` on exit for `ib` and `kraken`
2023-03-01 17:47:57 -05:00
Tyler Goodlet 569df45d18 `kraken`: drop trade history query limit 2023-03-01 17:40:36 -05:00
Tyler Goodlet f53f4df583 `ib/kraken`: adjust to new default of not-writing in `open_pps()` 2023-03-01 17:40:33 -05:00
jaredgoldman d04fe366ab
Merge pull request #462 from pikers/paper_trade_improvements_rebase
Paper trade improvements
2023-02-28 14:30:20 -05:00
jaredgoldman c83fe5aaa7 Fix typo in test docstring 2023-02-28 14:22:24 -05:00
jaredgoldman 41f81eb701 Make write on exit default false 2023-02-28 14:14:05 -05:00
jaredgoldman 05fdc9dd60 Add xfail 2023-02-28 13:55:12 -05:00
jaredgoldman 1323981cc4 Format lines in conftest
Add extra line in conftest
2023-02-28 13:52:12 -05:00
jaredgoldman 882032e3a3 Change skip to xfail 2023-02-28 13:52:03 -05:00
jaredgoldman a6257ae615 Add docstrings to test cases,
format function calls
2023-02-28 13:52:03 -05:00
jaredgoldman 973c068e96 Assert conditions like a nerd 2023-02-28 13:52:03 -05:00
jaredgoldman d7317c3710 Shorten assertion docstring 2023-02-28 13:52:03 -05:00
jaredgoldman 87eb9c5772 Format assertion conditions 2023-02-28 13:51:47 -05:00
jaredgoldman ecb22dda1a Remove whitespace, remove stale comments 2023-02-28 13:51:47 -05:00
jaredgoldman 6f15d47012 Add space in docstrings,
remove duplicate import
2023-02-28 13:51:47 -05:00
jaredgoldman 802af306ac Add specific location of _testing dir in delete_testing_dir fixture 2023-02-28 13:51:47 -05:00
jaredgoldman e4e368923d Add specific kwarg key to open_pps call when starting paperboi 2023-02-28 13:51:47 -05:00
jaredgoldman 342aec648b Skip zero test and use Path when creating a config folder in marketstore 2023-02-28 13:51:47 -05:00
jaredgoldman 55253c8469 Remove whitespace and correct typo 2023-02-28 13:51:47 -05:00
jaredgoldman 4b72d3ba99 Add backpressure setting back as it wasn't altering test behaviour 2023-02-28 13:51:47 -05:00
jaredgoldman 61296bbdfc Minor formatting, removing whitespace 2023-02-28 13:51:47 -05:00
jaredgoldman 36f466fff8 Ensure tests are running and working up until asserting pps 2023-02-28 13:51:47 -05:00
Guillermo Rodriguez 26146097eb
Merge pull request #469 from pikers/emsd_loglevel_fix
Fix `loglevel` not getting propagated to `emsd`
2023-02-26 00:49:43 -03:00
jaredgoldman fcd8b8eb78 Remove breaking call to load pps from ledger 2023-02-25 18:59:40 -05:00
jaredgoldman 3e83764b5b Remove whitespace, unneeded comments 2023-02-25 18:59:40 -05:00
jaredgoldman 3a6fbabaf8 Minor formatting 2023-02-25 18:59:40 -05:00
jaredgoldman 85ad23a1e9 Remove unneeded assert_precision arg 2023-02-25 18:59:40 -05:00
jaredgoldman 15525c2b46 Add functionality and tests for executing multiple orders 2023-02-25 18:59:40 -05:00
jaredgoldman 76736a5441 Refactor to avoid global state while testing 2023-02-25 18:59:40 -05:00
jaredgoldman 4c2e776e01 Ensure to cleanup by passing fixture in paper_test signature 2023-02-25 18:59:40 -05:00
jaredgoldman 1e748f11ef Ensure config path is being updated with _testing correctly during testing 2023-02-25 18:59:40 -05:00
jaredgoldman 3fcad16298 Ensure not to write to pps when asserting? 2023-02-25 18:59:40 -05:00
jaredgoldman 2d25d1f048 Push failing assert no pps test 2023-02-25 18:59:40 -05:00
jaredgoldman e54d928405 Reformat fake fill in paper engine,
Ensure tests pass, refactor test wrapper
2023-02-25 18:59:40 -05:00
jaredgoldman c99381216d Ensure actual pp is sent to ems
ensure not to write pp header on startup

Comment out pytest settings
Add comments explaining delete_testing_dir fixture
use nonlocal instead of global for test state

Add unpacking get_fqsn method
Format test_paper
Add comments explaining sync/async book.send calls
2023-02-25 18:59:40 -05:00
algorandpa db2e2ed78f Use constants value for test config dir path 2023-02-25 18:59:39 -05:00
algorandpa 3bc54e308f Use Path.mkdir instead of os.mkdir 2023-02-25 18:59:39 -05:00
algorandpa 8c9c165e0a Remove broken import 2023-02-25 18:59:39 -05:00
algorandpa 7bd8019876 Add back cleanup fixture 2023-02-25 18:59:39 -05:00
algorandpa 8122e6c86f Disable cleanup to see if CI passes 2023-02-25 18:59:39 -05:00
algorandpa 7e87dc52eb Scope fixture to session 2023-02-25 18:59:39 -05:00
algorandpa 2c366d7349 Fix type 2023-02-25 18:59:39 -05:00
algorandpa 9acbfacd4c only clean up if _testing file exists 2023-02-25 18:59:39 -05:00
algorandpa 316ead577d Remove scoping 2023-02-25 18:59:39 -05:00
algorandpa 4b6d3fe138 Scope cleanup fixture to module 2023-02-25 18:59:39 -05:00
algorandpa 0dec2b9c89 Enable backpressure during data-feed layer startup to avoid overruns 2023-02-25 18:59:39 -05:00
algorandpa acc86ae6db more formatting 2023-02-25 18:59:39 -05:00
algorandpa 730906a072 Minor formatting 2023-02-25 18:59:39 -05:00
algorandpa e5cefeb44b Format to prep for PR 2023-02-25 18:59:39 -05:00
algorandpa 7142a6a7ca Add hacky cleanup solution for _testing data 2023-02-25 18:59:39 -05:00
algorandpa dff8abd6ad Minor reformatting 2023-02-25 18:59:39 -05:00
algorandpa b180602a3e Make config grab _testing dir in pytest env,
- Remove print statements
2023-02-25 18:59:39 -05:00
algorandpa 95b9dacb7a Break test into steps 2023-02-25 18:59:39 -05:00
algorandpa df868cec35 Assert that trades persist in ems after teardown and startup 2023-02-25 18:59:39 -05:00
algorandpa 68a196218b force change branch name 2023-02-25 18:59:39 -05:00
algorandpa 84cd1e0059 initial commit on copy 2023-02-25 18:59:39 -05:00
algorandpa 86b4386522 minor changes, prepare for rebase of overlays branch 2023-02-25 18:59:39 -05:00
algorandpa 5bb93ccc5f change id to 'piker-paper' 2023-02-25 18:59:39 -05:00
algorandpa 3028a8b1f8 restore spacing 2023-02-25 18:59:39 -05:00
algorandpa 6126c4f438 restore spacing 2023-02-25 18:59:39 -05:00
algorandpa 41bb0445e0 remove unnecessary return 2023-02-25 18:59:39 -05:00
algorandpa 97627a4976 remove more logs 2023-02-25 18:59:39 -05:00
algorandpa 1b2fce430f remove logs, unused args 2023-02-25 18:59:39 -05:00
algorandpa 8cd2354d73 ensure that paper pps are pulled on open 2023-02-25 18:59:39 -05:00
algorandpa 9c28d7086e Add Generator as return type of open_trade_ledger 2023-02-25 18:59:39 -05:00
algorandpa a4bd51a01b change open_trade_ledger typing to return a Generator type 2023-02-25 18:59:39 -05:00
algorandpa b67d020e23 add basic func to load paper_trades file 2023-02-25 18:59:39 -05:00
Guillermo Rodriguez 85a1b858b4
Fix logging on emsd 2023-02-25 20:56:25 -03:00
Guillermo Rodriguez 47bf45f30e
Merge pull request #464 from pikers/elasticsearch_integration
Elasticsearch integration
2023-02-24 16:38:37 -03:00
Esmeralda Gallardo b96e2c314a
Minor style changes and removed unnecessary comments 2023-02-24 15:11:15 -03:00
Esmeralda Gallardo f96d6a04b6
Fixed UnboundLocalError on _ahab. Added test for marketstore's initialization 2023-02-22 13:28:07 -03:00
Guillermo Rodriguez acc6249d88
Remove unnecessary arguments to some pikerd functions, fix container init error
by switching from log reading to querying the es health endpoint, fix install on ci
and add more logging.
2023-02-21 20:45:10 -03:00
jaredgoldman 82174d01c5
Merge pull request #465 from pikers/loglevel_to_testpikerd
`loglevel` to `open_test_pikerd()` via `--ll <level>` flag
2023-02-21 12:34:55 -05:00
Tyler Goodlet 0b678c97f4 Pass `loglevel: str` cli value through to service tests 2023-02-21 12:02:26 -05:00
Tyler Goodlet d0d1554d74 Expose `emsd` task loglevel through to clients 2023-02-21 12:02:01 -05:00
Esmeralda Gallardo 4122c482ba
Added new tests for elasticsearch's and marketstore's initialization and stop 2023-02-21 13:34:29 -03:00
Esmeralda Gallardo b5cdf14036
Modified elasticsearch file name to 'elastic' to avoid name errors. Applied changes suggested in the pr. 2023-02-21 13:34:29 -03:00
Esmeralda Gallardo 3ce8bfa012
Moved database initialization code inside the open_pikerd context manager 2023-02-21 13:34:29 -03:00
Guillermo Rodriguez bf9ca4a4a8
Generalize ahab to support elasticsearch logs and init procedure 2023-02-21 13:34:29 -03:00
Guillermo Rodriguez 17a4fe4b2f
Trim unnecessary stuff left from marketstore copy, also fix elastic config name for docker build, add elasticsearch to dependencies 2023-02-21 13:34:28 -03:00
Esmeralda Gallardo 0dc24bd475
Added dockerfile, yaml file and script to start an elasticsearch docker instance. 2023-02-21 13:34:26 -03:00
Tyler Goodlet b3400f0d9c Add `loglevel: str` fixture, passthrough to `open_test_pikerd()` 2023-02-21 10:54:18 -05:00
Tyler Goodlet 2bad692703 Fix up some test warnings (summary) spots 2023-02-21 10:54:18 -05:00
Tyler Goodlet cd3e9b1b2a Move quest fixtures to test mod, clean out old travis fixture 2023-02-21 10:54:18 -05:00
Tyler Goodlet e01220af14 Type annot tweaks to feeds mod 2023-02-21 10:54:18 -05:00
goodboy bfc0220a47
Merge pull request #456 from pikers/nix-env
NixOS development environment
2023-02-16 14:59:48 -05:00
goodboy 139b8ba0f4
Merge pull request #453 from pikers/overlays_interaction_latency_tuning
Overlays interaction latency tuning
2023-02-14 13:48:12 -05:00
Guillermo Rodriguez 71b2f24a2e
Merge pull request #460 from pikers/fnf_notify-send
Fix crash on notification daemon not found
2023-02-13 18:22:27 -03:00
Guillermo Rodriguez ffd707db62
Add try/catch for when notify-send is not present on system 2023-02-13 18:08:56 -03:00
Tyler Goodlet fefb0de51f Don't update overlays as fsps 2023-02-13 12:27:58 -05:00
Tyler Goodlet 59f34c94b0 Return fast on bad range in `.default_view()` 2023-02-13 12:27:58 -05:00
Tyler Goodlet ebf53e32bd Fix return type annot for `slice_from_time()` 2023-02-13 12:27:58 -05:00
Tyler Goodlet 9ce52033f0 Fix `do_px_step` output for epoch step sizing 2023-02-13 12:27:58 -05:00
Tyler Goodlet 9876f200c1 Support chart draw-api-kwargs-passthrough in lined plot meths 2023-02-13 12:27:58 -05:00
Tyler Goodlet 81b8cd5461 Use normal pen when last-datum color not provided 2023-02-13 12:27:58 -05:00
Tyler Goodlet 731eb91a58 Make profiler work when nested and not? 2023-02-13 12:27:58 -05:00
Tyler Goodlet 49ca743e6a Add back `.prepareGeometryChange()`, seems faster? 2023-02-13 12:27:58 -05:00
Tyler Goodlet a36d4b1dc6 Factor color and cache mode settings into `FlowGraphics`
Curve-path colouring and cache mode settings are used (and can thus be
factored out of) all child types; this moves them into the parent type's
`.__init__()` and adjusts all sub-types to match:

- the bulk was moved out of the `Curve.__init__()` including all
  previous commentary around cache settings.
- adjust `BarItems` to use a `NoCache` mode and instead use the
  `last_step_pen: pg.Pen` and `._pen` inside its `.paint()` instead of
  defining functionally duplicate vars.
- adjust all (transitive) calls to `BarItems` to use the new kwargs
  names.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 33df4f9927 Return `in_view: bool` from `Viz.update_graphics()`
Allows callers to know if they should care about a particular viz
rendering call by immediately knowing if the graphics are in view. This
turns out to be super useful, particularly when doing dynamic y-ranging overlay
calcs.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 72a9af21ac Fix profiler f-strings 2023-02-13 12:27:58 -05:00
Tyler Goodlet 1a10514cad Disable coordinate caching on OHLC ds curves to avoid smearing 2023-02-13 12:27:58 -05:00
Tyler Goodlet 5d9b7c72b3 Fix `Viz.draw_last()` to divide by `.flat_index_ratio` for uppx index lookback 2023-02-13 12:27:58 -05:00
Tyler Goodlet efddd43760 Drop masked `._maxmin()` override code from fsp stuff 2023-02-13 12:27:58 -05:00
Tyler Goodlet 1606b3a9c3 Document `Viz.incr_info()` outputs 2023-02-13 12:27:58 -05:00
Tyler Goodlet 8b5b1c214b Rework display loop maxmin-ing with `Viz` pipelining
First, we rename what was `chart_maxmin()` -> `multi_maxmin()` and don't
`partial` it into the `DisplayState`, just call it with correct `Viz`
ref inputs.

Second, as we've done with `ChartView.maybe_downsample_graphics()` use
the output from the main `Viz.update_graphics()` and feed it to the
`.maxmin()` calls for the ohlc and vlm chart but still deliver the same
output signature as prior. Also accept and use an optional profiler
input, drop `DisplayState.maxmin()` and add `.vlm_viz`.

Further perf-related tweaks to do with more efficient incremental
updates:
- only call `multi_maxmin()` if the main fast chart viz does a pixel
  column step.
- mask out hist viz and vlm viz and all linked fsp `._set_yrange()`
  calls for now until we figure out how to best optimize these updates
  when considering the new group-scaled-by-% style for multicharts.
- drop `.enable_auto_yrange()` calls during startup.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 9780263cfa Adjust vlm fsp code to new `Viz.update_graphics()` output sig 2023-02-13 12:27:58 -05:00
Tyler Goodlet e1e3afb495 Support read-slice input to `Viz.maxmin()`
Acts as a shortcut when pipe-lining from `Viz.update_graphics()` (which
now returns the needed in-view array-relative-read-slice as output) such
that `Viz.read()` and `.datums_range()` don't need to be called
internally multiple times. In the case where `i_read_range` is provided
we of course skip doing time index translations and instead directly
look up the appropriate (epoch-time) indices for caching.
2023-02-13 12:27:58 -05:00
Tyler Goodlet f9eb880404 Backlink subchart views to "main chart" in `.add_plot()` 2023-02-13 12:27:58 -05:00
Tyler Goodlet a3bbbeda9d Drop `ChartView._maxmin()` usage in `.ui._fsp`
Removes the multi-maxmin usage as well as ensures appropriate `Viz` refs
are passed into the view methods now requiring it. Also drops the "back
linking" of the vlm chart view to the source OHLC chart since we're
going to add this as a default to the charting API.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 3ad7844fdf Drop `ChartView._maxmin()` idea, use `Viz.maxmin()`
The max/min for a given data range is defined at the lowest level
through the `Viz` API; intermingling it with the view is a layering
issue. Instead make `._set_yrange()` call the appropriate view's viz
(since they should be one-to-one) directly and thus avoid any callback
monkey patching nonsense.

Requires that we now make `._set_yrange()` require either one of an
explicit `yrange: tuple[float, float]` min/max pair or the `Viz` ref (so
that maxmin can be called) as input. Adjust
`enable/disable_auto_yrange()` to bind in a new `._yranger()` partial
that's (solely) needed for signal reg/unreg which binds in the now
required input `Viz` to these methods.

Comment the `autoscale_overlays` block in `.maybe_downsample_graphics()`
for now until we figure out the most sane way to auto-range all linked
overlays and subplots (with their own overlays).
2023-02-13 12:27:58 -05:00
Tyler Goodlet b71c61e23f More thoroughly profile the display loop 2023-02-13 12:27:58 -05:00
Tyler Goodlet 9650b32786 Use `Viz.draw_last()` inside `.update_graphics()`
In an effort to ensure uniform and uppx-optimized last datum graphics
updates call this method directly instead of the equivalent graphics
object, thus ensuring we only update the last pixel column according to
the appropriate max/min computed from the last uppx's worth of data.

Fixes / improvements to enable `.draw_last()` usage include,
- change `Viz._render_table` -> `._alt_r: tuple[Renderer, pg.GraphicsItem] | None`
  which holds an alternative (usually downsampled) render and graphics
  obj.
- extend the `.draw_last()` signature to include:
  - `last_read` to allow passing in the already read data from
    `.update_graphics()`, if it isn't passed then a manual read is done
    internally.
  - `reset_cache: bool` which is passed through to the graphics obj.
- use the new `Formatter.flat_index_ratio: float` when indexing into xy
  1d data to compute the max/min for that px column.

Other,
- drop `bars_range` input from `maxmin()` since it's unused.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 433697cc4f Add cached refs to last 1d xy outputs
For the purposes of avoiding another full format call we can stash the
last rendered 1d xy pre-graphics formats as
`IncrementalFormatter.x/y_1d: np.ndarray`s and allow readers in the viz
and render machinery to use this data easily for things like "only
drawing the last uppx's worth of data as a line". Also add
a `.flat_index_ratio: float` which can be used similarly as a scalar
applied to indexes into the src array but instead when indexing
(flattened) 1d xy formatted outputs. Finally, this drops the way
overdone/noisy `.__repr__()` meth we had XD
2023-02-13 12:27:58 -05:00
Tyler Goodlet d622b4157c Only draw up to 2nd last datum for OHLC bars paths 2023-02-13 12:27:58 -05:00
Tyler Goodlet 1add591b2c Only update last datum graphic(s) on clear ticks
When a new tick comes in but no new time step / bar is yet needed (to be
appended) we can simply adjust **only** the last bar datum
lines-graphic(s) to avoid a redraw of the preceding `QPainterPath` on
every tick. Do this by calling `Viz.draw_last()` on the fast and slow
chart and adjusting the guards around calls to `Viz.update_graphics()`
(which *does* update paths) to only enter when there's a `do_px_step`
condition. We can stop calling `main_viz.plot.vb._set_yrange()` on view
treading cases since the range should have already been adjusted by the
clearing-tick processing mxmn updates.

Further this changes,
- the `chart_maxmin()` helper (which we should eventually just get rid
  of) to take bound in `Viz`s for the ohlc and vlm chart instead of the
  chart widget handles.
- extend the guard around hist viz yranging to only enter when not in
  "axis mode" - the same as for the fast viz.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 60440bc6b7 Ensure full hist OHLC path is drawn on tread
Since we removed the `Viz.update_graphics()` call from the main rt loop
we have to be sure to call it in the history chart incr-loop to avoid
a gap between the last bar and prior history since startup. We only
need to update on tread since that should be the only time a full redraw
is ever necessary; otherwise only the last datum is needed.

Further this moves the graphics cycle func's profiler init to the top in
an effort to get more correct latency measures.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 4003729231 Use `Viz.update_graphics()` throughout remainder of graphics loop where possible 2023-02-13 12:27:58 -05:00
Tyler Goodlet 934b32c342 Use `Viz` over charts where possible in display loop
Since `ChartPlotWidget.update_graphics_from_flow()` is more or less just
a call to `Viz.update_graphics()` try to call that directly where
possible.

Changes include:
- calling the viz in the display state specific `maxmin()`.
- passing a viz instance to each `ChartView._set_yrange()` call (in prep
  of explicit group auto-ranging); note that this input is unused in the
  method for now.
- drop `bars_range` var passing since we don't use it.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 97bb3b48da Set a `PlotItem.viz` for interaction lookup
Inside `._interaction` routines we need access to `Viz` instances.
Instead of doing `CharPlotWidget._vizs: dict` lookups this ensures each
plot can look up its (parent) viz without error.

Also, adjusts `Viz.maxmin()` output parsing to new signature.
2023-02-13 12:27:58 -05:00
Tyler Goodlet da618e1d38 Always cache `read_slc` alongside y-mnmx values 2023-02-13 12:27:58 -05:00
Tyler Goodlet 23c03a0905 Add back coord-caching to ohlc graphic 2023-02-13 12:27:58 -05:00
Tyler Goodlet 07c8ed8a3a Use (modern) literal type annots in view code 2023-02-13 12:27:58 -05:00
Tyler Goodlet bcf2a9868d Drop x-range query from `ChartPlotWidget.maxmin()`
Move the `Viz.datums_range()` call into `Viz.maxmin()` itself thus
minimizing the chart `.maxmin()` method to an ultra light wrapper around
the viz call. Also move all profiling into the `Viz` method.

Adjust `Viz.maxmin()` to return both the (rounded) x-range values which
correspond to the range containing the y-domain min and max so that
they can be used for upcoming overlay group maxmin calcs.
2023-02-13 12:27:58 -05:00
Tyler Goodlet c09c3925a4 Drop multi mxmn from display mod 2023-02-13 12:27:58 -05:00
Tyler Goodlet 92ce1b3304 Only handle hist discrepancies when market is open
We obviously don't want to be debugging a sample-index issue if/when the
market for the asset is closed (since we'll be guaranteed to have
a mismatch, lul). Pass in the `feed_is_live: trio.Event` throughout the
backfilling routines to allow first checking for the live feed being active
so as to avoid breakpointing on false +ves. Also, add a detailed warning
log message for when *actually* investigating a mismatch.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 0fc06a98d4 Passthrough `tractor` kwargs directly 2023-02-13 12:27:58 -05:00
Tyler Goodlet 4ba99494f0 Fix `open_trade_ledger()` enter value type annot 2023-02-13 12:27:58 -05:00
Tyler Goodlet a8e1796a8b Comment bad x-range bp for now 2023-02-13 12:27:58 -05:00
Tyler Goodlet 5ced05aab0 Breakpoint bad (-ve or too large) x-ranges to m4
This should never really happen but when it does it appears to be a race
with writing startup pre-graphics-formatter array data where we get an
`x_end` epoch value with some really small offset value (like `-/+0.5`)
subtracted, or the opposite where the `x_start` is epoch and `x_end` is
small.

This adds a warning msg and `breakpoint()` as well as guards around the
entire downsampling code path so that when resumed the downsampling
cycle should just be skipped and avoid a crash.
2023-02-13 12:27:58 -05:00
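The guard idea could look something like this sketch, assuming a hypothetical helper in the downsampling path (the actual bounds and breakpoint wiring differ):

```python
# sketch: skip the m4 cycle entirely on a -ve or absurdly large x-range
import logging

log = logging.getLogger(__name__)


def checked_x_range(
    x_start: float,
    x_end: float,
) -> float | None:
    x_range = x_end - x_start
    if x_range <= 0 or x_range > 0xffffffff:  # assumed sanity bound
        log.warning(f'bad m4 x-range: {x_start} -> {x_end}')
        return None  # caller skips this downsample cycle
    return x_range
```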
Tyler Goodlet 4a6339ffc2 Downthrottle to 16Hz on multi-feed charts 2023-02-13 12:27:58 -05:00
Tyler Goodlet efa4089920 Attempt to keep selected item highlighted
This attempt was unsuccessful since trying to (re)select the last
highlighted item on both an "enter" or "click" of that item causes
a hang and then segfault in `Qt`; no clue why..

Adds a `keep_current_item_selected: bool` flag to
`CompleterView.show_cache_entries()` but using it seems to always cause
a hang and crash; we keep all potential use spots commented for now
obviously to avoid this. Also included is a bunch of tidying to logic
blocks in the kb-control loop for readability.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 35cc37ddc1 Lol, pull hist chart from the display state 2023-02-13 12:27:58 -05:00
Tyler Goodlet 5ea4be1d4b Make (cache) search-results a `set` and avoid overlay duplicate entries 2023-02-13 12:27:58 -05:00
Tyler Goodlet 0c5b5a5aea Take outer-interval values in `Viz.datums_range()` 2023-02-13 12:27:58 -05:00
Tyler Goodlet 4027d683e9 Clean a buncha cruft from render mod 2023-02-13 12:27:58 -05:00
Tyler Goodlet 7afc9301ac Handle last-in-view time slicing edge case
Whenever the last datum is in view `slice_from_time()` needs to always
spec the final array index (i.e. the len - 1 value we set as
`read_i_max`) to avoid a uniform-step arithmetic error where gaps in the
underlying time series causes an index that's too low to be returned.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 12c6d58c2a Drop bp blocks from formatters mod 2023-02-13 12:27:58 -05:00
Tyler Goodlet c5db7295e6 Fix query-mode cursor labels to work with epoch-indexing 2023-02-13 12:27:58 -05:00
Tyler Goodlet 02c3ea1743 Use `open_sample_stream()` in display loop 2023-02-13 12:27:58 -05:00
Tyler Goodlet 63f0567418 Drop `Flume.index_stream()`, `._sampling.open_sample_stream()` replaces it 2023-02-13 12:27:58 -05:00
Tyler Goodlet 3e17e52555 Add back another panes resize during startup 2023-02-13 12:27:58 -05:00
Tyler Goodlet 65dca16dc0 Always zero-on-step $vlm 2023-02-13 12:27:58 -05:00
Tyler Goodlet e742d18a6c Mouse interaction tweaks
- adjust zoom focal to be min of the view-right coord or the right-most
  point on the flow graphic in view and drop all the legacy l1-in-view
  focal point cruft.
- flip to not auto-scaling overlays by default.
- change the `._set_yrange()` margin to `0.09`.
- drop `use_vr: bool` usage.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 7e29c36a24 Modernize optional path variable type annots 2023-02-13 12:27:58 -05:00
Tyler Goodlet 4d2b5c8f86 Use `Curve.x_last()` for zoom focal point 2023-02-13 12:27:58 -05:00
Tyler Goodlet fe932a96a9 Make `PlotItemOverlay` add items inwards->out
Before this, axes were being stacked from the outside in (for `'right'`
and `'bottom'` axes) which is somewhat non-intuitive for an `.append()`
operation. As such this change makes a symbol list stack a set of
`'right'` axes from left-to-right.

Details:
- rename `ComposeGridLayout.items` -> `.pitems`
- return `(int, list[AxisItem])` pairs from `.insert/append_plotitem()`
  and the down stream `PlotItemOverlay.add_plotitem()`.
- drop `PlotItemOverlay.overlays` and add it back as `@property` around
  the underlying `.layout.pitems`.
2023-02-13 12:27:58 -05:00
Tyler Goodlet c1b7063e3c Drop the legacy `relayed_from` cruft from our view box 2023-02-13 12:27:58 -05:00
goodboy 42d2f9e461
Merge pull request #452 from pikers/l1_compaction
Compact L1 labels
2023-02-13 11:21:26 -05:00
goodboy 31fc2d73ce
Merge pull request #459 from pikers/kraken_deposits_fixes
`kraken`: make pps work with arbitrary deposits
2023-02-12 16:17:23 -05:00
Tyler Goodlet 1346c33f04 `kraken`: make pps work with arbitrary deposits
Factor and fix dst <- src pair parsing into a new func
`get_likely_pair()` and use throughout initial position loading; solves
a parsing bug for src asset balances which aren't only 3 chars long..
a terrible assumption.
2023-02-12 15:52:48 -05:00
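The shape of the fix, as a sketch: strip the full src-asset suffix instead of assuming 3 chars. The real `get_likely_pair()` signature likely differs:

```python
# sketch: match a balance's src asset against pair keys by suffix,
# whatever the symbol length (eg. 'XBT', 'DOGE', ...).
def get_likely_pair(
    src: str,
    dst: str,
    pair_keys: list[str],  # eg. ['XBTUSD', 'DOGEXBT', ...]
) -> str | None:
    for key in pair_keys:
        if key.endswith(src) and key[:-len(src)] == dst:
            return key
    return None
```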
Tyler Goodlet cee6321a9f Do full marker width after line 2023-02-12 15:38:43 -05:00
Tyler Goodlet 1abed2ad9e Fix indent level 2023-02-12 15:38:43 -05:00
Tyler Goodlet 5bd6fa3cbf Make $vlm axis color same as clears 2023-02-12 15:38:43 -05:00
Tyler Goodlet a82911d8a9 Correctly load order mode for first fqsn in overlay set 2023-02-12 15:38:43 -05:00
Tyler Goodlet dc88364253 Move $vlm y-axis to LHS 2023-02-12 15:38:43 -05:00
Tyler Goodlet 4c51a68691 Better index step value scanning by checking with our expected set 2023-02-12 15:38:43 -05:00
Tyler Goodlet 42d3537516 Repair auto-y-ranging to always include L1 spread
Goes back to always adjusting the y-axis range to include the L1 spread
and clearing label in view whenever the last datum is also in view,
previously this was broken after reworking the display loop for
multi-feeds.

Drops a bunch of old commented tick looping cruft from before we started
using tick-type framing. Also adds more stringent guards for ignoring
but error logging quote values that are more than 25% out of range; it
seems our `ib` feed in particular has some issues with strange `price`
values that are way off here and there?
2023-02-12 15:38:43 -05:00
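That out-of-range guard might look roughly like this sketch (the 25% bound is from the message above, the rest is assumed):

```python
# sketch: log-and-ignore clearing prices way off from the last datum
import logging

log = logging.getLogger(__name__)


def price_in_range(
    price: float,
    last: float,
    max_err: float = 0.25,
) -> bool:
    if last and abs(price - last) / last > max_err:
        log.error(f'price {price} is >{max_err:.0%} away from {last}!?')
        return False
    return True
```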
Tyler Goodlet 3fd394d693 Use static `L1Label._x_br_offset` as l1 label length 2023-02-12 15:38:43 -05:00
Tyler Goodlet a7a08aced9 Drop l1 labels attr from chart widget 2023-02-12 15:38:43 -05:00
Tyler Goodlet 1d83fdb510 Handle empty `indexes` input edge case.. 2023-02-12 15:38:43 -05:00
Tyler Goodlet 924fcca463 TOSQUASH: 84f19308 (l1 rework) 2023-02-12 15:38:43 -05:00
Tyler Goodlet 26f497e2bb Set cursor label color to "bracket" 2023-02-12 15:38:43 -05:00
Tyler Goodlet e37e118a7e Don't set y-axis label colors to curve's, use the default from global scheme 2023-02-12 15:38:43 -05:00
Tyler Goodlet b2bb7f4923 Simplify L1 labels for multicharts
Instead of having the l1 lines be inside the view space, move them to be
inside their respective axis (with only a 16 unit portion inside the
view) such that the clear price label can overlay with them nicely
without obscuring; this is much better suited to multiple adjacent
y-axes and in general is simpler and less noisy.

Further `L1Labels` + `LevelLabel` style tweaks:
- adjust `.rect` positioning to be "right" (i.e. inside the parent
  y-axis) with a slight 16 unit shift toward the viewbox (using the new
  `._x_br_offset`) to allow seeing each level label's line even when the
  clearing price label is positioned at that same level.
- add a newline's worth of vertical space to each of the bid/ask labels
  so that L1 labels' text content isn't ever obscured by the clear price
  label.
- set a low (10) z-value to ensure l1 labels are always placed
  underneath the clear price label.
- always fill the label rect with the chosen background color.
- make labels fully opaque so as to always make them hide the parent
  axes' `.tickStrings()` contents.
- make default color the "default" from the global scheme.
- drop the "price" part from the l1 label text contents, just show the
  book-queue's amount (in dst asset's units, aka the potential clearing vlm).
2023-02-12 15:38:43 -05:00
Tyler Goodlet 97b03bbfbb Move old label sizing cruft to label mod 2023-02-12 15:38:43 -05:00
goodboy d690ad2bab
Merge pull request #451 from pikers/epoch_indexing_and_dataviz_layer
Epoch indexing and dataviz layer
2023-02-12 14:27:43 -05:00
Guillermo Rodriguez 0f082ed9d4
Merge pull request #458 from pikers/missing_protobuf
Add missing protobuf dependency
2023-02-12 16:19:31 -03:00
Guillermo Rodriguez 2851a0ecc5
Add missing protobuf dependency 2023-02-12 16:07:42 -03:00
Tyler Goodlet 340045af77 Make `FlowGraphic.x_last()` be optionally `None`
In the case where the last-datum-graphic hasn't been created yet, simply
return a `None` from this method so the caller can choose to ignore the
output. Further, drop `.px_width()` since it makes more sense defined on
`Viz` as well as the previously commented `BarItems.x_uppx()` method.
Also, don't round the `.x_uppx()` output since it can then be used when
< 1 to do x-domain scaling during high zoom usage.
2023-02-12 13:55:26 -05:00
Tyler Goodlet c1988c4d8d Add a parent-type for graphics: `FlowGraphic`
Factor some common methods into the parent type:
- `.x_uppx()` for reading the horizontal units-per-pixel.
- `.x_last()` for reading the "closest to y-axis" last datum coordinate
  for zooming "around" during mouse interaction.
- `.px_width()` for computing the max width of any curve in view in
  pixels.

Adjust all previous derived `pg.GraphicsObject` child types to now
inherit from this new parent and in particular enable proper `.x_uppx()`
support to `BarItems`.
2023-02-12 13:55:26 -05:00
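A skeletal sketch of such a parent type; `pixelVectors()` is real `pyqtgraph.GraphicsItem` API but the method bodies here are assumptions:

```python
# sketch: common x-domain helpers shared by all flow graphics
import pyqtgraph as pg


class FlowGraphic(pg.GraphicsObject):

    def x_uppx(self) -> float:
        # x units-per-pixel via the scene/view transform
        xvec = self.pixelVectors()[0]
        return xvec.x() if xvec else 0

    def x_last(self) -> float | None:
        # x-coord of the last datum graphic, if one was drawn yet
        line = getattr(self, '_last_line', None)  # assumed attr
        return line.x2() if line else None
```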
Tyler Goodlet 6a0c36922e Drop `._index_step` from formatters and instead defer to `Viz.index_step()` 2023-02-12 13:55:26 -05:00
Tyler Goodlet 459cbfdbad Further fixes `Viz.default_view()` and `.index_step()`
Use proper uppx scaling when either scaling the data to the x-domain
index-range or when the uppx is < 1 (now that we support it) such that
both the fast and slow chart always appropriately scale and offset to
the y-axis with the last datum graphic just adjacent to the order line
arrow markers.

Further this fixes the `.index_step()` calc to use the "earliest" 16
values to compute the expected sample step diff since the last set often
contained gaps due to start up race conditions and generated
unexpected/incorrect output.

Further this drops the `.curve_width_pxs()` method and replaces it with
`.px_width()`, taken from the graphics object API and instead returns
the pixel count for the whole view width instead of the
x-domain-data-range within the view.
2023-02-12 13:55:26 -05:00
Tyler Goodlet fc17187ff4 Drop edge case from `slice_from_time()`
Doesn't seem like we really need to handle the situation where the start
or stop input time stamps are outside the index range of the data since
the new binary search handling via `numpy.searchsorted()` covers this
case at minimal runtime cost and with an equally correct output. Allows
us to drop some other indexing endpoint internal variables as well.
2023-02-12 13:55:26 -05:00
Tyler Goodlet a7d78a3f40 Use left-style index search on RHS scan as well 2023-02-12 13:55:26 -05:00
Tyler Goodlet 7ce3f10e73 Just-offset-from-arrow-marker on slow chart
We want the fast and slow chart to behave the same on calls to
`Viz.default_view()` so adjust the offset calc to make both work:
- just offset by the line len regardless of step / uppx
- add back the `should_line: bool` output from `render_bar_items()` (and
  use it to set a new `ds_allowed: bool` guard variable) so that we can
  bypass calling the m4 downsampler unless the bars have been switched
  to the interpolation line graphic (which we normally required before
  any downsampling of OHLC graphics data).

Further, this drops use of the `use_vr: bool` flag from all rendering
since we pretty much always use it by default.
2023-02-12 13:55:26 -05:00
Tyler Goodlet bfc6014ad3 Fix history array name 2023-02-12 13:55:26 -05:00
Tyler Goodlet a5eed8fc1e Fix x-axis labelling when using an epoch domain
Previously with array-int indexing we had to map the input x-domain
"indexes" passed to `DynamicDateAxis._indexes_to_timestr()`. In the
epoch-time indexing case we obviously don't need to lookup time stamps
from the underlying shm array and can instead just cast to `int` and
relay the values verbatim.

Further, this patch includes some style adjustments to `AxisLabel` to
better enable multi-feed chart overlays by avoiding L1 label clutter
when multiple y-axes are stacked adjacent:
- adjust the `Axis` typical max string to include a couple spaces suffix
 providing for a bit more margin between side-by-side y-axes.
- make the default label (fill) color the "default" from the global
 color scheme and drop its opacity to 0.9
- add some new label placement options and use them in the
 `.boundingRect()` method:
 * `._x/y_br_offset` for relatively shifting the overall label relative
   to its parent axis.
 * `._y_txt_h_scaling` for increasing the bounding rect's height
   without including more whitespace in the label's text content.
- ensure labels have a high z-value such that by default they are always
 placed "on top" such that when we adjust the l1 labels they can be set
 to a lower value and thus never obscure the last-price label.
2023-02-12 13:55:26 -05:00
Tyler Goodlet cdec4782f0 Add commented append slice-len sanity check 2023-02-12 13:55:26 -05:00
Tyler Goodlet f30a48b82c Use `np.diff()` on last 16 samples instead of only last datum pair 2023-02-12 13:55:26 -05:00
Tyler Goodlet 98de22a740 Enable the experimental `QPrivatePath` functionality from latest `pyqtgraph` 2023-02-12 13:55:26 -05:00
Tyler Goodlet efbb8e86d4 Fix overlayed slow chart "treading"
Turns out we were updating the wrong `Viz`/`DisplayState` inside the
closure-style `increment_history_view()` (probably due to looping
through the flumes and dynamically closing in that task-func).. Instead
define the history incrementer at module level and pass in the
`DisplayState` explicitly. Further rework the `DisplayState` attrs to be
more focused around the `Viz` associated with the fast and slow chart
and be sure to adjust output from each `Viz.incr_info()` call to latest
update. Oh, and just tweaked the line palette for the moment.

FYI "treading" here is referring to  the x-shifting of the curve when
the last datum is in view such that on new sampled appends the "last"
datum is kept in the same x-location in UI terms.
2023-02-12 13:55:26 -05:00
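The underlying bug class is Python's late-binding closure capture; a toy sketch of the broken vs. fixed shape (all names illustrative only):

```python
# sketch: a task-func closed over a loop var sees only the *last* value
import trio


async def flawed(flumes: list, n: trio.Nursery):
    for flume in flumes:
        async def incr_view():      # closes over `flume`..
            await flume.update()    # ..which may have advanced!
        n.start_soon(incr_view)


async def fixed(flumes: list, n: trio.Nursery):
    for flume in flumes:
        # module-level func + explicit state arg: no late capture
        n.start_soon(incr_view_explicit, flume)


async def incr_view_explicit(flume):
    await flume.update()
```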
Tyler Goodlet b6521498f4 Make `.increment_view()` take in a `datums: int` and always scale it by sample step size 2023-02-12 13:55:26 -05:00
Tyler Goodlet 06f1b94147 Make `Viz.incr_info()` do treading with time-index, and appending with array-index 2023-02-12 13:55:26 -05:00
Tyler Goodlet ffb57f0256 Rename `reset` -> `reset_cache` 2023-02-12 13:55:26 -05:00
Tyler Goodlet ed1f64cf43 Fix gap detection on RHS; always bin-search on overshot time range 2023-02-12 13:55:26 -05:00
Tyler Goodlet bf8ea33697 Add type annots to vars inside `Render.render()` 2023-02-12 13:55:26 -05:00
Tyler Goodlet bc17308de7 Drop coordinate cacheing from `BarItems`, causes weird jitter on pan 2023-02-12 13:55:26 -05:00
Tyler Goodlet 1ece704d6e Add `ChartPlotWidget.main_viz: Viz` convenience `@property` 2023-02-12 13:55:26 -05:00
Tyler Goodlet dea1c1c2d6 Make `Viz.incr_info()` sample rate agnostic
Mainly it was the global "should we increment" logic that needs to be
independent for the fast vs. slow chart such that the slow isn't
update-shifted by the fast and vice versa. We do this using a new
`'i_last_slow'` key in the `DisplayState.globalz: dict` which is
singleton for each sample-rate-specific chart and works for both time
and array indexing.

Also, we drop some old commented `graphics.draw_last_datum()` code that
never ended up being needed again inside the coordinate cache reset
block.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 3300a240c6 Use array-`int`-indexing on single feed
Might as well since it makes the chart look less gappy and we can easily
flip the index switch now B)

Also adds a new `'i_slow_last'` key to `DisplayState` for a singleton
across all slow charts and thus no more need for special case logic in
`viz.incr_info()`.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 50ef4efccb Align step curves the same as OHLC bars 2023-02-12 13:55:26 -05:00
Tyler Goodlet 51f2461e8b Add `IncrementalFormatter.x_offset: np.ndarray`
Define the x-domain coords "offset" (determining the curve graphics
per-datum placement) for each formatter such that there's only one place
to change it when needed. Obviously each graphics type has its own
dimensionality and this is reflected by the array shapes on each
subtype.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 444768d30f Adjust OHLC bar x-offsets to be time span matched
Previously we were drawing with the middle of the bar on each index with
arms to either side: +/- some arm length. Instead this changes so that
each bar is drawn *after* each index/timestamp such that in graphics
coords the bar span more correctly matches the time span in the
x-domain. This makes the linked region between slow and fast chart
directly match (without any transform) for epoch-time indexing such that
the last x-coord in view on the fast chart is no more than the
next time step in (downsampled) slow view.

Deats:
- adjust in `._pathops.path_arrays_from_ohlc()` and take a `bar_w` bar
  width input (normally taken from the data step size).
- change `.ui._ohlc.bar_from_ohlc_row()` and
  `BarItems.draw_last_datum()` to match.
2023-02-12 13:55:26 -05:00
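In sketch form the new offset arithmetic is just the following (with `bar_w` normally taken from the data step size):

```python
# sketch: place each bar *after* its timestamp so the graphics span
# matches the time span: open arm at t, close arm at t + bar_w.
def ohlc_bar_x(t: float, bar_w: float) -> tuple[float, float, float]:
    x_open = t
    x_close = t + bar_w
    x_body = t + bar_w / 2  # the hi-lo vertical line
    return x_open, x_body, x_close
```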
Tyler Goodlet 0d0675ac7e `Viz._index_field` a `typing.Literal[str]` 2023-02-12 13:55:26 -05:00
Tyler Goodlet 24b384f3ef Set `path_arrays_from_ohlc(use_time_index=True)` on epoch indexing
Allows easily switching between normal array `int` indexing and time
indexing by just flipping the `Viz._index_field: str`.

Also, guard all the x-data audit breakpoints with a time indexing
condition.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 93330954c2 Ugh, use `bool` flag to determine index field.. 2023-02-12 13:55:26 -05:00
Tyler Goodlet edf721f755 Make `LinearRegion` link using epoch-time index
Turned out to be super simple to get the first draft to work since the
fast and slow chart now use the same domain, however, it seems like
maybe there's an offset issue still where the fast may be a couple
minutes ahead of the slow?

Need to dig in a bit..
2023-02-12 13:55:26 -05:00
Tyler Goodlet 530b2731ba Add global `i_step` per overlay to `DisplayState`
Using a global "last index step" (via module var) obviously has problems
when working with multiple feed sets in a single global app instance:
any separate feed-set will be incremented according to an app-global
index-step and thus won't correctly calc per-feed-set-step update info.

Impl deatz:
- drop `DisplayState.incr_info()` (since previously moved to `Viz`) and
  call that method on each appropriate `Viz` instance where necessary;
  further ensure the appropriate `DisplayState` instance is passed in to
  each call and make sure to pass a `state: DisplayState`.
- add `DisplayState.hist_vars: dict` for history chart (sets) to
  determine the per-feed (not set) current slow chart (time) step.
- add `DisplayState.globalz: dict` to house a common per-feed-set state
  and use it inside the new `Viz.incr_info()` such that
  a `should_increment: bool` can be returned and used by the display
  loop to determine whether to x-shift the current chart.
2023-02-12 13:55:24 -05:00
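A sketch of the per-feed-set step tracking, keeping the `globalz` key names from the message above but assuming the rest:

```python
# sketch: each sample-rate's chart decides to x-shift independently by
# comparing its current step against feed-set-global state.
def should_increment(
    i_step: int,
    globalz: dict,        # singleton per feed-set
    slow_chart: bool = False,
) -> bool:
    key = 'i_last_slow' if slow_chart else 'i_last'
    if i_step > globalz.get(key, float('-inf')):
        globalz[key] = i_step
        return True
    return False
```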
Tyler Goodlet 14104185d2 Move `DisplayState.incr_info()` -> `Viz` 2023-02-12 13:41:18 -05:00
Tyler Goodlet 3019c35e30 Move `Viz` layer to new `.ui` mod 2023-02-12 13:41:18 -05:00
Tyler Goodlet 4d74bc29b4 Fix line -> bars on 6x UPPX
Read the `Viz.index_step()` directly to avoid always reading 1 on the
slow chart; this was completely broken before and resulting in not
rendering the bars graphic on the slow chart until at a true uppx of
1 which obviously doesn't work for 60 width bars XD

Further cleanups to `._render` module:
- drop `array` output from `Renderer.render()`, `read_from_key` input
  and fix type annot.
- drop `should_line`, `changed_to_line` and `render_kwargs` from
  `render_baritems()` outputs and instead calc `should_redraw` logic
  inside the func body and return as output.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 3638ae8d3e Drop unused `read_src_from_key: bool` to `.format_to_1d()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet c5dd67e63c Right, do index lookup for int-index as well.. 2023-02-12 13:41:18 -05:00
Tyler Goodlet 0663880a6d Fix formatter xy ndarray first prepend case
First allocation vs. first "prepend" of source data to an xy `ndarray`
format **must be mutex** in order to avoid a double prepend.

Previously when both blocks were executed we'd end up with
a `.xy_nd_start` that was decremented (at least) twice as much as it
should be on the first `.format_to_1d()` call which is obviously
incorrect (and causes problems for m4 downsampling as discussed below).
Further, since the underlying `ShmArray` buffer indexing is managed
(i.e. write-updated) completely independently from the incremental
formatter updates and internal xy indexing, we can't use
`ShmArray._first.value` and instead need to use the particular `.diff()`
output's prepend length value to decrement the `.xy_nd_start` on updates
after initial alloc.

Problems this resolves with m4:
- m4 uses an x-domain diff to calculate the number of "frames" to
  downsample to, this is normally based on the ratio of pixel columns on
  screen vs. the size of the input xy data.
- previously using an int-index (not epoch time) the max diff between
  first and last index would be the size of the input buffer and thus
  would never cause a large mem allocation issue (though it may have
  been inefficient in terms of needed size).
- with an epoch time index this max diff could explode if you had some
  near-now epoch time stamp **minus** an x-allocation value: generally
  some value in `[0.5, -0.5]` which would result in a massive frame count
  and thus internal `np.ndarray()` allocation causing either a crash in
  `numba` code or actual system mem over allocation.

Further, put in some more x value checks that trigger breakpoints if we
detect values that caused this issue - we'll remove em after this has
been tested enough.
2023-02-12 13:41:18 -05:00
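The mutex-ed alloc-vs-prepend logic, as a sketch (attr names follow the message above, `.diff()`'s output shape is assumed):

```python
# sketch: first-alloc and first-prepend must be mutually exclusive
def update_format(fmtr, shm) -> None:
    prepend_len, append_len, array = fmtr.diff(shm.array)

    if fmtr.x_nd is None:
        # first allocation: build the full xy nd arrays in one go
        fmtr.allocate_xy_nd(array)
        fmtr.xy_nd_start = shm._first.value

    elif prepend_len:
        # ..else decrement *only* by this update's prepend length;
        # `ShmArray._first.value` moves independently of the formatter
        # and would double-count here.
        fmtr.xy_nd_start -= prepend_len
        fmtr.incr_update_xy_nd(array, prepend=True)
```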
Tyler Goodlet 3bed142d15 Handle time-indexing for fill arrows
Call into a reworked `Flume.get_index()` for both the slow and fast
chart and do time index clipping to last datum where necessary.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 9fcc6f9c44 Restore coord-cache resetting
Turns out we can't seem to avoid the artefacts when click-drag-scrolling
(results in weird repeated "smeared" curve segments) so just go back to
the original code.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 7aef31701b Add some commented debug prints for default fmtr 2023-02-12 13:41:18 -05:00
Tyler Goodlet 135627e142 Slice to an extra index around each timestamp input 2023-02-12 13:41:18 -05:00
Tyler Goodlet 5216a6b732 Drop passing `render_data` to `Curve.draw_last_datum()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet 2a797d32dc Add back `.default_view()` slice logic for `int` indexing 2023-02-12 13:41:18 -05:00
Tyler Goodlet 35a16ded2d Block out `do_print` stuff inside `Viz.maxmin()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet 44f50e3d0e Implement `stop_t` gap adjustments; the good lord said it is the problem 2023-02-12 13:41:18 -05:00
Tyler Goodlet 96b871c4d7 Draw last datums on boot
Ensures that a "last datum" graphics object exists so that zooming can
read it using `.x_last()`. Also, disable the linked region stuff for now
since it's totally borked after flipping to the time indexing.
2023-02-12 13:41:18 -05:00
Tyler Goodlet d2aad74dfc Delegate to `Viz.default_view()` on chart
Also add a rage print to not forget about the global index
tracking/diffing in the display loop we still need to change.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 50209752c3 Re-implement `.default_view()` on `Viz`
Since we don't really need it defined on the "chart widget" move it to
a viz method and rework it to hell:

- always discard the invalid view l > r case.
- use the graphic's UPPX to determine UI-to-scene coordinate scaling for
  the L1-label collision detection, if there is no L1 just offset by
  a few (index step scaled) datums; this allows us to drop the 2x
  x-range calls as was hacked previous.
- handle no-data-in-view cases explicitly and error if we get any
  ostensibly impossible cases.
- expect caller to trigger a graphics cycle if needed.

Further supporting this includes a rework of a slew of other important
details:

- add `Viz.index_step`, an idempotently computed index (presumably uniform)
  step value which is needed for variable sample rate graphics displayed
  on an epoch (second) time index.
- rework `Viz.datums_range()` to pass view x-endpoints as first and last
  elements in return `tuple`; tighten up snap-to-data edge case logic
  using `max()`/`min()` calls and better internal var naming.
- adjust all calls to `slice_from_time()` to not expect an "abs" slice.
- drop all `.yrange` resetting since we can just have the `Renderer` do
  it when necessary.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 5ab4e5493e Add gap detection for `stop_t`, though only report atm 2023-02-12 13:41:18 -05:00
Tyler Goodlet e252f70253 Add `.x_last()` meth to flow graphics 2023-02-12 13:41:18 -05:00
Tyler Goodlet 98438e29ef Drop `Flume.view_data()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet d649a7d1fa Drop old breakpoint 2023-02-12 13:41:18 -05:00
Tyler Goodlet 2669ced629 Drop `_slice_from_time()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet f2c0987a04 Use uniform step arithmetic in `slice_from_time()`
If we presume that time indexing uses a uniform step we can calculate
the exact index (using `//`) for the input time presuming the data
set has zero gaps. This gives a massive speedup over `numpy` fancy
indexing and (naive) `numba` iteration. Further in the case where time
gaps are detected, we can use `numpy.searchsorted()` to binary search
for the nearest expected index at lower latency.

Deatz,
- comment-disable the call to the naive `numba` scan impl.
- add an optional `step: int` input (calced if not provided).
- add todos for caching binary search results in the gap detection
  cases.
- drop returning the "absolute buffer indexing" slice since the caller
  can always just use the read-relative slice to acquire it.
2023-02-12 13:41:18 -05:00
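A simplified sketch of the arithmetic, assuming a structured array with a `'time'` field; the real routine handles more edge cases (and calcs the step from the earliest samples):

```python
# sketch: O(1) floor-div indexing with a binary-search fallback on gaps
import numpy as np


def slice_from_time(
    arr: np.ndarray,
    start_t: float,
    stop_t: float,
    step: int | None = None,
) -> slice:
    times = arr['time']
    if step is None:
        step = round(times[-1] - times[-2])

    # exact index arithmetic, valid when the series has zero gaps
    i_start = int((start_t - times[0]) // step)
    i_stop = int((stop_t - times[0]) // step) + 1

    if not (0 <= i_start < len(times)) or times[i_start] != start_t:
        # gap detected: binary search for the nearest expected index
        i_start = int(np.searchsorted(times, start_t, side='left'))
        i_stop = int(np.searchsorted(times, stop_t, side='right'))

    return slice(max(i_start, 0), min(i_stop, len(times)))
```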
Tyler Goodlet bb84715bf0 Make `.default_view()` time step aware
When we use an epoch index and any sample rate > 1s we need to scale the
"number of bars" to that step in order to place the view correctly in
x-domain terms. For now we're calcing the step in-method but likely,
longer run, we'll pull this from elsewhere (like a ``Viz`` attr).
2023-02-12 13:41:17 -05:00
Tyler Goodlet 0bdb7261d1 Flip over to epoch-time based x-domain indexing 2023-02-12 13:41:17 -05:00
Tyler Goodlet 12857a258b Adjust all `slice_from_time()` calls to not expect mask 2023-02-12 13:41:17 -05:00
Tyler Goodlet 46808fbb89 Rewrite `slice_from_time()` using `numba`
Gives approx a 3-4x speedup using plain old iterate-with-for-loop style
though still not really happy with this .5 to 1 ms latency..

Move the core `@njit` part to a `_slice_from_time()` with a pure python
func with orig name around it. Also, drop the output `mask` array since
we can generally just use the slices in the caller to accomplish the
same input array slicing, duh..
2023-02-12 13:41:17 -05:00
Tyler Goodlet 6ca8334253 Use index (time) step to calc OHLC bar/line uppx threshold 2023-02-12 13:41:17 -05:00
Tyler Goodlet a3844f9922 Use step size to determine bar gaps 2023-02-12 13:41:17 -05:00
Tyler Goodlet 58b36db2e5 Use step size to determine last datum bar gap 2023-02-12 13:41:17 -05:00
Tyler Goodlet a33f58a61a Move `Flume.slice_from_time()` to `.data._pathops` mod func 2023-02-12 13:41:17 -05:00
Tyler Goodlet a4392696a1 Drop `index_field` input to renders, add `.read()` profiling 2023-02-12 13:41:17 -05:00
Tyler Goodlet d5844ce8ff Delegate formatter `.index_field` to the parent `Viz` 2023-02-12 13:41:17 -05:00
Tyler Goodlet bf88b40a50 Facepalm**2: fix array-read-slice, like actually..
We need to subtract the first index in the array segment read, not the
first index value in the time-sliced output, to get the correct offset
into the non-absolute (`ShmArray.array` read) array..

Further we **do** need the `&` between the advance indexing conditions
and this adds profiling to see that it is indeed real slow (like 20ms
ish even when using `np.where()`).
2023-02-12 13:41:17 -05:00
Tyler Goodlet e4a0d4ecea Markup OHLC->path gen with `numba` issue # 2023-02-12 13:41:17 -05:00
Tyler Goodlet cca3417c57 Facepalm: put graphics cycle in `do_ds: bool` block.. 2023-02-12 13:41:17 -05:00
Tyler Goodlet 031d7967de Facepalm: actually return latest index on time slice fail.. 2023-02-12 13:41:17 -05:00
Tyler Goodlet 2e67e98b4d Go with explicit `.data._m4` mod name
Since it's a notable and self-contained graphics compression algo, might
as well give it a dedicated module B)
2023-02-12 13:41:17 -05:00
Tyler Goodlet 7124a131dd Move (unused) path gen routines to `.ui._pathops` 2023-02-12 13:41:17 -05:00
Tyler Goodlet 9052ed5ddf Move qpath-ops routines back to separate mod 2023-02-12 13:41:17 -05:00
Tyler Goodlet 7ec21c7f3b Rename `.ui._pathops.py` -> `.ui._formatters.py 2023-02-12 13:41:17 -05:00
Tyler Goodlet 309ae240cf Look up "index field" in display cycles
Again, to make epoch indexing a flip-of-switch for testing look up the
`Viz.index_field: str` value when updating labels.

Also, drops the legacy tick-type set tracking which we no longer use
thanks to the new throttler subsys and its framing msgs.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 382a619a03 Fix from-time index slicing?
Apparently we want an `|` for the advanced indexing logic?
Also, fix `read_slc` start to not always be 0 XD
2023-02-12 13:41:17 -05:00
Tyler Goodlet 7f3f6f871a Move path ops routines to top of mod
Planning to put the formatters into a new mod and aggregate all path
gen/op helpers into this module.

Further tweaks include:
- moving `path_arrays_from_ohlc()` back to module level
- slice out the last xy datum for `OHLCBarsAsCurveFmtr` 1d formatting
- always copy the new x-value from the source to `.x_nd`
2023-02-12 13:41:17 -05:00
Tyler Goodlet 6ea04f850d Drop diff state tracking in formatter
This was a major cause of error (particularly trying to get epoch
indexing working) and really isn't necessary; instead just have
`.diff()` always read from the underlying source array for current
index-step diffing and append/prepend slice construction.

Allows us to,
- drop `._last_read` state management and thus usage.
- better handle startup indexing by setting `.xy_nd_start/stop` to
  `None` initially so that the first update can be done in one large
  prepend.
- better understand and document the step curve "slice back to previous
  level" logic which is now heavily commented B)
- drop all the `slice_to_head` stuff and instead allow each
  formatter to choose its 1d segmenting.
2023-02-12 13:41:17 -05:00
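Stateless diffing might look roughly like this sketch (index attrs per the message above; the slice semantics are assumed):

```python
# sketch: always derive prepend/append slices from the live source
# buffer indices rather than stashed `._last_read` state.
def diff(fmtr, shm) -> tuple[slice, slice]:
    first, last = shm._first.value, shm._last.value

    if fmtr.xy_nd_start is None:
        # startup: deliver all history as one large prepend
        fmtr.xy_nd_start = fmtr.xy_nd_stop = last

    prepend = slice(first, fmtr.xy_nd_start)
    append = slice(fmtr.xy_nd_stop, last)
    return prepend, append
```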
Tyler Goodlet 3d5695f40a Explicitly enable chart widget yranging in display init 2023-02-12 13:41:17 -05:00
Tyler Goodlet 5affad942f Enable/disable vlm chart yranging (TO SQUASH) 2023-02-12 13:41:17 -05:00
Tyler Goodlet eb9ab20646 Don't disable non-enabled vlm chart y-autoranging 2023-02-12 13:41:17 -05:00
Tyler Goodlet f3bab826f6 Comment out bps for time indexing 2023-02-12 13:41:17 -05:00
Tyler Goodlet 2b9ca5f805 Call `Viz.bars_range()` from display loop 2023-02-12 13:41:17 -05:00
Tyler Goodlet 25a75e5bec Fix `.default_view()` to view-left-of-data 2023-02-12 13:41:17 -05:00
Tyler Goodlet 702ae29a2c Add `Viz.index_field: str`, pass to graphics objs
In an effort to make it easy to override the indexing scheme.

Further, this repairs the `.datums_range()` special case to handle when
the view box is to-the-right-of the data set (i.e. l > datum_start).
2023-02-12 13:41:17 -05:00
Tyler Goodlet ac1f37a2c2 Expect `index_field: str` in all graphics objects 2023-02-12 13:41:17 -05:00
Tyler Goodlet 344d2eeb9e Facepalm: pass correct flume to each FSP chart group.. 2023-02-12 13:41:17 -05:00
Tyler Goodlet 9133103f8f Attempt to make `.default_view()` time-index ready
As in make the call to `Flume.slice_from_time()` to try and convert any
time index values from the view range to array-indices; all untested
atm.

Also drop some old/unused/moved methods:
- `._set_xlimits()`
- `.bars_range()`
- `.curve_width_pxs()`

and fix some `flow` -> `viz` var naming.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 166d14af69 Simplify formatter update methodology
Don't expect values (array + slice) to be returned and applied by
`.incr_update_xy_nd()` and instead presume this will be implemented
internally in each (sub)formatter.

Attempt to simplify some incr-update routines, (particularly in the step
curve formatter, though most of it was reverted to just a simpler form
of the original implementation XD) including:
- dropping the need for the `slice_to_head: int` control.
- using the `xy_nd_start/stop` index counters over custom lookups.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 696c6f8897 First attempt, field-index agnostic formatting
Remove hardcoded `'index'` field refs from all formatters in a first
attempt at moving towards epoch-time alignment (though we don't actually
use it yet).

Adjustments to the formatter interface:
- a property for `.xy_nd`, the x/y nd arrays.
- a property for `.xy_slice`, the nd format array(s) start->stop index
  slice.

Internal routine tweaks:
- drop `read_src_from_key` and always pass full source array on updates
  and adjust handlers to expect to have to index the data field of
  interest.
- set `.last_read` right after update calls instead of after 1d
  conversion.
- drop `slice_to_head` array read slicing.
- add some debug points for testing 'time' indexing (though not used
  here yet).
- add `.x_nd` array update logic for when the `.index_field` is not
  'index' - i.e. when we begin to try and support epoch time.
- simplify some new y_nd updates to not require use of `np.broadcast()`
  where possible.
2023-02-12 13:41:17 -05:00
Tyler Goodlet be21f9829e Pepper render routines with time-slice calls 2023-02-12 13:41:17 -05:00
Tyler Goodlet 5a0673d66f Add `Viz.bars_range()` (moved from chart API)
Call it from view kb loop.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 6cacd7d18b Make `Viz.slice_from_time()` take input array
Probably means it doesn't need to be a `Flume` method but it's
convenient to expect the caller to pass in the `np.ndarray` with
a `'time'` field instead of a `timeframe: str` arg; also, return the
slice mask instead of the sliced array as output (again allowing the
caller to do any slicing). Also, handle the slice-outside-time-range
case by just returning the entire index range with a `None` mask.

Adjust `Viz.view_data()` to instead do timeframe (for rt vs. hist shm
array) lookup and equiv array slicing with the returned mask.
2023-02-12 13:41:17 -05:00
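A hedged sketch of the mask-returning slicer semantics described here;
the `'time'` field and the out-of-range fallback are per the commit text,
everything else (names, dtypes) is illustrative:

```python
import numpy as np


def slice_from_time(
    arr: np.ndarray,   # structured array with an epoch 'time' field
    start_t: float,
    stop_t: float,
) -> tuple[slice, np.ndarray | None]:
    # boolean mask selecting datums inside the requested epoch range
    times = arr['time']
    mask = (times >= start_t) & (times <= stop_t)

    if not mask.any():
        # slice-outside-time-range case: entire index range, `None` mask
        return slice(0, len(arr)), None

    index = np.flatnonzero(mask)
    return slice(int(index[0]), int(index[-1]) + 1), mask


dt = np.dtype([('time', 'f8'), ('close', 'f8')])
arr = np.array([(t, 1.0) for t in (10., 20., 30.)], dtype=dt)
read_slc, mask = slice_from_time(arr, 15., 35.)
assert (read_slc, list(arr[read_slc]['time'])) == (slice(1, 3), [20., 30.])
```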
Tyler Goodlet 5b08e9cba3 Add breakpoint on -ve range for now 2023-02-12 13:41:17 -05:00
Tyler Goodlet d3f5ff1b4f Go back to hard-coded index field
Turns out https://github.com/numba/numba/issues/8622 is real
and the suggested `numba.literally` hack doesn't seem to work..
2023-02-12 13:41:16 -05:00
Tyler Goodlet e45bc4c619 Move `ui._compression`/`._pathops` to `.data` subpkg
Since these modules no longer contain Qt-specific code, we might
as well include them in the data sub-package.

Also, add `IncrementalFormatter.index_field` as a single point to define
the indexing field that should be used for all x-domain graphics-data
rendering.
2023-02-12 13:39:10 -05:00
Tyler Goodlet baee86a2d6 Rename `.ui._flows.py` -> `.ui._render.py` 2023-02-12 13:39:10 -05:00
Tyler Goodlet 86d09d9305 Rename `Flow` -> `Viz`
The type is better described as a "data visualization":
https://en.wikipedia.org/wiki/Data_and_information_visualization

Add `ChartPlotWidget.get_viz()` to start working towards not accessing
the private table directly XD

We'll probably end up using the name `Flow` for a type that tracks
a collection of composed/cascaded `Flume`s:
https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
2023-02-12 13:39:10 -05:00
Tyler Goodlet 9ace053aaf Copy timestamps from source to FSP dest buffer 2023-02-12 13:39:10 -05:00
Guillermo Rodriguez 69707786fc
Fix environment spelling 2023-02-12 13:23:55 -03:00
Guillermo Rodriguez 096e87cd3b
Add info about nix to README.rst 2023-02-12 13:23:55 -03:00
Guillermo Rodriguez 5017c541db
Auto initialize and activate virtualenv 2023-02-12 13:23:55 -03:00
Guillermo Rodriguez 3ea6554ab0
Add nix development shell file 2023-02-12 13:23:45 -03:00
Guillermo Rodriguez f0b17cb8f7
Merge pull request #457 from pikers/msgspec-default-factories
Use new msgspec default factories
2023-02-12 13:17:31 -03:00
Guillermo Rodriguez 5ca45362c8
Add default factories for all required fields 2023-02-11 16:08:45 -03:00
Tyler Goodlet 1f2081911f Revert "Adjust chart call to graphics cycle to not pass quotes"
This reverts commit 50ad7370c7
which was originally applied due to missing API changes coming in
a future patchset..
2023-02-09 16:26:32 -05:00
goodboy a7d02ecec8
Merge pull request #449 from pikers/multi_symbol_input
Multi symbol input (support)
2023-02-09 16:20:34 -05:00
goodboy 11ba706797
Merge pull request #448 from pikers/axis_sticky_api
Axis sticky api, `PlotItem` is the new "chart"
2023-02-05 15:32:22 -05:00
Tyler Goodlet 50ad7370c7 Adjust chart call to graphics cycle to not pass quotes
Was breaking the `'r'` hotkey to reset the chart..
2023-02-05 15:27:12 -05:00
goodboy 0616cbd1f1
Merge pull request #454 from pikers/ib_fix_cmdtys
`ib`: fix cmdtys feeds
2023-02-03 07:53:39 -05:00
Tyler Goodlet af92602027 `ib`: make commodities search and feeds work again..
Was broken since the `_adhoc_futes_set` rework a while back. Moves the
cmdty symbols from that set into a new one and fixes the contract
case block to catch the `Contract(secType='CMDTY')` case. Also makes
`Client.search_symbols()` return details `dict`s so that `piker search`
will work again..
2023-02-02 16:52:34 -05:00
Tyler Goodlet d8bf45b02d Use latest `asks` 2023-02-02 16:52:34 -05:00
Tyler Goodlet 07ab853d3d `Order.symbol` is a `str`.. 2023-02-02 15:05:26 -05:00
Tyler Goodlet 414866fc6b Assign pnl calc output for use when debugging 2023-02-02 15:05:26 -05:00
Tyler Goodlet bc7fe6114d Adjust order mode to use `Flume.get_index()` 2023-02-02 15:05:23 -05:00
Tyler Goodlet 8d592886fa Pass `Flume`s throughout FSP-ui and charting APIs
Since higher level charting and fsp management need access to the
new `Flume` indexing apis this adjusts some func sigs to pass through
(and/or create) flume instances:
- `LinkedSplits.add_plot()` and dependents.
- `ChartPlotWidget.draw_curve()` and deps, and it now returns a `Flow`.
- `.ui._fsp.open_fsp_admin()` and `FspAdmin.open_fsp_ui()` related
  methods => now we wrap the destination fsp shm in a flume on the admin
  side and is returned from `.start_engine_method()`.

Drop a bunch of (unused) chart widget methods including some already
moved to flume methods: `.get_index()`, `.in_view()`,
`.last_bar_in_view()`, `.is_valid_index()`.
2023-02-02 13:32:30 -05:00
Tyler Goodlet 69ea296a9b Max out per symbol throttle @ 22Hz 2023-02-02 13:32:30 -05:00
Tyler Goodlet 03821fdf6f Expect and update from by-type tick frames
Move to expect and process new by-tick-event frames where the display
loop can now just iterate the most recent tick events by type instead of
the entire tick history sequence - thus we reduce iterations inside the
update loop.

Also, go back to using the detected display's refresh rate (minus 6)
as the default feed requested throttle rate since we can now handle
much more bursty-ness in display updates thanks to the new framing
format B)
2023-02-02 13:32:30 -05:00
Tyler Goodlet 1aa9ab03da Brighter last OHLC graphics datum by default 2023-02-02 13:32:20 -05:00
Tyler Goodlet 1d83b43efe Factor setup loop, 1 FSP chain, colors, throttling
Factor out the chart widget creation since it's only executed once
during rendering of the first feed/flow whilst keeping plotitem overlay
creation inside the (flume oriented) init loop. Only create one vlm and
FSP chart/chain for now until we figure out if we want FSPs overlayed by
default or selected based on the "front" symbol in use. Add a default
color-palette set using shades of gray when plotting overlays. Presume
that the display loop's quote throttle rate should be uniformly
distributed over all input symbol-feeds for now. Restore feed pausing on
mouse interaction.
2023-02-02 13:32:20 -05:00
Tyler Goodlet 6986be1b21 Define a single `ChartPlotWidget.feed: Feed` for pause/resume 2023-02-02 13:32:20 -05:00
Tyler Goodlet 92c50aa6a7 Drop tick frame builder loop for now 2023-02-02 13:32:20 -05:00
Tyler Goodlet eac79c5cdd Adjust FSP UI/mgmt apis to be `Flume` oriented 2023-02-02 13:32:20 -05:00
Tyler Goodlet 7aec238f5f Make graphics-update-loop multi-sym aware B)
Initial support for real-time multi-symbol overlay charts using an
aggregate feed delivered by `Feed.open_multi_stream()`.

The setup steps for constructing the overlayed plot items is still very
very rough and will likely provide incentive for better refactoring high
level "charting APIs". For each fqsn passed into `display_symbol_data()`
we now synchronously,
- create a single call to `LinkedSplits.plot_ohlc_main()` -> `ChartPlotWidget`
  where we cache the chart in scope and for all other "sibling" fqsns
  we,
- make a call to `ChartPlotWidget.overlay_plotitem()` -> `PlotItem`, hide its axes,
  make another call with this plotitem input to
  `ChartPlotWidget.draw_curve()`, set a sym-specific view box auto-yrange maxmin callback,
  register the plotitem in a global `pis: dict[str, list[pgo.PlotItem, pgo.PlotItem]] = {}`

Once all plots have been created we then asynchronously for each symbol,
- maybe create a volume chart and register it in a similar task-global
  table: `vlms: dict[str, ChartPlotWidget] = {}`
- start fsp displays for each symbol

Then common entrypoints are entered once for all symbols:
- a single `graphics_update_loop()` loop-task is started wherein
  real-time graphics update components for each symbol are created,
      * `L1Labels`
      * y-axis last clearing price stickies
      * `maxmin()` auto-ranger
      * `DisplayState` (stored in a table `dss: dict[str, DisplayState] = {}`)
      * an `increment_history_view()` task
  and a single call to `Feed.open_multi_stream()` is used to create
  a symbol-multiplexed quote stream which drives a single loop over all
  symbols wherein for each quote the appropriate components are looked
  up and passed to `graphics_update_cycle()`.
- a single call to `open_order_mode()` is made with the first symbol
  provided as input, though eventually we want to support passing in the
  entire list.

Further internal implementation details:
- special tweaks to the `pg.LinearRegionItem` setup wherein the region
  is added with a zero opacity and *after* all plotitem overlays to
  avoid an issue where overlays weren't being shown within the region
  area in the history chart.
- all symbol-specific graphics oriented update calls are adjusted to
  pass in the fqsn:
  * `update_fsp_chart()`
  * `ChartView._set_yrange()`
  * `ChartPlotWidget.update_graphics_from_flow()`
- avoid a double increment on sample step updates by not calling the
  increment on any vlm chart since it seems the vlm-ohlc chart linking
  already takes care of this now?
- use global counters for the last epoch time step to avoid incrementing
  all views more than once per new time step given underlying shm array
  buffers may be on different array-index values from one another.
2023-02-02 13:30:02 -05:00
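The single-loop, per-fqsn table-lookup pattern described above boils down
to something like this toy `trio` version; the `dss` table and
`graphics_update_cycle()` names mirror the commit text while the
multiplexed quote stream is faked with a memory channel:

```python
import trio


class DisplayState:
    # stand-in for the real per-symbol update-state bundle
    def __init__(self, fqsn: str) -> None:
        self.fqsn = fqsn
        self.last: float | None = None


def graphics_update_cycle(ds: DisplayState, quote: dict) -> None:
    # stand-in for the real per-symbol graphics update work
    ds.last = quote.get('last', ds.last)


async def graphics_update_loop(quote_rx: trio.MemoryReceiveChannel) -> None:
    # one task, one table: components are looked up per-fqsn per-quote
    dss: dict[str, DisplayState] = {}
    async for quotes in quote_rx:
        for fqsn, quote in quotes.items():
            ds = dss.setdefault(fqsn, DisplayState(fqsn))
            graphics_update_cycle(ds, quote)


async def main() -> None:
    tx, rx = trio.open_memory_channel(8)
    async with trio.open_nursery() as n:
        n.start_soon(graphics_update_loop, rx)
        await tx.send({'btcusdt.binance': {'last': 20_000.0}})
        await tx.send({'xbtusd.kraken': {'last': 19_999.5}})
        await tx.aclose()


trio.run(main)
```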
Tyler Goodlet be3dc69290 Only update pnl label on quotes with an fqsn match 2023-02-02 13:30:02 -05:00
Tyler Goodlet 6100bd19c7 Adjust search to handle multi-sym results 2023-01-31 15:16:34 -05:00
Tyler Goodlet d57bc6c6d9 Adjust to using `PlotItem`s for axis sticky mgmt 2023-01-31 15:15:56 -05:00
Tyler Goodlet 58b42d629f Passthrough fqsns list directly to `.load_symbols()` 2023-01-31 14:54:19 -05:00
Tyler Goodlet 36a81cb2de Only add plot to cursor set if not an overlay 2023-01-31 14:27:39 -05:00
Tyler Goodlet ae0f3118f4 Pass plotitem to axis from cursor 2023-01-31 14:27:39 -05:00
Tyler Goodlet 727c7ce2b1 Adjust L1 labels to expect `.pi: PlotItem` 2023-01-31 14:27:39 -05:00
Tyler Goodlet a39c980266 Allocate our internal `Axis` subtype in our `PlotItem` override 2023-01-31 14:27:39 -05:00
Tyler Goodlet 00be100e71 Initial chart widget adjustments for agg feeds
Main "public" API change is to make `GodWidget.get/set_chart_symbol()`
accept and cache-on fqsn tuples to allow handling overlayed chart groups
and adjust method names to be plural to match.

Wrt `LinkedSplits`,
- create all chart widget axes with a `None` plotitem argument and set
  the `.pi` field after axis creation (since apparently we have another
  object reference causality dilemma..)
- set a monkeyed `PlotItem.chart_widget` for use in axes that still need
  the widget reference.
- drop feed pause/resume for now since it's leaking feed tasks on the
  `brokerd` side and we probably don't really need it any more, and if
  we still do it should be done on the feed not the flume.

Wrt `ChartPlotItem`,
- drop `._add_sticky()` and use the `Axis` method instead and add some
  overlay + axis sanity checks.
- refactor `.draw_ohlc()` to be a lighter wrapper around a call to
  `.add_plot()`.
2023-01-31 14:27:39 -05:00
Tyler Goodlet 9217610734 Simplify OHLC graphic color instance var name 2023-01-31 14:27:39 -05:00
Tyler Goodlet 31af7a2c99 Add `Axis.add_sticky()` for creating axis labels
We have this method on our `ChartPlotWidget` but it makes more sense to
directly associate axis-labels with, well, the label's parent axis XD.

We add `._stickies: dict[str, YAxisLabel]` to replace
`ChartPlotWidget._ysticks` and pass in the `pg.PlotItem` to each axis
instance, stored as `Axis.pi` instead of handing around linked split
references (which are way out of scope for a single axis).

More work needs to be done to remove dependence on `.chart:
ChartPlotWidget` references in the date axis type as per comments.
2023-01-31 14:27:39 -05:00
Tyler Goodlet 34fac364fd Add default `YAxisLabel.x_offset: int` 2023-01-31 14:27:39 -05:00
goodboy dcdfd2577a
Merge pull request #447 from pikers/pregraphics_formatters
Pregraphics formatters: `IncrementalFormatter`
2023-01-31 13:55:04 -05:00
goodboy 6733dc57af
Merge pull request #441 from pikers/dark_clearing_repairs
Dark clearing repairs
2023-01-30 14:21:23 -05:00
Tyler Goodlet 05c4b6afb9 Drop px-cache-resets, failed try at path appends
Comments out the pixel-cache resetting since it doesn't seem we need it
any more to avoid draw oddities?

For `.fast_path` appends, this nearly got it working except the new path
segments are either not being connected correctly (step curve) or not
being drawn in full since the history path (plain line).

Leaving the attempted code commented in for a retry in the future; my
best guesses are that maybe,
- `.connectPath()` call is being done with incorrect segment length
  and/or start point.
- the "appended" data: `appended = array[-append_len-1:slice_to_head]`
  (done inside the formatter) isn't correct (i.e. endpoint handling
  considering a path append) and needs special handling for different
  curve types?
2023-01-30 13:22:24 -05:00
Tyler Goodlet 4b22325ffc Mask profile points and drop rect `.united()` attempts 2023-01-30 13:22:14 -05:00
Tyler Goodlet 9d16299f60 Make curve graphics timeframe agnostic
Ensure `.boundingRect()` calcs and `.draw_last_datum()` do geo-sizing
based on source data instead of presuming some `1.0` unit steps in some
spots; we need this to support an epoch index as is needed for overlays.

Further, clean out a bunch of old bounding rect calc code and add some
commented code for trying out `QRectF.united()` on the path + last datum
curve segment. Turns out that approach is slower as per eyeballing the
added profiler points.
2023-01-30 13:21:43 -05:00
Tyler Goodlet ab1f15506d Add graphics incr-updated "formatter" subsys
After trying to hack epoch indexed time series and failing miserably,
decided to properly factor out all formatting routines into a common
subsystem API: ``IncrementalFormatter`` which provides the interface for
incrementally updating and tracking pre-path-graphics formatted data.

Previously this functionality was mangled into our `Renderer` (which
also does the work of `QPath` generation and update) but splitting it
out also preps for being able to do graphics-buffer downsampling and
caching on a remote host B)

The ``IncrementalFormatter`` (parent type) has the default behaviour of
tracking a single field-array on some source `ShmArray`, updating
a flattened `numpy.ndarray` in-mem allocation, and providing a default
1d conversion for pre-downsampling and path generation.

Changed out of `Renderer`,
- `.allocate_xy()`, `update_xy()` and `format_xy()` all are moved to
  more explicitly named formatter methods.
- all `.x/y_data` nd array management and update
- "last view range" tracking
- `.last_read`, `.diff()`
- now calls `IncrementalFormatter.format_to_1d()` inside `.render()`

The new API gets,
- `.diff()`, `.last_read`
- all view range diff tracking through `.track_inview_range()`.
- better nd format array names: `.x/y_nd`, `xy_nd_start/stop`.
- `.format_to_1d()` which renders pre-path formatted arrays ready for
  both m4 sampling and path gen.
- better explicit overloadable formatting method names:
  * `.allocate_xy()` -> `.allocate_xy_nd()`
  * `.update_xy()` -> `.incr_update_xy_nd()`
  * `.format_xy()` -> `.format_xy_nd_to_1d()`

Finally this implements per-graphics-type formatters which each define
their related set of formatting routines:
- `OHLCBarsFmtr`: std multi-line style bars
- `OHLCBarsAsCurveFmtr`: draws an interpolated line for ohlc sampled data
- `StepCurveFmtr`: handles vlm style curves
2023-01-30 13:20:17 -05:00
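The new formatter API's overall shape as an interface-only sketch; the
method and attribute names are from the commit text, the bodies are
placeholders and not the real implementation:

```python
import numpy as np


class IncrementalFormatter:
    '''
    Incrementally update and track pre-path-graphics formatted data.

    '''
    def __init__(self, index_field: str = 'index') -> None:
        self.index_field = index_field
        self.x_nd: np.ndarray | None = None
        self.y_nd: np.ndarray | None = None
        self.xy_nd_start: int | None = None
        self.xy_nd_stop: int | None = None

    def allocate_xy_nd(self, src: np.ndarray) -> None:
        # allocate flattened in-mem nd arrays from the source shm array;
        # the 'close' field here is an assumption for the default case
        self.x_nd = src[self.index_field].copy()
        self.y_nd = src['close'].copy()

    def incr_update_xy_nd(self, src: np.ndarray) -> None:
        # apply prepend/append diffs to the nd arrays in-place
        ...

    def format_xy_nd_to_1d(self) -> tuple[np.ndarray, np.ndarray]:
        # default 1d pass-through, ready for m4 downsampling + path gen
        return self.x_nd, self.y_nd
```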
Tyler Goodlet 0db5451e47 Move all pre-path formatting routines to `._pathops`, proto formatter type 2023-01-30 13:19:33 -05:00
goodboy 61218f30f5
Merge pull request #440 from pikers/samplerd_service
`samplerd` service
2023-01-30 11:48:07 -05:00
Tyler Goodlet fcfc0f31f0 Enable backpressure in an effort to prevent bootup overruns 2023-01-30 11:45:29 -05:00
Tyler Goodlet 69074f4fa5 Bump up service tree spawn timeout a couple secs 2023-01-26 17:59:25 -05:00
Tyler Goodlet fe4fb37b58 Add service tree tests for data-feeds and the EMS 2023-01-24 15:15:27 -05:00
Tyler Goodlet 7cfd431a2b Yield `Services` in `open_test_pikerd()` fixture 2023-01-24 15:15:27 -05:00
Tyler Goodlet 61e20a86cc Fix clearing endpoint type annots, export `open_ems()` 2023-01-24 15:15:27 -05:00
Tyler Goodlet d9b73e1d08 Yield services (manager) from `maybe_open_pikerd()` 2023-01-24 15:15:27 -05:00
goodboy 4833d56ecb
Merge pull request #442 from pikers/misc_brokerd_backend_repairs
Misc brokerd backend repairs
2023-01-23 18:44:00 -05:00
Tyler Goodlet 090d1ba524 `kraken`: catch value error not index on missing `src_fiat` in pair 2023-01-23 15:36:20 -05:00
Tyler Goodlet afc45a8e16 `binance`: same thing, only unsub when connected 2023-01-23 15:29:24 -05:00
Tyler Goodlet 844626f6dc Move `brokerd` service task to root `.data` mod 2023-01-13 13:21:49 -05:00
Tyler Goodlet 470079665f Use new tractor kwargs getter func 2023-01-13 13:21:49 -05:00
Tyler Goodlet 0cd87d9e54 Drop commented markestored spawner code 2023-01-13 13:21:49 -05:00
Tyler Goodlet 09711750bf Registry subsys rework
More or less a revamp (and possibly first draft for something similar in
`tractor` core) which ensures all actor trees attempt to discover the
`pikerd` registry actor.

Implementation improvements include:
- new `Registry` singleton which houses the `pikerd` discovery
  socket-address `Registry.addr` + a `open_registry()` manager which
  provides bootstrapped actor-local access.
- refine `open_piker_runtime()` to do the work of opening a root actor
  and call the new `open_registry()` depending on whether a runtime has
  yet been bootstrapped.
- rejig `[maybe_]open_pikerd()` in terms of the above.
2023-01-13 13:21:49 -05:00
Tyler Goodlet 71ca4c8e1f Use actor uid in shm keys for rt quote buffers
Allows running simultaneous data feed services on the same (linux) host
by avoiding file-name collisions, instead keying shm buffer sets by the
given `brokerd` instance. This allows, for example, either multiple dev
versions of the data layer to run side-by-side or for the test suite to
be seamlessly run alongside a production instance.
2023-01-13 13:21:49 -05:00
Tyler Goodlet 9811dcf5f3 Match `services` subcmd to new reg addr module variables 2023-01-13 13:21:49 -05:00
Tyler Goodlet da659cf607 Facepalm: definitely do not short circuit discovery helpers.. 2023-01-13 13:21:49 -05:00
Tyler Goodlet 37e0ec7b7d Assert fixture caller is `pikerd` 2023-01-13 13:21:49 -05:00
Tyler Goodlet 045b76bab5 Make `Flume.index_stream()` defer to new sampling api 2023-01-13 13:21:49 -05:00
Tyler Goodlet c8c641a038 Ensure all sub-services cancel on `pikerd` exit
Previously we were relying on implicit actor termination in
`maybe_spawn_daemon()` but really on `pikerd` teardown we should be sure
to tear down not only all service tasks in each actor but also the actor
runtimes. This adjusts `Services.cancel_service()` to only cancel the
service task scope and wait on the `complete` event and reworks the
`open_context_in_task()` inner closure body to,

- always cancel the service actor at exit.
- not call `.cancel_service()` (potentially causing recursion issues on
  cancellation).
- allocate a `complete: trio.Event` to signal full task + actor termination.
- pop the service task from the `.service_tasks` registry.

Further, add a `maybe_set_global_registry_sockaddr()` helper-cm to do
the work of checking whether a registry socket needs-to/has-been set
and use it for discovery calls to the `pikerd` service tree.
2023-01-13 13:21:49 -05:00
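Minus all the actor/portal machinery, the cancel-scope-plus-`complete`
event pattern looks roughly like the following toy `Services` (not
piker's actual type):

```python
import trio


class Services:
    service_tasks: dict[str, tuple[trio.CancelScope, trio.Event]] = {}

    @classmethod
    async def run_service(cls, name: str) -> None:
        complete = trio.Event()
        with trio.CancelScope() as scope:
            cls.service_tasks[name] = (scope, complete)
            try:
                await trio.sleep_forever()   # the "service task"
            finally:
                # signal full task termination to any canceller
                complete.set()

    @classmethod
    async def cancel_service(cls, name: str) -> None:
        # pop the task from the registry, cancel only its scope, then
        # wait on the `complete` event for full termination
        scope, complete = cls.service_tasks.pop(name)
        scope.cancel()
        await complete.wait()


async def main() -> None:
    async with trio.open_nursery() as n:
        n.start_soon(Services.run_service, 'samplerd')
        await trio.sleep(0.1)
        await Services.cancel_service('samplerd')


trio.run(main)
```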
Tyler Goodlet 6a1bb13feb Add base `pikerd` service tree custom check test 2023-01-13 13:21:49 -05:00
Tyler Goodlet 75591dd7e9 Don't raise on quote feed lags to dark clearing loop 2023-01-13 13:21:49 -05:00
Tyler Goodlet d792fed099 Move sync log msg back to info 2023-01-13 13:21:49 -05:00
Tyler Goodlet d66fb49077 Don't deliver shms from `start_backfill()`, they're not used 2023-01-13 13:21:49 -05:00
Tyler Goodlet 78c7c8524c Breakpoint when bad 1m history offsets are detected 2023-01-13 13:21:49 -05:00
Tyler Goodlet a746258f99 `binance`: always request an extra 1min OHLC bar
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when the `end_dt=None`, indicating "get the latest bar". Add
a breakpoint block that should trigger whenever the latest bar vs. the
latest epoch time is mismatched; we'll remove this after some testing
verifying the history bars issue is resolved.

Further this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
2023-01-13 13:21:49 -05:00
Tyler Goodlet 5adb234a24 Don't receive sample-index msgs in feed layer 2023-01-13 13:21:49 -05:00
Tyler Goodlet 2778ee1401 Support not registering for sample-index msgs via `sub_for_broadcasts: bool` flag 2023-01-13 13:21:49 -05:00
Tyler Goodlet e0ca5d5200 Use `open_sample_stream()` to increment fsp buffers 2023-01-13 13:21:47 -05:00
Tyler Goodlet b3d1b1aa63 Port feed layer to use new `samplerd` APIs
Always use `open_sample_stream()` to register fast and slow quote feed
buffers and get a sampler stream which we use to trigger
`Sampler.broadcast_all()` calls on the service side after backfill
events.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 5ec1a72a3d Implement a `samplerd` singleton actor service
Now spawned under the `pikerd` tree as a singleton-daemon-actor we offer
a slew of new routines in support of this micro-service:

- `maybe_open_samplerd()` and `spawn_samplerd()` which provide the
  `._daemon.Services` integration to conduct service spawning.
- `open_sample_stream()` which is a client-side endpoint which does all
  the work of (lazily) starting the `samplerd` service (if dne) and
  registers shm buffers for update as well as connecting a sample-index
  stream for iteration by the caller.
- `register_with_sampler()` which is the `samplerd`-side service task
  endpoint implementing all the shm buffer and index-stream registry
  details as well as logic to ensure a lone service task runs
  `Services.increment_ohlc_buffer()`; it increments at the shortest period
  registered which, for now, is the default 1s duration.

Further impl notes:
- fixes to `Services.broadcast()` to ensure broken streams get discarded
  gracefully.
- we use a `pikerd` side singleton mutex `trio.Lock()` to ensure
  one-and-only-one `samplerd` is ever spawned per `pikerd` actor tree.
2023-01-13 13:21:15 -05:00
Tyler Goodlet a342f7d2d4 Make `._daemon.Services` for use as singleton
Drop the `_services` module level ref and adjust all client code to
match. Drop struct inheritance and convert all methods to class level.
Move `Brokerd.locks` -> `Services.locks` and add sampling mod to pikerd
enabled set.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 2c76cee928 Begin formalizing `Sampler` singleton API
We're moving toward a single actor managing sampler work and distributed
independently of `brokerd` services such that a user can run samplers on
different hosts than the real-time data feed infra. Most of the
implementation details include aggregating `.data._sampling` routines
into a new `Sampler` singleton type.

Move the following methods to class methods:
- `.increment_ohlc_buffer()` to allow a single task to increment all
  registered shm buffers.
- `.broadcast()` for IPC relay to all registered clients/shms.

Further add a new `maybe_open_global_sampler()` which allocates
a service nursery and assigns it to the `Sampler.service_nursery`; this
is prep for putting the step incrementer in a singleton service task
higher up the data-layer actor tree.
2023-01-13 13:21:15 -05:00
Tyler Goodlet b5f2ff854c Drop measuring the clearing rate, use per-step count 2023-01-13 13:21:15 -05:00
Tyler Goodlet 3efb0b5884 Sync 1s (or less) sampler steps using rounded now-epoch 2023-01-13 13:21:15 -05:00
Tyler Goodlet 009bbe456e Always `.error()` log unknown queries for `marketstore` 2023-01-13 13:21:15 -05:00
Tyler Goodlet daf7b3f4a5 Only accept 6 tries for the same duplicate hist frame
When we see multiple history frames that are duplicate to the request
set, bail re-trying after a number of tries (6 just cuz) and return
early from the tsdb backfill loop; presume that this many duplicates
means we've hit the beginning of history. Use a `collections.Counter`
for the duplicate counts. Make sure to warn-log in such cases.
2023-01-13 13:21:15 -05:00
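The bail-out logic reduces to a small `collections.Counter` check; the
frame-key shape below is a guess, only the counting idea is from the
commit text:

```python
from collections import Counter

# hypothetical frame key: the (start_dt, end_dt) of a history request
frame_counts: Counter = Counter()


def hit_start_of_history(frame_key: tuple, tries: int = 6) -> bool:
    # past `tries` duplicates we presume there's no older history
    frame_counts[frame_key] += 1
    return frame_counts[frame_key] >= tries


for _ in range(5):
    assert not hit_start_of_history(('2023-01-01', '2023-01-02'))
assert hit_start_of_history(('2023-01-01', '2023-01-02'))
```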
Tyler Goodlet b0a6dd46e4 Use recon set on stack closing during reconnect
Hopefully resolves https://github.com/pikers/piker/issues/434
2023-01-13 13:21:15 -05:00
Tyler Goodlet 1c5141f4c6 Fix f-str in duplicate frame msg print 2023-01-13 13:21:15 -05:00
Tyler Goodlet 4cdd2271b0 Drop `tractor` assert bug note 2023-01-13 13:21:15 -05:00
Tyler Goodlet 89095d4e9f Ensure FSPs last 2 times are synced with its source 2023-01-13 13:21:15 -05:00
Tyler Goodlet 04c0d77595 Frame ticks in helper routine
Wow, turns out tick framing was totally borked since we weren't framing
on "greater than throttle-period-long waits" XD

This moves all the framing logic into a common func and calls it in
every case:
- every (normal) "pre throttle period expires" quote receive
- each "no new quote before throttle period expires" (slow case)
- each "no clearing tick yet received" / only burst on clears case
2023-01-13 13:21:15 -05:00
Tyler Goodlet d1b07c625f Copy timestamps from source to FSP dest buffer
Slice up to history's length worth of (latest) time stamps from source
series read at the start of the history init phase.
2023-01-13 13:21:15 -05:00
Tyler Goodlet a5bb33b0ff Avoid key error on already popped cancel 2023-01-13 13:21:15 -05:00
Tyler Goodlet 8e1ceca43d Add some data-flows jargon notes (re: #270) 2023-01-13 13:21:15 -05:00
Tyler Goodlet c85e7790de Rename `._flumes.py` -> `.flows.py` 2023-01-13 13:21:15 -05:00
Tyler Goodlet 2399c618b6 Expand sampler loop shm write lines 2023-01-13 13:21:15 -05:00
Tyler Goodlet 7ec88f8cac Make hist shm token optional to allow for FSPs 2023-01-13 13:21:15 -05:00
Tyler Goodlet eacd44dd65 Move `Flume` to a new `.data._flumes` module 2023-01-13 13:21:15 -05:00
Tyler Goodlet e5e70a6011 Extend `Flume` methods
Add some (untested) data slicing util methods for mapping time ranges to
source data indices:
- `.get_index()` which maps a single input epoch time to an equiv array
  (int) index.
- add `slice_from_time()` which returns a view of the shm data from an
  input epoch range presuming the underlying struct array contains
  a `'time'` field with epoch stamps.
- `.view_data()` which slices out the "in view" data according to the
  current state of the passed in `pg.PlotItem`'s view box.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 7da5c2b238 Add epoch time index to fsp buffers 2023-01-13 13:21:15 -05:00
Tyler Goodlet 1ee49df31d Ensure a rt shm buffer without backfill has correct epoch timestamping 2023-01-13 13:21:15 -05:00
Tyler Goodlet f2df32a673 Use throttle period for wait-on-clearing-event timeout 2023-01-13 13:21:15 -05:00
Tyler Goodlet 125e31dbf3 Implement by-type tick-framing in throttler loop
This has been an outstanding idea for a while and changes the framing
format of tick events into a `dict[str, list[dict]]` wherein for each
tick "type" (eg. 'bid', 'ask', 'trade', 'asize'..etc) we create an FIFO
ordered `list` of events (data) and then pack this table into each
(throttled) send. This gives an additional implied downsample reduction
(in terms of iteration on the consumer side) from `N` tick-events to
a (max) `T` tick-types presuming the rx side only needs the latest tick
event.

Drop the `types: set` and adjust clearing event test to use the new
`ticks_by_type` map's keys.
2023-01-13 13:21:15 -05:00
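A minimal framing helper matching the `dict[str, list[dict]]` format
described above; the tick field names are assumptions:

```python
from collections import defaultdict


def frame_ticks(ticks: list[dict]) -> dict[str, list[dict]]:
    # pack a raw FIFO tick sequence into a by-type table so a consumer
    # that only wants the latest event per type iterates (max) T keys
    # instead of N events
    by_type: dict[str, list[dict]] = defaultdict(list)
    for tick in ticks:
        by_type[tick['type']].append(tick)   # preserves FIFO order
    return dict(by_type)


frame = frame_ticks([
    {'type': 'bid', 'price': 99.0},
    {'type': 'trade', 'price': 100.0},
    {'type': 'bid', 'price': 99.5},
])
assert frame['bid'][-1]['price'] == 99.5  # rx side reads latest per type
```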
Tyler Goodlet 715e693564 Improved clearing-tick-burst-oriented throttling
Instead of uniformly distributing the msg send rate for a given
aggregate subscription, choose to be more bursty around clearing ticks
so as to avoid saturating the consumer with L1 book updates and vs.
delivering real trade data as-fast-as-possible.

Presuming the consumer is in the "UI land of slow" (eg. modern display
frame rates) such an approach is more useful for seeing "material
changes" in the market as-bursty-as-possible (i.e. more short lived fast
changes in last clearing price vs. many slower changes in the bid-ask
spread queues). Such an approach also lends itself better to multi-feed
overlays which in aggregate tend to scale linearly with the number of
feeds/overlays; centralization of bursty arrival rates allows for
a higher overall throttle rate if used cleverly with framing.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 43717c92d9 Type annot-declare fsp-engine data `Feed` 2023-01-13 13:21:15 -05:00
Tyler Goodlet f370685c62 Init msg keys are always lower case 2023-01-13 13:21:15 -05:00
Tyler Goodlet 4300470786 Fix for empty tsdb query result case 2023-01-13 13:21:15 -05:00
Tyler Goodlet b89fd9652c `binance`: always request an extra 1min OHLC bar
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when the `end_dt=None`, indicating "get the latest bar". Add
a breakpoint block that should trigger whenever the latest bar vs. the
latest epoch time is mismatched; we'll remove this after some testing
verifying the history bars issue is resolved.

Further this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
2023-01-13 13:14:35 -05:00
Tyler Goodlet 51f4afbd88 Don't raise on quote feed lags to dark clearing loop 2023-01-13 12:51:07 -05:00
Tyler Goodlet 7ef8111381 Provide `datetime`-sorted clears table iteration
Likely pertains to helping with stuff in issues #345 and #373 and just
generally is handy to have when processing ledgers / clearing event
tables.

Adds the following helper methods:
- `iter_by_dt()` to iter-sort an arbitrary `Transaction`-like table of
  clear entries.
- `Position.iter_clears()` as a convenience wrapper for the above.
2023-01-13 12:51:01 -05:00
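The sort-by-datetime iteration reduces to a plain `sorted()` with a stamp
key; a hedged sketch with an assumed clears-entry shape:

```python
from datetime import datetime
from typing import Any, Iterator


def iter_by_dt(
    clears: dict[str, dict[str, Any]],
) -> Iterator[tuple[str, dict]]:
    # yield (tid, entry) pairs ordered by each entry's datetime stamp
    return iter(sorted(
        clears.items(),
        key=lambda item: item[1]['dt'],
    ))


table = {
    'b': {'dt': datetime(2023, 1, 2), 'size': 1},
    'a': {'dt': datetime(2023, 1, 1), 'size': 2},
}
assert [tid for tid, _ in iter_by_dt(table)] == ['a', 'b']
```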
Tyler Goodlet 35b097469b Round spread (slap) offset to min tick digits 2023-01-13 12:51:01 -05:00
Tyler Goodlet 94290c7d8b `kraken`: ignore mismatched zero-ed pps (for now)
See more details in the GH comment:
https://github.com/pikers/piker/issues/373#issuecomment-1380988581

More or less we need to pull and include the transfer fees for
withdrawals in our ledger tracking but this serves as a sloppy
workaround for the moment.
2023-01-13 12:48:18 -05:00
Tyler Goodlet 73379d3627 Run CI on all PRs 2023-01-13 12:39:17 -05:00
Tyler Goodlet 23835f2c08 `deribit`: drop old `backfill_bars()` ep 2023-01-13 12:39:17 -05:00
Tyler Goodlet d2aee00a56 `kraken`: only do unsub if connected
Trying to send a message in the `NoBsWs.fixture()` exit when the ws is
currently disconnected causes a double `._stack.close()` call which
will corrupt `trio`'s coro stack. Instead only do the unsub if we detect
the ws is still up.

Also drops the legacy `backfill_bars()` module endpoint.

Fixes #437
2023-01-13 12:39:17 -05:00
Tyler Goodlet cf6e44cb9c Add `NoBsWs.connected()` predicate 2023-01-13 12:39:17 -05:00
Tyler Goodlet a146ad9e69 Never restart `ib-gw` containers on boot 2023-01-13 12:37:49 -05:00
Tyler Goodlet 70ad1a1860 `kraken`: don't presume src fiat symbol size in pos predicate 2023-01-13 12:37:49 -05:00
Tyler Goodlet f3ef73ef41 `kraken`: drop symbol token size =6 check 2023-01-13 12:37:49 -05:00
Tyler Goodlet a9832dc0cb `ib`: fix position log msg 2023-01-13 12:37:49 -05:00
Tyler Goodlet 9be245e955 `ib`: Add treasury yield futs to adhoc fqsn set 2023-01-13 12:37:49 -05:00
Tyler Goodlet 800773e585 ib: ignore throttles on `.get_head_time()` 2023-01-13 12:37:49 -05:00
goodboy 8d1eb81f16
Merge pull request #414 from pikers/agg_feedz
Agg feedz
2023-01-13 12:20:47 -05:00
Tyler Goodlet 963e5bdd62 Go back to `Feed.pause/resume()`, new flume APIs coming later 2023-01-10 11:09:19 -05:00
Tyler Goodlet 55de9abc41 Adjust cli mod imports of daemon sockaddr vars 2023-01-10 11:09:19 -05:00
Tyler Goodlet 593db0ed0d Only run `kraken` feed tests in CI, use `open_test_pikerd()` 2023-01-10 11:09:19 -05:00
Tyler Goodlet 06622105cd Add a `open_test_pikerd()` acm fixture for easy booting of the service stack 2023-01-10 11:09:19 -05:00
Tyler Goodlet 008ae47e14 Reset `._registry_addr` to any passed in value from caller 2023-01-10 11:09:19 -05:00
Tyler Goodlet 81585d9e6e Set global registry addr after first entry point spawns `pikerd` 2023-01-10 11:09:19 -05:00
Tyler Goodlet f6b7057b0d `binance`: always request an extra 1min OHLC bar
Seems that by default their history indexing rounds down/back to the
previous time step, so make sure we add a minute inside `Client.bars()`
when the `end_dt=None`, indicating "get the latest bar". Add
a breakpoint block that should trigger whenever the latest bar vs. the
latest epoch time is mismatched; we'll remove this after some testing
verifying the history bars issue is resolved.

Further this drops the legacy `backfill_bars()` endpoint which has been
deprecated and unused for a while.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 76f920a16b Always force lowercase on `binance` symbol keys
Hopefully helps resolve #435
2023-01-10 11:09:19 -05:00
Tyler Goodlet f232d6d4ee Add `ci_env` detector fixture 2023-01-10 11:09:19 -05:00
Tyler Goodlet b7e1443618 Use ETH on kraken to ensure enough quotes 2023-01-10 11:09:19 -05:00
Tyler Goodlet 5d021ffb85 Bump up timeout on multi-feed test for CI 2023-01-10 11:09:19 -05:00
Tyler Goodlet 28fd795280 Only require `-b <brokername>` for filtering
Instead of requiring any `-b`, try to import all built-in broker backend
python modules by default and only load those detected from the input
symbol list's fqsn values. In other words the `piker chart` cmd can now
be run without `-b` and that flag is only required if you only want to
load a subset of the built-ins or are trying to load a specific
not-yet-builtin backend.
2023-01-10 11:09:19 -05:00
Tyler Goodlet c944db5f02 Revert "Fix `_main()` arg back to `sym: str`"
This reverts commit 02fbc0a0ed.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 967e28b7ac Adjust built-in backend list to known working 2023-01-10 11:09:19 -05:00
Tyler Goodlet 2a158aea2c Rework `_FeedsBus` subscriptions mgmt using `set`
Allows using `set` ops for subscription management and guarantees no
duplicates per `brokerd` actor. New API is simpler for dynamic
pause/resume changes per `Feed`:
- `_FeedsBus.add_subs()`, `.get_subs()`, `.remove_subs()` all accept multi-sub
  `set` inputs.
- `Feed.pause()` / `.resume()` encapsulates management of *only* sending
  a msg on each unique underlying IPC msg stream.

Use new api in sampler task.
2023-01-10 11:09:19 -05:00
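A toy `_FeedsBus` showing why `set` ops fit here; method names are from
the commit text, internals are assumed:

```python
class _FeedsBus:
    def __init__(self) -> None:
        self._subscriptions: dict[str, set] = {}

    def add_subs(self, key: str, subs: set) -> set:
        # set-union guarantees no duplicate subs per `brokerd` actor
        existing = self._subscriptions.setdefault(key, set())
        existing |= subs
        return existing

    def get_subs(self, key: str) -> set:
        return self._subscriptions.get(key, set())

    def remove_subs(self, key: str, subs: set) -> set:
        existing = self._subscriptions.setdefault(key, set())
        existing -= subs
        return existing


bus = _FeedsBus()
bus.add_subs('xbtusd.kraken', {('sub-a', 22)})
bus.add_subs('xbtusd.kraken', {('sub-a', 22)})   # no-op: already present
assert len(bus.get_subs('xbtusd.kraken')) == 1
```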
Tyler Goodlet 88870fdda7 Set `brokers: list[str]` from mods when not provided.. 2023-01-10 11:09:19 -05:00
Tyler Goodlet 326f153a47 Catch overruns on throttled feed subs too
Previously we would only detect overruns and drop subscriptions on
non-throttled feed subs, however you can get the same issue with
a wrapping throttler task:
- the intermediate mem chan can be blocked either by the throttler task
  being too slow, in which case we still want to warn about it
- the stream's IPC channel actually breaks and we still want to drop
  the connection and subscription so it doesn't become a source of
  stale backpressure.
2023-01-10 11:09:19 -05:00
Tyler Goodlet f5cd63ad35 Ensure correct stream is set on each `Flume`
Set each quote-stream by matching the provider for each `Flume`, which
results in some flumes mapping to the same (multiplexed) stream.
Monkey-patch the equivalent `tractor.MsgStream._ctx: tractor.Context` on
each broadcast-receiver subscription to allow use by feed bus methods as
well as other internals which need to reference IPC channel/portal info.

Start a `_FeedsBus` subscription management API:
- add `.get_subs()` which returns the list of tuples registered for the
  given key (normally the fqsn).
- add `.remove_sub()` which allows removing by key and tuple value and
  provides encapsulation for sampler task(s) which deal with dropped
  connections/subscribers.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 1e96ca32df Move `maybe_open_feed()` above for readability 2023-01-10 11:09:19 -05:00
Tyler Goodlet c088963cf2 Always touch config file dir if dne 2023-01-10 11:09:19 -05:00
Tyler Goodlet 79fcbcc281 Add an sdist job to CI 2023-01-10 11:09:19 -05:00
Tyler Goodlet ddbba76095 Use (a new) `piker_pin` branch in `tractor` (again) 2023-01-10 11:09:19 -05:00
Tyler Goodlet 0a959c1c74 Not all accounts will have API trade transactions this session.. 2023-01-10 11:09:19 -05:00
Tyler Goodlet e348968113 Add multi-broker streaming test using both `binance` and `kraken` 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7bbe86d6fb Unpack broker mod and portal from fqsn for brokerd-trade-dialogs 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7b9db86753 Multi-`broker` quotes with `Feed.open_multi_stream()`
Adds provider-list-filtered (quote) stream multiplexing support allowing
for merged real-time `tractor.MsgStream`s using an `@acm` interface.
Behind the scenes we are just doing a classic multi-task push to common
mem chan approach.

Details to make it work on `Feed`:
- add `Feed.mods: dict[str, Moduletype]` and
  `Feed.portals[ModuleType, tractor.Portal]` which are both populated
  during init in `open_feed()`
- drop `Feed.portal` and `Feed.name`

Also fix a final lingering tsdb history loading loop termination bug.
2023-01-10 11:09:19 -05:00
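The "classic multi-task push to common mem chan" approach mentioned above
as a standalone `trio` sketch; plain async iterators stand in for the
real `tractor.MsgStream`s:

```python
from contextlib import asynccontextmanager as acm

import trio


@acm
async def open_multi_stream(streams: list):
    send, recv = trio.open_memory_channel(64)

    async def relay(stream, tx: trio.MemorySendChannel) -> None:
        async with tx:                  # close this clone when done
            async for msg in stream:
                await tx.send(msg)

    async with trio.open_nursery() as n:
        async with send:
            for stream in streams:
                n.start_soon(relay, stream, send.clone())
        # the consumer iterates until every relay's clone has closed
        yield recv
        n.cancel_scope.cancel()


async def demo() -> None:
    async def fake_feed(name: str):
        for i in range(2):
            yield {'provider': name, 'tick': i}

    async with open_multi_stream(
        [fake_feed('binance'), fake_feed('kraken')]
    ) as stream:
        async for quote in stream:
            print(quote)


trio.run(demo)
```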
Tyler Goodlet 20a396270e `Storage.read_ohlcv()` now returns a `numpy` array 2023-01-10 11:09:19 -05:00
Tyler Goodlet 81516c5204 Finally fix tsdb -> shm backfill loading
A slight facepalm, but the main issue was a simple indexing logic error:
we need to slice with `tsdb_history[-shm._first.value:]` to push most
recent history not oldest.. This allows cleanup of tsdb backfill loop as
well.

Further, greatly simplify `diff_history()` time slicing by using the
classic `numpy` conditional slice on the epoch field.
2023-01-10 11:09:19 -05:00
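For example, the "classic `numpy` conditional slice on the epoch field"
amounts to the following (toy frame dtype assumed):

```python
import numpy as np

# toy frame: structured array with an epoch 'time' field
dt = np.dtype([('time', 'f8'), ('close', 'f8')])
frame = np.array([(t, 1.0) for t in (10., 20., 30., 40.)], dtype=dt)
last_tsdb_t = 20.0

# keep only datums strictly newer than what the tsdb already has
to_push = frame[frame['time'] > last_tsdb_t]
assert list(to_push['time']) == [30.0, 40.0]
```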
Tyler Goodlet d6fb6fe3ae Just drop the pretty repr from our struct for now 2023-01-10 11:09:19 -05:00
Tyler Goodlet 8476d8d056 Fix partial-frame-missing backfill logic
This had a bug prior where the end of a frame (a partial) wasn't being
sliced correctly and we'd get odd gaps showing up between the
`brokerd`-backfilled data and the tsdb end index. Repair this by doing
timeframe-aware index
diffing in `diff_history()` which seems to resolve it. Also, use the
frame-result's `end_dt: datetime` for the loop exit condition.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 36868bb86e Add `kraken` test, ensure single broker-provider for now 2023-01-10 11:09:19 -05:00
Tyler Goodlet 29b6b3e54f Port `storesh` cli-cmd machinery to `Flume` apis 2023-01-10 11:09:19 -05:00
Tyler Goodlet 8a01c9e42b Fix broker-tail stripping using `str.removesuffix()` 2023-01-10 11:09:19 -05:00
Tyler Goodlet 2c4daf08e0 Adjust to per-fqsn-oriented `Flume` lookups throughout 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7daab6329d Make `Symbol` derive from internal `.types.Struct` 2023-01-10 11:09:19 -05:00
Tyler Goodlet bb6452b969 Further feed syncing fixes wrt to `Flumes`
Sync per-symbol sampler loop start to subscription registers such that
the loop can't start until the consumer's stream subscription is added;
the task-sync uses a `trio.Event`. This patch also drops a ton of
commented cruft.

Further adjustments needed to get parity with prior functionality:
- pass init msg 'symbol_info' field to the `Symbol.broker_info: dict`.
- ensure the `_FeedsBus._subscriptions` table uses the broker specific
  (without brokername suffix) as keys for lookup so that the sampler
  loop doesn't have to append in the brokername as a suffix.
- ensure the `open_feed_bus()` flumes-table-msg returned sent by
  `tractor.Context.started()` uses the `.to_msg()` form of all flume
  structs.
- ensure `maybe_open_feed()` uses `tractor.MsgStream.subscribe()` on all
  `Flume.stream`s on cache hits using the
  `tractor.trionics.gather_contexts()` helper.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 25bfe6f035 Use new |-union style type annots in sampling routines 2023-01-10 11:09:19 -05:00
Tyler Goodlet 32b36aa042 Expect init startup quotes from each symbol 2023-01-10 11:09:19 -05:00
Tyler Goodlet e7de5404d3 Add `Symbol.fqsn: str` property 2023-01-10 11:09:19 -05:00
Tyler Goodlet 18dc8b08e4 First draft aggregate feedz support
Orient shm-flow-arrays around the new idea of a `Flume` which provides
access, mgmt and basic measure of real-time data flow sets (see water
flow management semantics).

- We discard the previous idea of a "init message" which contained all
  the shm attachment info and instead send a startup message full of
  `Flume.to_msg()`s which are symmetrically loaded on the caller actor
  side.

- Create data-flows "entries" for every passed in fqsn such that the consumer gets back
  streams and shm for each, now all wrapped in `Flume` types. For now we
  allocate `brokermod.stream_quotes()` tasks 1-to-1 for each fqsn
  (instead of expecting each backend to do multi-plexing, though we
  might want that eventually) as well as a `_FeedsBus._subscriber` entry
  for each. The pause/resume management loop is adjusted to match.
  Previously `Feed`s were allocated 1-to-1 with each fqsn.

- Make `Feed` a `Struct` subtype instead of a `@dataclass` and move all
  flow specific attrs to the new `Flume`:
  - move `.index_stream()`, `.get_ds_info()` to `Flume`.
  - drop `.receive()`: each fqsn entry will now require knowledge of
    separate streams by feed users.
  - add multi-fqsn tables: `.flumes`, `.streams` which point to the
    appropriate per-symbol entries.

- Async load all `Flume`s from all contexts and all quote streams using
  `tractor.trionics.gather_contexts()` on the client `open_feed()` side.

- Update feeds test to include streaming 2 symbols on the same (binance)
  backend.
2023-01-10 11:09:18 -05:00
Tyler Goodlet 5bf3cb8e4b Just warn on `ib` symbol search lags 2023-01-10 11:09:18 -05:00
Tyler Goodlet c7d5db5f90 Start data feed layer test suite
Initial test that starts a `binance` feed and reads the quote messages
alongside shm buffers for 1s and 1m OHLC; just prints to console for
now.

Template out parametrization for multi-symbol quote-multiplexed feeds
which coming soon B)
2023-01-10 11:09:18 -05:00
Tyler Goodlet 1bf1965a8b Drop `tractor.log` level override fixture 2023-01-10 11:09:18 -05:00
Tyler Goodlet 051a8729b6 EMS: expect fqsn key in `Feed.symbols` 2023-01-10 11:09:18 -05:00
Tyler Goodlet 8e85ed92c8 Use new `GodWidget.load_symbols()` from search 2023-01-10 11:09:18 -05:00
Tyler Goodlet 2a9042b1b1 Make all UI entrypoints accept an fqsn `list`
This is to prep for multi-symbol feeds and charts so we accept
a sequence of fqsns to the top level entrypoints as well as the
`.data.feed.open_feed()` API (though we're not actually supporting true
multiplexed feeds nor shm lookups per fqsn yet).
2023-01-10 11:09:18 -05:00
Tyler Goodlet 344a634cb6 Always set fqsn in `Feed.symbols: dict` 2023-01-10 11:09:18 -05:00
Tyler Goodlet 508de6182a Drop duplicate live gateway from compose file for now 2023-01-10 11:09:18 -05:00
Tyler Goodlet 40000345a1 Only log pos size errors for `ib` 2023-01-10 11:09:18 -05:00
goodboy 220d38b4a9
Merge pull request #439 from pikers/binance_syminfo_fix
Update Binance exchange information
2023-01-10 11:08:19 -05:00
Esmeralda Gallardo 888438ca25
Add two attributes to Pair class to match Binance exchange information update 2023-01-10 10:18:40 -03:00
goodboy d84bcf77c0
Merge pull request #438 from pikers/msgspec_ordering
Msgspec field ordering
2023-01-09 19:01:12 -05:00
Guillermo Rodriguez 0474d66531
Switch msgspec struct ordering to always have required fields first and optionals last 2023-01-09 18:43:50 -03:00
algorandpa f218b804b4
Merge pull request #433 from pikers/add_config_dir_on_daemon_startup
Add config dir on daemon startup
2022-12-22 19:40:47 +00:00
Guillermo Rodriguez 7b14f498a8
Merge pull request #409 from esmegl/json_rpc_req
Added support for JSONRPC requests coming from the server side
2022-12-21 15:14:12 -03:00
Esmeralda Gallardo 18e4352faf
Deleted unused timeout logic 2022-12-19 14:55:06 -03:00
Esmeralda Gallardo a6e921548b
Modified recv_task(): added functionality to restart ws after timeout, modified match msg and added new case to match in case of receiving an error. 2022-12-19 13:48:18 -03:00
Esmeralda Gallardo 3f5dec82ed
Replaced try/except block in recv_task() by match msg, and added new changes to description comment 2022-12-19 13:48:17 -03:00
Esmeralda Gallardo db0b59abaa
Added support for JSONRPC requests coming from the server side 2022-12-19 13:48:10 -03:00
algorandpa f5bcd1d91c remove binance additions 2022-12-17 21:53:57 +00:00
algorandpa db11c3c0f8 add config dir on pikerd startup 2022-12-17 21:51:49 +00:00
Tyler Goodlet df6071ae9e `binance`: more fields.. `SelfTradePreventMode`.. 2022-12-15 22:23:56 +00:00
goodboy cc1694760c
Merge pull request #432 from pikers/kraken_limits_fields
Kraken limits fields
2022-12-10 16:12:54 -05:00
goodboy 4d8b22dd8f
Merge pull request #431 from pikers/cz_post_ftx
Cz post ftx
2022-12-10 16:08:39 -05:00
Tyler Goodlet fd296a557e Add position limit fields 2022-12-10 16:07:03 -05:00
Tyler Goodlet 0de2f863bd `kraken`: Explicitly report missing `Pair` fields in error 2022-12-10 16:07:03 -05:00
Tyler Goodlet de93da202b Reconnect on ping-pong errors too i guess? 2022-12-10 16:05:36 -05:00
Tyler Goodlet 5c459f21be Honestly, f$@%! you cz... 2022-12-10 16:05:36 -05:00
goodboy 5915cf3acf
Merge pull request #430 from pikers/catch_notification_daemon_error
Catch notification daemon error
2022-12-04 17:06:12 -05:00
algorandpa 997bf31bd4 remove spacing again 2022-12-04 21:19:34 +00:00
algorandpa f3427bb13b restore spacing 2022-12-04 21:15:41 +00:00
algorandpa 6fa266e3e0 wrap notification process in try catch and capture stderr data 2022-12-04 21:13:33 +00:00
Guillermo Rodriguez 019a6432fb
Merge pull request #421 from pikers/ib_contract_updates
`ib` futes contract consolidation fixes
2022-11-17 18:38:22 -03:00
goodboy 209e1085ae
Merge pull request #422 from pikers/kraken_pair_status
Add `.status: str` to kraken pairs..
2022-11-17 15:22:17 -05:00
Tyler Goodlet 0ef75e6aa6 Add `.status: str` to kraken pairs.. 2022-11-17 15:18:12 -05:00
Tyler Goodlet 243d0329f6 `Client.get_head_time()` seems unsupported for forex? 2022-11-17 15:12:10 -05:00
Tyler Goodlet a0ce9ecc0d Only append con suffix if not empty 2022-11-17 15:12:10 -05:00
Tyler Goodlet af9c30c3f5 Handle futes venue remaps as per oct-nov 2022 rollout 2022-11-17 15:12:10 -05:00
Zoltan ebbfa47baf
Merge pull request #419 from pikers/pre_multifeed_hotfix
HOTFIX: Fix `_main()` arg back to `sym: str`
2022-11-12 17:34:25 -05:00
Tyler Goodlet 02fbc0a0ed Fix `_main()` arg back to `sym: str`
This slipped in early from #414 before merge and was likely due to
cherry-picking from #417.
2022-11-12 16:26:21 -05:00
goodboy 4729e4c6bc
Merge pull request #418 from pikers/kraken_pair_updates
Kraken pair updates
2022-11-10 17:31:39 -05:00
goodboy a44b8e3e22
Merge pull request #417 from pikers/daemon_sockaddr_config
Daemon sockaddr config
2022-11-10 17:31:24 -05:00
goodboy 8a89303cb3
Merge pull request #415 from pikers/no_signal_pi_overlays
`Signal`-less pi overlays
2022-11-10 17:31:04 -05:00
Tyler Goodlet e547b307f6 Deflect 1s OHLC loading for `kraken` 2022-11-10 13:16:21 -05:00
Tyler Goodlet 72ec9b1e10 Add `Pair.tick_size` to `kraken` schema 2022-11-10 13:16:21 -05:00
Tyler Goodlet 40c70ae6d8 Drop unnecessary services var asserts? 2022-11-10 13:06:31 -05:00
Tyler Goodlet d3fefdeaff Expose registry sockaddr in `open_piker_runtime()` 2022-11-10 13:06:31 -05:00
Tyler Goodlet 8be005212f Expose `.open_feed()` and `open_piker_runtime()` eps at top level 2022-11-10 13:06:31 -05:00
Tyler Goodlet 5a2795e76b Passthrough registry sockaddr from chart cmd to daemon 2022-11-10 13:06:31 -05:00
Tyler Goodlet a987f0ab81 Add registry socket cli flags to all client cmds
Allows starting UI apps and passing the `pikerd` registry socket-addr
args via `--host` or `--port` such that a separate actor tree can be
started by selecting an unused port. This is handy when hacking new
features but while also wishing to run a more stable version of the code
for trading on the same host.
2022-11-10 13:06:31 -05:00
Tyler Goodlet d99b40317d Add a `pikerd -p <port_number>` flag 2022-11-10 13:06:31 -05:00
Tyler Goodlet 9ae519f6fa Re-work chart-overlay event broadcasting
Drop all attempts at rewiring `ViewBox` signals, monkey-patching
relayee handlers, and generally modifying event source public
attributes. Instead take a much simpler approach where the event source
graphics object simply has its handler dynamically overridden by
a broadcaster function which relays to all consumers using a Python
loop.

The benefits of this much simplified approach include:
- avoiding the tedious and often complex (re)connection of signals between
  the source plot and the overlayed consumers.
- requiring zero modification of the public interface of any of the
  publisher or consumer `ViewBox`s, no decoration, extra signal
  definitions (eg. previous `mouseDragEventRelay` or the like).
- only a single dynamic method override on the event source graphics object
  (`ViewBox`) which does the broadcasting work and requires no
  modification to handler implementations.

Detailed `.ui._overlay` changes:
- drop `mk_relay_signal()`, `enable_relays()` which removes signal/slot
  hacking methodology.
- drop unused `ComposedGridLayout.grid` and `.reverse`, change some
  method names: `.insert()` -> `.insert_plotitem()`, `append()` ->
  `.append_plotitem()`.
- in `PlotOverlay`, again drop all signal/slot rewiring in
  `.add_plotitem()` and instead add our new closure based python-loop in
  `broadcast()` routine which is used to override the event-source
  object's handler.
- comment out all the auxiliary/want-to-have event source selection
  methods for now.
2022-11-10 11:45:49 -05:00
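A Qt-free skeleton of the closure-override broadcast approach; the
`ViewBox.wheelEvent` names are stand-ins, no `pyqtgraph` types involved:

```python
def broadcast(source, consumers: list, handler_name: str) -> None:
    orig = getattr(source, handler_name)

    def relay(ev) -> None:
        orig(ev)                 # the source handles the event first
        for vb in consumers:     # then a plain python loop relays it
            getattr(vb, handler_name)(ev)

    # single dynamic override on the event source object; no
    # signal/slot rewiring or public interface changes required
    setattr(source, handler_name, relay)


class ViewBox:
    def __init__(self, name: str) -> None:
        self.name = name

    def wheelEvent(self, ev) -> None:
        print(f'{self.name} handled {ev}')


src, overlay = ViewBox('source'), ViewBox('overlay')
broadcast(src, [overlay], 'wheelEvent')
src.wheelEvent('zoom')   # now fans out to all overlay consumers
```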
Tyler Goodlet 8f3fe8e542 Back link auto-y-ranging to ohlc chart from vlm overlay fsp 2022-11-10 11:45:49 -05:00
Tyler Goodlet 490d85aba5 Drop fast chart buffer to 2 days worth 2022-11-10 11:45:49 -05:00
goodboy ba2e1e04cd
Merge pull request #413 from pikers/pg_exts_fork
Pg exts fork
2022-11-08 12:47:32 -05:00
Tyler Goodlet 5d4929db9c Pin to our `pyqtgraph` fork's master branch 2022-10-31 15:00:38 -04:00
Tyler Goodlet c41400ae18 Use `.setRect()`; not sure how this was ever working? 2022-10-31 14:58:35 -04:00
Tyler Goodlet e71bd2cb1e Move axis-tick-values lru caching into our existing `Axis` 2022-10-31 14:23:29 -04:00
Tyler Goodlet be24473fb4 Adjust remaining chart internals to pg extensions
Mainly this involves instantiating our overridden `PlotItem` in a few
places and tweaking type annots. A further detail is that inside
the fsp sub-chart creation code we hide some axes for overlays in the
flows subchart; these were previously somehow hidden implicitly?
2022-10-31 14:13:02 -04:00
Tyler Goodlet b524ea5c22 Extract and fork `pyqtgraph` upstream submissions
Fork out our patch set submitted to upstream in multiple PRs (since they
aren't moving and/or aren't a priority to core) which can be seen in
full from the following diff:
https://github.com/pyqtgraph/pyqtgraph/compare/master...pikers:pyqtgraph:graphics_pin

Move these type extensions into the internal `.ui._pg_overrides` module.

The changes are related to both `pyqtgraph.PlotItem` and `.AxisItem` and
were driven for our need for multi-view overlays (overlaid charts with
optionally synced axis and interaction controls) as documented in the PR
to upstream: https://github.com/pyqtgraph/pyqtgraph/pull/2162

More specifically,
- wrt to `AxisItem` we added lru caching of tick values as per:
  https://github.com/pyqtgraph/pyqtgraph/pull/2160.
- wrt to `PlotItem` we adjusted some of the axis management code, namely
  adding a standalone `.removeAxis()` and modifying the `.setAxisItems()` logic
  to use it in: https://github.com/pyqtgraph/pyqtgraph/pull/2162
  as well as some tweaks to `.updateGrid()` to loop through all possible
  axes when grid setting.
2022-10-31 09:37:32 -04:00
Tyler Goodlet d46945cb09 Move profiler imports to internal version 2022-10-31 09:26:36 -04:00
Tyler Goodlet 1d4fc6f327 Fork our latency tune-able profiler from `pyqtgraph.debug`
Details of the original patch to upstream are in:
https://github.com/pyqtgraph/pyqtgraph/pull/2281

Instead of trying to land this we've opted to just copy out that version
of `.debug.Profiler` into our own internals (luckily the class is
entirely self-contained) until such a time when we choose to find
a better dependency as per https://github.com/pikers/piker/issues/337
2022-10-30 21:11:27 -04:00
Tyler Goodlet 5976acbe76 `PyQt5` + `pyqtgraph` import updates (`QtGui -> `QtWidgets`) 2022-10-30 21:11:14 -04:00
goodboy 11ecf9cb09
Merge pull request #401 from pikers/ib_1m_hist
Ib 1m hist
2022-10-29 13:14:53 -04:00
goodboy 2dac531729
Merge pull request #410 from pikers/even_moar_kraken_order_fixes
Even moar `kraken` order fixes
2022-10-28 19:52:20 -04:00
Tyler Goodlet 1fadf58ab7 Add todo for order duration setting `goodTillDuration` 2022-10-28 17:50:09 -04:00
Tyler Goodlet ceca0d9fb7 Order ledger entries by processed datetime
To make it easier to manually read/decipher long ledger files this adds
`dict` sorting based on record-type-specific (api vs. flex report)
datetime processing prior to ledger file write.

- break up parsers into separate routines for flex and api record
  processing.
- add `parse_flex_dt()` for special handling of the weird semicolon
  stamps in flex reports.
2022-10-28 16:17:27 -04:00
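A sketch of the semicolon-stamp parsing; the `'YYYYMMDD;HHMMSS'` layout
is assumed from typical IB flex report output, so treat it as
illustrative only:

```python
from datetime import datetime


def parse_flex_dt(stamp: str) -> datetime:
    # assumed flex layout: date and time fused by a semicolon
    date, _, time = stamp.partition(';')
    return datetime.strptime(date + time, '%Y%m%d%H%M%S')


assert parse_flex_dt('20221028;161727') == datetime(2022, 10, 28, 16, 17, 27)
```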
Tyler Goodlet df16726211 Just wipe wrong timeframe filled tsdb colseries for now 2022-10-28 16:17:14 -04:00
Tyler Goodlet fb4f1732b6 Drop key error again 2022-10-28 16:17:14 -04:00
Tyler Goodlet d5b357b69a Raise `DataUnavailable` on >= 6 no data error events 2022-10-28 16:17:14 -04:00
Tyler Goodlet 610fb5f7c6 Drop `NoData` handler, just let it bubble 2022-10-28 16:17:14 -04:00
Tyler Goodlet 2b231ba631 Lul, fix timeframe key when writing history
There never was any underlying db bug, it was a hardcoded timeframe in
the column series write key.. Now we always assert a matching timeframe
in results.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 286228c290 Only wait on backfill if provider supports timeframe 2022-10-28 16:17:14 -04:00
Tyler Goodlet a1a24da7b6 Make `binance` reject 1s OHLC history requests 2022-10-28 16:17:14 -04:00
Tyler Goodlet 553d0557b6 Raise `DataUnavailable` when a contract's 'earliest time' is hit 2022-10-28 16:17:14 -04:00
Tyler Goodlet 2f7b272d8c Make `ib` client's `.get_head_time()` (only) expect an fqsn 2022-10-28 16:17:14 -04:00
Tyler Goodlet dc1edeecda Do tsdb backloading to shm concurrently
Not only improves startup latency but also avoids a bug where the rt
buffer was being tsdb-history prepended *before* the backfilling of
recent data from the backend was complete resulting in our of order
frames in shm.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 4ca7817735 Use feed-shm offsets in fill-arrow indexing arithmetic 2022-10-28 16:17:14 -04:00
Tyler Goodlet 5b63585398 Pack multi-chart region linking into helper
Factor the multi-sample-rate region UI connecting into a new helper
`link_views_with_region()` which reads in the shm buffer offsets from
the `Feed` and appropriately connects the fast and slow chart handlers
for the linear region graphics. Add detailed comments writeup for the
inter-sampling transform algebra.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 0000d9a314 Handle backends with no 1s OHLC history
If a history manager raises a `DataUnavailable` just assume the sample
rate isn't supported and that no shm prepends will be done. Further seed
the shm array in such cases as before from the 1m history's last datum.

Also, fix tsdb -> shm back-loading, cancelling tsdb queries when either
no array-data is returned or a frame is delivered which has a start time
no lesser than the least last retrieved. Use strict timeframes for every
`Storage` API call.
2022-10-28 16:17:14 -04:00
Tyler Goodlet f7ec66362e Only get dbus user on sudo-user-present 2022-10-28 16:17:14 -04:00
Tyler Goodlet b7ef0596b9 Drop remaining timeframe scanning from `.read_ohlcv()` 2022-10-28 16:17:14 -04:00
Tyler Goodlet 143e86a80c Handle super annoying mkts query bug..
Turns out querying for a high freq timeframe (like 1sec) will still
return a lower freq timeframe (like 1Min) SMH, and no idea if it's the
server or the client's fault, so we have to explicitly check the sample
step size and discard lower freq series-results. Do this inside
`Storage.read_ohlcv()` and return an empty `dict` when the wrong time
step is detected from the query result.

Further enforcements,
- both `.load()` and `read_ohlcv()` now require an explicit `timeframe:
  int` input to guarantee the time step of the output array.
- drop all calls to `.load()` with non-timeframe specific input.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 956c7d3435 Add concurrent multi-time-frame history loading
Our default sample periods are 60s (1m) for the history chart and 1s for
the fast chart. This patch adds concurrent loading of both (or more)
different sample period data sets using the existing loading code but
with new support for looping through a passed "timeframe" table which
points to each shm instance.

More detailed adjustments include:
- breaking the "basic" and tsdb loading into 2 new funcs:
  `basic_backfill()` and `tsdb_backfill()` the latter of which is run
  when the tsdb daemon is discovered.
- adjust the fast shm buffer to offset with one day's worth of 1s so
  that only up to a day is backfilled as history in the fast chart.
- adjust bus task starting in `manage_history()` to deliver back the
  offset indices for both fast and slow shms and set them on the
  `Feed` object as `.izero_hist/rt: int` values:
  - allows the chart-UI linked view region handlers to use the offsets
    in the view-linking-transform math to index-align the history and
    fast chart.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 330d16262e Add data-reset-task global state var
Allows keeping mutex state around data reset requests which (if more
than one are sent) can cause a throttling condition where ib's servers
will get slower and slower to conduct a reconnect. With this you can
have multiple ongoing contract requests without hitting that issue and
we can go back to having a nice 3s timeout on the history queries before
activating the hack.
2022-10-28 16:17:14 -04:00
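
The mutex-state pattern might look roughly like this (a `trio` sketch with hypothetical names): the first requester runs the reset, late arrivals just wait on it.

```python
import trio

_reset_done: trio.Event | None = None  # non-None => reset in flight


async def maybe_reset_data(run_reset) -> None:
    global _reset_done
    if _reset_done is not None:
        # a reset is already in progress: piggy-back on it instead of
        # re-spamming ib and triggering its reconnect throttling
        await _reset_done.wait()
        return

    _reset_done = trio.Event()
    try:
        await run_reset()  # the actual key-combo/vnc hack
    finally:
        _reset_done.set()
        _reset_done = None
```
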
Tyler Goodlet c7f57b940c Add back adhoc symbol lookup support, some exchs info is off 2022-10-28 16:17:14 -04:00
Tyler Goodlet 27bd3c07af Comment format tweak 2022-10-28 16:17:14 -04:00
Tyler Goodlet 55dc27a197 Subtract duration instead of passing to `.subtract()` (facepalm) 2022-10-28 16:17:14 -04:00
Tyler Goodlet a11f20fac2 Fix `piker services`; `tractor.run()` is done.. 2022-10-28 16:17:14 -04:00
Tyler Goodlet daebb78755 Re-request quote feed on data reset events
When a network outage or data feed connection is reset often the
`ib_insync` task will hang until some kind of (internal?) timeout takes
place or, in some (worst) cases it never re-establishes (the event
stream) and thus the backend needs to restart or the live feed will
never resume..

In order to avoid this issue once and for all this patch implements an
additional (extremely simple) task that is started with the real-time
feed and simply waits for any market data reset events; when detected
restarts the `open_aio_quote_stream()` call in a loop using
a surrounding cancel scope.

Been meaning to implement this for ages and it's finally working!
2022-10-28 16:17:14 -04:00
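
In outline the restart task is just a cancel-and-rebuild loop (trio sketch; `open_aio_quote_stream()` is the backend api named above, the reset-event channel is hypothetical):

```python
import trio


async def requote_on_reset(
    symbol: str,
    reset_rx: trio.MemoryReceiveChannel,
    send_quote,
) -> None:
    while True:
        async with trio.open_nursery() as n:

            async def relay() -> None:
                async with open_aio_quote_stream(symbol) as stream:
                    async for quote in stream:
                        await send_quote(quote)

            n.start_soon(relay)
            # block until any market-data reset event fires, then
            # tear down and rebuild the asyncio-side stream
            await reset_rx.receive()
            n.cancel_scope.cancel()
```
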
Tyler Goodlet 90a395a069 Support no-disconnect on `open_aio_clients()` exit
Allows for easier restarts of certain `trio` side tasks without killing
the `asyncio`-side clients; support via flag.

Also fix a bug in `Client.bars()`: we need to return the duration on the
empty bars case..
2022-10-28 16:17:14 -04:00
Tyler Goodlet 23d0353934 Drop duplicate frame request
Must have gotten left in during refactor from the `trimeter` version?
Drop down to 6 years for 1m sampling.
2022-10-28 16:17:14 -04:00
Tyler Goodlet ede67ed184 Return history-frame duration from `.bars()`
This allows the history manager to know the decrement size for
`end_dt: datetime` on the next query if a no-data / gap case was
encountered; subtract this in `get_bars()` in such cases. Define the
expected `pendulum.Duration`s in the `.api._samplings` table.

Also add a bit of query latency profiling that we may use later to more
dynamically determine timeout driven data feed resets. Factor the `162`
error cases into a common exception handler block.
2022-10-28 16:17:14 -04:00
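
The gap-case decrement then reduces to a table lookup (sketch; the durations shown are placeholders, not the real `_samplings` values):

```python
import pendulum

# frame duration per OHLC sample period (placeholder values)
_samplings: dict[int, pendulum.Duration] = {
    1: pendulum.duration(seconds=2000),
    60: pendulum.duration(days=1),
}


def next_query_end(
    end_dt: pendulum.DateTime,
    timeframe: int,
) -> pendulum.DateTime:
    # on a no-data/gap response step the request window back one
    # full frame instead of re-querying the same empty range
    return end_dt - _samplings[timeframe]
```
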
Tyler Goodlet 811d21e111 Explicit fast chart naming, auto-yrange the fast chart on increment 2022-10-28 16:17:14 -04:00
Tyler Goodlet 54567d33da More correct no-data output handling
When we get a timeout or a `NoData` condition still return a tuple of
empty sequences instead of `None` from `Client.bars()`. Move the
sampling period-duration table to module level.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 61ca5f7e19 Drop `trimeter`-ized concurrent history querying
It doesn't seem to be any slower on our least throttled backend
(binance) and it removes a bunch of hard to get correct frame
re-ordering logic that i'm not sure really ever fully worked XD

Commented some issues we still need to resolve as well.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 7396624be0 Rework history frame request concurrency
Manual tinker-testing demonstrated that triggering data resets
completely independent of the frame request gets more throughput and
further, that repeated requests (for the same frame after cancelling on
the `trio`-side) can yield duplicate frame responses. Re-work the
dual-task structure to instead have one task wait indefinitely on the
frame response (and thus not trigger duplicate frames) and the 2nd data
reset task poll for the first task to complete in a poll loop which
terminates when the frame arrives via an event.

Dirty deatz:
- make `get_bars()` take an optional timeout (which will eventually be
  dynamically passed from the history mgmt machinery) and move request
  logic inside a new `query()` closure meant to be spawned in a task
  which sets an event on frame arrival, add data reset poll loop in the
  main/parent task, deliver result on nursery completion.
- handle frame request cancelled event case without crash.
- on no-frame result (due to real history gap) hack in a 1 day decrement
  case which we need to eventually allow the caller to control likely
  based on measured frame rx latency.
- make `wait_on_data_reset()` a predicate without output indicating
  reset success as well as `trio.Nursery.start()` compat so that it can
  be started in a new task with the started values yielded being
  a cancel scope and completion event.
- drop the legacy `backfill_bars()`, no longer used.
2022-10-28 16:17:13 -04:00
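
The dual-task structure compresses to roughly the following (trio sketch; `trigger_data_reset()` is a hypothetical stand-in for the feed-reset hack):

```python
import trio


async def get_bars(client, fqsn: str, end_dt, timeout: float = 3.0):
    frame = None
    frame_ready = trio.Event()

    async def query() -> None:
        nonlocal frame
        # wait *indefinitely* on the single frame response so that
        # cancel/re-request cycles can't yield duplicate frames
        frame = await client.bars(fqsn, end_dt=end_dt)
        frame_ready.set()

    async with trio.open_nursery() as n:
        n.start_soon(query)

        # poll loop: trigger data-feed resets until the frame lands
        while not frame_ready.is_set():
            with trio.move_on_after(timeout):
                await frame_ready.wait()
                continue
            await trigger_data_reset(client)  # hypothetical

    return frame
```
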
Tyler Goodlet 25b90afbdb Add `timeframe` input to `kraken` history api 2022-10-28 16:17:13 -04:00
Tyler Goodlet 72dfeb2b4e Pass back internal cancel scope from data reset task 2022-10-28 16:17:13 -04:00
Tyler Goodlet 6b34c9e866 Temporarily disable error on pos size mismatch 2022-10-28 16:17:13 -04:00
Tyler Goodlet e7ec01b8e6 Pass in default history time of 1 min
Adjust all history query machinery to pass a `timeframe: int` in seconds
and set default of 60 (aka 1m) such that history views from here forward
will be 1m sampled OHLCV. Further, when the tsdb is detected as up, load
a full 10 years of data if possible on the 1m - backends will eventually
get a config section (`brokers.toml`) that allows users to tune this.
2022-10-28 16:17:13 -04:00
Tyler Goodlet fce7055c62 Make `binance` history api accept a timeframe 2022-10-28 16:17:13 -04:00
Tyler Goodlet bf7d5e9a71 Make `marketstore` storage api timeframe aware
The `Store.load()`, `.read_ohlcv()` and `.write_ohlcv()` and
`.delete_ts()` now can take a `timeframe: Optional[float]` param which
is used to look up the appropriate sampling period table-key from
`marketstore`.
2022-10-28 16:17:13 -04:00
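
The lookup is essentially a static table keyed on seconds-per-sample (sketch; the '1Sec'/'1Min' strings follow marketstore's timeframe naming conventions):

```python
# seconds-per-sample -> marketstore timeframe key (sketch)
_tf_keys: dict[float, str] = {
    1: '1Sec',
    60: '1Min',
}


def mk_tbk(fqsn: str, timeframe: float = 60) -> str:
    # marketstore "time bucket key": <symbol>/<timeframe>/<attr-group>
    return f'{fqsn}/{_tf_keys[timeframe]}/OHLCV'
```
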
Tyler Goodlet 2a866dde65 Make history routines `timeframe` aware
Allow data feed sub-system to specify the timeframe (aka OHLC sample
period) to the `open_history_client()` delivered history fetching API.
Factor the data keycombo hack into a new routine to be used also from
the history backfiller code when request latency increases; there is
a first draft at trying to use the feed reset to speed up 1m frame
throttling by timing out on the history frame response, but it needs
a lot of fine tuning.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 220981e718 Add 1m ohlc sample rate support to `Client.bars()`; frame query is 1 day 2022-10-28 16:17:13 -04:00
Tyler Goodlet 8537a4091b Use new `Status.cancel_called` in EMS msg loops 2022-10-28 16:16:45 -04:00
Tyler Goodlet 71a11a23bd Add `Status.cancel_called: bool`
This is a simpler (and oddly more `trio`-nic and/or SC) way to handle
the cancelled-before-acked race for order dialogs. Will allow keeping
the `.req` field as solely an `Order` msg.
2022-10-28 16:16:45 -04:00
Tyler Goodlet fa368b1263 'Just getitem access the 'action' from req msg' 2022-10-28 16:16:45 -04:00
Tyler Goodlet e6dd1458f8 `kraken`: the apiflows chain map needs a `dict` 2022-10-28 16:16:45 -04:00
Tyler Goodlet 9486d993ce Drop order mode settings change logmsgs to `.runtime` again 2022-10-28 16:16:45 -04:00
Tyler Goodlet 30994dac10 Better handle order-cancelled-but-not-yet-acked races
When the client is faster than a `brokerd` at submitting and cancelling
an order we run into the case where we need to specify that the EMS
cancels the order-flow as soon as the brokerd's ack arrives. Previously
we were stashing a `BrokerdCancel` msg as the `Status.req` msg (to be
both tested for as an "already cancelled" and sent immediately to the
broker on ack arrival), but for such cases we can't use that msg to
find the fqsn (since only the client side msgs have it defined) which
is required by the new `Router.client_broadcast()`.

So, since `Status.req` is supposed to be a client-side flow msg anyway,
and we need the fqsn for client broadcasting, we change this `.req`
value to the client's submitted `Cancel` msg (thus rectifying the
missing `Router.client_broadcast()` fqsn input issue) and build the
`BrokerdCancel` request from that `Cancel` inline in the relay loop
from the `.req: Cancel` status msg lookup.

Further we allow `Cancel` msgs to define an `.account` and adjust the
order mode loop to expect `Cancel` source requests in cancelled status
updates.
2022-10-28 16:16:45 -04:00
Tyler Goodlet 8a61211c8c Handle brokerd errors even when no client-side-status found 2022-10-28 16:16:45 -04:00
Tyler Goodlet c43f7eb656 Fix missing `costmin: float` field in pair msgs 2022-10-28 16:16:45 -04:00
goodboy d05caa4b02
Merge pull request #411 from pikers/ci_fix_tractor_testing
Drop `tractor.testing` import in qt tests
2022-10-28 16:15:47 -04:00
Tyler Goodlet 63e9af002d Drop `tractor.testing` import in qt tests 2022-10-28 16:09:55 -04:00
goodboy 5144299f4f
Merge pull request #408 from pikers/offline_dark_clearing
Offline dark clearing
2022-10-10 09:25:59 -04:00
Tyler Goodlet c437f9370a Factor out all `maybe_open_context()` guff 2022-10-07 14:13:52 -04:00
Tyler Goodlet 94f81587ab Cache EMS trade relay tasks on feed fqsn
Except for paper accounts (in which case we need a trades dialog and
paper engine per symbol to enable simulated clearing) we can rely on the
instrument feed (symbol name) to be the caching key. Utilize
`tractor.trionics.maybe_open_context()` and the new key-as-callable
support in the paper case to ensure we have separate paper clearing
loops per symbol.

Requires https://github.com/goodboy/tractor/pull/329
2022-10-07 14:13:52 -04:00
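
The caching call shape, very roughly (a sketch: treat the exact `maybe_open_context()` signature and the `open_trades_dialog()` acm as approximations):

```python
from contextlib import asynccontextmanager as acm

from tractor import trionics


def relay_cache_key(fqsn: str, broker: str) -> str:
    # paper accounts need a clearing loop *per symbol*; otherwise one
    # relay per broker feed is enough
    return fqsn if broker == 'paper' else broker


@acm
async def maybe_open_trade_relay(fqsn: str, broker: str):
    async with trionics.maybe_open_context(
        acm_func=open_trades_dialog,  # hypothetical @acm
        kwargs={'fqsn': fqsn, 'broker': broker},
        key=relay_cache_key,  # callable key support per the above
    ) as (cache_hit, relay):
        yield relay
```
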
Tyler Goodlet 2bc25e3593 Repair already-open order relay, fix causality dilemma
With the refactor of the dark loop into a daemon task already-open order
relaying from a `brokerd` was broken since no subscribed clients were
registered prior to the relay loop sending status msgs for such existing
live orders. Repair that by adding one more synchronization phase to the
`Router.open_trade_relays()` task: deliver a `client_ready: trio.Event`
which is set by the client task once the client stream has been
established and don't start the `brokerd` order dialog relay loop until
this event is ready.

Further implementation deats:
- factor the `brokerd` relay caching back into its own `@acm` method:
  `maybe_open_brokerd_dialog()` since we do want this (but only this)
  stream singleton-cached per broker backend.
- spawn all relay tasks on every entry for the moment until we figure
  out what we're caching against (any client pre-existing right, which
  would mean there's an entry in the `.subscribers` table?)
- rename `_DarkBook` -> `DarkBook` and `DarkBook.orders` -> `.triggers`
2022-10-07 14:13:52 -04:00
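
The added synchronization phase in miniature (trio sketch; the relay-loop name is illustrative):

```python
import trio


async def open_trade_relays(
    fqsn: str,
    client_ready: trio.Event,
    task_status=trio.TASK_STATUS_IGNORED,
) -> None:
    # ... feed + brokerd dialog setup elided ...
    task_status.started()

    # don't relay pre-existing live-order status msgs until at least
    # one client stream is registered to actually receive them
    await client_ready.wait()
    await relay_brokerd_events(fqsn)  # hypothetical relay loop

# client-task side, once its stream is up:
#   router.subscribers[fqsn].add(client_stream)
#   client_ready.set()
```
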
Tyler Goodlet 1d9ab7b0de More direct import 2022-10-07 14:13:52 -04:00
Tyler Goodlet 4c96a4878e Process unknown order mode msgs 2022-10-07 14:13:52 -04:00
Tyler Goodlet 8cd56cb6d3 Flip ems-side-client (`OrderBook`) to be a struct
`@dataclass` is so 2 years ago ;)

Also rename `.update()` -> `.send_update()` to be a bit more explicit
about actually sending an update msg.
2022-10-07 14:13:52 -04:00
Tyler Goodlet c246dcef6f Drop uuid from notify func inputs 2022-10-07 14:13:52 -04:00
Tyler Goodlet 26d6e10ad7 Parameterize duration, pprint msg 2022-10-07 14:13:52 -04:00
Tyler Goodlet 3924c66bd0 Move headless notifies into `.client_broadcast()` 2022-10-07 14:13:52 -04:00
Tyler Goodlet 2fbfe583dd Drop the `Router.clients: set`, `.subscribers` is enough 2022-10-07 14:13:52 -04:00
Tyler Goodlet 525f805cdb Port order mode to new notify routine 2022-10-07 14:13:52 -04:00
Tyler Goodlet b65c02336d Don't short circuit relay loop when headless
If no clients are connected we now process as normal and try to fire
a desktop notification on linux.
2022-10-07 14:13:52 -04:00
Tyler Goodlet d3abfce540 Start notify mod, linux only 2022-10-07 14:13:52 -04:00
Tyler Goodlet 49433ea87d Run dark-clear-loop in daemon task
This enables "headless" dark order matching and clearing where an `emsd`
daemon subactor can be left running with active dark (or other
algorithmic) orders which will still trigger despite no
attached-controlling ems-client.

Impl details:
- rename/add `Router.maybe_open_trade_relays()` which now does all work
  of starting up ems-side long living clearing and relay tasks and the
  associated data feed; make it a `Nursery.start()`-able task instead of
  an `@acm`.
- drop `open_brokerd_trades_dialog()` and move/factor contents into the
  above method.
- add support for a `router.client_broadcast('all', msg)` to wholesale
  fan out a msg to all clients.
2022-10-07 14:13:52 -04:00
goodboy 31b0d8cee8
Merge pull request #402 from pikers/multi_client_order_mgt
Multi client order mgt
2022-10-05 01:46:09 -04:00
Tyler Goodlet 35871d0213 Support line update from `Order` msg in `.on_submit()` 2022-10-05 01:41:18 -04:00
Tyler Goodlet 4877af9bc3 Add pub-sub broadcasting
Establishes a more formalized subscription based fan out pattern to ems
clients who subscribe for order flow for a particular symbol (the fqsn
is the default subscription key for now).

Make `Router.client_broadcast()` take a `sub_key: str` value which
determines the set of clients to forward a message to and drop all such
manually defined broadcast loops from task (func) code. Also add
`.get_subs()` which (hackily) allows getting the set of clients for
a given sub key where any stream that is detected as "closed" is
discarded in the output. Further we simplify to `Router.dialogs:
defaultdict[str, set[tractor.MsgStream]]` and `.subscriptions` as maps
to sets of streams for much easier broadcast management/logic using set
operations inside `.client_broadcast()`.
2022-10-05 01:41:18 -04:00
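
Distilled, the broadcast path looks something like this (sketch; the `._closed` peek mirrors the "hacky" closed-stream detection described above):

```python
from collections import defaultdict

import tractor
import trio


class Router:

    def __init__(self) -> None:
        # sub_key (the fqsn for now) -> set of subscribed client streams
        self.subscribers: defaultdict[
            str, set[tractor.MsgStream]
        ] = defaultdict(set)

    def get_subs(self, sub_key: str) -> set[tractor.MsgStream]:
        # (hackily) drop any stream that already looks closed
        return {
            stream for stream in self.subscribers[sub_key]
            if not stream._closed
        }

    async def client_broadcast(self, sub_key: str, msg: dict) -> None:
        for stream in self.get_subs(sub_key):
            try:
                await stream.send(msg)
            except trio.ClosedResourceError:
                self.subscribers[sub_key].discard(stream)
```
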
Tyler Goodlet 909e068121 Support multi-client order-dialog management
This patch was originally to fix a bug where new clients who
re-connected to an `emsd` that was running a paper engine were not
getting updates from new fills and/or cancels. It turns out the solution
is more general: now, any client that creates an order dialog will be
subscribing to receive updates on the order flow set mapped for that
symbol/instrument as long as the client has registered for that
particular fqsn with the EMS. This means re-connecting clients as well
as "monitoring" clients can see the same orders, alerts, fills and
clears.

Impl details:
- change all var names spelled as `dialogues` -> `dialogs` to be
  murican.
- make `Router.dialogs: dict[str, defaultdict[str, list]]` so that each
  dialog id (oid) maps to a set of potential subscribing ems clients.
- add `Router.fqsn2dialogs: dict[str, list[str]]` a map of fqsn entries to
  sets of oids.
- adjust all core task code to make appropriate lookups into these 2 new
  tables instead of being handed specific client streams as input.
- start the `translate_and_relay_brokerd_events` task as a daemon task
  that lives with the particular `TradesRelay` such that dialogs cleared
  while no client is connected are still processed.
- rename `TradesRelay.brokerd_dialogue` -> `.brokerd_stream`
- broadcast all status msgs to all subscribed clients in the relay loop.
- always de-reg each client stream from the `Router.dialogs` table on close.
2022-10-05 01:41:18 -04:00
Tyler Goodlet cf835b97ca Add some info logs around paper fills 2022-10-05 01:41:18 -04:00
Tyler Goodlet 30bce42c0b Don't spin paper clear loop on non-clearing ticks
Not sure what exactly happened but it seemed clears weren't working in
some cases without this, also there's no point in spinning the simulated
clearing loop if we're handling a non-clearing tick type.
2022-10-05 01:41:18 -04:00
Tyler Goodlet 48ff4859e6 Update to new pair schema, adds `.cost_decimals` field 2022-10-05 01:41:18 -04:00
Tyler Goodlet 887583d27f Bleh, convert fill data to `float`s in kraken broker.. 2022-10-05 01:41:18 -04:00
Tyler Goodlet 45b97bf6c3 Make fill msg `.action: str` optional for `kraken` 2022-10-05 01:41:18 -04:00
Tyler Goodlet 91397b85a4 Fix missing f-str in ems msg sender err block 2022-10-05 01:41:18 -04:00
Tyler Goodlet 47f81b31af Kraken can cause status msg key error!? 2022-10-05 01:41:18 -04:00
goodboy 30c452cfd0
Merge pull request #404 from pikers/pin_tractor_main
Pin back to `tractor` master branch
2022-10-04 09:53:02 -04:00
Tyler Goodlet fda1c5b554 Pin back to `tractor` master branch 2022-10-03 13:48:58 -04:00
goodboy d6c9834a9a
Merge pull request #395 from pikers/history_view
History view
2022-09-23 20:28:02 -04:00
Tyler Goodlet 41b0c11aaa Hide existing level line markers on startup 2022-09-23 17:17:32 -04:00
Tyler Goodlet cc67d23eee Drop old marker drawing code from `LevelLine.paint()`
We haven't been using it for a while and the supposed (remembered)
latency issue on interaction doesn't seem to exist after applying the
cache mode. This allows dropping some internal state-logic and generally
simplifying the show-on-hover checks.

Further add `.show_markers()` and `.hide_markers()` as explicit methods
that can be called externally by UI business logic.
2022-09-23 17:17:32 -04:00
Tyler Goodlet 4818af1445 Add better doc string on marker factory 2022-09-21 15:43:35 -04:00
Tyler Goodlet 2cf1742999 Always apply at least the pos size as the limit 2022-09-21 15:43:35 -04:00
Tyler Goodlet 25ac6e6665 Soft pop lines, handle error-cancel races 2022-09-21 15:43:35 -04:00
Tyler Goodlet 90754f979b Tick the slow chart task on a 1sec index event 2022-09-19 17:39:26 -04:00
Tyler Goodlet c0d490ed63 Only show pos nav on non-zero size 2022-09-19 16:17:05 -04:00
Tyler Goodlet 7c6d12d982 Always set marker y-pos even if we're tracking its x-pos 2022-09-19 16:17:05 -04:00
Tyler Goodlet fd8c05e024 A lines entry should always exist or it's a bug 2022-09-19 16:17:05 -04:00
Tyler Goodlet 5d65c86c84 Don't delete pp lines or markers
Bit of a face palm but obviously `LevelLine.delete()` also removes any
`._marker` from the view which makes it disappear permanently when
moving from non-zero to zero to non-zero positions.. We don't really
need to delete the line since it can be re-used so just remove that
code.

Further this patch removes marker style setting logic from within the
`pp_line()` factory and instead expects the caller to set the correct
"direction" (for long / short) afterward.
2022-09-19 16:17:05 -04:00
Tyler Goodlet cf11e8d7d8 Update navs on all slow and fast charts, only default the fast chart on switch 2022-09-19 16:17:05 -04:00
Tyler Goodlet ed868f6246 Go back to origin slow chart split proportion 2022-09-19 16:17:05 -04:00
goodboy 5d371ad80e
Merge pull request #396 from pikers/tractor_core_port
Tractor core port
2022-09-16 18:09:33 -04:00
Tyler Goodlet 6897aed6b6 Don't call show on marker in `Nav.show()` 2022-09-14 16:02:07 -04:00
Tyler Goodlet a61a11f86b Add draft but commented "scale-to-fast-chart" logic 2022-09-14 10:11:43 -04:00
Tyler Goodlet 286f620f8e Use fqsn to key pnl tasks 2022-09-13 18:59:12 -04:00
Tyler Goodlet b7e60b9653 Hide labels, show markers for lines on slow chart 2022-09-13 18:31:21 -04:00
Tyler Goodlet df42e7acc4 Add `LevelLine.get_cursor()` to get any currently hovering mouse-cursor 2022-09-13 18:26:06 -04:00
Tyler Goodlet e492e9ca0c Fix pp arrow/label placement bugs
- Every time a symbol is switched on chart we need to wait until the
  search bar sidepane has been added beside the slow chart before
  determining the offset for the pp line's arrow/labels; trigger this in
  `GodWidget.load_symbol()` -> required monkeypatching a
  `.mode: OrderMode` onto the `.rt_linked` for now..
- Drop the search pane widget removal from the current linked chart,
  seems faster?
- On the slow chart override the `LevelMarker.scene_x()` callback to
  adjust for the case where no L1 labels are shown beside the y-axis.
2022-09-13 17:58:20 -04:00
Tyler Goodlet 44c6f6dfda Add level line flag to allow tracking its marker x-position 2022-09-13 17:43:04 -04:00
Tyler Goodlet ad2100fe3f Only don't pp arrow on startup 2022-09-13 16:21:49 -04:00
Tyler Goodlet ae64ac79a6 Doc str tweaks 2022-09-13 16:13:46 -04:00
Tyler Goodlet 20663dfa1c Add (more) order mode race guards to avoid crashes on "kitty-keys" 2022-09-12 20:25:15 -04:00
Tyler Goodlet 70f2241d22 Hide pp markers on startup 2022-09-12 20:25:15 -04:00
Tyler Goodlet b3fcc25e21 Add extra row count for header, drop prints 2022-09-12 20:25:15 -04:00
Tyler Goodlet 4f15ce346b Drop splitter resizes except for once at startup
Also adds a `GodWidget.resize_all()` helper method which resizes all
sub-widgets and charts to their default ratios and/or parent-widget
dependent defaults using the detected available space on screen. This is
a "default layout" config method that eventually we'll probably want
allow users to customize.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 445849337f Always resize to slow chart height, not just on changes 2022-09-12 20:25:15 -04:00
Tyler Goodlet 3fd7107e08 Scale view to measured results row count
In other words instead of some static view size previously determined by
the accompanying (slow) chart's height, (recursively) calculate the
number of displayed rows and compute the minimal height needed. This
still caps the view at the height of the chart such that the view will
switch to scroll bar mode when too many results are shown and can't all
be fit in the vertical space.

Deats:
- add a ``CompleterView.iter_df_rows()`` which recursively iterates all
  rows in depth-first order making it simple to compute the absolute
  number of result rows in view and thus the minimal number of pixels to
  show all results.
- always pass the height in the `.on_resize()` handler to ensure
  triggering the height logic when new results are generated in the
  search loop.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 73a02d54b7 Down size the slots bar by .9 2022-09-12 20:25:15 -04:00
Tyler Goodlet b734af6dd0 Only delete lines under cursor if not `None` 2022-09-12 20:25:15 -04:00
Tyler Goodlet f7c0ee930a Offset last (live) datum from y-axis by a 16th 2022-09-12 20:25:15 -04:00
Tyler Goodlet ead426abc4 More space to fast chart(s), less to slow chart 2022-09-12 20:25:15 -04:00
Tyler Goodlet bcd6bbb7ca Increase the `brokerd` mem-chan size
Intention is to hopefully minimize (as many) context switches when
processing (near-)HFT feeds - tho not sure if it's improving things that
much XD
2022-09-12 20:25:15 -04:00
Tyler Goodlet 80929d080f Add more detailed splitter of splitters comment 2022-09-12 20:25:15 -04:00
Tyler Goodlet eed47b3733 Add splitter move handler which calls search widget resizer method 2022-09-12 20:25:15 -04:00
Tyler Goodlet d5f0c59b57 Ignore resize events with the same height (for now) 2022-09-12 20:25:15 -04:00
Tyler Goodlet d11dc787a1 First working attempt of search results view scaling
Scales the "view" instance that holds search results to the size of the
accompanying "slow chart" for which the search pane is a "sidepane".
A lot of mucking about was required due to resizing of the view
seemingly feeding back into window resizing and further implementing the
sizing logic such that the parent `QSplitter` can be resized as the
user's whim as well.

Details,
- add a `CompleterView._init: bool` which is set once (and only once)
  after startup where the first display of the current symbol/feed is
  shown, allowing a single *width* padding to be applied once at startup
  to ensure we don't have an awkward line to the right of the longest
  result.
- in `.resize_to_results()` only apply a minimum height to the view
  using `.setMinimumHeight()` with a down-scaled (`0.91` for now) height
  value from input.
- re-implement `CompleterView.show_matches()` to accept an optional
  width, height tuple and when not supplied pull the slow chart's
  dimensions and pass as input to the resize method.
- Make `SearchWidget` x dim sizing policy "fixed".
- register the `SearchWidget` for resize events with god.
- add `.show_only_cache_entries()` for easy results clearing.
- add `.space_dims()` to retrieve slow linked-charts dimensions.
- implement `SearchWidget.on_resize()` which is the caller of all the
  previously mentioned resizing routines.
- do resizing and cache entry showing on search loop startup and be sure
  to clear to cache when the user selects a symbol-feed with Enter.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 1e81feee46 Finally get chart startup view-state kinda correct
It ended up being what you'd expect, races on accessing shm buffer
data by the UI during the whole "mega-async-startup-everything" phase XD

So we add the following list of ad-hoc startup steps:
- do `.default_view()` on the slow chart after the fast chart is mostly
  fully spawned with the intention being to capture the state where the
  historical buffer is mostly loaded before sizing the view to the
  graphical form of the data.
- resize slow chart sidepanes from the fast chart just before sleeping
  forever (and after order mode has booted).
2022-09-12 20:25:15 -04:00
Tyler Goodlet 40a9761943 Actually support resize events..
Turns out god widget resizes aren't triggered implicitly by window
resizes, so instead, hook into the window by moving what was our useless
method to that class. Further we explicitly define and declare that our
window has a `.godwidget: GodWidget` and set it up in the bootstrap
phase - in `run_qutractor()` during `trio` guest mode configuration.

Further deatz:
- retype the runtime/bootstrap routines to take a qwidget "type" not an
  instance, and drop the whole implicit `.main_widget` stuff.
- delegate into the `GodWidget.on_win_resize()` for any window resize
  which then triggers all the custom resize callbacks we already had in
  place.
- privatize `ChartnPane.sidepane` so that it can't be mutated willy
  nilly without calling `.set_sidepane()`.
- always adjust splitter sizes inside `LinkedSplits.add_plot()`.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 256bcf36d3 Drop use `tractor.trionics.gather_contexts()` in `open_handlers()` 2022-09-12 20:25:15 -04:00
Tyler Goodlet 9944277096 Handle null lines that were removed, don't error on bad $size 2022-09-12 20:25:15 -04:00
Tyler Goodlet f9dc5637fa Use rt buffer for last price on nan in ems 2022-09-12 20:25:15 -04:00
Tyler Goodlet addedc20f1 WIP search pane always shown.. 2022-09-12 20:25:15 -04:00
Tyler Goodlet 1fa6e8d9ba Only show slow chart xlabel when focussed 2022-09-12 20:25:15 -04:00
Tyler Goodlet 2a06dc997f Use pixel caching on our level lines 2022-09-12 20:25:15 -04:00
Tyler Goodlet 6b93eedcda Port to new `._position.Nav` apis in order mode 2022-09-12 20:25:15 -04:00
Tyler Goodlet a786df65de Factor pos tracker UI element mgmt into new type
More or less moves all the UI related position "nav" logic and graphics
item management into a new `._position.Nav` composite type + api for
high level mgmt of position graphics indicators across multiple charts
(fast and slow).
2022-09-12 20:25:15 -04:00
Tyler Goodlet 8f2823d5f0 Stage line only on active cursor chart 2022-09-12 20:25:15 -04:00
Tyler Goodlet 58fe220fde Use ref annotations in position mod 2022-09-12 20:25:15 -04:00
Tyler Goodlet 161448c31a Support order staging from slow chart using `.get_cursor()` 2022-09-12 20:25:15 -04:00
Tyler Goodlet 1c685189d1 Change to using real type annots 2022-09-12 20:25:15 -04:00
Tyler Goodlet ceac3f2ee4 Adjust corresponding fast/slow chart line level on edits 2022-09-12 20:25:15 -04:00
Tyler Goodlet a07367fae2 Fix div-by-zero split sizing bug 2022-09-12 20:25:15 -04:00
Tyler Goodlet 006190d227 Add fill arrow-mark support to history view 2022-09-12 20:25:15 -04:00
Tyler Goodlet 412197019e Make `ArrowEditor.add()` expect a `PlotItem` as input for render 2022-09-12 20:25:15 -04:00
Tyler Goodlet 271e378ce3 Add `GodWidget.iter_linked()` iterator over linked split charts 2022-09-12 20:25:15 -04:00
Tyler Goodlet 8e07fda88f Expose multi-chart-lines support through to order mode api 2022-09-12 20:25:15 -04:00
Tyler Goodlet a4935b8fa8 Make line editor multi-line aware, drop `dataclass` for `Struct` 2022-09-12 20:25:15 -04:00
Tyler Goodlet 2b76baeb10 Pass god widget to line editor and order mode instances 2022-09-12 20:25:15 -04:00
Tyler Goodlet 2dfa8976a0 Make line editor expect god as input, use new .`get_cursor()` api 2022-09-12 20:25:15 -04:00
Tyler Goodlet d3402f715b Set godwidget active cursor from xhair callback 2022-09-12 20:25:15 -04:00
Tyler Goodlet f070f9a984 Add "active cursor" api to god widget 2022-09-12 20:25:15 -04:00
Tyler Goodlet 416270ee6c Refocus view on ctl-c from search 2022-09-12 20:25:15 -04:00
Tyler Goodlet 14bee778ec Hook up kb ctrls to hist chart, order mode not working yet 2022-09-12 20:25:15 -04:00
Tyler Goodlet 10c1944de5 Proper slow chart auto y-range support
The slow (history) chart requires its own y-range checker logic which
needs to be run in 2 cases:
- the last datum is in view and goes outside the previous mx/mn in view
- the chart is incremented a step

Since we need this duplicate logic this patch also factors the incremental
graphics update info "reading" into a new `DisplayState.incr_info()`
method that can be configured to a chart and input state and returns all
relevant "graphics update measure" in a tuple (for now).

Use this method throughout the rest of the display loop for both fast
and slow chart checks and in the `increment_history_view()` slow chart
task.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 7958d8ad4f Up sample info poll loop iters 2022-09-12 20:25:15 -04:00
Tyler Goodlet 50c5dc255c Update history view y-sticky with last clear price 2022-09-12 20:25:15 -04:00
Tyler Goodlet 31735f26d3 Poll for sampling info at startup, tolerate races
Use the new `Feed.get_ds_info()` method in a poll loop to definitively
get the inter-chart sampling info and avoid races with shm buffer
backfilling.

Also, factor the history increment closure-task into
`graphics_update_loop()` which will make it clearer how to factor
all the "should we update" logic into some `DisplayState` API.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 2ef6460853 Add `Feed.get_ds_info()` to detect/compute sample rates 2022-09-12 20:25:15 -04:00
Tyler Goodlet 5e98a30537 Add simplified history incrementer consumer task 2022-09-12 20:25:15 -04:00
Tyler Goodlet dd03ef42ac Return empty search result on connection failure
If you spawn a brokerd set and no `ib` data feed was started (via our
`.data.feed.Feed` api) then there will be no active client loaded and
thus won't be connected. So in these cases just return nothing, and
I guess we'll figure out real connection failures later?
2022-09-12 20:25:15 -04:00
Tyler Goodlet 59884d251e Update history "last" bar, compute sampling ratio
Add an update call to the display loop to consistently update the last
datum in the history view chart. Compute the inter-chart sampling ratio
and use it to sync the linear region.
2022-09-12 20:25:15 -04:00
Tyler Goodlet e06e257a81 Another history view splitter proportion tweak 2022-09-12 20:25:15 -04:00
Tyler Goodlet 6e574835c8 Update history shm buffer in ohlc sampler loop 2022-09-12 20:25:15 -04:00
Tyler Goodlet 49ccfdd673 Pass history shm "last index" in init msg, assign on feed 2022-09-12 20:25:15 -04:00
Tyler Goodlet 3a434f312b Add sidepane like color region styling 2022-09-12 20:25:15 -04:00
Tyler Goodlet bb4dc448b3 Add history chart and "linear region" for syncing
Add a first draft of a working `pyqtgraph.LinearRegionItem` link between
a history view chart (+ data set) and the normal real-time "HFT" chart
set.

Add the history view (aka more downsampled data view) chart set to the
rt/hft set's splitter as its "first widget". Hook up linear region
callbacks to enable syncing between charts including compensating for
the downsampling rate ratio (in this case hardcoded 60 since 1s to 1m,
but we'll actually compute it going forward obvs).

More to come dawgys..
2022-09-12 20:25:15 -04:00
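
The region -> viewbox callback is conceptually just an index-scale mapping (toy sketch, ignoring the shm offsets wired in later):

```python
import pyqtgraph as pg


def link_region(
    region: pg.LinearRegionItem,
    rt_view: pg.ViewBox,
    ratio: int = 60,  # 1s -> 1m, hardcoded for now as noted above
) -> None:

    def on_region_change() -> None:
        mn, mx = region.getRegion()
        # map (1m) history-chart indices onto (1s) rt-chart indices
        rt_view.setXRange(mn * ratio, mx * ratio, padding=0)

    region.sigRegionChanged.connect(on_region_change)
```
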
Tyler Goodlet 9846396df2 Add initial history (view) to charting sys
Adds an additional `GodWidget.hist_linked: LinkedSplits` alongside the
renamed `.rt_linked` to enable 2 sets of linked charts with different
sampled data sets/flows. The history set is added without "all the
fixins" for now (i.e. no order mode sidepane or search integration) such
that it is merely a top level chart which shows a much longer term
history and can be added to the UI via embedding the entire history
linked-splits instance into the real-time linked set's splitter.

Further impl deats:
- adjust the `GodWidget._chart_cache: dict[str, tuple]` to store both
  linked split chart sets per symbol so that symbol switching will
  continue to work with the added history chart (set).
- rework `.load_symbol()` to operate on both the real-time (HFT) chart
  set and the history set.
- rework `LinkedSplits.set_split_sizes()` to compensate for the history
  chart and do more detailed height calcs arithmetic to make it appear
  by default as a minor sub-chart.
- adjust `LinkedSplits.add_plot()` and `ChartPlotWidget` internals to allow
  adding a plot without a sidepane and/or container `ChartnPane`
  composite widget by checking for a `sidepane == False` input.
- make `.default_view()` accept a manual y-axis offset kwarg.
- adjust search mode to provide history linked splits to
  `.set_chart_symbol()` call.
2022-09-12 20:25:15 -04:00
Tyler Goodlet f0d417ce42 Drop status msg var deleting from ns 2022-09-12 20:25:15 -04:00
Tyler Goodlet 55fc4114b4 Initial draft code working with `pg.LinearRegionItem` 2022-09-12 20:25:15 -04:00
Tyler Goodlet 97b074365b Use rt buffer for close price pnl calcs 2022-09-12 20:25:15 -04:00
Tyler Goodlet f79c3617d6 Always load FSPs with the default (fast) sampling period 2022-09-12 20:25:15 -04:00
Tyler Goodlet 861fe791eb Allocate 2 shm buffers for history and real-time
As part of supporting a "history view" chart which shows downsampled
datums alongside our 1s (or higher) sampled OHLC we need a separate
buffer to store the slower history from broker backends. This begins
that design by allocating 2 buffers:
- `rt_shm: ShmArray` which maps to a `/dev/shm/` file with `_rt` suffix
- `hist_shm: ShmArray` which maps to a file with `_hist` suffix

Deliver both of these shms back from both `manage_history()` and load
them as `Feed.rt_shm`/`.hist_shm` on the client side.

Impl deats:
- init the rt buffer with the first datum from loaded history and
  assign all OHLC values to that row's 'close' and the vlm to 0.
- pass the hist buffer to the backfiller task
- only spawn **one** global sampler array-row increment task per
  `brokerd` and pass in the 1s delay which we presume is our lowest
  OHLC sample rate for now.
- drop `open_sample_step_stream()` and just move its body contents into
  `Feed.index_stream()`
2022-09-12 20:25:15 -04:00
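
The rt-buffer seeding amounts to the following (sketch; the struct-array field names approximate piker's OHLC schema):

```python
import numpy as np


def seed_rt_from_hist(hist: np.ndarray, rt: np.ndarray) -> None:
    # splat the last history close across the first rt row's OHLC
    # fields and zero the volume, as described above
    last = hist[-1]
    close = last['close']
    rt[0]['time'] = last['time']
    for field in ('open', 'high', 'low', 'close'):
        rt[0][field] = close
    rt[0]['volume'] = 0
```
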
Tyler Goodlet 60052ff73a Presume shortest delay input to `increment_ohlc_buffer()`
Instead of worrying about the increment period per shm subscription,
just use the value passed as input and presume the caller knows that
only one task is necessary and that the wakeup (sampling) period should
be the shortest that is needed.

It's very unlikely we don't want at least a 1s sampling (both in terms
of task switching cost and general usage) which will eventually ship as
the default "real-time" feed "timeframe". Further, this "fast" increment
sampling task can handle all lower sampling periods (eg. 1m, 5m, 1H)
based on the current implementation just the same.

Also, add a global default sample period as `_default_delay_s` for use in
other internal modules.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 4d2708cd42 Force 1s sample step so crypto boiz can seee 2022-09-12 20:25:15 -04:00
Tyler Goodlet d1cc52dff5 Use new public lifetime-stack class attr 2022-09-12 20:24:56 -04:00
Tyler Goodlet 4fa901dbcb Port to new `tractor._runtime` mod 2022-09-12 20:24:56 -04:00
goodboy f2c488c1e0
Merge pull request #399 from pikers/kraken_fill_bugs
Kraken fill bugs
2022-09-12 20:12:04 -04:00
Tyler Goodlet 4a9c16d298 Fix stream type annot 2022-09-12 15:52:50 -04:00
Tyler Goodlet b9d5b904f4 Drop order entry removals on modify 2022-09-12 15:52:22 -04:00
Tyler Goodlet 0aef762d9a Bleh `kraken`, fix another ref error in fill block
Clearly, the linter didn't help us here.. but, just pass the
`brokerd` time for now in the `.broker_time` field; we can't get it from
the fill-case incremental updates in the `openOrders` sub. Add some
notes about this and how we might approach it for backends with this
limitation.
2022-09-12 15:52:22 -04:00
goodboy c724117c1a
Merge pull request #398 from pikers/paper_clear_logics_fix
Oof, reverse clearing logic-routines in paper eng
2022-09-11 22:20:04 -04:00
Tyler Goodlet cc3bb85c66 Oof, reverse clearing logic-routines in paper eng 2022-09-10 16:35:31 -04:00
goodboy 20817313b1
Merge pull request #397 from pikers/kraken_nameerr_fix
Lul, fix name error on msg var name..
2022-09-06 08:18:17 -04:00
Tyler Goodlet 23d0b8a7ac Lul, fix name error on msg var name.. 2022-09-05 21:15:22 -04:00
goodboy 087a34f061
Merge pull request #367 from pikers/livenpaper
`ib`: live & paper accounts together, infra refinements
2022-08-31 18:15:39 -04:00
Tyler Goodlet 653f5c824b Drop empty vnc server script idea for live account 2022-08-31 17:45:02 -04:00
Tyler Goodlet f9217570ab Add initial `ib` backend readme 2022-08-31 17:38:24 -04:00
Tyler Goodlet 7f224f0342 Doc string typos 2022-08-31 17:22:15 -04:00
Tyler Goodlet 75a5f3795a I guess go back to doing vnc servers on both? 2022-08-31 17:22:15 -04:00
Tyler Goodlet de9f215c83 If more than one `ib` api client is available use next available for search 2022-08-31 17:22:15 -04:00
Tyler Goodlet 848e345364 POC using paper-in-docker gw for symbol search 2022-08-31 17:22:15 -04:00
Tyler Goodlet 38b190e598 Add `ib` `Crypto` contract support 2022-08-31 17:22:15 -04:00
Tyler Goodlet 3a9bc8058f Spawn a live account gateway alongside paper
This is like, super first-draft-y (and ideally we move to offering
a `piker.data._ahab` super for this) but, it's a start at allowing easy
setup of both paper and live `ib-gw` container spawning. We expect the
user to input creds for the live account manually and the vnc server is
(hackily) only run inside the paper instance which most of the time
seems to make it possible to click on the live gui window and input
creds manually.

We also add extra files for the live instance:
- a `dockering/ib/run_x11_vnc_live.sh` which is a blank script
  that avoids running an `x11vnc` server in the live account cntr.
- a `dockering/ib/jts_live.ini` config which manually sets the live
  gw to use the `4001` port for api connections.

Further config tweaks:
- IBC: drop the api dynamic port override, decrease login display
  timeout to the riskier but likely to be faster 20s.
- `x11vnc` cmd: go back to using `rfbport` instead of `autoport`, drop
  `-logappend` so we see logging on docker console again, drop the
  frame caching flags and add in some x-hack disable flags.
2022-08-31 17:22:15 -04:00
Guillermo Rodriguez 739a231afc
Merge pull request #394 from pikers/size_in_shm_token
Store shm array size in token schema, use for loading
2022-08-29 15:15:49 -03:00
Tyler Goodlet 7dfa4c3cde Better comment on the `size`'s purpose/units 2022-08-29 13:56:26 -04:00
Tyler Goodlet 7b653fe4f4 Store shm array size in token schema, use for loading 2022-08-29 13:46:41 -04:00
goodboy 77a687bced
Merge pull request #386 from pikers/paper_tolerance
Paper race tolerance
2022-08-29 13:28:38 -04:00
Tyler Goodlet d5c1cdd91d Configure allocator from pos msg on startup
This fixes a regression added after moving the msg parsing to later in
the order mode startup sequence. The `Allocator` needs to be configured
*to* the initial pos otherwise default settings will show in the UI..

Move the startup config logic from inside `mk_allocator()` to
`PositionTracker.update_from_pp()` and add a flag to allow setting the
`.startup_pp` from the current live one as is needed during initial
load.
2022-08-29 11:39:28 -04:00
Tyler Goodlet 46d3fe88ca Fix sub-slot-remains limiting for -ve sizes
In the short case (-ve size) we had a bug where the last sub-slots worth
of exit size would never be limited to zero once the allocator limit pos
size was hit (i.e. you could keep going more -ve on the pos,
exponentially per slot over the limit). It's a simple fix, just
a `max()` around the `l_sub_pp` var used in the next-step-size calc.

Resolves #392
2022-08-28 13:51:54 -04:00
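
The gist of the fix, with names borrowed from the description above (the surrounding allocator calc is elided):

```python
def next_step_size(slot_size: float, l_sub_pp: float) -> float:
    # clamp the remaining sub-slots-worth of pp size at zero so a
    # short can't keep stepping (exponentially) past the limit
    return min(slot_size, max(l_sub_pp, 0))
```
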
Tyler Goodlet 5c8c5d8fbf Fix disti-mode paper pps relaying
Turns out we were putting too many brokername suffixes in the symbol
field and thus the order mode msg parser wasn't matching the current
asset to said msgs correctly and pps weren't being shown...

This repairs that plus simplifies the order mode initial pos msg loading
to just delegate into `process_trade_msg()` just as is done for
real-time msg updates.
2022-08-27 15:37:54 -04:00
goodboy 71412310c4
Merge pull request #391 from pikers/json_rpc_generic
Pull jsonrpc machinery out of deribit backend
2022-08-27 15:33:12 -04:00
Guillermo Rodriguez 0c323fdc0b
Minor style changes and warning on unexpected msg 2022-08-27 09:12:02 -03:00
Tyler Goodlet 02f53d0c13 Error on zero-size orders received by paper engine 2022-08-26 10:46:47 -04:00
Tyler Goodlet 8792c97de6 More stringent settings pane input handling
If a setting fails to apply try to log an error msg and revert to the
previous setting by not applying the UI read-update until after the new
`SettingsPane.apply_setting()` call. This prevents crashes when the user
tries to give bad inputs on editable allocator fields.
2022-08-26 10:46:47 -04:00
Tyler Goodlet 980815d075 Avoid handling account as numeric field in settings 2022-08-26 10:46:46 -04:00
Tyler Goodlet 4cedfedc21 Support clearing ticks ('last' & 'trade') fills
Previously we only simulated paper engine fills when the data feed
provided L1 queue-levels that matched an execution. This patch adds further
support for clear-level matches when there are real live clears on the
data feed that are faster/not synced with the L1 (aka usually during
periods of HFT).

The solution was to simply iterate the interleaved paper book entries on
both sides for said tick types and instead yield a side-specific
predicate per entry.
2022-08-26 10:46:46 -04:00
Tyler Goodlet fe3d0c6fdd Handle too-fast-edits with `defaultdict[str, bidict[str, tuple]]`
Not entirely sure why this all of a sudden became a problem but it seems
price changes on order edits were sometimes resulting in key errors when
modifying paper book entries quickly. This changes the implementation to
not care about matching the last price when keying/popping old orders
and use `bidict`s to more easily pop cleared orders in the paper loop.
2022-08-26 10:46:46 -04:00
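
Roughly the new keying scheme (sketch using the `bidict` package; the unique-id element in the value tuple is this sketch's workaround for bidict's value-uniqueness constraint, not necessarily the patch's):

```python
import itertools
from collections import defaultdict

from bidict import bidict

_ids = itertools.count()

# symbol -> (oid <-> (uid, price, size)): a one-sided paper book
_buys: defaultdict[str, bidict[str, tuple]] = defaultdict(bidict)


def submit_or_edit(symbol: str, oid: str, price: float, size: float) -> None:
    # keying by oid makes edits price-independent: no more key errors
    # from trying to match a stale last price
    _buys[symbol].forceput(oid, (next(_ids), price, size))


def cancel(symbol: str, oid: str) -> None:
    # raw-dog pop; the old price doesn't matter anymore
    _buys[symbol].pop(oid, None)
```
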
Tyler Goodlet 9200e8da57 Raw-dog-pop cancelled paper entries; old price dun matter 2022-08-26 10:46:46 -04:00
Tyler Goodlet 430d065da6 Handle paper-engine too-fast clearing race cases
When the paper engine is used it seems we can definitely hit races where
order ack msgs arrive close enough to status messages that `trio`
schedules the status processing before the acks. In such cases we want
to be tolerant and not crash but instead warn that we got an
unknown/out-of-order msg.
2022-08-26 10:46:46 -04:00
Tyler Goodlet ecd93cb05a Pass symbol with broker suffix to `.submit_limit()`; fix clearing 2022-08-26 10:46:46 -04:00
Guillermo Rodriguez 4facd161a9
Pull jsonrpc machinery out of deribit backend into piker.data._web_bs module and make it generic 2022-08-25 14:08:09 -03:00
goodboy c5447fda06
Merge pull request #390 from pikers/actually_enable_modules
Oneliner enable rpc modules on runtime open
2022-08-25 13:06:53 -04:00
Guillermo Rodriguez 0447612b34
Oneliner enable rpc modules on runtime open 2022-08-25 11:47:40 -03:00
goodboy b5499b8225
Merge pull request #331 from pikers/deribit
Deribit backend & minimal broker check tool
2022-08-25 10:08:29 -04:00
Guillermo Rodriguez 00aabddfe8
Fix link 2022-08-25 09:22:15 -03:00
Guillermo Rodriguez 43fb720877
Do multiline imports 2022-08-25 09:20:41 -03:00
Guillermo Rodriguez 9626dbd7ac
Simplify rpc machinery, and switch refs to Dict and List to builtins, make brokercheck call public broker methods and get their results again 2022-08-25 09:18:52 -03:00
Guillermo Rodriguez f286c79a03
Woops enable backfill_bars in module __init__.py 2022-08-24 19:41:04 -03:00
Guillermo Rodriguez accb0eee6c
Add brokercheck guard on deribit.get_client && drop method running in brokercheck 2022-08-24 19:32:54 -03:00
Guillermo Rodriguez e97dd1cbdb
Stop using so many closures
Use a custom tractor branch that fixes a `maybe_open_context` re-entrancy related bug
2022-08-24 18:09:35 -03:00
Guillermo Rodriguez 34fb497eb4
Add aiter api to NoBsWs and rework cryptofeed relay to not be OOPy 2022-08-24 18:09:35 -03:00
Guillermo Rodriguez 6669ba6590
Switch back to using async for and don't install signal handlers on cryptofeed 2022-08-24 18:09:35 -03:00
Guillermo Rodriguez cb8099bb8c
Add README.rst and brokers.toml section in config example 2022-08-24 18:09:35 -03:00
Guillermo Rodriguez 80a1a58bfc
Refactor cryptofeed relay api and move it to client
Added submit_limit and submit_cancel
Cache syms correctly
Lowercase search results
2022-08-24 18:09:32 -03:00
Guillermo Rodriguez d60f222bb7
Add get_balances and get_assets rpc to deribit.api.Client
Improve symbol_info search results
Expect cancellation on cryptofeeds asyncio task
Fix the no trades on instrument bug that we had on startup
2022-08-24 18:08:45 -03:00
Guillermo Rodriguez 2c2e43d8ac
Add comments and update cryptofeed fork url in requirements 2022-08-24 18:08:31 -03:00
Guillermo Rodriguez 212b3d620d
Tweaks on Client init to make api credentials optional 2022-08-24 18:08:29 -03:00
Guillermo Rodriguez 92090b01b8
Begin jsonrpc over ws refactor 2022-08-24 18:06:00 -03:00
Guillermo Rodriguez 9073fbc317
drop pydantic to match master 2022-08-23 15:18:45 -03:00
Guillermo Rodriguez f55f56a29f
Refactored deribit backend into new multi file format 2022-08-23 15:18:45 -03:00
Guillermo Rodriguez 28e025d02e
Finally get a chart going! lots of fixes to streaming machinery and custom cryptofeed fork with fixes 2022-08-23 15:18:43 -03:00
Guillermo Rodriguez e558e5837e
Introduce piker protocol in stream_messages 2022-08-23 15:17:18 -03:00
Guillermo Rodriguez a0b415095a
Brokermod check output fixed and tweaks to deribit Client.bars function 2022-08-23 15:17:18 -03:00
Guillermo Rodriguez 6df181c233
Add brokercheck test and got deribit to dump l1 and trades to console 2022-08-23 15:17:18 -03:00
Guillermo Rodriguez 7acc4e3208
Initial deribit mock up 2022-08-23 15:17:18 -03:00
Guillermo Rodriguez 10ea242143
Merge pull request #385 from pikers/asycvnc_pin_bump
Pin to `asyncvnc@main` after upstream fixes
2022-08-22 13:03:08 -03:00
Tyler Goodlet eda6ecd529 Pin to `asyncvnc@main` after upstream fixes
We helped drive a bunch of fixes in
https://github.com/barneygale/asyncvnc/pull/4

This pins to our forked but matched `main` branch to get those fixes
until such a time as upstream makes another release.
2022-08-22 11:58:40 -04:00
goodboy cf5b0bf9c6
Merge pull request #374 from pikers/open_order_loading
Open order loading
2022-08-19 15:23:49 -04:00
Tyler Goodlet b9dba48306 Show correct account label on loaded order lines
Quite a simple fix, we just assign the account-specific
`PositionTracker` to the level line's `._on_level_change()` handler
instead of whatever the current `OrderMode.current_pp` is set to.

Further this adds proper pane switching support such that when a user
modifies an order line from an account which is not the currently
selected one, the settings pane is changed to reflect the
account and thus corresponding position info for that account and
instrument B)
2022-08-18 16:04:44 -04:00
Tyler Goodlet 4d2e23b5ce Expose level line marker via property 2022-08-18 16:00:41 -04:00
Tyler Goodlet 973bf87e67 Don't log about unknown status msg if no oid 2022-08-18 11:51:12 -04:00
Tyler Goodlet 5861839783 Fix multi-account order loading..
We were overwriting the existing loaded orders list in the per client
loop (lul) so move the def above all that.

Comment out the "try-to-cancel-inactive-orders-via-task-after-timeout"
stuff pertaining to https://github.com/erdewit/ib_insync/issues/363 for
now since we don't have a mechanism in place to cancel the re-cancel
task once the order is cancelled - plus who knows if this is even the
best way to do it..
2022-08-18 11:51:12 -04:00
Tyler Goodlet 06845e5504 `kraken`: drop `make_sub()` and inline sub defs in `subscribe()` 2022-08-18 11:51:12 -04:00
Tyler Goodlet 43bdd4d022 Pass correct instrument symbol in position msgs 2022-08-18 11:51:12 -04:00
Tyler Goodlet bafd2cb44f Only relay fills if dialog still alive 2022-08-18 11:51:12 -04:00
Tyler Goodlet be8fd32e7d Only emit ems fill msgs for 'status' events from ib
Fills seem to be dual emitted from both the `status` and `fill` events
in `ib_insync` internals and more or less contain the same data nested
inside their `Trade` type. We started handling the 'fill' case to deal
with a race issue in commissions/cost report tracking but we don't
really want to leak that same race to incremental fills vs.
order-"closed" tracking.. So go back to only emitting the fill msgs
on statuses and a "closed" on `.remaining == 0`.
2022-08-18 11:51:12 -04:00
Tyler Goodlet ee8c00684b Add actor-global "broker client" for tracking reqids 2022-08-18 11:51:12 -04:00
Tyler Goodlet 7379dc03af The `ps1` check doesn't work for `pdb`.. 2022-08-18 11:51:12 -04:00
Tyler Goodlet a602c47d47 Support loading paper engine live orders 2022-08-18 11:51:12 -04:00
Tyler Goodlet 317610e00a Store positions globally and deliver on ctx connects 2022-08-18 11:51:12 -04:00
Tyler Goodlet c4af706d51 Make order-book-vars globals to persist across ems-dialog connections 2022-08-18 11:51:12 -04:00
Tyler Goodlet 665bb183f7 Unpack existing live order params in case statement 2022-08-18 11:51:12 -04:00
Tyler Goodlet f6ba95a6c7 Split existing live-open case into its own block 2022-08-18 11:51:12 -04:00
Tyler Goodlet e2cd8c4aef Add initial `kraken` live order loading 2022-08-18 11:51:12 -04:00
Tyler Goodlet c8bff81220 Add runtime guards around feed pausing during interaction 2022-08-18 11:51:12 -04:00
Tyler Goodlet 2aec1c5f1d Only pprint our struct when we detect a py REPL 2022-08-18 11:51:12 -04:00
Tyler Goodlet bec32956a8 Move fill case-block earlier, log broker errors 2022-08-18 11:51:12 -04:00
Tyler Goodlet 91fdc7c5c7 Load boxed `.req` values as `Order`s in mode loop 2022-08-18 11:51:12 -04:00
Tyler Goodlet b59ed74bc1 'Only send `'closed'` on Filled events, lowercase all statuses' 2022-08-18 11:51:12 -04:00
Tyler Goodlet 16012f6f02 Include both symbols in error msg when a mismatch 2022-08-18 11:51:12 -04:00
Tyler Goodlet 2b61672723 Handle 'closed' vs. 'fill' race case.. 2022-08-18 11:51:12 -04:00
`ib` is super good at not being reliable with order event sequencing
and duplication of fill info. This adds some guards to try and avoid
popping the last status too early if we end up receiving
a `'closed'` before the expected `'fill'` event(s). Further, delete the
`status_msg` ref on each iteration to avoid stale reference lookups in
the relay task/loop.
2022-08-18 11:51:12 -04:00
Tyler Goodlet 176b230a46 Use modern `Union` pipe op syntax for msg fields 2022-08-18 11:51:12 -04:00
Tyler Goodlet 7fa9dbf869 Add full EMS order-dialog (re-)load support!
This includes darks, lives and alerts with all connecting clients
being broadcast all existing order-flow dialog states. Obviously
for now darks and alerts only live as long as the `emsd` actor lifetime
(though we will store these in local state eventually) and "live" orders
have lifetimes managed by their respective backend broker.

The details of this change-set are extensive, so here we go..

Messaging schema:
- change the messaging `Status` status-key set to:
  `resp: Literal['pending', 'open', 'dark_open', 'triggered',
                'closed',  'fill', 'canceled', 'error']`

  which better reflects the semantics of order lifetimes and was
  partially inspired by the status keys `kraken` provides for their
  order-entry API. The prior key set was based on `ib`'s horrible
  semantics which sound like they're right out of the 80s..
  Also, we reflect this same set in the `BrokerdStatus` msg and likely
  we'll just get rid of the separate brokerd-dialog side type
  eventually.
- use `Literal` type annots for statuses where applicable and as they
  are supported by `msgspec`.
- add additional optional `Status` fields:
  -`req: Order` to allow each status msg to optionally ref its
    commanding order-request msg allowing at least a request-response
    style implicit tracing in all response msgs.
  -`src: str` tag string to show the source of the msg.
  -`reqid: str | int` such that the ems can relay the `brokerd`
    request id both to the client side and have one spot to look
    up prior status msgs and
- draft a (unused/commented) `Dialog` type which can be eventually used
  at all EMS endpoints to track msg-flow states

EMS engine adjustments/rework:
- use the new status key set throughout and expect `BrokerdStatus` msgs
  to use the same new schema as `Status`.
- add a `_DarkBook._active: dict[str, Status]` table which is now used for
  all per-leg-dialog associations and order flow state tracking
  allowing for both the brokerd-relay and client-request handler loops
  to read/write the same msg-table and provides for delivering
  the overall EMS-active-orders state to newly/re-connecting clients
  with minimal processing; this table replaces the `._ems_entries`
  table from prior.
- add `Router.client_broadcast()` to send a msg to all currently
  connected peers.
- a variety of msg handler block logic tweaks including more `case:`
  blocks to keep things both flatter and more explicit:
  - for the relay loop move all `Status` msg update and sending to
    within each block instead of a fallthrough case plus hard-to-follow
    state logic.
  - add a specific case for unhandled backend status keys and just log
    them.
  - pop alerts from `._active` immediately once triggered.
- where possible mutate status msg fields over instantiating new
  ones.
- insert and expect `Order` instances in the dark clearing loop and
  adjust `case:` blocks accordingly.
- tag `dark_open` and `triggered` statuses as sourced from the ems.
- drop all the `ChainMap` stuff for now; we're going to make our own
  `Dialog` type for this purpose..

Order mode rework:
- always parse the `Status` msg and use match syntax cases with object
  patterns, hackily assign the `.req` in many blocks to work around not
  yet having proper on-the-wire decoding.
- make `.load_unknown_dialog_from_msg()` expect a `Status` with boxed
  `.req: Order` as input.
- change `OrderDialog` -> `Dialog` in prep for a general purpose type
  of the same name.

`ib` backend order loading support:
- do "closed" status detection inside the msg-relay loop instead
  of expecting the ems to do this..
- add an attempt to cancel inactive orders by scheduling cancel
  submissions continually (no idea if this works).
- add a status map to go from the 80s keys to our new set.
- deliver `Status` msgs with an embedded `Order` for existing live order
  loading and make sure to try and get the source exchange info (instead
  of SMART).

Paper engine ported to match:
- use new status keys in `BrokerdStatus` msgs
- use `match:` syntax in request handler loop
2022-08-18 11:51:12 -04:00
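
A rough sketch of what the new `Status` schema might look like with `msgspec` (the `Order` stub and exact field set here are illustrative stand-ins, not the real definitions):

```python
from typing import Literal
import msgspec


class Order(msgspec.Struct):
    # illustrative stand-in for the real order-request msg type
    oid: str
    symbol: str


class Status(msgspec.Struct):
    resp: Literal[
        'pending', 'open', 'dark_open', 'triggered',
        'closed', 'fill', 'canceled', 'error',
    ]
    oid: str
    # optionally box the commanding request msg for implicit
    # request-response style tracing in every response msg
    req: Order | None = None
    # tag for the msg's source, eg. 'ems' for dark_open/triggered
    src: str | None = None
    # brokerd-side request id relayed to the client side
    reqid: str | int | None = None
```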
Tyler Goodlet 87ed9abefa WIP playing with a `ChainMap` of messages 2022-08-18 11:51:12 -04:00
Tyler Goodlet 2548aae73d Deliver existing dialog (msgs) to every EMS client
Ideally every client that connects to the ems can know its state
(immediately), meaning we relay all the order dialogs that are currently
active. This adds full (hacky WIP) support to receive those dialog
(msgs) from the `open_ems()` startup values via the `.started()` msg
from `_emsd_main()`.

Further this adds support to the order mode chart-UI to display existing
(live) orders on the chart during startup. Details include,

- add an `OrderMode.load_unknown_dialog_from_msg()` for processing
  a ``BrokerdStatus`` (for now) msg from the EMS that was not
  previously created by the current ems client, then registering and
  displaying it on the chart.
- break out the ems msg processing into a new
  `order_mode.process_trade_msg()` func so that it can be called on the
  startup dialog-msg set as well as eventually used as a more general low
  level auto-strat API (eg. when we get to displaying auto-strat and
  group trading automatically on an observing chart UI).
- hackyness around msg-processing for the dialogs delivery since we're
  technically delivering `BrokerdStatus` msgs when the client-side
  processing technically expects `Status` msgs.. we'll rectify this
  soon!
2022-08-18 11:51:12 -04:00
Tyler Goodlet 1cfa04927d Lol, handle failed-to-cancel statuses.. 2022-08-18 11:51:12 -04:00
Tyler Goodlet e34ea94f9f Start brokerd relay loop after opening client stream
In order to avoid missed existing order message emissions on startup we
need to be sure the client side stream is registered with the router
first. So, delay starting the
`translate_and_relay_brokerd_events()` task until inside the client
stream block and start the task using the dark clearing loop nursery.

Also, ensure `oid` (and thus for `ib` the equivalent re-used `reqid`)
are cast to `str` before registering the dark book. Deliver the dark
book entries as part of the `_emsd_main()` context `.started()` values.
2022-08-18 11:51:12 -04:00
Tyler Goodlet 1510383738 Always cast ems `requid` values to `int` 2022-08-18 11:51:12 -04:00
Tyler Goodlet 016b669d63 Drop staged line runtime guard 2022-08-18 11:51:12 -04:00
Tyler Goodlet 682a0191ef First draft: relay open orders through ems and display on chart 2022-08-18 11:51:12 -04:00
Tyler Goodlet 9e36dbe47f Relay existing open orders from ib on startup 2022-08-18 11:51:12 -04:00
goodboy 8bef67642e
Merge pull request #383 from pikers/doin_the_splits
Doin the splits
2022-08-18 11:50:46 -04:00
Tyler Goodlet 52febac6ae Facepalm: order-handler tasks are one-to-one with unique clients 2022-08-18 11:34:11 -04:00
Tyler Goodlet f202699c25 Fix scan loop: only stash clients that actually connect.. 2022-08-18 11:34:11 -04:00
Tyler Goodlet 0fb07670d2 Fix multi-account positioning and order tracking..
This seems to have been broken in refactoring from commit 279c899de5
which was never tested against multiple accounts/clients.

The fix is 2 part:
- position tables are now correctly loaded ahead of time and used by
  account for each connected client in processing of ledgers and
  existing positions.
- a task for each API client is started (as implemented prior) so that
  we actually get status updates for every client used for submissions.

Further we add a bit of code using `bisect.insort()` to normalize
ledgers to a datetime sorted list of records (though pretty sure the `dict`
transform ruins it?) in an effort to avoid issues with ledger
transaction processing with previously minimized `Position.clears`
tables, which should (but might not?) avoid incorporating clear events
prior to the last "net-zero" positioning state.
2022-08-17 14:14:20 -04:00
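
The `bisect.insort()` usage might look something like this (py3.10+ for the `key=` kwarg; the record fields are assumptions):

```python
from bisect import insort
from datetime import datetime

ledger: list[dict] = []

def insert_record(rec: dict) -> None:
    # keep the ledger datetime-sorted on every insert so later
    # processing never sees out-of-order clear events
    insort(ledger, rec, key=lambda r: r['dt'])

insert_record({'dt': datetime(2022, 8, 16), 'size': 10})
insert_record({'dt': datetime(2022, 8, 15), 'size': -5})
assert [r['dt'].day for r in ledger] == [15, 16]
```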
Tyler Goodlet 73d2e7716f Pre-loop clients to build out pps tables, handle missing commission field 2022-08-17 10:23:01 -04:00
Tyler Goodlet 999ae5a1c6 Handle `Position.split_ratio` in state audits
This firstly changes `.audit_sizing()` => `.ensure_state()` and makes it
return `None` as well as only error when split-ratio denoted (via
config) positions do not size as expected.

Further refinements,
- add an `.expired()` predicate method
- always return a size of zero from `.calc_size()` on expired assets
- load each `pps.toml` entry's clears table into `Transaction`s and use
  `.add_clear()` during from-config init.
2022-08-17 10:06:58 -04:00
Tyler Goodlet 23ba0e5e69 Don't raise on missing position for now, just error log 2022-08-17 10:06:41 -04:00
Tyler Goodlet 941a2196b3 Get pos entry from table not `updated: dict` output 2022-08-17 10:06:37 -04:00
Tyler Goodlet 0cf4e07b84 Use `datetime` sorting on clears table appends
In order to avoid issues with reloading ledger and API trades after an
existing `pps.toml` exists we have to make sure we not only avoid
duplicate entries but also avoid re-adding entries that would have been
removed during a prior call to the `Position.minimize_clears()` filter.
The easiest way to do this is to sort on timestamps and avoid adding any
record that pre-existed the last net-zero position ledger event that
`.minimize_clears()` discarded. In order to implement this it means
parsing config file clears table's timestamps into datetime objects for
inequality checks and we add a `Position.first_clear_dt` attr for
storing this value when managing pps in object form but never store it
in the config (since it should be obvious from the sorted clear event
table).
2022-08-17 10:05:05 -04:00
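
A sketch of the inequality guard this implies (function and field names are assumed for illustration):

```python
from datetime import datetime

def should_add_clear(
    first_clear_dt: datetime | None,
    clear_dt: datetime,
) -> bool:
    # clears dated at-or-before the first kept clear were already
    # discarded by a prior `.minimize_clears()` pass (they pre-date
    # the last net-zero event) so they must not be re-added.
    return first_clear_dt is None or clear_dt > first_clear_dt
```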
Tyler Goodlet 7bec989eed First try mega-basic stock (reverse) split support with `ib` and `pps.toml` 2022-08-17 09:54:49 -04:00
Tyler Goodlet 6856ca207f Fix for TWS created position loading 2022-08-17 09:53:42 -04:00
Guillermo Rodriguez 2e5616850c
Merge pull request #378 from pikers/msgpack_zombie
Drop `msgpack` from `marketstore` module
2022-08-11 17:07:47 -03:00
Tyler Goodlet a83bd9c608 Drop `msgpack` from `marketstore` module 2022-08-11 14:21:36 -04:00
goodboy 9651ca84bf
Merge pull request #372 from pikers/the_ems_flattening
The ems flattening
2022-08-05 21:03:59 -04:00
Tyler Goodlet 109b35f6eb Matchify paper clearing loop 2022-08-05 21:02:15 -04:00
Tyler Goodlet e28c1748fc Comment out "unknown msg" case for now 2022-08-05 21:02:15 -04:00
Tyler Goodlet 72889b4d1f Fix reference error 2022-08-05 21:02:15 -04:00
Tyler Goodlet ae001c3dd7 Matchify the dark trigger loop 2022-08-05 21:02:15 -04:00
Tyler Goodlet 2309e7ab05 Flatten the brokerd-dialog relay loop using `match:` 2022-08-05 21:02:15 -04:00
Tyler Goodlet 46c51b55f7 Flatten the client-request handler loop with `match:` 2022-08-05 21:02:15 -04:00
goodboy a9185e7d6f
Merge pull request #349 from pikers/kraken_ws_orders
Kraken ws orders
2022-08-05 21:01:24 -04:00
Tyler Goodlet 3a0987e0be Fix too-fast-edit guard case 2022-08-05 21:00:54 -04:00
Tyler Goodlet d280a592b1 Repair normalize method logic to only error on lookup failure 2022-08-05 16:14:19 -04:00
goodboy ef5829a6b7
Merge pull request #368 from pikers/kraken_userref_hackzin
`kraken`: use `userref` field AND `reqid`, utilize `openOrders` sub for most msging
2022-08-03 09:11:42 -04:00
Tyler Goodlet 30bcfdcc83 Emit fills from `openOrders` block
The (partial) fills from this sub are most indicative of clears (as
support also says) whereas the msgs in the `ownTrades` sub are only emitted
after the entire order request has completed - there is no size-vlm
remaining.

Further enhancements:
- this also includes proper subscription-syncing inside `subscribe()` with
  a small pre-msg-loop which waits on ack-msgs for each sub and raises any
  errors. This approach should probably be implemented for the data feed
  streams as well.
- configure the `ownTrades` sub to not bother sending historical data on
  startup.
- make the `openOrders` sub include rate limit counters.
- handle the rare case where the ems is trying to cancel an order which
  was just edited and hasn't yet had its new `txid` registered.
2022-08-01 19:22:31 -04:00
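
The pre-msg-loop ack-sync could be sketched roughly like so (assuming a `NoBsWs`-style wrapper with `send_msg()`/`recv_msg()`; msg fields per kraken's ws `subscriptionStatus` events):

```python
async def subscribe(ws, subs: list[dict]) -> None:
    # submit all subscription requests up front..
    for sub in subs:
        await ws.send_msg(sub)

    # ..then wait for one ack per sub *before* entering the main
    # msg loop, raising on any error responses.
    acked = 0
    while acked < len(subs):
        msg = await ws.recv_msg()
        match msg:
            case {'event': 'subscriptionStatus', 'status': 'subscribed'}:
                acked += 1
            case {
                'event': 'subscriptionStatus',
                'status': 'error',
                'errorMessage': errmsg,
            }:
                raise RuntimeError(errmsg)
```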
Tyler Goodlet 1a291939c3 Drop subs ack handling from streamer 2022-08-01 16:55:04 -04:00
Tyler Goodlet 69e501764a Drop status event processing at large
Since we figured out how to pass through ems dialog ids to the
`openOrders` sub we don't really need to do much with status updates
other than error handling. This drops `process_status()` and moves the
error handling logic into a status handler sub-block; we now just
info-log status updates for troubleshooting purposes.
2022-08-01 14:08:45 -04:00
goodboy 7f3f7f0372
Merge pull request #370 from pikers/kill_pydantic_from_kraken
Kill `pydantic` from `kraken`
2022-07-31 15:18:43 -04:00
Tyler Goodlet 1cbf45b4c4 Use the ``newuserref`` field on order edits
Why we need so many fields to accomplish passing through a dialog key to
orders is beyond me but this is how they do it with edits..

Allows not having to handle `editOrderStatus` msgs to update the dialog
key table and instead just do it in the `openOrders` sub by checking the
canceled msg for a 'cancel_reason' of 'Order replaced', in which case we
just pop the txid and wait for the new order the kraken backend engine
will submit automatically, which will now have the correct 'userref'
value we passed in via the `newuserref`, and then we add that new `txid`
to our table.
2022-07-31 14:36:06 -04:00
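
Detection of kraken's internal edit-cancels might look like this (msg shape simplified and table type assumed):

```python
def on_order_update(
    txid: str,
    update: dict,
    reqids2txids: dict[int, str],
) -> None:
    match update:
        case {'status': 'canceled', 'cancel_reason': 'Order replaced'}:
            # kraken's internal "edit": drop the old txid mapping and
            # wait for the auto-submitted replacement order which will
            # carry the `userref` we passed via `newuserref`.
            for reqid, known_txid in list(reqids2txids.items()):
                if known_txid == txid:
                    reqids2txids.pop(reqid)

        case {'status': 'canceled'}:
            pass  # a "real" user/ems requested cancel
```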
Tyler Goodlet 227a80469e Use both `reqid` and `userref` in order requests
Turns out you can pass both, thus making mapping an ems `oid` to
a brokerd-side `reqid` much simpler. This allows us to avoid keeping
as much local dialog state but still with the following caveats:

- ok `editOrder` msgs must update the reqid<->txid map
- only pop `reqids2txids` entries inside the `cancelOrderStatus` handler
2022-07-31 14:36:06 -04:00
Tyler Goodlet dc8072c6db WIP: use `userref` field over `reqid`... 2022-07-31 14:36:06 -04:00
Tyler Goodlet 808dbb12e6 Drop forgotten `pydantic` dataclass in binance backend.. 2022-07-31 14:35:25 -04:00
Tyler Goodlet 44e21b1de9 Drop field import 2022-07-30 17:34:40 -04:00
Tyler Goodlet b3058b8c78 Drop remaining `pydantic` usage, convert `OHLC` to our struct variant 2022-07-30 17:34:40 -04:00
Tyler Goodlet db564d7977 Add casting method to our struct variant 2022-07-30 17:34:40 -04:00
Tyler Goodlet e6a3e8b65a Add warning msg for `openOrders.userref` always being 0 2022-07-30 17:33:45 -04:00
Tyler Goodlet d43ba47ebe Renames to `ppu` 2022-07-30 17:33:45 -04:00
Tyler Goodlet 168c9863cb Look for transfers after ledger + api trans load
If we don't have a pos table built out already (in mem) we can't figure
out the likely dst asset (since there's no pair entry to guide us) that
we should use to search for withdrawal transactions; so move it later.

Further this ports to the new api changes in `piker.pp` that will land
with #365.
2022-07-30 17:33:45 -04:00
Tyler Goodlet 0fb31586fd Go back to using `Position.size` property in pp loading audits 2022-07-30 17:33:45 -04:00
Tyler Goodlet 8b609f531b Add transfers knowledge to positions validation 2022-07-30 17:33:45 -04:00
Tyler Goodlet d502274eb9 Add a `Client.get_xfers()` to retrieve withdrawal transactions 2022-07-30 17:33:45 -04:00
Tyler Goodlet b1419c850d Update ledger from api immediately, cruft cleaning 2022-07-30 17:33:45 -04:00
Tyler Goodlet aa7f24b6db Drop old reversed order idea for rt-pp msg testing 2022-07-30 17:33:45 -04:00
Tyler Goodlet 319e68c855 TOSQUASH: revert to 22Hz display throttle 2022-07-30 17:33:45 -04:00
Tyler Goodlet 64f920d7e5 Accept direct fqsn matches on position msg updates 2022-07-30 17:33:45 -04:00
Tyler Goodlet 3b79743c7b Finally get real-time pp updates workin for `kraken`
This ended up driving the rework of the `piker.pp` apis to use context
manager + table style which resulted in a much easier to follow
state/update system B). Also added is a flag to do a manual simulation
of a "fill triggered rt pp msg" which requires the user to delete the
last ledgered trade entry from config files and then allows that trade
to emit through the `openOrders` sub and update the client shortly after
order mode boot; this is how the rt updates were verified to work
without doing even more live orders 😂.

Patch details:
- open both `open_trade_ledger()` and `open_pps()` inside the trade
  dialog startup and conduct a "pp state sync" logic phase where we now
  pull the account balances and incrementally load pp data (in order,
  from `pps.toml`, ledger, api) until we can generate the asset balance
  by reverse incrementing through trade history eventually erroring out
  if we can't reproduce the balance value.
- rework the `trade2pps()` to take in the `PpTable` and generate new
  ems msgs from table updates.
- return the new `dict[str, Transaction]` expected from
  `norm_trade_records()`
- only update pp config and ledger on dialog exit.
2022-07-30 17:33:45 -04:00
Tyler Goodlet 54008a1976 Add balance and assets retrieval methods, cache assets on startup
Pass config dict into client and assign to `.conf`.
2022-07-30 17:33:45 -04:00
Tyler Goodlet b96b7a8b9c Use `aclosing()` on all msg async-gens 2022-07-30 17:33:45 -04:00
Tyler Goodlet 0fca1b3e1a Also map the ws symbol set to the alt set 2022-07-30 17:33:45 -04:00
Tyler Goodlet 2386270cad Handle too-fast-edits, add `ChainMap` msg tracing
Since our ems doesn't actually do blocking style client-side submission
updates, thus resulting in the client being able to update an existing
order's state before knowing its current state, we can run into race
conditions where for some backends an order is updated using the wrong
order id. For kraken we manually implement detecting this race (lol, for
now anyway) such that when a new client side edit comes in before the
new `txid` is known, we simply expect the handler loop to cancel the
order. Further this adds cancellation on arbitrary status errors, like
rate limits.

Also this adds 2 leg (ems <-> brokerd <-> kraken) msg tracing using
a `collections.ChainMap` which is likely going to end up being the POC
for a more general data structure recommended for backends that need to
trace msg flow for translation with the ems.
2022-07-30 17:33:45 -04:00
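
The `ChainMap` tracing idea in miniature (field values invented for illustration):

```python
from collections import ChainMap

# one mapping per msg "leg" of the flow, newest first:
kraken_leg = {'txid': 'OABC12-XYZ', 'userref': 4}
brokerd_leg = {'reqid': 4, 'oid': 'b45a1f-0'}
ems_leg = {'oid': 'b45a1f-0', 'price': 100.0, 'size': 1.0}

trace = ChainMap(kraken_leg, brokerd_leg, ems_leg)

# lookups fall through the legs in order, giving a single view of
# the full ems <-> brokerd <-> kraken dialog without copying data:
assert trace['txid'] == 'OABC12-XYZ'
assert trace['price'] == 100.0
```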
Tyler Goodlet 5b135fad61 Handle pre-existing open orders specifically by checking for null `oid` 2022-07-30 17:33:45 -04:00
Tyler Goodlet abb6854e74 Make all `.bsuid`s the normed symbol "altname"s 2022-07-30 17:33:45 -04:00
Tyler Goodlet 22f9b2552c Provide symbol norming via a classmethod + global table 2022-07-30 17:33:45 -04:00
Tyler Goodlet 57f2478dc7 Fixes for state updates and clears
Turns out the `reqid` value returned by the `openOrders` and `ownTrades`
subs (the one brokerd sends to the kraken api in order requests) is
always set to zero, which seems to be a bug? So this includes patches to
work around that as well as reliance on the `openOrders` sub to do most
`BrokerdStatus` updates since `XOrderStatus` events don't seem to have
much data in them at all (they almost look like pure ack events so maybe
they aren't affirmative of final state changes anyway..).

Other fixes:
- respond with a `BrokerdOrderAck` immediately after `requid` generation,
  not after order submission, to ensure the ems has a valid `requid`
  *before* kraken api events are relayed through.
- add a `reqids2txids: bidict[int, str]` which maps brokerd genned
  `requid`s to kraken-side `txid`s since (as mentioned above) the
  clearing and state endpoints don't relay back this value (it's always
  0...)
- add log messages for each sub so that (at least for now) we can see
  exact msg contents coming from kraken.
- drop `.remaining` calcs for now since we need to keep record of the
  order states manually in order to retrieve the original submission
  vlm..
- fix the `openOrders` case for fills, in this case the message includes
  no `status` field and thus we must catch it in a block *after* the
  normal state handler to avoid masking.
- drop response msg generation from the cancel status case since we
  can do it again from the `openOrders` handler and sending a double
  status causes issues on the client side.
- add a shite ton of notes around all this missing `requid` stuff.
2022-07-30 17:33:45 -04:00
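
A quick sketch of the `reqids2txids` mapping using the `bidict` package:

```python
from bidict import bidict

reqids2txids: bidict[int, str] = bidict()

# on ack: map our brokerd-generated reqid to kraken's txid
reqids2txids[4] = 'OABC12-XYZ'

# clearing/state ws msgs only carry the txid (the relayed `reqid`
# is always 0..) so reverse-lookup our reqid via the inverse view:
assert reqids2txids.inverse['OABC12-XYZ'] == 4
```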
Tyler Goodlet 5dc9a61ec4 Use cancel level logging for cancelled orders 2022-07-30 17:33:45 -04:00
Tyler Goodlet b0d3d9bb01 TOSQUASH: lingering `.dict()`s 2022-07-30 17:33:45 -04:00
Tyler Goodlet caecbaa231 Cancel any live orders found on connect
More or less just to avoid orders the user wasn't aware of from
persisting until we get "open order relaying" through the ems working.

Some further fixes which required a new `reqids2txids` map which keeps
track of which `kraken` "txid" is mapped to our `reqid: int`; mainly
this was needed for cancel requests which require knowing the underlying
`txid`s (since apparently kraken doesn't keep track of the "reqid" we
pass it). Pass the ws instance into `handle_order_updates()` to enable
cancelling orders on startup. Don't key error on unknown `reqid`
values (for eg. when receiving historical trade events on startup).
Handle cancel requests first in the ems side loop.
2022-07-30 17:33:45 -04:00
Tyler Goodlet a20a8d95d5 Use `aclosing()` around ws async gen 2022-07-30 17:33:45 -04:00
Tyler Goodlet ba93f96c71 Lol, gotta `float()` that vlm before `*` XD 2022-07-30 17:33:45 -04:00
Tyler Goodlet 804e9afdde Pass our manually mapped `reqid: int` to EMS
Since we seem to always be able to get back the `reqid`/`userref` value
we send to kraken ws endpoints, we can use this as our brokerd side
order id and avoid all race cases with getting the true `txid` value
that `kraken` assigns (and which changes when you do "edits"
:eyeroll:). This simplifies status updates by allowing our relay loop
just to pass back our generated `.reqid` verbatim and allows responding
with a `BrokerdOrderAck` immediately in the request handler task which
should guarantee there are no further race conditions with the relay
loop and mapping `txid`s from kraken.. and figuring out wtf to do when
they change, etc.
2022-07-30 17:33:45 -04:00
Tyler Goodlet 89bcaed15e Add ledger and `pps.toml` snippets 2022-07-30 17:33:45 -04:00
Tyler Goodlet bb2f8e4304 Try out a backend readme 2022-07-30 17:33:45 -04:00
Tyler Goodlet 8ab8268edc Don't require an ems msg symbol on error statuses 2022-07-30 17:33:45 -04:00
Tyler Goodlet bbcc55b24c Update ledger *after* pps updates from new trades
Addressing the same issue as in #350 where we need to compute position
updates using the *first read* from the ledger **before** we update it
to make sure `Position.lifo_update()` gets called and **not skipped**
because new trades were read as clears entries but hadn't actually been
included in update calcs yet.

Main change here is to convert `update_ledger()` into a context mngr so
that the ledger write is committed after pps updates using
`pp.update_pps_conf()`..

This is basically a hotfix to #346 as well.
2022-07-30 17:33:45 -04:00
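
The context-manager conversion in spirit (the reader/writer helpers here are stand-ins passed as params, not the real api):

```python
from contextlib import contextmanager
from typing import Callable

@contextmanager
def open_ledger(
    path: str,
    read: Callable[[str], dict],          # eg. a toml-load wrapper
    write: Callable[[str, dict], None],   # eg. a toml-dump wrapper
):
    # the *first read* is what position update calcs must use..
    records = read(path)
    yield records
    # ..the write only commits *after* the caller has updated pps
    # from the new trades (via eg. `pp.update_pps_conf()`).
    write(path, records)
```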
Tyler Goodlet 9fa9c27e4d Factor status handling into a new `process_status()` helper 2022-07-30 17:33:45 -04:00
Tyler Goodlet d9b4c4a413 Factor msg loop into new func: `handle_order_updates()` 2022-07-30 17:33:45 -04:00
Tyler Goodlet 84cab1327d Drop unneeded count-sequence verification 2022-07-30 17:33:45 -04:00
Tyler Goodlet df4cec930b Get order "editing" working fully
Turns out the EMS can support this as originally expected: you can
update a `brokerd`-side `.reqid` through a `BrokerdAck` msg and the ems
will update its cross-dialog (leg) tracking correctly! The issue was
a bug in the `editOrderStatus` msg handling and appropriate tracking
of the correct `.oid` (ems uid) on the kraken side. This unfortunately
required adding a `emsflow: dict[str, list[BrokerdOrder]]` msg flow
tracing table which means the broker daemon is tracking all the msg flow
with the ems, though I'm wondering now if this is just good practice
anyway and maybe we should offer a small primitive type from our msging
utils to aid with this? I've used such constructs in event handling
systems prior.

There's a lot more factoring that can be done after these changes as
well but the quick detailed summary is,
- rework the `handle_order_requests()` loop to use `match:` syntax and
  update the new `emsflow` table on every new request from the ems.
- fix the `editOrderStatus` case pattern to not include an error msg and
  thus actually be triggered to respond to the ems with a `BrokerdAck`
  containing the new `.reqid`, the new kraken side `txid`.
- skip any `openOrders` msgs which are detected as being kraken's
  internal order "edits" by matching on the `cancel_reason` field.
- update the `emsflow` table in all ws-stream msg handling blocks
  with responses sent to the ems.

Relates to #290
2022-07-30 17:33:45 -04:00
Tyler Goodlet ab08dc582d Make ems relay loop report on brokerd `.reqid` changes 2022-07-30 17:33:45 -04:00
Tyler Goodlet f79d9865a0 Use `match:` syntax in data feed subs processing 2022-07-30 17:33:45 -04:00
Tyler Goodlet 00378c330c First draft, working WS based order management
Move to using the websocket API for all order control ops, dropping
the sync rest api approach which resulted in a bunch of buggy races.
Further this gets us much faster (batch) order cancellation for free
and a simpler ems request handler loop. We now heavily leverage the new
py3.10 `match:` syntax for all kraken-side API msg parsing and
processing and handle both the `openOrders` and `ownTrades` subscription
streams.

We also block "order editing" (by immediate cancellation) for now since
the EMS isn't entirely yet equipped to handle brokerd side `.reqid`
changes (which is how kraken implements so called order "updates" or
"edits") for a given order-request dialog and we may want to even
consider just implementing "updates" ourselves via independent cancel
and submit requests? Definitely something to ponder. Alternatively we
can "masquerade" such updates behind the count-style `.oid` remapping we
had to implement anyway (kraken's limitation) and maybe everything will
just work?

Further details in this patch:
- create 2 tables for tracking the EMS's `.oid` (uuid4) value to `int`s
  that kraken expects (for `reqid`s): `ids` and `reqmsgs` which enable
  local lookup of ems uids to piker-backend-client-side request ids and
  received order messages.
- add `openOrders` sub support which more or less directly relays to
  equivalent `BrokerdStatus` updates and calc the `.filled` and
  `.remaining` values based on cleared vlm updates.
- add handler blocks for `[add/edit/cancel]OrderStatus` events including
  error msg cases.
- don't do any order request response processing in
  `handle_order_requests()` since responses are always received via one
  (or both?) of the new ws subs: `ownTrades` and `openOrders` and thus
  such msgs are now handled in the response relay loop.

Relates to #290
Resolves #310, #296
2022-07-30 17:33:45 -04:00
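
The `[add/edit/cancel]OrderStatus` handling might be shaped like this (msg fields per kraken's ws docs, heavily simplified):

```python
import logging

log = logging.getLogger(__name__)

def process_status(msg: dict, reqids2txids: dict[int, str]) -> None:
    match msg:
        case {'event': 'addOrderStatus', 'status': 'error',
              'errorMessage': errmsg}:
            log.error(f'order submission failed: {errmsg}')

        case {'event': 'editOrderStatus', 'status': 'ok',
              'reqid': reqid, 'txid': txid}:
            # edits generate a *new* txid for the same dialog
            reqids2txids[reqid] = txid

        case {'event': 'cancelOrderStatus', 'status': 'ok',
              'reqid': reqid}:
            reqids2txids.pop(reqid, None)

        case _:
            log.info(f'unhandled status msg: {msg}')
```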
goodboy 180b97b180
Merge pull request #369 from pikers/pydantic_zombie
Drop `pydantic.create_model()` usage for `msgspec.defstruct()`
2022-07-30 17:33:18 -04:00
Tyler Goodlet f0b3a4d5c0 Drop `pydantic.create_model()` usage for `msgspec.defstruct()` 2022-07-30 17:01:56 -04:00
goodboy e2e66324cc
Merge pull request #363 from pikers/ib_pps_upgrade
`ib` pps api layer upgrade
2022-07-27 14:50:28 -04:00
Tyler Goodlet d950c78b81 Mention liquidation in error msg 2022-07-27 14:40:32 -04:00
Tyler Goodlet 7dbcbfdcd5 Write `pps.toml` shortly after broker startup 2022-07-27 14:40:32 -04:00
Tyler Goodlet 279c899de5 Port to new PpTable.dump_active()` output, move order event task to child nursery 2022-07-27 14:40:32 -04:00
Tyler Goodlet db5aacdb9c Only allow vnc client connections from localhost 2022-07-27 14:40:32 -04:00
Tyler Goodlet c7b84ab500 Port position calcs to new ctx mngr apis and drop multi-loop madness 2022-07-27 14:40:32 -04:00
Tyler Goodlet 9967adb371 Lol, drop unintented accound name key layer from ledger ledger 2022-07-27 14:40:32 -04:00
Tyler Goodlet 30ff793a22 Port `ib` broker machinery to new ctx mngr pp api
This drops the use of `pp.update_pps_conf()` (and friends) and instead
moves to using the context style `open_trade_ledger()` and `open_pps()`
managers for faster pp msg gen due to delayed file writing (which was
the main source update latency).

In order to make this work with potentially multiple accounts this also
uses an exit stack which loads each ledger / `pps.toml` into an account
id mapped `dict`; a POC for likely how we should implement some higher
level position manager api.
2022-07-27 12:29:53 -04:00
Tyler Goodlet 666587991a Avoid crash when no vnc server running 2022-07-27 12:29:53 -04:00
goodboy 01005e40a8
Merge pull request #366 from pikers/multisympaper
Fix #222 multi-symbol paper engine support
2022-07-27 12:29:05 -04:00
goodboy d81e629c29
Merge pull request #365 from pikers/ppu_history
Ppu history
2022-07-27 12:25:23 -04:00
Tyler Goodlet 2766fad719 Fix #222 multi-symbol paper engine support 2022-07-27 12:18:59 -04:00
Tyler Goodlet ae71168216 Change name `be_price` -> `ppu` throughout codebase 2022-07-27 12:18:36 -04:00
Tyler Goodlet a0c238daa7 Adjust paper-engine to use `Transaction` for pps updates 2022-07-27 11:20:59 -04:00
Tyler Goodlet 7cbdc6a246 Move clears updates back into a method 2022-07-27 11:17:57 -04:00
Tyler Goodlet 2ff8be71aa Add `PpTable.write_config(), order `pps.toml` columns 2022-07-27 11:17:57 -04:00
Tyler Goodlet ddffaa952d Rework "breakeven" price as "price-per-unit": ppu
The original implementation of `.calc_be_price()` wasn't correct since
the real so called "price per unit" (ppu) is actually defined by
a recurrence relation (which is why the original state-updated
`.lifo_update()` approach worked well) and requires the previous ppu to
be weighted by the new accumulated position size when considering a new
clear event. The ppu is the price above or below which the trader
takes a win or loss on transacting one unit of the trading asset and
thus it is the true "break even price" that determines making or losing
money per fill. This patch fixes the implementation to use trailing
windows of the accumulated size and ppu to compute the next ppu value
for any new clear event as well as handle rare cases where the
"direction" changes polarity (eg. long to short in a single order). The
new method is `Position.calc_ppu()` and further details of the relation
can be seen in the doc strings.

This patch also includes a wack-ton of clean ups and removals in an
effort to refine position management api for easier use in new backends:

- drop `update_pps_conf()`, `load_pps_from_toml()` and rename
  `load_trands_from_ledger()` -> `load_pps_from_ledger()`.
- extend `PpTable` to have a `.to_toml()` method which returns the
  active set of positions ready to be serialized to the `pps.toml` file,
  which it collects from calling,
- `PpTable.dump_active()` which now returns double dicts of the
  open/closed pp object maps.
- make `Position.minimize_clears()` now iterate the clears table in
  chronological order (instead of reverse) and only drop fills prior
  to any zero-size state (the old reversed way can result in incorrect
  history-size-retracement in cases where a position is lessened but
  not completely exited).
- drop `Position.add_clear()` and instead just manually add entries
  inside `.update_from_trans()` and also add an `accum_size` and `ppu`
  field to every entry thus creating a position "history" sequence of
  the ppu and accum size for every position, preparing for being able
  to show "position lifetimes" in the UI.
- move fqsn getting into `Position.to_pretoml()`.
2022-07-26 12:09:59 -04:00
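
The recurrence in plain terms: a size-increasing clear re-weights the prior ppu by the previously accumulated size, while a size-reducing clear leaves it unchanged. A stripped-down sketch (entry keys assumed, cost handling simplified):

```python
def calc_ppu(clears: list[dict]) -> float:
    accum_size: float = 0.0
    ppu: float = 0.0

    for clear in clears:  # assumed keys: 'price', 'size', 'cost'
        size, price, cost = clear['size'], clear['price'], clear['cost']
        new_size = accum_size + size

        if accum_size and new_size and (accum_size > 0) != (new_size > 0):
            # rare polarity flip (eg. long -> short in one clear):
            # the remaining units were effectively opened at this
            # clear's price so it becomes the new basis.
            ppu = price

        elif abs(new_size) > abs(accum_size):
            # increasing the position: weight the prior ppu by the
            # prior accumulated size and fold in this clear's cost.
            ppu = (
                ppu * abs(accum_size)
                + price * abs(size)
                + cost
            ) / abs(new_size)

        # else: reducing/closing the position keeps the prior ppu

        accum_size = new_size

    return ppu
```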
Tyler Goodlet 5520e9ef21 Minimize clears and audit sizing for all updates in `.update_from_trans()` 2022-07-26 12:09:59 -04:00
Tyler Goodlet 958e542f7d Drop `.lifo_upate()` add `.audit_sizing()`
Use the new `.calc_[be_price/size]()` methods when serializing to and
from the `pps.toml` format and add an audit method which will warn about
mismatched values and assign the clears table calculated values pre-write.

Drop the `.lifo_update()` method and instead allow both
`.size`/`.be_price` properties to exist (for non-ledger related uses of
`Position`) alongside the new calc methods and only get fussy about
*what* the properties are set to in the case of ledger audits.

Also changes `Position.update()` -> `.add_clear()`.
2022-07-25 12:06:52 -04:00
goodboy 927bbc7258
Merge pull request #364 from pikers/historical_breakeven_pp_price
Add non-state-incremented calculation methods
2022-07-25 09:24:26 -04:00
Tyler Goodlet 45bef0cea9 Add non-state-incremented calculation methods
Since we're going to need them anyway for desired features, add
2 new `Position` methods:
- `.calc_be_price()` which computes the breakeven cost basis price
  from the entries in the clears table.
- `.calc_size()` which just sums the clear sizes.

Add a `cost_scalar: float` control to the `.update_from_trans()` method
to allow manual adjustment of the cost weighting for the case where
a "non-symmetrical" model is wanted.

Go back to always trying to write the backing ledger files on exit, even
when there's an error (obvs without the `return` in the `finally:` block
f$#%ing it up).
2022-07-23 19:39:47 -04:00
goodboy a3d46f713e
Merge pull request #361 from pikers/pptables
`PpTable`s
2022-07-21 17:54:43 -04:00
Tyler Goodlet 5684120c11 Wow, drop idiotic `return` inside `finally:`
Can't believe i missed this but any `return` inside a `finally` will
suppress the error from the `try:` part... XD

Thought i was losing my mind when the ledger was mutated and then
an error just after wasn't getting raised.. lul.

Never again...
2022-07-21 17:52:44 -04:00
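
For the record, the Python semantics in question:

```python
def borked() -> str:
    try:
        raise RuntimeError('ledger was mutated!')
    finally:
        # a `return` (or `break`/`continue`) in a `finally:` block
        # silently swallows any in-flight exception from the `try:`
        return 'all good'

assert borked() == 'all good'  # no RuntimeError ever surfaces
```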
Tyler Goodlet ddb195ed2c Add a flag to prevent writing `pps.toml` on exit 2022-07-21 17:52:44 -04:00
Tyler Goodlet 6747831677 Don't pop zero pps from table in `.dump_active()`
In order to avoid double transaction adds/updates and too-early-discard
of zero sized pps (like when trades are loaded from a backend broker but
were already added to a ledger or `pps.toml` prior) we now **don't** pop
such `Position` entries from the `.pps` table in order to keep each
position's clears table always in place. This avoids the edge case where
an entry was removed too early (due to zero size) but then duplicate
trade entries that were in that entry's clears show up from the backend
and are entered into a new entry resulting in an incorrect size in a new
entry.. We still only push non-net-zero entries to the `pps.toml`.

More fixes:
- return the updated set of `Positions` from `.lifo_update()`.
- return the full table set from `update_pps()`.
- use `PpTable.update_from_trans()` more throughout.
- always write the `pps.toml` on `open_pps()` exit.
- only return table from `load_pps_from_toml()`.
2022-07-21 17:52:44 -04:00
Tyler Goodlet 9326379b04 Add a `PpTable` type, give it the update methods
In an effort to begin allowing backends to have more granular control
over position updates, particularly in the case where they need to be
reloaded from a trades ledger, this adds a new table API which can
be loaded using `open_pps()`.

- offer an `.update_trans()` method which takes in a `dict` of
  `Transactions` and updates the current table of `Positions` from it.
- add a `.dump_active()` which renders the active pp entries dict in
  a format ready for toml serialization and all closed positions since
  the last update (we might want to not drop these?)

All other module-function apis currently in use should remain working as
before for the moment.
2022-07-21 17:52:44 -04:00
Tyler Goodlet 09d9a7ea2b Expect `<brokermod>.norm_trade_records()` to return `dict` 2022-07-21 17:52:44 -04:00
Tyler Goodlet 45871d5846 Freeze transactions, add todo notes for incr update 2022-07-21 17:52:44 -04:00
goodboy bf7a49c19b
Merge pull request #358 from pikers/fix_forex
Fix forex
2022-07-21 17:52:08 -04:00
goodboy 0a7fce087c
Merge pull request #362 from pikers/ahab_you_bad_boi
Revert to hard container kill on log error
2022-07-21 17:51:11 -04:00
Tyler Goodlet d3130ca04c Revert to hard container kill on log error 2022-07-21 17:00:36 -04:00
Tyler Goodlet e30a3c5b54 Single chart requires view reset to size to data on startup 2022-07-21 11:39:10 -04:00
Tyler Goodlet 2393965e83 Fix bottom axis when no fsps/subplots 2022-07-21 11:39:04 -04:00
Tyler Goodlet fb39da19f4 Add option and adhoc meta-info support to `con2fqsn()` 2022-07-21 11:38:53 -04:00
Tyler Goodlet a27431c34f Unify contract->fqsn translation with new cached-helper 2022-07-21 11:38:42 -04:00
Tyler Goodlet 070b9f3dc1 Log msg tweak 2022-07-19 09:58:43 -04:00
goodboy f2dba44169
Merge pull request #360 from pikers/fsp_shm_caching
Fsp shm caching
2022-07-19 09:55:27 -04:00
Tyler Goodlet 0ef5da0881 Unbreak regular searches and stock lookups..
Change `.find_contract()` -> `.find_contracts()` to allow multi-search
for so called "ambiguous" contracts (like for `Future`s) such that the
method now returns a `list` of tracts and populates the contract cache
with all specific tracts retrieved. Let it take in an (unvalidated)
contract that will be fqsn-style-tokenized such that it can be called
from `.search_symbols()` (though we're not quite there yet XD).

More stuff,

- add `Client.parse_patt2fqsn()` which is an fqsn to token unpacker
  built from the original logic in the old `.find_contract()`.
- handle fiat/forex pairs with the `'CASH'` sectype.
- add a flag to allow unqualified contracts to fail with a warning msg.
- populate the client's contract cache with all expiries of
  an ambiguous derivative.
- allow `.con_deats()` to emit a warning msg instead of raising on def-not-found.
- add commented `assert 0` which was triggering a debugger deadlock in
  `tractor` which we still haven't been able to create a unit test for.
2022-07-19 09:42:01 -04:00
Tyler Goodlet 0580b204a3 A `size` field in ticks is optional 2022-07-19 09:41:37 -04:00
Tyler Goodlet 6ce699ae1f Repair display loop to work when no vlm chart is loaded 2022-07-19 09:41:37 -04:00
Tyler Goodlet 3aa72abacf Primary exchange can never be "smart" 2022-07-19 09:41:37 -04:00
Tyler Goodlet 04004525c1 Specifically denote no-vlm contracts in symbol info 2022-07-19 09:41:37 -04:00
Tyler Goodlet a7f0adf1cf Make forex rt feeds work again 2022-07-19 09:41:37 -04:00
Tyler Goodlet cef511092d Support `Forex` in the pp packer 2022-07-19 09:41:37 -04:00
Tyler Goodlet 4e5df973a9 Support `Forex` tracts in `normalize()` 2022-07-19 09:41:37 -04:00
Tyler Goodlet 6a1a62d8c0 Add (hacky) forex pair support to `Client.find_contract()` 2022-07-19 09:41:37 -04:00
Tyler Goodlet e0491cf2e7 Cache fsp ``ShmArrays`` where possible
Minimize calling `.data._shmarray.attach_shm_array()` as much as
possible to avoid the crash from #332. This is the suggested hack from
issue #359.

Resolves https://github.com/pikers/piker/issues/359
2022-07-19 09:07:40 -04:00
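
The caching hack amounts to something like this (names are illustrative; the real token/array types live in `piker.data`):

```python
# module-level cache of already-attached shm arrays keyed by the
# token's unique shm name, so repeated fsp subscriptions re-use
# the same attachment instead of re-attaching (see #332).
_shm_cache: dict[str, object] = {}

def maybe_attach(token: dict, attach) -> object:
    # `attach` is the attach_shm_array-style callable; only invoke
    # it on a cache miss.
    key = token['shm_name']
    if key not in _shm_cache:
        _shm_cache[key] = attach(token)
    return _shm_cache[key]
```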
Tyler Goodlet 90bc9b9730 Only 4k seconds of 1s ohlc when no tsdb 2022-07-19 09:07:27 -04:00
goodboy f449672c68
Merge pull request #357 from pikers/paper_eng_msg_fixes
Oof, paper engine msg fixes after using `msgspec.Struct`..
2022-07-11 13:14:39 -04:00
Tyler Goodlet fd22f45178 Oof, paper engine msg fixes after using `msgspec.Struct`.. 2022-07-11 13:04:07 -04:00
goodboy 37f634a2ed
Merge pull request #353 from pikers/drop_pydantic
Drop `pydantic`
2022-07-09 14:15:50 -04:00
Tyler Goodlet dfee9dd97e Remove `pydantic` from deps 2022-07-09 13:10:09 -04:00
Tyler Goodlet 2a99f7a4d7 Drop remaining `BaseModel` api usage from rest of codebase 2022-07-09 12:38:17 -04:00
Tyler Goodlet b44e2d9ed9 Support `0` value `reqid`s 🤦 2022-07-09 12:10:23 -04:00
Tyler Goodlet 795d4d76f4 Add some todo-reminders for ``msgspec`` stuff 2022-07-09 12:09:50 -04:00
Tyler Goodlet c26acb1fa8 Add `Struct.copy()` which does a rountrip validate 2022-07-09 12:09:38 -04:00
Tyler Goodlet 11b6699a54 Change all clearing msgs over to `msgspec` 2022-07-09 12:09:38 -04:00
Tyler Goodlet f9bdd643cf Cast slots to `int` before range set 2022-07-09 12:09:38 -04:00
Tyler Goodlet 2baea21c7d Drop pydantic from allocator 2022-07-09 12:09:38 -04:00
Tyler Goodlet bea0111753 Add a custom `msgspec.Struct` with some humanizing 2022-07-09 12:09:38 -04:00
Tyler Goodlet c870665be0 Remove `BaseModel` use from all dataclass-like uses 2022-07-09 12:08:41 -04:00
Tyler Goodlet 4ff1090284 Use struct for shm tokens 2022-07-09 12:06:47 -04:00
Tyler Goodlet f22461a844 Use our struct for kraken `Pair` type 2022-07-09 12:06:47 -04:00
Tyler Goodlet 458c7211ee Drop `pydantic` from service mngr 2022-07-09 12:06:47 -04:00
Tyler Goodlet 5cc4b19a7c Use our struct in binance backend 2022-07-09 12:06:47 -04:00
goodboy f5236f658b
Merge pull request #356 from pikers/null_last_quote_fix
Finally solve the last-price-is-`nan` issue..
2022-07-08 17:47:45 -04:00
goodboy a360b66cc0
Merge pull request #355 from pikers/ahab_hardkill
Ahab hardkill
2022-07-08 17:47:17 -04:00
Tyler Goodlet 4bcb791161 Finally solve the last-price-is-`nan` issue..
Not sure why I put this off for so long but the check is in now such
that if the market isn't open or no rt quote comes in from the first
query, we just pull from the last shm history 'close' value.
Includes another fix to avoid raising when a double remove of the client
side stream from the registry sometimes happens.
2022-07-08 17:30:34 -04:00
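
The fallback check in essence (quote/shm field names are assumptions):

```python
from math import isnan

def ensure_last(first_quote: dict, close_history: list[float]) -> float:
    # if the market is closed or no rt quote arrived on the first
    # query, fall back to the most recent 'close' from shm history.
    last = first_quote.get('last', float('nan'))
    if isnan(last):
        last = close_history[-1]
    return last
```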
Tyler Goodlet 4c7c78c815 Add a `ApplicationLogError` custom exc instead 2022-07-08 17:29:03 -04:00
Tyler Goodlet 019867b413 Fix missing container id, drop custom exception 2022-07-08 17:22:37 -04:00
Tyler Goodlet f356fb0a68 Hard kill container on both a timeout or connection error 2022-07-08 17:22:37 -04:00
goodboy 756249ff70
Merge pull request #348 from pikers/notokeninwswrapper
Drop token attr from `NoBsWs`
2022-07-05 20:57:30 -04:00
goodboy 419ebebe72
Merge pull request #346 from pikers/kraken_ledger_pps
Kraken ledger pps
2022-07-05 20:56:44 -04:00
goodboy a229996ebe
Merge pull request #350 from pikers/ib_rt_pp_update_hotfix
`ib` rt pps update hotfix..
2022-07-05 20:55:14 -04:00
Tyler Goodlet af01e89612 Create sub-pkg logger once during import 2022-07-05 16:59:47 -04:00
Tyler Goodlet 609034c634 Fix typo / line length 2022-07-05 16:46:31 -04:00
Tyler Goodlet 95dd0e6bd6 `ib` rt pps update hotfix..
Not sure how this didn't get caught in usage, but basically real-time
updates got broken by a rework of `update_ledger_from_api_trades()`.
The issue is that the ledger was being updated **before** calling
`piker.pp.update_pps_conf()` which resulted in the `Position.size`
not being updated correctly since the [latest added] clears passed
in via the `trade_records` arg were already found in the `.clears` table
and thus were causing the loop to skip the `Position.lifo_update()`
call..

The solution here is to not update the ledger **until after** we call
`update_pps_conf()` - it's more read/writes but it's correct and we can
figure out a less io-heavy way to do the file writing later.

Further this includes a fix to avoid double emitting a pp update caused
by non-thorough logic that waits for a commission report to arrive
during a fill event; previously we were emitting the same message twice
due to the lack of a check for an existing comms report in the case
where the report arrives *after* the fill.
2022-07-05 16:25:11 -04:00
goodboy 479ad1bb15
Merge pull request #347 from pikers/pps_postmortem
Pps postmortem
2022-07-04 15:28:27 -04:00
Tyler Goodlet d506235a8b Drop token attr from `NoBsWs` 2022-07-03 17:07:35 -04:00
Tyler Goodlet 7846446a44 Add real-time incremental pp updates
Moves to using the new `piker.pp` apis to both store real-time trade
events in a ledger file as well as emit position update msgs (which were
not in this backend at all prior) when new orders clear (aka fill).

In terms of outstanding issues,
- solves the pp update part of the bugs reported in #310
- starts a msg case block in prep for #293

Details of rework:
- move the `subscribe()` ws fixture to module level and `partial()` in
  the client token instead of passing it to the instance; in prep for
  removal of the `.token` attr from the `NoBsWs` wrapper.
- drop `make_auth_sub()` since it was too thin and we can just
  do it all succinctly in `subscribe()`
- filter trade update msgs to those not yet stored in the toml ledger
- much better kraken api msg unpacking using new `match:` syntax B)

Resolves #311
2022-07-03 14:52:27 -04:00
Tyler Goodlet 214f864dcf Handle ws style symbol schema 2022-07-03 14:37:15 -04:00
Tyler Goodlet 4c0f2099aa Send fill msg first 2022-07-03 11:19:33 -04:00
Tyler Goodlet aea7bec2c3 Inline `process_trade_msgs()` into relay loop 2022-07-03 11:18:45 -04:00
Tyler Goodlet 47777e4192 Use new `str.removeprefix()` from py3.10 2022-07-02 16:20:22 -04:00
Tyler Goodlet f6888057c3 Just do a naive lookup for symbol normalization 2022-07-02 16:20:22 -04:00
Tyler Goodlet f65f56ec75 Initial `piker.pp` ledger support for `kraken`
No real-time update support (yet) but this is the first draft at writing
trades ledgers and `pps.toml` entries for the kraken backend.

Deatz:
- drop `pack_positions()`, no longer used.
- use `piker.pp` apis to both write a trades ledger file and update the
  `pps.toml` inside the `trades_dialogue()` endpoint startup.
- drop the weird paper engine swap over if auth can't be done, we should
  be doing something with messaging in the ems over this..
- more web API error response raising.
- pass the `pp.Transaction` set loaded from ledger into
  `process_trade_msgs()` to avoid duplicate sends of already collected
  trades msgs.
- add `norm_trade_records()` public endpoint (used by `piker.pp` api)
  and `update_ledger()` helper.
- rejig `process_trade_msgs()` to drop the weird `try:` assertion block
  and skip already-recorded-in-ledger trade msgs as well as yield *each*
  trade instead of sub-sequences.
2022-07-02 16:20:22 -04:00
Tyler Goodlet 5d39b04552 Invert normalizer branching logic, raise on edge case 2022-07-02 16:20:22 -04:00
Tyler Goodlet 735fbc6259 Raise any error from response 2022-07-02 16:20:22 -04:00
Tyler Goodlet fcd7e0f3f3 Avoid crash on trades ledger msgs
Just ignore them for now using new `match:` syntax B)
but we'll do incremental update sooon!

Resolves #311
2022-07-02 16:20:22 -04:00
Tyler Goodlet 9106d13dfe Drop wacky if block logic, while loop, handle errors and prep for async batching 2022-07-02 16:20:22 -04:00
Tyler Goodlet d3caad6e11 Factor data feeds endpoints into new sub-mod 2022-07-02 16:20:22 -04:00
Tyler Goodlet f87a2a810a Make broker mod import from new api mod 2022-07-02 16:20:21 -04:00
Tyler Goodlet 208e2e9e97 Move core api code into sub-module 2022-07-02 16:20:21 -04:00
Tyler Goodlet 90cc6eb317 Factor clearing related endpoints into new `.kraken.broker` submod 2022-07-02 16:20:21 -04:00
Tyler Goodlet b118becc84 Start `kraken` sub-pkg 2022-07-02 16:20:21 -04:00
Tyler Goodlet 7442d68ecf Drop nesting level from emsd's pp cacheing, adjust order mode 2022-07-02 16:19:58 -04:00
Tyler Goodlet 076c167d6e Fix ib pkg mod doc string 2022-07-02 16:14:34 -04:00
Tyler Goodlet 64d8cd448f Right, handle brand-new pp case.. 2022-07-02 16:14:34 -04:00
Tyler Goodlet ec6a28a8b1 Drop stale comment 2022-07-02 16:14:34 -04:00
Tyler Goodlet cc15d02488 Fix `.minimize_clears()` to include clears since zero
This was just implemented totally wrong but somehow worked XD

The idea was to include all trades that contribute to ongoing position
size since the last time the position was "net zero", i.e. no position
in the asset. Adjust arithmetic to *subtract* from the current size
until a zero size condition is met and then keep all those clears as
part of the "current state" clears table.

Additionally this fixes another bug where the positions freshly loaded
from a ledger *were not* being merged with the current `pps.toml` state.
2022-07-02 16:14:34 -04:00
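
The corrected arithmetic, roughly (clear entry shape assumed):

```python
def minimize_clears(clears: list[dict], current_size: float) -> list[dict]:
    # walk *backwards* from the latest clear, subtracting each
    # clear's size from the current position size; once we reach
    # zero we've found the last "net-zero" point and everything
    # after it is the current position's state.
    size = current_size
    keep: list[dict] = []
    for clear in reversed(clears):
        keep.append(clear)
        size -= clear['size']
        if size == 0:
            break

    keep.reverse()  # restore chronological order
    return keep
```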
goodboy d5bc43e8dd
Merge pull request #336 from pikers/lifo_pps_ib
LIFO/"breakeven" pps for `ib`
2022-06-29 10:07:56 -04:00
Tyler Goodlet 287a2c8396 Put swb2 in venue filter for now 2022-06-29 10:00:38 -04:00
Tyler Goodlet 453ebdfe30 Fix field name to new `.bsuid` 2022-06-28 10:07:57 -04:00
Tyler Goodlet 2b1fb90e03 Add tractor breaker assert.. 2022-06-28 10:07:57 -04:00
Tyler Goodlet 695ba5288d Comment-drop adhoc symbol (futes) matching in search 2022-06-28 10:07:57 -04:00
Tyler Goodlet d6c32bba86 Use new adhoc sym map for symbols without exchange tags (usually futes) 2022-06-28 10:07:57 -04:00
Tyler Goodlet fa89207583 Use sign of the new size which indicates direction of position 2022-06-28 10:07:57 -04:00
Tyler Goodlet 557562e25c Build out adhoc sym map from futes list 2022-06-28 10:07:57 -04:00
Tyler Goodlet c6efa2641b Cost part of position breakeven calc is direction dependent 2022-06-28 10:07:57 -04:00
Tyler Goodlet 8a7e391b4e Terser startup msg fields 2022-06-28 10:07:57 -04:00
Tyler Goodlet aec48a1dd5 Right, zero sized "closed out" msgs are totally fine 2022-06-28 10:07:57 -04:00
Tyler Goodlet 87f301500d Simplify updates to single-pass, fix clears minimizing
Gah, there was a remaining bug where if you tried to update the pps state
with both new trades and from the ledger you'd do a double add of
transactions that were cleared during an `update_pps()` loop. Instead now
keep all clears intact until ready to serialize to the `pps.toml` file
in which case we call a new method `Position.minimize_clears()` which
does the work of only keeping clears since the last net-zero size.

Re-implement `update_pps_conf()` update logic as a single pass loop
which does expiry and size checking for closed pps all in one pass thus
allowing us to drop `dump_active()` which was kinda redundant anyway..
2022-06-28 10:07:57 -04:00
Tyler Goodlet 566a54ffb6 Reset the clears table on zero size conditions 2022-06-28 10:07:57 -04:00
Tyler Goodlet f9c4b3cc96 Fixes for newly opened and closed pps
Before we weren't emitting pp msgs when a position went back to "net
zero" (aka the size is zero) nor when a new one was opened (wasn't
previously loaded from the `pps.toml`). This reworks a bunch of the
incremental update logic as well as ports to the changes in the
`piker.pp` module:

- rename a few of the normalizing helpers to be more explicit.
- drop calling `pp.get_pps()` in the trades dialog task and instead
  create msgs iteratively, per account, by iterating through collected
  position and API trade records and calling instead
  `pp.update_pps_conf()`.
- always from-ledger-update both positions reported from ib's pp sys and
  session api trades detected on ems-trade-dialog startup.
- `update_ledger_from_api_trades()` now does **just** that: only updates
  the trades ledger and returns the transaction set.
- `update_and_audit_msgs()` now takes only the input list of msgs and
  generates new msgs for newly created positions that weren't previously
  loaded from the `pps.toml`.
2022-06-28 10:07:57 -04:00
Tyler Goodlet a12e6800ff Support per-symbol reload from ledger pp loading
- use `tomli` package for reading since it's the fastest pure python
  reader available apparently.
- add new fields to each pp's clears table: price, size, dt
- make `load_pps_from_toml()`'s `reload_records` a dict that can be
  passed in by the caller and is used verbatim to re-read a ledger and
  filter to the specified symbol set to build out fresh pp objects.
- add a `update_from_ledger: bool` flag to `load_pps_from_toml()`
  to allow forcing a full backend ledger read.
- if a set of trades records is passed into `update_pps_conf()` parse
  out the meta data required to cause a ledger reload as per 2 bullets
  above.
- return active and closed pps in separate by-account maps from
  `update_pps_conf()`.
- drop the `key_by` kwarg.
2022-06-28 10:07:57 -04:00
Tyler Goodlet cc68501c7a Make pp msg `.currency` not required 2022-06-28 10:07:57 -04:00
Tyler Goodlet 7ebf8a8dc0 Add `tomli` as dep being fastest in the west 2022-06-28 10:07:57 -04:00
Tyler Goodlet 4475823e48 Add draft ip-mismatch skip case 2022-06-28 10:07:57 -04:00
Tyler Goodlet 3713288b48 Strip ib prefix before acctid use 2022-06-28 10:07:57 -04:00
Tyler Goodlet 4fdfb81876 Support re-processing a filtered ledger entry set
This makes it possible to refresh a single fqsn-position in one's
`pps.toml` by simply deleting the file entry, in which case, if there
are new trade records passed to `load_pps_from_toml()` via the new
`reload_records` kwarg, then the backend ledger entries matching that
symbol will be filtered and used to recompute a fresh position.

This turns out to be super handy when you have crashes that prevent
a `pps.toml` entry from being updated correctly but where the ledger
does have all the data necessary to calculate a fresh correct entry.
2022-06-28 10:07:57 -04:00
Tyler Goodlet f32b4d37cb Support pp audits with multiple accounts 2022-06-28 10:07:56 -04:00
Tyler Goodlet 2063b9d8bb Drop ledger entries that have no transaction id 2022-06-28 10:07:56 -04:00
Tyler Goodlet fe14605034 Fix null case return 2022-06-28 10:07:56 -04:00
Tyler Goodlet 68b32208de Key pps by bsuid to avoid incorrect disparate entries 2022-06-28 10:07:56 -04:00
Tyler Goodlet f1fe369bbf Write clears table as a list of tables in toml 2022-06-28 10:07:56 -04:00
Tyler Goodlet 16b2937d23 Passthrough toml lib kwargs 2022-06-28 10:07:56 -04:00
Tyler Goodlet bfad676b7c Add expiry and datetime support to ledger parsing 2022-06-28 10:07:56 -04:00
Tyler Goodlet c617a06905 Port everything to `Position.be_price` 2022-06-28 10:07:56 -04:00
Tyler Goodlet ff74f4302a Support pp expiries, datetimes on transactions
Since some positions obviously expire and thus shouldn't continually
exist inside a `pps.toml` add naive support for tracking and discarding
expired contracts:
- add `Transaction.expiry: Optional[pendulum.datetime]`.
- add `Position.expiry: Optional[pendulum.datetime]` which can be parsed
  from a transaction ledger.
- only write pps with a non-none expiry to the `pps.toml`
- change `Position.avg_price` -> `.be_price` (be is "breakeven")
  since it's a much less ambiguous name.
- change `load_pps_from_ledger()` to *not* call `dump_active()` since
  for the only use case it ends up getting called later anyway.
2022-06-28 10:07:56 -04:00
Tyler Goodlet 21153a0e1e Ugh, hack our own toml encoder since it seems everything in the lib is half-baked.. 2022-06-28 10:07:56 -04:00
Tyler Goodlet b6f344f34a Only emit pps msg for trade triggering instrument
We can probably make this better (and with fewer file sys accesses) later
such that we keep a consistent pps state in mem and only write async
maybe from another side-task?
2022-06-28 10:07:56 -04:00
Tyler Goodlet ecdc747ced Allow packing pps by a different key set 2022-06-28 10:07:56 -04:00
Tyler Goodlet 5147cd7be0 Drop global proxies table, isn't multi-task safe.. 2022-06-28 10:07:56 -04:00
Tyler Goodlet 3dcb72d429 Only finally-write around the ledger yield up 2022-06-28 10:07:56 -04:00
Tyler Goodlet fbee33b00d Get real-time trade oriented pp updates workin
What a nightmare this was.. main holdup was that cost (commission)
reports are fired independently from "fills" so you can't really emit
a proper full position update until they both arrive.

Deatz:
- move `push_tradesies()` and relay loop in `deliver_trade_events()` to
  the new py3.10 `match:` syntax B)
- subscribe for, and handle `CommissionReport` events from `ib_insync`
  and repack as a `cost` event type.
- handle cons with no primary/listing exchange (like futes) in
  `update_ledger_from_api_trades()` by falling back to the plain
  'exchange' field.
- drop reverse fqsn lookup from ib positions map; just use contract
  lookup for api trade logs since we're already connected..
- make validation in `update_and_audit()` optional via flag.
- pass in the accounts def, ib pp msg table and the proxies table to the
  trade event relay task-loop.
- add `emit_pp_update()` to encapsulate a full api trade entry
  incremental update which calls into the `piker.pp` apis to,
  - update the ledger
  - update the pps.toml
  - generate a new `BrokerdPosition` msg to send to the ems
- adjust trades relay loop to only emit pp updates when a cost report
  arrives for the fill/execution by maintaining a small table per exec
  id.
2022-06-28 10:07:56 -04:00
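
The per-exec-id pairing table boils down to something like (shape assumed):

```python
# partial events keyed by execution id; a full pp update is only
# emitted once *both* the fill and its cost report have arrived.
_pending: dict[str, dict] = {}

def pair_event(exec_id: str, etype: str, event: dict) -> dict | None:
    entry = _pending.setdefault(exec_id, {})
    entry[etype] = event
    if 'fill' in entry and 'cost' in entry:
        return _pending.pop(exec_id)  # ready: emit the pp update
    return None
```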
Tyler Goodlet 3991d8f911 Add `update_and_audit()` in prep for rt per-trade-event pp updates 2022-06-28 10:07:56 -04:00
Tyler Goodlet 7b2e8f1ba5 Return object form from `update_pps_conf()` 2022-06-28 10:07:56 -04:00
Tyler Goodlet cbcbb2b243 Filter pps loading to client-active accounts set 2022-06-28 10:07:56 -04:00
Tyler Goodlet cd3bfb1ea4 Maybe load from ledger in `get_pps()`, allow account filtering 2022-06-28 10:07:56 -04:00
Tyler Goodlet 82b718d5a3 Many, many `ib` trade log schema hackz
I don't want to rant too much any more since it's pretty clear `ib` has
either zero concern for its (api) users or a severely terrible data
management team and/or general inter-team coordination system, but this
patch more or less hacks the flex report records to be similar enough to
API "execution" / "fill" records such that they can be similarly
normalized and stored as well as processed for position calculations..

Dirty deats,
- use the `IB.fills()` method for pulling current session trade events
  since it's both recommended in the docs and does seem to capture
  more extensive meta-data.
- add an `update_ledger_from_api()` helper which does all the insane work
  of making sure api trade entries are usable both within piker's global
  fqsn system but also compatible with incremental updates of positions
  computed from trade ledgers derived from ib's "flex reports".
- add "auditting" of `ib`'s reported positioning API messages by
  comparison with piker's new "traders first" breakeven price style and
  complain via logging on mismatches.
- handle buy vs. sell arithmetic (via a +ve or -ve multiplier) to make
  "size" arithmetic work for API trade entries..
- draft out options contract transaction parsing but skip in pps
  generation for now.
- always use the "execution id" as ledger keys both in flex and api
  trade processing.
- for whatever weird reason `ib_insync` doesn't include the so called
  "primary exchange" in contracts reported in fill events, so do manual
  contract lookups in such cases such that pps entries can be placed
  in the right fqsn section...

Still ToDo:
- incremental update on trade clears / position updates
- pps audit from ledger depending on user config?
2022-06-28 10:07:56 -04:00
Tyler Goodlet 05a1a4e3d8 Use new `Position.bsuid` field throughout 2022-06-28 10:07:56 -04:00
Tyler Goodlet 412138a75b Add transaction costs to "fills"
This makes a few major changes but mostly is centered around including
transaction (aka trade-clear) costs in the avg breakeven price
calculation.

TL;DR:
- rename `TradeRecord` -> `Transaction`.
- make `Position.fills` a `dict[str, float]` which holds each clear's
  cost value.
- change `Transaction.symkey` -> `.bsuid` for "backend symbol unique id".
- drop `brokername: str` arg to `update_pps()`
- rename `._split_active()` -> `dump_active()` and use input keys
  verbatim in output map.
- in `update_pps_conf()` always incrementally update from trade records
  even when no `pps.toml` exists yet since it may be both the case that
  the ledger needs loading **and** the caller is handing new records not
  yet in the ledger.
2022-06-28 10:07:56 -04:00
Tyler Goodlet c1b63f4757 Use `IB.fills()` method for `Client.trades()` 2022-06-28 10:07:56 -04:00
Tyler Goodlet 5d774bef90 Move `open_trade_ledger()` to pp mod, add `get_pps()` 2022-06-28 10:07:56 -04:00
Tyler Goodlet de77c7d209 Better doc strings and detailed comments 2022-06-28 10:07:56 -04:00
Tyler Goodlet ce1eb11b59 Use new ledger pps but cross-ref with what ib says 2022-06-28 10:07:56 -04:00
Tyler Goodlet b629ce177d Ensure `.fills` are filled in during object construct.. 2022-06-28 10:07:56 -04:00
Tyler Goodlet 73fa320917 Cut schema-related comment down to major sections 2022-06-28 10:07:56 -04:00
Tyler Goodlet dd05ed1371 Implement updates and write to config: `pps.toml`
Begins the position tracking incremental update API which supports both
constructing a `pps.toml` from trade ledgers as well as diff-oriented
incremental update from an existing config assumed to be previously
generated from some prior ledger.

New set of routines includes:
- `_split_active()` a helper to split a position table into the active
  and closed positions (aka pps of size 0) for determining entry updates
  in the `pps.toml`.
- `update_pps_conf()` to maybe load a `pps.toml` and update it from
   an input trades ledger including necessary (de)serialization to and
   from `Position` object form(s).
- `load_pps_from_ledger()` a ledger parser-loader which constructs
  a table of pps strictly from the broker-account ledger data without
  any consideration for any existing pps file.

Each "entry" in `pps.toml` also contains a `fills: list` attr (name may
change) which references the set of trade records which make up its
state since the last net-zero position in the instrument.
2022-06-28 10:07:56 -04:00
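The active/closed split can be sketched as follows, assuming each `Position` exposes a net `.size` (zero for a closed pp):

    def _split_active(
        pps: dict[str, 'Position'],
    ) -> tuple[dict, dict]:
        # split a pp table into open entries and closed
        # (net-zero size) entries for `pps.toml` updating.
        active: dict = {}
        closed: dict = {}
        for key, pp in pps.items():
            (closed if pp.size == 0 else active)[key] = pp
        return active, closed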
Tyler Goodlet 2a641ab8b4 Call it `pps.toml`, allows toml passthrough kwargs 2022-06-28 10:07:56 -04:00
Tyler Goodlet f8f7ca350c Extend trade-record tools, add ledger to pps extraction
Add a `TradeRecord` struct which holds the minimal field set to build
out position entries. Add `.update_pps()` to convert a set of records
into LIFO position entries, optionally allowing for an update to some
existing pp input set. Add `load_pps_from_ledger()` which does a full
ledger extraction to pp objects, ready for writing a `pps.toml`.
2022-06-28 10:07:56 -04:00
Tyler Goodlet 88b4ccc768 Add API trade/exec entry parsing and ledger updates
Since "flex reports" are only available for the current session's trades
the day after, this adds support for also collecting trade execution
records for the current session and writing them to the equivalent
ledger file.

Summary:
- add `trades_to_records()` to handle parsing both flex and API event
  objects into a common record form.
- add `norm_trade_records()` to handle converting ledger entries into
  `TradeRecord` types from the new `piker.pps` mod (coming in next
  commit).
2022-06-28 10:07:56 -04:00
Tyler Goodlet eb2bad5138 Make our `Symbol` a `msgspec.Struct` 2022-06-28 10:07:56 -04:00
Tyler Goodlet f768576060 Delegate paper engine pp tracking to new type 2022-06-28 10:07:56 -04:00
Tyler Goodlet add0e92335 Drop old trade log config writing code 2022-06-28 10:07:56 -04:00
Tyler Goodlet 1eb7e109e6 Start `piker.pp` module, LIFO pp updates
Start a generic "position related" util mod and bring in the `Position`
type from the allocator, convert it to a `msgspec.Struct` and add
a `.lifo_update()` method. Implement a WIP pp parser from a trades
ledger and use the new lifo method to gather position entries.
2022-06-28 10:07:56 -04:00
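The LIFO update arithmetic amounts to weighted-average accounting when adding to a position and entry-price preservation when reducing it; a minimal sketch (not the actual method body):

    def lifo_update(
        size: float,        # current signed position size
        avg_price: float,   # current average entry price
        clear_size: float,  # signed size of the new clear
        clear_price: float,
    ) -> tuple[float, float]:
        new_size = size + clear_size
        if new_size == 0:
            # position fully closed out
            return 0.0, 0.0
        if size * clear_size >= 0:
            # same direction: fold the clear into the
            # weighted average entry price
            avg_price = (
                size * avg_price + clear_size * clear_price
            ) / new_size
        # opposite direction (a partial close) keeps the entry
        # price; flips are omitted for brevity
        return new_size, avg_price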
Tyler Goodlet 725909a94c Convert accounts table to `bidict` after config load 2022-06-28 10:07:56 -04:00
Tyler Goodlet 050aa7594c Simplify trades ledger collection to single pass loop 2022-06-28 10:07:56 -04:00
Tyler Goodlet 450009ff9c Add `open_trade_ledger()` for writing `<confdir>/ledgers/trades_<broker>_<acct>.toml` files 2022-06-28 10:07:56 -04:00
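A sketch of what such a ledger-file context manager might look like (config path and toml round-tripping assumed, not the actual implementation):

    import os
    from contextlib import contextmanager

    import toml

    @contextmanager
    def open_trade_ledger(broker: str, account: str):
        confdir = os.path.expanduser('~/.config/piker')
        path = os.path.join(
            confdir, 'ledgers', f'trades_{broker}_{account}.toml',
        )
        with open(path) as f:
            ledger: dict = toml.load(f)
        yield ledger
        # write back any entries the caller added
        with open(path, 'w') as f:
            toml.dump(ledger, f)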
goodboy b2d5892010
Merge pull request #342 from pikers/mxmn_from_m4
Mxmn from m4
2022-06-28 10:07:17 -04:00
goodboy 5a3b465ac0
Merge pull request #344 from pikers/310_plus
Go Python 3.10+ in anticipation of upcoming feature PRs
2022-06-28 10:04:45 -04:00
Tyler Goodlet be7afdaa89 Drop commented draft quotes-drain-loop code/idea 2022-06-28 09:43:49 -04:00
Tyler Goodlet 1c561207f5 Simplify `Flow.maxmin()` block logics 2022-06-28 09:43:49 -04:00
Tyler Goodlet ed2c962bb9 Add an idempotent, graphics-state startup flag
Add `ChartPlotWidget._on_screen: bool` which allows detecting for the
first state where there is y-range-able flow data loaded and able to be
drawn. Check for this flag to be set in `.maxmin()` such that until the
historical data is loaded `.default_view()` will be called to ensure
that a blank view is never shown: a race between the UI starting and the
data layer loading flow graphics can otherwise have this outcome.
2022-06-28 09:43:49 -04:00
Tyler Goodlet 147ceca016 Drop unneeded render filter idea 2022-06-28 09:43:49 -04:00
Tyler Goodlet 03a7940f83 Rewrite per-pi group mxmn sorter to always expect output 2022-06-27 18:24:09 -04:00
Tyler Goodlet dd2a9f74f1 Add todo around graphics loop vlm chart mxmn sort calls 2022-06-27 18:23:13 -04:00
Tyler Goodlet 49c720af3c Add commented prints for debugging 2022-06-27 18:22:51 -04:00
Tyler Goodlet c620517543 Set zeros for `Flow.maxmin() -> None` results 2022-06-27 18:22:30 -04:00
Tyler Goodlet a425c29ef1 Play with render skip logic on non-dark vlm crypto feeds 2022-06-27 13:59:08 -04:00
Tyler Goodlet 783914c7fe Better comment, use -inf as startup min 2022-06-27 13:59:08 -04:00
Tyler Goodlet 920a394539 Use new `anext()` builtin 2022-06-27 13:59:08 -04:00
Tyler Goodlet e977597cd0 Commented for doing incrementing when downsampled, but doesn't seem to work? 2022-06-27 13:59:08 -04:00
Tyler Goodlet 7a33ba64f1 Avoid crash due to race on chart instance ref during startup? 2022-06-27 13:59:08 -04:00
Tyler Goodlet 191b94b67c POC try using yrange mxmn from m4 when downsampling 2022-06-27 13:59:08 -04:00
Tyler Goodlet 4ad7b073c3 Proxy through input y-mx/mn from `xy_downsample()` 2022-06-27 13:59:08 -04:00
Tyler Goodlet d92ff9c7a0 Return input y-range min/max values from m4 2022-06-27 13:59:08 -04:00
109 changed files with 21082 additions and 9781 deletions

View File

@@ -3,9 +3,8 @@ name: CI

 on:
   # Triggers the workflow on push or pull request events but only for the master branch
-  push:
-    branches: [ master ]
   pull_request:
+  push:
     branches: [ master ]

   # Allows you to run this workflow manually from the Actions tab
@@ -14,6 +13,27 @@ on:

 jobs:

+  # test that we can generate a software distribution and install it
+  # thus avoid missing file issues after packaging.
+  sdist-linux:
+    name: 'sdist'
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v3
+      - name: Setup python
+        uses: actions/setup-python@v2
+        with:
+          python-version: '3.10'
+      - name: Build sdist
+        run: python setup.py sdist --formats=zip
+      - name: Install sdist from .zips
+        run: python -m pip install dist/*.zip
+
   testing:
     name: 'install + test-suite'
     runs-on: ubuntu-latest
@@ -22,13 +42,16 @@ jobs:
       - name: Checkout
         uses: actions/checkout@v3

+      - name: Build DB container
+        run: docker build -t piker:elastic dockering/elastic
+
       - name: Setup python
         uses: actions/setup-python@v3
         with:
           python-version: '3.10'

       - name: Install dependencies
-        run: pip install -U . -r requirements-test.txt -r requirements.txt --upgrade-strategy eager
+        run: pip install -U .[es] -r requirements-test.txt -r requirements.txt --upgrade-strategy eager

       - name: Test suite
         run: pytest tests -rs

View File

@@ -71,6 +71,19 @@ for a development install::
     source ./env/bin/activate
     pip install -r requirements.txt -e .

+install for nixos
+*****************
+for users of `NixOS` we offer a development shell environment that can be
+loaded with::
+
+    nix-shell develop.nix
+
+this will set up the required python environment to run piker; make sure
+to run::
+
+    pip install -r requirements.txt -e .
+
+once after loading the shell
+
 install for tinas
 *****************

View File

@@ -50,3 +50,8 @@ prefer_data_account = [
 paper = "XX0000000"
 margin = "X0000000"
 ira = "X0000000"
+
+
+[deribit]
+key_id = 'XXXXXXXX'
+key_secret = 'Xx_XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx'

develop.nix 100644
View File

@@ -0,0 +1,32 @@
with (import <nixpkgs> {});
with python310Packages;
stdenv.mkDerivation {
  name = "pip-env";

  buildInputs = [
    # System requirements.
    readline

    # Python requirements (enough to get a virtualenv going).
    python310Full
    virtualenv
    setuptools
    pyqt5
    pip
  ];

  src = null;

  shellHook = ''
    # Allow the use of wheels.
    SOURCE_DATE_EPOCH=$(date +%s)

    # Augment the dynamic linker path
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${R}/lib/R/lib:${readline}/lib

    export QT_QPA_PLATFORM_PLUGIN_PATH="${qt5.qtbase.bin}/lib/qt-${qt5.qtbase.version}/plugins";

    if [ ! -d "venv" ]; then
      virtualenv venv
    fi

    source venv/bin/activate
  '';
}

View File

@ -0,0 +1,11 @@
FROM elasticsearch:7.17.4
ENV ES_JAVA_OPTS "-Xms2g -Xmx2g"
ENV ELASTIC_USERNAME "elastic"
ENV ELASTIC_PASSWORD "password"
COPY elasticsearch.yml /usr/share/elasticsearch/config/
RUN printf "password" | ./bin/elasticsearch-keystore add -f -x "bootstrap.password"
EXPOSE 19200

View File

@ -0,0 +1,5 @@
network.host: 0.0.0.0
http.port: 19200
discovery.type: single-node

View File

@@ -3,11 +3,12 @@
 version: "3.5"

 services:
-  ib-gateway:
+  ib_gw_paper:
     # other image tags available:
     # https://github.com/waytrade/ib-gateway-docker#supported-tags
-    image: waytrade/ib-gateway:981.3j
-    restart: always
+    # image: waytrade/ib-gateway:981.3j
+    image: waytrade/ib-gateway:1012.2i
+    restart: 'no'  # restart on boot whenever there's a crash or user clicks
     network_mode: 'host'

     volumes:
@@ -39,14 +40,12 @@ services:
       # this compose file which looks something like:
       # TWS_USERID='myuser'
       # TWS_PASSWORD='guest'
-      # TRADING_MODE=paper (or live)
-      # VNC_SERVER_PASSWORD='diggity'
     environment:
       TWS_USERID: ${TWS_USERID}
       TWS_PASSWORD: ${TWS_PASSWORD}
-      TRADING_MODE: ${TRADING_MODE:-paper}
-      VNC_SERVER_PASSWORD: ${VNC_SERVER_PASSWORD:-}
+      TRADING_MODE: 'paper'
+      VNC_SERVER_PASSWORD: 'doggy'
+      VNC_SERVER_PORT: '3003'

     # ports:
     #   - target: 4002
@@ -62,3 +61,40 @@ services:
     #   - "127.0.0.1:4001:4001"
     #   - "127.0.0.1:4002:4002"
     #   - "127.0.0.1:5900:5900"
+
+  # ib_gw_live:
+  #   image: waytrade/ib-gateway:1012.2i
+  #   restart: no
+  #   network_mode: 'host'
+  #   volumes:
+  #     - type: bind
+  #       source: ./jts_live.ini
+  #       target: /root/jts/jts.ini
+  #       # don't let ibc clobber this file for
+  #       # the main reason of not having a stupid
+  #       # timezone set..
+  #       read_only: true
+  #     # force our own ibc config
+  #     - type: bind
+  #       source: ./ibc.ini
+  #       target: /root/ibc/config.ini
+  #     # force our noop script - socat isn't needed in host mode.
+  #     - type: bind
+  #       source: ./fork_ports_delayed.sh
+  #       target: /root/scripts/fork_ports_delayed.sh
+  #     # force our noop script - socat isn't needed in host mode.
+  #     - type: bind
+  #       source: ./run_x11_vnc.sh
+  #       target: /root/scripts/run_x11_vnc.sh
+  #       read_only: true
+  #   # NOTE: to fill these out, define an `.env` file in the same dir as
+  #   # this compose file which looks something like:
+  #   environment:
+  #     TRADING_MODE: 'live'
+  #     VNC_SERVER_PASSWORD: 'doggy'
+  #     VNC_SERVER_PORT: '3004'

View File

@@ -188,7 +188,7 @@ AcceptNonBrokerageAccountWarning=yes
 #
 # The default value is 60.

-LoginDialogDisplayTimeout = 60
+LoginDialogDisplayTimeout=20

@@ -292,7 +292,7 @@ ExistingSessionDetectedAction=primary
 # be set dynamically at run-time: most users will never need it,
 # so don't use it unless you know you need it.

-OverrideTwsApiPort=4002
+; OverrideTwsApiPort=4002

 # Read-only Login

View File

@ -0,0 +1,33 @@
[IBGateway]
ApiOnly=true
LocalServerPort=4001
# NOTE: must be set if using IBC's "reject" mode
TrustedIPs=127.0.0.1
; RemoteHostOrderRouting=ndc1.ibllc.com
; WriteDebug=true
; RemotePortOrderRouting=4001
; useRemoteSettings=false
; tradingMode=p
; Steps=8
; colorPalletName=dark
# window geo, this may be useful for sending `xdotool` commands?
; MainWindow.Width=1986
; screenHeight=3960
[Logon]
Locale=en
# most markets are oriented around this zone
# so might as well hard code it.
TimeZone=America/New_York
UseSSL=true
displayedproxymsg=1
os_titlebar=true
s3store=true
useRemoteSettings=false
[Communication]
ctciAutoEncrypt=true
Region=usr
; Peer=cdc1.ibllc.com:4001

View File

@@ -1,16 +1,35 @@
 #!/bin/sh
-# start VNC server
+# start vnc server and listen for connections
+# on port specced in `$VNC_SERVER_PORT`
 x11vnc \
-    -ncache_cr \
-    -listen localhost \
+    -listen 127.0.0.1 \
+    -allow 127.0.0.1 \
+    -rfbport "${VNC_SERVER_PORT}" \
     -display :1 \
     -forever \
     -shared \
-    -logappend /var/log/x11vnc.log \
     -bg \
+    -nowf \
+    -noxdamage \
+    -noxfixes \
+    -no6 \
     -noipv6 \
-    -autoport 3003 \
-    # can't use this because of ``asyncvnc`` issue:
+
+# can't use this because of ``asyncvnc`` issue:
+# -nowcr \
+# TODO: can't use this because of ``asyncvnc`` issue:
 # https://github.com/barneygale/asyncvnc/issues/1
 # -passwd 'ibcansmbz'
+
+# XXX: optional graphics caching flags that seem to rekt the overlay
+# of the 2 gw windows? When running a single gateway
+# this seems to maybe optimize some memory usage?
+# -ncache_cr \
+# -ncache \
+
+# NOTE: this will prevent logs from going to the console.
+# -logappend /var/log/x11vnc.log \
+
+# where to start allocating ports
+# -autoport "${VNC_SERVER_PORT}" \

View File

@@ -18,3 +18,10 @@
 piker: trading gear for hackers.

 """
+from ._daemon import open_piker_runtime
+from .data.feed import open_feed
+
+__all__ = [
+    'open_piker_runtime',
+    'open_feed',
+]
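Something like the following then works as a package-level quick start (a sketch assuming `open_feed()` takes a list of fqsn strings and yields a feed exposing a `.stream`):

    import trio
    import piker

    async def main():
        async with (
            piker.open_piker_runtime('quote_printer'),
            piker.open_feed(['btcusdt.binance']) as feed,
        ):
            async for quotes in feed.stream:
                print(quotes)
                break

    trio.run(main)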

View File

@ -18,45 +18,149 @@
Structured, daemon tree service management. Structured, daemon tree service management.
""" """
from typing import Optional, Union, Callable, Any from __future__ import annotations
from contextlib import asynccontextmanager as acm import os
from typing import (
Optional,
Callable,
Any,
ClassVar,
)
from contextlib import (
asynccontextmanager as acm,
)
from collections import defaultdict from collections import defaultdict
from pydantic import BaseModel import tractor
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor
from .log import get_logger, get_console_log from .log import (
get_logger,
get_console_log,
)
from .brokers import get_brokermod from .brokers import get_brokermod
from pprint import pformat
from functools import partial
log = get_logger(__name__) log = get_logger(__name__)
_root_dname = 'pikerd' _root_dname = 'pikerd'
_registry_addr = ('127.0.0.1', 6116) _default_registry_host: str = '127.0.0.1'
_tractor_kwargs: dict[str, Any] = { _default_registry_port: int = 6116
# use a different registry addr then tractor's default _default_reg_addr: tuple[str, int] = (
'arbiter_addr': _registry_addr _default_registry_host,
} _default_registry_port,
)
# NOTE: this value is set as an actor-global once the first endpoint
# who is capable, spawns a `pikerd` service tree.
_registry: Registry | None = None
class Registry:
addr: None | tuple[str, int] = None
# TODO: table of uids to sockaddrs
peers: dict[
tuple[str, str],
tuple[str, int],
] = {}
_tractor_kwargs: dict[str, Any] = {}
@acm
async def open_registry(
addr: None | tuple[str, int] = None,
ensure_exists: bool = True,
) -> tuple[str, int]:
global _tractor_kwargs
actor = tractor.current_actor()
uid = actor.uid
if (
Registry.addr is not None
and addr
):
raise RuntimeError(
f'`{uid}` registry addr already bound @ {_registry.sockaddr}'
)
was_set: bool = False
if (
not tractor.is_root_process()
and Registry.addr is None
):
Registry.addr = actor._arb_addr
if (
ensure_exists
and Registry.addr is None
):
raise RuntimeError(
f"`{uid}` registry should already exist bug doesn't?"
)
if (
Registry.addr is None
):
was_set = True
Registry.addr = addr or _default_reg_addr
_tractor_kwargs['arbiter_addr'] = Registry.addr
try:
yield Registry.addr
finally:
# XXX: always clear the global addr if we set it so that the
# next (set of) calls will apply whatever new one is passed
# in.
if was_set:
Registry.addr = None
def get_tractor_runtime_kwargs() -> dict[str, Any]:
'''
Deliver ``tractor`` related runtime variables in a `dict`.
'''
return _tractor_kwargs
_root_modules = [ _root_modules = [
__name__, __name__,
'piker.clearing._ems', 'piker.clearing._ems',
'piker.clearing._client', 'piker.clearing._client',
'piker.data._sampling',
] ]
class Services(BaseModel): # TODO: factor this into a ``tractor.highlevel`` extension
# pack for the library.
class Services:
actor_n: tractor._supervise.ActorNursery actor_n: tractor._supervise.ActorNursery
service_n: trio.Nursery service_n: trio.Nursery
debug_mode: bool # tractor sub-actor debug mode flag debug_mode: bool # tractor sub-actor debug mode flag
service_tasks: dict[str, tuple[trio.CancelScope, tractor.Portal]] = {} service_tasks: dict[
str,
class Config: tuple[
arbitrary_types_allowed = True trio.CancelScope,
tractor.Portal,
trio.Event,
]
] = {}
locks = defaultdict(trio.Lock)
@classmethod
async def start_service_task( async def start_service_task(
self, self,
name: str, name: str,
@ -75,7 +179,12 @@ class Services(BaseModel):
''' '''
async def open_context_in_task( async def open_context_in_task(
task_status: TaskStatus[ task_status: TaskStatus[
trio.CancelScope] = trio.TASK_STATUS_IGNORED, tuple[
trio.CancelScope,
trio.Event,
Any,
]
] = trio.TASK_STATUS_IGNORED,
) -> Any: ) -> Any:
@ -87,143 +196,220 @@ class Services(BaseModel):
) as (ctx, first): ) as (ctx, first):
# unblock once the remote context has started # unblock once the remote context has started
task_status.started((cs, first)) complete = trio.Event()
task_status.started((cs, complete, first))
log.info( log.info(
f'`pikerd` service {name} started with value {first}' f'`pikerd` service {name} started with value {first}'
) )
try: try:
# wait on any context's return value # wait on any context's return value
# and any final portal result from the
# sub-actor.
ctx_res = await ctx.result() ctx_res = await ctx.result()
except tractor.ContextCancelled:
return await self.cancel_service(name) # NOTE: blocks indefinitely until cancelled
else: # either by error from the target context
# wait on any error from the sub-actor # function or by being cancelled here by the
# NOTE: this will block indefinitely until # surrounding cancel scope.
# cancelled either by error from the target
# context function or by being cancelled here by
# the surrounding cancel scope
return (await portal.result(), ctx_res) return (await portal.result(), ctx_res)
cs, first = await self.service_n.start(open_context_in_task) finally:
await portal.cancel_actor()
complete.set()
self.service_tasks.pop(name)
cs, complete, first = await self.service_n.start(open_context_in_task)
# store the cancel scope and portal for later cancellation or # store the cancel scope and portal for later cancellation or
# restart if needed. # restart if needed.
self.service_tasks[name] = (cs, portal) self.service_tasks[name] = (cs, portal, complete)
return cs, first return cs, first
# TODO: per service cancellation by scope, we aren't using this @classmethod
# anywhere right?
async def cancel_service( async def cancel_service(
self, self,
name: str, name: str,
) -> Any: ) -> Any:
'''
Cancel the service task and actor for the given ``name``.
'''
log.info(f'Cancelling `pikerd` service {name}') log.info(f'Cancelling `pikerd` service {name}')
cs, portal = self.service_tasks[name] cs, portal, complete = self.service_tasks[name]
# XXX: not entirely sure why this is required,
# and should probably be better fine tuned in
# ``tractor``?
cs.cancel() cs.cancel()
return await portal.cancel_actor() await complete.wait()
assert name not in self.service_tasks, \
f'Service task for {name} not terminated?'
_services: Optional[Services] = None
@acm
async def open_pikerd(
start_method: str = 'trio',
loglevel: Optional[str] = None,
# XXX: you should pretty much never want debug mode
# for data daemons when running in production.
debug_mode: bool = False,
) -> Optional[tractor._portal.Portal]:
'''
Start a root piker daemon who's lifetime extends indefinitely
until cancelled.
A root actor nursery is created which can be used to create and keep
alive underling services (see below).
'''
global _services
assert _services is None
# XXX: this may open a root actor as well
async with (
tractor.open_root_actor(
# passed through to ``open_root_actor``
arbiter_addr=_registry_addr,
name=_root_dname,
loglevel=loglevel,
debug_mode=debug_mode,
start_method=start_method,
# TODO: eventually we should be able to avoid
# having the root have more then permissions to
# spawn other specialized daemons I think?
enable_modules=_root_modules,
) as _,
tractor.open_nursery() as actor_nursery,
):
async with trio.open_nursery() as service_nursery:
# # setup service mngr singleton instance
# async with AsyncExitStack() as stack:
# assign globally for future daemon/task creation
_services = Services(
actor_n=actor_nursery,
service_n=service_nursery,
debug_mode=debug_mode,
)
yield _services
@acm @acm
async def open_piker_runtime( async def open_piker_runtime(
name: str, name: str,
enable_modules: list[str] = [], enable_modules: list[str] = [],
start_method: str = 'trio',
loglevel: Optional[str] = None, loglevel: Optional[str] = None,
# XXX NOTE XXX: you should pretty much never want debug mode
# for data daemons when running in production.
debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
# TODO: once we have `rsyscall` support we will read a config
# and spawn the service tree distributed per that.
start_method: str = 'trio',
**tractor_kwargs,
) -> tuple[
tractor.Actor,
tuple[str, int],
]:
'''
Start a piker actor who's runtime will automatically sync with
existing piker actors on the local link based on configuration.
Can be called from a subactor or any program that needs to start
a root actor.
'''
try:
# check for existing runtime
actor = tractor.current_actor().uid
except tractor._exceptions.NoRuntime:
registry_addr = registry_addr or _default_reg_addr
async with (
tractor.open_root_actor(
# passed through to ``open_root_actor``
arbiter_addr=registry_addr,
name=name,
loglevel=loglevel,
debug_mode=debug_mode,
start_method=start_method,
# TODO: eventually we should be able to avoid
# having the root have more then permissions to
# spawn other specialized daemons I think?
enable_modules=enable_modules,
**tractor_kwargs,
) as _,
open_registry(registry_addr, ensure_exists=False) as addr,
):
yield (
tractor.current_actor(),
addr,
)
else:
async with open_registry(registry_addr) as addr:
yield (
actor,
addr,
)
@acm
async def open_pikerd(
loglevel: str | None = None,
# XXX: you should pretty much never want debug mode # XXX: you should pretty much never want debug mode
# for data daemons when running in production. # for data daemons when running in production.
debug_mode: bool = False, debug_mode: bool = False,
registry_addr: None | tuple[str, int] = None,
) -> Optional[tractor._portal.Portal]: # db init flags
tsdb: bool = False,
es: bool = False,
) -> Services:
''' '''
Start a piker actor who's runtime will automatically Start a root piker daemon who's lifetime extends indefinitely until
sync with existing piker actors in local network cancelled.
based on configuration.
A root actor nursery is created which can be used to create and keep
alive underling services (see below).
''' '''
global _services
assert _services is None
# XXX: this may open a root actor as well
async with ( async with (
tractor.open_root_actor( open_piker_runtime(
# passed through to ``open_root_actor``
arbiter_addr=_registry_addr,
name=name,
loglevel=loglevel,
debug_mode=debug_mode,
start_method=start_method,
name=_root_dname,
# TODO: eventually we should be able to avoid # TODO: eventually we should be able to avoid
# having the root have more then permissions to # having the root have more then permissions to
# spawn other specialized daemons I think? # spawn other specialized daemons I think?
enable_modules=_root_modules, enable_modules=_root_modules,
) as _,
loglevel=loglevel,
debug_mode=debug_mode,
registry_addr=registry_addr,
) as (root_actor, reg_addr),
tractor.open_nursery() as actor_nursery,
trio.open_nursery() as service_nursery,
): ):
yield tractor.current_actor() assert root_actor.accept_addr == reg_addr
if tsdb:
from piker.data._ahab import start_ahab
from piker.data.marketstore import start_marketstore
log.info('Spawning `marketstore` supervisor')
ctn_ready, config, (cid, pid) = await service_nursery.start(
start_ahab,
'marketstored',
start_marketstore,
)
log.info(
f'`marketstored` up!\n'
f'pid: {pid}\n'
f'container id: {cid[:12]}\n'
f'config: {pformat(config)}'
)
if es:
from piker.data._ahab import start_ahab
from piker.data.elastic import start_elasticsearch
log.info('Spawning `elasticsearch` supervisor')
ctn_ready, config, (cid, pid) = await service_nursery.start(
partial(
start_ahab,
'elasticsearch',
start_elasticsearch,
start_timeout=240.0 # high cause ci
)
)
log.info(
f'`elasticsearch` up!\n'
f'pid: {pid}\n'
f'container id: {cid[:12]}\n'
f'config: {pformat(config)}'
)
# assign globally for future daemon/task creation
Services.actor_n = actor_nursery
Services.service_n = service_nursery
Services.debug_mode = debug_mode
try:
yield Services
finally:
# TODO: is this more clever/efficient?
# if 'samplerd' in Services.service_tasks:
# await Services.cancel_service('samplerd')
service_nursery.cancel_scope.cancel()
@acm @acm
@ -232,61 +418,93 @@ async def maybe_open_runtime(
**kwargs, **kwargs,
) -> None: ) -> None:
""" '''
Start the ``tractor`` runtime (a root actor) if none exists. Start the ``tractor`` runtime (a root actor) if none exists.
""" '''
settings = _tractor_kwargs name = kwargs.pop('name')
settings.update(kwargs)
if not tractor.current_actor(err_on_no_runtime=False): if not tractor.current_actor(err_on_no_runtime=False):
async with tractor.open_root_actor( async with open_piker_runtime(
name,
loglevel=loglevel, loglevel=loglevel,
**settings, **kwargs,
): ) as (_, addr):
yield yield addr,
else: else:
yield async with open_registry() as addr:
yield addr
@acm @acm
async def maybe_open_pikerd( async def maybe_open_pikerd(
loglevel: Optional[str] = None, loglevel: Optional[str] = None,
registry_addr: None | tuple = None,
tsdb: bool = False,
es: bool = False,
**kwargs, **kwargs,
) -> Union[tractor._portal.Portal, Services]: ) -> tractor._portal.Portal | ClassVar[Services]:
"""If no ``pikerd`` daemon-root-actor can be found start it and '''
If no ``pikerd`` daemon-root-actor can be found start it and
yield up (we should probably figure out returning a portal to self yield up (we should probably figure out returning a portal to self
though). though).
""" '''
if loglevel: if loglevel:
get_console_log(loglevel) get_console_log(loglevel)
# subtle, we must have the runtime up here or portal lookup will fail # subtle, we must have the runtime up here or portal lookup will fail
async with maybe_open_runtime(loglevel, **kwargs): query_name = kwargs.pop('name', f'piker_query_{os.getpid()}')
async with tractor.find_actor(_root_dname) as portal: # TODO: if we need to make the query part faster we could not init
# assert portal is not None # an actor runtime and instead just hit the socket?
if portal is not None: # from tractor._ipc import _connect_chan, Channel
yield portal # async with _connect_chan(host, port) as chan:
return # async with open_portal(chan) as arb_portal:
# yield arb_portal
async with (
open_piker_runtime(
name=query_name,
registry_addr=registry_addr,
loglevel=loglevel,
**kwargs,
) as _,
tractor.find_actor(
_root_dname,
arbiter_sockaddr=registry_addr,
) as portal
):
# connect to any existing daemon presuming
# its registry socket was selected.
if (
portal is not None
):
yield portal
return
# presume pikerd role since no daemon could be found at # presume pikerd role since no daemon could be found at
# configured address # configured address
async with open_pikerd( async with open_pikerd(
loglevel=loglevel, loglevel=loglevel,
debug_mode=kwargs.get('debug_mode', False), debug_mode=kwargs.get('debug_mode', False),
registry_addr=registry_addr,
tsdb=tsdb,
es=es,
) as _: ) as service_manager:
# in the case where we're starting up the # in the case where we're starting up the
# tractor-piker runtime stack in **this** process # tractor-piker runtime stack in **this** process
# we return no portal to self. # we return no portal to self.
yield None assert service_manager
yield service_manager
# brokerd enabled modules # `brokerd` enabled modules
# NOTE: keeping this list as small as possible is part of our caps-sec
# model and should be treated with utmost care!
_data_mods = [ _data_mods = [
'piker.brokers.core', 'piker.brokers.core',
'piker.brokers.data', 'piker.brokers.data',
@ -296,37 +514,35 @@ _data_mods = [
] ]
class Brokerd:
locks = defaultdict(trio.Lock)
@acm @acm
async def find_service( async def find_service(
service_name: str, service_name: str,
) -> Optional[tractor.Portal]: ) -> tractor.Portal | None:
log.info(f'Scanning for service `{service_name}`') async with open_registry() as reg_addr:
# attach to existing daemon by name if possible log.info(f'Scanning for service `{service_name}`')
async with tractor.find_actor( # attach to existing daemon by name if possible
service_name, async with tractor.find_actor(
arbiter_sockaddr=_registry_addr, service_name,
) as maybe_portal: arbiter_sockaddr=reg_addr,
yield maybe_portal ) as maybe_portal:
yield maybe_portal
async def check_for_service( async def check_for_service(
service_name: str, service_name: str,
) -> bool: ) -> None | tuple[str, int]:
''' '''
Service daemon "liveness" predicate. Service daemon "liveness" predicate.
''' '''
async with tractor.query_actor( async with open_registry(ensure_exists=False) as reg_addr:
service_name, async with tractor.query_actor(
arbiter_sockaddr=_registry_addr, service_name,
) as sockaddr: arbiter_sockaddr=reg_addr,
return sockaddr ) as sockaddr:
return sockaddr
@acm @acm
@ -336,6 +552,8 @@ async def maybe_spawn_daemon(
service_task_target: Callable, service_task_target: Callable,
spawn_args: dict[str, Any], spawn_args: dict[str, Any],
loglevel: Optional[str] = None, loglevel: Optional[str] = None,
singleton: bool = False,
**kwargs, **kwargs,
) -> tractor.Portal: ) -> tractor.Portal:
@ -356,7 +574,7 @@ async def maybe_spawn_daemon(
# serialize access to this section to avoid # serialize access to this section to avoid
# 2 or more tasks racing to create a daemon # 2 or more tasks racing to create a daemon
lock = Brokerd.locks[service_name] lock = Services.locks[service_name]
await lock.acquire() await lock.acquire()
async with find_service(service_name) as portal: async with find_service(service_name) as portal:
@ -367,6 +585,9 @@ async def maybe_spawn_daemon(
log.warning(f"Couldn't find any existing {service_name}") log.warning(f"Couldn't find any existing {service_name}")
# TODO: really shouldn't the actor spawning be part of the service
# starting method `Services.start_service()` ?
# ask root ``pikerd`` daemon to spawn the daemon we need if # ask root ``pikerd`` daemon to spawn the daemon we need if
# pikerd is not live we now become the root of the # pikerd is not live we now become the root of the
# process tree # process tree
@ -377,15 +598,16 @@ async def maybe_spawn_daemon(
) as pikerd_portal: ) as pikerd_portal:
# we are the root and thus are `pikerd`
# so spawn the target service directly by calling
# the provided target routine.
# XXX: this assumes that the target is well formed and will
# do the right things to setup both a sub-actor **and** call
# the ``_Services`` api from above to start the top level
# service task for that actor.
started: bool
if pikerd_portal is None: if pikerd_portal is None:
# we are the root and thus are `pikerd` started = await service_task_target(**spawn_args)
# so spawn the target service directly by calling
# the provided target routine.
# XXX: this assumes that the target is well formed and will
# do the right things to setup both a sub-actor **and** call
# the ``_Services`` api from above to start the top level
# service task for that actor.
await service_task_target(**spawn_args)
else: else:
# tell the remote `pikerd` to start the target, # tell the remote `pikerd` to start the target,
@ -394,11 +616,14 @@ async def maybe_spawn_daemon(
# non-blocking and the target task will persist running # non-blocking and the target task will persist running
# on `pikerd` after the client requesting it's start # on `pikerd` after the client requesting it's start
# disconnects. # disconnects.
await pikerd_portal.run( started = await pikerd_portal.run(
service_task_target, service_task_target,
**spawn_args, **spawn_args,
) )
if started:
log.info(f'Service {service_name} started!')
async with tractor.wait_for_actor(service_name) as portal: async with tractor.wait_for_actor(service_name) as portal:
lock.release() lock.release()
yield portal yield portal
@ -421,9 +646,6 @@ async def spawn_brokerd(
extra_tractor_kwargs = getattr(brokermod, '_spawn_kwargs', {}) extra_tractor_kwargs = getattr(brokermod, '_spawn_kwargs', {})
tractor_kwargs.update(extra_tractor_kwargs) tractor_kwargs.update(extra_tractor_kwargs)
global _services
assert _services
# ask `pikerd` to spawn a new sub-actor and manage it under its # ask `pikerd` to spawn a new sub-actor and manage it under its
# actor nursery # actor nursery
modpath = brokermod.__name__ modpath = brokermod.__name__
@ -436,18 +658,18 @@ async def spawn_brokerd(
subpath = f'{modpath}.{submodname}' subpath = f'{modpath}.{submodname}'
broker_enable.append(subpath) broker_enable.append(subpath)
portal = await _services.actor_n.start_actor( portal = await Services.actor_n.start_actor(
dname, dname,
enable_modules=_data_mods + broker_enable, enable_modules=_data_mods + broker_enable,
loglevel=loglevel, loglevel=loglevel,
debug_mode=_services.debug_mode, debug_mode=Services.debug_mode,
**tractor_kwargs **tractor_kwargs
) )
# non-blocking setup of brokerd service nursery # non-blocking setup of brokerd service nursery
from .data import _setup_persistent_brokerd from .data import _setup_persistent_brokerd
await _services.start_service_task( await Services.start_service_task(
dname, dname,
portal, portal,
_setup_persistent_brokerd, _setup_persistent_brokerd,
@ -493,24 +715,21 @@ async def spawn_emsd(
""" """
log.info('Spawning emsd') log.info('Spawning emsd')
global _services portal = await Services.actor_n.start_actor(
assert _services
portal = await _services.actor_n.start_actor(
'emsd', 'emsd',
enable_modules=[ enable_modules=[
'piker.clearing._ems', 'piker.clearing._ems',
'piker.clearing._client', 'piker.clearing._client',
], ],
loglevel=loglevel, loglevel=loglevel,
debug_mode=_services.debug_mode, # set by pikerd flag debug_mode=Services.debug_mode, # set by pikerd flag
**extra_tractor_kwargs **extra_tractor_kwargs
) )
# non-blocking setup of clearing service # non-blocking setup of clearing service
from .clearing._ems import _setup_persistent_emsd from .clearing._ems import _setup_persistent_emsd
await _services.start_service_task( await Services.start_service_task(
'emsd', 'emsd',
portal, portal,
_setup_persistent_emsd, _setup_persistent_emsd,
@ -537,25 +756,3 @@ async def maybe_open_emsd(
) as portal: ) as portal:
yield portal yield portal
# TODO: ideally we can start the tsdb "on demand" but it's
# probably going to require "rootless" docker, at least if we don't
# want to expect the user to start ``pikerd`` with root perms all the
# time.
# async def maybe_open_marketstored(
# loglevel: Optional[str] = None,
# **kwargs,
# ) -> tractor._portal.Portal: # noqa
# async with maybe_spawn_daemon(
# 'marketstored',
# service_task_target=spawn_emsd,
# spawn_args={'loglevel': loglevel},
# loglevel=loglevel,
# **kwargs,
# ) as portal:
# yield portal
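End to end, booting the root daemon with the new db-init flags looks roughly like this (a sketch; flag semantics per the diff above):

    import trio
    from piker._daemon import open_pikerd

    async def main():
        async with open_pikerd(
            loglevel='info',
            tsdb=True,  # also supervise a `marketstored` container
            es=True,    # also supervise an `elasticsearch` container
        ) as services:
            # `Services` is yielded as a class-singleton; service
            # tasks can be started/cancelled by name from here.
            await trio.sleep_forever()

    trio.run(main)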

View File

@@ -18,7 +18,10 @@
 Profiling wrappers for internal libs.

 """
+import os
+import sys
 import time
+from time import perf_counter
 from functools import wraps

 # NOTE: you can pass a flag to enable this:
@@ -44,3 +47,193 @@ def timeit(fn):
         return res
     return wrapper
# Modified version of ``pyqtgraph.debug.Profiler`` that
# core seems hesitant to land in:
# https://github.com/pyqtgraph/pyqtgraph/pull/2281
class Profiler(object):
    '''
    Simple profiler allowing measurement of multiple time intervals.

    By default, profilers are disabled. To enable profiling, set the
    environment variable `PYQTGRAPHPROFILE` to a comma-separated list of
    fully-qualified names of profiled functions.

    Calling a profiler registers a message (defaulting to an increasing
    counter) that contains the time elapsed since the last call. When the
    profiler is about to be garbage-collected, the messages are passed to the
    outer profiler if one is running, or printed to stdout otherwise.

    If `delayed` is set to False, messages are immediately printed instead.

    Example:
        def function(...):
            profiler = Profiler()
            ... do stuff ...
            profiler('did stuff')
            ... do other stuff ...
            profiler('did other stuff')
            # profiler is garbage-collected and flushed at function end

    If this function is a method of class C, setting `PYQTGRAPHPROFILE` to
    "C.function" (without the module name) will enable this profiler.

    For regular functions, use the qualified name of the function, stripping
    only the initial "pyqtgraph." prefix from the module.

    '''
    _profilers = os.environ.get("PYQTGRAPHPROFILE", None)
    _profilers = _profilers.split(",") if _profilers is not None else []

    _depth = 0

    # NOTE: without this defined at the class level
    # you won't see appropriately "nested" sub-profiler
    # instance calls.
    _msgs = []

    # set this flag to disable all or individual profilers at runtime
    disable = False

    class DisabledProfiler(object):
        def __init__(self, *args, **kwds):
            pass

        def __call__(self, *args):
            pass

        def finish(self):
            pass

        def mark(self, msg=None):
            pass

    _disabledProfiler = DisabledProfiler()

    def __new__(
        cls,
        msg=None,
        disabled='env',
        delayed=True,
        ms_threshold: float = 0.0,
    ):
        """Optionally create a new profiler based on caller's qualname.

        ``ms_threshold`` can be set to value in ms for which, if the
        total measured time of the lifetime of this profiler is **less
        than** this value, then no profiling messages will be printed.
        Setting ``delayed=False`` disables this feature since messages
        are emitted immediately.

        """
        if (
            disabled is True
            or (
                disabled == 'env'
                and len(cls._profilers) == 0
            )
        ):
            return cls._disabledProfiler

        # determine the qualified name of the caller function
        caller_frame = sys._getframe(1)
        try:
            caller_object_type = type(caller_frame.f_locals["self"])

        except KeyError:  # we are in a regular function
            qualifier = caller_frame.f_globals["__name__"].split(".", 1)[-1]

        else:  # we are in a method
            qualifier = caller_object_type.__name__

        func_qualname = qualifier + "." + caller_frame.f_code.co_name

        if disabled == 'env' and func_qualname not in cls._profilers:
            # don't do anything
            return cls._disabledProfiler

        cls._depth += 1
        obj = super(Profiler, cls).__new__(cls)
        obj._msgs = []

        # create an actual profiling object
        if cls._depth < 1:
            cls._msgs = []

        obj._name = msg or func_qualname
        obj._delayed = delayed
        obj._markCount = 0
        obj._finished = False
        obj._firstTime = obj._lastTime = perf_counter()
        obj._mt = ms_threshold
        obj._newMsg("> Entering " + obj._name)
        return obj

    def __call__(self, msg=None):
        """Register or print a new message with timing information.
        """
        if self.disable:
            return

        if msg is None:
            msg = str(self._markCount)

        self._markCount += 1
        newTime = perf_counter()
        tot_ms = (newTime - self._firstTime) * 1000
        ms = (newTime - self._lastTime) * 1000
        self._newMsg(
            f"  {msg}: {ms:0.4f}, tot:{tot_ms:0.4f}"
        )
        self._lastTime = newTime

    def mark(self, msg=None):
        self(msg)

    def _newMsg(self, msg, *args):
        msg = "  " * (self._depth - 1) + msg
        if self._delayed:
            self._msgs.append((msg, args))
        else:
            print(msg % args)

    def __del__(self):
        self.finish()

    def finish(self, msg=None):
        """Add a final message; flush the message list if no parent profiler.
        """
        if self._finished or self.disable:
            return

        self._finished = True
        if msg is not None:
            self(msg)

        tot_ms = (perf_counter() - self._firstTime) * 1000
        self._newMsg(
            "< Exiting %s, total time: %0.4f ms",
            self._name,
            tot_ms,
        )

        if tot_ms < self._mt:
            # print(f'{tot_ms} < {self._mt}, clearing')
            # NOTE: this list **must** be an instance var to avoid
            # deleting common messages during GC I think?
            self._msgs.clear()
        # else:
        #     print(f'{tot_ms} > {self._mt}, not clearing')

        # XXX: why is this needed?
        # don't we **want to show** nested profiler messages?
        if self._msgs:  # and self._depth < 1:
            # if self._msgs:
            print("\n".join([m[0] % m[1] for m in self._msgs]))

            # clear all entries
            self._msgs.clear()
            # type(self)._msgs = []

        type(self)._depth -= 1
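Typical usage of the vendored profiler then looks like the following (module path assumed; enable it by exporting `PYQTGRAPHPROFILE` with the caller's qualname):

    # e.g. run with: PYQTGRAPHPROFILE=Renderer.render
    from piker._profile import Profiler  # assumed import path

    class Renderer:
        def render(self):
            profiler = Profiler(
                delayed=True,
                ms_threshold=4.0,  # drop all output if total < 4ms
            )
            # ... build graphics paths ...
            profiler('built paths')
            # ... draw ...
            profiler('drew frame')
            # messages flush when the profiler is gc'd or
            # `.finish()` is called explicitly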

View File

@@ -20,30 +20,41 @@ Broker clients, daemons and general back end machinery.
 from importlib import import_module
 from types import ModuleType

+# TODO: move to urllib3/requests once supported
+import asks
+asks.init('trio')
+
 __brokers__ = [
     'binance',
-    'questrade',
-    'robinhood',
     'ib',
     'kraken',
+
+    # broken but used to work
+    # 'questrade',
+    # 'robinhood',
+
+    # TODO: we should get on these stat!
+    # alpaca
+    # wstrade
+    # iex
+
+    # deribit
+    # kucoin
+    # bitso
 ]


 def get_brokermod(brokername: str) -> ModuleType:
-    """Return the imported broker module by name.
-    """
+    '''
+    Return the imported broker module by name.
+
+    '''
     module = import_module('.' + brokername, 'piker.brokers')
     # we only allow monkeying because it's for internal keying
     module.name = module.__name__.split('.')[-1]
     return module


 def iter_brokermods():
-    """Iterate all built-in broker modules.
-    """
+    '''
+    Iterate all built-in broker modules.
+
+    '''
     for name in __brokers__:
         yield get_brokermod(name)
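Usage is unchanged by the reshuffle; for example:

    from piker.brokers import get_brokermod, iter_brokermods

    kraken = get_brokermod('kraken')
    assert kraken.name == 'kraken'  # set by the "monkeying" above

    for mod in iter_brokermods():
        print(mod.name)  # binance, ib, kraken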

View File

@ -33,15 +33,23 @@ import asks
from fuzzywuzzy import process as fuzzy from fuzzywuzzy import process as fuzzy
import numpy as np import numpy as np
import tractor import tractor
from pydantic.dataclasses import dataclass
from pydantic import BaseModel
import wsproto import wsproto
from .._cacheables import open_cached_client from .._cacheables import open_cached_client
from ._util import resproc, SymbolNotFound from ._util import (
from ..log import get_logger, get_console_log resproc,
from ..data import ShmArray SymbolNotFound,
from ..data._web_bs import open_autorecon_ws, NoBsWs DataUnavailable,
)
from ..log import (
get_logger,
get_console_log,
)
from ..data.types import Struct
from ..data._web_bs import (
open_autorecon_ws,
NoBsWs,
)
log = get_logger(__name__) log = get_logger(__name__)
@ -79,12 +87,14 @@ _show_wap_in_history = False
# https://binance-docs.github.io/apidocs/spot/en/#exchange-information # https://binance-docs.github.io/apidocs/spot/en/#exchange-information
class Pair(BaseModel): class Pair(Struct, frozen=True):
symbol: str symbol: str
status: str status: str
baseAsset: str baseAsset: str
baseAssetPrecision: int baseAssetPrecision: int
cancelReplaceAllowed: bool
allowTrailingStop: bool
quoteAsset: str quoteAsset: str
quotePrecision: int quotePrecision: int
quoteAssetPrecision: int quoteAssetPrecision: int
@ -100,18 +110,21 @@ class Pair(BaseModel):
isSpotTradingAllowed: bool isSpotTradingAllowed: bool
isMarginTradingAllowed: bool isMarginTradingAllowed: bool
defaultSelfTradePreventionMode: str
allowedSelfTradePreventionModes: list[str]
filters: list[dict[str, Union[str, int, float]]] filters: list[dict[str, Union[str, int, float]]]
permissions: list[str] permissions: list[str]
@dataclass class OHLC(Struct):
class OHLC: '''
"""Description of the flattened OHLC quote format. Description of the flattened OHLC quote format.
For schema details see: For schema details see:
https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-streams https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-streams
""" '''
time: int time: int
open: float open: float
@ -134,7 +147,9 @@ class OHLC:
# convert datetime obj timestamp to unixtime in milliseconds # convert datetime obj timestamp to unixtime in milliseconds
def binance_timestamp(when): def binance_timestamp(
when: datetime
) -> int:
return int((when.timestamp() * 1000) + (when.microsecond / 1000)) return int((when.timestamp() * 1000) + (when.microsecond / 1000))
@ -173,7 +188,7 @@ class Client:
params = {} params = {}
if sym is not None: if sym is not None:
sym = sym.upper() sym = sym.lower()
params = {'symbol': sym} params = {'symbol': sym}
resp = await self._api( resp = await self._api(
@ -230,7 +245,7 @@ class Client:
) -> dict: ) -> dict:
if end_dt is None: if end_dt is None:
end_dt = pendulum.now('UTC') end_dt = pendulum.now('UTC').add(minutes=1)
if start_dt is None: if start_dt is None:
start_dt = end_dt.start_of( start_dt = end_dt.start_of(
@ -260,6 +275,7 @@ class Client:
for i, bar in enumerate(bars): for i, bar in enumerate(bars):
bar = OHLC(*bar) bar = OHLC(*bar)
bar.typecast()
row = [] row = []
for j, (name, ftype) in enumerate(_ohlc_dtype[1:]): for j, (name, ftype) in enumerate(_ohlc_dtype[1:]):
@ -287,7 +303,7 @@ async def get_client() -> Client:
# validation type # validation type
class AggTrade(BaseModel): class AggTrade(Struct):
e: str # Event type e: str # Event type
E: int # Event time E: int # Event time
s: str # Symbol s: str # Symbol
@ -341,7 +357,9 @@ async def stream_messages(ws: NoBsWs) -> AsyncGenerator[NoBsWs, dict]:
elif msg.get('e') == 'aggTrade': elif msg.get('e') == 'aggTrade':
# validate # NOTE: this is purely for a definition, ``msgspec.Struct``
# does not runtime-validate until you decode/encode.
# see: https://jcristharif.com/msgspec/structs.html#type-validation
msg = AggTrade(**msg) msg = AggTrade(**msg)
# TODO: type out and require this quote format # TODO: type out and require this quote format
@ -352,8 +370,8 @@ async def stream_messages(ws: NoBsWs) -> AsyncGenerator[NoBsWs, dict]:
'brokerd_ts': time.time(), 'brokerd_ts': time.time(),
'ticks': [{ 'ticks': [{
'type': 'trade', 'type': 'trade',
'price': msg.p, 'price': float(msg.p),
'size': msg.q, 'size': float(msg.q),
'broker_ts': msg.T, 'broker_ts': msg.T,
}], }],
} }
@ -384,41 +402,39 @@ async def open_history_client(
async with open_cached_client('binance') as client: async with open_cached_client('binance') as client:
async def get_ohlc( async def get_ohlc(
end_dt: Optional[datetime] = None, timeframe: float,
start_dt: Optional[datetime] = None, end_dt: datetime | None = None,
start_dt: datetime | None = None,
) -> tuple[ ) -> tuple[
np.ndarray, np.ndarray,
datetime, # start datetime, # start
datetime, # end datetime, # end
]: ]:
if timeframe != 60:
raise DataUnavailable('Only 1m bars are supported')
array = await client.bars( array = await client.bars(
symbol, symbol,
start_dt=start_dt, start_dt=start_dt,
end_dt=end_dt, end_dt=end_dt,
) )
start_dt = pendulum.from_timestamp(array[0]['time']) times = array['time']
end_dt = pendulum.from_timestamp(array[-1]['time']) if (
end_dt is None
):
inow = round(time.time())
if (inow - times[-1]) > 60:
await tractor.breakpoint()
start_dt = pendulum.from_timestamp(times[0])
end_dt = pendulum.from_timestamp(times[-1])
return array, start_dt, end_dt return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3} yield get_ohlc, {'erlangs': 3, 'rate': 3}
async def backfill_bars(
sym: str,
shm: ShmArray, # type: ignore # noqa
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED,
) -> None:
"""Fill historical bars into shared mem / storage afap.
"""
with trio.CancelScope() as cs:
async with open_cached_client('binance') as client:
bars = await client.bars(symbol=sym)
shm.push(bars)
task_status.started(cs)
async def stream_quotes( async def stream_quotes(
send_chan: trio.abc.SendChannel, send_chan: trio.abc.SendChannel,
@ -448,12 +464,20 @@ async def stream_quotes(
d = cache[sym.upper()] d = cache[sym.upper()]
syminfo = Pair(**d) # validation syminfo = Pair(**d) # validation
si = sym_infos[sym] = syminfo.dict() si = sym_infos[sym] = syminfo.to_dict()
filters = {}
for entry in syminfo.filters:
ftype = entry['filterType']
filters[ftype] = entry
# XXX: after manually inspecting the response format we # XXX: after manually inspecting the response format we
# just directly pick out the info we need # just directly pick out the info we need
si['price_tick_size'] = float(syminfo.filters[0]['tickSize']) si['price_tick_size'] = float(
si['lot_tick_size'] = float(syminfo.filters[2]['stepSize']) filters['PRICE_FILTER']['tickSize']
)
si['lot_tick_size'] = float(
filters['LOT_SIZE']['stepSize']
)
si['asset_type'] = 'crypto' si['asset_type'] = 'crypto'
symbol = symbols[0] symbol = symbols[0]
@ -495,14 +519,15 @@ async def stream_quotes(
subs.append("{sym}@bookTicker") subs.append("{sym}@bookTicker")
# unsub from all pairs on teardown # unsub from all pairs on teardown
await ws.send_msg({ if ws.connected():
"method": "UNSUBSCRIBE", await ws.send_msg({
"params": subs, "method": "UNSUBSCRIBE",
"id": uid, "params": subs,
}) "id": uid,
})
# XXX: do we need to ack the unsub? # XXX: do we need to ack the unsub?
# await ws.recv_msg() # await ws.recv_msg()
async with open_autorecon_ws( async with open_autorecon_ws(
'wss://stream.binance.com/ws', 'wss://stream.binance.com/ws',

View File

@ -39,6 +39,148 @@ _config_dir = click.get_app_dir('piker')
_watchlists_data_path = os.path.join(_config_dir, 'watchlists.json') _watchlists_data_path = os.path.join(_config_dir, 'watchlists.json')
OK = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
def print_ok(s: str, **kwargs):
print(OK + s + ENDC, **kwargs)
def print_error(s: str, **kwargs):
print(FAIL + s + ENDC, **kwargs)
def get_method(client, meth_name: str):
print(f'checking client for method \'{meth_name}\'...', end='', flush=True)
method = getattr(client, meth_name, None)
assert method
print_ok('found!.')
return method
async def run_method(client, meth_name: str, **kwargs):
method = get_method(client, meth_name)
print('running...', end='', flush=True)
result = await method(**kwargs)
print_ok(f'done! result: {type(result)}')
return result
async def run_test(broker_name: str):
brokermod = get_brokermod(broker_name)
total = 0
passed = 0
failed = 0
print(f'getting client...', end='', flush=True)
if not hasattr(brokermod, 'get_client'):
print_error('fail! no \'get_client\' context manager found.')
return
async with brokermod.get_client(is_brokercheck=True) as client:
print_ok(f'done! inside client context.')
# check for methods present on brokermod
method_list = [
'backfill_bars',
'get_client',
'trades_dialogue',
'open_history_client',
'open_symbol_search',
'stream_quotes',
]
for method in method_list:
print(
f'checking brokermod for method \'{method}\'...',
end='', flush=True)
if not hasattr(brokermod, method):
print_error(f'fail! method \'{method}\' not found.')
failed += 1
else:
print_ok('done!')
passed += 1
total += 1
# check for methods present on brokermod.Client and their
# results
# for private methods only check is present
method_list = [
'get_balances',
'get_assets',
'get_trades',
'get_xfers',
'submit_limit',
'submit_cancel',
'search_symbols',
]
for method_name in method_list:
try:
get_method(client, method_name)
passed += 1
except AssertionError:
print_error(f'fail! method \'{method_name}\' not found.')
failed += 1
total += 1
# check for methods present on brokermod.Client and their
# results
syms = await run_method(client, 'symbol_info')
total += 1
if len(syms) == 0:
raise BaseException('Empty Symbol list?')
passed += 1
first_sym = tuple(syms.keys())[0]
method_list = [
('cache_symbols', {}),
('search_symbols', {'pattern': first_sym[:-1]}),
('bars', {'symbol': first_sym})
]
for method_name, method_kwargs in method_list:
try:
await run_method(client, method_name, **method_kwargs)
passed += 1
except AssertionError:
print_error(f'fail! method \'{method_name}\' not found.')
failed += 1
total += 1
print(f'total: {total}, passed: {passed}, failed: {failed}')
@cli.command()
@click.argument('broker', nargs=1, required=True)
@click.pass_obj
def brokercheck(config, broker):
'''
Test broker apis for completeness.
'''
async def bcheck_main():
async with maybe_spawn_brokerd(broker) as portal:
await portal.run(run_test, broker)
await portal.cancel_actor()
trio.run(run_test, broker)
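With that wired up a backend can be smoke-tested straight from the shell, e.g. `piker brokercheck kraken`.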
@cli.command() @cli.command()
@click.option('--keys', '-k', multiple=True, @click.option('--keys', '-k', multiple=True,
help='Return results only for these keys') help='Return results only for these keys')
@ -193,6 +335,8 @@ def contracts(ctx, loglevel, broker, symbol, ids):
brokermod = get_brokermod(broker) brokermod = get_brokermod(broker)
get_console_log(loglevel) get_console_log(loglevel)
contracts = trio.run(partial(core.contracts, brokermod, symbol)) contracts = trio.run(partial(core.contracts, brokermod, symbol))
if not ids: if not ids:
# just print out expiry dates which can be used with # just print out expiry dates which can be used with

View File

@@ -227,26 +227,28 @@ async def get_cached_feed(
 @tractor.stream
 async def start_quote_stream(
-    ctx: tractor.Context,  # marks this as a streaming func
+    stream: tractor.Context,  # marks this as a streaming func
     broker: str,
     symbols: List[Any],
     feed_type: str = 'stock',
     rate: int = 3,
 ) -> None:
-    """Handle per-broker quote stream subscriptions using a "lazy" pub-sub
+    '''
+    Handle per-broker quote stream subscriptions using a "lazy" pub-sub
     pattern.

     Spawns new quoter tasks for each broker backend on-demand.
     Since most brokers seem to support batch quote requests we
     limit to one task per process (for now).
-    """
+
+    '''
     # XXX: why do we need this again?
     get_console_log(tractor.current_actor().loglevel)

     # pull global vars from local actor
     symbols = list(symbols)
     log.info(
-        f"{ctx.chan.uid} subscribed to {broker} for symbols {symbols}")
+        f"{stream.chan.uid} subscribed to {broker} for symbols {symbols}")
     # another actor task may have already created it
     async with get_cached_feed(broker) as feed:

@@ -290,13 +292,13 @@ async def start_quote_stream(
                 assert fquote['displayable']
                 payload[sym] = fquote

-            await ctx.send_yield(payload)
+            await stream.send_yield(payload)

         await stream_poll_requests(

             # ``trionics.msgpub`` required kwargs
             task_name=feed_type,
-            ctx=ctx,
+            ctx=stream,
             topics=symbols,
             packetizer=feed.mod.packetizer,

@@ -319,9 +321,11 @@ async def call_client(

 class DataFeed:
-    """Data feed client for streaming symbol data from and making API client calls
-    to a (remote) ``brokerd`` daemon.
-    """
+    '''
+    Data feed client for streaming symbol data from and making API
+    client calls to a (remote) ``brokerd`` daemon.
+
+    '''
     _allowed = ('stock', 'option')

     def __init__(self, portal, brokermod):

View File

@ -0,0 +1,70 @@
``deribit`` backend
-------------------
Pretty good liquidity crypto derivatives exchange; uses a custom JSON-RPC
over websocket protocol for client methods, then `cryptofeed` for data
streams.
status
******
- supports option charts
- no order support yet
config
******
In order to get order mode support your ``brokers.toml``
needs to have something like the following:
.. code:: toml
[deribit]
key_id = 'XXXXXXXX'
key_secret = 'Xx_XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx'
To obtain an api id and secret you need to create an account, which can be a
real market account over at:
- deribit.com (requires KYC for deposit address)
Or a testnet account over at:
- test.deribit.com
For testnet once the account is created here is how you deposit fake crypto to
try it out:
1) Go to Wallet:
.. figure:: assets/0_wallet.png
:align: center
:target: assets/0_wallet.png
:alt: wallet page
2) Then click on the ellipsis menu and select deposit
.. figure:: assets/1_wallet_select_deposit.png
:align: center
:target: assets/1_wallet_select_deposit.png
:alt: wallet deposit page
3) This will take you to the deposit address page
.. figure:: assets/2_gen_deposit_addr.png
:align: center
:target: assets/2_gen_deposit_addr.png
:alt: generate deposit address page
4) After clicking generate you should see the address, copy it and go to the
`coin faucet <https://test.deribit.com/dericoin/BTC/deposit>`_ and send fake
coins to that address.
.. figure:: assets/3_deposit_address.png
:align: center
:target: assets/3_deposit_address.png
:alt: generated address
5) Back in the deposit address page you should see the deposit in your history
.. figure:: assets/4_wallet_deposit_history.png
:align: center
:target: assets/4_wallet_deposit_history.png
:alt: wallet deposit history


@ -0,0 +1,65 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Deribit backend.
'''
from piker.log import get_logger
log = get_logger(__name__)
from .api import (
get_client,
)
from .feed import (
open_history_client,
open_symbol_search,
stream_quotes,
backfill_bars
)
# from .broker import (
# trades_dialogue,
# norm_trade_records,
# )
__all__ = [
'get_client',
# 'trades_dialogue',
'open_history_client',
'open_symbol_search',
'stream_quotes',
# 'norm_trade_records',
]
# tractor RPC enable arg
__enable_modules__: list[str] = [
'api',
'feed',
# 'broker',
]
# passed to ``tractor.ActorNursery.start_actor()``
_spawn_kwargs = {
'infect_asyncio': True,
}
# annotation to let backend agnostic code
# know if ``brokerd`` should be spawned with
# ``tractor``'s aio mode.
_infect_asyncio: bool = True


@ -0,0 +1,672 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Deribit backend.
'''
import json
import time
import asyncio
from contextlib import asynccontextmanager as acm, AsyncExitStack
from functools import partial
from datetime import datetime
from typing import Any, Optional, Iterable, Callable
import pendulum
import asks
import trio
from trio_typing import Nursery, TaskStatus
from fuzzywuzzy import process as fuzzy
import numpy as np
from piker.data.types import Struct
from piker.data._web_bs import (
NoBsWs,
open_autorecon_ws,
open_jsonrpc_session
)
from .._util import resproc
from piker import config
from piker.log import get_logger
from tractor.trionics import (
broadcast_receiver,
BroadcastReceiver,
maybe_open_context
)
from tractor import to_asyncio
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT,
L1_BOOK, TRADES,
OPTION, CALL, PUT,
FILLS, ORDER_INFO
)
from cryptofeed.symbols import Symbol
log = get_logger(__name__)
_spawn_kwargs = {
'infect_asyncio': True,
}
_url = 'https://www.deribit.com'
_ws_url = 'wss://www.deribit.com/ws/api/v2'
_testnet_ws_url = 'wss://test.deribit.com/ws/api/v2'
# Broker specific ohlc schema (rest)
_ohlc_dtype = [
('index', int),
('time', int),
('open', float),
('high', float),
('low', float),
('close', float),
('volume', float),
('bar_wap', float), # will be zeroed by sampler if not filled
]
class JSONRPCResult(Struct):
# NOTE: required (non-defaulted) fields must precede defaulted
# ones in a ``Struct`` definition.
id: int
usIn: int
usOut: int
usDiff: int
testnet: bool
jsonrpc: str = '2.0'
result: Optional[dict] = None
error: Optional[dict] = None
class JSONRPCChannel(Struct):
jsonrpc: str = '2.0'
method: str
params: dict
class KLinesResult(Struct):
close: list[float]
cost: list[float]
high: list[float]
low: list[float]
open: list[float]
status: str
ticks: list[int]
volume: list[float]
class Trade(Struct):
trade_seq: int
trade_id: str
timestamp: int
tick_direction: int
price: float
mark_price: float
iv: float
instrument_name: str
index_price: float
direction: str
amount: float
# NOTE: defaulted fields go last; the original trailing commas
# (which silently made these defaults tuples) are dropped.
combo_trade_id: Optional[int] = 0
combo_id: Optional[str] = ''
class LastTradesResult(Struct):
trades: list[Trade]
has_more: bool
# convert datetime obj timestamp to unixtime in milliseconds
def deribit_timestamp(when):
# NOTE: ``.timestamp()`` already includes the fractional
# (microsecond) part so adding it again would double count it.
return int(when.timestamp() * 1000)
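# NOTE: illustrative example (not from the original source): a
# pendulum datetime of 2023-01-13T12:00:30 UTC maps to the epoch-ms
# value 1673611230000.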
def str_to_cb_sym(name: str) -> Symbol:
base, strike_price, expiry_date, option_type = name.split('-')
quote = base
if option_type == 'put':
option_type = PUT
elif option_type == 'call':
option_type = CALL
else:
raise Exception("Couldn\'t parse option type")
return Symbol(
base, quote,
type=OPTION,
strike_price=strike_price,
option_type=option_type,
expiry_date=expiry_date,
expiry_normalize=False)
def piker_sym_to_cb_sym(name: str) -> Symbol:
base, expiry_date, strike_price, option_type = tuple(
name.upper().split('-'))
quote = base
if option_type == 'P':
option_type = PUT
elif option_type == 'C':
option_type = CALL
else:
raise Exception("Couldn\'t parse option type")
return Symbol(
base, quote,
type=OPTION,
strike_price=strike_price,
option_type=option_type,
expiry_date=expiry_date.upper())
def cb_sym_to_deribit_inst(sym: Symbol):
# cryptofeed normalized
cb_norm = ['F', 'G', 'H', 'J', 'K', 'M', 'N', 'Q', 'U', 'V', 'X', 'Z']
# deribit specific
months = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP', 'OCT', 'NOV', 'DEC']
exp = sym.expiry_date
# YYMDD
# 01234
year, month, day = (
exp[:2], months[cb_norm.index(exp[2:3])], exp[3:])
otype = 'C' if sym.option_type == CALL else 'P'
return f'{sym.base}-{day}{month}{year}-{sym.strike_price}-{otype}'
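# NOTE: illustrative only (not from the original source): cryptofeed
# normalizes expiries to 'YYMDD' using futures-style month codes
# ('F' = JAN .. 'Z' = DEC) while deribit instrument names use a
# 'DDMMMYY' layout, so eg. an expiry of '23F13' on a BTC call struck
# at 16500 maps back to the instrument name 'BTC-13JAN23-16500-C'.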
def get_config() -> dict[str, Any]:
conf, path = config.load()
section = conf.get('deribit')
# TODO: document why we send this; it sets logging params for cryptofeed
conf['log'] = {}
conf['log']['disabled'] = True
if section is None:
log.warning(f'No config section found for deribit in {path}')
return conf
class Client:
def __init__(self, json_rpc: Callable) -> None:
self._pairs: Optional[dict[str, Any]] = None
config = get_config().get('deribit', {})
if ('key_id' in config) and ('key_secret' in config):
self._key_id = config['key_id']
self._key_secret = config['key_secret']
else:
self._key_id = None
self._key_secret = None
self.json_rpc = json_rpc
@property
def currencies(self):
return ['btc', 'eth', 'sol', 'usd']
async def get_balances(self, kind: str = 'option') -> dict[str, float]:
"""Return the set of positions for this account
by symbol.
"""
balances = {}
for currency in self.currencies:
resp = await self.json_rpc(
'private/get_positions', params={
'currency': currency.upper(),
'kind': kind})
balances[currency] = resp.result
return balances
async def get_assets(self) -> dict[str, float]:
"""Return the set of asset balances for this account
by symbol.
"""
balances = {}
for currency in self.currencies:
resp = await self.json_rpc(
'private/get_account_summary', params={
'currency': currency.upper()})
balances[currency] = resp.result['balance']
return balances
async def submit_limit(
self,
symbol: str,
price: float,
action: str,
size: float
) -> dict:
"""Place an order
"""
params = {
'instrument_name': symbol.upper(),
'amount': size,
'type': 'limit',
'price': price,
}
resp = await self.json_rpc(
f'private/{action}', params)
return resp.result
async def submit_cancel(self, oid: str):
"""Send cancel request for order id
"""
resp = await self.json_rpc(
'private/cancel', {'order_id': oid})
return resp.result
async def symbol_info(
self,
instrument: Optional[str] = None,
currency: str = 'btc', # BTC, ETH, SOL, USDC
kind: str = 'option',
expired: bool = False
) -> dict[str, Any]:
"""Get symbol info for the exchange.
"""
if self._pairs:
return self._pairs
# will retrieve all symbols by default
params = {
'currency': currency.upper(),
'kind': kind,
'expired': str(expired).lower()
}
resp = await self.json_rpc('public/get_instruments', params)
results = resp.result
instruments = {
item['instrument_name'].lower(): item
for item in results
}
if instrument is not None:
return instruments[instrument]
else:
return instruments
async def cache_symbols(
self,
) -> dict:
if not self._pairs:
self._pairs = await self.symbol_info()
return self._pairs
async def search_symbols(
self,
pattern: str,
limit: int = 30,
) -> dict[str, Any]:
data = await self.symbol_info()
matches = fuzzy.extractBests(
pattern,
data,
score_cutoff=35,
limit=limit
)
# repack in dict form
return {item[0]['instrument_name'].lower(): item[0]
for item in matches}
async def bars(
self,
symbol: str,
start_dt: Optional[datetime] = None,
end_dt: Optional[datetime] = None,
limit: int = 1000,
as_np: bool = True,
) -> dict:
instrument = symbol
if end_dt is None:
end_dt = pendulum.now('UTC')
if start_dt is None:
start_dt = end_dt.start_of(
'minute').subtract(minutes=limit)
start_time = deribit_timestamp(start_dt)
end_time = deribit_timestamp(end_dt)
# https://docs.deribit.com/#public-get_tradingview_chart_data
resp = await self.json_rpc(
'public/get_tradingview_chart_data',
params={
'instrument_name': instrument.upper(),
'start_timestamp': start_time,
'end_timestamp': end_time,
'resolution': '1'
})
result = KLinesResult(**resp.result)
new_bars = []
for i in range(len(result.close)):
row = [
(start_time + (i * (60 * 1000))) / 1000.0,  # time
result.open[i],
result.high[i],
result.low[i],
result.close[i],
result.volume[i],
0
]
new_bars.append((i,) + tuple(row))
# NOTE: fall back to the plain row list; the original referenced an
# undefined ``klines`` name here.
array = np.array(new_bars, dtype=_ohlc_dtype) if as_np else new_bars
return array
async def last_trades(
self,
instrument: str,
count: int = 10
):
resp = await self.json_rpc(
'public/get_last_trades_by_instrument',
params={
'instrument_name': instrument,
'count': count
})
return LastTradesResult(**resp.result)
@acm
async def get_client(
is_brokercheck: bool = False
) -> Client:
async with (
trio.open_nursery() as n,
open_jsonrpc_session(
_testnet_ws_url, dtype=JSONRPCResult) as json_rpc
):
client = Client(json_rpc)
_refresh_token: Optional[str] = None
_access_token: Optional[str] = None
async def _auth_loop(
task_status: TaskStatus = trio.TASK_STATUS_IGNORED
):
"""Background task that adquires a first access token and then will
refresh the access token while the nursery isn't cancelled.
https://docs.deribit.com/?python#authentication-2
"""
renew_time = 10
access_scope = 'trade:read_write'
_expiry_time = time.time()
got_access = False
nonlocal _refresh_token
nonlocal _access_token
while True:
# NOTE: the original comparison here was inverted which made
# this branch run (and re-auth) on every loop iteration.
if _expiry_time - time.time() < renew_time:
# we are close to token expiry time
if _refresh_token is not None:
# we already have a refresh token so no need to send
# the secret
params = {
'grant_type': 'refresh_token',
'refresh_token': _refresh_token,
'scope': access_scope
}
else:
# we don't have refresh token, send secret to initialize
params = {
'grant_type': 'client_credentials',
'client_id': client._key_id,
'client_secret': client._key_secret,
'scope': access_scope
}
resp = await json_rpc('public/auth', params)
result = resp.result
_expiry_time = time.time() + result['expires_in']
_refresh_token = result['refresh_token']
if 'access_token' in result:
_access_token = result['access_token']
if not got_access:
# first time this loop runs we must indicate task is
# started, we have auth
got_access = True
task_status.started()
else:
await trio.sleep(renew_time / 2)
# if we have client creds launch auth loop
if client._key_id is not None:
await n.start(_auth_loop)
await client.cache_symbols()
yield client
n.cancel_scope.cancel()
@acm
async def open_feed_handler():
fh = FeedHandler(config=get_config())
yield fh
await to_asyncio.run_task(fh.stop_async)
@acm
async def maybe_open_feed_handler() -> trio.abc.ReceiveStream:
async with maybe_open_context(
acm_func=open_feed_handler,
key='feedhandler',
) as (cache_hit, fh):
yield fh
async def aio_price_feed_relay(
fh: FeedHandler,
instrument: Symbol,
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
) -> None:
async def _trade(data: dict, receipt_timestamp):
to_trio.send_nowait(('trade', {
'symbol': cb_sym_to_deribit_inst(
str_to_cb_sym(data.symbol)).lower(),
'last': data,
'broker_ts': time.time(),
'data': data.to_dict(),
'receipt': receipt_timestamp
}))
async def _l1(data: dict, receipt_timestamp):
to_trio.send_nowait(('l1', {
'symbol': cb_sym_to_deribit_inst(
str_to_cb_sym(data.symbol)).lower(),
'ticks': [
{'type': 'bid',
'price': float(data.bid_price), 'size': float(data.bid_size)},
{'type': 'bsize',
'price': float(data.bid_price), 'size': float(data.bid_size)},
{'type': 'ask',
'price': float(data.ask_price), 'size': float(data.ask_size)},
{'type': 'asize',
'price': float(data.ask_price), 'size': float(data.ask_size)}
]
}))
fh.add_feed(
DERIBIT,
channels=[TRADES, L1_BOOK],
symbols=[piker_sym_to_cb_sym(instrument)],
callbacks={
TRADES: _trade,
L1_BOOK: _l1
})
if not fh.running:
fh.run(
start_loop=False,
install_signal_handlers=False)
# sync with trio
to_trio.send_nowait(None)
await asyncio.sleep(float('inf'))
@acm
async def open_price_feed(
instrument: str
) -> trio.abc.ReceiveStream:
async with maybe_open_feed_handler() as fh:
async with to_asyncio.open_channel_from(
partial(
aio_price_feed_relay,
fh,
instrument
)
) as (first, chan):
yield chan
@acm
async def maybe_open_price_feed(
instrument: str
) -> trio.abc.ReceiveStream:
# TODO: add a predicate to maybe_open_context
async with maybe_open_context(
acm_func=open_price_feed,
kwargs={
'instrument': instrument
},
key=f'{instrument}-price',
) as (cache_hit, feed):
if cache_hit:
yield broadcast_receiver(feed, 10)
else:
yield feed
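# NOTE: illustrative only (not from the original source): a second
# consumer task entering ``maybe_open_price_feed()`` for the same
# instrument hits the actor-local cache and receives a
# ``broadcast_receiver()`` clone of the already-running relay's
# channel instead of spawning a second ``cryptofeed`` task.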
async def aio_order_feed_relay(
fh: FeedHandler,
instrument: Symbol,
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
) -> None:
async def _fill(data: dict, receipt_timestamp):
# TODO: fill relay not yet implemented
breakpoint()
async def _order_info(data: dict, receipt_timestamp):
# TODO: order status relay not yet implemented
breakpoint()
fh.add_feed(
DERIBIT,
channels=[FILLS, ORDER_INFO],
symbols=[instrument.upper()],
callbacks={
FILLS: _fill,
ORDER_INFO: _order_info,
})
if not fh.running:
fh.run(
start_loop=False,
install_signal_handlers=False)
# sync with trio
to_trio.send_nowait(None)
await asyncio.sleep(float('inf'))
@acm
async def open_order_feed(
instrument: str
) -> trio.abc.ReceiveStream:
async with maybe_open_feed_handler() as fh:
async with to_asyncio.open_channel_from(
partial(
aio_order_feed_relay,
fh,
instrument
)
) as (first, chan):
yield chan
@acm
async def maybe_open_order_feed(
instrument: str
) -> trio.abc.ReceiveStream:
# TODO: add a predicate to maybe_open_context
async with maybe_open_context(
acm_func=open_order_feed,
kwargs={
'instrument': instrument
},
key=f'{instrument}-order',
) as (cache_hit, feed):
if cache_hit:
yield broadcast_receiver(feed, 10)
else:
yield feed

(binary diffs not shown: five new screenshot assets (59-169 KiB each)
referenced by the deribit walkthrough above: 0_wallet.png,
1_wallet_select_deposit.png, 2_gen_deposit_addr.png,
3_deposit_address.png, 4_wallet_deposit_history.png)


@ -0,0 +1,185 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Deribit backend.
'''
from contextlib import asynccontextmanager as acm
from datetime import datetime
from typing import Any, Optional, Callable
import time
import trio
from trio_typing import TaskStatus
import pendulum
from fuzzywuzzy import process as fuzzy
import numpy as np
import tractor
from piker._cacheables import open_cached_client
from piker.log import get_logger, get_console_log
from piker.data import ShmArray
from piker.brokers._util import (
BrokerError,
DataUnavailable,
)
from cryptofeed import FeedHandler
from cryptofeed.defines import (
DERIBIT, L1_BOOK, TRADES, OPTION, CALL, PUT
)
from cryptofeed.symbols import Symbol
from .api import (
Client, Trade,
get_config,
str_to_cb_sym, piker_sym_to_cb_sym, cb_sym_to_deribit_inst,
maybe_open_price_feed
)
_spawn_kwargs = {
'infect_asyncio': True,
}
log = get_logger(__name__)
@acm
async def open_history_client(
instrument: str,
) -> tuple[Callable, dict]:
# TODO implement history getter for the new storage layer.
async with open_cached_client('deribit') as client:
async def get_ohlc(
end_dt: Optional[datetime] = None,
start_dt: Optional[datetime] = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
array = await client.bars(
instrument,
start_dt=start_dt,
end_dt=end_dt,
)
if len(array) == 0:
raise DataUnavailable
start_dt = pendulum.from_timestamp(array[0]['time'])
end_dt = pendulum.from_timestamp(array[-1]['time'])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 3, 'rate': 3}
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str = None,
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
# XXX: required to propagate ``tractor`` loglevel to piker logging
get_console_log(loglevel or tractor.current_actor().loglevel)
sym = symbols[0]
async with (
open_cached_client('deribit') as client,
send_chan as send_chan
):
init_msgs = {
# pass back token, and bool, signalling if we're the writer
# and that history has been written
sym: {
'symbol_info': {
'asset_type': 'option',
'price_tick_size': 0.0005
},
'shm_write_opts': {'sum_tick_vml': False},
'fqsn': sym,
},
}
nsym = piker_sym_to_cb_sym(sym)
async with maybe_open_price_feed(sym) as stream:
cache = await client.cache_symbols()
last_trades = (await client.last_trades(
cb_sym_to_deribit_inst(nsym), count=1)).trades
if len(last_trades) == 0:
last_trade = None
async for typ, quote in stream:
if typ == 'trade':
last_trade = Trade(**(quote['data']))
break
else:
last_trade = Trade(**(last_trades[0]))
first_quote = {
'symbol': sym,
'last': last_trade.price,
'brokerd_ts': last_trade.timestamp,
'ticks': [{
'type': 'trade',
'price': last_trade.price,
'size': last_trade.amount,
'broker_ts': last_trade.timestamp
}]
}
task_status.started((init_msgs, first_quote))
feed_is_live.set()
async for typ, quote in stream:
topic = quote['symbol']
await send_chan.send({topic: quote})
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> None:
async with open_cached_client('deribit') as client:
# load all symbols locally for fast search
cache = await client.cache_symbols()
await ctx.started()
async with ctx.open_stream() as stream:
async for pattern in stream:
# repack in dict form
await stream.send(
await client.search_symbols(pattern))


@ -0,0 +1,134 @@
``ib`` backend
--------------
more or less the "everything broker" for traditional and international
markets. they are the "go to" provider for automatic retail trading
and we interface to their APIs using the `ib_insync` project.
status
******
current support is *production grade* and both real-time data and order
management should be correct and fast. this backend is used by core devs
for live trading.
currently there is not yet full support for:
- options charting and trading
- paxos based crypto rt feeds and trading
config
******
In order to get order mode support your ``brokers.toml``
needs to have something like the following:
.. code:: toml
[ib]
hosts = [
"127.0.0.1",
]
# TODO: when we eventually spawn gateways in our
# container, we can just dynamically allocate these
# using IBC.
ports = [
4002,
4003,
4006,
4001,
7497,
]
# XXX: for a paper account the flex web query service
# is not supported so you have to manually download
# an XML report and put it in a location that can be
# accessed by the ``brokerd.ib`` backend code for parsing.
flex_token = '1111111111111111'
flex_trades_query_id = '6969696' # live accounts only?
# 3rd party web-api token
# (XXX: not sure if this works yet)
trade_log_token = '111111111111111'
# when clients are being scanned this determines
# which clients are preferred to be used for data feeds
# based on account names which are detected as active
# on each client.
prefer_data_account = [
# this has to be first in order to make data work with dual paper + live
'main',
'algopaper',
]
[ib.accounts]
main = 'U69696969'
algopaper = 'DU9696969'
If everything works correctly you should see any current positions
loaded in the pps pane on chart load and you should also be able to
check your trade records in the file::
<pikerk_conf_dir>/ledgers/trades_ib_algopaper.toml
An example ledger file will have entries written verbatim from the
trade events schema:
.. code:: toml
["0000e1a7.630f5e5a.01.01"]
secType = "FUT"
conId = 515416577
symbol = "MNQ"
lastTradeDateOrContractMonth = "20221216"
strike = 0.0
right = ""
multiplier = "2"
exchange = "GLOBEX"
primaryExchange = ""
currency = "USD"
localSymbol = "MNQZ2"
tradingClass = "MNQ"
includeExpired = false
secIdType = ""
secId = ""
comboLegsDescrip = ""
comboLegs = []
execId = "0000e1a7.630f5e5a.01.01"
time = 1661972086.0
acctNumber = "DU69696969"
side = "BOT"
shares = 1.0
price = 12372.75
permId = 441472655
clientId = 6116
orderId = 985
liquidation = 0
cumQty = 1.0
avgPrice = 12372.75
orderRef = ""
evRule = ""
evMultiplier = 0.0
modelCode = ""
lastLiquidity = 1
broker_time = 1661972086.0
name = "ib"
commission = 0.57
realizedPNL = 243.41
yield_ = 0.0
yieldRedemptionDate = 0
listingExchange = "GLOBEX"
date = "2022-08-31T18:54:46+00:00"
your ``pps.toml`` file will have position entries like,
.. code:: toml
[ib.algopaper."mnq.globex.20221216"]
size = -1.0
ppu = 12423.630576923071
bsuid = 515416577
expiry = "2022-12-16T00:00:00+00:00"
clears = [
{ dt = "2022-08-31T18:54:46+00:00", ppu = 12423.630576923071, accum_size = -19.0, price = 12372.75, size = 1.0, cost = 0.57, tid = "0000e1a7.630f5e5a.01.01" },
]
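As a quick sanity check you can load and pretty print such a ledger
yourself; a minimal sketch (assuming python 3.11+ for ``tomllib`` and
a hypothetical relative file path):

.. code:: python

    import tomllib

    with open('ledgers/trades_ib_algopaper.toml', 'rb') as f:
        ledger = tomllib.load(f)

    for tid, record in ledger.items():
        print(tid, record['symbol'], record['side'], record['shares'], record['price'])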


@ -20,15 +20,10 @@ Interactive Brokers API backend.
Sub-modules within break into the core functionalities:
- ``broker.py`` part for orders / trading endpoints
-- ``data.py`` for real-time data feed endpoints
+- ``feed.py`` for real-time data feed endpoints
-- ``client.py`` for the core API machinery which is ``trio``-ized
+- ``api.py`` for the core API machinery which is ``trio``-ized
wrapping around ``ib_insync``.
-- ``report.py`` for the hackery to build manual pp calcs
-to avoid ib's absolute bullshit FIFO style position
-tracking..
"""
from .api import (
get_client,
@ -38,7 +33,10 @@ from .feed import (
open_symbol_search,
stream_quotes,
)
-from .broker import trades_dialogue
+from .broker import (
+trades_dialogue,
+norm_trade_records,
+)
__all__ = [
'get_client',

@ -0,0 +1,184 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
``ib`` utilities and hacks suitable for use in the backend and/or as
runnable script-programs.
'''
from typing import Literal
import subprocess
import tractor
_reset_tech: Literal[
'vnc',
'i3ipc_xdotool',
# TODO: in theory we can use a different linux DE API or
# some other type of similar window scanning/mgmt client
# (on other OSs) to do the same.
] = 'vnc'
async def data_reset_hack(
reset_type: str = 'data',
) -> None:
'''
Run key combos for resetting data feeds and yield back to caller
when complete.
NOTE: this is a linux-only hack around!
There are multiple "techs" you can use depending on your infra setup:
- if running ib-gw in a container with a VNC server running the most
performant method is the `'vnc'` option.
- if running ib-gw/tws locally, and you are using `i3` you can use
the ``i3ipc`` lib and ``xdotool`` to send the appropriate click
and key-combos automatically to your local desktop's java X-apps.
https://interactivebrokers.github.io/tws-api/historical_limitations.html#pacing_violations
TODOs:
- a return type that hopefully determines if the hack was
successful.
- other OS support?
- integration with ``ib-gw`` run in docker + Xorg?
- is it possible to offer a local server that can be accessed by
a client? Would sure be handy for running native java blobs
that need to be wrangled.
'''
global _reset_tech
match _reset_tech:
case 'vnc':
try:
await tractor.to_asyncio.run_task(vnc_click_hack)
except OSError:
_reset_tech = 'i3ipc_xdotool'
try:
i3ipc_xdotool_manual_click_hack()
return True
except OSError:
return False
case 'i3ipc_xdotool':
i3ipc_xdotool_manual_click_hack()
case _ as tech:
raise RuntimeError(f'{tech} is not supported for reset tech!?')
# we don't really need the ``xdotool`` approach any more B)
return True
async def vnc_click_hack(
reset_type: str = 'data'
) -> None:
'''
Reset the data or network connection for the VNC-attached
ib gateway using magic combos.
'''
key = {'data': 'f', 'connection': 'r'}[reset_type]
import asyncvnc
async with asyncvnc.connect(
'localhost',
port=3003,
# password='ibcansmbz',
) as client:
# move to middle of screen
# 640x1800
client.mouse.move(
x=500,
y=500,
)
client.mouse.click()
client.keyboard.press('Ctrl', 'Alt', key) # keys are stacked
def i3ipc_xdotool_manual_click_hack() -> None:
import i3ipc
i3 = i3ipc.Connection()
t = i3.get_tree()
orig_win_id = t.find_focused().window
# for tws
win_names: list[str] = [
'Interactive Brokers', # tws running in i3
'IB Gateway', # gw running in i3
# 'IB', # gw running in i3 (newer version?)
]
for name in win_names:
results = t.find_titled(name)
print(f'results for {name}: {results}')
if results:
con = results[0]
print(f'Resetting data feed for {name}')
win_id = str(con.window)
w, h = con.rect.width, con.rect.height
# TODO: seems to be a few libs for python but not sure
# if they support all the sub commands we need, order of
# most recent commit history:
# https://github.com/rr-/pyxdotool
# https://github.com/ShaneHutter/pyxdotool
# https://github.com/cphyc/pyxdotool
# TODO: only run the reconnect (2nd) kc on a detected
# disconnect?
for key_combo, timeout in [
# only required if we need a connection reset.
# ('ctrl+alt+r', 12),
# data feed reset.
('ctrl+alt+f', 6)
]:
subprocess.call([
'xdotool',
'windowactivate', '--sync', win_id,
# move mouse to bottom left of window (where there should
# be nothing to click).
'mousemove_relative', '--sync', str(w-4), str(h-4),
# NOTE: we may need to stick a `--retry 3` in here..
'click', '--window', win_id,
'--repeat', '3', '1',
# hackzorzes
'key', key_combo,
],
timeout=timeout,
)
# re-activate and focus original window
subprocess.call([
'xdotool',
'windowactivate', '--sync', str(orig_win_id),
'click', '--window', str(orig_win_id), '1',
])
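# NOTE: hypothetical usage sketch (not from the original source): a
# ``brokerd.ib`` feed task might fail over to the hack on a detected
# history/data-farm disconnect; only ``data_reset_hack()`` above is
# real, the surrounding event handling is illustrative:
#
# if status_msg == 'HMDS data farm connection is broken':
#     if not await data_reset_hack(reset_type='data'):
#         log.warning('reset hack failed; try a manual ctrl-alt-f')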

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,64 @@
``kraken`` backend
------------------
though they don't have the most liquidity of all the cexes they sure are
accommodating to those of us who appreciate a little ``xmr``.
status
******
current support is *production grade* and both real-time data and order
management should be correct and fast. this backend is used by core devs
for live trading.
config
******
In order to get order mode support your ``brokers.toml``
needs to have something like the following:
.. code:: toml
[kraken]
accounts.spot = 'spot'
key_descr = "spot"
api_key = "69696969696969696696969696969696969696969696969696969696"
secret = "BOOBSBOOBSBOOBSBOOBSBOOBSSMBZ69696969696969669969696969696"
If everything works correctly you should see any current positions
loaded in the pps pane on chart load and you should also be able to
check your trade records in the file::
<pikerk_conf_dir>/ledgers/trades_kraken_spot.toml
An example ledger file will have entries written verbatim from the
trade events schema:
.. code:: toml
[TFJBKK-SMBZS-VJ4UWS]
ordertxid = "SMBZSA-7CNQU-3HWLNJ"
postxid = "SMBZSE-M7IF5-CFI7LT"
pair = "XXMRZEUR"
time = 1655691993.4133966
type = "buy"
ordertype = "limit"
price = "103.97000000"
cost = "499.99999977"
fee = "0.80000000"
vol = "4.80907954"
margin = "0.00000000"
misc = ""
your ``pps.toml`` file will have position entries like,
.. code:: toml
[kraken.spot."xmreur.kraken"]
size = 4.80907954
ppu = 103.97000000
bsuid = "XXMRZEUR"
clears = [
{ tid = "TFJBKK-SMBZS-VJ4UWS", cost = 0.8, price = 103.97, size = 4.80907954, dt = "2022-05-20T02:26:33.413397+00:00" },
]


@ -0,0 +1,61 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Kraken backend.
Sub-modules within break into the core functionalities:
- ``broker.py`` part for orders / trading endpoints
- ``feed.py`` for real-time data feed endpoints
- ``api.py`` for the core API machinery which is ``trio``-ized
wrapping around the kraken web (REST) API.
'''
from piker.log import get_logger
log = get_logger(__name__)
from .api import (
get_client,
)
from .feed import (
open_history_client,
open_symbol_search,
stream_quotes,
)
from .broker import (
trades_dialogue,
norm_trade_records,
)
__all__ = [
'get_client',
'trades_dialogue',
'open_history_client',
'open_symbol_search',
'stream_quotes',
'norm_trade_records',
]
# tractor RPC enable arg
__enable_modules__: list[str] = [
'api',
'feed',
'broker',
]


@ -0,0 +1,621 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Kraken web API wrapping.
'''
from contextlib import asynccontextmanager as acm
from datetime import datetime
import itertools
from typing import (
Any,
Optional,
Union,
)
import time
from bidict import bidict
import pendulum
import asks
from fuzzywuzzy import process as fuzzy
import numpy as np
import urllib.parse
import hashlib
import hmac
import base64
import trio
from piker import config
from piker.data.types import Struct
from piker.data._source import Symbol
from piker.brokers._util import (
resproc,
SymbolNotFound,
BrokerError,
DataThrottle,
)
from piker.pp import Transaction
from . import log
# <uri>/<version>/
_url = 'https://api.kraken.com/0'
# Broker specific ohlc schema which includes a vwap field
_ohlc_dtype = [
('index', int),
('time', int),
('open', float),
('high', float),
('low', float),
('close', float),
('volume', float),
('count', int),
('bar_wap', float),
]
# UI components allow this to be declared such that additional
# (historical) fields can be exposed.
ohlc_dtype = np.dtype(_ohlc_dtype)
_show_wap_in_history = True
_symbol_info_translation: dict[str, str] = {
'tick_decimals': 'pair_decimals',
}
def get_config() -> dict[str, Any]:
conf, path = config.load()
section = conf.get('kraken')
if section is None:
log.warning(f'No config section found for kraken in {path}')
return {}
return section
def get_kraken_signature(
urlpath: str,
data: dict[str, Any],
secret: str
) -> str:
postdata = urllib.parse.urlencode(data)
encoded = (str(data['nonce']) + postdata).encode()
message = urlpath.encode() + hashlib.sha256(encoded).digest()
mac = hmac.new(base64.b64decode(secret), message, hashlib.sha512)
sigdigest = base64.b64encode(mac.digest())
return sigdigest.decode()
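# NOTE: illustrative usage (not from the original source): per
# https://docs.kraken.com/rest/#section/Authentication the mac is
# computed over the uri path + sha256(nonce + urlencoded-postdata),
# eg. for a (hypothetical) private history query:
#
# data = {'nonce': '1616492376594', 'ofs': 0}
# sig = get_kraken_signature('/0/private/TradesHistory', data, secret)
# headers = {'API-Key': api_key, 'API-Sign': sig}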
class InvalidKey(ValueError):
'''
EAPI:Invalid key
This error is returned when the API key used for the call is
either expired or disabled, please review the API key in your
Settings -> API tab of account management or generate a new one
and update your application.
'''
# https://www.kraken.com/features/api#get-tradable-pairs
class Pair(Struct):
altname: str # alternate pair name
wsname: str # WebSocket pair name (if available)
aclass_base: str # asset class of base component
base: str # asset id of base component
aclass_quote: str # asset class of quote component
quote: str # asset id of quote component
lot: str # volume lot size
cost_decimals: int
costmin: float
pair_decimals: int # scaling decimal places for pair
lot_decimals: int # scaling decimal places for volume
# amount to multiply lot volume by to get currency volume
lot_multiplier: float
# array of leverage amounts available when buying
leverage_buy: list[int]
# array of leverage amounts available when selling
leverage_sell: list[int]
# fee schedule array in [volume, percent fee] tuples
fees: list[tuple[int, float]]
# maker fee schedule array in [volume, percent fee] tuples (if on
# maker/taker)
fees_maker: list[tuple[int, float]]
fee_volume_currency: str # volume discount currency
margin_call: str # margin call level
margin_stop: str # stop-out/liquidation margin level
ordermin: float # minimum order volume for pair
tick_size: float # min price step size
status: str
short_position_limit: float = 0
long_position_limit: float = float('inf')
class Client:
# global symbol normalization table
_ntable: dict[str, str] = {}
_atable: bidict[str, str] = bidict()
_pairs: dict[str, Pair] = {}
def __init__(
self,
config: dict[str, str],
name: str = '',
api_key: str = '',
secret: str = ''
) -> None:
self._sesh = asks.Session(connections=4)
self._sesh.base_location = _url
self._sesh.headers.update({
'User-Agent':
'krakenex/2.1.0 (+https://github.com/veox/python3-krakenex)'
})
self.conf: dict[str, str] = config
self._name = name
self._api_key = api_key
self._secret = secret
@property
def pairs(self) -> dict[str, Pair]:
if self._pairs is None:
raise RuntimeError(
"Make sure to run `cache_symbols()` on startup!"
)
# retrieve and cache all symbols
return self._pairs
async def _public(
self,
method: str,
data: dict,
) -> dict[str, Any]:
resp = await self._sesh.post(
path=f'/public/{method}',
json=data,
timeout=float('inf')
)
return resproc(resp, log)
async def _private(
self,
method: str,
data: dict,
uri_path: str
) -> dict[str, Any]:
headers = {
'Content-Type':
'application/x-www-form-urlencoded',
'API-Key':
self._api_key,
'API-Sign':
get_kraken_signature(uri_path, data, self._secret)
}
resp = await self._sesh.post(
path=f'/private/{method}',
data=data,
headers=headers,
timeout=float('inf')
)
return resproc(resp, log)
async def endpoint(
self,
method: str,
data: dict[str, Any]
) -> dict[str, Any]:
uri_path = f'/0/private/{method}'
data['nonce'] = str(int(1000*time.time()))
return await self._private(method, data, uri_path)
async def get_balances(
self,
) -> dict[str, float]:
'''
Return the set of asset balances for this account
by symbol.
'''
resp = await self.endpoint(
'Balance',
{},
)
by_bsuid = resp['result']
return {
self._atable[sym].lower(): float(bal)
for sym, bal in by_bsuid.items()
}
async def get_assets(self) -> dict[str, dict]:
resp = await self._public('Assets', {})
return resp['result']
async def cache_assets(self) -> None:
assets = self.assets = await self.get_assets()
for bsuid, info in assets.items():
self._atable[bsuid] = info['altname']
async def get_trades(
self,
fetch_limit: int | None = None,
) -> dict[str, Any]:
'''
Get the trades (aka cleared orders) history from the rest endpoint:
https://docs.kraken.com/rest/#operation/getTradeHistory
'''
ofs = 0
trades_by_id: dict[str, Any] = {}
for i in itertools.count():
if (
fetch_limit
and i >= fetch_limit
):
break
# increment 'ofs' pagination offset
ofs = i*50
resp = await self.endpoint(
'TradesHistory',
{'ofs': ofs},
)
by_id = resp['result']['trades']
trades_by_id.update(by_id)
# can get up to 50 results per query, see:
# https://docs.kraken.com/rest/#tag/User-Data/operation/getTradeHistory
if (
len(by_id) < 50
):
err = resp.get('error')
if err:
raise BrokerError(err)
# we received less than the max (50) page-size of trade
# results so this must be the end of the history; run a
# sanity check against the server-reported total.
count = resp['result']['count']
assert count == len(trades_by_id.values())
break
return trades_by_id
async def get_xfers(
self,
asset: str,
src_asset: str = '',
) -> dict[str, Transaction]:
'''
Get asset balance transfer transactions.
Currently only withdrawals are supported.
'''
xfers: list[dict] = (await self.endpoint(
'WithdrawStatus',
{'asset': asset},
))['result']
# eg. resp schema:
# 'result': [{'method': 'Bitcoin', 'aclass': 'currency', 'asset':
# 'XXBT', 'refid': 'AGBJRMB-JHD2M4-NDI3NR', 'txid':
# 'b95d66d3bb6fd76cbccb93f7639f99a505cb20752c62ea0acc093a0e46547c44',
# 'info': 'bc1qc8enqjekwppmw3g80p56z5ns7ze3wraqk5rl9z',
# 'amount': '0.00300726', 'fee': '0.00001000', 'time':
# 1658347714, 'status': 'Success'}]}
trans: dict[str, Transaction] = {}
for entry in xfers:
# look up the normalized name and asset info
asset_key = entry['asset']
asset_info = self.assets[asset_key]
asset = self._atable[asset_key].lower()
# XXX: this is in the asset units (likely) so it isn't
# quite the same as a commissions cost necessarily..
cost = float(entry['fee'])
fqsn = asset + '.kraken'
pairinfo = Symbol.from_fqsn(
fqsn,
info={
'asset_type': 'crypto',
'lot_tick_size': asset_info['decimals'],
},
)
tran = Transaction(
fqsn=fqsn,
sym=pairinfo,
tid=entry['txid'],
dt=pendulum.from_timestamp(entry['time']),
bsuid=f'{asset}{src_asset}',
size=-1*(
float(entry['amount'])
+
cost
),
# since this will be treated as a "sell" it
# shouldn't be needed to compute the breakeven price.
price='NaN',
# XXX: see note above
cost=cost,
)
trans[tran.tid] = tran
return trans
async def submit_limit(
self,
symbol: str,
price: float,
action: str,
size: float,
reqid: str = None,
validate: bool = False  # set True to test a call without real submission
) -> dict:
'''
Place an order and return integer request id provided by client.
'''
# Build common data dict for common keys from both endpoints
data = {
"pair": symbol,
"price": str(price),
"validate": validate
}
if reqid is None:
# Build order data for kraken api
data |= {
"ordertype": "limit",
"type": action,
"volume": str(size),
}
return await self.endpoint('AddOrder', data)
else:
# Edit order data for kraken api
data["txid"] = reqid
return await self.endpoint('EditOrder', data)
async def submit_cancel(
self,
reqid: str,
) -> dict:
'''
Send cancel request for order id ``reqid``.
'''
# txid is a transaction id given by kraken
return await self.endpoint('CancelOrder', {"txid": reqid})
async def symbol_info(
self,
pair: Optional[str] = None,
) -> dict[str, Pair] | Pair:
if pair is not None:
pairs = {'pair': pair}
else:
pairs = None # get all pairs
resp = await self._public('AssetPairs', pairs)
err = resp['error']
if err:
symbolname = pairs['pair'] if pair else None
raise SymbolNotFound(f'{symbolname}.kraken')
pairs = resp['result']
if pair is not None:
_, data = next(iter(pairs.items()))
return Pair(**data)
else:
return {key: Pair(**data) for key, data in pairs.items()}
async def cache_symbols(self) -> dict:
'''
Load all market pair info build and cache it for downstream use.
A ``._ntable: dict[str, str]`` is available for mapping the
websocket pair name-keys and their http endpoint API (smh)
equivalents to the "alternative name" which is generally the one
we actually want to use XD
'''
if not self._pairs:
self._pairs.update(await self.symbol_info())
# table of all ws and rest keys to their alt-name values.
ntable: dict[str, str] = {}
for rest_key in list(self._pairs.keys()):
pair: Pair = self._pairs[rest_key]
altname = pair.altname
wsname = pair.wsname
ntable[rest_key] = ntable[wsname] = altname
# register the pair under all monikers, a giant flat
# surjection of all possible names to each info obj.
self._pairs[altname] = self._pairs[wsname] = pair
self._ntable.update(ntable)
return self._pairs
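# NOTE: illustrative only (not from the original source): after this
# runs, the rest key 'XXBTZUSD', the ws key 'XBT/USD' and the altname
# 'XBTUSD' all resolve to the same ``Pair`` in ``._pairs`` and
# ``._ntable['XBT/USD'] == 'XBTUSD'``.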
async def search_symbols(
self,
pattern: str,
limit: int = None,
) -> dict[str, Any]:
'''
Search for a symbol by "alt name"..
It is expected that the ``Client._pairs`` table
gets populated before conducting the underlying fuzzy-search
over the pair-key set.
'''
if not len(self._pairs):
await self.cache_symbols()
assert self._pairs, '`Client.cache_symbols()` was never called!?'
matches = fuzzy.extractBests(
pattern,
self._pairs,
score_cutoff=50,
)
# repack in dict form
return {item[0].altname: item[0] for item in matches}
async def bars(
self,
symbol: str = 'XBTUSD',
# UTC 2017-07-02 12:53:20
since: int | datetime | None = None,
count: int = 720, # <- max allowed per query
as_np: bool = True,
) -> dict:
if since is None:
since = pendulum.now('UTC').start_of('minute').subtract(
minutes=count).timestamp()
elif isinstance(since, int):
since = pendulum.from_timestamp(since).timestamp()
else: # presumably a pendulum datetime
since = since.timestamp()
# UTC 2017-07-02 12:53:20 is oldest seconds value
since = str(max(1499000000, int(since)))
json = await self._public(
'OHLC',
data={
'pair': symbol,
'since': since,
},
)
try:
res = json['result']
res.pop('last')
bars = next(iter(res.values()))
new_bars = []
first = bars[0]
last_nz_vwap = first[-3]
if last_nz_vwap == 0:
# use close if vwap is zero
last_nz_vwap = first[-4]
# convert all fields to native types
for i, bar in enumerate(bars):
# normalize weird zero-ed vwap values..cmon kraken..
# indicates vwap didn't change since last bar
vwap = float(bar.pop(-3))
if vwap != 0:
last_nz_vwap = vwap
if vwap == 0:
vwap = last_nz_vwap
# re-insert vwap as the last of the fields
bar.append(vwap)
new_bars.append(
(i,) + tuple(
ftype(bar[j]) for j, (name, ftype) in enumerate(
_ohlc_dtype[1:]
)
)
)
array = np.array(new_bars, dtype=_ohlc_dtype) if as_np else bars
return array
except KeyError:
errmsg = json['error'][0]
if 'not found' in errmsg:
raise SymbolNotFound(errmsg + f': {symbol}')
elif 'Too many requests' in errmsg:
raise DataThrottle(f'{symbol}')
else:
raise BrokerError(errmsg)
@classmethod
def normalize_symbol(
cls,
ticker: str
) -> tuple[str, Pair]:
'''
Normalize symbol names to a 3x3 pair from the global
definition map which we build out from the data retrieved from
the 'AssetPairs' endpoint, see methods above.
'''
ticker = cls._ntable[ticker]
return ticker.lower(), cls._pairs[ticker]
@acm
async def get_client() -> Client:
conf = get_config()
if conf:
client = Client(
conf,
name=conf['key_descr'],
api_key=conf['api_key'],
secret=conf['secret']
)
else:
client = Client({})
# at startup, load all symbols, and asset info in
# batch requests.
async with trio.open_nursery() as nurse:
nurse.start_soon(client.cache_assets)
await client.cache_symbols()
yield client

File diff suppressed because it is too large


@ -0,0 +1,459 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Real-time and historical data feed endpoints.
'''
from contextlib import asynccontextmanager as acm
from datetime import datetime
from typing import (
Any,
Optional,
Callable,
)
import time
from async_generator import aclosing
from fuzzywuzzy import process as fuzzy
import numpy as np
import pendulum
from trio_typing import TaskStatus
import tractor
import trio
from piker._cacheables import open_cached_client
from piker.brokers._util import (
BrokerError,
DataThrottle,
DataUnavailable,
)
from piker.log import get_console_log
from piker.data.types import Struct
from piker.data._web_bs import open_autorecon_ws, NoBsWs
from . import log
from .api import (
Client,
Pair,
)
class OHLC(Struct):
'''
Description of the flattened OHLC quote format.
For schema details see:
https://docs.kraken.com/websockets/#message-ohlc
'''
chan_id: int # internal kraken id
chan_name: str # eg. ohlc-1 (name-interval)
pair: str # fx pair
time: float # Begin time of interval, in seconds since epoch
etime: float # End time of interval, in seconds since epoch
open: float # Open price of interval
high: float # High price within interval
low: float # Low price within interval
close: float # Close price of interval
vwap: float # Volume weighted average price within interval
volume: float # Accumulated volume **within interval**
count: int # Number of trades within interval
# (sampled) generated tick data
ticks: list[Any] = []
async def stream_messages(
ws: NoBsWs,
):
'''
Message stream parser and heartbeat handler.
Deliver ws subscription messages as well as handle heartbeat logic
through a single async generator.
'''
too_slow_count = last_hb = 0
while True:
with trio.move_on_after(5) as cs:
msg = await ws.recv_msg()
# trigger reconnection if heartbeat is laggy
if cs.cancelled_caught:
too_slow_count += 1
if too_slow_count > 20:
log.warning(
"Heartbeat is too slow, resetting ws connection")
await ws._connect()
too_slow_count = 0
continue
match msg:
case {'event': 'heartbeat'}:
now = time.time()
delay = now - last_hb
last_hb = now
# XXX: why tf is this not printing without --tl flag?
log.debug(f"Heartbeat after {delay}")
# print(f"Heartbeat after {delay}")
continue
case _:
# passthrough sub msgs
yield msg
async def process_data_feed_msgs(
ws: NoBsWs,
):
'''
Parse and pack data feed messages.
'''
async for msg in stream_messages(ws):
match msg:
case {
'errorMessage': errmsg
}:
raise BrokerError(errmsg)
case {
'event': 'subscriptionStatus',
} as sub:
log.info(
'WS subscription is active:\n'
f'{sub}'
)
continue
case [
chan_id,
*payload_array,
chan_name,
pair
]:
if 'ohlc' in chan_name:
ohlc = OHLC(
chan_id,
chan_name,
pair,
*payload_array[0]
)
ohlc.typecast()
yield 'ohlc', ohlc
elif 'spread' in chan_name:
bid, ask, ts, bsize, asize = map(
float, payload_array[0])
# TODO: really makes you think IB has a horrible API...
quote = {
'symbol': pair.replace('/', ''),
'ticks': [
{'type': 'bid', 'price': bid, 'size': bsize},
{'type': 'bsize', 'price': bid, 'size': bsize},
{'type': 'ask', 'price': ask, 'size': asize},
{'type': 'asize', 'price': ask, 'size': asize},
],
}
yield 'l1', quote
# elif 'book' in msg[-2]:
# chan_id, *payload_array, chan_name, pair = msg
# print(msg)
case _:
print(f'UNHANDLED MSG: {msg}')
# yield msg
def normalize(
ohlc: OHLC,
) -> dict:
quote = ohlc.to_dict()
quote['broker_ts'] = quote['time']
quote['brokerd_ts'] = time.time()
quote['symbol'] = quote['pair'] = quote['pair'].replace('/', '')
quote['last'] = quote['close']
quote['bar_wap'] = ohlc.vwap
# seriously eh? what's with this non-symmetry everywhere
# in subscription systems...
# XXX: piker style is always lowercases symbols.
topic = quote['pair'].replace('/', '').lower()
# print(quote)
return topic, quote
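# NOTE: illustrative only (not from the original source): an ``OHLC``
# for pair 'XBT/USD' normalizes to topic 'xbtusd' with a quote dict
# where 'last' mirrors the close and 'bar_wap' the interval vwap.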
@acm
async def open_history_client(
symbol: str,
) -> tuple[Callable, dict]:
# TODO implement history getter for the new storage layer.
async with open_cached_client('kraken') as client:
# lol, kraken won't send any more than the "last"
# 720 1m bars.. so we have to just ignore further
# requests of this type..
queries: int = 0
async def get_ohlc(
timeframe: float,
end_dt: Optional[datetime] = None,
start_dt: Optional[datetime] = None,
) -> tuple[
np.ndarray,
datetime, # start
datetime, # end
]:
nonlocal queries
if (
queries > 0
or timeframe != 60
):
raise DataUnavailable(
'Only a single query for 1m bars supported')
count = 0
while count <= 3:
# NOTE: increment up front so repeated throttles can't spin
# this loop forever (the original only counted successful
# queries).
count += 1
try:
array = await client.bars(
symbol,
since=end_dt,
)
queries += 1
break
except DataThrottle:
log.warning(f'kraken OHLC throttle for {symbol}')
await trio.sleep(1)
start_dt = pendulum.from_timestamp(array[0]['time'])
end_dt = pendulum.from_timestamp(array[-1]['time'])
return array, start_dt, end_dt
yield get_ohlc, {'erlangs': 1, 'rate': 1}
async def stream_quotes(
send_chan: trio.abc.SendChannel,
symbols: list[str],
feed_is_live: trio.Event,
loglevel: str = None,
# backend specific
sub_type: str = 'ohlc',
# startup sync
task_status: TaskStatus[tuple[dict, dict]] = trio.TASK_STATUS_IGNORED,
) -> None:
'''
Subscribe for ohlc stream of quotes for ``pairs``.
``pairs`` must be formatted <crypto_symbol>/<fiat_symbol>.
'''
# XXX: required to propagate ``tractor`` loglevel to piker logging
get_console_log(loglevel or tractor.current_actor().loglevel)
ws_pairs = {}
sym_infos = {}
async with open_cached_client('kraken') as client, send_chan as send_chan:
# keep client cached for real-time section
for sym in symbols:
# transform to upper since piker style is always lower
sym = sym.upper()
si: Pair = await client.symbol_info(sym)
# try:
# si = Pair(**sym_info) # validation
# except TypeError:
# fields_diff = set(sym_info) - set(Pair.__struct_fields__)
# raise TypeError(
# f'Missing msg fields {fields_diff}'
# )
syminfo = si.to_dict()
syminfo['price_tick_size'] = 1. / 10**si.pair_decimals
syminfo['lot_tick_size'] = 1. / 10**si.lot_decimals
syminfo['asset_type'] = 'crypto'
sym_infos[sym] = syminfo
ws_pairs[sym] = si.wsname
symbol = symbols[0].lower()
init_msgs = {
# pass back token, and bool, signalling if we're the writer
# and that history has been written
symbol: {
# NOTE: keyed consistently off the lowered symbol; the
# original referenced the (upper-cased) loop var ``sym``.
'symbol_info': sym_infos[symbol.upper()],
'shm_write_opts': {'sum_tick_vml': False},
'fqsn': symbol,
},
}
@acm
async def subscribe(ws: NoBsWs):
# XXX: setup subs
# https://docs.kraken.com/websockets/#message-subscribe
# specific logic for this in kraken's sync client:
# https://github.com/krakenfx/kraken-wsclient-py/blob/master/kraken_wsclient_py/kraken_wsclient_py.py#L188
ohlc_sub = {
'event': 'subscribe',
'pair': list(ws_pairs.values()),
'subscription': {
'name': 'ohlc',
'interval': 1,
},
}
# TODO: we want to eventually allow unsubs which should
# be completely fine to request from a separate task
# since internally the ws methods appear to be FIFO
# locked.
await ws.send_msg(ohlc_sub)
# top-of-book spread data (aka L1)
l1_sub = {
'event': 'subscribe',
'pair': list(ws_pairs.values()),
'subscription': {
'name': 'spread',
# 'depth': 10}
},
}
# subscribe for the spread quote stream
await ws.send_msg(l1_sub)
yield
# unsub from all pairs on teardown
if ws.connected():
await ws.send_msg({
'pair': list(ws_pairs.values()),
'event': 'unsubscribe',
'subscription': ['ohlc', 'spread'],
})
# XXX: do we need to ack the unsub?
# await ws.recv_msg()
# see the tips on reconnection logic:
# https://support.kraken.com/hc/en-us/articles/360044504011-WebSocket-API-unexpected-disconnections-from-market-data-feeds
ws: NoBsWs
async with (
open_autorecon_ws(
'wss://ws.kraken.com/',
fixture=subscribe,
) as ws,
aclosing(process_data_feed_msgs(ws)) as msg_gen,
):
# pull a first quote and deliver
typ, ohlc_last = await anext(msg_gen)
topic, quote = normalize(ohlc_last)
task_status.started((init_msgs, quote))
# lol, only "closes" when they're margin squeezing clients ;P
feed_is_live.set()
# keep start of last interval for volume tracking
last_interval_start = ohlc_last.etime
# start streaming
async for typ, ohlc in msg_gen:
if typ == 'ohlc':
# TODO: can get rid of all this by using
# ``trades`` subscription...
# generate tick values to match time & sales pane:
# https://trade.kraken.com/charts/KRAKEN:BTC-USD?period=1m
volume = ohlc.volume
# new OHLC sample interval
if ohlc.etime > last_interval_start:
last_interval_start = ohlc.etime
tick_volume = volume
else:
# this is the tick volume *within the interval*
tick_volume = volume - ohlc_last.volume
ohlc_last = ohlc
last = ohlc.close
if tick_volume:
ohlc.ticks.append({
'type': 'trade',
'price': last,
'size': tick_volume,
})
topic, quote = normalize(ohlc)
elif typ == 'l1':
quote = ohlc
topic = quote['symbol'].lower()
await send_chan.send({topic: quote})
@tractor.context
async def open_symbol_search(
ctx: tractor.Context,
) -> None:
async with open_cached_client('kraken') as client:
# load all symbols locally for fast search
cache = await client.cache_symbols()
await ctx.started(cache)
async with ctx.open_stream() as stream:
async for pattern in stream:
matches = fuzzy.extractBests(
pattern,
cache,
score_cutoff=50,
)
# repack in dict form
await stream.send(
{item[0]['altname']: item[0]
for item in matches}
)


@ -18,3 +18,9 @@
Market machinery for order executions, book, management.
"""
from ._client import open_ems
__all__ = [
'open_ems',
]


@ -22,54 +22,10 @@ from enum import Enum
from typing import Optional from typing import Optional
from bidict import bidict from bidict import bidict
from pydantic import BaseModel, validator
from ..data._source import Symbol from ..data._source import Symbol
from ._messages import BrokerdPosition, Status from ..data.types import Struct
from ..pp import Position
class Position(BaseModel):
'''
Basic pp (personal position) model with attached fills history.
This type should be IPC wire ready?
'''
symbol: Symbol
# last size and avg entry price
size: float
avg_price: float # TODO: contextual pricing
# ordered record of known constituent trade messages
fills: list[Status] = []
def update_from_msg(
self,
msg: BrokerdPosition,
) -> None:
# XXX: better place to do this?
symbol = self.symbol
lot_size_digits = symbol.lot_size_digits
avg_price, size = (
round(msg['avg_price'], ndigits=symbol.tick_size_digits),
round(msg['size'], ndigits=lot_size_digits),
)
self.avg_price = avg_price
self.size = size
@property
def dsize(self) -> float:
'''
The "dollar" size of the pp, normally in trading (fiat) unit
terms.
'''
return self.avg_price * self.size
_size_units = bidict({ _size_units = bidict({
@ -84,34 +40,9 @@ SizeUnit = Enum(
) )
class Allocator(BaseModel): class Allocator(Struct):
class Config:
validate_assignment = True
copy_on_model_validation = False
arbitrary_types_allowed = True
# required to get the account validator lookup working?
extra = 'allow'
underscore_attrs_are_private = False
symbol: Symbol symbol: Symbol
account: Optional[str] = 'paper'
# TODO: for enums this clearly doesn't fucking work, you can't set
# a default at startup by passing in a `dict` but yet you can set
# that value through assignment..for wtv cucked reason.. honestly, pure
# unintuitive garbage.
size_unit: str = 'currency'
_size_units: dict[str, Optional[str]] = _size_units
@validator('size_unit', pre=True)
def maybe_lookup_key(cls, v):
# apply the corresponding enum key for the text "description" value
if v not in _size_units:
return _size_units.inverse[v]
assert v in _size_units
return v
# TODO: if we ever want to support non-uniform entry-slot-proportion # TODO: if we ever want to support non-uniform entry-slot-proportion
# "sizes" # "sizes"
@ -120,6 +51,28 @@ class Allocator(BaseModel):
units_limit: float units_limit: float
currency_limit: float currency_limit: float
slots: int slots: int
account: Optional[str] = 'paper'
_size_units: bidict[str, Optional[str]] = _size_units
# TODO: for enums this clearly doesn't fucking work, you can't set
# a default at startup by passing in a `dict` but yet you can set
# that value through assignment..for wtv cucked reason.. honestly, pure
# unintuitive garbage.
_size_unit: str = 'currency'
@property
def size_unit(self) -> str:
return self._size_unit
@size_unit.setter
def size_unit(self, v: str) -> Optional[str]:
if v not in _size_units:
v = _size_units.inverse[v]
assert v in _size_units
self._size_unit = v
return v
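NB: the dropped pydantic ``@validator`` becomes a plain property whose setter normalizes through the ``bidict``: assigning either a key or its human-readable description lands on the key. A self-contained sketch (the mapping entries are assumed):
from bidict import bidict
_size_units = bidict({
    'currency': '$ size',
    'units': '# units',
})
class Alloc:
    _size_unit: str = 'currency'
    @property
    def size_unit(self) -> str:
        return self._size_unit
    @size_unit.setter
    def size_unit(self, v: str) -> None:
        if v not in _size_units:
            # map a description value back to its key
            v = _size_units.inverse[v]
        self._size_unit = v
a = Alloc()
a.size_unit = '# units'
assert a.size_unit == 'units'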
def step_sizes( def step_sizes(
self, self,
@ -140,10 +93,13 @@ class Allocator(BaseModel):
else: else:
return self.units_limit return self.units_limit
def limit_info(self) -> tuple[str, float]:
return self.size_unit, self.limit()
def next_order_info( def next_order_info(
self, self,
# we only need a startup size for exit calcs, we can the # we only need a startup size for exit calcs, we can then
# determine how large slots should be if the initial pp size was # determine how large slots should be if the initial pp size was
# larger then the current live one, and the live one is smaller # larger then the current live one, and the live one is smaller
# then the initial config settings. # then the initial config settings.
@ -173,7 +129,7 @@ class Allocator(BaseModel):
l_sub_pp = self.units_limit - abs_live_size l_sub_pp = self.units_limit - abs_live_size
elif size_unit == 'currency': elif size_unit == 'currency':
live_cost_basis = abs_live_size * live_pp.avg_price live_cost_basis = abs_live_size * live_pp.ppu
slot_size = currency_per_slot / price slot_size = currency_per_slot / price
l_sub_pp = (self.currency_limit - live_cost_basis) / price l_sub_pp = (self.currency_limit - live_cost_basis) / price
@ -184,12 +140,14 @@ class Allocator(BaseModel):
# an entry (adding-to or starting a pp) # an entry (adding-to or starting a pp)
if ( if (
action == 'buy' and live_size > 0 or
action == 'sell' and live_size < 0 or
live_size == 0 live_size == 0
or (action == 'buy' and live_size > 0)
or action == 'sell' and live_size < 0
): ):
order_size = min(
order_size = min(slot_size, l_sub_pp) slot_size,
max(l_sub_pp, 0),
)
# an exit (removing-from or going to net-zero pp) # an exit (removing-from or going to net-zero pp)
else: else:
@ -205,7 +163,7 @@ class Allocator(BaseModel):
if size_unit == 'currency': if size_unit == 'currency':
# compute the "projected" limit's worth of units at the # compute the "projected" limit's worth of units at the
# current pp (weighted) price: # current pp (weighted) price:
slot_size = currency_per_slot / live_pp.avg_price slot_size = currency_per_slot / live_pp.ppu
else: else:
slot_size = u_per_slot slot_size = u_per_slot
@ -244,7 +202,12 @@ class Allocator(BaseModel):
if order_size < slot_size: if order_size < slot_size:
# compute a fractional slots size to display # compute a fractional slots size to display
slots_used = self.slots_used( slots_used = self.slots_used(
Position(symbol=sym, size=order_size, avg_price=price) Position(
symbol=sym,
size=order_size,
ppu=price,
bsuid=sym,
)
) )
return { return {
@ -271,8 +234,8 @@ class Allocator(BaseModel):
abs_pp_size = abs(pp.size) abs_pp_size = abs(pp.size)
if self.size_unit == 'currency': if self.size_unit == 'currency':
# live_currency_size = size or (abs_pp_size * pp.avg_price) # live_currency_size = size or (abs_pp_size * pp.ppu)
live_currency_size = abs_pp_size * pp.avg_price live_currency_size = abs_pp_size * pp.ppu
prop = live_currency_size / self.currency_limit prop = live_currency_size / self.currency_limit
else: else:
@ -284,14 +247,6 @@ class Allocator(BaseModel):
return round(prop * self.slots) return round(prop * self.slots)
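Concretely, the currency-unit slot math above reduces to the following (numbers made up):
currency_limit = 5e3        # $ cap for the pp
slots = 4                   # number of entry steps
price = 250.0
currency_per_slot = currency_limit / slots  # 1250.0 per entry
slot_size = currency_per_slot / price       # 5.0 units per entry
# a live pp of 10 units carries a $2500 cost basis, half the limit:
live_cost_basis = 10 * price
prop = live_cost_basis / currency_limit     # 0.5
assert round(prop * slots) == 2             # 2 of 4 slots consumed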
_derivs = (
'future',
'continuous_future',
'option',
'futures_option',
)
def mk_allocator( def mk_allocator(
symbol: Symbol, symbol: Symbol,
@ -300,7 +255,7 @@ def mk_allocator(
# default allocation settings # default allocation settings
defaults: dict[str, float] = { defaults: dict[str, float] = {
'account': None, # select paper by default 'account': None, # select paper by default
'size_unit': 'currency', # 'size_unit': 'currency',
'units_limit': 400, 'units_limit': 400,
'currency_limit': 5e3, 'currency_limit': 5e3,
'slots': 4, 'slots': 4,
@ -318,42 +273,9 @@ def mk_allocator(
'currency_limit': 6e3, 'currency_limit': 6e3,
'slots': 6, 'slots': 6,
} }
defaults.update(user_def) defaults.update(user_def)
alloc = Allocator( return Allocator(
symbol=symbol, symbol=symbol,
**defaults, **defaults,
) )
asset_type = symbol.type_key
# specific configs by asset class / type
if asset_type in _derivs:
# since it's harder to know how currency "applies" in this case
# given leverage properties
alloc.size_unit = '# units'
# set units limit to slots size thus making make the next
# entry step 1.0
alloc.units_limit = alloc.slots
# if the current position is already greater then the limit
# settings, increase the limit to the current position
if alloc.size_unit == 'currency':
startup_size = startup_pp.size * startup_pp.avg_price
if startup_size > alloc.currency_limit:
alloc.currency_limit = round(startup_size, ndigits=2)
else:
startup_size = abs(startup_pp.size)
if startup_size > alloc.units_limit:
alloc.units_limit = startup_size
if asset_type in _derivs:
alloc.slots = alloc.units_limit
return alloc


@ -18,26 +18,32 @@
Orders and execution client API. Orders and execution client API.
""" """
from __future__ import annotations
from contextlib import asynccontextmanager as acm from contextlib import asynccontextmanager as acm
from typing import Dict
from pprint import pformat from pprint import pformat
from dataclasses import dataclass, field from typing import TYPE_CHECKING
import trio import trio
import tractor import tractor
from tractor.trionics import broadcast_receiver from tractor.trionics import broadcast_receiver
from ..log import get_logger from ..log import get_logger
from ._ems import _emsd_main from ..data.types import Struct
from .._daemon import maybe_open_emsd from .._daemon import maybe_open_emsd
from ._messages import Order, Cancel from ._messages import Order, Cancel
from ..brokers import get_brokermod
if TYPE_CHECKING:
from ._messages import (
BrokerdPosition,
Status,
)
log = get_logger(__name__) log = get_logger(__name__)
@dataclass class OrderBook(Struct):
class OrderBook:
'''EMS-client-side order book ctl and tracking. '''EMS-client-side order book ctl and tracking.
A style similar to "model-view" is used here where this api is A style similar to "model-view" is used here where this api is
@ -52,20 +58,18 @@ class OrderBook:
# mem channels used to relay order requests to the EMS daemon # mem channels used to relay order requests to the EMS daemon
_to_ems: trio.abc.SendChannel _to_ems: trio.abc.SendChannel
_from_order_book: trio.abc.ReceiveChannel _from_order_book: trio.abc.ReceiveChannel
_sent_orders: dict[str, Order] = {}
_sent_orders: Dict[str, Order] = field(default_factory=dict)
_ready_to_receive: trio.Event = trio.Event()
def send( def send(
self, self,
msg: Order, msg: Order | dict,
) -> dict: ) -> dict:
self._sent_orders[msg.oid] = msg self._sent_orders[msg.oid] = msg
self._to_ems.send_nowait(msg.dict()) self._to_ems.send_nowait(msg)
return msg return msg
def update( def send_update(
self, self,
uuid: str, uuid: str,
@ -73,9 +77,8 @@ class OrderBook:
) -> dict: ) -> dict:
cmd = self._sent_orders[uuid] cmd = self._sent_orders[uuid]
msg = cmd.dict() msg = cmd.copy(update=data)
msg.update(data) self._sent_orders[uuid] = msg
self._sent_orders[uuid] = Order(**msg)
self._to_ems.send_nowait(msg) self._to_ems.send_nowait(msg)
return cmd return cmd
@ -83,12 +86,18 @@ class OrderBook:
"""Cancel an order (or alert) in the EMS. """Cancel an order (or alert) in the EMS.
""" """
cmd = self._sent_orders[uuid] cmd = self._sent_orders.get(uuid)
if not cmd:
log.error(
f'Unknown order {uuid}!?\n'
f'Maybe there is a stale entry or line?\n'
f'You should report this as a bug!'
)
msg = Cancel( msg = Cancel(
oid=uuid, oid=uuid,
symbol=cmd.symbol, symbol=cmd.symbol,
) )
self._to_ems.send_nowait(msg.dict()) self._to_ems.send_nowait(msg)
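NB: the ``.copy(update=...)`` helper comes from piker's own ``Struct`` wrapper; with vanilla ``msgspec`` the same send/update bookkeeping looks roughly like this, with ``msgspec.structs.replace()`` standing in for ``.copy()``:
import msgspec
class Order(msgspec.Struct):
    oid: str
    price: float
    size: float
_sent: dict[str, Order] = {}
def send(msg: Order) -> Order:
    _sent[msg.oid] = msg  # track by ems dialog id
    return msg
def send_update(uuid: str, **data) -> Order:
    cmd = _sent[uuid]
    # equivalent of ``cmd.copy(update=data)``
    msg = msgspec.structs.replace(cmd, **data)
    _sent[uuid] = msg
    return msg
send(Order(oid='abc', price=10.0, size=1.0))
assert send_update('abc', price=11.0).price == 11.0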
_orders: OrderBook = None _orders: OrderBook = None
@ -149,21 +158,36 @@ async def relay_order_cmds_from_sync_code(
book = get_orders() book = get_orders()
async with book._from_order_book.subscribe() as orders_stream: async with book._from_order_book.subscribe() as orders_stream:
async for cmd in orders_stream: async for cmd in orders_stream:
if cmd['symbol'] == symbol_key: sym = cmd.symbol
log.info(f'Send order cmd:\n{pformat(cmd)}') msg = pformat(cmd)
if sym == symbol_key:
log.info(f'Send order cmd:\n{msg}')
# send msg over IPC / wire # send msg over IPC / wire
await to_ems_stream.send(cmd) await to_ems_stream.send(cmd)
else:
log.warning(
f'Ignoring unmatched order cmd for {sym} != {symbol_key}:'
f'\n{msg}'
)
@acm @acm
async def open_ems( async def open_ems(
fqsn: str, fqsn: str,
mode: str = 'live',
loglevel: str = 'error',
) -> ( ) -> tuple[
OrderBook, OrderBook,
tractor.MsgStream, tractor.MsgStream,
dict, dict[
): # brokername, acctid
tuple[str, str],
list[BrokerdPosition],
],
list[str],
dict[str, Status],
]:
''' '''
Spawn an EMS daemon and begin sending orders and receiving Spawn an EMS daemon and begin sending orders and receiving
alerts. alerts.
@ -206,18 +230,36 @@ async def open_ems(
async with maybe_open_emsd(broker) as portal: async with maybe_open_emsd(broker) as portal:
mod = get_brokermod(broker)
if (
not getattr(mod, 'trades_dialogue', None)
or mode == 'paper'
):
mode = 'paper'
from ._ems import _emsd_main
async with ( async with (
# connect to emsd # connect to emsd
portal.open_context( portal.open_context(
_emsd_main, _emsd_main,
fqsn=fqsn, fqsn=fqsn,
exec_mode=mode,
loglevel=loglevel,
) as (ctx, (positions, accounts)), ) as (
ctx,
(
positions,
accounts,
dialogs,
)
),
# open 2-way trade command stream # open 2-way trade command stream
ctx.open_stream() as trades_stream, ctx.open_stream() as trades_stream,
): ):
# start sync code order msg delivery task
async with trio.open_nursery() as n: async with trio.open_nursery() as n:
n.start_soon( n.start_soon(
relay_order_cmds_from_sync_code, relay_order_cmds_from_sync_code,
@ -225,4 +267,10 @@ async def open_ems(
trades_stream trades_stream
) )
yield book, trades_stream, positions, accounts yield (
book,
trades_stream,
positions,
accounts,
dialogs,
)
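Call sites unpack the widened 5-tuple; a usage sketch (the fqsn and import path are illustrative, not from this diff):
from piker.clearing._client import open_ems
async with open_ems(
    'xbtusd.kraken',
    mode='paper',
) as (
    book,           # OrderBook: .send()/.send_update()/.cancel()
    trades_stream,  # tractor.MsgStream of ems status msgs
    positions,      # {(brokername, acctid): [BrokerdPosition, ..]}
    accounts,       # list[str] of account names
    dialogs,        # {oid: Status} for still-open order flows
):
    ...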

File diff suppressed because it is too large


@ -1,5 +1,5 @@
# piker: trading gear for hackers # piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0) # Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by # it under the terms of the GNU Affero General Public License as published by
@ -15,108 +15,162 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
""" """
Clearing system messagingn types and protocols. Clearing sub-system message and protocols.
""" """
from typing import Optional, Union # from collections import (
# ChainMap,
# deque,
# )
from typing import (
Optional,
Literal,
)
# TODO: try out just encoding/send direction for now? from msgspec import field
# import msgspec
from pydantic import BaseModel
from ..data._source import Symbol from ..data._source import Symbol
from ..data.types import Struct
# TODO: a composite for tracking msg flow on 2-legged
# dialogs.
# class Dialog(ChainMap):
# '''
# Msg collection abstraction to easily track the state changes of
# a msg flow in one high level, query-able and immutable construct.
# The main use case is to query data from a (long-running)
# msg-transaction-sequence
# '''
# def update(
# self,
# msg,
# ) -> None:
# self.maps.insert(0, msg.to_dict())
# def flatten(self) -> dict:
# return dict(self)
# TODO: ``msgspec`` stuff worth paying attention to:
# - schema evolution:
# https://jcristharif.com/msgspec/usage.html#schema-evolution
# - for eg. ``BrokerdStatus``, instead just have separate messages?
# - use literals for a common msg determined by diff keys?
# - https://jcristharif.com/msgspec/usage.html#literal
# --------------
# Client -> emsd # Client -> emsd
# --------------
class Order(Struct):
class Cancel(BaseModel): # TODO: ideally we can combine these 2 fields into
'''Cancel msg for removing a dark (ems triggered) or # 1 and just use the size polarity to determine a buy/sell.
broker-submitted (live) trigger/order. # i would like to see this become more like
# https://jcristharif.com/msgspec/usage.html#literal
''' # action: Literal[
action: str = 'cancel' # 'live',
oid: str # uuid4 # 'dark',
symbol: str # 'alert',
# ]
class Order(BaseModel):
action: str # {'buy', 'sell', 'alert'}
# internal ``emdsd`` unique "order id"
oid: str # uuid4
symbol: Union[str, Symbol]
account: str # should we set a default as '' ?
price: float
size: float
brokers: list[str]
# Assigned once initial ack is received
# ack_time_ns: Optional[int] = None
action: Literal[
'buy',
'sell',
'alert',
]
# determines whether the create execution # determines whether the create execution
# will be submitted to the ems or directly to # will be submitted to the ems or directly to
# the backend broker # the backend broker
exec_mode: str # {'dark', 'live', 'paper'} exec_mode: Literal[
'dark',
'live',
# 'paper', no right?
]
class Config: # internal ``emdsd`` unique "order id"
# just for pre-loading a ``Symbol`` when used oid: str # uuid4
# in the order mode staging process symbol: str | Symbol
arbitrary_types_allowed = True account: str # should we set a default as '' ?
# don't copy this model instance when used in
# a recursive model
copy_on_model_validation = False
price: float
size: float # -ve is "sell", +ve is "buy"
brokers: list[str] = []
class Cancel(Struct):
'''
Cancel msg for removing a dark (ems triggered) or
broker-submitted (live) trigger/order.
'''
oid: str # uuid4
symbol: str
action: str = 'cancel'
# --------------
# Client <- emsd # Client <- emsd
# --------------
# update msgs from ems which relay state change info # update msgs from ems which relay state change info
# from the active clearing engine. # from the active clearing engine.
class Status(Struct):
class Status(BaseModel): time_ns: int
oid: str # uuid4 ems-order dialog id
resp: Literal[
'pending', # acked by broker but not yet open
'open',
'dark_open', # dark/algo triggered order is open in ems clearing loop
'triggered', # above triggered order sent to brokerd, or an alert closed
'closed', # fully cleared all size/units
'fill', # partial execution
'canceled',
'error',
]
name: str = 'status' name: str = 'status'
oid: str # uuid4
time_ns: int
# {
# 'dark_submitted',
# 'dark_cancelled',
# 'dark_triggered',
# 'broker_submitted',
# 'broker_cancelled',
# 'broker_executed',
# 'broker_filled',
# 'broker_errored',
# 'alert_submitted',
# 'alert_triggered',
# }
resp: str # "response", see above
# symbol: str
# trigger info
trigger_price: Optional[float] = None
# price: float
# broker: Optional[str] = None
# this maps normally to the ``BrokerdOrder.reqid`` below, an id # this maps normally to the ``BrokerdOrder.reqid`` below, an id
# normally allocated internally by the backend broker routing system # normally allocated internally by the backend broker routing system
broker_reqid: Optional[Union[int, str]] = None reqid: Optional[int | str] = None
# for relaying backend msg data "through" the ems layer # the (last) source order/request msg if provided
# (eg. the Order/Cancel which causes this msg) and
# acts as a back-reference to the corresponding
# request message which was the source of this msg.
req: Order | None = None
# XXX: better design/name here?
# flag that can be set to indicate a message for an order
# event that wasn't originated by piker's emsd (eg. some external
# trading system which does its own order control but that you
# might want to "track" using piker UIs/systems).
src: Optional[str] = None
# set when a cancel request msg was set for this order flow dialog
# but the brokerd dialog isn't yet in a cancelled state.
cancel_called: bool = False
# for relaying a boxed brokerd-dialog-side msg data "through" the
# ems layer to clients.
brokerd_msg: dict = {} brokerd_msg: dict = {}
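A toy version (field set trimmed) of the ``msgspec.Struct`` + ``Literal`` pattern adopted above; decoding validates the ``resp`` variants for free:
from typing import Literal, Optional
import msgspec
class Status(msgspec.Struct):
    time_ns: int
    resp: Literal['pending', 'open', 'closed', 'canceled', 'error']
    oid: str
    reqid: Optional[int | str] = None
    name: str = 'status'
wire = msgspec.json.encode(Status(time_ns=0, resp='open', oid='uuid4-here'))
msg = msgspec.json.decode(wire, type=Status)
assert msg.resp == 'open'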
# ---------------
# emsd -> brokerd # emsd -> brokerd
# ---------------
# requests *sent* from ems to respective backend broker daemon # requests *sent* from ems to respective backend broker daemon
class BrokerdCancel(BaseModel): class BrokerdCancel(Struct):
action: str = 'cancel'
oid: str # piker emsd order id oid: str # piker emsd order id
time_ns: int time_ns: int
@ -127,34 +181,39 @@ class BrokerdCancel(BaseModel):
# for setting a unique order id then this value will be relayed back # for setting a unique order id then this value will be relayed back
# on the emsd order request stream as the ``BrokerdOrderAck.reqid`` # on the emsd order request stream as the ``BrokerdOrderAck.reqid``
# field # field
reqid: Optional[Union[int, str]] = None reqid: Optional[int | str] = None
action: str = 'cancel'
class BrokerdOrder(BaseModel): class BrokerdOrder(Struct):
action: str # {buy, sell}
oid: str oid: str
account: str account: str
time_ns: int time_ns: int
symbol: str # fqsn
price: float
size: float
# TODO: if we instead rely on a +ve/-ve size to determine
# the action we more or less don't need this field right?
action: str = '' # {buy, sell}
# "broker request id": broker specific/internal order id if this is # "broker request id": broker specific/internal order id if this is
# None, creates a new order otherwise if the id is valid the backend # None, creates a new order otherwise if the id is valid the backend
# api must modify the existing matching order. If the broker allows # api must modify the existing matching order. If the broker allows
# for setting a unique order id then this value will be relayed back # for setting a unique order id then this value will be relayed back
# on the emsd order request stream as the ``BrokerdOrderAck.reqid`` # on the emsd order request stream as the ``BrokerdOrderAck.reqid``
# field # field
reqid: Optional[Union[int, str]] = None reqid: Optional[int | str] = None
symbol: str # symbol.<providername> ?
price: float
size: float
# ---------------
# emsd <- brokerd # emsd <- brokerd
# ---------------
# requests *received* to ems from broker backend # requests *received* to ems from broker backend
class BrokerdOrderAck(Struct):
class BrokerdOrderAck(BaseModel):
''' '''
Immediate response to a brokerd order request providing the broker Immediate response to a brokerd order request providing the broker
specific unique order id so that the EMS can associate this specific unique order id so that the EMS can associate this
@ -162,102 +221,93 @@ class BrokerdOrderAck(BaseModel):
``.oid`` (which is a uuid4). ``.oid`` (which is a uuid4).
''' '''
name: str = 'ack'
# defined and provided by backend # defined and provided by backend
reqid: Union[int, str] reqid: int | str
# emsd id originally sent in matching request msg # emsd id originally sent in matching request msg
oid: str oid: str
account: str = '' account: str = ''
name: str = 'ack'
class BrokerdStatus(BaseModel): class BrokerdStatus(Struct):
name: str = 'status' reqid: int | str
reqid: Union[int, str]
time_ns: int time_ns: int
status: Literal[
'open',
'canceled',
'fill',
'pending',
'error',
]
# XXX: should be best effort set for every update account: str
account: str = '' name: str = 'status'
# {
# 'submitted',
# 'cancelled',
# 'filled',
# }
status: str
filled: float = 0.0 filled: float = 0.0
reason: str = '' reason: str = ''
remaining: float = 0.0 remaining: float = 0.0
# XXX: better design/name here? # external: bool = False
# flag that can be set to indicate a message for an order
# event that wasn't originated by piker's emsd (eg. some external
# trading system which does it's own order control but that you
# might want to "track" using piker UIs/systems).
external: bool = False
# XXX: not required schema as of yet # XXX: not required schema as of yet
broker_details: dict = { broker_details: dict = field(default_factory=lambda: {
'name': '', 'name': '',
} })
class BrokerdFill(BaseModel): class BrokerdFill(Struct):
''' '''
A single message indicating a "fill-details" event from the broker A single message indicating a "fill-details" event from the broker
if available. if available.
''' '''
name: str = 'fill'
reqid: Union[int, str]
time_ns: int
# order exeuction related
action: str
size: float
price: float
broker_details: dict = {} # meta-data (eg. commisions etc.)
# brokerd timestamp required for order mode arrow placement on x-axis # brokerd timestamp required for order mode arrow placement on x-axis
# TODO: maybe int if we force ns? # TODO: maybe int if we force ns?
# we need to normalize this somehow since backends will use their # we need to normalize this somehow since backends will use their
# own format and likely across many disparate epoch clocks... # own format and likely across many disparate epoch clocks...
broker_time: float broker_time: float
reqid: int | str
time_ns: int
# order execution related
size: float
price: float
name: str = 'fill'
action: Optional[str] = None
broker_details: dict = {}  # meta-data (eg. commissions etc.)
class BrokerdError(BaseModel): class BrokerdError(Struct):
''' '''
Optional error type that can be relayed to emsd for error handling. Optional error type that can be relayed to emsd for error handling.
This is still a TODO thing since we're not sure how to employ it yet. This is still a TODO thing since we're not sure how to employ it yet.
''' '''
name: str = 'error'
oid: str oid: str
symbol: str
reason: str
# if no brokerd order request was actually submitted (eg. we errored # if no brokerd order request was actually submitted (eg. we errored
# at the ``pikerd`` layer) then there will be no ``reqid`` allocated. # at the ``pikerd`` layer) then there will be no ``reqid`` allocated.
reqid: Optional[Union[int, str]] = None reqid: Optional[int | str] = None
symbol: str name: str = 'error'
reason: str
broker_details: dict = {} broker_details: dict = {}
class BrokerdPosition(BaseModel): class BrokerdPosition(Struct):
'''Position update event from brokerd. '''Position update event from brokerd.
''' '''
name: str = 'position'
broker: str broker: str
account: str account: str
symbol: str symbol: str
currency: str
size: float size: float
avg_price: float avg_price: float
currency: str = ''
name: str = 'position'


@ -18,54 +18,75 @@
Fake trading for forward testing. Fake trading for forward testing.
""" """
from collections import defaultdict
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from datetime import datetime from datetime import datetime
from operator import itemgetter from operator import itemgetter
import itertools
import time import time
from typing import Tuple, Optional, Callable from typing import (
Any,
Optional,
Callable,
)
import uuid import uuid
from bidict import bidict from bidict import bidict
import pendulum
import trio import trio
import tractor import tractor
from dataclasses import dataclass
from .. import data from .. import data
from ..data.types import Struct
from ..data._source import Symbol
from ..pp import (
Position,
Transaction,
open_trade_ledger,
open_pps,
)
from ..data._normalize import iterticks from ..data._normalize import iterticks
from ..data._source import unpack_fqsn from ..data._source import unpack_fqsn
from ..log import get_logger from ..log import get_logger
from ._messages import ( from ._messages import (
BrokerdCancel, BrokerdOrder, BrokerdOrderAck, BrokerdStatus, BrokerdCancel,
BrokerdFill, BrokerdPosition, BrokerdError BrokerdOrder,
BrokerdOrderAck,
BrokerdStatus,
BrokerdFill,
BrokerdPosition,
BrokerdError,
) )
from ..config import load
log = get_logger(__name__) log = get_logger(__name__)
@dataclass class PaperBoi(Struct):
class PaperBoi: '''
""" Emulates a broker order client providing approximately the same API
Emulates a broker order client providing the same API and and delivering an order-event response stream but with methods for
delivering an order-event response stream but with methods for
triggering desired events based on forward testing engine triggering desired events based on forward testing engine
requirements. requirements (eg open, closed, fill msgs).
""" '''
broker: str broker: str
ems_trades_stream: tractor.MsgStream ems_trades_stream: tractor.MsgStream
# map of paper "live" orders which will be used # map of paper "live" orders which will be used
# to simulate fills based on paper engine settings # to simulate fills based on paper engine settings
_buys: bidict _buys: defaultdict[str, bidict]
_sells: bidict _sells: defaultdict[str, bidict]
_reqids: bidict _reqids: bidict
_positions: dict[str, BrokerdPosition] _positions: dict[str, Position]
_trade_ledger: dict[str, Any]
_syms: dict[str, Symbol] = {}
# init edge case L1 spread # init edge case L1 spread
last_ask: Tuple[float, float] = (float('inf'), 0) # price, size last_ask: tuple[float, float] = (float('inf'), 0) # price, size
last_bid: Tuple[float, float] = (0, 0) last_bid: tuple[float, float] = (0, 0)
async def submit_limit( async def submit_limit(
self, self,
@ -75,27 +96,24 @@ class PaperBoi:
action: str, action: str,
size: float, size: float,
reqid: Optional[str], reqid: Optional[str],
) -> int: ) -> int:
"""Place an order and return integer request id provided by client. '''
Place an order and return integer request id provided by client.
"""
is_modify: bool = False
if reqid is None:
reqid = str(uuid.uuid4())
else:
# order is already existing, this is a modify
(oid, symbol, action, old_price) = self._reqids[reqid]
assert old_price != price
is_modify = True
# register order internally
self._reqids[reqid] = (oid, symbol, action, price)
'''
if action == 'alert': if action == 'alert':
# bypass all fill simulation # bypass all fill simulation
return reqid return reqid
entry = self._reqids.get(reqid)
if entry:
# order is already existing, this is a modify
(oid, symbol, action, old_price) = entry
else:
# register order internally
self._reqids[reqid] = (oid, symbol, action, price)
# TODO: net latency model # TODO: net latency model
# we checkpoint here quickly particularly # we checkpoint here quickly particularly
# for dark orders since we want the dark_executed # for dark orders since we want the dark_executed
@ -107,15 +125,18 @@ class PaperBoi:
size = -size size = -size
msg = BrokerdStatus( msg = BrokerdStatus(
status='submitted', status='open',
# account=f'paper_{self.broker}',
account='paper',
reqid=reqid, reqid=reqid,
broker=self.broker,
time_ns=time.time_ns(), time_ns=time.time_ns(),
filled=0.0, filled=0.0,
reason='paper_trigger', reason='paper_trigger',
remaining=size, remaining=size,
broker_details={'name': 'paperboi'},
) )
await self.ems_trades_stream.send(msg.dict()) await self.ems_trades_stream.send(msg)
# if we're already a clearing price simulate an immediate fill # if we're already a clearing price simulate an immediate fill
if ( if (
@ -123,28 +144,28 @@ class PaperBoi:
) or ( ) or (
action == 'sell' and (clear_price := self.last_bid[0]) >= price action == 'sell' and (clear_price := self.last_bid[0]) >= price
): ):
await self.fake_fill(symbol, clear_price, size, action, reqid, oid) await self.fake_fill(
symbol,
clear_price,
size,
action,
reqid,
oid,
)
# register this submissions as a paper live order
else: else:
# register this submissions as a paper live order # set the simulated order in the respective table for lookup
# and trigger by the simulated clearing task normally
# submit order to book simulation fill loop # running ``simulate_fills()``.
if action == 'buy': if action == 'buy':
orders = self._buys orders = self._buys
elif action == 'sell': elif action == 'sell':
orders = self._sells orders = self._sells
# set the simulated order in the respective table for lookup # {symbol -> bidict[oid, (<price data>)]}
# and trigger by the simulated clearing task normally orders[symbol][oid] = (price, size, reqid, action)
# running ``simulate_fills()``.
if is_modify:
# remove any existing order for the old price
orders[symbol].pop((oid, old_price))
# buys/sells: (symbol -> (price -> order))
orders.setdefault(symbol, {})[(oid, price)] = (size, reqid, action)
return reqid return reqid
@ -157,26 +178,26 @@ class PaperBoi:
oid, symbol, action, price = self._reqids[reqid] oid, symbol, action, price = self._reqids[reqid]
if action == 'buy': if action == 'buy':
self._buys[symbol].pop((oid, price)) self._buys[symbol].pop(oid, None)
elif action == 'sell': elif action == 'sell':
self._sells[symbol].pop((oid, price)) self._sells[symbol].pop(oid, None)
# TODO: net latency model # TODO: net latency model
await trio.sleep(0.05) await trio.sleep(0.05)
msg = BrokerdStatus( msg = BrokerdStatus(
status='cancelled', status='canceled',
oid=oid, account='paper',
reqid=reqid, reqid=reqid,
broker=self.broker,
time_ns=time.time_ns(), time_ns=time.time_ns(),
broker_details={'name': 'paperboi'},
) )
await self.ems_trades_stream.send(msg.dict()) await self.ems_trades_stream.send(msg)
async def fake_fill( async def fake_fill(
self, self,
symbol: str, fqsn: str,
price: float, price: float,
size: float, size: float,
action: str, # one of {'buy', 'sell'} action: str, # one of {'buy', 'sell'}
@ -190,21 +211,21 @@ class PaperBoi:
remaining: float = 0, remaining: float = 0,
) -> None: ) -> None:
"""Pretend to fill a broker order @ price and size. '''
Pretend to fill a broker order @ price and size.
""" '''
# TODO: net latency model # TODO: net latency model
await trio.sleep(0.05) await trio.sleep(0.05)
fill_time_ns = time.time_ns()
fill_time_s = time.time()
msg = BrokerdFill( fill_msg = BrokerdFill(
reqid=reqid, reqid=reqid,
time_ns=time.time_ns(), time_ns=fill_time_ns,
action=action, action=action,
size=size, size=size,
price=price, price=price,
broker_time=datetime.now().timestamp(), broker_time=datetime.now().timestamp(),
broker_details={ broker_details={
'paper_info': { 'paper_info': {
@ -214,79 +235,64 @@ class PaperBoi:
'name': self.broker + '_paper', 'name': self.broker + '_paper',
}, },
) )
await self.ems_trades_stream.send(msg.dict()) log.info(f'Fake filling order:\n{fill_msg}')
await self.ems_trades_stream.send(fill_msg)
if order_complete: if order_complete:
msg = BrokerdStatus( msg = BrokerdStatus(
reqid=reqid, reqid=reqid,
time_ns=time.time_ns(), time_ns=time.time_ns(),
# account=f'paper_{self.broker}',
status='filled', account='paper',
status='closed',
filled=size, filled=size,
remaining=0 if order_complete else remaining, remaining=0 if order_complete else remaining,
action=action,
size=size,
price=price,
broker_details={
'paper_info': {
'oid': oid,
},
'name': self.broker,
},
) )
await self.ems_trades_stream.send(msg.dict()) await self.ems_trades_stream.send(msg)
# lookup any existing position # lookup any existing position
token = f'{symbol}.{self.broker}' key = fqsn.rstrip(f'.{self.broker}')
pp_msg = self._positions.setdefault( t = Transaction(
token, fqsn=fqsn,
BrokerdPosition( sym=self._syms[fqsn],
tid=oid,
size=size,
price=price,
cost=0, # TODO: cost model
dt=pendulum.from_timestamp(fill_time_s),
bsuid=key,
)
with (
open_trade_ledger(self.broker, 'paper') as ledger,
open_pps(self.broker, 'paper', write_on_exit=True) as table
):
tx = t.to_dict()
tx.pop('sym')
ledger.update({oid: tx})
# Write to pps toml right now
table.update_from_trans({oid: t})
pp = table.pps[key]
pp_msg = BrokerdPosition(
broker=self.broker, broker=self.broker,
account='paper', account='paper',
symbol=symbol, symbol=fqsn,
# TODO: we need to look up the asset currency from # TODO: we need to look up the asset currency from
# broker info. i guess for crypto this can be # broker info. i guess for crypto this can be
# inferred from the pair? # inferred from the pair?
currency='', currency=key,
size=0.0, size=pp.size,
avg_price=0, avg_price=pp.ppu,
) )
)
# "avg position price" calcs await self.ems_trades_stream.send(pp_msg)
# TODO: eventually it'd be nice to have a small set of routines
# to do this stuff from a sequence of cleared orders to enable
# so called "contextual positions".
new_size = size + pp_msg.size
# old size minus the new size gives us size differential with
# +ve -> increase in pp size
# -ve -> decrease in pp size
size_diff = abs(new_size) - abs(pp_msg.size)
if new_size == 0:
pp_msg.avg_price = 0
elif size_diff > 0:
# only update the "average position price" when the position
# size increases not when it decreases (i.e. the position is
# being made smaller)
pp_msg.avg_price = (
abs(size) * price + pp_msg.avg_price * abs(pp_msg.size)
) / abs(new_size)
pp_msg.size = new_size
await self.ems_trades_stream.send(pp_msg.dict())
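NB: ``fqsn.rstrip(f'.{self.broker}')`` strips a trailing character *set* rather than a suffix (``str.removesuffix()`` would be the exact operation). As for the dropped inline "avg position price" math, the equivalent accounting now happens in ``PpTable.update_from_trans()``; it reduces to something like this weighted average that only re-prices when the position grows:
def update_pp(
    size: float,
    ppu: float,
    fill_size: float,
    fill_price: float,
) -> tuple[float, float]:
    new_size = size + fill_size
    if new_size == 0:
        return 0.0, 0.0
    # only re-weight the per-unit entry price on size increases
    if abs(new_size) > abs(size):
        ppu = (
            abs(fill_size) * fill_price + ppu * abs(size)
        ) / abs(new_size)
    return new_size, ppu
assert update_pp(0, 0, 2, 100.0) == (2, 100.0)
assert update_pp(2, 100.0, 2, 110.0) == (4, 105.0)
assert update_pp(4, 105.0, -2, 120.0) == (2, 105.0)  # exits don't re-price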
async def simulate_fills( async def simulate_fills(
quote_stream: 'tractor.ReceiveStream', # noqa quote_stream: tractor.MsgStream, # noqa
client: PaperBoi, client: PaperBoi,
) -> None: ) -> None:
# TODO: more machinery to better simulate real-world market things: # TODO: more machinery to better simulate real-world market things:
@ -306,61 +312,116 @@ async def simulate_fills(
# this stream may eventually contain multiple symbols # this stream may eventually contain multiple symbols
async for quotes in quote_stream: async for quotes in quote_stream:
for sym, quote in quotes.items(): for sym, quote in quotes.items():
for tick in iterticks( for tick in iterticks(
quote, quote,
# dark order price filter(s) # dark order price filter(s)
types=('ask', 'bid', 'trade', 'last') types=('ask', 'bid', 'trade', 'last')
): ):
# print(tick) tick_price = tick['price']
tick_price = tick.get('price')
ttype = tick['type']
if ttype in ('ask',): buys: bidict[str, tuple] = client._buys[sym]
iter_buys = reversed(sorted(
buys.values(),
key=itemgetter(0),
))
client.last_ask = ( def buy_on_ask(our_price):
tick_price, return tick_price <= our_price
tick.get('size', client.last_ask[1]),
)
orders = client._buys.get(sym, {}) sells: bidict[str, tuple] = client._sells[sym]
iter_sells = sorted(
sells.values(),
key=itemgetter(0)
)
book_sequence = reversed( def sell_on_bid(our_price):
sorted(orders.keys(), key=itemgetter(1))) return tick_price >= our_price
def pred(our_price): match tick:
return tick_price < our_price
elif ttype in ('bid',): # on an ask queue tick, only clear buy entries
case {
'price': tick_price,
'type': 'ask',
}:
client.last_ask = (
tick_price,
tick.get('size', client.last_ask[1]),
)
client.last_bid = ( iter_entries = zip(
tick_price, iter_buys,
tick.get('size', client.last_bid[1]), itertools.repeat(buy_on_ask)
) )
orders = client._sells.get(sym, {}) # on a bid queue tick, only clear sell entries
book_sequence = sorted(orders.keys(), key=itemgetter(1)) case {
'price': tick_price,
'type': 'bid',
}:
client.last_bid = (
tick_price,
tick.get('size', client.last_bid[1]),
)
def pred(our_price): iter_entries = zip(
return tick_price > our_price iter_sells,
itertools.repeat(sell_on_bid)
)
elif ttype in ('trade', 'last'): # TODO: fix this block, though it definitely
# TODO: simulate actual book queues and our orders # costs a lot more CPU-wise
# place in it, might require full L2 data? # - doesn't seem like clears are happening still on
continue # "resting" limit orders?
case {
'price': tick_price,
'type': ('trade' | 'last'),
}:
# in the clearing price / last price case we
# want to iterate both sides of our book for
# clears since we don't know which direction the
# price is going to move (especially with HFT)
# and thus we simply interleave both sides (buys
# and sells) until one side clears and then
# break until the next tick?
def interleave():
for pair in zip(
iter_buys,
iter_sells,
):
for order_info, pred in zip(
pair,
itertools.cycle([buy_on_ask, sell_on_bid]),
):
yield order_info, pred
# iterate book prices descending iter_entries = interleave()
for oid, our_price in book_sequence:
if pred(our_price):
# retreive order info # NOTE: all other (non-clearable) tick event types
(size, reqid, action) = orders.pop((oid, our_price)) # - we don't want to sping the simulated clear loop
# below unecessarily and further don't want to pop
# simulated live orders prematurely.
case _:
continue
# iterate all potentially clearable book prices
# in FIFO order per side.
for order_info, pred in iter_entries:
(our_price, size, reqid, action) = order_info
# print(order_info)
clearable = pred(our_price)
if clearable:
# pop and retrieve order info
oid = {
'buy': buys,
'sell': sells
}[action].inverse.pop(order_info)
# clearing price would have filled entirely # clearing price would have filled entirely
await client.fake_fill( await client.fake_fill(
symbol=sym, fqsn=sym,
# todo slippage to determine fill price # todo slippage to determine fill price
price=tick_price, price=tick_price,
size=size, size=size,
@ -368,9 +429,6 @@ async def simulate_fills(
reqid=reqid, reqid=reqid,
oid=oid, oid=oid,
) )
else:
# prices are iterated in sorted order so we're done
break
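The clearing rules in isolation: a resting buy clears when the ask trades at-or-below its limit price, a resting sell when the bid trades at-or-above (numbers made up):
def buy_on_ask(tick_price: float, our_price: float) -> bool:
    return tick_price <= our_price
def sell_on_bid(tick_price: float, our_price: float) -> bool:
    return tick_price >= our_price
assert buy_on_ask(99.5, our_price=100.0)       # ask fell through our bid
assert not sell_on_bid(99.5, our_price=100.0)  # bid never reached our offer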
async def handle_order_requests( async def handle_order_requests(
@ -380,66 +438,81 @@ async def handle_order_requests(
) -> None: ) -> None:
# order_request: dict request_msg: dict
async for request_msg in ems_order_stream: async for request_msg in ems_order_stream:
match request_msg:
case {'action': ('buy' | 'sell')}:
order = BrokerdOrder(**request_msg)
account = order.account
action = request_msg['action'] # error on bad inputs
reason = None
if account != 'paper':
reason = f'No account found:`{account}` (paper only)?'
if action in {'buy', 'sell'}: elif order.size == 0:
reason = 'Invalid size: 0'
account = request_msg['account'] if reason:
if account != 'paper': log.error(reason)
log.error( await ems_order_stream.send(BrokerdError(
'This is a paper account, only a `paper` selection is valid' oid=order.oid,
symbol=order.symbol,
reason=reason,
))
continue
reqid = order.reqid or str(uuid.uuid4())
# deliver ack that order has been submitted to broker routing
await ems_order_stream.send(
BrokerdOrderAck(
oid=order.oid,
reqid=reqid,
)
) )
await ems_order_stream.send(BrokerdError(
oid=request_msg['oid'],
symbol=request_msg['symbol'],
reason=f'Paper only. No account found: `{account}` ?',
).dict())
continue
# validate # call our client api to submit the order
order = BrokerdOrder(**request_msg) reqid = await client.submit_limit(
# call our client api to submit the order
reqid = await client.submit_limit(
oid=order.oid,
symbol=order.symbol,
price=order.price,
action=order.action,
size=order.size,
# XXX: by default 0 tells ``ib_insync`` methods that
# there is no existing order so ask the client to create
# a new one (which it seems to do by allocating an int
# counter - collision prone..)
reqid=order.reqid,
)
# deliver ack that order has been submitted to broker routing
await ems_order_stream.send(
BrokerdOrderAck(
# ems order request id
oid=order.oid, oid=order.oid,
symbol=f'{order.symbol}.{client.broker}',
# broker specific request id price=order.price,
action=order.action,
size=order.size,
# XXX: by default 0 tells ``ib_insync`` methods that
# there is no existing order so ask the client to create
# a new one (which it seems to do by allocating an int
# counter - collision prone..)
reqid=reqid, reqid=reqid,
)
log.info(f'Submitted paper LIMIT {reqid}:\n{order}')
).dict() case {'action': 'cancel'}:
) msg = BrokerdCancel(**request_msg)
await client.submit_cancel(
reqid=msg.reqid
)
elif action == 'cancel': case _:
msg = BrokerdCancel(**request_msg) log.error(f'Unknown order command: {request_msg}')
await client.submit_cancel(
reqid=msg.reqid
)
else: _reqids: bidict[str, tuple] = {}
log.error(f'Unknown order command: {request_msg}') _buys: defaultdict[
str, # symbol
bidict[
str, # oid
tuple[float, float, str, str], # order info
]
] = defaultdict(bidict)
_sells: defaultdict[
str, # symbol
bidict[
str, # oid
tuple[float, float, str, str], # order info
]
] = defaultdict(bidict)
_positions: dict[str, Position] = {}
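The shape of these order tables (entries made up): a two-level {symbol -> {oid -> order-info}} mapping where the inner ``bidict`` lets the clearing loop pop by *value* via ``.inverse``:
from collections import defaultdict
from bidict import bidict
_buys: defaultdict[str, bidict] = defaultdict(bidict)
info = (10_000.0, 1.0, 'req-1', 'buy')  # (price, size, reqid, action)
_buys['xbtusd.kraken']['oid-1'] = info
# reverse lookup: recover the oid from a cleared order's info tuple
oid = _buys['xbtusd.kraken'].inverse.pop(info)
assert oid == 'oid-1'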
@tractor.context @tractor.context
@ -451,42 +524,68 @@ async def trades_dialogue(
loglevel: str = None, loglevel: str = None,
) -> None: ) -> None:
tractor.log.get_console_log(loglevel) tractor.log.get_console_log(loglevel)
async with ( async with (
data.open_feed( data.open_feed(
[fqsn], [fqsn],
loglevel=loglevel, loglevel=loglevel,
) as feed, ) as feed,
): ):
# TODO: load paper positions per broker from .toml config file
# and pass as symbol to position data mapping: ``dict[str, dict]`` with open_pps(broker, 'paper') as table:
# await ctx.started(all_positions) # save pps in local state
await ctx.started(({}, {'paper',})) _positions.update(table.pps)
pp_msgs: list[BrokerdPosition] = []
pos: Position
token: str # f'{symbol}.{self.broker}'
for token, pos in _positions.items():
pp_msgs.append(BrokerdPosition(
broker=broker,
account='paper',
symbol=pos.symbol.front_fqsn(),
size=pos.size,
avg_price=pos.ppu,
))
await ctx.started((
pp_msgs,
['paper'],
))
async with ( async with (
ctx.open_stream() as ems_stream, ctx.open_stream() as ems_stream,
trio.open_nursery() as n, trio.open_nursery() as n,
): ):
client = PaperBoi( client = PaperBoi(
broker, broker,
ems_stream, ems_stream,
_buys={}, _buys=_buys,
_sells={}, _sells=_sells,
_reqids={}, _reqids=_reqids,
# TODO: load paper positions from ``positions.toml`` _positions=_positions,
_positions={},
# TODO: load postions from ledger file
_trade_ledger={},
_syms={
fqsn: flume.symbol
for fqsn, flume in feed.flumes.items()
}
) )
n.start_soon(handle_order_requests, client, ems_stream) n.start_soon(
handle_order_requests,
client,
ems_stream,
)
# paper engine simulator clearing task # paper engine simulator clearing task
await simulate_fills(feed.stream, client) await simulate_fills(feed.streams[broker], client)
@asynccontextmanager @asynccontextmanager
@ -511,17 +610,17 @@ async def open_paperboi(
# (we likely don't need more then one proc for basic # (we likely don't need more then one proc for basic
# simulated order clearing) # simulated order clearing)
if portal is None: if portal is None:
log.info('Starting new paper-engine actor')
portal = await tn.start_actor( portal = await tn.start_actor(
service_name, service_name,
enable_modules=[__name__] enable_modules=[__name__]
) )
async with portal.open_context( async with portal.open_context(
trades_dialogue, trades_dialogue,
broker=broker, broker=broker,
fqsn=fqsn, fqsn=fqsn,
loglevel=loglevel, loglevel=loglevel,
) as (ctx, first): ) as (ctx, first):
yield ctx, first yield ctx, first


@ -20,6 +20,7 @@ CLI commons.
''' '''
import os import os
from pprint import pformat from pprint import pformat
from functools import partial
import click import click
import trio import trio
@ -27,29 +28,46 @@ import tractor
from ..log import get_console_log, get_logger, colorize_json from ..log import get_console_log, get_logger, colorize_json
from ..brokers import get_brokermod from ..brokers import get_brokermod
from .._daemon import _tractor_kwargs from .._daemon import (
_default_registry_host,
_default_registry_port,
)
from .. import config from .. import config
log = get_logger('cli') log = get_logger('cli')
DEFAULT_BROKER = 'questrade'
@click.command() @click.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level') @click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging') @click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--pdb', is_flag=True, help='Enable tractor debug mode') @click.option('--pdb', is_flag=True, help='Enable tractor debug mode')
@click.option('--host', '-h', default='127.0.0.1', help='Host address to bind') @click.option('--host', '-h', default=None, help='Host addr to bind')
@click.option('--port', '-p', default=None, help='Port number to bind')
@click.option( @click.option(
'--tsdb', '--tsdb',
is_flag=True, is_flag=True,
help='Enable local ``marketstore`` instance' help='Enable local ``marketstore`` instance'
) )
def pikerd(loglevel, host, tl, pdb, tsdb): @click.option(
'--es',
is_flag=True,
help='Enable local ``elasticsearch`` instance'
)
def pikerd(
loglevel: str,
host: str,
port: int,
tl: bool,
pdb: bool,
tsdb: bool,
es: bool,
):
''' '''
Spawn the piker broker-daemon. Spawn the piker broker-daemon.
''' '''
from .._daemon import open_pikerd from .._daemon import open_pikerd
log = get_console_log(loglevel) log = get_console_log(loglevel)
@ -62,32 +80,25 @@ def pikerd(loglevel, host, tl, pdb, tsdb):
"\n" "\n"
)) ))
async def main(): reg_addr: None | tuple[str, int] = None
if host or port:
reg_addr = (
host or _default_registry_host,
int(port) or _default_registry_port,
)
async def main():
async with ( async with (
open_pikerd( open_pikerd(
tsdb=tsdb,
es=es,
loglevel=loglevel, loglevel=loglevel,
debug_mode=pdb, debug_mode=pdb,
registry_addr=reg_addr,
), # normally delivers a ``Services`` handle ), # normally delivers a ``Services`` handle
trio.open_nursery() as n, trio.open_nursery() as n,
): ):
if tsdb:
from piker.data._ahab import start_ahab
from piker.data.marketstore import start_marketstore
log.info('Spawning `marketstore` supervisor')
ctn_ready, config, (cid, pid) = await n.start(
start_ahab,
'marketstored',
start_marketstore,
)
log.info(
f'`marketstore` up!\n'
f'`marketstored` pid: {pid}\n'
f'docker container id: {cid}\n'
f'config: {pformat(config)}'
)
await trio.sleep_forever() await trio.sleep_forever()
@ -97,25 +108,46 @@ def pikerd(loglevel, host, tl, pdb, tsdb):
@click.group(context_settings=config._context_defaults) @click.group(context_settings=config._context_defaults)
@click.option( @click.option(
'--brokers', '-b', '--brokers', '-b',
default=[DEFAULT_BROKER], default=None,
multiple=True, multiple=True,
help='Broker backend to use' help='Broker backend to use'
) )
@click.option('--loglevel', '-l', default='warning', help='Logging level') @click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option('--tl', is_flag=True, help='Enable tractor logging') @click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.option('--configdir', '-c', help='Configuration directory') @click.option('--configdir', '-c', help='Configuration directory')
@click.option('--host', '-h', default=None, help='Host addr to bind')
@click.option('--port', '-p', default=None, help='Port number to bind')
@click.pass_context @click.pass_context
def cli(ctx, brokers, loglevel, tl, configdir): def cli(
ctx: click.Context,
brokers: list[str],
loglevel: str,
tl: bool,
configdir: str,
host: str,
port: int,
) -> None:
if configdir is not None: if configdir is not None:
assert os.path.isdir(configdir), f"`{configdir}` is not a valid path" assert os.path.isdir(configdir), f"`{configdir}` is not a valid path"
config._override_config_dir(configdir) config._override_config_dir(configdir)
ctx.ensure_object(dict) ctx.ensure_object(dict)
if len(brokers) == 1: if not brokers:
brokermods = [get_brokermod(brokers[0])] # (try to) load all (supposedly) supported data/broker backends
else: from piker.brokers import __brokers__
brokermods = [get_brokermod(broker) for broker in brokers] brokers = __brokers__
brokermods = [get_brokermod(broker) for broker in brokers]
assert brokermods
reg_addr: None | tuple[str, int] = None
if host or port:
reg_addr = (
host or _default_registry_host,
int(port) or _default_registry_port,
)
ctx.obj.update({ ctx.obj.update({
'brokers': brokers, 'brokers': brokers,
@ -125,6 +157,7 @@ def cli(ctx, brokers, loglevel, tl, configdir):
'log': get_console_log(loglevel), 'log': get_console_log(loglevel),
'confdir': config._config_dir, 'confdir': config._config_dir,
'wl_path': config._watchlists_data_path, 'wl_path': config._watchlists_data_path,
'registry_addr': reg_addr,
}) })
# allow enabling same loglevel in ``tractor`` machinery # allow enabling same loglevel in ``tractor`` machinery
@ -134,33 +167,45 @@ def cli(ctx, brokers, loglevel, tl, configdir):
@cli.command() @cli.command()
@click.option('--tl', is_flag=True, help='Enable tractor logging') @click.option('--tl', is_flag=True, help='Enable tractor logging')
@click.argument('names', nargs=-1, required=False) @click.argument('ports', nargs=-1, required=False)
@click.pass_obj @click.pass_obj
def services(config, tl, names): def services(config, tl, ports):
from .._daemon import (
open_piker_runtime,
_default_registry_port,
_default_registry_host,
)
host = _default_registry_host
if not ports:
ports = [_default_registry_port]
async def list_services(): async def list_services():
nonlocal host
async with tractor.get_arbiter( async with (
*_tractor_kwargs['arbiter_addr'] open_piker_runtime(
) as portal: name='service_query',
loglevel=config['loglevel'] if tl else None,
),
tractor.get_arbiter(
host=host,
port=ports[0]
) as portal
):
registry = await portal.run_from_ns('self', 'get_registry') registry = await portal.run_from_ns('self', 'get_registry')
json_d = {} json_d = {}
for key, socket in registry.items(): for key, socket in registry.items():
# name, uuid = uid
host, port = socket host, port = socket
json_d[key] = f'{host}:{port}' json_d[key] = f'{host}:{port}'
click.echo(f"{colorize_json(json_d)}") click.echo(f"{colorize_json(json_d)}")
tractor.run( trio.run(list_services)
list_services,
name='service_query',
loglevel=config['loglevel'] if tl else None,
arbiter_addr=_tractor_kwargs['arbiter_addr'],
)
def _load_clis() -> None: def _load_clis() -> None:
from ..data import marketstore # noqa from ..data import marketstore # noqa
from ..data import elastic
from ..data import cli # noqa from ..data import cli # noqa
from ..brokers import cli # noqa from ..brokers import cli # noqa
from ..ui import cli # noqa from ..ui import cli # noqa


@ -21,13 +21,14 @@ Broker configuration mgmt.
import platform import platform
import sys import sys
import os import os
from os import path
from os.path import dirname from os.path import dirname
import shutil import shutil
from typing import Optional from typing import Optional
from pathlib import Path
from bidict import bidict from bidict import bidict
import toml import toml
from piker.testing import TEST_CONFIG_DIR_PATH
from .log import get_logger from .log import get_logger
log = get_logger('broker-config') log = get_logger('broker-config')
@ -74,6 +75,13 @@ def get_app_dir(app_name, roaming=True, force_posix=False):
def _posixify(name): def _posixify(name):
return "-".join(name.split()).lower() return "-".join(name.split()).lower()
# TODO: This is a hacky way to a) determine we're testing
# and b) creating a test dir. We should aim to set a variable
# within the tractor runtimes and store testing config data
# outside of the users filesystem
if "pytest" in sys.modules:
app_name = os.path.join(app_name, TEST_CONFIG_DIR_PATH)
# if WIN: # if WIN:
if platform.system() == 'Windows': if platform.system() == 'Windows':
key = "APPDATA" if roaming else "LOCALAPPDATA" key = "APPDATA" if roaming else "LOCALAPPDATA"
@ -111,8 +119,10 @@ if _parent_user:
_conf_names: set[str] = { _conf_names: set[str] = {
'brokers', 'brokers',
'pps',
'trades', 'trades',
'watchlists', 'watchlists',
'paper_trades'
} }
_watchlists_data_path = os.path.join(_config_dir, 'watchlists.json') _watchlists_data_path = os.path.join(_config_dir, 'watchlists.json')
@ -147,19 +157,21 @@ def get_conf_path(
conf_name: str = 'brokers', conf_name: str = 'brokers',
) -> str: ) -> str:
"""Return the default config path normally under '''
``~/.config/piker`` on linux. Return the top-level default config path normally under
``~/.config/piker`` on linux for a given ``conf_name``, the config
name.
Contains files such as: Contains files such as:
- brokers.toml - brokers.toml
- pp.toml
- watchlists.toml - watchlists.toml
- trades.toml
# maybe coming soon ;) # maybe coming soon ;)
- signals.toml - signals.toml
- strats.toml - strats.toml
""" '''
assert conf_name in _conf_names assert conf_name in _conf_names
fn = _conf_fn_w_ext(conf_name) fn = _conf_fn_w_ext(conf_name)
return os.path.join( return os.path.join(
@ -173,7 +185,7 @@ def repodir():
Return the abspath to the repo directory. Return the abspath to the repo directory.
''' '''
dirpath = os.path.abspath( dirpath = path.abspath(
# we're 3 levels down in **this** module file # we're 3 levels down in **this** module file
dirname(dirname(os.path.realpath(__file__))) dirname(dirname(os.path.realpath(__file__)))
) )
@ -182,7 +194,9 @@ def repodir():
def load( def load(
conf_name: str = 'brokers', conf_name: str = 'brokers',
path: str = None path: str = None,
**tomlkws,
) -> (dict, str): ) -> (dict, str):
''' '''
@ -190,6 +204,10 @@ def load(
''' '''
path = path or get_conf_path(conf_name) path = path or get_conf_path(conf_name)
if not os.path.isdir(_config_dir):
Path(_config_dir).mkdir(parents=True, exist_ok=True)
if not os.path.isfile(path): if not os.path.isfile(path):
fn = _conf_fn_w_ext(conf_name) fn = _conf_fn_w_ext(conf_name)
@ -202,8 +220,15 @@ def load(
# if one exists. # if one exists.
if os.path.isfile(template): if os.path.isfile(template):
shutil.copyfile(template, path) shutil.copyfile(template, path)
else:
# create an empty file
with open(path, 'x'):
pass
else:
with open(path, 'r'):
pass # touch it
config = toml.load(path) config = toml.load(path, **tomlkws)
log.debug(f"Read config file {path}") log.debug(f"Read config file {path}")
return config, path return config, path
@ -212,6 +237,8 @@ def write(
config: dict, # toml config as dict config: dict, # toml config as dict
name: str = 'brokers', name: str = 'brokers',
path: str = None, path: str = None,
fail_empty: bool = True,
**toml_kwargs,
) -> None: ) -> None:
''' '''
@ -226,7 +253,7 @@ def write(
log.debug(f"Creating config dir {_config_dir}") log.debug(f"Creating config dir {_config_dir}")
os.makedirs(dirname) os.makedirs(dirname)
if not config: if not config and fail_empty:
raise ValueError( raise ValueError(
"Watch out you're trying to write a blank config!") "Watch out you're trying to write a blank config!")
@ -235,11 +262,14 @@ def write(
f"{path}" f"{path}"
) )
with open(path, 'w') as cf: with open(path, 'w') as cf:
return toml.dump(config, cf) return toml.dump(
config,
cf,
**toml_kwargs,
)
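A sketch of the new pass-through kwargs (import path assumed): ``fail_empty=False`` permits flushing an empty ``pps.toml`` on teardown while ``**toml_kwargs`` forwards encoder options straight to ``toml.dump()``:
import toml
from piker import config
conf, path = config.load('pps')
config.write(
    conf,
    name='pps',
    fail_empty=False,            # empty tables are now legal
    encoder=toml.TomlEncoder(),  # forwarded to ``toml.dump()``
)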
def load_accounts( def load_accounts(
providers: Optional[list[str]] = None providers: Optional[list[str]] = None
) -> bidict[str, Optional[str]]: ) -> bidict[str, Optional[str]]:


@ -22,6 +22,12 @@ and storing data from your brokers as well as
sharing live streams over a network. sharing live streams over a network.
""" """
import tractor
import trio
from ..log import (
get_console_log,
)
from ._normalize import iterticks from ._normalize import iterticks
from ._sharedmem import ( from ._sharedmem import (
maybe_open_shm_array, maybe_open_shm_array,
@ -32,7 +38,6 @@ from ._sharedmem import (
) )
from .feed import ( from .feed import (
open_feed, open_feed,
_setup_persistent_brokerd,
) )
@ -44,5 +49,40 @@ __all__ = [
'attach_shm_array', 'attach_shm_array',
'open_shm_array', 'open_shm_array',
'get_shm_token', 'get_shm_token',
'_setup_persistent_brokerd',
] ]
@tractor.context
async def _setup_persistent_brokerd(
    ctx: tractor.Context,
    brokername: str,

) -> None:
    '''
    Allocate an actor-wide service nursery in ``brokerd``
    such that feeds can be run in the background persistently by
    the broker backend as needed.

    '''
    get_console_log(tractor.current_actor().loglevel)

    from .feed import (
        _bus,
        get_feed_bus,
    )
    global _bus
    assert not _bus

    async with trio.open_nursery() as service_nursery:
        # assign a nursery to the feeds bus for spawning
        # background tasks from clients
        get_feed_bus(brokername, service_nursery)

        # unblock caller
        await ctx.started()

        # we pin this task to keep the feeds manager active until the
        # parent actor decides to tear it down
        await trio.sleep_forever()
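
For reference, a hedged caller-side sketch (not part of this diff; the actor and module names here are illustrative) of how a parent actor might spawn a ``brokerd`` and hold this context open via ``tractor``:

    import tractor

    async def spawn_brokerd_sketch(brokername: str):
        async with tractor.open_nursery() as an:
            portal = await an.start_actor(
                f'brokerd.{brokername}',
                enable_modules=['piker.data'],
            )
            async with portal.open_context(
                _setup_persistent_brokerd,
                brokername=brokername,
            ) as (ctx, first):
                # the context (and thus the feed bus nursery)
                # stays up until the parent cancels it
                assert first is None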
@ -37,8 +37,13 @@ from docker.models.containers import Container as DockerContainer
from docker.errors import (
    DockerException,
    APIError,
    # ContainerError,
)
import requests
from requests.exceptions import (
    ConnectionError,
    ReadTimeout,
)

from ..log import get_logger, get_console_log
from .. import config
@ -50,8 +55,8 @@ class DockerNotStarted(Exception):
    'Prolly you dint start da daemon bruh'


class ApplicationLogError(Exception):
    'App in container reported an error in logs'


@acm
@ -96,9 +101,9 @@ async def open_docker(
        # not perms?
        raise

    # finally:
    #     if client:
    #         client.close()


class Container:
@ -119,7 +124,9 @@ class Container:
    async def process_logs_until(
        self,
        # this is a predicate func for matching log msgs emitted by the
        # underlying containerized app
        patt_matcher: Callable[[str], bool],
        bp_on_msg: bool = False,

    ) -> bool:
        '''
@ -130,7 +137,14 @@ class Container:
        seen_so_far = self.seen_so_far

        while True:
            try:
                logs = self.cntr.logs()
            except (
                docker.errors.NotFound,
                docker.errors.APIError
            ):
                return False

            entries = logs.decode().split('\n')
            for entry in entries:
for entry in entries: for entry in entries:
@ -138,31 +152,38 @@ class Container:
                if not entry:
                    continue

                entry = entry.strip()
                try:
                    record = json.loads(entry)

                    if 'msg' in record:
                        msg = record['msg']
                    elif 'message' in record:
                        msg = record['message']
                    else:
                        raise KeyError(f'Unexpected log format\n{record}')

                    level = record['level']

                except json.JSONDecodeError:
                    msg = entry
                    level = 'error'

                if msg and entry not in seen_so_far:
                    seen_so_far.add(entry)
                    if bp_on_msg:
                        await tractor.breakpoint()

                    getattr(log, level.lower(), log.error)(f'{msg}')

                    if level == 'fatal':
                        raise ApplicationLogError(msg)

                if await patt_matcher(msg):
                    return True

            # do a checkpoint so we don't block if cancelled B)
            await trio.sleep(0.1)

        return False
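
Note the new ``patt_matcher`` is awaited above, so callers now pass an async predicate instead of a plain substring; a minimal sketch (the matched log text here is made up, real matchers are supplied by each container endpoint):

    async def is_started(msg: str) -> bool:
        # hypothetical startup log line
        return 'server is ready' in msg

    # found = await cntr.process_logs_until(is_started)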
@ -185,12 +206,29 @@ class Container:
        if 'is not running' in err.explanation:
            return False

    def hard_kill(self, start: float) -> None:
        delay = time.time() - start
        # get out the big guns, bc apparently marketstore
        # doesn't actually know how to terminate gracefully
        # :eyeroll:...
        log.error(
            f'SIGKILL-ing: {self.cntr.id} after {delay}s\n'
        )
        self.try_signal('SIGKILL')
        self.cntr.wait(
            timeout=3,
            condition='not-running',
        )
    async def cancel(
        self,
        stop_msg: str,
        hard_kill: bool = False,

    ) -> None:

        cid = self.cntr.id

        # first try a graceful cancel
        log.cancel(
            f'SIGINT cancelling container: {cid}\n'
@ -199,15 +237,25 @@ class Container:
        self.try_signal('SIGINT')

        start = time.time()
        for _ in range(6):

            with trio.move_on_after(0.5) as cs:
                log.cancel('polling for CNTR logs...')

                try:
                    await self.process_logs_until(stop_msg)
                except ApplicationLogError:
                    hard_kill = True
                else:
                    # if we aren't cancelled on above checkpoint then we
                    # assume we read the expected stop msg and
                    # terminated.
                    break

            if cs.cancelled_caught:
                # on timeout just try a hard kill after
                # a quick container sync-wait.
                hard_kill = True
            try:
                log.info(f'Polling for container shutdown:\n{cid}')
@ -218,6 +266,7 @@ class Container:
                    condition='not-running',
                )

                # graceful exit if we didn't time out
                break

            except (
@ -229,31 +278,30 @@ class Container:
            except (
                docker.errors.APIError,
                ConnectionError,
                requests.exceptions.ConnectionError,
                trio.Cancelled,
            ):
                log.exception('Docker connection failure')
                self.hard_kill(start)
                raise

            except trio.Cancelled:
                log.exception('trio cancelled...')
                self.hard_kill(start)

        else:
            hard_kill = True

        if hard_kill:
            self.hard_kill(start)
        else:
            log.cancel(f'Container stopped: {cid}')
@tractor.context
async def open_ahabd(
    ctx: tractor.Context,
    endpoint: str,  # ns-pointer str-msg-type
    start_timeout: float = 1.0,

    **kwargs,
@ -269,17 +317,20 @@ async def open_ahabd(
        (
            dcntr,
            cntr_config,
            start_lambda,
            stop_lambda,
        ) = ep_func(client)
        cntr = Container(dcntr)

        with trio.move_on_after(start_timeout):
            found = await cntr.process_logs_until(start_lambda)

        if not found and dcntr not in client.containers.list():
            for entry in cntr.seen_so_far:
                log.info(entry)

            raise RuntimeError(
                f'Failed to start {dcntr.id} check logs deats'
            )

        await ctx.started((
@ -289,20 +340,19 @@ async def open_ahabd(
        ))

        try:
            # TODO: we might eventually want a proxy-style msg-prot here
            # to allow remote control of containers without needing
            # callers to have root perms?
            await trio.sleep_forever()

        finally:
            await cntr.cancel(stop_lambda)
async def start_ahab(
    service_name: str,
    endpoint: Callable[docker.DockerClient, DockerContainer],
    start_timeout: float = 1.0,
    task_status: TaskStatus[
        tuple[
            trio.Event,
@ -350,6 +400,7 @@ async def start_ahab(
        async with portal.open_context(
            open_ahabd,
            endpoint=str(NamespacePath.from_ref(endpoint)),
            start_timeout=start_timeout
        ) as (ctx, first):

            cid, pid, cntr_config = first
@ -0,0 +1,827 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Pre-(path)-graphics formatted x/y nd/1d rendering subsystem.
"""
from __future__ import annotations
from typing import (
Optional,
TYPE_CHECKING,
)
import msgspec
from msgspec import field
import numpy as np
from numpy.lib import recfunctions as rfn
from ._sharedmem import (
ShmArray,
)
from ._pathops import (
path_arrays_from_ohlc,
)
if TYPE_CHECKING:
from ._dataviz import (
Viz,
)
from .._profile import Profiler
class IncrementalFormatter(msgspec.Struct):
'''
Incrementally updating, pre-path-graphics tracking, formatter.
Allows tracking source data state in an updateable pre-graphics
``np.ndarray`` format (in local process memory) as well as
incrementally rendering from that format **to** 1d x/y for path
generation using ``pg.functions.arrayToQPath()``.
'''
shm: ShmArray
viz: Viz
# the value to be multiplied by any index into the x/y_1d arrays
# given the input index is based on the original source data array.
flat_index_ratio: float = 1
@property
def index_field(self) -> 'str':
'''
Value (``str``) used to look up the "index series" from the
underlying source ``numpy`` struct-array; delegate directly to
the managing ``Viz``.
'''
return self.viz.index_field
# Incrementally updated xy ndarray formatted data, a pre-1d
# format which is updated and cached independently of the final
# pre-graphics-path 1d format.
x_nd: Optional[np.ndarray] = None
y_nd: Optional[np.ndarray] = None
@property
def xy_nd(self) -> tuple[np.ndarray, np.ndarray]:
return (
self.x_nd[self.xy_slice],
self.y_nd[self.xy_slice],
)
@property
def xy_slice(self) -> slice:
return slice(
self.xy_nd_start,
self.xy_nd_stop,
)
# indexes which slice into the above arrays (which are allocated
# based on source data shm input size) and allow retrieving
# incrementally updated data.
xy_nd_start: int | None = None
xy_nd_stop: int | None = None
# TODO: eventually incrementally update 1d-pre-graphics path data?
x_1d: np.ndarray | None = None
y_1d: np.ndarray | None = None
# incremental view-change state(s) tracking
_last_vr: tuple[float, float] | None = None
_last_ivdr: tuple[float, float] | None = None
@property
def index_step_size(self) -> float:
'''
Readonly value computed on first ``.diff()`` call.
'''
return self.viz.index_step()
def diff(
self,
new_read: tuple[np.ndarray],
) -> tuple[
np.ndarray,
np.ndarray,
]:
# TODO:
# - can the renderer just call ``Viz.read()`` directly? unpack
# latest source data read
# - eventually maybe we can implement some kind of
# transform on the ``QPainterPath`` that will more or less
# detect the diff in "elements" terms? update diff state since
# we've now rendered paths.
(
xfirst,
xlast,
array,
ivl,
ivr,
in_view,
) = new_read
index = array['index']
# if the first index in the read array is 0 then
# it means the source buffer has been completely backfilled to
# available space.
src_start = index[0]
src_stop = index[-1] + 1
# these are the "formatted output data" indices
# for the pre-graphics arrays.
nd_start = self.xy_nd_start
nd_stop = self.xy_nd_stop
if (
nd_start is None
):
assert nd_stop is None
# setup to do a prepend of all existing src history
nd_start = self.xy_nd_start = src_stop
# set us in a zero-to-append state
nd_stop = self.xy_nd_stop = src_stop
# compute the length diffs between the first/last index entry in
# the input data and the last indexes we have on record from the
# last time we updated the curve index.
prepend_length = int(nd_start - src_start)
append_length = int(src_stop - nd_stop)
# blah blah blah
# do diffing for prepend, append and last entry
return (
slice(src_start, nd_start),
prepend_length,
append_length,
slice(nd_stop, src_stop),
)
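# Illustrative example (made-up numbers, not from the diff): with a
# tracked nd-range of xy_nd_start=100, xy_nd_stop=200 and a new read
# spanning source indices 90..204 (src_start=90, src_stop=205), the
# diff yields prepend_length = 100 - 90 = 10,
# append_length = 205 - 200 = 5 and the update slices
# slice(90, 100) and slice(200, 205).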
def _track_inview_range(
self,
view_range: tuple[int, int],
) -> bool:
# if a view range is passed, plan to draw the
# source output that's "in view" of the chart.
vl, vr = view_range
zoom_or_append = False
last_vr = self._last_vr
# incremental in-view data update.
if last_vr:
lvl, lvr = last_vr # relative slice indices
# TODO: detecting more specifically the interaction changes
# last_ivr = self._last_ivdr or (vl, vr)
# al, ar = last_ivr # abs slice indices
# left_change = abs(x_iv[0] - al) >= 1
# right_change = abs(x_iv[-1] - ar) >= 1
# likely a zoom/pan view change or data append update
if (
(vr - lvr) > 2
or vl < lvl
# append / prepend update
# we had an append update where the view range
# didn't change but the data-viewed (shifted)
# underneath, so we need to redraw.
# or left_change and right_change and last_vr == view_range
# not (left_change and right_change) and ivr
# (
# or abs(x_iv[ivr] - livr) > 1
):
zoom_or_append = True
self._last_vr = view_range
return zoom_or_append
def format_to_1d(
self,
new_read: tuple,
array_key: str,
profiler: Profiler,
slice_to_inview: bool = True,
) -> tuple[
np.ndarray,
np.ndarray,
]:
shm = self.shm
(
_,
_,
array,
ivl,
ivr,
in_view,
) = new_read
(
pre_slice,
prepend_len,
append_len,
post_slice,
) = self.diff(new_read)
# we first need to allocate xy data arrays
# from the source data.
if self.y_nd is None:
self.xy_nd_start = shm._first.value
self.xy_nd_stop = shm._last.value
self.x_nd, self.y_nd = self.allocate_xy_nd(
shm,
array_key,
)
profiler('allocated xy history')
# once allocated we do incremental pre/append
# updates from the diff with the source buffer.
else:
if prepend_len:
self.incr_update_xy_nd(
shm,
array_key,
# this is the pre-sliced, "normally expected"
# new data that an updater would normally be
# expected to process, however in some cases (like
# step curves) the updater routine may want to do
# the source history-data reading itself, so we pass
# both here.
shm._array[pre_slice],
pre_slice,
prepend_len,
self.xy_nd_start,
self.xy_nd_stop,
is_append=False,
)
self.xy_nd_start -= prepend_len
profiler(f'prepended xy history: {prepend_len}')
if append_len:
self.incr_update_xy_nd(
shm,
array_key,
shm._array[post_slice],
post_slice,
append_len,
self.xy_nd_start,
self.xy_nd_stop,
is_append=True,
)
self.xy_nd_stop += append_len
profiler(f'appended xy history: {append_len}')
# sanity
# slice_ln = post_slice.stop - post_slice.start
# assert append_len == slice_ln
view_changed: bool = False
view_range: tuple[int, int] = (ivl, ivr)
if slice_to_inview:
view_changed = self._track_inview_range(view_range)
array = in_view
profiler(f'{self.viz.name} view range slice {view_range}')
# TODO: we need to check if the last-datum-in-view is true and
# if so only slice to the 2nd last datum.
# hist = array[:slice_to_head]
# XXX: WOA WTF TRACTOR DEBUGGING BUGGG
# assert 0
# xy-path data transform: convert source data to a format
# able to be passed to a `QPainterPath` rendering routine.
if not len(array):
# XXX: this might be why the profiler only has exits?
return
# TODO: hist here should be the pre-sliced
# x/y_data in the case where allocate_xy is
# defined?
x_1d, y_1d, connect = self.format_xy_nd_to_1d(
array,
array_key,
view_range,
)
# cache/save last 1d outputs for use by other
# readers (eg. `Viz.draw_last_datum()` in the
# only-draw-last-uppx case).
self.x_1d = x_1d
self.y_1d = y_1d
# app_tres = None
# if append_len:
# appended = array[-append_len-1:slice_to_head]
# app_tres = self.format_xy_nd_to_1d(
# appended,
# array_key,
# (
# view_range[1] - append_len + slice_to_head,
# view_range[1]
# ),
# )
# # assert (len(appended) - 1) == append_len
# # assert len(appended) == append_len
# print(
# f'{self.viz.name} APPEND LEN: {append_len}\n'
# f'{self.viz.name} APPENDED: {appended}\n'
# f'{self.viz.name} app_tres: {app_tres}\n'
# )
# update the last "in view data range"
if len(x_1d):
self._last_ivdr = x_1d[0], x_1d[-1]
profiler('.format_to_1d()')
return (
x_1d,
y_1d,
connect,
prepend_len,
append_len,
view_changed,
# app_tres,
)
###############################
# Sub-type override interface #
###############################
x_offset: np.ndarray = np.array([0])
# optional pre-graphics xy formatted data which
# is incrementally updated in sync with the source data.
# XXX: was ``.allocate_xy()``
def allocate_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.nd.array # y
]:
'''
Convert the structured-array ``src_shm`` format to
an equivalently shaped (and field-less) ``np.ndarray``.
Eg. a 4 field x N struct-array => (N, 4)
'''
y_nd = src_shm._array[data_field].copy()
x_nd = (
src_shm._array[self.index_field].copy()
+
self.x_offset
)
return x_nd, y_nd
# XXX: was ``.update_xy()``
def incr_update_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> None:
# write pushed data to flattened copy
y_nd_new = new_from_src[data_field]
self.y_nd[read_slc] = y_nd_new
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = (
new_from_src[self.index_field]
+
self.x_offset
)
# x_nd = self.x_nd[self.xy_slice]
# y_nd = self.y_nd[self.xy_slice]
# name = self.viz.name
# if 'trade_rate' == name:
# s = 4
# print(
# f'{name.upper()}:\n'
# 'NEW_FROM_SRC:\n'
# f'new_from_src: {new_from_src}\n\n'
# f'PRE self.x_nd:'
# f'\n{list(x_nd[-s:])}\n'
# f'PRE self.y_nd:\n'
# f'{list(y_nd[-s:])}\n\n'
# f'TO WRITE:\n'
# f'x_nd_new:\n'
# f'{x_nd_new[0]}\n'
# f'y_nd_new:\n'
# f'{y_nd_new}\n'
# )
# XXX: was ``.format_xy()``
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray, # 1d x
np.ndarray, # 1d y
np.ndarray | str, # connection array/style
]:
'''
Default xy-nd array to 1d pre-graphics-path render routine.
Return single field column data verbatim
'''
# NOTE: we don't include the very last datum which is filled in
# normally by another graphics object.
x_1d = array[self.index_field][:-1]
y_1d = array[array_key][:-1]
# name = self.viz.name
# if 'trade_rate' == name:
# s = 4
# x_nd = list(self.x_nd[self.xy_slice][-s:-1])
# y_nd = list(self.y_nd[self.xy_slice][-s:-1])
# print(
# f'{name}:\n'
# f'XY data:\n'
# f'x: {x_nd}\n'
# f'y: {y_nd}\n\n'
# f'x_1d: {list(x_1d[-s:])}\n'
# f'y_1d: {list(y_1d[-s:])}\n\n'
# )
return (
x_1d,
y_1d,
# 1d connection array or style-key to
# ``pg.functions.arrayToQPath()``
'all',
)
class OHLCBarsFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([
-0.5,
0,
0,
0.5,
])
fields: list[str] = field(
default_factory=lambda: ['open', 'high', 'low', 'close']
)
flat_index_ratio: float = 4
def allocate_xy_nd(
self,
ohlc_shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.nd.array # y
]:
'''
Convert an input struct-array holding OHLC samples into a pair of
flattened x, y arrays with the same size (datums wise) as the source
data.
'''
y_nd = ohlc_shm.ustruct(self.fields)
# generate a flat-interpolated x-domain
x_nd = (
np.broadcast_to(
ohlc_shm._array[self.index_field][:, None],
(
ohlc_shm._array.size,
# 4, # only ohlc
y_nd.shape[1],
),
)
+
self.x_offset
)
assert y_nd.any()
# write pushed data to flattened copy
return (
x_nd,
y_nd,
)
def incr_update_xy_nd(
self,
src_shm: ShmArray,
data_field: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> None:
# write newly pushed data to flattened copy
# a struct-arr is always passed in.
new_y_nd = rfn.structured_to_unstructured(
new_from_src[self.fields]
)
self.y_nd[read_slc] = new_y_nd
# generate same-valued-per-row x support based on y shape
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = np.broadcast_to(
new_from_src[self.index_field][:, None],
new_y_nd.shape,
) + self.x_offset
# TODO: can we drop this frame and just use the above?
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.16,
) -> tuple[
np.ndarray,
np.ndarray,
np.ndarray,
]:
'''
More or less direct proxy to the ``numba``-fied
``path_arrays_from_ohlc()`` (above) but with closed in kwargs
for line spacing.
'''
x, y, c = path_arrays_from_ohlc(
array[:-1],
start,
bar_w=self.index_step_size,
bar_gap=w * self.index_step_size,
# XXX: don't ask, due to a ``numba`` bug..
use_time_index=(self.index_field == 'time'),
)
return x, y, c
class OHLCBarsAsCurveFmtr(OHLCBarsFmtr):
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray,
np.ndarray,
str,
]:
# TODO: in the case of an existing ``.update_xy()``
# should we be passing in array as an xy arrays tuple?
# 2 more datum-indexes to capture zero at end
x_flat = self.x_nd[self.xy_nd_start:self.xy_nd_stop-1]
y_flat = self.y_nd[self.xy_nd_start:self.xy_nd_stop-1]
# slice to view
ivl, ivr = vr
x_iv_flat = x_flat[ivl:ivr]
y_iv_flat = y_flat[ivl:ivr]
# reshape to 1d for graphics rendering
y_iv = y_iv_flat.reshape(-1)
x_iv = x_iv_flat.reshape(-1)
return x_iv, y_iv, 'all'
class StepCurveFmtr(IncrementalFormatter):
x_offset: np.ndarray = np.array([
0,
1,
])
def allocate_xy_nd(
self,
shm: ShmArray,
data_field: str,
) -> tuple[
np.ndarray, # x
np.nd.array # y
]:
'''
Convert an input 1d shm array to a "step array" format
for use by path graphics generation.
'''
i = shm._array[self.index_field].copy()
out = shm._array[data_field].copy()
x_out = (
np.broadcast_to(
i[:, None],
(i.size, 2),
)
+
self.x_offset
)
# fill out Nx2 array to hold each step's left + right vertices.
y_out = np.empty(
x_out.shape,
dtype=out.dtype,
)
# fill in (current) values from source shm buffer
y_out[:] = out[:, np.newaxis]
# TODO: pretty sure we can drop this?
# start y at origin level
# y_out[0, 0] = 0
# y_out[self.xy_nd_start] = 0
return x_out, y_out
def incr_update_xy_nd(
self,
src_shm: ShmArray,
array_key: str,
new_from_src: np.ndarray, # portion of source that was updated
read_slc: slice,
ln: int, # len of updated
nd_start: int,
nd_stop: int,
is_append: bool,
) -> tuple[
np.ndarray,
slice,
]:
# NOTE: for a step curve we slice from one datum prior
# to the current "update slice" to get the previous
# "level".
#
# why this is needed,
# - the current new append slice will often have a zero
#   value in the latest datum-step (at least for zero-on-new
#   cases like vlm) as per configuration of the FSP engine.
# - we need to look back a datum to get the last level which
# will be used to terminate/complete the last step x-width
# which will be set to pair with the last x-index.
#
# XXX: this means WE CAN'T USE the append slice since we need to
# "look backward" one step to get the needed back-to-zero level
# and the update data in ``new_from_src`` will only contain the
# latest new data.
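# Illustrative example (made-up numbers): if ``read_slc`` is
# ``slice(200, 205)`` then ``back_1`` is ``slice(199, 205)``, so the
# prior level at row 199 is rewritten along with the 5 new rows.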
back_1 = slice(
read_slc.start - 1,
read_slc.stop,
)
to_write = src_shm._array[back_1]
y_nd_new = self.y_nd[back_1]
y_nd_new[:] = to_write[array_key][:, None]
x_nd_new = self.x_nd[read_slc]
x_nd_new[:] = (
new_from_src[self.index_field][:, None]
+
self.x_offset
)
# XXX: uncomment for debugging
# x_nd = self.x_nd[self.xy_slice]
# y_nd = self.y_nd[self.xy_slice]
# name = self.viz.name
# if 'dolla_vlm' in name:
# s = 4
# print(
# f'{name}:\n'
# 'NEW_FROM_SRC:\n'
# f'new_from_src: {new_from_src}\n\n'
# f'PRE self.x_nd:'
# f'\n{x_nd[-s:]}\n'
# f'PRE self.y_nd:\n'
# f'{y_nd[-s:]}\n\n'
# f'TO WRITE:\n'
# f'x_nd_new:\n'
# f'{x_nd_new}\n'
# f'y_nd_new:\n'
# f'{y_nd_new}\n'
# )
def format_xy_nd_to_1d(
self,
array: np.ndarray,
array_key: str,
vr: tuple[int, int],
) -> tuple[
np.ndarray,
np.ndarray,
str,
]:
last_t, last = array[-1][[self.index_field, array_key]]
start = self.xy_nd_start
stop = self.xy_nd_stop
x_step = self.x_nd[start:stop]
y_step = self.y_nd[start:stop]
# slice out in-view data
ivl, ivr = vr
# NOTE: add an extra step to get the vertical-line-down-to-zero
# adjacent to the last-datum graphic (filled rect).
x_step_iv = x_step[ivl:ivr+1]
y_step_iv = y_step[ivl:ivr+1]
# flatten to 1d
x_1d = x_step_iv.reshape(x_step_iv.size)
y_1d = y_step_iv.reshape(y_step_iv.size)
# debugging
# if y_1d.any():
# s = 6
# print(
# f'x_step_iv:\n{x_step_iv[-s:]}\n'
# f'y_step_iv:\n{y_step_iv[-s:]}\n\n'
# f'x_1d:\n{x_1d[-s:]}\n'
# f'y_1d:\n{y_1d[-s:]}\n'
# )
return x_1d, y_1d, 'all'
@ -15,17 +15,30 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.

'''
Graphics downsampling using the infamous M4 algorithm.

This is one of ``piker``'s secret weapons allowing us to boss all other
charting platforms B)

(AND DON'T YOU DARE TAKE THIS CODE WITHOUT CREDIT OR WE'LL SUE UR F#&@* ASS).

NOTES: this method is a so called "visualization driven data
aggregation" approach. It gives error-free line chart
downsampling, see
further scientific paper resources:
- http://www.vldb.org/pvldb/vol7/p797-jugel.pdf
- http://www.vldb.org/2014/program/papers/demo/p997-jugel.pdf

Details on implementation of this algo are based in,
https://github.com/pikers/piker/issues/109

'''
import math
from typing import Optional

import numpy as np
from numba import (
    njit,
    # float64, optional, int64,
)
@ -35,109 +48,6 @@ from ..log import get_logger
log = get_logger(__name__)
def hl2mxmn(ohlc: np.ndarray) -> np.ndarray:
'''
Convert an OHLC struct-array containing 'high'/'low' columns
to a "joined" max/min 1-d array.
'''
index = ohlc['index']
hls = ohlc[[
'low',
'high',
]]
mxmn = np.empty(2*hls.size, dtype=np.float64)
x = np.empty(2*hls.size, dtype=np.float64)
trace_hl(hls, mxmn, x, index[0])
x = x + index[0]
return mxmn, x
@jit(
# TODO: the type annots..
# float64[:](float64[:],),
nopython=True,
)
def trace_hl(
hl: 'np.ndarray',
out: np.ndarray,
x: np.ndarray,
start: int,
# the "offset" values in the x-domain which
# place the 2 output points around each ``int``
# master index.
margin: float = 0.43,
) -> None:
'''
"Trace" the outline of the high-low values of an ohlc sequence
as a line such that the maximum deviation (aka dispersion) between
bars is preserved.
This routine is expected to modify input arrays in-place.
'''
last_l = hl['low'][0]
last_h = hl['high'][0]
for i in range(hl.size):
row = hl[i]
l, h = row['low'], row['high']
up_diff = h - last_l
down_diff = last_h - l
if up_diff > down_diff:
out[2*i + 1] = h
out[2*i] = last_l
else:
out[2*i + 1] = l
out[2*i] = last_h
last_l = l
last_h = h
x[2*i] = int(i) - margin
x[2*i + 1] = int(i) + margin
return out
def ohlc_flatten(
ohlc: np.ndarray,
use_mxmn: bool = True,
) -> tuple[np.ndarray, np.ndarray]:
'''
Convert an OHLCV struct-array into a flat ready-for-line-plotting
1-d array that is 4 times the size with x-domain values distributed
evenly (by 0.5 steps) over each index.
'''
index = ohlc['index']
if use_mxmn:
# traces a line optimally over highs to lows
# using numba. NOTE: pretty sure this is faster
# and looks about the same as the below output.
flat, x = hl2mxmn(ohlc)
else:
flat = rfn.structured_to_unstructured(
ohlc[['open', 'high', 'low', 'close']]
).flatten()
x = np.linspace(
start=index[0] - 0.5,
stop=index[-1] + 0.5,
num=len(flat),
)
return x, flat
def ds_m4(
    x: np.ndarray,
    y: np.ndarray,
@ -160,16 +70,6 @@ def ds_m4(
    This is more or less an OHLC style sampling of a line-style series.

    '''
# NOTE: this method is a so called "visualization driven data
# aggregation" approach. It gives error-free line chart
# downsampling, see
# further scientific paper resources:
# - http://www.vldb.org/pvldb/vol7/p797-jugel.pdf
# - http://www.vldb.org/2014/program/papers/demo/p997-jugel.pdf
# Details on implementation of this algo are based in,
# https://github.com/pikers/piker/issues/109
    # XXX: from infinite on downsampling viewable graphics:
    # "one thing i remembered about the binning - if you are
    # picking a range within your timeseries the start and end bin
@ -191,6 +91,14 @@ def ds_m4(
    x_end = x[-1]  # x end value/highest in domain
    xrange = (x_end - x_start)

    if xrange < 0:
        log.error(f'-VE M4 X-RANGE: {x_start} -> {x_end}')
        # XXX: broken x-range calc-case, likely the x-end points
        # are wrong and have some default value set (such as
        # x_end -> <some epoch float> while x_start -> 0.5).
        # breakpoint()
        return None
    # XXX: always round up on the input pixels
    # lnx = len(x)
    # uppx *= max(4 / (1 + math.log(uppx, 2)), 1)
@ -223,14 +131,20 @@ def ds_m4(
    assert frames >= (xrange / uppx)

    # call into ``numba``
    (
        nb,
        x_out,
        y_out,
        ymn,
        ymx,
    ) = _m4(
        x,
        y,

        frames,

        # TODO: see func below..
        # x_out,
        # y_out,

        # first index in x data to start at
@ -243,14 +157,14 @@ def ds_m4(
    # filter out any overshoot in the input allocation arrays by
    # removing zero-ed tail entries which should start at a certain
    # index.
    x_out = x_out[x_out != 0]
    y_out = y_out[:x_out.size]

    # print(f'M4 output ymn, ymx: {ymn},{ymx}')
    return nb, x_out, y_out, ymn, ymx
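
A hedged usage sketch (not from the diff; the signature details are elided above, so the assumption here is that the third positional parameter is the ``uppx`` units-per-pixel value) showing how the new 5-tuple return might be consumed:

    import numpy as np

    x = np.arange(10_000, dtype=np.float64)
    y = np.sin(x / 50)

    out = ds_m4(x, y, 50.0)  # ~50 x-units per pixel
    if out is not None:
        nb, x_out, y_out, ymn, ymx = out
        # each y_out row holds (first, min, max, last) for one x-bin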
@njit(
    nogil=True,
)
def _m4(
@ -260,8 +174,8 @@ def _m4(
    frames: int,

    # TODO: using this approach, having the ``.zeros()`` alloc lines
    # below in pure python, there were segs faults and alloc crashes..
    # we might need to see how it behaves with shm arrays and consider
    # allocating them once at startup?
@ -274,14 +188,22 @@ def _m4(
    x_start: int,
    step: float,

) -> tuple[
    int,
    np.ndarray,
    np.ndarray,
    float,
    float,
]:
    '''
    Implementation of the m4 algorithm in ``numba``:
    http://www.vldb.org/pvldb/vol7/p797-jugel.pdf

    '''
    # these are pre-allocated and mutated by ``numba``
    # code in-place.
    y_out = np.zeros((frames, 4), ys.dtype)
    x_out = np.zeros(frames, xs.dtype)

    bincount = 0
    x_left = x_start
@ -295,24 +217,34 @@ def _m4(
    # set all bins in the left-most entry to the starting left-most x value
    # (aka a row broadcast).
    x_out[bincount] = x_left
    # set all y-values to the first value passed in.
    y_out[bincount] = ys[0]

    # full input y-data mx and mn
    mx: float = -np.inf
    mn: float = np.inf

    # compute OHLC style max / min values per window sized x-frame.
    for i in range(len(xs)):

        x = xs[i]
        y = ys[i]

        if x < x_left + step:   # the current window "step" is [bin, bin+1)
            ymn = y_out[bincount, 1] = min(y, y_out[bincount, 1])
            ymx = y_out[bincount, 2] = max(y, y_out[bincount, 2])
            y_out[bincount, 3] = y
            mx = max(mx, ymx)
            mn = min(mn, ymn)

        else:
            # Find the next bin
            while x >= x_left + step:
                x_left += step

            bincount += 1
            x_out[bincount] = x_left
            y_out[bincount] = y

    return bincount, x_out, y_out, mn, mx
@ -56,7 +56,7 @@ def iterticks(
    sig = (
        time,
        tick['price'],
        tick.get('size')
    )

    if ttype == 'dark_trade':
@ -0,0 +1,452 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Super fast ``QPainterPath`` generation related operator routines.
"""
from math import (
ceil,
floor,
)
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import (
# types,
njit,
float64,
int64,
# optional,
)
# TODO: for ``numba`` typing..
# from ._source import numba_ohlc_dtype
from ._m4 import ds_m4
from .._profile import (
Profiler,
pg_profile_enabled,
ms_slower_then,
)
def xy_downsample(
x,
y,
uppx,
x_spacer: float = 0.5,
) -> tuple[
np.ndarray,
np.ndarray,
float,
float,
]:
'''
Downsample 1D (flat ``numpy.ndarray``) arrays using M4 given an input
``uppx`` (units-per-pixel) and add space between discrete datums.
'''
# downsample whenever more then 1 pixels per datum can be shown.
# always refresh data bounds until we get diffing
# working properly, see above..
m4_out = ds_m4(
x,
y,
uppx,
)
if m4_out is not None:
bins, x, y, ymn, ymx = m4_out
# flatten output to 1d arrays suitable for path-graphics generation.
x = np.broadcast_to(x[:, None], y.shape)
x = (x + np.array(
[-x_spacer, 0, 0, x_spacer]
)).flatten()
y = y.flatten()
return x, y, ymn, ymx
# XXX: we accept a None output for the case where the input range
# to ``ds_m4()`` is bad (-ve) and we want to catch and debug
# that (seemingly super rare) circumstance..
return None
@njit(
# NOTE: need to construct this manually for readonly
# arrays, see https://github.com/numba/numba/issues/4511
# (
# types.Array(
# numba_ohlc_dtype,
# 1,
# 'C',
# readonly=True,
# ),
# int64,
# types.unicode_type,
# optional(float64),
# ),
nogil=True
)
def path_arrays_from_ohlc(
data: np.ndarray,
start: int64,
bar_w: float64,
bar_gap: float64 = 0.16,
use_time_index: bool = True,
# XXX: ``numba`` issue: https://github.com/numba/numba/issues/8622
# index_field: str,
) -> tuple[
np.ndarray,
np.ndarray,
np.ndarray,
]:
'''
Generate an array of line objects from input ohlc data.
'''
size = int(data.shape[0] * 6)
# XXX: see this for why the dtype might have to be defined outside
# the routine.
# https://github.com/numba/numba/issues/4098#issuecomment-493914533
x = np.zeros(
shape=size,
dtype=float64,
)
y, c = x.copy(), x.copy()
half_w: float = bar_w/2
# TODO: report bug for assert @
# /home/goodboy/repos/piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
for i, q in enumerate(data[start:], start):
open = q['open']
high = q['high']
low = q['low']
close = q['close']
if use_time_index:
index = float64(q['time'])
else:
index = float64(q['index'])
# XXX: ``numba`` issue: https://github.com/numba/numba/issues/8622
# index = float64(q[index_field])
# AND this (probably)
# open, high, low, close, index = q[
# ['open', 'high', 'low', 'close', 'index']]
istart = i * 6
istop = istart + 6
# x,y detail the 6 points which connect all vertexes of a ohlc bar
mid: float = index + half_w
x[istart:istop] = (
index + bar_gap,
mid,
mid,
mid,
mid,
index + bar_w - bar_gap,
)
y[istart:istop] = (
open,
open,
low,
high,
close,
close,
)
# specifies that the first edge is never connected to the
# prior bars last edge thus providing a small "gap"/"space"
# between bars determined by ``bar_gap``.
c[istart:istop] = (1, 1, 1, 1, 1, 0)
return x, y, c
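
A hedged consumer sketch: the ``c`` array is meant to be fed as the ``connect`` argument to ``pg.functions.arrayToQPath()`` (the routine the formatter docs above reference); the pyqtgraph usage here is illustrative, not part of this diff:

    import pyqtgraph as pg

    # x, y, c = path_arrays_from_ohlc(ohlc, 0, bar_w=1.0)
    # path = pg.functions.arrayToQPath(x, y, connect=c)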
def hl2mxmn(
ohlc: np.ndarray,
index_field: str = 'index',
) -> np.ndarray:
'''
Convert an OHLC struct-array containing 'high'/'low' columns
to a "joined" max/min 1-d array.
'''
index = ohlc[index_field]
hls = ohlc[[
'low',
'high',
]]
mxmn = np.empty(2*hls.size, dtype=np.float64)
x = np.empty(2*hls.size, dtype=np.float64)
trace_hl(hls, mxmn, x, index[0])
x = x + index[0]
return mxmn, x
@njit(
# TODO: the type annots..
# float64[:](float64[:],),
)
def trace_hl(
hl: 'np.ndarray',
out: np.ndarray,
x: np.ndarray,
start: int,
# the "offset" values in the x-domain which
# place the 2 output points around each ``int``
# master index.
margin: float = 0.43,
) -> None:
'''
"Trace" the outline of the high-low values of an ohlc sequence
as a line such that the maximum deviation (aka dispersion) between
bars is preserved.
This routine is expected to modify input arrays in-place.
'''
last_l = hl['low'][0]
last_h = hl['high'][0]
for i in range(hl.size):
row = hl[i]
l, h = row['low'], row['high']
up_diff = h - last_l
down_diff = last_h - l
if up_diff > down_diff:
out[2*i + 1] = h
out[2*i] = last_l
else:
out[2*i + 1] = l
out[2*i] = last_h
last_l = l
last_h = h
x[2*i] = int(i) - margin
x[2*i + 1] = int(i) + margin
return out
def ohlc_flatten(
ohlc: np.ndarray,
use_mxmn: bool = True,
index_field: str = 'index',
) -> tuple[np.ndarray, np.ndarray]:
'''
Convert an OHLCV struct-array into a flat ready-for-line-plotting
1-d array that is 4 times the size with x-domain values distributed
evenly (by 0.5 steps) over each index.
'''
index = ohlc[index_field]
if use_mxmn:
# traces a line optimally over highs to lows
# using numba. NOTE: pretty sure this is faster
# and looks about the same as the below output.
flat, x = hl2mxmn(ohlc)
else:
flat = rfn.structured_to_unstructured(
ohlc[['open', 'high', 'low', 'close']]
).flatten()
x = np.linspace(
start=index[0] - 0.5,
stop=index[-1] + 0.5,
num=len(flat),
)
return x, flat
def slice_from_time(
arr: np.ndarray,
start_t: float,
stop_t: float,
step: int | None = None,
) -> slice:
'''
Calculate array indices mapped from a time range and return them in
a slice.
Given an input array with an epoch `'time'` series entry, calculate
the indices which span the time range and return in a slice. Presume
each `'time'` step increment is uniform and when the time stamp
series contains gaps (the uniform presumption is untrue) use
``np.searchsorted()`` binary search to look up the appropriate
index.
'''
profiler = Profiler(
msg='slice_from_time()',
disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then,
)
times = arr['time']
t_first = floor(times[0])
t_last = ceil(times[-1])
# the greatest index we can return which slices to the
# end of the input array.
read_i_max = arr.shape[0]
# TODO: require this is always passed in?
if step is None:
step = round(t_last - times[-2])
if step == 0:
step = 1
# compute (presumed) uniform-time-step index offsets
i_start_t = floor(start_t)
read_i_start = floor(((i_start_t - t_first) // step)) - 1
i_stop_t = ceil(stop_t)
# XXX: edge case -> always set stop index to last in array whenever
# the input stop time is detected to be greater then the equiv time
# stamp at that last entry.
if i_stop_t >= t_last:
read_i_stop = read_i_max
else:
read_i_stop = ceil((i_stop_t - t_first) // step) + 1
# always clip outputs to array support
# for read start:
# - never allow a start < the 0 index
# - never allow an end index > the read array len
read_i_start = min(
max(0, read_i_start),
read_i_max - 1,
)
read_i_stop = max(
0,
min(read_i_stop, read_i_max),
)
# check for larger-then-latest calculated index for given start
# time, in which case we do a binary search for the correct index.
# NOTE: this is usually the result of a time series with time gaps
# where it is expected that each index step maps to a uniform step
# in the time stamp series.
t_iv_start = times[read_i_start]
if (
t_iv_start > i_start_t
):
# do a binary search for the best index mapping to ``start_t``
# given we measured an overshoot using the uniform-time-step
# calculation from above.
# TODO: once we start caching these per source-array,
# we can just overwrite ``read_i_start`` directly.
new_read_i_start = np.searchsorted(
times,
i_start_t,
side='left',
)
# TODO: minimize binary search work as much as possible:
# - cache these remap values which compensate for gaps in the
# uniform time step basis where we calc a later start
# index for the given input ``start_t``.
# - can we shorten the input search sequence by heuristic?
# up_to_arith_start = index[:read_i_start]
if (
new_read_i_start <= read_i_start
):
# t_diff = t_iv_start - start_t
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'start_t:{start_t} -> 0index start_t:{t_iv_start}\n'
# f'diff: {t_diff}\n'
# f'REMAPPED START i: {read_i_start} -> {new_read_i_start}\n'
# )
read_i_start = new_read_i_start - 1
t_iv_stop = times[read_i_stop - 1]
if (
t_iv_stop > i_stop_t
):
# t_diff = stop_t - t_iv_stop
# print(
# f"WE'RE CUTTING OUT TIME - STEP:{step}\n"
# f'calced iv stop:{t_iv_stop} -> stop_t:{stop_t}\n'
# f'diff: {t_diff}\n'
# # f'SHOULD REMAP STOP: {read_i_start} -> {new_read_i_start}\n'
# )
new_read_i_stop = np.searchsorted(
times[read_i_start:],
# times,
i_stop_t,
side='left',
)
if (
new_read_i_stop <= read_i_stop
):
read_i_stop = read_i_start + new_read_i_stop + 1
# sanity checks for range size
# samples = (i_stop_t - i_start_t) // step
# index_diff = read_i_stop - read_i_start + 1
# if index_diff > (samples + 3):
# breakpoint()
# read-relative indexes: gives a slice where `shm.array[read_slc]`
# will be the data spanning the input time range `start_t` ->
# `stop_t`
read_slc = slice(
int(read_i_start),
int(read_i_stop),
)
profiler(
'slicing complete'
# f'{start_t} -> {abs_slc.start} | {read_slc.start}\n'
# f'{stop_t} -> {abs_slc.stop} | {read_slc.stop}\n'
)
# NOTE: if caller needs absolute buffer indices they can
# slice the buffer abs index like so:
# index = arr['index']
# abs_indx = index[read_slc]
# abs_slc = slice(
# int(abs_indx[0]),
# int(abs_indx[-1]),
# )
return read_slc
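
Worked example (illustrative numbers): for a uniform 60s series with ``t_first = 1_000_020``, querying ``start_t = 1_000_620`` and ``stop_t = 1_001_160`` gives ``step = 60``, ``read_i_start = (600 // 60) - 1 = 9`` and ``read_i_stop = (1140 // 60) + 1 = 20``, i.e. ``slice(9, 20)``; the deliberate one-row over-coverage on each end is then clipped to the array support and, when time gaps cause an overshoot, corrected by the ``np.searchsorted()`` fallback.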
(File diff suppressed because it is too large.)
@ -1,5 +1,5 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -27,13 +27,14 @@ from multiprocessing.shared_memory import SharedMemory, _USE_POSIX
if _USE_POSIX:
    from _posixshmem import shm_unlink

# import msgspec
import numpy as np
from numpy.lib import recfunctions as rfn
import tractor

from ..log import get_logger
from ._source import base_iohlc_dtype
from .types import Struct

log = get_logger(__name__)
@ -49,7 +50,11 @@ _rt_buffer_start = int((_days_worth - 1) * _secs_in_day)
def cuckoff_mantracker():
    '''
    Disable all ``multiprocessing`` "resource tracking" machinery since
    it's an absolute multi-threaded mess of non-SC madness.

    '''
    from multiprocessing import resource_tracker as mantracker

    # Tell the "resource tracker" thing to fuck off.
@ -107,36 +112,39 @@ class SharedInt:
        log.warning(f'Shm for {name} already unlinked?')


class _Token(Struct, frozen=True):
    '''
    Internal representation of a shared memory "token"
    which can be used to key a system wide post shm entry.

    '''
    shm_name: str  # this serves as a "key" value
    shm_first_index_name: str
    shm_last_index_name: str
    dtype_descr: tuple
    size: int  # in struct-array index / row terms

    @property
    def dtype(self) -> np.dtype:
        return np.dtype(list(map(tuple, self.dtype_descr))).descr

    def as_msg(self):
        return self.to_dict()

    @classmethod
    def from_msg(cls, msg: dict) -> _Token:
        if isinstance(msg, _Token):
            return msg

        # TODO: native struct decoding
        # return _token_dec.decode(msg)

        msg['dtype_descr'] = tuple(map(tuple, msg['dtype_descr']))
        return _Token(**msg)


# _token_dec = msgspec.msgpack.Decoder(_Token)

# TODO: this api?
# _known_tokens = tractor.ActorVar('_shm_tokens', {})
# _known_tokens = tractor.ContextStack('_known_tokens', )
@ -155,6 +163,7 @@ def get_shm_token(key: str) -> _Token:
def _make_token(
    key: str,
    size: int,
    dtype: Optional[np.dtype] = None,

) -> _Token:
    '''
@ -167,7 +176,8 @@ def _make_token(
        shm_name=key,
        shm_first_index_name=key + "_first",
        shm_last_index_name=key + "_last",
        dtype_descr=tuple(np.dtype(dtype).descr),
        size=size,
    )
@ -219,6 +229,7 @@ class ShmArray:
            shm_first_index_name=self._first._shm.name,
            shm_last_index_name=self._last._shm.name,
            dtype_descr=tuple(self._array.dtype.descr),
            size=self._len,
        )

    @property
@ -433,7 +444,7 @@ class ShmArray:
def open_shm_array(
    key: Optional[str] = None,
    size: int = _default_size,  # see above
    dtype: Optional[np.dtype] = None,
    readonly: bool = False,
@ -464,7 +475,8 @@ def open_shm_array(
    token = _make_token(
        key=key,
        size=size,
        dtype=dtype,
    )

    # create single entry arrays for storing the first and last indices
@ -516,15 +528,15 @@ def open_shm_array(
# "unlink" created shm on process teardown by # "unlink" created shm on process teardown by
# pushing teardown calls onto actor context stack # pushing teardown calls onto actor context stack
tractor._actor._lifetime_stack.callback(shmarr.close) stack = tractor.current_actor().lifetime_stack
tractor._actor._lifetime_stack.callback(shmarr.destroy) stack.callback(shmarr.close)
stack.callback(shmarr.destroy)
return shmarr return shmarr
def attach_shm_array(
    token: tuple[str, str, tuple[str, str]],
    readonly: bool = True,

) -> ShmArray:
@ -563,7 +575,7 @@ def attach_shm_array(
        raise _err

    shmarr = np.ndarray(
        (token.size,),
        dtype=token.dtype,
        buffer=shm.buf
    )
@ -602,8 +614,8 @@ def attach_shm_array(
    if key not in _known_tokens:
        _known_tokens[key] = token

    # "close" attached shm on actor teardown
    tractor.current_actor().lifetime_stack.callback(sha.close)

    return sha
@ -631,6 +643,7 @@ def maybe_open_shm_array(
    use ``attach_shm_array``.

    '''
    size = kwargs.pop('size', _default_size)
    try:
        # see if we already know this key
        token = _known_tokens[key]
@ -638,7 +651,11 @@ def maybe_open_shm_array(
    except KeyError:
        log.warning(f"Could not find {key} in shms cache")
        if dtype:
            token = _make_token(
                key,
                size=size,
                dtype=dtype,
            )
        try:
            return attach_shm_array(token=token, **kwargs), False
        except FileNotFoundError:
@ -18,12 +18,16 @@
numpy data source conversion helpers.
"""
from __future__ import annotations
from decimal import (
    Decimal,
    ROUND_HALF_EVEN,
)
from typing import Any

from bidict import bidict
import numpy as np

from .types import Struct
# from numba import from_dtype
@ -76,10 +80,14 @@ def mk_fqsn(
def float_digits(
    value: float,
) -> int:
    '''
    Return the number of precision digits read from a float value.

    '''
    if value == 0:
        return 0

    return int(-Decimal(str(value)).as_tuple().exponent)
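
For example, ``float_digits(0.01) == 2``, ``float_digits(0.001) == 3`` and ``float_digits(0) == 0``, since the count is read straight off the ``Decimal`` exponent.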
def ohlc_zeros(length: int) -> np.ndarray:
@ -126,7 +134,57 @@ def unpack_fqsn(fqsn: str) -> tuple[str, str, str]:
)
class MktPair(Struct, frozen=True):

    src: str  # source asset name being used to buy
    src_type: str  # source asset's financial type/classification name
    # ^ specifies a "class" of financial instrument
    # egs. stock, future, option, bond etc.

    dst: str  # destination asset name being bought
    dst_type: str  # destination asset's financial type/classification name

    price_tick: float  # minimum price increment value
    price_tick_digits: int  # required decimal digits for above
    size_tick: float  # minimum size (aka vlm) increment value
    size_tick_digits: int  # required decimal digits for above

    venue: str | None = None  # market venue provider name
    expiry: str | None = None  # for derivs, expiry datetime parseable str

    # for derivs, info describing contract, egs.
    # strike price, call or put, swap type, exercise model, etc.
    contract_info: str | None = None

    @classmethod
    def from_msg(
        self,
        msg: dict[str, Any],

    ) -> MktPair:
        '''
        Constructor for a received msg-dict normally received over IPC.

        '''
        ...

    # fqa, fqma, .. etc. see issue:
    # https://github.com/pikers/piker/issues/467
    @property
    def fqsn(self) -> str:
        '''
        Return the fully qualified market (endpoint) name for the
        pair of transacting assets.

        '''
        ...


# TODO: rework the below `Symbol` (which was originally inspired and
# derived from stuff in quantdom) into a simpler, ipc msg ready, market
# endpoint meta-data container type as per the drafted interface above.
class Symbol(Struct):
    '''
    I guess this is some kinda container thing for dealing with
    all the different meta-data formats from brokers?
@ -140,10 +198,6 @@ class Symbol(BaseModel):
    suffix: str = ''
    broker_info: dict[str, dict[str, Any]] = {}

    @classmethod
    def from_broker_info(
        cls,
@ -152,19 +206,17 @@ class Symbol(BaseModel):
        info: dict[str, Any],
        suffix: str = '',

    ) -> Symbol:

        tick_size = info.get('price_tick_size', 0.01)
        lot_size = info.get('lot_tick_size', 0.0)

        return Symbol(
            key=symbol,
            tick_size=tick_size,
            lot_tick_size=lot_size,
            tick_size_digits=float_digits(tick_size),
            lot_size_digits=float_digits(lot_size),
            suffix=suffix,
            broker_info={broker: info},
        )
@ -175,9 +227,7 @@ class Symbol(BaseModel):
        fqsn: str,
        info: dict[str, Any],

    ) -> Symbol:
        broker, key, suffix = unpack_fqsn(fqsn)
        return cls.from_broker_info(
            broker,
@ -221,6 +271,10 @@ class Symbol(BaseModel):
        else:
            return (key, broker)

    @property
    def fqsn(self) -> str:
        return '.'.join(self.tokens()).lower()

    def front_fqsn(self) -> str:
        '''
        fqsn = "fully qualified symbol name"
@ -240,18 +294,24 @@ class Symbol(BaseModel):
        '''
        tokens = self.tokens()
        fqsn = '.'.join(map(str.lower, tokens))
        return fqsn

    def quantize_size(
        self,
        size: float,

    ) -> Decimal:
        '''
        Truncate input ``size: float`` using ``Decimal``
        and ``.lot_size_digits``.

        '''
        digits = self.lot_size_digits
        return Decimal(size).quantize(
            Decimal(f'1.{"0".ljust(digits, "0")}'),
            rounding=ROUND_HALF_EVEN
        )
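
A hedged usage sketch (the precision value here is illustrative): for a symbol with ``lot_size_digits == 3``, ``sym.quantize_size(0.123456789)`` returns ``Decimal('0.123')``; the ``ROUND_HALF_EVEN`` (banker's rounding) mode only comes into play on exact half-ties.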
def _nan_to_closest_num(array: np.ndarray):
@ -18,13 +18,24 @@
ToOlS fOr CoPInG wITh "tHE wEB" protocols.

"""
from contextlib import (
    asynccontextmanager,
    AsyncExitStack,
)
from itertools import count
from types import ModuleType
from typing import (
    Any,
    Optional,
    Callable,
    AsyncGenerator,
    Iterable,
)
import json

import trio
import trio_websocket
from wsproto.utilities import LocalProtocolError
from trio_websocket._impl import (
    ConnectionClosed,
    DisconnectionTimeout,
@ -35,43 +46,53 @@ from trio_websocket._impl import (
from ..log import get_logger

from .types import Struct

log = get_logger(__name__)
class NoBsWs:
    '''
    Make ``trio_websocket`` sockets stay up no matter the bs.

    You can provide a ``fixture`` async-context-manager which will be
    entered/exited around each reconnect operation.

    '''
    recon_errors = (
        ConnectionClosed,
        DisconnectionTimeout,
        ConnectionRejected,
        HandshakeError,
        ConnectionTimeout,
        LocalProtocolError,
    )

    def __init__(
        self,
        url: str,
        stack: AsyncExitStack,
        fixture: Optional[Callable] = None,
        serializer: ModuleType = json
    ):
        self.url = url
        self.fixture = fixture
        self._stack = stack
        self._ws: 'WebSocketConnection' = None  # noqa

        # TODO: is there some method we can call
        # on the underlying `._ws` to get this?
        self._connected: bool = False
async def _connect( async def _connect(
self, self,
tries: int = 1000, tries: int = 1000,
) -> None: ) -> None:
self._connected = False
while True: while True:
try: try:
await self._stack.aclose() await self._stack.aclose()
except (DisconnectionTimeout, RuntimeError): except self.recon_errors:
await trio.sleep(0.5) await trio.sleep(0.5)
else: else:
break break
@ -82,19 +103,18 @@ class NoBsWs:
self._ws = await self._stack.enter_async_context( self._ws = await self._stack.enter_async_context(
trio_websocket.open_websocket_url(self.url) trio_websocket.open_websocket_url(self.url)
) )
# rerun user code fixture
if self.token == '': if self.fixture is not None:
# rerun user code fixture
ret = await self._stack.enter_async_context( ret = await self._stack.enter_async_context(
self.fixture(self) self.fixture(self)
) )
else:
ret = await self._stack.enter_async_context(
self.fixture(self, self.token)
)
assert ret is None assert ret is None
log.info(f'Connection success: {self.url}') log.info(f'Connection success: {self.url}')
self._connected = True
return self._ws return self._ws
except self.recon_errors as err: except self.recon_errors as err:
@ -104,11 +124,15 @@ class NoBsWs:
f'{type(err)}...retry attempt {i}' f'{type(err)}...retry attempt {i}'
) )
await trio.sleep(0.5) await trio.sleep(0.5)
self._connected = False
continue continue
else: else:
log.exception('ws connection fail...') log.exception('ws connection fail...')
raise last_err raise last_err
def connected(self) -> bool:
return self._connected
async def send_msg( async def send_msg(
self, self,
data: Any, data: Any,
@ -128,21 +152,26 @@ class NoBsWs:
except self.recon_errors: except self.recon_errors:
await self._connect() await self._connect()
def __aiter__(self):
return self
async def __anext__(self):
return await self.recv_msg()
@asynccontextmanager @asynccontextmanager
async def open_autorecon_ws( async def open_autorecon_ws(
url: str, url: str,
# TODO: proper type annot smh # TODO: proper type annot smh
fixture: Callable, fixture: Optional[Callable] = None,
# used for authenticated websockets
token: str = '',
) -> AsyncGenerator[tuple[...], NoBsWs]: ) -> AsyncGenerator[tuple[...], NoBsWs]:
"""Apparently we can QoS for all sorts of reasons..so catch em. """Apparently we can QoS for all sorts of reasons..so catch em.
""" """
async with AsyncExitStack() as stack: async with AsyncExitStack() as stack:
ws = NoBsWs(url, token, stack, fixture=fixture) ws = NoBsWs(url, stack, fixture=fixture)
await ws._connect() await ws._connect()
try: try:
@ -150,3 +179,114 @@ async def open_autorecon_ws(
finally: finally:
await stack.aclose() await stack.aclose()
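A hedged usage sketch of the reworked API: the ``fixture`` is any async context manager taking the live ``NoBsWs`` and is re-entered on every reconnect (handy for re-subscribing); the endpoint and subscribe payload below are made up for illustration:

from contextlib import asynccontextmanager
import trio

@asynccontextmanager
async def resub(ws):
    # re-run on each (re)connect cycle
    await ws.send_msg({'event': 'subscribe', 'feed': 'trades'})
    yield  # must yield `None` per the `assert ret is None` above

async def consume():
    async with open_autorecon_ws(
        'wss://example.com/ws',  # hypothetical endpoint
        fixture=resub,
    ) as ws:
        # `NoBsWs` is now directly async-iterable
        async for msg in ws:
            print(msg)

# trio.run(consume)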
'''
JSONRPC response-request style machinery for transparent multiplexing of msgs
over a NoBsWs.
'''
class JSONRPCResult(Struct):
id: int
jsonrpc: str = '2.0'
result: Optional[dict] = None
error: Optional[dict] = None
@asynccontextmanager
async def open_jsonrpc_session(
url: str,
start_id: int = 0,
response_type: type = JSONRPCResult,
request_type: Optional[type] = None,
request_hook: Optional[Callable] = None,
error_hook: Optional[Callable] = None,
) -> Callable[[str, dict], dict]:
async with (
trio.open_nursery() as n,
open_autorecon_ws(url) as ws
):
rpc_id: Iterable = count(start_id)
rpc_results: dict[int, dict] = {}
async def json_rpc(method: str, params: dict) -> dict:
'''
Perform a JSON-RPC call and wait for the result; raise an
exception if an error field is present in the response.
'''
msg = {
'jsonrpc': '2.0',
'id': next(rpc_id),
'method': method,
'params': params
}
_id = msg['id']
rpc_results[_id] = {
'result': None,
'event': trio.Event()
}
await ws.send_msg(msg)
await rpc_results[_id]['event'].wait()
ret = rpc_results[_id]['result']
del rpc_results[_id]
if ret.error is not None:
raise Exception(json.dumps(ret.error, indent=4))
return ret
async def recv_task():
'''
Receives every ws message and stores it in its corresponding
result field, then sets the event to wake up the original sender
task. Also receives responses to requests originating from
the server side.
'''
async for msg in ws:
match msg:
case {
'result': _,
'id': mid,
} if res_entry := rpc_results.get(mid):
res_entry['result'] = response_type(**msg)
res_entry['event'].set()
case {
'result': _,
'id': mid,
} if not rpc_results.get(mid):
log.warning(
f'Unexpected ws msg: {json.dumps(msg, indent=4)}'
)
case {
'method': _,
'params': _,
}:
log.debug(f'Received\n{msg}')
if request_hook:
await request_hook(request_type(**msg))
case {
'error': error
}:
log.warning(f'Received\n{error}')
if error_hook:
await error_hook(response_type(**msg))
case _:
log.warning(f'Unhandled JSON-RPC msg!?\n{msg}')
n.start_soon(recv_task)
yield json_rpc
n.cancel_scope.cancel()
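A usage sketch for the new JSON-RPC layer; the endpoint, method name and params here are hypothetical:

import trio

async def get_server_time():
    async with open_jsonrpc_session(
        'wss://example.com/api/v2',  # hypothetical endpoint
    ) as json_rpc:
        # each call draws a unique id from `count()`, parks on a
        # `trio.Event` and is woken by `recv_task()` when the
        # matching response id arrives.
        result = await json_rpc('public/get_time', {})
        print(result)

# trio.run(get_server_time)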
@ -0,0 +1,109 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from pprint import pformat
from typing import (
Any,
TYPE_CHECKING,
)
import pyqtgraph as pg
import numpy as np
import tractor
if TYPE_CHECKING:
import docker
from ._ahab import DockerContainer
from piker.log import (
get_logger,
get_console_log
)
import asks
log = get_logger(__name__)
# container level config
_config = {
'port': 19200,
'log_level': 'debug',
}
def start_elasticsearch(
client: docker.DockerClient,
**kwargs,
) -> tuple[DockerContainer, dict[str, Any]]:
'''
Start and supervise an elasticsearch instance with its config bind-mounted
in from the piker config directory on the system.
The equivalent cli cmd to this code is:
sudo docker run \
-itd \
--rm \
--network=host \
--mount type=bind,source="$(pwd)"/elastic,target=/usr/share/elasticsearch/data \
--env "elastic_username=elastic" \
--env "elastic_password=password" \
--env "xpack.security.enabled=false" \
elastic
'''
import docker
get_console_log('info', name=__name__)
dcntr: DockerContainer = client.containers.run(
'piker:elastic',
name='piker-elastic',
network='host',
detach=True,
remove=True
)
async def start_matcher(msg: str):
try:
health = (await asks.get(
'http://localhost:19200/_cat/health',
params={'format': 'json'}
)).json()
except OSError:
log.error('could not reach elastic container')
return False
log.info(health)
return health[0]['status'] == 'green'
async def stop_matcher(msg: str):
return msg == 'closed'
return (
dcntr,
{},
# expected startup and stop msgs
start_matcher,
stop_matcher,
)
File diff suppressed because it is too large
piker/data/flows.py
@ -0,0 +1,210 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
abstractions for organizing, managing and generally operating-on
real-time data processing data-structures.
"Streams, flumes, cascades and flows.."
"""
from __future__ import annotations
from typing import (
TYPE_CHECKING,
)
import tractor
import pendulum
import numpy as np
from .types import Struct
from ._source import (
Symbol,
)
from ._sharedmem import (
attach_shm_array,
ShmArray,
_Token,
)
# from .._profile import (
# Profiler,
# pg_profile_enabled,
# )
if TYPE_CHECKING:
# from pyqtgraph import PlotItem
from .feed import Feed
# TODO: ideas for further abstractions as per
# https://github.com/pikers/piker/issues/216 and
# https://github.com/pikers/piker/issues/270:
# - a ``Cascade`` would be the minimal "connection" of 2 ``Flumes``
# as per circuit parlance:
# https://en.wikipedia.org/wiki/Two-port_network#Cascade_connection
# - could cover the combination of our `FspAdmin` and the
# backend `.fsp._engine` related machinery to "connect" one flume
# to another?
# - a (financial signal) ``Flow`` would be a "collection" of such
# minimal cascades. Some engineering-based jargon concepts:
# - https://en.wikipedia.org/wiki/Signal_chain
# - https://en.wikipedia.org/wiki/Daisy_chain_(electrical_engineering)
# - https://en.wikipedia.org/wiki/Audio_signal_flow
# - https://en.wikipedia.org/wiki/Digital_signal_processing#Implementation
# - https://en.wikipedia.org/wiki/Dataflow_programming
# - https://en.wikipedia.org/wiki/Signal_programming
# - https://en.wikipedia.org/wiki/Incremental_computing
class Flume(Struct):
'''
Composite reference type which points to all the addressing handles
and other meta-data necessary for the read, measure and management
of a set of real-time updated data flows.
Can be thought of as a "flow descriptor" or "flow frame" which
describes the high level properties of a set of data flows that can
be used seamlessly across process-memory boundaries.
Each instance's sub-components normally includes:
- a msg oriented quote stream provided via an IPC transport
- history and real-time shm buffers which are both real-time
updated and backfilled.
- associated startup indexing information related to both buffer
real-time-append and historical prepend addresses.
- low level APIs to read and measure the updated data and manage
queuing properties.
'''
symbol: Symbol
first_quote: dict
_rt_shm_token: _Token
# optional since some data flows won't have a "downsampled" history
# buffer/stream (eg. FSPs).
_hist_shm_token: _Token | None = None
# private shm refs loaded dynamically from tokens
_hist_shm: ShmArray | None = None
_rt_shm: ShmArray | None = None
stream: tractor.MsgStream | None = None
izero_hist: int = 0
izero_rt: int = 0
throttle_rate: int | None = None
# TODO: do we need this really if we can pull the `Portal` from
# ``tractor``'s internals?
feed: Feed | None = None
@property
def rt_shm(self) -> ShmArray:
if self._rt_shm is None:
self._rt_shm = attach_shm_array(
token=self._rt_shm_token,
readonly=True,
)
return self._rt_shm
@property
def hist_shm(self) -> ShmArray:
if self._hist_shm_token is None:
raise RuntimeError(
'No shm token has been set for the history buffer?'
)
if (
self._hist_shm is None
):
self._hist_shm = attach_shm_array(
token=self._hist_shm_token,
readonly=True,
)
return self._hist_shm
async def receive(self) -> dict:
return await self.stream.receive()
def get_ds_info(
self,
) -> tuple[float, float, float]:
'''
Compute the "downsampling" ratio info between the historical shm
buffer and the real-time (HFT) one.
Return a tuple of the fast sample period, historical sample
period and ratio between them.
'''
times = self.hist_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
hist_step_size_s = (end - start).seconds
times = self.rt_shm.array['time']
end = pendulum.from_timestamp(times[-1])
start = pendulum.from_timestamp(times[times != times[-1]][-1])
rt_step_size_s = (end - start).seconds
ratio = hist_step_size_s / rt_step_size_s
return (
rt_step_size_s,
hist_step_size_s,
ratio,
)
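The step-detection trick used in ``get_ds_info()`` (diff the newest timestamp against the most recent *distinct* one, so a still-updating last row doesn't produce a zero period) shown in isolation:

import numpy as np

times = np.array([0., 60., 120., 180., 180.])  # last bar still live
last = times[-1]
step = last - times[times != last][-1]
assert step == 60.0  # a naive `times[-1] - times[-2]` would give 0.0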
# TODO: get native msgspec decoding for these working
def to_msg(self) -> dict:
msg = self.to_dict()
msg['symbol'] = msg['symbol'].to_dict()
# can't serialize the stream or feed objects, it's expected
# you'll have a ref to it since this msg should be rxed on
# a stream on whatever far end IPC..
msg.pop('stream')
msg.pop('feed')
return msg
@classmethod
def from_msg(cls, msg: dict) -> dict:
symbol = Symbol(**msg.pop('symbol'))
return cls(
symbol=symbol,
**msg,
)
def get_index(
self,
time_s: float,
array: np.ndarray,
) -> int | float:
'''
Return the array shm-buffer index for the given epoch time.
'''
times = array['time']
first = np.searchsorted(
times,
time_s,
side='left',
)
imx = times.shape[0] - 1
return min(first, imx)
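Why the clamp above: ``np.searchsorted`` returns ``len(times)`` when the requested epoch lies beyond the newest sample, which would index out of bounds; a minimal demonstration:

import numpy as np

times = np.array([10., 20., 30.])
first = np.searchsorted(times, 99., side='left')  # -> 3, one past the end
imx = times.shape[0] - 1
assert min(first, imx) == 2  # clamped to the last valid index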
@ -35,10 +35,11 @@ from typing import (
) )
import time import time
from math import isnan from math import isnan
from pathlib import Path
from bidict import bidict from bidict import bidict
import msgpack from msgspec.msgpack import encode, decode
import pyqtgraph as pg # import pyqtgraph as pg
import numpy as np import numpy as np
import tractor import tractor
from trio_websocket import open_websocket_url from trio_websocket import open_websocket_url
@ -56,6 +57,7 @@ if TYPE_CHECKING:
from .feed import maybe_open_feed from .feed import maybe_open_feed
from ..log import get_logger, get_console_log from ..log import get_logger, get_console_log
from .._profile import Profiler
log = get_logger(__name__) log = get_logger(__name__)
@ -131,7 +133,10 @@ def start_marketstore(
mktsdir = os.path.join(config._config_dir, 'marketstore') mktsdir = os.path.join(config._config_dir, 'marketstore')
# create when dne # create dirs when dne
if not os.path.isdir(config._config_dir):
Path(config._config_dir).mkdir(parents=True, exist_ok=True)
if not os.path.isdir(mktsdir): if not os.path.isdir(mktsdir):
os.mkdir(mktsdir) os.mkdir(mktsdir)
@ -185,13 +190,20 @@ def start_marketstore(
init=True, init=True,
# remove=True, # remove=True,
) )
async def start_matcher(msg: str):
return "launching tcp listener for all services..." in msg
async def stop_matcher(msg: str):
return "exiting..." in msg
return ( return (
dcntr, dcntr,
_config, _config,
# expected startup and stop msgs # expected startup and stop msgs
"launching tcp listener for all services...", start_matcher,
"exiting...", stop_matcher,
) )
@ -387,50 +399,54 @@ class Storage:
async def load( async def load(
self, self,
fqsn: str, fqsn: str,
timeframe: int,
) -> tuple[ ) -> tuple[
dict[int, np.ndarray], # timeframe (in secs) to series np.ndarray, # timeframe sampled array-series
Optional[datetime], # first dt Optional[datetime], # first dt
Optional[datetime], # last dt Optional[datetime], # last dt
]: ]:
first_tsdb_dt, last_tsdb_dt = None, None first_tsdb_dt, last_tsdb_dt = None, None
tsdb_arrays = await self.read_ohlcv( hist = await self.read_ohlcv(
fqsn, fqsn,
# on first load we don't need to pull the max # on first load we don't need to pull the max
# history per request size worth. # history per request size worth.
limit=3000, limit=3000,
timeframe=timeframe,
) )
log.info(f'Loaded tsdb history {tsdb_arrays}') log.info(f'Loaded tsdb history {hist}')
if tsdb_arrays: if len(hist):
fastest = list(tsdb_arrays.values())[0] times = hist['Epoch']
times = fastest['Epoch']
first, last = times[0], times[-1] first, last = times[0], times[-1]
first_tsdb_dt, last_tsdb_dt = map( first_tsdb_dt, last_tsdb_dt = map(
pendulum.from_timestamp, [first, last] pendulum.from_timestamp, [first, last]
) )
return tsdb_arrays, first_tsdb_dt, last_tsdb_dt return (
hist, # array-data
first_tsdb_dt, # start of query-frame
last_tsdb_dt, # most recent
)
async def read_ohlcv( async def read_ohlcv(
self, self,
fqsn: str, fqsn: str,
timeframe: Optional[Union[int, str]] = None, timeframe: int | str,
end: Optional[int] = None, end: Optional[int] = None,
limit: int = int(800e3), limit: int = int(800e3),
) -> tuple[ ) -> np.ndarray:
MarketstoreClient,
Union[dict, np.ndarray]
]:
client = self.client client = self.client
syms = await client.list_symbols() syms = await client.list_symbols()
if fqsn not in syms: if fqsn not in syms:
return {} return {}
tfstr = tf_in_1s[1] # use the provided timeframe or 1s by default
tfstr = tf_in_1s.get(timeframe, tf_in_1s[1])
params = Params( params = Params(
symbols=fqsn, symbols=fqsn,
@ -444,58 +460,72 @@ class Storage:
limit=limit, limit=limit,
) )
if timeframe is None: try:
log.info(f'starting {fqsn} tsdb granularity scan..')
# loop through and try to find highest granularity
for tfstr in tf_in_1s.values():
try:
log.info(f'querying for {tfstr}@{fqsn}')
params.set('timeframe', tfstr)
result = await client.query(params)
break
except purerpc.grpclib.exceptions.UnknownError:
# XXX: this is already logged by the container and
# thus shows up through `marketstored` logs relay.
# log.warning(f'{tfstr}@{fqsn} not found')
continue
else:
return {}
else:
result = await client.query(params) result = await client.query(params)
except purerpc.grpclib.exceptions.UnknownError as err:
# indicate there is no history for this timeframe
log.exception(
f'Unknown mkts QUERY error: {params}\n'
f'{err.args}'
)
return {}
# TODO: it turns out column access on recarrays is actually slower: # TODO: it turns out column access on recarrays is actually slower:
# https://jakevdp.github.io/PythonDataScienceHandbook/02.09-structured-data-numpy.html#RecordArrays:-Structured-Arrays-with-a-Twist # https://jakevdp.github.io/PythonDataScienceHandbook/02.09-structured-data-numpy.html#RecordArrays:-Structured-Arrays-with-a-Twist
# it might make sense to make these structured arrays? # it might make sense to make these structured arrays?
# Fill out a `numpy` array-results map data_set = result.by_symbols()[fqsn]
arrays = {} array = data_set.array
for fqsn, data_set in result.by_symbols().items():
arrays.setdefault(fqsn, {})[
tf_in_1s.inverse[data_set.timeframe]
] = data_set.array
return arrays[fqsn][timeframe] if timeframe else arrays[fqsn] # XXX: ensure sample rate is as expected
time = data_set.array['Epoch']
if len(time) > 1:
time_step = time[-1] - time[-2]
ts = tf_in_1s.inverse[data_set.timeframe]
if time_step != ts:
log.warning(
f'MKTS BUG: wrong timeframe loaded: {time_step}\n'
'YOUR DATABASE LIKELY CONTAINS BAD DATA FROM AN OLD BUG\n'
f'WIPING HISTORY FOR {ts}s'
)
await self.delete_ts(fqsn, timeframe)
# try reading again..
return await self.read_ohlcv(
fqsn,
timeframe,
end,
limit,
)
return array
async def delete_ts( async def delete_ts(
self, self,
key: str, key: str,
timeframe: Optional[Union[int, str]] = None, timeframe: Optional[Union[int, str]] = None,
fmt: str = 'OHLCV',
) -> bool: ) -> bool:
client = self.client client = self.client
syms = await client.list_symbols() syms = await client.list_symbols()
print(syms) print(syms)
# if key not in syms: if key not in syms:
# raise KeyError(f'`{fqsn}` table key not found?') raise KeyError(f'`{key}` table key not found in\n{syms}?')
return await client.destroy(tbk=key) tbk = mk_tbk((
key,
tf_in_1s.get(timeframe, tf_in_1s[60]),
fmt,
))
return await client.destroy(tbk=tbk)
async def write_ohlcv( async def write_ohlcv(
self, self,
fqsn: str, fqsn: str,
ohlcv: np.ndarray, ohlcv: np.ndarray,
timeframe: int,
append_and_duplicate: bool = True, append_and_duplicate: bool = True,
limit: int = int(800e3), limit: int = int(800e3),
@ -519,17 +549,18 @@ class Storage:
m, r = divmod(len(mkts_array), limit) m, r = divmod(len(mkts_array), limit)
tfkey = tf_in_1s[timeframe]
for i in range(m, 1): for i in range(m, 1):
to_push = mkts_array[i-1:i*limit] to_push = mkts_array[i-1:i*limit]
# write to db # write to db
resp = await self.client.write( resp = await self.client.write(
to_push, to_push,
tbk=f'{fqsn}/1Sec/OHLCV', tbk=f'{fqsn}/{tfkey}/OHLCV',
# NOTE: will append duplicates # NOTE: will append duplicates
# for the same timestamp-index. # for the same timestamp-index.
# TODO: pre deduplicate? # TODO: pre-deduplicate?
isvariablelength=append_and_duplicate, isvariablelength=append_and_duplicate,
) )
@ -548,7 +579,7 @@ class Storage:
# write to db # write to db
resp = await self.client.write( resp = await self.client.write(
to_push, to_push,
tbk=f'{fqsn}/1Sec/OHLCV', tbk=f'{fqsn}/{tfkey}/OHLCV',
# NOTE: will append duplicates # NOTE: will append duplicates
# for the same timestamp-index. # for the same timestamp-index.
@ -577,6 +608,7 @@ class Storage:
# def delete_range(self, start_dt, end_dt) -> None: # def delete_range(self, start_dt, end_dt) -> None:
# ... # ...
@acm @acm
async def open_storage_client( async def open_storage_client(
fqsn: str, fqsn: str,
@ -626,7 +658,7 @@ async def tsdb_history_update(
# * the original data feed arch blurb: # * the original data feed arch blurb:
# - https://github.com/pikers/piker/issues/98 # - https://github.com/pikers/piker/issues/98
# #
profiler = pg.debug.Profiler( profiler = Profiler(
disabled=False, # not pg_profile_enabled(), disabled=False, # not pg_profile_enabled(),
delayed=False, delayed=False,
) )
@ -638,34 +670,35 @@ async def tsdb_history_update(
[fqsn], [fqsn],
start_stream=False, start_stream=False,
) as (feed, stream), ) as feed,
): ):
profiler(f'opened feed for {fqsn}') profiler(f'opened feed for {fqsn}')
to_append = feed.shm.array # to_append = feed.hist_shm.array
to_prepend = None # to_prepend = None
if fqsn: if fqsn:
symbol = feed.symbols.get(fqsn) flume = feed.flumes[fqsn]
symbol = flume.symbol
if symbol: if symbol:
fqsn = symbol.front_fqsn() fqsn = symbol.fqsn
# diff db history with shm and only write the missing portions # diff db history with shm and only write the missing portions
ohlcv = feed.shm.array # ohlcv = flume.hist_shm.array
# TODO: use pg profiler # TODO: use pg profiler
tsdb_arrays = await storage.read_ohlcv(fqsn) # for secs in (1, 60):
# hist diffing # tsdb_array = await storage.read_ohlcv(
if tsdb_arrays: # fqsn,
for secs in (1, 60): # timeframe=timeframe,
ts = tsdb_arrays.get(secs) # )
if ts is not None and len(ts): # # hist diffing:
# these aren't currently used but can be referenced from # # these aren't currently used but can be referenced from
# within the embedded ipython shell below. # # within the embedded ipython shell below.
to_append = ohlcv[ohlcv['time'] > ts['Epoch'][-1]] # to_append = ohlcv[ohlcv['time'] > ts['Epoch'][-1]]
to_prepend = ohlcv[ohlcv['time'] < ts['Epoch'][0]] # to_prepend = ohlcv[ohlcv['time'] < ts['Epoch'][0]]
profiler('Finished db arrays diffs') # profiler('Finished db arrays diffs')
syms = await storage.client.list_symbols() syms = await storage.client.list_symbols()
log.info(f'Existing tsdb symbol set:\n{pformat(syms)}') log.info(f'Existing tsdb symbol set:\n{pformat(syms)}')
@ -774,12 +807,13 @@ async def stream_quotes(
async with open_websocket_url(f'ws://{host}:{port}/ws') as ws: async with open_websocket_url(f'ws://{host}:{port}/ws') as ws:
# send subs topics to server # send subs topics to server
resp = await ws.send_message( resp = await ws.send_message(
msgpack.dumps({'streams': list(tbks.values())})
encode({'streams': list(tbks.values())})
) )
log.info(resp) log.info(resp)
async def recv() -> dict[str, Any]: async def recv() -> dict[str, Any]:
return msgpack.loads((await ws.get_message()), encoding='utf-8') return decode(await ws.get_message())
streams = (await recv())['streams'] streams = (await recv())['streams']
log.info(f"Subscribed to {streams}") log.info(f"Subscribed to {streams}")
@ -0,0 +1,88 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Built-in (extension) types.
"""
import sys
from typing import Optional
from pprint import pformat
import msgspec
class Struct(
msgspec.Struct,
# https://jcristharif.com/msgspec/structs.html#tagged-unions
# tag='pikerstruct',
# tag=True,
):
'''
A "human friendlier" (aka repl buddy) struct subtype.
'''
def to_dict(self) -> dict:
return {
f: getattr(self, f)
for f in self.__struct_fields__
}
# Lul, doesn't seem to work that well..
# def __repr__(self):
# # only turn on pprint when we detect a python REPL
# # at runtime B)
# if (
# hasattr(sys, 'ps1')
# # TODO: check if we're in pdb
# ):
# return self.pformat()
# return super().__repr__()
def pformat(self) -> str:
return f'Struct({pformat(self.to_dict())})'
def copy(
self,
update: Optional[dict] = None,
) -> msgspec.Struct:
'''
Validate-typecast all self-defined fields and return a copy of us
with all such fields.
This is kinda like the default behaviour in `pydantic.BaseModel`.
'''
if update:
for k, v in update.items():
setattr(self, k, v)
# roundtrip serialize to validate
return msgspec.msgpack.Decoder(
type=type(self)
).decode(
msgspec.msgpack.Encoder().encode(self)
)
def typecast(
self,
# fields: Optional[list[str]] = None,
) -> None:
for fname, ftype in self.__annotations__.items():
setattr(self, fname, ftype(getattr(self, fname)))
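A sketch of the round-trip-validating ``.copy()`` in use, reusing the ``Struct`` subtype defined above (the ``Point`` struct itself is made up for illustration):

class Point(Struct):
    x: int
    y: int

p = Point(x=1, y=2)
# NOTE: `update` mutates the original in place *before* the
# msgpack encode/decode round trip re-validates all fields.
p2 = p.copy(update={'y': 3})
assert (p2.x, p2.y) == (1, 3)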
@ -78,7 +78,8 @@ class Fsp:
# + the consuming fsp *to* the consumers output # + the consuming fsp *to* the consumers output
# shm flow. # shm flow.
_flow_registry: dict[ _flow_registry: dict[
tuple[_Token, str], _Token, tuple[_Token, str],
tuple[_Token, Optional[ShmArray]],
] = {} ] = {}
def __init__( def __init__(
@ -120,7 +121,6 @@ class Fsp:
): ):
return self.func(*args, **kwargs) return self.func(*args, **kwargs)
# TODO: lru_cache this? pretty sure it'll work?
def get_shm( def get_shm(
self, self,
src_shm: ShmArray, src_shm: ShmArray,
@ -131,12 +131,27 @@ class Fsp:
for this "instance" of a signal processor for for this "instance" of a signal processor for
the given ``key``. the given ``key``.
The destination shm "token" and array are cached if possible to
minimize multiple stdlib/system calls.
''' '''
dst_token = self._flow_registry[ dst_token, maybe_array = self._flow_registry[
(src_shm._token, self.name) (src_shm._token, self.name)
] ]
shm = attach_shm_array(dst_token) if maybe_array is None:
return shm self._flow_registry[
(src_shm._token, self.name)
] = (
dst_token,
# "cache" the ``ShmArray`` such that
# we call the underlying "attach" code as few
# times as possible as per:
# - https://github.com/pikers/piker/issues/359
# - https://github.com/pikers/piker/issues/332
maybe_array := attach_shm_array(dst_token)
)
return maybe_array
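The registry change above amounts to a lazy attach-once cache keyed alongside the token; schematically (``attach`` is a stand-in for ``attach_shm_array()`` and the registry contents are invented):

_registry: dict[str, tuple[str, object | None]] = {
    'flow': ('token-a', None),
}

def get_handle(key: str, attach) -> object:
    token, handle = _registry[key]
    if handle is None:
        # attach exactly once, then memoize next to the token
        _registry[key] = (token, handle := attach(token))
    return handle

h = get_handle('flow', attach=lambda tok: f'shm<{tok}>')
# second lookup returns the cached handle; `attach` never re-runs
assert get_handle('flow', attach=lambda tok: 1/0) is h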
def fsp( def fsp(
@ -184,7 +199,10 @@ def maybe_mk_fsp_shm(
# TODO: load output types from `Fsp` # TODO: load output types from `Fsp`
# - should `index` be a required internal field? # - should `index` be a required internal field?
fsp_dtype = np.dtype( fsp_dtype = np.dtype(
[('index', int)] + [('index', int)]
+
[('time', float)]
+
[(field_name, float) for field_name in target.outputs] [(field_name, float) for field_name in target.outputs]
) )
@ -21,12 +21,13 @@ core task logic for processing chains
from dataclasses import dataclass from dataclasses import dataclass
from functools import partial from functools import partial
from typing import ( from typing import (
AsyncIterator, Callable, Optional, AsyncIterator,
Callable,
Optional,
Union, Union,
) )
import numpy as np import numpy as np
import pyqtgraph as pg
import trio import trio
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor import tractor
@ -35,14 +36,22 @@ from tractor.msg import NamespacePath
from ..log import get_logger, get_console_log from ..log import get_logger, get_console_log
from .. import data from .. import data
from ..data import attach_shm_array from ..data import attach_shm_array
from ..data.feed import Feed from ..data.feed import (
Flume,
Feed,
)
from ..data._sharedmem import ShmArray from ..data._sharedmem import ShmArray
from ..data._sampling import (
_default_delay_s,
open_sample_stream,
)
from ..data._source import Symbol from ..data._source import Symbol
from ._api import ( from ._api import (
Fsp, Fsp,
_load_builtins, _load_builtins,
_Token, _Token,
) )
from .._profile import Profiler
log = get_logger(__name__) log = get_logger(__name__)
@ -77,7 +86,7 @@ async def filter_quotes_by_sym(
async def fsp_compute( async def fsp_compute(
symbol: Symbol, symbol: Symbol,
feed: Feed, flume: Flume,
quote_stream: trio.abc.ReceiveChannel, quote_stream: trio.abc.ReceiveChannel,
src: ShmArray, src: ShmArray,
@ -90,7 +99,7 @@ async def fsp_compute(
) -> None: ) -> None:
profiler = pg.debug.Profiler( profiler = Profiler(
delayed=False, delayed=False,
disabled=True disabled=True
) )
@ -105,16 +114,17 @@ async def fsp_compute(
filter_quotes_by_sym(fqsn, quote_stream), filter_quotes_by_sym(fqsn, quote_stream),
# XXX: currently the ``ohlcv`` arg # XXX: currently the ``ohlcv`` arg
feed.shm, flume.rt_shm,
) )
# Conduct a single iteration of fsp with historical bars input # HISTORY COMPUTE PHASE
# and get historical output # conduct a single iteration of fsp with historical bars input
# and get historical output.
history_output: Union[ history_output: Union[
dict[str, np.ndarray], # multi-output case dict[str, np.ndarray], # multi-output case
np.ndarray, # single output case np.ndarray, # single output case
] ]
history_output = await out_stream.__anext__() history_output = await anext(out_stream)
func_name = func.__name__ func_name = func.__name__
profiler(f'{func_name} generated history') profiler(f'{func_name} generated history')
@ -126,9 +136,13 @@ async def fsp_compute(
# each respective field. # each respective field.
fields = getattr(dst.array.dtype, 'fields', None).copy() fields = getattr(dst.array.dtype, 'fields', None).copy()
fields.pop('index') fields.pop('index')
history: Optional[np.ndarray] = None # TODO: nptyping here! history_by_field: Optional[np.ndarray] = None
src_time = src.array['time']
if fields and len(fields) > 1 and fields: if (
fields and
len(fields) > 1
):
if not isinstance(history_output, dict): if not isinstance(history_output, dict):
raise ValueError( raise ValueError(
f'`{func_name}` is a multi-output FSP and should yield a ' f'`{func_name}` is a multi-output FSP and should yield a '
@ -139,7 +153,7 @@ async def fsp_compute(
if key in history_output: if key in history_output:
output = history_output[key] output = history_output[key]
if history is None: if history_by_field is None:
if output is None: if output is None:
length = len(src.array) length = len(src.array)
@ -149,7 +163,7 @@ async def fsp_compute(
# using the first output, determine # using the first output, determine
# the length of the struct-array that # the length of the struct-array that
# will be pushed to shm. # will be pushed to shm.
history = np.zeros( history_by_field = np.zeros(
length, length,
dtype=dst.array.dtype dtype=dst.array.dtype
) )
@ -157,7 +171,7 @@ async def fsp_compute(
if output is None: if output is None:
continue continue
history[key] = output history_by_field[key] = output
# single-key output stream # single-key output stream
else: else:
@ -166,11 +180,15 @@ async def fsp_compute(
f'`{func_name}` is a single output FSP and should yield an ' f'`{func_name}` is a single output FSP and should yield an '
'`np.ndarray` for history' '`np.ndarray` for history'
) )
history = np.zeros( history_by_field = np.zeros(
len(history_output), len(history_output),
dtype=dst.array.dtype dtype=dst.array.dtype
) )
history[func_name] = history_output history_by_field[func_name] = history_output
history_by_field['time'] = src_time[-len(history_by_field):]
history_output['time'] = src.array['time']
# TODO: XXX: # TODO: XXX:
# THERE'S A BIG BUG HERE WITH THE `index` field since we're # THERE'S A BIG BUG HERE WITH THE `index` field since we're
@ -187,7 +205,10 @@ async def fsp_compute(
# TODO: can we use this `start` flag instead of the manual # TODO: can we use this `start` flag instead of the manual
# setting above? # setting above?
index = dst.push(history, start=first) index = dst.push(
history_by_field,
start=first,
)
profiler(f'{func_name} pushed history') profiler(f'{func_name} pushed history')
profiler.finish() profiler.finish()
@ -213,8 +234,14 @@ async def fsp_compute(
log.debug(f"{func_name}: {processed}") log.debug(f"{func_name}: {processed}")
key, output = processed key, output = processed
index = src.index # dst.array[-1][key] = output
dst.array[-1][key] = output dst.array[[key, 'time']][-1] = (
output,
# TODO: what about pushing ``time.time_ns()``
# in which case we'll need to round at the graphics
# processing / sampling layer?
src.array[-1]['time']
)
# NOTE: for now we aren't streaming this to the consumer # NOTE: for now we aren't streaming this to the consumer
# stream latest array index entry which basically just acts # stream latest array index entry which basically just acts
@ -225,6 +252,7 @@ async def fsp_compute(
# N-consumers who subscribe for the real-time output, # N-consumers who subscribe for the real-time output,
# which we'll likely want to implement using local-mem # which we'll likely want to implement using local-mem
# chans for the fan out? # chans for the fan out?
# index = src.index
# if attach_stream: # if attach_stream:
# await client_stream.send(index) # await client_stream.send(index)
@ -261,7 +289,7 @@ async def cascade(
destination shm array buffer. destination shm array buffer.
''' '''
profiler = pg.debug.Profiler( profiler = Profiler(
delayed=False, delayed=False,
disabled=False disabled=False
) )
@ -284,9 +312,10 @@ async def cascade(
# TODO: ugh i hate this wind/unwind to list over the wire # TODO: ugh i hate this wind/unwind to list over the wire
# but not sure how else to do it. # but not sure how else to do it.
for (token, fsp_name, dst_token) in shm_registry: for (token, fsp_name, dst_token) in shm_registry:
Fsp._flow_registry[ Fsp._flow_registry[(
(_Token.from_msg(token), fsp_name) _Token.from_msg(token),
] = _Token.from_msg(dst_token) fsp_name,
)] = _Token.from_msg(dst_token), None
fsp: Fsp = reg.get( fsp: Fsp = reg.get(
NamespacePath(ns_path) NamespacePath(ns_path)
@ -298,6 +327,7 @@ async def cascade(
raise ValueError(f'Unknown fsp target: {ns_path}') raise ValueError(f'Unknown fsp target: {ns_path}')
# open a data feed stream with requested broker # open a data feed stream with requested broker
feed: Feed
async with data.feed.maybe_open_feed( async with data.feed.maybe_open_feed(
[fqsn], [fqsn],
@ -307,14 +337,13 @@ async def cascade(
# needs to get throttled the ticks we generate. # needs to get throttled the ticks we generate.
# tick_throttle=60, # tick_throttle=60,
) as (feed, quote_stream): ) as feed:
symbol = feed.symbols[fqsn]
flume = feed.flumes[fqsn]
symbol = flume.symbol
assert src.token == flume.rt_shm.token
profiler(f'{func}: feed up') profiler(f'{func}: feed up')
assert src.token == feed.shm.token
# last_len = new_len = len(src.array)
func_name = func.__name__ func_name = func.__name__
async with ( async with (
trio.open_nursery() as n, trio.open_nursery() as n,
@ -324,8 +353,8 @@ async def cascade(
fsp_compute, fsp_compute,
symbol=symbol, symbol=symbol,
feed=feed, flume=flume,
quote_stream=quote_stream, quote_stream=flume.stream,
# shm # shm
src=src, src=src,
@ -361,7 +390,7 @@ async def cascade(
) -> tuple[TaskTracker, int]: ) -> tuple[TaskTracker, int]:
# TODO: adopt an incremental update engine/approach # TODO: adopt an incremental update engine/approach
# where possible here eventually! # where possible here eventually!
log.debug(f're-syncing fsp {func_name} to source') log.info(f're-syncing fsp {func_name} to source')
tracker.cs.cancel() tracker.cs.cancel()
await tracker.complete.wait() await tracker.complete.wait()
tracker, index = await n.start(fsp_target) tracker, index = await n.start(fsp_target)
@ -374,14 +403,16 @@ async def cascade(
'key': dst_shm_token, 'key': dst_shm_token,
'first': dst._first.value, 'first': dst._first.value,
'last': dst._last.value, 'last': dst._last.value,
}}) }
})
return tracker, index return tracker, index
def is_synced( def is_synced(
src: ShmArray, src: ShmArray,
dst: ShmArray dst: ShmArray
) -> tuple[bool, int, int]: ) -> tuple[bool, int, int]:
'''Predicate to determine if a destination FSP '''
Predicate to determine if a destination FSP
output array is aligned to its source array. output array is aligned to its source array.
''' '''
@ -390,16 +421,15 @@ async def cascade(
return not ( return not (
# the source is likely backfilling and we must # the source is likely backfilling and we must
# sync history calculations # sync history calculations
len_diff > 2 or len_diff > 2
# we aren't step synced to the source and may be # we aren't step synced to the source and may be
# leading/lagging by a step # leading/lagging by a step
step_diff > 1 or or step_diff > 1
step_diff < 0 or step_diff < 0
), step_diff, len_diff ), step_diff, len_diff
async def poll_and_sync_to_step( async def poll_and_sync_to_step(
tracker: TaskTracker, tracker: TaskTracker,
src: ShmArray, src: ShmArray,
dst: ShmArray, dst: ShmArray,
@ -418,18 +448,23 @@ async def cascade(
# detect sample period step for subscription to increment # detect sample period step for subscription to increment
# signal # signal
times = src.array['time'] times = src.array['time']
delay_s = times[-1] - times[times != times[-1]][-1] if len(times) > 1:
last_ts = times[-1]
delay_s = float(last_ts - times[times != last_ts][-1])
else:
# our default "HFT" sample rate.
delay_s = _default_delay_s
# Increment the underlying shared memory buffer on every # sub and increment the underlying shared memory buffer
# "increment" msg received from the underlying data feed. # on every step msg received from the global `samplerd`
async with feed.index_stream( # service.
int(delay_s) async with open_sample_stream(float(delay_s)) as istream:
) as istream:
profiler(f'{func_name}: sample stream up') profiler(f'{func_name}: sample stream up')
profiler.finish() profiler.finish()
async for _ in istream: async for i in istream:
# print(f'FSP incrementing {i}')
# respawn the compute task if the source # respawn the compute task if the source
# array has been updated such that we compute # array has been updated such that we compute
@ -458,3 +493,23 @@ async def cascade(
last = array[-1:].copy() last = array[-1:].copy()
dst.push(last) dst.push(last)
# sync with source buffer's time step
src_l2 = src.array[-2:]
src_li, src_lt = src_l2[-1][['index', 'time']]
src_2li, src_2lt = src_l2[-2][['index', 'time']]
dst._array['time'][src_li] = src_lt
dst._array['time'][src_2li] = src_2lt
# last2 = dst.array[-2:]
# if (
# last2[-1]['index'] != src_li
# or last2[-2]['index'] != src_2li
# ):
# dstl2 = list(last2)
# srcl2 = list(src_l2)
# print(
# # f'{dst.token}\n'
# f'src: {srcl2}\n'
# f'dst: {dstl2}\n'
# )
@ -234,7 +234,7 @@ async def flow_rates(
# FSPs, user input, and possibly any general event stream in # FSPs, user input, and possibly any general event stream in
# real-time. Hint: ideally implemented with caching until mutated # real-time. Hint: ideally implemented with caching until mutated
# ;) # ;)
period: 'Param[int]' = 6, # noqa period: 'Param[int]' = 1, # noqa
# TODO: support other means by providing a map # TODO: support other means by providing a map
# to weights `partial()`-ed with `wma()`? # to weights `partial()`-ed with `wma()`?
@ -268,8 +268,7 @@ async def flow_rates(
'dark_dvlm_rate': None, 'dark_dvlm_rate': None,
} }
# TODO: 3.10 do ``anext()`` quote = await anext(source)
quote = await source.__anext__()
# ltr = 0 # ltr = 0
# lvr = 0 # lvr = 0
piker/pp.py
File diff suppressed because it is too large
@ -0,0 +1 @@
TEST_CONFIG_DIR_PATH = '_testing'
@ -32,16 +32,22 @@ def mk_marker_path(
style: str, style: str,
) -> QGraphicsPathItem: ) -> QGraphicsPathItem:
"""Add a marker to be displayed on the line wrapped in a ``QGraphicsPathItem`` '''
ready to be placed using scene coordinates (not view). Add a marker to be displayed on the line wrapped in
a ``QGraphicsPathItem`` ready to be placed using scene coordinates
(not view).
**Arguments** **Arguments**
style String indicating the style of marker to add: style String indicating the style of marker to add:
``'<|'``, ``'|>'``, ``'>|'``, ``'|<'``, ``'<|>'``, ``'<|'``, ``'|>'``, ``'>|'``, ``'|<'``, ``'<|>'``,
``'>|<'``, ``'^'``, ``'v'``, ``'o'`` ``'>|<'``, ``'^'``, ``'v'``, ``'o'``
size Size of the marker in pixels.
""" This code is taken nearly verbatim from the
`InfiniteLine.addMarker()` method but does not attempt to be aware
of low(er) level graphics controls and expects for the output
polygon to be applied to a ``QGraphicsPathItem``.
'''
path = QtGui.QPainterPath() path = QtGui.QPainterPath()
if style == 'o': if style == 'o':
@ -87,7 +93,8 @@ def mk_marker_path(
class LevelMarker(QGraphicsPathItem): class LevelMarker(QGraphicsPathItem):
'''An arrow marker path graphic which redraws itself '''
An arrow marker path graphic which redraws itself
to the specified view coordinate level on each paint cycle. to the specified view coordinate level on each paint cycle.
''' '''
@ -104,7 +111,8 @@ class LevelMarker(QGraphicsPathItem):
# get polygon and scale # get polygon and scale
super().__init__() super().__init__()
self.scale(size, size) # self.setScale(size, size)
self.setScale(size)
# internally generates path # internally generates path
self._style = None self._style = None
@ -114,6 +122,7 @@ class LevelMarker(QGraphicsPathItem):
self.get_level = get_level self.get_level = get_level
self._on_paint = on_paint self._on_paint = on_paint
self.scene_x = lambda: chart.marker_right_points()[1] self.scene_x = lambda: chart.marker_right_points()[1]
self.level: float = 0 self.level: float = 0
self.keep_in_view = keep_in_view self.keep_in_view = keep_in_view
@ -149,12 +158,9 @@ class LevelMarker(QGraphicsPathItem):
def w(self) -> float: def w(self) -> float:
return self.path_br().width() return self.path_br().width()
def position_in_view( def position_in_view(self) -> None:
self, '''
# level: float, Show a pp off-screen indicator for a level label.
) -> None:
'''Show a pp off-screen indicator for a level label.
This is like in fps games where you have a gps "nav" indicator This is like in fps games where you have a gps "nav" indicator
but your teammate is outside the range of view, except in 2D, on but your teammate is outside the range of view, except in 2D, on
@ -162,7 +168,6 @@ class LevelMarker(QGraphicsPathItem):
''' '''
level = self.get_level() level = self.get_level()
view = self.chart.getViewBox() view = self.chart.getViewBox()
vr = view.state['viewRange'] vr = view.state['viewRange']
ymn, ymx = vr[1] ymn, ymx = vr[1]
@ -186,7 +191,6 @@ class LevelMarker(QGraphicsPathItem):
) )
elif level < ymn: # pin to bottom of view elif level < ymn: # pin to bottom of view
self.setPos( self.setPos(
QPointF( QPointF(
x, x,
@ -211,7 +215,8 @@ class LevelMarker(QGraphicsPathItem):
w: QtWidgets.QWidget w: QtWidgets.QWidget
) -> None: ) -> None:
'''Core paint which we override to always update '''
Core paint which we override to always update
our marker position in scene coordinates from a our marker position in scene coordinates from a
view coordinate "level". view coordinate "level".
@ -235,11 +240,12 @@ def qgo_draw_markers(
right_offset: float, right_offset: float,
) -> float: ) -> float:
"""Paint markers in ``pg.GraphicsItem`` style by first '''
Paint markers in ``pg.GraphicsItem`` style by first
removing the view transform for the painter, drawing the markers removing the view transform for the painter, drawing the markers
in scene coords, then restoring the view coords. in scene coords, then restoring the view coords.
""" '''
# paint markers in native coordinate system # paint markers in native coordinate system
orig_tr = p.transform() orig_tr = p.transform()
@ -19,15 +19,16 @@ Main app startup and run.
''' '''
from functools import partial from functools import partial
from types import ModuleType
from PyQt5.QtCore import QEvent from PyQt5.QtCore import QEvent
import trio import trio
from .._daemon import maybe_spawn_brokerd from .._daemon import maybe_spawn_brokerd
from ..brokers import get_brokermod
from . import _event from . import _event
from ._exec import run_qtractor from ._exec import run_qtractor
from ..data.feed import install_brokerd_search from ..data.feed import install_brokerd_search
from ..data._source import unpack_fqsn
from . import _search from . import _search
from ._chart import GodWidget from ._chart import GodWidget
from ..log import get_logger from ..log import get_logger
@ -36,27 +37,26 @@ log = get_logger(__name__)
async def load_provider_search( async def load_provider_search(
brokermod: str,
broker: str,
loglevel: str, loglevel: str,
) -> None: ) -> None:
log.info(f'loading brokerd for {broker}..') name = brokermod.name
log.info(f'loading brokerd for {name}..')
async with ( async with (
maybe_spawn_brokerd( maybe_spawn_brokerd(
broker, name,
loglevel=loglevel loglevel=loglevel
) as portal, ) as portal,
install_brokerd_search( install_brokerd_search(
portal, portal,
get_brokermod(broker), brokermod,
), ),
): ):
# keep search engine stream up until cancelled # keep search engine stream up until cancelled
await trio.sleep_forever() await trio.sleep_forever()
@ -66,8 +66,8 @@ async def _async_main(
# implicit required argument provided by ``qtractor_run()`` # implicit required argument provided by ``qtractor_run()``
main_widget: GodWidget, main_widget: GodWidget,
sym: str, syms: list[str],
brokernames: str, brokers: dict[str, ModuleType],
loglevel: str, loglevel: str,
) -> None: ) -> None:
@ -78,6 +78,8 @@ async def _async_main(
""" """
from . import _display from . import _display
from ._pg_overrides import _do_overrides
_do_overrides()
godwidget = main_widget godwidget = main_widget
@ -97,6 +99,11 @@ async def _async_main(
sbar = godwidget.window.status_bar sbar = godwidget.window.status_bar
starting_done = sbar.open_status('starting ze sexy chartz') starting_done = sbar.open_status('starting ze sexy chartz')
needed_brokermods: dict[str, ModuleType] = {}
for fqsn in syms:
brokername, *_ = unpack_fqsn(fqsn)
needed_brokermods[brokername] = brokers[brokername]
async with ( async with (
trio.open_nursery() as root_n, trio.open_nursery() as root_n,
): ):
@ -107,18 +114,14 @@ async def _async_main(
# setup search widget and focus main chart view at startup # setup search widget and focus main chart view at startup
# search widget is a singleton alongside the godwidget # search widget is a singleton alongside the godwidget
search = _search.SearchWidget(godwidget=godwidget) search = _search.SearchWidget(godwidget=godwidget)
search.bar.unfocus() # search.bar.unfocus()
# godwidget.hbox.addWidget(search)
godwidget.hbox.addWidget(search)
godwidget.search = search godwidget.search = search
symbol, _, provider = sym.rpartition('.')
# this internally starts a ``display_symbol_data()`` task above # this internally starts a ``display_symbol_data()`` task above
order_mode_ready = await godwidget.load_symbol( order_mode_ready = await godwidget.load_symbols(
provider, fqsns=syms,
symbol, loglevel=loglevel,
loglevel
) )
# spin up a search engine for the local cached symbol set # spin up a search engine for the local cached symbol set
@ -135,8 +138,12 @@ async def _async_main(
): ):
# load other providers into search **after** # load other providers into search **after**
# the chart's select cache # the chart's select cache
for broker in brokernames: for brokername, mod in needed_brokermods.items():
root_n.start_soon(load_provider_search, broker, loglevel) root_n.start_soon(
load_provider_search,
mod,
loglevel,
)
await order_mode_ready.wait() await order_mode_ready.wait()
@ -165,19 +172,22 @@ async def _async_main(
def _main( def _main(
sym: str, syms: list[str],
brokernames: [str], brokermods: list[ModuleType],
piker_loglevel: str, piker_loglevel: str,
tractor_kwargs, tractor_kwargs,
) -> None: ) -> None:
''' '''
Sync entry point to start a chart: a ``tractor`` + Qt runtime Sync entry point to start a chart: a ``tractor`` + Qt runtime.
entry point
''' '''
run_qtractor( run_qtractor(
func=_async_main, func=_async_main,
args=(sym, brokernames, piker_loglevel), args=(
main_widget=GodWidget, syms,
{mod.name: mod for mod in brokermods},
piker_loglevel,
),
main_widget_type=GodWidget,
tractor_kwargs=tractor_kwargs, tractor_kwargs=tractor_kwargs,
) )
@ -18,6 +18,7 @@
Chart axes graphics and behavior. Chart axes graphics and behavior.
""" """
from __future__ import annotations
from functools import lru_cache from functools import lru_cache
from typing import Optional, Callable from typing import Optional, Callable
from math import floor from math import floor
@ -27,6 +28,7 @@ import pyqtgraph as pg
from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QPointF from PyQt5.QtCore import QPointF
from . import _pg_overrides as pgo
from ..data._source import float_digits from ..data._source import float_digits
from ._label import Label from ._label import Label
from ._style import DpiAwareFont, hcolor, _font from ._style import DpiAwareFont, hcolor, _font
@ -39,12 +41,17 @@ class Axis(pg.AxisItem):
''' '''
A better axis that sizes tick contents considering font size. A better axis that sizes tick contents considering font size.
Also includes tick values lru caching, originally proposed upstream but
never accepted:
https://github.com/pyqtgraph/pyqtgraph/pull/2160
''' '''
def __init__( def __init__(
self, self,
linkedsplits, plotitem: pgo.PlotItem,
typical_max_str: str = '100 000.000', typical_max_str: str = '100 000.000 ',
text_color: str = 'bracket', text_color: str = 'bracket',
lru_cache_tick_strings: bool = True,
**kwargs **kwargs
) -> None: ) -> None:
@ -56,41 +63,78 @@ class Axis(pg.AxisItem):
# XXX: pretty sure this makes things slower # XXX: pretty sure this makes things slower
# self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache) # self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
self.linkedsplits = linkedsplits self.pi = plotitem
self._dpi_font = _font self._dpi_font = _font
self.setTickFont(_font.font) self.setTickFont(_font.font)
font_size = self._dpi_font.font.pixelSize() font_size = self._dpi_font.font.pixelSize()
style_conf = {
'textFillLimits': [(0, 0.5)],
'tickFont': self._dpi_font.font,
}
text_offset = None
if self.orientation in ('bottom',): if self.orientation in ('bottom',):
text_offset = floor(0.25 * font_size) text_offset = floor(0.25 * font_size)
elif self.orientation in ('left', 'right'): elif self.orientation in ('left', 'right'):
text_offset = floor(font_size / 2) text_offset = floor(font_size / 2)
self.setStyle(**{ if text_offset:
'textFillLimits': [(0, 0.5)], style_conf.update({
'tickFont': self._dpi_font.font, # offset of text *away from* axis line in px
# use approx. half the font pixel size (height)
# offset of text *away from* axis line in px 'tickTextOffset': text_offset,
# use approx. half the font pixel size (height) })
'tickTextOffset': text_offset,
})
self.setStyle(**style_conf)
self.setTickFont(_font.font) self.setTickFont(_font.font)
# NOTE: this is for surrounding "border" # NOTE: this is for surrounding "border"
self.setPen(_axis_pen) self.setPen(_axis_pen)
# this is the text color # this is the text color
# self.setTextPen(pg.mkPen(hcolor(text_color)))
self.text_color = text_color self.text_color = text_color
# generate a bounding rect based on sizing to a "typical"
# maximum length-ed string defined as init default.
self.typical_br = _font._qfm.boundingRect(typical_max_str) self.typical_br = _font._qfm.boundingRect(typical_max_str)
# size the pertinent axis dimension to a "typical value" # size the pertinent axis dimension to a "typical value"
self.size_to_values() self.size_to_values()
# NOTE: requires override ``.tickValues()`` method seen below.
if lru_cache_tick_strings:
self.tickStrings = lru_cache(
maxsize=2**20
)(self.tickStrings)
# axis "sticky" labels
self._stickies: dict[str, YAxisLabel] = {}
# NOTE: only overriden to cast tick values entries into tuples
# for use with the lru caching.
def tickValues(
self,
minVal: float,
maxVal: float,
size: int,
) -> list[tuple[float, tuple[str]]]:
'''
Repack tick values into tuples for lru caching.
'''
ticks = []
for scalar, values in super().tickValues(minVal, maxVal, size):
ticks.append((
scalar,
tuple(values), # this
))
return ticks
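The tuple repacking exists because ``functools.lru_cache`` hashes its arguments and the ``list`` values returned by the parent ``tickValues()`` aren't hashable; a minimal demonstration:

from functools import lru_cache

@lru_cache(maxsize=256)
def tick_strings(values: tuple[float, ...]) -> list[str]:
    return [f'{v:.2f}' for v in values]

tick_strings((1.0, 2.0))  # fine, tuples are hashable
try:
    tick_strings([1.0, 2.0])  # type: ignore
except TypeError as err:
    print(err)  # unhashable type: 'list'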
@property @property
def text_color(self) -> str: def text_color(self) -> str:
return self._text_color return self._text_color
@ -106,6 +150,38 @@ class Axis(pg.AxisItem):
def txt_offsets(self) -> tuple[int, int]: def txt_offsets(self) -> tuple[int, int]:
return tuple(self.style['tickTextOffset']) return tuple(self.style['tickTextOffset'])
def add_sticky(
self,
pi: pgo.PlotItem,
name: None | str = None,
digits: None | int = 2,
bg_color='default',
fg_color='black',
) -> YAxisLabel:
# if the sticky is for our symbol
# use the tick size precision for display
name = name or pi.name
digits = digits or 2
# TODO: ``._ysticks`` should really be an attr on each
# ``PlotItem`` now instead of the containing widget (because of
# overlays) ?
# add y-axis "last" value label
sticky = self._stickies[name] = YAxisLabel(
pi=pi,
parent=self,
digits=digits, # TODO: pass this from symbol data
opacity=0.9, # slight see-through
bg_color=bg_color,
fg_color=fg_color,
)
pi.sigRangeChanged.connect(sticky.update_on_resize)
return sticky
class PriceAxis(Axis): class PriceAxis(Axis):
@ -167,7 +243,6 @@ class PriceAxis(Axis):
self._min_tick = size self._min_tick = size
def size_to_values(self) -> None: def size_to_values(self) -> None:
# self.typical_br = _font._qfm.boundingRect(typical_max_str)
self.setWidth(self.typical_br.width()) self.setWidth(self.typical_br.width())
# XXX: drop for now since it just eats up h space # XXX: drop for now since it just eats up h space
@ -222,28 +297,50 @@ class DynamicDateAxis(Axis):
) -> list[str]: ) -> list[str]:
chart = self.linkedsplits.chart # XX: ARGGGGG AG:LKSKDJF:LKJSDFD
flow = chart._flows[chart.name] chart = self.pi.chart_widget
shm = flow.shm
bars = shm.array
first = shm._first.value
bars_len = len(bars) viz = chart._vizs[chart.name]
times = bars['time'] shm = viz.shm
array = shm.array
times = array['time']
i_0, i_l = times[0], times[-1]
epochs = times[list( # edge cases
map( if (
int, not indexes
filter( or
lambda i: i > 0 and i < bars_len, (indexes[0] < i_0
(i-first for i in indexes) and indexes[-1] < i_l)
or
(indexes[0] > i_0
and indexes[-1] > i_l)
):
return []
if viz.index_field == 'index':
arr_len = times.shape[0]
first = shm._first.value
epochs = times[
list(
map(
int,
filter(
lambda i: i > 0 and i < arr_len,
(i - first for i in indexes)
)
)
) )
) ]
)] else:
epochs = list(map(int, indexes))
# TODO: **don't** have this hard coded shift to EST # TODO: **don't** have this hard coded shift to EST
# delay = times[-1] - times[-2] # delay = times[-1] - times[-2]
dts = np.array(epochs, dtype='datetime64[s]') dts = np.array(
epochs,
dtype='datetime64[s]',
)
# see units listing: # see units listing:
# https://numpy.org/devdocs/reference/arrays.datetime.html#datetime-units # https://numpy.org/devdocs/reference/arrays.datetime.html#datetime-units
@ -261,24 +358,39 @@ class DynamicDateAxis(Axis):
spacing: float, spacing: float,
) -> list[str]: ) -> list[str]:
return self._indexes_to_timestrs(values)
# NOTE: handy for debugging the lru cache
# info = self.tickStrings.cache_info() # info = self.tickStrings.cache_info()
# print(info) # print(info)
return self._indexes_to_timestrs(values)
class AxisLabel(pg.GraphicsObject):

-   _x_margin = 0
-   _y_margin = 0
+   # relative offsets *OF* the bounding rect relative
+   # to parent graphics object.
+   # eg. <parent>| => <_x_br_offset> => | <text> |
+   _x_br_offset: float = 0
+   _y_br_offset: float = 0
+
+   # relative offsets of text *within* bounding rect
+   # eg. | <_x_margin> => <text> |
+   _x_margin: float = 0
+   _y_margin: float = 0
+
+   # multiplier of the text content's height in order
+   # to force a larger (y-dimension) bounding rect.
+   _y_txt_h_scaling: float = 1

    def __init__(
        self,
        parent: pg.GraphicsItem,
        digits: int = 2,

-       bg_color: str = 'bracket',
+       bg_color: str = 'default',
        fg_color: str = 'black',
-       opacity: int = 1,  # XXX: seriously don't set this to 0
+       opacity: int = .8,  # XXX: seriously don't set this to 0
        font_size: str = 'default',

        use_arrow: bool = True,
@@ -289,6 +401,7 @@ class AxisLabel(pg.GraphicsObject):
        self.setParentItem(parent)

        self.setFlag(self.ItemIgnoresTransformations)
+       self.setZValue(100)

        # XXX: pretty sure this is faster
        self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
@@ -320,14 +433,14 @@ class AxisLabel(pg.GraphicsObject):
        p: QtGui.QPainter,
        opt: QtWidgets.QStyleOptionGraphicsItem,
        w: QtWidgets.QWidget

    ) -> None:
-       """Draw a filled rectangle based on the size of ``.label_str`` text.
+       '''
+       Draw a filled rectangle based on the size of ``.label_str`` text.

        Subtypes can customize further by overloading ``.draw()``.

-       """
-       # p.setCompositionMode(QtWidgets.QPainter.CompositionMode_SourceOver)
+       '''

        if self.label_str:

            # if not self.rect:
@@ -338,7 +451,11 @@ class AxisLabel(pg.GraphicsObject):
            p.setFont(self._dpifont.font)
            p.setPen(self.fg_color)
-           p.drawText(self.rect, self.text_flags, self.label_str)
+           p.drawText(
+               self.rect,
+               self.text_flags,
+               self.label_str,
+           )
    def draw(
        self,
@@ -346,6 +463,8 @@ class AxisLabel(pg.GraphicsObject):
        rect: QtCore.QRectF

    ) -> None:
+       p.setOpacity(self.opacity)

        if self._use_arrow:
            if not self.path:
                self._draw_arrow_path()
@@ -353,15 +472,13 @@ class AxisLabel(pg.GraphicsObject):
            p.drawPath(self.path)
            p.fillPath(self.path, pg.mkBrush(self.bg_color))

-       # this adds a nice black outline around the label for some odd
-       # reason; ok by us
-       p.setOpacity(self.opacity)
-
        # this causes the L1 labels to glitch out if used in the subtype
        # and it will leave a small black strip with the arrow path if
        # done before the above
-       p.fillRect(self.rect, self.bg_color)
+       p.fillRect(
+           self.rect,
+           self.bg_color,
+       )
    def boundingRect(self):  # noqa
        '''
@@ -405,15 +522,18 @@ class AxisLabel(pg.GraphicsObject):
        txt_h, txt_w = txt_br.height(), txt_br.width()
        # print(f'wsw: {self._dpifont.boundingRect(" ")}')

-       # allow subtypes to specify a static width and height
+       # allow subtypes to override width and height
        h, w = self.size_hint()
+       # print(f'axis size: {self._parent.size()}')
+       # print(f'axis geo: {self._parent.geometry()}')

        self.rect = QtCore.QRectF(
-           0, 0,
+
+           # relative bounds offsets
+           self._x_br_offset,
+           self._y_br_offset,

            (w or txt_w) + self._x_margin / 2,
-           (h or txt_h) + self._y_margin / 2,
+
+           (h or txt_h) * self._y_txt_h_scaling + (self._y_margin / 2),
        )
        # print(self.rect)
        # hb = self.path.controlPointRect()
@@ -489,7 +609,7 @@ class XAxisLabel(AxisLabel):

class YAxisLabel(AxisLabel):

-   _y_margin = 4
+   _y_margin: int = 4

    text_flags = (
        QtCore.Qt.AlignLeft
@@ -500,19 +620,19 @@ class YAxisLabel(AxisLabel):

    def __init__(
        self,
-       chart,
+       pi: pgo.PlotItem,
        *args,
        **kwargs

    ) -> None:
        super().__init__(*args, **kwargs)

-       self._chart = chart
-
-       chart.sigRangeChanged.connect(self.update_on_resize)
+       self._pi = pi
+       pi.sigRangeChanged.connect(self.update_on_resize)

        self._last_datum = (None, None)

+       self.x_offset = 0
        # pull text offset from axis from parent axis
        if getattr(self._parent, 'txt_offsets', False):
            self.x_offset, y_offset = self._parent.txt_offsets()
@@ -531,7 +651,8 @@ class YAxisLabel(AxisLabel):
        value: float,  # data for text

        # on odd dimension and/or adds nice black line
-       x_offset: Optional[int] = None
+       x_offset: int = 0,

    ) -> None:

        # this is read inside ``.paint()``
@@ -577,7 +698,7 @@ class YAxisLabel(AxisLabel):
        self._last_datum = (index, value)

        self.update_label(
-           self._chart.mapFromView(QPointF(index, value)),
+           self._pi.mapFromView(QPointF(index, value)),
            value
        )

File diff suppressed because it is too large


@@ -18,8 +18,13 @@
Mouse interaction graphics

"""
+from __future__ import annotations
from functools import partial
-from typing import Optional, Callable
+from typing import (
+    Optional,
+    Callable,
+    TYPE_CHECKING,
+)

import inspect
import numpy as np
@@ -36,6 +41,12 @@ from ._style import (
from ._axes import YAxisLabel, XAxisLabel
from ..log import get_logger

+if TYPE_CHECKING:
+    from ._chart import (
+        ChartPlotWidget,
+        LinkedSplits,
+    )

log = get_logger(__name__)
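# a minimal sketch of the ``TYPE_CHECKING`` import pattern adopted
# above: the cyclic import only happens for static type checkers,
# while ``from __future__ import annotations`` keeps annotations as
# strings at runtime (the module name below is hypothetical):
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from some_chart_module import ChartWidget  # never imported at runtime

def focus(chart: ChartWidget) -> None:
    print(f'focusing {chart}')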
@@ -58,9 +69,9 @@ class LineDot(pg.CurvePoint):
        curve: pg.PlotCurveItem,
        index: int,

-       plot: 'ChartPlotWidget',  # type: ignore # noqa
+       plot: ChartPlotWidget,  # type: ignore # noqa
        pos=None,
-       color: str = 'default_light',
+       color: str = 'bracket',

    ) -> None:
        # scale from dpi aware font size
@@ -151,7 +162,7 @@ class ContentsLabel(pg.LabelItem):
    def __init__(
        self,

-       # chart: 'ChartPlotWidget',  # noqa
+       # chart: ChartPlotWidget,  # noqa
        view: pg.ViewBox,
        anchor_at: str = ('top', 'right'),
@@ -187,12 +198,11 @@ class ContentsLabel(pg.LabelItem):
        self,

        name: str,
-       index: int,
+       ix: int,
        array: np.ndarray,

    ) -> None:
        # this being "html" is the dumbest shit :eyeroll:
-       first = array[0]['index']

        self.setText(
            "<b>i</b>:{index}<br/>"
@@ -205,7 +215,7 @@ class ContentsLabel(pg.LabelItem):
            "<b>C</b>:{}<br/>"
            "<b>V</b>:{}<br/>"
            "<b>wap</b>:{}".format(
-               *array[index - first][
+               *array[ix][
                    [
                        'time',
                        'open',
@@ -217,7 +227,7 @@ class ContentsLabel(pg.LabelItem):
                    ]
                ],
                name=name,
-               index=index,
+               index=ix,
            )
        )
@@ -225,15 +235,12 @@ class ContentsLabel(pg.LabelItem):
        self,

        name: str,
-       index: int,
+       ix: int,
        array: np.ndarray,

    ) -> None:
-
-       first = array[0]['index']
-       if index < array[-1]['index'] and index > first:
-           data = array[index - first][name]
-           self.setText(f"{name}: {data:.2f}")
+       data = array[ix][name]
+       self.setText(f"{name}: {data:.2f}")
class ContentsLabels:
@@ -244,7 +251,7 @@ class ContentsLabels:
    '''
    def __init__(
        self,

-       linkedsplits: 'LinkedSplits',  # type: ignore # noqa
+       linkedsplits: LinkedSplits,  # type: ignore # noqa

    ) -> None:
@@ -258,17 +265,20 @@ class ContentsLabels:
    def update_labels(
        self,
-       index: int,
+       x_in: int,

    ) -> None:
        for chart, name, label, update in self._labels:

-           flow = chart._flows[name]
-           array = flow.shm.array
+           viz = chart.get_viz(name)
+           array = viz.shm.array
+           index = array[viz.index_field]
+           start = index[0]
+           stop = index[-1]

            if not (
-               index >= 0
-               and index < array[-1]['index']
+               x_in >= start
+               and x_in <= stop
            ):
                # out of range
                print('WTF out of range?')
@@ -277,7 +287,10 @@ class ContentsLabels:
            # call provided update func with data point
            try:
                label.show()
-               update(index, array)
+               ix = np.searchsorted(index, x_in)
+               if ix > len(array):
+                   breakpoint()
+               update(ix, array)

            except IndexError:
                log.exception(f"Failed to update label: {name}")
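# ``np.searchsorted`` is what maps the cursor's x-domain value (epoch
# time or absolute index) back to a row offset; in isolation:
import numpy as np

index = np.array([100, 105, 110, 115])  # eg. a 'time' or 'index' column
x_in = 107
ix = np.searchsorted(index, x_in)  # left-side insertion point
print(ix, index[ix])  # -> 2 110 (first entry >= x_in)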
@@ -289,7 +302,7 @@ class ContentsLabels:
    def add_label(
        self,

-       chart: 'ChartPlotWidget',  # type: ignore # noqa
+       chart: ChartPlotWidget,  # type: ignore # noqa
        name: str,
        anchor_at: tuple[str, str] = ('top', 'left'),
        update_func: Callable = ContentsLabel.update_from_value,
@@ -316,7 +329,7 @@ class Cursor(pg.GraphicsObject):
    def __init__(
        self,

-       linkedsplits: 'LinkedSplits',  # noqa
+       linkedsplits: LinkedSplits,  # noqa
        digits: int = 0

    ) -> None:
@@ -325,6 +338,8 @@ class Cursor(pg.GraphicsObject):
        self.linked = linkedsplits
        self.graphics: dict[str, pg.GraphicsObject] = {}
+       self.xaxis_label: Optional[XAxisLabel] = None
+       self.always_show_xlabel: bool = True
        self.plots: list['PlotChartWidget'] = []  # type: ignore # noqa
        self.active_plot = None
        self.digits: int = digits
@@ -336,7 +351,7 @@ class Cursor(pg.GraphicsObject):
        # XXX: not sure why these are instance variables?
        # It's not like we can change them on the fly..?
        self.pen = pg.mkPen(
-           color=hcolor('default'),
+           color=hcolor('bracket'),
            style=QtCore.Qt.DashLine,
        )
        self.lines_pen = pg.mkPen(
@@ -352,7 +367,7 @@ class Cursor(pg.GraphicsObject):
        self._lw = self.pixelWidth() * self.lines_pen.width()

        # xhair label's color name
-       self.label_color: str = 'default'
+       self.label_color: str = 'bracket'

        self._y_label_update: bool = True
@@ -385,7 +400,7 @@ class Cursor(pg.GraphicsObject):
    def add_plot(
        self,

-       plot: 'ChartPlotWidget',  # noqa
+       plot: ChartPlotWidget,  # noqa
        digits: int = 0,

    ) -> None:
@@ -405,7 +420,7 @@ class Cursor(pg.GraphicsObject):
        hl.hide()

        yl = YAxisLabel(
-           chart=plot,
+           pi=plot.plotItem,
            # parent=plot.getAxis('right'),
            parent=plot.pi_overlay.get_axis(plot.plotItem, 'right'),
            digits=digits or self.digits,
@@ -469,39 +484,58 @@ class Cursor(pg.GraphicsObject):
    def add_curve_cursor(
        self,

-       plot: 'ChartPlotWidget',  # noqa
+       chart: ChartPlotWidget,  # noqa
        curve: 'PlotCurveItem',  # noqa

    ) -> LineDot:
-       # if this plot contains curves add line dot "cursors" to denote
+       # if this chart contains curves add line dot "cursors" to denote
        # the current sample under the mouse
-       main_flow = plot._flows[plot.name]
+       main_viz = chart.get_viz(chart.name)

        # read out last index
-       i = main_flow.shm.array[-1]['index']
+       i = main_viz.shm.array[-1]['index']
        cursor = LineDot(
            curve,
            index=i,
-           plot=plot
+           plot=chart
        )
-       plot.addItem(cursor)
-       self.graphics[plot].setdefault('cursors', []).append(cursor)
+       chart.addItem(cursor)
+       self.graphics[chart].setdefault('cursors', []).append(cursor)
        return cursor

-   def mouseAction(self, action, plot):  # noqa
+   def mouseAction(
+       self,
+       action: str,
+       plot: ChartPlotWidget,
+
+   ) -> None:  # noqa
        log.debug(f"{(action, plot.name)}")
        if action == 'Enter':
            self.active_plot = plot
+           plot.linked.godwidget._active_cursor = self

            # show horiz line and y-label
            self.graphics[plot]['hl'].show()
            self.graphics[plot]['yl'].show()

-       else:  # Leave
+           if (
+               not self.always_show_xlabel
+               and not self.xaxis_label.isVisible()
+           ):
+               self.xaxis_label.show()

-           # hide horiz line and y-label
+       # Leave: hide horiz line and y-label
+       else:
            self.graphics[plot]['hl'].hide()
            self.graphics[plot]['yl'].hide()

+           if (
+               not self.always_show_xlabel
+               and self.xaxis_label.isVisible()
+           ):
+               self.xaxis_label.hide()
    def mouseMoved(
        self,
        coords: tuple[QPointF],  # noqa
@@ -590,13 +624,17 @@ class Cursor(pg.GraphicsObject):
            left_axis_width += left.width()

        # map back to abs (label-local) coordinates
-       self.xaxis_label.update_label(
-           abs_pos=(
-               plot.mapFromView(QPointF(vl_x, iy)) -
-               QPointF(left_axis_width, 0)
-           ),
-           value=ix,
-       )
+       if (
+           self.always_show_xlabel
+           or self.xaxis_label.isVisible()
+       ):
+           self.xaxis_label.update_label(
+               abs_pos=(
+                   plot.mapFromView(QPointF(vl_x, iy)) -
+                   QPointF(left_axis_width, 0)
+               ),
+               value=ix,
+           )

        self._datum_xy = ix, iy


@@ -28,10 +28,7 @@ from PyQt5.QtWidgets import QGraphicsItem
from PyQt5.QtCore import (
    Qt,
    QLineF,
-   QSizeF,
    QRectF,
-   # QRect,
-   QPointF,
)
from PyQt5.QtGui import (
    QPainter,
@@ -39,11 +36,8 @@ from PyQt5.QtGui import (
)
from .._profile import pg_profile_enabled, ms_slower_then
from ._style import hcolor
-# from ._compression import (
-#     # ohlc_to_m4_line,
-#     ds_m4,
-# )
from ..log import get_logger
+from .._profile import Profiler
log = get_logger(__name__)

@@ -57,7 +51,117 @@ _line_styles: dict[str, int] = {
}

-class Curve(pg.GraphicsObject):
+class FlowGraphic(pg.GraphicsObject):
    '''
    Base class with minimal interface for `QPainterPath` implemented,
    real-time updated "data flow" graphics.

    See subtypes below.

    '''
    # sub-type customization methods
    declare_paintables: Callable | None = None
    sub_paint: Callable | None = None

    # XXX-NOTE-XXX: graphics caching B)
    # see explanation for different caching modes:
    # https://stackoverflow.com/a/39410081
    cache_mode: int = QGraphicsItem.DeviceCoordinateCache
    # XXX: WARNING item caching seems to only be useful
    # if we don't re-generate the entire QPainterPath every time
    # don't ever use this - it's a colossal nightmare of artefacts
    # and is disastrous for performance.
    # QGraphicsItem.ItemCoordinateCache

    # TODO: still open questions around coord-caching that we should
    # probably talk to a core dev about:
    # - if this makes transform interactions slower (such as zooming)
    #   and if so maybe if/when we implement a "history" mode for the
    #   view we disable this in that mode?

    def __init__(
        self,
        *args,
        name: str | None = None,

        # line styling
        color: str = 'bracket',
        last_step_color: str | None = None,
        fill_color: Optional[str] = None,
        style: str = 'solid',

        **kwargs

    ) -> None:

        self._name = name

        # primary graphics item used for history
        self.path: QPainterPath = QPainterPath()

        # additional path that can be optionally used for appends which
        # tries to avoid triggering an update/redraw of the presumably
        # larger historical ``.path`` above. the flag to enable
        # this behaviour is found in `Renderer.render()`.
        self.fast_path: QPainterPath | None = None

        # TODO: evaluate the path capacity stuff and see
        # if it really makes much diff pre-allocating it.
        # self._last_cap: int = 0
        # cap = path.capacity()
        # if cap != self._last_cap:
        #     print(f'NEW CAPACITY: {self._last_cap} -> {cap}')
        #     self._last_cap = cap

        # all history of curve is drawn in single px thickness
        self._color: str = color
        pen = pg.mkPen(hcolor(color), width=1)
        pen.setStyle(_line_styles[style])

        if 'dash' in style:
            pen.setDashPattern([8, 3])

        self._pen = pen
        self._brush = pg.functions.mkBrush(
            hcolor(fill_color or color)
        )

        # last segment is drawn in 2px thickness for emphasis
        if last_step_color:
            self.last_step_pen = pg.mkPen(
                hcolor(last_step_color),
                width=2,
            )
        else:
            self.last_step_pen = pg.mkPen(
                self._pen,
                width=2,
            )

        self._last_line: QLineF = QLineF()

        super().__init__(*args, **kwargs)

        # apply cache mode
        self.setCacheMode(self.cache_mode)

    def x_uppx(self) -> int:

        px_vecs = self.pixelVectors()[0]
        if px_vecs:
            return px_vecs.x()
        else:
            return 0

    def x_last(self) -> float | None:
        '''
        Return the last most x value of the last line segment or if not
        drawn yet, ``None``.

        '''
        return self._last_line.x1() if self._last_line else None
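# ``x_uppx()`` reports x-data-units-per-pixel from the scene transform;
# render code can use it to gate work once many samples collapse into
# a single pixel. A rough, illustrative sketch (threshold made up):
def should_downsample(x_uppx: float, threshold: float = 16) -> bool:
    # >= threshold x-steps per pixel means most samples are invisible
    return x_uppx >= threshold

assert should_downsample(32)
assert not should_downsample(0.5)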
class Curve(FlowGraphic):
    '''
    A faster, simpler, append friendly version of
    ``pyqtgraph.PlotCurveItem`` built for highly customizable real-time
@@ -74,7 +178,7 @@ class Curve(pg.GraphicsObject):
    lower level graphics data can be rendered in different threads and
    then read and drawn in this main thread without having to worry
    about dealing with Qt's concurrency primitives. See
-   ``piker.ui._flows.Renderer`` for details and logic related to lower
+   ``piker.ui._render.Renderer`` for details and logic related to lower
    level path generation and incremental update. The main differences in
    the path generation code include:

@@ -86,127 +190,38 @@ class Curve(pg.GraphicsObject):
    updates don't trigger a full path redraw.

    '''
-   # sub-type customization methods
-   sub_br: Optional[Callable] = None
-   sub_paint: Optional[Callable] = None
-   declare_paintables: Optional[Callable] = None
+   # TODO: can we remove this?
+   # sub_br: Optional[Callable] = None
    def __init__(
        self,
        *args,

-       step_mode: bool = False,
-       color: str = 'default_lightest',
-       fill_color: Optional[str] = None,
-       style: str = 'solid',
-       name: Optional[str] = None,
-       use_fpath: bool = True,
+       # color: str = 'default_lightest',
+       # fill_color: Optional[str] = None,
+       # style: str = 'solid',

        **kwargs

    ) -> None:

-       self._name = name
-
        # brutaaalll, see comments within..
        self.yData = None
        self.xData = None

-       # self._last_cap: int = 0
-       self.path: Optional[QPainterPath] = None
-
-       # additional path used for appends which tries to avoid
-       # triggering an update/redraw of the presumably larger
-       # historical ``.path`` above.
-       self.use_fpath = use_fpath
-       self.fast_path: Optional[QPainterPath] = None
-
        # TODO: we can probably just dispense with the parent since
        # we're basically only using the pen setting now...
        super().__init__(*args, **kwargs)

-       # all history of curve is drawn in single px thickness
-       pen = pg.mkPen(hcolor(color))
-       pen.setStyle(_line_styles[style])
-
-       if 'dash' in style:
-           pen.setDashPattern([8, 3])
-
-       self._pen = pen
-
-       # last segment is drawn in 2px thickness for emphasis
-       # self.last_step_pen = pg.mkPen(hcolor(color), width=2)
-       self.last_step_pen = pg.mkPen(pen, width=2)
-
-       # self._last_line: Optional[QLineF] = None
-       self._last_line = QLineF()
-       self._last_w: float = 1
-
-       # flat-top style histogram-like discrete curve
-       # self._step_mode: bool = step_mode
+       self._last_line: QLineF = QLineF()

        # self._fill = True
-       self._brush = pg.functions.mkBrush(hcolor(fill_color or color))
-
-       # NOTE: this setting seems to mostly prevent redraws on mouse
-       # interaction which is a huge boon for avg interaction latency.
-
-       # TODO: one question still remaining is if this makes trasform
-       # interactions slower (such as zooming) and if so maybe if/when
-       # we implement a "history" mode for the view we disable this in
-       # that mode?
-       # don't enable caching by default for the case where the
-       # only thing drawn is the "last" line segment which can
-       # have a weird artifact where it won't be fully drawn to its
-       # endpoint (something we saw on trade rate curves)
-       self.setCacheMode(QGraphicsItem.DeviceCoordinateCache)
-
-       # XXX: see explanation for different caching modes:
-       # https://stackoverflow.com/a/39410081
-       # seems to only be useful if we don't re-generate the entire
-       # QPainterPath every time
-       # curve.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
-
-       # don't ever use this - it's a colossal nightmare of artefacts
-       # and is disastrous for performance.
-       # curve.setCacheMode(QtWidgets.QGraphicsItem.ItemCoordinateCache)

        # allow sub-type customization
        declare = self.declare_paintables
        if declare:
            declare()

-   # TODO: probably stick this in a new parent
-   # type which will contain our own version of
-   # what ``PlotCurveItem`` had in terms of base
-   # functionality? A `FlowGraphic` maybe?
-   def x_uppx(self) -> int:
-
-       px_vecs = self.pixelVectors()[0]
-       if px_vecs:
-           xs_in_px = px_vecs.x()
-           return round(xs_in_px)
-       else:
-           return 0
-
-   def px_width(self) -> float:
-
-       vb = self.getViewBox()
-       if not vb:
-           return 0
-
-       vr = self.viewRect()
-       l, r = int(vr.left()), int(vr.right())
-
-       start, stop = self._xrange
-       lbar = max(l, start)
-       rbar = min(r, stop)
-
-       return vb.mapViewToDevice(
-           QLineF(lbar, 0, rbar, 0)
-       ).length()
-
    # XXX: lol brutal, the internals of `CurvePoint` (inherited by
    # our `LineDot`) required ``.getData()`` to work..
    def getData(self):
@@ -230,8 +245,8 @@ class Curve(pg.GraphicsObject):
            self.path.clear()

        if self.fast_path:
-           # self.fast_path.clear()
-           self.fast_path = None
+           self.fast_path.clear()
+           # self.fast_path = None
    @cm
    def reset_cache(self) -> None:
@@ -251,77 +266,65 @@ class Curve(pg.GraphicsObject):
            self.boundingRect = self._path_br
            return self._path_br()
+   # Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect
    def _path_br(self):
        '''
        Post init ``.boundingRect()```.

        '''
-       # hb = self.path.boundingRect()
-       hb = self.path.controlPointRect()
-       hb_size = hb.size()
-
-       fp = self.fast_path
-       if fp:
-           fhb = fp.controlPointRect()
-           hb_size = fhb.size() + hb_size
-
-       # print(f'hb_size: {hb_size}')
-
-       # if self._last_step_rect:
-       #     hb_size += self._last_step_rect.size()
-
-       # if self._line:
-       #     br = self._last_step_rect.bottomRight()
-
-       # tl = QPointF(
-       #     # self._vr[0],
-       #     # hb.topLeft().y(),
-       #     # 0,
-       #     # hb_size.height() + 1
-       # )
-
-       # br = self._last_step_rect.bottomRight()
-
-       w = hb_size.width()
-       h = hb_size.height()
-
-       sbr = self.sub_br
-       if sbr:
-           w, h = self.sub_br(w, h)
-       else:
-           # assume plain line graphic and use
-           # default unit step in each direction.
-
-           # only on a plane line do we include
-           # and extra index step's worth of width
-           # since in the step case the end of the curve
-           # actually terminates earlier so we don't need
-           # this for the last step.
-           w += self._last_w
-           # ll = self._last_line
-           h += 1  # ll.y2() - ll.y1()
-
-       # br = QPointF(
-       #     self._vr[-1],
-       #     # tl.x() + w,
-       #     tl.y() + h,
-       # )
-
-       br = QRectF(
-
-           # top left
-           # hb.topLeft()
-           # tl,
-           QPointF(hb.topLeft()),
-
-           # br,
-           # total size
-           # QSizeF(hb_size)
-           # hb_size,
-           QSizeF(w, h)
-       )
-       # print(f'bounding rect: {br}')
-       return br
+       # profiler = Profiler(
+       #     msg=f'Curve.boundingRect(): `{self._name}`',
+       #     disabled=not pg_profile_enabled(),
+       #     ms_threshold=ms_slower_then,
+       # )
+       pr = self.path.controlPointRect()
+       hb_tl, hb_br = (
+           pr.topLeft(),
+           pr.bottomRight(),
+       )
+       mn_y = hb_tl.y()
+       mx_y = hb_br.y()
+       most_left = hb_tl.x()
+       most_right = hb_br.x()
+       # profiler('calc path vertices')
+
+       # TODO: if/when we get fast path appends working in the
+       # `Renderer`, then we might need to actually use this..
+       # fp = self.fast_path
+       # if fp:
+       #     fhb = fp.controlPointRect()
+       #     # hb_size = fhb.size() + hb_size
+       #     br = pr.united(fhb)

+       # XXX: *was* a way to allow sub-types to extend the
+       # boundingrect calc, but in the one use case for a step curve
+       # doesn't seem like we need it as long as the last line segment
+       # is drawn as it is?

+       # sbr = self.sub_br
+       # if sbr:
+       #     # w, h = self.sub_br(w, h)
+       #     sub_br = sbr()
+       #     br = br.united(sub_br)

+       # assume plain line graphic and use
+       # default unit step in each direction.
+       ll = self._last_line
+       y1, y2 = ll.y1(), ll.y2()
+       x1, x2 = ll.x1(), ll.x2()

+       ymn = min(y1, y2, mn_y)
+       ymx = max(y1, y2, mx_y)
+       most_left = min(x1, x2, most_left)
+       most_right = max(x1, x2, most_right)
+       # profiler('calc last line vertices')

+       return QRectF(
+           most_left,
+           ymn,
+           most_right - most_left + 1,
+           ymx,
+       )
    def paint(
        self,
@@ -331,7 +334,7 @@ class Curve(pg.GraphicsObject):

    ) -> None:

-       profiler = pg.debug.Profiler(
+       profiler = Profiler(
            msg=f'Curve.paint(): `{self._name}`',
            disabled=not pg_profile_enabled(),
            ms_threshold=ms_slower_then,
@@ -339,18 +342,14 @@ class Curve(pg.GraphicsObject):

        sub_paint = self.sub_paint
        if sub_paint:
-           sub_paint(p, profiler)
+           sub_paint(p)

        p.setPen(self.last_step_pen)
        p.drawLine(self._last_line)
-       profiler('.drawLine()')
-       p.setPen(self._pen)
+       profiler('last datum `.drawLine()`')

+       p.setPen(self._pen)
        path = self.path
-       # cap = path.capacity()
-       # if cap != self._last_cap:
-       #     print(f'NEW CAPACITY: {self._last_cap} -> {cap}')
-       #     self._last_cap = cap

        if path:
            p.drawPath(path)
@@ -373,22 +372,30 @@ class Curve(pg.GraphicsObject):
        self,
        path: QPainterPath,
        src_data: np.ndarray,
-       render_data: np.ndarray,
        reset: bool,
        array_key: str,
+       index_field: str,

    ) -> None:
        # default line draw last call
        # with self.reset_cache():
-       x = render_data['index']
-       y = render_data[array_key]
+       x = src_data[index_field]
+       y = src_data[array_key]

+       x_last = x[-1]
+       x_2last = x[-2]

        # draw the "current" step graphic segment so it
        # lines up with the "middle" of the current
        # (OHLC) sample.
        self._last_line = QLineF(
-           x[-2], y[-2],
-           x[-1], y[-1],
+
+           # NOTE: currently we draw in x-domain
+           # from last datum to current such that
+           # the end of line touches the "beginning"
+           # of the current datum step span.
+           x_2last, y[-2],
+           x_last, y[-1],
        )

        return x, y
@@ -400,17 +407,20 @@ class Curve(pg.GraphicsObject):
    # (via its max / min) even when highly zoomed out.
class FlattenedOHLC(Curve):

+   # avoids strange dragging/smearing artifacts when panning..
+   cache_mode: int = QGraphicsItem.NoCache

    def draw_last_datum(
        self,
        path: QPainterPath,
        src_data: np.ndarray,
-       render_data: np.ndarray,
        reset: bool,
        array_key: str,
+       index_field: str,

    ) -> None:
        lasts = src_data[-2:]
-       x = lasts['index']
+       x = lasts[index_field]
        y = lasts['close']
        # draw the "current" step graphic segment so it
@@ -434,9 +444,9 @@ class StepCurve(Curve):
        self,
        path: QPainterPath,
        src_data: np.ndarray,
-       render_data: np.ndarray,
        reset: bool,
        array_key: str,
+       index_field: str,

        w: float = 0.5,

@@ -445,40 +455,31 @@ class StepCurve(Curve):
        # TODO: remove this and instead place all step curve
        # updating into pre-path data render callbacks.
        # full input data
-       x = src_data['index']
+       x = src_data[index_field]
        y = src_data[array_key]

        x_last = x[-1]
+       x_2last = x[-2]
        y_last = y[-1]
+       step_size = x_last - x_2last

        # lol, commenting this makes step curves
        # all "black" for me :eyeroll:..
        self._last_line = QLineF(
-           x_last - w, 0,
-           x_last + w, 0,
+           x_2last, 0,
+           x_last, 0,
        )
        self._last_step_rect = QRectF(
-           x_last - w, 0,
-           x_last + w, y_last,
+           x_last, 0,
+           step_size, y_last,
        )
        return x, y

    def sub_paint(
        self,
        p: QPainter,
-       profiler: pg.debug.Profiler,
    ) -> None:
        # p.drawLines(*tuple(filter(bool, self._last_step_lines)))
        # p.drawRect(self._last_step_rect)
        p.fillRect(self._last_step_rect, self._brush)
-       profiler('.fillRect()')
-
-   def sub_br(
-       self,
-       path_w: float,
-       path_h: float,
-   ) -> (float, float):
-       # passthrough
-       return path_w, path_h

piker/ui/_dataviz.py (mode 100644, 1238 lines changed)

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -18,11 +18,27 @@
Higher level annotation editors.

"""
-from dataclasses import dataclass, field
-from typing import Optional
+from __future__ import annotations
+from collections import defaultdict
+from typing import (
+    Optional,
+    TYPE_CHECKING
+)

import pyqtgraph as pg
-from pyqtgraph import ViewBox, Point, QtCore, QtGui
+from pyqtgraph import (
+    ViewBox,
+    Point,
+    QtCore,
+    QtWidgets,
+)
+from PyQt5.QtGui import (
+    QColor,
+)
+from PyQt5.QtWidgets import (
+    QLabel,
+)
from pyqtgraph import functions as fn
from PyQt5.QtCore import QPointF
import numpy as np
@@ -30,28 +46,34 @@ import numpy as np
from ._style import hcolor, _font
from ._lines import LevelLine
from ..log import get_logger
+from ..data.types import Struct

+if TYPE_CHECKING:
+    from ._chart import GodWidget

log = get_logger(__name__)
-@dataclass
-class ArrowEditor:
+class ArrowEditor(Struct):

-   chart: 'ChartPlotWidget'  # noqa
-   _arrows: field(default_factory=dict)
+   godw: GodWidget = None  # type: ignore # noqa
+   _arrows: dict[str, list[pg.ArrowItem]] = {}

    def add(
        self,
+       plot: pg.PlotItem,
        uid: str,
        x: float,
        y: float,
        color='default',
        pointing: Optional[str] = None,

    ) -> pg.ArrowItem:
-       """Add an arrow graphic to view at given (x, y).
-       """
+       '''
+       Add an arrow graphic to view at given (x, y).
+
+       '''
        angle = {
            'up': 90,
            'down': -90,
@@ -74,25 +96,25 @@ class ArrowEditor:
            brush=pg.mkBrush(hcolor(color)),
        )
        arrow.setPos(x, y)
-       self._arrows[uid] = arrow
+       self._arrows.setdefault(uid, []).append(arrow)

        # render to view
-       self.chart.plotItem.addItem(arrow)
+       plot.addItem(arrow)

        return arrow

    def remove(self, arrow) -> bool:
-       self.chart.plotItem.removeItem(arrow)
+       for linked in self.godw.iter_linked():
+           linked.chart.plotItem.removeItem(arrow)
-@dataclass
-class LineEditor:
-   '''The great editor of linez.
+class LineEditor(Struct):
+   '''
+   The great editor of linez.

    '''
-   chart: 'ChartPlotWidget' = None  # type: ignore # noqa
-   _order_lines: dict[str, LevelLine] = field(default_factory=dict)
+   godw: GodWidget = None  # type: ignore # noqa
+   _order_lines: defaultdict[str, LevelLine] = defaultdict(list)
    _active_staged_line: LevelLine = None
    def stage_line(
@@ -100,11 +122,11 @@ class LineEditor:
        line: LevelLine,

    ) -> LevelLine:
-       """Stage a line at the current chart's cursor position
+       '''
+       Stage a line at the current chart's cursor position
        and return it.

-       """
+       '''
        # add a "staged" cursor-tracking line to view
        # and cache it in a var
        if self._active_staged_line:
@@ -115,17 +137,25 @@ class LineEditor:
        return line

    def unstage_line(self) -> LevelLine:
-       """Inverse of ``.stage_line()``.
+       '''
+       Inverse of ``.stage_line()``.

-       """
-       # chart = self.chart._cursor.active_plot
-       # # chart.setCursor(QtCore.Qt.ArrowCursor)
-       cursor = self.chart.linked.cursor
+       '''
+       cursor = self.godw.get_cursor()
+       if not cursor:
+           return None

        # delete "staged" cursor tracking line from view
        line = self._active_staged_line
        if line:
-           cursor._trackers.remove(line)
+           try:
+               cursor._trackers.remove(line)
+           except KeyError:
+               # when the current cursor doesn't have said line
+               # registered (probably means that user held order mode
+               # key while panning to another view) then we just
+               # ignore the remove error.
+               pass
            line.delete()

        self._active_staged_line = None

@@ -133,55 +163,58 @@ class LineEditor:
        # show the crosshair y line and label
        cursor.show_xhair()
-   def submit_line(
+   def submit_lines(
        self,
-       line: LevelLine,
+       lines: list[LevelLine],
        uuid: str,

    ) -> LevelLine:

-       staged_line = self._active_staged_line
-       if not staged_line:
-           raise RuntimeError("No line is currently staged!?")
+       # staged_line = self._active_staged_line
+       # if not staged_line:
+       #     raise RuntimeError("No line is currently staged!?")

        # for now, until submission response arrives
-       line.hide_labels()
+       for line in lines:
+           line.hide_labels()

        # register for later lookup/deletion
-       self._order_lines[uuid] = line
-       return line
+       self._order_lines[uuid] += lines
+       return lines

-   def commit_line(self, uuid: str) -> LevelLine:
-       """Commit a "staged line" to view.
+   def commit_line(self, uuid: str) -> list[LevelLine]:
+       '''
+       Commit a "staged line" to view.

        Submits the line graphic under the cursor as a (new) permanent
        graphic in view.

-       """
-       try:
-           line = self._order_lines[uuid]
-       except KeyError:
-           log.warning(f'No line for {uuid} could be found?')
-           return
-       else:
-           line.show_labels()
-
-           # TODO: other flashy things to indicate the order is active
-           log.debug(f'Level active for level: {line.value()}')
-           return line
+       '''
+       lines = self._order_lines[uuid]
+       if lines:
+           for line in lines:
+               line.show_labels()
+               line.hide_markers()
+               log.debug(f'Level active for level: {line.value()}')
+               # TODO: other flashy things to indicate the order is active
+
+       return lines

    def lines_under_cursor(self) -> list[LevelLine]:
-       """Get the line(s) under the cursor position.
+       '''
+       Get the line(s) under the cursor position.

-       """
+       '''
        # Delete any hoverable under the cursor
-       return self.chart.linked.cursor._hovered
+       return self.godw.get_cursor()._hovered

-   def all_lines(self) -> tuple[LevelLine]:
-       return tuple(self._order_lines.values())
+   def all_lines(self) -> list[LevelLine]:
+       all_lines = []
+       for lines in list(self._order_lines.values()):
+           all_lines.extend(lines)
+       return all_lines
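# the ``all_lines()`` flatten over the new ``defaultdict(list)``
# registry could equivalently use stdlib itertools; standalone sketch:
from collections import defaultdict
from itertools import chain

_order_lines: defaultdict[str, list[str]] = defaultdict(list)
_order_lines['oid-1'] += ['line-a', 'line-b']
_order_lines['oid-2'] += ['line-c']

all_lines = list(chain.from_iterable(_order_lines.values()))
assert all_lines == ['line-a', 'line-b', 'line-c']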
    def remove_line(
        self,
@@ -196,29 +229,30 @@ class LineEditor:
        '''
        # try to look up line from our registry
-       line = self._order_lines.pop(uuid, line)
-       if line:
+       lines = self._order_lines.pop(uuid, None)
+       if lines:
+           cursor = self.godw.get_cursor()
+           if cursor:
+               for line in lines:
+                   # if hovered remove from cursor set
+                   hovered = cursor._hovered
+                   if line in hovered:
+                       hovered.remove(line)

-           # if hovered remove from cursor set
-           cursor = self.chart.linked.cursor
-           hovered = cursor._hovered
-           if line in hovered:
-               hovered.remove(line)
+                   log.debug(f'deleting {line} with oid: {uuid}')
+                   line.delete()

-           # make sure the xhair doesn't get left off
-           # just because we never got a un-hover event
-           cursor.show_xhair()
+               # make sure the xhair doesn't get left off
+               # just because we never got a un-hover event
+               cursor.show_xhair()

-           log.debug(f'deleting {line} with oid: {uuid}')
-           line.delete()
        else:
            log.warning(f'Could not find line for {line}')

-       return line
+       return lines
-class SelectRect(QtGui.QGraphicsRectItem):
+class SelectRect(QtWidgets.QGraphicsRectItem):

    def __init__(
        self,
@@ -227,12 +261,12 @@ class SelectRect(QtGui.QGraphicsRectItem):
    ) -> None:
        super().__init__(0, 0, 1, 1)

-       # self.rbScaleBox = QtGui.QGraphicsRectItem(0, 0, 1, 1)
+       # self.rbScaleBox = QGraphicsRectItem(0, 0, 1, 1)
        self.vb = viewbox
        self._chart: 'ChartPlotWidget' = None  # noqa

        # override selection box color
-       color = QtGui.QColor(hcolor(color))
+       color = QColor(hcolor(color))
        self.setPen(fn.mkPen(color, width=1))
        color.setAlpha(66)
        self.setBrush(fn.mkBrush(color))
@@ -240,7 +274,7 @@ class SelectRect(QtGui.QGraphicsRectItem):
        self.hide()
        self._label = None

-       label = self._label = QtGui.QLabel()
+       label = self._label = QLabel()
        label.setTextFormat(0)  # markdown
        label.setFont(_font.font)
        label.setMargin(0)
@@ -277,8 +311,8 @@ class SelectRect(QtGui.QGraphicsRectItem):
        # TODO: get bg color working
        palette.setColor(
            self._label.backgroundRole(),
-           # QtGui.QColor(chart.backgroundBrush()),
-           QtGui.QColor(hcolor('papas_special')),
+           # QColor(chart.backgroundBrush()),
+           QColor(hcolor('papas_special')),
        )

    def update_on_resize(self, vr, r):
@@ -326,7 +360,7 @@ class SelectRect(QtGui.QGraphicsRectItem):
        self.setPos(r.topLeft())
        self.resetTransform()
-       self.scale(r.width(), r.height())
+       self.setRect(r)
        self.show()

        y1, y2 = start_pos.y(), end_pos.y()
@@ -343,7 +377,7 @@ class SelectRect(QtGui.QGraphicsRectItem):
        nbars = ixmx - ixmn + 1

        chart = self._chart
-       data = chart._flows[chart.name].shm.array[ixmn:ixmx]
+       data = chart.get_viz(chart.name).shm.array[ixmn:ixmx]
        if len(data):
            std = data['close'].std()


@@ -18,11 +18,11 @@
Qt event proxying and processing using ``trio`` mem chans.

"""
-from contextlib import asynccontextmanager, AsyncExitStack
+from contextlib import asynccontextmanager as acm
from typing import Callable

-from pydantic import BaseModel
import trio
+from tractor.trionics import gather_contexts
from PyQt5 import QtCore
from PyQt5.QtCore import QEvent, pyqtBoundSignal
from PyQt5.QtWidgets import QWidget
@@ -30,6 +30,8 @@ from PyQt5.QtWidgets import (
    QGraphicsSceneMouseEvent as gs_mouse,
)

+from ..data.types import Struct
MOUSE_EVENTS = {
    gs_mouse.GraphicsSceneMousePress,
@@ -43,13 +45,10 @@ MOUSE_EVENTS = {
# TODO: maybe consider some constrained ints down the road?
# https://pydantic-docs.helpmanual.io/usage/types/#constrained-types

-class KeyboardMsg(BaseModel):
+class KeyboardMsg(Struct):
    '''Unpacked Qt keyboard event data.

    '''
-   class Config:
-       arbitrary_types_allowed = True
-
    event: QEvent
    etype: int
    key: int
@@ -57,16 +56,13 @@ class KeyboardMsg(BaseModel):
    txt: str

    def to_tuple(self) -> tuple:
-       return tuple(self.dict().values())
+       return tuple(self.to_dict().values())

-class MouseMsg(BaseModel):
+class MouseMsg(Struct):
    '''Unpacked Qt mouse event data.

    '''
-   class Config:
-       arbitrary_types_allowed = True
-
    event: QEvent
    etype: int
    button: int
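# what the ``BaseModel`` -> ``Struct`` swap buys: typed fields without
# pydantic's ``Config`` escape hatch for arbitrary types. A sketch of
# the same ``to_tuple()`` idea on a bare ``msgspec.Struct`` (piker's
# own ``Struct`` wrapper is what provides ``.to_dict()``):
import msgspec

class Point(msgspec.Struct):
    x: int
    y: int

    def to_tuple(self) -> tuple:
        # field order matches declaration order
        return tuple(
            getattr(self, f) for f in self.__struct_fields__
        )

print(Point(1, 2).to_tuple())  # (1, 2)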
@@ -160,7 +156,7 @@ class EventRelay(QtCore.QObject):
        return False

-@asynccontextmanager
+@acm
async def open_event_stream(

    source_widget: QWidget,
@@ -186,7 +182,7 @@ async def open_event_stream(
        source_widget.removeEventFilter(kc)

-@asynccontextmanager
+@acm
async def open_signal_handler(

    signal: pyqtBoundSignal,
@@ -211,7 +207,7 @@ async def open_signal_handler(
    yield

-@asynccontextmanager
+@acm
async def open_handlers(

    source_widgets: list[QWidget],
@@ -220,16 +216,14 @@ async def open_handlers(
    **kwargs,

) -> None:
    async with (
        trio.open_nursery() as n,
-       AsyncExitStack() as stack,
+       gather_contexts([
+           open_event_stream(widget, event_types, **kwargs)
+           for widget in source_widgets
+       ]) as streams,
    ):
-       for widget in source_widgets:
-           event_recv_stream = await stack.enter_async_context(
-               open_event_stream(widget, event_types, **kwargs)
-           )
+       for widget, event_recv_stream in zip(source_widgets, streams):
            n.start_soon(async_handler, widget, event_recv_stream)
        yield
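# a runnable sketch of the ``AsyncExitStack`` -> ``gather_contexts()``
# refactor above: enter N async context managers and get their yielded
# values back as one sequence (resource names here are made up):
from contextlib import asynccontextmanager as acm

import trio
from tractor.trionics import gather_contexts

@acm
async def open_stream_for(name: str):
    # stand-in for ``open_event_stream()``
    yield f'stream-{name}'

async def main():
    async with gather_contexts([
        open_stream_for(name)
        for name in ('a', 'b', 'c')
    ]) as streams:
        print(streams)  # one entry per entered context

trio.run(main)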


@@ -20,16 +20,24 @@ Trio - Qt integration
Run ``trio`` in guest mode on top of the Qt event loop.

All global Qt runtime settings are mostly defined here.
"""
-from typing import Tuple, Callable, Dict, Any
+from __future__ import annotations
+from typing import (
+    Callable,
+    Any,
+    Type,
+    TYPE_CHECKING,
+)
import platform
import traceback

# Qt specific
import PyQt5  # noqa
-import pyqtgraph as pg
-from pyqtgraph import QtGui
+from PyQt5.QtWidgets import (
+    QWidget,
+    QMainWindow,
+    QApplication,
+)
from PyQt5 import QtCore
-# from PyQt5.QtGui import QLabel, QStatusBar
from PyQt5.QtCore import (
    pyqtRemoveInputHook,
    Qt,
@@ -37,15 +45,19 @@ from PyQt5.QtCore import (
)
import qdarkstyle
from qdarkstyle import DarkPalette
-# import qdarkgraystyle
+# import qdarkgraystyle  # TODO: play with it
import trio
from outcome import Error

-from .._daemon import maybe_open_pikerd, _tractor_kwargs
+from .._daemon import (
+    maybe_open_pikerd,
+    get_tractor_runtime_kwargs,
+)
from ..log import get_logger
from ._pg_overrides import _do_overrides
from . import _style

log = get_logger(__name__)
# pyqtgraph global config
@@ -72,17 +84,18 @@ if platform.system() == "Windows":

def run_qtractor(
    func: Callable,
-   args: Tuple,
-   main_widget: QtGui.QWidget,
-   tractor_kwargs: Dict[str, Any] = {},
-   window_type: QtGui.QMainWindow = None,
+   args: tuple,
+   main_widget_type: Type[QWidget],
+   tractor_kwargs: dict[str, Any] = {},
+   window_type: QMainWindow = None,

) -> None:
    # avoids annoying message when entering debugger from qt loop
    pyqtRemoveInputHook()

-   app = QtGui.QApplication.instance()
+   app = QApplication.instance()
    if app is None:
-       app = PyQt5.QtWidgets.QApplication([])
+       app = QApplication([])

    # TODO: we might not need this if it's desired
    # to cancel the tractor machinery on Qt loop
@@ -156,11 +169,11 @@ def run_qtractor(
    # hook into app focus change events
    app.focusChanged.connect(window.on_focus_change)

-   instance = main_widget()
+   instance = main_widget_type()
    instance.window = window

    # override tractor's defaults
-   tractor_kwargs.update(_tractor_kwargs)
+   tractor_kwargs.update(get_tractor_runtime_kwargs())

    # define tractor entrypoint
    async def main():
@@ -178,7 +191,7 @@ def run_qtractor(
        # restrict_keyboard_interrupt_to_checkpoints=True,
        )

-   window.main_widget = main_widget
+   window.godwidget: GodWidget = instance
    window.setCentralWidget(instance)
    if is_windows:
        window.configure_to_desktop()

File diff suppressed because it is too large


@@ -619,7 +619,7 @@ class FillStatusBar(QProgressBar):
            # color: #19232D;
            # width: 10px;

-       self.setRange(0, slots)
+       self.setRange(0, int(slots))
        self.setValue(value)
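# the ``int(slots)`` cast above matters because the underlying Qt
# signature is ``setRange(int, int)`` and PyQt5 rejects floats.
# Minimal repro sketch:
from PyQt5.QtWidgets import QApplication, QProgressBar

app = QApplication([])
bar = QProgressBar()
slots = 4.0  # often a float from upstream sizing math
bar.setRange(0, int(slots))  # passing ``slots`` raw raises TypeError
bar.setValue(2)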
@@ -644,7 +644,7 @@ def mk_fill_status_bar(
    # TODO: calc this height from the ``ChartnPane``
    chart_h = round(parent_pane.height() * 5/8)
-   bar_h = chart_h * 0.375
+   bar_h = chart_h * 0.375 * 0.9

    # TODO: once things are sized to screen
    bar_label_font_size = label_font_size or _font.px_size - 2


@@ -27,12 +27,13 @@ from itertools import cycle
from typing import Optional, AsyncGenerator, Any

import numpy as np
-from pydantic import create_model
+import msgspec
import tractor
import pyqtgraph as pg
import trio
from trio_typing import TaskStatus

+from piker.data.types import Struct
from ._axes import PriceAxis
from .._cacheables import maybe_open_context
from ..calc import humanize
@@ -41,6 +42,8 @@ from ..data._sharedmem import (
    _Token,
    try_read,
)
+from ..data.feed import Flume
+from ..data._source import Symbol
from ._chart import (
    ChartPlotWidget,
    LinkedSplits,
@@ -50,14 +53,18 @@ from ._forms import (
    mk_form,
    open_form_input_handling,
)
-from ..fsp._api import maybe_mk_fsp_shm, Fsp
+from ..fsp._api import (
+    maybe_mk_fsp_shm,
+    Fsp,
+)
from ..fsp import cascade
from ..fsp._volume import (
-   tina_vwap,
+   # tina_vwap,
    dolla_vlm,
    flow_rates,
)
from ..log import get_logger
+from .._profile import Profiler

log = get_logger(__name__)
@@ -71,15 +78,14 @@ def has_vlm(ohlcv: ShmArray) -> bool:

def update_fsp_chart(
-   chart: ChartPlotWidget,
-   flow,
+   viz,
    graphics_name: str,
    array_key: Optional[str],
    **kwargs,

) -> None:

-   shm = flow.shm
+   shm = viz.shm
    if not shm:
        return
@@ -94,18 +100,15 @@ def update_fsp_chart(
    # update graphics
    # NOTE: this does a length check internally which allows it
    # staying above the last row check below..
-   chart.update_graphics_from_flow(
-       graphics_name,
-       array_key=array_key or graphics_name,
-       **kwargs,
-   )
+   viz.update_graphics()

    # XXX: re: ``array_key``: fsp func names must be unique meaning we
    # can't have duplicates of the underlying data even if multiple
    # sub-charts reference it under different 'named charts'.

    # read from last calculated value and update any label
-   last_val_sticky = chart._ysticks.get(graphics_name)
+   last_val_sticky = viz.plot.getAxis(
+       'right')._stickies.get(graphics_name)
    if last_val_sticky:
        last = last_row[array_key]
        last_val_sticky.update_from_data(-1, last)
@@ -153,12 +156,13 @@ async def open_fsp_sidepane(
    )

    # https://pydantic-docs.helpmanual.io/usage/models/#dynamic-model-creation
-   FspConfig = create_model(
-       'FspConfig',
-       name=name,
-       **params,
+   FspConfig = msgspec.defstruct(
+       "Point",
+       [('name', name)] + list(params.items()),
+       bases=(Struct,),
    )
-   sidepane.model = FspConfig()
+   model = FspConfig(name=name, **params)
+   sidepane.model = model

    # just a logger for now until we get fsp configs up and running.
    async def settings_change(
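# ``msgspec.defstruct`` is the runtime analog of pydantic's
# ``create_model`` used before this change; standalone it looks like:
import msgspec

FspConfig = msgspec.defstruct(
    'FspConfig',
    [('name', str), ('period', int)],  # (field, type) specs
)
cfg = FspConfig(name='dolla_vlm', period=6)
print(cfg)  # FspConfig(name='dolla_vlm', period=6)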
@@ -188,7 +192,7 @@ async def open_fsp_actor_cluster(
    from tractor._clustering import open_actor_cluster

-   # profiler = Profiler(
+   # profiler = Profiler(
    #     delayed=False,
    #     disabled=False
    # )
@@ -205,12 +209,12 @@ async def open_fsp_actor_cluster(
async def run_fsp_ui(

    linkedsplits: LinkedSplits,
-   shm: ShmArray,
+   flume: Flume,
    started: trio.Event,
    target: Fsp,
    conf: dict[str, dict],
    loglevel: str,
-   # profiler: pg.debug.Profiler,
+   # profiler: Profiler,
    # _quote_throttle_rate: int = 58,

) -> None:
@@ -242,9 +246,11 @@ async def run_fsp_ui(
        else:
            chart = linkedsplits.subplots[overlay_with]

+       shm = flume.rt_shm
        chart.draw_curve(
-           name=name,
-           shm=shm,
+           name,
+           shm,
+           flume,
            overlay=True,
            color='default_light',
            array_key=name,
@@ -254,8 +260,9 @@ async def run_fsp_ui(
    else:
        # create a new sub-chart widget for this fsp
        chart = linkedsplits.add_plot(
-           name=name,
-           shm=shm,
+           name,
+           shm,
+           flume,
            array_key=name,
            sidepane=sidepane,
@@ -275,9 +282,10 @@ async def run_fsp_ui(
        # profiler(f'fsp:{name} chart created')

    # first UI update, usually from shm pushed history
+   viz = chart.get_viz(array_key)
    update_fsp_chart(
        chart,
-       chart._flows[array_key],
+       viz,
        name,
        array_key=array_key,
    )
@@ -304,7 +312,7 @@ async def run_fsp_ui(
        # level_line(chart, 70, orient_v='bottom')
        # level_line(chart, 80, orient_v='top')

-   chart.view._set_yrange()
+   chart.view._set_yrange(viz=viz)
    # done()  # status updates
    # profiler(f'fsp:{func_name} starting update loop')
@@ -345,6 +353,9 @@ async def run_fsp_ui(
    # last = time.time()
# TODO: maybe this should be our ``Viz`` type since it maps
# one flume to the next? The machinery for task/actor mgmt should
# be part of the instantiation API?
class FspAdmin:
    '''
    Client API for orchestrating FSP actors and displaying
@@ -356,7 +367,7 @@ class FspAdmin:
        tn: trio.Nursery,
        cluster: dict[str, tractor.Portal],
        linked: LinkedSplits,
-       src_shm: ShmArray,
+       flume: Flume,

    ) -> None:
        self.tn = tn
@@ -368,7 +379,11 @@ class FspAdmin:
            tuple[tractor.MsgStream, ShmArray]
        ] = {}
        self._flow_registry: dict[_Token, str] = {}
-       self.src_shm = src_shm
+
+       # TODO: make this a `.src_flume` and add
+       # a `dst_flume`?
+       # (=> but then wouldn't this be the most basic `Viz`?)
+       self.flume = flume
    def rr_next_portal(self) -> tractor.Portal:
        name, portal = next(self._rr_next_actor)
@@ -381,7 +396,7 @@ class FspAdmin:
        complete: trio.Event,
        started: trio.Event,
        fqsn: str,
-       dst_shm: ShmArray,
+       dst_fsp_flume: Flume,
        conf: dict,
        target: Fsp,
        loglevel: str,
@@ -402,9 +417,10 @@ class FspAdmin:
            # data feed key
            fqsn=fqsn,

+           # TODO: pass `Flume.to_msg()`s here?
            # mems
-           src_shm_token=self.src_shm.token,
-           dst_shm_token=dst_shm.token,
+           src_shm_token=self.flume.rt_shm.token,
+           dst_shm_token=dst_fsp_flume.rt_shm.token,

            # target
            ns_path=ns_path,
@@ -421,12 +437,14 @@ class FspAdmin:
                ctx.open_stream() as stream,
            ):

+               dst_fsp_flume.stream: tractor.MsgStream = stream
                # register output data
                self._registry[
                    (fqsn, ns_path)
                ] = (
                    stream,
-                   dst_shm,
+                   dst_fsp_flume.rt_shm,
                    complete
                )
@@ -440,7 +458,9 @@ class FspAdmin:
                    # if the chart isn't hidden try to update
                    # the data on screen.
                    if not self.linked.isHidden():
-                       log.debug(f'Re-syncing graphics for fsp: {ns_path}')
+                       log.debug(
+                           f'Re-syncing graphics for fsp: {ns_path}'
+                       )
                        self.linked.graphics_cycle(
                            trigger_all=True,
                            prepend_update_index=info['first'],
@ -459,9 +479,9 @@ class FspAdmin:
worker_name: Optional[str] = None, worker_name: Optional[str] = None,
loglevel: str = 'info', loglevel: str = 'info',
) -> (ShmArray, trio.Event): ) -> (Flume, trio.Event):
fqsn = self.linked.symbol.front_fqsn() fqsn = self.flume.symbol.fqsn
# allocate an output shm array # allocate an output shm array
key, dst_shm, opened = maybe_mk_fsp_shm( key, dst_shm, opened = maybe_mk_fsp_shm(
@ -469,16 +489,36 @@ class FspAdmin:
target=target, target=target,
readonly=True, readonly=True,
) )
self._flow_registry[
(self.src_shm._token, target.name) portal = self.cluster.get(worker_name) or self.rr_next_portal()
] = dst_shm._token provider_tag = portal.channel.uid
symbol = Symbol(
key=key,
broker_info={
provider_tag: {'asset_type': 'fsp'},
},
)
dst_fsp_flume = Flume(
symbol=symbol,
_rt_shm_token=dst_shm.token,
first_quote={},
# set to 0 presuming for now that we can't load
# FSP history (though we should eventually).
izero_hist=0,
izero_rt=0,
)
self._flow_registry[(
self.flume.rt_shm._token,
target.name
)] = dst_shm._token
# if not opened: # if not opened:
# raise RuntimeError( # raise RuntimeError(
# f'Already started FSP `{fqsn}:{func_name}`' # f'Already started FSP `{fqsn}:{func_name}`'
# ) # )
portal = self.cluster.get(worker_name) or self.rr_next_portal()
complete = trio.Event() complete = trio.Event()
started = trio.Event() started = trio.Event()
self.tn.start_soon( self.tn.start_soon(
@ -487,13 +527,13 @@ class FspAdmin:
complete, complete,
started, started,
fqsn, fqsn,
dst_shm, dst_fsp_flume,
conf, conf,
target, target,
loglevel, loglevel,
) )
return dst_shm, started return dst_fsp_flume, started
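
Since ``start_engine_task()`` now hands back a ``Flume`` rather than a raw
``ShmArray``, callers pull the shm buffer off the flume. A minimal usage
sketch following the names shown in this diff:

    dvlm_flume, started = await admin.start_engine_task(
        dolla_vlm,
        {'func_name': 'dolla_vlm'},  # fsp engine conf
    )
    await started.wait()  # engine signals it's up
    shm = dvlm_flume.rt_shm  # real-time output buffer lives on the flume
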
async def open_fsp_chart( async def open_fsp_chart(
self, self,
@ -505,7 +545,7 @@ class FspAdmin:
) -> (trio.Event, ChartPlotWidget): ) -> (trio.Event, ChartPlotWidget):
shm, started = await self.start_engine_task( flume, started = await self.start_engine_task(
target, target,
conf, conf,
loglevel, loglevel,
@ -517,7 +557,7 @@ class FspAdmin:
run_fsp_ui, run_fsp_ui,
self.linked, self.linked,
shm, flume,
started, started,
target, target,
@ -531,7 +571,7 @@ class FspAdmin:
@acm @acm
async def open_fsp_admin( async def open_fsp_admin(
linked: LinkedSplits, linked: LinkedSplits,
src_shm: ShmArray, flume: Flume,
**kwargs, **kwargs,
) -> AsyncGenerator[dict, dict[str, tractor.Portal]]: ) -> AsyncGenerator[dict, dict[str, tractor.Portal]]:
@ -552,7 +592,7 @@ async def open_fsp_admin(
tn, tn,
cluster_map, cluster_map,
linked, linked,
src_shm, flume,
) )
try: try:
yield admin yield admin
@ -566,7 +606,7 @@ async def open_fsp_admin(
async def open_vlm_displays( async def open_vlm_displays(
linked: LinkedSplits, linked: LinkedSplits,
ohlcv: ShmArray, flume: Flume,
dvlm: bool = True, dvlm: bool = True,
task_status: TaskStatus[ChartPlotWidget] = trio.TASK_STATUS_IGNORED, task_status: TaskStatus[ChartPlotWidget] = trio.TASK_STATUS_IGNORED,
@ -588,6 +628,8 @@ async def open_vlm_displays(
sig = inspect.signature(flow_rates.func) sig = inspect.signature(flow_rates.func)
params = sig.parameters params = sig.parameters
ohlcv: ShmArray = flume.rt_shm
async with ( async with (
open_fsp_sidepane( open_fsp_sidepane(
linked, { linked, {
@ -607,7 +649,7 @@ async def open_vlm_displays(
} }
}, },
) as sidepane, ) as sidepane,
open_fsp_admin(linked, ohlcv) as admin, open_fsp_admin(linked, flume) as admin,
): ):
# TODO: support updates # TODO: support updates
# period_field = sidepane.fields['period'] # period_field = sidepane.fields['period']
@ -615,12 +657,21 @@ async def open_vlm_displays(
# str(period_param.default) # str(period_param.default)
# ) # )
# use slightly less light (than bracket) gray
# for volume from "main exchange" and a more "bluey"
# gray for "dark" vlm.
vlm_color = 'i3'
dark_vlm_color = 'charcoal'
# built-in vlm which we plot ASAP since it's # built-in vlm which we plot ASAP since it's
# usually data provided directly with OHLC history. # usually data provided directly with OHLC history.
shm = ohlcv shm = ohlcv
chart = linked.add_plot( # ohlc_chart = linked.chart
vlm_chart = linked.add_plot(
name='volume', name='volume',
shm=shm, shm=shm,
flume=flume,
array_key='volume', array_key='volume',
sidepane=sidepane, sidepane=sidepane,
@ -633,63 +684,47 @@ async def open_vlm_displays(
# the curve item internals are pretty convoluted. # the curve item internals are pretty convoluted.
style='step', style='step',
) )
vlm_viz = vlm_chart._vizs['volume']
# force 0 to always be in view
def multi_maxmin(
names: list[str],
) -> tuple[float, float]:
mx = 0
for name in names:
mxmn = chart.maxmin(name=name)
if mxmn:
ymax = mxmn[1]
if ymax > mx:
mx = ymax
return 0, mx
chart.view.maxmin = partial(multi_maxmin, names=['volume'])
# TODO: fix the x-axis label issue where if you put # TODO: fix the x-axis label issue where if you put
# the axis on the left it's totally not lined up... # the axis on the left it's totally not lined up...
# show volume units value on LHS (for dinkus) # show volume units value on LHS (for dinkus)
# chart.hideAxis('right') # vlm_chart.hideAxis('right')
# chart.showAxis('left') # vlm_chart.showAxis('left')
# send back new chart to caller # send back new chart to caller
task_status.started(chart) task_status.started(vlm_chart)
# should **not** be the same sub-chart widget # should **not** be the same sub-chart widget
assert chart.name != linked.chart.name assert vlm_chart.name != linked.chart.name
# sticky only on sub-charts atm # sticky only on sub-charts atm
last_val_sticky = chart._ysticks[chart.name] last_val_sticky = vlm_chart.plotItem.getAxis(
'right')._stickies.get(vlm_chart.name)
# read from last calculated value # read from last calculated value
value = shm.array['volume'][-1] value = shm.array['volume'][-1]
last_val_sticky.update_from_data(-1, value) last_val_sticky.update_from_data(-1, value)
vlm_curve = chart.update_graphics_from_flow( _, _, vlm_curve = vlm_chart.update_graphics_from_flow(
'volume', 'volume',
# shm.array,
) )
# size view to data once at outset # size view to data once at outset
chart.view._set_yrange() vlm_chart.view._set_yrange(
viz=vlm_viz
)
# add axis title # add axis title
axis = chart.getAxis('right') axis = vlm_chart.getAxis('right')
axis.set_title(' vlm') axis.set_title(' vlm')
if dvlm: if dvlm:
tasks_ready = [] tasks_ready = []
# spawn and overlay $ vlm on the same subchart # spawn and overlay $ vlm on the same subchart
dvlm_shm, started = await admin.start_engine_task( dvlm_flume, started = await admin.start_engine_task(
dolla_vlm, dolla_vlm,
{ # fsp engine conf { # fsp engine conf
@ -708,7 +743,7 @@ async def open_vlm_displays(
# FIXME: we should error on starting the same fsp right # FIXME: we should error on starting the same fsp right
# since it might collide with existing shm.. or wait we # since it might collide with existing shm.. or wait we
# had this before?? # had this before??
# dolla_vlm, # dolla_vlm
tasks_ready.append(started) tasks_ready.append(started)
# profiler(f'created shm for fsp actor: {display_name}') # profiler(f'created shm for fsp actor: {display_name}')
@ -722,22 +757,29 @@ async def open_vlm_displays(
# XXX: the main chart already contains a vlm "units" axis # XXX: the main chart already contains a vlm "units" axis
# so here we add an overlay with a y-range in # so here we add an overlay with a y-range in
# $ liquidity-value units (normally a fiat like USD). # $ liquidity-value units (normally a fiat like USD).
dvlm_pi = chart.overlay_plotitem( dvlm_pi = vlm_chart.overlay_plotitem(
'dolla_vlm', 'dolla_vlm',
index=0, # place axis on inside (nearest to chart) index=0, # place axis on inside (nearest to chart)
axis_title=' $vlm', axis_title=' $vlm',
axis_side='right', axis_side='left',
axis_kwargs={ axis_kwargs={
'typical_max_str': ' 100.0 M ', 'typical_max_str': ' 100.0 M ',
'formatter': partial( 'formatter': partial(
humanize, humanize,
digits=2, digits=2,
), ),
'text_color': vlm_color,
}, },
) )
# TODO: should this maybe be implicit based on input args to
# `.overlay_plotitem()` above?
dvlm_pi.hideAxis('bottom')
# all to be overlayed curve names # all to be overlayed curve names
fields = [ dvlm_fields = [
'dolla_vlm', 'dolla_vlm',
'dark_vlm', 'dark_vlm',
] ]
@ -750,32 +792,18 @@ async def open_vlm_displays(
'dark_trade_rate', 'dark_trade_rate',
] ]
group_mxmn = partial(
multi_maxmin,
# keep both regular and dark vlm in view
names=fields,
# names=fields + dvlm_rate_fields,
)
# add custom auto range handler
dvlm_pi.vb._maxmin = group_mxmn
# use slightly less light (than bracket) gray
# for volume from "main exchange" and a more "bluey"
# gray for "dark" vlm.
vlm_color = 'i3'
dark_vlm_color = 'charcoal'
# add dvlm (step) curves to common view # add dvlm (step) curves to common view
def chart_curves( def chart_curves(
names: list[str], names: list[str],
pi: pg.PlotItem, pi: pg.PlotItem,
shm: ShmArray, shm: ShmArray,
flume: Flume,
step_mode: bool = False, step_mode: bool = False,
style: str = 'solid', style: str = 'solid',
) -> None: ) -> None:
for name in names: for name in names:
if 'dark' in name: if 'dark' in name:
color = dark_vlm_color color = dark_vlm_color
elif 'rate' in name: elif 'rate' in name:
@ -783,9 +811,13 @@ async def open_vlm_displays(
else: else:
color = 'bracket' color = 'bracket'
curve, _ = chart.draw_curve( assert isinstance(shm, ShmArray)
name=name, assert isinstance(flume, Flume)
shm=shm,
viz = vlm_chart.draw_curve(
name,
shm,
flume,
array_key=name, array_key=name,
overlay=pi, overlay=pi,
color=color, color=color,
@ -793,29 +825,24 @@ async def open_vlm_displays(
style=style, style=style,
pi=pi, pi=pi,
) )
assert viz.plot is pi
# TODO: we need a better API to do this..
# specially store ref to shm for lookup in display loop
# since only a placeholder of `None` is entered in
# ``.draw_curve()``.
flow = chart._flows[name]
assert flow.plot is pi
chart_curves( chart_curves(
fields, dvlm_fields,
dvlm_pi, dvlm_pi,
dvlm_shm, dvlm_flume.rt_shm,
dvlm_flume,
step_mode=True, step_mode=True,
) )
# spawn flow rates fsp **ONLY AFTER** the 'dolla_vlm' fsp is # spawn flow rates fsp **ONLY AFTER** the 'dolla_vlm' fsp is
# up since this one depends on it. # up since this one depends on it.
fr_shm, started = await admin.start_engine_task( fr_flume, started = await admin.start_engine_task(
flow_rates, flow_rates,
{ # fsp engine conf { # fsp engine conf
'func_name': 'flow_rates', 'func_name': 'flow_rates',
'zero_on_step': False, 'zero_on_step': True,
}, },
# loglevel, # loglevel,
) )
@ -824,7 +851,7 @@ async def open_vlm_displays(
# chart_curves( # chart_curves(
# dvlm_rate_fields, # dvlm_rate_fields,
# dvlm_pi, # dvlm_pi,
# fr_shm, # fr_flume.rt_shm,
# ) # )
# TODO: is there a way to "sync" the dual axes such that only # TODO: is there a way to "sync" the dual axes such that only
@ -833,24 +860,24 @@ async def open_vlm_displays(
# displayed and the curves are effectively the same minus # displayed and the curves are effectively the same minus
# liquidity events (well at least on low OHLC periods - 1s). # liquidity events (well at least on low OHLC periods - 1s).
vlm_curve.hide() vlm_curve.hide()
chart.removeItem(vlm_curve) vlm_chart.removeItem(vlm_curve)
vflow = chart._flows['volume'] vlm_viz = vlm_chart._vizs['volume']
vflow.render = False vlm_viz.render = False
# avoid range sorting on volume once disabled # avoid range sorting on volume once disabled
chart.view.disable_auto_yrange() vlm_chart.view.disable_auto_yrange()
# Trade rate overlay # Trade rate overlay
# XXX: requires an additional overlay for # XXX: requires an additional overlay for
# a trades-per-period (time) y-range. # a trades-per-period (time) y-range.
tr_pi = chart.overlay_plotitem( tr_pi = vlm_chart.overlay_plotitem(
'trade_rates', 'trade_rates',
# TODO: dynamically update period (and thus this axis?) # TODO: dynamically update period (and thus this axis?)
# title from user input. # title from user input.
axis_title='clears', axis_title='clears',
axis_side='left', axis_side='left',
axis_kwargs={ axis_kwargs={
'typical_max_str': ' 10.0 M ', 'typical_max_str': ' 10.0 M ',
'formatter': partial( 'formatter': partial(
@ -861,17 +888,13 @@ async def open_vlm_displays(
}, },
) )
# add custom auto range handler tr_pi.hideAxis('bottom')
tr_pi.vb.maxmin = partial(
multi_maxmin,
# keep both regular and dark vlm in view
names=trade_rate_fields,
)
chart_curves( chart_curves(
trade_rate_fields, trade_rate_fields,
tr_pi, tr_pi,
fr_shm, fr_flume.rt_shm,
fr_flume,
# step_mode=True, # step_mode=True,
# dashed line to represent "individual trades" being # dashed line to represent "individual trades" being
@ -905,7 +928,7 @@ async def open_vlm_displays(
async def start_fsp_displays( async def start_fsp_displays(
linked: LinkedSplits, linked: LinkedSplits,
ohlcv: ShmArray, flume: Flume,
group_status_key: str, group_status_key: str,
loglevel: str, loglevel: str,
@ -940,7 +963,7 @@ async def start_fsp_displays(
# }, # },
# }, # },
} }
profiler = pg.debug.Profiler( profiler = Profiler(
delayed=False, delayed=False,
disabled=False disabled=False
) )
@ -948,7 +971,10 @@ async def start_fsp_displays(
async with ( async with (
# NOTE: this admin internally opens an actor cluster # NOTE: this admin internally opens an actor cluster
open_fsp_admin(linked, ohlcv) as admin, open_fsp_admin(
linked,
flume,
) as admin,
): ):
statuses = [] statuses = []
for target, conf in fsp_conf.items(): for target, conf in fsp_conf.items():


@ -20,8 +20,13 @@ Chart view box primitives
""" """
from __future__ import annotations from __future__ import annotations
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from functools import partial
import time import time
from typing import Optional, Callable from typing import (
Optional,
Callable,
TYPE_CHECKING,
)
import pyqtgraph as pg import pyqtgraph as pg
# from pyqtgraph.GraphicsScene import mouseEvents # from pyqtgraph.GraphicsScene import mouseEvents
@ -33,11 +38,16 @@ import numpy as np
import trio import trio
from ..log import get_logger from ..log import get_logger
from .._profile import Profiler
from .._profile import pg_profile_enabled, ms_slower_then from .._profile import pg_profile_enabled, ms_slower_then
# from ._style import _min_points_to_show # from ._style import _min_points_to_show
from ._editors import SelectRect from ._editors import SelectRect
from . import _event from . import _event
if TYPE_CHECKING:
from ._chart import ChartPlotWidget
from ._dataviz import Viz
log = get_logger(__name__) log = get_logger(__name__)
@ -75,7 +85,6 @@ async def handle_viewmode_kb_inputs(
pressed: set[str] = set() pressed: set[str] = set()
last = time.time() last = time.time()
trigger_mode: str
action: str action: str
on_next_release: Optional[Callable] = None on_next_release: Optional[Callable] = None
@ -141,13 +150,16 @@ async def handle_viewmode_kb_inputs(
Qt.Key_Space, Qt.Key_Space,
} }
): ):
view._chart.linked.godwidget.search.focus() godw = view._chart.linked.godwidget
godw.hist_linked.resize_sidepanes(from_linked=godw.rt_linked)
godw.search.focus()
# esc and ctrl-c # esc and ctrl-c
if key == Qt.Key_Escape or (ctrl and key == Qt.Key_C): if key == Qt.Key_Escape or (ctrl and key == Qt.Key_C):
# ctrl-c as cancel # ctrl-c as cancel
# https://forum.qt.io/topic/532/how-to-catch-ctrl-c-on-a-widget/9 # https://forum.qt.io/topic/532/how-to-catch-ctrl-c-on-a-widget/9
view.select_box.clear() view.select_box.clear()
view.linked.focus()
# cancel order or clear graphics # cancel order or clear graphics
if key == Qt.Key_C or key == Qt.Key_Delete: if key == Qt.Key_C or key == Qt.Key_Delete:
@ -178,17 +190,17 @@ async def handle_viewmode_kb_inputs(
if key in pressed: if key in pressed:
pressed.remove(key) pressed.remove(key)
# QUERY/QUOTE MODE # # QUERY/QUOTE MODE
# ----------------
if {Qt.Key_Q}.intersection(pressed): if {Qt.Key_Q}.intersection(pressed):
view.linkedsplits.cursor.in_query_mode = True view.linked.cursor.in_query_mode = True
else: else:
view.linkedsplits.cursor.in_query_mode = False view.linked.cursor.in_query_mode = False
# SELECTION MODE # SELECTION MODE
# -------------- # --------------
if shift: if shift:
if view.state['mouseMode'] == ViewBox.PanMode: if view.state['mouseMode'] == ViewBox.PanMode:
view.setMouseMode(ViewBox.RectMode) view.setMouseMode(ViewBox.RectMode)
@ -209,18 +221,27 @@ async def handle_viewmode_kb_inputs(
# ORDER MODE # ORDER MODE
# ---------- # ----------
# live vs. dark trigger + an action {buy, sell, alert} # live vs. dark trigger + an action {buy, sell, alert}
order_keys_pressed = ORDER_MODE.intersection(pressed) order_keys_pressed = ORDER_MODE.intersection(pressed)
if order_keys_pressed: if order_keys_pressed:
# show the pp size label # TODO: it seems like maybe the composition should be
order_mode.current_pp.show() # reversed here? Like, maybe we should have the nav have
# access to the pos state and then make encapsulated logic
# that shows the right stuff on screen instead of order mode
# and position-related abstractions doing this?
# show the pp size label only if there is
# a non-zero pos existing
tracker = order_mode.current_pp
if tracker.live_pp.size:
tracker.nav.show()
# TODO: show pp config mini-params in status bar widget # TODO: show pp config mini-params in status bar widget
# mode.pp_config.show() # mode.pp_config.show()
trigger_type: str = 'dark'
if ( if (
# 's' for "submit" to activate "live" order # 's' for "submit" to activate "live" order
Qt.Key_S in pressed or Qt.Key_S in pressed or
@ -228,9 +249,6 @@ async def handle_viewmode_kb_inputs(
): ):
trigger_type: str = 'live' trigger_type: str = 'live'
else:
trigger_type: str = 'dark'
# order mode trigger "actions" # order mode trigger "actions"
if Qt.Key_D in pressed: # for "damp eet" if Qt.Key_D in pressed: # for "damp eet"
action = 'sell' action = 'sell'
@ -259,8 +277,8 @@ async def handle_viewmode_kb_inputs(
Qt.Key_S in pressed or Qt.Key_S in pressed or
order_keys_pressed or order_keys_pressed or
Qt.Key_O in pressed Qt.Key_O in pressed
) and )
key in NUMBER_LINE and key in NUMBER_LINE
): ):
# hot key to set order slots size. # hot key to set order slots size.
# change edit field to current number line value, # change edit field to current number line value,
@ -278,7 +296,7 @@ async def handle_viewmode_kb_inputs(
else: # none active else: # none active
# hide pp label # hide pp label
order_mode.current_pp.hide_info() order_mode.current_pp.nav.hide_info()
# if none are pressed, remove "staged" level # if none are pressed, remove "staged" level
# line under cursor position # line under cursor position
@ -319,7 +337,6 @@ async def handle_viewmode_mouse(
): ):
# when in order mode, submit execution # when in order mode, submit execution
# msg.event.accept() # msg.event.accept()
# breakpoint()
view.order_mode.submit_order() view.order_mode.submit_order()
@ -336,16 +353,6 @@ class ChartView(ViewBox):
''' '''
mode_name: str = 'view' mode_name: str = 'view'
# "relay events" for making overlaid views work.
# NOTE: these MUST be defined here (and can't be monkey patched
# on later) due to signal construction requiring refs to be
# in place during the run of meta-class machinery.
mouseDragEventRelay = QtCore.Signal(object, object, object)
wheelEventRelay = QtCore.Signal(object, object, object)
event_relay_source: 'Optional[ViewBox]' = None
relays: dict[str, QtCore.Signal] = {}
def __init__( def __init__(
self, self,
@ -367,7 +374,6 @@ class ChartView(ViewBox):
) )
# for "known y-range style" # for "known y-range style"
self._static_yrange = static_yrange self._static_yrange = static_yrange
self._maxmin = None
# disable vertical scrolling # disable vertical scrolling
self.setMouseEnabled( self.setMouseEnabled(
@ -375,8 +381,8 @@ class ChartView(ViewBox):
y=True, y=True,
) )
self.linkedsplits = None self.linked = None
self._chart: 'ChartPlotWidget' = None # noqa self._chart: ChartPlotWidget | None = None # noqa
# add our selection box annotator # add our selection box annotator
self.select_box = SelectRect(self) self.select_box = SelectRect(self)
@ -387,6 +393,7 @@ class ChartView(ViewBox):
self.setFocusPolicy(QtCore.Qt.StrongFocus) self.setFocusPolicy(QtCore.Qt.StrongFocus)
self._ic = None self._ic = None
self._yranger: Callable | None = None
def start_ic( def start_ic(
self, self,
@ -397,8 +404,11 @@ class ChartView(ViewBox):
''' '''
if self._ic is None: if self._ic is None:
self.chart.pause_all_feeds() try:
self._ic = trio.Event() self.chart.pause_all_feeds()
self._ic = trio.Event()
except RuntimeError:
pass
def signal_ic( def signal_ic(
self, self,
@ -411,9 +421,12 @@ class ChartView(ViewBox):
''' '''
if self._ic: if self._ic:
self._ic.set() try:
self._ic = None self._ic.set()
self.chart.resume_all_feeds() self._ic = None
self.chart.resume_all_feeds()
except RuntimeError:
pass
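
Both handlers now tolerate a late Qt signal firing after the async runtime
is gone, hence the ``RuntimeError`` guards. A minimal sketch of how a
consumer task can use the interaction-cycle event managed above (assuming
``._ic`` is the ``trio.Event`` set in ``start_ic()``):

    async def wait_for_ic_end(view) -> None:
        # block until the user interaction cycle ends
        ev = view._ic
        if ev is not None:
            await ev.wait()  # released when `signal_ic()` calls `.set()`
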
@asynccontextmanager @asynccontextmanager
async def open_async_input_handler( async def open_async_input_handler(
@ -441,29 +454,18 @@ class ChartView(ViewBox):
yield self yield self
@property @property
def chart(self) -> 'ChartPlotWidget': # type: ignore # noqa def chart(self) -> ChartPlotWidget: # type: ignore # noqa
return self._chart return self._chart
@chart.setter @chart.setter
def chart(self, chart: 'ChartPlotWidget') -> None: # type: ignore # noqa def chart(self, chart: ChartPlotWidget) -> None: # type: ignore # noqa
self._chart = chart self._chart = chart
self.select_box.chart = chart self.select_box.chart = chart
if self._maxmin is None:
self._maxmin = chart.maxmin
@property
def maxmin(self) -> Callable:
return self._maxmin
@maxmin.setter
def maxmin(self, callback: Callable) -> None:
self._maxmin = callback
def wheelEvent( def wheelEvent(
self, self,
ev, ev,
axis=None, axis=None,
relayed_from: ChartView = None,
): ):
''' '''
Override "center-point" location for scrolling. Override "center-point" location for scrolling.
@ -474,27 +476,34 @@ class ChartView(ViewBox):
TODO: PR a method into ``pyqtgraph`` to make this configurable TODO: PR a method into ``pyqtgraph`` to make this configurable
''' '''
linked = self.linked
if (
not linked
):
return
if axis in (0, 1): if axis in (0, 1):
mask = [False, False] mask = [False, False]
mask[axis] = self.state['mouseEnabled'][axis] mask[axis] = self.state['mouseEnabled'][axis]
else: else:
mask = self.state['mouseEnabled'][:] mask = self.state['mouseEnabled'][:]
chart = self.linkedsplits.chart chart = self.linked.chart
# don't zoom more than the min points setting # don't zoom more than the min points setting
l, lbar, rbar, r = chart.bars_range() viz = chart.get_viz(chart.name)
# vl = r - l vl, lbar, rbar, vr = viz.bars_range()
# if ev.delta() > 0 and vl <= _min_points_to_show: # TODO: max/min zoom limits incorporating time step size.
# log.debug("Max zoom bruh...") # rl = vr - vl
# if ev.delta() > 0 and rl <= _min_points_to_show:
# log.warning("Max zoom bruh...")
# return # return
# if ( # if (
# ev.delta() < 0 # ev.delta() < 0
# and vl >= len(chart._flows[chart.name].shm.array) + 666 # and rl >= len(chart._vizs[chart.name].shm.array) + 666
# ): # ):
# log.debug("Min zoom bruh...") # log.warning("Min zoom bruh...")
# return # return
# actual scaling factor # actual scaling factor
@ -525,49 +534,17 @@ class ChartView(ViewBox):
self.scaleBy(s, center) self.scaleBy(s, center)
else: else:
# use right-most point of current curve graphic
# center = pg.Point( xl = viz.graphics.x_last()
# fn.invertQTransform(self.childGroup.transform()).map(ev.pos())
# )
# XXX: scroll "around" the right most element in the view
# which stays "pinned" in place.
# furthest_right_coord = self.boundingRect().topRight()
# yaxis = pg.Point(
# fn.invertQTransform(
# self.childGroup.transform()
# ).map(furthest_right_coord)
# )
# This seems like the most "intuitive option, a hybrid of
# tws and tv styles
last_bar = pg.Point(int(rbar)) + 1
ryaxis = chart.getAxis('right')
r_axis_x = ryaxis.pos().x()
end_of_l1 = pg.Point(
round(
chart.cv.mapToView(
pg.Point(r_axis_x - chart._max_l1_line_len)
# QPointF(chart._max_l1_line_len, 0)
).x()
)
) # .x()
# self.state['viewRange'][0][1] = end_of_l1
# focal = pg.Point((last_bar.x() + end_of_l1)/2)
focal = min( focal = min(
last_bar, xl,
end_of_l1, vr,
key=lambda p: p.x()
) )
# focal = pg.Point(last_bar.x() + end_of_l1)
self._resetTarget() self._resetTarget()
# NOTE: scroll "around" the right most datum-element in view
# gives the feeling of staying "pinned" in place.
self.scaleBy(s, focal) self.scaleBy(s, focal)
# XXX: the order of the next 2 lines i'm pretty sure # XXX: the order of the next 2 lines i'm pretty sure
@ -593,10 +570,8 @@ class ChartView(ViewBox):
self, self,
ev, ev,
axis: Optional[int] = None, axis: Optional[int] = None,
relayed_from: ChartView = None,
) -> None: ) -> None:
pos = ev.pos() pos = ev.pos()
lastPos = ev.lastPos() lastPos = ev.lastPos()
dif = pos - lastPos dif = pos - lastPos
@ -666,10 +641,10 @@ class ChartView(ViewBox):
# PANNING MODE # PANNING MODE
else: else:
# XXX: WHY try:
ev.accept() self.start_ic()
except RuntimeError:
self.start_ic() pass
# if self._ic is None: # if self._ic is None:
# self.chart.pause_all_feeds() # self.chart.pause_all_feeds()
# self._ic = trio.Event() # self._ic = trio.Event()
@ -697,6 +672,9 @@ class ChartView(ViewBox):
# self._ic = None # self._ic = None
# self.chart.resume_all_feeds() # self.chart.resume_all_feeds()
# XXX: WHY
ev.accept()
# WEIRD "RIGHT-CLICK CENTER ZOOM" MODE # WEIRD "RIGHT-CLICK CENTER ZOOM" MODE
elif button & QtCore.Qt.RightButton: elif button & QtCore.Qt.RightButton:
@ -742,7 +720,12 @@ class ChartView(ViewBox):
*, *,
yrange: Optional[tuple[float, float]] = None, yrange: Optional[tuple[float, float]] = None,
range_margin: float = 0.06, viz: Viz | None = None,
# NOTE: this value pairs (more or less) with L1 label text
# height offset from the bid/ask lines.
range_margin: float = 0.09,
bars_range: Optional[tuple[int, int, int, int]] = None, bars_range: Optional[tuple[int, int, int, int]] = None,
# flag to prevent triggering sibling charts from the same linked # flag to prevent triggering sibling charts from the same linked
@ -761,7 +744,7 @@ class ChartView(ViewBox):
''' '''
name = self.name name = self.name
# print(f'YRANGE ON {name}') # print(f'YRANGE ON {name}')
profiler = pg.debug.Profiler( profiler = Profiler(
msg=f'`ChartView._set_yrange()`: `{name}`', msg=f'`ChartView._set_yrange()`: `{name}`',
disabled=not pg_profile_enabled(), disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then, ms_threshold=ms_slower_then,
@ -795,18 +778,28 @@ class ChartView(ViewBox):
# XXX: only compute the mxmn range # XXX: only compute the mxmn range
# if none is provided as input! # if none is provided as input!
if not yrange: if not yrange:
# flow = chart._flows[name]
yrange = self._maxmin() if not viz:
breakpoint()
out = viz.maxmin()
if out is None:
log.warning(f'No yrange provided for {name}!?')
return
(
ixrng,
_,
yrange
) = out
profiler(f'`{self.name}:Viz.maxmin()` -> {ixrng}=>{yrange}')
if yrange is None: if yrange is None:
log.warning(f'No yrange provided for {name}!?') log.warning(f'No yrange provided for {name}!?')
print(f"WTF NO YRANGE {name}")
return return
ylow, yhigh = yrange ylow, yhigh = yrange
profiler(f'callback ._maxmin(): {yrange}')
# view margins: stay within a % of the "true range" # view margins: stay within a % of the "true range"
diff = yhigh - ylow diff = yhigh - ylow
ylow = ylow - (diff * range_margin) ylow = ylow - (diff * range_margin)
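
Worked numbers for the margin expansion, using the new ``range_margin = 0.09``
default and assuming the matching adjustment on the high side:

    # hedged example with yrange = (100.0, 200.0)
    diff = 200.0 - 100.0           # 100.0, the "true range"
    ylow = 100.0 - (diff * 0.09)   # 91.0
    yhigh = 200.0 + (diff * 0.09)  # 209.0, ~9% padding on both ends
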
@ -826,54 +819,55 @@ class ChartView(ViewBox):
def enable_auto_yrange( def enable_auto_yrange(
self, self,
viz: Viz,
src_vb: Optional[ChartView] = None, src_vb: Optional[ChartView] = None,
) -> None: ) -> None:
''' '''
Assign callback for rescaling y-axis automatically Assign callbacks for rescaling and resampling y-axis data
based on data contents and ``ViewBox`` state. automatically based on data contents and ``ViewBox`` state.
''' '''
if src_vb is None: if src_vb is None:
src_vb = self src_vb = self
# splitter(s) resizing if self._yranger is None:
src_vb.sigResized.connect(self._set_yrange) self._yranger = partial(
self._set_yrange,
viz=viz,
)
# widget-UIs/splitter(s) resizing
src_vb.sigResized.connect(self._yranger)
# mouse wheel doesn't emit XRangeChanged
src_vb.sigRangeChangedManually.connect(self._yranger)
# re-sampling trigger:
# TODO: a smarter way to avoid calling this needlessly? # TODO: a smarter way to avoid calling this needlessly?
# 2 things i can think of: # 2 things i can think of:
# - register downsample-able graphics specially and only # - register downsample-able graphics specially and only
# iterate those. # iterate those.
# - only register this when certain downsampleable graphics are # - only register this when certain downsample-able graphics are
# "added to scene". # "added to scene".
src_vb.sigRangeChangedManually.connect( src_vb.sigRangeChangedManually.connect(
self.maybe_downsample_graphics self.maybe_downsample_graphics
) )
# mouse wheel doesn't emit XRangeChanged
src_vb.sigRangeChangedManually.connect(self._set_yrange)
# src_vb.sigXRangeChanged.connect(self._set_yrange)
# src_vb.sigXRangeChanged.connect(
# self.maybe_downsample_graphics
# )
def disable_auto_yrange(self) -> None: def disable_auto_yrange(self) -> None:
# XXX: not entirely sure why we can't de-reg this..
self.sigResized.disconnect( self.sigResized.disconnect(
self._set_yrange, self._yranger,
) )
self.sigRangeChangedManually.disconnect(
self._yranger,
)
self.sigRangeChangedManually.disconnect( self.sigRangeChangedManually.disconnect(
self.maybe_downsample_graphics self.maybe_downsample_graphics
) )
self.sigRangeChangedManually.disconnect(
self._set_yrange,
)
# self.sigXRangeChanged.disconnect(self._set_yrange)
# self.sigXRangeChanged.disconnect(
# self.maybe_downsample_graphics
# )
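
The cached ``partial`` matters because Qt's ``.disconnect()`` only matches
the exact slot object that was passed to ``.connect()``; a fresh
``partial(self._set_yrange, viz=viz)`` built at disconnect time would be a
different object and fail to match. A minimal sketch of the pattern, with
hypothetical names:

    from functools import partial

    class YrangeConnector:
        def __init__(self) -> None:
            self._slot = None

        def enable(self, viz) -> None:
            if self._slot is None:
                # cache the exact slot object for later disconnection
                self._slot = partial(self._on_resize, viz=viz)
            self.sigResized.connect(self._slot)

        def disable(self) -> None:
            # same object that was connected => disconnect succeeds
            self.sigResized.disconnect(self._slot)
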
def x_uppx(self) -> float: def x_uppx(self) -> float:
''' '''
@ -882,7 +876,7 @@ class ChartView(ViewBox):
graphics items which are our children. graphics items which are our children.
''' '''
graphics = [f.graphics for f in self._chart._flows.values()] graphics = [f.graphics for f in self._chart._vizs.values()]
if not graphics: if not graphics:
return 0 return 0
@ -895,10 +889,9 @@ class ChartView(ViewBox):
def maybe_downsample_graphics( def maybe_downsample_graphics(
self, self,
autoscale_overlays: bool = True, autoscale_overlays: bool = False,
): ):
profiler = Profiler(
profiler = pg.debug.Profiler(
msg=f'ChartView.maybe_downsample_graphics() for {self.name}', msg=f'ChartView.maybe_downsample_graphics() for {self.name}',
disabled=not pg_profile_enabled(), disabled=not pg_profile_enabled(),
@ -912,10 +905,14 @@ class ChartView(ViewBox):
# TODO: a faster single-loop-iterator way of doing this XD # TODO: a faster single-loop-iterator way of doing this XD
chart = self._chart chart = self._chart
linked = self.linkedsplits plots = {chart.name: chart}
plots = linked.subplots | {chart.name: chart}
linked = self.linked
if linked:
plots |= linked.subplots
for chart_name, chart in plots.items(): for chart_name, chart in plots.items():
for name, flow in chart._flows.items(): for name, flow in chart._vizs.items():
if ( if (
not flow.render not flow.render
@ -923,25 +920,24 @@ class ChartView(ViewBox):
# XXX: super important to be aware of this. # XXX: super important to be aware of this.
# or not flow.graphics.isVisible() # or not flow.graphics.isVisible()
): ):
# print(f'skipping {flow.name}')
continue continue
# pass in no array which will read and render from the last # pass in no array which will read and render from the last
# passed array (normally provided by the display loop.) # passed array (normally provided by the display loop.)
chart.update_graphics_from_flow( chart.update_graphics_from_flow(name)
name,
use_vr=True,
)
# for each overlay on this chart auto-scale the # for each overlay on this chart auto-scale the
# y-range to max-min values. # y-range to max-min values.
if autoscale_overlays: # if autoscale_overlays:
overlay = chart.pi_overlay # overlay = chart.pi_overlay
if overlay: # if overlay:
for pi in overlay.overlays: # for pi in overlay.overlays:
pi.vb._set_yrange( # pi.vb._set_yrange(
# TODO: get the range once up front... # # TODO: get the range once up front...
# bars_range=br, # # bars_range=br,
) # viz=pi.viz,
profiler('autoscaled linked plots') # )
# profiler('autoscaled linked plots')
profiler(f'<{chart_name}>.update_graphics_from_flow({name})') profiler(f'<{chart_name}>.update_graphics_from_flow({name})')


@ -26,22 +26,24 @@ from PyQt5.QtCore import QPointF
from ._axes import YAxisLabel from ._axes import YAxisLabel
from ._style import hcolor from ._style import hcolor
from ._pg_overrides import PlotItem
class LevelLabel(YAxisLabel): class LevelLabel(YAxisLabel):
"""Y-axis (vertically) oriented, horizontal label that sticks to '''
Y-axis (vertically) oriented, horizontal label that sticks to
where it's placed despite chart resizing and supports displaying where it's placed despite chart resizing and supports displaying
multiple fields. multiple fields.
TODO: replace the rectangle-text part with our new ``Label`` type. TODO: replace the rectangle-text part with our new ``Label`` type.
""" '''
_x_margin = 0 _x_br_offset: float = -16
_y_margin = 0 _y_txt_h_scaling: float = 2
# adjustment "further away from" anchor point # adjustment "further away from" anchor point
_x_offset = 9 _x_offset = 0
_y_offset = 0 _y_offset = 0
# fields to be displayed in the label string # fields to be displayed in the label string
@ -57,12 +59,12 @@ class LevelLabel(YAxisLabel):
chart, chart,
parent, parent,
color: str = 'bracket', color: str = 'default_light',
orient_v: str = 'bottom', orient_v: str = 'bottom',
orient_h: str = 'left', orient_h: str = 'right',
opacity: float = 0, opacity: float = 1,
# makes order line labels offset from their parent axis # makes order line labels offset from their parent axis
# such that they don't collide with the L1/L2 lines/prices # such that they don't collide with the L1/L2 lines/prices
@ -98,13 +100,15 @@ class LevelLabel(YAxisLabel):
self._h_shift = { self._h_shift = {
'left': -1., 'left': -1.,
'right': 0. 'right': 0.,
}[orient_h] }[orient_h]
self.fields = self._fields.copy() self.fields = self._fields.copy()
# ensure default format fields are in correct # ensure default format fields are in correct
self.set_fmt_str(self._fmt_str, self.fields) self.set_fmt_str(self._fmt_str, self.fields)
self.setZValue(10)
@property @property
def color(self): def color(self):
return self._hcolor return self._hcolor
@ -112,7 +116,10 @@ class LevelLabel(YAxisLabel):
@color.setter @color.setter
def color(self, color: str) -> None: def color(self, color: str) -> None:
self._hcolor = color self._hcolor = color
self._pen = self.pen = pg.mkPen(hcolor(color)) self._pen = self.pen = pg.mkPen(
hcolor(color),
width=3,
)
def update_on_resize(self, vr, r): def update_on_resize(self, vr, r):
"""Tiis is a ``.sigRangeChanged()`` handler. """Tiis is a ``.sigRangeChanged()`` handler.
@ -124,15 +131,16 @@ class LevelLabel(YAxisLabel):
self, self,
fields: dict = None, fields: dict = None,
) -> None: ) -> None:
"""Update the label's text contents **and** position from '''
Update the label's text contents **and** position from
a view box coordinate datum. a view box coordinate datum.
""" '''
self.fields.update(fields) self.fields.update(fields)
level = self.fields['level'] level = self.fields['level']
# map "level" to local coords # map "level" to local coords
abs_xy = self._chart.mapFromView(QPointF(0, level)) abs_xy = self._pi.mapFromView(QPointF(0, level))
self.update_label( self.update_label(
abs_xy, abs_xy,
@ -149,7 +157,7 @@ class LevelLabel(YAxisLabel):
h, w = self.set_label_str(fields) h, w = self.set_label_str(fields)
if self._adjust_to_l1: if self._adjust_to_l1:
self._x_offset = self._chart._max_l1_line_len self._x_offset = self._pi.chart_widget._max_l1_line_len
self.setPos(QPointF( self.setPos(QPointF(
self._h_shift * (w + self._x_offset), self._h_shift * (w + self._x_offset),
@ -174,7 +182,8 @@ class LevelLabel(YAxisLabel):
fields: dict, fields: dict,
): ):
# use space as e3 delim # use space as e3 delim
self.label_str = self._fmt_str.format(**fields).replace(',', ' ') self.label_str = self._fmt_str.format(
**fields).replace(',', ' ')
br = self.boundingRect() br = self.boundingRect()
h, w = br.height(), br.width() h, w = br.height(), br.width()
@ -187,14 +196,14 @@ class LevelLabel(YAxisLabel):
self, self,
p: QtGui.QPainter, p: QtGui.QPainter,
rect: QtCore.QRectF rect: QtCore.QRectF
) -> None: ) -> None:
p.setPen(self._pen) p.setPen(self._pen)
rect = self.rect rect = self.rect
if self._orient_v == 'bottom': if self._orient_v == 'bottom':
lp, rp = rect.topLeft(), rect.topRight() lp, rp = rect.topLeft(), rect.topRight()
# p.drawLine(rect.topLeft(), rect.topRight())
elif self._orient_v == 'top': elif self._orient_v == 'top':
lp, rp = rect.bottomLeft(), rect.bottomRight() lp, rp = rect.bottomLeft(), rect.bottomRight()
@ -208,6 +217,11 @@ class LevelLabel(YAxisLabel):
]) ])
) )
p.fillRect(
self.rect,
self.bg_color,
)
def highlight(self, pen) -> None: def highlight(self, pen) -> None:
self._pen = pen self._pen = pen
self.update() self.update()
@ -236,43 +250,46 @@ class L1Label(LevelLabel):
# Set a global "max L1 label length" so we can # Set a global "max L1 label length" so we can
# look it up on order lines and adjust their # look it up on order lines and adjust their
# labels not to overlap with it. # labels not to overlap with it.
chart = self._chart chart = self._pi.chart_widget
chart._max_l1_line_len: float = max( chart._max_l1_line_len: float = max(
chart._max_l1_line_len, chart._max_l1_line_len,
w w,
) )
return h, w return h, w
class L1Labels: class L1Labels:
"""Level 1 bid ask labels for dynamic update on price-axis. '''
Level 1 bid ask labels for dynamic update on price-axis.
""" '''
def __init__( def __init__(
self, self,
chart: 'ChartPlotWidget', # noqa plotitem: PlotItem,
digits: int = 2, digits: int = 2,
size_digits: int = 3, size_digits: int = 3,
font_size: str = 'small', font_size: str = 'small',
) -> None: ) -> None:
self.chart = chart chart = self.chart = plotitem.chart_widget
raxis = chart.getAxis('right') raxis = plotitem.getAxis('right')
kwargs = { kwargs = {
'chart': chart, 'chart': plotitem,
'parent': raxis, 'parent': raxis,
'opacity': 1, 'opacity': .9,
'font_size': font_size, 'font_size': font_size,
'fg_color': chart.pen_color, 'fg_color': 'default_light',
'bg_color': chart.view_color, 'bg_color': chart.view_color, # normally 'papas_special'
} }
# TODO: add humanized source-asset
# info format.
fmt_str = ( fmt_str = (
' {size:.{size_digits}f} x ' ' {size:.{size_digits}f} u'
'{level:,.{level_digits}f} ' # '{level:,.{level_digits}f} '
) )
fields = { fields = {
'level': 0, 'level': 0,
@ -285,12 +302,17 @@ class L1Labels:
orient_v='bottom', orient_v='bottom',
**kwargs, **kwargs,
) )
bid.set_fmt_str(fmt_str=fmt_str, fields=fields) bid.set_fmt_str(
fmt_str='\n' + fmt_str,
fields=fields,
)
bid.show() bid.show()
ask = self.ask_label = L1Label( ask = self.ask_label = L1Label(
orient_v='top', orient_v='top',
**kwargs, **kwargs,
) )
ask.set_fmt_str(fmt_str=fmt_str, fields=fields) ask.set_fmt_str(
fmt_str=fmt_str,
fields=fields)
ask.show() ask.show()


@ -233,6 +233,36 @@ class Label:
def delete(self) -> None: def delete(self) -> None:
self.vb.scene().removeItem(self.txt) self.vb.scene().removeItem(self.txt)
# NOTE: pulled out from ``ChartPlotWidget`` from way way old code.
# def _label_h(self, yhigh: float, ylow: float) -> float:
# # compute contents label "height" in view terms
# # to avoid having data "contents" overlap with them
# if self._labels:
# label = self._labels[self.name][0]
# rect = label.itemRect()
# tl, br = rect.topLeft(), rect.bottomRight()
# vb = self.plotItem.vb
# try:
# # on startup labels might not yet be rendered
# top, bottom = (vb.mapToView(tl).y(), vb.mapToView(br).y())
# # XXX: magic hack, how do we compute exactly?
# label_h = (top - bottom) * 0.42
# except np.linalg.LinAlgError:
# label_h = 0
# else:
# label_h = 0
# # print(f'label height {self.name}: {label_h}')
# if label_h > yhigh - ylow:
# label_h = 0
# print(f"bounds (ylow, yhigh): {(ylow, yhigh)}")
class FormatLabel(QLabel): class FormatLabel(QLabel):
''' '''


@ -18,9 +18,14 @@
Lines for orders, alerts, L2. Lines for orders, alerts, L2.
""" """
from __future__ import annotations
from functools import partial from functools import partial
from math import floor from math import floor
from typing import Optional, Callable from typing import (
Optional,
Callable,
TYPE_CHECKING,
)
import pyqtgraph as pg import pyqtgraph as pg
from pyqtgraph import Point, functions as fn from pyqtgraph import Point, functions as fn
@ -37,6 +42,9 @@ from ..calc import humanize
from ._label import Label from ._label import Label
from ._style import hcolor, _font from ._style import hcolor, _font
if TYPE_CHECKING:
from ._cursor import Cursor
# TODO: probably worth investigating if we can # TODO: probably worth investigating if we can
# make .boundingRect() faster: # make .boundingRect() faster:
@ -84,7 +92,7 @@ class LevelLine(pg.InfiniteLine):
self._marker = None self._marker = None
self.only_show_markers_on_hover = only_show_markers_on_hover self.only_show_markers_on_hover = only_show_markers_on_hover
self.show_markers: bool = True # presuming the line is hovered at init self.track_marker_pos: bool = False
# should line go all the way to far end or leave a "margin" # should line go all the way to far end or leave a "margin"
# space for other graphics (eg. L1 book) # space for other graphics (eg. L1 book)
@ -122,6 +130,9 @@ class LevelLine(pg.InfiniteLine):
self._y_incr_mult = 1 / chart.linked.symbol.tick_size self._y_incr_mult = 1 / chart.linked.symbol.tick_size
self._right_end_sc: float = 0 self._right_end_sc: float = 0
# use px caching
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)
def txt_offsets(self) -> tuple[int, int]: def txt_offsets(self) -> tuple[int, int]:
return 0, 0 return 0, 0
@ -216,20 +227,23 @@ class LevelLine(pg.InfiniteLine):
y: float y: float
) -> None: ) -> None:
'''Chart coordinates cursor tracking callback. '''
Chart coordinates cursor tracking callback.
This is called by our ``Cursor`` type once this line is set to This is called by our ``Cursor`` type once this line is set to
track the cursor: for every movement this callback is invoked to track the cursor: for every movement this callback is invoked to
reposition the line with the current view coordinates. reposition the line with the current view coordinates.
''' '''
self.movable = True self.movable = True
self.set_level(y) # implicitly calls reposition handler self.set_level(y) # implicitly calls reposition handler
def mouseDragEvent(self, ev): def mouseDragEvent(self, ev):
"""Override the ``InfiniteLine`` handler since we need more '''
Override the ``InfiniteLine`` handler since we need more
detailed control and start end signalling. detailed control and start end signalling.
""" '''
cursor = self._chart.linked.cursor cursor = self._chart.linked.cursor
# hide y-crosshair # hide y-crosshair
@ -281,10 +295,20 @@ class LevelLine(pg.InfiniteLine):
# show y-crosshair again # show y-crosshair again
cursor.show_xhair() cursor.show_xhair()
def delete(self) -> None: def get_cursor(self) -> Optional[Cursor]:
"""Remove this line from containing chart/view/scene.
""" chart = self._chart
cur = chart.linked.cursor
if self in cur._hovered:
return cur
return None
def delete(self) -> None:
'''
Remove this line from containing chart/view/scene.
'''
scene = self.scene() scene = self.scene()
if scene: if scene:
for label in self._labels: for label in self._labels:
@ -298,9 +322,8 @@ class LevelLine(pg.InfiniteLine):
# remove from chart/cursor states # remove from chart/cursor states
chart = self._chart chart = self._chart
cur = chart.linked.cursor cur = self.get_cursor()
if cur:
if self in cur._hovered:
cur._hovered.remove(self) cur._hovered.remove(self)
chart.plotItem.removeItem(self) chart.plotItem.removeItem(self)
@ -308,8 +331,8 @@ class LevelLine(pg.InfiniteLine):
def mouseDoubleClickEvent( def mouseDoubleClickEvent(
self, self,
ev: QtGui.QMouseEvent, ev: QtGui.QMouseEvent,
) -> None: ) -> None:
# TODO: enter labels edit mode # TODO: enter labels edit mode
print(f'double click {ev}') print(f'double click {ev}')
@ -334,30 +357,22 @@ class LevelLine(pg.InfiniteLine):
line_end, marker_right, r_axis_x = self._chart.marker_right_points() line_end, marker_right, r_axis_x = self._chart.marker_right_points()
if self.show_markers and self.markers: # (legacy) NOTE: at one point this seemed slower when moving around
# order lines.. not sure if that's still true or why but we've
p.setPen(self.pen) # dropped the original hacky `.pain()` transform stuff for inf
qgo_draw_markers( # line markers now - check the git history if it needs to be
self.markers, # reverted.
self.pen.color(), if self._marker:
p, if self.track_marker_pos:
vb_left, # make the line end at the marker's x pos
vb_right, line_end = marker_right = self._marker.pos().x()
marker_right,
)
# marker_size = self.markers[0][2]
self._maxMarkerSize = max([m[2] / 2. for m in self.markers])
# this seems slower when moving around
# order lines.. not sure wtf is up with that.
# for now we're just using it on the position line.
elif self._marker:
# TODO: make this label update part of a scene-aware-marker # TODO: make this label update part of a scene-aware-marker
# composed annotation # composed annotation
self._marker.setPos( self._marker.setPos(
QPointF(marker_right, self.scene_y()) QPointF(marker_right, self.scene_y())
) )
if hasattr(self._marker, 'label'): if hasattr(self._marker, 'label'):
self._marker.label.update() self._marker.label.update()
@ -379,16 +394,14 @@ class LevelLine(pg.InfiniteLine):
def hide(self) -> None: def hide(self) -> None:
super().hide() super().hide()
if self._marker: mkr = self._marker
self._marker.hide() if mkr:
# needed for ``order_line()`` lines currently mkr.hide()
self._marker.label.hide()
def show(self) -> None: def show(self) -> None:
super().show() super().show()
if self._marker: if self._marker:
self._marker.show() self._marker.show()
# self._marker.label.show()
def scene_y(self) -> float: def scene_y(self) -> float:
return self.getViewBox().mapFromView( return self.getViewBox().mapFromView(
@ -421,6 +434,10 @@ class LevelLine(pg.InfiniteLine):
return path return path
@property
def marker(self) -> LevelMarker:
return self._marker
def hoverEvent(self, ev): def hoverEvent(self, ev):
''' '''
Mouse hover callback. Mouse hover callback.
@ -429,17 +446,16 @@ class LevelLine(pg.InfiniteLine):
cur = self._chart.linked.cursor cur = self._chart.linked.cursor
# hovered # hovered
if (not ev.isExit()) and ev.acceptDrags(QtCore.Qt.LeftButton): if (
not ev.isExit()
and ev.acceptDrags(QtCore.Qt.LeftButton)
):
# if already hovered we don't need to run again # if already hovered we don't need to run again
if self.mouseHovering is True: if self.mouseHovering is True:
return return
if self.only_show_markers_on_hover: if self.only_show_markers_on_hover:
self.show_markers = True self.show_markers()
if self._marker:
self._marker.show()
# highlight if so configured # highlight if so configured
if self.highlight_on_hover: if self.highlight_on_hover:
@ -482,11 +498,7 @@ class LevelLine(pg.InfiniteLine):
cur._hovered.remove(self) cur._hovered.remove(self)
if self.only_show_markers_on_hover: if self.only_show_markers_on_hover:
self.show_markers = False self.hide_markers()
if self._marker:
self._marker.hide()
self._marker.label.hide()
if self not in cur._trackers: if self not in cur._trackers:
cur.show_xhair(y_label_level=self.value()) cur.show_xhair(y_label_level=self.value())
@ -498,6 +510,15 @@ class LevelLine(pg.InfiniteLine):
self.update() self.update()
def hide_markers(self) -> None:
if self._marker:
self._marker.hide()
self._marker.label.hide()
def show_markers(self) -> None:
if self._marker:
self._marker.show()
def level_line( def level_line(
@ -518,9 +539,10 @@ def level_line(
**kwargs, **kwargs,
) -> LevelLine: ) -> LevelLine:
"""Convenience routine to add a styled horizontal line to a plot. '''
Convenience routine to add a styled horizontal line to a plot.
""" '''
hl_color = color + '_light' if highlight_on_hover else color hl_color = color + '_light' if highlight_on_hover else color
line = LevelLine( line = LevelLine(
@ -702,7 +724,7 @@ def order_line(
marker = LevelMarker( marker = LevelMarker(
chart=chart, chart=chart,
style=marker_style, style=marker_style,
get_level=line.value, get_level=line.value, # callback
size=marker_size, size=marker_size,
keep_in_view=False, keep_in_view=False,
) )
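
``get_level`` is a zero-argument callable (here the bound ``line.value``
method) which the marker re-reads when repositioning so it tracks the line
live; a minimal sketch, assuming a marker type with that contract:

    class MarkerSketch:
        def __init__(self, get_level) -> None:
            self.get_level = get_level  # e.g. `line.value`

        def reposition(self) -> None:
            y = self.get_level()  # current line level on each repaint
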
@ -711,7 +733,8 @@ def order_line(
marker = line.add_marker(marker) marker = line.add_marker(marker)
# XXX: DON'T COMMENT THIS! # XXX: DON'T COMMENT THIS!
# this fixes it the artifact issue! .. of course, bounding rect stuff # this fixes it the artifact issue!
# .. of course, bounding rect stuff
line._maxMarkerSize = marker_size line._maxMarkerSize = marker_size
assert line._marker is marker assert line._marker is marker
@ -732,7 +755,8 @@ def order_line(
if action != 'alert': if action != 'alert':
# add a partial position label if we also added a level marker # add a partial position label if we also added a level
# marker
pp_size_label = Label( pp_size_label = Label(
view=view, view=view,
color=line.color, color=line.color,
@ -766,9 +790,9 @@ def order_line(
# XXX: without this the pp proportion label next the marker # XXX: without this the pp proportion label next the marker
# seems to lag? this is the same issue we had with position # seems to lag? this is the same issue we had with position
# lines which we handle with ``.update_graphics()``. # lines which we handle with ``.update_graphics()``.
# marker._on_paint=lambda marker: pp_size_label.update()
marker._on_paint = lambda marker: pp_size_label.update() marker._on_paint = lambda marker: pp_size_label.update()
# XXX: THIS IS AN UNTYPED MONKEY PATCH!?!?!
marker.label = label marker.label = label
# sanity check # sanity check

piker/ui/_notify.py 100644

@ -0,0 +1,108 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Notifications utils.
"""
import os
import platform
import subprocess
from typing import Optional
import trio
from ..log import get_logger
from ..clearing._messages import (
Status,
)
log = get_logger(__name__)
_dbus_uid: Optional[str] = ''
async def notify_from_ems_status_msg(
msg: Status,
duration: int = 3000,
is_subproc: bool = False,
) -> None:
'''
Send a linux desktop notification.
Handle subprocesses by discovering the dbus user id
on first call.
'''
if platform.system() != "Linux":
return
# TODO: this in another task?
# not sure if this will ever be a bottleneck,
# we probably could do graphics stuff first tho?
if is_subproc:
global _dbus_uid
su = os.environ.get('SUDO_USER')
if (
not _dbus_uid
and su
):
# TODO: use `trio` but we need to use nursery.start()
# to use pipes?
# result = await trio.run_process(
result = subprocess.run(
[
'id',
'-u',
su,
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
# check=True
)
_dbus_uid = result.stdout.decode("utf-8").replace('\n', '')
os.environ['DBUS_SESSION_BUS_ADDRESS'] = (
f'unix:path=/run/user/{_dbus_uid}/bus'
)
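# hedged example: for a desktop user with uid 1000 the assignment
# above resolves to:
#   DBUS_SESSION_BUS_ADDRESS='unix:path=/run/user/1000/bus'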
try:
result = await trio.run_process(
[
'notify-send',
'-u', 'normal',
'-t', f'{duration}',
'piker',
# TODO: add in standard fill/exec info that maybe we
# pack in a broker independent way?
f"'{msg.pformat()}'",
],
capture_stdout=True,
capture_stderr=True,
check=False,
)
if result.returncode != 0:
log.warn(f'Notification daemon crashed stderr: {result.stderr}')
log.runtime(result)
except FileNotFoundError:
log.warn('Tried to send a notification but \'notify-send\' not present')
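
A minimal usage sketch for the new helper, assuming an EMS ``Status`` msg in
hand and a hypothetical ``on_fill()`` callback:

    async def on_fill(msg: Status) -> None:
        # fire a desktop popup for an order fill; no-op off Linux
        await notify_from_ems_status_msg(
            msg,
            duration=5000,    # ms the notification stays visible
            is_subproc=True,  # resolve the dbus user id when daemonized
        )
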


@ -18,23 +18,23 @@ Super fast OHLC sampling graphics types.
""" """
from __future__ import annotations from __future__ import annotations
from typing import (
Optional,
TYPE_CHECKING,
)
import numpy as np import numpy as np
import pyqtgraph as pg from PyQt5 import (
from PyQt5 import QtCore, QtGui, QtWidgets QtGui,
from PyQt5.QtCore import QLineF, QPointF QtWidgets,
)
from PyQt5.QtCore import (
QLineF,
QRectF,
)
from PyQt5.QtWidgets import QGraphicsItem
from PyQt5.QtGui import QPainterPath from PyQt5.QtGui import QPainterPath
from ._curve import FlowGraphic
from .._profile import pg_profile_enabled, ms_slower_then from .._profile import pg_profile_enabled, ms_slower_then
from ._style import hcolor
from ..log import get_logger from ..log import get_logger
from .._profile import Profiler
if TYPE_CHECKING:
from ._chart import LinkedSplits
log = get_logger(__name__) log = get_logger(__name__)
@ -43,7 +43,8 @@ log = get_logger(__name__)
def bar_from_ohlc_row( def bar_from_ohlc_row(
row: np.ndarray, row: np.ndarray,
# 0.5 is no overlap between arms, 1.0 is full overlap # 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.43 bar_w: float,
bar_gap: float = 0.16
) -> tuple[QLineF]: ) -> tuple[QLineF]:
''' '''
@ -51,8 +52,7 @@ def bar_from_ohlc_row(
OHLC "bar" for use in the "last datum" of a series. OHLC "bar" for use in the "last datum" of a series.
''' '''
open, high, low, close, index = row[ open, high, low, close, index = row
['open', 'high', 'low', 'close', 'index']]
# TODO: maybe consider using `QGraphicsLineItem` ?? # TODO: maybe consider using `QGraphicsLineItem` ??
# gives us a ``.boundingRect()`` on the objects which may make # gives us a ``.boundingRect()`` on the objects which may make
@ -60,9 +60,11 @@ def bar_from_ohlc_row(
# history path faster since it's done in C++: # history path faster since it's done in C++:
# https://doc.qt.io/qt-5/qgraphicslineitem.html # https://doc.qt.io/qt-5/qgraphicslineitem.html
mid: float = (bar_w / 2) + index
# high -> low vertical (body) line # high -> low vertical (body) line
if low != high: if low != high:
hl = QLineF(index, low, index, high) hl = QLineF(mid, low, mid, high)
else: else:
# XXX: if we don't do it renders a weird rectangle? # XXX: if we don't do it renders a weird rectangle?
# see below for filtering this later... # see below for filtering this later...
@ -73,48 +75,55 @@ def bar_from_ohlc_row(
# the index's range according to the view mapping coordinates. # the index's range according to the view mapping coordinates.
# open line # open line
o = QLineF(index - w, open, index, open) o = QLineF(index + bar_gap, open, mid, open)
# close line # close line
c = QLineF(index, close, index + w, close) c = QLineF(
mid, close,
index + bar_w - bar_gap, close,
)
return [hl, o, c] return [hl, o, c]
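
A hedged worked example of the new width/gap geometry, taking ``index=100``,
``bar_w=1.0``, ``bar_gap=0.16`` and a row ``(o, h, l, c) = (10, 12, 9, 11)``:

    mid = (1.0 / 2) + 100              # 100.5, the bar's x-center
    hl = QLineF(100.5, 9, 100.5, 12)   # low -> high body line at center
    o = QLineF(100.16, 10, 100.5, 10)  # open arm: left edge (+gap) to mid
    c = QLineF(100.5, 11, 100.84, 11)  # close arm: mid to right edge (-gap)
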
class BarItems(pg.GraphicsObject): class BarItems(FlowGraphic):
''' '''
"Price range" bars graphics rendered from a OHLC sampled sequence. "Price range" bars graphics rendered from a OHLC sampled sequence.
''' '''
# XXX: causes this weird jitter bug when click-drag panning
# where the path curve will awkwardly flicker back and forth?
cache_mode: int = QGraphicsItem.NoCache
def __init__( def __init__(
self, self,
linked: LinkedSplits, *args,
plotitem: 'pg.PlotItem', # noqa **kwargs,
pen_color: str = 'bracket',
last_bar_color: str = 'bracket',
name: Optional[str] = None,
) -> None: ) -> None:
super().__init__()
self.linked = linked
# XXX: for the mega-lulz increasing width here increases draw
# latency... so probably don't do it until we figure that out.
self._color = pen_color
self.bars_pen = pg.mkPen(hcolor(pen_color), width=1)
self.last_bar_pen = pg.mkPen(hcolor(last_bar_color), width=2)
self._name = name
self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache) super().__init__(*args, **kwargs)
self.path = QPainterPath() self._last_bar_lines: tuple[QLineF, ...] | None = None
self._last_bar_lines: Optional[tuple[QLineF, ...]] = None
def x_uppx(self) -> int: def x_last(self) -> None | float:
# we expect the downsample curve report this. '''
return 0 Return the last most x value of the close line segment
or if not drawn yet, ``None``.
'''
if self._last_bar_lines:
close_arm_line = self._last_bar_lines[-1]
return close_arm_line.x2() if close_arm_line else None
else:
return None
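
The reworked wheel-zoom handler earlier in this changeset consumes this as
its focal point; a short sketch of that call-site pattern:

    xl = viz.graphics.x_last()  # right edge of the last close arm
    focal = min(xl, vr)         # don't zoom around space right of the view
    view.scaleBy(s, focal)
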
# Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect
def boundingRect(self): def boundingRect(self):
# Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect # profiler = Profiler(
# msg=f'BarItems.boundingRect(): `{self._name}`',
# disabled=not pg_profile_enabled(),
# ms_threshold=ms_slower_then,
# )
# TODO: Can we do rect caching to make this faster # TODO: Can we do rect caching to make this faster
# like `pg.PlotCurveItem` does? In theory it's just # like `pg.PlotCurveItem` does? In theory it's just
@ -134,32 +143,37 @@ class BarItems(pg.GraphicsObject):
hb.topLeft(), hb.topLeft(),
hb.bottomRight(), hb.bottomRight(),
) )
mn_y = hb_tl.y()
mx_y = hb_br.y()
most_left = hb_tl.x()
most_right = hb_br.x()
# profiler('calc path vertices')
# need to include last bar height or BR will be off # need to include last bar height or BR will be off
mx_y = hb_br.y() # OHLC line segments: [hl, o, c]
mn_y = hb_tl.y() last_lines: tuple[QLineF] | None = self._last_bar_lines
last_lines = self._last_bar_lines
if last_lines: if last_lines:
body_line = self._last_bar_lines[0] (
if body_line: hl,
mx_y = max(mx_y, max(body_line.y1(), body_line.y2())) o,
mn_y = min(mn_y, min(body_line.y1(), body_line.y2())) c,
) = last_lines
most_right = c.x2() + 1
ymx = ymn = c.y2()
return QtCore.QRectF( if hl:
y1, y2 = hl.y1(), hl.y2()
# top left ymn = min(y1, y2)
QPointF( ymx = max(y1, y2)
hb_tl.x(), mx_y = max(ymx, mx_y)
mn_y, mn_y = min(ymn, mn_y)
), # profiler('calc last bar vertices')
# bottom right
QPointF(
hb_br.x() + 1,
mx_y,
)
return QRectF(
most_left,
mn_y,
most_right - most_left + 1,
mx_y - mn_y,
) )
def paint( def paint(
@ -170,7 +184,7 @@ class BarItems(pg.GraphicsObject):
) -> None: ) -> None:
profiler = pg.debug.Profiler( profiler = Profiler(
disabled=not pg_profile_enabled(), disabled=not pg_profile_enabled(),
ms_threshold=ms_slower_then, ms_threshold=ms_slower_then,
) )
@ -183,12 +197,12 @@ class BarItems(pg.GraphicsObject):
# as is necesarry for what's in "view". Not sure if this will # as is necesarry for what's in "view". Not sure if this will
# lead to any perf gains other then when zoomed in to less bars # lead to any perf gains other then when zoomed in to less bars
# in view. # in view.
p.setPen(self.last_bar_pen) p.setPen(self.last_step_pen)
if self._last_bar_lines: if self._last_bar_lines:
p.drawLines(*tuple(filter(bool, self._last_bar_lines))) p.drawLines(*tuple(filter(bool, self._last_bar_lines)))
profiler('draw last bar') profiler('draw last bar')
p.setPen(self.bars_pen) p.setPen(self._pen)
p.drawPath(self.path) p.drawPath(self.path)
profiler(f'draw history path: {self.path.capacity()}') profiler(f'draw history path: {self.path.capacity()}')
@ -196,29 +210,40 @@ class BarItems(pg.GraphicsObject):
self, self,
path: QPainterPath, path: QPainterPath,
src_data: np.ndarray, src_data: np.ndarray,
render_data: np.ndarray,
reset: bool, reset: bool,
array_key: str, array_key: str,
index_field: str,
fields: list[str] = [
'index',
'open',
'high',
'low',
'close',
],
) -> None: ) -> None:
# relevant fields # relevant fields
fields: list[str] = [
'open',
'high',
'low',
'close',
index_field,
]
ohlc = src_data[fields] ohlc = src_data[fields]
last_row = ohlc[-1:] # last_row = ohlc[-1:]
# individual values # individual values
last_row = i, o, h, l, last = ohlc[-1] last_row = o, h, l, last, i = ohlc[-1]
# times = src_data['time']
# if times[-1] - times[-2]:
# breakpoint()
index = src_data[index_field]
step_size = index[-1] - index[-2]
# generate new lines objects for updatable "current bar" # generate new lines objects for updatable "current bar"
self._last_bar_lines = bar_from_ohlc_row(last_row) bg: float = 0.16 * step_size
self._last_bar_lines = bar_from_ohlc_row(
last_row,
bar_w=step_size,
bar_gap=bg,
)
# assert i == graphics.start_index - 1 # assert i == graphics.start_index - 1
# assert i == last_index # assert i == last_index
@ -233,10 +258,16 @@ class BarItems(pg.GraphicsObject):
if l != h: # noqa if l != h: # noqa
if body is None: if body is None:
body = self._last_bar_lines[0] = QLineF(i, l, i, h) body = self._last_bar_lines[0] = QLineF(
i + bg, l,
i + step_size - bg, h,
)
else: else:
# update body # update body
body.setLine(i, l, i, h) body.setLine(
body.x1(), l,
body.x2(), h,
)
# XXX: pretty sure this is causing an issue where the # XXX: pretty sure this is causing an issue where the
# bar has a large upward move right before the next # bar has a large upward move right before the next
@ -247,4 +278,4 @@ class BarItems(pg.GraphicsObject):
# date / from some previous sample. It's weird though # date / from some previous sample. It's weird though
# because i've seen it do this to bars i - 3 back? # because i've seen it do this to bars i - 3 back?
return ohlc['index'], ohlc['close'] return ohlc[index_field], ohlc['close']
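For reference, a minimal sketch (not part of the diff) of how the three ``QLineF`` segments now compose one bar in absolute x-units, where ``bar_w`` is one sample step and ``bar_gap`` pads the open/close arms; the helper name and sample values below are illustrative only:

from PyQt5.QtCore import QLineF

def demo_bar(
    index: float,  # left edge of the bar in x-domain units
    open: float, high: float, low: float, close: float,
    bar_w: float,    # one sample step, eg. 60 for 1m bars on a time basis
    bar_gap: float,  # eg. 0.16 * bar_w as in ``draw_last()`` above
) -> list[QLineF]:
    mid = (bar_w / 2) + index
    hl = QLineF(mid, low, mid, high)                        # vertical body
    o = QLineF(index + bar_gap, open, mid, open)            # left/open arm
    c = QLineF(mid, close, index + bar_w - bar_gap, close)  # right/close arm
    return [hl, o, c]

# eg. a hypothetical 1-minute bar on an epoch-time x-basis:
lines = demo_bar(1677700000, 10.0, 12.5, 9.5, 11.0, bar_w=60, bar_gap=0.16 * 60)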
@@ -22,12 +22,9 @@ from __future__ import annotations
 from typing import (
     Optional, Generic,
     TypeVar, Callable,
-    Literal,
 )
-import enum
-import sys

-from pydantic import BaseModel, validator
+# from pydantic import BaseModel, validator
 from pydantic.generics import GenericModel
 from PyQt5.QtWidgets import (
     QWidget,

@@ -38,6 +35,7 @@ from ._forms import (
     # FontScaledDelegate,
     Edit,
 )
+from ..data.types import Struct

 DataType = TypeVar('DataType')

@@ -62,7 +60,7 @@ class Selection(Field[DataType], Generic[DataType]):
     options: dict[str, DataType]
     # value: DataType = None

-    @validator('value')  # , always=True)
+    # @validator('value')  # , always=True)
     def set_value_first(
         cls,

@@ -100,7 +98,7 @@ class Edit(Field[DataType], Generic[DataType]):
     widget_factory = Edit


-class AllocatorPane(BaseModel):
+class AllocatorPane(Struct):

     account = Selection[str](
         options=dict.fromkeys(
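The swap of ``pydantic.BaseModel`` for the internal ``Struct`` type here mirrors the codebase-wide move toward ``msgspec``. A minimal sketch of the pattern, assuming ``piker.data.types.Struct`` is a thin ``msgspec.Struct`` subclass (the demo type and fields below are invented for illustration):

import msgspec

class AllocatorPaneDemo(msgspec.Struct):
    account: str = 'paper'
    size_unit: str = 'currency'

pane = AllocatorPaneDemo()
# msgspec structs are cheap to encode/decode relative to pydantic models:
raw: bytes = msgspec.json.encode(pane)
assert msgspec.json.decode(raw, type=AllocatorPaneDemo) == pane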
@@ -18,23 +18,27 @@
 Charting overlay helpers.

 '''
-from typing import Callable, Optional
+from collections import defaultdict
+from functools import partial
+from typing import (
+    Callable,
+    Optional,
+)

-from pyqtgraph.Qt.QtCore import (
-    # QObject,
-    # Signal,
-    Qt,
-    # QEvent,
-)
 from pyqtgraph.graphicsItems.AxisItem import AxisItem
 from pyqtgraph.graphicsItems.ViewBox import ViewBox
-from pyqtgraph.graphicsItems.GraphicsWidget import GraphicsWidget
+# from pyqtgraph.graphicsItems.GraphicsWidget import GraphicsWidget
 from pyqtgraph.graphicsItems.PlotItem.PlotItem import PlotItem
-from pyqtgraph.Qt.QtCore import QObject, Signal, QEvent
-from pyqtgraph.Qt.QtWidgets import QGraphicsGridLayout, QGraphicsLinearLayout
-
-from ._interaction import ChartView
+from pyqtgraph.Qt.QtCore import (
+    QObject,
+    Signal,
+    QEvent,
+    Qt,
+)
+from pyqtgraph.Qt.QtWidgets import (
+    # QGraphicsGridLayout,
+    QGraphicsLinearLayout,
+)

 __all__ = ["PlotItemOverlay"]

@@ -80,25 +84,20 @@ class ComposedGridLayout:
     ``<axis_name>i`` in the layout.

     The ``item: PlotItem`` passed to the constructor's grid layout is
-    used verbatim as the "main plot" who's view box is give precedence
-    for input handling. The main plot's axes are removed from it's
+    used verbatim as the "main plot" whose view box is given precedence
+    for input handling. The main plot's axes are removed from its
     layout and placed in the surrounding exterior layouts to allow for
     re-ordering if desired.

     '''
     def __init__(
         self,
-        item: PlotItem,
-        grid: QGraphicsGridLayout,
-        reverse: bool = False,  # insert items to the "center"
+        pi: PlotItem,

     ) -> None:
-        self.items: list[PlotItem] = []
-        # self.grid = grid
-        self.reverse = reverse
-
-        # TODO: use a ``bidict`` here?
-        self._pi2axes: dict[
+        self.pitems: list[PlotItem] = []
+        self._pi2axes: dict[  # TODO: use a ``bidict`` here?
             int,
             dict[str, AxisItem],
         ] = {}

@@ -120,12 +119,13 @@ class ComposedGridLayout:
             if name in ('top', 'bottom'):
                 orient = Qt.Vertical
             elif name in ('left', 'right'):
                 orient = Qt.Horizontal

             layout.setOrientation(orient)

-        self.insert(0, item)
+        self.insert_plotitem(0, pi)

         # insert surrounding linear layouts into the parent pi's layout
         # such that additional axes can be appended arbitrarily without

@@ -135,13 +135,14 @@ class ComposedGridLayout:
             # TODO: do we need this?
             # axis should have been removed during insert above
             index = _axes_layout_indices[name]
-            axis = item.layout.itemAt(*index)
+            axis = pi.layout.itemAt(*index)
             if axis and axis.isVisible():
                 assert linlayout.itemAt(0) is axis

-            # item.layout.removeItem(axis)
-            item.layout.addItem(linlayout, *index)
-            layout = item.layout.itemAt(*index)
+            # XXX: see comment in ``.insert_plotitem()``...
+            # pi.layout.removeItem(axis)
+            pi.layout.addItem(linlayout, *index)
+            layout = pi.layout.itemAt(*index)
             assert layout is linlayout

     def _register_item(

@@ -157,27 +158,32 @@ class ComposedGridLayout:
             self._pi2axes.setdefault(name, {})[index] = axis

         # enter plot into list for index tracking
-        self.items.insert(index, plotitem)
+        self.pitems.insert(index, plotitem)

-    def insert(
+    def insert_plotitem(
         self,
         index: int,
         plotitem: PlotItem,

-    ) -> (int, int):
+    ) -> tuple[int, list[AxisItem]]:
         '''
         Place item at index by inserting all axes into the grid
         at list-order appropriate position.

         '''
         if index < 0:
-            raise ValueError('`insert()` only supports an index >= 0')
+            raise ValueError(
+                '`.insert_plotitem()` only supports an index >= 0'
+            )
+
+        inserted_axes: list[AxisItem] = []

         # add plot's axes in sequence to the embedded linear layouts
         # for each "side" thus avoiding graphics collisions.
         for name, axis_info in plotitem.axes.copy().items():
             linlayout, axes = self.sides[name]
             axis = axis_info['item']
+            inserted_axes.append(axis)

             if axis in axes:
                 # TODO: re-order using ``.pop()`` ?

@@ -190,19 +196,20 @@ class ComposedGridLayout:
             if (
                 not axis.isVisible()

-                # XXX: we never skip moving the axes for the *first*
+                # XXX: we never skip moving the axes for the *root*
                 # plotitem inserted (even if not shown) since we need to
                 # move all the hidden axes into linear sub-layouts for
                 # that "central" plot in the overlay. Also if we don't
                 # do it there's weird geometry calc offsets that make
                 # view coords slightly off somehow .. smh
-                and not len(self.items) == 0
+                and not len(self.pitems) == 0
             ):
                 continue

-            # XXX: Remove old axis? No, turns out we don't need this?
-            # DON'T unlink it since we the original ``ViewBox``
-            # to still drive it B)
+            # XXX: Remove old axis?
+            # No, turns out we don't need this?
+            # DON'T UNLINK IT since we need the original ``ViewBox`` to
+            # still drive it with events/handlers B)
             # popped = plotitem.removeAxis(name, unlink=False)
             # assert axis is popped

@@ -218,9 +225,9 @@ class ComposedGridLayout:
         self._register_item(index, plotitem)

-        return index
+        return (index, inserted_axes)

-    def append(
+    def append_plotitem(
         self,
         item: PlotItem,

@@ -232,7 +239,7 @@ class ComposedGridLayout:
         '''
         # for left and bottom axes we have to first remove
         # items and re-insert to maintain a list-order.
-        return self.insert(len(self.items), item)
+        return self.insert_plotitem(len(self.pitems), item)

     def get_axis(
         self,

@@ -245,20 +252,20 @@ class ComposedGridLayout:
         if axis for that name is not shown.

         '''
-        index = self.items.index(plot)
+        index = self.pitems.index(plot)
         named = self._pi2axes[name]
         return named.get(index)

-    def pop(
-        self,
-        item: PlotItem,
-    ) -> PlotItem:
-        '''
-        Remove item and restack all axes in list-order.
-        '''
-        raise NotImplementedError
+    # def pop(
+    #     self,
+    #     item: PlotItem,
+    # ) -> PlotItem:
+    #     '''
+    #     Remove item and restack all axes in list-order.
+    #     '''
+    #     raise NotImplementedError


 # Unimplemented features TODO:

@@ -279,194 +286,6 @@ class ComposedGridLayout:
 # axis?
# TODO: we might want to enabled some kind of manual flag to disable
# this method wrapping during type creation? As example a user could
# definitively decide **not** to enable broadcasting support by
# setting something like ``ViewBox.disable_relays = True``?
def mk_relay_method(
signame: str,
slot: Callable[
[ViewBox,
'QEvent',
Optional[AxisItem]],
None,
],
) -> Callable[
[
ViewBox,
# lol, there isn't really a generic type thanks
# to the rewrite of Qt's event system XD
'QEvent',
'Optional[AxisItem]',
'Optional[ViewBox]', # the ``relayed_from`` arg we provide
],
None,
]:
def maybe_broadcast(
vb: 'ViewBox',
ev: 'QEvent',
axis: 'Optional[int]' = None,
relayed_from: 'ViewBox' = None,
) -> None:
'''
(soon to be) Decorator which makes an event handler
"broadcastable" to overlayed ``GraphicsWidget``s.
Adds relay signals based on the decorated handler's name
and conducts a signal broadcast of the relay signal if there
are consumers registered.
'''
# When no relay source has been set just bypass all
# the broadcast machinery.
if vb.event_relay_source is None:
ev.accept()
return slot(
vb,
ev,
axis=axis,
)
if relayed_from:
assert axis is None
# this is a relayed event and should be ignored (so it does not
# halt/short circuit the graphicscene loop). Further the
# surrounding handler for this signal must be allowed to execute
# and get processed by **this consumer**.
# print(f'{vb.name} rx relayed from {relayed_from.name}')
ev.ignore()
return slot(
vb,
ev,
axis=axis,
)
if axis is not None:
# print(f'{vb.name} handling axis event:\n{str(ev)}')
ev.accept()
return slot(
vb,
ev,
axis=axis,
)
elif (
relayed_from is None
and vb.event_relay_source is vb # we are the broadcaster
and axis is None
):
# Broadcast case: this is a source event which will be
# relayed to attached consumers and accepted after all
# consumers complete their own handling followed by this
# routine's processing. Sequence is,
# - pre-relay to all consumers *first* - ``.emit()`` blocks
# until all downstream relay handlers have run.
# - run the source handler for **this** event and accept
# the event
# Access the "bound signal" that is created
# on the widget type as part of instantiation.
signal = getattr(vb, signame)
# print(f'{vb.name} emitting {signame}')
# TODO/NOTE: we could also just bypass a "relay" signal
# entirely and instead call the handlers manually in
# a loop? This probably is a lot simpler and also doesn't
# have any downside, and allows not touching target widget
# internals.
signal.emit(
ev,
axis,
# passing this demarks a broadcasted/relayed event
vb,
)
# accept event so no more relays are fired.
ev.accept()
# call underlying wrapped method with an extra
# ``relayed_from`` value to denote that this is a relayed
# event handling case.
return slot(
vb,
ev,
axis=axis,
)
return maybe_broadcast
# XXX: :( can't define signals **after** class compile time
# so this is not really useful.
# def mk_relay_signal(
# func,
# name: str = None,
# ) -> Signal:
# (
# args,
# varargs,
# varkw,
# defaults,
# kwonlyargs,
# kwonlydefaults,
# annotations
# ) = inspect.getfullargspec(func)
# # XXX: generate a relay signal with 1 extra
# # argument for a ``relayed_from`` kwarg. Since
# # ``'self'`` is already ignored by signals we just need
# # to count the arguments since we're adding only 1 (and
# # ``args`` will capture that).
# numargs = len(args + list(defaults))
# signal = Signal(*tuple(numargs * [object]))
# signame = name or func.__name__ + 'Relay'
# return signame, signal
def enable_relays(
widget: GraphicsWidget,
handler_names: list[str],
) -> list[Signal]:
'''
Method override helper which enables relay of a particular
``Signal`` from some chosen broadcaster widget to a set of
consumer widgets which should operate their event handlers normally
but instead of signals "relayed" from the broadcaster.
Mostly useful for overlaying widgets that handle user input
that you want to overlay graphically. The target ``widget`` type must
define ``QtCore.Signal``s each with a `'Relay'` suffix for each
name provided in ``handler_names: list[str]``.
'''
signals = []
for name in handler_names:
handler = getattr(widget, name)
signame = name + 'Relay'
# ensure the target widget defines a relay signal
relay = getattr(widget, signame)
widget.relays[signame] = name
signals.append(relay)
method = mk_relay_method(signame, handler)
setattr(widget, name, method)
return signals
enable_relays(
ChartView,
['wheelEvent', 'mouseDragEvent']
)
 class PlotItemOverlay:
     '''
     A composite for managing overlaid ``PlotItem`` instances such that

@@ -482,86 +301,191 @@ class PlotItemOverlay:
     ) -> None:

         self.root_plotitem: PlotItem = root_plotitem
+        self.relay_handlers: defaultdict[
+            str,
+            list[Callable],
+        ] = defaultdict(list)

-        vb = root_plotitem.vb
-        vb.event_relay_source = vb  # TODO: maybe change name?
-        vb.setZValue(1000)  # XXX: critical for scene layering/relaying
+        # NOTE: required for scene layering/relaying; this guarantees
+        # the "root" plot receives priority for interaction
+        # events/signals.
+        root_plotitem.vb.setZValue(10)

-        self.overlays: list[PlotItem] = []
-        self.layout = ComposedGridLayout(
-            root_plotitem,
-            root_plotitem.layout,
-        )
+        self.layout = ComposedGridLayout(root_plotitem)
         self._relays: dict[str, Signal] = {}

+    @property
+    def overlays(self) -> list[PlotItem]:
+        return self.layout.pitems

     def add_plotitem(
         self,
         plotitem: PlotItem,
         index: Optional[int] = None,

-        # TODO: we could also put the ``ViewBox.XAxis``
-        # style enum here?
+        # event/signal names which will be broadcasted to all added
+        # (relayee) ``PlotItem``s (eg. ``ViewBox.mouseDragEvent``).
+        relay_events: list[str] = [],

         # (0,),  # link x
         # (1,),  # link y
         # (0, 1),  # link both
         link_axes: tuple[int] = (),

-    ) -> None:
+    ) -> tuple[int, list[AxisItem]]:

-        index = index or len(self.overlays)
         root = self.root_plotitem
-        # layout: QGraphicsGridLayout = root.layout
-        self.overlays.insert(index, plotitem)
         vb: ViewBox = plotitem.vb

-        # mark this consumer overlay as ready to expect relayed events
-        # from the root plotitem.
-        vb.event_relay_source = root.vb
-
         # TODO: some sane way to allow menu event broadcast XD
         # vb.setMenuEnabled(False)

-        # TODO: inside the `maybe_broadcast()` (soon to be) decorator
-        # we need have checks that consumers have been attached to
-        # these relay signals.
-        if link_axes != (0, 1):
-
-            # wire up relay signals
-            for relay_signal_name, handler_name in vb.relays.items():
-                # print(handler_name)
-                # XXX: Signal class attrs are bound after instantiation
-                # of the defining type, so we need to access that bound
-                # version here.
-                signal = getattr(root.vb, relay_signal_name)
-                handler = getattr(vb, handler_name)
-                signal.connect(handler)
+        # wire up any relay signal(s) from the source plot to added
+        # "overlays". We use a plain loop instead of mucking with
+        # re-connecting signal/slots which tends to be more invasive and
+        # harder to implement and provides no measurable performance
+        # gain.
+        if relay_events:
+            for ev_name in relay_events:
+                relayee_handler: Callable[
+                    [
+                        ViewBox,
+                        # lol, there isn't really a generic type thanks
+                        # to the rewrite of Qt's event system XD
+                        QEvent,
+                        AxisItem | None,
+                    ],
+                    None,
+                ] = getattr(vb, ev_name)
+
+                sub_handlers: list[Callable] = self.relay_handlers[ev_name]
+
+                # on the first registry of a relayed event we pop the
+                # root's handler and override it to a custom broadcaster
+                # routine.
+                if not sub_handlers:
+
+                    src_handler = getattr(
+                        root.vb,
+                        ev_name,
+                    )
+
+                    def broadcast(
+                        ev: 'QEvent',
+
+                        # TODO: drop this viewbox specific input and
+                        # allow a predicate to be passed in by user.
+                        axis: 'Optional[int]' = None,
+
+                        *,
+
+                        # these are bound in by the ``partial`` below
+                        # and ensure a unique broadcaster per event.
+                        ev_name: str = None,
+                        src_handler: Callable = None,
+                        relayed_from: 'ViewBox' = None,
+
+                        # remaining inputs the source handler expects
+                        **kwargs,
+
+                    ) -> None:
+                        '''
+                        Broadcast signal or event: this is a source
+                        event which will be relayed to attached
+                        "relayee" plot item consumers.
+
+                        The event is accepted halting any further
+                        handlers from being triggered.
+
+                        Sequence is,
+                        - pre-relay to all consumers *first* - exactly
+                          like how a ``Signal.emit()`` blocks until all
+                          downstream relay handlers have run.
+                        - run the event's source handler event
+
+                        '''
+                        ev.accept()
+
+                        # broadcast first to relayees *first*. trigger
+                        # relay of event to all consumers **before**
+                        # processing/consumption in the source handler.
+                        relayed_handlers = self.relay_handlers[ev_name]
+
+                        assert getattr(vb, ev_name).__name__ == ev_name
+
+                        # TODO: generalize as an input predicate
+                        if axis is None:
+                            for handler in relayed_handlers:
+                                handler(
+                                    ev,
+                                    axis=axis,
+                                    **kwargs,
+                                )
+
+                        # run "source" widget's handler last
+                        src_handler(
+                            ev,
+                            axis=axis,
+                        )
+
+                    # dynamic handler override on the publisher plot
+                    setattr(
+                        root.vb,
+                        ev_name,
+                        partial(
+                            broadcast,
+                            ev_name=ev_name,
+                            src_handler=src_handler
+                        ),
+                    )
+
+                else:
+                    assert getattr(root.vb, ev_name)
+                    assert relayee_handler not in sub_handlers
+
+                # append relayed-to widget's handler to relay table
+                sub_handlers.append(relayee_handler)

         # link dim-axes to root if requested by user.
-        # TODO: solve more-then-wanted scaled panning on click drag
-        # which seems to be due to broadcast. So we probably need to
-        # disable broadcast when axes are linked in a particular
-        # dimension?
         for dim in link_axes:
             # link x and y axes to new view box such that the top level
             # viewbox propagates to the root (and whatever other
             # plotitem overlays that have been added).
             vb.linkView(dim, root.vb)

-        # make overlaid viewbox impossible to focus since the top
-        # level should handle all input and relay to overlays.
-        # NOTE: this was solved with the `setZValue()` above!
+        # => NOTE: in order to prevent "more-then-linear" scaled
+        # panning moves on (for eg. click-drag) certain range change
+        # signals (i.e. ``.sigXRangeChanged``), the user needs to be
+        # careful that any broadcasted ``relay_events`` are short
+        # circuited in sub-handlers (aka relayee's) implementations. As
+        # an example if a ``ViewBox.mouseDragEvent`` is broadcasted, the
+        # overlayed implementations need to be sure they either don't
+        # also link the x-axes (by not providing ``link_axes=(0,)``
+        # above) or that the relayee ``.mouseDragEvent()`` handlers are
+        # ready to "``return`` early" in the case that
+        # ``.sigXRangeChanged`` is emitted as part of linked axes.
+        # For more details on such signalling mechanics peek in
+        # ``ViewBox.linkView()``.

-        # TODO: we will probably want to add a "focus" api such that
-        # a new "top level" ``PlotItem`` can be selected dynamically
-        # (and presumably the axes dynamically sorted to match).
+        # make overlaid viewbox impossible to focus since the top level
+        # should handle all input and relay to overlays. Note that the
+        # "root" plot item getting interaction priority is configured
+        # with the ``.setZValue()`` during init.
         vb.setFlag(
             vb.GraphicsItemFlag.ItemIsFocusable,
             False
         )
         vb.setFocusPolicy(Qt.NoFocus)

+        # => TODO: add a "focus" api for switching the "top level"
+        # ``PlotItem`` dynamically.

         # append-compose into the layout all axes from this plot
-        self.layout.insert(index, plotitem)
+        if index is None:
+            insert_index, axes = self.layout.append_plotitem(plotitem)
+        else:
+            insert_index, axes = self.layout.insert_plotitem(index, plotitem)

         plotitem.setGeometry(root.vb.sceneBoundingRect())

@@ -579,24 +503,12 @@ class PlotItemOverlay:
         root.vb.setFocus()
         assert root.vb.focusWidget()

-    # XXX: do we need this? Why would you build then destroy?
-    def remove_plotitem(self, plotItem: PlotItem) -> None:
-        '''
-        Remove this ``PlotItem`` from the overlayed set making not shown
-        and unable to accept input.
-
-        '''
-        ...
+        vb.setZValue(100)

-    # TODO: i think this would be super hot B)
-    def focus_item(self, plotitem: PlotItem) -> PlotItem:
-        '''
-        Apply focus to a contained PlotItem thus making it the "top level"
-        item in the overlay able to accept peripheral's input from the user
-        and responsible for zoom and panning control via its ``ViewBox``.
-
-        '''
-        ...
+        return (
+            index,
+            axes,
+        )

     def get_axis(
         self,

@@ -630,8 +542,9 @@ class PlotItemOverlay:
         return axes

-    # TODO: i guess we need this if you want to detach existing plots
-    # dynamically? XXX: untested as of now.
+    # XXX: untested as of now.
+    # TODO: need this as part of selecting a different root/source
+    # plot to rewire interaction event broadcast dynamically.
     def _disconnect_all(
         self,
         plotitem: PlotItem,

@@ -646,3 +559,22 @@ class PlotItemOverlay:
             disconnected.append(sig)

         return disconnected
+
+    # XXX: do we need this? Why would you build then destroy?
+    # def remove_plotitem(self, plotItem: PlotItem) -> None:
+    #     '''
+    #     Remove this ``PlotItem`` from the overlayed set making not shown
+    #     and unable to accept input.
+    #
+    #     '''
+    #     ...
+
+    # TODO: i think this would be super hot B)
+    # def focus_plotitem(self, plotitem: PlotItem) -> PlotItem:
+    #     '''
+    #     Apply focus to a contained PlotItem thus making it the "top level"
+    #     item in the overlay able to accept peripheral's input from the user
+    #     and responsible for zoom and panning control via its ``ViewBox``.
+    #
+    #     '''
+    #     ...
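A Qt-free sketch of the handler-override broadcast pattern ``add_plotitem()`` installs above: the root view's bound method is replaced with a ``functools.partial`` which runs every relayee handler first and the original source handler last. All names below are illustrative stand-ins, not the actual widget types:

from collections import defaultdict
from functools import partial

class View:
    def __init__(self, name: str) -> None:
        self.name = name
    def wheelEvent(self, ev) -> None:
        print(f'{self.name} handling {ev!r}')

root, overlay = View('root'), View('overlay')
relay_handlers: defaultdict[str, list] = defaultdict(list)

def broadcast(ev, *, ev_name: str, src_handler) -> None:
    for handler in relay_handlers[ev_name]:  # relayees run first
        handler(ev)
    src_handler(ev)  # "source"/root handler runs last

# equivalent of the ``setattr(root.vb, ev_name, partial(...))`` above:
src = root.wheelEvent
root.wheelEvent = partial(broadcast, ev_name='wheelEvent', src_handler=src)
relay_handlers['wheelEvent'].append(overlay.wheelEvent)

root.wheelEvent('scroll-up')  # -> overlay handles first, then root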
@ -1,236 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) 2018-present Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Super fast ``QPainterPath`` generation related operator routines.
"""
from __future__ import annotations
from typing import (
# Optional,
TYPE_CHECKING,
)
import numpy as np
from numpy.lib import recfunctions as rfn
from numba import njit, float64, int64 # , optional
# import pyqtgraph as pg
from PyQt5 import QtGui
# from PyQt5.QtCore import QLineF, QPointF
from ..data._sharedmem import (
ShmArray,
)
# from .._profile import pg_profile_enabled, ms_slower_then
from ._compression import (
ds_m4,
)
if TYPE_CHECKING:
from ._flows import Renderer
def xy_downsample(
x,
y,
uppx,
x_spacer: float = 0.5,
) -> tuple[np.ndarray, np.ndarray]:
# downsample whenever more then 1 pixels per datum can be shown.
# always refresh data bounds until we get diffing
# working properly, see above..
bins, x, y = ds_m4(
x,
y,
uppx,
)
# flatten output to 1d arrays suitable for path-graphics generation.
x = np.broadcast_to(x[:, None], y.shape)
x = (x + np.array(
[-x_spacer, 0, 0, x_spacer]
)).flatten()
y = y.flatten()
return x, y
@njit(
# TODO: for now need to construct this manually for readonly arrays, see
# https://github.com/numba/numba/issues/4511
# ntypes.tuple((float64[:], float64[:], float64[:]))(
# numba_ohlc_dtype[::1], # contiguous
# int64,
# optional(float64),
# ),
nogil=True
)
def path_arrays_from_ohlc(
data: np.ndarray,
start: int64,
bar_gap: float64 = 0.43,
) -> np.ndarray:
'''
Generate an array of lines objects from input ohlc data.
'''
size = int(data.shape[0] * 6)
x = np.zeros(
# data,
shape=size,
dtype=float64,
)
y, c = x.copy(), x.copy()
# TODO: report bug for assert @
# /home/goodboy/repos/piker/env/lib/python3.8/site-packages/numba/core/typing/builtins.py:991
for i, q in enumerate(data[start:], start):
# TODO: ask numba why this doesn't work..
# open, high, low, close, index = q[
# ['open', 'high', 'low', 'close', 'index']]
open = q['open']
high = q['high']
low = q['low']
close = q['close']
index = float64(q['index'])
istart = i * 6
istop = istart + 6
# x,y detail the 6 points which connect all vertexes of a ohlc bar
x[istart:istop] = (
index - bar_gap,
index,
index,
index,
index,
index + bar_gap,
)
y[istart:istop] = (
open,
open,
low,
high,
close,
close,
)
# specifies that the first edge is never connected to the
# prior bars last edge thus providing a small "gap"/"space"
# between bars determined by ``bar_gap``.
c[istart:istop] = (1, 1, 1, 1, 1, 0)
return x, y, c
def gen_ohlc_qpath(
r: Renderer,
data: np.ndarray,
array_key: str, # we ignore this
vr: tuple[int, int],
start: int = 0, # XXX: do we need this?
# 0.5 is no overlap between arms, 1.0 is full overlap
w: float = 0.43,
) -> QtGui.QPainterPath:
'''
More or less direct proxy to ``path_arrays_from_ohlc()``
but with closed in kwargs for line spacing.
'''
x, y, c = path_arrays_from_ohlc(
data,
start,
bar_gap=w,
)
return x, y, c
def ohlc_to_line(
ohlc_shm: ShmArray,
data_field: str,
fields: list[str] = ['open', 'high', 'low', 'close']
) -> tuple[
np.ndarray,
np.ndarray,
]:
'''
Convert an input struct-array holding OHLC samples into a pair of
flattened x, y arrays with the same size (datums wise) as the source
data.
'''
y_out = ohlc_shm.ustruct(fields)
first = ohlc_shm._first.value
last = ohlc_shm._last.value
# write pushed data to flattened copy
y_out[first:last] = rfn.structured_to_unstructured(
ohlc_shm.array[fields]
)
# generate an flat-interpolated x-domain
x_out = (
np.broadcast_to(
ohlc_shm._array['index'][:, None],
(
ohlc_shm._array.size,
# 4, # only ohlc
y_out.shape[1],
),
) + np.array([-0.5, 0, 0, 0.5])
)
assert y_out.any()
return (
x_out,
y_out,
)
def to_step_format(
shm: ShmArray,
data_field: str,
index_field: str = 'index',
) -> tuple[int, np.ndarray, np.ndarray]:
'''
Convert an input 1d shm array to a "step array" format
for use by path graphics generation.
'''
i = shm._array['index'].copy()
out = shm._array[data_field].copy()
x_out = np.broadcast_to(
i[:, None],
(i.size, 2),
) + np.array([-0.5, 0.5])
y_out = np.empty((len(out), 2), dtype=out.dtype)
y_out[:] = out[:, np.newaxis]
# start y at origin level
y_out[0, 0] = 0
return x_out, y_out
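To make the removed ``path_arrays_from_ohlc()`` layout concrete, here is a small pure-numpy demo (with made-up bar values) of the 6-points-per-bar flattening and the connect mask whose trailing 0 breaks the line segment between adjacent bars:

import numpy as np

bars = [
    # (index, open, high, low, close)
    (0, 10.0, 12.5, 9.5, 11.0),
    (1, 11.0, 11.8, 10.2, 10.5),
]
gap = 0.43
x, y, c = [], [], []
for i, o, h, l, cl in bars:
    # open-arm, body bottom->top, close-arm vertices:
    x += [i - gap, i, i, i, i, i + gap]
    y += [o, o, l, h, cl, cl]
    c += [1, 1, 1, 1, 1, 0]  # final 0: don't connect into the next bar

x, y, c = map(np.array, (x, y, c))
# arrays in this shape feed ``pg.functions.arrayToQPath(x, y, connect=c)``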
@@ -15,13 +15,19 @@
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.

 """
-Customization of ``pyqtgraph`` core routines to speed up our use mostly
-based on not requiring "scentific precision" for pixel perfect view
-transforms.
+Customization of ``pyqtgraph`` core routines and various types normally
+for speedups.
+
+Generally, our use does not require "scientific precision" for
+pixel-perfect view transforms.

 """
+from typing import Optional
+
 import pyqtgraph as pg

+from ._axes import Axis
+

 def invertQTransform(tr):
     """Return a QTransform that is the inverse of *tr*.

@@ -46,3 +52,236 @@ def _do_overrides() -> None:
     """
     # we don't care about potential fp issues inside Qt
     pg.functions.invertQTransform = invertQTransform
pg.PlotItem = PlotItem
# enable "QPainterPathPrivate for faster arrayToQPath" from
# https://github.com/pyqtgraph/pyqtgraph/pull/2324
pg.setConfigOption('enableExperimental', True)
# NOTE: the below customized type contains all our changes on a method
# by method basis as per the diff:
# https://github.com/pyqtgraph/pyqtgraph/commit/8e60bc14234b6bec1369ff4192dbfb82f8682920#diff-a2b5865955d2ba703dbc4c35ff01aa761aa28d2aeaac5e68d24e338bc82fb5b1R500
class PlotItem(pg.PlotItem):
'''
Overrides for the core plot object mostly pertaining to overlayed
multi-view management as it relates to multi-axis management.
This object is the combination of a ``ViewBox`` and multiple
``AxisItem``s and so far we've added additional functionality and
APIs for:
- removal of axes
---
From ``pyqtgraph`` super type docs:
- Manage placement of ViewBox, AxisItems, and LabelItems
- Create and manage a list of PlotDataItems displayed inside the
ViewBox
- Implement a context menu with commonly used display and analysis
options
'''
def __init__(
self,
parent=None,
name=None,
labels=None,
title=None,
viewBox=None,
axisItems=None,
default_axes=['left', 'bottom'],
enableMenu=True,
**kargs
):
super().__init__(
parent=parent,
name=name,
labels=labels,
title=title,
viewBox=viewBox,
axisItems=axisItems,
# default_axes=default_axes,
enableMenu=enableMenu,
kargs=kargs,
)
self.name = name
self.chart_widget = None
# self.setAxisItems(
# axisItems,
# default_axes=default_axes,
# )
# NOTE: this is an entirely new method not in upstream.
def removeAxis(
self,
name: str,
unlink: bool = True,
) -> Optional[pg.AxisItem]:
"""
Remove an axis from the contained axis items
by ```name: str```.
This means the axis graphics object will be removed
from the ``.layout: QGraphicsGridLayout`` as well as unlinked
from the underlying associated ``ViewBox``.
If the ``unlink: bool`` is set to ``False`` then the axis will
stay linked to its view and will only be removed from the layout.
If no axis with ``name: str`` is found then this is a noop.
Return the axis instance that was removed.
"""
entry = self.axes.pop(name, None)
if not entry:
return
axis = entry['item']
self.layout.removeItem(axis)
axis.scene().removeItem(axis)
if unlink:
axis.unlinkFromView()
self.update()
return axis
# Why do we need to always have all axes created?
#
# I don't understand this at all.
#
# Everything seems to work if you just always apply the
# set passed to this method **EXCEPT** for some super weird reason
# the view box geometry still computes as though the space for the
# `'bottom'` axis is always there **UNLESS** you always add that
# axis but hide it?
#
# Why in tf would this be the case!?!?
def setAxisItems(
self,
# XXX: yeah yeah, i know we can't use type annots like this yet.
axisItems: Optional[dict[str, pg.AxisItem]] = None,
add_to_layout: bool = True,
default_axes: list[str] = ['left', 'bottom'],
):
"""
Override axis item setting to only
"""
axisItems = axisItems or {}
# XXX: wth is this even saying?!?
# Array containing visible axis items
# Also containing potentially hidden axes, but they are not
# touched so it does not matter
# visibleAxes = ['left', 'bottom']
# Note that it does not matter that this adds
# some values to visibleAxes a second time
# XXX: uhhh wat^ ..?
visibleAxes = list(default_axes) + list(axisItems.keys())
# TODO: we should probably invert the loop here to not loop the
# predefined "axis name set" and instead loop the `axisItems`
# input and lookup indices from a predefined map.
for name, pos in (
('top', (1, 1)),
('bottom', (3, 1)),
('left', (2, 0)),
('right', (2, 2))
):
if (
name in self.axes and
name in axisItems
):
# we already have an axis entry for this name
# so remove the existing entry.
self.removeAxis(name)
# elif name not in axisItems:
# # this axis entry is not provided in this call
# # so remove any old/existing entry.
# self.removeAxis(name)
# Create new axis
if name in axisItems:
axis = axisItems[name]
if axis.scene() is not None:
if (
name not in self.axes
or axis != self.axes[name]["item"]
):
raise RuntimeError(
"Can't add an axis to multiple plots. Shared axes"
" can be achieved with multiple AxisItem instances"
" and set[X/Y]Link.")
else:
# Set up new axis
# XXX: ok but why do we want to add axes for all entries
# if not desired by the user? The only reason I can see
# adding this is without it there's some weird
# ``ViewBox`` geometry bug.. where a gap for the
# 'bottom' axis is somehow left in?
# axis = pg.AxisItem(orientation=name, parent=self)
axis = Axis(
self,
orientation=name,
parent=self,
)
axis.linkToView(self.vb)
# XXX: shouldn't you already know the ``pos`` from the name?
# Oh right instead of a global map that would let you
# reasily look that up it's redefined over and over and over
# again in methods..
self.axes[name] = {'item': axis, 'pos': pos}
# NOTE: in the overlay case the axis may be added to some
# other layout and should not be added here.
if add_to_layout:
self.layout.addItem(axis, *pos)
# place axis above images at z=0, items that want to draw
# over the axes should be placed at z>=1:
axis.setZValue(0.5)
axis.setFlag(
axis.GraphicsItemFlag.ItemNegativeZStacksBehindParent
)
if name in visibleAxes:
self.showAxis(name, True)
else:
# why do we need to insert all axes to ``.axes`` and
# only hide the ones the user doesn't specify? It all
# seems to work fine without doing this except for this
# weird gap for the 'bottom' axis that always shows up
# in the view box geometry??
self.hideAxis(name)
def updateGrid(
self,
*args,
):
alpha = self.ctrl.gridAlphaSlider.value()
x = alpha if self.ctrl.xGridCheck.isChecked() else False
y = alpha if self.ctrl.yGridCheck.isChecked() else False
for name, dim in (
('top', x),
('bottom', x),
('left', y),
('right', y)
):
if name in self.axes:
self.getAxis(name).setGrid(dim)
# self.getAxis('bottom').setGrid(x)
# self.getAxis('left').setGrid(y)
# self.getAxis('right').setGrid(y)
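The override-install pattern ``_do_overrides()`` uses above (``pg.PlotItem = PlotItem``) in a GUI-free sketch; all stand-in names below are invented for illustration only:

import types

lib = types.SimpleNamespace()  # stand-in for the ``pg`` module

class PlotItem:  # stand-in for the upstream ``pg.PlotItem``
    def removeAxis(self, name): ...

lib.PlotItem = PlotItem

class PatchedPlotItem(lib.PlotItem):
    # subclass adding the extra ``unlink`` behavior, as in the diff
    def removeAxis(self, name, unlink=True):
        print(f'removing {name!r} (unlink={unlink})')

def do_overrides() -> None:
    # equivalent of ``pg.PlotItem = PlotItem`` in the real module:
    # every downstream caller now constructs the patched type.
    lib.PlotItem = PatchedPlotItem

do_overrides()
lib.PlotItem().removeAxis('left', unlink=False)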
(File diff suppressed because it is too large.)
piker/ui/_render.py (new file, mode 100644, +320 lines)
@ -0,0 +1,320 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship for pikers)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
High level streaming graphics primitives.
This is an intermediate layer which associates real-time low latency
graphics primitives with underlying stream/flow related data structures
for fast incremental update.
'''
from __future__ import annotations
from typing import (
TYPE_CHECKING,
)
import msgspec
import numpy as np
import pyqtgraph as pg
from PyQt5.QtGui import QPainterPath
from ..data._formatters import (
IncrementalFormatter,
)
from ..data._pathops import (
xy_downsample,
)
from ..log import get_logger
from .._profile import (
Profiler,
)
if TYPE_CHECKING:
from ._dataviz import Viz
log = get_logger(__name__)
class Renderer(msgspec.Struct):
viz: Viz
fmtr: IncrementalFormatter
# output graphics rendering, the main object
# processed in ``QGraphicsObject.paint()``
path: QPainterPath | None = None
fast_path: QPainterPath | None = None
# downsampling state
_last_uppx: float = 0
_in_ds: bool = False
def draw_path(
self,
x: np.ndarray,
y: np.ndarray,
connect: str | np.ndarray = 'all',
path: QPainterPath | None = None,
redraw: bool = False,
) -> QPainterPath:
path_was_none = path is None
if redraw and path:
path.clear()
# TODO: avoid this?
if self.fast_path:
self.fast_path.clear()
path = pg.functions.arrayToQPath(
x,
y,
connect=connect,
finiteCheck=False,
# reserve mem allocs see:
# - https://doc.qt.io/qt-5/qpainterpath.html#reserve
# - https://doc.qt.io/qt-5/qpainterpath.html#capacity
# - https://doc.qt.io/qt-5/qpainterpath.html#clear
# XXX: right now this is based on ad-hoc checks on a
# hidpi 3840x2160 4k monitor but we should optimize for
# the target display(s) on the sys.
# if no_path_yet:
# graphics.path.reserve(int(500e3))
# path=path, # path re-use / reserving
)
# avoid mem allocs if possible
if path_was_none:
path.reserve(path.capacity())
return path
def render(
self,
new_read,
array_key: str,
profiler: Profiler,
uppx: float = 1,
# redraw and ds flags
should_redraw: bool = False,
new_sample_rate: bool = False,
should_ds: bool = False,
showing_src_data: bool = True,
do_append: bool = True,
use_fpath: bool = True,
# only render datums "in view" of the ``ChartView``
use_vr: bool = True,
) -> tuple[QPainterPath, bool]:
'''
Render the current graphics path(s)
There are (at least) 3 stages from source data to graphics data:
- a data transform (which can be stored in additional shm)
- a graphics transform which converts discrete basis data to
a `float`-basis view-coords graphics basis. (eg. ``ohlc_flatten()``,
``step_path_arrays_from_1d()``, etc.)
- blah blah blah (from notes)
'''
# TODO: can the renderer just call ``Viz.read()`` directly?
# unpack latest source data read
fmtr = self.fmtr
(
_,
_,
array,
ivl,
ivr,
in_view,
) = new_read
# xy-path data transform: convert source data to a format
# able to be passed to a `QPainterPath` rendering routine.
fmt_out = fmtr.format_to_1d(
new_read,
array_key,
profiler,
slice_to_inview=use_vr,
)
# no history in view case
if not fmt_out:
# XXX: this might be why the profiler only has exits?
return
(
x_1d,
y_1d,
connect,
prepend_length,
append_length,
view_changed,
# append_tres,
) = fmt_out
# redraw conditions
if (
prepend_length > 0
or new_sample_rate
or view_changed
# NOTE: comment this to try and make "append paths"
# work below..
or append_length > 0
):
should_redraw = True
path: QPainterPath = self.path
fast_path: QPainterPath = self.fast_path
reset: bool = False
self.viz.yrange = None
# redraw the entire source data if we have either of:
# - no prior path graphic rendered or,
# - we always intend to re-render the data only in view
if (
path is None
or should_redraw
):
# print(f"{self.viz.name} -> REDRAWING BRUH")
if new_sample_rate and showing_src_data:
log.info(f'DE-downsampling -> {array_key}')
self._in_ds = False
elif should_ds and uppx > 1:
ds_out = xy_downsample(
x_1d,
y_1d,
uppx,
)
if ds_out is not None:
x_1d, y_1d, ymn, ymx = ds_out
self.viz.yrange = ymn, ymx
# print(f'{self.viz.name} post ds: ymn, ymx: {ymn},{ymx}')
reset = True
profiler(f'FULL PATH downsample redraw={should_ds}')
self._in_ds = True
path = self.draw_path(
x=x_1d,
y=y_1d,
connect=connect,
path=path,
redraw=True,
)
profiler(
'generated fresh path. '
f'(should_redraw: {should_redraw} '
f'should_ds: {should_ds} new_sample_rate: {new_sample_rate})'
)
# TODO: get this piecewise prepend working - right now it's
# giving heck on vwap...
# elif prepend_length:
# prepend_path = pg.functions.arrayToQPath(
# x[0:prepend_length],
# y[0:prepend_length],
# connect='all'
# )
# # swap prepend path in "front"
# old_path = graphics.path
# graphics.path = prepend_path
# # graphics.path.moveTo(new_x[0], new_y[0])
# graphics.path.connectPath(old_path)
elif (
append_length > 0
and do_append
):
profiler(f'sliced append path {append_length}')
# (
# x_1d,
# y_1d,
# connect,
# ) = append_tres
profiler(
f'diffed array input, append_length={append_length}'
)
# if should_ds and uppx > 1:
# new_x, new_y = xy_downsample(
# new_x,
# new_y,
# uppx,
# )
# profiler(f'fast path downsample redraw={should_ds}')
append_path = self.draw_path(
x=x_1d,
y=y_1d,
connect=connect,
path=fast_path,
)
profiler('generated append qpath')
if use_fpath:
# an attempt at trying to make append-updates faster..
if fast_path is None:
fast_path = append_path
# fast_path.reserve(int(6e3))
else:
# print(
# f'{self.viz.name}: FAST PATH\n'
# f"append_path br: {append_path.boundingRect()}\n"
# f"path size: {size}\n"
# f"append_path len: {append_path.length()}\n"
# f"fast_path len: {fast_path.length()}\n"
# )
fast_path.connectPath(append_path)
size = fast_path.capacity()
profiler(f'connected fast path w size: {size}')
# graphics.path.moveTo(new_x[0], new_y[0])
# path.connectPath(append_path)
# XXX: lol this causes a hang..
# graphics.path = graphics.path.simplified()
else:
size = path.capacity()
profiler(f'connected history path w size: {size}')
path.connectPath(append_path)
self.path = path
self.fast_path = fast_path
return self.path, reset
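A hedged sketch of the core call inside ``Renderer.draw_path()`` above: turning x/y arrays into a ``QPainterPath`` via ``pg.functions.arrayToQPath()``. This assumes pyqtgraph (with a Qt binding) new enough to support the ``finiteCheck`` flag, which the diff itself relies on; the sample data is invented:

import numpy as np
import pyqtgraph as pg

x = np.arange(10, dtype=float)
y = np.sin(x)

path = pg.functions.arrayToQPath(
    x,
    y,
    connect='all',       # join every sample; pass the OHLC connect mask for bars
    finiteCheck=False,   # skip NaN scanning, as ``draw_path()`` does
)
print(path.elementCount())  # roughly one path element per sample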
@@ -35,9 +35,13 @@ from collections import defaultdict
 from contextlib import asynccontextmanager
 from functools import partial
 from typing import (
-    Optional, Callable,
-    Awaitable, Sequence,
-    Any, AsyncIterator
+    Optional,
+    Callable,
+    Awaitable,
+    Sequence,
+    Any,
+    AsyncIterator,
+    Iterator,
 )
 import time
 # from pprint import pformat

@@ -119,7 +123,7 @@ class CompleterView(QTreeView):

         # TODO: size this based on DPI font
         self.setIndentation(_font.px_size)

-        # self.setUniformRowHeights(True)
+        self.setUniformRowHeights(True)
         # self.setColumnWidth(0, 3)
         # self.setVerticalBarPolicy(Qt.ScrollBarAlwaysOff)
         # self.setSizeAdjustPolicy(QAbstractScrollArea.AdjustIgnored)

@@ -138,15 +142,31 @@ class CompleterView(QTreeView):
         model.setHorizontalHeaderLabels(labels)

         self._font_size: int = 0  # pixels
+        self._init: bool = False

-    async def on_pressed(self, idx: QModelIndex) -> None:
-        '''Mouse pressed on view handler.
+    async def on_pressed(
+        self,
+        idx: QModelIndex,
+    ) -> None:
+        '''
+        Mouse pressed on view handler.

         '''
         search = self.parent()
-        await search.chart_current_item(clear_to_cache=False)
+
+        await search.chart_current_item(
+            clear_to_cache=True,
+        )
+
+        # XXX: this causes Qt to hang and segfault..lovely
+        # self.show_cache_entries(
+        #     only=True,
+        #     keep_current_item_selected=True,
+        # )
+
         search.focus()

     def set_font_size(self, size: int = 18):
         # print(size)
         if size < 0:

@@ -156,56 +176,64 @@ class CompleterView(QTreeView):
         self.setStyleSheet(f"font: {size}px")

-    # def resizeEvent(self, event: 'QEvent') -> None:
-    #     event.accept()
-    #     super().resizeEvent(event)
-
-    def on_resize(self) -> None:
-        '''
-        Resize relay event from god.
-        '''
-        self.resize_to_results()
-
-    def resize_to_results(self):
+    def resize_to_results(
+        self,
+        w: Optional[float] = 0,
+        h: Optional[float] = None,
+    ) -> None:
         model = self.model()
         cols = model.columnCount()
-        # rows = model.rowCount()
+        cidx = self.selectionModel().currentIndex()
+        rows = model.rowCount()
+        self.expandAll()
+
+        # compute the approx height in pixels needed to include
+        # all result rows in view.
+        row_h = rows_h = self.rowHeight(cidx) * (rows + 1)
+        for idx, item in self.iter_df_rows():
+            row_h = self.rowHeight(idx)
+            rows_h += row_h
+            # print(f'row_h: {row_h}\nrows_h: {rows_h}')
+            # TODO: could we just break early here on detection
+            # of ``rows_h >= h``?

         col_w_tot = 0
         for i in range(cols):
+            # only slap in a rows's height's worth
+            # of padding once at startup.. no idea
+            if (
+                not self._init
+                and row_h
+            ):
+                col_w_tot = row_h
+                self._init = True
+
             self.resizeColumnToContents(i)
             col_w_tot += self.columnWidth(i)

-        win = self.window()
-        win_h = win.height()
-        edit_h = self.parent().bar.height()
-        sb_h = win.statusBar().height()
-
-        # TODO: probably make this more general / less hacky
-        # we should figure out the exact number of rows to allow
-        # inclusive of search bar and header "rows", in pixel terms.
-        # Eventually when we have an "info" widget below the results we
-        # will want space for it and likely terminating the results-view
-        # space **exactly on a row** would be ideal.
-        # if row_px > 0:
-        #     rows = ceil(window_h / row_px) - 4
-        # else:
-        #     rows = 16
-        # self.setFixedHeight(rows * row_px)
-        # self.resize(self.width(), rows * row_px)
-
-        # NOTE: if the heigh set here is **too large** then the resize
-        # event will perpetually trigger as the window causes some kind
-        # of recompute of callbacks.. so we have to ensure it's limited.
-        h = win_h - (edit_h + 1.666*sb_h)
-        assert h > 0
-        self.setFixedHeight(round(h))
+        # NOTE: if the height `h` set here is **too large** then the
+        # resize event will perpetually trigger as the window causes
+        # some kind of recompute of callbacks.. so we have to ensure
+        # it's limited.
+        if h:
+            h: int = round(h)
+            abs_mx = round(0.91 * h)
+            self.setMaximumHeight(abs_mx)
+
+            if rows_h <= abs_mx:
+                # self.setMinimumHeight(rows_h)
+                self.setMinimumHeight(rows_h)
+                # self.setFixedHeight(rows_h)
+
+            else:
+                self.setMinimumHeight(abs_mx)

-        # size to width of longest result seen thus far
-        # TODO: should we always dynamically scale to longest result?
-        if self.width() < col_w_tot:
-            self.setFixedWidth(col_w_tot)
+        # dynamically size to width of longest result seen
+        curr_w = self.width()
+        if curr_w < col_w_tot:
+            self.setMinimumWidth(col_w_tot)

         self.update()

@@ -274,7 +302,7 @@ class CompleterView(QTreeView):
     def select_first(self) -> QStandardItem:
         '''
         Select the first depth >= 2 entry from the completer tree and
-        return it's item.
+        return its item.

         '''
         # ensure we're **not** selecting the first level parent node and

@@ -331,6 +359,23 @@ class CompleterView(QTreeView):
             item = model.itemFromIndex(idx)
             yield idx, item

+    def iter_df_rows(
+        self,
+        iparent: QModelIndex = QModelIndex(),
+
+    ) -> Iterator[tuple[QModelIndex, QStandardItem]]:
+
+        model = self.model()
+        isections = model.rowCount(iparent)
+        for i in range(isections):
+            idx = model.index(i, 0, iparent)
+            item = model.itemFromIndex(idx)
+            yield idx, item
+
+            if model.hasChildren(idx):
+                # recursively yield child items depth-first
+                yield from self.iter_df_rows(idx)
+
     def find_section(
         self,
         section: str,

@@ -354,7 +399,8 @@ class CompleterView(QTreeView):
         status_field: str = None,

     ) -> None:
-        '''Clear all result-rows from under the depth = 1 section.
+        '''
+        Clear all result-rows from under the depth = 1 section.

         '''
         idx = self.find_section(section)

@@ -375,8 +421,6 @@ class CompleterView(QTreeView):
             else:
                 model.setItem(idx.row(), 1, QStandardItem())

-            self.resize_to_results()
-
             return idx
         else:
             return None

@@ -386,12 +430,26 @@ class CompleterView(QTreeView):
         section: str,
         values: Sequence[str],
         clear_all: bool = False,
+        reverse: bool = False,

     ) -> None:
         '''
         Set result-rows for depth = 1 tree section ``section``.

         '''
+        if (
+            values
+            and not isinstance(values[0], str)
+        ):
+            flattened: list[str] = []
+            for val in values:
+                flattened.extend(val)
+
+            values = flattened
+
+        if reverse:
+            values = reversed(values)
+
         model = self.model()
         if clear_all:
             # XXX: rewrite the model from scratch if caller requests it

@@ -444,9 +502,22 @@ class CompleterView(QTreeView):

         self.show_matches()

-    def show_matches(self) -> None:
+    def show_matches(
+        self,
+        wh: Optional[tuple[float, float]] = None,
+
+    ) -> None:
+
+        if wh:
+            self.resize_to_results(*wh)
+        else:
+            # case where it's just an update from results and *NOT*
+            # a resize of some higher level parent-container widget.
+            search = self.parent()
+            w, h = search.space_dims()
+            self.resize_to_results(w=w, h=h)
+
         self.show()
-        self.resize_to_results()
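The new sizing rule in ``resize_to_results()`` reduces to a simple clamp: grow to fit all result rows but never beyond ~91% of the height handed down from ``SearchWidget.space_dims()``. A tiny pure-Python sketch (function name and values invented):

def clamp_results_height(rows_h: float, avail_h: float) -> float:
    # mirrors ``abs_mx = round(0.91 * h)`` + the min/max height calls
    abs_mx = round(0.91 * avail_h)
    return rows_h if rows_h <= abs_mx else abs_mx

assert clamp_results_height(rows_h=120, avail_h=400) == 120  # fits
assert clamp_results_height(rows_h=900, avail_h=400) == 364  # clamped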
class SearchBar(Edit): class SearchBar(Edit):
@ -466,18 +537,15 @@ class SearchBar(Edit):
self.godwidget = godwidget self.godwidget = godwidget
super().__init__(parent, **kwargs) super().__init__(parent, **kwargs)
self.view: CompleterView = view self.view: CompleterView = view
godwidget._widgets[view.mode_name] = view
def show(self) -> None:
super().show()
self.view.show_matches()
def unfocus(self) -> None: def unfocus(self) -> None:
self.parent().hide() self.parent().hide()
self.clearFocus() self.clearFocus()
def hide(self) -> None:
if self.view: if self.view:
self.view.hide() self.view.hide()
super().hide()
class SearchWidget(QtWidgets.QWidget): class SearchWidget(QtWidgets.QWidget):
@ -496,15 +564,16 @@ class SearchWidget(QtWidgets.QWidget):
parent=None, parent=None,
) -> None: ) -> None:
super().__init__(parent or godwidget) super().__init__(parent)
# size it as we specify # size it as we specify
self.setSizePolicy( self.setSizePolicy(
QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed,
QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Fixed,
) )
self.godwidget = godwidget self.godwidget = godwidget
godwidget.reg_for_resize(self)
self.vbox = QtWidgets.QVBoxLayout(self) self.vbox = QtWidgets.QVBoxLayout(self)
self.vbox.setContentsMargins(0, 4, 4, 0) self.vbox.setContentsMargins(0, 4, 4, 0)
@ -554,20 +623,53 @@ class SearchWidget(QtWidgets.QWidget):
self.vbox.setAlignment(self.view, Qt.AlignTop | Qt.AlignLeft) self.vbox.setAlignment(self.view, Qt.AlignTop | Qt.AlignLeft)
def focus(self) -> None: def focus(self) -> None:
if self.view.model().rowCount(QModelIndex()) == 0:
# fill cache list if nothing existing
self.view.set_section_entries(
'cache',
list(reversed(self.godwidget._chart_cache)),
clear_all=True,
)
self.bar.focus()
self.show() self.show()
self.bar.focus()
def get_current_item(self) -> Optional[tuple[str, str]]: def show_cache_entries(
'''Return the current completer tree selection as self,
only: bool = False,
keep_current_item_selected: bool = False,
) -> None:
'''
Clear the search results view and show only cached (aka recently
loaded with active data) feeds in the results section.
'''
godw = self.godwidget
# first entry in the cache is the current symbol(s)
fqsns = set()
for multi_fqsns in list(godw._chart_cache):
for fqsn in set(multi_fqsns):
fqsns.add(fqsn)
if keep_current_item_selected:
sel = self.view.selectionModel()
cidx = sel.currentIndex()
self.view.set_section_entries(
'cache',
list(fqsns),
# remove all other completion results except for cache
clear_all=only,
reverse=True,
)
if (
keep_current_item_selected
and cidx.isValid()
):
# set current selection back to what it was before filling out
# the view results.
self.view.select_from_idx(cidx)
else:
self.view.select_first()
def get_current_item(self) -> tuple[QModelIndex, str, str] | None:
'''
Return the current completer tree selection as
a tuple ``(parent: str, child: str)`` if valid, else ``None``. a tuple ``(parent: str, child: str)`` if valid, else ``None``.
''' '''
@ -593,7 +695,11 @@ class SearchWidget(QtWidgets.QWidget):
if provider == 'cache': if provider == 'cache':
symbol, _, provider = symbol.rpartition('.') symbol, _, provider = symbol.rpartition('.')
return provider, symbol return (
cidx,
provider,
symbol,
)
else: else:
return None return None
@ -603,7 +709,8 @@ class SearchWidget(QtWidgets.QWidget):
clear_to_cache: bool = True, clear_to_cache: bool = True,
) -> Optional[str]: ) -> Optional[str]:
'''Attempt to load and switch the current selected '''
Attempt to load and switch the current selected
completion result to the affiliated chart app. completion result to the affiliated chart app.
Return any loaded symbol. Return any loaded symbol.
@@ -613,15 +720,16 @@ class SearchWidget(QtWidgets.QWidget):
         if value is None:
             return None

-        provider, symbol = value
-        chart = self.godwidget
+        cidx, provider, symbol = value
+        godw = self.godwidget

-        log.info(f'Requesting symbol: {symbol}.{provider}')
+        fqsn = f'{symbol}.{provider}'
+        log.info(f'Requesting symbol: {fqsn}')

-        await chart.load_symbol(
-            provider,
-            symbol,
-            'info',
+        # assert provider in symbol
+        await godw.load_symbols(
+            fqsns=[fqsn],
+            loglevel='info',
         )

         # fully qualified symbol name (SNS i guess is what we're
@@ -635,18 +743,48 @@ class SearchWidget(QtWidgets.QWidget):
         # Re-order the symbol cache on the chart to display in
         # LIFO order. this is normally only done internally by
         # the chart on new symbols being loaded into memory
-        chart.set_chart_symbol(fqsn, chart.linkedsplits)
-
-        self.view.set_section_entries(
-            'cache',
-            values=list(reversed(chart._chart_cache)),
-
-            # remove all other completion results except for cache
-            clear_all=True,
-        )
+        godw.set_chart_symbols(
+            (fqsn,), (
+                godw.hist_linked,
+                godw.rt_linked,
+            )
+        )
+        self.show_cache_entries(
+            only=True,
+        )

+        self.bar.focus()
         return fqsn

+    def space_dims(self) -> tuple[float, float]:
+        '''
+        Compute and return the "available space dimensions" for this
+        search widget in terms of px space for results by returning
+        the pair of width and height.
+
+        '''
+        # XXX: dun need dis rite?
+        # win = self.window()
+        # win_h = win.height()
+        # sb_h = win.statusBar().height()
+        godw = self.godwidget
+        hl = godw.hist_linked
+        edit_h = self.bar.height()
+        h = hl.height() - edit_h
+        w = hl.width()
+        return w, h
+
+    def on_resize(self) -> None:
+        '''
+        Resize relay event from god; resize all child widgets.
+        Right now this is just the view to contents and/or the fast
+        chart height.
+
+        '''
+        w, h = self.space_dims()
+        self.bar.view.show_matches(wh=(w, h))
 _search_active: trio.Event = trio.Event()
 _search_enabled: bool = False
@@ -682,9 +820,10 @@ async def pack_matches(
     with trio.CancelScope() as cs:
         task_status.started(cs)
         # ensure ^ status is updated
-        results = await search(pattern)
+        results = list(await search(pattern))

-    if provider != 'cache':  # XXX: don't cache the cache results xD
+    # XXX: don't cache the cache results xD
+    if provider != 'cache':
         matches[(provider, pattern)] = results

     # print(f'results from {provider}: {results}')
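
Editor's note: the `matches` table memoizes results per `(provider, pattern)` so retyping an identical query skips the backend round trip, while the `'cache'` provider is deliberately never stored since its contents change as charts load. A condensed sketch of that policy (standalone names, not the module's actual scaffolding):

    from collections import defaultdict
    from typing import Awaitable, Callable

    matches: defaultdict[
        tuple[str, str],
        list[str],
    ] = defaultdict(list)

    async def cached_search(
        provider: str,
        pattern: str,
        search: Callable[[str], Awaitable[list[str]]],
    ) -> list[str]:
        key = (provider, pattern)
        if provider != 'cache' and key in matches:
            # identical query already ran for this provider
            return matches[key]

        results = list(await search(pattern))
        if provider != 'cache':
            # don't cache the cache results xD
            matches[key] = results
        return results
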
@@ -712,10 +851,11 @@ async def fill_results(
     max_pause_time: float = 6/16 + 0.001,

 ) -> None:
-    """Task to search through providers and fill in possible
+    '''
+    Task to search through providers and fill in possible
     completion results.
-    """
+
+    '''
     global _search_active, _search_enabled, _searcher_cache

     bar = search.bar
@@ -729,6 +869,10 @@ async def fill_results(
     matches = defaultdict(list)
     has_results: defaultdict[str, set[str]] = defaultdict(set)

+    # show cached feed list at startup
+    search.show_cache_entries()
+    search.on_resize()
+
     while True:
         await _search_active.wait()
         period = None
@@ -742,7 +886,7 @@ async def fill_results(
                 pattern = await recv_chan.receive()
                 period = time.time() - wait_start

-                print(f'{pattern} after {period}')
+                log.debug(f'{pattern} after {period}')

             # during fast multiple key inputs, wait until a pause
             # (in typing) to initiate search
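
Editor's note: the `wait_start`/`period` bookkeeping implements a typing debounce; patterns keep streaming in over the memory channel, but a search only fires once the inter-keystroke gap exceeds the pause threshold. Roughly equivalent logic isolated into a sketch, assuming just a raw `trio` channel of pattern strings:

    import trio

    async def latest_after_pause(
        recv: trio.MemoryReceiveChannel,
        pause: float = 6/16,
    ) -> str:
        # drain patterns while the user is still typing; only return
        # the most recent one once no new input shows up for
        # ``pause`` seconds.
        pattern = await recv.receive()
        while True:
            with trio.move_on_after(pause) as cs:
                pattern = await recv.receive()
            if cs.cancelled_caught:
                return pattern
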
@@ -780,8 +924,9 @@ async def fill_results(
                 # it hasn't already been searched with the current
                 # input pattern (in which case just look up the old
                 # results).
-                if (period >= pause) and (
-                    provider not in already_has_results
+                if (
+                    period >= pause
+                    and provider not in already_has_results
                 ):
# TODO: it may make more sense TO NOT search the # TODO: it may make more sense TO NOT search the
@ -789,7 +934,9 @@ async def fill_results(
# cpu-bound. # cpu-bound.
if provider != 'cache': if provider != 'cache':
view.clear_section( view.clear_section(
provider, status_field='-> searchin..') provider,
status_field='-> searchin..',
)
await n.start( await n.start(
pack_matches, pack_matches,
@@ -810,11 +957,20 @@ async def fill_results(
                     # re-searching its ``dict`` since it's easier,
                     # but it also causes it to be slower than cached
                     # results from other providers on occasion.
-                    if results and provider != 'cache':
-                        view.set_section_entries(
-                            section=provider,
-                            values=results,
-                        )
+                    if results:
+                        if provider != 'cache':
+                            view.set_section_entries(
+                                section=provider,
+                                values=results,
+                            )
+                        else:
+                            # for the cache just show what we got
+                            # that matches
+                            search.show_cache_entries()
                     else:
                         view.clear_section(provider)
@@ -836,13 +992,11 @@ async def handle_keyboard_input(
     global _search_active, _search_enabled

     # startup
-    bar = searchbar
-    search = searchbar.parent()
-    godwidget = search.godwidget
-    view = bar.view
-    view.set_font_size(bar.dpi_font.px_size)
-    send, recv = trio.open_memory_channel(16)
+    searchw = searchbar.parent()
+    godwidget = searchw.godwidget
+    view = searchbar.view
+    view.set_font_size(searchbar.dpi_font.px_size)
+    send, recv = trio.open_memory_channel(616)

     async with trio.open_nursery() as n:
@@ -852,11 +1006,15 @@ async def handle_keyboard_input(
         n.start_soon(
             partial(
                 fill_results,
-                search,
+                searchw,
                 recv,
             )
         )

+        searchbar.focus()
+        searchw.show_cache_entries()
+        await trio.sleep(0)
+
         async for kbmsg in recv_chan:
             event, etype, key, mods, txt = kbmsg.to_tuple()
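
Editor's note: the `send`/`recv` pair is a plain `trio` memory channel relaying raw search-bar text from the (synchronous) Qt key handler into the async `fill_results` task; `send_nowait()` is what makes the hand-off usable from non-async event code. A self-contained toy version of that producer/consumer wiring:

    import trio

    async def main() -> None:
        # the buffer size only bounds not-yet-consumed patterns
        send, recv = trio.open_memory_channel(16)

        async with trio.open_nursery() as n:

            async def consumer() -> None:
                async for pattern in recv:
                    print(f'search for: {pattern}')

            n.start_soon(consumer)

            # sync-friendly sends, as from an event callback
            send.send_nowait('xbt')
            send.send_nowait('xbtusdt')
            await send.aclose()  # ends the consumer's async-for

    trio.run(main)
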
@@ -866,19 +1024,29 @@ async def handle_keyboard_input(
             if mods == Qt.ControlModifier:
                 ctl = True

-            if key in (Qt.Key_Enter, Qt.Key_Return):
-
-                await search.chart_current_item(clear_to_cache=True)
+            if key in (
+                Qt.Key_Enter,
+                Qt.Key_Return
+            ):
                 _search_enabled = False
-                continue
+                await searchw.chart_current_item(clear_to_cache=True)

-            elif not ctl and not bar.text():
-                # if nothing in search text show the cache
-                view.set_section_entries(
-                    'cache',
-                    list(reversed(godwidget._chart_cache)),
-                    clear_all=True,
-                )
+                # XXX: causes hang and segfault..
+                # searchw.show_cache_entries(
+                #     only=True,
+                #     keep_current_item_selected=True,
+                # )
+
+                view.show_matches()
+                searchw.focus()
+
+            elif (
+                not ctl
+                and not searchbar.text()
+            ):
+                # TODO: really should factor this somewhere..bc
+                # we're doin it in another spot as well..
+                searchw.show_cache_entries(only=True)
                 continue

             # cancel and close
@@ -887,7 +1055,7 @@ async def handle_keyboard_input(
                 Qt.Key_Space,   # i feel like this is the "native" one
                 Qt.Key_Alt,
             }:
-                search.bar.unfocus()
+                searchbar.unfocus()

                 # kill the search and focus back on main chart
                 if godwidget:
@@ -895,68 +1063,95 @@ async def handle_keyboard_input(
                 continue

-            if ctl and key in {
-                Qt.Key_L,
-            }:
+            if (
+                ctl
+                and key in {Qt.Key_L}
+            ):
                 # like url (link) highlight in a web browser
-                bar.focus()
+                searchbar.focus()

             # selection navigation controls
-            elif ctl and key in {
-                Qt.Key_D,
-            }:
+            elif (
+                ctl
+                and key in {Qt.Key_D}
+            ):
                 view.next_section(direction='down')
                 _search_enabled = False

-            elif ctl and key in {
-                Qt.Key_U,
-            }:
+            elif (
+                ctl
+                and key in {Qt.Key_U}
+            ):
                 view.next_section(direction='up')
                 _search_enabled = False

             # selection navigation controls
-            elif (ctl and key in {
-                Qt.Key_K,
-                Qt.Key_J,
-            }) or key in {
-                Qt.Key_Up,
-                Qt.Key_Down,
-            }:
+            elif (
+                ctl and (
+                    key in {
+                        Qt.Key_K,
+                        Qt.Key_J,
+                    }
+                    or key in {
+                        Qt.Key_Up,
+                        Qt.Key_Down,
+                    }
+                )
+            ):
                 _search_enabled = False
-                if key in {Qt.Key_K, Qt.Key_Up}:
+
+                if key in {
+                    Qt.Key_K,
+                    Qt.Key_Up,
+                }:
                     item = view.select_previous()
-                elif key in {Qt.Key_J, Qt.Key_Down}:
+
+                elif key in {
+                    Qt.Key_J,
+                    Qt.Key_Down,
+                }:
                     item = view.select_next()

                 if item:
                     parent_item = item.parent()

-                    if parent_item and parent_item.text() == 'cache':
-                        # if it's a cache item, switch and show it immediately
-                        await search.chart_current_item(clear_to_cache=False)
+                    # if we're in the cache section and thus the next
+                    # selection is a cache item, switch and show it
+                    # immediately since it should be very fast.
+                    if (
+                        parent_item
+                        and parent_item.text() == 'cache'
+                    ):
+                        await searchw.chart_current_item(clear_to_cache=False)

+            # ACTUAL SEARCH BLOCK #
+            # where we fuzzy complete and fill out sections.
             elif not ctl:
                 # relay to completer task
                 _search_enabled = True
-                send.send_nowait(search.bar.text())
+                send.send_nowait(searchw.bar.text())
                 _search_active.set()
 async def search_simple_dict(
     text: str,
     source: dict,

 ) -> dict[str, Any]:

+    tokens = []
+    for key in source:
+        if not isinstance(key, str):
+            tokens.extend(key)
+        else:
+            tokens.append(key)
+
     # search routine can be specified as a function such
     # as in the case of the current app's local symbol cache
     matches = fuzzy.extractBests(
         text,
-        source.keys(),
+        tokens,
         score_cutoff=90,
     )
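
Editor's note: the new `tokens` pass flattens any tuple/iterable keys (eg. multi-fqsn cache entries) into a flat string list since the matcher needs string choices. Presuming `fuzzy` is the usual `fuzzywuzzy.process` alias, `extractBests()` returns `(choice, score)` pairs filtered by `score_cutoff`:

    # assumes the module-level import is
    # ``from fuzzywuzzy import process as fuzzy``
    from fuzzywuzzy import process as fuzzy

    tokens = ['xbtusdt', 'ethusdt', 'xmrusdt']
    matches = fuzzy.extractBests(
        'xbt',
        tokens,
        score_cutoff=50,  # the code above uses a strict 90
    )
    # -> best-first list of (token, score) tuples,
    #    something like [('xbtusdt', 90)]
    print(matches)
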

@@ -240,12 +240,12 @@ def hcolor(name: str) -> str:
     'gunmetal': '#91A3B0',
     'battleship': '#848482',

+    # default ohlc-bars/curve gray
+    'bracket': '#666666',  # like the logo
+
     # bluish
     'charcoal': '#36454F',

-    # default bars
-    'bracket': '#666666',  # like the logo
-
     # work well for filled polygons which want a 'bracket' feel
     # going light to dark
     'davies': '#555555',
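
Editor's note: `hcolor()` is just a name-to-hex palette lookup, so this hunk only re-homes the `'bracket'` entry under a more accurate comment. The access pattern, sketched with the values shown above (the dict and function here are illustrative stand-ins, not the module's actual layout):

    _palette: dict[str, str] = {
        'gunmetal': '#91A3B0',
        'battleship': '#848482',
        'bracket': '#666666',  # default ohlc-bars/curve gray
        'charcoal': '#36454F',
        'davies': '#555555',
    }

    def hcolor(name: str) -> str:
        # raises KeyError on an unknown palette name
        return _palette[name]

    assert hcolor('bracket') == '#666666'
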


@@ -21,15 +21,29 @@ Qt main window singletons and stuff.
 import os
 import signal
 import time
-from typing import Callable, Optional, Union
+from typing import (
+    Callable,
+    Optional,
+    Union,
+)
 import uuid

-from pyqtgraph import QtGui
 from PyQt5 import QtCore
-from PyQt5.QtWidgets import QLabel, QStatusBar
+from PyQt5.QtWidgets import (
+    QWidget,
+    QMainWindow,
+    QApplication,
+    QLabel,
+    QStatusBar,
+)
+from PyQt5.QtGui import (
+    QScreen,
+    QCloseEvent,
+)

 from ..log import get_logger
 from ._style import _font_small, hcolor
+from ._chart import GodWidget

 log = get_logger(__name__)
@@ -148,12 +162,13 @@ class MultiStatus:
         self.bar.clearMessage()

-class MainWindow(QtGui.QMainWindow):
+class MainWindow(QMainWindow):

     # XXX: for tiling wms this should scale
     # with the allotted window size.
     # TODO: detect for tiling and if untrue set some size?
-    size = (300, 500)
+    # size = (300, 500)
+    godwidget: GodWidget

     title = 'piker chart (ur symbol is loading bby)'
@@ -162,17 +177,20 @@ class MainWindow(QMainWindow):
         # self.setMinimumSize(*self.size)
         self.setWindowTitle(self.title)

+        # set by runtime after `trio` is engaged.
+        self.godwidget: Optional[GodWidget] = None
+
         self._status_bar: QStatusBar = None
         self._status_label: QLabel = None
         self._size: Optional[tuple[int, int]] = None
     @property
-    def mode_label(self) -> QtGui.QLabel:
+    def mode_label(self) -> QLabel:

         # init mode label
         if not self._status_label:
-            self._status_label = label = QtGui.QLabel()
+            self._status_label = label = QLabel()

             label.setStyleSheet(
                 f"""QLabel {{
                     color : {hcolor('gunmetal')};
@@ -194,8 +212,7 @@ class MainWindow(QMainWindow):
     def closeEvent(
         self,
-        event: QtGui.QCloseEvent,
+        event: QCloseEvent,

     ) -> None:
         '''Cancel the root actor asap.
@@ -235,8 +252,8 @@ class MainWindow(QMainWindow):
     def on_focus_change(
         self,
-        last: QtGui.QWidget,
-        current: QtGui.QWidget,
+        last: QWidget,
+        current: QWidget,

     ) -> None:
@@ -247,11 +264,12 @@ class MainWindow(QMainWindow):
             name = getattr(current, 'mode_name', '')
             self.set_mode_name(name)

-    def current_screen(self) -> QtGui.QScreen:
-        """Get a frickin screen (if we can, gawd).
-        """
-        app = QtGui.QApplication.instance()
+    def current_screen(self) -> QScreen:
+        '''
+        Get a frickin screen (if we can, gawd).
+
+        '''
+        app = QApplication.instance()

         for _ in range(3):
             screen = app.screenAt(self.pos())
@@ -284,7 +302,7 @@ class MainWindow(QMainWindow):
         '''
         # https://stackoverflow.com/a/18975846
         if not size and not self._size:
-            app = QtGui.QApplication.instance()
+            # app = QApplication.instance()
             geo = self.current_screen().geometry()

             h, w = geo.height(), geo.width()
             # use approx 1/3 of the area of the screen by default
@@ -292,9 +310,36 @@ class MainWindow(QMainWindow):
         self.resize(*size or self._size)

+    def resizeEvent(self, event: QtCore.QEvent) -> None:
+        if (
+            # event.spontaneous()
+            event.oldSize().height() == event.size().height()
+        ):
+            event.ignore()
+            return
+
+        # XXX: uncomment for debugging..
+        # attrs = {}
+        # for key in dir(event):
+        #     if key == '__dir__':
+        #         continue
+        #     attr = getattr(event, key)
+        #     try:
+        #         attrs[key] = attr()
+        #     except TypeError:
+        #         attrs[key] = attr
+
+        # from pprint import pformat
+        # print(
+        #     f'{pformat(attrs)}\n'
+        #     f'WINDOW RESIZE: {self.size()}\n\n'
+        # )
+
+        self.godwidget.on_win_resize(event)
+        event.accept()
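
Editor's note: one gotcha in the `resizeEvent()` guard is that `QSize.height` must actually be *called*; written without parens (as the original patch had it) the comparison is between two distinct bound-method objects, which never match, so the height-unchanged early-out silently becomes dead code. The patch text above includes the corrected calls; the guard in isolation:

    from PyQt5.QtGui import QResizeEvent

    def height_unchanged(event: QResizeEvent) -> bool:
        # note the ``()`` calls: comparing ``.height`` (no parens)
        # would compare bound methods and always be False here.
        return event.oldSize().height() == event.size().height()
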
 # singleton app per actor
-_qt_win: QtGui.QMainWindow = None
+_qt_win: QMainWindow = None

 def main_window() -> MainWindow:


@@ -46,8 +46,10 @@ def _kivy_import_hack():
 @click.argument('name', nargs=1, required=True)
 @click.pass_obj
 def monitor(config, rate, name, dhost, test, tl):
-    """Start a real-time watchlist UI
-    """
+    '''
+    Start a real-time watchlist UI
+
+    '''
     # global opts
     brokermod = config['brokermods'][0]
     loglevel = config['loglevel']
@@ -70,8 +72,12 @@ def monitor(config, rate, name, dhost, test, tl):
     ) as portal:
         # run app "main"
         await _async_main(
-            name, portal, tickers,
-            brokermod, rate, test=test,
+            name,
+            portal,
+            tickers,
+            brokermod,
+            rate,
+            test=test,
         )

     tractor.run(
@@ -122,7 +128,7 @@ def optschain(config, symbol, date, rate, test):
 @cli.command()
 @click.option(
     '--profile',
-    '-p',
+    # '-p',
     default=None,
     help='Enable pyqtgraph profiling'
 )
@@ -131,9 +137,14 @@ def optschain(config, symbol, date, rate, test):
     is_flag=True,
     help='Enable tractor debug mode'
 )
-@click.argument('symbol', required=True)
+@click.argument('symbols', nargs=-1, required=True)
 @click.pass_obj
-def chart(config, symbol, profile, pdb):
+def chart(
+    config,
+    symbols: list[str],
+    profile,
+    pdb: bool,
+):
     '''
     Start a real-time charting UI
@@ -144,24 +155,27 @@ def chart(config, symbols, profile, pdb):
         _profile._pg_profile = True
         _profile.ms_slower_then = float(profile)

+    # Qt UI entrypoint
     from ._app import _main

-    if '.' not in symbol:
-        click.echo(click.style(
-            f'symbol: {symbol} must have a {symbol}.<provider> suffix',
-            fg='red',
-        ))
-        return
+    for symbol in symbols:
+        if '.' not in symbol:
+            click.echo(click.style(
+                f'symbol: {symbol} must have a {symbol}.<provider> suffix',
+                fg='red',
+            ))
+            return

     # global opts
     brokernames = config['brokers']
+    brokermods = config['brokermods']
+    assert brokermods
     tractorloglevel = config['tractorloglevel']
     pikerloglevel = config['loglevel']

     _main(
-        sym=symbol,
-        brokernames=brokernames,
+        syms=symbols,
+        brokermods=brokermods,
         piker_loglevel=pikerloglevel,
         tractor_kwargs={
             'debug_mode': pdb,
@@ -170,5 +184,6 @@ def chart(config, symbols, profile, pdb):
             'enable_modules': [
                 'piker.clearing._client'
             ],
+            'registry_addr': config.get('registry_addr'),
         },
     )
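
Editor's note: with `nargs=-1` click collects every trailing positional into a tuple, which is what lets `piker chart` now accept multiple symbols in one invocation. A minimal standalone command using the same idiom (the validation mirrors the suffix check above):

    import click

    @click.command()
    @click.argument('symbols', nargs=-1, required=True)
    def chart(symbols: tuple[str, ...]) -> None:
        # click hands variadic arguments through as a tuple
        for symbol in symbols:
            if '.' not in symbol:
                raise click.BadParameter(
                    f'{symbol} must have a .<provider> suffix'
                )
            click.echo(f'loading: {symbol}')

    if __name__ == '__main__':
        chart()
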

File diff suppressed because it is too large

@@ -1,3 +0,0 @@
-"""
-Super hawt Qt UI components
-"""


@@ -1,67 +0,0 @@
-import sys
-
-from PySide2.QtCharts import QtCharts
-from PySide2.QtWidgets import QApplication, QMainWindow
-from PySide2.QtCore import Qt, QPointF
-from PySide2 import QtGui
-import qdarkstyle
-
-data = ((1, 7380, 7520, 7380, 7510, 7324),
-        (2, 7520, 7580, 7410, 7440, 7372),
-        (3, 7440, 7650, 7310, 7520, 7434),
-        (4, 7450, 7640, 7450, 7550, 7480),
-        (5, 7510, 7590, 7460, 7490, 7502),
-        (6, 7500, 7590, 7480, 7560, 7512),
-        (7, 7560, 7830, 7540, 7800, 7584))
-
-app = QApplication([])
-
-# set dark stylesheet
-# import pdb; pdb.set_trace()
-app.setStyleSheet(qdarkstyle.load_stylesheet_pyside())
-
-series = QtCharts.QCandlestickSeries()
-series.setDecreasingColor(Qt.darkRed)
-series.setIncreasingColor(Qt.darkGreen)
-
-ma5 = QtCharts.QLineSeries()  # 5-day average data line
-tm = []  # stores str type data
-
-# in a loop, series and ma5 append corresponding data
-for num, o, h, l, c, m in data:
-    candle = QtCharts.QCandlestickSet(o, h, l, c)
-    series.append(candle)
-    ma5.append(QPointF(num, m))
-    tm.append(str(num))
-
-pen = candle.pen()
-# import pdb; pdb.set_trace()
-
-chart = QtCharts.QChart()
-# import pdb; pdb.set_trace()
-series.setBodyOutlineVisible(False)
-series.setCapsVisible(False)
-# brush = QtGui.QBrush()
-# brush.setColor(Qt.green)
-# series.setBrush(brush)
-chart.addSeries(series)  # candles
-chart.addSeries(ma5)  # ma5 line
-
-chart.setAnimationOptions(QtCharts.QChart.SeriesAnimations)
-chart.createDefaultAxes()
-chart.legend().hide()
-
-chart.axisX(series).setCategories(tm)
-chart.axisX(ma5).setVisible(False)
-
-view = QtCharts.QChartView(chart)
-view.chart().setTheme(QtCharts.QChart.ChartTheme.ChartThemeDark)
-view.setRubberBand(QtCharts.QChartView.HorizontalRubberBand)
-# chartview.chart().setTheme(QtCharts.QChart.ChartTheme.ChartThemeBlueCerulean)
-
-ui = QMainWindow()
-# ui.setGeometry(50, 50, 500, 300)
-ui.setCentralWidget(view)
-
-ui.show()
-sys.exit(app.exec_())

pytest.ini (new file, mode 100644)
@@ -0,0 +1,3 @@
+#[pytest]
+#trio_mode=True
+#log_cli=1

Some files were not shown because too many files have changed in this diff.