Allows starting UI apps and passing the `pikerd` registry socket-addr
args via `--host` or `--port` such that a separate actor tree can be
started by selecting an unused port. This is handy when hacking on new
features while also wishing to run a more stable version of the code
for trading on the same host.
Drop all attempts at rewiring `ViewBox` signals, monkey-patching
relayee handlers, and generally modifying event source public
attributes. Instead take a much simpler approach where the event source
graphics object simply has its handler dynamically overridden by
a broadcaster function which relays to all consumers using a Python
loop.
The benefits of this much simplified approach include:
- avoiding the tedious and often complex (re)connection of signals between
  the source plot and the overlaid consumers.
- requiring zero modification of the public interface of any of the
  publisher or consumer `ViewBox`s, no decoration or extra signal
  definitions (eg. the previous `mouseDragEventRelay` or the like).
- only a single dynamic method override on the event source graphics object
(`ViewBox`) which does the broadcasting work and requires no
modification to handler implementations.
Detailed `.ui._overlay` changes:
- drop `mk_relay_signal()` and `enable_relays()`, removing the
  signal/slot hacking methodology.
- drop unused `ComposedGridLayout.grid` and `.reverse`, change some
method names: `.insert()` -> `.insert_plotitem()`, `append()` ->
`.append_plotitem()`.
- in `PlotOverlay`, again drop all signal/slot rewiring in
  `.add_plotitem()` and instead add our new closure-based python-loop
  `broadcast()` routine which is used to override the event-source
  object's handler (see the sketch after this list).
- comment out all the auxiliary/want-to-have event source selection
methods for now.
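A rough sketch of the broadcaster idea (the helper names here are
illustrative, not the literal `.ui._overlay` code):

```python
from functools import partial

def broadcast(orig_handler, consumers, ev, *args, **kwargs):
    # run the event-source view's original handling first ...
    orig_handler(ev, *args, **kwargs)
    # ... then relay the same event to every consumer view with a plain
    # python loop; no signals/slots involved.
    for vb in consumers:
        getattr(vb, orig_handler.__name__)(ev, *args, **kwargs)

def override_handler(source_vb, consumers, method_name: str) -> None:
    # dynamically override the *instance* attribute on the event-source
    # `ViewBox` only; its class and public interface are left untouched.
    orig = getattr(source_vb, method_name)
    setattr(source_vb, method_name, partial(broadcast, orig, consumers))
```

For example `override_handler(root_vb, overlays, 'mouseDragEvent')`
would relay drag events from the root view to every overlaid view.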
Mainly this involves instantiating our overridden `PlotItem` in a few
places and tweaking type annots. A further detail is that inside
the fsp sub-chart creation code we hide some axes for overlays in the
flows subchart; these were previously somehow hidden implicitly?
Fork out our patch set, submitted upstream in multiple PRs (since they
aren't moving and/or aren't a priority to core), which can be seen in
full in the following diff:
https://github.com/pyqtgraph/pyqtgraph/compare/master...pikers:pyqtgraph:graphics_pin
Move these type extensions into the internal `.ui._pg_overrides` module.
The changes are related to both `pyqtgraph.PlotItem` and `.AxisItem` and
were driven by our need for multi-view overlays (overlaid charts with
optionally synced axis and interaction controls) as documented in the PR
to upstream: https://github.com/pyqtgraph/pyqtgraph/pull/2162
More specifically,
- wrt `AxisItem` we added lru caching of tick values (see the sketch
  after this list) as per:
  https://github.com/pyqtgraph/pyqtgraph/pull/2160.
- wrt `PlotItem` we adjusted some of the axis management code, namely
  adding a standalone `.removeAxis()` and modifying the `.setAxisItems()`
  logic to use it in: https://github.com/pyqtgraph/pyqtgraph/pull/2162,
  as well as some tweaks to `.updateGrid()` to loop through all possible
  axes when setting the grid.
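As a rough illustration of the tick-value caching idea (the real patch
is in the PR above and may key/quantize the inputs differently):

```python
from functools import lru_cache
import pyqtgraph as pg

class CachedAxisItem(pg.AxisItem):
    # illustrative only: memoize tick computations on the view-range
    # inputs so repeated redraws over the same range skip recomputation.
    @lru_cache(maxsize=1024)
    def _cached_tick_values(self, minVal: float, maxVal: float, size: float):
        return super().tickValues(minVal, maxVal, size)

    def tickValues(self, minVal, maxVal, size):
        return self._cached_tick_values(minVal, maxVal, size)
```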
Details of the original patch to upstream are in:
https://github.com/pyqtgraph/pyqtgraph/pull/2281
Instead of trying to land this we've opted to just copy out that version
of `.debug.Profiler` into our own internals (luckily the class is
entirely self-contained) until such time as we choose to find
a better dependency as per https://github.com/pikers/piker/issues/337
To make it easier to manually read/decipher long ledger files, this adds
`dict` sorting based on record-type-specific (api vs. flex report)
datetime processing prior to ledger file write.
- break up parsers into separate routines for flex and api record
processing.
- add `parse_flex_dt()` for special handling of the weird semicolon
  stamps in flex reports (sketched below).
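A hedged sketch of the sort-before-write idea (the datetime field names
vary per record type and are assumed here):

```python
from datetime import datetime

def parse_flex_dt(record_dt: str) -> datetime:
    # flex reports stamp datetimes as '<YYYYMMDD>;<HHMMSS>'
    date, ts = record_dt.split(';')
    return datetime.strptime(date + ts, '%Y%m%d%H%M%S')

def sort_ledger(
    records: dict[str, dict],
    dt_field: str = 'datetime',  # assumed field name
) -> dict[str, dict]:
    # order the transaction-id keyed table chronologically prior to the
    # ledger file (re)write so long files read top-to-bottom in time.
    def _key(item) -> datetime:
        _tid, rec = item
        stamp = rec[dt_field]
        if ';' in stamp:
            # flex-report style stamp
            return parse_flex_dt(stamp)
        # assume an ISO-ish stamp for api records
        return datetime.fromisoformat(stamp)

    return dict(sorted(records.items(), key=_key))
```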
There never was any underlying db bug; it was a hardcoded timeframe in
the column series write key.. Now we always assert a matching timeframe
in results.
Not only improves startup latency but also avoids a bug where the rt
buffer was being tsdb-history prepended *before* the backfilling of
recent data from the backend was complete, resulting in out-of-order
frames in shm.
Factor the multi-sample-rate region UI connecting into a new helper
`link_views_with_region()` which reads in the shm buffer offsets from
the `Feed` and appropriately connects the fast and slow chart handlers
for the linear region graphics. Add a detailed comment writeup for the
inter-sampling transform algebra.
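One plausible form of that transform (illustrative only; the real
writeup lives in `link_views_with_region()`), assuming both buffers'
zero-offsets mark the same wall-clock moment:

```python
def fast_to_slow_index(
    i_fast: float,
    izero_rt: int,    # fast (1s) shm buffer's zero-offset index
    izero_hist: int,  # slow (1m) shm buffer's zero-offset index
    ratio: int = 60,  # slow period / fast period in seconds
) -> float:
    # shift into "samples since zero" on the fast buffer, rescale by the
    # sample-period ratio, then shift into the slow buffer's index space.
    return izero_hist + (i_fast - izero_rt) / ratio
```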
If a history manager raises a `DataUnavailable` just assume the sample
rate isn't supported and that no shm prepends will be done. Further,
seed the shm array in such cases, as before, from the 1m history's last
datum. Also, fix tsdb -> shm back-loading, cancelling tsdb queries when
either no array-data is returned or a frame is delivered whose start
time is no earlier than the earliest previously retrieved. Use strict
timeframes for every `Storage` API call.
Turns out querying for a high freq timeframe (like 1sec) will still
return a lower freq timeframe (like 1Min), SMH; no idea if it's the
server's or the client's fault, so we have to explicitly check the sample
step size and discard lower freq series-results. Do this inside
`Storage.read_ohlcv()` and return an empty `dict` when the wrong time
step is detected from the query result.
Further enforcements:
- both `.load()` and `read_ohlcv()` now require an explicit `timeframe:
int` input to guarantee the time step of the output array.
- drop all calls to `.load()` with non-timeframe-specific input.
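Roughly, the step-size guard described above might look like this (not
the exact `Storage.read_ohlcv()` code; the time-field name is an
assumption):

```python
import numpy as np

def validate_sample_step(
    arr: np.ndarray,   # structured OHLCV array returned from the query
    timeframe: int,    # requested sample period in seconds
    time_field: str = 'Epoch',  # assumed epoch-time field name
):
    if arr.size < 2:
        return arr

    step = arr[time_field][1] - arr[time_field][0]
    if step != timeframe:
        # a lower freq series came back (eg. 1Min for a 1s query);
        # return an empty result so the caller treats it as "no data".
        return {}

    return arr
```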
Our default sample periods are 60s (1m) for the history chart and 1s for
the fast chart. This patch adds concurrent loading of both (or more)
different sample period data sets using the existing loading code but
with new support for looping through a passed "timeframe" table which
maps each period to its shm instance (sketched after the list below).
More detailed adjustments include:
- breaking the "basic" and tsdb loading into 2 new funcs:
  `basic_backfill()` and `tsdb_backfill()`, the latter of which is run
when the tsdb daemon is discovered.
- adjust the fast shm buffer to offset with one day's worth of 1s so
that only up to a day is backfilled as history in the fast chart.
- adjust bus task starting in `manage_history()` to deliver back the
offset indices for both fast and slow shms and set them on the
`Feed` object as `.izero_hist/rt: int` values:
- allows the chart-UI linked view region handlers to use the offsets
in the view-linking-transform math to index-align the history and
fast chart.
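Illustrative sketch of the timeframe-table loop (the callable and shm
names are stand-ins, not the real internals):

```python
import trio

async def backfill_all_timeframes(
    start_backfill,  # assumed: async callable (timeframe: int, shm) -> None
    hist_shm,        # 60s (1m) sampled shm buffer
    rt_shm,          # 1s sampled shm buffer
) -> None:
    timeframes = {
        60: hist_shm,  # history chart period
        1: rt_shm,     # fast chart period
    }
    async with trio.open_nursery() as n:
        for timeframe, shm in timeframes.items():
            # same loading code, kicked off concurrently per sample period
            n.start_soon(start_backfill, timeframe, shm)
```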
Allows keeping mutex state around data reset requests which (if more
than one are sent) can cause a throttling condition where ib's servers
will get slower and slower to conduct a reconnect. With this you can
have multiple ongoing contract requests without hitting that issue and
we can go back to having a nice 3s timeout on the history queries before
activating the hack.
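The mutex idea, roughly (the lock placement and reset routine are
assumptions):

```python
import trio

_reset_lock = trio.Lock()

async def request_data_reset(trigger_reset) -> None:
    # `trigger_reset` stands in for the connectivity-reset hack; holding
    # the lock serializes reset requests so overlapping history queries
    # can't spam ib's servers with reconnects.
    async with _reset_lock:
        await trigger_reset()
```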
When a network outage occurs or a data feed connection is reset, the
`ib_insync` task will often hang until some kind of (internal?) timeout
takes place or, in some (worst) cases, it never re-establishes (the event
stream) and thus the backend needs to restart or the live feed will
never resume..
In order to avoid this issue once and for all this patch implements an
additional (extremely simple) task that is started with the real-time
feed and simply waits for any market data reset events; when one is
detected it restarts the `open_aio_quote_stream()` call in a loop using
a surrounding cancel scope (roughly sketched below).
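A minimal sketch of that restart loop (the stream and event helpers are
passed in here since the exact backend APIs aren't shown):

```python
import trio

async def maybe_restart_quote_stream(
    open_stream,           # assumed: async-context-manager factory which
                           # wraps `open_aio_quote_stream()`, yields quotes
    relay_quote,           # assumed: async callable forwarding each quote
    wait_for_reset_event,  # assumed: completes when a reset event fires
) -> None:
    while True:
        with trio.CancelScope() as cs:

            async def watchdog() -> None:
                # block until any market-data reset event is seen, then
                # cancel the surrounding scope to force a fresh re-open.
                await wait_for_reset_event()
                cs.cancel()

            async with trio.open_nursery() as n:
                n.start_soon(watchdog)
                async with open_stream() as quotes:
                    async for quote in quotes:
                        await relay_quote(quote)

                # stream ended on its own: tear down the watchdog too
                cs.cancel()

        # cancelled: loop around and re-establish the feed
```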
Been meaning to implement this for ages and it's finally working!
Allows for easier restarts of certain `trio`-side tasks without killing
the `asyncio`-side clients; supported via a flag.
Also fix a bug in `Client.bars()`: we need to return the duration in the
empty-bars case..
This allows the history manager to know the decrement size for
`end_dt: datetime` on the next query if a no-data / gap case was
encountered; subtract this in `get_bars()` in such cases. Define the
expected `pendulum.Duration`s in the `.api._samplings` table.
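A hedged sketch of what such a samplings table might contain (the exact
entries and shape of `.api._samplings` may differ):

```python
import pendulum

# {sample period (s): (ib bar size, request duration str, gap decrement)}
_samplings: dict[int, tuple[str, str, pendulum.Duration]] = {
    1: (
        '1 secs',
        '2000 S',  # assumed per-request duration for 1s bars
        pendulum.duration(seconds=2000),
    ),
    60: (
        '1 min',
        '1 D',     # assumed per-request duration for 1m bars
        pendulum.duration(days=1),
    ),
}
```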
Also add a bit of query latency profiling that we may use later to more
dynamically determine timeout driven data feed resets. Factor the `162`
error cases into a common exception handler block.
When we get a timeout or a `NoData` condition, still return a tuple of
empty sequences instead of `None` from `Client.bars()`. Move the
sampling period-duration table to module level.
It doesn't seem to be any slower on our least throttled backend
(binance) and it removes a bunch of hard-to-get-correct frame
re-ordering logic that i'm not sure ever really fully worked XD
Commented some issues we still need to resolve as well.
Manual tinker-testing demonstrated that triggering data resets
completely independently of the frame request gets more throughput and
further, that repeated requests (for the same frame after cancelling on
the `trio`-side) can yield duplicate frame responses. Re-work the
dual-task structure to instead have one task wait indefinitely on the
frame response (and thus not trigger duplicate frames) and the 2nd data
reset task poll for the first task's completion in a loop which
terminates when the frame arrives via an event (see the sketch after
the list below).
Dirty deatz:
- make `get_bars()` take an optional timeout (which will eventually be
dynamically passed from the history mgmt machinery) and move request
logic inside a new `query()` closure meant to be spawned in a task
which sets an event on frame arrival, add data reset poll loop in the
main/parent task, deliver result on nursery completion.
- handle frame request cancelled event case without crash.
- on no-frame result (due to a real history gap) hack in a 1 day
  decrement case which we eventually need to allow the caller to
  control, likely based on measured frame rx latency.
- make `wait_on_data_reset()` a predicate whose output indicates reset
  success, as well as `trio.Nursery.start()` compatible so that it can
  be started in a new task with the started values yielded being
  a cancel scope and completion event.
- drop the legacy `backfill_bars()`, no longer used.
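A structural sketch of that task layout (names and the poll period are
stand-ins for the real `get_bars()` internals):

```python
import trio

async def get_frame_with_resets(
    fetch_frame,         # assumed: async callable returning the bars frame
    wait_on_data_reset,  # assumed: async predicate, True once reset worked
    timeout: float = 3,  # poll period before (re)triggering a data reset
):
    result = None
    frame_ready = trio.Event()

    async def query() -> None:
        # wait indefinitely: re-requesting the same frame after a cancel
        # can yield duplicate responses, so never re-issue from here.
        nonlocal result
        result = await fetch_frame()
        frame_ready.set()

    async with trio.open_nursery() as n:
        n.start_soon(query)

        # parent task: poll for frame arrival, firing a data reset each
        # time the poll period elapses without the event being set.
        while not frame_ready.is_set():
            with trio.move_on_after(timeout):
                await frame_ready.wait()
                break

            await wait_on_data_reset()

    return result
```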
Adjust all history query machinery to pass a `timeframe: int` in seconds
and set a default of 60 (aka 1m) such that history views from here
forward will be 1m sampled OHLCV. Further, when the tsdb is detected as
up, load a full 10 years of data if possible on the 1m; backends will
eventually get a config section (`brokers.toml`) that allows users to
tune this.