Since downsampling with the more correct version of m4 (uppx driven
window sizing) is super fast now, we don't need to avoid downsampling
on low uppx values. Further, all graphics objects now support in-view
slicing, so make sure to use it on interaction updates. Pass in the view
profiler to update method calls for more detailed measuring.
Even moar,
- Add a manual call to `.maybe_downsample_graphics()` inside the mouse
  wheel event handler since it seems that sometimes trailing events get
  lost from the `.sigRangeChangedManually` signal, which can result in
  "non-downsampled-enough" graphics on the chart given the scroll amount;
  this manual call seems to entirely fix it (rough sketch after this list).
- drop "max zoom" guard since internals now support (near) infinite
scroll out to graphics becoming a single pixel column line XD
- add back (commented out) the xrange signal connect code for easy
  testing, to verify that range updates don't happen without it
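For reference, the wheel-handler hook from the first bullet amounts to roughly the following sketch (where `.maybe_downsample_graphics()` lives and how it's reached is an assumption here, only the method name comes from this changeset):

```python
import pyqtgraph as pg

class ChartView(pg.ViewBox):
    '''
    Sketch only: the class layout is assumed, not the real impl.

    '''
    def wheelEvent(self, ev, axis=None):
        # let pyqtgraph apply the zoom / range change first
        super().wheelEvent(ev, axis=axis)

        # trailing `.sigRangeChangedManually` emits sometimes get
        # dropped, so force a downsample check for the new uppx here.
        self.maybe_downsample_graphics()

    def maybe_downsample_graphics(self) -> None:
        # in the real code this re-samples all in-view curves for the
        # current units-per-pixel (uppx); stubbed for the sketch.
        ...
```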
This took longer than I care to admit XD but it definitely adds a huge
speedup, with only a few outstanding correctness bugs:
- panning from left to right causes strange trailing artifacts in the
  flows fsp (vlm) sub-plot, but only when some data is off-screen on the
  left; it doesn't appear to be an issue if we keep the `._set_yrange()`
  handler hooked up to the `.sigXRangeChanged` signal (but we aren't
  going to because this makes panning way slower). I've got a feeling
  this is a bug to do with the device coordinate cache stuff and we may
  need to report it to Qt core?
- factoring out the step curve logic from
  `FastAppendCurve.update_from_array()` (un)fortunately required some
  logic branch uncoupling, and also meant we needed special input controls
  to avoid things like redraws and curve appends in special cases;
  this will hopefully all be better rectified in code when the core of
  this method is moved into a renderer type/implementation.
- the `tina_vwap` fsp curve now somehow causes hangs when doing erratic
  scrolling on downsampled graphics data. I have no idea why or how, but
  disabling it makes the issue go away (the UI will literally just freeze
  and gobble CPU on a `.paint()` call until you ctrl-c the hell out of
  it). My guess is that something in the logic for standard line curves
  and appends on large data sets is the issue?
Code-related changes/hacks:
- drop use of `step_path_arrays_from_1d()`; it was always a bit hacky
  (being based on `pyqtgraph` internals) and was generally hard to
  understand since it returns 1d data instead of the more expected (N, 2)
  array of "step levels". Instead this is now implemented (uglily) in
  the `Flow.update_graphics()` block for step curves (which will
  obviously get cleaned up and factored elsewhere); the step-level
  layout is sketched after this list.
- add a bunch of new flags to the update method on the fast append
curve: `draw_last: bool`, `slice_to_head: int`, `do_append: bool`,
`should_redraw: bool` which are all controls to aid with previously
mentioned issues specific to getting step curve updates working
correctly.
- add a ton of commented-out tinkering code (that we may end up
  using) to both the flow and append curve methods, written as
  part of the effort to get this all working.
- implement all step curve updating inline in `Flow.update_graphics()`
including prepend and append logic for pre-graphics incremental step
data maintenance and in-view slicing as well as "last step" graphics
updating.
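To make the "(N, 2) array of step levels" bit concrete, here's a hypothetical helper (not the actual code, which lives inline in `Flow.update_graphics()`; the function name and exact layout are just illustrative):

```python
import numpy as np

def to_step_levels(
    x: np.ndarray,
    y: np.ndarray,
) -> tuple[np.ndarray, np.ndarray]:
    '''
    Hypothetical helper: expand 1d samples into (N, 2) "step levels"
    where each datum becomes a flat segment from x[i] to x[i] + 1 at
    height y[i], instead of the pre-flattened 1d output the old
    `step_path_arrays_from_1d()` returned.

    '''
    x_out = np.stack([x, x + 1], axis=1)  # (N, 2) left/right edges
    y_out = np.stack([y, y], axis=1)      # (N, 2) flat level per step
    return x_out, y_out

# path generation can then just `.reshape(-1)` both arrays back to 1d
# right before drawing.
```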
Obviously clean up commits coming stat B)
Since we (finally) have in-view style rendering working for all curve
types, we can drop the guard for low uppx levels without losing
interaction speed. Further, don't delay the profiler so that nested
method calls correctly report upward - which wasn't working before,
likely due to some kind of GC-collection-related issue.
More or less this improves update latency like mad. Only draw data in
view and avoid full path regen as much as possible within a given
(down)sampling setting. We now support append path updates with in-view
data and the *SPECIAL CAVEAT* is that we avoid redrawing the whole curve
**only when** we calc an `append_length <= 1` **even if the view range
changed**. XXX: this should probably change in the future such that the
caller graphics update code can pass a flag which says whether or not to
do a full redraw, based on it knowing whether it's an interaction-based
view-range change or a flow update which doesn't require a full
path re-render.
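A minimal sketch of that redraw policy (the callables just stand in for the real path-gen logic spread across `Flow.update_graphics()` and `FastAppendCurve`; names here are made up for illustration):

```python
from typing import Callable, Optional

def maybe_update_path(
    path: Optional[object],
    append_length: int,
    view_changed: bool,
    full_redraw: Callable[[], object],
    append_draw: Callable[[object, int], object],
) -> Optional[object]:
    '''
    Hypothetical sketch of the append-vs-redraw decision described
    above; not the real code.

    '''
    if append_length <= 1:
        # *special caveat*: skip the full redraw even if the view
        # range changed, since only the last datum moved.
        return path

    if view_changed or path is None:
        # interaction-driven range change (or first draw): regenerate
        # the whole path from the in-view slice.
        return full_redraw()

    # pure flow update: just append the new segment(s).
    return append_draw(path, append_length)
```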
After much effort (and exhaustion) but failure to get a view into our
`numpy` OHLC struct-array, this instead allocates an in-thread-memory
array which is updated with flattened data every flow update cycle.
I need to report what I think is a bug to `numpy` core about the whole
view thing not working, but more or less this gets the same behaviour
and minimizes the work to flatten the sampled data for line-graphics
drawing, thus improving refresh latency when drawing large downsampled curves.
Update the OHLC ds curve with view-aware data sliced out from the
pre-allocated and incrementally updated data (we had to add a last-index
var `._iflat` to track appends - this should be moved into a renderer
eventually?).
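Roughly, the pre-allocated flatten-on-append scheme looks like this (class and method names other than `._iflat` are made up for illustration, and the OHLC field names are assumed):

```python
import numpy as np

class FlattenedOHLC:
    '''
    Sketch of the incrementally-updated "flattened" line buffer: each
    OHLC row becomes 4 y-samples and `_iflat` tracks the last row
    already copied, so a flow-update cycle only flattens new appends.

    '''
    def __init__(self, size: int):
        self._flat = np.zeros(size * 4, dtype='f8')
        self._iflat = 0  # index of last flattened source row

    def update_from(self, ohlc: np.ndarray) -> np.ndarray:
        new = ohlc[self._iflat:]
        if new.size:
            start = self._iflat * 4
            flat = np.column_stack(
                [new[f] for f in ('open', 'high', 'low', 'close')]
            ).reshape(-1)
            self._flat[start:start + flat.size] = flat
            self._iflat = ohlc.size

        # the ds curve is then fed a view-range slice of this buffer
        return self._flat[:self._iflat * 4]
```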
This begins the removal of data processing / analysis methods from the
chart widget, instead moving them to our new `Flow` API (in the new
module introduced here) and delegating the old chart methods to the
respective internal flow. Most importantly, we no longer store the
"last read" of an array from shm in an internal chart table (was
`._arrays`); instead the `ShmArray` instance is passed as input and
stored in the `Flow` instance. This greatly simplifies lookup logic such
that the display loop no longer has to worry about reading shm; it
can be done by internal graphics logic as desired. Generally speaking,
all previous `._arrays`/`._graphics` lookups are now delegated to the
entries in the chart's `._flows` table.
The new `Flow` methods are generally better factored and provide more
detailed output regarding data-stream <-> graphics inter-relations for
the future purpose of allowing much more efficient update calls in the
display loop as well as supporting low-latency interaction UX.
The concept here is that we're introducing an intermediary layer that
ties together graphics and real-time data flows such that widget code is
oriented around plot layout while the flow APIs are oriented around
real-time, low-latency updates and providing an efficient high-level
metric layer for the UX.
The summary API transition is something like:
- `update_graphics_from_array()` -> `.update_graphics_from_flow()`
- `.bars_range()` -> `Flow.datums_range()`
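In spirit, the new type looks something like this minimal sketch (field and method bodies beyond what's described above are assumptions, including the `index` field on the shm struct-array):

```python
from typing import Optional

class Flow:
    '''
    Sketch of the intermediary layer: ties a shm-backed data stream
    to its graphics object so widget code never reads shm directly.

    '''
    def __init__(self, name: str, shm, graphics):
        self.name = name
        self.shm = shm            # `ShmArray` instance in real code
        self.graphics = graphics  # curve / bars graphics item
        self._last_read: Optional[tuple] = None

    def datums_range(self) -> tuple[int, int]:
        # replaces the old chart-level `.bars_range()` lookup
        array = self.shm.array
        return int(array['index'][0]), int(array['index'][-1])
```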
Given that naming the port map is mostly pointless, since accounts can
be detected once the client connects, just expect `brokers.toml` to
define a simple sequence of port numbers. Toss in a warning for using
the old map/`dict` style.
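A hypothetical loader sketch for that config value (the function and key names here are assumptions, only the sequence-vs-map behaviour is from this change):

```python
import warnings

def get_api_ports(conf: dict) -> list[int]:
    '''
    Sketch: accept a plain sequence of port numbers from the parsed
    `brokers.toml` section and warn (but still work) if the legacy
    name -> port mapping style is found.

    '''
    ports = conf.get('ports', [])
    if isinstance(ports, dict):
        warnings.warn(
            "`ports` as a name -> port map is deprecated; "
            "use a plain list of port numbers instead",
            DeprecationWarning,
        )
        ports = list(ports.values())

    return [int(p) for p in ports]
```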
Now that we have working client auth thanks to:
https://github.com/barneygale/asyncvnc/pull/4 and related issue,
we can use a pw for the vnc server, though we should eventually
auto-generate a random one from a docker super obviously.
Add logic to the data reset hack loop to do a connection reset after
2 failed/timeout attempts at the regular data reset. We also need to add
this logic around reconnection events that are due to the host
network connection: aka roaming that's faster than the timing logic
built into the gateway.
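The retry policy, very roughly (the two awaitable params stand in for the real reset routines; the timeout value is just an example):

```python
import trio

async def data_reset_hack(reset, connection_reset, timeout: float = 4):
    '''
    Sketch: attempt the regular data reset and, after 2
    failed/timed-out tries, escalate to a full connection reset.

    '''
    fails = 0
    while True:
        with trio.move_on_after(timeout) as cs:
            await reset()

        if not cs.cancelled_caught:
            return  # data feed came back

        fails += 1
        if fails >= 2:
            # likely a dropped host connection (eg. network roaming
            # that's faster than the gateway's own timeout logic).
            await connection_reset()
            fails = 0
```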
`ib-gw` seems particularly fragile to connections from clients with the
same id (which can result in weird connect hangs and even crashes) and
`ib_insync` doesn't handle intermittent tcp disconnects that
well (especially on dockerized IBC setups). This adds a bunch of
changes to our client caching and scan loop as well as proper
task-locking-to-cache-proxies so that:
- `asyncio`-side clients aren't double-loaded/connected even when
explicitly trying to reconnect repeatedly with a given client to work
around the unreliability of the `asyncio.Transport` design in
`ib_insync`.
- we can use `tractor.trionics.maybe_open_context()` to lock the `trio`
  side from loading more than one `Client` on the `asyncio` side, and
  instead on cache hits only make a new `MethodProxy` around the
  reused `asyncio`-side client (since each `trio` task needs its own
  inter-task msg channel); sketched after this list.
- a `finally:` block teardown on all clients loaded in the scan loop
avoids stale connections.
- the connect params are now exposed as named args to
  `load_aio_clients()` so they can be easily controlled from caller code.
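The caching scheme from the second bullet, very roughly (assuming `maybe_open_context()` yields a `(cache_hit, value)` pair; the two loader params stand in for the real routines and are not the actual signatures):

```python
from contextlib import asynccontextmanager as acm

import tractor

@acm
async def open_cached_client_proxy(
    load_aio_clients,   # acm yielding the `asyncio`-side `Client`
    open_client_proxy,  # acm wrapping a client in a `MethodProxy`
):
    '''
    Sketch: only one `asyncio`-side client is ever loaded, while each
    entering `trio` task still gets its own `MethodProxy` (and thus
    its own inter-task msg channel).

    '''
    async with tractor.trionics.maybe_open_context(
        acm_func=load_aio_clients,
    ) as (cache_hit, client):

        # on a cache hit we reuse the already-connected client but
        # still build a fresh per-task proxy around it.
        async with open_client_proxy(client) as proxy:
            yield proxy
```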
Oh, and we properly hooked up the internal `ib_insync` logging to our
own internal schema - makes it a lot easier to debug wtf is going on XD
In order to expose more `asyncio`-powered `Client` methods to endpoint
task-code, this adds a more extensive and layered set of `MethodProxy`
loading routines; in dependency order these are:
- `load_clients_for_trio()` a `tractor.to_asyncio.open_channel_from()`
entry-point factory for loading all scanned clients on the `asyncio` side
and delivering them over the inter-task channel to a `trio`-side task.
- `get_preferred_data_client()` a simple client instance loading routine
  which reads the user's `brokers.toml` -> `prefer_data_account:
  list[str]` entry, which must list account names, in priority order, that
  are acceptable to be used as the main "data connection client" such that
  only one of the detected clients is used for data (whereas the rest
  are used only for order entry).
- `open_client_proxies()` which delivers the detected `Client` set,
  each wrapped in a `MethodProxy`.
- `open_data_client()` which directly delivers the preferred data client
  as a proxy for `trio` tasks (usage sketched after this list).
- update `open_client_method_proxy()` and `open_client_proxy` to require
an input `Client` instance.
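Typical endpoint-task usage of the preferred data client then reads something like the following (the import path and the proxied method name are illustrative assumptions, not confirmed API):

```python
import trio

from piker.brokers.ib import open_data_client  # import path assumed

async def main():
    async with open_data_client() as proxy:
        # `proxy` is a `MethodProxy`, so `asyncio`-side `Client`
        # methods become awaitable from this `trio` task; the method
        # name below is just illustrative.
        details = await proxy.search_symbols(pattern='mnq')
        print(details)

trio.run(main)
```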
Further impl details:
- add `MethodProxy._aio_ns` to ref the original `asyncio` side proxied instance
- add `Client.trades()` to pull executions from the last day/session
- load proxies inside `trades_dialogue` and use the new `.trades()`
  method to try and pull a fill ledger for eventual correct pp price
  calcs (pertains to #307); sketched below.
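A hedged sketch of what `.trades()` boils down to (the real method surely normalizes the returned `Fill` objects further; only the use of `ib_insync` executions is from this change):

```python
from ib_insync import IB

class Client:
    '''
    Sketch: pull the current day/session's executions via `ib_insync`
    so `trades_dialogue` can try building a fill ledger from them.

    '''
    ib: IB  # the underlying, already-connected `ib_insync` instance

    async def trades(self) -> list:
        # no `ExecutionFilter` -> all fills from the current session
        return await self.ib.reqExecutionsAsync()
```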