In an effort to start getting some graphics speedups as detailed in #109,
this adds a `FastAppendCurve` to every `BarItems` as a `._ds_line` which
is only displayed (instead of the normal multi-line bars curve) when the
"width" of a bar is indistinguishable on screen from a line -> so once
the view coordinates map to > 2 pixels on the display device.
`BarItems.maybe_paint_line()` takes care of this scaling detection logic and is
called by the associated view's `.sigXRangeChanged` signal handler.
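Roughly, the switch-over check amounts to the sketch below; the helper name, argument shape and exact threshold handling are illustrative, not the real `BarItems` internals:

```python
# sketch only: a stand-in for the pixels-per-bar check done by
# `BarItems.maybe_paint_line()`; names and threshold are assumptions.

def should_paint_line(
    xrange_units: float,  # x-range currently in view, in bar units
    viewport_px: int,     # width of the view box in device pixels
    threshold_px: float = 2.0,
) -> bool:
    '''
    True when a single bar maps to so few device pixels that it is
    visually indistinguishable from a line.

    '''
    if xrange_units <= 0:
        return False

    px_per_bar = viewport_px / xrange_units
    return px_per_bar < threshold_px
```

A `.sigXRangeChanged` handler would then hide the bars graphic and show the `._ds_line` when this returns true (and vice versa).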
The graphics update loop is much easier to grok when all the UI
components which potentially need to be updated on a cycle are arranged
together in a high-level composite namespace, thus this new
`DisplayState` addition. Create and set this state on each
`LinkedSplits` chart set and add a new method `.graphics_cycle()` which
lets a caller trigger a graphics loop update manually. Use this method
in the fsp graphics manager such that a chain can update new history
output even if there is no real-time feed driving the display loop (eg.
when a market is "closed").
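As a rough sketch (attribute names and the per-chart update call are assumptions, not the real classes):

```python
# sketch only: the composite-state + manual-cycle idea in miniature.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DisplayState:
    '''
    All UI components which may need updating on a graphics cycle,
    grouped into one high-level composite namespace.

    '''
    quotes: dict = field(default_factory=dict)
    charts: list = field(default_factory=list)


class LinkedSplitsSketch:
    '''
    Stand-in for the real `LinkedSplits` showing only the new bits.

    '''
    display_state: Optional[DisplayState] = None

    def graphics_cycle(self) -> None:
        # manually run one graphics-loop update so eg. the fsp chain
        # can redraw new history output with no live feed running
        ds = self.display_state
        if ds is None:
            return

        for chart in ds.charts:
            chart.update()  # hypothetical per-chart redraw call
```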
As per https://github.com/erdewit/ib_insync/pull/454 the more correct
way to do this is with `.reqContractDetailsAsync()` which we wrap with
`Client.con_deats()` and which works just as well. Further drop all the
`dict`-ifying that was being done in that method and instead always
return `ContractDetails` objects in an fqsn-like, explicitly keyed `dict`.
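The wrapper shape is roughly as follows; the exact fqsn key construction here is an assumption:

```python
# sketch only: `Client.con_deats()`-style wrapping of the async lookup.
from ib_insync import IB, Contract, ContractDetails


async def con_deats(
    ib: IB,
    contracts: list[Contract],
) -> dict[str, ContractDetails]:
    '''
    Return `ContractDetails` keyed by an fqsn-like string instead of
    the old `dict`-ified records.

    '''
    details: dict[str, ContractDetails] = {}

    for con in contracts:
        for cd in await ib.reqContractDetailsAsync(con):
            c = cd.contract
            key = '.'.join(
                part.lower() for part in (
                    c.symbol,
                    c.primaryExchange or c.exchange,
                    c.lastTradeDateOrContractMonth,
                ) if part
            )
            details[key] = cd

    return details
```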
ib has a throttle limit for "hft" bars but contained in here is some
hackery using ``xdotool`` to reset data farms auto-magically B)
This copies the working script into the ib backend mod as a routine
which now uses `trio.run_process()`; it's called from the `get_bars()`
history retriever which then waits for "data re-established" events to
be received from the client before making more history queries.
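The reset routine looks roughly like the sketch below; the window-title search string and the hotkey sequence are assumptions, not the exact invocations in the backend:

```python
# sketch only: drive the gateway GUI with `xdotool` via trio.
import trio


async def reset_data_farms() -> None:
    # find the gateway window id (title string is an assumption)
    proc = await trio.run_process(
        ['xdotool', 'search', '--name', 'IB Gateway'],
        capture_stdout=True,
    )
    wid = proc.stdout.decode().strip().splitlines()[0]

    # focus it and send the (assumed) data re-connect key sequence
    await trio.run_process(['xdotool', 'windowactivate', wid])
    await trio.run_process(
        ['xdotool', 'key', '--window', wid, 'ctrl+alt+f'],
    )
```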
TL;DR summary of changes:
- relay ib's "system status" events (like for data farm statuses)
as a new "event" msg that can be processed by registers of
`Client.inline_errors()` (though we should probably make a new
method for this).
- add `MethodProxy.status_event()` which allows a proxy user to register
for a particular "system event" (as mentioned above), which puts
a `trio.Event` entry in a small table which can be set by a relay task
if there are any detected waiters (see the sketch after this list).
- start a "msg relay task" when opening the method proxy which does
the event setting mentioned above in the background.
- drop the request error handling around the proxy creation; it doesn't
seem necessary any more now that we have better error propagation from
`asyncio`.
- add event waiting logic around the data feed reset hackzorin.
- change the order relay task to only log system events for now (though
we need to do some better parsing/logic to get tws-external order
updates to work again..).
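For reference, the waiter-table idea from the `status_event()` bullet above looks roughly like this; everything besides the `status_event()` name is illustrative:

```python
# sketch only: a minimal waiter table set by the msg relay task.
import trio


class MethodProxySketch:

    def __init__(self) -> None:
        # "system event" substring -> waiter set by the relay task
        self.event_table = {}

    def status_event(self, pattern: str) -> trio.Event:
        # register interest in a particular ib "system event" msg
        return self.event_table.setdefault(pattern, trio.Event())

    def relay_status(self, msg: str) -> None:
        # called by the background msg relay task per "event" msg;
        # set and drop any matching waiter entries
        for pattern in list(self.event_table):
            if pattern in msg:
                self.event_table.pop(pattern).set()
```

A data-feed-reset task can then do something along the lines of `await proxy.status_event('data farm connection is OK').wait()` before resuming history queries.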
Found an issue (that was predictably brushed aside XD) where the
`ib_insync.util.df()` helper was changing the timestamps on bars data to
be way off (probably a `pandas.Timestamp` timezone thing?).
Anyway, dropped all that (which will hopefully let us drop `pandas` as
a hard dep) and added a buncha timestamp checking as well as start/end
datetime return values using `pendulum` so that consumer code can know
which "slice" is output.
Also added some WIP code to work around "no history found" request
errors where instead now we try to increment backward another 200
seconds - not sure if this is actually correct yet.
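Conceptually the WIP backoff is just (sketch only, per the caveat above):

```python
# sketch only: step the query end-time back another 200s and retry.
import pendulum


def step_back(end_dt: pendulum.DateTime, secs: int = 200) -> pendulum.DateTime:
    return end_dt.subtract(seconds=secs)
```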
Make the throttle error propagate through to `trio` again by adding
`dict`-msg support between the two loops such that errors can be
re-raised on the `trio` side. This is all integrated into the
`MethodProxy` and accompanying result relay task.
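The relay shape is roughly the following sketch, where `chan` stands in for the real inter-loop channel machinery:

```python
# sketch only: ship errors as dict-msgs and re-raise them trio-side.

async def relay_call(chan, meth, **kwargs) -> None:
    # asyncio side: run the client method, relaying any error as a msg
    try:
        await chan.send({'result': await meth(**kwargs)})
    except Exception as err:  # eg. ib's history throttle error
        await chan.send({'error': str(err), 'type': type(err).__name__})


async def receive_result(chan):
    # trio side (inside `MethodProxy`): re-raise relayed errors
    msg = await chan.receive()
    if 'error' in msg:
        # simplified; the real code can map back to a proper exc type
        raise RuntimeError(f"{msg['type']}: {msg['error']}")

    return msg['result']
```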
Further fix a longer-standing issue where sometimes the `ib_insync`
order entry method will raise a weird assertion error because it detects
some internal order-id state issue.. Just ignore those and relay back an
error to the ems in such cases.
Add a bunch of notes for todos surrounding data feed reset hackery.
To start we only have futes working but this allows both searching
and loading multiple expiries of the same instrument by specifying
different expiries with a `.<expiry>` suffix in the symbol key (eg.
`mnq.globex.20220617`). This also paves the way for options contracts
which will need something similar plus a strike property. This change
set also required a patch to `ib_insync` to allow retrieving multiple
"ambiguous" contracts from the `IB.reqContractDetailsAcync()` method,
see https://github.com/erdewit/ib_insync/pull/454 for further discussion
since the approach here might change.
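The suffix convention itself is simple enough to sketch (the unpacking helper here is illustrative):

```python
# sketch only: split a `sym.venue[.expiry]` style key.

def unpack_fqsn(key: str) -> tuple:
    '''
    Return `(symbol, venue, expiry)` where expiry may be empty for
    a continuous / front contract.

    '''
    parts = key.lower().split('.')
    symbol, venue = parts[:2]
    expiry = parts[2] if len(parts) > 2 else ''
    return symbol, venue, expiry


assert unpack_fqsn('mnq.globex.20220617') == ('mnq', 'globex', '20220617')
assert unpack_fqsn('mnq.globex') == ('mnq', 'globex', '')
```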
This patch also includes a lot of serious reworking of some `trio`-`asyncio`
integration to use the newer `tractor.to_asyncio.open_channel_from()`
api and use it (with a relay task) to open a persistent connection with
an in-actor `ib_insync` `Client` mostly for history requests.
Deats,
- annot the module with a `_infect_asyncio: bool` for `tractor` spawning
- add a futes venue list
- support ambiguous futes contracts lookups so that all expiries will
show in search
- support both continuous and specific expiry fute contract
qualification
- allow searching with "fqsn" keys
- don't crash on "data not found" errors in history requests
- move all quotes msg "topic-key" generation (which should now be
a broker-specific fqsn) and per-contract quote processing into
`normalize()`
- set the fqsn key in the symbol info init msg
- use `open_client_proxy()` in bars backfiller endpoint
- include expiry suffix in position update keys
This adds a new client manager-factory: `open_client_proxy()` which uses
the newer `tractor.to_asyncio.open_channel_from()` (and thus the
inter-loop-task-channel style), an `aio_client_method_relay()` and
a re-implemented `MethodProxy` wrapper to allow transparently calling
`asyncio` client methods from `trio` tasks. Use this proxy in the
history backfiller task and add a new (prototype)
`open_history_client()` which will be used in the new storage management
layer. Drop `get_client()` which was the portal wrapping equivalent of
the same proxy but with a one-task-per-call approach. Oh, and
`Client.bars()` can take `datetime`, so let's use it B)
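The call-forwarding half of the proxy is roughly the following sketch; `chan` stands in for the channel opened via `tractor.to_asyncio.open_channel_from()` and the msg/return handling is simplified (see the error-relay sketch further up for that part):

```python
# sketch only: transparently forward method calls over the channel.

class ClientProxySketch:
    '''
    Make `asyncio`-side client methods callable from `trio` tasks by
    shipping (method-name, kwargs) msgs to the relay task.

    '''
    def __init__(self, chan) -> None:
        self.chan = chan

    def __getattr__(self, name: str):
        # return an awaitable shim per client method name
        async def shim(**kwargs):
            await self.chan.send((name, kwargs))
            return await self.chan.receive()

        return shim


# hypothetical usage from a trio task:
# bars = await proxy.bars(fqsn='mnq.globex.20220617', end_dt=end_dt)
```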
Use fqsn as input to the client-side EMS apis but strip broker-name
stuff before generating and sending `Brokerd*` msgs to each backend for
live order requests (since it's weird for a backend to expect its own
name, though maybe that could be a sanity check?).
Summary of fqsn use vs. broker native keys:
- client side pps, order requests and general UX for order management
use an fqsn for tracking
- brokerd side order dialogs use the broker-specific symbol which is
usually nearly the same key minus the broker name
- internal dark book and quote feed lookups use the fqsn where possible
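As a sketch of that split (assuming the broker name is suffixed on the fqsn; the helper name is illustrative):

```python
# sketch only: strip the broker suffix before emitting `Brokerd*` msgs.

def strip_broker(fqsn: str) -> tuple:
    '''
    Split an fqsn into `(broker, broker-native symbol key)`.

    '''
    *parts, broker = fqsn.lower().split('.')
    return broker, '.'.join(parts)


assert strip_broker('mnq.globex.20220617.ib') == (
    'ib', 'mnq.globex.20220617',
)
```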