ib has a throttle limit for "hft" bars, but included here is some
hackery using ``xdotool`` to reset data farms auto-magically B)
This copies the working script into the ib backend mod as a routine,
runs it via `trio.run_process()` from the `get_bars()` history
retriever, and then waits for "data re-established" events to be
received from the client before making more history queries.
TL;DR summary of changes:
- relay ib's "system status" events (like for data farm statuses)
as a new "event" msg that can be processed by registrants of
`Client.inline_errors()` (though we should probably make a new
method for this).
- add `MethodProxy.status_event()` which allows a proxy user to register
for a particular "system event" (as mentioned above), which puts
a `trio.Event` entry in a small table that can be set by a relay task
if there are any detected waiters (see the sketch after this list).
- start a "msg relay task" when opening the method proxy which does
the event setting mentioned above in the background.
- drop the request error handling around the proxy creation; it doesn't
seem necessary any more now that we have better error propagation from
`asyncio`.
- add event waiting logic around the data feed reset hackzorin.
- change the order relay task to only log system events for now (though
we need to do some better parsing/logic to get tws-external order
updates to work again..)
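
As a rough sketch of that event-table idea (the msg field names here are
assumptions, not the exact ib payload):

    import trio

    class MethodProxy:
        def __init__(self) -> None:
            # "system event" pattern -> waiter event, set by the relay task
            self.event_table: dict[str, trio.Event] = {}

        def status_event(self, pattern: str) -> trio.Event:
            # register interest in a particular "system event" msg
            return self.event_table.setdefault(pattern, trio.Event())

    async def msg_relay_task(proxy: MethodProxy, msg_stream) -> None:
        # background task started when the proxy is opened
        async for msg in msg_stream:
            if msg.get('type') != 'event':
                continue
            status: str = msg.get('message', '')
            for pattern in list(proxy.event_table):
                if pattern in status:
                    # wake any detected waiters for this status pattern
                    proxy.event_table.pop(pattern).set()

`get_bars()` can then `.wait()` on the event returned by
`proxy.status_event()` before issuing further history queries.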
Found an issue (that was predictably brushed aside XD) where the
`ib_insync.util.df()` helper was changing the timestamps on bars data to
be way off (probably a `pandas.Timestamp` timezone thing?).
Anyway, dropped all that (which will hopefully let us drop `pandas` as
a hard dep) and added a buncha timestamp checking as well as start/end
datetime return values using `pendulum` so that consumer code can know
which "slice" is output.
Also added some WIP code to work around "no history found" request
errors: instead we now step the query window back another 200 seconds
and retry - not sure if this is actually correct yet.
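
The WIP retry idea is roughly the following (the `NoHistoryFound` error
type and the `proxy.bars()` signature are placeholders, and `end_dt` is
assumed to be a `pendulum.DateTime`):

    class NoHistoryFound(Exception):
        # placeholder for ib's "no history found" request error
        ...

    async def get_bars_backoff(proxy, fqsn: str, end_dt):
        # on a "no history found" error, shift the query window back
        # another 200 seconds and try again (not sure this is right yet)
        while True:
            try:
                return await proxy.bars(fqsn=fqsn, end_dt=end_dt)
            except NoHistoryFound:
                end_dt = end_dt.subtract(seconds=200)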
Make the throttle error propagate through to `trio` again by adding
`dict`-msg support between the two loops such that errors can be
re-raised on the `trio` side. This is all integrated into the
`MethodProxy` and accompanying result relay task.
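
The inter-loop error relay boils down to boxing exceptions in dict msgs
on the `asyncio` side and unboxing them on the `trio` side (the msg
schema here is an assumption):

    # asyncio side: run the requested client method and box any error
    async def run_method(client, meth: str, kwargs: dict) -> dict:
        try:
            result = await getattr(client, meth)(**kwargs)
            return {'result': result}
        except Exception as err:
            return {'exception': err, 'error': str(err)}

    # trio side: unpack the relayed msg and re-raise any boxed error
    def unwrap(msg: dict):
        if 'exception' in msg:
            raise msg['exception']
        return msg['result']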
Further, fix a longer-standing issue where sometimes the `ib_insync`
order entry method will raise a weird assertion error because it detects
some internal order-id state issue.. Just ignore those and relay back
an error to the ems in such cases.
Add a bunch of notes for todos surrounding data feed reset hackery.
To start we only have futes working but this allows both searching
and loading multiple expiries of the same instrument by specifying
different expiries with a `.<expiry>` suffix in the symbol key (eg.
`mnq.globex.20220617`). This also paves the way for options contracts
which will need something similar plus a strike property. This change
set also required a patch to `ib_insync` to allow retrieving multiple
"ambiguous" contracts from the `IB.reqContractDetailsAsync()` method,
see https://github.com/erdewit/ib_insync/pull/454 for further discussion
since the approach here might change.
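
Key parsing for the expiry suffix is simple enough (a sketch, not the
exact fqsn handling in the codebase):

    def parse_key(key: str) -> tuple[str, str, str]:
        '''
        Split a key like ``mnq.globex.20220617`` into
        (symbol, venue, expiry) where the expiry part is optional.
        '''
        parts = key.lower().split('.')
        symbol, venue = parts[0], parts[1]
        expiry = parts[2] if len(parts) > 2 else ''
        return symbol, venue, expiry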
This patch also includes a lot of serious reworking of some `trio`-`asyncio`
integration to use the newer `tractor.to_asyncio.open_channel_from()`
api (with a relay task) to open a persistent connection with
an in-actor `ib_insync` `Client`, mostly for history requests.
Deats,
- annot the module with a `_infect_asyncio: bool` for `tractor` spawning
- add a futes venue list
- support ambiguous futes contracts lookups so that all expiries will
show in search
- support both continuous and specific expiry fute contract
qualification
- allow searching with "fqsn" keys
- don't crash on "data not found" errors in history requests
- move all quotes msg "topic-key" generation (which should now be
a broker-specific fqsn) and per-contract quote processing into
`normalize()`
- set the fqsn key in the symbol info init msg
- use `open_client_proxy()` in bars backfiller endpoint
- include expiry suffix in position update keys
This adds a new client manager-factory: `open_client_proxy()` which uses
the newer `tractor.to_asyncio.open_channel_from()` (and thus the
inter-loop-task-channel style), an `aio_client_method_relay()`, and
a re-implemented `MethodProxy` wrapper to allow transparently calling
`asyncio` client methods from `trio` tasks. Use this proxy in the
history backfiller task and add a new (prototype)
`open_history_client()` which will be used in the new storage management
layer. Drop `get_client()` which was the portal wrapping equivalent of
the same proxy but with a one-task-per-call approach. Oh, and
`Client.bars()` can take `datetime`, so let's use it B)
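
In spirit the proxy just ships `(method-name, kwargs)` msgs over the
inter-loop channel and blocks on the relayed result; a minimal sketch
assuming a channel object with async `send()`/`receive()` as yielded by
`open_channel_from()`:

    class MethodProxy:
        '''
        Transparently call ``asyncio``-side client methods from ``trio``.
        '''
        def __init__(self, chan) -> None:
            self.chan = chan  # inter-loop-task channel (assumed api)

        async def _run_method(self, meth: str, **kwargs):
            await self.chan.send((meth, kwargs))
            msg = await self.chan.receive()
            if 'error' in msg:
                raise RuntimeError(msg['error'])
            return msg['result']

        def __getattr__(self, name: str):
            async def invoke(**kwargs):
                return await self._run_method(name, **kwargs)
            return invoke

The history backfiller can then do e.g.
`bars = await proxy.bars(contract=con, end_dt=dt)` without caring which
loop the client actually lives in.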
Move the core ws message handling into `stream_messages()` and call that
from 2 new stream processors: `process_data_feed_msgs()` and
`process_order_msgs()`. Add comments for hints on how to implement the
order msg parsing as well as `pprint` received msgs to console for now.
Must have run into some confusion with data structures in `brokerd` vs.
`emsd`. This fixes the ems `relay.positions` state tracking to be
composed of maps, whereas messages from `brokerd` should just be a sequence.
This adds full support for a single `brokerd` managing multiple API
endpoint clients in tandem. Get the client scan loop correct and load
accounts from all discovered clients as specified in a user's
`brokers.toml`. We now always re-scan for all clients and if there's
a cache hit we just skip the creation/connection logic.
Route orders with an account name to the correct client in the
`handle_order_requests()` endpoint and spawn an event relay task per
client for transmitting trade events back to `emsd`.
Make the `handle_order_requests()` tasks now lookup the appropriate API
client for a given account (or error if it can't be found) and use it
for submission. Account names are loaded from the
`brokers.toml::accounts.ib` section both UI side and in the `brokerd`.
Change `_aio_get_client()` to a `load_aio_client()` which now tries to
scan and load api clients for all connections defined in the config as
well as deliver the client cache and account lookup tables.
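
Order routing then amounts to an account-name lookup against the tables
built at scan time (a sketch; the table/var names are assumptions):

    def get_client_for_account(account: str, accounts2clients: dict):
        # map an order's account name to the api client that owns it,
        # erroring loudly if no such client was loaded at startup
        client = accounts2clients.get(account)
        if client is None:
            raise ValueError(
                f'No api client loaded for account {account!r}; '
                'check the `accounts.ib` section of your brokers.toml'
            )
        return client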
This gives us fast search over a known set of symbols you can't search
for with the api such as futures and commodities contracts.
Toss in a new client method to look up contract details,
`Client.con_deats()`, and avoid calling it from `.search_stock()` for
now for speed; it seems originally we were doing the 2nd lookup due to weird
suffixes in the `.primaryExchange` which we can just discard.
Obviously this only supports stocks to start; it looks like we might
actually have to hard code some of the futures/forex/cmdtys that don't
have a search.. so lame. Special throttling is added here since the api
will grog out at anything more than 1Hz.
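
The throttle is just a minimum spacing between search requests (a sketch
using `trio`; the 1Hz figure is the observed api limit mentioned above):

    import trio

    _last_search: float = 0.0

    async def throttled_search(search_fn, pattern: str, rate: float = 1.0):
        # space api hits out to at most ``rate`` requests per second
        global _last_search
        wait = (1 / rate) - (trio.current_time() - _last_search)
        if wait > 0:
            await trio.sleep(wait)
        _last_search = trio.current_time()
        return await search_fn(pattern)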
Additionally, decouple the bar loading request error handling from the
shm pushing loop so that we can always recover from a historical bars
throttle-error even if it's on the first try for a new symbol.
This gets the binance provider meeting the data feed schema requirements
of both the OHLC sampling/charting machinery as well as proper
formatting of historical OHLC history.
Notably,
- spec a minimal ohlc dtype based on the kline endpoint (sketched below)
- use a dataclass to parse out OHLC bar datums and pack into np.ndarray/shm
- add the ``aggTrade`` endpoint to get last clearing (traded) prices,
validate with ``pydantic`` and then normalize these into our tick-quote
format for delivery over the feed stream api.
- a notable requirement is that the "first" quote from the feed must
contain a `last` field so the clearing system can start up correctly.
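
A sketch of the dtype/parsing mentioned above (field names are
assumptions based on the kline response layout):

    import numpy as np

    # minimal ohlc dtype spec'd from the kline endpoint fields
    ohlc_dtype = np.dtype([
        ('index', int),
        ('time', int),       # bar open time in epoch seconds
        ('open', float),
        ('high', float),
        ('low', float),
        ('close', float),
        ('volume', float),
    ])

    def klines_to_array(klines: list) -> np.ndarray:
        # each kline entry is [open_time_ms, open, high, low, close, volume, ...]
        return np.array(
            [
                (
                    i,
                    k[0] // 1000,
                    float(k[1]), float(k[2]), float(k[3]), float(k[4]),
                    float(k[5]),
                )
                for i, k in enumerate(klines)
            ],
            dtype=ohlc_dtype,
        )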
Move all feed/stream agnostic logic and shared mem writing into a new
set of routines inside the ``data`` sub-package. This lets us move
toward a more standard API for broker and data backends to provide
cache-able persistent streams to client apps.
The data layer now takes care of
- starting a single background brokerd task to start a stream for a
symbol if none yet exists and registering that stream for later lookups
- the existing broker backend actor is now always re-used if it can be
found in the service tree
- synchronization with the brokerd stream's startup sequence is now
oriented around fast startup concurrency such that client code gets
a handle to historical data and quote schema as fast as possible
- historical data loading is delegated to the backend more formally by
starting a ``backfill_bars()`` task
- write shared mem in the brokerd task and only destruct it once requested
either from the parent actor or further clients
- fully de-duplicate stream data by using a dynamic pub-sub strategy
where new clients register for copies of the same quote set per symbol
This new API is entirely working with the IB backend; others will need
to be ported. That's to come shortly.
Async spawn a deats getter task whenever we load a symbol data feed.
Pass these symbol details in the first message delivered by the feed at
open. Move stream loop into a new func.
If you have a common broker feed daemon then likely you don't want to
create superfluous shared mem buffers for the same symbol. This adds an
ad hoc little context manager which keeps a bool state of whether
a buffer writer task is currently running in this process. Before we
were checking the shared array token cache and **not** clearing it when
the writer task exited, resulting in incorrect writer/loader logic on
the next entry..
Really, we need a better set of SC semantics around the shared mem stuff
presuming there's only ever one writer per shared buffer at a given time.
Hopefully that will come soon!
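
The guard is basically just per-process set membership with guaranteed
cleanup (a sketch):

    from contextlib import contextmanager

    # symbols which have a live buffer-writer task in *this* process
    _writers: set[str] = set()

    @contextmanager
    def maybe_open_writer(symbol: str):
        '''
        Yield ``True`` if the caller should become the shm writer for
        ``symbol``; always clear the flag on exit so the next entrant
        doesn't wrongly assume a writer is still running.
        '''
        should_write = symbol not in _writers
        if should_write:
            _writers.add(symbol)
        try:
            yield should_write
        finally:
            if should_write:
                _writers.discard(symbol)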
- Move to new shared mem system only writing on the first (by process)
entry to `stream_quotes()`.
- Deliver bars before first quote arrives so that chart can populate and
then wait for initial arrival.
- Allow caching clients per actor.
- Load bars using the same (cached) client that starts the quote stream
thus speeding up initialization.
Adjust the `data.open_feed()` api to take a shm token so the
broker-daemon can attach a previously created (by the parent actor) mem
buf and push real-time tick data. There's still some sloppiness here in
terms of ensuring only one mem buf per symbol (can be seen in
`stream_quotes()`) which should really be managed at the data api level.
Add a bar incrementing stream-task which delivers increment msgs to any
consumers.
Since the new FSP system will require time aligned data amongst actors,
it makes sense to share broker data feeds as much as possible on a local
system. There doesn't seem to be any downside to this approach either:
if we don't fan out in our code, the broker (server) has to do it anyway
(and who knows how junky their implementation is), just with more
clients, sockets etc. in memory on our end. It also preps the code for
introducing a more "serious" pub-sub system like zeromq/nanomessage.
Start a draft normalization format for (sampled) tick data.
Ideally we move toward the dense tick format (DTF) enforced by
tectonicdb, but for now let's just get a dict of something simple
going: `{'type': 'trade', 'price': <price>}` kind of thing. This
gets us started being able to real-time chart from all data feed
back-ends. Oh, and hack in support for XAUUSD.. and get subactor
logging working.
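
The draft format is just a flat dict per event, eg. (a sketch):

    def normalize_trade(price: float, size: float, time_s: float) -> dict:
        # draft normalized (sampled) tick: one flat dict per clearing event
        return {
            'type': 'trade',
            'price': price,
            'size': size,
            'time': time_s,
        }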
Add a `Client.find_contract()` which internally takes
a <symbol>.<exchange> str as input and uses `IB.qualifyContractsAsync()`
internally to try and validate the most likely contract. Make the module
script call this using `asyncio.run()` for console testing.
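
Roughly (a sketch; the real backend handles more contract types and
venues than a plain ``Stock``):

    import asyncio
    from ib_insync import IB, Stock

    async def find_contract(ib: IB, key: str, currency: str = 'USD'):
        # parse a '<symbol>.<exchange>' key and let IB pick the most
        # likely matching contract via qualification
        symbol, exchange = key.upper().split('.')
        contract = Stock(symbol, exchange=exchange, currency=currency)
        qualified = await ib.qualifyContractsAsync(contract)
        return qualified[0] if qualified else None

    if __name__ == '__main__':
        async def main():
            ib = IB()
            await ib.connectAsync('127.0.0.1', 7497, clientId=1)
            print(await find_contract(ib, 'tsla.smart'))
            ib.disconnect()

        asyncio.run(main())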
Infected `asyncio` support is being added to `tractor` in
goodboy/tractor#121 so delegate to all that new machinery.
Start building out an "actor-aware" api which takes care of all the
`trio`-`asyncio` interaction for data streaming and request handling.
Add a little (shudder) method proxy system which can be used to invoke
client methods from another actor. Start on a streaming api in
preparation for real-time charting.
Start working towards meeting the backend client api.
Infect `asyncio` using `trio`'s new guest mode and demonstrate
real-time ticker streaming to console.
This is something I've been meaning to try for a while and will likely
make writing tick data to a db more straightforward (filling in NaN
values is more matter-of-fact) plus it should minimize bandwidth usage.
Note, it'll require stream consumers to be considerate of non-full
quotes arriving and thus using the first "full" quote message to fill
out dynamically formatted systems or displays.
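
Consumer side this just means keeping a last-known-full quote per symbol
and layering the diffs on top (a sketch):

    # last full quote per symbol; incremental msgs only carry fields
    # which changed since the prior quote
    _last_quotes: dict[str, dict] = {}

    def apply_quote_update(symbol: str, partial: dict) -> dict:
        full = _last_quotes.setdefault(symbol, {})
        full.update(partial)
        return full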
For easy testing of questrade historical data from cli.
Re-org the common cli components into a new package to avoid having all
commands defined in a top-level module.
There are some expected limitations with the number of sticks allowed in
a single query (they say 2k but I've been able to pull 20k). Also note
without a paid data sub there's a 15m delay on 1m sticks (we'll hack
around that shortly, don't worry).
Gets us better throughput when polling multiple endpoints (eg. option
and stock quotes simultaneously) since slower round-trip requests won't
block faster ones when using multiple connections.
- stop displaying search bar widget on <ctrl-c>
- if there's existing search bar content, highlight it automatically
to allow the user to start typing new content right away
- when activated allow search bar to insert its own set of keybinding
controls; restore prior bindings on exit
Fixes to `tractor` that resolve issues with async generators being
non-task safe make the need for the mutex lock in
`DataFeed.open_stream()` unnecessary. Also, don't bother pushing empty
quotes from the publisher; avoids hitting the network when possible.
Questrade's API is half baked and can't handle concurrency.
It allows multiple concurrent requests to most endpoints *except*
for the auth endpoint used to refresh tokens:
https://www.questrade.com/api/documentation/security
I've gone through extensive dialogue with their API team and despite
making what I think are very good arguments for doing the request
serialization on the server side, they decided that I should instead
do the "locking" on the client side. Frankly it doesn't seem like they
have that competent an engineering department as it took me a long time
to explain the issue even though it's rather trivial and probably not
that hard to fix; maybe it's better this way.
This adds a few things to ensure more reliable token refreshes on
expiry:
- add a `@refresh_token_on_err` decorator which can be used on `_API`
methods that should refresh tokens on failure
- decorate most endpoints with this *except* for the auth ep
- add locking logic for the troublesome scenario as follows (see the
sketch after this list):
* every time a request is sent out set a "request in progress" event
variable that can be used to determine when no requests are currently
outstanding
* every time the auth end point is hit in order to refresh tokens set
an event that locks out other tasks from making requests
* only allow hitting the auth endpoint when there are no "requests in
progress" using the first event
* mutex all auth endpoint requests; there can only be one outstanding
- don't hit the accounts endpoint at client startup; we want to
eventually support keys from multiple accounts and you can disable
account info per key and just share the market data function
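
A sketch of that locking scheme (names here are illustrative, not the
actual questrade client api):

    import trio

    class _API:
        def __init__(self) -> None:
            self._outstanding: int = 0
            self._no_requests = trio.Event()
            self._no_requests.set()           # nothing in flight yet
            self._has_access = trio.Event()
            self._has_access.set()            # tokens presumed valid
            self._auth_lock = trio.Lock()     # mutex the auth endpoint

        async def _request(self, send_fn):
            await self._has_access.wait()     # blocked during refreshes
            self._outstanding += 1
            self._no_requests = trio.Event()
            try:
                return await send_fn()
            finally:
                self._outstanding -= 1
                if self._outstanding == 0:
                    self._no_requests.set()

        async def _refresh_tokens(self, auth_fn):
            async with self._auth_lock:           # one refresh at a time
                self._has_access = trio.Event()   # lock out new requests
                try:
                    await self._no_requests.wait()  # drain in-flight reqs
                    return await auth_fn()
                finally:
                    self._has_access.set()

The `@refresh_token_on_err` decorator then just wraps an endpoint call
in a retry that invokes the refresh path when an auth failure comes back.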
Adjust feed locking around internal manager `yields` to make this work.
Also, change the quote publisher to deliver a list of quotes for each
retrieved batch. This was actually broken for option streaming since
each quote was being overwritten due to a common `key` value for all
expiries. Adjust the `packetizer` function accordingly to work for
both options and stocks.
The pub-sub data feed system was factored into `tractor` as an
experimental api / subsystem. Move to using that which greatly
simplifies the data feed architecture.
Start working toward a more general (on-demand) pub-sub system which
can be brought into ``tractor``. Right now this just means making
the code in `fan_out_to_ctxs()` less specific, but eventually
I think this function should be coupled with a decorator and shipped
as a standard "message pattern".
Additionally,
- try out making `BrokerFeed` a `@dataclass`
- strip out all the `trio.Event` / unneeded nursery / extra task crap
from `start_quote_stream()`
If quotes are pushed using the adjusted contract symbol (i.e. with
trailing '-1' suffix) the subscriber won't receive them under the
normal symbol. The logic was wrong for determining whether to add
a suffix (was failing for any symbol with an exchange suffix)
which was causing normal data feed subscriptions to fail to match
in every case.
I did some testing of the `optionsIds` parameter to the option quote
endpoint and found that it limits you to 100 symbols so it's not
practical for real-time "all-strike" chain updating; we have to stick
to filters for now. The only real downside of this is that it seems
multiple filters across multiple symbols is quite latent. I need to
toy with it more to be sure it's not something slow on the client side.
Oh, and store option contract-to-id mappings in a `dict` for now as we
may want to try the `optionsIds` thing again down the road as
I coordinate with the QT tech team.