Commit Graph

129 Commits (144255bf016f3135ab82e98a6254cc324c18ebad)

Author SHA1 Message Date
Tyler Goodlet f5de361f49 Import directly from `tractor.trionics` 2022-06-05 22:13:35 -04:00
Tyler Goodlet 5d26609693 Add "no-tsdb-found" history load length defaults 2022-06-05 22:13:08 -04:00
Tyler Goodlet 4f36743f64 Only update prepended graphics when actually in view 2022-06-05 22:13:08 -04:00
Tyler Goodlet 88eccc1e15 Fill in label with pairs from `status` value of backend init msg 2022-06-05 22:08:00 -04:00
Tyler Goodlet 1657f51edc Manually fetch missing out-of-order history frames
It seems once in a while a frame can get missed or dropped (at least
with binance?) so in those cases, when the request erlangs (concurrent
in-flight requests) are already at max, we just manually request the
missing frame and presume things will work out XD

Further, discard out-of-order frames that are "from the future" and
somehow end up in the async queue once in a while? Not sure why this
happens but so far just discarding them seems to be nbd.
2022-05-10 17:25:20 -04:00
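
The commit above boils down to two small checks around the async frame-response queue. A minimal sketch of that idea (the helper names, queue shape and `get_ohlc_frame()` callable here are hypothetical stand-ins, not piker's actual code):

```python
from datetime import datetime, timedelta


async def maybe_fill_gap(
    get_ohlc_frame,          # async callable: (start_dt, end_dt) -> frame
    last_pushed: datetime,   # end bound of the last frame written to shm
    next_start: datetime,    # start bound of the next frame in the queue
    frame_size: timedelta,
):
    # a hole between what we've already pushed and the next queued frame
    # means a request was likely dropped; re-request that span directly
    # instead of waiting on the (already maxed out) batch scheduler.
    if next_start > last_pushed:
        return await get_ohlc_frame(last_pushed, last_pushed + frame_size)
    return None


def is_from_the_future(frame_end: datetime, last_pushed: datetime) -> bool:
    # backfill walks backwards in time, so a frame ending *after* the
    # last pushed datum is out of order "from the future": drop it.
    return frame_end > last_pushed
```
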
Tyler Goodlet 277ca29018 Always write missing history frames to tsdb (again) 2022-05-10 14:55:52 -04:00
Tyler Goodlet 26fddae3c0 Fix earliest frame-end not-yet-pushed check
Bleh/🤦, the ``end_dt`` in scope is not the "earliest" frame's
``end_dt`` in the async response queue.. Parse the queue's latest epoch
and use **that** to compare to the last pushed datetime index..

Add more detailed logging to help debug any (un)expected datetime index
gaps.
2022-05-10 14:55:52 -04:00
Tyler Goodlet 30ddf63ec0 Handle gaps greater than a frame within a frame 2022-05-09 11:15:14 -04:00
Tyler Goodlet 1676bceee1 Don't offset the start index by a step 2022-05-09 11:15:14 -04:00
Tyler Goodlet c9a621fc2a Fix less-than-frame off-by-one slice, add db write toggle and disable 2022-05-09 11:15:14 -04:00
Tyler Goodlet 61e9db3229 Handle ``iter_dts()`` already exhausted edge case 2022-05-09 11:15:14 -04:00
Tyler Goodlet 40753ae93c Always write newly pulled frames to tsdb 2022-05-09 11:15:14 -04:00
Tyler Goodlet 969530ba19 Fix slice logic for less-than-frame tsdb overlap
When the tsdb has a last datum that is in the past less than a "frame's
worth" of sample steps we need to slice out only the data from the
latest frame that doesn't overlap; this fixes that slice logic..
Previously I dunno wth it was doing..
2022-05-09 11:15:14 -04:00
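
For the overlap case above, the fix amounts to slicing off only the rows of the fresh frame that are strictly newer than the tsdb's last datum. A rough illustration (assuming a structured numpy array with an epoch-seconds 'time' field; names are made up):

```python
import numpy as np


def slice_non_overlapping(
    frame: np.ndarray,    # freshly fetched ohlc frame, ascending 'time'
    tsdb_last_t: float,   # epoch of the last datum already in the tsdb
) -> np.ndarray:
    # `searchsorted(..., side='right')` gives the first index strictly
    # after the tsdb's last timestamp; everything before it overlaps.
    i = np.searchsorted(frame['time'], tsdb_last_t, side='right')
    return frame[i:]
```
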
Tyler Goodlet b44786e5b7 Support async-batched ohlc queries in all backends
Expect each backend to deliver a `config: dict[str, Any]` which provides
concurrency controls to `trimeter`'s batch task scheduler such that
backends can define their own concurrency limits.

The dirty deets in this patch include handling history "gaps" where
a query returns a history-frame-result which spans more than the typical
frame size (in seconds). In such cases we reset the target frame index
(datetime index sequence implemented with a `pendulum.Period`) using
a generator protocol `.send()` such that the sequence can be dynamically
re-indexed starting at the new (possibly) pre-gap datetime. The new gap
logic also allows us to detect out of order frames easier and thus wait
for the next-in-order to arrive before making more requests.
2022-05-09 11:15:14 -04:00
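
The re-indexing trick described above leans on the generator `.send()` protocol. A toy version (stdlib datetimes instead of `pendulum`, names invented) of a frame-start sequence that can be re-anchored mid-iteration:

```python
from datetime import datetime, timedelta


def iter_frame_starts(start: datetime, step: timedelta):
    cur = start
    while True:
        sent = yield cur
        if sent is not None:
            # gap detected by the caller: `.send()` re-anchors the
            # sequence and immediately yields the new anchor back.
            cur = sent
            continue
        cur -= step  # backfill walks backwards in time


# usage sketch:
gen = iter_frame_starts(datetime(2022, 5, 9), timedelta(hours=1))
first = next(gen)
second = next(gen)
# a returned frame spanned a multi-day gap, so jump the index past it:
re_anchored = gen.send(second - timedelta(days=2))
assert re_anchored < second
```
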
Tyler Goodlet fcb85873de Terminate early on data unavailable errors 2022-05-09 11:15:14 -04:00
Tyler Goodlet 7b1c0939bd Add first-draft `trimeter` based concurrent ohlc history fetching 2022-05-09 11:15:14 -04:00
Tyler Goodlet 6ba3c15c4e Add to signal broker won't deliver more data 2022-05-09 11:15:14 -04:00
Tyler Goodlet 2f04a8c939 Drop legacy back-filling logic
Use the new `open_history_client()` endpoint/API and expect backends to
provide a history "getter" routine that can be called to load historical
data into shm even when **not** using a tsdb. Add logic for filling in
data from the tsdb once the backend has provided data up to the last
recorded in the db. Add logic for avoiding overruns of the shm buffer
with more-than-necessary queries of tsdb data.
2022-05-09 11:15:14 -04:00
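
One plausible shape for the `open_history_client()` endpoint referenced above (the signature here is illustrative, not the exact piker API): an async context manager that yields a frame "getter" the caller can invoke repeatedly while walking back in time.

```python
from contextlib import asynccontextmanager
from datetime import datetime
from typing import Optional

import numpy as np


@asynccontextmanager
async def open_history_client(symbol: str):

    async def get_ohlc(end_dt: Optional[datetime] = None):
        # ...query the provider for one frame ending at `end_dt`...
        # return the array plus the datetime bounds it actually covers
        array = np.empty(0)
        return array, None, end_dt

    yield get_ohlc
```
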
Tyler Goodlet 8bf40ae299 Drop legacy backfilling, load a day's worth of data by default 2022-05-09 11:15:14 -04:00
Tyler Goodlet 0f683205f4 Add 16 fetch limit if no tsdb data found 2022-05-09 11:15:14 -04:00
Tyler Goodlet 3056bc3143 Don't run legacy backfill when the tsdb isn't up 2022-05-09 11:15:13 -04:00
Tyler Goodlet d3824c8c0b Start legacy backfill with partial too 2022-05-09 11:15:13 -04:00
Tyler Goodlet 727d3cc027 Unify backfilling logic into common task-routine 2022-05-09 11:15:13 -04:00
Tyler Goodlet bcf3be1fe4 A bit hacky but, broadcast index streams on each history prepend 2022-05-09 11:15:13 -04:00
Tyler Goodlet 9fe5cd647a Handle non-fqsn for derivs and don't put brokername in 2022-05-09 11:15:13 -04:00
Tyler Goodlet ce3229df7d Get sync-to-marketstore-tsdb history retrieval workinnn 2022-05-09 11:15:13 -04:00
Tyler Goodlet 41325ad418 Add basic tsdb history loading
If `marketstore` is detected try to only load most recent missing data
from the data provider (broker) and the rest from the tsdb and push it
all to shm for display in the UI. If the provider/broker doesn't have
the history client endpoint, just use the old one for now so we can
start to incrementally add support. Don't start the ohlc step
incrementer task until the backend signals that the feed is live.
2022-05-09 11:15:13 -04:00
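
In pseudo-Python the load order described above looks roughly like this (every name here, `get_ohlc`, `tsdb`, `shm`, is a hypothetical stand-in):

```python
async def load_history(get_ohlc, tsdb, shm, fqsn: str) -> None:
    # 1. the most recent (not yet recorded) data comes from the provider
    frame, start_dt, end_dt = await get_ohlc()
    shm.push(frame)

    # 2. everything older is read back from the tsdb and prepended
    stored = await tsdb.read_ohlcv(fqsn, end=start_dt)
    if len(stored):
        shm.push(stored, prepend=True)
```
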
Tyler Goodlet 6cdd017cd6 Ensure bfqsn is lower cased for feed api consumers
Also, start tinkering with `tractor.trionics.ipython_embed()`

In an effort to get back to a usable REPL around the mkts client
this adds usage of the new `tractor` integration api as well as logic
for skipping backfilling if existing tsdb arrays are found.
2022-05-09 11:15:13 -04:00
Tyler Goodlet 706c8085f2 Prototype a high level `Storage` api
Starts a wrapper around the `marketstore` client to do basic ohlcv query
and retrieval and prototypes out write methods for ohlc and tick.
Try to connect to `marketstore` automatically (which will currently
fail if it's not already started) but we will eventually do a service
query first.

Further:

- get `pikerd` working with and without `--tsdb` flag.
- support spawning `brokerd` with no real-time quotes.
- bring back in "fqsn" support that was originally not in this
  history before the commit factoring.
2022-05-09 11:15:13 -04:00
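
A very rough outline of the kind of `Storage` wrapper being prototyped above; the underlying `client` methods are stand-ins, not the real marketstore client API:

```python
class Storage:
    '''Thin high-level ohlcv/tick facade over a connected tsdb client.'''

    def __init__(self, client):
        self.client = client  # e.g. a marketstore client instance

    async def read_ohlcv(self, fqsn: str, timeframe: str = '1Min'):
        # query a symbol's ohlcv history out of the tsdb
        return await self.client.query(fqsn, timeframe)

    async def write_ohlcv(self, fqsn: str, ohlcv) -> None:
        # append newly pulled ohlcv rows for a symbol
        await self.client.write(fqsn, ohlcv)

    async def write_ticks(self, fqsn: str, ticks) -> None:
        # prototype quote-by-quote (tick) writes
        await self.client.write(f'{fqsn}/ticks', ticks)
```
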
Tyler Goodlet 7d2e9bff46 Type annot updates 2022-05-09 11:15:13 -04:00
Tyler Goodlet ebe2680355 Change `uncons_fqsn()` -> `unpack_fqsn()` 2022-04-11 01:01:36 -04:00
Tyler Goodlet 32e316ebff Drop nl 2022-04-10 17:33:02 -04:00
Tyler Goodlet 7bd5b42f9e Ensure we lower case the fqsn received from all backends before delivery 2022-04-10 17:30:02 -04:00
Tyler Goodlet 8462ea8a28 Make the data feed layer "fqsn" aware
In order to support instruments with lifetimes (aka derivatives) we
generally need special symbol annotations which detail such meta data
(such as `MNQ.GLOBEX.20220717` for daq futes). Further there is really
no reason for the public api for this feed layer to care about getting
a special "brokername" field since generally the data is coming directly
from UIs (eg. search selection) so we might as well accept a fqsn (fully
qualified symbol name) which includes the broker name; for now a suffix
like `'.ib'`. We may change this schema (soon) but this at least gets us
to a point where we expect the full name including broker/provider.

An additional detail: for certain "generic" symbol names (like for
futes) we will pull a so-called "front contract" and map this to
a specific fqsn underneath, so there is a second (cached) entry for that
symbol such that other consumers can use it the same way if desired.

Some other machinery changes:
- expect the `stream_quotes()` endpoint to deliver its `.started()` msg
  almost immediately since we now need it to deliver any fqsn asap (yes
  this means the ep should no longer wait on a "live" first quote and
  instead deliver what quote data it can right away).
- expect the quotes ohlc sampler task to add in the broker name before
  broadcast to remote (actor) consumers since the backend isn't (yet)
  expected to do that add in itself.
- obviously we start using all the new fqsn related `Symbol` apis
2022-04-10 17:30:02 -04:00
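
The fqsn convention itself is just a broker suffix on the end of a lowercased symbol key. A minimal stand-in (not piker's actual `unpack_fqsn()`) to show the idea:

```python
def split_fqsn(fqsn: str) -> tuple[str, str]:
    # 'mnq.globex.20220717.ib' -> ('ib', 'mnq.globex.20220717')
    *key_parts, broker = fqsn.lower().split('.')
    return broker, '.'.join(key_parts)


assert split_fqsn('MNQ.GLOBEX.20220717.ib') == ('ib', 'mnq.globex.20220717')
```
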
Tyler Goodlet e9d64ffee8 Use fqsn in `.manage_history()`
Allocate and `.started()` return the `ShmArray` from here as well in
prep for tsdb integration.
2022-04-10 17:30:02 -04:00
Tyler Goodlet cc026dfb1d Open feeds using `Portal.open_context()` 2022-04-10 17:30:02 -04:00
Tyler Goodlet f7d03489d8 Drop `marketstore` loading cruft (will come later) 2022-03-01 12:39:12 -05:00
Tyler Goodlet 09079b61fc Comment task canceller method prototype 2022-03-01 12:37:31 -05:00
Tyler Goodlet c239faf4e5 Add a `._sampling.sampler` registry composite type 2022-03-01 12:36:32 -05:00
Tyler Goodlet b1cce8f9cf Adjust and add notes for python-trio/trio#2258 2022-02-28 08:30:22 -05:00
Tyler Goodlet bf3b58e861 Async load data history, allow "offline" feed use
Break up real-time quote feed and history loading into 2 separate tasks
and deliver a client side `data.Feed` as soon as history is loaded
(instead of waiting for a rt quote - the previous logic). If
a symbol doesn't have history then likely the feed shouldn't be loaded
(since presumably client code will need at least "some" history datums
to do anything) and waiting on a real-time quote is dumb, since it'll
hang if the market isn't open XD. If a symbol doesn't have history we
can always write a zero/null array when we run into that case. This also
greatly speeds up feed loading when both history and quotes are available.

TL;DR summary:
- add a `_Feedsbus.start_task()` one-cancel-scope-per-task method for
  assisting with (re-)starting and stopping long running persistent
  feeds (basically a "one cancels one" style nursery API).
- add a `manage_history()` task which does all history loading (and
  eventually real-time writing) which has an independent signal and
  start it in a separate task.
- drop the "sample rate per symbol" stuff since client code doesn't really
  care when it can just inspect shm indexing/time-steps itself.
- run throttle tasks in the bus nursery thus avoiding cancelling the
  underlying sampler task on feed client disconnects.
- don't store a repeated ref to the bus nursery's cancel scope..
2022-02-28 08:26:13 -05:00
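
The "one cancels one" style `start_task()` mentioned above can be sketched with plain `trio` primitives, assuming the bus already owns a long-lived nursery (class and method names here are illustrative):

```python
import trio


class FeedsBusSketch:
    def __init__(self, nursery: trio.Nursery):
        self._nursery = nursery

    async def start_task(self, target, *args) -> trio.CancelScope:
        async def _with_scope(task_status=trio.TASK_STATUS_IGNORED):
            # each started task gets its own cancel scope so it can be
            # stopped individually without touching the whole nursery
            with trio.CancelScope() as cs:
                task_status.started(cs)
                await target(*args)

        # hand the scope back so callers can cancel just this one task
        return await self._nursery.start(_with_scope)
```
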
Tyler Goodlet 1d3ed6c333 Add `mk_` prefix since assignments will use `fqsn` 2022-02-28 08:23:57 -05:00
Tyler Goodlet c2a13c474c Support no realtime stream sending with feed bus 2022-02-28 08:22:40 -05:00
Tyler Goodlet 2d3c685e19 Typecast np dtype description to a tuple 2022-02-07 12:53:30 -05:00
Tyler Goodlet 50713030f8 annoying doc strings 2022-01-25 07:57:01 -05:00
Tyler Goodlet 422977d27a Port to new `tractor.trionics.maybe_open_context()` api 2022-01-23 21:01:38 -05:00
Tyler Goodlet 835ad7794c Don't error on sub removal attempts, feeds need backpressure 2022-01-23 19:47:20 -05:00
Tyler Goodlet 8f023cd66f Factor out context cacher to `tractor.trionics` 2022-01-23 19:45:34 -05:00
Tyler Goodlet 21386f6c1f Rename feed bus entrypoint 2022-01-23 12:22:37 -05:00
Tyler Goodlet bcc8d8a0d5 Simplify throttle loop to a single while block
This should in theory result in increased burstiness since we remove
the plain `trio.sleep()` and instead always wait on the receive channel
as much as possible until the `trio.move_on_after()` (+ time diffing
calcs) times out and signals the next throttled send cycle. This is
also slightly easier to grok code-wise than the `try, except` plus
another tight while loop until a `trio.WouldBlock`. The only simpler
way I can think to do it is with 2 tasks: 1 to collect ticks and the
other to read and send at the throttle rate.

Comment out the log msg for now to avoid latency and add much more
detailed comments. Add an overrun log msg to the main sample loop.
2021-12-09 08:23:59 -05:00
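
A simplified version of that single-`while`-block throttle (the real code differs, e.g. in the time-diffing details): drain the receive channel inside a `trio.move_on_after()` window, then flush one batched send per cycle.

```python
import trio


async def throttle_quotes(
    recv: trio.MemoryReceiveChannel,
    send,          # async callable taking a batch of ticks
    rate: float,   # max sends per second requested by the client
) -> None:
    period = 1 / rate
    pending = []
    while True:
        with trio.move_on_after(period):
            # block on the channel for at most one throttle period,
            # accumulating every tick that arrives in the window
            while True:
                pending.append(await recv.receive())
        if pending:
            await send(pending)
            pending = []
```
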
Tyler Goodlet fd8be33f10 Add portal getter, store throttle rate 2021-09-21 15:48:40 -04:00
Tyler Goodlet 03c38a1163 It's a map of symbols to first quote dicts 2021-09-06 09:28:10 -04:00
Tyler Goodlet dfb9c55944 Compute symbol digits at creation time
Add a new factory func `mk_symbol()` to create the initial
instance at feed creation time.
2021-09-06 09:28:10 -04:00
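
One way to "compute symbol digits at creation time" (purely illustrative, not the actual `mk_symbol()` internals) is to derive the display precision from the instrument's tick/step size:

```python
from decimal import Decimal


def float_digits(step: float) -> int:
    # 0.01 -> 2, 0.0005 -> 4, 1.0 -> 0
    return max(0, -Decimal(str(step)).normalize().as_tuple().exponent)


assert float_digits(0.01) == 2
assert float_digits(1.0) == 0
```
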
Tyler Goodlet 26cb7aa660 Drop tractor stream shielding use 2021-09-01 09:03:55 -04:00
Tyler Goodlet 1184a4d88e Cache sample step streams per actor 2021-08-31 09:28:22 -04:00
Tyler Goodlet bbcce0cab6 Facepalm^2: pass through kwargs 2021-08-30 18:04:19 -04:00
Tyler Goodlet 1e42f58478 Add pause/resume feed api, delegate to msg stream for broadcast api 2021-08-26 17:04:59 -04:00
Tyler Goodlet c8e320849a Add super basic support for data feed "pausing" 2021-08-26 17:04:59 -04:00
Tyler Goodlet 7d0f47364c Use `maybe_open_feed()` in ems and fsp daemons 2021-08-26 17:04:59 -04:00
Tyler Goodlet a7d3afc9b1 Add a `maybe_open_feed()` which uses new broadcast chans
Try out the new broadcast channels from `tractor` for data feeds
we already have cached. Any time there's a cache hit we load the
cached feed and just slap a broadcast receiver on it for the local
consumer task.
2021-08-26 17:04:59 -04:00
Tyler Goodlet f4a998655b Feed detach must explicitly unsub throttled streams
If a client attaches to a quotes data feed and requests a throttle rate,
be sure to unsub that side-band memchan + task when it detaches and
especially so on any transport connection error.

Also, use an explicit `tractor.Context.cancel()` on the client feed
block exit since we removed the implicit cancel option from the
`tractor` api.
2021-07-07 07:51:09 -04:00
Tyler Goodlet df2f6487ff Apply `brokerd` quote rate throttling when requested in `open_feed()` 2021-06-14 21:55:51 -04:00
Tyler Goodlet 57a35a3c6c Port feed bus endpoint to a `@tractor.context` 2021-06-14 10:55:01 -04:00
Tyler Goodlet 3455ebc60c Cast back to tuples after msgspec strips them... 2021-06-14 00:03:05 -04:00
Tyler Goodlet f4c9e20f0d Avoid `numpy` type usage on the wire 2021-06-01 10:48:23 -04:00
Tyler Goodlet 50aff72f8e Don't require map (yet) in backend modules 2021-05-27 13:05:23 -04:00
Tyler Goodlet 59475cfd81 Store lowercase symbols within piker data internals 2021-05-27 13:05:23 -04:00
Tyler Goodlet 9bfc230dde Speedup: load provider searches async at startup 2021-05-27 13:05:23 -04:00
Tyler Goodlet 42fda2a9e6 Drop old code 2021-05-27 13:05:23 -04:00
Tyler Goodlet 1bd0ee8746 Support loading multi-brokerds search at startup 2021-05-27 13:05:23 -04:00
Tyler Goodlet 59377da0ad Load pause configs from backends on feed opens 2021-05-27 13:05:23 -04:00
Tyler Goodlet c9c686c98d Register context-stream with multi-search for each feed 2021-05-27 13:05:23 -04:00
Tyler Goodlet f19f4348e0 Decouple symbol search from feed type 2021-05-27 13:05:22 -04:00
Tyler Goodlet 5766dd518d Enforce lower case symbols across providers 2021-05-27 13:05:22 -04:00
Tyler Goodlet 534553a6f5 Add client side multi-provider feed symbol search 2021-05-27 13:05:22 -04:00
Tyler Goodlet dcc60524cb Add remote context allocation api to service daemon
This allows for more deterministically managing long running sub-daemon
services under `pikerd` using the new context api from `tractor`.
The contexts are allocated in an async exit stack and torn down at root
daemon termination. Spawn brokerds using this method by changing the
persistence entry point to be a `@tractor.context`.
2021-05-24 12:26:11 -04:00
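
The "allocate in an async exit stack, tear down at root termination" idea above can be sketched with stdlib tooling; `open_service_ctx` below is a stand-in for whatever `tractor` context the daemon actually opens:

```python
from contextlib import AsyncExitStack


class ServiceManager:
    def __init__(self):
        self._stack = AsyncExitStack()

    async def start_service(self, open_service_ctx, name: str):
        # enter the (remote) context now; it stays open until teardown
        return await self._stack.enter_async_context(open_service_ctx(name))

    async def shutdown(self):
        # tear every long-running service context down at daemon exit
        await self._stack.aclose()
```
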
Tyler Goodlet 0b36bacfb6 Avoid weird `pydantic` runtime warning 2021-05-24 12:22:17 -04:00
Tyler Goodlet 0d9f091a34 Port data feed to new tractor stream api 2021-04-29 09:10:18 -04:00
Tyler Goodlet 7d6bc4d856 Move feed api(s) into new submodule
Also add a --pdb flag to chart app.
2021-04-15 10:43:29 -04:00