Commit Graph

575 Commits (30d55fdb275373516dd2133ae6f850a6acd6d7a3)

Author SHA1 Message Date
Tyler Goodlet e45bc4c619 Move `ui._compression`/`._pathops` to `.data` subpkg
Since these modules no longer contain Qt specific code we might
as well include them in the data sub-package.

Also, add `IncrementalFormatter.index_field` as a single point to define the
indexing field that should be used for all x-domain graphics-data
rendering.
2023-02-12 13:39:10 -05:00
Tyler Goodlet 8d592886fa Pass `Flume`s throughout FSP-ui and charting APIs
Since higher level charting and fsp management need access to the
new `Flume` indexing apis this adjusts some func sigs to pass through
(and/or create) flume instances:
- `LinkedSplits.add_plot()` and dependents.
- `ChartPlotWidget.draw_curve()` and deps, and it now returns a `Flow`.
- `.ui._fsp.open_fsp_admin()` and `FspAdmin.open_fsp_ui()` related
  methods => now we wrap the destination fsp shm in a flume on the admin
  side, which is returned from `.start_engine_method()`.

Drop a bunch of (unused) chart widget methods including some already
moved to flume methods: `.get_index()`, `.in_view()`,
`.last_bar_in_view()`, `.is_valid_index()`.
2023-02-02 13:32:30 -05:00
Tyler Goodlet fcfc0f31f0 Enable backpressure in an effort to prevent bootup overruns 2023-01-30 11:45:29 -05:00
Tyler Goodlet 844626f6dc Move `brokerd` service task to root `.data` mod 2023-01-13 13:21:49 -05:00
Tyler Goodlet 71ca4c8e1f Use actor uid in shm keys for rt quote buffers
Allows running simultaneous data feed services on the same (linux) host
by avoiding file-name collisions: shm buffer sets are now keyed by the
given `brokerd` instance. This allows, for example, either multiple dev
versions of the data layer to run side-by-side or for the test suite to
be seamlessly run alongside a production instance.
2023-01-13 13:21:49 -05:00
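A minimal sketch of the collision-free keying idea just described; the `_make_shm_key()` helper and its field layout are hypothetical illustrations, not piker's actual implementation:

```python
# hypothetical helper: compose a shm buffer key that is unique per
# `brokerd` actor instance so two daemons on one host never collide.
def _make_shm_key(
    actor_name: str,   # eg. 'brokerd.binance'
    actor_uuid: str,   # per-process unique id
    fqsn: str,         # eg. 'btcusdt.binance'
    suffix: str,       # eg. '_rt' or '_hist'
) -> str:
    return f'{actor_name}.{actor_uuid[:8]}.{fqsn}{suffix}'


print(_make_shm_key('brokerd.binance', 'deadbeefcafe', 'btcusdt.binance', '_rt'))
```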
Tyler Goodlet 045b76bab5 Make `Flume.index_stream()` defer to new sampling api 2023-01-13 13:21:49 -05:00
Tyler Goodlet d66fb49077 Don't deliver shms from `start_backfill()`, they're not used 2023-01-13 13:21:49 -05:00
Tyler Goodlet 78c7c8524c Breakpoint when bad 1m history offsets are detected 2023-01-13 13:21:49 -05:00
Tyler Goodlet 5adb234a24 Don't receive sample-index msgs in feed layer 2023-01-13 13:21:49 -05:00
Tyler Goodlet 2778ee1401 Support not registering for sample-index msgs via `sub_for_broadcasts: bool` flag 2023-01-13 13:21:49 -05:00
Tyler Goodlet b3d1b1aa63 Port feed layer to use new `samplerd` APIs
Always use `open_sample_stream()` to register fast and slow quote feed
buffers and get a sampler stream which we use to trigger
`Sampler.broadcast_all()` calls on the service side after backfill
events.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 5ec1a72a3d Implement a `samplerd` singleton actor service
Now spawned under the `pikerd` tree as a singleton-daemon-actor we offer
a slew of new routines in support of this micro-service:

- `maybe_open_samplerd()` and `spawn_samplerd()` which provide the
  `._daemon.Services` integration to conduct service spawning.
- `open_sample_stream()` which is a client-side endpoint that does all
  the work of (lazily) starting the `samplerd` service (if it doesn't
  already exist), registers shm buffers for update, and connects
  a sample-index stream for iteration by the caller.
- `register_with_sampler()` which is the `samplerd`-side service task
  endpoint implementing all the shm buffer and index-stream registry
  details as well as logic to ensure a lone service task runs
  `Services.increment_ohlc_buffer()`; it increments at the shortest period
  registered which, for now, is the default 1s duration.

Further impl notes:
- fixes to `Services.broadcast()` to ensure broken streams get discarded
  gracefully.
- we use a `pikerd` side singleton mutex `trio.Lock()` to ensure
  one-and-only-one `samplerd` is ever spawned per `pikerd` actor tree.
2023-01-13 13:21:15 -05:00
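A toy, in-process sketch of the registration/broadcast pattern described above, using plain `trio` channels; `ToySampler` and this simplified `open_sample_stream()` are illustrative stand-ins, not piker's actual `samplerd` API:

```python
import trio


class ToySampler:
    def __init__(self) -> None:
        self.subs: set[trio.MemorySendChannel] = set()

    async def broadcast_all(self, index: int) -> None:
        # push the latest index to every registered sub
        for tx in set(self.subs):
            try:
                await tx.send({'index': index})
            except trio.BrokenResourceError:
                self.subs.discard(tx)  # drop broken streams gracefully


async def open_sample_stream(sampler: ToySampler):
    # "register" with the sampler and hand the receive side to the caller
    tx, rx = trio.open_memory_channel(8)
    sampler.subs.add(tx)
    return rx


async def main() -> None:
    sampler = ToySampler()
    rx = await open_sample_stream(sampler)
    await sampler.broadcast_all(1)  # eg. fired after a backfill event
    print(await rx.receive())


trio.run(main)
```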
Tyler Goodlet 2c76cee928 Begin formalizing `Sampler` singleton API
We're moving toward a single actor managing sampler work, distributed
independently of `brokerd` services, such that a user can run samplers
on different hosts than the real-time data feed infra. Most of the
implementation details include aggregating `.data._sampling` routines
into a new `Sampler` singleton type.

Move the following methods to class methods:
- `.increment_ohlc_buffer()` to allow a single task to increment all
  registered shm buffers.
- `.broadcast()` for IPC relay to all registered clients/shms.

Further add a new `maybe_open_global_sampler()` which allocates
a service nursery and assigns it to the `Sampler.service_nursery`; this
is prep for putting the step incrementer in a singleton service task
higher up the data-layer actor tree.
2023-01-13 13:21:15 -05:00
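A rough sketch of the class-method/singleton layout described above; the method names follow the commit text but the bodies and registry shapes are illustrative assumptions (real shm buffers and IPC streams are replaced with plain lists):

```python
import trio


class Sampler:
    # all state is class-level so every task in the actor sees one registry
    ohlcv_shms: dict[float, list] = {}   # period (s) -> registered buffers
    subscribers: dict[float, list] = {}  # period (s) -> index-stream subs

    @classmethod
    async def increment_ohlc_buffer(cls, delay_s: float) -> None:
        # a single task wakes at the shortest registered period and
        # "increments" every registered buffer (toy: append a row count)
        for _ in range(3):
            await trio.sleep(delay_s)
            for shms in cls.ohlcv_shms.values():
                for shm in shms:
                    shm.append(len(shm))
            await cls.broadcast(delay_s)

    @classmethod
    async def broadcast(cls, delay_s: float) -> None:
        # toy stand-in for an IPC relay to all registered clients
        for sub in cls.subscribers.get(delay_s, []):
            sub.append(delay_s)


async def main() -> None:
    buf: list[int] = []
    inbox: list[float] = []
    # the real default period is 1s; 0.01s keeps this demo fast
    Sampler.ohlcv_shms[0.01] = [buf]
    Sampler.subscribers[0.01] = [inbox]
    await Sampler.increment_ohlc_buffer(0.01)
    print(buf, inbox)


trio.run(main)
```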
Tyler Goodlet 3efb0b5884 Sync 1s (or less) sampler steps using rounded now-epoch 2023-01-13 13:21:15 -05:00
Tyler Goodlet 009bbe456e Always `.error()` log unknown queries for `marketstore` 2023-01-13 13:21:15 -05:00
Tyler Goodlet daf7b3f4a5 Only accept 6 tries for the same duplicate hist frame
When we see multiple history frames that are duplicates of the requested
set, stop retrying after a number of tries (6 just cuz) and return
early from the tsdb backfill loop; presume that this many duplicates
means we've hit the beginning of history. Use a `collections.Counter`
for the duplicate counts. Make sure to warn-log in such cases.
2023-01-13 13:21:15 -05:00
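A minimal sketch of the `collections.Counter` based bail-out just described, assuming a frame is keyed by its epoch range; the `backfill()` loop here is a stand-in for piker's actual tsdb backfill loop:

```python
from collections import Counter

_max_dupes = 6  # "6 just cuz"


def backfill(frames: list[tuple]) -> list[tuple]:
    dupes: Counter[tuple] = Counter()
    out: list[tuple] = []
    for frame in frames:
        dupes[frame] += 1
        if dupes[frame] > _max_dupes:
            # warn-log and presume we've hit the beginning of history
            print(f'{_max_dupes} duplicates of {frame}: bailing early')
            break
        out.append(frame)
    return out


# frames keyed by their (start, end) epoch range; the last one repeats
print(backfill([(0, 60), (60, 120)] + [(120, 180)] * 8))
```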
Tyler Goodlet b0a6dd46e4 Use recon set on stack closing during reconnect
Hopefully resolves https://github.com/pikers/piker/issues/434
2023-01-13 13:21:15 -05:00
Tyler Goodlet 1c5141f4c6 Fix f-str in duplicate frame msg print 2023-01-13 13:21:15 -05:00
Tyler Goodlet 4cdd2271b0 Drop `tractor` assert bug note 2023-01-13 13:21:15 -05:00
Tyler Goodlet 04c0d77595 Frame ticks in helper routine
Wow, turns out tick framing was totally borked since we weren't framing
on "greater than throttle period long waits" XD

This moves all the framing logic into a common func and calls it in
every case:
- every (normal) "pre throttle period expires" quote receive
- each "no new quote before throttle period expires" (slow case)
- each "no clearing tick yet received" / only burst on clears case
2023-01-13 13:21:15 -05:00
Tyler Goodlet 8e1ceca43d Add some data-flows jargon notes (re: #270) 2023-01-13 13:21:15 -05:00
Tyler Goodlet c85e7790de Rename `._flumes.py` -> `.flows.py` 2023-01-13 13:21:15 -05:00
Tyler Goodlet 2399c618b6 Expand sampler loop shm write lines 2023-01-13 13:21:15 -05:00
Tyler Goodlet 7ec88f8cac Make hist shm token optional to allow for FSPs 2023-01-13 13:21:15 -05:00
Tyler Goodlet eacd44dd65 Move `Flume` to a new `.data._flumes` module 2023-01-13 13:21:15 -05:00
Tyler Goodlet e5e70a6011 Extend `Flume` methods
Add some (untested) data slicing util methods for mapping time ranges to
source data indices:
- `.get_index()` which maps a single input epoch time to an equiv array
  (int) index.
- add `slice_from_time()` which returns a view of the shm data from an
  input epoch range presuming the underlying struct array contains
  a `'time'` field with epoch stamps.
- `.view_data()` which slices out the "in view" data according to the
  current state of the passed in `pg.PlotItem`'s view box.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 1ee49df31d Ensure a rt shm buffer without backfill has correct epoch timestamping 2023-01-13 13:21:15 -05:00
Tyler Goodlet f2df32a673 Use throttle period for wait-on-clearing-event timeout 2023-01-13 13:21:15 -05:00
Tyler Goodlet 125e31dbf3 Implement by-type tick-framing in throttler loop
This has been an outstanding idea for a while and changes the framing
format of tick events into a `dict[str, list[dict]]` wherein for each
tick "type" (eg. 'bid', 'ask', 'trade', 'asize'..etc) we create a FIFO
ordered `list` of events (data) and then pack this table into each
(throttled) send. This gives an additional implied downsample reduction
(in terms of iteration on the consumer side) from `N` tick-events to
(at most) `T` tick-types, presuming the rx side only needs the latest
tick event.

Drop the `types: set` and adjust clearing event test to use the new
`ticks_by_type` map's keys.
2023-01-13 13:21:15 -05:00
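A small sketch of the `dict[str, list[dict]]` framing format described above; `frame_ticks()` here is a simplified stand-in for the helper factored out in the earlier "Frame ticks in helper routine" commit:

```python
from collections import defaultdict


def frame_ticks(ticks: list[dict]) -> dict[str, list[dict]]:
    # group ticks accumulated during a throttle period by their 'type'
    # key, preserving FIFO order within each type.
    by_type: dict[str, list[dict]] = defaultdict(list)
    for tick in ticks:
        by_type[tick['type']].append(tick)
    return dict(by_type)


quote = frame_ticks([
    {'type': 'bid', 'price': 99.9, 'size': 10},
    {'type': 'trade', 'price': 100.0, 'size': 1},
    {'type': 'bid', 'price': 99.8, 'size': 12},
])
# a consumer that only needs the latest tick per type just reads `[-1]`
print(quote['bid'][-1])
```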
Tyler Goodlet 715e693564 Improved clearing-tick-burst-oriented throttling
Instead of uniformly distributing the msg send rate for a given
aggregate subscription, choose to be more bursty around clearing ticks
so as to avoid saturating the consumer with L1 book updates while still
delivering real trade data as-fast-as-possible.

Presuming the consumer is in the "UI land of slow" (eg. modern display
frame rates) such an approach is more useful for seeing "material
changes" in the market as-bursty-as-possible (i.e. more short lived fast
changes in last clearing price vs. many slower changes in the bid-ask
spread queues). Such an approach also lends better to multi-feed
overlays which in aggregate tend to scale linearly with the number of
feeds/overlays; centralization of bursty arrival rates allows for
a higher overall throttle rate if used cleverly with framing.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 4300470786 Fix for empty tsdb query result case 2023-01-13 13:21:15 -05:00
Tyler Goodlet cf6e44cb9c Add `NoBsWs.connected()` predicate 2023-01-13 12:39:17 -05:00
Tyler Goodlet 2a158aea2c Rework `_FeedsBus` subscriptions mgmt using `set`
Allows using `set` ops for subscription management and guarantees no
duplicates per `brokerd` actor. New API is simpler for dynamic
pause/resume changes per `Feed`:
- `_FeedsBus.add_subs()`, `.get_subs()`, `.remove_subs()` all accept multi-sub
  `set` inputs.
- `Feed.pause()` / `.resume()` encapsulates management of *only* sending
  a msg on each unique underlying IPC msg stream.

Use new api in sampler task.
2023-01-10 11:09:19 -05:00
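A rough sketch of the `set`-based subscription API described above; the real `_FeedsBus` tracks IPC stream/channel tuples, this toy just stores hashable sub tokens:

```python
class FeedsBus:
    def __init__(self) -> None:
        self._subscriptions: dict[str, set] = {}

    def add_subs(self, key: str, subs: set) -> set:
        # set ops guarantee no duplicate subs per key (fqsn)
        existing = self._subscriptions.setdefault(key, set())
        existing |= subs
        return existing

    def get_subs(self, key: str) -> set:
        return self._subscriptions.get(key, set())

    def remove_subs(self, key: str, subs: set) -> set:
        existing = self._subscriptions.setdefault(key, set())
        existing -= subs
        return existing


bus = FeedsBus()
bus.add_subs('btcusdt', {'stream-a', 'stream-b'})
bus.remove_subs('btcusdt', {'stream-a'})
print(bus.get_subs('btcusdt'))
```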
Tyler Goodlet 88870fdda7 Set `brokers: list[str]` from mods when not provided.. 2023-01-10 11:09:19 -05:00
Tyler Goodlet 326f153a47 Catch overruns on throttled feed subs too
Previously we would only detect overruns and drop subscriptions on
non-throttled feed subs, however you can get the same issue with
a wrapping throttler task:
- the intermediate mem chan can be blocked either by the throttler task
  being too slow, in which case we still want to warn about it
- the stream's IPC channel actually breaks and we still want to drop
  the connection and subscription so it doesn't become a source of
  stale backpressure.
2023-01-10 11:09:19 -05:00
Tyler Goodlet f5cd63ad35 Ensure correct stream is set on each `Flume`
Set each quote-stream by matching the provider for each `Flume`, which
results in some flumes mapping to the same (multiplexed) stream.
Monkey-patch the equivalent `tractor.MsgStream._ctx: tractor.Context` on
each broadcast-receiver subscription to allow use by feed bus methods as
well as other internals which need to reference IPC channel/portal info.

Start a `_FeedsBus` subscription management API:
- add `.get_subs()` which returns the list of tuples registered for the
  given key (normally the fqsn).
- add `.remove_sub()` which allows removing by key and tuple value and
  provides encapsulation for sampler task(s) which deal with dropped
  connections/subscribers.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 1e96ca32df Move `maybe_open_feed()` above for readability 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7b9db86753 Multi-`broker` quotes with `Feed.open_multi_stream()`
Adds provider-list-filtered (quote) stream multiplexing support allowing
for merged real-time `tractor.MsgStream`s using an `@acm` interface.
Behind the scenes we are just doing a classic multi-task push to common
mem chan approach.

Details to make it work on `Feed`:
- add `Feed.mods: dict[str, ModuleType]` and
  `Feed.portals: dict[ModuleType, tractor.Portal]` which are both populated
  during init in `open_feed()`
- drop `Feed.portal` and `Feed.name`

Also fix a final lingering tsdb history loading loop termination bug.
2023-01-10 11:09:19 -05:00
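A toy version of the "multi-task push to a common mem chan" pattern behind `Feed.open_multi_stream()`; sources here are plain async generators instead of `tractor.MsgStream`s and the function shape is an assumption:

```python
from contextlib import asynccontextmanager
import trio


@asynccontextmanager
async def open_multi_stream(sources):
    tx, rx = trio.open_memory_channel(0)

    async def pump(source, tx):
        # one pusher task per source stream, all feeding one channel
        async with tx:
            async for msg in source:
                await tx.send(msg)

    async with trio.open_nursery() as n:
        for source in sources:
            n.start_soon(pump, source, tx.clone())
        tx.close()  # only the clones keep the channel open
        yield rx
        n.cancel_scope.cancel()


async def quotes(provider: str):
    for i in range(2):
        yield {'provider': provider, 'tick': i}


async def main():
    async with open_multi_stream([quotes('binance'), quotes('kraken')]) as rx:
        async for msg in rx:
            print(msg)


trio.run(main)
```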
Tyler Goodlet 20a396270e `Storage.read_ohlcv()` now returns a `numpy` array 2023-01-10 11:09:19 -05:00
Tyler Goodlet 81516c5204 Finally fix tsdb -> shm backfill loading
A slight facepalm but, the main issue was a simple indexing logic error:
we need to slice with `tsdb_history[-shm._first.value:]` to push the most
recent history, not the oldest.. This allows cleanup of the tsdb backfill
loop as well.

Further, greatly simplify `diff_history()` time slicing by using the
classic `numpy` conditional slice on the epoch field.
2023-01-10 11:09:19 -05:00
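A sketch of the two slicing fixes just described, using a plain structured `numpy` array as a stand-in for the tsdb result and the shm buffer; the field names and the `first` value are illustrative:

```python
import numpy as np

ohlc_dtype = [('time', 'f8'), ('close', 'f8')]
tsdb_history = np.array(
    [(t, 100.0 + t) for t in range(0, 600, 60)], dtype=ohlc_dtype,
)

# push only the *most recent* rows that fit in the free space at the
# front of the shm buffer (`first` mimics `shm._first.value`):
first = 4
to_push = tsdb_history[-first:]

# `diff_history()`-style conditional slice on the epoch field: keep
# only rows strictly newer than the last datum already in the db.
last_tsdb_dt = 240.0
frame = tsdb_history
new_rows = frame[frame['time'] > last_tsdb_dt]

print(to_push['time'], new_rows['time'])
```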
Tyler Goodlet d6fb6fe3ae Just drop the pretty repr from our struct for now 2023-01-10 11:09:19 -05:00
Tyler Goodlet 8476d8d056 Fix partial-frame-missing backfill logic
This had a bug prior where the end of a frame (a partial) wasn't being
sliced correctly and we'd get odd gaps showing up between the data
backfilled from `brokerd` and the tsdb end index. Repair this by doing
timeframe-aware index
diffing in `diff_history()` which seems to resolve it. Also, use the
frame-result's `end_dt: datetime` for the loop exit condition.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 29b6b3e54f Port `storesh` cli-cmd machinery to `Flume` apis 2023-01-10 11:09:19 -05:00
Tyler Goodlet 8a01c9e42b Fix broker-tail stripping using `str.removesuffix()` 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7daab6329d Make `Symbol` derive from internal `.types.Struct` 2023-01-10 11:09:19 -05:00
Tyler Goodlet bb6452b969 Further feed syncing fixes wrt `Flume`s
Sync per-symbol sampler loop start to subscription registers such that
the loop can't start until the consumer's stream subscription is added;
the task-sync uses a `trio.Event`. This patch also drops a ton of
commented cruft.

Further adjustments needed to get parity with prior functionality:
- pass init msg 'symbol_info' field to the `Symbol.broker_info: dict`.
- ensure the `_FeedsBus._subscriptions` table uses the broker-specific
  symbol (without the brokername suffix) as keys for lookup so that the
  sampler loop doesn't have to append in the brokername as a suffix.
- ensure the `open_feed_bus()` flumes-table-msg sent by
  `tractor.Context.started()` uses the `.to_msg()` form of all flume
  structs.
- ensure `maybe_open_feed()` uses `tractor.MsgStream.subscribe()` on all
  `Flume.stream`s on cache hits using the
  `tractor.trionics.gather_contexts()` helper.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 25bfe6f035 Use new |-union style type annots in sampling routines 2023-01-10 11:09:19 -05:00
Tyler Goodlet e7de5404d3 Add `Symbol.fqsn: str` property 2023-01-10 11:09:19 -05:00
Tyler Goodlet 18dc8b08e4 First draft aggregate feedz support
Orient shm-flow-arrays around the new idea of a `Flume` which provides
access, mgmt and basic measure of real-time data flow sets (see water
flow management semantics).

- We discard the previous idea of a "init message" which contained all
  the shm attachment info and instead send a startup message full of
  `Flume.to_msg()`s which are symmetrically loaded on the caller actor
  side.

- Create data-flows "entries" for every passed in fqsn such that the consumer gets back
  streams and shm for each, now all wrapped in `Flume` types. For now we
  allocate `brokermod.stream_quotes()` tasks 1-to-1 for each fqsn
  (instead of expecting each backend to do multi-plexing, though we
  might want that eventually) as well as a `_FeedsBus._subscriber` entry
  for each. The pause/resume management loop is adjusted to match.
  Previously `Feed`s were allocated 1-to-1 with each fqsn.

- Make `Feed` a `Struct` subtype instead of a `@dataclass` and move all
  flow specific attrs to the new `Flume`:
  - move `.index_stream()`, `.get_ds_info()` to `Flume`.
  - drop `.receive()`: each fqsn entry will now require knowledge of
    separate streams by feed users.
  - add multi-fqsn tables: `.flumes`, `.streams` which point to the
    appropriate per-symbol entries.

- Async load all `Flume`s from all contexts and all quote streams using
  `tractor.trionics.gather_contexts()` on the client `open_feed()` side.

- Update feeds test to include streaming 2 symbols on the same (binance)
  backend.
2023-01-10 11:09:18 -05:00
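A very rough sketch of the `Flume` idea: a per-fqsn bundle of stream/shm handles that can round-trip over IPC via `.to_msg()`. The field names below are guesses based on the commit text, not piker's actual schema:

```python
from msgspec import Struct


class Flume(Struct):
    symbol: str                  # fully qualified symbol name (fqsn)
    first_quote: dict
    izero_hist: int = 0
    izero_rt: int = 0

    def to_msg(self) -> dict:
        # only wire-safe fields cross IPC; shm/stream handles get
        # re-attached symmetrically on the receiving actor side.
        return {
            'symbol': self.symbol,
            'first_quote': self.first_quote,
            'izero_hist': self.izero_hist,
            'izero_rt': self.izero_rt,
        }


flume = Flume(symbol='btcusdt.binance', first_quote={'last': 100.0})
print(flume.to_msg())
```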
Tyler Goodlet 344a634cb6 Always set fqsn in `Feed.symbols: dict` 2023-01-10 11:09:18 -05:00
Guillermo Rodriguez 0474d66531
Switch msgspec struct ordering to always have required fields first and optionals last 2023-01-09 18:43:50 -03:00
algorandpa f218b804b4
Merge pull request #433 from pikers/add_config_dir_on_daemon_startup
Add config dir on daemon startup
2022-12-22 19:40:47 +00:00
Esmeralda Gallardo 18e4352faf
Deleted unused timeout logic 2022-12-19 14:55:06 -03:00
Esmeralda Gallardo a6e921548b
Modified recv_task(): added functionality to restart the ws after a timeout, and extended the `match msg` block with a new case for receiving an error. 2022-12-19 13:48:18 -03:00
Esmeralda Gallardo 3f5dec82ed
Replaced the try/except block in recv_task() with a `match msg` block, and updated the description comment 2022-12-19 13:48:17 -03:00
Esmeralda Gallardo db0b59abaa
Added support for JSONRPC requests coming from the server side 2022-12-19 13:48:10 -03:00
algorandpa db11c3c0f8 add config dir on pikerd startup 2022-12-17 21:51:49 +00:00
Tyler Goodlet de93da202b Reconnect on ping-pong errors too i guess? 2022-12-10 16:05:36 -05:00
Tyler Goodlet 490d85aba5 Drop fast chart buffer to 2 days worth 2022-11-10 11:45:49 -05:00
Tyler Goodlet d46945cb09 Move profiler imports to internal version 2022-10-31 09:26:36 -04:00
Tyler Goodlet df16726211 Just wipe wrong timeframe filled tsdb colseries for now 2022-10-28 16:17:14 -04:00
Tyler Goodlet fb4f1732b6 Drop key error again 2022-10-28 16:17:14 -04:00
Tyler Goodlet 610fb5f7c6 Drop `NoData` handler, just let it bubble 2022-10-28 16:17:14 -04:00
Tyler Goodlet 2b231ba631 Lul, fix timeframe key when writing history
There never was any underlying db bug, it was a hardcoded timeframe in
the column series write key.. Now we always assert a matching timeframe
in results.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 286228c290 Only wait on backfill if provider supports timeframe 2022-10-28 16:17:14 -04:00
Tyler Goodlet dc1edeecda Do tsdb backloading to shm concurrently
Not only improves startup latency but also avoids a bug where the rt
buffer was being tsdb-history prepended *before* the backfilling of
recent data from the backend was complete, resulting in out-of-order
frames in shm.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 0000d9a314 Handle backends with no 1s OHLC history
If a history manager raises a `DataUnavailable` just assume the sample
rate isn't supported and that no shm prepends will be done. Further seed
the shm array in such cases as before from the 1m history's last datum.

Also, fix tsdb -> shm back-loading, cancelling tsdb queries when either
no array-data is returned or a frame is delivered which has a start time
no less than the least last retrieved. Use strict timeframes for every
`Storage` API call.
2022-10-28 16:17:14 -04:00
Tyler Goodlet b7ef0596b9 Drop remaining timeframe scanning from `.read_ohlcv()` 2022-10-28 16:17:14 -04:00
Tyler Goodlet 143e86a80c Handle super annoying mkts query bug..
Turns out querying for a high freq timeframe (like 1sec) will still
return a lower freq timeframe (like 1Min) SMH, and no idea if it's the
server or the client's fault, so we have to explicitly check the sample
step size and discard lower freq series-results. Do this inside
`Storage.read_ohlcv()` and return an empty `dict` when the wrong time
step is detected from the query result.

Further enforcements,
- both `.load()` and `read_ohlcv()` now require an explicit `timeframe:
  int` input to guarantee the time step of the output array.
- drop all calls to `.load()` with non-timeframe-specific input.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 956c7d3435 Add concurrent multi-time-frame history loading
Our default sample periods are 60s (1m) for the history chart and 1s for
the fast chart. This patch adds concurrent loading of both (or more)
different sample period data sets using the existing loading code but
with new support for looping through a passed "timeframe" table which
points to each shm instance.

More detailed adjustments include:
- breaking the "basic" and tsdb loading into 2 new funcs:
  `basic_backfill()` and `tsdb_backfill()` the latter of which is run
  when the tsdb daemon is discovered.
- adjust the fast shm buffer to offset with one day's worth of 1s so
  that only up to a day is backfilled as history in the fast chart.
- adjust bus task starting in `manage_history()` to deliver back the
  offset indices for both fast and slow shms and set them on the
  `Feed` object as `.izero_hist/rt: int` values:
  - allows the chart-UI linked view region handlers to use the offsets
    in the view-linking-transform math to index-align the history and
    fast chart.
2022-10-28 16:17:14 -04:00
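A toy sketch of the "loop over a timeframe table" approach described above: one backfill task per sample period run concurrently under a single nursery; `backfill()` here stands in for the real `basic_backfill()`/`tsdb_backfill()` routines:

```python
import trio


async def backfill(timeframe: int, shm: list) -> None:
    # stand-in for backend/tsdb history loading into the shm buffer
    await trio.sleep(0)
    shm.extend(range(3))  # pretend we pushed 3 frames of history


async def main() -> None:
    timeframes: dict[int, list] = {
        60: [],  # history chart buffer (1m)
        1: [],   # fast chart buffer (1s)
    }
    async with trio.open_nursery() as n:
        for tf, shm in timeframes.items():
            n.start_soon(backfill, tf, shm)
    print(timeframes)


trio.run(main)
```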
Tyler Goodlet 23d0353934 Drop duplicate frame request
Must have gotten left in during refactor from the `trimeter` version?
Drop down to 6 years for 1m sampling.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 61ca5f7e19 Drop `trimeter`-ized concurrent history querying
It doesn't seem to be any slower on our least throttled backend
(binance) and it removes a bunch of hard to get correct frame
re-ordering logic that i'm not sure really ever fully worked XD

Commented some issues we still need to resolve as well.
2022-10-28 16:17:13 -04:00
Tyler Goodlet e7ec01b8e6 Pass in default history time of 1 min
Adjust all history query machinery to pass a `timeframe: int` in seconds
and set default of 60 (aka 1m) such that history views from here forward
will be 1m sampled OHLCV. Further, when the tsdb is detected as up, load
a full 10 years of data if possible on the 1m - backends will eventually
get a config section (`brokers.toml`) that allows users to tune this.
2022-10-28 16:17:13 -04:00
Tyler Goodlet bf7d5e9a71 Make `marketstore` storage api timeframe aware
The `Store.load()`, `.read_ohlcv()`, `.write_ohlcv()` and
`.delete_ts()` methods can now take a `timeframe: Optional[float]` param
which is used to look up the appropriate sampling period table-key from
`marketstore`.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 26d6e10ad7 Parameterize duration, pprint msg 2022-10-07 14:13:52 -04:00
Tyler Goodlet bcd6bbb7ca Increase the `brokerd` mem-chan size
Intention is to hopefully minimize (as many as possible) context switches
when processing (near-)HFT feeds - tho not sure if it's improving things
that much XD
2022-09-12 20:25:15 -04:00
Tyler Goodlet 2ef6460853 Add `Feed.get_ds_info()` to detect/compute sample rates 2022-09-12 20:25:15 -04:00
Tyler Goodlet 6e574835c8 Update history shm buffer in ohlc sampler loop 2022-09-12 20:25:15 -04:00
Tyler Goodlet 49ccfdd673 Pass history shm "last index" in init msg, assign on feed 2022-09-12 20:25:15 -04:00
Tyler Goodlet 861fe791eb Allocate 2 shm buffers for history and real-time
As part of supporting a "history view" chart which shows downsampled
datums alongside our 1s (or higher) sampled OHLC we need a separate
buffer to store the slower history from broker backends. This begins
that design by allocating 2 buffers:
- `rt_shm: ShmArray` which maps to a `/dev/shm/` file with `_rt` suffix
- `hist_shm: ShmArray` which maps to a file with `_hist` suffix

Deliver both of these shms back from both `manage_history()` and load
them as `Feed.rt_shm`/`.hist_shm` on the client side.

Impl deats:
- init the rt buffer with the first datum from loaded history and
  assign all OHLC values to that row's 'close' and the vlm to 0.
- pass the hist buffer to the backfiller task
- only spawn **one** global sampler array-row increment task per
  `brokerd` and pass in the 1s delay which we presume is our lowest
  OHLC sample rate for now.
- drop `open_sample_step_stream()` and just move its body contents into
  `Feed.index_stream()`
2022-09-12 20:25:15 -04:00
Tyler Goodlet 60052ff73a Presume shortest delay input to `increment_ohlc_buffer()`
Instead of worrying about the increment period per shm subscription,
just use the value passed as input and presume the caller knows that
only one task is necessary and that the wakeup (sampling) period should
be the shortest that is needed.

It's very unlikely we don't want at least a 1s sampling (both in terms
of task switching cost and general usage) which will eventually ship as
the default "real-time" feed "timeframe". Further, this "fast" increment
sampling task can handle all lower sampling periods (eg. 1m, 5m, 1H)
based on the current implementation just the same.

Also, add a global default sample period as `_default_delay_s` for use in
other internal modules.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 4d2708cd42 Force 1s sample step so crypto boiz can seee 2022-09-12 20:25:15 -04:00
Tyler Goodlet d1cc52dff5 Use new public lifetime-stack class attr 2022-09-12 20:24:56 -04:00
Tyler Goodlet 4fa901dbcb Port to new `tractor._runtime` mod 2022-09-12 20:24:56 -04:00
Tyler Goodlet 7dfa4c3cde Better comment on the `size`'s purpose/units 2022-08-29 13:56:26 -04:00
Tyler Goodlet 7b653fe4f4 Store shm array size in token schema, use for loading 2022-08-29 13:46:41 -04:00
Guillermo Rodriguez 0c323fdc0b
Minor style changes and warning on unexpected msg 2022-08-27 09:12:02 -03:00
Guillermo Rodriguez 4facd161a9
Pull jsonrpc machinery out of deribit backend into piker.data._web_bs module and make it generic 2022-08-25 14:08:09 -03:00
Guillermo Rodriguez 34fb497eb4
Add aiter api to NoBsWs and rework cryptofeed relay to not be OOPy 2022-08-24 18:09:35 -03:00
Guillermo Rodriguez 92090b01b8
Begin jsonrpc over ws refactor 2022-08-24 18:06:00 -03:00
Tyler Goodlet 7379dc03af The `ps1` check doesn't work for `pdb`.. 2022-08-18 11:51:12 -04:00
Tyler Goodlet 2aec1c5f1d Only pprint our struct when we detect a py REPL 2022-08-18 11:51:12 -04:00
Tyler Goodlet a83bd9c608 Drop `msgpack` from `marketstore` module 2022-08-11 14:21:36 -04:00
Tyler Goodlet 69e501764a Drop status event processing at large
Since we figured out how to pass through ems dialog ids to the
`openOrders` sub we don't really need to do much with status updates
other than error handling. This drops `process_status()` and moves the
error handling logic into a status handler sub-block; we now just
info-log status updates for troubleshooting purposes.
2022-08-01 14:08:45 -04:00
Tyler Goodlet db564d7977 Add casting method to our struct variant 2022-07-30 17:34:40 -04:00
goodboy bf7a49c19b
Merge pull request #358 from pikers/fix_forex
Fix forex
2022-07-21 17:52:08 -04:00
Tyler Goodlet d3130ca04c Revert to hard container kill on log error 2022-07-21 17:00:36 -04:00
Tyler Goodlet 0580b204a3 A `size` field in ticks is optional 2022-07-19 09:41:37 -04:00
Tyler Goodlet 90bc9b9730 Only 4k seconds of 1s ohlc when no tsdb 2022-07-19 09:07:27 -04:00
Tyler Goodlet c26acb1fa8 Add `Struct.copy()` which does a roundtrip validate 2022-07-09 12:09:38 -04:00
Tyler Goodlet bea0111753 Add a custom `msgspec.Struct` with some humanizing 2022-07-09 12:09:38 -04:00
Tyler Goodlet c870665be0 Remove `BaseModel` use from all dataclass-like uses 2022-07-09 12:08:41 -04:00
Tyler Goodlet 4ff1090284 Use struct for shm tokens 2022-07-09 12:06:47 -04:00
Tyler Goodlet 4c7c78c815 Add a `ApplicationLogError` custom exc instead 2022-07-08 17:29:03 -04:00
Tyler Goodlet 019867b413 Fix missing container id, drop custom exception 2022-07-08 17:22:37 -04:00
Tyler Goodlet f356fb0a68 Hard kill container on both a timeout or connection error 2022-07-08 17:22:37 -04:00
Tyler Goodlet d506235a8b Drop token attr from `NoBsWs` 2022-07-03 17:07:35 -04:00
Tyler Goodlet eb2bad5138 Make our `Symbol` a `msgspec.Struct` 2022-06-28 10:07:56 -04:00
Tyler Goodlet e45cb9d08a Always cancel container on teardown 2022-06-26 13:36:29 -04:00
Tyler Goodlet 27c523ca74 Speedup: only load a "views worth" of datums on first query 2022-06-23 15:21:09 -04:00
Tyler Goodlet b8b76a32a6 Harden container cancel-and-wait supervisor loop
This should hopefully make teardown more reliable and includes better
logic to fail over to a hard kill path after a 3 second timeout waiting
for the instance to complete using the `docker-py` wait API. Also
generalize the supervisor teardown loop by allowing the container config
endpoint to return 2 msgs to expect:
- a startup message that can be read from the container's internal
  process logging that indicates it is fully up and ready.
- a teardown msg that can be polled for that indicates the container has
  gracefully terminated after a cancellation request which is passed to
  our container wrappers `.cancel()` method.

Make the marketstore config endpoint return the 2 messages we previously
had hard coded and use this new api.
2022-06-23 10:23:14 -04:00
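A hedged sketch of the "graceful cancel, then hard kill after a timeout" supervisor step described above, using the `docker-py` client; the container-name filter and the error handling are simplified assumptions:

```python
import docker


def cancel_container(cntr) -> None:
    '''Politely ask the container to stop, then hard kill it if it
    doesn't exit within the timeout.
    '''
    try:
        cntr.kill(signal='SIGINT')  # graceful, SIGINT-style shutdown first
        cntr.wait(timeout=3)        # docker-py wait API
    except Exception:
        # timeouts surface as `requests`-level errors; fail over to
        # a hard kill so teardown never hangs
        cntr.kill()


if __name__ == '__main__':
    client = docker.from_env()
    for cntr in client.containers.list(filters={'name': 'marketstored'}):
        cancel_container(cntr)
```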
Tyler Goodlet dcee0ddd55 Move/expect all marketstore configs under a `<configdir>/piker/marketstore` subdir 2022-06-23 09:48:32 -04:00
Tyler Goodlet 1c1661b783 Factor all data feed endpoints into `.ib.feed.py` 2022-06-06 19:33:12 -04:00
Tyler Goodlet 44c242a794 Fill in label with pairs from `status` value of backend init msg 2022-06-05 22:14:32 -04:00
Tyler Goodlet 55772efb34 Bleh, try avoiding the too many files bug-thing.. 2022-06-05 22:13:36 -04:00
Tyler Goodlet 80835d4e04 More detailed rt feed drop logging 2022-06-05 22:13:36 -04:00
Tyler Goodlet 363ba8f9ae Only drop throttle feeds if channel disconnects? 2022-06-05 22:13:36 -04:00
Tyler Goodlet fc24f5efd1 Iterate 1s and 1m from tsdb series 2022-06-05 22:13:36 -04:00
Tyler Goodlet a7ff47158b Pass tsdb flag when db is up XD 2022-06-05 22:13:36 -04:00
Tyler Goodlet 1b38628b09 Handle teardown race, add comment about shm subdirs 2022-06-05 22:13:36 -04:00
Tyler Goodlet bbe1ff19ef Don't kill all containers on teardown XD 2022-06-05 22:13:36 -04:00
Tyler Goodlet f5de361f49 Import directly from `tractor.trionics` 2022-06-05 22:13:35 -04:00
Tyler Goodlet 1dca7766d2 Add notes about how to do mkts "trimming"
Which is basically just "deleting" rows from a column series.
You can only use the trim command from the `.cmd` cli and only with a so
called `LocalClient` currently; it's also sketchy af and caused
a machine to hang due to mem usage..

Ideally we can patch in this functionality for use by the rpc api
and have it not hang like this XD

Pertains to https://github.com/alpacahq/marketstore/issues/264
2022-06-05 22:13:08 -04:00
Tyler Goodlet b236dc72e4 Make vlm a float; discrete is so 80s 2022-06-05 22:13:08 -04:00
Tyler Goodlet 5d26609693 Add "no-tsdb-found" history load length defaults 2022-06-05 22:13:08 -04:00
Tyler Goodlet 4f36743f64 Only update prepended graphics when actually in view 2022-06-05 22:13:08 -04:00
Tyler Goodlet af6aad4e9c If a sample stream is already ded, just warn 2022-06-05 22:13:08 -04:00
Tyler Goodlet 88eccc1e15 Fill in label with pairs from `status` value of backend init msg 2022-06-05 22:08:00 -04:00
Tyler Goodlet 6e2e2fc03f Use `pendulum` for timestamp parsing 2022-05-15 13:45:44 -04:00
Tyler Goodlet b3f9c4f93d Only assert if input array actually has a size 2022-05-10 17:59:24 -04:00
Tyler Goodlet 09431aad85 Add support for no `._first.value` update shm prepends 2022-05-10 17:59:16 -04:00
Tyler Goodlet 8219307bf5 Double up shm buffer size 2022-05-10 17:59:08 -04:00
Tyler Goodlet b910eceb3b Add `ShmArray.ustruct()`: return an unstructured array copy
We return a copy (since a view doesn't seem to work..) of the
(field filtered) shm array contents which is the same index-length as
the source data.

Further, fence off the resource tracker disable-hack into a helper
routine.
2022-05-10 17:58:57 -04:00
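A sketch of the "field filtered, unstructured copy" idea behind `ShmArray.ustruct()` using plain `numpy`; the standalone `ustruct()` helper here is illustrative and may differ from piker's actual method:

```python
import numpy as np
from numpy.lib import recfunctions as rfn

ohlc = np.array(
    [(0.0, 1.0, 2.0), (60.0, 1.1, 2.1)],
    dtype=[('time', 'f8'), ('open', 'f8'), ('close', 'f8')],
)


def ustruct(arr: np.ndarray, fields: list[str]) -> np.ndarray:
    # filter to the requested fields then return an unstructured *copy*
    # (a view isn't generally possible across non-contiguous fields)
    return rfn.structured_to_unstructured(arr[fields], copy=True)


print(ustruct(ohlc, ['open', 'close']))
```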
Tyler Goodlet 1657f51edc Manually fetch missing out-of-order history frames
It seems once in a while a frame can get missed or dropped (at least
with binance?) so in those cases, when the request erlangs is already at
max, we just manually request the missing frame and presume things will
work out XD

Further, discard out of order frames that are "from the future" that
somehow end up in the async queue once in a while? Not sure why this
happens but it seems thus far just discarding them is nbd.
2022-05-10 17:25:20 -04:00
Tyler Goodlet b1246446c2 Raise error on 'fatal' and 'error' log levels 2022-05-10 17:25:20 -04:00
Tyler Goodlet 769e803695 Write `mkts.yml` from template if one dne 2022-05-10 14:55:52 -04:00
Tyler Goodlet e196e9d1a0 Factor `marketstore` container specifics into `piker.data.marketstore` 2022-05-10 14:55:52 -04:00
Tyler Goodlet 9ddfae44d2 Parametrize and deliver (relevant) mkts config in `start_ahab()` 2022-05-10 14:55:52 -04:00
Tyler Goodlet 277ca29018 Always write missing history frames to tsdb (again) 2022-05-10 14:55:52 -04:00
Tyler Goodlet 26fddae3c0 Fix earliest frame-end not-yet-pushed check
Bleh/🤦, the ``end_dt`` in scope is not the "earliest" frame's
`end_dt` in the async response queue.. Parse the queue's latest epoch
and use **that** to compare to the last pushed datetime index..

Add more detailed logging to help debug any (un)expected datetime index
gaps.
2022-05-10 14:55:52 -04:00
Tyler Goodlet 30ddf63ec0 Handle gaps greater then a frame within a frame 2022-05-09 11:15:14 -04:00
Tyler Goodlet 8e08fb7b23 Add comment about un-reffed vars meant for use in shell 2022-05-09 11:15:14 -04:00
Tyler Goodlet fb9b6990ae Drop unneeded/commented cancel-by-msg code; roots perms wasn't the problem 2022-05-09 11:15:14 -04:00
Tyler Goodlet 1676bceee1 Don't offset the start index by a step 2022-05-09 11:15:14 -04:00
Tyler Goodlet c9a621fc2a Fix less-then-frame off by one slice, add db write toggle and disable 2022-05-09 11:15:14 -04:00
Tyler Goodlet 61e9db3229 Handle ``iter_dts()`` already exhausted edge case 2022-05-09 11:15:14 -04:00
Tyler Goodlet e4a900168d Add timeframe key to seconds map 2022-05-09 11:15:14 -04:00
Tyler Goodlet 40753ae93c Always write newly pulled frames to tsdb 2022-05-09 11:15:14 -04:00
Tyler Goodlet 969530ba19 Fix slice logic for less-then-frame tsdb overlap
When the tsdb has a last datum that is in the past less then a "frame's
worth" of sample steps we need to slice out only the data from the
latest frame that doesn't overlap; this fixes that slice logic..
Previously i dunno wth it was doing..
2022-05-09 11:15:14 -04:00
Tyler Goodlet 9b5f052597 Handle no sampler subs case on history broadcasts
When the market isn't open the feed layer won't create a subscriber
entry in the sampler broadcast loop and so if a manual call to
``broadcast()`` is made (like when trying to update a chart from
a history prepend) we need to handle that case and just broadcast
a random `-1` for now..BD
2022-05-09 11:15:14 -04:00
Tyler Goodlet b44786e5b7 Support async-batched ohlc queries in all backends
Expect each backend to deliver a `config: dict[str, Any]` which provides
concurrency controls to `trimeter`'s batch task scheduler such that
backends can define their own concurrency limits.

The dirty deats in this patch include handling history "gaps" where
a query returns a history-frame-result which spans more then the typical
frame size (in seconds). In such cases we reset the target frame index
(datetime index sequence implemented with a `pendulum.Period`) using
a generator protocol `.send()` such that the sequence can be dynamically
re-indexed starting at the new (possibly) pre-gap datetime. The new gap
logic also allows us to detect out of order frames easier and thus wait
for the next-in-order to arrive before making more requests.
2022-05-09 11:15:14 -04:00
Tyler Goodlet 7e951f17ca Support large ohlcv writes via slicing, add struct array keymap 2022-05-09 11:15:14 -04:00
Tyler Goodlet fcb85873de Terminate early on data unavailable errors 2022-05-09 11:15:14 -04:00
Tyler Goodlet 7b1c0939bd Add first-draft `trimeter` based concurrent ohlc history fetching 2022-05-09 11:15:14 -04:00
Tyler Goodlet d77cfa3587 Add back fqsn passthrough and feed opening 2022-05-09 11:15:14 -04:00
Tyler Goodlet 6ba3c15c4e Add to signal broker won't deliver more data 2022-05-09 11:15:14 -04:00
Tyler Goodlet 0061fabb56 More tolerance for "stream-ended-early" conditions in quote throttler 2022-05-09 11:15:14 -04:00
Tyler Goodlet 2f04a8c939 Drop legacy back-filling logic
Use the new `open_history_client()` endpoint/API and expect backends to
provide a history "getter" routine that can be called to load historical
data into shm even when **not** using a tsdb. Add logic for filling in
data from the tsdb once the backend has provided data up to the last
recorded in the db. Add logic for avoiding overruns of the shm buffer
with more-then-necessary queries of tsdb data.
2022-05-09 11:15:14 -04:00
Tyler Goodlet 8bf40ae299 Drop legacy backfilling, load a day's worth of data by default 2022-05-09 11:15:14 -04:00
Tyler Goodlet 0f683205f4 Add 16 fetch limit if no tsdb data found 2022-05-09 11:15:14 -04:00
Tyler Goodlet d244af69c9 Don't require a symbol to subcmd 2022-05-09 11:15:13 -04:00
Tyler Goodlet b8b95f1081 Don't open a feed, write or read ohlc in for now 2022-05-09 11:15:13 -04:00
Tyler Goodlet 3056bc3143 Don't run legacy backfill when the tsdb isn't up 2022-05-09 11:15:13 -04:00
Tyler Goodlet d3824c8c0b Start legacy backfill with partial too 2022-05-09 11:15:13 -04:00
Tyler Goodlet 727d3cc027 Unify backfilling logic into common task-routine 2022-05-09 11:15:13 -04:00
Tyler Goodlet 46c23e90db Add `Storage.load()` and `.write_ohlcv()` 2022-05-09 11:15:13 -04:00
Tyler Goodlet bcf3be1fe4 A bit hacky but, broadcast index streams on each history prepend 2022-05-09 11:15:13 -04:00
Tyler Goodlet 7d8cf3eaf8 Factor subscription broadcasting into a func 2022-05-09 11:15:13 -04:00
Tyler Goodlet a6c5902437 More reliable `marketstored` + container supervision
It turns out (i guess not so shockingly?) that `marketstore` doesn't
always teardown "gracefully" under SIGINT (seems to hang if there are
open client connections which are also in the midst of teardown?) so
this instead first tries the SIGINT and then fails over to a SIGKILL
(destroy loop) which seems to be much more reliable to ensure shutdown
without any downside - in terms of a "hard kill".

Originally i was thinking the issue was root perms related (which get
relegated solely to the `marketstored` daemon actor after spawn) but
actually it was indeed the signalling / application layer causing the
hold-up/latency on teardown. There's a bunch of lingering (now
commented) code which tried to solve this non-problem as well as a bunch
logging/prints to help decipher the root of the issue - this will all
get cleaned out shortly.
2022-05-09 11:15:13 -04:00
Tyler Goodlet 9fe5cd647a Handle non-fqsn for derivs and don't put brokername in 2022-05-09 11:15:13 -04:00
Tyler Goodlet 15630f465d Limit ohlc queries to 800k datums to avoid `purepc` size error 2022-05-09 11:15:13 -04:00
Tyler Goodlet ce3229df7d Get sync-to-marketstore-tsdb history retrieval workinnn 2022-05-09 11:15:13 -04:00
Tyler Goodlet 53ad5e6f65 Handle "fatal" level log msgs in docker super 2022-05-09 11:15:13 -04:00
Tyler Goodlet 41325ad418 Add basic tsdb history loading
If `marketstore` is detected try to only load most recent missing data
from the data provider (broker) and the rest from the tsdb and push it
all to shm for display in the UI. If the provider/broker doesn't have
the history client endpoint, just use the old one for now so we can
start to incrementally add support. Don't start the ohlc step
incrementer task until the backend signals that the feed is live.
2022-05-09 11:15:13 -04:00
Tyler Goodlet a971de2b67 Drop `ms-shell`, add `piker storesh` cmd 2022-05-09 11:15:13 -04:00
Tyler Goodlet ca48577c60 Add diffing logic to `tsdb_history_update()`
Add some basic `numpy` epoch slice logic to generate append and prepend
arrays to write to the db.

Mooar cool things,
- add a `Storage.delete_ts()` method to wipe a column series from the db
  easily.
- don't attempt to read in any OHLC series by default on client load
- add some `pyqtgraph` profiling and drop manual latency measures
- if no db series for the fqsn exists write the entire shm array
2022-05-09 11:15:13 -04:00
Tyler Goodlet 950cb03e07 Drop `pandas` to `numpy` converter 2022-05-09 11:15:13 -04:00
Tyler Goodlet 6cdd017cd6 Ensure bfqsn is lower cased for feed api consumers
Also, start tinkering with `tractor.trionics.ipython_embed()`

In an effort to get back to a usable REPL around the mkts client
this adds usage of the new `tractor` integration api as well as logic
for skipping backfilling if existing tsdb arrays are found.
2022-05-09 11:15:13 -04:00
Tyler Goodlet 6dc6d00a9b Try downsampling mkts data 2022-05-09 11:15:13 -04:00
Tyler Goodlet 565573b609 Load any symbol-matching shm array if no `marketstored` found 2022-05-09 11:15:13 -04:00
Tyler Goodlet 9138f376f7 Return all timeframe arrays if `timeframe` not passed as input 2022-05-09 11:15:13 -04:00
Tyler Goodlet 3d6d77364b Allow kill-child-proc-with-root-perms to fail silently in `tractor` reaping 2022-05-09 11:15:13 -04:00
Tyler Goodlet 8003878248 Proxy `marketstore` container log level to our own 2022-05-09 11:15:13 -04:00
Tyler Goodlet 706c8085f2 Prototype a high level `Storage` api
Starts a wrapper around the `marketstore` client to do basic ohlcv query
and retrieval and prototypes out write methods for ohlc and tick.
Try to connect to `marketstore` automatically (which will fail if not
started currently) but we will eventually first do a service query.

Further:

- get `pikerd` working with and without `--tsdb` flag.
- support spawning `brokerd` with no real-time quotes.
- bring back in "fqsn" support that was originally not
  in this history before commits factoring.
2022-05-09 11:15:13 -04:00
Tyler Goodlet cbe74d126e Doc str formatting 2022-05-09 11:15:13 -04:00
Tyler Goodlet 3dba456cf8 Add latency measures around diffs/writes to mkts 2022-05-09 11:15:13 -04:00
Tyler Goodlet 4555a1f279 Prototype out writing `1Sec` OHLCV data 2022-05-09 11:15:13 -04:00
Tyler Goodlet a2fe814857 Better doc string 2022-05-09 11:15:13 -04:00
Tyler Goodlet 8c558d05d6 Persist backing `/data/` filesystem across container runs 2022-05-09 11:15:13 -04:00
Tyler Goodlet e1bbcff8e0 Get basic OHLCV writes working with `anyio` client 2022-05-09 11:15:13 -04:00
Tyler Goodlet d9773217e9 Map the grpc port and add graceful container teardown
Not sure how I missed mapping the 5995 grpc port 🤦; done now.
Also adds graceful teardown using SIGINT with included container
logging relayed to the piker console B).
2022-05-09 11:15:13 -04:00
Tyler Goodlet 2c51ad2a0d Revive `ms-shell` sub-cmd 2022-05-09 11:15:13 -04:00
Tyler Goodlet 56fa759452 Add WIP backfiller from data feed helper 2022-05-09 11:15:13 -04:00
Tyler Goodlet 4bcc301c01 Better handle nested errors from docker client 2022-05-09 11:15:13 -04:00
Tyler Goodlet 445b82283d Add back in legacy write loop for reference 2022-05-09 11:15:13 -04:00
Tyler Goodlet 8047714101 Add back in OHLCV dtype template and client side ws streamer 2022-05-09 11:15:13 -04:00
Tyler Goodlet 970393bb85 Drop unused `Services` ref 2022-05-09 11:15:13 -04:00
Tyler Goodlet ed5bae0e11 Py3.9+ type updates 2022-05-09 11:15:13 -04:00
Tyler Goodlet 7395b56321 De-escalate sudo perms in `pikerd` once docker spawns 2022-05-09 11:15:13 -04:00
Tyler Goodlet aecc5973fa Handle the non-root perms case specifically too 2022-05-09 11:15:13 -04:00
Tyler Goodlet faa5a785cb Add explicit no-docker error and supervisor start task-func 2022-05-09 11:15:13 -04:00
Tyler Goodlet 7d2e9bff46 Type annot updates 2022-05-09 11:15:13 -04:00
Tyler Goodlet ec413541d3 Drop old client instantiate line 2022-05-09 11:15:13 -04:00
Tyler Goodlet fbd3d1e308 Add a super simple `marketstore` container supervisor 2022-05-09 11:15:13 -04:00
Tyler Goodlet aca3ca8aa6 Basic module-script for spawning `marketstore`, needs correct bind mount usage 2022-05-09 11:15:13 -04:00
Guillermo Rodriguez 943b02573d Still WIP, switch to using new marketstore client, missing streaming from marketstore 2022-05-09 11:15:13 -04:00
Guillermo Rodriguez 897a5cf2f6 Simplify and optimize tick format, similar to techtonicdb's 2022-05-09 11:15:13 -04:00
Guillermo Rodriguez 3c09bfba57 Add multi ingestor support and update to new feed API 2022-05-09 11:15:13 -04:00
Tyler Goodlet d1f45b0883 Add `ShmArray.last()` docstr 2022-04-13 00:39:15 -04:00
Tyler Goodlet 00a7f20292 Up the shm size to 10d of 1s ohlc 2022-04-13 00:39:15 -04:00
Tyler Goodlet 0178fcd26f Increase shm size to days of 1s steps 2022-04-13 00:39:15 -04:00
Tyler Goodlet 24fa1b8ff7 Support an array field map to `ShmArray.push()`, start index 3days in 2022-04-13 00:39:15 -04:00
Tyler Goodlet 4b0ca40b17 Document "fqsn" on `Symbol` method 2022-04-11 08:48:17 -04:00
Tyler Goodlet ebe2680355 Change `uncons_fqsn()` -> `unpack_fqsn()` 2022-04-11 01:01:36 -04:00
Tyler Goodlet 32e316ebff Drop nl 2022-04-10 17:33:02 -04:00
Tyler Goodlet 8df614465c Fix missing f-str prefix 2022-04-10 17:30:02 -04:00
Tyler Goodlet 81cd696ec8 Drop sampler consumers that overrun 6x 2022-04-10 17:30:02 -04:00
Tyler Goodlet a6e32e7530 Add `Symbol.tokens()` for grabbing separate strs 2022-04-10 17:30:02 -04:00
Tyler Goodlet 7bd5b42f9e Ensure we lower case the fqsn received from all backends before delivery 2022-04-10 17:30:02 -04:00
Tyler Goodlet 76f398bd9f Support no venue or suffix symbols (normally crypto$) 2022-04-10 17:30:02 -04:00
Tyler Goodlet 7f36e85815 Append broker name to symbols before quotes broadcast in sampler task 2022-04-10 17:30:02 -04:00
Tyler Goodlet 8462ea8a28 Make the data feed layer "fqsn" aware
In order to support instruments with lifetimes (aka derivatives) we
generally need special symbol annotations which detail such meta data
(such as `MNQ.GLOBEX.20220717` for daq futes). Further there is really
no reason for the public api for this feed layer to care about getting
a special "brokername" field since generally the data is coming directly
from UIs (eg. search selection) so we might as well accept a fqsn (fully
qualified symbol name) which includes the broker name; for now a suffix
like `'.ib'`. We may change this schema (soon) but this at least gets us
to a point where we expect the full name including broker/provider.

An additional detail: for certain "generic" symbol names (like for
futes) we will pull a so called "front contract" and map this to
a specific fqsn underneath, so there is a double (cached) entry for that
entry such that other consumers can use it the same way if desired.

Some other machinery changes:
- expect the `stream_quotes()` endpoint to deliver its `.started()` msg
  almost immediately since we now need it to deliver any fqsn asap (yes
  this means the ep should no longer wait on a "live" first quote and
  instead deliver what quote data it can right away).
- expect the quotes ohlc sampler task to add in the broker name before
  broadcast to remote (actor) consumers since the backend isn't (yet)
  expected to do that add in itself.
- obviously we start using all the new fqsn related `Symbol` apis
2022-04-10 17:30:02 -04:00
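An illustrative parser for the "fqsn" (fully qualified symbol name) convention described above, i.e. a symbol plus optional venue/expiry tokens and a trailing broker suffix like `'.ib'`; this is a sketch, not piker's actual `unpack_fqsn()`:

```python
def unpack_fqsn(fqsn: str) -> tuple[str, str, str]:
    # lower-case before delivery, per the feed layer convention
    tokens = fqsn.lower().split('.')
    broker = tokens[-1]               # trailing provider suffix, eg. 'ib'
    symbol = tokens[0]
    suffix = '.'.join(tokens[1:-1])   # venue / expiry, possibly empty
    return broker, symbol, suffix


print(unpack_fqsn('MNQ.GLOBEX.20220717.ib'))  # ('ib', 'mnq', 'globex.20220717')
print(unpack_fqsn('btcusdt.binance'))         # ('binance', 'btcusdt', '')
```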
Tyler Goodlet e9d64ffee8 Use fqsn in `.manage_history()`
Allocate and `.started()` return the `ShmArray` from here as well in
prep for tsdb integration.
2022-04-10 17:30:02 -04:00
Tyler Goodlet b16167b8f3 Add prelim fqsn support into our `Symbol` type 2022-04-10 17:30:02 -04:00
Tyler Goodlet 434c340cb8 Move factor helper to a classmethod 2022-04-10 17:30:02 -04:00
Tyler Goodlet 94e2103bf5 Be mega-tolerant to feed consumer disconnects 2022-04-10 17:30:02 -04:00
Tyler Goodlet cc026dfb1d Open feeds using `Portal.open_context()` 2022-04-10 17:30:02 -04:00
Tyler Goodlet 97c2a2da3e Convert `iter_ohlc_periods()` to a `@tractor.context` 2022-04-10 17:30:02 -04:00
Konstantine Tsafatinos 0c905920e2 connect to krakens openOrders websocket 2022-03-06 15:17:26 -05:00
Tyler Goodlet f7d03489d8 Drop `marketstore` loading cruft (will come later) 2022-03-01 12:39:12 -05:00
Tyler Goodlet 09079b61fc Comment task canceller method prototype 2022-03-01 12:37:31 -05:00
Tyler Goodlet c239faf4e5 Add a `._sampling.sampler` registry composite type 2022-03-01 12:36:32 -05:00
Tyler Goodlet b1cce8f9cf Adjust and add notes for python-trio/trio#2258 2022-02-28 08:30:22 -05:00
Tyler Goodlet 7a943f0e1e Always transmit index event even when no shm is registered 2022-02-28 08:29:56 -05:00
Tyler Goodlet 786ffde4e6 Use 3.9+ annots 2022-02-28 08:27:59 -05:00
Tyler Goodlet 11d4ebd0b5 Just warn on double-remove of a sub 2022-02-28 08:27:37 -05:00
Tyler Goodlet bf3b58e861 Async load data history, allow "offline" feed use
Break up real-time quote feed and history loading into 2 separate tasks
and deliver a client side `data.Feed` as soon as history is loaded
(instead of waiting for a rt quote - the previous logic). If
a symbol doesn't have history then likely the feed shouldn't be loaded
(since presumably client code will need at least "some" datums history
to do anything) and waiting on a real-time quote is dumb, since it'll
hang if the market isn't open XD. If a symbol doesn't have history we
can always write a zero/null array when we run into that case. This also
greatly speeds up feed loading when both history and quotes are available.

TL;DR summary:
- add a `_Feedsbus.start_task()` one-cancel-scope-per-task method for
  assisting with (re-)starting and stopping long running persistent
  feeds (basically a "one cancels one" style nursery API).
- add a `manage_history()` task which does all history loading (and
  eventually real-time writing) which has an independent signal and
  start it in a separate task.
- drop the "sample rate per symbol" stuff since client code doesn't really
  care when it can just inspect shm indexing/time-steps itself.
- run throttle tasks in the bus nursery thus avoiding cancelling the
  underlying sampler task on feed client disconnects.
- don't store a repeated ref to the bus nursery's cancel scope..
2022-02-28 08:26:13 -05:00
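A toy version of the "one cancels one" style `start_task()` API mentioned above: each long-running feed task gets its own `trio.CancelScope` so it can be stopped individually without tearing down the whole bus nursery; the class and function names here are illustrative:

```python
import trio


class Bus:
    def __init__(self, nursery: trio.Nursery) -> None:
        self.nursery = nursery

    async def start_task(self, target, *args) -> trio.CancelScope:
        async def _task_wrapper(task_status=trio.TASK_STATUS_IGNORED):
            # wrap the target in its own cancel scope and hand that
            # scope back to the caller for later per-task cancellation
            with trio.CancelScope() as cs:
                task_status.started(cs)
                await target(*args)

        return await self.nursery.start(_task_wrapper)


async def feed(name: str) -> None:
    while True:  # stand-in for a persistent quote feed task
        await trio.sleep(0.01)


async def main() -> None:
    async with trio.open_nursery() as n:
        bus = Bus(n)
        cs = await bus.start_task(feed, 'btcusdt')
        await trio.sleep(0.05)
        cs.cancel()  # stop just this feed task; the nursery stays up


trio.run(main)
```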
Tyler Goodlet 1d3ed6c333 Add `mk_` prefix since assignments will use `fqsn` 2022-02-28 08:23:57 -05:00
Tyler Goodlet c2a13c474c Support no realtime stream sending with feed bus 2022-02-28 08:22:40 -05:00
Tyler Goodlet e4244e96a9 Fix var name typo 2022-02-07 12:53:30 -05:00
Tyler Goodlet 2d3c685e19 Typecast np dtype description to a tuple 2022-02-07 12:53:30 -05:00
Tyler Goodlet efb743fd85 Flip to using `pydantic` for shm tokens 2022-02-07 12:53:30 -05:00
Tyler Goodlet 8118a57b9a Guard against no time field in some provider quotes 2022-02-07 12:53:30 -05:00
Tyler Goodlet 5952e7f538 Add dark vlm deduplication support via flag 2022-02-07 12:53:30 -05:00
Tyler Goodlet 95b31cbc0f Drop references to deprecated `tractor.msg.pub` 2022-01-29 12:44:45 -05:00
Tyler Goodlet 55cfe6082b Re-key ib's 'unreportable trades' (tick 48) as 2022-01-26 13:48:21 -05:00
Tyler Goodlet 9813cf4169 Add a symbol "front feed" helper 2022-01-25 08:24:55 -05:00
Tyler Goodlet b7f27f201f Add `try_read()` to shm mod 2022-01-25 08:24:55 -05:00
Tyler Goodlet 8e390278f5 Handle logging against IPC stream vs. throttled channel on overruns 2022-01-25 07:57:01 -05:00
Tyler Goodlet 50713030f8 annoying doc strings 2022-01-25 07:57:01 -05:00