Commit Graph

575 Commits (30d55fdb275373516dd2133ae6f850a6acd6d7a3)

Author SHA1 Message Date
Tyler Goodlet a8e1796a8b Comment bad x-range bp for now 2023-02-13 12:27:58 -05:00
Tyler Goodlet 5ced05aab0 Breakpoint bad (-ve or too large) x-ranges to m4
This should never really happen but when it does it appears to be a race
with writing startup pre-graphics-formatter array data where the x-range
ends up being an `x_end` epoch value minus some really small offset value
(like `-/+0.5`), or the opposite where the `x_start` is an epoch and
`x_end` is small.

This adds a warning msg and a `breakpoint()` as well as guards around the
entire downsampling code path so that, when resumed, the downsampling
cycle is just skipped and a crash is avoided.
2023-02-13 12:27:58 -05:00
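
A minimal sketch (not the actual `piker` code) of the kind of guard described above: bail out of the downsampling cycle when the computed x-range is degenerate instead of letting the m4 path crash; all names here are illustrative.

```python
import numpy as np

def maybe_downsample(x: np.ndarray, y: np.ndarray) -> tuple | None:
    # illustrative guard: skip the ds cycle on a -ve (or zero) x-range
    x_range = x[-1] - x[0]
    if x_range <= 0:
        print(f'WARNING: bad x-range {x[0]} -> {x[-1]}, skipping downsample')
        return None  # skip this cycle instead of crashing in the m4 path

    # ... the normal m4 downsample path would run here ...
    return x, y
```
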
Tyler Goodlet 7afc9301ac Handle last-in-view time slicing edge case
Whenever the last datum is in view `slice_from_time()` needs to always
spec the final array index (i.e. the len - 1 value we set as
`read_i_max`) to avoid a uniform-step arithmetic error where gaps in the
underlying time series cause an index that's too low to be returned.
2023-02-13 12:27:58 -05:00
Tyler Goodlet 12c6d58c2a Drop bp blocks from formatters mod 2023-02-13 12:27:58 -05:00
Tyler Goodlet 63f0567418 Drop `Flume.index_stream()`, `._sampling.open_sample_stream()` replaces it 2023-02-13 12:27:58 -05:00
Tyler Goodlet 6a0c36922e Drop `._index_step` from formatters and instead defer to `Viz.index_step()` 2023-02-12 13:55:26 -05:00
Tyler Goodlet fc17187ff4 Drop edge case from `slice_from_time()`
Doesn't seem like we really need to handle the situation where the start
or stop input time stamps are outside the index range of the data since
the new binary search handling via `numpy.searchsorted()` covers this
case at minimal runtime cost and with an equally correct output. Allows
us to drop some other indexing endpoint internal variables as well.
2023-02-12 13:55:26 -05:00
Tyler Goodlet a7d78a3f40 Use left-style index search on RHS scan as well 2023-02-12 13:55:26 -05:00
Tyler Goodlet cdec4782f0 Add commented append slice-len sanity check 2023-02-12 13:55:26 -05:00
Tyler Goodlet ed1f64cf43 Fix gap detection on RHS; always bin-search on overshot time range 2023-02-12 13:55:26 -05:00
Tyler Goodlet 50ef4efccb Align step curves the same as OHLC bars 2023-02-12 13:55:26 -05:00
Tyler Goodlet 51f2461e8b Add `IncrementalFormatter.x_offset: np.ndarray`
Define the x-domain coords "offset" (determining the curve graphics
per-datum placement) for each formatter such that there's only one place
to change it when needed. Obviously each graphics type has its own
dimensionality and this is reflected by the array shapes on each
subtype.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 444768d30f Adjust OHLC bar x-offsets to be time span matched
Previously we were drawing with the middle of the bar on each index with
arms to either side: +/- some arm length. Instead this changes so that
each bar is drawn *after* each index/timestamp such that in graphics
coords the bar span more correctly matches the time span in the
x-domain. This makes the linked region between slow and fast chart
directly match (without any transform) for epoch-time indexing such that
the last x-coord in view on the fast chart is no more than the
next time step in the (downsampled) slow view.

Deats:
- adjust in `._pathops.path_arrays_from_ohlc()` and take a `bar_w` bar
  width input (normally taken from the data step size).
- change `.ui._ohlc.bar_from_ohlc_row()` and
  `BarItems.draw_last_datum()` to match.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 24b384f3ef Set `path_arrays_from_ohlc(use_time_index=True)` on epoch indexing
Allows easily switching between normal array `int` indexing and time
indexing by just flipping the `Viz._index_field: str`.

Also, guard all the x-data audit breakpoints with a time indexing
condition.
2023-02-12 13:55:26 -05:00
Tyler Goodlet 93330954c2 Ugh, use `bool` flag to determine index field.. 2023-02-12 13:55:26 -05:00
Tyler Goodlet 3019c35e30 Move `Viz` layer to new `.ui` mod 2023-02-12 13:41:18 -05:00
Tyler Goodlet 3638ae8d3e Drop unused `read_src_from_key: bool` to `.format_to_1d()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet 0663880a6d Fix formatter xy ndarray first prepend case
First allocation vs. first "prepend" of source data to an xy `ndarray`
format **must be mutually exclusive** in order to avoid a double prepend.

Previously when both blocks were executed we'd end up with
a `.xy_nd_start` that was decremented (at least) twice as much as it
should be on the first `.format_to_1d()` call which is obviously
incorrect (and causes problems for m4 downsampling as discussed below).
Further, since the underlying `ShmArray` buffer indexing is managed
(i.e. write-updated) completely independently from the incremental
formatter updates and internal xy indexing, we can't use
`ShmArray._first.value` and instead need to use the particular `.diff()`
output's prepend length value to decrement the `.xy_nd_start` on updates
after initial alloc.

Problems this resolves with m4:
- m4 uses a x-domain diff to calculate the number of "frames" to
  downsample to, this is normally based on the ratio of pixel columns on
  screen vs. the size of the input xy data.
- previously using an int-index (not epoch time) the max diff between
  first and last index would be the size of the input buffer and thus
  would never cause a large mem allocation issue (though it may have
  been inefficient in terms of needed size).
- with an epoch time index this max diff could explode if you had some
  near-now epoch time stamp **minus** an x-allocation value: generally
  some value in `[-0.5, 0.5]` which would result in a massive frame
  count and thus internal `np.ndarray()` allocation causing either
  a crash in `numba` code or actual system mem over-allocation.

Further, put in some more x value checks that trigger breakpoints if we
detect values that caused this issue - we'll remove em after this has
been tested enough.
2023-02-12 13:41:18 -05:00
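
The allocation blow-up described above can be illustrated with a toy frame-count calculation; `m4_frame_count()` and the `uppx` value are hypothetical stand-ins, not the real downsampler API.

```python
# Illustrative sketch (not the real m4 code): the downsampler allocates
# one output "frame" (bin) per `uppx` (x-units-per-pixel) slice of the
# input x-range, so a corrupted range explodes the allocation size.
def m4_frame_count(x_start: float, x_end: float, uppx: float) -> int:
    return int((x_end - x_start) // uppx)

uppx = 60.0  # eg. 60 seconds of data per pixel column on the slow chart

# sane epoch range: one day in view -> 1440 frames
print(m4_frame_count(1_676_200_000, 1_676_286_400, uppx))

# corrupted range: `x_start` ends up being a tiny offset value (~0.5)
# instead of an epoch stamp -> ~28 million frames -> huge ndarray alloc
print(m4_frame_count(0.5, 1_676_286_400, uppx))
```
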
Tyler Goodlet 3bed142d15 Handle time-indexing for fill arrows
Call into a reworked `Flume.get_index()` for both the slow and fast
chart and do time index clipping to last datum where necessary.
2023-02-12 13:41:18 -05:00
Tyler Goodlet 7aef31701b Add some commented debug prints for default fmtr 2023-02-12 13:41:18 -05:00
Tyler Goodlet 135627e142 Slice to an extra index around each timestamp input 2023-02-12 13:41:18 -05:00
Tyler Goodlet 44f50e3d0e Implement `stop_t` gap adjustments; the good lord said it is the problem 2023-02-12 13:41:18 -05:00
Tyler Goodlet 5ab4e5493e Add gap detection for `stop_t`, though only report atm 2023-02-12 13:41:18 -05:00
Tyler Goodlet 98438e29ef Drop `Flume.view_data()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet d649a7d1fa Drop old breakpoint 2023-02-12 13:41:18 -05:00
Tyler Goodlet 2669ced629 Drop `_slice_from_time()` 2023-02-12 13:41:18 -05:00
Tyler Goodlet f2c0987a04 Use uniform step arithmetic in `slice_from_time()`
If we presume that time indexing uses a uniform step we can calculate
the exact index (using `//`) for the input time presuming the data
set has zero gaps. This gives a massive speedup over `numpy` fancy
indexing and (naive) `numba` iteration. Further in the case where time
gaps are detected, we can use `numpy.searchsorted()` to binary search
for the nearest expected index at lower latency.

Deatz,
- comment-disable the call to the naive `numba` scan impl.
- add an optional `step: int` input (calced if not provided).
- add todos for caching binary search results in the gap detection
  cases.
- drop returning the "absolute buffer indexing" slice since the caller
  can always just use the read-relative slice to acquire it.
2023-02-12 13:41:18 -05:00
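
A minimal sketch of the idea described above, assuming a struct array with an epoch `'time'` field; this is not the actual `slice_from_time()` implementation, just the uniform-step-with-`searchsorted`-fallback approach.

```python
import numpy as np

def slice_from_time(
    arr: np.ndarray,              # struct array with an epoch 'time' field
    start_t: float,
    stop_t: float,
    step: float | None = None,    # calced from the data if not provided
) -> slice:
    times = arr['time']
    if step is None:
        step = times[-1] - times[-2]

    t0 = times[0]
    # uniform-step guess: exact integer index math, no fancy indexing
    i_start = int((start_t - t0) // step)
    i_stop = int((stop_t - t0) // step) + 1

    # clamp to the valid read range
    i_start = min(max(i_start, 0), len(times) - 1)
    i_stop = min(max(i_stop, 1), len(times))

    # gap detection: if the stamp at the guessed index isn't what a
    # gapless series would have, binary search for the nearest index.
    if times[i_start] != t0 + i_start * step:
        i_start = int(np.searchsorted(times, start_t, side='left'))

    return slice(i_start, i_stop)

dtype = [('time', 'f8'), ('close', 'f8')]
data = np.array([(t, 0.0) for t in range(100)], dtype=dtype)
print(slice_from_time(data, 10, 20))   # -> slice(10, 21)
```
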
Tyler Goodlet 0bdb7261d1 Flip over to epoch-time based x-domain indexing 2023-02-12 13:41:17 -05:00
Tyler Goodlet 12857a258b Adjust all `slice_from_time()` calls to not expect mask 2023-02-12 13:41:17 -05:00
Tyler Goodlet 46808fbb89 Rewrite `slice_from_time()` using `numba`
Gives approx a 3-4x speedup using plain old iterate-with-for-loop style
though still not really happy with this .5 to 1 ms latency..

Move the core `@njit` part to a `_slice_from_time()` wrapped by a pure
python func with the orig name. Also, drop the output `mask` array since
we can generally just use the slices in the caller to accomplish the
same input array slicing, duh..
2023-02-12 13:41:17 -05:00
Tyler Goodlet a3844f9922 Use step size to determine bar gaps 2023-02-12 13:41:17 -05:00
Tyler Goodlet a33f58a61a Move `Flume.slice_from_time()` to `.data._pathops` mod func 2023-02-12 13:41:17 -05:00
Tyler Goodlet d5844ce8ff Delegate formatter `.index_field` to the parent `Viz` 2023-02-12 13:41:17 -05:00
Tyler Goodlet bf88b40a50 Facepalm**2: fix array-read-slice, like actually..
We need to subtract the first index in the array segment read, not the
first index value in the time-sliced output, to get the correct offset
into the non-absolute (`ShmArray.array` read) array..

Further we **do** need the `&` between the advanced indexing conditions
and this adds profiling to see that it is indeed real slow (like 20ms
ish even when using `np.where()`).
2023-02-12 13:41:17 -05:00
Tyler Goodlet e4a0d4ecea Markup OHLC->path gen with `numba` issue # 2023-02-12 13:41:17 -05:00
Tyler Goodlet 031d7967de Facepalm: actually return latest index on time slice fail.. 2023-02-12 13:41:17 -05:00
Tyler Goodlet 2e67e98b4d Go with explicit `.data._m4` mod name
Since it's a notable and self-contained graphics compression algo, might
as well give it a dedicated module B)
2023-02-12 13:41:17 -05:00
Tyler Goodlet 7124a131dd Move (unused) path gen routines to `.ui._pathops` 2023-02-12 13:41:17 -05:00
Tyler Goodlet 9052ed5ddf Move qpath-ops routines back to separate mod 2023-02-12 13:41:17 -05:00
Tyler Goodlet 7ec21c7f3b Rename `.ui._pathops.py` -> `.ui._formatters.py` 2023-02-12 13:41:17 -05:00
Tyler Goodlet 382a619a03 Fix from-time index slicing?
Apparently we want an `|` for the advanced indexing logic?
Also, fix `read_slc` start to not always be 0 XD
2023-02-12 13:41:17 -05:00
Tyler Goodlet 7f3f6f871a Move path ops routines to top of mod
Planning to put the formatters into a new mod and aggregate all path
gen/op helpers into this module.

Further tweaks include:
- moving `path_arrays_from_ohlc()` back to module level
- slice out the last xy datum for `OHLCBarsAsCurveFmtr` 1d formatting
- always copy the new x-value from the source to `.x_nd`
2023-02-12 13:41:17 -05:00
Tyler Goodlet 6ea04f850d Drop diff state tracking in formatter
This was a major cause of error (particularly trying to get epoch
indexing working) and really isn't necessary; instead just have
`.diff()` always read from the underlying source array for current
index-step diffing and append/prepend slice construction.

Allows us to,
- drop `._last_read` state management and thus usage.
- better handle startup indexing by setting `.xy_nd_start/stop` to
  `None` initially so that the first update can be done in one large
  prepend.
- better understand and document the step curve "slice back to previous
  level" logic which is now heavily commented B)
- drop all the `slice_to_head` stuff and instead allow each
  formatter to choose its 1d segmenting.
2023-02-12 13:41:17 -05:00
Tyler Goodlet f3bab826f6 Comment out bps for time indexing 2023-02-12 13:41:17 -05:00
Tyler Goodlet ac1f37a2c2 Expect `index_field: str` in all graphics objects 2023-02-12 13:41:17 -05:00
Tyler Goodlet 166d14af69 Simplify formatter update methodology
Don't expect values (array + slice) to be returned and applied by
`.incr_update_xy_nd()` and instead presume this will be implemented
internally in each (sub)formatter.

Attempt to simplify some incr-update routines, (particularly in the step
curve formatter, though most of it was reverted to just a simpler form
of the original implementation XD) including:
- dropping the need for the `slice_to_head: int` control.
- using the `xy_nd_start/stop` index counters over custom lookups.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 696c6f8897 First attempt, field-index agnostic formatting
Remove hardcoded `'index'` field refs from all formatters in a first
attempt at moving towards epoch-time alignment (though don't actually
use it yet).

Adjustments to the formatter interface:
- property for `.xy_nd`, the x/y nd arrays.
- property for `.xy_slice`, the nd format array(s) start->stop index
  slice.

Internal routine tweaks:
- drop `read_src_from_key` and always pass full source array on updates
  and adjust handlers to expect to have to index the data field of
  interest.
- set `.last_read` right after update calls instead of after 1d
  conversion.
- drop `slice_to_head` array read slicing.
- add some debug points for testing 'time' indexing (though not used
  here yet).
- add `.x_nd` array update logic for when the `.index_field` is not
  'index' - i.e. when we begin to try and support epoch time.
- simplify some new y_nd updates to not require use of `np.broadcast()`
  where possible.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 6cacd7d18b Make `Viz.slice_from_time()` take input array
Probably means it doesn't need to be a `Flume` method but it's
convenient to expect the caller to pass in the `np.ndarray` with
a `'time'` field instead of a `timeframe: str` arg; also, return the
slice mask instead of the sliced array as output (again allowing the
caller to do any slicing). Also, handle the slice-outside-time-range
case by just returning the entire index range with a `None` mask.

Adjust `Viz.view_data()` to instead do timeframe (for rt vs. hist shm
array) lookup and equiv array slicing with the returned mask.
2023-02-12 13:41:17 -05:00
Tyler Goodlet 5b08e9cba3 Add breakpoint on -ve range for now 2023-02-12 13:41:17 -05:00
Tyler Goodlet d3f5ff1b4f Go back to hard-coded index field
Turns out https://github.com/numba/numba/issues/8622 is real
and the suggested `numba.literally` hack doesn't seem to work..
2023-02-12 13:41:16 -05:00
Tyler Goodlet e45bc4c619 Move `ui._compression`/`._pathops` to `.data` subpkg
Since these modules no longer contain Qt specific code we might
as well include them in the data sub-package.

Also, add `IncrementalFormatter.index_field` as a single point to define the
indexing field that should be used for all x-domain graphics-data
rendering.
2023-02-12 13:39:10 -05:00
Tyler Goodlet 8d592886fa Pass `Flume`s throughout FSP-ui and charting APIs
Since higher level charting and fsp management need access to the
new `Flume` indexing apis this adjusts some func sigs to pass through
(and/or create) flume instances:
- `LinkedSplits.add_plot()` and dependents.
- `ChartPlotWidget.draw_curve()` and deps, and it now returns a `Flow`.
- `.ui._fsp.open_fsp_admin()` and `FspAdmin.open_fsp_ui()` related
  methods => now we wrap the destination fsp shm in a flume on the admin
  side which is returned from `.start_engine_method()`.

Drop a bunch of (unused) chart widget methods including some already
moved to flume methods: `.get_index()`, `.in_view()`,
`.last_bar_in_view()`, `.is_valid_index()`.
2023-02-02 13:32:30 -05:00
Tyler Goodlet fcfc0f31f0 Enable backpressure in an effort to prevent bootup overruns 2023-01-30 11:45:29 -05:00
Tyler Goodlet 844626f6dc Move `brokerd` service task to root `.data` mod 2023-01-13 13:21:49 -05:00
Tyler Goodlet 71ca4c8e1f Use actor uid in shm keys for rt quote buffers
Allows running simultaneous data feed services on the same (linux) host
by avoiding file-name collisions instead keying shm buffer sets by the
given `brokerd` instance. This allows, for example, either multiple dev
versions of the data layer to run side-by-side or for the test suite to
be seamlessly run alongside a production instance.
2023-01-13 13:21:49 -05:00
Tyler Goodlet 045b76bab5 Make `Flume.index_stream()` defer to new sampling api 2023-01-13 13:21:49 -05:00
Tyler Goodlet d66fb49077 Don't deliver shms from `start_backfill()`, they're not used 2023-01-13 13:21:49 -05:00
Tyler Goodlet 78c7c8524c Breakpoint when bad 1m history offsets are detected 2023-01-13 13:21:49 -05:00
Tyler Goodlet 5adb234a24 Don't receive sample-index msgs in feed layer 2023-01-13 13:21:49 -05:00
Tyler Goodlet 2778ee1401 Support not registering for sample-index msgs via `sub_for_broadcasts: bool` flag 2023-01-13 13:21:49 -05:00
Tyler Goodlet b3d1b1aa63 Port feed layer to use new `samplerd` APIs
Always use `open_sample_stream()` to register fast and slow quote feed
buffers and get a sampler stream which we use to trigger
`Sampler.broadcast_all()` calls on the service side after backfill
events.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 5ec1a72a3d Implement a `samplerd` singleton actor service
Now spawned under the `pikerd` tree as a singleton-daemon-actor we offer
a slew of new routines in support of this micro-service:

- `maybe_open_samplerd()` and `spawn_samplerd()` which provide the
  `._daemon.Services` integration to conduct service spawning.
- `open_sample_stream()` which is a client-side endpoint which does all
  the work of (lazily) starting the `samplerd` service (if dne),
  registering shm buffers for update as well as connecting a sample-index
  stream for iteration by the caller.
- `register_with_sampler()` which is the `samplerd`-side service task
  endpoint implementing all the shm buffer and index-stream registry
  details as well as logic to ensure a lone service task runs
  `Services.increment_ohlc_buffer()`; it increments at the shortest period
  registered which, for now, is the default 1s duration.

Further impl notes:
- fixes to `Services.broadcast()` to ensure broken streams get discarded
  gracefully.
- we use a `pikerd` side singleton mutex `trio.Lock()` to ensure
  one-and-only-one `samplerd` is ever spawned per `pikerd` actor tree.
2023-01-13 13:21:15 -05:00
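
A tiny sketch of the one-and-only-one spawn guarantee mentioned above using a module-level `trio.Lock()`; `spawn_samplerd()` here is just a stub standing in for the real `._daemon.Services`-based spawn and the bookkeeping is illustrative.

```python
import trio

_samplerd_lock = trio.Lock()       # pikerd-side singleton mutex
_samplerd_started: bool = False

async def spawn_samplerd() -> None:
    # stub standing in for the actual daemon-actor spawn
    await trio.sleep(0)

async def maybe_open_samplerd() -> None:
    global _samplerd_started
    # only one task can ever win the race to spawn the service
    async with _samplerd_lock:
        if not _samplerd_started:
            await spawn_samplerd()
            _samplerd_started = True

trio.run(maybe_open_samplerd)
```
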
Tyler Goodlet 2c76cee928 Begin formalizing `Sampler` singleton API
We're moving toward a single actor managing sampler work distributed
independently of `brokerd` services such that a user can run samplers on
different hosts than the real-time data feed infra. Most of the
implementation details include aggregating `.data._sampling` routines
into a new `Sampler` singleton type.

Move the following methods to class methods:
- `.increment_ohlc_buffer()` to allow a single task to increment all
  registered shm buffers.
- `.broadcast()` for IPC relay to all registered clients/shms.

Further add a new `maybe_open_global_sampler()` which allocates
a service nursery and assigns it to the `Sampler.service_nursery`; this
is prep for putting the step incrementer in a singleton service task
higher up the data-layer actor tree.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 3efb0b5884 Sync 1s (or less) sampler steps using rounded now-epoch 2023-01-13 13:21:15 -05:00
Tyler Goodlet 009bbe456e Always `.error()` log unknown queries for `marketstore` 2023-01-13 13:21:15 -05:00
Tyler Goodlet daf7b3f4a5 Only accept 6 tries for the same duplicate hist frame
When we see multiple history frames that are duplicate to the request
set, bail re-trying after a number of tries (6 just cuz) and return
early from the tsdb backfill loop; presume that this many duplicates
means we've hit the beginning of history. Use a `collections.Counter`
for the duplicate counts. Make sure and warn log in such cases.
2023-01-13 13:21:15 -05:00
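
A small sketch of the duplicate-frame bail-out using `collections.Counter`; the retry cap of 6 comes from the commit text, everything else is illustrative.

```python
from collections import Counter

dupes: Counter[float] = Counter()

def should_stop_backfill(frame_start_t: float, max_dupes: int = 6) -> bool:
    # count how many times the exact same frame has been re-delivered;
    # this many duplicates likely means we hit the start of history.
    dupes[frame_start_t] += 1
    return dupes[frame_start_t] >= max_dupes

for _ in range(6):
    stop = should_stop_backfill(frame_start_t=1_660_000_000.0)
print(stop)  # True after the 6th duplicate -> end the backfill loop early
```
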
Tyler Goodlet b0a6dd46e4 Use recon set on stack closing during reconnect
Hopefully resolves https://github.com/pikers/piker/issues/434
2023-01-13 13:21:15 -05:00
Tyler Goodlet 1c5141f4c6 Fix f-str in duplicate frame msg print 2023-01-13 13:21:15 -05:00
Tyler Goodlet 4cdd2271b0 Drop `tractor` assert bug note 2023-01-13 13:21:15 -05:00
Tyler Goodlet 04c0d77595 Frame ticks in helper routine
Wow, turns out tick framing was totally borked since we weren't framing
on "greater then throttle period long waits" XD

This moves all the framing logic into a common func and calls it in
every case:
- every (normal) "pre throttle period expires" quote receive
- each "no new quote before throttle period expires" (slow case)
- each "no clearing tick yet received" / only burst on clears case
2023-01-13 13:21:15 -05:00
Tyler Goodlet 8e1ceca43d Add some data-flows jargon notes (re: ) 2023-01-13 13:21:15 -05:00
Tyler Goodlet c85e7790de Rename `._flumes.py` -> `.flows.py` 2023-01-13 13:21:15 -05:00
Tyler Goodlet 2399c618b6 Expand sampler loop shm write lines 2023-01-13 13:21:15 -05:00
Tyler Goodlet 7ec88f8cac Make hist shm token optional to allow for FSPs 2023-01-13 13:21:15 -05:00
Tyler Goodlet eacd44dd65 Move `Flume` to a new `.data._flumes` module 2023-01-13 13:21:15 -05:00
Tyler Goodlet e5e70a6011 Extend `Flume` methods
Add some (untested) data slicing util methods for mapping time ranges to
source data indices:
- `.get_index()` which maps a single input epoch time to an equiv array
  (int) index.
- add `slice_from_time()` which returns a view of the shm data from an
  input epoch range presuming the underlying struct array contains
  a `'time'` field with epoch stamps.
- `.view_data()` which slices out the "in view" data according to the
  current state of the passed in `pg.PlotItem`'s view box.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 1ee49df31d Ensure a rt shm buffer without backfill has correct epoch timestamping 2023-01-13 13:21:15 -05:00
Tyler Goodlet f2df32a673 Use throttle period for wait-on-clearing-event timeout 2023-01-13 13:21:15 -05:00
Tyler Goodlet 125e31dbf3 Implement by-type tick-framing in throttler loop
This has been an outstanding idea for a while and changes the framing
format of tick events into a `dict[str, list[dict]]` wherein for each
tick "type" (eg. 'bid', 'ask', 'trade', 'asize'..etc) we create an FIFO
ordered `list` of events (data) and then pack this table into each
(throttled) send. This gives an additional implied downsample reduction
(in terms of iteration on the consumer side) from `N` tick-events to
a (max) `T` tick-types presuming the rx side only needs the latest tick
event.

Drop the `types: set` and adjust clearing event test to use the new
`ticks_by_type` map's keys.
2023-01-13 13:21:15 -05:00
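
A small sketch of the per-type framing format described above; the fields in the toy tick dicts are illustrative, not the exact quote schema.

```python
from collections import defaultdict

def frame_ticks_by_type(ticks: list[dict]) -> dict[str, list[dict]]:
    # group a throttle-period's worth of tick events into FIFO lists
    # keyed by tick type so the consumer can just take the last entry
    # per type if that's all it needs.
    by_type: dict[str, list[dict]] = defaultdict(list)
    for tick in ticks:
        by_type[tick['type']].append(tick)
    return dict(by_type)

frame = frame_ticks_by_type([
    {'type': 'bid', 'price': 99.9, 'size': 10},
    {'type': 'trade', 'price': 100.0, 'size': 1},
    {'type': 'bid', 'price': 99.8, 'size': 12},
])
latest_bid = frame['bid'][-1]  # N bid events collapse to the latest
print(latest_bid)
```
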
Tyler Goodlet 715e693564 Improved clearing-tick-burst-oriented throttling
Instead of uniformly distributing the msg send rate for a given
aggregate subscription, choose to be more bursty around clearing ticks
so as to avoid saturating the consumer with L1 book updates and vs.
delivering real trade data as-fast-as-possible.

Presuming the consumer is in the "UI land of slow" (eg. modern display
frame rates) such an approach is more useful for seeing "material
changes" in the market as-bursty-as-possible (i.e. more short lived fast
changes in last clearing price vs. many slower changes in the bid-ask
spread queues). Such an approach also lends better to multi-feed
overlays which in aggregate tend to scale linearly with the number of
feeds/overlays; centralization of bursty arrival rates allows for
a higher overall throttle rate if used cleverly with framing.
2023-01-13 13:21:15 -05:00
Tyler Goodlet 4300470786 Fix for empty tsdb query result case 2023-01-13 13:21:15 -05:00
Tyler Goodlet cf6e44cb9c Add `NoBsWs.connected()` predicate 2023-01-13 12:39:17 -05:00
Tyler Goodlet 2a158aea2c Rework `_FeedsBus` subscriptions mgmt using `set`
Allows using `set` ops for subscription management and guarantees no
duplicates per `brokerd` actor. New API is simpler for dynamic
pause/resume changes per `Feed`:
- `_FeedsBus.add_subs()`, `.get_subs()`, `.remove_subs()` all accept multi-sub
  `set` inputs.
- `Feed.pause()` / `.resume()` encapsulates management of *only* sending
  a msg on each unique underlying IPC msg stream.

Use new api in sampler task.
2023-01-10 11:09:19 -05:00
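
A rough sketch of what the `set`-based subscription API might look like; the method names come from the commit text but the bodies here are illustrative, not the real `_FeedsBus`.

```python
class _FeedsBus:
    def __init__(self) -> None:
        self._subscriptions: dict[str, set] = {}

    def add_subs(self, key: str, subs: set) -> set:
        # set union guarantees no duplicate subs per `brokerd` actor
        current = self._subscriptions.setdefault(key, set())
        current |= subs
        return current

    def get_subs(self, key: str) -> set:
        return self._subscriptions.get(key, set())

    def remove_subs(self, key: str, subs: set) -> set:
        current = self._subscriptions.setdefault(key, set())
        current -= subs
        return current

bus = _FeedsBus()
bus.add_subs('btcusdt.binance', {('sub1',), ('sub2',)})
bus.remove_subs('btcusdt.binance', {('sub1',)})
print(bus.get_subs('btcusdt.binance'))   # {('sub2',)}
```
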
Tyler Goodlet 88870fdda7 Set `brokers: list[str]` from mods when not provided.. 2023-01-10 11:09:19 -05:00
Tyler Goodlet 326f153a47 Catch overruns on throttled feed subs too
Previously we would only detect overruns and drop subscriptions on
non-throttled feed subs, however you can get the same issue with
a wrapping throttler task:
- the intermediate mem chan can be blocked either by the throttler task
  being too slow, in which case we still want to warn about it
- the stream's IPC channel actually breaks and we still want to drop
  the connection and subscription so it doesn't become a source of
  stale backpressure.
2023-01-10 11:09:19 -05:00
Tyler Goodlet f5cd63ad35 Ensure correct stream is set on each `Flume`
Set each quote-stream by matching the provider for each `Flume` and thus
results in some flumes mapping to the same (multiplexed) stream.
Monkey-patch the equivalent `tractor.MsgStream._ctx: tractor.Context` on
each broadcast-receiver subscription to allow use by feed bus methods as
well as other internals which need to reference IPC channel/portal info.

Start a `_FeedsBus` subscription management API:
- add `.get_subs()` which returns the list of tuples registered for the
  given key (normally the fqsn).
- add `.remove_sub()` which allows removing by key and tuple value and
  provides encapsulation for sampler task(s) which deal with dropped
  connections/subscribers.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 1e96ca32df Move `maybe_open_feed()` above for readability 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7b9db86753 Multi-`broker` quotes with `Feed.open_multi_stream()`
Adds provider-list-filtered (quote) stream multiplexing support allowing
for merged real-time `tractor.MsgStream`s using an `@acm` interface.
Behind the scenes we are just doing a classic multi-task push to common
mem chan approach.

Details to make it work on `Feed`:
- add `Feed.mods: dict[str, ModuleType]` and
  `Feed.portals: dict[ModuleType, tractor.Portal]` which are both populated
  during init in `open_feed()`
- drop `Feed.portal` and `Feed.name`

Also fix a final lingering tsdb history loading loop termination bug.
2023-01-10 11:09:19 -05:00
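
A stripped-down sketch of the "multi-task push to a common mem chan" multiplexing idea using an `@acm` interface; the relayed streams here are plain async iterables rather than real `tractor.MsgStream`s and the body is illustrative.

```python
from contextlib import asynccontextmanager as acm
import trio

@acm
async def open_multi_stream(streams: list):
    # push every msg from each input stream into one shared memory
    # channel and yield the receive side to the consumer.
    send, recv = trio.open_memory_channel(2**10)

    async def relay(stream, tx: trio.MemorySendChannel) -> None:
        async with tx:
            async for msg in stream:
                await tx.send(msg)

    async with trio.open_nursery() as n:
        for stream in streams:
            n.start_soon(relay, stream, send.clone())
        await send.aclose()  # clones keep the channel open until done
        try:
            yield recv
        finally:
            n.cancel_scope.cancel()

async def demo():
    async def fake_quotes(name: str):
        for i in range(3):
            yield {name: i}

    async with open_multi_stream(
        [fake_quotes('binance'), fake_quotes('kraken')]
    ) as quotes:
        async for msg in quotes:
            print(msg)

trio.run(demo)
```
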
Tyler Goodlet 20a396270e `Storage.read_ohlcv()` now returns a `numpy` array 2023-01-10 11:09:19 -05:00
Tyler Goodlet 81516c5204 Finally fix tsdb -> shm backfill loading
A slight facepalm but, the main issue was a simple indexing logic error:
we need to slice with `tsdb_history[-shm._first.value:]` to push most
recent history not oldest.. This allows cleanup of tsdb backfill loop as
well.

Further, greatly simplify `diff_history()` time slicing by using the
classic `numpy` conditional slice on the epoch field.
2023-01-10 11:09:19 -05:00
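
An illustrative sketch (not the actual backfill code) of the two fixes described above: push only the most recent rows of tsdb history that fit the free space at the front of the shm buffer, and diff new frames against existing history with a plain conditional slice on the epoch field.

```python
import numpy as np

def most_recent_fill(tsdb_history: np.ndarray, free_rows: int) -> np.ndarray:
    # `free_rows` stands in for `shm._first.value`: slice from the end,
    # not the beginning, so the newest history lands next to the rt data.
    return tsdb_history[-free_rows:]

def diff_history(frame: np.ndarray, last_tsdb_t: float) -> np.ndarray:
    # keep only datums strictly newer than what the tsdb already holds
    return frame[frame['time'] > last_tsdb_t]

dtype = [('time', 'f8'), ('close', 'f8')]
hist = np.array([(t, 1.0) for t in range(10)], dtype=dtype)
print(most_recent_fill(hist, 3))   # last 3 rows
print(diff_history(hist, 6.0))     # rows with time 7, 8, 9
```
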
Tyler Goodlet d6fb6fe3ae Just drop the pretty repr from our struct for now 2023-01-10 11:09:19 -05:00
Tyler Goodlet 8476d8d056 Fix partial-frame-missing backfill logic
This had a bug prior where the end of a frame (a partial) wasn't being
sliced correctly and we'd get odd gaps showing up between the data
backfilled from `brokerd` and the tsdb end index. Repair this by doing
timeframe aware index
diffing in `diff_history()` which seems to resolve it. Also, use the
frame-result's `end_dt: datetime` for the loop exit condition.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 29b6b3e54f Port `storesh` cli-cmd machinery to `Flume` apis 2023-01-10 11:09:19 -05:00
Tyler Goodlet 8a01c9e42b Fix broker-tail stripping using `str.removesuffix()` 2023-01-10 11:09:19 -05:00
Tyler Goodlet 7daab6329d Make `Symbol` derive from internal `.types.Struct` 2023-01-10 11:09:19 -05:00
Tyler Goodlet bb6452b969 Further feed syncing fixes wrt to `Flumes`
Sync per-symbol sampler loop start to subscription registers such that
the loop can't start until the consumer's stream subscription is added;
the task-sync uses a `trio.Event`. This patch also drops a ton of
commented cruft.

Further adjustments needed to get parity with prior functionality:
- pass init msg 'symbol_info' field to the `Symbol.broker_info: dict`.
- ensure the `_FeedsBus._subscriptions` table uses the broker-specific
  key (without the brokername suffix) for lookups so that the sampler
  loop doesn't have to append the brokername as a suffix.
- ensure the `open_feed_bus()` flumes-table-msg sent by
  `tractor.Context.started()` uses the `.to_msg()` form of all flume
  structs.
- ensure `maybe_open_feed()` uses `tractor.MsgStream.subscribe()` on all
  `Flume.stream`s on cache hits using the
  `tractor.trionics.gather_contexts()` helper.
2023-01-10 11:09:19 -05:00
Tyler Goodlet 25bfe6f035 Use new |-union style type annots in sampling routines 2023-01-10 11:09:19 -05:00
Tyler Goodlet e7de5404d3 Add `Symbol.fqsn: str` property 2023-01-10 11:09:19 -05:00
Tyler Goodlet 18dc8b08e4 First draft aggregate feedz support
Orient shm-flow-arrays around the new idea of a `Flume` which provides
access, mgmt and basic measure of real-time data flow sets (see water
flow management semantics).

- We discard the previous idea of a "init message" which contained all
  the shm attachment info and instead send a startup message full of
  `Flume.to_msg()`s which are symmetrically loaded on the caller actor
  side.

- Create data-flows "entries" for every passed in fqsn such that the consumer gets back
  streams and shm for each, now all wrapped in `Flume` types. For now we
  allocate `brokermod.stream_quotes()` tasks 1-to-1 for each fqsn
  (instead of expecting each backend to do multi-plexing, though we
  might want that eventually) as well as a `_FeedsBus._subscriber` entry
  for each. The pause/resume management loop is adjusted to match.
  Previously `Feed`s were allocated 1-to-1 with each fqsn.

- Make `Feed` a `Struct` subtype instead of a `@dataclass` and move all
  flow specific attrs to the new `Flume`:
  - move `.index_stream()`, `.get_ds_info()` to `Flume`.
  - drop `.receive()`: each fqsn entry will now require knowledge of
    separate streams by feed users.
  - add multi-fqsn tables: `.flumes`, `.streams` which point to the
    appropriate per-symbol entries.

- Async load all `Flume`s from all contexts and all quote streams using
  `tractor.trionics.gather_contexts()` on the client `open_feed()` side.

- Update feeds test to include streaming 2 symbols on the same (binance)
  backend.
2023-01-10 11:09:18 -05:00
Tyler Goodlet 344a634cb6 Always set fqsn in `Feed.symbols: dict` 2023-01-10 11:09:18 -05:00
Guillermo Rodriguez 0474d66531
Switch msgspec struct ordering to always have required fields first and optionals last 2023-01-09 18:43:50 -03:00
algorandpa f218b804b4
Merge pull request from pikers/add_config_dir_on_daemon_startup
Add config dir on daemon startup
2022-12-22 19:40:47 +00:00
Esmeralda Gallardo 18e4352faf
Deleted unused timeout logic 2022-12-19 14:55:06 -03:00
Esmeralda Gallardo a6e921548b
Modified recv_task(): added functionality to restart ws after timeout, modified match msg and added new case to match in case of receiving an error. 2022-12-19 13:48:18 -03:00
Esmeralda Gallardo 3f5dec82ed
Replaced try/except block in recv_task() by match msg, and added new changes to description comment 2022-12-19 13:48:17 -03:00
Esmeralda Gallardo db0b59abaa
Added support for JSONRPC requests coming from the server side 2022-12-19 13:48:10 -03:00
algorandpa db11c3c0f8 add config dir on pikerd startup 2022-12-17 21:51:49 +00:00
Tyler Goodlet de93da202b Reconnect on ping-pong errors too i guess? 2022-12-10 16:05:36 -05:00
Tyler Goodlet 490d85aba5 Drop fast chart buffer to 2 days worth 2022-11-10 11:45:49 -05:00
Tyler Goodlet d46945cb09 Move profiler imports to internal version 2022-10-31 09:26:36 -04:00
Tyler Goodlet df16726211 Just wipe wrong timeframe filled tsdb colseries for now 2022-10-28 16:17:14 -04:00
Tyler Goodlet fb4f1732b6 Drop key error again 2022-10-28 16:17:14 -04:00
Tyler Goodlet 610fb5f7c6 Drop `NoData` handler, just let it bubble 2022-10-28 16:17:14 -04:00
Tyler Goodlet 2b231ba631 Lul, fix timeframe key when writing history
There never was any underlying db bug, it was a hardcoded timeframe in
the column series write key.. Now we always assert a matching timeframe
in results.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 286228c290 Only wait on backfill if provider supports timeframe 2022-10-28 16:17:14 -04:00
Tyler Goodlet dc1edeecda Do tsdb backloading to shm concurrently
Not only improves startup latency but also avoids a bug where the rt
buffer was being tsdb-history prepended *before* the backfilling of
recent data from the backend was complete resulting in out of order
frames in shm.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 0000d9a314 Handle backends with no 1s OHLC history
If a history manager raises a `DataUnavailable` just assume the sample
rate isn't supported and that no shm prepends will be done. Further seed
the shm array in such cases as before from the 1m history's last datum.

Also, fix tsdb -> shm back-loading, cancelling tsdb queries when either
no array-data is returned or a frame is delivered which has a start time
no less than the least last retrieved. Use strict timeframes for every
`Storage` API call.
2022-10-28 16:17:14 -04:00
Tyler Goodlet b7ef0596b9 Drop remaining timeframe scanning from `.read_ohlcv()` 2022-10-28 16:17:14 -04:00
Tyler Goodlet 143e86a80c Handle super annoying mkts query bug..
Turns out querying for a high freq timeframe (like 1sec) will still
return a lower freq timeframe (like 1Min) SMH, and no idea if it's the
server or the client's fault, so we have to explicitly check the sample
step size and discard lower freq series-results. Do this inside
`Storage.read_ohlcv()` and return an empty `dict` when the wrong time
step is detected from the query result.

Further enforcements,
- both `.load()` and `read_ohlcv()` now require an explicit `timeframe:
  int` input to guarantee the time step of the output array.
- drop all calls to `.load()` with non-timeframe-specific input.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 956c7d3435 Add concurrent multi-time-frame history loading
Our default sample periods are 60s (1m) for the history chart and 1s for
the fast chart. This patch adds concurrent loading of both (or more)
different sample period data sets using the existing loading code but
with new support for looping through a passed "timeframe" table which
points to each shm instance.

More detailed adjustments include:
- breaking the "basic" and tsdb loading into 2 new funcs:
  `basic_backfill()` and `tsdb_backfill()` the latter of which is run
  when the tsdb daemon is discovered.
- adjust the fast shm buffer to offset with one day's worth of 1s so
  that only up to a day is backfilled as history in the fast chart.
- adjust bus task starting in `manage_history()` to deliver back the
  offset indices for both fast and slow shms and set them on the
  `Feed` object as `.izero_hist/rt: int` values:
  - allows the chart-UI linked view region handlers to use the offsets
    in the view-linking-transform math to index-align the history and
    fast chart.
2022-10-28 16:17:14 -04:00
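
A hedged sketch of the per-timeframe concurrent backfill pattern described above; `backfill()` and the shm handles are placeholders for the real `basic_backfill()`/`tsdb_backfill()` machinery.

```python
import trio

async def backfill(timeframe: int, shm) -> None:
    # stand-in for the real per-period backfill work
    await trio.sleep(0)
    print(f'backfilled {timeframe}s history into {shm}')

async def manage_history(timeframes: dict[int, object]) -> None:
    async with trio.open_nursery() as n:
        for period_s, shm in timeframes.items():
            # one concurrent backfill task per sample period
            n.start_soon(backfill, period_s, shm)

# 60s (1m) history chart buffer and 1s fast chart buffer loaded together
trio.run(manage_history, {60: 'hist_shm', 1: 'rt_shm'})
```
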
Tyler Goodlet 23d0353934 Drop duplicate frame request
Must have gotten left in during refactor from the `trimeter` version?
Drop down to 6 years for 1m sampling.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 61ca5f7e19 Drop `trimeter`-ized concurrent history querying
It doesn't seem to be any slower on our least throttled backend
(binance) and it removes a bunch of hard to get correct frame
re-ordering logic that i'm not sure really ever fully worked XD

Commented some issues we still need to resolve as well.
2022-10-28 16:17:13 -04:00
Tyler Goodlet e7ec01b8e6 Pass in default history time of 1 min
Adjust all history query machinery to pass a `timeframe: int` in seconds
and set default of 60 (aka 1m) such that history views from here forward
will be 1m sampled OHLCV. Further when the tsdb is detected as up load
a full 10 years of data if possible on the 1m - backends will eventually
get a config section (`brokers.toml`) that allows users to tune this.
2022-10-28 16:17:13 -04:00
Tyler Goodlet bf7d5e9a71 Make `marketstore` storage api timeframe aware
The `Store.load()`, `.read_ohlcv()` and `.write_ohlcv()` and
`.delete_ts()` now can take a `timeframe: Optional[float]` param which
is used to look up the appropriate sampling period table-key from
`marketstore`.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 26d6e10ad7 Parameterize duration, pprint msg 2022-10-07 14:13:52 -04:00
Tyler Goodlet bcd6bbb7ca Increase the `brokerd` mem-chan size
Intention is to hopefully minimize (as many) context switches when
processing (near-)HFT feeds - tho not sure if it's improving things that
much XD
2022-09-12 20:25:15 -04:00
Tyler Goodlet 2ef6460853 Add `Feed.get_ds_info()` to detect/compute sample rates 2022-09-12 20:25:15 -04:00
Tyler Goodlet 6e574835c8 Update history shm buffer in ohlc sampler loop 2022-09-12 20:25:15 -04:00
Tyler Goodlet 49ccfdd673 Pass history shm "last index" in init msg, assign on feed 2022-09-12 20:25:15 -04:00
Tyler Goodlet 861fe791eb Allocate 2 shm buffers for history and real-time
As part of supporting a "history view" chart which shows downsampled
datums alongside our 1s (or higher) sampled OHLC we need a separate
buffer to store the slower history from broker backends. This begins
that design by allocating 2 buffers:
- `rt_shm: ShmArray` which maps to a `/dev/shm/` file with `_rt` suffix
- `hist_shm: ShmArray` which maps to a file with `_hist` suffix

Deliver both of these shms back from `manage_history()` and load
them as `Feed.rt_shm`/`.hist_shm` on the client side.

Impl deats:
- init the rt buffer with the first datum from loaded history and
  assign all OHLC values to that row's 'close' and the vlm to 0.
- pass the hist buffer to the backfiller task
- only spawn **one** global sampler array-row increment task per
  `brokerd` and pass in the 1s delay which we presume is our lowest
  OHLC sample rate for now.
- drop `open_sample_step_stream()` and just move its body contents into
  `Feed.index_stream()`
2022-09-12 20:25:15 -04:00
Tyler Goodlet 60052ff73a Presume shortest delay input to `increment_ohlc_buffer()`
Instead of worrying about the increment period per shm subscription,
just use the value passed as input and presume the caller knows that
only one task is necessary and that the wakeup (sampling) period should
be the shortest that is needed.

It's very unlikely we don't want at least a 1s sampling (both in terms
of task switching cost and general usage) which will eventually ship as
the default "real-time" feed "timeframe". Further, this "fast" increment
sampling task can handle all lower sampling periods (eg. 1m, 5m, 1H)
based on the current implementation just the same.

Also, add a global default sample period as `_defaul_delay_s` for use in
other internal modules.
2022-09-12 20:25:15 -04:00
Tyler Goodlet 4d2708cd42 Force 1s sample step so crypto boiz can see 2022-09-12 20:25:15 -04:00
Tyler Goodlet d1cc52dff5 Use new public lifetime-stack class attr 2022-09-12 20:24:56 -04:00
Tyler Goodlet 4fa901dbcb Port to new `tractor._runtime` mod 2022-09-12 20:24:56 -04:00
Tyler Goodlet 7dfa4c3cde Better comment on the `size`'s purpose/units 2022-08-29 13:56:26 -04:00
Tyler Goodlet 7b653fe4f4 Store shm array size in token schema, use for loading 2022-08-29 13:46:41 -04:00
Guillermo Rodriguez 0c323fdc0b
Minor style changes and warning on unexpected msg 2022-08-27 09:12:02 -03:00
Guillermo Rodriguez 4facd161a9
Pull jsonrpc machinery out of deribit backend into piker.data._web_bs module and make it generic 2022-08-25 14:08:09 -03:00
Guillermo Rodriguez 34fb497eb4
Add aiter api to NoBsWs and rework cryptofeed relay to not be OOPy 2022-08-24 18:09:35 -03:00
Guillermo Rodriguez 92090b01b8
Begin jsonrpc over ws refactor 2022-08-24 18:06:00 -03:00
Tyler Goodlet 7379dc03af The `ps1` check doesn't work for `pdb`.. 2022-08-18 11:51:12 -04:00
Tyler Goodlet 2aec1c5f1d Only pprint our struct when we detect a py REPL 2022-08-18 11:51:12 -04:00
Tyler Goodlet a83bd9c608 Drop `msgpack` from `marketstore` module 2022-08-11 14:21:36 -04:00
Tyler Goodlet 69e501764a Drop status event processing at large
Since we figured out how to pass through ems dialog ids to the
`openOrders` sub we don't really need to do much with status updates
other then error handling. This drops `process_status()` and moves the
error handling logic into a status handler sub-block; we now just
info-log status updates for troubleshooting purposes.
2022-08-01 14:08:45 -04:00
Tyler Goodlet db564d7977 Add casting method to our struct variant 2022-07-30 17:34:40 -04:00
goodboy bf7a49c19b
Merge pull request from pikers/fix_forex
Fix forex
2022-07-21 17:52:08 -04:00
Tyler Goodlet d3130ca04c Revert to hard container kill on log error 2022-07-21 17:00:36 -04:00
Tyler Goodlet 0580b204a3 A `size` field in ticks is optional 2022-07-19 09:41:37 -04:00
Tyler Goodlet 90bc9b9730 Only 4k seconds of 1s ohlc when no tsdb 2022-07-19 09:07:27 -04:00
Tyler Goodlet c26acb1fa8 Add `Struct.copy()` which does a rountrip validate 2022-07-09 12:09:38 -04:00
Tyler Goodlet bea0111753 Add a custom `msgspec.Struct` with some humanizing 2022-07-09 12:09:38 -04:00
Tyler Goodlet c870665be0 Remove `BaseModel` use from all dataclass-like uses 2022-07-09 12:08:41 -04:00
Tyler Goodlet 4ff1090284 Use struct for shm tokens 2022-07-09 12:06:47 -04:00
Tyler Goodlet 4c7c78c815 Add a `ApplicationLogError` custom exc instead 2022-07-08 17:29:03 -04:00
Tyler Goodlet 019867b413 Fix missing container id, drop custom exception 2022-07-08 17:22:37 -04:00
Tyler Goodlet f356fb0a68 Hard kill container on both a timeout or connection error 2022-07-08 17:22:37 -04:00
Tyler Goodlet d506235a8b Drop token attr from `NoBsWs` 2022-07-03 17:07:35 -04:00
Tyler Goodlet eb2bad5138 Make our `Symbol` a `msgspec.Struct` 2022-06-28 10:07:56 -04:00
Tyler Goodlet e45cb9d08a Always cancel container on teardown 2022-06-26 13:36:29 -04:00
Tyler Goodlet 27c523ca74 Speedup: only load a "views worth" of datums on first query 2022-06-23 15:21:09 -04:00
Tyler Goodlet b8b76a32a6 Harden container cancel-and-wait supervisor loop
This should hopefully make teardown more reliable and includes better
logic to fail over to a hard kill path after a 3 second timeout waiting
for the instance to complete using the `docker-py` wait API. Also
generalize the supervisor teardown loop by allowing the container config
endpoint to return 2 msgs to expect:
- a startup message that can be read from the container's internal
  process logging that indicates it is fully up and ready.
- a teardown msg that can be polled for that indicates the container has
  gracefully terminated after a cancellation request which is passed to
  our container wrappers `.cancel()` method.

Make the marketstore config endpoint return the 2 messages we previously
had hard coded and use this new api.
2022-06-23 10:23:14 -04:00
Tyler Goodlet dcee0ddd55 Move/expect all marketstore configs under a `<configdir>/piker/marketstore` subdir 2022-06-23 09:48:32 -04:00
Tyler Goodlet 1c1661b783 Factor all data feed endpoints into `.ib.feed.py` 2022-06-06 19:33:12 -04:00
Tyler Goodlet 44c242a794 Fill in label with pairs from `status` value of backend init msg 2022-06-05 22:14:32 -04:00
Tyler Goodlet 55772efb34 Bleh, try avoiding the too many files bug-thing.. 2022-06-05 22:13:36 -04:00
Tyler Goodlet 80835d4e04 More detailed rt feed drop logging 2022-06-05 22:13:36 -04:00
Tyler Goodlet 363ba8f9ae Only drop throttle feeds if channel disconnects? 2022-06-05 22:13:36 -04:00
Tyler Goodlet fc24f5efd1 Iterate 1s and 1m from tsdb series 2022-06-05 22:13:36 -04:00
Tyler Goodlet a7ff47158b Pass tsdb flag when db is up XD 2022-06-05 22:13:36 -04:00
Tyler Goodlet 1b38628b09 Handle teardown race, add comment about shm subdirs 2022-06-05 22:13:36 -04:00
Tyler Goodlet bbe1ff19ef Don't kill all containers on teardown XD 2022-06-05 22:13:36 -04:00
Tyler Goodlet f5de361f49 Import directly from `tractor.trionics` 2022-06-05 22:13:35 -04:00
Tyler Goodlet 1dca7766d2 Add notes about how to do mkts "trimming"
Which is basically just "deleting" rows from a column series.
You can only use the trim command from the `.cmd` cli and only with a so
called `LocalClient` currently; it's also sketchy af and caused
a machine to hang due to mem usage..

Ideally we can patch in this functionality for use by the rpc api
and have it not hang like this XD

Pertains to https://github.com/alpacahq/marketstore/issues/264
2022-06-05 22:13:08 -04:00
Tyler Goodlet b236dc72e4 Make vlm a float; discrete is so 80s 2022-06-05 22:13:08 -04:00
Tyler Goodlet 5d26609693 Add "no-tsdb-found" history load length defaults 2022-06-05 22:13:08 -04:00
Tyler Goodlet 4f36743f64 Only update prepended graphics when actually in view 2022-06-05 22:13:08 -04:00
Tyler Goodlet af6aad4e9c If a sample stream is already ded, just warn 2022-06-05 22:13:08 -04:00
Tyler Goodlet 88eccc1e15 Fill in label with pairs from `status` value of backend init msg 2022-06-05 22:08:00 -04:00
Tyler Goodlet 6e2e2fc03f Use `pendulum` for timestamp parsing 2022-05-15 13:45:44 -04:00
Tyler Goodlet b3f9c4f93d Only assert if input array actually has a size 2022-05-10 17:59:24 -04:00
Tyler Goodlet 09431aad85 Add support for no `._first.value` update shm prepends 2022-05-10 17:59:16 -04:00
Tyler Goodlet 8219307bf5 Double up shm buffer size 2022-05-10 17:59:08 -04:00
Tyler Goodlet b910eceb3b Add `ShmArray.ustruct()`: return an unstructured array copy
We return a copy (since a view doesn't seem to work..) of the
(field filtered) shm array contents which is the same index-length as
the source data.

Further, fence off the resource tracker disable-hack into a helper
routine.
2022-05-10 17:58:57 -04:00
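
A hedged sketch of what a field-filtered unstructured copy can look like using `numpy.lib.recfunctions`; the real `ShmArray.ustruct()` may differ in details.

```python
import numpy as np
from numpy.lib import recfunctions as rfn

def ustruct(arr: np.ndarray, fields: list[str]) -> np.ndarray:
    # a view doesn't generally work across struct fields, so return a
    # copy that is the same index-length as the source data.
    sub = arr[fields]
    return rfn.structured_to_unstructured(sub, copy=True)

dtype = [('time', 'f8'), ('open', 'f8'), ('close', 'f8')]
ohlc = np.array([(1.0, 10.0, 11.0), (2.0, 11.0, 12.0)], dtype=dtype)
print(ustruct(ohlc, ['open', 'close']))   # plain 2x2 float array
```
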
Tyler Goodlet 1657f51edc Manually fetch missing out-of-order history frames
It seems once in a while a frame can get missed or dropped (at least
with binance?) so in those cases, when the request erlangs is already at
max, we just manually request the missing frame and presume things will
work out XD

Further, discard out of order frames that are "from the future" that
somehow end up in the async queue once in a while? Not sure why this
happens but it seems thus far just discarding them is nbd.
2022-05-10 17:25:20 -04:00
Tyler Goodlet b1246446c2 Raise error on 'fatal' and 'error' log levels 2022-05-10 17:25:20 -04:00
Tyler Goodlet 769e803695 Write `mkts.yml` from template if one dne 2022-05-10 14:55:52 -04:00
Tyler Goodlet e196e9d1a0 Factor `marketstore` container specifics into `piker.data.marketstore` 2022-05-10 14:55:52 -04:00
Tyler Goodlet 9ddfae44d2 Parametrize and deliver (relevant) mkts config in `start_ahab()` 2022-05-10 14:55:52 -04:00
Tyler Goodlet 277ca29018 Always write missing history frames to tsdb (again) 2022-05-10 14:55:52 -04:00
Tyler Goodlet 26fddae3c0 Fix earliest frame-end not-yet-pushed check
Bleh/🤦, the ``end_dt`` in scope is not the "earliest" frame's
`end_dt` in the async response queue.. Parse the queue's latest epoch
and use **that** to compare to the last pushed datetime index..

Add more detailed logging to help debug any (un)expected datetime index
gaps.
2022-05-10 14:55:52 -04:00
Tyler Goodlet 30ddf63ec0 Handle gaps greater then a frame within a frame 2022-05-09 11:15:14 -04:00
Tyler Goodlet 8e08fb7b23 Add comment about un-reffed vars meant for use in shell 2022-05-09 11:15:14 -04:00
Tyler Goodlet fb9b6990ae Drop unneeded/commented cancel-by-msg code; roots perms wasn't the problem 2022-05-09 11:15:14 -04:00
Tyler Goodlet 1676bceee1 Don't offset the start index by a step 2022-05-09 11:15:14 -04:00
Tyler Goodlet c9a621fc2a Fix less-then-frame off by one slice, add db write toggle and disable 2022-05-09 11:15:14 -04:00
Tyler Goodlet 61e9db3229 Handle ``iter_dts()`` already exhausted edge case 2022-05-09 11:15:14 -04:00
Tyler Goodlet e4a900168d Add timeframe key to seconds map 2022-05-09 11:15:14 -04:00
Tyler Goodlet 40753ae93c Always write newly pulled frames to tsdb 2022-05-09 11:15:14 -04:00
Tyler Goodlet 969530ba19 Fix slice logic for less-then-frame tsdb overlap
When the tsdb has a last datum that is in the past less than a "frame's
worth" of sample steps we need to slice out only the data from the
latest frame that doesn't overlap; this fixes that slice logic..
Previously i dunno wth it was doing..
2022-05-09 11:15:14 -04:00
Tyler Goodlet 9b5f052597 Handle no sampler subs case on history broadcasts
When the market isn't open the feed layer won't create a subscriber
entry in the sampler broadcast loop and so if a manual call to
``broadcast()`` is made (like when trying to update a chart from
a history prepend) we need to handle that case and just broadcast
a random `-1` for now..BD
2022-05-09 11:15:14 -04:00