Commit Graph

2969 Commits (b7e1443618b867e1fdb79d8cf347b50bd2909237)

Author SHA1 Message Date
Tyler Goodlet d5b357b69a Raise `DataUnavailable` on >= 6 no data error events 2022-10-28 16:17:14 -04:00
Tyler Goodlet 610fb5f7c6 Drop `NoData` handler, just let it bubble 2022-10-28 16:17:14 -04:00
Tyler Goodlet 2b231ba631 Lul, fix timeframe key when writing history
There never was any underlying db bug, it was a hardcoded timeframe in
the column series write key.. Now we always assert a matching timeframe
in results.
2022-10-28 16:17:14 -04:00
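A minimal sketch of the fix's shape, with all names hypothetical (piker's actual marketstore key format may differ): derive the column-series write key from the requested timeframe instead of hardcoding one, and assert the result's timeframe matches before writing.

```python
# hypothetical names; sketch only
_tf_keys: dict[int, str] = {
    1: '1Sec',
    60: '1Min',
}

def ohlcv_write_key(fqsn: str, timeframe: int) -> str:
    # per-timeframe column series key, eg. 'btcusdt.binance/1Min/OHLCV'
    return f'{fqsn}/{_tf_keys[timeframe]}/OHLCV'

def write_history(fqsn: str, timeframe: int, result_timeframe: int) -> str:
    # the bug: a hardcoded timeframe segment in this key; the fix:
    # always assert the result's timeframe matches the requested one.
    assert result_timeframe == timeframe
    return ohlcv_write_key(fqsn, timeframe)
```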
Tyler Goodlet 286228c290 Only wait on backfill if provider supports timeframe 2022-10-28 16:17:14 -04:00
Tyler Goodlet a1a24da7b6 Make `binance` reject 1s OHLC history requests 2022-10-28 16:17:14 -04:00
Tyler Goodlet 553d0557b6 Raise `DataUnavailable` when a contract's 'earliest time' is hit 2022-10-28 16:17:14 -04:00
Tyler Goodlet 2f7b272d8c Make `ib` client's `.get_head_time()` (only) expect an fqsn 2022-10-28 16:17:14 -04:00
Tyler Goodlet dc1edeecda Do tsdb backloading to shm concurrently
Not only improves startup latency but also avoids a bug where the rt
buffer was being tsdb-history prepended *before* the backfilling of
recent data from the backend was complete, resulting in out-of-order
frames in shm.
2022-10-28 16:17:14 -04:00
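A hedged sketch of the ordering fix (task and buffer names invented here): run the tsdb query concurrently with the backend backfill, but gate the prepend into the rt buffer on an event that's set only once the recent-data backfill completes.

```python
import trio

async def backfill_from_backend(done: trio.Event) -> None:
    ...  # fetch the most-recent frames from the provider into shm
    done.set()

async def backload_from_tsdb(backfill_done: trio.Event) -> None:
    history = ...  # query the tsdb concurrently with the above
    await backfill_done.wait()  # never prepend before recent data lands
    ...  # now prepend `history` to the rt shm buffer, in order

async def load_history() -> None:
    backfill_done = trio.Event()
    async with trio.open_nursery() as n:
        n.start_soon(backfill_from_backend, backfill_done)
        n.start_soon(backload_from_tsdb, backfill_done)
```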
Tyler Goodlet 4ca7817735 Use feed-shm offsets in fill-arrow indexing arithmetic 2022-10-28 16:17:14 -04:00
Tyler Goodlet 5b63585398 Pack multi-chart region linking into helper
Factor the multi-sample-rate region UI connecting into a new helper
`link_views_with_region()` which reads in the shm buffer offsets from
the `Feed` and appropriately connects the fast and slow chart handlers
for the linear region graphics. Add detailed comments writeup for the
inter-sampling transform algebra.
2022-10-28 16:17:14 -04:00
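The "inter-sampling transform algebra" presumably amounts to an offset shift plus a sample-ratio scale; a speculative sketch (variable names and the exact direction of the mapping are assumptions, not lifted from `link_views_with_region()`):

```python
RATIO = 60  # 1s fast samples per 1m history sample

def fast_to_hist_x(
    fast_x: float,
    izero_rt: int,    # fast (rt) shm buffer offset
    izero_hist: int,  # slow (history) shm buffer offset
) -> float:
    # map a fast-chart x index onto the history chart's index space
    return izero_hist + (fast_x - izero_rt) / RATIO
```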
Tyler Goodlet 0000d9a314 Handle backends with no 1s OHLC history
If a history manager raises a `DataUnavailable` just assume the sample
rate isn't supported and that no shm prepends will be done. Further seed
the shm array in such cases as before from the 1m history's last datum.

Also, fix tsdb -> shm back-loading, cancelling tsdb queries when either
no array-data is returned or a frame is delivered whose start time is no
earlier than the earliest one already retrieved. Use strict timeframes
for every `Storage` API call.
2022-10-28 16:17:14 -04:00
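One plausible shape for the fallback (hedged; the `shm` methods and loader names here are stand-ins, not piker's real API):

```python
class DataUnavailable(Exception):
    '''Backend has no history at the requested sample rate.'''

async def maybe_backfill_1s(get_hist, shm_1s, last_1m_datum) -> None:
    try:
        frame = await get_hist(timeframe=1)
    except DataUnavailable:
        # sample rate unsupported: do no prepends, just seed the fast
        # buffer from the 1m history's last datum as before
        shm_1s.push([last_1m_datum])
        return
    shm_1s.prepend(frame)
```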
Tyler Goodlet f7ec66362e Only get dbus user on sudo-user-present 2022-10-28 16:17:14 -04:00
Tyler Goodlet b7ef0596b9 Drop remaining timeframe scanning from `.read_ohlcv()` 2022-10-28 16:17:14 -04:00
Tyler Goodlet 143e86a80c Handle super annoying mkts query bug..
Turns out querying for a high freq timeframe (like 1sec) will still
return a lower freq timeframe (like 1Min) SMH, and no idea if it's the
server or the client's fault, so we have to explicitly check the sample
step size and discard lower freq series-results. Do this inside
`Storage.read_ohlcv()` and return an empty `dict` when the wrong time
step is detected from the query result.

Further enforcements,
- both `.load()` and `read_ohlcv()` now require an explicit `timeframe:
  int` input to guarantee the time step of the output array.
- drop all calls to `.load()` with non-timeframe-specific input.
2022-10-28 16:17:14 -04:00
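A sketch of the guard, assuming a numpy structured array with an epoch-seconds 'time' field (the field name is an assumption):

```python
import numpy as np

def checked_ohlcv_result(array: np.ndarray, timeframe: int) -> dict:
    # infer the actual sample step from the first two rows
    times = array['time']
    if len(times) > 1 and times[1] - times[0] != timeframe:
        # wrong (lower freq) series delivered, eg. 1Min rows for
        # a 1Sec query: discard and hand back an empty result
        return {}
    return {timeframe: array}
```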
Tyler Goodlet 956c7d3435 Add concurrent multi-time-frame history loading
Our default sample periods are 60s (1m) for the history chart and 1s for
the fast chart. This patch adds concurrent loading of both (or more)
different sample period data sets using the existing loading code but
with new support for looping through a passed "timeframe" table which
points to each shm instance.

More detailed adjustments include:
- breaking the "basic" and tsdb loading into 2 new funcs:
  `basic_backfill()` and `tsdb_backfill()` the latter of which is run
  when the tsdb daemon is discovered.
- adjust the fast shm buffer to offset with one day's worth of 1s so
  that only up to a day is backfilled as history in the fast chart.
- adjust bus task starting in `manage_history()` to deliver back the
  offset indices for both fast and slow shms and set them on the
  `Feed` object as `.izero_hist/rt: int` values:
  - allows the chart-UI linked view region handlers to use the offsets
    in the view-linking-transform math to index-align the history and
    fast chart.
2022-10-28 16:17:14 -04:00
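A hedged sketch of the loop over the timeframe table (names invented): a mapping of sample period -> shm buffer lets one code path backfill every rate concurrently.

```python
import trio

async def backfill(timeframe: int, shm) -> None:
    ...  # run basic_backfill()/tsdb_backfill() for this sample period

async def load_all_timeframes(shms: dict[int, object]) -> None:
    # eg. shms = {60: hist_shm, 1: rt_shm}
    async with trio.open_nursery() as n:
        for timeframe, shm in shms.items():
            n.start_soon(backfill, timeframe, shm)
```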
Tyler Goodlet 330d16262e Add data-reset-task global state var
Allows keeping mutex state around data reset requests which (if more
than one are sent) can cause a throttling condition where ib's servers
will get slower and slower to conduct a reconnect. With this you can
have multiple ongoing contract requests without hitting that issue and
we can go back to having a nice 3s timeout on the history queries before
activating the hack.
2022-10-28 16:17:14 -04:00
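Sketched as a module-level lock (the real state var may be a simple flag; this is an assumption):

```python
import trio

_data_reset_in_flight = trio.Lock()

async def maybe_trigger_data_reset() -> None:
    if _data_reset_in_flight.locked():
        # a reset is already being conducted; piling on more requests
        # only makes ib's reconnects slower and slower
        return
    async with _data_reset_in_flight:
        ...  # conduct the (keycombo) data reset hack exactly once
```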
Tyler Goodlet c7f57b940c Add back adhoc symbol lookup support, some exchs info is off 2022-10-28 16:17:14 -04:00
Tyler Goodlet 27bd3c07af Comment format tweak 2022-10-28 16:17:14 -04:00
Tyler Goodlet 55dc27a197 Subtract duration instead of passing to `.subtract()` (facepalm) 2022-10-28 16:17:14 -04:00
Tyler Goodlet a11f20fac2 Fix `piker services`; `tractor.run()` is done.. 2022-10-28 16:17:14 -04:00
Tyler Goodlet daebb78755 Re-request quote feed on data reset events
When a network outage occurs or a data feed connection is reset, often
the `ib_insync` task will hang until some kind of (internal?) timeout
takes place or, in some (worst) cases, it never re-establishes (the
event stream) and thus the backend needs to restart or the live feed
will never resume..

In order to avoid this issue once and for all this patch implements an
additional (extremely simple) task that is started with the real-time
feed and simply waits for any market data reset events; when one is
detected, it restarts the `open_aio_quote_stream()` call in a loop
using a surrounding cancel scope.

Been meaning to implement this for ages and it's finally working!
2022-10-28 16:17:14 -04:00
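A minimal sketch of that watchdog, assuming a reset-event channel and stubbing the stream acm (only `trio`'s API here is real):

```python
import trio
from contextlib import asynccontextmanager

@asynccontextmanager
async def open_aio_quote_stream(symbol: str):
    yield None  # stand-in for the real asyncio-side quote stream

async def stream_with_reconnect(
    symbol: str,
    reset_events: trio.MemoryReceiveChannel,
) -> None:
    while True:
        with trio.CancelScope() as cs:
            async with trio.open_nursery() as n:

                async def watch_resets() -> None:
                    # any market data reset event tears the feed down..
                    await reset_events.receive()
                    cs.cancel()

                n.start_soon(watch_resets)
                async with open_aio_quote_stream(symbol) as stream:
                    await trio.sleep_forever()  # relay quotes (stub)
        # ..and the surrounding loop re-requests it
```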
Tyler Goodlet 90a395a069 Support no-disconnect on `open_aio_clients()` exit
Allows for easier restarts of certain `trio` side tasks without killing
the `asyncio`-side clients; support via flag.

Also fix a bug in `Client.bars()`: we need to return the duration on the
empty bars case..
2022-10-28 16:17:14 -04:00
Tyler Goodlet 23d0353934 Drop duplicate frame request
Must have gotten left in during refactor from the `trimeter` version?
Drop down to 6 years for 1m sampling.
2022-10-28 16:17:14 -04:00
Tyler Goodlet ede67ed184 Return history-frame duration from `.bars()`
This allows the history manager to know the decrement size for
`end_dt: datetime` on the next query if a no-data / gap case was
encountered; subtract this in `get_bars()` in such cases. Define the
expected `pendulum.Duration`s in the `.api._samplings` table.

Also add a bit of query latency profiling that we may use later to more
dynamically determine timeout driven data feed resets. Factor the `162`
error cases into a common exception handler block.
2022-10-28 16:17:14 -04:00
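The decrement logic, sketched (the 1m frame span of one day matches the `Client.bars()` commit below; the 1s span is an assumption):

```python
import pendulum

_samplings: dict[int, pendulum.Duration] = {
    1: pendulum.duration(seconds=2000),  # assumed 1s frame span
    60: pendulum.duration(days=1),       # 1m frames query 1 day
}

def next_end_dt(end_dt: pendulum.DateTime, timeframe: int):
    # on a no-data/gap (162) result, step the query window back by
    # exactly one frame's duration
    return end_dt - _samplings[timeframe]
```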
Tyler Goodlet 811d21e111 Explicit fast chart naming, auto-yrange the fast chart on increment 2022-10-28 16:17:14 -04:00
Tyler Goodlet 54567d33da More correct no-data output handling
When we get a timeout or a `NoData` condition still return a tuple of
empty sequences instead of `None` from `Client.bars()`. Move the
sampling period-duration table to module level.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 61ca5f7e19 Drop `trimeter`-ized concurrent history querying
It doesn't seem to be any slower on our least throttled backend
(binance) and it removes a bunch of hard-to-get-correct frame
re-ordering logic that I'm not sure ever really fully worked XD

Commented some issues we still need to resolve as well.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 7396624be0 Rework history frame request concurrency
Manual tinker-testing demonstrated that triggering data resets
completely independent of the frame request gets more throughput and
further, that repeated requests (for the same frame after cancelling on
the `trio`-side) can yield duplicate frame responses. Re-work the
dual-task structure to instead have one task wait indefinitely on the
frame response (and thus not trigger duplicate frames) and the 2nd data
reset task poll for the first task to complete in a poll loop which
terminates when the frame arrives via an event.

Dirty deatz:
- make `get_bars()` take an optional timeout (which will eventually be
  dynamically passed from the history mgmt machinery) and move request
  logic inside a new `query()` closure meant to be spawned in a task
  which sets an event on frame arrival, add data reset poll loop in the
  main/parent task, deliver result on nursery completion.
- handle frame request cancelled event case without crash.
- on no-frame result (due to real history gap) hack in a 1 day decrement
  case which we need to eventually allow the caller to control likely
  based on measured frame rx latency.
- make `wait_on_data_reset()` a predicate whose output indicates reset
  success, as well as `trio.Nursery.start()` compatible so that it can
  be started in a new task with the started values yielded being
  a cancel scope and completion event.
- drop the legacy `backfill_bars()`, no longer used.
2022-10-28 16:17:13 -04:00
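A condensed, hedged sketch of that structure (everything but `trio`'s API is a hypothetical stand-in):

```python
import trio

async def wait_on_data_reset(
    timeout: float,
    *,
    task_status=trio.TASK_STATUS_IGNORED,
) -> bool:
    with trio.CancelScope() as cs:
        done = trio.Event()
        task_status.started((cs, done))
        ...  # trigger the reset hack, wait for reconnect
        done.set()
        return True
    return False  # reached only if the scope was cancelled

async def get_bars(fetch_frame, timeout: float = 3):
    result = None
    frame_ready = trio.Event()

    async def query() -> None:
        nonlocal result
        # wait indefinitely: re-requesting the same frame after
        # a trio-side cancel can yield duplicate frame responses
        result = await fetch_frame()
        frame_ready.set()

    async with trio.open_nursery() as n:
        n.start_soon(query)
        while not frame_ready.is_set():
            with trio.move_on_after(timeout):
                await frame_ready.wait()
                break
            # frame still pending: fire a data reset, then poll again
            cs, done = await n.start(wait_on_data_reset, timeout)
            await done.wait()

    return result  # delivered on nursery completion
```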
Tyler Goodlet 25b90afbdb Add `timeframe` input to `kraken` history api 2022-10-28 16:17:13 -04:00
Tyler Goodlet 72dfeb2b4e Pass back internal cancel scope from data reset task 2022-10-28 16:17:13 -04:00
Tyler Goodlet 6b34c9e866 Temporarily disable error on pos size mismatch 2022-10-28 16:17:13 -04:00
Tyler Goodlet e7ec01b8e6 Pass in default history time of 1 min
Adjust all history query machinery to pass a `timeframe: int` in seconds
and set default of 60 (aka 1m) such that history views from here forward
will be 1m sampled OHLCV. Further, when the tsdb is detected as up, load
a full 10 years of data if possible on the 1m; backends will eventually
get a config section (`brokers.toml`) that allows users to tune this.
2022-10-28 16:17:13 -04:00
Tyler Goodlet fce7055c62 Make `binance` history api accept a timeframe 2022-10-28 16:17:13 -04:00
Tyler Goodlet bf7d5e9a71 Make `marketstore` storage api timeframe aware
The `Store.load()`, `.read_ohlcv()`, `.write_ohlcv()` and
`.delete_ts()` methods can now take a `timeframe: Optional[float]`
param which is used to look up the appropriate sampling-period
table-key from `marketstore`.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 2a866dde65 Make history routines `timeframe` aware
Allow data feed sub-system to specify the timeframe (aka OHLC sample
period) to the `open_history_client()` delivered history fetching API.
Factor the data keycombo hack into a new routine to be used also from
the history backfiller code when request latency increases; there is
a first draft at trying to use the feed reset to speed up 1m frame
throttling by timing out on the history frame response, but it needs
a lot of fine tuning.
2022-10-28 16:17:13 -04:00
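The delivered fetcher's signature presumably grows a leading sample-period arg; a sketch (parameter shapes assumed from the description, not copied from piker):

```python
from datetime import datetime
from typing import Optional

async def get_hist(
    timeframe: float,  # OHLC sample period in seconds
    end_dt: Optional[datetime] = None,
    start_dt: Optional[datetime] = None,
):
    '''Fetch one OHLCV frame ending at `end_dt` at `timeframe`.'''
    ...
```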
Tyler Goodlet 220981e718 Add 1m ohlc sample rate support to `Client.bars()`; frame query is 1 day 2022-10-28 16:17:13 -04:00
Tyler Goodlet 8537a4091b Use new `Status.cancel_called` in EMS msg loops 2022-10-28 16:16:45 -04:00
Tyler Goodlet 71a11a23bd Add `Status.cancel_called: bool`
This is a simpler (and oddly more `trio`-nic and/or SC) way to handle
the cancelled-before-acked race for order dialogs. Will allow keeping
the `.req` field as solely an `Order` msg.
2022-10-28 16:16:45 -04:00
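Sketched shape (field set heavily simplified; `Status` here is a stand-in, not piker's real msg type):

```python
from dataclasses import dataclass

@dataclass
class Status:
    resp: str                    # eg. 'pending', 'open', 'canceled'
    req: object                  # solely an `Order` (client-side) msg
    cancel_called: bool = False  # client cancelled before the ack

# EMS relay loop usage: on a brokerd ack for a dialog whose status has
# `cancel_called` set, immediately relay the cancel to the broker.
```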
Tyler Goodlet fa368b1263 Just getitem-access the 'action' from the req msg 2022-10-28 16:16:45 -04:00
Tyler Goodlet e6dd1458f8 `kraken`: the apiflows chain map needs a `dict` 2022-10-28 16:16:45 -04:00
Tyler Goodlet 9486d993ce Drop order mode settings change logmsgs to `.runtime` again 2022-10-28 16:16:45 -04:00
Tyler Goodlet 30994dac10 Better handle order-cancelled-but-not-yet-acked races
When the client is faster than a `brokerd` at submitting and cancelling
an order we run into the case where we need to specify that the EMS
cancels the order-flow as soon as the brokerd's ack arrives. Previously
we were stashing a `BrokerdCancel` msg as the `Status.req` msg (to be
both tested for as an "already cancelled" marker and sent to the broker
immediately on ack arrival), but for such cases we can't use that msg
to find the fqsn (since only the client side msgs have it defined),
which is required by the new `Router.client_broadcast()`.

So, since `Status.req` is supposed to be a client-side flow msg anyway,
and we need the fqsn for client broadcasting, we change this `.req`
value to the client's submitted `Cancel` msg (thus rectifying the
missing `Router.client_broadcast()` fqsn input issue) and build the
`BrokerdCancel` request from that `Cancel` inline in the relay loop
from the `.req: Cancel` status msg lookup.

Further we allow `Cancel` msgs to define an `.account` and adjust the
order mode loop to expect `Cancel` source requests in cancelled status
updates.
2022-10-28 16:16:45 -04:00
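A hedged sketch of the relay-loop handling (msg types are simplified stand-ins for piker's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cancel:
    oid: str
    symbol: str   # the fqsn: needed by `Router.client_broadcast()`
    account: str = ''

@dataclass
class BrokerdCancel:
    oid: str
    reqid: str
    account: str

def on_broker_ack(status, reqid: str) -> Optional[BrokerdCancel]:
    if status.cancel_called:
        # `.req` is now the client's `Cancel` (a client-side flow msg),
        # so build the `BrokerdCancel` inline from it
        req: Cancel = status.req
        return BrokerdCancel(oid=req.oid, reqid=reqid, account=req.account)
    return None
```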
Tyler Goodlet 8a61211c8c Handle brokerd errors even when no client-side-status found 2022-10-28 16:16:45 -04:00
Tyler Goodlet c43f7eb656 Fix missing `costmin: float` field in pair msgs 2022-10-28 16:16:45 -04:00
goodboy d05caa4b02
Merge pull request #411 from pikers/ci_fix_tractor_testing
Drop `tractor.testing` import in qt tests
2022-10-28 16:15:47 -04:00
Tyler Goodlet 63e9af002d Drop `tractor.testing` import in qt tests 2022-10-28 16:09:55 -04:00
goodboy 5144299f4f
Merge pull request #408 from pikers/offline_dark_clearing
Offline dark clearing
2022-10-10 09:25:59 -04:00
Tyler Goodlet c437f9370a Factor out all `maybe_open_context()` guff 2022-10-07 14:13:52 -04:00
Tyler Goodlet 94f81587ab Cache EMS trade relay tasks on feed fqsn
Except for paper accounts (in which case we need a trades dialog and
paper engine per symbol to enable simulated clearing) we can rely on the
instrument feed (symbol name) to be the caching key. Utilize
`tractor.trionics.maybe_open_context()` and the new key-as-callable
support in the paper case to ensure we have separate paper clearing
loops per symbol.

Requires https://github.com/goodboy/tractor/pull/329
2022-10-07 14:13:52 -04:00
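Sketch of the key choice (tractor's `maybe_open_context()` is real, but its exact signature isn't reproduced here; the key-callable below is illustrative only):

```python
def trades_dialog_key(kwargs: dict) -> str:
    # both paper and live relays cache on the feed's fqsn; computing
    # the key from the acm's kwargs (the new key-as-callable support)
    # is what lets the paper engine get one clearing loop per symbol
    fqsn: str = kwargs['fqsn']
    return fqsn
```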
Tyler Goodlet 2bc25e3593 Repair already-open order relay, fix causality dilemma
With the refactor of the dark loop into a daemon task already-open order
relaying from a `brokerd` was broken since no subscribed clients were
registered prior to the relay loop sending status msgs for such existing
live orders. Repair that by adding one more synchronization phase to the
`Router.open_trade_relays()` task: deliver a `client_ready: trio.Event`
which is set by the client task once the client stream has been
established and don't start the `brokerd` order dialog relay loop until
this event is ready.

Further implementation deats:
- factor the `brokerd` relay caching back into its own `@acm` method:
  `maybe_open_brokerd_dialog()` since we do want (but only this) stream
  singleton-cached per broker backend.
- spawn all relay tasks on every entry for the moment until we figure
  out what we're caching against (any pre-existing client, right? Which
  would mean there's an entry in the `.subscribers` table?)
- rename `_DarkBook` -> `DarkBook` and `DarkBook.orders` -> `.triggers`
2022-10-07 14:13:52 -04:00
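A sketch of the added synchronization phase (task names condensed; only `trio`'s API is real):

```python
import trio

async def open_trade_relays(start_client_stream, relay_brokerd_msgs):
    client_ready = trio.Event()
    async with trio.open_nursery() as n:
        # the client task sets the event once its stream is established
        n.start_soon(start_client_stream, client_ready)
        await client_ready.wait()
        # only now relay (already-open) order statuses from brokerd:
        # no subscriber can miss them
        n.start_soon(relay_brokerd_msgs)
```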