Commit Graph

3101 Commits (a7cebc0026a892a9e20073950f1a939af0fd1e50)

Author SHA1 Message Date
Tyler Goodlet 55dc27a197 Subtract duration instead of passing to `.subtract()` (facepalm) 2022-10-28 16:17:14 -04:00
Tyler Goodlet a11f20fac2 Fix `piker services`; `tractor.run()` is done.. 2022-10-28 16:17:14 -04:00
Tyler Goodlet daebb78755 Re-request quote feed on data reset events
When a network outage occurs or the data feed connection is reset, the
`ib_insync` task will often hang until some kind of (internal?) timeout takes
place or, in some (worst) cases, it never re-establishes the event
stream and thus the backend needs to restart or the live feed will
never resume.

To avoid this issue once and for all, this patch implements an
additional (extremely simple) task that is started with the real-time
feed and simply waits for any market data reset events; when one is
detected it restarts the `open_aio_quote_stream()` call in a loop using
a surrounding cancel scope.

Been meaning to implement this for ages and it's finally working!
2022-10-28 16:17:14 -04:00
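
A minimal sketch of that restart pattern in plain `trio`; the `open_quote_stream`, `wait_for_reset` and `publish` names are hypothetical stand-ins for the backend's actual feed and reset-event hooks, not the real `ib` module API:

```python
import trio


async def relay_quotes_with_restart(
    open_quote_stream,   # hypothetical acm factory yielding an async quote iterator
    wait_for_reset,      # hypothetical awaitable: returns on a market-data reset event
    publish,             # callback receiving each quote
):
    '''Restart the quote stream whenever a market-data reset is detected.'''
    while True:
        restart = False

        async with trio.open_nursery() as n:

            async def cancel_on_reset():
                nonlocal restart
                await wait_for_reset()
                restart = True
                n.cancel_scope.cancel()

            n.start_soon(cancel_on_reset)

            async with open_quote_stream() as quotes:
                async for quote in quotes:
                    publish(quote)

            # stream ended on its own: stop the reset-waiter task too
            n.cancel_scope.cancel()

        if not restart:
            break
```
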
Tyler Goodlet 90a395a069 Support no-disconnect on `open_aio_clients()` exit
Allows for easier restarts of certain `trio`-side tasks without killing
the `asyncio`-side clients; supported via a flag.

Also fix a bug in `Client.bars()`: we need to return the duration in the
empty-bars case.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 23d0353934 Drop duplicate frame request
Must have gotten left in during the refactor from the `trimeter` version?
Drop down to 6 years for 1m sampling.
2022-10-28 16:17:14 -04:00
Tyler Goodlet ede67ed184 Return history-frame duration from `.bars()`
This allows the history manager to know the decrement size for
`end_dt: datetime` on the next query if a no-data / gap case was
encountered; subtract this in `get_bars()` in such cases. Define the
expected `pendulum.Duration`s in the `.api._samplings` table.

Also add a bit of query latency profiling that we may use later to more
dynamically determine timeout-driven data feed resets. Factor the `162`
error cases into a common exception handler block.
2022-10-28 16:17:14 -04:00
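
The gap-decrement step might look roughly like the sketch below; the `_samplings` values and the `next_end_dt_on_gap()` helper are illustrative only (the real table lives in `.api._samplings`):

```python
import pendulum

# illustrative sampling-period table: timeframe in seconds -> frame duration
_samplings: dict[int, pendulum.Duration] = {
    1: pendulum.duration(seconds=2000),
    60: pendulum.duration(days=1),
}


def next_end_dt_on_gap(
    end_dt: pendulum.DateTime,
    timeframe: int,
) -> pendulum.DateTime:
    '''On a no-data / gap frame, step the query end time back by one
    frame's duration so the next request probes further into history.

    Note: `pendulum`'s `.subtract()` takes keyword units (days=..., etc.),
    so a `Duration` has to be subtracted with the `-` operator instead.
    '''
    duration = _samplings[timeframe]
    return end_dt - duration
```
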
Tyler Goodlet 811d21e111 Explicit fast chart naming, auto-yrange the fast chart on increment 2022-10-28 16:17:14 -04:00
Tyler Goodlet 54567d33da More correct no-data output handling
When we get a timeout or a `NoData` condition, still return a tuple of
empty sequences instead of `None` from `Client.bars()`. Move the
sampling period-duration table to module level.
2022-10-28 16:17:14 -04:00
Tyler Goodlet 61ca5f7e19 Drop `trimeter`-ized concurrent history querying
It doesn't seem to be any slower on our least throttled backend
(binance) and it removes a bunch of hard-to-get-correct frame
re-ordering logic that I'm not sure ever really fully worked XD

Commented some issues we still need to resolve as well.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 7396624be0 Rework history frame request concurrency
Manual tinker-testing demonstrated that triggering data resets
completely independently of the frame request gets more throughput and,
further, that repeated requests (for the same frame after cancelling on
the `trio`-side) can yield duplicate frame responses. Re-work the
dual-task structure to instead have one task wait indefinitely on the
frame response (and thus not trigger duplicate frames) and a 2nd data
reset task poll for the first task to complete in a poll loop which
terminates when the frame arrives via an event.

Dirty deatz:
- make `get_bars()` take an optional timeout (which will eventually be
  dynamically passed from the history mgmt machinery) and move request
  logic inside a new `query()` closure meant to be spawned in a task
  which sets an event on frame arrival, add data reset poll loop in the
  main/parent task, deliver result on nursery completion.
- handle frame request cancelled event case without crash.
- on no-frame result (due to real history gap) hack in a 1 day decrement
  case which we need to eventually allow the caller to control likely
  based on measured frame rx latency.
- make `wait_on_data_reset()` a predicate whose output indicates
  reset success, as well as `trio.Nursery.start()` compatible so that it
  can be started in a new task with the started values yielded being
  a cancel scope and completion event.
- drop the legacy `backfill_bars()`, no longer used.
2022-10-28 16:17:13 -04:00
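
A rough sketch of that dual-task structure with `trio`; `request_frame` and `trigger_data_reset` are hypothetical stand-ins for the real frame-request and feed-reset hooks:

```python
import trio


async def get_bars_with_resets(
    request_frame,         # hypothetical: awaits and returns the raw frame
    trigger_data_reset,    # hypothetical: fires a market-data reset
    timeout: float = 4.0,  # poll period before (re)triggering a reset
):
    '''One task waits indefinitely on the frame response (so no duplicate
    requests are made) while the parent polls, (re)triggering data resets
    until the frame-arrival event fires.
    '''
    result: list = []
    frame_arrived = trio.Event()

    async def query():
        frame = await request_frame()
        result.append(frame)
        frame_arrived.set()

    async with trio.open_nursery() as n:
        n.start_soon(query)

        # poll loop: keep nudging the feed until the frame shows up
        while not frame_arrived.is_set():
            with trio.move_on_after(timeout):
                await frame_arrived.wait()
                break
            await trigger_data_reset()

    # deliver the result once the nursery (both tasks) completes
    return result[0] if result else None
```
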
Tyler Goodlet 25b90afbdb Add `timeframe` input to `kraken` history api 2022-10-28 16:17:13 -04:00
Tyler Goodlet 72dfeb2b4e Pass back internal cancel scope from data reset task 2022-10-28 16:17:13 -04:00
Tyler Goodlet 6b34c9e866 Temporarily disable error on pos size mismatch 2022-10-28 16:17:13 -04:00
Tyler Goodlet e7ec01b8e6 Pass in default history time of 1 min
Adjust all history query machinery to pass a `timeframe: int` in seconds
and set a default of 60 (aka 1m) such that history views from here forward
will be 1m sampled OHLCV. Further, when the tsdb is detected as up, load
a full 10 years of data if possible on the 1m - backends will eventually
get a config section (`brokers.toml`) that allows users to tune this.
2022-10-28 16:17:13 -04:00
Tyler Goodlet fce7055c62 Make `binance` history api accept a timeframe 2022-10-28 16:17:13 -04:00
Tyler Goodlet bf7d5e9a71 Make `marketstore` storage api timeframe aware
The `Store.load()`, `.read_ohlcv()`, `.write_ohlcv()` and
`.delete_ts()` methods can now take a `timeframe: Optional[float]` param which
is used to look up the appropriate sampling-period table key from
`marketstore`.
2022-10-28 16:17:13 -04:00
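
The table-key lookup might be as simple as the sketch below; the `_tf_in_1s` map and `mk_tbk_key()` helper are illustrative, not the actual `marketstore` storage-layer code:

```python
from typing import Optional

# map a sampling period in seconds to a marketstore timeframe suffix
_tf_in_1s: dict[float, str] = {
    1: '1Sec',
    60: '1Min',
}


def mk_tbk_key(
    symbol: str,
    timeframe: Optional[float] = None,
) -> str:
    '''Build a marketstore "time bucket key" for a symbol at a given
    sampling period, e.g. 'BTCUSDT/1Min/OHLCV'.
    '''
    tf_key = _tf_in_1s[timeframe or 1]
    return f'{symbol}/{tf_key}/OHLCV'
```
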
Tyler Goodlet 2a866dde65 Make history routines `timeframe` aware
Allow the data feed sub-system to specify the timeframe (aka OHLC sample
period) to the history-fetching API delivered by `open_history_client()`.
Factor the data keycombo hack into a new routine to be used also from
the history backfiller code when request latency increases; there is
a first draft at trying to use the feed reset to speed up 1m frame
throttling by timing out on the history frame response, but it needs
a lot of fine tuning.
2022-10-28 16:17:13 -04:00
Tyler Goodlet 220981e718 Add 1m ohlc sample rate support to `Client.bars()`; frame query is 1 day 2022-10-28 16:17:13 -04:00
Tyler Goodlet 8537a4091b Use new `Status.cancel_called` in EMS msg loops 2022-10-28 16:16:45 -04:00
Tyler Goodlet 71a11a23bd Add `Status.cancel_called: bool`
This is a simpler (and oddly more `trio`-nic and/or SC) way to handle
the cancelled-before-acked race for order dialogs. Will allow keeping
the `.req` field as solely an `Order` msg.
2022-10-28 16:16:45 -04:00
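
A minimal stand-in for what that field change implies (using `msgspec.Struct` purely for illustration; this is not the actual piker message schema):

```python
from msgspec import Struct


class Order(Struct):
    '''Client-side order msg (assumed minimal fields).'''
    oid: str
    symbol: str   # fqsn
    action: str   # 'buy' | 'sell' | 'alert'


class Status(Struct):
    '''Order-dialog status (assumed minimal fields).'''
    resp: str     # e.g. 'pending', 'open', 'canceled'
    req: Order    # stays solely an `Order` msg
    # set when the client requested a cancel before the broker ack
    # arrived; checked instead of overwriting `.req` with a cancel msg.
    cancel_called: bool = False
```
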
Tyler Goodlet fa368b1263 Just getitem-access the 'action' from the req msg 2022-10-28 16:16:45 -04:00
Tyler Goodlet e6dd1458f8 `kraken`: the apiflows chain map needs a `dict` 2022-10-28 16:16:45 -04:00
Tyler Goodlet 9486d993ce Drop order mode settings change logmsgs to `.runtime` again 2022-10-28 16:16:45 -04:00
Tyler Goodlet 30994dac10 Better handle order-cancelled-but-not-yet-acked races
When the client is faster than a `brokerd` at submitting and cancelling
an order we run into the case where we need to specify that the EMS
cancels the order-flow as soon as the brokerd's ack arrives. Previously
we were stashing a `BrokerdCancel` msg as the `Status.req` msg (to be
both tested for as an "already cancelled" case and sent immediately to the
broker on ack arrival), but in such
cases we can't use that msg to find the fqsn (since only the client-side
msgs have it defined) which is required by the new
`Router.client_broadcast()`.

So, since `Status.req` is supposed to be a client-side flow msg anyway,
and we need the fqsn for client broadcasting, we change this `.req`
value to the client's submitted `Cancel` msg (thus rectifying the
missing `Router.client_broadcast()` fqsn input issue) and build the
`BrokerdCancel` request from that `Cancel` inline in the relay loop
from the `.req: Cancel` status msg lookup.

Further we allow `Cancel` msgs to define an `.account` and adjust the
order mode loop to expect `Cancel` source requests in cancelled status
updates.
2022-10-28 16:16:45 -04:00
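
A sketch of the relay-loop branch this describes; the `Cancel` / `BrokerdCancel` field sets here are assumptions for illustration, not the real msg schemas:

```python
from msgspec import Struct


class Cancel(Struct):
    '''Client-side cancel request (assumed fields).'''
    oid: str
    symbol: str      # fqsn, needed by `Router.client_broadcast()`
    account: str = ''


class BrokerdCancel(Struct):
    '''Broker-daemon cancel msg (assumed fields).'''
    oid: str
    reqid: str
    account: str = ''


def maybe_cancel_on_ack(status_req, reqid: str):
    '''If the stashed `Status.req` is the client's `Cancel` (submitted
    before the broker ack arrived), build the `BrokerdCancel` to fire now.
    '''
    if isinstance(status_req, Cancel):
        return BrokerdCancel(
            oid=status_req.oid,
            reqid=reqid,
            account=status_req.account,
        )
    return None
```
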
Tyler Goodlet 8a61211c8c Handle brokerd errors even when no client-side-status found 2022-10-28 16:16:45 -04:00
Tyler Goodlet c43f7eb656 Fix missing `costmin: float` field in pair msgs 2022-10-28 16:16:45 -04:00
goodboy d05caa4b02
Merge pull request #411 from pikers/ci_fix_tractor_testing
Drop `tractor.testing` import in qt tests
2022-10-28 16:15:47 -04:00
Tyler Goodlet 63e9af002d Drop `tractor.testing` import in qt tests 2022-10-28 16:09:55 -04:00
goodboy 5144299f4f
Merge pull request #408 from pikers/offline_dark_clearing
Offline dark clearing
2022-10-10 09:25:59 -04:00
Tyler Goodlet c437f9370a Factor out all `maybe_open_context()` guff 2022-10-07 14:13:52 -04:00
Tyler Goodlet 94f81587ab Cache EMS trade relay tasks on feed fqsn
Except for paper accounts (in which case we need a trades dialog and
paper engine per symbol to enable simulated clearing), we can rely on the
instrument feed (symbol name) as the caching key. Utilize
`tractor.trionics.maybe_open_context()` and the new key-as-callable
support in the paper case to ensure we have separate paper clearing
loops per symbol.

Requires https://github.com/goodboy/tractor/pull/329
2022-10-07 14:13:52 -04:00
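
The per-backend cache keying might look roughly like this; it's a sketch assuming the new key-as-callable support lets the key be computed from the target acm's kwargs (see the linked tractor PR for the actual interface):

```python
def relay_cache_key(**kwargs) -> str:
    '''Compute the `maybe_open_context()` cache key for a trades relay.

    Live backends share one relay per instrument feed (the fqsn), while
    the paper engine needs a distinct simulated-clearing loop per symbol,
    so its key is namespaced separately.
    '''
    fqsn: str = kwargs['fqsn']
    if kwargs.get('broker') == 'paper':
        return f'paper.{fqsn}'
    return fqsn
```
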
Tyler Goodlet 2bc25e3593 Repair already-open order relay, fix causality dilemma
With the refactor of the dark loop into a daemon task, already-open order
relaying from a `brokerd` was broken since no subscribed clients were
registered prior to the relay loop sending status msgs for such existing
live orders. Repair that by adding one more synchronization phase to the
`Router.open_trade_relays()` task: deliver a `client_ready: trio.Event`
which is set by the client task once the client stream has been
established, and don't start the `brokerd` order dialog relay loop until
this event is set.

Further implementation deats:
- factor the `brokerd` relay caching back into its own `@acm` method:
  `maybe_open_brokerd_dialog()` since we do want this (but only this) stream
  singleton-cached per broker backend.
- spawn all relay tasks on every entry for the moment until we figure
  out what we're caching against (any pre-existing client, which
  would mean there's an entry in the `.subscribers` table?)
- rename `_DarkBook` -> `DarkBook` and `DarkBook.orders` -> `.triggers`
2022-10-07 14:13:52 -04:00
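
A toy `trio` sketch of that extra synchronization phase; `existing_statuses` and the sleep stand in for the real client-stream setup and brokerd relay machinery:

```python
import trio


async def open_trade_relays_sketch(existing_statuses: list) -> list:
    '''Only start relaying (already-open order) status msgs once the
    client stream is established, signalled via a `client_ready` event.
    '''
    client_ready = trio.Event()
    delivered: list = []

    async def client_task():
        await trio.sleep(0.1)   # stand-in for client stream setup
        client_ready.set()

    async def relay_loop():
        # without this wait, pre-existing live-order statuses would be
        # sent before any client is subscribed and thus dropped.
        await client_ready.wait()
        delivered.extend(existing_statuses)

    async with trio.open_nursery() as n:
        n.start_soon(client_task)
        n.start_soon(relay_loop)

    return delivered
```
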
Tyler Goodlet 1d9ab7b0de More direct import 2022-10-07 14:13:52 -04:00
Tyler Goodlet 4c96a4878e Process unknown order mode msgs 2022-10-07 14:13:52 -04:00
Tyler Goodlet 8cd56cb6d3 Flip ems-side-client (`OrderBook`) to be a struct
`@dataclass` is so 2 years ago ;)

Also rename `.update()` -> `.send_update()` to be a bit more explicit
about actually sending an update msg.
2022-10-07 14:13:52 -04:00
Tyler Goodlet c246dcef6f Drop uuid from notify func inputs 2022-10-07 14:13:52 -04:00
Tyler Goodlet 26d6e10ad7 Parameterize duration, pprint msg 2022-10-07 14:13:52 -04:00
Tyler Goodlet 3924c66bd0 Move headless notifies into `.client_broadcast()` 2022-10-07 14:13:52 -04:00
Tyler Goodlet 2fbfe583dd Drop the `Router.clients: set`, `.subscribers` is enough 2022-10-07 14:13:52 -04:00
Tyler Goodlet 525f805cdb Port order mode to new notify routine 2022-10-07 14:13:52 -04:00
Tyler Goodlet b65c02336d Don't short circuit relay loop when headless
If no clients are connected we now process as normal and try to fire
a desktop notification on linux.
2022-10-07 14:13:52 -04:00
Tyler Goodlet d3abfce540 Start notify mod, linux only 2022-10-07 14:13:52 -04:00
Tyler Goodlet 49433ea87d Run dark-clear-loop in daemon task
This enables "headless" dark order matching and clearing where an `emsd`
daemon subactor can be left running with active dark (or other
algorithmic) orders which will still trigger despite there being no
attached, controlling ems-client.

Impl details:
- rename/add `Router.maybe_open_trade_relays()` which now does all work
  of starting up ems-side long-lived clearing and relay tasks and the
  associated data feed; make it a `Nursery.start()`-able task instead of
  an `@acm`.
- drop `open_brokerd_trades_dialog()` and move/factor contents into the
  above method.
- add support for a `router.client_broadcast('all', msg)` to wholesale
  fan out a msg to all clients.
2022-10-07 14:13:52 -04:00
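
A sketch of the `Nursery.start()`-able daemon-task shape this describes (hypothetical names, plain `trio` only):

```python
import trio


async def maybe_open_trade_relays_sketch(
    fqsn: str,
    *,
    task_status=trio.TASK_STATUS_IGNORED,
):
    '''Startup as a `Nursery.start()`-able daemon task instead of an
    `@acm`: signal readiness via `task_status` then stay alive so dark
    orders keep clearing even with no ems-client attached.
    '''
    # ... start the data feed, brokerd dialog and dark-clear loop here ...
    relay = {'fqsn': fqsn}

    # hand the relay back to the `nursery.start()` caller
    task_status.started(relay)

    await trio.sleep_forever()


async def main():
    async with trio.open_nursery() as service_nursery:
        relay = await service_nursery.start(
            maybe_open_trade_relays_sketch, 'xbtusdt.kraken',
        )
        print('relay up:', relay)
        service_nursery.cancel_scope.cancel()


trio.run(main)
```
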
goodboy 31b0d8cee8
Merge pull request #402 from pikers/multi_client_order_mgt
Multi client order mgt
2022-10-05 01:46:09 -04:00
Tyler Goodlet 35871d0213 Support line update from `Order` msg in `.on_submit()` 2022-10-05 01:41:18 -04:00
Tyler Goodlet 4877af9bc3 Add pub-sub broadcasting
Establishes a more formalized subscription-based fan-out pattern for ems
clients who subscribe to order flow for a particular symbol (the fqsn
is the default subscription key for now).

Make `Router.client_broadcast()` take a `sub_key: str` value which
determines the set of clients to forward a message to and drop all such
manually defined broadcast loops from task (func) code. Also add
`.get_subs()` which (hackily) allows getting the set of clients for
a given sub key where any stream that is detected as "closed" is
discarded in the output. Further we simplify to `Router.dialogs:
defaultdict[str, set[tractor.MsgStream]]` and `.subscriptions` as maps
to sets of streams for much easier broadcast management/logic using set
operations inside `.client_broadcast()`.
2022-10-05 01:41:18 -04:00
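
A toy sketch of the set-based subscription table and broadcast helper; the stream objects are assumed to expose `.send()` and a `closed` flag purely for illustration:

```python
from collections import defaultdict


class RouterSketch:
    '''Set-based pub-sub tables keyed on the subscription key (fqsn).'''

    def __init__(self):
        # sub key (fqsn, or 'all') -> set of subscribed client streams
        self.subscribers: defaultdict[str, set] = defaultdict(set)

    def get_subs(self, sub_key: str) -> set:
        # (hackily) drop any stream already detected as closed
        return {
            stream for stream in self.subscribers[sub_key]
            if not stream.closed
        }

    async def client_broadcast(self, sub_key: str, msg: dict) -> None:
        subs = (
            set().union(*self.subscribers.values())
            if sub_key == 'all'
            else self.get_subs(sub_key)
        )
        for stream in subs:
            await stream.send(msg)
```
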
Tyler Goodlet 909e068121 Support multi-client order-dialog management
This patch was originally to fix a bug where new clients who
re-connected to an `emsd` that was running a paper engine were not
getting updates from new fills and/or cancels. It turns out the solution
is more general: now, any client that creates an order dialog will be
subscribed to receive updates on the order flow set mapped for that
symbol/instrument as long as the client has registered for that
particular fqsn with the EMS. This means re-connecting clients as well
as "monitoring" clients can see the same orders, alerts, fills and
clears.

Impl details:
- change all var names spelled as `dialogues` -> `dialogs` to be
  murican.
- make `Router.dialogs: dict[str, defaultdict[str, list]]` so that each
  dialog id (oid) maps to a set of potential subscribing ems clients.
- add `Router.fqsn2dialogs: dict[str, list[str]]` a map of fqsn entries to
  sets of oids.
- adjust all core task code to make appropriate lookups into these 2 new
  tables instead of being handed specific client streams as input.
- start the `translate_and_relay_brokerd_events` task as a daemon task
  that lives with the particular `TradesRelay` such that dialogs cleared
  while no client is connected are still processed.
- rename `TradesRelay.brokerd_dialogue` -> `.brokerd_stream`
- broadcast all status msgs to all subscribed clients in the relay loop.
- always de-reg each client stream from the `Router.dialogs` table on close.
2022-10-05 01:41:18 -04:00
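
A minimal sketch of the two lookup tables (sets used throughout for simplicity; names approximate, not the actual `Router` code):

```python
from collections import defaultdict


class DialogTablesSketch:
    '''oid -> subscribed client streams, plus fqsn -> active oids.'''

    def __init__(self):
        # order dialog id (oid) -> set of subscribed ems client streams
        self.dialogs: defaultdict[str, set] = defaultdict(set)
        # fqsn -> set of oids active on that instrument
        self.fqsn2dialogs: defaultdict[str, set] = defaultdict(set)

    def register(self, oid: str, fqsn: str, client_stream) -> None:
        self.dialogs[oid].add(client_stream)
        self.fqsn2dialogs[fqsn].add(oid)

    def deregister_client(self, client_stream) -> None:
        # always de-reg a client stream from every dialog on close
        for subs in self.dialogs.values():
            subs.discard(client_stream)
```
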
Tyler Goodlet cf835b97ca Add some info logs around paper fills 2022-10-05 01:41:18 -04:00
Tyler Goodlet 30bce42c0b Don't spin paper clear loop on non-clearing ticks
Not sure what exactly happened but it seemed clears weren't working in
some cases without this; also, there's no point in spinning the simulated
clearing loop if we're handling a non-clearing tick type.
2022-10-05 01:41:18 -04:00
Tyler Goodlet 48ff4859e6 Update to new pair schema, adds `.cost_decimals` field 2022-10-05 01:41:18 -04:00