This is an optimization to improve performance when the UI is fed
real-time data. Instead of re-sorting all rows on every quote update,
only re-render when the sort key appears in the quote data and, further,
only re-sort the rows which changed, using bisection-based widget
insertion to avoid having `kivy` re-add widgets (and thus re-render
graphics) more often than absolutely necessary.
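The bisection trick as a minimal sketch (the `record` mapping on each
row is a hypothetical stand-in for however rows actually expose their
sort values):

```python
import bisect

def insert_row_sorted(table, row, sort_key: str):
    """Insert ``row`` at its sorted position using bisection.

    Only the moved row is (re)added so kivy re-renders a single
    widget instead of the whole table.
    """
    # sort values of already-rendered rows in visual (sorted) order;
    # kivy keeps ``children`` newest-first, hence the reversal
    values = [child.record[sort_key] for child in reversed(table.children)]
    i = bisect.bisect(values, row.record[sort_key])
    # ``add_widget(index=...)`` also counts from the newest end of
    # ``children`` so the visual position is mirrored
    table.add_widget(row, index=len(values) - i)
```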
There's still a ton to polish (and some bugs to fix) but this is a first
working draft of a real-time option chain!
Insights and todos:
- `kivy` widgets need to be cached and reused (e.g. rows, cells, etc.)
for speed, since creating new ones constantly seems quite taxing on
the CPU
- the chain will tear down and re-set up the option data feed stream
each time a different contract expiry button set is clicked
- there's still a weird bug with row highlighting: rows added from a
new expiry set (which weren't previously rendered) aren't being
highlighted reliably
`Row`:
- `no_cell`: support a list of keys for which no cells will be created
- allow passing in a `cell_type` at instantiation
`TickerTable`:
- keep track of rendered rows via a private `_rendered` set
- don't create rows inside `append_row()`; expect the caller to do it
- never render already rendered widgets in `render_rows()`
Miscellaneous:
- generalize `update_quotes()` to not be tied to specific quote fields
and allow passing in a quote `formatter()` func
- don't bother creating a nursery block until necessary in main
- more commenting
Add some extra fields to each quote that QT should already be providing
(instead of hiding them in the symbol and request contract info);
namely, the expiry and contract type (i.e. put or call).
Define the base set of fields to be displayed in an option chain
UI and add a quote formatter.
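For illustration, a quote might then carry these fields directly (the
field names and values here are assumptions, not the exact QT payload):

```python
quote = {
    'symbol': 'APL181019C00220000',
    'expiry': '2018-10-19',    # no longer parsed out of the symbol
    'contract_type': 'call',   # or 'put'
    'bid': 1.25,
    'ask': 1.30,
}
```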
Copy out `kivy.clock.triggered` from version 1.10.1, since it isn't yet
available in the `trio`/async branch, and use it to throttle the
callback rate. Use a `collections.deque` to LIFO-iterate widgets on
each call, using the heuristic that the mouse is most likely still
within the currently highlighted widget (or one of its adjacent
neighbors) as opposed to some far-away widget (the case when the mouse
is moved drastically across the window).
Thanks yet again to @tshirtman for suggesting this.
Instead of defining an `on_mouse_pos()` handler on every widget, simply
register and track each widget and loop through them all once (or as
much as is necessary) in a single callback. The assumption here is that
we get a performance boost by looping over the widgets ourselves
instead of having `kivy` loop and call back into each widget, thus
avoiding costly Python function calls.
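A minimal sketch of the combined pattern (names like `HoverRegistry`
and the `hovered` flag are hypothetical, not the actual implementation):

```python
from collections import deque

from kivy.core.window import Window


class HoverRegistry:
    """Scan all hover-aware widgets from one window-level callback."""
    _widgets = deque()

    @classmethod
    def register(cls, widget):
        cls._widgets.append(widget)

    @classmethod
    def on_mouse_pos(cls, window, pos):
        widgets = cls._widgets
        # LIFO heuristic: the last-hit widget stays at the front so
        # the common case (mouse still inside it or an adjacent
        # neighbor) exits early
        for _ in range(len(widgets)):
            widget = widgets[0]
            if widget.collide_point(*widget.to_widget(*pos)):
                widget.hovered = True
                return
            widgets.rotate(-1)  # move on to the next candidate


# one window-level binding instead of per-widget `on_mouse_pos()` handlers
Window.bind(mouse_pos=HoverRegistry.on_mouse_pos)
```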
Well that was a doozy; had to rejig pretty much all of it.
The deats:
- Track broker components in a new `DataFeed` namedtuple
- port to the new list-based batch quotes (not dicts anymore)
- lock access to cached broker-client / data-feed instantiation
- respawn tasks that fail due to network errors
So much changed to get this working for both stocks and options:
- Index contracts by a new `ContractsKey` namedtuple
- Move to pushing lists of quotes instead of dicts, since option
subscriptions are often not identified by their "symbol" key, which
makes it difficult at fan-out time to know how a quote should be
indexed and delivered. Instead, add a special `key` entry to each
quote dict which is the quote's subscription key (sketched below).
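Roughly what the fan-out can then look like (the `subscribers` mapping
and quote fields are hypothetical):

```python
quotes = [
    # each quote carries its own routing key so fan-out code never
    # has to guess it from a symbol field
    {'key': 'APL.TO/2018-10-19/call/220.0', 'bid': 1.25, 'ask': 1.30},
    {'key': 'APL.TO/2018-10-19/put/220.0', 'bid': 2.10, 'ask': 2.15},
]

async def fan_out(quotes, subscribers: dict):
    for quote in quotes:
        key = quote['key']
        for queue in subscribers.get(key, ()):
            await queue.put({key: quote})
```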
Instead of all this adding/removing of canvas instructions nonsense,
simply add a static "highlighted" rectangle to each row and make its
size very small when there's no mouse-over.
Mad props to @tshirtman for showing me the light :D
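The trick, roughly (a sketch rather than the actual row code):

```python
from kivy.graphics import Color, Rectangle

def add_highlight(row):
    with row.canvas.before:
        Color(0.2, 0.2, 0.2, 1)
        # one static rect per row, effectively invisible until hovered
        row._hl = Rectangle(size=(0.0001, 0.0001))

def highlight(row):
    row._hl.pos = row.pos
    row._hl.size = row.size

def unhighlight(row):
    row._hl.size = (0.0001, 0.0001)
```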
It's still a bit of a shit show, and I've left a lot of commented tweaks
that need to be further played with, but I think this is a much better
look for what I'm considering to be one of the main "entry point" apps
for `piker`. To get any more serious fine-tuning the way I want, I may
have to talk to some kivy experts as I'm having headaches with button
borders, padding, and the header row height...
Some of the new changes include:
- port to the new `brokers.data` module
- much darker theme with a stronger terminal vibe
- last trade price and volume amount flash on each trade
- fixed the symbol search bar to be a static height; before it was
getting squashed oddly when using stacked windows
- make all the cells transparent (for now) such that I can just use
a row color (relates to cell padding/spacing - can't seem to ditch it)
- start adding type annotations
Add `contracts` and `optsquote` commands for querying option contracts
info and market quotes respectively. Add a `record` command for
streaming real-time data feed quotes to disk. Port `monitor` to the
new `piker.brokers.data` module. Forward loglevel flags through to
`tractor` for relevant commands.
Add a couple of functions for storing live json data feed recordings to
disk, and reading them back, using a very rudimentary character +
newline delimited format.
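Something along these lines (a sketch; the delimiting is simplified to
a bare newline here):

```python
import json

def write_quotes(path, batches):
    # one json-encoded quote batch per line
    with open(path, 'a') as f:
        for batch in batches:
            f.write(json.dumps(batch) + '\n')

def read_quotes(path):
    with open(path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)
```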
Also, split out the pub-sub logic from `stream_quotes()` into a new
func, `fan_out_to_chans()`. Eventually I want to formalize this pattern
into a decorator exposed through `tractor`.
Makes it easy to request all the option contracts for a particular symbol.
Also, let `option_chain()` accept a `date` arg which can be used to
retrieve quotes for only a single expiry date (much faster than getting
all of them).
Every actor now registers (and unregisters) with the arbiter at
startup/teardown. For now the registry is stored in a plain `dict` in
the arbiter's memory. This makes it possible to easily coordinate actors
started as plain Python processes or via `multiprocessing`.
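Conceptually the arbiter side is as simple as this (a sketch; the
method names are assumptions):

```python
class Arbiter:  # in practice this subclasses the base actor type
    """Track all spawned actors by name -> socket address."""
    _registry: dict = {}  # plain in-memory dict, for now

    async def register_actor(self, name, sockaddr):
        self._registry[name] = sockaddr

    async def unregister_actor(self, name):
        self._registry.pop(name, None)

    async def find_actor(self, name):
        return self._registry.get(name)
```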
A whole smörgåsbord of changes was required to accomplish this:
- factor handshake steps into a func
- track *every* channel connected to an actor including multiples to the
same remote peer (may want to optimize this later)
- handle `trio.ClosedStreamError` gracefully in the message loop
- add an `open_portal` asynccontextmanager which handles channel
creation, handshaking, and spawning a bg task for msg processing
- add a `start_actor()` for starting in-process actors directly
- add working `get_arbiter()` and `find_actor()` public routines
- `_main` now tries an anonymous channel connect to the stated
arbiter sockaddr and uses that to determine whether to crown itself
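Usage then looks something like the following (the `portal.run()`
signature here is an assumption):

```python
async with open_portal(channel) as portal:
    # invoke a function in the remote actor and wait for the result
    result = await portal.run('mymodule', 'myfunc', arg=10)
```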
Fail gracefully (by "aborting") the same way `trio` does. This avoids
ugly sub-proc tracebacks thrown at the console. Unset the local actor
when `tractor._main` completes. Cancel all tasks for a peer channel on
disconnect.
Drop all channel/connection handling from the core and break up all the
start-up steps into compact and useful functions. The main difference is
that the daemon now only needs to worry about spawning per-broker
streaming tasks and handling symbol list subscription requests.
When an error is raised inside a nursery block (in the local actor),
cancel all spawned children and ensure the error is propagated rather
than silently suppressed.
Also,
- change `invoke_cmd` to `send_cmd` and expect the caller to use
`result_from_q` (avoids implicit blocking for responses that might
never arrive)
- `nursery.start()` the channel server block such that we wait for the
underlying listener to spawn before making outbound connections
- cancel the channel server when an actor's main task completes
(given that `outlive_main == False`)
- raise subactor errors directly in the local actor's msg loop
- enforce that `treat_as_gen` async functions respond with a caller id
(`cid`) in each yield packet
Command requests are sent out and responses are handled in a "message
loop" where each command is associated with a "caller id" and multiple
cmds and results are multiplexed on the same inter-actor channel. When
a cmd result arrives it is pushed into a local queue and delivered to
the appropriate calling actor's task. Errors from a remote actor are
always shipped in an "error" packet back to their spawning parent actor
such that any error in a subactor is raised directly in the parent.
Based on the first response to a cmd (either a 'return' or 'yield'
packet) the caller-side portal will retrieve values by wrapping the
local response queue in either an async function or generator as
appropriate.
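Illustratively, the multiplexed packets look something like this (the
field layout is a hypothetical rendering, not the exact wire format):

```python
cid = 'some-unique-caller-id'

# outbound command
cmd = {'cmd': ('mymodule', 'myfunc'), 'kwargs': {'x': 1}, 'cid': cid}

# possible responses, all tagged with the same caller id
ret = {'cid': cid, 'return': 42}          # single result -> async func
yld = {'cid': cid, 'yield': 42}           # stream item -> async gen
err = {'cid': cid, 'error': 'traceback'}  # re-raised in the parent
```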
- Rename `Client` to `Channel`
- Add a better `__repr__()`
- use `laddr`/`raddr` instead of `sockaddr`/`peer`
- don't allow re-entrant `Channel.connect()` calls
- Make `Channel` an async iterable (example below)
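For instance (a usage sketch; the constructor kwarg name is an
assumption):

```python
chan = Channel(destaddr=('127.0.0.1', 1616))
await chan.connect()  # re-entrant `connect()` calls are now disallowed
async for msg in chan:  # `Channel` is an async iterable
    print(chan, 'received', msg)
```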
Couple fixes here:
- if no tickers for a watchlist name -> bail
- swallow the symbol data response in the reconnect handler coro
- don't sleep 5 seconds before connecting to subproc daemon...
Resolves #43
When a client loses a connection it will currently need to re-subscribe
for symbols and receive a symbol data summary as a first quote response.
Only run the provided coroutine on reconnect and name the kwarg
`on_reconnect`. The client-consuming code is entirely expected at this
point to know how the symbol registration protocol works.
Even if a broker client is already spawned, new clients should still
receive a detailed symbol data packet as the first response. Avoid
exposing the new client's queue to the broker (i.e. subscribing it for
quotes) until after first pushing this packet with all bad symbols
filtered out.
Oh boy where to start.
- Handle broken streams in the `StreamQueue` gracefully; terminate the
async generator.
- When a stream queue connection is unwritable discard its subscriptions
inside the quoter task
- If all subscriptions are discarded for a broker then tear down its
quoter task
- Use listener parent nursery for spawning quoter tasks
- Make broker subs data structures global/shared between conn
handler tasks
- Register the `tickers2qs` entry *after* instantiating the broker
client(s) (avoids a race condition when multiple client connections
come online simultaneously)
- Push smoke quotes to every client not just the first that connects
- Track quoter tasks in a cross-task set
- Handle unsubscriptions more correctly
In order to start working toward an HA distributed architecture, make
apps use a `Client` type to talk to daemons. The `Client` provides
fault tolerance for connection failures such that the app will continue
running until a connection to the original service can be made or the
process is killed. This will make it easier to simply spawn new daemon
child processes when faults are detected.
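At its core that's just a retry loop, roughly (a sketch; the real
client would also replay symbol subscriptions on reconnect):

```python
import trio

async def connect_with_retry(client, on_reconnect=None):
    """Keep trying until the service comes (back) up or we're killed."""
    while True:
        try:
            await client.connect()
            break
        except OSError:
            await trio.sleep(1)  # daemon down; try again shortly
    if on_reconnect:
        await on_reconnect(client)
```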
Filter out bad symbols by processing an initial batch quote and
pushing it to the subscribing client before spawning a quoter task.
This also avoids exposing the quoter task to anything but the
broker module and a `get_quotes()` routine.
Allow client connections to subscribe for quote streams from specific
brokers and spawn broker-client quoter tasks on-demand according
to client connection demands. Support multiple subscribers to a
single daemon process.
Async generators are faster and need less code. Handle segmented
packets, which can happen during periods of high quote volume. Move the
per-broker rate limit logic into the daemon task.
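The segmentation handling amounts to buffering partial reads, roughly
(a sketch assuming newline-framed json packets):

```python
import json

async def packetize(stream):
    """Reassemble json packets that may arrive split across reads."""
    buf = b''
    async for chunk in stream:
        buf += chunk
        # everything before the last delimiter is complete; the tail
        # may be a partial packet, so keep it for the next read
        *complete, buf = buf.split(b'\n')
        for raw in complete:
            if raw:
                yield json.loads(raw)
```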