Move all the ``pydantic`` finagling to an `_orm.py` and
just keep an `Allocator` as the backing model for our pp controls
in the position module. This all needs to be tied together in some sane
way, with facility for multiple symbols/streams per chart for when we
get to charting-trading aggregate feeds.
It was becoming too much with all the labels and markers and lines..
Might as well package it all together instead of cramming it in the
order mode loop, chief.
The technical summary:
- move `_lines.position_line()` -> `PositionInfo.position_line()`.
- slap a `.pp` on the order mode instance which *is* a `PositionInfo`
- drop the position info label for now (let's see what users want
eventually but for now let's keep it super minimal).
- add a `LevelMarker` type to replace the old `LevelLine` internal
marker system (includes ability to change the style and level on the
fly).
- change `_annotate.mk_marker()` -> `mk_marker_path()` and expect caller
to wrap in a `QGraphicsPathItem` if needed.
Generate and maintain position messages in the paper engine for each
`pikerd` session. We no longer tear down the engine on each client
disconnect. Ensure -ve size on sells to make the math work.
This gives us fast search over a known set of symbols you can't search
for with the api such as futures and commodities contracts.
Toss in a new client method to lookup contract details
`Client.con_deats()` and avoid calling it for now from `.search_stock()`
for speed; it seems originally we were doing the 2nd lookup due to weird
suffixes in the `.primaryExchange` which we can just discard.
In order to ensure the lifetime of the feed can in fact be kept open
until the last consumer task has completed we need to maintain
a lifetime which is hierarchically greater than all consumer tasks.
This solution is somewhat hacky but seems to work well: we just use the
`tractor` actor's "service nursery" (the one normally used to invoke rpc
tasks) to launch the task which will start and keep open the target
cached async context manager. To make this more "proper" we may want to
offer a "root nursery" in all piker actors that is exposed through some
singleton api or even introduce a public api for it into `tractor`
directly.
Think this was fixed by passing through `**kwargs` in
`maybe_open_feed()`, the shielding for fsp respawns wasn't being
properly passed through..
This reverts commit 2f1455d423.
Maybe I've finally learned my lesson that exit stacks and per task ctx
manager caching is just not trionic.. Use the approach we've taken for
the daemon service manager as well: create a process global nursery for
each unique ctx manager we wish to cache and simply tear it down when
the number of consumers goes to zero.
This seems to resolve all prior issues and gets us error-free cached
feeds!
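Roughly the pattern (a sketch only; names like `maybe_open_cached` and
`service_n` are illustrative, not the actual feed code):

    # sketch of caching an async ctx manager via a process-global nursery;
    # NOT race-proof, just the gist of the consumer-counted teardown.
    from contextlib import asynccontextmanager
    from typing import Any, AsyncContextManager

    import trio

    _cache: dict[str, tuple[Any, trio.Event]] = {}   # key -> (value, teardown)
    _consumers: dict[str, int] = {}                  # key -> consumer count


    @asynccontextmanager
    async def maybe_open_cached(
        key: str,
        mngr: AsyncContextManager,
        service_n: trio.Nursery,     # the long-lived process-global nursery
    ):
        if key not in _cache:
            value_set = trio.Event()

            async def keeper():
                async with mngr as value:
                    teardown = trio.Event()
                    _cache[key] = (value, teardown)
                    value_set.set()
                    # hold the ctx open until the last consumer leaves
                    await teardown.wait()
                _cache.pop(key, None)

            service_n.start_soon(keeper)
            await value_set.wait()

        value, teardown = _cache[key]
        _consumers[key] = _consumers.get(key, 0) + 1
        try:
            yield value
        finally:
            _consumers[key] -= 1
            if _consumers[key] == 0:
                teardown.set()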
Try out the new broadcast channels from `tractor` for data feeds
we already have cached. Any time there's a cache hit we load the
cached feed and just slap a broadcast receiver on it for the local
consumer task.
Add a new type/api to manage "contents labels" (labels that sit in
a view and display info about viewed data) since it's mostly used by
the linked charts cursor. Make `LinkedSplits.cursor` the new and only
instance var for the cursor such that charts can look it up from that
common class. Drop the `ChartPlotWidget._ohlc` array, just add
a `'ohlc'` entry to `._arrays`.
Orders in order mode should be chart oriented since there's a mode per
chart. If you want all orders just ask the ems or query all the charts
in a loop.
This fixes cancel-all-orders such that when 'cc' is tapped only the
orders on the *current* chart are cancelled, lel.
Generalize the methods for cancelling groups of orders (all or those
under cursor) and add new group status support such that statuses for
each cancel or order submission are displayed in the status bar. In the
"cancel-all-orders" case, use the new group status stuff.
Allows for submitting a top level "group status" associated with
a "group key" which eventually resolves once all sub-statuses associated
with that group key (and thus top level status) complete and are also
removed. Also add support for a "final message" for each status such
that once the status clear callback is called a final msg is placed on
the status bar that is then removed when the next status is set.
It's all a questionable bunch of closures/callbacks but it worx.
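The bookkeeping is roughly this (pure-python sketch; the real thing is
wired into the Qt status bar and these names are made up):

    from typing import Callable

    _groups: dict[str, set[str]] = {}    # group key -> open sub-status msgs
    _group_done: dict[str, Callable[[], None]] = {}


    def open_group_status(group_key: str, on_done: Callable[[], None]) -> None:
        # top level "group status": `on_done` fires (eg. sets the final
        # msg on the status bar) once every sub-status has cleared
        _groups[group_key] = set()
        _group_done[group_key] = on_done


    def open_sub_status(group_key: str, msg: str) -> Callable[[], None]:
        _groups[group_key].add(msg)

        def clear() -> None:
            subs = _groups.get(group_key)
            if subs is None:
                return
            subs.discard(msg)
            if not subs:
                # last sub-status removed -> resolve the group status
                _group_done.pop(group_key)()
                _groups.pop(group_key)

        return clear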
Instead of callbacks for key presses/releases convert our `ChartView`'s
kb input handling to async code using our event relaying-over-mem-chan
system. This is a first step toward a more async driven modal control
UX. Changed a bunch of "chart" component naming as part of this as well,
namely: `ChartSpace` -> `GodWidget` and `LinkedSplitCharts` ->
`LinkedSplits`. Engage the view box's async handler code as part of new
symbol data loading in `display_symbol_data()`. More re-orging to come!
Add an `open_handler()` ctx manager for wholesale handling event sets
with a passed in async func. Better document and implement the event
filtering core including adding support for key "auto repeat" filtering;
it turns out the events delivered when `trio` does its guest-mode tick
are not the same (Qt has somehow consumed them or something) so we have
to do certain things (like getting the `.type()`, `.isAutoRepeat()`,
etc.) before shipping over the mem chan. The alt might be to copy the
event objects first but haven't tried it yet. For now just offer
auto-repeat filtering through a flag.
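As a sketch (assuming PyQt5; the class and field names here are
illustrative, not the actual piker code):

    import trio
    from PyQt5 import QtCore


    class KeyRelay(QtCore.QObject):
        '''Relay key events into a `trio` mem chan, reading the bits we
        need *before* handing off since Qt may recycle the event object
        by the time the async consumer runs.'''

        def __init__(
            self,
            send_chan: trio.MemorySendChannel,
            filter_auto_repeats: bool = True,
        ) -> None:
            super().__init__()
            self._send = send_chan
            self._filter_ar = filter_auto_repeats

        def eventFilter(self, source, ev) -> bool:
            etype = ev.type()
            if etype in (QtCore.QEvent.KeyPress, QtCore.QEvent.KeyRelease):
                if self._filter_ar and ev.isAutoRepeat():
                    return False
                try:
                    # ship plain values, not the (re-usable) event object
                    self._send.send_nowait((etype, ev.key(), ev.text()))
                except trio.WouldBlock:
                    pass  # drop it if the async side is behind
            return False  # never consume the event from Qt's perspective

Install it with `widget.installEventFilter(relay)` and pull events with
an `async for` on the receive side.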
If a client attaches to a quotes data feed and requests a throttle rate,
be sure to unsub that side-band memchan + task when it detaches and
especially so on any transport connection error.
Also, use an explicit `tractor.Context.cancel()` on the client feed
block exit since we removed the implicit cancel option from the
`tractor` api.
There is no reason to have more than one `brokerd` trades dialogue stream
open per `emsd`. Here we minimize to managing that lone stream and
multiplexing msgs from each client such that multiple clients can be
connected to the ems, conducting trading without requiring multiple
ems-client connections to the backend broker and without the broker
being aware there are even multiple flows going on.
This patch also sets up for being able to have ems clients which
register to receive and track trade flows from other piker clients thus
enabling so called "multi-player" trading where orders for both paper
and live trades can be shared between multiple participants in the form
of a pre-broker, local clearing service and trade signals dark book.
This solves a bunch of issues to do with `brokerd` order status msgs
getting relayed for each order to **every** correspondingly connected
EMS client. Previously we weren't keeping track of which emsd orders
were associated with which clients so you had backend msgs getting
broadcast to all clients which not only resulted in duplicate (and
sometimes erroneous, due to state tracking) actions taking place in the
UI's order mode, but also meant duplicate traffic (usually to the
same actor) over multiple logical streams. Instead, only keep up **one**
(cached) stream with the `trades_dialogue()` endpoint such that **all**
emsd orders route over that single connection to the particular
`brokerd` actor.
An async exit stack around the new `@tractor.context` is problematic
since a pushed context can't bubble errors unless the exit stack has
been closed. But in that case why do you need the exit stack if you're
going to push it and wait on it right away; it seems more correct to use
a nursery and spawn a task in `pikerd` that waits on both the
target context completion first (thus being able to bubble up any errors
from the remote, and top level service task) and the sub-actor portal.
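In sketch form (hypothetical helper names; only `Portal.open_context()`
is the real `tractor` api being leaned on):

    import trio
    import tractor


    async def open_context_in_task(
        portal: tractor.Portal,
        target,                    # a `@tractor.context` endpoint func
        started: trio.Event,
    ) -> None:
        # open the remote ctx and hold it open until this task is
        # cancelled; any remote error bubbles straight into the nursery
        async with portal.open_context(target) as (ctx, first):
            started.set()
            await trio.sleep_forever()


    async def start_service(
        service_n: trio.Nursery,   # long-lived nursery inside `pikerd`
        portal: tractor.Portal,
        target,
    ) -> None:
        started = trio.Event()
        service_n.start_soon(open_context_in_task, portal, target, started)
        # don't return until the remote side has signalled readiness
        await started.wait()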
(Sub)service Daemons are spawned with `.start_actor()` and thus will
block forever until cancelled, so add a way to cancel them explicitly,
which we'll need eventually for restarts and dynamic feed management.
The big lesson here is that async exit stacks are not conducive to
spawning and monitoring service tasks, and especially so if
a `@tractor.context` is used since if the `.open_context()` call isn't
exited (only possible by the stack being closed), then there will be no
way for `trio` to cancel the task that pushed that context (since it
can't run a checkpoint while yielded inside the stack) without also
cancelling all other contexts pushed on that stack. Presuming one
`pikerd` task is used to do the original pushing (which it was) then
any error would have to kill all service daemon tasks which obviously
won't work.
I see this mostly as the painz of tinkering out an SC service manager
with `tractor` / `trio` for the first time, so try to go easy on the
process ;P
Adding binance's "hft" ws feeds has resulted in a lot of context
switching in our Qt charts, so much so it's chewin CPU and definitely
worth it to throttle to the detected display rate as per discussion in
issue #192.
This is a first very very naive attempt at throttling L1 tick feeds on
the `brokerd` end (producer side) using a constant and uniform delivery
rate by way of a `trio` task + mem chan. The new func is
`data._sampling.uniform_rate_send()`. Basically if a client requests
a feed and provides a throttle rate we just spawn a task and queue up
ticks until approximately one display period's worth of time
has passed before forwarding. It's definitely nothing fancy but does
provide fodder and a start point for an up and coming queueing eng to
start digging into both #107 and #109 ;)
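The gist, give or take (a naive sketch, not the actual implementation):

    import trio


    async def uniform_rate_send(
        rate: float,       # eg. the detected display rate in Hz
        quote_stream: trio.MemoryReceiveChannel,
        stream,            # the throttled client's feed stream
    ) -> None:
        period = 1 / rate
        ticks: list[dict] = []
        last_send = trio.current_time()

        async for quote in quote_stream:
            # accumulate ticks until roughly one display period has passed
            ticks.extend(quote.get('ticks', ()))
            now = trio.current_time()
            if now - last_send >= period:
                quote['ticks'] = ticks
                await stream.send(quote)
                ticks = []
                last_send = now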
Avoids some cyclical and confusing import time stuff that we needed to get
DPI aware fonts configured from the active display. Move the main window
singleton into its own module and add a `main_window()` getter for it.
Make `current_screen()` a ``MainWindow`` method to avoid so many module
variables.
This moves the entire clearing system to use typed messages using
`pydantic.BaseModel` such that the streamed request-response order
submission protocols can be explicitly viewed in terms of message
schema, flow, and sequencing. Using the explicit message formats we can
now dig into simplifying and normalizing across broker provider apis to
get the best uniformity and simplicity.
The order submission sequence is now fully async: an order request is
expected to be explicitly acked with a new message and if cancellation
is requested by the client before the ack arrives, the cancel message is
stashed and then later sent immediately on receipt of the order
submission's ack from the backend broker. Backend brokers are now
controlled using a 2-way request-response streaming dialogue which is
fully api agnostic of the clearing system's core processing. This
leverages the new bi-directional streaming apis from `tractor`. The
clearing core (emsd) was also simplified by moving the paper engine to
its own sub-actor and making it api-symmetric with expected `brokerd`
endpoints.
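To give a flavour of the msg types (the field names below are only an
approximation of the schema, not the exact definitions):

    from typing import Literal, Optional

    from pydantic import BaseModel


    class Order(BaseModel):
        action: Literal['buy', 'sell', 'alert']
        oid: str                     # client side unique order id
        symbol: str
        price: float
        size: float
        exec_mode: Literal['dark', 'live', 'paper']


    class Cancel(BaseModel):
        action: Literal['cancel'] = 'cancel'
        oid: str
        symbol: str


    class Status(BaseModel):
        name: Literal['status'] = 'status'
        oid: str
        status: str                  # eg. 'submitted', 'dark_triggered', ..
        broker_details: Optional[dict] = None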
A couple of the ems status messages were changed/added:
- 'dark_executed' -> 'dark_triggered'
- added 'alert_triggered'
More cleaning of old code to come!
This makes the paper engine look IPC-wise exactly like any
broker-provider backend module and uses the new ``trades_dialogue()``
2-way streaming endpoint for commanding order requests.
This serves as a first step toward truly distributed forward testing
since the paper engine can now be run out-of-tree from `pikerd` if
needed, thus demonstrating how real-time clearing signals can be shared
between fully distinct services.
This avoids somewhat convoluted "hackery" making 2 one-way streams
between the order client and the EMS and instead uses the new
bi-directional streaming and context API from `tractor`. Add a router
type to the EMS that gets setup by the initial service tree and which
we'll eventually use to work toward multi-provider executions and
order-trigger monitoring. Move to py3.9 style where possible throughout.
Makes it so we can move toward having separate provider results fill in
asynchronously, on demand.
Also,
- add depth 1 iteration helper method
- add section finder helper method
- fix last selection loading to be mostly consistent
This allows for more deterministically managing long running sub-daemon
services under `pikerd` using the new context api from `tractor`.
The contexts are allocated in an async exit stack and torn down at root
daemon termination. Spawn brokerds using this method by changing the
persistence entry point to be a `@tractor.context`.
Some providers do well with a "longer" debounce period (like ib) since
searching them too frequently causes latency and stalls. By supporting
both a min and max debounce period on keyboard input we can only send
patterns to the slower engines when that period is triggered via
`trio.move_on_after()` and continue to relay to faster engines when the
measured period permits. Allow search routines to register their "min
period" such that they can choose to ignore patterns that arrive before
their heuristically known ideal wait.
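Sketch of the debounce loop (hypothetical names; the constants are just
for illustration):

    import trio

    MIN_PERIOD = 0.01   # relay to "fast" engines at most this often (secs)
    MAX_PERIOD = 0.5    # only ping "slow" engines (eg. ib) after a pause


    async def relay_patterns(
        keys: trio.MemoryReceiveChannel,
        search,            # assumed: async callable firing provider queries
    ) -> None:
        pattern = ''
        sent_to_slow = True
        while True:
            with trio.move_on_after(MAX_PERIOD) as cs:
                pattern += await keys.receive()

            if cs.cancelled_caught:
                if not sent_to_slow:
                    # user paused typing long enough: include slow providers
                    await search(pattern, providers='all')
                    sent_to_slow = True
                continue

            sent_to_slow = False
            await trio.sleep(MIN_PERIOD)   # light debounce for fast engines
            await search(pattern, providers='fast')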
Obviously this only supports stocks to start, it looks like we might
actually have to hard code some of the futures/forex/cmdtys that don't
have a search.. so lame. Special throttling is added here since the api
will grog out at anything more than 1Hz.
Additionally, decouple the bar loading request error handling from the
shm pushing loop so that we can always recover from a historical bars
throttle-error even if it's on the first try for a new symbol.
This gets the binance provider meeting the data feed schema requirements
of both the OHLC sampling/charting machinery as well as proper
formatting of OHLC bar history.
Notably,
- spec a minimal ohlc dtype based on the kline endpoint (see the sketch
after this list)
- use a dataclass to parse out OHLC bar datums and pack into np.ndarray/shm
- add the ``aggTrade`` endpoint to get last clearing (traded) prices,
validate with ``pydantic`` and then normalize these into our tick-quote
format for delivery over the feed stream api.
- a notable requirement is that the "first" quote from the feed must
contain a `last` field so the clearing system can start up correctly.
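For reference, the dtype is along these lines (field names/types here
are illustrative, not the exact spec):

    import numpy as np

    ohlc_dtype = np.dtype([
        ('index', int),      # monotonic bar index used by the chart
        ('time', float),     # kline open time (epoch seconds)
        ('open', float),
        ('high', float),
        ('low', float),
        ('close', float),
        ('volume', float),
    ])

    # parsed kline rows get packed straight into a shm-backed struct
    # array of this dtype, one row per bar.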
This required a fsp task spawning logic rework which ended up being
cleaner: just spawn tasks sequentially in a loop instead of trying
a 2-phase spawn-then-initialize approach.
This also includes changes from the symbol search branch hacked in.
Mostly it includes isolating the main chart startup-sequence to a
function that can be run in a new task every time a new symbol is
requested by the selector/searcher. The actual search functionality
obviously isn't in here yet but minor changes are included as part of
pulling out the `tractor` stream api patch from the symbol search dev
branch.
Avoid bothering with a trio event and expect the caller to do manual shm
registering with the write loop. Provide OHLC sample period indexing
through a re-branded pub-sub func ``iter_ohlc_periods()``.
Move all feed/stream agnostic logic and shared mem writing into a new
set of routines inside the ``data`` sub-package. This lets us move
toward a more standard API for broker and data backends to provide
cache-able persistent streams to client apps.
The data layer now takes care of
- starting a single background brokerd task to start a stream for a
symbol if none yet exists and register that stream for later lookups
- the existing broker backend actor is now always re-used if possible
if it can be found in a service tree
- synchronization with the brokerd stream's startup sequence is now
oriented around fast startup concurrency such that client code gets
a handle to historical data and quote schema as fast as possible
- historical data loading is delegated to the backend more formally by
starting a ``backfill_bars()`` task
- write shared mem in the brokerd task and only destruct it once requested
either from the parent actor or further clients
- fully de-duplicate stream data by using a dynamic pub-sub strategy
where new clients register for copies of the same quote set per symbol
This new API is entirely working with the IB backend; others will need
to be ported. That's to come shortly.
Add a `Services` nurseries container singleton for spawning sub-daemons
inside the long running `pikerd` tree. Bring in `brokerd` spawning util
funcs to start getting eyes on what can be factored into a service api.
The direct open is needed for running `pikerd` cmd and
the ems spawn point is the first step toward detaching UI based order
entry from the engine itself.
- break (custom) graphics item style marker drawing into separate func
but keep using it since it still seems oddly faster than the
QGraphicsPathItem thing..
- unfactor hover handler; it was unnecessary
- make both the graphics path item and custom graphics item approaches
work inside ``.paint()``
Add support for drawing ``QPathGraphicsItem`` markers but don't use them
since they seem to be shitting up when combined with the infinite line
(bounding rect?): weird artifacts and whatnot. The only way to avoid
said glitches seems to be to update inside the infinite line's
`.paint()` but that slows stuff down.. Instead stick with the manual
paint job and use the same pin point: left of the L1 spread graphics - where
the lines now also extend to.
Further stuff:
- Pin the y-label to a line's value on hover.
- Disable x-dimension line moving
- Rework the labelling to be more minimal
Add a line which shows the current average price position with an arrow
marker denoting the direction (long or short). Required some further
rewriting of the infinite line from pyqtgraph including:
- adjusting marker (arrow) placement to be offset from axis + l1 labels
- fixing the hover event to not require the `.movable` attribute to be
set
It's a super naive implementation with no slippage model or network
latency besides some slight delays. Clearing only happens on bid/ask
sweep ticks at the moment - simple last volume based clearing coming
up next.
This turned into a larger endeavour than intended but now we're using our
own label system on level lines to be able to display things nicely
**pinned wherever we want in the UI**. Keep the old ``LevelLabel`` for
now for the L1 graphics but we'll likely replace this as well since I'm
pretty sure the new label type (which wraps `QGraphicsTextItem`) is more
performant anyway.
For labels that want it, add nice arrow paths that point just over the
respective axis. Couple label text offset from the axis line based on
parent 'tickTextOffset' setting. Drop `YSticky`; it was not enough
meat to bother with.
The min tick size is the smallest step an instrument can move in value
(think the number of decimal places of precision the value can have).
We start leveraging this in a few places:
- make our internal "symbol" type expose it as part of its api
so that it can be passed around by UI components
- in y-axis view box scaling, use it to keep the bid/ask spread (L1 UI)
always on screen even in the case where the spread has moved further
out of view than the last clearing price
- allows the EMS to determine dark order live order submission offsets
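As a tiny example of what the min tick buys us (illustrative helper, not
the actual symbol api):

    from decimal import ROUND_HALF_EVEN, Decimal


    def quantize(price: float, tick_size: str = '0.01') -> Decimal:
        # round a price to the instrument's smallest increment, eg. for
        # computing dark order submission offsets
        tick = Decimal(tick_size)
        return Decimal(str(price)).quantize(tick, rounding=ROUND_HALF_EVEN)


    assert quantize(100.1234, '0.01') == Decimal('100.12')
    assert quantize(0.123456, '0.0001') == Decimal('0.1235')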
Async spawn a deats getter task whenever we load a symbol data feed.
Pass these symbol details in the first message delivered by the feed at
open. Move stream loop into a new func.
Basically a stop limit mode where the dirty execution-condition deats
are entirely held client side away from the broker. For now, there's
a static order size setting and a 0.5% limit setting relative to the
trigger price. Swap to using 'd' for dump and 'f' for fill - they're
easier for use with ctrl (which is used now to submit orders directly to
broker - ala "live (order) mode"). Still more kinks to work out with too
fast cancelled orders and alerts but we're getting there.
Our first major UI "mode" (yes kinda like the modes in emacs) that has
handles to a client side order book api, line and arrow editors, and
interacts with a spawned `emsd` (the EMS daemon actor).
Buncha cleaning and fixes in here for various thingers as well.
Since the "crosshair" is growing more and more UX implementation details
it probably makes sense to call it what it is: a python level mouse
abstraction. Add 2 internal sets: `_hovered` for allowing mouse hovered
objects to register themselves to other cursor aware components, and
`_trackers` for allowing scene items to "track" cursor movements via
a `on_tracked_source()` callback.
Support tracking the mouse cursor using a new `on_tracked_sources()`
callback method. Make hovered highlight a bit thicker and highlight when
click-dragged. Add a delete method for removing from the scene along
with its label.
Leverages `QGraphicsItem.cacheMode` to speed up interactivity via
fewer `.paint()` calls (on mouse interaction) and redraws of the
underlying path when there are no transformations (other than a shift).
In order to keep the "flat bar on new time period" UX, a couple special
methods have to be triggered to get a redraw of the pixel buffer when
appending new data.
Use `QPainterPath.controlPointRect()` over `.boundingRect()` since
supposedly it's a lot faster. Drop all use of `QPicture` (since it seems
to conflict with the pixel buffer stuff?) and it doesn't give any
measurable speedup when drawing the "last bar" lines.
Oh, and add some profiling for now.
This is a bit hacky (what with array indexing semantics being relative
to the primary index's "start" value) but it works. We'll likely want
to somehow wrap this index finagling into an API soon.
Failed at using either.
Quirks in numba's typing require specifying readonly arrays by
composing types manually.
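For example (a minimal repro of the quirk, not the fsp code itself):

    import numpy as np
    from numba import njit, types

    # a readonly (const) C-contiguous 1d float64 array type; a plain
    # `float64[:]` signature won't accept non-writeable arrays
    ro_f64_1d = types.Array(types.float64, 1, 'C', readonly=True)


    @njit(types.float64(ro_f64_1d), cache=True)
    def last_max(a):
        return a.max()


    arr = np.arange(10, dtype='float64')
    arr.setflags(write=False)      # readonly, like a shared mem view may be
    print(last_max(arr))           # -> 9.0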
The graphics item path thing, while it does take less time to write on
bar appends, seems to be slower in general in calculating the
``.boundingRect()`` value. Likely we'll just add manual max/min tracking
on array updates like ``pg.PlotCurveItem`` to squeeze some final juices
on this.
Pertains further to #109.
Instead of redrawing the entire `QPainterPath` every time there's
a historical bars update just use `.addPath()` to slap in latest
history. It seems to work and is fast. This also seems like it will be
a great strategy for filling in earlier data, woot!
This gives a massive speedup when viewing large bar sets (think a day's
worth of 5s bars) by using the `pg.functions.arrayToQPath()` "magic"
binary array writing that is also used in `PlotCurveItem`. We're using
this same (lower level) function directly to draw bars as part of one
large path and it seems to be painting 15k (ish) bars with around 3ms
`.paint()` latency. The only thing still a bit slow is the path array
generation despite doing it with `numba`. Likely, either having multiple
paths or only regenerating the missing backing array elements should
speed this up further to avoid slight delays when incrementing the bar
step.
This is of course a first draft and more cleanups are coming.
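The core trick looks something like this (x/y layout is illustrative;
the real path generation packs open/high/low/close segments per bar):

    import numpy as np
    import pyqtgraph as pg

    # every consecutive pair of points becomes one independent segment,
    # eg. a bar "body" from (x, low) to (x, high)
    x = np.repeat(np.arange(1000.0), 2)
    y = np.random.random(2000) * 10

    path = pg.functions.arrayToQPath(x, y, connect='pairs')

    # then inside the graphics item:
    #   def paint(self, p, opt, w):
    #       p.setPen(self.bars_pen)
    #       p.drawPath(path)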
This makes it so you don't have to ctrl-c kill apps.
Add in the experimental openGL support even though I'm pretty sure it's
not being used much for curve plotting (but could be wrong).
Break the chart update code for fsps into a new task (add a nursery) in
a new `spawn_fsps` (was `chart_from_fsps`) that async-requests actor
spawning and initial historical data (all CPU bound work). For multiple
fsp subcharts this allows processing initial output in parallel
(multi-core). We might want to wrap this in a "feed" like api
eventually. Basically the fsp startup sequence is now:
- start all requested fsp actors in an async loop and wait for
historical data to arrive
- loop through them all again to start update tasks which do chart
graphics rendering
Add separate x-axis objects for each new subchart (required by
pyqtgraph); still need to fix hiding unnecessary ones.
Add a `ChartPlotWidget._arrays: dict` for holding overlay data distinct
from ohlc. Drop the sizing yrange to label heights for now since it's
pretty much all gone to hell since adding L1 labels. Fix y-stickies to
look up correct overlay arrays.
Requires decent modification of the built-in ``ViewBox``.
We do away with the zoom functionality for now and instead just add
a label full of some simple stats on the bounded data.
I think this gets us to the same output as TWS both on booktrader and
the quote details pane. In theory there might be logic needed to
decrease an L1 queue size on trades but can't seem to get it without
getting -ves displayed occasionally - thus leaving it for now.
Also, fix the max-min streaming logic to actually do its job, lel.
Start a simple API for L1 bid/ask labels.
Make `LevelLabel` draw a line above/below its text (instead of the
rect fill we had before) since it looks much simpler/slicker.
Generalize the label text orientation through bounding rect
geometry positioning.
Until we get a better datum "cursor" figured out just draw the flat bar
despite the extra overhead. The reason to do this in 2 separate calls is
detailed in the comment but the basic gist is that there's a race between
writer and reader of the last shm index.
Oh, and toss in some draft symbol search label code.
Not sure what fixed it exactly, and I guess we didn't need any relative
DPI scaling factor after all. Using the 3px margin on the level label
seems to make it look nice for any font size (I think) as well.
Gonna need some cleanup after this one.
Make our own ``Axis`` and have it call an impl specific ``.resize()``
such that different axes can size to their own spec. Allow passing in a
"typical maximum value string" which will be used by default for sizing
the axis' minor dimension; a common value should be passed to all axes
in a linked split charts widget. Add size hinting for axes labels such
that they can check their parent (axis) for desired dimensions if
needed.
Compute the size in pixels of the label based on the label's contents.
Eventually we want to have an update system that can iterate through
axes and labels to do this whenever needed (eg. after widget is moved
to a new screen with a different DPI).
Avoid drawing a new sticky position if the mouse hasn't moved to the
next (rounded) index in terms of the scene's coordinates. This completes
the "discrete-ization" of the mouse/cursor UX.
Finalizing this feature helped discover and solve
pyqtgraph/pyqtgraph#1418 which massively improves interaction
performance throughout the whole lib!
Hide stickys on startup until cursor shows up on plot.
This is likely a marginal improvement but involves slightly less execution and
adds a coolio black border around the label. Drop all the legacy code
from quantdom which was quite a convoluted mess for "coloring".
Had to tweak sticky offsets to get the crosshair to line up right; not
sure what that's all about yet.
With the improved update logic on `BarsItems` it doesn't seem to be
necessary. Remove y sticky for overlays for now to avoid clutter that
looks like double draws when the last overlay value is close to the last
price.
It seems a plethora of problems (including drawing performance) are due
to trying to hack around the strange rendering bug in Qt with `QLineF`
with y1 == y2. There was all sorts of weirdness that would show up with
trying (a hack) to just set all 4 points to the same value including
strange infinite diagonal ghost lines randomly on charts. Instead, just
placehold these flat bars' 'body' lines with a `None` and filter the
null values out before calling `QPainter.drawLines()`. This results
in simply no body lines drawn for these datums. We can probably `numba`
the filtering too if it turns out to be a bottleneck.
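Schematically (illustrative helpers, not the actual graphics code):

    from typing import List, Optional

    from PyQt5.QtCore import QLineF
    from PyQt5.QtGui import QPainter


    def body_line(index: float, open_: float, close: float) -> Optional[QLineF]:
        # flat bar: leave a placeholder instead of a y1 == y2 line
        if open_ == close:
            return None
        return QLineF(index, open_, index, close)


    def draw_bodies(p: QPainter, lines: List[Optional[QLineF]]) -> None:
        # filter the `None` placeholders so Qt never sees the zero-height
        # lines that trigger the rendering glitch
        p.drawLines([line for line in lines if line is not None])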
Add a new graphic `LineDot` which is a `pg.CurvePoint` that draws
a simple filled dot over a curve at the specified index.
Add support for adding these cursor-dots to the crosshair/mouse through
a new `CrossHair.add_curve_cursor()`. Discretize the vertical line
updates on the crosshair such that it's only drawn in the middle of
the current bar in the main chart.
Makes the chart act like tws where on each new time step increment the
chart shifts to the right so that the last bar stays in place. This
gets things looking like a proper auto-trading UX.
Added a couple methods to ``ChartPlotWidget`` to make this work:
- ``.default_view()`` to set the preferred view based on user settings
- ``.increment_view()`` to shift the view one time frame right
Also, split up the `.update_from_array()` method to be curve/ohlc
specific allowing for passing in a struct array with a named field
containing curve data more straightforwardly. This also simplifies the
contents label update functions.
Lookup overlay contents from the OHLC struct array (for now / to make
things work) and fix anchoring logic with better offsets to keep
contents labels super tight to the edge of the view box. Unfortunately,
had to hack the label-height-calc thing for avoiding overlap of graphics
with the label; haven't found a better solution yet and pyqtgraph seems
to require more rabbit holing to figure out something better. Slap in
some inf lines for over[sold/bought] rsi conditions thresholding.
If you have a common broker feed daemon then likely you don't want to
create superfluous shared mem buffers for the same symbol. This adds an
ad hoc little context manager which keeps a bool state of whether
a buffer writer task is currently running in this process. Before we
were checking the shared array token cache and **not** clearing it when
the writer task exited, resulting in incorrect writer/loader logic on
the next entry..
Really, we need a better set of SC semantics around the shared mem stuff
presuming there's only ever one writer per shared buffer at given time.
Hopefully that will come soon!
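The guard is basically this (hypothetical names):

    from contextlib import contextmanager

    _writers_active: set[str] = set()   # symbols with a live shm writer here


    @contextmanager
    def maybe_writer(symbol: str):
        # yield `True` to exactly one entrant per process per `symbol`
        # while its writer task is alive; `False` to everyone else
        is_writer = symbol not in _writers_active
        if is_writer:
            _writers_active.add(symbol)
        try:
            yield is_writer
        finally:
            if is_writer:
                # clear the flag so the *next* entry correctly becomes
                # a writer again instead of tripping on stale state
                _writers_active.discard(symbol)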
If we know the max and min in view then on datum updates we can avoid
resizing the y-range when a new max/min has not yet arrived.
This adds a very naive numpy calc in the drawing thread which we can
likely improve with a more efficient streaming alternative which can
also likely be run in a fsp subactor. Also, since this same calc is
essentially done inside `._set_yrange()` we will likely want to allow
passing the result into the method to avoid duplicate work.
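The naive calc is more or less (names are illustrative):

    import numpy as np


    def maxmin_in_view(ohlc: np.ndarray, l: int, r: int) -> tuple[float, float]:
        # max/min over the bars currently in view [l, r); assumes the
        # slice is non-empty
        in_view = ohlc[l:r]
        return float(in_view['high'].max()), float(in_view['low'].min())

    # on each datum update only resize the y-range when the extremes
    # actually change:
    #   ymx, ymn = maxmin_in_view(ohlc, l, r)
    #   if (ymx, ymn) != (last_ymx, last_ymn):
    #       chart._set_yrange(yrange=(ymn, ymx))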