Handling the edge cases in this was "fun", namely:
- entering with less than a slot's worth of units to purchase
before hitting the pp limit or, less than a slot's worth when exiting
toward a net-zero position.
- round pp msg updates using the symbol tick and lot size digits to
avoid super small (1e-30 lel) positions lingering in the ems (happens
more so with the paper engine); see the rounding sketch after this list.
- don't expect the next size method to be called for alert level changes
- pass label text and field widget key separately
- fix fill status bar slot sizing logic (once and for all) and
create a new type that allows generating / resizing the bar's
size / values with a `.set_slots()` method
- pull account names from allocator attr
- set `.fill_bar` as the fill status bar on the form for now
- make `GodWidget.load_symbol()` async
- track loaded feeds with a private `._feeds` dict
- add methods to pause/resume all feeds when chart is (un)focused
- add some commented test code for 2nd feed consumer task and rsi2 fsp
- load async signal handler for view clicking
- generate lines from staged `Order` msgs
- apply level update callback to each order that dynamically
updates the order size from the allocator calcs
- pass order msg instances to the ems client for submission
- update order size on line moves
- add `Order` msg and `Symbol` refs to each dialog
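For the pp-rounding bullet above, a minimal sketch of the idea
(the function name and digit source here are hypothetical; the real
code pulls the digits from the symbol's tick/lot size info):

```python
def round_pp_size(size: float, lot_size_digits: int) -> float:
    '''Round a position size to the symbol's lot size precision so
    float noise (eg. a 1e-30 remainder) collapses to exactly 0.0
    instead of lingering as a phantom position in the ems.
    '''
    return round(size, ndigits=lot_size_digits)

# exiting toward net-zero leaves float dust without the rounding:
assert round_pp_size(1e-30, lot_size_digits=8) == 0.0
```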
In an effort to simplify line creation and management from an order
mode here's a slew of changes:
- use our new ``LevelMarker`` for order lines and fully drop usage
of the original marker implementation stuff from `pg.InfiniteLine`
- add a left side label which shows the instrument's "units" value
- the most fundamental unit for the "size" of the order
- allow passing in an optional `marker_size: str` so that `action: str`
doesn't necessarily have to be passed (eg. when copying from an
existing line)
- change a couple of internal line config options to be public attrs
which can now be configured dynamically in real-time (since they're
all `bool` anyway):
* `hl_on_hover` -> `highlight_on_hover`
* `_always_show_labels` -> `always_show_labels`
- `LevelLine.set_level()` now only sets the position if it was **not**
called from the position changed signal (which would be redundant)
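A toy model of that last `set_level()` guard (a pure-python stand-in,
not the actual `pg.InfiniteLine` subclass):

```python
class ToyLevelLine:
    def __init__(self) -> None:
        self._level: float = 0.0

    def _set_pos(self, level: float) -> None:
        # stand-in for Qt's `setPos()`
        self._level = level

    def set_level(self, level: float, called_by_signal: bool = False) -> None:
        self._level = level
        if not called_by_signal:
            # only reposition when *not* reacting to our own
            # position-changed signal; Qt already moved the line.
            self._set_pos(level)

    def on_pos_changed(self) -> None:
        # the signal handler path: sync the value, skip the redundant move
        self.set_level(self._level, called_by_signal=True)
```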
Move all the ``pydantic`` finagling to an `_orm.py` and
just keep an `Allocator` as the backing model for our pp controls
in the position module. This all needs to be tied together in some sane
way, with facility for multiple symbols/streams per chart, for when we
get to charting-trading aggregate feeds.
It was becoming too much with all the labels and markers and lines..
Might as well package it all together instead of cramming it in the
order mode loop, chief.
The technical summary:
- move `_lines.position_line()` -> `PositionInfo.position_line()`.
- slap a `.pp` on the order mode instance which *is* a `PositionInfo`
- drop the position info label for now (let's see what users want
eventually but for now let's keep it super minimal).
- add a `LevelMarker` type to replace the old `LevelLine` internal
marker system (includes ability to change the style and level on the
fly).
- change `_annotate.mk_marker()` -> `mk_marker_path()` and expect the
caller to wrap it in a `QGraphicsPathItem` if needed.
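Rough usage sketch of the new convention (the import path and the
style string are assumptions, not verified against the module):

```python
from PyQt5.QtWidgets import QGraphicsPathItem

from piker.ui._annotate import mk_marker_path  # assumed import path

# the factory now hands back a bare `QPainterPath`; wrap it
# yourself if you need a scene item:
marker = QGraphicsPathItem(mk_marker_path('|<'))  # style str illustrative
```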
Generate and maintain position messages in the paper engine for each
`pikerd` session. We no longer tear down the engine on each client
disconnect. Ensure -ve size on sells to make the math work.
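A toy of the signed-size convention (field names are illustrative,
not the actual msg schema):

```python
fills = [
    {'size': 100, 'price': 10.0},   # buy 100
    {'size': -40, 'price': 11.0},   # sell 40 -> -ve size
]
# net position falls out of a plain sum once sells are negative:
net_size = sum(f['size'] for f in fills)
assert net_size == 60
```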
This gives us fast search over a known set of symbols which can't be
searched for via the api, such as futures and commodities contracts.
Toss in a new client method to look up contract details
`Client.con_deats()` and avoid calling it for now from `.search_stock()`
for speed; it seems originally we were doing the 2nd lookup due to weird
suffixes in the `.primaryExchange` which we can just discard.
In order to ensure the feed can in fact be kept open
until the last consumer task has completed, we need to maintain
a lifetime which is hierarchically greater than all consumer tasks.
This solution is somewhat hacky but seems to work well: we just use the
`tractor` actor's "service nursery" (the one normally used to invoke rpc
tasks) to launch the task which will start and keep open the target
cached async context manager. To make this more "proper" we may want to
offer a "root nursery" in all piker actors that is exposed through some
singleton api or even introduce a public api for it into `tractor`
directly.
Think this was fixed by passing through `**kwargs` in
`maybe_open_feed()`; the shielding for fsp respawns wasn't being
passed through properly..
This reverts commit 2f1455d423.
Maybe I've finally learned my lesson that exit stacks and per-task ctx
manager caching is just not trionic.. Use the approach we've taken for
the daemon service manager as well: create a process global nursery for
each unique ctx manager we wish to cache and simply tear it down when
the number of consumers goes to zero.
This seems to resolve all prior issues and gets us error-free cached
feeds!
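In sketch form the pattern is roughly the following (names are
illustrative, not the actual piker api):

```python
import trio
from contextlib import asynccontextmanager as acm

# key -> [cached value, consumer count, teardown event]
_cache: dict = {}

@acm
async def maybe_open_cached(key, mngr, service_nursery: trio.Nursery):
    entry = _cache.get(key)
    if entry is None:
        value_set = trio.Event()
        no_more_users = trio.Event()

        async def hold_open() -> None:
            # enter the target acm inside the (long lived) nursery's
            # task so its lifetime out-scopes any one consumer task:
            async with mngr() as value:
                _cache[key] = [value, 0, no_more_users]
                value_set.set()
                await no_more_users.wait()
            _cache.pop(key, None)

        service_nursery.start_soon(hold_open)
        await value_set.wait()
        entry = _cache[key]

    entry[1] += 1  # bump consumer count
    try:
        yield entry[0]
    finally:
        entry[1] -= 1
        if entry[1] == 0:
            # last consumer gone: tear down the cached acm
            entry[2].set()
```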
Try out the new broadcast channels from `tractor` for data feeds
we already have cached. Any time there's a cache hit we load the
cached feed and just slap a broadcast receiver on it for the local
consumer task.
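The consumer-side gist, assuming `tractor`'s stream `.subscribe()`
api (check the `tractor` docs for the exact spelling):

```python
async def consume_feed(feed) -> None:
    # on a cache hit `feed` is the already-open instance; each local
    # consumer just attaches its own broadcast receiver:
    async with feed.stream.subscribe() as bstream:
        async for quotes in bstream:
            ...  # per-task processing, no second feed spawned
```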
Add a new type/api to manage "contents labels" (labels that sit in
a view and display info about viewed data) since it's mostly used by
the linked charts cursor. Make `LinkedSplits.cursor` the new and only
instance var for the cursor such that charts can look it up from that
common class. Drop the `ChartPlotWidget._ohlc` array, just add
a `'ohlc'` entry to `._arrays`.
Orders in order mode should be chart oriented since there's a mode per
chart. If you want all orders just ask the ems or query all the charts
in a loop.
This fixes cancel-all-orders such that when 'cc' is tapped only the
orders on the *current* chart are cancelled, lel.
Generalize the methods for cancelling groups of orders (all or those
under cursor) and add new group status support such that statuses for
each cancel or order submission are displayed in the status bar. In the
"cancel-all-orders" case, use the new group status stuff.
Allows for submitting a top level "group status" associated with
a "group key" which eventually resolves once all sub-statuses associated
with that group key (and thus top level status) complete and are also
removed. Also add support for a "final message" for each status such
that once the status clear callback is called a final msg is placed on
the status bar that is then removed when the next status is set.
It's all a questionable bunch of closures/callbacks but it worx.
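A condensed sketch of those closures (hypothetical names, not the
real status bar api):

```python
from typing import Callable

class GroupStatusSketch:
    def __init__(self, bar, group_key: str, msg: str) -> None:
        self.bar, self.group_key = bar, group_key
        self.sub_keys: set[str] = set()
        bar.set_status(group_key, msg)  # top level "group status"

    def open_sub_status(self, sub_key: str, final_msg: str) -> Callable[[], None]:
        self.sub_keys.add(sub_key)

        def clear() -> None:
            self.sub_keys.discard(sub_key)
            # final msg lingers until the *next* status is set
            self.bar.set_status(sub_key, final_msg)
            if not self.sub_keys:
                # every sub-status resolved -> drop the group status
                self.bar.remove_status(self.group_key)

        return clear
```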
Instead of callbacks for key presses/releases convert our `ChartView`'s
kb input handling to async code using our event relaying-over-mem-chan
system. This is a first step toward a more async driven modal control
UX. Change a bunch of "chart" component naming as part of this as well,
namely: `ChartSpace` -> `GodWidget` and `LinkedSplitCharts` ->
`LinkedSplits`. Engage the view box's async handler code as part of new
symbol data loading in `display_symbol_data()`. More re-orging to come!
Add an `open_handler()` ctx manager for wholesale handling event sets
with a passed in async func. Better document and implement the event
filtering core including adding support for key "auto repeat" filtering;
it turns out the events delivered when `trio` does its guest-mode tick
are not the same (Qt has somehow consumed them or something) so we have
to do certain things (like getting the `.type()`, `.isAutoRepeat()`,
etc.) before shipping over the mem chan. The alt might be to copy the
event objects first but haven't tried it yet. For now just offer
auto-repeat filtering through a flag.
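Sketch of the relay-then-handle flow (helper names hypothetical); the
key detail is extracting the bits we need on the Qt side *before*
sending:

```python
import trio

# the (sync) Qt-side event filter does roughly:
#   send_chan.send_nowait(
#       (ev.type(), ev.key(), ev.modifiers(), ev.isAutoRepeat())
#   )
# since the event object itself may be stale by the guest-mode tick.

async def handle_viewbox_kb_inputs(
    recv_chan: trio.MemoryReceiveChannel,
    filter_auto_repeats: bool = True,
) -> None:
    async for etype, key, mods, is_auto_repeat in recv_chan:
        if filter_auto_repeats and is_auto_repeat:
            continue
        ...  # modal order-mode key handling here
```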
If a client attaches to a quotes data feed and requests a throttle rate,
be sure to unsub that side-band memchan + task when it detaches and
especially so on any transport connection error.
Also, use an explicit `tractor.Context.cancel()` on the client feed
block exit since we removed the implicit cancel option from the
`tractor` api.
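The shape of that guarantee, illustratively (the relay task name is
made up): the throttle task + memchan live in a scope that gets
cancelled on *any* exit of the sub block:

```python
import trio

async def serve_throttled_quotes(client_stream, quote_source, rate: int):
    async with trio.open_nursery() as n:
        send, recv = trio.open_memory_channel(64)
        # hypothetical task which rate-limits quotes into `send`
        n.start_soon(throttle_relay, quote_source, send, rate)
        try:
            async for quote in recv:
                await client_stream.send(quote)
        finally:
            # runs on clean detach *and* transport errors alike
            n.cancel_scope.cancel()
```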
There is no reason to have more than one `brokerd` trades dialogue stream
open per `emsd`. Here we minimize to managing that lone stream and
multiplexing msgs from each client such that multiple clients can be
connected to the ems, conducting trading without requiring multiple
ems-client connections to the backend broker and without the broker
being aware there are even multiple flows going on.
This patch also sets up for being able to have ems clients which
register to receive and track trade flows from other piker clients thus
enabling so-called "multi-player" trading where orders for both paper
and live trades can be shared between multiple participants in the form
of a pre-broker, local clearing service and trade signals dark book.
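The multiplexing gist (illustrative only, not the actual `emsd`
internals):

```python
async def relay_brokerd_msgs(brokerd_stream, clients: dict) -> None:
    # the *single* upstream dialogue per `emsd`:
    async for msg in brokerd_stream:
        # route each update to the ems client that owns the flow; in
        # "multi-player" mode this could instead fan out to every
        # registered subscriber of the flow.
        client_stream = clients.get(msg['oid'])
        if client_stream:
            await client_stream.send(msg)
```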