Compare commits

...

66 Commits

Author SHA1 Message Date
wattygetlood 75faf83004 Set isn't serializable on std msgpack 2021-11-07 13:14:38 -05:00
wattygetlood a270b2e033 Only load 4 ib requests worth of bars on windows... 2021-11-07 13:14:38 -05:00
wattygetlood 09fd8ef742 Hack search view on windows to 1/2 window height; needs a better solution 2021-11-07 13:14:38 -05:00
wattygetlood 422f203fc3 Size the window to approximately 1/3 the screen space 2021-11-07 13:14:38 -05:00
wattygetlood 8fbd0cd067 No support for notifications (yet) on windows 2021-11-07 13:14:38 -05:00
wattygetlood 68ed1164a1 Fix divide-by-zero when quote read is too fast in throttle task 2021-11-07 13:14:38 -05:00
wattygetlood 2f73f809f1 Fix default `brokers.toml` copying since module move 2021-11-07 13:14:38 -05:00
wattygetlood ba7b01b704 Configure window size based on screen dims on windows 2021-11-07 13:14:38 -05:00
Tyler Goodlet 83299c3a8b Start nts 2021-11-05 15:56:43 -04:00
Tyler Goodlet 0d17f4ff4c Drop print around unshown fsp updates 2021-11-05 15:48:03 -04:00
Tyler Goodlet 30dfb8f03d Don't push stream msgs in fsps by default 2021-11-05 15:46:39 -04:00
Tyler Goodlet 5f45404efb Stopgap: don't rerun Context.started() fsp calc task 2021-11-05 15:45:56 -04:00
Tyler Goodlet 49885ca750 Use `round()` for magnitude check 2021-11-05 10:27:03 -04:00
Tyler Goodlet 5b703782fc Drop order status bar down a font px size 2021-11-05 10:27:03 -04:00
Tyler Goodlet 8bf9ebc55c Guard against empty array read in step update task 2021-11-05 10:27:03 -04:00
Tyler Goodlet 6f2c2b46d5 Factor out context cacher to `tractor.trionics` 2021-11-05 10:27:03 -04:00
Tyler Goodlet 65ad18b5c3 Error out clearing task on first quote being nan 2021-11-05 10:27:03 -04:00
Tyler Goodlet 59defa378c Drop throttled rate margin to 100us 2021-11-05 10:27:03 -04:00
Tyler Goodlet 65bf5a386d Turn on profiling for the moment 2021-11-05 10:27:03 -04:00
Tyler Goodlet aa3bff704c De-densify some funcs 2021-11-05 10:27:03 -04:00
Tyler Goodlet 1061436c20 Add some typing around web bs 2021-11-05 10:27:03 -04:00
Tyler Goodlet 490126c672 Rename feed bus entrypoint 2021-11-05 10:27:03 -04:00
Tyler Goodlet eb05c78381 Update some typing and add latency checks for binance 2021-11-05 10:27:03 -04:00
Tyler Goodlet c677ff47a4 Please please please let this dpi scaling hack work 2021-11-05 10:27:03 -04:00
Tyler Goodlet bc2791526f Port imports to tractor's new subpkg 2021-11-05 10:27:03 -04:00
Tyler Goodlet 25e2d8a28e Repeat the click 3 times 2021-11-05 10:27:03 -04:00
Tyler Goodlet 11d18e2e8d Start testing out trionics helpers, put vlm before rsi 2021-11-05 10:27:03 -04:00
Tyler Goodlet c85a50289e Make openGL flag actually work.. 2021-11-05 10:27:03 -04:00
wattygetlood 2615af3955 Only scale down for scale < 2 2021-11-05 10:27:03 -04:00
Tyler Goodlet 8a151ddd54 Revert to old shm "last" meaning last row 2021-11-05 10:27:03 -04:00
Tyler Goodlet 6bea1b1adf Spawn and cache an fsp cluster ahead of time
Use a fixed worker count and don't respawn for every chart; instead
round-robin work to tasks in the cluster and (for now) hope for the best
in terms of trio scheduling, though we should obviously route via
symbol-locality next. This is currently a boon for chart spawning
startup times since actor creation is done AOT.

Additionally,
- use `zero_on_step` for dollar volume
- drop rsi on startup (again)
- add dollar volume (via fsp) along side unit volume
- litter more profiling to fsp chart startup sequence
- pre-define tick type classes for update loop
2021-11-05 10:27:03 -04:00
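A minimal sketch of the pre-spawn plus round-robin idea described above; the helper name is made up for illustration and only assumes a list of already-spawned `tractor` actor portals:

    from itertools import cycle
    from typing import Iterator


    def round_robin(portals: list) -> Iterator:
        '''Cycle over a fixed set of pre-spawned (AOT) fsp worker
        portals, handing out the next one for each new fsp task and
        ignoring symbol-locality for now.'''
        return cycle(portals)


    # given ``cluster`` as the AOT-spawned worker portals:
    # workers = round_robin(cluster)
    # portal = next(workers)   # dispatch the next fsp task here, no respawn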
Tyler Goodlet 65a645bdde Start trionics mod with an `async_enter_all` 2021-11-05 10:27:03 -04:00
Tyler Goodlet 936171a7d0 Activate/focus original window after feed reset 2021-11-05 10:27:03 -04:00
Tyler Goodlet 81666fd76e Expose dollar volume to fsp engine
It can now be declared inside an fsp config dict under the name
`dolla_vlm`. We still need to offer an engine control that zeros
the newest sample value instead of copying from the previous.

This also litters the engine code with `pyqtgraph` profiling to see if
we can improve startup times - likely it'll mean pre-allocating a small
fsp daemon cluster at startup.
2021-11-05 10:27:03 -04:00
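For example, registration would look something like this; only the `dolla_vlm` (and `vwap`) names come from the source, the surrounding dict shape is illustrative:

    fsp_conf = {
        'dolla_vlm': {},   # keys into the engine's builtin fsp table
        'vwap': {},        # other builtins can sit alongside it
    }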
Tyler Goodlet 436a86ba2d Fix shm index update race
There was a lingering issue where the fsp daemon would sync its shm
array with the source data and we'd set the start/end indices to the
same value. Under some races a reader would then read an empty `.array`
which it wasn't expecting. This fixes that, as well as tidying up the
`ShmArray.push()` logic and adds a temporary check in `.array` for zero
length if the array hasn't been written yet.

We can now start removing read array length checks in consumer code
and hopefully no more races will show up.
2021-11-05 10:27:03 -04:00
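A toy sketch of the index bookkeeping being described; this is a simplified stand-in, not the real `ShmArray`:

    import numpy as np


    class ToyShmArray:
        '''Simplified stand-in for the shared-mem array index logic.'''

        def __init__(self, buf: np.ndarray) -> None:
            self._buf = buf
            self._first = 0
            self._last = 0  # equal to ``_first`` until something is pushed

        @property
        def array(self) -> np.ndarray:
            # when first == last nothing has been written yet and a
            # reader would get a surprising zero-length view
            return self._buf[self._first:self._last]

        def push(self, data: np.ndarray) -> int:
            end = self._last + len(data)
            self._buf[self._last:end] = data
            # only advance the read-end index *after* the write completes
            # so a concurrent reader can't observe a half-updated range
            self._last = end
            return end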
Tyler Goodlet 9df27931ab TOSQUASH fix subplots.values() cuckup 2021-11-05 10:27:03 -04:00
Tyler Goodlet c440ecefa4 Add first draft of "dollar volume" fsp 2021-11-05 10:27:03 -04:00
Tyler Goodlet 622372a7d5 Autoscale the y-range for all linked charts 2021-11-05 10:27:03 -04:00
Tyler Goodlet e367ffa107 `graphics_name` is more explicit than `name` 2021-11-05 10:27:03 -04:00
Tyler Goodlet 590e08a4d4 Process framed ticks by type in main graphics loop
We are already packing framed ticks in extended lists from
the `.data._sampling.uniform_rate_send()` task so the natural solution
to avoid needless graphics cycles for HFT-ish feeds (like binance) is
to unpack those frames and for most cases only update graphics with the
"latest" data per loop iteration. Unpacking in this way also lessens
nested-iterations per tick type.

Btw, this also effectively solves all remaining issues of fast tick
feeds over-triggering the graphics loop renders as long as the original
quote stream is throttled appropriately, usually to the local display
rate.

Relates to #183, #192

Dirty deats:
- drop all per-tick rate checks, they were always somewhat pointless
  when iterating a frame of ticks per render cycle XD.
- unpack tick frame into ticks per frame type, and last of each type;
  the lasts are used to update each part of the UI/graphics by class.
- only skip the label update if we can't retrieve the last from a
  graphics source array; it seems `chart.update_curve_from_array()`
  already does a `len` check internally.
- add some draft commented code for tick type classes and a possible
  wire framed tick data structure.
- move `chart_maxmin()` range computer to module level, bind a chart to
  it with a `partial`.
- only check rate limits in main quote loop thus reporting actual
  overages
- add in commented logic for only updating the "last" cleared price from
  the most recent framed value if we want to eventually (right now seems
  like this is only relevant to ib and it's dark trades: `utrade`).
- rename `_clear_throttle_rate` -> `_quote_throttle_rate`, drop
  `_book_throttle_rate`.
2021-11-05 10:27:03 -04:00
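A minimal sketch of the frame-unpacking step; the helper name is invented and the tick dict shape only loosely follows piker's quote format:

    from collections import defaultdict


    def unpack_ticks_by_type(ticks: list[dict]) -> tuple[dict, dict]:
        '''Group a frame of ticks by their `type` field and record the
        last tick seen per type.'''
        by_type: dict[str, list[dict]] = defaultdict(list)
        lasts: dict[str, dict] = {}
        for tick in ticks:
            ttype = tick.get('type', 'n/a')
            by_type[ttype].append(tick)
            lasts[ttype] = tick
        return by_type, lasts


    # in the graphics loop only the ``lasts`` entry per type is used to
    # update each UI element, instead of one render per individual tick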
Tyler Goodlet f98733118b Update fsps and overlays inside main OHLC chart update loop 2021-11-05 10:27:03 -04:00
Tyler Goodlet 3095d602e4 Fix color passthrough, make overlays a `dict` 2021-11-05 10:27:03 -04:00
Tyler Goodlet d1a3c80d93 Factor FSP subplot update code into func
This is in prep toward doing fsp graphics updates from the main quotes
update loop (where OHLC and volume are done). Updating fsp output from
that task should, for the majority of cases, be fine presuming the
processing is derived from the quote stream as a source. Further,
calling an update function on each fsp subplot/overlay is of course
faster than a full task switch, which is how it currently works with
a separate stream for every fsp output. This also will let us delay
adding full `Feed` support around fsp streams for the moment while still
getting quote throttling dictated by the quote stream.

Going forward, we can still support a separate task/fsp stream for
updates as needed (e.g. some kind of fast external data source that isn't
synced with price data) but it should only be enabled when required by
the user.
2021-11-05 10:27:03 -04:00
Tyler Goodlet b4cdd36337 More prep for FSP feeds
The major change is moving the fsp "daemon" (more like wanna-be fspd)
endpoint to use the newer `tractor.Portal.open_context()` and
bi-directional streaming api.

There's a few other things in here too:
- make a helper for allocating single column fsp shm arrays
- rename some fsp-related functions to be more explicit on their
  purposes
2021-11-05 10:27:01 -04:00
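For context, the bi-directional `tractor` pattern looks roughly like this; the endpoint and payloads are illustrative, only the `@tractor.context` / `ctx.started()` / `open_stream()` calls are the actual api:

    import tractor


    @tractor.context
    async def fspd_ep(
        ctx: tractor.Context,
        symbol: str,
    ) -> None:
        # one-time "startup" value delivered back to the caller
        await ctx.started({'symbol': symbol})

        # then talk over a two-way msg stream
        async with ctx.open_stream() as stream:
            async for msg in stream:
                await stream.send({'echo': msg})


    # caller side, given a ``portal`` to the daemon actor:
    # async with portal.open_context(fspd_ep, symbol='xbtusd') as (ctx, first):
    #     async with ctx.open_stream() as stream:
    #         await stream.send('ping')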
Tyler Goodlet 280dfe36ca Clean up some imports, shift around some commented code 2021-11-05 10:25:43 -04:00
Tyler Goodlet 03ac7d075f Resize volume yaxis to in view range 2021-11-05 10:25:43 -04:00
Tyler Goodlet 9b7c8ed01b Update vlm sticky 2021-11-05 10:25:43 -04:00
Tyler Goodlet 0a4bc72341 Pass curve color through to y sticky label 2021-11-05 10:25:43 -04:00
Tyler Goodlet 4621d1af1a Re-order grays by "lightness" 2021-11-05 10:25:43 -04:00
Tyler Goodlet a042c7f2b3 Add back in rsi 2021-11-05 10:25:43 -04:00
Tyler Goodlet 1b437f80e0 Add dynamic subplot sizing logic, pass through step curve colors 2021-11-05 10:25:43 -04:00
Tyler Goodlet 92d6eedbb7 Add test logic for range based volume curve filling 2021-11-05 10:25:43 -04:00
Tyler Goodlet 33e5a6628d Drop rsi from display by default 2021-11-05 10:25:43 -04:00
Tyler Goodlet 6affead81e Add todo for new view padding testing 2021-11-05 10:25:43 -04:00
Tyler Goodlet fa9eebab35 Add volume plot as default
Toss in support for a "step mode" curve (unfinished atm) and use it to
plot from the `volume` field of the ohlcv shm array (if available).

changes to make it happen,
- dynamically generate the fsp sidepane form from an input config `dict`
  |_ dynamically generate the underlying `pydantic` model (see the sketch below)
  |_
- add a "volume checker" helper func that inspects the shm array
- toss in sidepane resize calls to avoid race where the ohlcv array
  is plotted too slowly compared to the volume and the chart somehow
  doesn't show..
- drop duplicate rsi2 cruft (previously used to test plots of the shm
  data)
2021-11-05 10:25:43 -04:00
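The dynamic-model piece can be done with `pydantic.create_model()`; a sketch assuming a config dict like the one above (field types and params are guesses, not piker's actual form schema):

    from pydantic import create_model

    fsp_conf = {
        'dolla_vlm': {},         # builtin fsp, no params
        'rsi': {'period': 14},   # hypothetical param for illustration
    }

    # one field per fsp entry, defaulted from the config values
    FspForm = create_model(
        'FspForm',
        **{name: (dict, params) for name, params in fsp_conf.items()},
    )

    form = FspForm()
    print(form.rsi)   # -> {'period': 14}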
Tyler Goodlet 4436ed2c18 Draft tina install section 2021-11-05 10:25:43 -04:00
Tyler Goodlet bf387bdc77 Only update curve lengths on non-negative index diffs 2021-11-05 10:25:36 -04:00
Tyler Goodlet c9359f265d Make `.paint()` method always the last 2021-11-05 10:25:36 -04:00
Tyler Goodlet 11c5264025 Always draw a last step line with px width=2 2021-11-05 10:25:36 -04:00
Tyler Goodlet 52e89ecd67 Increase current bar's pen size by a px 2021-11-05 10:25:36 -04:00
Tyler Goodlet 99ac72455f Use filled rect for current step
A `QRectF` is easier to make and draw (i think?) so use that and fill it
on volume events for decent sleek real-time look. Adjust the step array
generator to allow for an endpoints flag. Comment and/or clean out all
the old path filling calls that gave us perf issues..
2021-11-05 10:25:36 -04:00
Tyler Goodlet 0a7863e4fa Bleh, try a bunch of stuff for step filling
Turns out the performance of updating and refilling step curves > 1k ish
points is super slow :sadkek:. Disabling the fill basically returns
normal performance, so it seems maybe we'll stick with unfilled volume
"bars" for now. The other tricky bit is getting the path to extend and
fill, which is particularly slow if you use the `QPainterPath.united()`
operation (what the `+` set op does) since it seems to require an entire
redraw of the curve on each paint iteration. Removing the pixel buffer cache makes
things that much worse too..

One technique i tried was only setting a `._fill` flag when so many
datums are in view (< 1k as determined by the chart widget), and this
helps, but under high load (trade rates) you still see more lag than
without the fill, which makes me say screw it and let's stick with
unfilled bars for now. Trying to get performant filled curves will be
an exercise for an aspiring graphics eng :P
2021-11-05 10:25:36 -04:00
Tyler Goodlet 46c9943612 Add last step updates and path fill support 2021-11-05 10:25:36 -04:00
Tyler Goodlet e8e5f1525c Invert 'c' (connection) array
In latest `pyqtgraph` it seems there's a discrepancy
since `function.arrayToQPath()` was reworked and now
we need to *not* connect the last point for each bar.
2021-11-05 10:25:36 -04:00
Tyler Goodlet f4e1362792 Draft 'step' curve; couldn't get pg builtin to work 2021-11-05 10:25:36 -04:00
Tyler Goodlet 69c2dd866e Toss in references step mode impl 2021-11-05 10:25:36 -04:00
30 changed files with 1405 additions and 704 deletions

View File

@ -72,6 +72,34 @@ for a development install::
pip install -r requirements.txt -e . pip install -r requirements.txt -e .
install for tinas
*****************
for windows peeps you can start by getting `conda installed`_
and the `C++ build toolz`_ on your system.
then, `crack a conda shell`_ and run the following commands::
conda create -n piker python=3.9
conda activate piker
conda install pip
pip install --upgrade setuptools
cd dIreCToRieZ\oF\cODez\piker\
pip install -r requirements.txt -e .
in order to look coolio in front of all ur tina friends (and maybe
want to help us with testin, hackzing or configgin), install
`vscode`_ and `setup a coolio tiled wm console`_ so you can start
living the life of the tech literate..
.. _conda installed: https://
.. _C++ build toolz: https://
.. _crack a conda shell: https://
.. _vscode: https://
.. link to the tina guide
.. _setup a coolio tiled wm console: https://
provider support provider support
**************** ****************
for live data feeds the in-progress set of supported brokers is: for live data feeds the in-progress set of supported brokers is:

28
notes_to_self.rst 100644
View File

@ -0,0 +1,28 @@
Notes to self
=============
chicken scratch we shan't forget, consider this staging
for actual feature issues on wtv git wrapper-provider we're
using (no we shan't stick with GH long term likely).
cool chart features
-------------------
- allow right-click to spawn shell with current in view
data passed to the new process via ``msgpack-numpy``.
- expand OHLC datum to lower time frame.
- auto-highlight current time range on tick feed
features from IB charting
-------------------------
- vlm diffing from ticks and compare when bar arrives from historical
- should help isolate dark vlm / trades
chart ux ideas
--------------
- hotkey to zoom to order intersection (horizontal line) with previous
price levels (+ some margin obvs).
- L1 "lines" (queue size repr) should normalize to some fixed x width
such that when levels with more vlm appear other smaller levels are
scaled down giving an immediate indication of the liquidity diff.

View File

@ -18,30 +18,18 @@
Cacheing apis and toolz. Cacheing apis and toolz.
""" """
# further examples of interest:
# https://gist.github.com/njsmith/cf6fc0a97f53865f2c671659c88c1798#file-cache-py-L8
from collections import OrderedDict from collections import OrderedDict
from typing import (
Any,
Hashable,
Optional,
TypeVar,
AsyncContextManager,
)
from contextlib import ( from contextlib import (
asynccontextmanager, asynccontextmanager,
) )
import trio from tractor.trionics import maybe_open_context
from trio_typing import TaskStatus
import tractor
from .brokers import get_brokermod from .brokers import get_brokermod
from .log import get_logger from .log import get_logger
T = TypeVar('T')
log = get_logger(__name__) log = get_logger(__name__)
@ -74,112 +62,6 @@ def async_lifo_cache(maxsize=128):
return decorator return decorator
_cache: dict[str, 'Client'] = {} # noqa
class cache:
'''Globally (processs wide) cached, task access to a
kept-alive-while-in-use async resource.
'''
lock = trio.Lock()
users: int = 0
values: dict[Any, Any] = {}
resources: dict[
int,
Optional[tuple[trio.Nursery, trio.Event]]
] = {}
no_more_users: Optional[trio.Event] = None
@classmethod
async def run_ctx(
cls,
mng,
key,
task_status: TaskStatus[T] = trio.TASK_STATUS_IGNORED,
) -> None:
async with mng as value:
_, no_more_users = cls.resources[id(mng)]
cls.values[key] = value
task_status.started(value)
try:
await no_more_users.wait()
finally:
value = cls.values.pop(key)
# discard nursery ref so it won't be re-used (an error)
cls.resources.pop(id(mng))
@asynccontextmanager
async def maybe_open_ctx(
key: Hashable,
mngr: AsyncContextManager[T],
) -> (bool, T):
'''Maybe open a context manager if there is not already a cached
version for the provided ``key``. Return the cached instance on
a cache hit.
'''
await cache.lock.acquire()
ctx_key = id(mngr)
value = None
try:
# lock feed acquisition around task racing / ``trio``'s
# scheduler protocol
value = cache.values[key]
log.info(f'Reusing cached resource for {key}')
cache.users += 1
cache.lock.release()
yield True, value
except KeyError:
log.info(f'Allocating new resource for {key}')
# **critical section** that should prevent other tasks from
# checking the cache until complete otherwise the scheduler
# may switch and by accident we create more then one feed.
# TODO: avoid pulling from ``tractor`` internals and
# instead offer a "root nursery" in piker actors?
service_n = tractor.current_actor()._service_n
# TODO: does this need to be a tractor "root nursery"?
ln = cache.resources.get(ctx_key)
assert not ln
ln, _ = cache.resources[ctx_key] = (service_n, trio.Event())
value = await ln.start(cache.run_ctx, mngr, key)
cache.users += 1
cache.lock.release()
yield False, value
finally:
cache.users -= 1
if cache.lock.locked():
cache.lock.release()
if value is not None:
# if no more consumers, teardown the client
if cache.users <= 0:
log.warning(f'De-allocating resource for {key}')
# terminate mngr nursery
entry = cache.resources.get(ctx_key)
if entry:
_, no_more_users = entry
no_more_users.set()
@asynccontextmanager @asynccontextmanager
async def open_cached_client( async def open_cached_client(
brokername: str, brokername: str,
@ -190,7 +72,7 @@ async def open_cached_client(
''' '''
brokermod = get_brokermod(brokername) brokermod = get_brokermod(brokername)
async with maybe_open_ctx( async with maybe_open_context(
key=brokername, key=brokername,
mngr=brokermod.get_client(), mngr=brokermod.get_client(),
) as (cache_hit, client): ) as (cache_hit, client):
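Consumer-side usage at this point in the history mirrors the hunk above; the wrapped manager here is just a placeholder:

    from contextlib import asynccontextmanager as acm

    from tractor.trionics import maybe_open_context


    @acm
    async def open_client(name: str):
        # placeholder async resource, stands in for e.g. a broker client
        yield f'{name}-client'


    async def use_shared_client():
        async with maybe_open_context(
            key='binance',               # cache key shared across tasks
            mngr=open_client('binance'),
        ) as (cache_hit, client):
            if cache_hit:
                ...  # reuse the already-allocated, kept-alive resource
            else:
                ...  # this task allocated it; later callers will share it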

View File

@ -21,7 +21,7 @@ Profiling wrappers for internal libs.
import time import time
from functools import wraps from functools import wraps
_pg_profile: bool = False _pg_profile: bool = True
def pg_profile_enabled() -> bool: def pg_profile_enabled() -> bool:

View File

@ -19,7 +19,7 @@ Binance backend
""" """
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from typing import List, Dict, Any, Tuple, Union, Optional from typing import List, Dict, Any, Tuple, Union, Optional, AsyncGenerator
import time import time
import trio import trio
@ -37,7 +37,7 @@ from .._cacheables import open_cached_client
from ._util import resproc, SymbolNotFound from ._util import resproc, SymbolNotFound
from ..log import get_logger, get_console_log from ..log import get_logger, get_console_log
from ..data import ShmArray from ..data import ShmArray
from ..data._web_bs import open_autorecon_ws from ..data._web_bs import open_autorecon_ws, NoBsWs
log = get_logger(__name__) log = get_logger(__name__)
@ -213,7 +213,7 @@ class Client:
) )
# repack in dict form # repack in dict form
return {item[0]['symbol']: item[0] return {item[0]['symbol']: item[0]
for item in matches} for item in matches}
async def bars( async def bars(
self, self,
@ -295,7 +295,7 @@ class AggTrade(BaseModel):
M: bool # Ignore M: bool # Ignore
async def stream_messages(ws): async def stream_messages(ws: NoBsWs) -> AsyncGenerator[NoBsWs, dict]:
timeouts = 0 timeouts = 0
while True: while True:
@ -487,11 +487,20 @@ async def stream_quotes(
# signal to caller feed is ready for consumption # signal to caller feed is ready for consumption
feed_is_live.set() feed_is_live.set()
# import time
# last = time.time()
# start streaming # start streaming
async for typ, msg in msg_gen: async for typ, msg in msg_gen:
# period = time.time() - last
# hz = 1/period if period else float('inf')
# if hz > 60:
# log.info(f'Binance quotez : {hz}')
topic = msg['symbol'].lower() topic = msg['symbol'].lower()
await send_chan.send({topic: msg}) await send_chan.send({topic: msg})
# last = time.time()
@tractor.context @tractor.context

View File

@ -1157,6 +1157,11 @@ async def backfill_bars(
https://github.com/pikers/piker/issues/128 https://github.com/pikers/piker/issues/128
""" """
if platform.system() == 'Windows':
log.warning(
'Decreasing history query count to 4 since, windows...')
count = 4
out, fails = await get_bars(sym) out, fails = await get_bars(sym)
if out is None: if out is None:
raise RuntimeError("Could not pull currrent history?!") raise RuntimeError("Could not pull currrent history?!")

View File

@ -43,11 +43,15 @@ def humanize(
if not number or number <= 0: if not number or number <= 0:
return round(number, ndigits=digits) return round(number, ndigits=digits)
mag = math.floor(math.log(number, 10)) mag = round(math.log(number, 10))
if mag < 3: if mag < 3:
return round(number, ndigits=digits) return round(number, ndigits=digits)
maxmag = max(itertools.takewhile(lambda key: mag >= key, _mag2suffix)) maxmag = max(
itertools.takewhile(
lambda key: mag >= key, _mag2suffix
)
)
return "{value}{suffix}".format( return "{value}{suffix}".format(
value=round(number/10**maxmag, ndigits=digits), value=round(number/10**maxmag, ndigits=digits),
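The switch from `math.floor()` to `round()` matters because `math.log(n, 10)` is computed as a float ratio and can land just under the true magnitude; a quick check (with the tradeoff that values just below a boundary now round up):

    import math

    math.log(1000, 10)               # 2.9999999999999996 on CPython
    math.floor(math.log(1000, 10))   # 2  -> wrong magnitude bucket
    round(math.log(1000, 10))        # 3  -> what humanize() wants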

View File

@ -20,6 +20,7 @@ In da suit parlances: "Execution management systems"
""" """
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from dataclasses import dataclass, field from dataclasses import dataclass, field
from math import isnan
from pprint import pformat from pprint import pformat
import time import time
from typing import AsyncIterator, Callable from typing import AsyncIterator, Callable
@ -47,9 +48,11 @@ log = get_logger(__name__)
# TODO: numba all of this # TODO: numba all of this
def mk_check( def mk_check(
trigger_price: float, trigger_price: float,
known_last: float, known_last: float,
action: str, action: str,
) -> Callable[[float, float], bool]: ) -> Callable[[float, float], bool]:
"""Create a predicate for given ``exec_price`` based on last known """Create a predicate for given ``exec_price`` based on last known
price, ``known_last``. price, ``known_last``.
@ -77,8 +80,7 @@ def mk_check(
return check_lt return check_lt
else: raise ValueError(f'trigger: {trigger_price}, last: {known_last}')
return None
@dataclass @dataclass
@ -177,7 +179,15 @@ async def clear_dark_triggers(
tuple(execs.items()) tuple(execs.items())
): ):
if not pred or (ttype not in tf) or (not pred(price)): if (
not pred or
ttype not in tf or
not pred(price)
):
log.debug(
f'skipping quote for {sym} '
f'{pred}, {ttype} not in {tf}?, {pred(price)}'
)
# majority of iterations will be non-matches # majority of iterations will be non-matches
continue continue
@ -1005,7 +1015,8 @@ async def _emsd_main(
first_quote = feed.first_quotes[symbol] first_quote = feed.first_quotes[symbol]
book = _router.get_dark_book(broker) book = _router.get_dark_book(broker)
book.lasts[(broker, symbol)] = first_quote['last'] last = book.lasts[(broker, symbol)] = first_quote['last']
assert not isnan(last) # ib is a cucker but we've fixed it in the backend
# open a stream with the brokerd backend for order # open a stream with the brokerd backend for order
# flow dialogue # flow dialogue
@ -1035,7 +1046,7 @@ async def _emsd_main(
# signal to client that we're started and deliver # signal to client that we're started and deliver
# all known pps and accounts for this ``brokerd``. # all known pps and accounts for this ``brokerd``.
await ems_ctx.started((pp_msgs, relay.accounts)) await ems_ctx.started((pp_msgs, list(relay.accounts)))
# establish 2-way stream with requesting order-client and # establish 2-way stream with requesting order-client and
# begin handling inbound order requests and updates # begin handling inbound order requests and updates

View File

@ -60,7 +60,7 @@ def repodir():
""" """
dirpath = os.path.abspath( dirpath = os.path.abspath(
# we're 3 levels down in **this** module file # we're 3 levels down in **this** module file
dirname(dirname(dirname(os.path.realpath(__file__)))) dirname(dirname(os.path.realpath(__file__)))
) )
return dirpath return dirpath
@ -73,7 +73,7 @@ def load(
path = path or get_broker_conf_path() path = path or get_broker_conf_path()
if not os.path.isfile(path): if not os.path.isfile(path):
shutil.copyfile( shutil.copyfile(
os.path.join(repodir(), 'data/brokers.toml'), os.path.join(repodir(), 'config', 'brokers.toml'),
path, path,
) )

View File

@ -313,7 +313,8 @@ async def uniform_rate_send(
except trio.WouldBlock: except trio.WouldBlock:
now = time.time() now = time.time()
rate = 1 / (now - last_send) diff = now - last_send
rate = 1 / diff if diff else float('inf')
last_send = now last_send = now
# log.info(f'{rate} Hz sending quotes') # \n{first_quote}') # log.info(f'{rate} Hz sending quotes') # \n{first_quote}')

View File

@ -272,9 +272,8 @@ class ShmArray:
return end return end
except ValueError as err: except ValueError as err:
# shoudl raise if diff detected # should raise if diff detected
self.diff_err_fields(data) self.diff_err_fields(data)
raise err raise err
def diff_err_fields( def diff_err_fields(
@ -395,6 +394,7 @@ def open_shm_array(
# "unlink" created shm on process teardown by # "unlink" created shm on process teardown by
# pushing teardown calls onto actor context stack # pushing teardown calls onto actor context stack
tractor._actor._lifetime_stack.callback(shmarr.close) tractor._actor._lifetime_stack.callback(shmarr.close)
tractor._actor._lifetime_stack.callback(shmarr.destroy) tractor._actor._lifetime_stack.callback(shmarr.destroy)

View File

@ -133,9 +133,11 @@ def mk_symbol(
def from_df( def from_df(
df: pd.DataFrame, df: pd.DataFrame,
source=None, source=None,
default_tf=None default_tf=None
) -> np.recarray: ) -> np.recarray:
"""Convert OHLC formatted ``pandas.DataFrame`` to ``numpy.recarray``. """Convert OHLC formatted ``pandas.DataFrame`` to ``numpy.recarray``.

View File

@ -20,7 +20,7 @@ ToOlS fOr CoPInG wITh "tHE wEB" protocols.
""" """
from contextlib import asynccontextmanager, AsyncExitStack from contextlib import asynccontextmanager, AsyncExitStack
from types import ModuleType from types import ModuleType
from typing import Any, Callable from typing import Any, Callable, AsyncGenerator
import json import json
import trio import trio
@ -127,7 +127,7 @@ async def open_autorecon_ws(
# TODO: proper type annot smh # TODO: proper type annot smh
fixture: Callable, fixture: Callable,
): ) -> AsyncGenerator[tuple[...], NoBsWs]:
"""Apparently we can QoS for all sorts of reasons..so catch em. """Apparently we can QoS for all sorts of reasons..so catch em.
""" """

View File

@ -34,11 +34,10 @@ import trio
from trio.abc import ReceiveChannel from trio.abc import ReceiveChannel
from trio_typing import TaskStatus from trio_typing import TaskStatus
import tractor import tractor
# from tractor import _broadcast
from pydantic import BaseModel from pydantic import BaseModel
from ..brokers import get_brokermod from ..brokers import get_brokermod
from .._cacheables import maybe_open_ctx from .._cacheables import maybe_open_context
from ..log import get_logger, get_console_log from ..log import get_logger, get_console_log
from .._daemon import ( from .._daemon import (
maybe_spawn_brokerd, maybe_spawn_brokerd,
@ -247,7 +246,7 @@ async def allocate_persistent_feed(
@tractor.context @tractor.context
async def attach_feed_bus( async def open_feed_bus(
ctx: tractor.Context, ctx: tractor.Context,
brokername: str, brokername: str,
@ -364,7 +363,7 @@ async def open_sample_step_stream(
# XXX: this should be singleton on a host, # XXX: this should be singleton on a host,
# a lone broker-daemon per provider should be # a lone broker-daemon per provider should be
# created for all practical purposes # created for all practical purposes
async with maybe_open_ctx( async with maybe_open_context(
key=delay_s, key=delay_s,
mngr=portal.open_stream_from( mngr=portal.open_stream_from(
iter_ohlc_periods, iter_ohlc_periods,
@ -507,7 +506,7 @@ async def open_feed(
portal.open_context( portal.open_context(
attach_feed_bus, open_feed_bus,
brokername=brokername, brokername=brokername,
symbol=sym, symbol=sym,
loglevel=loglevel, loglevel=loglevel,
@ -586,7 +585,7 @@ async def maybe_open_feed(
''' '''
sym = symbols[0].lower() sym = symbols[0].lower()
async with maybe_open_ctx( async with maybe_open_context(
key=(brokername, sym), key=(brokername, sym),
mngr=open_feed( mngr=open_feed(
brokername, brokername,

View File

@ -34,7 +34,7 @@ from ..data import attach_shm_array
from ..data.feed import Feed from ..data.feed import Feed
from ..data._sharedmem import ShmArray from ..data._sharedmem import ShmArray
from ._momo import _rsi, _wma from ._momo import _rsi, _wma
from ._volume import _tina_vwap from ._volume import _tina_vwap, dolla_vlm
log = get_logger(__name__) log = get_logger(__name__)
@ -42,6 +42,7 @@ _fsp_builtins = {
'rsi': _rsi, 'rsi': _rsi,
'wma': _wma, 'wma': _wma,
'vwap': _tina_vwap, 'vwap': _tina_vwap,
'dolla_vlm': dolla_vlm,
} }
# TODO: things to figure the heck out: # TODO: things to figure the heck out:
@ -90,7 +91,7 @@ async def fsp_compute(
func_name: str, func_name: str,
func: Callable, func: Callable,
attach_stream: bool = True, attach_stream: bool = False,
task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED, task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,
) -> None: ) -> None:
@ -144,10 +145,13 @@ async def fsp_compute(
profiler(f'{func_name} pushed history') profiler(f'{func_name} pushed history')
profiler.finish() profiler.finish()
# TODO: UGH, what is the right way to do something like this?
if not ctx._started_called:
await ctx.started(index)
# setup a respawn handle # setup a respawn handle
with trio.CancelScope() as cs: with trio.CancelScope() as cs:
tracker = TaskTracker(trio.Event(), cs) tracker = TaskTracker(trio.Event(), cs)
await ctx.started(index)
task_status.started((tracker, index)) task_status.started((tracker, index))
profiler(f'{func_name} yield last index') profiler(f'{func_name} yield last index')

View File

@ -14,16 +14,20 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
from typing import AsyncIterator, Optional from typing import AsyncIterator, Optional, Union
import numpy as np import numpy as np
from tractor.trionics._broadcast import AsyncReceiver
from ..data._normalize import iterticks from ..data._normalize import iterticks
from ..data._sharedmem import ShmArray
def wap( def wap(
signal: np.ndarray, signal: np.ndarray,
weights: np.ndarray, weights: np.ndarray,
) -> np.ndarray: ) -> np.ndarray:
"""Weighted average price from signal and weights. """Weighted average price from signal and weights.
@ -47,15 +51,22 @@ def wap(
async def _tina_vwap( async def _tina_vwap(
source, #: AsyncStream[np.ndarray],
ohlcv: np.ndarray, # price time-frame "aware" source: AsyncReceiver[dict],
ohlcv: ShmArray, # OHLC sampled history
# TODO: anchor logic (eg. to session start)
anchors: Optional[np.ndarray] = None, anchors: Optional[np.ndarray] = None,
) -> AsyncIterator[np.ndarray]: # maybe something like like FspStream?
"""Streaming volume weighted moving average. ) -> Union[
AsyncIterator[np.ndarray],
float
]:
'''Streaming volume weighted moving average.
Calling this "tina" for now since we're using HLC3 instead of tick. Calling this "tina" for now since we're using HLC3 instead of tick.
""" '''
if anchors is None: if anchors is None:
# TODO: # TODO:
# anchor to session start of data if possible # anchor to session start of data if possible
@ -75,7 +86,6 @@ async def _tina_vwap(
# vwap_tot = h_vwap[-1] # vwap_tot = h_vwap[-1]
async for quote in source: async for quote in source:
for tick in iterticks(quote, types=['trade']): for tick in iterticks(quote, types=['trade']):
# c, h, l, v = ohlcv.array[-1][ # c, h, l, v = ohlcv.array[-1][
@ -91,3 +101,44 @@ async def _tina_vwap(
# yield ((((o + h + l) / 3) * v) weights_tot) / v_tot # yield ((((o + h + l) / 3) * v) weights_tot) / v_tot
yield w_tot / v_tot yield w_tot / v_tot
async def dolla_vlm(
source: AsyncReceiver[dict],
ohlcv: ShmArray, # OHLC sampled history
) -> Union[
AsyncIterator[np.ndarray],
float
]:
a = ohlcv.array
chl3 = (a['close'] + a['high'] + a['low']) / 3
v = a['volume']
# history
yield chl3 * v
i = ohlcv.index
lvlm = 0
async for quote in source:
for tick in iterticks(quote):
# this computes tick-by-tick weightings from here forward
size = tick['size']
price = tick['price']
li = ohlcv.index
if li > i:
i = li
lvlm = 0
c, h, l, v = ohlcv.last()[
['close', 'high', 'low', 'volume']
][0]
lvlm += price * size
tina_lvlm = (c + h + l) / 3 * v  # hlc3 * volume
# print(f' tinal vlm: {tina_lvlm}')
yield lvlm

80
piker/trionics.py 100644
View File

@ -0,0 +1,80 @@
# piker: trading gear for hackers
# Copyright (C) Tyler Goodlet (in stewardship of piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
sugarz for trio/tractor conc peeps.
'''
from typing import AsyncContextManager
from typing import TypeVar
from contextlib import asynccontextmanager as acm
import trio
# A regular invariant generic type
T = TypeVar("T")
async def _enter_and_sleep(
mngr: AsyncContextManager[T],
to_yield: dict[int, T],
all_entered: trio.Event,
# task_status: TaskStatus[T] = trio.TASK_STATUS_IGNORED,
) -> T:
'''Open the async context manager, deliver its value
to this task's spawner and sleep until cancelled.
'''
async with mngr as value:
to_yield[id(mngr)] = value
if all(to_yield.values()):
all_entered.set()
# sleep until cancelled
await trio.sleep_forever()
@acm
async def async_enter_all(
*mngrs: list[AsyncContextManager[T]],
) -> tuple[T]:
to_yield = {}.fromkeys(id(mngr) for mngr in mngrs)
all_entered = trio.Event()
async with trio.open_nursery() as n:
for mngr in mngrs:
n.start_soon(
_enter_and_sleep,
mngr,
to_yield,
all_entered,
)
# deliver control once all managers have started up
await all_entered.wait()
yield tuple(to_yield.values())
# tear down all sleeper tasks thus triggering individual
# mngr ``__aexit__()``s.
n.cancel_scope.cancel()
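A usage sketch for the new helper; the managers are stand-ins and the import path follows the new module above:

    from contextlib import asynccontextmanager as acm

    import trio

    from piker.trionics import async_enter_all


    @acm
    async def open_resource(name: str):
        # pretend setup/teardown around some async resource
        yield f'{name}-handle'


    async def main():
        # enter both managers concurrently, get their values as a tuple
        async with async_enter_all(
            open_resource('feed'),
            open_resource('broker'),
        ) as (feed, broker):
            print(feed, broker)


    trio.run(main)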

View File

@ -85,11 +85,11 @@ async def _async_main(
screen = godwidget.window.current_screen() screen = godwidget.window.current_screen()
# configure graphics update throttling based on display refresh rate # configure graphics update throttling based on display refresh rate
_display._clear_throttle_rate = min( _display._quote_throttle_rate = min(
round(screen.refreshRate()), round(screen.refreshRate()),
_display._clear_throttle_rate, _display._quote_throttle_rate,
) )
log.info(f'Set graphics update rate to {_display._clear_throttle_rate} Hz') log.info(f'Set graphics update rate to {_display._quote_throttle_rate} Hz')
# TODO: do styling / themeing setup # TODO: do styling / themeing setup
# _style.style_ze_sheets(godwidget) # _style.style_ze_sheets(godwidget)

View File

@ -25,6 +25,9 @@ from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import ( from PyQt5.QtWidgets import (
QFrame, QFrame,
QWidget, QWidget,
QHBoxLayout,
QVBoxLayout,
QSplitter,
# QSizePolicy, # QSizePolicy,
) )
import numpy as np import numpy as np
@ -53,6 +56,7 @@ from ._style import (
) )
from ..data.feed import Feed from ..data.feed import Feed
from ..data._source import Symbol from ..data._source import Symbol
from ..data._sharedmem import ShmArray
from ..log import get_logger from ..log import get_logger
from ._interaction import ChartView from ._interaction import ChartView
from ._forms import FieldsForm from ._forms import FieldsForm
@ -64,11 +68,11 @@ log = get_logger(__name__)
class GodWidget(QWidget): class GodWidget(QWidget):
''' '''
"Our lord and savior, the holy child of window-shua, there is no "Our lord and savior, the holy child of window-shua, there is no
widget above thee." - 6|6 widget above thee." - 6||6
The highest level composed widget which contains layouts for The highest level composed widget which contains layouts for
organizing lower level charts as well as other widgets used to organizing charts as well as other sub-widgets used to control or
control or modify them. modify them.
''' '''
def __init__( def __init__(
@ -80,19 +84,19 @@ class GodWidget(QWidget):
super().__init__(parent) super().__init__(parent)
self.hbox = QtWidgets.QHBoxLayout(self) self.hbox = QHBoxLayout(self)
self.hbox.setContentsMargins(0, 0, 0, 0) self.hbox.setContentsMargins(0, 0, 0, 0)
self.hbox.setSpacing(6) self.hbox.setSpacing(6)
self.hbox.setAlignment(Qt.AlignTop) self.hbox.setAlignment(Qt.AlignTop)
self.vbox = QtWidgets.QVBoxLayout() self.vbox = QVBoxLayout()
self.vbox.setContentsMargins(0, 0, 0, 0) self.vbox.setContentsMargins(0, 0, 0, 0)
self.vbox.setSpacing(2) self.vbox.setSpacing(2)
self.vbox.setAlignment(Qt.AlignTop) self.vbox.setAlignment(Qt.AlignTop)
self.hbox.addLayout(self.vbox) self.hbox.addLayout(self.vbox)
# self.toolbar_layout = QtWidgets.QHBoxLayout() # self.toolbar_layout = QHBoxLayout()
# self.toolbar_layout.setContentsMargins(0, 0, 0, 0) # self.toolbar_layout.setContentsMargins(0, 0, 0, 0)
# self.vbox.addLayout(self.toolbar_layout) # self.vbox.addLayout(self.toolbar_layout)
@ -106,25 +110,8 @@ class GodWidget(QWidget):
# assigned in the startup func `_async_main()` # assigned in the startup func `_async_main()`
self._root_n: trio.Nursery = None self._root_n: trio.Nursery = None
def set_chart_symbol(
self,
symbol_key: str, # of form <fqsn>.<providername>
linkedsplits: 'LinkedSplits', # type: ignore
) -> None:
# re-sort org cache symbol list in LIFO order
cache = self._chart_cache
cache.pop(symbol_key, None)
cache[symbol_key] = linkedsplits
def get_chart_symbol(
self,
symbol_key: str,
) -> 'LinkedSplits': # type: ignore
return self._chart_cache.get(symbol_key)
# def init_timeframes_ui(self): # def init_timeframes_ui(self):
# self.tf_layout = QtWidgets.QHBoxLayout() # self.tf_layout = QHBoxLayout()
# self.tf_layout.setSpacing(0) # self.tf_layout.setSpacing(0)
# self.tf_layout.setContentsMargins(0, 12, 0, 0) # self.tf_layout.setContentsMargins(0, 12, 0, 0)
# time_frames = ('1M', '5M', '15M', '30M', '1H', '1D', '1W', 'MN') # time_frames = ('1M', '5M', '15M', '30M', '1H', '1D', '1W', 'MN')
@ -145,6 +132,23 @@ class GodWidget(QWidget):
# self.strategy_box = StrategyBoxWidget(self) # self.strategy_box = StrategyBoxWidget(self)
# self.toolbar_layout.addWidget(self.strategy_box) # self.toolbar_layout.addWidget(self.strategy_box)
def set_chart_symbol(
self,
symbol_key: str, # of form <fqsn>.<providername>
linkedsplits: 'LinkedSplits', # type: ignore
) -> None:
# re-sort org cache symbol list in LIFO order
cache = self._chart_cache
cache.pop(symbol_key, None)
cache[symbol_key] = linkedsplits
def get_chart_symbol(
self,
symbol_key: str,
) -> 'LinkedSplits': # type: ignore
return self._chart_cache.get(symbol_key)
async def load_symbol( async def load_symbol(
self, self,
@ -255,7 +259,7 @@ class ChartnPane(QFrame):
''' '''
sidepane: FieldsForm sidepane: FieldsForm
hbox: QtWidgets.QHBoxLayout hbox: QHBoxLayout
chart: Optional['ChartPlotWidget'] = None chart: Optional['ChartPlotWidget'] = None
def __init__( def __init__(
@ -271,7 +275,7 @@ class ChartnPane(QFrame):
self.sidepane = sidepane self.sidepane = sidepane
self.chart = None self.chart = None
hbox = self.hbox = QtWidgets.QHBoxLayout(self) hbox = self.hbox = QHBoxLayout(self)
hbox.setAlignment(Qt.AlignTop | Qt.AlignLeft) hbox.setAlignment(Qt.AlignTop | Qt.AlignLeft)
hbox.setContentsMargins(0, 0, 0, 0) hbox.setContentsMargins(0, 0, 0, 0)
hbox.setSpacing(3) hbox.setSpacing(3)
@ -281,21 +285,14 @@ class ChartnPane(QFrame):
class LinkedSplits(QWidget): class LinkedSplits(QWidget):
''' '''
Widget that holds a central chart plus derived Composite that holds a central chart plus a set of (derived)
subcharts computed from the original data set apart subcharts (usually computed from the original data) arranged in
by splitters for resizing. a splitter for resizing.
A single internal references to the data is maintained A single internal references to the data is maintained
for each chart and can be updated externally. for each chart and can be updated externally.
''' '''
long_pen = pg.mkPen('#006000')
long_brush = pg.mkBrush('#00ff00')
short_pen = pg.mkPen('#600000')
short_brush = pg.mkBrush('#ff0000')
zoomIsDisabled = QtCore.pyqtSignal(bool)
def __init__( def __init__(
self, self,
@ -325,11 +322,11 @@ class LinkedSplits(QWidget):
# self.xaxis_ind.setStyle(showValues=False) # self.xaxis_ind.setStyle(showValues=False)
# self.xaxis.hide() # self.xaxis.hide()
self.splitter = QtWidgets.QSplitter(QtCore.Qt.Vertical) self.splitter = QSplitter(QtCore.Qt.Vertical)
self.splitter.setMidLineWidth(1) self.splitter.setMidLineWidth(0)
self.splitter.setHandleWidth(0) self.splitter.setHandleWidth(2)
self.layout = QtWidgets.QVBoxLayout(self) self.layout = QVBoxLayout(self)
self.layout.setContentsMargins(0, 0, 0, 0) self.layout.setContentsMargins(0, 0, 0, 0)
self.layout.addWidget(self.splitter) self.layout.addWidget(self.splitter)
@ -341,20 +338,28 @@ class LinkedSplits(QWidget):
def set_split_sizes( def set_split_sizes(
self, self,
# prop: float = 0.375, # proportion allocated to consumer subcharts prop: Optional[float] = None,
prop: float = 5/8,
) -> None: ) -> None:
'''Set the proportion of space allocated for linked subcharts. '''Set the proportion of space allocated for linked subcharts.
''' '''
ln = len(self.subplots)
if not prop:
# proportion allocated to consumer subcharts
if ln < 2:
prop = 1/(.666 * 6)
elif ln >= 2:
prop = 3/8
major = 1 - prop major = 1 - prop
min_h_ind = int((self.height() * prop) / len(self.subplots)) min_h_ind = int((self.height() * prop) / ln)
sizes = [int(self.height() * major)] sizes = [int(self.height() * major)]
sizes.extend([min_h_ind] * len(self.subplots)) sizes.extend([min_h_ind] * ln)
self.splitter.setSizes(sizes) # , int(self.height()*0.2) self.splitter.setSizes(sizes)
def focus(self) -> None: def focus(self) -> None:
if self.chart is not None: if self.chart is not None:
@ -374,16 +379,21 @@ class LinkedSplits(QWidget):
style: str = 'bar', style: str = 'bar',
) -> 'ChartPlotWidget': ) -> 'ChartPlotWidget':
"""Start up and show main (price) chart and all linked subcharts. '''Start up and show main (price) chart and all linked subcharts.
The data input struct array must include OHLC fields. The data input struct array must include OHLC fields.
"""
'''
# add crosshairs # add crosshairs
self.cursor = Cursor( self.cursor = Cursor(
linkedsplits=self, linkedsplits=self,
digits=symbol.tick_size_digits, digits=symbol.tick_size_digits,
) )
# NOTE: atm the first (and only) OHLC price chart for the symbol
# is given a special reference but in the future there shouldn't
# be any distinction since we will have multiple symbols per
# view as part of "aggregate feeds".
self.chart = self.add_plot( self.chart = self.add_plot(
name=symbol.key, name=symbol.key,
@ -425,9 +435,7 @@ class LinkedSplits(QWidget):
**cpw_kwargs, **cpw_kwargs,
) -> 'ChartPlotWidget': ) -> 'ChartPlotWidget':
'''Add (sub)plots to chart widget by name. '''Add (sub)plots to chart widget by key.
If ``name`` == ``"main"`` the chart will be the the primary view.
''' '''
if self.chart is None and not _is_main: if self.chart is None and not _is_main:
@ -495,8 +503,9 @@ class LinkedSplits(QWidget):
cpw.plotItem.vb.linkedsplits = self cpw.plotItem.vb.linkedsplits = self
cpw.setFrameStyle( cpw.setFrameStyle(
QtWidgets.QFrame.StyledPanel QtWidgets.QFrame.StyledPanel
# | QtWidgets.QFrame.Plain) # | QtWidgets.QFrame.Plain
) )
cpw.hideButtons() cpw.hideButtons()
# XXX: gives us outline on backside of y-axis # XXX: gives us outline on backside of y-axis
@ -515,7 +524,22 @@ class LinkedSplits(QWidget):
cpw.draw_ohlc(name, array, array_key=array_key) cpw.draw_ohlc(name, array, array_key=array_key)
elif style == 'line': elif style == 'line':
cpw.draw_curve(name, array, array_key=array_key) cpw.draw_curve(
name,
array,
array_key=array_key,
color='default_light',
)
elif style == 'step':
cpw.draw_curve(
name,
array,
array_key=array_key,
step_mode=True,
color='davies',
fill_color='davies',
)
else: else:
raise ValueError(f"Chart style {style} is currently unsupported") raise ValueError(f"Chart style {style} is currently unsupported")
@ -523,14 +547,7 @@ class LinkedSplits(QWidget):
if not _is_main: if not _is_main:
# track by name # track by name
self.subplots[name] = cpw self.subplots[name] = cpw
# if sidepane:
# # TODO: use a "panes" collection to manage this?
# qframe.setMaximumWidth(self.chart.sidepane.width())
# qframe.setMinimumWidth(self.chart.sidepane.width())
self.splitter.addWidget(qframe) self.splitter.addWidget(qframe)
# scale split regions # scale split regions
self.set_split_sizes() self.set_split_sizes()
@ -586,6 +603,9 @@ class ChartPlotWidget(pg.PlotWidget):
view_color: str = 'papas_special', view_color: str = 'papas_special',
pen_color: str = 'bracket', pen_color: str = 'bracket',
# TODO: load from config
use_open_gl: bool = False,
static_yrange: Optional[tuple[float, float]] = None, static_yrange: Optional[tuple[float, float]] = None,
**kwargs, **kwargs,
@ -600,9 +620,9 @@ class ChartPlotWidget(pg.PlotWidget):
# parent=None, # parent=None,
# plotItem=None, # plotItem=None,
# antialias=True, # antialias=True,
useOpenGL=True,
**kwargs **kwargs
) )
self.useOpenGL(use_open_gl)
self.name = name self.name = name
self.data_key = data_key self.data_key = data_key
self.linked = linkedsplits self.linked = linkedsplits
@ -619,7 +639,8 @@ class ChartPlotWidget(pg.PlotWidget):
'ohlc': array, 'ohlc': array,
} }
self._graphics = {} # registry of underlying graphics self._graphics = {} # registry of underlying graphics
self._overlays = set() # registry of overlay curve names # registry of overlay curve names
self._overlays: dict[str, ShmArray] = {}
self._feeds: dict[Symbol, Feed] = {} self._feeds: dict[Symbol, Feed] = {}
@ -732,6 +753,7 @@ class ChartPlotWidget(pg.PlotWidget):
self._vb.setXRange( self._vb.setXRange(
min=l + 1, min=l + 1,
max=r + 1, max=r + 1,
# TODO: holy shit, wtf dude... why tf would this not be 0 by # TODO: holy shit, wtf dude... why tf would this not be 0 by
# default... speechless. # default... speechless.
padding=0, padding=0,
@ -772,7 +794,7 @@ class ChartPlotWidget(pg.PlotWidget):
update_func=ContentsLabel.update_from_ohlc, update_func=ContentsLabel.update_from_ohlc,
) )
self._add_sticky(name) self._add_sticky(name, bg_color='davies')
return graphics return graphics
@ -784,7 +806,7 @@ class ChartPlotWidget(pg.PlotWidget):
array_key: Optional[str] = None, array_key: Optional[str] = None,
overlay: bool = False, overlay: bool = False,
color: str = 'default_light', color: Optional[str] = None,
add_label: bool = True, add_label: bool = True,
**pdi_kwargs, **pdi_kwargs,
@ -794,15 +816,18 @@ class ChartPlotWidget(pg.PlotWidget):
the input array ``data``. the input array ``data``.
""" """
_pdi_defaults = { color = color or self.pen_color or 'default_light'
'pen': pg.mkPen(hcolor(color)), pdi_kwargs.update({
} 'color': color
pdi_kwargs.update(_pdi_defaults) })
data_key = array_key or name data_key = array_key or name
# pg internals for reference.
# curve = pg.PlotDataItem( # curve = pg.PlotDataItem(
# curve = pg.PlotCurveItem( # curve = pg.PlotCurveItem(
# yah, we wrote our own B)
curve = FastAppendCurve( curve = FastAppendCurve(
y=data[data_key], y=data[data_key],
x=data['index'], x=data['index'],
@ -840,14 +865,14 @@ class ChartPlotWidget(pg.PlotWidget):
if overlay: if overlay:
anchor_at = ('bottom', 'left') anchor_at = ('bottom', 'left')
self._overlays.add(name) self._overlays[name] = None
else: else:
anchor_at = ('top', 'left') anchor_at = ('top', 'left')
# TODO: something instead of stickies for overlays # TODO: something instead of stickies for overlays
# (we need something that avoids clutter on x-axis). # (we need something that avoids clutter on x-axis).
self._add_sticky(name, bg_color='default_light') self._add_sticky(name, bg_color=color)
if self.linked.cursor: if self.linked.cursor:
self.linked.cursor.add_curve_cursor(self, curve) self.linked.cursor.add_curve_cursor(self, curve)
@ -861,6 +886,7 @@ class ChartPlotWidget(pg.PlotWidget):
return curve return curve
# TODO: make this a ctx mngr
def _add_sticky( def _add_sticky(
self, self,
@ -890,67 +916,78 @@ class ChartPlotWidget(pg.PlotWidget):
def update_ohlc_from_array( def update_ohlc_from_array(
self, self,
name: str,
graphics_name: str,
array: np.ndarray, array: np.ndarray,
**kwargs, **kwargs,
) -> pg.GraphicsObject:
"""Update the named internal graphics from ``array``.
""" ) -> pg.GraphicsObject:
'''Update the named internal graphics from ``array``.
'''
self._arrays['ohlc'] = array self._arrays['ohlc'] = array
graphics = self._graphics[name] graphics = self._graphics[graphics_name]
graphics.update_from_array(array, **kwargs) graphics.update_from_array(array, **kwargs)
return graphics return graphics
def update_curve_from_array( def update_curve_from_array(
self, self,
name: str, graphics_name: str,
array: np.ndarray, array: np.ndarray,
array_key: Optional[str] = None, array_key: Optional[str] = None,
**kwargs, **kwargs,
) -> pg.GraphicsObject: ) -> pg.GraphicsObject:
"""Update the named internal graphics from ``array``. '''Update the named internal graphics from ``array``.
""" '''
assert len(array)
data_key = array_key or graphics_name
data_key = array_key or name if graphics_name not in self._overlays:
if name not in self._overlays:
self._arrays['ohlc'] = array self._arrays['ohlc'] = array
else: else:
self._arrays[data_key] = array self._arrays[data_key] = array
curve = self._graphics[name] curve = self._graphics[graphics_name]
if len(array): # NOTE: back when we weren't implementing the curve graphics
# TODO: we should instead implement a diff based # ourselves you'd have updates using this method:
# "only update with new items" on the pg.PlotCurveItem # curve.setData(y=array[graphics_name], x=array['index'], **kwargs)
# one place to dig around this might be the `QBackingStore`
# https://doc.qt.io/qt-5/qbackingstore.html # NOTE: graphics **must** implement a diff based update
# curve.setData(y=array[name], x=array['index'], **kwargs) # operation where an internal ``FastUpdateCurve._xrange`` is
curve.update_from_array( # used to determine if the underlying path needs to be
x=array['index'], # pre/ap-pended.
y=array[data_key], curve.update_from_array(
**kwargs x=array['index'],
) y=array[data_key],
**kwargs
)
return curve return curve
def _set_yrange( def _set_yrange(
self, self,
*, *,
yrange: Optional[tuple[float, float]] = None, yrange: Optional[tuple[float, float]] = None,
range_margin: float = 0.06, range_margin: float = 0.06,
bars_range: Optional[tuple[int, int, int, int]] = None,
# flag to prevent triggering sibling charts from the same linked
# set from recursion errors.
autoscale_linked_plots: bool = True,
) -> None: ) -> None:
"""Set the viewable y-range based on embedded data. '''Set the viewable y-range based on embedded data.
This adds auto-scaling like zoom on the scroll wheel such This adds auto-scaling like zoom on the scroll wheel such
that data always fits nicely inside the current view of the that data always fits nicely inside the current view of the
data set. data set.
""" '''
set_range = True set_range = True
if self._static_yrange == 'axis': if self._static_yrange == 'axis':
@ -966,52 +1003,50 @@ class ChartPlotWidget(pg.PlotWidget):
# Determine max, min y values in viewable x-range from data. # Determine max, min y values in viewable x-range from data.
# Make sure min bars/datums on screen is adhered. # Make sure min bars/datums on screen is adhered.
l, lbar, rbar, r = self.bars_range() l, lbar, rbar, r = bars_range or self.bars_range()
# figure out x-range in view such that user can scroll "off" if autoscale_linked_plots:
# the data set up to the point where ``_min_points_to_show`` # avoid recursion by sibling plots
# are left. linked = self.linked
# view_len = r - l plots = list(linked.subplots.copy().values())
main = linked.chart
if main:
plots.append(main)
for chart in plots:
if chart and not chart._static_yrange:
chart._set_yrange(
bars_range=(l, lbar, rbar, r),
autoscale_linked_plots=False,
)
# TODO: logic to check if end of bars in view # TODO: logic to check if end of bars in view
# extra = view_len - _min_points_to_show # extra = view_len - _min_points_to_show
# begin = self._arrays['ohlc'][0]['index'] - extra # begin = self._arrays['ohlc'][0]['index'] - extra
# # end = len(self._arrays['ohlc']) - 1 + extra # # end = len(self._arrays['ohlc']) - 1 + extra
# end = self._arrays['ohlc'][-1]['index'] - 1 + extra # end = self._arrays['ohlc'][-1]['index'] - 1 + extra
# XXX: test code for only rendering lines for the bars in view.
# This turns out to be very very poor perf when scaling out to
# many bars (think > 1k) on screen.
# name = self.name
# bars = self._graphics[self.name]
# bars.draw_lines(
# istart=max(lbar, l), iend=min(rbar, r), just_history=True)
# bars_len = rbar - lbar # bars_len = rbar - lbar
# log.debug( # log.debug(
# f"\nl: {l}, lbar: {lbar}, rbar: {rbar}, r: {r}\n" # f"\nl: {l}, lbar: {lbar}, rbar: {rbar}, r: {r}\n"
# f"view_len: {view_len}, bars_len: {bars_len}\n" # f"view_len: {view_len}, bars_len: {bars_len}\n"
# f"begin: {begin}, end: {end}, extra: {extra}" # f"begin: {begin}, end: {end}, extra: {extra}"
# ) # )
# self._set_xlimits(begin, end)
# TODO: this should be some kind of numpy view api
# bars = self._arrays['ohlc'][lbar:rbar]
a = self._arrays['ohlc'] a = self._arrays['ohlc']
ifirst = a[0]['index'] ifirst = a[0]['index']
bars = a[lbar - ifirst:rbar - ifirst + 1] bars = a[lbar - ifirst:rbar - ifirst + 1]
if not len(bars): if not len(bars):
# likely no data loaded yet or extreme scrolling? # likely no data loaded yet or extreme scrolling?
log.error(f"WTF bars_range = {lbar}:{rbar}") log.error(f"WTF bars_range = {lbar}:{rbar}")
return return
if self.data_key != self.linked.symbol.key: if self.data_key != self.linked.symbol.key:
bars = a[self.data_key] bars = bars[self.data_key]
ylow = np.nanmin(bars) ylow = np.nanmin(bars)
yhigh = np.nanmax((bars)) yhigh = np.nanmax(bars)
# print(f'{(ylow, yhigh)}')
else: else:
# just the std ohlc bars # just the std ohlc bars
ylow = np.nanmin(bars['low']) ylow = np.nanmin(bars['low'])
@ -1072,7 +1107,6 @@ class ChartPlotWidget(pg.PlotWidget):
# TODO: this should go onto some sort of # TODO: this should go onto some sort of
# data-view strimg thinger..right? # data-view strimg thinger..right?
ohlc = self._shm.array ohlc = self._shm.array
# ohlc = chart._shm.array
# XXX: not sure why the time is so off here # XXX: not sure why the time is so off here
# looks like we're gonna have to do some fixing.. # looks like we're gonna have to do some fixing..

View File

@ -18,25 +18,105 @@
Fast, smooth, sexy curves. Fast, smooth, sexy curves.
""" """
from typing import Tuple from typing import Optional
import numpy as np
import pyqtgraph as pg import pyqtgraph as pg
from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5 import QtGui, QtWidgets
from PyQt5.QtCore import (
QLineF,
QSizeF,
QRectF,
QPointF,
)
from .._profile import pg_profile_enabled from .._profile import pg_profile_enabled
from ._style import hcolor
def step_path_arrays_from_1d(
x: np.ndarray,
y: np.ndarray,
include_endpoints: bool = False,
) -> (np.ndarray, np.ndarray):
'''Generate a "step mode" curve aligned with OHLC style bars
such that each segment spans each bar (aka "centered" style).
'''
y_out = y.copy()
x_out = x.copy()
x2 = np.empty(
# the data + 2 endpoints on either end for
# "termination of the path".
(len(x) + 1, 2),
# we want to align with OHLC or other sampling style
# bars likely so we need fractional values
dtype=float,
)
x2[0] = x[0] - 0.5
x2[1] = x[0] + 0.5
x2[1:] = x[:, np.newaxis] + 0.5
# flatten to 1-d
x_out = x2.reshape(x2.size)
# we create a 1d with 2 extra indexes to
# hold the start and (current) end value for the steps
# on either end
y2 = np.empty((len(y), 2), dtype=y.dtype)
y2[:] = y[:, np.newaxis]
y_out = np.empty(
2*len(y) + 2,
dtype=y.dtype
)
# flatten and set 0 endpoints
y_out[1:-1] = y2.reshape(y2.size)
y_out[0] = 0
y_out[-1] = 0
if not include_endpoints:
return x_out[:-1], y_out[:-1]
else:
return x_out, y_out
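For a concrete picture of the arrays the helper above produces, a tiny example (assuming only numpy; values hand-traced from the code):

import numpy as np

x = np.arange(3)              # indexes 0, 1, 2
y = np.array([3., 4., 5.])

x_out, y_out = step_path_arrays_from_1d(x, y)

# x_out -> [-0.5, -0.5, 0.5, 0.5, 1.5, 1.5, 2.5]
# y_out -> [ 0.,   3.,  3.,  4.,  4.,  5.,  5. ]
#
# i.e. each datum gets a flat "top" spanning its index +/- 0.5 with
# vertical risers at the bar boundaries, and a zero-valued endpoint
# so the path can be terminated/filled.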
# TODO: got a feeling that dropping this inheritance gets us even more speedups
class FastAppendCurve(pg.PlotCurveItem):

-     def __init__(self, *args, **kwargs):
+     def __init__(
+         self,
+         *args,
+         step_mode: bool = False,
+         color: str = 'default_lightest',
+         fill_color: Optional[str] = None,
+         **kwargs
+     ) -> None:

        # TODO: we can probably just dispense with the parent since
        # we're basically only using the pen setting now...
        super().__init__(*args, **kwargs)

-         self._last_line: QtCore.QLineF = None
-         self._xrange: Tuple[int, int] = self.dataBounds(ax=0)
+         self._xrange: tuple[int, int] = self.dataBounds(ax=0)
+
+         # all history of curve is drawn in single px thickness
+         self.setPen(hcolor(color))
+
+         # last segment is drawn in 2px thickness for emphasis
+         self.last_step_pen = pg.mkPen(hcolor(color), width=2)
+         self._last_line: QLineF = None
+         self._last_step_rect: QRectF = None
+
+         # flat-top style histogram-like discrete curve
+         self._step_mode: bool = step_mode
+         self._fill = False
+         self.setBrush(hcolor(fill_color or color))

        # TODO: one question still remaining is if this makes transform
        # interactions slower (such as zooming) and if so maybe if/when
@@ -46,8 +126,9 @@ class FastAppendCurve(pg.PlotCurveItem):

    def update_from_array(
        self,
-         x,
-         y,
+         x: np.ndarray,
+         y: np.ndarray,

    ) -> QtGui.QPainterPath:

        profiler = pg.debug.Profiler(disabled=not pg_profile_enabled())

@@ -56,17 +137,33 @@

        # print(f"xrange: {self._xrange}")
        istart, istop = self._xrange

+         # compute the length diffs between the first/last index entry in
+         # the input data and the last indexes we have on record from the
+         # last time we updated the curve index.
        prepend_length = istart - x[0]
        append_length = x[-1] - istop

-         if self.path is None or prepend_length:
+         # step mode: draw flat top discrete "step"
+         # over the index space for each datum.
+         if self._step_mode:
+             x_out, y_out = step_path_arrays_from_1d(x[:-1], y[:-1])
+
+         else:
+             # by default we only pull data up to the last (current) index
+             x_out, y_out = x[:-1], y[:-1]
+
+         if self.path is None or prepend_length > 0:
            self.path = pg.functions.arrayToQPath(
-                 x[:-1],
-                 y[:-1],
-                 connect='all'
+                 x_out,
+                 y_out,
+                 connect='all',
+                 finiteCheck=False,
            )
            profiler('generate fresh path')

+             # if self._step_mode:
+             #     self.path.closeSubpath()

        # TODO: get this working - right now it's giving heck on vwap...
        # if prepend_length:
        #     breakpoint()
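The prepend/append bookkeeping above in rough numbers (purely hypothetical indexes):

# the curve was last drawn over indexes 0..100 and the incoming
# array now runs 0..105:
istart, istop = 0, 100        # self._xrange from the previous update
x_first, x_last = 0, 105      # x[0], x[-1] of the new input

prepend_length = istart - x_first   # 0 -> no history was back-filled
append_length = x_last - istop      # 5 -> only 5 new points to append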
@@ -83,21 +180,47 @@

        # # self.path.moveTo(new_x[0], new_y[0])
        # self.path.connectPath(old_path)

-         if append_length:
-             # print(f"append_length: {append_length}")
-             new_x = x[-append_length - 2:-1]
-             new_y = y[-append_length - 2:-1]
-             # print((new_x, new_y))
+         elif append_length > 0:
+             if self._step_mode:
+                 new_x, new_y = step_path_arrays_from_1d(
+                     x[-append_length - 2:-1],
+                     y[-append_length - 2:-1],
+                 )
+                 new_x = new_x[1:]
+                 new_y = new_y[1:]
+
+             else:
+                 # print(f"append_length: {append_length}")
+                 new_x = x[-append_length - 2:-1]
+                 new_y = y[-append_length - 2:-1]
+                 # print((new_x, new_y))

            append_path = pg.functions.arrayToQPath(
                new_x,
                new_y,
-                 connect='all'
+                 connect='all',
+                 # finiteCheck=False,
            )

-             # print(f"append_path br: {append_path.boundingRect()}")
-             # self.path.moveTo(new_x[0], new_y[0])
-             # self.path.connectPath(append_path)
-             self.path.connectPath(append_path)
+             path = self.path
+
+             # other merging ideas:
+             # https://stackoverflow.com/questions/8936225/how-to-merge-qpainterpaths
+             if self._step_mode:
+                 if self._fill:
+                     # XXX: super slow set "union" op
+                     self.path = self.path.united(append_path).simplified()
+
+                     # path.addPath(append_path)
+                     # path.closeSubpath()
+
+                 else:
+                     # path.addPath(append_path)
+                     self.path.connectPath(append_path)
+
+             else:
+                 # print(f"append_path br: {append_path.boundingRect()}")
+                 # self.path.moveTo(new_x[0], new_y[0])
+                 # self.path.connectPath(append_path)
+                 path.connectPath(append_path)

        # XXX: pretty annoying but, without this there's little
        # artefacts on the append updates to the curve...

@@ -112,8 +235,23 @@

        self.xData = x
        self.yData = y

-         self._xrange = x[0], x[-1]
-         self._last_line = QtCore.QLineF(x[-2], y[-2], x[-1], y[-1])
+         x0, x_last = self._xrange = x[0], x[-1]
+         y_last = y[-1]
+
+         if self._step_mode:
+             self._last_line = QLineF(
+                 x_last - 0.5, 0,
+                 x_last + 0.5, 0,
+             )
+             self._last_step_rect = QRectF(
+                 x_last - 0.5, 0,
+                 x_last + 0.5, y_last
+             )
+         else:
+             self._last_line = QLineF(
+                 x[-2], y[-2],
+                 x[-1], y_last
+             )

        # trigger redraw of path
        # do update before reverting to cache mode

@@ -143,13 +281,13 @@

        w = hb_size.width() + 1
        h = hb_size.height() + 1

-         br = QtCore.QRectF(
+         br = QRectF(
            # top left
-             QtCore.QPointF(hb.topLeft()),
+             QPointF(hb.topLeft()),
            # total size
-             QtCore.QSizeF(w, h)
+             QSizeF(w, h)
        )
        # print(f'bounding rect: {br}')
        return br

@@ -164,9 +302,26 @@

        profiler = pg.debug.Profiler(disabled=not pg_profile_enabled())
        # p.setRenderHint(p.Antialiasing, True)

-         p.setPen(self.opts['pen'])
+         if self._step_mode:
+             brush = self.opts['brush']
+             # p.drawLines(*tuple(filter(bool, self._last_step_lines)))
+             # p.drawRect(self._last_step_rect)
+             p.fillRect(self._last_step_rect, brush)
+
+             # p.drawPath(self.path)
+             # profiler('.drawPath()')
+
+         # else:
+         p.setPen(self.last_step_pen)
        p.drawLine(self._last_line)
        profiler('.drawLine()')

+         p.setPen(self.opts['pen'])
        p.drawPath(self.path)
        profiler('.drawPath()')

+         if self._fill:
+             print('FILLED')
+             p.fillPath(self.path, brush)
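A standalone sketch of the two path-merge strategies used above (assumes pyqtgraph >= 0.12 for `arrayToQPath`; the `finiteCheck` flag in the diff needs a fairly recent release):

import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtGui

x1, y1 = np.arange(10.), np.random.rand(10)
x2, y2 = np.arange(10., 20.), np.random.rand(10)

hist = pg.functions.arrayToQPath(x1, y1, connect='all')
new = pg.functions.arrayToQPath(x2, y2, connect='all')

# cheap: stitch the new segment onto a copy of the history path with a
# connecting line (the non-fill branches above)
stitched = QtGui.QPainterPath(hist)
stitched.connectPath(new)

# expensive: boolean union of the two (filled) regions then simplify;
# only worth it when the curve is drawn as a filled polygon
unioned = hist.united(new).simplified()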

File diff suppressed because it is too large

View File

@@ -61,7 +61,9 @@ _do_overrides()

# XXX: pretty sure none of this shit works on linux as per:
# https://bugreports.qt.io/browse/QTBUG-53022
# it seems to work on windows.. no idea wtf is up.
+ is_windows = False
if platform.system() == "Windows":
+     is_windows = True

    # Proper high DPI scaling is available in Qt >= 5.6.0. This attribute
    # must be set before creating the application

@@ -182,6 +184,8 @@ def run_qtractor(

    window.main_widget = main_widget
    window.setCentralWidget(instance)
+     if is_windows:
+         window.configure_to_desktop()

    # actually render to screen
    window.show()

View File

@@ -732,7 +732,7 @@ def mk_order_pane_layout(

) -> FieldsForm:

-     font_size: int = _font.px_size - 1
+     font_size: int = _font.px_size - 2

    # TODO: maybe just allocate the whole fields form here
    # and expect an async ctx entry?

View File

@@ -341,7 +341,14 @@ class ChartView(ViewBox):

        **kwargs,
    ):
-         super().__init__(parent=parent, **kwargs)
+         super().__init__(
+             parent=parent,
+
+             # TODO: look into the default view padding
+             # support that might replace some of our
+             # ``ChartPlotWidget._set_yrange()``
+             # defaultPadding=0.,
+             **kwargs
+         )

        # disable vertical scrolling
        self.setMouseEnabled(x=True, y=False)

@@ -533,7 +540,6 @@ class ChartView(ViewBox):

            # self.updateScaleBox(ev.buttonDownPos(), ev.pos())
        else:
            # default behavior: click to pan view

            tr = self.childGroup.transform()
            tr = fn.invertQTransform(tr)
            tr = tr.map(dif*mask) - tr.map(Point(0, 0))

View File

@@ -146,7 +146,7 @@ def path_arrays_from_ohlc(

        # specifies that the first edge is never connected to the
        # prior bars last edge thus providing a small "gap"/"space"
        # between bars determined by ``bar_gap``.
-         c[istart:istop] = (0, 1, 1, 1, 1, 1)
+         c[istart:istop] = (1, 1, 1, 1, 1, 0)

    return x, y, c
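Context on the one-line change above: when `pg.functions.arrayToQPath` gets an array for `connect`, element `i` (roughly) controls whether a segment is drawn from point `i` to point `i + 1`, so the zero "break" moves from the start of each 6-point bar group to its end. A hedged sketch of the layout:

import numpy as np

n_bars = 3
# one flag per path point, 6 points per bar; the trailing 0 breaks the
# path so consecutive bars aren't joined by a stray segment
c = np.tile(np.array([1, 1, 1, 1, 1, 0], dtype=np.uint8), n_bars)
# -> [1 1 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 0]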
@@ -182,12 +182,14 @@ class BarItems(pg.GraphicsObject):

        # scene: 'QGraphicsScene',  # noqa
        plotitem: 'pg.PlotItem',  # noqa
        pen_color: str = 'bracket',
+         last_bar_color: str = 'bracket',

    ) -> None:
        super().__init__()

-         # XXX: for the mega-lulz increasing width here increases draw latency...
-         # so probably don't do it until we figure that out.
+         # XXX: for the mega-lulz increasing width here increases draw
+         # latency... so probably don't do it until we figure that out.
        self.bars_pen = pg.mkPen(hcolor(pen_color), width=1)
+         self.last_bar_pen = pg.mkPen(hcolor(last_bar_color), width=2)

        # NOTE: this prevents redraws on mouse interaction which is
        # a huge boon for avg interaction latency.

@@ -354,30 +356,6 @@ class BarItems(pg.GraphicsObject):

            if flip_cache:
                self.setCacheMode(QtWidgets.QGraphicsItem.DeviceCoordinateCache)

-     def paint(
-         self,
-         p: QtGui.QPainter,
-         opt: QtWidgets.QStyleOptionGraphicsItem,
-         w: QtWidgets.QWidget
-     ) -> None:
-
-         profiler = pg.debug.Profiler(disabled=not pg_profile_enabled())
-
-         # p.setCompositionMode(0)
-         p.setPen(self.bars_pen)
-
-         # TODO: one thing we could try here is pictures being drawn of
-         # a fixed count of bars such that based on the viewbox indices we
-         # only draw the "rounded up" number of "pictures worth" of bars
-         # as is necessary for what's in "view". Not sure if this will
-         # lead to any perf gains other than when zoomed in to less bars
-         # in view.
-         p.drawLines(*tuple(filter(bool, self._last_bar_lines)))
-         profiler('draw last bar')
-
-         p.drawPath(self.path)
-         profiler('draw history path')

    def boundingRect(self):
        # Qt docs: https://doc.qt.io/qt-5/qgraphicsitem.html#boundingRect
@@ -421,3 +399,28 @@

            )
        )

+     def paint(
+         self,
+         p: QtGui.QPainter,
+         opt: QtWidgets.QStyleOptionGraphicsItem,
+         w: QtWidgets.QWidget
+     ) -> None:
+
+         profiler = pg.debug.Profiler(disabled=not pg_profile_enabled())
+
+         # p.setCompositionMode(0)
+
+         # TODO: one thing we could try here is pictures being drawn of
+         # a fixed count of bars such that based on the viewbox indices we
+         # only draw the "rounded up" number of "pictures worth" of bars
+         # as is necessary for what's in "view". Not sure if this will
+         # lead to any perf gains other than when zoomed in to less bars
+         # in view.
+         p.setPen(self.last_bar_pen)
+         p.drawLines(*tuple(filter(bool, self._last_bar_lines)))
+         profiler('draw last bar')
+
+         p.setPen(self.bars_pen)
+         p.drawPath(self.path)
+         profiler('draw history path')

View File

@@ -49,7 +49,7 @@ from PyQt5 import QtCore

from PyQt5 import QtWidgets
from PyQt5.QtCore import (
    Qt,
-     # QSize,
+     QSize,
    QModelIndex,
    QItemSelectionModel,
)

@@ -112,6 +112,7 @@ class CompleterView(QTreeView):

        model = QStandardItemModel(self)
        self.labels = labels
+         self._last_window_h: Optional[int] = None

        # a std "tabular" config
        self.setItemDelegate(FontScaledDelegate(self))

@@ -126,6 +127,10 @@

        # self.setSizeAdjustPolicy(QAbstractScrollArea.AdjustIgnored)

        # ux settings
+         self.setSizePolicy(
+             QtWidgets.QSizePolicy.Expanding,
+             QtWidgets.QSizePolicy.Expanding,
+         )
        self.setItemsExpandable(True)
        self.setExpandsOnDoubleClick(False)
        self.setAnimated(False)
@@ -152,24 +157,41 @@

        self._font_size = size
        self.setStyleSheet(f"font: {size}px")

+     # def resizeEvent(self, event: 'QEvent') -> None:
+     #     self.resize_to_results()
+     #     super().resizeEvent(event)

-     def resize(self):
+     def resize_to_results(self):
        model = self.model()
        cols = model.columnCount()

        for i in range(cols):
            self.resizeColumnToContents(i)

-         # inclusive of search bar and header "rows" in pixel terms
-         rows = 100
-         # max_rows = 8  # 6 + search and headers
        row_px = self.rowHeight(self.currentIndex())
-         # print(f'font_h: {font_h}\n px_height: {px_height}')

        # TODO: probably make this more general / less hacky
-         self.setMinimumSize(self.width(), rows * row_px)
-         self.setMaximumSize(self.width() + 10, rows * row_px)
+         # we should figure out the exact number of rows to allow
+         # inclusive of search bar and header "rows", in pixel terms.
+         window_h = self.window().height()
+         rows = round(window_h * 0.5 / row_px) - 4
+
+         # TODO: the problem here is that this view widget is **not** resizing/scaling
+         # when the parent layout is adjusted, not sure what exactly is up...
+
+         # only "scale up" the results view when the window size has increased.
+         if not self._last_window_h or self._last_window_h < window_h:
+             self.setMaximumSize(self.width(), rows * row_px)
+             self.setMinimumSize(self.width(), rows * row_px)
+
+         # elif not self._last_window_h or self._last_window_h > window_h:
+         #     self.setMinimumSize(self.width(), rows * row_px)
+         #     self.setMaximumSize(self.width(), rows * row_px)
+
+         self.resize(self.width(), rows * row_px)
+         self._last_window_h = window_h
        self.setFixedWidth(333)
+         self.update()

    def is_selecting_d1(self) -> bool:
        cidx = self.selectionModel().currentIndex()
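Sanity-checking the row math above with made-up numbers:

# hypothetical: a 1440px tall window with 22px result rows
window_h, row_px = 1440, 22

rows = round(window_h * 0.5 / row_px) - 4    # -> 29
height = rows * row_px                       # -> 638px, just under half the
                                             #    window, leaving headroom for
                                             #    the search bar and headers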
@@ -334,7 +356,7 @@

            else:
                model.setItem(idx.row(), 1, QStandardItem())

-             self.resize()
+             self.resize_to_results()

            return idx

        else:

@@ -404,7 +426,7 @@

    def show_matches(self) -> None:
        self.show()
-         self.resize()
+         self.resize_to_results()


class SearchBar(Edit):

@@ -457,7 +479,7 @@ class SearchWidget(QtWidgets.QWidget):

        # size it as we specify
        self.setSizePolicy(
            QtWidgets.QSizePolicy.Fixed,
-             QtWidgets.QSizePolicy.Fixed,
+             QtWidgets.QSizePolicy.Expanding,
        )
        self.godwidget = godwidget

View File

@@ -110,7 +110,7 @@ class DpiAwareFont:

        mx_dpi = max(pdpi, ldpi)
        mn_dpi = min(pdpi, ldpi)
-         scale = round(ldpi/pdpi)
+         scale = round(ldpi/pdpi, ndigits=2)

        if mx_dpi <= 97:  # for low dpi use larger font sizes
            inches = _font_sizes['lo'][self._font_size]

@@ -121,17 +121,29 @@

            dpi = mn_dpi

        # dpi is likely somewhat scaled down so use slightly larger font size
-         if scale > 1 and self._font_size:
-             # TODO: this denominator should probably be determined from
-             # relative aspect ratios or something?
-             inches = inches * (1 / scale) * (1 + 6/16)
+         if scale >= 1.1 and self._font_size:
+
+             if 1.2 <= scale:
+                 inches *= (1 / scale) * 1.0616
+
+             if scale < 1.4 or scale >= 1.5:
+                 # TODO: this denominator should probably be determined from
+                 # relative aspect ratios or something?
+                 inches = inches * (1 + 6/16)
+
            dpi = mx_dpi
+             log.info(f'USING MAX DPI {dpi}')
+
+         # TODO: we might want to fiddle with incrementing font size by
+         # +1 for the edge cases above. it seems doing it via scaling is
+         # always going to hit that error in range mapping from inches:
+         # float to px size: int.

        self._font_inches = inches
        font_size = math.floor(inches * dpi)
-         log.debug(
-             f"\nscreen:{screen.name()} with pDPI: {pdpi}, lDPI: {ldpi}"
+
+         log.info(
+             f"screen:{screen.name()}]\n"
+             f"pDPI: {pdpi}, lDPI: {ldpi}, scale: {scale}\n"
            f"\nOur best guess font size is {font_size}\n"
        )
        # apply the size
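To make the branchy scaling logic above concrete, a worked example with hypothetical numbers (the `inches` value is invented, not an actual `_font_sizes` entry):

import math

pdpi, ldpi = 96.0, 144.0                  # hypothetical physical/logical DPI
scale = round(ldpi / pdpi, ndigits=2)     # -> 1.5
inches = 0.0666                           # hypothetical table entry

if scale >= 1.1:
    if 1.2 <= scale:
        inches *= (1 / scale) * 1.0616    # -> ~0.0471
    if scale < 1.4 or scale >= 1.5:
        inches = inches * (1 + 6/16)      # -> ~0.0648
    dpi = max(pdpi, ldpi)                 # the "max dpi" branch

font_size = math.floor(inches * dpi)      # -> 9 (px)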
@@ -205,19 +217,26 @@ def hcolor(name: str) -> str:

        'svags': '#0a0e14',

        # fifty shades
-         'original': '#a9a9a9',
        'gray': '#808080',  # like the kick
        'grayer': '#4c4c4c',
        'grayest': '#3f3f3f',
-         'i3': '#494D4F',
-         'jet': '#343434',
        'cadet': '#91A3B0',
        'marengo': '#91A3B0',
-         'charcoal': '#36454F',
        'gunmetal': '#91A3B0',
        'battleship': '#848482',
-         'davies': '#555555',

+         # bluish
+         'charcoal': '#36454F',
+
+         # default bars
        'bracket': '#666666',  # like the logo
+         'original': '#a9a9a9',
+
+         # work well for filled polygons which want a 'bracket' feel
+         # going light to dark
+         'davies': '#555555',
+         'i3': '#494D4F',
+         'jet': '#343434',

        # from ``qdarkstyle`` palette
        'default_darkest': DarkPalette.COLOR_BACKGROUND_1,

View File

@@ -151,8 +151,8 @@ class MainWindow(QtGui.QMainWindow):

    # XXX: for tiling wms this should scale
    # with the allotted window size.
    # TODO: detect for tiling and if untrue set some size?
-     # size = (300, 500)
-     size = (0, 0)
+     size = (300, 500)
+     # size = (0, 0)

    title = 'piker chart (ur symbol is loading bby)'

@@ -163,7 +163,8 @@

        self._status_bar: QStatusBar = None
        self._status_label: QLabel = None
+         self._size: Optional[tuple[int, int]] = None

    @property
    def mode_label(self) -> QtGui.QLabel:
@@ -267,6 +268,22 @@

        assert screen, "Wow Qt is dumb as shit and has no screen..."
        return screen

+     def configure_to_desktop(
+         self,
+         size: Optional[tuple[int, int]] = None,
+
+     ) -> None:
+         # https://stackoverflow.com/a/18975846
+         if not size and not self._size:
+             app = QtGui.QApplication.instance()
+             geo = self.current_screen().geometry()
+             h, w = geo.height(), geo.width()
+             self.setMaximumSize(w, h)
+             # use approx 1/3 of the area of the screen by default
+             self._size = round(w * .666), round(h * .666)
+
+         self.resize(*size or self._size)


# singleton app per actor
_qt_win: QtGui.QMainWindow = None
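The default sizing math above on a hypothetical screen:

w, h = 1920, 1080                          # hypothetical screen geometry

size = round(w * .666), round(h * .666)    # -> (1279, 719)
# each side is scaled by ~2/3, so the window ends up covering roughly
# 0.666 ** 2 ~= 44% of the total screen area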

View File

@@ -22,6 +22,7 @@ from contextlib import asynccontextmanager

from dataclasses import dataclass, field
from functools import partial
from pprint import pformat
+ import platform
import time
from typing import Optional, Dict, Callable, Any
import uuid

@@ -429,16 +430,17 @@ class OrderMode:

        # TODO: make this not trash.
        # XXX: linux only for now
-         result = await trio.run_process(
-             [
-                 'notify-send',
-                 '-u', 'normal',
-                 '-t', '10000',
-                 'piker',
-                 f'alert: {msg}',
-             ],
-         )
-         log.runtime(result)
+         if platform.system() != "Windows":
+             result = await trio.run_process(
+                 [
+                     'notify-send',
+                     '-u', 'normal',
+                     '-t', '10000',
+                     'piker',
+                     f'alert: {msg}',
+                 ],
+             )
+             log.runtime(result)

    def on_cancel(
        self,

View File

@@ -25,6 +25,8 @@ import i3ipc

i3 = i3ipc.Connection()
t = i3.get_tree()

+ orig_win_id = t.find_focused().window

# for tws
win_names: list[str] = [
    'Interactive Brokers',  # tws running in i3

@@ -51,11 +53,20 @@ for name in win_names:

        # move mouse to bottom left of window (where there should
        # be nothing to click).
-         'mousemove_relative', '--sync', str(w-3), str(h-3),
+         'mousemove_relative', '--sync', str(w-4), str(h-4),

        # NOTE: we may need to stick a `--retry 3` in here..
-         'click', '--window', win_id, '1',
+         'click', '--window', win_id, '--repeat', '3', '1',

        # hackzorzes
        'key', 'ctrl+alt+f',
-     ])
+     ],
+         timeout=1,
+     )

+ # re-activate and focus original window
+ subprocess.call([
+     'xdotool',
+     'windowactivate', '--sync', str(orig_win_id),
+     'click', '--window', str(orig_win_id), '1',
+ ])