`pyvnc` actually works for vncAuth(2) (thank god!) which the previous
`asyncvnc` **did not**, and it seems to be mostly based on the work
from the `asyncvnc` author anyway (so all my past efforts don't seem to
have been in vain XD).
Deats,
- switch to the `pyvnc` async API (using `asyncio` again obvi) in
`.ib._util._vnc_click_hack()`; see the sketch below.
- add `pyvnc` as src installed dep from GH.
- drop `asyncvnc` as dep.
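A rough sketch of the switched-over hack, assuming `pyvnc` keeps an
`asyncvnc`-style async context-manager API (a `connect()` yielding a
client with `.keyboard`/`.mouse` handles); the entrypoint names, port
and key combo below are illustrative, not confirmed `pyvnc` API:

```python
import pyvnc  # hypothetical import/API shape, see note above


async def _vnc_click_hack(
    host: str = 'localhost',
    port: int = 5900,  # illustrative default
    password: str | None = None,
) -> None:
    # connect to the gateway container's vnc server (vncAuth now
    # actually working) and send the data-reset key combo.
    async with pyvnc.connect(host, port=port, password=password) as client:
        client.keyboard.press('Ctrl', 'Alt', 'f')
```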
Other,
- update `pytest` version range to avoid weird auto-load plugin exposed
by `xonsh`?
- add a `tool.pytest.ini_options` to project file with vars to,
- disable that^ `xonsh` plug using `addopts = '-p no:xonsh'`.
- set a `testpaths` to avoid running anything but that subdir.
- try out the `'progress'` style console output (does it work?).
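For reference, the resulting `pyproject.toml` table looks roughly like
this (the `testpaths` value assumes the tests live under `tests/`):

```toml
[tool.pytest.ini_options]
# don't auto-load the plugin exposed by `xonsh`
addopts = '-p no:xonsh'
# only collect from the test subdir (path assumed here)
testpaths = ['tests']
# try out the terser progress-style console output
console_output_style = 'progress'
```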
Add a proper venue-hours check such that we can avoid the other
(pretty unreliable) "alternative" checks for determining whether a
real-time quote should be waited on or, when the venue is closed, we
should just signal that historical backfilling can commence
immediately.
This has been a todo for a very long time and it turned out to be much
easier to accomplish than anticipated..
Deats,
- add a new `is_current_time_in_range()` dt range checker to predicate
whether an input range contains `datetime.now(start_dt.tzinfo)`; see
the sketch after this list.
- in `.ib.feed.stream_quotes()` add a `venue_is_open: bool` which uses
all of the new ^^ to determine whether to branch to the
short-circuit-and-do-history-now case or the std
real-time-quotes-should-be-awaited-since-venue-is-open case; drop all
the old hacks that tried to work around not knowing that venue state..
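A minimal sketch of the checker (the exact signature in the backend
may differ):

```python
from datetime import datetime


def is_current_time_in_range(
    start_dt: datetime,
    end_dt: datetime,
) -> bool:
    # predicate whether "now" (in the range's own tz) falls inside
    # the [start_dt, end_dt] session window.
    now: datetime = datetime.now(start_dt.tzinfo)
    return start_dt <= now <= end_dt
```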
Other,
- also add a gpt5-composed parser to `._util` for the
`ib_insync.ContractDetails.tradingHours: str` (sketched after this
list), from before i realized there was a `.tradingSessions` property XD
- in `.ib.feed`,
* add various EG-collapsings per recent tractor/trio updates.
* better logging / exc-handling around ticker quote pushes.
* stop clearing `Ticker.ticks` each quote iteration; not sure if this
is needed/correct tho?
* add masked `Ticker.ticks` poll loop that logs.
- fix some `str.format()` usage in `._util.try_xdo_manual()`
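The parsing itself is simple enough; a sketch assuming the newer
TWS-970+ `tradingHours` format
(`'20180323:0400-20180323:2000;20180324:CLOSED'`) and ignoring the
contract's `timeZoneId`:

```python
from datetime import datetime


def parse_trading_hours(
    trading_hours: str,
) -> list[tuple[datetime, datetime]]:
    # split the ';'-delimited per-day entries into (open, close)
    # session pairs, skipping CLOSED days; times are left naive and
    # are in the contract's `timeZoneId`.
    sessions: list[tuple[datetime, datetime]] = []
    for entry in trading_hours.split(';'):
        if not entry or entry.endswith('CLOSED'):
            continue
        start, _, end = entry.partition('-')
        sessions.append((
            datetime.strptime(start, '%Y%m%d:%H%M'),
            datetime.strptime(end, '%Y%m%d:%H%M'),
        ))
    return sessions
```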
Since apparently porting to the new docker container enforces using
a vnc password and `asyncvnc` seems to have a bug/mis-config whenever
i've tried a pw over a wg tunnel..?
Soo, this tries out the old `i3ipc`-win-focus + `xdo` click hack when
the above fails.
Deats,
- add a mod-level `try_xdo_manual()` to wrap calling
`i3ipc_xdotool_manual_click_hack()` with an oserr handler, ensuring we
don't bother trying if the `i3ipc` import fails beforehand tho.
- call ^ from both the orig case block and the failover from the
vnc-client case.
- factor the `no_setup_msg: str` out to mod level and expect it to be
`.format()`-ed.
- refresh todo around `asyncvnc` pw ish..
- add a new `i3ipc_fin_wins_titled()` window-title scanner (sketched
below) which predicates input `titles` and delivers any matches
alongside the orig focused win at call time.
- tweak `i3ipc_xdotool_manual_click_hack()` to call ^ and remove prior
unfactored window scanning logic.
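A sketch of the title scanner using the `i3ipc` tree API (the return
shape is inferred from the description above):

```python
import i3ipc


def i3ipc_fin_wins_titled(
    titles: list[str],
) -> tuple[
    i3ipc.Con,        # the window focused at call time
    list[i3ipc.Con],  # any windows whose titles matched
]:
    # walk the i3 window tree and match any leaf window whose title
    # contains one of the input `titles`, also returning the currently
    # focused window so focus can be restored after the click hack.
    conn = i3ipc.Connection()
    tree = conn.get_tree()
    focused = tree.find_focused()
    matches: list[i3ipc.Con] = [
        win for win in tree.leaves()
        if any(title in (win.name or '') for title in titles)
    ]
    return focused, matches
```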
About time we tidy'd a buncha status logging in this backend..
particularly for boot-up where there's lots of client-try-connect poll
looping with account detection from the user config.
`.api.Client` pprint and logging fmt improvements:
- add `Client.__repr__()` (sketched after this list) which shows the
minimally useful set of info from the underlying `.ib: IB` as well as
a new `.acnts: list[str]` of the account aliases defined in the user's
`brokers.toml`.
- mk `.bars()` define a comprehensive `query_info: str` with all the
request deats but only display it if there's a problem with the
response data.
- mk `.get_config()` report both the config file path and the acnt
aliases (NOT the actual account #s).
- move all `.load_aio_clients()` client poll loop requests to
`log.runtime()` statuses, only falling through to a `.warning()` when
the loop fails to connect the client to the spec-ed API-gw addr, and
|_ don't allow loading accounts for which the user has not defined an
alias in `brokers.toml::[ib]`; raise a value-error in such cases
with a message indicating how to mod the config.
|_ only `log.info()` about acnts if some were loaded..
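The new repr is roughly along these lines (a method-body sketch for
`.api.Client`; the exact fields shown are illustrative):

```python
def __repr__(self) -> str:
    ib = self.ib  # the underlying `ib_insync.IB` connection
    return (
        f'<{type(self).__name__}('
        f'host={ib.client.host!r}, '
        f'port={ib.client.port}, '
        f'connected={ib.isConnected()}, '
        f'acnts={self.acnts}'
        ')>'
    )
```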
Other mod logging de-noising:
- better status fmting in `.symbols.open_symbol_search()` with
`repr(Client)`.
- for `.feed.stream_quotes()` first quote reporting use `.runtime()`.
Should be the final production backend to switch this over B)
Also tidy up the `update_and_audit_msgs()` validator to log vs. raise
when `validate: bool` is set; turn it off by default to avoid raises
until we figure out wtf is up with ib ledger processing or wtv..
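In other words the audit path now looks roughly like this (names are
purely illustrative, not the actual `update_and_audit_msgs()`
signature):

```python
import logging

log = logging.getLogger(__name__)


def audit_pos_size(
    expected: float,
    from_ledger: float,
    validate: bool = False,  # off by default for now
) -> None:
    # log-vs-raise toggle: only escalate mismatches to an exception
    # when strict validation is explicitly requested.
    if expected != from_ledger:
        msg = (
            f'position size mismatch: expected {expected}, '
            f'ledger says {from_ledger}'
        )
        if validate:
            raise ValueError(msg)
        log.warning(msg)
```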
Since we want to be able to support user-configurable vnc socketaddrs,
this preps for passing the piker client directly into the vnc hacker
routine so that we can (eventually) load and read the ib brokers config
settings into the client and then read those in the `asyncvnc` task
spawner.
Explains why stuff always seemed wrong before XD
Previously whenever a time-gappy asset (like a stock, due to its venue
operating hours) was being loaded, we weren't querying for a
"duration's worth" of bars and this was causing all sorts of actual
gaps in our data set that shouldn't exist..
Fix that by always attempting to retrieve a min aggregate-time's
worth/duration of bars/datums in the history manager. Actually,
i implemented this in both the feed and api layers for this backend
since it doesn't seem to strictly work just implementing it at the
`Client.bars()` level, not sure why but..
Also, buncha `ruff` linting cleanups and fix the logger nameeee, lel.
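For the curious, the duration clamp amounts to something like this in
the history-request paths (names and the floor value are illustrative,
not the actual code):

```python
import pendulum

# illustrative floor on datums-per-request, not the backend's real value
_min_bars_per_frame: int = 1000


def clamped_start_dt(
    end_dt: pendulum.DateTime,
    period_s: int,  # bar period in seconds, eg. 60 for 1m bars
) -> pendulum.DateTime:
    # always request (at least) a full duration's worth of datums so
    # venue-closed periods don't yield short frames and thus fake gaps
    # in the stored series.
    return end_dt.subtract(seconds=period_s * _min_bars_per_frame)
```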
Since `open_trade_ledger()` now requires a sort, we pass in a combo of
the std `pendulum.parse()` for API records and a custom flex parser for
flex entries pulled offline.
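A sketch of the combined sort key (the flex datetime layout
`'YYYYMMDD;HHMMSS'` and the record field name are assumptions):

```python
from datetime import datetime
import pendulum


def parse_flex_dt(dt_str: str) -> pendulum.DateTime:
    # flex report entries concatenate date and time with a ';'
    date, _, time = dt_str.partition(';')
    return pendulum.instance(
        datetime.strptime(f'{date} {time}', '%Y%m%d %H%M%S')
    )


def ledger_sort_key(record: dict) -> pendulum.DateTime:
    dt_str: str = record['datetime']  # field name is illustrative
    if ';' in dt_str:
        # offline-pulled flex entry
        return parse_flex_dt(dt_str)
    # std API record, directly parseable
    return pendulum.parse(dt_str)
```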
Add special handling for `MktPair.src` such that when it's a fiat (like
it should always be for most legacy assets) we try to get the fqme
without that `.src` token (i.e. not mnqusd) to avoid breaking
roundtripping of live feed requests (due to new symbology) as well as
the current tsdb table key set..
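Purely to illustrate the keying intent (the helper below is
hypothetical, not the actual `MktPair` API):

```python
# hypothetical helper: prefer the fqme *without* the fiat src token so
# eg. 'mnqusd.cme.ib' stays keyed as 'mnq.cme.ib' for live feed
# requests and existing tsdb table lookups.
_fiats: set[str] = {'usd', 'eur', 'gbp', 'jpy', 'cad', 'chf'}


def fqme_key(
    dst: str,     # eg. 'mnq'
    src: str,     # eg. 'usd'
    venue: str,   # eg. 'cme'
    broker: str,  # eg. 'ib'
) -> str:
    sym = dst if src.lower() in _fiats else f'{dst}{src}'
    return f'{sym}.{venue}.{broker}'.lower()
```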
Do a wholesale renaming of fqsn -> fqme in most of the rest of the
backend modules.
I figure we might as well support multiple types of distributed
multi-host setups; why not allow running the API (gateway) and thus vnc
server on a diff host and let clients connect and do their thing B)
Deatz:
- make `ib._util.data_reset_hack()` take in a `vnc_host` which gets
proxied through to the `asyncvnc` client (see the sketch after this
list).
- pull `ib_insync.client.Client` host value and pass-through to data
reset machinery, presuming the vnc server is running in the same
container (and/or the same host).
- if no vnc connection **and** no i3ipc trick can be used, just report
to the user that they need to remove the data throttle manually.
- fix `feed.get_bars()` to handle throttle cases the same way, based on
error msg matching rather than the error code, and add a max
`_failed_resets` count to trigger bailing on the query loop.
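Roughly how the host plumbing fits together; a sketch which calls the
helpers described above (exact signatures in `._util` may differ):

```python
import logging

log = logging.getLogger(__name__)


async def data_reset_hack(
    vnc_host: str,  # host running the API gw (and presumably the vnc server)
) -> None:
    try:
        # first try the remote vnc click hack against the gw host
        await _vnc_click_hack(host=vnc_host)

    except OSError:
        # no vnc server reachable; fall back to the local i3ipc/xdotool
        # trick, or tell the user to clear the throttle by hand.
        try:
            import i3ipc  # noqa: F401
        except ImportError:
            log.warning(
                'No VNC connection and no `i3ipc` available;\n'
                'you will need to remove the data throttle manually!'
            )
            return

        i3ipc_xdotool_manual_click_hack()


# caller side: pull the connected gateway host off `ib_insync` and pass
# it through, presuming vnc runs on the same host/container:
# await data_reset_hack(vnc_host=ib.client.host)
```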
Since apparently the container we were using is totally borked on new
kernels and/or latest jvm, this moves our old manual local-X-desktop
script back for use in `brokerd` backend code.
Adds a new `.brokers.ib._util` which contains the 2 reset methods and
fails over to the manual (xdotool) one when we can't connect to a VNC
server. Also adjusts the original in `scripts/ib_data_reset.py` to
import and run the module code as a script-program.