Now that we have working client auth thanks to
https://github.com/barneygale/asyncvnc/pull/4 and its related issue,
we can use a password for the vnc server, though we should eventually
auto-generate a random one from the docker supervisor, obviously.
Add logic to the data reset hack loop to do a connection reset after
2 failed/timeout attempts at the regular data reset. We also need to add
this logic around reconnection events that are due to the host
network connection: aka roaming that's faster than the timeout logic
built into the gateway.
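Roughly the escalation looks like the following sketch, where `reset_data()` and `reset_connection()` are hypothetical stand-ins for the real reset routines (and the retry/timeout values are just the ones mentioned above):

```python
import trio

MAX_SOFT_RESET_TRIES = 2  # escalate after this many failed soft resets


async def reset_with_fallback(
    reset_data,        # hypothetical soft "data farm" reset
    reset_connection,  # hypothetical full tcp (re)connect reset
    timeout: float = 10,
) -> None:
    '''
    Try the regular data reset up to ``MAX_SOFT_RESET_TRIES`` times
    (each attempt bounded by ``timeout``) then escalate to a full
    connection reset.

    '''
    for _ in range(MAX_SOFT_RESET_TRIES):
        with trio.move_on_after(timeout):
            if await reset_data():
                return
        # fall through here on a timeout or a failed soft reset

    # the soft resets didn't take: drop and re-establish the connection
    await reset_connection()
```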
`ib-gw` seems particularly fragile to connections from clients with the
same id (can result in weird connect hangs and even crashes) and
`ib_insync` doesn't handle intermittent tcp disconnects that
well.. (especially on dockerized IBC setups). This adds a bunch of
changes to our client caching and scan loop as well as proper
task-locking-to-cache-proxies so that,
- `asyncio`-side clients aren't double-loaded/connected even when
explicitly trying to reconnect repeatedly with a given client to work
around the unreliability of the `asyncio.Transport` design in
`ib_insync`.
- we can use `tractor.trionics.maybe_open_context()` to lock the `trio`
side from loading more than one `Client` on the `asyncio` side and
instead, on cache hits, only make a new `MethodProxy` around the
reused `asyncio`-side client (since each `trio` task needs its own
inter-task msg channel); see the sketch after this list.
- a `finally:` block teardown on all clients loaded in the scan loop
avoids stale connections.
- the connect params are now exposed as named args to
`load_aio_clients()` so they can be easily controlled from caller code.
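The cache-hit pattern on the `trio` side looks roughly like this; `load_aio_clients()` and `open_client_proxy()` are the routines from this changeset (import path assumed), the yielded value shape is simplified, and the exact `maybe_open_context()` keywords may differ:

```python
from contextlib import asynccontextmanager as acm

import tractor

# routines from this changeset; the import path is an assumption.
from piker.brokers.ib import load_aio_clients, open_client_proxy


@acm
async def open_cached_client_proxy(**connect_kwargs):
    '''
    Load the `asyncio`-side `Client` at most once per actor and give
    each calling `trio` task its own `MethodProxy` around it.

    '''
    async with tractor.trionics.maybe_open_context(
        acm_func=load_aio_clients,
        kwargs=connect_kwargs,
    ) as (cache_hit, client):

        # on a cache hit the `asyncio`-side client is reused; we only
        # build a fresh proxy (and thus a fresh inter-task channel).
        async with open_client_proxy(client) as proxy:
            yield proxy
```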
Oh, and we properly hooked up the internal `ib_insync` logging to our
own internal schema - makes it a lot easier to debug wtf is going on XD
In order to expose more `asyncio`-powered `Client` methods to endpoint
task code, this adds a more extensive and layered set of `MethodProxy`
loading routines; in dependency order these are:
- `load_clients_for_trio()` a `tractor.to_asyncio.open_channel_from()`
entry-point factory for loading all scanned clients on the `asyncio` side
and delivering them over the inter-task channel to a `trio`-side task.
- `get_preferred_data_client()` a simple client instance loading routine
which reads the user's `brokers.toml -> prefer_data_account: list[str]`
entry; it must list account names, in priority order, that are
acceptable as the main "data connection client", such that
only one of the detected clients is used for data (whereas the rest
are used only for order entry).
- `open_client_proxies()` which delivers the detected `Client` set,
each wrapped in a `MethodProxy`.
- `open_data_client()` which directly delivers the preferred data client
as a proxy for `trio` tasks (usage sketch after this list).
- update `open_client_method_proxy()` and `open_client_proxy()` to require
an input `Client` instance.
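For endpoint task code the intended call pattern is roughly the following; the import path and the exact yield shape of `open_client_proxies()` are assumptions here:

```python
# names from this changeset; import path and yield shapes are assumed.
from piker.brokers.ib import (
    open_client_proxies,
    open_data_client,
)


async def dump_fills_per_account() -> None:
    # one `MethodProxy` per detected/connected client.
    async with open_client_proxies() as proxies:
        for acct, proxy in proxies.items():
            # `.trades()` is the new fills-puller described further below
            print(acct, await proxy.trades())

    # the single preferred "data connection" client, as a proxy.
    async with open_data_client() as data_proxy:
        ...  # request history, stream quotes, etc.
```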
Further impl details:
- add `MethodProxy._aio_ns` to ref the original `asyncio`-side proxied instance
- add `Client.trades()` to pull executions from the last day/session
(rough sketch after this list)
- load proxies inside `trades_dialogue` and use the new `.trades()`
method to try and pull a fill ledger for eventual correct pp price
calcs (pertains to #307)..
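A minimal sketch of what `.trades()` could look like, assuming the wrapped `ib_insync.IB` handle lives at `self.ib` (attribute name assumed; the real method may filter/normalize further):

```python
from ib_insync import IB, Fill


class Client:
    '''
    Trimmed stand-in for the real ``Client`` wrapper; just enough to
    show the shape of the new ``.trades()`` method.

    '''
    def __init__(self, ib: IB) -> None:
        # the wrapped `ib_insync` connection (attribute name assumed)
        self.ib = ib

    async def trades(self) -> list[Fill]:
        # with no `ExecutionFilter` passed the gateway returns all
        # executions it still holds for the current day/session.
        return await self.ib.reqExecutionsAsync()
```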
Currently we're held back by an `asyncvnc` issue,
https://github.com/barneygale/asyncvnc/issues/1 but even still, given
we're running the container to be accessible only from localhost, I'm not
sure we need this for the moment (or at all) anyway.
Based on the now-defunct project at
https://github.com/waytrade/ib-gateway-docker,
this adds a `docker-compose.yml` and the necessary gateway and `IBC` config
files to make it possible to spin up a local gateway on localhost:4002
and connect to it without issue using `ib_insync`.
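E.g. a quick smoke test against the composed gateway (the client id here is arbitrary, it just needs to be unique per connection):

```python
from ib_insync import IB

ib = IB()
# the paper gateway API socket exposed by the compose file
ib.connect('127.0.0.1', 4002, clientId=1)
print(ib.isConnected())
ib.disconnect()
```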
Next up, we'll want to,
- automate the equivalent docker-compose steps using our
`.data._ahab` supervisor system
- probably simplify and roll our own container (likely alpine or nixos
based) which drops unneeded deps (`socat`, vnc) and adds `xdotool`.
- allow the API socket mapping to be pulled directly from
a user's `brokers.toml` and pass that straight through to `IBC`'s
config.
We return a copy (since a view doesn't seem to work..) of the
(field-filtered) shm array contents, which is the same index-length as
the source data.
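Roughly what that amounts to on a structured shm-backed array (field names here are just for illustration):

```python
import numpy as np

# stand-in for the shm-backed OHLC structured array
ohlc_dtype = np.dtype([
    ('time', 'f8'),
    ('open', 'f8'),
    ('high', 'f8'),
    ('low', 'f8'),
    ('close', 'f8'),
    ('volume', 'f8'),
])
shm_array = np.zeros(1024, dtype=ohlc_dtype)

# multi-field indexing gives a view into the shared buffer;
# `.copy()` materializes it so the caller gets a same-length,
# independent array of just the requested fields.
fields = ['time', 'close', 'volume']
out = shm_array[fields].copy()
assert len(out) == len(shm_array)
```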
Further, fence off the resource tracker disable-hack into a helper
routine.
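For reference, that disable-hack is along these lines; the helper name is hypothetical and our actual helper may differ, but the gist is monkey-patching the stdlib `multiprocessing.resource_tracker` so it stops tracking (and auto-unlinking) `shared_memory` segments that other processes still need:

```python
from multiprocessing import resource_tracker


def disable_shm_resource_tracking() -> None:
    '''
    Stop the stdlib resource tracker from registering (and thus ever
    unlinking) `shared_memory` segments attached to by multiple
    processes.

    '''
    def _register(name, rtype):
        if rtype == 'shared_memory':
            return
        return resource_tracker._resource_tracker.register(name, rtype)

    def _unregister(name, rtype):
        if rtype == 'shared_memory':
            return
        return resource_tracker._resource_tracker.unregister(name, rtype)

    resource_tracker.register = _register
    resource_tracker.unregister = _unregister

    # also drop shared_memory from the set of cleanup-able types
    resource_tracker._CLEANUP_FUNCS.pop('shared_memory', None)
```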
It seems once in a while a frame can get missed or dropped (at least
with binance?) so in those cases, when the request "erlangs" (the count of
concurrent in-flight requests) is already at max, we just manually request
the missing frame and presume things will work out XD
Further, discard out-of-order frames that are "from the future" and
somehow end up in the async queue once in a while. Not sure why this
happens but thus far it seems just discarding them is nbd.
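The discard check is basically the following, assuming backfill proceeds newest-to-oldest so every expected frame should end strictly before the last pushed datetime:

```python
from datetime import datetime


def is_from_the_future(
    frame_end_dt: datetime,
    last_pushed_dt: datetime,
) -> bool:
    '''
    Backfill walks backwards in time, so a queued frame whose end
    datetime is at-or-after what we've already pushed is out of
    order ("from the future") and can simply be dropped.

    '''
    return frame_end_dt >= last_pushed_dt
```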
Bleh/🤦, the `end_dt` in scope is not the "earliest" frame's
`end_dt` in the async response queue.. Parse the queue's latest epoch
and use **that** to compare to the last pushed datetime index..
Add more detailed logging to help debug any (un)expected datetime index
gaps.
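A sketch of the corrected comparison plus the gap logging, assuming the drained queue entries carry (epoch) end times and `pendulum` is used for the datetime math:

```python
import logging

import pendulum

log = logging.getLogger(__name__)


def check_for_gap(
    queued_end_epochs: list[float],  # end-time epochs of frames in the queue
    last_pushed_dt: pendulum.DateTime,
    expected_step_s: float,
) -> None:
    '''
    Compare the *latest* end time sitting in the async response queue
    (not whatever ``end_dt`` happens to be in local scope) against the
    last pushed datetime index and log any gap bigger than one
    expected step.

    '''
    latest_end_dt = pendulum.from_timestamp(max(queued_end_epochs))
    gap_s = (last_pushed_dt - latest_end_dt).total_seconds()

    if abs(gap_s) > expected_step_s:
        log.warning(
            f'Unexpected datetime index gap: {gap_s}s\n'
            f'last pushed: {last_pushed_dt}\n'
            f'queue latest: {latest_end_dt}'
        )
```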