Since we eventually want to allow users to minimally deploy `pikerd`
service-tree (aka distributed, cross-host) installs, we need to offer
a "headless" deps group. Really this is just the core dep set minus Qt
and some aux search-related libs (for now).
The new `.dev` group is for adding hacking and testing tools including
`xonsh` since that will eventually be our REPL of choice more than
likely B)
Oh, and fix the namespace path (was a typo) for the `ledger` CLI and
of course bump the lock file.
Makes it easier to pass the overrides to multiple p2n functions (like
hopefully `.mkPoetryEnv`). Also, add some commented attempts at using
`mkPoetryEnv` and a todo list for "why", remove the `poetry` CLI main
entry point from the pyproject.toml, and bump the poetry lock file.
NB: for now this is linking to a presumed local clone of the
`poetry2nix` repo since part of fixing what was adjusted here needs to
be patched upstream, which means hacking on the p2n repo in tandem B)
Since there are some dependency build issues we need
to tweak the following to get baseline `nix develop` working:
- drop `python-levenshtein` (required by `fuzzywuzzy[speedup]`) for now
since the overlay and/or wheel install needs to be properly figured
out.
- build `pyqt5` from src for the moment (since `preferWheel` doesn't
seem to be working?) despite it taking forever XD
- add in the `flake.lock` file.
As per the hot tip from the edgecases.md:
https://github.com/nix-community/poetry2nix/blob/master/docs/edgecases.md#modulenotfounderror-no-module-named-packagename
Factor all the (mostly `setuptools`) overrides into
a `pypkgs-build-requirements` set and `.extend()` in any `preferWheel`
additions (`polars`, `pyqt`, etc.) before passing to
`mkPoetryApplication(overrides=<it>)`.
Add a buncha todos for improving the poetry2nix pkging including:
- adding the override requirements to the json file for all our deps
in the `pypkgs-build-requirements` set.
- maybe propose docs for the edgecases.md to show how to do the auto-gen
set (via func) AND extend with further overrides like `preferWheel`?
- task to support `polars` build from src (by copying `cryptography`
stuff) instead of only from a wheel?
- get pyqt5 building from wheel since it seems to be taking forever from
src..
- get pyqt6 working in general - going to require taking stuff from
nixpkgs and applying it in the overrides of p2n.
This is a tricky edge case we weren't handling previously; an example is
submitting a limit order with a price tick precision which doesn't match
what's supported (probably bc IB reported the wrong one..) and IB responds
immediately with an error event (via a special code..) but doesn't
include any `Trade` object(s) nor details beyond the `reqid`. So, we
have to do a little reverse EMS order lookup on our own and ideally
indicate to the requester which order failed and *why*.
To enable this we (see the sketch after this list):
- create a `flows: OrderDialogs` instance and pass it to most order/event relay
tasks, particularly ensuring we update it ASAP in `handle_order_requests()`
such that any successful submit has an `Ack` recorded in the flow.
- on such errors look up the `.symbol` / `Order` from the `flow` and
respond back to the EMS with as many details as possible about the
prior msg history.
- always explicitly relay `error` events which don't fall into the
sensible filtered set and wrap them in
a `BrokerdError.broker_details['flow']: dict` snapshot for the EMS.
- in `symbols.get_mkt_info()` support adhoc lookup for `MktPair` inputs
and when defined we re-construct with those inputs; in this case we do
this for a first mkt: `'vtgn.nasdaq'`..
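As a rough sketch of the record-then-lookup flow mentioned above (the
`OrderDialogs` internals and the helper names here are illustrative
assumptions, not the actual `piker.brokers.ib` code; only the idea of
recording an `Ack` per submit and shipping a `broker_details['flow']`
snapshot on errors comes from the change itself):

```python
# illustrative only: record the dialog on submit, then reverse-lookup
# the msg history when IB hands back just a `reqid` + error reason.
from typing import Any


class OrderDialogs:
    '''Stand-in for the per-order msg/flow tracker.'''
    def __init__(self) -> None:
        self._flows: dict[str, dict[str, Any]] = {}

    def add_msg(self, oid: str, msg: dict[str, Any]) -> None:
        # merge each new msg's fields into the flow snapshot
        self._flows.setdefault(oid, {}).update(msg)

    def get(self, oid: str) -> dict[str, Any]:
        return self._flows.get(oid, {})


def on_submit_ack(flows: OrderDialogs, order: dict, reqid: int) -> None:
    # record the ack ASAP so later errors can be traced back to the order
    flows.add_msg(order['oid'], {**order, 'name': 'ack', 'reqid': reqid})


def on_error_event(
    flows: OrderDialogs,
    reqids2oids: dict[int, str],
    reqid: int,
    reason: str,
) -> dict[str, Any]:
    # reverse lookup: reqid -> oid -> prior msg history snapshot
    oid = reqids2oids.get(reqid)
    return {
        'name': 'error',
        'oid': oid,
        'reqid': reqid,
        'reason': reason,
        # ship the flow snapshot so the EMS/requester can see *why*
        'broker_details': {'flow': flows.get(oid) if oid else {}},
    }
```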
Turns out we were expecting/processing `Status(resp='error')` msgs not
`BrokerdError` (I guess bc the latter was only really being used in initial
`brokerd` msg responses and not for relaying actual provider clearing
engine failures?) and the match-case block logic wasn't really
correct. So this changes a few things (a rough sketch follows the list):
- always do reverse `oid` lookups from `reqid`s if possible in the error
msg handling case.
- add a new `Error` client-dialog msg (derived from `Status`) which we
now relay when `brokerd` sends a `BrokerdError` and no prior `Status`
can be found (when one is found we still fill in the appropriate fields
from the backend error and just send back the last status msg like before).
- try hard to look up the original `Order.symbol: str` for client
broadcasting, trying first any `Status.req` and failing over to
embedded `.brokerd_msg` field lookups.
- drop the `Status.name = 'error'` from literal def.
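Loosely, the new msg type and fallback lookup might look like the
following; the field sets, defaults and helper names are illustrative
only (only `Error` deriving from `Status`, the `reqid` -> `oid` reverse
lookup and the `.req`/`.brokerd_msg` fail-over come from the actual
change):

```python
# sketch only: a `Status`-derived `Error` msg plus the reverse lookup
# and symbol fail-over; the real `piker.clearing` msg defs differ.
from msgspec import Struct


class Status(Struct):
    oid: str                         # EMS-wide order dialog id
    resp: str = 'pending'            # e.g. 'open', 'fill', 'canceled'
    reqid: int | str | None = None   # broker-side request id
    req: dict | None = None          # original `Order` msg, if known
    brokerd_msg: dict | None = None  # last embedded backend msg


class Error(Status):
    resp: str = 'error'


def on_brokerd_error(
    msg: dict,                       # incoming `BrokerdError`-like msg
    reqids2oids: dict,               # reverse reqid -> oid table
    statuses: dict[str, Status],     # last `Status` per oid
) -> Status | Error:
    # always try a reverse `reqid` -> `oid` lookup when no oid is given
    oid = msg.get('oid') or reqids2oids.get(msg.get('reqid'))
    last = statuses.get(oid) if oid else None
    if last:
        # fill in error details and just resend the prior status
        last.brokerd_msg = msg
        return last

    # no prior dialog state found: relay a fresh `Error` msg
    return Error(oid=oid or '', reqid=msg.get('reqid'), brokerd_msg=msg)


def find_symbol(status: Status) -> str | None:
    # try the original request first, then the embedded brokerd msg
    for src in (status.req, status.brokerd_msg):
        if src and (sym := src.get('symbol')):
            return sym
    return None
```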
Finally, this is a reason to use our new `OrderDialogs` abstraction; on
order submission errors IB doesn't really pass back anything other than
the `orderId` and the reason, so we have to conduct our own lookup for
a message to relay to the EMS..
So, for every EMS msg we send, add it to the dialog tracker and then use
the `flows: OrderDialogs` for lookup in the case where we need to relay
said error. Also, include sending a `canceled` status such that the
order won't get stuck as a stale entry in the `emsd`'s own dialog table.
For now we just filter out errors that are unrelated to order flow from
the stream since there's always going to be stuff to do with live/history
data queries..
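Something like the following filter-and-relay shape is what's meant
here (the error codes and msg dicts are made-up placeholders, not IB's
actual codes nor `piker`'s real msg types):

```python
# illustrative routing of IB error events: drop data/history noise,
# otherwise relay the error *and* a canceled status so the dialog
# doesn't go stale in `emsd`.
DATA_ERROR_CODES: set[int] = {1100, 2106}  # placeholder values only


def route_error(
    code: int,
    reqid: int | None,
    reason: str,
) -> list[dict]:
    '''Return the msgs (possibly none) to relay to the EMS.'''
    if reqid is None or code in DATA_ERROR_CODES:
        # unrelated to any order dialog: live/history data query noise
        return []

    return [
        {'name': 'error', 'reqid': reqid, 'reason': reason},
        # also mark the dialog canceled so `emsd` drops its entry
        {'name': 'status', 'reqid': reqid, 'status': 'canceled'},
    ]
```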
If a backend declares a top level `get_cost()` (provisional name)
we call it in the paper engine to try and simulate costs according to
the provider's own schedule. For now only `binance` has support (via the
ep def) but ideally we can fill these in incrementally as users start
forward testing on multiple cexes.
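Hypothetically (the exact ep signature and units aren't pinned down by
this note, and the rates below are placeholders rather than binance's
real fee schedule), the backend ep and paper-engine hook could look
like:

```python
# hypothetical `get_cost()` ep shape + paper engine usage sketch.
def get_cost(
    price: float,
    size: float,
    is_taker: bool = True,
) -> float:
    '''Simulated per-txn cost using a flat percentage fee.'''
    rate = 0.001 if is_taker else 0.0008
    return abs(price * size) * rate


def simulated_clear_cost(brokermod, price: float, size: float) -> float:
    # only charge simulated fees when the backend defines the ep
    get_cost_ep = getattr(brokermod, 'get_cost', None)
    return get_cost_ep(price, size) if get_cost_ep else 0.0
```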
Since it's depended on by `.data` stuff as well as pretty much
everything else, it makes more sense to expose it as a top level module
(and maybe eventually as a subpkg as we add to it).
In order to attempt giving the user a realistic prediction for a BEP per
txn we need to model what the (worst case) anticipated exit txn costs
will be during the equivalent, paired entries. For now we use a simple
"symmetric cost prediction" model where we assume the exit costs will
simply be the same as the entry txn costs, and thus on every entry we
apply 2x the entry txn cost; on exit txns we then unroll these
predictions by keeping a cumulative sum of the cost-per-unit and
reversing the charges by applying that mean to the current exit txn's
size. Once unrolled we apply the actual exit txn cost received from the
broker-provider.
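As a toy numeric sketch of that bookkeeping (purely illustrative of the
arithmetic described above, not the actual accounting code):

```python
# symmetric cost prediction: charge 2x the cost on entries (real cost
# plus an equal predicted exit cost), unroll the prediction on exits
# using the running cost-per-unit, then add the actual exit cost.
def apply_entry(
    cum_size: float,
    cost: float,
    pred_exit_cost: float,
    size: float,
    txn_cost: float,
) -> tuple[float, float, float]:
    # real entry cost plus an equal predicted exit cost (2x in total)
    return cum_size + size, cost + txn_cost, pred_exit_cost + txn_cost


def apply_exit(
    cum_size: float,
    cost: float,
    pred_exit_cost: float,
    size: float,
    txn_cost: float,
) -> tuple[float, float, float]:
    # unroll the prediction pro-rata by the mean predicted cost-per-unit..
    per_unit = pred_exit_cost / cum_size
    pred_exit_cost -= per_unit * abs(size)
    # ..and charge the actual exit cost reported by the provider
    return cum_size - abs(size), cost + txn_cost, pred_exit_cost


# e.g. two 1-unit entries costing 0.5 each: charged cost so far is
# 1.0 real + 1.0 predicted = 2.0; a full 2-unit exit costing 0.75
# drops the prediction and ends with 1.75 in real accrued costs.
size, cost, pred = apply_entry(0, 0, 0, 1, 0.5)
size, cost, pred = apply_entry(size, cost, pred, 1, 0.5)
assert cost + pred == 2.0
size, cost, pred = apply_exit(size, cost, pred, 2, 0.75)
assert (size, cost, pred) == (0, 1.75, 0)
```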