It was a concurrency-hack mess, largely due to all sorts of limitations
imposed by marketstore (query size limits, strange datetime/timestamp
errors, slow table loads for large queries..) and we can drastically
simplify. There are still some issues with getting new backfills (not yet
in storage) correctly prepended: there are sometimes small gaps due to shm
races between reading the history index and the live-feed startup
finishing.
We generally need tests for all this, and likely a better rework of the
feed layer's init such that we're showing history in the chart afap
instead of waiting on backfills or the live feed to come up.
Much more to come B)
Turns out no backend (including kraken) requires it and really this
kind of measure should be implemented and recorded from our fsp layer
instead of (hackily) sometimes expecting it to be in "source data".
After much frustration with a particular tsdb (cough), this instead
implements a new native-file (and apache-tech based) backend which
stores time series in parquet files (for now) using the `polars` apis
(since we plan to use that lib as well for processing).
Note this code is currently **very** rough and in draft mode.
Details:
- add conversion routines for going from `polars.DataFrame` to
`numpy.ndarray` and back (roughly as sketched after this list).
- lay out a simple file-name as series key symbology:
`fqme.<datadescriptions>.parquet`, though probably it will evolve.
- implement the entire `StorageClient` interface as it stands.
- adjust `storage.cli` cmds to instead expect to use this new backend,
which means it's a complete mess XD
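Roughly, the conversion routines boil down to something like the
following sketch (helper names here are made up, not the actual
`.storage` api):

```python
import numpy as np
import polars as pl


def np2pl(arr: np.ndarray) -> pl.DataFrame:
    # one polars column per field of the structured (record) array
    return pl.DataFrame({name: arr[name] for name in arr.dtype.names})


def pl2np(df: pl.DataFrame, dtype: np.dtype) -> np.ndarray:
    # allocate an empty structured array and copy each column back in
    out = np.empty(len(df), dtype=dtype)
    for name in dtype.names:
        out[name] = df[name].to_numpy()
    return out


# quick round-trip check on a tiny synthetic ohlcv-ish array
dtype = np.dtype([('time', np.int64), ('close', np.float64)])
arr = np.array([(1, 10.0), (2, 10.5)], dtype=dtype)
df = np2pl(arr)
df.write_parquet('/tmp/test.parquet')
assert (pl2np(pl.read_parquet('/tmp/test.parquet'), dtype) == arr).all()
```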
Main benefits/motivation:
- wayy faster load times with no "datums to load limit" required.
- smaller space footprint and we haven't even touched compression
settings yet!
- wayyy more compatible with other systems which can lever the apache
ecosystem.
- gives us finer grained control over the filesystem usage so we can
choose to swap out stuff like the replication system or networking
access.
Turns out just (over)writing `.parquet` files with >= 1M datums takes like
less than a second, and we can likely speed up appends using
`fastparquet` (usage coming soon).
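For a ballpark on that claim, a minimal timing sketch (synthetic data,
default compression settings) is something like:

```python
import time

import numpy as np
import polars as pl

# a synthetic 1M-row ohlcv frame just to ballpark the write cost
n = 1_000_000
df = pl.DataFrame({
    'time': np.arange(n, dtype=np.int64),
    'open': np.random.random(n),
    'high': np.random.random(n),
    'low': np.random.random(n),
    'close': np.random.random(n),
    'volume': np.random.random(n),
})

start = time.perf_counter()
df.write_parquet('/tmp/ohlcv.parquet')  # full (over)write, no tuning yet
print(f'wrote {n} datums in {time.perf_counter() - start:.3f}s')
```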
Includes:
- a new `clone` CLI subcmd to test this all out by ad-hoc copying an
existing `/dev/shm/<ShmArray>` (literally hardcoded to a daemon-actor
specific shm allocation X) and pushing it to a `.parquet` file.
- code to convert from our `ShmArray.array: np.ndarray` ->
`polars.DataFrame` (thanks SO).
- timing checks around the file IO and np -> polars conversion.
- a `read` subcmd which I was using to test the sync `pymarketstore`
client against our async one to see if the issues from
https://github.com/pikers/piker/issues/443 were resolved, but nope!
Turns out that after switching to the old sync client and going back to
using the old json-RPC API (having had to patch the upstream repo to not
import gRPC machinery to avoid crashes..) I'm basically getting the exact
same issues.
The new tinkering does possibly reveal some new stuff though:
- the EOF error seems to indeed be due to trying to fetch records which
  haven't been written (properly) - like asking for an `end=<epoch_int>`
  that is earlier than the earliest record.
- the "snappy input corrupt" error seems to have something to do with
the `Params.end` field not being an `int` and/or the int precision not
being chosen correctly?
- toying with this a bunch manually shows that the internals of the
  client (particularly the `.build_query()` stuff) are parsing/calcing the
  `Epoch` and `Nanoseconds` values out incorrectly.. which is likely
  part of the problem (see the split sketched after this list).
- we also changed `anyio_marketstore.MarketStoreclient.build_query()`
logic when removing `pandas` a while back, which also seems to be
part of the problem on the async side, however reverting those
changes also didn't fix the issue entirely; likely something else
more subtle going on (maybe with the write vs. read `Epoch` field
type we pass?).
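For reference on the `Epoch`/`Nanoseconds` point above, a correct split of
a float timestamp should look roughly like this (just a sketch, not the
actual `.build_query()` internals):

```python
def split_ts(ts: float) -> tuple[int, int]:
    # whole seconds since the unix epoch + the fractional part as ns
    epoch = int(ts)
    nanoseconds = int(round((ts - epoch) * 1e9))
    return epoch, nanoseconds


assert split_ts(1684937430.25) == (1684937430, 250_000_000)
```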
Despite all this malarkey, we're already well underway on more or less
obsoleting this whole thing with a much less complex approach: using
apache parquet files and modern filesystem tools to get a more flexible
and numerics-native, dataframe-oriented tsdb B)
Turns out the reason we were originally making the `time: float` column in
our ohlcv arrays was because that's what **only** ib uses XD (and/or 🤦)
Instead we changed the default field type to be an `int` (which is also
more correct to avoid `float` rounding/precision discrepancies) and thus
**do not need to override it** in all other (crypto) backends (except
`ib`). Now we only do the `float` customization (via `._ohlc_dtype`) for
`ib`, and pretty sure we can eventually drop that as well..!
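A quick illustration of the precision argument (nothing piker specific,
just the general float64 limitation):

```python
# a nanosecond-resolution epoch stamp survives as a python int but gets
# silently rounded when squeezed into a float64, which only carries
# ~53 bits (~15-16 decimal digits) of mantissa:
ns_stamp = 1_684_937_430_123_456_789
print(float(ns_stamp))                    # low digits already lost
print(int(float(ns_stamp)) == ns_stamp)   # False
```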
Use `def_iohlcv_fields` as the name and, instead of copying and inserting
the index field, pop it off for the non-index version (roughly as sketched
below). Drop creating `np.dtype()` instances since `numpy`'s apis accept
both input forms, so this is simpler on our end.
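Roughly (the exact field set here is illustrative) the layout is now:

```python
import numpy as np

# keep the field spec as a plain list; numpy accepts it anywhere it
# accepts an `np.dtype()` instance
def_iohlcv_fields: list[tuple] = [
    ('index', int),
    ('time', int),
    ('open', float),
    ('high', float),
    ('low', float),
    ('close', float),
    ('volume', float),
]

# non-index version: same list with the index field popped off the front
def_ohlcv_fields = def_iohlcv_fields.copy()
def_ohlcv_fields.pop(0)

arr = np.zeros(8, dtype=def_iohlcv_fields)        # no np.dtype() wrapper needed
assert arr.dtype == np.dtype(def_iohlcv_fields)   # equivalent either way
```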
Turns out you can mix and match `click` with `typer`, so this moves what
was the `.data.cli` stuff into `storage.cli` and uses the integration
api to make it all work (roughly sketched after the list below) B)
New subcmd: `piker store`
- add `piker store ls` which lists all fqme keyed time-series from backend.
- add `store delete` to remove any such key->time-series.
- now uses a nursery for multi-timeframe concurrency B)
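The integration pattern is roughly this (a sketch; the real top level
group and the cmd bodies live in piker's cli pkg):

```python
import click
import typer

# the new typer sub-app holding the `store` cmds
store = typer.Typer(name='store')


@store.command()
def ls():
    '''List all fqme keyed time-series from the backend.'''
    ...


@store.command()
def delete(fqme: str):
    '''Remove the time-series for the given key.'''
    ...


# stand-in for the pre-existing top level `click` group
@click.group()
def cli():
    ...


# the glue: typer renders down to a click command which the click
# group can mount like any other subcmd
cli.add_command(typer.main.get_command(store), 'store')
```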
Mask out all the old `marketstore` specific subcmds for now (streaming,
ingest, storesh, etc..) in anticipation of moving them into
a subpkg-module, and make sure to import the sub-cmd module in our top
level cli package.
Other `.storage` api tweaks:
- drop the reraising with custom error (for now).
- rename `Storage` -> `StorageClient` (or should it be API?).
To kick off our (tsdb) storage backends this adds our first implementation
of a new `Storage(Protocol)` client interface (a rough shape is sketched
after the deats below). Going forward, the top level `.storage` pkg-module
will now expose backend agnostic APIs and helpers whilst specific backend
implementations will adhere to that middle-ware layer.
Deats:
- add `.storage.marketstore.Storage` as the first client implementation,
moving all needed (import) dependencies out from
`.service.marketstore` as well as `.ohlc_key_map` and `get_client()`.
- move root `conf.toml` loading from `.data.history` into
`.storage.__init__.open_storage_client()` which now takes in a `name:
str` and does all the work of loading the correct backend module, its
config, and determining if a service-instance can be contacted and
a client loaded; in the case where this fails we raise a new
`StorageConnectionError`.
- add a new `.storage.get_storagemod()` just like we have for brokers.
- make `open_storage_client()` also return the backend module such that
  the history-data layer can make backend specific calls as needed (eg.
  `.ohlc_key_map`).
- fall back to a basic non-tsdb backfill when `open_storage_client()`
raises the new connection error.
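In rough shape (method names, signatures and import paths here are
illustrative, not the actual interface), the layering looks something
like:

```python
import importlib
from contextlib import asynccontextmanager
from types import ModuleType
from typing import Protocol

import numpy as np


class StorageClient(Protocol):
    # a couple of made-up methods just to show the backend-agnostic shape
    async def list_keys(self) -> list[str]: ...
    async def read_ohlcv(self, fqme: str, timeframe: int) -> np.ndarray: ...
    async def delete_ts(self, key: str) -> bool: ...


class StorageConnectionError(ConnectionError):
    'Can not connect to (or load) the requested storage backend.'


def get_storagemod(name: str) -> ModuleType:
    # per the bullet above: dynamic backend-module lookup ala `get_brokermod()`
    return importlib.import_module(f'piker.storage.{name}')


@asynccontextmanager
async def open_storage_client(name: str = 'marketstore'):
    # load the backend mod, read its `conf.toml` section (elided here),
    # then try to contact the service instance..
    mod = get_storagemod(name)
    try:
        async with mod.get_client() as client:
            # ..and hand back both so callers can make backend specific
            # calls (eg. `mod.ohlc_key_map`) as needed.
            yield mod, client
    except OSError as err:
        raise StorageConnectionError(
            f'No `{name}` storage daemon could be contacted?'
        ) from err
```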
The plan is to offer multiple tsdb and other storage backends (for
a variety of use cases) and expose them similarly to how we do for
broker and data providers B)
Was borked on linux if you didn't provide the setting in `conf.toml` due
to some logic errors. Fix that by rejigging `DpiAwareFont` internal
variables:
- add new `._font_size_calc_key: str` which was the old `._font_size`
and is only used when no explicit font size is set by the user in the
`conf.toml` config:
- this is the "key" that is used to lookup a calculation function
which attempts to compute a best fit font size given the measured
system display's DPI settings and dimensions.
- make the `._font_size: int` the **actual** font size integer that is
cached and passed to `Qt` to set the size.
- this is overridden by user config now if defined.
- change the input kwarg `font_size: str` to the constructor to the better
named private `_font_size_key: str` which gets set to the new
`._font_size_calc_key`.
Also, adjust all client code which instantiates `DpiAwareFont` to use
the new `_font_size_key` kwarg input so nothing breaks XD
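The resolution order described above, as a sketch (attribute names from
the bullets; the conf lookup and calc-fn table are simplified stand-ins):

```python
# stand-ins for the parsed `conf.toml` and the per-key calc routines
_conf: dict = {}  # eg. the `[ui]` section of the user's config
_font_size_calcs = {
    'default': lambda screen: round(screen.logicalDotsPerInch() / 96 * 12),
}


class DpiAwareFont:
    def __init__(
        self,
        # renamed ctor kwarg; only ever used as a lookup key now
        _font_size_key: str = 'default',
    ) -> None:
        self._font_size_calc_key: str = _font_size_key
        self._font_size: int | None = None  # the actual size handed to Qt

    def resolve_size(self, screen) -> int:
        # an explicit `font_size` in the user's `conf.toml` always wins..
        user_size = _conf.get('font_size')
        if user_size is not None:
            self._font_size = int(user_size)
        else:
            # ..otherwise use the calc-key to pick a best-fit routine
            # for this screen's measured DPI and dimensions.
            calc_fn = _font_size_calcs[self._font_size_calc_key]
            self._font_size = calc_fn(screen)

        return self._font_size
```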
Was only really borked for higher-precision but lower-priced assets
(like TLOS or peeneez) which have a `MktPair.price_tick_digits >= 2`.
The issue was using the wrong attr, `size_tick_digits`..
Including changing to `LinkedSplits.mkt: MktPair` and adding an explicit
setter method for it, making sure that nothing breaks in the display
system init!
For this commit we leave in (warning-emitting) access to
`LinkedSplits.symbol` but will remove it in a following commit.
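A sketch of the setter + warning-shim shape (not the actual `LinkedSplits`
code; the `MktPair` and logger import paths are assumptions):

```python
from piker.accounting import MktPair  # assumed import path
from piker.log import get_logger      # assumed import path

log = get_logger(__name__)


class LinkedSplits:
    def __init__(self) -> None:
        self._mkt: MktPair | None = None

    def set_mkt_info(self, mkt: MktPair) -> None:
        # explicit setter so display-system init can't silently skip it
        self._mkt = mkt

    @property
    def mkt(self) -> MktPair:
        return self._mkt

    @property
    def symbol(self) -> MktPair:
        # legacy access left in for this commit: warn and forward to `.mkt`
        log.warning('`.symbol` is deprecated, use `.mkt` instead!')
        return self._mkt
```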