The final issue was making sure we do the same thing on ctl-c/SIGINT
from the user. That is, if there's already a bg-thread in REPL, we
`log.pdb()` about SIGINT shielding and re-draw the prompt; the same UX
as normal actor-runtime-task behaviour.
Reasons this wasn't workin.. and the fix:
- `.pause_from_sync()` was overriding the local `repl` var with `None`
delivered by (transitive) calls to `_pause(debug_func=None)`.. so
remove all that and only assign it OAOO prior to thread-type case
branching.
- always call `DebugStatus.shield_sigint()` as needed from all requesting
threads/tasks (see the sketch after this list):
- in `_pause_from_bg_root_thread()` BEFORE calling `._pause()` AND BEFORE
yielding back to the bg-thread via `.started(out)` to ensure we're
definitely overriding the handler in the `trio`-main-thread task
before unblocking the requesting bg-thread.
- from any requesting bg-thread in the root actor such that both it AND
its main-`trio`-thread scheduled task (as per the above bullet) are
SIGINT shielded.
- always call `.shield_sigint()` BEFORE any `greenback._await()` case
(don't entirely grok why yet, but it works?)
- for the `greenback._await()` case always set `bg_task` to the current one..
- tweaks to the `SIGINT` handler, now renamed `sigint_shield()` so as
not to name-collide with the methods when editor-searching:
- always try to `repr()` the REPL thread/task "owner" as well as the
active `PdbREPL` instance.
- add `.devx()` notes around the prompt flushing deats and comments
for any root-actor-bg-thread edge cases.
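For grounding, a minimal sketch (names simplified/hypothetical) of the
main-thread-only constraint driving the shield bullets above:

```python
import signal
import threading
import trio

def sigint_shield(signum, frame) -> None:
    # hypothetical handler body: `log.pdb()` about the shielding
    # and re-draw the REPL prompt..
    print('SIGINT shielded by active debug REPL!')

def shield_sigint() -> None:
    # `signal.signal()` may only be called from cpython's main
    # thread, so a requesting bg-thread must bounce the handler
    # override back to the main `trio`-thread BEFORE blocking on
    # the REPL.
    if threading.current_thread() is threading.main_thread():
        signal.signal(signal.SIGINT, sigint_shield)
    else:
        # only valid from threads spawned via `trio.to_thread.run_sync()`
        trio.from_thread.run_sync(
            signal.signal,
            signal.SIGINT,
            sigint_shield,
        )
```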
Related/supporting refinements:
- add `get_lock()`/`get_debug_req()` factory funcs since the plan is to
eventually implement both as `@singleton` instances per actor; see the
sketch below.
- fix `acquire_debug_lock()`'s call-sig-bug for scheduling
`request_root_stdio_lock()`..
- in `._pause()` only call `mk_pdb()` when `debug_func != None`.
- add some todo/warning notes around the `cls.repl = None` in
`DebugStatus.release()`
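A minimal sketch of the planned factory-as-singleton pattern (assuming
the module-local `Lock` type is in scope as in `.devx._debug`):

```python
_debug_lock: Lock|None = None

def get_lock() -> Lock:
    # lazily create exactly one instance per actor-process;
    # `get_debug_req()` would follow the same shape.
    global _debug_lock
    if _debug_lock is None:
        _debug_lock = Lock()
    return _debug_lock
```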
`test_pause_from_sync()` tweaks:
- don't use an `attach_patts.copy()`, since we always `break` on match.
- do `pytest.fail()` on that ^ loop's fallthrough..
- pass `do_ctlc(child, patt=attach_key)` such that we always match the
current thread's name with the ctl-c triggered `.pdb()` emission.
- oh yeah, return the last `before: str` from `do_ctlc()`.
- in the script, flip `abandon_on_cancel=True` since when `False` it
seems to cause `trio.run()` to hang on exit from the last bg-thread
case?!?
There's been a todo for soo long for this XD
Since all `Actor`s store a set of `._peers` we can try a lookup on that
table as a shortcut before pinging the registry Bo
Impl deats:
- add a new `._discovery.get_peer_by_name()` routine (sketched below)
which attempts the `._peers` lookup by combining a copy of that `dict`
+ an entry added for `Actor._parent_chan` (since all subs have a parent
and often the desired contact is just that connection).
- change `.find_actor()` (for the `only_first == True` case),
`.query_actor()` and `.wait_for_actor()` to call the new helper and
deliver appropriate outputs if possible.
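A rough sketch of the lookup shape (attr/type details assumed from the
description above):

```python
from tractor import current_actor
from tractor._ipc import Channel  # module path at time of writing

def get_peer_by_name(name: str) -> list[Channel]|None:
    actor = current_actor()
    # combine a copy of the peer table with the parent chan since
    # all subs have a parent and it's often the desired contact.
    to_scan: dict[tuple, list[Channel]] = actor._peers.copy()
    if (pchan := actor._parent_chan) is not None:
        to_scan.setdefault(pchan.uid, []).append(pchan)
    for uid, chans in to_scan.items():
        if uid[0] == name:  # uid: (name, uuid)
            return chans
    return None  # fall back to pinging the registry
```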
Other,
- deprecate `get_arbiter()` def and all usage in tests and examples.
- drop lingering use of `arbiter_sockaddr` arg to various routines.
- tweak the `Actor` doc str as well as some code fmting and also adjust
the `._stream_handler()`'s initial `con_status: str` logging value
since as previously written it could never be reached.. oh and
`.warning()` on any new connections which already have
a `_pre_chan: Channel` entry in `._peers` so we can start minimizing
IPC duplications.
Finally this reproduces the issue as it (originally?) exhibited inside
`piker` where the `Actor.lifetime_stack` wasn't closed in cases where
during `infected_aio`-actor cancellation/shutdown `trio` side tasks
which are doing shielded (teardown) work are NOT being watched/waited on
from the `aio_main()` task-closure inside `run_as_asyncio_guest()`!
This is then the root cause of the guest-run being abandoned since if
our `aio_main()` task-closure doesn't know it should allow the run to
finish, it's going to call `loop.close()` eventually resulting in the
`GeneratorExit` thrown into `trio._core._run.unrolled_run()`..
So, this extends the `test_sigint_closes_lifetime_stack()` suite to
include cases for such shielded `trio`-task ops:
- add a new `trio_side_is_shielded: bool` which will toggle whether to
add a shielded 0.5s `trio.sleep()` loop to `manage_file()` which
should outlive the `asyncio` event-loop shutdown sequence and result
in an abandoned guest-run and thus a leaked file (see sketch below).
- parametrize the existing suite with this case resulting in a total of
16 test cases B)
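A sketch of the new shielded op (iteration count assumed):

```python
import trio

async def manage_file(trio_side_is_shielded: bool = False):
    # ... file setup/teardown elided ...
    if trio_side_is_shielded:
        with trio.CancelScope(shield=True):
            # deliberately outlive the `asyncio` loop shutdown
            # sequence; if the guest-run is abandoned this never
            # completes -> the file leaks.
            for _ in range(4):
                await trio.sleep(0.5)
```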
This patch demonstrates the problem with our `aio_main()` task-closure
impl via the now 4 failing tests, a fix is coming in a follow up commit!
Turns out it somehow breaks our `to_asyncio` error relay since obvi
`asyncio`'s runtime seems to specially handle it (prolly via
`isinstance()` ?) and it caused our
`test_aio_cancelled_from_aio_causes_trio_cancelled()` to hang..
Further, obvi `unpack_error()` won't be able to find the type def if not
kept inside `._exceptions`..
So given all that, revert the change/move as well as:
- tweak the aio-from-aio cancel test to timeout.
- do `trio.sleep()` conc with any bg aio task by moving out nursery
block.
- add a `send_sigint_to: str` parameter to
`test_sigint_closes_lifetime_stack()` such that we test the SIGINT
being relayed to just the parent or the child.
Took me a while to figure out what the heck was going on but, turns out
`asyncio` changed their SIGINT handling in 3.11 as per:
https://docs.python.org/3/library/asyncio-runner.html#handling-keyboard-interruption
I'm not entirely sure if it's the 3.11 changes or possibly wtv further
updates were made in 3.12 but more or less due to the way
our current main task was written the `trio` guest-run was getting
abandoned on SIGINTs sent from the OS to the infected child proc..
Note that much of the bug and soln cases are laid out in very detailed
comment-notes both in the new test and `run_as_asyncio_guest()`, right
above the final "fix" lines.
Add new `test_infected_aio.test_sigint_closes_lifetime_stack()` test suite
which reliably triggers all abandonment issues with multiple cases
of different parent behaviour post-sending-SIGINT-to-child:
1. briefly sleep then raise a KBI in the parent which was originally
demonstrating the file leak not being cleaned up by `Actor.lifetime_stack.close()`
and simulates a ctl-c from the console (relayed in tandem by
the OS to the parent and child processes).
2. do `Context.wait_for_result()` on the child context which would
hang and timeout since the actor runtime would never complete and
thus never relay a `ContextCancelled`.
3. both with and without running a `asyncio` task in the `manage_file`
child actor; originally it seemed that with an aio task scheduled in
the child actor the guest-run abandonment always was the "loud" case
where there seemed to be some actor teardown but with tbs from
python failing to gracefully exit the `trio` runtime..
The (seemingly working) "fix" required 2 lines of code to be run inside
a `asyncio.CancelledError` handler around the call to `await trio_done_fut`:
- `Actor.cancel_soon()` which schedules the actor runtime to cancel on
the next `trio` runner cycle and results in a "self cancellation" of
the actor.
- "pumping the `asyncio` event loop" with a non-0 `.sleep(0.1)` XD
|_ seems that a "shielded" pump with some actual `delay: float >= 0`
did the trick to get `asyncio` to allow the `trio` runner/loop to
fully complete its guest-run without abandonment.
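Putting the above 2 lines together, in rough form (simplified from the
real closure; `trio_done_fut`/`actor` assumed bound as in the actual
impl):

```python
import asyncio

async def aio_main(trio_main) -> None:
    # ... guest-run setup elided ...
    try:
        out = await trio_done_fut
    except asyncio.CancelledError:
        # 1) schedule the actor runtime to self-cancel on the
        #    next `trio` runner cycle,
        actor.cancel_soon()
        # 2) "pump" the `asyncio` event loop with a non-0 sleep
        #    so the `trio` guest-run can complete instead of
        #    being abandoned by `loop.close()`.
        await asyncio.sleep(0.1)
        out = await asyncio.shield(trio_done_fut)
```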
Other supporting changes:
- move `._exceptions.AsyncioCancelled`, our renamed
`asyncio.CancelledError` error-sub-type-wrapper, to `.to_asyncio` and make
it derive from `CancelledError` so as to be sure when raised by our
`asyncio` x-> `trio` exception relay machinery that `asyncio` is
getting the specific type it expects during cancellation.
- do "summary status" style logging in `run_as_asyncio_guest()` wherein
we compile the eventual `startup_msg: str` emitted just before waiting
on the `trio_done_fut`.
- shield-wait with `out: Outcome = await asyncio.shield(trio_done_fut)`
even though it seems to do nothing in the SIGINT handling case..(I
presume it might help avoid abandonment in a `asyncio.Task.cancel()`
case maybe?)
The tests only use one input spec (conveniently) so there's not much to
change in the logic,
- only pass the `maybe_msg_spec` to the child-side decorator and obvi
drop the surrounding `msgops.limit_plds()` block in the child.
- tweak a few `MsgDec` asserts, mostly dropping the
`msg._ops._def_any_spec` state checks since the child-side won't have
any pre pld-spec state given the runtime now applies the `pld_spec`
before running the task's func body.
- also allowed dropping the `finally:` which did a similar check
outside the `.limit_plds()` block.
Originally discovered while using `tractor.pause_from_sync()`
from the `i3ipc` client running in a bg-thread that uses `asyncio`
inside `modden`.
Turns out we definitely aren't correctly handling `.pause_from_sync()`
from the root actor when called from a `trio.to_thread.run_sync()`
bg thread:
- root-actor bg threads which can't `Lock._debug_lock.acquire()` since
they aren't in `trio.Task`s.
- even if scheduled via `.to_thread.run_sync(_debug._pause)` the
acquirer won't be the task/thread which calls `Lock.release()` from
`PdbREPL` hooks; this results in an RTE raised by `trio`..
- multiple threads will step on each other's stdio since cpython's GIL
seems to ctx switch threads on every input from the user to the REPL
loop..
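The essence of the broken call shape (a minimal repro sketch):

```python
import trio
import tractor

def sync_pause() -> None:
    # REPL entry from a bg (non-`trio`) thread
    tractor.pause_from_sync()

async def main():
    async with tractor.open_nursery(debug_mode=True):
        # multiple such threads (or a conc subactor) will clobber
        # each other's stdio..
        await trio.to_thread.run_sync(sync_pause)

if __name__ == '__main__':
    trio.run(main)
```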
Reproduce via reworking our example and test so that they catch and fail
for all edge cases:
- rework the `/examples/debugging/sync_bp.py` example to demonstrate the
above issues, namely the stdio clobbering in the REPL when multiple
threads and/or a subactor try to debug simultaneously.
|_ run one thread using a task nursery to ensure it runs conc with the
nursery's parent task.
|_ ensure the bg threads run conc with a subactor's usage of
`.pause_from_sync()`.
|_ gravely detail all the special cases inside a TODO comment.
|_ add some control flags to `sync_pause()` helper and don't use
`breakpoint()` by default.
- extend and adjust `test_debugger.test_pause_from_sync` to match (and
thus currently fail) by ensuring exclusive `PdbREPL` attachment when
the 2 bg root-actor threads are concurrently interacting alongside the
subactor:
|_ should only see one of the `_pause_msg` logs at a time for either
one of the threads or the subactor.
|_ ensure each attaches (in no particular order) before expecting the
script to exit.
Impl adjustments to `.devx._debug`:
- drop `Lock.repl`, no longer used.
- add `Lock._owned_by_root: bool` for the `.ctx_in_debug == None`
root-actor-task active case.
- always `log.exception()` for any `._debug_lock.release()` ownership
RTE emitted by `trio`, like we used to..
- add special `Lock.release()` log message for the stale lock but
`._owned_by_root == True` case; oh yeah and actually
`log.devx(message)`..
- rename `Lock.acquire()` -> `.acquire_for_ctx()` since it's only ever
used from subactor IPC usage; well that and for local root-task
usage we should prolly add a `.acquire_from_root_task()`?
- buncha `._pause()` impl improvements:
|_ type `._pause()`'s `debug_func` as a `partial` as well.
|_ offer `called_from_sync: bool` and `called_from_bg_thread: bool`
for the special case handling when called from `.pause_from_sync()`
|_ only set `DebugStatus.repl/repl_task` when `debug_func != None`
(OW ensure the `.repl_task` is not the current one).
|_ handle error logging even when `debug_func is None`..
|_ lotsa detailed commentary around root-actor-bg-thread special cases.
- when `._set_trace(hide_tb=False)` do `pdbp.set_trace(frame=currentframe())`
so the `._debug` internal frames are always included.
- by default always hide tracebacks for `.pause[_from_sync]()` internals.
- improve `.pause_from_sync()` to avoid root-bg-thread crashes:
|_ pass new `called_from_xxx_` flags and ensure `DebugStatus.repl_task`
is actually set to the `threading.current_thread()` when needed.
|_ manually call `Lock._debug_lock.acquire_nowait()` for the non-bg
thread case.
|_ TODO: still need to implement the bg-thread case using a bg
`trio.Task`-in-thread with an `trio.Event` set by thread REPL exit.
It's been a long time prepped and now finally implemented!
Offer a `shield: bool` argument from our async `._debug` APIs:
- `await tractor.pause(shield=True)`,
- `await tractor.post_mortem(shield=True)`
^-These-^ can now be used inside cancelled `trio.CancelScope`s,
something very handy when introspecting complex (distributed) system
tear/shut-downs particularly under remote error or (inter-peer)
cancellation conditions B)
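A usage sketch:

```python
import trio
import tractor

async def debug_teardown() -> None:
    with trio.CancelScope() as cs:
        cs.cancel()
        # without `shield=True` the REPL's internal checkpoints
        # would just (re)raise `trio.Cancelled` instead of
        # engaging the debugger..
        await tractor.pause(shield=True)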
Thanks to previous prepping in a prior attempt and various patches from
the rigorous rework of `.devx._debug` internals around typed msg specs,
there ain't much that was needed!
Impl deats:
- obvi passthrough `shield` from the public API endpoints (was already
done from a prior attempt).
- put ad-hoc internal `with trio.CancelScope(shield=shield):` around all
checkpoints inside `._pause()` for both the root-process and subactor
case branches.
Add a fairly rigorous example, `examples/debugging/shielded_pause.py`
with a wrapping `pexpect` test, `test_debugger.test_shield_pause()` and
ensure it covers as many cases as i can think of offhand:
- multiple `.pause()` entries in a loop despite parent scope
cancellation in a subactor RPC task which itself spawns a sub-task.
- a `trio.Nursery.parent_task` which raises, is handled and
tries to enter an unshielded `.post_mortem()`, which of course
internally raises `Cancelled` in a `._pause()` checkpoint, so we catch
the `Cancelled` again and then debug the debugger's internal
cancellation with specific checks for the particular raising
checkpoint-LOC.
- do ^- the latter -^ for both subactor and root cases to ensure we
can debug `._pause()` itself when it tries to REPL engage from
a cancelled task scope Bo
Since it turns out we didn't have a single example using that API Bo
The test granularly checks all use cases:
- `.post_mortem()` manual calls in both subactor and root.
- ensuring built-in RPC crash handling activates after each manual one
from ^.
- drafted some call-stack frame checking that i commented out for now
since we need to first do ANSI escape code removal due to the
colorization that `pdbp` does by default.
|_ added a TODO with SO link on `assert_before()`.
Also todo-staged a shielded-pause test to match with the already
existing-but-needs-refinement example B)
Namely checking that `Context._remote_error` is set to the raised MTE
in the invalid started and return value cases since prior to the recent
underlying changes to the `Context.result()` impl, it would not match.
Further,
- do asserts for non-MTE raising cases in both the parent and child.
- add todos for testing ctx-outcomes for per-side-validation policies
i anticipate supporting and implied msg-dialog race cases therein.
Expecting `Started` or `Return` with respective bad `.pld` values
depending on what type of failure is test parametrized.
This makes the suite run green it seems B)
Starts with some very basic cases:
- verify both subactor-as-child-ctx-task send side validation (failures)
as well as relay and raise on root-parent-side-task.
- wrap failure expectation cases that bubble out of `@acm`s with
a `maybe_expect_raises()` equiv wrapper with an embedded timeout
(sketched below).
- add `Return` cases including invalid by `str` and valid by a `None`.
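A sketch of the wrapper idea (sig/names assumed; simplified, a real
impl should take care to re-raise `trio.Cancelled`):

```python
from contextlib import asynccontextmanager as acm
from typing import Type
import pytest
import trio

@acm
async def maybe_expect_raises(
    raises: Type[BaseException]|None = None,
    timeout: float = 3,
):
    # fail loudly if the expected error never arrives in time
    with trio.fail_after(timeout):
        try:
            yield
        except BaseException as _err:
            if raises:
                assert isinstance(_err, raises)
            else:
                raise
        else:
            if raises:
                pytest.fail(f'Expected a {raises} and nothing raised!?')
```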
Still ToDo:
- commit impl changes to make the bulk of this suite pass.
- adjust how `MsgTypeError`s format the local (`.started()`) send side
`.tb_str` such that we don't do a "boxed" error prior to
`pack_error()` being called normally prior to `Error` transit.
Mostly the result of the `RemoteActorError.pformat()` and our
new `_pause/crash_msg: str`s which include the `trio.Task.__repr__()`
in the `log.pdb()` message.
Obvi use the `in_prompt_msg()` helper where it wasn't used prior.
ToDo later:
-[ ] still some outstanding questions on how detailed inceptions
should look, eg. in `test_multi_nested_subactors_error_through_nurseries()`
|_maybe we should be more pedantic at checking `.src_uid` vs.
`.relay_uid` fields?
-[ ] staged a placeholder test for verifying correct call-stack frame on
crash handler REPL entry.
-[ ] also need a test to verify that you can't pause from an already paused actor task
such as can happen if you try to step through runtime code that has
a recurrent entry to `._debug.pause()`.
Mostly adjustments for the new pld-receiver semantics/shim-layer which
results more often in the direct delivery of `RemoteActorError`s from
IPC API primitives (like `Portal.result()`) instead of being embedded in
an `ExceptionGroup` bundled from an embedded nursery.
Tossed usage of the `debug_mode: bool` fixture to a couple problematic
tests while i was working on them.
Also includes detailed assertion updates to the inter-peer cancellation
suite in terms of,
- `Context.canceller` state correctly matching the true src actor when
expecting a ctxc.
- any rxed `ContextCancelled` should instance match the `Context._local/remote_error`
as should the `.msgdata` and `._ipc_msg`.
Expose it from `._state.current_ipc_ctx()` and set it inside
`._rpc._invoke()` for child and inside `Portal.open_context()` for
parent.
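Presumably just a runtime-set `ContextVar` (a sketch of the assumed
impl shape):

```python
from contextvars import ContextVar
from tractor import Context

_ctxvar_Context: ContextVar[Context|None] = ContextVar(
    'ipc_context',
    default=None,
)

def current_ipc_ctx() -> Context|None:
    # the last `Context` set by `._rpc._invoke()` (child) or
    # `Portal.open_context()` (parent) for the calling task.
    return _ctxvar_Context.get()
```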
Still need to write a few more tests (particularly demonstrating usage
throughout multiple nested nurseries on each side) but this suffices as
a proto for testing with some debugger request-from-subactor stuff.
Other,
- use new `.devx.pformat.add_div()` for ctxc messages.
- add a block to always traceback dump on corrupted cs stacks.
- better handle non-RAEs exception output-formatting in context
termination summary log message.
- use a summary for `start_status` for msg logging in RPC loop.
- Drop `test_msg_spec_xor_pld_spec()` since we no longer support
`ipc_msg_spec` arg to `mk_codec()`.
- Expect `MsgTypeError`s around `.open_context()` calls when
`add_codec_hooks == False`.
- toss in some `.pause()` points in the subactor ctx body whilst hacking
out a `.pld` protocol for debug mode TTY locking.
Since in the receive-side error case the source of the exception is the
sender side (normally causing a local `TypeError` at decode time), might
as well bundle the error in remote-capture-style using boxing semantics
around the causing local type error raised from the
`msgspec.msgpack.Decoder.decode()` and with a traceback packed from
`msgspec`-specific knowledge of any field-type spec matching failure.
Deats on new `MsgTypeError` interface:
- includes a `.msg_dict` to get access to any `Decoder.type`-applied
load of the original (underlying and offending) IPC msg into
a `dict` form using a vanilla decoder which is normally packed into
the instance as a `._msg_dict`.
- a public getter to the "supposed offending msg" via `.payload_msg`
which attempts to take the above `.msg_dict` and load it manually into
the corresponding `.msg.types.MsgType` struct.
- a constructor `.from_decode()` to make it simple to build out error
instances from a failed decode scope where the aforementioned
`msgdict: dict` from the vanilla decode can be provided directly; see
the usage sketch below.
- ALSO, we now pack into `MsgTypeError` directly just like ctxc in
`unpack_error()`.
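Hypothetical usage from a failed-decode scope (kwarg names assumed from
the bullets above):

```python
import msgspec
from tractor._exceptions import MsgTypeError

def decode_or_box(codec, msg_bytes: bytes):
    try:
        return codec.decode(msg_bytes)
    except msgspec.ValidationError as verr:
        # vanilla decode to get the raw `dict` form..
        msgdict: dict = msgspec.msgpack.decode(msg_bytes)
        raise MsgTypeError.from_decode(  # kwarg names assumed
            message=str(verr),
            msgdict=msgdict,
        ) from verr
```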
This also completes the long-standing todo for `RemoteActorError` to
contain a ref to the underlying `Error` msg as `._ipc_msg`, with a public
`@property` accessor `.ipc_msg` that `defstruct()`-creates a pretty
struct version.
Internal tweaks for this include:
- `._ipc_msg` is the internal literal `Error`-msg instance if provided
with `.ipc_msg` the dynamic wrapper as mentioned above.
- `.__init__()` now can still take variable `**extra_msgdata` (similar
to the `dict`-msgdata as before) to maintain support for subtypes
which are constructed manually (not only by `pack_error()`) and insert
their own attrs which get placed in a `._extra_msgdata: dict` if no
`ipc_msg: Error` is provided as input.
- the `.msgdata` is now a merge of any `._extra_msgdata` and
a `dict`-casted form of any `._ipc_msg`.
- adjust all previous `.msgdata` field lookups to try equivalent field
reads on `._ipc_msg: Error`.
- drop default single ws indent from `.tb_str` and do a failover lookup
to `.msgdata` when `._ipc_msg is None` for the manually constructed
subtype-instance case.
- add a new class attr `.extra_body_fields: list[str]` to allow subtypes
to declare attrs they want shown in the `.__repr__()` output, eg.
`ContextCancelled.canceller`, `StreamOverrun.sender` and
`MsgTypeError.payload_msg`.
- ^-rework defaults pertaining to-^ with rename from
`_msgdata_keys` -> `_ipcmsg_keys` with latter now just loading directly
from the `Error` fields def and `_body_fields: list[str]` just taking
that value and removing the not-so-useful-in-REPL or already shown
(i.e. `.tb_str: str`) field names.
- add a new mod level `.pack_from_raise()` helper for auto-boxing RAE
subtypes constructed manually into `Error`s which is normally how
`StreamOverrun` and `MsgTypeError` get created in the runtime.
- in support of the above expose a `src_uid: tuple` override to
`pack_error()` such that the runtime can provide any remote actor id
when packing a locally-created yet remotely-caused RAE subtype.
- adjust all typing to expect `Error`s over `dict`-msgs.
Adjust some tests to match these changes:
- context and inter-peer-cancel tests to make their `.msgdata` related
checks against the new `.ipc_msg` as well and `.tb_str` directly.
- toss in an extra sleep to `sleep_a_bit_then_cancel_peer()` to keep the
'canceller' ctx child task cancelled by its parent in the 'root' for
the rte-raised-during-ctxc-handling case (apparently now it's
returning too fast, cool?).
These are likely temporary changes but still needed to actually see the
desired/correct failures (of which 5 of 6 tests are supposed to fail rn)
mostly to do with `Start` and `Return` msgs which are invalid under each
test's applied msg-spec.
Tweak set here:
- bit more `print()`s in root and sub for grokin test flow.
- never use `pytest.fail()` in subactor.. should know this by now XD
- comment out some bits that can't ever pass rn and make the underlying
expected failures harder to grok:
- the sub's child-side-of-ctx task doing sends should only fail
for certain msg types like `Started` + `Return`, `Yield`s are
processed receiver/parent side.
- don't expect `sent` list to match predicate set for the same reason
as last bullet.
The outstanding msg-type-semantic validation questions are:
- how to handle `.open_context()` with an input `kwargs` set that
doesn't adhere to the currently applied msg-spec?
- should the initial `@acm` entry fail before sending to the child
side?
- where should received `MsgTypeError`s be raised, at the `MsgStream`
`.receive()` or lower in the stack?
- i'm thinking we should mk `MsgTypeError` derive from
`RemoteActorError` and then have it be delivered as an error to the
`Context`/`MsgStream` for per-ctx-task handling; would lead to more
flexible/modular policy overrides in user code outside any defaults
we provide.
Set a diff `Msg.pld` spec per test and then send multiple types to
a child actor making sure the child can only send certain types over
a stream and fails with validation or decode errors ow. The test is also
param-ed both with and without hooks demonstrating how a custom type,
`NamespacePath`, needs them for effective use. The subactor IPC context
child is passed a `expect_ipc_send: dict` which relays the values along
with their expected `.send()`-ability.
Deats on technical refinements:
------ - ------
- added an `iter_maybe_sends()` send-value-as-msg-auditor and predicate
generator (literally) so as to be able to pre-determine if given the
current codec and `send_values` which values are expected to be IPC
transmittable.
- as per ^, the diff value-msgs are first round-tripped inside
a `Started` msg using the configured codec in the parent/root actor
before bothering with using IPC primitives + a subactor; this is how
the `expect_ipc_send` table is generated initially.
- for serializing the specs (`Union[Type]`s as required by `msgspec`),
added a pair of codec hooks: `enc/dec_type_union()` (that ideally we
move into a `.msg` submod eventually) which encode the type-values as
a `list[str]` of names; see the sketch after this list.
- the `dec_` hook had to be modified to NOT raise an error when an
invalid/unhandled value arrives, this is because we do NOT want the
RPC msg handling loop to raise on the `async for msg in chan:` and
instead prefer to ignore and warn (for now, but eventually respond
with error msg - see notes in hook body) these msgs when sent during
a streaming phase; `Context.started()` will however error on a bad
input for the current msg-spec since it is part of the "cheap"
dialog (again see notes in `._context`) wherein the `Started` msg
is always roundtripped prior to `Channel.send()` to guarantee
the child adheres to its own spec.
- tossed in lotsa `print()`s for console groking of the run progress.
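A simplified sketch of those union-spec hooks (decode-side table
content is app specific and hypothetical here):

```python
from typing import Type, Union, get_args
from tractor.msg import NamespacePath

# decode-side name -> type table
_spec_types: dict[str, Type] = {'NamespacePath': NamespacePath}

def enc_type_union(obj: object) -> list[str]:
    # serialize a `Union[Type]` spec as its member type names
    return [t.__name__ for t in get_args(obj)]

def dec_type_union(type_: Type, obj: object) -> object:
    if (
        isinstance(obj, list)
        and all(name in _spec_types for name in obj)
    ):
        return Union[tuple(_spec_types[name] for name in obj)]
    # do NOT raise on unhandled values; warn-and-ignore so the
    # RPC `async for msg in chan:` loop isn't crashed by a bad
    # msg sent during a streaming phase.
    print(f'Unhandled decode value: {obj!r}')
    return obj
```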
Further notes on typed-msging breaking cancellation:
------ - ------
- turns out the runtime's cancellation implementation, being done
with `Actor.cancel()` methods and friends, will actually break when
a stringent spec is applied (eg. a single type-spec) since the return
values from said methods are generally `bool`s..
- this means we do indeed need special handling of "runtime RPC method
invocations" since ideally a user's msg-spec choices do not break core
functionality on them XD
=> The obvi solution is to add a/some special sub-`Msg` types for such
cases, possibly just a `RuntimeReturn(Return)` type that will always
include a `.pld: bool` for these cancel methods such that their
results are always handled without msg type errors.
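I.e. something along the lines of:

```python
from tractor.msg.types import Return

class RuntimeReturn(Return):
    '''A `Return` variant for runtime-internal (cancel) methods
    whose `bool` results must never be subject to the user's
    applied msg-spec.
    '''
    pld: bool
```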
More to come on a (hopefully) elegant solution to that last bit!
Since with my in-index runtime-port to our native msg-spec it seems
these ones are hanging B(
- `test_one_end_stream_not_opened()`
- `test_maybe_allow_overruns_stream()`
Tossing in some `trio.fail_after()`s seems to at least nab them as
failures B)
Though the runtime hasn't been changed over in this patch (it was in the
local index at the time however), the test does now demonstrate that
using a `Started` with a correctly typed `.pld` will codec correctly when
passed manually to `MsgCodec.encode/decode()`.
Despite not having the runtime ported to the new shuttle msg set
(meaning the mentioned test will fail without the runtime port patch),
I was able to get this first original test working that limits payload
packets as a `Msg.pld: NamespacePath`; as long as we spec
`enc/dec_hook()`s then the `Msg.pld` will be processed correctly as per:
https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
in both the `Any` and `NamespacePath|None` spec cases.
^- turns out in this case -^ that the codec hooks only get invoked on
the unknown-fields NOT the entire `Struct`-msg.
A further gotcha was merging a `|None` into the `pld_spec` since this
test spawns a subactor and opens a context via `send_back_nsp()` and
that func has no explicit `return` - so of course it delivers
a `Return(pld=None)` which will fail if we only spec `NamespacePath`.
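The hooks end up being pretty minimal; a sketch matching the `msgspec`
docs pattern linked above:

```python
from tractor.msg import NamespacePath

def enc_nsp(obj: object) -> str:
    # `NamespacePath` is a `str`-subtype so just cast it down
    if isinstance(obj, NamespacePath):
        return str(obj)
    raise NotImplementedError(f'{type(obj)!r}')

def dec_nsp(type_: type, obj: object) -> object:
    # only invoked for the unknown fields, NOT the whole struct-msg
    if type_ is NamespacePath and isinstance(obj, str):
        return NamespacePath(obj)
    return obj
```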
Turns out the generics based payload speccing API, as in
https://jcristharif.com/msgspec/supported-types.html#generic-types,
DOES WORK properly as long as we don't rely on inheriting the `Generic`
parameterization from a parent `Msg`..
So let's get real pedantic in the `mk_msg_spec()` internals as well as
verification in the test suite!
Fixes in `.msg.types`:
- implement (as part of tinker testing) multiple spec union building
methods via a `spec_build_method: str` to `mk_msg_spec()` and leave a
buncha notes around what did and didn't work:
- 'indexed_generics' is the only method THAT WORKS and the one that
you'd expect being closest to the `msgspec` docs (link above).
- 'defstruct' using dynamically defined msgs => doesn't work!
- 'types_new_class' using dynamically defined msgs but with
`types.new_class()` => ALSO doesn't work..
- explicitly separate the `.pld` type-constrainable by user code msg
set into `types._payload_spec_msgs` putting the others in
a `types._runtime_spec_msgs` and the full set defined as `.__spec__`
(moving it out of the pkg-mod and back to `.types` as well).
- for the `_payload_spec_msgs` msgs manually make them inherit `Generic[PayloadT]`
and (redundantly) define a `.pld: PayloadT` field (see the sketch below).
- make `IpcCtxSpec.functype` an inline `Literal`.
- toss in some TODO notes about choosing a better `Msg.cid` type.
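I.e. each payload-spec msg now looks roughly like (field set trimmed):

```python
from typing import Generic, TypeVar
from msgspec import Struct

PayloadT = TypeVar('PayloadT')

class Started(
    Struct,
    Generic[PayloadT],
    tag=True,
):
    cid: str
    # the (redundantly) redefined, user-spec-constrainable field
    pld: PayloadT
```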
Fixes/tweaks around `.msg._codec`:
- rename `MsgCodec.ipc/pld_msg_spec` -> `.msg/pld_spec`
- make `._enc/._dec` non optional fields
- wow, ^facepalm^ , make sure `._ipc.MsgpackTCPStream.__init__()` uses
`mk_codec()` since `MsgCodec` can't be (easily) constructed directly.
Get more detailed in testing:
- inside the `chk_pld_type()` helper ensure `roundtrip` is always set to
some value, `None` by default but a bool depending on legit outcome.
- drop input `generic`; no longer used.
- drop the masked `typedef` loop from `Msg.__subclasses__()`.
- add an `expect_roundtrip: bool` input and use it to jump into the
debugger when any expectation doesn't match the outcome.
- use new `MsgCodec` field names (as per first section above).
- ensure the encoded msg matches the decoded one from both the ad-hoc
decoder and codec loaded values.
- ensure the pld checking is only applied to msgs in the
`types._payload_spec_msgs` set by `typef.__name__` filtering
since `mk_msg_spec()` now returns the full `.types.Msg` set.
Mostly adjusting input args/logic to various spec/codec signatures and
new runtime semantics:
- `test_msg_spec_xor_pld_spec()` to verify that a shuttle prot spec and
payload spec are necessarily mutex and that `mk_codec()` enforces it.
- switch to `ipc_msg_spec` input in `mk_custom_codec()` helper.
- drop buncha commented cruft from `test_limit_msgspec()` including no
longer needed type union instance checks in dunder attributes.
As per the long outstanding GH issue this starts our rigorous journey
into an attempt at a type-safe, cross-actor SC, IPC protocol Bo
boop -> https://github.com/goodboy/tractor/issues/36
The idea is to "formally" define our SC "shuttle (dialog) protocol" by
specifying a new `.msg.types.Msg` subtype-set which can fully
encapsulate all IPC msg schemas needed in order to accomplish
cross-process SC!
The msg set deviated a little in terms of (type) names from the existing
`dict`-msgs currently used in the runtime impl, but I think the name
changes are much better in terms of explicitly representing the internal
semantics of the actor runtime machinery/subsystems and the
IPC-msg-dialog required for SC enforced RPC.
------ - ------
In cursory form, the new formal msgs-spec includes the following msg-subtypes
of a new top-level `Msg` boxing type (that holds the base field schema
for all msgs):
- `Start` to request RPC task scheduling by passing a `FuncSpec` payload
(to replace the currently used `{'cmd': ... }` dict msg impl)
- `StartAck` to allow the RPC task callee-side to report a `IpcCtxSpec`
payload immediately back to the caller (currently responded naively via
a `{'functype': ... }` msg)
- `Started` to deliver the first value from `Context.started()`
(instead of the existing `{'started': ... }`)
- `Yield` to shuttle `MsgStream.send()`-ed values (instead of
our `{'yield': ... }`)
- `Stop` to terminate a `Context.open_stream()` session/block
(over `{'stop': True }`)
- `Return` to deliver the final value from the `Actor.start_remote_task()`
(which is a `{'return': ... }`)
- `Error` to box `RemoteActorError` exceptions via a `.pld: ErrorData`
payload, planned to replace/extend the current `RemoteActorError.msgdata`
mechanism internal to `._exceptions.pack/unpack_error()`
The new `tractor.msg.types` includes all the above msg defs as well as an API
for rendering a "payload type specification" using a
`payload_type_spec: Union[Type]` that can be passed to
`msgspec.msgpack.Decoder(type=payload_type_spec)`. This ensures that
(for a subset of the above msg set) `Msg.pld: PayloadT` data is
type-parameterized using `msgspec`'s new `Generic[PayloadT]` field
support and thus enables providing for an API where IPC `Context`
dialogs can strictly define the allowed payload-datatype-set via type
union!
Iow, this is the foundation for supporting `Channel`/`Context`/`MsgStream`
IPC primitives which are type checked/safe as desired in GH issue:
- https://github.com/goodboy/tractor/issues/365
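Eg. a sketch of applying such a spec manually (exact `mk_msg_spec()`
sig/return shape assumed here):

```python
import msgspec
from tractor.msg import NamespacePath
from tractor.msg.types import mk_msg_spec

# render the limited msg-set for a `NamespacePath|None` payload union
msg_spec = mk_msg_spec(payload_type_union=NamespacePath|None)
dec = msgspec.msgpack.Decoder(type=msg_spec)
```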
Misc notes on current impl(s) status:
------ - ------
- add a `.msg.types.mk_msg_spec()` which uses the new `msgspec` support
for `class MyStruct(Struct, Generic[T])` parameterize-able fields and
delivers our boxing SC-msg-(sub)set with the desired `payload_types`
applied to `.pld`:
- https://jcristharif.com/msgspec/supported-types.html#generic-types
- as a note this impl seems to need to use `types.new_class()` dynamic
subtype generation, though i don't really get *why* still.. but
without that the `msgspec.msgpack.Decoder` doesn't seem to reject
`.pld` limited `Msg` subtypes as demonstrated in the new test.
- around this ^ add a `.msg._codec.limit_msg_spec()` cm which exposes
this payload type limiting API such that it can be applied per task
via a `MsgCodec` in app code.
- the orig approach in https://github.com/goodboy/tractor/pull/311 was
the idea of making payload fields `.pld: Raw` wherein we could have
per-field/sub-msg decoders dynamically loaded depending on the
particular application-layer schema in use. I don't want to lose the
idea of this since I think it might be useful for an idea I have about
capability-based-fields(-sharing, maybe using field-subset
encryption?), and as such i've kept the (ostensibly) working impls in
TODO-comments in `.msg._codec` wherein maybe we can add
a `MsgCodec._payload_decs: dict` table for this later on.
|_ also left in the `.msg.types.enc/decmsg()` impls but renamed as
`enc/dec_payload()` (but reworked to not rely on the lifo codec
stack tables; now removed) such that we can prolly move them to
`MsgCodec` methods in the future.
- add an unused `._codec.mk_tagged_union_dec()` helper which was
originally factored out of the #311 proto-code but didn't end up working
as desired with the new parameterized generic fields approach (now
in `msg.types.mk_msg_spec()`).
Testing/deps work:
------ - ------
- new `test_limit_msgspec()` which ensures all the `.types` content is
correct but without using the wrapping APIs in `._codec`; i.e. using
an in-line `Decoder` instead of a `MsgCodec`.
- pin us to `msgspec>=0.18.5` which has the needed generic-types support
(which took me way too long yesterday to figure out when implementing all
this XD)!
Fitting in line with the issues outstanding:
- #36: (msg)spec-ing out our SCIPP (structured-con-inter-proc-prot).
(https://github.com/goodboy/tractor/issues/36)
- #196: adding strictly typed IPC msg dialog schemas, more or less
better described as "dialog/transaction scoped message specs"
using `msgspec`'s tagged unions and custom codecs.
(https://github.com/goodboy/tractor/issues/196)
- #365: using modern static type-annots to drive capability based
messaging and RPC.
(https://github.com/goodboy/tractor/issues/365)
This is a first draft of a new API for dynamically overriding IPC msg
codecs for a given interchange lib from any task in the runtime. Right
now we obviously only support `msgspec` but ideally this API remains
general enough to be used for other backends eventually (like
`capnproto`, and apache arrow).
Impl is in a new `tractor.msg._codec` with:
- a new `MsgCodec` type for encapsulating `msgspec.msgpack.Encoder/Decoder`
pairs and configuring any custom enc/dec_hooks or typed decoding.
- factory `mk_codec()` for creating new codecs ad-hoc from a task.
- `contextvars` support for a new `trio.Task` scoped
`_ctxvar_MsgCodec: ContextVar[MsgCodec]` named 'msgspec_codec'.
- `apply_codec()` for temporarily modifying the above per task
as needed around `.open_context()` / `.open_stream()` operation.
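An example usage sketch (hook kwarg names assumed):

```python
import tractor
from tractor.msg import NamespacePath
from tractor.msg._codec import mk_codec, apply_codec

def enc_nsp(obj) -> str:
    return str(obj)  # `NamespacePath` is a `str`-subtype

def dec_nsp(type_, obj):
    return NamespacePath(obj) if type_ is NamespacePath else obj

async def send_nsps(ctx: tractor.Context) -> None:
    # build & apply a custom codec for (only) this task's IPC ops
    nsp_codec = mk_codec(enc_hook=enc_nsp, dec_hook=dec_nsp)
    with apply_codec(nsp_codec):
        async with ctx.open_stream() as stream:
            await stream.send(NamespacePath.from_ref(send_nsps))
```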
A new test (suite) in `test_caps_msging.py`:
- verify a parent and its child can enable the same custom codec (in
this case to transmit `NamespacePath`s) with tons of pedantic ctx-vars
checks.
- ToDo: still need to implement #36 msg types in order to be able to get
decodes working (as in `MsgStream.receive()` will deliver an already
created `NamespacePath` obj) since currently all msgs come packed in `dict`-msg
wrapper packets..
-> use the proto from PR #35 to get nested `msgspec.Raw` processing up
and running Bo
It's **almost** there, we're just missing the final translation code to
get from an `asyncio` side task to be able to call
`.devx._debug.wait_for_parent_stdin_hijack()` to do root actor TTY
locking. Then we just need to ensure internals also do the right thing
with `greenback()` for equivalent sync `breakpoint()` style pause
points.
Since i'm deferring this until later, tossing in some xfail tests to
`test_infected_asyncio` with TODOs for the needed implementation as well
as eventual test org.
By "provision" it means we add:
- `greenback` init block to `_run_asyncio_task()` when debug mode is
enabled (but which will currently rte when `asyncio` is detected)
using `.bestow_portal()` around the `asyncio.Task`.
- a call to `_debug.maybe_init_greenback()` in the `run_as_asyncio_guest()`
guest-mode entry point.
- as part of `._debug.Lock.is_main_trio_thread()` whenever the async-lib
is not 'trio', error-log the backend name (which is obvi `'asyncio'`
in this use case).
Now supports use from any `trio` task, any sync thread started with
`trio.to_thread.run_sync()` AND also via `breakpoint()` builtin API!
The only bit missing now is support for `asyncio` tasks when in infected
mode.. Bo
`greenback` setup/API adjustments:
- move `._rpc.maybe_import_gb()` to -> `devx._debug` and factor out the cached
import checking into a sync func whilst placing the async `.ensure_portal()`
bootstrapping into a new async `maybe_init_greenback()`.
- use the new init-er func inside `open_root_actor()` with the output
predicating whether we override the `breakpoint()` hook.
core `devx._debug` implementation deatz:
- make `mk_mpdb()` only return the `pdbp.Pdb` subtype instance since
the sigint unshielding func is now accessible from the `Lock`
singleton from anywhere.
- add non-main thread support (at least for `trio.to_thread` use cases)
to our `Lock` with a new `.is_trio_thread()` predicate that delegates
directly to `trio`'s internal version.
- do `Lock.is_trio_thread()` checks inside any methods which require
special provisions when invoked from a non-main `trio` thread:
- `.[un]shield_sigint()` methods since `signal.signal` usage is only
allowed from cpython's main thread.
- `.release()` since `trio.StrictFIFOLock` can only be called from
a `trio` task.
- rework `.pause_from_sync()` itself to directly call `._set_trace()`
and don't bother with `greenback._await()` when we're already calling
it from a `.to_thread.run_sync()` thread, oh and try to use the
thread/task name when setting `Lock.local_task_in_debug`.
- make it an RTE for now if you try to use `.pause_from_sync()` from any
infected-`asyncio` task, but support is (hopefully) coming soon!
For testing we add a new `test_debugger.py::test_pause_from_sync()`
which includes a ctrl-c parametrization around the
`examples/debugging/sync_bp.py` script which includes all currently
supported/working usages:
- `tractor.pause_from_sync()`.
- via `breakpoint()` overload.
- from a `trio.to_thread.run_sync()` spawn.
Use new `RemoteActorError` fields in various assertions particularly
ensuring that an RTE relayed through the spawner from the little_bro
shows up at the client with the right number of entries in the
`.relay_path` and that the error is raised in the client as desired in
the original use case from `modden`'s remote spawn request API
(which was kinda the whole original motivation to finally get all this
multi-actor error relay stuff workin).
Case extensions:
- RTE relayed from little_bro through spawner to client when
`raise_sub_spawn_error_after` is set; in this case test should raise
the relayed and RAE boxed RTE right up to the `trio.run()`.
-> ensure the `rae.src_uid`, `.relay_uid` are set correctly.
-> ensure ctx cancels are not acked.
- use `expect_ctxc()` around root's `tell_little_bro()` usage.
- do `debug_mode` assertions when enabled by test harness in each actor
layer.
- obvi use new `.src_type`/`.boxed_type` for final error propagation
assertions.
More or less just simplifies to not seeing the stream closure errors and
instead expecting KBIs from the simulated user who 'ctl-cs after hang'.
Toss in a little `stuff_hangin_ctlc()` to the script to wrap all that
and always check stream closure before sending the final KBI.
- `trio_typing` is nearly obsolete since `trio >= 0.23`
- `exceptiongroup` is built-in to python 3.11
- `async_generator` primitives have lived in `contextlib` for quite
a while!
Since importing from our top level `conftest.py` is not scalable
or as "future forward thinking" in terms of:
- LoC-wise (it's only one file),
- prevents "external" (aka non-test) example scripts from importing
content easily,
- seemingly(?) can't be used via abs-import if using
a `[tool.pytest.ini_options]` in a `pyproject.toml` vs.
a `pytest.ini`, see:
https://docs.pytest.org/en/8.0.x/reference/customize.html#pyproject-toml
=> Go back to having an internal "testing" pkg like `trio` (kinda) does.
Deats:
- move generic top level helpers into pkg-mod including the new
`expect_ctxc()` (which i needed in the advanced faults testing script).
- move `@tractor_test` into `._testing.pytest` sub-mod.
- adjust all the helper imports to be a `from tractor._testing import <..>`
Rework `test_ipc_channel_break_during_stream()` and backing script:
- make test(s) pull `debug_mode` from new fixture (which is now
controlled manually from `--tpdb` flag) and drop the previous
parametrized input.
- update logic in ^ test for "which-side-fails" cases to better match
recently updated/stricter cancel/failure semantics in terms of
`ClosedResourceError` vs. `EndOfChannel` expectations.
- handle `ExceptionGroup`s with expected embedded errors in test.
- better pedantics around whether to expect a user simulated KBI.
- for `examples/advanced_faults/ipc_failure_during_stream.py` script:
- generalize ipc breakage in new `break_ipc()` with support for diff
internal `trio` methods and a #TODO for future disti frameworks
- only make one sub-actor task break and the other just stream.
- use new `._testing.expect_ctxc()` around ctx block.
- add a bit of exception handling with `print()`s around ctxc (unused
except if 'msg' break method is set) and eoc cases.
- don't break parent side ipc in loop any more than once
after first break, checked via flag var.
- add a `pre_close: bool` flag to control whether
`MsgStream.aclose()` is called *before* any ipc breakage method.
Still TODO:
- drop `pytest.ini` and add the alt section to `pyproject.toml`.
-> currently can't get `--rootdir=` opt to work.. not showing in
console header.
-> ^ also breaks on 'tests' `enable_modules` imports in subactors
during discovery tests?
The seeming cause is that some cases occasionally raise an
`ExceptionGroup` instead of a (collapsed out) single error; in those
cases at least try to check that `.exceptions` contains the original
error.
Found exactly why trying this won't work when playing around with
opening workspaces in `modden` using a `Portal.open_context()` back to
the 'bigd' root actor: the RPC machinery only registers one entry in
`Actor._contexts` which will get overwritten by each task's side and
then experience race-based IPC msging errors (eg. rxing `{'started': _}`
on the callee side..). Instead make opening a ctx back to the self-actor
a runtime error describing it as an invalid op.
To match:
- add a new test `test_ctx_with_self_actor()` to the context semantics
suite.
- tried out adding a new `side: str` to the `Actor.get_context()` (and
callers) but ran into not being able to determine the value from inside
`._push_result()` where it's needed to figure out which side to push
to.. So, just leaving the commented arg (passing) in the runtime core
for now in case we can come back to trying to make it work, tho i'm
thinking it's not the right hack anyway XD
Such that it's set to whatever `Actor.reg_addrs: list[tuple]` is during
the actor's init-after-spawn, guaranteeing each actor has at least the
registry infos from its parent. Ensure we read this if defined over
`_root._default_lo_addrs` in `._discovery` routines, namely
`.find_actor()` since it's the one API normally used without expecting
the runtime's `current_actor()` to be up.
Update the latest inter-peer cancellation test to use the `reg_addr`
fixture (and thus test this new runtime-vars value via `find_actor()`
usage) since it was failing if run *after* the infected `asyncio` suite
due to registry contact failure.