And IFF the `await wait_func(proc)` is cancelled such that we avoid
clobbering some subactor that might be REPL-ing even though its parent
actor is in the midst of (gracefully) cancelling it.
From both `open_root_actor()` and `._entry._trio_main()`.
Other `breakpoint()`-from-sync-func fixes:
- properly disable the default hook using `"0"` XD
- offer a `hide_tb: bool` from `open_root_actor()`.
- disable hiding the `._trio_main()` frame, bc pretty sure it doesn't
help anyone (either way) when REPL-ing/tb-ing from a subactor..?
Call it `hide_runtime_frames()` and stick all the lines from the top of
the `._debug` mod in there along with a little `log.devx()` emission on
what gets hidden by default ;)
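As a rough sketch of the two mechanisms above (hypothetical helper names, not the actual `._debug` impl): per PEP 553 the stdlib's `sys.breakpointhook()` re-reads `PYTHONBREAKPOINT` on every `breakpoint()` call and treats `"0"` as disabled, while per-frame hiding rides on the same `__tracebackhide__` marker we already toss into runtime eps:

    import os

    def disable_sync_breakpoint() -> None:
        # per PEP 553, `sys.breakpointhook()` consults this env var on
        # every `breakpoint()` call; the string "0" makes it a no-op.
        os.environ['PYTHONBREAKPOINT'] = '0'

    def some_runtime_frame() -> None:
        # marker checked when rendering tbs / REPL stacks; a
        # `hide_runtime_frames()` helper just applies (the moral
        # equivalent of) this to each runtime func.
        __tracebackhide__: bool = True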
Other,
- fix ref-error where internal-error handler might trigger despite the
debug `req_ctx` not yet having init-ed, such that we don't try to
cancel or log about it when it never was fully created/initialized..
- fix assignment typo inside `_set_trace()` for `task`.. lel
Such that when displaying with `.__str__()` we do not show the type
header (style) since normally python's raising machinery already prints
the type path like `'tractor._exceptions.RemoteActorError:'`, so doing
it 2x is a bit ugly ;p
In support,
- include `.relay_uid` in `RemoteActorError.extra_body_fields`.
- offer a `with_type_header: bool` to `.pformat()` and only put the
opening type path and closing `')>'` tail line when `True`.
- add `.is_inception() -> bool:` for an easy way to determine if the
error is multi-hop relayed.
- only repr the `'|_relay_uid=<uid>'` field when an error is an inception.
- tweak the invalid-payload case in `_mk_msg_type_err()` to explicitly
state in the `message` how the `any_pld` value does not match the `MsgDec.pld_spec`
by decoding the invalid `.pld` with an any-dec.
- allow `_mk_msg_type_err(**mte_kwargs)` passthrough.
- pass `boxed_type=cls` inside `MsgTypeError.from_decode()`.
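A toy sketch of the `with_type_header` toggle described above (hypothetical class and body content, not the real `RemoteActorError` impl): `repr()` keeps the full `<pkg.mod.Type(...)>` style box while `str()` drops the header/tail since python's raising machinery already prints the type path prefix:

    class BoxedErrorDemo(Exception):

        def pformat(self, with_type_header: bool = True) -> str:
            body: str = (
                ' |_relay_uid=...\n'
                ' |_src_uid=...\n'
            )
            if with_type_header:
                # repr-style: "<pkg.mod.Type(\n<body>\n)>"
                qualpath: str = f'{type(self).__module__}.{type(self).__name__}'
                return f'<{qualpath}(\n{body})>'

            # str-style: python already prefixes "pkg.mod.Type: " when
            # the error is raised, so don't repeat it.
            return body

        def __repr__(self) -> str:
            return self.pformat(with_type_header=True)

        def __str__(self) -> str:
            return self.pformat(with_type_header=False)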
More or less by pedantically separating and managing root and subactor
request syncing events to always be managed by the locking IPC context
task-funcs:
- for the root's "child"-side, `lock_tty_for_child()` directly creates
and sets a new `Lock.req_handler_finished` inside a `finally:`
- for the sub's "parent"-side, `request_root_stdio_lock()` does the same
with a new `DebugStatus.req_finished` event and separates it from
the `.repl_release` event (which indicates a "c" or "q" from user and
thus exit of the REPL session) as well as sets a new `.req_task:
trio.Task` to explicitly distinguish from the app-user-task that
enters the REPL vs. the paired bg task used to request the global
root's stdio mutex alongside it.
- apply the `__pld_spec__` on "child"-side of the ctx using the new
`Portal.open_context(pld_spec)` parameter support; drops use of any
`ContextVar` malarky used prior for `PldRx` mgmt.
- removing `Lock.no_remote_has_tty` since it was a nebulous name left
over from the prior "everything is in a `Lock`" design..
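The event-per-request-task pattern boils down to something like this little sketch (hypothetical names; the real state lives on `Lock`/`DebugStatus`): the locking task-func allocates a fresh `trio.Event` on entry and always `.set()`s it in a `finally:` so any other task contending for REPL entry can sync on the request's full teardown:

    import trio

    class _ReqStatus:
        # stand-in for `Lock.req_handler_finished` / `DebugStatus.req_finished`
        req_finished: trio.Event|None = None

    async def stdio_lock_request_task(status: _ReqStatus) -> None:
        status.req_finished = trio.Event()
        try:
            ...  # open the IPC ctx, wait for REPL release, etc.
        finally:
            # contenders do `await status.req_finished.wait()` before
            # (re)issuing their own request.
            status.req_finished.set()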
------ - ------
More rigorous impl to handle various edge cases in `._pause()`:
- rejig `_enter_repl_sync()` to wrap the `debug_func == None` case
inside maybe-internal-error handler blocks.
- better logic for recurrent vs. multi-task contention for REPL entry in
subactors, by guarding using `DebugStatus.req_task` and by now waiting
on the new `DebugStatus.req_finished` for the multi-task contention
case.
- even better internal error handling and reporting for when this code
is hacked on and possibly broken ;p
------ - ------
Updates to `.pause_from_sync()` support:
- add optional `actor`, `task` kwargs to `_set_trace()` to allow
compat with the new explicit `debug_func` calling in `._pause()` and
pass a `threading.Thread` for `task` in the `.to_thread()` usage case.
- add an `except` block that tries to show the frame on any internal
error.
------ - ------
Relatedly includes a buncha cleanups/simplifications somewhat in
prep for some coming refinements (around `DebugStatus`):
- use all the new attrs mentioned above as needed in the SIGINT shielder.
- wait on `Lock.req_handler_finished` in `maybe_wait_for_debugger()`.
- dropping a ton of masked legacy code left in during the recent reworks.
- better comments, like on the use of `Context._scope` for shielding on
the "child"-side to avoid the need to manage yet another cs.
- add/change-to lotsa `log.devx()` level emissions for those infos which
are handy while hacking on the debugger but not ideal/necessary to be
user visible.
- obvi add lotsa follow up todo notes!
Much like other recent changes attempt to detect runtime-bug-causing
crashes and only show the runtime-endpoint frame when present.
Adds a `ActorNursery._scope_error: BaseException|None` attr to aid with
detection. Also toss in some todo notes for removing and replacing the
`.run_in_actor()` method API.
Just inside `._invoke()` after the `ctx: Context` is retrieved.
Also try our best to *not hide* internal frames when a non-user-code
crash happens, normally either due to a runtime RPC EP bug or
a transport failure.
Kinda like a "runtime"-y level for `.pdb()` (which is more or less like
an `.info()` for our debugger subsys) which can be used to report
internals info for those hacking on `.devx` tools.
Also, inject only the *last* 6 digits of the `id(Task)` in
`pformat_task_uid()` output by default.
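I.e. the uid rendering now looks roughly like this (hypothetical helper sig, showing just the digit-truncation bit):

    def pformat_task_uid_demo(task) -> str:
        # "<task-name>[<last-6-digits-of-id>]"
        name: str = getattr(task, 'name', type(task).__name__)
        return f'{name}[{str(id(task))[-6:]}]'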
Since the state mgmt becomes quite messy with multiple sub-tasks inside
an IPC ctx, AND bc generally speaking the payload-type-spec should map
1-to-1 with the `Context`, it doesn't make a lot of sense to be using
`ContextVar`s to modify the `Context.pld_rx: PldRx` instance.
Instead, always allocate a full instance inside `mk_context()` with the
default `.pld_rx: PldRx` set to use the `msg._ops._def_any_pldec: MsgDec`
In support, simplify the `.msg._ops` impl and APIs:
- drop `_ctxvar_PldRx`, `_def_pld_rx` and `current_pldrx()`.
- rename `PldRx._pldec` -> `._pld_dec`.
- rename the unused `PldRx.apply_to_ipc()` -> `.wraps_ipc()`.
- add a required `PldRx._ctx: Context` attr since it is needed
internally in some meths and each pld-rx now maps to a specific ctx.
- modify all recv methods to accept a `ipc: Context|MsgStream` (instead
of a `ctx` arg) since both have a ref to the same `._rx_chan` and there
are only a couple spots (in `.dec_msg()`) where we need the `ctx`
explicitly (which can now be easily accessed via a new `MsgStream.ctx`
property, see below).
- always show the `.dec_msg()` frame in tbs if there's a reference error
when calling `_raise_from_unexpected_msg()` in the fallthrough case.
- implement `limit_plds()` as light wrapper around getting the
`current_ipc_ctx()` and mutating its `MsgDec` via
`Context.pld_rx.limit_plds()`.
- add a `maybe_limit_plds()` which just provides an `@acm` equivalent of
`limit_plds()` handy for composing in a `async with ():` style block
(avoiding additional indent levels in the body of async funcs).
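The `maybe_limit_plds()` idea is just the usual "wrap a sync cm in an `@acm` (and no-op when unused)" trick; a minimal standalone sketch (hypothetical demo funcs, not the `.msg._ops` code):

    from contextlib import (
        asynccontextmanager as acm,
        contextmanager as cm,
    )

    @cm
    def limit_plds_demo(spec: type):
        # pretend-swap the pld-spec decoder in, revert on exit
        print(f'applying pld-spec: {spec!r}')
        try:
            yield spec
        finally:
            print('reverting pld-spec')

    @acm
    async def maybe_limit_plds_demo(spec: type|None = None):
        # composes nicely in `async with (..., ...):` style blocks and
        # no-ops when no spec is passed.
        if spec is None:
            yield None
            return

        with limit_plds_demo(spec) as dec:
            yield dec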
Obvi extend the `Context` and `MsgStream` interfaces as needed
to match the above:
- add a `Context.pld_rx` pub prop.
- new private refs to `Context._started_msg: Started` and
a `._started_pld` (mostly for internal debugging / testing / logging)
and set inside `.open_context()` immediately after the syncing phase.
- a `Context.has_outcome() -> bool:` predicate which can be used to more
easily determine if the ctx errored or has a final result.
- pub props for `MsgStream.ctx: Context` and `.chan: Channel` providing
full `ipc`-arg compat with the `PldRx` method signatures.
Finally got this working so that if/when an internal bug is introduced
to this request task-func, we can actually REPL-debug the lock request
task itself B)
As in, if the subactor's lock request task internally errors we,
- ensure the task always terminates (by calling `DebugStatus.release()`)
and explicitly reports (via a `log.exception()`) the internal error.
- capture the error instance and set as a new `DebugStatus.req_err` and
always check for it on final teardown - in which case we also,
- ensure it's reraised from a new `DebugRequestError`.
- unhide the stack frames for `_pause()`, `_enter_repl_sync()` so that
the dev can upward inspect the `_pause()` call stack sanely.
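In pseudo-sketch form (hypothetical names; the real state lives on `DebugStatus`), the request task's error handling now looks something like:

    import logging
    log = logging.getLogger(__name__)

    class DebugRequestErrorDemo(Exception):
        'Stand-in for the new `DebugRequestError`'

    async def request_root_stdio_lock_demo(status) -> None:
        # `status` is a `DebugStatus`-like namespace with a
        # `.req_err: BaseException|None = None` and a `.release()`
        # which resets all request state.
        try:
            ...  # drive the root-actor stdio-lock IPC ctx here
        except BaseException as err:
            log.exception('Internal error in stdio-lock request task!')
            status.req_err = err
        finally:
            status.release()
            if status.req_err is not None:
                raise DebugRequestErrorDemo(
                    'Failed to request the root stdio-lock!?'
                ) from status.req_err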
Supporting internal impl changes,
- add `DebugStatus.cancel()` and `.req_err`.
- never cancel the request task from `PdbREPL.set_[continue/quit]()`;
only do so when there's some internal error that would likely result
in a hang and stale lock state with the root.
- only release the root's lock when the current task is also the owner
(avoids bad release errors).
- also show internal `._pause()`-related frames on any `repl_err`.
Other temp-dev-tweaks,
- make pld-dec change log msgs info level again while solving this
final context-vars race stuff..
- drop the debug pld-dec instance match asserts for now since
the problem is already caught (and now debug-able B) by an attr-error
on the decoded-as-`dict` started msg, and instead add in
a `log.exception()` trace to see which task is triggering the case
where the debug `MsgDec` isn't set correctly vs. when we think it's
being applied.
Since obviously the thread is likely expected to halt and raise after
the REPL session exits; this was a regression from the prior impl. The
main reason for this is that otherwise the request task will never
unblock if the user steps through the crashed task using 'next', since
the `.do_next()` handler doesn't release the request by default (in the
`.pause()` case that would end the session too early).
Other,
- toss in draft `Pdb.user_exception()`, though doesn't seem to ever
trigger?
- only release `Lock._debug_lock` when already locked.
- start tossing in `__tracebackhide__`s to various eps which don't need
to show in tbs or in the pdb REPL.
- port final `._maybe_enter_pm()` to pass an `api_frame`.
- start comment-marking up some API eps with `@api_frame`
in prep for actually using the new frame-stack tracing.
Woops, due to a `None` test against the `._final_result`, any actual
final `None` result would be received but not acked as such causing
a spawning test to hang. Fix it by instead receiving and assigning both
a `._final_result_msg: PayloadMsg` and `._final_result_pld`.
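A minimal repro of the bug-class (hypothetical attrs, not the actual `Portal` fields): using `None` both as "no result yet" and as a legit payload value means a real `None` return can never be acked; tracking the result *msg* (or a dedicated sentinel) side-steps it:

    class Unresolved:
        'sentinel: no result msg received yet'

    class PortalDemo:
        _final_result_msg: object|None = None   # the `Return`-like msg
        _final_result_pld: object = Unresolved   # the decoded payload

        def _ack_final(self, msg, pld) -> None:
            self._final_result_msg = msg
            self._final_result_pld = pld

        @property
        def resolved(self) -> bool:
            # the buggy version tested `._final_result_pld is not None`
            # which reports "still waiting" for a legit `None` payload..
            return self._final_result_msg is not None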
NB: as mentioned in many recent comments surrounding this API layer,
really this whole `Portal`-has-final-result interface/semantics should
be entirely removed as should the `ActorNursery.run_in_actor()` API(s).
Instead it should all be replaced by a wrapping "high level" API
(`tractor.hilevel` ?) which combines a task nursery, `Portal.open_context()`
and underlying `Context` APIs + an `outcome.Outcome` to accomplish the
same "run a single task in a spawned actor and return it's result"; aka
a "one-shot-task-actor".
- inside the `Actor.cancel()`'s maybe-wait-on-debugger delay,
report the full debug request status and its affiliated lock request
IPC ctx.
- use the new `.req_ctx.chan.uid` to do the local nursery lookup during
channel teardown handling.
- another couple log fmt tweaks.
Proto-ing a little suite of call-stack-frame annotation-for-scanning
sub-systems for the purposes of both,
- the `.devx._debug`er and its
traceback and frame introspection needs when entering the REPL,
- detailed trace-style logging such that we can explicitly report
on "which and where" `tractor`'s APIs are used in the "app" code.
Deats:
- change mod name obvi from `._code` and adjust client mod imports.
- using `wrapt` (for perf) implement a `@api_frame` annot decorator
which both stashes per-call-stack-frame instances of `CallerInfo` in
a table and marks the function such that API endpoints can be easily
found via runtime stack scanning despite any internal impl changes.
- add a global `_frame2callerinfo_cache: dict[FrameType, CallerInfo]`
table for providing the per func-frame info caching.
- Re-implement `CallerInfo` to require fewer (types of) inputs:
|_ `_api_func: Callable`, a ref to the (singleton) func def.
|_ `_api_frame: FrameType` taken from the `@api_frame` marked `tractor`-API
func's runtime call-stack, from which we can determine the
app code's `.caller_frame`.
|_`_caller_frames_up: int|None` allowing the specific `@api_frame` to
determine "how many frames up" the application / calling code is.
And, a better set of derived attrs:
|_`caller_frame: FrameType` which finds and caches the API-eps calling
frame.
- add a new attempt at "getting a method ref from its runtime frame"
with `get_ns_and_func_from_frame()` using a heuristic that the
`CodeType.co_qualname: str` should have a "." in it for methods.
- main issue is still that the func-ref lookup will require searching
for the method's instance type by name, and that name isn't
guaranteed to be defined in any particular ns..
|_rn we try to read it from the `FrameType.f_locals` but that is
going to obvi fail any time the method is called in a module where
its type is not also defined/imported.
- returns both the ns and the func ref FYI.
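Very roughly (and with `functools.wraps` standing in for the `wrapt`-based marking, plus hypothetical cache/marker names) the `@api_frame` idea is:

    import sys
    from functools import wraps
    from types import FrameType
    from typing import Any, Callable

    # per-call cache akin to `_frame2callerinfo_cache`
    _frame2callerinfo: dict[FrameType, dict[str, Any]] = {}

    def api_frame_demo(func: Callable) -> Callable:

        @wraps(func)
        def _marked(*args, **kwargs):
            # with a plain wrapper the app (caller) code is 1 frame up;
            # the real impl uses `wrapt` so the offset/perf differ.
            caller: FrameType = sys._getframe(1)
            _frame2callerinfo[caller] = {
                'api_func': func,
                'caller_frame': caller,
            }
            return func(*args, **kwargs)

        # mark the ep so runtime stack-scanning can find API frames
        # despite internal impl changes.
        _marked._is_api_frame = True
        return _marked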
- set `._state._ctxvar_Context` just after `StartAck` inside
`open_context_from_portal()` so that `current_ipc_ctx()` always
works on the 'parent' side.
- always set `.canceller` to any `MsgTypeError.src_uid` and otherwise to
any maybe-detected `.src_uid` (i.e. for RAEs).
- always set `.canceller` to us when we rx a ctxc which reports us as
its canceller; this is a sanity check on definite "self cancellation".
- adjust `._is_self_cancelled()` logic to only be `True` when
`._remote_error` is both a ctxc with a `.canceller` set to us AND
when `Context.canceller` is also set to us (since the change above)
as a little bit of extra rigor.
- fill-in/fix some `.repr_state` edge cases:
- merge self-vs.-peer ctxc cases to one block and distinguish via
nested `._is_self_cancelled()` check.
- set 'errored' for all exception matched cases despite `.canceller`.
- add pre-`Return` phase statuses:
|_'pre-started' and 'syncing-to-child' depending on side and when
`._stream` has not (yet) been set.
|_'streaming' and 'streaming-finished' depending on side when
`._stream` is set and whether it was stopped/closed.
- tweak drainage log-message to use "outcome" instead of "result".
- use new `.devx.pformat.pformat_cs()` inside `_maybe_cancel_and_set_remote_error()`
but, IFF the log level is at least 'cancel'.
Since I was running into them (internal errors) during lock request
machinery dev and was getting all sorts of difficult to understand hangs
whenever I intro-ed a bug to either side of the ipc ctx; this all while
trying to get the msg-spec working for `Lock` requesting subactors..
Deats:
- hideframes for `@acm`s and `trio.Event.wait()`, `Lock.release()`.
- better detail out the `Lock.acquire/release()` impls
- drop `Lock.remote_task_in_debug`, use new `.ctx_in_debug`.
- add a `Lock.release(force: bool)`.
- move most of what was `_acquire_debug_lock_from_root_task()` and some
of the `lock_tty_for_child().__a[enter/exit]()` logic into
`Lock.[acquire/release]()` including bunch more logging.
- move `lock_tty_for_child()` up in the module to below `Lock`, with
some rework:
- drop `subactor_uid: tuple` arg since we can just use the `ctx`..
- add exception handler blocks for reporting internal (impl) errors
and always force release the lock in such cases.
- extend `DebugStatus` (prolly will rename to `DebugRequest` btw):
- add `.req_ctx: Context` for subactor side.
- add `.req_finished: trio.Event` to sub to signal request task exit.
- extend `.shield_sigint()` doc-str.
- add `.release()` to encaps all the state mgmt previously strewn
about inside `._pause()`..
- use new `DebugStatus.release()` to replace all the duplication:
- inside `PdbREPL.set_[continue/quit]()`.
- inside `._pause()` for the subactor branch on internal
repl-invocation error cases,
- in the `_enter_repl_sync()` closure on error,
- replace `apply_debug_codec()` -> `apply_debug_pldec()` in tandem with
the new `PldRx` sub-sys which handles the new `__pld_spec__`.
- add a new `pformat_cs()` helper orig to help debug cs stack
a corruption; going to move to `.devx.pformat` obvi.
- rename `wait_for_parent_stdin_hijack()` -> `request_root_stdio_lock()`
with improvements:
- better doc-str and add todos,
- use `DebugStatus` more stringently to encaps all subactor req state.
- error handling blocks for cancellation and straight up impl errors
directly around the `.open_context()` block with the latter doing
a `ctx.cancel()` to avoid hanging in the shielded `.req_cs` scope.
- similar exc blocks for the func's overall body with explicit
`log.exception()` reporting.
- only set the new `DebugStatus.req_finished: trio.Event` in `finally`.
- rename `mk_mpdb()` -> `mk_pdb()` and don't call `.shield_sigint()`
implicitly since the caller usage does matter for this.
- factor out `any_connected_locker_child()` from the SIGINT handler.
- rework SIGINT handler to better handle any stale-lock/hang cases:
- use new `Lock.ctx_in_debug: Context` to detect a subactor-in-debug
and use it to cancel any lock request instead of the lower level cancel
scope directly.
- use `problem: str` summary approach to log emissions.
- rework `_pause()` given all of the above, stuff not yet mentioned:
- don't take `shield: bool` input and proxy to `debug_func()` (for now).
- drop `extra_frames_up_when_async: int` usage, expect
`**debug_func_kwargs` to passthrough an `api_frame: FrameType` (more
on this later).
- lotsa asserts around the request ctx vs. task-in-debug ctx using new
`current_ipc_ctx()`.
- asserts around `DebugStatus` state.
- rework and simplify the `debug_func` hooks,
`_set_trace()`/`_post_mortem()`:
- make them accept a non-optional `repl: PdbRepl` and `api_frame:
FrameType` which should be used to set the current frame when the
REPL engages.
- always hide the hook frames.
- always accept a `tb: TracebackType` to `_post_mortem()`.
|_ copy and re-impl what was the delegation to
`pdbp.xpm()`/`pdbp.post_mortem()` and instead call the
underlying `Pdb.interaction()` ourselves with a `caller_frame`
and tb instance (see the `Pdb.interaction()` sketch further below).
- adjust the public `.pause()` impl:
- accept optional `hide_tb` and `api_frame` inputs.
- mask opening a cancel-scope for now (can cause `trio` stack
corruption, see notes) and thus don't use the `shield` input other
than to eventually passthrough to `_post_mortem()`?
|_ thus drop `task_status` support for now as well.
|_ pretty sure correct soln is a debug-nursery around `._invoke()`.
- since no longer using `extra_frames_up_when_async` inside
`debug_func()`s ensure all public apis pass an `api_frame`.
- re-impl our `tractor.post_mortem()` to directly call into `._pause()`
instead of binding in via `partial` and mk it take similar input as
`.pause()`.
- drop `Lock.release()` from `_maybe_enter_pm()`, expose and pass
expected frame and tb.
- use necessary changes from all the above within
`maybe_wait_for_debugger()` and `acquire_debug_lock()`.
Lel, sorry thought that would be shorter..
There's still a lot more re-org to do particularly with `DebugStatus`
encapsulation but it's coming in follow up.
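For the `_post_mortem()` bit mentioned above, the "call `Pdb.interaction()` ourselves" part is more or less (hypothetical sig, stdlib `pdb.Pdb` standing in for our `PdbREPL` subtype):

    import pdb
    from types import FrameType, TracebackType

    def post_mortem_demo(
        repl: pdb.Pdb,        # our `PdbREPL` is a (pdbp) subtype of this
        tb: TracebackType,
        api_frame: FrameType,
    ) -> None:
        # frames of this hook should never show in the REPL/tb
        __tracebackhide__: bool = True

        # instead of delegating to `pdbp.xpm()`/`pdbp.post_mortem()`,
        # drive the underlying stdlib machinery directly: land on the
        # caller's frame and walk the provided tb.
        repl.reset()
        repl.interaction(api_frame, tb)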
Since we need to allow it (at the least) inside
`drain_until_final_msg()` for handling stream-phase termination races
where we don't want to have to handle a raised error from something like
`Context.result()`. Expose the passthrough option via
a `passthrough_non_pld_msgs: bool` kwarg.
Add comprehensive comment to `current_pldrx()`.
Expose it from `._state.current_ipc_ctx()` and set it inside
`._rpc._invoke()` for child and inside `Portal.open_context()` for
parent.
Still need to write a few more tests (particularly demonstrating usage
throughout multiple nested nurseries on each side) but this suffices as
a proto for testing with some debugger request-from-subactor stuff.
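The getter/setter pair is just a vanilla `ContextVar` under the hood; a self-contained sketch (hypothetical names mirroring `._state._ctxvar_Context` / `current_ipc_ctx()`):

    from contextvars import ContextVar

    _ctxvar_ipc_ctx: ContextVar[object|None] = ContextVar(
        'ipc_ctx',
        default=None,
    )

    def current_ipc_ctx_demo() -> object|None:
        # any task-code nested under an IPC ctx can grab it without
        # threading it through every call sig.
        return _ctxvar_ipc_ctx.get()

    def set_ipc_ctx_demo(ctx) -> None:
        # called in `._rpc._invoke()` on the child side and inside
        # `Portal.open_context()` (after the `StartAck`) on the parent.
        _ctxvar_ipc_ctx.set(ctx)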
Other,
- use new `.devx.pformat.add_div()` for ctxc messages.
- add a block to always traceback dump on corrupted cs stacks.
- better handle non-RAEs exception output-formatting in context
termination summary log message.
- use a summary for `start_status` for msg logging in RPC loop.
Since we usually want them raised from some (internal) call to
`Context.maybe_raise()` and NOT directly from the drainage call, make it
possible via a new `raise_error: bool` to both `PldRx.recv_msg_w_pld()`
and `.dec_msg()`.
In support,
- rename `return_msg` -> `result_msg` since we expect to return
`Error`s.
- do a `result_msg` assign and `break` in the `case Error()`.
- add `**dec_msg_kwargs` passthrough for other `.dec_msg()` calling
methods.
Other,
- drop/aggregate todo-notes around the main loop's
`ctx._pld_rx.recv_msg_w_pld()` call.
- add (configurable) frame hiding to most payload receive meths.
Since `._code` is prolly gonna get renamed (to something "frame & stack
tools" related) and to give a bit better organization.
Also adds a new `add_div()` helper, factored out of ctxc message
creation in `._rpc._invoke()`, for adding a little "header line" divider
under a given `message: str` with a little math to center it.
For more sane manual calls as needed for logging purposes. Obvi remap
the dunder methods to it.
Other:
- drop `hide_tb: bool` from `unpack_error()`, shouldn't need it since
frame won't ever be part of any tb raised from returned error.
- add an `is_invalid_payload: bool` to `_raise_from_unexpected_msg()` to
be used from `PldRx` where we don't need to decode the IPC
msg, just the payload; make the error message reflect this case.
- drop commented `._portal._unwrap_msg()` since we've replaced it with
`PldRx`'s delegation to newer `._raise_from_unexpected_msg()`.
- hide the `Portal.result()` frame by default, again.
A better spot for the pretty-formatting of frame text (and thus tracebacks)
is in the new `.devx._code` module:
- move from `._exceptions` -> `.devx._code.pformat_boxed_tb()`.
- add new `pformat_caller_frame()` factored out the use case in
`._exceptions._mk_msg_type_err()` where we dump a stack trace
for bad `.send()` side IPC msgs.
Add some new pretty-format methods to `Context`:
- explicitly implement `.pformat()` and allow an `extra_fields: dict`
which can be used to inject additional fields (maybe eventually by
default) such as is now used inside
`._maybe_cancel_and_set_remote_error()` when reporting the internal
`._scope` state in cancel logging.
- add a new `.repr_state -> str` which provides a single string status
depending on the internal state of the IPC ctx in terms of the shuttle
protocol's "phase"; use it from `.pformat()` for the `|_state:`.
- set `.started(complain_no_parity=False)` since we now presume decoding
with `.pld: Raw` under the new `PldRx` design.
- use new `msgops.current_pldrx()` in `mk_context()`.
Not sure it's **that** useful (yet) but in theory would allow avoiding
certain log level usage around transient RPC requests for discovery methods
(like `.register_actor()` and friends); can't hurt to be able to
introspect that last message for other future cases I'd imagine as well.
Adjust the calling code in `._runtime` to match; other spots are using
the `trio.Nursery.start()` schedule style and are fine as is.
Improve a bunch more log messages throughout a few mods mostly by going
to a "summary" single-emission style where possible/appropriate:
- in `._runtime` more "single summary" status style log emissions:
|_mk `Actor.load_modules()` render a single mod loaded summary.
|_use a summary `con_status: str` for `Actor._stream_handler()` conn
setup and an equiv (`con_teardown_status`) for connection teardowns.
|_similar thing in `Actor.wait_for_actor()`.
- generally more usage of `.msg.pretty_struct` apis throughout `._runtime`.
Add new task-scope oriented `PldRx.pld_spec` management API similar to
`.msg._codec.limit_msg_spec()`, but obvi built to process and filter
`MsgType.pld` values.
New API related changes include:
- new per-task singleton getter `msg._ops.current_pldrx()` which
delivers the current (global) payload receiver via a new
`_ctxvar_PldRx: ContextVar` configured with a default
`_def_any_pldec: MsgDec[Any]` decoder.
- a `PldRx.limit_plds()` which sets the decoder (`.type` underneath)
for the specific payload rx instance.
- `.msg._ops.limit_plds()` which obtains the current task-scoped `PldRx`
and applies the pld spec via a new `PldRx.limit_plds()`.
- rename `PldRx._msgdec` -> `._pldec`.
- add `.pld_dec` as pub attr for -^
Unrelated adjustments:
- use `.msg.pretty_struct.pformat()` where handy.
- always pass `expect_msg: MsgType`.
- add a `case Stop()` to `PldRx.dec_msg()` which will `log.warning()`
when a stop is received but no stream was open on this receiving side;
we rarely want that to raise since it's prolly just a runtime race or
mistake in user code.
Particularly when logging around `MsgTypeError`s.
Other:
- make `_raise_from_unexpected_msg()`'s `expect_msg` a non-default value
arg, must always be passed by caller.
- drop `'canceller'` from `_body_fields` ow it shows up twice for ctxc.
- use `.msg.pretty_struct.pformat()`.
- parameterize `RemoteActorError.reprol()` (repr-one-line method) to
show `RemoteActorError[<self.boxed_type_str>]( ..` to make obvi
the boxed remote error type.
- re-impl `.boxed_type_str` as `str`-casting the `.boxed_type` value
which is guaranteed to render non-`None`.
Basically exact same as that for `MsgCodec` with the `.spec` displayed
via a better (maybe multi-line) `.spec_str: str` generated from a common
new set of helper mod funcs factored out of msg-codec meths:
- `mk_msgspec_table()` to gen a `MsgType` name -> msg table.
- `pformat_msgspec()` to `str`-ify said table values nicely.
Also add a new `MsgCodec.msg_spec_str: str` prop which delegates to the
above for the same.
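For flavour, the table/pformat helpers amount to roughly the below (hypothetical standalone versions; the real ones live on/next to the codec and also support singling out one `MsgType`):

    from typing import get_args

    def mk_msgspec_table_demo(spec: type) -> dict[str, type]:
        # a `Union[...]` spec maps to one row per msg type
        msg_types = get_args(spec) or (spec,)
        return {t.__name__: t for t in msg_types}

    def pformat_msgspec_demo(spec: type) -> str:
        return '\n'.join(
            f'{name} -> {typ!r}'
            for name, typ in mk_msgspec_table_demo(spec).items()
        )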
- tweaking logging to include more `MsgType` dumps on IPC faults.
- removing some commented cruft.
- comment formatting / cleanups / add-ons.
- more type annots.
- fill out some TODO content.
As per the reco:
https://jcristharif.com/msgspec/perf-tips.html#reusing-an-output-buffer
BUT, seems to cause this error in `pikerd`..
`BufferError: Existing exports of data: object cannot be re-sized`
Soo no idea? Maybe there's a tweak needed that we can glean from
tests/examples in the `msgspec` repo?
Disabling for now.
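For reference, the (now disabled) buffer reuse looks like the sketch below, and one plausible way to hit that exact `BufferError` is some caller holding a live `memoryview` export of the shared `bytearray` across encodes, which blocks the resize that `encode_into()` needs to do:

    import msgspec

    enc = msgspec.msgpack.Encoder()
    buf = bytearray(64)  # reused output buffer per the perf-tips doc

    def encode_reusing_buf(obj) -> memoryview:
        # `encode_into()` grows/resizes `buf` in place as needed; a
        # `bytearray` can NOT be resized while any export (eg. a live
        # `memoryview` from a prior call) still exists -> `BufferError`.
        enc.encode_into(obj, buf)
        return memoryview(buf)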
Instead of expecting it to be passed in (as it was prior), when
determining if a `Stop` msg is a valid end-of-channel signal use the
`ctx._stream: MsgStream|None` attr which **must** be set by any stream
opening API; either of:
- `Context.open_stream()`
- `Portal.open_stream_from()`
Adjust the case block logic to match with fallthrough from any EoC to
a closed error if necessary. Change the `_type: str` to match the
failing IPC-prim name in the tail case where we raise a `MessagingError`.
Other:
- move `.sender: tuple` uid attr up to `RemoteActorError` since `Error`
optionally defines it as a field and for boxed `StreamOverrun`s (an
ignore case we check for in the runtime during cancellation) we want
it readable from the boxing rae.
- drop still unused `InternalActorError`.
As per much tinkering, re-designs and preceding rubber-ducking via many
"commit msg novelas", **finally** this adds the (hopefully) final
missing layer for typed msg safety: `tractor.msg._ops.PldRx`
(or `PayloadReceiver`? haven't decided how verbose to go..)
Design justification summary:
------ - ------
- need a way to be as-close-as-possible to the `tractor`-application
such that when `MsgType.pld: PayloadT` validation takes place, it is
straightforward and obvious how user code can decide to handle any
resulting `MsgTypeError`.
- there should be a common and optional-yet-modular way to modify
**how** data delivered via IPC (possibly embedded as user defined,
type-constrained `.pld: msgspec.Struct`s) can be handled and processed
during fault conditions and/or IPC "msg attacks".
- support for nested type constraints within a `MsgType.pld` field
should be simple to define, implement and understand at runtime.
- a layer between the app-level IPC primitive APIs
(`Context`/`MsgStream`) and application-task code (consumer code of
those APIs) should be easily customized and prove-to-be-as-such
through demonstrably rigorous internal (sub-sys) use!
-> eg. via seamless runtime RPC eps support like `Actor.cancel()`
-> by correctly implementing our `.devx._debug.Lock` REPL TTY mgmt
dialog prot, via a dead simple payload-as-ctl-msg-spec.
There are some fairly detailed doc strings included so I won't duplicate
that content, the majority of the work here is actually somewhat of
a factoring of many similar blocks that are doing more or less the same
`msg = await Context._rx_chan.receive()` with boilerplate for
`Error`/`Stop` handling via `_raise_from_no_key_in_msg()`. The new
`PldRx` basically provides a shim layer for this common "receive msg,
decode its payload, yield it up to the consuming app task" by pairing
the RPC feeder mem-chan with a msg-payload decoder and expecting IPC API
internals to use **one** API instead of re-implementing the same pattern
all over the place XD
`PldRx` breakdown
------ - ------
- for now only expects a `._msgdec: MsgDec` which allows for
override-able `MsgType.pld` validation and most obviously used in
the impl of `.dec_msg()`, the decode message method.
- provides multiple mem-chan receive options including:
|_ `.recv_pld()` which does the e2e operation of receiving a payload
item.
|_ a sync `.recv_pld_nowait()` version.
|_ a `.recv_msg_w_pld()` which optionally allows retrieving both the
shuttling `MsgType` as well as its `.pld` body for use cases where
info on both is important (eg. draining a `MsgStream`).
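A toy, self-contained version of that shim (hypothetical field/msg shapes; a plain `msgspec.msgpack.Decoder` standing in for the `MsgDec` wrapper):

    from typing import Any
    import msgspec
    import trio

    class PldRxDemo:
        def __init__(self, pld_dec: msgspec.msgpack.Decoder):
            self._pld_dec = pld_dec  # applies the `MsgType.pld` spec

        def limit_plds(self, spec: type) -> msgspec.msgpack.Decoder:
            # swap in a (narrower) payload spec, hand back the prior
            # decoder so callers can revert it.
            prior = self._pld_dec
            self._pld_dec = msgspec.msgpack.Decoder(type=spec)
            return prior

        async def recv_pld(
            self,
            rx_chan: trio.MemoryReceiveChannel,
        ) -> Any:
            # the "receive msg, decode its payload, yield it up"
            # boilerplate factored into one spot; `msg.pld` is assumed
            # to be raw (msgpack) bytes here.
            msg = await rx_chan.receive()
            return self._pld_dec.decode(msg.pld)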
Dirty internal changeover/implementation deatz:
------ - ------
- obvi move over all the IPC "primitives" that previously had the duplicate recv-n-yield
logic:
- `MsgStream.receive[_nowait]()` delegating instead to the equivalent
`PldRx.recv_pld[_nowait]()`.
- add `Context._pld_rx: PldRx`, created and passed in by
`mk_context()`; use it for the `.started()` -> `first: Started`
retrieval inside `open_context_from_portal()`.
- all the relevant `Portal` invocation methods: `.result()`,
`.run_from_ns()`, `.run()`; also allows for dropping `_unwrap_msg()`
and `Portal._return_once()` outright Bo
- rename `Context._recv_chan` -> `._rx_chan`.
- add detailed `Context._scope` info for logging whether or not it's
cancelled inside `_maybe_cancel_and_set_remote_error()`.
- move `._context._drain_to_final_msg()` -> `._ops.drain_to_final_msg()`
since it's really not necessarily ctx specific per se, and it does
kinda fit with "msg operations" more abstractly ;)
In prep for a "payload receiver" abstraction that will wrap
`MsgType.pld`-IO delivery from `Context` and `MsgStream`, adds a small
`msgspec.msgpack.Decoder` shim which delegates an API similar to
`MsgCodec` and is offered via a `.msg._codec.mk_dec()` factory.
Detalles:
- move over the TODOs/comments from `.msg.types.Start` to
`MsgDec.spec` since it's probably the ideal spot to start thinking
about it from a consumer code PoV.
- move codec reversion assert and log emit into `finally:` block.
- flip default `.types._tractor_codec = mk_codec_ipc_pld(ipc_pld_spec=Raw)`
in prep for always doing payload-delayed decodes.
- make `MsgCodec._dec` private with public property getter.
- change `CancelAck` to NOT derive from `Return` so it's mutex in
`match/case:` handling.
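The `CancelAck` tweak is worth a tiny illustration since it's easy to trip on: `match/case` class patterns are `isinstance()` checks, so while it still subclassed `Return` a `case Return():` arm listed first would also swallow cancel-acks (toy classes below, not the real msg types):

    class ReturnDemo:
        def __init__(self, pld=None):
            self.pld = pld

    class CancelAckDemo(ReturnDemo):  # the pre-change, subclassing version
        pass

    def route(msg) -> str:
        match msg:
            case ReturnDemo():
                return 'user-result'          # <- also matches subclasses!
            case CancelAckDemo():
                return 'runtime-cancel-ack'   # unreachable while subclassed

    assert route(CancelAckDemo()) == 'user-result'  # the surprising bit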
Since it's going to be used from the IPC primitive APIs
(`Context`/`MsgStream`) for similarly handling payload type spec
validation errors and bc it's really not well situated in the IPC
module XD
Summary of (impl) tweaks:
- obvi move `_mk_msg_type_err()` and import and use it in `._ipc`; ends
up avoiding a lot of ad-hoc imports we had from `._exceptions` anyway!
- mask out "new codec" runtime log emission from `MsgpackTCPStream`.
- allow passing a (coming in next commit) `codec: MsgDec` (message
decoder) which supports the same required `.pld_spec_str: str` attr.
- for send side logging use existing `MsgCodec.pformat_msg_spec()`.
- rename `_raise_from_no_key_in_msg()` to the now more appropriate
`_raise_from_unexpected_msg()`, but leaving alias for now.
Turns out we do want per-task inheritance particularly if there's to be
per `Context` dynamic mutation of the spec; we don't want mutation in
some task to affect any parent/global setting.
Turns out since we use a common "feeder task" in the rpc loop, we need to
offer a per `Context` payload decoder sys anyway in order to enable
per-task controls for inter-actor multi-task-ctx scenarios.
As per some newly added features and APIs:
- pass `portal: Portal` to `Actor.start_remote_task()` from
`open_context_from_portal()` marking `Portal.open_context()` as
always being the "parent" task side.
- add caller tracing via `.devx._code.CallerInfo/.find_caller_info()`
called in `mk_context()` and (for now) a `__runtimeframe__: int = 2`
inside `open_context_from_portal()` such that any enter-er of
`Portal.open_context()` will be reported.
- pass in a new `._caller_info` attr which is used in 2 new meths:
- `.repr_caller: str` for showing the name of the app-code-func.
- `.repr_api: str` for showing the API ep, which for now we just
hardcode to `Portal.open_context()` since ow its gonna show the mod
func name `open_context_from_portal()`.
- use those new props ^ in the `._deliver_msg()` flow body log msg
content for much clearer msg-flow tracing Bo
- add `Context._cancel_on_msgerr: bool` to toggle whether
a delivered `MsgTypeError` should trigger a `._scope.cancel()` call.
- also (temporarily) add separate `.cancel()` emissions for both cases
as i work through hacking out the maybe `MsgType.pld: Raw` support.
Starting with a little sub-sys for tracing caller frames by marking them
with a dunder var (`__runtimeframe__` by default) and then scanning for
that frame such that code that is *calling* our APIs can be reported
easily in logging / tracing output.
New APIs:
- `find_caller_info()` which does the scan and delivers a,
- `CallerInfo` which (attempts) to expose both the runtime frame-info
and frame of the caller func along with `NamespacePath` properties.
Probably going to re-implement the dunder var bit as a decorator later
so we can bind in the literal func-object ref instead of trying to look
it up with `get_class_from_frame()`, since it's kinda hacky/non-general
and def doesn't work for closure funcs..
Breaks out all the (sub)actor local conc primitives from `Lock` (which
is now only used in and by the root actor) such that there's an explicit
distinction between a task that's "consuming" the `Lock` (remotely) vs.
the root-side service tasks which do the actual acquire on behalf of the
requesters.
`DebugStatus` changeover deats:
------ - ------
- move all the actor-local vars over `DebugStatus` including:
- move `_trio_handler` and `_orig_sigint_handler`
- `local_task_in_debug` now `repl_task`
- `_debugger_request_cs` now `req_cs`
- `local_pdb_complete` now `repl_release`
- drop all ^ fields from `Lock.repr()` obvi..
- move over the `.[un]shield_sigint()` and
`.is_main_trio_thread()` methods.
- add some new attrs/meths:
- `DebugStatus.repl` for the currently running `Pdb` in-actor
singleton.
- `.repr()` for pprint of state (like `Lock`).
- Note: that even when a root-actor task is in REPL, the `DebugStatus`
is still used for certain actor-local state mgmt, such as SIGINT
handler shielding.
- obvi change all lock-requester code bits to now use a `DebugStatus` in
their local actor-state instead of `Lock`, i.e. change usage from
`Lock` in `._runtime` and `._root`.
- use new `Lock.get_locking_task_cs()` API when checking for
sub-in-debug from `._runtime.Actor._stream_handler()`.
Unrelated to topic-at-hand tweaks:
------ - ------
- drop the commented bits about hiding `@[a]cm` stack frames from
`_debug.pause()` and simplify to only one block with the `shield`
passthrough since we already solved the issue with cancel-scopes using
`@pdbp.hideframe` B)
- this includes all the extra logging about the extra frame for the
user (good thing I put in that wasted effort back then eh..)
- put the `try/except BaseException` with `log.exception()` around the
whole of `._pause()` to ensure we don't miss in-func errors which can
cause hangs..
- allow passing in `portal: Portal` to
`Actor.start_remote_task()` such that `Portal` task spawning methods
are always denoted correctly in terms of `Context.side`.
- lotsa logging tweaks, decreasing a bit of noise from `.runtime()`s.
Since it's totes possible to have a spec applied that won't permit
`str`s, might as well formalize a small msg set for subactors to request
the tree-wide TTY `Lock`.
BTW, I'm prolly not going into every single change here in this first
WIP since there's still a variety of broken stuff mostly to do with
races on the codec apply being done in a `trio.lowlevel.RunVar`; it
should be re-done with a `ContextVar` such that each task does NOT
mutate the global setting..
New msg set and usage is simply:
- `LockStatus` which is the response msg delivered from `lock_tty_for_child()`
- `LockRelease` a one-off request msg from the subactor to drop the
`Lock` from a `MsgStream.send()`.
- use these msgs throughout the root and sub sides of the locking
ctx funcs: `lock_tty_for_child()` & `wait_for_parent_stdin_hijack()`
The codec is now applied in both the root and sub `Lock` request tasks:
- for root inside `lock_tty_for_child()` before the `.started()`.
- for subs, inside `wait_for_parent_stdin_hijack()` since we only want
to affect the codec *for the locking task*.
- (hence the need for ctx-var as mentioned above but currently this
can cause races which will break against other app tasks competing
for the codec setting).
- add an `apply_debug_codec()` helper for use in both cases.
- add more detailed logging to both the root and sub side of `Lock`
requesting funcs including requiring that the sub-side task "uid" (a
`tuple[str, int] = (trio.Task.name, id(trio.Task))`) be provided (more
on this later).
A main issue discovered while proto-testing all this was the ability of
a sub to "double lock" (leading to self-deadlock) via an error in
`wait_for_parent_stdin_hijack()` which, for ex., can happen in debug
mode via crash handling of a `MsgTypeError` received from the root
during a codec applied msg-spec race! Originally I was attempting to
solve this by making the SIGINT override handler more resilient but this
case is somewhat impossible to detect by an external root task other
than checking for duplicate ownership via the new `subactor_task_uid`.
=> SO NOW, we always stick the current task uid in the
`Lock._blocked: set` and raise an rte on a double request by the same
remote task.
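The guard itself is tiny; roughly (hypothetical `Lock`-ish holder):

    class LockDemo:
        # task uids ((trio.Task.name, id(trio.Task)) tuples) with an
        # in-flight lock request.
        _blocked: set[tuple[str, int]] = set()

        @classmethod
        def check_in(
            cls,
            subactor_task_uid: tuple[str, int],
        ) -> None:
            if subactor_task_uid in cls._blocked:
                raise RuntimeError(
                    f'Task {subactor_task_uid} already has a pending '
                    'lock request?!\n'
                    'A second one would self-deadlock the subactor..'
                )
            cls._blocked.add(subactor_task_uid)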
Included is a variety of small refinements:
- finally figured out how to mark a variety of `.__exit__()` frames with
`pdbp.hideframe()` to actually hide them B)
- add cls methods around managing `Lock._locking_task_cs` from root only.
- re-org all the `Lock` attrs into those only used in root vs. subactors
and proto-prep a new `DebugStatus` actor-singleton to be used in subs.
- add a `Lock.repr()` to contextually print the current conc primitives.
- rename our `Pdb`-subtype to `PdbREPL`.
- rigor out the SIGINT handler a bit, originally to try and hack-solve
the double-lock issue mentioned above, but now just with better
logging and logic for most (all?) possible hang cases that should be
hang-recoverable after enough ctrl-c mashing by the user.. well
hopefully:
- using `Lock.repr()` for both root and sub cases.
- lots more `log.warn()`s and handler reversions on stale lock or cs
detection.
- factor `._pause()` impl a little better moving the actual repl entry
to a new `_enter_repl_sync()` (originally for easier wrapping in the
sub case with `apply_codec()`).
Since obvi we don't want to just only see the trace in the root most of
the time ;)
Currently the sig keeps firing twice in the root though, and i'm not
sure why yet..
Such that the top level `maybe_enable_greenback` from
`open_root_actor()` can toggle the entire actor tree's usage.
Read the rtv in `._rpc` tasks and only enable if set.
Also, rigor up the `._rpc.process_messages()` loop to handle `Error()`
and `case _:` separately such that we now raise an explicit rte for
unknown / invalid msgs. Use "parent" / "child" for side descriptions in
loop comments and put a fat comment before the `StartAck` in `_invoke()`.
Instead of `allow_msg_keys` since we've fully flipped over to
struct-types for msgs in the runtime.
- drop the loop from `MsgStream.receive_nowait()` since
`Yield/Return.pld` getting will handle both (instead of a loop of
`dict`-key reads).
Which matches with renaming `.payload_msg` -> `.expected_msg` which is
the value we attempt to construct from a vanilla-msgpack
decode-to-`dict` and then construct manually into a `MsgType` using
`.msg.types.from_dict_msg()`. Add a todo to use new `use_pretty` flag
which currently conflicts with `._exceptions.pformat_boxed_type()`
prefix formatting..
Add a bit of special handling for msg-type-errors with a dedicated
log-msg detailing which `.side: str` is the sender/causer and avoiding
a `._scope.cancel()` call in such cases since the local task might be
written to handle and tolerate the badly (typed) IPC msg.
As part of ^, change the ctx task-pair "side" semantics from "caller" ->
"callee" to be "parent" -> "child" which better matches the
cross-process SC-linked-task supervision hierarchy, and
`trio.Nursery.parent_task`; in `trio` the task that opens a nursery is
also named the "parent".
Impl deats / fixes around the `.side` semantics:
- ensure that `._portal: Portal` is set ASAP after
`Actor.start_remote_task()` such that if the `Started` transaction
fails, the parent-vs.-child sides are still denoted correctly (since
`._portal` being set is the predicate for that).
- add a helper func `Context.peer_side(side: str) -> str:` which inverts
from "child" to "parent" and vice versa, useful for logging info.
Other tweaks:
- make `_drain_to_final_msg()` return a tuple of a maybe-`Return` and
the list of other `pre_result_drained: list[MsgType]` such that we
don't ever have to warn about the return msg getting captured as
a pre-"result" msg.
- Add some strictness flags to `.started()` which allow for toggling
whether to error or warn log about mismatching roundtripped `Started`
msgs prior to IPC transit.
Display the new `MsgCodec.pld_spec_str` and format the incorrect field
value to be placed entirely (txt block wise) right of the "type annot"
part of the line:
Iow if you had a bad `dict` value where something else should be it'd
look something like this:
<Started(
|_pld: NamespacePath = {'cid': '3e0ca00c-7d32-4d2a-a0c2-ac2e12453871',
'locked': True,
'msg_type': 'LockStatus',
'subactor_uid': ['sub', 'af7ccb69-1dab-491f-84f7-2ec42c32d137']}
- add some `tb_str: str` indent-prefix args for diff indent levels for the
body vs. the surrounding "ascii box".
- ^-use it-^ from `RemoteActorError.__repr__()` obvi.
- use new `msg.types.from_dict_msg()` in impl of
`MsgTypeError.payload_msg`, handy for showing what the message "would
have looked like in `Struct` form" had it not failed its type
constraints.
Sure makes console grokability a lot better by showing only the
customizable fields.
Further, clean up `mk_codec()` a bunch by removing the `ipc_msg_spec`
param since we don't plan to support another msg-set (for now) which
allows cleaning out a buncha logic that was mostly just a source of
bugs..
Also,
- add temporary `log.info()` around codec application.
- throw in some sanity `assert`s to `limit_msg_spec()`.
- add but mask out the `extend_msg_spec()` idea since it seems `msgspec`
won't allow `Decoder.type` extensions when using a custom `dec_hook()`
for some extension type.. (not sure what approach to take here yet).
Handy for re-constructing a struct-`MsgType` from a `dict` decoded from
wire-bytes wherein the msg failed to decode normally due to a field type
error but you'd still like to show the "potential" msg in struct form,
say inside a `MsgTypeError`'s meta data.
Supporting deats:
- add a `.msg.types.from_dict_msg()` to implement it (the helper).
- also a `.msg.types._msg_table: dict[str, MsgType]` for supporting this
func ^ as well as providing just a general `MsgType`-by-`str`-name
lookup.
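In sketch form (toy struct + table; the real thing covers the full `MsgType` set):

    import msgspec

    class Started(msgspec.Struct, tag=True, tag_field='msg_type'):
        cid: str
        pld: object = None

    _msg_table: dict[str, type[msgspec.Struct]] = {
        'Started': Started,
        # ...one entry per shuttle-msg type
    }

    def from_dict_msg_demo(dict_msg: dict) -> msgspec.Struct:
        # the tag-field in the (vanilla-decoded) `dict` names which
        # struct type to rebuild; it's not a constructor param so
        # filter it out of the kwargs.
        msg_type = _msg_table[dict_msg['msg_type']]
        return msg_type(**{
            k: v for k, v in dict_msg.items()
            if k != 'msg_type'
        })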
Unrelated:
- Drop commented idea for still supporting `dict`-msg set via
`enc/dec_hook()`s that would translate to/from `MsgType`s, but that
would require a duplicate impl in the runtime.. so eff that XD
Not sure if this is a good tactic (yet) but it at least covers us from
getting user's confused by `breakpoint()` usage causing REPL clobbering.
Always set an explicit rte raising breakpoint hook such that the user
realizes they can't use `.pause_from_sync()` without enabling debug
mode.
** CHERRY-PICK into `pause_from_sync_w_greenback` branch! **
Mostly removing commented (and replaced) code blocks lingering from the
ctxc semantics work and new typed-msg-spec `MsgType`s handling AND use
the new `._exceptions.pack_from_raise()` helper to construct
`StreamOverrun` msgs.
Deaterz:
- clean out the drain loop now that it's implemented to handle our
struct msg types including the `dict`-msg bits left in as
fallback-reminders, any notes/todos better summarized at the top of
their blocks, remove any `_final_result_is_set()` related duplicate/legacy
tidbits.
- use a `case Error()` block in drain loop with fallthrough to `_:`
always resulting in an rte raise.
- move "XXX" notes into the doc-string for `._deliver_msg()` as
a "rules" section.
- use `match:` syntax for logging the `result_or_err: MsgType` outcome
from the final `.result()` call inside `open_context_from_portal()`.
- generally speaking use `MsgType` type annotations throughout!
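The drain loop's `match/case` shape, reduced to a sketch (toy msg classes standing in for the real `MsgType`s):

    from dataclasses import dataclass

    @dataclass
    class ReturnMsg:  # stand-in for `Return`
        pld: object = None

    @dataclass
    class ErrorMsg:   # stand-in for `Error`
        tb_str: str = ''

    @dataclass
    class YieldMsg:   # stand-in for `Yield`
        pld: object = None

    def drain_one(
        msg,
        pre_result_drained: list,
    ):
        match msg:
            case ReturnMsg() | ErrorMsg():
                # either terminates the ctx; hand it back as the
                # "result msg" for later `.maybe_raise()`-style handling.
                return msg

            case YieldMsg():
                # stream-phase leftovers just get buffered
                pre_result_drained.append(msg)
                return None

            case _:
                # unknown/invalid msg types now always raise explicitly
                raise RuntimeError(
                    f'Invalid or unknown msg type: {msg!r}'
                )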
Such that `Channel.recv()` + `MsgpackTCPStream.recv()` originating
msg-type-errors are not raised at the IPC transport layer but instead
relayed up the runtime stack for eventual handling by user-app code via
the `Context`/`MsgStream` layer APIs.
This design choice leads to a substantial amount of flexibility and
modularity, and avoids `MsgTypeError` handling policies from being
coupled to a particular backend IPC transport layer:
- receive-side msg-type errors, as can be raised and handled in the
`.open_stream()` "nasty" phase of a ctx, whilst being packed at the
`MsgCodec`/transport layer (keeping the underlying src decode error
coupled to the specific transport + interchange lib) and then relayed
upward to app code for custom handling like a normal `Error` msg.
- the policy options for handling such cases could be implemented as
`@acm` wrappers around `.open_context()`/`.open_stream()` blocks (and
their respective delivered primitives) OR just plain old async
generators around `MsgStream.receive()` such that both built-in policy
handling and custom user-app solutions can be swapped without touching
any `tractor` internals or providing specialized "registry APIs".
-> eg. the ignore and relay-invalid-msg-to-sender approach can be more
easily implemented as embedded `try: except MsgTypeError:` blocks
around `MsgStream.receive()`, applied either as an injected wrapper
type around a stream or as an async gen that `async for`s from the
stream (see the sketch after this list).
- any performance based AOT-lang extensions used to implement a policy
for handling recv-side errors space can avoid knowledge of the lower
level IPC `Channel` (and-downward) primitives.
- `Context` consuming code can choose to let all msg-type-errs
bubble and handle them manually (like any other remote `Error`
shuttled exception).
- we can keep (as before) send-side msg type checks which can be raised
locally and cause offending senders to error and adjust before the
streaming phase of an IPC ctx.
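As an example of the "policy outside the runtime" point (sketched with a toy error type; NOT a `tractor` API), an ignore-and-continue wrapper can be a dead simple async gen:

    import trio

    class MsgTypeErrorDemo(Exception):
        'toy stand-in for the real `MsgTypeError`'

    async def tolerant_receiver(stream):
        # wrap anything with an async `.receive()`; bad msgs get
        # skipped (or could be logged / relayed back to the sender)
        # instead of crashing the consumer task.
        while True:
            try:
                yield await stream.receive()
            except MsgTypeErrorDemo as mte:
                print(f'ignoring invalid msg: {mte!r}')
            except trio.EndOfChannel:
                return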
Impl (related) deats:
- obvi make `MsgpackTCPStream.recv()` yield up any `MsgTypeError`
constructed by `_mk_msg_type_err()` such that the exception will
eventually be relayed up to `._rpc.process_messages()` and from
there delivered to the corresponding ctx-task.
- in support of ^, make `Channel.recv()` detect said mtes and use the
new `pack_from_raise()` to inject the far end `Actor.uid` for the
`Error.src_uid`.
- keep raising the send side equivalent (when strict enabled) errors
inline immediately with no upward `Error` packing or relay.
- improve `_mk_msg_type_err()` cases handling with far more detailed
`MsgTypeError` "message" contents pertaining to `msgspec` specific
failure-fixing-tips and type-spec mismatch info:
* use `.from_decode()` constructor in recv-side case to inject the
non-spec decoded `msg_dict: dict` and use the new
`MsgCodec.pld_spec_str: str` when clarifying the type discrepancy
with the offending field.
* on send-side, if we detect that an unsupported field type was
described in the original `src_type_error`, AND there is no
`msgpack.Encoder.enc_hook()` set, that the real issue is likely
that the user needs to extend the codec to support the
non-std/custom type with a hook and link to `msgspec` docs.
* if one of a `src_type/validation_error` is provided, set that
error as the `.__cause__` in the new mte.
Make a new `MsgType: TypeAlias` for the union of all msg types such that
it can be used in annots throughout the code base; just make
`.msg.__msg_spec__` delegate to it.
Add some new codec methods:
- `pld_spec_str`: for the `str`-casted value of the payload spec,
generally useful in logging content.
- `msg_spec_items()`: to render a `dict` of msg types to their
`str()`-casted values with support for singling out a specific
`MsgType` type by an input `msg` instance.
- `pformat_msg_spec()`: for rendering the (partial) `.msg_spec` as
a formatted `str` useful in logging.
Oh right, add a `Error._msg_dict: dict` in support of the previous
commit (for `MsgTypeError` packing as RAEs) such that our error msg type
can house a non-type-spec decoded wire-bytes for error
reporting/analysis purposes.
Since in the receive-side error case the source of the exception is the
sender side (normally causing a local `TypeError` at decode time), might
as well bundle the error in remote-capture-style using boxing semantics
around the causing local type error raised from the
`msgspec.msgpack.Decoder.decode()` and with a traceback packed from
`msgspec`-specific knowledge of any field-type spec matching failure.
Deats on new `MsgTypeError` interface:
- includes a `.msg_dict` to get access to any `Decoder.type`-applied
load of the original (underlying and offending) IPC msg into
a `dict` form using a vanilla decoder which is normally packed into
the instance as a `._msg_dict`.
- a public getter to the "supposed offending msg" via `.payload_msg`
which attempts to take the above `.msg_dict` and load it manually into
the corresponding `.msg.types.MsgType` struct.
- a constructor `.from_decode()` to make it simple to build out error
instances from a failed decode scope where the aforementioned
`msgdict: dict` from the vanilla decode can be provided directly.
- ALSO, we now pack into `MsgTypeError` directly just like ctxc in
`unpack_error()`
This also completes the long-standing todo for `RemoteActorError` to
contain a ref to the underlying `Error` msg as `._ipc_msg` with public
`@property` access that `defstruct()`-creates a pretty struct version
via `.ipc_msg`.
Internal tweaks for this include:
- `._ipc_msg` is the internal literal `Error`-msg instance if provided
with `.ipc_msg` the dynamic wrapper as mentioned above.
- `.__init__()` now can still take variable `**extra_msgdata` (similar
to the `dict`-msgdata as before) to maintain support for subtypes
which are constructed manually (not only by `pack_error()`) and insert
their own attrs which get placed in a `._extra_msgdata: dict` if no
`ipc_msg: Error` is provided as input.
- the `.msgdata` is now a merge of any `._extra_msgdata` and
a `dict`-casted form of any `._ipc_msg`.
- adjust all previous `.msgdata` field lookups to try equivalent field
reads on `._ipc_msg: Error`.
- drop default single ws indent from `.tb_str` and do a failover lookup
to `.msgdata` when `._ipc_msg is None` for the manually constructed
subtype-instance case.
- add a new class attr `.extra_body_fields: list[str]` to allow subtypes
to declare attrs they want shown in the `.__repr__()` output, eg.
`ContextCancelled.canceller`, `StreamOverrun.sender` and
`MsgTypeError.payload_msg`.
- ^-rework defaults pertaining to-^ with rename from
`_msgdata_keys` -> `_ipcmsg_keys` with latter now just loading directly
from the `Error` fields def and `_body_fields: list[str]` just taking
that value and removing the not-so-useful-in-REPL or already shown
(i.e. `.tb_str: str`) field names.
- add a new mod level `.pack_from_raise()` helper for auto-boxing RAE
subtypes constructed manually into `Error`s which is normally how
`StreamOverrun` and `MsgTypeError` get created in the runtime.
- in support of the above expose a `src_uid: tuple` override to
`pack_error()` such that the runtime can provide any remote actor id
when packing a locally-created yet remotely-caused RAE subtype.
- adjust all typing to expect `Error`s over `dict`-msgs.
Adjust some tests to match these changes:
- context and inter-peer-cancel tests to make their `.msgdata` related
checks against the new `.ipc_msg` as well and `.tb_str` directly.
- toss in an extra sleep to `sleep_a_bit_then_cancel_peer()` to keep the
'canceller' ctx child task cancelled by its parent in the 'root' for
the rte-raised-during-ctxc-handling case (apparently now it's
returning too fast, cool?).
Better describes the internal RPC impl/latest-architecture with the msgs
delivered being those which either define a `.pld: PayloadT` that gets
passed up to user code, or the error-msg subset that similarly is raised
in a ctx-linked task.
Mainly expanding out the runtime endpoints for cancellation to separate
cases and flattening them with the main RPC-request-invoke block, moving
the non-cancel runtime case (where we call `getattr(actor, funcname)`)
inside the main `Start` case (for now) which branches on `ns=="self"`.
Also, add a new IPC msg `class CancelAck(Return):` which is always
included in the default msg-spec such that runtime cancellation (and
eventually all) endpoints return that msg (instead of a `Return`) and
thus sidestep any currently applied `MsgCodec` such that the results
(`bool`s for most cancel methods) are never violating the current type
limit(s) on `Msg.pld`. To support this expose a new
`return_msg: Return|CancelAck` param from
`_invoke()`/`_invoke_non_context()` and set it to `CancelAck` in the
appropriate endpoint case-blocks of the msg loop.
Clean out all the lingering legacy `chan.send(<dict-msg>)` commented
codez from the invoker funcs, with more cleaning likely to come B)