Exactly like how it's organized for `Portal.open_context()`, put the
main streaming API `@acm` with the `MsgStream` code and bind the method
to the new module func.
Other,
- rename `Context.result()` -> `.wait_for_result()` to better match the
blocking semantics and rebind `.result()` as deprecated.
- add a doc-str for `Context.maybe_raise()`.
That is, as a control to `Context._maybe_raise_remote_err()` such that
if set to anything other than the default (a `False` value), we do
`raise remote_error from from_src_exc` so the caller can choose to
suppress or override the `.__cause__` tb.
Also tidy up an old masked TODO regarding calling `.maybe_raise()`
after the caller exits from the `yield` in `.open_context()`..
More specifically, if `.open_context()` is cancelled while awaiting the
first `Context.started()` during the child task sync phase, check to see
if it was due to `._scope.cancel_called` and raise any remote error via
`.maybe_raise()` instead of the `trio.Cancelled`, like in every other
remote-error handling case. Ensure we set `._scope[_nursery]` only after
the `Started` has arrived and been audited.
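Roughly the shape of that guard (an approximation, not the exact impl;
the specific pld-rx recv meth used here is an assumption):

```python
# approximate sketch of the sync-phase guard in `open_context_from_portal()`
try:
    # wait on the child task's `Context.started()` value
    started_msg, first = await ctx._pld_rx.recv_msg_w_pld(ipc=ctx)
except trio.Cancelled:
    # if our scope was explicitly cancelled a remote error is likely
    # the true cause, so raise it instead of the bare `trio.Cancelled`
    # just like every other remote-error handling case.
    if ctx._scope.cancel_called:
        ctx.maybe_raise()
    raise
```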
Filling out the helper `validate_payload_msg()` staged in a prior commit
and adjusting all imports to match.
Also add a `raise_mte: bool` flag for potential usage where the caller
wants to handle the MTE instance themselves.
We handle it as a special case exposed through to `PldRx.dec_msg()`
with a new `is_started_send_side: bool`.
(Non-ideal) `Context.started()` impl deats:
- only do send-side pld-spec validation when a new `validate_pld_spec`
is set (by default it's not).
- call `self.pld_rx.dec_msg(is_started_send_side=True)` to validate the
payload field from the just codec-ed `Started` msg's `msg_bytes` by
passing the `roundtripped` msg (with its `.pld: Raw`) directly.
- add a `hide_tb: bool` param and proxy it to the `.dec_msg()` call.
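For orientation, a loose sketch of that send-side flow (the `codec`
accessor and the exact call style are assumptions, not the real impl):

```python
# loose sketch of `.started()` send-side pld-spec validation
started = Started(cid=self.cid, pld=value)
if validate_pld_spec:
    codec = current_codec()  # assumed accessor for the applied `MsgCodec`
    msg_bytes: bytes = codec.encode(started)
    # the roundtripped msg carries `.pld: Raw` which the pld-rx can
    # check against the currently applied payload-spec
    roundtripped: Started = codec.decode(msg_bytes)
    self.pld_rx.dec_msg(
        roundtripped,
        ipc=self,
        is_started_send_side=True,
        hide_tb=hide_tb,
    )
await self.chan.send(started)
```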
(Non-ideal) `PldRx.dec_msg()` impl deats:
- for now we're packing the MTE inside an `Error` via a manual call to
`pack_error()` and then setting that as the `msg` passed to
`_raise_from_unexpected_msg()` (though really we should just raise
inline?).
- manually set the `MsgTypeError._ipc_msg` to the above..
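Roughly what that packing path looks like (helper arg names here are
hypothetical, inferred from the notes above):

```python
# rough sketch of the (non-ideal) MTE packing inside `PldRx.dec_msg()`
try:
    pld: PayloadT = self._pld_dec.decode(msg.pld)
except ValidationError as valerr:
    mte: MsgTypeError = _mk_msg_type_err(
        msg=msg,
        src_validation_error=valerr,
    )
    # pack the MTE as a boxed `Error` msg (though really we should
    # prolly just raise inline?) and hand it off for raising
    err_msg: Error = pack_error(mte, cid=msg.cid)
    mte._ipc_msg = err_msg
    _raise_from_unexpected_msg(
        ctx=self._ctx,
        msg=err_msg,
        src_err=valerr,
    )
```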
Other,
- more comprehensive `Context` type doc string.
- various `hide_tb: bool` kwarg additions through `._ops.PldRx` meths.
- proto a `.msg._ops.validate_payload_msg()` helper planned to get the
logic from this version of `.started()`'s send-side validation so as
to be useful more generally elsewhere.. (like for raising back
`Return` values on the child side?).
Warning: this commit may have been made out of order from required
changes to `._exceptions` which will come in a follow up!
Since the state mgmt becomes quite messy with multiple sub-tasks inside
an IPC ctx, AND bc generally speaking the payload-type-spec should map
1-to-1 with the `Context`, it doesn't make a lot of sense to be using
`ContextVar`s to modify the `Context.pld_rx: PldRx` instance.
Instead, always allocate a full instance inside `mk_context()` with the
default `.pld_rx: PldRx` set to use the `msg._ops._def_any_pldec: MsgDec`.
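Ie. something along these lines inside `mk_context()` (a sketch, the
factory's signature is an assumption):

```python
# minimal sketch of per-ctx pld-rx allocation in `mk_context()`
from tractor.msg import _ops as msgops

def mk_context(chan: Channel, cid: str, **kwargs) -> Context:
    # one pld-rx per ctx; no more `ContextVar` based overrides
    pld_rx = msgops.PldRx(_pld_dec=msgops._def_any_pldec)
    ctx = Context(chan=chan, cid=cid, _pld_rx=pld_rx, **kwargs)
    pld_rx._ctx = ctx  # required back-ref (see the list below)
    return ctx
```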
In support, simplify the `.msg._ops` impl and APIs:
- drop `_ctxvar_PldRx`, `_def_pld_rx` and `current_pldrx()`.
- rename `PldRx._pldec` -> `._pld_dec`.
- rename the unused `PldRx.apply_to_ipc()` -> `.wraps_ipc()`.
- add a required `PldRx._ctx: Context` attr since it is needed
internally in some meths and each pld-rx now maps to a specific ctx.
- modify all recv methods to accept an `ipc: Context|MsgStream` (instead
of a `ctx` arg) since both have a ref to the same `._rx_chan` and there
are only a couple spots (in `.dec_msg()`) where we need the `ctx`
explicitly (which can now be easily accessed via a new `MsgStream.ctx`
property, see below).
- always show the `.dec_msg()` frame in tbs if there's a reference error
when calling `_raise_from_unexpected_msg()` in the fallthrough case.
- implement `limit_plds()` as light wrapper around getting the
`current_ipc_ctx()` and mutating its `MsgDec` via
`Context.pld_rx.limit_plds()`.
- add a `maybe_limit_plds()` which just provides an `@acm` equivalent of
`limit_plds()` handy for composing in an `async with ():` style block
(avoiding additional indent levels in the body of async funcs).
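The wrapper pair then reduces to roughly this (exact signatures are
assumptions):

```python
# hedged sketch of `limit_plds()` and its `@acm` equivalent
from contextlib import asynccontextmanager as acm

def limit_plds(spec: Any, **dec_kwargs) -> MsgDec:
    # mutate the current IPC ctx's pld-rx decoder in place
    ctx: Context = current_ipc_ctx()
    return ctx.pld_rx.limit_plds(spec=spec, **dec_kwargs)

@acm
async def maybe_limit_plds(
    ctx: Context,
    spec: Any|None = None,
    **dec_kwargs,
):
    # no-op when no spec is passed, handy in `async with (...):` blocks
    if spec is None:
        yield None
        return
    yield limit_plds(spec, **dec_kwargs)
```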
Obvi extend the `Context` and `MsgStream` interfaces as needed
to match the above:
- add a `Context.pld_rx` pub prop.
- new private refs to `Context._started_msg: Started` and
a `._started_pld` (mostly for internal debugging / testing / logging)
and set inside `.open_context()` immediately after the syncing phase.
- a `Context.has_outcome() -> bool:` predicate which can be used to more
easily determine if the ctx errored or has a final result.
- pub props for `MsgStream.ctx: Context` and `.chan: Channel` providing
full `ipc`-arg compat with the `PldRx` method signatures.
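Ie. roughly these additions (impls assumed from the descriptions above):

```python
# sketch of the small interface extensions
class MsgStream:
    @property
    def ctx(self) -> Context:
        # full `ipc`-arg compat with the `PldRx` method signatures
        return self._ctx

    @property
    def chan(self) -> Channel:
        return self._ctx.chan

class Context:
    def has_outcome(self) -> bool:
        # errored OR a final result already arrived
        return bool(self.maybe_error) or self._final_result_is_set()
```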
Proto-ing a little suite of call-stack-frame annotation-for-scanning
sub-systems for the purposes of both,
- the `.devx._debug`er and its
traceback and frame introspection needs when entering the REPL,
- detailed trace-style logging such that we can explicitly report
on "which and where" `tractor`'s APIs are used in the "app" code.
Deats:
- change mod name obvi from `._code` and adjust client mod imports.
- using `wrapt` (for perf) implement a `@api_frame` annot decorator
which both stashes per-call-stack-frame instances of `CallerInfo` in
a table and marks the function such that API endpoints can be easily
found via runtime stack scanning despite any internal impl changes.
- add a global `_frame2callerinfo_cache: dict[FrameType, CallerInfo]`
table for providing the per func-frame info caching.
- Re-implement `CallerInfo` to require fewer (types of) inputs:
|_ `_api_func: Callable`, a ref to the (singleton) func def.
|_ `_api_frame: FrameType` taken from the `@api_frame` marked `tractor`-API
func's runtime call-stack, from which we can determine the
app code's `.caller_frame`.
|_`_caller_frames_up: int|None` allowing the specific `@api_frame` to
determine "how many frames up" the application / calling code is.
And, a better set of derived attrs:
|_`caller_frame: FrameType` which finds and caches the API-ep's calling
frame.
- add a new attempt at "getting a method ref from its runtime frame"
with `get_ns_and_func_from_frame()` using a heuristic that the
`CodeType.co_qualname: str` should have a "." in it for methods.
- main issue is still that the func-ref lookup will require searching
for the method's instance type by name, and that name isn't
guaranteed to be defined in any particular ns..
|_rn we try to read it from the `FrameType.f_locals` but that is
going to obvi fail any time the method is called in a module where
its type is not also defined/imported.
- returns both the ns and the func ref FYI.
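The heuristic amounts to roughly this (a sketch; edge cases and error
handling elided):

```python
# rough sketch of `get_ns_and_func_from_frame()`
from types import FrameType
from typing import Callable

def get_ns_and_func_from_frame(frame: FrameType) -> tuple[dict, Callable]:
    co = frame.f_code
    func_name: str = co.co_name
    ns: dict = frame.f_globals
    qualname: str = getattr(co, 'co_qualname', func_name)
    if '.' in qualname:
        # a '.' implies a method, so try resolving the owning type by
        # name; rn this reads from `f_locals` which obvi fails when the
        # type isn't also defined/imported in that module..
        cls_name: str = qualname.split('.')[0]
        cls = frame.f_locals.get(cls_name) or ns.get(cls_name)
        if cls is not None:
            return ns, getattr(cls, func_name)
    return ns, ns[func_name]
```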
- set `._state._ctxvar_Context` just after `StartAck` inside
`open_context_from_portal()` so that `current_ipc_ctx()` always
works on the 'parent' side.
- always set `.canceller` to any `MsgTypeError.src_uid` and otherwise to
any maybe-detected `.src_uid` (i.e. for RAEs).
- always set `.canceller` to us when we rx a ctxc which reports us as
its canceller; this is a sanity check on definite "self cancellation".
- adjust `._is_self_cancelled()` logic to only be `True` when
`._remote_error` is both a ctxc with a `.canceller` set to us AND
when `Context.canceller` is also set to us (since the change above)
as a little bit of extra rigor.
- fill-in/fix some `.repr_state` edge cases:
- merge self-vs.-peer ctxc cases to one block and distinguish via
nested `._is_self_cancelled()` check.
- set 'errored' for all exception-matched cases regardless of `.canceller`.
- add pre-`Return` phase statuses:
|_'pre-started' and 'syncing-to-child' depending on side and when
`._stream` has not (yet) been set.
|_'streaming' and 'streaming-finished' depending on side when
`._stream` is set and whether it was stopped/closed.
- tweak drainage log-message to use "outcome" instead of "result".
- use new `.devx.pformat.pformat_cs()` inside `_maybe_cancel_and_set_remote_error()`
but, IFF the log level is at least 'cancel'.
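Re the pre-`Return` statuses a few bullets up, a hedged sketch of just
those branches (which side maps to which label, and the stream-closed
predicate, are assumptions):

```python
# sketch of the pre-`Return` phase branches in `.repr_state`
@property
def repr_state(self) -> str:
    if self._stream is None:
        # no stream yet: still in the ctx-sync phase
        return (
            'syncing-to-child' if self.side == 'parent'
            else 'pre-started'
        )
    if not self._stream._closed:
        return 'streaming'
    return 'streaming-finished'
```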
A better spot for the pretty-formatting of frame text (and thus tracebacks)
is in the new `.devx._code` module:
- move from `._exceptions` -> `.devx._code.pformat_boxed_tb()`.
- add a new `pformat_caller_frame()` factored out from the use case in
`._exceptions._mk_msg_type_err()` where we dump a stack trace
for bad `.send()` side IPC msgs.
Add some new pretty-format methods to `Context`:
- explicitly implement `.pformat()` and allow an `extra_fields: dict`
which can be used to inject additional fields (maybe eventually by
default) such as is now used inside
`._maybe_cancel_and_set_remote_error()` when reporting the internal
`._scope` state in cancel logging.
- add a new `.repr_state -> str` which provides a single string status
depending on the internal state of the IPC ctx in terms of the shuttle
protocol's "phase"; use it from `.pformat()` for the `|_state:`.
- set `.started(complain_no_parity=False)` since we presume decoding
with `.pld: Raw` now with the new `PldRx` design.
- use new `msgops.current_pldrx()` in `mk_context()`.
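Re the `extra_fields` injection above, roughly this shape (field set and
layout here are assumed):

```python
# rough sketch of `Context.pformat()` with injected fields
def pformat(self, extra_fields: dict[str, Any]|None = None) -> str:
    fields: dict[str, Any] = {
        'uid': self.chan.uid,
        'state': self.repr_state,
        'outcome': self.repr_outcome(),
    }
    if extra_fields:
        fields.update(extra_fields)
    body: str = '\n'.join(f' |_{k}: {v}' for k, v in fields.items())
    return f'<Context(\n{body}\n)>'
```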
As per much tinkering, re-designs and preceding rubber-ducking via many
"commit msg novelas", **finally** this adds the (hopefully) final
missing layer for typed msg safety: `tractor.msg._ops.PldRx`
(or `PayloadReceiver`? haven't decided how verbose to go..)
Design justification summary:
------ - ------
- need a way to be as-close-as-possible to the `tractor`-application
such that when `MsgType.pld: PayloadT` validation takes place, it is
straightforward and obvious how user code can decide to handle any
resulting `MsgTypeError`.
- there should be a common and optional-yet-modular way to modify
**how** data delivered via IPC (possibly embedded as user defined,
type-constrained `.pld: msgspec.Struct`s) can be handled and processed
during fault conditions and/or IPC "msg attacks".
- support for nested type constraints within a `MsgType.pld` field
should be simple to define, implement and understand at runtime.
- a layer between the app-level IPC primitive APIs
(`Context`/`MsgStream`) and application-task code (consumer code of
those APIs) should be easily customized and prove-to-be-as-such
through demonstrably rigorous internal (sub-sys) use!
-> eg. via seamless runtime RPC eps support like `Actor.cancel()`
-> by correctly implementing our `.devx._debug.Lock` REPL TTY mgmt
dialog prot, via a dead simple payload-as-ctl-msg-spec.
There are some fairly detailed doc strings included so I won't duplicate
that content; the majority of the work here is actually somewhat of
a factoring of many similar blocks that are doing more or less the same
`msg = await Context._rx_chan.receive()` with boilerplate for
`Error`/`Stop` handling via `_raise_from_no_key_in_msg()`. The new
`PldRx` basically provides a shim layer for this common "receive msg,
decode its payload, yield it up to the consuming app task" by pairing
the RPC feeder mem-chan with a msg-payload decoder and expecting IPC API
internals to use **one** API instead of re-implementing the same pattern
all over the place XD
`PldRx` breakdown
------ - ------
- for now only expects a `._msgdec: MsgDec` which allows for
override-able `MsgType.pld` validation and most obviously used in
the impl of `.dec_msg()`, the decode message method.
- provides multiple mem-chan receive options including:
|_ `.recv_pld()` which does the e2e operation of receiving a payload
item.
|_ a sync `.recv_pld_nowait()` version.
|_ a `.recv_msg_w_pld()` which optionally allows retrieving both the
shuttling `MsgType` as well as its `.pld` body for use cases where
info on both is important (eg. draining a `MsgStream`).
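An illustrative skeleton of that receive-option trio (paraphrased from
the bullets above, not the exact impl):

```python
# illustrative `PldRx` skeleton
class PldRx(Struct):
    _msgdec: MsgDec  # override-able `MsgType.pld` validation

    def dec_msg(self, msg: MsgType) -> PayloadT:
        # decode + validate the payload per the applied spec
        return self._msgdec.decode(msg.pld)

    async def recv_pld(self, ctx: Context) -> PayloadT:
        # e2e: receive a msg and hand back just its decoded payload
        return self.dec_msg(await ctx._rx_chan.receive())

    def recv_pld_nowait(self, ctx: Context) -> PayloadT:
        return self.dec_msg(ctx._rx_chan.receive_nowait())

    async def recv_msg_w_pld(self, ctx: Context) -> tuple[MsgType, PayloadT]:
        # both the shuttle msg and its payload, eg. for stream draining
        msg: MsgType = await ctx._rx_chan.receive()
        return msg, self.dec_msg(msg)
```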
Dirty internal changeover/implementation deatz:
------ - ------
- obvi move over all the IPC "primitives" that previously had the duplicate recv-n-yield
logic:
- `MsgStream.receive[_nowait]()` delegating instead to the equivalent
`PldRx.recv_pld[_nowait]()`.
- add `Context._pld_rx: PldRx`, created and passed in by
`mk_context()`; use it for the `.started()` -> `first: Started`
retrieval inside `open_context_from_portal()`.
- all the relevant `Portal` invocation methods: `.result()`,
`.run_from_ns()`, `.run()`; also allows for dropping `_unwrap_msg()`
and `Portal._return_once()` outright Bo
- rename `Context._recv_chan` -> `._rx_chan`.
- add detailed `Context._scope` info for logging whether or not it's
cancelled inside `_maybe_cancel_and_set_remote_error()`.
- move `._context._drain_to_final_msg()` -> `._ops.drain_to_final_msg()`
since it's really not necessarily ctx specific per se, and it does
kinda fit with "msg operations" more abstractly ;)
As per some newly added features and APIs:
- pass `portal: Portal` to `Actor.start_remote_task()` from
`open_context_from_portal()` marking `Portal.open_context()` as
always being the "parent" task side.
- add caller tracing via `.devx._code.CallerInfo/.find_caller_info()`
called in `mk_context()` and (for now) a `__runtimeframe__: int = 2`
inside `open_context_from_portal()` such that any enter-er of
`Portal.open_context()` will be reported.
- pass in a new `._caller_info` attr which is used in 2 new meths:
- `.repr_caller: str` for showing the name of the app-code-func.
- `.repr_api: str` for showing the API ep, which for now we just
hardcode to `Portal.open_context()` since ow it's gonna show the mod
func name `open_context_from_portal()`.
- use those new props ^ in the `._deliver_msg()` flow body log msg
content for much clearer msg-flow tracing Bo
- add `Context._cancel_on_msgerr: bool` to toggle whether
a delivered `MsgTypeError` should trigger a `._scope.cancel()` call.
- also (temporarily) add separate `.cancel()` emissions for both cases
as i work through hacking out the maybe `MsgType.pld: Raw` support.
Instead of `allow_msg_keys` since we've fully flipped over to
struct-types for msgs in the runtime.
- drop the loop from `MsgStream.receive_nowait()` since
`Yield/Return.pld` getting will handle both (instead of a loop of
`dict`-key reads).
Add a bit of special handling for msg-type-errors with a dedicated
log-msg detailing which `.side: str` is the sender/causer and avoiding
a `._scope.cancel()` call in such cases since the local task might be
written to handle and tolerate the badly (typed) IPC msg.
As part of ^, change the ctx task-pair "side" semantics from "caller" ->
"callee" to be "parent" -> "child" which better matches the
cross-process SC-linked-task supervision hierarchy, and
`trio.Nursery.parent_task`; in `trio` the task that opens a nursery is
also named the "parent".
Impl deats / fixes around the `.side` semantics:
- ensure that `._portal: Portal` is set ASAP after
`Actor.start_remote_task()` such that if the `Started` transaction
fails, the parent-vs.-child sides are still denoted correctly (since
`._portal` being set is the predicate for that).
- add a helper func `Context.peer_side(side: str) -> str:` which inverts
from "child" to "parent" and vice versa, useful for logging info.
Other tweaks:
- make `_drain_to_final_msg()` return a tuple of a maybe-`Return` and
the list of other `pre_result_drained: list[MsgType]` such that we
don't ever have to warn about the return msg getting captured as
a pre-"result" msg.
- Add some strictness flags to `.started()` which allow for toggling
whether to error or warn log about mismatching roundtripped `Started`
msgs prior to IPC transit.
Mostly removing commented (and replaced) code blocks lingering from the
ctxc semantics work and new typed-msg-spec `MsgType`s handling AND use
the new `._exceptions.pack_from_raise()` helper to construct
`StreamOverrun` msgs.
Deaterz:
- clean out the drain loop now that it's implemented to handle our
struct msg types including the `dict`-msg bits left in as
fallback-reminders, any notes/todos better summarized at the top of
their blocks, remove any `_final_result_is_set()` related duplicate/legacy
tidbits.
- use a `case Error()` block in drain loop with fallthrough to `_:`
always resulting in an rte raise.
- move "XXX" notes into the doc-string for `._deliver_msg()` as
a "rules" section.
- use `match:` syntax for logging the `result_or_err: MsgType` outcome
from the final `.result()` call inside `open_context_from_portal()`.
- generally speaking use `MsgType` type annotations throughout!
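Re the `case Error()` handling above, a loose sketch of the drain-loop
matching (the exact boxed-error raise path is an assumption):

```python
# loose sketch of the drain-loop `match` block
match msg:
    case Yield() | Stop():
        pre_result_drained.append(msg)

    case Return():
        return_msg = msg
        break

    case Error():
        # a boxed remote error terminates the drain; unpack and raise
        raise unpack_error(msg, chan=ctx.chan)

    case _:
        # any other msg type during drain is an internal runtime error
        raise RuntimeError(f'Unexpected msg during drain:\n{msg!r}')
```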
Better describes the internal RPC impl/latest-architecture with the msgs
delivered being those which either define a `.pld: PayloadT` that gets
passed up to user code, or the error-msg subset that similarly is raised
in a ctx-linked task.
As detailed in the surrounding notes, it's pretty advantageous to always
have the child context task ensure the first msg it relays back is
msg-type checked against the current spec and thus `MsgCodec`. Implement
the check via a simple codec-roundtrip of the `Started` msg such that
the `.pld` payload is always validated before transit. This ensures the
child will fail early and notify the parent before any streaming takes
place (i.e. the "nasty" dialog protocol phase).
The main motivation here is to avoid inter-actor task syncing bugs that
are hard(er) to recover from and/or such as if an invalid typed msg is
sent to the parent, who then ignores it (depending on config), and then
the child thinks the parent is in some presumed state while the parent
is still thinking a first msg has yet to arrive. Doing the stringent
check on the sender side (i.e. the child is sending the "first"
application msg via `.started()`) avoids/sidesteps dealing with such
syncing/coordinated-state problems by keeping the entire IPC dialog in
a "cheap" or "control" style transaction up until a stream is opened.
Iow, the parent task's `.open_context()` block entry can't occur until
the child side is definitely (as much as is possible with IPC msg type
checking) in a correct state spec wise. During any streaming phase in
the dialog the msg-type-checking is NOT done for performance (the
"nasty" protocol phase) and instead any type errors are relayed back
from the receiving side. I'm still unsure whether to take the same
approach on the `Return` msg, since at that point erroring early doesn't
benefit the parent task if/when a msg-type error occurs? Definitely more
to ponder and tinker out here..
Impl notes:
- a gotcha with the roundtrip-codec-ed msg is that it often won't match
the input `value` bc in the `msgpack` case many native python
sequence/collection types will map to a common array type due to the
surjection that `msgpack`'s type-sys imposes.
- so we can't assert that `started == rt_started` but it may be useful
to at least report the diff of the type-reduced payload so that the
caller can at least be notified how the input `value` might be
better type-casted prior to call, for ex. pre-casting to `list`s.
- added a `._strict_started: bool` that could provide the stringent
checking if desired in the future.
- on any validation error raise our `MsgTypeError` from it.
- ALSO change over the lingering `.send_yield()` deprecated meth body
to use a `Yield()`.
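A loose sketch of that roundtrip check (the codec accessor and the
MTE-helper arg names are hypothetical):

```python
# loose sketch of the `.started()` codec-roundtrip validation
started = Started(cid=self.cid, pld=value)
codec = current_codec()  # assumed accessor for the applied `MsgCodec`
msg_bytes: bytes = codec.encode(started)
try:
    rt_started: Started = codec.decode(msg_bytes)
except ValidationError as verr:
    # raise our `MsgTypeError` from the src validation error
    raise _mk_msg_type_err(msg=started, src_validation_error=verr) from verr

if rt_started != started:
    # `msgpack`'s type surjection (eg. `tuple` -> `list`) means equality
    # can fail for perfectly valid payloads, so only hard-error when
    # strict checking is enabled, otherwise just report the reduction.
    if self._strict_started:
        raise ValueError('Started value does not round-trip under the codec?')
    log.warning('Started payload was type-reduced during codec roundtrip..')

await self.chan.send(rt_started)
```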
In the particular case of the `Portal.open_context().__aexit__()` frame,
due to usage of `contextlib.asynccontextmanager`, we can't easily hook
into monkeypatching a `__tracebackhide__` set nor catch-n-reraise around
the block exit without defining our own `.__aexit__()` impl. Thus, it's
prolly most sane to do something with an override of
`contextlib._AsyncGeneratorContextManager` or the public exposed
`AsyncContextDecorator` (which uses the former internally right?).
Also fixup some old `._invoke` mod paths in comments and just show
`str(eoc)` in `.open_stream().__aexit__()` terminated-by-EoC log msg
since the `repr()` form won't pprint the IPC msg nicely..
I swear long ago it used to operate this way but, I guess this finalizes
the design decision. It makes a lot more sense to *not* propagate any
`trio.EndOfChannel` raised from a `Context.open_stream() as stream:`
block when that EoC is due to graceful-explicit stream termination.
We use the EoC much like a `StopAsyncIteration` where the error
indicates termination of the stream due to either:
- reception of a stop IPC msg indicating the far end ended the stream
(gracefully),
- closure of the underlying `Context._recv_chan` either by the runtime
or due to user code having called `MsgStream.aclose()`.
User code shouldn't expect to handle EoC outside the block since the
`@acm` having closed should indicate the exact same lifetime state
(of said stream) ;)
Deats:
- add special EoC handler in `.open_stream()` which silently "absorbs"
the error only when the stream is already marked as closed (meaning
the EoC indeed corresponds to IPC closure) with an assert for now
ensuring the error is the same as set to `MsgStream._eoc`.
- in `MsgStream.receive()` break up the handlers for EoC and
`trio.ClosedResourceError` since the error instances are saved to
different variables and we **don't** want to rewrite the exception in
the eoc case (normally to mask `trio` internals in tbs) bc we need the
instance to be the exact one for doing checks inside
`.open_stream().__aexit__()` to absorb it.
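Ie. the `__aexit__` side effectively does something like this (attr and
log-meth names are assumptions):

```python
# hedged sketch of the EoC "absorb" in `.open_stream()`
try:
    yield stream
except trio.EndOfChannel as eoc:
    if stream._closed:
        # graceful-explicit termination: the EoC just mirrors the
        # stream's already-closed state so don't propagate it
        assert eoc is stream._eoc
        log.debug(f'Stream gracefully ended via EoC:\n{str(eoc)}')
    else:
        raise
```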
Other surrounding "improvements":
- start using the new `Context.maybe_raise()` helper where it can easily
replace existing equivalent block-sections.
- use new `RemoteActorError.src_uid` as required.
Finally, since normally you need the content from `._context.Context`
and surroundings in order to effectively grok `Portal.open_context()`
anyways, might as well move the impl to the ctx module as
`open_context_from_portal()` and just bind it on the `Portal` class def.
Associated/required tweaks:
- avoid circ import on `.devx` by only importing
`.maybe_wait_for_debugger()` when debug mode is set.
- drop `async_generator` usage, not sure why this hadn't already been
changed to `contextlib`?
- use the `@acm` alias throughout `._portal`.
Previously i was trying to approach this using lots of
`__tracebackhide__`'s in various internal funcs but since it's not
exactly straightforward to do this inside core deps like `trio` and the
stdlib, it makes a bit more sense to optionally catch and re-raise
certain classes of errors from their originals using `raise from` syntax
as per:
https://docs.python.org/3/library/exceptions.html#exception-context
Deats:
- litter `._context` methods with `__tracebackhide__`/`hide_tb` which
were previously being shown but that don't need to be to application
code now that cancel semantics testing is finished up.
- i originally did the same but later commented it all out in `._ipc`
since error catch and re-raise instead in higher level layers
(above the transport) seems to be a much saner approach.
- add catch-n-reraise-from in `MsgStream.send()`/`.receive()` to avoid
seeing the depths of `trio` and/or our `._ipc` layers on comms errors.
Further this patch adds some refactoring to use the
same remote-error shipper routine from both the actor-core in the RPC
invoker:
- rename it as `try_ship_error_to_remote()` and call it from
`._invoke()` as well as its prior usage.
- make it optionally accept a `cid: str`, a `remote_descr: str` and of
course a `hide_tb: bool`.
Other misc tweaks:
- add some todo notes around `Actor.load_modules()` debug hooking.
- tweak the zombie reaper log msg and timeout value ;)
Found exactly why trying this won't work when playing around with
opening workspaces in `modden` using a `Portal.open_context()` back to
the 'bigd' root actor: the RPC machinery only registers one entry in
`Actor._contexts` which will get overwritten by each task's side and
then experience race-based IPC msging errors (eg. rxing `{'started': _}`
on the callee side..). Instead make opening a ctx back to the self-actor
a runtime error describing it as an invalid op.
To match:
- add a new test `test_ctx_with_self_actor()` to the context semantics
suite.
- tried out adding a new `side: str` to the `Actor.get_context()` (and
callers) but ran into not being able to determine the value from within
`._push_result()` where it's needed to figure out which side to push
to.. So, just leaving the commented arg (passing) in the runtime core
for now in case we can come back to trying to make it work, tho i'm
thinking it's not the right hack anyway XD
Since apparently `str(KeyboardInterrupt()) == ''`? So instead add little
`<str> or repr(merr)` expressions throughout to avoid blank strings
rendering from various `repr()`/`.__str__()` outputs..
Changes the condition logic to be more strict and moves it to a private
`._is_self_cancelled() -> bool` predicate which can be used elsewhere
(instead of having almost similar duplicate checks all over the
place..) and allows taking in a specific `remote_error` just for
verification purposes (like for tests).
Main strictness distinctions are now:
- obvi that `.cancel_called` is set (this filters any
`Portal.cancel_actor()` or other out-of-band RPC),
- the received `ContextCancelled` **must** have its `.canceller` set to
this side's `Actor.uid` (indicating we are the requester).
- `.src_actor_uid` **must** be the same as the `.chan.uid` (so the error
must have originated from the opposite side's task).
- `ContextCancelled.canceller` should be already set to the `.chan.uid`
indicating we received the msg via the runtime calling
`._deliver_msg()` -> `_maybe_cancel_and_set_remote_error()` which
ensures the error is specifically destined for this ctx-task exactly
the same as how `Actor._cancel_task()` sets it from an input
`requesting_uid` arg.
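Put together, the predicate reads approximately like this (a sketch per
the rules above, not the verbatim impl):

```python
# approximate sketch of the strict self-cancel predicate
def _is_self_cancelled(
    self,
    remote_error: BaseException|None = None,
) -> bool:
    re = remote_error or self._remote_error
    our_uid: tuple = self._actor.uid
    return bool(
        self.cancel_called
        and isinstance(re, ContextCancelled)
        and re.canceller == our_uid
        and re.src_actor_uid == self.chan.uid
    )
```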
In support of the above adjust some impl deats:
- add `Context._actor: Actor` which is set once in `mk_context()` to
avoid issues (particularly in testing) where `current_actor()` raises
after the root actor / runtime is already exited. Use `._actor.uid` in
both `.cancel_acked` (obvi) and `._maybe_cancel_and_set_remote_error()`
when deciding whether to call `._scope.cancel()`.
- always cast `.canceller` to `tuple` if not null.
- delegate `.cancel_acked` directly to new private predicate (obvi).
- always set `._canceller` from any `RemoteActorError.src_actor_uid` or
failing over to the `.chan.uid` when it's a non-remote error (tho that
shouldn't ever happen right?).
- more extensive doc-string for `.cancel()` detailing the new strictness
rules about whether an eventual `.cancel_acked` might be set.
Also tossed in even more logging format tweaks by adding a
`type_only: bool` to `.repr_outcome()` as desired for simpler output in
the `state: <outcome-repr-here>` and `.repr_rpc()` sections of the
`.__str__()`.
Spanning from the pub API, to instance `repr()` customization (for
logging/REPL content), to the impl details around the notion of a "final
outcome" and surrounding IPC msg draining mechanics during teardown.
A few API and field updates:
- new `.cancel_acked: bool` to replace what we were mostly using
`.cancelled_caught: bool` for, with the purpose of better mapping the
semantics of remote cancellation of parallel executing tasks; it's set
only when `.cancel_called` is set and a ctxc arrives with
a `.canceller` field set to the current actor uid indicating we
requested and received acknowledgement from the other side's task
that it cancelled gracefully.
- strongly document and delegate (and prolly eventually remove as a pub
attr) the `.cancelled_caught` property entirely to the underlying
`._scope: trio.CancelScope`; the `trio` semantics don't really map
well to the "parallel with IPC msging" case in the sense that for
us it breaks the concept of the ctx/scope closure having "caught"
something instead of having "received" a msg that the other side has
"acknowledged" (i.e. which for us is the completion of cancellation).
- new `.__repr__()`/`.__str__()` format that tries to display, as tersely
yet comprehensively as possible, everything you need to know about
the 3 main layers of an SC-linked-IPC-context:
* ipc: the transport + runtime layers net-addressing and prot info.
* rpc: the specific linked caller-callee task signature details
including task and msg-stream instances.
* state: current execution and final outcome state of the task pair.
* a teensie extra `.repr_rpc` for a condensed rpc signature.
- new `.dst_maddr` to get a `libp2p` style "multi-address" (though right
now it's just showing the transport layers so maybe we should move it
to our `Channel`?)
- new public instance-var fields supporting more granular remote
cancellation/result/error state:
* `.maybe_error: Exception|None` for any final (remote) error/ctxc
which computes logic on the values of `._remote_error`/`._local_error`
to determine the "final error" (if any) on termination.
* `.outcome` to the final error or result (or `None` if un-terminated)
* `.repr_outcome()` for a console/logging friendly version of the
final result or error as needed for the `.__str__()`.
- new private interface bits to support all of ^:
* a new "no result yet" sentinel value, `Unresolved`, using a module
level class singleton that `._result` is set too (instead of
`id(self)`) to both determine if and present when no final result
from the callee has-yet-been/was delivered (ever).
=> really we should get rid of `.result()` and change it to
`.wait_for_result()` (or something)u
* `_final_result_is_set()` predicate to avoid waiting for an already
delivered result.
* `._maybe_raise()` proto-impl that we should use to replace all the
`if re:` blocks it can XD
* new `._stream: MsgStream|None` for when a stream is opened to aid
with the state repr mentioned above.
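Roughly how those outcome bits hang together (the `maybe_error`
precedence logic is simplified here; treat it as a sketch):

```python
# sketch of the final-outcome accessors
class Unresolved:
    'Sentinel: no final result has (yet) been delivered from the callee.'

@property
def maybe_error(self) -> BaseException|None:
    # the "final error" (if any) derived from the remote/local pair
    return self._remote_error or self._local_error

def _final_result_is_set(self) -> bool:
    return self._result is not Unresolved

@property
def outcome(self) -> BaseException|Any|None:
    # final error, final result, or `None` if un-terminated
    return self.maybe_error or (
        self._result if self._final_result_is_set() else None
    )
```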
Tweaks to the termination drain loop `_drain_to_final_msg()`:
- obviously (obvi) use all the changes above when determining whether or
not a "final outcome" has arrived and thus breaking from the loop ;)
* like the `.outcome`, `.maybe_error` and `._final_ctx_is_set()` in
the `while` pred expression.
- drop the `_recv_chan.receive_nowait()` + guard logic since it seems
with all the surrounding (and coming soon) changes to
`Portal.open_context()` using all the new API stuff (mentioned in
first bullet set above) we never hit the case of inf-block?
Oh right and obviously a ton of (hopefully improved) logging msg content
changes, commented code removal and detailed comment-docs strewn about!
Since we use basically the exact same set of logic in
`Portal.open_context()` when expecting the first `'started'` msg, factor
out and generalize `._streaming._raise_from_no_yield_msg()` into a new
`._exceptions._raise_from_no_key_in_msg()` (as per the lingering todo)
which obvi requires a more generalized / optional signature including
a caller specific `log` obj. Obvi call the new func from all the other
modules X)
As part of extremely detailed inter-peer-actor testing, add much more
granular `Context` cancellation state tracking via the following (new)
fields:
- `.canceller: tuple[str, str]` the uuid of the actor responsible for
the cancellation condition - always set by
`Context._maybe_cancel_and_set_remote_error()` and replaces
`._cancelled_remote` and `.cancel_called_remote`. If set, this value
should normally always match a value from some `ContextCancelled`
raised or caught by one side of the context.
- `._local_error` which is always set to the locally raised (and caller
or callee task's scope-internal) error which caused any
eventual cancellation/error condition and thus any closure of the
context's per-task-side-`trio.Nursery`.
- `.cancelled_caught: bool` is now always `True` whenever the local task
catches (or "silently absorbs") a `ContextCancelled` (a `ctxc`) that
indeed originated from one of the context's linked tasks or any other
context which raised its own `ctxc` in the current `.open_context()` scope.
=> whenever there is a case that no `ContextCancelled` was raised
**in** the `.open_context().__aexit__()` (eg. `ctx.result()` called
after a call to `ctx.cancel()`), we still consider the context as
having "caught a cancellation" since the `ctxc` was indeed silently
handled by the cancel requester; all other error cases are already
represented by mirroring the state of the `._scope: trio.CancelScope`
=> IOW there should be **no case** where an error is **not raised** in
the context's scope and `.cancelled_caught: bool == False`, i.e. no
case where `._scope.cancelled_caught == False and ._local_error is not
None`!
- always raise any `ctxc` from `.open_stream()` if `._cancel_called ==
True` - if the cancellation request has not already resulted in
a `._remote_error: ContextCancelled` we raise a `RuntimeError` to
indicate improper usage to the guilty side's task code.
- make `._maybe_raise_remote_err()` a sync func and don't raise
any `ctxc` which is matched against a `.canceller` determined to
be the current actor, aka a "self cancel", and always set the
`._local_error` to any such `ctxc`.
- `.side: str` taken from inside `.cancel()` and unused as of now since
it might be better re-written as a similar `.is_opener() -> bool`?
- drop unused `._started_received: bool`..
- TONS and TONS of detailed comments/docs to attempt to explain all the
possible cancellation/exit cases and how they should exhibit as either
silent closes or raises from the `Context` API!
Adjust the `._runtime._invoke()` code to match:
- use `ctx._maybe_raise_remote_err()` in `._invoke()`.
- adjust to new `.canceller` property.
- more type hints.
- better `log.cancel()` msging around self-cancels vs. peer-cancels.
- always set the `._local_error: BaseException` for the "callee" task
just like `Portal.open_context()` now will do B)
Prior we were raising any `Context._remote_error` directly and doing
(more or less) the same `ContextCancelled` "absorbing" logic (well
kinda) in the block; instead delegate to the method.
Well first off, turns out it's never used and generally speaking
doesn't seem to help much with "runtime hacking/debugging"; why would
we need to "fabricate" a msg when `.cancel()` is called to self-cancel?
Also (and since `._maybe_cancel_and_set_remote_error()` now takes an
`error: BaseException` as input and thus expects error-msg unpacking
prior to being called), we now manually set `Context._cancel_msg: dict`
just prior to any remote error assignment - so any case where we would
have fabbed a "cancel msg" near calling `.cancel()`, just do the manual
assign.
In this vein some other subtle changes:
- obviously don't set `._cancel_msg` in `.cancel()` since it's no longer
an input.
- generally do walrus-style `error := unpack_error()` before applying
and setting remote error-msg state.
- always raise any `._remote_error` in `.result()` instead of returning
the exception instance and check before AND after the underlying mem
chan read.
- add notes/todos around `raise self._remote_error from None` masking of
(runtime) errors in `._maybe_raise_remote_err()` and use it inside
`.result()` since we had the inverse duplicate logic there anyway..
Further, this adds and extends a ton of (internal) interface docs and
details comments around the `Context` API including many subtleties
pertaining to calling `._maybe_cancel_and_set_remote_error()`.
Previously we weren't raising a remote error if the local scope was
cancelled during a call to `Context.result()` which is problematic if
the caller WAS NOT the requester for said remote cancellation; in that
case we still want a `ContextCancelled` raised with the `.canceller:
str` set to the cancelling actor uid.
Further fix a naming bug where the (seemingly older) `._remote_err` was
being set to such an error instead of `._remote_error` XD
Where `.devx` is "developer experience", a hopefully broad enough subpkg
name for all the slick stuff planned to augment working on the actor
runtime 💥
Move the `._debug` module into the new subpkg and adjust rest of core
code base to reflect import path change. Also add a new
`.devx._debug.open_crash_handler()` manager for wrapping any sync code
outside a `trio.run()` which is handy for eventual CLI addons for
popular frameworks like `click`/`typer`.
- `Context._cancel_called_remote` -> `._cancelled_remote` since "called"
implies the cancellation was "requested" when it could be due to
another error and the actor uid is the value - only set once the far
end task scope is terminated due to either error or cancel, which has
nothing to do with *what* caused the cancellation.
- `Actor._cancel_called_remote` -> `._cancel_called_by_remote` which
emphasizes that this variable is **only set** IFF some remote actor
**requested that** this actor's runtime be cancelled via
`Actor.cancel()`.