Allow callers to stick in a header to the `.pdb()` level emitted msg(s)
such that any "waiting status" content is only shown if the caller
actually gets blocked waiting for the debug lock; use it inside the
`._spawn` sub-process reaper call.
Also, return early if `Lock.global_actor_in_debug` is `None` and thus
only enter the poll loop when actually needed; consequently, raise
if we fall through the loop without acquisition.
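Roughly the intended flow, in sketch form (arg names, defaults and the
`Lock` attr layout here approximate the private `._debug` internals):

```python
import trio
from tractor._debug import Lock  # private runtime internals
from tractor.log import get_logger

log = get_logger(__name__)

async def maybe_wait_for_debugger(
    poll_steps: int = 8,
    poll_delay: float = 0.1,
    header_msg: str = '',  # new: prefixed to any "waiting status" msg
) -> None:
    if Lock.global_actor_in_debug is None:
        # no (sub)actor holds the debug lock => skip polling entirely
        return

    # only emitted when we may actually block below
    log.pdb(header_msg + 'Waiting on the debug lock..')

    for _ in range(poll_steps):
        if Lock.global_actor_in_debug is None:
            break  # released by its (remote) user
        await trio.sleep(poll_delay)
    else:
        # fell through the poll loop without acquisition/release
        raise RuntimeError('Debug lock never released!?')
```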
Since a bug in the new `MsgStream.aclose()` impl's drain-block logic was
triggering an actual inf loop (by never cancelling the streamer child
actor), make sure we put a loop limit on `inf_streamer()` XD
Also add a bit more deats to the test `print()`s in each actor and toss
in `debug_mode` fixture support.
Besides improving a bunch more log msg contents similarly to before,
this changes the cancel method signatures slightly with different arg names:
for `.cancel()`:
- instead of `requesting_uid: str` take in a `req_chan: Channel`
since we can always just read its `.uid: tuple` for logging and
further we can then offer the `chan=None` case indicating a
"self cancel" (since there's no "requesting channel").
- the semantics of "requesting" here better indicate that the requester
  is an IPC peer, and further this will (eventually) allow permission
  checking against given peers for cancellation requests.
- when `chan==None` we also define a meth-internal `requester_type: str`
differently for logging content :)
- add much more detailed `.cancel()` content around the requester, its
type, and any debugger related locking steps.
for `._cancel_task()`:
- change the `chan` arg to `parent_chan: Channel` since "parent"
correctly indicates that the channel is the parent of the locally
spawned rpc task to cancel; in fact no other chan should be able to
cancel tasks parented/spawned by other channels obvi!
- also add more extensive meth-internal `.cancel()` logging with a #TODO
around showing only the "relevant/latest" `Context` state vars in such
logging content.
for `.cancel_rpc_tasks()`:
- shorten `requesting_uid` -> `req_uid`.
- add `parent_chan: Channel` to be similar as above in `._cancel_task()`
(since it's internally delegated to anyway) which replaces the prior
`only_chan` and use it to filter to only tasks spawned by this channel
(thus as their "parent") as before.
- instead of `if tasks:` to enter, invert and `return` early on
`if not tasks`, for less indentation B)
- add WIP str-repr format (for `.cancel()` emissions) to show
a multi-address (maddr) + task func (via the new `Context._nsf`) and
report all cancel task targets with it in a "tree"; include a #TODO to
finalize and implement some utils for all this!
To match, adjust the `process_messages()` self/`Actor` cancel
handling blocks to provide the new `kwargs` (now with `dict`-merge
syntax) to `._invoke()`.
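Roughly, the adjusted signatures (a sketch only, bodies elided):

```python
from __future__ import annotations

class Actor:
    async def cancel(
        self,
        req_chan: Channel | None,  # `None` => a "self cancel"
    ) -> bool:
        # meth-internal and only used to shape log content
        requester_type: str = 'peer' if req_chan else 'self'
        ...

    async def _cancel_task(
        self,
        cid: str,
        parent_chan: Channel,  # the chan that spawned the rpc task
    ) -> bool:
        ...

    async def cancel_rpc_tasks(
        self,
        req_uid: tuple[str, str],
        # replaces the prior `only_chan`: filter to only tasks
        # spawned by (parented to) this channel
        parent_chan: Channel | None = None,
    ) -> None:
        tasks: dict = {}  # filtered from the runtime's rpc-task table
        if not tasks:
            return  # inverted early-return => less indentation B)
        ...
```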
Turns out that py3.11 might be so fast that iterating an EoC-ed
`MsgStream` 1k times is faster than a `Context.cancel()` msg
transmission from a parent actor to its child (which I guess makes
sense). So tweak the test to delay 5ms between stream async-for iteration
attempts when the stream is detected to be `.closed: bool` (coming in
a later patch) or `ctx.cancel_called == True`.
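Test-side, the tweak is roughly (illustrative only; the `.closed` attr
lands in the mentioned follow-up patch):

```python
import trio

async def drain_until_cancelled(stream, ctx) -> None:
    async for msg in stream:
        if stream.closed or ctx.cancel_called:
            # pause so the parent's `ctx.cancel()` msg can actually
            # land; py3.11 can spin 1k iterations before it arrives
            await trio.sleep(0.005)  # 5ms
```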
Such that you see the child entries prior to exit instead of the
prior somewhat detail-less/useless logging. Also, rename all `anursery`
vars to just `an` as is the convention in most examples.
Since that's what we're now doing in `MsgStream._eoc` internal
assignments (coming in a future patch), do the same in this exception
re-raise helper and include a more extensive docstring detailing all
the msg-type-to-raised-error cases. Also expose a `hide_tb: bool` like
we already have in `unpack_error()`.
As similarly improved in other parts of the runtime, this adds much more
pedantic (`.cancel()`) logging content to indicate the src of a remote
cancellation request, particularly for `Actor.cancel()` and
`._cancel_task()` cases prior to `._invoke()` task scheduling. Also add
detailed case comments and much more info to the
"request-to-cancel-already-terminated-RPC-task" log emission to include
the `Channel` and `Context.cid` deats.
This helped me find the src of a race condition causing a test to fail
where a callee ctx task was returning a result *before* an expected
`ctx.cancel()` request arrived B). Adding much more pedantic
`.cancel()` msg contents around the requester's deats should ensure
these cases are much easier to detect going forward!
Also, simplify the `._invoke()` final result/error log msg to only put
*one of either* the final error or returned result above the `Context`
pprint.
The only case where we can't is in `Portal.run_from_ns()` usage (since we
pass a path with `self:<Actor.meth>`) because `.to_tuple()`
internally uses `.load_ref()` which will of course fail on such a path..
So for now impl as,
- mk `Actor.start_remote_task()` take a `nsf: NamespacePath` but also
offer a `load_nsf: bool = False` such that by default we bypass ref
loading (maybe this is fine for perf in the long run as well?) for the
`Actor`/`'self:'` case mentioned above.
- mk `.get_context()` take an instance `nsf` obvi.
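Sketch of the adjusted entrypoint (arg ordering hypothetical, body
elided):

```python
from __future__ import annotations

async def start_remote_task(
    self,
    chan: Channel,
    nsf: NamespacePath,
    kwargs: dict,
    # `Portal.run_from_ns()` passes 'self:<Actor.meth>' style paths
    # which can never be ref-loaded, so bypass by default
    load_nsf: bool = False,
) -> Context:
    if load_nsf:
        nsf.load_ref()
    ...
```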
More logging msg format tweaks:
- change msg-flow related content to show the `Context._nsf`, which,
  right, is coming in a follow-up commit..
- bunch more `.runtime()` format updates to show `msg: dict` contents
and internal primitives with trailing `'\n'` for easier reading.
- report import loading `stackscope` in subactors.
When entered by the root actor avoid excessive polling cycles by,
- blocking on the `Lock.no_remote_has_tty: trio.Event` and breaking
*immediately* when set (though we should really also lock
it from the root right?) to avoid extra loops..
- shielding the `await trio.sleep(poll_delay)` call to avoid any local
cancellation causing the (presumably root-actor task) caller to move
on (possibly to cancel its children) and instead to continue
poll-blocking until the lock is actually released by its user.
- `break` the poll loop immediately if no remote locker is detected.
- use `.pdb()` level for reporting lock state changes.
Also add a #TODO to handle calls by non-root actors as it pertains to
this same lock-wait logic.
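The root-actor flavor then reads roughly as (a sketch; `Lock` attrs per
the text above, loop shape approximate):

```python
import trio
from tractor._debug import Lock  # private runtime internals

async def root_wait_on_debug_lock(poll_delay: float = 0.1) -> None:
    while True:
        if Lock.global_actor_in_debug is None:
            # no remote locker detected => don't poll at all
            break

        # shielded so a local cancellation can't cause the root-actor
        # task caller to move on (possibly to cancel its children)
        # while the lock is still held by its user
        with trio.CancelScope(shield=True):
            await trio.sleep(poll_delay)

        # block on the tty-release event and loop back *immediately*
        # once set (which should then break above)
        await Lock.no_remote_has_tty.wait()
```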
Such that with `--tpdb` passed, (sub)actors will engage the `pdbp` REPL
automatically, and so we can use the new `stackscope` support when
complex cases hang Bo
Also,
- simplified some type-annots (ns paths),
- doc-ed an inter-peer test func with some ASCII msg flows,
- added a bottom #TODO for replicating the scenario I hit in `modden`
where a separate client actor-tree was hanging on cancelling a `bigd`
sub-workspace..
If `stackscope` is importable and `debug_mode` is enabled then by
default we call `.devx.enable_stack_on_sig()` and report that it's set B)
This makes debugging unexpected (SIGINT ignoring) hangs a cinch!
Given I just similarly revamped a buncha `._runtime` log msg formatting,
might as well do something similar inside the spawning machinery such
that grokking teardown sequences of each supervising task is much more
sane XD
Mostly this includes doing similar `'<field>: <value>\n'` multi-line
formatting when reporting various subproc supervision steps as well as
showing a detailed `trio.Process.__repr__()` as appropriate.
Also adds a detailed #TODO according to the needs of #320 for which
we're going to need some internal mechanism for intermediary parent
actors to determine if a given debug tty locker (sub-actor) is one of
*their* (transitive) children and thus stall the normal
cancellation/teardown sequence until that locker is complete.
Can be optionally enabled via a new `enable_stack_on_sig()` which will
swap in the SIGUSR1 handler. Much thanks to @oremanj for writing this
amazing project, it's thus far helped me fix some very subtle hangs
inside our new IPC-context cancellation machinery that would have
otherwise taken much more manual pdb-ing and hair pulling XD
Full credit for `dump_task_tree()` goes to the original project author
with some minor tweaks as was handed to me via the trio-general matrix
room B)
Slight changes from orig version:
- use a `log.pdb()` emission to pprint to console
- toss in an example sh CLI cmd to trigger the dump from another terminal
  using `kill` + `pgrep`.
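The rough shape (lightly adapted from the orig; `stackscope.extract()`
usage per its docs):

```python
import signal
import trio
import stackscope

def dump_task_tree(sig=None, frame=None) -> None:
    # extract & pprint the full `trio` task tree; the in-repo version
    # emits via a `log.pdb()` call instead of bare `print()`
    tree = stackscope.extract(
        trio.lowlevel.current_root_task(),
        recurse_child_tasks=True,
    )
    print(str(tree))

def enable_stack_on_sig(sig: int = signal.SIGUSR1) -> None:
    # swap in the handler; trigger the dump from another terminal
    # with something like: kill -SIGUSR1 $(pgrep -f '<your-script>')
    signal.signal(sig, dump_task_tree)
```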
Allows tests (including any `@tractor_test`s) to subscribe to a CLI flag
`--tpdb` (for "tractor python debugger") which the session can provide
to tests which can then proxy the value to `open_root_actor()` (via
`open_nursery()`) when booting the runtime - thus enabling our debug
mode globally to any subscribers B)
This is real handy if you have some failures but can't determine the
root issue without jumping into a `pdbp` REPL inside a (sub-)actor's
spawned-task.
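The plumbing is the usual pytest fare, roughly:

```python
import pytest

def pytest_addoption(parser):
    parser.addoption(
        '--tpdb',
        action='store_true',
        dest='debug_mode',
        default=False,
        help='enable tractor debug mode for spawned (sub-)actors',
    )

@pytest.fixture
def debug_mode(request) -> bool:
    # tests proxy this to `open_root_actor()` (via `open_nursery()`)
    # when booting the runtime
    return request.config.option.debug_mode
```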
As part of solving some final edge cases todo with inter-peer remote
cancellation (particularly a remote cancel from a separate actor
tree-client hanging on the request side in `modden`..) I needed less
dense, more line-delimited log msg formats when understanding ipc
channel and context cancels from console logging; this adds a ton of
that to:
- `._invoke()` which now does,
- better formatting of `Context`-task info as multi-line
`'<field>: <value>\n'` messages,
  - use of the `trio.Task` (via `trio.lowlevel.current_task()`) for full
    rpc-func namespace-path info,
- better "msg flow annotations" with `<=` for understanding
`ContextCancelled` flow.
- `Actor._stream_handler()` wherein we break down IPC peer reporting
  better as multi-line `|_<Channel>` log msgs instead of all jammed on
  one line..
- `._ipc.Channel.send()` now uses `pformat()` for the packet repr.
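For flavor, an emission in the new style looks something like
(illustrative only):

```python
from __future__ import annotations

def report_remote_cancel(log, ctx, nsf) -> None:
    # multi-line '<field>: <value>\n' fields with a `<=` msg-flow
    # annotation marking the inbound `ContextCancelled`
    log.cancel(
        'Context cancelled by peer\n'
        f'<= canceller: {ctx.canceller}\n'
        f'cid: {ctx.cid}\n'
        f'rpc-func: {nsf}\n'
    )
```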
Also tweak some optional deps imports for debug mode:
- add `maybe_import_gb()` for attempting to import `greenback`.
- maybe enable `stackscope` tree pprinter on `SIGUSR1` if installed.
Add a further stale-debugger-lock guard before removal:
- read the `._debug.Lock.global_actor_in_debug: tuple` uid and possibly
`maybe_wait_for_debugger()` when the child-user is known to have
a live process in our tree.
- only cancel `Lock._root_local_task_cs_in_debug: CancelScope` when
the disconnected channel maps to the `Lock.global_actor_in_debug`,
though not sure this is correct yet?
Started adding missing type annots in sections that were modified.
Since it's generally useful to know who caused an overrun (say
bc you want your system to then adjust the writer side to slow tf down),
might as well pack an extra `.sender: tuple[str, str]` actor uid field
which can be relayed through `RemoteActorError` boxing. Add an extra
case for the exc-type to `unpack_error()` to match B)
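A hedged sketch of the shape (the exc-type base and field handling may
differ from the actual `._exceptions` impl):

```python
import trio

class StreamOverrun(trio.TooSlowError):
    def __init__(
        self,
        msg: str,
        # actor uid of the offending writer side, relayed via
        # `RemoteActorError` boxing + the new `unpack_error()` case
        sender: tuple[str, str] | None = None,
    ) -> None:
        super().__init__(msg)
        self.sender = sender
```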
Originally designed and used throughout `piker`, the subtype adds some
pprinting and field-diffing extras often handy when viewing struct
types in logging or REPL console interfaces B)
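Something in the spirit of (hypothetical minimal version; the real
subtype packs more extras):

```python
from msgspec import Struct

class PprintStruct(Struct):  # hypothetical name
    def pformat(self) -> str:
        # one '<field>: <value>' line per field; handy when viewing
        # struct types in logs or REPL consoles
        return '\n'.join(
            f'{name}: {getattr(self, name)!r}'
            for name in self.__struct_fields__
        )
```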
Obvi this rejigs the `tractor.msg` mod into a sub-pkg and moves the
existing namespace obj-pointer stuff into a new `.msg.ptr` sub mod.
Since we use basically the exact same set of logic in
`Portal.open_context()` when expecting the first `'started'` msg, factor
and generalize `._streaming._raise_from_no_yield_msg()` into a new
`._exceptions._raise_from_no_key_in_msg()` (as per the lingering todo)
which obvi requires a more generalized / optional signature including
a caller-specific `log` obj. Obvi call the new func from all the other
modules X)
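The generalized signature, in sketch form (param names approximate):

```python
from __future__ import annotations

def _raise_from_no_key_in_msg(
    ctx: Context,
    msg: dict,
    src_err: KeyError,
    log: StackLevelAdapter,  # caller-specific logger obj
    expect_key: str = 'yield',  # 'started' from `Portal.open_context()`
    hide_tb: bool = True,  # as also exposed in `unpack_error()`
) -> None:
    __tracebackhide__: bool = hide_tb
    ...
```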
Apparently (and I don't know if this was always broken [I feel like no?]
or is a recent change to stdlib's `logging` stuff) we need to increment
the `stacklevel` input by one for our custom level methods now? Without
this you're going to see the path to the method's own callstack frame on
every emission instead of to the caller's. I first noticed this when
debugging the workspace layer spawning in `modden.bigd` and then verified
it in other dependent projects..
I guess we should add some tests for this as well XD
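A minimal repro of the fix (custom level int and adapter name
approximate):

```python
import logging

PDB_LEVEL: int = 500  # our custom 'pdb' level (value approximate)

class StackLevelAdapter(logging.LoggerAdapter):
    def pdb(self, msg: str, *args, **kwargs) -> None:
        # without the extra +1 on `stacklevel`, `%(filename)s` and
        # `%(lineno)d` render *this* method's frame on every emission
        # instead of the caller's
        self.log(PDB_LEVEL, msg, *args, stacklevel=3, **kwargs)
```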
Took me longer than I wanted to figure out the source of
a failed-response to a remote-cancellation (in this case in `modden`
where a client was cancelling a workspace layer.. but disconnected before
receiving the ack msg) that was triggering an IPC error when sending the
error msg for the cancellation of an `Actor._cancel_task()`: since
this (non-rpc) `._invoke()` task was trying to send to a now-disconnected
canceller, it was resulting in a `BrokenPipeError` (or similar) error.
Now, we catch such IPC errors and only (re)raise them when,
1. the transport `Channel` is for sure up (bc ow what's the point of
trying to send an error on the thing that caused it..)
2. it's definitely for handling an RPC task
Similarly if the entire main invoke `try:` excepts,
- we only hide the call-stack frame from the debugger (with
`__tracebackhide__: bool`) if it's an RPC task that has a connected
channel since we always want to see the frame when debugging internal
task or IPC failures.
- we don't bother trying to send errors to the context caller (actor)
when it's a non-RPC request since failures on actor-runtime-internal
tasks shouldn't really ever be reported remotely, only maybe raised
locally.
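In sketch form (hypothetical helper name; the exact exc-set may differ):

```python
import trio

async def try_ship_error_to_requester(
    chan,  # `Channel` to the (maybe disconnected) requester
    err_msg: dict,
    is_rpc: bool,
) -> None:
    try:
        await chan.send(err_msg)
    except (
        trio.ClosedResourceError,
        trio.BrokenResourceError,
        BrokenPipeError,
    ):
        # only bubble the transport error when,
        # 1. the chan is (somehow) still up, bc ow what's the point
        #    of raising on the thing that caused it..
        # 2. this was handling a real RPC task; runtime-internal
        #    failures are only ever reported/raised locally
        if chan.connected() and is_rpc:
            raise
```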
Also some other tidying,
- this properly corrects for the self-cancel case where an RPC context
is cancelled due to a local (runtime) task calling a method like
`Actor.cancel_soon()`. We now set our own `.uid` as the
`ContextCancelled.canceller` value so that other-end tasks know that
the cancellation was due to a self-cancellation by the actor itself.
We still need to properly test for this though!
- add a more detailed module doc-str.
- more explicit imports for `trio` core types throughout.