Ensuring we can at least use `breakpoint()` from an infected actor's
`asyncio.Task` spawned via a `.to_asyncio` API.
Also includes a little `tests/devx/` reorging,
- start splitting out non-`tractor.pause()` tests into a new
`test_pause_from_non_trio.py` for all the `.pause_from_sync()`
use in bg-threaded or `asyncio` applications.
- factor harness commonalities to the `devx/conftest` (namely
the `do_ctlc()` masher).
- mv `test_pause_from_sync` to the new non-`trio` mod.
NOTE, the `ctlc=True` is still failing for
`test_pause_from_asyncio_task` which is a user-happiness bug but not
anything fundamentally broken - just need to handle the `asyncio` case
in `.devx._debug.sigint_shield()`!
Mostly due to magic from @oremanj where we slap in a little bit of
`.from_asyncio`-type stuff to run a `trio`-task from `asyncio.Task`
code!
I'm not gonna go into too much detail but basically the primary
thing needed was a way to (blockingly) invoke a `trio.lowlevel.Task`
from an `asyncio` one (which we now have with a new
`run_trio_task_in_future()` thanks to draft code from the aforementioned
jefe), which we invoke from a dedicated aio case-branch inside
`.devx._debug.pause_from_sync()`. Further include a case inside
`DebugStatus.release()` to handle using the same func to set the
`repl_release: trio.Event` from the aio side when releasing the REPL on
exit cmds.
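For reference, a minimal sketch of what such a bridge can look like,
assuming (as in guest mode) the `trio` run and `asyncio` loop share
a thread and that a `trio.lowlevel.TrioToken` plus a long-lived open
`trio.Nursery` were captured at `trio`-side startup; this is NOT the
actual impl, just the gist:
```python
import asyncio
import trio

def run_trio_task_in_future(
    async_fn,  # `trio`-side coro fn to invoke
    trio_token: trio.lowlevel.TrioToken,  # captured at `trio` boot
    nursery: trio.Nursery,  # long-lived, already-open nursery
) -> asyncio.Future:
    '''
    Schedule `async_fn` as a `trio` task and deliver its result
    via an `asyncio.Future`.

    '''
    loop = asyncio.get_running_loop()
    future: asyncio.Future = loop.create_future()

    async def wrapper() -> None:
        try:
            result = await async_fn()
        except trio.Cancelled:
            loop.call_soon_threadsafe(future.cancel)
            raise  # `Cancelled` must always propagate
        except BaseException as berr:
            loop.call_soon_threadsafe(future.set_exception, berr)
        else:
            loop.call_soon_threadsafe(future.set_result, result)

    # `.run_sync_soon()` is `trio`'s only "from outside the run"
    # entry point; use it to spawn the wrapper as a fresh task.
    trio_token.run_sync_soon(nursery.start_soon, wrapper)
    return future
```
The `asyncio` side can then just `await` the returned future
(blockingly from its point of view) to get the `trio` task's result.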
Prolly more refinements to come ;{o
It was expecting `AssertionError` as a proceed-in-test signal (by
breaking from a continue loop), but `in_prompt_msg(raise_on_err=True)`
was changed to raise `ValueError`; so instead just use it as a predicate
for the `break`.
Also rework `in_prompt_msg()` to accept the `child: BaseSpawn` as input
instead of `before: str`, remove the casting boilerplate, and adjust all
usage to match.
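Roughly the new helper shape, with an illustrative-only body (note
`pexpect`'s actual base type is spelled `SpawnBase`):
```python
from pexpect.spawnbase import SpawnBase as BaseSpawn

def in_prompt_msg(
    child: BaseSpawn,  # the `pexpect` child handle
    parts: list[str],  # all strings expected in the prompt output
    raise_on_err: bool = False,
) -> bool:
    # do the `.before` cast OAOO in here instead of at every call site
    before: str = str(child.before.decode())
    for part in parts:
        if part not in before:
            if raise_on_err:
                # NB: changed from `AssertionError`, per above
                raise ValueError(f'Missing part: {part!r}')
            return False
    return True
```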
By re-purposing our `pexpect`-based console matching with a new
`debugging/shield_hang_in_sub.py` example, this tests a few "hanging
actor" conditions more formally:
- that despite an actor's hanging task we can dump
a `stackscope.extract()` tree on relay of `SIGUSR1`.
- the actor tree will terminate despite a shielded forever-sleep by our
"T-800" zombie reaper machinery activating and hard killing the
underlying subprocess.
Some test deats:
- simulates the expected actions of a real user by manually using
`os.kill()` to send both signals to the actor-tree program.
- `pexpect`-matches against `log.devx()` emissions under normal
`debug_mode == True` usage.
- ensure we get the actual "T-800 deployed" `log.error()` msg and
that the actor tree eventually terminates!
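In spirit the suite does something like the following, where the
expected patterns are placeholders, not the verbatim emissions:
```python
import os
import signal
import pexpect

child = pexpect.spawn(
    'python examples/debugging/shield_hang_in_sub.py',
)
# wait for the sub to enter its shielded forever-sleep..
child.expect('hanging')  # placeholder patt

# simulate the user requesting a stack dump
os.kill(child.pid, signal.SIGUSR1)
child.expect('stackscope')  # some `log.devx()` tree content

# now request termination; the shielded sleep forces the zombie
# reaper to hard-kill the underlying subproc
os.kill(child.pid, signal.SIGINT)
child.expect('T-800 deployed')  # the `log.error()` msg
child.expect(pexpect.EOF)  # tree eventually terminates!
```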
Surrounding (re-org/impl/test-suite) changes:
- allow disabling usage via a `maybe_enable_greenback: bool` to
`open_root_actor()` but enable by def.
- pretty up the actual `.devx()` content from `.devx._stackscope`
including being extra pedantic about the conc-primitives for each signal
event.
- try to avoid double handles of `SIGUSR1` even though it seems the
original (what i thought was a) problem was actually just double
logging in the handler.. (see the guard sketch after this list)
|_ avoid double applying the handler func via `signal.signal()`,
|_ use a global to avoid double handle func calls and,
|_ a `threading.RLock` around handling.
- move common fixtures and helper routines from `test_debugger` to
`tests/devx/conftest.py` and import them for use in both test mods.
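The double-handle guard gist, as a sketch of the pattern rather than
the exact impl:
```python
import signal
import threading

_handler_lock = threading.RLock()
_handling: bool = False  # module-global re-entrancy guard

def sigusr1_handler(signum, frame) -> None:
    global _handling
    with _handler_lock:
        if _handling:
            return  # already mid-dump, skip the double call
        _handling = True
    try:
        print('dumping `stackscope` tree..')  # stand-in for the dump
    finally:
        with _handler_lock:
            _handling = False

# avoid double-applying via `signal.signal()`
if signal.getsignal(signal.SIGUSR1) is not sigusr1_handler:
    signal.signal(signal.SIGUSR1, sigusr1_handler)
```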
The final issue was making sure we do the same thing on ctl-c/SIGINT
from the user. That is, if there's already a bg-thread in REPL, we
`log.pdb()` about SIGINT shielding and re-draw the prompt; the same UX
as normal actor-runtime-task behaviour.
Reasons this wasn't workin.. and the fix:
- `.pause_from_sync()` was overriding the local `repl` var with `None`
delivered by (transitive) calls to `_pause(debug_func=None)`.. so
remove all that and only assign it OAOO prior to thread-type case
branching.
- always call `DebugStatus.shield_sigint()` as needed from all requesting
threads/tasks:
- in `_pause_from_bg_root_thread()` BEFORE calling `._pause()` AND BEFORE
yielding back to the bg-thread via `.started(out)` to ensure we're
definitely overriding the handler in the `trio`-main-thread task
before unblocking the requesting bg-thread.
- from any requesting bg-thread in the root actor such that both its
main-`trio`-thread scheduled task (as per above bullet) AND it are
SIGINT shielded.
- always call `.shield_sigint()` BEFORE any `greenback._await()` case
(don't entirely grok why yet, but it works)?
- for `greenback._await()` case always set `bg_task` to the current one..
- tweaks to the `SIGINT` handler, now renamed `sigint_shield()` so as
not to name-collide with the methods when editor-searching:
- always try to `repr()` the REPL thread/task "owner" as well as the
active `PdbREPL` instance.
- add `.devx()` notes around the prompt flushing deats and comments
for any root-actor-bg-thread edge cases.
Related/supporting refinements:
- add `get_lock()`/`get_debug_req()` factory funcs since the plan is to
eventually implement both as `@singleton` instances per actor.
- fix `acquire_debug_lock()`'s call-sig-bug for scheduling
`request_root_stdio_lock()`..
- in `._pause()` only call `mk_pdb()` when `debug_func != None`.
- add some todo/warning notes around the `cls.repl = None` in
`DebugStatus.release()`..
`test_pause_from_sync()` tweaks:
- don't use an `attach_patts.copy()`, since we always `break` on match.
- do `pytest.fail()` on that ^ loop's fallthrough..
- pass `do_ctlc(child, patt=attach_key)` such that we always match the
current thread's name with the ctl-c triggered `.pdb()` emission.
- oh yeah, return the last `before: str` from `do_ctlc()`.
- in the script, flip `abandon_on_cancel=True` since when `False` it
seems to cause `trio.run()` to hang on exit from the last bg-thread
case?!?
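The flag's semantics in a nutshell (standalone `trio >= 0.23` demo,
not the script itself):
```python
import time
import trio

def blocking_fn() -> None:
    time.sleep(999)  # stand-in for a thread stuck at a REPL prompt

async def main() -> None:
    with trio.move_on_after(1):
        # `abandon_on_cancel=False` would *wait* for the thread to
        # finish on cancel (hanging us here); `True` abandons it so
        # `trio.run()` can exit.
        await trio.to_thread.run_sync(
            blocking_fn,
            abandon_on_cancel=True,
        )

trio.run(main)
```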
Originally discovered while using `tractor.pause_from_sync()`
from the `i3ipc` client running in a bg-thread that uses `asyncio`
inside `modden`.
Turns out we definitely aren't correctly handling `.pause_from_sync()`
from the root actor when called from a `trio.to_thread.run_sync()`
bg thread:
- root-actor bg threads which can't `Lock._debug_lock.acquire()` since
they aren't in `trio.Task`s.
- even if scheduled via `.to_thread.run_sync(_debug._pause)` the
acquirer won't be the task/thread which calls `Lock.release()` from
`PdbREPL` hooks; this results in an RTE raised by `trio`..
- multiple threads will step on each other's stdio since cpython's GIL
seems to ctx switch threads on every input from the user to the REPL
loop..
Reproduce via reworking our example and test so that they catch and fail
for all edge cases:
- rework the `/examples/debugging/sync_bp.py` example to demonstrate the
above issues, namely the stdio clobbering in the REPL when multiple
threads and/or a subactor try to debug simultaneously.
|_ run one thread using a task nursery to ensure it runs conc with the
nursery's parent task.
|_ ensure the bg threads run conc with a subactor's usage of
`.pause_from_sync()`.
|_ gravely detail all the special cases inside a TODO comment.
|_ add some control flags to `sync_pause()` helper and don't use
`breakpoint()` by default.
- extend and adjust `test_debugger.test_pause_from_sync` to match (and
thus currently fail) by ensuring exclusive `PdbREPL` attachment when
the 2 bg root-actor threads are concurrently interacting alongside the
subactor:
|_ should only see one of the `_pause_msg` logs at a time for either
one of the threads or the subactor.
|_ ensure each attaches (in no particular order) before expecting the
script to exit.
Impl adjustments to `.devx._debug`:
- drop `Lock.repl`, no longer used.
- add `Lock._owned_by_root: bool` for the `.ctx_in_debug == None`
root-actor-task active case.
- always `log.exception()` for any `._debug_lock.release()` ownership
RTE emitted by `trio`, like we used to..
- add special `Lock.release()` log message for the stale lock but
`._owned_by_root == True` case; oh yeah and actually
`log.devx(message)`..
- rename `Lock.acquire()` -> `.acquire_for_ctx()` since it's only ever
used from subactor IPC usage; well that and for local root-task
usage we should prolly add a `.acquire_from_root_task()`?
- buncha `._pause()` impl improvements:
|_ type `._pause()`'s `debug_func` as a `partial` as well.
|_ offer `called_from_sync: bool` and `called_from_bg_thread: bool`
for the special case handling when called from `.pause_from_sync()`
|_ only set `DebugStatus.repl/repl_task` when `debug_func != None`
(OW ensure the `.repl_task` is not the current one).
|_ handle error logging even when `debug_func is None`..
|_ lotsa detailed commentary around root-actor-bg-thread special cases.
- when `._set_trace(hide_tb=False)` do `pdbp.set_trace(frame=currentframe())`
so the `._debug` internal frames are always included.
- by default always hide tracebacks for `.pause[_from_sync]()` internals.
- improve `.pause_from_sync()` to avoid root-bg-thread crashes:
|_ pass new `called_from_xxx_` flags and ensure `DebugStatus.repl_task`
is actually set to the `threading.current_thread()` when needed.
|_ manually call `Lock._debug_lock.acquire_nowait()` for the non-bg
thread case.
|_ TODO: still need to implement the bg-thread case using a bg
`trio.Task`-in-thread with an `trio.Event` set by thread REPL exit.
It's been prepped for a long time and is now finally implemented!
Offer a `shield: bool` argument from our async `._debug` APIs:
- `await tractor.pause(shield=True)`,
- `await tractor.post_mortem(shield=True)`
^-These-^ can now be used inside cancelled `trio.CancelScope`s,
something very handy when introspecting complex (distributed) system
tear/shut-downs particularly under remote error or (inter-peer)
cancellation conditions B)
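E.g. a sketch of the now-supported pattern, runtime-entry boilerplate
assumed:
```python
import trio
import tractor

async def main() -> None:
    async with tractor.open_root_actor(debug_mode=True):
        with trio.CancelScope() as cs:
            cs.cancel()
            # an unshielded checkpoint here would just raise
            # `Cancelled`; with `shield=True` the REPL engages
            # despite the cancelled enclosing scope B)
            await tractor.pause(shield=True)

trio.run(main)
```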
Thanks to previous prepping in a prior attempt and various patches from
the rigorous rework of `.devx._debug` internals around typed msg specs,
there ain't much that was needed!
Impl deats
- obvi passthrough `shield` from the public API endpoints (was already
done from a prior attempt).
- put ad-hoc internal `with trio.CancelScope(shield=shield):` around all
checkpoints inside `._pause()` for both the root-process and subactor
case branches.
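The internal gist of why the cancelled-scope case now works, as
a pure-`trio` demo:
```python
import trio

async def _pause_like(shield: bool) -> None:
    # mirror of the internal pattern: wrap every checkpoint so
    # a cancelled enclosing scope can't preempt REPL entry
    with trio.CancelScope(shield=shield):
        await trio.sleep(0)  # raises `Cancelled` iff unshielded

async def main() -> None:
    with trio.CancelScope() as cs:
        cs.cancel()
        await _pause_like(shield=True)  # completes despite cancel

trio.run(main)
```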
Add a fairly rigorous example, `examples/debugging/shielded_pause.py`
with a wrapping `pexpect` test, `test_debugger.test_shield_pause()` and
ensure it covers as many cases as i can think of offhand:
- multiple `.pause()` entries in a loop despite parent scope
cancellation in a subactor RPC task which itself spawns a sub-task.
- a `trio.Nursery.parent_task` which raises, is handled and
tries to enter an unshielded `.post_mortem()`, which of course
internally raises `Cancelled` in a `._pause()` checkpoint, so we catch
the `Cancelled` again and then debug the debugger's internal
cancellation with specific checks for the particular raising
checkpoint-LOC.
- do ^- the latter -^ for both subactor and root cases to ensure we
can debug `._pause()` itself when it tries to REPL engage from
a cancelled task scope Bo
Since it turns out we didn't have a single example using that API Bo
The test granularly checks all use cases:
- `.post_mortem()` manual calls in both subactor and root.
- ensuring built-in RPC crash handling activates after each manual one
from ^.
- drafted some call-stack frame checking that i commented out for now
since we need to first do ANSI escape code removal due to the
colorization that `pdbp` does by default.
|_ added a TODO with SO link on `assert_before()`.
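The needed removal is just the classic regex approach (presumably what
that SO link shows):
```python
import re

# matches CSI sequences like the color codes `pdbp` emits
ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*[a-zA-Z]')

def strip_ansi(text: str) -> str:
    return ANSI_ESCAPE.sub('', text)

assert strip_ansi('\x1b[31mValueError\x1b[0m') == 'ValueError'
```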
Also todo-staged a shielded-pause test to match with the already
existing-but-needs-refinement example B)
It's **almost** there; we're just missing the final translation code to
get from an `asyncio` side task to be able to call
`.devx._debug.wait_for_parent_stdin_hijack()` to do root actor TTY
locking. Then we just need to ensure internals also do the right thing
with `greenback` for equivalent sync `breakpoint()` style pause
points.
Since i'm deferring this until later, tossing in some xfail tests to
`test_infected_asyncio` with TODOs for the needed implementation as well
as eventual test org.
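I.e. markers along these lines, with illustrative-only names/bodies:
```python
import pytest

@pytest.mark.xfail(
    reason=(
        '`breakpoint()` from an `asyncio.Task` needs the (not yet '
        'implemented) aio -> `trio` stdio-lock translation code!'
    ),
)
def test_breakpoint_from_aio_task():
    ...
```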
By "provision" we mean that we add:
- `greenback` init block to `_run_asyncio_task()` when debug mode is
enabled (but which will currently RTE when `asyncio` is detected)
using `.bestow_portal()` around the `asyncio.Task`.
- a call to `_debug.maybe_init_greenback()` in the `run_as_asyncio_guest()`
guest-mode entry point.
- as part of `._debug.Lock.is_main_trio_thread()`, whenever the
async-lib is not `trio`, error-log the backend name (which is obvi
`'asyncio'` in this use case).
Now supports use from any `trio` task, any sync thread started with
`trio.to_thread.run_sync()` AND also via the `breakpoint()` builtin API!
The only bit missing now is support for `asyncio` tasks when in infected
mode.. Bo
`greenback` setup/API adjustments:
- move `._rpc.maybe_import_gb()` -> `devx._debug` and factor out the cached
import checking into a sync func whilst placing the async `.ensure_portal()`
bootstrapping into a new async `maybe_init_greenback()`.
- use the new init-er func inside `open_root_actor()` with the output
predicating whether we override the `breakpoint()` hook.
core `devx._debug` implementation deatz:
- make `mk_mpdb()` only return the `pdbp.Pdb` subtype instance since
the sigint unshielding func is now accessible from the `Lock`
singleton from anywhere.
- add non-main thread support (at least for `trio.to_thread` use cases)
to our `Lock` with a new `.is_trio_thread()` predicate that delegates
directly to `trio`'s internal version.
- do `Lock.is_trio_thread()` checks inside any methods which require
special provisions when invoked from a non-main `trio` thread:
- `.[un]shield_sigint()` methods since `signal.signal` usage is only
allowed from cpython's main thread (see the guard sketch after this list).
- `.release()` since `trio.StrictFIFOLock` can only be called from
a `trio` task.
- rework `.pause_from_sync()` itself to directly call `._set_trace()`
and don't bother with `greenback._await()` when we're already calling
it from a `.to_thread.run_sync()` thread, oh and try to use the
thread/task name when setting `Lock.local_task_in_debug`.
- make it an RTE for now if you try to use `.pause_from_sync()` from any
infected-`asyncio` task, but support is (hopefully) coming soon!
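The main-thread-only guard looks roughly like (sketch, not the actual
method bodies):
```python
import signal
import threading

def is_main_thread() -> bool:
    return threading.current_thread() is threading.main_thread()

def shield_sigint(handler) -> None:
    # `signal.signal()` raises `ValueError` from any non-main
    # thread, so only install the shield when we actually can;
    # OW the main-thread task is expected to have done it.
    if is_main_thread():
        signal.signal(signal.SIGINT, handler)
```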
For testing we add a new `test_debugger.py::test_pause_from_sync()`
which includes a ctrl-c parametrization around the
`examples/debugging/sync_bp.py` script which includes all currently
supported/working usages:
- `tractor.pause_from_sync()`.
- via `breakpoint()` overload.
- from a `trio.to_thread.run_sync()` spawn.
- `trio_typing` is nearly obsolete since `trio >= 0.23`
- `exceptiongroup` is built into Python 3.11
- `async_generator` primitives have lived in `contextlib` for quite
a while!
Only found this by luck more or less (while working on something in
a client project) and it turns out we can actually get to (yet another)
hang state where SIGINT will be ignored by the root actor on teardown..
I've added all the necessary logic flags to reproduce. We obviously need
a follow up bug issue and a test suite to replicate!
It appears as though the following are required based on very light
tinkering:
- infected asyncio mode active
- debug mode active
- the `trio` context must breakpoint *before* `.started()`-ing
- the `asyncio` must **not** error
Previously we were leaking our (pdb++) override into the Python runtime
which would always result in a runtime error whenever `breakpoint()` is
called outside our runtime, i.e. after exit of the root actor. This
explicitly restores any previous hook override (detected during startup)
or deletes the hook and restores the environment if none existed prior.
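The restore logic amounts to the following sketch, where `_debug_hook`
stands in for our actual override:
```python
import sys

def _debug_hook(*args, **kwargs) -> None:
    ...  # stand-in for the runtime's REPL-entry hook

# at startup: detect any pre-existing user override
orig_hook = sys.breakpointhook
had_override: bool = orig_hook is not sys.__breakpointhook__
sys.breakpointhook = _debug_hook

# on root actor exit:
if had_override:
    sys.breakpointhook = orig_hook  # restore the user's hook
else:
    # none existed prior: fall back to the interpreter default
    # (which re-reads `PYTHONBREAKPOINT` from the env)
    sys.breakpointhook = sys.__breakpointhook__
```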
Also adds a new WIP debugging example script to ensure breakpointing
works as normal after runtime close; this will be added to the test
suite.
Finally! I think this may be the root issue we've been seeing in
production in a client project.
No idea yet why this is happening but the fault-causing sequence seems
to be:
- `.open_context()` in a child actor
- enter the debugger via `tractor.breakpoint()`
- continue from that entry via `c` command in REPL
- raise an error just after inside the context task's body
Looking at logging it appears as though the child thinks it has the tty
but no input is accepted on the REPL and a further `ctrl-c` results in
some teardown but also a further hang where both parent and child become
unresponsive..
With the new fixes to the trio spawner we can expect that both root
*and* depth > 1 nursery owning actors will now not clobber any children
that are in debug (either via breakpoint or through crashing). The tests
changed now include more checks which ensure the 2nd level parent-ish
actors also bubble up through into `pdb` and don't kill any of their
(crashed) children before they're done themselves debugging.
This resolves and completes #69 allowing all RPC invocation APIs to pass
function references directly instead of explicit `str` names for the
target namespace and function (this is still done implicitly
underneath). This brings us closer to `trio`'s task running API as well
as acknowledges that any inter-host RPC system (and API) will likely
need to be implemented on top of local RPC primitives anyway. Even if
this ends up **not** being true we can always go to "function stubs" as
part of our IAC protocol or, add a new method to do explicit namespace
calls: `.run_from_module()` or whatever everyone votes on.
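I.e. you can now do something like the following (a sketch per the
modern API; exact nursery kwargs may differ):
```python
import trio
import tractor

async def add(x: int, y: int) -> int:
    return x + y

async def main() -> None:
    async with tractor.open_nursery() as an:
        portal = await an.start_actor(
            'adder',
            enable_modules=[__name__],
        )
        # pass the function object directly; the `str` namespace
        # lookup still happens implicitly underneath
        result = await portal.run(add, x=1, y=2)
        assert result == 3
        await an.cancel()

trio.run(main)
```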
Resolves #69
Further, this commit drops `Actor.statespace` from the entire system
since a user can easily get this same functionality using module
level variables. Fix docs to match all these changes (luckily mostly
already done since the docs reference the example scripts).