Using either of `@pdb.hideframe` or `__tracebackhide__` on stdlib
methods doesn't seem to work either; this all seems to have something
to do with async generator usage, I think?
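For reference, these are the two frame-hiding mechanisms in question;
a minimal sketch, assuming `pdbpp` is installed (it overrides the
stdlib `pdb` module and exposes the `hideframe` decorator, and also
honors the pytest-style `__tracebackhide__` frame-local):

```python
import pdb  # with ``pdbpp`` installed this is really pdb++


@pdb.hideframe  # pdb++ decorator: hide this frame from the REPL stack
def wrapped(*args, **kwargs):
    ...


def also_wrapped(*args, **kwargs):
    # pytest-style frame-local flag, also honored by pdb++
    __tracebackhide__ = True
    ...
```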
This gets very close to avoiding any possible hangs related to tty
locking and SIGINT handling, minus one special case that is detailed
below.
Summary of implementation changes:
- convert `_mk_pdb()` -> `with _open_pdb() as pdb:` which implicitly
handles the `bdb.BdbQuit` case such that debugger teardown hooks are
always called (see the sketch after this list).
- rename the handler to `shield_sigint()` and handle a variety of new
cases:
* the root is in debug but hasn't been cancelled -> call
`Actor.cancel_soon()`
* the root is in debug and *has* been cancelled (`Actor.cancel_soon()`
already called) -> raise KBI
* a child is in debug *and* has a task locking the debugger -> ignore
SIGINT in the child *and* the root actor.
- if the debugger instance is provided to the handler at acquire time,
on SIGINT handling completion re-print the last pdb++ REPL output so
that the user realizes they are still actively in debug.
- ignore the unlock case where a race condition of "no task" holding
the lock triggers the same `RuntimeError` normally associated with the
"wrong task" releasing it (not sure if this is a `trio` bug?).
- change debug logs to runtime level.
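To make the flow above concrete, here's a minimal self-contained
sketch of the two pieces: an `_open_pdb()`-style context manager that
always runs teardown (including on `bdb.BdbQuit`) and a
`shield_sigint()`-style handler whose branches mirror the cases
listed. All names and state flags here are illustrative stand-ins,
not the actual `tractor` implementation:

```python
import bdb
import pdb
import signal
from contextlib import contextmanager
from functools import partial

# stand-in runtime state; in the real code this comes from the
# actor/debug-lock machinery (assumptions for this sketch)
is_root: bool = True
child_in_debug: bool = False
local_task_in_debug: bool = False
cancel_soon_called: bool = False


def shield_sigint(signum, frame, pdb_obj=None) -> None:
    # branching mirrors the cases listed above
    global cancel_soon_called

    if is_root and child_in_debug:
        # a child's task holds the debug lock: swallow SIGINT in
        # the root so the child's REPL isn't torn down
        pass
    elif is_root and not cancel_soon_called:
        # first SIGINT while the root is in debug: begin graceful
        # cancellation (~ ``Actor.cancel_soon()``)
        cancel_soon_called = True
    elif is_root:
        # cancellation already requested: escalate to a KBI
        raise KeyboardInterrupt
    elif not local_task_in_debug:
        # a child with no task in the REPL: default behaviour
        raise KeyboardInterrupt

    # the interrupt was swallowed; if we were handed the REPL
    # instance, re-print its prompt so the user realizes they are
    # still actively in debug
    if pdb_obj is not None:
        print(pdb_obj.prompt, end='', flush=True)


@contextmanager
def _open_pdb():
    pdb_obj = pdb.Pdb()
    prior = signal.signal(
        signal.SIGINT,
        partial(shield_sigint, pdb_obj=pdb_obj),
    )
    try:
        yield pdb_obj
    except bdb.BdbQuit:
        # a REPL "quit" must not skip debugger teardown hooks
        pass
    finally:
        # teardown always runs: restore the prior SIGINT handler
        signal.signal(signal.SIGINT, prior)
```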
Unhandled case(s):
- a child may be in debug mode but does not itself have any task using
the debugger.
  * TODO: we need a way to decide what to do with
"intermediate" child actors who themselves are not in
`debug_mode=True` but have children who *are*, such that a SIGINT
won't cause cancellation of that child-as-parent-of-another-child
**iff** any of their children are in debug mode.
This fixes a previously undetected bug where, if an
`.open_channel_from()` spawned task errored, the error would not be
propagated to the `trio` side and instead would fail silently with
a console log error. What was most odd is that it only seems easy to
trigger when you put a slight task sleep before the error is raised
(:eyeroll:). This patch adds a few things to address this and in
general improve inter-task lifetime syncing (see the sketch after this
list):
- add `LinkedTaskChannel._trio_exited: bool`, a flag set from the `trio`
side when the channel block exits.
- add a `wait_on_aio_task: bool` flag to `translate_aio_errors` which
toggles whether to wait on the `asyncio` task's termination event on
exit.
- cancel the `asyncio` task if the `trio` side has ended, i.e. when
`._trio_exited == True`.
- always close the `trio` mem channel when the task exits such that
the `asyncio` side can error on any next `.send()` call.
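A rough sketch of how these pieces could fit together; the class
layout and any field names beyond those mentioned above are
assumptions, not the real `tractor.to_asyncio` internals:

```python
import asyncio
from contextlib import asynccontextmanager
from dataclasses import dataclass
from typing import AsyncIterator

import trio


@dataclass
class LinkedTaskChannel:
    # illustrative stand-in for the real inter-loop channel type
    to_trio: trio.MemorySendChannel   # asyncio -> trio values
    aio_task: asyncio.Task            # the linked asyncio task
    aio_done: trio.Event              # set on asyncio task exit
    _trio_exited: bool = False        # set when the trio block exits


@asynccontextmanager
async def translate_aio_errors(
    chan: LinkedTaskChannel,
    wait_on_aio_task: bool = False,
) -> AsyncIterator[None]:
    try:
        yield
    finally:
        # the trio side of the channel block has ended
        chan._trio_exited = True
        if not chan.aio_task.done():
            # no trio consumer remains: cancel the asyncio task
            chan.aio_task.cancel()
        # always close the trio mem channel so any further
        # asyncio-side `.send()` errors instead of failing silently
        await chan.to_trio.aclose()
        if wait_on_aio_task:
            # optionally sync on the asyncio task's termination
            await chan.aio_done.wait()
```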
Sometimes it's handy to just have a non-`Portal`-yielding way
to figure out if a "service" actor is up, so add this discovery
helper for that. We'll probably just leave it undocumented for
now until we figure out a longer-term/better discovery system.
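Usage could look something like the following; this is a sketch only,
since the helper's name and the yielded socket-address shape are
assumptions about its final surface (and it presumes a running actor
tree/registry to query):

```python
import tractor


async def service_is_up(name: str) -> bool:
    # hypothetical surface: yields the registered socket address
    # for ``name`` (or ``None``) without opening a full ``Portal``
    async with tractor.query_actor(name) as sockaddr:
        return sockaddr is not None
```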