Such that a `TrioCancelled` is raised in the aio task via
`.set_exception()` to explicitly indicate and allow that task to handle
a taskc request from the parent `trio.Task`.
Namely the `tractor.pause_from_sync()` examples using both bg threads
and `asyncio` which seem to go into bad states where SIGINT is ignored..
Deats,
- add `maybe_expect_timeout()` cm to ensure the EOF hangs get
`.xfail()`ed instead.
- mark `test_pause_from_sync` with `@pytest.mark.ctlcs_bish` and don't
  expect the greenback prompt msg.
- also mark `test_sync_pause_from_aio_task`.
Seems that on 3.13 it's not showing our script code in the output now?
Gotta get an example for @oremanj to see what's up but really it'd be
nice to just custom format stuff above `trio`'s runtime by def..
Anyway, update the `.devx._stackscope`,
- log formatting to be a little more "sclangy" lookin.
- change the per-actor "delimiter" lines style.
- report the `signal.getsignal(SIGINT)` handler, which i needed while
  debugging the `sync_bp.py` ctl-c hang..
- mask the `_tree_dumped` duplicator log report as well as the "dumped
fine" one.
- add an example `pkill --signal SIGUSR1` cmdline.
Tweak the test to cope with,
- not showing our script lines now.. which i've commented in the
`assert_before()` patts..
- to expect the newly formatted delimiter (ascii) lines to separate the
root vs. hanger sub-actor sections.
That is, whenever `trio.EndOfChannel` is raised (presumably from the
`._to_trio.receive()` call inside `LinkedTaskChannel.receive()`) we need
to be extra certain that we let it bubble upward transparently DESPITE
special exc-as-signal handling that is normally suppressed from the aio
side; REPEAT we want to ALWAYS bubble any `trio_err ==
trio.EndOfChannel` in the `finally:` handler of `translate_aio_errors()`
despite `chan._trio_to_raise == AsyncioTaskExited` such that the
caller's iterable machinery will operate as normal when the inter-task
stream is stopped (again, presumably by the aio side task terminating
the inter-task stream).
Main impl deats for this,
- in the EoC handler block ensure we assign both `chan._trio_err` and
the local `trio_err` as well as continue to re-raise.
- add a case to the match block in the `finally:` handler which FOR SURE
re-raises any `type(trio_err) is EndOfChannel`!
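For reference, a rough sketch of the kind of `finally:`-handler match
case being described (just an illustration, not the actual
`translate_aio_errors()` body; `chan` and `trio_err` are stand-in
locals),
```python
import trio


def maybe_reraise_eoc(trio_err: BaseException|None, chan) -> None:
    match trio_err:
        # FOR SURE re-raise any eoc so the caller's iterable machinery
        # sees the stream-stop as normal, even when an
        # `AsyncioTaskExited` was staged as the exc-as-signal.
        case trio.EndOfChannel():
            chan._trio_err = trio_err
            raise trio_err
        case _:
            # defer to the usual exc-as-signal handling otherwise.
            pass
```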
Additionally fix a bad bug,
- a ref bug where we were NOT using the
  `except BaseException as _trio_err` to assign to `chan._trio_err` (the
  leading `_` was accidentally missing..)
Unrelated impl tweak,
- move all `maybe_raise_aio_side_err()` content back to inline with its
parent func - makes it easier to use `tractor.pause()` mostly Bp
- go back to trying to use `aio_task.set_exception(aio_taskc)` for now
even though i'm pretty sure we're going to move to a try-fute-first
style helper for this in the future.
Adjust some tests to match/mk-them-green,
- break from `aio_echo_server()` recv loop on
`to_asyncio.TrioTaskExited` much like how you'd expect to (implicitly
with a `for`) with a `trio.EndOfChannel`.
- toss in a masked `value is None` pause point i needed for debugging
inf looping caused by not re-raising EoCs per the main patch
description.
- add a debug-mode sized delay to root-infected test.
Such that any combination of task terminations/exits can be explicitly
handled and "dual side independent" crash cases re-raised in egs.
The main error-or-exit impl changes include,
- use of new per-side "signaling exceptions":
- TrioTaskExited|TrioCancelled for signalling aio.
- AsyncioTaskExited|AsyncioCancelled for signalling trio.
- NOT overloading the `LinkedTaskChannel._trio/aio_err` fields for
err-as-signal relay and instead add a new pair of
`._trio/aio_to_raise` maybe-exc-attrs which allow each side's
task to specify what it would want the other side to raise to signal
its/a termination outcome:
- `._trio_to_raise: AsyncioTaskExited|AsyncioCancelled` to signal,
|_ the aio task having returned while the trio side was still reading
from the `asyncio.Queue` or is just not `.done()`.
|_ the aio task being self or trio-request cancelled where
     an `asyncio.CancelledError` is raised and caught but NOT relayed
as is back to trio; instead signal a "more explicit" exc type.
- `._aio_to_raise: TrioTaskExited|TrioCancelled` to signal,
|_ the trio task having returned while the aio side was still reading
from the mem chan and indicating that the trio side might not
care any more about future streamed values (like the
`Stop/EndOfChannel` equivs for ipc `Context`s).
  |_ when the trio task is cancelled we do
     an `asyncio.Future.set_exception(TrioTaskExited())` to indicate
to the aio side verbosely that it should cancel due to the trio
parent.
- `_aio/trio_err` are now left to only capturing the **actual**
per-side task excs for introspection / other side's handling logic.
- supporting "graceful exits" depending on API in use from
`translate_aio_errors()` such that if either side exits but the other
  side isn't expected to consume the final `return`ed value, we just exit
silently, which required:
- adding a `suppress_graceful_exits: bool` flag.
- adjusting the `maybe_raise_aio_side_err()` logic to use that flag
and suppress only on certain combos of `._trio_to_raise/._trio_err`.
- prefer to raise `._trio_to_raise` when the aio-side is the src and
vice versa.
- filling out pedantic logging for cancellation cases indicating which
side is the cause.
- add a `LinkedTaskChannel._aio_result` modelled after our
  `Context._result` and a similar `.wait_for_result()` interface which
  allows maybe accessing the aio task's final return value if desired
  when using the `open_channel_from()` API (see the sketch after this
  list).
- rename the `cancel_trio()` done handler -> `signal_trio_when_done()`.
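A hypothetical usage sketch of that new result-waiting API (assuming the
usual `to_trio`/`from_trio` target signature and that
`.wait_for_result()` can be awaited from inside the channel block; not
pulled from the actual test suite),
```python
import asyncio
import trio
from tractor import to_asyncio


async def aio_child(
    to_trio: trio.MemorySendChannel,
    from_trio: asyncio.Queue,
) -> str:
    to_trio.send_nowait('started')  # first value delivered to the trio parent
    await asyncio.sleep(0.1)
    return 'final-value'  # should land in `chan._aio_result`


async def trio_parent() -> None:
    async with to_asyncio.open_channel_from(aio_child) as (first, chan):
        assert first == 'started'
        # maybe access the aio task's final return value if desired
        result = await chan.wait_for_result()
        assert result == 'final-value'
```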
Also some fairly major test suite updates,
- add a `delay: int` producing fixture which delivers a much larger
timeout whenever `debug_mode` is set so that the REPL can be used
without a surrounding cancel firing.
- add a new `test_aio_exits_early_relays_AsyncioTaskExited` including
a paired `exit_early: bool` flag to `push_from_aio_task()`.
- adjust `test_trio_closes_early_causes_aio_checkpoint_raise` to expect
a `to_asyncio.TrioTaskExited`.
It appears that during the reorg commit
a356233b47 this was intended to be moved
(presumably to where i have it here) to `test_tooling` but was somehow
just never pasted over XD
Good thing this was caught while going through the remaining TODO
bullets in #2 !!
Also includes fixed relative `.conftest` imports!
Since there's no way to activate `greenback`'s portal in such cases, we
should at least have a test verifying our very loud error about the
inability to support this usage..
The (rare) condition is heavily detailed in new comments in
the `cancel_trio()` callback but, more or less, the idea here is to be
extra pedantic in raising an `ExceptionGroup` of errors from each task
(both `asyncio` and `trio`) whenever the 2 tasks raise "independently"
- in the sense that it's not obviously one side's task causing an error
(or cancellation) in the other. In this case we set the error for each
side on the `LinkedTaskChannel` (via new attrs described later).
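Roughly the idea in sketch form (a simplified stand-in, not the actual
`cancel_trio()`/teardown code; `chan` is the `LinkedTaskChannel`),
```python
def maybe_raise_both_sides(chan) -> None:
    trio_err: BaseException|None = chan._trio_err
    aio_err: BaseException|None = chan._aio_err

    if trio_err and aio_err:
        # neither side obviously caused the other's failure so be
        # pedantic and raise both wrapped in an eg.
        raise BaseExceptionGroup(
            'Both the `trio` and `asyncio` tasks errored independently!',
            (trio_err, aio_err),
        )
    elif trio_err:
        raise trio_err
    elif aio_err:
        raise aio_err
```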
As a synopsis, most of this work was refined out of supporting
`infected_aio=True` mode in the **root actor** and in particular as part
of getting that to work inside the `modden` daemon which at the time of
writing was still using the `i3ipc` lib and thus `asyncio`.
Impl deats,
- extend the `LinkedTaskChannel` field/API set (and type it),
- `._trio_task: trio.Task` for test/user introspection.
- also "stage" some ideas for a more refined interface,
- `.started()` to deliver the value yielded to the `trio.Task` parent.
|_ also includes some todos for how to implement this design
underneath.
- `._aio_first: Any|None = None` to hold that value ^.
- `.wait_aio_complete()` for syncing to the asyncio task.
- some detailed logging around "asyncio cancelled trio" case.
- move `AsyncioCancelled` into this module.
Styling changes,
- generally more explicit var naming.
- some todos for getting modern and fancy with typing..
NB, Let it be known this commit msg was written on a friday with the
help of various "mr. white" solns.
Might as well break apart the specific test set since there are some
(minor) subtleties and the orig test mod is already getting pretty big
XD
Includes both the new "independent"-event-loops test as well as the std
usage base case suite.
Such that the suite verifies the wip `maybe_raise_from_masking_exc()`
will raise from a `trio.Cancelled.__context__` since I can't think of
any reason a `Cancelled` should ever be raised in-place of
a non-`Cancelled` XD
Not sure what should be raised instead (or maybe just a `log.warning()`
emitted?) but this starts a draft for refinement at the least. Use the
new `@pytest.mark.parametrize` explicit tuple-of-params form with
a `pytest.param` + `.mark.xfail()` for the default behaviour case.
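The parametrization style in question looks something like the following
(test/param names here are purely illustrative),
```python
import pytest


@pytest.mark.parametrize(
    'unmask_from_canc',
    [
        True,
        pytest.param(
            False,
            marks=pytest.mark.xfail(
                reason='default behaviour masks the non-cancel cause',
            ),
        ),
    ],
)
def test_masking_case(unmask_from_canc: bool):
    ...
```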
Since i wasted 2 days just to find an example of this inside an `@acm`,
figured I better reproduce for the purposes of maybe implementing
a warning sys (inside our wip proto `open_taskman()`) when a nursery
detects a single `Cancelled` in an eg where the `.__context__` is set to
some non-cancel error (which likely means a cancel-causing source
exception was suppressed by accident).
Left in a buncha commented code using `maybe_open_nursery()` which
i thought might be part of the issue but didn't end up being required;
will likely remove on a follow up refinement.
Along the lines of something like `pytest.raises()` where the handled
exception can be inspected from the `pdbp` REPL using its `.value` field
B)
This is super handy in particular for understanding
`BaseException[Group]`s without manually adding surrounding handler code
to assign the `except[*] Exception as exc_var:` particularly when trying
to understand multi-cancelled eg trees.
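As a rough sketch of the idea (the boxed-type and cm names here are
illustrative, not necessarily the shipped API),
```python
from contextlib import contextmanager
from dataclasses import dataclass


@dataclass
class BoxedMaybeException:
    value: BaseException|None = None


@contextmanager
def open_crash_handler_like():
    boxed = BoxedMaybeException()
    try:
        yield boxed
    except BaseException as exc:
        # stash the caught exc so it can be inspected as `.value` from
        # the REPL (where we'd normally drop into `pdbp` post-mortem).
        boxed.value = exc
        raise


# usage: inspect `boxed.value` from the REPL on crash,
# with open_crash_handler_like() as boxed:
#     might_raise()
```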
Trying to replicate cases where errors are raised in both `trio` and
`asyncio` tasks independently (at least in `.to_asyncio` API terms) with
a new `test_trio_prestarted_task_bubbles` that generates 3 cases inside
an `@acm` call stack composing a `trio.Nursery` with
a `to_asyncio.open_channel_from()` call where a set of `trio` tasks are
started in a loop using `.start()` with various exc raising sequences,
- the aio task raising *before* the last `trio` task spawns.
- the aio task raising just after the last trio task spawns, but before
it starts.
- after the last trio task's `.start()` call returns control to the
  parent - but (for now) without erroring.
TODO, still more cases to discover as i'm still fighting a `modden` bug
of this sort atm..
Other,
- tweak some other tests to have timeouts since some recent hangs were
found..
- started mucking with py3.13 and thus adjustments for strict egs in
some tests; full patchset to test suite likely coming soon!
Since we can't rely on `Task.set_exception()` (that task method never
seems to work.. XD) while setting the private/internal future-waiter
field always seems to do the desired raising in the task? I realize it's
an internal `asyncio` runtime field but i'd rather take the risk of it
breaking than having to rely on our own equivalent hack..
Also, it seems like in the case where the task's associated (and
internal) future-waiter field is null, we won't run into the (same?)
prior hanging issues (maybe since there's nothing for `asyncio`
internals to use to wait on XD ??) when `Task.cancel()` is used..??
Main deats,
- add, and `Future.set_exception()`, a new signal-exception
  `class TrioTaskExited(AsyncioCancelled):` whenever the trio-task exits
  gracefully and the asyncio-side task is still doing blocking work (of
  some sort) which *seems to* be predicated by a check that
  `._fut_waiter is not None` (see the sketch below).
- always call `asyncio.Queue.shutdown()` for the same^ as well as
whenever we decide to call `Task.cancel()`; in that case the shutdown
relays correctly?
Some further refinements,
- only warn about `Task.cancel()` usage when actually used ;)
- more local scope vars setting in the exit phase of
`translate_aio_errors()`.
- also in ^ use explicit caught-exc var names for each error-type.
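A very rough sketch of that exit-signalling flow (NOT the actual
`.to_asyncio` impl: the helper is made up, the exc type here derives
from `CancelledError` directly for brevity, and
`asyncio.Queue.shutdown()` requires py3.13+),
```python
import asyncio


class TrioTaskExited(asyncio.CancelledError):
    'Signal that the parent `trio` side exited gracefully.'


def signal_aio_side_exit(
    aio_task: asyncio.Task,
    from_trio: asyncio.Queue,
) -> None:
    # wake any queue consumers in either case (3.13+ only)
    from_trio.shutdown()

    fut: asyncio.Future|None = getattr(aio_task, '_fut_waiter', None)
    if fut is not None:
        # setting an exc on the (internal!) waiter-future seems to
        # reliably raise it inside the still-blocked aio task.
        fut.set_exception(TrioTaskExited())
    else:
        # nothing is being waited on, so a plain cancel request
        # shouldn't hit the prior hanging issues.
        aio_task.cancel()
```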
Since it can not only cause the guest-mode run to abandon but also in
some edge cases prevent `trio`-errors from propagating (at least on
py3.12-13?) as discovered as part of supporting this mode officially
in the *root actor*.
As such try to avoid that method as much as possible, instead opting to
pass the `trio`-side error via the inter-task channel ref.
Deats,
- add a `LinkedTaskChannel._trio_err: BaseException|None` which gets set
  whenever the `trio.Task` error is caught; ONLY set `AsyncioCancelled`
when the `trio` task was for sure the cause, whether itself cancelled
or errored.
- always check for this error when exiting the `asyncio` side (even when
  terminated via a call to `asyncio.Task.cancel()` or during any other
  `CancelledError` handling) such that the `asyncio`-task can expect to
  handle `AsyncioCancelled` due to the above^^ cases.
- never `cs.cancel()` the `trio` side unless that cancel scope has not
yet been `.cancel_called` whatsoever; it's a noop anyway.
- only raise any exc from `asyncio.Task.result()` when `chan._aio_err`
does not already match it since the existence of the pre-existing
`task_err` means `asyncio` prolly intends (or has already) raised and
interrupted the task elsewhere.
Various supporting tweaks,
- don't bother maybe-init-ing `greenback` from the actor entrypoint
since we already need to (and do) bestow the portals to each `asyncio`
task spawned using the `run_task()`/`open_channel_from()` API; further
the init-ing should be done already by client code that enables
infected mode (even in the root actor).
|_we should prolly also codify it from any
`run_daemon(infected_aio=True, debug_mode=True)` usage we offer.
- pass all the `_<field>`s to `LinkedTaskChannel` explicitly in named
kwarg style.
- better sclang-style log reports throughout, particularly on teardowns.
- generally more/better comments and docs around (not well understood)
edge cases.
- prep to just inline `maybe_raise_aio_side_err()` closure..
When `.pause_from_sync()` is called from an `asyncio.Task` which was
never bestowed a portal we want to be mega pedantic about it; indicate
that the task was NOT spawned from our `.to_asyncio` API and likely by
some out-of-our-control code (normally using
`asyncio.ensure_future()/.create_task()`). Though `greenback` already
errors on such usage, it's not always clear why no portal exists;
explaining the situation of a 3rd-party-bg-spawned-task should avoid
dev confusion for most cases.
Impl deats,
- distinguish between an actor in infected mode versus the actual caller
of `.pause_from_sync()` being an `asyncio.Task` with more explicit
`asyncio_task` and `is_infected_aio` vars.
- ONLY in the case of being both an infected-mode-actor AND detecting
that the caller is an `asyncio.Task`, check `greenback.has_portal()`
such that when not bestowed we presume the aforementioned
3rd-party-bg-task case above and raise a new explicit RTE with
a detailed explanatory message.
- add some masked draft code for handling the special case of a root
actor `asyncio.Task` caller which could (in theory) not actually
require gb portal use since the `Lock` can be acquired directly
without IPC.
|_this will likely require factoring of various pause machinery funcs
into a `_pause_from_root_task()` to mk the impl sane XD
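The detection logic boils down to something like this simplified sketch
(not the actual `.pause_from_sync()` branch; the surrounding helper and
error msg are illustrative),
```python
import asyncio
import greenback


def ensure_aio_caller_has_portal(is_infected_aio: bool) -> None:
    try:
        aio_task: asyncio.Task|None = asyncio.current_task()
    except RuntimeError:
        aio_task = None  # not inside a running `asyncio` loop

    if (
        is_infected_aio
        and aio_task is not None
        and not greenback.has_portal()
    ):
        raise RuntimeError(
            f'{aio_task!r} was likely spawned by 3rd-party code '
            '(eg. `asyncio.ensure_future()`/`.create_task()`) and was '
            'never bestowed a `greenback` portal, so '
            '`tractor.pause_from_sync()` can not be supported from it!'
        )
```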
Other,
- expose a new `debug_filter: Callable` which can be provided by the
caller of `_maybe_enter_pm()` to predicate whether to enter the
debugger REPL based on the caught `BaseException|BaseExceptionGroup`;
this is handy for customizing the meaning of "graceful cancellations"
  so as to avoid crash handling on expected egs of more than
  `trio.Cancelled`.
|_ make the default as it was implemented: `not is_multi_cancelled(err)`
- pass-through a new `ignore: set[BaseException]` as
`open_crash_handler(ignore_nested=ignore)` to allow for the same
silent-cancellation-egs-swallowing as desired from outside the actor
runtime.
Such that equivalents of `trio.Cancelled` from other runtimes such as
`asyncio.CancelledError` and `subprocess.CalledProcessError` (with
a `.returncode == -2`) can be gracefully ignored as needed by the
caller.
For example this is handy if you want to avoid debug-mode REPL entry on
an exception-group full of only some subset of exception types since you
expect certain tasks to raise such errors after having been cancelled by
a request from some parent supervision sys (some "higher up"
`trio.CancelScope`, a remote triggered `ContextCancelled` or just from
an OS SIGINT).
Impl deats,
- offer a new `ignore_nested: set[BaseException]` param which by
default we add `trio.Cancelled` to when no other types are provided.
- use `ExceptionGroup.subgroup(tuple(ignore_nested))` to filter to egs of
  the "ignored sub-errors set" and return any such match (instead of
  `True`).
- detail a comment on the exclusion case.
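In sketch form the filtering reads something like (a simplified
stand-in for the real `is_multi_cancelled()` impl),
```python
import trio


def is_multi_cancelled(
    exc: BaseException,
    ignore_nested: set[type[BaseException]]|None = None,
) -> bool|BaseExceptionGroup:

    if not ignore_nested:
        # default to only ignoring `trio`-native cancels
        ignore_nested = {trio.Cancelled}

    if isinstance(exc, BaseExceptionGroup):
        # return the matching sub-eg (truthy) instead of a plain `True`
        return exc.subgroup(tuple(ignore_nested)) or False

    return False
```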
Such that we can hook into 3rd-party-libs more easily to monkey them and
use our (prettier/hipper) console logging with something like (an
example from the client project `modden`),
```python
import logging

import i3ipc
import tractor

connection_mod = i3ipc.connection
tractor_style_i3ipc_logger: logging.LoggerAdapter = tractor.log.get_console_log(
    _root_name=connection_mod.__name__,
    logger=connection_mod.logger,
    level='info',
)
# monkey-patch the instance-ref in the 3rd-party module
connection_mod.logger = tractor_style_i3ipc_logger
```
Impl deats,
- expose as `get_console_log(logger: logging.Logger)` and add default
failover logic.
- toss in more typing, also for mod-global instance.
Such that you can use,
```python
tractor.to_asyncio.run_as_asyncio_guest(
trio_main=_trio_main,
)
```
to bootstrap the root actor (and thus main parent process) to embed
the actor-runtime into an `asyncio` loop. Prove it all works with a
subactor-free version of the aio echo-server test suite B)
For comparing a `msgspec.Struct` against an input `dict` presumably to
be used as input for struct instantiation. The main diff with
`.__sub__()` is that non-existing fields on either are reported
(loudly).
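A hypothetical sketch of that kind of field diff (helper name and report
format made up),
```python
from msgspec import Struct, structs


def report_field_diff(struct: Struct, dct: dict) -> list[tuple[str, str]]:
    diffs: list[tuple[str, str]] = []
    struct_fields: set[str] = {f.name for f in structs.fields(struct)}

    # fields present in the input dict but not defined on the struct
    for key in dct:
        if key not in struct_fields:
            diffs.append((key, 'only in the input `dict`'))

    # fields defined on the struct but missing from the input dict
    for name in struct_fields:
        if name not in dct:
            diffs.append((name, 'only on the struct'))

    return diffs
```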
Such that maybe we can eventually offer a nicer higher-level API which
implements much of the boilerplate required by `msgspec` (like
type-matched branching to serialization logic) via a type-table
interface or something?
Not sure if the idea is that useful so leaving it all as TODOs for now
obviously.
Ensuring we can at least use `breakpoint()` from an infected actor's
`asyncio.Task` spawned via a `.to_asyncio` API.
Also includes a little `tests/devx/` reorging,
- start splitting out non-`tractor.pause()` tests into a new
`test_pause_from_non_trio.py` for all the `.pause_from_sync()`
use in bg-threaded or `asyncio` applications.
- factor harness commonalities to the `devx/conftest` (namely
the `do_ctlc()` masher).
- mv `test_pause_from_sync` to the new non-`trio` mod.
NOTE, the `ctlc=True` is still failing for
`test_pause_from_asyncio_task` which is a user-happiness bug but not
anything fundamentally broken - just need to handle the `asyncio` case
in `.devx._debug.sigint_shield()`!
Mostly fixing edge cases with `asyncio` and/or bg threads where the
`.repl_release: trio.Event` needs to be used from the main `trio`
thread OW confusing-but-valid teardown tracebacks can show under various
races.
Also improve,
- log reporting for such internal bugs to make them more obvious on
console via `log.exception()`.
- only restore the SIGINT handler when runtime is (still) active.
- reporting when `tractor.pause(shield=True)` should be used and
unhiding the internal frames from the tb in that case.
- for `pause_from_sync()` some deep fixes..
  |_add an `allow_no_runtime: bool = False` flag to allow
**not** requiring the actor runtime to be active.
|_fix the `greenback` case-branch to only trigger on `not
is_trio_thread`.
|_add a scope-global `repl_owner: Task|Thread|None = None` to
avoid ref errors..
Various `try`/`except` blocks around external APIs that raise when not
running inside a `tractor` runtime and/or some async framework (mostly
to avoid too-late/benign error tbs on certain classes of actor tree
teardown):
- for the `log.pdb()` prompts emitted before REPL console entry.
- inside `DebugStatus.is_main_trio_thread()`'s call to `sniffio`.
- in `_post_mortem()` by catching `NoRuntime` when called from a thread
still active after the `.open_root_actor()` has already exited.
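For the `sniffio` bullet above, the guard is more or less (not the exact
`DebugStatus.is_main_trio_thread()` body),
```python
import sniffio


def maybe_current_async_lib() -> str|None:
    try:
        return sniffio.current_async_library()
    except sniffio.AsyncLibraryNotFoundError:
        # not inside any async framework (eg. a bare bg thread during
        # late actor-tree teardown); don't crash, just report `None`.
        return None
```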
Also,
- create a dedicated `DebugStateError` for raising instead of `assert`s
when we have actual debug-request inconsistencies (as seem to be most
likely with bg thread usage of `breakpoint()`).
- show the `open_crash_handler()` frame on `bdb.BdbQuit` (for now?)
Since it seems that `pdbp.xpm()` can sometimes lose the up-stack
traceback info/frames? Not sure why but ours seems to work just fine
from an `asyncio`-handler in `modden`'s use of `i3ipc` B)
Also call `DebugStatus.shield_sigint()` from `pause_from_sync()` in the
infected-`asyncio` case to get the same shielding behaviour as in all
other usage!
Mostly due to magic from @oremanj where we slap in a little bit of
`.from_asyncio`-type stuff to run a `trio`-task from `asyncio.Task`
code!
I'm not gonna go into tooo too much detail but basically the primary
thing needed was a way to (blocking-ly) invoke a `trio.lowlevel.Task`
from an `asyncio` one (which we now have with a new
`run_trio_task_in_future()` thanks to draft code from the aforementioned
jefe) which we now invoke from a dedicated aio case-branch inside
`.devx._debug.pause_from_sync()`. Further include a case inside
`DebugStatus.release()` to handle using the same func to set the
`repl_release: trio.Event` from the aio side when releasing the REPL on
exit cmds.
Prolly more refinements to come ;{o
It was expecting `AssertionError` as a proceed-in-test signal (by
breaking from a continue loop), but `in_prompt_msg(raise_on_err=True)`
was changed to raise `ValueError`; so instead just use it as a predicate
for the `break`.
Also rework `in_prompt_msg()` to accept the `child: BaseSpawn` as input
instead of `before: str`, remove the casting boilerplate, and adjust all
usage to match.
Just like we've started doing throughout the rest of the actor runtime
for reporting (and where "sclang" = "structured conc (s)lang", our
little supervision-focused operations syntax i've been playing with in
log msg content).
Further tweaks:
- report the `trio_done_fute` alongside the `main_outcome` value.
- add a todo list for supporting `greenback` for pause points.
The reason for this "duplication" with the `--asyncio` CLI flag (passed
to the child during spawn) is 2-fold:
- allows verifying inside `Actor._from_parent()` that the `trio` runtime was
started via `.start_guest_run()` as well as if the
`Actor._infected_aio` spawn-entrypoint value has been set (by the
`._entry.<spawn-backend>_main()` whenever `--asyncio` is passed)
such that any mismatch can be signaled via an `InternalError`.
- enables checking the `._state._runtime_vars['_is_infected_aio']` value
directly (say from a non-actor/`trio`-thread) instead of calling
`._state.current_actor(err_on_no_runtime=False)` in certain edge
cases.
Impl/testing deats:
- add `._state._runtime_vars['_is_infected_aio'] = False` default.
- raise `InternalError` on any mismatch between the `--asyncio` flag
  passed and the `_runtime_vars` value relayed from the parent inside
  `Actor._from_parent()`, and include a `Runner.is_guest` assert for
  good measure B)
- set and relay `infect_asyncio: bool` via runtime-vars to child in
`ActorNursery.start_actor()`.
- verify `actor.is_infected_aio()`, `actor._infected_aio` and
`_state._runtime_vars['_is_infected_aio']` are all set in test suite's
`asyncio_actor()` endpoint.
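The more direct check this enables is roughly (the wrapper func here is
illustrative only),
```python
from tractor import _state


def currently_infected_aio() -> bool:
    # usable even from a non-actor/`trio` thread, unlike
    # `current_actor(err_on_no_runtime=False)`.
    return bool(_state._runtime_vars.get('_is_infected_aio', False))
```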
By re-purposing our `pexpect`-based console matching with a new
`debugging/shield_hang_in_sub.py` example, this tests a few "hanging
actor" conditions more formally:
- that despite a hanging actor's task we can dump
a `stackscope.extract()` tree on relay of `SIGUSR1`.
- the actor tree will terminate despite a shielded forever-sleep by our
"T-800" zombie reaper machinery activating and hard killing the
underlying subprocess.
Some test deats:
- simulates the expected actions of a real user by manually using
`os.kill()` to send both signals to the actor-tree program.
- `pexpect`-matches against `log.devx()` emissions under normal
`debug_mode == True` usage.
- ensure we get the actual "T-800 deployed" `log.error()` msg and
that the actor tree eventually terminates!
Surrounding (re-org/impl/test-suite) changes:
- allow disabling usage via a `maybe_enable_greenback: bool` to
`open_root_actor()` but enable by def.
- pretty up the actual `.devx()` content from `.devx._stackscope`
  including being extra pedantic about the conc-primitives for each signal
event.
- try to avoid double handles of `SIGUSR1` even though it seems the
original (what i thought was a) problem was actually just double
logging in the handler..
|_ avoid double applying the handler func via `signal.signal()`,
|_ use a global to avoid double handle func calls and,
|_ a `threading.RLock` around handling.
- move common fixtures and helper routines from `test_debugger` to
`tests/devx/conftest.py` and import them for use in both test mods.
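The double-handling guard amounts to something like (a simplified
stand-in for the `.devx._stackscope` handler),
```python
import signal
import threading

_handler_lock = threading.RLock()
_tree_dumped: bool = False


def dump_tree_on_sigusr1(signum, frame) -> None:
    global _tree_dumped
    with _handler_lock:
        if _tree_dumped:
            # already handled/logged for this signal round
            return
        _tree_dumped = True
        # ... `stackscope.extract()` + log the tree here ...


def maybe_install_handler() -> None:
    # avoid re-applying the handler func via `signal.signal()`
    if signal.getsignal(signal.SIGUSR1) is not dump_tree_on_sigusr1:
        signal.signal(signal.SIGUSR1, dump_tree_on_sigusr1)
```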
The final issue was making sure we do the same thing on ctl-c/SIGINT
from the user. That is, if there's already a bg-thread in REPL, we
`log.pdb()` about SIGINT shielding and re-draw the prompt; the same UX
as normal actor-runtime-task behaviour.
Reasons this wasn't workin.. and the fix:
- `.pause_from_sync()` was overriding the local `repl` var with `None`
delivered by (transitive) calls to `_pause(debug_func=None)`.. so
remove all that and only assign it OAOO prior to thread-type case
branching.
- always call `DebugStatus.shield_sigint()` as needed from all requesting
threads/tasks:
- in `_pause_from_bg_root_thread()` BEFORE calling `._pause()` AND BEFORE
yielding back to the bg-thread via `.started(out)` to ensure we're
definitely overriding the handler in the `trio`-main-thread task
before unblocking the requesting bg-thread.
- from any requesting bg-thread in the root actor such that both its
main-`trio`-thread scheduled task (as per above bullet) AND it are
SIGINT shielded.
- always call `.shield_sigint()` BEFORE any `greenback._await()` case
  (don't entirely grok why yet, but it works)?
- for `greenback._await()` case always set `bg_task` to the current one..
- tweaks to the `SIGINT` handler, now renamed `sigint_shield()` so as
not to name-collide with the methods when editor-searching:
- always try to `repr()` the REPL thread/task "owner" as well as the
active `PdbREPL` instance.
- add `.devx()` notes around the prompt flushing deats and comments
for any root-actor-bg-thread edge cases.
Related/supporting refinements:
- add `get_lock()`/`get_debug_req()` factory funcs since the plan is to
eventually implement both as `@singleton` instances per actor.
- fix `acquire_debug_lock()`'s call-sig-bug for scheduling
`request_root_stdio_lock()`..
- in `._pause()` only call `mk_pdb()` when `debug_func != None`.
- add some todo/warning notes around the `cls.repl = None` in
`DebugStatus.release()`
`test_pause_from_sync()` tweaks:
- don't use an `attach_patts.copy()`, since we always `break` on match.
- do `pytest.fail()` on that ^ loop's fallthrough..
- pass `do_ctlc(child, patt=attach_key)` such that we always match the
  current thread's name with the ctl-c triggered `.pdb()` emission.
- oh yeah, return the last `before: str` from `do_ctlc()`.
- in the script, flip `abandon_on_cancel=True` since when `False` it
seems to cause `trio.run()` to hang on exit from the last bg-thread
case?!?
In `lock_stdio_for_peer()` better internal-error handling/reporting:
- only `Lock._blocked.remove(ctx.cid)` if that same cid was added on
entry to avoid needless key-errors.
- drop all `Lock.release(force: bool)` usage remnants.
- if `req_ctx.cancel()` fails mention it with `ctx_err.add_note()`.
- add more explicit internal-failed-request log messaging via a new
`fail_reason: str`.
- use the new `x)<=\n|_` annots in any failure logging.
Other cleanups/niceties:
- drop `force: bool` flag entirely from the `Lock.release()`.
- use more supervisor-op-annots in `.pdb()` logging
with both `_pause/crash_msg: str` instead of double '|' lines when
`.pdb()`-reported from `._set_trace()`/`._post_mortem()`.
Turns out it does work XD
Prior presumption was from before I had the fute poll-loop so makes
sense we needed more than one sched-tick's worth of context switch vs.
now we can just keep looping-n-pumping as fast as possible until the
guest-run's main task completes.
Also,
- minimize the preface commentary (as per todo) now that we have tests
codifying all the edge cases :finger_crossed:
- parameter-ize the pump-loop-cycle delay and default it to 0.
To make the recent set of tests pass this (hopefully) finally solves all
`asyncio` embedded `trio` guest-run abandonment by ensuring we "pump the
event loop" until the guest-run future is fully complete.
Accomplished via simple poll loop of the form `while not
trio_done_fut.done(): await asyncio.sleep(.1)` in the `aio_main()`
task's exception teardown sequence. The loop does a naive 100ms
"pump-via-sleep & poll" for the `trio` side to complete before finally
exiting (and presumably raising) from the SIGINT cancellation.
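That is, roughly (as it would sit in the `aio_main()` teardown;
`trio_done_fut` is the guest-run future),
```python
import asyncio


async def pump_until_guest_run_done(trio_done_fut: asyncio.Future) -> None:
    while not trio_done_fut.done():
        # keep cycling the `asyncio` loop so the `trio` guest-run can
        # actually finish instead of being abandoned on teardown.
        await asyncio.sleep(.1)
```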
Other related cleanups and refinements:
- use `asyncio.Task.result()` inside `cancel_trio()` since it also
inline-raises any exception outcome and we can also log-report the
result in non-error cases.
- comment out buncha not-sure-we-need-it stuff in `cancel_trio()`.
- remove the botched `AsyncioCancelled(CancelledError):` idea obvi XD
- comment `greenback` init for now in `aio_main()` since (pretty sure)
we don't ever want to actually REPL in that specific func-as-task?
- always capture any `fute_err: BaseException` from the `main_outcome:
Outcome` delivered by the `trio` side guest-run task.
- add and raise a new super noisy `AsyncioRuntimeTranslationError`
whenever we detect that the guest-run `trio_done_fut` has not
completed before task exit; should avoid abandonment issues ever
happening again without knowing!
Finally this reproduces the issue as it (originally?) exhibited inside
`piker` where the `Actor.lifetime_stack` wasn't closed in cases where
during `infected_aio`-actor cancellation/shutdown `trio` side tasks
which are doing shielded (teardown) work are NOT being watched/waited on
from the `aio_main()` task-closure inside `run_as_asyncio_guest()`!
This is then the root cause of the guest-run being abandoned since if
our `aio_main()` task-closure doesn't know it should allow the run to
finish, it's going to call `loop.close()` eventually resulting in the
`GeneratorExit` thrown into `trio._core._run.unrolled_run()`..
So, this extends the `test_sigint_closes_lifetime_stack()` suite to
include cases for such shielded `trio`-task ops:
- add a new `trio_side_is_shielded: bool` which will toggle whether to
add a shielded 0.5s `trio.sleep()` loop to `manage_file()` which
should outlive the `asyncio` event-loop shutdown sequence and result
in an abandoned guest-run and thus a leaked file.
- parametrize the existing suite with this case resulting in a total
  16-test set B)
This patch demonstrates the problem with our `aio_main()` task-closure
impl via the now 4 failing tests, a fix is coming in a follow up commit!
Turns out it somehow breaks our `to_asyncio` error relay since obvi
`asyncio`'s runtime seems to specially handle it (prolly via
`isinstance()` ?) and it caused our
`test_aio_cancelled_from_aio_causes_trio_cancelled()` to hang..
Further, obvi `unpack_error()` won't be able to find the type def if not
kept inside `._exceptions`..
So given all that, revert the change/move as well as:
- tweak the aio-from-aio cancel test to timeout.
- do `trio.sleep()` conc with any bg aio task by moving out nursery
block.
- add a `send_sigint_to: str` parameter to
`test_sigint_closes_lifetime_stack()` such that we test the SIGINT
being relayed to just the parent or the child.
Took me a while to figure out what the heck was going on but, turns out
`asyncio` changed their SIGINT handling in 3.11 as per:
https://docs.python.org/3/library/asyncio-runner.html#handling-keyboard-interruption
I'm not entirely sure if it's the 3.11 changes or possibly wtv further
updates were made in 3.12 but more or less due to the way
our current main task was written the `trio` guest-run was getting
abandoned on SIGINTs sent from the OS to the infected child proc..
Note that much of the bug and soln cases are laid out in very detailed
comment-notes both in the new test and `run_as_asyncio_guest()`, right
above the final "fix" lines.
Add new `test_infected_aio.test_sigint_closes_lifetime_stack()` test suite
which reliably triggers all abandonment issues with multiple cases
of different parent behaviour post-sending-SIGINT-to-child:
1. briefly sleep then raise a KBI in the parent which was originally
demonstrating the file leak not being cleaned up by `Actor.lifetime_stack.close()`
and simulates a ctl-c from the console (relayed in tandem by
the OS to the parent and child processes).
2. do `Context.wait_for_result()` on the child context which would
hang and timeout since the actor runtime would never complete and
thus never relay a `ContextCancelled`.
3. both with and without running a `asyncio` task in the `manage_file`
child actor; originally it seemed that with an aio task scheduled in
the child actor the guest-run abandonment always was the "loud" case
where there seemed to be some actor teardown but with tbs from
python failing to gracefully exit the `trio` runtime..
The (seemingly working) "fix" required 2 lines of code to be run inside
a `asyncio.CancelledError` handler around the call to `await trio_done_fut`:
- `Actor.cancel_soon()` which schedules the actor runtime to cancel on
the next `trio` runner cycle and results in a "self cancellation" of
the actor.
- "pumping the `asyncio` event loop" with a non-0 `.sleep(0.1)` XD
|_ seems that a "shielded" pump with some actual `delay: float >= 0`
did the trick to get `asyncio` to allow the `trio` runner/loop to
fully complete its guest-run without abandonment.
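In sketch form the handler-side fix reads roughly like (simplified;
`actor` and `trio_done_fut` are stand-ins for the real locals in
`run_as_asyncio_guest()`),
```python
import asyncio


async def wait_with_self_cancel(actor, trio_done_fut: asyncio.Future):
    try:
        return await asyncio.shield(trio_done_fut)
    except asyncio.CancelledError:
        # 1) schedule a runtime "self cancellation" on the next trio
        #    runner cycle..
        actor.cancel_soon()
        # 2) ..then pump the aio loop until the guest-run actually
        #    completes instead of being abandoned.
        while not trio_done_fut.done():
            await asyncio.sleep(0.1)
        raise
```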
Other supporting changes:
- move `._exceptions.AsyncioCancelled`, our renamed
`asyncio.CancelledError` error-sub-type-wrapper, to `.to_asyncio` and make
it derive from `CancelledError` so as to be sure when raised by our
`asyncio` x-> `trio` exception relay machinery that `asyncio` is
getting the specific type it expects during cancellation.
- do "summary status" style logging in `run_as_asyncio_guest()` wherein
we compile the eventual `startup_msg: str` emitted just before waiting
on the `trio_done_fut`.
- shield-wait with `out: Outcome = await asyncio.shield(trio_done_fut)`
even though it seems to do nothing in the SIGINT handling case..(I
presume it might help avoid abandonment in a `asyncio.Task.cancel()`
case maybe?)