Such that a `TrioCancelled` is raised in the aio task via
`.set_exception()` to explicitly indicate and allow that task to handle
a taskc request from the parent `trio.Task`.
Namely the `tractor.pause_from_sync()` examples using both bg threads
and `asyncio` which seem to go into bad states where SIGINT is ignored..
Deats,
- add `maybe_expect_timeout()` cm to ensure the EOF hangs get
`.xfail()`ed instead.
- mark `test_pause_from_sync` with `@pytest.mark.ctlcs_bish` and don't
expect the greenback prompt msg.
- also mark `test_sync_pause_from_aio_task`.
Seems that on 3.13 it's not showing our script code in the output now?
Gotta get an example for @oremanj to see what's up but really it'd be
nice to just custom format stuff above `trio`'s runtime by def..
Anyway, update the `.devx._stackscope`,
- log formatting to be a little more "sclangy" lookin.
- change the per-actor "delimiter" lines style.
- report the `signal.getsignal(SIGINT)` which i needed in the
`sync_bp.py` with ctl-c causing a hang..
- mask the `_tree_dumped` duplicator log report as well as the "dumped
fine" one.
- add an example `pkill --signal SIGUSR1` cmdline.
Tweak the test to cope with,
- not showing our script lines now.. which i've commented in the
`assert_before()` patts..
- to expect the newly formatted delimiter (ascii) lines to separate the
root vs. hanger sub-actor sections.
That is whenever `trio.EndOfChannel` is raised (presumably from the
`._to_trio.receive()` call inside `LinkedTaskChannel.receive()`) we need
to be extra certain that we let it bubble upward transparently DESPITE
special exc-as-signal handling that is normally suppressed from the aio
side; REPEAT we want to ALWAYS bubble any `trio_err ==
trio.EndOfChannel` in the `finally:` handler of `translate_aio_errors()`
despite `chan._trio_to_raise == AsyncioTaskExited` such that the
caller's iterable machinery will operate as normal when the inter-task
stream is stopped (again, presumably by the aio side task terminating
the inter-task stream).
Main impl deats for this,
- in the EoC handler block ensure we assign both `chan._trio_err` and
the local `trio_err` as well as continue to re-raise.
- add a case to the match block in the `finally:` handler which FOR SURE
re-raises any `type(trio_err) is EndOfChannel`!
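For reference, a minimal sketch of that `finally:`-phase decision (the fn
name and signature here are purely illustrative stand-ins, not the actual
`translate_aio_errors()` internals):
```python
import trio

def maybe_reraise_trio_side_exc(
    trio_err: BaseException|None,
    trio_to_raise: BaseException|None,
) -> None:
    # hedged sketch: illustrative stand-in for the real exit logic.
    match trio_err:
        # FOR SURE re-raise any end-of-channel so the caller's
        # iteration machinery sees a normal stream-stop, even when
        # an `AsyncioTaskExited` was staged in `._trio_to_raise`.
        case trio.EndOfChannel():
            raise trio_err

    if trio_to_raise is not None:
        raise trio_to_raise
```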
Additionally fix a bad bug,
- a ref bug where we were NOT using the
`except BaseException as _trio_err` var to assign `chan._trio_err`
(the leading `_` was accidentally missing..)
Unrelated impl tweak,
- move all `maybe_raise_aio_side_err()` content back to inline with its
parent func - makes it easier to use `tractor.pause()` mostly Bp
- go back to trying to use `aio_task.set_exception(aio_taskc)` for now
even though i'm pretty sure we're going to move to a try-fute-first
style helper for this in the future.
Adjust some tests to match/mk-them-green,
- break from the `aio_echo_server()` recv loop on
`to_asyncio.TrioTaskExited`, much like you'd expect to (implicitly with
a `for`) on a `trio.EndOfChannel`; see the sketch after this list.
- toss in a masked `value is None` pause point i needed for debugging
inf looping caused by not re-raising EoCs per the main patch
description.
- add a debug-mode sized delay to root-infected test.
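As a rough illustration of that recv-loop break (a sketch only; the real
test's echo-server signature may differ):
```python
import asyncio

import trio
from tractor import to_asyncio

async def aio_echo_server(
    to_trio: trio.MemorySendChannel,
    from_trio: asyncio.Queue,
) -> None:
    # sketch: exit the echo loop when the trio side has gone away,
    # analogous to how a `for` loop implicitly handles a
    # `trio.EndOfChannel`.
    to_trio.send_nowait('start')
    while True:
        try:
            msg = await from_trio.get()
        except to_asyncio.TrioTaskExited:
            break
        to_trio.send_nowait(msg)
```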
Such that any combination of task terminations/exits can be explicitly
handled and "dual side independent" crash cases re-raised in egs.
The main error-or-exit impl changes include,
- use of new per-side "signaling exceptions":
- TrioTaskExited|TrioCancelled for signalling aio.
- AsyncioTaskExited|AsyncioCancelled for signalling trio.
- NOT overloading the `LinkedTaskChannel._trio/aio_err` fields for
err-as-signal relay and instead add a new pair of
`._trio/aio_to_raise` maybe-exc-attrs which allow each side's
task to specify what it would want the other side to raise to signal
its/a termination outcome:
- `._trio_to_raise: AsyncioTaskExited|AsyncioCancelled` to signal,
|_ the aio task having returned while the trio side was still reading
from the `asyncio.Queue` or is just not `.done()`.
|_ the aio task being self- or trio-request-cancelled, where
an `asyncio.CancelledError` is raised and caught but NOT relayed
as-is back to trio; instead signal a "more explicit" exc type.
- `._aio_to_raise: TrioTaskExited|TrioCancelled` to signal,
|_ the trio task having returned while the aio side was still reading
from the mem chan and indicating that the trio side might not
care any more about future streamed values (like the
`Stop/EndOfChannel` equivs for ipc `Context`s).
|_ when the trio task is cancelled we do
an `asyncio.Future.set_exception(TrioTaskExited())` to indicate
to the aio side verbosely that it should cancel due to the trio
parent.
- `_aio/trio_err` are now left to only capturing the **actual**
per-side task excs for introspection / other side's handling logic.
- supporting "graceful exits" depending on API in use from
`translate_aio_errors()` such that if either side exits but the other
side isn't expected to consume the final `return`ed value, we just exit
silently, which required:
- adding a `suppress_graceful_exits: bool` flag.
- adjusting the `maybe_raise_aio_side_err()` logic to use that flag
and suppress only on certain combos of `._trio_to_raise/._trio_err`.
- prefer to raise `._trio_to_raise` when the aio-side is the src and
vice versa.
- filling out pedantic logging for cancellation cases indicating which
side is the cause.
- add a `LinkedTaskChannel._aio_result` modelled after our
`Context._result` and a similar `.wait_for_result()` interface which
allows maybe accessing the aio task's final return value if desired
when using the `open_channel_from()` API.
- rename `cancel_trio()` done handler -> `signal_trio_when_done()`
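A condensed sketch of the resulting field layout (the real
`LinkedTaskChannel` obviously carries more state; the names below just
mirror the bullets above):
```python
from dataclasses import dataclass
from typing import Any

@dataclass
class _LinkedTaskChannelSketch:
    # actual per-side task excs, kept ONLY for introspection and
    # the other side's handling logic.
    _trio_err: BaseException|None = None
    _aio_err: BaseException|None = None

    # what each side wants the *other* side to raise as its
    # termination signal,
    # - raised to the trio task: AsyncioTaskExited|AsyncioCancelled
    _trio_to_raise: BaseException|None = None
    # - raised to the aio task: TrioTaskExited|TrioCancelled
    _aio_to_raise: BaseException|None = None

    # the aio task's final return value, modelled after
    # `Context._result` and exposed via `.wait_for_result()`.
    _aio_result: Any = None
```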
Also some fairly major test suite updates,
- add a `delay: int` producing fixture which delivers a much larger
timeout whenever `debug_mode` is set so that the REPL can be used
without a surrounding cancel firing.
- add a new `test_aio_exits_early_relays_AsyncioTaskExited` including
a paired `exit_early: bool` flag to `push_from_aio_task()`.
- adjust `test_trio_closes_early_causes_aio_checkpoint_raise` to expect
a `to_asyncio.TrioTaskExited`.
It appears that during the reorg commit
a356233b47 this was intended to be moved to `test_tooling`
(presumably to where I've placed it here) but was somehow just
never pasted over XD
Good thing this was caught while going through the remaining TODO
bullets in #2 !!
Also includes fixed relative `.conftest` imports!
Since there's no way to activate `greenback`'s portal in such cases, we
should at least have a test verifying our very loud error about the
inability to support this usage..
The (rare) condition is heavily detailed in new comments in
the `cancel_trio()` callback but, more or less, the idea here is to be
extra pedantic in raising an `ExceptionGroup` of errors from each task
(both `asyncio` and `trio`) whenever the 2 tasks raise "independently"
- in the sense that it's not obviously one side's task causing an error
(or cancellation) in the other. In this case we set the error for each
side on the `LinkedTaskChannel` (via new attrs described later).
As a synopsis, most of this work was refined out of supporting
`infected_aio=True` mode in the **root actor** and in particular as part
of getting that to work inside the `modden` daemon which at the time of
writing was still using the `i3ipc` lib and thus `asyncio`.
Impl deats,
- extend the `LinkedTaskChannel` field/API set (and type it),
- `._trio_task: trio.Task` for test/user introspection.
- also "stage" some ideas for a more refined interface,
- `.started()` to deliver the value yielded to the `trio.Task` parent.
|_ also includes some todos for how to implement this design
underneath.
- `._aio_first: Any|None = None` to hold that value ^.
- `.wait_aio_complete()` for syncing to the asyncio task.
- some detailed logging around "asyncio cancelled trio" case.
- move `AsyncioCancelled` into this module.
Styling changes,
- generally more explicit var naming.
- some todos for getting modern and fancy with typing..
NB, Let it be known this commit msg was written on a friday with the
help of various "mr. white" solns.
Might as well break apart the specific test set since there are some
(minor) subtleties and the orig test mod is already getting pretty big
XD
Includes both the new "independent"-event-loops test as well as the std
usage base case suite.
Such that the suite verifies the wip `maybe_raise_from_masking_exc()`
will raise from a `trio.Cancelled.__context__` since I can't think of
any reason a `Cancelled` should ever be raised in-place of
a non-`Cancelled` XD
Not sure what should be raised instead (or maybe just a `log.warning()`
emitted?) but this starts a draft for refinement at the least. Use the
new `@pytest.mark.parametrize` explicit tuple-of-params form with a
`pytest.param` + `.mark.xfail()` for the default behaviour case.
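Something along these lines (the param names and xfail reason are made up
for illustration):
```python
import pytest

@pytest.mark.parametrize(
    'raise_unmasked, expect_reraise',
    [
        (True, True),
        # the default behaviour case, currently expected to fail
        pytest.param(
            False, False,
            marks=pytest.mark.xfail(
                reason='default masking behaviour still undecided',
            ),
        ),
    ],
)
def test_masked_cancelled_context(raise_unmasked, expect_reraise):
    ...
```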
Since i wasted 2 days just to find an example of this inside an `@acm`,
figured I better reproduce for the purposes of maybe implementing
a warning sys (inside our wip proto `open_taskman()`) when a nursery
detects a single `Cancelled` in an eg where the `.__context__` is set to
some non-cancel error (which likely means a cancel-causing source
exception was suppressed by accident).
Left in a buncha commented code using `maybe_open_nursery()` which
i thought might be part of the issue but didn't end up being required;
will likely remove on a follow up refinement.
Along the lines of something like `pytest.raises()` where the handled
exception can be inspected from the `pdbp` REPL using its `.value` field
B)
This is super handy in particular for understanding
`BaseException[Group]`s without manually adding surrounding handler code
to assign the `except[*] Exception as exc_var:` particularly when trying
to understand multi-cancelled eg trees.
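A bare-bones sketch of the idea (illustrative names; the real
`open_crash_handler()` enters a `pdbp` post-mortem REPL rather than just
boxing the exc):
```python
from contextlib import contextmanager
from dataclasses import dataclass

@dataclass
class BoxedMaybeException:
    # inspectable from the REPL, ala `pytest.ExceptionInfo.value`
    value: BaseException|None = None

@contextmanager
def open_crash_handler_sketch():
    boxed = BoxedMaybeException()
    try:
        yield boxed
    except BaseException as exc:
        boxed.value = exc
        # the real impl would post-mortem via `pdbp` here before
        # (maybe) re-raising.
        raise
```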
Trying to replicate cases where errors are raised in both `trio` and
`asyncio` tasks independently (at least in `.to_asyncio` API terms) with
a new `test_trio_prestarted_task_bubbles` that generates 3 cases inside
an `@acm` call stack composing a `trio.Nursery` with
a `to_asyncio.open_channel_from()` call where a set of `trio` tasks are
started in a loop using `.start()` with various exc raising sequences,
- the aio task raising *before* the last `trio` task spawns.
- the aio task raising just after the last trio task spawns, but before
it starts.
- after the last trio task `.start()` call returns control to the
parent - but (for now) did not error.
TODO, still more cases to discover as i'm still fighting a `modden` bug
of this sort atm..
Other,
- tweak some other tests to have timeouts since some recent hangs were
found..
- started mucking with py3.13 and thus adjustments for strict egs in
some tests; full patchset to test suite likely coming soon!
Since we can't use it to `Task.set_exception()` (since that task method never
seems to work.. XD) and setting the private/internal always seems to do
the desired raising in the task? I realize it's an internal `asyncio`
runtime field but i'd rather take the risk of it breaking than having to
rely on our own equivalent hack..
Also, it seems like in the case where the task's associated (and internal)
future-waiter field is null, we won't run into the (same?) prior hanging
issues (maybe since there's nothing for `asyncio` internals to use to
wait on XD ??) when `Task.cancel()` is used..??
Main deats,
- add a new signal-exception `class TrioTaskExited(AsyncioCancelled):`
and `Future.set_exception()` it whenever the trio-task exits gracefully
while the asyncio-side task is still doing blocking work (of some sort),
which *seems to* be predicated by a check that
`._fut_waiter is not None`.
- always call `asyncio.Queue.shutdown()` for the same^ as well as
whenever we decide to call `Task.cancel()`; in that case the shutdown
relays correctly?
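Roughly the decision flow being described, as a sketch; note it leans on
the *internal* `Task._fut_waiter` field (hence fragile) and on
`asyncio.Queue.shutdown()` which only exists on py3.13+:
```python
import asyncio

class TrioTaskExited(Exception):
    # stand-in; the real signal-exc subclasses `AsyncioCancelled`
    pass

def signal_trio_exited_to_aio(
    aio_task: asyncio.Task,
    from_trio: asyncio.Queue,
) -> None:
    # close the feeder queue so any pending `.get()` unblocks (3.13+)
    if hasattr(from_trio, 'shutdown'):
        from_trio.shutdown()

    fut: asyncio.Future|None = getattr(aio_task, '_fut_waiter', None)
    if fut is not None and not fut.done():
        # raises `TrioTaskExited` inside the (blocked) aio task
        fut.set_exception(TrioTaskExited())
    else:
        # fall back to a std cancel request
        aio_task.cancel()
```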
Some further refinements,
- only warn about `Task.cancel()` usage when actually used ;)
- more local scope vars setting in the exit phase of
`translate_aio_errors()`.
- also in ^ use explicit caught-exc var names for each error-type.
Since it can not only cause the guest-mode run to abandon but also in
some edge cases prevent `trio`-errors from propagating (at least on
py3.12-13?) as discovered as part of supporting this mode officially
in the *root actor*.
As such, try to avoid that method as much as possible, instead opting to
pass the `trio`-side error via the inter-task channel ref.
Deats,
- add a `LinkedTaskChannel._trio_err: BaseException|None` which gets set
whenever the `trio.Task` error is caught; ONLY set `AsyncioCancelled`
when the `trio` task was for sure the cause, whether itself cancelled
or errored.
- always check for this error when exiting the `asyncio` side (even when
terminated via a call to `asyncio.Task.cancel()` or during any other
`CancelledError` handling) such that the `asyncio`-task can expect to
handle `AsyncioCancelled` due to the above^^ cases.
- never `cs.cancel()` the `trio` side unless that cancel scope has not
yet been `.cancel_called` whatsoever; it's a noop anyway.
- only raise any exc from `asyncio.Task.result()` when `chan._aio_err`
does not already match it since the existence of the pre-existing
`task_err` means `asyncio` prolly intends (or has already) raised and
interrupted the task elsewhere.
Various supporting tweaks,
- don't bother maybe-init-ing `greenback` from the actor entrypoint
since we already need to (and do) bestow the portals to each `asyncio`
task spawned using the `run_task()`/`open_channel_from()` API; further
the init-ing should be done already by client code that enables
infected mode (even in the root actor).
|_we should prolly also codify it from any
`run_daemon(infected_aio=True, debug_mode=True)` usage we offer.
- pass all the `_<field>`s to `LinkedTaskChannel` explicitly in named
kwarg style.
- better sclang-style log reports throughout, particularly on teardowns.
- generally more/better comments and docs around (not well understood)
edge cases.
- prep to just inline `maybe_raise_aio_side_err()` closure..
When `.pause_from_sync()` is called from an `asyncio.Task` which was
never bestowed a portal we want to be mega pedantic about it; indicate
that the task was NOT spawned from our `.to_asyncio` API and likely by
some out-of-our-control code (normally using
`asyncio.ensure_future()/.create_task()`). Though `greenback` already
errors on such usage, it's not always clear why no portal exists;
explaining the situation of a 3rd-party-bg-spawned-task should avoid
dev confusion for most cases.
Impl deats,
- distinguish between an actor in infected mode versus the actual caller
of `.pause_from_sync()` being an `asyncio.Task` with more explicit
`asyncio_task` and `is_infected_aio` vars.
- ONLY in the case of being both an infected-mode-actor AND detecting
that the caller is an `asyncio.Task`, check `greenback.has_portal()`
such that when not bestowed we presume the aforementioned
3rd-party-bg-task case above and raise a new explicit RTE with
a detailed explanatory message.
- add some masked draft code for handling the special case of a root
actor `asyncio.Task` caller which could (in theory) not actually
require gb portal use since the `Lock` can be acquired directly
without IPC.
|_this will likely require factoring of various pause machinery funcs
into a `_pause_from_root_task()` to mk the impl sane XD
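The gist of that portal check, sketched out (the helper name and error msg
wording are illustrative; `greenback.has_portal()` is the call named in
the bullets above):
```python
import asyncio

import greenback

def _ensure_aio_task_has_gb_portal() -> None:
    try:
        aio_task: asyncio.Task|None = asyncio.current_task()
    except RuntimeError:
        # no running asyncio event loop in this thread
        aio_task = None

    if (
        aio_task is not None
        and not greenback.has_portal()
    ):
        raise RuntimeError(
            '`tractor.pause_from_sync()` was called from an `asyncio.Task` '
            'not spawned via the `.to_asyncio` API (likely some 3rd-party '
            '`asyncio.ensure_future()`/`.create_task()` bg task), so no '
            '`greenback` portal was ever bestowed!'
        )
```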
Other,
- expose a new `debug_filter: Callable` which can be provided by the
caller of `_maybe_enter_pm()` to predicate whether to enter the
debugger REPL based on the caught `BaseException|BaseExceptionGroup`;
this is handy for customizing the meaning of "graceful cancellations"
so as to avoid crash handling on expected egs of more than just
`trio.Cancelled`s.
|_ make the default as it was implemented: `not is_multi_cancelled(err)`
- pass-through a new `ignore: set[BaseException]` as
`open_crash_handler(ignore_nested=ignore)` to allow for the same
silent-cancellation-egs-swallowing as desired from outside the actor
runtime.
Such that equivalents of `trio.Cancelled` from other runtimes such as
`asyncio.CancelledError` and `subprocess.CalledProcessError` (with
a `.returncode == -2`) can be gracefully ignored as needed by the
caller.
For example this is handy if you want to avoid debug-mode REPL entry on
an exception-group full of only some subset of exception types since you
expect certain tasks to raise such errors after having been cancelled by
a request from some parent supervision sys (some "higher up"
`trio.CancelScope`, a remote-triggered `ContextCancelled` or just from
an OS SIGINT).
Impl deats,
- offer a new `ignore_nested: set[BaseException]` param to which we
add `trio.Cancelled` by default when no other types are provided.
- use `ExceptionGroup.subgroup(tuple(ignore_nested))` to filter to egs of
the "ignored sub-errors set" and return any such match (instead of
`True`).
- detail a comment on exclusion case.
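Sketch of that matching logic (signature and defaults are approximations
of the behaviour described, not the verbatim impl):
```python
import trio

def is_multi_cancelled(
    exc: BaseException,
    ignore_nested: set[type[BaseException]]|None = None,
) -> bool|BaseExceptionGroup:
    if not ignore_nested:
        # no other types provided, only ignore `trio.Cancelled`s
        ignore_nested = {trio.Cancelled}

    if isinstance(exc, BaseExceptionGroup):
        # return the matching subgroup (truthy when non-empty)
        # instead of a bare `True`.
        return exc.subgroup(tuple(ignore_nested)) or False

    return False
```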
Such that we can hook into 3rd-party-libs more easily to monkey them and
use our (prettier/hipper) console logging with something like (an
example from the client project `modden`),
```python
import logging

import i3ipc
import tractor

connection_mod = i3ipc.connection

tractor_style_i3ipc_logger: logging.LoggerAdapter = tractor.log.get_console_log(
    _root_name=connection_mod.__name__,
    logger=connection_mod.logger,
    level='info',
)
# monkey-patch the instance-ref in the 3rd-party module
connection_mod.logger = tractor_style_i3ipc_logger
```
Impl deats,
- expose as `get_console_log(logger: logging.Logger)` and add default
failover logic.
- toss in more typing, also for mod-global instance.
Such that you can use,
```python
tractor.to_asyncio.run_as_asyncio_guest(
    trio_main=_trio_main,
)
```
to bootstrap the root actor (and thus main parent process) to embed
the actor-runtime into an `asyncio` loop. Prove it all works with a
subactor-free version of the aio echo-server test suite B)
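Where `_trio_main` might be as minimal as the following (a sketch of a
root-actor bootstrap, not the test suite's actual fn):
```python
import tractor
import trio

async def _trio_main() -> None:
    # open the root actor from *within* the guest-run'd trio main task
    async with tractor.open_root_actor():
        await trio.sleep_forever()
```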
For comparing a `msgspec.Struct` against an input `dict` presumably to
be used as input for struct instantiation. The main diff with
`.__sub__()` is that non-existing fields on either are reported
(loudly).
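Conceptually something like the following field-diff (a standalone
sketch; the actual helper is presumably a method alongside `.__sub__()`
and reports far more loudly):
```python
from msgspec import Struct

def fields_diff(
    struct_type: type[Struct],
    input_dict: dict,
) -> tuple[set[str], set[str]]:
    # report non-existing fields on *either* side
    struct_fields = set(struct_type.__struct_fields__)
    dict_fields = set(input_dict)
    return (
        struct_fields - dict_fields,  # expected by the struct, absent in the dict
        dict_fields - struct_fields,  # present in the dict, unknown to the struct
    )
```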
Such that maybe we can eventually offer a nicer higher-level API which
implements much of the boilerplate required by `msgspec` (like
type-matched branching to serialization logic) via a type-table
interface or something?
Not sure if the idea is that useful so leaving it all as TODOs for now
obviously.
Ensuring we can at least use `breakpoint()` from an infected actor's
`asyncio.Task` spawned via a `.to_asyncio` API.
Also includes a little `tests/devx/` reorging,
- start splitting out non-`tractor.pause()` tests into a new
`test_pause_from_non_trio.py` for all the `.pause_from_sync()`
use in bg-threaded or `asyncio` applications.
- factor harness commonalities to the `devx/conftest` (namely
the `do_ctlc()` masher).
- mv `test_pause_from_sync` to the new non-`trio` mod.
NOTE, the `ctlc=True` is still failing for
`test_pause_from_asyncio_task` which is a user-happiness bug but not
anything fundamentally broken - just need to handle the `asyncio` case
in `.devx._debug.sigint_shield()`!
Mostly fixing edge cases with `asyncio` and/or bg threads where the
`.repl_release: trio.Event` needs to be used from the main `trio`
thread OW confusing-but-valid teardown tracebacks can show under various
races.
Also improve,
- log reporting for such internal bugs to make them more obvious on
console via `log.exception()`.
- only restore the SIGINT handler when runtime is (still) active.
- reporting when `tractor.pause(shield=True)` should be used and
unhiding the internal frames from the tb in that case.
- for `pause_from_sync()` some deep fixes..
|_add an `allow_no_runtime: bool = False` flag to allow
**not** requiring the actor runtime to be active.
|_fix the `greenback` case-branch to only trigger on `not
is_trio_thread`.
|_add a scope-global `repl_owner: Task|Thread|None = None` to
avoid ref errors..
Various `try`/`except` blocks around external APIs that raise when not
running inside a `tractor` runtime and/or some async framework (mostly to avoid
too-late/benign error tbs on certain classes of actor tree teardown):
- for the `log.pdb()` prompts emitted before REPL console entry.
- inside `DebugStatus.is_main_trio_thread()`'s call to `sniffio` (see the
sketch after this list).
- in `_post_mortem()` by catching `NoRuntime` when called from a thread
still active after the `.open_root_actor()` has already exited.
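For instance the `sniffio` guard boils down to something like (a sketch;
the real `DebugStatus.is_main_trio_thread()` presumably does more):
```python
import threading

import sniffio

def is_main_trio_thread() -> bool:
    try:
        in_trio: bool = sniffio.current_async_library() == 'trio'
    except sniffio.AsyncLibraryNotFoundError:
        # not inside any async framework, eg. called from a bg thread
        in_trio = False

    return (
        in_trio
        and threading.current_thread() is threading.main_thread()
    )
```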
Also,
- create a dedicated `DebugStateError` for raising instead of `assert`s
when we have actual debug-request inconsistencies (as seem to be most
likely with bg thread usage of `breakpoint()`).
- show the `open_crash_handler()` frame on `bdb.BdbQuit` (for now?)
Since it seems that `pdbp.xpm()` can sometimes lose the up-stack
traceback info/frames? Not sure why but ours seems to work just fine
from an `asyncio`-handler in `modden`'s use of `i3ipc` B)
Also call `DebugStatus.shield_sigint()` from `pause_from_sync()` in the
infected-`asyncio` case to get the same shielding behaviour as in all
other usage!