Compare commits

..

35 Commits

Author SHA1 Message Date
Jad Abou-Chakra 95526e5d45 Decouple registry addresses from binding addresses 2025-03-10 12:36:43 -04:00
Tyler Goodlet d2aa92f1a0 Add in depth comment about module naming when used without pkg 2025-03-10 12:36:25 -04:00
Tyler Goodlet 5919665619 Add a super naive multi-host-capable web-req proxier for @jc211 2025-03-10 12:36:25 -04:00
Tyler Goodlet a1d75625e4 Draft test-doc for "out-of-band" `asyncio.Task`..
Since there's no way to activate `greenback`'s portal in such cases, we
should at least have a test verifying our very loud error about the
inability to support this usage..
2025-03-10 12:14:40 -04:00
Tyler Goodlet 85c60095ba Raise "independent" task errors in an eg
The (rare) condition is heavily detailed in new comments in
the `cancel_trio()` callback but, more or less, the idea here is to be
extra pedantic in raising an `ExceptionGroup` of errors from each task
(both `asyncio` and `trio`) whenever the 2 tasks raise "independently"
- in the sense that it's not obviously one side's task causing an error
(or cancellation) in the other. In this case we set the error for each
side on the `LinkedTaskChannel` (via new attrs described later).

As a synopsis, most of this work was refined out of supporting
`infected_aio=True` mode in the **root actor** and in particular as part
of getting that to work inside the `modden` daemon which at the time of
writing was still using the `i3ipc` lib and thus `asyncio`.

Impl deats,
- extend the `LinkedTaskChannel` field/API set (and type it),
  - `._trio_task: trio.Task` for test/user introspection.
- also "stage" some ideas for a more refined interface,
  - `.started()` to deliver the value yielded to the `trio.Task` parent.
   |_ also includes some todos for how to implement this design
      underneath.
  - `._aio_first: Any|None = None` to hold that value ^.
  - `.wait_aio_complete()` for syncing to the asyncio task.
- some detailed logging around "asyncio cancelled trio" case.
- Move `AsyncioCancelled` in this module.

Styling changes,
- generally more explicit var naming.
- some todos for getting modern and fancy with typing..

NB, Let it be known this commit msg was written on a friday with the
help of various "mr. white" solns.
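As a consumer-facing sketch of the above (hypothetical task/exc names,
and eliding the infected-mode runtime bootstrap), when both sides raise
"independently" the parent should expect an eg:

```python
from tractor import to_asyncio

async def aio_errs(to_trio, from_trio):  # hypothetical aio-side task
    to_trio.send_nowait('start')
    raise RuntimeError('asyncio-side')

async def trio_parent():
    try:
        async with to_asyncio.open_channel_from(aio_errs) as (first, chan):
            assert first == 'start'
            raise TypeError('trio-side')
    except BaseExceptionGroup as beg:
        # each side's error should have been set on the
        # `LinkedTaskChannel` and re-raised (grouped) here,
        for exc in beg.exceptions:
            print(f'collected: {exc!r}')
        raise
```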
2025-03-10 12:14:40 -04:00
Tyler Goodlet e2b9c3e769 Add a `tests/test_root_infect_asyncio`
Might as well break apart the specific test set since there are some
(minor) subtleties and the orig test mod is already getting pretty big
XD

Includes both the new "independent"-event-loops test as well as the std
usage base case suite.
2025-03-10 12:04:51 -04:00
Tyler Goodlet ae18ceb633 Impl a proto "unmasker" `@acm` alongside our test
Such that the suite verifies the wip `maybe_raise_from_masking_exc()`
will raise from a `trio.Cancelled.__context__` since I can't think of
any reason a `Cancelled` should ever be raised in-place of
a non-`Cancelled` XD

Not sure what should be raised instead (or maybe just a `log.warning()`
emitted?) but this starts a draft for refinement at the least. Use the
new `@pytest.mark.parametrize` explicit tuple-of-params form with a
`pytest.param` + `.mark.xfail()` for the default behaviour case.
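The form in question, as also visible in the test-diff further below:

```python
import pytest

@pytest.mark.parametrize(
    ('unmask_from_canc', 'canc_from_finally'),
    [
        (True, False),
        (True, True),
        # the default-behaviour case, expected to fail (for now),
        pytest.param(
            False, True,
            marks=pytest.mark.xfail(reason='never raises!'),
        ),
    ],
)
def test_acm_embedded_nursery_propagates_enter_err(
    canc_from_finally: bool,
    unmask_from_canc: bool,
):
    ...
```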
2025-03-10 12:04:51 -04:00
Tyler Goodlet 917699417f Add a "raise-from-`finally:`" example test
Since i wasted 2 days just to find an example of this inside an `@acm`,
figured I better reproduce for the purposes of maybe implementing
a warning sys (inside our wip proto `open_taskman()`) when a nursery
detects a single `Cancelled` in an eg where the `.__context__` is set to
some non-cancel error (which likely means a cancel-causing source
exception was suppressed by accident).

Left in a buncha commented code using `maybe_open_nursery()` which
i thought might be part of the issue but didn't end up being required;
will likely remove on a follow up refinement.
2025-03-10 12:04:51 -04:00
Tyler Goodlet 71a29d0106 Yield a boxed-maybe-error from `open_crash_handler()`
Along the lines of something like `pytest.raises()` where the handled
exception can be inspected from the `pdbp` REPL using its `.value` field
B)

This is super handy in particular for understanding
`BaseException[Group]`s without manually adding surrounding handler code
to assign the `except[*] Exception as exc_var:` particularly when trying
to understand multi-cancelled eg trees.
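A minimal usage sketch mirroring the new test code further below
(the raised exc is just for demo):

```python
import tractor

with tractor.devx.open_crash_handler() as bxerr:
    assert not bxerr.value  # nothing handled yet
    raise ValueError('inspect me from the REPL')
    # ^once REPL-ed in, the handled exc is reachable via
    # `bxerr.value` ala `pytest.raises()`.
```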
2025-03-10 12:04:51 -04:00
Tyler Goodlet 095bf28f5d Add an inter-leaved-task error test
Trying to replicate cases where errors are raised in both `trio` and
`asyncio` tasks independently (at least in `.to_asyncio` API terms) with
a new `test_trio_prestarted_task_bubbles` that generates 3 cases inside
an `@acm` call stack composing a `trio.Nursery` with
a `to_asyncio.open_channel_from()` call where a set of `trio` tasks are
started in a loop using `.start()` with various exc raising sequences,
- the aio task raising *before* the last `trio` task spawns.
- the aio task raising just after the last trio task spawns, but before
  it starts.
- after the last trio task `.start()` call returns control to the
  parent - but (for now) does not error.

TODO, still more cases to discover as i'm still fighting a `modden` bug
of this sort atm..

Other,
- tweak some other tests to have timeouts since some recent hangs were
  found..
- started mucking with py3.13 and thus adjustments for strict egs in
  some tests; full patchset to test suite likely coming soon!
2025-03-10 12:04:51 -04:00
Tyler Goodlet 129dff575f Hm, `asyncio.Task._fut_waiter.set_exception()`?
Since we can't use it to `Task.set_exception()` (since that task method never
seems to work.. XD) and setting the private/internal always seems to do
the desired raising in the task? I realize it's an internal `asyncio`
runtime field but i'd rather take the risk of it breaking than having to
rely on our own equivalent hack..

Also, it seems like in the case where the task's associated (and internal)
future-waiter field is null, we won't run into the (same?) prior hanging
issues (maybe since there's nothing for `asyncio` internals to use to
wait XD ??) when `Task.cancel()` is used..??

Main deats,
- add a new signal-exception `class TrioTaskExited(AsyncioCancelled):`
  and `Future.set_exception()` it whenever the trio-task exits
  gracefully while the asyncio-side task is still doing blocking work
  (of some sort) which *seems to* be predicated by a check that
  `._fut_waiter is not None`.
- always call `asyncio.Queue.shutdown()` for the same^ as well as
  whenever we decide to call `Task.cancel()`; in that case the shutdown
  relays correctly?

Some further refinements,
- only warn about `Task.cancel()` usage when actually used ;)
- more local scope vars setting in the exit phase of
  `translate_aio_errors()`.
- also in ^ use explicit caught-exc var names for each error-type.
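In sketch form (stand-in exc class and helper, exact message an
assumption), the pattern reads like:

```python
import asyncio

class TrioTaskExited(Exception):  # stand-in for the new signal-exc
    ...

def signal_trio_exit(task: asyncio.Task) -> None:
    # NB: pokes a private `asyncio` runtime field which may break
    # on future CPythons!
    if (fut := task._fut_waiter) is not None:
        fut.set_exception(
            TrioTaskExited('trio-side exited but aio-task still blocking!')
        )
    else:
        # no fut-waiter to hang on, `Task.cancel()` seems (?) safe,
        task.cancel()
```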
2025-03-10 12:04:51 -04:00
Tyler Goodlet 9167fbb0a8 Much more limited `asyncio.Task.cancel()` use
Since it can not only cause the guest-mode run to abandon but also in
some edge cases prevent `trio`-errors from propagating (at least on
py3.12-13?) as discovered as part of supporting this mode officially
in the *root actor*.

As such try to avoid that method as much as possible instead opting to
pass the `trio`-side error via the iter-task channel ref.

Deats,
- add a `LinkedTaskChannel._trio_err: BaseException|None` which gets set
  whenever the `trio.Task` error is caught; ONLY set `AsyncioCancelled`
  when the `trio` task was for sure the cause, whether itself cancelled
  or errored.
- always check for this error when exiting the `asyncio` side (even when
  terminated via a call to `asyncio.Task.cancel()` or during any other
  `CancelledError` handling) such that the `asyncio`-task can expect to
  handle `AsyncioCancelled` due to the above^^ cases.
- never `cs.cancel()` the `trio` side unless that cancel scope has not
  yet been `.cancel_called` whatsoever; it's a noop anyway.
- only raise any exc from `asyncio.Task.result()` when `chan._aio_err`
  does not already match it since the existence of the pre-existing
  `task_err` means `asyncio` prolly intends (or has already) raised and
  interrupted the task elsewhere.

Various supporting tweaks,
- don't bother maybe-init-ing `greenback` from the actor entrypoint
  since we already need to (and do) bestow the portals to each `asyncio`
  task spawned using the `run_task()`/`open_channel_from()` API; further
  the init-ing should be done already by client code that enables
  infected mode (even in the root actor).
 |_we should prolly also codify it from any
   `run_daemon(infected_aio=True, debug_mode=True)` usage we offer.
- pass all the `_<field>`s to `LinkedTaskChannel` explicitly in named
  kwarg style.
- better sclang-style log reports throughout, particularly on teardowns.
- generally more/better comments and docs around (not well understood)
  edge cases.
- prep to just inline `maybe_raise_aio_side_err()` closure..
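Such that (in hedged sketch form, the import path being an assumption)
an `asyncio`-side task body can now read like:

```python
import asyncio
from tractor.to_asyncio import AsyncioCancelled  # assumed import path

async def aio_task(to_trio, from_trio):
    to_trio.send_nowait('start')
    try:
        await asyncio.sleep(float('inf'))
    except AsyncioCancelled:
        # the `trio` side was for sure the cause (itself cancelled
        # or errored); do graceful teardown then re-raise.
        raise
```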
2025-03-10 12:04:51 -04:00
Tyler Goodlet b6608e1c46 Expose `debug_filter` from `open_root_actor()` also
Such that actor-runtime graceful cancel handling can be used throughout
any process tree.
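A usage sketch mirroring the (default) predicate visible in the diff
further below:

```python
import trio
import tractor
from tractor._exceptions import is_multi_cancelled

async def main():
    async with tractor.open_root_actor(
        debug_mode=True,
        # only REPL-in when the err-tree is NOT just a graceful
        # cancellation eg (which is also the default),
        debug_filter=lambda err: not is_multi_cancelled(err),
    ):
        ...

trio.run(main)
```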
2025-03-10 12:04:51 -04:00
Tyler Goodlet 33e5e2c06f Drop extra nl from boxed error fmt 2025-03-10 12:04:51 -04:00
Tyler Goodlet 52238ade28 Raise explicitly on missing `greenback` portal
When `.pause_from_sync()` is called from an `asyncio.Task` which was
never bestowed a portal, we want to be mega pedantic about it; indicate
that the task was NOT spawned from our `.to_asyncio` API and likely by
some out-of-our-control code (normally using
`asyncio.ensure_future()/.create_task()`). Though `greenback` already
errors on such usage, it's not always clear why no portal exists;
explaining the situation of a 3rd-party-bg-spawned-task should avoid
dev confusion for most cases.

Impl deats,
- distinguish between an actor in infected mode versus the actual caller
  of `.pause_from_sync()` being an `asyncio.Task` with more explicit
  `asyncio_task` and `is_infected_aio` vars.
- ONLY in the case of being both an infected-mode-actor AND detecting
  that the caller is an `asyncio.Task`, check `greenback.has_portal()`
  such that when not bestowed we presume the aforementioned
  3rd-party-bg-task case above and raise a new explicit RTE with
  a detailed explanatory message.
- add some masked draft code for handling the special case of a root
  actor `asyncio.Task` caller which could (in theory) not actually
  require gb portal use since the `Lock` can be acquired directly
  without IPC.
 |_this will likely require factoring of various pause machinery funcs
   into a `_pause_from_root_task()` to mk the impl sane XD

Other,
- expose a new `debug_filter: Callable` which can be provided by the
  caller of `_maybe_enter_pm()` to predicate whether to enter the
  debugger REPL based on the caught `BaseException|BaseExceptionGroup`;
  this is handy for customizing the meaning of "graceful cancellations"
  so as to avoid crash handling on expected egs of more than
  `trio.Cancelled`.
|_ make the default as it was implemented: `not is_multi_cancelled(err)`
- pass-through a new `ignore: set[BaseException]` as
  `open_crash_handler(ignore_nested=ignore)` to allow for the same
  silent-cancellation-egs-swallowing as desired from outside the actor
  runtime.
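A sketch of the out-of-our-control spawn pattern being guarded against
(hypothetical task fn; not a self-contained repro since it needs an
infected-mode actor running):

```python
import asyncio
import tractor

async def oob_task():
    # never bestowed a `greenback` portal since not spawned via
    # `.to_asyncio`, so this should raise the new explicit RTE,
    tractor.pause_from_sync()

# the 3rd-party-bg-spawned-task pattern in question,
asyncio.ensure_future(oob_task())
```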
2025-03-10 12:04:51 -04:00
Tyler Goodlet f7cd8739a5 Accept err-type override in `is_multi_cancelled()`
Such that equivalents of `trio.Cancelled` from other runtimes such as
`asyncio.CancelledError` and `subprocess.CalledProcessError` (with
a `.returncode == -2`) can be gracefully ignored as needed by the
caller.

For example this is handy if you want to avoid debug-mode REPL entry on
an exception-group full of only some subset of exception types since you
expect certain tasks to raise such errors after having been cancelled by
a request from some parent supervision sys (some "higher up"
`trio.CancelScope`, a remotely triggered `ContextCancelled` or just from
an OS SIGINT).

Impl deats,
- offer a new `ignore_nested: set[BaseException]` param to which we
  add `trio.Cancelled` by default when no other types are provided.
- use `ExceptionGroup.subgroup(tuple(ignore_nested))` to filter to egs of
  the "ignored sub-errors set" and return any such match (instead of
  `True`).
- detail a comment on exclusion case.
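Example caller-side usage per the above (the aio exc-type choice being
up to the caller; helper name is hypothetical):

```python
import asyncio
from tractor._exceptions import is_multi_cancelled

def swallow_if_all_cancelled(beg: BaseExceptionGroup) -> None:
    # `trio.Cancelled` is always counted-in; extend with the aio
    # equivalent so mixed graceful-cancel egs are also ignored,
    if not is_multi_cancelled(
        beg,
        ignore_nested={asyncio.CancelledError},
    ):
        raise beg
```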
2025-03-10 12:04:51 -04:00
Tyler Goodlet 7537c6f053 Support passing pre-conf-ed `Logger`
Such that we can hook into 3rd-party-libs more easily to monkey them and
use our (prettier/hipper) console logging with something like (an
example from the client project `modden`),

```python
connection_mod = i3ipc.connection
tractor_style_i3ipc_logger: logging.LoggerAdapter = tractor.log.get_console_log(
    _root_name=connection_mod.__name__,
    logger=connection_mod.logger,
    level='info',
)
# monkey the instance-ref in the 3rd-party module
connection_mod.logger = tractor_style_i3ipc_logger
```

Impl deats,
- expose as `get_console_log(logger: logging.Logger)` and add default
  failover logic.
- toss in more typing, also for mod-global instance.
2025-03-10 12:04:51 -04:00
Tyler Goodlet 9c83f02568 Support and test infected-`asyncio`-mode for root
Such that you can use,

```python

tractor.to_asyncio.run_as_asyncio_guest(
    trio_main=_trio_main,
)
```

to bootstrap the root actor (and thus main parent process) to embed
the actor-runtime into an `asyncio` loop. Prove it all works with a
subactor-free version of the aio echo-server test suite B)
2025-03-10 12:04:51 -04:00
Tyler Goodlet 441cf0962d TOSQUASH: 9002f60 howtorelease.md file 2024-12-10 14:43:39 -05:00
Tyler Goodlet fb04f74605 Draft a (pretty)`Struct.fields_diff()`
For comparing a `msgspec.Struct` against an input `dict` presumably to
be used as input for struct instantiation. The main diff with
`.__sub__()` is that non-existing fields on either side are reported
(loudly).
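A hypothetical usage sketch (exact signature/semantics still in draft;
using plain `msgspec.Struct` as a stand-in for tractor's pretty-struct
subclass):

```python
from msgspec import Struct  # stand-in for `pretty_struct.Struct`

class Point(Struct):
    x: int
    y: int

# fields existing on only one side ('y' here, 'z' in the input)
# should be reported (loudly), unlike with `.__sub__()`,
Point(x=1, y=2).fields_diff({'x': 1, 'z': 3})
```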
2024-12-10 14:12:37 -05:00
Tyler Goodlet aa1f6fa4b5 Spitballing how to expose custom `msgspec` type hooks
Such that maybe we can eventually offer a nicer higher-level API which
implements much of the boilerplate required by `msgspec` (like
type-matched branching to serialization logic) via a type-table
interface or something?

Not sure if the idea is that useful so leaving it all as TODOs for now
obviously.
2024-12-09 21:09:48 -05:00
Tyler Goodlet 9002f608ee Add `notes_to_self/howtorelease.md` reminder doc 2024-12-09 18:14:11 -05:00
Tyler Goodlet 8ebc022535 Add TODO for a runtime-vars passing mechanism 2024-12-09 18:12:22 -05:00
Tyler Goodlet e26fa8330f Change masked `.pause()` line 2024-12-09 18:04:32 -05:00
Tyler Goodlet a2659069c5 Type the inter-loop chans 2024-12-09 17:37:32 -05:00
Tyler Goodlet 54699d7a0b Denoise duplicate chan logging for now 2024-12-09 17:36:52 -05:00
Tyler Goodlet b91ab9e3a8 Add TODO for a tb frame "filterer" sys.. 2024-12-09 17:14:51 -05:00
Tyler Goodlet cd14c4fe72 Set `RemoteActorError.pformat(boxer_header=self.relay_uid)` by def 2024-12-09 16:57:57 -05:00
Tyler Goodlet ad40fcd2bc Support custom `boxer_header: str` provided by `pformat_boxed_tb()` caller 2024-12-09 16:57:22 -05:00
Tyler Goodlet 508ba510a5 Expose a `_ctlc_ignore_header: str` for use in `sigint_shield()` 2024-12-09 16:56:30 -05:00
Tyler Goodlet b875b35b98 Change `tractor.breakpoint()` to new `.pause()` in test suite 2024-12-09 16:08:55 -05:00
Tyler Goodlet 46ddc214cd Wrap `asyncio_bp.py` ex into test suite
Ensuring we can at least use `breakpoint()` from an infected actor's
`asyncio.Task` spawned via a `.to_asyncio` API.

Also includes a little `tests/devx/` reorging,
- start splitting out non-`tractor.pause()` tests into a new
  `test_pause_from_non_trio.py` for all the `.pause_from_sync()`
  use in bg-threaded or `asyncio` applications.
- factor harness commonalities to the `devx/conftest` (namely
  the `do_ctlc()` masher).
- mv `test_pause_from_sync` to the new non-`trio` mod.

NOTE, the `ctlc=True` is still failing for
`test_pause_from_asyncio_task` which is a user-happiness bug but not
anything fundamentally broken - just need to handle the `asyncio` case
in `.devx._debug.sigint_shield()`!
2024-12-09 15:38:28 -05:00
Tyler Goodlet b3ee20d3b9 Add `breakpoint()` hook restoration example + test 2024-12-05 20:56:39 -05:00
Tyler Goodlet cf3e6c1218 Rename `n: trio.Nursery` -> `tn` (task nursery) 2024-12-04 14:01:38 -05:00
Tyler Goodlet 8af9b0201d Messy-teardown `DebugStatus` related fixes
Mostly fixing edge cases with `asyncio` and/or bg threads where the
`.repl_release: trio.Event` needs to be used from the main `trio`
thread, OW confusing-but-valid teardown tracebacks can show under various
races.

Also improve,
- log reporting for such internal bugs to make them more obvious on
  console via `log.exception()`.
- only restore the SIGINT handler when runtime is (still) active.
- reporting when `tractor.pause(shield=True)` should be used and
  unhiding the internal frames from the tb in that case.
- for `pause_from_sync()` some deep fixes..
 |_add an `allow_no_runtime: bool = False` flag to allow
   **not** requiring the actor runtime to be active.
 |_fix the `greenback` case-branch to only trigger on `not
   is_trio_thread`.
 |_add a scope-global `repl_owner: Task|Thread|None = None` to
   avoid ref errors..
2024-12-03 15:26:25 -05:00
31 changed files with 2042 additions and 621 deletions

View File

@ -1,3 +1,8 @@
'''
Examples of using the builtin `breakpoint()` from an `asyncio.Task`
running in a subactor spawned with `infect_asyncio=True`.
'''
import asyncio
import trio
@ -26,15 +31,16 @@ async def bp_then_error(
# NOTE: what happens here inside the hook needs some refinement..
# => seems like it's still `._debug._set_trace()` but
# we set `Lock.local_task_in_debug = 'sync'`, we probably want
# some further, at least, meta-data about the task/actoq in debug
# in terms of making it clear it's asyncio mucking about.
# some further, at least, meta-data about the task/actor in debug
# in terms of making it clear it's `asyncio` mucking about.
breakpoint()
# short checkpoint / delay
await asyncio.sleep(0.5)
await asyncio.sleep(0.5) # asyncio-side
if raise_after_bp:
raise ValueError('blah')
raise ValueError('asyncio side error!')
# TODO: test case with this so that it gets cancelled?
else:
@ -46,7 +52,7 @@ async def bp_then_error(
@tractor.context
async def trio_ctx(
ctx: tractor.Context,
bp_before_started: bool = True,
bp_before_started: bool = False,
):
# this will block until the ``asyncio`` task sends a "first"
@ -55,19 +61,19 @@ async def trio_ctx(
to_asyncio.open_channel_from(
bp_then_error,
raise_after_bp=not bp_before_started,
# raise_after_bp=not bp_before_started,
) as (first, chan),
trio.open_nursery() as n,
trio.open_nursery() as tn,
):
assert first == 'start'
if bp_before_started:
await tractor.breakpoint()
await tractor.pause()
await ctx.started(first)
await ctx.started(first) # trio-side
n.start_soon(
tn.start_soon(
to_asyncio.run_task,
aio_sleep_forever,
)
@ -77,14 +83,18 @@ async def trio_ctx(
async def main(
bps_all_over: bool = True,
# TODO, WHICH OF THESE HAZ BUGZ?
cancel_from_root: bool = False,
err_from_root: bool = False,
) -> None:
async with tractor.open_nursery(
debug_mode=True,
maybe_enable_greenback=True,
# loglevel='devx',
) as n:
ptl: Portal = await n.start_actor(
) as an:
ptl: Portal = await an.start_actor(
'aio_daemon',
enable_modules=[__name__],
infect_asyncio=True,
@ -99,12 +109,18 @@ async def main(
assert first == 'start'
if bps_all_over:
await tractor.breakpoint()
# pause in parent to ensure no cross-actor
# locking problems exist!
await tractor.pause()
if cancel_from_root:
await ctx.cancel()
if err_from_root:
assert 0
else:
await trio.sleep_forever()
# await trio.sleep_forever()
await ctx.cancel()
assert 0
# TODO: case where we cancel from trio-side while asyncio task
# has debugger lock?

View File

@ -1,5 +1,5 @@
'''
Fast fail test with a context.
Fast fail test with a `Context`.
Ensure the partially initialized sub-actor process
doesn't cause a hang on error/cancel of the parent

View File

@ -7,7 +7,7 @@ async def breakpoint_forever():
try:
while True:
yield 'yo'
await tractor.breakpoint()
await tractor.pause()
except BaseException:
tractor.log.get_console_log().exception(
'Cancelled while trying to enter pause point!'

View File

@ -10,7 +10,7 @@ async def name_error():
async def breakpoint_forever():
"Indefinitely re-enter debugger in child actor."
while True:
await tractor.breakpoint()
await tractor.pause()
# NOTE: if the test never sent 'q'/'quit' commands
# on the pdb repl, without this checkpoint line the

View File

@ -6,7 +6,7 @@ async def breakpoint_forever():
"Indefinitely re-enter debugger in child actor."
while True:
await trio.sleep(0.1)
await tractor.breakpoint()
await tractor.pause()
async def name_error():

View File

@ -6,19 +6,46 @@ import tractor
async def main() -> None:
async with tractor.open_nursery(debug_mode=True) as an:
assert os.environ['PYTHONBREAKPOINT'] == 'tractor._debug._set_trace'
# initially unset, no entry.
orig_pybp_var: int = os.environ.get('PYTHONBREAKPOINT')
assert orig_pybp_var in {None, "0"}
async with tractor.open_nursery(
debug_mode=True,
) as an:
assert an
assert (
(pybp_var := os.environ['PYTHONBREAKPOINT'])
==
'tractor.devx._debug._sync_pause_from_builtin'
)
# TODO: an assert that verifies the hook has indeed been, hooked
# XD
assert sys.breakpointhook is not tractor._debug._set_trace
assert (
(pybp_hook := sys.breakpointhook)
is not tractor.devx._debug._set_trace
)
print(
f'$PYTHONBREAKPOINT: {pybp_var!r}\n'
f'`sys.breakpointhook`: {pybp_hook!r}\n'
)
breakpoint()
pass # first bp, tractor hook set.
# TODO: an assert that verifies the hook is unhooked..
# XXX AFTER EXIT (of actor-runtime) verify the hook is unset..
#
# YES, this is weird but it's how stdlib docs say to do it..
# https://docs.python.org/3/library/sys.html#sys.breakpointhook
assert os.environ.get('PYTHONBREAKPOINT') is orig_pybp_var
assert sys.breakpointhook
# now ensure a regular builtin pause still works
breakpoint()
pass # last bp, stdlib hook restored
if __name__ == '__main__':
trio.run(main)

View File

@ -10,7 +10,7 @@ async def main():
await trio.sleep(0.1)
await tractor.breakpoint()
await tractor.pause()
await trio.sleep(0.1)

View File

@ -11,7 +11,7 @@ async def main(
# loglevel='runtime',
):
while True:
await tractor.breakpoint()
await tractor.pause()
if __name__ == '__main__':

View File

@ -4,9 +4,9 @@ import trio
async def gen():
yield 'yo'
await tractor.breakpoint()
await tractor.pause()
yield 'yo'
await tractor.breakpoint()
await tractor.pause()
@tractor.context
@ -15,7 +15,7 @@ async def just_bp(
) -> None:
await ctx.started()
await tractor.breakpoint()
await tractor.pause()
# TODO: bps and errors in this call..
async for val in gen():

View File

@ -0,0 +1,18 @@
First generate a built dist:
```
python -m pip install --upgrade build
python -m build --sdist --outdir dist/alpha5/
```
Then try a test ``pypi`` upload:
```
python -m twine upload --repository testpypi dist/alpha5/*
```
Then push to `pypi` for realz.
```
python -m twine upload dist/alpha5/*
```

View File

@ -150,6 +150,18 @@ def pytest_generate_tests(metafunc):
metafunc.parametrize("start_method", [spawn_backend], scope='module')
# TODO: a way to let test scripts (like from `examples/`)
# guarantee they won't registry addr collide!
# @pytest.fixture
# def open_test_runtime(
# reg_addr: tuple,
# ) -> AsyncContextManager:
# return partial(
# tractor.open_nursery,
# registry_addrs=[reg_addr],
# )
def sig_prog(proc, sig):
"Kill the actor-process with ``sig``."
proc.send_signal(sig)

View File

@ -2,6 +2,7 @@
`tractor.devx.*` tooling sub-pkg test space.
'''
import time
from typing import (
Callable,
)
@ -11,9 +12,19 @@ from pexpect.exceptions import (
TIMEOUT,
)
from pexpect.spawnbase import SpawnBase
from tractor._testing import (
mk_cmd,
)
from tractor.devx._debug import (
_pause_msg as _pause_msg,
_crash_msg as _crash_msg,
_repl_fail_msg as _repl_fail_msg,
_ctlc_ignore_header as _ctlc_ignore_header,
)
from conftest import (
_ci_env,
)
@pytest.fixture
@ -107,6 +118,9 @@ def expect(
raise
PROMPT = r"\(Pdb\+\)"
def in_prompt_msg(
child: SpawnBase,
parts: list[str],
@ -166,3 +180,40 @@ def assert_before(
err_on_false=True,
**kwargs
)
def do_ctlc(
child,
count: int = 3,
delay: float = 0.1,
patt: str|None = None,
# expect repl UX to reprint the prompt after every
# ctrl-c send.
# XXX: no idea but, in CI this never seems to work even on 3.10 so
# needs some further investigation potentially...
expect_prompt: bool = not _ci_env,
) -> str|None:
before: str|None = None
# make sure ctl-c sends don't do anything but repeat output
for _ in range(count):
time.sleep(delay)
child.sendcontrol('c')
# TODO: figure out why this makes CI fail..
# if you run this test manually it works just fine..
if expect_prompt:
time.sleep(delay)
child.expect(PROMPT)
before = str(child.before.decode())
time.sleep(delay)
if patt:
# should see the last line on console
assert patt in before
# return the console content up to the final prompt
return before

View File

@ -21,14 +21,13 @@ from pexpect.exceptions import (
EOF,
)
from tractor.devx._debug import (
from .conftest import (
do_ctlc,
PROMPT,
_pause_msg,
_crash_msg,
_repl_fail_msg,
)
from conftest import (
_ci_env,
)
from .conftest import (
expect,
in_prompt_msg,
@ -70,9 +69,6 @@ has_nested_actors = pytest.mark.has_nested_actors
# )
PROMPT = r"\(Pdb\+\)"
@pytest.mark.parametrize(
'user_in_out',
[
@ -123,8 +119,10 @@ def test_root_actor_error(
ids=lambda item: f'{item[0]} -> {item[1]}',
)
def test_root_actor_bp(spawn, user_in_out):
"""Demonstrate breakpoint from in root actor.
"""
'''
Demonstrate breakpoint from in root actor.
'''
user_input, expect_err_str = user_in_out
child = spawn('root_actor_breakpoint')
@ -146,43 +144,6 @@ def test_root_actor_bp(spawn, user_in_out):
assert expect_err_str in str(child.before)
def do_ctlc(
child,
count: int = 3,
delay: float = 0.1,
patt: str|None = None,
# expect repl UX to reprint the prompt after every
# ctrl-c send.
# XXX: no idea but, in CI this never seems to work even on 3.10 so
# needs some further investigation potentially...
expect_prompt: bool = not _ci_env,
) -> str|None:
before: str|None = None
# make sure ctl-c sends don't do anything but repeat output
for _ in range(count):
time.sleep(delay)
child.sendcontrol('c')
# TODO: figure out why this makes CI fail..
# if you run this test manually it works just fine..
if expect_prompt:
time.sleep(delay)
child.expect(PROMPT)
before = str(child.before.decode())
time.sleep(delay)
if patt:
# should see the last line on console
assert patt in before
# return the console content up to the final prompt
return before
def test_root_actor_bp_forever(
spawn,
ctlc: bool,
@ -919,138 +880,6 @@ def test_different_debug_mode_per_actor(
)
def test_pause_from_sync(
spawn,
ctlc: bool
):
'''
Verify we can use the `pdbp` REPL from sync functions AND from
any thread spawned with `trio.to_thread.run_sync()`.
`examples/debugging/sync_bp.py`
'''
child = spawn('sync_bp')
# first `sync_pause()` after nurseries open
child.expect(PROMPT)
assert_before(
child,
[
# pre-prompt line
_pause_msg,
"<Task '__main__.main'",
"('root'",
]
)
if ctlc:
do_ctlc(child)
# ^NOTE^ subactor not spawned yet; don't need extra delay.
child.sendline('c')
# first `await tractor.pause()` inside `p.open_context()` body
child.expect(PROMPT)
# XXX shouldn't see gb loaded message with PDB loglevel!
assert not in_prompt_msg(
child,
['`greenback` portal opened!'],
)
# should be same root task
assert_before(
child,
[
_pause_msg,
"<Task '__main__.main'",
"('root'",
]
)
if ctlc:
do_ctlc(
child,
# NOTE: setting this to 0 (or some other sufficient
# small val) can cause the test to fail since the
# `subactor` suffers a race where the root/parent
# sends an actor-cancel prior to it hitting its pause
# point; by def the value is 0.1
delay=0.4,
)
# XXX, fwiw without a brief sleep here the SIGINT might actually
# trigger "subactor" cancellation by its parent before the
# shield-handler is engaged.
#
# => similar to the `delay` input to `do_ctlc()` below, setting
# this too low can cause the test to fail since the `subactor`
# suffers a race where the root/parent sends an actor-cancel
# prior to the context task hitting its pause point (and thus
# engaging the `sigint_shield()` handler in time); this value
# seems to be good enuf?
time.sleep(0.6)
# one of the bg thread or subactor should have
# `Lock.acquire()`-ed
# (NOT both, which will result in REPL clobbering!)
attach_patts: dict[str, list[str]] = {
'subactor': [
"'start_n_sync_pause'",
"('subactor'",
],
'inline_root_bg_thread': [
"<Thread(inline_root_bg_thread",
"('root'",
],
'start_soon_root_bg_thread': [
"<Thread(start_soon_root_bg_thread",
"('root'",
],
}
conts: int = 0 # for debugging below matching logic on failure
while attach_patts:
child.sendline('c')
conts += 1
child.expect(PROMPT)
before = str(child.before.decode())
for key in attach_patts:
if key in before:
attach_key: str = key
expected_patts: str = attach_patts.pop(key)
assert_before(
child,
[_pause_msg]
+
expected_patts
)
break
else:
pytest.fail(
f'No keys found?\n\n'
f'{attach_patts.keys()}\n\n'
f'{before}\n'
)
# ensure no other task/threads engaged a REPL
# at the same time as the one that was detected above.
for key, other_patts in attach_patts.copy().items():
assert not in_prompt_msg(
child,
other_patts,
)
if ctlc:
do_ctlc(
child,
patt=attach_key,
# NOTE same as comment above
delay=0.4,
)
child.sendline('c')
child.expect(EOF)
def test_post_mortem_api(
spawn,
ctlc: bool,

View File

@ -0,0 +1,350 @@
'''
That "foreign loop/thread" debug REPL support better ALSO WORK!
Same as `test_native_pause.py`.
All these tests can be understood (somewhat) by running the
equivalent `examples/debugging/` scripts manually.
'''
# from functools import partial
# import itertools
import time
# from typing import (
# Iterator,
# )
import pytest
from pexpect.exceptions import (
# TIMEOUT,
EOF,
)
from .conftest import (
# _ci_env,
do_ctlc,
PROMPT,
# expect,
in_prompt_msg,
assert_before,
_pause_msg,
_crash_msg,
_ctlc_ignore_header,
# _repl_fail_msg,
)
def test_pause_from_sync(
spawn,
ctlc: bool,
):
'''
Verify we can use the `pdbp` REPL from sync functions AND from
any thread spawned with `trio.to_thread.run_sync()`.
`examples/debugging/sync_bp.py`
'''
child = spawn('sync_bp')
# first `sync_pause()` after nurseries open
child.expect(PROMPT)
assert_before(
child,
[
# pre-prompt line
_pause_msg,
"<Task '__main__.main'",
"('root'",
]
)
if ctlc:
do_ctlc(child)
# ^NOTE^ subactor not spawned yet; don't need extra delay.
child.sendline('c')
# first `await tractor.pause()` inside `p.open_context()` body
child.expect(PROMPT)
# XXX shouldn't see gb loaded message with PDB loglevel!
assert not in_prompt_msg(
child,
['`greenback` portal opened!'],
)
# should be same root task
assert_before(
child,
[
_pause_msg,
"<Task '__main__.main'",
"('root'",
]
)
if ctlc:
do_ctlc(
child,
# NOTE: setting this to 0 (or some other sufficient
# small val) can cause the test to fail since the
# `subactor` suffers a race where the root/parent
# sends an actor-cancel prior to it hitting its pause
# point; by def the value is 0.1
delay=0.4,
)
# XXX, fwiw without a brief sleep here the SIGINT might actually
# trigger "subactor" cancellation by its parent before the
# shield-handler is engaged.
#
# => similar to the `delay` input to `do_ctlc()` below, setting
# this too low can cause the test to fail since the `subactor`
# suffers a race where the root/parent sends an actor-cancel
# prior to the context task hitting its pause point (and thus
# engaging the `sigint_shield()` handler in time); this value
# seems to be good enuf?
time.sleep(0.6)
# one of the bg thread or subactor should have
# `Lock.acquire()`-ed
# (NOT both, which will result in REPL clobbering!)
attach_patts: dict[str, list[str]] = {
'subactor': [
"'start_n_sync_pause'",
"('subactor'",
],
'inline_root_bg_thread': [
"<Thread(inline_root_bg_thread",
"('root'",
],
'start_soon_root_bg_thread': [
"<Thread(start_soon_root_bg_thread",
"('root'",
],
}
conts: int = 0 # for debugging below matching logic on failure
while attach_patts:
child.sendline('c')
conts += 1
child.expect(PROMPT)
before = str(child.before.decode())
for key in attach_patts:
if key in before:
attach_key: str = key
expected_patts: str = attach_patts.pop(key)
assert_before(
child,
[_pause_msg]
+
expected_patts
)
break
else:
pytest.fail(
f'No keys found?\n\n'
f'{attach_patts.keys()}\n\n'
f'{before}\n'
)
# ensure no other task/threads engaged a REPL
# at the same time as the one that was detected above.
for key, other_patts in attach_patts.copy().items():
assert not in_prompt_msg(
child,
other_patts,
)
if ctlc:
do_ctlc(
child,
patt=attach_key,
# NOTE same as comment above
delay=0.4,
)
child.sendline('c')
child.expect(EOF)
def expect_any_of(
attach_patts: dict[str, list[str]],
child, # what type?
ctlc: bool = False,
prompt: str = _ctlc_ignore_header,
ctlc_delay: float = .4,
) -> list[str]:
'''
Receive any of a `list[str]` of patterns provided in
`attach_patts`.
Used to test racing prompts from multiple actors and/or
tasks using a common root process' `pdbp` REPL.
'''
assert attach_patts
child.expect(PROMPT)
before = str(child.before.decode())
for attach_key in attach_patts:
if attach_key in before:
expected_patts: str = attach_patts.pop(attach_key)
assert_before(
child,
expected_patts
)
break # from for
else:
pytest.fail(
f'No keys found?\n\n'
f'{attach_patts.keys()}\n\n'
f'{before}\n'
)
# ensure no other task/threads engaged a REPL
# at the same time as the one that was detected above.
for key, other_patts in attach_patts.copy().items():
assert not in_prompt_msg(
child,
other_patts,
)
if ctlc:
do_ctlc(
child,
patt=prompt,
# NOTE same as comment above
delay=ctlc_delay,
)
return expected_patts
def test_sync_pause_from_aio_task(
spawn,
ctlc: bool
# ^TODO, fix for `asyncio`!!
):
'''
Verify we can use the `pdbp` REPL from an `asyncio.Task` spawned using
APIs in `.to_asyncio`.
`examples/debugging/asyncio_bp.py`
'''
child = spawn('asyncio_bp')
# RACE on whether trio/asyncio task bps first
attach_patts: dict[str, list[str]] = {
# first pause in guest-mode (aka "infecting")
# `trio.Task`.
'trio-side': [
_pause_msg,
"<Task 'trio_ctx'",
"('aio_daemon'",
],
# `breakpoint()` from `asyncio.Task`.
'asyncio-side': [
_pause_msg,
"<Task pending name='Task-2' coro=<greenback_shim()",
"('aio_daemon'",
],
}
while attach_patts:
expect_any_of(
attach_patts=attach_patts,
child=child,
ctlc=ctlc,
)
child.sendline('c')
# NOW in race order,
# - the asyncio-task will error
# - the root-actor parent task will pause
#
attach_patts: dict[str, list[str]] = {
# error raised in `asyncio.Task`
"raise ValueError('asyncio side error!')": [
_crash_msg,
'return await chan.receive()', # `.to_asyncio` impl internals in tb
"<Task 'trio_ctx'",
"@ ('aio_daemon'",
"ValueError: asyncio side error!",
],
# parent-side propagation via actor-nursery/portal
# "tractor._exceptions.RemoteActorError: remote task raised a 'ValueError'": [
"remote task raised a 'ValueError'": [
_crash_msg,
"src_uid=('aio_daemon'",
"('aio_daemon'",
],
# a final pause in root-actor
"<Task '__main__.main'": [
_pause_msg,
"<Task '__main__.main'",
"('root'",
],
}
while attach_patts:
expect_any_of(
attach_patts=attach_patts,
child=child,
ctlc=ctlc,
)
child.sendline('c')
assert not attach_patts
# final boxed error propagates to root
assert_before(
child,
[
_crash_msg,
"<Task '__main__.main'",
"('root'",
"remote task raised a 'ValueError'",
"ValueError: asyncio side error!",
]
)
if ctlc:
do_ctlc(
child,
# NOTE: setting this to 0 (or some other sufficient
# small val) can cause the test to fail since the
# `subactor` suffers a race where the root/parent
# sends an actor-cancel prior to it hitting its pause
# point; by def the value is 0.1
delay=0.4,
)
child.sendline('c')
child.expect(EOF)
def test_sync_pause_from_non_greenbacked_aio_task():
'''
Where the `breakpoint()` caller task is NOT spawned by
`tractor.to_asyncio` and thus never activates
a `greenback.ensure_portal()` beforehand, presumably bc the task
was started by some lib/dep as is often seen in the field.
Ensure sync pausing works when the pause is in,
- the root actor running in infected-mode?
|_ since we don't need any IPC to acquire the debug lock?
|_ is there some way to handle this like the non-main-thread case?
All other cases need to error out appropriately right?
- for any subactor we can't avoid needing the repl lock..
|_ is there a way to hook into `asyncio.ensure_future(obj)`?
'''
pass

View File

@ -955,7 +955,7 @@ async def echo_back_sequence(
)
await ctx.started()
# await tractor.breakpoint()
# await tractor.pause()
async with ctx.open_stream(
msg_buffer_size=msg_buffer_size,

View File

@ -5,6 +5,7 @@ The hipster way to force SC onto the stdlib's "async": 'infection mode'.
import asyncio
import builtins
from contextlib import ExitStack
# from functools import partial
import itertools
import importlib
import os
@ -108,7 +109,9 @@ async def asyncio_actor(
except BaseException as err:
if expect_err:
assert isinstance(err, error_type)
assert isinstance(err, error_type), (
f'{type(err)} is not {error_type}?'
)
raise
@ -180,8 +183,8 @@ def test_trio_cancels_aio(reg_addr):
with trio.move_on_after(1):
# cancel the nursery shortly after boot
async with tractor.open_nursery() as n:
await n.run_in_actor(
async with tractor.open_nursery() as tn:
await tn.run_in_actor(
asyncio_actor,
target='aio_sleep_forever',
expect_err='trio.Cancelled',
@ -201,22 +204,33 @@ async def trio_ctx(
# this will block until the ``asyncio`` task sends a "first"
# message.
with trio.fail_after(2):
async with (
trio.open_nursery() as n,
try:
async with (
trio.open_nursery(
# TODO, for new `trio` / py3.13
# strict_exception_groups=False,
) as tn,
tractor.to_asyncio.open_channel_from(
sleep_and_err,
) as (first, chan),
):
tractor.to_asyncio.open_channel_from(
sleep_and_err,
) as (first, chan),
):
assert first == 'start'
assert first == 'start'
# spawn another asyncio task for the heck of it.
tn.start_soon(
tractor.to_asyncio.run_task,
aio_sleep_forever,
)
await trio.sleep_forever()
# spawn another asyncio task for the heck of it.
n.start_soon(
tractor.to_asyncio.run_task,
aio_sleep_forever,
)
await trio.sleep_forever()
# TODO, factor this into a `trionics.collapse()`?
except* BaseException as beg:
# await tractor.pause(shield=True)
if len(excs := beg.exceptions) == 1:
raise excs[0]
else:
raise
@pytest.mark.parametrize(
@ -235,7 +249,6 @@ def test_context_spawns_aio_task_that_errors(
'''
async def main():
with trio.fail_after(2):
async with tractor.open_nursery() as n:
p = await n.start_actor(
@ -307,7 +320,9 @@ async def aio_cancel():
await aio_sleep_forever()
def test_aio_cancelled_from_aio_causes_trio_cancelled(reg_addr):
def test_aio_cancelled_from_aio_causes_trio_cancelled(
reg_addr: tuple,
):
'''
When the `asyncio.Task` cancels itself the `trio` side should
also cancel and teardown and relay the cancellation cross-process
@ -404,6 +419,7 @@ async def stream_from_aio(
sequence=seq,
expect_cancel=raise_err or exit_early,
fail_early=aio_raise_err,
) as (first, chan):
assert first is True
@ -422,10 +438,15 @@ async def stream_from_aio(
if raise_err:
raise Exception
elif exit_early:
print('`consume()` breaking early!\n')
break
print('returning from `consume()`..\n')
# run 2 tasks each pulling from
# the inter-task-channel with the 2nd
# using a fan-out `BroadcastReceiver`.
if fan_out:
# start second task that get's the same stream value set.
async with (
# NOTE: this has to come first to avoid
@ -435,11 +456,19 @@ async def stream_from_aio(
trio.open_nursery() as n,
):
# start 2nd task that get's broadcast the same
# value set.
n.start_soon(consume, br)
await consume(chan)
else:
await consume(chan)
except BaseException as err:
import logging
log = logging.getLogger()
log.exception('aio-subactor errored!\n')
raise err
finally:
if (
@ -460,7 +489,8 @@ async def stream_from_aio(
assert not fan_out
assert pulled == expect[:51]
print('trio guest mode task completed!')
print('trio guest-mode task completed!')
assert chan._aio_task.done()
@pytest.mark.parametrize(
@ -500,19 +530,37 @@ def test_trio_error_cancels_intertask_chan(reg_addr):
excinfo.value.boxed_type is Exception
def test_trio_closes_early_and_channel_exits(reg_addr):
def test_trio_closes_early_and_channel_exits(
reg_addr: tuple[str, int],
):
'''
Check that if the `trio`-task "exits early" on `async for`ing the
inter-task-channel (via a `break`) we exit silently from the
`open_channel_from()` block and get a final `Return[None]` msg.
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
exit_early=True,
infect_asyncio=True,
)
# should raise RAE directly
await portal.result()
with trio.fail_after(2):
async with tractor.open_nursery(
# debug_mode=True,
# enable_stack_on_sig=True,
) as n:
portal = await n.run_in_actor(
stream_from_aio,
exit_early=True,
infect_asyncio=True,
)
# should raise RAE directly
print('waiting on final infected subactor result..')
res: None = await portal.wait_for_result()
assert res is None
print(f'infected subactor returned result: {res!r}\n')
# should be a quiet exit on a simple channel exit
trio.run(main)
trio.run(
main,
# strict_exception_groups=False,
)
def test_aio_errors_and_channel_propagates_and_closes(reg_addr):
@ -536,41 +584,40 @@ def test_aio_errors_and_channel_propagates_and_closes(reg_addr):
excinfo.value.boxed_type is Exception
async def aio_echo_server(
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
) -> None:
to_trio.send_nowait('start')
while True:
msg = await from_trio.get()
# echo the msg back
to_trio.send_nowait(msg)
# if we get the terminate sentinel
# break the echo loop
if msg is None:
print('breaking aio echo loop')
break
print('exiting asyncio task')
@tractor.context
async def trio_to_aio_echo_server(
ctx: tractor.Context,
ctx: tractor.Context|None,
):
async def aio_echo_server(
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
) -> None:
to_trio.send_nowait('start')
while True:
msg = await from_trio.get()
# echo the msg back
to_trio.send_nowait(msg)
# if we get the terminate sentinel
# break the echo loop
if msg is None:
print('breaking aio echo loop')
break
print('exiting asyncio task')
async with to_asyncio.open_channel_from(
aio_echo_server,
) as (first, chan):
assert first == 'start'
await ctx.started(first)
async with ctx.open_stream() as stream:
async for msg in stream:
print(f'asyncio echoing {msg}')
await chan.send(msg)
@ -649,7 +696,6 @@ def test_echoserver_detailed_mechanics(
trio.run(main)
@tractor.context
async def manage_file(
ctx: tractor.Context,

View File

@ -0,0 +1,244 @@
'''
Special attention cases for using "infect `asyncio`" mode from a root
actor; i.e. not using a std `trio.run()` bootstrap.
'''
import asyncio
from functools import partial
import pytest
import trio
import tractor
from tractor import (
to_asyncio,
)
from tests.test_infected_asyncio import (
aio_echo_server,
)
@pytest.mark.parametrize(
'raise_error_mid_stream',
[
False,
Exception,
KeyboardInterrupt,
],
ids='raise_error={}'.format,
)
def test_infected_root_actor(
raise_error_mid_stream: bool|Exception,
# conftest wide
loglevel: str,
debug_mode: bool,
):
'''
Verify you can run the `tractor` runtime with `Actor.is_infected_aio() == True`
in the root actor.
'''
async def _trio_main():
with trio.fail_after(2):
first: str
chan: to_asyncio.LinkedTaskChannel
async with (
tractor.open_root_actor(
debug_mode=debug_mode,
loglevel=loglevel,
),
to_asyncio.open_channel_from(
aio_echo_server,
) as (first, chan),
):
assert first == 'start'
for i in range(1000):
await chan.send(i)
out = await chan.receive()
assert out == i
print(f'asyncio echoing {i}')
if raise_error_mid_stream and i == 500:
raise raise_error_mid_stream
if out is None:
try:
out = await chan.receive()
except trio.EndOfChannel:
break
else:
raise RuntimeError(
'aio channel never stopped?'
)
if raise_error_mid_stream:
with pytest.raises(raise_error_mid_stream):
tractor.to_asyncio.run_as_asyncio_guest(
trio_main=_trio_main,
)
else:
tractor.to_asyncio.run_as_asyncio_guest(
trio_main=_trio_main,
)
async def sync_and_err(
# just signature placeholders for compat with
# ``to_asyncio.open_channel_from()``
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
ev: asyncio.Event,
):
if to_trio:
to_trio.send_nowait('start')
await ev.wait()
raise RuntimeError('asyncio-side')
@pytest.mark.parametrize(
'aio_err_trigger',
[
'before_start_point',
'after_trio_task_starts',
'after_start_point',
],
ids='aio_err_triggered={}'.format
)
def test_trio_prestarted_task_bubbles(
aio_err_trigger: str,
# conftest wide
loglevel: str,
debug_mode: bool,
):
async def pre_started_err(
raise_err: bool = False,
pre_sleep: float|None = None,
aio_trigger: asyncio.Event|None = None,
task_status=trio.TASK_STATUS_IGNORED,
):
'''
Maybe pre-started error then sleep.
'''
if pre_sleep is not None:
print(f'Sleeping from trio for {pre_sleep!r}s !')
await trio.sleep(pre_sleep)
# signal aio-task to raise JUST AFTER this task
# starts but has not yet `.started()`
if aio_trigger:
print('Signalling aio-task to raise from `trio`!!')
aio_trigger.set()
if raise_err:
print('Raising from trio!')
raise TypeError('trio-side')
task_status.started()
await trio.sleep_forever()
async def _trio_main():
# with trio.fail_after(2):
with trio.fail_after(999):
first: str
chan: to_asyncio.LinkedTaskChannel
aio_ev = asyncio.Event()
async with (
tractor.open_root_actor(
debug_mode=False,
loglevel=loglevel,
),
):
# TODO, tests for this with 3.13 egs?
# from tractor.devx import open_crash_handler
# with open_crash_handler():
async with (
# where we'll start a sub-task that errors BEFORE
# calling `.started()` such that the error should
# bubble before the guest run terminates!
trio.open_nursery() as tn,
# THEN start an infect task which should error just
# after the trio-side's task does.
to_asyncio.open_channel_from(
partial(
sync_and_err,
ev=aio_ev,
)
) as (first, chan),
):
for i in range(5):
pre_sleep: float|None = None
last_iter: bool = (i == 4)
# TODO, missing cases?
# -[ ] error as well on
# 'after_start_point' case as well for
# another case?
raise_err: bool = False
if last_iter:
raise_err: bool = True
# trigger aio task to error on next loop
# tick/checkpoint
if aio_err_trigger == 'before_start_point':
aio_ev.set()
pre_sleep: float = 0
await tn.start(
pre_started_err,
raise_err,
pre_sleep,
(aio_ev if (
aio_err_trigger == 'after_trio_task_starts'
and
last_iter
) else None
),
)
if (
aio_err_trigger == 'after_start_point'
and
last_iter
):
aio_ev.set()
with pytest.raises(
expected_exception=ExceptionGroup,
) as excinfo:
tractor.to_asyncio.run_as_asyncio_guest(
trio_main=_trio_main,
)
eg = excinfo.value
rte_eg, rest_eg = eg.split(RuntimeError)
# ensure the trio-task's error bubbled despite the aio-side
# having (maybe) errored first.
if aio_err_trigger in (
'after_trio_task_starts',
'after_start_point',
):
assert len(errs := rest_eg.exceptions) == 1
typerr = errs[0]
assert (
type(typerr) is TypeError
and
'trio-side' in typerr.args
)
# when aio errors BEFORE (last) trio task is scheduled, we should
# never see anything but the aio-side.
else:
assert len(rtes := rte_eg.exceptions) == 1
assert 'asyncio-side' in rtes[0].args[0]

View File

@ -271,7 +271,7 @@ def test_faster_task_to_recv_is_cancelled_by_slower(
# the faster subtask was cancelled
break
# await tractor.breakpoint()
# await tractor.pause()
# await stream.receive()
print(f'final value: {value}')

View File

@ -3,6 +3,10 @@ Reminders for oddities in `trio` that we need to stay aware of and/or
want to see changed.
'''
from contextlib import (
asynccontextmanager as acm,
)
import pytest
import trio
from trio import TaskStatus
@ -80,3 +84,115 @@ def test_stashed_child_nursery(use_start_soon):
with pytest.raises(NameError):
trio.run(main)
@pytest.mark.parametrize(
('unmask_from_canc', 'canc_from_finally'),
[
(True, False),
(True, True),
pytest.param(False, True,
marks=pytest.mark.xfail(reason="never raises!")
),
],
# TODO, ask ronny how to impl this .. XD
# ids='unmask_from_canc={0}, canc_from_finally={1}',#.format,
)
def test_acm_embedded_nursery_propagates_enter_err(
canc_from_finally: bool,
unmask_from_canc: bool,
):
'''
Demo how a masking `trio.Cancelled` could be handled by unmasking from the
`.__context__` field when a user (by accident) re-raises from a `finally:`.
'''
import tractor
@acm
async def maybe_raise_from_masking_exc(
tn: trio.Nursery,
unmask_from: BaseException|None = trio.Cancelled
# TODO, maybe offer a collection?
# unmask_from: set[BaseException] = {
# trio.Cancelled,
# },
):
if not unmask_from:
yield
return
try:
yield
except* unmask_from as be_eg:
# TODO, if we offer `unmask_from: set`
# for masker_exc_type in unmask_from:
matches, rest = be_eg.split(unmask_from)
if not matches:
raise
for exc_match in be_eg.exceptions:
if (
(exc_ctx := exc_match.__context__)
and
type(exc_ctx) not in {
# trio.Cancelled, # always by default?
unmask_from,
}
):
exc_ctx.add_note(
f'\n'
f'WARNING: the above error was masked by a {unmask_from!r} !?!\n'
f'Are you always cancelling? Say from a `finally:` ?\n\n'
f'{tn!r}'
)
raise exc_ctx from exc_match
@acm
async def wraps_tn_that_always_cancels():
async with (
trio.open_nursery() as tn,
maybe_raise_from_masking_exc(
tn=tn,
unmask_from=(
trio.Cancelled
if unmask_from_canc
else None
),
)
):
try:
yield tn
finally:
if canc_from_finally:
tn.cancel_scope.cancel()
await trio.lowlevel.checkpoint()
async def _main():
with tractor.devx.open_crash_handler() as bxerr:
assert not bxerr.value
async with (
wraps_tn_that_always_cancels() as tn,
):
assert not tn.cancel_scope.cancel_called
assert 0
assert (
(err := bxerr.value)
and
type(err) is AssertionError
)
with pytest.raises(ExceptionGroup) as excinfo:
trio.run(_main)
eg: ExceptionGroup = excinfo.value
assert_eg, rest_eg = eg.split(AssertionError)
assert len(assert_eg.exceptions) == 1

View File

@ -1703,15 +1703,28 @@ class Context:
# TODO: expose as mod func instead!
structfmt = pretty_struct.Struct.pformat
if self._in_overrun:
log.warning(
f'Queueing OVERRUN msg on caller task:\n\n'
report: str = (
f'{flow_body}'
f'{structfmt(msg)}\n'
)
over_q: deque = self._overflow_q
self._overflow_q.append(msg)
if len(over_q) == over_q.maxlen:
report = (
'FAILED to queue OVERRUN msg, OVERRAN the OVERRUN QUEUE !!\n\n'
+ report
)
# log.error(report)
log.debug(report)
else:
report = (
'Queueing OVERRUN msg on caller task:\n\n'
+ report
)
log.debug(report)
# XXX NOTE XXX
# overrun is the ONLY case where returning early is fine!
return False

View File

@ -609,6 +609,7 @@ class RemoteActorError(Exception):
# just after <Type(
# |___ ..
tb_body_indent=1,
boxer_header=self.relay_uid,
)
tail = ''
@ -1145,19 +1146,51 @@ def unpack_error(
def is_multi_cancelled(
exc: BaseException|BaseExceptionGroup
) -> bool:
exc: BaseException|BaseExceptionGroup,
ignore_nested: set[BaseException] = set(),
) -> bool|BaseExceptionGroup:
'''
Predicate to determine if a possible ``BaseExceptionGroup`` contains
only ``trio.Cancelled`` sub-exceptions (and is likely the result of
cancelling a collection of subtasks.
Predicate to determine if a `BaseExceptionGroup` only contains
some (maybe nested) set of sub-grouped exceptions (like only
`trio.Cancelled`s which get swallowed silently by default) and is
thus the result of "gracefully cancelling" a collection of
sub-tasks (or other conc primitives) and receiving a "cancelled
ACK" from each after termination.
Docs:
----
- https://docs.python.org/3/library/exceptions.html#exception-groups
- https://docs.python.org/3/library/exceptions.html#BaseExceptionGroup.subgroup
'''
if (
not ignore_nested
or
trio.Cancelled in ignore_nested
# XXX always count-in `trio`'s native signal
):
ignore_nested |= {trio.Cancelled}
if isinstance(exc, BaseExceptionGroup):
return exc.subgroup(
lambda exc: isinstance(exc, trio.Cancelled)
) is not None
matched_exc: BaseExceptionGroup|None = exc.subgroup(
tuple(ignore_nested),
# TODO, complain about why not allowed XD
# condition=tuple(ignore_nested),
)
if matched_exc is not None:
return matched_exc
# NOTE, IFF no excs types match (throughout the error-tree)
# -> return `False`, OW return the matched sub-eg.
#
# IOW, for the inverse of ^ for the purpose of
# maybe-enter-REPL--logic: "only debug when the err-tree contains
# at least one exc-type NOT in `ignore_nested`" ; i.e. the case where
# we fallthrough and return `False` here.
return False

View File

@ -70,7 +70,10 @@ async def open_root_actor(
# defaults are above
arbiter_addr: tuple[str, int]|None = None,
# binding addrs for the transport layer server
trans_bind_addrs: list[tuple[str, int]] = [(_default_host, _default_port)],
name: str|None = 'root',
# either the `multiprocessing` start method:
@ -95,6 +98,17 @@ async def open_root_actor(
hide_tb: bool = True,
# XXX, proxied directly to `.devx._debug._maybe_enter_pm()`
# for REPL-entry logic.
debug_filter: Callable[
[BaseException|BaseExceptionGroup],
bool,
] = lambda err: not is_multi_cancelled(err),
# TODO, a way for actors to augment passing derived
# read-only state to sublayers?
# extra_rt_vars: dict|None = None,
) -> Actor:
'''
Runtime init entry point for ``tractor``.
@ -190,6 +204,8 @@ async def open_root_actor(
_default_lo_addrs
)
assert registry_addrs
assert trans_bind_addrs
loglevel = (
loglevel
@ -276,8 +292,6 @@ async def open_root_actor(
tuple(addr), # TODO: just drop this requirement?
)
trans_bind_addrs: list[tuple[str, int]] = []
# Create a new local root-actor instance which IS NOT THE
# REGISTRAR
if ponged_addrs:
@ -298,11 +312,6 @@ async def open_root_actor(
loglevel=loglevel,
enable_modules=enable_modules,
)
# DO NOT use the registry_addrs as the transport server
# addrs for this new non-registar, root-actor.
for host, port in ponged_addrs:
# NOTE: zero triggers dynamic OS port allocation
trans_bind_addrs.append((host, 0))
# Start this local actor as the "registrar", aka a regular
# actor who manages the local registry of "mailboxes" of
@ -330,6 +339,10 @@ async def open_root_actor(
loglevel=loglevel,
enable_modules=enable_modules,
)
# XXX, in case the root actor runtime was actually run from
# `tractor.to_asyncio.run_as_asyncio_guest()` and NOT
# `trio.run()`.
actor._infected_aio = _state._runtime_vars['_is_infected_aio']
# Start up main task set via core actor-runtime nurseries.
try:
@ -371,6 +384,7 @@ async def open_root_actor(
Exception,
BaseExceptionGroup,
) as err:
# XXX NOTE XXX see equiv note inside
# `._runtime.Actor._stream_handler()` where in the
# non-root or root-that-opened-this-mahually case we
@ -379,11 +393,15 @@ async def open_root_actor(
entered: bool = await _debug._maybe_enter_pm(
err,
api_frame=inspect.currentframe(),
debug_filter=debug_filter,
)
if (
not entered
and
not is_multi_cancelled(err)
not is_multi_cancelled(
err,
)
):
logger.exception('Root actor crashed\n')

View File

@ -456,11 +456,14 @@ class Actor:
)
if _pre_chan:
log.warning(
# con_status += (
# ^TODO^ swap once we minimize conn duplication
f' -> Wait, we already have IPC with `{uid_short}`??\n'
f' |_{_pre_chan}\n'
# -[ ] last thing might be reg/unreg runtime reqs?
# log.warning(
log.debug(
f'?Wait?\n'
f'We already have IPC with peer {uid_short!r}\n'
f'|_{_pre_chan}\n'
)
# IPC connection tracking for both peers and new children:

View File

@ -75,6 +75,7 @@ from tractor import _state
from tractor._exceptions import (
InternalError,
NoRuntime,
is_multi_cancelled,
)
from tractor._state import (
current_actor,
@ -316,6 +317,7 @@ class Lock:
we_released: bool = False
ctx_in_debug: Context|None = cls.ctx_in_debug
repl_task: Task|Thread|None = DebugStatus.repl_task
message: str = ''
try:
if not DebugStatus.is_main_trio_thread():
@ -443,7 +445,10 @@ class Lock:
f'|_{repl_task}\n'
)
log.devx(message)
if message:
log.devx(message)
else:
import pdbp; pdbp.set_trace()
return we_released
@ -730,6 +735,9 @@ class DebugStatus:
# -[ ] see if we can get our proto oco task-mngr to work for
# this?
repl_task: Task|None = None
# repl_thread: Thread|None = None
# ^TODO?
repl_release: trio.Event|None = None
req_task: Task|None = None
@ -839,11 +847,12 @@ class DebugStatus:
if (
not cls.is_main_trio_thread()
and
# not _state._runtime_vars.get(
# '_is_infected_aio',
# False,
# )
not current_actor().is_infected_aio()
not _state._runtime_vars.get(
'_is_infected_aio',
False,
)
# not current_actor().is_infected_aio()
# ^XXX, since for bg-thr case will always raise..
):
trio.from_thread.run_sync(
signal.signal,
@ -928,12 +937,27 @@ class DebugStatus:
try:
# sometimes the task might already be terminated in
# which case this call will raise an RTE?
if repl_release is not None:
# See below for reporting on that..
if (
repl_release is not None
and
not repl_release.is_set()
):
if cls.is_main_trio_thread():
repl_release.set()
elif current_actor().is_infected_aio():
elif (
_state._runtime_vars.get(
'_is_infected_aio',
False,
)
# ^XXX, again bc we need to not except
# but for bg-thread case it will always raise..
#
# TODO, is there a better api than using
# `err_on_no_runtime=False` in the below?
# current_actor().is_infected_aio()
):
async def _set_repl_release():
repl_release.set()
@ -949,6 +973,15 @@ class DebugStatus:
trio.from_thread.run_sync(
repl_release.set
)
except RuntimeError as rte:
log.exception(
f'Failed to release debug-request ??\n\n'
f'{cls.repr()}\n'
)
# pdbp.set_trace()
raise rte
finally:
# if req_ctx := cls.req_ctx:
# req_ctx._scope.cancel()
@ -976,9 +1009,10 @@ class DebugStatus:
# logging when we don't need to?
cls.repl = None
# restore original sigint handler
cls.unshield_sigint()
# maybe restore original sigint handler
# XXX requires runtime check to avoid crash!
if current_actor(err_on_no_runtime=False):
cls.unshield_sigint()
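The guard above uses `current_actor()`'s soft-failure mode to avoid crashing during late teardown; a tiny sketch of the same pattern (kwarg name taken from the diff):

    from tractor import current_actor

    # returns `None` instead of raising when no actor
    # runtime is (any longer) up
    if actor := current_actor(err_on_no_runtime=False):
        print(actor.uid)  # only touch runtime state when it exists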
# TODO: use the new `@lowlevel.singleton` for this!
@ -1066,7 +1100,7 @@ class PdbREPL(pdbp.Pdb):
# Lock.release(raise_on_thread=False)
Lock.release()
# XXX after `Lock.release()` for root local repl usage
# XXX AFTER `Lock.release()` for root local repl usage
DebugStatus.release()
def set_quit(self):
@ -1391,6 +1425,10 @@ def any_connected_locker_child() -> bool:
return False
_ctlc_ignore_header: str = (
'Ignoring SIGINT while debug REPL in use'
)
def sigint_shield(
signum: int,
frame: 'frame', # type: ignore # noqa
@ -1472,7 +1510,9 @@ def sigint_shield(
# NOTE: don't emit this with `.pdb()` level in
# root without a higher level.
log.runtime(
f'Ignoring SIGINT while debug REPL in use by child '
_ctlc_ignore_header
+
f' by child '
f'{uid_in_debug}\n'
)
problem = None
@ -1506,7 +1546,9 @@ def sigint_shield(
# NOTE: since we emit this msg on ctl-c, we should
# also always re-print the prompt the tail block!
log.pdb(
'Ignoring SIGINT while pdb REPL in use by root actor..\n'
_ctlc_ignore_header
+
f' by root actor..\n'
f'{DebugStatus.repl_task}\n'
f' |_{repl}\n'
)
@ -1567,16 +1609,20 @@ def sigint_shield(
repl
):
log.pdb(
f'Ignoring SIGINT while local task using debug REPL\n'
f'|_{repl_task}\n'
f' |_{repl}\n'
_ctlc_ignore_header
+
f' by local task\n\n'
f'{repl_task}\n'
f' |_{repl}\n'
)
elif req_task:
log.debug(
'Ignoring SIGINT while debug request task is open but either,\n'
'- someone else is already REPL-in and has the `Lock`, or\n'
'- some other local task already is REPL-in?\n'
f'|_{req_task}\n'
_ctlc_ignore_header
+
f' by local request-task and either,\n'
f'- someone else is already REPL-in and has the `Lock`, or\n'
f'- some other local task already is REPL-in?\n\n'
f'{req_task}\n'
)
# TODO can we remove this now?
@ -1672,7 +1718,7 @@ class DebugRequestError(RuntimeError):
'''
_repl_fail_msg: str = (
_repl_fail_msg: str|None = (
'Failed to REPL via `_pause()` '
)
@ -1702,7 +1748,7 @@ async def _pause(
] = trio.TASK_STATUS_IGNORED,
**debug_func_kwargs,
) -> tuple[PdbREPL, Task]|None:
) -> tuple[Task, PdbREPL]|None:
'''
Inner impl for `pause()` to avoid the `trio.CancelScope.__exit__()`
stack frame when not shielded (since apparently i can't figure out
@ -1712,6 +1758,7 @@ async def _pause(
'''
__tracebackhide__: bool = hide_tb
pause_err: BaseException|None = None
actor: Actor = current_actor()
try:
task: Task = current_task()
@ -1887,7 +1934,7 @@ async def _pause(
)
with trio.CancelScope(shield=shield):
await trio.lowlevel.checkpoint()
return repl, task
return (repl, task)
# elif repl_task:
# log.warning(
@ -2094,11 +2141,13 @@ async def _pause(
# TODO: prolly factor this plus the similar block from
# `_enter_repl_sync()` into a common @cm?
except BaseException as pause_err:
except BaseException as _pause_err:
pause_err: BaseException = _pause_err
if isinstance(pause_err, bdb.BdbQuit):
log.devx(
'REPL for pdb was quit!\n'
'REPL for pdb was explicitly quit!\n'
)
_repl_fail_msg = None
# when the actor is mid-runtime cancellation the
# `Actor._service_n` might get closed before we can spawn
@ -2117,13 +2166,18 @@ async def _pause(
)
return
else:
log.exception(
_repl_fail_msg
+
f'on behalf of {repl_task} ??\n'
elif isinstance(pause_err, trio.Cancelled):
_repl_fail_msg = (
'You called `tractor.pause()` from an already cancelled scope!\n\n'
'Consider `await tractor.pause(shield=True)` to make it work B)\n'
)
else:
_repl_fail_msg += f'on behalf of {repl_task} ??\n'
if _repl_fail_msg:
log.exception(_repl_fail_msg)
if not actor.is_infected_aio():
DebugStatus.release(cancel_req_task=True)
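The new `trio.Cancelled` branch above recommends shielded pausing; a short sketch of that pattern per the message text:

    import trio
    import tractor

    async def debug_despite_cancellation():
        with trio.CancelScope() as cs:
            cs.cancel()
            try:
                await trio.lowlevel.checkpoint()
            except trio.Cancelled:
                # a bare `tractor.pause()` here would itself be
                # cancelled; shielding keeps the REPL usable.
                await tractor.pause(shield=True)
                raise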
@ -2152,6 +2206,8 @@ async def _pause(
DebugStatus.req_err
or
repl_err
or
pause_err
):
__tracebackhide__: bool = False
@ -2435,6 +2491,8 @@ def pause_from_sync(
called_from_builtin: bool = False,
api_frame: FrameType|None = None,
allow_no_runtime: bool = False,
# proxy to `._pause()`, for ex:
# shield: bool = False,
# api_frame: FrameType|None = None,
@ -2453,40 +2511,41 @@ def pause_from_sync(
'''
__tracebackhide__: bool = hide_tb
repl_owner: Task|Thread|None = None
try:
actor: tractor.Actor = current_actor(
err_on_no_runtime=False,
)
if not actor:
raise RuntimeError(
'Not inside the `tractor`-runtime?\n'
if (
not actor
and
not allow_no_runtime
):
raise NoRuntime(
'The actor runtime has not been opened?\n\n'
'`tractor.pause_from_sync()` is not functional without a wrapping\n'
'- `async with tractor.open_nursery()` or,\n'
'- `async with tractor.open_root_actor()`\n'
'- `async with tractor.open_root_actor()`\n\n'
'If you are getting this from a builtin `breakpoint()` call\n'
'it might mean the runtime was started then '
'stopped prematurely?\n'
)
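The `allow_no_runtime: bool` flag added to the signature above permits calling the sync breakpoint outside any actor runtime; a hedged sketch:

    import tractor

    # with no `open_root_actor()`/`open_nursery()` in sight this
    # would raise the `NoRuntime` above if not for the flag.
    tractor.pause_from_sync(allow_no_runtime=True)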
message: str = (
f'{actor.uid} task called `tractor.pause_from_sync()`\n'
)
# TODO: once supported, remove this AND the one
# inside `._pause()`!
# outstanding impl fixes:
# -[ ] need to make `.shield_sigint()` below work here!
# -[ ] how to handle `asyncio`'s new SIGINT-handler
# injection?
# -[ ] should `breakpoint()` work and what does it normally
# do in `asyncio` ctxs?
# if actor.is_infected_aio():
# raise RuntimeError(
# '`tractor.pause[_from_sync]()` not yet supported '
# 'for infected `asyncio` mode!'
# )
repl: PdbREPL = mk_pdb()
# message += f'-> created local REPL {repl}\n'
is_trio_thread: bool = DebugStatus.is_main_trio_thread()
is_root: bool = is_root_process()
is_aio: bool = actor.is_infected_aio()
is_infected_aio: bool = actor.is_infected_aio()
thread: Thread = threading.current_thread()
asyncio_task: asyncio.Task|None = None
if is_infected_aio:
asyncio_task = asyncio.current_task()
# TODO: we could also check for a non-`.to_thread` context
# using `trio.from_thread.check_cancelled()` (says
@ -2500,26 +2559,20 @@ def pause_from_sync(
# thread which will call `._pause()` manually with special
# handling for root-actor caller usage.
if (
not DebugStatus.is_main_trio_thread()
not is_trio_thread
and
not is_aio # see below for this usage
not asyncio_task
):
# TODO: `threading.Lock()` this so we don't get races in
# multi-thr cases where they're acquiring/releasing the
# REPL and setting request/`Lock` state, etc..
thread: threading.Thread = threading.current_thread()
repl_owner = thread
repl_owner: Thread = thread
# TODO: make root-actor bg thread usage work!
if (
is_root
# or
# is_aio
):
if is_root:
message += (
f'-> called from a root-actor bg {thread}\n'
)
if is_root:
message += (
f'-> called from a root-actor bg {thread}\n'
)
message += (
'-> scheduling `._pause_from_bg_root_thread()`..\n'
@ -2574,30 +2627,95 @@ def pause_from_sync(
DebugStatus.shield_sigint()
assert bg_task is not DebugStatus.repl_task
elif is_aio:
# TODO: once supported, remove this AND the one
# inside `._pause()`!
# outstanding impl fixes:
# -[ ] need to make `.shield_sigint()` below work here!
# -[ ] how to handle `asyncio`'s new SIGINT-handler
# injection?
# -[ ] should `breakpoint()` work and what does it normally
# do in `asyncio` ctxs?
# if actor.is_infected_aio():
# raise RuntimeError(
# '`tractor.pause[_from_sync]()` not yet supported '
# 'for infected `asyncio` mode!'
# )
elif (
not is_trio_thread
and
is_infected_aio # as in, the special actor-runtime mode
# ^NOTE XXX, that doesn't mean the caller is necessarily
# an `asyncio.Task` just that `trio` has been embedded on
# the `asyncio` event loop!
and
asyncio_task # transitive caller is an actual `asyncio.Task`
):
greenback: ModuleType = maybe_import_greenback()
repl_owner: Task = asyncio.current_task()
DebugStatus.shield_sigint()
fute: asyncio.Future = run_trio_task_in_future(
partial(
_pause,
debug_func=None,
repl=repl,
hide_tb=hide_tb,
# XXX to prevent `._pause()` from setting
# `DebugStatus.repl_task` to the gb task!
called_from_sync=True,
called_from_bg_thread=True,
if greenback.has_portal():
DebugStatus.shield_sigint()
fute: asyncio.Future = run_trio_task_in_future(
partial(
_pause,
debug_func=None,
repl=repl,
hide_tb=hide_tb,
**_pause_kwargs
# XXX to prevent `._pause()` from setting
# `DebugStatus.repl_task` to the gb task!
called_from_sync=True,
called_from_bg_thread=True,
**_pause_kwargs
)
)
)
repl_owner = asyncio_task
bg_task, _ = greenback.await_(fute)
# TODO: ASYNC version -> `.pause_from_aio()`?
# bg_task, _ = await fute
# TODO: for async version -> `.pause_from_aio()`?
# bg_task, _ = await fute
bg_task, _ = greenback.await_(fute)
bg_task: asyncio.Task = asyncio.current_task()
# handle the case where an `asyncio` task has been
# spawned WITHOUT enabling a `greenback` portal..
# => can often happen in 3rd party libs.
else:
bg_task = repl_owner
# TODO, ostensibly we can just acquire the
# debug lock directly presuming we're the
# root actor running in infected asyncio
# mode?
#
# TODO, this would be a special case where
# a `_pause_from_root()` would come in very
# handy!
# if is_root:
# import pdbp; pdbp.set_trace()
# log.warning(
# 'Allowing `asyncio` task to acquire debug-lock in root-actor..\n'
# 'This is not fully implemented yet; there may be teardown hangs!\n\n'
# )
# else:
# simply unsupported, since there exists no hack (i
# can think of) to work around this in a subactor
# which needs to lock the root's REPL, otherwise we're sure
# to get prompt stdstreams clobbering..
cf_repr: str = ''
if api_frame:
caller_frame: FrameType = api_frame.f_back
cf_repr: str = f'caller_frame: {caller_frame!r}\n'
raise RuntimeError(
f"CAN'T USE `greenback._await()` without a portal !?\n\n"
f'Likely this task was NOT spawned via the `tractor.to_asyncio` API..\n'
f'{asyncio_task}\n'
f'{cf_repr}\n'
f'Prolly the task was started out-of-band (from some lib?)\n'
f'AND one of the below was never called ??\n'
f'- greenback.ensure_portal()\n'
f'- greenback.bestow_portal(<task>)\n'
)
else: # we are presumably the `trio.run()` + main thread
# raises on not-found by default
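The remedies named in the error message map onto `greenback`'s portal APIs; a hedged sketch of pre-seeding a portal on an otherwise "out-of-band" `asyncio.Task` so the sync breakpoint can reach the `trio` side (callback names hypothetical):

    import greenback

    async def third_party_task():
        # grant this task a portal up-front so a later
        # `tractor.pause_from_sync()` can `greenback.await_()` through it
        await greenback.ensure_portal()
        some_sync_callback()

    def some_sync_callback():
        import tractor
        tractor.pause_from_sync()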
@ -2758,9 +2876,11 @@ def _post_mortem(
# ^TODO, instead a nice runtime-info + maddr + uid?
# -[ ] impl a `Actor.__repr()__`??
# |_ <task>:<thread> @ <actor>
# no_runtime: bool = False
except NoRuntime:
actor_repr: str = '<no-actor-runtime?>'
# no_runtime: bool = True
try:
task_repr: Task = current_task()
@ -2796,6 +2916,8 @@ def _post_mortem(
# Since we presume the post-mortem was engaged due to a task-ending
# error, we MUST release the local REPL request so that no other
# local task nor the root remains blocked!
# if not no_runtime:
# DebugStatus.release()
DebugStatus.release()
@ -2844,8 +2966,14 @@ async def _maybe_enter_pm(
tb: TracebackType|None = None,
api_frame: FrameType|None = None,
hide_tb: bool = False,
# only enter debugger REPL when returns `True`
debug_filter: Callable[
[BaseException|BaseExceptionGroup],
bool,
] = lambda err: not is_multi_cancelled(err),
):
from tractor._exceptions import is_multi_cancelled
if (
debug_mode()
@ -2862,7 +2990,8 @@ async def _maybe_enter_pm(
# Really we just want to mostly avoid catching KBIs here so there
# might be a simpler check we can do?
and not is_multi_cancelled(err)
and
debug_filter(err)
):
api_frame: FrameType = api_frame or inspect.currentframe()
tb: TracebackType = tb or sys.exc_info()[2]
@ -3033,6 +3162,7 @@ async def maybe_wait_for_debugger(
# pass
return False
# TODO: better naming and what additionals?
# - [ ] optional runtime plugging?
# - [ ] detection for sync vs. async code?
@ -3042,7 +3172,7 @@ async def maybe_wait_for_debugger(
@cm
def open_crash_handler(
catch: set[BaseException] = {
Exception,
# Exception,
BaseException,
},
ignore: set[BaseException] = {
@ -3063,14 +3193,30 @@ def open_crash_handler(
'''
__tracebackhide__: bool = tb_hide
class BoxedMaybeException(Struct):
value: BaseException|None = None
# TODO, yield a `outcome.Error`-like boxed type?
# -[~] use `outcome.Value/Error` X-> frozen!
# -[x] write our own..?
# -[ ] consider just wtv is used by `pytest.raises()`?
#
boxed_maybe_exc = BoxedMaybeException()
err: BaseException
try:
yield
yield boxed_maybe_exc
except tuple(catch) as err:
if type(err) not in ignore:
# use our re-impl-ed version
boxed_maybe_exc.value = err
if (
type(err) not in ignore
and
not is_multi_cancelled(
err,
ignore_nested=ignore
)
):
try:
# use our re-impl-ed version
_post_mortem(
repl=mk_pdb(),
tb=sys.exc_info()[2],
@ -3078,13 +3224,13 @@ def open_crash_handler(
)
except bdb.BdbQuit:
__tracebackhide__: bool = False
raise
raise err
# XXX NOTE, `pdbp`'s version seems to lose the up-stack
# tb-info?
# pdbp.xpm()
raise
raise err
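Since the handler now yields a `BoxedMaybeException`, exc types listed in `ignore` are swallowed but remain inspectable after the block; a usage sketch assuming the final `raise err` above only fires for non-ignored types:

    from tractor.devx import open_crash_handler  # assumed import path

    with open_crash_handler(ignore={ValueError}) as boxed:
        raise ValueError('boom')

    # no REPL was entered and nothing propagated, but the exc
    # is still boxed for post-hoc inspection:
    if boxed.value:
        print(f'caught: {boxed.value!r}')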
@cm


@ -234,7 +234,7 @@ def find_caller_info(
_frame2callerinfo_cache: dict[FrameType, CallerInfo] = {}
# TODO: -[x] move all this into new `.devx._code`!
# TODO: -[x] move all this into new `.devx._frame_stack`!
# -[ ] consider rename to _callstack?
# -[ ] prolly create a `@runtime_api` dec?
# |_ @api_frame seems better?
@ -286,3 +286,18 @@ def api_frame(
wrapped._call_infos: dict[FrameType, CallerInfo] = _frame2callerinfo_cache
wrapped.__api_func__: bool = True
return wrapper(wrapped)
# TODO: something like this instead of the adhoc frame-unhiding
# blocks all over the runtime!! XD
# -[ ] ideally we can expect a certain error (set) and if something
# else is raised then all frames below the wrapped one will be
# un-hidden via `__tracebackhide__: bool = False`.
# |_ might need to dynamically mutate the code objs like
# `pdbp.hideframe()` does?
# -[ ] use this as a `@acm` decorator as introed in 3.10?
# @acm
# async def unhide_frame_when_not(
# error_set: set[BaseException],
# ) -> TracebackType:
# ...


@ -53,6 +53,7 @@ def pformat_boxed_tb(
tb_box_indent: int|None = None,
tb_body_indent: int = 1,
boxer_header: str = '-'
) -> str:
'''
@ -88,10 +89,10 @@ def pformat_boxed_tb(
tb_box: str = (
f'|\n'
f' ------ - ------\n'
f' ------ {boxer_header} ------\n'
f'{tb_body}'
f' ------ - ------\n'
f'_|\n'
f' ------ {boxer_header}- ------\n'
f'_|'
)
tb_box_indent: str = (
tb_box_indent


@ -258,20 +258,28 @@ class ActorContextInfo(Mapping):
def get_logger(
name: str | None = None,
name: str|None = None,
_root_name: str = _proj_name,
logger: Logger|None = None,
# TODO, using `.config.dictConfig()` api?
# -[ ] SO answer with docs links
# |_https://stackoverflow.com/questions/7507825/where-is-a-complete-example-of-logging-config-dictconfig
# |_https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema
subsys_spec: str|None = None,
) -> StackLevelAdapter:
'''Return the package log or a sub-logger for ``name`` if provided.
'''
log: Logger
log = rlog = logging.getLogger(_root_name)
log = rlog = logger or logging.getLogger(_root_name)
if (
name
and name != _proj_name
and
name != _proj_name
):
# NOTE: for handling for modules that use ``get_logger(__name__)``
@ -283,7 +291,7 @@ def get_logger(
# since in python the {filename} is always this same
# module-file.
sub_name: None | str = None
sub_name: None|str = None
rname, _, sub_name = name.partition('.')
pkgpath, _, modfilename = sub_name.rpartition('.')
@ -306,7 +314,10 @@ def get_logger(
# add our actor-task aware adapter which will dynamically look up
# the actor and task names at each log emit
logger = StackLevelAdapter(log, ActorContextInfo())
logger = StackLevelAdapter(
log,
ActorContextInfo(),
)
# additional levels
for name, val in CUSTOM_LEVELS.items():
@ -319,15 +330,25 @@ def get_logger(
def get_console_log(
level: str | None = None,
level: str|None = None,
logger: Logger|None = None,
**kwargs,
) -> LoggerAdapter:
'''Get the package logger and enable a handler which writes to stderr.
Yeah yeah, i know we can use ``DictConfig``. You do it.
) -> LoggerAdapter:
'''
log = get_logger(**kwargs) # our root logger
logger = log.logger
Get a `tractor`-style logging instance: a `Logger` wrapped in
a `StackLevelAdapter` which injects various concurrency-primitive
(process, thread, task) fields and enables a `StreamHandler` that
writes on stderr using `colorlog` formatting.
Yeah yeah, i know we can use `logging.config.dictConfig()`. You do it.
'''
log = get_logger(
logger=logger,
**kwargs
) # set a root logger
logger: Logger = log.logger
if not level:
return log
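The new `logger` passthrough lets callers wrap their own `logging.Logger` instead of the package-root logger; a hedged usage sketch:

    import logging
    from tractor.log import get_console_log

    # hand in an app-level logger; it gets wrapped in a
    # `StackLevelAdapter` with a colorlog stderr handler attached.
    mylog = logging.getLogger('myapp')
    log = get_console_log(
        level='info',
        logger=mylog,
    )
    log.info('hello from a tractor-styled logger')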
@ -346,9 +367,13 @@ def get_console_log(
None,
)
):
fmt = LOG_FORMAT
# if logger:
# fmt = None
handler = StreamHandler()
formatter = colorlog.ColoredFormatter(
LOG_FORMAT,
fmt=fmt,
datefmt=DATE_FORMAT,
log_colors=STD_PALETTE,
secondary_log_colors=BOLD_PALETTE,
@ -365,7 +390,7 @@ def get_loglevel() -> str:
# global module logger for tractor itself
log = get_logger('tractor')
log: StackLevelAdapter = get_logger('tractor')
def at_least_level(


@ -41,8 +41,10 @@ import textwrap
from typing import (
Any,
Callable,
Protocol,
Type,
TYPE_CHECKING,
TypeVar,
Union,
)
from types import ModuleType
@ -181,7 +183,11 @@ def mk_dec(
dec_hook: Callable|None = None,
) -> MsgDec:
'''
Create an IPC msg decoder, normally used as the
`PayloadMsg.pld: PayloadT` field decoder inside a `PldRx`.
'''
return MsgDec(
_dec=msgpack.Decoder(
type=spec, # like `MsgType[Any]`
@ -227,6 +233,13 @@ def pformat_msgspec(
join_char: str = '\n',
) -> str:
'''
Pretty `str` format the `msgspec.msgpack.Decoder.type` attribute
for display in (console) log messages as a nice (maybe multiline)
presentation of all supported `Struct`s (subtypes) available for
typed decoding.
'''
dec: msgpack.Decoder = getattr(codec, 'dec', codec)
return join_char.join(
mk_msgspec_table(
@ -630,31 +643,57 @@ def limit_msg_spec(
# # import pdbp; pdbp.set_trace()
# assert ext_codec.pld_spec == extended_spec
# yield ext_codec
#
# ^-TODO-^ is it impossible to make something like this orr!?
# TODO: make an auto-custom hook generator from a set of input custom
# types?
# -[ ] below is a proto design using a `TypeCodec` idea?
#
# type var for the expected interchange-lib's
# IPC-transport type when not available as a built-in
# serialization output.
WireT = TypeVar('WireT')
# TODO: make something similar to this inside `._codec` such that
# user can just pass a type table of some sort?
# -[ ] we would need to decode all msgs to `pretty_struct.Struct`
# and then call `.to_dict()` on them?
# -[x] we're going to need to re-impl all the stuff changed in the
# runtime port such that it can handle dicts or `Msg`s?
#
# def mk_dict_msg_codec_hooks() -> tuple[Callable, Callable]:
# '''
# Deliver a `enc_hook()`/`dec_hook()` pair which does
# manual conversion from our above native `Msg` set
# to `dict` equivalent (wire msgs) in order to keep legacy compat
# with the original runtime implementation.
#
# Note: this is/was primarily used while moving the core
# runtime over to using native `Msg`-struct types wherein we
# start with the send side emitting without loading
# a typed-decoder and then later flipping the switch over to
# load to the native struct types once all runtime usage has
# been adjusted appropriately.
#
# '''
# return (
# # enc_to_dict,
# dec_from_dict,
# )
# TODO: some kinda (decorator) API for built-in subtypes
# that builds this implicitly by inspecting the `mro()`?
class TypeCodec(Protocol):
'''
A per-custom-type wire-transport serialization translator
description type.
'''
src_type: Type
wire_type: WireT
def encode(obj: Type) -> WireT:
...
def decode(
obj_type: Type[WireT],
obj: WireT,
) -> Type:
...
class MsgpackTypeCodec(TypeCodec):
...
def mk_codec_hooks(
type_codecs: list[TypeCodec],
) -> tuple[Callable, Callable]:
'''
Deliver an `enc_hook()`/`dec_hook()` pair which handles
manual conversion for an input `Type` set: whenever the
`TypeCodec.filter()` predicate matches, `TypeCodec.decode()`
is called on the input wire object by the `dec_hook()`;
whenever `isinstance(obj, TypeCodec.src_type)` matches
inside the `enc_hook(obj=obj)`, the return value is taken
from a `TypeCodec.encode(obj)` callback.
'''
...
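Since `TypeCodec` is only a draft `Protocol`, here's a hedged sketch of what a concrete per-type codec might look like (a hypothetical `Decimal`-over-`str` translator, not part of this diff):

    from decimal import Decimal

    class DecimalCodec:
        '''
        Hypothetical `TypeCodec` impl: ship `Decimal`s as `str`
        on the wire since msgpack has no native decimal type.
        '''
        src_type = Decimal
        wire_type = str

        @staticmethod
        def encode(obj: Decimal) -> str:
            return str(obj)

        @staticmethod
        def decode(
            obj_type: type[str],
            obj: str,
        ) -> Decimal:
            return Decimal(obj)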


@ -30,9 +30,9 @@ from msgspec import (
Struct as _Struct,
structs,
)
from pprint import (
saferepr,
)
# from pprint import (
# saferepr,
# )
from tractor.log import get_logger
@ -75,8 +75,8 @@ class DiffDump(UserList):
for k, left, right in self:
repstr += (
f'({k},\n'
f'\t{repr(left)},\n'
f'\t{repr(right)},\n'
f' |_{repr(left)},\n'
f' |_{repr(right)},\n'
')\n'
)
repstr += ']\n'
@ -144,15 +144,22 @@ def pformat(
field_indent=indent + field_indent,
)
else: # the `pprint` recursion-safe format:
else:
val_str: str = repr(v)
# XXX LOL, below just seems to be f#$%in causing
# recursion errs..
#
# the `pprint` recursion-safe format:
# https://docs.python.org/3.11/library/pprint.html#pprint.saferepr
try:
val_str: str = saferepr(v)
except Exception:
log.exception(
'Failed to `saferepr({type(struct)})` !?\n'
)
return _Struct.__repr__(struct)
# try:
# val_str: str = saferepr(v)
# except Exception:
# log.exception(
# 'Failed to `saferepr({type(struct)})` !?\n'
# )
# raise
# return _Struct.__repr__(struct)
# TODO: LOLOL use `textwrap.indent()` instead dawwwwwg!
obj_str += (field_ws + f'{k}: {typ_name} = {val_str},\n')
@ -203,12 +210,7 @@ class Struct(
return sin_props
pformat = pformat
# __repr__ = pformat
# __str__ = __repr__ = pformat
# TODO: use a pprint.PrettyPrinter instance around ONLY rendering
# inside a known tty?
# def __repr__(self) -> str:
# ...
def __repr__(self) -> str:
try:
return pformat(self)
@ -218,6 +220,13 @@ class Struct(
)
return _Struct.__repr__(self)
# __repr__ = pformat
# __str__ = __repr__ = pformat
# TODO: use a pprint.PrettyPrinter instance around ONLY rendering
# inside a known tty?
# def __repr__(self) -> str:
# ...
def copy(
self,
update: dict | None = None,
@ -267,13 +276,15 @@ class Struct(
fi.type(getattr(self, fi.name)),
)
# TODO: make a mod func instead and just point to it here for
# method impl?
def __sub__(
self,
other: Struct,
) -> DiffDump[tuple[str, Any, Any]]:
'''
Compare fields/items key-wise and return a ``DiffDump``
Compare fields/items key-wise and return a `DiffDump`
for easy visual REPL comparison B)
'''
@ -290,3 +301,42 @@ class Struct(
))
return diffs
@classmethod
def fields_diff(
cls,
other: dict|Struct,
) -> DiffDump[tuple[str, Any, Any]]:
'''
Very similar to `PrettyStruct.__sub__()` except it accepts an
input `other: dict` (one that would normally be passed like
`Struct(**other)`) and returns a `DiffDump` comparing the
struct's fields against the `dict`'s keys.
'''
nullish = object()
consumed: dict = other.copy()
diffs: DiffDump[tuple[str, Any, Any]] = DiffDump()
for fi in structs.fields(cls):
field_name: str = fi.name
# ours: Any = getattr(self, field_name)
theirs: Any = consumed.pop(field_name, nullish)
if theirs is nullish:
diffs.append((
field_name,
f'{fi.type!r}',
'NOT-DEFINED in `other: dict`',
))
# when there are lingering fields in `other` that this struct
# DOES NOT define we also append those.
if consumed:
for k, v in consumed.items():
diffs.append((
k,
f'NOT-DEFINED for `{cls.__name__}`',
f'`other: dict` has value = {v!r}',
))
return diffs
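A combined usage sketch of the key-wise `__sub__` diffing and the new `fields_diff()` classmethod (struct name hypothetical, import path assumed):

    from tractor.msg.pretty_struct import Struct

    class Point(Struct):
        x: int
        y: int

    # key-wise field diff between 2 instances:
    diffs = Point(x=1, y=2) - Point(x=1, y=3)  # -> [('y', 2, 3)]

    # diff the struct type's field set against a raw `dict`:
    diffs = Point.fields_diff({'x': 1, 'z': 3})
    # -> `y` reported as NOT-DEFINED in `other: dict` and `z` as
    #    NOT-DEFINED for `Point`.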

File diff suppressed because it is too large


@ -382,7 +382,7 @@ class BroadcastReceiver(ReceiveChannel):
# likely it makes sense to unwind back to the
# underlying?
# import tractor
# await tractor.breakpoint()
# await tractor.pause()
log.warning(
f'Only one sub left for {self}?\n'
'We can probably unwind from breceiver?'