Compare commits


37 Commits

Author SHA1 Message Date
Tyler Goodlet 34bb787d36 pdbp: adding typing to config settings vars 2023-05-08 12:02:42 -04:00
Tyler Goodlet 53f6fa4867 pdbp: flip dep name 2023-05-08 12:01:56 -04:00
Tyler Goodlet 0fe4ce5432 TOSQUASH 4759e30: turn it ON i guess? XD 2023-04-20 19:14:14 -04:00
Tyler Goodlet f65ed12c21 Oof, fix remaining `Actor.cancel()` in `Actor._from_parent()` 2023-04-20 19:13:35 -04:00
Tyler Goodlet 4759e301cb `pdbp`: turn off line truncating by default, fixes terminal resizing stuff 2023-04-19 15:31:02 -04:00
Tyler Goodlet ed5eabd054 Yeahh.. maybe sticky off by default is a little better for us XD 2023-04-18 09:43:14 -04:00
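
The three `pdbp` commits above amount to a handful of REPL config toggles. A hedged sketch of what such an override can look like, assuming `pdbp` keeps `pdb++`'s `DefaultConfig` attribute names (`sticky_by_default`, `truncate_long_lines`); tractor's actual `_debug.py` may differ:

.. code:: python

    import pdbp

    class TractorConfig(pdbp.DefaultConfig):
        # typed vars per the "adding typing to config settings" commit
        use_pygments: bool = True
        # "sticky off by default is a little better for us"
        sticky_by_default: bool = False
        # turn off line truncating so terminal resizing doesn't
        # mangle the display
        truncate_long_lines: bool = False
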
Tyler Goodlet 0f24d4b83c Add a debug-mode-breakpoint-causes-hang case!
Only found this by luck more or less (while working on something in
a client project) and it turns out we can actually get to (yet another)
hang state where SIGINT will be ignored by the root actor on teardown..

I've added all the necessary logic flags to reproduce. We obviously need
a follow up bug issue and a test suite to replicate!

It appears as though the following are required based on very light
tinkering:
- infected asyncio mode active
- debug mode active
- the `trio` context must breakpoint *before* `.started()`-ing
- the `asyncio` must **not** error
2023-04-17 13:43:16 -04:00
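
A minimal sketch combining those four conditions; the names below are illustrative (not the repo's eventual test code) and the `(to_trio, from_trio)` param order is assumed from `tractor.to_asyncio`'s channel-target convention:

.. code:: python

    import asyncio
    import trio
    import tractor
    from tractor import to_asyncio


    async def aio_sleep_forever(to_trio, from_trio):
        to_trio.send_nowait('start')  # sync with the `trio` side
        await asyncio.sleep(float('inf'))  # the asyncio side must NOT error


    @tractor.context
    async def trio_ctx(ctx: tractor.Context):
        async with to_asyncio.open_channel_from(
            aio_sleep_forever,
        ) as (first, chan):
            # the `trio` context breakpoints *before* `.started()`-ing
            await tractor.breakpoint()
            await ctx.started(first)


    async def main():
        async with tractor.open_nursery(
            debug_mode=True,  # debug mode active
        ) as an:
            portal = await an.start_actor(
                'hanger',
                enable_modules=[__name__],
                infect_asyncio=True,  # infected asyncio mode active
            )
            async with portal.open_context(trio_ctx) as (ctx, first):
                await trio.sleep_forever()


    if __name__ == '__main__':
        trio.run(main)  # SIGINT on teardown can be ignored -> hang
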
Tyler Goodlet 19b7f9c71a Some more 3.10+ optional type sigs 2023-04-17 13:43:16 -04:00
Tyler Goodlet b7341bb81e Add (first-draft) infected-`asyncio` actor task uses debugger example 2023-04-17 13:43:16 -04:00
Tyler Goodlet 4d3c109277 Restore `breakpoint()` hook after runtime exits
Previously we were leaking our (pdb++) override into the Python runtime
which would always result in a runtime error whenever `breakpoint()` was
called outside our runtime, i.e. after exit of the root actor. This
explicitly restores any previous hook override (detected during startup)
or deletes the hook and restores the environment if none existed prior.

Also adds a new WIP debugging example script to ensure breakpointing
works as normal after runtime close; this will be added to the test
suite.
2023-04-17 13:43:16 -04:00
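
The restore step follows the stdlib's documented convention for `sys.breakpointhook`; roughly (names here are a sketch, not the actual `_debug.py` internals):

.. code:: python

    import os
    import sys

    # snapshot any pre-existing override when the runtime starts up
    _orig_hook = sys.breakpointhook
    _orig_env: str | None = os.environ.get('PYTHONBREAKPOINT')


    def restore_builtin_breakpoint() -> None:
        # restore the prior hook, or the stdlib default which is
        # always available as `sys.__breakpointhook__`, as per
        # https://docs.python.org/3/library/sys.html#sys.breakpointhook
        sys.breakpointhook = _orig_hook or sys.__breakpointhook__
        if _orig_env is None:
            os.environ.pop('PYTHONBREAKPOINT', None)
        else:
            os.environ['PYTHONBREAKPOINT'] = _orig_env
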
Tyler Goodlet d0a65e8922 Hide actor nursery exit frame 2023-04-17 13:43:16 -04:00
Tyler Goodlet 61b3a72b7c First try: switch debug machinery over to `pdbp` B) 2023-04-17 13:43:16 -04:00
Tyler Goodlet f4ed2d29f8 Use multiline import for debug mod 2023-04-17 13:43:16 -04:00
Tyler Goodlet 205d38abfa Change over debugger tests to use `PROMPT` var.. 2023-04-17 13:43:16 -04:00
Tyler Goodlet ae1fa4c75e Tweak doc string 2023-04-14 18:08:08 -04:00
Tyler Goodlet c7b56ae639 Move context code into new `._context` mod 2023-04-14 16:35:25 -04:00
Tyler Goodlet 97446b84c0 Drop caller cancels overrun test; covered in new tests 2023-04-14 16:35:25 -04:00
Tyler Goodlet b5a27e7864 Ignore drainer-task nursery RTE during context exit 2023-04-14 16:35:25 -04:00
Tyler Goodlet a7faa26686 Set `Context._scope_nursery` on callee side too
Because obviously we probably want to support `allow_overruns` on the
remote callee side as well XD

Only found the bugs fixed in this patch thanks to writing a much
more exhaustive test set for overrun cases B)
2023-04-14 16:35:25 -04:00
Tyler Goodlet 1183276653 Seriously cover all overrun cases
This actually caught further runtime bugs so it's gud i tried..
Add overrun-ignore enabled / disabled cases and error catching for all
of them. More or less this should cover every possible outcome when
it comes to setting `allow_overruns: bool` i hope XD
2023-04-14 16:35:25 -04:00
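
For reference, these cases hinge on the `allow_overruns` flag which the new tests exercise through `Portal.open_context()`; a hedged sketch of the overrun-ignore-enabled case (the endpoint name is made up):

.. code:: python

    import trio
    import tractor


    @tractor.context
    async def slow_consumer(ctx: tractor.Context):
        # hypothetical callee which doesn't read its stream for
        # a while, letting the sender overrun the feeder mem-chan
        await ctx.started()
        async with ctx.open_stream() as stream:
            await trio.sleep(1)


    async def main():
        async with tractor.open_nursery() as an:
            portal = await an.start_actor(
                'child',
                enable_modules=[__name__],
            )
            async with portal.open_context(
                slow_consumer,
                allow_overruns=True,  # the overrun-ignore-enabled case
            ) as (ctx, first):
                async with ctx.open_stream() as stream:
                    for i in range(50):
                        # with overruns disallowed this blasting would
                        # instead trigger a remote overrun error
                        await stream.send(i)
            await portal.cancel_actor()


    if __name__ == '__main__':
        trio.run(main)
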
Tyler Goodlet 8bd4db150b Adjust aio test for silent cancellation by parent 2023-04-14 16:35:25 -04:00
Tyler Goodlet 6aa780d0cd Fix cluster test to use `allow_overruns` 2023-04-14 16:35:25 -04:00
Tyler Goodlet 45f601a035 Flip allocate log msgs to debug 2023-04-14 16:35:25 -04:00
Tyler Goodlet ec4b72259d Drop backpressure usage from fan out tests 2023-04-14 16:35:25 -04:00
Tyler Goodlet e97ed377b0 Remote `Context` cancellation semantics rework B)
This adds remote cancellation semantics to our `tractor.Context`
machinery to more closely match that of `trio.CancelScope` but
with operational differences to handle the nature of parallel tasks interoperating
across multiple memory boundaries:

- if an actor task cancels some context it has opened via
  `Context.cancel()`, the remote (scope linked) task will be cancelled
  using the normal `CancelScope` semantics of `trio` meaning the remote
  cancel scope surrounding the far side task is cancelled and
  `trio.Cancelled`s are expected to be raised in that scope as per
  normal `trio` operation, and in the case where no error is raised
  in that remote scope, a `ContextCancelled` error is raised inside the
  runtime machinery and relayed back to the opener/caller side of the
  context.
- if any actor task cancels a full remote actor runtime using
  `Portal.cancel_actor()` the same semantics as above apply except every
  other remote actor task which also has an open context with the actor
  which was cancelled will also be sent a `ContextCancelled` **but**
  with the `.canceller` field set to the uid of the original cancel
  requesting actor.

This changeset also includes a more "proper" solution to the issue of
"allowing overruns" during streaming without attempting to implement any
form of IPC streaming backpressure. Implementing task-granularity
backpressure cross-process turns out to be more or less impossible
without augmenting our streaming protocol (likely at the cost of
performance). Further, allowing overruns requires special care since
any blocking of the runtime RPC msg loop task effectively can block
control msgs such as cancels and stream terminations.

The implementation details per abstraction layer are as follows.

._streaming.Context:
- add a new constructor factory func `mk_context()` which provides
  a strictly private init-er whilst allowing us to not have to define
  an `.__init__()` on the type def.
- add public `.cancel_called` and `.cancel_called_remote` properties.
- general rename of what was the internal `._backpressure` var to
  `._allow_overruns: bool`.
- move the old contents of `Actor._push_result()` into a new
  `._deliver_msg()` allowing for better encapsulation of per-ctx
  msg handling.
- always check for received 'error' msgs and process them with the new
  `_maybe_cancel_and_set_remote_error()` **before** any msg delivery to
  the local task, thus guaranteeing error and cancellation handling
  despite any overflow handling.
- add a new `._drain_overflows()` task-method for use with the new
  `._allow_overruns: bool = True` mode; a standalone sketch of the idea
  follows this log entry.
- add back a `._scope_nursery: trio.Nursery` (allocated in
  `Portal.open_context()`) whose sole purpose is to spawn a single task
  which runs the above method; anything else is an error.
- augment `._deliver_msg()` to start a task and run the above method
  when operating in the overruns-allowed mode; the task queues overflow
  msgs and attempts to send them to the underlying mem chan using
  a blocking `.send()` call.
- on context exit, any existing "drainer task" will be cancelled and
  remaining overflow queued msgs are discarded with a warning.
- rename `._error` -> `_remote_error` and set it in a new method
  `_maybe_cancel_and_set_remote_error()` which is called before
  processing any received msgs.
- adjust `.result()` to always call `._maybe_raise_remote_err()` at its
  start such that whenever a `ContextCancelled` arrives we do logic for
  whether or not to immediately raise that error or ignore it due to the
  current actor being the one who requested the cancel, by checking the
  error's `.canceller` field.
- set the default value of `._result` to be `id(Context())` thus avoiding
  conflict with any `.result()` actually being `False`..

._runtime.Actor:
- augment `.cancel()` and `._cancel_task()` and `.cancel_rpc_tasks()` to
  take a `requesting_uid: tuple` indicating the source actor of every
  cancellation request.
- pass through the new `Context._allow_overruns` through `.get_context()`
- call the new `Context._deliver_msg()` from `._push_result()` (since
  that method's contents were factored out).

._runtime._invoke:
- `TaskStatus.started()` back a `Context` (unless an error is raised)
  instead of the cancel scope to make it easy to set/get state on that
  context for the purposes of cancellation and remote error relay.
- always raise any remote error via `Context._maybe_raise_remote_err()`
  before doing any `ContextCancelled` logic.
- assign any `Context._cancel_called_remote` set by the `requesting_uid`
  cancel methods (mentioned above) to the `ContextCancelled.canceller`.

._runtime.process_messages:
- always pass a `requesting_uid: tuple` to `Actor.cancel()` and
  `._cancel_task()` so that any corresponding `ContextCancelled.canceller`
  can be set inside `._invoke()`.
2023-04-14 16:35:25 -04:00
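
In caller-side terms, the headline semantic above is that a self-requested cancel is swallowed on context exit, just like `trio`'s own scope-cancel. A minimal sketch (the endpoint and actor names are illustrative):

.. code:: python

    import trio
    import tractor


    @tractor.context
    async def sleep_forever(ctx: tractor.Context):
        await ctx.started()
        await trio.sleep_forever()


    async def main():
        async with tractor.open_nursery() as an:
            portal = await an.start_actor(
                'sleeper',
                enable_modules=[__name__],
            )
            async with portal.open_context(sleep_forever) as (ctx, first):
                # cancel the far side task's surrounding scope; the
                # relayed `ContextCancelled` carries our uid in its
                # `.canceller` field so the exit machinery ignores it
                # instead of (re)raising.
                await ctx.cancel()
            await portal.cancel_actor()


    if __name__ == '__main__':
        trio.run(main)
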
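
The `._drain_overflows()` bullet above boils down to a queue-then-drain pattern; a standalone sketch with illustrative names (not the actual `Context` internals):

.. code:: python

    from collections import deque

    import trio


    class OverflowDeliverer:
        '''
        Queue msgs which would otherwise block the runtime's RPC
        msg-loop task and drain them from exactly one spawned task
        via a blocking `.send()`.

        '''
        def __init__(self, send_chan: trio.MemorySendChannel):
            self._send_chan = send_chan
            self._overflow: deque = deque()
            self._draining: bool = False

        async def _drain_overflows(self) -> None:
            while self._overflow:
                await self._send_chan.send(self._overflow.popleft())
            self._draining = False

        def deliver_msg(self, msg, tn: trio.Nursery) -> None:
            # fast path: no pending overflow, try a non-blocking send
            if not self._overflow:
                try:
                    self._send_chan.send_nowait(msg)
                    return
                except trio.WouldBlock:
                    pass

            # overrun! queue behind any prior overflow (preserving
            # order) and (maybe) start the single drainer task.
            self._overflow.append(msg)
            if not self._draining:
                self._draining = True
                tn.start_soon(self._drain_overflows)
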
Tyler Goodlet 1ec30577de Only tuplize `.canceller` if non-`None` 2023-04-14 16:35:25 -04:00
Tyler Goodlet 0e81350a42 Move `NoRuntime` import inside `current_actor()` to avoid cycle 2023-04-14 16:35:25 -04:00
Tyler Goodlet ac51cf07b9 Augment test cases for callee-returns-result early
Turns out stuff was totally broken in these cases because we're either
closing the underlying mem chan too early or not handling the
"allow_overruns" mode's cancellation correctly..
2023-04-14 16:35:25 -04:00
Tyler Goodlet e16e7ca82a Add new remote error introspection attrs
To handle remote cancellation this adds `ContextCancelled.canceller:
tuple`, the uid of the cancel-requesting actor, which is expected to be
set by the runtime when servicing any remote cancel request. This makes
it possible for `ContextCancelled` receivers to know whether "their actor
runtime" is the source of the cancellation.

Also add an explicit `RemoteActorError.src_actor_uid` which better
formalizes the notion of "which remote actor" the error originated from.

Both of these new attrs are expected to be packed in the `.msgdata` when
the errors are loaded locally.
2023-04-14 16:35:25 -04:00
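
With `.canceller` packed into the error's `.msgdata`, a receiver can implement the "ignore if requested by us" check in one comparison; a hedged sketch:

.. code:: python

    import tractor


    def cancelled_by_us(ctxc: tractor.ContextCancelled) -> bool:
        # if `.canceller` matches our own actor's uid then "our actor
        # runtime" requested the cancel and the error can be swallowed,
        # mirroring `trio.Nursery.cancel_scope.cancel()` semantics.
        return ctxc.canceller == tractor.current_actor().uid
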
Tyler Goodlet 7dd5d8d1f8 Add new set of context cancellation tests
These will verify new changes to the runtime/messaging core which allows
us to adopt an "ignore cancel if requested by us" style handling of
`ContextCancelled` more like how `trio` does with
`trio.Nursery.cancel_scope.cancel()`. We now expect
a `ContextCancelled.canceller: tuple` which is set to the uid of the
actor which requested the cancellation that eventually resulted in
the remote error-msg.

Also adds some experimental tweaks to the "backpressure" test which it
turns out is very problematic in coordination with context cancellation
since blocking on the feed mem chan to some task will block the ipc msg
loop and thus handling of cancellation.. More to come to both the test
and core to address this hopefully since right now this test is failing.
2023-04-14 16:35:25 -04:00
Tyler Goodlet 8913829511 Log waiter task cancelling msg as cancel-level 2023-04-14 16:35:25 -04:00
Tyler Goodlet 60bff71cd3 Assign `RemoteActorError` boxed error type for context cancelleds 2023-04-14 16:35:25 -04:00
Tyler Goodlet 29a1171142 Change a bunch of log levels to cancel, including any `ContextCancelled` handling 2023-04-14 16:35:25 -04:00
Tyler Goodlet b6c7f423f0 Add some log-level method doc-strings 2023-04-14 16:35:25 -04:00
Tyler Goodlet 4db87d3c43 Tweak context doc str 2023-04-14 16:35:25 -04:00
Tyler Goodlet 5f1e83e741 More single doc-strs in discovery mod 2023-04-14 16:35:25 -04:00
Tyler Goodlet ea2cc9ec75 Enable `Context` backpressure by default; avoid startup race-crashes? 2023-04-14 16:35:25 -04:00
108 changed files with 5307 additions and 26120 deletions


@@ -20,7 +20,7 @@ jobs:
       - name: Setup python
         uses: actions/setup-python@v2
         with:
-          python-version: '3.11'
+          python-version: '3.10'
       - name: Install dependencies
         run: pip install -U . --upgrade-strategy eager -r requirements-test.txt
@@ -41,7 +41,7 @@ jobs:
       - name: Setup python
         uses: actions/setup-python@v2
         with:
-          python-version: '3.11'
+          python-version: '3.10'
       - name: Build sdist
         run: python setup.py sdist --formats=zip
@@ -59,7 +59,7 @@ jobs:
     fail-fast: false
     matrix:
       os: [ubuntu-latest]
-      python: ['3.11']
+      python: ['3.10']
       spawn_backend: [
         'trio',
         'mp_spawn',


@@ -1,122 +1,40 @@
-|logo| ``tractor``: distributed structurred concurrency
+|logo| ``tractor``: next-gen Python parallelism
 |gh_actions|
 |docs|
-``tractor`` is a `structured concurrency`_ (SC), multi-processing_ runtime built on trio_.
-Fundamentally, ``tractor`` provides parallelism via
-``trio``-"*actors*": independent Python **processes** (i.e.
-*non-shared-memory threads*) which can schedule ``trio`` tasks whilst
-maintaining *end-to-end SC* inside a *distributed supervision tree*.
-Cross-process (and thus cross-host) SC is accomplished through the
-combined use of our,
-- "actor nurseries_" which provide for spawning multiple, and
-  possibly nested, Python processes each running a ``trio`` scheduled
-  runtime - a call to ``trio.run()``,
-- an "SC-transitive supervision protocol" enforced as an
-  IPC-message-spec encapsulating all RPC-dialogs.
+``tractor`` is a `structured concurrent`_, multi-processing_ runtime
+built on trio_.
+Fundamentally ``tractor`` gives you parallelism via ``trio``-"*actors*":
+our nurseries_ let you spawn new Python processes which each run a ``trio``
+scheduled runtime - a call to ``trio.run()``.
 We believe the system adheres to the `3 axioms`_ of an "`actor model`_"
-but likely **does not** look like what **you** probably *think* an "actor
-model" looks like, and that's **intentional**.
+but likely *does not* look like what *you* probably think an "actor
+model" looks like, and that's *intentional*.
-Where do i start!?
-------------------
-The first step to grok ``tractor`` is to get an intermediate
-knowledge of ``trio`` and **structured concurrency** B)
-Some great places to start are,
-- the seminal `blog post`_
-- obviously the `trio docs`_
-- wikipedia's nascent SC_ page
-- the fancy diagrams @ libdill-docs_
+The first step to grok ``tractor`` is to get the basics of ``trio`` down.
+A great place to start is the `trio docs`_ and this `blog post`_.
 Features
 --------
-- **It's just** a ``trio`` API!
-- *Infinitely nesteable* process trees running embedded ``trio`` tasks.
-- Swappable, OS-specific, process spawning via multiple backends.
-- Modular IPC stack, allowing for custom interchange formats (eg.
-  as offered from `msgspec`_), varied transport protocols (TCP, RUDP,
-  QUIC, wireguard), and OS-env specific higher-perf primitives (UDS,
-  shm-ring-buffers).
-- Optionally distributed_: all IPC and RPC APIs work over multi-host
-  transports the same as local.
-- Builtin high-level streaming API that enables your app to easily
-  leverage the benefits of a "`cheap or nasty`_" `(un)protocol`_.
-- A "native UX" around a multi-process safe debugger REPL using
-  `pdbp`_ (a fork & fix of `pdb++`_)
-- "Infected ``asyncio``" mode: support for starting an actor's
-  runtime as a `guest`_ on the ``asyncio`` loop allowing us to
-  provide stringent SC-style ``trio.Task``-supervision around any
-  ``asyncio.Task`` spawned via our ``tractor.to_asyncio`` APIs.
-- A **very naive** and still very much work-in-progress inter-actor
-  `discovery`_ sys with plans to support multiple `modern protocol`_
-  approaches.
-- Various ``trio`` extension APIs via ``tractor.trionics`` such as,
-  - task fan-out `broadcasting`_,
-  - multi-task-single-resource-caching and fan-out-to-multi
-    ``__aenter__()`` APIs for ``@acm`` functions,
-  - (WIP) a ``TaskMngr``: one-cancels-one style nursery supervisor.
+- **It's just** a ``trio`` API
+- *Infinitely nesteable* process trees
+- Builtin IPC streaming APIs with task fan-out broadcasting
+- A (first ever?) "native" multi-core debugger UX for Python using `pdb++`_
+- Support for a swappable, OS specific, process spawning layer
+- A modular transport stack, allowing for custom serialization (eg. with
+  `msgspec`_), communications protocols, and environment specific IPC
+  primitives
+- Support for spawning process-level-SC, inter-loop one-to-one-task oriented
+  ``asyncio`` actors via "infected ``asyncio``" mode
+- `structured chadcurrency`_ from the ground up
-Install
--------
-``tractor`` is still in a *alpha-near-beta-stage* for many
-of its subsystems, however we are very close to having a stable
-lowlevel runtime and API.
-As such, it's currently recommended that you clone and install the
-repo from source::
-    pip install git+git://github.com/goodboy/tractor.git
-We use the very hip `uv`_ for project mgmt::
-    git clone https://github.com/goodboy/tractor.git
-    cd tractor
-    uv sync --dev
-    uv run python examples/rpc_bidir_streaming.py
-Consider activating a virtual/project-env before starting to hack on
-the code base::
-    # you could use plain ol' venvs
-    # https://docs.astral.sh/uv/pip/environments/
-    uv venv tractor_py313 --python 3.13
-    # but @goodboy prefers the more explicit (and shell agnostic)
-    # https://docs.astral.sh/uv/configuration/environment/#uv_project_environment
-    UV_PROJECT_ENVIRONMENT="tractor_py313
-    # hint hint, enter @goodboy's fave shell B)
-    uv run --dev xonsh
-Alongside all this we ofc offer "releases" on PyPi::
-    pip install tractor
-Just note that YMMV since the main git branch is often much further
-ahead then any latest release.
-Example codez
--------------
-In ``tractor``'s (very lacking) documention we prefer to point to
-example scripts in the repo over duplicating them in docs, but with
-that in mind here are some definitive snippets to try and hook you
-into digging deeper.
 Run a func in a process
-***********************
+-----------------------
 Use ``trio``'s style of focussing on *tasks as functions*:
 .. code:: python
@@ -174,7 +92,7 @@ might want to check out `trio-parallel`_.
 Zombie safe: self-destruct a process tree
-*****************************************
+-----------------------------------------
 ``tractor`` tries to protect you from zombies, no matter what.
 .. code:: python
@@ -200,7 +118,7 @@ Zombie safe: self-destruct a process tree
             f"running in pid {os.getpid()}"
         )
         await trio.sleep_forever()
 async def main():
@@ -230,8 +148,8 @@ it **is a bug**.
 "Native" multi-process debugging
-********************************
+--------------------------------
-Using the magic of `pdbp`_ and our internal IPC, we've
+Using the magic of `pdb++`_ and our internal IPC, we've
 been able to create a native feeling debugging experience for
 any (sub-)process in your ``tractor`` tree.
@@ -285,7 +203,7 @@ We're hoping to add a respawn-from-repl system soon!
 SC compatible bi-directional streaming
-**************************************
+--------------------------------------
 Yes, you saw it here first; we provide 2-way streams
 with reliable, transitive setup/teardown semantics.
@@ -377,7 +295,7 @@ hear your thoughts on!
 Worker poolz are easy peasy
-***************************
+---------------------------
 The initial ask from most new users is *"how do I make a worker
 pool thing?"*.
@@ -399,10 +317,10 @@ This uses no extra threads, fancy semaphores or futures; all we need
 is ``tractor``'s IPC!
 "Infected ``asyncio``" mode
-***************************
+---------------------------
 Have a bunch of ``asyncio`` code you want to force to be SC at the process level?
-Check out our experimental system for `guest`_-mode controlled
+Check out our experimental system for `guest-mode`_ controlled
 ``asyncio`` actors:
 .. code:: python
@@ -508,7 +426,7 @@ We need help refining the `asyncio`-side channel API to be more
 Higher level "cluster" APIs
-***************************
+---------------------------
 To be extra terse the ``tractor`` devs have started hacking some "higher
 level" APIs for managing actor trees/clusters. These interfaces should
 generally be condsidered provisional for now but we encourage you to try
@@ -565,6 +483,18 @@ spawn a flat cluster:
 .. _full worker pool re-implementation: https://github.com/goodboy/tractor/blob/master/examples/parallelism/concurrent_actors_primes.py
+Install
+-------
+From PyPi::
+    pip install tractor
+From git::
+    pip install git+git://github.com/goodboy/tractor.git
 Under the hood
 --------------
 ``tractor`` is an attempt to pair trionic_ `structured concurrency`_ with
@@ -656,7 +586,6 @@ matrix seems too hip, we're also mostly all in the the `trio gitter
 channel`_!
 .. _structured concurrent: https://trio.discourse.group/t/concise-definition-of-structured-concurrency/228
-.. _distributed: https://en.wikipedia.org/wiki/Distributed_computing
 .. _multi-processing: https://en.wikipedia.org/wiki/Multiprocessing
 .. _trio: https://github.com/python-trio/trio
 .. _nurseries: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/#nurseries-a-structured-replacement-for-go-statements
@@ -668,26 +597,19 @@ channel`_!
 .. _adherance to: https://www.youtube.com/watch?v=7erJ1DV_Tlo&t=1821s
 .. _trio gitter channel: https://gitter.im/python-trio/general
 .. _matrix channel: https://matrix.to/#/!tractor:matrix.org
-.. _broadcasting: https://github.com/goodboy/tractor/pull/229
-.. _modern procotol: https://en.wikipedia.org/wiki/Rendezvous_protocol
-.. _pdbp: https://github.com/mdmintz/pdbp
 .. _pdb++: https://github.com/pdbpp/pdbpp
-.. _cheap or nasty: https://zguide.zeromq.org/docs/chapter7/#The-Cheap-or-Nasty-Pattern
-.. _(un)protocol: https://zguide.zeromq.org/docs/chapter7/#Unprotocols
-.. _discovery: https://zguide.zeromq.org/docs/chapter8/#Discovery
-.. _modern protocol: https://en.wikipedia.org/wiki/Rendezvous_protocol
+.. _guest mode: https://trio.readthedocs.io/en/stable/reference-lowlevel.html?highlight=guest%20mode#using-guest-mode-to-run-trio-on-top-of-other-event-loops
 .. _messages: https://en.wikipedia.org/wiki/Message_passing
 .. _trio docs: https://trio.readthedocs.io/en/latest/
 .. _blog post: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
 .. _structured concurrency: https://en.wikipedia.org/wiki/Structured_concurrency
-.. _SC: https://en.wikipedia.org/wiki/Structured_concurrency
-.. _libdill-docs: https://sustrik.github.io/libdill/structured-concurrency.html
+.. _structured chadcurrency: https://en.wikipedia.org/wiki/Structured_concurrency
+.. _structured concurrency: https://en.wikipedia.org/wiki/Structured_concurrency
 .. _unrequirements: https://en.wikipedia.org/wiki/Actor_model#Direct_communication_and_asynchrony
 .. _async generators: https://www.python.org/dev/peps/pep-0525/
 .. _trio-parallel: https://github.com/richardsheridan/trio-parallel
-.. _uv: https://docs.astral.sh/uv/
 .. _msgspec: https://jcristharif.com/msgspec/
-.. _guest: https://trio.readthedocs.io/en/stable/reference-lowlevel.html?highlight=guest%20mode#using-guest-mode-to-run-trio-on-top-of-other-event-loops
+.. _guest-mode: https://trio.readthedocs.io/en/stable/reference-lowlevel.html?highlight=guest%20mode#using-guest-mode-to-run-trio-on-top-of-other-event-loops
 .. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fgoodboy%2Ftractor%2Fbadge&style=popout-square


@@ -6,120 +6,81 @@ been an outage) and we want to ensure that despite being in debug mode
 actor tree will eventually be cancelled without leaving any zombies.
 '''
-from contextlib import asynccontextmanager as acm
-from functools import partial
+import trio
 from tractor import (
     open_nursery,
     context,
     Context,
-    ContextCancelled,
     MsgStream,
-    _testing,
 )
-import trio
-import pytest
-async def break_ipc_then_error(
+async def break_channel_silently_then_error(
     stream: MsgStream,
-    break_ipc_with: str|None = None,
-    pre_close: bool = False,
-):
-    await _testing.break_ipc(
-        stream=stream,
-        method=break_ipc_with,
-        pre_close=pre_close,
-    )
-    async for msg in stream:
-        await stream.send(msg)
-    assert 0
-async def iter_ipc_stream(
-    stream: MsgStream,
-    break_ipc_with: str|None = None,
-    pre_close: bool = False,
 ):
     async for msg in stream:
         await stream.send(msg)
+    # XXX: close the channel right after an error is raised
+    # purposely breaking the IPC transport to make sure the parent
+    # doesn't get stuck in debug or hang on the connection join.
+    # this more or less simulates an infinite msg-receive hang on
+    # the other end.
+    await stream._ctx.chan.send(None)
+    assert 0
+async def close_stream_and_error(
+    stream: MsgStream,
+):
+    async for msg in stream:
+        await stream.send(msg)
+    # wipe out channel right before raising
+    await stream._ctx.chan.send(None)
+    await stream.aclose()
+    assert 0
 @context
 async def recv_and_spawn_net_killers(
     ctx: Context,
-    break_ipc_after: bool|int = False,
-    pre_close: bool = False,
+    break_ipc_after: bool | int = False,
 ) -> None:
     '''
     Receive stream msgs and spawn some IPC killers mid-stream.
     '''
-    broke_ipc: bool = False
     await ctx.started()
     async with (
         ctx.open_stream() as stream,
-        trio.open_nursery(
-            strict_exception_groups=False,
-        ) as tn,
+        trio.open_nursery() as n,
     ):
         async for i in stream:
             print(f'child echoing {i}')
-            if not broke_ipc:
-                await stream.send(i)
-            else:
-                await trio.sleep(0.01)
+            await stream.send(i)
             if (
                 break_ipc_after
-                and
-                i >= break_ipc_after
+                and i > break_ipc_after
             ):
-                broke_ipc = True
-                tn.start_soon(
-                    iter_ipc_stream,
-                    stream,
-                )
-                tn.start_soon(
-                    partial(
-                        break_ipc_then_error,
-                        stream=stream,
-                        pre_close=pre_close,
-                    )
-                )
+                '#################################\n'
+                'Simulating child-side IPC BREAK!\n'
+                '#################################'
+                n.start_soon(break_channel_silently_then_error, stream)
+                n.start_soon(close_stream_and_error, stream)
-@acm
-async def stuff_hangin_ctlc(timeout: float = 1) -> None:
-    with trio.move_on_after(timeout) as cs:
-        yield timeout
-        if cs.cancelled_caught:
-            # pretend to be a user seeing no streaming action
-            # thinking it's a hang, and then hitting ctl-c..
-            print(
-                f"i'm a user on the PARENT side and thingz hangin "
-                f'after timeout={timeout} ???\n\n'
-                'MASHING CTlR-C..!?\n'
-            )
-            raise KeyboardInterrupt
 async def main(
     debug_mode: bool = False,
     start_method: str = 'trio',
-    loglevel: str = 'cancel',
     # by default we break the parent IPC first (if configured to break
     # at all), but this can be changed so the child does first (even if
     # both are set to break).
-    break_parent_ipc_after: int|bool = False,
-    break_child_ipc_after: int|bool = False,
-    pre_close: bool = False,
+    break_parent_ipc_after: int | bool = False,
+    break_child_ipc_after: int | bool = False,
 ) -> None:
@@ -130,129 +91,60 @@ async def main(
         # NOTE: even debugger is used we shouldn't get
         # a hang since it never engages due to broken IPC
         debug_mode=debug_mode,
-        loglevel=loglevel,
+        loglevel='warning',
     ) as an,
 ):
-    sub_name: str = 'chitty_hijo'
     portal = await an.start_actor(
-        sub_name,
+        'chitty_hijo',
         enable_modules=[__name__],
     )
-    async with (
-        stuff_hangin_ctlc(timeout=2) as timeout,
-        _testing.expect_ctxc(
-            yay=(
-                break_parent_ipc_after
-                or break_child_ipc_after
-            ),
-            # TODO: we CAN'T remove this right?
-            # since we need the ctxc to bubble up from either
-            # the stream API after the `None` msg is sent
-            # (which actually implicitly cancels all remote
-            # tasks in the hijo) or from simluated
-            # KBI-mash-from-user
-            # or should we expect that a KBI triggers the ctxc
-            # and KBI in an eg?
-            reraise=True,
-        ),
-        portal.open_context(
-            recv_and_spawn_net_killers,
-            break_ipc_after=break_child_ipc_after,
-            pre_close=pre_close,
-        ) as (ctx, sent),
-    ):
-        rx_eoc: bool = False
-        ipc_break_sent: bool = False
+    async with portal.open_context(
+        recv_and_spawn_net_killers,
+        break_ipc_after=break_child_ipc_after,
+    ) as (ctx, sent):
         async with ctx.open_stream() as stream:
             for i in range(1000):
                 if (
                     break_parent_ipc_after
-                    and
-                    i > break_parent_ipc_after
-                    and
-                    not ipc_break_sent
+                    and i > break_parent_ipc_after
                 ):
                     print(
                         '#################################\n'
-                        'Simulating PARENT-side IPC BREAK!\n'
-                        '#################################\n'
+                        'Simulating parent-side IPC BREAK!\n'
+                        '#################################'
                     )
-                    # TODO: other methods? see break func above.
-                    # await stream._ctx.chan.send(None)
-                    # await stream._ctx.chan.transport.stream.send_eof()
-                    await stream._ctx.chan.transport.stream.aclose()
-                    ipc_break_sent = True
+                    await stream._ctx.chan.send(None)
                 # it actually breaks right here in the
-                # mp_spawn/forkserver backends and thus the
-                # zombie reaper never even kicks in?
-                try:
-                    print(f'parent sending {i}')
-                    await stream.send(i)
-                except ContextCancelled as ctxc:
-                    print(
-                        'parent received ctxc on `stream.send()`\n'
-                        f'{ctxc}\n'
-                    )
-                    assert 'root' in ctxc.canceller
-                    assert sub_name in ctx.canceller
-                    # TODO: is this needed or no?
-                    raise
-                except trio.ClosedResourceError:
-                    # NOTE: don't send if we already broke the
-                    # connection to avoid raising a closed-error
-                    # such that we drop through to the ctl-c
-                    # mashing by user.
-                    await trio.sleep(0.01)
-                # timeout: int = 1
-                # with trio.move_on_after(timeout) as cs:
-                async with stuff_hangin_ctlc() as timeout:
-                    print(
-                        f'PARENT `stream.receive()` with timeout={timeout}\n'
-                    )
+                # mp_spawn/forkserver backends and thus the zombie
+                # reaper never even kicks in?
+                print(f'parent sending {i}')
+                await stream.send(i)
+                with trio.move_on_after(2) as cs:
                     # NOTE: in the parent side IPC failure case this
                     # will raise an ``EndOfChannel`` after the child
                     # is killed and sends a stop msg back to it's
                     # caller/this-parent.
-                    try:
-                        rx = await stream.receive()
-                        print(
-                            "I'm a happy PARENT user and echoed to me is\n"
-                            f'{rx}\n'
-                        )
-                    except trio.EndOfChannel:
-                        rx_eoc: bool = True
-                        print('MsgStream got EoC for PARENT')
-                        raise
-            print(
-                'Streaming finished and we got Eoc.\n'
-                'Canceling `.open_context()` in root with\n'
-                'CTlR-C..'
-            )
-            if rx_eoc:
-                assert stream.closed
-                try:
-                    await stream.send(i)
-                    pytest.fail('stream not closed?')
-                except (
-                    trio.ClosedResourceError,
-                    trio.EndOfChannel,
-                ) as send_err:
-                    if rx_eoc:
-                        assert send_err is stream._eoc
-                    else:
-                        assert send_err is stream._closed
-            raise KeyboardInterrupt
+                    rx = await stream.receive()
+                    print(f"I'm a happy user and echoed to me is {rx}")
+                if cs.cancelled_caught:
+                    # pretend to be a user seeing no streaming action
+                    # thinking it's a hang, and then hitting ctl-c..
+                    print("YOO i'm a user anddd thingz hangin..")
+                    print(
+                        "YOO i'm mad send side dun but thingz hangin..\n"
+                        'MASHING CTlR-C Ctl-c..'
+                    )
+                    raise KeyboardInterrupt
 if __name__ == '__main__':


@@ -1,16 +1,8 @@
-'''
-Examples of using the builtin `breakpoint()` from an `asyncio.Task`
-running in a subactor spawned with `infect_asyncio=True`.
-'''
 import asyncio
 import trio
 import tractor
-from tractor import (
-    to_asyncio,
-    Portal,
-)
+from tractor import to_asyncio
 async def aio_sleep_forever():
@@ -25,21 +17,21 @@
 ) -> None:
-    # sync with `trio`-side (caller) task
+    # sync with ``trio``-side (caller) task
     to_trio.send_nowait('start')
     # NOTE: what happens here inside the hook needs some refinement..
     # => seems like it's still `._debug._set_trace()` but
     # we set `Lock.local_task_in_debug = 'sync'`, we probably want
-    # some further, at least, meta-data about the task/actor in debug
-    # in terms of making it clear it's `asyncio` mucking about.
-    breakpoint()  # asyncio-side
+    # some further, at least, meta-data about the task/actoq in debug
+    # in terms of making it clear it's asyncio mucking about.
+    breakpoint()
     # short checkpoint / delay
-    await asyncio.sleep(0.5)  # asyncio-side
+    await asyncio.sleep(0.5)
     if raise_after_bp:
-        raise ValueError('asyncio side error!')
+        raise ValueError('blah')
     # TODO: test case with this so that it gets cancelled?
     else:
@@ -57,21 +49,23 @@ async def trio_ctx(
     # this will block until the ``asyncio`` task sends a "first"
     # message, see first line in above func.
     async with (
         to_asyncio.open_channel_from(
             bp_then_error,
-            # raise_after_bp=not bp_before_started,
+            raise_after_bp=not bp_before_started,
         ) as (first, chan),
-        trio.open_nursery() as tn,
+        trio.open_nursery() as n,
     ):
         assert first == 'start'
         if bp_before_started:
-            await tractor.pause()  # trio-side
+            await tractor.breakpoint()
-        await ctx.started(first)  # trio-side
+        await ctx.started(first)
-        tn.start_soon(
+        n.start_soon(
             to_asyncio.run_task,
             aio_sleep_forever,
         )
@@ -79,50 +73,37 @@ async def trio_ctx(
 async def main(
-    bps_all_over: bool = True,
-    # TODO, WHICH OF THESE HAZ BUGZ?
-    cancel_from_root: bool = False,
-    err_from_root: bool = False,
+    bps_all_over: bool = False,
 ) -> None:
-    async with tractor.open_nursery(
-        debug_mode=True,
-        maybe_enable_greenback=True,
-        # loglevel='devx',
-    ) as an:
-        ptl: Portal = await an.start_actor(
+    async with tractor.open_nursery() as n:
+        p = await n.start_actor(
             'aio_daemon',
             enable_modules=[__name__],
             infect_asyncio=True,
             debug_mode=True,
-            # loglevel='cancel',
+            loglevel='cancel',
         )
-        async with ptl.open_context(
+        async with p.open_context(
             trio_ctx,
             bp_before_started=bps_all_over,
         ) as (ctx, first):
             assert first == 'start'
-            # pause in parent to ensure no cross-actor
-            # locking problems exist!
-            await tractor.pause()  # trio-root
-            if cancel_from_root:
-                await ctx.cancel()
-            if err_from_root:
-                assert 0
-            else:
-                await trio.sleep_forever()
+            if bps_all_over:
+                await tractor.breakpoint()
+            # await trio.sleep_forever()
+            await ctx.cancel()
+            assert 0
         # TODO: case where we cancel from trio-side while asyncio task
         # has debugger lock?
-        # await ptl.cancel_actor()
+        # await p.cancel_actor()
 if __name__ == '__main__':


@@ -1,9 +0,0 @@
'''
Reproduce a bug where enabling debug mode for a sub-actor actually causes
a hang on teardown...
'''
import asyncio
import trio
import tractor


@@ -1,5 +1,5 @@
 '''
-Fast fail test with a `Context`.
+Fast fail test with a context.
 Ensure the partially initialized sub-actor process
 doesn't cause a hang on error/cancel of the parent


@@ -4,15 +4,9 @@ import trio
 async def breakpoint_forever():
     "Indefinitely re-enter debugger in child actor."
-    try:
-        while True:
-            yield 'yo'
-            await tractor.pause()
-    except BaseException:
-        tractor.log.get_console_log().exception(
-            'Cancelled while trying to enter pause point!'
-        )
-        raise
+    while True:
+        yield 'yo'
+        await tractor.breakpoint()
 async def name_error():
@@ -21,14 +15,11 @@ async def name_error():
 async def main():
-    '''
-    Test breakpoint in a streaming actor.
-    '''
+    """Test breakpoint in a streaming actor.
+    """
     async with tractor.open_nursery(
         debug_mode=True,
-        loglevel='cancel',
-        # loglevel='devx',
+        loglevel='error',
     ) as n:
         p0 = await n.start_actor('bp_forever', enable_modules=[__name__])
@@ -41,7 +32,7 @@ async def main():
     try:
         await p1.run(name_error)
     except tractor.RemoteActorError as rae:
-        assert rae.boxed_type is NameError
+        assert rae.type is NameError
     async for i in stream:


@@ -10,7 +10,7 @@ async def name_error():
 async def breakpoint_forever():
     "Indefinitely re-enter debugger in child actor."
     while True:
-        await tractor.pause()
+        await tractor.breakpoint()
     # NOTE: if the test never sent 'q'/'quit' commands
     # on the pdb repl, without this checkpoint line the
@@ -45,7 +45,6 @@ async def spawn_until(depth=0):
     )
-# TODO: notes on the new boxed-relayed errors through proxy actors
 async def main():
     """The main ``tractor`` routine.


@@ -40,7 +40,7 @@ async def main():
     """
     async with tractor.open_nursery(
         debug_mode=True,
-        loglevel='devx',
+        # loglevel='cancel',
     ) as n:
     # spawn both actors


@@ -6,7 +6,7 @@ async def breakpoint_forever():
     "Indefinitely re-enter debugger in child actor."
     while True:
         await trio.sleep(0.1)
-        await tractor.pause()
+        await tractor.breakpoint()
 async def name_error():
@@ -38,7 +38,6 @@ async def main():
     """
     async with tractor.open_nursery(
         debug_mode=True,
-        # loglevel='runtime',
     ) as n:
     # Spawn both actors, don't bother with collecting results


@@ -23,6 +23,5 @@ async def main():
     n.start_soon(debug_actor.run, die)
     n.start_soon(crash_boi.run, die)
 if __name__ == '__main__':
     trio.run(main)


@@ -1,56 +0,0 @@
import trio
import tractor
@tractor.context
async def name_error(
ctx: tractor.Context,
):
'''
Raise a `NameError`, catch it and enter `.post_mortem()`, then
expect the `._rpc._invoke()` crash handler to also engage.
'''
try:
getattr(doggypants) # noqa (on purpose)
except NameError:
await tractor.post_mortem()
raise
async def main():
'''
Test 3 `PdbREPL` entries:
- one in the child due to manual `.post_mortem()`,
- another in the child due to runtime RPC crash handling.
- final one here in parent from the RAE.
'''
# XXX NOTE: ideally the REPL arrives at this frame in the parent
# ONE UP FROM the inner ctx block below!
async with tractor.open_nursery(
debug_mode=True,
# loglevel='cancel',
) as an:
p: tractor.Portal = await an.start_actor(
'child',
enable_modules=[__name__],
)
# XXX should raise `RemoteActorError[NameError]`
# AND be the active frame when REPL enters!
try:
async with p.open_context(name_error) as (ctx, first):
assert first
except tractor.RemoteActorError as rae:
assert rae.boxed_type is NameError
# manually handle in root's parent task
await tractor.post_mortem()
raise
else:
raise RuntimeError('IPC ctx should have remote errored!?')
if __name__ == '__main__':
trio.run(main)


@@ -6,44 +6,19 @@ import tractor
 async def main() -> None:
-    # intially unset, no entry.
-    orig_pybp_var: int = os.environ.get('PYTHONBREAKPOINT')
-    assert orig_pybp_var in {None, "0"}
-    async with tractor.open_nursery(
-        debug_mode=True,
-    ) as an:
-        assert an
-        assert (
-            (pybp_var := os.environ['PYTHONBREAKPOINT'])
-            ==
-            'tractor.devx._debug._sync_pause_from_builtin'
-        )
+    async with tractor.open_nursery(debug_mode=True) as an:
+        assert os.environ['PYTHONBREAKPOINT'] == 'tractor._debug._set_trace'
         # TODO: an assert that verifies the hook has indeed been, hooked
         # XD
-        assert (
-            (pybp_hook := sys.breakpointhook)
-            is not tractor.devx._debug._set_trace
-        )
-        print(
-            f'$PYTHONOBREAKPOINT: {pybp_var!r}\n'
-            f'`sys.breakpointhook`: {pybp_hook!r}\n'
-        )
-        breakpoint()  # first bp, tractor hook set.
+        assert sys.breakpointhook is not tractor._debug._set_trace
+        breakpoint()
-    # XXX AFTER EXIT (of actor-runtime) verify the hook is unset..
-    #
-    # YES, this is weird but it's how stdlib docs say to do it..
-    # https://docs.python.org/3/library/sys.html#sys.breakpointhook
-    assert os.environ.get('PYTHONBREAKPOINT') is orig_pybp_var
+    # TODO: an assert that verifies the hook is unhooked..
     assert sys.breakpointhook
-    # now ensure a regular builtin pause still works
-    breakpoint()  # last bp, stdlib hook restored
+    breakpoint()
 if __name__ == '__main__':
     trio.run(main)


@@ -10,7 +10,7 @@ async def main():
     await trio.sleep(0.1)
-    await tractor.pause()
+    await tractor.breakpoint()
     await trio.sleep(0.1)


@@ -2,16 +2,13 @@ import trio
 import tractor
-async def main(
-    registry_addrs: tuple[str, int]|None = None
-):
+async def main():
     async with tractor.open_root_actor(
         debug_mode=True,
-        # loglevel='runtime',
     ):
         while True:
-            await tractor.pause()
+            await tractor.breakpoint()
 if __name__ == '__main__':


@@ -1,83 +0,0 @@
'''
Verify we can dump a `stackscope` tree on a hang.
'''
import os
import signal
import trio
import tractor
@tractor.context
async def start_n_shield_hang(
ctx: tractor.Context,
):
# actor: tractor.Actor = tractor.current_actor()
# sync to parent-side task
await ctx.started(os.getpid())
print('Entering shield sleep..')
with trio.CancelScope(shield=True):
await trio.sleep_forever() # in subactor
# XXX NOTE ^^^ since this shields, we expect
# the zombie reaper (aka T800) to engage on
# SIGINT from the user and eventually hard-kill
# this subprocess!
async def main(
from_test: bool = False,
) -> None:
async with (
tractor.open_nursery(
debug_mode=True,
enable_stack_on_sig=True,
# maybe_enable_greenback=False,
loglevel='devx',
) as an,
):
ptl: tractor.Portal = await an.start_actor(
'hanger',
enable_modules=[__name__],
debug_mode=True,
)
async with ptl.open_context(
start_n_shield_hang,
) as (ctx, cpid):
_, proc, _ = an._children[ptl.chan.uid]
assert cpid == proc.pid
print(
'Yo my child hanging..?\n'
# "i'm a user who wants to see a `stackscope` tree!\n"
)
# XXX simulate the wrapping test's "user actions"
# (i.e. if a human didn't run this manually but wants to
# know what they should do to reproduce test behaviour)
if from_test:
print(
f'Sending SIGUSR1 to {cpid!r}!\n'
)
os.kill(
cpid,
signal.SIGUSR1,
)
# simulate user cancelling program
await trio.sleep(0.5)
os.kill(
os.getpid(),
signal.SIGINT,
)
else:
# actually let user send the ctl-c
await trio.sleep_forever() # in root
if __name__ == '__main__':
trio.run(main)


@@ -1,88 +0,0 @@
import trio
import tractor
async def cancellable_pause_loop(
task_status: trio.TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED
):
with trio.CancelScope() as cs:
task_status.started(cs)
for _ in range(3):
try:
# ON first entry, there is no level triggered
# cancellation yet, so this cp does a parent task
# ctx-switch so that this scope raises for the NEXT
# checkpoint we hit.
await trio.lowlevel.checkpoint()
await tractor.pause()
cs.cancel()
# parent should have called `cs.cancel()` by now
await trio.lowlevel.checkpoint()
except trio.Cancelled:
print('INSIDE SHIELDED PAUSE')
await tractor.pause(shield=True)
else:
# should raise it again, bubbling up to parent
print('BUBBLING trio.Cancelled to parent task-nursery')
await trio.lowlevel.checkpoint()
async def pm_on_cancelled():
async with trio.open_nursery() as tn:
tn.cancel_scope.cancel()
try:
await trio.sleep_forever()
except trio.Cancelled:
# should also raise `Cancelled` since
# we didn't pass `shield=True`.
try:
await tractor.post_mortem(hide_tb=False)
except trio.Cancelled as taskc:
# should enter just fine, in fact it should
# be debugging the internals of the previous
# sin-shield call above Bo
await tractor.post_mortem(
hide_tb=False,
shield=True,
)
raise taskc
else:
raise RuntimeError('Dint cancel as expected!?')
async def cancelled_before_pause(
):
'''
Verify that using a shielded pause works despite surrounding
cancellation called state in the calling task.
'''
async with trio.open_nursery() as tn:
cs: trio.CancelScope = await tn.start(cancellable_pause_loop)
await trio.sleep(0.1)
assert cs.cancelled_caught
await pm_on_cancelled()
async def main():
async with tractor.open_nursery(
debug_mode=True,
) as n:
portal: tractor.Portal = await n.run_in_actor(
cancelled_before_pause,
)
await portal.result()
# ensure the same works in the root actor!
await pm_on_cancelled()
if __name__ == '__main__':
trio.run(main)


@@ -4,9 +4,9 @@ import trio
 async def gen():
     yield 'yo'
-    await tractor.pause()
+    await tractor.breakpoint()
     yield 'yo'
-    await tractor.pause()
+    await tractor.breakpoint()
 @tractor.context
@@ -15,7 +15,7 @@ async def just_bp(
 ) -> None:
     await ctx.started()
-    await tractor.pause()
+    await tractor.breakpoint()
     # TODO: bps and errors in this call..
     async for val in gen():


@@ -3,20 +3,17 @@ import tractor
 async def breakpoint_forever():
-    '''
-    Indefinitely re-enter debugger in child actor.
-    '''
+    """Indefinitely re-enter debugger in child actor.
+    """
     while True:
         await trio.sleep(0.1)
-        await tractor.pause()
+        await tractor.breakpoint()
 async def main():
     async with tractor.open_nursery(
         debug_mode=True,
-        loglevel='cancel',
     ) as n:
         portal = await n.run_in_actor(


@@ -3,26 +3,16 @@ import tractor
 async def name_error():
-    getattr(doggypants)  # noqa (on purpose)
+    getattr(doggypants)
 async def main():
     async with tractor.open_nursery(
         debug_mode=True,
-        # loglevel='transport',
-    ) as an:
-        # TODO: ideally the REPL arrives at this frame in the parent,
-        # ABOVE the @api_frame of `Portal.run_in_actor()` (which
-        # should eventually not even be a portal method ... XD)
-        # await tractor.pause()
-        p: tractor.Portal = await an.run_in_actor(name_error)
-        # with this style, should raise on this line
-        await p.result()
-        # with this alt style should raise at `open_nusery()`
-        # return await p.result()
+    ) as n:
+        portal = await n.run_in_actor(name_error)
+        await portal.result()
 if __name__ == '__main__':


@@ -1,169 +0,0 @@
from functools import partial
import time
import trio
import tractor
# TODO: only import these when not running from test harness?
# can we detect `pexpect` usage maybe?
# from tractor.devx._debug import (
# get_lock,
# get_debug_req,
# )
def sync_pause(
use_builtin: bool = False,
error: bool = False,
hide_tb: bool = True,
pre_sleep: float|None = None,
):
if pre_sleep:
time.sleep(pre_sleep)
if use_builtin:
breakpoint(hide_tb=hide_tb)
else:
# TODO: maybe for testing some kind of cm style interface
# where the `._set_trace()` call doesn't happen until block
# exit?
# assert get_lock().ctx_in_debug is None
# assert get_debug_req().repl is None
tractor.pause_from_sync()
# assert get_debug_req().repl is None
if error:
raise RuntimeError('yoyo sync code error')
@tractor.context
async def start_n_sync_pause(
ctx: tractor.Context,
):
actor: tractor.Actor = tractor.current_actor()
# sync to parent-side task
await ctx.started()
print(f'Entering `sync_pause()` in subactor: {actor.uid}\n')
sync_pause()
print(f'Exited `sync_pause()` in subactor: {actor.uid}\n')
async def main() -> None:
async with (
tractor.open_nursery(
debug_mode=True,
maybe_enable_greenback=True,
enable_stack_on_sig=True,
# loglevel='warning',
# loglevel='devx',
) as an,
trio.open_nursery() as tn,
):
# just from root task
sync_pause()
p: tractor.Portal = await an.start_actor(
'subactor',
enable_modules=[__name__],
# infect_asyncio=True,
debug_mode=True,
)
# TODO: 3 sub-actor usage cases:
# -[x] via a `.open_context()`
# -[ ] via a `.run_in_actor()` call
# -[ ] via a `.run()`
# -[ ] via a `.to_thread.run_sync()` in subactor
async with p.open_context(
start_n_sync_pause,
) as (ctx, first):
assert first is None
# TODO: handle bg-thread-in-root-actor special cases!
#
# there are a couple very subtle situations possible here
# and they are likely to become more important as cpython
# moves to support no-GIL.
#
# Cases:
# 1. root-actor bg-threads that call `.pause_from_sync()`
# whilst an in-tree subactor also is using ` .pause()`.
# |_ since the root-actor bg thread can not
# `Lock._debug_lock.acquire_nowait()` without running
# a `trio.Task`, AND because the
# `PdbREPL.set_continue()` is called from that
# bg-thread, we can not `._debug_lock.release()`
# either!
# |_ this results in no actor-tree `Lock` being used
# on behalf of the bg-thread and thus the subactor's
# task and the thread trying to to use stdio
# simultaneously which results in the classic TTY
# clobbering!
#
# 2. mutiple sync-bg-threads that call
# `.pause_from_sync()` where one is scheduled via
# `Nursery.start_soon(to_thread.run_sync)` in a bg
# task.
#
# Due to the GIL, the threads never truly try to step
# through the REPL simultaneously, BUT their `logging`
# and traceback outputs are interleaved since the GIL
# (seemingly) on every REPL-input from the user
# switches threads..
#
# Soo, the context switching semantics of the GIL
# result in a very confusing and messy interaction UX
# since eval and (tb) print output is NOT synced to
# each REPL-cycle (like we normally make it via
# a `.set_continue()` callback triggering the
# `Lock.release()`). Ideally we can solve this
# usability issue NOW because this will of course be
# that much more important when eventually there is no
# GIL!
# XXX should cause double REPL entry and thus TTY
# clobbering due to case 1. above!
tn.start_soon(
partial(
trio.to_thread.run_sync,
partial(
sync_pause,
use_builtin=False,
# pre_sleep=0.5,
),
abandon_on_cancel=True,
thread_name='start_soon_root_bg_thread',
)
)
await tractor.pause()
# XXX should cause double REPL entry and thus TTY
# clobbering due to case 2. above!
await trio.to_thread.run_sync(
partial(
sync_pause,
# NOTE this already works fine since in the new
# thread the `breakpoint()` built-in is never
# overloaded, thus NO locking is used, HOWEVER
# the case 2. from above still exists!
use_builtin=True,
),
# TODO: with this `False` we can hang!??!
# abandon_on_cancel=False,
abandon_on_cancel=True,
thread_name='inline_root_bg_thread',
)
await ctx.cancel()
# TODO: case where we cancel from trio-side while asyncio task
# has debugger lock?
await p.cancel_actor()
if __name__ == '__main__':
trio.run(main)


@@ -1,11 +1,6 @@
 import time
 import trio
 import tractor
-from tractor import (
-    ActorNursery,
-    MsgStream,
-    Portal,
-)

 # this is the first 2 actors, streamer_1 and streamer_2
@@ -17,18 +12,14 @@ async def stream_data(seed):

 # this is the third actor; the aggregator
 async def aggregate(seed):
-    '''
-    Ensure that the two streams we receive match but only stream
+    """Ensure that the two streams we receive match but only stream
     a single set of values to the parent.
-
-    '''
-    an: ActorNursery
-    async with tractor.open_nursery() as an:
-        portals: list[Portal] = []
+    """
+    async with tractor.open_nursery() as nursery:
+        portals = []
         for i in range(1, 3):
-            # fork/spawn call
-            portal = await an.start_actor(
+            # fork point
+            portal = await nursery.start_actor(
                 name=f'streamer_{i}',
                 enable_modules=[__name__],
             )
@@ -52,11 +43,7 @@ async def aggregate(seed):
         async with trio.open_nursery() as n:
             for portal in portals:
-                n.start_soon(
-                    push_to_chan,
-                    portal,
-                    send_chan.clone(),
-                )
+                n.start_soon(push_to_chan, portal, send_chan.clone())

             # close this local task's reference to send side
             await send_chan.aclose()
@@ -73,36 +60,26 @@ async def aggregate(seed):
         print("FINISHED ITERATING in aggregator")

-        await an.cancel()
+        await nursery.cancel()
         print("WAITING on `ActorNursery` to finish")
     print("AGGREGATOR COMPLETE!")


-async def main() -> list[int]:
-    '''
-    This is the "root" actor's main task's entrypoint.
-
-    By default (and if not otherwise specified) that root process
-    also acts as a "registry actor" / "registrar" on the localhost
-    for the purposes of multi-actor "service discovery".
-
-    '''
-    # yes, a nursery which spawns `trio`-"actors" B)
-    an: ActorNursery
-    async with tractor.open_nursery(
-        loglevel='cancel',
-        # debug_mode=True,
-    ) as an:
+# this is the main actor and *arbiter*
+async def main():
+    # a nursery which spawns "actors"
+    async with tractor.open_nursery(
+        arbiter_addr=('127.0.0.1', 1616)
+    ) as nursery:

         seed = int(1e3)
         pre_start = time.time()

-        portal: Portal = await an.start_actor(
+        portal = await nursery.start_actor(
             name='aggregator',
             enable_modules=[__name__],
         )

-        stream: MsgStream
         async with portal.open_stream_from(
             aggregate,
             seed=seed,
@@ -111,12 +88,11 @@ async def main() -> list[int]:
             start = time.time()
             # the portal call returns exactly what you'd expect
             # as if the remote "aggregate" function was called locally
-            result_stream: list[int] = []
+            result_stream = []
             async for value in stream:
                 result_stream.append(value)

-        cancelled: bool = await portal.cancel_actor()
-        assert cancelled
+        await portal.cancel_actor()

         print(f"STREAM TIME = {time.time() - start}")
         print(f"STREAM + SPAWN TIME = {time.time() - pre_start}")

View File

@@ -8,10 +8,7 @@ This uses no extra threads, fancy semaphores or futures; all we need
 is ``tractor``'s channels.

 """
-from contextlib import (
-    asynccontextmanager as acm,
-    aclosing,
-)
+from contextlib import asynccontextmanager
 from typing import Callable
 import itertools
 import math
@@ -19,6 +16,7 @@ import time

 import tractor
 import trio
+from async_generator import aclosing


 PRIMES = [
@@ -46,7 +44,7 @@ async def is_prime(n):
     return True


-@acm
+@asynccontextmanager
 async def worker_pool(workers=4):
     """Though it's a trivial special case for ``tractor``, the well
     known "worker pool" seems to be the defacto "but, I want this

View File

@@ -3,18 +3,20 @@ import trio
 import tractor


-async def sleepy_jane() -> None:
-    uid: tuple = tractor.current_actor().uid
+async def sleepy_jane():
+    uid = tractor.current_actor().uid
     print(f'Yo i am actor {uid}')
     await trio.sleep_forever()


 async def main():
     '''
-    Spawn a flat actor cluster, with one process per detected core.
+    Spawn a flat actor cluster, with one process per
+    detected core.

     '''
     portal_map: dict[str, tractor.Portal]
+    results: dict[str, str]

     # look at this hip new syntax!
     async with (
@@ -23,16 +25,11 @@ async def main():
             modules=[__name__]
         ) as portal_map,

-        trio.open_nursery(
-            strict_exception_groups=False,
-        ) as tn,
+        trio.open_nursery() as n,
     ):
         for (name, portal) in portal_map.items():
-            tn.start_soon(
-                portal.run,
-                sleepy_jane,
-            )
+            n.start_soon(portal.run, sleepy_jane)

         await trio.sleep(0.5)
@@ -44,4 +41,4 @@ if __name__ == '__main__':
     try:
         trio.run(main)
     except KeyboardInterrupt:
-        print('trio cancelled by KBI')
+        pass
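
The `strict_exception_groups=False` tweak above relates to newer `trio` wrapping even a single child-task failure in an `ExceptionGroup`. A minimal sketch of the strict behavior (assumes Python 3.11+ for `except*` and a `trio` version where strict grouping is the default):

```
import trio

async def boom() -> None:
    raise ValueError('oops')

async def main() -> None:
    try:
        async with trio.open_nursery() as tn:
            tn.start_soon(boom)
    except* ValueError as eg:
        # strict mode: even a lone failure arrives as a group
        print(f'caught: {eg.exceptions}')

trio.run(main)
```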

View File

@@ -13,7 +13,7 @@ async def simple_rpc(
     '''
     # signal to parent that we're up much like
-    # ``trio.TaskStatus.started()``
+    # ``trio_typing.TaskStatus.started()``
     await ctx.started(data + 1)

     async with ctx.open_stream() as stream:

View File

@@ -9,7 +9,7 @@ async def main(service_name):
     async with tractor.open_nursery() as an:
         await an.start_actor(service_name)

-    async with tractor.get_registry('127.0.0.1', 1616) as portal:
+    async with tractor.get_arbiter('127.0.0.1', 1616) as portal:
         print(f"Arbiter is listening on {portal.channel}")

     async with tractor.wait_for_actor(service_name) as sockaddr:

View File

@@ -1,7 +0,0 @@
Drop `trio.Process.aclose()` usage, copy into our spawning code.
The details are laid out in https://github.com/goodboy/tractor/issues/330.
`trio` changed its process-running code quite some time ago; this just
copies out the small bit we needed (from the old `.aclose()`) for hard
kills where a soft runtime cancel request fails and our "zombie killer"
implementation kicks in.
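
A rough sketch of the soft-cancel-then-hard-kill reaping described above; this is illustrative only (the `hard_kill` name is made up) and not the actual `tractor._spawn` code:

```
import trio

async def hard_kill(proc: trio.Process) -> None:
    proc.terminate()  # polite soft cancel: SIGTERM
    with trio.move_on_after(3) as cs:
        await proc.wait()
    if cs.cancelled_caught:
        # the "zombie killer": soft cancel timed out, escalate
        proc.kill()
        await proc.wait()

async def main() -> None:
    proc = await trio.lowlevel.open_process(['sleep', '10'])
    await hard_kill(proc)

trio.run(main)
```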

View File

@@ -1,15 +0,0 @@
Switch to using the fork & fix of `pdb++`, `pdbp`:
https://github.com/mdmintz/pdbp
Allows us to sidestep a variety of issues that aren't being maintained
in the upstream project thanks to the hard work of @mdmintz!
We also include some default settings adjustments as per recent
development on the fork:
- sticky mode is still turned on by default but now activates when
  using the `ll` repl command.
- turn off line truncation by default to avoid inter-line gaps when
  resizing the terminal during use.
- when using the backtrace cmd either by `w` or `bt`, the config
  automatically switches to non-sticky mode.

View File

@@ -1,18 +0,0 @@
First generate a built dist:
```
python -m pip install --upgrade build
python -m build --sdist --outdir dist/alpha5/
```
Then try a test ``pypi`` upload:
```
python -m twine upload --repository testpypi dist/alpha5/*
```
Then push to `pypi` for realz:
```
python -m twine upload dist/alpha5/*
```

View File

@@ -1,111 +1,3 @@
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
# ------ build-system ------
[project]
name = "tractor"
version = "0.1.0a6dev0"
description = 'structured concurrent `trio`-"actors"'
authors = [{ name = "Tyler Goodlet", email = "goodboy_foss@protonmail.com" }]
requires-python = ">= 3.11"
readme = "docs/README.rst"
license = "AGPL-3.0-or-later"
keywords = [
"trio",
"async",
"concurrency",
"structured concurrency",
"actor model",
"distributed",
"multiprocessing",
]
classifiers = [
"Development Status :: 3 - Alpha",
"Operating System :: POSIX :: Linux",
"Framework :: Trio",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Topic :: System :: Distributed Computing",
]
dependencies = [
# trio runtime and friends
# (poetry) proper range specs,
# https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/#id5
# TODO, for 3.13 we must go to `0.27` which means we have to
# disable strict egs or port to handling them internally!
"trio>0.27",
"tricycle>=0.4.1,<0.5",
"wrapt>=1.16.0,<2",
"colorlog>=6.8.2,<7",
# built-in multi-actor `pdb` REPL
"pdbp>=1.6,<2", # windows only (from `pdbp`)
# typed IPC msging
"msgspec>=0.19.0",
]
# ------ project ------
[dependency-groups]
dev = [
# test suite
# TODO: maybe some of these layout choices?
# https://docs.pytest.org/en/8.0.x/explanation/goodpractices.html#choosing-a-test-layout-import-rules
"pytest>=8.3.5",
"pexpect>=4.9.0,<5",
# `tractor.devx` tooling
"greenback>=1.2.1,<2",
"stackscope>=0.2.2,<0.3",
"pyperclip>=1.9.0",
"prompt-toolkit>=3.0.50",
"xonsh>=0.19.2",
]
# TODO, add these with sane versions; were originally in
# `requirements-docs.txt`..
# docs = [
# "sphinx>="
# "sphinx_book_theme>="
# ]
# ------ dependency-groups ------
# ------ dependency-groups ------
[tool.uv.sources]
# XXX NOTE, only for @goodboy's hacking on `pprint(sort_dicts=False)`
# for the `pp` alias..
# pdbp = { path = "../pdbp", editable = true }
# ------ tool.uv.sources ------
# TODO, distributed (multi-host) extensions
# linux kernel networking
# 'pyroute2
# ------ tool.uv.sources ------
[tool.uv]
# XXX NOTE, prefer the sys python bc apparently the distis from
# `astral` are built in a way that breaks `pdbp`+`tabcompleter`'s
# likely due to linking against `libedit` over `readline`..
# |_https://docs.astral.sh/uv/concepts/python-versions/#managed-python-distributions
# |_https://gregoryszorc.com/docs/python-build-standalone/main/quirks.html#use-of-libedit-on-linux
#
# https://docs.astral.sh/uv/reference/settings/#python-preference
python-preference = 'system'
# ------ tool.uv ------
[tool.hatch.build.targets.sdist]
include = ["tractor"]
[tool.hatch.build.targets.wheel]
include = ["tractor"]
# ------ tool.hatch ------
[tool.towncrier]
package = "tractor"
filename = "NEWS.rst"
@@ -115,44 +7,22 @@ title_format = "tractor {version} ({project_date})"
template = "nooz/_template.rst"
all_bullets = true
[[tool.towncrier.type]]
directory = "feature"
name = "Features"
showcontent = true
[[tool.towncrier.type]]
directory = "bugfix"
name = "Bug Fixes"
showcontent = true
[[tool.towncrier.type]]
directory = "doc"
name = "Improved Documentation"
showcontent = true
[[tool.towncrier.type]]
directory = "trivial"
name = "Trivial/Internal Changes"
showcontent = true
# ------ tool.towncrier ------
[tool.pytest.ini_options]
minversion = '6.0'
testpaths = [
'tests'
]
addopts = [
# TODO: figure out why this isn't working..
'--rootdir=./tests',
'--import-mode=importlib',
# don't show frickin captured logs AGAIN in the report..
'--show-capture=no',
]
log_cli = false
# TODO: maybe some of these layout choices?
# https://docs.pytest.org/en/8.0.x/explanation/goodpractices.html#choosing-a-test-layout-import-rules
# pythonpath = "src"
# ------ tool.pytest ------

View File

@@ -1,8 +0,0 @@
# vim: ft=ini
# pytest.ini for tractor
[pytest]
# don't show frickin captured logs AGAIN in the report..
addopts = --show-capture='no'
log_cli = false
; minversion = 6.0

View File

@@ -0,0 +1,2 @@
sphinx
sphinx_book_theme

View File

@@ -0,0 +1,8 @@
pytest
pytest-trio
pytest-timeout
pdbpp
mypy
trio_typing
pexpect
towncrier

View File

@@ -1,82 +0,0 @@
# from default `ruff.toml` @
# https://docs.astral.sh/ruff/configuration/
# Exclude a variety of commonly ignored directories.
exclude = [
".bzr",
".direnv",
".eggs",
".git",
".git-rewrite",
".hg",
".ipynb_checkpoints",
".mypy_cache",
".nox",
".pants.d",
".pyenv",
".pytest_cache",
".pytype",
".ruff_cache",
".svn",
".tox",
".venv",
".vscode",
"__pypackages__",
"_build",
"buck-out",
"build",
"dist",
"node_modules",
"site-packages",
"venv",
]
# Same as Black.
line-length = 88
indent-width = 4
# Assume Python 3.11
target-version = "py311"
[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or
# McCabe complexity (`C901`) by default.
select = ["E4", "E7", "E9", "F"]
ignore = [
'E402', # https://docs.astral.sh/ruff/rules/module-import-not-at-top-of-file/
]
# Allow fix for all enabled rules (when `--fix`) is provided.
fixable = ["ALL"]
unfixable = []
# Allow unused variables when underscore-prefixed.
# dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
[format]
# Use single quotes in `ruff format`.
quote-style = "single"
# Like Black, indent with spaces, rather than tabs.
indent-style = "space"
# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false
# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"
# Enable auto-formatting of code examples in docstrings. Markdown,
# reStructuredText code/literal blocks and doctests are all supported.
#
# This is currently disabled by default, but it is planned for this
# to be opt-out in the future.
docstring-code-format = false
# Set the line length limit used when formatting code snippets in
# docstrings.
#
# This only has an effect when the `docstring-code-format` setting is
# enabled.
docstring-code-line-length = "dynamic"

setup.py (new file, mode 100755, 102 lines)
View File

@@ -0,0 +1,102 @@
#!/usr/bin/env python
#
# tractor: structured concurrent "actors".
#
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
from setuptools import setup
with open('docs/README.rst', encoding='utf-8') as f:
readme = f.read()
setup(
name="tractor",
version='0.1.0a6dev0', # alpha zone
description='structured concurrent "actors"',
long_description=readme,
license='AGPLv3',
author='Tyler Goodlet',
maintainer='Tyler Goodlet',
maintainer_email='jgbt@protonmail.com',
url='https://github.com/goodboy/tractor',
platforms=['linux', 'windows'],
packages=[
'tractor',
'tractor.experimental',
'tractor.trionics',
],
install_requires=[
# trio related
# proper range spec:
# https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/#id5
'trio >= 0.22',
'async_generator',
'trio_typing',
'exceptiongroup',
# tooling
'tricycle',
'trio_typing',
# tooling
'colorlog',
'wrapt',
# serialization
'msgspec',
# debug mode REPL
'pdbp',
# pip ref docs on these specs:
# https://pip.pypa.io/en/stable/reference/requirement-specifiers/#examples
# and pep:
# https://peps.python.org/pep-0440/#version-specifiers
# windows deps workaround for ``pdbpp``
# https://github.com/pdbpp/pdbpp/issues/498
# https://github.com/pdbpp/fancycompleter/issues/37
'pyreadline3 ; platform_system == "Windows"',
],
tests_require=['pytest'],
python_requires=">=3.9",
keywords=[
'trio',
'async',
'concurrency',
'structured concurrency',
'actor model',
'distributed',
'multiprocessing'
],
classifiers=[
"Development Status :: 3 - Alpha",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Framework :: Trio",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Topic :: System :: Distributed Computing",
],
)


View File

@@ -7,19 +7,94 @@ import os
 import random
 import signal
 import platform
+import pathlib
 import time
+import inspect
+from functools import partial, wraps

 import pytest
+import trio
 import tractor
-from tractor._testing import (
-    examples_dir as examples_dir,
-    tractor_test as tractor_test,
-    expect_ctxc as expect_ctxc,
-)

-# TODO: include wtv plugin(s) we build in `._testing.pytest`?
 pytest_plugins = ['pytester']


+def tractor_test(fn):
+    """
+    Use:
+    @tractor_test
+    async def test_whatever():
+        await ...
+
+    If fixtures:
+
+        - ``arb_addr`` (a socket addr tuple where arbiter is listening)
+        - ``loglevel`` (logging level passed to tractor internals)
+        - ``start_method`` (subprocess spawning backend)
+
+    are defined in the `pytest` fixture space they will be automatically
+    injected to tests declaring these funcargs.
+    """
+    @wraps(fn)
+    def wrapper(
+        *args,
+        loglevel=None,
+        arb_addr=None,
+        start_method=None,
+        **kwargs
+    ):
+        # __tracebackhide__ = True
+
+        if 'arb_addr' in inspect.signature(fn).parameters:
+            # injects test suite fixture value to test as well
+            # as `run()`
+            kwargs['arb_addr'] = arb_addr
+
+        if 'loglevel' in inspect.signature(fn).parameters:
+            # allows test suites to define a 'loglevel' fixture
+            # that activates the internal logging
+            kwargs['loglevel'] = loglevel
+
+        if start_method is None:
+            if platform.system() == "Windows":
+                start_method = 'trio'
+
+        if 'start_method' in inspect.signature(fn).parameters:
+            # set of subprocess spawning backends
+            kwargs['start_method'] = start_method
+
+        if kwargs:
+            # use explicit root actor start
+            async def _main():
+                async with tractor.open_root_actor(
+                    # **kwargs,
+                    arbiter_addr=arb_addr,
+                    loglevel=loglevel,
+                    start_method=start_method,
+
+                    # TODO: only enable when pytest is passed --pdb
+                    # debug_mode=True,
+                ):
+                    await fn(*args, **kwargs)
+            main = _main
+
+        else:
+            # use implicit root actor start
+            main = partial(fn, *args, **kwargs)
+
+        return trio.run(main)
+
+    return wrapper
+
+
+_arb_addr = '127.0.0.1', random.randint(1000, 9999)


 # Sending signal.SIGINT on subprocess fails on windows. Use CTRL_* alternatives
 if platform.system() == 'Windows':
     _KILL_SIGNAL = signal.CTRL_BREAK_EVENT
@@ -39,48 +114,41 @@ no_windows = pytest.mark.skipif(
 )


+def repodir() -> pathlib.Path:
+    '''
+    Return the abspath to the repo directory.
+    '''
+    # 2 parents up to step up through tests/<repo_dir>
+    return pathlib.Path(__file__).parent.parent.absolute()
+
+
+def examples_dir() -> pathlib.Path:
+    '''
+    Return the abspath to the examples directory as `pathlib.Path`.
+    '''
+    return repodir() / 'examples'
+
+
 def pytest_addoption(parser):
     parser.addoption(
-        "--ll",
-        action="store",
-        dest='loglevel',
+        "--ll", action="store", dest='loglevel',
         default='ERROR', help="logging level to set when testing"
     )
     parser.addoption(
-        "--spawn-backend",
-        action="store",
-        dest='spawn_backend',
+        "--spawn-backend", action="store", dest='spawn_backend',
         default='trio',
         help="Processing spawning backend to use for test run",
     )
-    parser.addoption(
-        "--tpdb",
-        "--debug-mode",
-        action="store_true",
-        dest='tractor_debug_mode',
-        # default=False,
-        help=(
-            'Enable a flag that can be used by tests to to set the '
-            '`debug_mode: bool` for engaging the internal '
-            'multi-proc debugger sys.'
-        ),
-    )


 def pytest_configure(config):
     backend = config.option.spawn_backend
     tractor._spawn.try_set_start_method(backend)


-@pytest.fixture(scope='session')
-def debug_mode(request):
-    debug_mode: bool = request.config.option.tractor_debug_mode
-    # if debug_mode:
-    #     breakpoint()
-    return debug_mode


 @pytest.fixture(scope='session', autouse=True)
 def loglevel(request):
     orig = tractor.log._default_loglevel
@@ -95,46 +163,19 @@ def spawn_backend(request) -> str:
     return request.config.option.spawn_backend


-# @pytest.fixture(scope='function', autouse=True)
-# def debug_enabled(request) -> str:
-#     from tractor import _state
-#     if _state._runtime_vars['_debug_mode']:
-#         breakpoint()

 _ci_env: bool = os.environ.get('CI', False)


 @pytest.fixture(scope='session')
 def ci_env() -> bool:
-    '''
-    Detect CI envoirment.
-
-    '''
+    """Detect CI envoirment.
+    """
     return _ci_env


-# TODO: also move this to `._testing` for now?
-# -[ ] possibly generalize and re-use for multi-tree spawning
-#     along with the new stuff for multi-addrs in distribute_dis
-#     branch?
-#
-# choose randomly at import time
-_reg_addr: tuple[str, int] = (
-    '127.0.0.1',
-    random.randint(1000, 9999),
-)


 @pytest.fixture(scope='session')
-def reg_addr() -> tuple[str, int]:
-
-    # globally override the runtime to the per-test-session-dynamic
-    # addr so that all tests never conflict with any other actor
-    # tree using the default.
-    from tractor import _root
-    _root._default_lo_addrs = [_reg_addr]
-
-    return _reg_addr
+def arb_addr():
+    return _arb_addr


 def pytest_generate_tests(metafunc):
@@ -159,18 +200,6 @@ def pytest_generate_tests(metafunc):
     metafunc.parametrize("start_method", [spawn_backend], scope='module')


-# TODO: a way to let test scripts (like from `examples/`)
-# guarantee they won't registry addr collide!
-# @pytest.fixture
-# def open_test_runtime(
-#     reg_addr: tuple,
-# ) -> AsyncContextManager:
-#     return partial(
-#         tractor.open_nursery,
-#         registry_addrs=[reg_addr],
-#     )


 def sig_prog(proc, sig):
     "Kill the actor-process with ``sig``."
     proc.send_signal(sig)
@@ -183,40 +212,34 @@ def sig_prog(proc, sig):
     assert ret


-# TODO: factor into @cm and move to `._testing`?
 @pytest.fixture
 def daemon(
     loglevel: str,
     testdir,
-    reg_addr: tuple[str, int],
+    arb_addr: tuple[str, int],
 ):
     '''
-    Run a daemon root actor as a separate actor-process tree and
-    "remote registrar" for discovery-protocol related tests.
+    Run a daemon actor as a "remote arbiter".

     '''
     if loglevel in ('trace', 'debug'):
-        # XXX: too much logging will lock up the subproc (smh)
-        loglevel: str = 'info'
+        # too much logging will lock up the subproc (smh)
+        loglevel = 'info'

-    code: str = (
-        "import tractor; "
-        "tractor.run_daemon([], registry_addrs={reg_addrs}, loglevel={ll})"
-    ).format(
-        reg_addrs=str([reg_addr]),
-        ll="'{}'".format(loglevel) if loglevel else None,
-    )
-    cmd: list[str] = [
-        sys.executable,
-        '-c', code,
-    ]
+    cmdargs = [
+        sys.executable, '-c',
+        "import tractor; tractor.run_daemon([], registry_addr={}, loglevel={})"
+        .format(
+            arb_addr,
+            "'{}'".format(loglevel) if loglevel else None)
+    ]
-    kwargs = {}
+    kwargs = dict()
     if platform.system() == 'Windows':
         # without this, tests hang on windows forever
         kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP

     proc = testdir.popen(
-        cmd,
+        cmdargs,
         stdout=subprocess.PIPE,
         stderr=subprocess.PIPE,
         **kwargs,


View File

@@ -1,243 +0,0 @@
'''
`tractor.devx.*` tooling sub-pkg test space.
'''
import time
from typing import (
Callable,
)
import pytest
from pexpect.exceptions import (
TIMEOUT,
)
from pexpect.spawnbase import SpawnBase
from tractor._testing import (
mk_cmd,
)
from tractor.devx._debug import (
_pause_msg as _pause_msg,
_crash_msg as _crash_msg,
_repl_fail_msg as _repl_fail_msg,
_ctlc_ignore_header as _ctlc_ignore_header,
)
from ..conftest import (
_ci_env,
)
@pytest.fixture
def spawn(
start_method,
testdir: pytest.Pytester,
reg_addr: tuple[str, int],
) -> Callable[[str], None]:
'''
Use the `pexpect` module shipped via `testdir.spawn()` to
run an `./examples/..` script by name.
'''
if start_method != 'trio':
pytest.skip(
'`pexpect` based tests only supported on `trio` backend'
)
def unset_colors():
'''
Python 3.13 introduced colored tracebacks that break patt
matching,
https://docs.python.org/3/using/cmdline.html#envvar-PYTHON_COLORS
https://docs.python.org/3/using/cmdline.html#using-on-controlling-color
'''
import os
os.environ['PYTHON_COLORS'] = '0'
def _spawn(
cmd: str,
**mkcmd_kwargs,
):
unset_colors()
return testdir.spawn(
cmd=mk_cmd(
cmd,
**mkcmd_kwargs,
),
expect_timeout=3,
# preexec_fn=unset_colors,
# ^TODO? get `pytest` core to expose underlying
# `pexpect.spawn()` stuff?
)
# such that test-dep can pass input script name.
return _spawn
@pytest.fixture(
params=[False, True],
ids='ctl-c={}'.format,
)
def ctlc(
request,
ci_env: bool,
) -> bool:
use_ctlc = request.param
node = request.node
markers = node.own_markers
for mark in markers:
if mark.name == 'has_nested_actors':
pytest.skip(
f'Test {node} has nested actors and fails with Ctrl-C.\n'
f'The test can sometimes run fine locally but until'
' we solve' 'this issue this CI test will be xfail:\n'
'https://github.com/goodboy/tractor/issues/320'
)
if mark.name == 'ctlcs_bish':
pytest.skip(
f'Test {node} prolly uses something from the stdlib (namely `asyncio`..)\n'
f'The test and/or underlying example script can *sometimes* run fine '
f'locally but more then likely until the cpython peeps get their sh#$ together, '
f'this test will definitely not behave like `trio` under SIGINT..\n'
)
if use_ctlc:
# XXX: disable pygments highlighting for auto-tests
# since some envs (like actions CI) will struggle
# the the added color-char encoding..
from tractor.devx._debug import TractorConfig
TractorConfig.use_pygements = False
yield use_ctlc
def expect(
child,
# normally a `pdb` prompt by default
patt: str,
**kwargs,
) -> None:
'''
Expect wrapper that prints last seen console
data before failing.
'''
try:
child.expect(
patt,
**kwargs,
)
except TIMEOUT:
before = str(child.before.decode())
print(before)
raise
PROMPT = r"\(Pdb\+\)"
def in_prompt_msg(
child: SpawnBase,
parts: list[str],
pause_on_false: bool = False,
err_on_false: bool = False,
print_prompt_on_false: bool = True,
) -> bool:
'''
Predicate check if (the prompt's) std-streams output has all
`str`-parts in it.
Can be used in test asserts for bulk matching expected
log/REPL output for a given `pdb` interact point.
'''
__tracebackhide__: bool = False
before: str = str(child.before.decode())
for part in parts:
if part not in before:
if pause_on_false:
import pdbp
pdbp.set_trace()
if print_prompt_on_false:
print(before)
if err_on_false:
raise ValueError(
f'Could not find pattern in `before` output?\n'
f'part: {part!r}\n'
)
return False
return True
# TODO: todo support terminal color-chars stripping so we can match
# against call stack frame output from the the 'll' command the like!
# -[ ] SO answer for stipping ANSI codes: https://stackoverflow.com/a/14693789
def assert_before(
child: SpawnBase,
patts: list[str],
**kwargs,
) -> None:
__tracebackhide__: bool = False
assert in_prompt_msg(
child=child,
parts=patts,
# since this is an "assert" helper ;)
err_on_false=True,
**kwargs
)
def do_ctlc(
child,
count: int = 3,
delay: float = 0.1,
patt: str|None = None,
# expect repl UX to reprint the prompt after every
# ctrl-c send.
# XXX: no idea but, in CI this never seems to work even on 3.10 so
# needs some further investigation potentially...
expect_prompt: bool = not _ci_env,
) -> str|None:
before: str|None = None
# make sure ctl-c sends don't do anything but repeat output
for _ in range(count):
time.sleep(delay)
child.sendcontrol('c')
# TODO: figure out why this makes CI fail..
# if you run this test manually it works just fine..
if expect_prompt:
time.sleep(delay)
child.expect(PROMPT)
before = str(child.before.decode())
time.sleep(delay)
if patt:
# should see the last line on console
assert patt in before
# return the console content up to the final prompt
return before

View File

@@ -1,381 +0,0 @@
'''
That "foreign loop/thread" debug REPL support better ALSO WORK!
Same as `test_native_pause.py`.
All these tests can be understood (somewhat) by running the
equivalent `examples/debugging/` scripts manually.
'''
from contextlib import (
contextmanager as cm,
)
# from functools import partial
# import itertools
import time
# from typing import (
# Iterator,
# )
import pytest
from pexpect.exceptions import (
TIMEOUT,
EOF,
)
from .conftest import (
# _ci_env,
do_ctlc,
PROMPT,
# expect,
in_prompt_msg,
assert_before,
_pause_msg,
_crash_msg,
_ctlc_ignore_header,
# _repl_fail_msg,
)
@cm
def maybe_expect_timeout(
ctlc: bool = False,
) -> None:
try:
yield
except TIMEOUT:
# breakpoint()
if ctlc:
pytest.xfail(
'Some kinda redic threading SIGINT bug i think?\n'
'See the notes in `examples/debugging/sync_bp.py`..\n'
)
raise
@pytest.mark.ctlcs_bish
def test_pause_from_sync(
spawn,
ctlc: bool,
):
'''
Verify we can use the `pdbp` REPL from sync functions AND from
any thread spawned with `trio.to_thread.run_sync()`.
`examples/debugging/sync_bp.py`
'''
child = spawn('sync_bp')
# first `sync_pause()` after nurseries open
child.expect(PROMPT)
assert_before(
child,
[
# pre-prompt line
_pause_msg,
"<Task '__main__.main'",
"('root'",
]
)
if ctlc:
do_ctlc(child)
# ^NOTE^ subactor not spawned yet; don't need extra delay.
child.sendline('c')
# first `await tractor.pause()` inside `p.open_context()` body
child.expect(PROMPT)
# XXX shouldn't see gb loaded message with PDB loglevel!
# assert not in_prompt_msg(
# child,
# ['`greenback` portal opened!'],
# )
# should be same root task
assert_before(
child,
[
_pause_msg,
"<Task '__main__.main'",
"('root'",
]
)
if ctlc:
do_ctlc(
child,
# NOTE: setting this to 0 (or some other sufficient
# small val) can cause the test to fail since the
# `subactor` suffers a race where the root/parent
# sends an actor-cancel prior to it hitting its pause
# point; by def the value is 0.1
delay=0.4,
)
# XXX, fwiw without a brief sleep here the SIGINT might actually
# trigger "subactor" cancellation by its parent before the
# shield-handler is engaged.
#
# => similar to the `delay` input to `do_ctlc()` below, setting
# this too low can cause the test to fail since the `subactor`
# suffers a race where the root/parent sends an actor-cancel
# prior to the context task hitting its pause point (and thus
# engaging the `sigint_shield()` handler in time); this value
# seems be good enuf?
time.sleep(0.6)
# one of the bg thread or subactor should have
# `Lock.acquire()`-ed
# (NOT both, which will result in REPL clobbering!)
attach_patts: dict[str, list[str]] = {
'subactor': [
"'start_n_sync_pause'",
"('subactor'",
],
'inline_root_bg_thread': [
"<Thread(inline_root_bg_thread",
"('root'",
],
'start_soon_root_bg_thread': [
"<Thread(start_soon_root_bg_thread",
"('root'",
],
}
conts: int = 0 # for debugging below matching logic on failure
while attach_patts:
child.sendline('c')
conts += 1
child.expect(PROMPT)
before = str(child.before.decode())
for key in attach_patts:
if key in before:
attach_key: str = key
expected_patts: str = attach_patts.pop(key)
assert_before(
child,
[_pause_msg]
+
expected_patts
)
break
else:
pytest.fail(
f'No keys found?\n\n'
f'{attach_patts.keys()}\n\n'
f'{before}\n'
)
# ensure no other task/threads engaged a REPL
# at the same time as the one that was detected above.
for key, other_patts in attach_patts.copy().items():
assert not in_prompt_msg(
child,
other_patts,
)
if ctlc:
do_ctlc(
child,
patt=attach_key,
# NOTE same as comment above
delay=0.4,
)
child.sendline('c')
# XXX TODO, weird threading bug it seems despite the
# `abandon_on_cancel: bool` setting to
# `trio.to_thread.run_sync()`..
with maybe_expect_timeout(
ctlc=ctlc,
):
child.expect(EOF)
def expect_any_of(
attach_patts: dict[str, list[str]],
child, # what type?
ctlc: bool = False,
prompt: str = _ctlc_ignore_header,
ctlc_delay: float = .4,
) -> list[str]:
'''
Receive any of a `list[str]` of patterns provided in
`attach_patts`.
Used to test racing prompts from multiple actors and/or
tasks using a common root process' `pdbp` REPL.
'''
assert attach_patts
child.expect(PROMPT)
before = str(child.before.decode())
for attach_key in attach_patts:
if attach_key in before:
expected_patts: str = attach_patts.pop(attach_key)
assert_before(
child,
expected_patts
)
break # from for
else:
pytest.fail(
f'No keys found?\n\n'
f'{attach_patts.keys()}\n\n'
f'{before}\n'
)
# ensure no other task/threads engaged a REPL
# at the same time as the one that was detected above.
for key, other_patts in attach_patts.copy().items():
assert not in_prompt_msg(
child,
other_patts,
)
if ctlc:
do_ctlc(
child,
patt=prompt,
# NOTE same as comment above
delay=ctlc_delay,
)
return expected_patts
@pytest.mark.ctlcs_bish
def test_sync_pause_from_aio_task(
spawn,
ctlc: bool
# ^TODO, fix for `asyncio`!!
):
'''
Verify we can use the `pdbp` REPL from an `asyncio.Task` spawned using
APIs in `.to_asyncio`.
`examples/debugging/asycio_bp.py`
'''
child = spawn('asyncio_bp')
# RACE on whether trio/asyncio task bps first
attach_patts: dict[str, list[str]] = {
# first pause in guest-mode (aka "infecting")
# `trio.Task`.
'trio-side': [
_pause_msg,
"<Task 'trio_ctx'",
"('aio_daemon'",
],
# `breakpoint()` from `asyncio.Task`.
'asyncio-side': [
_pause_msg,
"<Task pending name='Task-2' coro=<greenback_shim()",
"('aio_daemon'",
],
}
while attach_patts:
expect_any_of(
attach_patts=attach_patts,
child=child,
ctlc=ctlc,
)
child.sendline('c')
# NOW in race order,
# - the asyncio-task will error
# - the root-actor parent task will pause
#
attach_patts: dict[str, list[str]] = {
# error raised in `asyncio.Task`
"raise ValueError('asyncio side error!')": [
_crash_msg,
"<Task 'trio_ctx'",
"@ ('aio_daemon'",
"ValueError: asyncio side error!",
# XXX, we no longer show this frame by default!
# 'return await chan.receive()', # `.to_asyncio` impl internals in tb
],
# parent-side propagation via actor-nursery/portal
# "tractor._exceptions.RemoteActorError: remote task raised a 'ValueError'": [
"remote task raised a 'ValueError'": [
_crash_msg,
"src_uid=('aio_daemon'",
"('aio_daemon'",
],
# a final pause in root-actor
"<Task '__main__.main'": [
_pause_msg,
"<Task '__main__.main'",
"('root'",
],
}
while attach_patts:
expect_any_of(
attach_patts=attach_patts,
child=child,
ctlc=ctlc,
)
child.sendline('c')
assert not attach_patts
# final boxed error propagates to root
assert_before(
child,
[
_crash_msg,
"<Task '__main__.main'",
"('root'",
"remote task raised a 'ValueError'",
"ValueError: asyncio side error!",
]
)
if ctlc:
do_ctlc(
child,
# NOTE: setting this to 0 (or some other sufficient
# small val) can cause the test to fail since the
# `subactor` suffers a race where the root/parent
# sends an actor-cancel prior to it hitting its pause
# point; by def the value is 0.1
delay=0.4,
)
child.sendline('c')
# with maybe_expect_timeout():
child.expect(EOF)
def test_sync_pause_from_non_greenbacked_aio_task():
'''
Where the `breakpoint()` caller task is NOT spawned by
`tractor.to_asyncio` and thus never activates
a `greenback.ensure_portal()` beforehand, presumably bc the task
was started by some lib/dep as in often seen in the field.
Ensure sync pausing works when the pause is in,
- the root actor running in infected-mode?
|_ since we don't need any IPC to acquire the debug lock?
|_ is there some way to handle this like the non-main-thread case?
All other cases need to error out appropriately right?
- for any subactor we can't avoid needing the repl lock..
|_ is there a way to hook into `asyncio.ensure_future(obj)`?
'''
pass
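
For context, a minimal sketch of the `greenback` mechanics this case hinges on: only a task that opened a portal via `greenback.ensure_portal()` up front can later await from a synchronous frame with `greenback.await_()` (sketch from memory of the `greenback` API, not tractor's wiring):

```
import trio
import greenback

def sync_frame() -> None:
    # legal only because the enclosing task opened a portal first
    greenback.await_(trio.sleep(0.1))
    print('slept from a sync frame!')

async def main() -> None:
    await greenback.ensure_portal()
    sync_frame()

trio.run(main)
```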

View File

@@ -1,172 +0,0 @@
'''
That "native" runtime-hackin toolset better be dang useful!
Verify the function of a variety of "developer-experience" tools we
offer from the `.devx` sub-pkg:
- use of the lovely `stackscope` for dumping actor `trio`-task trees
during operation and hangs.
TODO:
- demonstration of `CallerInfo` call stack frame filtering such that
for logging and REPL purposes a user sees exactly the layers needed
when debugging a problem inside the stack vs. in their app.
'''
import os
import signal
import time
from .conftest import (
expect,
assert_before,
in_prompt_msg,
PROMPT,
_pause_msg,
)
from pexpect.exceptions import (
# TIMEOUT,
EOF,
)
def test_shield_pause(
spawn,
):
'''
Verify the `tractor.pause()/.post_mortem()` API works inside an
already cancelled `trio.CancelScope` and that you can step to the
next checkpoint wherein the cancelled will get raised.
'''
child = spawn(
'shield_hang_in_sub'
)
expect(
child,
'Yo my child hanging..?',
)
assert_before(
child,
[
'Entering shield sleep..',
'Enabling trace-trees on `SIGUSR1` since `stackscope` is installed @',
]
)
script_pid: int = child.pid
print(
f'Sending SIGUSR1 to {script_pid}\n'
f'(kill -s SIGUSR1 {script_pid})\n'
)
os.kill(
script_pid,
signal.SIGUSR1,
)
time.sleep(0.2)
expect(
child,
# end-of-tree delimiter
"end-of-\('root'",
)
assert_before(
child,
[
# 'Srying to dump `stackscope` tree..',
# 'Dumping `stackscope` tree for actor',
"('root'", # uid line
# TODO!? this used to show?
# -[ ] mk reproducable for @oremanj?
#
# parent block point (non-shielded)
# 'await trio.sleep_forever() # in root',
]
)
expect(
child,
# end-of-tree delimiter
"end-of-\('hanger'",
)
assert_before(
child,
[
# relay to the sub should be reported
'Relaying `SIGUSR1`[10] to sub-actor',
"('hanger'", # uid line
# TODO!? SEE ABOVE
# hanger LOC where it's shield-halted
# 'await trio.sleep_forever() # in subactor',
]
)
# simulate the user sending a ctl-c to the hanging program.
# this should result in the terminator kicking in since
# the sub is shield blocking and can't respond to SIGINT.
os.kill(
child.pid,
signal.SIGINT,
)
expect(
child,
'Shutting down actor runtime',
timeout=6,
)
assert_before(
child,
[
'raise KeyboardInterrupt',
# 'Shutting down actor runtime',
'#T-800 deployed to collect zombie B0',
"'--uid', \"('hanger',",
]
)
def test_breakpoint_hook_restored(
spawn,
):
'''
Ensures our actor runtime sets a custom `breakpoint()` hook
on open then restores the stdlib's default on close.
The hook state validation is done via `assert`s inside the
invoked script with only `breakpoint()` (not `tractor.pause()`)
calls used.
'''
child = spawn('restore_builtin_breakpoint')
child.expect(PROMPT)
assert_before(
child,
[
_pause_msg,
"<Task '__main__.main'",
"('root'",
"first bp, tractor hook set",
]
)
child.sendline('c')
child.expect(PROMPT)
assert_before(
child,
[
"last bp, stdlib hook restored",
]
)
# since the stdlib hook was already restored there should be NO
# `tractor` `log.pdb()` content from console!
assert not in_prompt_msg(
child,
[
_pause_msg,
"<Task '__main__.main'",
"('root'",
],
)
child.sendline('c')
child.expect(EOF)
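
The restore pattern verified above, in miniature: stash whatever `sys.breakpointhook` was installed at startup and put it back on teardown. The `my_debugger_hook`, `open_runtime` and `close_runtime` names are hypothetical, not tractor's actual entrypoints:

```
import sys

def my_debugger_hook(*args, **kwargs) -> None:
    # hypothetical stand-in for a custom REPL entrypoint
    print('custom hook engaged')

_orig_hook = None

def open_runtime() -> None:
    global _orig_hook
    _orig_hook = sys.breakpointhook  # stash whatever was installed
    sys.breakpointhook = my_debugger_hook

def close_runtime() -> None:
    # restore the prior hook; the stdlib default is always available
    # as `sys.__breakpointhook__` if nothing was stashed
    sys.breakpointhook = _orig_hook or sys.__breakpointhook__
```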

View File

@@ -4,28 +4,21 @@ cancelacion?..

 '''
 from functools import partial
-from types import ModuleType

 import pytest
 from _pytest.pathlib import import_path
 import trio
 import tractor
-from tractor._testing import (
+from conftest import (
     examples_dir,
-    break_ipc,
 )


 @pytest.mark.parametrize(
-    'pre_aclose_msgstream',
-    [
-        False,
-        True,
-    ],
-    ids=[
-        'no_msgstream_aclose',
-        'pre_aclose_msgstream',
-    ],
+    'debug_mode',
+    [False, True],
+    ids=['no_debug_mode', 'debug_mode'],
 )
 @pytest.mark.parametrize(
     'ipc_break',
@@ -70,10 +63,8 @@
 )
 def test_ipc_channel_break_during_stream(
     debug_mode: bool,
-    loglevel: str,
     spawn_backend: str,
-    ipc_break: dict|None,
-    pre_aclose_msgstream: bool,
+    ipc_break: dict | None,
 ):
     '''
     Ensure we can have an IPC channel break its connection during
@@ -90,149 +81,72 @@ def test_ipc_channel_break_during_stream(
         # non-`trio` spawners should never hit the hang condition that
         # requires the user to do ctl-c to cancel the actor tree.
-        # expect_final_exc = trio.ClosedResourceError
-        expect_final_exc = tractor.TransportClosed
+        expect_final_exc = trio.ClosedResourceError

-    mod: ModuleType = import_path(
-        examples_dir() / 'advanced_faults'
-        / 'ipc_failure_during_stream.py',
+    mod = import_path(
+        examples_dir() / 'advanced_faults' / 'ipc_failure_during_stream.py',
         root=examples_dir(),
-        consider_namespace_packages=False,
     )

-    # by def we expect KBI from user after a simulated "hang
-    # period" wherein the user eventually hits ctl-c to kill the
-    # root-actor tree.
-    expect_final_exc: BaseException = KeyboardInterrupt
-    if (
-        # only expect EoC if trans is broken on the child side,
-        ipc_break['break_child_ipc_after'] is not False
-        # AND we tell the child to call `MsgStream.aclose()`.
-        and pre_aclose_msgstream
-    ):
-        # expect_final_exc = trio.EndOfChannel
-        # ^XXX NOPE! XXX^ since now `.open_stream()` absorbs this
-        # gracefully!
-        expect_final_exc = KeyboardInterrupt
+    expect_final_exc = KeyboardInterrupt

-    # NOTE when ONLY the child breaks or it breaks BEFORE the
-    # parent we expect the parent to get a closed resource error
-    # on the next `MsgStream.receive()` and then fail out and
-    # cancel the child from there.
-    #
-    # ONLY CHILD breaks
-    if (
-        ipc_break['break_child_ipc_after']
-        and
-        ipc_break['break_parent_ipc_after'] is False
-    ):
-        # NOTE: we DO NOT expect this any more since
-        # the child side's channel will be broken silently
-        # and nothing on the parent side will indicate this!
-        # expect_final_exc = trio.ClosedResourceError
-
-        # NOTE: child will send a 'stop' msg before it breaks
-        # the transport channel BUT, that will be absorbed by the
-        # `ctx.open_stream()` block and thus the `.open_context()`
-        # should hang, after which the test script simulates
-        # a user sending ctl-c by raising a KBI.
-        if pre_aclose_msgstream:
-            expect_final_exc = KeyboardInterrupt
-
-            # XXX OLD XXX
-            # if child calls `MsgStream.aclose()` then expect EoC.
-            # ^ XXX not any more ^ since eoc is always absorbed
-            # gracefully and NOT bubbled to the `.open_context()`
-            # block!
-            # expect_final_exc = trio.EndOfChannel
-
-    # BOTH but, CHILD breaks FIRST
-    elif (
-        ipc_break['break_child_ipc_after'] is not False
-        and (
-            ipc_break['break_parent_ipc_after']
-            > ipc_break['break_child_ipc_after']
-        )
-    ):
-        if pre_aclose_msgstream:
-            expect_final_exc = KeyboardInterrupt
-
-    # NOTE when the parent IPC side dies (even if the child does as well
-    # but the child fails BEFORE the parent) we always expect the
-    # IPC layer to raise a closed-resource, NEVER do we expect
-    # a stop msg since the parent-side ctx apis will error out
-    # IMMEDIATELY before the child ever sends any 'stop' msg.
-    #
-    # ONLY PARENT breaks
-    elif (
-        ipc_break['break_parent_ipc_after']
-        and
-        ipc_break['break_child_ipc_after'] is False
-    ):
-        # expect_final_exc = trio.ClosedResourceError
-        expect_final_exc = tractor.TransportClosed
-
-    # BOTH but, PARENT breaks FIRST
-    elif (
-        ipc_break['break_parent_ipc_after'] is not False
-        and (
-            ipc_break['break_child_ipc_after']
-            >
-            ipc_break['break_parent_ipc_after']
-        )
-    ):
-        # expect_final_exc = trio.ClosedResourceError
-        expect_final_exc = tractor.TransportClosed
+    # when ONLY the child breaks we expect the parent to get a closed
+    # resource error on the next `MsgStream.receive()` and then fail out
+    # and cancel the child from there.
+    if (
+        # only child breaks
+        (
+            ipc_break['break_child_ipc_after']
+            and ipc_break['break_parent_ipc_after'] is False
+        )
+        # both break but, parent breaks first
+        or (
+            ipc_break['break_child_ipc_after'] is not False
+            and (
+                ipc_break['break_parent_ipc_after']
+                > ipc_break['break_child_ipc_after']
+            )
+        )
+    ):
+        expect_final_exc = trio.ClosedResourceError
+
+    # when the parent IPC side dies (even if the child's does as well
+    # but the child fails BEFORE the parent) we expect the channel to be
+    # sent a stop msg from the child at some point which will signal the
+    # parent that the stream has been terminated.
+    # NOTE: when the parent breaks "after" the child you get this same
+    # case as well, the child breaks the IPC channel with a stop msg
+    # before any closure takes place.
+    elif (
+        # only parent breaks
+        (
+            ipc_break['break_parent_ipc_after']
+            and ipc_break['break_child_ipc_after'] is False
+        )
+        # both break but, child breaks first
+        or (
+            ipc_break['break_parent_ipc_after'] is not False
+            and (
+                ipc_break['break_child_ipc_after']
+                > ipc_break['break_parent_ipc_after']
+            )
+        )
+    ):
+        expect_final_exc = trio.EndOfChannel

-    with pytest.raises(
-        expected_exception=(
-            expect_final_exc,
-            ExceptionGroup,
-        ),
-    ) as excinfo:
-        try:
-            trio.run(
-                partial(
-                    mod.main,
-                    debug_mode=debug_mode,
-                    start_method=spawn_backend,
-                    loglevel=loglevel,
-                    pre_close=pre_aclose_msgstream,
-                    **ipc_break,
-                )
-            )
-        except KeyboardInterrupt as _kbi:
-            kbi = _kbi
-            if expect_final_exc is not KeyboardInterrupt:
-                pytest.fail(
-                    'Rxed unexpected KBI !?\n'
-                    f'{repr(kbi)}'
-                )
-            raise
-        except tractor.TransportClosed as _tc:
-            tc = _tc
-            if expect_final_exc is KeyboardInterrupt:
-                pytest.fail(
-                    'Unexpected transport failure !?\n'
-                    f'{repr(tc)}'
-                )
-            cause: Exception = tc.__cause__
-            assert (
-                type(cause) is trio.ClosedResourceError
-                and
-                cause.args[0] == 'another task closed this fd'
-            )
-            raise
-
-    # get raw instance from pytest wrapper
-    value = excinfo.value
-    if isinstance(value, ExceptionGroup):
-        excs = value.exceptions
-        assert len(excs) == 1
-        final_exc = excs[0]
-        assert isinstance(final_exc, expect_final_exc)
+    with pytest.raises(expect_final_exc):
+        trio.run(
+            partial(
+                mod.main,
+                debug_mode=debug_mode,
+                start_method=spawn_backend,
+                **ipc_break,
+            )
+        )


 @tractor.context
@@ -241,50 +155,39 @@ async def break_ipc_after_started(
 ) -> None:
     await ctx.started()
     async with ctx.open_stream() as stream:
-
-        # TODO: make a test which verifies the error
-        # for this, i.e. raises a `MsgTypeError`
-        # await ctx.chan.send(None)
-
-        await break_ipc(
-            stream=stream,
-            pre_close=True,
-        )
+        await stream.aclose()
+        await trio.sleep(0.2)
+        await ctx.chan.send(None)
         print('child broke IPC and terminating')


 def test_stream_closed_right_after_ipc_break_and_zombie_lord_engages():
     '''
-    Verify that is a subactor's IPC goes down just after bringing up
-    a stream the parent can trigger a SIGINT and the child will be
-    reaped out-of-IPC by the localhost process supervision machinery:
-    aka "zombie lord".
+    Verify that is a subactor's IPC goes down just after bringing up a stream
+    the parent can trigger a SIGINT and the child will be reaped out-of-IPC by
+    the localhost process supervision machinery: aka "zombie lord".

     '''
     async def main():
-        with trio.fail_after(3):
-            async with tractor.open_nursery() as an:
-                portal = await an.start_actor(
-                    'ipc_breaker',
-                    enable_modules=[__name__],
-                )
+        async with tractor.open_nursery() as n:
+            portal = await n.start_actor(
+                'ipc_breaker',
+                enable_modules=[__name__],
+            )

-                with trio.move_on_after(1):
-                    async with (
-                        portal.open_context(
-                            break_ipc_after_started
-                        ) as (ctx, sent),
-                    ):
-                        async with ctx.open_stream():
-                            await trio.sleep(0.5)
+            with trio.move_on_after(1):
+                async with (
+                    portal.open_context(
+                        break_ipc_after_started
+                    ) as (ctx, sent),
+                ):
+                    async with ctx.open_stream():
+                        await trio.sleep(0.5)

-                        print('parent waiting on context')
+                    print('parent waiting on context')

-                print(
-                    'parent exited context\n'
-                    'parent raising KBI..\n'
-                )
-                raise KeyboardInterrupt
+            print('parent exited context')
+            raise KeyboardInterrupt

     with pytest.raises(KeyboardInterrupt):
         trio.run(main)

View File

@@ -6,7 +6,6 @@ from collections import Counter
 import itertools
 import platform

-import pytest
 import trio
 import tractor
@@ -144,16 +143,8 @@ def test_dynamic_pub_sub():
     try:
         trio.run(main)
-    except (
-        trio.TooSlowError,
-        ExceptionGroup,
-    ) as err:
-        if isinstance(err, ExceptionGroup):
-            for suberr in err.exceptions:
-                if isinstance(suberr, trio.TooSlowError):
-                    break
-            else:
-                pytest.fail('Never got a `TooSlowError` ?')
+    except trio.TooSlowError:
+        pass


 @tractor.context
@@ -307,77 +298,44 @@ async def inf_streamer(
     async with (
         ctx.open_stream() as stream,
-
-        # XXX TODO, INTERESTING CASE!!
-        # - if we don't collapse the eg then the embedded
-        # `trio.EndOfChannel` doesn't propagate directly to the above
-        # .open_stream() parent, resulting in it also raising instead
-        # of gracefully absorbing as normal.. so how to handle?
-        trio.open_nursery(
-            strict_exception_groups=False,
-        ) as tn,
+        trio.open_nursery() as n,
     ):
-        async def close_stream_on_sentinel():
+        async def bail_on_sentinel():
             async for msg in stream:
                 if msg == 'done':
-                    print(
-                        'streamer RXed "done" sentinel msg!\n'
-                        'CLOSING `MsgStream`!'
-                    )
                     await stream.aclose()
                 else:
                     print(f'streamer received {msg}')
-            else:
-                print('streamer exited recv loop')

         # start termination detector
-        tn.start_soon(close_stream_on_sentinel)
+        n.start_soon(bail_on_sentinel)

-        cap: int = 10000  # so that we don't spin forever when bug..
-        for val in range(cap):
+        for val in itertools.count():
             try:
-                print(f'streamer sending {val}')
                 await stream.send(val)
-                if val > cap:
-                    raise RuntimeError(
-                        'Streamer never cancelled by setinel?'
-                    )
-                await trio.sleep(0.001)
-
-            # close out the stream gracefully
             except trio.ClosedResourceError:
-                print('transport closed on streamer side!')
-                assert stream.closed
+                # close out the stream gracefully
                 break
-        else:
-            raise RuntimeError(
-                'Streamer not cancelled before finished sending?'
-            )

-        print('streamer exited .open_streamer() block')
+        print('terminating streamer')


-def test_local_task_fanout_from_stream(
-    debug_mode: bool,
-):
+def test_local_task_fanout_from_stream():
     '''
     Single stream with multiple local consumer tasks using the
     ``MsgStream.subscribe()` api.

-    Ensure all tasks receive all values after stream completes
-    sending.
+    Ensure all tasks receive all values after stream completes sending.

     '''
-    consumers: int = 22
+    consumers = 22

     async def main():

         counts = Counter()

-        async with tractor.open_nursery(
-            debug_mode=debug_mode,
-        ) as tn:
-            p: tractor.Portal = await tn.start_actor(
+        async with tractor.open_nursery() as tn:
+            p = await tn.start_actor(
                 'inf_streamer',
                 enable_modules=[__name__],
             )
@@ -385,6 +343,7 @@ def test_local_task_fanout_from_stream(
             p.open_context(inf_streamer) as (ctx, _),
             ctx.open_stream() as stream,
         ):
+
             async def pull_and_count(name: str):
                 # name = trio.lowlevel.current_task().name
                 async with stream.subscribe() as recver:
@@ -393,7 +352,7 @@ def test_local_task_fanout_from_stream(
                     tractor.trionics.BroadcastReceiver
                 )
                 async for val in recver:
-                    print(f'bx {name} rx: {val}')
+                    # print(f'{name}: {val}')
                     counts[name] += 1

                 print(f'{name} bcaster ended')
@@ -403,14 +362,10 @@ def test_local_task_fanout_from_stream(
             with trio.fail_after(3):
                 async with trio.open_nursery() as nurse:
                     for i in range(consumers):
-                        nurse.start_soon(
-                            pull_and_count,
-                            i,
-                        )
+                        nurse.start_soon(pull_and_count, i)

-                    # delay to let bcast consumers pull msgs
                     await trio.sleep(0.5)
-                    print('terminating nursery of bcast rxer consumers!')
+                    print('\nterminating')
                     await stream.send('done')

     print('closed stream connection')

View File

@ -8,13 +8,15 @@ import platform
import time import time
from itertools import repeat from itertools import repeat
from exceptiongroup import (
BaseExceptionGroup,
ExceptionGroup,
)
import pytest import pytest
import trio import trio
import tractor import tractor
from tractor._testing import (
tractor_test, from conftest import tractor_test, no_windows
)
from .conftest import no_windows
def is_win(): def is_win():
@ -45,19 +47,17 @@ async def do_nuthin():
], ],
ids=['no_args', 'unexpected_args'], ids=['no_args', 'unexpected_args'],
) )
def test_remote_error(reg_addr, args_err): def test_remote_error(arb_addr, args_err):
''' """Verify an error raised in a subactor that is propagated
Verify an error raised in a subactor that is propagated
to the parent nursery, contains the underlying boxed builtin to the parent nursery, contains the underlying boxed builtin
error type info and causes cancellation and reraising all the error type info and causes cancellation and reraising all the
way up the stack. way up the stack.
"""
'''
args, errtype = args_err args, errtype = args_err
async def main(): async def main():
async with tractor.open_nursery( async with tractor.open_nursery(
registry_addrs=[reg_addr], arbiter_addr=arb_addr,
) as nursery: ) as nursery:
# on a remote type error caused by bad input args # on a remote type error caused by bad input args
@ -65,9 +65,7 @@ def test_remote_error(reg_addr, args_err):
# an exception group outside the nursery since the error # an exception group outside the nursery since the error
# here and the far end task error are one in the same? # here and the far end task error are one in the same?
portal = await nursery.run_in_actor( portal = await nursery.run_in_actor(
assert_err, assert_err, name='errorer', **args
name='errorer',
**args
) )
# get result(s) from main task # get result(s) from main task
@ -77,7 +75,7 @@ def test_remote_error(reg_addr, args_err):
# of this actor nursery. # of this actor nursery.
await portal.result() await portal.result()
except tractor.RemoteActorError as err: except tractor.RemoteActorError as err:
assert err.boxed_type == errtype assert err.type == errtype
print("Look Maa that actor failed hard, hehh") print("Look Maa that actor failed hard, hehh")
raise raise
@ -86,33 +84,20 @@ def test_remote_error(reg_addr, args_err):
with pytest.raises(tractor.RemoteActorError) as excinfo: with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main) trio.run(main)
assert excinfo.value.boxed_type == errtype assert excinfo.value.type == errtype
else: else:
# the root task will also error on the `Portal.result()` # the root task will also error on the `.result()` call
# call so we expect an error from there AND the child. # so we expect an error from there AND the child.
# |_ tho seems like on new `trio` this doesn't always with pytest.raises(BaseExceptionGroup) as excinfo:
# happen?
with pytest.raises((
BaseExceptionGroup,
tractor.RemoteActorError,
)) as excinfo:
trio.run(main) trio.run(main)
# ensure boxed errors are `errtype` # ensure boxed errors
err: BaseException = excinfo.value for exc in excinfo.value.exceptions:
if isinstance(err, BaseExceptionGroup): assert exc.type == errtype
suberrs: list[BaseException] = err.exceptions
else:
suberrs: list[BaseException] = [err]
for exc in suberrs:
assert exc.boxed_type == errtype
def test_multierror( def test_multierror(arb_addr):
reg_addr: tuple[str, int],
):
''' '''
Verify we raise a ``BaseExceptionGroup`` out of a nursery where Verify we raise a ``BaseExceptionGroup`` out of a nursery where
more then one actor errors. more then one actor errors.
@ -120,7 +105,7 @@ def test_multierror(
''' '''
async def main(): async def main():
async with tractor.open_nursery( async with tractor.open_nursery(
registry_addrs=[reg_addr], arbiter_addr=arb_addr,
) as nursery: ) as nursery:
await nursery.run_in_actor(assert_err, name='errorer1') await nursery.run_in_actor(assert_err, name='errorer1')
@ -130,7 +115,7 @@ def test_multierror(
try: try:
await portal2.result() await portal2.result()
except tractor.RemoteActorError as err: except tractor.RemoteActorError as err:
assert err.boxed_type is AssertionError assert err.type == AssertionError
print("Look Maa that first actor failed hard, hehh") print("Look Maa that first actor failed hard, hehh")
raise raise
@ -145,14 +130,14 @@ def test_multierror(
@pytest.mark.parametrize(
    'num_subactors', range(25, 26),
)
-def test_multierror_fast_nursery(reg_addr, start_method, num_subactors, delay):
+def test_multierror_fast_nursery(arb_addr, start_method, num_subactors, delay):
    """Verify we raise a ``BaseExceptionGroup`` out of a nursery where
    more than one actor errors and also with a delay before failure
    to test failure during an ongoing spawning.
    """
    async def main():
        async with tractor.open_nursery(
-            registry_addrs=[reg_addr],
+            arbiter_addr=arb_addr,
        ) as nursery:

            for i in range(num_subactors):
@ -182,7 +167,7 @@ def test_multierror_fast_nursery(reg_addr, start_method, num_subactors, delay):
    for exc in exceptions:
        assert isinstance(exc, tractor.RemoteActorError)
-        assert exc.boxed_type is AssertionError
+        assert exc.type == AssertionError
async def do_nothing():
@ -190,20 +175,15 @@ async def do_nothing():
@pytest.mark.parametrize('mechanism', ['nursery_cancel', KeyboardInterrupt])
-def test_cancel_single_subactor(reg_addr, mechanism):
-    '''
-    Ensure a ``ActorNursery.start_actor()`` spawned subactor
-    cancels when the nursery is cancelled.
-
-    '''
+def test_cancel_single_subactor(arb_addr, mechanism):
+    """Ensure a ``ActorNursery.start_actor()`` spawned subactor
+    cancels when the nursery is cancelled.
+    """
    async def spawn_actor():
-        '''
-        Spawn an actor that blocks indefinitely then cancel via
-        either `ActorNursery.cancel()` or an exception raise.
-
-        '''
+        """Spawn an actor that blocks indefinitely.
+        """
        async with tractor.open_nursery(
-            registry_addrs=[reg_addr],
+            arbiter_addr=arb_addr,
        ) as nursery:

            portal = await nursery.start_actor(
@ -323,7 +303,7 @@ async def test_some_cancels_all(num_actors_and_errs, start_method, loglevel):
                await portal.run(func, **kwargs)

            except tractor.RemoteActorError as err:
-                assert err.boxed_type == err_type
+                assert err.type == err_type
                # we only expect this first error to propagate
                # (all other daemons are cancelled before they
                # can be scheduled)
@ -342,11 +322,11 @@ async def test_some_cancels_all(num_actors_and_errs, start_method, loglevel):
            assert len(err.exceptions) == num_actors
            for exc in err.exceptions:
                if isinstance(exc, tractor.RemoteActorError):
-                    assert exc.boxed_type == err_type
+                    assert exc.type == err_type
                else:
                    assert isinstance(exc, trio.Cancelled)
        elif isinstance(err, tractor.RemoteActorError):
-            assert err.boxed_type == err_type
+            assert err.type == err_type

        assert n.cancelled is True
        assert not n._children
@ -425,7 +405,7 @@ async def test_nested_multierrors(loglevel, start_method):
                elif isinstance(subexc, tractor.RemoteActorError):
                    # on windows it seems we can't exactly be sure wtf
                    # will happen..
-                    assert subexc.boxed_type in (
+                    assert subexc.type in (
                        tractor.RemoteActorError,
                        trio.Cancelled,
                        BaseExceptionGroup,
@ -435,7 +415,7 @@ async def test_nested_multierrors(loglevel, start_method):
                    for subsub in subexc.exceptions:

                        if subsub in (tractor.RemoteActorError,):
-                            subsub = subsub.boxed_type
+                            subsub = subsub.type

                        assert type(subsub) in (
                            trio.Cancelled,
@ -450,16 +430,16 @@ async def test_nested_multierrors(loglevel, start_method):
                    # we get back the (sent) cancel signal instead
                    if is_win():
                        if isinstance(subexc, tractor.RemoteActorError):
-                            assert subexc.boxed_type in (
+                            assert subexc.type in (
                                BaseExceptionGroup,
                                tractor.RemoteActorError
                            )
                        else:
                            assert isinstance(subexc, BaseExceptionGroup)
                    else:
-                        assert subexc.boxed_type is ExceptionGroup
+                        assert subexc.type is ExceptionGroup
                else:
-                    assert subexc.boxed_type in (
+                    assert subexc.type in (
                        tractor.RemoteActorError,
                        trio.Cancelled
                    )
@ -504,9 +484,7 @@ def test_cancel_via_SIGINT_other_task(
    if is_win():  # smh
        timeout += 1

-    async def spawn_and_sleep_forever(
-        task_status=trio.TASK_STATUS_IGNORED
-    ):
+    async def spawn_and_sleep_forever(task_status=trio.TASK_STATUS_IGNORED):
        async with tractor.open_nursery() as tn:
            for i in range(3):
                await tn.run_in_actor(
@ -519,9 +497,7 @@ def test_cancel_via_SIGINT_other_task(
    async def main():
        # should never timeout since SIGINT should cancel the current program
        with trio.fail_after(timeout):
-            async with trio.open_nursery(
-                strict_exception_groups=False,
-            ) as n:
+            async with trio.open_nursery() as n:
                await n.start(spawn_and_sleep_forever)
                if 'mp' in spawn_backend:
                    time.sleep(0.1)
@ -614,12 +590,6 @@ def test_fast_graceful_cancel_when_spawn_task_in_soft_proc_wait_for_daemon(
                nurse.start_soon(delayed_kbi)

                await p.run(do_nuthin)

-    # need to explicitly re-raise the lone kbi..now
-    except* KeyboardInterrupt as kbi_eg:
-        assert (len(excs := kbi_eg.exceptions) == 1)
-        raise excs[0]
-
    finally:
        duration = time.time() - start
        if duration > timeout:
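(The removed `except*` block above is worth a note: under py3.11+ exception-group semantics a lone `KeyboardInterrupt` escaping a nursery arrives wrapped, so the single leaf is unwrapped and re-raised bare. A minimal sketch of the same pattern, assuming a `main` like the test's:)

```python
def run_main() -> None:
    try:
        trio.run(main)
    # the lone KBI arrives wrapped in a group, so unwrap the
    # single leaf and re-raise it bare
    except* KeyboardInterrupt as kbi_eg:
        assert len(excs := kbi_eg.exceptions) == 1
        raise excs[0]
```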


@ -6,15 +6,14 @@ sub-sub-actor daemons.
'''
from typing import Optional
import asyncio
-from contextlib import (
-    asynccontextmanager as acm,
-    aclosing,
-)
+from contextlib import asynccontextmanager as acm

import pytest
import trio
+from trio_typing import TaskStatus

import tractor
from tractor import RemoteActorError
+from async_generator import aclosing


async def aio_streamer(
@ -95,8 +94,8 @@ async def trio_main(
    # stash a "service nursery" as "actor local" (aka a Python global)
    global _nursery
-    tn = _nursery
-    assert tn
+    n = _nursery
+    assert n

    async def consume_stream():
        async with wrapper_mngr() as stream:
@ -104,10 +103,10 @@ async def trio_main(
            print(msg)

    # run 2 tasks to ensure broadcaster chan use
-    tn.start_soon(consume_stream)
-    tn.start_soon(consume_stream)
+    n.start_soon(consume_stream)
+    n.start_soon(consume_stream)

-    tn.start_soon(trio_sleep_and_err)
+    n.start_soon(trio_sleep_and_err)

    await trio.sleep_forever()
@ -117,10 +116,8 @@ async def open_actor_local_nursery(
    ctx: tractor.Context,
):
    global _nursery
-    async with trio.open_nursery(
-        strict_exception_groups=False,
-    ) as tn:
-        _nursery = tn
+    async with trio.open_nursery() as n:
+        _nursery = n
        await ctx.started()
        await trio.sleep(10)
        # await trio.sleep(1)
@ -134,7 +131,7 @@ async def open_actor_local_nursery(
        # never yields back.. aka a scenario where the
        # ``tractor.context`` task IS NOT in the service n's cancel
        # scope.
-        tn.cancel_scope.cancel()
+        n.cancel_scope.cancel()
@pytest.mark.parametrize(
@ -144,7 +141,7 @@ async def open_actor_local_nursery(
)
def test_actor_managed_trio_nursery_task_error_cancels_aio(
    asyncio_mode: bool,
-    reg_addr: tuple,
+    arb_addr
):
    '''
    Verify that a ``trio`` nursery created/managed in a child actor
@ -159,7 +156,7 @@ def test_actor_managed_trio_nursery_task_error_cancels_aio(
        async with tractor.open_nursery() as n:
            p = await n.start_actor(
                'nursery_mngr',
-                infect_asyncio=asyncio_mode,  # TODO, is this enabling debug mode?
+                infect_asyncio=asyncio_mode,
                enable_modules=[__name__],
            )
            async with (
@ -173,4 +170,4 @@ def test_actor_managed_trio_nursery_task_error_cancels_aio(

    # verify boxed error
    err = excinfo.value
-    assert err.boxed_type is NameError
+    assert isinstance(err.type(), NameError)


@ -5,7 +5,9 @@ import trio
import tractor
from tractor import open_actor_cluster
from tractor.trionics import gather_contexts
-from tractor._testing import tractor_test
+
+from conftest import tractor_test

MESSAGE = 'tractoring at full speed'

File diff suppressed because it is too large

@ -10,28 +10,24 @@ TODO:
    - wonder if any of it'll work on OS X?

"""
-from functools import partial
import itertools
-from os import path
+from typing import Optional
import platform
+import pathlib
+import sys
import time

import pytest
+import pexpect
from pexpect.exceptions import (
    TIMEOUT,
    EOF,
)

-from .conftest import (
-    do_ctlc,
-    PROMPT,
-    _pause_msg,
-    _crash_msg,
-    _repl_fail_msg,
-)
-from .conftest import (
-    expect,
-    in_prompt_msg,
-    assert_before,
-)
+from conftest import (
+    examples_dir,
+    _ci_env,
+)
# TODO: The next great debugger audit could be done by you!
@ -51,6 +47,15 @@ if platform.system() == 'Windows':
)


+def mk_cmd(ex_name: str) -> str:
+    '''
+    Generate a command suitable to pass to ``pexpect.spawn()``.
+    '''
+    script_path: pathlib.Path = examples_dir() / 'debugging' / f'{ex_name}.py'
+    return ' '.join(['python', str(script_path)])


# TODO: was trying to use this xfail style but some weird bug i see in CI
# that's happening at collect time.. pretty soon gonna dump actions i'm
# thinkin...
@ -69,6 +74,117 @@ has_nested_actors = pytest.mark.has_nested_actors
# )
@pytest.fixture
def spawn(
start_method,
testdir,
arb_addr,
) -> 'pexpect.spawn':
if start_method != 'trio':
pytest.skip(
"Debugger tests are only supported on the trio backend"
)
def _spawn(cmd):
return testdir.spawn(
cmd=mk_cmd(cmd),
expect_timeout=3,
)
return _spawn
PROMPT = r"\(Pdb\+\)"
def expect(
child,
# prompt by default
patt: str = PROMPT,
**kwargs,
) -> None:
'''
Expect wrapper that prints last seen console
data before failing.
'''
try:
child.expect(
patt,
**kwargs,
)
except TIMEOUT:
before = str(child.before.decode())
print(before)
raise
def assert_before(
child,
patts: list[str],
) -> None:
before = str(child.before.decode())
for patt in patts:
try:
assert patt in before
except AssertionError:
print(before)
raise
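(Taken together these helpers make the test bodies below read almost like REPL transcripts; typical usage, mirroring the tests that follow:)

```python
child = spawn('root_actor_error')         # run the example script under pexpect
expect(child, PROMPT)                     # block until the pdb prompt shows
assert_before(child, ['AssertionError'])  # check the pre-prompt console output
child.sendline('c')                       # then drive the REPL
```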
@pytest.fixture(
params=[False, True],
ids='ctl-c={}'.format,
)
def ctlc(
request,
ci_env: bool,
) -> bool:
use_ctlc = request.param
# TODO: we can remove this bc pdbp right?
if (
sys.version_info <= (3, 10)
and use_ctlc
):
# on 3.9 it seems the REPL UX
# is highly unreliable and frankly annoying
# to test for. It does work from manual testing
    # but i just don't think it's worth it to try
# and get this working especially since we want to
# be 3.10+ mega-asap.
pytest.skip('Py3.9 and `pdbpp` son no bueno..')
node = request.node
markers = node.own_markers
for mark in markers:
if mark.name == 'has_nested_actors':
pytest.skip(
f'Test {node} has nested actors and fails with Ctrl-C.\n'
f'The test can sometimes run fine locally but until'
' we solve' 'this issue this CI test will be xfail:\n'
'https://github.com/goodboy/tractor/issues/320'
)
if use_ctlc:
# XXX: disable pygments highlighting for auto-tests
# since some envs (like actions CI) will struggle
        # with the added color-char encoding..
from tractor._debug import TractorConfig
TractorConfig.use_pygements = False
yield use_ctlc
@pytest.mark.parametrize(
    'user_in_out',
    [
@ -77,30 +193,21 @@ has_nested_actors = pytest.mark.has_nested_actors
    ],
    ids=lambda item: f'{item[0]} -> {item[1]}',
)
-def test_root_actor_error(
-    spawn,
-    user_in_out,
-):
-    '''
-    Demonstrate crash handler entering pdb from basic error in root actor.
-
-    '''
+def test_root_actor_error(spawn, user_in_out):
+    """Demonstrate crash handler entering pdbpp from basic error in root actor.
+    """
    user_input, expect_err_str = user_in_out

    child = spawn('root_actor_error')

-    # scan for the prompt
+    # scan for the pdbpp prompt
    expect(child, PROMPT)

+    before = str(child.before.decode())
+
    # make sure expected logging and error arrives
-    assert in_prompt_msg(
-        child,
-        [
-            _crash_msg,
-            "('root'",
-            'AssertionError',
-        ]
-    )
+    assert "Attaching to pdb in crashed actor: ('root'" in before
+    assert 'AssertionError' in before

    # send user command
    child.sendline(user_input)
@ -119,14 +226,12 @@ def test_root_actor_error(
    ids=lambda item: f'{item[0]} -> {item[1]}',
)
def test_root_actor_bp(spawn, user_in_out):
-    '''
-    Demonstrate breakpoint from in root actor.
-
-    '''
+    """Demonstrate breakpoint from in root actor.
+    """
    user_input, expect_err_str = user_in_out
    child = spawn('root_actor_breakpoint')

-    # scan for the prompt
+    # scan for the pdbpp prompt
    child.expect(PROMPT)

    assert 'Error' not in str(child.before)
@ -136,7 +241,7 @@ def test_root_actor_bp(spawn, user_in_out):
    child.expect('\r\n')

    # process should exit
-    child.expect(EOF)
+    child.expect(pexpect.EOF)

    if expect_err_str is None:
        assert 'Error' not in str(child.before)
@ -144,6 +249,38 @@ def test_root_actor_bp(spawn, user_in_out):
        assert expect_err_str in str(child.before)
def do_ctlc(
child,
count: int = 3,
delay: float = 0.1,
patt: Optional[str] = None,
# expect repl UX to reprint the prompt after every
# ctrl-c send.
# XXX: no idea but, in CI this never seems to work even on 3.10 so
# needs some further investigation potentially...
expect_prompt: bool = not _ci_env,
) -> None:
# make sure ctl-c sends don't do anything but repeat output
for _ in range(count):
time.sleep(delay)
child.sendcontrol('c')
# TODO: figure out why this makes CI fail..
# if you run this test manually it works just fine..
if expect_prompt:
before = str(child.before.decode())
time.sleep(delay)
child.expect(PROMPT)
time.sleep(delay)
if patt:
# should see the last line on console
assert patt in before
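(Usage is equally terse; e.g. from the tests below, right after landing at a prompt:)

```python
child.expect(PROMPT)
# the prompt should survive repeated interrupts and still show `patt`
do_ctlc(child, patt="('name_error'")
```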
def test_root_actor_bp_forever(
    spawn,
    ctlc: bool,
@ -183,7 +320,7 @@ def test_root_actor_bp_forever(
    # quit out of the loop
    child.sendline('q')
-    child.expect(EOF)
+    child.expect(pexpect.EOF)
@pytest.mark.parametrize(
@ -202,16 +339,11 @@ def test_subactor_error(
    '''
    child = spawn('subactor_error')

-    # scan for the prompt
+    # scan for the pdbpp prompt
    child.expect(PROMPT)

-    assert in_prompt_msg(
-        child,
-        [
-            _crash_msg,
-            "('name_error'",
-        ]
-    )
+    before = str(child.before.decode())
+    assert "Attaching to pdb in crashed actor: ('name_error'" in before

    if do_next:
        child.sendline('n')
@ -229,16 +361,12 @@ def test_subactor_error(
        child.sendline('continue')
        child.expect(PROMPT)

-        assert in_prompt_msg(
-            child,
-            [
-                _crash_msg,
-                # root actor gets debugger engaged
-                "('root'",
-                # error is a remote error propagated from the subactor
-                "('name_error'",
-            ]
-        )
+        before = str(child.before.decode())
+
+        # root actor gets debugger engaged
+        assert "Attaching to pdb in crashed actor: ('root'" in before
+        # error is a remote error propagated from the subactor
+        assert "RemoteActorError: ('name_error'" in before

    # another round
    if ctlc:
@ -248,7 +376,7 @@ def test_subactor_error(
    child.expect('\r\n')

    # process should exit
-    child.expect(EOF)
+    child.expect(pexpect.EOF)
def test_subactor_breakpoint(
@ -258,12 +386,12 @@ def test_subactor_breakpoint(
    "Single subactor with an infinite breakpoint loop"

    child = spawn('subactor_breakpoint')

+    # scan for the pdbpp prompt
    child.expect(PROMPT)

-    assert in_prompt_msg(
-        child,
-        [_pause_msg,
-         "('breakpoint_forever'",]
-    )
+    before = str(child.before.decode())
+    assert "Attaching pdb to actor: ('breakpoint_forever'" in before

    # do some "next" commands to demonstrate recurrent breakpoint
    # entries
@ -278,10 +406,8 @@ def test_subactor_breakpoint(
    for _ in range(5):
        child.sendline('continue')
        child.expect(PROMPT)
-        assert in_prompt_msg(
-            child,
-            [_pause_msg, "('breakpoint_forever'"]
-        )
+        before = str(child.before.decode())
+        assert "Attaching pdb to actor: ('breakpoint_forever'" in before

        if ctlc:
            do_ctlc(child)
@ -292,12 +418,9 @@ def test_subactor_breakpoint(
    # child process should exit but parent will capture pdb.BdbQuit
    child.expect(PROMPT)

-    assert in_prompt_msg(
-        child,
-        ['RemoteActorError:',
-         "('breakpoint_forever'",
-         'bdb.BdbQuit',]
-    )
+    before = str(child.before.decode())
+    assert "RemoteActorError: ('breakpoint_forever'" in before
+    assert 'bdb.BdbQuit' in before

    if ctlc:
        do_ctlc(child)
@ -306,17 +429,11 @@ def test_subactor_breakpoint(
    child.sendline('c')

    # process should exit
-    child.expect(EOF)
+    child.expect(pexpect.EOF)

-    assert in_prompt_msg(
-        child, [
-            'MessagingError:',
-            'RemoteActorError:',
-            "('breakpoint_forever'",
-            'bdb.BdbQuit',
-        ],
-        pause_on_false=True,
-    )
+    before = str(child.before.decode())
+    assert "RemoteActorError: ('breakpoint_forever'" in before
+    assert 'bdb.BdbQuit' in before
@has_nested_actors
@ -331,14 +448,11 @@ def test_multi_subactors(
    '''
    child = spawn(r'multi_subactors')

-    # scan for the prompt
+    # scan for the pdbpp prompt
    child.expect(PROMPT)

    before = str(child.before.decode())
-    assert in_prompt_msg(
-        child,
-        [_pause_msg, "('breakpoint_forever'"]
-    )
+    assert "Attaching pdb to actor: ('breakpoint_forever'" in before

    if ctlc:
        do_ctlc(child)
@ -357,14 +471,9 @@ def test_multi_subactors(
    # first name_error failure
    child.expect(PROMPT)
-    assert in_prompt_msg(
-        child,
-        [
-            _crash_msg,
-            "('name_error'",
-            "NameError",
-        ]
-    )
+    before = str(child.before.decode())
+    assert "Attaching to pdb in crashed actor: ('name_error'" in before
+    assert "NameError" in before

    if ctlc:
        do_ctlc(child)
@ -388,10 +497,8 @@ def test_multi_subactors(
    # breakpoint loop should re-engage
    child.sendline('c')
    child.expect(PROMPT)
-    assert in_prompt_msg(
-        child,
-        [_pause_msg, "('breakpoint_forever'"]
-    )
+    before = str(child.before.decode())
+    assert "Attaching pdb to actor: ('breakpoint_forever'" in before

    if ctlc:
        do_ctlc(child)
@ -431,28 +538,24 @@ def test_multi_subactors(
    child.expect(PROMPT)
    before = str(child.before.decode())

-    assert_before(
-        child, [
-            # debugger attaches to root
-            # "Attaching to pdb in crashed actor: ('root'",
-            _crash_msg,
-            "('root'",
+    assert_before(child, [
+        # debugger attaches to root
+        "Attaching to pdb in crashed actor: ('root'",
        # expect a multierror with exceptions for each sub-actor
        "RemoteActorError: ('breakpoint_forever'",
        "RemoteActorError: ('name_error'",
        "RemoteActorError: ('spawn_error'",
        "RemoteActorError: ('name_error_1'",
        'bdb.BdbQuit',
-        ]
-    )
+    ])

    if ctlc:
        do_ctlc(child)

    # process should exit
    child.sendline('c')
-    child.expect(EOF)
+    child.expect(pexpect.EOF)

    # repeat of previous multierror for final output
    assert_before(child, [
@ -482,28 +585,18 @@ def test_multi_daemon_subactors(
    # the root's tty lock first so anticipate either crash
    # message on the first entry.

-    bp_forev_parts = [
-        _pause_msg,
-        "('bp_forever'",
-    ]
-    bp_forev_in_msg = partial(
-        in_prompt_msg,
-        parts=bp_forev_parts,
-    )
+    bp_forever_msg = "Attaching pdb to actor: ('bp_forever'"
+    name_error_msg = "NameError: name 'doggypants' is not defined"

-    name_error_msg: str = "NameError: name 'doggypants' is not defined"
-    name_error_parts: list[str] = [name_error_msg]

    before = str(child.before.decode())

-    if bp_forev_in_msg(child=child):
-        next_parts = name_error_parts
+    if bp_forever_msg in before:
+        next_msg = name_error_msg

    elif name_error_msg in before:
-        next_parts = bp_forev_parts
+        next_msg = bp_forever_msg

    else:
-        raise ValueError('Neither log msg was found !?')
+        raise ValueError("Neither log msg was found !?")

    if ctlc:
        do_ctlc(child)
@ -517,10 +610,7 @@ def test_multi_daemon_subactors(
    child.sendline('c')
    child.expect(PROMPT)
-    assert_before(
-        child,
-        next_parts,
-    )
+    assert_before(child, [next_msg])

    # XXX: hooray the root clobbering the child here was fixed!
    # IMO, this demonstrates the true power of SC system design.
@ -528,7 +618,7 @@ def test_multi_daemon_subactors(
    # now the root actor won't clobber the bp_forever child
    # during it's first access to the debug lock, but will instead
    # wait for the lock to release, by the edge triggered
-    # ``devx._debug.Lock.no_remote_has_tty`` event before sending cancel messages
+    # ``_debug.Lock.no_remote_has_tty`` event before sending cancel messages
    # (via portals) to its underlings B)

    # at some point here there should have been some warning msg from
@ -544,15 +634,9 @@ def test_multi_daemon_subactors(
    child.expect(PROMPT)

    try:
-        assert_before(
-            child,
-            bp_forev_parts,
-        )
+        assert_before(child, [bp_forever_msg])
    except AssertionError:
-        assert_before(
-            child,
-            name_error_parts,
-        )
+        assert_before(child, [name_error_msg])

    else:
        if ctlc:
@ -564,36 +648,32 @@ def test_multi_daemon_subactors(
    child.sendline('c')
    child.expect(PROMPT)
-    assert_before(
-        child,
-        name_error_parts,
-    )
+    assert_before(child, [name_error_msg])

    # wait for final error in root
    # where it crashes with boxed error
    while True:
-        child.sendline('c')
-        child.expect(PROMPT)
-        if not in_prompt_msg(
-            child,
-            bp_forev_parts
-        ):
+        try:
+            child.sendline('c')
+            child.expect(PROMPT)
+            assert_before(
+                child,
+                [bp_forever_msg]
+            )
+        except AssertionError:
            break

    assert_before(
        child,
        [
            # boxed error raised in root task
-            # "Attaching to pdb in crashed actor: ('root'",
-            _crash_msg,
-            "('root'",
-            "_exceptions.RemoteActorError:",
-            "('name_error'",
+            "Attaching to pdb in crashed actor: ('root'",
+            "_exceptions.RemoteActorError: ('name_error'",
+            # should attach in root
+            # with an embedded RAE for..
+            # the src subactor which raised
        ]
    )

    child.sendline('c')
-    child.expect(EOF)
+    child.expect(pexpect.EOF)
@has_nested_actors
@ -608,7 +688,7 @@ def test_multi_subactors_root_errors(
    '''
    child = spawn('multi_subactor_root_errors')

-    # scan for the prompt
+    # scan for the pdbpp prompt
    child.expect(PROMPT)

    # at most one subactor should attach before the root is cancelled
@ -669,7 +749,7 @@ def test_multi_subactors_root_errors(
    ])
    child.sendline('c')
-    child.expect(EOF)
+    child.expect(pexpect.EOF)

    assert_before(child, [
        # "Attaching to pdb in crashed actor: ('root'",
@ -689,11 +769,10 @@ def test_multi_nested_subactors_error_through_nurseries(
    # https://github.com/goodboy/tractor/issues/320
    # ctlc: bool,
):
-    '''
-    Verify deeply nested actors that error trigger debugger entries
+    """Verify deeply nested actors that error trigger debugger entries
    at each actor nursery (level) all the way up the tree.
-
-    '''
+    """

    # NOTE: previously, inside this script was a bug where if the
    # parent errors before a 2-levels-lower actor has released the lock,
    # the parent tries to cancel it but it's stuck in the debugger?
@ -702,7 +781,7 @@ def test_multi_nested_subactors_error_through_nurseries(

    child = spawn('multi_nested_subactors_error_up_through_nurseries')

-    # timed_out_early: bool = False
+    timed_out_early: bool = False

    for send_char in itertools.cycle(['c', 'q']):
        try:
@ -713,31 +792,22 @@ def test_multi_nested_subactors_error_through_nurseries(
        except EOF:
            break

-    assert_before(
-        child,
-        [
+    assert_before(child, [

        # boxed source errors
        "NameError: name 'doggypants' is not defined",
-        "tractor._exceptions.RemoteActorError:",
-        "('name_error'",
+        "tractor._exceptions.RemoteActorError: ('name_error'",
        "bdb.BdbQuit",

        # first level subtrees
-        # "tractor._exceptions.RemoteActorError: ('spawner0'",
-        "src_uid=('spawner0'",
+        "tractor._exceptions.RemoteActorError: ('spawner0'",
        # "tractor._exceptions.RemoteActorError: ('spawner1'",

        # propagation of errors up through nested subtrees
-        # "tractor._exceptions.RemoteActorError: ('spawn_until_0'",
-        # "tractor._exceptions.RemoteActorError: ('spawn_until_1'",
-        # "tractor._exceptions.RemoteActorError: ('spawn_until_2'",
-        # ^-NOTE-^ old RAE repr, new one is below with a field
-        # showing the src actor's uid.
-        "src_uid=('spawn_until_0'",
-        "relay_uid=('spawn_until_1'",
-        "src_uid=('spawn_until_2'",
-        ]
-    )
+        "tractor._exceptions.RemoteActorError: ('spawn_until_0'",
+        "tractor._exceptions.RemoteActorError: ('spawn_until_1'",
+        "tractor._exceptions.RemoteActorError: ('spawn_until_2'",
+    ])
@pytest.mark.timeout(15)
@ -758,13 +828,10 @@ def test_root_nursery_cancels_before_child_releases_tty_lock(
    child = spawn('root_cancelled_but_child_is_in_tty_lock')

    child.expect(PROMPT)
-    assert_before(
-        child,
-        [
-            "NameError: name 'doggypants' is not defined",
-            "tractor._exceptions.RemoteActorError: ('name_error'",
-        ],
-    )
+
+    before = str(child.before.decode())
+    assert "NameError: name 'doggypants' is not defined" in before
+    assert "tractor._exceptions.RemoteActorError: ('name_error'" not in before
    time.sleep(0.5)

    if ctlc:
@ -802,7 +869,7 @@ def test_root_nursery_cancels_before_child_releases_tty_lock(
    for i in range(3):
        try:
-            child.expect(EOF, timeout=0.5)
+            child.expect(pexpect.EOF, timeout=0.5)
            break
        except TIMEOUT:
            child.sendline('c')
@ -815,14 +882,11 @@ def test_root_nursery_cancels_before_child_releases_tty_lock(
    if not timed_out_early:
        before = str(child.before.decode())
-        assert_before(
-            child,
-            [
-                "tractor._exceptions.RemoteActorError: ('spawner0'",
-                "tractor._exceptions.RemoteActorError: ('name_error'",
-                "NameError: name 'doggypants' is not defined",
-            ],
-        )
+        assert_before(child, [
+            "tractor._exceptions.RemoteActorError: ('spawner0'",
+            "tractor._exceptions.RemoteActorError: ('name_error'",
+            "NameError: name 'doggypants' is not defined",
+        ])
def test_root_cancels_child_context_during_startup(
@ -844,7 +908,7 @@ def test_root_cancels_child_context_during_startup(
        do_ctlc(child)

    child.sendline('c')
-    child.expect(EOF)
+    child.expect(pexpect.EOF)


def test_different_debug_mode_per_actor(
@ -855,249 +919,26 @@ def test_different_debug_mode_per_actor(
    child.expect(PROMPT)

    # only one actor should enter the debugger
-    assert in_prompt_msg(
-        child,
-        [_crash_msg, "('debugged_boi'", "RuntimeError"],
-    )
+    before = str(child.before.decode())
+    assert "Attaching to pdb in crashed actor: ('debugged_boi'" in before
+    assert "RuntimeError" in before

    if ctlc:
        do_ctlc(child)

    child.sendline('c')
-    child.expect(EOF)
+    child.expect(pexpect.EOF)
+
+    before = str(child.before.decode())

    # NOTE: this debugged actor error currently WON'T show up since the
    # root will actually cancel and terminate the nursery before the error
    # msg reported back from the debug mode actor is processed.
    # assert "tractor._exceptions.RemoteActorError: ('debugged_boi'" in before

    # the crash boi should not have made a debugger request but
    # instead crashed completely
-    assert_before(
-        child,
-        [
-            "tractor._exceptions.RemoteActorError:",
-            "src_uid=('crash_boi'",
-            "RuntimeError",
-        ]
-    )
+    assert "tractor._exceptions.RemoteActorError: ('crash_boi'" in before
+    assert "RuntimeError" in before
def test_post_mortem_api(
spawn,
ctlc: bool,
):
'''
Verify the `tractor.post_mortem()` API works in an exception
handler block.
'''
child = spawn('pm_in_subactor')
# First entry is via manual `.post_mortem()`
child.expect(PROMPT)
assert_before(
child,
[
_crash_msg,
"<Task 'name_error'",
"NameError",
"('child'",
"tractor.post_mortem()",
]
)
if ctlc:
do_ctlc(child)
child.sendline('c')
# 2nd is RPC crash handler
child.expect(PROMPT)
assert_before(
child,
[
_crash_msg,
"<Task 'name_error'",
"NameError",
"('child'",
]
)
if ctlc:
do_ctlc(child)
child.sendline('c')
# 3rd is via RAE bubbled to root's parent ctx task and
# crash-handled via another manual pm call.
child.expect(PROMPT)
assert_before(
child,
[
_crash_msg,
"<Task '__main__.main'",
"('root'",
"NameError",
"tractor.post_mortem()",
"src_uid=('child'",
]
)
if ctlc:
do_ctlc(child)
child.sendline('c')
# 4th and FINAL is via RAE bubbled to root's parent ctx task and
# crash-handled via another manual pm call.
child.expect(PROMPT)
assert_before(
child,
[
_crash_msg,
"<Task '__main__.main'",
"('root'",
"NameError",
"src_uid=('child'",
]
)
if ctlc:
do_ctlc(child)
# TODO: ensure we're stopped and showing the right call stack frame
# -[ ] need a way to strip the terminal color chars in order to
# pattern match... see TODO around `assert_before()` above!
# child.sendline('w')
# child.expect(PROMPT)
# assert_before(
# child,
# [
# # error src block annot at ctx open
# '-> async with p.open_context(name_error) as (ctx, first):',
# ]
# )
# # step up a frame to ensure the it's the root's nursery
# child.sendline('u')
# child.expect(PROMPT)
# assert_before(
# child,
# [
# # handler block annotation
# '-> async with tractor.open_nursery(',
# ]
# )
child.sendline('c')
child.expect(EOF)
def test_shield_pause(
spawn,
):
'''
    Verify the `tractor.pause()/.post_mortem()` API works inside an
    already cancelled `trio.CancelScope` and that you can step to the
    next checkpoint wherein the `trio.Cancelled` will get raised.
'''
child = spawn('shielded_pause')
# First entry is via manual `.post_mortem()`
child.expect(PROMPT)
assert_before(
child,
[
_pause_msg,
"cancellable_pause_loop'",
"('cancelled_before_pause'", # actor name
]
)
# since 3 tries in ex. shield pause loop
for i in range(3):
child.sendline('c')
child.expect(PROMPT)
assert_before(
child,
[
_pause_msg,
"INSIDE SHIELDED PAUSE",
"('cancelled_before_pause'", # actor name
]
)
# back inside parent task that opened nursery
child.sendline('c')
child.expect(PROMPT)
assert_before(
child,
[
_crash_msg,
"('cancelled_before_pause'", # actor name
_repl_fail_msg,
"trio.Cancelled",
"raise Cancelled._create()",
# we should be handling a taskc inside
            # the first `.post_mortem()` sin-shield!
'await DebugStatus.req_finished.wait()',
]
)
# same as above but in the root actor's task
child.sendline('c')
child.expect(PROMPT)
assert_before(
child,
[
_crash_msg,
"('root'", # actor name
_repl_fail_msg,
"trio.Cancelled",
"raise Cancelled._create()",
# handling a taskc inside the first unshielded
            # `.post_mortem()`.
# BUT in this case in the root-proc path ;)
'wait Lock._debug_lock.acquire()',
]
)
child.sendline('c')
child.expect(EOF)
# TODO: better error for "non-ideal" usage from the root actor.
# -[ ] if called from an async scope emit a message that suggests
# using `await tractor.pause()` instead since it's less overhead
# (in terms of `greenback` and/or extra threads) and if it's from
# a sync scope suggest that usage must first call
# `ensure_portal()` in the (eventual parent) async calling scope?
def test_sync_pause_from_bg_task_in_root_actor_():
'''
When used from the root actor, normally we can only implicitly
support `.pause_from_sync()` from the main-parent-task (that
opens the runtime via `open_root_actor()`) since `greenback`
requires a `.ensure_portal()` call per `trio.Task` where it is
used.
'''
...
# TODO: needs ANSI code stripping tho, see `assert_before()` # above!
def test_correct_frames_below_hidden():
'''
    Ensure that once a `tractor.pause()` engages, when the user
    inputs a "next"/"n" command the actual next line steps
    and that using a "step"/"s" into the next LOC, particularly
    `tractor` APIs, you can step down into that code.
'''
...
def test_cant_pause_from_paused_task():
'''
    Pausing from within an already paused task should raise an error.

    Normally this should only happen in practice while debugging the
    call stack of `tractor.pause()` itself, likely by a `.pause()`
    line somewhere inside our runtime.
'''
...


@ -9,24 +9,25 @@ import itertools

import pytest
import tractor
-from tractor._testing import tractor_test
import trio

+from conftest import tractor_test


@tractor_test
-async def test_reg_then_unreg(reg_addr):
+async def test_reg_then_unreg(arb_addr):
    actor = tractor.current_actor()
    assert actor.is_arbiter
    assert len(actor._registry) == 1  # only self is registered

    async with tractor.open_nursery(
-        registry_addrs=[reg_addr],
+        arbiter_addr=arb_addr,
    ) as n:

        portal = await n.start_actor('actor', enable_modules=[__name__])
        uid = portal.channel.uid

-        async with tractor.get_registry(*reg_addr) as aportal:
+        async with tractor.get_arbiter(*arb_addr) as aportal:
            # this local actor should be the arbiter
            assert actor is aportal.actor
@ -52,27 +53,15 @@ async def hi():
    return the_line.format(tractor.current_actor().name)


-async def say_hello(
-    other_actor: str,
-    reg_addr: tuple[str, int],
-):
+async def say_hello(other_actor):
    await trio.sleep(1)  # wait for other actor to spawn
-    async with tractor.find_actor(
-        other_actor,
-        registry_addrs=[reg_addr],
-    ) as portal:
+    async with tractor.find_actor(other_actor) as portal:
        assert portal is not None
        return await portal.run(__name__, 'hi')


-async def say_hello_use_wait(
-    other_actor: str,
-    reg_addr: tuple[str, int],
-):
-    async with tractor.wait_for_actor(
-        other_actor,
-        registry_addr=reg_addr,
-    ) as portal:
+async def say_hello_use_wait(other_actor):
+    async with tractor.wait_for_actor(other_actor) as portal:
        assert portal is not None
        result = await portal.run(__name__, 'hi')
        return result
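(For contrast, the two discovery APIs swapped throughout this hunk look like this side by side; the address literal here is hypothetical:)

```python
# '+' side (this branch): arbiter-flavored naming
async with tractor.get_arbiter('127.0.0.1', 1616) as portal:
    ...

# '-' side (the comparison base): registry-flavored naming
async with tractor.get_registry('127.0.0.1', 1616) as portal:
    ...
```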
@ -80,29 +69,21 @@ async def say_hello_use_wait(
@tractor_test
@pytest.mark.parametrize('func', [say_hello, say_hello_use_wait])
-async def test_trynamic_trio(
-    func,
-    start_method,
-    reg_addr,
-):
-    '''
-    Root actor acting as the "director" and running one-shot-task-actors
-    for the directed subs.
-    '''
+async def test_trynamic_trio(func, start_method, arb_addr):
+    """Main tractor entry point, the "master" process (for now
+    acts as the "director").
+    """
    async with tractor.open_nursery() as n:
        print("Alright... Action!")

        donny = await n.run_in_actor(
            func,
            other_actor='gretchen',
-            reg_addr=reg_addr,
            name='donny',
        )
        gretchen = await n.run_in_actor(
            func,
            other_actor='donny',
-            reg_addr=reg_addr,
            name='gretchen',
        )
        print(await gretchen.result())
@ -150,7 +131,7 @@ async def unpack_reg(actor_or_portal):

async def spawn_and_check_registry(
-    reg_addr: tuple,
+    arb_addr: tuple,
    use_signal: bool,
    remote_arbiter: bool = False,
    with_streaming: bool = False,
@ -158,9 +139,9 @@ async def spawn_and_check_registry(
) -> None:

    async with tractor.open_root_actor(
-        registry_addrs=[reg_addr],
+        arbiter_addr=arb_addr,
    ):
-        async with tractor.get_registry(*reg_addr) as portal:
+        async with tractor.get_arbiter(*arb_addr) as portal:
            # runtime needs to be up to call this
            actor = tractor.current_actor()
@ -181,9 +162,7 @@ async def spawn_and_check_registry(
        try:
            async with tractor.open_nursery() as n:
-                async with trio.open_nursery(
-                    strict_exception_groups=False,
-                ) as trion:
+                async with trio.open_nursery() as trion:

                    portals = {}
                    for i in range(3):
@ -234,19 +213,17 @@ async def spawn_and_check_registry(
def test_subactors_unregister_on_cancel(
    start_method,
    use_signal,
-    reg_addr,
+    arb_addr,
    with_streaming,
):
-    '''
-    Verify that cancelling a nursery results in all subactors
+    """Verify that cancelling a nursery results in all subactors
    deregistering themselves with the arbiter.
-
-    '''
+    """
    with pytest.raises(KeyboardInterrupt):
        trio.run(
            partial(
                spawn_and_check_registry,
-                reg_addr,
+                arb_addr,
                use_signal,
                remote_arbiter=False,
                with_streaming=with_streaming,
@ -260,7 +237,7 @@ def test_subactors_unregister_on_cancel_remote_daemon(
    daemon,
    start_method,
    use_signal,
-    reg_addr,
+    arb_addr,
    with_streaming,
):
    """Verify that cancelling a nursery results in all subactors
@ -271,7 +248,7 @@ def test_subactors_unregister_on_cancel_remote_daemon(
    trio.run(
        partial(
            spawn_and_check_registry,
-            reg_addr,
+            arb_addr,
            use_signal,
            remote_arbiter=True,
            with_streaming=with_streaming,
@ -285,7 +262,7 @@ async def streamer(agen):

async def close_chans_before_nursery(
-    reg_addr: tuple,
+    arb_addr: tuple,
    use_signal: bool,
    remote_arbiter: bool = False,
) -> None:
@ -298,9 +275,9 @@ async def close_chans_before_nursery(
        entries_at_end = 1

    async with tractor.open_root_actor(
-        registry_addrs=[reg_addr],
+        arbiter_addr=arb_addr,
    ):
-        async with tractor.get_registry(*reg_addr) as aportal:
+        async with tractor.get_arbiter(*arb_addr) as aportal:
            try:
                get_reg = partial(unpack_reg, aportal)
@ -318,9 +295,7 @@ async def close_chans_before_nursery(
                    async with portal2.open_stream_from(
                        stream_forever
                    ) as agen2:
-                        async with trio.open_nursery(
-                            strict_exception_groups=False,
-                        ) as n:
+                        async with trio.open_nursery() as n:
                            n.start_soon(streamer, agen1)
                            n.start_soon(cancel, use_signal, .5)
                            try:
@ -354,7 +329,7 @@ async def close_chans_before_nursery(
def test_close_channel_explicit(
    start_method,
    use_signal,
-    reg_addr,
+    arb_addr,
):
    """Verify that closing a stream explicitly and killing the actor's
    "root nursery" **before** the containing nursery tears down also
@ -364,7 +339,7 @@ def test_close_channel_explicit(
        trio.run(
            partial(
                close_chans_before_nursery,
-                reg_addr,
+                arb_addr,
                use_signal,
                remote_arbiter=False,
            ),
@ -376,7 +351,7 @@ def test_close_channel_explicit_remote_arbiter(
    daemon,
    start_method,
    use_signal,
-    reg_addr,
+    arb_addr,
):
    """Verify that closing a stream explicitly and killing the actor's
    "root nursery" **before** the containing nursery tears down also
@ -386,7 +361,7 @@ def test_close_channel_explicit_remote_arbiter(
        trio.run(
            partial(
                close_chans_before_nursery,
-                reg_addr,
+                arb_addr,
                use_signal,
                remote_arbiter=True,
            ),


@ -11,7 +11,8 @@ import platform
import shutil

import pytest
-from tractor._testing import (
+
+from conftest import (
    examples_dir,
)

@ -19,8 +20,8 @@ from tractor._testing import (
@pytest.fixture
def run_example_in_subproc(
    loglevel: str,
-    testdir: pytest.Pytester,
-    reg_addr: tuple[str, int],
+    testdir,
+    arb_addr: tuple[str, int],
):

    @contextmanager
@ -81,36 +82,27 @@ def run_example_in_subproc(

    # walk yields: (dirpath, dirnames, filenames)
    [
-        (p[0], f)
-        for p in os.walk(examples_dir())
-        for f in p[2]
+        (p[0], f) for p in os.walk(examples_dir()) for f in p[2]

-        if (
-            '__' not in f
-            and f[0] != '_'
-            and 'debugging' not in p[0]
-            and 'integration' not in p[0]
-            and 'advanced_faults' not in p[0]
-            and 'multihost' not in p[0]
-        )
+        if '__' not in f
+        and f[0] != '_'
+        and 'debugging' not in p[0]
+        and 'integration' not in p[0]
+        and 'advanced_faults' not in p[0]
    ],
    ids=lambda t: t[1],
)
-def test_example(
-    run_example_in_subproc,
-    example_script,
-):
-    '''
-    Load and run scripts from this repo's ``examples/`` dir as a user
+def test_example(run_example_in_subproc, example_script):
+    """Load and run scripts from this repo's ``examples/`` dir as a user
    would copy and pasting them into their editor.

    On windows a little more "finessing" is done to make
    ``multiprocessing`` play nice: we copy the ``__main__.py`` into the
    test directory and invoke the script as a module with ``python -m
    test_example``.
-
-    '''
-    ex_file: str = os.path.join(*example_script)
+    """
+    ex_file = os.path.join(*example_script)

    if 'rpc_bidir_streaming' in ex_file and sys.version_info < (3, 9):
        pytest.skip("2-way streaming example requires py3.9 async with syntax")
@ -136,8 +128,7 @@ def test_example(
        # shouldn't eventually once we figure out what's
        # a better way to be explicit about aio side
        # cancels?
-        and
-        'asyncio.exceptions.CancelledError' not in last_error
+        and 'asyncio.exceptions.CancelledError' not in last_error
    ):
        raise Exception(errmsg)


@ -1,946 +0,0 @@
'''
Low-level functional audits for our
"capability based messaging"-spec feats.
B~)
'''
from contextlib import (
contextmanager as cm,
# nullcontext,
)
import importlib
from typing import (
Any,
Type,
Union,
)
from msgspec import (
# structs,
# msgpack,
Raw,
# Struct,
ValidationError,
)
import pytest
import trio
import tractor
from tractor import (
Actor,
# _state,
MsgTypeError,
Context,
)
from tractor.msg import (
_codec,
_ctxvar_MsgCodec,
_exts,
NamespacePath,
MsgCodec,
MsgDec,
mk_codec,
mk_dec,
apply_codec,
current_codec,
)
from tractor.msg.types import (
log,
Started,
# _payload_msgs,
# PayloadMsg,
# mk_msg_spec,
)
from tractor.msg._ops import (
limit_plds,
)
def enc_nsp(obj: Any) -> Any:
actor: Actor = tractor.current_actor(
err_on_no_runtime=False,
)
uid: tuple[str, str]|None = None if not actor else actor.uid
print(f'{uid} ENC HOOK')
match obj:
# case NamespacePath()|str():
case NamespacePath():
encoded: str = str(obj)
print(
f'----- ENCODING `NamespacePath` as `str` ------\n'
f'|_obj:{type(obj)!r} = {obj!r}\n'
f'|_encoded: str = {encoded!r}\n'
)
# if type(obj) != NamespacePath:
# breakpoint()
return encoded
case _:
logmsg: str = (
f'{uid}\n'
'FAILED ENCODE\n'
f'obj-> `{obj}: {type(obj)}`\n'
)
raise NotImplementedError(logmsg)
def dec_nsp(
obj_type: Type,
obj: Any,
) -> Any:
# breakpoint()
actor: Actor = tractor.current_actor(
err_on_no_runtime=False,
)
uid: tuple[str, str]|None = None if not actor else actor.uid
print(
f'{uid}\n'
'CUSTOM DECODE\n'
f'type-arg-> {obj_type}\n'
f'obj-arg-> `{obj}`: {type(obj)}\n'
)
nsp = None
# XXX, never happens right?
if obj_type is Raw:
breakpoint()
if (
obj_type is NamespacePath
and isinstance(obj, str)
and ':' in obj
):
nsp = NamespacePath(obj)
# TODO: we could built a generic handler using
# JUST matching the obj_type part?
# nsp = obj_type(obj)
if nsp:
print(f'Returning NSP instance: {nsp}')
return nsp
logmsg: str = (
f'{uid}\n'
'FAILED DECODE\n'
f'type-> {obj_type}\n'
f'obj-arg-> `{obj}`: {type(obj)}\n\n'
f'current codec:\n'
f'{current_codec()}\n'
)
# TODO: figure out the ignore subsys for this!
# -[ ] option whether to defense-relay backc the msg
# inside an `Invalid`/`Ignore`
# -[ ] how to make this handling pluggable such that a
# `Channel`/`MsgTransport` can intercept and process
# back msgs either via exception handling or some other
# signal?
log.warning(logmsg)
# NOTE: this delivers the invalid
# value up to `msgspec`'s decoding
# machinery for error raising.
return obj
# raise NotImplementedError(logmsg)
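(The `enc_nsp()`/`dec_nsp()` pair above follows `msgspec`'s standard extension recipe; a self-contained sketch of the same hook mechanics with a toy type that is not from this file:)

```python
import msgspec

class Point:
    def __init__(self, x: float, y: float) -> None:
        self.x, self.y = x, y

def enc_hook(obj: object) -> object:
    # reduce unknown types to something msgspec can encode
    if isinstance(obj, Point):
        return (obj.x, obj.y)
    raise NotImplementedError(type(obj))

def dec_hook(typ: type, obj: object) -> object:
    # rebuild the custom type on the way back in
    if typ is Point:
        return Point(*obj)
    return obj

buf = msgspec.msgpack.encode(Point(1, 2), enc_hook=enc_hook)
pt = msgspec.msgpack.decode(buf, type=Point, dec_hook=dec_hook)
assert (pt.x, pt.y) == (1, 2)
```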
def ex_func(*args):
'''
A mod level func we can ref and load via our `NamespacePath`
python-object pointer `str` subtype.
'''
print(f'ex_func({args})')
@pytest.mark.parametrize(
'add_codec_hooks',
[
True,
False,
],
ids=['use_codec_hooks', 'no_codec_hooks'],
)
def test_custom_extension_types(
debug_mode: bool,
add_codec_hooks: bool
):
'''
Verify that a `MsgCodec` (used for encoding all outbound IPC msgs
and decoding all inbound `PayloadMsg`s) and a paired `MsgDec`
(used for decoding the `PayloadMsg.pld: Raw` received within a given
task's ipc `Context` scope) can both send and receive "extension types"
as supported via custom converter hooks passed to `msgspec`.
'''
nsp_pld_dec: MsgDec = mk_dec(
spec=None, # ONLY support the ext type
dec_hook=dec_nsp if add_codec_hooks else None,
ext_types=[NamespacePath],
)
nsp_codec: MsgCodec = mk_codec(
# ipc_pld_spec=Raw, # default!
# NOTE XXX: the encode hook MUST be used no matter what since
# our `NamespacePath` is not any of a `Any` native type nor
# a `msgspec.Struct` subtype - so `msgspec` has no way to know
# how to encode it unless we provide the custom hook.
#
# AGAIN that is, regardless of whether we spec an
# `Any`-decoded-pld the enc has no knowledge (by default)
# how to enc `NamespacePath` (nsp), so we add a custom
# hook to do that ALWAYS.
enc_hook=enc_nsp if add_codec_hooks else None,
# XXX NOTE: pretty sure this is mutex with the `type=` to
# `Decoder`? so it won't work in tandem with the
# `ipc_pld_spec` passed above?
ext_types=[NamespacePath],
# TODO? is it useful to have the `.pld` decoded *prior* to
# the `PldRx`?? like perf or mem related?
# ext_dec=nsp_pld_dec,
)
if add_codec_hooks:
assert nsp_codec.dec.dec_hook is None
# TODO? if we pass `ext_dec` above?
# assert nsp_codec.dec.dec_hook is dec_nsp
assert nsp_codec.enc.enc_hook is enc_nsp
nsp = NamespacePath.from_ref(ex_func)
try:
nsp_bytes: bytes = nsp_codec.encode(nsp)
nsp_rt_sin_msg = nsp_pld_dec.decode(nsp_bytes)
        assert nsp_rt_sin_msg.load_ref() is ex_func
except TypeError:
if not add_codec_hooks:
pass
try:
msg_bytes: bytes = nsp_codec.encode(
Started(
cid='cid',
pld=nsp,
)
)
# since the ext-type obj should also be set as the msg.pld
assert nsp_bytes in msg_bytes
started_rt: Started = nsp_codec.decode(msg_bytes)
pld: Raw = started_rt.pld
assert isinstance(pld, Raw)
nsp_rt: NamespacePath = nsp_pld_dec.decode(pld)
assert isinstance(nsp_rt, NamespacePath)
# in obj comparison terms they should be the same
assert nsp_rt == nsp
# ensure we've decoded to ext type!
assert nsp_rt.load_ref() is ex_func
except TypeError:
if not add_codec_hooks:
pass
@tractor.context
async def sleep_forever_in_sub(
ctx: Context,
) -> None:
await trio.sleep_forever()
def mk_custom_codec(
add_hooks: bool,
) -> tuple[
MsgCodec, # encode to send
MsgDec, # pld receive-n-decode
]:
'''
Create custom `msgpack` enc/dec-hooks and set a `Decoder`
which only loads `pld_spec` (like `NamespacePath`) types.
'''
# XXX NOTE XXX: despite defining `NamespacePath` as a type
# field on our `PayloadMsg.pld`, we still need a enc/dec_hook() pair
# to cast to/from that type on the wire. See the docs:
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
# if pld_spec is Any:
# pld_spec = Raw
nsp_codec: MsgCodec = mk_codec(
# ipc_pld_spec=Raw, # default!
# NOTE XXX: the encode hook MUST be used no matter what since
# our `NamespacePath` is not any of a `Any` native type nor
# a `msgspec.Struct` subtype - so `msgspec` has no way to know
# how to encode it unless we provide the custom hook.
#
# AGAIN that is, regardless of whether we spec an
# `Any`-decoded-pld the enc has no knowledge (by default)
# how to enc `NamespacePath` (nsp), so we add a custom
# hook to do that ALWAYS.
enc_hook=enc_nsp if add_hooks else None,
# XXX NOTE: pretty sure this is mutex with the `type=` to
# `Decoder`? so it won't work in tandem with the
# `ipc_pld_spec` passed above?
ext_types=[NamespacePath],
)
# dec_hook=dec_nsp if add_hooks else None,
return nsp_codec
@pytest.mark.parametrize(
'limit_plds_args',
[
(
{'dec_hook': None, 'ext_types': None},
None,
),
(
{'dec_hook': dec_nsp, 'ext_types': None},
TypeError,
),
(
{'dec_hook': dec_nsp, 'ext_types': [NamespacePath]},
None,
),
(
{'dec_hook': dec_nsp, 'ext_types': [NamespacePath|None]},
None,
),
],
ids=[
'no_hook_no_ext_types',
'only_hook',
'hook_and_ext_types',
'hook_and_ext_types_w_null',
]
)
def test_pld_limiting_usage(
limit_plds_args: tuple[dict, Exception|None],
):
'''
Verify `dec_hook()` and `ext_types` need to either both be
    provided or we raise an explanatory type-error.
'''
kwargs, maybe_err = limit_plds_args
async def main():
async with tractor.open_nursery() as an: # just to open runtime
# XXX SHOULD NEVER WORK outside an ipc ctx scope!
try:
with limit_plds(**kwargs):
pass
except RuntimeError:
pass
p: tractor.Portal = await an.start_actor(
'sub',
enable_modules=[__name__],
)
async with (
p.open_context(
sleep_forever_in_sub
) as (ctx, first),
):
try:
with limit_plds(**kwargs):
pass
except maybe_err as exc:
assert type(exc) is maybe_err
pass
def chk_codec_applied(
expect_codec: MsgCodec|None,
enter_value: MsgCodec|None = None,
) -> MsgCodec:
'''
buncha sanity checks ensuring that the IPC channel's
    context-vars are set to the expected codec and that our
ctx-var wrapper APIs match the same.
'''
# TODO: play with tricyle again, bc this is supposed to work
# the way we want?
#
# TreeVar
# task: trio.Task = trio.lowlevel.current_task()
# curr_codec = _ctxvar_MsgCodec.get_in(task)
# ContextVar
# task_ctx: Context = task.context
# assert _ctxvar_MsgCodec in task_ctx
# curr_codec: MsgCodec = task.context[_ctxvar_MsgCodec]
if expect_codec is None:
assert enter_value is None
return
# NOTE: currently we use this!
# RunVar
curr_codec: MsgCodec = current_codec()
last_read_codec = _ctxvar_MsgCodec.get()
# assert curr_codec is last_read_codec
assert (
(same_codec := expect_codec) is
# returned from `mk_codec()`
# yielded value from `apply_codec()`
# read from current task's `contextvars.Context`
curr_codec is
last_read_codec
# the default `msgspec` settings
is not _codec._def_msgspec_codec
is not _codec._def_tractor_codec
)
if enter_value:
assert enter_value is same_codec
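(For readers unfamiliar with the ctx-var mechanics being sanity-checked here, the stdlib primitive works like this; a minimal, standalone sketch:)

```python
import contextvars

_codec_var: contextvars.ContextVar[str] = contextvars.ContextVar(
    'codec', default='default-codec',
)

# set returns a token so the previous value can be restored
tok = _codec_var.set('custom-codec')
assert _codec_var.get() == 'custom-codec'
_codec_var.reset(tok)
assert _codec_var.get() == 'default-codec'
```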
@tractor.context
async def send_back_values(
ctx: Context,
rent_pld_spec_type_strs: list[str],
add_hooks: bool,
) -> None:
'''
Setup up a custom codec to load instances of `NamespacePath`
and ensure we can round trip a func ref with our parent.
'''
uid: tuple = tractor.current_actor().uid
# init state in sub-actor should be default
chk_codec_applied(
expect_codec=_codec._def_tractor_codec,
)
# load pld spec from input str
rent_pld_spec = _exts.dec_type_union(
rent_pld_spec_type_strs,
mods=[
importlib.import_module(__name__),
],
)
rent_pld_spec_types: set[Type] = _codec.unpack_spec_types(
rent_pld_spec,
)
# ONLY add ext-hooks if the rent specified a non-std type!
add_hooks: bool = (
NamespacePath in rent_pld_spec_types
and
add_hooks
)
# same as on parent side config.
nsp_codec: MsgCodec|None = None
if add_hooks:
nsp_codec = mk_codec(
enc_hook=enc_nsp,
ext_types=[NamespacePath],
)
with (
maybe_apply_codec(nsp_codec) as codec,
limit_plds(
rent_pld_spec,
dec_hook=dec_nsp if add_hooks else None,
ext_types=[NamespacePath] if add_hooks else None,
) as pld_dec,
):
# ?XXX? SHOULD WE NOT be swapping the global codec since it
# breaks `Context.started()` roundtripping checks??
chk_codec_applied(
expect_codec=nsp_codec,
enter_value=codec,
)
# ?TODO, mismatch case(s)?
#
# ensure pld spec matches on both sides
ctx_pld_dec: MsgDec = ctx._pld_rx._pld_dec
assert pld_dec is ctx_pld_dec
child_pld_spec: Type = pld_dec.spec
child_pld_spec_types: set[Type] = _codec.unpack_spec_types(
child_pld_spec,
)
assert (
child_pld_spec_types.issuperset(
rent_pld_spec_types
)
)
# ?TODO, try loop for each of the types in pld-superset?
#
# for send_value in [
# nsp,
# str(nsp),
# None,
# ]:
nsp = NamespacePath.from_ref(ex_func)
try:
print(
f'{uid}: attempting to `.started({nsp})`\n'
f'\n'
f'rent_pld_spec: {rent_pld_spec}\n'
f'child_pld_spec: {child_pld_spec}\n'
f'codec: {codec}\n'
)
# await tractor.pause()
await ctx.started(nsp)
except tractor.MsgTypeError as _mte:
mte = _mte
# false -ve case
if add_hooks:
raise RuntimeError(
f'EXPECTED to be able to `.started()` the value given the spec ??\n\n'
f'child_pld_spec -> {child_pld_spec}\n'
f'value = {nsp}: {type(nsp)}\n'
)
# true -ve case
raise mte
# TODO: maybe we should add our own wrapper error so as to
# be interchange-lib agnostic?
# -[ ] the error type is wtv is raised from the hook so we
# could also require a type-class of errors for
# indicating whether the hook-failure can be handled by
# a nasty-dialog-unprot sub-sys?
except TypeError as typerr:
# false -ve
if add_hooks:
raise RuntimeError('Should have been able to send `nsp`??')
# true -ve
print('Failed to send `nsp` due to no ext hooks set!')
raise typerr
# now try sending a set of valid and invalid plds to ensure
# the pld spec is respected.
sent: list[Any] = []
async with ctx.open_stream() as ipc:
print(
f'{uid}: streaming all pld types to rent..'
)
# for send_value, expect_send in iter_send_val_items:
for send_value in [
nsp,
str(nsp),
None,
]:
send_type: Type = type(send_value)
print(
f'{uid}: SENDING NEXT pld\n'
f'send_type: {send_type}\n'
f'send_value: {send_value}\n'
)
try:
await ipc.send(send_value)
sent.append(send_value)
except ValidationError as valerr:
print(f'{uid} FAILED TO SEND {send_value}!')
# false -ve
if add_hooks:
raise RuntimeError(
f'EXPECTED to roundtrip value given spec:\n'
f'rent_pld_spec -> {rent_pld_spec}\n'
f'child_pld_spec -> {child_pld_spec}\n'
f'value = {send_value}: {send_type}\n'
)
# true -ve
raise valerr
# continue
else:
print(
f'{uid}: finished sending all values\n'
'Should be exiting stream block!\n'
)
print(f'{uid}: exited streaming block!')
@cm
def maybe_apply_codec(codec: MsgCodec|None) -> MsgCodec|None:
if codec is None:
yield None
return
with apply_codec(codec) as codec:
yield codec
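# NOTE (annotation added for exposition): a `None` input simply means
# "leave the currently applied (default) codec in place"; this lets the
# parametrized cases below share one code path whether or not custom
# hooks are in play.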
@pytest.mark.parametrize(
'pld_spec',
[
Any,
NamespacePath,
NamespacePath|None, # the "maybe" spec Bo
],
ids=[
'any_type',
'only_nsp_ext',
'maybe_nsp_ext',
]
)
@pytest.mark.parametrize(
'add_hooks',
[
True,
False,
],
ids=[
'use_codec_hooks',
'no_codec_hooks',
],
)
def test_ext_types_over_ipc(
debug_mode: bool,
pld_spec: Union[Type],
add_hooks: bool,
):
'''
Ensure we can support extension types converted using
`enc/dec_hook()`s passed to the `.msg.limit_plds()` API
and that sane errors happen when we try to do the same without
the codec hooks.
'''
pld_types: set[Type] = _codec.unpack_spec_types(pld_spec)
async def main():
# sanity check the default pld-spec beforehand
chk_codec_applied(
expect_codec=_codec._def_tractor_codec,
)
# extension type we want to send as msg payload
nsp = NamespacePath.from_ref(ex_func)
# ^NOTE, 2 cases:
# - codec hooks not added -> decode nsp as `str`
# - codec with hooks -> decode nsp as `NamespacePath`
nsp_codec: MsgCodec|None = None
if (
NamespacePath in pld_types
and
add_hooks
):
nsp_codec = mk_codec(
enc_hook=enc_nsp,
ext_types=[NamespacePath],
)
async with tractor.open_nursery(
debug_mode=debug_mode,
) as an:
p: tractor.Portal = await an.start_actor(
'sub',
enable_modules=[__name__],
)
with (
maybe_apply_codec(nsp_codec) as codec,
):
chk_codec_applied(
expect_codec=nsp_codec,
enter_value=codec,
)
rent_pld_spec_type_strs: list[str] = _exts.enc_type_union(pld_spec)
# XXX should raise an mte (`MsgTypeError`)
# when `add_hooks == False` bc the input
# `expect_ipc_send` kwarg has a nsp which can't be
# serialized!
#
# TODO:can we ensure this happens from the
# `Return`-side (aka the sub) as well?
try:
ctx: tractor.Context
ipc: tractor.MsgStream
async with (
# XXX should raise an mte (`MsgTypeError`)
# when `add_hooks == False`..
p.open_context(
send_back_values,
# expect_debug=debug_mode,
rent_pld_spec_type_strs=rent_pld_spec_type_strs,
add_hooks=add_hooks,
# expect_ipc_send=expect_ipc_send,
) as (ctx, first),
ctx.open_stream() as ipc,
):
with (
limit_plds(
pld_spec,
dec_hook=dec_nsp if add_hooks else None,
ext_types=[NamespacePath] if add_hooks else None,
) as pld_dec,
):
ctx_pld_dec: MsgDec = ctx._pld_rx._pld_dec
assert pld_dec is ctx_pld_dec
# if (
# not add_hooks
# and
# NamespacePath in
# ):
# pytest.fail('ctx should fail to open without custom enc_hook!?')
await ipc.send(nsp)
nsp_rt = await ipc.receive()
assert nsp_rt == nsp
assert nsp_rt.load_ref() is ex_func
# this test passes bc we can go no further!
except MsgTypeError as mte:
# if not add_hooks:
# # teardown nursery
# await p.cancel_actor()
# return
raise mte
await p.cancel_actor()
if (
NamespacePath in pld_types
and
add_hooks
):
trio.run(main)
else:
with pytest.raises(
expected_exception=tractor.RemoteActorError,
) as excinfo:
trio.run(main)
exc = excinfo.value
# bc `.started(nsp: NamespacePath)` will raise
assert exc.boxed_type is TypeError
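# ------ illustrative sketch, NOT part of the original suite ------
# the user-facing pattern the above test exercises, reduced to its
# core; assumes it is run from inside an already-open IPC context
# (else `limit_plds()` raises) with `ipc` an open `MsgStream`, using
# the same hooks/types as above.
async def _sketch_ext_type_roundtrip(ipc: tractor.MsgStream) -> None:
    codec: MsgCodec = mk_codec(
        enc_hook=enc_nsp,
        ext_types=[NamespacePath],
    )
    with (
        apply_codec(codec),
        limit_plds(
            NamespacePath,
            dec_hook=dec_nsp,
            ext_types=[NamespacePath],
        ),
    ):
        # a func ref round-trips as a first-class `NamespacePath`
        await ipc.send(NamespacePath.from_ref(ex_func))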
# def chk_pld_type(
# payload_spec: Type[Struct]|Any,
# pld: Any,
# expect_roundtrip: bool|None = None,
# ) -> bool:
# pld_val_type: Type = type(pld)
# # TODO: verify that the overridden subtypes
# # DO NOT have modified type-annots from original!
# # 'Start', .pld: FuncSpec
# # 'StartAck', .pld: IpcCtxSpec
# # 'Stop', .pld: UNSEt
# # 'Error', .pld: ErrorData
# codec: MsgCodec = mk_codec(
# # NOTE: this ONLY accepts `PayloadMsg.pld` fields of a specified
# # type union.
# ipc_pld_spec=payload_spec,
# )
# # make a one-off dec to compare with our `MsgCodec` instance
# # which does the below `mk_msg_spec()` call internally
# ipc_msg_spec: Union[Type[Struct]]
# msg_types: list[PayloadMsg[payload_spec]]
# (
# ipc_msg_spec,
# msg_types,
# ) = mk_msg_spec(
# payload_type_union=payload_spec,
# )
# _enc = msgpack.Encoder()
# _dec = msgpack.Decoder(
# type=ipc_msg_spec or Any, # like `PayloadMsg[Any]`
# )
# assert (
# payload_spec
# ==
# codec.pld_spec
# )
# # assert codec.dec == dec
# #
# # ^-XXX-^ not sure why these aren't "equal" but when cast
# # to `str` they seem to match ?? .. kk
# assert (
# str(ipc_msg_spec)
# ==
# str(codec.msg_spec)
# ==
# str(_dec.type)
# ==
# str(codec.dec.type)
# )
# # verify the boxed-type for all variable payload-type msgs.
# if not msg_types:
# breakpoint()
# roundtrip: bool|None = None
# pld_spec_msg_names: list[str] = [
# td.__name__ for td in _payload_msgs
# ]
# for typedef in msg_types:
# skip_runtime_msg: bool = typedef.__name__ not in pld_spec_msg_names
# if skip_runtime_msg:
# continue
# pld_field = structs.fields(typedef)[1]
# assert pld_field.type is payload_spec # TODO-^ does this need to work to get all subtypes to adhere?
# kwargs: dict[str, Any] = {
# 'cid': '666',
# 'pld': pld,
# }
# enc_msg: PayloadMsg = typedef(**kwargs)
# _wire_bytes: bytes = _enc.encode(enc_msg)
# wire_bytes: bytes = codec.enc.encode(enc_msg)
# assert _wire_bytes == wire_bytes
# ve: ValidationError|None = None
# try:
# dec_msg = codec.dec.decode(wire_bytes)
# _dec_msg = _dec.decode(wire_bytes)
# # decoded msg and thus payload should be exactly same!
# assert (roundtrip := (
# _dec_msg
# ==
# dec_msg
# ==
# enc_msg
# ))
# if (
# expect_roundtrip is not None
# and expect_roundtrip != roundtrip
# ):
# breakpoint()
# assert (
# pld
# ==
# dec_msg.pld
# ==
# enc_msg.pld
# )
# # assert (roundtrip := (_dec_msg == enc_msg))
# except ValidationError as _ve:
# ve = _ve
# roundtrip: bool = False
# if pld_val_type is payload_spec:
# raise ValueError(
# 'Got `ValidationError` despite type-var match!?\n'
# f'pld_val_type: {pld_val_type}\n'
# f'payload_type: {payload_spec}\n'
# ) from ve
# else:
# # ow we good cuz the pld spec mismatched.
# print(
# 'Got expected `ValidationError` since,\n'
# f'{pld_val_type} is not {payload_spec}\n'
# )
# else:
# if (
# payload_spec is not Any
# and
# pld_val_type is not payload_spec
# ):
# raise ValueError(
# 'DID NOT `ValidationError` despite expected type match!?\n'
# f'pld_val_type: {pld_val_type}\n'
# f'payload_type: {payload_spec}\n'
# )
# # full code decode should always be attempted!
# if roundtrip is None:
# breakpoint()
# return roundtrip
# ?TODO? maybe remove since covered in the newer `test_pldrx_limiting`
# via end-2-end testing of all this?
# -[ ] IOW do we really NEED this lowlevel unit testing?
#
# def test_limit_msgspec(
# debug_mode: bool,
# ):
# '''
# Internals unit testing to verify that type-limiting an IPC ctx's
# msg spec with `Pldrx.limit_plds()` results in various
# encapsulated `msgspec` object settings and state.
# '''
# async def main():
# async with tractor.open_root_actor(
# debug_mode=debug_mode,
# ):
# # ensure we can round-trip a boxing `PayloadMsg`
# assert chk_pld_type(
# payload_spec=Any,
# pld=None,
# expect_roundtrip=True,
# )
# # verify that a mis-typed payload value won't decode
# assert not chk_pld_type(
# payload_spec=int,
# pld='doggy',
# )
# # parametrize the boxed `.pld` type as a custom-struct
# # and ensure that parametrization propagates
# # to all payload-msg-spec-able subtypes!
# class CustomPayload(Struct):
# name: str
# value: Any
# assert not chk_pld_type(
# payload_spec=CustomPayload,
# pld='doggy',
# )
# assert chk_pld_type(
# payload_spec=CustomPayload,
# pld=CustomPayload(name='doggy', value='urmom')
# )
# # yah, we can `.pause_from_sync()` now!
# # breakpoint()
# trio.run(main)

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -9,7 +9,7 @@ import trio
import tractor import tractor
import pytest import pytest
from tractor._testing import tractor_test from conftest import tractor_test
def test_must_define_ctx(): def test_must_define_ctx():
@ -38,13 +38,10 @@ async def async_gen_stream(sequence):
assert cs.cancelled_caught assert cs.cancelled_caught
# TODO: deprecated either remove entirely
# or re-impl in terms of `MsgStream` one-sides
# wrapper, but at least remove `Portal.open_stream_from()`
@tractor.stream @tractor.stream
async def context_stream( async def context_stream(
ctx: tractor.Context, ctx: tractor.Context,
sequence: list[int], sequence
): ):
for i in sequence: for i in sequence:
await ctx.send_yield(i) await ctx.send_yield(i)
@ -58,7 +55,7 @@ async def context_stream(
async def stream_from_single_subactor( async def stream_from_single_subactor(
reg_addr, arb_addr,
start_method, start_method,
stream_func, stream_func,
): ):
@ -67,7 +64,7 @@ async def stream_from_single_subactor(
# only one per host address, spawns an actor if None # only one per host address, spawns an actor if None
async with tractor.open_nursery( async with tractor.open_nursery(
registry_addrs=[reg_addr], arbiter_addr=arb_addr,
start_method=start_method, start_method=start_method,
) as nursery: ) as nursery:
@ -118,13 +115,13 @@ async def stream_from_single_subactor(
@pytest.mark.parametrize( @pytest.mark.parametrize(
'stream_func', [async_gen_stream, context_stream] 'stream_func', [async_gen_stream, context_stream]
) )
def test_stream_from_single_subactor(reg_addr, start_method, stream_func): def test_stream_from_single_subactor(arb_addr, start_method, stream_func):
"""Verify streaming from a spawned async generator. """Verify streaming from a spawned async generator.
""" """
trio.run( trio.run(
partial( partial(
stream_from_single_subactor, stream_from_single_subactor,
reg_addr, arb_addr,
start_method, start_method,
stream_func=stream_func, stream_func=stream_func,
), ),
@ -228,14 +225,14 @@ async def a_quadruple_example():
return result_stream return result_stream
async def cancel_after(wait, reg_addr): async def cancel_after(wait, arb_addr):
async with tractor.open_root_actor(registry_addrs=[reg_addr]): async with tractor.open_root_actor(arbiter_addr=arb_addr):
with trio.move_on_after(wait): with trio.move_on_after(wait):
return await a_quadruple_example() return await a_quadruple_example()
@pytest.fixture(scope='module') @pytest.fixture(scope='module')
def time_quad_ex(reg_addr, ci_env, spawn_backend): def time_quad_ex(arb_addr, ci_env, spawn_backend):
if spawn_backend == 'mp': if spawn_backend == 'mp':
"""no idea but the mp *nix runs are flaking out here often... """no idea but the mp *nix runs are flaking out here often...
""" """
@ -243,7 +240,7 @@ def time_quad_ex(reg_addr, ci_env, spawn_backend):
timeout = 7 if platform.system() in ('Windows', 'Darwin') else 4 timeout = 7 if platform.system() in ('Windows', 'Darwin') else 4
start = time.time() start = time.time()
results = trio.run(cancel_after, timeout, reg_addr) results = trio.run(cancel_after, timeout, arb_addr)
diff = time.time() - start diff = time.time() - start
assert results assert results
return results, diff return results, diff
@ -263,14 +260,14 @@ def test_a_quadruple_example(time_quad_ex, ci_env, spawn_backend):
list(map(lambda i: i/10, range(3, 9))) list(map(lambda i: i/10, range(3, 9)))
) )
def test_not_fast_enough_quad( def test_not_fast_enough_quad(
reg_addr, time_quad_ex, cancel_delay, ci_env, spawn_backend arb_addr, time_quad_ex, cancel_delay, ci_env, spawn_backend
): ):
"""Verify we can cancel midway through the quad example and all actors """Verify we can cancel midway through the quad example and all actors
cancel gracefully. cancel gracefully.
""" """
results, diff = time_quad_ex results, diff = time_quad_ex
delay = max(diff - cancel_delay, 0) delay = max(diff - cancel_delay, 0)
results = trio.run(cancel_after, delay, reg_addr) results = trio.run(cancel_after, delay, arb_addr)
system = platform.system() system = platform.system()
if system in ('Windows', 'Darwin') and results is not None: if system in ('Windows', 'Darwin') and results is not None:
# In CI envoirments it seems later runs are quicker then the first # In CI envoirments it seems later runs are quicker then the first
@ -283,7 +280,7 @@ def test_not_fast_enough_quad(
@tractor_test @tractor_test
async def test_respawn_consumer_task( async def test_respawn_consumer_task(
reg_addr, arb_addr,
spawn_backend, spawn_backend,
loglevel, loglevel,
): ):

View File

@ -7,7 +7,7 @@ import pytest
import trio import trio
import tractor import tractor
from tractor._testing import tractor_test from conftest import tractor_test
@pytest.mark.trio @pytest.mark.trio
@ -24,7 +24,7 @@ async def test_no_runtime():
@tractor_test @tractor_test
async def test_self_is_registered(reg_addr): async def test_self_is_registered(arb_addr):
"Verify waiting on the arbiter to register itself using the standard api." "Verify waiting on the arbiter to register itself using the standard api."
actor = tractor.current_actor() actor = tractor.current_actor()
assert actor.is_arbiter assert actor.is_arbiter
@ -34,20 +34,20 @@ async def test_self_is_registered(reg_addr):
@tractor_test @tractor_test
async def test_self_is_registered_localportal(reg_addr): async def test_self_is_registered_localportal(arb_addr):
"Verify waiting on the arbiter to register itself using a local portal." "Verify waiting on the arbiter to register itself using a local portal."
actor = tractor.current_actor() actor = tractor.current_actor()
assert actor.is_arbiter assert actor.is_arbiter
async with tractor.get_registry(*reg_addr) as portal: async with tractor.get_arbiter(*arb_addr) as portal:
assert isinstance(portal, tractor._portal.LocalPortal) assert isinstance(portal, tractor._portal.LocalPortal)
with trio.fail_after(0.2): with trio.fail_after(0.2):
sockaddr = await portal.run_from_ns( sockaddr = await portal.run_from_ns(
'self', 'wait_for_actor', name='root') 'self', 'wait_for_actor', name='root')
assert sockaddr[0] == reg_addr assert sockaddr[0] == arb_addr
def test_local_actor_async_func(reg_addr): def test_local_actor_async_func(arb_addr):
"""Verify a simple async function in-process. """Verify a simple async function in-process.
""" """
nums = [] nums = []
@ -55,7 +55,7 @@ def test_local_actor_async_func(reg_addr):
async def print_loop(): async def print_loop():
async with tractor.open_root_actor( async with tractor.open_root_actor(
registry_addrs=[reg_addr], arbiter_addr=arb_addr,
): ):
# arbiter is started in-proc if dne # arbiter is started in-proc if dne
assert tractor.current_actor().is_arbiter assert tractor.current_actor().is_arbiter

View File

@ -7,10 +7,8 @@ import time
import pytest import pytest
import trio import trio
import tractor import tractor
from tractor._testing import ( from conftest import (
tractor_test, tractor_test,
)
from .conftest import (
sig_prog, sig_prog,
_INT_SIGNAL, _INT_SIGNAL,
_INT_RETURN_CODE, _INT_RETURN_CODE,
@ -30,9 +28,9 @@ def test_abort_on_sigint(daemon):
@tractor_test @tractor_test
async def test_cancel_remote_arbiter(daemon, reg_addr): async def test_cancel_remote_arbiter(daemon, arb_addr):
assert not tractor.current_actor().is_arbiter assert not tractor.current_actor().is_arbiter
async with tractor.get_registry(*reg_addr) as portal: async with tractor.get_arbiter(*arb_addr) as portal:
await portal.cancel_actor() await portal.cancel_actor()
time.sleep(0.1) time.sleep(0.1)
@ -41,16 +39,16 @@ async def test_cancel_remote_arbiter(daemon, reg_addr):
# no arbiter socket should exist # no arbiter socket should exist
with pytest.raises(OSError): with pytest.raises(OSError):
async with tractor.get_registry(*reg_addr) as portal: async with tractor.get_arbiter(*arb_addr) as portal:
pass pass
def test_register_duplicate_name(daemon, reg_addr): def test_register_duplicate_name(daemon, arb_addr):
async def main(): async def main():
async with tractor.open_nursery( async with tractor.open_nursery(
registry_addrs=[reg_addr], arbiter_addr=arb_addr,
) as n: ) as n:
assert not tractor.current_actor().is_arbiter assert not tractor.current_actor().is_arbiter

View File

@ -1,364 +0,0 @@
'''
Audit sub-sys APIs from `.msg._ops`,
mostly for ensuring correct `contextvars`-related
settings around IPC contexts.
'''
from contextlib import (
asynccontextmanager as acm,
)
from msgspec import (
Struct,
)
import pytest
import trio
import tractor
from tractor import (
Context,
MsgTypeError,
current_ipc_ctx,
Portal,
)
from tractor.msg import (
_ops as msgops,
Return,
)
from tractor.msg import (
_codec,
)
from tractor.msg.types import (
log,
)
class PldMsg(
Struct,
# TODO: with multiple structs in-spec we need to tag them!
# -[ ] offer a built-in `PldMsg` type to inherit from which takes
# care of these details?
#
# https://jcristharif.com/msgspec/structs.html#tagged-unions
# tag=True,
# tag_field='msg_type',
):
field: str
maybe_msg_spec = PldMsg|None
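# ------ illustrative sketch, NOT part of the original suite ------
# what the tagging TODO above refers to: with more than one `Struct`
# in a pld-spec union, `msgspec` needs a wire-tag to pick the right
# decoder branch (hypothetical types, exposition only).
class _SketchPldA(
    Struct,
    tag=True,  # use the class name as the tag value
):
    field: str

class _SketchPldB(
    Struct,
    tag=True,
):
    field: int

_sketch_multi_spec = _SketchPldA|_SketchPldB|None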
@acm
async def maybe_expect_raises(
raises: BaseException|None = None,
ensure_in_message: list[str]|None = None,
post_mortem: bool = False,
timeout: int = 3,
) -> None:
'''
Async wrapper for ensuring errors propagate from the inner scope.
'''
if tractor._state.debug_mode():
timeout += 999
with trio.fail_after(timeout):
try:
yield
except BaseException as _inner_err:
inner_err = _inner_err
# wasn't-expected to error..
if raises is None:
raise
else:
assert type(inner_err) is raises
# maybe check for error txt content
if ensure_in_message:
part: str
err_repr: str = repr(inner_err)
for part in ensure_in_message:
for i, arg in enumerate(inner_err.args):
if part in err_repr:
break
# if part never matches an arg, then we're
# missing a match.
else:
raise ValueError(
'Failed to find error message content?\n\n'
f'expected: {ensure_in_message!r}\n'
f'part: {part!r}\n\n'
f'{inner_err.args}'
)
if post_mortem:
await tractor.post_mortem()
else:
if raises:
raise RuntimeError(
f'Expected a {raises.__name__!r} to be raised?'
)
@tractor.context(
pld_spec=maybe_msg_spec,
)
async def child(
ctx: Context,
started_value: int|PldMsg|None,
return_value: str|None,
validate_pld_spec: bool,
raise_on_started_mte: bool = True,
) -> None:
'''
Call ``Context.started()`` more than once (an error).
'''
expect_started_mte: bool = started_value == 10
# sanity check that child RPC context is the current one
curr_ctx: Context = current_ipc_ctx()
assert ctx is curr_ctx
rx: msgops.PldRx = ctx._pld_rx
curr_pldec: _codec.MsgDec = rx.pld_dec
ctx_meta: dict = getattr(
child,
'_tractor_context_meta',
None,
)
if ctx_meta:
assert (
ctx_meta['pld_spec']
is curr_pldec.spec
is curr_pldec.pld_spec
)
# 2 cases: handle send-side and recv-only validation
# - when `raise_on_started_mte == True`, send validate
# - else, parent-recv-side only validation
mte: MsgTypeError|None = None
try:
await ctx.started(
value=started_value,
validate_pld_spec=validate_pld_spec,
)
except MsgTypeError as _mte:
mte = _mte
log.exception('`.started()` raised an MTE!\n')
if not expect_started_mte:
raise RuntimeError(
'Child-ctx-task SHOULD NOT HAVE raised an MTE for\n\n'
f'{started_value!r}\n'
)
boxed_div: str = '------ - ------'
assert boxed_div not in mte._message
assert boxed_div not in mte.tb_str
assert boxed_div not in repr(mte)
assert boxed_div not in str(mte)
mte_repr: str = repr(mte)
for line in mte.message.splitlines():
assert line in mte_repr
# since this is a *local error* there should be no
# boxed traceback content!
assert not mte.tb_str
# propagate to parent?
if raise_on_started_mte:
raise
# no-send-side-error fallthrough
if (
validate_pld_spec
and
expect_started_mte
):
raise RuntimeError(
'Child-ctx-task SHOULD HAVE raised an MTE for\n\n'
f'{started_value!r}\n'
)
assert (
not expect_started_mte
or
not validate_pld_spec
)
# if wait_for_parent_to_cancel:
# ...
#
# ^-TODO-^ logic for diff validation policies on each side:
#
# -[ ] ensure that if we don't validate on the send
# side, that we are eventually error-cancelled by our
# parent due to the bad `Started` payload!
# -[ ] the boxed error should be srced from the parent's
# runtime NOT ours!
# -[ ] we should still error on bad `return_value`s
# despite the parent not yet error-cancelling us?
# |_ how do we want the parent side to look in that
# case?
# -[ ] maybe the equiv of "during handling of the
# above error another occurred" for the case where
# the parent sends a MTE to this child and while
# waiting for the child to terminate it gets back
# the MTE for this case?
#
# XXX should always fail on recv side since we can't
# really do much else beside terminate and relay the
# msg-type-error from this RPC task ;)
return return_value
@pytest.mark.parametrize(
'return_value',
[
'yo',
None,
],
ids=[
'return[invalid-"yo"]',
'return[valid-None]',
],
)
@pytest.mark.parametrize(
'started_value',
[
10,
PldMsg(field='yo'),
],
ids=[
'Started[invalid-10]',
'Started[valid-PldMsg]',
],
)
@pytest.mark.parametrize(
'pld_check_started_value',
[
True,
False,
],
ids=[
'check-started-pld',
'no-started-pld-validate',
],
)
def test_basic_payload_spec(
debug_mode: bool,
loglevel: str,
return_value: str|None,
started_value: int|PldMsg,
pld_check_started_value: bool,
):
'''
Validate the most basic `PldRx` msg-type-spec semantics around
a IPC `Context` endpoint start, started-sync, and final return
value depending on set payload types and the currently applied
pld-spec.
'''
invalid_return: bool = return_value == 'yo'
invalid_started: bool = started_value == 10
async def main():
async with tractor.open_nursery(
debug_mode=debug_mode,
loglevel=loglevel,
) as an:
p: Portal = await an.start_actor(
'child',
enable_modules=[__name__],
)
# since not opened yet.
assert current_ipc_ctx() is None
if invalid_started:
msg_type_str: str = 'Started'
bad_value: int = 10
elif invalid_return:
msg_type_str: str = 'Return'
bad_value: str = 'yo'
else:
# XXX but should never be used below then..
msg_type_str: str = ''
bad_value: str = ''
maybe_mte: MsgTypeError|None = None
should_raise: Exception|None = (
MsgTypeError if (
invalid_return
or
invalid_started
) else None
)
async with (
maybe_expect_raises(
raises=should_raise,
ensure_in_message=[
f"invalid `{msg_type_str}` msg payload",
f'{bad_value}',
f'has type {type(bad_value)!r}',
'not match type-spec',
f'`{msg_type_str}.pld: PldMsg|NoneType`',
],
# only for debug
# post_mortem=True,
),
p.open_context(
child,
return_value=return_value,
started_value=started_value,
validate_pld_spec=pld_check_started_value,
) as (ctx, first),
):
# now opened with 'child' sub
assert current_ipc_ctx() is ctx
assert type(first) is PldMsg
assert first.field == 'yo'
try:
res: None|PldMsg = await ctx.result(hide_tb=False)
assert res is None
except MsgTypeError as mte:
maybe_mte = mte
if not invalid_return:
raise
# expected this invalid `Return.pld` so audit
# the error state + meta-data
assert mte.expected_msg_type is Return
assert mte.cid == ctx.cid
mte_repr: str = repr(mte)
for line in mte.message.splitlines():
assert line in mte_repr
assert mte.tb_str
# await tractor.pause(shield=True)
# verify expected remote mte deats
assert ctx._local_error is None
assert (
mte is
ctx._remote_error is
ctx.maybe_error is
ctx.outcome
)
if should_raise is None:
assert maybe_mte is None
await p.cancel_actor()
trio.run(main)

View File

@ -5,7 +5,8 @@ import pytest
import trio import trio
import tractor import tractor
from tractor.experimental import msgpub from tractor.experimental import msgpub
from tractor._testing import tractor_test
from conftest import tractor_test
def test_type_checks(): def test_type_checks():
@ -159,7 +160,7 @@ async def test_required_args(callwith_expecterror):
) )
def test_multi_actor_subs_arbiter_pub( def test_multi_actor_subs_arbiter_pub(
loglevel, loglevel,
reg_addr, arb_addr,
pub_actor, pub_actor,
): ):
"""Try out the neato @pub decorator system. """Try out the neato @pub decorator system.
@ -169,7 +170,7 @@ def test_multi_actor_subs_arbiter_pub(
async def main(): async def main():
async with tractor.open_nursery( async with tractor.open_nursery(
registry_addrs=[reg_addr], arbiter_addr=arb_addr,
enable_modules=[__name__], enable_modules=[__name__],
) as n: ) as n:
@ -254,12 +255,12 @@ def test_multi_actor_subs_arbiter_pub(
def test_single_subactor_pub_multitask_subs( def test_single_subactor_pub_multitask_subs(
loglevel, loglevel,
reg_addr, arb_addr,
): ):
async def main(): async def main():
async with tractor.open_nursery( async with tractor.open_nursery(
registry_addrs=[reg_addr], arbiter_addr=arb_addr,
enable_modules=[__name__], enable_modules=[__name__],
) as n: ) as n:

View File

@ -34,6 +34,7 @@ def test_resource_only_entered_once(key_on):
global _resource global _resource
_resource = 0 _resource = 0
kwargs = {}
key = None key = None
if key_on == 'key_value': if key_on == 'key_value':
key = 'some_common_key' key = 'some_common_key'
@ -138,7 +139,7 @@ def test_open_local_sub_to_stream():
N local tasks using ``trionics.maybe_open_context():``. N local tasks using ``trionics.maybe_open_context():``.
''' '''
timeout: float = 3.6 if platform.system() != "Windows" else 10 timeout = 3 if platform.system() != "Windows" else 10
async def main(): async def main():

View File

@ -1,248 +0,0 @@
'''
Special attention cases for using "infect `asyncio`" mode from a root
actor; i.e. not using a std `trio.run()` bootstrap.
'''
import asyncio
from functools import partial
import pytest
import trio
import tractor
from tractor import (
to_asyncio,
)
from tests.test_infected_asyncio import (
aio_echo_server,
)
@pytest.mark.parametrize(
'raise_error_mid_stream',
[
False,
Exception,
KeyboardInterrupt,
],
ids='raise_error={}'.format,
)
def test_infected_root_actor(
raise_error_mid_stream: bool|Exception,
# conftest wide
loglevel: str,
debug_mode: bool,
):
'''
Verify you can run the `tractor` runtime with `Actor.is_infected_aio() == True`
in the root actor.
'''
async def _trio_main():
with trio.fail_after(2 if not debug_mode else 999):
first: str
chan: to_asyncio.LinkedTaskChannel
async with (
tractor.open_root_actor(
debug_mode=debug_mode,
loglevel=loglevel,
),
to_asyncio.open_channel_from(
aio_echo_server,
) as (first, chan),
):
assert first == 'start'
for i in range(1000):
await chan.send(i)
out = await chan.receive()
assert out == i
print(f'asyncio echoing {i}')
if (
raise_error_mid_stream
and
i == 500
):
raise raise_error_mid_stream
if out is None:
try:
out = await chan.receive()
except trio.EndOfChannel:
break
else:
raise RuntimeError(
'aio channel never stopped?'
)
if raise_error_mid_stream:
with pytest.raises(raise_error_mid_stream):
tractor.to_asyncio.run_as_asyncio_guest(
trio_main=_trio_main,
)
else:
tractor.to_asyncio.run_as_asyncio_guest(
trio_main=_trio_main,
)
async def sync_and_err(
# just signature placeholders for compat with
# ``to_asyncio.open_channel_from()``
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
ev: asyncio.Event,
):
if to_trio:
to_trio.send_nowait('start')
await ev.wait()
raise RuntimeError('asyncio-side')
@pytest.mark.parametrize(
'aio_err_trigger',
[
'before_start_point',
'after_trio_task_starts',
'after_start_point',
],
ids='aio_err_triggered={}'.format
)
def test_trio_prestarted_task_bubbles(
aio_err_trigger: str,
# conftest wide
loglevel: str,
debug_mode: bool,
):
async def pre_started_err(
raise_err: bool = False,
pre_sleep: float|None = None,
aio_trigger: asyncio.Event|None = None,
task_status=trio.TASK_STATUS_IGNORED,
):
'''
Maybe raise an error before `task_status.started()`, then sleep.
'''
if pre_sleep is not None:
print(f'Sleeping from trio for {pre_sleep!r}s !')
await trio.sleep(pre_sleep)
# signal aio-task to raise JUST AFTER this task
# starts but has not yet `.started()`
if aio_trigger:
print('Signalling aio-task to raise from `trio`!!')
aio_trigger.set()
if raise_err:
print('Raising from trio!')
raise TypeError('trio-side')
task_status.started()
await trio.sleep_forever()
async def _trio_main():
# with trio.fail_after(2):
with trio.fail_after(999):
first: str
chan: to_asyncio.LinkedTaskChannel
aio_ev = asyncio.Event()
async with (
tractor.open_root_actor(
debug_mode=False,
loglevel=loglevel,
),
):
# TODO, tests for this with 3.13 egs?
# from tractor.devx import open_crash_handler
# with open_crash_handler():
async with (
# where we'll start a sub-task that errors BEFORE
# calling `.started()` such that the error should
# bubble before the guest run terminates!
trio.open_nursery() as tn,
# THEN start an infect task which should error just
# after the trio-side's task does.
to_asyncio.open_channel_from(
partial(
sync_and_err,
ev=aio_ev,
)
) as (first, chan),
):
for i in range(5):
pre_sleep: float|None = None
last_iter: bool = (i == 4)
# TODO, missing cases?
# -[ ] error as well on
# 'after_start_point' case as well for
# another case?
raise_err: bool = False
if last_iter:
raise_err: bool = True
# trigger aio task to error on next loop
# tick/checkpoint
if aio_err_trigger == 'before_start_point':
aio_ev.set()
pre_sleep: float = 0
await tn.start(
pre_started_err,
raise_err,
pre_sleep,
(aio_ev if (
aio_err_trigger == 'after_trio_task_starts'
and
last_iter
) else None
),
)
if (
aio_err_trigger == 'after_start_point'
and
last_iter
):
aio_ev.set()
with pytest.raises(
expected_exception=ExceptionGroup,
) as excinfo:
tractor.to_asyncio.run_as_asyncio_guest(
trio_main=_trio_main,
)
eg = excinfo.value
rte_eg, rest_eg = eg.split(RuntimeError)
# ensure the trio-task's error bubbled despite the aio-side
# having (maybe) errored first.
if aio_err_trigger in (
'after_trio_task_starts',
'after_start_point',
):
assert len(errs := rest_eg.exceptions) == 1
typerr = errs[0]
assert (
type(typerr) is TypeError
and
'trio-side' in typerr.args
)
# when aio errors BEFORE (last) trio task is scheduled, we should
# never see anything but the aio-side.
else:
assert len(rtes := rte_eg.exceptions) == 1
assert 'asyncio-side' in rtes[0].args[0]
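# ------ illustrative sketch, NOT part of the original suite ------
# the `BaseExceptionGroup.split()` semantics relied on above: it
# returns a `(matching, rest)` pair of sub-groups, either of which may
# be `None` when empty (py3.11+ or the `exceptiongroup` backport).
def _sketch_eg_split() -> None:
    eg = ExceptionGroup(
        'sketch',
        [
            RuntimeError('asyncio-side'),
            TypeError('trio-side'),
        ],
    )
    rte_eg, rest_eg = eg.split(RuntimeError)
    assert type(rte_eg.exceptions[0]) is RuntimeError
    assert type(rest_eg.exceptions[0]) is TypeError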

View File

@ -1,8 +1,6 @@
''' """
RPC (or maybe better labelled as "RTS: remote task scheduling"?) RPC related
related API and error checks. """
'''
import itertools import itertools
import pytest import pytest
@ -15,19 +13,9 @@ async def sleep_back_actor(
func_name, func_name,
func_defined, func_defined,
exposed_mods, exposed_mods,
*,
reg_addr: tuple,
): ):
if actor_name: if actor_name:
async with tractor.find_actor( async with tractor.find_actor(actor_name) as portal:
actor_name,
# NOTE: must be set manually since
# the subactor doesn't have the reg_addr
# fixture code run in it!
# TODO: maybe we should just set this once in the
# _state mod and derive to all children?
registry_addrs=[reg_addr],
) as portal:
try: try:
await portal.run(__name__, func_name) await portal.run(__name__, func_name)
except tractor.RemoteActorError as err: except tractor.RemoteActorError as err:
@ -36,7 +24,7 @@ async def sleep_back_actor(
if not exposed_mods: if not exposed_mods:
expect = tractor.ModuleNotExposed expect = tractor.ModuleNotExposed
assert err.boxed_type is expect assert err.type is expect
raise raise
else: else:
await trio.sleep(float('inf')) await trio.sleep(float('inf'))
@ -54,25 +42,14 @@ async def short_sleep():
(['tmp_mod'], 'import doggy', ModuleNotFoundError), (['tmp_mod'], 'import doggy', ModuleNotFoundError),
(['tmp_mod'], '4doggy', SyntaxError), (['tmp_mod'], '4doggy', SyntaxError),
], ],
ids=[ ids=['no_mods', 'this_mod', 'this_mod_bad_func', 'fail_to_import',
'no_mods', 'fail_on_syntax'],
'this_mod',
'this_mod_bad_func',
'fail_to_import',
'fail_on_syntax',
],
) )
def test_rpc_errors( def test_rpc_errors(arb_addr, to_call, testdir):
reg_addr, """Test errors when making various RPC requests to an actor
to_call,
testdir,
):
'''
Test errors when making various RPC requests to an actor
that either doesn't have the requested module exposed or doesn't define that either doesn't have the requested module exposed or doesn't define
the named function. the named function.
"""
'''
exposed_mods, funcname, inside_err = to_call exposed_mods, funcname, inside_err = to_call
subactor_exposed_mods = [] subactor_exposed_mods = []
func_defined = globals().get(funcname, False) func_defined = globals().get(funcname, False)
@ -100,13 +77,8 @@ def test_rpc_errors(
# spawn a subactor which calls us back # spawn a subactor which calls us back
async with tractor.open_nursery( async with tractor.open_nursery(
registry_addrs=[reg_addr], arbiter_addr=arb_addr,
enable_modules=exposed_mods.copy(), enable_modules=exposed_mods.copy(),
# NOTE: will halt test in REPL if uncommented, so only
# do that if actually debugging subactor but keep it
# disabled for the test.
# debug_mode=True,
) as n: ) as n:
actor = tractor.current_actor() actor = tractor.current_actor()
@ -123,7 +95,6 @@ def test_rpc_errors(
exposed_mods=exposed_mods, exposed_mods=exposed_mods,
func_defined=True if func_defined else False, func_defined=True if func_defined else False,
enable_modules=subactor_exposed_mods, enable_modules=subactor_exposed_mods,
reg_addr=reg_addr,
) )
def run(): def run():
@ -134,20 +105,18 @@ def test_rpc_errors(
run() run()
else: else:
# underlying errors aren't propagated upwards (yet) # underlying errors aren't propagated upwards (yet)
with pytest.raises( with pytest.raises(remote_err) as err:
expected_exception=(remote_err, ExceptionGroup),
) as err:
run() run()
# get raw instance from pytest wrapper # get raw instance from pytest wrapper
value = err.value value = err.value
# might get multiple `trio.Cancelled`s as well inside an inception # might get multiple `trio.Cancelled`s as well inside an inception
if isinstance(value, ExceptionGroup): if isinstance(value, trio.MultiError):
value = next(itertools.dropwhile( value = next(itertools.dropwhile(
lambda exc: not isinstance(exc, tractor.RemoteActorError), lambda exc: not isinstance(exc, tractor.RemoteActorError),
value.exceptions value.exceptions
)) ))
if getattr(value, 'type', None): if getattr(value, 'type', None):
assert value.boxed_type is inside_err assert value.type is inside_err

View File

@ -8,7 +8,7 @@ import pytest
import trio import trio
import tractor import tractor
from tractor._testing import tractor_test from conftest import tractor_test
_file_path: str = '' _file_path: str = ''
@ -64,8 +64,7 @@ async def test_lifetime_stack_wipes_tmpfile(
except ( except (
tractor.RemoteActorError, tractor.RemoteActorError,
# tractor.BaseExceptionGroup, tractor.BaseExceptionGroup,
BaseExceptionGroup,
): ):
pass pass

View File

@ -2,15 +2,13 @@
Spawning basics Spawning basics
""" """
from typing import ( from typing import Optional
Any,
)
import pytest import pytest
import trio import trio
import tractor import tractor
from tractor._testing import tractor_test from conftest import tractor_test
data_to_pass_down = {'doggy': 10, 'kitty': 4} data_to_pass_down = {'doggy': 10, 'kitty': 4}
@ -18,21 +16,24 @@ data_to_pass_down = {'doggy': 10, 'kitty': 4}
async def spawn( async def spawn(
is_arbiter: bool, is_arbiter: bool,
data: dict, data: dict,
reg_addr: tuple[str, int], arb_addr: tuple[str, int],
): ):
namespaces = [__name__] namespaces = [__name__]
await trio.sleep(0.1) await trio.sleep(0.1)
async with tractor.open_root_actor( async with tractor.open_root_actor(
arbiter_addr=reg_addr, arbiter_addr=arb_addr,
): ):
actor = tractor.current_actor() actor = tractor.current_actor()
assert actor.is_arbiter == is_arbiter assert actor.is_arbiter == is_arbiter
data = data_to_pass_down data = data_to_pass_down
if actor.is_arbiter: if actor.is_arbiter:
async with tractor.open_nursery() as nursery:
async with tractor.open_nursery(
) as nursery:
# forks here # forks here
portal = await nursery.run_in_actor( portal = await nursery.run_in_actor(
@ -40,7 +41,7 @@ async def spawn(
is_arbiter=False, is_arbiter=False,
name='sub-actor', name='sub-actor',
data=data, data=data,
reg_addr=reg_addr, arb_addr=arb_addr,
enable_modules=namespaces, enable_modules=namespaces,
) )
@ -54,14 +55,12 @@ async def spawn(
return 10 return 10
def test_local_arbiter_subactor_global_state( def test_local_arbiter_subactor_global_state(arb_addr):
reg_addr,
):
result = trio.run( result = trio.run(
spawn, spawn,
True, True,
data_to_pass_down, data_to_pass_down,
reg_addr, arb_addr,
) )
assert result == 10 assert result == 10
@ -95,9 +94,7 @@ async def test_movie_theatre_convo(start_method):
await portal.cancel_actor() await portal.cancel_actor()
async def cellar_door( async def cellar_door(return_value: Optional[str]):
return_value: str|None,
):
return return_value return return_value
@ -107,18 +104,16 @@ async def cellar_door(
) )
@tractor_test @tractor_test
async def test_most_beautiful_word( async def test_most_beautiful_word(
start_method: str, start_method,
return_value: Any, return_value
debug_mode: bool,
): ):
''' '''
The main ``tractor`` routine. The main ``tractor`` routine.
''' '''
with trio.fail_after(1): with trio.fail_after(1):
async with tractor.open_nursery( async with tractor.open_nursery() as n:
debug_mode=debug_mode,
) as n:
portal = await n.run_in_actor( portal = await n.run_in_actor(
cellar_door, cellar_door,
return_value=return_value, return_value=return_value,
@ -145,7 +140,7 @@ async def check_loglevel(level):
def test_loglevel_propagated_to_subactor( def test_loglevel_propagated_to_subactor(
start_method, start_method,
capfd, capfd,
reg_addr, arb_addr,
): ):
if start_method == 'mp_forkserver': if start_method == 'mp_forkserver':
pytest.skip( pytest.skip(
@ -157,7 +152,7 @@ def test_loglevel_propagated_to_subactor(
async with tractor.open_nursery( async with tractor.open_nursery(
name='arbiter', name='arbiter',
start_method=start_method, start_method=start_method,
arbiter_addr=reg_addr, arbiter_addr=arb_addr,
) as tn: ) as tn:
await tn.run_in_actor( await tn.run_in_actor(

View File

@ -2,9 +2,7 @@
Broadcast channels for fan-out to local tasks. Broadcast channels for fan-out to local tasks.
""" """
from contextlib import ( from contextlib import asynccontextmanager
asynccontextmanager as acm,
)
from functools import partial from functools import partial
from itertools import cycle from itertools import cycle
import time import time
@ -17,7 +15,6 @@ import tractor
from tractor.trionics import ( from tractor.trionics import (
broadcast_receiver, broadcast_receiver,
Lagged, Lagged,
collapse_eg,
) )
@ -65,21 +62,21 @@ async def ensure_sequence(
break break
@acm @asynccontextmanager
async def open_sequence_streamer( async def open_sequence_streamer(
sequence: list[int], sequence: list[int],
reg_addr: tuple[str, int], arb_addr: tuple[str, int],
start_method: str, start_method: str,
) -> tractor.MsgStream: ) -> tractor.MsgStream:
async with tractor.open_nursery( async with tractor.open_nursery(
arbiter_addr=reg_addr, arbiter_addr=arb_addr,
start_method=start_method, start_method=start_method,
) as an: ) as tn:
portal = await an.start_actor( portal = await tn.start_actor(
'sequence_echoer', 'sequence_echoer',
enable_modules=[__name__], enable_modules=[__name__],
) )
@ -96,7 +93,7 @@ async def open_sequence_streamer(
def test_stream_fan_out_to_local_subscriptions( def test_stream_fan_out_to_local_subscriptions(
reg_addr, arb_addr,
start_method, start_method,
): ):
@ -106,7 +103,7 @@ def test_stream_fan_out_to_local_subscriptions(
async with open_sequence_streamer( async with open_sequence_streamer(
sequence, sequence,
reg_addr, arb_addr,
start_method, start_method,
) as stream: ) as stream:
@ -141,7 +138,7 @@ def test_stream_fan_out_to_local_subscriptions(
] ]
) )
def test_consumer_and_parent_maybe_lag( def test_consumer_and_parent_maybe_lag(
reg_addr, arb_addr,
start_method, start_method,
task_delays, task_delays,
): ):
@ -153,17 +150,14 @@ def test_consumer_and_parent_maybe_lag(
async with open_sequence_streamer( async with open_sequence_streamer(
sequence, sequence,
reg_addr, arb_addr,
start_method, start_method,
) as stream: ) as stream:
try: try:
async with ( async with trio.open_nursery() as n:
collapse_eg(),
trio.open_nursery() as tn,
):
tn.start_soon( n.start_soon(
ensure_sequence, ensure_sequence,
stream, stream,
sequence.copy(), sequence.copy(),
@ -217,7 +211,7 @@ def test_consumer_and_parent_maybe_lag(
def test_faster_task_to_recv_is_cancelled_by_slower( def test_faster_task_to_recv_is_cancelled_by_slower(
reg_addr, arb_addr,
start_method, start_method,
): ):
''' '''
@ -231,13 +225,13 @@ def test_faster_task_to_recv_is_cancelled_by_slower(
async with open_sequence_streamer( async with open_sequence_streamer(
sequence, sequence,
reg_addr, arb_addr,
start_method, start_method,
) as stream: ) as stream:
async with trio.open_nursery() as tn: async with trio.open_nursery() as n:
tn.start_soon( n.start_soon(
ensure_sequence, ensure_sequence,
stream, stream,
sequence.copy(), sequence.copy(),
@ -259,7 +253,7 @@ def test_faster_task_to_recv_is_cancelled_by_slower(
continue continue
print('cancelling faster subtask') print('cancelling faster subtask')
tn.cancel_scope.cancel() n.cancel_scope.cancel()
try: try:
value = await stream.receive() value = await stream.receive()
@ -277,7 +271,7 @@ def test_faster_task_to_recv_is_cancelled_by_slower(
# the faster subtask was cancelled # the faster subtask was cancelled
break break
# await tractor.pause() # await tractor.breakpoint()
# await stream.receive() # await stream.receive()
print(f'final value: {value}') print(f'final value: {value}')
@ -308,7 +302,7 @@ def test_subscribe_errors_after_close():
def test_ensure_slow_consumers_lag_out( def test_ensure_slow_consumers_lag_out(
reg_addr, arb_addr,
start_method, start_method,
): ):
'''This is a pure local task test; no tractor '''This is a pure local task test; no tractor
@ -377,13 +371,13 @@ def test_ensure_slow_consumers_lag_out(
f'on {lags}:{value}') f'on {lags}:{value}')
return return
async with trio.open_nursery() as tn: async with trio.open_nursery() as nursery:
for i in range(1, num_laggers): for i in range(1, num_laggers):
task_name = f'sub_{i}' task_name = f'sub_{i}'
laggers[task_name] = 0 laggers[task_name] = 0
tn.start_soon( nursery.start_soon(
partial( partial(
sub_and_print, sub_and_print,
delay=i*0.001, delay=i*0.001,
@ -503,7 +497,6 @@ def test_no_raise_on_lag():
# internals when the no raise flag is set. # internals when the no raise flag is set.
loglevel='warning', loglevel='warning',
), ),
collapse_eg(),
trio.open_nursery() as n, trio.open_nursery() as n,
): ):
n.start_soon(slow) n.start_soon(slow)

View File

@ -3,13 +3,9 @@ Reminders for oddities in `trio` that we need to stay aware of and/or
want to see changed. want to see changed.
''' '''
from contextlib import (
asynccontextmanager as acm,
)
import pytest import pytest
import trio import trio
from trio import TaskStatus from trio_typing import TaskStatus
@pytest.mark.parametrize( @pytest.mark.parametrize(
@ -64,9 +60,7 @@ def test_stashed_child_nursery(use_start_soon):
async def main(): async def main():
async with ( async with (
trio.open_nursery( trio.open_nursery() as pn,
strict_exception_groups=False,
) as pn,
): ):
cn = await pn.start(mk_child_nursery) cn = await pn.start(mk_child_nursery)
assert cn assert cn
@ -86,118 +80,3 @@ def test_stashed_child_nursery(use_start_soon):
with pytest.raises(NameError): with pytest.raises(NameError):
trio.run(main) trio.run(main)
@pytest.mark.parametrize(
('unmask_from_canc', 'canc_from_finally'),
[
(True, False),
(True, True),
pytest.param(False, True,
marks=pytest.mark.xfail(reason="never raises!")
),
],
# TODO, ask ronny how to impl this .. XD
# ids='unmask_from_canc={0}, canc_from_finally={1}',#.format,
)
def test_acm_embedded_nursery_propagates_enter_err(
canc_from_finally: bool,
unmask_from_canc: bool,
debug_mode: bool,
):
'''
Demo how a masking `trio.Cancelled` could be handled by unmasking from the
`.__context__` field when a user (by accident) re-raises from a `finally:`.
'''
import tractor
@acm
async def maybe_raise_from_masking_exc(
tn: trio.Nursery,
unmask_from: BaseException|None = trio.Cancelled
# TODO, maybe offer a collection?
# unmask_from: set[BaseException] = {
# trio.Cancelled,
# },
):
if not unmask_from:
yield
return
try:
yield
except* unmask_from as be_eg:
# TODO, if we offer `unmask_from: set`
# for masker_exc_type in unmask_from:
matches, rest = be_eg.split(unmask_from)
if not matches:
raise
for exc_match in be_eg.exceptions:
if (
(exc_ctx := exc_match.__context__)
and
type(exc_ctx) not in {
# trio.Cancelled, # always by default?
unmask_from,
}
):
exc_ctx.add_note(
f'\n'
f'WARNING: the above error was masked by a {unmask_from!r} !?!\n'
f'Are you always cancelling? Say from a `finally:` ?\n\n'
f'{tn!r}'
)
raise exc_ctx from exc_match
@acm
async def wraps_tn_that_always_cancels():
async with (
trio.open_nursery() as tn,
maybe_raise_from_masking_exc(
tn=tn,
unmask_from=(
trio.Cancelled
if unmask_from_canc
else None
),
)
):
try:
yield tn
finally:
if canc_from_finally:
tn.cancel_scope.cancel()
await trio.lowlevel.checkpoint()
async def _main():
with tractor.devx.maybe_open_crash_handler(
pdb=debug_mode,
) as bxerr:
assert not bxerr.value
async with (
wraps_tn_that_always_cancels() as tn,
):
assert not tn.cancel_scope.cancel_called
assert 0
assert (
(err := bxerr.value)
and
type(err) is AssertionError
)
with pytest.raises(ExceptionGroup) as excinfo:
trio.run(_main)
eg: ExceptionGroup = excinfo.value
assert_eg, rest_eg = eg.split(AssertionError)
assert len(assert_eg.exceptions) == 1

View File

@ -18,53 +18,71 @@
tractor: structured concurrent ``trio``-"actors". tractor: structured concurrent ``trio``-"actors".
""" """
from exceptiongroup import BaseExceptionGroup
from ._clustering import ( from ._clustering import open_actor_cluster
open_actor_cluster as open_actor_cluster, from ._ipc import Channel
)
from ._context import ( from ._context import (
Context as Context, # the type Context,
context as context, # a func-decorator context,
) )
from ._streaming import ( from ._streaming import (
MsgStream as MsgStream, MsgStream,
stream as stream, stream,
) )
from ._discovery import ( from ._discovery import (
get_registry as get_registry, get_arbiter,
find_actor as find_actor, find_actor,
wait_for_actor as wait_for_actor, wait_for_actor,
query_actor as query_actor, query_actor,
)
from ._supervise import (
open_nursery as open_nursery,
ActorNursery as ActorNursery,
) )
from ._supervise import open_nursery
from ._state import ( from ._state import (
current_actor as current_actor, current_actor,
is_root_process as is_root_process, is_root_process,
current_ipc_ctx as current_ipc_ctx,
debug_mode as debug_mode
) )
from ._exceptions import ( from ._exceptions import (
ContextCancelled as ContextCancelled, RemoteActorError,
ModuleNotExposed as ModuleNotExposed, ModuleNotExposed,
MsgTypeError as MsgTypeError, ContextCancelled,
RemoteActorError as RemoteActorError,
TransportClosed as TransportClosed,
) )
from .devx import ( from ._debug import (
breakpoint as breakpoint, breakpoint,
pause as pause, post_mortem,
pause_from_sync as pause_from_sync,
post_mortem as post_mortem,
) )
from . import msg as msg from . import msg
from ._root import ( from ._root import (
run_daemon as run_daemon, run_daemon,
open_root_actor as open_root_actor, open_root_actor,
) )
from ._ipc import Channel as Channel from ._portal import Portal
from ._portal import Portal as Portal from ._runtime import Actor
from ._runtime import Actor as Actor
# from . import hilevel as hilevel
__all__ = [
'Actor',
'Channel',
'Context',
'ContextCancelled',
'ModuleNotExposed',
'MsgStream',
'BaseExceptionGroup',
'Portal',
'RemoteActorError',
'breakpoint',
'context',
'current_actor',
'find_actor',
'get_arbiter',
'is_root_process',
'msg',
'open_actor_cluster',
'open_nursery',
'open_root_actor',
'post_mortem',
'query_actor',
'run_daemon',
'stream',
'to_asyncio',
'wait_for_actor',
]

View File

@ -18,6 +18,8 @@
This is the "bootloader" for actors started using the native trio backend. This is the "bootloader" for actors started using the native trio backend.
""" """
import sys
import trio
import argparse import argparse
from ast import literal_eval from ast import literal_eval
@ -35,8 +37,9 @@ def parse_ipaddr(arg):
return (str(host), int(port)) return (str(host), int(port))
from ._entry import _trio_main
if __name__ == "__main__": if __name__ == "__main__":
__tracebackhide__: bool = True
parser = argparse.ArgumentParser() parser = argparse.ArgumentParser()
parser.add_argument("--uid", type=parse_uid) parser.add_argument("--uid", type=parse_uid)

View File

@ -19,13 +19,10 @@ Actor cluster helpers.
''' '''
from __future__ import annotations from __future__ import annotations
from contextlib import (
asynccontextmanager as acm, from contextlib import asynccontextmanager as acm
)
from multiprocessing import cpu_count from multiprocessing import cpu_count
from typing import ( from typing import AsyncGenerator, Optional
AsyncGenerator,
)
import trio import trio
import tractor import tractor

File diff suppressed because it is too large

tractor/_debug.py 100644 (new file, +934 lines)
View File

@ -0,0 +1,934 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Multi-core debugging for da peeps!
"""
from __future__ import annotations
import bdb
import os
import sys
import signal
from functools import (
partial,
cached_property,
)
from contextlib import asynccontextmanager as acm
from typing import (
Any,
Optional,
Callable,
AsyncIterator,
AsyncGenerator,
)
from types import FrameType
import pdbp
import tractor
import trio
from trio_typing import TaskStatus
from .log import get_logger
from ._discovery import get_root
from ._state import (
is_root_process,
debug_mode,
)
from ._exceptions import (
is_multi_cancelled,
ContextCancelled,
)
from ._ipc import Channel
# TODO: we can drop this now yah?
# try:
# # wtf: only exported when installed in dev mode?
# import pdbp
# except ImportError:
# # pdbpp is installed in regular mode...it monkey patches stuff
# import pdb
# xpm = getattr(pdb, 'xpm', None)
# assert xpm, "pdbpp is not installed?" # type: ignore
# pdbpp = pdb
log = get_logger(__name__)
__all__ = ['breakpoint', 'post_mortem']
class Lock:
'''
Actor global debug lock state.
Mostly to avoid a lot of ``global`` declarations for now XD.
'''
repl: MultiActorPdb | None = None
# placeholder for function to set a ``trio.Event`` on debugger exit
# pdb_release_hook: Optional[Callable] = None
_trio_handler: Callable[
[int, Optional[FrameType]], Any
] | int | None = None
# actor-wide variable pointing to current task name using debugger
local_task_in_debug: str | None = None
# NOTE: set by the current task waiting on the root tty lock from
# the CALLER side of the `lock_tty_for_child()` context entry-call
# and must be cancelled if this actor is cancelled via IPC
# request-message otherwise deadlocks with the parent actor may
# ensue
_debugger_request_cs: Optional[trio.CancelScope] = None
# NOTE: set only in the root actor for the **local** root spawned task
# which has acquired the lock (i.e. this is on the callee side of
# the `lock_tty_for_child()` context entry).
_root_local_task_cs_in_debug: Optional[trio.CancelScope] = None
# actor tree-wide actor uid that supposedly has the tty lock
global_actor_in_debug: Optional[tuple[str, str]] = None
local_pdb_complete: Optional[trio.Event] = None
no_remote_has_tty: Optional[trio.Event] = None
# lock in root actor preventing multi-access to local tty
_debug_lock: trio.StrictFIFOLock = trio.StrictFIFOLock()
_orig_sigint_handler: Optional[Callable] = None
_blocked: set[tuple[str, str]] = set()
@classmethod
def shield_sigint(cls):
cls._orig_sigint_handler = signal.signal(
signal.SIGINT,
shield_sigint_handler,
)
@classmethod
def unshield_sigint(cls):
# always restore ``trio``'s sigint handler. see notes below in
# the pdb factory about the nightmare that is that code swapping
# out the handler when the repl activates...
signal.signal(signal.SIGINT, cls._trio_handler)
cls._orig_sigint_handler = None
@classmethod
def release(cls):
try:
cls._debug_lock.release()
except RuntimeError:
# uhhh makes no sense but been seeing the non-owner
# release error even though this is definitely the task
# that locked?
owner = cls._debug_lock.statistics().owner
if owner:
raise
# actor-local state, irrelevant for non-root.
cls.global_actor_in_debug = None
cls.local_task_in_debug = None
try:
# sometimes the ``trio`` might already be terminated in
# which case this call will raise.
if cls.local_pdb_complete is not None:
cls.local_pdb_complete.set()
finally:
# restore original sigint handler
cls.unshield_sigint()
cls.repl = None
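# NOTE (annotation added for exposition): the `.shield_sigint()` /
# `.unshield_sigint()` pair above swaps the process' SIGINT handler
# around REPL entry/exit so a ctrl-c can't tear down the runtime
# mid-prompt; `.release()` always restores `trio`'s handler via its
# `finally:` block.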
class TractorConfig(pdbp.DefaultConfig):
'''
Custom ``pdbp`` goodness :surfer:
'''
use_pygments: bool = True
sticky_by_default: bool = False
enable_hidden_frames: bool = False
# much thanks @mdmintz for the hot tip!
# fixes line spacing issue when resizing terminal B)
truncate_long_lines: bool = False
class MultiActorPdb(pdbp.Pdb):
'''
Add teardown hooks to the regular ``pdbp.Pdb``.
'''
# override the pdbp config with our coolio one
DefaultConfig = TractorConfig
# def preloop(self):
# print('IN PRELOOP')
# super().preloop()
# TODO: figure out how to disallow recursive .set_trace() entry
# since that'll cause deadlock for us.
def set_continue(self):
try:
super().set_continue()
finally:
Lock.release()
def set_quit(self):
try:
super().set_quit()
finally:
Lock.release()
# XXX NOTE: we only override this because apparently the stdlib pdb
# bois likes to touch the SIGINT handler as much as i like to touch
# my d$%&.
def _cmdloop(self):
self.cmdloop()
@cached_property
def shname(self) -> str | None:
'''
Attempt to return the login shell name with a special check for
the infamous `xonsh` since it seems to have some issues much
different from std shells when it comes to flushing the prompt?
'''
# SUPER HACKY and only really works if `xonsh` is not used
# before spawning further sub-shells..
shpath = os.getenv('SHELL', None)
if shpath:
if (
os.getenv('XONSH_LOGIN', default=False)
or 'xonsh' in shpath
):
return 'xonsh'
return os.path.basename(shpath)
return None
@acm
async def _acquire_debug_lock_from_root_task(
uid: tuple[str, str]
) -> AsyncIterator[trio.StrictFIFOLock]:
'''
Acquire a root-actor local FIFO lock which tracks mutex access of
the process tree's global debugger breakpoint.
This lock avoids tty clobbering (by preventing multiple processes
reading from stdstreams) and ensures multi-actor, sequential access
to the ``pdb`` repl.
'''
task_name = trio.lowlevel.current_task().name
log.runtime(
f"Attempting to acquire TTY lock, remote task: {task_name}:{uid}"
)
we_acquired = False
try:
log.runtime(
f"entering lock checkpoint, remote task: {task_name}:{uid}"
)
        # NOTE: if the surrounding cancel scope from the
        # `lock_tty_for_child()` caller is cancelled, this line should
        # unblock and NOT leave us in some kind of
        # a "child-locked-TTY-but-child-is-uncontactable-over-IPC"
        # condition.
        await Lock._debug_lock.acquire()
        we_acquired = True
if Lock.no_remote_has_tty is None:
# mark the tty lock as being in use so that the runtime
# can try to avoid clobbering any connection from a child
# that's currently relying on it.
Lock.no_remote_has_tty = trio.Event()
Lock.global_actor_in_debug = uid
log.runtime(f"TTY lock acquired, remote task: {task_name}:{uid}")
# NOTE: critical section: this yield is unshielded!
# IF we received a cancel during the shielded lock entry of some
# next-in-queue requesting task, then the resumption here will
# result in that ``trio.Cancelled`` being raised to our caller
# (likely from ``lock_tty_for_child()`` below)! In
# this case the ``finally:`` below should trigger and the
# surrounding caller side context should cancel normally
# relaying back to the caller.
yield Lock._debug_lock
finally:
if (
we_acquired
and Lock._debug_lock.locked()
):
Lock._debug_lock.release()
        # IFF there are no more requesting tasks queued up, fire the
        # "tty-unlocked" event thereby alerting any monitors of the lock that
        # we are now back in the "tty unlocked" state. This is basically
        # an edge-triggered signal around an empty queue of sub-actor
        # tasks that may have tried to acquire the lock.
stats = Lock._debug_lock.statistics()
if (
not stats.owner
):
log.runtime(f"No more tasks waiting on tty lock! says {uid}")
if Lock.no_remote_has_tty is not None:
Lock.no_remote_has_tty.set()
Lock.no_remote_has_tty = None
Lock.global_actor_in_debug = None
log.runtime(
f"TTY lock released, remote task: {task_name}:{uid}"
)
@tractor.context
async def lock_tty_for_child(
ctx: tractor.Context,
subactor_uid: tuple[str, str]
) -> str:
'''
Lock the TTY in the root process of an actor tree in a new
inter-actor-context-task such that the ``pdbp`` debugger console
can be mutex-allocated to the calling sub-actor for REPL control
without interference by other processes / threads.
NOTE: this task must be invoked in the root process of the actor
tree. It is meant to be invoked as an rpc-task and should be
    highly reliable at releasing the mutex on completion!
'''
task_name = trio.lowlevel.current_task().name
if tuple(subactor_uid) in Lock._blocked:
log.warning(
f'Actor {subactor_uid} is blocked from acquiring debug lock\n'
f"remote task: {task_name}:{subactor_uid}"
)
ctx._enter_debugger_on_cancel = False
await ctx.cancel(f'Debug lock blocked for {subactor_uid}')
return 'pdb_lock_blocked'
# TODO: when we get to true remote debugging
# this will deliver stdin data?
log.debug(
"Attempting to acquire TTY lock\n"
f"remote task: {task_name}:{subactor_uid}"
)
log.debug(f"Actor {subactor_uid} is WAITING on stdin hijack lock")
Lock.shield_sigint()
try:
with (
trio.CancelScope(shield=True) as debug_lock_cs,
):
Lock._root_local_task_cs_in_debug = debug_lock_cs
async with _acquire_debug_lock_from_root_task(subactor_uid):
# indicate to child that we've locked stdio
await ctx.started('Locked')
log.debug(
f"Actor {subactor_uid} acquired stdin hijack lock"
)
# wait for unlock pdb by child
async with ctx.open_stream() as stream:
assert await stream.receive() == 'pdb_unlock'
return "pdb_unlock_complete"
finally:
Lock._root_local_task_cs_in_debug = None
Lock.unshield_sigint()
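# NOTE (editor recap of the above): the lock handshake between this
# root-side task and `wait_for_parent_stdin_hijack()` below runs as,
#
#   child -> root:  portal.open_context(lock_tty_for_child)
#   root  -> child: ctx.started('Locked')          # root holds the mutex
#   child -> root:  stream.send('pdb_unlock')      # REPL released
#   root  -> child: return 'pdb_unlock_complete'   # seen via ctx.result()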
async def wait_for_parent_stdin_hijack(
actor_uid: tuple[str, str],
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED
):
'''
Connect to the root actor via a ``Context`` and invoke a task which
locks a root-local TTY lock: ``lock_tty_for_child()``; this func
    should be called in a new task from a child actor **and never the
    root**.
This function is used by any sub-actor to acquire mutex access to
the ``pdb`` REPL and thus the root's TTY for interactive debugging
(see below inside ``_breakpoint()``). It can be used to ensure that
an intermediate nursery-owning actor does not clobber its children
if they are in debug (see below inside
``maybe_wait_for_debugger()``).
'''
with trio.CancelScope(shield=True) as cs:
Lock._debugger_request_cs = cs
try:
async with get_root() as portal:
# this syncs to child's ``Context.started()`` call.
async with portal.open_context(
tractor._debug.lock_tty_for_child,
subactor_uid=actor_uid,
) as (ctx, val):
log.debug('locked context')
assert val == 'Locked'
async with ctx.open_stream() as stream:
# unblock local caller
try:
assert Lock.local_pdb_complete
task_status.started(cs)
await Lock.local_pdb_complete.wait()
finally:
# TODO: shielding currently can cause hangs...
# with trio.CancelScope(shield=True):
await stream.send('pdb_unlock')
# sync with callee termination
assert await ctx.result() == "pdb_unlock_complete"
                log.debug('exiting child side locking task context')
except ContextCancelled:
log.warning('Root actor cancelled debug lock')
raise
finally:
Lock.local_task_in_debug = None
log.debug('Exiting debugger from child')
def mk_mpdb() -> tuple[MultiActorPdb, Callable]:
pdb = MultiActorPdb()
# signal.signal = pdbp.hideframe(signal.signal)
Lock.shield_sigint()
# XXX: These are the important flags mentioned in
# https://github.com/python-trio/trio/issues/1155
# which resolve the traceback spews to console.
pdb.allow_kbdint = True
pdb.nosigint = True
return pdb, Lock.unshield_sigint
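# NOTE (editor usage sketch, hedged): callers are expected to pair the
# returned repl instance with the sigint-undo callback, roughly:
#
# pdb, undo_sigint = mk_mpdb()
# try:
#     pdb.set_trace(frame=sys._getframe().f_back)
# finally:
#     undo_sigint()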
async def _breakpoint(
debug_func,
# TODO:
# shield: bool = False
) -> None:
'''
Breakpoint entry for engaging debugger instance sync-interaction,
from async code, executing in actor runtime (task).
'''
__tracebackhide__ = True
actor = tractor.current_actor()
pdb, undo_sigint = mk_mpdb()
task_name = trio.lowlevel.current_task().name
# TODO: is it possible to debug a trio.Cancelled except block?
    # right now it seems like we can kinda do it by shielding
# around ``tractor.breakpoint()`` but not if we move the shielded
# scope here???
# with trio.CancelScope(shield=shield):
# await trio.lowlevel.checkpoint()
if (
not Lock.local_pdb_complete
or Lock.local_pdb_complete.is_set()
):
Lock.local_pdb_complete = trio.Event()
# TODO: need a more robust check for the "root" actor
if (
not is_root_process()
and actor._parent_chan # a connected child
):
if Lock.local_task_in_debug:
# Recurrence entry case: this task already has the lock and
# is likely recurrently entering a breakpoint
if Lock.local_task_in_debug == task_name:
# noop on recurrent entry case but we want to trigger
                # a checkpoint to allow other actors to error-propagate and
                # potentially avoid infinite re-entries in some subactor.
await trio.lowlevel.checkpoint()
return
# if **this** actor is already in debug mode block here
# waiting for the control to be released - this allows
# support for recursive entries to `tractor.breakpoint()`
log.warning(f"{actor.uid} already has a debug lock, waiting...")
await Lock.local_pdb_complete.wait()
await trio.sleep(0.1)
# mark local actor as "in debug mode" to avoid recurrent
# entries/requests to the root process
Lock.local_task_in_debug = task_name
# this **must** be awaited by the caller and is done using the
# root nursery so that the debugger can continue to run without
# being restricted by the scope of a new task nursery.
# TODO: if we want to debug a trio.Cancelled triggered exception
# we have to figure out how to avoid having the service nursery
# cancel on this task start? I *think* this works below:
# ```python
# actor._service_n.cancel_scope.shield = shield
# ```
# but not entirely sure if that's a sane way to implement it?
try:
with trio.CancelScope(shield=True):
await actor._service_n.start(
wait_for_parent_stdin_hijack,
actor.uid,
)
Lock.repl = pdb
except RuntimeError:
Lock.release()
if actor._cancel_called:
# service nursery won't be usable and we
# don't want to lock up the root either way since
# we're in (the midst of) cancellation.
return
raise
elif is_root_process():
# we also wait in the root-parent for any child that
# may have the tty locked prior
# TODO: wait, what about multiple root tasks acquiring it though?
if Lock.global_actor_in_debug == actor.uid:
# re-entrant root process already has it: noop.
return
# XXX: since we need to enter pdb synchronously below,
# we have to release the lock manually from pdb completion
        # callbacks. Can't think of a nicer way than this atm.
if Lock._debug_lock.locked():
log.warning(
'Root actor attempting to shield-acquire active tty lock'
f' owned by {Lock.global_actor_in_debug}')
# must shield here to avoid hitting a ``Cancelled`` and
# a child getting stuck bc we clobbered the tty
with trio.CancelScope(shield=True):
await Lock._debug_lock.acquire()
else:
# may be cancelled
await Lock._debug_lock.acquire()
Lock.global_actor_in_debug = actor.uid
Lock.local_task_in_debug = task_name
Lock.repl = pdb
try:
        # block here (at the appropriate frame *up*) where
        # ``breakpoint()`` was awaited and begin handling stdio.
log.debug("Entering the synchronous world of pdb")
debug_func(actor, pdb)
except bdb.BdbQuit:
Lock.release()
raise
# XXX: apparently we can't do this without showing this frame
# in the backtrace on first entry to the REPL? Seems like an odd
# behaviour that should have been fixed by now. This is also why
# we scrapped all the @cm approaches that were tried previously.
# finally:
# __tracebackhide__ = True
# # frame = sys._getframe()
# # last_f = frame.f_back
# # last_f.f_globals['__tracebackhide__'] = True
# # signal.signal = pdbp.hideframe(signal.signal)
def shield_sigint_handler(
signum: int,
frame: 'frame', # type: ignore # noqa
# pdb_obj: Optional[MultiActorPdb] = None,
*args,
) -> None:
'''
Specialized, debugger-aware SIGINT handler.
    In children we always ignore it to avoid deadlocks since cancellation
    should always be managed by the parent supervising actor. The root
    is always cancelled on ctrl-c.
'''
__tracebackhide__ = True
uid_in_debug = Lock.global_actor_in_debug
actor = tractor.current_actor()
# print(f'{actor.uid} in HANDLER with ')
def do_cancel():
# If we haven't tried to cancel the runtime then do that instead
# of raising a KBI (which may non-gracefully destroy
# a ``trio.run()``).
if not actor._cancel_called:
actor.cancel_soon()
# If the runtime is already cancelled it likely means the user
        # hit ctrl-c again because teardown didn't fully take place in
# which case we do the "hard" raising of a local KBI.
else:
raise KeyboardInterrupt
any_connected = False
if uid_in_debug is not None:
# try to see if the supposed (sub)actor in debug still
# has an active connection to *this* actor, and if not
# it's likely they aren't using the TTY lock / debugger
# and we should propagate SIGINT normally.
chans = actor._peers.get(tuple(uid_in_debug))
if chans:
any_connected = any(chan.connected() for chan in chans)
if not any_connected:
log.warning(
'A global actor reported to be in debug '
'but no connection exists for this child:\n'
f'{uid_in_debug}\n'
'Allowing SIGINT propagation..'
)
return do_cancel()
# only set in the actor actually running the REPL
pdb_obj = Lock.repl
# root actor branch that reports whether or not a child
# has locked debugger.
if (
is_root_process()
and uid_in_debug is not None
# XXX: only if there is an existing connection to the
# (sub-)actor in debug do we ignore SIGINT in this
# parent! Otherwise we may hang waiting for an actor
# which has already terminated to unlock.
and any_connected
):
# we are root and some actor is in debug mode
# if uid_in_debug is not None:
if pdb_obj:
name = uid_in_debug[0]
if name != 'root':
log.pdb(
f"Ignoring SIGINT, child in debug mode: `{uid_in_debug}`"
)
else:
log.pdb(
"Ignoring SIGINT while in debug mode"
)
elif (
is_root_process()
):
if pdb_obj:
log.pdb(
"Ignoring SIGINT since debug mode is enabled"
)
if (
Lock._root_local_task_cs_in_debug
and not Lock._root_local_task_cs_in_debug.cancel_called
):
Lock._root_local_task_cs_in_debug.cancel()
# revert back to ``trio`` handler asap!
Lock.unshield_sigint()
# child actor that has locked the debugger
elif not is_root_process():
chan: Channel = actor._parent_chan
if not chan or not chan.connected():
log.warning(
'A global actor reported to be in debug '
'but no connection exists for its parent:\n'
f'{uid_in_debug}\n'
'Allowing SIGINT propagation..'
)
return do_cancel()
task = Lock.local_task_in_debug
if (
task
and pdb_obj
):
log.pdb(
f"Ignoring SIGINT while task in debug mode: `{task}`"
)
# TODO: how to handle the case of an intermediary-child actor
    # that **is not** marked in debug mode? See outstanding issue:
# https://github.com/goodboy/tractor/issues/320
# elif debug_mode():
else: # XXX: shouldn't ever get here?
print("WTFWTFWTF")
raise KeyboardInterrupt
# NOTE: currently (at least on ``fancycompleter`` 0.9.2)
    # it looks like the last command that was run (eg. ll)
# will be repeated by default.
# maybe redraw/print last REPL output to console since
    # we want to alert the user that more input is expected since
    # nothing has been done due to ignoring sigint.
if (
pdb_obj # only when this actor has a REPL engaged
):
# XXX: yah, mega hack, but how else do we catch this madness XD
if pdb_obj.shname == 'xonsh':
pdb_obj.stdout.write(pdb_obj.prompt)
pdb_obj.stdout.flush()
# TODO: make this work like sticky mode where if there is output
# detected as written to the tty we redraw this part underneath
# and erase the past draw of this same bit above?
# pdb_obj.sticky = True
# pdb_obj._print_if_sticky()
# also see these links for an approach from ``ptk``:
# https://github.com/goodboy/tractor/issues/130#issuecomment-663752040
# https://github.com/prompt-toolkit/python-prompt-toolkit/blob/c2c6af8a0308f9e5d7c0e28cb8a02963fe0ce07a/prompt_toolkit/patch_stdout.py
# XXX LEGACY: lol, see ``pdbpp`` issue:
# https://github.com/pdbpp/pdbpp/issues/496
def _set_trace(
actor: tractor.Actor | None = None,
pdb: MultiActorPdb | None = None,
):
__tracebackhide__ = True
actor = actor or tractor.current_actor()
# start 2 levels up in user code
frame: Optional[FrameType] = sys._getframe()
if frame:
frame = frame.f_back # type: ignore
if (
frame
and pdb
and actor is not None
):
log.pdb(f"\nAttaching pdb to actor: {actor.uid}\n")
# no f!#$&* idea, but when we're in async land
# we need 2x frames up?
frame = frame.f_back
else:
pdb, undo_sigint = mk_mpdb()
# we entered the global ``breakpoint()`` built-in from sync
# code?
Lock.local_task_in_debug = 'sync'
pdb.set_trace(frame=frame)
breakpoint = partial(
_breakpoint,
_set_trace,
)
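# NOTE (editor usage sketch): with `debug_mode=True` set on the root
# nursery this pause point can be dropped into any actor task, eg:
#
# async def main():
#     async with tractor.open_nursery(debug_mode=True):
#         await tractor.breakpoint()  # REPL attaches via root's TTY lock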
def _post_mortem(
actor: tractor.Actor,
pdb: MultiActorPdb,
) -> None:
'''
    Enter the ``pdbpp`` post mortem entrypoint using our custom
debugger instance.
'''
log.pdb(f"\nAttaching to pdb in crashed actor: {actor.uid}\n")
# TODO: you need ``pdbpp`` master (at least this commit
# https://github.com/pdbpp/pdbpp/commit/b757794857f98d53e3ebbe70879663d7d843a6c2)
# to fix this and avoid the hang it causes. See issue:
# https://github.com/pdbpp/pdbpp/issues/480
# TODO: help with a 3.10+ major release if/when it arrives.
pdbp.xpm(Pdb=lambda: pdb)
post_mortem = partial(
_breakpoint,
_post_mortem,
)
async def _maybe_enter_pm(err):
if (
debug_mode()
# NOTE: don't enter debug mode recursively after quitting pdb
# Iow, don't re-enter the repl if the `quit` command was issued
# by the user.
and not isinstance(err, bdb.BdbQuit)
# XXX: if the error is the likely result of runtime-wide
# cancellation, we don't want to enter the debugger since
# there's races between when the parent actor has killed all
# comms and when the child tries to contact said parent to
# acquire the tty lock.
# Really we just want to mostly avoid catching KBIs here so there
# might be a simpler check we can do?
and not is_multi_cancelled(err)
):
log.debug("Actor crashed, entering debug mode")
try:
await post_mortem()
finally:
Lock.release()
return True
else:
return False
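# NOTE (editor usage sketch, hedged): the runtime invokes this from its
# crash handling path roughly like,
#
# try:
#     await some_actor_task()
# except BaseException as err:
#     if not await _maybe_enter_pm(err):
#         raise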
@acm
async def acquire_debug_lock(
subactor_uid: tuple[str, str],
) -> AsyncGenerator[None, tuple]:
'''
Grab root's debug lock on entry, release on exit.
    This helper is for actors who don't actually need
    to acquire the debugger but want to wait until the
    lock is free in the process-tree root.
'''
if not debug_mode():
yield None
return
async with trio.open_nursery() as n:
cs = await n.start(
wait_for_parent_stdin_hijack,
subactor_uid,
)
yield None
cs.cancel()
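# NOTE (editor usage sketch, hedged): eg. a supervising actor can block
# teardown of a child which may currently hold the REPL,
#
# async with acquire_debug_lock(child_uid):
#     await child_proc.wait()  # hypothetical soft-reap call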
async def maybe_wait_for_debugger(
poll_steps: int = 2,
poll_delay: float = 0.1,
child_in_debug: bool = False,
) -> None:
if (
not debug_mode()
and not child_in_debug
):
return
if (
is_root_process()
):
# If we error in the root but the debugger is
# engaged we don't want to prematurely kill (and
# thus clobber access to) the local tty since it
# will make the pdb repl unusable.
# Instead try to wait for pdb to be released before
# tearing down.
sub_in_debug = None
for _ in range(poll_steps):
if Lock.global_actor_in_debug:
sub_in_debug = tuple(Lock.global_actor_in_debug)
log.debug('Root polling for debug')
with trio.CancelScope(shield=True):
await trio.sleep(poll_delay)
# TODO: could this make things more deterministic? wait
# to see if a sub-actor task will be scheduled and grab
# the tty lock on the next tick?
# XXX: doesn't seem to work
# await trio.testing.wait_all_tasks_blocked(cushion=0)
debug_complete = Lock.no_remote_has_tty
if (
(debug_complete and
not debug_complete.is_set())
):
log.debug(
'Root has errored but pdb is in use by '
f'child {sub_in_debug}\n'
'Waiting on tty lock to release..')
await debug_complete.wait()
await trio.sleep(poll_delay)
continue
else:
log.debug(
'Root acquired TTY LOCK'
)
View File
@@ -15,71 +15,52 @@
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.
 """
-Discovery (protocols) API for automatic addressing and location
-management of (service) actors.
+Actor discovery API.
 """
-from __future__ import annotations
 from typing import (
+    Optional,
+    Union,
     AsyncGenerator,
-    AsyncContextManager,
-    TYPE_CHECKING,
 )
 from contextlib import asynccontextmanager as acm
-from tractor.log import get_logger
-from .trionics import gather_contexts
 from ._ipc import _connect_chan, Channel
 from ._portal import (
     Portal,
     open_portal,
     LocalPortal,
 )
-from ._state import (
-    current_actor,
-    _runtime_vars,
-)
+from ._state import current_actor, _runtime_vars
-if TYPE_CHECKING:
-    from ._runtime import Actor
-log = get_logger(__name__)
 @acm
-async def get_registry(
+async def get_arbiter(
     host: str,
     port: int,
-) -> AsyncGenerator[
-    Portal | LocalPortal | None,
-    None,
-]:
+) -> AsyncGenerator[Union[Portal, LocalPortal], None]:
     '''
     Return a portal instance connected to a local or remote
-    registry-service actor; if a connection already exists re-use it
-    (presumably to call a `.register_actor()` registry runtime RPC
-    ep).
+    arbiter.
     '''
-    actor: Actor = current_actor()
+    actor = current_actor()
-    if actor.is_registrar:
+    if not actor:
+        raise RuntimeError("No actor instance has been defined yet?")
+    if actor.is_arbiter:
         # we're already the arbiter
         # (likely a re-entrant call from the arbiter actor)
-        yield LocalPortal(
-            actor,
-            Channel((host, port))
-        )
+        yield LocalPortal(actor, Channel((host, port)))
     else:
-        # TODO: try to look pre-existing connection from
-        # `Actor._peers` and use it instead?
-        async with (
-            _connect_chan(host, port) as chan,
-            open_portal(chan) as regstr_ptl,
-        ):
-            yield regstr_ptl
+        async with _connect_chan(host, port) as chan:
+            async with open_portal(chan) as arb_portal:
+                yield arb_portal
 @acm
@@ -87,125 +68,51 @@ async def get_root(
     **kwargs,
 ) -> AsyncGenerator[Portal, None]:
-    # TODO: rename mailbox to `_root_maddr` when we finally
-    # add and impl libp2p multi-addrs?
     host, port = _runtime_vars['_root_mailbox']
     assert host is not None
-    async with (
-        _connect_chan(host, port) as chan,
-        open_portal(chan, **kwargs) as portal,
-    ):
-        yield portal
+    async with _connect_chan(host, port) as chan:
+        async with open_portal(chan, **kwargs) as portal:
+            yield portal
-def get_peer_by_name(
-    name: str,
-    # uuid: str|None = None,
-) -> list[Channel]|None:  # at least 1
-    '''
-    Scan for an existing connection (set) to a named actor
-    and return any channels from `Actor._peers`.
-    This is an optimization method over querying the registrar for
-    the same info.
-    '''
-    actor: Actor = current_actor()
-    to_scan: dict[tuple, list[Channel]] = actor._peers.copy()
-    pchan: Channel|None = actor._parent_chan
-    if pchan:
-        to_scan[pchan.uid].append(pchan)
-    for aid, chans in to_scan.items():
-        _, peer_name = aid
-        if name == peer_name:
-            if not chans:
-                log.warning(
-                    'No IPC chans for matching peer {peer_name}\n'
-                )
-                continue
-            return chans
-    return None
 @acm
 async def query_actor(
     name: str,
-    regaddr: tuple[str, int]|None = None,
-) -> AsyncGenerator[
-    tuple[str, int]|None,
-    None,
-]:
+    arbiter_sockaddr: Optional[tuple[str, int]] = None,
+) -> AsyncGenerator[tuple[str, int], None]:
     '''
-    Lookup a transport address (by actor name) via querying a registrar
-    listening @ `regaddr`.
-    Returns the transport protocol (socket) address or `None` if no
-    entry under that name exists.
+    Simple address lookup for a given actor name.
+    Returns the (socket) address or ``None``.
     '''
-    actor: Actor = current_actor()
+    actor = current_actor()
-    if (
-        name == 'registrar'
-        and actor.is_registrar
-    ):
-        raise RuntimeError(
-            'The current actor IS the registry!?'
-        )
-    maybe_peers: list[Channel]|None = get_peer_by_name(name)
-    if maybe_peers:
-        yield maybe_peers[0].raddr
-        return
-    reg_portal: Portal
-    regaddr: tuple[str, int] = regaddr or actor.reg_addrs[0]
-    async with get_registry(*regaddr) as reg_portal:
-        # TODO: return portals to all available actors - for now
-        # just the last one that registered
-        sockaddr: tuple[str, int] = await reg_portal.run_from_ns(
+    async with get_arbiter(
+        *arbiter_sockaddr or actor._arb_addr
+    ) as arb_portal:
+        sockaddr = await arb_portal.run_from_ns(
             'self',
             'find_actor',
             name=name,
         )
-        yield sockaddr
+        # TODO: return portals to all available actors - for now just
+        # the last one that registered
+        if name == 'arbiter' and actor.is_arbiter:
+            raise RuntimeError("The current actor is the arbiter")
+        yield sockaddr if sockaddr else None
-@acm
-async def maybe_open_portal(
-    addr: tuple[str, int],
-    name: str,
-):
-    async with query_actor(
-        name=name,
-        regaddr=addr,
-    ) as sockaddr:
-        pass
-    if sockaddr:
-        async with _connect_chan(*sockaddr) as chan:
-            async with open_portal(chan) as portal:
-                yield portal
-    else:
-        yield None
 @acm
 async def find_actor(
     name: str,
-    registry_addrs: list[tuple[str, int]]|None = None,
-    only_first: bool = True,
-    raise_on_none: bool = False,
-) -> AsyncGenerator[
-    Portal | list[Portal] | None,
-    None,
-]:
+    arbiter_sockaddr: tuple[str, int] | None = None
+) -> AsyncGenerator[Optional[Portal], None]:
     '''
     Ask the arbiter to find actor(s) by name.
@@ -213,102 +120,43 @@ async def find_actor(
     known to the arbiter.
     '''
-    # optimization path, use any pre-existing peer channel
-    maybe_peers: list[Channel]|None = get_peer_by_name(name)
-    if maybe_peers and only_first:
-        async with open_portal(maybe_peers[0]) as peer_portal:
-            yield peer_portal
-            return
-    if not registry_addrs:
-        # XXX NOTE: make sure to dynamically read the value on
-        # every call since something may change it globally (eg.
-        # like in our discovery test suite)!
-        from . import _root
-        registry_addrs = (
-            _runtime_vars['_registry_addrs']
-            or
-            _root._default_lo_addrs
-        )
-    maybe_portals: list[
-        AsyncContextManager[tuple[str, int]]
-    ] = list(
-        maybe_open_portal(
-            addr=addr,
-            name=name,
-        )
-        for addr in registry_addrs
-    )
-    portals: list[Portal]
-    async with gather_contexts(
-        mngrs=maybe_portals,
-    ) as portals:
-        # log.runtime(
-        #     'Gathered portals:\n'
-        #     f'{portals}'
-        # )
-        # NOTE: `gather_contexts()` will return a
-        # `tuple[None, None, ..., None]` if no contact
-        # can be made with any regstrar at any of the
-        # N provided addrs!
-        if not any(portals):
-            if raise_on_none:
-                raise RuntimeError(
-                    f'No actor "{name}" found registered @ {registry_addrs}'
-                )
-            yield None
-            return
-        portals: list[Portal] = list(portals)
-        if only_first:
-            yield portals[0]
-        else:
-            # TODO: currently this may return multiple portals
-            # given there are multi-homed or multiple registrars..
-            # SO, we probably need de-duplication logic?
-            yield portals
+    async with query_actor(
+        name=name,
+        arbiter_sockaddr=arbiter_sockaddr,
+    ) as sockaddr:
+        if sockaddr:
+            async with _connect_chan(*sockaddr) as chan:
+                async with open_portal(chan) as portal:
+                    yield portal
+        else:
+            yield None
 @acm
 async def wait_for_actor(
     name: str,
-    registry_addr: tuple[str, int] | None = None,
-    # registry_addr: tuple[str, int] | None = None,
+    arbiter_sockaddr: tuple[str, int] | None = None,
 ) -> AsyncGenerator[Portal, None]:
     '''
-    Wait on at least one peer actor to register `name` with the
-    registrar, yield a `Portal` to the first registree.
+    Wait on an actor to register with the arbiter.
+    A portal to the first registered actor is returned.
     '''
-    actor: Actor = current_actor()
+    actor = current_actor()
-    # optimization path, use any pre-existing peer channel
-    maybe_peers: list[Channel]|None = get_peer_by_name(name)
-    if maybe_peers:
-        async with open_portal(maybe_peers[0]) as peer_portal:
-            yield peer_portal
-            return
-    regaddr: tuple[str, int] = (
-        registry_addr
-        or
-        actor.reg_addrs[0]
-    )
-    # TODO: use `.trionics.gather_contexts()` like
-    # above in `find_actor()` as well?
-    reg_portal: Portal
-    async with get_registry(*regaddr) as reg_portal:
-        sockaddrs = await reg_portal.run_from_ns(
+    async with get_arbiter(
+        *arbiter_sockaddr or actor._arb_addr,
+    ) as arb_portal:
+        sockaddrs = await arb_portal.run_from_ns(
             'self',
             'wait_for_actor',
             name=name,
         )
-        # get latest registered addr by default?
-        # TODO: offer multi-portal yields in multi-homed case?
-        sockaddr: tuple[str, int] = sockaddrs[-1]
+        sockaddr = sockaddrs[-1]
         async with _connect_chan(*sockaddr) as chan:
             async with open_portal(chan) as portal:
View File
@@ -20,9 +20,6 @@ Sub-process entry points.
 """
 from __future__ import annotations
 from functools import partial
-import multiprocessing as mp
-import os
-import textwrap
 from typing import (
     Any,
     TYPE_CHECKING,
@@ -35,7 +32,6 @@ from .log import (
     get_logger,
 )
 from . import _state
-from .devx import _debug
 from .to_asyncio import run_as_asyncio_guest
 from ._runtime import (
     async_main,
@@ -51,8 +47,8 @@ log = get_logger(__name__)
 def _mp_main(
-    actor: Actor,
-    accept_addrs: list[tuple[str, int]],
+    actor: Actor,  # type: ignore
+    accept_addr: tuple[str, int],
     forkserver_info: tuple[Any, Any, Any, Any, Any],
     start_method: SpawnMethodKey,
     parent_addr: tuple[str, int] | None = None,
@@ -60,31 +56,29 @@ def _mp_main(
 ) -> None:
     '''
-    The routine called *after fork* which invokes a fresh `trio.run()`
+    The routine called *after fork* which invokes a fresh ``trio.run``
     '''
     actor._forkserver_info = forkserver_info
     from ._spawn import try_set_start_method
-    spawn_ctx: mp.context.BaseContext = try_set_start_method(start_method)
-    assert spawn_ctx
+    spawn_ctx = try_set_start_method(start_method)
     if actor.loglevel is not None:
         log.info(
-            f'Setting loglevel for {actor.uid} to {actor.loglevel}'
-        )
+            f"Setting loglevel for {actor.uid} to {actor.loglevel}")
         get_console_log(actor.loglevel)
-    # TODO: use scops headers like for `trio` below!
-    # (well after we libify it maybe..)
+    assert spawn_ctx
     log.info(
-        f'Started new {spawn_ctx.current_process()} for {actor.uid}'
-        # f"parent_addr is {parent_addr}"
-    )
-    _state._current_actor: Actor = actor
+        f"Started new {spawn_ctx.current_process()} for {actor.uid}")
+    _state._current_actor = actor
+    log.debug(f"parent_addr is {parent_addr}")
     trio_main = partial(
         async_main,
-        actor=actor,
-        accept_addrs=accept_addrs,
+        actor,
+        accept_addr,
         parent_addr=parent_addr
     )
     try:
@@ -97,114 +91,12 @@ def _mp_main(
         pass  # handle it the same way trio does?
     finally:
-        log.info(
-            f'`mp`-subactor {actor.uid} exited'
-        )
+        log.info(f"Actor {actor.uid} terminated")
# TODO: move this func to some kinda `.devx._conc_lang.py` eventually
# as we work out our multi-domain state-flow-syntax!
def nest_from_op(
input_op: str,
#
# ?TODO? an idea for a syntax to the state of concurrent systems
# as a "3-domain" (execution, scope, storage) model and using
# a minimal ascii/utf-8 operator-set.
#
# try not to take any of this seriously yet XD
#
# > is a "play operator" indicating (CPU bound)
# exec/work/ops required at the "lowest level computing"
#
# execution primititves (tasks, threads, actors..) denote their
# lifetime with '(' and ')' since parentheses normally are used
# in many langs to denote function calls.
#
# starting = (
# >( opening/starting; beginning of the thread-of-exec (toe?)
# (> opened/started, (finished spawning toe)
# |_<Task: blah blah..> repr of toe, in py these look like <objs>
#
# >) closing/exiting/stopping,
# )> closed/exited/stopped,
# |_<Task: blah blah..>
# [OR <), )< ?? ]
#
# ending = )
# >c) cancelling to close/exit
# c)> cancelled (caused close), OR?
# |_<Actor: ..>
# OR maybe "<c)" which better indicates the cancel being
# "delivered/returned" / returned" to LHS?
#
# >x) erroring to eventuall exit
# x)> errored and terminated
# |_<Actor: ...>
#
# scopes: supers/nurseries, IPC-ctxs, sessions, perms, etc.
# >{ opening
# {> opened
# }> closed
# >} closing
#
# storage: like queues, shm-buffers, files, etc..
# >[ opening
# [> opened
# |_<FileObj: ..>
#
# >] closing
# ]> closed
# IPC ops: channels, transports, msging
# => req msg
# <= resp msg
# <=> 2-way streaming (of msgs)
# <- recv 1 msg
# -> send 1 msg
#
# TODO: still not sure on R/L-HS approach..?
# =>( send-req to exec start (task, actor, thread..)
# (<= recv-req to ^
#
# (<= recv-req ^
# <=( recv-resp opened remote exec primitive
# <=) recv-resp closed
#
# )<=c req to stop due to cancel
# c=>) req to stop due to cancel
#
# =>{ recv-req to open
# <={ send-status that it closed
tree_str: str,
# NOTE: so move back-from-the-left of the `input_op` by
# this amount.
back_from_op: int = 0,
) -> str:
'''
Depth-increment the input (presumably hierarchy/supervision)
input "tree string" below the provided `input_op` execution
operator, so injecting a `"\n|_{input_op}\n"`and indenting the
`tree_str` to nest content aligned with the ops last char.
'''
return (
f'{input_op}\n'
+
textwrap.indent(
tree_str,
prefix=(
len(input_op)
-
(back_from_op + 1)
) * ' ',
)
)
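# NOTE (editor example, not part of the diff): with the default
# `back_from_op=0` the tree content indents under the op's last char,
#
#   >>> print(nest_from_op('>(', '|_<Actor: foo>\n'))
#   >(
#    |_<Actor: foo>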
 def _trio_main(
-    actor: Actor,
+    actor: Actor,  # type: ignore
     *,
     parent_addr: tuple[str, int] | None = None,
     infect_asyncio: bool = False,
@@ -214,73 +106,33 @@ def _trio_main(
     Entry point for a `trio_run_in_process` subactor.
     '''
-    _debug.hide_runtime_frames()
+    log.info(f"Started new trio process for {actor.uid}")
+    if actor.loglevel is not None:
+        log.info(
+            f"Setting loglevel for {actor.uid} to {actor.loglevel}")
+        get_console_log(actor.loglevel)
+    log.info(
+        f"Started {actor.uid}")
     _state._current_actor = actor
+    log.debug(f"parent_addr is {parent_addr}")
     trio_main = partial(
         async_main,
         actor,
         parent_addr=parent_addr
     )
-    if actor.loglevel is not None:
-        get_console_log(actor.loglevel)
-    actor_info: str = (
-        f'|_{actor}\n'
-        f'  uid: {actor.uid}\n'
-        f'  pid: {os.getpid()}\n'
-        f'  parent_addr: {parent_addr}\n'
-        f'  loglevel: {actor.loglevel}\n'
-    )
-    log.info(
-        'Starting new `trio` subactor:\n'
-        +
-        nest_from_op(
-            input_op='>(',  # see syntax ideas above
-            tree_str=actor_info,
-            back_from_op=2,  # since "complete"
-        )
-    )
-    logmeth = log.info
-    exit_status: str = (
-        'Subactor exited\n'
-        +
-        nest_from_op(
-            input_op=')>',  # like a "closed-to-play"-icon from super perspective
-            tree_str=actor_info,
-            back_from_op=1,
-        )
-    )
     try:
         if infect_asyncio:
             actor._infected_aio = True
             run_as_asyncio_guest(trio_main)
         else:
             trio.run(trio_main)
     except KeyboardInterrupt:
-        logmeth = log.cancel
-        exit_status: str = (
-            'Actor received KBI (aka an OS-cancel)\n'
-            +
-            nest_from_op(
-                input_op='c)>',  # closed due to cancel (see above)
-                tree_str=actor_info,
-            )
-        )
+        log.cancel(f"Actor {actor.uid} received KBI")
-    except BaseException as err:
-        logmeth = log.error
-        exit_status: str = (
-            'Main actor task exited due to crash?\n'
-            +
-            nest_from_op(
-                input_op='x)>',  # closed by error
-                tree_str=actor_info,
-            )
-        )
-        # NOTE since we raise a tb will already be shown on the
-        # console, thus we do NOT use `.exception()` above.
-        raise err
     finally:
-        logmeth(exit_status)
+        log.info(f"Actor {actor.uid} terminated")
(file diff suppressed because it is too large)
View File
@@ -19,64 +19,38 @@ Inter-process comms abstractions
 """
 from __future__ import annotations
+import platform
+import struct
+import typing
 from collections.abc import (
     AsyncGenerator,
     AsyncIterator,
 )
-from contextlib import (
-    asynccontextmanager as acm,
-    contextmanager as cm,
-)
-import platform
-from pprint import pformat
-import struct
-import typing
 from typing import (
     Any,
-    Callable,
     runtime_checkable,
+    Optional,
     Protocol,
     Type,
     TypeVar,
 )
-import msgspec
 from tricycle import BufferedReceiveStream
+import msgspec
 import trio
+from async_generator import asynccontextmanager
-from tractor.log import get_logger
-from tractor._exceptions import (
-    MsgTypeError,
-    pack_from_raise,
-    TransportClosed,
-    _mk_send_mte,
-    _mk_recv_mte,
-)
-from tractor.msg import (
-    _ctxvar_MsgCodec,
-    # _codec,  XXX see `self._codec` sanity/debug checks
-    MsgCodec,
-    types as msgtypes,
-    pretty_struct,
-)
+from .log import get_logger
+from ._exceptions import TransportClosed
 log = get_logger(__name__)
 _is_windows = platform.system() == 'Windows'
+log = get_logger(__name__)
-def get_stream_addrs(
-    stream: trio.SocketStream
-) -> tuple[
-    tuple[str, int],  # local
-    tuple[str, int],  # remote
-]:
-    '''
-    Return the `trio` streaming transport prot's socket-addrs for
-    both the local and remote sides as a pair.
-    '''
-    # rn, should both be IP sockets
+def get_stream_addrs(stream: trio.SocketStream) -> tuple:
+    # should both be IP sockets
     lsockname = stream.socket.getsockname()
     rsockname = stream.socket.getpeername()
     return (
@@ -85,22 +59,16 @@ def get_stream_addrs(
     )
-# from tractor.msg.types import MsgType
-# ?TODO? this should be our `Union[*msgtypes.__spec__]` alias now right..?
-# => BLEH, except can't bc prots must inherit typevar or param-spec
-# vars..
-MsgType = TypeVar('MsgType')
-# TODO: break up this mod into a subpkg so we can start adding new
-# backends and move this type stuff into a dedicated file.. Bo
-#
+MsgType = TypeVar("MsgType")
+# TODO: consider using a generic def and indexing with our eventual
+# msg definition/types?
+# - https://docs.python.org/3/library/typing.html#typing.Protocol
+# - https://jcristharif.com/msgspec/usage.html#structs
 @runtime_checkable
 class MsgTransport(Protocol[MsgType]):
-    #
-    # ^-TODO-^ consider using a generic def and indexing with our
-    # eventual msg definition/types?
-    # - https://docs.python.org/3/library/typing.html#typing.Protocol
     stream: trio.SocketStream
     drained: list[MsgType]
@@ -135,37 +103,20 @@ class MsgTransport(Protocol[MsgType]):
         ...
-# TODO: typing oddity.. not sure why we have to inherit here, but it
-# seems to be an issue with `get_msg_transport()` returning
-# a `Type[Protocol]`; probably should make a `mypy` issue?
+# TODO: not sure why we have to inherit here, but it seems to be an
+# issue with ``get_msg_transport()`` returning a ``Type[Protocol]``;
+# probably should make a `mypy` issue?
 class MsgpackTCPStream(MsgTransport):
     '''
     A ``trio.SocketStream`` delivering ``msgpack`` formatted data
     using the ``msgspec`` codec lib.
     '''
-    layer_key: int = 4
-    name_key: str = 'tcp'
-    # TODO: better naming for this?
-    # -[ ] check how libp2p does naming for such things?
-    codec_key: str = 'msgpack'
     def __init__(
         self,
         stream: trio.SocketStream,
         prefix_size: int = 4,
-        # XXX optionally provided codec pair for `msgspec`:
-        # https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
-        #
-        # TODO: define this as a `Codec` struct which can be
-        # overriden dynamically by the application/runtime?
-        codec: tuple[
-            Callable[[Any], Any]|None,  # coder
-            Callable[[type, Any], Any]|None,  # decoder
-        ]|None = None,
     ) -> None:
         self.stream = stream
@ -175,44 +126,30 @@ class MsgpackTCPStream(MsgTransport):
self._laddr, self._raddr = get_stream_addrs(stream) self._laddr, self._raddr = get_stream_addrs(stream)
# create read loop instance # create read loop instance
self._aiter_pkts = self._iter_packets() self._agen = self._iter_packets()
self._send_lock = trio.StrictFIFOLock() self._send_lock = trio.StrictFIFOLock()
# public i guess? # public i guess?
self.drained: list[dict] = [] self.drained: list[dict] = []
self.recv_stream = BufferedReceiveStream( self.recv_stream = BufferedReceiveStream(transport_stream=stream)
transport_stream=stream
)
self.prefix_size = prefix_size self.prefix_size = prefix_size
# allow for custom IPC msg interchange format # TODO: struct aware messaging coders
# dynamic override Bo self.encode = msgspec.msgpack.Encoder().encode
self._task = trio.lowlevel.current_task() self.decode = msgspec.msgpack.Decoder().decode # dict[str, Any])
# XXX for ctxvar debug only!
# self._codec: MsgCodec = (
# codec
# or
# _codec._ctxvar_MsgCodec.get()
# )
async def _iter_packets(self) -> AsyncGenerator[dict, None]: async def _iter_packets(self) -> AsyncGenerator[dict, None]:
''' '''Yield packets from the underlying stream.
Yield `bytes`-blob decoded packets from the underlying TCP
stream using the current task's `MsgCodec`.
This is a streaming routine implemented as an async generator
func (which was the original design, but could be changed?)
and is allocated by a `.__call__()` inside `.__init__()` where
it is assigned to the `._aiter_pkts` attr.
''' '''
import msgspec # noqa
decodes_failed: int = 0 decodes_failed: int = 0
while True: while True:
try: try:
header: bytes = await self.recv_stream.receive_exactly(4) header = await self.recv_stream.receive_exactly(4)
except ( except (
ValueError, ValueError,
ConnectionResetError, ConnectionResetError,
@ -221,122 +158,25 @@ class MsgpackTCPStream(MsgTransport):
# seem to be getting racy failures here on # seem to be getting racy failures here on
# arbiter/registry name subs.. # arbiter/registry name subs..
trio.BrokenResourceError, trio.BrokenResourceError,
):
) as trans_err:
loglevel = 'transport'
match trans_err:
# case (
# ConnectionResetError()
# ):
# loglevel = 'transport'
# peer actor (graceful??) TCP EOF but `tricycle`
# seems to raise a 0-bytes-read?
case ValueError() if (
'unclean EOF' in trans_err.args[0]
):
pass
# peer actor (task) prolly shutdown quickly due
# to cancellation
case trio.BrokenResourceError() if (
'Connection reset by peer' in trans_err.args[0]
):
pass
# unless the disconnect condition falls under "a
# normal operation breakage" we usualy console warn
# about it.
case _:
loglevel: str = 'warning'
raise TransportClosed( raise TransportClosed(
message=( f'transport {self} was already closed prior ro read'
f'IPC transport already closed by peer\n' )
f'x]> {type(trans_err)}\n'
f' |_{self}\n' if header == b'':
), raise TransportClosed(
loglevel=loglevel, f'transport {self} was already closed prior ro read'
) from trans_err
# XXX definitely can happen if transport is closed
# manually by another `trio.lowlevel.Task` in the
# same actor; we use this in some simulated fault
# testing for ex, but generally should never happen
# under normal operation!
#
# NOTE: as such we always re-raise this error from the
# RPC msg loop!
except trio.ClosedResourceError as closure_err:
raise TransportClosed(
message=(
f'IPC transport already manually closed locally?\n'
f'x]> {type(closure_err)} \n'
f' |_{self}\n'
),
loglevel='error',
raise_on_report=(
closure_err.args[0] == 'another task closed this fd'
or
closure_err.args[0] in ['another task closed this fd']
),
) from closure_err
# graceful TCP EOF disconnect
if header == b'':
raise TransportClosed(
message=(
f'IPC transport already gracefully closed\n'
f']>\n'
f' |_{self}\n'
),
loglevel='transport',
# cause=??? # handy or no?
) )
size: int
size, = struct.unpack("<I", header) size, = struct.unpack("<I", header)
log.transport(f'received header {size}') # type: ignore log.transport(f'received header {size}') # type: ignore
msg_bytes: bytes = await self.recv_stream.receive_exactly(size)
msg_bytes = await self.recv_stream.receive_exactly(size)
log.transport(f"received {msg_bytes}") # type: ignore log.transport(f"received {msg_bytes}") # type: ignore
try: try:
# NOTE: lookup the `trio.Task.context`'s var for yield self.decode(msg_bytes)
# the current `MsgCodec`.
codec: MsgCodec = _ctxvar_MsgCodec.get()
# XXX for ctxvar debug only!
# if self._codec.pld_spec != codec.pld_spec:
# assert (
# task := trio.lowlevel.current_task()
# ) is not self._task
# self._task = task
# self._codec = codec
# log.runtime(
# f'Using new codec in {self}.recv()\n'
# f'codec: {self._codec}\n\n'
# f'msg_bytes: {msg_bytes}\n'
# )
yield codec.decode(msg_bytes)
# XXX NOTE: since the below error derives from
# `DecodeError` we need to catch is specially
# and always raise such that spec violations
# are never allowed to be caught silently!
except msgspec.ValidationError as verr:
msgtyperr: MsgTypeError = _mk_recv_mte(
msg=msg_bytes,
codec=codec,
src_validation_error=verr,
)
# XXX deliver up to `Channel.recv()` where
# a re-raise and `Error`-pack can inject the far
# end actor `.uid`.
yield msgtyperr
except ( except (
msgspec.DecodeError, msgspec.DecodeError,
UnicodeDecodeError, UnicodeDecodeError,
@ -346,95 +186,29 @@ class MsgpackTCPStream(MsgTransport):
# do with a channel drop - hope that receiving from the # do with a channel drop - hope that receiving from the
# channel will raise an expected error and bubble up. # channel will raise an expected error and bubble up.
try: try:
msg_str: str|bytes = msg_bytes.decode() msg_str: str | bytes = msg_bytes.decode()
except UnicodeDecodeError: except UnicodeDecodeError:
msg_str = msg_bytes msg_str = msg_bytes
log.exception( log.error(
'Failed to decode msg?\n' '`msgspec` failed to decode!?\n'
f'{codec}\n\n' 'dumping bytes:\n'
'Rxed bytes from wire:\n\n' f'{msg_str!r}'
f'{msg_str!r}\n'
) )
decodes_failed += 1 decodes_failed += 1
else: else:
raise raise
async def send( async def send(self, msg: Any) -> None:
self,
msg: msgtypes.MsgType,
strict_types: bool = True,
hide_tb: bool = False,
) -> None:
'''
Send a msgpack encoded py-object-blob-as-msg over TCP.
If `strict_types == True` then a `MsgTypeError` will be raised on any
invalid msg type
'''
__tracebackhide__: bool = hide_tb
# XXX see `trio._sync.AsyncContextManagerMixin` for details
# on the `.acquire()`/`.release()` sequencing..
async with self._send_lock: async with self._send_lock:
# NOTE: lookup the `trio.Task.context`'s var for bytes_data: bytes = self.encode(msg)
# the current `MsgCodec`.
codec: MsgCodec = _ctxvar_MsgCodec.get()
# XXX for ctxvar debug only!
# if self._codec.pld_spec != codec.pld_spec:
# self._codec = codec
# log.runtime(
# f'Using new codec in {self}.send()\n'
# f'codec: {self._codec}\n\n'
# f'msg: {msg}\n'
# )
if type(msg) not in msgtypes.__msg_types__:
if strict_types:
raise _mk_send_mte(
msg,
codec=codec,
)
else:
log.warning(
'Sending non-`Msg`-spec msg?\n\n'
f'{msg}\n'
)
try:
bytes_data: bytes = codec.encode(msg)
except TypeError as _err:
typerr = _err
msgtyperr: MsgTypeError = _mk_send_mte(
msg,
codec=codec,
message=(
f'IPC-msg-spec violation in\n\n'
f'{pretty_struct.Struct.pformat(msg)}'
),
src_type_error=typerr,
)
raise msgtyperr from typerr
# supposedly the fastest says, # supposedly the fastest says,
# https://stackoverflow.com/a/54027962 # https://stackoverflow.com/a/54027962
size: bytes = struct.pack("<I", len(bytes_data)) size: bytes = struct.pack("<I", len(bytes_data))
return await self.stream.send_all(size + bytes_data)
# ?TODO? does it help ever to dynamically show this return await self.stream.send_all(size + bytes_data)
# frame?
# try:
# <the-above_code>
# except BaseException as _err:
# err = _err
# if not isinstance(err, MsgTypeError):
# __tracebackhide__: bool = False
# raise
@property @property
def laddr(self) -> tuple[str, int]: def laddr(self) -> tuple[str, int]:
@ -445,7 +219,7 @@ class MsgpackTCPStream(MsgTransport):
return self._raddr return self._raddr
async def recv(self) -> Any: async def recv(self) -> Any:
return await self._aiter_pkts.asend(None) return await self._agen.asend(None)
async def drain(self) -> AsyncIterator[dict]: async def drain(self) -> AsyncIterator[dict]:
''' '''
@ -462,7 +236,7 @@ class MsgpackTCPStream(MsgTransport):
yield msg yield msg
def __aiter__(self): def __aiter__(self):
return self._aiter_pkts return self._agen
def connected(self) -> bool: def connected(self) -> bool:
return self.stream.socket.fileno() != -1 return self.stream.socket.fileno() != -1
@ -493,7 +267,7 @@ class Channel:
def __init__( def __init__(
self, self,
destaddr: tuple[str, int]|None, destaddr: Optional[tuple[str, int]],
msg_transport_type_key: tuple[str, str] = ('msgpack', 'tcp'), msg_transport_type_key: tuple[str, str] = ('msgpack', 'tcp'),
@ -511,31 +285,18 @@ class Channel:
# Either created in ``.connect()`` or passed in by # Either created in ``.connect()`` or passed in by
# user in ``.from_stream()``. # user in ``.from_stream()``.
self._stream: trio.SocketStream|None = None self._stream: Optional[trio.SocketStream] = None
self._transport: MsgTransport|None = None self.msgstream: Optional[MsgTransport] = None
# set after handshake - always uid of far end # set after handshake - always uid of far end
self.uid: tuple[str, str]|None = None self.uid: Optional[tuple[str, str]] = None
self._aiter_msgs = self._iter_msgs() self._agen = self._aiter_recv()
self._exc: Exception|None = None # set if far end actor errors self._exc: Optional[Exception] = None # set if far end actor errors
self._closed: bool = False self._closed: bool = False
# flag set on ``Portal.cancel_actor()`` indicating
# flag set by ``Portal.cancel_actor()`` indicating remote # remote (peer) cancellation of the far end actor runtime.
# (possibly peer) cancellation of the far end actor self._cancel_called: bool = False # set on ``Portal.cancel_actor()``
# runtime.
self._cancel_called: bool = False
@property
def msgstream(self) -> MsgTransport:
log.info(
'`Channel.msgstream` is an old name, use `._transport`'
)
return self._transport
@property
def transport(self) -> MsgTransport:
return self._transport
@classmethod @classmethod
def from_stream( def from_stream(
@ -546,78 +307,37 @@ class Channel:
) -> Channel: ) -> Channel:
src, dst = get_stream_addrs(stream) src, dst = get_stream_addrs(stream)
chan = Channel( chan = Channel(destaddr=dst, **kwargs)
destaddr=dst,
**kwargs,
)
# set immediately here from provided instance # set immediately here from provided instance
chan._stream: trio.SocketStream = stream chan._stream = stream
chan.set_msg_transport(stream) chan.set_msg_transport(stream)
return chan return chan
def set_msg_transport( def set_msg_transport(
self, self,
stream: trio.SocketStream, stream: trio.SocketStream,
type_key: tuple[str, str]|None = None, type_key: Optional[tuple[str, str]] = None,
# XXX optionally provided codec pair for `msgspec`:
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
codec: MsgCodec|None = None,
) -> MsgTransport: ) -> MsgTransport:
type_key = ( type_key = type_key or self._transport_key
type_key self.msgstream = get_msg_transport(type_key)(stream)
or return self.msgstream
self._transport_key
)
# get transport type, then
self._transport = get_msg_transport(
type_key
# instantiate an instance of the msg-transport
)(
stream,
codec=codec,
)
return self._transport
@cm
def apply_codec(
self,
codec: MsgCodec,
) -> None:
'''
Temporarily override the underlying IPC msg codec for
dynamic enforcement of messaging schema.
'''
orig: MsgCodec = self._transport.codec
try:
self._transport.codec = codec
yield
finally:
self._transport.codec = orig
# TODO: do a .src/.dst: str for maddrs?
def __repr__(self) -> str: def __repr__(self) -> str:
if not self._transport: if self.msgstream:
return '<Channel with inactive transport?>' return repr(
self.msgstream.stream.socket._sock).replace( # type: ignore
return repr( "socket.socket", "Channel")
self._transport.stream.socket._sock return object.__repr__(self)
).replace( # type: ignore
"socket.socket",
"Channel",
)
@property @property
def laddr(self) -> tuple[str, int]|None: def laddr(self) -> Optional[tuple[str, int]]:
return self._transport.laddr if self._transport else None return self.msgstream.laddr if self.msgstream else None
@property @property
def raddr(self) -> tuple[str, int]|None: def raddr(self) -> Optional[tuple[str, int]]:
return self._transport.raddr if self._transport else None return self.msgstream.raddr if self.msgstream else None
async def connect( async def connect(
self, self,
@ -636,62 +356,26 @@ class Channel:
*destaddr, *destaddr,
**kwargs **kwargs
) )
transport = self.set_msg_transport(stream) msgstream = self.set_msg_transport(stream)
log.transport( log.transport(
f'Opened channel[{type(transport)}]: {self.laddr} -> {self.raddr}' f'Opened channel[{type(msgstream)}]: {self.laddr} -> {self.raddr}'
) )
return transport return msgstream
# TODO: something like, async def send(self, item: Any) -> None:
# `pdbp.hideframe_on(errors=[MsgTypeError])`
# instead of the `try/except` hack we have rn..
# seems like a pretty useful thing to have in general
# along with being able to filter certain stack frame(s / sets)
# possibly based on the current log-level?
async def send(
self,
payload: Any,
hide_tb: bool = False, log.transport(f"send `{item}`") # type: ignore
assert self.msgstream
) -> None: await self.msgstream.send(item)
'''
Send a coded msg-blob over the transport.
'''
__tracebackhide__: bool = hide_tb
try:
log.transport(
'=> send IPC msg:\n\n'
f'{pformat(payload)}\n'
)
# assert self._transport # but why typing?
await self._transport.send(
payload,
hide_tb=hide_tb,
)
except BaseException as _err:
err = _err # bind for introspection
if not isinstance(_err, MsgTypeError):
# assert err
__tracebackhide__: bool = False
else:
assert err.cid
raise
async def recv(self) -> Any: async def recv(self) -> Any:
assert self._transport assert self.msgstream
return await self._transport.recv() return await self.msgstream.recv()
# TODO: auto-reconnect features like 0mq/nanomsg?
# -[ ] implement it manually with nods to SC prot
# possibly on multiple transport backends?
# -> seems like that might be re-inventing scalability
# prots tho no?
# try: # try:
# return await self._transport.recv() # return await self.msgstream.recv()
# except trio.BrokenResourceError: # except trio.BrokenResourceError:
# if self._autorecon: # if self._autorecon:
# await self._reconnect() # await self._reconnect()
@ -704,8 +388,8 @@ class Channel:
f'Closing channel to {self.uid} ' f'Closing channel to {self.uid} '
f'{self.laddr} -> {self.raddr}' f'{self.laddr} -> {self.raddr}'
) )
assert self._transport assert self.msgstream
await self._transport.stream.aclose() await self.msgstream.stream.aclose()
self._closed = True self._closed = True
async def __aenter__(self): async def __aenter__(self):
@ -716,11 +400,8 @@ class Channel:
await self.aclose(*args) await self.aclose(*args)
def __aiter__(self): def __aiter__(self):
return self._aiter_msgs return self._agen
# ?TODO? run any reconnection sequence?
# -[ ] prolly should be impl-ed as deco-API?
#
# async def _reconnect(self) -> None: # async def _reconnect(self) -> None:
# """Handle connection failures by polling until a reconnect can be # """Handle connection failures by polling until a reconnect can be
# established. # established.
@ -738,6 +419,7 @@ class Channel:
# else: # else:
# log.transport("Stream connection re-established!") # log.transport("Stream connection re-established!")
# # TODO: run any reconnection sequence
# # on_recon = self._recon_seq # # on_recon = self._recon_seq
# # if on_recon: # # if on_recon:
# # await on_recon(self) # # await on_recon(self)
@ -751,42 +433,23 @@ class Channel:
# " for re-establishment") # " for re-establishment")
# await trio.sleep(1) # await trio.sleep(1)
async def _iter_msgs( async def _aiter_recv(
self self
) -> AsyncGenerator[Any, None]: ) -> AsyncGenerator[Any, None]:
''' '''
Yield `MsgType` IPC msgs decoded and deliverd from Async iterate items from underlying stream.
an underlying `MsgTransport` protocol.
This is a streaming routine alo implemented as an async-gen
func (same a `MsgTransport._iter_pkts()`) gets allocated by
a `.__call__()` inside `.__init__()` where it is assigned to
the `._aiter_msgs` attr.
''' '''
assert self._transport assert self.msgstream
while True: while True:
try: try:
async for msg in self._transport: async for item in self.msgstream:
match msg: yield item
# NOTE: if transport/interchange delivers # sent = yield item
# a type error, we pack it with the far # if sent is not None:
# end peer `Actor.uid` and relay the # # optimization, passing None through all the
# `Error`-msg upward to the `._rpc` stack # # time is pointless
# for normal RAE handling. # await self.msgstream.send(sent)
case MsgTypeError():
yield pack_from_raise(
local_err=msg,
cid=msg.cid,
# XXX we pack it here bc lower
# layers have no notion of an
# actor-id ;)
src_uid=self.uid,
)
case _:
yield msg
except trio.BrokenResourceError: except trio.BrokenResourceError:
# if not self._autorecon: # if not self._autorecon:
@@ -799,14 +462,12 @@ class Channel:
 #                 continue
     def connected(self) -> bool:
-        return self._transport.connected() if self._transport else False
+        return self.msgstream.connected() if self.msgstream else False
-@acm
+@asynccontextmanager
 async def _connect_chan(
-    host: str,
-    port: int
+    host: str, port: int
 ) -> typing.AsyncGenerator[Channel, None]:
     '''
     Create and connect a channel with disconnect on context manager
@@ -816,5 +477,4 @@ async def _connect_chan(
     chan = Channel((host, port))
     await chan.connect()
     yield chan
-    with trio.CancelScope(shield=True):
-        await chan.aclose()
+    await chan.aclose()
View File
@ -1,151 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Multiaddress parser and utils according to the spec(s) defined by
`libp2p` and used in dependent projects such as `ipfs`:
- https://docs.libp2p.io/concepts/fundamentals/addressing/
- https://github.com/libp2p/specs/blob/master/addressing/README.md
'''
from typing import Iterator
from bidict import bidict
# TODO: see if we can leverage libp2p ecosys projects instead of
# rolling our own (parser) impls of the above addressing specs:
# - https://github.com/libp2p/py-libp2p
# - https://docs.libp2p.io/concepts/nat/circuit-relay/#relay-addresses
# prots: bidict[int, str] = bidict({
prots: dict[str, int] = {
'ipv4': 3,
'ipv6': 3,
'wg': 3,
'tcp': 4,
'udp': 4,
# TODO: support the next-gen shite Bo
# 'quic': 4,
# 'ssh': 7, # via rsyscall bootstrapping
}
prot_params: dict[str, tuple[str, ...]] = {
'ipv4': ('addr',),
'ipv6': ('addr',),
'wg': ('addr', 'port', 'pubkey'),
'tcp': ('port',),
'udp': ('port',),
# 'quic': ('port',),
# 'ssh': ('port',),
}
def iter_prot_layers(
multiaddr: str,
) -> Iterator[
tuple[
int,
list[str]
]
]:
'''
Unpack a libp2p style "multiaddress" into multiple "segments"
for each "layer" of the protocoll stack (in OSI terms).
'''
tokens: list[str] = multiaddr.split('/')
root, tokens = tokens[0], tokens[1:]
assert not root # there is a root '/' on LHS
itokens = iter(tokens)
prot: str | None = None
params: list[str] = []
for token in itokens:
# every prot path should start with a known
# key-str.
if token in prots:
if prot is None:
prot: str = token
else:
yield prot, params
prot = token
params = []
elif token not in prots:
params.append(token)
else:
yield prot, params
def parse_maddr(
multiaddr: str,
) -> dict[str, str | int | dict]:
'''
Parse a libp2p style "multiaddress" into its distinct protocol
segments where each segment is of the form:
`../<protocol>/<param0>/<param1>/../<paramN>`
and is loaded into an (order preserving) `layers: dict[str,
dict[str, Any]]` which holds each protocol-layer-segment of the
original `str` path as a separate entry according to its approx
OSI "layer number".
Any `paramN` in the path must be distinctly defined by a str-token in the
(module global) `prot_params` table.
E.g. for wireguard, which requires an address, port number and public key,
the protocol params are specified as the entry:
'wg': ('addr', 'port', 'pubkey'),
and are thus parsed from a maddr in that order:
`'/wg/1.1.1.1/51820/<pubkey>'`
'''
layers: dict[str, str | int | dict] = {}
for (
prot_key,
params,
) in iter_prot_layers(multiaddr):
layer: int = prots[prot_key] # OSI layer used for sorting
ep: dict[str, int | str] = {'layer': layer}
layers[prot_key] = ep
# TODO; validation and resolving of names:
# - each param via a validator provided as part of the
# prot_params def? (also see `"port"` case below..)
# - do a resolv step that will check addrs against
# any loaded network.resolv: dict[str, str]
rparams: list = list(reversed(params))
for key in prot_params[prot_key]:
val: str | int = rparams.pop()
# TODO: UGHH, dunno what we should do for validation
# here, put it in the params spec somehow?
if key == 'port':
val = int(val)
ep[key] = val
return layers
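As a worked example of the parser above (a sketch traced by hand from the `prots`/`prot_params` tables, not from a verified run), a TCP-over-IPv4 multiaddr unpacks into one entry per protocol layer:

    layers = parse_maddr('/ipv4/127.0.0.1/tcp/1616')
    assert layers == {
        'ipv4': {'layer': 3, 'addr': '127.0.0.1'},
        'tcp': {'layer': 4, 'port': 1616},  # 'port' params are int-ified
    }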


@@ -15,70 +15,71 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
Memory "portal" contruct. Memory boundary "Portals": an API for structured
concurrency linked tasks running in disparate memory domains.
"Memory portals" are both an API and set of IPC wrapping primitives
for managing structured concurrency "cancel-scope linked" tasks
running in disparate virtual memory domains - at least in different
OS processes, possibly on different (hardware) hosts.
''' '''
from __future__ import annotations from __future__ import annotations
from contextlib import asynccontextmanager as acm
import importlib import importlib
import inspect import inspect
from typing import ( from typing import (
Any, Any, Optional,
Callable, Callable, AsyncGenerator,
AsyncGenerator, Type,
TYPE_CHECKING,
) )
from functools import partial from functools import partial
from dataclasses import dataclass from dataclasses import dataclass
from pprint import pformat
import warnings import warnings
import trio import trio
from async_generator import asynccontextmanager
from .trionics import maybe_open_nursery from .trionics import maybe_open_nursery
from ._state import ( from ._state import current_actor
current_actor,
)
from ._ipc import Channel from ._ipc import Channel
from .log import get_logger from .log import get_logger
from .msg import ( from .msg import NamespacePath
# Error,
PayloadMsg,
NamespacePath,
Return,
)
from ._exceptions import ( from ._exceptions import (
# unpack_error, unpack_error,
NoResult, NoResult,
ContextCancelled,
) )
from ._context import ( from ._context import Context
Context, from ._streaming import MsgStream
open_context_from_portal,
)
from ._streaming import (
MsgStream,
)
if TYPE_CHECKING:
from ._runtime import Actor
log = get_logger(__name__) log = get_logger(__name__)
def _unwrap_msg(
msg: dict[str, Any],
channel: Channel
) -> Any:
__tracebackhide__ = True
try:
return msg['return']
except KeyError:
# internal error should never get here
assert msg.get('cid'), "Received internal error at portal?"
raise unpack_error(msg, channel) from None
class MessagingError(Exception):
'Some kind of unexpected SC messaging dialog issue'
class Portal: class Portal:
''' '''
A 'portal' to a memory-domain-separated `Actor`. A 'portal' to a(n) (remote) ``Actor``.
A portal is "opened" (and eventually closed) by one side of an A portal is "opened" (and eventually closed) by one side of an
inter-actor communication context. The side which opens the portal inter-actor communication context. The side which opens the portal
is equivalent to a "caller" in function parlance and usually is is equivalent to a "caller" in function parlance and usually is
either the called actor's parent (in process tree hierarchy terms) either the called actor's parent (in process tree hierarchy terms)
or a client interested in scheduling work to be done remotely in a or a client interested in scheduling work to be done remotely in a
process which has a separate (virtual) memory domain. far process.
The portal api allows the "caller" actor to invoke remote routines The portal api allows the "caller" actor to invoke remote routines
and receive results through an underlying ``tractor.Channel`` as and receive results through an underlying ``tractor.Channel`` as
@@ -88,45 +89,22 @@ class Portal:
like having a "portal" between the seperate actor memory spaces. like having a "portal" between the seperate actor memory spaces.
''' '''
# global timeout for remote cancel requests sent to # the timeout for a remote cancel request sent to
# connected (peer) actors. # a(n) (peer) actor.
cancel_timeout: float = 0.5 cancel_timeout = 0.5
def __init__( def __init__(self, channel: Channel) -> None:
self, self.channel = channel
channel: Channel,
) -> None:
self._chan: Channel = channel
# during the portal's lifetime # during the portal's lifetime
self._final_result_pld: Any|None = None self._result_msg: Optional[dict] = None
self._final_result_msg: PayloadMsg|None = None
# When set to a ``Context`` (when _submit_for_result is called) # When set to a ``Context`` (when _submit_for_result is called)
# it is expected that ``result()`` will be awaited at some # it is expected that ``result()`` will be awaited at some
# point. # point.
self._expect_result_ctx: Context|None = None self._expect_result: Context | None = None
self._streams: set[MsgStream] = set() self._streams: set[MsgStream] = set()
self.actor: Actor = current_actor() self.actor = current_actor()
@property
def chan(self) -> Channel:
return self._chan
@property
def channel(self) -> Channel:
'''
Proxy to legacy attr name..
Consider the shorter `Portal.chan` instead of `.channel` ;)
'''
log.debug(
'Consider the shorter `Portal.chan` instead of `.channel` ;)'
)
return self.chan
# TODO: factor this out into a `.highlevel` API-wrapper that uses
# a single `.open_context()` call underneath.
async def _submit_for_result( async def _submit_for_result(
self, self,
ns: str, ns: str,
@@ -134,34 +112,32 @@ class Portal:
**kwargs **kwargs
) -> None: ) -> None:
if self._expect_result_ctx is not None: assert self._expect_result is None, \
raise RuntimeError( "A pending main result has already been submitted"
'A pending main result has already been submitted'
)
self._expect_result_ctx: Context = await self.actor.start_remote_task( self._expect_result = await self.actor.start_remote_task(
self.channel, self.channel,
nsf=NamespacePath(f'{ns}:{func}'), ns,
kwargs=kwargs, func,
portal=self, kwargs
) )
# TODO: we should deprecate this API right? since if we remove async def _return_once(
# `.run_in_actor()` (and instead move it to a `.highlevel`
# wrapper api (around a single `.open_context()` call) we don't
# really have any notion of a "main" remote task any more?
#
# @api_frame
async def wait_for_result(
self, self,
hide_tb: bool = True, ctx: Context,
) -> Any:
) -> dict[str, Any]:
assert ctx._remote_func_type == 'asyncfunc' # single response
msg = await ctx._recv_chan.receive()
return msg
async def result(self) -> Any:
''' '''
Return the final result delivered by a `Return`-msg from the Return the result(s) from the remote actor's "main" task.
remote peer actor's "main" task's `return` statement.
''' '''
__tracebackhide__: bool = hide_tb # __tracebackhide__ = True
# Check for non-rpc errors slapped on the # Check for non-rpc errors slapped on the
# channel for which we always raise # channel for which we always raise
exc = self.channel._exc exc = self.channel._exc
@@ -169,7 +145,7 @@ class Portal:
raise exc raise exc
# not expecting a "main" result # not expecting a "main" result
if self._expect_result_ctx is None: if self._expect_result is None:
log.warning( log.warning(
f"Portal for {self.channel.uid} not expecting a final" f"Portal for {self.channel.uid} not expecting a final"
" result?\nresult() should only be called if subactor" " result?\nresult() should only be called if subactor"
@@ -177,41 +153,14 @@ class Portal:
return NoResult return NoResult
# expecting a "main" result # expecting a "main" result
assert self._expect_result_ctx assert self._expect_result
if self._final_result_msg is None: if self._result_msg is None:
try: self._result_msg = await self._return_once(
( self._expect_result
self._final_result_msg, )
self._final_result_pld,
) = await self._expect_result_ctx._pld_rx.recv_msg(
ipc=self._expect_result_ctx,
expect_msg=Return,
)
except BaseException as err:
# TODO: wrap this into `@api_frame` optionally with
# some kinda filtering mechanism like log levels?
__tracebackhide__: bool = False
raise err
return self._final_result_pld return _unwrap_msg(self._result_msg, self.channel)
# TODO: factor this out into a `.highlevel` API-wrapper that uses
# a single `.open_context()` call underneath.
async def result(
self,
*args,
**kwargs,
) -> Any|Exception:
typname: str = type(self).__name__
log.warning(
f'`{typname}.result()` is DEPRECATED!\n'
f'Use `{typname}.wait_for_result()` instead!\n'
)
return await self.wait_for_result(
*args,
**kwargs,
)
async def _cancel_streams(self): async def _cancel_streams(self):
# terminate all locally running async generator # terminate all locally running async generator
@@ -242,60 +191,33 @@ class Portal:
) -> bool: ) -> bool:
''' '''
Cancel the actor runtime (and thus process) on the far Cancel the actor on the other end of this portal.
end of this portal.
**NOTE** THIS CANCELS THE ENTIRE RUNTIME AND THE
SUBPROCESS, it DOES NOT just cancel the remote task. If you
want to have a handle to cancel a remote ``trio.Task`` look
at `.open_context()` and the definition of
`._context.Context.cancel()` which CAN be used for this
purpose.
''' '''
__runtimeframe__: int = 1 # noqa if not self.channel.connected():
log.cancel("This channel is already closed can't cancel")
chan: Channel = self.channel
if not chan.connected():
log.runtime(
'This channel is already closed, skipping cancel request..'
)
return False return False
reminfo: str = (
f'c)=> {self.channel.uid}\n'
f' |_{chan}\n'
)
log.cancel( log.cancel(
f'Requesting actor-runtime cancel for peer\n\n' f"Sending actor cancel request to {self.channel.uid} on "
f'{reminfo}' f"{self.channel}")
)
self.channel._cancel_called = True
# XXX the one spot we set it?
self.channel._cancel_called: bool = True
try: try:
# send cancel cmd - might not get response # send cancel cmd - might not get response
# XXX: sure would be nice to make this work with # XXX: sure would be nice to make this work with a proper shield
# a proper shield
with trio.move_on_after( with trio.move_on_after(
timeout timeout
or or self.cancel_timeout
self.cancel_timeout
) as cs: ) as cs:
cs.shield: bool = True cs.shield = True
await self.run_from_ns(
'self', await self.run_from_ns('self', 'cancel')
'cancel',
)
return True return True
if cs.cancelled_caught: if cs.cancelled_caught:
# may timeout and we never get an ack (obvi racy) log.cancel(f"May have failed to cancel {self.channel.uid}")
# but that doesn't mean it wasn't cancelled.
log.debug(
'May have failed to cancel peer?\n'
f'{reminfo}'
)
# if we get here some weird cancellation case happened # if we get here some weird cancellation case happened
return False return False
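The shielded-timeout request pattern used in `.cancel_actor()` above, reduced to a standalone `trio` sketch (`send_req` is a hypothetical stand-in for the RPC send):

    import trio

    async def request_with_timeout(send_req, timeout: float) -> bool:
        # shielding keeps an outer cancellation from interrupting the
        # send itself; the timeout bounds how long we wait for an ack.
        with trio.move_on_after(timeout) as cs:
            cs.shield = True
            await send_req()
            return True  # ack'ed in time
        return False  # timed out; the request may still have landed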
@@ -304,15 +226,11 @@ class Portal:
trio.ClosedResourceError, trio.ClosedResourceError,
trio.BrokenResourceError, trio.BrokenResourceError,
): ):
log.debug( log.cancel(
'IPC chan for actor already closed or broken?\n\n' f"{self.channel} for {self.channel.uid} was already "
f'{self.channel.uid}\n' "closed or broken?")
f' |_{self.channel}\n'
)
return False return False
# TODO: do we still need this for low level `Actor`-runtime
# method calls or can we also remove it?
async def run_from_ns( async def run_from_ns(
self, self,
namespace_path: str, namespace_path: str,
@@ -329,35 +247,27 @@ class Portal:
Note:: Note::
A special namespace `self` can be used to invoke `Actor` A special namespace `self` can be used to invoke `Actor`
instance methods in the remote runtime. Currently this instance methods in the remote runtime. Currently this
should only ever be used for `Actor` (method) runtime should only be used solely for ``tractor`` runtime
internals! internals.
''' '''
__runtimeframe__: int = 1 # noqa ctx = await self.actor.start_remote_task(
nsf = NamespacePath( self.channel,
f'{namespace_path}:{function_name}' namespace_path,
) function_name,
ctx: Context = await self.actor.start_remote_task( kwargs,
chan=self.channel,
nsf=nsf,
kwargs=kwargs,
portal=self,
)
return await ctx._pld_rx.recv_pld(
ipc=ctx,
expect_msg=Return,
) )
ctx._portal = self
msg = await self._return_once(ctx)
return _unwrap_msg(msg, self.channel)
# TODO: factor this out into a `.highlevel` API-wrapper that uses
# a single `.open_context()` call underneath.
async def run( async def run(
self, self,
func: str, func: str,
fn_name: str|None = None, fn_name: Optional[str] = None,
**kwargs **kwargs
) -> Any: ) -> Any:
''' '''
Submit a remote function to be scheduled and run by actor, in Submit a remote function to be scheduled and run by actor, in
@@ -367,8 +277,6 @@ class Portal:
remote rpc task or a local async generator instance. remote rpc task or a local async generator instance.
''' '''
__runtimeframe__: int = 1 # noqa
if isinstance(func, str): if isinstance(func, str):
warnings.warn( warnings.warn(
"`Portal.run(namespace: str, funcname: str)` is now" "`Portal.run(namespace: str, funcname: str)` is now"
@@ -378,9 +286,8 @@ class Portal:
DeprecationWarning, DeprecationWarning,
stacklevel=2, stacklevel=2,
) )
fn_mod_path: str = func fn_mod_path = func
assert isinstance(fn_name, str) assert isinstance(fn_name, str)
nsf = NamespacePath(f'{fn_mod_path}:{fn_name}')
else: # function reference was passed directly else: # function reference was passed directly
if ( if (
@@ -393,36 +300,27 @@ class Portal:
raise TypeError( raise TypeError(
f'{func} must be a non-streaming async function!') f'{func} must be a non-streaming async function!')
nsf = NamespacePath.from_ref(func) fn_mod_path, fn_name = NamespacePath.from_ref(func).to_tuple()
ctx = await self.actor.start_remote_task( ctx = await self.actor.start_remote_task(
self.channel, self.channel,
nsf=nsf, fn_mod_path,
kwargs=kwargs, fn_name,
portal=self, kwargs,
) )
return await ctx._pld_rx.recv_pld( ctx._portal = self
ipc=ctx, return _unwrap_msg(
expect_msg=Return, await self._return_once(ctx),
self.channel,
) )
# TODO: factor this out into a `.highlevel` API-wrapper that uses @asynccontextmanager
# a single `.open_context()` call underneath.
@acm
async def open_stream_from( async def open_stream_from(
self, self,
async_gen_func: Callable, # typing: ignore async_gen_func: Callable, # typing: ignore
**kwargs, **kwargs,
) -> AsyncGenerator[MsgStream, None]: ) -> AsyncGenerator[MsgStream, None]:
'''
Legacy one-way streaming API.
TODO: re-impl on top `Portal.open_context()` + an async gen
around `Context.open_stream()`.
'''
__runtimeframe__: int = 1 # noqa
if not inspect.isasyncgenfunction(async_gen_func): if not inspect.isasyncgenfunction(async_gen_func):
if not ( if not (
@@ -432,12 +330,17 @@ class Portal:
raise TypeError( raise TypeError(
f'{async_gen_func} must be an async generator function!') f'{async_gen_func} must be an async generator function!')
ctx: Context = await self.actor.start_remote_task( fn_mod_path, fn_name = NamespacePath.from_ref(
async_gen_func
).to_tuple()
ctx = await self.actor.start_remote_task(
self.channel, self.channel,
nsf=NamespacePath.from_ref(async_gen_func), fn_mod_path,
kwargs=kwargs, fn_name,
portal=self, kwargs
) )
ctx._portal = self
# ensure receive-only stream entrypoint # ensure receive-only stream entrypoint
assert ctx._remote_func_type == 'asyncgen' assert ctx._remote_func_type == 'asyncgen'
@@ -445,14 +348,13 @@ class Portal:
try: try:
# deliver receive only stream # deliver receive only stream
async with MsgStream( async with MsgStream(
ctx=ctx, ctx, ctx._recv_chan,
rx_chan=ctx._rx_chan, ) as rchan:
) as stream: self._streams.add(rchan)
self._streams.add(stream) yield rchan
ctx._stream = stream
yield stream
finally: finally:
# cancel the far end task on consumer close # cancel the far end task on consumer close
# NOTE: this is a special case since we assume that if using # NOTE: this is a special case since we assume that if using
# this ``.open_stream_from()`` api, the stream is a one
@@ -471,14 +373,194 @@ class Portal:
# XXX: should this always be done? # XXX: should this always be done?
# await recv_chan.aclose() # await recv_chan.aclose()
self._streams.remove(stream) self._streams.remove(rchan)
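Typical usage of the one-way streaming api diffed above looks roughly like this sketch (actor and function names are illustrative only):

    import tractor
    import trio

    async def counter(limit: int):
        for i in range(limit):
            yield i

    async def main():
        async with tractor.open_nursery() as an:
            portal = await an.start_actor(
                'streamer',
                enable_modules=[__name__],
            )
            async with portal.open_stream_from(counter, limit=3) as stream:
                async for value in stream:
                    print(value)  # 0, 1, 2
            await portal.cancel_actor()

    if __name__ == '__main__':
        trio.run(main)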
# NOTE: impl is found in `._context`` mod to make @asynccontextmanager
# reading/groking the details simpler code-org-wise. This async def open_context(
# method does not have to be used over that `@acm` module func
# directly, it is for conventience and from the original API self,
# design. func: Callable,
open_context = open_context_from_portal **kwargs,
) -> AsyncGenerator[tuple[Context, Any], None]:
'''
Open an inter-actor task context.
This is a synchronous API which allows for deterministic
setup/teardown of a remote task. The yielded ``Context`` further
allows for opening bidirectional streams, explicit cancellation
and synchronized final result collection. See ``tractor.Context``.
'''
# conduct target func method structural checks
if not inspect.iscoroutinefunction(func) and (
getattr(func, '_tractor_contex_function', False)
):
raise TypeError(
f'{func} must be an async generator function!')
# TODO: i think from here onward should probably
# just be factored into an `@acm` inside a new
# a new `_context.py` mod.
fn_mod_path, fn_name = NamespacePath.from_ref(func).to_tuple()
ctx = await self.actor.start_remote_task(
self.channel,
fn_mod_path,
fn_name,
kwargs,
)
assert ctx._remote_func_type == 'context'
msg = await ctx._recv_chan.receive()
try:
# the "first" value here is delivered by the callee's
# ``Context.started()`` call.
first = msg['started']
ctx._started_called = True
except KeyError:
assert msg.get('cid'), ("Received internal error at context?")
if msg.get('error'):
# raise kerr from unpack_error(msg, self.channel)
raise unpack_error(msg, self.channel) from None
else:
raise MessagingError(
f'Context for {ctx.cid} was expecting a `started` message'
f' but received a non-error msg:\n{pformat(msg)}'
)
_err: BaseException | None = None
ctx._portal: Portal = self
uid: tuple = self.channel.uid
cid: str = ctx.cid
etype: Type[BaseException] | None = None
# deliver context instance and .started() msg value in enter
# tuple.
try:
async with trio.open_nursery() as nurse:
ctx._scope_nursery = nurse
ctx._scope = nurse.cancel_scope
yield ctx, first
# when in allow_overruns mode there may be lingering
# overflow sender tasks remaining?
if nurse.child_tasks:
# ensure we are in overrun state with
# ``._allow_overruns=True`` bc otherwise
# there should be no tasks in this nursery!
if (
not ctx._allow_overruns
or len(nurse.child_tasks) > 1
):
raise RuntimeError(
'Context has sub-tasks but is '
'not in `allow_overruns=True` Mode!?'
)
ctx._scope.cancel()
except ContextCancelled as err:
_err = err
# swallow and mask cross-actor task context cancels that
# were initiated by *this* side's task.
if not ctx._cancel_called:
# XXX: this should NEVER happen!
# from ._debug import breakpoint
# await breakpoint()
raise
# if the context was cancelled by client code
# then we don't need to raise since user code
# is expecting this and the block should exit.
else:
log.debug(f'Context {ctx} cancelled gracefully')
except (
BaseException,
# more specifically, we need to handle these but not
# sure it's worth being pedantic:
# Exception,
# trio.Cancelled,
# KeyboardInterrupt,
) as err:
etype = type(err)
# cancel ourselves on any error.
log.cancel(
'Context cancelled for task, sending cancel request..\n'
f'task:{cid}\n'
f'actor:{uid}'
)
try:
await ctx.cancel()
except trio.BrokenResourceError:
log.warning(
'IPC connection for context is broken?\n'
f'task:{cid}\n'
f'actor:{uid}'
)
raise
else:
if ctx.chan.connected():
log.info(
'Waiting on final context-task result for\n'
f'task: {cid}\n'
f'actor: {uid}'
)
result = await ctx.result()
log.runtime(
f'Context {fn_name} returned '
f'value from callee `{result}`'
)
finally:
# though it should be impossible for any tasks
# operating *in* this scope to have survived
# we tear down the runtime feeder chan last
# to avoid premature stream clobbers.
if ctx._recv_chan is not None:
# should we encapsulate this in the context api?
await ctx._recv_chan.aclose()
if etype:
if ctx._cancel_called:
log.cancel(
f'Context {fn_name} cancelled by caller with\n{etype}'
)
elif _err is not None:
log.cancel(
f'Context for task cancelled by callee with {etype}\n'
f'target: `{fn_name}`\n'
f'task:{cid}\n'
f'actor:{uid}'
)
# XXX: (MEGA IMPORTANT) if this is a root opened process we
# wait for any immediate child in debug before popping the
# context from the runtime msg loop otherwise inside
# ``Actor._push_result()`` the msg will be discarded and in
# the case where that msg is global debugger unlock (via
# a "stop" msg for a stream), this can result in a deadlock
# where the root is waiting on the lock to clear but the
# child has already cleared it and clobbered IPC.
from ._debug import maybe_wait_for_debugger
await maybe_wait_for_debugger()
# remove the context from runtime tracking
self.actor._contexts.pop(
(self.channel.uid, ctx.cid),
None,
)
@dataclass @dataclass
@@ -493,12 +575,7 @@ class LocalPortal:
actor: 'Actor' # type: ignore # noqa actor: 'Actor' # type: ignore # noqa
channel: Channel channel: Channel
async def run_from_ns( async def run_from_ns(self, ns: str, func_name: str, **kwargs) -> Any:
self,
ns: str,
func_name: str,
**kwargs,
) -> Any:
''' '''
Run a requested local function from a namespace path and Run a requested local function from a namespace path and
return it's result. return it's result.
@@ -509,11 +586,11 @@ class LocalPortal:
return await func(**kwargs) return await func(**kwargs)
@acm @asynccontextmanager
async def open_portal( async def open_portal(
channel: Channel, channel: Channel,
tn: trio.Nursery|None = None, nursery: Optional[trio.Nursery] = None,
start_msg_loop: bool = True, start_msg_loop: bool = True,
shield: bool = False, shield: bool = False,
@@ -521,23 +598,15 @@ async def open_portal(
''' '''
Open a ``Portal`` through the provided ``channel``. Open a ``Portal`` through the provided ``channel``.
Spawns a background task to handle RPC processing, normally Spawns a background task to handle message processing (normally
done by the actor-runtime implicitly via a call to done by the actor-runtime implicitly).
`._rpc.process_messages()`. just after connection establishment.
''' '''
actor = current_actor() actor = current_actor()
assert actor assert actor
was_connected: bool = False was_connected = False
async with maybe_open_nursery( async with maybe_open_nursery(nursery, shield=shield) as nursery:
tn,
shield=shield,
strict_exception_groups=False,
# ^XXX^ TODO? soo roll our own then ??
# -> since we kinda want the "if only one `.exception` then
# just raise that" interface?
) as tn:
if not channel.connected(): if not channel.connected():
await channel.connect() await channel.connect()
@@ -546,10 +615,10 @@ async def open_portal(
if channel.uid is None: if channel.uid is None:
await actor._do_handshake(channel) await actor._do_handshake(channel)
msg_loop_cs: trio.CancelScope|None = None msg_loop_cs: Optional[trio.CancelScope] = None
if start_msg_loop: if start_msg_loop:
from ._runtime import process_messages from ._runtime import process_messages
msg_loop_cs = await tn.start( msg_loop_cs = await nursery.start(
partial( partial(
process_messages, process_messages,
actor, actor,
@@ -566,10 +635,12 @@ async def open_portal(
await portal.aclose() await portal.aclose()
if was_connected: if was_connected:
await channel.aclose() # gracefully signal remote channel-msg loop
await channel.send(None)
# await channel.aclose()
# cancel background msg loop task # cancel background msg loop task
if msg_loop_cs is not None: if msg_loop_cs:
msg_loop_cs.cancel() msg_loop_cs.cancel()
tn.cancel_scope.cancel() nursery.cancel_scope.cancel()
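For orientation, the `Portal.open_context()` api diffed above is normally driven like the following sketch (the `@tractor.context` target and all names are illustrative):

    import tractor
    import trio

    @tractor.context
    async def echo(ctx: tractor.Context) -> None:
        await ctx.started('ready')  # delivered as `first` to the caller
        async with ctx.open_stream() as stream:
            async for msg in stream:
                await stream.send(msg)

    async def main():
        async with tractor.open_nursery() as an:
            portal = await an.start_actor('echoer', enable_modules=[__name__])
            async with (
                portal.open_context(echo) as (ctx, first),
                ctx.open_stream() as stream,
            ):
                assert first == 'ready'
                await stream.send('hello')
                assert await stream.receive() == 'hello'
                await ctx.cancel()  # graceful remote-task cancel
            await portal.cancel_actor()

    if __name__ == '__main__':
        trio.run(main)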


@@ -18,28 +18,26 @@
Root actor runtime ignition(s). Root actor runtime ignition(s).
''' '''
from contextlib import asynccontextmanager as acm from contextlib import asynccontextmanager
from functools import partial from functools import partial
import importlib import importlib
import inspect
import logging import logging
import os
import signal import signal
import sys import sys
from typing import Callable import os
import typing
import warnings import warnings
from exceptiongroup import BaseExceptionGroup
import trio import trio
from ._runtime import ( from ._runtime import (
Actor, Actor,
Arbiter, Arbiter,
# TODO: rename and make a non-actor subtype?
# Arbiter as Registry,
async_main, async_main,
) )
from .devx import _debug from . import _debug
from . import _spawn from . import _spawn
from . import _state from . import _state
from . import log from . import log
@@ -48,131 +46,60 @@ from ._exceptions import is_multi_cancelled
# set at startup and after forks # set at startup and after forks
_default_host: str = '127.0.0.1' _default_arbiter_host: str = '127.0.0.1'
_default_port: int = 1616 _default_arbiter_port: int = 1616
# default registry always on localhost
_default_lo_addrs: list[tuple[str, int]] = [(
_default_host,
_default_port,
)]
logger = log.get_logger('tractor') logger = log.get_logger('tractor')
@acm @asynccontextmanager
async def open_root_actor( async def open_root_actor(
*, *,
# defaults are above # defaults are above
registry_addrs: list[tuple[str, int]]|None = None, arbiter_addr: tuple[str, int] | None = None,
# defaults are above # defaults are above
arbiter_addr: tuple[str, int]|None = None, registry_addr: tuple[str, int] | None = None,
name: str|None = 'root', name: str | None = 'root',
# either the `multiprocessing` start method: # either the `multiprocessing` start method:
# https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods # https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
# OR `trio` (the new default). # OR `trio` (the new default).
start_method: _spawn.SpawnMethodKey|None = None, start_method: _spawn.SpawnMethodKey | None = None,
# enables the multi-process debugger support # enables the multi-process debugger support
debug_mode: bool = False, debug_mode: bool = False,
maybe_enable_greenback: bool = True, # `.pause_from_sync()/breakpoint()` support
enable_stack_on_sig: bool = False,
# internal logging # internal logging
loglevel: str|None = None, loglevel: str | None = None,
enable_modules: list|None = None, enable_modules: list | None = None,
rpc_module_paths: list|None = None, rpc_module_paths: list | None = None,
# NOTE: allow caller to ensure that only one registry exists ) -> typing.Any:
# and that this call creates it.
ensure_registry: bool = False,
hide_tb: bool = True,
# XXX, proxied directly to `.devx._debug._maybe_enter_pm()`
# for REPL-entry logic.
debug_filter: Callable[
[BaseException|BaseExceptionGroup],
bool,
] = lambda err: not is_multi_cancelled(err),
# TODO, a way for actors to augment passing derived
# read-only state to sublayers?
# extra_rt_vars: dict|None = None,
) -> Actor:
''' '''
Runtime init entry point for ``tractor``. Runtime init entry point for ``tractor``.
''' '''
_debug.hide_runtime_frames()
__tracebackhide__: bool = hide_tb
# TODO: stick this in a `@cm` defined in `devx._debug`?
#
# Override the global debugger hook to make it play nice with # Override the global debugger hook to make it play nice with
# ``trio``, see much discussion in: # ``trio``, see much discussion in:
# https://github.com/python-trio/trio/issues/1155#issuecomment-742964018 # https://github.com/python-trio/trio/issues/1155#issuecomment-742964018
builtin_bp_handler: Callable = sys.breakpointhook builtin_bp_handler = sys.breakpointhook
orig_bp_path: str|None = os.environ.get( orig_bp_path: str | None = os.environ.get('PYTHONBREAKPOINT', None)
'PYTHONBREAKPOINT', os.environ['PYTHONBREAKPOINT'] = 'tractor._debug._set_trace'
None,
)
if (
debug_mode
and maybe_enable_greenback
and (
maybe_mod := await _debug.maybe_init_greenback(
raise_not_found=False,
)
)
):
logger.info(
f'Found `greenback` installed @ {maybe_mod}\n'
'Enabling `tractor.pause_from_sync()` support!\n'
)
os.environ['PYTHONBREAKPOINT'] = (
'tractor.devx._debug._sync_pause_from_builtin'
)
_state._runtime_vars['use_greenback'] = True
else:
# TODO: disable `breakpoint()` by default (without
# `greenback`) since it will break any multi-actor
# usage by a clobbered TTY's stdstreams!
def block_bps(*args, **kwargs):
raise RuntimeError(
'Trying to use `breakpoint()` eh?\n\n'
'Welp, `tractor` blocks `breakpoint()` built-in calls by default!\n'
'If you need to use it please install `greenback` and set '
'`debug_mode=True` when opening the runtime '
'(either via `.open_nursery()` or `open_root_actor()`)\n'
)
sys.breakpointhook = block_bps
# lol ok,
# https://docs.python.org/3/library/sys.html#sys.breakpointhook
os.environ['PYTHONBREAKPOINT'] = "0"
# attempt to retrieve ``trio``'s sigint handler and stash it # attempt to retrieve ``trio``'s sigint handler and stash it
# on our debugger lock state. # on our debugger lock state.
_debug.DebugStatus._trio_handler = signal.getsignal(signal.SIGINT) _debug.Lock._trio_handler = signal.getsignal(signal.SIGINT)
# mark top most level process as root actor # mark top most level process as root actor
_state._runtime_vars['_is_root'] = True _state._runtime_vars['_is_root'] = True
# caps based rpc list # caps based rpc list
enable_modules = ( enable_modules = enable_modules or []
enable_modules
or
[]
)
if rpc_module_paths: if rpc_module_paths:
warnings.warn( warnings.warn(
@@ -188,34 +115,29 @@ async def open_root_actor(
if arbiter_addr is not None: if arbiter_addr is not None:
warnings.warn( warnings.warn(
'`arbiter_addr` is now deprecated\n' '`arbiter_addr` is now deprecated and has been renamed to'
'Use `registry_addrs: list[tuple]` instead..', '`registry_addr`.\nUse that instead..',
DeprecationWarning, DeprecationWarning,
stacklevel=2, stacklevel=2,
) )
registry_addrs = [arbiter_addr]
registry_addrs: list[tuple[str, int]] = ( registry_addr = (host, port) = (
registry_addrs registry_addr
or or arbiter_addr
_default_lo_addrs or (
_default_arbiter_host,
_default_arbiter_port,
)
) )
assert registry_addrs
loglevel = ( loglevel = (loglevel or log._default_loglevel).upper()
loglevel
or log._default_loglevel
).upper()
if ( if debug_mode and _spawn._spawn_method == 'trio':
debug_mode
and _spawn._spawn_method == 'trio'
):
_state._runtime_vars['_debug_mode'] = True _state._runtime_vars['_debug_mode'] = True
# expose internal debug module to every actor allowing for # expose internal debug module to every actor allowing
# use of ``await tractor.pause()`` # for use of ``await tractor.breakpoint()``
enable_modules.append('tractor.devx._debug') enable_modules.append('tractor._debug')
# if debug mode get's enabled *at least* use that level of # if debug mode get's enabled *at least* use that level of
# logging for some informative console prompts. # logging for some informative console prompts.
@@ -228,196 +150,97 @@ async def open_root_actor(
): ):
loglevel = 'PDB' loglevel = 'PDB'
elif debug_mode: elif debug_mode:
raise RuntimeError( raise RuntimeError(
"Debug mode is only supported for the `trio` backend!" "Debug mode is only supported for the `trio` backend!"
) )
assert loglevel log.get_console_log(loglevel)
_log = log.get_console_log(loglevel)
assert _log
# TODO: factor this into `.devx._stackscope`!! try:
if ( # make a temporary connection to see if an arbiter exists,
debug_mode # if one can't be made quickly we assume none exists.
and arbiter_found = False
enable_stack_on_sig
):
from .devx._stackscope import enable_stack_on_sig
enable_stack_on_sig()
# closed into below ping task-func # TODO: this connect-and-bail forces us to have to carefully
ponged_addrs: list[tuple[str, int]] = [] # rewrap TCP 104-connection-reset errors as EOF so as to avoid
# propagating cancel-causing errors to the channel-msg loop
# machinery. Likely it would be better to eventually have
# a "discovery" protocol with basic handshake instead.
with trio.move_on_after(1):
async with _connect_chan(host, port):
arbiter_found = True
async def ping_tpt_socket( except OSError:
addr: tuple[str, int], # TODO: make this a "discovery" log level?
timeout: float = 1, logger.warning(f"No actor registry found @ {host}:{port}")
) -> None:
'''
Attempt temporary connection to see if a registry is
listening at the requested address by a transport layer
ping.
If a connection can't be made quickly we assume no # create a local actor and start up its main routine/task
server is listening at that addr. if arbiter_found:
'''
try:
# TODO: this connect-and-bail forces us to have to
# carefully rewrap TCP 104-connection-reset errors as
# EOF so as to avoid propagating cancel-causing errors
# to the channel-msg loop machinery. Likely it would
# be better to eventually have a "discovery" protocol
# with basic handshake instead?
with trio.move_on_after(timeout):
async with _connect_chan(*addr):
ponged_addrs.append(addr)
except OSError:
# TODO: make this a "discovery" log level?
logger.info(
f'No actor registry found @ {addr}\n'
)
async with trio.open_nursery() as tn:
for addr in registry_addrs:
tn.start_soon(
ping_tpt_socket,
tuple(addr), # TODO: just drop this requirement?
)
trans_bind_addrs: list[tuple[str, int]] = []
# Create a new local root-actor instance which IS NOT THE
# REGISTRAR
if ponged_addrs:
if ensure_registry:
raise RuntimeError(
f'Failed to open `{name}`@{ponged_addrs}: '
'registry socket(s) already bound'
)
# we were able to connect to an arbiter # we were able to connect to an arbiter
logger.info( logger.info(f"Arbiter seems to exist @ {host}:{port}")
f'Registry(s) seem(s) to exist @ {ponged_addrs}'
)
actor = Actor( actor = Actor(
name=name or 'anonymous', name or 'anonymous',
registry_addrs=ponged_addrs, arbiter_addr=registry_addr,
loglevel=loglevel, loglevel=loglevel,
enable_modules=enable_modules, enable_modules=enable_modules,
) )
# DO NOT use the registry_addrs as the transport server host, port = (host, 0)
# addrs for this new non-registrar, root-actor.
for host, port in ponged_addrs:
# NOTE: zero triggers dynamic OS port allocation
trans_bind_addrs.append((host, 0))
# Start this local actor as the "registrar", aka a regular
# actor who manages the local registry of "mailboxes" of
# other process-tree-local sub-actors.
else: else:
# start this local actor as the arbiter (aka a regular actor who
# manages the local registry of "mailboxes")
# NOTE that if the current actor IS THE REGISTRAR, the # Note that if the current actor is the arbiter it is desirable
# following init steps are taken: # for it to stay up indefinitely until a re-election process has
# - the transport layer server is bound to each (host, port) # taken place - which is not implemented yet FYI).
# pair defined in provided registry_addrs, or the default.
trans_bind_addrs = registry_addrs
# - it is normally desirable for any registrar to stay up
# indefinitely until either all registered (child/sub)
# actors are terminated (via SC supervision) or,
# a re-election process has taken place.
# NOTE: all of ^ which is not implemented yet - see:
# https://github.com/goodboy/tractor/issues/216
# https://github.com/goodboy/tractor/pull/348
# https://github.com/goodboy/tractor/issues/296
actor = Arbiter( actor = Arbiter(
name or 'registrar', name or 'arbiter',
registry_addrs=registry_addrs, arbiter_addr=registry_addr,
loglevel=loglevel, loglevel=loglevel,
enable_modules=enable_modules, enable_modules=enable_modules,
) )
# XXX, in case the root actor runtime was actually run from
# `tractor.to_asyncio.run_as_asyncio_guest()` and NOT
# `.trio.run()`.
actor._infected_aio = _state._runtime_vars['_is_infected_aio']
# Start up main task set via core actor-runtime nurseries.
try: try:
# assign process-local actor # assign process-local actor
_state._current_actor = actor _state._current_actor = actor
# start local channel-server and fake the portal API # start local channel-server and fake the portal API
# NOTE: this won't block since we provide the nursery # NOTE: this won't block since we provide the nursery
ml_addrs_str: str = '\n'.join( logger.info(f"Starting local {actor} @ {host}:{port}")
f'@{addr}' for addr in trans_bind_addrs
)
logger.info(
f'Starting local {actor.uid} on the following transport addrs:\n'
f'{ml_addrs_str}'
)
# start the actor runtime in a new task # start the actor runtime in a new task
async with trio.open_nursery( async with trio.open_nursery() as nursery:
strict_exception_groups=False,
# ^XXX^ TODO? instead unpack any RAE as per "loose" style? # ``_runtime.async_main()`` creates an internal nursery and
) as nursery: # thus blocks here until the entire underlying actor tree has
# terminated thereby conducting structured concurrency.
# ``_runtime.async_main()`` creates an internal nursery
# and blocks here until any underlying actor(-process)
# tree has terminated thereby conducting so called
# "end-to-end" structured concurrency throughout an
# entire hierarchical python sub-process set; all
# "actor runtime" primitives are SC-compat and thus all
# transitively spawned actors/processes must be as
# well.
await nursery.start( await nursery.start(
partial( partial(
async_main, async_main,
actor, actor,
accept_addrs=trans_bind_addrs, accept_addr=(host, port),
parent_addr=None parent_addr=None
) )
) )
try: try:
yield actor yield actor
except ( except (
Exception, Exception,
BaseExceptionGroup, BaseExceptionGroup,
) as err: ) as err:
# TODO, in beginning to handle the subsubactor with entered = await _debug._maybe_enter_pm(err)
# crashed grandparent cases..
#
# was_locked: bool = await _debug.maybe_wait_for_debugger(
# child_in_debug=True,
# )
# XXX NOTE XXX see equiv note inside
# `._runtime.Actor._stream_handler()` where in the
# non-root or root-that-opened-this-manually case we
# wait for the local actor-nursery to exit before
# exiting the transport channel handler.
entered: bool = await _debug._maybe_enter_pm(
err,
api_frame=inspect.currentframe(),
debug_filter=debug_filter,
)
if ( if not entered and not is_multi_cancelled(err):
not entered logger.exception("Root actor crashed:")
and
not is_multi_cancelled(
err,
)
):
logger.exception('Root actor crashed\n')
# ALWAYS re-raise any error bubbled up from the # always re-raise
# runtime!
raise raise
finally: finally:
@@ -430,29 +253,20 @@ async def open_root_actor(
# for an in nurseries: # for an in nurseries:
# tempn.start_soon(an.exited.wait) # tempn.start_soon(an.exited.wait)
logger.info( logger.cancel("Shutting down root actor")
'Closing down root actor' await actor.cancel(
requesting_uid=actor.uid,
) )
await actor.cancel(None) # self cancel
finally: finally:
_state._current_actor = None _state._current_actor = None
_state._last_actor_terminated = actor
# restore built-in `breakpoint()` hook state # restore breakpoint hook state
if ( sys.breakpointhook = builtin_bp_handler
debug_mode if orig_bp_path is not None:
and os.environ['PYTHONBREAKPOINT'] = orig_bp_path
maybe_enable_greenback else:
): # clear env back to having no entry
if builtin_bp_handler is not None: os.environ.pop('PYTHONBREAKPOINT')
sys.breakpointhook = builtin_bp_handler
if orig_bp_path is not None:
os.environ['PYTHONBREAKPOINT'] = orig_bp_path
else:
# clear env back to having no entry
os.environ.pop('PYTHONBREAKPOINT', None)
logger.runtime("Root actor terminated") logger.runtime("Root actor terminated")
@@ -462,23 +276,19 @@ def run_daemon(
# runtime kwargs # runtime kwargs
name: str | None = 'root', name: str | None = 'root',
registry_addrs: list[tuple[str, int]] = _default_lo_addrs, registry_addr: tuple[str, int] = (
_default_arbiter_host,
_default_arbiter_port,
),
start_method: str | None = None, start_method: str | None = None,
debug_mode: bool = False, debug_mode: bool = False,
# TODO, support `infected_aio=True` mode by,
# - calling the appropriate entrypoint-func from `.to_asyncio`
# - maybe init-ing `greenback` as done above in
# `open_root_actor()`.
**kwargs **kwargs
) -> None: ) -> None:
''' '''
Spawn a root (daemon) actor which will respond to RPC; the main Spawn daemon actor which will respond to RPC; the main task simply
task simply starts the runtime and then blocks via embedded starts the runtime and then sleeps forever.
`trio.sleep_forever()`.
This is a very minimal convenience wrapper around starting This is a very minimal convenience wrapper around starting
a "run-until-cancelled" root actor which can be started with a set a "run-until-cancelled" root actor which can be started with a set
@@ -491,8 +301,9 @@ def run_daemon(
importlib.import_module(path) importlib.import_module(path)
async def _main(): async def _main():
async with open_root_actor( async with open_root_actor(
registry_addrs=registry_addrs, registry_addr=registry_addr,
name=name, name=name,
start_method=start_method, start_method=start_method,
debug_mode=debug_mode, debug_mode=debug_mode,
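A minimal `run_daemon()` invocation might look like the sketch below (the `enable_modules` parameter is inferred from the module-import loop above; the module path is hypothetical):

    import tractor

    # blocks until cancelled (e.g. SIGINT), serving RPC for the
    # listed modules from a root actor named 'svc'
    tractor.run_daemon(
        enable_modules=['mypkg.jobs'],
        name='svc',
        loglevel='info',
    )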

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -19,50 +19,48 @@ Machinery for actor process spawning using multiple backends.
""" """
from __future__ import annotations from __future__ import annotations
import multiprocessing as mp
import sys import sys
import platform import platform
from typing import ( from typing import (
Any, Any,
Awaitable,
Literal, Literal,
Optional,
Callable, Callable,
TypeVar, TypeVar,
TYPE_CHECKING, TYPE_CHECKING,
) )
from collections.abc import Awaitable
from exceptiongroup import BaseExceptionGroup
import trio import trio
from trio import TaskStatus from trio_typing import TaskStatus
from .devx._debug import ( from ._debug import (
maybe_wait_for_debugger, maybe_wait_for_debugger,
acquire_debug_lock, acquire_debug_lock,
) )
from tractor._state import ( from ._state import (
current_actor, current_actor,
is_main_process, is_main_process,
is_root_process, is_root_process,
debug_mode, debug_mode,
_runtime_vars,
)
from tractor.log import get_logger
from tractor._portal import Portal
from tractor._runtime import Actor
from tractor._entry import _mp_main
from tractor._exceptions import ActorFailure
from tractor.msg.types import (
SpawnSpec,
) )
from .log import get_logger
from ._portal import Portal
from ._runtime import Actor
from ._entry import _mp_main
from ._exceptions import ActorFailure
if TYPE_CHECKING: if TYPE_CHECKING:
from ._supervise import ActorNursery from ._supervise import ActorNursery
import multiprocessing as mp
ProcessType = TypeVar('ProcessType', mp.Process, trio.Process) ProcessType = TypeVar('ProcessType', mp.Process, trio.Process)
log = get_logger('tractor') log = get_logger('tractor')
# placeholder for an mp start context if so using that backend # placeholder for an mp start context if so using that backend
_ctx: mp.context.BaseContext | None = None _ctx: Optional[mp.context.BaseContext] = None
SpawnMethodKey = Literal[ SpawnMethodKey = Literal[
'trio', # supported on all platforms 'trio', # supported on all platforms
'mp_spawn', 'mp_spawn',
@@ -73,6 +71,7 @@ _spawn_method: SpawnMethodKey = 'trio'
if platform.system() == 'Windows': if platform.system() == 'Windows':
import multiprocessing as mp
_ctx = mp.get_context("spawn") _ctx = mp.get_context("spawn")
async def proc_waiter(proc: mp.Process) -> None: async def proc_waiter(proc: mp.Process) -> None:
@@ -87,7 +86,7 @@ else:
def try_set_start_method( def try_set_start_method(
key: SpawnMethodKey key: SpawnMethodKey
) -> mp.context.BaseContext | None: ) -> Optional[mp.context.BaseContext]:
''' '''
Attempt to set the method for process starting, aka the "actor Attempt to set the method for process starting, aka the "actor
spawning backend". spawning backend".
@@ -143,13 +142,11 @@ async def exhaust_portal(
''' '''
__tracebackhide__ = True __tracebackhide__ = True
try: try:
log.debug( log.debug(f"Waiting on final result from {actor.uid}")
f'Waiting on final result from {actor.uid}'
)
# XXX: streams should never be reaped here since they should # XXX: streams should never be reaped here since they should
# always be established and shutdown using a context manager api # always be established and shutdown using a context manager api
final: Any = await portal.wait_for_result() final = await portal.result()
except ( except (
Exception, Exception,
@@ -157,23 +154,13 @@ async def exhaust_portal(
) as err: ) as err:
# we reraise in the parent task via a ``BaseExceptionGroup`` # we reraise in the parent task via a ``BaseExceptionGroup``
return err return err
except trio.Cancelled as err: except trio.Cancelled as err:
# lol, of course we need this too ;P # lol, of course we need this too ;P
# TODO: merge with above? # TODO: merge with above?
log.warning( log.warning(f"Cancelled result waiter for {portal.actor.uid}")
'Cancelled portal result waiter task:\n'
f'uid: {portal.channel.uid}\n'
f'error: {err}\n'
)
return err return err
else: else:
log.debug( log.debug(f"Returning final result: {final}")
f'Returning final result from portal:\n'
f'uid: {portal.channel.uid}\n'
f'result: {final}\n'
)
return final return final
@@ -185,126 +172,55 @@ async def cancel_on_completion(
) -> None: ) -> None:
''' '''
Cancel actor gracefully once its "main" portal's Cancel actor gracefully once it's "main" portal's
result arrives. result arrives.
Should only be called for actors spawned via the Should only be called for actors spawned with `run_in_actor()`.
`Portal.run_in_actor()` API.
=> and really this API will be deprecated and should be
re-implemented as a `.hilevel.one_shot_task_nursery()`..)
''' '''
# if this call errors we store the exception for later # if this call errors we store the exception for later
# in ``errors`` which will be reraised inside # in ``errors`` which will be reraised inside
# an exception group and we still send out a cancel request # an exception group and we still send out a cancel request
result: Any|Exception = await exhaust_portal( result = await exhaust_portal(portal, actor)
portal,
actor,
)
if isinstance(result, Exception): if isinstance(result, Exception):
errors[actor.uid]: Exception = result errors[actor.uid] = result
log.cancel( log.warning(
'Cancelling subactor runtime due to error:\n\n' f"Cancelling {portal.channel.uid} after error {result}"
f'Portal.cancel_actor() => {portal.channel.uid}\n\n'
f'error: {result}\n'
) )
else: else:
log.runtime( log.runtime(
'Cancelling subactor gracefully:\n\n' f"Cancelling {portal.channel.uid} gracefully "
f'Portal.cancel_actor() => {portal.channel.uid}\n\n' f"after result {result}")
f'result: {result}\n'
)
# cancel the process now that we have a final result # cancel the process now that we have a final result
await portal.cancel_actor() await portal.cancel_actor()
async def hard_kill( async def do_hard_kill(
proc: trio.Process, proc: trio.Process,
terminate_after: int = 3,
terminate_after: int = 1.6,
# NOTE: for mucking with `.pause()`-ing inside the runtime
# whilst also hacking on it XD
# terminate_after: int = 99999,
) -> None: ) -> None:
'''
Un-gracefully terminate an OS level `trio.Process` after timeout.
Used in 2 main cases:
- "unknown remote runtime state": a hanging/stalled actor that
isn't responding after sending a (graceful) runtime cancel
request via an IPC msg.
- "cancelled during spawn": a process who's actor runtime was
cancelled before full startup completed (such that
cancel-request-handling machinery was never fully
initialized) and thus a "cancel request msg" is never going
to be handled.
'''
log.cancel(
'Terminating sub-proc\n'
f'>x)\n'
f' |_{proc}\n'
)
# NOTE: this timeout used to do nothing since we were shielding # NOTE: this timeout used to do nothing since we were shielding
# the ``.wait()`` inside ``new_proc()`` which will pretty much # the ``.wait()`` inside ``new_proc()`` which will pretty much
# never release until the process exits, now it acts as # never release until the process exits, now it acts as
# a hard-kill time ultimatum. # a hard-kill time ultimatum.
with trio.move_on_after(terminate_after) as cs: with trio.move_on_after(terminate_after) as cs:
# NOTE: code below was copied verbatim from the now deprecated # NOTE: This ``__aexit__()`` shields internally.
# (in 0.20.0) ``trio._subprocess.Process.aclose()``, orig doc async with proc: # calls ``trio.Process.aclose()``
# string: log.debug(f"Terminating {proc}")
#
# Close any pipes we have to the process (both input and output)
# and wait for it to exit. If cancelled, kills the process and
# waits for it to finish exiting before propagating the
# cancellation.
#
# This code was originally triggered by ``proc.__aexit__()``
# but now must be called manually.
with trio.CancelScope(shield=True):
if proc.stdin is not None:
await proc.stdin.aclose()
if proc.stdout is not None:
await proc.stdout.aclose()
if proc.stderr is not None:
await proc.stderr.aclose()
try:
await proc.wait()
finally:
if proc.returncode is None:
proc.kill()
with trio.CancelScope(shield=True):
await proc.wait()
# XXX NOTE XXX: zombie squad dispatch:
# (should ideally never, but) If we do get here it means
# graceful termination of a process failed and we need to
# resort to OS level signalling to interrupt and cancel the
# (presumably stalled or hung) actor. Since we never allow
# zombies (as a feature) we ask the OS to do send in the
# removal swad as the last resort.
if cs.cancelled_caught: if cs.cancelled_caught:
# TODO: toss in the skynet-logo face as ascii art? # XXX: should pretty much never get here unless we have
log.critical( # to move the bits from ``proc.__aexit__()`` out and
# 'Well, the #ZOMBIE_LORD_IS_HERE# to collect\n' # into here.
'#T-800 deployed to collect zombie B0\n' log.critical(f"#ZOMBIE_LORD_IS_HERE: {proc}")
f'>x)\n'
f' |_{proc}\n'
)
proc.kill() proc.kill()
async def soft_kill( async def soft_wait(
proc: ProcessType, proc: ProcessType,
wait_func: Callable[ wait_func: Callable[
[ProcessType], [ProcessType],
@@ -313,41 +229,15 @@ async def soft_kill(
portal: Portal, portal: Portal,
) -> None: ) -> None:
''' # Wait for proc termination but **don't yet** call
Wait for proc termination but **don't yet** tear down # ``trio.Process.__aexit__()`` (it tears down stdio
std-streams since it will clobber any ongoing pdb REPL # which will kill any waiting remote pdb trace).
session. # This is a "soft" (cancellable) join/reap.
uid = portal.channel.uid
This is our "soft"/graceful, and thus itself also cancellable,
join/reap on an actor-runtime-in-process shutdown; it is
**not** the same as a "hard kill" via an OS signal (for that
see `.hard_kill()`).
'''
uid: tuple[str, str] = portal.channel.uid
try: try:
log.cancel( log.cancel(f'Soft waiting on actor:\n{uid}')
f'Soft killing sub-actor via portal request\n'
f'\n'
f'(c=> {portal.chan.uid}\n'
f' |_{proc}\n'
)
# wait on sub-proc to signal termination
await wait_func(proc) await wait_func(proc)
except trio.Cancelled: except trio.Cancelled:
with trio.CancelScope(shield=True):
await maybe_wait_for_debugger(
child_in_debug=_runtime_vars.get(
'_debug_mode', False
),
header_msg=(
'Delaying `soft_kill()` subproc reaper while debugger locked..\n'
),
# TODO: need a diff value then default?
# poll_steps=9999999,
)
# if cancelled during a soft wait, cancel the child # if cancelled during a soft wait, cancel the child
# actor before entering the hard reap sequence # actor before entering the hard reap sequence
# below. This means we try to do a graceful teardown # below. This means we try to do a graceful teardown
@@ -358,29 +248,22 @@ async def soft_kill(
async def cancel_on_proc_deth(): async def cancel_on_proc_deth():
''' '''
"Cancel-the-cancel" request: if we detect that the Cancel the actor cancel request if we detect that
underlying sub-process exited prior to the process terminated.
a `Portal.cancel_actor()` call completing.
''' '''
await wait_func(proc) await wait_func(proc)
n.cancel_scope.cancel() n.cancel_scope.cancel()
# start a task to wait on the termination of the
# process by itself waiting on a (caller provided) wait
# function which should unblock when the target process
# has terminated.
n.start_soon(cancel_on_proc_deth) n.start_soon(cancel_on_proc_deth)
# send the actor-runtime a cancel request.
await portal.cancel_actor() await portal.cancel_actor()
if proc.poll() is None: # type: ignore if proc.poll() is None: # type: ignore
log.warning( log.warning(
'Subactor still alive after cancel request?\n\n' 'Actor still alive after cancel request:\n'
f'uid: {uid}\n' f'{uid}'
f'|_{proc}\n'
) )
n.cancel_scope.cancel() n.cancel_scope.cancel()
raise raise
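The "cancel-the-cancel" race from `soft_kill()`/`soft_wait()` above, as a generic `trio` sketch (both callables are stand-ins, not tractor APIs):

    import trio

    async def cancel_or_reap(proc_wait, send_cancel_request) -> None:
        async with trio.open_nursery() as n:

            async def cancel_on_proc_death():
                await proc_wait()        # unblocks once the process dies
                n.cancel_scope.cancel()  # abort the now-pointless request

            n.start_soon(cancel_on_proc_death)
            await send_cancel_request()  # may hang if the peer already died
            n.cancel_scope.cancel()      # done; stop the watcher task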
@@ -392,7 +275,7 @@ async def new_proc(
errors: dict[tuple[str, str], Exception], errors: dict[tuple[str, str], Exception],
# passed through to actor main # passed through to actor main
bind_addrs: list[tuple[str, int]], bind_addr: tuple[str, int],
parent_addr: tuple[str, int], parent_addr: tuple[str, int],
_runtime_vars: dict[str, Any], # serialized and sent to _child _runtime_vars: dict[str, Any], # serialized and sent to _child
@@ -404,7 +287,7 @@ async def new_proc(
) -> None: ) -> None:
# lookup backend spawning target # lookup backend spawning target
target: Callable = _methods[_spawn_method] target = _methods[_spawn_method]
# mark the new actor with the global spawn method # mark the new actor with the global spawn method
subactor._spawn_method = _spawn_method subactor._spawn_method = _spawn_method
@@ -414,7 +297,7 @@ async def new_proc(
actor_nursery, actor_nursery,
subactor, subactor,
errors, errors,
bind_addrs, bind_addr,
parent_addr, parent_addr,
_runtime_vars, # run time vars _runtime_vars, # run time vars
infect_asyncio=infect_asyncio, infect_asyncio=infect_asyncio,
@@ -429,7 +312,7 @@ async def trio_proc(
errors: dict[tuple[str, str], Exception], errors: dict[tuple[str, str], Exception],
# passed through to actor main # passed through to actor main
bind_addrs: list[tuple[str, int]], bind_addr: tuple[str, int],
parent_addr: tuple[str, int], parent_addr: tuple[str, int],
_runtime_vars: dict[str, Any], # serialized and sent to _child _runtime_vars: dict[str, Any], # serialized and sent to _child
*, *,
@@ -472,21 +355,20 @@ async def trio_proc(
spawn_cmd.append("--asyncio") spawn_cmd.append("--asyncio")
cancelled_during_spawn: bool = False cancelled_during_spawn: bool = False
proc: trio.Process|None = None proc: Optional[trio.Process] = None
try: try:
try: try:
proc: trio.Process = await trio.lowlevel.open_process(spawn_cmd) # TODO: needs ``trio_typing`` patch?
log.runtime( proc = await trio.lowlevel.open_process( # type: ignore
'Started new child\n' spawn_cmd)
f'|_{proc}\n'
) log.runtime(f"Started {proc}")
# wait for actor to spawn and connect back to us # wait for actor to spawn and connect back to us
# channel should have handshake completed by the # channel should have handshake completed by the
# local actor by the time we get a ref to it # local actor by the time we get a ref to it
event, chan = await actor_nursery._actor.wait_for_peer( event, chan = await actor_nursery._actor.wait_for_peer(
subactor.uid subactor.uid)
)
except trio.Cancelled: except trio.Cancelled:
cancelled_during_spawn = True cancelled_during_spawn = True
@ -515,20 +397,18 @@ async def trio_proc(
portal, portal,
) )
# send a "spawning specification" which configures the # send additional init params
# initial runtime state of the child. await chan.send({
await chan.send( "_parent_main_data": subactor._parent_main_data,
SpawnSpec( "enable_modules": subactor.enable_modules,
_parent_main_data=subactor._parent_main_data, "_arb_addr": subactor._arb_addr,
enable_modules=subactor.enable_modules, "bind_host": bind_addr[0],
reg_addrs=subactor.reg_addrs, "bind_port": bind_addr[1],
bind_addrs=bind_addrs, "_runtime_vars": _runtime_vars,
_runtime_vars=_runtime_vars, })
)
)
# track subactor in current nursery # track subactor in current nursery
curr_actor: Actor = current_actor() curr_actor = current_actor()
curr_actor._actoruid2nursery[subactor.uid] = actor_nursery curr_actor._actoruid2nursery[subactor.uid] = actor_nursery
# resume caller at next checkpoint now that child is up # resume caller at next checkpoint now that child is up
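
The newer column replaces the raw "init params" dict with a typed "spawning specification" msg; a rough sketch of its shape assuming a `msgspec.Struct` base (field names from the diff, types guessed from usage):

from typing import Any
import msgspec

class SpawnSpecSketch(msgspec.Struct):
    _parent_main_data: dict[str, Any]  # type assumed
    enable_modules: list[str]  # type assumed
    reg_addrs: list[tuple[str, int]]
    bind_addrs: list[tuple[str, int]]
    _runtime_vars: dict[str, Any]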
@ -550,7 +430,7 @@ async def trio_proc(
# This is a "soft" (cancellable) join/reap which # This is a "soft" (cancellable) join/reap which
# will remote cancel the actor on a ``trio.Cancelled`` # will remote cancel the actor on a ``trio.Cancelled``
# condition. # condition.
await soft_kill( await soft_wait(
proc, proc,
trio.Process.wait, trio.Process.wait,
portal portal
@ -559,24 +439,18 @@ async def trio_proc(
# cancel result waiter that may have been spawned in # cancel result waiter that may have been spawned in
# tandem if not done already # tandem if not done already
log.cancel( log.cancel(
'Cancelling portal result reaper task\n' "Cancelling existing result waiter task for "
f'>c)\n' f"{subactor.uid}")
f' |_{subactor.uid}\n'
)
nursery.cancel_scope.cancel() nursery.cancel_scope.cancel()
finally: finally:
# XXX NOTE XXX: The "hard" reap since no actor zombies are # The "hard" reap since no actor zombies are allowed!
# allowed! Do this **after** cancellation/teardown to avoid # XXX: do this **after** cancellation/teardown to avoid
# killing the process too early. # killing the process too early.
if proc: if proc:
log.cancel( log.cancel(f'Hard reap sequence starting for {subactor.uid}')
f'Hard reap sequence starting for subactor\n'
f'>x)\n'
f' |_{subactor}@{subactor.uid}\n'
)
with trio.CancelScope(shield=True): with trio.CancelScope(shield=True):
# don't clobber an ongoing pdb # don't clobber an ongoing pdb
if cancelled_during_spawn: if cancelled_during_spawn:
# Try again to avoid TTY clobbering. # Try again to avoid TTY clobbering.
@ -584,40 +458,22 @@ async def trio_proc(
with trio.move_on_after(0.5): with trio.move_on_after(0.5):
await proc.wait() await proc.wait()
await maybe_wait_for_debugger( if is_root_process():
child_in_debug=_runtime_vars.get( # TODO: solve the following issue where we need
'_debug_mode', False # to do a similar wait like this but in an
), # "intermediary" parent actor that itself isn't
header_msg=( # in debug but has a child that is, and we need
'Delaying subproc reaper while debugger locked..\n' # to hold off on relaying SIGINT until that child
), # is complete.
# https://github.com/goodboy/tractor/issues/320
# TODO: need a diff value then default? await maybe_wait_for_debugger(
# poll_steps=9999999, child_in_debug=_runtime_vars.get(
) '_debug_mode', False),
# TODO: solve the following issue where we need )
# to do a similar wait like this but in an
# "intermediary" parent actor that itself isn't
# in debug but has a child that is, and we need
# to hold off on relaying SIGINT until that child
# is complete.
# https://github.com/goodboy/tractor/issues/320
# -[ ] we need to handle non-root parent-actors specially
# by somehow determining if a child is in debug and then
# avoiding cancel/kill of said child by this
# (intermediary) parent until such a time as the root says
# the pdb lock is released and we are good to tear down
# (our children)..
#
# -[ ] so maybe something like this where we try to
# acquire the lock and get notified of who has it,
# check that uid against our known children?
# this_uid: tuple[str, str] = current_actor().uid
# await acquire_debug_lock(this_uid)
if proc.poll() is None: if proc.poll() is None:
log.cancel(f"Attempting to hard kill {proc}") log.cancel(f"Attempting to hard kill {proc}")
await hard_kill(proc) await do_hard_kill(proc)
log.debug(f"Joined {proc}") log.debug(f"Joined {proc}")
else: else:
@ -635,7 +491,7 @@ async def mp_proc(
subactor: Actor, subactor: Actor,
errors: dict[tuple[str, str], Exception], errors: dict[tuple[str, str], Exception],
# passed through to actor main # passed through to actor main
bind_addrs: list[tuple[str, int]], bind_addr: tuple[str, int],
parent_addr: tuple[str, int], parent_addr: tuple[str, int],
_runtime_vars: dict[str, Any], # serialized and sent to _child _runtime_vars: dict[str, Any], # serialized and sent to _child
*, *,
@ -693,7 +549,7 @@ async def mp_proc(
target=_mp_main, target=_mp_main,
args=( args=(
subactor, subactor,
bind_addrs, bind_addr,
fs_info, fs_info,
_spawn_method, _spawn_method,
parent_addr, parent_addr,
@ -761,7 +617,7 @@ async def mp_proc(
# This is a "soft" (cancellable) join/reap which # This is a "soft" (cancellable) join/reap which
# will remote cancel the actor on a ``trio.Cancelled`` # will remote cancel the actor on a ``trio.Cancelled``
# condition. # condition.
await soft_kill( await soft_wait(
proc, proc,
proc_waiter, proc_waiter,
portal portal


@ -18,83 +18,27 @@
Per process state Per process state
""" """
from __future__ import annotations
from contextvars import (
ContextVar,
)
from typing import ( from typing import (
Optional,
Any, Any,
TYPE_CHECKING,
) )
from trio.lowlevel import current_task _current_actor: Optional['Actor'] = None # type: ignore # noqa
if TYPE_CHECKING:
from ._runtime import Actor
from ._context import Context
_current_actor: Actor|None = None # type: ignore # noqa
_last_actor_terminated: Actor|None = None
# TODO: mk this a `msgspec.Struct`!
_runtime_vars: dict[str, Any] = { _runtime_vars: dict[str, Any] = {
'_debug_mode': False, '_debug_mode': False,
'_is_root': False, '_is_root': False,
'_root_mailbox': (None, None), '_root_mailbox': (None, None)
'_registry_addrs': [],
'_is_infected_aio': False,
# for `tractor.pause_from_sync()` & `breakpoint()` support
'use_greenback': False,
} }
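
Since `_runtime_vars` is plain process-global state, runtime flags are read with ordinary dict lookups; e.g. a sketch of a debug-mode check (import path per this module):

from tractor import _state  # '_state' is this module in the tree

def debug_mode_sketch() -> bool:
    return bool(_state._runtime_vars.get('_debug_mode', False))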
def last_actor() -> Actor|None: def current_actor(err_on_no_runtime: bool = True) -> 'Actor': # type: ignore # noqa
'''
Try to return last active `Actor` singleton
for this process.
For the case where the runtime has already exited but someone is asking
about the "last" actor, probably to get its `.uid: tuple`.
'''
return _last_actor_terminated
def current_actor(
err_on_no_runtime: bool = True,
) -> Actor:
''' '''
Get the process-local actor instance. Get the process-local actor instance.
''' '''
if ( from ._exceptions import NoRuntime
err_on_no_runtime if _current_actor is None and err_on_no_runtime:
and raise NoRuntime("No local actor has been initialized yet")
_current_actor is None
):
msg: str = 'No local actor has been initialized yet?\n'
from ._exceptions import NoRuntime
if last := last_actor():
msg += (
f'Apparently the last active actor was\n'
f'|_{last}\n'
f'|_{last.uid}\n'
)
# no actor runtime has (as of yet) ever been started for
# this process.
else:
msg += (
# 'No last actor found?\n'
'\nDid you forget to call one of,\n'
'- `tractor.open_root_actor()`\n'
'- `tractor.open_nursery()`\n'
)
raise NoRuntime(msg)
return _current_actor return _current_actor
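
Usage sketch for the no-runtime guard (per the `err_on_no_runtime` flag above) instead of catching `NoRuntime`:

import tractor

actor = tractor.current_actor(err_on_no_runtime=False)
if actor is None:
    print('no actor runtime started in this process')
else:
    print(f'running as {actor.uid}')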
@ -108,7 +52,6 @@ def is_main_process() -> bool:
return mp.current_process().name == 'MainProcess' return mp.current_process().name == 'MainProcess'
# TODO, more verby name?
def debug_mode() -> bool: def debug_mode() -> bool:
''' '''
Bool determining if "debug mode" is on which enables Bool determining if "debug mode" is on which enables
@ -120,26 +63,3 @@ def debug_mode() -> bool:
def is_root_process() -> bool: def is_root_process() -> bool:
return _runtime_vars['_is_root'] return _runtime_vars['_is_root']
_ctxvar_Context: ContextVar[Context] = ContextVar(
'ipc_context',
default=None,
)
def current_ipc_ctx(
error_on_not_set: bool = False,
) -> Context|None:
ctx: Context = _ctxvar_Context.get()
if (
not ctx
and error_on_not_set
):
from ._exceptions import InternalError
raise InternalError(
'No IPC context has been allocated for this task yet?\n'
f'|_{current_task()}\n'
)
return ctx
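
The `current_ipc_ctx()` lookup in the newer column rides on stdlib `contextvars`; a self-contained sketch of the same task-local pattern (demo names only):

from contextvars import ContextVar

_demo_ctx: ContextVar[str | None] = ContextVar('demo_ctx', default=None)

def current_demo_ctx() -> str | None:
    return _demo_ctx.get()

token = _demo_ctx.set('ctx-for-this-task')
assert current_demo_ctx() == 'ctx-for-this-task'
_demo_ctx.reset(token)  # restore the prior (default) value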


@ -21,12 +21,10 @@ The machinery and types behind ``Context.open_stream()``
''' '''
from __future__ import annotations from __future__ import annotations
from contextlib import asynccontextmanager as acm
import inspect import inspect
from pprint import pformat from contextlib import asynccontextmanager as acm
from typing import ( from typing import (
Any, Any,
AsyncGenerator,
Callable, Callable,
AsyncIterator, AsyncIterator,
TYPE_CHECKING, TYPE_CHECKING,
@ -36,27 +34,16 @@ import warnings
import trio import trio
from ._exceptions import ( from ._exceptions import (
ContextCancelled, unpack_error,
RemoteActorError,
) )
from .log import get_logger from .log import get_logger
from .trionics import ( from .trionics import (
broadcast_receiver, broadcast_receiver,
BroadcastReceiver, BroadcastReceiver,
) )
from tractor.msg import (
Error,
Return,
Stop,
MsgType,
PayloadT,
Yield,
)
if TYPE_CHECKING: if TYPE_CHECKING:
from ._runtime import Actor
from ._context import Context from ._context import Context
from ._ipc import Channel
log = get_logger(__name__) log = get_logger(__name__)
@ -67,12 +54,14 @@ log = get_logger(__name__)
# messages? class ReceiveChannel(AsyncResource, Generic[ReceiveType]): # messages? class ReceiveChannel(AsyncResource, Generic[ReceiveType]):
# - use __slots__ on ``Context``? # - use __slots__ on ``Context``?
class MsgStream(trio.abc.Channel): class MsgStream(trio.abc.Channel):
''' '''
A bidirectional message stream for receiving logically sequenced A bidirectional message stream for receiving logically sequenced
values over an inter-actor IPC `Channel`. values over an inter-actor IPC ``Channel``.
This is the type returned to a local task which entered either
``Portal.open_stream_from()`` or ``Context.open_stream()``.
Termination rules: Termination rules:
@ -88,301 +77,130 @@ class MsgStream(trio.abc.Channel):
self, self,
ctx: Context, # typing: ignore # noqa ctx: Context, # typing: ignore # noqa
rx_chan: trio.MemoryReceiveChannel, rx_chan: trio.MemoryReceiveChannel,
_broadcaster: BroadcastReceiver|None = None, _broadcaster: BroadcastReceiver | None = None,
) -> None: ) -> None:
self._ctx = ctx self._ctx = ctx
self._rx_chan = rx_chan self._rx_chan = rx_chan
self._broadcaster = _broadcaster self._broadcaster = _broadcaster
# any actual IPC msg which is effectively an `EndOfStream`
self._stop_msg: bool|Stop = False
# flag to denote end of stream # flag to denote end of stream
self._eoc: bool|trio.EndOfChannel = False self._eoc: bool = False
self._closed: bool|trio.ClosedResourceError = False self._closed: bool = False
@property
def ctx(self) -> Context:
'''
A read-only ref to this stream's inter-actor-task `Context`.
'''
return self._ctx
@property
def chan(self) -> Channel:
'''
Ref to the containing `Context`'s transport `Channel`.
'''
return self._ctx.chan
# TODO: could we make this a direct method bind to `PldRx`?
# -> receive_nowait = PldRx.recv_pld
# |_ means latter would have to accept `MsgStream`-as-`self`?
# => should be fine as long as,
# -[ ] both define `._rx_chan`
# -[ ] .ctx is bound into `PldRx` using a `@cm`?
#
# delegate directly to underlying mem channel # delegate directly to underlying mem channel
def receive_nowait( def receive_nowait(self):
self, msg = self._rx_chan.receive_nowait()
expect_msg: MsgType = Yield, return msg['yield']
) -> PayloadT:
ctx: Context = self._ctx
(
msg,
pld,
) = ctx._pld_rx.recv_msg_nowait(
ipc=self,
expect_msg=expect_msg,
)
# ?TODO, maybe factor this into a hyper-common `unwrap_pld()` async def receive(self):
# '''Async receive a single msg from the IPC transport, the next
match msg: in sequence for this stream.
# XXX, these never seems to ever hit? cool?
case Stop():
log.cancel(
f'Msg-stream was ended via stop msg\n'
f'{msg}'
)
case Error():
log.error(
f'Msg-stream was ended via error msg\n'
f'{msg}'
)
# XXX NOTE, always set any final result on the ctx to
# avoid teardown race conditions where previously this msg
# would be consumed silently (by `.aclose()` doing its
# own "msg drain loop" but WITHOUT those `drained: lists[MsgType]`
# being post-close-processed!
#
# !!TODO, see the equiv todo-comment in `.receive()`
# around the `if drained:` where we should prolly
# ACTUALLY be doing this post-close processing??
#
case Return(pld=pld):
log.warning(
f'Msg-stream final result msg for IPC ctx?\n'
f'{msg}'
)
# XXX TODO, this **should be covered** by higher
# scoped runtime-side method calls such as
# `Context._deliver_msg()`, so you should never
# really see the warning above or else something
# racy/out-of-order is likely going on between
# actor-runtime-side push tasks and the user-app-side
# consume tasks!
# -[ ] figure out that set of race cases and fix!
# -[ ] possibly return the `msg` given an input
# arg-flag is set so we can process the `Return`
# from the `.aclose()` caller?
#
# breakpoint() # to debug this RACE CASE!
ctx._result = pld
ctx._outcome_msg = msg
return pld
async def receive(
self,
hide_tb: bool = False,
):
'''
Receive a single msg from the IPC transport, the next in
sequence sent by the far end task (possibly in order as
determined by the underlying protocol).
''' '''
__tracebackhide__: bool = hide_tb # see ``.aclose()`` for notes on the old behaviour prior to
# NOTE FYI: `trio.ReceiveChannel` implements EOC handling as
# follows (aka uses it to gracefully exit async for loops):
#
# async def __anext__(self) -> ReceiveType:
# try:
# return await self.receive()
# except trio.EndOfChannel:
# raise StopAsyncIteration
#
# see `.aclose()` for notes on the old behaviour prior to
# introducing this # introducing this
if self._eoc: if self._eoc:
raise self._eoc raise trio.EndOfChannel
if self._closed: if self._closed:
raise self._closed raise trio.ClosedResourceError('This stream was closed')
src_err: Exception|None = None # orig tb
try: try:
ctx: Context = self._ctx msg = await self._rx_chan.receive()
pld = await ctx._pld_rx.recv_pld( return msg['yield']
ipc=self,
expect_msg=Yield,
)
return pld
# XXX: the stream terminates on either of: except KeyError as err:
# - `self._rx_chan.receive()` raising after manual closure # internal error should never get here
# by the rpc-runtime, assert msg.get('cid'), ("Received internal error at portal?")
# OR
# - via a `Stop`-msg received from remote peer task.
# NOTE
# |_ previously this was triggered by calling
# `._rx_chan.aclose()` on the send side of the channel
# inside `Actor._deliver_ctx_payload()`, but now the 'stop'
# message handling gets delegated to `PldRFx.recv_pld()`
# internals.
except trio.EndOfChannel as eoc:
# a graceful stream finished signal
self._eoc = eoc
src_err = eoc
# a `ClosedResourceError` indicates that the internal feeder # TODO: handle 2 cases with 3.10 match syntax
# memory receive channel was closed likely by the runtime # - 'stop'
# after the associated transport-channel disconnected or # - 'error'
# broke. # possibly just handle msg['stop'] here!
except trio.ClosedResourceError as cre: # by self._rx_chan.receive()
src_err = cre
log.warning(
'`Context._rx_chan` was already closed?'
)
self._closed = cre
# when the send is closed we assume the stream has if self._closed:
# terminated and signal this local iterator to stop raise trio.ClosedResourceError('This stream was closed')
drained: list[Exception|dict] = await self.aclose()
if drained:
# ^^^^^^^^TODO? pass these to the `._ctx._drained_msgs:
# deque` and then iterate them as part of any
# `.wait_for_result()` call?
#
# -[ ] move the match-case processing from
# `.receive_nowait()` instead to right here, use it from
# a for msg in drained:` post-proc loop?
#
log.warning(
'Drained context msgs during closure\n\n'
f'{drained}'
)
# NOTE XXX: if the context was cancelled or remote-errored if msg.get('stop') or self._eoc:
# but we received the stream close msg first, we log.debug(f"{self} was stopped at remote end")
# probably want to instead raise the remote error
# over the end-of-stream connection error since likely
# the remote error was the source cause?
# ctx: Context = self._ctx
ctx.maybe_raise(
raise_ctxc_from_self_call=True,
from_src_exc=src_err,
)
# propagate any error but hide low-level frame details from # XXX: important to set so that a new ``.receive()``
# the caller by default for console/debug-REPL noise # call (likely by another task using a broadcast receiver)
# reduction. # doesn't accidentally pull the ``return`` message
if ( # value out of the underlying feed mem chan!
hide_tb self._eoc = True
and (
# XXX NOTE special conditions: don't reraise on # # when the send is closed we assume the stream has
# certain stream-specific internal error types like, # # terminated and signal this local iterator to stop
# # await self.aclose()
# - `trio.EoC` since we want to use the exact instance
# to ensure that it is the error that bubbles upward
# for silent absorption by `Context.open_stream()`.
not self._eoc
# - `RemoteActorError` (or subtypes like ctxc) # XXX: this causes ``ReceiveChannel.__anext__()`` to
# since we want to present the error as though it is # raise a ``StopAsyncIteration`` **and** in our catch
# "sourced" directly from this `.receive()` call and # block below it will trigger ``.aclose()``.
# generally NOT include the stack frames raised from raise trio.EndOfChannel from err
# inside the `PldRx` and/or the transport stack
# layers. # TODO: test that shows stream raising an expected error!!!
or isinstance(src_err, RemoteActorError) elif msg.get('error'):
) # raise the error message
raise unpack_error(msg, self._ctx.chan)
else:
raise
except (
trio.ClosedResourceError, # by self._rx_chan
trio.EndOfChannel, # by self._rx_chan or `stop` msg from far end
): ):
raise type(src_err)(*src_err.args) from src_err # XXX: we close the stream on any of these error conditions:
else:
# for any non-graceful-EOC we want to NOT hide this frame
if not self._eoc:
__tracebackhide__: bool = False
raise src_err # a ``ClosedResourceError`` indicates that the internal
# feeder memory receive channel was closed likely by the
# runtime after the associated transport-channel
# disconnected or broke.
async def aclose(self) -> list[Exception|dict]: # an ``EndOfChannel`` indicates either the internal recv
# memchan exhausted **or** we raised it just above after
# receiving a `stop` message from the far end of the stream.
# Previously this was triggered by calling ``.aclose()`` on
# the send side of the channel inside
# ``Actor._push_result()`` (should still be commented code
# there - which should eventually get removed), but now the
# 'stop' message handling has been put just above.
# TODO: Locally, we want to close this stream gracefully, by
# terminating any local consumers tasks deterministically.
# One we have broadcast support, we **don't** want to be
# closing this stream and not flushing a final value to
# remaining (clone) consumers who may not have been
# scheduled to receive it yet.
# when the send is closed we assume the stream has
# terminated and signal this local iterator to stop
await self.aclose()
raise # propagate
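
Consumer-side sketch of the termination rules above: `MsgStream` is a `trio.abc.Channel` so a graceful EoC ends `async for` cleanly, while a closed feeder mem-chan surfaces as `ClosedResourceError` (stream object assumed already opened):

import trio

async def consume_stream(stream) -> list:
    received: list = []
    try:
        # `trio.EndOfChannel` terminates the loop via StopAsyncIteration
        async for msg in stream:
            received.append(msg)
    except trio.ClosedResourceError:
        # feeder mem-chan was closed out from under the reader
        pass
    return received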
async def aclose(self):
''' '''
Cancel associated remote actor task and local memory channel on Cancel associated remote actor task and local memory channel on
close. close.
Notes:
- REMEMBER that this is also called by `.__aexit__()` so
careful consideration must be made to handle whatever
internal state is mutated, particularly in terms of
draining IPC msgs!
- more or less we try to maintain adherence to trio's `.aclose()` semantics:
https://trio.readthedocs.io/en/stable/reference-io.html#trio.abc.AsyncResource.aclose
''' '''
# XXX NOTE XXX # XXX: keep proper adherence to trio's `.aclose()` semantics:
# it's SUPER IMPORTANT that we ensure we don't DOUBLE # https://trio.readthedocs.io/en/stable/reference-io.html#trio.abc.AsyncResource.aclose
# DRAIN msgs on closure so avoid getting stuck handing on rx_chan = self._rx_chan
# the `._rx_chan` since we call this method on
# `.__aexit__()` as well!!! if rx_chan._closed:
# => SO ENSURE WE CATCH ALL TERMINATION STATES in this log.cancel(f"{self} is already closed")
# block including the EoC..
if self.closed:
# this stream has already been closed so silently succeed as # this stream has already been closed so silently succeed as
# per ``trio.AsyncResource`` semantics. # per ``trio.AsyncResource`` semantics.
# https://trio.readthedocs.io/en/stable/reference-io.html#trio.abc.AsyncResource.aclose # https://trio.readthedocs.io/en/stable/reference-io.html#trio.abc.AsyncResource.aclose
# import tractor return
# await tractor.pause()
return []
ctx: Context = self._ctx self._eoc = True
drained: list[Exception|dict] = []
while not drained:
try:
maybe_final_msg: Yield|Return = self.receive_nowait(
expect_msg=Yield|Return,
)
if maybe_final_msg:
log.debug(
'Drained un-processed stream msg:\n'
f'{pformat(maybe_final_msg)}'
)
# TODO: inject into parent `Context` buf?
drained.append(maybe_final_msg)
# NOTE: we only need these handlers due to the
# `.receive_nowait()` call above which may re-raise
# one of these errors on a msg key error!
except trio.WouldBlock as be:
drained.append(be)
break
except trio.EndOfChannel as eoc:
self._eoc: Exception = eoc
drained.append(eoc)
break
except trio.ClosedResourceError as cre:
self._closed = cre
drained.append(cre)
break
except ContextCancelled as ctxc:
# log.exception('GOT CTXC')
log.cancel(
'Context was cancelled during stream closure:\n'
f'canceller: {ctxc.canceller}\n'
f'{pformat(ctxc.msgdata)}'
)
break
# NOTE: this is super subtle IPC messaging stuff: # NOTE: this is super subtle IPC messaging stuff:
# Relay stop iteration to far end **iff** we're # Relay stop iteration to far end **iff** we're
@ -413,53 +231,26 @@ class MsgStream(trio.abc.Channel):
except ( except (
trio.BrokenResourceError, trio.BrokenResourceError,
trio.ClosedResourceError trio.ClosedResourceError
) as re: ):
# the underlying channel may already have been pulled # the underlying channel may already have been pulled
# in which case our stop message is meaningless since # in which case our stop message is meaningless since
# it can't traverse the transport. # it can't traverse the transport.
ctx = self._ctx
log.warning( log.warning(
f'Stream was already destroyed?\n' f'Stream was already destroyed?\n'
f'actor: {ctx.chan.uid}\n' f'actor: {ctx.chan.uid}\n'
f'ctx id: {ctx.cid}' f'ctx id: {ctx.cid}'
) )
drained.append(re)
self._closed = re
# if caught_eoc: self._closed = True
# # from .devx import _debug
# # await _debug.pause()
# with trio.CancelScope(shield=True):
# await rx_chan.aclose()
if not self._eoc: # Do we close the local mem chan ``self._rx_chan`` ??!?
this_side: str = self._ctx.side
peer_side: str = self._ctx.peer_side
message: str = (
f'Stream self-closed by {this_side!r}-side before EoC from {peer_side!r}\n'
# } bc a stream is a "scope"/msging-phase inside an IPC
f'x}}>\n'
f' |_{self}\n'
)
log.cancel(message)
self._eoc = trio.EndOfChannel(message)
if ( # NO, DEFINITELY NOT if we're a bi-dir ``MsgStream``!
(rx_chan := self._rx_chan) # BECAUSE this same core-msg-loop mem recv-chan is used to deliver
and # the potential final result from the surrounding inter-actor
(stats := rx_chan.statistics()).tasks_waiting_receive # `Context` so we don't want to close it until that context has
): # run to completion.
log.cancel(
f'Msg-stream is closing but there are still reader tasks,\n'
f'{stats}\n'
)
# ?XXX WAIT, why do we not close the local mem chan `._rx_chan` XXX?
# => NO, DEFINITELY NOT! <=
# if we're a bi-dir `MsgStream` BECAUSE this same
# core-msg-loop mem recv-chan is used to deliver the
# potential final result from the surrounding inter-actor
# `Context` so we don't want to close it until that
# context has run to completion.
# XXX: Notes on old behaviour: # XXX: Notes on old behaviour:
# await rx_chan.aclose() # await rx_chan.aclose()
@ -488,26 +279,6 @@ class MsgStream(trio.abc.Channel):
# runtime's closure of ``rx_chan`` in the case where we may # runtime's closure of ``rx_chan`` in the case where we may
# still need to consume msgs that are "in transit" from the far # still need to consume msgs that are "in transit" from the far
# end (eg. for ``Context.result()``). # end (eg. for ``Context.result()``).
# self._closed = True
return drained
@property
def closed(self) -> bool:
rxc: bool = self._rx_chan._closed
_closed: bool|Exception = self._closed
_eoc: bool|trio.EndOfChannel = self._eoc
if rxc or _closed or _eoc:
log.runtime(
f'`MsgStream` is already closed\n'
f'{self}\n'
f' |_cid: {self._ctx.cid}\n'
f' |_rx_chan._closed: {type(rxc)} = {rxc}\n'
f' |_closed: {type(_closed)} = {_closed}\n'
f' |_eoc: {type(_eoc)} = {_eoc}'
)
return True
return False
@acm @acm
async def subscribe( async def subscribe(
@ -537,9 +308,6 @@ class MsgStream(trio.abc.Channel):
self, self,
# use memory channel size by default # use memory channel size by default
self._rx_chan._state.max_buffer_size, # type: ignore self._rx_chan._state.max_buffer_size, # type: ignore
# TODO: can remove this kwarg right since
# by default behaviour is to do this anyway?
receive_afunc=self.receive, receive_afunc=self.receive,
) )
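
Fan-out usage sketch for `.subscribe()` (per the `broadcast_receiver` wiring above): each consumer task enters its own subscription so all see the same msg sequence (names illustrative):

import trio

async def reader(stream, name: str) -> None:
    # each task holds its own broadcast-receiver handle
    async with stream.subscribe() as br:
        async for msg in br:
            print(f'{name} got {msg}')

async def fan_out(stream) -> None:
    async with trio.open_nursery() as tn:
        tn.start_soon(reader, stream, 'a')
        tn.start_soon(reader, stream, 'b')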
@ -566,260 +334,19 @@ class MsgStream(trio.abc.Channel):
async def send( async def send(
self, self,
data: Any, data: Any
hide_tb: bool = True,
) -> None: ) -> None:
''' '''
Send a message over this stream to the far end. Send a message over this stream to the far end.
''' '''
__tracebackhide__: bool = hide_tb if self._ctx._remote_error:
raise self._ctx._remote_error # from None
# raise any alreay known error immediately
self._ctx.maybe_raise()
if self._eoc:
raise self._eoc
if self._closed: if self._closed:
raise self._closed raise trio.ClosedResourceError('This stream was already closed')
try:
await self._ctx.chan.send(
payload=Yield(
cid=self._ctx.cid,
pld=data,
),
)
except (
trio.ClosedResourceError,
trio.BrokenResourceError,
BrokenPipeError,
) as trans_err:
if hide_tb:
raise type(trans_err)(
*trans_err.args
) from trans_err
else:
raise
# TODO: msg capability context api1
# @acm
# async def enable_msg_caps(
# self,
# msg_subtypes: Union[
# list[list[Struct]],
# Protocol, # hypothetical type that wraps a msg set
# ],
# ) -> tuple[Callable, Callable]: # payload enc, dec pair
# ...
@acm
async def open_stream_from_ctx(
ctx: Context,
allow_overruns: bool|None = False,
msg_buffer_size: int|None = None,
) -> AsyncGenerator[MsgStream, None]:
'''
Open a `MsgStream`, a bi-directional msg transport dialog
connected to the cross-actor peer task for an IPC `Context`.
This context manager must be entered in both the "parent" (task
which entered `Portal.open_context()`) and "child" (RPC task
which is decorated by `@context`) tasks for the stream to
logically be considered "open"; if one side begins sending to an
un-opened peer, depending on policy config, msgs will either be
queued until the other side opens and/or a `StreamOverrun` will
(eventually) be raised.
------ - ------
Runtime semantics design:
A `MsgStream` session adheres to "one-shot use" semantics,
meaning if you close the scope it **can not** be "re-opened".
Instead you must re-establish a new surrounding RPC `Context`
(RTC: remote task context?) using `Portal.open_context()`.
In the future this *design choice* may need to be changed but
currently there seems to be no obvious reason to support such
semantics..
- "pausing a stream" can be supported with a message implemented
by the `tractor` application dev.
- any remote error will normally require a restart of the entire
`trio.Task`'s scope due to the nature of `trio`'s cancellation
(`CancelScope`) system and semantics (level triggered).
'''
actor: Actor = ctx._actor
# If the surrounding context has been cancelled by some
# task with a handle to THIS, we error here immediately
# since it likely means the surrounding lexical-scope has
# errored, been `trio.Cancelled` or at the least
# `Context.cancel()` was called by some task.
if ctx._cancel_called:
# XXX NOTE: ALWAYS RAISE any remote error here even if
# it's an expected `ContextCancelled` due to a local
# task having called `.cancel()`!
#
# WHY: we expect the error to always bubble up to the
# surrounding `Portal.open_context()` call and be
# absorbed there (silently) and we DO NOT want to
# actually try to stream - a cancel msg was already
# sent to the other side!
ctx.maybe_raise(
raise_ctxc_from_self_call=True,
)
# NOTE: this is diff then calling
# `._maybe_raise_remote_err()` specifically
# because we want to raise a ctxc on any task entering this `.open_stream()`
# AFTER cancellation was already been requested,
# we DO NOT want to absorb any ctxc ACK silently!
# if ctx._remote_error:
# raise ctx._remote_error
# XXX NOTE: if no `ContextCancelled` has been responded
# back from the other side (yet), we raise a different
# runtime error indicating that this task's usage of
# `Context.cancel()` and then `.open_stream()` is WRONG!
task: str = trio.lowlevel.current_task().name
raise RuntimeError(
'Stream opened after `Context.cancel()` called..?\n'
f'task: {actor.uid[0]}:{task}\n'
f'{ctx}'
)
if (
not ctx._portal
and not ctx._started_called
):
raise RuntimeError(
'`Context.started()` must be called before opening a stream'
)
# NOTE: in one way streaming this only happens on the
# parent-ctx-task side (on the side that calls
# `Actor.start_remote_task()`) so if you try to send
# a stop from the caller to the callee in the
# single-direction-stream case you'll get a lookup error
# currently.
ctx: Context = actor.get_context(
chan=ctx.chan,
cid=ctx.cid,
nsf=ctx._nsf,
# side=ctx.side,
msg_buffer_size=msg_buffer_size,
allow_overruns=allow_overruns,
)
ctx._allow_overruns: bool = allow_overruns
assert ctx is ctx
# XXX: If the underlying channel feeder receive mem chan has
# been closed then likely client code has already exited
# a ``.open_stream()`` block prior or there was some other
# unanticipated error or cancellation from ``trio``.
if ctx._rx_chan._closed:
raise trio.ClosedResourceError(
'The underlying channel for this stream was already closed!\n'
)
# NOTE: implicitly this will call `MsgStream.aclose()` on
# `.__aexit__()` due to stream's parent `Channel` type!
#
# XXX NOTE XXX: ensures the stream is "one-shot use",
# which specifically means that on exit,
# - signal ``trio.EndOfChannel``/``StopAsyncIteration`` to
# the far end indicating that the caller exited
# the streaming context purposefully by letting
# the exit block exec.
# - this is diff from the cancel/error case where
# a cancel request from this side or an error
# should be sent to the far end indicating the
# stream WAS NOT just closed normally/gracefully.
async with MsgStream(
ctx=ctx,
rx_chan=ctx._rx_chan,
) as stream:
# NOTE: we track all existing streams per portal for
# the purposes of attempting graceful closes on runtime
# cancel requests.
if ctx._portal:
ctx._portal._streams.add(stream)
try:
ctx._stream_opened: bool = True
ctx._stream = stream
# XXX: do we need this?
# ensure we aren't cancelled before yielding the stream
# await trio.lowlevel.checkpoint()
yield stream
# XXX: (MEGA IMPORTANT) if this is a root opened process we
# wait for any immediate child in debug before popping the
# context from the runtime msg loop otherwise inside
# ``Actor._deliver_ctx_payload()`` the msg will be discarded and in
# the case where that msg is global debugger unlock (via
# a "stop" msg for a stream), this can result in a deadlock
# where the root is waiting on the lock to clear but the
# child has already cleared it and clobbered IPC.
#
# await maybe_wait_for_debugger()
# XXX TODO: pretty sure this isn't needed (see
# note above this block) AND will result in
# a double `.send_stop()` call. The only reason to
# put it here would be to due with "order" in
# terms of raising any remote error (as per
# directly below) or bc the stream's
# `.__aexit__()` block might not get run
# (doubtful)? Either way if we did put this back
# in we also need a state var to avoid the double
# stop-msg send..
#
# await stream.aclose()
# NOTE: absorb and do not raise any
# EoC received from the other side such that
# it is not raised inside the surrounding
# context block's scope!
except trio.EndOfChannel as eoc:
if (
eoc
and
stream.closed
):
# sanity, can remove?
assert eoc is stream._eoc
log.warning(
'Stream was terminated by EoC\n\n'
# NOTE: won't show the error <Type> but
# does show txt followed by IPC msg.
f'{str(eoc)}\n'
)
finally:
if ctx._portal:
try:
ctx._portal._streams.remove(stream)
except KeyError:
log.warning(
f'Stream was already destroyed?\n'
f'actor: {ctx.chan.uid}\n'
f'ctx id: {ctx.cid}'
)
await self._ctx.chan.send({'yield': data, 'cid': self._ctx.cid})
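
End-to-end sketch of the parent/child stream dialog described in `open_stream_from_ctx()`'s docstring; a minimal echo pair assuming the public `@tractor.context` + `Portal.open_context()` APIs (not verbatim docs):

import trio
import tractor

@tractor.context
async def echo(ctx: tractor.Context) -> None:
    await ctx.started('ready')
    async with ctx.open_stream() as stream:
        async for msg in stream:
            await stream.send(msg * 2)

async def main() -> None:
    async with tractor.open_nursery() as an:
        portal = await an.start_actor('echoer', enable_modules=[__name__])
        async with (
            portal.open_context(echo) as (ctx, first),
            ctx.open_stream() as stream,
        ):
            assert first == 'ready'
            await stream.send(21)
            assert await stream.receive() == 42
        await portal.cancel_actor()

if __name__ == '__main__':
    trio.run(main)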
def stream(func: Callable) -> Callable: def stream(func: Callable) -> Callable:
@ -829,7 +356,7 @@ def stream(func: Callable) -> Callable:
''' '''
# TODO: apply whatever solution ``mypy`` ends up picking for this: # TODO: apply whatever solution ``mypy`` ends up picking for this:
# https://github.com/python/mypy/issues/2087#issuecomment-769266912 # https://github.com/python/mypy/issues/2087#issuecomment-769266912
func._tractor_stream_function: bool = True # type: ignore func._tractor_stream_function = True # type: ignore
sig = inspect.signature(func) sig = inspect.signature(func)
params = sig.parameters params = sig.parameters


@ -21,22 +21,22 @@
from contextlib import asynccontextmanager as acm from contextlib import asynccontextmanager as acm
from functools import partial from functools import partial
import inspect import inspect
from pprint import pformat from typing import (
from typing import TYPE_CHECKING Optional,
TYPE_CHECKING,
)
import typing import typing
import warnings import warnings
from exceptiongroup import BaseExceptionGroup
import trio import trio
from .devx._debug import maybe_wait_for_debugger from ._debug import maybe_wait_for_debugger
from ._state import current_actor, is_main_process from ._state import current_actor, is_main_process
from .log import get_logger, get_loglevel from .log import get_logger, get_loglevel
from ._runtime import Actor from ._runtime import Actor
from ._portal import Portal from ._portal import Portal
from ._exceptions import ( from ._exceptions import is_multi_cancelled
is_multi_cancelled,
ContextCancelled,
)
from ._root import open_root_actor from ._root import open_root_actor
from . import _state from . import _state
from . import _spawn from . import _spawn
@ -80,85 +80,54 @@ class ActorNursery:
''' '''
def __init__( def __init__(
self, self,
# TODO: maybe def these as fields of a struct looking type?
actor: Actor, actor: Actor,
ria_nursery: trio.Nursery, ria_nursery: trio.Nursery,
da_nursery: trio.Nursery, da_nursery: trio.Nursery,
errors: dict[tuple[str, str], BaseException], errors: dict[tuple[str, str], BaseException],
) -> None: ) -> None:
# self.supervisor = supervisor # TODO # self.supervisor = supervisor # TODO
self._actor: Actor = actor self._actor: Actor = actor
self._ria_nursery = ria_nursery
# TODO: rename to `._tn` for our conventional "task-nursery"
self._da_nursery = da_nursery self._da_nursery = da_nursery
self._children: dict[ self._children: dict[
tuple[str, str], tuple[str, str],
tuple[ tuple[
Actor, Actor,
trio.Process | mp.Process, trio.Process | mp.Process,
Portal | None, Optional[Portal],
] ]
] = {} ] = {}
# portals spawned with ``run_in_actor()`` are
# cancelled when their "main" result arrives
self._cancel_after_result_on_exit: set = set()
self.cancelled: bool = False self.cancelled: bool = False
self._join_procs = trio.Event() self._join_procs = trio.Event()
self._at_least_one_child_in_debug: bool = False self._at_least_one_child_in_debug: bool = False
self.errors = errors self.errors = errors
self._scope_error: BaseException|None = None
self.exited = trio.Event() self.exited = trio.Event()
# NOTE: when no explicit call is made to
# `.open_root_actor()` by application code,
# `.open_nursery()` will implicitly call it to start the
# actor-tree runtime. In this case we mark ourselves as
# such so that runtime components can be aware for logging
# and syncing purposes to any actor opened nurseries.
self._implicit_runtime_started: bool = False
# TODO: remove the `.run_in_actor()` API and thus this 2ndary
# nursery when that API get's moved outside this primitive!
self._ria_nursery = ria_nursery
# portals spawned with ``run_in_actor()`` are
# cancelled when their "main" result arrives
self._cancel_after_result_on_exit: set = set()
async def start_actor( async def start_actor(
self, self,
name: str, name: str,
*, *,
bind_addr: tuple[str, int] = _default_bind_addr,
bind_addrs: list[tuple[str, int]] = [_default_bind_addr], rpc_module_paths: list[str] | None = None,
rpc_module_paths: list[str]|None = None, enable_modules: list[str] | None = None,
enable_modules: list[str]|None = None, loglevel: str | None = None, # set log level per subactor
loglevel: str|None = None, # set log level per subactor nursery: trio.Nursery | None = None,
debug_mode: bool|None = None, debug_mode: Optional[bool] | None = None,
infect_asyncio: bool = False, infect_asyncio: bool = False,
# TODO: ideally we can rm this once we no longer have
# a `._ria_nursery` since the dependent APIs have been
# removed!
nursery: trio.Nursery|None = None,
) -> Portal: ) -> Portal:
''' '''
Start a (daemon) actor: a process that has no designated Start a (daemon) actor: a process that has no designated
"main task" besides the runtime. "main task" besides the runtime.
''' '''
__runtimeframe__: int = 1 # noqa loglevel = loglevel or self._actor.loglevel or get_loglevel()
loglevel: str = (
loglevel
or self._actor.loglevel
or get_loglevel()
)
# configure and pass runtime state # configure and pass runtime state
_rtv = _state._runtime_vars.copy() _rtv = _state._runtime_vars.copy()
_rtv['_is_root'] = False _rtv['_is_root'] = False
_rtv['_is_infected_aio'] = infect_asyncio
# allow setting debug policy per actor # allow setting debug policy per actor
if debug_mode is not None: if debug_mode is not None:
@ -181,16 +150,14 @@ class ActorNursery:
# modules allowed to invoked funcs from # modules allowed to invoked funcs from
enable_modules=enable_modules, enable_modules=enable_modules,
loglevel=loglevel, loglevel=loglevel,
arbiter_addr=current_actor()._arb_addr,
# verbatim relay this actor's registrar addresses
registry_addrs=current_actor().reg_addrs,
) )
parent_addr = self._actor.accept_addr parent_addr = self._actor.accept_addr
assert parent_addr assert parent_addr
# start a task to spawn a process # start a task to spawn a process
# blocks until process has been started and a portal setup # blocks until process has been started and a portal setup
nursery: trio.Nursery = nursery or self._da_nursery nursery = nursery or self._da_nursery
# XXX: the type ignore is actually due to a `mypy` bug # XXX: the type ignore is actually due to a `mypy` bug
return await nursery.start( # type: ignore return await nursery.start( # type: ignore
@ -200,29 +167,21 @@ class ActorNursery:
self, self,
subactor, subactor,
self.errors, self.errors,
bind_addrs, bind_addr,
parent_addr, parent_addr,
_rtv, # run time vars _rtv, # run time vars
infect_asyncio=infect_asyncio, infect_asyncio=infect_asyncio,
) )
) )
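
Daemon-actor usage sketch for `start_actor()` (no "main task", explicit teardown; illustrative only):

import trio
import tractor

async def main() -> None:
    async with tractor.open_nursery() as an:
        portal = await an.start_actor(
            'daemon',
            enable_modules=[__name__],
        )
        # ... drive the child via portal RPC / contexts here ...
        await portal.cancel_actor()

if __name__ == '__main__':
    trio.run(main)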
# TODO: DEPRECATE THIS:
# -[ ] impl instead as a hilevel wrapper on
# top of a `@context` style invocation.
# |_ dynamic @context decoration on child side
# |_ implicit `Portal.open_context() as (ctx, first):`
# and `return first` on parent side.
# |_ mention how it's similar to `trio-parallel` API?
# -[ ] use @api_frame on the wrapper
async def run_in_actor( async def run_in_actor(
self, self,
fn: typing.Callable, fn: typing.Callable,
*, *,
name: str | None = None, name: Optional[str] = None,
bind_addrs: list[tuple[str, int]] = [_default_bind_addr], bind_addr: tuple[str, int] = _default_bind_addr,
rpc_module_paths: list[str] | None = None, rpc_module_paths: list[str] | None = None,
enable_modules: list[str] | None = None, enable_modules: list[str] | None = None,
loglevel: str | None = None, # set log level per subactor loglevel: str | None = None, # set log level per subactor
@ -231,28 +190,25 @@ class ActorNursery:
**kwargs, # explicit args to ``fn`` **kwargs, # explicit args to ``fn``
) -> Portal: ) -> Portal:
''' """Spawn a new actor, run a lone task, then terminate the actor and
Spawn a new actor, run a lone task, then terminate the actor and
return its result. return its result.
Actors spawned using this method are kept alive at nursery teardown Actors spawned using this method are kept alive at nursery teardown
until the task spawned by executing ``fn`` completes at which point until the task spawned by executing ``fn`` completes at which point
the actor is terminated. the actor is terminated.
"""
''' mod_path = fn.__module__
__runtimeframe__: int = 1 # noqa
mod_path: str = fn.__module__
if name is None: if name is None:
# use the explicit function name if not provided # use the explicit function name if not provided
name = fn.__name__ name = fn.__name__
portal: Portal = await self.start_actor( portal = await self.start_actor(
name, name,
enable_modules=[mod_path] + ( enable_modules=[mod_path] + (
enable_modules or rpc_module_paths or [] enable_modules or rpc_module_paths or []
), ),
bind_addrs=bind_addrs, bind_addr=bind_addr,
loglevel=loglevel, loglevel=loglevel,
# use the run_in_actor nursery # use the run_in_actor nursery
nursery=self._ria_nursery, nursery=self._ria_nursery,
@ -276,42 +232,21 @@ class ActorNursery:
) )
return portal return portal
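
Matching usage sketch for `run_in_actor()` (spawn, run one task, reap; `Portal.result()` per the classic API):

import trio
import tractor

async def add(a: int, b: int) -> int:
    return a + b

async def main() -> None:
    async with tractor.open_nursery() as an:
        portal = await an.run_in_actor(add, a=1, b=2)
        assert await portal.result() == 3

if __name__ == '__main__':
    trio.run(main)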
# @api_frame async def cancel(self, hard_kill: bool = False) -> None:
async def cancel( """Cancel this nursery by instructing each subactor to cancel
self, itself and wait for all subactors to terminate.
hard_kill: bool = False,
) -> None: If ``hard_killl`` is set to ``True`` then kill the processes
''' directly without any far end graceful ``trio`` cancellation.
Cancel this actor-nursery by instructing each subactor's """
runtime to cancel and wait for all underlying sub-processes
to terminate.
If `hard_kill` is set then kill the processes directly using
the spawning-backend's API/OS-machinery without any attempt
at (graceful) `trio`-style cancellation using our
`Actor.cancel()`.
'''
__runtimeframe__: int = 1 # noqa
self.cancelled = True self.cancelled = True
# TODO: impl a repr for spawn more compact log.cancel(f"Cancelling nursery in {self._actor.uid}")
# then `._children`..
children: dict = self._children
child_count: int = len(children)
msg: str = f'Cancelling actor nursery with {child_count} children\n'
with trio.move_on_after(3) as cs: with trio.move_on_after(3) as cs:
async with trio.open_nursery() as tn:
subactor: Actor async with trio.open_nursery() as nursery:
proc: trio.Process
portal: Portal for subactor, proc, portal in self._children.values():
for (
subactor,
proc,
portal,
) in children.values():
# TODO: are we ever even going to use this or # TODO: are we ever even going to use this or
# is the spawning backend responsible for such # is the spawning backend responsible for such
@ -323,13 +258,12 @@ class ActorNursery:
if portal is None: # actor hasn't fully spawned yet if portal is None: # actor hasn't fully spawned yet
event = self._actor._peer_connected[subactor.uid] event = self._actor._peer_connected[subactor.uid]
log.warning( log.warning(
f"{subactor.uid} never 't finished spawning?" f"{subactor.uid} wasn't finished spawning?")
)
await event.wait() await event.wait()
# channel/portal should now be up # channel/portal should now be up
_, _, portal = children[subactor.uid] _, _, portal = self._children[subactor.uid]
# XXX should be impossible to get here # XXX should be impossible to get here
# unless method was called from within # unless method was called from within
@ -346,24 +280,14 @@ class ActorNursery:
# spawn cancel tasks for each sub-actor # spawn cancel tasks for each sub-actor
assert portal assert portal
if portal.channel.connected(): if portal.channel.connected():
tn.start_soon(portal.cancel_actor) nursery.start_soon(portal.cancel_actor)
log.cancel(msg)
# if we cancelled the cancel (we hung cancelling remote actors) # if we cancelled the cancel (we hung cancelling remote actors)
# then hard kill all sub-processes # then hard kill all sub-processes
if cs.cancelled_caught: if cs.cancelled_caught:
log.error( log.error(
f'Failed to cancel {self}?\n' f"Failed to cancel {self}\nHard killing process tree!")
'Hard killing underlying subprocess tree!\n' for subactor, proc, portal in self._children.values():
)
subactor: Actor
proc: trio.Process
portal: Portal
for (
subactor,
proc,
portal,
) in children.values():
log.warning(f"Hard killing process {proc}") log.warning(f"Hard killing process {proc}")
proc.terminate() proc.terminate()
@ -374,15 +298,11 @@ class ActorNursery:
@acm @acm
async def _open_and_supervise_one_cancels_all_nursery( async def _open_and_supervise_one_cancels_all_nursery(
actor: Actor, actor: Actor,
tb_hide: bool = False,
) -> typing.AsyncGenerator[ActorNursery, None]: ) -> typing.AsyncGenerator[ActorNursery, None]:
# normally don't need to show user by default # TODO: yay or nay?
__tracebackhide__: bool = tb_hide __tracebackhide__ = True
outer_err: BaseException|None = None
inner_err: BaseException|None = None
# the collection of errors retrieved from spawned sub-actors # the collection of errors retrieved from spawned sub-actors
errors: dict[tuple[str, str], BaseException] = {} errors: dict[tuple[str, str], BaseException] = {}
@ -392,28 +312,22 @@ async def _open_and_supervise_one_cancels_all_nursery(
# handling errors that are generated by the inner nursery in # handling errors that are generated by the inner nursery in
# a supervisor strategy **before** blocking indefinitely to wait for # a supervisor strategy **before** blocking indefinitely to wait for
# actors spawned in "daemon mode" (aka started using # actors spawned in "daemon mode" (aka started using
# `ActorNursery.start_actor()`). # ``ActorNursery.start_actor()``).
# errors from this daemon actor nursery bubble up to caller # errors from this daemon actor nursery bubble up to caller
async with trio.open_nursery( async with trio.open_nursery() as da_nursery:
strict_exception_groups=False,
# ^XXX^ TODO? instead unpack any RAE as per "loose" style?
) as da_nursery:
try: try:
# This is the inner level "run in actor" nursery. It is # This is the inner level "run in actor" nursery. It is
# awaited first since actors spawned in this way (using # awaited first since actors spawned in this way (using
# `ActorNusery.run_in_actor()`) are expected to only # ``ActorNusery.run_in_actor()``) are expected to only
# return a single result and then complete (i.e. be cancelled # return a single result and then complete (i.e. be cancelled
# gracefully). Errors collected from these actors are # gracefully). Errors collected from these actors are
# immediately raised for handling by a supervisor strategy. # immediately raised for handling by a supervisor strategy.
# As such if the strategy propagates any error(s) upwards # As such if the strategy propagates any error(s) upwards
# the above "daemon actor" nursery will be notified. # the above "daemon actor" nursery will be notified.
async with trio.open_nursery( async with trio.open_nursery() as ria_nursery:
strict_exception_groups=False,
# ^XXX^ TODO? instead unpack any RAE as per "loose" style?
) as ria_nursery:
an = ActorNursery( anursery = ActorNursery(
actor, actor,
ria_nursery, ria_nursery,
da_nursery, da_nursery,
@ -422,19 +336,18 @@ async def _open_and_supervise_one_cancels_all_nursery(
try: try:
# spawning of actors happens in the caller's scope # spawning of actors happens in the caller's scope
# after we yield upwards # after we yield upwards
yield an yield anursery
# When we didn't error in the caller's scope, # When we didn't error in the caller's scope,
# signal all process-monitor-tasks to conduct # signal all process-monitor-tasks to conduct
# the "hard join phase". # the "hard join phase".
log.runtime( log.runtime(
'Waiting on subactors to complete:\n' f"Waiting on subactors {anursery._children} "
f'{pformat(an._children)}\n' "to complete"
) )
an._join_procs.set() anursery._join_procs.set()
except BaseException as _inner_err: except BaseException as inner_err:
inner_err = _inner_err
errors[actor.uid] = inner_err errors[actor.uid] = inner_err
# If we error in the root but the debugger is # If we error in the root but the debugger is
@ -444,60 +357,37 @@ async def _open_and_supervise_one_cancels_all_nursery(
# Instead try to wait for pdb to be released before # Instead try to wait for pdb to be released before
# tearing down. # tearing down.
await maybe_wait_for_debugger( await maybe_wait_for_debugger(
child_in_debug=an._at_least_one_child_in_debug child_in_debug=anursery._at_least_one_child_in_debug
) )
# if the caller's scope errored then we activate our # if the caller's scope errored then we activate our
# one-cancels-all supervisor strategy (don't # one-cancels-all supervisor strategy (don't
# worry more are coming). # worry more are coming).
an._join_procs.set() anursery._join_procs.set()
# XXX NOTE XXX: hypothetically an error could # XXX: hypothetically an error could be
# be raised and then a cancel signal shows up # raised and then a cancel signal shows up
# slightly after in which case the `else:` # slightly after in which case the `else:`
# block here might not complete? For now, # block here might not complete? For now,
# shield both. # shield both.
with trio.CancelScope(shield=True): with trio.CancelScope(shield=True):
etype: type = type(inner_err) etype = type(inner_err)
if etype in ( if etype in (
trio.Cancelled, trio.Cancelled,
KeyboardInterrupt, KeyboardInterrupt
) or ( ) or (
is_multi_cancelled(inner_err) is_multi_cancelled(inner_err)
): ):
log.cancel( log.cancel(
f'Actor-nursery cancelled by {etype}\n\n' f"Nursery for {current_actor().uid} "
f"was cancelled with {etype}")
f'{current_actor().uid}\n'
f' |_{an}\n\n'
# TODO: show tb str?
# f'{tb_str}'
)
elif etype in {
ContextCancelled,
}:
log.cancel(
'Actor-nursery caught remote cancellation\n'
'\n'
f'{inner_err.tb_str}'
)
else: else:
log.exception( log.exception(
'Nursery errored with:\n' f"Nursery for {current_actor().uid} "
f"errored with")
# TODO: same thing as in
# `._invoke()` to compute how to
# place this div-line in the
# middle of the above msg
# content..
# -[ ] prolly helper-func it too
# in our `.log` module..
# '------ - ------'
)
# cancel all subactors # cancel all subactors
await an.cancel() await anursery.cancel()
# ria_nursery scope end # ria_nursery scope end
@ -512,30 +402,24 @@ async def _open_and_supervise_one_cancels_all_nursery(
Exception, Exception,
BaseExceptionGroup, BaseExceptionGroup,
trio.Cancelled trio.Cancelled
) as _outer_err:
outer_err = _outer_err
an._scope_error = outer_err or inner_err ) as err:
# XXX: yet another guard before allowing the cancel # XXX: yet another guard before allowing the cancel
# sequence in case a (single) child is in debug. # sequence in case a (single) child is in debug.
await maybe_wait_for_debugger( await maybe_wait_for_debugger(
child_in_debug=an._at_least_one_child_in_debug child_in_debug=anursery._at_least_one_child_in_debug
) )
# If actor-local error was raised while waiting on # If actor-local error was raised while waiting on
# ".run_in_actor()" actors then we also want to cancel all # ".run_in_actor()" actors then we also want to cancel all
# remaining sub-actors (due to our lone strategy: # remaining sub-actors (due to our lone strategy:
# one-cancels-all). # one-cancels-all).
if an._children: log.cancel(f"Nursery cancelling due to {err}")
log.cancel( if anursery._children:
'Actor-nursery cancelling due error type:\n'
f'{outer_err}\n'
)
with trio.CancelScope(shield=True): with trio.CancelScope(shield=True):
await an.cancel() await anursery.cancel()
raise raise
finally: finally:
# No errors were raised while awaiting ".run_in_actor()" # No errors were raised while awaiting ".run_in_actor()"
# actors but those actors may have returned remote errors as # actors but those actors may have returned remote errors as
@ -544,9 +428,9 @@ async def _open_and_supervise_one_cancels_all_nursery(
# collected in ``errors`` so cancel all actors, summarize # collected in ``errors`` so cancel all actors, summarize
# all errors and re-raise. # all errors and re-raise.
if errors: if errors:
if an._children: if anursery._children:
with trio.CancelScope(shield=True): with trio.CancelScope(shield=True):
await an.cancel() await anursery.cancel()
# use `BaseExceptionGroup` as needed # use `BaseExceptionGroup` as needed
if len(errors) > 1: if len(errors) > 1:
@ -557,23 +441,13 @@ async def _open_and_supervise_one_cancels_all_nursery(
else: else:
raise list(errors.values())[0] raise list(errors.values())[0]
# show frame on any (likely) internal error
if (
not an.cancelled
and an._scope_error
):
__tracebackhide__: bool = False
# da_nursery scope end - nursery checkpoint # da_nursery scope end - nursery checkpoint
# final exit # final exit
@acm @acm
# @api_frame
async def open_nursery( async def open_nursery(
hide_tb: bool = True,
**kwargs, **kwargs,
# ^TODO, paramspec for `open_root_actor()`
) -> typing.AsyncGenerator[ActorNursery, None]: ) -> typing.AsyncGenerator[ActorNursery, None]:
''' '''
@ -591,81 +465,44 @@ async def open_nursery(
which cancellation scopes correspond to each spawned subactor set. which cancellation scopes correspond to each spawned subactor set.
''' '''
__tracebackhide__: bool = hide_tb implicit_runtime = False
implicit_runtime: bool = False
actor: Actor = current_actor(err_on_no_runtime=False) actor = current_actor(err_on_no_runtime=False)
an: ActorNursery|None = None
try: try:
if ( if actor is None and is_main_process():
actor is None
and is_main_process()
):
# if we are the parent process start the # if we are the parent process start the
# actor runtime implicitly # actor runtime implicitly
log.info("Starting actor runtime!") log.info("Starting actor runtime!")
# mark us for teardown on exit # mark us for teardown on exit
implicit_runtime: bool = True implicit_runtime = True
async with open_root_actor( async with open_root_actor(**kwargs) as actor:
hide_tb=hide_tb,
**kwargs,
) as actor:
assert actor is current_actor() assert actor is current_actor()
try: try:
async with _open_and_supervise_one_cancels_all_nursery( async with _open_and_supervise_one_cancels_all_nursery(
actor actor
) as an: ) as anursery:
yield anursery
# NOTE: mark this nursery as having
# implicitly started the root actor so
# that `._runtime` machinery can avoid
# certain teardown synchronization
# blocking/waits and any associated (warn)
# logging when it's known that this
# nursery shouldn't be exited before the
# root actor is.
an._implicit_runtime_started = True
yield an
finally: finally:
# XXX: this event will be set after the root actor anursery.exited.set()
# runtime is already torn down, so we want to
# avoid any blocking on it.
an.exited.set()
else: # sub-nursery case else: # sub-nursery case
try: try:
async with _open_and_supervise_one_cancels_all_nursery( async with _open_and_supervise_one_cancels_all_nursery(
actor actor
) as an: ) as anursery:
yield an yield anursery
finally: finally:
an.exited.set() anursery.exited.set()
finally: finally:
# show frame on any internal runtime-scope error log.debug("Nursery teardown complete")
if (
an
and
not an.cancelled
and
an._scope_error
):
__tracebackhide__: bool = False
msg: str = (
'Actor-nursery exited\n'
f'|_{an}\n'
)
# shutdown runtime if it was started
if implicit_runtime: if implicit_runtime:
# shutdown runtime if it was started and report noisily log.info("Shutting down actor tree")
# that we did so.
msg += '=> Shutting down actor runtime <=\n'
log.info(msg)
else:
# keep noise low during std operation.
log.runtime(msg)


@@ -1,113 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Various helpers/utils for auditing your `tractor` app and/or the
core runtime.
'''
from contextlib import (
asynccontextmanager as acm,
)
import os
import pathlib
import tractor
from tractor.devx._debug import (
BoxedMaybeException,
)
from .pytest import (
tractor_test as tractor_test
)
from .fault_simulation import (
break_ipc as break_ipc,
)
def repodir() -> pathlib.Path:
'''
Return the abspath to the repo directory.
'''
# 2 parents up to step up through tests/<repo_dir>
return pathlib.Path(
__file__
# 3 .parents bc:
# <._testing-pkg>.<tractor-pkg>.<git-repo-dir>
# /$HOME/../<tractor-repo-dir>/tractor/_testing/__init__.py
).parent.parent.parent.absolute()
def examples_dir() -> pathlib.Path:
'''
Return the abspath to the examples directory as `pathlib.Path`.
'''
return repodir() / 'examples'
def mk_cmd(
ex_name: str,
exs_subpath: str = 'debugging',
) -> str:
'''
Generate a shell command suitable to pass to `pexpect.spawn()`
which runs the script as a python program's entrypoint.
In particular ensure we disable the new tb coloring via unsetting
`$PYTHON_COLORS` so that `pexpect` can pattern match without
color-escape-codes.
'''
script_path: pathlib.Path = (
examples_dir()
/ exs_subpath
/ f'{ex_name}.py'
)
py_cmd: str = ' '.join([
'python',
str(script_path)
])
# XXX, required for py 3.13+
# https://docs.python.org/3/using/cmdline.html#using-on-controlling-color
# https://docs.python.org/3/using/cmdline.html#envvar-PYTHON_COLORS
os.environ['PYTHON_COLORS'] = '0'
return py_cmd
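
A sketch of the intended call pattern (the example-script name and the exact prompt regex are assumptions, not pinned by this module):

import pexpect

cmd: str = mk_cmd('debug_mode_hang')  # hypothetical example script
child = pexpect.spawn(cmd)
# with `$PYTHON_COLORS` unset above, prompt matching needs no
# ansi-escape-code handling; `(Pdb+)` is pdbp's usual prompt.
child.expect(r"\(Pdb\+\)")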
@acm
async def expect_ctxc(
yay: bool,
reraise: bool = False,
) -> None:
'''
Small acm to catch `ContextCancelled` errors when expected
below it in a `async with ()` block.
'''
if yay:
try:
yield (maybe_exc := BoxedMaybeException())
raise RuntimeError('Never raised ctxc?')
except tractor.ContextCancelled as ctxc:
maybe_exc.value = ctxc
if reraise:
raise
else:
return
else:
yield (maybe_exc := BoxedMaybeException())
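
Example usage, assuming a portal/ctx style call that is expected to be remotely cancelled (`portal` and `cancelled_task` are hypothetical names):

async with expect_ctxc(yay=True) as maybe_exc:
    async with portal.open_context(cancelled_task) as (ctx, first):
        await ctx.result()

# the boxed error is available for further assertions
assert isinstance(maybe_exc.value, tractor.ContextCancelled)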


@@ -1,92 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
`pytest` utils helpers and plugins for testing `tractor`'s runtime
and applications.
'''
from tractor import (
MsgStream,
)
async def break_ipc(
stream: MsgStream,
method: str|None = None,
pre_close: bool = False,
def_method: str = 'socket_close',
) -> None:
'''
XXX: close the channel right after an error is raised
purposely breaking the IPC transport to make sure the parent
doesn't get stuck in debug or hang on the connection join.
this more or less simulates an infinite msg-receive hang on
the other end.
'''
# close channel via IPC prot msging before
# any transport breakage
if pre_close:
await stream.aclose()
method: str = method or def_method
print(
'#################################\n'
'Simulating CHILD-side IPC BREAK!\n'
f'method: {method}\n'
f'pre `.aclose()`: {pre_close}\n'
'#################################\n'
)
match method:
case 'socket_close':
await stream._ctx.chan.transport.stream.aclose()
case 'socket_eof':
# NOTE: `trio` does the following underneath this
# call in `src/trio/_highlevel_socket.py`:
# `Stream.socket.shutdown(tsocket.SHUT_WR)`
await stream._ctx.chan.transport.stream.send_eof()
# TODO: remove since now this will be invalid with our
# new typed msg spec?
# case 'msg':
# await stream._ctx.chan.send(None)
# TODO: the actual real-world simulated cases like
# transport layer hangs and/or lower layer 2-gens type
# scenarios..
#
# -[ ] already have some issues for this general testing
# area:
# - https://github.com/goodboy/tractor/issues/97
# - https://github.com/goodboy/tractor/issues/124
# - PR from @guille:
# https://github.com/goodboy/tractor/pull/149
# case 'hang':
# TODO: framework research:
#
# - https://github.com/GuoTengda1993/pynetem
# - https://github.com/shopify/toxiproxy
# - https://manpages.ubuntu.com/manpages/trusty/man1/wirefilter.1.html
case _:
raise RuntimeError(
f'IPC break method unsupported: {method}'
)
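
Typical use is from a child-side stream handler in a test; a sketch (the surrounding ctx-function is illustrative):

@tractor.context
async def child(ctx: tractor.Context) -> None:
    await ctx.started()
    async with ctx.open_stream() as stream:
        # sever the raw transport so the parent side sees a broken
        # channel instead of a clean stream close
        await break_ipc(
            stream,
            method='socket_close',
            pre_close=False,
        )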


@@ -1,113 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
`pytest` utils helpers and plugins for testing `tractor`'s runtime
and applications.
'''
from functools import (
partial,
wraps,
)
import inspect
import platform
import tractor
import trio
def tractor_test(fn):
'''
Decorator for async test funcs to present them as "native"
looking sync funcs runnable by `pytest` using `trio.run()`.
Use:
@tractor_test
async def test_whatever():
await ...
If fixtures:
- ``reg_addr`` (a socket addr tuple where arbiter is listening)
- ``loglevel`` (logging level passed to tractor internals)
- ``start_method`` (subprocess spawning backend)
are defined in the `pytest` fixture space they will be automatically
injected to tests declaring these funcargs.
'''
@wraps(fn)
def wrapper(
*args,
loglevel=None,
reg_addr=None,
start_method: str|None = None,
debug_mode: bool = False,
**kwargs
):
# __tracebackhide__ = True
# NOTE: inject ant test func declared fixture
# names by manually checking!
if 'reg_addr' in inspect.signature(fn).parameters:
# injects test suite fixture value to test as well
# as `run()`
kwargs['reg_addr'] = reg_addr
if 'loglevel' in inspect.signature(fn).parameters:
# allows test suites to define a 'loglevel' fixture
# that activates the internal logging
kwargs['loglevel'] = loglevel
if start_method is None:
if platform.system() == "Windows":
start_method = 'trio'
if 'start_method' in inspect.signature(fn).parameters:
# set of subprocess spawning backends
kwargs['start_method'] = start_method
if 'debug_mode' in inspect.signature(fn).parameters:
# set of subprocess spawning backends
kwargs['debug_mode'] = debug_mode
if kwargs:
# use explicit root actor start
async def _main():
async with tractor.open_root_actor(
# **kwargs,
registry_addrs=[reg_addr] if reg_addr else None,
loglevel=loglevel,
start_method=start_method,
# TODO: only enable when pytest is passed --pdb
debug_mode=debug_mode,
):
await fn(*args, **kwargs)
main = _main
else:
# use implicit root actor start
main = partial(fn, *args, **kwargs)
return trio.run(main)
return wrapper
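
So a test declaring any of those funcargs gets them injected automatically; a sketch (the test body is illustrative):

import tractor

@tractor_test
async def test_spawn_and_cancel(
    reg_addr,      # filled from the `pytest` fixture space
    loglevel,
    start_method,
):
    async with tractor.open_nursery() as an:
        portal = await an.start_actor('sub', enable_modules=[__name__])
        await portal.cancel_actor()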


@@ -1,78 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Runtime "developer experience" utils and addons to aid our
(advanced) users and core devs in building distributed applications
and working with/on the actor runtime.
"""
from ._debug import (
maybe_wait_for_debugger as maybe_wait_for_debugger,
acquire_debug_lock as acquire_debug_lock,
breakpoint as breakpoint,
pause as pause,
pause_from_sync as pause_from_sync,
sigint_shield as sigint_shield,
open_crash_handler as open_crash_handler,
maybe_open_crash_handler as maybe_open_crash_handler,
maybe_init_greenback as maybe_init_greenback,
post_mortem as post_mortem,
mk_pdb as mk_pdb,
)
from ._stackscope import (
enable_stack_on_sig as enable_stack_on_sig,
)
from .pformat import (
add_div as add_div,
pformat_caller_frame as pformat_caller_frame,
pformat_boxed_tb as pformat_boxed_tb,
)
# TODO, move this to a new `.devx._pdbp` mod?
def _enable_readline_feats() -> str:
'''
Handle `readline` when compiled with `libedit` to avoid breaking
tab completion in `pdbp` (and its dep `tabcompleter`)
particularly since `uv` cpython distis are compiled this way..
See docs for deats,
https://docs.python.org/3/library/readline.html#module-readline
Originally discovered soln via SO answer,
https://stackoverflow.com/q/49287102
'''
import readline
if (
# 3.13+ attr
# https://docs.python.org/3/library/readline.html#readline.backend
(getattr(readline, 'backend', False) == 'libedit')
or
'libedit' in readline.__doc__
):
readline.parse_and_bind("python:bind -v")
readline.parse_and_bind("python:bind ^I rl_complete")
return 'libedit'
else:
readline.parse_and_bind("tab: complete")
readline.parse_and_bind("set editing-mode vi")
readline.parse_and_bind("set keymap vi")
return 'readline'
_enable_readline_feats()

File diff suppressed because it is too large.


@@ -1,303 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Tools for code-object annotation, introspection and mutation
as it pertains to improving the grok-ability of our runtime!
'''
from __future__ import annotations
from functools import partial
import inspect
from types import (
FrameType,
FunctionType,
MethodType,
# CodeType,
)
from typing import (
Any,
Callable,
Type,
)
from tractor.msg import (
pretty_struct,
NamespacePath,
)
import wrapt
# TODO: yeah, i don't love this and we should prolly just
# write a decorator that actually keeps a stupid ref to the func
# obj..
def get_class_from_frame(fr: FrameType) -> (
FunctionType
|MethodType
):
'''
Attempt to get the function (or method) reference
from a given `FrameType`.
Verbatim from an SO:
https://stackoverflow.com/a/2220759
'''
args, _, _, value_dict = inspect.getargvalues(fr)
# we check the first parameter for the frame function is
# named 'self'
if (
len(args)
and
# TODO: other cases for `@classmethod` etc..?)
args[0] == 'self'
):
# in that case, 'self' will be referenced in value_dict
instance: object = value_dict.get('self')
if instance:
# return its class
return getattr(
instance,
'__class__',
None,
)
# return None otherwise
return None
def get_ns_and_func_from_frame(
frame: FrameType,
) -> Callable:
'''
Return the corresponding function object reference from
a `FrameType`, and return it and it's parent namespace `dict`.
'''
ns: dict[str, Any]
# for a method, go up a frame and lookup the name in locals()
if '.' in (qualname := frame.f_code.co_qualname):
cls_name, _, func_name = qualname.partition('.')
ns = frame.f_back.f_locals[cls_name].__dict__
else:
func_name: str = frame.f_code.co_name
ns = frame.f_globals
return (
ns,
ns[func_name],
)
def func_ref_from_frame(
frame: FrameType,
) -> Callable:
func_name: str = frame.f_code.co_name
try:
return frame.f_globals[func_name]
except KeyError:
cls: Type|None = get_class_from_frame(frame)
if cls:
return getattr(
cls,
func_name,
)
class CallerInfo(pretty_struct.Struct):
# https://docs.python.org/dev/reference/datamodel.html#frame-objects
# https://docs.python.org/dev/library/inspect.html#the-interpreter-stack
_api_frame: FrameType
@property
def api_frame(self) -> FrameType:
try:
self._api_frame.clear()
except RuntimeError:
# log.warning(
print(
f'Frame {self._api_frame} for {self.api_func} is still active!'
)
return self._api_frame
_api_func: Callable
@property
def api_func(self) -> Callable:
return self._api_func
_caller_frames_up: int|None = 1
_caller_frame: FrameType|None = None # cached after first stack scan
@property
def api_nsp(self) -> NamespacePath|None:
func: FunctionType = self.api_func
if func:
return NamespacePath.from_ref(func)
return '<unknown>'
@property
def caller_frame(self) -> FrameType:
# if not already cached, scan up stack explicitly by
# configured count.
if not self._caller_frame:
if self._caller_frames_up:
for _ in range(self._caller_frames_up):
caller_frame: FrameType|None = self.api_frame.f_back
if not caller_frame:
raise ValueError(
'No frame exists {self._caller_frames_up} up from\n'
f'{self.api_frame} @ {self.api_nsp}\n'
)
self._caller_frame = caller_frame
return self._caller_frame
@property
def caller_nsp(self) -> NamespacePath|None:
func: FunctionType = self.api_func
if func:
return NamespacePath.from_ref(func)
return '<unknown>'
def find_caller_info(
dunder_var: str = '__runtimeframe__',
iframes:int = 1,
check_frame_depth: bool = True,
) -> CallerInfo|None:
'''
Scan up the callstack for a frame with a `dunder_var: str` variable
and return the `iframes` frames above it.
By default we scan for a `__runtimeframe__` scope var which
denotes a `tractor` API above which (one frame up) is "user
app code" which "called into" the `tractor` method or func.
TODO: ex with `Portal.open_context()`
'''
# TODO: use this instead?
# https://docs.python.org/3/library/inspect.html#inspect.getouterframes
frames: list[inspect.FrameInfo] = inspect.stack()
for fi in frames:
assert (
fi.function
==
fi.frame.f_code.co_name
)
this_frame: FrameType = fi.frame
dunder_val: int|None = this_frame.f_locals.get(dunder_var)
if dunder_val:
go_up_iframes: int = (
dunder_val # could be 0 or `True` i guess?
or
iframes
)
rt_frame: FrameType = fi.frame
call_frame = rt_frame
for i in range(go_up_iframes):
call_frame = call_frame.f_back
return CallerInfo(
_api_frame=rt_frame,
_api_func=func_ref_from_frame(rt_frame),
_caller_frames_up=go_up_iframes,
)
return None
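
A sketch of the marker convention described above (the wrapper func is hypothetical):

def some_tractor_api() -> None:
    # marks this frame as the runtime "API frame"; user code is
    # presumed to live 1 frame up from here.
    __runtimeframe__: int = 1  # noqa
    info: CallerInfo|None = find_caller_info()
    if info:
        print(f'called from: {info.caller_frame.f_code.co_name}')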
_frame2callerinfo_cache: dict[FrameType, CallerInfo] = {}
# TODO: -[x] move all this into new `.devx._frame_stack`!
# -[ ] consider rename to _callstack?
# -[ ] prolly create a `@runtime_api` dec?
# |_ @api_frame seems better?
# -[ ] ^- make it capture and/or accept buncha optional
# meta-data like a fancier version of `@pdbp.hideframe`.
#
def api_frame(
wrapped: Callable|None = None,
*,
caller_frames_up: int = 1,
) -> Callable:
# handle the decorator called WITHOUT () case,
# i.e. just @api_frame, NOT @api_frame(extra=<blah>)
if wrapped is None:
return partial(
api_frame,
caller_frames_up=caller_frames_up,
)
@wrapt.decorator
async def wrapper(
wrapped: Callable,
instance: object,
args: tuple,
kwargs: dict,
):
# maybe cache the API frame for this call
global _frame2callerinfo_cache
this_frame: FrameType = inspect.currentframe()
api_frame: FrameType = this_frame.f_back
if not _frame2callerinfo_cache.get(api_frame):
_frame2callerinfo_cache[api_frame] = CallerInfo(
_api_frame=api_frame,
_api_func=wrapped,
_caller_frames_up=caller_frames_up,
)
return wrapped(*args, **kwargs)
# annotate the function as a "api function", meaning it is
# a function for which the function above it in the call stack should be
# non-`tractor` code aka "user code".
#
# in the global frame cache for easy lookup from a given
# func-instance
wrapped._call_infos: dict[FrameType, CallerInfo] = _frame2callerinfo_cache
wrapped.__api_func__: bool = True
return wrapper(wrapped)
# TODO: something like this instead of the adhoc frame-unhiding
# blocks all over the runtime!! XD
# -[ ] ideally we can expect a certain error (set) and if something
# else is raised then all frames below the wrapped one will be
# un-hidden via `__tracebackhide__: bool = False`.
# |_ might need to dynamically mutate the code objs like
# `pdbp.hideframe()` does?
# -[ ] use this as a `@acm` decorator as introed in 3.10?
# @acm
# async def unhide_frame_when_not(
# error_set: set[BaseException],
# ) -> TracebackType:
# ...


@@ -1,264 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
The fundamental cross process SC abstraction: an inter-actor,
cancel-scope linked task "context".
A ``Context`` is very similar to the ``trio.Nursery.cancel_scope`` built
into each ``trio.Nursery`` except it links the lifetimes of memory space
disjoint, parallel executing tasks in separate actors.
'''
from __future__ import annotations
# from functools import partial
from threading import (
current_thread,
Thread,
RLock,
)
import multiprocessing as mp
from signal import (
signal,
getsignal,
SIGUSR1,
SIGINT,
)
# import traceback
from types import ModuleType
from typing import (
Callable,
TYPE_CHECKING,
)
import trio
from tractor import (
_state,
log as logmod,
)
from tractor.devx import _debug
log = logmod.get_logger(__name__)
if TYPE_CHECKING:
from tractor._spawn import ProcessType
from tractor import (
Actor,
ActorNursery,
)
@trio.lowlevel.disable_ki_protection
def dump_task_tree() -> None:
'''
Do a classic `stackscope.extract()` task-tree dump to console at
`.devx()` level.
'''
import stackscope
tree_str: str = str(
stackscope.extract(
trio.lowlevel.current_root_task(),
recurse_child_tasks=True
)
)
actor: Actor = _state.current_actor()
thr: Thread = current_thread()
current_sigint_handler: Callable = getsignal(SIGINT)
if (
current_sigint_handler
is not
_debug.DebugStatus._trio_handler
):
sigint_handler_report: str = (
'The default `trio` SIGINT handler was replaced?!'
)
else:
sigint_handler_report: str = (
'The default `trio` SIGINT handler is in use?!'
)
# sclang symbology
# |_<object>
# |_(Task/Thread/Process/Actor
# |_{Supervisor/Scope
# |_[Storage/Memory/IPC-Stream/Data-Struct
log.devx(
f'Dumping `stackscope` tree for actor\n'
f'(>: {actor.uid!r}\n'
f' |_{mp.current_process()}\n'
f' |_{thr}\n'
f' |_{actor}\n'
f'\n'
f'{sigint_handler_report}\n'
f'signal.getsignal(SIGINT) -> {current_sigint_handler!r}\n'
# f'\n'
# start-of-trace-tree delimiter (mostly for testing)
# f'------ {actor.uid!r} ------\n'
f'\n'
f'------ start-of-{actor.uid!r} ------\n'
f'|\n'
f'{tree_str}'
# end-of-trace-tree delimiter (mostly for testing)
f'|\n'
f'|_____ end-of-{actor.uid!r} ______\n'
)
# TODO: can remove this right?
# -[ ] was original code from author
#
# print(
# 'DUMPING FROM PRINT\n'
# +
# content
# )
# import logging
# try:
# with open("/dev/tty", "w") as tty:
# tty.write(tree_str)
# except BaseException:
# logging.getLogger(
# "task_tree"
# ).exception("Error printing task tree")
_handler_lock = RLock()
_tree_dumped: bool = False
def dump_tree_on_sig(
sig: int,
frame: object,
relay_to_subs: bool = True,
) -> None:
global _tree_dumped, _handler_lock
with _handler_lock:
# if _tree_dumped:
# log.warning(
# 'Already dumped for this actor...??'
# )
# return
_tree_dumped = True
# actor: Actor = _state.current_actor()
log.devx(
'Trying to dump `stackscope` tree..\n'
)
try:
dump_task_tree()
# await actor._service_n.start_soon(
# partial(
# trio.to_thread.run_sync,
# dump_task_tree,
# )
# )
# trio.lowlevel.current_trio_token().run_sync_soon(
# dump_task_tree
# )
except RuntimeError:
log.exception(
'Failed to dump `stackscope` tree..\n'
)
# not in async context -- print a normal traceback
# traceback.print_stack()
raise
except BaseException:
log.exception(
'Failed to dump `stackscope` tree..\n'
)
raise
# log.devx(
# 'Supposedly we dumped just fine..?'
# )
if not relay_to_subs:
return
an: ActorNursery
for an in _state.current_actor()._actoruid2nursery.values():
subproc: ProcessType
subactor: Actor
for subactor, subproc, _ in an._children.values():
log.warning(
f'Relaying `SIGUSR1`[{sig}] to sub-actor\n'
f'{subactor}\n'
f' |_{subproc}\n'
)
# bc of course stdlib can't have a std API.. XD
match subproc:
case trio.Process():
subproc.send_signal(sig)
case mp.Process():
subproc._send_signal(sig)
def enable_stack_on_sig(
sig: int = SIGUSR1,
) -> ModuleType:
'''
Enable `stackscope` tracing on reception of a signal; by
default this is SIGUSR1.
HOT TIP: a task/ctx-tree dump can be triggered from a shell with
fancy cmds.
For ex. from `bash` using `pgrep` and cmd-sustitution
(https://www.gnu.org/software/bash/manual/bash.html#Command-Substitution)
you could use:
>> kill -SIGUSR1 $(pgrep -f <part-of-cmd: str>)
OR without a sub-shell,
>> pkill --signal SIGUSR1 -f <part-of-cmd: str>
'''
try:
import stackscope
except ImportError:
log.warning(
'`stackscope` not installed for use in debug mode!'
)
return None
handler: Callable|int = getsignal(sig)
if handler is dump_tree_on_sig:
log.devx(
'A `SIGUSR1` handler already exists?\n'
f'|_ {handler!r}\n'
)
return
signal(
sig,
dump_tree_on_sig,
)
log.devx(
'Enabling trace-trees on `SIGUSR1` '
'since `stackscope` is installed @ \n'
f'{stackscope!r}\n\n'
f'With `SIGUSR1` handler\n'
f'|_{dump_tree_on_sig}\n'
)
return stackscope
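
Wiring it into an app entrypoint might look like this sketch (using the `tractor.devx` re-export shown earlier; the script name in the shell comment is hypothetical):

import trio
import tractor
from tractor.devx import enable_stack_on_sig

async def main():
    enable_stack_on_sig()  # SIGUSR1 by default
    async with tractor.open_nursery():
        await trio.sleep_forever()

# then, from another shell:
#   pkill --signal SIGUSR1 -f my_script.py
trio.run(main)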


@@ -1,129 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
CLI framework extensions for hacking on the actor runtime.
Currently popular frameworks supported are:
- `typer` via the `@callback` API
"""
from __future__ import annotations
from typing import (
Any,
Callable,
)
from typing_extensions import Annotated
import typer
_runtime_vars: dict[str, Any] = {}
def load_runtime_vars(
ctx: typer.Context,
callback: Callable,
pdb: bool = False, # --pdb
ll: Annotated[
str,
typer.Option(
'--loglevel',
'-l',
help='BigD logging level',
),
] = 'cancel', # -l info
):
'''
Maybe engage crash handling with `pdbp` when code inside
a `typer` CLI endpoint cmd raises.
To use this callback simply take your `app = typer.Typer()` instance
and decorate this function with it like so:
.. code:: python
from tractor.devx import cli
app = typer.Typer()
# manual decoration to hook into `click`'s context system!
cli.load_runtime_vars = app.callback(
invoke_without_command=True,
)
And then you can use the now augmented `click` CLI context as so,
.. code:: python
@app.command(
context_settings={
"allow_extra_args": True,
"ignore_unknown_options": True,
}
)
def my_cli_cmd(
ctx: typer.Context,
):
rtvars: dict = ctx.runtime_vars
pdb: bool = rtvars['pdb']
with tractor.devx.cli.maybe_open_crash_handler(pdb=pdb):
trio.run(
partial(
my_tractor_main_task_func,
debug_mode=pdb,
loglevel=rtvars['ll'],
)
)
which will enable log level and debug mode globally for the entire
`tractor` + `trio` runtime thereafter!
Bo
'''
global _runtime_vars
_runtime_vars |= {
'pdb': pdb,
'll': ll,
}
ctx.runtime_vars: dict[str, Any] = _runtime_vars
print(
f'`typer` sub-cmd: {ctx.invoked_subcommand}\n'
f'`tractor` runtime vars: {_runtime_vars}'
)
# XXX NOTE XXX: hackzone.. if no sub-cmd is specified (the
# default if the user just invokes `bigd`) then we simply
# invoke the sole `_bigd()` cmd passing in the "parent"
# typer.Context directly to that call since we're treating it
# as a "non sub-command" or wtv..
# TODO: ideally typer would have some kinda built-in way to get
# this behaviour without having to construct and manually
# invoke our own cmd..
if (
ctx.invoked_subcommand is None
or ctx.invoked_subcommand == callback.__name__
):
cmd: typer.core.TyperCommand = typer.core.TyperCommand(
name='bigd',
callback=callback,
)
ctx.params = {'ctx': ctx}
cmd.invoke(ctx)


@@ -1,169 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Pretty formatters for use throughout the code base.
Mostly handy for logging and exception message content.
'''
import textwrap
import traceback
from trio import CancelScope
def add_div(
message: str,
div_str: str = '------ - ------',
) -> str:
'''
Add a "divider string" to the input `message` with
a little math to center it underneath.
'''
div_offset: int = (
round(len(message)/2)+1
-
round(len(div_str)/2)+1
)
div_str: str = (
'\n' + ' '*div_offset + f'{div_str}\n'
)
return div_str
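
For example (output spacing approximate, per the rounding math above):

msg: str = 'Actor crashed with a RemoteActorError'
print(msg + add_div(msg))
# Actor crashed with a RemoteActorError
#             ------ - ------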
def pformat_boxed_tb(
tb_str: str,
fields_str: str|None = None,
field_prefix: str = ' |_',
tb_box_indent: int|None = None,
tb_body_indent: int = 1,
boxer_header: str = '-'
) -> str:
'''
Create a "boxed" looking traceback string.
Useful for emphasizing traceback text content as being an
embedded attribute of some other object (like
a `RemoteActorError` or other boxing remote error shuttle
container).
Any other parent/container "fields" can be passed in the
`fields_str` input along with other prefix/indent settings.
'''
if (
fields_str
and
field_prefix
):
fields: str = textwrap.indent(
fields_str,
prefix=field_prefix,
)
else:
fields = fields_str or ''
tb_body = tb_str
if tb_body_indent:
tb_body: str = textwrap.indent(
tb_str,
prefix=tb_body_indent * ' ',
)
tb_box: str = (
f'|\n'
f' ------ {boxer_header} ------\n'
f'{tb_body}'
f' ------ {boxer_header}- ------\n'
f'_|'
)
tb_box_indent: str = (
tb_box_indent
or
1
# (len(field_prefix))
# ? ^-TODO-^ ? if you wanted another indent level
)
if tb_box_indent > 0:
tb_box: str = textwrap.indent(
tb_box,
prefix=tb_box_indent * ' ',
)
return (
fields
+
tb_box
)
def pformat_caller_frame(
stack_limit: int = 1,
box_tb: bool = True,
) -> str:
'''
Capture and return the traceback text content from
`stack_limit` call frames up.
'''
tb_str: str = (
'\n'.join(
traceback.format_stack(limit=stack_limit)
)
)
if box_tb:
tb_str: str = pformat_boxed_tb(
tb_str=tb_str,
field_prefix=' ',
indent='',
)
return tb_str
def pformat_cs(
cs: CancelScope,
var_name: str = 'cs',
field_prefix: str = ' |_',
) -> str:
'''
Pretty format info about a `trio.CancelScope` including most
of its public state and `._cancel_status`.
The output can be modified to show a "var name" for the
instance as a field prefix, just a simple str before each
line more or less.
'''
fields: str = textwrap.indent(
(
f'cancel_called = {cs.cancel_called}\n'
f'cancelled_caught = {cs.cancelled_caught}\n'
f'_cancel_status = {cs._cancel_status}\n'
f'shield = {cs.shield}\n'
),
prefix=field_prefix,
)
return (
f'{var_name}: {cs}\n'
+
fields
)


@@ -31,7 +31,7 @@ from typing import (
     Callable,
 )
 from functools import partial
-from contextlib import aclosing
+from async_generator import aclosing
 import trio
 import wrapt

@@ -21,11 +21,6 @@ Log like a forester!
 from collections.abc import Mapping
 import sys
 import logging
-from logging import (
-    LoggerAdapter,
-    Logger,
-    StreamHandler,
-)
 import colorlog  # type: ignore

 import trio
@@ -53,20 +48,17 @@ LOG_FORMAT = (
 DATE_FORMAT = '%b %d %H:%M:%S'

-# FYI, ERROR is 40
-# TODO: use a `bidict` to avoid the :155 check?
-CUSTOM_LEVELS: dict[str, int] = {
+LEVELS = {
     'TRANSPORT': 5,
     'RUNTIME': 15,
-    'DEVX': 17,
-    'CANCEL': 22,
+    'CANCEL': 16,
     'PDB': 500,
 }

 STD_PALETTE = {
     'CRITICAL': 'red',
     'ERROR': 'red',
     'PDB': 'white',
-    'DEVX': 'cyan',
     'WARNING': 'yellow',
     'INFO': 'green',
     'CANCEL': 'yellow',
@@ -83,7 +75,7 @@ BOLD_PALETTE = {

 # TODO: this isn't showing the correct '{filename}'
 # as it did before..
-class StackLevelAdapter(LoggerAdapter):
+class StackLevelAdapter(logging.LoggerAdapter):

     def transport(
         self,
@@ -91,8 +83,7 @@ class StackLevelAdapter(LoggerAdapter):
     ) -> None:
         '''
-        IPC transport level msg IO; generally anything below
-        `._ipc.Channel` and friends.
+        IPC level msg-ing.

         '''
         return self.log(5, msg)
@@ -108,67 +99,29 @@ class StackLevelAdapter(LoggerAdapter):
         msg: str,
     ) -> None:
         '''
-        Cancellation sequencing, mostly for runtime reporting.
+        Cancellation logging, mostly for runtime reporting.

         '''
-        return self.log(
-            level=22,
-            msg=msg,
-            # stacklevel=4,
-        )
+        return self.log(16, msg)

     def pdb(
         self,
         msg: str,
     ) -> None:
         '''
-        `pdb`-REPL (debugger) related statuses.
+        Debugger logging.

         '''
         return self.log(500, msg)

-    def devx(
-        self,
-        msg: str,
-    ) -> None:
-        '''
-        "Developer experience" sub-sys statuses.
-
-        '''
-        return self.log(17, msg)
-
-    def log(
-        self,
-        level,
-        msg,
-        *args,
-        **kwargs,
-    ):
-        '''
+    def log(self, level, msg, *args, **kwargs):
+        """
         Delegate a log call to the underlying logger, after adding
         contextual information from this adapter instance.
-
-        NOTE: all custom level methods (above) delegate to this!
-
-        '''
+        """
         if self.isEnabledFor(level):
-            stacklevel: int = 3
-            if (
-                level in CUSTOM_LEVELS.values()
-            ):
-                stacklevel: int = 4
-
             # msg, kwargs = self.process(msg, kwargs)
-            self._log(
-                level=level,
-                msg=msg,
-                args=args,
-                # NOTE: not sure how this worked before but, it
-                # seems with our custom level methods defined above
-                # we do indeed (now) require another stack level??
-                stacklevel=stacklevel,
-                **kwargs,
-            )
+            self._log(level, msg, args, **kwargs)

     # LOL, the stdlib doesn't allow passing through ``stacklevel``..
     def _log(
@@ -181,15 +134,12 @@ class StackLevelAdapter(LoggerAdapter):
         stack_info=False,

         # XXX: bit we added to show fileinfo from actual caller.
-        # - this level
-        # - then ``.log()``
-        # - then finally the caller's level..
-        stacklevel=4,
+        # this level then ``.log()`` then finally the caller's level..
+        stacklevel=3,
     ):
-        '''
+        """
         Low-level log implementation, proxied to allow nested logger adapters.
-        '''
+        """
         return self.logger._log(
             level,
             msg,
@@ -201,30 +151,8 @@ class StackLevelAdapter(LoggerAdapter):
         )

-# TODO IDEAs:
-# -[ ] move to `.devx.pformat`?
-# -[ ] do per task-name and actor-name color coding
-# -[ ] unique color per task-id and actor-uuid
-def pformat_task_uid(
-    id_part: str = 'tail'
-):
-    '''
-    Return `str`-ified unique for a `trio.Task` via a combo of its
-    `.name: str` and `id()` truncated output.
-
-    '''
-    task: trio.Task = trio.lowlevel.current_task()
-    tid: str = str(id(task))
-    if id_part == 'tail':
-        tid_part: str = tid[-6:]
-    else:
-        tid_part: str = tid[:6]
-
-    return f'{task.name}[{tid_part}]'
-
 _conc_name_getters = {
-    'task': pformat_task_uid,
+    'task': lambda: trio.lowlevel.current_task().name,
     'actor': lambda: current_actor(),
     'actor_name': lambda: current_actor().name,
     'actor_uid': lambda: current_actor().uid[1][:6],
@@ -232,10 +160,7 @@ _conc_name_getters = {

 class ActorContextInfo(Mapping):
-    '''
-    Dyanmic lookup for local actor and task names.
-
-    '''
+    "Dyanmic lookup for local actor and task names"
     _context_keys = (
         'task',
         'actor',
@@ -258,69 +183,33 @@ class ActorContextInfo(Mapping):

 def get_logger(
-    name: str|None = None,
+    name: str | None = None,
     _root_name: str = _proj_name,
-
-    logger: Logger|None = None,
-
-    # TODO, using `.config.dictConfig()` api?
-    # -[ ] SO answer with docs links
-    # |_https://stackoverflow.com/questions/7507825/where-is-a-complete-example-of-logging-config-dictconfig
-    # |_https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema
-    subsys_spec: str|None = None,
-
 ) -> StackLevelAdapter:
     '''Return the package log or a sub-logger for ``name`` if provided.

     '''
-    log: Logger
-    log = rlog = logger or logging.getLogger(_root_name)
+    log = rlog = logging.getLogger(_root_name)

-    if (
-        name
-        and
-        name != _proj_name
-    ):
-        # NOTE: for handling for modules that use ``get_logger(__name__)``
-        # we make the following stylistic choice:
-        # - always avoid duplicate project-package token
-        # in msg output: i.e. tractor.tractor _ipc.py in header
-        # looks ridiculous XD
-        # - never show the leaf module name in the {name} part
-        # since in python the {filename} is always this same
-        # module-file.
-
-        sub_name: None|str = None
-        rname, _, sub_name = name.partition('.')
-        pkgpath, _, modfilename = sub_name.rpartition('.')
-
-        # NOTE: for tractor itself never include the last level
-        # module key in the name such that something like: eg.
-        # 'tractor.trionics._broadcast` only includes the first
-        # 2 tokens in the (coloured) name part.
-        if rname == 'tractor':
-            sub_name = pkgpath
-
-        if _root_name in sub_name:
-            duplicate, _, sub_name = sub_name.partition('.')
-
-        if not sub_name:
-            log = rlog
-        else:
-            log = rlog.getChild(sub_name)
+    if name and name != _proj_name:
+        # handling for modules that use ``get_logger(__name__)`` to
+        # avoid duplicate project-package token in msg output
+        rname, _, tail = name.partition('.')
+        if rname == _root_name:
+            name = tail
+        log = rlog.getChild(name)

     log.level = rlog.level

     # add our actor-task aware adapter which will dynamically look up
     # the actor and task names at each log emit
-    logger = StackLevelAdapter(
-        log,
-        ActorContextInfo(),
-    )
+    logger = StackLevelAdapter(log, ActorContextInfo())

     # additional levels
-    for name, val in CUSTOM_LEVELS.items():
+    for name, val in LEVELS.items():
         logging.addLevelName(val, name)

     # ensure customs levels exist as methods
@@ -330,50 +219,28 @@ def get_logger(

 def get_console_log(
-    level: str|None = None,
-
-    logger: Logger|None = None,
+    level: str | None = None,
     **kwargs,
-
-) -> LoggerAdapter:
-    '''
-    Get a `tractor`-style logging instance: a `Logger` wrapped in
-    a `StackLevelAdapter` which injects various concurrency-primitive
-    (process, thread, task) fields and enables a `StreamHandler` that
-    writes on stderr using `colorlog` formatting.
-
-    Yeah yeah, i know we can use `logging.config.dictConfig()`. You do it.
-
-    '''
-    log = get_logger(
-        logger=logger,
-        **kwargs
-    )  # set a root logger
-    logger: Logger = log.logger
+) -> logging.LoggerAdapter:
+    '''Get the package logger and enable a handler which writes to stderr.
+
+    Yeah yeah, i know we can use ``DictConfig``. You do it.
+
+    '''
+    log = get_logger(**kwargs)  # our root logger
+    logger = log.logger

     if not level:
         return log

-    log.setLevel(
-        level.upper()
-        if not isinstance(level, int)
-        else level
-    )
+    log.setLevel(level.upper() if not isinstance(level, int) else level)

     if not any(
         handler.stream == sys.stderr  # type: ignore
-        for handler in logger.handlers if getattr(
-            handler,
-            'stream',
-            None,
-        )
+        for handler in logger.handlers if getattr(handler, 'stream', None)
     ):
-        fmt = LOG_FORMAT
-        # if logger:
-        #     fmt = None
-
-        handler = StreamHandler()
+        handler = logging.StreamHandler()
         formatter = colorlog.ColoredFormatter(
-            fmt=fmt,
+            LOG_FORMAT,
             datefmt=DATE_FORMAT,
             log_colors=STD_PALETTE,
             secondary_log_colors=BOLD_PALETTE,
@@ -387,23 +254,3 @@ def get_console_log(

 def get_loglevel() -> str:
     return _default_loglevel

-
-# global module logger for tractor itself
-log: StackLevelAdapter = get_logger('tractor')
-
-
-def at_least_level(
-    log: Logger|LoggerAdapter,
-    level: int|str,
-) -> bool:
-    '''
-    Predicate to test if a given level is active.
-
-    '''
-    if isinstance(level, str):
-        level: int = CUSTOM_LEVELS[level.upper()]
-
-    if log.getEffectiveLevel() <= level:
-        return True
-    return False
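
For reference, the custom levels on either side of this diff are driven through the same adapter methods; a usage sketch (level values per the `-`/`+` sides of the table above):

from tractor.log import get_console_log

log = get_console_log('transport')  # lowest custom level, shows all
log.transport('raw IPC chatter')    # level 5
log.runtime('runtime internals')    # level 15
log.cancel('cancellation status')   # level 22 (new) / 16 (old)
log.pdb('debugger REPL status')     # level 500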

tractor/msg.py 100644

@@ -0,0 +1,80 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Built-in messaging patterns, types, APIs and helpers.
'''
# TODO: integration with our ``enable_modules: list[str]`` caps sys.
# ``pkgutil.resolve_name()`` internally uses
# ``importlib.import_module()`` which can be filtered by inserting
# a ``MetaPathFinder`` into ``sys.meta_path`` (which we could do before
# entering the ``_runtime.process_messages()`` loop).
# - https://github.com/python/cpython/blob/main/Lib/pkgutil.py#L645
# - https://stackoverflow.com/questions/1350466/preventing-python-code-from-importing-certain-modules
# - https://stackoverflow.com/a/63320902
# - https://docs.python.org/3/library/sys.html#sys.meta_path
# the new "Implicit Namespace Packages" might be relevant?
# - https://www.python.org/dev/peps/pep-0420/
# add implicit serialized message type support so that paths can be
# handed directly to IPC primitives such as streams and `Portal.run()`
# calls:
# - via ``msgspec``:
# - https://jcristharif.com/msgspec/api.html#struct
# - https://jcristharif.com/msgspec/extending.html
# via ``msgpack-python``:
# - https://github.com/msgpack/msgpack-python#packingunpacking-of-custom-data-type
from __future__ import annotations
from pkgutil import resolve_name
class NamespacePath(str):
'''
A serializeable description of a (function) Python object location
described by the target's module path and namespace key meant as
a message-native "packet" to allows actors to point-and-load objects
by absolute reference.
'''
_ref: object = None
def load_ref(self) -> object:
if self._ref is None:
self._ref = resolve_name(self)
return self._ref
def to_tuple(
self,
) -> tuple[str, str]:
ref = self.load_ref()
return ref.__module__, getattr(ref, '__name__', '')
@classmethod
def from_ref(
cls,
ref,
) -> NamespacePath:
return cls(':'.join(
(ref.__module__,
getattr(ref, '__name__', ''))
))
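
E.g. round-tripping a function reference (a sketch; the target func is hypothetical):

def fetch_data() -> None:
    ...

nsp = NamespacePath.from_ref(fetch_data)
# -> '__main__:fetch_data', a plain msg-serializable str
assert nsp.load_ref() is fetch_data
assert nsp.to_tuple() == ('__main__', 'fetch_data')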


@@ -1,74 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Built-in messaging patterns, types, APIs and helpers.
'''
from typing import (
TypeAlias,
)
from .ptr import (
NamespacePath as NamespacePath,
)
from .pretty_struct import (
Struct as Struct,
)
from ._codec import (
_def_msgspec_codec as _def_msgspec_codec,
_ctxvar_MsgCodec as _ctxvar_MsgCodec,
apply_codec as apply_codec,
mk_codec as mk_codec,
mk_dec as mk_dec,
MsgCodec as MsgCodec,
MsgDec as MsgDec,
current_codec as current_codec,
)
# currently can't bc circular with `._context`
# from ._ops import (
# PldRx as PldRx,
# _drain_to_final_msg as _drain_to_final_msg,
# )
from .types import (
PayloadMsg as PayloadMsg,
Aid as Aid,
SpawnSpec as SpawnSpec,
Start as Start,
StartAck as StartAck,
Started as Started,
Yield as Yield,
Stop as Stop,
Return as Return,
CancelAck as CancelAck,
Error as Error,
# type-var for `.pld` field
PayloadT as PayloadT,
# full msg class set from above as list
__msg_types__ as __msg_types__,
# type-alias for union of all msgs
MsgType as MsgType,
)
__msg_spec__: TypeAlias = MsgType


@@ -1,886 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
IPC msg interchange codec management.
Supported backend libs:
- `msgspec.msgpack`
ToDo: backends we prolly should offer:
- see project/lib list throughout GH issue discussion comments:
https://github.com/goodboy/tractor/issues/196
- `capnproto`: https://capnproto.org/rpc.html
- https://capnproto.org/language.html#language-reference
'''
from __future__ import annotations
from contextlib import (
contextmanager as cm,
)
from contextvars import (
ContextVar,
Token,
)
import textwrap
from typing import (
Any,
Callable,
Protocol,
Type,
TYPE_CHECKING,
TypeVar,
Union,
)
from types import ModuleType
import msgspec
from msgspec import (
msgpack,
Raw,
)
# TODO: see notes below from @mikenerone..
# from tricycle import TreeVar
from tractor.msg.pretty_struct import Struct
from tractor.msg.types import (
mk_msg_spec,
MsgType,
PayloadMsg,
)
from tractor.log import get_logger
if TYPE_CHECKING:
from tractor._context import Context
log = get_logger(__name__)
# TODO: unify with `MsgCodec` by making `._dec` part this?
class MsgDec(Struct):
'''
An IPC msg (payload) decoder.
Normally used to decode only a payload: `MsgType.pld:
PayloadT` field before delivery to IPC consumer code.
'''
_dec: msgpack.Decoder
# _ext_types_box: Struct|None = None
@property
def dec(self) -> msgpack.Decoder:
return self._dec
def __repr__(self) -> str:
speclines: str = self.spec_str
# in multi-typed spec case we stick the list
# all on newlines after the |__pld_spec__:,
# OW it's prolly single type spec-value
# so just leave it on same line.
if '\n' in speclines:
speclines: str = '\n' + textwrap.indent(
speclines,
prefix=' '*3,
)
body: str = textwrap.indent(
f'|_dec_hook: {self.dec.dec_hook}\n'
f'|__pld_spec__: {speclines}\n',
prefix=' '*2,
)
return (
f'<{type(self).__name__}(\n'
f'{body}'
')>'
)
# struct type unions
# https://jcristharif.com/msgspec/structs.html#tagged-unions
#
# ^-TODO-^: make a wrapper type for this such that alt
# backends can be represented easily without a `Union` needed,
# AND so that we have better support for wire transport.
#
# -[ ] maybe `FieldSpec` is a good name since msg-spec
# better applies to a `MsgType[FieldSpec]`?
#
# -[ ] both as part of the `.open_context()` call AND as part of the
# immediate ack-reponse (see similar below)
# we should do spec matching and fail if anything is awry?
#
# -[ ] eventually spec should be generated/parsed from the
# type-annots as # desired in GH issue:
# https://github.com/goodboy/tractor/issues/365
#
# -[ ] semantics of the mismatch case
# - when caller-callee specs we should raise
# a `MsgTypeError` or `MsgSpecError` or similar?
#
# -[ ] wrapper types for both spec types such that we can easily
# IPC transport them?
# - `TypeSpec: Union[Type]`
# * also a `.__contains__()` for doing `None in
# TypeSpec[None|int]` since rn you need to do it on
# `.__args__` for unions..
# - `MsgSpec: Union[MsgType]
#
# -[ ] auto-genning this from new (in 3.12) type parameter lists Bo
# |_ https://docs.python.org/3/reference/compound_stmts.html#type-params
# |_ historical pep 695: https://peps.python.org/pep-0695/
# |_ full lang spec: https://typing.readthedocs.io/en/latest/spec/
# |_ on annotation scopes:
# https://docs.python.org/3/reference/executionmodel.html#annotation-scopes
# |_ 3.13 will have subscriptable funcs Bo
# https://peps.python.org/pep-0718/
@property
def spec(self) -> Union[Type[Struct]]:
# NOTE: defined and applied inside `mk_codec()`
return self._dec.type
# no difference, as compared to a `MsgCodec` which defines the
# `MsgType.pld: PayloadT` part of its spec separately
pld_spec = spec
# TODO: would get moved into `FieldSpec.__str__()` right?
@property
def spec_str(self) -> str:
return pformat_msgspec(
codec=self,
join_char='|',
)
pld_spec_str = spec_str
def decode(
self,
raw: Raw|bytes,
) -> Any:
return self._dec.decode(raw)
@property
def hook(self) -> Callable|None:
return self._dec.dec_hook
def mk_dec(
spec: Union[Type[Struct]]|Type|None,
# NOTE, required for ad-hoc type extensions to the underlying
# serialization proto (which is default `msgpack`),
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
dec_hook: Callable|None = None,
ext_types: list[Type]|None = None,
) -> MsgDec:
'''
Create an IPC msg decoder, a slightly higher level wrapper around
a `msgspec.msgpack.Decoder` which provides,
- easier introspection of the underlying type spec via
the `.spec` and `.spec_str` attrs,
- `.hook` access to the `Decoder.dec_hook()`,
- automatic custom extension-types decode support when
`dec_hook()` is provided such that any `PayloadMsg.pld` tagged
as a type from from `ext_types` (presuming the `MsgCodec.encode()` also used
a `.enc_hook()`) is processed and constructed by a `PldRx` implicitily.
NOTE, as mentioned a `MsgDec` is normally used for `PayloadMsg.pld: PayloadT` field
decoding inside an IPC-ctx-oriented `PldRx`.
'''
if (
spec is None
and
ext_types is None
):
raise TypeError(
f'MIssing type-`spec` for msg decoder!\n'
f'\n'
f'`spec=None` is **only** permitted is if custom extension types '
f'are provided via `ext_types`, meaning it must be non-`None`.\n'
f'\n'
f'In this case it is presumed that only the `ext_types`, '
f'which much be handled by a paired `dec_hook()`, '
f'will be permitted within the payload type-`spec`!\n'
f'\n'
f'spec = {spec!r}\n'
f'dec_hook = {dec_hook!r}\n'
f'ext_types = {ext_types!r}\n'
)
if dec_hook:
if ext_types is None:
raise TypeError(
f'If extending the serializable types with a custom decode hook (`dec_hook()`), '
f'you must also provide the expected type set that the hook will handle '
f'via a `ext_types: Union[Type]|None = None` argument!\n'
f'\n'
f'dec_hook = {dec_hook!r}\n'
f'ext_types = {ext_types!r}\n'
)
# XXX, i *thought* we would require a boxing struct as per docs,
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
# |_ see comment,
# > Note that typed deserialization is required for
# > successful roundtripping here, so we pass `MyMessage` to
# > `Decoder`.
#
# BUT, turns out as long as you spec a union with `Raw` it
# will work? kk B)
#
# maybe_box_struct = mk_boxed_ext_struct(ext_types)
spec = Raw | Union[*ext_types]
return MsgDec(
_dec=msgpack.Decoder(
type=spec, # like `MsgType[Any]`
dec_hook=dec_hook,
),
)
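
So an extension-type-only decoder might be built like the following sketch (`Decimal`-over-`str` is an assumed wire encoding, not part of this module):

from decimal import Decimal

def dec_hook(type_: type, obj: object) -> object:
    # reconstruct ext types from their (assumed) str wire-repr
    if type_ is Decimal:
        return Decimal(obj)
    raise NotImplementedError(type_)

dec: MsgDec = mk_dec(
    spec=None,  # ext types only, per the guard logic above
    dec_hook=dec_hook,
    ext_types=[Decimal],
)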
# TODO? remove since didn't end up needing this?
def mk_boxed_ext_struct(
ext_types: list[Type],
) -> Struct:
# NOTE, originally was to wrap non-msgpack-supported "extension
# types" in a field-typed boxing struct, see notes around the
# `dec_hook()` branch in `mk_dec()`.
ext_types_union = Union[*ext_types]
repr_ext_types_union: str = (
str(ext_types_union)
or
"|".join(ext_types)
)
BoxedExtType = msgspec.defstruct(
f'BoxedExts[{repr_ext_types_union}]',
fields=[
('boxed', ext_types_union),
],
)
return BoxedExtType
def unpack_spec_types(
spec: Union[Type]|Type,
) -> set[Type]:
'''
Given an input type-`spec`, either a lone type
or a `Union` of types (like `str|int|MyThing`),
return a set of individual types.
When `spec` is not a type-union returns `{spec,}`.
'''
spec_subtypes: set[Union[Type]] = set(
getattr(
spec,
'__args__',
{spec,},
)
)
return spec_subtypes
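
E.g.:

assert unpack_spec_types(str | int | bytes) == {str, int, bytes}
assert unpack_spec_types(str) == {str}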
def mk_msgspec_table(
dec: msgpack.Decoder,
msg: MsgType|None = None,
) -> dict[str, MsgType]|str:
'''
Fill out a `dict` of `MsgType`s keyed by name
for a given input `msgspec.msgpack.Decoder`
as defined by its `.type: Union[Type]` setting.
If `msg` is provided, only deliver a `dict` with a single
entry for that type.
'''
msgspec: Union[Type]|Type = dec.type
if not (msgtypes := getattr(msgspec, '__args__', False)):
msgtypes = [msgspec]
msgt_table: dict[str, MsgType] = {
msgt: str(msgt.__name__)
for msgt in msgtypes
}
if msg:
msgt: MsgType = type(msg)
str_repr: str = msgt_table[msgt]
return {msgt: str_repr}
return msgt_table
def pformat_msgspec(
codec: MsgCodec|MsgDec,
msg: MsgType|None = None,
join_char: str = '\n',
) -> str:
'''
Pretty `str` format the `msgspec.msgpack.Decoder.type` attribute
for display in (console) log messages as a nice (maybe multiline)
presentation of all supported `Struct`s (subtypes) available for
typed decoding.
'''
dec: msgpack.Decoder = getattr(codec, 'dec', codec)
return join_char.join(
mk_msgspec_table(
dec=dec,
msg=msg,
).values()
)
# TODO: overall IPC msg-spec features (i.e. in this mod)!
#
# -[ ] API changes towards being interchange lib agnostic!
# -[ ] capnproto has pre-compiled schema for eg..
# * https://capnproto.org/language.html
# * http://capnproto.github.io/pycapnp/quickstart.html
# * https://github.com/capnproto/pycapnp/blob/master/examples/addressbook.capnp
#
# -[ ] struct aware messaging coders as per:
# -[x] https://github.com/goodboy/tractor/issues/36
# -[ ] https://github.com/goodboy/tractor/issues/196
# -[ ] https://github.com/goodboy/tractor/issues/365
#
class MsgCodec(Struct):
'''
A IPC msg interchange format lib's encoder + decoder pair.
Pretty much nothing more then delegation to underlying
`msgspec.<interchange-protocol>.Encoder/Decoder`s for now.
'''
_enc: msgpack.Encoder
_dec: msgpack.Decoder
_pld_spec: Type[Struct]|Raw|Any
# _ext_types_box: Struct|None = None
def __repr__(self) -> str:
speclines: str = textwrap.indent(
pformat_msgspec(codec=self),
prefix=' '*3,
)
body: str = textwrap.indent(
f'|_lib = {self.lib.__name__!r}\n'
f'|_enc_hook: {self.enc.enc_hook}\n'
f'|_dec_hook: {self.dec.dec_hook}\n'
f'|_pld_spec: {self.pld_spec_str}\n'
# f'|\n'
f'|__msg_spec__:\n'
f'{speclines}\n',
prefix=' '*2,
)
return (
f'<{type(self).__name__}(\n'
f'{body}'
')>'
)
@property
def pld_spec(self) -> Type[Struct]|Raw|Any:
return self._pld_spec
@property
def pld_spec_str(self) -> str:
# TODO: could also use match: instead?
spec: Union[Type]|Type = self.pld_spec
# `typing.Union` case
if getattr(spec, '__args__', False):
return str(spec)
# just a single type
else:
return spec.__name__
# struct type unions
# https://jcristharif.com/msgspec/structs.html#tagged-unions
@property
def msg_spec(self) -> Union[Type[Struct]]:
# NOTE: defined and applied inside `mk_codec()`
return self._dec.type
# TODO: some way to make `pretty_struct.Struct` use this
# wrapped field over the `.msg_spec` one?
@property
def msg_spec_str(self) -> str:
return pformat_msgspec(self.msg_spec)
lib: ModuleType = msgspec
# TODO: use `functools.cached_property` for these ?
# https://docs.python.org/3/library/functools.html#functools.cached_property
@property
def enc(self) -> msgpack.Encoder:
return self._enc
# TODO: reusing encode buffer for perf?
# https://jcristharif.com/msgspec/perf-tips.html#reusing-an-output-buffer
_buf: bytearray = bytearray()
def encode(
self,
py_obj: Any|PayloadMsg,
use_buf: bool = False,
# ^-XXX-^ uhh why am i getting this?
# |_BufferError: Existing exports of data: object cannot be re-sized
as_ext_type: bool = False,
hide_tb: bool = True,
) -> bytes:
'''
Encode input python objects to `msgpack` bytes for
transfer on a tranport protocol connection.
When `use_buf == True` use the output buffer optimization:
https://jcristharif.com/msgspec/perf-tips.html#reusing-an-output-buffer
'''
__tracebackhide__: bool = hide_tb
if use_buf:
self._enc.encode_into(py_obj, self._buf)
return self._buf
return self._enc.encode(py_obj)
# try:
# return self._enc.encode(py_obj)
# except TypeError as typerr:
# typerr.add_note(
# '|_src error from `msgspec`'
# # f'|_{self._enc.encode!r}'
# )
# raise typerr
# TODO! REMOVE once i'm confident we won't ever need it!
#
# box: Struct = self._ext_types_box
# if (
# as_ext_type
# or
# (
# # XXX NOTE, auto-detect if the input type
# box
# and
# (ext_types := unpack_spec_types(
# spec=box.__annotations__['boxed'])
# )
# )
# ):
# match py_obj:
# # case PayloadMsg(pld=pld) if (
# # type(pld) in ext_types
# # ):
# # py_obj.pld = box(boxed=py_obj)
# # breakpoint()
# case _ if (
# type(py_obj) in ext_types
# ):
# py_obj = box(boxed=py_obj)
@property
def dec(self) -> msgpack.Decoder:
return self._dec
def decode(
self,
msg: bytes,
) -> Any:
'''
Decode received `msgpack` bytes into a local python object
with special `msgspec.Struct` (or other type) handling
determined by the
'''
# https://jcristharif.com/msgspec/usage.html#typed-decoding
return self._dec.decode(msg)
# ?TODO? time to remove this finally?
#
# -[x] TODO: a sub-decoder system as well?
# => No! already re-architected to include a "payload-receiver"
# now found in `._ops`.
#
# -[x] do we still want to try and support the sub-decoder with
# `.Raw` technique in the case that the `Generic` approach gives
# future grief?
# => well YES but NO, since we went with the `PldRx` approach
# instead!
#
# IF however you want to see the code that was staged for this
# from wayyy back, see the pure removal commit.
def mk_codec(
ipc_pld_spec: Union[Type[Struct]]|Any|Raw = Raw,
# tagged-struct-types-union set for `Decoder`ing of payloads, as
# per https://jcristharif.com/msgspec/structs.html#tagged-unions.
# NOTE that the default `Raw` here **is very intentional** since
# the `PldRx._pld_dec: MsgDec` is responsible for per ipc-ctx-task
# decoding of msg-specs defined by the user as part of **their**
# `tractor` "app's" type-limited IPC msg-spec.
# TODO: offering a per-msg(-field) type-spec such that
# the fields can be dynamically NOT decoded and left as `Raw`
# values which are later loaded by a sub-decoder specified
# by `tag_field: str` value key?
# payload_msg_specs: dict[
# str, # tag_field value as sub-decoder key
# Union[Type[Struct]] # `MsgType.pld` type spec
# ]|None = None,
libname: str = 'msgspec',
# settings for encoding-to-send extension-types,
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
# dec_hook: Callable|None = None,
enc_hook: Callable|None = None,
ext_types: list[Type]|None = None,
# optionally provided msg-decoder from which we pull its,
# |_.dec_hook()
# |_.type
ext_dec: MsgDec|None = None
#
# ?TODO? other params we might want to support
# Encoder:
# write_buffer_size=write_buffer_size,
#
# Decoder:
# ext_hook: ext_hook_sig
) -> MsgCodec:
'''
Convenience factory for creating codecs eventually meant
to be interchange lib agnostic (i.e. once we support more than just
`msgspec` ;).
'''
pld_spec = ipc_pld_spec
if enc_hook:
if not ext_types:
raise TypeError(
f'If extending the serializable types with a custom encode hook (`enc_hook()`), '
f'you must also provide the expected type set that the hook will handle '
f'via an `ext_types: Union[Type]|None = None` argument!\n'
f'\n'
f'enc_hook = {enc_hook!r}\n'
f'ext_types = {ext_types!r}\n'
)
dec_hook: Callable|None = None
if ext_dec:
dec: msgspec.Decoder = ext_dec.dec
dec_hook = dec.dec_hook
pld_spec |= dec.type
if ext_types:
pld_spec |= Union[*ext_types]
# (manually) generate a msg-spec (how appropes) for all relevant
# payload-boxing-struct-msg-types, parameterizing the
# `PayloadMsg.pld: PayloadT` for the decoder such that all msgs
# in our SC-RPC-protocol will automatically decode to
# a type-"limited" payload (`Struct`) object (set).
(
ipc_msg_spec,
msg_types,
) = mk_msg_spec(
payload_type_union=pld_spec,
)
msg_spec_types: set[Type] = unpack_spec_types(ipc_msg_spec)
assert (
len(ipc_msg_spec.__args__) == len(msg_types)
and
len(msg_spec_types) == len(msg_types)
)
dec = msgpack.Decoder(
type=ipc_msg_spec,
dec_hook=dec_hook,
)
enc = msgpack.Encoder(
enc_hook=enc_hook,
)
codec = MsgCodec(
_enc=enc,
_dec=dec,
_pld_spec=pld_spec,
)
# sanity on expected backend support
assert codec.lib.__name__ == libname
return codec
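# XXX EXAMPLE/SKETCH (not part of the original module): typical
# usage of `mk_codec()` + the `MsgCodec.encode()/.decode()` pair;
# `MyPld` is a hypothetical app-side payload struct assumed purely
# for illustration.
def _example_codec_roundtrip() -> None:
    from tractor.msg.types import Started

    class MyPld(Struct):
        field: int

    codec: MsgCodec = mk_codec(ipc_pld_spec=MyPld)

    # encode a payload-boxing msg to wire-bytes..
    wire: bytes = codec.encode(
        Started(cid='1', pld=MyPld(field=42))
    )
    # ..then decode it back to a pld-spec-limited msg instance.
    msg: Started = codec.decode(wire)
    assert msg.pld.field == 42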
# instance of the default `msgspec.msgpack` codec settings, i.e.
# no custom structs, hooks or other special types.
#
# XXX NOTE XXX, this will break our `Context.start()` call!
#
# * by default we roundtrip the started pld-`value` and if you apply
# this codec (globally anyway with `apply_codec()`) then the
# `roundtripped` value will include a non-`.pld: Raw` which will
# then type-error on the subsequent `._ops.validate_payload_msg()`..
#
_def_msgspec_codec: MsgCodec = mk_codec(
ipc_pld_spec=Any,
)
# The built-in IPC `Msg` spec.
# Our composing "shuttle" protocol which allows `tractor`-app code
# to use any `msgspec` supported type as the `PayloadMsg.pld` payload,
# https://jcristharif.com/msgspec/supported-types.html
#
_def_tractor_codec: MsgCodec = mk_codec(
ipc_pld_spec=Raw, # XXX should be the default right!?
)
# -[x] TODO, IDEALLY provides for per-`trio.Task` specificity of the
# IPC msging codec used by the transport layer when doing
# `Channel.send()/.recv()` of wire data.
# => impled as our `PldRx` which is `Context` scoped B)
# ContextVar-TODO: DIDN'T WORK, kept resetting in every new task to default!?
# _ctxvar_MsgCodec: ContextVar[MsgCodec] = ContextVar(
# TreeVar-TODO: DIDN'T WORK, kept resetting in every new embedded nursery
# even though it's supposed to inherit from a parent context ???
#
# _ctxvar_MsgCodec: TreeVar[MsgCodec] = TreeVar(
#
# ^-NOTE-^: for this to work see the mods by @mikenerone from `trio` gitter:
#
# 22:02:54 <mikenerone> even for regular contextvars, all you have to do is:
# `task: Task = trio.lowlevel.current_task()`
# `task.parent_nursery.parent_task.context.run(my_ctx_var.set, new_value)`
#
# From a comment in his prop code he couldn't share outright:
# 1. For every TreeVar set in the current task (which covers what
# we need from SynchronizerFacade), walk up the tree until the
# root or finding one where the TreeVar is already set, setting
# it in all of the contexts along the way.
# 2. For each of those, we also forcibly set the values that are
# pending for child nurseries that have not yet accessed the
# TreeVar.
# 3. We similarly set the pending values for the child nurseries
# of the *current* task.
#
_ctxvar_MsgCodec: ContextVar[MsgCodec] = ContextVar(
'msgspec_codec',
default=_def_tractor_codec,
)
@cm
def apply_codec(
codec: MsgCodec,
ctx: Context|None = None,
) -> MsgCodec:
'''
Dynamically apply a `MsgCodec` to the current task's runtime
context such that all IPC msgs (of the payload-boxing class,
i.e. with a `MsgType.pld: PayloadT` field) are processed with
it for that task.
Uses a `contextvars.ContextVar` to ensure the scope of any
codec setting matches the current `Context` or
`._rpc.process_messages()` feeder task's prior setting without
mutating any surrounding scope.
When a `ctx` is supplied, only its `Context`-local codec var is
modified. (With the alternative `TreeVar` approach noted in the
surrounding comments, the setting's scope would still match the
`@cm` block but would NOT reset to the original (default) value
in newly spawned tasks as it does for a `ContextVar`.)
'''
__tracebackhide__: bool = True
if ctx is not None:
var: ContextVar = ctx._var_pld_codec
else:
# use IPC channel-connection "global" codec
var: ContextVar = _ctxvar_MsgCodec
orig: MsgCodec = var.get()
assert orig is not codec
if codec.pld_spec is None:
breakpoint()
log.info(
'Applying new msg-spec codec\n\n'
f'{codec}\n'
)
token: Token = var.set(codec)
try:
yield var.get()
finally:
var.reset(token)
log.info(
'Reverted to last msg-spec codec\n\n'
f'{orig}\n'
)
assert var.get() is orig
# ?TODO? for TreeVar approach which copies from the
# cancel-scope of the prior value, NOT the prior task
#
# See the docs:
# - https://tricycle.readthedocs.io/en/latest/reference.html#tree-variables
# - https://github.com/oremanj/tricycle/blob/master/tricycle/_tests/test_tree_var.py
# ^- see docs for @cm `.being()` API
#
# with _ctxvar_MsgCodec.being(codec):
# new = _ctxvar_MsgCodec.get()
# assert new is codec
# yield codec
def current_codec() -> MsgCodec:
'''
Return the current `trio.Task.context`'s value
for `msgspec_codec` used by `Channel.send/.recv()`
for wire serialization.
'''
return _ctxvar_MsgCodec.get()
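# XXX EXAMPLE/SKETCH (not part of the original module): task-scope
# a custom codec and verify it reverts on exit; `MyPld` is
# a hypothetical payload struct assumed purely for illustration.
def _example_apply_codec() -> None:
    class MyPld(Struct):
        field: int

    my_codec: MsgCodec = mk_codec(ipc_pld_spec=MyPld)
    orig: MsgCodec = current_codec()

    with apply_codec(my_codec) as applied:
        assert applied is my_codec
        assert current_codec() is my_codec

    # the prior codec is always restored on block exit.
    assert current_codec() is orig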
@cm
def limit_msg_spec(
payload_spec: Union[Type[Struct]],
# TODO: don't need this approach right?
# -> related to the `MsgCodec._payload_decs` stuff above..
# tagged_structs: list[Struct]|None = None,
hide_tb: bool = True,
**codec_kwargs,
) -> MsgCodec:
'''
Apply a `MsgCodec` that will natively decode the SC-msg set's
`PayloadMsg.pld: Union[Type[Struct]]` payload fields using
tagged-unions of `msgspec.Struct`s from the `payload_spec`
for all IPC contexts in use by the current `trio.Task`.
'''
__tracebackhide__: bool = hide_tb
curr_codec: MsgCodec = current_codec()
msgspec_codec: MsgCodec = mk_codec(
ipc_pld_spec=payload_spec,
**codec_kwargs,
)
with apply_codec(msgspec_codec) as applied_codec:
assert applied_codec is msgspec_codec
yield msgspec_codec
assert curr_codec is current_codec()
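# XXX EXAMPLE/SKETCH (not part of the original module): limit the
# current task's decodable payload set to a tagged-union of
# structs; `PldA`/`PldB` are hypothetical app types assumed purely
# for illustration.
def _example_limit_msg_spec() -> None:
    class PldA(Struct, tag=True):
        a: int

    class PldB(Struct, tag=True):
        b: str

    with limit_msg_spec(PldA | PldB) as codec:
        spec_types: set[Type] = unpack_spec_types(codec.pld_spec)
        assert {PldA, PldB}.issubset(spec_types)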
# XXX: msgspec won't allow this with non-struct custom types
# like `NamespacePath`!@!
# @cm
# def extend_msg_spec(
# payload_spec: Union[Type[Struct]],
# ) -> MsgCodec:
# '''
# Extend the current `MsgCodec.pld_spec` (type set) by extending
# the payload spec to **include** the types specified by
# `payload_spec`.
# '''
# codec: MsgCodec = current_codec()
# pld_spec: Union[Type] = codec.pld_spec
# extended_spec: Union[Type] = pld_spec|payload_spec
# with limit_msg_spec(payload_types=extended_spec) as ext_codec:
# # import pdbp; pdbp.set_trace()
# assert ext_codec.pld_spec == extended_spec
# yield ext_codec
#
# ^-TODO-^ is it impossible to make something like this orr!?
# TODO: make an auto-custom hook generator from a set of input custom
# types?
# -[ ] below is a proto design using a `TypeCodec` idea?
#
# type var for the expected interchange-lib's
# IPC-transport type when not available as a built-in
# serialization output.
WireT = TypeVar('WireT')
# TODO: some kinda (decorator) API for built-in subtypes
# that builds this implicitly by inspecting the `mro()`?
class TypeCodec(Protocol):
'''
A per-custom-type wire-transport serialization translator
description type.
'''
src_type: Type
wire_type: WireT
def encode(obj: Type) -> WireT:
...
def decode(
obj_type: Type[WireT],
obj: WireT,
) -> Type:
...
class MsgpackTypeCodec(TypeCodec):
...
def mk_codec_hooks(
type_codecs: list[TypeCodec],
) -> tuple[Callable, Callable]:
'''
Deliver an `enc_hook()`/`dec_hook()` pair which handles manual
conversion for an input `Type` set such that whenever
a `TypeCodec`'s match predicate hits, `TypeCodec.decode()` is
called on the input native object by the `dec_hook()`, and
whenever `isinstance(obj, TypeCodec.src_type)` matches inside
the `enc_hook(obj=obj)`, the return value is taken from
a `TypeCodec.encode(obj)` callback.
'''
...
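# XXX EXAMPLE/SKETCH (not part of the original module):
# a hand-rolled `enc_hook()`/`dec_hook()` pair for one custom type
# as per the `msgspec` extending docs,
# https://jcristharif.com/msgspec/extending.html
# i.e. roughly the kind of output a `TypeCodec`-driven
# `mk_codec_hooks()` might generate.
def _example_decimal_hooks() -> tuple[Callable, Callable]:
    from decimal import Decimal

    def enc_hook(obj: Any) -> Any:
        if isinstance(obj, Decimal):
            # ship as a plain `str` on the wire
            return str(obj)
        raise NotImplementedError(
            f'Objects of type {type(obj)} are not supported'
        )

    def dec_hook(obj_type: Type, obj: Any) -> Any:
        if obj_type is Decimal:
            return Decimal(obj)
        raise NotImplementedError(
            f'Objects of type {obj_type} are not supported'
        )

    return enc_hook, dec_hook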

View File

@@ -1,94 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Type-extension-utils for codec-ing (python) objects not
covered by the `msgspec.msgpack` protocol.
See the various API docs from `msgspec`.
extending from native types,
- https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
converters,
- https://jcristharif.com/msgspec/converters.html
- https://jcristharif.com/msgspec/api.html#msgspec.convert
`Raw` fields,
- https://jcristharif.com/msgspec/api.html#raw
- support for `.convert()` and `Raw`,
|_ https://jcristharif.com/msgspec/changelog.html
'''
from types import (
ModuleType,
)
import typing
from typing import (
Type,
Union,
)
def dec_type_union(
type_names: list[str],
mods: list[ModuleType] = []
) -> Type|Union[Type]:
'''
Look up types by name, compile into a list and then create and
return a `typing.Union` from the full set.
'''
# import importlib
types: list[Type] = []
for type_name in type_names:
for mod in [
typing,
# importlib.import_module(__name__),
] + mods:
if type_ref := getattr(
    mod,
    type_name,
    False,
):
    types.append(type_ref)
    # stop at the first match to avoid dup appends
    # when a name resolves in multiple mods.
    break
# special case handling only..
# ipc_pld_spec: Union[Type] = eval(
# pld_spec_str,
# {}, # globals
# {'typing': typing}, # locals
# )
return Union[*types]
def enc_type_union(
union_or_type: Union[Type]|Type,
) -> list[str]:
'''
Encode a type-union or single type to a list of type-name-strings
ready for IPC interchange.
'''
type_strs: list[str] = []
for typ in getattr(
union_or_type,
'__args__',
{union_or_type,},
):
type_strs.append(typ.__qualname__)
return type_strs
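# XXX EXAMPLE/SKETCH (not part of the original module): roundtrip
# a union through its type-name-string encoding and back;
# `builtins` is passed as a lookup mod since `int`/`float` don't
# live in `typing`.
def _example_type_union_roundtrip() -> None:
    import builtins

    type_strs: list[str] = enc_type_union(int | float)
    assert type_strs == ['int', 'float']

    union: Union[Type] = dec_type_union(
        type_strs,
        mods=[builtins],
    )
    assert union == Union[int, float]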

View File

@@ -1,905 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Near-application abstractions for `MsgType.pld: PayloadT|Raw`
delivery, filtering and type checking as well as generic
operational helpers for processing transaction flows.
'''
from __future__ import annotations
from contextlib import (
asynccontextmanager as acm,
contextmanager as cm,
)
from typing import (
Any,
Callable,
Type,
TYPE_CHECKING,
Union,
)
# ------ - ------
from msgspec import (
msgpack,
Raw,
Struct,
ValidationError,
)
import trio
# ------ - ------
from tractor.log import get_logger
from tractor._exceptions import (
MessagingError,
InternalError,
_raise_from_unexpected_msg,
MsgTypeError,
_mk_recv_mte,
pack_error,
)
from tractor._state import (
current_ipc_ctx,
)
from ._codec import (
mk_dec,
MsgDec,
MsgCodec,
current_codec,
)
from .types import (
CancelAck,
Error,
MsgType,
PayloadT,
Return,
Started,
Stop,
Yield,
pretty_struct,
)
if TYPE_CHECKING:
from tractor._context import Context
from tractor._streaming import MsgStream
log = get_logger(__name__)
_def_any_pldec: MsgDec[Any] = mk_dec(spec=Any)
class PldRx(Struct):
'''
A "msg payload receiver".
The pairing of a "feeder" `trio.abc.ReceiveChannel` and an
interchange-specific (eg. msgpack) payload field decoder. The
validation/type-filtering rules are runtime mutable and allow
type constraining the set of `MsgType.pld: Raw|PayloadT`
values at runtime, per IPC task-context.
This abstraction, being just below "user application code",
allows for the equivalent of our `MsgCodec` (used for
type-filtering IPC dialog protocol msgs against a msg-spec)
but with granular control around payload delivery (i.e. the
data-values user code actually sees and uses: the blobs that
are "shuttled" by the wrapping dialog protocol) such that
invalid `.pld: Raw` can be decoded and handled by IPC-primitive
user code (i.e. code that operates on `Context` and `MsgStream`
APIs)
without knowledge of the lower level `Channel`/`MsgTransport`
primitives nor the `MsgCodec` in use. Further, lazily decoding
payload blobs allows for topical (and maybe intentionally
"partial") encryption of msg field subsets.
'''
# TODO: better to bind it here?
# _rx_mc: trio.MemoryReceiveChannel
_pld_dec: MsgDec
@property
def pld_dec(self) -> MsgDec:
return self._pld_dec
@cm
def limit_plds(
self,
spec: Union[Type[Struct]],
**dec_kwargs,
) -> MsgDec:
'''
Type-limit the loadable msg payloads via an applied
`MsgDec` given an input spec, reverting to the prior decoder
on exit.
'''
# TODO, ensure we pull the current `MsgCodec`'s custom
# dec/enc_hook settings as well ?
# -[ ] see `._codec.mk_codec()` inputs
#
orig_dec: MsgDec = self._pld_dec
limit_dec: MsgDec = mk_dec(
spec=spec,
**dec_kwargs,
)
try:
self._pld_dec = limit_dec
yield limit_dec
finally:
self._pld_dec = orig_dec
@property
def dec(self) -> msgpack.Decoder:
return self._pld_dec.dec
def recv_msg_nowait(
self,
# TODO: make this `MsgStream` compat as well, see above^
# ipc_prim: Context|MsgStream,
ipc: Context|MsgStream,
ipc_msg: MsgType|None = None,
expect_msg: Type[MsgType]|None = None,
hide_tb: bool = False,
**dec_pld_kwargs,
) -> tuple[
MsgType[PayloadT],
PayloadT,
]:
'''
Attempt a non-blocking receive of a message from the
`._rx_chan` and unwrap its payload, delivering the
(msg, pld) pair to the caller.
'''
__tracebackhide__: bool = hide_tb
msg: MsgType = (
ipc_msg
or
# sync-rx msg from underlying IPC feeder (mem-)chan
ipc._rx_chan.receive_nowait()
)
pld: PayloadT = self.decode_pld(
msg,
ipc=ipc,
expect_msg=expect_msg,
hide_tb=hide_tb,
**dec_pld_kwargs,
)
return (
msg,
pld,
)
async def recv_msg(
self,
ipc: Context|MsgStream,
expect_msg: MsgType,
# NOTE: ONLY for handling `Stop`-msgs that arrive during
# a call to `drain_to_final_msg()` below!
passthrough_non_pld_msgs: bool = True,
hide_tb: bool = True,
**decode_pld_kwargs,
) -> tuple[MsgType, PayloadT]:
'''
Retrieve the next avail IPC msg, decode its payload, and
return the (msg, pld) pair.
'''
__tracebackhide__: bool = hide_tb
msg: MsgType = await ipc._rx_chan.receive()
match msg:
case Return()|Error():
log.runtime(
f'Rxed final outcome msg\n'
f'{msg}\n'
)
case Stop():
log.runtime(
f'Rxed stream stopped msg\n'
f'{msg}\n'
)
if passthrough_non_pld_msgs:
return msg, None
# TODO: is there some way we can inject the decoded
# payload into an existing output buffer for the original
# msg instance?
pld: PayloadT = self.decode_pld(
msg,
ipc=ipc,
expect_msg=expect_msg,
hide_tb=hide_tb,
**decode_pld_kwargs,
)
return (
msg,
pld,
)
async def recv_pld(
self,
ipc: Context|MsgStream,
ipc_msg: MsgType[PayloadT]|None = None,
expect_msg: Type[MsgType]|None = None,
hide_tb: bool = True,
**dec_pld_kwargs,
) -> PayloadT:
'''
Receive a `MsgType`, then decode and return its `.pld` field.
'''
__tracebackhide__: bool = hide_tb
msg: MsgType = (
ipc_msg
or
# async-rx msg from underlying IPC feeder (mem-)chan
await ipc._rx_chan.receive()
)
if (
type(msg) is Return
):
log.info(
f'Rxed final result msg\n'
f'{msg}\n'
)
return self.decode_pld(
msg=msg,
ipc=ipc,
expect_msg=expect_msg,
**dec_pld_kwargs,
)
def decode_pld(
self,
msg: MsgType,
ipc: Context|MsgStream,
expect_msg: Type[MsgType]|None,
raise_error: bool = True,
hide_tb: bool = True,
# XXX for special (default?) case of send side call with
# `Context.started(validate_pld_spec=True)`
is_started_send_side: bool = False,
) -> PayloadT|Raw:
'''
Decode a msg's payload field: `MsgType.pld: PayloadT|Raw` and
return the value or raise an appropriate error.
'''
__tracebackhide__: bool = hide_tb
src_err: BaseException|None = None
match msg:
# payload-data shuttle msg; deliver the `.pld` value
# directly to IPC (primitive) client-consumer code.
case (
Started(pld=pld) # sync phase
|Yield(pld=pld) # streaming phase
|Return(pld=pld) # termination phase
):
try:
pld: PayloadT = self._pld_dec.decode(pld)
log.runtime(
'Decoded msg payload\n\n'
f'{msg}\n'
f'where payload decoded as\n'
f'|_pld={pld!r}\n'
)
return pld
except TypeError as typerr:
__tracebackhide__: bool = False
raise typerr
# XXX pld-value type failure
except ValidationError as valerr:
# pack the MTE into an error-msg for
# re-raise below; ensure remote-actor-err
# info is displayed nicely?
mte: MsgTypeError = _mk_recv_mte(
msg=msg,
codec=self.pld_dec,
src_validation_error=valerr,
is_invalid_payload=True,
expected_msg=expect_msg,
)
# NOTE: just raise the MTE inline instead of all
# the pack-unpack-repack non-sense when this is
# a "send side" validation error.
if is_started_send_side:
raise mte
# NOTE: the `.message` is automatically
# transferred into the message as long as we
# define it as a `Error.message` field.
err_msg: Error = pack_error(
exc=mte,
cid=msg.cid,
src_uid=(
ipc.chan.uid
if not is_started_send_side
else ipc._actor.uid
),
)
mte._ipc_msg = err_msg
# XXX override the `msg` passed to
# `_raise_from_unexpected_msg()` (below) so
# that we're effectively able to use that same
# func to unpack and raise an "emulated remote
# `Error`" of this local MTE.
msg = err_msg
# XXX NOTE: so when the `_raise_from_unexpected_msg()`
# raises the boxed `err_msg` from above it raises
# it from the above caught interchange-lib
# validation error.
src_err = valerr
# a runtime-internal RPC endpoint response.
# always passthrough since (internal) runtime
# responses are generally never exposed to consumer
# code.
case CancelAck(
pld=bool(cancelled)
):
return cancelled
case Error():
src_err = MessagingError(
'IPC ctx dialog terminated without `Return`-ing a result\n'
f'Instead it raised {msg.boxed_type_str!r}!'
)
# XXX NOTE XXX another super subtle runtime-y thing..
#
# - when user code (transitively) calls into this
# func (usually via a `Context/MsgStream` API) we
# generally want errors to propagate immediately
# and directly so that the user can define how it
# wants to handle them.
#
# HOWEVER,
#
# - for certain runtime calling cases, we don't want to
# directly raise since the calling code might have
# special logic around whether to raise the error
# or suppress it silently (eg. a `ContextCancelled`
# received from the far end which was requested by
# this side, aka a self-cancel).
#
# SO, we offer a flag to control this.
if not raise_error:
return src_err
case Stop(cid=cid):
ctx: Context = getattr(ipc, 'ctx', ipc)
message: str = (
f'{ctx.side!r}-side of ctx received stream-`Stop` from '
f'{ctx.peer_side!r} peer ?\n'
f'|_cid: {cid}\n\n'
f'{pretty_struct.pformat(msg)}\n'
)
if ctx._stream is None:
explain: str = (
f'BUT, no `MsgStream` (was) open(ed) on this '
f'{ctx.side!r}-side of the IPC ctx?\n'
f'Maybe check your code for streaming phase race conditions?\n'
)
log.warning(
message
+
explain
)
# let caller decide what to do when only one
# side opened a stream, don't raise.
return msg
else:
explain: str = (
'Received a `Stop` when it should NEVER be possible!?!?\n'
)
# TODO: this is constructed inside
# `_raise_from_unexpected_msg()` but maybe we
# should pass it in?
# src_err = trio.EndOfChannel(explain)
src_err = None
case _:
src_err = InternalError(
'Invalid IPC msg ??\n\n'
f'{msg}\n'
)
# TODO: maybe use the new `.add_note()` from 3.11?
# |_https://docs.python.org/3.11/library/exceptions.html#BaseException.add_note
#
# fallthrough and raise from `src_err`
try:
_raise_from_unexpected_msg(
ctx=getattr(ipc, 'ctx', ipc),
msg=msg,
src_err=src_err,
log=log,
expect_msg=expect_msg,
hide_tb=hide_tb,
)
except UnboundLocalError:
# XXX if there's an internal lookup error in the above
# code (prolly on `src_err`) we want to show this frame
# in the tb!
__tracebackhide__: bool = False
raise
@cm
def limit_plds(
spec: Union[Type[Struct]],
**dec_kwargs,
) -> MsgDec:
'''
Apply a `MsgDec` that will natively decode the SC-msg set's
`PayloadMsg.pld: Union[Type[Struct]]` payload fields using
tagged-unions of `msgspec.Struct`s from the given `spec`
for all IPC contexts in use by the current `trio.Task`.
'''
__tracebackhide__: bool = True
curr_ctx: Context|None = current_ipc_ctx()
if curr_ctx is None:
raise RuntimeError(
'No IPC `Context` is active !?\n'
'Did you open `limit_plds()` from outside '
'a `Portal.open_context()` scope-block?'
)
try:
rx: PldRx = curr_ctx._pld_rx
orig_pldec: MsgDec = rx.pld_dec
with rx.limit_plds(
spec=spec,
**dec_kwargs,
) as pldec:
log.runtime(
'Applying payload-decoder\n\n'
f'{pldec}\n'
)
yield pldec
except BaseException:
__tracebackhide__: bool = False
raise
finally:
log.runtime(
'Reverted to previous payload-decoder\n\n'
f'{orig_pldec}\n'
)
# sanity on orig settings
assert rx.pld_dec is orig_pldec
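# XXX EXAMPLE/SKETCH (not part of the original module): apply
# a payload limit from inside a (hypothetical) `@tractor.context`
# endpoint such that only `MyPld`-shaped `.pld` values will decode
# while the `with`-block is open; `MyPld` and `receiver()` are
# assumed purely for illustration.
def _example_limit_plds() -> Callable:
    import tractor

    class MyPld(Struct):
        field: int

    @tractor.context
    async def receiver(ctx: Context) -> None:
        await ctx.started()
        with limit_plds(MyPld):
            async with ctx.open_stream() as stream:
                async for pld in stream:
                    assert isinstance(pld, MyPld)

    return receiver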
@acm
async def maybe_limit_plds(
ctx: Context,
spec: Union[Type[Struct]]|None = None,
dec_hook: Callable|None = None,
**kwargs,
) -> MsgDec|None:
'''
Async compat maybe-payload type limiter.
Mostly for use inside other internal `@acm`s such that a separate
indent block isn't needed when an async one is already being
used.
'''
if (
spec is None
and
dec_hook is None
):
yield None
return
# sanity check on IPC scoping
curr_ctx: Context = current_ipc_ctx()
assert ctx is curr_ctx
with ctx._pld_rx.limit_plds(
spec=spec,
dec_hook=dec_hook,
**kwargs,
) as msgdec:
yield msgdec
# when the applied spec is unwound/removed, the same IPC-ctx
# should still be in scope.
curr_ctx: Context = current_ipc_ctx()
assert ctx is curr_ctx
async def drain_to_final_msg(
ctx: Context,
msg_limit: int = 6,
hide_tb: bool = True,
) -> tuple[
Return|None,
list[MsgType]
]:
'''
Drain IPC msgs delivered to the underlying IPC context's
rx-mem-chan (i.e. from `Context._rx_chan`) in search of a final
`Return` or `Error` msg.
Deliver the `Return` + preceding drained msgs (`list[MsgType]`)
as a pair unless an `Error` is found, in which case unpack and raise
it.
The motivation here is to always capture any remote error relayed
by the remote peer task during a ctxc condition.
For eg. a ctxc-request may be sent to the peer as part of the
local task's (request for) cancellation but then that same task
**also errors** before executing the teardown in the
`Portal.open_context().__aexit__()` block. In such error-on-exit
cases we want to always capture and raise any delivered remote
error (like an expected ctxc-ACK) as part of the final
`ctx.wait_for_result()` teardown sequence such that the
`Context.outcome` related state always reflect what transpired
even after ctx closure and the `.open_context()` block exit.
'''
raise_overrun: bool = not ctx._allow_overruns
parent_never_opened_stream: bool = ctx._stream is None
# wait for a final context result by collecting (but
# basically ignoring) any bi-dir-stream msgs still in transit
# from the far end.
pre_result_drained: list[MsgType] = []
result_msg: Return|Error|None = None
while not (
ctx.maybe_error
and
not ctx._final_result_is_set()
):
try:
# receive all msgs, scanning for either a final result
# or error; the underlying call should never raise any
# remote error directly!
msg, pld = await ctx._pld_rx.recv_msg(
ipc=ctx,
expect_msg=Return,
raise_error=False,
hide_tb=hide_tb,
)
# ^-TODO-^ some bad ideas?
# -[ ] wrap final outcome .receive() in a scope so
# it can be cancelled out of band if needed?
# |_with trio.CancelScope() as res_cs:
# ctx._res_scope = res_cs
# msg: dict = await ctx._rx_chan.receive()
# if res_cs.cancelled_caught:
#
# -[ ] make sure pause points work here for REPLing
# the runtime itself; i.e. ensure there's no hangs!
# |_from tractor.devx._debug import pause
# await pause()
# NOTE: we get here if the far end was
# `ContextCancelled` in 2 cases:
# 1. we requested the cancellation and thus
# SHOULD NOT raise that far end error,
# 2. WE DID NOT REQUEST that cancel and thus
# SHOULD RAISE HERE!
except trio.Cancelled as _taskc:
taskc: trio.Cancelled = _taskc
# report when the cancellation wasn't (ostensibly) due to
# the RPC operation itself but instead some surrounding
# parent cancel-scope.
if not ctx._scope.cancel_called:
task: trio.lowlevel.Task = trio.lowlevel.current_task()
rent_n: trio.Nursery = task.parent_nursery
if (
(local_cs := rent_n.cancel_scope).cancel_called
):
log.cancel(
'RPC-ctx cancelled by local-parent scope during drain!\n\n'
f'c}}>\n'
f' |_{rent_n}\n'
f' |_.cancel_scope = {local_cs}\n'
f' |_>c}}\n'
f' |_{ctx.pformat(indent=" "*9)}'
# ^TODO, some (other) simpler repr here?
)
__tracebackhide__: bool = False
else:
log.cancel(
f'IPC ctx cancelled externally during result drain ?\n'
f'{ctx}'
)
# CASE 2: mask the local cancelled-error(s)
# only when we are sure the remote error is
# the source cause of this local task's
# cancellation.
ctx.maybe_raise(
hide_tb=hide_tb,
from_src_exc=taskc,
# ?TODO? when *should* we use this?
)
# CASE 1: we DID request the cancel we simply
# continue to bubble up as normal.
raise taskc
match msg:
# final result arrived!
case Return():
log.runtime(
'Context delivered final draining msg:\n'
f'{pretty_struct.pformat(msg)}'
)
ctx._result: Any = pld
result_msg = msg
break
# far end task is still streaming to us so discard
# and report depending on local ctx state.
case Yield():
pre_result_drained.append(msg)
if (
not parent_never_opened_stream
and (
(ctx._stream.closed
and
(reason := 'stream was already closed')
) or
(ctx.cancel_acked
and
(reason := 'ctx cancelled other side')
)
or (ctx._cancel_called
and
(reason := 'ctx called `.cancel()`')
)
or (len(pre_result_drained) > msg_limit
and
(reason := f'"yield" limit={msg_limit}')
)
)
):
log.cancel(
'Cancelling `MsgStream` drain since '
f'{reason}\n\n'
f'<= {ctx.chan.uid}\n'
f' |_{ctx._nsf}()\n\n'
f'=> {ctx._task}\n'
f' |_{ctx._stream}\n\n'
f'{pretty_struct.pformat(msg)}\n'
)
break
# drain up to the `msg_limit` hoping to get
# a final result or error/ctxc.
else:
report: str = (
'Ignoring "yield" msg during `ctx.result()` drain..\n'
f'<= {ctx.chan.uid}\n'
f' |_{ctx._nsf}()\n\n'
f'=> {ctx._task}\n'
f' |_{ctx._stream}\n\n'
f'{pretty_struct.pformat(msg)}\n'
)
if parent_never_opened_stream:
report = (
f'IPC ctx never opened stream on {ctx.side!r}-side!\n'
f'\n'
# f'{ctx}\n'
) + report
log.warning(report)
continue
# stream terminated, but no result yet..
#
# TODO: work out edge cases here where
# a stream is open but the task also calls
# this?
# -[ ] should be a runtime error if a stream is open right?
# Stop()
case Stop():
pre_result_drained.append(msg)
log.runtime( # normal/expected shutdown transaction
'Remote stream terminated due to "stop" msg:\n\n'
f'{pretty_struct.pformat(msg)}\n'
)
continue
# remote error msg, likely already handled inside
# `Context._deliver_msg()`
case Error():
# TODO: can we replace this with `ctx.maybe_raise()`?
# -[ ] would this be handier for this case maybe?
# |_async with maybe_raise_on_exit() as raises:
# if raises:
# log.error('some msg about raising..')
#
re: Exception|None = ctx._remote_error
if re:
assert msg is ctx._cancel_msg
# NOTE: this solved THE super duper edge case XD, namely:
# - local task opens a remote task,
# - requests remote cancellation of far end
# ctx/tasks,
# - needs to wait for the cancel ack msg
# (ctxc) or some result in the race case
# where the other side's task returns
# before the cancel request msg is ever
# rxed and processed,
# - here this surrounding drain loop (which
# iterates all ipc msgs until the ack or
# an early result arrives) was NOT exiting
# since we are the edge case: local task
# does not re-raise any ctxc it receives
# IFF **it** was the cancellation
# requester..
#
# XXX will raise if necessary but ow break
# from loop presuming any supressed error
# (ctxc) should terminate the context!
ctx._maybe_raise_remote_err(
re,
# NOTE: obvi we don't care if we
# overran the far end if we're already
# waiting on a final result (msg).
# raise_overrun_from_self=False,
raise_overrun_from_self=raise_overrun,
)
result_msg = msg
break # OOOOOF, yeah obvi we need this..
else:
# bubble the original src key error
raise
# XXX should pretty much never get here unless someone
# overrides the default `MsgType` spec.
case _:
pre_result_drained.append(msg)
# It's definitely an internal error if any other
# msg type without a `'cid'` field arrives here!
report: str = (
f'Invalid or unknown msg type {type(msg)!r}!?\n'
)
if not msg.cid:
report += (
'\nWhich also has no `.cid` field?\n'
)
raise MessagingError(
report
+
f'\n{msg}\n'
)
else:
log.cancel(
'Skipping `MsgStream` drain since final outcome is set\n\n'
f'{ctx.outcome}\n'
)
__tracebackhide__: bool = hide_tb
return (
result_msg,
pre_result_drained,
)
def validate_payload_msg(
pld_msg: Started|Yield|Return,
pld_value: PayloadT,
ipc: Context|MsgStream,
raise_mte: bool = True,
strict_pld_parity: bool = False,
hide_tb: bool = True,
) -> MsgTypeError|None:
'''
Validate a `PayloadMsg.pld` value with the current
IPC ctx's `PldRx` and raise an appropriate `MsgTypeError`
on failure.
'''
__tracebackhide__: bool = hide_tb
codec: MsgCodec = current_codec()
msg_bytes: bytes = codec.encode(pld_msg)
roundtripped: Started|None = None
try:
roundtripped: Started = codec.decode(msg_bytes)
except TypeError as typerr:
__tracebackhide__: bool = False
raise typerr
try:
ctx: Context = getattr(ipc, 'ctx', ipc)
pld: PayloadT = ctx.pld_rx.decode_pld(
msg=roundtripped,
ipc=ipc,
expect_msg=Started,
hide_tb=hide_tb,
is_started_send_side=True,
)
if (
strict_pld_parity
and
pld != pld_value
):
# TODO: make that one a mod func too..
diff = pretty_struct.Struct.__sub__(
roundtripped,
pld_msg,
)
complaint: str = (
'Started value does not match after roundtrip?\n\n'
f'{diff}'
)
raise ValidationError(complaint)
# usually due to `.decode()` input type
except TypeError as typerr:
__tracebackhide__: bool = False
raise typerr
# raise any msg type error NO MATTER WHAT!
except ValidationError as verr:
try:
mte: MsgTypeError = _mk_recv_mte(
msg=roundtripped,
codec=codec,
src_validation_error=verr,
verb_header='Trying to send ',
is_invalid_payload=True,
)
except BaseException as _be:
if not roundtripped:
raise verr
be = _be
__tracebackhide__: bool = False
raise be
if not raise_mte:
return mte
raise mte from verr

View File

@@ -1,342 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Prettified version of `msgspec.Struct` for easier console grokin.
'''
from __future__ import annotations
from collections import UserList
from typing import (
Any,
Iterator,
)
from msgspec import (
msgpack,
Struct as _Struct,
structs,
)
# from pprint import (
# saferepr,
# )
from tractor.log import get_logger
log = get_logger()
# TODO: auto-gen type sig for input func both for
# type-msgs and logging of RPC tasks?
# taken and modified from:
# https://stackoverflow.com/a/57110117
# import inspect
# from typing import List
# def my_function(input_1: str, input_2: int) -> list[int]:
# pass
# def types_of(func):
# specs = inspect.getfullargspec(func)
# return_type = specs.annotations['return']
# input_types = [t.__name__ for s, t in specs.annotations.items() if s != 'return']
# return f'{func.__name__}({": ".join(input_types)}) -> {return_type}'
# types_of(my_function)
class DiffDump(UserList):
'''
Very simple list delegator that repr() dumps (presumed) tuple
elements of the form `tuple[str, Any, Any]` in a nice
multi-line readable form for analyzing `Struct` diffs.
'''
def __repr__(self) -> str:
if not len(self):
return super().__repr__()
# format by displaying item pair's ``repr()`` on multiple,
# indented lines such that they are more easily visually
# comparable when printed to console.
repstr: str = '[\n'
for k, left, right in self:
repstr += (
f'({k},\n'
f' |_{repr(left)},\n'
f' |_{repr(right)},\n'
')\n'
)
repstr += ']\n'
return repstr
def iter_fields(struct: Struct) -> Iterator[
tuple[
structs.FieldInfo,
str,
Any,
]
]:
'''
Iterate over all non-@property fields of this struct.
'''
fi: structs.FieldInfo
for fi in structs.fields(struct):
key: str = fi.name
val: Any = getattr(struct, key)
yield (
fi,
key,
val,
)
def pformat(
struct: Struct,
field_indent: int = 2,
indent: int = 0,
) -> str:
'''
Recursion-safe `pprint.pformat()` style formatting of
a `msgspec.Struct` for sane reading by a human using a REPL.
'''
# global whitespace indent
ws: str = ' '*indent
# field whitespace indent
field_ws: str = ' '*(field_indent + indent)
# qtn: str = ws + struct.__class__.__qualname__
qtn: str = struct.__class__.__qualname__
obj_str: str = '' # accumulator
fi: structs.FieldInfo
k: str
v: Any
for fi, k, v in iter_fields(struct):
# TODO: how can we prefer `Literal['option1', 'option2',
# ..]` over .__name__ == `Literal` but still get only the
# latter for simple types like `str | int | None` etc..?
ft: type = fi.type
typ_name: str = getattr(ft, '__name__', str(ft))
# recurse to get sub-struct's `.pformat()` output Bo
if isinstance(v, Struct):
val_str: str = v.pformat(
indent=field_indent + indent,
field_indent=indent + field_indent,
)
else:
val_str: str = repr(v)
# XXX LOL, below just seems to be f#$%in causing
# recursion errs..
#
# the `pprint` recursion-safe format:
# https://docs.python.org/3.11/library/pprint.html#pprint.saferepr
# try:
# val_str: str = saferepr(v)
# except Exception:
# log.exception(
# 'Failed to `saferepr({type(struct)})` !?\n'
# )
# raise
# return _Struct.__repr__(struct)
# TODO: LOLOL use `textwrap.indent()` instead dawwwwwg!
obj_str += (field_ws + f'{k}: {typ_name} = {val_str},\n')
return (
f'{qtn}(\n'
f'{obj_str}'
f'{ws})'
)
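# XXX EXAMPLE/SKETCH (not part of the original module): render
# a nested (hypothetical) struct pair using the `Struct` subtype
# defined below; the trailing comment shows the (rough) expected
# output.
def _example_pformat() -> None:
    class Inner(Struct):
        x: int

    class Outer(Struct):
        name: str
        inner: Inner

    print(pformat(Outer(name='demo', inner=Inner(x=1))))
    # Outer(
    #   name: str = 'demo',
    #   inner: Inner = Inner(
    #     x: int = 1,
    #   ),
    # )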
class Struct(
_Struct,
# https://jcristharif.com/msgspec/structs.html#tagged-unions
# tag='pikerstruct',
# tag=True,
):
'''
A "human friendlier" (aka repl buddy) struct subtype.
'''
def to_dict(
self,
include_non_members: bool = True,
) -> dict:
'''
Like it sounds.. direct delegation to:
https://jcristharif.com/msgspec/api.html#msgspec.structs.asdict
BUT, when `include_non_members=False` we pop all non-member
(aka not defined as struct fields) fields.
'''
asdict: dict = structs.asdict(self)
if include_non_members:
return asdict
# only return a dict of the struct members
# which were provided as input, NOT anything
# added as type-defined `@property` methods!
sin_props: dict = {}
fi: structs.FieldInfo
for fi, k, v in iter_fields(self):
sin_props[k] = asdict[k]
return sin_props
pformat = pformat
def __repr__(self) -> str:
try:
return pformat(self)
except Exception:
log.exception(
f'Failed to `pformat({type(self)})` !?\n'
)
return _Struct.__repr__(self)
# __repr__ = pformat
# __str__ = __repr__ = pformat
# TODO: use a pprint.PrettyPrinter instance around ONLY rendering
# inside a known tty?
# def __repr__(self) -> str:
# ...
def copy(
self,
update: dict | None = None,
) -> Struct:
'''
Validate-typecast all self-defined fields and return a copy
of us with all such fields.
NOTE: This is kinda like the default behaviour in
`pydantic.BaseModel` except a copy of the object is
returned making it compat with `frozen=True`.
'''
if update:
for k, v in update.items():
setattr(self, k, v)
# NOTE: roundtrip serialize to validate
# - encode to msgpack binary format,
# - decode that back to a struct.
return msgpack.Decoder(type=type(self)).decode(
msgpack.Encoder().encode(self)
)
def typecast(
self,
# TODO: allow only casting a named subset?
# fields: set[str] | None = None,
) -> None:
'''
Cast all fields using their declared type annotations
(kinda like what `pydantic` does by default).
NOTE: this of course won't work on frozen types, use
``.copy()`` above in such cases.
'''
# https://jcristharif.com/msgspec/api.html#msgspec.structs.fields
fi: structs.FieldInfo
for fi in structs.fields(self):
setattr(
self,
fi.name,
fi.type(getattr(self, fi.name)),
)
# TODO: make a mod func instead and just point to it here for
# method impl?
def __sub__(
self,
other: Struct,
) -> DiffDump[tuple[str, Any, Any]]:
'''
Compare fields/items key-wise and return a `DiffDump`
for easy visual REPL comparison B)
'''
diffs: DiffDump[tuple[str, Any, Any]] = DiffDump()
for fi in structs.fields(self):
attr_name: str = fi.name
ours: Any = getattr(self, attr_name)
theirs: Any = getattr(other, attr_name)
if ours != theirs:
diffs.append((
attr_name,
ours,
theirs,
))
return diffs
@classmethod
def fields_diff(
cls,
other: dict|Struct,
) -> DiffDump[tuple[str, Any, Any]]:
'''
Very similar to `Struct.__sub__()` except accepts an
input `other: dict` (presumably that would normally be called
like `Struct(**other)`) which returns a `DiffDump` of the
fields of the struct and the `dict`'s fields.
'''
nullish = object()
consumed: dict = other.copy()
diffs: DiffDump[tuple[str, Any, Any]] = DiffDump()
for fi in structs.fields(cls):
field_name: str = fi.name
# ours: Any = getattr(self, field_name)
theirs: Any = consumed.pop(field_name, nullish)
if theirs is nullish:
diffs.append((
field_name,
f'{fi.type!r}',
'NOT-DEFINED in `other: dict`',
))
# when there are lingering fields in `other` that this struct
# DOES NOT define we also append those.
if consumed:
for k, v in consumed.items():
diffs.append((
k,
f'NOT-DEFINED for `{cls.__name__}`',
f'`other: dict` has value = {v!r}',
))
return diffs
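# XXX EXAMPLE/SKETCH (not part of the original module): field-wise
# diff two instances of a (hypothetical) struct via `__sub__()`
# and a partial `dict` via `fields_diff()`.
def _example_struct_diff() -> None:
    class Point(Struct):
        x: int
        y: int

    # instance-vs-instance: only differing fields are reported.
    diffs: DiffDump = Point(x=1, y=2) - Point(x=1, y=3)
    assert list(diffs) == [('y', 2, 3)]

    # class-vs-dict: missing and unknown keys are both reported.
    dict_diffs: DiffDump = Point.fields_diff({'x': 1, 'z': 9})
    assert len(dict_diffs) == 2  # missing `y`, unknown `z`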

Some files were not shown because too many files have changed in this diff.