forked from goodboy/tractor

472 Commits

Author SHA1 Message Date
goodboy e5ee2e3de8
Merge pull request #358 from goodboy/switch_to_pdbp
Switch to `pdbp` 🏄🏼
2023-05-15 09:58:58 -04:00
Tyler Goodlet 41aa91c8eb Add news file 2023-05-15 09:35:59 -04:00
Tyler Goodlet 6758e4487c Drop lingering `pdbpp` comment-refs in tests 2023-05-15 09:14:42 -04:00
Tyler Goodlet 1c3893a383 Drop commented `pdbpp` import logic 2023-05-15 09:01:55 -04:00
Tyler Goodlet 73befac9bc Switch to `pdbp` in test reqs 2023-05-15 09:01:27 -04:00
Tyler Goodlet 79622bbeea Restore `breakpoint()` hook after runtime exits
Previously we were leaking our (pdb++) override into the Python runtime
which would always result in a runtime error whenever `breakpoint()` is
called outside our runtime, i.e. after exit of the root actor. This
explicitly restores any previous hook override (detected during startup)
or deletes the hook and restores the environment if none existed prior.

Also adds a new WIP debugging example script to ensure breakpointing
works as normal after runtime close; this will be added to the test
suite.
2023-05-15 00:47:29 -04:00
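
A minimal sketch of the save/restore pattern described above (function
names are illustrative, not the actual `._debug` internals):

    import sys

    # detect any pre-existing hook override at runtime startup
    _orig_hook = sys.breakpointhook

    def install_runtime_hook(runtime_hook) -> None:
        sys.breakpointhook = runtime_hook

    def restore_breakpoint_hook() -> None:
        # restore the previously detected hook (which is the stdlib
        # default if no override existed prior) so `breakpoint()` works
        # normally after the root actor exits
        sys.breakpointhook = _orig_hook
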
Tyler Goodlet 95535b2226 Some more 3.10+ optional type sigs 2023-05-15 00:47:29 -04:00
Tyler Goodlet 87c6e09d6b Switch readme links to point @ `pdbp` B) 2023-05-14 22:52:24 -04:00
Tyler Goodlet 9ccd3a74b6 More detailed preface description 2023-05-14 22:38:47 -04:00
Tyler Goodlet ae4ff5dc8d pdbp: adding typing to config settings vars 2023-05-14 22:38:46 -04:00
Tyler Goodlet 705538398f `pdbp`: turn off line truncating by default, fixes terminal resizing stuff 2023-05-14 22:38:16 -04:00
Tyler Goodlet 86aef5238d Hide actor nursery exit frame 2023-05-14 21:24:26 -04:00
Tyler Goodlet cc82447db6 First try: switch debug machinery over to `pdbp` B) 2023-05-14 21:24:26 -04:00
Tyler Goodlet 23cffbd940 Use multiline import for debug mod 2023-05-14 21:24:26 -04:00
Tyler Goodlet 3d202272c4 Change over debugger tests to use `PROMPT` var.. 2023-05-14 21:24:26 -04:00
Tyler Goodlet 63cdb0891f Switch to `pdbp` since no one is maintaining `pdbpp` 2023-05-14 21:24:26 -04:00
goodboy 0f7db27b68
Merge pull request #356 from goodboy/drop_proc_actxmngr
`trio.Process.aclose()`?
2023-05-14 20:59:53 -04:00
Tyler Goodlet c53d62d2f7 Add news file 2023-05-14 20:31:26 -04:00
Tyler Goodlet f667d16d66 Copy the now deprecated `trio.Process.aclose()`
Move it into our `_spawn.do_hard_kill()` since we do indeed rely on
the particular process killing sequence on "soft kill" failure cases.
2023-05-14 19:31:50 -04:00
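
A rough sketch of the copied kill sequence (the timeout and names are
illustrative, not the exact `_spawn.do_hard_kill()` signature):

    import trio

    async def do_hard_kill(proc: trio.Process) -> None:
        # mirror the deprecated `trio.Process.aclose()` behaviour:
        # try a graceful terminate first, then escalate to SIGKILL
        with trio.move_on_after(3):
            proc.terminate()
            await proc.wait()
            return
        proc.kill()
        await proc.wait()
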
Tyler Goodlet 24a062341e Just call `trio.Process.aclose()` directly for now? 2023-04-02 14:34:41 -04:00
goodboy e714bec8db
Merge pull request #355 from kehrazy/patch-1
fixed the `Zombie` example having wrong indentation
2023-04-01 12:11:47 -04:00
Igor 009cd6552e
fixed the `Zombie` example having wrong indentation 2023-03-31 17:50:46 +03:00
goodboy 649c5e7504
Merge pull request #343 from goodboy/breceiver_internals
Avoid inf recursion in `BroadcastReceiver.receive()`
2023-01-30 14:01:13 -05:00
Tyler Goodlet 203f95615c Add nooz 2023-01-30 12:42:26 -05:00
Tyler Goodlet efb8bec828 Add a basic no-raise-on lag test 2023-01-30 12:26:07 -05:00
Tyler Goodlet 8637778739 Expose `raise_on_lag: bool` flag through factory 2023-01-30 12:18:23 -05:00
Tyler Goodlet 47166e45f0 Be explicit with passthrough kwargs (there's so few) 2023-01-29 17:31:21 -05:00
Tyler Goodlet 4ce2dcd12b Switch back to raising `Lagged` by default
Makes the broadcast test suite not hang xD, and is our expected default
behaviour. Also removes a ton of commented legacy cruft from before the
refactor to remove the `.receive()` recursion and fixes some typing.

Oh right, and in the case where there's only one subscriber left we warn
log about it since in theory we could actually entirely unwind the
bcaster back to the original underlying, though not sure if that's sane
or works for some use cases (like wanting to have some other subscriber
get added dynamically later).
2023-01-29 15:03:34 -05:00
Tyler Goodlet 80f983818f Ignore monkey patched `.send()` type annot 2023-01-29 15:03:34 -05:00
Tyler Goodlet 6ba29f8d56 Recurse and get the last value when in warn mode 2023-01-29 15:03:34 -05:00
Tyler Goodlet 2707a0e971 Add `._raise_on_lag` flag to disable `Lag` raising 2023-01-29 15:03:34 -05:00
Tyler Goodlet c8efcdd0d3 Drop `ReceiveMsgStream` from test suite 2023-01-29 15:03:34 -05:00
Tyler Goodlet 9f9907271b Merge `ReceiveMsgStream` and `MsgStream`
Since one-way streaming can be accomplished by just *not* sending on one
side (and/or thus wrapping such usage in a more restrictive API), we
just drop the recv-only parent type. The only method different was
`MsgStream.send()`, now merged in. Further in usage of `.subscribe()`
we monkey patch the underlying stream's `.send()` onto the delivered
broadcast receiver so that subscriber tasks can two-way stream as though
using the stream directly.

This allows us to more definitively drop `tractor.open_stream_from()` in
the longer run if we so choose as well; note currently this will
potentially create an issue if a caller tries to `.send()` on such a one
way stream.
2023-01-29 15:03:34 -05:00
Tyler Goodlet c2367c1c5e Better `trio`-ize `BroadcastReceiver` internals
Driven by a bug found in `piker` where we'd get an inf recursion error
due to `BroadcastReceiver.receive()` being called when consumer tasks
are awoken but no value is ready to `.nowait_receive()`.

This new rework takes an approach closer to the interface and internals
of `trio.MemoryReceiveChannel` particularly in terms of,

- implementing a `BroadcastReceiver.receive_nowait()` and using it
  within the async `.receive()`.
- failing over to an internal `._receive_from_underlying()` when the
  `_nowait()` call raises `trio.WouldBlock`.
- adding `BroadcastState.statistics()` for debugging and testing
  dropping recursion from `.receive()`.
2023-01-29 15:03:34 -05:00
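
A rough sketch of the receive flow described above (method bodies are
elided; this is not the real implementation):

    import trio

    class BroadcastReceiver:

        def receive_nowait(self):
            # return a value already queued for this subscriber or
            # raise `trio.WouldBlock` if nothing is ready yet
            ...

        async def _receive_from_underlying(self):
            # a single "winner" task pulls from the wrapped channel and
            # broadcasts the value to all other waiting subscribers
            ...

        async def receive(self):
            # no recursion: try the sync path first, then fail over
            try:
                return self.receive_nowait()
            except trio.WouldBlock:
                return await self._receive_from_underlying()
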
goodboy a777217674
Merge pull request #346 from goodboy/ipc_failure_while_streaming
Ipc failure while streaming
2023-01-29 15:02:54 -05:00
Tyler Goodlet 13c9eadc8f Move result log msg up and drop else block 2023-01-29 14:55:02 -05:00
Tyler Goodlet af6c325072 Bump up legacy streaming timeout a smidgen 2023-01-29 14:55:02 -05:00
Tyler Goodlet 195d2f0ed4 Add nooz 2023-01-29 14:55:02 -05:00
Tyler Goodlet aa4871b13d Call `MsgStream.aclose()` in `Context.open_stream.__aexit__()`
We weren't doing this originally I *think* just because of the path
dependent nature of the way the code was developed (originally being
mega pedantic about one-way vs. bidirectional streams) but it doesn't
seem like there's any issue just calling the stream's `.aclose()`; it also
has the benefit of just being less code and fewer logic checks B)
2023-01-29 14:55:02 -05:00
Tyler Goodlet 556f4626db Tweak warning msg for still-alive-after-cancelled actor 2023-01-29 14:55:02 -05:00
Tyler Goodlet 3967c0ed9e Add a simplified zombie lord specific process reaping test 2023-01-29 14:55:02 -05:00
Tyler Goodlet e34823aab4 Add parent vs. child cancels first cases 2023-01-29 14:55:02 -05:00
Tyler Goodlet 6c35ba2cb6 Add IPC breakage on both parent and child side
With the new fancy `_pytest.pathlib.import_path()` we can do real
parametrization of the example-script-module code and thus configure
whether the child, parent, or both silently break the IPC connection.

Parametrize the test for all the above mentioned cases as well as the
case where the IPC never breaks but we still simulate the user hammering
ctl-c / SIGINT to terminate the actor tree. Adjust expected errors based
on each case and heavily document each of these.
2023-01-29 14:55:02 -05:00
Tyler Goodlet 3a0817ff55 Skip `advanced_faults/` subset in docs examples tests 2023-01-29 14:55:02 -05:00
Tyler Goodlet 7fddb4416b Handle `mp` spawn method cases in test suite 2023-01-29 14:55:02 -05:00
Tyler Goodlet 1d92f2552a Adjust other examples tests to expect `pathlib` objects 2023-01-29 14:55:02 -05:00
Tyler Goodlet 4f8586a928 Wrap ex in new test, change dir helpers to use `pathlib.Path` 2023-01-29 14:55:02 -05:00
Tyler Goodlet fb9ff45745 Move example to a new `advanced_faults` egs subset dir 2023-01-29 14:55:02 -05:00
Tyler Goodlet 36a83cb306 Refine example to drop IPC mid-stream
Use a task nursery in the subactor to spawn tasks which cancel the IPC
channel mid stream to simulate the most concurrent case we're likely to
see. Make `main()` accept a `debug_mode: bool` for parametrization. Fill
out detailed comments/docs on this example.
2023-01-29 14:55:02 -05:00
Tyler Goodlet 7394a187e0 Name one-way streaming (con generators) what it is 2023-01-29 14:55:02 -05:00
Tyler Goodlet df01294bb2 Show more functiony syntax in ctx-cancelled log msgs 2023-01-29 14:55:02 -05:00
Tyler Goodlet ddf3d0d1b3 Show tracebacks for un-shipped/propagated errors 2023-01-29 14:55:02 -05:00
Tyler Goodlet 158569adae Add WIP example of silent IPC breaks while streaming 2023-01-29 14:55:02 -05:00
Tyler Goodlet 97d5f7233b Fix uid2nursery lookup table type annot 2023-01-29 14:55:02 -05:00
Tyler Goodlet d27c081a15 Ensure arbiter sockaddr type before usage 2023-01-29 14:55:02 -05:00
Tyler Goodlet a4874a3227 Always set the `parent_exit: trio.Event` on exit 2023-01-29 14:55:02 -05:00
Tyler Goodlet de04bbb2bb Don't raise on a broken IPC-context when sending stop msg 2023-01-29 14:55:02 -05:00
Tyler Goodlet 4f977189c0 Handle broken mem chan on `Actor._push_result()`
When backpressure is used and a feeder mem chan breaks during msg
delivery (usually because the IPC allocating task already terminated)
instead of raising we simply warn as we do for the non-backpressure
case.

Also, add a proper `Actor.is_arbiter` test inside `._invoke()` to avoid
doing an arbiter-registry lookup if the current actor **is** the
registrar.
2023-01-29 14:55:02 -05:00
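
A sketch of the tolerance described above (hypothetical signature; the
real method lives on `Actor`):

    import trio

    async def _push_result(feeder_chan, msg, log) -> None:
        try:
            await feeder_chan.send(msg)
        except (trio.BrokenResourceError, trio.ClosedResourceError):
            # the IPC-allocating task already terminated; don't crash
            # the msg loop, just warn like the non-backpressure case
            log.warning(f"Feeder mem chan closed before delivering {msg!r}")
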
goodboy 9fd62cf71f
Merge pull request #348 from goodboy/deprecate_arbiter_addr
Begin deprecation of `arbiter_addr` -> `registry_addr`
2023-01-26 16:05:41 -05:00
Tyler Goodlet 606efa5bb7 Adjust daemon command to use new `registry_addr` 2023-01-26 16:00:08 -05:00
Tyler Goodlet 121a8cc891 Drop `Optional` usage from root mod 2023-01-26 16:00:08 -05:00
Tyler Goodlet c54b8ca4ba Begin deprecation of `arbiter_addr` -> `registry_addr` 2023-01-26 16:00:08 -05:00
goodboy de93c8257c
Merge pull request #349 from goodboy/prompt_on_ctrlc
Re-draw `pdbpp` prompt on `SIGINT`
2023-01-26 15:56:37 -05:00
Tyler Goodlet 5b8a87d0f6 Slightly better `xonsh` check hack, fix typing 2023-01-26 15:48:15 -05:00
Tyler Goodlet 9e5c8ce6f6 Add nooz file 2023-01-26 15:39:03 -05:00
Tyler Goodlet 965cd406a2 Use std `pdbpp` release 2023-01-26 15:27:55 -05:00
Tyler Goodlet 2e278ceb74 Add a super hacky check for `xonsh`, smh.. 2023-01-26 15:26:43 -05:00
Tyler Goodlet 6d124db7c9 Never run ctlc-with-intermediary-actor cases locally either 2023-01-26 12:44:13 -05:00
Tyler Goodlet dba8118553 Always attempt prompt redraw on ctl-c in REPL
The stdlib has all sorts of muckery with ignoring SIGINT in the
`Pdb._cmdloop()` but here we just override all that since we don't trust
their decisions about cancellation handling whatsoever. Adds
a `Lock.repl: MultiActorPdb` attr which is set by any task which
acquires root TTY lock indicating (via actor global state) that the
current actor is using the debugger REPL and can be expected to re-draw
the prompt on SIGINT. Further we mask out log messages from any actor
who also has the `shield_sigint_handler()` enabled to avoid logging
noise when debugging.
2023-01-26 12:44:13 -05:00
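
A sketch of the prompt-redraw idea (details of the real
`shield_sigint_handler()` are elided; `repl` stands in for the
`Lock.repl` instance mentioned above):

    def redraw_prompt_on_sigint(repl) -> None:
        # if this actor's task holds the REPL, don't tear anything down
        # on ctrl-c; just re-render the prompt so the user can see the
        # debugger is still active
        if repl is not None:
            print(repl.prompt, end='', flush=True)
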
Tyler Goodlet fca2e7c10e Simplify closed abruptly log msg 2023-01-26 12:44:13 -05:00
Tyler Goodlet 5ed62c5c54 Add note about intermediary-actor in debug issue 2023-01-26 12:44:13 -05:00
goodboy 588b7ca7bf
Merge pull request #344 from goodboy/harden_cluster_tests
Harden cluster tests
2022-12-12 15:02:23 -05:00
Tyler Goodlet d8214735b9 Add bugfix nooz 2022-12-12 14:53:59 -05:00
Tyler Goodlet 48f6d514ef Handle earlier name error crash in debug test 2022-12-12 14:05:32 -05:00
Tyler Goodlet 6c8cacc9d1 Adjust all default is `None` annots (per new `mypy`) 2022-12-12 13:18:22 -05:00
Tyler Goodlet 38326e8c15 Avoid error on context double pops 2022-12-11 23:46:33 -05:00
Tyler Goodlet b5192cca8e Always greedily `list`-cast `mngrs` input sequence 2022-12-11 23:20:58 -05:00
Tyler Goodlet c606be8c64 Passthrough runtime kwargs from `open_actor_cluster()` 2022-12-11 19:56:08 -05:00
Tyler Goodlet d8e48e29ba Add `mngrs=(<gen_comprehension>)` test 2022-12-11 19:56:01 -05:00
goodboy a0f6668ce8
Merge pull request #333 from goodboy/exceptiongroups
`ExceptionGroup`s and `trio>=0.22`
2022-10-14 20:11:26 -04:00
Tyler Goodlet 274c66cf9d Add nooz 2022-10-14 19:42:23 -04:00
Tyler Goodlet f2641c8964 Avoid "task never called `.started()`" runtime errors when cancelling 2022-10-14 19:42:23 -04:00
Tyler Goodlet c47575997a Expand nested case to include error prop and breakpointing 2022-10-14 19:42:23 -04:00
Tyler Goodlet f39414ce12 Drop error-repacking for `.run_in_actor()`s block
If we pack the nursery parent task's error into the `errors` table
directly in the handler, we don't need to specially handle packing that
same error into any exception group raised while handling sub-actor
cancellation; drops some ugly indentation ;)
2022-10-14 19:42:23 -04:00
Tyler Goodlet 0a1bf8e57d Tolerate eg in runtime test teardown 2022-10-14 19:42:23 -04:00
Tyler Goodlet e298b70edf Drop added `.pdp()` level msgs used during dev 2022-10-14 19:42:23 -04:00
Tyler Goodlet c0dd5d7ffc Adjust multi-daemon test to be more deterministic 2022-10-14 19:42:23 -04:00
Tyler Goodlet 347591c348 Expect egs in tests which retrieve portal results 2022-10-14 19:42:23 -04:00
Tyler Goodlet 38f9d35dee Fix errors table type annot 2022-10-14 19:42:23 -04:00
Tyler Goodlet 88448f7281 Fix handler type annot 2022-10-14 19:42:23 -04:00
Tyler Goodlet 0956d5f461 Restore the `trio` SIGINT handler, cancel root lock tasks on no-peers
Pretty sure this is the final touch to alleviate all our debug lock
headaches! Instead of trying to revert to the "last" handler (as `pdb`
does internally in the stdlib) we always just revert to the handler
`trio` registers during startup. Further this seems to allow cancelling
the root-side locking task if it's detected as stale IFF we only do this
when the root actor is in a "no more IPC peers" state.

Deatz:
- (always) set `._debug.Lock._trio_handler` as the `trio` version, not
  some last used handler to make sure we're getting the ctrl-c handling
  we want when not in debug mode.
- assign the trio handler in `open_root_actor()`
  `._runtime._async_main()` to be sure it's applied in subactors as well
  as the root.
- only do debug lock blocking and root-side-locking-task cancels when
  a "no peers" condition is detected in the root actor: i.e. no IPC
  channels are detected by the root meaning it's impossible any actor
  has a sane lock-state ongoing for debug mode.
2022-10-14 18:18:01 -04:00
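
A sketch of the handler capture/revert from the first bullet above (the
`handler_store` mapping is a stand-in for `._debug.Lock`):

    import signal

    def capture_trio_sigint_handler(handler_store: dict) -> None:
        # record the handler `trio` registered at startup so debug-mode
        # teardown can always revert to it, not to whatever was set last
        handler_store['trio_handler'] = signal.getsignal(signal.SIGINT)

    def restore_trio_sigint_handler(handler_store: dict) -> None:
        signal.signal(signal.SIGINT, handler_store['trio_handler'])
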
Tyler Goodlet c646c79a82 Adjust root-errors debug tests for blocking and egs 2022-10-14 18:18:01 -04:00
Tyler Goodlet 33f2234baf Hide some stack layers the user doesn't really need to see 2022-10-14 18:18:01 -04:00
Tyler Goodlet 7521bded3d Pack error from the parent task into the actor nursery 2022-10-14 18:16:51 -04:00
Tyler Goodlet 0f523b65fb Change cancel test over the exception group 2022-10-14 18:16:51 -04:00
Tyler Goodlet 50fe098e06 First pass, swap `MultiError` for `BaseExceptionGroup` 2022-10-14 18:16:51 -04:00
Tyler Goodlet d87d6af7e1 Add `exceptiongroup` (3.11 backport lib) as dep 2022-10-14 18:16:51 -04:00
Tyler Goodlet df69aedcd5 Pin to latest `trio` version 2022-10-14 18:16:51 -04:00
Tyler Goodlet b15e4ed9ce Adjust "no arbiter" test for new runtime defaults
Turns out this test was being silently ignored due to incorrect usage of
sync opening of our `.open_nursery()` block (with a `with` not `async
with`) and thus was a noop XD

Instead this fixes the test to call a `tractor` discovery built-in
without starting the runtime (which is now done implicitly when a user
opens a nursery) which should result in the prior expected outcome,
a `RuntimeError`.
2022-10-12 12:46:20 -04:00
Tyler Goodlet 98056f6ed7 Move logging context map into `log.py` module 2022-10-12 12:46:20 -04:00
goodboy 247d3448ae
Merge pull request #337 from goodboy/debug_lock_blocking
Debug lock blocking
2022-10-12 12:41:14 -04:00
Tyler Goodlet fc17f6790e Bump `towncrier` alpha version 2022-10-12 12:36:09 -04:00
Tyler Goodlet b81b6be98a Drop extra log msgs, some old commented code 2022-10-12 12:35:35 -04:00
Tyler Goodlet 72fbda4cef Add nooz file 2022-10-12 12:35:11 -04:00
Tyler Goodlet fb721f36ef Support debug-lock blocking, use on no-more IPC
This is a lingering debugger locking race case we needed to handle:

- child crashes acquires TTY lock in root and attaches to `pdb`
- child IPC goes down such that all channels to the root are broken
  / non-functional.
- root is stuck thinking the child is still in debug even though it
  can't be contacted and the child actor machinery hasn't been
  cancelled by its parent.
- root gets stuck in deadlock with child since it won't send a cancel
  request until the child is finished debugging, but the child can't
  unlock the debugger bc IPC is down.

To avoid this scenario add a debug-lock blocking list via
`._debug.Lock._blocked: set[tuple]` which holds actor uids for any actor
that is detected by the root as having no transport channel connections
with said root (of which at least one should exist if this sub-actor at
some point acquired the debug lock). The root consequently checks this
list for any actor that tries to (re)acquire the lock and blocks with
a `ContextCancelled`. When a debug condition is tested in
`._runtime._invoke` the context's `._enter_debugger_on_cancel` is
checked; it is set to `False` if the actor is on the block list, in
which case the post-mortem entry is skipped.

Further this adds a root-locking-task side cancel scope to
`Lock._root_local_task_cs_in_debug` which can be cancelled by the root
runtime when a stale lock is detected after all IPC channels for the
actor have been torn down. NOTE: right now we're NOT doing this since it
seems to cause test failures, likely because it may cause premature
cancellation, and maybe needs a bit more experimenting?
2022-10-11 20:00:05 -04:00
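
A sketch of the blocking check described above (assuming
`ContextCancelled` is importable from `tractor` as referenced in the
commit; the acquisition machinery itself is elided):

    import tractor

    class Lock:
        # uids of actors the root will refuse to hand the TTY lock to,
        # i.e. those with no remaining transport channel to the root
        _blocked: set[tuple[str, str]] = set()

    async def maybe_acquire_tty_lock(uid: tuple[str, str]) -> None:
        if uid in Lock._blocked:
            raise tractor.ContextCancelled(
                f'Debug lock blocked for {uid}: no IPC channel to root'
            )
        ...  # normal root-side TTY lock acquisition continues here
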
Tyler Goodlet 734d8dd663 Move `trio` scope outside first inter-task-chan receive 2022-10-11 20:00:05 -04:00
Tyler Goodlet 30ea7a06b0 Avoid inf nursery hang by reversing `async with` ordering 2022-10-11 20:00:05 -04:00
Tyler Goodlet 3398153c52 Add timeout around `trio`-callee-task 2022-10-11 20:00:05 -04:00
Tyler Goodlet 1c480e6c92 Add `Context` cancel message and debug toggle flag
In the case of a callee-side context cancelling itself it can be handy
to let the caller-side task know (even if through logging) that the
cancel was due to some known reason. Make `.cancel()` accept such
a message on the callee side and have it included in the
`._runtime._invoke()` raised `ContextCancelled` emission.

Also add a `Context._trigger_debugger_on_cancel: bool` flag which can be
set to `False` to avoid the debugger post-mortem crash mode from
engaging on cross-context tasks which cancel themselves for a known
reason (as is needed for blocked tasks in the debug TTY-lock machinery).
2022-10-11 20:00:05 -04:00
goodboy dfdad4d1fa
Merge pull request #336 from goodboy/callable_key_maybe_open_context
Callable key input to maybe open context
2022-10-10 00:32:27 -04:00
Tyler Goodlet b892bc74f6 Add trivial news snippet 2022-10-09 21:27:23 -04:00
Tyler Goodlet 44b59f3338 Go back to a `global` single-ton nursery per actor
Turns out the lifetime mgmt of separate nurseries per delegate manager
is tricky; a new nursery can't be naively allocated on cache-misses since
it may get closed by some early terminating task instead of by the "last
using" consumer task. In theory if we allocate using the same logic as
that used for the last-task-triggers-exit then this should work?

For now just go back to a single global nursery per `_Cache` which still
avoids use of the internal actor service nursery.
2022-10-09 21:27:23 -04:00
Tyler Goodlet 7a719ac2a7 Use one nursery per unique manager (signature)
Instead of sticking all `trionics.maybe_open_context()` tasks inside the
actor's (root) service nursery, open a unique one per manager function
instance (id).

Further, accept a callable for the `key` such that a user can have
more flexible control on the caching logic and move the
`maybe_open_nursery()` helper out of the portal mod and into this
trionics "managers" module.
2022-10-09 21:27:23 -04:00
goodboy 9e6266dda3
Merge pull request #335 from goodboy/spawn_backend_table
Spawn backend table
2022-10-09 21:26:28 -04:00
Tyler Goodlet b1abec543f Add trivial news snippet 2022-10-09 18:51:31 -04:00
Tyler Goodlet 93b9d2dc2d Drop dynamic backend-spawn-method test generation 2022-10-09 18:29:50 -04:00
Tyler Goodlet 4d808757a6 Fix start method name in logging propagation test 2022-10-09 18:22:55 -04:00
Tyler Goodlet 7e5bb0437e Go to latest `mypy` version in CI 2022-10-09 18:13:45 -04:00
Tyler Goodlet b19f08d9f0 Fill out new backend names in ci script 2022-10-09 18:08:07 -04:00
Tyler Goodlet 2c20b2d64f Fix import to load from `conftest.py` 2022-10-09 18:03:17 -04:00
Tyler Goodlet 023b6fc845 Drop `tractor.testing` sub-package 2022-10-09 17:57:02 -04:00
Tyler Goodlet d24fae8381 Rename mp spawn methods to have a `'mp_'` prefix 2022-10-09 17:54:55 -04:00
Tyler Goodlet 5ab98513b7 Move `@tractor_test` into `conftest.py` 2022-10-09 17:14:20 -04:00
Tyler Goodlet 90f4912580 Organize process spawning into lookup table
Instead of the logic branching create a table `._spawn._methods`
which is used to lookup the desired backend framework (in this case
still only one of `multiprocessing` or `trio`) and make the top level
`.new_proc()` do the lookup and any common logic. Use a `typing.Literal`
to define the lookup table's key set.

Repair and ignore a bunch of type-annot related stuff todo with `mypy`
updates and backend-specific process typing.
2022-10-09 16:51:21 -04:00
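
A sketch of the table shape described above (backend entries and key
names are illustrative):

    from typing import Awaitable, Callable, Literal

    async def trio_proc(*args, **kwargs) -> None:
        ...  # spawn via `trio.lowlevel.open_process()`

    async def mp_proc(*args, **kwargs) -> None:
        ...  # spawn via `multiprocessing`

    SpawnMethodKey = Literal['trio', 'mp_spawn', 'mp_forkserver']

    _methods: dict[SpawnMethodKey, Callable[..., Awaitable[None]]] = {
        'trio': trio_proc,
        'mp_spawn': mp_proc,
        'mp_forkserver': mp_proc,
    }

    async def new_proc(method: SpawnMethodKey, *args, **kwargs) -> None:
        # common logic goes here; backend-specific work is dispatched
        await _methods[method](*args, **kwargs)
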
goodboy 6e24e16068
Merge pull request #334 from goodboy/pin_pre_trio_0.22
Pin pre-0.22 bc exception groups break everything
2022-10-09 16:26:56 -04:00
Tyler Goodlet 15047341bd Ignore forkserver override attrs with `mypy` 2022-10-09 16:14:11 -04:00
Tyler Goodlet dc295ab227 Pin pre-0.22 bc exception groups break everything 2022-10-09 16:11:06 -04:00
goodboy 6a0337b69d
Merge pull request #326 from goodboy/lifetime_stack_tests
Expose lifetime stack as class attr, add base test suite
2022-09-16 18:09:24 -04:00
Tyler Goodlet e609183242 Expose lifetime stack as class attr, add base test suite 2022-09-15 23:50:15 -04:00
goodboy 368e9f3f7c
Merge pull request #322 from goodboy/we_bein_all_matchy
3.10 and friends
2022-09-15 23:49:34 -04:00
Tyler Goodlet 10eeda2d2b Use built-ins for all data-structure-type annotations 2022-09-15 23:41:28 -04:00
Tyler Goodlet a113e22bb9 Add trivial nooz snippet 2022-09-15 23:41:28 -04:00
Tyler Goodlet ad19bf2cf1 Remove `tractor.run()` once and for all
It's been deprecated for a while now and all docs and tests have been
changed.

Closes #183
2022-09-15 23:41:28 -04:00
Tyler Goodlet 9aef03772a Expose `Actor` at pkg level, adjust debug type annots 2022-09-15 23:41:28 -04:00
Tyler Goodlet 7548dba8f2 Change to new doc string style 2022-09-15 23:41:28 -04:00
Tyler Goodlet ba4d4e9af3 Change test import 2022-09-15 23:41:28 -04:00
Tyler Goodlet 208d56af2c Make `async_main()` a module func 2022-09-15 23:41:28 -04:00
Tyler Goodlet a3a5bc267e Make `process_messages()` a mod func 2022-09-15 23:41:28 -04:00
Tyler Goodlet d4084b2032 Rename our core module to `_runtime` 2022-09-15 23:41:28 -04:00
Tyler Goodlet 1e6b4d5dd4 Drop `msgspec` min pin 2022-09-15 23:41:28 -04:00
Tyler Goodlet c613acfe5c Start alpha 6 dev, ensure py3.10+ 2022-09-15 23:41:28 -04:00
goodboy fea9dc7065
Merge pull request #324 from goodboy/debug_event_guard
Add debug complete event `None`-guard for when already reset
2022-09-15 23:20:38 -04:00
goodboy e558c427de
Merge pull request #327 from goodboy/disable_win_ci
Disable win tests in CI
2022-09-15 23:20:26 -04:00
Tyler Goodlet f07c3aa4a1 Add nooz 2022-09-15 19:39:34 -04:00
Tyler Goodlet bafd10a260 Make `maybe_open_context()` re-entrant safe, use per factory locks 2022-09-15 19:02:02 -04:00
Tyler Goodlet 5ad540c417 Add debug complete event `None`-guard for when already reset 2022-09-15 19:02:02 -04:00
Tyler Goodlet 83b44cf469 Flip over PR number in readme 2022-09-15 18:54:51 -04:00
Tyler Goodlet 1f2001020e Mention disabled windows CI in readme 2022-09-15 18:46:34 -04:00
Tyler Goodlet 71f9881a60 Drop windows from CI until we get a collab that actually uses it XD 2022-09-15 18:36:45 -04:00
Tyler Goodlet e24645eec8 Drop `pytest` 3.10 issue comment, add todo for `pyreadline3` 2022-09-15 18:36:37 -04:00
Tyler Goodlet c3cdeeb3ba Drop `pytest` full trace flag, use `pip list` 2022-09-15 18:36:27 -04:00
Tyler Goodlet 9bd534df83 Drop 3.9 from CI jobs 2022-09-15 18:36:15 -04:00
goodboy c1d700f257
Merge pull request #321 from goodboy/alpha5
`alpha5` release!
2022-08-03 14:36:52 -04:00
Tyler Goodlet 14c6e34658 Add summary section 2022-08-03 11:42:53 -04:00
Tyler Goodlet 3393bc23e4 Generate release news 2022-08-03 11:41:23 -04:00
Tyler Goodlet 171f1bc243 Move to using `pyproject.toml` for `towncrier`
Add explicit fragment types based on `pytest`'s config
and don't manually spec the version.
2022-08-03 11:36:23 -04:00
Tyler Goodlet ee02cd2496 Move misplaced fragment for #305 2022-08-03 10:54:22 -04:00
Tyler Goodlet 4c5d435aac Fix towncrier bug entry suffix 2022-08-03 10:21:37 -04:00
Tyler Goodlet a9b4a61620 Flip to non-dev version tag 2022-08-03 10:21:07 -04:00
goodboy 641ed7a32a
Merge pull request #165 from goodboy/signint_saviour
Ignore SIGINT when in a debugger REPL
2022-08-03 09:26:54 -04:00
Tyler Goodlet cc5f60bba0 List deps in CI 2022-08-02 18:19:03 -04:00
Tyler Goodlet 8f1fe2376a Simplify all hooks to a common `Lock.release()` 2022-08-02 18:14:05 -04:00
Tyler Goodlet 65540f3e2a Add nooz 2022-08-02 15:29:33 -04:00
Tyler Goodlet 650313dfef Drop legacy handler blocks factored into `_acquire_debug_lock()` 2022-08-02 12:50:27 -04:00
Tyler Goodlet e4006da6f4 Drop `pdbpp` bug notes, add follow up issue #320 note 2022-08-02 12:48:40 -04:00
Tyler Goodlet 7f6169a050 Drop legacy commented/todo remote debug helper block 2022-08-02 12:43:14 -04:00
Tyler Goodlet 2d387f2610 Add in issue link for nested cases 2022-08-02 12:17:34 -04:00
Tyler Goodlet 8115759984 Mark final nested-actor debugger test 2022-08-02 12:17:34 -04:00
Tyler Goodlet 02c3b9a672 Put `pygments` back to default 2022-08-02 12:17:34 -04:00
Tyler Goodlet fa4388835c Add an expect wrapper, use in hanging CI test 2022-08-02 12:17:34 -04:00
Tyler Goodlet 54de72d8df Loosen timeout on nested child re-locking 2022-08-02 12:17:34 -04:00
Tyler Goodlet c5c7a9027c Line len lint and drop rpc log msg level again 2022-08-02 12:17:34 -04:00
Tyler Goodlet e4771eec16 Go back to skipping since xfail is wack 2022-08-02 12:17:28 -04:00
Tyler Goodlet a9aaee9dbd Use xfails for nested cases, revert prompt expect 2022-08-02 12:17:28 -04:00
Tyler Goodlet acfbae4b95 Drop verbose level, report xfails 2022-08-02 12:17:28 -04:00
Tyler Goodlet aca9a6b99a Try just skipping nested actor tests in CI 2022-08-02 12:17:28 -04:00
Tyler Goodlet 8896ba2bf8 Use `assert_before` more extensively 2022-08-02 12:17:28 -04:00
Tyler Goodlet 87b2ccb86a Try less times for EOF 2022-08-02 12:17:28 -04:00
Tyler Goodlet 937ed99e39 Factor sigint overriding into lock methods 2022-08-02 12:17:28 -04:00
Tyler Goodlet 91f034a136 Move all module vars into a `Lock` type 2022-08-02 12:17:28 -04:00
Tyler Goodlet 08cf03cd9e Handle missing prompt render case? 2022-08-02 12:17:28 -04:00
Tyler Goodlet 5e23b3ca0d Drop pytest full-tracing in CI again 2022-08-02 12:17:28 -04:00
Tyler Goodlet 6f01c78122 Disable `pygments` highlighting on ctlc tests 2022-08-02 12:17:28 -04:00
Tyler Goodlet 457499bc2e Avoid infinite wait for EOF 2022-08-02 12:17:28 -04:00
Tyler Goodlet a4bac135d9 Use `pytest-timeout` plug to try and prevent CI hang 2022-08-02 12:17:28 -04:00
Tyler Goodlet 20c660faa7 Add timeout on spawn error msg check 2022-08-02 12:17:28 -04:00
Tyler Goodlet 1d4d55f5cd Increase verbosity in ci tests for now 2022-08-02 12:17:28 -04:00
Tyler Goodlet c0cd99e374 Timeout on arbiter ping, avoid TCP SYN hangs in CI? 2022-08-02 12:17:28 -04:00
Tyler Goodlet a4538a3d84 Drop ctlc tests on Py3.9...
After many tries I just don't think it's worth it to make the tests work
since the repl UX in `pdbpp` is so unreliable in the latest release and
honestly we're trying to go 3.10+ ASAP.

Further,
- entirely drop the pattern matching inside the `do_ctlc()` for now.
- add a `subactor_error` parametrization that catches a case that
  previously caused a hang (when you use 'next' immediately after the
  first crash/debug lock; the fix was pushed just before this commit).
2022-08-02 12:17:28 -04:00
Tyler Goodlet b01daa5319 Factor lock-state release logic into helper
This factors out the common logic that both removes our custom SIGINT
handler and signals the actor-global event that pdb is complete. Call this
whenever we exit a post mortem call and thus any time some rpc task
gets debugged inside `._actor._invoke()`.

Further, we have to manually print the REPL prompt on 3.9 for some wack
reason, so stick a version guard in the sigint handler for that..
2022-08-02 12:17:28 -04:00
Tyler Goodlet bd362a05f0 Run release hook around `next` repl commands as well 2022-08-02 12:17:28 -04:00
Tyler Goodlet cb0c47c42a Try disabling prompt expect in ctrlc cases 2022-08-02 12:17:28 -04:00
Tyler Goodlet 808d7ae2c6 Add timeout guard around caller side context open 2022-08-02 12:17:28 -04:00
Tyler Goodlet b21f2e16ad Always consider the debugger when exiting contexts
When in an uncertain teardown state and in debug mode a context can be
popped from actor runtime before a child finished debugging (the case
when the parent is tearing down but the child hasn't closed/completed
its tty lock IPC exit phase) and the child sends the "stop" message to
unlock the debugger but it's ignored bc the parent has already dropped
the ctx. Instead we call `._debug.maybe_wait_for_debugger()` before these
context removals to avoid the root getting stuck thinking the lock was
never released.

Further, add special `Actor._cancel_task()` handling code inside
`_invoke()` which continues to execute the method despite the IPC
channel to the caller being broken and thus avoiding potential hangs due
to a target (child) actor task remaining alive.
2022-08-02 12:17:28 -04:00
Tyler Goodlet 4779badd96 Add before assert helper and print console bytes on fail 2022-08-02 12:17:28 -04:00
Tyler Goodlet 6bdcbdb96f Do child decode on `do_ctlc` exit? 2022-08-02 12:17:28 -04:00
Tyler Goodlet adbebd3f06 Add ctl-c to remaining tests, only expect prompt in non-CI 2022-08-02 12:17:28 -04:00
Tyler Goodlet a2e90194bc Add ctl-c case to `subactor_breakpoint` example test 2022-08-02 12:17:28 -04:00
Tyler Goodlet ba7b355d9c Add note about default behaviour of `fancycompleter` 2022-08-02 12:17:28 -04:00
Tyler Goodlet 617d57dc35 Disable ctl-c prompt checks again 2022-08-02 12:17:28 -04:00
Tyler Goodlet dadd5e6148 Add back prompt expect via flag 2022-08-02 12:17:28 -04:00
Tyler Goodlet a72350118c Test: drop expect prompt 2022-08-02 12:17:28 -04:00
Tyler Goodlet ef8dc0204c Just drop all longlisting for now and leave comments 2022-08-02 12:17:28 -04:00
Tyler Goodlet a101971027 Go back to original longlist code 2022-08-02 12:17:28 -04:00
Tyler Goodlet 835836123b Just don't call longlist on 3.10+ for now 2022-08-02 12:17:28 -04:00
Tyler Goodlet 70ad0f6b8e Add longer delays around ctl-c loop, don't expect longlist 2022-08-02 12:17:28 -04:00
Tyler Goodlet 56b30a9a53 Add sleep around ctl-c iteration loop 2022-08-02 12:17:27 -04:00
Tyler Goodlet 925d5c1ceb Pin to specific `pdbpp` master commit 2022-08-02 12:17:27 -04:00
Tyler Goodlet b9eb601265 General typing fixes for `mypy` 2022-08-02 12:17:27 -04:00
Tyler Goodlet 4dcc21234e Only call `.poll()` if a method on the spawn backend 2022-08-02 12:17:27 -04:00
Tyler Goodlet 64909e676e Fix loglevel in subactor test; actually pass the level XD 2022-08-02 12:17:27 -04:00
Tyler Goodlet 19fb77f698 Pin to `trio >= 0.20` 2022-08-02 12:17:27 -04:00
Tyler Goodlet 8b9f342eef Port to new `.lowlevel.open_process()` API 2022-08-02 12:17:27 -04:00
Tyler Goodlet bd7d507153 Guard against `asyncio` cancelled logged to console 2022-08-02 12:17:16 -04:00
Tyler Goodlet 9bc38cbf04 Add slight delay 2nd ctlc round.. 2022-08-02 12:17:06 -04:00
Tyler Goodlet a90ca4b384 Call longlist normally when on py < 3.10 2022-08-02 12:17:06 -04:00
Tyler Goodlet d0dcd55f47 Only report disconnected actors if proc is still alive? 2022-08-02 12:17:06 -04:00
Tyler Goodlet 4e08605b0d Only do `pdbpp` from `git` install on 3.10+ 2022-08-02 12:17:06 -04:00
Tyler Goodlet 519f4c300b I dunno, seems like `breakpoint()` needs this? 2022-08-02 12:17:06 -04:00
Tyler Goodlet 56c19093bb Add basic module-not-found when opening a ctx eg. 2022-08-02 12:17:06 -04:00
Tyler Goodlet ff3f5959e9 Always enable debug level logging if mode enabled 2022-08-02 12:16:58 -04:00
Tyler Goodlet abb00531d3 Add help msg for non `__main__` modules as well 2022-08-02 12:16:58 -04:00
Tyler Goodlet 439d320a25 Add basic ctl-c testing cases to suite 2022-08-02 12:16:58 -04:00
Tyler Goodlet 18c525d2f1 Hack around double long list print issue..
See https://github.com/pdbpp/pdbpp/issues/496
2022-08-02 12:16:58 -04:00
Tyler Goodlet 201c026284 Show full KBI trace for help with CI hangs 2022-08-02 12:16:58 -04:00
Tyler Goodlet 2a61aa099b Move pydantic-click hang example to new dir, skip in test suite 2022-08-02 12:16:58 -04:00
Tyler Goodlet e2453fd3da Add spaces before values in log msg 2022-08-02 12:16:58 -04:00
Tyler Goodlet b29def8b5d Add runtime level msg around channel draining 2022-08-02 12:16:58 -04:00
Tyler Goodlet f07e9dbb2f Always undo SIGINT overrides, cancel detached children
Ensure that even when `pdb` resumption methods are called during a crash
where `trio`'s runtime has already terminated (eg. `Event.set()` will
raise) we always revert our sigint handler to the original. Further
inside the handler if we hit a case where a child is in debug and
(thinks it) has the global pdb lock, if it has no IPC connection to
a parent, simply presume tty sync-coordination is now lost and cancel
the child immediately.
2022-08-02 12:16:49 -04:00
Tyler Goodlet 2f5a6049a4 Readme formatting tweaks 2022-07-27 11:40:02 -04:00
Tyler Goodlet 418e74eee7 Pin to `pdbpp` upstream master, 3.10 problem?
See issues:
- https://github.com/pdbpp/pdbpp/issues/480
- https://github.com/pdbpp/pdbpp/pull/482
2022-07-27 11:40:02 -04:00
Tyler Goodlet c7035be2fc Tolerate double `.remove()`s of stream on portal teardowns 2022-07-27 11:40:02 -04:00
Tyler Goodlet deaca7d6cc Always propagate SIGINT when no locking peer found
A hopefully significant fix here is to always avoid suppressing a SIGINT
when the root actor can not detect an active IPC connection (via
a connected channel) to the supposed debug lock holding actor. In that
case it is most likely that the actor has either terminated or has lost
its connection for debugger control and there is no way the root can
verify the lock is in use; thus we choose to allow KBI cancellation.

Drop the (by comment) `try`-`finally` block in
`_hijack_stdin_for_child()` around the `_acquire_debug_lock()` call
since all that logic should now be handled internal to that locking
manager. Try to catch a weird error around the `.do_longlist()` method
call that seems to sometimes break on py3.10 and latest `pdbpp`.
2022-07-27 11:40:02 -04:00
Tyler Goodlet d47d0e7c37 Always call pdb hook even if tty locking fails 2022-07-27 11:40:02 -04:00
Tyler Goodlet 0062c96a3c Log cancels with appropriate level 2022-07-27 11:40:02 -04:00
Tyler Goodlet 4be13b7387 Just warn on IPC breaks 2022-07-27 11:40:02 -04:00
Tyler Goodlet 7bb5addd4c Only warn on `trio.BrokenResourceError`s from `_invoke()` 2022-07-27 11:40:02 -04:00
Tyler Goodlet 4fd924cfd2 Make example a subpkg for `python -m <mod>` testing 2022-07-27 11:40:02 -04:00
Tyler Goodlet fe0fd1a1c1 Add example that triggers bug #302 2022-07-27 11:40:02 -04:00
Tyler Goodlet dd23e78de1 Add back in async gen loop 2022-07-27 11:40:02 -04:00
Tyler Goodlet 89b44f8163 Pre-declare disconnected flag 2022-07-27 11:40:02 -04:00
Tyler Goodlet 2819b6a5b2 Avoid attr error XD 2022-07-27 11:40:02 -04:00
Tyler Goodlet f2671ed026 Type annot updates 2022-07-27 11:40:02 -04:00
Tyler Goodlet 41924c86a6 Drop unneeded backframe traceback hide annotation 2022-07-27 11:40:02 -04:00
Tyler Goodlet 206c7c0720 Make `Actor._process_messages()` report disconnects
The method now returns a `bool` which flags whether the transport died
to the caller and allows for reporting a disconnect in the
channel-transport handler task. This is something a user will normally
want to know about on the caller side especially after seeing
a traceback from the peer (if in tree) on console.
2022-07-27 11:40:02 -04:00
Tyler Goodlet bf0ac3116c Only cancel/get-result from a ctx if transport is up
There's no point in sending a cancel message to the remote linked task
and especially no reason to block waiting on a result from that task if
the transport layer is detected to be disconnected. We expect that the
transport shouldn't go down at the layer of the message loop
(reconnection logic should be handled in the transport layer itself) so
if we detect the channel is not connected we don't bother requesting
cancels nor waiting on a final result message.

Why?

- if the connection goes down in error the caller side won't have a way
  to know "how long" it should block to wait for a cancel ack or result
  and causes a potential hang that may require an additional ctrl-c from
  the user especially if using the debugger or if the traceback is not
  seen on console.
- obviously there's no point in waiting for messages when there's no
  transport to deliver them XD

Further, add some more detailed cancel logging detailing the task and
actor ids.
2022-07-27 11:40:02 -04:00
Tyler Goodlet bb732cefd0 Drop high log level in ctx example 2022-07-27 11:40:02 -04:00
Tyler Goodlet 74b819a857 Typing fixes, simplify `_set_trace()` 2022-07-27 11:40:02 -04:00
Tyler Goodlet 8892204c84 Add notes around py3.10 stdlib bug from `pdb++`
There's a bug that's triggered in the stdlib without latest `pdb++`
installed; add a note for that.

Further inside `wait_for_parent_stdin_hijack()` don't `.started()` until
the inter-actor stream has been opened to avoid races when debugging this
`._debug.py` module (at the least) since we usually don't want the
spawning (parent) task to resume until we know for sure the tty lock has
been acquired. Also, drop the random checkpoint we had inside
`_breakpoint()`, not sure it was actually adding anything useful since
we're (mostly) carefully shielded throughout this func.
2022-07-27 11:40:02 -04:00
Tyler Goodlet 8f4bbf1cbf Add and use a pdb instance factory 2022-07-27 11:40:02 -04:00
Tyler Goodlet 21dccb2e79 A `.open_context()` example that causes a hang!
Finally! I think this may be the root issue we've been seeing in
production in a client project.

No idea yet why this is happening but the fault-causing sequence seems
to be:
- `.open_context()` in a child actor
- enter the debugger via `tractor.breakpoint()`
- continue from that entry via `c` command in REPL
- raise an error just after inside the context task's body

Looking at logging it appears as though the child thinks it has the tty
but no input is accepted on the REPL and a further `ctrl-c` results in
some teardown but also a further hang where both parent and child become
unresponsive..
2022-07-27 11:40:02 -04:00
Tyler Goodlet aea8f63bae Drop all the `@cm.__exit__()` override attempts..
None of it worked (you still will see `.__exit__()` frames on debugger
entry - you'd think this would have been solved by now but, shrug) so
instead wrap the debugger entry-point in a `try:` and put the SIGINT
handler restoration inside `MultiActorPdb` teardown hooks.

This seems to restore the UX as it was prior but with also giving the
desired SIGINT override handler behaviour.
2022-07-27 11:40:02 -04:00
Tyler Goodlet 7964a9f6f8 Try overriding `_GeneratorContextManager.__exit__()`; didn't work..
Using either of `@pdb.hideframe` or `__tracebackhide__` on stdlib
methods doesn't seem to work either.. This all seems to have something
to do with async generator usage I think ?
2022-07-27 11:40:02 -04:00
Tyler Goodlet 99c4319940 Fix example name typo 2022-07-27 11:40:02 -04:00
Tyler Goodlet e5195264a1 Handle a context cancel? Might be a noop 2022-07-27 11:40:02 -04:00
Tyler Goodlet 42f9d10252 Add a pre-started breakpoint example 2022-07-27 11:40:02 -04:00
Tyler Goodlet 345573e602 Make `mypy` happy 2022-07-27 11:40:02 -04:00
Tyler Goodlet 4e60c17375 Refine the handler for child vs. root cases
This gets very close to avoiding any possible hangs to do with tty
locking and SIGINT handling minus a special case that will be detailed
below.

Summary of implementation changes:

- convert `_mk_pdb()` -> `with _open_pdb() as pdb:` which implicitly
  handles the `bdb.BdbQuit` case such that debugger teardown hooks are
  always called.
- rename the handler to `shield_sigint()` and handle a variety of new
  cases:
  * the root is in debug but hasn't been cancelled -> call
    `Actor.cancel_soon()`
  * the root is in debug but *has* been cancelled (`Actor.cancel_soon()`
    already called) -> raise KBI
  * a child is in debug *and* has a task locking the debugger -> ignore
    SIGINT in child *and* the root actor.
- if the debugger instance is provided to the handler at acquire time,
  on SIGINT handling completion re-print the last pdb++ REPL output so
  that the user realizes they are still actively in debug.
- ignore the unlock case where a race condition of "no task" holding the
  lock causes the `RuntimeError` normally associated with the "wrong
  task" doing so (not sure if this is a `trio` bug?).
- change debug logs to runtime level.

Unhandled case(s):

- a child is maybe in debug mode but does not itself have any task using
  the debugger.
    * ToDo: we need a way to decide what to do with
      "intermediate" child actors who themselves either are not in
      `debug_mode=True` but have children who *are* such that a SIGINT
      won't cause cancellation of that child-as-parent-of-another-child
      **iff** any of their children are in debug mode.
2022-07-27 11:40:02 -04:00
Tyler Goodlet 6b7b58346f (facepalm) Reraise `BdbQuit` and discard ownerless lock releases 2022-07-27 11:40:02 -04:00
Tyler Goodlet 3cac323421 Add WIP while-debugger-active SIGINT ignore handler 2022-07-27 11:40:02 -04:00
goodboy 4902e184e9
Merge pull request #318 from goodboy/aio_error_propagation
Add context test that opens an inter-task-channel that errors
2022-07-15 12:42:19 -04:00
Tyler Goodlet 05790a20c1 Slight lint fixes 2022-07-15 11:18:48 -04:00
Tyler Goodlet 565c603300 Add nooz 2022-07-15 11:17:57 -04:00
Tyler Goodlet f0d78e1a6e Use local task ref, fixes `mypy` 2022-07-15 10:39:49 -04:00
Tyler Goodlet ce01f6b21c Increase timeout for CI/windows 2022-07-14 20:44:10 -04:00
Tyler Goodlet 0906559ed9 Drop manual stack construction, fix attr typo 2022-07-14 20:43:17 -04:00
Tyler Goodlet 38d03858d7 Fix `asyncio`-task-sync and error propagation
This fixes a previously undetected bug where if an
`.open_channel_from()` spawned task errored the error would not be
propagated to the `trio` side and instead would fail silently with
a console log error. What was most odd is that it only seems easy to
trigger when you put a slight task sleep before the error is raised
(:eyeroll:). This patch adds a few things to address this and just in
general improve inter-task lifetime syncing:

- add `LinkedTaskChannel._trio_exited: bool` a flag set from the `trio`
  side when the channel block exits.
- add a `wait_on_aio_task: bool` flag to `translate_aio_errors` which
  toggles whether to wait the `asyncio` task termination event on exit.
- cancel the `asyncio` task if the trio side has ended, when
  `._trio_exited == True`.
- always close the `trio` mem channel when the task exits such that
  the `asyncio` side can error on any next `.send()` call.
2022-07-14 16:35:41 -04:00
Tyler Goodlet 98de2fab31 Add context test that opens an inter-task-channel that errors 2022-07-14 16:13:12 -04:00
goodboy 80121ed211
Merge pull request #317 from goodboy/drop_msgpack
Drop `msgpack`
2022-07-12 13:31:45 -04:00
Tyler Goodlet 41983edc43 Use `str` | `bytes` union for typing msg dump 2022-07-12 11:59:11 -04:00
Tyler Goodlet 5168700fbf Tolerate non-decode-able bytes 2022-07-12 11:55:55 -04:00
Tyler Goodlet 673c4a8c66 Decode bytes prior to log msg 2022-07-12 11:55:55 -04:00
Tyler Goodlet 932b841176 Allow up to 4 `msgspec` decode failures 2022-07-12 11:55:55 -04:00
Tyler Goodlet f594f1bdda Handle a connection reset on `msgspec` transport 2022-07-12 11:55:55 -04:00
Tyler Goodlet 53e3648eca Readme bump 2022-07-12 11:52:42 -04:00
Tyler Goodlet fc36503f4f Add nooz file 2022-07-12 11:43:10 -04:00
Tyler Goodlet 4e7ab54452 Appease `mypy` 2022-07-12 11:22:30 -04:00
goodboy 86d020d309
Merge pull request #316 from goodboy/310_windows
Try windows CI on py 3.10
2022-07-12 10:53:06 -04:00
Tyler Goodlet bb3f35cdd0 Drop `msgspec` specific CI jobs 2022-07-12 10:37:13 -04:00
Tyler Goodlet f94b7cd991 Drop `msgpack` lib and use `msgspec` for transport 2022-07-12 10:37:13 -04:00
Tyler Goodlet f6af5c7bf8 Drop `msgpack` dep, ensure `msgspec` as hard dep 2022-07-12 10:37:09 -04:00
Tyler Goodlet 9740a585d3 Add nooz for win on py3.10 2022-07-12 10:24:44 -04:00
Tyler Goodlet b700dc34a8 Use `pyreadline3` on windows for py3.10 2022-07-12 10:12:03 -04:00
Tyler Goodlet 9bc1c6f385 Try windows CI on py 3.10 2022-07-11 20:15:35 -04:00
goodboy f4973e90e9
Merge pull request #314 from goodboy/ci_sdist_install
Add an sdist install job
2022-07-11 20:13:24 -04:00
Tyler Goodlet 780e3dd13d Include ./docs/README.rst in src dist 2022-07-11 14:25:26 -04:00
Tyler Goodlet e0419b24ec Add an sdist install job
This should hopefully catch issues like,
https://github.com/goodboy/tractor/issues/293
2022-07-11 14:22:22 -04:00
goodboy 71f19f217d
Merge pull request #305 from goodboy/name_query
Add `tractor.query_actor()` an addr looker-upper
2022-04-13 09:19:26 -04:00
Tyler Goodlet 8901272854 Fix typing 2022-04-13 08:20:53 -04:00
Tyler Goodlet 7c151bed48 Add nooz 2022-04-13 08:18:11 -04:00
Tyler Goodlet 80897a8f2b Add `tractor.query_actor()` an addr looker-upper
Sometimes it's handy to just have a non-`Portal` yielding way
to figure out if a "service" actor is up, so add this discovery
helper for that. We'll prolly just leave it undocumented for
now until we figure out a longer-term/better discovery system.
2022-04-13 07:50:42 -04:00
goodboy 62983684d1
Merge pull request #308 from goodboy/sort_subs_results_infected_aio
Sort `.subscribe()` results before comparison in test
2022-04-12 20:06:55 -04:00
Tyler Goodlet 1c63bb6130 Sort fan out results before comparison in test 2022-04-12 19:49:36 -04:00
goodboy bfe99f29b8
Merge pull request #304 from goodboy/aio_explicit_task_cancels
`LinkedTaskChannel.subscribe()`, explicit `asyncio` task cancel logging, `test_trioisms.py`
2022-04-12 17:27:29 -04:00
Tyler Goodlet 9c27858aaf WIP prints to debug frickin windows 2022-04-12 16:48:50 -04:00
Tyler Goodlet 597ae4b690 Add nooz file 2022-04-12 15:59:33 -04:00
Tyler Goodlet fa354ffe2b Handle not all values pulled case 2022-04-12 15:51:06 -04:00
Tyler Goodlet 333fad8819 Facepalm: join nursery first to avoid channel-closed-too-early 2022-04-12 15:06:35 -04:00
Tyler Goodlet 90593611bb Add test for `LinkedTaskChannel.subscribe()` fanout feature 2022-04-12 15:06:35 -04:00
Tyler Goodlet 9c43bb28f1 Add a new "trioisms" test mod for tracking `trio` wishlist behaviour 2022-04-12 13:05:56 -04:00
Tyler Goodlet e45251db56 Simplify to form submitted to njs 2022-04-12 13:05:26 -04:00
Tyler Goodlet faf751acac WIP reproduce deadlock issue during error from piker 2022-04-12 13:04:46 -04:00
Tyler Goodlet 20d281f619 Run `mypy` on 3.10 2022-04-12 12:53:12 -04:00
Tyler Goodlet f3606d5bd8 Type fixes 2022-04-12 11:48:32 -04:00
Tyler Goodlet 032e14e326 Update new license info in setup script 2022-04-12 11:42:44 -04:00
Tyler Goodlet c322a193f2 Make `LinkedTaskChannel` trio-task-broadcastable with `.subscribe()` 2022-04-12 11:42:44 -04:00
Tyler Goodlet 46963c2e63 Don't handle `GeneratorExit` on `asyncio` tasks 2022-04-12 11:42:44 -04:00
Tyler Goodlet 9b77b8c9ee Add more explicit `asyncio` task error logging
When an `asyncio` side task errors or is cancelled we now explicitly
report the traceback and task name if possible as well as the source
reason for the error (some come from the `trio` side).

Further, properly set any `trio` side exception (after unwrapping it
from the `outcome.Error`) on the future that runs the `trio` guest run.
2022-04-12 11:42:44 -04:00
Tyler Goodlet 13c8300226 Add a sub-actor managed service nursery test scenario 2022-04-12 11:42:44 -04:00
goodboy 1109d96263
Merge pull request #303 from goodboy/fence_mp
Fence mp
2022-04-12 10:13:57 -04:00
Tyler Goodlet 65b4bc8888 Add misc nooz file 2022-04-12 08:35:13 -04:00
Tyler Goodlet bef9946f91 Allow re-running jobs from web UI manually? 2022-04-11 17:37:06 -04:00
Tyler Goodlet c30cece37a Fix one missing import/ref 2022-02-17 13:03:37 -05:00
Tyler Goodlet 509082c935 Port to new `msgspec` error type 2022-02-17 11:55:26 -05:00
Tyler Goodlet 75bb1added Avoid importing mp for as long as possible 2022-02-17 11:55:26 -05:00
goodboy 6e5590dad6
Merge pull request #300 from goodboy/msgpack_lists_by_default
Use lists by default like `msgspec`, update to latest `msgspec`  and `msgpack` releases
2022-02-15 09:08:20 -05:00
Tyler Goodlet 76a0492028 Fix type annot 2022-02-15 08:52:04 -05:00
Tyler Goodlet 4eab4a0213 Type fix 2022-02-15 08:51:25 -05:00
Tyler Goodlet 0edc6a26bc Go back to strict map keys 2022-02-15 08:48:43 -05:00
Tyler Goodlet c5acc3b969 Pack tuple keys as . delim strs in registry tests 2022-02-15 08:48:07 -05:00
Tyler Goodlet 17e195aacf They renamed to `msgpack` and the version is 1.0.3 2022-02-14 16:03:54 -05:00
Tyler Goodlet c65756ed80 Add nooz 2022-02-14 16:03:10 -05:00
Tyler Goodlet 927decc88d Pin to latest `msgspec` version 2022-02-14 14:14:05 -05:00
Tyler Goodlet 17bfa120cc Port to msgspec `0.4.0` imports 2022-02-14 14:05:55 -05:00
Tyler Goodlet 77ddc073e8 Use lists by default like `msgspec` 2022-02-09 10:07:33 -05:00
goodboy 26bebc42b7
Merge pull request #295 from goodboy/nspaths
`NamespacePath`: a message compatible "object reference" type
2022-01-30 12:40:05 -05:00
Tyler Goodlet 87de28fd88 Slight doc string update 2022-01-30 12:21:41 -05:00
Tyler Goodlet 56b29c27de Add msg serialization coding todo resources list 2022-01-30 12:19:21 -05:00
Tyler Goodlet adf9a1d0aa Add nooz 2022-01-30 12:17:32 -05:00
Tyler Goodlet 25a27e780d Add todo resources for eventual capability-based module filtering 2022-01-30 11:28:10 -05:00
Tyler Goodlet c265f3f94e Move namespace path type into `msg` mod 2022-01-30 11:27:34 -05:00
Tyler Goodlet 2900ceb003 Not all objects have a `.__name__` 2022-01-30 11:26:34 -05:00
Tyler Goodlet b6ae77b5ac Use `pkgutil.resolve_name()` and a `str` subtype
Python 3.9's new object resolver + a `str` is much simpler than mucking
with tuples (and easier to serialize). Include a `.to_tuple()` formatter
since we still are passing the module namespace and function name
separately inside the runtime's message format but in theory we might be
able to simplify this depending on how we would change the support for
`enable_modules:list[str]` in the spawn API.

Thanks to @Fuyukai for pointing `resolve_name()` which I didn't know
about before!
2022-01-30 11:26:34 -05:00
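
A sketch of the `str`-subtype reference described above (`load_ref()` is
an illustrative method name; `.to_tuple()` is named in the commit):

    from pkgutil import resolve_name

    class NamespacePath(str):
        '''
        A "<module>:<object>" style reference which can be resolved to
        the referenced object and split back apart for the runtime's
        message format.
        '''
        def load_ref(self):
            return resolve_name(self)

        def to_tuple(self) -> tuple[str, str]:
            mod, _, name = self.partition(':')
            return mod, name
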
Tyler Goodlet 949cb2c9fe First draft "namespace path" named tuple; probably will discard 2022-01-30 11:26:34 -05:00
goodboy 094206ee9d
Merge pull request #298 from goodboy/experimental_subpkg
Add `tractor.experimental` subpkg
2022-01-29 19:19:50 -05:00
Tyler Goodlet debbf64d58 Add nooz 2022-01-29 17:58:58 -05:00
Tyler Goodlet 070e6ba459 Add `.experimental` subpkg to setup.py 2022-01-29 14:30:39 -05:00
Tyler Goodlet 7e004c0688 Add back blank `msg.py` 2022-01-29 14:22:15 -05:00
Tyler Goodlet ffe88de53b Better idea: start a `tractor.experimental` subpkg 2022-01-29 14:03:55 -05:00
Tyler Goodlet d29a915d48 Update mod doc string 2022-01-29 14:02:04 -05:00
Tyler Goodlet be87caa99b Move legacy pubsub stuff from `msg.py` to trionics mod 2022-01-29 14:02:04 -05:00
goodboy 0b51ebfe11
Merge pull request #284 from goodboy/maybe_cancel_the_cancel_
Cancel the `.cancel_actor()` request on proc death
2022-01-21 14:22:48 -05:00
Tyler Goodlet 4bf7992200 Bump to alpha 5 dev 2022-01-21 13:05:26 -05:00
Tyler Goodlet 41296448e8 Add nooz 2022-01-21 12:49:26 -05:00
Tyler Goodlet 9650055519 Use `.exitcode` which is poll + error handling 2022-01-21 12:49:26 -05:00
Tyler Goodlet 532974fb90 Drop leftover print 2022-01-21 12:49:26 -05:00
Tyler Goodlet b1d72b77c9 Patch mp procs with a `.poll()`
Not sure why they don't already expose this from the `Popen` backends
but, k.
2022-01-21 12:49:26 -05:00
Tyler Goodlet a2171c7e71 Cancel the `.cancel_actor()` request on proc death
Adjust the `soft_wait()` strategy to avoid sending needless cancel
requests if it is known that a child process is already terminated or
does so before the cancel request times out. This should be no slower
and should avoid needless waits on either closure-in-progress or already
closed channels.

Basic strategy is,
- request child actor to cancel
- if process termination is detected, cancel the cancel
- if the process is still alive after a cancel request timeout warn the
  user and yield back to the hard reap handling
2022-01-21 12:49:26 -05:00
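
A sketch of the strategy listed above (names and the timeout are
illustrative, not the real `_spawn` internals):

    import trio

    async def soft_kill(proc: trio.Process, portal) -> bool:
        async with trio.open_nursery() as nursery:

            async def cancel_the_cancel_on_proc_death() -> None:
                await proc.wait()              # child already exited?
                nursery.cancel_scope.cancel()  # then drop the request

            nursery.start_soon(cancel_the_cancel_on_proc_death)

            # request the child actor to cancel itself
            with trio.move_on_after(3):
                await portal.cancel_actor()

            # the request returned (or timed out) before the process
            # died; let the hard-reap handling take over either way
            nursery.cancel_scope.cancel()

        # True means the child is confirmed dead
        return proc.returncode is not None
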
goodboy 30986d6b64
Merge pull request #292 from goodboy/moar_timeoutz
Moar timeoutz
2022-01-21 12:49:01 -05:00
Tyler Goodlet 9ab04b1f6b One more increase for py3.10 2022-01-21 12:20:06 -05:00
Tyler Goodlet b3ff4b7804 Increase some timeouts for windows 2022-01-21 12:20:06 -05:00
goodboy d27bdbd40e
Merge pull request #291 from goodboy/drop_old_nooz_files
Drop old fragments that `towncrier` somehow missed
2022-01-21 12:17:46 -05:00
Tyler Goodlet a95b0dc05e Drop old fragments that `towncrier` somehow missed 2022-01-21 12:09:16 -05:00
goodboy 909c996346
Merge pull request #289 from houtenjack/remove_asyncio_todo
Remove asyncio from TODOs
2022-01-19 16:26:10 -05:00
Giacomo Camporini adc8e5c009 Revert "Added asyncio with trio guest mode feature in feature list"
This reverts commit 5eed85d5dd.
2022-01-19 09:24:08 +01:00
Giacomo Camporini 5eed85d5dd Added asyncio with trio guest mode feature in feature list 2022-01-18 14:25:08 +01:00
Giacomo Camporini 137fed790f Remove asyncio from TODOs
Fixes #286
2022-01-05 11:40:06 +01:00
goodboy 96123f21d2
Merge pull request #285 from overclockworked64/fix-min-version
fix: bump min version
2021-12-26 20:56:50 -05:00
overclockworked64 338aa2b74a
fix: bump min version 2021-12-26 06:37:18 +01:00
goodboy 89551ef371
Merge pull request #282 from goodboy/win_ci_timeout
Lengthen win CI run to 12m
2021-12-20 09:39:44 -05:00
Tyler Goodlet 884bdf2d57 Lengthen win CI run to 12m 2021-12-20 09:25:38 -05:00
goodboy 343b2803b5
Merge pull request #280 from goodboy/alpha4
Alpha4: infect all the `asyncio`s
2021-12-18 20:15:28 -05:00
Tyler Goodlet cdef579d22 Update release tips 2021-12-18 16:22:13 -05:00
Tyler Goodlet f9400b4beb Bump date, fix bullet lists, other typos. 2021-12-18 15:50:01 -05:00
Tyler Goodlet 45cdf25f14 Add re-license bullet, fix pluggy links XD 2021-12-17 12:48:58 -05:00
Tyler Goodlet 0a5818fe05 Gen an alpha4 changelog using modified `pluggy` towncrier template 2021-12-17 12:21:11 -05:00
Tyler Goodlet fd6e18eba4 Bump version to alpha4, change towncrier news dir name 2021-12-17 11:35:37 -05:00
goodboy bbcdbaaba4
Merge pull request #121 from goodboy/infect_asyncio
Infect `asyncio`
2021-12-17 11:02:01 -05:00
Tyler Goodlet 9b4cdb00e6 Add agpl header 2021-12-17 09:39:30 -05:00
Tyler Goodlet 9b14d82086 Add nooz 2021-12-17 09:39:30 -05:00
Tyler Goodlet 4d1a48a47b Link to inter-loop channel issue in readme 2021-12-17 09:39:30 -05:00
Tyler Goodlet 4c0cfa68ac Link to SC on wikipedia 2021-12-17 09:39:28 -05:00
Tyler Goodlet 73d252e09e Emphasize `asyncio` only with sleeps 2021-12-17 09:38:54 -05:00
Tyler Goodlet 1fdcaf36f3 Not enough time for new asyncio tests? 2021-12-17 09:38:54 -05:00
Tyler Goodlet 6952c7defa Add features bullet, slip in a guille-ism 2021-12-17 09:38:52 -05:00
Tyler Goodlet 7237d696ce Add asyncio echo server ex to readme; fix cluster section 2021-12-17 09:38:05 -05:00
Tyler Goodlet b463841019 Add infected `asyncio` echo server example 2021-12-17 09:38:04 -05:00
Tyler Goodlet d65912e1ae Increase kbi delay in remote cancel test 2021-12-17 09:38:04 -05:00
Tyler Goodlet 24078f2d6e More doc string style tweaks 2021-12-17 09:38:04 -05:00
Tyler Goodlet 56cc98375e Return channel type from `_run_asyncio_task()`
Better encapsulate all the mem-chan, Queue, sync-primitives inside our
linked task channel in order to avoid `mypy`'s complaints about monkey
patching. This also sets footing for adding an `asyncio`-side channel
API that can be used more like this `trio`-side API.
2021-12-17 09:38:04 -05:00
Tyler Goodlet 9a2de90de6 Add mid stream echoserver "bail" cases 2021-12-17 09:38:04 -05:00
Tyler Goodlet 2b9b29eb71 Add an asyncio echo server test 2021-12-17 09:38:04 -05:00
Tyler Goodlet b69412a903 Drop cancel scope from linked task channel 2021-12-17 09:38:04 -05:00
Tyler Goodlet c4b3bb354e Port tests to handle our new `asyncio` cancelled type 2021-12-17 09:38:04 -05:00
Tyler Goodlet 6803891bd7 Collect `asyncio` task exceptions to avoid warning msg 2021-12-17 09:38:04 -05:00
Tyler Goodlet 5f4094691d Re-wrap and raise `asyncio.CancelledError`
For whatever reason `trio` seems to be swallowing this exception when
raised in the `trio` task so instead wrap it in our own non-base
exception type: `AsyncioCancelled` and raise that when the `asyncio`
task cancels itself internally using `raise <err> from <src_err>` style.

Further don't bother cancelling the `trio` task (via cancel scope)
since we can just use the recv mem chan closure error as a signal
and explicitly lookup any set asyncio error.
2021-12-17 09:38:04 -05:00
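In sketch form (the `AsyncioCancelled` definition and translation helper here are illustrative stand-ins, not the actual module code):

.. code:: python

    import asyncio

    class AsyncioCancelled(Exception):
        '''
        Non-``BaseException`` wrapper for an ``asyncio``-internal cancel
        (stand-in definition for the type named above).
        '''

    def maybe_wrap_aio_error(aio_err: BaseException) -> BaseException:
        # wrap the base-exception cancel so the ``trio`` side can't
        # silently swallow it as a plain cancellation
        if isinstance(aio_err, asyncio.CancelledError):
            return AsyncioCancelled('asyncio task was cancelled internally')
        return aio_err

    # raise-style on the ``trio`` side:
    #   raise maybe_wrap_aio_error(aio_err) from aio_err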
Tyler Goodlet c48c68c0bc Flip doc strings to my preferred format 2021-12-17 09:38:04 -05:00
Tyler Goodlet ad2567dd73 Add first set of interloop streaming tests 2021-12-17 09:38:04 -05:00
Tyler Goodlet 44d0e9fc32 Add a `LinkedTaskChannel` for synced inter-loop-streaming
Wraps the pairs of underlying `trio` mem chans and the `asyncio.Queue`
with this new composite which will be delivered from `open_channel_from()`.
This allows for both sending and receiving values from the `asyncio`
task (2 way msg passing) as well as controls for cancelling or waiting on
the task.

Factor `asyncio` translation and re-raising logic into a new closure
which is run on both `trio` side error handling as well as on normal
termination to avoid missing `asyncio` errors even when `trio` task
cancellation is handled first.

Only close the `trio` mem chans on `trio` task termination *iff*
the task was spawned using `open_channel_from()`:
- on `open_channel_from()` exit, mem chan closure is the desired semantic
- on `run_task()` we normally only return a single value or error and
  if the channel is closed before the error is raised we may propagate
  a `trio.EndOfChannel` instead of the desired underlying `asyncio`
  task's error
2021-12-17 09:38:04 -05:00
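In rough shape the composite looks something like the following (field and method names are illustrative, not the real `LinkedTaskChannel` attributes):

.. code:: python

    from dataclasses import dataclass
    import asyncio
    import trio

    @dataclass
    class LinkedTaskChannelSketch:
        _to_aio: asyncio.Queue                 # trio -> asyncio direction
        _from_aio: trio.MemoryReceiveChannel   # asyncio -> trio direction
        _aio_task: asyncio.Task                # handle for cancel/wait control

        async def send(self, item) -> None:
            # push a value over to the ``asyncio`` task
            self._to_aio.put_nowait(item)

        async def receive(self):
            # pull the next value shuttled back from ``asyncio``
            return await self._from_aio.receive()

        def cancel(self) -> None:
            self._aio_task.cancel()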
Tyler Goodlet d27ddb7bbb Add a basic `open_channel_from()` streaming test 2021-12-17 09:38:04 -05:00
Tyler Goodlet 9bc94b5ccc Factor error translation into a ctx mngr
Pull the common `asyncio` -> `trio` error translation logic into
a common context manager and don't expect a final result to be captured
when using `open_channel_from()` since it's a manager interface and it
would be clunky to try and deliver some "final result" after exit.
2021-12-17 09:38:04 -05:00
Tyler Goodlet e6687bcdc4 Serious-ify doc string 2021-12-17 09:38:04 -05:00
Tyler Goodlet e815f766f6 Add a cancelled-from-remote-trio-task case 2021-12-17 09:38:04 -05:00
Tyler Goodlet c19123b588 Add trio-cancels-anursery-cancels-aio test 2021-12-17 09:38:04 -05:00
Tyler Goodlet 8704664719 Reverse the order for asyncio cancelleds? I dunno why 2021-12-17 09:38:04 -05:00
Tyler Goodlet 04c0eda69d Add an `asyncio`-internal cancel test
Verify that if the `asyncio` side task cancels (itself) that we raise
that `asyncio.CancelledError` on the `trio` side.  In the case where
`trio` initiated the cancel whether or not the `asyncio` side ended up
raising `CancelledError` doesn't really matter to us as long as the far
task did indeed terminate.
2021-12-17 09:38:04 -05:00
Tyler Goodlet 1114b6980e Adjust linked-loop-task tear down sequence
Close the mem chan before cancelling the `trio` task in order to ensure
we retrieve whatever error is shuttled from `asyncio` before the channel
read is potentially cancelled (previously a race?).

Handle `asyncio.CancelledError` specially such that we raise it directly
(instead of `raise aio_cancelled from other_err`) since it *is* the
source error in the case where the cancellation is `asyncio` internal.
2021-12-17 09:38:04 -05:00
Tyler Goodlet 56357242e9 Add a `Portal.cancel_actor()` test 2021-12-17 09:38:04 -05:00
Tyler Goodlet 0ab5e5cadd Fill out nursery docstring 2021-12-17 09:38:04 -05:00
Tyler Goodlet 06fa650ed0 Drop runtime logging for asyncio mode 2021-12-17 09:38:04 -05:00
Tyler Goodlet 446feff172 Clean type imports 2021-12-17 09:38:04 -05:00
Tyler Goodlet 299e4192b0 Plan asyncio test set 2021-12-17 09:38:04 -05:00
Tyler Goodlet 41eddffc2c Drop old (and deluded) "streaming" cruft 2021-12-17 09:38:04 -05:00
Tyler Goodlet 7a65165279 Facepalm, re-raise captured `asyncio` task error 2021-12-17 09:38:04 -05:00
Tyler Goodlet b376b7cd32 First draft: `.to_asyncio.open_channel_from()` 2021-12-17 09:38:04 -05:00
Tyler Goodlet c262b1a3e8 Always cancel the asyncio task? 2021-12-17 09:38:04 -05:00
Tyler Goodlet d9dac3f36c Drop old implementation cruft 2021-12-17 09:38:04 -05:00
Tyler Goodlet 325c0cdb1b Fix error propagation on asyncio streaming tasks 2021-12-17 09:38:04 -05:00
Tyler Goodlet 55e210fec6 Drop bad .close() call 2021-12-17 09:38:04 -05:00
Tyler Goodlet aa24bbc11c Proxy asyncio cancelleds as well 2021-12-17 09:38:04 -05:00
Tyler Goodlet 793bcfb7d4 Pass `infect_asyncio` flag to mp actors as well 2021-12-17 09:38:04 -05:00
Tyler Goodlet d80f8d7a39 WIP redo asyncio async gen streaming 2021-12-17 09:38:04 -05:00
Tyler Goodlet 340effae11 Add initial infected asyncio error propagation test 2021-12-17 09:38:01 -05:00
Tyler Goodlet 509ae132ec Raise any asyncio errors if in trio task on cancel 2021-12-17 09:38:01 -05:00
Tyler Goodlet 80f47dece2 Raise from asyncio error; fixes mypy 2021-12-17 09:38:01 -05:00
Tyler Goodlet 2cf87146a3 Log any asyncio error 2021-12-17 09:38:01 -05:00
Tyler Goodlet 8070b16bd0 Support asyncio actors with the trio spawner backend 2021-12-17 09:38:01 -05:00
Tyler Goodlet 1406ddc5ee Add `infect_asyncio: bool` flag to nursery methods 2021-12-17 09:37:41 -05:00
Tyler Goodlet 055788cf16 Attempt to make mypy happy.. 2021-12-17 09:19:23 -05:00
Tyler Goodlet 1825b21d2c Wow, fix all the broken async func invoking code..
Clearly this wasn't developed against a task that spawned just an async
func in `asyncio`.. Fix all that and remove a bunch of unnecessary func
layers. Add provisional support for the target receiving the `to_trio`
and `from_trio` channels and for the @tractor.stream marker.
2021-12-17 09:19:23 -05:00
Tyler Goodlet acd63d0c89 First draft "infected `asyncio` mode"
This should mostly maintain top level SC principles for any task spawned
using `tractor.to_asyncio.run()`. When the `asyncio` task completes make
sure to cancel the pertaining `trio` cancel scope and raise any error
that may have resulted. This interface uses `trio`'s "guest-mode" to run
the `asyncio` loop using a special entrypoint which is handed to Python
during process spawn.
2021-12-17 09:17:59 -05:00
goodboy cdf1f8c2f7
Merge pull request #276 from goodboy/expected_ctx_cancelled
Expected ctx cancelled should not override a source error
2021-12-17 08:08:18 -05:00
Tyler Goodlet 8eff788d2d Pin to previous `trio_typing` release 2021-12-16 19:59:10 -05:00
Tyler Goodlet 916e27eedc Adjust cancelled test to expect raised overrun error 2021-12-16 19:59:10 -05:00
Tyler Goodlet 98a830ccba Drop cancel traceback capture; don't seem to need it? 2021-12-16 19:59:10 -05:00
Tyler Goodlet 8c004c1f36 Add an explicit messaging error for reporting an illegal context transaction 2021-12-16 19:59:10 -05:00
Tyler Goodlet e2139c2bf0 Don't set `Context._error` to expected `ContextCancelled`
If the one side of an inter-actor context cancels the other then that
side should always expect back a `ContextCancelled` message. However we
should not set this error in this case (where the cancel request was
sent and a `ContextCancelled` msg was received back) since it may
override some other error that caused the cancellation request to be
sent out in the first place. As an example, when a context opens another
context to a peer and some error happens which causes the second peer
context to be cancelled, we still want to propagate the original error.

Fixes the issue found in https://github.com/pikers/piker/issues/244
2021-12-16 19:59:10 -05:00
Tyler Goodlet 9650b010de Add a test for the real issue: error overriding
The underlying issue is actually that a nested `Context` which was
cancelled was overriding the local error that triggered that secondary
context's cancellation in the first place XD. This test catches that
case.

Relates to https://github.com/pikers/piker/issues/244
2021-12-16 19:59:10 -05:00
Tyler Goodlet 5d424e3703 Hide the key error tb on remote starting errors 2021-12-16 19:59:10 -05:00
Tyler Goodlet c38d0f826e Add an unserializable value causes error before started test 2021-12-16 19:59:10 -05:00
goodboy 4001d2c3fc
Merge pull request #257 from goodboy/context_caching
Add `maybe_open_context()` an actor wide task-resource cache
2021-12-16 19:55:14 -05:00
Tyler Goodlet 953d15b67d Add nooz 2021-12-16 18:02:03 -05:00
Tyler Goodlet da5e36bf0c Revert back to avoiding key errors on cancellation 2021-12-16 18:02:03 -05:00
Tyler Goodlet 21a9c47496 Parameterize over cache keying methods: kwargs and "key" 2021-12-16 18:02:03 -05:00
Tyler Goodlet 67dc0d014c Add basic `maybe_open_context()` caching test 2021-12-16 18:02:03 -05:00
Tyler Goodlet 9b1d8bf7b0 Of course, increase the timeout for windows.. 2021-12-16 18:02:03 -05:00
Tyler Goodlet 26394dd8df Type annot fixes 2021-12-16 18:02:03 -05:00
Tyler Goodlet 11e64426f6 Wake all sleeping consumers on bcaster closure 2021-12-16 18:02:03 -05:00
Tyler Goodlet 213447008b Add draft code for waiting on all nurseries in root 2021-12-16 18:02:03 -05:00
Tyler Goodlet f617da6ff1 Add timeout around test and prints for guidance 2021-12-16 18:02:03 -05:00
Tyler Goodlet 52627a6326 Rework interface: pass func and kwargs
After more extensive testing I realized that keying on the context
manager *instance id* isn't going to work since each entering task is
going to create a unique key XD

Instead pass the manager function as `acm_func` and optionally allow
keying the resource on the passed `kwargs` (if hashable) or the
`key:str`. Further, pass the key to the enterer task and avoid
a separate keying scheme for the manager versus the value it delivers.
Don't bother with checking and releasing the lock in `finally:` block,
it should be an error if it's still locked.
2021-12-16 18:02:03 -05:00
Tyler Goodlet 3826bc9972 Don't catch key errors from the yielded to scope 2021-12-16 18:02:03 -05:00
Tyler Goodlet b210278e2f Naming change `cache` -> `_Cache` 2021-12-16 18:02:03 -05:00
Tyler Goodlet 4a0252baf2 Add task-cached stream test 2021-12-16 18:02:03 -05:00
Tyler Goodlet ac22b4a875 Fix type annots in resource cacher internals 2021-12-16 18:02:03 -05:00
Tyler Goodlet 5f41dbf34f Add `maybe_open_context()` an actor wide task-resource cache 2021-12-16 18:02:03 -05:00
goodboy 2d6fbd5437
Merge pull request #278 from goodboy/end_of_channel_fixes
End of channel fixes for streams and broadcasting
2021-12-16 18:01:04 -05:00
Tyler Goodlet 325e550ff3 Add nooz 2021-12-16 17:30:18 -05:00
Tyler Goodlet b5d62909ff Pin to `mypy` 0.910
Avoids the issue noted in
https://github.com/python-trio/trio-typing/issues/50
to keep CI green.
2021-12-16 16:20:07 -05:00
Tyler Goodlet 57f2aca18c Set eoc on closure (again) 2021-12-16 16:19:15 -05:00
Tyler Goodlet 1652716574 Add timeout to streaming test 2021-12-16 16:19:09 -05:00
Tyler Goodlet f2ba961e81 Mark stream with EOC when stop message is received 2021-12-16 16:18:58 -05:00
Tyler Goodlet 79d63585b0 Add a multi-task fan out streaming test
This actually catches a lot of bugs to do with stream termination and
``MsgStream.subscribe()`` usage where the underlying stream closes from
the producer side. When this passes the broadcaster logic will have to
ensure non-lossy fan out semantics and closure tracking.
2021-12-16 16:16:23 -05:00
Tyler Goodlet 3deb1b91e6 Wake all broadcast consumers on EOC
Without this wakeup you can have tasks which re-enter `.receive()`
and get stuck waiting on the wakeup event indefinitely. Whenever
a ``trio.EndOfChannel`` arrives we want to make sure all consumers
at least know about it and don't block. This previous behaviour was
basically a bug.

Add some state flags for tracking if the broadcaster was either
cancelled or terminated via EOC mostly for testing and debugging
purposes though this info might be useful if we decide to offer
a `.statistics()` like API in the future.
2021-12-16 16:16:14 -05:00
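The wakeup pattern in miniature (a sketch of the idea only, not the actual `BroadcastReceiver` internals):

.. code:: python

    import trio

    class _BroadcastStateSketch:
        def __init__(self) -> None:
            self.eoc: bool = False        # end-of-channel was seen
            self.cancelled: bool = False  # receiver was cancelled
            self.recv_ready: trio.Event = trio.Event()

        def mark_eoc(self) -> None:
            # record the closure and wake *every* sleeping consumer so no
            # task stays blocked on the wakeup event after EOC
            self.eoc = True
            self.recv_ready.set()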
Tyler Goodlet 61e134dc5d Wake up consumers on end of channel as well 2021-12-16 16:15:54 -05:00
goodboy cfdc95fe7f
Merge pull request #275 from goodboy/agpl_commit_msg_fix
Re-license as AGPLv3
2021-12-14 23:51:30 -05:00
Tyler Goodlet 6f94ffc304 Re-license code base for distribution under AGPL
This commit obviously denotes a re-license of all applicable parts of
the code base. Acknowledgement of this change was completed in #274 by
the majority of the current set of contributors. From here henceforth
all changes will be AGPL licensed and distributed. This is purely an
effort to maintain the same copy-left policy whilst closing the
(perceived) SaaS loophole the GPL allows for. It is merely for this
loophole: to avoid code hiding by any potential "network providers" who
are attempting to use the project to make a profit without either
compensating the authors or re-distributing their changes.

I thought quite a bit about this change and can't see a reason not to
close the SaaS loophole in our current license. We still are (hard)
copy-left and I plan to keep the code base this way for a couple
reasons:

- The code base produces income/profit through parent projects and is
  demonstrably of high value.
- I believe firms should not get free lunch for the sake of
  "contributions from their employees" or "usage as a service" which
  I have found to be a dubious argument at best.
- If a firm who intends to profit from the code base wants to use it
  they can propose a secondary commercial license to purchase with the
  proceeds going to the project's authors under some form of well
  defined contract.
- Many successful projects like Qt use this model; I see no reason it
  can't work in this case until such a time as the authors feel it
  should be loosened.

There has been detailed discussion in #103 on licensing alternatives.
The main point of this AGPL change is to protect the code base for the
time being from exploitation while it grows and as we move into the next
phase of development which will include extension into the multi-host
distributed software space.
2021-12-14 23:33:27 -05:00
goodboy 56297cf25c
Merge pull request #271 from goodboy/debug_flag_per_actor
Debug flag per actor
2021-12-11 20:10:21 -05:00
Tyler Goodlet 94f098e5f7 Add nooz 2021-12-10 13:08:20 -05:00
Tyler Goodlet 949aa9c405 Lol. should probably push the example code... 2021-12-10 12:48:05 -05:00
Tyler Goodlet a38a983225 Increase debugger poll delay back to prior value
If we make it too fast, a nursery with debug mode children can cancel
too quickly and cause some test failures. It's likely not a huge deal
anyway since the purpose of this poll/check is for human interaction
and the current delay isn't really that noticeable.

Decrease log levels in the debug module to avoid console noise when in
use. Toss in some more detailed comments around the new debugger lock
points.
2021-12-10 11:54:27 -05:00
Tyler Goodlet 4f411d6926 Add a per actor debug mode test 2021-12-09 17:53:31 -05:00
Tyler Goodlet 9bee513136 Use manual debugger-in-use flag in nursery and spawn task 2021-12-09 17:53:29 -05:00
Tyler Goodlet 5d9e3d1163 Add a manual debug mode kwarg to debugger waiter 2021-12-09 17:52:35 -05:00
Tyler Goodlet 95c52436e5 Adjust multi-actor debugger test
It turns out recent improvements have made the debugger too good
so we need to just terminate the continue loop in this test when
we finally see the "spawn error" crash out because the breakpoint
forever case will literally continue forever XD
2021-12-07 16:46:03 -05:00
Tyler Goodlet e51c0e17a2 Properly set console logging in test suite 2021-12-07 13:17:10 -05:00
Tyler Goodlet 92c6ec1882 `get_loglevel()` always returns a str 2021-12-07 13:17:00 -05:00
Tyler Goodlet 72eef2a4a1 Config debug mode log level *after* initial setup 2021-12-07 13:16:07 -05:00
Tyler Goodlet 205e254072 Make test suite use default log level 2021-12-07 13:13:40 -05:00
Tyler Goodlet 9bd5226e76 Only adjust logging in debug mode if not noisy enough already 2021-12-07 13:13:04 -05:00
Tyler Goodlet e899cc42bf Add per actor debug mode toggle 2021-12-07 13:11:06 -05:00
goodboy f7c9056419
Merge pull request #261 from goodboy/stricter_context_starting
`Context` oriented error relay and `MsgStream` overruns
2021-12-07 11:22:48 -05:00
89 changed files with 8019 additions and 2733 deletions


@ -1,6 +1,11 @@
name: CI
on: push
on:
# any time someone pushes a new branch to origin
push:
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
@ -15,45 +20,38 @@ jobs:
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '3.9'
python-version: '3.10'
- name: Install dependencies
run: pip install -U . --upgrade-strategy eager -r requirements-test.txt
- name: Run MyPy check
run: mypy tractor/ --ignore-missing-imports
run: mypy tractor/ --ignore-missing-imports --show-traceback
testing-linux:
name: '${{ matrix.os }} Python ${{ matrix.python }} - ${{ matrix.spawn_backend }}'
timeout-minutes: 9
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
python: ['3.9', '3.10']
spawn_backend: ['trio', 'mp']
# test that we can generate a software distribution and install it
# thus avoid missing file issues after packaging.
sdist-linux:
name: 'sdist'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '${{ matrix.python }}'
python-version: '3.10'
- name: Install dependencies
run: pip install -U . -r requirements-test.txt -r requirements-docs.txt --upgrade-strategy eager
- name: Build sdist
run: python setup.py sdist --formats=zip
- name: Run tests
run: pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rs
- name: Install sdist from .zips
run: python -m pip install dist/*.zip
testing-linux-msgspec:
# runs jobs on all OS's but with optional `msgspec` dep installed
name: '${{ matrix.os }} Python ${{ matrix.python }} - ${{ matrix.spawn_backend }} - msgspec'
testing-linux:
name: '${{ matrix.os }} Python ${{ matrix.python }} - ${{ matrix.spawn_backend }}'
timeout-minutes: 10
runs-on: ${{ matrix.os }}
@ -61,45 +59,12 @@ jobs:
fail-fast: false
matrix:
os: [ubuntu-latest]
python: ['3.9', '3.10']
spawn_backend: ['trio', 'mp']
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '${{ matrix.python }}'
- name: Install dependencies
run: pip install -U .[msgspec] -r requirements-test.txt -r requirements-docs.txt --upgrade-strategy eager
- name: Run tests
run: pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rs
# We skip 3.10 on windows for now due to
# https://github.com/pytest-dev/pytest/issues/8733
# some kinda weird `pyreadline` issue..
# TODO: use job filtering to accomplish instead of repeated
# boilerplate as is above XD:
# - https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows
# - https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#using-a-build-matrix
# - https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#jobsjob_idif
testing-windows:
name: '${{ matrix.os }} Python ${{ matrix.python }} - ${{ matrix.spawn_backend }}'
timeout-minutes: 9
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [windows-latest]
python: ['3.9']
spawn_backend: ['trio', 'mp']
python: ['3.10']
spawn_backend: [
'trio',
'mp_spawn',
'mp_forkserver',
]
steps:
@ -114,5 +79,53 @@ jobs:
- name: Install dependencies
run: pip install -U . -r requirements-test.txt -r requirements-docs.txt --upgrade-strategy eager
- name: List dependencies
run: pip list
- name: Run tests
run: pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rs
run: pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rsx
# We skip 3.10 on windows for now due to not having any collabs to
# debug the CI failures. Anyone wanting to hack and solve them is very
# welcome, but our primary user base is not using that OS.
# TODO: use job filtering to accomplish instead of repeated
# boilerplate as is above XD:
# - https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows
# - https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#using-a-build-matrix
# - https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#jobsjob_idif
# testing-windows:
# name: '${{ matrix.os }} Python ${{ matrix.python }} - ${{ matrix.spawn_backend }}'
# timeout-minutes: 12
# runs-on: ${{ matrix.os }}
# strategy:
# fail-fast: false
# matrix:
# os: [windows-latest]
# python: ['3.10']
# spawn_backend: ['trio', 'mp']
# steps:
# - name: Checkout
# uses: actions/checkout@v2
# - name: Setup python
# uses: actions/setup-python@v2
# with:
# python-version: '${{ matrix.python }}'
# - name: Install dependencies
# run: pip install -U . -r requirements-test.txt -r requirements-docs.txt --upgrade-strategy eager
# # TODO: pretty sure this solves debugger deps-issues on windows, but it needs to
# # be verified by someone with a native setup.
# # - name: Force pyreadline3
# # run: pip uninstall pyreadline; pip install -U pyreadline3
# - name: List dependencies
# run: pip list
# - name: Run tests
# run: pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rsx

LICENSE

@ -1,23 +1,21 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
@ -72,7 +60,7 @@ modification follow.
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
@ -635,40 +633,29 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GNU Affero General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

MANIFEST.in 100644

@ -0,0 +1,2 @@
# https://packaging.python.org/en/latest/guides/using-manifest-in/#using-manifest-in
include docs/README.rst

NEWS.rst

@ -4,6 +4,321 @@ Changelog
.. towncrier release notes start
tractor 0.1.0a5 (2022-08-03)
============================
This is our final release supporting Python 3.9 since we will be moving
internals to the new `match:` syntax from 3.10 going forward; further,
we have officially dropped usage of the `msgpack` library and
happily adopted `msgspec`.
Features
--------
- `#165 <https://github.com/goodboy/tractor/issues/165>`_: Add SIGINT
protection to our `pdbpp` based debugger subsystem such that for
(single-depth) actor trees in debug mode we ignore interrupts in any
actor currently holding the TTY lock thus avoiding clobbering IPC
connections and/or task and process state when working in the REPL.
As a big note, currently so-called "nested" actor trees (trees with
actors having more than one parent/ancestor) are not fully supported
since we don't yet have a mechanism to relay the debug mode knowledge
"up" the actor tree (for eg. when handling a crash in a leaf actor).
As such there is currently a set of tests and known scenarios which will
result in process clobbering by the zombie reaping machinery and these
have been documented in https://github.com/goodboy/tractor/issues/320.
The implementation details include:
- utilizing a custom SIGINT handler which we apply whenever an actor's
runtime enters the debug machinery, which we also make sure the
stdlib's `pdb` configuration doesn't override (which it does by
default without special instance config).
- litter the runtime with `maybe_wait_for_debugger()` mostly in spots
where the root actor should block before doing embedded nursery
teardown ops which both cancel potential-children-in-debug as well
as eventually trigger zombie reaping machinery.
- hardening of the TTY locking semantics/API both in terms of IPC
terminations and cancellation and lock release determinism from
sync debugger instance methods.
- factoring of locking infrastructure into a new `._debug.Lock` global
which encapsulates all details of the ``trio`` sync primitives and
task/actor uid management and tracking.
We also add `ctrl-c` cases throughout the test suite though these are
disabled for py3.9 (`pdbpp` UX differences that don't seem worth
compensating for, especially since this will be our last 3.9 supported
release) and there are a slew of marked cases that aren't expected to
work in CI more generally (as mentioned in the "nested" tree note
above) despite seemingly working when run manually on linux.
- `#304 <https://github.com/goodboy/tractor/issues/304>`_: Add a new
``to_asyncio.LinkedTaskChannel.subscribe()`` which gives task-oriented
broadcast functionality semantically equivalent to
``tractor.MsgStream.subscribe()`` this makes it possible for multiple
``trio``-side tasks to consume ``asyncio``-side task msgs in tandem.
Further improvements to the test suite were added in this patch set,
including a new scenario test for a sub-actor managed "service nursery"
(implementing the basics of a "service manager") including use of
*infected asyncio* mode. Further we added a lower level
``test_trioisms.py`` to start to track issues we need to work around in
``trio`` itself which in this case included a bug we were trying to
solve related to https://github.com/python-trio/trio/issues/2258.
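A rough usage sketch (assuming an ``asyncio``-side target task as in the infected-``asyncio`` examples, and the same async context manager semantics as ``MsgStream.subscribe()``):

.. code:: python

    import trio
    from tractor import to_asyncio

    async def consume(chan, name: str) -> None:
        # each trio task gets its own broadcast receiver; msgs from the
        # asyncio side are fanned out to every subscriber in tandem
        async with chan.subscribe() as bcast:
            async for msg in bcast:
                print(name, msg)

    async def fan_out(aio_target) -> None:
        # ``aio_target`` is any asyncio-side entrypoint as used with
        # ``open_channel_from()`` elsewhere in the docs
        async with to_asyncio.open_channel_from(aio_target) as (first, chan):
            async with trio.open_nursery() as tn:
                for i in range(3):
                    tn.start_soon(consume, chan, f'consumer-{i}')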
Bug Fixes
---------
- `#318 <https://github.com/goodboy/tractor/issues/318>`_: Fix
a previously undetected ``trio``-``asyncio`` task lifetime linking
issue with the ``to_asyncio.open_channel_from()`` api where both sides
were not properly waiting/signalling termination and it was possible
for ``asyncio``-side errors to not propagate due to a race condition.
The implementation fix summary is:
- add state to signal the end of the ``trio`` side task to be
read by the ``asyncio`` side and always cancel any ongoing
task in such cases.
- always wait on the ``asyncio`` task termination from the ``trio``
side on error before maybe raising said error.
- always close the ``trio`` mem chan on exit to ensure the other
side can detect it and follow.
Trivial/Internal Changes
------------------------
- `#248 <https://github.com/goodboy/tractor/issues/248>`_: Adjust the
`tractor._spawn.soft_wait()` strategy to avoid sending an actor cancel
request (via `Portal.cancel_actor()`) if either the child process is
detected as having terminated or the IPC channel is detected to be
closed.
This ensures (even) more deterministic inter-actor cancellation by
avoiding the timeout condition where possible when a child never
successfully spawned, crashed, or became un-contactable over IPC.
- `#295 <https://github.com/goodboy/tractor/issues/295>`_: Add an
experimental ``tractor.msg.NamespacePath`` type for passing Python
objects by "reference" through a ``str``-subtype message and using the
new ``pkgutil.resolve_name()`` for reference loading.
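The loading side is plain stdlib resolution; a minimal illustration of the reference-by-``str`` idea (independent of the ``NamespacePath`` wrapper itself):

.. code:: python

    from pkgutil import resolve_name

    # a "namespace path" is just a str like '<module>:<object>' which
    # can cross the IPC boundary and be re-loaded on the far side
    ref: str = 'math:sqrt'
    func = resolve_name(ref)
    assert func(9) == 3.0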
- `#298 <https://github.com/goodboy/tractor/issues/298>`_: Add a new
`tractor.experimental` subpackage for staging new high level APIs and
subsystems that we might eventually make built-ins.
- `#300 <https://github.com/goodboy/tractor/issues/300>`_: Update to and
pin latest ``msgpack`` (1.0.3) and ``msgspec`` (0.4.0) both of which
required adjustments for backwards-incompatible API tweaks.
- `#303 <https://github.com/goodboy/tractor/issues/303>`_: Fence off
``multiprocessing`` imports until absolutely necessary in an effort to
avoid "resource tracker" spawning side effects that seem to have
varying degrees of unreliability per Python release. Port to new
``msgspec.DecodeError``.
- `#305 <https://github.com/goodboy/tractor/issues/305>`_: Add
``tractor.query_actor()`` an addr looker-upper which doesn't deliver
a ``Portal`` instance and instead just a socket address ``tuple``.
Sometimes it's handy to just have a simple way to figure out if
a "service" actor is up, so add this discovery helper for that. We'll
prolly just leave it undocumented for now until we figure out
a longer-term/better discovery system.
- `#316 <https://github.com/goodboy/tractor/issues/316>`_: Run windows
CI jobs on python 3.10 after some hacks for ``pdbpp`` dependency
issues.
The issue was to do with the now deprecated `pyreadline` project which
should be changed over to `pyreadline3`.
- `#317 <https://github.com/goodboy/tractor/issues/317>`_: Drop use of
the ``msgpack`` package and instead move fully to the ``msgspec``
codec library.
We've now used ``msgspec`` extensively in production and there's no
reason not to use it as the default. Further, this change preps us for the
upcoming typed messaging semantics (#196), dialog-unprotocol system
(#297), and caps-based messaging-protocols (#299) planned before our
first beta.
tractor 0.1.0a4 (2021-12-18)
============================
Features
--------
- `#275 <https://github.com/goodboy/tractor/issues/275>`_: Re-license
code base under AGPLv3. Also see `#274
<https://github.com/goodboy/tractor/pull/274>`_ for majority
contributor consensus on this decision.
- `#121 <https://github.com/goodboy/tractor/issues/121>`_: Add
"infected ``asyncio`` mode; a sub-system to spawn and control
``asyncio`` actors using ``trio``'s guest-mode.
This gets us the following very interesting functionality:
- ability to spawn an actor that has a process entry point of
``asyncio.run()`` by passing ``infect_asyncio=True`` to
``Portal.start_actor()`` (and friends).
- the ``asyncio`` actor embeds ``trio`` using guest-mode and starts
a main ``trio`` task which runs the ``tractor.Actor._async_main()``
entry point and engages all the normal ``tractor`` runtime IPC/messaging
machinery; for all intents and purposes the actor is now running normally on
a ``trio.run()``.
- the actor can now make one-to-one task spawning requests to the
underlying ``asyncio`` event loop using either of:
* ``to_asyncio.run_task()`` to spawn and run an ``asyncio`` task to
completion and block until a return value is delivered.
* ``async with to_asyncio.open_channel_from():`` which spawns a task
and hands it a pair of "memory channels" to allow for bi-directional
streaming between the now SC-linked ``trio`` and ``asyncio`` tasks.
The output from any call(s) to ``asyncio`` can be handled as normal in
``trio``/``tractor`` task operation with the caveat of the overhead due
to guest-mode use.
For more details see the `original PR
<https://github.com/goodboy/tractor/pull/121>`_ and `issue
<https://github.com/goodboy/tractor/issues/120>`_.
- `#257 <https://github.com/goodboy/tractor/issues/257>`_: Add
``trionics.maybe_open_context()`` an actor-scoped async multi-task
context manager resource caching API.
Adds an SC-safe caching async context manager api that only enters on
the *first* task entry and only exits on the *last* task exit while in
between delivering the same cached value per input key. Keys can be
either an explicit ``key`` named arg provided by the user or a
hashable ``kwargs`` dict (will be converted to a ``list[tuple]``) which
is passed to the underlying manager function as input.
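A rough usage sketch (the ``(cache_hit, value)`` yield shape is assumed here for illustration):

.. code:: python

    from contextlib import asynccontextmanager as acm

    import trio
    import tractor

    @acm
    async def open_connection(host: str, port: int):
        # stand-in for some costly shared resource
        yield (host, port)

    async def user_task() -> None:
        async with tractor.trionics.maybe_open_context(
            acm_func=open_connection,
            kwargs={'host': 'localhost', 'port': 5432},
        ) as (cache_hit, conn):
            # only the first entering task actually opens the resource;
            # later tasks keyed on the same kwargs get the cached value
            assert conn == ('localhost', 5432)

    async def main() -> None:
        async with trio.open_nursery() as tn:
            for _ in range(4):
                tn.start_soon(user_task)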
- `#261 <https://github.com/goodboy/tractor/issues/261>`_: Add
cross-actor-task ``Context`` oriented error relay, a new stream
overrun error-signal ``StreamOverrun``, and support disabling
``MsgStream`` backpressure as the default before a stream is opened or
by choice of the user.
We added stricter semantics around ``tractor.Context.open_stream():``
particularly to do with streams which are only opened at one end.
Previously, if only one end opened a stream there was no way for that
sender to know if msgs were being received until, first, the feeder mem
chan on the receiver side hit a backpressure state and then that
condition delayed its msg loop processing task to eventually create
backpressure on the associated IPC transport. This is non-ideal in the
case where the receiver side never opened a stream by mistake since it
results in silent block of the sender and no adherence to the underlying
mem chan buffer size settings (which is still unsolved btw).
To solve this we add non-backpressure style message pushing inside
``Actor._push_result()`` by default and only use the backpressure
``trio.MemorySendChannel.send()`` call **iff** the local end of the
context has entered ``Context.open_stream():``. This way if the stream
was never opened but the mem chan is overrun, we relay back to the
sender a (new exception) ``StreamOverrun`` error which is raised in the
sender's scope with a special error message about the stream never
having been opened. Further, this behaviour (non-backpressure style
where senders can expect an error on overruns) can now be enabled with
``.open_stream(backpressure=False)`` and the underlying mem chan size
can be specified with a kwarg ``msg_buffer_size: int``.
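In sketch form (the ``StreamOverrun`` import path and surrounding scaffolding are assumed here for illustration):

.. code:: python

    from tractor._exceptions import StreamOverrun  # import path assumed

    async def send_lots(portal, ctx_fn) -> None:
        async with portal.open_context(ctx_fn) as (ctx, first):
            async with ctx.open_stream(
                backpressure=False,   # error on overruns instead of blocking
                msg_buffer_size=128,  # underlying mem chan size
            ) as stream:
                try:
                    for i in range(10_000):
                        await stream.send(i)
                except StreamOverrun:
                    # the receiver never entered ``.open_stream()`` (or
                    # lagged badly) and its feeder mem chan overran
                    print('receiver is not consuming; bailing')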
Further bug fixes and enhancements in this changeset include:
- fix a race we were ignoring where if the callee task opened a context
it could enter ``Context.open_stream()`` before calling
``.started()``.
- Disallow calling ``Context.started()`` more than once.
- Enable ``Context`` linked tasks error relaying via the new
``Context._maybe_raise_from_remote_msg()`` which (for now) uses
a simple ``trio.Nursery.start_soon()`` to raise the error via closure
in the local scope.
- `#267 <https://github.com/goodboy/tractor/issues/267>`_: This
(finally) adds fully acknowledged remote cancellation messaging
support for both explicit ``Portal.cancel_actor()`` calls as well as
when there are "runtime-wide" cancellations (eg. during KBI or
general actor nursery exception handling which causes a full actor
"crash"/termination).
You can think of this as the most ideal case in 2-generals where the
actor requesting the cancel of its child is able to always receive back
the ACK to that request. This leads to a more deterministic shutdown of
the child where the parent is able to wait for the child to fully
respond to the request. On a localhost setup, where the parent can
monitor the state of the child through process or other OS APIs instead
of solely through IPC messaging, the parent can know whether or not the
child decided to cancel with more certainty. In the case of separate
hosts, we still rely on a simple timeout approach until such a time
where we prefer to get "fancier".
- `#271 <https://github.com/goodboy/tractor/issues/271>`_: Add a per
actor ``debug_mode: bool`` control to our nursery.
This allows spawning actors via ``ActorNursery.start_actor()`` (and
other dependent methods) with a ``debug_mode=True`` flag much like
``tractor.open_nursery():`` such that per process crash handling
can be toggled for cases where a user does not need/want all child actors
to drop into the debugger on error. This is often useful when you have
actor-tasks which are expected to error often (and be re-run) but want
to specifically interact with some (problematic) child.
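For example (a minimal sketch):

.. code:: python

    import tractor

    async def main() -> None:
        async with tractor.open_nursery() as an:
            # only this child drops into the debugger on crash; its
            # siblings keep the default (non-debug) behaviour
            portal = await an.start_actor(
                'problem_child',
                enable_modules=[__name__],
                debug_mode=True,
            )
            await portal.cancel_actor()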
Bugfixes
--------
- `#239 <https://github.com/goodboy/tractor/issues/239>`_: Fix
keyboard interrupt handling in ``Portal.open_context()`` blocks.
Previously this was not triggering cancellation of the remote task
context and could result in hangs if a stream was also opened. This
fix is to accept `BaseException` since it is likely any other top
level exception other than KBI (even though not expected) should also
get this result.
- `#264 <https://github.com/goodboy/tractor/issues/264>`_: Fix
``Portal.run_in_actor()`` returns ``None`` result.
``None`` was being used as the cached result flag and obviously breaks
on a ``None`` returned from the remote target task. This would cause an
infinite hang if user code ever called ``Portal.result()`` *before* the
nursery exit. The simple fix is to use the *return message* as the
initial "no-result-received-yet" flag value and, once received, the
return value is read from the message to avoid the cache logic error.
- `#266 <https://github.com/goodboy/tractor/issues/266>`_: Fix
graceful cancellation of daemon actors
Previously, this was a bug where if the soft wait on a sub-process (the
``await .proc.wait()``) in the reaper task teardown was cancelled we
would fail over to the hard reaping sequence (meant for culling off any
potential zombies via system kill signals). The hard reap has a timeout
of 3s (currently though in theory we could make it shorter?) before
system signalling kicks in. This means that any daemon actor still
running during nursery exit would get hard reaped (3s later) instead of
cancelled via IPC message. Now we catch the ``trio.Cancelled``, call
``Portal.cancel_actor()`` on the daemon and expect the child to
self-terminate after the runtime cancels and shuts down the process.
- `#278 <https://github.com/goodboy/tractor/issues/278>`_: Repair
inter-actor stream closure semantics to work correctly with
``tractor.trionics.BroadcastReceiver`` task fan out usage.
A set of previously unknown bugs discovered in `#257
<https://github.com/goodboy/tractor/pull/257>`_ let graceful stream
closure result in hanging consumer tasks that use the broadcast APIs.
This adds better internal closure state tracking to the broadcast
receiver and message stream APIs and in particular ensures that when an
underlying stream/receive-channel (a broadcast receiver is receiving
from) is closed, all consumer tasks waiting on that underlying channel
are woken so they can receive the ``trio.EndOfChannel`` signal and
promptly terminate.
tractor 0.1.0a3 (2021-11-02)
============================
@ -21,7 +336,7 @@ Features
Provides us with a path toward supporting typed IPC message contracts. Further,
``msgspec`` structs may be a valid tool to start for formalizing our
"SC dialog un-protocol" messages as described in `#36
<https://github.com/goodboy/tractor/issues/36>`_`.
<https://github.com/goodboy/tractor/issues/36>`_.
- Introduce a new ``tractor.trionics`` `sub-package`_ that exposes
a selection of our relevant high(er) level trio primitives and
@ -51,8 +366,17 @@ Features
improved debugger support since we have determinism guarantees about
which processes must wait before hard killing their children.
- Drop Python 3.8 support in favor of rolling with two latest releases
for the time being. (#248)
- (`#248 <https://github.com/goodboy/tractor/pull/248>`_) Drop Python
3.8 support in favour of rolling with two latest releases for the time
being.
Misc
----
- (`#243 <https://github.com/goodboy/tractor/pull/243>`_) add a distinct
``'CANCEL'`` log level to allow the runtime to emit details about
cancellation machinery statuses.
tractor 0.1.0a2 (2021-09-07)


@ -3,13 +3,20 @@
|gh_actions|
|docs|
``tractor`` is a `structured concurrent`_, multi-processing_ runtime built on trio_.
``tractor`` is a `structured concurrent`_, multi-processing_ runtime
built on trio_.
Fundamentally ``tractor`` gives you parallelism via ``trio``-"*actors*":
our nurseries_ let you spawn new Python processes which each run a ``trio``
Fundamentally, ``tractor`` gives you parallelism via
``trio``-"*actors*": independent Python processes (aka
non-shared-memory threads) which maintain structured
concurrency (SC) *end-to-end* inside a *supervision tree*.
Cross-process (and thus cross-host) SC is accomplished through the
combined use of our "actor nurseries_" and an "SC-transitive IPC
protocol" constructed on top of multiple Pythons each running a ``trio``
scheduled runtime - a call to ``trio.run()``.
We believe the system adhere's to the `3 axioms`_ of an "`actor model`_"
We believe the system adheres to the `3 axioms`_ of an "`actor model`_"
but likely *does not* look like what *you* probably think an "actor
model" looks like, and that's *intentional*.
@ -22,12 +29,15 @@ Features
- **It's just** a ``trio`` API
- *Infinitely nesteable* process trees
- Builtin IPC streaming APIs with task fan-out broadcasting
- A (first ever?) "native" multi-core debugger UX for Python using `pdb++`_
- A "native" multi-core debugger REPL using `pdbp`_ (a fork & fix of
`pdb++`_ thanks to @mdmintz!)
- Support for a swappable, OS specific, process spawning layer
- A modular transport stack, allowing for custom serialization (eg.
- A modular transport stack, allowing for custom serialization (eg. with
`msgspec`_), communications protocols, and environment specific IPC
primitives
- `structured concurrency`_ from the ground up
- Support for spawning process-level-SC, inter-loop one-to-one-task oriented
``asyncio`` actors via "infected ``asyncio``" mode
- `structured chadcurrency`_ from the ground up
Run a func in a process
@ -146,7 +156,7 @@ it **is a bug**.
"Native" multi-process debugging
--------------------------------
Using the magic of `pdb++`_ and our internal IPC, we've
Using the magic of `pdbp`_ and our internal IPC, we've
been able to create a native feeling debugging experience for
any (sub-)process in your ``tractor`` tree.
@ -313,6 +323,117 @@ real time::
This uses no extra threads, fancy semaphores or futures; all we need
is ``tractor``'s IPC!
"Infected ``asyncio``" mode
---------------------------
Have a bunch of ``asyncio`` code you want to force to be SC at the process level?
Check out our experimental system for `guest-mode`_ controlled
``asyncio`` actors:
.. code:: python
import asyncio
from statistics import mean
import time
import trio
import tractor
async def aio_echo_server(
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
) -> None:
# a first message must be sent **from** this ``asyncio``
# task or the ``trio`` side will never unblock from
# ``tractor.to_asyncio.open_channel_from():``
to_trio.send_nowait('start')
# XXX: this uses an ``from_trio: asyncio.Queue`` currently but we
# should probably offer something better.
while True:
# echo the msg back
to_trio.send_nowait(await from_trio.get())
await asyncio.sleep(0)
@tractor.context
async def trio_to_aio_echo_server(
ctx: tractor.Context,
):
# this will block until the ``asyncio`` task sends a "first"
# message.
async with tractor.to_asyncio.open_channel_from(
aio_echo_server,
) as (first, chan):
assert first == 'start'
await ctx.started(first)
async with ctx.open_stream() as stream:
async for msg in stream:
await chan.send(msg)
out = await chan.receive()
# echo back to parent actor-task
await stream.send(out)
async def main():
async with tractor.open_nursery() as n:
p = await n.start_actor(
'aio_server',
enable_modules=[__name__],
infect_asyncio=True,
)
async with p.open_context(
trio_to_aio_echo_server,
) as (ctx, first):
assert first == 'start'
count = 0
async with ctx.open_stream() as stream:
delays = []
send = time.time()
await stream.send(count)
async for msg in stream:
recv = time.time()
delays.append(recv - send)
assert msg == count
count += 1
send = time.time()
await stream.send(count)
if count >= 1e3:
break
print(f'mean round trip rate (Hz): {1/mean(delays)}')
await p.cancel_actor()
if __name__ == '__main__':
trio.run(main)
Yes, we spawn a python process, run ``asyncio``, start ``trio`` on the
``asyncio`` loop, then send commands to the ``trio`` scheduled tasks to
tell ``asyncio`` tasks what to do XD
We need help refining the `asyncio`-side channel API to be more
`trio`-like. Feel free to sling your opinion in `#273`_!
.. _#273: https://github.com/goodboy/tractor/issues/273
Higher level "cluster" APIs
---------------------------
To be extra terse, the ``tractor`` devs have started hacking some "higher
level" APIs for managing actor trees/clusters. These interfaces should
generally be considered provisional for now but we encourage you to try
@ -376,12 +497,6 @@ From PyPi::
pip install tractor
To try out the (optionally) faster `msgspec`_ codec instead of the
default ``msgpack`` lib::
pip install tractor[msgspec]
From git::
pip install git+git://github.com/goodboy/tractor.git
@ -450,13 +565,22 @@ properties of the system.
What's on the TODO:
-------------------
Help us push toward the future.
Help us push toward the future of distributed `Python`.
- (Soon to land) ``asyncio`` support allowing for "infected" actors where
`trio` drives the `asyncio` scheduler via the astounding "`guest mode`_"
- Typed messaging protocols (ex. via ``msgspec``, see `#36
- Erlang-style supervisors via composed context managers (see `#22
<https://github.com/goodboy/tractor/issues/22>`_)
- Typed messaging protocols (ex. via ``msgspec.Struct``, see `#36
<https://github.com/goodboy/tractor/issues/36>`_)
- Erlang-style supervisors via composed context managers
- Typed capability-based (dialog) protocols ( see `#196
<https://github.com/goodboy/tractor/issues/196>`_ with draft work
started in `#311 <https://github.com/goodboy/tractor/pull/311>`_)
- We **recently disabled CI-testing on windows** and need help getting
it running again! (see `#327
<https://github.com/goodboy/tractor/pull/327>`_). **We do have windows
support** (and have for quite a while) but since no active hacker
exists in the user-base to help test on that OS, for now we're not
actively maintaining testing due to the added hassle and general
latency..
Feel like saying hi?
@ -468,27 +592,32 @@ say hi, please feel free to reach us in our `matrix channel`_. If
matrix seems too hip, we're also mostly all in the the `trio gitter
channel`_!
.. _structured concurrent: https://trio.discourse.group/t/concise-definition-of-structured-concurrency/228
.. _multi-processing: https://en.wikipedia.org/wiki/Multiprocessing
.. _trio: https://github.com/python-trio/trio
.. _nurseries: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/#nurseries-a-structured-replacement-for-go-statements
.. _actor model: https://en.wikipedia.org/wiki/Actor_model
.. _trio: https://github.com/python-trio/trio
.. _multi-processing: https://en.wikipedia.org/wiki/Multiprocessing
.. _trionic: https://trio.readthedocs.io/en/latest/design.html#high-level-design-principles
.. _async sandwich: https://trio.readthedocs.io/en/latest/tutorial.html#async-sandwich
.. _structured concurrent: https://trio.discourse.group/t/concise-definition-of-structured-concurrency/228
.. _3 axioms: https://www.youtube.com/watch?v=7erJ1DV_Tlo&t=162s
.. .. _3 axioms: https://en.wikipedia.org/wiki/Actor_model#Fundamental_concepts
.. _adherance to: https://www.youtube.com/watch?v=7erJ1DV_Tlo&t=1821s
.. _trio gitter channel: https://gitter.im/python-trio/general
.. _matrix channel: https://matrix.to/#/!tractor:matrix.org
.. _pdbp: https://github.com/mdmintz/pdbp
.. _pdb++: https://github.com/pdbpp/pdbpp
.. _guest mode: https://trio.readthedocs.io/en/stable/reference-lowlevel.html?highlight=guest%20mode#using-guest-mode-to-run-trio-on-top-of-other-event-loops
.. _messages: https://en.wikipedia.org/wiki/Message_passing
.. _trio docs: https://trio.readthedocs.io/en/latest/
.. _blog post: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
.. _structured concurrency: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
.. _structured concurrency: https://en.wikipedia.org/wiki/Structured_concurrency
.. _structured chadcurrency: https://en.wikipedia.org/wiki/Structured_concurrency
.. _structured concurrency: https://en.wikipedia.org/wiki/Structured_concurrency
.. _unrequirements: https://en.wikipedia.org/wiki/Actor_model#Direct_communication_and_asynchrony
.. _async generators: https://www.python.org/dev/peps/pep-0525/
.. _trio-parallel: https://github.com/richardsheridan/trio-parallel
.. _msgspec: https://jcristharif.com/msgspec/
.. _guest-mode: https://trio.readthedocs.io/en/stable/reference-lowlevel.html?highlight=guest%20mode#using-guest-mode-to-run-trio-on-top-of-other-event-loops
.. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fgoodboy%2Ftractor%2Fbadge&style=popout-square

View File

@ -22,6 +22,17 @@ release name such as `alpha3/` when there's been a sequence of
releases I've made, but it really is up to you how you like to
organize generated sdists locally.
The resulting build cmds are approximately:
.. code:: bash
python setup.py sdist -d ./dist/XXX.X/
twine upload -r testpypi dist/XXX.X/*
twine upload dist/XXX.X/*
.. _documented by twine: https://twine.readthedocs.io/en/latest/#using-twine

View File

@ -396,7 +396,7 @@ tasks spawned via multiple RPC calls to an actor can modify
# a per process cache
_actor_cache: Dict[str, bool] = {}
_actor_cache: dict[str, bool] = {}
def ping_endpoints(endpoints: List[str]):

View File

View File

@ -0,0 +1,151 @@
'''
Complex edge case where during real-time streaming the IPC transport
channels are wiped out (purposely in this example though it could have
been an outage) and we want to ensure that despite being in debug mode
(or not) the user can send SIGINT once they notice the hang and the
actor tree will eventually be cancelled without leaving any zombies.
'''
import trio
from tractor import (
open_nursery,
context,
Context,
MsgStream,
)
async def break_channel_silently_then_error(
stream: MsgStream,
):
async for msg in stream:
await stream.send(msg)
# XXX: close the channel right after an error is raised
# purposely breaking the IPC transport to make sure the parent
# doesn't get stuck in debug or hang on the connection join.
# this more or less simulates an infinite msg-receive hang on
# the other end.
await stream._ctx.chan.send(None)
assert 0
async def close_stream_and_error(
stream: MsgStream,
):
async for msg in stream:
await stream.send(msg)
# wipe out channel right before raising
await stream._ctx.chan.send(None)
await stream.aclose()
assert 0
@context
async def recv_and_spawn_net_killers(
ctx: Context,
break_ipc_after: bool | int = False,
) -> None:
'''
Receive stream msgs and spawn some IPC killers mid-stream.
'''
await ctx.started()
async with (
ctx.open_stream() as stream,
trio.open_nursery() as n,
):
async for i in stream:
print(f'child echoing {i}')
await stream.send(i)
if (
break_ipc_after
and i > break_ipc_after
):
print(
'#################################\n'
'Simulating child-side IPC BREAK!\n'
'#################################'
)
n.start_soon(break_channel_silently_then_error, stream)
n.start_soon(close_stream_and_error, stream)
async def main(
debug_mode: bool = False,
start_method: str = 'trio',
# by default we break the parent IPC first (if configured to break
# at all), but this can be changed so the child does first (even if
# both are set to break).
break_parent_ipc_after: int | bool = False,
break_child_ipc_after: int | bool = False,
) -> None:
async with (
open_nursery(
start_method=start_method,
# NOTE: even if the debugger is used we shouldn't get
# a hang since it never engages due to broken IPC
debug_mode=debug_mode,
loglevel='warning',
) as an,
):
portal = await an.start_actor(
'chitty_hijo',
enable_modules=[__name__],
)
async with portal.open_context(
recv_and_spawn_net_killers,
break_ipc_after=break_child_ipc_after,
) as (ctx, sent):
async with ctx.open_stream() as stream:
for i in range(1000):
if (
break_parent_ipc_after
and i > break_parent_ipc_after
):
print(
'#################################\n'
'Simulating parent-side IPC BREAK!\n'
'#################################'
)
await stream._ctx.chan.send(None)
# it actually breaks right here in the
# mp_spawn/forkserver backends and thus the zombie
# reaper never even kicks in?
print(f'parent sending {i}')
await stream.send(i)
with trio.move_on_after(2) as cs:
# NOTE: in the parent side IPC failure case this
# will raise an ``EndOfChannel`` after the child
# is killed and sends a stop msg back to its
# caller/this-parent.
rx = await stream.receive()
print(f"I'm a happy user and echoed to me is {rx}")
if cs.cancelled_caught:
# pretend to be a user seeing no streaming action
# thinking it's a hang, and then hitting ctl-c..
print("YOO i'm a user anddd thingz hangin..")
print(
"YOO i'm mad send side dun but thingz hangin..\n"
'MASHING CTRL-C Ctrl-c..'
)
raise KeyboardInterrupt
if __name__ == '__main__':
trio.run(main)

View File

@ -27,6 +27,17 @@ async def main():
# retreive results
async with p0.open_stream_from(breakpoint_forever) as stream:
# triggers the first name error
try:
await p1.run(name_error)
except tractor.RemoteActorError as rae:
assert rae.type is NameError
async for i in stream:
# a second time try the failing subactor and this time
# let error propagate up to the parent/nursery.
await p1.run(name_error)

View File

@ -12,18 +12,31 @@ async def breakpoint_forever():
while True:
await tractor.breakpoint()
# NOTE: if the test never sent 'q'/'quit' commands
# on the pdb repl, without this checkpoint line the
# repl would spin in this actor forever.
# await trio.sleep(0)
async def spawn_until(depth=0):
""""A nested nursery that triggers another ``NameError``.
"""
async with tractor.open_nursery() as n:
if depth < 1:
# await n.run_in_actor('breakpoint_forever', breakpoint_forever)
await n.run_in_actor(
await n.run_in_actor(breakpoint_forever)
p = await n.run_in_actor(
name_error,
name='name_error'
)
await trio.sleep(0.5)
# rx and propagate error from child
await p.result()
else:
# recursive call to spawn another process branching layer of
# the tree
depth -= 1
await n.run_in_actor(
spawn_until,
@ -53,6 +66,7 @@ async def main():
"""
async with tractor.open_nursery(
debug_mode=True,
# loglevel='cancel',
) as n:
# spawn both actors
@ -67,8 +81,16 @@ async def main():
name='spawner1',
)
# TODO: test this case as well where the parent doesn't see
# the sub-actor errors by default and instead expect a user
# ctrl-c to kill the root.
with trio.move_on_after(3):
await trio.sleep_forever()
# gah still an issue here.
await portal.result()
# should never get here
await portal1.result()

View File

@ -0,0 +1,40 @@
import trio
import tractor
@tractor.context
async def just_sleep(
ctx: tractor.Context,
**kwargs,
) -> None:
'''
Start and sleep.
'''
await ctx.started()
await trio.sleep_forever()
async def main() -> None:
async with tractor.open_nursery(
debug_mode=True,
) as n:
portal = await n.start_actor(
'ctx_child',
# XXX: we don't enable the current module in order
# to trigger `ModuleNotFound`.
enable_modules=[],
)
async with portal.open_context(
just_sleep, # taken from pytest parameterization
) as (ctx, sent):
raise KeyboardInterrupt
if __name__ == '__main__':
trio.run(main)

View File

@ -0,0 +1,27 @@
import trio
import tractor
async def die():
raise RuntimeError
async def main():
async with tractor.open_nursery() as tn:
debug_actor = await tn.start_actor(
'debugged_boi',
enable_modules=[__name__],
debug_mode=True,
)
crash_boi = await tn.start_actor(
'crash_boi',
enable_modules=[__name__],
# debug_mode=True,
)
async with trio.open_nursery() as n:
n.start_soon(debug_actor.run, die)
n.start_soon(crash_boi.run, die)
if __name__ == '__main__':
trio.run(main)

View File

@ -0,0 +1,24 @@
import os
import sys
import trio
import tractor
async def main() -> None:
async with tractor.open_nursery(debug_mode=True) as an:
assert os.environ['PYTHONBREAKPOINT'] == 'tractor._debug._set_trace'
# TODO: an assert that verifies the hook has indeed been hooked
# XD
assert sys.breakpointhook is not tractor._debug._set_trace
breakpoint()
# TODO: an assert that verifies the hook is unhooked..
assert sys.breakpointhook
breakpoint()
if __name__ == '__main__':
trio.run(main)

View File

@ -0,0 +1,50 @@
import tractor
import trio
async def gen():
yield 'yo'
await tractor.breakpoint()
yield 'yo'
await tractor.breakpoint()
@tractor.context
async def just_bp(
ctx: tractor.Context,
) -> None:
await ctx.started()
await tractor.breakpoint()
# TODO: bps and errors in this call..
async for val in gen():
print(val)
# await trio.sleep(0.5)
# prematurely destroy the connection
await ctx.chan.aclose()
# THIS CAUSES AN UNRECOVERABLE HANG
# without latest ``pdbpp``:
assert 0
async def main():
async with tractor.open_nursery(
debug_mode=True,
) as n:
p = await n.start_actor(
'bp_boi',
enable_modules=[__name__],
)
async with p.open_context(
just_bp,
) as (ctx, first):
await trio.sleep_forever()
if __name__ == '__main__':
trio.run(main)

View File

@ -0,0 +1,92 @@
'''
An SC compliant infected ``asyncio`` echo server.
'''
import asyncio
from statistics import mean
import time
import trio
import tractor
async def aio_echo_server(
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
) -> None:
# a first message must be sent **from** this ``asyncio``
# task or the ``trio`` side will never unblock from
# ``tractor.to_asyncio.open_channel_from():``
to_trio.send_nowait('start')
# XXX: this uses an ``from_trio: asyncio.Queue`` currently but we
# should probably offer something better.
while True:
# echo the msg back
to_trio.send_nowait(await from_trio.get())
await asyncio.sleep(0)
@tractor.context
async def trio_to_aio_echo_server(
ctx: tractor.Context,
):
# this will block until the ``asyncio`` task sends a "first"
# message.
async with tractor.to_asyncio.open_channel_from(
aio_echo_server,
) as (first, chan):
assert first == 'start'
await ctx.started(first)
async with ctx.open_stream() as stream:
async for msg in stream:
await chan.send(msg)
out = await chan.receive()
# echo back to parent actor-task
await stream.send(out)
async def main():
async with tractor.open_nursery() as n:
p = await n.start_actor(
'aio_server',
enable_modules=[__name__],
infect_asyncio=True,
)
async with p.open_context(
trio_to_aio_echo_server,
) as (ctx, first):
assert first == 'start'
count = 0
async with ctx.open_stream() as stream:
delays = []
send = time.time()
await stream.send(count)
async for msg in stream:
recv = time.time()
delays.append(recv - send)
assert msg == count
count += 1
send = time.time()
await stream.send(count)
if count >= 1e3:
break
print(f'mean round trip rate (Hz): {1/mean(delays)}')
await p.cancel_actor()
if __name__ == '__main__':
trio.run(main)

View File

@ -0,0 +1,49 @@
import trio
import click
import tractor
import pydantic
# from multiprocessing import shared_memory
@tractor.context
async def just_sleep(
ctx: tractor.Context,
**kwargs,
) -> None:
'''
Test a small ping-pong 2-way streaming server.
'''
await ctx.started()
await trio.sleep_forever()
async def main() -> None:
proc = await trio.open_process((
'python',
'-c',
'import trio; trio.run(trio.sleep_forever)',
))
await proc.wait()
# await trio.sleep_forever()
# async with tractor.open_nursery() as n:
# portal = await n.start_actor(
# 'rpc_server',
# enable_modules=[__name__],
# )
# async with portal.open_context(
# just_sleep, # taken from pytest parameterization
# ) as (ctx, sent):
# await trio.sleep_forever()
if __name__ == '__main__':
import time
# time.sleep(999)
trio.run(main)

View File

@ -9,7 +9,7 @@ is ``tractor``'s channels.
"""
from contextlib import asynccontextmanager
from typing import List, Callable
from typing import Callable
import itertools
import math
import time
@ -71,8 +71,8 @@ async def worker_pool(workers=4):
async def _map(
worker_func: Callable[[int], bool],
sequence: List[int]
) -> List[bool]:
sequence: list[int]
) -> list[bool]:
# define an async (local) task to collect results from workers
async def send_result(func, value, portal):

View File

@ -1,6 +0,0 @@
Fix keyboard interrupt handling in ``Portal.open_context()`` blocks.
Previously this was not triggering cancellation of the remote task context
and could result in hangs if a stream was also opened. This fix is to
accept `BaseException` since it is likely that any other top level exception
other than KBI (even though not expected) should also get this treatment.

View File

@ -1,9 +0,0 @@
Add a custom 'CANCEL' log level and use through runtime.
In order to reduce log messages and also start toying with the idea of
"application layer" oriented tracing, we added this new level just above
'runtime' but just below 'info'. It is intended to be used solely for
cancellation and teardown related messages. Included are some small
overrides to the stdlib's ``logging.LoggerAdapter`` to pass through the
correct stack frame to show when one of the custom level methods is
used.
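As a rough illustration of the general idea (this is a plain stdlib sketch,
not ``tractor``'s actual internals; the numeric level and adapter name below
are made up):

.. code:: python

    import logging

    # ASSUMED values for illustration: a level slotted just below INFO (20)
    CANCEL: int = 16
    logging.addLevelName(CANCEL, 'CANCEL')


    class CancelAdapter(logging.LoggerAdapter):
        def cancel(self, msg: str, *args, **kwargs) -> None:
            # bump ``stacklevel`` so the emitted record points closer to
            # the *caller's* frame instead of this adapter method
            self.log(CANCEL, msg, *args, stacklevel=2, **kwargs)


    logging.basicConfig(level=CANCEL)
    log = CancelAdapter(logging.getLogger('demo'), {})
    log.cancel('tearing down subactor..')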

View File

@ -1,37 +0,0 @@
Add cross-actor-task ``Context`` oriented error relay, a new
stream overrun error-signal ``StreamOverrun``, and support
disabling ``MsgStream`` backpressure as the default before a stream
is opened or by choice of the user.
We added stricter semantics around ``tractor.Context.open_stream():``
particularly to do with streams which are only opened at one end.
Previously, if only one end opened a stream there was no way for that
sender to know if msgs are being received until first, the feeder mem
chan on the receiver side hit a backpressure state and then that
condition delayed its msg loop processing task to eventually create
backpressure on the associated IPC transport. This is non-ideal in the
case where the receiver side never opened a stream by mistake since it
results in silent block of the sender and no adherence to the underlying
mem chan buffer size settings (which is still unsolved btw).
To solve this we add non-backpressure style message pushing inside
``Actor._push_result()`` by default and only use the backpressure
``trio.MemorySendChannel.send()`` call **iff** the local end of the
context has entered ``Context.open_stream():``. This way if the stream
was never opened but the mem chan is overrun, we relay back to the
sender a (new exception) ``StreamOverrun`` error which is raised in the
sender's scope with a special error message about the stream never
having been opened. Further, this behaviour (non-backpressure style
where senders can expect an error on overruns) can now be enabled with
``.open_stream(backpressure=False)`` and the underlying mem chan size
can be specified with a kwarg ``msg_buffer_size: int``.
Further bug fixes and enhancements in this changeset include:
- fix a race we were ignoring where if the callee task opened a context
it could enter ``Context.open_stream()`` before calling
``.started()``.
- Disallow calling ``Context.started()`` more than once.
- Enable ``Context`` linked tasks error relaying via the new
``Context._maybe_raise_from_remote_msg()`` which (for now) uses
a simple ``trio.Nursery.start_soon()`` to raise the error via closure
in the local scope.
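A rough usage sketch of the new knobs described above (the ``slow_echoer``
task is made up for illustration and the exact overrun-exception import path
isn't shown in this changeset):

.. code:: python

    import trio
    import tractor


    @tractor.context
    async def slow_echoer(
        ctx: tractor.Context,
    ) -> None:
        await ctx.started()
        # opt out of backpressure and cap the feeder mem chan size: if
        # this side overruns, the remote sender should now be relayed an
        # overrun error instead of silently blocking.
        async with ctx.open_stream(
            backpressure=False,
            msg_buffer_size=4,
        ) as stream:
            async for msg in stream:
                await trio.sleep(1)  # deliberately too slow a consumer
                await stream.send(msg)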

View File

@ -1,8 +0,0 @@
Fix ``Portal.run_in_actor()`` returns ``None`` result.
``None`` was being used as the cached result flag and obviously breaks
on a ``None`` returned from the remote target task. This would cause an
infinite hang if user code ever called ``Portal.result()`` *before* the
nursery exit. The simple fix is to use the *return message* as the
initial "no-result-received-yet" flag value and, once received, the
return value is read from the message to avoid the cache logic error.
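The idea behind the fix, sketched generically (not ``tractor``'s actual
internals): use a dedicated sentinel rather than ``None`` as the "no result
yet" marker so a legitimate ``None`` return can never be confused with "not
received".

.. code:: python

    # generic sketch: a unique sentinel avoids the ``None``-result ambiguity
    _NOT_RECEIVED = object()


    class ResultBox:
        def __init__(self) -> None:
            self._value = _NOT_RECEIVED

        def set(self, value) -> None:
            self._value = value

        def get(self):
            if self._value is _NOT_RECEIVED:
                raise RuntimeError('no result received yet')
            # a ``None`` return is now a perfectly valid cached result
            return self._value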

View File

@ -1,12 +0,0 @@
Fix graceful cancellation of daemon actors
Previously, this was a bug where if the soft wait on a sub-process (the
``await .proc.wait()``) in the reaper task teardown was cancelled we
would fail over to the hard reaping sequence (meant for culling off any
potential zombies via system kill signals). The hard reap has a timeout
of 3s (currently though in theory we could make it shorter?) before
system signalling kicks in. This means that any daemon actor still
running during nursery exit would get hard reaped (3s later) instead of
cancelled via IPC message. Now we catch the ``trio.Cancelled``, call
``Portal.cancel_actor()`` on the daemon and expect the child to
self-terminate after the runtime cancels and shuts down the process.

View File

@ -1,16 +0,0 @@
This (finally) adds fully acknowledged remote cancellation messaging
support for both explicit ``Portal.cancel_actor()`` calls as well as
when there is a "runtime-wide" cancellations (eg. during KBI or general
actor nursery exception handling which causes a full actor
"crash"/termination).
You can think of this as the most ideal case in 2-generals where the
actor requesting the cancel of its child is able to always receive back
the ACK to that request. This leads to a more deterministic shutdown of
the child where the parent is able to wait for the child to fully
respond to the request. On a localhost setup, where the parent can
monitor the state of the child through process or other OS APIs instead
of solely through IPC messaging, the parent can know whether or not the
child decided to cancel with more certainty. In the case of separate
hosts, we still rely on a simple timeout approach until such a time
where we prefer to get "fancier".

View File

@ -0,0 +1,16 @@
Strictly support Python 3.10+, start runtime machinery reorg
Since we want to push forward using the new `match:` syntax for our
internal RPC-msg loops, we officially drop 3.9 support for the next
release which should coincide well with the first release of 3.11.
This patch set also officially removes the ``tractor.run()`` API (which
has been deprecated for some time) as well as starts an initial re-org
of the internal runtime core by:
- renaming ``tractor._actor`` -> ``._runtime``
- renaming ``._runtime.Actor._process_messages()`` and
``._async_main()`` to be module level singleton-task-functions since
they are only started once for each connection and actor spawn
respectively; this internal API thus looks more similar to (at the
time of writing) the ``trio``-internals in ``trio._core._run``.
- officially remove ``tractor.run()``, now deprecated for some time.
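For flavour, the kind of ``match:`` based msg dispatch the reorg is aiming
for might look roughly like this (the message shapes below are invented for
illustration and are not the runtime's actual wire format):

.. code:: python

    async def process_messages(chan) -> None:
        # ``chan`` is assumed to async-iterate ``dict`` msgs
        async for msg in chan:
            match msg:
                case {'cmd': 'cancel', 'cid': cid}:
                    print(f'cancel requested for {cid}')
                case {'return': value, 'cid': cid}:
                    print(f'task {cid} returned {value!r}')
                case _:
                    raise RuntimeError(f'unknown msg: {msg!r}')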

View File

@ -0,0 +1,4 @@
Only set `._debug.Lock.local_pdb_complete` if has been created.
This can be triggered by a very rare race condition (and thus we have no
working test yet) but it is known to exist in (a) consumer project(s).

View File

@ -0,0 +1,25 @@
Add support for ``trio >= 0.22`` and support for the new Python 3.11
``[Base]ExceptionGroup`` from `pep 654`_ via the backported
`exceptiongroup`_ package and some final fixes to the debug mode
subsystem.
This port ended up driving some (hopefully) final fixes to our debugger
subsystem including the solution to all lingering stdstreams locking
race-conditions and deadlock scenarios. This includes extending the
debugger tests suite as well as cancellation and ``asyncio`` mode cases.
Some of the notable details:
- always reverting to the ``trio`` SIGINT handler when leaving debug
mode.
- bypassing child attempts to acquire the debug lock when detected
to be amidst actor-runtime-cancellation.
- allowing the root actor to cancel local but IPC-stale subactor
requests-tasks for the debug lock when in a "no IPC peers" state.
Further we refined our ``ActorNursery`` semantics to be more similar to
``trio`` in the sense that parent task errors are always packed into the
actor-nursery emitted exception group and adjusted all tests and
examples accordingly.
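In user code this mostly means catching ``BaseExceptionGroup`` where
``trio.MultiError`` used to show up; a minimal sketch (depending on the exact
failure mode you may instead see a single boxed ``RemoteActorError``, see the
updated tests further below):

.. code:: python

    import trio
    import tractor
    from exceptiongroup import BaseExceptionGroup  # backport; builtin on 3.11+


    async def fail() -> None:
        assert 0


    async def main() -> None:
        try:
            async with tractor.open_nursery() as n:
                await n.run_in_actor(fail, name='fail_1')
                await n.run_in_actor(fail, name='fail_2')
        except BaseExceptionGroup as beg:
            print(f'caught {len(beg.exceptions)} packed errors')


    if __name__ == '__main__':
        trio.run(main)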
.. _pep 654: https://peps.python.org/pep-0654/#handling-exception-groups
.. _exceptiongroup: https://github.com/python-trio/exceptiongroup

View File

@ -0,0 +1,5 @@
Establish an explicit "backend spawning" method table; use it from CI
More clearly lays out the current set of (3) backends: ``['trio',
'mp_spawn', 'mp_forkserver']`` and adjusts the ``._spawn.py`` internals
as well as the test suite to accommodate.

View File

@ -0,0 +1,4 @@
Add ``key: Callable[..., Hashable]`` support to ``.trionics.maybe_open_context()``
Gives users finer grained control over cache hit behaviour using
a callable which receives the input ``kwargs: dict``.
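A rough usage sketch, assuming a call signature along the lines of
``maybe_open_context(acm_func, kwargs, key=...)`` which yields a
``(cache_hit, value)`` pair and calls ``key`` with the input kwargs; the
``open_feed()`` manager below is made up:

.. code:: python

    from contextlib import asynccontextmanager as acm

    from tractor.trionics import maybe_open_context


    @acm
    async def open_feed(symbol: str, depth: int):
        # pretend this is an expensive-to-build resource
        yield f'feed-for-{symbol}'


    async def consumer_task() -> None:
        # ASSUMPTION: the ``key`` callable is handed the input ``kwargs``
        # and its hashable return decides cache hits, so two callers
        # differing only in ``depth`` would share one ``open_feed()`` here.
        async with maybe_open_context(
            acm_func=open_feed,
            kwargs={'symbol': 'btcusdt', 'depth': 10},
            key=lambda **kwargs: kwargs['symbol'],
        ) as (cache_hit, feed):
            print(f'{cache_hit=} {feed=}')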

View File

@ -0,0 +1,41 @@
Add support for debug-lock blocking using a ``._debug.Lock._blocked:
set[tuple]`` and adding actor ids to it when no-more IPC connections with
the root actor are detected.
This is an enhancement which (mostly) solves a lingering debugger
locking race case we needed to handle:
- child crashes acquires TTY lock in root and attaches to ``pdb``
- child IPC goes down such that all channels to the root are broken
/ non-functional.
- root is stuck thinking the child is still in debug even though it
can't be contacted and the child actor machinery hasn't been
cancelled by its parent.
- root gets stuck in deadlock with child since it won't send a cancel
request until the child is finished debugging (to avoid clobbering
a child that is actually using the debugger), but the child can't
unlock the debugger bc IPC is down and it can't contact the root.
To avoid this scenario add debug lock blocking list via
`._debug.Lock._blocked: set[tuple]` which holds actor uids for any actor
that is detected by the root as having no transport channel connections
(of which at least one should exist if this sub-actor at some point
acquired the debug lock). The root consequently checks this list for any
actor that tries to (re)acquire the lock and blocks with
a ``ContextCancelled``. Further, when a debug condition is tested in
``._runtime._invoke``, the context's ``._enter_debugger_on_cancel`` is
set to `False` if the actor was put on the block list then all
post-mortem / crash handling will be bypassed for that task.
In theory this approach to block list management may cause problems
where some nested child actor acquires and releases the lock multiple
times and it gets stuck on the block list after the first use? If this
turns out to be an issue we can try changing the strat so blocks are
only added when the root has zero IPC peers left?
Further, this adds a root-locking-task side cancel scope,
``Lock._root_local_task_cs_in_debug``, which can be ``.cancel()``-ed by the root
runtime when a stale lock is detected during the IPC channel testing.
However, right now we're NOT using this since it seems to cause test
failures likely due to causing pre-mature cancellation and maybe needs
a bit more experimenting?

View File

@ -0,0 +1,19 @@
Rework our ``.trionics.BroadcastReceiver`` internals to avoid method
recursion and approach a design and interface closer to ``trio``'s
``MemoryReceiveChannel``.
The details of the internal changes include:
- implementing a ``BroadcastReceiver.receive_nowait()`` and using it
within the async ``.receive()`` thus avoiding recursion from
``.receive()``.
- failing over to an internal ``._receive_from_underlying()`` when the
``_nowait()`` call raises ``trio.WouldBlock``
- adding ``BroadcastState.statistics()`` for debugging and testing, both
of the internals and for use by users.
- add an internal ``BroadcastReceiver._raise_on_lag: bool`` which can be
set to avoid ``Lagged`` raising for possible use cases where a user
wants to opt for a [cheap or nasty
pattern](https://zguide.zeromq.org/docs/chapter7/#The-Cheap-or-Nasty-Pattern)
for the particular stream (we use this in ``piker``'s dark clearing
engine to avoid fast feeds breaking during HFT periods).
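A small consumer-side sketch of the reworked interface (assuming
``receive_nowait()`` surfaces ``trio.WouldBlock`` to the caller when nothing
is buffered, per the failover note above):

.. code:: python

    import trio
    import tractor


    async def consume(stream: tractor.MsgStream) -> None:
        # each local task can clone the stream via ``.subscribe()`` which
        # hands back a ``tractor.trionics.BroadcastReceiver``
        async with stream.subscribe() as rx:
            while True:
                try:
                    # the new non-recursive sync path from this rework
                    msg = rx.receive_nowait()
                except trio.WouldBlock:
                    msg = await rx.receive()
                if msg == 'done':
                    break
                print(f'got {msg}')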

View File

@ -0,0 +1,11 @@
Always ``list``-cast the ``mngrs`` input to
``.trionics.gather_contexts()`` and ensure it is non-empty, otherwise raise
a ``ValueError``.
Turns out that trying to pass an inline-style generator comprehension
doesn't seem to work inside the ``async with`` expression? Further, in
such a case we can get a hang waiting on the all-entered event
completion when the internal mngrs iteration is a noop. Instead we
always greedily check a size and error on empty input; the lazy
iteration of a generator input is not beneficial anyway since we're
entering all manager instances in concurrent tasks.
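In practice this just means building the ``mngrs`` input eagerly; a minimal
sketch (``open_thing()`` is a stand-in manager):

.. code:: python

    from contextlib import asynccontextmanager as acm

    import trio
    from tractor.trionics import gather_contexts


    @acm
    async def open_thing(i: int):
        yield i


    async def main() -> None:
        # pass an eagerly-built ``list`` (not an inline generator
        # comprehension) so the input size is known up front; empty input
        # now raises ``ValueError`` instead of hanging on the all-entered
        # event.
        async with gather_contexts(
            mngrs=[open_thing(i) for i in range(3)],
        ) as values:
            print(values)


    if __name__ == '__main__':
        trio.run(main)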

View File

@ -0,0 +1,15 @@
Fixes to ensure IPC (channel) breakage doesn't result in hung actor
trees; the zombie reaping and general supervision machinery will always
clean up and terminate.
This includes not only the (mostly minor) fixes to solve these cases but
also a new extensive test suite in `test_advanced_faults.py` with an
accompanying highly configurable example module-script in
`examples/advanced_faults/ipc_failure_during_stream.py`. Tests ensure we
never get hangs or zombies despite operating in debug mode, and attempt to
simulate all possible IPC transport failure cases for a local-host actor
tree.
Further we simplify `Context.open_stream.__aexit__()` to just call
`MsgStream.aclose()` directly more or less avoiding a pure duplicate
code path.

View File

@ -0,0 +1,10 @@
Always redraw the `pdbpp` prompt on `SIGINT` during REPL use.
There were recent changes to do with Python 3.10 that required us to pin
to a specific commit in `pdbpp`; those have recently been fixed minus
this last issue with `SIGINT` shielding: not clobbering or not
showing the `(Pdb++)` prompt on ctrl-c by the user.
that by firstly removing the standard KBI intercepting of the std lib's
`pdb.Pdb._cmdloop()` as well as ensuring that only the actor with REPL
control ever reports `SIGINT` handler log msgs and prompt redraws. With
this we move back to using pypi `pdbpp` release.

View File

@ -0,0 +1,7 @@
Drop `trio.Process.aclose()` usage, copy into our spawning code.
The details are laid out in https://github.com/goodboy/tractor/issues/330.
`trio` changed its process-running machinery quite some time ago; this just copies
out the small bit we needed (from the old `.aclose()`) for hard kills
where a soft runtime cancel request fails and our "zombie killer"
implementation kicks in.

View File

@ -0,0 +1,15 @@
Switch to using the fork & fix of `pdb++`, `pdbp`:
https://github.com/mdmintz/pdbp
Allows us to sidestep a variety of issues that aren't being fixed
in the upstream project, thanks to the hard work of @mdmintz!
We also include some default settings adjustments as per recent
development on the fork:
- sticky mode is still turned on by default but now activates when
using the `ll` repl command.
- turn off line truncation by default to avoid inter-line gaps when
resizing the terminal during use.
- when using the backtrace cmd either by `w` or `bt`, the config
automatically switches to non-sticky mode.

37
nooz/_template.rst 100644
View File

@ -0,0 +1,37 @@
{% for section in sections %}
{% set underline = "-" %}
{% if section %}
{{section}}
{{ underline * section|length }}{% set underline = "~" %}
{% endif %}
{% if sections[section] %}
{% for category, val in definitions.items() if category in sections[section] %}
{{ definitions[category]['name'] }}
{{ underline * definitions[category]['name']|length }}
{% if definitions[category]['showcontent'] %}
{% for text, values in sections[section][category]|dictsort(by='value') %}
{% set issue_joiner = joiner(', ') %}
- {% for value in values|sort %}{{ issue_joiner() }}`{{ value }} <https://github.com/goodboy/tractor/issues/{{ value[1:] }}>`_{% endfor %}: {{ text }}
{% endfor %}
{% else %}
- {{ sections[section][category]['']|sort|join(', ') }}
{% endif %}
{% if sections[section][category]|length == 0 %}
No significant changes.
{% else %}
{% endif %}
{% endfor %}
{% else %}
No significant changes.
{% endif %}
{% endfor %}

28
pyproject.toml 100644
View File

@ -0,0 +1,28 @@
[tool.towncrier]
package = "tractor"
filename = "NEWS.rst"
directory = "nooz/"
version = "0.1.0a6"
title_format = "tractor {version} ({project_date})"
template = "nooz/_template.rst"
all_bullets = true
[[tool.towncrier.type]]
directory = "feature"
name = "Features"
showcontent = true
[[tool.towncrier.type]]
directory = "bugfix"
name = "Bug Fixes"
showcontent = true
[[tool.towncrier.type]]
directory = "doc"
name = "Improved Documentation"
showcontent = true
[[tool.towncrier.type]]
directory = "trivial"
name = "Trivial/Internal Changes"
showcontent = true

View File

@ -1,6 +1,7 @@
pytest
pytest-trio
pdbpp
pytest-timeout
pdbp
mypy
trio_typing
pexpect

View File

@ -1,21 +1,22 @@
#!/usr/bin/env python
#
# tractor: a trionic actor model built on `multiprocessing` and `trio`
# tractor: structured concurrent "actors".
#
# Copyright (C) 2018-2020 Tyler Goodlet
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from setuptools import setup
with open('docs/README.rst', encoding='utf-8') as f:
@ -24,54 +25,62 @@ with open('docs/README.rst', encoding='utf-8') as f:
setup(
name="tractor",
version='0.1.0a3', # alpha zone
description='structured concurrent "actors"',
version='0.1.0a6dev0', # alpha zone
description='structured concurrent `trio`-"actors"',
long_description=readme,
license='GPLv3',
license='AGPLv3',
author='Tyler Goodlet',
maintainer='Tyler Goodlet',
maintainer_email='jgbt@protonmail.com',
maintainer_email='goodboy_foss@protonmail.com',
url='https://github.com/goodboy/tractor',
platforms=['linux', 'windows'],
packages=[
'tractor',
'tractor.experimental',
'tractor.trionics',
'tractor.testing',
],
install_requires=[
# trio related
'trio>0.8',
# proper range spec:
# https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/#id5
'trio >= 0.22',
'async_generator',
'trio_typing',
'exceptiongroup',
# tooling
'tricycle',
'trio_typing',
# tooling
'colorlog',
'wrapt',
'pdbpp',
# serialization
'msgpack',
# IPC serialization
'msgspec',
# debug mode REPL
'pdbp',
# pip ref docs on these specs:
# https://pip.pypa.io/en/stable/reference/requirement-specifiers/#examples
# and pep:
# https://peps.python.org/pep-0440/#version-specifiers
# windows deps workaround for ``pdbpp``
# https://github.com/pdbpp/pdbpp/issues/498
# https://github.com/pdbpp/fancycompleter/issues/37
'pyreadline3 ; platform_system == "Windows"',
],
extras_require={
# serialization
'msgspec': ["msgspec >= 0.3.2'; python_version >= '3.9'"],
},
tests_require=['pytest'],
python_requires=">=3.8",
python_requires=">=3.10",
keywords=[
'trio',
"async",
"concurrency",
"actor model",
"distributed",
'async',
'concurrency',
'structured concurrency',
'actor model',
'distributed',
'multiprocessing'
],
classifiers=[
@ -79,11 +88,10 @@ setup(
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Framework :: Trio",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Topic :: System :: Distributed Computing",

View File

@ -7,16 +7,91 @@ import os
import random
import signal
import platform
import pathlib
import time
import inspect
from functools import partial, wraps
import pytest
import trio
import tractor
# export for tests
from tractor.testing import tractor_test # noqa
pytest_plugins = ['pytester']
def tractor_test(fn):
"""
Use:
@tractor_test
async def test_whatever():
await ...
If fixtures:
- ``arb_addr`` (a socket addr tuple where arbiter is listening)
- ``loglevel`` (logging level passed to tractor internals)
- ``start_method`` (subprocess spawning backend)
are defined in the `pytest` fixture space they will be automatically
injected to tests declaring these funcargs.
"""
@wraps(fn)
def wrapper(
*args,
loglevel=None,
arb_addr=None,
start_method=None,
**kwargs
):
# __tracebackhide__ = True
if 'arb_addr' in inspect.signature(fn).parameters:
# injects test suite fixture value to test as well
# as `run()`
kwargs['arb_addr'] = arb_addr
if 'loglevel' in inspect.signature(fn).parameters:
# allows test suites to define a 'loglevel' fixture
# that activates the internal logging
kwargs['loglevel'] = loglevel
if start_method is None:
if platform.system() == "Windows":
start_method = 'trio'
if 'start_method' in inspect.signature(fn).parameters:
# set of subprocess spawning backends
kwargs['start_method'] = start_method
if kwargs:
# use explicit root actor start
async def _main():
async with tractor.open_root_actor(
# **kwargs,
arbiter_addr=arb_addr,
loglevel=loglevel,
start_method=start_method,
# TODO: only enable when pytest is passed --pdb
# debug_mode=True,
):
await fn(*args, **kwargs)
main = _main
else:
# use implicit root actor start
main = partial(fn, *args, **kwargs)
return trio.run(main)
return wrapper
_arb_addr = '127.0.0.1', random.randint(1000, 9999)
@ -39,20 +114,27 @@ no_windows = pytest.mark.skipif(
)
def repodir():
"""Return the abspath to the repo directory.
"""
dirname = os.path.dirname
dirpath = os.path.abspath(
dirname(dirname(os.path.realpath(__file__)))
)
return dirpath
def repodir() -> pathlib.Path:
'''
Return the abspath to the repo directory.
'''
# 2 parents up to step up through tests/<repo_dir>
return pathlib.Path(__file__).parent.parent.absolute()
def examples_dir() -> pathlib.Path:
'''
Return the abspath to the examples directory as `pathlib.Path`.
'''
return repodir() / 'examples'
def pytest_addoption(parser):
parser.addoption(
"--ll", action="store", dest='loglevel',
default=None, help="logging level to set when testing"
default='ERROR', help="logging level to set when testing"
)
parser.addoption(
@ -64,10 +146,6 @@ def pytest_addoption(parser):
def pytest_configure(config):
backend = config.option.spawn_backend
if backend == 'mp':
tractor._spawn.try_set_start_method('spawn')
elif backend == 'trio':
tractor._spawn.try_set_start_method(backend)
@ -75,20 +153,24 @@ def pytest_configure(config):
def loglevel(request):
orig = tractor.log._default_loglevel
level = tractor.log._default_loglevel = request.config.option.loglevel
tractor.log.get_console_log(level)
yield level
tractor.log._default_loglevel = orig
@pytest.fixture(scope='session')
def spawn_backend(request):
def spawn_backend(request) -> str:
return request.config.option.spawn_backend
_ci_env: bool = os.environ.get('CI', False)
@pytest.fixture(scope='session')
def ci_env() -> bool:
"""Detect CI envoirment.
"""
return os.environ.get('TRAVIS', False) or os.environ.get('CI', False)
return _ci_env
@pytest.fixture(scope='session')
@ -98,24 +180,24 @@ def arb_addr():
def pytest_generate_tests(metafunc):
spawn_backend = metafunc.config.option.spawn_backend
if not spawn_backend:
# XXX some weird windows bug with `pytest`?
spawn_backend = 'mp'
assert spawn_backend in ('mp', 'trio')
spawn_backend = 'trio'
# TODO: maybe just use the literal `._spawn.SpawnMethodKey`?
assert spawn_backend in (
'mp_spawn',
'mp_forkserver',
'trio',
)
# NOTE: used to be used to dynamically parametrize tests for when
# you just passed --spawn-backend=`mp` on the cli, but now we expect
# that cli input to be manually specified, BUT, maybe we'll do
# something like this again in the future?
if 'start_method' in metafunc.fixturenames:
if spawn_backend == 'mp':
from multiprocessing import get_all_start_methods
methods = get_all_start_methods()
if 'fork' in methods:
# fork not available on windows, so check before
# removing XXX: the fork method is in general
# incompatible with trio's global scheduler state
methods.remove('fork')
elif spawn_backend == 'trio':
methods = ['trio']
metafunc.parametrize("start_method", methods, scope='module')
metafunc.parametrize("start_method", [spawn_backend], scope='module')
def sig_prog(proc, sig):
@ -131,16 +213,22 @@ def sig_prog(proc, sig):
@pytest.fixture
def daemon(loglevel, testdir, arb_addr):
"""Run a daemon actor as a "remote arbiter".
"""
def daemon(
loglevel: str,
testdir,
arb_addr: tuple[str, int],
):
'''
Run a daemon actor as a "remote arbiter".
'''
if loglevel in ('trace', 'debug'):
# too much logging will lock up the subproc (smh)
loglevel = 'info'
cmdargs = [
sys.executable, '-c',
"import tractor; tractor.run_daemon([], arbiter_addr={}, loglevel={})"
"import tractor; tractor.run_daemon([], registry_addr={}, loglevel={})"
.format(
arb_addr,
"'{}'".format(loglevel) if loglevel else None)

View File

@ -0,0 +1,193 @@
'''
Sketchy network blackoutz, ugly byzantine gens, can you hear the
cancellation?..
'''
from functools import partial
import pytest
from _pytest.pathlib import import_path
import trio
import tractor
from conftest import (
examples_dir,
)
@pytest.mark.parametrize(
'debug_mode',
[False, True],
ids=['no_debug_mode', 'debug_mode'],
)
@pytest.mark.parametrize(
'ipc_break',
[
# no breaks
{
'break_parent_ipc_after': False,
'break_child_ipc_after': False,
},
# only parent breaks
{
'break_parent_ipc_after': 500,
'break_child_ipc_after': False,
},
# only child breaks
{
'break_parent_ipc_after': False,
'break_child_ipc_after': 500,
},
# both: break parent first
{
'break_parent_ipc_after': 500,
'break_child_ipc_after': 800,
},
# both: break child first
{
'break_parent_ipc_after': 800,
'break_child_ipc_after': 500,
},
],
ids=[
'no_break',
'break_parent',
'break_child',
'break_both_parent_first',
'break_both_child_first',
],
)
def test_ipc_channel_break_during_stream(
debug_mode: bool,
spawn_backend: str,
ipc_break: dict | None,
):
'''
Ensure we can have an IPC channel break its connection during
streaming and it's still possible for the (simulated) user to kill
the actor tree using SIGINT.
We also verify the type of connection error expected in the parent
depending on which side of the IPC breaks first.
'''
if spawn_backend != 'trio':
if debug_mode:
pytest.skip('`debug_mode` only supported on `trio` spawner')
# non-`trio` spawners should never hit the hang condition that
# requires the user to do ctl-c to cancel the actor tree.
expect_final_exc = trio.ClosedResourceError
mod = import_path(
examples_dir() / 'advanced_faults' / 'ipc_failure_during_stream.py',
root=examples_dir(),
)
expect_final_exc = KeyboardInterrupt
# when ONLY the child breaks we expect the parent to get a closed
# resource error on the next `MsgStream.receive()` and then fail out
# and cancel the child from there.
if (
# only child breaks
(
ipc_break['break_child_ipc_after']
and ipc_break['break_parent_ipc_after'] is False
)
# both break but, parent breaks first
or (
ipc_break['break_child_ipc_after'] is not False
and (
ipc_break['break_parent_ipc_after']
> ipc_break['break_child_ipc_after']
)
)
):
expect_final_exc = trio.ClosedResourceError
# when the parent IPC side dies (even if the child's does as well
# but the child fails BEFORE the parent) we expect the channel to be
# sent a stop msg from the child at some point which will signal the
# parent that the stream has been terminated.
# NOTE: when the parent breaks "after" the child you get this same
# case as well, the child breaks the IPC channel with a stop msg
# before any closure takes place.
elif (
# only parent breaks
(
ipc_break['break_parent_ipc_after']
and ipc_break['break_child_ipc_after'] is False
)
# both break but, child breaks first
or (
ipc_break['break_parent_ipc_after'] is not False
and (
ipc_break['break_child_ipc_after']
> ipc_break['break_parent_ipc_after']
)
)
):
expect_final_exc = trio.EndOfChannel
with pytest.raises(expect_final_exc):
trio.run(
partial(
mod.main,
debug_mode=debug_mode,
start_method=spawn_backend,
**ipc_break,
)
)
@tractor.context
async def break_ipc_after_started(
ctx: tractor.Context,
) -> None:
await ctx.started()
async with ctx.open_stream() as stream:
await stream.aclose()
await trio.sleep(0.2)
await ctx.chan.send(None)
print('child broke IPC and terminating')
def test_stream_closed_right_after_ipc_break_and_zombie_lord_engages():
'''
Verify that if a subactor's IPC goes down just after bringing up a stream
the parent can trigger a SIGINT and the child will be reaped out-of-IPC by
the localhost process supervision machinery: aka "zombie lord".
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'ipc_breaker',
enable_modules=[__name__],
)
with trio.move_on_after(1):
async with (
portal.open_context(
break_ipc_after_started
) as (ctx, sent),
):
async with ctx.open_stream():
await trio.sleep(0.5)
print('parent waiting on context')
print('parent exited context')
raise KeyboardInterrupt
with pytest.raises(KeyboardInterrupt):
trio.run(main)

View File

@ -1,15 +1,20 @@
"""
'''
Advanced streaming patterns using bidirectional streams and contexts.
"""
'''
from collections import Counter
import itertools
from typing import Set, Dict, List
import platform
import trio
import tractor
_registry: Dict[str, Set[tractor.ReceiveMsgStream]] = {
def is_win():
return platform.system() == 'Windows'
_registry: dict[str, set[tractor.MsgStream]] = {
'even': set(),
'odd': set(),
}
@ -71,7 +76,7 @@ async def subscribe(
async def consumer(
subs: List[str],
subs: list[str],
) -> None:
@ -172,14 +177,22 @@ async def one_task_streams_and_one_handles_reqresp(
def test_reqresp_ontopof_streaming():
'''Test a subactor that both streams with one task and
'''
Test a subactor that both streams with one task and
spawns another which handles a small requests-response
dialogue over the same bidir-stream.
'''
async def main():
with trio.move_on_after(2):
# flag to make sure we get at least one pong
got_pong: bool = False
timeout: int = 2
if is_win(): # smh
timeout = 4
with trio.move_on_after(timeout):
async with tractor.open_nursery() as n:
# name of this actor will be same as target func
@ -188,9 +201,6 @@ def test_reqresp_ontopof_streaming():
enable_modules=[__name__]
)
# flat to make sure we get at least one pong
got_pong: bool = False
async with portal.open_context(
one_task_streams_and_one_handles_reqresp,
@ -242,8 +252,12 @@ def test_sigint_both_stream_types():
side-by-side will cancel correctly from SIGINT.
'''
timeout: float = 2
if is_win(): # smh
timeout += 1
async def main():
with trio.fail_after(2):
with trio.fail_after(timeout):
async with tractor.open_nursery() as n:
# name of this actor will be same as target func
portal = await n.start_actor(
@ -269,3 +283,98 @@ def test_sigint_both_stream_types():
assert 0, "Didn't receive KBI!?"
except KeyboardInterrupt:
pass
@tractor.context
async def inf_streamer(
ctx: tractor.Context,
) -> None:
'''
Stream increasing ints until terminated with a 'done' msg.
'''
await ctx.started()
async with (
ctx.open_stream() as stream,
trio.open_nursery() as n,
):
async def bail_on_sentinel():
async for msg in stream:
if msg == 'done':
await stream.aclose()
else:
print(f'streamer received {msg}')
# start termination detector
n.start_soon(bail_on_sentinel)
for val in itertools.count():
try:
await stream.send(val)
except trio.ClosedResourceError:
# close out the stream gracefully
break
print('terminating streamer')
def test_local_task_fanout_from_stream():
'''
Single stream with multiple local consumer tasks using the
``MsgStream.subscribe()`` api.
Ensure all tasks receive all values after stream completes sending.
'''
consumers = 22
async def main():
counts = Counter()
async with tractor.open_nursery() as tn:
p = await tn.start_actor(
'inf_streamer',
enable_modules=[__name__],
)
async with (
p.open_context(inf_streamer) as (ctx, _),
ctx.open_stream() as stream,
):
async def pull_and_count(name: str):
# name = trio.lowlevel.current_task().name
async with stream.subscribe() as recver:
assert isinstance(
recver,
tractor.trionics.BroadcastReceiver
)
async for val in recver:
# print(f'{name}: {val}')
counts[name] += 1
print(f'{name} bcaster ended')
print(f'{name} completed')
with trio.fail_after(3):
async with trio.open_nursery() as nurse:
for i in range(consumers):
nurse.start_soon(pull_and_count, i)
await trio.sleep(0.5)
print('\nterminating')
await stream.send('done')
print('closed stream connection')
assert len(counts) == consumers
mx = max(counts.values())
# make sure each task received all stream values
assert all(val == mx for val in counts.values())
await p.cancel_actor()
trio.run(main)

View File

@ -8,6 +8,10 @@ import platform
import time
from itertools import repeat
from exceptiongroup import (
BaseExceptionGroup,
ExceptionGroup,
)
import pytest
import trio
import tractor
@ -15,6 +19,10 @@ import tractor
from conftest import tractor_test, no_windows
def is_win():
return platform.system() == 'Windows'
async def assert_err(delay=0):
await trio.sleep(delay)
assert 0
@ -52,29 +60,49 @@ def test_remote_error(arb_addr, args_err):
arbiter_addr=arb_addr,
) as nursery:
# on a remote type error caused by bad input args
# this should raise directly which means we **don't** get
# an exception group outside the nursery since the error
# here and the far end task error are one in the same?
portal = await nursery.run_in_actor(
assert_err, name='errorer', **args
)
# get result(s) from main task
try:
# this means the root actor will also raise a local
# parent task error and thus an eg will propagate out
# of this actor nursery.
await portal.result()
except tractor.RemoteActorError as err:
assert err.type == errtype
print("Look Maa that actor failed hard, hehh")
raise
# ensure boxed errors
if args:
with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main)
# ensure boxed error is correct
assert excinfo.value.type == errtype
else:
# the root task will also error on the `.result()` call
# so we expect an error from there AND the child.
with pytest.raises(BaseExceptionGroup) as excinfo:
trio.run(main)
# ensure boxed errors
for exc in excinfo.value.exceptions:
assert exc.type == errtype
def test_multierror(arb_addr):
"""Verify we raise a ``trio.MultiError`` out of a nursery where
'''
Verify we raise a ``BaseExceptionGroup`` out of a nursery where
more than one actor errors.
"""
'''
async def main():
async with tractor.open_nursery(
arbiter_addr=arb_addr,
@ -91,10 +119,10 @@ def test_multierror(arb_addr):
print("Look Maa that first actor failed hard, hehh")
raise
# here we should get a `trio.MultiError` containing exceptions
# here we should get a ``BaseExceptionGroup`` containing exceptions
# from both subactors
with pytest.raises(trio.MultiError):
with pytest.raises(BaseExceptionGroup):
trio.run(main)
@ -103,7 +131,7 @@ def test_multierror(arb_addr):
'num_subactors', range(25, 26),
)
def test_multierror_fast_nursery(arb_addr, start_method, num_subactors, delay):
"""Verify we raise a ``trio.MultiError`` out of a nursery where
"""Verify we raise a ``BaseExceptionGroup`` out of a nursery where
more than one actor errors and also with a delay before failure
to test failure during an ongoing spawning.
"""
@ -119,10 +147,11 @@ def test_multierror_fast_nursery(arb_addr, start_method, num_subactors, delay):
delay=delay
)
with pytest.raises(trio.MultiError) as exc_info:
# with pytest.raises(trio.MultiError) as exc_info:
with pytest.raises(BaseExceptionGroup) as exc_info:
trio.run(main)
assert exc_info.type == tractor.MultiError
assert exc_info.type == ExceptionGroup
err = exc_info.value
exceptions = err.exceptions
@ -210,8 +239,8 @@ async def test_cancel_infinite_streamer(start_method):
[
# daemon actors sit idle while single task actors error out
(1, tractor.RemoteActorError, AssertionError, (assert_err, {}), None),
(2, tractor.MultiError, AssertionError, (assert_err, {}), None),
(3, tractor.MultiError, AssertionError, (assert_err, {}), None),
(2, BaseExceptionGroup, AssertionError, (assert_err, {}), None),
(3, BaseExceptionGroup, AssertionError, (assert_err, {}), None),
# 1 daemon actor errors out while single task actors sleep forever
(3, tractor.RemoteActorError, AssertionError, (sleep_forever, {}),
@ -222,7 +251,7 @@ async def test_cancel_infinite_streamer(start_method):
(do_nuthin, {}), (assert_err, {'delay': 1}, True)),
# daemon complete quickly delay while single task
# actors error after brief delay
(3, tractor.MultiError, AssertionError,
(3, BaseExceptionGroup, AssertionError,
(assert_err, {'delay': 1}), (do_nuthin, {}, False)),
],
ids=[
@ -289,7 +318,7 @@ async def test_some_cancels_all(num_actors_and_errs, start_method, loglevel):
# should error here with a ``RemoteActorError`` or ``MultiError``
except first_err as err:
if isinstance(err, tractor.MultiError):
if isinstance(err, BaseExceptionGroup):
assert len(err.exceptions) == num_actors
for exc in err.exceptions:
if isinstance(exc, tractor.RemoteActorError):
@ -332,10 +361,12 @@ async def spawn_and_error(breadth, depth) -> None:
@tractor_test
async def test_nested_multierrors(loglevel, start_method):
"""Test that failed actor sets are wrapped in `trio.MultiError`s.
This test goes only 2 nurseries deep but we should eventually have tests
'''
Test that failed actor sets are wrapped in `BaseExceptionGroup`s. This
test goes only 2 nurseries deep but we should eventually have tests
for arbitrary n-depth actor trees.
"""
'''
if start_method == 'trio':
depth = 3
subactor_breadth = 2
@ -359,12 +390,12 @@ async def test_nested_multierrors(loglevel, start_method):
breadth=subactor_breadth,
depth=depth,
)
except trio.MultiError as err:
except BaseExceptionGroup as err:
assert len(err.exceptions) == subactor_breadth
for subexc in err.exceptions:
# verify first level actor errors are wrapped as remote
if platform.system() == 'Windows':
if is_win():
# windows is often too slow and cancellation seems
# to happen before an actor is spawned
@ -377,10 +408,10 @@ async def test_nested_multierrors(loglevel, start_method):
assert subexc.type in (
tractor.RemoteActorError,
trio.Cancelled,
trio.MultiError
BaseExceptionGroup,
)
elif isinstance(subexc, trio.MultiError):
elif isinstance(subexc, BaseExceptionGroup):
for subsub in subexc.exceptions:
if subsub in (tractor.RemoteActorError,):
@ -388,7 +419,7 @@ async def test_nested_multierrors(loglevel, start_method):
assert type(subsub) in (
trio.Cancelled,
trio.MultiError,
BaseExceptionGroup,
)
else:
assert isinstance(subexc, tractor.RemoteActorError)
@ -397,15 +428,21 @@ async def test_nested_multierrors(loglevel, start_method):
# XXX not sure what's up with this..
# on windows sometimes spawning is just too slow and
# we get back the (sent) cancel signal instead
if platform.system() == 'Windows':
if is_win():
if isinstance(subexc, tractor.RemoteActorError):
assert subexc.type in (trio.MultiError, tractor.RemoteActorError)
assert subexc.type in (
BaseExceptionGroup,
tractor.RemoteActorError
)
else:
assert isinstance(subexc, trio.MultiError)
assert isinstance(subexc, BaseExceptionGroup)
else:
assert subexc.type is trio.MultiError
assert subexc.type is ExceptionGroup
else:
assert subexc.type in (tractor.RemoteActorError, trio.Cancelled)
assert subexc.type in (
tractor.RemoteActorError,
trio.Cancelled
)
@no_windows
@ -423,7 +460,7 @@ def test_cancel_via_SIGINT(
with trio.fail_after(2):
async with tractor.open_nursery() as tn:
await tn.start_actor('sucka')
if spawn_backend == 'mp':
if 'mp' in spawn_backend:
time.sleep(0.1)
os.kill(pid, signal.SIGINT)
await trio.sleep_forever()
@ -443,6 +480,9 @@ def test_cancel_via_SIGINT_other_task(
from a separate ``trio`` child task.
"""
pid = os.getpid()
timeout: float = 2
if is_win(): # smh
timeout += 1
async def spawn_and_sleep_forever(task_status=trio.TASK_STATUS_IGNORED):
async with tractor.open_nursery() as tn:
@ -456,10 +496,10 @@ def test_cancel_via_SIGINT_other_task(
async def main():
# should never timeout since SIGINT should cancel the current program
with trio.fail_after(2):
with trio.fail_after(timeout):
async with trio.open_nursery() as n:
await n.start(spawn_and_sleep_forever)
if spawn_backend == 'mp':
if 'mp' in spawn_backend:
time.sleep(0.1)
os.kill(pid, signal.SIGINT)
@ -523,7 +563,11 @@ def test_fast_graceful_cancel_when_spawn_task_in_soft_proc_wait_for_daemon(
cancellation, and it's faster, we might as well do it.
'''
kbi_delay = 0.2
kbi_delay = 0.5
timeout: float = 2.9
if is_win(): # smh
timeout += 1
async def main():
start = time.time()
@ -548,7 +592,7 @@ def test_fast_graceful_cancel_when_spawn_task_in_soft_proc_wait_for_daemon(
await p.run(do_nuthin)
finally:
duration = time.time() - start
if duration > 2.9:
if duration > timeout:
raise trio.TooSlowError(
'daemon cancel was slower then necessary..'
)

View File

@ -0,0 +1,173 @@
'''
Test a service style daemon that maintains a nursery for spawning
"remote async tasks" including both spawning other long living
sub-sub-actor daemons.
'''
from typing import Optional
import asyncio
from contextlib import asynccontextmanager as acm
import pytest
import trio
from trio_typing import TaskStatus
import tractor
from tractor import RemoteActorError
from async_generator import aclosing
async def aio_streamer(
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
) -> trio.abc.ReceiveChannel:
# required first msg to sync caller
to_trio.send_nowait(None)
from itertools import cycle
for i in cycle(range(10)):
to_trio.send_nowait(i)
await asyncio.sleep(0.01)
async def trio_streamer():
from itertools import cycle
for i in cycle(range(10)):
yield i
await trio.sleep(0.01)
async def trio_sleep_and_err(delay: float = 0.5):
await trio.sleep(delay)
# name error
doggy() # noqa
_cached_stream: Optional[
trio.abc.ReceiveChannel
] = None
@acm
async def wrapper_mngr(
):
from tractor.trionics import broadcast_receiver
global _cached_stream
in_aio = tractor.current_actor().is_infected_aio()
if in_aio:
if _cached_stream:
from_aio = _cached_stream
# if we already have a cached feed deliver a rx side clone
# to consumer
async with broadcast_receiver(from_aio, 6) as from_aio:
yield from_aio
return
else:
async with tractor.to_asyncio.open_channel_from(
aio_streamer,
) as (first, from_aio):
assert not first
# cache it so next task uses broadcast receiver
_cached_stream = from_aio
yield from_aio
else:
async with aclosing(trio_streamer()) as stream:
# cache it so next task uses broadcast receiver
_cached_stream = stream
yield stream
_nursery: trio.Nursery = None
@tractor.context
async def trio_main(
ctx: tractor.Context,
):
# sync
await ctx.started()
# stash a "service nursery" as "actor local" (aka a Python global)
global _nursery
n = _nursery
assert n
async def consume_stream():
async with wrapper_mngr() as stream:
async for msg in stream:
print(msg)
# run 2 tasks to ensure broadcaster chan use
n.start_soon(consume_stream)
n.start_soon(consume_stream)
n.start_soon(trio_sleep_and_err)
await trio.sleep_forever()
@tractor.context
async def open_actor_local_nursery(
ctx: tractor.Context,
):
global _nursery
async with trio.open_nursery() as n:
_nursery = n
await ctx.started()
await trio.sleep(10)
# await trio.sleep(1)
# XXX: this causes the hang since
# the caller does not unblock from its own
# ``trio.sleep_forever()``.
# TODO: we need to test a simple ctx task starting remote tasks
# that error and then blocking on a ``Nursery.start()`` which
# never yields back.. aka a scenario where the
# ``tractor.context`` task IS NOT in the service n's cancel
# scope.
n.cancel_scope.cancel()
@pytest.mark.parametrize(
'asyncio_mode',
[True, False],
ids='asyncio_mode={}'.format,
)
def test_actor_managed_trio_nursery_task_error_cancels_aio(
asyncio_mode: bool,
arb_addr
):
'''
Verify that a ``trio`` nursery created and managed in a child actor
correctly relays errors to the parent actor when one of its spawned
tasks errors even when running in infected asyncio mode and using
broadcast receivers for multi-task-per-actor subscription.
'''
async def main():
# cancel the nursery shortly after boot
async with tractor.open_nursery() as n:
p = await n.start_actor(
'nursery_mngr',
infect_asyncio=asyncio_mode,
enable_modules=[__name__],
)
async with (
p.open_context(open_actor_local_nursery) as (ctx, first),
p.open_context(trio_main) as (ctx, first),
):
await trio.sleep_forever()
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
# verify boxed error
err = excinfo.value
assert isinstance(err.type(), NameError)

View File

@ -1,5 +1,6 @@
import itertools
import pytest
import trio
import tractor
from tractor import open_actor_cluster
@ -11,26 +12,72 @@ from conftest import tractor_test
MESSAGE = 'tractoring at full speed'
def test_empty_mngrs_input_raises() -> None:
async def main():
with trio.fail_after(1):
async with (
open_actor_cluster(
modules=[__name__],
# NOTE: ensure we can passthrough runtime opts
loglevel='info',
# debug_mode=True,
) as portals,
gather_contexts(
# NOTE: it's the use of inline-generator syntax
# here that causes the empty input.
mngrs=(
p.open_context(worker) for p in portals.values()
),
),
):
assert 0
with pytest.raises(ValueError):
trio.run(main)
@tractor.context
async def worker(ctx: tractor.Context) -> None:
async def worker(
ctx: tractor.Context,
) -> None:
await ctx.started()
async with ctx.open_stream(backpressure=True) as stream:
async with ctx.open_stream(
backpressure=True,
) as stream:
# TODO: this with the below assert causes a hang bug?
# with trio.move_on_after(1):
async for msg in stream:
# do something with msg
print(msg)
assert msg == MESSAGE
# TODO: does this ever cause a hang
# assert 0
@tractor_test
async def test_streaming_to_actor_cluster() -> None:
async with (
open_actor_cluster(modules=[__name__]) as portals,
gather_contexts(
mngrs=[p.open_context(worker) for p in portals.values()],
) as contexts,
gather_contexts(
mngrs=[ctx[0].open_stream() for ctx in contexts],
) as streams,
):
with trio.move_on_after(1):
for stream in itertools.cycle(streams):

View File

@ -5,6 +5,7 @@ Verify the we raise errors when streams are opened prior to sync-opening
a ``tractor.Context`` beforehand.
'''
from contextlib import asynccontextmanager as acm
from itertools import count
import platform
from typing import Optional
@ -264,6 +265,7 @@ async def test_callee_closes_ctx_after_stream_open():
enable_modules=[__name__],
)
with trio.fail_after(2):
async with portal.open_context(
close_ctx_immediately,
@ -296,6 +298,7 @@ async def test_callee_closes_ctx_after_stream_open():
# of a stream to the context (at least until a time of
# if/when we decide that's a good idea?)
try:
with trio.fail_after(0.5):
async with ctx.open_stream() as stream:
pass
except trio.ClosedResourceError:
@ -466,8 +469,11 @@ async def cancel_self(
try:
async with ctx.open_stream():
pass
except ContextCancelled:
except tractor.ContextCancelled:
# suppress for now so we can do checkpoint tests below
pass
else:
raise RuntimeError("Context didn't cancel itself?!")
# check a real ``trio.Cancelled`` is raised on a checkpoint
try:
@ -507,6 +513,9 @@ async def test_callee_cancels_before_started():
except tractor.ContextCancelled as ce:
ce.type == trio.Cancelled
# the traceback should be informative
assert 'cancelled itself' in ce.msgdata['tb_str']
# teardown the actor
await portal.cancel_actor()
@ -562,7 +571,7 @@ def test_one_end_stream_not_opened(overrun_by):
'''
overrunner, buf_size_increase, entrypoint = overrun_by
from tractor._actor import Actor
from tractor._runtime import Actor
buf_size = buf_size_increase + Actor.msg_buffer_size
async def main():
@ -601,33 +610,19 @@ def test_one_end_stream_not_opened(overrun_by):
# 2 overrun cases and the no overrun case (which pushes right up to
# the msg limit)
if overrunner == 'caller':
if overrunner == 'caller' or 'cance' in overrunner:
with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main)
assert excinfo.value.type == StreamOverrun
elif 'cancel' in overrunner:
with pytest.raises(trio.MultiError) as excinfo:
trio.run(main)
multierr = excinfo.value
for exc in multierr.exceptions:
etype = type(exc)
if etype == tractor.RemoteActorError:
assert exc.type == StreamOverrun
else:
assert etype == tractor.ContextCancelled
elif overrunner == 'callee':
with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main)
# TODO: embedded remote errors so that we can verify the source
# error?
# the callee delivers an error which is an overrun wrapped
# in a remote actor error.
# error? the callee delivers an error which is an overrun
# wrapped in a remote actor error.
assert excinfo.value.type == tractor.RemoteActorError
else:
@ -712,3 +707,92 @@ def test_stream_backpressure():
await portal.cancel_actor()
trio.run(main)
@tractor.context
async def sleep_forever(
ctx: tractor.Context,
) -> None:
await ctx.started()
async with ctx.open_stream():
await trio.sleep_forever()
@acm
async def attach_to_sleep_forever():
'''
Cancel a context **before** any underlying error is raised in order
to trigger a local reception of a ``ContextCancelled`` which **should not**
be re-raised in the local surrounding ``Context`` *iff* the cancel was
requested by **this** side of the context.
'''
async with tractor.wait_for_actor('sleeper') as p2:
async with (
p2.open_context(sleep_forever) as (peer_ctx, first),
peer_ctx.open_stream(),
):
try:
yield
finally:
# XXX: previously this would trigger local
# ``ContextCancelled`` to be received and raised in the
# local context overriding any local error due to
# logic inside ``_invoke()`` which checked for
# an error set on ``Context._error`` and raised it
# under a cancellation scenario.
# The problem is you can have a remote cancellation
# that is part of a local error and we shouldn't raise
# ``ContextCancelled`` **iff** we weren't the side of
# the context to initiate it, i.e.
# ``Context._cancel_called`` should **NOT** have been
# set. The special logic to handle this case is now
# inside ``Context._may_raise_from_remote_msg()`` XD
await peer_ctx.cancel()
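# (editor's sketch, hypothetical pseudo-logic) per the comment above, the
# expected rule inside the runtime's remote-msg handling is roughly:
#
#   if isinstance(remote_err, ContextCancelled) and ctx._cancel_called:
#       pass  # we requested the cancel -> don't mask the "real" local error
#   else:
#       raise remote_err
#
# i.e. ``ContextCancelled`` should only be (re)raised locally when the
# *other* side initiated the cancellation.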
@tractor.context
async def error_before_started(
ctx: tractor.Context,
) -> None:
'''
This simulates exactly an original bug discovered in:
https://github.com/pikers/piker/issues/244
'''
async with attach_to_sleep_forever():
# send an unserializable type which should raise a type error
# here and **NOT BE SWALLOWED** by the surrounding acm!!?!
await ctx.started(object())
def test_do_not_swallow_error_before_started_by_remote_contextcancelled():
'''
Verify that an error raised in a remote context which itself opens another
remote context, which it cancels, does not override the original error that
caused the cancellation of the secondary context.
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'errorer',
enable_modules=[__name__],
)
await n.start_actor(
'sleeper',
enable_modules=[__name__],
)
async with (
portal.open_context(
error_before_started
) as (ctx, sent),
):
await trio.sleep_forever()
with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main)
assert excinfo.value.type == TypeError

File diff suppressed because it is too large

View File

@ -116,11 +116,26 @@ async def stream_from(portal):
print(value)
async def unpack_reg(actor_or_portal):
'''
Get and unpack a "registry" RPC request from the "arbiter" registry
system.
'''
if getattr(actor_or_portal, 'get_registry', None):
msg = await actor_or_portal.get_registry()
else:
msg = await actor_or_portal.run_from_ns('self', 'get_registry')
return {tuple(key.split('.')): val for key, val in msg.items()}
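# (editor's note, hypothetical data) the remap above turns 'name.uuid'
# string keys into tuple keys, e.g. roughly:
#
#   {'arbiter.3fb0984c': ('127.0.0.1', 1616)}
#   -> {('arbiter', '3fb0984c'): ('127.0.0.1', 1616)}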
async def spawn_and_check_registry(
arb_addr: tuple,
use_signal: bool,
remote_arbiter: bool = False,
with_streaming: bool = False,
) -> None:
async with tractor.open_root_actor(
@ -134,13 +149,11 @@ async def spawn_and_check_registry(
assert not actor.is_arbiter
if actor.is_arbiter:
async def get_reg():
return await actor.get_registry()
extra = 1 # arbiter is local root actor
get_reg = partial(unpack_reg, actor)
else:
get_reg = partial(portal.run_from_ns, 'self', 'get_registry')
get_reg = partial(unpack_reg, portal)
extra = 2 # local root actor + remote arbiter
# ensure current actor is registered
@ -266,7 +279,7 @@ async def close_chans_before_nursery(
):
async with tractor.get_arbiter(*arb_addr) as aportal:
try:
get_reg = partial(aportal.run_from_ns, 'self', 'get_registry')
get_reg = partial(unpack_reg, aportal)
async with tractor.open_nursery() as tn:
portal1 = await tn.start_actor(

View File

@ -1,6 +1,7 @@
"""
'''
Let's make sure them docs work yah?
"""
'''
from contextlib import contextmanager
import itertools
import os
@ -11,17 +12,17 @@ import shutil
import pytest
from conftest import repodir
def examples_dir():
"""Return the abspath to the examples directory.
"""
return os.path.join(repodir(), 'examples')
from conftest import (
examples_dir,
)
@pytest.fixture
def run_example_in_subproc(loglevel, testdir, arb_addr):
def run_example_in_subproc(
loglevel: str,
testdir,
arb_addr: tuple[str, int],
):
@contextmanager
def run(script_code):
@ -31,8 +32,8 @@ def run_example_in_subproc(loglevel, testdir, arb_addr):
# on windows we need to create a special __main__.py which will
# be executed with ``python -m <modulename>`` on windows..
shutil.copyfile(
os.path.join(examples_dir(), '__main__.py'),
os.path.join(str(testdir), '__main__.py')
examples_dir() / '__main__.py',
str(testdir / '__main__.py'),
)
# drop the ``if __name__ == '__main__'`` guard onwards from
@ -80,11 +81,15 @@ def run_example_in_subproc(loglevel, testdir, arb_addr):
'example_script',
# walk yields: (dirpath, dirnames, filenames)
[(p[0], f) for p in os.walk(examples_dir()) for f in p[2]
[
(p[0], f) for p in os.walk(examples_dir()) for f in p[2]
if '__' not in f
and f[0] != '_'
and 'debugging' not in p[0]],
and 'debugging' not in p[0]
and 'integration' not in p[0]
and 'advanced_faults' not in p[0]
],
ids=lambda t: t[1],
)
@ -112,9 +117,19 @@ def test_example(run_example_in_subproc, example_script):
# print(f'STDOUT: {out}')
# if we get some gnarly output let's aggregate and raise
if err:
errmsg = err.decode()
errlines = errmsg.splitlines()
if err and 'Error' in errlines[-1]:
last_error = errlines[-1]
if (
'Error' in last_error
# XXX: currently we print this to console, but maybe
# shouldn't eventually once we figure out what's
# a better way to be explicit about aio side
# cancels?
and 'asyncio.exceptions.CancelledError' not in last_error
):
raise Exception(errmsg)
assert proc.returncode == 0

View File

@ -0,0 +1,564 @@
'''
The hipster way to force SC onto the stdlib's "async": 'infection mode'.
'''
from typing import Optional, Iterable, Union
import asyncio
import builtins
import itertools
import importlib
from exceptiongroup import BaseExceptionGroup
import pytest
import trio
import tractor
from tractor import (
to_asyncio,
RemoteActorError,
)
from tractor.trionics import BroadcastReceiver
async def sleep_and_err(
sleep_for: float = 0.1,
# just signature placeholders for compat with
# ``to_asyncio.open_channel_from()``
to_trio: Optional[trio.MemorySendChannel] = None,
from_trio: Optional[asyncio.Queue] = None,
):
if to_trio:
to_trio.send_nowait('start')
await asyncio.sleep(sleep_for)
assert 0
async def sleep_forever():
await asyncio.sleep(float('inf'))
async def trio_cancels_single_aio_task():
# spawn an ``asyncio`` task to run a func and return result
with trio.move_on_after(.2):
await tractor.to_asyncio.run_task(sleep_forever)
def test_trio_cancels_aio_on_actor_side(arb_addr):
'''
Spawn an infected actor that is cancelled by the ``trio`` side
task using std cancel scope apis.
'''
async def main():
async with tractor.open_nursery(
arbiter_addr=arb_addr
) as n:
await n.run_in_actor(
trio_cancels_single_aio_task,
infect_asyncio=True,
)
trio.run(main)
async def asyncio_actor(
target: str,
expect_err: Optional[Exception] = None
) -> None:
assert tractor.current_actor().is_infected_aio()
target = globals()[target]
if '.' in expect_err:
modpath, _, name = expect_err.rpartition('.')
mod = importlib.import_module(modpath)
error_type = getattr(mod, name)
else: # toplevel builtin error type
error_type = builtins.__dict__.get(expect_err)
try:
# spawn an ``asyncio`` task to run a func and return result
await tractor.to_asyncio.run_task(target)
except BaseException as err:
if expect_err:
assert isinstance(err, error_type)
raise
def test_aio_simple_error(arb_addr):
'''
Verify a simple remote asyncio error propagates back through trio
to the parent actor.
'''
async def main():
async with tractor.open_nursery(
arbiter_addr=arb_addr
) as n:
await n.run_in_actor(
asyncio_actor,
target='sleep_and_err',
expect_err='AssertionError',
infect_asyncio=True,
)
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
err = excinfo.value
assert isinstance(err, RemoteActorError)
assert err.type == AssertionError
def test_tractor_cancels_aio(arb_addr):
'''
Verify we can cancel a spawned asyncio task gracefully.
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
asyncio_actor,
target='sleep_forever',
expect_err='trio.Cancelled',
infect_asyncio=True,
)
# cancel the entire remote runtime
await portal.cancel_actor()
trio.run(main)
def test_trio_cancels_aio(arb_addr):
'''
Much like the above test with ``tractor.Portal.cancel_actor()``
except we just use a standard ``trio`` cancellation api.
'''
async def main():
with trio.move_on_after(1):
# cancel the nursery shortly after boot
async with tractor.open_nursery() as n:
await n.run_in_actor(
asyncio_actor,
target='sleep_forever',
expect_err='trio.Cancelled',
infect_asyncio=True,
)
trio.run(main)
@tractor.context
async def trio_ctx(
ctx: tractor.Context,
):
await ctx.started('start')
# this will block until the ``asyncio`` task sends a "first"
# message.
with trio.fail_after(2):
async with (
trio.open_nursery() as n,
tractor.to_asyncio.open_channel_from(
sleep_and_err,
) as (first, chan),
):
assert first == 'start'
# spawn another asyncio task for the heck of it.
n.start_soon(
tractor.to_asyncio.run_task,
sleep_forever,
)
await trio.sleep_forever()
@pytest.mark.parametrize(
'parent_cancels', [False, True],
ids='parent_actor_cancels_child={}'.format
)
def test_context_spawns_aio_task_that_errors(
arb_addr,
parent_cancels: bool,
):
'''
Verify that spawning a task via an intertask channel ctx mngr that
errors correctly propagates the error back from the `asyncio`-side
task.
'''
async def main():
with trio.fail_after(2):
async with tractor.open_nursery() as n:
p = await n.start_actor(
'aio_daemon',
enable_modules=[__name__],
infect_asyncio=True,
# debug_mode=True,
loglevel='cancel',
)
async with p.open_context(
trio_ctx,
) as (ctx, first):
assert first == 'start'
if parent_cancels:
await p.cancel_actor()
await trio.sleep_forever()
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
err = excinfo.value
assert isinstance(err, RemoteActorError)
if parent_cancels:
assert err.type == trio.Cancelled
else:
assert err.type == AssertionError
async def aio_cancel():
'''
Cancel urself boi.
'''
await asyncio.sleep(0.5)
task = asyncio.current_task()
# cancel and enter sleep
task.cancel()
await sleep_forever()
def test_aio_cancelled_from_aio_causes_trio_cancelled(arb_addr):
async def main():
async with tractor.open_nursery() as n:
await n.run_in_actor(
asyncio_actor,
target='aio_cancel',
expect_err='tractor.to_asyncio.AsyncioCancelled',
infect_asyncio=True,
)
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
# ensure boxed error is correct
assert excinfo.value.type == to_asyncio.AsyncioCancelled
# TODO: verify open_channel_from will fail on this..
async def no_to_trio_in_args():
pass
async def push_from_aio_task(
sequence: Iterable,
to_trio: trio.abc.SendChannel,
expect_cancel: bool,
fail_early: bool,
) -> None:
try:
# sync caller ctx manager
to_trio.send_nowait(True)
for i in sequence:
print(f'asyncio sending {i}')
to_trio.send_nowait(i)
await asyncio.sleep(0.001)
if i == 50 and fail_early:
raise Exception
print('asyncio streamer complete!')
except asyncio.CancelledError:
if not expect_cancel:
pytest.fail("aio task was cancelled unexpectedly")
raise
else:
if expect_cancel:
pytest.fail("aio task wasn't cancelled as expected!?")
async def stream_from_aio(
exit_early: bool = False,
raise_err: bool = False,
aio_raise_err: bool = False,
fan_out: bool = False,
) -> None:
seq = range(100)
expect = list(seq)
try:
pulled = []
async with to_asyncio.open_channel_from(
push_from_aio_task,
sequence=seq,
expect_cancel=raise_err or exit_early,
fail_early=aio_raise_err,
) as (first, chan):
assert first is True
async def consume(
chan: Union[
to_asyncio.LinkedTaskChannel,
BroadcastReceiver,
],
):
async for value in chan:
print(f'trio received {value}')
pulled.append(value)
if value == 50:
if raise_err:
raise Exception
elif exit_early:
break
if fan_out:
# start second task that gets the same stream value set.
async with (
# NOTE: this has to come first to avoid
# the channel being closed before the nursery
# tasks are joined..
chan.subscribe() as br,
trio.open_nursery() as n,
):
n.start_soon(consume, br)
await consume(chan)
else:
await consume(chan)
finally:
if (
not raise_err and
not exit_early and
not aio_raise_err
):
if fan_out:
# we get double the pulled values in the
# ``.subscribe()`` fan out case.
doubled = list(itertools.chain(*zip(expect, expect)))
expect = doubled[:len(pulled)]
assert list(sorted(pulled)) == expect
else:
assert pulled == expect
else:
assert not fan_out
assert pulled == expect[:51]
print('trio guest mode task completed!')
@pytest.mark.parametrize(
'fan_out', [False, True],
ids='fan_out_w_chan_subscribe={}'.format
)
def test_basic_interloop_channel_stream(arb_addr, fan_out):
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
infect_asyncio=True,
fan_out=fan_out,
)
await portal.result()
trio.run(main)
# TODO: parametrize the above test and avoid the duplication here?
def test_trio_error_cancels_intertask_chan(arb_addr):
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
raise_err=True,
infect_asyncio=True,
)
# should trigger remote actor error
await portal.result()
with pytest.raises(BaseExceptionGroup) as excinfo:
trio.run(main)
# ensure boxed errors
for exc in excinfo.value.exceptions:
assert exc.type == Exception
def test_trio_closes_early_and_channel_exits(arb_addr):
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
exit_early=True,
infect_asyncio=True,
)
# should trigger remote actor error
await portal.result()
# should be a quiet exit on a simple channel exit
trio.run(main)
def test_aio_errors_and_channel_propagates_and_closes(arb_addr):
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
aio_raise_err=True,
infect_asyncio=True,
)
# should trigger remote actor error
await portal.result()
with pytest.raises(BaseExceptionGroup) as excinfo:
trio.run(main)
# ensure boxed errors
for exc in excinfo.value.exceptions:
assert exc.type == Exception
@tractor.context
async def trio_to_aio_echo_server(
ctx: tractor.Context,
):
async def aio_echo_server(
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
) -> None:
to_trio.send_nowait('start')
while True:
msg = await from_trio.get()
# echo the msg back
to_trio.send_nowait(msg)
# if we get the terminate sentinel
# break the echo loop
if msg is None:
print('breaking aio echo loop')
break
print('exiting asyncio task')
async with to_asyncio.open_channel_from(
aio_echo_server,
) as (first, chan):
assert first == 'start'
await ctx.started(first)
async with ctx.open_stream() as stream:
async for msg in stream:
print(f'asyncio echoing {msg}')
await chan.send(msg)
out = await chan.receive()
# echo back to parent actor-task
await stream.send(out)
if out is None:
try:
out = await chan.receive()
except trio.EndOfChannel:
break
else:
raise RuntimeError('aio channel never stopped?')
@pytest.mark.parametrize(
'raise_error_mid_stream',
[False, Exception, KeyboardInterrupt],
ids='raise_error={}'.format,
)
def test_echoserver_detailed_mechanics(
arb_addr,
raise_error_mid_stream,
):
async def main():
async with tractor.open_nursery() as n:
p = await n.start_actor(
'aio_server',
enable_modules=[__name__],
infect_asyncio=True,
)
async with p.open_context(
trio_to_aio_echo_server,
) as (ctx, first):
assert first == 'start'
async with ctx.open_stream() as stream:
for i in range(100):
await stream.send(i)
out = await stream.receive()
assert i == out
if raise_error_mid_stream and i == 50:
raise raise_error_mid_stream
# send terminate msg
await stream.send(None)
out = await stream.receive()
assert out is None
if out is None:
# ensure the stream is stopped
# with trio.fail_after(0.1):
try:
await stream.receive()
except trio.EndOfChannel:
pass
else:
pytest.fail(
"stream wasn't stopped after sentinel?!")
# TODO: the case where this blocks and
# is cancelled by kbi or out of task cancellation
await p.cancel_actor()
if raise_error_mid_stream:
with pytest.raises(raise_error_mid_stream):
trio.run(main)
else:
trio.run(main)

View File

@ -7,9 +7,10 @@ import platform
import trio
import tractor
from tractor.testing import tractor_test
import pytest
from conftest import tractor_test
def test_must_define_ctx():
@ -79,6 +80,7 @@ async def stream_from_single_subactor(
seq = range(10)
with trio.fail_after(5):
async with portal.open_stream_from(
stream_func,
sequence=list(seq), # has to be msgpack serializable
@ -100,12 +102,14 @@ async def stream_from_single_subactor(
await trio.sleep(0.3)
# ensure EOC signalled-state translates
# XXX: not really sure this is correct,
# shouldn't it be a `ClosedResourceError`?
try:
await stream.__anext__()
except StopAsyncIteration:
# stop all spawned subactors
await portal.cancel_actor()
# await nursery.cancel()
@pytest.mark.parametrize(
@ -247,7 +251,7 @@ def test_a_quadruple_example(time_quad_ex, ci_env, spawn_backend):
results, diff = time_quad_ex
assert results
this_fast = 6 if platform.system() in ('Windows', 'Darwin') else 2.5
this_fast = 6 if platform.system() in ('Windows', 'Darwin') else 3
assert diff < this_fast

View File

@ -11,7 +11,7 @@ from conftest import tractor_test
@pytest.mark.trio
async def test_no_arbitter():
async def test_no_runtime():
"""An arbitter must be established before any nurseries
can be created.
@ -19,17 +19,10 @@ async def test_no_arbitter():
some point?)
"""
with pytest.raises(RuntimeError) :
with tractor.open_nursery():
async with tractor.find_actor('doggy'):
pass
def test_no_main():
"""An async function **must** be passed to ``tractor.run()``.
"""
with pytest.raises(TypeError):
tractor.run(None)
@tractor_test
async def test_self_is_registered(arb_addr):
"Verify waiting on the arbiter to register itself using the standard api."

View File

@ -4,20 +4,22 @@ from itertools import cycle
import pytest
import trio
import tractor
from tractor.testing import tractor_test
from tractor.experimental import msgpub
from conftest import tractor_test
def test_type_checks():
with pytest.raises(TypeError) as err:
@tractor.msg.pub
@msgpub
async def no_get_topics(yo):
yield
assert "must define a `get_topics`" in str(err.value)
with pytest.raises(TypeError) as err:
@tractor.msg.pub
@msgpub
def not_async_gen(yo):
pass
@ -32,7 +34,7 @@ def is_even(i):
_get_topics = None
@tractor.msg.pub
@msgpub
async def pubber(get_topics, seed=10):
# ensure topic subscriptions are as expected
@ -103,7 +105,7 @@ async def subs(
await stream.aclose()
@tractor.msg.pub(tasks=['one', 'two'])
@msgpub(tasks=['one', 'two'])
async def multilock_pubber(get_topics):
yield {'doggy': 10}

View File

@ -0,0 +1,182 @@
'''
Async context manager cache api testing: ``trionics.maybe_open_context():``
'''
from contextlib import asynccontextmanager as acm
import platform
from typing import Awaitable
import pytest
import trio
import tractor
_resource: int = 0
@acm
async def maybe_increment_counter(task_name: str):
global _resource
_resource += 1
await trio.lowlevel.checkpoint()
yield _resource
await trio.lowlevel.checkpoint()
_resource -= 1
@pytest.mark.parametrize(
'key_on',
['key_value', 'kwargs'],
ids="key_on={}".format,
)
def test_resource_only_entered_once(key_on):
global _resource
_resource = 0
kwargs = {}
key = None
if key_on == 'key_value':
key = 'some_common_key'
async def main():
cache_active: bool = False
async def enter_cached_mngr(name: str):
nonlocal cache_active
if key_on == 'kwargs':
# make a common kwargs input to key on it
kwargs = {'task_name': 'same_task_name'}
assert key is None
else:
# different task names per task will be used
kwargs = {'task_name': name}
async with tractor.trionics.maybe_open_context(
maybe_increment_counter,
kwargs=kwargs,
key=key,
) as (cache_hit, resource):
if cache_hit:
try:
cache_active = True
assert resource == 1
await trio.sleep_forever()
finally:
cache_active = False
else:
assert resource == 1
await trio.sleep_forever()
with trio.move_on_after(0.5):
async with (
tractor.open_root_actor(),
trio.open_nursery() as n,
):
for i in range(10):
n.start_soon(enter_cached_mngr, f'task_{i}')
await trio.sleep(0.001)
trio.run(main)
@tractor.context
async def streamer(
ctx: tractor.Context,
seq: list[int] = list(range(1000)),
) -> None:
await ctx.started()
async with ctx.open_stream() as stream:
for val in seq:
await stream.send(val)
await trio.sleep(0.001)
print('producer finished')
@acm
async def open_stream() -> Awaitable[tractor.MsgStream]:
async with tractor.open_nursery() as tn:
portal = await tn.start_actor('streamer', enable_modules=[__name__])
async with (
portal.open_context(streamer) as (ctx, first),
ctx.open_stream() as stream,
):
yield stream
await portal.cancel_actor()
print('CANCELLED STREAMER')
@acm
async def maybe_open_stream(taskname: str):
async with tractor.trionics.maybe_open_context(
# NOTE: all secondary tasks should cache hit on the same key
acm_func=open_stream,
) as (cache_hit, stream):
if cache_hit:
print(f'{taskname} loaded from cache')
# add a new broadcast subscription for the quote stream
# if this feed is already allocated by the first
# task that entered
async with stream.subscribe() as bstream:
yield bstream
else:
# yield the actual stream
yield stream
def test_open_local_sub_to_stream():
'''
Verify a single inter-actor stream can be fanned out and shared to
N local tasks using ``trionics.maybe_open_context():``.
'''
timeout = 3 if platform.system() != "Windows" else 10
async def main():
full = list(range(1000))
async def get_sub_and_pull(taskname: str):
async with (
maybe_open_stream(taskname) as stream,
):
if '0' in taskname:
assert isinstance(stream, tractor.MsgStream)
else:
assert isinstance(
stream,
tractor.trionics.BroadcastReceiver
)
first = await stream.receive()
print(f'{taskname} started with value {first}')
seq = []
async for msg in stream:
seq.append(msg)
assert set(seq).issubset(set(full))
print(f'{taskname} finished')
with trio.fail_after(timeout):
# TODO: turns out this isn't multi-task entrant XD
# We probably need an idempotent entry semantic?
async with tractor.open_root_actor():
async with (
trio.open_nursery() as nurse,
):
for i in range(10):
nurse.start_soon(get_sub_and_pull, f'task_{i}')
await trio.sleep(0.001)
print('all consumer tasks finished')
trio.run(main)

View File

@ -0,0 +1,73 @@
"""
Verifying internal runtime state and undocumented extras.
"""
import os
import pytest
import trio
import tractor
from conftest import tractor_test
_file_path: str = ''
def unlink_file():
print('Removing tmp file!')
os.remove(_file_path)
async def crash_and_clean_tmpdir(
tmp_file_path: str,
error: bool = True,
):
global _file_path
_file_path = tmp_file_path
actor = tractor.current_actor()
actor.lifetime_stack.callback(unlink_file)
assert os.path.isfile(tmp_file_path)
await trio.sleep(0.1)
if error:
assert 0
else:
actor.cancel_soon()
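# (editor's note, assumption about internals) ``Actor.lifetime_stack`` is
# presumably a ``contextlib.ExitStack`` unwound by the runtime at actor
# teardown, so the ``unlink_file`` callback registered above should run
# whether the child errors or exits via ``cancel_soon()``.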
@pytest.mark.parametrize(
'error_in_child',
[True, False],
)
@tractor_test
async def test_lifetime_stack_wipes_tmpfile(
tmp_path,
error_in_child: bool,
):
child_tmp_file = tmp_path / "child.txt"
child_tmp_file.touch()
assert child_tmp_file.exists()
path = str(child_tmp_file)
try:
with trio.move_on_after(0.5):
async with tractor.open_nursery() as n:
await ( # inlined portal
await n.run_in_actor(
crash_and_clean_tmpdir,
tmp_file_path=path,
error=error_in_child,
)
).result()
except (
tractor.RemoteActorError,
tractor.BaseExceptionGroup,
):
pass
# tmp file should have been wiped by
# teardown stack.
assert not child_tmp_file.exists()

View File

@ -1,7 +1,8 @@
"""
Spawning basics
"""
from typing import Dict, Tuple, Optional
from typing import Optional
import pytest
import trio
@ -14,8 +15,8 @@ data_to_pass_down = {'doggy': 10, 'kitty': 4}
async def spawn(
is_arbiter: bool,
data: Dict,
arb_addr: Tuple[str, int],
data: dict,
arb_addr: tuple[str, int],
):
namespaces = [__name__]
@ -141,7 +142,7 @@ def test_loglevel_propagated_to_subactor(
capfd,
arb_addr,
):
if start_method == 'forkserver':
if start_method == 'mp_forkserver':
pytest.skip(
"a bug with `capfd` seems to make forkserver capture not work?")
@ -150,13 +151,13 @@ def test_loglevel_propagated_to_subactor(
async def main():
async with tractor.open_nursery(
name='arbiter',
loglevel=level,
start_method=start_method,
arbiter_addr=arb_addr,
) as tn:
await tn.run_in_actor(
check_loglevel,
loglevel=level,
level=level,
)

View File

@ -6,13 +6,16 @@ from contextlib import asynccontextmanager
from functools import partial
from itertools import cycle
import time
from typing import Optional, List, Tuple
from typing import Optional
import pytest
import trio
from trio.lowlevel import current_task
import tractor
from tractor.trionics import broadcast_receiver, Lagged
from tractor.trionics import (
broadcast_receiver,
Lagged,
)
@tractor.context
@ -37,7 +40,7 @@ async def echo_sequences(
async def ensure_sequence(
stream: tractor.ReceiveMsgStream,
stream: tractor.MsgStream,
sequence: list,
delay: Optional[float] = None,
@ -62,8 +65,8 @@ async def ensure_sequence(
@asynccontextmanager
async def open_sequence_streamer(
sequence: List[int],
arb_addr: Tuple[str, int],
sequence: list[int],
arb_addr: tuple[str, int],
start_method: str,
) -> tractor.MsgStream:
@ -211,7 +214,8 @@ def test_faster_task_to_recv_is_cancelled_by_slower(
arb_addr,
start_method,
):
'''Ensure that if a faster task consuming from a stream is cancelled
'''
Ensure that if a faster task consuming from a stream is cancelled
the slower task can continue to receive all expected values.
'''
@ -460,3 +464,51 @@ def test_first_recver_is_cancelled():
assert value == 1
trio.run(main)
def test_no_raise_on_lag():
'''
Run a simple 2-task broadcast where one task is slow but configured
so that it does not raise `Lagged` on overruns using
`raise_on_lag=False` and verify that the task does not raise.
'''
size = 100
tx, rx = trio.open_memory_channel(size)
brx = broadcast_receiver(rx, size)
async def slow():
async with brx.subscribe(
raise_on_lag=False,
) as br:
async for msg in br:
print(f'slow task got: {msg}')
await trio.sleep(0.1)
async def fast():
async with brx.subscribe() as br:
async for msg in br:
print(f'fast task got: {msg}')
async def main():
async with (
tractor.open_root_actor(
# NOTE: so we see the warning msg emitted by the bcaster
# internals when the no raise flag is set.
loglevel='warning',
),
trio.open_nursery() as n,
):
n.start_soon(slow)
n.start_soon(fast)
for i in range(1000):
await tx.send(i)
# simulate user nailing ctl-c after realizing
# there's a lag in the slow task.
await trio.sleep(1)
raise KeyboardInterrupt
with pytest.raises(KeyboardInterrupt):
trio.run(main)

View File

@ -0,0 +1,82 @@
'''
Reminders for oddities in `trio` that we need to stay aware of and/or
want to see changed.
'''
import pytest
import trio
from trio_typing import TaskStatus
@pytest.mark.parametrize(
'use_start_soon', [
pytest.param(
True,
marks=pytest.mark.xfail(reason="see python-trio/trio#2258")
),
False,
]
)
def test_stashed_child_nursery(use_start_soon):
_child_nursery = None
async def waits_on_signal(
ev: trio.Event,
task_status: TaskStatus[trio.Nursery] = trio.TASK_STATUS_IGNORED,
):
'''
Do some stuff, then signal other tasks, then yield back to "starter".
'''
await ev.wait()
task_status.started()
async def mk_child_nursery(
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
):
'''
Allocate a child sub-nursery and stash it as a global.
'''
nonlocal _child_nursery
async with trio.open_nursery() as cn:
_child_nursery = cn
task_status.started(cn)
# block until cancelled by parent.
await trio.sleep_forever()
async def sleep_and_err(
ev: trio.Event,
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
):
await trio.sleep(0.5)
doggy() # noqa
ev.set()
task_status.started()
async def main():
async with (
trio.open_nursery() as pn,
):
cn = await pn.start(mk_child_nursery)
assert cn
ev = trio.Event()
if use_start_soon:
# this causes inf hang
cn.start_soon(sleep_and_err, ev)
else:
# this does not.
await cn.start(sleep_and_err, ev)
with trio.fail_after(1):
await cn.start(waits_on_signal, ev)
with pytest.raises(NameError):
trio.run(main)

View File

@ -1,7 +0,0 @@
[tool.towncrier]
package = "tractor"
filename = "NEWS.rst"
directory = "newsfragments/"
title_format = "tractor {version} ({project_date})"
version = "0.1.0a2"
#template = "changelog/_template.rst"

View File

@ -1,40 +1,74 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
tractor: An actor model micro-framework built on
``trio`` and ``multiprocessing``.
tractor: structured concurrent "actors".
"""
from trio import MultiError
from exceptiongroup import BaseExceptionGroup
from ._clustering import open_actor_cluster
from ._ipc import Channel
from ._streaming import (
Context,
ReceiveMsgStream,
MsgStream,
stream,
context,
)
from ._discovery import get_arbiter, find_actor, wait_for_actor
from ._discovery import (
get_arbiter,
find_actor,
wait_for_actor,
query_actor,
)
from ._supervise import open_nursery
from ._state import current_actor, is_root_process
from ._state import (
current_actor,
is_root_process,
)
from ._exceptions import (
RemoteActorError,
ModuleNotExposed,
ContextCancelled,
)
from ._debug import breakpoint, post_mortem
from ._debug import (
breakpoint,
post_mortem,
)
from . import msg
from ._root import run, run_daemon, open_root_actor
from ._root import (
run_daemon,
open_root_actor,
)
from ._portal import Portal
from ._runtime import Actor
__all__ = [
'Actor',
'Channel',
'Context',
'ModuleNotExposed',
'MultiError',
'RemoteActorError',
'ContextCancelled',
'ModuleNotExposed',
'MsgStream',
'BaseExceptionGroup',
'Portal',
'RemoteActorError',
'breakpoint',
'context',
'current_actor',
'find_actor',
'get_arbiter',
@ -43,14 +77,10 @@ __all__ = [
'open_actor_cluster',
'open_nursery',
'open_root_actor',
'Portal',
'post_mortem',
'run',
'query_actor',
'run_daemon',
'stream',
'context',
'ReceiveMsgStream',
'MsgStream',
'to_asyncio',
'wait_for_actor',
]

View File

@ -1,4 +1,22 @@
"""This is the "bootloader" for actors started using the native trio backend.
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
This is the "bootloader" for actors started using the native trio backend.
"""
import sys
import trio
@ -6,7 +24,7 @@ import argparse
from ast import literal_eval
from ._actor import Actor
from ._runtime import Actor
from ._entry import _trio_main
@ -19,12 +37,15 @@ def parse_ipaddr(arg):
return (str(host), int(port))
from ._entry import _trio_main
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--uid", type=parse_uid)
parser.add_argument("--loglevel", type=str)
parser.add_argument("--parent_addr", type=parse_ipaddr)
parser.add_argument("--asyncio", action='store_true')
args = parser.parse_args()
subactor = Actor(
@ -36,5 +57,6 @@ if __name__ == "__main__":
_trio_main(
subactor,
parent_addr=args.parent_addr
parent_addr=args.parent_addr,
infect_asyncio=args.asyncio,
)

View File

@ -1,3 +1,19 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Actor cluster helpers.
@ -16,9 +32,12 @@ import tractor
async def open_actor_cluster(
modules: list[str],
count: int = cpu_count(),
names: Optional[list[str]] = None,
start_method: Optional[str] = None,
names: list[str] | None = None,
hard_kill: bool = False,
# passed through verbatim to ``open_root_actor()``
**runtime_kwargs,
) -> AsyncGenerator[
dict[str, tractor.Portal],
None,
@ -33,7 +52,9 @@ async def open_actor_cluster(
raise ValueError(
f'Number of names is {len(names)} but count is {count}')
async with tractor.open_nursery(start_method=start_method) as an:
async with tractor.open_nursery(
**runtime_kwargs,
) as an:
async with trio.open_nursery() as n:
uid = tractor.current_actor().uid

File diff suppressed because it is too large

View File

@ -1,9 +1,29 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Actor discovery API.
"""
import typing
from typing import Tuple, Optional, Union
from async_generator import asynccontextmanager
from typing import (
Optional,
Union,
AsyncGenerator,
)
from contextlib import asynccontextmanager as acm
from ._ipc import _connect_chan, Channel
from ._portal import (
@ -14,13 +34,13 @@ from ._portal import (
from ._state import current_actor, _runtime_vars
@asynccontextmanager
@acm
async def get_arbiter(
host: str,
port: int,
) -> typing.AsyncGenerator[Union[Portal, LocalPortal], None]:
) -> AsyncGenerator[Union[Portal, LocalPortal], None]:
'''Return a portal instance connected to a local or remote
arbiter.
'''
@ -41,10 +61,10 @@ async def get_arbiter(
yield arb_portal
@asynccontextmanager
@acm
async def get_root(
**kwargs,
) -> typing.AsyncGenerator[Portal, None]:
) -> AsyncGenerator[Portal, None]:
host, port = _runtime_vars['_root_mailbox']
assert host is not None
@ -54,28 +74,56 @@ async def get_root(
yield portal
@asynccontextmanager
async def find_actor(
@acm
async def query_actor(
name: str,
arbiter_sockaddr: Tuple[str, int] = None
) -> typing.AsyncGenerator[Optional[Portal], None]:
"""Ask the arbiter to find actor(s) by name.
arbiter_sockaddr: Optional[tuple[str, int]] = None,
Returns a connected portal to the last registered matching actor
known to the arbiter.
"""
) -> AsyncGenerator[tuple[str, int], None]:
'''
Simple address lookup for a given actor name.
Returns the (socket) address or ``None``.
'''
actor = current_actor()
async with get_arbiter(*arbiter_sockaddr or actor._arb_addr) as arb_portal:
async with get_arbiter(
*arbiter_sockaddr or actor._arb_addr
) as arb_portal:
sockaddr = await arb_portal.run_from_ns('self', 'find_actor', name=name)
sockaddr = await arb_portal.run_from_ns(
'self',
'find_actor',
name=name,
)
# TODO: return portals to all available actors - for now just
# the last one that registered
if name == 'arbiter' and actor.is_arbiter:
raise RuntimeError("The current actor is the arbiter")
elif sockaddr:
yield sockaddr if sockaddr else None
@acm
async def find_actor(
name: str,
arbiter_sockaddr: tuple[str, int] | None = None
) -> AsyncGenerator[Optional[Portal], None]:
'''
Ask the arbiter to find actor(s) by name.
Returns a connected portal to the last registered matching actor
known to the arbiter.
'''
async with query_actor(
name=name,
arbiter_sockaddr=arbiter_sockaddr,
) as sockaddr:
if sockaddr:
async with _connect_chan(*sockaddr) as chan:
async with open_portal(chan) as portal:
yield portal
@ -83,20 +131,25 @@ async def find_actor(
yield None
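# (editor's sketch, assuming the public api shown above) typical usage of
# the two discovery helpers might look like:
#
#   async with tractor.query_actor('streamer') as sockaddr:
#       print(sockaddr)  # e.g. ('127.0.0.1', 50123) or None
#
#   async with tractor.find_actor('streamer') as portal:
#       if portal:
#           await portal.run_from_ns('self', 'get_registry')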
@asynccontextmanager
@acm
async def wait_for_actor(
name: str,
arbiter_sockaddr: Tuple[str, int] = None
) -> typing.AsyncGenerator[Portal, None]:
arbiter_sockaddr: tuple[str, int] | None = None
) -> AsyncGenerator[Portal, None]:
"""Wait on an actor to register with the arbiter.
A portal to the first registered actor is returned.
"""
actor = current_actor()
async with get_arbiter(*arbiter_sockaddr or actor._arb_addr) as arb_portal:
sockaddrs = await arb_portal.run_from_ns('self', 'wait_for_actor', name=name)
async with get_arbiter(
*arbiter_sockaddr or actor._arb_addr,
) as arb_portal:
sockaddrs = await arb_portal.run_from_ns(
'self',
'wait_for_actor',
name=name,
)
sockaddr = sockaddrs[-1]
async with _connect_chan(*sockaddr) as chan:

View File

@ -1,28 +1,64 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Sub-process entry points.
"""
from __future__ import annotations
from functools import partial
from typing import Tuple, Any
import signal
from typing import (
Any,
TYPE_CHECKING,
)
import trio # type: ignore
from .log import get_console_log, get_logger
from .log import (
get_console_log,
get_logger,
)
from . import _state
from .to_asyncio import run_as_asyncio_guest
from ._runtime import (
async_main,
Actor,
)
if TYPE_CHECKING:
from ._spawn import SpawnMethodKey
log = get_logger(__name__)
def _mp_main(
actor: 'Actor', # type: ignore
accept_addr: Tuple[str, int],
forkserver_info: Tuple[Any, Any, Any, Any, Any],
start_method: str,
parent_addr: Tuple[str, int] = None,
actor: Actor, # type: ignore
accept_addr: tuple[str, int],
forkserver_info: tuple[Any, Any, Any, Any, Any],
start_method: SpawnMethodKey,
parent_addr: tuple[str, int] | None = None,
infect_asyncio: bool = False,
) -> None:
"""The routine called *after fork* which invokes a fresh ``trio.run``
"""
'''
The routine called *after fork* which invokes a fresh ``trio.run``
'''
actor._forkserver_info = forkserver_info
from ._spawn import try_set_start_method
spawn_ctx = try_set_start_method(start_method)
@ -40,11 +76,16 @@ def _mp_main(
log.debug(f"parent_addr is {parent_addr}")
trio_main = partial(
actor._async_main,
async_main,
actor,
accept_addr,
parent_addr=parent_addr
)
try:
if infect_asyncio:
actor._infected_aio = True
run_as_asyncio_guest(trio_main)
else:
trio.run(trio_main)
except KeyboardInterrupt:
pass # handle it the same way trio does?
@ -54,16 +95,17 @@ def _mp_main(
def _trio_main(
actor: 'Actor', # type: ignore
*,
parent_addr: Tuple[str, int] = None,
) -> None:
"""Entry point for a `trio_run_in_process` subactor.
"""
# Disable sigint handling in children;
# we don't need it thanks to our cancellation machinery.
# signal.signal(signal.SIGINT, signal.SIG_IGN)
actor: Actor, # type: ignore
*,
parent_addr: tuple[str, int] | None = None,
infect_asyncio: bool = False,
) -> None:
'''
Entry point for a `trio_run_in_process` subactor.
'''
log.info(f"Started new trio process for {actor.uid}")
if actor.loglevel is not None:
@ -78,11 +120,16 @@ def _trio_main(
log.debug(f"parent_addr is {parent_addr}")
trio_main = partial(
actor._async_main,
async_main,
actor,
parent_addr=parent_addr
)
try:
if infect_asyncio:
actor._infected_aio = True
run_as_asyncio_guest(trio_main)
else:
trio.run(trio_main)
except KeyboardInterrupt:
log.warning(f"Actor {actor.uid} received KBI")

View File

@ -1,11 +1,33 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Our classy exception set.
"""
from typing import Dict, Any, Optional, Type
from typing import (
Any,
Optional,
Type,
)
import importlib
import builtins
import traceback
import exceptiongroup as eg
import trio
@ -31,9 +53,6 @@ class RemoteActorError(Exception):
self.type = suberror_type
self.msgdata = msgdata
# TODO: a trio.MultiError.catch like context manager
# for catching underlying remote errors of a particular type
class InternalActorError(RemoteActorError):
"""Remote internal ``tractor`` error indicating
@ -65,11 +84,20 @@ class StreamOverrun(trio.TooSlowError):
"This stream was overrun by sender"
class AsyncioCancelled(Exception):
'''
Asyncio cancelled translation (non-base) error
for use with the ``to_asyncio`` module
to be raised in the ``trio`` side task
'''
def pack_error(
exc: BaseException,
tb=None,
) -> Dict[str, Any]:
) -> dict[str, Any]:
"""Create an "error message" for tranmission over
a channel (aka the wire).
"""
@ -88,15 +116,17 @@ def pack_error(
def unpack_error(
msg: Dict[str, Any],
msg: dict[str, Any],
chan=None,
err_type=RemoteActorError
) -> Exception:
"""Unpack an 'error' message from the wire
'''
Unpack an 'error' message from the wire
into a local ``RemoteActorError``.
"""
'''
__tracebackhide__ = True
error = msg['error']
tb_str = error.get('tb_str', '')
@ -109,7 +139,12 @@ def unpack_error(
suberror_type = trio.Cancelled
else: # try to lookup a suitable local error type
for ns in [builtins, _this_mod, trio]:
for ns in [
builtins,
_this_mod,
eg,
trio,
]:
try:
suberror_type = getattr(ns, type_name)
break
@ -128,12 +163,15 @@ def unpack_error(
def is_multi_cancelled(exc: BaseException) -> bool:
"""Predicate to determine if a ``trio.MultiError`` contains only
``trio.Cancelled`` sub-exceptions (and is likely the result of
'''
Predicate to determine if a possible ``eg.BaseExceptionGroup`` contains
only ``trio.Cancelled`` sub-exceptions (and is likely the result of
cancelling a collection of subtasks).
"""
return not trio.MultiError.filter(
lambda exc: exc if not isinstance(exc, trio.Cancelled) else None,
exc,
)
'''
if isinstance(exc, eg.BaseExceptionGroup):
return exc.subgroup(
lambda exc: isinstance(exc, trio.Cancelled)
) is not None
return False
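# (editor's sketch, hypothetical call-site) typical usage might be:
#
#   try:
#       await run_subtasks()  # hypothetical helper
#   except BaseExceptionGroup as beg:
#       if is_multi_cancelled(beg):
#           pass  # (likely) just cancellations from tearing down subtasks
#       else:
#           raise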

View File

@ -1,3 +1,19 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
This is near-copy of the 3.8 stdlib's ``multiprocessing.forkserver.py``
with some hackery to prevent any more than a single forkserver and

View File

@ -1,3 +1,19 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Inter-process comms abstractions
@ -6,14 +22,21 @@ from __future__ import annotations
import platform
import struct
import typing
from collections.abc import AsyncGenerator, AsyncIterator
from collections.abc import (
AsyncGenerator,
AsyncIterator,
)
from typing import (
Any, Tuple, Optional,
Type, Protocol, TypeVar,
Any,
runtime_checkable,
Optional,
Protocol,
Type,
TypeVar,
)
from tricycle import BufferedReceiveStream
import msgpack
import msgspec
import trio
from async_generator import asynccontextmanager
@ -26,7 +49,7 @@ _is_windows = platform.system() == 'Windows'
log = get_logger(__name__)
def get_stream_addrs(stream: trio.SocketStream) -> Tuple:
def get_stream_addrs(stream: trio.SocketStream) -> tuple:
# should both be IP sockets
lsockname = stream.socket.getsockname()
rsockname = stream.socket.getpeername()
@ -44,6 +67,7 @@ MsgType = TypeVar("MsgType")
# - https://jcristharif.com/msgspec/usage.html#structs
@runtime_checkable
class MsgTransport(Protocol[MsgType]):
stream: trio.SocketStream
@ -71,22 +95,27 @@ class MsgTransport(Protocol[MsgType]):
...
@property
def laddr(self) -> Tuple[str, int]:
def laddr(self) -> tuple[str, int]:
...
@property
def raddr(self) -> Tuple[str, int]:
def raddr(self) -> tuple[str, int]:
...
class MsgpackTCPStream:
'''A ``trio.SocketStream`` delivering ``msgpack`` formatted data
using ``msgpack-python``.
# TODO: not sure why we have to inherit here, but it seems to be an
# issue with ``get_msg_transport()`` returning a ``Type[Protocol]``;
# probably should make a `mypy` issue?
class MsgpackTCPStream(MsgTransport):
'''
A ``trio.SocketStream`` delivering ``msgpack`` formatted data
using the ``msgspec`` codec lib.
'''
def __init__(
self,
stream: trio.SocketStream,
prefix_size: int = 4,
) -> None:
@ -103,118 +132,19 @@ class MsgpackTCPStream:
# public i guess?
self.drained: list[dict] = []
async def _iter_packets(self) -> AsyncGenerator[dict, None]:
"""Yield packets from the underlying stream.
"""
unpacker = msgpack.Unpacker(
raw=False,
use_list=False,
strict_map_key=False
)
while True:
try:
data = await self.stream.receive_some(2**10)
except trio.BrokenResourceError as err:
msg = err.args[0]
# XXX: handle connection-reset-by-peer the same as a EOF.
# we're currently remapping this since we allow
# a quick connect then drop for root actors when
# checking to see if there exists an "arbiter"
# on the chosen sockaddr (``_root.py:108`` or thereabouts)
if (
# nix
'[Errno 104]' in msg or
# on windows it seems there are a variety of errors
# to handle..
_is_windows
):
raise TransportClosed(
f'{self} was broken with {msg}'
)
else:
raise
log.transport(f"received {data}") # type: ignore
if data == b'':
raise TransportClosed(
f'transport {self} was already closed prior to read'
)
unpacker.feed(data)
for packet in unpacker:
yield packet
@property
def laddr(self) -> Tuple[Any, ...]:
return self._laddr
@property
def raddr(self) -> Tuple[Any, ...]:
return self._raddr
async def send(self, msg: Any) -> None:
async with self._send_lock:
return await self.stream.send_all(
msgpack.dumps(msg, use_bin_type=True)
)
async def recv(self) -> Any:
return await self._agen.asend(None)
async def drain(self) -> AsyncIterator[dict]:
'''
Drain the stream's remaining messages sent from
the far end until the connection is closed by
the peer.
'''
try:
async for msg in self._iter_packets():
self.drained.append(msg)
except TransportClosed:
for msg in self.drained:
yield msg
def __aiter__(self):
return self._agen
def connected(self) -> bool:
return self.stream.socket.fileno() != -1
class MsgspecTCPStream(MsgpackTCPStream):
'''
A ``trio.SocketStream`` delivering ``msgpack`` formatted data
using ``msgspec``.
'''
def __init__(
self,
stream: trio.SocketStream,
prefix_size: int = 4,
) -> None:
import msgspec
super().__init__(stream)
self.recv_stream = BufferedReceiveStream(transport_stream=stream)
self.prefix_size = prefix_size
# TODO: struct aware messaging coders
self.encode = msgspec.Encoder().encode
self.decode = msgspec.Decoder().decode # dict[str, Any])
self.encode = msgspec.msgpack.Encoder().encode
self.decode = msgspec.msgpack.Decoder().decode # dict[str, Any])
async def _iter_packets(self) -> AsyncGenerator[dict, None]:
'''Yield packets from the underlying stream.
'''
import msgspec # noqa
last_decode_failed: bool = False
decodes_failed: int = 0
while True:
try:
@ -222,6 +152,7 @@ class MsgspecTCPStream(MsgpackTCPStream):
except (
ValueError,
ConnectionResetError,
# not sure entirely why we need this but without it we
# seem to be getting racy failures here on
@ -247,15 +178,24 @@ class MsgspecTCPStream(MsgpackTCPStream):
try:
yield self.decode(msg_bytes)
except (
msgspec.DecodingError,
msgspec.DecodeError,
UnicodeDecodeError,
):
if not last_decode_failed:
if decodes_failed < 4:
# ignore decoding errors for now and assume they have to
# do with a channel drop - hope that receiving from the
# channel will raise an expected error and bubble up.
log.error('`msgspec` failed to decode!?')
last_decode_failed = True
try:
msg_str: str | bytes = msg_bytes.decode()
except UnicodeDecodeError:
msg_str = msg_bytes
log.error(
'`msgspec` failed to decode!?\n'
'dumping bytes:\n'
f'{msg_str!r}'
)
decodes_failed += 1
else:
raise
@ -270,16 +210,46 @@ class MsgspecTCPStream(MsgpackTCPStream):
return await self.stream.send_all(size + bytes_data)
@property
def laddr(self) -> tuple[str, int]:
return self._laddr
@property
def raddr(self) -> tuple[str, int]:
return self._raddr
async def recv(self) -> Any:
return await self._agen.asend(None)
async def drain(self) -> AsyncIterator[dict]:
'''
Drain the stream's remaining messages sent from
the far end until the connection is closed by
the peer.
'''
try:
async for msg in self._iter_packets():
self.drained.append(msg)
except TransportClosed:
for msg in self.drained:
yield msg
def __aiter__(self):
return self._agen
def connected(self) -> bool:
return self.stream.socket.fileno() != -1
def get_msg_transport(
key: Tuple[str, str],
key: tuple[str, str],
) -> Type[MsgTransport]:
return {
('msgpack', 'tcp'): MsgpackTCPStream,
('msgspec', 'tcp'): MsgspecTCPStream,
}[key]
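By way of illustration (not from the patch): the table above maps a ``(codec, transport)`` key to a concrete transport type. A rough usage sketch over an existing ``trio.SocketStream``; the ``_ping`` helper and its message shape are made up:
async def _ping(stream: trio.SocketStream) -> Any:
    transport_cls = get_msg_transport(('msgpack', 'tcp'))
    msgstream = transport_cls(stream)
    await msgstream.send({'cmd': 'ping'})  # hypothetical message
    return await msgstream.recv()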
@ -288,16 +258,18 @@ class Channel:
An inter-process channel for communication between (remote) actors.
Wraps a ``MsgStream``: transport + encoding IPC connection.
Currently we only support ``trio.SocketStream`` for transport
(aka TCP).
(aka TCP) and the ``msgpack`` interchange format via the ``msgspec``
codec library.
'''
def __init__(
self,
destaddr: Optional[Tuple[str, int]],
destaddr: Optional[tuple[str, int]],
msg_transport_type_key: Tuple[str, str] = ('msgpack', 'tcp'),
msg_transport_type_key: tuple[str, str] = ('msgpack', 'tcp'),
# TODO: optional reconnection support?
# auto_reconnect: bool = False,
@ -308,14 +280,6 @@ class Channel:
# self._recon_seq = on_reconnect
# self._autorecon = auto_reconnect
# TODO: maybe expose this through the nursery api?
try:
# if installed load the msgspec transport since it's faster
import msgspec # noqa
msg_transport_type_key = ('msgspec', 'tcp')
except ImportError:
pass
self._destaddr = destaddr
self._transport_key = msg_transport_type_key
@ -325,7 +289,7 @@ class Channel:
self.msgstream: Optional[MsgTransport] = None
# set after handshake - always uid of far end
self.uid: Optional[Tuple[str, str]] = None
self.uid: Optional[tuple[str, str]] = None
self._agen = self._aiter_recv()
self._exc: Optional[Exception] = None # set if far end actor errors
@ -353,7 +317,7 @@ class Channel:
def set_msg_transport(
self,
stream: trio.SocketStream,
type_key: Optional[Tuple[str, str]] = None,
type_key: Optional[tuple[str, str]] = None,
) -> MsgTransport:
type_key = type_key or self._transport_key
@ -368,16 +332,16 @@ class Channel:
return object.__repr__(self)
@property
def laddr(self) -> Optional[Tuple[str, int]]:
def laddr(self) -> Optional[tuple[str, int]]:
return self.msgstream.laddr if self.msgstream else None
@property
def raddr(self) -> Optional[Tuple[str, int]]:
def raddr(self) -> Optional[tuple[str, int]]:
return self.msgstream.raddr if self.msgstream else None
async def connect(
self,
destaddr: Tuple[Any, ...] = None,
destaddr: tuple[Any, ...] | None = None,
**kwargs
) -> MsgTransport:


@ -1,23 +1,39 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Helpers pulled mostly verbatim from ``multiprocessing.spawn``
to aid with "fixing up" the ``__main__`` module in subprocesses.
These helpers are needed for any spawning backend that doesn't already handle this.
For example when using ``trio_run_in_process`` it is needed but obviously not when
we're already using ``multiprocessing``.
These helpers are needed for any spawning backend that doesn't already
handle this. For example when using ``trio_run_in_process`` it is needed
but obviously not when we're already using ``multiprocessing``.
"""
import os
import sys
import platform
import types
import runpy
from typing import Dict
ORIGINAL_DIR = os.path.abspath(os.getcwd())
def _mp_figure_out_main() -> Dict[str, str]:
def _mp_figure_out_main() -> dict[str, str]:
"""Taken from ``multiprocessing.spawn.get_preparation_data()``.
Retrieve parent actor `__main__` module data.


@ -1,73 +1,75 @@
"""
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Memory boundary "Portals": an API for structured
concurrency linked tasks running in disparate memory domains.
"""
'''
from __future__ import annotations
import importlib
import inspect
from typing import (
Any, Optional,
Callable, AsyncGenerator
Callable, AsyncGenerator,
Type,
)
from functools import partial
from dataclasses import dataclass
from pprint import pformat
import warnings
import trio
from async_generator import asynccontextmanager
from .trionics import maybe_open_nursery
from ._state import current_actor
from ._ipc import Channel
from .log import get_logger
from .msg import NamespacePath
from ._exceptions import (
unpack_error,
NoResult,
ContextCancelled,
)
from ._streaming import Context, ReceiveMsgStream
from ._streaming import (
Context,
MsgStream,
)
log = get_logger(__name__)
@asynccontextmanager
async def maybe_open_nursery(
nursery: trio.Nursery = None,
shield: bool = False,
) -> AsyncGenerator[trio.Nursery, Any]:
'''
Create a new nursery if None provided.
Blocks on exit as expected if no input nursery is provided.
'''
if nursery is not None:
yield nursery
else:
async with trio.open_nursery() as nursery:
nursery.cancel_scope.shield = shield
yield nursery
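A quick usage sketch (illustrative only) of the helper above: pass an existing nursery to scope work under the caller, or ``None`` to get a private one that blocks on exit:
async def serve(n: trio.Nursery | None = None) -> None:
    async with maybe_open_nursery(n) as nursery:
        nursery.start_soon(trio.sleep, 1)  # stand-in for a real task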
def func_deats(func: Callable) -> tuple[str, str]:
return (
func.__module__,
func.__name__,
)
def _unwrap_msg(
msg: dict[str, Any],
channel: Channel
) -> Any:
__tracebackhide__ = True
try:
return msg['return']
except KeyError:
# internal error should never get here
assert msg.get('cid'), "Received internal error at portal?"
raise unpack_error(msg, channel)
raise unpack_error(msg, channel) from None
class MessagingError(Exception):
'Some kind of unexpected SC messaging dialog issue'
class Portal:
@ -102,7 +104,7 @@ class Portal:
# it is expected that ``result()`` will be awaited at some
# point.
self._expect_result: Optional[Context] = None
self._streams: set[ReceiveMsgStream] = set()
self._streams: set[MsgStream] = set()
self.actor = current_actor()
async def _submit_for_result(
@ -137,6 +139,7 @@ class Portal:
Return the result(s) from the remote actor's "main" task.
'''
# __tracebackhide__ = True
# Check for non-rpc errors slapped on the
# channel for which we always raise
exc = self.channel._exc
@ -186,7 +189,7 @@ class Portal:
async def cancel_actor(
self,
timeout: float = None,
timeout: float | None = None,
) -> bool:
'''
@ -296,7 +299,7 @@ class Portal:
raise TypeError(
f'{func} must be a non-streaming async function!')
fn_mod_path, fn_name = func_deats(func)
fn_mod_path, fn_name = NamespacePath.from_ref(func).to_tuple()
ctx = await self.actor.start_remote_task(
self.channel,
@ -316,7 +319,7 @@ class Portal:
async_gen_func: Callable, # typing: ignore
**kwargs,
) -> AsyncGenerator[ReceiveMsgStream, None]:
) -> AsyncGenerator[MsgStream, None]:
if not inspect.isasyncgenfunction(async_gen_func):
if not (
@ -326,7 +329,8 @@ class Portal:
raise TypeError(
f'{async_gen_func} must be an async generator function!')
fn_mod_path, fn_name = func_deats(async_gen_func)
fn_mod_path, fn_name = NamespacePath.from_ref(
async_gen_func).to_tuple()
ctx = await self.actor.start_remote_task(
self.channel,
fn_mod_path,
@ -340,7 +344,7 @@ class Portal:
try:
# deliver receive only stream
async with ReceiveMsgStream(
async with MsgStream(
ctx, ctx._recv_chan,
) as rchan:
self._streams.add(rchan)
@ -392,9 +396,7 @@ class Portal:
raise TypeError(
f'{func} must be an async generator function!')
__tracebackhide__ = True
fn_mod_path, fn_name = func_deats(func)
fn_mod_path, fn_name = NamespacePath.from_ref(func).to_tuple()
ctx = await self.actor.start_remote_task(
self.channel,
@ -416,14 +418,21 @@ class Portal:
assert msg.get('cid'), ("Received internal error at context?")
if msg.get('error'):
# raise the error message
raise unpack_error(msg, self.channel)
# raise kerr from unpack_error(msg, self.channel)
raise unpack_error(msg, self.channel) from None
else:
raise
raise MessagingError(
f'Context for {ctx.cid} was expecting a `started` message'
f' but received a non-error msg:\n{pformat(msg)}'
)
_err: Optional[BaseException] = None
ctx._portal = self
uid = self.channel.uid
cid = ctx.cid
etype: Optional[Type[BaseException]] = None
# deliver context instance and .started() msg value in open tuple.
try:
async with trio.open_nursery() as scope_nursery:
@ -455,17 +464,27 @@ class Portal:
# sure it's worth being pedantic:
# Exception,
# trio.Cancelled,
# trio.MultiError,
# KeyboardInterrupt,
) as err:
_err = err
etype = type(err)
# the context cancels itself on any cancel
# causing error.
log.cancel(
f'Context to {self.channel.uid} sending cancel request..')
if ctx.chan.connected():
log.cancel(
'Context cancelled for task, sending cancel request..\n'
f'task:{cid}\n'
f'actor:{uid}'
)
await ctx.cancel()
else:
log.warning(
'IPC connection for context is broken?\n'
f'task:{cid}\n'
f'actor:{uid}'
)
raise
finally:
@ -474,7 +493,17 @@ class Portal:
# sure we get the error the underlying feeder mem chan.
# if it's not raised here it *should* be raised from the
# msg loop nursery right?
if ctx.chan.connected():
log.info(
'Waiting on final context-task result for\n'
f'task: {cid}\n'
f'actor: {uid}'
)
result = await ctx.result()
log.runtime(
f'Context {fn_name} returned '
f'value from callee `{result}`'
)
# though it should be impossible for any tasks
# operating *in* this scope to have survived
@ -484,23 +513,34 @@ class Portal:
# should we encapsulate this in the context api?
await ctx._recv_chan.aclose()
if _err:
if etype:
if ctx._cancel_called:
log.cancel(
f'Context {fn_name} cancelled by caller with\n{_err}'
f'Context {fn_name} cancelled by caller with\n{etype}'
)
elif _err is not None:
log.cancel(
f'Context {fn_name} cancelled by callee with\n{_err}'
)
else:
log.runtime(
f'Context {fn_name} returned '
f'value from callee `{result}`'
f'Context for task cancelled by callee with {etype}\n'
f'target: `{fn_name}`\n'
f'task:{cid}\n'
f'actor:{uid}'
)
# XXX: (MEGA IMPORTANT) if this is a root opened process we
# wait for any immediate child in debug before popping the
# context from the runtime msg loop otherwise inside
# ``Actor._push_result()`` the msg will be discarded and in
# the case where that msg is global debugger unlock (via
# a "stop" msg for a stream), this can result in a deadlock
# where the root is waiting on the lock to clear but the
# child has already cleared it and clobbered IPC.
from ._debug import maybe_wait_for_debugger
await maybe_wait_for_debugger()
# remove the context from runtime tracking
self.actor._contexts.pop((self.channel.uid, ctx.cid))
self.actor._contexts.pop(
(self.channel.uid, ctx.cid),
None,
)
@dataclass
@ -557,9 +597,11 @@ async def open_portal(
msg_loop_cs: Optional[trio.CancelScope] = None
if start_msg_loop:
from ._runtime import process_messages
msg_loop_cs = await nursery.start(
partial(
actor._process_messages,
process_messages,
actor,
channel,
# if the local task is cancelled we want to keep
# the msg loop running until our block ends


@ -1,17 +1,42 @@
"""
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Root actor runtime ignition(s).
"""
'''
from contextlib import asynccontextmanager
from functools import partial
import importlib
import logging
import signal
import sys
import os
from typing import Tuple, Optional, List, Any
import typing
import warnings
from exceptiongroup import BaseExceptionGroup
import trio
from ._actor import Actor, Arbiter
from ._runtime import (
Actor,
Arbiter,
async_main,
)
from . import _debug
from . import _spawn
from . import _state
@ -31,37 +56,45 @@ logger = log.get_logger('tractor')
@asynccontextmanager
async def open_root_actor(
*,
# defaults are above
arbiter_addr: Optional[Tuple[str, int]] = (
_default_arbiter_host,
_default_arbiter_port,
),
arbiter_addr: tuple[str, int] | None = None,
name: Optional[str] = 'root',
# defaults are above
registry_addr: tuple[str, int] | None = None,
name: str | None = 'root',
# either the `multiprocessing` start method:
# https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
# OR `trio` (the new default).
start_method: Optional[str] = None,
start_method: _spawn.SpawnMethodKey | None = None,
# enables the multi-process debugger support
debug_mode: bool = False,
# internal logging
loglevel: Optional[str] = None,
loglevel: str | None = None,
enable_modules: Optional[List] = None,
rpc_module_paths: Optional[List] = None,
enable_modules: list | None = None,
rpc_module_paths: list | None = None,
) -> typing.Any:
"""Async entry point for ``tractor``.
'''
Runtime init entry point for ``tractor``.
"""
'''
# Override the global debugger hook to make it play nice with
# ``trio``, see:
# ``trio``, see much discussion in:
# https://github.com/python-trio/trio/issues/1155#issuecomment-742964018
builtin_bp_handler = sys.breakpointhook
orig_bp_path: str | None = os.environ.get('PYTHONBREAKPOINT', None)
os.environ['PYTHONBREAKPOINT'] = 'tractor._debug._set_trace'
# attempt to retrieve ``trio``'s sigint handler and stash it
# on our debugger lock state.
_debug.Lock._trio_handler = signal.getsignal(signal.SIGINT)
# mark top most level process as root actor
_state._runtime_vars['_is_root'] = True
@ -80,6 +113,25 @@ async def open_root_actor(
if start_method is not None:
_spawn.try_set_start_method(start_method)
if arbiter_addr is not None:
warnings.warn(
'`arbiter_addr` is now deprecated and has been renamed to '
'`registry_addr`.\nUse that instead..',
DeprecationWarning,
stacklevel=2,
)
registry_addr = (host, port) = (
registry_addr
or arbiter_addr
or (
_default_arbiter_host,
_default_arbiter_port,
)
)
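# (illustrative aside, not from the patch: given the fallback above,
#  a caller migrating off the deprecated kwarg switches from
#  ``open_root_actor(arbiter_addr=(host, port))`` to
#  ``open_root_actor(registry_addr=(host, port))``; both resolve to
#  the same ``(host, port)`` tuple here.)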
loglevel = (loglevel or log._default_loglevel).upper()
if debug_mode and _spawn._spawn_method == 'trio':
_state._runtime_vars['_debug_mode'] = True
@ -87,38 +139,41 @@ async def open_root_actor(
# for use of ``await tractor.breakpoint()``
enable_modules.append('tractor._debug')
if loglevel is None:
loglevel = 'pdb'
# if debug mode gets enabled *at least* use that level of
# logging for some informative console prompts.
if (
logging.getLevelName(
# lul, need the upper case for the -> int map?
# sweet "dynamic function behaviour" stdlib...
loglevel,
) > logging.getLevelName('PDB')
):
loglevel = 'PDB'
elif debug_mode:
raise RuntimeError(
"Debug mode is only supported for the `trio` backend!"
)
arbiter_addr = (host, port) = arbiter_addr or (
_default_arbiter_host,
_default_arbiter_port,
)
loglevel = loglevel or log.get_loglevel()
if loglevel is not None:
log._default_loglevel = loglevel
log.get_console_log(loglevel)
# make a temporary connection to see if an arbiter exists
try:
# make a temporary connection to see if an arbiter exists,
# if one can't be made quickly we assume none exists.
arbiter_found = False
try:
# TODO: this connect-and-bail forces us to have to carefully
# rewrap TCP 104-connection-reset errors as EOF so as to avoid
# propagating cancel-causing errors to the channel-msg loop
# machinery. Likely it would be better to eventually have
# a "discovery" protocol with basic handshake instead.
with trio.move_on_after(1):
async with _connect_chan(host, port):
arbiter_found = True
except OSError:
logger.warning(f"No actor could be found @ {host}:{port}")
# TODO: make this a "discovery" log level?
logger.warning(f"No actor registry found @ {host}:{port}")
# create a local actor and start up its main routine/task
if arbiter_found:
@ -128,7 +183,7 @@ async def open_root_actor(
actor = Actor(
name or 'anonymous',
arbiter_addr=arbiter_addr,
arbiter_addr=registry_addr,
loglevel=loglevel,
enable_modules=enable_modules,
)
@ -144,7 +199,7 @@ async def open_root_actor(
actor = Arbiter(
name or 'arbiter',
arbiter_addr=arbiter_addr,
arbiter_addr=registry_addr,
loglevel=loglevel,
enable_modules=enable_modules,
)
@ -160,13 +215,14 @@ async def open_root_actor(
# start the actor runtime in a new task
async with trio.open_nursery() as nursery:
# ``Actor._async_main()`` creates an internal nursery and
# ``_runtime.async_main()`` creates an internal nursery and
# thus blocks here until the entire underlying actor tree has
# terminated thereby conducting structured concurrency.
await nursery.start(
partial(
actor._async_main,
async_main,
actor,
accept_addr=(host, port),
parent_addr=None
)
@ -174,7 +230,10 @@ async def open_root_actor(
try:
yield actor
except (Exception, trio.MultiError) as err:
except (
Exception,
BaseExceptionGroup,
) as err:
entered = await _debug._maybe_enter_pm(err)
@ -185,71 +244,69 @@ async def open_root_actor(
raise
finally:
# NOTE: not sure if we'll ever need this but it's
# possibly better for even more determinism?
# logger.cancel(
# f'Waiting on {len(nurseries)} nurseries in root..')
# nurseries = actor._actoruid2nursery.values()
# async with trio.open_nursery() as tempn:
# for an in nurseries:
# tempn.start_soon(an.exited.wait)
logger.cancel("Shutting down root actor")
await actor.cancel()
finally:
_state._current_actor = None
# restore breakpoint hook state
sys.breakpointhook = builtin_bp_handler
if orig_bp_path is not None:
os.environ['PYTHONBREAKPOINT'] = orig_bp_path
else:
# clear env back to having no entry
os.environ.pop('PYTHONBREAKPOINT')
logger.runtime("Root actor terminated")
def run(
# target
async_fn: typing.Callable[..., typing.Awaitable],
*args,
def run_daemon(
enable_modules: list[str],
# runtime kwargs
name: Optional[str] = 'root',
arbiter_addr: Tuple[str, int] = (
name: str | None = 'root',
registry_addr: tuple[str, int] = (
_default_arbiter_host,
_default_arbiter_port,
),
start_method: Optional[str] = None,
start_method: str | None = None,
debug_mode: bool = False,
**kwargs,
**kwargs
) -> Any:
"""Run a trio-actor async function in process.
) -> None:
'''
Spawn a daemon actor which will respond to RPC; the main task simply
starts the runtime and then sleeps forever.
This is a very minimal convenience wrapper around starting
a "run-until-cancelled" root actor which can be started with a set
of enabled modules for RPC request handling.
'''
kwargs['enable_modules'] = list(enable_modules)
for path in enable_modules:
importlib.import_module(path)
This is tractor's main entry and the start point for any async actor.
"""
async def _main():
async with open_root_actor(
arbiter_addr=arbiter_addr,
registry_addr=registry_addr,
name=name,
start_method=start_method,
debug_mode=debug_mode,
**kwargs,
):
return await trio.sleep_forever()
return await async_fn(*args)
warnings.warn(
"`tractor.run()` is now deprecated. `tractor` now"
" implicitly starts the root actor on first actor nursery"
" use. If you want to start the root actor manually, use"
" `tractor.open_root_actor()`.",
DeprecationWarning,
stacklevel=2,
)
return trio.run(_main)
def run_daemon(
rpc_module_paths: List[str],
**kwargs
) -> None:
"""Spawn daemon actor which will respond to RPC.
This is a convenience wrapper around
``tractor.run(trio.sleep(float('inf')))`` such that the first actor spawned
is meant to run forever responding to RPC requests.
"""
kwargs['rpc_module_paths'] = list(rpc_module_paths)
for path in rpc_module_paths:
importlib.import_module(path)
return run(partial(trio.sleep, float('inf')), **kwargs)
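For illustration (not from the patch): the reworked ``run_daemon()`` above now takes an explicit list of RPC modules; the module path and actor name below are hypothetical:
run_daemon(
    enable_modules=['myproject.rpc_endpoints'],  # hypothetical module
    name='service-root',
    loglevel='info',
)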

File diff suppressed because it is too large


@ -1,31 +1,39 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Machinery for actor process spawning using multiple backends.
"""
from __future__ import annotations
import sys
import multiprocessing as mp
import platform
from typing import (
Any, Dict, Optional, Union, Callable,
Any,
Awaitable,
Literal,
Callable,
TypeVar,
TYPE_CHECKING,
)
from collections.abc import Awaitable, Coroutine
from exceptiongroup import BaseExceptionGroup
import trio
from trio_typing import TaskStatus
try:
from multiprocessing import semaphore_tracker # type: ignore
resource_tracker = semaphore_tracker
resource_tracker._resource_tracker = resource_tracker._semaphore_tracker
except ImportError:
# 3.8 introduces a more general version that also tracks shared mems
from multiprocessing import resource_tracker # type: ignore
from multiprocessing import forkserver # type: ignore
from typing import Tuple
from . import _forkserver_override
from ._debug import (
maybe_wait_for_debugger,
acquire_debug_lock,
@ -36,24 +44,33 @@ from ._state import (
is_root_process,
debug_mode,
)
from .log import get_logger
from ._portal import Portal
from ._actor import Actor
from ._runtime import Actor
from ._entry import _mp_main
from ._exceptions import ActorFailure
log = get_logger('tractor')
if TYPE_CHECKING:
from ._supervise import ActorNursery
import multiprocessing as mp
ProcessType = TypeVar('ProcessType', mp.Process, trio.Process)
log = get_logger('tractor')
# placeholder for an mp start context if so using that backend
_ctx: Optional[mp.context.BaseContext] = None
_spawn_method: str = "trio"
_ctx: mp.context.BaseContext | None = None
SpawnMethodKey = Literal[
'trio', # supported on all platforms
'mp_spawn',
'mp_forkserver', # posix only
]
_spawn_method: SpawnMethodKey = 'trio'
if platform.system() == 'Windows':
import multiprocessing as mp
_ctx = mp.get_context("spawn")
async def proc_waiter(proc: mp.Process) -> None:
@ -65,39 +82,48 @@ else:
await trio.lowlevel.wait_readable(proc.sentinel)
def try_set_start_method(name: str) -> Optional[mp.context.BaseContext]:
"""Attempt to set the method for process starting, aka the "actor
def try_set_start_method(
key: SpawnMethodKey
) -> mp.context.BaseContext | None:
'''
Attempt to set the method for process starting, aka the "actor
spawning backend".
If the desired method is not supported this function will error.
On Windows only the ``multiprocessing`` "spawn" method is offered
besides the default ``trio`` which uses async wrapping around
``subprocess.Popen``.
"""
'''
import multiprocessing as mp
global _ctx
global _spawn_method
methods = mp.get_all_start_methods()
if 'fork' in methods:
mp_methods = mp.get_all_start_methods()
if 'fork' in mp_methods:
# forking is incompatible with ``trio``s global task tree
methods.remove('fork')
mp_methods.remove('fork')
# supported on all platforms
methods += ['trio']
if name not in methods:
raise ValueError(
f"Spawn method `{name}` is invalid please choose one of {methods}"
)
elif name == 'forkserver':
match key:
case 'mp_forkserver':
from . import _forkserver_override
_forkserver_override.override_stdlib()
_ctx = mp.get_context(name)
elif name == 'trio':
_ctx = None
else:
_ctx = mp.get_context(name)
_ctx = mp.get_context('forkserver')
_spawn_method = name
case 'mp_spawn':
_ctx = mp.get_context('spawn')
case 'trio':
_ctx = None
case _:
raise ValueError(
f'Spawn method `{key}` is invalid!\n'
f'Please choose one of {SpawnMethodKey}'
)
_spawn_method = key
return _ctx
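A minimal sketch (illustrative only) of selecting a spawn backend with the new key set; valid values are exactly the ``SpawnMethodKey`` literals:
# default backend: async subprocess management via ``trio``
try_set_start_method('trio')
# or opt into a stdlib ``multiprocessing`` backend instead
try_set_start_method('mp_spawn')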
@ -113,6 +139,7 @@ async def exhaust_portal(
If the main task is an async generator do our best to consume
what's left of it.
'''
__tracebackhide__ = True
try:
log.debug(f"Waiting on final result from {actor.uid}")
@ -120,8 +147,11 @@ async def exhaust_portal(
# always be established and shutdown using a context manager api
final = await portal.result()
except (Exception, trio.MultiError) as err:
# we reraise in the parent task via a ``trio.MultiError``
except (
Exception,
BaseExceptionGroup,
) as err:
# we reraise in the parent task via a ``BaseExceptionGroup``
return err
except trio.Cancelled as err:
# lol, of course we need this too ;P
@ -137,7 +167,7 @@ async def cancel_on_completion(
portal: Portal,
actor: Actor,
errors: Dict[Tuple[str, str], Exception],
errors: dict[tuple[str, str], Exception],
) -> None:
'''
@ -149,7 +179,7 @@ async def cancel_on_completion(
'''
# if this call errors we store the exception for later
# in ``errors`` which will be reraised inside
# a MultiError and we still send out a cancel request
# an exception group and we still send out a cancel request
result = await exhaust_portal(portal, actor)
if isinstance(result, Exception):
errors[actor.uid] = result
@ -169,16 +199,37 @@ async def cancel_on_completion(
async def do_hard_kill(
proc: trio.Process,
terminate_after: int = 3,
) -> None:
# NOTE: this timeout used to do nothing since we were shielding
# the ``.wait()`` inside ``new_proc()`` which will pretty much
# never release until the process exits, now it acts as
# a hard-kill time ultimatum.
log.debug(f"Terminating {proc}")
with trio.move_on_after(terminate_after) as cs:
# NOTE: This ``__aexit__()`` shields internally.
async with proc: # calls ``trio.Process.aclose()``
log.debug(f"Terminating {proc}")
# NOTE: code below was copied verbatim from the now deprecated
# (in 0.20.0) ``trio._subprocess.Process.aclose()``, orig doc
# string:
#
# Close any pipes we have to the process (both input and output)
# and wait for it to exit. If cancelled, kills the process and
# waits for it to finish exiting before propagating the
# cancellation.
with trio.CancelScope(shield=True):
if proc.stdin is not None:
await proc.stdin.aclose()
if proc.stdout is not None:
await proc.stdout.aclose()
if proc.stderr is not None:
await proc.stderr.aclose()
try:
await proc.wait()
finally:
if proc.returncode is None:
proc.kill()
with trio.CancelScope(shield=True):
await proc.wait()
if cs.cancelled_caught:
# XXX: should pretty much never get here unless we have
@ -202,7 +253,9 @@ async def soft_wait(
# ``trio.Process.__aexit__()`` (it tears down stdio
# which will kill any waiting remote pdb trace).
# This is a "soft" (cancellable) join/reap.
uid = portal.channel.uid
try:
log.cancel(f'Soft waiting on actor:\n{uid}')
await wait_func(proc)
except trio.Cancelled:
# if cancelled during a soft wait, cancel the child
@ -210,24 +263,80 @@ async def soft_wait(
# below. This means we try to do a graceful teardown
# via sending a cancel message before getting out
# zombie killing tools.
with trio.CancelScope(shield=True):
async with trio.open_nursery() as n:
n.cancel_scope.shield = True
async def cancel_on_proc_deth():
'''
Cancel the actor cancel request if we detect that
the process terminated.
'''
await wait_func(proc)
n.cancel_scope.cancel()
n.start_soon(cancel_on_proc_deth)
await portal.cancel_actor()
if proc.poll() is None: # type: ignore
log.warning(
'Actor still alive after cancel request:\n'
f'{uid}'
)
n.cancel_scope.cancel()
raise
async def new_proc(
name: str,
actor_nursery: 'ActorNursery', # type: ignore # noqa
actor_nursery: ActorNursery,
subactor: Actor,
errors: Dict[Tuple[str, str], Exception],
errors: dict[tuple[str, str], Exception],
# passed through to actor main
bind_addr: Tuple[str, int],
parent_addr: Tuple[str, int],
_runtime_vars: Dict[str, Any], # serialized and sent to _child
bind_addr: tuple[str, int],
parent_addr: tuple[str, int],
_runtime_vars: dict[str, Any], # serialized and sent to _child
*,
infect_asyncio: bool = False,
task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED
) -> None:
# lookup backend spawning target
target = _methods[_spawn_method]
# mark the new actor with the global spawn method
subactor._spawn_method = _spawn_method
await target(
name,
actor_nursery,
subactor,
errors,
bind_addr,
parent_addr,
_runtime_vars, # run time vars
infect_asyncio=infect_asyncio,
task_status=task_status,
)
async def trio_proc(
name: str,
actor_nursery: ActorNursery,
subactor: Actor,
errors: dict[tuple[str, str], Exception],
# passed through to actor main
bind_addr: tuple[str, int],
parent_addr: tuple[str, int],
_runtime_vars: dict[str, Any], # serialized and sent to _child
*,
infect_asyncio: bool = False,
task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED
) -> None:
@ -239,12 +348,6 @@ async def new_proc(
here is to be considered the core supervision strategy.
'''
# mark the new actor with the global spawn method
subactor._spawn_method = _spawn_method
uid = subactor.uid
if _spawn_method == 'trio':
spawn_cmd = [
sys.executable,
"-m",
@ -267,12 +370,16 @@ async def new_proc(
"--loglevel",
subactor.loglevel
]
# Tell child to run in guest mode on top of ``asyncio`` loop
if infect_asyncio:
spawn_cmd.append("--asyncio")
cancelled_during_spawn: bool = False
proc: Optional[trio.Process] = None
proc: trio.Process | None = None
try:
try:
proc = await trio.open_process(spawn_cmd)
# TODO: needs ``trio_typing`` patch?
proc = await trio.lowlevel.open_process(spawn_cmd)
log.runtime(f"Started {proc}")
@ -293,15 +400,21 @@ async def new_proc(
await maybe_wait_for_debugger()
elif proc is not None:
async with acquire_debug_lock(uid):
async with acquire_debug_lock(subactor.uid):
# soft wait on the proc to terminate
with trio.move_on_after(0.5):
await proc.wait()
raise
# a sub-proc ref **must** exist now
assert proc
portal = Portal(chan)
actor_nursery._children[subactor.uid] = (
subactor, proc, portal)
subactor,
proc,
portal,
)
# send additional init params
await chan.send({
@ -350,23 +463,32 @@ async def new_proc(
nursery.cancel_scope.cancel()
finally:
# The "hard" reap since no actor zombies are allowed!
# XXX: do this **after** cancellation/teardown to avoid
# XXX NOTE XXX: The "hard" reap since no actor zombies are
# allowed! Do this **after** cancellation/teardown to avoid
# killing the process too early.
log.cancel(f'Hard reap sequence starting for {uid}')
if proc:
log.cancel(f'Hard reap sequence starting for {subactor.uid}')
with trio.CancelScope(shield=True):
# don't clobber an ongoing pdb
if cancelled_during_spawn:
# Try again to avoid TTY clobbering.
async with acquire_debug_lock(uid):
async with acquire_debug_lock(subactor.uid):
with trio.move_on_after(0.5):
await proc.wait()
if is_root_process():
await maybe_wait_for_debugger()
# TODO: solve the following issue where we need
# to do a similar wait like this but in an
# "intermediary" parent actor that itself isn't
# in debug but has a child that is, and we need
# to hold off on relaying SIGINT until that child
# is complete.
# https://github.com/goodboy/tractor/issues/320
await maybe_wait_for_debugger(
child_in_debug=_runtime_vars.get(
'_debug_mode', False),
)
if proc.poll() is None:
log.cancel(f"Attempting to hard kill {proc}")
@ -381,41 +503,36 @@ async def new_proc(
# subactor
actor_nursery._children.pop(subactor.uid)
else:
# `multiprocessing`
# async with trio.open_nursery() as nursery:
await mp_new_proc(
name=name,
actor_nursery=actor_nursery,
subactor=subactor,
errors=errors,
# passed through to actor main
bind_addr=bind_addr,
parent_addr=parent_addr,
_runtime_vars=_runtime_vars,
task_status=task_status,
)
async def mp_new_proc(
async def mp_proc(
name: str,
actor_nursery: 'ActorNursery', # type: ignore # noqa
actor_nursery: ActorNursery, # type: ignore # noqa
subactor: Actor,
errors: Dict[Tuple[str, str], Exception],
errors: dict[tuple[str, str], Exception],
# passed through to actor main
bind_addr: Tuple[str, int],
parent_addr: Tuple[str, int],
_runtime_vars: Dict[str, Any], # serialized and sent to _child
bind_addr: tuple[str, int],
parent_addr: tuple[str, int],
_runtime_vars: dict[str, Any], # serialized and sent to _child
*,
infect_asyncio: bool = False,
task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED
) -> None:
# uggh zone
try:
from multiprocessing import semaphore_tracker # type: ignore
resource_tracker = semaphore_tracker
resource_tracker._resource_tracker = resource_tracker._semaphore_tracker # noqa
except ImportError:
# 3.8 introduces a more general version that also tracks shared mems
from multiprocessing import resource_tracker # type: ignore
assert _ctx
start_method = _ctx.get_start_method()
if start_method == 'forkserver':
from multiprocessing import forkserver # type: ignore
# XXX do our hackery on the stdlib to avoid multiple
# forkservers (one at each subproc layer).
fs = forkserver._forkserver
@ -427,23 +544,24 @@ async def mp_new_proc(
# forkserver.set_forkserver_preload(enable_modules)
forkserver.ensure_running()
fs_info = (
fs._forkserver_address,
fs._forkserver_alive_fd,
fs._forkserver_address, # type: ignore # noqa
fs._forkserver_alive_fd, # type: ignore # noqa
getattr(fs, '_forkserver_pid', None),
getattr(
resource_tracker._resource_tracker, '_pid', None),
resource_tracker._resource_tracker._fd,
)
else:
else: # request to forkserver to fork a new child
assert curr_actor._forkserver_info
fs_info = (
fs._forkserver_address,
fs._forkserver_alive_fd,
fs._forkserver_pid,
fs._forkserver_address, # type: ignore # noqa
fs._forkserver_alive_fd, # type: ignore # noqa
fs._forkserver_pid, # type: ignore # noqa
resource_tracker._resource_tracker._pid,
resource_tracker._resource_tracker._fd,
) = curr_actor._forkserver_info
else:
# spawn method
fs_info = (None, None, None, None, None)
proc: mp.Process = _ctx.Process( # type: ignore
@ -452,12 +570,14 @@ async def mp_new_proc(
subactor,
bind_addr,
fs_info,
start_method,
_spawn_method,
parent_addr,
infect_asyncio,
),
# daemon=True,
name=name,
)
# `multiprocessing` only (since no async interface):
# register the process before start in case we get a cancel
# request before the actor has fully spawned - then we can wait
@ -476,6 +596,11 @@ async def mp_new_proc(
# local actor by the time we get a ref to it
event, chan = await actor_nursery._actor.wait_for_peer(
subactor.uid)
# XXX: monkey patch poll API to match the ``subprocess`` API..
# not sure why they don't expose this but kk.
proc.poll = lambda: proc.exitcode # type: ignore
# except:
# TODO: in the case we were cancelled before the sub-proc
# registered itself back we must be sure to try and clean
@ -539,4 +664,16 @@ async def mp_new_proc(
log.debug(f"Joined {proc}")
# pop child entry to indicate we are no longer managing subactor
subactor, proc, portal = actor_nursery._children.pop(subactor.uid)
actor_nursery._children.pop(subactor.uid)
# TODO: prolly report to ``mypy`` how this causes all sorts of
# false errors..
# subactor, proc, portal = actor_nursery._children.pop(subactor.uid)
# proc spawning backend target map
_methods: dict[SpawnMethodKey, Callable] = {
'trio': trio_proc,
'mp_spawn': mp_proc,
'mp_forkserver': mp_proc,
}
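For reference (illustrative only): ``new_proc()`` dispatches through this table, so the globally configured spawn key picks the concrete spawner:
# both ``multiprocessing`` keys share the same target coroutine
assert _methods['trio'] is trio_proc
assert _methods['mp_spawn'] is _methods['mp_forkserver'] is mp_proc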


@ -1,9 +1,27 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Per process state
"""
from typing import Optional, Dict, Any
from collections.abc import Mapping
import multiprocessing as mp
from typing import (
Optional,
Any,
)
import trio
@ -11,7 +29,7 @@ from ._exceptions import NoRuntime
_current_actor: Optional['Actor'] = None # type: ignore # noqa
_runtime_vars: Dict[str, Any] = {
_runtime_vars: dict[str, Any] = {
'_debug_mode': False,
'_is_root': False,
'_root_mailbox': (None, None)
@ -27,33 +45,10 @@ def current_actor(err_on_no_runtime: bool = True) -> 'Actor': # type: ignore #
return _current_actor
_conc_name_getters = {
'task': trio.lowlevel.current_task,
'actor': current_actor
}
class ActorContextInfo(Mapping):
"Dyanmic lookup for local actor and task names"
_context_keys = ('task', 'actor')
def __len__(self):
return len(self._context_keys)
def __iter__(self):
return iter(self._context_keys)
def __getitem__(self, key: str) -> str:
try:
return _conc_name_getters[key]().name # type: ignore
except RuntimeError:
# no local actor/task context initialized yet
return f'no {key} context'
def is_main_process() -> bool:
"""Bool determining if this actor is running in the top-most process.
"""
import multiprocessing as mp
return mp.current_process().name == 'MainProcess'


@ -1,3 +1,19 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Message stream types and APIs.
@ -7,8 +23,10 @@ import inspect
from contextlib import asynccontextmanager
from dataclasses import dataclass
from typing import (
Any, Optional, Callable,
AsyncGenerator, Dict,
Any,
Optional,
Callable,
AsyncGenerator,
AsyncIterator
)
@ -26,17 +44,19 @@ from .trionics import broadcast_receiver, BroadcastReceiver
log = get_logger(__name__)
# TODO: generic typing like trio's receive channel
# but with msgspec messages?
# class ReceiveChannel(AsyncResource, Generic[ReceiveType]):
# TODO: the list
# - generic typing like trio's receive channel but with msgspec
# messages? class ReceiveChannel(AsyncResource, Generic[ReceiveType]):
# - use __slots__ on ``Context``?
class ReceiveMsgStream(trio.abc.ReceiveChannel):
class MsgStream(trio.abc.Channel):
'''
A IPC message stream for receiving logically sequenced values over
an inter-actor ``Channel``. This is the type returned to a local
task which entered either ``Portal.open_stream_from()`` or
``Context.open_stream()``.
A bidirectional message stream for receiving logically sequenced
values over an inter-actor IPC ``Channel``.
This is the type returned to a local task which entered either
``Portal.open_stream_from()`` or ``Context.open_stream()``.
Termination rules:
@ -61,6 +81,7 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
# flag to denote end of stream
self._eoc: bool = False
self._closed: bool = False
# delegate directly to underlying mem channel
def receive_nowait(self):
@ -77,11 +98,14 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
if self._eoc:
raise trio.EndOfChannel
if self._closed:
raise trio.ClosedResourceError('This stream was closed')
try:
msg = await self._rx_chan.receive()
return msg['yield']
except KeyError:
except KeyError as err:
# internal error should never get here
assert msg.get('cid'), ("Received internal error at portal?")
@ -90,9 +114,18 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
# - 'error'
# possibly just handle msg['stop'] here!
if msg.get('stop'):
if self._closed:
raise trio.ClosedResourceError('This stream was closed')
if msg.get('stop') or self._eoc:
log.debug(f"{self} was stopped at remote end")
# XXX: important to set so that a new ``.receive()``
# call (likely by another task using a broadcast receiver)
# doesn't accidentally pull the ``return`` message
# value out of the underlying feed mem chan!
self._eoc = True
# # when the send is closed we assume the stream has
# # terminated and signal this local iterator to stop
# await self.aclose()
@ -100,7 +133,7 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
# XXX: this causes ``ReceiveChannel.__anext__()`` to
# raise a ``StopAsyncIteration`` **and** in our catch
# block below it will trigger ``.aclose()``.
raise trio.EndOfChannel
raise trio.EndOfChannel from err
# TODO: test that shows stream raising an expected error!!!
elif msg.get('error'):
@ -145,10 +178,11 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
raise # propagate
async def aclose(self):
"""Cancel associated remote actor task and local memory channel
on close.
'''
Cancel associated remote actor task and local memory channel on
close.
"""
'''
# XXX: keep proper adherence to trio's `.aclose()` semantics:
# https://trio.readthedocs.io/en/stable/reference-io.html#trio.abc.AsyncResource.aclose
rx_chan = self._rx_chan
@ -178,12 +212,8 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
# In the bidirectional case, `Context.open_stream()` will create
# the `Actor._cids2qs` entry from a call to
# `Actor.get_context()` and will send the stop message in
# ``__aexit__()`` on teardown so it **does not** need to be
# called here.
if not self._ctx._portal:
# Only for 2 way streams can we can send stop from the
# caller side.
# `Actor.get_context()` and will call us here to send the stop
# msg in ``__aexit__()`` on teardown.
try:
# NOTE: if this call is cancelled we expect this end to
# handle as though the stop was never sent (though if it
@ -200,7 +230,14 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
# the underlying channel may already have been pulled
# in which case our stop message is meaningless since
# it can't traverse the transport.
log.debug(f'Channel for {self} was already closed')
ctx = self._ctx
log.warning(
f'Stream was already destroyed?\n'
f'actor: {ctx.chan.uid}\n'
f'ctx id: {ctx.cid}'
)
self._closed = True
# Do we close the local mem chan ``self._rx_chan`` ??!?
@ -243,7 +280,8 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
self,
) -> AsyncIterator[BroadcastReceiver]:
'''Allocate and return a ``BroadcastReceiver`` which delegates
'''
Allocate and return a ``BroadcastReceiver`` which delegates
to this message stream.
This allows multiple local tasks to receive each their own copy
@ -280,28 +318,29 @@ class ReceiveMsgStream(trio.abc.ReceiveChannel):
async with self._broadcaster.subscribe() as bstream:
assert bstream.key != self._broadcaster.key
assert bstream._recv == self._broadcaster._recv
# NOTE: we patch on a `.send()` to the bcaster so that the
# caller can still conduct 2-way streaming using this
# ``bstream`` handle transparently as though it was the msg
# stream instance.
bstream.send = self.send # type: ignore
yield bstream
class MsgStream(ReceiveMsgStream, trio.abc.Channel):
'''
Bidirectional message stream for use within an inter-actor actor
``Context```.
'''
async def send(
self,
data: Any
) -> None:
'''Send a message over this stream to the far end.
'''
Send a message over this stream to the far end.
'''
# if self._eoc:
# raise trio.ClosedResourceError('This stream is already ded')
if self._ctx._error:
raise self._ctx._error # from None
if self._closed:
raise trio.ClosedResourceError('This stream was already closed')
await self._ctx.chan.send({'yield': data, 'cid': self._ctx.cid})
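An illustrative caller-side sketch (not from the patch) of the now bidirectional ``MsgStream`` API; ``some_ctx_fn`` stands in for a hypothetical context endpoint:
async def caller(portal: Portal) -> None:
    async with portal.open_context(some_ctx_fn) as (ctx, first):
        async with ctx.open_stream() as stream:
            await stream.send('ping')  # sending works on the same handle
            async for msg in stream:
                print(f'received: {msg}')
                break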
@ -342,6 +381,8 @@ class Context:
# status flags
_cancel_called: bool = False
_cancel_msg: Optional[str] = None
_enter_debugger_on_cancel: bool = True
_started_called: bool = False
_started_received: bool = False
_stream_opened: bool = False
@ -366,7 +407,7 @@ class Context:
async def _maybe_raise_from_remote_msg(
self,
msg: Dict[str, Any],
msg: dict[str, Any],
) -> None:
'''
@ -398,8 +439,17 @@ class Context:
f'Remote context error for {self.chan.uid}:{self.cid}:\n'
f'{msg["error"]["tb_str"]}'
)
# await ctx._maybe_error_from_remote_msg(msg)
self._error = unpack_error(msg, self.chan)
error = unpack_error(msg, self.chan)
if (
isinstance(error, ContextCancelled) and
self._cancel_called
):
# this is an expected cancel request response message
# and we don't need to raise it in scope since it will
# potentially override a real error
return
self._error = error
# TODO: tempted to **not** do this by-reraising in a
# nursery and instead cancel a surrounding scope, detect
@ -414,7 +464,11 @@ class Context:
if not self._scope_nursery._closed: # type: ignore
self._scope_nursery.start_soon(raiser)
async def cancel(self) -> None:
async def cancel(
self,
msg: Optional[str] = None,
) -> None:
'''
Cancel this inter-actor-task context.
@ -423,6 +477,8 @@ class Context:
'''
side = 'caller' if self._portal else 'callee'
if msg:
assert side == 'callee', 'Only callee side can provide cancel msg'
log.cancel(f'Cancelling {side} side of context to {self.chan.uid}')
@ -459,8 +515,10 @@ class Context:
log.cancel(
"Timed out on cancelling remote task "
f"{cid} for {self._portal.channel.uid}")
else:
# callee side remote task
else:
self._cancel_msg = msg
# TODO: should we have an explicit cancel message
# or is relaying the local `trio.Cancelled` as an
@ -545,30 +603,38 @@ class Context:
async with MsgStream(
ctx=self,
rx_chan=ctx._recv_chan,
) as rchan:
) as stream:
if self._portal:
self._portal._streams.add(rchan)
self._portal._streams.add(stream)
try:
self._stream_opened = True
# ensure we aren't cancelled before delivering
# the stream
# XXX: do we need this?
# ensure we aren't cancelled before yielding the stream
# await trio.lowlevel.checkpoint()
yield rchan
yield stream
# XXX: Make the stream "one-shot use". On exit, signal
# NOTE: Make the stream "one-shot use". On exit, signal
# ``trio.EndOfChannel``/``StopAsyncIteration`` to the
# far end.
await self.send_stop()
await stream.aclose()
finally:
if self._portal:
self._portal._streams.remove(rchan)
try:
self._portal._streams.remove(stream)
except KeyError:
log.warning(
f'Stream was already destroyed?\n'
f'actor: {self.chan.uid}\n'
f'ctx id: {self.cid}'
)
async def result(self) -> Any:
'''From a caller side, wait for and return the final result from
'''
From a caller side, wait for and return the final result from
the callee side task.
'''


@ -1,21 +1,40 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
``trio`` inspired apis and helpers
"""
from contextlib import asynccontextmanager as acm
from functools import partial
import inspect
import multiprocessing as mp
from typing import Tuple, List, Dict, Optional
from typing import (
Optional,
TYPE_CHECKING,
)
import typing
import warnings
from exceptiongroup import BaseExceptionGroup
import trio
from async_generator import asynccontextmanager
from . import _debug
from ._debug import maybe_wait_for_debugger
from ._state import current_actor, is_main_process, is_root_process
from ._state import current_actor, is_main_process
from .log import get_logger, get_loglevel
from ._actor import Actor
from ._runtime import Actor
from ._portal import Portal
from ._exceptions import is_multi_cancelled
from ._root import open_root_actor
@ -23,34 +42,67 @@ from . import _state
from . import _spawn
if TYPE_CHECKING:
import multiprocessing as mp
log = get_logger(__name__)
_default_bind_addr: Tuple[str, int] = ('127.0.0.1', 0)
_default_bind_addr: tuple[str, int] = ('127.0.0.1', 0)
class ActorNursery:
"""Spawn scoped subprocess actors.
"""
'''
The fundamental actor supervision construct: spawn and manage
explicit lifetime and capability restricted, bootstrapped,
``trio.run()`` scheduled sub-processes.
Though the concept of a "process nursery" is different in complexity
and slightly different in semantics than a traditional single
threaded task nursery, much of the interface is the same. New
processes each require a top level "parent" or "root" task which is
itself no different than any task started by a traditional
``trio.Nursery``. The main difference is that each "actor" (a
process + ``trio.run()``) contains a full, parallel executing
``trio``-task-tree. The following super powers ensue:
- tasks started in a child actor are completely independent of
tasks started in the current process. They execute in *parallel*
relative to tasks in the current process and are scheduled by their
own actor's ``trio`` run loop.
- tasks scheduled in a remote process still maintain an SC protocol
across memory boundaries using a so-called "structured concurrency
dialogue protocol" which ensures task-hierarchy-lifetimes are linked.
- remote tasks (in another actor) can fail and relay failure back to
the caller task (in some other actor) via a serialized
``RemoteActorError`` which means no zombie process or RPC
initiated task can ever go off on its own.
'''
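# (illustrative aside, not from the patch: typical use of the
#  supervision semantics described above; the actor name, module path
#  and ``compute`` function are hypothetical)
#
#   async with tractor.open_nursery() as an:
#       portal = await an.start_actor(
#           'worker',
#           enable_modules=['myproject.workers'],
#       )
#       result_portal = await an.run_in_actor(compute, name='oneshot')
#       print(await result_portal.result())
#       await an.cancel()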
def __init__(
self,
actor: Actor,
ria_nursery: trio.Nursery,
da_nursery: trio.Nursery,
errors: Dict[Tuple[str, str], Exception],
errors: dict[tuple[str, str], BaseException],
) -> None:
# self.supervisor = supervisor # TODO
self._actor: Actor = actor
self._ria_nursery = ria_nursery
self._da_nursery = da_nursery
self._children: Dict[
Tuple[str, str],
Tuple[Actor, mp.Process, Optional[Portal]]
self._children: dict[
tuple[str, str],
tuple[
Actor,
trio.Process | mp.Process,
Optional[Portal],
]
] = {}
# portals spawned with ``run_in_actor()`` are
# cancelled when their "main" result arrives
self._cancel_after_result_on_exit: set = set()
self.cancelled: bool = False
self._join_procs = trio.Event()
self._at_least_one_child_in_debug: bool = False
self.errors = errors
self.exited = trio.Event()
@ -58,18 +110,30 @@ class ActorNursery:
self,
name: str,
*,
bind_addr: Tuple[str, int] = _default_bind_addr,
rpc_module_paths: List[str] = None,
enable_modules: List[str] = None,
loglevel: str = None, # set log level per subactor
nursery: trio.Nursery = None,
bind_addr: tuple[str, int] = _default_bind_addr,
rpc_module_paths: list[str] | None = None,
enable_modules: list[str] | None = None,
loglevel: str | None = None, # set log level per subactor
nursery: trio.Nursery | None = None,
debug_mode: Optional[bool] | None = None,
infect_asyncio: bool = False,
) -> Portal:
'''
Start a (daemon) actor: a process that has no designated
"main task" besides the runtime.
'''
loglevel = loglevel or self._actor.loglevel or get_loglevel()
# configure and pass runtime state
_rtv = _state._runtime_vars.copy()
_rtv['_is_root'] = False
# allow setting debug policy per actor
if debug_mode is not None:
_rtv['_debug_mode'] = debug_mode
self._at_least_one_child_in_debug = True
enable_modules = enable_modules or []
if rpc_module_paths:
@ -106,19 +170,25 @@ class ActorNursery:
bind_addr,
parent_addr,
_rtv, # run time vars
infect_asyncio=infect_asyncio,
)
)
async def run_in_actor(
self,
fn: typing.Callable,
*,
name: Optional[str] = None,
bind_addr: Tuple[str, int] = _default_bind_addr,
rpc_module_paths: Optional[List[str]] = None,
enable_modules: List[str] = None,
loglevel: str = None, # set log level per subactor
bind_addr: tuple[str, int] = _default_bind_addr,
rpc_module_paths: list[str] | None = None,
enable_modules: list[str] | None = None,
loglevel: str | None = None, # set log level per subactor
infect_asyncio: bool = False,
**kwargs, # explicit args to ``fn``
) -> Portal:
"""Spawn a new actor, run a lone task, then terminate the actor and
return its result.
@ -142,6 +212,7 @@ class ActorNursery:
loglevel=loglevel,
# use the run_in_actor nursery
nursery=self._ria_nursery,
infect_asyncio=infect_asyncio,
)
# XXX: don't allow stream funcs
@ -224,13 +295,17 @@ class ActorNursery:
self._join_procs.set()
@asynccontextmanager
@acm
async def _open_and_supervise_one_cancels_all_nursery(
actor: Actor,
) -> typing.AsyncGenerator[ActorNursery, None]:
# TODO: yay or nay?
__tracebackhide__ = True
# the collection of errors retrieved from spawned sub-actors
errors: Dict[Tuple[str, str], Exception] = {}
errors: dict[tuple[str, str], BaseException] = {}
# This is the outermost level "deamon actor" nursery. It is awaited
# **after** the below inner "run in actor nursery". This allows for
@ -263,19 +338,17 @@ async def _open_and_supervise_one_cancels_all_nursery(
# after we yield upwards
yield anursery
# When we didn't error in the caller's scope,
# signal all process-monitor-tasks to conduct
# the "hard join phase".
log.runtime(
f"Waiting on subactors {anursery._children} "
"to complete"
)
# Last bit before first nursery block ends in the case
# where we didn't error in the caller's scope
# signal all process monitor tasks to conduct
# hard join phase.
anursery._join_procs.set()
except BaseException as err:
except BaseException as inner_err:
errors[actor.uid] = inner_err
# If we error in the root but the debugger is
# engaged we don't want to prematurely kill (and
@ -283,26 +356,27 @@ async def _open_and_supervise_one_cancels_all_nursery(
# will make the pdb repl unusable.
# Instead try to wait for pdb to be released before
# tearing down.
await maybe_wait_for_debugger()
await maybe_wait_for_debugger(
child_in_debug=anursery._at_least_one_child_in_debug
)
# if the caller's scope errored then we activate our
# one-cancels-all supervisor strategy (don't
# worry more are coming).
anursery._join_procs.set()
try:
# XXX: hypothetically an error could be
# raised and then a cancel signal shows up
# slightly after in which case the `else:`
# block here might not complete? For now,
# shield both.
with trio.CancelScope(shield=True):
etype = type(err)
etype = type(inner_err)
if etype in (
trio.Cancelled,
KeyboardInterrupt
) or (
is_multi_cancelled(err)
is_multi_cancelled(inner_err)
):
log.cancel(
f"Nursery for {current_actor().uid} "
@ -310,33 +384,33 @@ async def _open_and_supervise_one_cancels_all_nursery(
else:
log.exception(
f"Nursery for {current_actor().uid} "
f"errored with {err}, ")
f"errored with")
# cancel all subactors
await anursery.cancel()
except trio.MultiError as merr:
# If we receive additional errors while waiting on
# remaining subactors that were cancelled,
# aggregate those errors with the original error
# that triggered this teardown.
if err not in merr.exceptions:
raise trio.MultiError(merr.exceptions + [err])
else:
raise
# ria_nursery scope end
# XXX: do we need a `trio.Cancelled` catch here as well?
# this is the catch around the ``.run_in_actor()`` nursery
# TODO: this is the handler around the ``.run_in_actor()``
# nursery. Ideally we can drop this entirely in the future as
# the whole ``.run_in_actor()`` API should be built "on top of"
# this lower level spawn-request-cancel "daemon actor" API where
# a local in-actor task nursery is used with one-to-one task
# + `await Portal.run()` calls and the results/errors are
# handled directly (inline) by the local nursery.
except (
Exception,
trio.MultiError,
BaseExceptionGroup,
trio.Cancelled
) as err:
# XXX: yet another guard before allowing the cancel
# sequence in case a (single) child is in debug.
await maybe_wait_for_debugger(
child_in_debug=anursery._at_least_one_child_in_debug
)
# If actor-local error was raised while waiting on
# ".run_in_actor()" actors then we also want to cancel all
# remaining sub-actors (due to our lone strategy:
@ -358,22 +432,26 @@ async def _open_and_supervise_one_cancels_all_nursery(
with trio.CancelScope(shield=True):
await anursery.cancel()
# use `MultiError` as needed
# use `BaseExceptionGroup` as needed
if len(errors) > 1:
raise trio.MultiError(tuple(errors.values()))
raise BaseExceptionGroup(
'tractor.ActorNursery errored with',
tuple(errors.values()),
)
else:
raise list(errors.values())[0]
# ria_nursery scope end - nursery checkpoint
# after nursery exit
# da_nursery scope end - nursery checkpoint
# final exit
@asynccontextmanager
@acm
async def open_nursery(
**kwargs,
) -> typing.AsyncGenerator[ActorNursery, None]:
"""Create and yield a new ``ActorNursery`` to be used for spawning
'''
Create and yield a new ``ActorNursery`` to be used for spawning
structured concurrent subactors.
When an actor is spawned a new trio task is started which
@ -385,7 +463,8 @@ async def open_nursery(
close it. It turns out this approach is probably more correct
anyway since it is more clear from the following nested nurseries
which cancellation scopes correspond to each spawned subactor set.
"""
'''
implicit_runtime = False
actor = current_actor(err_on_no_runtime=False)
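# A minimal usage sketch of the new ``infect_asyncio`` flag wired through
# ``ActorNursery.run_in_actor()`` above; the ``double()`` target and the exact
# kwarg passing are illustrative assumptions, not part of the diff itself.
import trio
import tractor

async def double(x: int) -> int:
    return x * 2

async def main():
    async with tractor.open_nursery() as an:
        portal = await an.run_in_actor(
            double,
            x=21,
            # host the subactor's trio runtime as a guest on an asyncio loop
            infect_asyncio=True,
        )
        assert await portal.result() == 42

if __name__ == '__main__':
    trio.run(main)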

View File

@ -0,0 +1,29 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Experimental APIs and subsystems not yet validated to be included as
built-ins.
This is a staging area for ``tractor.builtin``.
'''
from ._pubsub import pub as msgpub
__all__ = [
'msgpub',
]

View File

@ -0,0 +1,332 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Single target entrypoint, remote-task, dynamic (no push if no consumer)
pubsub API using an async generator which multiplexes to consumers by
key.
NOTE: this module is likely deprecated by the new bi-directional streaming
support provided by ``tractor.Context.open_stream()`` and friends.
"""
from __future__ import annotations
import inspect
import typing
from typing import (
Any,
Callable,
)
from functools import partial
from async_generator import aclosing
import trio
import wrapt
from ..log import get_logger
from .._streaming import Context
__all__ = ['pub']
log = get_logger('messaging')
async def fan_out_to_ctxs(
pub_async_gen_func: typing.Callable, # it's an async gen ... gd mypy
topics2ctxs: dict[str, list],
packetizer: typing.Callable | None = None,
) -> None:
'''
Request and fan out quotes to each subscribed actor channel.
'''
def get_topics():
return tuple(topics2ctxs.keys())
agen = pub_async_gen_func(get_topics=get_topics)
async with aclosing(agen) as pub_gen:
async for published in pub_gen:
ctx_payloads: list[tuple[Context, Any]] = []
for topic, data in published.items():
log.debug(f"publishing {topic, data}")
# build a new dict packet or invoke provided packetizer
if packetizer is None:
packet = {topic: data}
else:
packet = packetizer(topic, data)
for ctx in topics2ctxs.get(topic, list()):
ctx_payloads.append((ctx, packet))
if not ctx_payloads:
log.debug(f"Unconsumed values:\n{published}")
# deliver to each subscriber (fan out)
if ctx_payloads:
for ctx, payload in ctx_payloads:
try:
await ctx.send_yield(payload)
except (
# That's right, anything you can think of...
trio.ClosedResourceError, ConnectionResetError,
ConnectionRefusedError,
):
log.warning(f"{ctx.chan} went down?")
for ctx_list in topics2ctxs.values():
try:
ctx_list.remove(ctx)
except ValueError:
continue
if not get_topics():
log.warning(f"No subscribers left for {pub_gen}")
break
def modify_subs(
topics2ctxs: dict[str, list[Context]],
topics: set[str],
ctx: Context,
) -> None:
"""Absolute symbol subscription list for each quote stream.
Effectively a symbol subscription api.
"""
log.info(f"{ctx.chan.uid} changed subscription to {topics}")
# update map from each symbol to requesting client's chan
for topic in topics:
topics2ctxs.setdefault(topic, list()).append(ctx)
# remove any existing symbol subscriptions if symbol is not
# found in ``symbols``
# TODO: this can likely be factored out into the pub-sub api
for topic in filter(
lambda topic: topic not in topics, topics2ctxs.copy()
):
ctx_list = topics2ctxs.get(topic)
if ctx_list:
try:
ctx_list.remove(ctx)
except ValueError:
pass
if not ctx_list:
# pop empty sets which will trigger bg quoter task termination
topics2ctxs.pop(topic)
_pub_state: dict[str, dict] = {}
_pubtask2lock: dict[str, trio.StrictFIFOLock] = {}
def pub(
wrapped: typing.Callable | None = None,
*,
tasks: set[str] = set(),
):
"""Publisher async generator decorator.
A publisher can be called multiple times from different actors but
will only spawn a finite set of internal tasks to stream values to
each caller. The ``tasks: set[str]`` argument to the decorator
specifies the names of the mutex set of publisher tasks. When the
publisher function is called, an argument ``task_name`` must be
passed to specify which task (of the set named in ``tasks``) should
be used. This allows for using the same publisher with different
input (arguments) without allowing more concurrent tasks than
necessary.
Values yielded from the decorated async generator must be
``dict[str, dict[str, Any]]`` where the first level key is the topic
string and determines which subscription the packet will be
delivered to and the value is a packet ``dict[str, Any]`` by default
of the form:
.. ::python
{topic: str: value: Any}
The caller can instead opt to pass a ``packetizer`` callback whose
return value will be delivered as the published response.
The decorated async generator function must accept an argument
:func:`get_topics` which dynamically returns the tuple of current
subscriber topics:
.. code:: python
from tractor.msg import pub
@pub(tasks={'source_1', 'source_2'})
async def pub_service(get_topics):
data = await web_request(endpoints=get_topics())
for item in data:
yield data['key'], data
The publisher must be called passing in the following arguments:
- ``topics: set[str]`` the topic sequence or "subscriptions"
- ``task_name: str`` the task to use (if ``tasks`` was passed)
- ``ctx: Context`` the tractor context (only needed if calling the
pub func without a nursery, otherwise this is provided implicitly)
- packetizer: ``Callable[[str, Any], Any]`` a callback which receives
the topic and value from the publisher function each ``yield`` such that
whatever is returned is sent as the published value to subscribers of
that topic. By default this is a dict ``{topic: str: value: Any}``.
As an example, to make a subscriber call the above function:
.. code:: python
from functools import partial
import tractor
async with tractor.open_nursery() as n:
portal = n.run_in_actor(
'publisher', # actor name
partial( # func to execute in it
pub_service,
topics=('clicks', 'users'),
task_name='source1',
)
)
async for value in await portal.result():
print(f"Subscriber received {value}")
Here, you don't need to provide the ``ctx`` argument since the
remote actor provides it automatically to the spawned task. If you
were to call ``pub_service()`` directly from a wrapping function you
would need to provide this explicitly.
Remember you only need this if you need *a finite set of tasks*
running in a single actor to stream data to an arbitrary number of
subscribers. If you are ok with having a new task running for every call
to ``pub_service()`` then you probably don't need this.
"""
global _pubtask2lock
# handle the decorator not called with () case
if wrapped is None:
return partial(pub, tasks=tasks)
task2lock: dict[str, trio.StrictFIFOLock] = {}
for name in tasks:
task2lock[name] = trio.StrictFIFOLock()
@wrapt.decorator
async def wrapper(agen, instance, args, kwargs):
# XXX: this is used to extract arguments properly as per the
# `wrapt` docs
async def _execute(
ctx: Context,
topics: set[str],
*args,
# *,
task_name: str | None = None, # default: only one task allocated
packetizer: Callable | None = None,
**kwargs,
):
if task_name is None:
task_name = trio.lowlevel.current_task().name
if tasks and task_name not in tasks:
raise TypeError(
f"{agen} must be called with a `task_name` named "
f"argument with a value from {tasks}")
elif not tasks and not task2lock:
# add a default root-task lock if none defined
task2lock[task_name] = trio.StrictFIFOLock()
_pubtask2lock.update(task2lock)
topics = set(topics)
lock = _pubtask2lock[task_name]
all_subs = _pub_state.setdefault('_subs', {})
topics2ctxs = all_subs.setdefault(task_name, {})
try:
modify_subs(topics2ctxs, topics, ctx)
# block and let existing feed task deliver
# stream data until it is cancelled in which case
# the next waiting task will take over and spawn it again
async with lock:
# no data feeder task yet; so start one
respawn = True
while respawn:
respawn = False
log.info(
f"Spawning data feed task for {funcname}")
try:
# unblocks when no more symbols subscriptions exist
# and the streamer task terminates
await fan_out_to_ctxs(
pub_async_gen_func=partial(
agen, *args, **kwargs),
topics2ctxs=topics2ctxs,
packetizer=packetizer,
)
log.info(
f"Terminating stream task {task_name or ''}"
f" for {agen.__name__}")
except trio.BrokenResourceError:
log.exception("Respawning failed data feed task")
respawn = True
finally:
# remove all subs for this context
modify_subs(topics2ctxs, set(), ctx)
# if there are truly no more subscriptions with this broker
# drop from broker subs dict
if not any(topics2ctxs.values()):
log.info(
f"No more subscriptions for publisher {task_name}")
# invoke it
await _execute(*args, **kwargs)
funcname = wrapped.__name__
if not inspect.isasyncgenfunction(wrapped):
raise TypeError(
f"Publisher {funcname} must be an async generator function"
)
if 'get_topics' not in inspect.signature(wrapped).parameters:
raise TypeError(
f"Publisher async gen {funcname} must define a "
"`get_topics` argument"
)
# XXX: manually monkey the wrapped function since
# ``wrapt.decorator`` doesn't seem to want to play nice with its
# whole "adapter" thing...
wrapped._tractor_stream_function = True # type: ignore
return wrapper(wrapped)
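# A short sketch of the ``packetizer`` hook described in the ``pub()``
# docstring above; the ``ticker()`` publisher and its topic names are made up
# purely for illustration.
import trio
from tractor.experimental import msgpub

@msgpub(tasks={'source'})
async def ticker(get_topics):
    i = 0
    while True:
        for topic in get_topics():
            # each yield maps topic -> value for fan-out
            yield {topic: i}
        i += 1
        await trio.sleep(0.1)

def stamped(topic: str, value) -> dict:
    # custom packet shape delivered to subscribers instead of {topic: value}
    return {'topic': topic, 'value': value}

# a subscriber would (typically via a remote portal call) invoke:
#   ticker(topics={'clicks'}, task_name='source', packetizer=stamped)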

View File

@ -1,16 +1,35 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Log like a forester!
"""
from collections.abc import Mapping
import sys
import logging
import colorlog # type: ignore
from typing import Optional
from ._state import ActorContextInfo
import trio
from ._state import current_actor
_proj_name = 'tractor'
_default_loglevel = 'ERROR'
_proj_name: str = 'tractor'
_default_loglevel: str = 'ERROR'
# Super sexy formatting thanks to ``colorlog``.
# (NOTE: we use the '{' format style)
@ -19,7 +38,8 @@ LOG_FORMAT = (
# "{bold_white}{log_color}{asctime}{reset}"
"{log_color}{asctime}{reset}"
" {bold_white}{thin_white}({reset}"
"{thin_white}{actor}, {process}, {task}){reset}{bold_white}{thin_white})"
"{thin_white}{actor_name}[{actor_uid}], "
"{process}, {task}){reset}{bold_white}{thin_white})"
" {reset}{log_color}[{reset}{bold_log_color}{levelname}{reset}{log_color}]"
" {log_color}{name}"
" {thin_white}{filename}{log_color}:{reset}{thin_white}{lineno}{log_color}"
@ -119,9 +139,40 @@ class StackLevelAdapter(logging.LoggerAdapter):
)
_conc_name_getters = {
'task': lambda: trio.lowlevel.current_task().name,
'actor': lambda: current_actor(),
'actor_name': lambda: current_actor().name,
'actor_uid': lambda: current_actor().uid[1][:6],
}
class ActorContextInfo(Mapping):
"Dynamic lookup for local actor and task names"
_context_keys = (
'task',
'actor',
'actor_name',
'actor_uid',
)
def __len__(self):
return len(self._context_keys)
def __iter__(self):
return iter(self._context_keys)
def __getitem__(self, key: str) -> str:
try:
return _conc_name_getters[key]()
except RuntimeError:
# no local actor/task context initialized yet
return f'no {key} context'
def get_logger(
name: str = None,
name: str | None = None,
_root_name: str = _proj_name,
) -> StackLevelAdapter:
@ -156,7 +207,7 @@ def get_logger(
def get_console_log(
level: str = None,
level: str | None = None,
**kwargs,
) -> logging.LoggerAdapter:
'''Get the package logger and enable a handler which writes to stderr.
@ -189,5 +240,5 @@ def get_console_log(
return log
def get_loglevel() -> Optional[str]:
def get_loglevel() -> str:
return _default_loglevel
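# A small sketch of the logging helpers above; outside a running actor the
# new ``actor_name``/``actor_uid`` format fields simply fall back to the
# 'no ... context' placeholders provided by ``ActorContextInfo``.
from tractor.log import get_logger, get_console_log

log = get_logger('my_module')
get_console_log('info')   # attach the colorlog stderr handler at INFO level
log.info('logging is wired up')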

View File

@ -1,306 +1,80 @@
"""
Messaging pattern APIs and helpers.
NOTE: this module is likely deprecated by the new bi-directional streaming
support provided by ``tractor.Context.open_stream()`` and friends.
"""
import inspect
import typing
from typing import Dict, Any, Set, Callable, List, Tuple
from functools import partial
from async_generator import aclosing
import trio
import wrapt
from .log import get_logger
from ._streaming import Context
__all__ = ['pub']
log = get_logger('messaging')
async def fan_out_to_ctxs(
pub_async_gen_func: typing.Callable, # it's an async gen ... gd mypy
topics2ctxs: Dict[str, list],
packetizer: typing.Callable = None,
) -> None:
"""Request and fan out quotes to each subscribed actor channel.
"""
def get_topics():
return tuple(topics2ctxs.keys())
agen = pub_async_gen_func(get_topics=get_topics)
async with aclosing(agen) as pub_gen:
async for published in pub_gen:
ctx_payloads: List[Tuple[Context, Any]] = []
for topic, data in published.items():
log.debug(f"publishing {topic, data}")
# build a new dict packet or invoke provided packetizer
if packetizer is None:
packet = {topic: data}
else:
packet = packetizer(topic, data)
for ctx in topics2ctxs.get(topic, list()):
ctx_payloads.append((ctx, packet))
if not ctx_payloads:
log.debug(f"Unconsumed values:\n{published}")
# deliver to each subscriber (fan out)
if ctx_payloads:
for ctx, payload in ctx_payloads:
try:
await ctx.send_yield(payload)
except (
# That's right, anything you can think of...
trio.ClosedResourceError, ConnectionResetError,
ConnectionRefusedError,
):
log.warning(f"{ctx.chan} went down?")
for ctx_list in topics2ctxs.values():
try:
ctx_list.remove(ctx)
except ValueError:
continue
if not get_topics():
log.warning(f"No subscribers left for {pub_gen}")
break
def modify_subs(
topics2ctxs: Dict[str, List[Context]],
topics: Set[str],
ctx: Context,
) -> None:
"""Absolute symbol subscription list for each quote stream.
Effectively a symbol subscription api.
"""
log.info(f"{ctx.chan.uid} changed subscription to {topics}")
# update map from each symbol to requesting client's chan
for topic in topics:
topics2ctxs.setdefault(topic, list()).append(ctx)
# remove any existing symbol subscriptions if symbol is not
# found in ``symbols``
# TODO: this can likely be factored out into the pub-sub api
for topic in filter(
lambda topic: topic not in topics, topics2ctxs.copy()
):
ctx_list = topics2ctxs.get(topic)
if ctx_list:
try:
ctx_list.remove(ctx)
except ValueError:
pass
if not ctx_list:
# pop empty sets which will trigger bg quoter task termination
topics2ctxs.pop(topic)
_pub_state: Dict[str, dict] = {}
_pubtask2lock: Dict[str, trio.StrictFIFOLock] = {}
def pub(
wrapped: typing.Callable = None,
*,
tasks: Set[str] = set(),
):
"""Publisher async generator decorator.
A publisher can be called multiple times from different actors but
will only spawn a finite set of internal tasks to stream values to
each caller. The ``tasks: Set[str]`` argument to the decorator
specifies the names of the mutex set of publisher tasks. When the
publisher function is called, an argument ``task_name`` must be
passed to specify which task (of the set named in ``tasks``) should
be used. This allows for using the same publisher with different
input (arguments) without allowing more concurrent tasks then
necessary.
Values yielded from the decorated async generator must be
``Dict[str, Dict[str, Any]]`` where the fist level key is the topic
string and determines which subscription the packet will be
delivered to and the value is a packet ``Dict[str, Any]`` by default
of the form:
.. ::python
{topic: str: value: Any}
The caller can instead opt to pass a ``packetizer`` callback who's
return value will be delivered as the published response.
The decorated async generator function must accept an argument
:func:`get_topics` which dynamically returns the tuple of current
subscriber topics:
.. code:: python
from tractor.msg import pub
@pub(tasks={'source_1', 'source_2'})
async def pub_service(get_topics):
data = await web_request(endpoints=get_topics())
for item in data:
yield data['key'], data
The publisher must be called passing in the following arguments:
- ``topics: Set[str]`` the topic sequence or "subscriptions"
- ``task_name: str`` the task to use (if ``tasks`` was passed)
- ``ctx: Context`` the tractor context (only needed if calling the
pub func without a nursery, otherwise this is provided implicitly)
- packetizer: ``Callable[[str, Any], Any]`` a callback who receives
the topic and value from the publisher function each ``yield`` such that
whatever is returned is sent as the published value to subscribers of
that topic. By default this is a dict ``{topic: str: value: Any}``.
As an example, to make a subscriber call the above function:
.. code:: python
from functools import partial
import tractor
async with tractor.open_nursery() as n:
portal = n.run_in_actor(
'publisher', # actor name
partial( # func to execute in it
pub_service,
topics=('clicks', 'users'),
task_name='source1',
)
)
async for value in await portal.result():
print(f"Subscriber received {value}")
Here, you don't need to provide the ``ctx`` argument since the
remote actor provides it automatically to the spawned task. If you
were to call ``pub_service()`` directly from a wrapping function you
would need to provide this explicitly.
Remember you only need this if you need *a finite set of tasks*
running in a single actor to stream data to an arbitrary number of
subscribers. If you are ok to have a new task running for every call
to ``pub_service()`` then probably don't need this.
"""
global _pubtask2lock
# handle the decorator not called with () case
if wrapped is None:
return partial(pub, tasks=tasks)
task2lock: Dict[str, trio.StrictFIFOLock] = {}
for name in tasks:
task2lock[name] = trio.StrictFIFOLock()
@wrapt.decorator
async def wrapper(agen, instance, args, kwargs):
# XXX: this is used to extract arguments properly as per the
# `wrapt` docs
async def _execute(
ctx: Context,
topics: Set[str],
*args,
# *,
task_name: str = None, # default: only one task allocated
packetizer: Callable = None,
**kwargs,
):
if task_name is None:
task_name = trio.lowlevel.current_task().name
if tasks and task_name not in tasks:
raise TypeError(
f"{agen} must be called with a `task_name` named "
f"argument with a value from {tasks}")
elif not tasks and not task2lock:
# add a default root-task lock if none defined
task2lock[task_name] = trio.StrictFIFOLock()
_pubtask2lock.update(task2lock)
topics = set(topics)
lock = _pubtask2lock[task_name]
all_subs = _pub_state.setdefault('_subs', {})
topics2ctxs = all_subs.setdefault(task_name, {})
try:
modify_subs(topics2ctxs, topics, ctx)
# block and let existing feed task deliver
# stream data until it is cancelled in which case
# the next waiting task will take over and spawn it again
async with lock:
# no data feeder task yet; so start one
respawn = True
while respawn:
respawn = False
log.info(
f"Spawning data feed task for {funcname}")
try:
# unblocks when no more symbols subscriptions exist
# and the streamer task terminates
await fan_out_to_ctxs(
pub_async_gen_func=partial(
agen, *args, **kwargs),
topics2ctxs=topics2ctxs,
packetizer=packetizer,
)
log.info(
f"Terminating stream task {task_name or ''}"
f" for {agen.__name__}")
except trio.BrokenResourceError:
log.exception("Respawning failed data feed task")
respawn = True
finally:
# remove all subs for this context
modify_subs(topics2ctxs, set(), ctx)
# if there are truly no more subscriptions with this broker
# drop from broker subs dict
if not any(topics2ctxs.values()):
log.info(
f"No more subscriptions for publisher {task_name}")
# invoke it
await _execute(*args, **kwargs)
funcname = wrapped.__name__
if not inspect.isasyncgenfunction(wrapped):
raise TypeError(
f"Publisher {funcname} must be an async generator function"
)
if 'get_topics' not in inspect.signature(wrapped).parameters:
raise TypeError(
f"Publisher async gen {funcname} must define a "
"`get_topics` argument"
)
# XXX: manually monkey the wrapped function since
# ``wrapt.decorator`` doesn't seem to want to play nice with its
# whole "adapter" thing...
wrapped._tractor_stream_function = True # type: ignore
return wrapper(wrapped)
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Built-in messaging patterns, types, APIs and helpers.
'''
# TODO: integration with our ``enable_modules: list[str]`` caps sys.
# ``pkgutil.resolve_name()`` internally uses
# ``importlib.import_module()`` which can be filtered by inserting
# a ``MetaPathFinder`` into ``sys.meta_path`` (which we could do before
# entering the ``_runtime.process_messages()`` loop).
# - https://github.com/python/cpython/blob/main/Lib/pkgutil.py#L645
# - https://stackoverflow.com/questions/1350466/preventing-python-code-from-importing-certain-modules
# - https://stackoverflow.com/a/63320902
# - https://docs.python.org/3/library/sys.html#sys.meta_path
# the new "Implicit Namespace Packages" might be relevant?
# - https://www.python.org/dev/peps/pep-0420/
# add implicit serialized message type support so that paths can be
# handed directly to IPC primitives such as streams and `Portal.run()`
# calls:
# - via ``msgspec``:
# - https://jcristharif.com/msgspec/api.html#struct
# - https://jcristharif.com/msgspec/extending.html
# via ``msgpack-python``:
# - https://github.com/msgpack/msgpack-python#packingunpacking-of-custom-data-type
from __future__ import annotations
from pkgutil import resolve_name
class NamespacePath(str):
'''
A serializable description of a (function) Python object location
described by the target's module path and namespace key meant as
a message-native "packet" that allows actors to point-and-load objects
by absolute reference.
'''
_ref: object = None
def load_ref(self) -> object:
if self._ref is None:
self._ref = resolve_name(self)
return self._ref
def to_tuple(
self,
) -> tuple[str, str]:
ref = self.load_ref()
return ref.__module__, getattr(ref, '__name__', '')
@classmethod
def from_ref(
cls,
ref,
) -> NamespacePath:
return cls(':'.join(
(ref.__module__,
getattr(ref, '__name__', ''))
))
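# A quick round-trip sketch of ``NamespacePath``; ``json.dumps`` is just an
# arbitrary stdlib target used for illustration.
from json import dumps
from tractor.msg import NamespacePath

nsp = NamespacePath.from_ref(dumps)       # -> 'json:dumps'
assert str(nsp) == 'json:dumps'
assert nsp.load_ref() is dumps            # resolved via pkgutil.resolve_name()
assert nsp.to_tuple() == ('json', 'dumps')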

View File

@ -1 +0,0 @@
from ._tractor_test import tractor_test

View File

@ -1,89 +0,0 @@
import inspect
import platform
from functools import partial, wraps
import trio
import tractor
# from tractor import run
__all__ = ['tractor_test']
def tractor_test(fn):
"""
Use:
@tractor_test
async def test_whatever():
await ...
If fixtures:
- ``arb_addr`` (a socket addr tuple where arbiter is listening)
- ``loglevel`` (logging level passed to tractor internals)
- ``start_method`` (subprocess spawning backend)
are defined in the `pytest` fixture space they will be automatically
injected to tests declaring these funcargs.
"""
@wraps(fn)
def wrapper(
*args,
loglevel=None,
arb_addr=None,
start_method=None,
**kwargs
):
# __tracebackhide__ = True
if 'arb_addr' in inspect.signature(fn).parameters:
# injects test suite fixture value to test as well
# as `run()`
kwargs['arb_addr'] = arb_addr
if 'loglevel' in inspect.signature(fn).parameters:
# allows test suites to define a 'loglevel' fixture
# that activates the internal logging
kwargs['loglevel'] = loglevel
if start_method is None:
if platform.system() == "Windows":
start_method = 'spawn'
else:
start_method = 'trio'
if 'start_method' in inspect.signature(fn).parameters:
# set of subprocess spawning backends
kwargs['start_method'] = start_method
if kwargs:
# use explicit root actor start
async def _main():
async with tractor.open_root_actor(
# **kwargs,
arbiter_addr=arb_addr,
loglevel=loglevel,
start_method=start_method,
# TODO: only enable when pytest is passed --pdb
# debug_mode=True,
) as actor:
await fn(*args, **kwargs)
main = _main
else:
# use implicit root actor start
main = partial(fn, *args, **kwargs)
return trio.run(main)
# arbiter_addr=arb_addr,
# loglevel=loglevel,
# start_method=start_method,
# )
return wrapper

View File

@ -0,0 +1,550 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Infection apis for ``asyncio`` loops running ``trio`` using guest mode.
'''
import asyncio
from asyncio.exceptions import CancelledError
from contextlib import asynccontextmanager as acm
from dataclasses import dataclass
import inspect
from typing import (
Any,
Callable,
AsyncIterator,
Awaitable,
Optional,
)
import trio
from outcome import Error
from .log import get_logger
from ._state import current_actor
from ._exceptions import AsyncioCancelled
from .trionics._broadcast import (
broadcast_receiver,
BroadcastReceiver,
)
log = get_logger(__name__)
__all__ = ['run_task', 'run_as_asyncio_guest']
@dataclass
class LinkedTaskChannel(trio.abc.Channel):
'''
A "linked task channel" which allows for two-way synchronized msg
passing between a ``trio``-in-guest-mode task and an ``asyncio``
task scheduled in the host loop.
'''
_to_aio: asyncio.Queue
_from_aio: trio.MemoryReceiveChannel
_to_trio: trio.MemorySendChannel
_trio_cs: trio.CancelScope
_aio_task_complete: trio.Event
_trio_exited: bool = False
# set after ``asyncio.create_task()``
_aio_task: Optional[asyncio.Task] = None
_aio_err: Optional[BaseException] = None
_broadcaster: Optional[BroadcastReceiver] = None
async def aclose(self) -> None:
await self._from_aio.aclose()
async def receive(self) -> Any:
async with translate_aio_errors(
self,
# XXX: obviously this will deadlock if an on-going stream is
# being processed.
# wait_on_aio_task=False,
):
# TODO: do we need this to guarantee asyncio code gets
# cancelled in the case where the trio side somehow creates
# a state where the asyncio cycle-task isn't getting the
# cancel request sent by (in theory) the last checkpoint
# cycle on the trio side?
# await trio.lowlevel.checkpoint()
return await self._from_aio.receive()
async def wait_asyncio_complete(self) -> None:
await self._aio_task_complete.wait()
# def cancel_asyncio_task(self) -> None:
# self._aio_task.cancel()
async def send(self, item: Any) -> None:
'''
Send a value through to the asyncio task presuming
it defines a ``from_trio`` argument; if it does not,
this method will raise an error.
'''
self._to_aio.put_nowait(item)
def closed(self) -> bool:
return self._from_aio._closed # type: ignore
# TODO: should we consider some kind of "decorator" system
# that checks for structural-typing compatibility and then
# automatically adds this ctx-mngr-as-method machinery?
@acm
async def subscribe(
self,
) -> AsyncIterator[BroadcastReceiver]:
'''
Allocate and return a ``BroadcastReceiver`` which delegates
to this inter-task channel.
This allows multiple local tasks to receive each their own copy
of this message stream.
See ``tractor._streaming.MsgStream.subscribe()`` for further
similar details.
'''
if self._broadcaster is None:
bcast = self._broadcaster = broadcast_receiver(
self,
# use memory channel size by default
self._from_aio._state.max_buffer_size, # type: ignore
receive_afunc=self.receive,
)
self.receive = bcast.receive # type: ignore
async with self._broadcaster.subscribe() as bstream:
assert bstream.key != self._broadcaster.key
assert bstream._recv == self._broadcaster._recv
yield bstream
def _run_asyncio_task(
func: Callable,
*,
qsize: int = 1,
provide_channels: bool = False,
**kwargs,
) -> LinkedTaskChannel:
'''
Run an ``asyncio`` async function or generator in a task, return
or stream the result back to ``trio``.
'''
__tracebackhide__ = True
if not current_actor().is_infected_aio():
raise RuntimeError("`infect_asyncio` mode is not enabled!?")
# ITC (inter task comms), these channel/queue names are mostly from
# ``asyncio``'s perspective.
aio_q = from_trio = asyncio.Queue(qsize) # type: ignore
to_trio, from_aio = trio.open_memory_channel(qsize) # type: ignore
args = tuple(inspect.getfullargspec(func).args)
if getattr(func, '_tractor_steam_function', None):
# the assumption is that the target async routine accepts the
# send channel then it intends to yield more than one return
# value otherwise it would just return ;P
assert qsize > 1
if provide_channels:
assert 'to_trio' in args
# allow target func to accept/stream results manually by name
if 'to_trio' in args:
kwargs['to_trio'] = to_trio
if 'from_trio' in args:
kwargs['from_trio'] = from_trio
coro = func(**kwargs)
cancel_scope = trio.CancelScope()
aio_task_complete = trio.Event()
aio_err: Optional[BaseException] = None
chan = LinkedTaskChannel(
aio_q, # asyncio.Queue
from_aio, # recv chan
to_trio, # send chan
cancel_scope,
aio_task_complete,
)
async def wait_on_coro_final_result(
to_trio: trio.MemorySendChannel,
coro: Awaitable,
aio_task_complete: trio.Event,
) -> None:
'''
Await ``coro`` and relay result back to ``trio``.
'''
nonlocal aio_err
nonlocal chan
orig = result = id(coro)
try:
result = await coro
except BaseException as aio_err:
log.exception('asyncio task errored')
chan._aio_err = aio_err
raise
else:
if (
result != orig and
aio_err is None and
# in the ``open_channel_from()`` case we don't
# relay through the "return value".
not provide_channels
):
to_trio.send_nowait(result)
finally:
# if the task was spawned using ``open_channel_from()``
# then we close the channels on exit.
if provide_channels:
# only close the sender side which will relay
# a ``trio.EndOfChannel`` to the trio (consumer) side.
to_trio.close()
aio_task_complete.set()
log.runtime(f'`asyncio` task: {task.get_name()} is complete')
# start the asyncio task we submitted from trio
if not inspect.isawaitable(coro):
raise TypeError(f"No support for invoking {coro}")
task = asyncio.create_task(
wait_on_coro_final_result(
to_trio,
coro,
aio_task_complete
)
)
chan._aio_task = task
def cancel_trio(task: asyncio.Task) -> None:
'''
Cancel the calling ``trio`` task on error.
'''
nonlocal chan
aio_err = chan._aio_err
task_err: Optional[BaseException] = None
# only to avoid ``asyncio`` complaining about uncaptured
# task exceptions
try:
task.exception()
except BaseException as terr:
task_err = terr
if isinstance(terr, CancelledError):
log.cancel(f'`asyncio` task cancelled: {task.get_name()}')
else:
log.exception(f'`asyncio` task: {task.get_name()} errored')
assert type(terr) is type(aio_err), 'Asyncio task error mismatch?'
if aio_err is not None:
# XXX: uhh is this true?
# assert task_err, f'Asyncio task {task.get_name()} discrepancy!?'
# NOTE: currently mem chan closure may act as a form
# of error relay (at least in the ``asyncio.CancelledError``
# case) since we have no way to directly trigger a ``trio``
# task error without creating a nursery to throw one.
# We might want to change this in the future though.
from_aio.close()
if type(aio_err) is CancelledError:
log.cancel("infected task was cancelled")
# TODO: show that the cancellation originated
# from the ``trio`` side? right?
# if cancel_scope.cancelled:
# raise aio_err from err
elif task_err is None:
assert aio_err
aio_err.with_traceback(aio_err.__traceback__)
log.error('infected task errored')
# XXX: always cancel the scope on error
# in case the trio task is blocking
# on a checkpoint.
cancel_scope.cancel()
# raise any ``asyncio`` side error.
raise aio_err
task.add_done_callback(cancel_trio)
return chan
@acm
async def translate_aio_errors(
chan: LinkedTaskChannel,
wait_on_aio_task: bool = False,
) -> AsyncIterator[None]:
'''
Error handling context around ``asyncio`` task spawns which
appropriately translates errors and cancels into ``trio`` land.
'''
trio_task = trio.lowlevel.current_task()
aio_err: Optional[BaseException] = None
# TODO: make this a channel method?
def maybe_raise_aio_err(
err: Optional[Exception] = None
) -> None:
aio_err = chan._aio_err
if (
aio_err is not None and
type(aio_err) != CancelledError
):
# always raise from any captured asyncio error
if err:
raise aio_err from err
else:
raise aio_err
task = chan._aio_task
assert task
try:
yield
except (
trio.Cancelled,
):
# relay cancel through to called ``asyncio`` task
assert chan._aio_task
chan._aio_task.cancel(
msg=f'the `trio` caller task was cancelled: {trio_task.name}'
)
raise
except (
# NOTE: see the note in the ``cancel_trio()`` asyncio task
# termination callback
trio.ClosedResourceError,
# trio.BrokenResourceError,
):
aio_err = chan._aio_err
if (
task.cancelled() and
type(aio_err) is CancelledError
):
# if an underlying ``asyncio.CancelledError`` triggered this
# channel close, raise our (non-``BaseException``) wrapper
# error: ``AsyncioCancelled`` from that source error.
raise AsyncioCancelled from aio_err
else:
raise
finally:
if (
# NOTE: always cancel the ``asyncio`` task if we've made it
# this far and it's not done.
not task.done() and aio_err
# or the trio side has exited its surrounding cancel scope
# indicating the lifetime of the ``asyncio``-side task
# should also be terminated.
or chan._trio_exited
):
log.runtime(
f'Cancelling `asyncio`-task: {task.get_name()}'
)
# assert not aio_err, 'WTF how did asyncio do this?!'
task.cancel()
# Required to sync with the far end ``asyncio``-task to ensure
# any error is captured (via monkeypatching the
# ``channel._aio_err``) before calling ``maybe_raise_aio_err()``
# below!
if wait_on_aio_task:
await chan._aio_task_complete.wait()
# NOTE: if any ``asyncio`` error was caught, raise it here inline
# here in the ``trio`` task
maybe_raise_aio_err()
async def run_task(
func: Callable,
*,
qsize: int = 2**10,
**kwargs,
) -> Any:
'''
Run an ``asyncio`` async function or generator in a task, return
or stream the result back to ``trio``.
'''
# simple async func
chan = _run_asyncio_task(
func,
qsize=1,
**kwargs,
)
with chan._from_aio:
async with translate_aio_errors(
chan,
wait_on_aio_task=True,
):
# return single value that is the output from the
# ``asyncio`` function-as-task. Expect the mem chan api to
# do the job of handling cross-framework cancellations
# / errors via closure and translation in the
# ``translate_aio_errors()`` in the above ctx mngr.
return await chan.receive()
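# A minimal sketch of ``run_task()``; it assumes the surrounding actor was
# spawned with ``infect_asyncio=True`` so that
# ``current_actor().is_infected_aio()`` holds, otherwise the helper raises
# ``RuntimeError``. The ``fetch()`` target is hypothetical.
import asyncio
from tractor import to_asyncio

async def fetch(x: int) -> int:
    # plain asyncio code scheduled on the host loop
    await asyncio.sleep(0)
    return x + 1

async def trio_side():
    result = await to_asyncio.run_task(fetch, x=41)
    assert result == 42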
@acm
async def open_channel_from(
target: Callable[..., Any],
**kwargs,
) -> AsyncIterator[Any]:
'''
Open an inter-loop linked task channel for streaming between a target
spawned ``asyncio`` task and ``trio``.
'''
chan = _run_asyncio_task(
target,
qsize=2**8,
provide_channels=True,
**kwargs,
)
async with chan._from_aio:
async with translate_aio_errors(
chan,
wait_on_aio_task=True,
):
# sync to a "started()"-like first delivered value from the
# ``asyncio`` task.
try:
with chan._trio_cs:
first = await chan.receive()
# deliver stream handle upward
yield first, chan
finally:
chan._trio_exited = True
chan._to_trio.close()
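# A minimal sketch of ``open_channel_from()``, again assuming an
# ``infect_asyncio=True`` actor; the target must accept a ``to_trio`` arg and
# should send a first "started"-like value. Names here are hypothetical.
import asyncio
from tractor import to_asyncio

async def aio_producer(to_trio, from_trio):
    to_trio.send_nowait('started')   # delivered as ``first`` below
    for i in range(3):
        to_trio.send_nowait(i)
        await asyncio.sleep(0)

async def trio_consumer():
    async with to_asyncio.open_channel_from(aio_producer) as (first, chan):
        assert first == 'started'
        for _ in range(3):
            print(await chan.receive())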
def run_as_asyncio_guest(
trio_main: Callable,
) -> None:
'''
Entry for an "infected ``asyncio`` actor".
Entrypoint for a Python process which starts the ``asyncio`` event
loop and runs ``trio`` in guest mode resulting in a system where
``trio`` tasks can control ``asyncio`` tasks whilst maintaining
SC semantics.
'''
# Uh, oh.
#
# :o
# It looks like your event loop has caught a case of the ``trio``s.
# :()
# Don't worry, we've heard you'll barely notice. You might
# hallucinate a few more propagating errors and feel like your
digestion has slowed but if anything gets too bad your parents
# will know about it.
# :)
async def aio_main(trio_main):
loop = asyncio.get_running_loop()
trio_done_fut = asyncio.Future()
def trio_done_callback(main_outcome):
if isinstance(main_outcome, Error):
error = main_outcome.error
trio_done_fut.set_exception(error)
# TODO: explicit asyncio tb?
# traceback.print_exception(error)
# XXX: do we need this?
# actor.cancel_soon()
main_outcome.unwrap()
else:
trio_done_fut.set_result(main_outcome)
log.runtime(f"trio_main finished: {main_outcome!r}")
# start the infection: run trio on the asyncio loop in "guest mode"
log.info(f"Infecting asyncio process with {trio_main}")
trio.lowlevel.start_guest_run(
trio_main,
run_sync_soon_threadsafe=loop.call_soon_threadsafe,
done_callback=trio_done_callback,
)
# ``.unwrap()`` will raise here on error
return (await trio_done_fut).unwrap()
# might as well use it if it's installed.
try:
import uvloop
loop = uvloop.new_event_loop()
asyncio.set_event_loop(loop)
except ImportError:
pass
return asyncio.run(aio_main(trio_main))

View File

@ -1,9 +1,33 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Sugary patterns for trio + tractor designs.
'''
from ._mngrs import gather_contexts
from ._broadcast import broadcast_receiver, BroadcastReceiver, Lagged
from ._mngrs import (
gather_contexts,
maybe_open_context,
maybe_open_nursery,
)
from ._broadcast import (
broadcast_receiver,
BroadcastReceiver,
Lagged,
)
__all__ = [
@ -11,4 +35,6 @@ __all__ = [
'broadcast_receiver',
'BroadcastReceiver',
'Lagged',
'maybe_open_context',
'maybe_open_nursery',
]

View File

@ -1,3 +1,19 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
``tokio`` style broadcast channel.
https://docs.rs/tokio/1.11.0/tokio/sync/broadcast/index.html
@ -7,7 +23,6 @@ from __future__ import annotations
from abc import abstractmethod
from collections import deque
from contextlib import asynccontextmanager
from dataclasses import dataclass
from functools import partial
from operator import ne
from typing import Optional, Callable, Awaitable, Any, AsyncIterator, Protocol
@ -17,7 +32,10 @@ import trio
from trio._core._run import Task
from trio.abc import ReceiveChannel
from trio.lowlevel import current_task
from msgspec import Struct
from tractor.log import get_logger
log = get_logger(__name__)
# A regular invariant generic type
T = TypeVar("T")
@ -31,8 +49,9 @@ class AsyncReceiver(
Protocol,
Generic[ReceiveType],
):
'''An async receivable duck-type that quacks much like trio's
``trio.abc.ReceieveChannel``.
'''
An async receivable duck-type that quacks much like trio's
``trio.abc.ReceiveChannel``.
'''
@abstractmethod
@ -62,15 +81,16 @@ class AsyncReceiver(
class Lagged(trio.TooSlowError):
'''Subscribed consumer task was too slow and was overrun
'''
Subscribed consumer task was too slow and was overrun
by the fastest consumer-producer pair.
'''
@dataclass
class BroadcastState:
'''Common state to all receivers of a broadcast.
class BroadcastState(Struct):
'''
Common state to all receivers of a broadcast.
'''
queue: deque
@ -84,9 +104,47 @@ class BroadcastState:
# on a newly produced value from the sender.
recv_ready: Optional[tuple[int, trio.Event]] = None
# if a ``trio.EndOfChannel`` is received on any
# consumer all consumers should be placed in this state
# such that the group is notified of the end-of-broadcast.
# For now, this is solely for testing/debugging purposes.
eoc: bool = False
# If the broadcaster was cancelled, we might as well track it
cancelled: dict[int, Task] = {}
def statistics(self) -> dict[str, Any]:
'''
Return broadcast receiver group "statistics" like many of
``trio``'s internal task-sync primitives.
'''
key: int | None
ev: trio.Event | None
subs = self.subs
if self.recv_ready is not None:
key, ev = self.recv_ready
else:
key = ev = None
qlens: dict[int, int] = {}
for tid, sz in subs.items():
qlens[tid] = sz if sz != -1 else 0
return {
'open_consumers': len(subs),
'queued_len_by_task': qlens,
'max_buffer_size': self.maxlen,
'tasks_waiting': ev.statistics().tasks_waiting if ev else 0,
'tasks_cancelled': self.cancelled,
'next_value_receiver_id': key,
}
class BroadcastReceiver(ReceiveChannel):
'''A memory receive channel broadcaster which is non-lossy for the
'''
A memory receive channel broadcaster which is non-lossy for the
fastest consumer.
Additional consumer tasks can receive all produced values by registering
@ -99,23 +157,40 @@ class BroadcastReceiver(ReceiveChannel):
rx_chan: AsyncReceiver,
state: BroadcastState,
receive_afunc: Optional[Callable[[], Awaitable[Any]]] = None,
raise_on_lag: bool = True,
) -> None:
# register the original underlying (clone)
self.key = id(self)
self._state = state
# each consumer has an int count which indicates
# which index contains the next value that the task has not yet
# consumed and thus should read. In the "up-to-date" case the
# consumer task must wait for a new value from the underlying
# receiver and we use ``-1`` as the sentinel for this state.
state.subs[self.key] = -1
# underlying for this receiver
self._rx = rx_chan
self._recv = receive_afunc or rx_chan.receive
self._closed: bool = False
self._raise_on_lag = raise_on_lag
async def receive(self) -> ReceiveType:
def receive_nowait(
self,
_key: int | None = None,
_state: BroadcastState | None = None,
key = self.key
state = self._state
) -> Any:
'''
Sync version of `.receive()` which does all the low level work
of receiving from the underlying/wrapped receive channel.
'''
key = _key or self.key
state = _state or self._state
# TODO: ideally we can make some way to "lock out" the
# underlying receive channel in some way such that if some task
@ -148,32 +223,47 @@ class BroadcastReceiver(ReceiveChannel):
# return this value."
# https://docs.rs/tokio/1.11.0/tokio/sync/broadcast/index.html#lagging
mxln = state.maxlen
lost = seq - mxln
# decrement to the last value and expect
# consumer to either handle the ``Lagged`` and come back
# or bail out on its own (thus un-subscribing)
state.subs[key] = state.maxlen - 1
state.subs[key] = mxln - 1
# this task was overrun by the producer side
task: Task = current_task()
raise Lagged(f'Task {task.name} was overrun')
msg = f'Task `{task.name}` overrun and dropped `{lost}` values'
if self._raise_on_lag:
raise Lagged(msg)
else:
log.warning(msg)
return self.receive_nowait(_key, _state)
state.subs[key] -= 1
return value
# current task already has the latest value **and** is the
# first task to begin waiting for a new one
if state.recv_ready is None:
raise trio.WouldBlock
async def _receive_from_underlying(
self,
key: int,
state: BroadcastState,
) -> ReceiveType:
if self._closed:
raise trio.ClosedResourceError
event = trio.Event()
assert state.recv_ready is None
state.recv_ready = key, event
try:
# if we're cancelled here it should be
# fine to bail without affecting any other consumers
# right?
try:
value = await self._recv()
# items with lower indices are "newer"
@ -191,7 +281,6 @@ class BroadcastReceiver(ReceiveChannel):
# already retrieved the last value
# XXX: which of these impls is fastest?
# subs = state.subs.copy()
# subs.pop(key)
@ -206,63 +295,108 @@ class BroadcastReceiver(ReceiveChannel):
event.set()
return value
except trio.Cancelled:
except trio.EndOfChannel:
# if any one consumer gets an EOC from the underlying
# receiver we need to unblock and send that signal to
# all other consumers.
self._state.eoc = True
if event.statistics().tasks_waiting:
event.set()
raise
except (
trio.Cancelled,
):
# handle cancelled specially otherwise sibling
# consumers will be awoken with a sequence of -1
# state.recv_ready = trio.Cancelled
# and will potentially try to rewait the underlying
# receiver instead of just cancelling immediately.
self._state.cancelled[key] = current_task()
if event.statistics().tasks_waiting:
event.set()
raise
finally:
# Reset receiver waiter task event for next blocking condition.
# this MUST be reset even if the above ``.recv()`` call
# was cancelled to avoid the next consumer from blocking on
# an event that won't be set!
state.recv_ready = None
async def receive(self) -> ReceiveType:
key = self.key
state = self._state
try:
return self.receive_nowait(
_key=key,
_state=state,
)
except trio.WouldBlock:
pass
# current task already has the latest value **and** is the
# first task to begin waiting for a new one so we begin blocking
# until rescheduled with a new value from the underlying.
if state.recv_ready is None:
return await self._receive_from_underlying(key, state)
# This task is all caught up and ready to receive the latest
# value, so queue sched it on the internal event.
# value, so queue/schedule it to be woken on the next internal
# event.
else:
seq = state.subs[key]
assert seq == -1 # sanity
while state.recv_ready is not None:
# seq = state.subs[key]
# assert seq == -1 # sanity
_, ev = state.recv_ready
await ev.wait()
try:
return self.receive_nowait(
_key=key,
_state=state,
)
except trio.WouldBlock:
if self._closed:
raise trio.ClosedResourceError
# NOTE: if we ever would like the behaviour where if the
# first task to recv on the underlying is cancelled but it
# still DOES trigger the ``.recv_ready`` event, we'll likely need
# this logic:
subs = state.subs
if (
len(subs) == 1
and key in subs
# or cancelled
):
# XXX: we are the last and only user of this BR so
# likely it makes sense to unwind back to the
# underlying?
# import tractor
# await tractor.breakpoint()
log.warning(
f'Only one sub left for {self}?\n'
'We can probably unwind from breceiver?'
)
if seq > -1:
# stuff from above..
seq = state.subs[key]
value = state.queue[seq]
state.subs[key] -= 1
return value
elif seq == -1:
# XXX: In the case where the first task to allocate the
# ``.recv_ready`` event is cancelled we will be woken with
# a non-incremented sequence number and thus will read the
# oldest value if we use that. Instead we need to detect if
# we have not been incremented and then receive again.
return await self.receive()
# ``.recv_ready`` event is cancelled we will be woken
# with a non-incremented sequence number (the ``-1``
# sentinel) and thus will read the oldest value if we
# use that. Instead we need to detect if we have not
# been incremented and then receive again.
# return await self.receive()
else:
raise ValueError(f'Invalid sequence {seq}!?')
return await self._receive_from_underlying(key, state)
@asynccontextmanager
async def subscribe(
self,
raise_on_lag: bool = True,
) -> AsyncIterator[BroadcastReceiver]:
'''Subscribe for values from this broadcast receiver.
'''
Subscribe for values from this broadcast receiver.
Returns a new ``BroadcastReceiver`` which is registered for and
pulls data from a clone of the original ``trio.abc.ReceiveChannel``
provided at creation.
pulls data from a clone of the original
``trio.abc.ReceiveChannel`` provided at creation.
'''
if self._closed:
@ -273,6 +407,7 @@ class BroadcastReceiver(ReceiveChannel):
rx_chan=self._rx,
state=state,
receive_afunc=self._recv,
raise_on_lag=raise_on_lag,
)
# assert clone in state.subs
assert br.key in state.subs
@ -285,14 +420,23 @@ class BroadcastReceiver(ReceiveChannel):
async def aclose(
self,
) -> None:
'''
Close this receiver without affecting other consumers.
'''
if self._closed:
return
# if there are sleeping consumers wake
# them on closure.
rr = self._state.recv_ready
if rr:
_, event = rr
event.set()
# XXX: leaving it like this, consumers can still get values
# up to the last received that still reside in the queue.
self._state.subs.pop(self.key)
self._closed = True
@ -300,7 +444,8 @@ def broadcast_receiver(
recv_chan: AsyncReceiver,
max_buffer_size: int,
**kwargs,
receive_afunc: Optional[Callable[[], Awaitable[Any]]] = None,
raise_on_lag: bool = True,
) -> BroadcastReceiver:
@ -311,5 +456,6 @@ def broadcast_receiver(
maxlen=max_buffer_size,
subs={},
),
**kwargs,
receive_afunc=receive_afunc,
raise_on_lag=raise_on_lag,
)
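# A minimal sketch of the new ``raise_on_lag`` flag: subscribers opened with
# ``raise_on_lag=False`` log a warning and skip ahead instead of raising
# ``Lagged`` when overrun. The consumer/producer shapes are illustrative.
import trio
from tractor.trionics import broadcast_receiver

async def main():
    tx, rx = trio.open_memory_channel(2)
    bcast = broadcast_receiver(rx, max_buffer_size=2)

    async def consumer(name: str):
        async with bcast.subscribe(raise_on_lag=False) as sub:
            for _ in range(3):
                print(name, await sub.receive())

    async with trio.open_nursery() as n:
        n.start_soon(consumer, 'a')
        n.start_soon(consumer, 'b')
        for i in range(3):
            await tx.send(i)

trio.run(main)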

View File

@ -1,18 +1,69 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Async context manager primitives with hard ``trio``-aware semantics
'''
from typing import AsyncContextManager, AsyncGenerator
from typing import TypeVar, Sequence
from contextlib import asynccontextmanager as acm
import inspect
from typing import (
Any,
AsyncContextManager,
AsyncGenerator,
AsyncIterator,
Callable,
Hashable,
Optional,
Sequence,
TypeVar,
)
import trio
from trio_typing import TaskStatus
from .._state import current_actor
from ..log import get_logger
log = get_logger(__name__)
# A regular invariant generic type
T = TypeVar("T")
@acm
async def maybe_open_nursery(
nursery: trio.Nursery | None = None,
shield: bool = False,
) -> AsyncGenerator[trio.Nursery, Any]:
'''
Create a new nursery if None provided.
Blocks on exit as expected if no input nursery is provided.
'''
if nursery is not None:
yield nursery
else:
async with trio.open_nursery() as nursery:
nursery.cancel_scope.shield = shield
yield nursery
async def _enter_and_wait(
mngr: AsyncContextManager[T],
@ -40,7 +91,7 @@ async def gather_contexts(
mngrs: Sequence[AsyncContextManager[T]],
) -> AsyncGenerator[tuple[T, ...], None]:
) -> AsyncGenerator[tuple[Optional[T], ...], None]:
'''
Concurrently enter a sequence of async context managers, each in
a separate ``trio`` task and deliver the unwrapped values in the
@ -50,14 +101,25 @@ async def gather_contexts(
This function is somewhat similar to common usage of
``contextlib.AsyncExitStack.enter_async_context()`` (in a loop) in
combo with ``asyncio.gather()`` except the managers are concurrently
entered and exited cancellation just works.
entered and exited, and cancellation just works.
'''
unwrapped = {}.fromkeys(id(mngr) for mngr in mngrs)
unwrapped: dict[int, Optional[T]] = {}.fromkeys(id(mngr) for mngr in mngrs)
all_entered = trio.Event()
parent_exit = trio.Event()
# XXX: ensure greedy sequence of manager instances
# since a lazy inline generator doesn't seem to work
# with `async with` syntax.
mngrs = list(mngrs)
if not mngrs:
raise ValueError(
'input mngrs is empty?\n'
'Did you try to use inline generator syntax?'
)
async with trio.open_nursery() as n:
for mngr in mngrs:
n.start_soon(
@ -71,8 +133,146 @@ async def gather_contexts(
# deliver control once all managers have started up
await all_entered.wait()
try:
yield tuple(unwrapped.values())
# we don't need a try/finally since cancellation will be triggered
# by the surrounding nursery on error.
finally:
# NOTE: this is ABSOLUTELY REQUIRED to avoid
# the following wacky bug:
# <tractorbugurlhere>
parent_exit.set()
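# A minimal sketch of ``gather_contexts()``; per the comment in the
# implementation above, pass a materialized sequence (list/tuple) since a lazy
# inline generator doesn't work with the ``async with`` syntax.
from contextlib import asynccontextmanager as acm
import trio
from tractor.trionics import gather_contexts

@acm
async def numbered(i: int):
    yield i

async def main():
    async with gather_contexts([numbered(i) for i in range(3)]) as vals:
        assert vals == (0, 1, 2)

trio.run(main)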
# Per actor task caching helpers.
# Further potential examples of interest:
# https://gist.github.com/njsmith/cf6fc0a97f53865f2c671659c88c1798#file-cache-py-L8
class _Cache:
'''
Globally (actor-process scoped) cached, task access to
a kept-alive-while-in-use async resource.
'''
service_n: Optional[trio.Nursery] = None
locks: dict[Hashable, trio.Lock] = {}
users: int = 0
values: dict[Any, Any] = {}
resources: dict[
Hashable,
tuple[trio.Nursery, trio.Event]
] = {}
# nurseries: dict[int, trio.Nursery] = {}
no_more_users: Optional[trio.Event] = None
@classmethod
async def run_ctx(
cls,
mng,
ctx_key: tuple,
task_status: TaskStatus[T] = trio.TASK_STATUS_IGNORED,
) -> None:
async with mng as value:
_, no_more_users = cls.resources[ctx_key]
cls.values[ctx_key] = value
task_status.started(value)
try:
await no_more_users.wait()
finally:
# discard nursery ref so it won't be re-used (an error)?
value = cls.values.pop(ctx_key)
cls.resources.pop(ctx_key)
@acm
async def maybe_open_context(
acm_func: Callable[..., AsyncContextManager[T]],
# XXX: used as cache key after conversion to tuple
# and all embedded values must also be hashable
kwargs: dict = {},
key: Hashable | Callable[..., Hashable] = None,
) -> AsyncIterator[tuple[bool, T]]:
'''
Maybe open a context manager if there is not already a _Cached
version for the provided ``key`` for *this* actor. Return the
_Cached instance on a _Cache hit.
'''
fid = id(acm_func)
if inspect.isfunction(key):
ctx_key = (fid, key(**kwargs))
else:
ctx_key = (fid, key or tuple(kwargs.items()))
# yielded output
yielded: Any = None
# Lock resource acquisition around task racing / ``trio``'s
# scheduler protocol.
# NOTE: the lock is target context manager func specific in order
# to allow re-entrant use cases where one `maybe_open_context()`
# wrapped factory may want to call into another.
lock = _Cache.locks.setdefault(fid, trio.Lock())
await lock.acquire()
# XXX: one singleton nursery per actor and we want to
# have it not be closed until all consumers have exited (which is
# currently difficult to implement any other way besides using our
# pre-allocated runtime instance..)
service_n: trio.Nursery = current_actor()._service_n
# TODO: is there any way to allocate
# a 'stays-open-till-last-task-finished' nursery?
# service_n: trio.Nursery
# async with maybe_open_nursery(_Cache.service_n) as service_n:
# _Cache.service_n = service_n
try:
# **critical section** that should prevent other tasks from
# checking the _Cache until complete otherwise the scheduler
# may switch and by accident we create more than one resource.
yielded = _Cache.values[ctx_key]
except KeyError:
log.info(f'Allocating new {acm_func} for {ctx_key}')
mngr = acm_func(**kwargs)
resources = _Cache.resources
assert not resources.get(ctx_key), f'Resource exists? {ctx_key}'
resources[ctx_key] = (service_n, trio.Event())
# sync up to the mngr's yielded value
yielded = await service_n.start(
_Cache.run_ctx,
mngr,
ctx_key,
)
_Cache.users += 1
lock.release()
yield False, yielded
else:
log.info(f'Reusing _Cached resource for {ctx_key}')
_Cache.users += 1
lock.release()
yield True, yielded
finally:
_Cache.users -= 1
if yielded is not None:
# if no more consumers, teardown the client
if _Cache.users <= 0:
log.info(f'De-allocating resource for {ctx_key}')
# XXX: if we're cancelled the entry may have never
# been entered since the nursery task was killed.
# _, no_more_users = _Cache.resources[ctx_key]
entry = _Cache.resources.get(ctx_key)
if entry:
_, no_more_users = entry
no_more_users.set()
_Cache.locks.pop(fid)
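# A minimal sketch of ``maybe_open_context()``; the ``open_client()`` manager
# and its ``host`` kwarg are hypothetical. This must run inside a started
# actor since the cached task is spawned on the actor's service nursery.
from contextlib import asynccontextmanager as acm
from tractor.trionics import maybe_open_context

@acm
async def open_client(host: str):
    # stand-in for an expensive-to-allocate, shareable resource
    yield {'host': host}

async def user_task():
    async with maybe_open_context(
        open_client,
        kwargs={'host': 'localhost'},
    ) as (cache_hit, client):
        if cache_hit:
            # some other task in this actor already allocated it
            pass
        assert client['host'] == 'localhost'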